Are you using a GUI for each bash instance in a terminal window? A GUI
supports easy access to multiple terminal windows. A text-based interface
does not.
Most of the bash instances have their own tabs within a terminal window
- so yes, they are command-line text interfaces within a GUI.
I've got other command line interfaces running on different machines
via ssh - most of these machines don't have a gui of any kind.
And yes, text-based interfaces easily support multiple windows. I use
"screen" extensively for that purpose.
Try using Visual Studio with the Visual Assist X plugin. You'll be floored
at how much more productive you are. Edit-and-continue alone will shock you.
I have absolutely zero interest in Visual Studio. I have only used it a
bit, but I saw nothing that enticed me away from my current editors (Eclipse
when I want something big and powerful for heavy work, gedit for quick
and simple work, and nano for command line editing).
No, I would not be "floored" by anything VS has - disregarding any
Windows-specific development features (I don't use MS's tools, so they
are of no interest), for every feature VS has that Eclipse doesn't have,
Eclipse has two that VS doesn't. And since most such complex features
are rarely used, there is no added value. On the other hand,
Eclipse works cross-platform and easily supports the wide range of
compilers I use - in fact, many toolchains I use come with Eclipse
pre-configured. VS, on the other hand, won't even run on my Windows
machine (MS has declared it to be out of date, even though it is fine
for the other Windows-only software I use) - and it certainly won't run
on my Linux machines.
And no, "edit and continue" would not "shock" me. I do much of my PC
development using Python, a language well suited to interpretation and
to modifying code as you go along. Since my C
development is mainly for embedded systems, "edit and continue" would be
impossible (or at least very impractical), even if I thought the concept
were a good idea. If I stretch back to the time when I /did/ use MS
development tools - Visual Basic 3.0, which on Windows 3.1 was the only
practical RAD tool available until Delphi arrived - it supported "edit
and continue". It did not "shock" me at the time, 20 years ago, but it
/did/ cause a lot of problems with inconsistent behaviour.
It's my experience. I have not found a better developer environment than
one based on a GUI, and one using edit-and-continue. Edit-and-continue
brings so much to the table that I think most people don't realize.
I am sure you find VS useful for a lot of development. But that does
not mean it is useful for /all/ file editing purposes - can you honestly
say you never use "notepad" on your system?
At some point a computer will "know you" and when you sit down at a computer
it will know, from prior patterns, that you like to read this, that, the
other thing, and it will already have those ready for you. It will know you
like to perform these searches on certain things, that you like to read so-
and-so's posts first, and it will do all of this for you.
These will be little "agents" acting on your behalf, preparing data for your
consumption, doing all of it in parallel.
Doing multiple different small tasks in parallel is not a job for a
"massively parallel computing engine" - *nix systems have been doing
vast numbers of small tasks in parallel for 40 years, mostly with only
one or a few CPU cores.
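Just as a sketch of how trivially that has always worked (the task count
and messages below are invented for illustration), a handful of small
processes can be handed to the scheduler and they all run "in parallel"
whether there is one core or many:

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* Spawn a handful of small, independent tasks; the kernel
       time-slices them across however many cores exist - even one. */
    for (int i = 0; i < 4; i++) {
        pid_t pid = fork();
        if (pid == 0) {
            printf("task %d running as pid %d\n", i, (int)getpid());
            _exit(0);           /* each child does its little job and exits */
        }
    }

    /* Parent waits for all the children to finish. */
    while (wait(NULL) > 0)
        ;
    return 0;
}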
That's what I'm referring to. And there is more as it specifically relates
to programming.
Incorrect. In my opinion. I do not believe there will be anything that
remains completely serial for much longer.
Well, I doubt if anything I say will shake your beliefs - but you might
find reading some history will help you guess the future.
I believe there are some
physical hardware limitations being imposed upon us today because the hardware
people have not thought along those lines, or are pursuing the high-speed
core execution engine and do not want a complete paradigm shift.
People /have/ thought along these lines - and have done so for decades.
The /fact/ is that few tasks really benefit from serious parallelism
(though many can benefit from some multithreading and doing a few things
at the same time - mostly in the form of waiting for different things at
the same time). The /fact/ is that even for tasks that can be naturally
parallelised, it is a difficult job prone to very hard-to-find errors,
and the returns are often not worth the effort.
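As a minimal illustration of the kind of hard-to-find error I mean (the
function name and loop counts below are invented for the example), two
threads incrementing a shared counter with no locking will quietly lose
updates, and the failure only shows up as a total that changes from run
to run:

#include <pthread.h>
#include <stdio.h>

static long counter = 0;        /* shared, and deliberately unprotected */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;              /* read-modify-write is not atomic */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    /* "Should" be 2000000, but usually isn't - and the value varies. */
    printf("counter = %ld\n", counter);
    return 0;
}

Build it with gcc's -pthread and the "correct" answer shows up only
occasionally, which is exactly what makes such bugs so hard to find.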
However, I
believe categorically that there will soon be a paradigm shift in computing.
Yeah, they said that about neural networks, fuzzy logic, genetic
programming, artificial intelligence, self-programming computers,
quantum computing, etc., etc. And yet we still program in C, mostly
single threaded. Somewhere there is a pattern to be found...
We will break through the current model of one serial thread spawning
other big serial threads and move into something much more conducive to
processing regular top-down code in parallel - by executing both branches
of an IF block, for example, and then discarding the results of the path
not taken. In this case, at the expense of electricity and heat
generation, the process is sped up as much as it can be, because it never
has to wait for a prior result before continuing on with the next one.
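Purely as an illustration of the idea in ordinary C (the functions below
are made up, and a real implementation would live in the hardware rather
than the source code), both arms of an IF can be evaluated before the
condition is known, with the late-arriving condition merely selecting
which result to keep:

#include <stdio.h>

/* Hypothetical work for each arm of the IF. */
static int compute_taken(int x)     { return x * 2 + 1; }
static int compute_not_taken(int x) { return x - 7; }

int main(void)
{
    int x = 42;

    /* Evaluate both paths up front, before the condition is needed... */
    int if_result   = compute_taken(x);
    int else_result = compute_not_taken(x);

    /* ...then the condition, arriving late, merely selects one result;
       the other is discarded - wasted work, but no waiting. */
    int condition = (x % 2 == 0);
    int result = condition ? if_result : else_result;

    printf("result = %d\n", result);
    return 0;
}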
Several years ago I came up with a concept for a CPU. I mentioned it briefly
to Jerry Bautista at Intel when I went to visit him on the Terascale project.
I also submitted it to some folks at Tom's Hardware who "promised to pass it
along." It was a way to aggregate a series of "waiting threads" around the
current CS:EIP/RIP of the x86 engine, so that they could be kicked off in
advance to shift the components of reads and writes in time, but also to
handle future operations before knowing prior input results. The idea was that
everything in computing is essentially an equation. You can build a lengthy
equation and then fill in a few variables to determine the ultimate path taken,
but you don't need to know the value of the variables ahead of time;
rather, you can simply devise the equation on-the-fly and use the
variables later. In this way, all results of a block of code are already
known; they're just waiting on some prior input to determine which result
is actually the correct one.
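As a rough software analogy of that (again just an invented sketch, not
the CPU design itself), the "equation" can be set up before its inputs
exist and then evaluated the instant they arrive:

#include <stdio.h>

/* A deferred "equation": the computation is fixed up front, the
   variables are filled in later. */
struct equation {
    int (*fn)(int a, int b);    /* the shape of the computation */
    int a, b;                   /* variables, supplied later */
};

static int my_formula(int a, int b) { return a * 3 + b; }

int main(void)
{
    /* Devise the equation on the fly, before its inputs are known. */
    struct equation eq = { .fn = my_formula };

    /* ...later, the prior results arrive... */
    eq.a = 10;
    eq.b = 4;

    /* ...and the already-prepared result is simply read off. */
    printf("result = %d\n", eq.fn(eq.a, eq.b));
    return 0;
}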
I do believe that. And I'm not buying a bridge from you.
It will so long as we maintain the current CPU architecture, and the current
operating system architecture. What's needed is a new way of thinking. I
have that way of thinking ... I just need to get it all down in code, and it's
taking a long time to do that by myself.
It is easy to imagine a Utopia. But getting there from the real world
is where the /real/ problem lies.