I've tried Eclipse. I prefer Netbeans. What is it about Eclipse that is
so great? I'm not aware of many things Eclipse can do that VS can't,
especially when the Whole Tomato Visual Assist X extension is added.
Eclipse can run on Linux. VS cannot. That is a killer feature.
Apart from that, I don't keep lists of the features and plugins that
the different tools support, let alone which of them might actually be
useful for /me/. The point is merely that when I need a
"big" IDE, I've got one - I don't need VS and have nothing to gain from
it. Other people have different preferences.
And the /real/ point was that sometimes a big IDE is /not/ what I want,
and would be a terrible choice for a particular editing task - in which
case I have a "middle weight" choice (gedit) and a lightweight
text-based choice (nano).
Use an Oracle VirtualBox instance running an older version of Windows,
or get an updated version of Visual Studio. Personally I use only Visual
Studio 2008, and it runs fine on everything from Windows 2000 up.
The "problem" I have (it's not a real problem, as I don't /want/ to run
VS) is that the only Windows machine I have runs XP SP2 - and MS has
declared that unsuitable for their software. Almost everything windowy
that I need will run on it, except reasonably recent versions of MS VS
and MS Office. Fortunately, I need neither.
I use VirtualBox extensively for many purposes, and occasionally use it
for other Windows versions. So yes, I /could/ use a slightly newer
version of Windows (at least XP SP3) in a virtual box on my Linux
desktop, in order to run VS. Or I could just use Eclipse natively.
It has shocked everyone I've shown it to. The ability to fix an error and
keep on going in your debug environment ... people are floored by it.
That may apply to the people you are familiar with, but it would take a
great deal more than that to shock /me/.
Perhaps you were not using it correctly. Edit-and-continue has flaws:
it won't handle certain things, and if you introduce an error it doesn't
always give you the correct error code or explanation. But on the whole
it is well tried and debugged.
The whole concept has huge limitations. I can see it can be somewhat
useful at times - it was sometimes useful when I did VB3 development.
But I didn't miss it when switching to better tools (the day Delphi 1.0
was released).
I use Notepad++ on some things. I use Sammy Mitchell's The SemWare Editor
on many others. On Linux I use nano when I'm on ssh, but for most local
things I use Gedit.
And there you have it. Different interfaces, different types of tools,
for different purposes. Gui is /not/ better than text, big IDE (like VS
or Eclipse) is /not/ better than mid level editor (gedit, Notepad++) or
text-based editor (nano). They are different, and have their advantages
and disadvantages at different times.
Go back a few posts in this thread and read the nonsense you wrote about
gui's being "far superior" to text interfaces. Then re-read your
paragraph above. Gui's are "far superior" to text interfaces for many
uses, but certainly not for everything.
Visual Studio's IDE lacks several features. Visual Assist X adds most of
those missing features back in, making Visual Studio much more like
Eclipse or Netbeans. Refactoring is one of the biggest, with Ctrl+Alt+R
to rename a symbol. It also provides many speedups, such as an Alt+G
"goto definition" lookup, a Ctrl+Alt+F "find all references", and so on.
Plus, Visual Studio itself has a Code Definition window which constantly
shows you the definition line for the current symbol. It too is one of
the greatest time-savers there is, especially when going back in to
maintain older code.
With the soon-to-be many cores (64+) it will be done truly in parallel.
And what do you believe will be the benefits? Who cares if your 10
tasks run for 0.5 ms in parallel or in series? I have some 380+
processes running on my machine at the moment - of which typically 1 or
sometimes 2 are actually /running/. During a large compilation I can
use the 4+4 cores effectively, but outside that there is zero difference
between 380 processes sleeping on one core and 380 processes sleeping on
380 cores.
Intel have had CPU designs with large numbers of cores for a good while
now. Sun (now Oracle) have chips with 16 cores per chip, 8 threads per
core - they are good for special tasks, but useless for normal computing.
What history teaches me is that technologies progress until there is something
radical that changes the nature of the thing. Horse-selling businesses
were widespread, as were horse facilities, buggies, whips, farriers, and
so on. When the automobile came along it changed everything. Not all at
once, but ultimately.
What you learned there is that things change /slowly/.
Even in the computing world, progress is often very slow. Hardware
/implementations/ have been getting faster and more powerful at an
impressive rate, and consumer opinions change quickly, but the
fundamentals do not change at the same rates. The C language has gone
through a lot of improvements over the years, but it is still much the
same as 30 years ago. Most of the algorithms and theories in computing
were developed decades ago. Parallel computers have existed for several
decades, but most tasks run by most computers are mostly single-threaded.
I will certainly agree that we will see many more parallel tasks in the
future, but it will not be the earth-shattering revolution that you are
imagining. Part of that is that "normal" computers are already fast
enough for most purposes (supercomputers will never be fast enough, of
course). Intel is currently having trouble because their projections
about chip sales have gone wrong - it turns out that people don't need
new, faster computers, and are happy with what they've got. And once
everyone has got a decent Android Pad, sales for these will fall off
too. No revolutions, just the ebb and flow of the market.
The same will happen with computers. People will be building programming
stables, programming buggies, programming whips, being programming farriers,
until finally they step away from those forms and move to the new thing,
which will be the massively parallel computing model.
That's because of the existing hardware. The algorithm I described does things
which are serial in nature... in parallel. It's a new thing.
It's old hat.
I did not say those things. I realize other people have done so.
Ah, so this dramatic prediction is different because /you/ said it,
rather than all those others who got it wrong?
What I do believe is that we will see a shift to a new CPU core that
handles in parallel what is today only serial code, in a way that changes
the paradigm. A new language will be needed to describe it. That's what
I'm building.
First, there is /no/ way to magically change serial code into parallel
code. Even if it were not possible to prove this (and I believe it is),
there are a great many other people who are a lot smarter than you or I,
with far greater resources than us, and who are working on such ideas.
The best they have come up with is superscalar processors which use
complex scheduling, register renaming, branch prediction, speculative
execution, etc., to make bits of serial code run faster.
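To put a rough number on that limit: Amdahl's law says that if a
fraction p of a program can run in parallel, the best possible speedup
on n cores is 1 / ((1 - p) + p/n). A throwaway C snippet (my own
illustration, not taken from any real product) shows how low the
ceiling is:

    #include <stdio.h>

    /* Best-case speedup under Amdahl's law for a program in which
       a fraction p of the work can run in parallel on n cores. */
    static double amdahl(double p, int n)
    {
        return 1.0 / ((1.0 - p) + p / n);
    }

    int main(void)
    {
        /* Even 95% parallel code tops out around 15x on 64 cores. */
        printf("p = 0.95, n = 64: %.1fx\n", amdahl(0.95, 64));
        /* Half serial code means 64 cores barely double the speed. */
        printf("p = 0.50, n = 64: %.1fx\n", amdahl(0.50, 64));
        return 0;
    }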
Secondly, there already exist languages designed for running code in
parallel. And there are already systems for running bits of C code in
parallel, so that you can with relative ease use multiple cores on the
parts of your code where you can benefit from them.
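OpenMP is the obvious example for C. A minimal sketch (my own toy
example, compiled with gcc -fopenmp - the loop is just something
embarrassingly parallel to share out):

    #include <stdio.h>

    int main(void)
    {
        double sum = 0.0;

        /* The pragma asks the runtime to split the iterations
           across the available cores; the reduction clause merges
           the per-thread partial sums at the end. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 1; i <= 100000000; i++) {
            sum += 1.0 / i;
        }

        printf("harmonic sum: %f\n", sum);
        return 0;
    }

Build it without -fopenmp and the pragma is simply ignored - the same
serial algorithm runs on one core. That is rather the point: nothing
"magic" happened to the serial code; only the parts that were already
independent got spread across cores.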
Agreed. It has been very hard for me to continue. I face strong
opposition whenever I describe these things.
Don't stop trying - you /might/ come up with something new. But don't
get disappointed when someone tells you these particular ideas are not
realistic, or are already old. Instead, use that information to come up
with different ideas.
On top of which I profess Christianity as
the driving force for me doing this (God gave me certain abilities, and as a
believer I desire to give those abilities back to Him, and unto all men).
Both of these facets cause barriers between myself and others. It is a hard
thing for me to figure out how to get around. I believe it will require a
special type of person to do so: (1) a devout believer, and (2) someone
highly skilled in software development and in software and hardware theory.
If you find your religious beliefs give you hope and encouragement,
that's fine. But please don't mix religion with programming - it
degrades both of them.
But for your interest, there is a character in comp.lang.c++ who talks
like this - maybe you two would get along.