James said:
tanix wrote:
Obviously, you've never actually measured. A lot depends on the
application, but typically, C++ with garbage collection runs
slightly faster than C++ without garbage collection. Especially
in a multi-threaded environment.
How can that possibly be? Doesn't the GC kill all threads when it has
to collect? Or can it magically sweep through the heap, stack, BSS,
etc. and scan without locking all the time or stopping the program?
Explain to me how.
Manual deallocation does not have to lock at all....
[...]
Except that would require an equivalent of a virtual machine
underneath.
A virtual machine is also a heavy performance killer...
Which explains why some of the leading experts in optimization
claim that it is necessary for the best optimization. (I don't
fully buy that claim, but a virtual machine does have a couple
of advantages when it comes to optimizing: it sees the actual
data being processed, for example, and the actual machine being
run on, and can optimize to both.)
Best optimization is when you can manually control memory management
and have access to hardware directly. Everything else
is algorithm optimization...which can be done in any language.
Looks appealing in the local scope of things.
If you get too obsessed with trying to save a few machine cycles,
then yes, you do have a point.
The problem is that in an application of any complexity worth
mentioning, you are no longer dealing with machine instructions,
however appealing that view might look.
You are dealing with SYSTEMS.
You are dealing with structures and higher level logical constructs.
By simply changing your architecture, you may achieve orders of
magnitude more performance. And performance is not the only thing
that counts in the real world, although it probably counts
more than other things.
Except stability.
And those other things are functionality, flexibility, configurability,
the power and clarity of your user interface (which turns out to be
one of the most important criteria), and plenty of other things.
Yes, if you think of your code as an assembly-level set of instructions,
and no matter which instruction you are looking at, you are trying
to squeeze every single machine cycle out of it, then you are not
"seeing the forest for the trees".
What I see using my program is not how efficient some subcomponent
is, but how many hours it takes me to process vast amounts
of information. I couldn't care less whether GC exists, except that
it helps me more than it creates problems for me, and I don't even
need to prove it to anybody. It is self-evident to me. After a while,
you stop questioning certain things once you have seen a large enough
history.
What is the point of forever flipping those bits?
Let language designers think about these things; I assume they
have done as good a job of it as the state of the art allows,
especially if they are getting paid tons of money for doing it.
I trust them. I may not agree with some things, but my primary
concerns nowadays are not whether GC is more or less efficient,
but how fast I can model my app, how easy it is to do that,
how supportive my IDE is, how powerful my debugger is, how easy it
is for me to move my app to a different platform, and things
like that.
You can nitpick all you want, but I doubt you will be able to
prove anything of substance by doing that kind of thing.
To me, it is just a royal waste of time. Totally unproductive.
Java is a compiled language only in the sense that any interpreted
language is compiled at run time...
Not true.
but that does not make those languages
compiled...
Java IS compiled. Period.
Would you argue with the concept of a P-machine on the basis that
it is "interpretive", just because it uses a higher level of
abstraction, sitting on top of the OS?
Java does not evaluate strings at run time, and it is a strongly
typed language, and that IS the central difference between
what I call dynamically typed languages and statically
typed languages.
It does not matter to me whether Java runs bytecodes or P-machine
code. It is just another layer on top of the OS, and that layer,
by the sheer fact that it is a higher-level abstraction,
can optimize things under the hood MUCH better than you can
optimize things in languages with lower levels of abstraction.
For some reason, people have gotten away from coding in
assembly languages for most applications.
This is exactly the same thing.
What is the difference between C++ and C?
Well, the ONLY difference I know of is the higher level of abstraction.
And that is ALL there is to it.
The same exact thing as Java using the JVM to provide it the
underlying mechanisms, efficient enough and flexible enough
for you to be able to express yourself on a more abstract level.
And that is ALL there is to it.
And why do you think weakly typed languages are gaining ground?
Well, because you don't have to worry about all those nasty
things like argument types. They can be anything at run time.
And nowadays, the power of the underlying hardware is such
that it no longer makes such a drastic difference whether you
run a strongly typed, compiled language or interpret on
the fly, even though performance is orders of magnitude worse.
You need to put things in perspective.
What does it matter to me if a web page renders in 100 ms
versus 1 ms?
NONE.
My brain cannot work fast enough to read anything in those 99 ms
anyway.
I think the whole argument is simply a waste of time, or rather,
a waste of creative potential that could be used for something
WAY more constructive and WAY more "revolutionary".
For example, PHP with popen calling a C executable is, in my
experience, about three times faster as a server than Jetty/Solr,
for example...
Well, if you use even PHP as some kind of argument, then you
are obviously not seeing the forest. Because PHP is one of the
worst dogs overall, because it is a weakly typed language.
Even Python beats it hands down.
--
Programmer's Goldmine collections:
http://preciseinfo.org
Tens of thousands of code examples and expert discussions on
C++, MFC, VC, ATL, STL, templates, Java, Python, Javascript,
organized by major topics of language, tools, methods, techniques.