[QUOTE]
I believe there are results showing that, for certain
problems, garbage collection is more efficient than explicit
memory management.
[/QUOTE]
You mean that there are certain pieces of code that use
garbage collection and outperform some other pieces of code
that do not. Of course, if everything else is the same (i.e.
the only difference is that in one piece of code the memory is
manually freed and in the other it is left to the GC), then
there might be some value in the comparison.
Even if other aspects are different, it might be a valid
comparison. In fact, it might even be more valid. The
important aspect is that both pieces of code do the same job.
But it would still only have proved that, when you organise
your code like that, GC is faster, not that some other design
using manual memory management could not be faster. It would
also only prove that GC was faster than that specific
allocator.
That that specific GC was faster than that specific allocator,
when the GC was used in the specific way you used it, and the
allocator was used in the specific way you used it.
The one specific benchmark I'm aware of compared
boost::shared_ptr with the Boehm collector, creating large
trees, then dropping the root pointer. In that particular
benchmark, GC beat "manual management" hands down (on several
different systems, I believe). I believe that the code tested
was "straightforward": the natural way to write the code in
either case, without any "optimizations" (forcing garbage
collection immediately after the root pointer was dropped, using
a custom allocator for the nodes with manual management, etc.).
That benchmark might be relevant if your application creates a
lot of large trees, then drops them. Otherwise, it really
doesn't tell you much.
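For concreteness, here is a rough sketch of the shape of that
benchmark. This is my own reconstruction, not the original code:
the node layout, the tree depth and the iteration count are all
assumptions.

#include <boost/shared_ptr.hpp>

struct Node
{
    boost::shared_ptr< Node > left ;
    boost::shared_ptr< Node > right ;
} ;

boost::shared_ptr< Node > makeTree( int depth )
{
    boost::shared_ptr< Node > n( new Node ) ;
    if ( depth > 0 ) {
        n->left = makeTree( depth - 1 ) ;
        n->right = makeTree( depth - 1 ) ;
    }
    return n ;
}

int main()
{
    for ( int i = 0 ; i != 50 ; ++ i ) {
        boost::shared_ptr< Node > root = makeTree( 18 ) ;
        // Dropping root at the end of each iteration cascades
        // through roughly half a million reference-counted
        // destructions.  The collected variant replaces the
        // boost::shared_ptr< Node > members with plain Node*
        // allocated via GC_MALLOC (from <gc.h>) and just lets
        // root go out of scope.
    }
}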
More generally, almost every benchmark I've seen comparing
garbage collection with manual management shows garbage
collection to be faster. But almost every one was written by a
proponent of garbage collection, and presumably tested scenarios
(like the large tree) where garbage collection is known to be
significantly faster. (In general, the usual algorithms for
garbage collection and for malloc/free are both O(n), but the n
differs: for garbage collection, it is the amount of memory
still allocated when the collector runs; for manual management,
it is the number of allocations and frees. So if you want to
prove manual allocation faster, presumably, you write a
benchmark which allocates a few very big blocks, then alternates
frees and allocations, so that there is always a lot of memory
allocated at any one time. Something like:
#include <deque>

int main()
{
    int const N = 1000 ;      // iteration count: illustrative value
    int const M = 1000000 ;   // block size: illustrative value
    std::deque< char** > a ;
    for ( int count = N ; count != 0 ; -- count ) {
        a.push_back( new char*[ M ] ) ;
        if ( a.size() > 5 ) {
            delete [] a.front() ;   // Only if no garbage collection.
            a.pop_front() ;
        }
    }
}
If manual management doesn't win hands down there, there's
something wrong. (If you really want GC to look bad here, make
sure that you only have six or seven times sizeof(char*[M])
memory available.)
In practice, the difference in time for most applications won't
be important enough to be a concern. In the few cases it will
be, the balance will lean to one side or the other: an
application making extensive use of graph algorithms will
probably gain by using garbage collection; one that, say,
smooths images is likely to lose. (The Boehm collector actually makes
special allocators available for allocating large blocks which
you know won't contain pointers, precisely because of the time
it takes to scan things like images. Of course, in garbage
collected languages, the compiler tells the garbage collector
this, so you don't have to. A C++ compiler could do this too,
but only to a limited degree.)
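A short sketch of what that looks like through the collector's C
interface; GC_MALLOC_ATOMIC is the actual Boehm entry point for
pointer-free blocks, though the buffer size here is made up:

#include <gc.h>

int main()
{
    GC_INIT() ;
    // A pixel buffer contains no pointers, so the collector is
    // told not to scan it; the block can still be reclaimed,
    // but it costs nothing at scan time.
    unsigned char* pixels = static_cast< unsigned char* >(
        GC_MALLOC_ATOMIC( 4096 * 4096 ) ) ;
    pixels[ 0 ] = 0 ;   // ... smooth the image in place ...
    // No delete: the collector reclaims the buffer when it is
    // no longer reachable.
}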
The only case where a benchmark makes sense is when you have
two pieces of code that perform the same task and you need to
decide which one to use. Claiming that the results of a
benchmark prove something about anything except the tested
code is very dangerous.
Or as I read somewhere: "Never trust a benchmark you didn't
falsify yourself."