That is 16 years out of date.
It's 16 years old. The wheel is far older. Neither is out of date at
all.
Then why is your best counterexample ancient history?
It's not an example, it's a survey. As noted above, it remains as
relevant as the wheel.
And sixteen years on his prophecy has not come true: we do not have
dedicated hardware for reference counting and none of the major GCs are
based upon reference counting.
You're simply displaying still more of your ignorance. Yes, there is
hardware that has dedicated reference counting capability. I know this
with absolute certainty, because I've designed precisely such hardware.
You claim that "none of the major GCs are [sic] based upon reference
counting", but then:
Python and Mathematica are reference counted but both are slow and suffer
from stalls due to lack of incremental collection.
You cite two major GCs that _are_ based on reference counting.
Of course, as evidence of anything except that some major GCs are based
on reference counting, these mean precisely nothing -- you've shown
nothing about how much of their time is spent on reference counting vs.
other issues. As has already been pointed out, reference counting CAN be
made incremental (in fact, by its very nature it's much more incremental
than most other forms of GC), so your claim that they suffer due to the
lack of incremental collection lacks credibility at best.
In reality, making a collector incremental generally makes it marginally
_slower_ than an otherwise similar, but non-incremental collector. A
collector that executes all at once has to start with a heap in a
consistent state, and ensure that it is returned to a consistent state
when finished. An incremental collector must set the heap to a
consistent state at the end of each collection increment, instead of
only once at the very end of operation.
In addition, an incremental collector must restore its state each time a
collection increment begins and save its state each time a collection
increment ends. This obviously adds time to its overall execution as
well.
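To make that concrete, here's a rough sketch in C -- my own illustration
with made-up names, not any particular collector -- of how an incremental
mark phase has to park its scan state between increments and leave the
heap consistent at every boundary:

    #include <stddef.h>

    #define HEAP_SIZE 1024

    typedef struct Obj {
        struct Obj *child;          /* one outgoing pointer, to keep it simple */
        int marked;
    } Obj;

    static Obj   *gray[HEAP_SIZE];  /* marked-but-unscanned objects            */
    static size_t gray_top;         /* scan state saved between increments     */
    static int    in_progress;

    void gc_start(Obj **roots, size_t nroots)
    {
        for (size_t i = 0; i < nroots; i++)
            if (roots[i] && !roots[i]->marked) {
                roots[i]->marked = 1;
                gray[gray_top++] = roots[i];
            }
        in_progress = 1;
    }

    /* Do at most `budget` units of marking, then hand control back.
       gray/gray_top must be preserved so the next increment can pick
       up exactly where this one stopped.                              */
    int gc_increment(size_t budget)
    {
        while (budget-- > 0 && gray_top > 0) {
            Obj *o = gray[--gray_top];
            if (o->child && !o->child->marked) {
                o->child->marked = 1;
                gray[gray_top++] = o->child;
            }
        }
        if (gray_top == 0)
            in_progress = 0;        /* marking done; a sweep could run now */
        return in_progress;
    }

A stop-the-world collector needs none of that save/restore machinery; it
simply runs the mark phase to completion.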
No. Locality is worse with reference counting which is why it has gotten
relatively slower since that ancient survey. You can easily test this
yourself.
I have. You're wrong. The fact that you'd claim this at all indicates
that you're utterly clueless about how GC works.
In reference counting, you have an object and the reference count is
part of that object. The reference count is never touched except when
there is an operation on the object, so locality of reference is usually
nearly perfect -- the only exception being when/if a cache line boundary
happens to fall precisely between the rest of the object and the
reference count. In many implementations this isn't even possible, and
even where it is possible, it rarely happens in practice.
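If you want to see just how little memory traffic is involved, here's a
minimal sketch in C -- my own code with made-up names (obj_retain and
obj_release), not any particular implementation:

    #include <stdlib.h>

    typedef struct Object {
        long refcount;          /* lives in the object's own header, so it
                                   shares cache lines with the data itself */
        /* ... payload follows ... */
    } Object;

    static Object *obj_retain(Object *o)
    {
        if (o) o->refcount++;           /* touched only when a reference
                                           to this object is created      */
        return o;
    }

    static void obj_release(Object *o)
    {
        if (o && --o->refcount == 0)    /* touched only when a reference
                                           to this object is destroyed    */
            free(o);                    /* reclaimed on the spot; no
                                           separate sweep over the heap   */
    }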
In other garbage collection, you start from a root set of objects and
walk through everything they point at to find all the active objects.
You then collect those that are no longer in use. When carried out as a
simple mark/sweep collector, the locality of reference of this is
absolutely _terrible_ -- every active object gets touched, and all its
pointers followed at _every_ collection cycle.
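In case that isn't clear, here's roughly what a simple mark/sweep pass
looks like -- my own bare-bones sketch in C, not anybody's production
collector:

    #include <stddef.h>

    typedef struct Node {
        struct Node *left, *right;      /* outgoing references */
        int marked;
        int in_use;                     /* set when allocated  */
    } Node;

    static void mark(Node *n)
    {
        if (!n || n->marked)
            return;
        n->marked = 1;                  /* touch the object itself ...         */
        mark(n->left);                  /* ... then chase every pointer in it, */
        mark(n->right);                 /* on every single collection cycle    */
    }

    void collect(Node **roots, size_t nroots, Node *heap, size_t nheap)
    {
        for (size_t i = 0; i < nroots; i++)
            mark(roots[i]);             /* walk everything reachable from roots */
        for (size_t i = 0; i < nheap; i++) {
            if (heap[i].in_use && !heap[i].marked)
                heap[i].in_use = 0;     /* unreachable: return to the allocator */
            heap[i].marked = 0;         /* reset for the next cycle             */
        }
    }

Every reachable object gets read and every pointer in it chased, and the
sweep then walks the entire heap, live or dead. That's where the locality
goes out the window.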
Modern collectors (e.g. generational scavengers) attempt to ameliorate
this massive problem. They do this by assuming that an object which has
survived enough collection cycles will probably survive many more, so
the object (and everything to which it refers) is moved to another area
where those objects are simply treated as permanent.
The first work along this line simply created two areas, one for
transient objects and another for permanent objects. More recent work
creates more or less a gradient, in which objects that have survived the
longest are inspected the least often, and as objects survive collection
cycles, they are slowly promoted up the hierarchy until they are almost
never inspected for the possibility of being garbage.
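The promotion policy itself is simple enough to sketch (again my own
made-up names; a real scavenger copies survivors between spaces rather
than merely flagging them):

    enum { PROMOTE_AFTER = 3 };         /* survivals before promotion (arbitrary) */

    typedef struct GObj {
        int age;                        /* collections this object has survived  */
        int tenured;                    /* promoted objects are scanned rarely,  */
        /* ... payload, pointers ... */ /* or effectively never                  */
    } GObj;

    /* Called for each survivor of a young-generation collection. */
    void note_survivor(GObj *o)
    {
        if (o->tenured)
            return;                     /* already treated as (nearly) permanent */
        if (++o->age >= PROMOTE_AFTER)
            o->tenured = 1;             /* stop inspecting it on every cycle     */
    }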
This improves locality of reference compared to old mark-sweep
collectors -- but it still has substantially worse locality of reference
than a typical reference counting system. In particular, in reference
counting, the reference count is NEVER touched unless an operation that
creates or destroys a reference to that object takes place.
By contrast, even a generational scavenger will chase pointers through
objects many times before the object is promoted to the point that the
frequency of collection on that object is reduced. Even then, in the
gradient-style system, the frequency of chasing pointers through that
object is reduced only slightly until many more fruitless attempts at
collecting it have been made -- at which point, the collection frequency
is again reduced, but (again) not stopped by any means.
This, of course, points to another serious problem with most such
garbage collectors: they are based on the assumption that the longer an
object has lived, the less likely that object is to die at any given
point in time. In short, it assumes that object usage tends to follow a
roughly stack-like model.
That's certainly often true -- but it's equally certainly not always
true. Quite the contrary, some types of applications tend to follow
something much closer to a queue model, in which most objects have
roughly similar life spans, so the longer an object has lived, the MORE
likely it is to die at any given point in time.
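A sliding-window or streaming workload is the obvious case. A toy
illustration (mine, not drawn from any real application):

    #include <stdlib.h>

    #define WINDOW 1024                 /* events kept alive at any moment */

    typedef struct Event { int id; /* ... payload ... */ } Event;

    static Event *window[WINDOW];

    /* Each step, the OLDEST object dies and a brand-new one is born, so
       age predicts imminent death -- exactly backwards from the
       stack-like assumption generational collectors are built around. */
    void stream_step(int step)
    {
        free(window[step % WINDOW]);            /* oldest dies first        */
        Event *fresh = malloc(sizeof *fresh);   /* newest lives for exactly */
        fresh->id = step;                       /* the next WINDOW steps    */
        window[step % WINDOW] = fresh;
    }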
For applications that follow this pattern, most of the relatively recent
work in garbage collectors is not only useless, but actively
detrimental. The GC spends nearly all its time inspecting objects that
are not likely to die any time soon at all, while ignoring nearly all
the objects that are already dead or likely to die soon. The result is
lots of time wasted copying objects AND lots of memory wasted storing
objects that are actually dead.
Yet you cannot cite a single relevant reference or piece of objective
evidence to support an argument that flies in the face of almost all modern
software development (outside restricted memory environments).
Quite the contrary: the reference provided was completely relevant and
remains as accurate today as it was when it was new. The techniques that
make reference counting incremental and, if necessary, real-time still
work today, just as they did 16 years ago.
Some types of data become obsolete quickly. The specific details of a
particular model of CPU that was current 16 years ago would have little
relevance today.
Other kinds of information remain valid and accurate essentially
permanently. The wheel has been known for thousands of years, and far
from becoming obsolete, its use grows every year. Many of the basic
algorithms of computer science, such as binary searching, heap sorting,
merge sorting, and yes, reference counting, are much the same -- they've
been with us for essentially the entire history of computing, but each
remains as relevant today as when it was new.
Of course, this isn't really a binary situation: there's a gradient all
the way from the wheel (thousands of years and growing) to specific CPUs
(sometimes outdated in a matter of days).
It's obvious that if reference counting could be made incremental 16
years ago, it still can. Your claim to the contrary was absolutely
false.
Much worse, however, is your apparent inability to recognize that this
information is of the sort that doesn't become out of date quickly, or
ever for that matter. Once the technique to make reference counting
incremental is discovered, that technique remains usable essentially
permanently. Mere passage of time will not change the fact that
reference counting CAN be done incrementally and (if necessary) can be
done in real time.
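For anyone who actually cares what such a technique looks like: the
oldest and simplest is to defer the cascading frees, so a count hitting
zero just queues the object, and the allocator reclaims a bounded number
of dead objects per call. A sketch, with my own names and none of the
refinements a real system would add:

    #include <stdlib.h>
    #include <stddef.h>

    typedef struct RObj {
        long refcount;
        struct RObj *child;         /* one outgoing reference, for simplicity */
        struct RObj *next_dead;     /* links objects awaiting reclamation     */
    } RObj;

    static RObj *dead_list;         /* zero-count objects not yet freed */

    void release(RObj *o)
    {
        if (o && --o->refcount == 0) {
            o->next_dead = dead_list;   /* defer: no recursive cascade here */
            dead_list = o;
        }
    }

    /* Called from the allocator (or wherever is convenient): reclaim at
       most `budget` dead objects, so no single operation can stall.    */
    void reclaim_some(size_t budget)
    {
        while (budget-- > 0 && dead_list) {
            RObj *o = dead_list;
            dead_list = o->next_dead;
            release(o->child);          /* children may join the dead list */
            free(o);
        }
    }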
Worse still, however, is the lack of logical reasoning involved in your
claim. What you appear to lack is not just the knowledge of the specific
subject matter at hand, but the ability to recognize the implications of
the facts when they are made known to you. Originally you showed
ignorance, but now you're showing either fallacious reasoning, or
outright dishonesty.