multithreading.


Jon Harrop

Ian said:
But surely that requires coupling between the compiler and the GC? The
latter has to provide the data used by the former.

The GC places requirements on the code generated by the compiler. This is
done when the language implementation is designed though, and not at
compile time or run time.

Is that what you meant?
 

Ian Collins

Jon said:
The GC places requirements on the code generated by the compiler. This is
done when the language implementation is designed though, and not at
compile time or run time.

Is that what you meant?
Yes, and I think Dmitriy did as well when he asked "can Real garbage
collectors do this w/o help of the compiler?"

So it does require help from the compiler to generate appropriate code.

An after-the-fact GC library would not be able to do as you described.
 

Jon Harrop

Ian said:
So it does require help from the compiler to generate appropriate code.
Yes.

An after-the-fact GC library would not be able to do as you described.

The language implementation is now split into the compiler and the GC which
must be designed to cooperate.
 

Jerry Coffin

That is 16 years out of date.

It's 16 years old. The wheel is far older. Neither is out of date at
all.
Then why is your best counterexample ancient history?

It's not an example, it's a survey. As noted above, it remains as
relevant as the wheel.
And sixteen years on his prophecy has not come true: we do not have
dedicated hardware for reference counting and none of the major GCs are
based upon reference counting.

You're simply displaying still more of your ignorance. Yes, there is
hardware that has dedicated reference counting capability. I know this
with absolute certainty, because I've designed precisely such hardware.

You claim that: "none of the major GCs are [sic] based upon reference
counting", but then:
Python and Mathematica are reference counted but both are slow and suffer
from stalls due to lack of incremental collection.

You cite two major GCs that _are_ based on reference counting.

Of course, as evidence of anything except that some major GCs are based
on reference counting, these mean precisely nothing -- you've shown
nothing about how much of their time is spent on reference counting vs.
other issues. As has already been pointed out, reference counting CAN be
made incremental (in fact, by its very nature it's much more incremental
than most other forms of GC) so your claim that they suffer due to the
lack of incremental collection lacks credibility, even at best.

In reality, making a collector incremental generally makes it marginally
_slower_ than an otherwise similar, but non-incremental collector. A
collector that executes all at once has to start with a heap in a
consistent state, and ensure that it is returned to a consistent state
when finished. An incremental collector must set the heap to a
consistent state at the end of each collection increment, instead of
only once at the very end of operation.

In addition, an incremental collector must restore its state each time a
collection increment begins and save its state each time a collection
increment ends. This obviously adds time to its overall execution as
well.
No. Locality is worse with reference counting which is why it has gotten
relatively slower since that ancient survey. You can easily test this
yourself.

I have. You're wrong. The fact that you'd claim this at all indicates
that you're utterly clueless about how GC works at all.

In reference counting, you have an object and the reference count is
part of that object. The reference count is never touched except when
there is an operation on the object, so locality of reference is usually
nearly perfect -- the only exception being when/if a cache line boundary
happens to fall precisely between the rest of the object and the
reference count. In many implementations, this isn't even possible; in
the relatively rare situation that it's possible, it's still quite rare.
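Jerry's layout argument can be made concrete with a minimal sketch (the names here are illustrative, not taken from any particular runtime): the count is an intrusive field of the object, so bumping or dropping it touches only the object's own memory.

```cpp
// Intrusive reference count: the count lives inside the object itself,
// so acquiring or releasing a reference touches the same cache line(s)
// as the object's payload.
struct Object {
    long refcount;
    int payload;
};

Object* acquire(Object* obj) {
    ++obj->refcount;        // only touched during an operation on obj
    return obj;
}

// Returns true if this release freed the object.
bool release(Object* obj) {
    if (--obj->refcount == 0) {
        delete obj;         // reclaimed the moment the last reference dies
        return true;
    }
    return false;
}
```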

In other garbage collection, you start from a root set of objects and
walk through everything they point at to find all the active objects.
You then collect those that are no longer in use. When carried out as a
simple mark/sweep collector, the locality of reference of this is
absolutely _terrible_ -- every active object gets touched, and all its
pointers followed at _every_ collection cycle.
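The pointer-chasing cost described above can be sketched with a toy mark/sweep pass (illustrative only, not any production collector): every collection cycle visits every live object and follows all of its outgoing pointers.

```cpp
#include <vector>

struct Node {
    bool marked = false;
    std::vector<Node*> children;
};

// Mark phase: chase every pointer reachable from a root -- every live
// object is touched, and all its pointers followed, on every cycle.
void mark(Node* n) {
    if (n == nullptr || n->marked) return;
    n->marked = true;
    for (Node* c : n->children)
        mark(c);
}

// Sweep phase: reclaim whatever the mark phase never reached, and
// clear the marks of the survivors for the next cycle.
std::vector<Node*> sweep(const std::vector<Node*>& heap) {
    std::vector<Node*> live;
    for (Node* n : heap) {
        if (n->marked) {
            n->marked = false;
            live.push_back(n);
        } else {
            delete n;
        }
    }
    return live;
}
```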

Modern collectors (e.g. generational scavengers) attempt to ameliorate
this massive problem. They do this by assuming that when an object has
survived enough cycles of collection that the object will probably
survive many more cycles as well, so the object (and everything to which
it refers) is moved to another area where those objects are simply
treated as permanent.

The first work along this line simply created two areas, one for
transient objects and another for permanent objects. More recent work
creates more or less a gradient, in which objects that have survived the
longest are inspected the least often, and as objects survive collection
cycles, they are slowly promoted up the hierarchy until they are almost
never inspected for the possibility of being garbage.

This improves locality of reference compared to old mark-sweep
collectors -- but it still has substantially worse locality of reference
than a typical reference counting system. In particular, in reference
counting, the reference count is NEVER touched unless an operation that
creates or destroys a reference to that object takes place.

By contrast, even a generational scavenger will chase pointers through
objects many times before the object is promoted to the point that the
frequency of collection on that object is reduced. Even then, in the
gradient-style system, the frequency of chasing pointers through that
object is reduced only slightly until many more fruitless attempts at
collecting it have been made -- at which point, the collection frequency
is again reduced, but (again) not stopped by any means.

This, of course, points to another serious problem with most such
garbage collectors: they are based on the assumption that the longer an
object has lived, the less likely that object is to die at any given
point in time. In short, it assumes that object usage tends to follow a
roughly stack-like model.

That's certainly often true -- but it's equally certainly not always
true. Quite the contrary, some types of applications tend to follow
something much closer to a queue model, in which most objects have
roughly similar life spans, so the longer an object has lived, the MORE
likely it is to die at any given point in time.

For applications that follow this pattern, most of the relatively recent
work in garbage collectors is not only useless, but actively
detrimental. The GC spends nearly all its time inspecting objects that
are not likely to die any time soon at all, while ignoring nearly all
the objects that are already dead or likely to die soon. The result is
lots of time wasted copying objects AND lots of memory wasted storing
objects that are actually dead.
Yet you cannot cite a single relevant reference or piece of objective
evidence to support an argument that flies in the face of almost all modern
software development (outside restricted memory environments).

Quite the contrary: the reference provided was completely relevant and
remains as accurate today as it was when it was new. The techniques to
make reference counting incremental and, if necessary, real-time work
today, just like they did 16 years ago.

Some types of data become obsolete quickly. The specific details of a
particular model of CPU that was current 16 years ago would have little
relevance today.

Other kinds of information remain valid and accurate essentially
permanently. The wheel has been known for thousands of years, and far
from becoming obsolete, its use grows every year. Many of the basic
algorithms of computer science, such as binary searching, heap sorting,
merge sorting, and yes, reference counting, are much the same -- they've
been with us for essentially the entire history of computing, but each
remains as relevant today as when it was new.

Of course, this isn't really a binary situation: there's a gradient all
the way from the wheel (thousands of years and growing) to specific CPUs
(sometimes outdated in a matter of days).

It's obvious that if reference counting could be made incremental 16
years ago, it still can. Your claim to the contrary was absolutely
false.

Much worse, however, is your apparent inability to recognize that this
information is of the sort that doesn't become out of date quickly, or
ever for that matter. Once the technique to make reference counting
incremental is discovered, that technique remains usable essentially
permanently. Mere passage of time will not change the fact that
reference counting CAN be done incrementally and (if necessary) can be
done in real time.

Much worse, however, is the lack of logical reasoning involved in your
claim. What you appear to lack is not just the knowledge of the specific
subject matter at hand, but the ability to recognize the implications of
the facts when they are made known to you. Originally you showed
ignorance, but now you're showing either fallacious reasoning, or
outright dishonesty.
 

Jerry Coffin

Can Real garbage collectors do this w/o help of the compiler? I'm not
sure.

Actually, even WITH the help of the compiler, most GCs won't collect bar
until it goes out of scope.

A garbage collector starts from all known pointers, and marks each
object they point at, recursively, until all objects that can be reached
by the program have been marked as being in use. The starting point is
taken from all global variables and all variables created on the stack.

In short, the garbage collector leaves the object intact until it can
prove (in a rather simpleminded fashion, based only on pointers) that
the object _can't_ possibly be referred to ever again. At least in the
typical case, however, this is done based only on pointers to the object
NOT on analyzing code. The fact that a stack frame contains a pointer to
an object is considered reason to keep that object around, even if the
code ceases to dereference that pointer at some time.
 

Dilip

It's 16 years old. The wheel is far older. Neither is out of date at
all.



It's not an example, it's a survey. As noted above, it remains as
relevant as the wheel.

Jerry

Can I invite you to take a look at this post[1] from a few years
ago where an MSFT employee laid out his thoughts on why they went the
GC way w/o bothering about refcounting? Also, shortly after the post
was written, MSFT funded a project to try to add ref-counting to their
Rotor (their version of the open-sourced CLR) codebase[2] as a kind of
feasibility study, and it failed miserably. For the latter project, I
only have the details as a Word document, which I have uploaded
here[2]. Let me know what you think.

[1] http://discuss.develop.com/archives/wa.exe?A2=ind0010A&L=DOTNET&D=0&P=39459
[2] http://cid-ff220db24954ce1d.skydrive.live.com/browse.aspx/RefcountingToRotor
 

Jon Harrop

Jerry said:
Actually, even WITH the help of the compiler, most GCs won't collect bar
until it goes out of scope.

A garbage collector starts from all known pointers, and marks each
object they point at, recursively, until all objects that can be reached
by the program have been marked as being in use. The starting point is
taken from all global variables and all variables created on the stack.

In short, the garbage collector leaves the object intact until it can
prove (in a rather simpleminded fashion, based only on pointers) that
the object _can't_ possibly be referred to ever again. At least in the
typical case, however, this is done based only on pointers to the object
NOT on analyzing code. The fact that a stack frame contains a pointer to
an object is considered reason to keep that object around, even if the
code ceases to dereference that pointer at some time.

That's a nice theory but it is not representative of how GCs actually work
and your opening statement is completely wrong. Here is a counter-example
in OCaml:

$ cat >a.ml
open Printf

let rec init t = function
| 0 -> ()
| n -> init (n::t) (n-1)

let () =
let n = try int_of_string Sys.argv.(1) with _ -> 1000 in
let t = [n] in
let weak = Weak.create 1 in
Weak.set weak 0 (Some t);
init t n;
init [] n;
printf "%s\n" (if Weak.check weak 0 then "uncollected" else "collected");
$ ocamlopt a.ml -o a
$ ./a 10000
collected

As you can see, the value was collected before the scope ended.

I've also tested this in F# (on .NET) and the value is also collected.
 

Chris Thomasson

Jon Harrop said:
A data structure implementor using reference counting must be aware of the
overhead of reference counting and work around it by amortizing reference
counts whenever possible. You yourself only just described this in the
context of performance.

A data-structure implementer should always be aware of how he is going to
safely reclaim memory in the presence of multi-threading. Garbage collection
does not get you a free lunch here. Neither does reference counting.


That is irrelevant. Your argument in favor of reference counting was
completely fallacious.

You're incorrect. You need to open your eyes here:

You claimed was that reference counting is "more accurate than a
traditional
GC could ever be". Consider:

{
    Bar bar;
    f(bar);
    g();
}

Reference counting will keep "bar" alive until its reference count happens
to be zeroed when it drops out of scope even though it is not reachable
during the call to "g()". Real garbage collectors can collect "bar" during
the call to "g" because it is no longer reachable.

So GCs can clearly collect sooner than reference counters.

Now you're just trolling. Why are you thinking in terms of scope here? Did you
know that C++ allows the programmer to do something like:

{
    {
        Bar bar;
        f(bar);
    }
    g();
}

See? We can go in circles forever here. You're posting nonsense.



Which may well be before the scope happens to end.

Probably not. A GC only runs once in a while. It's nowhere near as accurate
as general forms of reference counting. Does that make reference counting
better? Na. It's just another tool in our box.



By the same logic: C++ is for programmers who don't know, or want, to
write
assembler manually.

You're comparing creating efficient manual memory management schemes with
programming assembly language? Are you serious, or just trolling again?



Reference counts consume a lot more space.

Oh boy. Here we go again. I ask you: What reference counting algorithm? You
make blanket statements that are completely false!

:^/



Yes. For example, OCaml hides two bits inside each pointer totalling zero
overhead. In contrast, a reference counting system is likely to add a
machine word, bloating each and every value by 8 bytes unnecessarily on
modern hardware.

There are some counting algorithms that use pointer stealing. Anyway, you're
making false assumptions based on blanket statements. Drill down on some
specific reference counting algorithms, please.



Just Java?

.NET. Object meta-data is required by collectors and reference counting
algorithms. Some keep the data per-object. Some keep it per-thread. Some
keep it in a separate chunk of memory. Etc., etc...



Let's do a single-threaded benchmark first.

That's no good! Multi-threading is the way things are now. You don't get a
free lunch anymore.



Symbolic rewriting would make a good benchmark. Here is one:

http://www.lambdassociates.org/studies/study10.htm

Try implementing this using reference counting and we can see how it
compares (probably on a trickier rewrite). Replace the arbitrary-precision
ints with doubles.

I need to take a look at this, but are you sure that this even needs memory
management? Could I just reserve a large block of memory, carve
data-structures out of it, and use caching? I don't think this needs GC or
reference counting.



I admire your faith but I would like to see some evidence to back up such
claims because they run contrary to common wisdom accumulated over the
past
few decades.

Here is a stupid contrived example:
________________________________________________________
#include <stdlib.h> /* malloc, free */

struct object {
    object* cache_next;
    bool cached;
    [...];
};

#define OBJ_DEPTH 100000

static object g_obj_buf[OBJ_DEPTH];
static object* g_obj_cache = NULL;

void object_prime() {
    for (int i = 0; i < OBJ_DEPTH; ++i) {
        g_obj_buf[i].cached = true;
        g_obj_buf[i].cache_next = g_obj_cache;
        g_obj_cache = &g_obj_buf[i];
    }
}

object* object_pop() {
    object* obj = g_obj_cache;
    if (obj) {
        g_obj_cache = obj->cache_next; /* unlink the head of the cache */
    } else if ((obj = (object*)malloc(sizeof(*obj)))) {
        obj->cached = false;           /* overflow object: not cacheable */
    }
    return obj;
}

void object_push(object* obj) {
    if (obj->cached) {
        obj->cache_next = g_obj_cache;
        g_obj_cache = obj;
    } else {
        free(obj);
    }
}

void foo() {
    for (;;) {
        object* foo = object_pop();
        object_push(foo);
    }
}
________________________________________________________




This is a fast object cache that will likely perform better than doing it
the GC way where nobody wants to manage their own memory:
________________________________________________________
void foo() {
for (;;) {
object* foo = gc_malloc(sizeof(*foo));
}
}
________________________________________________________



Yes manual memory management is more work and if you don't want to do that,
well, GC can be a life saver indeed.



Absolutely. GCs do not have to wait that long: they can collect sooner.

A garbage collector generally cannot be as accurate as a reference count.
Your contrived scope example is nonsense.



You've just made a series of irrelevant references here.

How are the implementation details of a GC irrelevant to a discussion on GC?



Of course. I fully expect you to fail on the above benchmark but am
willing
to give you the benefit of the doubt.

Any benchmark on GC these days has to be able to use multiple processes or
threads. Ahh... Here is something we can do! Okay, we can create a
multi-threaded daemon that serves factory requests to multiple processes.
This would use multi-threading, multi-processing, and shared memory:

I want to implement the wait-free factory as a multi-threaded daemon process
with which multiple producers can register the path and name of a shared
library; concurrent consumer processes can then look the library up by name,
dynamically link with it, and call a "common" instance function (e.g., a
common API). I guess you could think of it as a highly concurrent
per-computer COM daemon. The factory consumer threads can use the lock-free
reader pattern, and the producer threads can use a form of mutual exclusion,
including, but not limited to, locks.

Or we can do an efficient observer pattern:

I want the wait-free observer to be a multi-threaded daemon that allows
multiple threads to create/register delegates and message-types, and allows
consumer threads to register with those delegates and receive their
messages; producer threads create messages and then signal the delegates
that manage those messages to multicast them to their registered consumers.

Now those programs will definitely test a GC to the limits. BTW, do you know
of a GC that can collect across multiple processes working with shared memory???

Which one do you want to do? The multi-thread/process factory or the
multi-thread observer?

;^)



That would be a specific kind of cache. Caches do not inherently have
anything to do with allocation.

A cache is an aid to allocation.



Apparently what you wanted to ask me was how I would implement an
allocation
cache in a garbage collected language.
Yes.




The answer depends upon the language. If you look at OCaml, for example,
there is no need to because the run-time has already automated this.

Then pick a language that does do that.




You should take a look at it.
 

Chris Thomasson

Jon Harrop said:
We already discussed that exact reference and it has nothing to do with
garbage collection. Read the comments left on his blog post.

It does have to do with how the Java garbage collector interacts with the
data-structures he was using.
 

Chris Thomasson

Jon Harrop said:
That is 16 years out of date.


Then why is your best counterexample ancient history?

Trolling again.



And sixteen years on his prophecy has not come true: we do not have
dedicated hardware for reference counting and none of the major GCs are
based upon reference counting.

GC in its essence tracks references to objects. I already pointed this out:

http://groups.google.com/group/comp.lang.c++/msg/9d01df98534428ee


Python and Mathematica are reference counted but both are slow and suffer
from stalls due to lack of incremental collection.


No. Locality is worse with reference counting which is why it has gotten
relatively slower since that ancient survey. You can easily test this
yourself.

Will you stop making FALSE blanket statements! For your information,
different forms of distributed reference counting have excellent locality of
reference. You can keep the counts on a per-cpu or per-thread basis.
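As a sketch of the per-thread idea (hypothetical, not any specific published algorithm), each thread can update its own cache-line-padded counter slot, so updates stay local and uncontended; the slots are only summed when the true count is actually needed:

```cpp
#include <array>
#include <atomic>

constexpr int kMaxThreads = 8;  // assumed fixed thread count for the sketch

struct DistributedCount {
    // One slot per thread, padded to a cache line to avoid false sharing.
    struct alignas(64) Slot {
        std::atomic<long> n{0};
    };
    std::array<Slot, kMaxThreads> slots;

    // Each thread touches only its own slot: good locality, no contention.
    void incr(int tid) { slots[tid].n.fetch_add(1, std::memory_order_relaxed); }
    void decr(int tid) { slots[tid].n.fetch_sub(1, std::memory_order_relaxed); }

    // The expensive cross-slot sum is deferred until a decision is needed.
    long total() const {
        long sum = 0;
        for (const Slot& s : slots)
            sum += s.n.load(std::memory_order_relaxed);
        return sum;
    }
};
```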



Yet you cannot cite a single relevant reference or piece of objective
evidence to support an argument that flies in the face of almost all
modern
software development (outside restricted memory environments).

Can you even list at least 5 different forms of reference counting? You
make blanket statements about all of them, yet I bet that you don't even
know most of the specific algorithms. Ahh, you said that the specific
algorithms do not matter:

http://groups.google.com/group/comp.programming.threads/msg/05a4e3e3a4a75f05

I have to let you know that the details are very important. Your claim that
they are irrelevant sheds a lot of light on your knowledge of the matter.
 

Chris Thomasson

Can Real garbage collectors do this w/o help of the compiler? I'm not
sure.
With compiler support reference-counting can easily collect bar
precisely at the end of f(). Don't you think so?

Sure. And so can the programmer:

{
    {
        Bar bar;
        f(bar);
    }
    g();
}


;^)
 

Chris Thomasson

Barry Kelly said:
Of course he can't. That's because he's living in the world of 16 years
and more ago.

I wouldn't waste so much time with Jerry, Chris and others of their ilk.
They're so steeped in obsolete experience that they're blind even to the
present, much less the future. Even Einstein never accepted quantum
theory.
[...]

Trolling? Anyway, I have nothing against GC. I simply wanted to let Jon know
that most of his blanket claims about reference counting were false, or
misleading at best.
 

Chris Thomasson

Dilip said:
It's 16 years old. The wheel is far older. Neither is out of date at
all.



It's not an example, it's a survey. As noted above, it remains as
relevant as the wheel.

Jerry

Can I invite you to take a look at this post[1] from a few years
ago where an MSFT employee laid out his thoughts on why they went the
GC way w/o bothering about refcounting? Also, shortly after the post
was written, MSFT funded a project to try to add ref-counting to their
Rotor (their version of the open-sourced CLR) codebase[2] as a kind of
feasibility study, and it failed miserably. For the latter project, I
only have the details as a Word document, which I have uploaded
here[2]. Let me know what you think.

[1]
http://discuss.develop.com/archives/wa.exe?A2=ind0010A&L=DOTNET&D=0&P=39459
[2]
http://cid-ff220db24954ce1d.skydrive.live.com/browse.aspx/RefcountingToRotor

Thanks for posting that. I will read through them when I get some time. One
point on circular references... The simplistic, contrived and most naive
solution is to use offsets from zero to determine when an object is
destroyed. Every circular reference counts as an offset. So if there are two
circular refs, then the offset would be 2. This means that when the
reference count drops to 2 it's analogous to dropping to zero. Will this
work for you? Probably not. Does it work in certain scenarios? Yes.
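The contrived offset scheme described here might look like this as a sketch (it assumes the number of internal cycle references pointing at the object is known up front, which is exactly why it only works in certain scenarios):

```cpp
// Cycle-aware count: `offset` is the number of references that belong to
// a known internal cycle. The object is logically dead when the count
// falls back to that offset rather than all the way to zero.
struct CycleAware {
    long count;    // total references, including the circular ones
    long offset;   // e.g. 2 if two circular refs point at this object
};

// Returns true when the object should be destroyed.
bool cycle_release(CycleAware& o) {
    return --o.count == o.offset;
}
```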
 

Chris Thomasson

Jon Harrop said:
Absolutely, yes. I just checked this with a couple of GC'd languages and
they both collected "bar" before the end of its scope, i.e. sooner than
reference counting can.
[...]

Yet another naive statement. Try this:

{
    {
        Bar bar;
        f(bar);
    }
    g();
}


I as the programmer know more than a reference counting algorithm does,
even your almighty savior: GC.
 

Chris Thomasson

Jon Harrop said:
Agreed. I hadn't realised this until I Googled and discovered that they
constitute a small clique of intraciting kooks on usenet. :)

:^D
 

Jon Harrop

Chris said:
Trolling? Anyway, I have nothing against GC. I simply wanted to let Jon
know that most of his blanket claims about reference counting were false,
or misleading at best.

Ironic result then. :)
 

Jon Harrop

Jerry said:
You're simply displaying still more of your ignorance. Yes, there is
hardware that has dedicated reference counting capability. I know this
with absolute certainty, because I've designed precisely such hardware.
Citation?

You claim that: "none of the major GCs are [sic] based upon reference
counting", but then:
Python and Mathematica are reference counted but both are slow and suffer
from stalls due to lack of incremental collection.

You cite two major GCs that _are_ based on reference counting.

Sure. Lots of old programs used reference counting and CPython and the old C
core of Mathematica are examples.

They are being superseded though. We now have a multitude of Python
implementations and Mathematica is migrating to the JVM. They all have two
things in common: they're faster for allocation-intensive tasks and they
don't use reference counting.

Both PyPy and IronPython use real GCs and both beat CPython on Python's
standard GC benchmark "gcbench.py". From the horse's mouth:

"PyPy shines on gcbench, which is mostly just about allocating and freeing
many objects. Our gc is simply better than refcounting, even though we've
got shortcomings in other places." -
http://morepypy.blogspot.com/2008/03/as-fast-as-cpython-for-carefully-taken.html

Note that the inefficiency of reference counting is so severe that it can
even be significant in interpreted languages!

There are many other independent benchmarks:

CPython: 11.00s reference counted
IronPython: 4.14s .NET GC
Jython: 1.24s JVM GC

http://pyinsci.blogspot.com/2007/09/parallel-processing-in-cpython.html
Of course, as evidence of anything except that some major GCs are based
on reference counting, these mean precisely nothing -- you've shown
nothing about how much of their time is spent on reference counting vs.
other issues.

Exactly. Reference counting has been optimized out of all performant GCs.
As has already been pointed out, reference counting CAN be
made incremental (in fact, by its very nature it's much more incremental
than most other forms of GC)

Mathematica and OCaml demonstrate the opposite.
I have. You're wrong. The fact that you'd claim this at all indicates
that you're utterly clueless about how GC works at all.

Yet you still haven't ported the benchmark I cited.
In reference counting, you have an object and the reference count is
part of that object. The reference count is never touched except when
there is an operation on the object,

No: reference counts are updated whenever the value is referenced or
dereferenced and those are not operations on the value.
so locality of reference is usually
nearly perfect -- the only exception being when/if a cache line boundary
happens to fall precisely between the rest of the object and the
reference count. In many implementations, this isn't even possible; in
the relatively rare situation that it's possible, it's still quite rare.

In other garbage collection, you start from a root set of objects and
walk through everything they point at to find all the active objects.
You then collect those that are no longer in use. When carried out as a
simple mark/sweep collector, the locality of reference of this is
absolutely _terrible_ -- every active object gets touched, and all its
pointers followed at _every_ collection cycle.

That description is 48 years out of date.
Modern collectors (e.g. generational scavengers) attempt to ameliorate
this massive problem. They do this by assuming that when an object has
survived enough cycles of collection that the object will probably
survive many more cycles as well, so the object (and everything to which
it refers) is moved to another area where those objects are simply
treated as permanent.

Older generations are not regarded as "permanent".
The first work along this line simply created two areas, one for
transient objects and another for permanent objects. More recent work
creates more or less a gradient, in which objects that have survived the
longest are inspected the least often, and as objects survive collection
cycles, they are slowly promoted up the hierarchy until they are almost
never inspected for the possibility of being garbage.

This improves locality of reference compared to old mark-sweep
collectors -- but it still has substantially worse locality of reference
than a typical reference counting system. In particular, in reference
counting, the reference count is NEVER touched unless an operation that
creates or destroys a reference to that object takes place.

Which happens all the time and is the reason why reference counting has
worse locality of reference and performance. The other reason is that
reference counting causes fragmentation because it cannot move values.
By contrast, even a generational scavenger will chase pointers through
objects many times before the object is promoted to the point that the
frequency of collection on that object is reduced. Even then, in the
gradient-style system, the frequency of chasing pointers through that
object is reduced only slightly until many more fruitless attempts at
collecting it have been made -- at which point, the collection frequency
is again reduced, but (again) not stopped by any means.

This, of course, points to another serious problem with most such
garbage collectors: they are based on the assumption that the longer an
object has lived, the less likely that object is to die at any given
point in time. In short, it assumes that object usage tends to follow a
roughly stack-like model.

That's certainly often true -- but it's equally certainly not always
true. Quite the contrary, some types of applications tend to follow
something much closer to a queue model, in which most objects have
roughly similar life spans, so the longer an object has lived, the MORE
likely it is to die at any given point in time.

For applications that follow this pattern, most of the relatively recent
work in garbage collectors is not only useless, but actively
detrimental. The GC spends nearly all its time inspecting objects that
are not likely to die any time soon at all, while ignoring nearly all
the objects that are already dead or likely to die soon. The result is
lots of time wasted copying objects AND lots of memory wasted storing
objects that are actually dead.

This is all speculation for which there is overwhelming evidence to the
contrary. In short, if any of your points were correct then people would be
building major GCs on reference counting but they are not.
Quite the contrary: the reference provided was completely relevant and
remains as accurate today as it was when it was new.

Even its prophecies that we now know to be wrong?
The techniques to
make reference counting incremental and, if necessary, real-time work
today, just like they did 16 years ago.

But they aren't used because everyone has moved on to real GCs because they
are better in almost every respect, including the ones you're trying to
contest.
Some types of data become obsolete quickly. The specific details of a
particular model of CPU that was current 16 years ago would have little
relevance today.

The memory gap is the name given to the phenomenon that has obsoleted your
argument.
Other kinds of information remain valid and accurate essentially
permanently. The wheel has been known for thousands of years, and far
from becoming obsolete, its use grows every year. Many of the basic
algorithms of computer science, such as binary searching, heap sorting,
merge sorting, and yes, reference counting, are much the same -- they've
been with us for essentially the entire history of computing, but each
remains as relevant today as when it was new.

Of course, this isn't really a binary situation: there's a gradient all
the way from the wheel (thousands of years and growing) to specific CPUs
(sometimes outdated in a matter of days).

It's obvious that if reference counting could be made incremental 16
years ago, it still can. Your claim to the contrary was absolutely
false.

Much worse, however, is your apparent inability to recognize that this
information is of the sort that doesn't become out of date quickly, or
ever for that matter. Once the technique to make reference counting
incremental is discovered, that technique remains usable essentially
permanently. Mere passage of time will not change the fact that
reference counting CAN be done incrementally and (if necessary) can be
done in real time.

Much worse, however, is the lack of logical reasoning involved in your
claim. What you appear to lack is not just the knowledge of the specific
subject matter at hand, but the ability to recognize the implications of
the facts when they are made known to you. Originally you showed
ignorance, but now you're showing either fallacious reasoning, or
outright dishonesty.

Then you should be able to provide credible references and worked counter
examples as I have.
 

Chris Thomasson

Jon Harrop said:
My counter example has proven that your claim was fallacious.

Please tell me exactly what's wrong with the following:

{
    {
        refcount<Bar> bar(new Bar);
        f(bar);
    }
    g();
}

?
 

Jon Harrop

Chris said:
Please tell me exactly what's wrong with the following:

{
    {
        refcount<Bar> bar(new Bar);
        f(bar);
    }
    g();
}

The problem is that it is irrelevant, not that it is wrong.
 
