
Chris Thomasson

Jon Harrop said:
Reference counting was one of the earliest forms of garbage collection and
it is riddled with many very serious and well known problems. You are
literally reinventing the GC and you are now several decades out of date.
[...]

How can you say that when you have absolutely no idea what reference
counting algorithms I use, or how I use them? Anyway, the only type of GC
that I have found to be really useful is Proxy GC.
 

Jon Harrop

Chris said:
Jon Harrop said:
Reference counting was one of the earliest forms of garbage collection
and it is riddled with many very serious and well known problems. You are
literally reinventing the GC and you are now several decades out of date.
[...]

How can you say that when you have absolutely no idea what reference
counting algorithms I use, or how I use them?

Because the problems are a consequence of counting references. The precise
details don't matter beyond the fact that you're still counting references.

This has been well documented over the past 48 years.
 

Chris Thomasson

Jon Harrop said:
Chris said:
Jon Harrop said:
Reference counting was one of the earliest forms of garbage collection and it is riddled with many very serious and well known problems. You are literally reinventing the GC and you are now several decades out of date.
[...]

How can you say that when you have absolutely no idea what reference
counting algorithms I use, or how I use them?

Because the problems are a consequence of counting references. The precise
details don't matter beyond the fact that you're still counting
references.

The precise details don't matter? REALLY? Before I go on, please explain yourself. You are starting to sound like you don't know any of the new reference counting algorithms that are out there. Before I waste my time trying to explain them to you, I need to hear your explanation of how the details don't matter.

:^/

This has been well documented over the past 48 years.

For what algorithm? You are talking nonsense because some of the new counting tricks have only recently been invented.
 

Jon Harrop

Chris said:
The precise details don't matter? REALLY? Before I go on, please explain yourself. You are starting to sound like you don't know any of the new reference counting algorithms that are out there. Before I waste my time trying to explain them to you, I need to hear your explanation of how the details don't matter.

I only just explained that and gave you a reference. Just read the
reference.
You are talking nonsense because some of the new counting tricks have only recently been invented.

You make it sound as if reference counting is making a come-back. I'll give
you the benefit of the doubt though: can you cite any references indicating
that reference counting can be even vaguely competitive compared to a real
GC?
 

Jon Harrop

Chris said:
cycles aside, can you list several other very serious problems?

Sure:

. Fragility.
. Memory overhead.
. Performance degradation.
. Fragmentation.
. Not incremental, i.e. awful worst case performance.
[...]
BTW, IMHO, cycles are a red herring and can usually be designed around.

Of course: by reinventing the GC.
How are virtually zero-overhead counting algorithms out-of-date? Please
explain...

Are you referring to your own patented algorithm for which there are no
verifiable results?
http://groups.google.com/group/comp.programming.threads/browse_frm/thread/5e9357a6fb746e5d

The thread you cite is a discussion about two different articles, neither of which substantiates your claim:

The first is an article describing problems with a specific Java data
structure (WeakHashMap):
[...]

How do you effectively cache objects in a "collect-the-world" environment? GC and caching don't get along _sometimes_... From the application's point of view an object in the cache is in a quiescent state. However, from the GC perspective a node in the cache is a pointer to a "live object".

You are simply restating that author's misconceptions. Read the
documentation.
 

Chris Thomasson

Jon Harrop said:
I only just explained that and gave you a reference. Just read the
reference.


You make it sound as if reference counting is making a come-back. I'll give you the benefit of the doubt though: can you cite any references indicating that reference counting can be even vaguely competitive compared to a real GC?

Okay, I will give just a couple of examples... You seem to think that one
needs to reference count every single object; you do not. A simple example
of this is a Proxy GC implemented with reference counting. Here are some
more details on the concept of proxy GC in general:

http://groups.google.com/group/comp.programming.threads/msg/41f29efe33e7f124

http://groups.google.com/group/comp...=en&group=comp.programming.threads&q=proxy+gc

http://groups.google.com/group/comp.lang.c++/msg/e24a9459777ec430
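
To make the proxy idea concrete, here is a minimal, lock-based sketch: readers take a single reference on a shared proxy object for the duration of a protected region instead of counting every node they touch, and retired nodes are only freed once the proxy's count drops to zero. The names are illustrative and the mutex is a deliberate simplification; the collectors linked above are lock-free and considerably more subtle.

// Sketch of the proxy-GC idea: reference-count one proxy object per
// protected region instead of every node. Lock-based for brevity; real
// proxy collectors (Seigh, Thomasson, V'jukov) are lock-free.
#include <mutex>
#include <vector>

struct Node { int value; };

class ProxyCollector {
    std::mutex lock_;
    long readers_ = 0;              // references held on the proxy
    std::vector<Node*> deferred_;   // nodes retired while readers were active
public:
    void acquire() {                // reader enters a protected region
        std::lock_guard<std::mutex> guard(lock_);
        ++readers_;
    }
    void release() {                // reader leaves; last one out reclaims
        std::vector<Node*> to_free;
        {
            std::lock_guard<std::mutex> guard(lock_);
            if (--readers_ == 0) to_free.swap(deferred_);
        }
        for (Node* n : to_free) delete n;
    }
    void retire(Node* n) {          // writer unlinks a node and defers the free
        std::lock_guard<std::mutex> guard(lock_);
        if (readers_ == 0) delete n;
        else deferred_.push_back(n);
    }
};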







You can download Joe Seigh's atomic-ptr-plus package from here:

http://atomic-ptr-plus.sourceforge.net

http://sourceforge.net/project/showfiles.php?group_id=127837







Dmitriy V'jukov has created a nice one here:

http://groups.google.com/group/comp.programming.threads/browse_frm/thread/42b035cd0280bb31






Here is my work in the area:

http://groups.google.com/group/comp...amming.threads&q=vzoom&qt_g=Search+this+group

http://appft1.uspto.gov/netacgi/nph...70".PGNR.&OS=DN/20070067770&RS=DN/20070067770


The vZOOM algorithm uses per-thread reference counts which don't need any
memory barriers or atomic operations in order to update them.
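
As a loose illustration of why per-thread counts can avoid interlocked operations on the fast path (this is NOT the vZOOM algorithm itself, whose details are in the patent application above; it shows only the single-writer idea): each thread updates plain thread-local counters that no other thread writes, and a separate synchronization/quiescence phase, omitted here, is required before anything can actually be reclaimed.

// Illustration of the single-writer fast path only: counters touched by one
// thread need no atomic read-modify-write and no memory barrier. Reclamation
// would require a separate quiescence/synchronization step (not shown).
struct PerThreadRefs {
    long acquires = 0;   // written only by the owning thread
    long releases = 0;
};

thread_local PerThreadRefs tl_refs;

inline void acquire_ref() { ++tl_refs.acquires; }   // plain increment
inline void release_ref() { ++tl_refs.releases; }   // plain increment

// A collector would later sum every thread's (acquires, releases) pair during
// a quiescent period and reclaim objects whose net count has reached zero.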

Here is some of my proxy collector work that I released into the public domain:

http://home.comcast.net/~vzoom/demos/pc_sample.c

http://appcore.home.comcast.net/misc/pc_sample_h_v1.html


Here is a reader/writer usage pattern:

http://groups.google.com/group/comp.programming.threads/msg/325a046ac6ed2800








Dmitriy V'jukov has also created this low-overhead reference counting
algorithm:

http://groups.google.com/group/comp.programming.threads/browse_frm/thread/dab22a17c32c6b13








Before I go on and on:

This is a lot of information to absorb and you probably won't be able to learn all of it any time soon. So, before you label the work cited above as nonsense, please try to understand some of it. The folks over on 'comp.programming.threads' have been working with these types of algorithms for almost a decade and we can help you out.

:^)
 

Chris Thomasson

[comp.programming.threads added]


Jon Harrop said:
Sure:

. Fragility.
. Memory overhead.
. Performance degradation.
. Fragmentation.
. Not incremental, i.e. awful worst case performance.

Reference counting is fragile? I don't think so; it's only as fragile as the programmer using it. There are no silver bullets.


Memory overhead? No, a reference count releases the object when the
reference drops to zero which is more accurate than a traditional GC could
ever be. Are you talking about the extra word of space per-object? Did you
know that many garbage collectors use per-object meta-data as well?


Performance degradation? Again, I ask you what specific algorithms are you
talking about? It does matter.


Not incremental? Drill down on that one some more please.



...


Of course: by reinventing the GC.


Are you referring to your own patented algorithm for which there are no
verifiable results?

That's not the only one out there. Anyway, my counting algorithm uses per-thread counters that do not need atomics or membars to be mutated. BTW, this algorithm won me a T2000 from Sun:

https://coolthreads.dev.java.net
(the vzoom project)


It has made me some $$$ because I managed to license it to a few companies who are very pleased with the results. One of them was involved with creating embedded devices (QUADROS on an ARM9) and wanted to decrease the overhead of their reference counting; vZOOM helped them out. Also, the low-overhead memory allocator allowed them to create a heap out of QUADROS tasks; they were VERY pleased with that because they were getting ready to alter the OS source code:

http://groups.google.com/group/comp.programming.threads/msg/f6d16f5323311361
(last paragraph...)



BTW, garbage collectors have their share of problems; here are some
of them:

http://groups.google.com/group/comp.programming.threads/browse_frm/thread/5e9357a6fb746e5d

The thread you cite is a discussion about two different articles, neither of which substantiates your claim:

The first is an article describing problems with a specific Java data
structure (WeakHashMap):
[...]

How do you effectively cache objects in a "collect-the-world" environment? GC and caching don't get along _sometimes_... From the application's point of view an object in the cache is in a quiescent state. However, from the GC perspective a node in the cache is a pointer to a "live object".

You are simply restating that author's misconceptions. Read the
documentation.

I am asking you how to do it. Explain how to create effective object caches using "traditional" collect-the-world GC...
 

Jon Harrop

Chris said:
Reference counting is fragile? I don't think so; it's only as fragile as the programmer using it. There are no silver bullets.

You cannot have your cake and eat it: you said that the programmer must be
aware of the low-level details of their data structures in order to work
around the pathological performance problems inherent with reference
counting. That is a form of fragility. Fragmentation is one specific
example.

With a real GC you just fire and forget. Exceptional circumstances are
extremely rare. Improper use of weak hash tables is not an example of this.
Memory overhead? No, a reference count releases the object when the
reference drops to zero which is more accurate than a traditional GC could
ever be.

That is commonly claimed but actually wrong. Scope can keep reference counts
above zero and values alive unnecessarily when a real GC can collect them
because they are unreferenced even though they are still in scope.
Are you talking about the extra word of space per-object?
Yes.

Did you know that many garbage collectors use per-object meta-data as
well?

None that we use. Which GCs are you referring to?
Performance degradation? Again, I ask you what specific algorithms are you
talking about? It does matter.

I benchmarked all the mainstream reference counters for C++ many years ago.
If you think the situation has improved, perhaps we could benchmark C++
against some GC'd languages for collection-intensive tasks now?
Not incremental? Drill down on that one some more please.

You actually just described the problem: reference counting releases values when their reference count drops to zero. At the end of a scope, many reference counts can be zeroed at the same time and the ensuing collections can stall the program for an unacceptable amount of time. This can also be extremely difficult to work around without simply resorting to a real GC.
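
A small, self-contained example of that worst case, using std::shared_ptr purely as a stand-in for any non-deferred counting scheme: the entire tear-down of a large structure lands synchronously at whatever point the last count happens to reach zero.

// A one-million node chain whose whole destruction cost lands at the point
// where the counts are finally zeroed -- nothing about it is incremental.
#include <memory>
#include <utility>

struct Item {
    std::shared_ptr<Item> next;
};

int main() {
    std::shared_ptr<Item> head;
    for (int i = 0; i < 1000000; ++i) {
        auto node = std::make_shared<Item>();
        node->next = std::move(head);
        head = std::move(node);
    }

    // The "collection" happens here, all at once. Popping iteratively also
    // avoids the deep recursive destructor chain that the default tear-down
    // of such a list would trigger.
    while (head)
        head = std::move(head->next);
    return 0;
}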

We had this exact problem on a 250kLOC C++ product line and eventually fixed
it by rewriting everything from scratch in a more modern language. The
worst case performance is now 5x faster. We wasted a couple of months
trying to fix the problem in C++ before having the revelation that we were
just reinventing a garbage collector and doing a poor job of it at that.
I am asking you how to do it. Explain how to create effective object caches using "traditional" collect-the-world GC...

You use a weak hash table to cache data without keeping it alive.

You do not use a weak hash table to cache data that has no other references
to keep it alive on the assumption that the GC will be inefficient enough
for your cache to work.
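
A minimal sketch of that kind of non-owning cache, using std::weak_ptr and std::shared_ptr as a C++ stand-in for Java's WeakHashMap: the cache can hand back an object that is still alive elsewhere, but it never keeps anything alive by itself.

// Entries are weak references: lookups succeed only while someone else still
// owns the object, and the cache never extends any object's lifetime.
#include <map>
#include <memory>
#include <string>

struct Resource { std::string payload; };

class WeakCache {
    std::map<std::string, std::weak_ptr<Resource>> entries_;
public:
    std::shared_ptr<Resource> find(const std::string& key) {
        auto it = entries_.find(key);
        if (it == entries_.end()) return nullptr;
        return it->second.lock();            // null if already reclaimed
    }
    void put(const std::string& key, const std::shared_ptr<Resource>& r) {
        entries_[key] = r;                   // store without taking ownership
    }
};

The second paragraph is then just the flip side: if nothing else owns the object, a weak entry gives no guarantee it will still be there when you come back for it.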
 

Chris Thomasson

Jon Harrop said:
You cannot have your cake and eat it: you said that the programmer must be
aware of the low-level details of their data structures in order to work
around the pathological performance problems inherent with reference
counting. That is a form of fragility. Fragmentation is one specific
example.

Wrong. You completely misunderstood me. I was talking about the implementers
of refcounting. The implementation of reference counting should try and
avoid making calls into atomic instructions and memory barriers. This is
transparent to the user.



With a real GC you just fire and forget. Exceptional circumstances are
extremely rare. Improper use of weak hash tables is not an example of
this.

Fire and forget? lol. You can get into big trouble if you fire and forget; that is a simple example of VERY poor programming practice. Improper use of weak references is a direct example of the consequences that arise when a naive programmer fires and forgets. There are no silver bullets. GC is a tool, and it has weaknesses.



That is commonly claimed but actually wrong.

Please show me a 100% accurate GC; good luck finding one. Well, let me help you... A proxy GC can indeed be accurate. But it is hardly traditional. In fact, proxy GC was invented on comp.programming.threads by Joe Seigh.



Scope can keep reference counts
above zero and values alive unnecessarily when a real GC can collect them
because they are unreferenced even though they are still in scope.

It will only collect anything when the GC decides to run!! It does not run all the time. If it did, the performance would be so crappy that nobody would ever even think about using it. If you think this is comparable to a proxy GC then you're simply not getting it. Anyway, traditional GC is NOT about performance, it's about helping some programmers that do not know how, or want, to manage their memory manually.




How is that different than GC meta-data? BTW, there are refcounting
algorithms that do not need any per-object meta-data. For instance, vZOOM
keeps this information on a per-thread basis. There is GC meta-data that
keeps track of objects. There has to be. Think about it.



None that we use. Which GCs are you referring to?

Many GC languages use object headers; Java. Or they keep their object meta-data in another place. It does not matter because it all takes up memory.



I benchmarked all the mainstream reference counters for C++ many years
ago.
If you think the situation has improved, perhaps we could benchmark C++
against some GC'd languages for collection-intensive tasks now?

C++ has no GC. Anyway, are you talking about a lock-free reader pattern? What collection-intensive tasks? I know I can compete with, and most likely beat, a traditional GC with handcrafted implementations that C++ gives me the freedom to work with. C++ gives me the flexibility to use custom allocators and I can use architecture-specific optimizations. BTW, how would you construct a proper benchmark? What patterns? IMO, I think that a lock-free reader pattern would be okay.
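
As one concrete instance of the flexibility being claimed here (a generic sketch, not any particular poster's allocator): a simple bump, or arena, allocator that carves objects out of one preallocated block and releases everything with a single free.

// Bump/arena allocation: objects are carved out of one block and released in
// bulk. Illustrative only; a production version would handle growth,
// destructor calls, over-aligned types and thread safety.
#include <cstddef>
#include <cstdlib>
#include <new>
#include <utility>

class Arena {
    char*       base_;
    std::size_t size_;
    std::size_t used_ = 0;
public:
    explicit Arena(std::size_t bytes)
        : base_(static_cast<char*>(std::malloc(bytes))), size_(bytes) {}
    ~Arena() { std::free(base_); }                 // one bulk release

    void* allocate(std::size_t bytes) {
        bytes = (bytes + 15) & ~std::size_t(15);   // keep 16-byte alignment
        if (used_ + bytes > size_) return nullptr; // arena exhausted
        void* p = base_ + used_;
        used_ += bytes;
        return p;
    }

    template <typename T, typename... Args>
    T* create(Args&&... args) {                    // placement-new into the arena
        void* p = allocate(sizeof(T));
        return p ? new (p) T(std::forward<Args>(args)...) : nullptr;
    }
};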



You actually just described the problem: reference counting releases values when their reference count drops to zero.

Yeah. This is NOT true with a traditional GC.



At the end of a scope, many reference counts can be zeroed at the same time and the ensuing collections can stall the program for an unacceptable amount of time. This can also be extremely difficult to work around without simply resorting to a real GC.

Stall? Are you sure you know how Proxy GC works? There is no stall. There is no contrived introspection of thread stacks. There is no collect-the-world. There is no mark-and-sweep. There is no thread suspension. Etc., etc... No stop-the-world... None of that nonsense. Stall! Get real.



We had this exact problem on a 250kLOC C++ product line and eventually fixed it by rewriting everything from scratch in a more modern language. The worst case performance is now 5x faster. We wasted a couple of months trying to fix the problem in C++ before having the revelation that we were just reinventing a garbage collector and doing a poor job of it at that.

Well, from your false comments on reference counting, I would expect you to not be able to beat a traditional garbage collector in any way, shape, or form.



You use a weak hash table to cache data without keeping it alive.
You do not use a weak hash table to cache data that has no other references to keep it alive on the assumption that the GC will be inefficient enough for your cache to work.

Wrong, let me inform you that a cache keeps QUIESCENT objects for further optimized allocations. Are you sure you know what a cache is? A cache keeps objects that have no references around to optimize further allocations. Do you know about the Solaris slab allocator?
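
A minimal sketch of an object cache in that sense (far simpler than the Solaris slab allocator, which also caches constructed object state): objects with no remaining references are parked in a quiescent pool so the next allocation can reuse them instead of going back to the general-purpose heap.

// Released objects sit quiescent in the pool; the next allocation reuses one
// instead of hitting the heap. The caller guarantees no references remain
// when an object is handed back with put().
#include <vector>

template <typename T>
class ObjectCache {
    std::vector<T*> free_;            // quiescent objects, ready for reuse
public:
    T* get() {
        if (!free_.empty()) {
            T* obj = free_.back();
            free_.pop_back();
            return obj;               // fast path: no heap allocation
        }
        return new T();               // cold path
    }
    void put(T* obj) { free_.push_back(obj); }
    ~ObjectCache() { for (T* obj : free_) delete obj; }
};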
 

Chris Thomasson

Chris Thomasson said:
Wrong. You completely misunderstood me. I was talking about the
implementers of refcounting. The implementation of reference counting
should try and avoid making calls into atomic instructions and memory
barriers. This is transparent to the user.
[...]

Perhaps I misunderstood you... I did state that ref-counting is only as
fragile as the programmer using it. Well, guess what, that applies to GC as
well:

http://groups.google.com/group/comp.programming.threads/browse_frm/thread/5e9357a6fb746e5d

Ref-counting and GC have their problems. Are you trying to convince me that
GC is just there and you can fire objects at it and forget? Oh, wait... You
did say exactly that. Remember?
 

Jerry Coffin

[ ... ]
You make it sound as if reference counting is making a come-back. I'll give
you the benefit of the doubt though: can you cite any references indicating
that reference counting can be even vaguely competitive compared to a real
GC?

Paul Wilson's garbage collector survey, section 2.1.

http://www.cs.utexas.edu/ftp/pub/garbage/gcsurvey.ps

Most of what you've said about reference counting is complete nonsense.
For example, you claim that one of the problems with reference counting
is:
. Not incremental, i.e. awful worst case performance.

As the reference above makes clear to anybody capable of reading at all:

One advantage of reference counting is the _incremental_ nature of most of its operation--garbage collection work (updating reference counts) is interleaved closely with the running program's own execution. It can easily be made completely incremental and _real time_; that is, performing at most a small and bounded amount of work per unit of program execution.

He goes on to say:

This is not to say that reference counting is a dead technique. It still has advantages in terms of the immediacy with which it reclaims most garbage, and corresponding beneficial effects on locality of reference; a reference counting system may perform with little degradation when almost all of the heap space is occupied by live objects, while other collectors rely on trading more space for higher efficiency. Reference counts themselves may be valuable in some systems. For example, they may support optimizations in functional garbage collection implementations by allowing destructive modification to uniquely-referenced objects. Distributed garbage collection is often done with reference-counting between nodes of a distributed system, combined with mark-sweep or copying collection within a node. Future systems may find other uses for reference counting, perhaps in hybrid collectors also involving other techniques, or when augmented by specialized hardware.

He mentions two points of particular interest with respect to reference
counting making a "comeback". The difference in speed between CPUs and
main memory seems to be growing, forcing ever-greater dependence on
caching -- which means that the improved locality of reference with
reference counting means more all the time. This also means that the
primary performance cost of reference counting (having to update counts)
now means relatively little as a rule -- in most cases, a few extra
operations inside the CPU mean nearly nothing, while extra references to
main memory mean a great deal.

Likewise, individual computers now correspond much more closely to the distributed systems at the time of the survey. In particular, it's now quite common to see a number of separate OS instances running in virtual machines in a single box.

Your opinions on garbage collection reflect a tremendous enthusiasm, but equally tremendous ignorance of the subject matter.
 

Jon Harrop

Chris said:
Wrong. You completely misunderstood me. I was talking about the
implementers of refcounting. The implementation of reference counting
should try and avoid making calls into atomic instructions and memory
barriers. This is transparent to the user.

A data structure implementor using reference counting must be aware of the
overhead of reference counting and work around it by amortizing reference
counts whenever possible. You yourself only just described this in the
context of performance.
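
One common form of that amortization (an assumption about the intended meaning, but a standard trick): pin a whole structure with a single count for the duration of an operation and walk it through plain pointers, rather than bumping and dropping a count at every node touched.

// One count bump on entry, one drop on exit; the traversal itself uses plain
// references and never touches a counter.
#include <memory>
#include <vector>

struct Node { int value = 0; };

struct Snapshot {
    std::vector<Node> nodes;          // owned as a unit, counted once
};

long sum_all(std::shared_ptr<const Snapshot> snapshot) {  // single increment
    long total = 0;
    for (const Node& n : snapshot->nodes)                 // no per-node counting
        total += n.value;
    return total;
}                                                         // single decrement
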
Please show me a 100% accurate GC; good luck finding one...

That is irrelevant. Your argument in favor of reference counting was
completely fallacious.

You claimed that reference counting is "more accurate than a traditional GC could ever be". Consider:

{
    Bar bar;
    f(bar);
    g();
}

Reference counting will keep "bar" alive until its reference count happens
to be zeroed when it drops out of scope even though it is not reachable
during the call to "g()". Real garbage collectors can collect "bar" during
the call to "g" because it is no longer reachable.

So GCs can clearly collect sooner than reference counters.
It will only collect anything when the GC decides to run!

Which may well be before the scope happens to end.
Anyway, traditional GC is NOT about performance, it's about helping some programmers that do not know how, or want, to manage their memory manually.

By the same logic: C++ is for programmers who don't know, or want, to write
assembler manually.
How is that different than GC meta-data?

Reference counts consume a lot more space.
BTW, there are refcounting
algorithms that do not need any per-object meta-data. For instance, vZOOM
keeps this information on a per-thread basis. There is GC meta-data that
keeps track of objects. There has to be. Think about it.

Yes. For example, OCaml hides two bits inside each pointer totalling zero
overhead. In contrast, a reference counting system is likely to add a
machine word, bloating each and every value by 8 bytes unnecessarily on
modern hardware.
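
The space point can be made concrete with a small check (illustrative only; exact sizes depend on the compiler and ABI): adding an intrusive count to a small value costs at least a machine word per object, plus any padding.

// Typical output on a 64-bit ABI: plain 16 bytes, counted 24 bytes.
#include <atomic>
#include <cstdio>

struct Plain   { double x, y; };
struct Counted { std::atomic<long> refs{0}; double x, y; };

int main() {
    std::printf("plain:   %zu bytes\n", sizeof(Plain));
    std::printf("counted: %zu bytes\n", sizeof(Counted));
    return 0;
}
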
Many GC languages use object headers; Java.

Just Java?
C++ has no GC. Anyway, are you talking about a lock-free reader patterns?

Let's do a single-threaded benchmark first.
What collection-intensive tasks?

Symbolic rewriting would make a good benchmark. Here is one:

http://www.lambdassociates.org/studies/study10.htm

Try implementing this using reference counting and we can see how it
compares (probably on a trickier rewrite). Replace the arbitrary-precision
ints with doubles.
I know I can compete, and most likely beat, a traditional GC with
handcrafted implementations that C++ give me the freedom to work with.

I admire your faith but I would like to see some evidence to back up such
claims because they run contrary to common wisdom accumulated over the past
few decades.
Yeah. This is NOT true with a traditional GC.

Absolutely. GCs do not have to wait that long: they can collect sooner.
Stall? Are you sure you know how Proxy GC works? There is no stall. There is no contrived introspection of thread stacks. There is no collect-the-world. There is no mark-and-sweep. There is no thread suspension. Etc., etc...

You've just made a series of irrelevant references here.
Well, from your false comments on reference counting, I would expect you to not be able to beat a traditional garbage collector in any way, shape, or form.

Of course. I fully expect you to fail on the above benchmark but am willing
to give you the benefit of the doubt.
Wrong, let me inform you that a cache keeps QUIESCENT objects for further
optimized allocations.

That would be a specific kind of cache. Caches do not inherently have
anything to do with allocation.
Are you sure you know what a cache is? A cache keeps objects that have no references around to optimize further allocations.

Apparently what you wanted to ask me was how I would implement an allocation
cache in a garbage collected language.

The answer depends upon the language. If you look at OCaml, for example,
there is no need to because the run-time has already automated this.
Do you know about the Solaris slab allocator?

No.
 

Jon Harrop

Chris said:
Perhaps I misunderstood you... I did state that ref-counting is only as
fragile as the programmer using it. Well, guess what, that applies to GC
as well:

http://groups.google.com/group/comp.programming.threads/browse_frm/thread/5e9357a6fb746e5d

Ref-counting and GC have their problems. Are you trying to convince me
that GC is just there and you can fire objects at it and forget? Oh,
wait... You did say exactly that. Remember?

We already discussed that exact reference and it has nothing to do with
garbage collection. Read the comments left on his blog post.
 

Jon Harrop

Jerry said:
Paul Wilson's garbage collector survey, section 2.1.

http://www.cs.utexas.edu/ftp/pub/garbage/gcsurvey.ps

That is 16 years out of date.
Most of what you've said about reference counting is complete nonsense.

Then why is your best counterexample ancient history?
For example, you claim that one of the problems with reference counting
is:


As the reference above makes clear to anybody capable of reading at all:

One advantage of reference counting is the _incremental_ nature of most of its operation--garbage collection work (updating reference counts) is interleaved closely with the running program's own execution. It can easily be made completely incremental and _real time_; that is, performing at most a small and bounded amount of work per unit of program execution.

He goes on to say:

This is not to say that reference counting is a dead technique. It still has advantages in terms of the immediacy with which it reclaims most garbage, and corresponding beneficial effects on locality of reference; a reference counting system may perform with little degradation when almost all of the heap space is occupied by live objects, while other collectors rely on trading more space for higher efficiency. Reference counts themselves may be valuable in some systems. For example, they may support optimizations in functional garbage collection implementations by allowing destructive modification to uniquely-referenced objects. Distributed garbage collection is often done with reference-counting between nodes of a distributed system, combined with mark-sweep or copying collection within a node. Future systems may find other uses for reference counting, perhaps in hybrid collectors also involving other techniques, or when augmented by specialized hardware.

And sixteen years on his prophecy has not come true: we do not have
dedicated hardware for reference counting and none of the major GCs are
based upon reference counting.

Python and Mathematica are reference counted but both are slow and suffer
from stalls due to lack of incremental collection.
He mentions two points of particular interest with respect to reference
counting making a "comeback". The difference in speed between CPUs and
main memory seems to be growing, forcing ever-greater dependence on
caching --
Yes.

which means that the improved locality of reference with
reference counting means more all the time. This also means that the
primary performance cost of reference counting (having to update counts)
now means relatively little as a rule -- in most cases, a few extra
operations inside the CPU mean nearly nothing, while extra references to
main memory mean a great deal.

No. Locality is worse with reference counting which is why it has gotten
relatively slower since that ancient survey. You can easily test this
yourself.
Likewise, individual computers now correspond much more closely to the distributed systems at the time of the survey. In particular, it's now quite common to see a number of separate OS instances running in virtual machines in a single box.

Your opinions on garbage collection reflect a tremendous enthusiasm, but equally tremendous ignorance of the subject matter.

Yet you cannot cite a single relevant reference or piece of objective
evidence to support an argument that flies in the face of almost all modern
software development (outside restricted memory environments).
 

Dmitriy V'jukov

That is irrelevant. Your argument in favor of reference counting was
completely fallacious.

You claimed that reference counting is "more accurate than a traditional GC could ever be". Consider:

{
    Bar bar;
    f(bar);
    g();
}

Reference counting will keep "bar" alive until its reference count happens
to be zeroed when it drops out of scope even though it is not reachable
during the call to "g()". Real garbage collectors can collect "bar" during
the call to "g" because it is no longer reachable.


Can Real garbage collectors do this w/o help of the compiler? I'm not
sure.
With compiler support reference-counting can easily collect bar
precisely at the end of f(). Don't you think so?


Dmitriy V'jukov
 

Jon Harrop

Dmitriy said:
Can Real garbage collectors do this w/o help of the compiler?

Absolutely, yes. I just checked this with a couple of GC'd languages and
they both collected "bar" before the end of its scope, i.e. sooner than
reference counting can.

The GC is totally unaware of scope (i.e. how the programmer happened to lay
out the code) and is only aware of the global roots and current dependency
graph in the heap while the program runs. This is typically traversed in
slices of major collection that occur at allocations and when functions
return, many of which may occur during the call to "g()" and any of which
are likely to collect "bar" and anything therein that is also unreferenced.
With compiler support reference-counting can easily collect bar
precisely at the end of f(). Don't you think so?

In general, that is not solvable because the complete dependency information
is only ever available at run time. So even in theory it is only possible
to trim down value lifetimes and never achieve optimality using only static
analysis. In other words, the compiler could do better but never as well as
run time analysis like a real GC.

In practice, the nearest working solutions I am aware of are based
upon "regioning": the MLKit SML compiler and Stalin Scheme compiler.
However, once you have used regioning to statically determine object
lifetimes, reference counts are obsolete. Consequently, these approaches
make no use of reference counting.
 

Jon Harrop

Barry said:
Of course he can't. That's because he's living in the world of 16 years
and more ago.

I wouldn't waste so much time with Jerry, Chris and others of their ilk.
They're so steeped in obsolete experience that they're blind even to the
present, much less the future. Even Einstein never accepted quantum
theory.

Agreed. I hadn't realised this until I Googled and discovered that they
constitute a small clique of intraciting kooks on usenet. :)
 

Barry Kelly

Jon said:
Yet you cannot cite a single relevant reference or piece of objective
evidence to support an argument that flies in the face of almost all modern
software development (outside restricted memory environments).

Of course he can't. That's because he's living in the world of 16 years
and more ago.

I wouldn't waste so much time with Jerry, Chris and others of their ilk.
They're so steeped in obsolete experience that they're blind even to the
present, much less the future. Even Einstein never accepted quantum
theory.

"All truth goes through three stages. First it is ridiculed; then it is
violently opposed; finally it is accepted as self-evident."
-- Schopenhauer

PS: the old physicists eventually die; and software is a field that's
kindest to the young.

-- Barry
 

Ian Collins

Barry said:
Of course he can't. That's because he's living in the world of 16 years
and more ago.

I wouldn't waste so much time with Jerry, Chris and others of their ilk.
They're so steeped in obsolete experience that they're blind even to the
present, much less the future.

Have you ever read Chris' posts on c.p.t? If not, you should.
 

Ian Collins

Jon said:
Absolutely, yes. I just checked this with a couple of GC'd languages and
they both collected "bar" before the end of its scope, i.e. sooner than
reference counting can.

The GC is totally unaware of scope (i.e. how the programmer happened to lay
out the code) and is only aware of the global roots and current dependency
graph in the heap while the program runs. This is typically traversed in
slices of major collection that occur at allocations and when functions
return, many of which may occur during the call to "g()" and any of which
are likely to collect "bar" and anything therein that is also unreferenced.
But surely that requires coupling between the compiler and the GC? The former has to provide the data used by the latter.
 
