Strange C developments


Tim Rentsch

Quentin Pope said:
On 20/07/12 23:14, Jase Schick wrote:
The lcc-win compiler offers a garbage collector (Boehm's) in its
standard distribution. It is a very useful feature, used for instance in
the debugger of lcc-win, in the IDE and several other applications. Of
course it is used by many of the people that have downloaded lcc-win
(more than 1 million)

[snip]

And what is the gain? With careful programming, there is no need
whatsoever for this stupid overhead.

Actual measurements show otherwise.
 

Tim Rentsch

Malcolm McLean said:
Quentin Pope:
Often the bottom couple of bits of pointers to memory with known
alignment properties will be used to store information (the pointer then
being ANDed with ~0x3ul or similar prior to dereferencing).

Many code protection methods rely on storing pointers xored with an
obfuscating mask. GCs are not sophisticated enough to track such pointers.
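
(As an aside, and purely for illustration: the low-bit tagging and XOR masking described above might look like the hypothetical macros below; they are not taken from any of the quoted code.)

#include <stdint.h>

/* Hypothetical low-bit tagging: assumes the pointed-to objects are at
   least 4-byte aligned, so the bottom two bits of the pointer are spare. */
#define TAG_MASK      0x3ul
#define SET_TAG(p, t) ((void *)((uintptr_t)(p) | ((t) & TAG_MASK)))
#define GET_TAG(p)    ((unsigned)((uintptr_t)(p) & TAG_MASK))
#define STRIP_TAG(p)  ((void *)((uintptr_t)(p) & ~(uintptr_t)TAG_MASK))

/* Hypothetical XOR obfuscation: a conservative collector scanning memory
   will not recognise the masked value as a pointer into the block. */
#define MASK_PTR(p, key)   ((uintptr_t)(p) ^ (uintptr_t)(key))
#define UNMASK_PTR(v, key) ((void *)((v) ^ (uintptr_t)(key)))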

[snip] most garbage collectors are unacceptably inefficient for high
performance routines. [snip]

Folklore. Measurements of real programs using, eg, the Boehm collector,
show macro-scale performance similar to, or better than, the same
program using manual rather than automatic reclamation. Micro-scale
"efficiencies" don't always translate into improvements in macro-scale
performance, and often quite the contrary. Furthermore the use of
automatic reclamation (aka "GC") doesn't preclude the use of special
purpose allocators for selected critical code paths, which can be
accommodated easily without having to circumvent the larger GC
framework.
 

Ben Bacarisse

Nobody said:
What would be the benefit in writing a pointer to a file, or to making
that invalid?

To making it invalid. That seems to be the suggestion that Malcolm is
making.
Writing a pointer to a file may be useful for virtual memory systems such
as that used by the Win16 API.

Forbidding writing pointers to files would eliminate one possible
mechanism whereby "transparent" GC would fail.

Right, but it's just one way, and not (in my experience at least) the
most common.

I can see where I went wrong though. I should have said "is the benefit
really worth it?" because I can see a minuscule benefit, just not one
that could drive a change to the language.
 

Stefan Ram

Tim Rentsch said:
Folklore. Measurements of real programs using, eg, the Boehm collector,
show macro-scale performance similar to, or better than, the same
program using manual rather than automatic reclamation. Micro-scale

When one has nested lifetimes of objects, life is simple: one can
use automatic storage. The problems begin when lifetimes are dynamic,
which usually means that to keep track of the objects at all, one
will build some kind of graph. When that graph is a tree and the
lifetimes of subtrees do not exceed the lifetime of their supertrees,
subtrees can be freed whenever their supertree is freed, which often
is still quite simple. But when the graph has a more general form,
one usually has to implement some form of reference counting or
marking scheme, which effectively is a garbage collector (Greenspun's
tenth rule).
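
(A minimal reference-counting sketch in C of the kind of hand-rolled scheme meant here; the names are invented, error handling is omitted, and, like any pure reference counting, it cannot reclaim cycles, which is where marking schemes come in.)

#include <stdlib.h>

typedef struct Node {
    int refcount;            /* number of owners still pointing at us */
    struct Node *edges[4];   /* out-edges in the object graph, NULL if unused */
    /* ... payload ... */
} Node;

static Node *node_new(void)
{
    Node *n = calloc(1, sizeof *n);   /* zeroed edges, then one owner */
    n->refcount = 1;
    return n;
}

static Node *node_retain(Node *n) { if (n) n->refcount++; return n; }

static void node_release(Node *n)
{
    if (n && --n->refcount == 0) {
        for (int i = 0; i < 4; i++)
            node_release(n->edges[i]);   /* drop the references we hold */
        free(n);
    }
}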

Some notes I have collected on the subject (repost):

»There were two versions of it, one in Lisp and one in
C++. The display subsystem of the Lisp version was faster.
There were various reasons, but an important one was GC:
the C++ code copied a lot of buffers because they got
passed around in fairly complex ways, so it could be quite
difficult to know when one could be deallocated. To avoid
that problem, the C++ programmers just copied. The Lisp
was GCed, so the Lisp programmers never had to worry about
it; they just passed the buffers around, which reduced
both memory use and CPU cycles spent copying.«

<[email protected]>

»A lot of us thought in the 1990s that the big battle would
be between procedural and object oriented programming, and
we thought that object oriented programming would provide
a big boost in programmer productivity. I thought that,
too. Some people still think that. It turns out we were
wrong. Object oriented programming is handy dandy, but
it's not really the productivity booster that was
promised. The real significant productivity advance we've
had in programming has been from languages which manage
memory for you automatically.«

http://www.joelonsoftware.com/articles/APIWar.html

»[A]llocation in modern JVMs is far faster than the best
performing malloc implementations. The common code path
for new Object() in HotSpot 1.4.2 and later is
approximately 10 machine instructions (data provided by
Sun; see Resources), whereas the best performing malloc
implementations in C require on average between 60 and 100
instructions per call (Detlefs, et. al.; see Resources).
And allocation performance is not a trivial component of
overall performance -- benchmarks show that many
real-world C and C++ programs, such as Perl and
Ghostscript, spend 20 to 30 percent of their total
execution time in malloc and free -- far more than the
allocation and garbage collection overhead of a healthy
Java application (Zorn; see Resources).«

http://www-128.ibm.com/developerworks/java/library/j-jtp09275.html?ca=dgr-jw22JavaUrbanLegends

»Perhaps the most important realisation I had while developing
this critique is that high level languages are more important
to programming than object-orientation. That is, languages
which have the attribute that they remove the burden of
bookkeeping from the programmer to enhance maintainability and
flexibility are more significant than languages which just
add object-oriented features. While C++ adds object-orientation
to C, it fails in the more important attribute of being high
level. This greatly diminishes any benefits of the
object-oriented paradigm.«

http://burks.brighton.ac.uk/burks/pcinfo/progdocs/cppcrit/index005.htm

»The garbage collector in contemporary JVMs doesn't touch most
garbage at all. In the most common collection scenario, the JVM
figures out what objects are live and deals with them exclusively
-- and most objects die young. So by the time they get to garbage
collection, most objects that have been allocated since the last
garbage collection are already dead. The garbage collector avoids
a lot of work it would have to do if it were doing it one piece
at a time. Similarly, the JVM can optimize away many object
allocations.«

http://java.sun.com/developer/technicalArticles/Interviews/goetz_qa.html
 

Malcolm McLean

On Saturday, 21 July 2012 23:31:16 UTC+1, Tim Rentsch wrote:
Malcolm McLean <[email protected]> writes:

[snip] most garbage collectors are unacceptably inefficient for high
performance routines. [snip]

Folklore. Measurements of real programs using, eg, the Boehm collector,
show macro-scale performance similar to, or better than, the same
program using manual rather than automatic reclamation.
I'm sceptical about this. Most "real" programs consist of logically often complex but in run-time terms fairly light layers of program-specific code, which calls library routines to do the processor-intensive work. If you just measure the program-specific code, then you'll find that memory management isn't a big factor in the overall performance. But the reusable, processor-intensive components are themselves mainly written in C.

I don't usually write programs like that, however. Normally my programs spend most of their time in routines written by me, doing heavy processing.
 

Melzzzzz

Folklore. Measurements of real programs using, eg, the Boehm
collector, show macro-scale performance similar to, or better than,
the same program using manual rather than automatic reclamation.
Micro-scale

When one has nested lifetimes of objects, life is simple: one can
use automatic storage. The problems begin when lifetimes are
dynamic, which usually means that to keep track of the objects at
all, one will build some kind of graph. When that graph is a tree and
the lifetimes of subtrees do not exceed the lifetime of their
supertrees, subtrees can be freed whenever their supertree is freed,
which often is still quite simple. But when the graph has a more
general form, one usually has to implement some form of reference
counting or marking scheme, which effectively is a garbage collector
(Greenspun's tenth rule).

Some notes I have collected on the subject (repost):

»There were two versions of it, one in Lisp and one in
C++. The display subsystem of the Lisp version was faster.
There were various reasons, but an important one was GC:
the C++ code copied a lot of buffers because they got
passed around in fairly complex ways, so it could be quite
difficult to know when one could be deallocated. To avoid
that problem, the C++ programmers just copied. The Lisp
was GCed, so the Lisp programmers never had to worry about
it; they just passed the buffers around, which reduced
both memory use and CPU cycles spent copying.«

This is true only if the C++ programmers didn't use smart pointers.
With a compacting garbage collector, the copying is done under the hood.
<[email protected]>

»A lot of us thought in the 1990s that the big battle would
be between procedural and object oriented programming, and
we thought that object oriented programming would provide
a big boost in programmer productivity. I thought that,
too. Some people still think that. It turns out we were
wrong. Object oriented programming is handy dandy, but
it's not really the productivity booster that was
promised. The real significant productivity advance we've
had in programming has been from languages which manage
memory for you automatically.«

Why? Programmers don't have to write free(p), while close(fd) is
still needed ;)
http://www.joelonsoftware.com/articles/APIWar.html

»[A]llocation in modern JVMs is far faster than the best
performing malloc implementations. The common code path
for new Object() in HotSpot 1.4.2 and later is
approximately 10 machine instructions (data provided by
Sun; see Resources), whereas the best performing malloc
implementations in C require on average between 60 and 100
instructions per call (Detlefs, et. al.; see Resources).
And allocation performance is not a trivial component of
overall performance -- benchmarks show that many
real-world C and C++ programs, such as Perl and
Ghostscript, spend 20 to 30 percent of their total
execution time in malloc and free -- far more than the
allocation and garbage collection overhead of a healthy
Java application (Zorn; see Resources).«

This is not true. While the performance of allocation with a GC
is the same as malloc (or faster), the performance of cleaning up
objects is where GC falls down...
The problem is that while free(p) is pretty fast and doesn't require
anything special, a GC needs to stop the whole program, scan memory
for references, perform the cleanup, and after that update all
references to blocks that were moved.
It is not that bad with a single process, but it is actually
faster to have multiple processes than multiple threads,
since a per-process GC is faster than a single GC serving multiple
threads...
There is also the problem of memory thrashing, since the GC has
to scan memory...
http://www-128.ibm.com/developerworks/java/library/j-jtp09275.html?ca=dgr-jw22JavaUrbanLegends

»Perhaps the most important realisation I had while developing
this critique is that high level languages are more important
to programming than object-orientation. That is, languages
which have the attribute that they remove the burden of
bookkeeping from the programmer to enhance maintainability and
flexibility are more significant than languages which just
add object-oriented features. While C++ adds object-orientation
to C, it fails in the more important attribute of being high
level. This greatly diminishes any benefits of the
object-oriented paradigm.«

http://burks.brighton.ac.uk/burks/pcinfo/progdocs/cppcrit/index005.htm

»The garbage collector in contemporary JVMs doesn't touch most
garbage at all. In the most common collection scenario, the JVM
figures out what objects are live and deals with them
exclusively -- and most objects die young. So by the time they get to
garbage collection, most objects that have been allocated since the
last garbage collection are already dead. The garbage collector avoids
a lot of work it would have to do if it were doing it one piece
at a time. Similarly, the JVM can optimize away many object
allocations.«

http://java.sun.com/developer/technicalArticles/Interviews/goetz_qa.html

Performance problems in Confluence, and in rarer circumstances for
JIRA, generally manifest themselves in either:

- frequent or infrequent periods of viciously sluggish responsiveness,
  which either requires a manual restart or from which the application
  eventually and almost inexplicably recovers
- some event or action triggering a non-recoverable memory debt, which
  in turn develops into an application-fatal death spiral (e.g. GC
  overhead collection limit reached, or Out-Of-Memory)
- generally consistent poor overall performance across all Confluence
  actions

https://confluence.atlassian.com/display/DOC/Garbage+Collector+Performance+Issues
 

Tim Rentsch

Tim Rentsch said:
Folklore. Measurements of real programs using, eg, the Boehm collector,
show macro-scale performance similar to, or better than, the same
program using manual rather than automatic reclamation. Micro-scale

[snip]

Some notes I have collected on the subject (repost): [snip]

A nice collection of quotes. Thank you for reposting them.
 

jacob navia

On 22/07/12 07:38, William Ahern wrote:
<snip examples>

Not sure whether it's a composition fallacy, or the fallacy of hasty
generalization, but regardless you can post all the examples in the world
(not that any of those were particularly persuasive) and it's still not
going to make automated GC faster.

In other words, no matter how much evidence is accumulated that you are
wrong, you are not going to believe it, because...

Well, of course: You KNOW you are right!
 

Tim Rentsch

Malcolm McLean said:
Tim Rentsch:
Malcolm McLean <[email protected]> writes:

[snip] most garbage collectors are unacceptably inefficient for high
performance routines. [snip]

Folklore. Measurements of real programs using, eg, the Boehm collector,
show macro-scale performance similar to, or better than, the same
program using manual rather than automatic reclamation.

I'm sceptical about this. Most "real" programs consist of
logically often complex but in run time terms fairly light
layers of program-specific code, which calls library routines
to do the processor-intensive work. If you just measure the
program-specific code, then you'll find that memory management
isn't a big factor in the overall performance. But the
reusable, processor-intensive components are themselves mainly
written in C.

The programs I'm talking about were written entirely in C or
a C-ish language (eg, C++ without templates). Measurements
were done over all executed code, ie, both "program-specific"
code and "library routines" (without drawing any specific
line as to which is which).
I don't usually write programs like that, however. Normally my
programs spend most of their time in routines written by me,
doing heavy processing.

My first HLL was Fortran, and much of my early programming right
after that was assembly language of various kinds. I have the
same predispositions against GC and in favor of manual allocation
as most people of that era, I think.

However, all of the cases I'm aware of where actual measurements
of overall performance were made have found that there is no
penalty, or no significant penalty (within, say, 10% overall --
I don't remember an exact figure), for using GC rather than
manual reclamation. And GC was definitely faster in some cases.

If you have a large program that extensively uses malloc() and
manual reclamation, it would be great to have it converted
to use GC and see what the performance is like. I would love
to have a counter-point to offer to the GC fans. Of course,
if you do run some sort of comparison, it's important to try
to make it a fair comparison -- obviously the results can be
slanted one way or the other by choosing how one program is
transformed into the other. A complete result would include
both an attempt to have a level playing field, and also a
summary of what was done (or not done) to achieve that, as
well as any factors that perhaps should be taken into account
but weren't for one reason or another. The more examples the
better! So if you think you have some good examples sitting
back in your code repository, look over them and see which
ones might be good to try out...
 

Tim Rentsch

William Ahern said:
Tim Rentsch said:
Quentin Pope:


Often the bottom couple of bits of pointers to memory with known
alignment properties will be used to store information (the pointer then
being ANDed with ~0x3ul or similar prior to dereferencing).

Many code protection methods rely on storing pointers xored with an
obfuscating mask. GCs are not sophisticated enough to track such pointers.

[snip] most garbage collectors are unacceptably inefficient for high
performance routines. [snip]
Folklore. Measurements of real programs using, eg, the Boehm collector,
show macro-scale performance similar to, or better than, the same program
using manual rather than automatic reclamation.

What I suspect it showed was that Boehm's internal memory allocator was more
efficient than the systems' malloc/free, not that GC can be more performant
than manual management.

Measurements were of overall program performance, not just the memory
allocators. I don't think comparing just time spent in the allocation
(and deallocation) routines is very meaningful, because that ignores
the cost of keeping track (in the case of manual memory management)
of what needs to be kept alive and what can be reclaimed. Stroustrup
makes a similar point in (I think) one of the C++ books, talking
about comparing general GC to reference-counted "smart" pointers.
(AFAIK he never tried running an actual experiment, just said that
doing a comparison isn't as easy as it might seem.)
I base this interpretation off of a little Googling and the description of
this list creation/destruction example:

When running Listing 1 and similar programs on GNU/Linux, I've seen
the Boehm collector perform 1.69 times faster than malloc/free; on
OpenBSD, more than twice as fast; and on Microsoft Windows NT 4,
more than 13 times faster.

-- http://www.drdobbs.com/the-boehm-collector-for-c-and-c/184401632

The notion that GC can be faster than manual management is significantly
more dubious than the claim that JVMs can be faster than compiled code.
Unlike the JVM claim, I can't think of a single thing a GC could do better
than manual management. [snip]

Simple programs (ie, that don't allocate very much or that have
simple patterns of allocation/deallocation) don't spend much time
in the allocator (ie, in malloc()/free()). These programs probably
won't be helped by GC, but also (unless there is some sort of
horrible mismatch between the GC and the allocation patterns)
probably won't be hurt much by GC either.

Complex programs -- that allocate a lot, with lots of different
patterns of allocation/deallocation, and that build complicated
data structures in allocated memory -- will spend more time in
malloc()/free(), but also will spend a significant amount of
time keeping track of which memory blocks need to be retained
and which can be freed. (Alternatively they can act like
Firefox which never seems to free anything...). As programs
get more complex, the advantage to GC goes up, not because
the low-level routines are faster, but because GC's generally
do a better job of keeping track of what needs to be retained
in the presence of large, heterogeneous collections of allocated
blocks than manually written code does. Once we know what can
be freed, it's probably pretty much the same if the code that
does the freeing is in free() or part of a GC; but figuring
out what can be freed is often faster in a GC than in manually
written code (assuming a complex environment), because the GC
code is written with exactly that focus in mind, whereas the
manually written code is likely to be seen more as "administrative
overhead" that has to be done but isn't really part of the
problem being solved.
Bottom line: GC, like JIT'ing, requires significantly more work. That's a
huge hurdle to overcome, and it shouldn't surprise anyone that it typically
tends to be slower than similarly sophisticated alternatives.

It's important to ask what is being compared. If GC is compared
to just malloc() and free(), that's an apples and oranges comparison,
because it doesn't take into account the code that keeps track of
which blocks are still needed and which can be freed. At some level
the only comparison that accurately reflects the relative behavior
is one entire program (using manual memory management) against
another entire program (using automated memory management).
You could argue that good GC is better than bad manual management, but
memory management isn't exactly a bottleneck for most programs, at least not
in a way that is easily abstracted and optimized away using generalized
algorithms. So not only does GC have more work to do, but there's not much
profit to be had.

On the contrary -- if memory management isn't a bottleneck, then
having automatic management won't have much effect on performance,
but will have a huge effect on productivity. Software engineering
studies on not having to do manual memory management show productivity
gains in the range of 1.6 to 2.0. That's a huge factor to give up
for (what is likely to be) a small drop in performance.
You could argue that GC solves resource leaks--a type of performance
problem--but the vast majority of resource problems I've seen on my projects
are issues with slow GC reclamation and finalizer headaches. Fixing actual
leaks in C code is far more straightforward than turning knobs on a GC. And
while GC technology has improved considerably over the years, it's not like C
tools and environments stopped evolving. GCs and JVMs are perpetually on the
brink of surpassing the old, stodgy techniques. And they'd beat them, too,
if it weren't for those meddling kids improving the competition.

Are you talking about GCs in JVMs here, or are these experiences with
other environments? I'm curious to know more about experiences
other people have had, especially if the context(s) of those
experiences allows some level of comparison against a C or C-like
environment.
 

BartC

It's like JITing. You can show me all the examples in the world where
JITing executes code faster than AoT compiled C or Fortran code. And yet,
on average, it just isn't faster. It just... isn't.

C might well be faster - once you've finally finished and debugged the
application.

Meanwhile your competitor has had his application out for six months,
because he's used a different approach. And he can upgrade his product more
quickly.

GC is just another aid to development. And maybe the speed is not that
relevant. Or maybe you can concentrate your efforts on algorithms rather
than minutiae.
 

Malcolm McLean

On Sunday, 22 July 2012 09:37:10 UTC+1, io_x wrote:
you cannot compare what you control with what you don't control
at all...
if someone wants to be a programmer, he/she has to build his/her own
malloc routines, just to have more
control over his/her own program's memory...
so the way to go is the opposite of using gc() etc...
That's one of the good things about C.

It's pretty standard to write the first version of a C function to allocate a buffer on every pass, then optimise it by allocating the buffer once and reusing it for the life of the program.
Another thing which should be standard but isn't is stack allocation. Very often you need this.

/* assumes <stdlib.h> and <string.h>; compfunc is the qsort comparator */
static int compfunc(const void *a, const void *b)
{
    double p = *(const double *)a, q = *(const double *)b;
    return (p > q) - (p < q);
}

double median(const double *x, int N)
{
    double *temp = malloc(N * sizeof(double));
    double answer;
    memcpy(temp, x, N * sizeof(double));
    qsort(temp, N, sizeof(double), compfunc);
    answer = (N % 2) ? temp[N/2] : (temp[N/2-1] + temp[N/2])/2.0;  /* use the sorted copy */
    free(temp);
    return answer;
}

We can't easily get rid of temp. But it could go on a stack: it doesn't need to be resized, and it has the same lifetime as an auto variable.
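
(For what it's worth, a C99 variable-length array already gives something close to this; a sketch reusing the compfunc above, and ignoring the stack-overflow risk for very large N.)

double median_vla(const double *x, int N)
{
    double temp[N];   /* C99 VLA: stack storage with auto lifetime, no free() */
    memcpy(temp, x, N * sizeof(double));
    qsort(temp, N, sizeof(double), compfunc);
    return (N % 2) ? temp[N/2] : (temp[N/2-1] + temp[N/2]) / 2.0;
}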
 

jacob navia

On 22/07/12 09:28, Gordon Burditt wrote:
So, you make a rule that pointers to GC-memory must be kept in
(a) auto variables, (you may add or exclude "register variables" here)
Registers are obviously scanned for roots, so it is safe to store
pointers in registers
(b) static or global variables, Yes
(c) gc_alloc()ed memory, Yes
(d) function arguments, Yes
or (e) function return values Yes
and they must be in their original binary form. Violating this causes
undefined behavior.
Yes


Those aren't eligible for collection. I might have global root pointers
for linked lists, but the nodes I delete out of the list won't have
anything pointing at it. And if I need to delete the list, I can
assign NULL then.
Yes. In general it is a good idea to do that even if you do not use the
collector. Suppose you do NOT set it to NULL and then (without a
collector) you free() the list. You now have a dangling pointer, which
is much WORSE than a NULL pointer, since many functions test for NULL but
can't test for dangling pointers.

Do I have to set the pointer to NULL if it's stored in an auto
variable (not in main()) and the ptr = NULL; statement would be
immediately followed by a return statement? (Don't look at
stack frames for functions that have already returned.)

No; in general, set pointers to NULL in global variables or in
functions that run for very long periods of time. In short-lived
functions the garbage will become collectible when the function exits
anyway, so skipping the assignment is only a slight optimization.

BUT

Setting a pointer to NULL is such a CHEAP operation (a few nanoseconds)
that ALWAYS setting unused pointers to NULL will never really affect
the performance of your program.
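
(A small sketch of that discipline using Boehm's documented gc.h interface, GC_INIT and GC_MALLOC; the list type here is invented for the example.)

#include <gc.h>    /* Boehm collector: GC_INIT(), GC_MALLOC() */

struct node { struct node *next; int value; };

static struct node *head;    /* global variable: scanned as a root */

static void push(int v)
{
    struct node *n = GC_MALLOC(sizeof *n);   /* collectible block */
    n->value = v;
    n->next  = head;
    head     = n;            /* pointer stored in its original binary form */
}

int main(void)
{
    GC_INIT();
    push(1); push(2); push(3);
    head = NULL;   /* long-lived root cleared: the whole list becomes
                      unreachable and eligible for collection */
    return 0;
}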
 

jacob navia

On 22/07/12 09:08, Gordon Burditt wrote:
How about the return value of a function in code like:
NODE *nptr; /* auto variable */
...
nptr = allocatenewnode(....);
if garbage collection happens to be activated (say, by another
thread) just before the return value of allocatenewnode() is stored
in nptr? Here, allocatenewnode()'s function is to gc_alloc()
something, fill it in with data, and return a pointer to that data.

Nothing bad can ever happen because allocatenewnode() will have
stored the pointer it received from gc_malloc somewhere and the
collector will see it.

Boehm's collector has been running for YEARS now and it is very stable,
those problems were solved long ago.
 

BGB

On 22/07/12 09:08, Gordon Burditt wrote:

Nothing bad can ever happen because allocatenewnode() will have
stored the pointer it received from gc_malloc somewhere and the
collector will see it.

Boehm's collector has been running for YEARS now and it is very stable,
those problems were solved long ago.

yep.

Boehm also scans registers, TLS, ..., in addition to globals and the
stack, ...

raw data also isn't really a big issue either.
 

BGB

the counter-point is simply this: the programmer has to think
about the memory of his/her program...


you cannot compare what you control with what you don't control
at all...
if someone wants to be a programmer, he/she has to build his/her own
malloc routines, just to have more
control over his/her own program's memory...
so the way to go is the opposite of using gc() etc...

yes, but using a GC does not necessarily mean having to give up on
freeing memory as well (or even the use of customized/specialized memory
allocators).

for example, memory may still be manually destroyed when it is known
that it is safe to do so, which can be used as a sort of hint.

it need not be all-or-nothing.
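
(Boehm's collector in fact supports this hybrid style directly: GC_FREE reclaims a block you know is dead without waiting for a collection cycle. A rough sketch, with the function and buffer names made up.)

#include <gc.h>    /* Boehm collector: GC_MALLOC(), GC_FREE() */

void process_request(size_t n)
{
    char *scratch = GC_MALLOC(n);   /* collected eventually if we forget it... */

    /* ... use scratch ... */

    GC_FREE(scratch);   /* ...but we know it is dead here, so hand it back
                           explicitly instead of waiting for the next cycle */
}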
 

Phil Carmody

Tim Rentsch said:
Tim Rentsch said:
Folklore. Measurements of real programs using, eg, the Boehm collector,
show macro-scale performance similar to, or better than, the same
program using manual rather than automatic reclamation. Micro-scale

[snip]

Some notes I have collected on the subject (repost): [snip]

A nice collection of quotes. Thank you for reposting them.

He missed this quote regarding Boehm's GC (relative to malloc/free):

"for programs allocating primarily large objects it will be slower."

Which was said by some guy called 'Boehm'.

Phil
--
I'd argue that there is much evidence for the existence of a God.
Pics or it didn't happen.
-- Tom (/. uid 822)
 

BGB

Are you talking about GCs in JVMs here, or are these experiences with
other environments? I'm curious to know more about experiences
other people have had, especially if the context(s) of those
experiences allows some level of comparison against a C or C-like
environment.

I am having "generally good" success with a custom written GC (yes,
primarily with plain C, and primarily native). (granted, there is some
use of a custom scripting language in the app as well).


technically, I am more using a hybrid strategy, where in cases where I
can figure out that memory is no longer needed, it is manually freed.

this is mostly because with a heap-use in the range of 500MB to 2GB,
running a GC cycle may take several seconds. partly, this is also
because the GC is kind of naive (it is a conservative concurrent
mark/sweep collector, and will block other threads if they attempt to
allocate during a GC cycle).

my applications are generally interactive and soft-real-time (so a GC
pass doesn't usually break stuff, but can still be kind of annoying).

OTOH, if a person were not using GC, they have to hunt-down / find
nearly all memory leaks, rather than just major/obvious ones (IOW: the
ones where the application will sit around spewing garbage).


it would also make development harder, since one would likely also have
to give up on such niceties as dynamic type checking (1), lead to more
complex handling of strings (the current practice is mostly just to
treat strings as value types and let the GC deal with them), ...


1: dynamic type-checking is a feature of my GC, where one can pass
around a pointer to an object, and later ask the GC what type of object
it is (typically, this type name is given when allocating the memory for
an object). I settled mostly on what seemed to be the "generally least
painful" strategy, namely identifying object types via strings
(canonically restricted to C identifier rules). (theoretically, a person
can also use numbers, which, while potentially slightly more efficient,
are considerably more problematic to administer in a decentralized
manner; as well, a person can intern the type-names for faster checking).

in many cases, specialized type-checking predicate functions make use of
optimized checks (though, most of this, and most of the dynamic
type-system in general, is located in a secondary library and built on
top of the GC).

technically, yes, dynamic type checking isn't free, but it makes a lot
of code considerably less effort to write (a person can still primarily
use static types, leaving dynamic type-checking mostly for the cases
where it is more convenient). in some ways, it is vaguely comparable to
using "instanceof" for similar purposes in Java (or "is" in C#).


or such...
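
(Not BGB's actual code, just a hypothetical sketch of the general idea: stash an interned type-name string in a header in front of each allocation, and let a predicate compare against it.)

#include <stdlib.h>
#include <string.h>

typedef struct ObjHeader {
    const char *type_name;   /* interned type name, e.g. "vec3" */
} ObjHeader;                 /* one pointer wide; pad if stricter alignment is needed */

void *dy_alloc(size_t sz, const char *type_name)
{
    ObjHeader *h = malloc(sizeof *h + sz);   /* error handling omitted */
    h->type_name = type_name;    /* caller is assumed to pass an interned string */
    return h + 1;                /* hand out the payload, not the header */
}

const char *dy_typeof(const void *p)
{
    return ((const ObjHeader *)p - 1)->type_name;
}

int dy_is(const void *p, const char *type_name)
{
    /* with interned names a pointer comparison suffices;
       strcmp is the safe, slower fallback */
    return dy_typeof(p) == type_name || strcmp(dy_typeof(p), type_name) == 0;
}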
 

BGB

Tim Rentsch said:
Folklore. Measurements of real programs using, eg, the Boehm collector,
show macro-scale performance similar to, or better than, the same
program using manual rather than automatic reclamation. Micro-scale

[snip]

Some notes I have collected on the subject (repost): [snip]

A nice collection of quotes. Thank you for reposting them.

He missed this quote regarding Boehm's GC (relative to malloc/free):

"for programs allocating primarily large objects it will be slower."

Which was said by some guy called 'Boehm'.

it depends on the app.

in many apps, the vast majority of memory allocations are fairly small
(typically under 200-500 bytes or so), so it makes a lot of sense to
optimize primarily for smaller objects.

however, many malloc implementations seem to be less effective for small
objects (resulting in a large amount of heap-bloat and slowdown).


a more effective allocator may end up needing to use several different
allocation strategies depending on the size of the object, which may
potentially cost some in terms of raw speed in certain cases.

decided to leave out a more detailed example, as it was more related to
my own GC's architecture, and not particularly to Boehm (admittedly, I
haven't really investigated the workings of the Boehm GC in much detail).

but, a simplified/hypothetical example:
1-16 bytes: may use a slab allocator;
17-6143 bytes: uses bitmaps and cells;
6144-262143 bytes: uses a first-fit free-list allocator;
>= 262144 bytes: allocates raw memory regions.
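
(The dispatch itself would be trivial; a hypothetical sketch, with the backends stubbed out via malloc just so it is self-contained.)

#include <stdlib.h>

/* stand-ins for the size-class backends listed above;
   a real collector would implement each one differently */
static void *slab_alloc(size_t sz)       { return malloc(sz); }
static void *cell_alloc(size_t sz)       { return malloc(sz); }
static void *freelist_alloc(size_t sz)   { return malloc(sz); }
static void *raw_region_alloc(size_t sz) { return malloc(sz); }

void *gc_alloc_sized(size_t sz)
{
    if (sz <= 16)      return slab_alloc(sz);
    if (sz <= 6143)    return cell_alloc(sz);
    if (sz <= 262143)  return freelist_alloc(sz);
    return raw_region_alloc(sz);
}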


or such...
 
