[Q] How far can a stack [LIFO] go toward automatic garbage collection and preventing memory leaks?

spinoza1111

There are many different ways to implement a heap.  The above is
not a good one, and I doubt that it's really used anywhere.

Actually, that's the only way to implement a heap in the abstract.
Forest and trees, mate. Mathematically a heap is a block of storage, a
list of free blocks and a list of allocated blocks. All the rest is
detail for the little techies to get wrong, as they normally do. The
confusion
between scientific and technical progress is a mirror of the (far more
serious) confusion between scientific progress and ethical advance.

Sure, when you free a block it is a good idea to see if you can join
it with its neighbors to get the biggest "bang for the buck". This,
again, is a detail relative to the grand plan which gives only techies
a hard-on, because the way scientific progress is confused with
technical progress is, in Foucault's terms, capillary. Part of ethical
regression is the overemphasis on efficiency and metaphors taken from
America's genocidal first use of nuclear weapons.
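
To make the free-list picture concrete, here is a minimal, hypothetical
first-fit allocator in C with coalescing on free. The names (heap_alloc,
heap_free, block) are invented for illustration, and alignment handling
and block splitting are deliberately left out; it is a sketch of the
idea, not anyone's production allocator.

#include <stddef.h>

#define HEAP_SIZE 65536

typedef struct block {
    size_t size;          /* payload size in bytes */
    struct block *next;   /* next free block, kept in address order */
} block;

static unsigned char heap_store[HEAP_SIZE];
static block *free_list;

static void heap_init(void)
{
    free_list = (block *)heap_store;
    free_list->size = HEAP_SIZE - sizeof(block);
    free_list->next = NULL;
}

static void *heap_alloc(size_t n)            /* first fit, no splitting */
{
    for (block **p = &free_list; *p != NULL; p = &(*p)->next) {
        if ((*p)->size >= n) {
            block *b = *p;
            *p = b->next;                    /* unlink from the free list */
            return b + 1;                    /* payload follows the header */
        }
    }
    return NULL;                             /* out of memory */
}

static void heap_free(void *ptr)
{
    block *b = (block *)ptr - 1;
    block **p = &free_list;
    while (*p != NULL && *p < b)             /* keep address order */
        p = &(*p)->next;
    b->next = *p;
    *p = b;
    /* Join with the right-hand neighbour if the two blocks touch. */
    if (b->next != NULL &&
        (unsigned char *)(b + 1) + b->size == (unsigned char *)b->next) {
        b->size += sizeof(block) + b->next->size;
        b->next = b->next->next;
    }
}

A real allocator would also split oversized blocks and coalesce with the
left-hand neighbour, but the shape (one block of storage plus a free
list) is the same.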
Complete in what sense?  The basic principle of how to use it is
simple.  As for how to implement it, there are many different
algorithms that can be used.

Correct, for a change.
Stack allocation is far, far simpler (usually).

And very different.
 
Anton Ertl

John Nagle said:
In the superscalar era, there's not much of an advantage to avoiding
stack accesses.

Apart from 4stack, I am not aware of a superscalar stack machine (and
4stack is more of an LIW than a superscalar).

OTOH, if by stack accesses you mean memory accesses through the stack
pointer on a register machine, then evidence contradicts your claim.
E.g., if we can keep one or two more of Gforth's VM's registers in
real registers rather than on the stack of an IA32 CPU, we see
significant speedups (like a factor of 2).
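
A hypothetical illustration of that point (not Gforth source): in a C
rendition of a VM inner loop, keeping the virtual stack pointer in a
local variable lets the compiler put it in a real register, whereas
reaching through the vm structure in memory costs a load and a store on
every stack access.

typedef struct { long *sp; long *ip; long stack[256]; } vm;

long run(vm *m)
{
    long *sp = m->sp;              /* cache the VM register in a local */
    long *ip = m->ip;
    for (;;) {
        switch (*ip++) {
        case 0: *++sp = *ip++; break;          /* LIT: push next cell   */
        case 1: sp[-1] += sp[0]; sp--; break;  /* +: add top two cells  */
        default: m->sp = sp; return sp[0];     /* BYE: write back, exit */
        }
    }
}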
x86 superscalar machines have many registers not
visible to the program, as the fastest level of cache.

They have a data cache for memory accesses (about 3 cycles load-to-use
latency on current CPUs for these architectures), and they have rename
registers (not visible to programmers) that don't cache memory. They
also have a store buffer with store-to-load forwarding, but that still
has no better load-to-use latency.
In practice,
the top of the stack is usually in CPU registers.

Only if the Forth system is written that way.
The "huge number
of programmer-visible register" machines like SPARCs turned out to be
a dead end.

Really? Architectures with 32 programmer-visible registers like SPARC
(but, unlike SPARC, without register windows) are quite successful in
embedded systems (e.g., MIPS, SPARC).
So did making all the instructions the same width; it
makes the CPU simpler, but not faster, and it bulks up the program
by 2x or so.

In the beginning it also made the CPU faster. As for the bulk, here's
some data from <[email protected]>; it's the
text (code) size of /usr/bin/dpkg in a specific version of the dpkg
package:

.text section sizes in bytes:
98132 dpkg_1.14.12_hurd-i386.deb
230024 dpkg_1.14.12_m68k.deb
249572 dpkg_1.14.12_amd64.deb
254984 dpkg_1.14.12_arm.deb
263596 dpkg_1.14.12_i386.deb
271832 dpkg_1.14.12_s390.deb
277576 dpkg_1.14.12_sparc.deb
295124 dpkg_1.14.12_hppa.deb
320032 dpkg_1.14.12_powerpc.deb
351968 dpkg_1.14.12_alpha.deb
361872 dpkg_1.14.12_mipsel.deb
371584 dpkg_1.14.12_mips.deb
615200 dpkg_1.14.12_ia64.deb

Sticking with the Linux packages (i.e., not the Hurd one), the range
in code size increase over the i386 code is 0.97 (ARM) to 1.41 (MIPS)
for the classical architectures with fixed-size instructions (RISCs).
Only the IA64 has a code size increase by a factor of 2.33. Note that
code size is not everything that's in a program binary, and the rest
should be unaffected by whether the instructions are fixed-size or
variable-sized, so the overall effect on the binary will be smaller.

- anton
 
Keith Thompson

Standish P said:
provide a rigorous proof that people are more interested in the
nauseating nude pictures that you post of your mother in the
newsgroups than in the subject of forth implementation.
[snip]

*plonk*
 
Standish P

The wikipedia page is worthless.  The flounder page has
substantial meat, but the layout and organization is a mess.  A
quick google search didn't turn up much that was general - most
articles are about implementations in specific environments.

I second your assessment.

What we have is the blind leading the blind. "Keith Thompson", A
CORPORATE MINDER with multiple accounts, on a Crusade to limit
discussions of a useful nature on Usenet, must be giving anti-education,
pro-illiteracy corporatists (who did much of their studies and
development on TAX-PAYER MONEY; maybe from now on we should fund
Indian/Chinese/Vietnamese/Russian/Cuban companies that have a tradition
of sharing knowledge from the socialist value system) lots of joy,
because that means more market for their "user-friendly",
"thought-killing" products and high-priced courses.

You will see how consistently she gives short replies that have ZILCH
educational content, compared to the volume of detail they boast of
knowing on their websites.
 
Malcolm McLean

All the rest [how to implement heaps] is
detail for the little techies to get wrong, as they normally do.
That's a fundamental feature of structured programming.

If we maintain the interface malloc(), realloc(), and free(), then we
could have a fairly simple or a fairly complicated scheme, and the
user doesn't care or need to know.

The problem is that a lot of techniques we can use to speed up memory
management, such as allocating from a stack, can't be used with this
interface. Designing good interfaces is hard.
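
Here is a sketch of the kind of thing Malcolm means (all names invented
here): a stack, or region, allocator. It is fast precisely because it
only frees in LIFO order, via mark and release, and that is exactly the
property that cannot be expressed through malloc()/realloc()/free(),
where any block may be freed or resized at any time.

#include <stddef.h>

#define ARENA_SIZE 65536

static unsigned char arena[ARENA_SIZE];
static size_t arena_top;

static void *arena_alloc(size_t n)
{
    n = (n + 7u) & ~(size_t)7;          /* keep 8-byte alignment */
    if (arena_top + n > ARENA_SIZE)
        return NULL;                    /* out of space */
    void *p = &arena[arena_top];
    arena_top += n;
    return p;
}

/* Instead of free(ptr), the caller records a mark and later releases
   everything allocated after that point in one step. */
static size_t arena_mark(void)           { return arena_top; }
static void   arena_release(size_t mark) { arena_top = mark; }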
 
Hugh Aguilar

Would you show me a picture, ASCII art or whatever for Forth? I know
what Lisp lists look like, so I don't need that for comparison. Forth
must have a convention and a standard or preferred practice for its
dicts. However, let me tell you that in PostScript the dictionaries
can be nested inside other dictionaries, and any such hierarchical
structure is a nested associative list, which is what linked lists,
nested dictionaries, and nested tables are.

You can see an example of lists in my novice package (in the list.4th
file):
http://www.forth.org/novice.html
Also in there is symtab, which is a data structure intended to be used
for symbol tables (dictionaries). Almost nobody uses linked lists for
the dictionary anymore (the FIG compilers of the 1970s did, but they
are obsolete).
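
Since the OP asked for a picture, here is a sketch of the classic
linked-list dictionary layout (FIG-Forth style) in C, for illustration
only; it is not code from the novice package or from any particular
Forth system.

#include <string.h>

typedef struct word {
    struct word *link;          /* previously defined word, or NULL */
    const char  *name;          /* a counted string in a real Forth */
    void       (*code)(void);   /* the word's execution behaviour   */
} word;

static word *latest;            /* head of the dictionary chain */

static word *find(const char *name)
{
    for (word *w = latest; w != NULL; w = w->link)
        if (strcmp(w->name, name) == 0)
            return w;           /* the most recent definition wins */
    return NULL;                /* not found */
}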

I must say, I've read through this entire thread and I didn't
understand *anything* that *anybody* was saying (especially the OP). I
really recommend that people spend a lot more time writing code, and a
lot less time with all of this pseudo-intellectual nonsense. This
whole thread (and most of what I see on C.L.F. these days) reminds me
of the "dialectic method" of the early Middle Ages --- a lot of talk
and no substance.

Write some programs! Are we not programmers?
 
Hugh Aguilar

It's quite possible that the criticism is unfair, but don't you think
that in part some responsibility must be borne by your organization in
not doing a good job of education?
...
She is quite humble. Take a look at this page,

http://www.forth.com/resources/evolution/index.html

That is actually pretty humorous; she managed to describe herself as a
"leading expert" twice in a single short paragraph. LOL

See! I do have a sense of humor!
http://groups.google.com/group/comp...read/thread/4c4dba9135bcf03e/8086ee13095bf78c
 
John Passaniti

You can see an example of lists in my novice package (in the list.4th
file): http://www.forth.org/novice.html
Also in there is symtab, which is a data structure intended to be used
for symbol tables (dictionaries). Almost nobody uses linked lists for
the dictionary anymore (the FIG compilers of the 1970s did, but they
are obsolete).

Thanks for pointing this out, Hugh. After reading the code in your
novice package and symtab, I am confused: With code of that caliber
and the obvious stunning intellect behind it, why hasn't everyone
adopted your awesome symtab for symbol tables instead? And why hasn't
there been an effort to translate symtab into other languages so users
outside of Forth can also experience the sheer speed and hyper-
efficient use of memory and CPU? Let me say I find it refreshing that
a great programmer like yourself doesn't bother with stupid fads like
testing algorithms against large data sets and measuring performance
relative to competitive algorithms. That's all academic nonsense.
The only test and measurement anyone needs are the comments at the top
of symtab where you state your algorithm is better. You clearly
wouldn't have written that if it wasn't true.
Write some programs! Are we not programmers?

Amen! All this academic talk is useless. Who cares about things like
the big-O notation for program complexity. Can't people just *look*
at code and see how complex it is?! And take things like the years of
wasted effort computer scientists have put into taking data structures
(like hashes and various kinds of trees) and extending them along
various problem domains and requirements. Real programmers don't
waste their time with learning that junk. What good did any of that
ever do anyone?!

Thanks Hugh for a refreshing stance on what it means to be a
programmer.
 
Standish P

You indicated that you have a copy of Forth Application Techniques.
Sections 8.1 and 8.2 cover this topic, with some drawings.

Can someone send me a scanned copy of sections 8.1 to 8.2, within the
copyright-law exemption, for my personal study and evaluation of the
book only?

I have only looked at the book cover on the Forth site and its table of
contents on Amazon.

Why else would I ask where it is if I had a copy? I would go directly
to the index, assuming it has good indexing.

Alternatively, a link to an openly available explanation would be
appreciated.
 
Standish P

You can see an example of lists in my novice package (in the list.4th
file): http://www.forth.org/novice.html
Also in there is symtab, which is a data structure intended to be used
for symbol tables (dictionaries). Almost nobody uses linked lists for
the dictionary anymore (the FIG compilers of the 1970s did, but they
are obsolete).

I must say, I've read through this entire thread and I didn't
understand *anything* that *anybody* was saying (especially the OP).

You didn't understand anything because no one explained anything
coherently. Admittedly, I am asking a question that would be
thought-provoking to those who claim to be "experts", but these experts
are actually very stingy and mean business people, most certainly worse
than Bill Gates, only his ideas did not occur to them at the right
time.

I really recommend that people spend a lot more time writing code, and a
lot less time with all of this pseudo-intellectual nonsense.

You have to have a concept to write code.
 
David Kastrup

John Passaniti said:
Amen! All this academic talk is useless. Who cares about things like
the big-O notation for program complexity. Can't people just *look*
at code and see how complex it is?! And take things like the years of
wasted effort computer scientists have put into taking data structures
(like hashes and various kinds of trees) and extending them along
various problem domains and requirements. Real programmers don't
waste their time with learning that junk. What good did any of that
ever do anyone?!

It is my experience that in particular graduate (and especially PhD)
computer scientists don't waste their time _applying_ that junk. They
have learnt to analyze it, they could tell you how bad their own
algorithms are (if they actually bothered applying their knowledge), but
it does not occur to them to replace them by better ones. Or even
factor their solutions in a way that the algorithms and data structures
are actually isolated.

I think there must be some programmer gene. It is not enough to be able
to recognize O(n^k) or worse (though it helps having a more exact rather
than a fuzzy notion of them _if_ you have that gene). You have to fear
it. It has to hurt. You need to feel compassion with the CPU. It's
not enough to sit there in your easychair, occasionally sucking on your
pipeline and listen to its story about a hard realtime youth and its
strained connection to its motherboard. When it stops, you have to see
its benchmarks and feel their pain in your own backplane.
 
Alex McDonald

You didn't understand anything because no one explained anything
coherently.

It indicates that you're asking a question that *you don't
understand*.

I'm continually amazed that people come to Usenet, wikis, websites and
other fora and ask questions that even the most basic of research (and
a bit of care with terminology aka "using the right words") would show
to be confused. A quick scan of the available literature on garbage
collection and stacks, starting with the fundamentals, would surely
show you what you need to know.
Admittedly, I am asking a question that would be thought-provoking to
those who claim to be "experts", but these experts are actually very
stingy and mean business people, most certainly worse than Bill Gates,
only his ideas did not occur to them at the right time.

What surprises me is that anyone bothered to answer, as your question
was neither "thought provoking" nor in need of attention from an
expert. Their generosity in the face of so much stupidity stands out
as remarkable.
 
Hugh Aguilar

What surprises me is that anyone bothered to answer, as your question
was neither "thought provoking" nor in need of attention from an
expert. Their generosity in the face of so much stupidity stands out
as remarkable.

I wouldn't call the OP "stupid," which is just mean-spirited. That is
not much of a welcome wagon for somebody who might learn Forth
eventually and join our rather diminished ranks. Let's go with "over-
educated" instead! I thought that his question was vague. It seemed
like the kind of question that students pose to their professor in
class to impress him with their thoughtfulness, so that he'll forget
that they never did get any of their homework-assignment programs to
actually work. I yet maintain that writing programs is what
programming is all about.

I see a lot of pseudo-intellectual blather on comp.lang.forth. The
following is a pretty good example, in which Alex mixes big pseudo-
intellectual words such as "scintilla" with gutter language such as
"turd" in an ungrammatical mish-mash --- and defends the overuse of
the return stack for holding temporary data as being readable(?!):
http://groups.google.com/group/comp...read/thread/4b9f67406c6852dd/0218831f02564410
 
Alex McDonald

I wouldn't call the OP "stupid," which is just mean-spirited.

Perhaps I'm just getting less forgiving the older I get, or the more I
read here. The internet is a fine resource for research, and tools
like google, archivx and so on are easy to access and take but a
little effort to use.
That is
not much of a welcome wagon for somebody who might learn Forth
eventually and join our rather diminished ranks.

I care neither to be included in your "diminished ranks", nor do I
take much regard of popularity as you define it. Standish P doesn't
want to join anything; he (like you) has an agenda for yet another
club with a membership of one.
Let's go with "over-
educated" instead! I thought that his question was vague. It seemed
like the kind of question that students pose to their professor in
class to impress him with their thoughtfulness, so that he'll forget
that they never did get any of their homework-assignment programs to
actually work.

It didn't work. He hasn't done any homework, and neither have you, and
it shows.
I yet maintain that writing programs is what
programming is all about.

You remind me of those that would build a house without an architect,
or fly without bothering to study the weather.
I see a lot of pseudo-intellectual blather on comp.lang.forth. The
following is a pretty good example, in which Alex mixes big pseudo-
intellectual words such as "scintilla"

"Scintilla" gets about 2,080,000 results on google; "blather" gets
about 876,000 results. O Hugh, you pseudo-intellectual you!
with gutter language such as
"turd"

About 5,910,000 results. It has a long history, even getting a mention
in Wyclif's 14th-century Bible.
in an ungrammatical mish-mash --- and defends the overuse of
the return stack for holding temporary data as being readable(?!):


I did? Where? You're making stuff up. Again.


Something you never did address, probably because the statement you
made is just another symptom of Aguilar's Disease; presenting as fact
an opinion based on personal experience, limited observation and no
research.
 
Edmund H. Ramm

Hugh Aguilar said:
[...]
I really recommend that people spend a lot more time writing code,
and a lot less time with all of this pseudo-intellectual nonsense.
[...]

I energetically second that!
 
John Bokma

David Kastrup said:
It is my experience that in particular graduate (and especially PhD)
computer scientists don't waste their time _applying_ that junk.

Question: do you have a degree in computer science?

Since in my experience, people who talk about their experience with
graduates have often missed the boat themselves and think that reading
a book or two equals years of study.

Oh, and rest assured, it works both ways: people who did graduate now
and then think it's the holy grail and that nobody can beat it with
home study.

Both are wrong, by the way.
 
Brad

I think there must be some programmer gene.  It is not enough to be able
to recognize O(n^k) or worse (though it helps having a more exact rather
than a fuzzy notion of them _if_ you have that gene).  

Some of the best minds in comp.lang.forth have a penchant for sarcasm
- one of the reasons I always read their posts. Maybe it gets lost on
the international crowd, but I love it.

-Brad
 
Steven D'Aprano

Oh, I am sooooo going to regret getting sucked into this tarpit... oh
well.


The
following is a pretty good example, in which Alex mixes big pseudo-
intellectual words such as "scintilla" with gutter language such as
"turd" in an ungrammatical mish-mash

You say that like it's a bad thing.

Besides, scintilla isn't a "big pseudo-intellectual" word. It might seem
so to those whose vocabulary (that's another big word, like "patronizing"
and "fatuousness") is lacking, but it's really quite a simple word. It
means "a spark", hence "scintillating", as in "he thinks he's quite the
scintillating wit, and he's half right". It also means "an iota, a
smidgen, a scarcely detectable amount", and if anyone can't see the
connection between a spark and a smidgen, there's probably no hope for
them.

Nothing intellectual about it, let alone pseudo-intellectual, except that
it comes from Latin. But then so do well more than half the words in the
English language.

Anyway, I'm looking forward to hearing why overuse of the return stack is a
big reason why people use GCC rather than Forth. (Why GCC? What about
other C compilers?) Me, in my ignorance, I thought it was because C was
invented and popularised by the same universities which went on to teach
it to millions of programmers, and is firmly in the popular and familiar
Algol family of languages, while Forth barely made any impression on
those universities, and looks like line-noise and reads like Yoda. (And
I'm saying that as somebody who *likes* Forth and wishes he had more use
for it.) In my experience, the average C programmer wouldn't recognise a
return stack if it poked him in the eye.
 
David Kastrup

John Bokma said:
Question: do you have a degree in computer science?

Since in my experience, people who talk about their experience with
graduates have often missed the boat themselves and think that reading
a book or two equals years of study.

I have a degree in electrical engineering. But that's similarly
irrelevant. I have a rather thorough background with computers (started
with punched cards), get along with about a dozen assembly languages and
quite a few other higher level languages. I've had to write the BIOS
for my first computer and a number of other things, and did digital
picture enhancement on DOS computers with EMM (programming 80387
assembly language and using a variant of Hartley transforms).

I have rewritten digital map processing code from scratch that has been
designed and optimized by graduated computer scientists (including one
PhD) to a degree where it ran twice as fast as originally, at the cost
of occasional crashes and utter unmaintainability. Twice as fast
meaning somewhat less than a day of calculation time for medium-size
data sets (a few hundred thousand data points, on something like a
25 MHz 68020). So I knew the problem was not likely to be easy. Took
me more than a week. After getting the thing to compile and fixing the
first few crashing conditions, I got stuck in debugging. The thing just
terminated after about 2 minutes of runtime without an apparent reason.
I spent almost two more days trying to find the problem before bothering
to even check the output. The program just finished regularly.

That has not particularly helped my respect towards CS majors and PhDs
in the function of programmers (and to be honest: their education is not
intended to make them good programmers, but to enable them to _lead_
good programmers).

That does not mean that I am incapable of analyzing, say, quicksort and
mergesort, and coming up with something reasonably close to a closed form
for average, min, and max comparisons (well, unless a close
approximation is good enough, you have to sum about lg n terms which is
near instantaneous, with a real closed form mostly available when n is
special, like a power of 2). And I know how to work with more modern
computer plagues, like the need for cache coherency.
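
As one concrete instance of summing about lg n terms (a standard
textbook case, nothing to do with the map-processing code above): the
worst-case comparison count of mergesort. Merging two runs of n/2
elements costs at most n - 1 comparisons, so, in LaTeX notation,

    W(1) = 0, \qquad W(n) = 2\,W(n/2) + (n - 1),

and unfolding the recurrence for n = 2^k sums \lg n terms:

    W(n) = \sum_{i=1}^{\lg n} \bigl( n - 2^{\,i-1} \bigr)
         = n \lg n - (n - 1).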

So in short, I have a somewhat related scientific education, but I can
work the required math. And I can work the computers.
Oh, and rest assured, it works both ways: people who did graduate now
and then think it's the holy grail and that nobody can beat it with
home study.

Both are wrong, by the way.

Depends. In my personal opinion, living close to the iron and being
sharp enough can make a lot of a difference.

Donald Knuth never studied computer science. He more or less founded
it. As a programmer, he is too much artist and too little engineer for
my taste: you can't take his proverbial masterpiece "TeX" apart without
the pieces crumbling. He won't write inefficient programs: he has the
respective gene and the knowledge to apply it. But the stuff he wrote
is not well maintainable and reusable. Of course, he has no need for
reuse if he can rewrite as fast as applying an interface.
 
John Bokma

David Kastrup said:
I have a degree in electrical engineering. But that's similarly
irrelevant.

Nah, it's not: your attitude towards people with a degree in computer
science agrees with what I wrote.
That has not particularly helped my respect towards CS majors and PhDs
in the function of programmers (and to be honest: their education is not
intended to make them good programmers, but to enable them to _lead_
good programmers).

I disagree.
That does not mean that I am incapable of analyzing, say quicksort and
mergesort,

Oh, that's not what I was implying. I am convinced that quite some
people who do self-study can end up with a better understanding of
things than people who do it for a degree. I have done both: I was
already programming in several languages before I studied CS. And my
experience is that home study can't compare to a formal study in CS
unless you're really good and have the time and drive to read formal
books written on CS. And my experience is that most self-educators
don't have that time.

On the other hand: some people I knew during my studies had no problem
at all with introducing countless memory leaks in small programs (and
turning off compiler warnings, because it gave so much noise...)
Donald Knuth never studied computer science.

Yes, yes, and Albert Einstein worked in a patent office.

Those people are very rare.

But my experience (see for plenty of examples: Slashdot) is that quite
some people who don't have a degree think that all that formal education
is just some paper pushing and doesn't count, while some of those who do
have the paper think they know it all. Those people who are right in
either group are a minority in my experience.

As for electrical engineering: I've done that (BSc), and one of my
classmates managed to connect a transformer the wrong way around...
twice. Yet he
had the highest mark in our class.

So in short: yes, self-study can make you good at something. But
self-study IMO is not in general a replacement for a degree. Someone who
can become great after self-study would excel at a formal study and
learn more. Study works best if there is competition and if there are
challenges. I still study a lot at home, but I do miss the challenges
and competition.
 
