Comparison of C Sharp and C performance


spinoza1111

Hmm.  You have a point; he is confused enough that he might well have
mistaken the citation on the Wikipedia talk page for the claim he's now
making.  I retract the accusation.

No, you're confused.

Most people think in arrays and not in tree structures due to lack of
education. This actually means that you think of the Wikipedia cite
and your libelous article as two data points, and the FAQ about
Schildt as a third. But since your libelous article was the source of
citations cited in turn, it is probably the apex of a tree. It caused
a malicious campaign of libel and slander.
 

spinoza1111

Not if it can be proven that Seebach did so first to Schildt, for that
is my claim, and the First Amendment to the Constitution protects it,
along with my readiness to defend myself (pro se if needed) in court.
You know, it occurs to me that I really ought to point my lawyer at
this stuff.  Not because I think there'd be any point in a defamation
case, but because he usually finds my kooks funny.

I'd be careful with that. A lawyer is a sort of rent-a-Father in a
society that's killed the Father and he might laugh or snarl at you
for wasting his time.
 

spinoza1111

spinoza1111 wrote:
You are very ignorant because you could not tell from parsing that
parsing means parsing at Chomsky level 1 in this context.

Firstly, say what you mean. Secondly, parsing (at *any* level) does not
make a compiler. It is necessary, but certainly not sufficient.
No: see above.

I'll tell you what - I'll accept that, according to *your* definition of
a compiler, Herbert Schildt wrote a compiler. According to mine,
however, he didn't.
Yes, and you lost, remember,

No, I don't remember that.
for you never answered my riposte:

So what? You write so much twaddle that nobody can reasonably be
expected to read even 1% of it.
(1) You're no Dijkstra.

Accepted.

<nonsense snipped>

 > You're not even Nilges,

Thank heaven for small mercies. Or indeed large ones.
since in 1974, byotch, I developed a data base with selection
and formatting in the absence of prior art and development tools
beyond a primitive assembler.

Bear in mind that experience of your previous erroneous and occasionally
deceptive articles leads me to treat any claim of yours with a large
pinch of salt.
(2) There was no opportunity to take academic course work in the time
of Dijkstra, since if you'd done so yourself, you would know that the
content of academic CS was created by Dijkstra et al. How could
Dijkstra have taken computer classes in the Holland of the early fifties?

He couldn't. That's kind of the point. He couldn't, and therefore he
didn't. And therefore, by being a computer scientist despite not having
a CS degree (because he couldn't possibly have got one, because there
was no such thing at the time), he has demonstrated that a CS degree is
not a prerequisite for being a computer scientist. I mean, this is so
blindingly obvious that even a child of 7 could understand it. I suggest
you consult a child of 7.

 > I took the very first CS class offered by my own university in 1970!

It doesn't appear to have done you much good.

God help us all. Oh well, I have taught at Roosevelt University (a
third-rate school in Chicago), Princeton, and DeVry, which I am the
first to admit is a strange range. From that experience I am aware that
at the lower level a lot of bad practice is being taught,

If they let you loose, I'm not surprised.
[Note my choice of words. I don't know if C Unleashed contains a lotta
errors, and I am not gonna set up a Web site unleashing the hounds of
hell on your book.]

You'd be too late. Such a site already exists. I wrote it, and I
maintain it.

Oh my goodness, *quelle Pantheon*: a clerk who's never taken a CS
class and a buncha ordinary slobs with broadband [Ben is very smart
esp. on details]

can run CS rings around a CS graduate who seems to have almost no grasp
of CS. Odd, that.

I will stand corrected. OK, you've worked with a series of companies.
And many programmers change jobs.

Especially those who choose to work on short-term contracts.

...and other bottom feeders who become "consultants" to be
"independent" and who dialectically wind up being completely dependent
on the good will of bad people in a business that at least in the USA
may be mobbed up.

I am very familiar with the ethics of such "consultants".
 > But given the social skills you
display, I can only wonder how you got along with co-workers, and I
think you were a back-stabber, based on my long experience here.
I get on fine with almost everybody, and I have only once
(metaphorically!) stabbed anyone in the back - well, eight people in one
stab actually, but it turned out they were already (metaphorically) dead

Wow, listen to this guy...
anyway. I considered the stabbing to be a necessary precaution. They had
wasted a huge amount of time and budget turning out a piece of software
that might have embarrassed even you.

OK, you lied, eight people lost their jobs, probably unnecessarily.

You've never read King Lear:

Oh, but I have.

I don't think so, and if so, you have read without understanding.
"O matter, and impertinency mixt, Reason in Madnesse"! The word
"authority" has both positive and negative connotations, but the word
"authoritative" is a different (albeit related) word which has, as far
as I'm aware, only positive connotations. If I care enough, I'll take it

Literate people extend their connotations to a fully derived word.
 

spinoza1111

spinoza1111 wrote:



You are very quick to generalise.

Based on experience. It's called "intelligence".

In my experience, nobody should brag about being a contract
programmer. Most of my career, I've not been one, and most programmers
I've known who were all-contract-all-the-time were forced to work to
rule as low level temps.

From the viewpoint of a company that's heavily invested in C, the last
thing it wants to hear is that C sucks, so obtaining a warm body
labeled "C programmer" is in the shortest of terms "rational", because
at the level of mere programmer, the programmer is expected to be
uneducated in comp sci so that he's loyal to the language he knows.

Your constant contortions to "prove" various factoids about C show
that you're overly invested in the language from a financial point of
view.
It would do you good.



I don't see how you make that out.


The decision, as it turns out, had already been made, but my evidence
was confirmation - if confirmation were needed - that the right decision
had been made.




That doesn't affect reality in the slightest. I have read King Lear,

OK, you have. So did Roger Scruton. But just as he misinterpreted it,
so did you, in the same way you don't understand C.
 

spinoza1111

spinoza1111 wrote:
You are very ignorant because you could not tell from parsing that
parsing means parsing at Chomsky level 1 in this context.

Firstly, say what you mean. Secondly, parsing (at *any* level) does not
make a compiler. It is necessary, but certainly not sufficient.
No: see above.

I'll tell you what - I'll accept that, according to *your* definition of
a compiler, Herbert Schildt wrote a compiler. According to mine,
however, he didn't.
Yes, and you lost, remember,

No, I don't remember that.
for you never answered my riposte:

So what? You write so much twaddle that nobody can reasonably be
expected to read even 1% of it.
(1) You're no Dijkstra.

Accepted.

<nonsense snipped>

 > You're not even Nilges,

Thank heaven for small mercies. Or indeed large ones.
since in 1974, byotch, I developed a data base with selection
and formatting in the absence of prior art and development tools
beyond a primitive assembler.

Bear in mind that experience of your previous erroneous and occasionally
deceptive articles leads me to treat any claim of yours with a large
pinch of salt.
(2) There was no opportunity to take academic course work in the time
of Dijkstra, since if you'd done so yourself, you would know that the
content of academic CS was created by Dijkstra et al. How could
Dijkstra have taken computer classes in the Holland of the early fifties?

He couldn't. That's kind of the point. He couldn't, and therefore he
didn't. And therefore, by being a computer scientist despite not having
a CS degree (because he couldn't possibly have got one, because there
was no such thing at the time), he has demonstrated that a CS degree is
not a prerequisite for being a computer scientist. I mean, this is so
blindingly obvious that even a child of 7 could understand it. I suggest
you consult a child of 7.

 > I took the very first CS class offered by my own university in 1970!

It doesn't appear to have done you much good.

God help us all. Oh well, I have taught at Roosevelt University (a
third-rate school in Chicago), Princeton, and DeVry, which I am the
first to admit is a strange range. From that experience I am aware that
at the lower level a lot of bad practice is being taught,

If they let you loose, I'm not surprised.
[Note my choice of words. I don't know if C Unleashed contains a lotta
errors, and I am not gonna set up a Web site unleashing the hounds of
hell on your book.]

You'd be too late. Such a site already exists. I wrote it, and I
maintain it.

Oh my goodness, *quelle Pantheon*: a clerk who's never taken a CS
class and a buncha ordinary slobs with broadband [Ben is very smart
esp. on details]

can run CS rings around a CS graduate who seems to have almost no grasp
of CS. Odd, that.

I will stand corrected. OK, you've worked with a series of companies.
And many programmers change jobs.

Especially those who choose to work on short-term contracts.

 > But given the social skills you
display, I can only wonder how you got along with co-workers, and I
think you were a back-stabber, based on my long experience here.

I get on fine with almost everybody, and I have only once
(metaphorically!) stabbed anyone in the back - well, eight people in one
stab actually, but it turned out they were already (metaphorically) dead
anyway. I considered the stabbing to be a necessary precaution. They had
wasted a huge amount of time and budget turning out a piece of software
that might have embarrassed even you.

You've never read King Lear:

Oh, but I have.
there thou might'st behold the great image of Authoritie, a Dogg's
obey'd in Office.

"O matter, and impertinency mixt, Reason in Madnesse"! The word

Very good. Like me, you know where to find the First Folio edition.
However, unlike me, you don't connect what goes on in this newsgroup
with Lear. You don't understand that Lear was a protest against false
authority such as yours.
 

spinoza1111

You know, it occurs to me that I really ought to point my lawyer at
this stuff.  Not because I think there'd be any point in a defamation
case, but because he usually finds my kooks funny.

"Your" lawyer (who's probably some hard working bottom feeder with a
lot of potential and actual clients) might be irritated at you wasting
his time, because lawyers sue people and institutions with deep
pockets, and he'd see that you have no case. Although truth is not an
absolute defense, it's a good one, and the truth is that you set up a
malicious Web site which harmed Schildt, without qualifications in
computer science of a sort a lawyer would accept (hint: lawyers vastly
prefer external and verifiable certification, such as degrees in CS
from Univ of Illinois, to the claims of autodidacts and AP scores).

Lawyers can also tell the difference between self-defense in what they
call "chat rooms" and malicious standalone Web sites which have high
Google page rankings, and which can be demonstrated to be the apex of
trees of citations in which I can show that the damage to a reputation
started with you.

Lawyers also love older but fit guys like me who can be articulate on
the stand and describe factually how they helped Nash with C, worked
in C until finding a better language, published a book which has
ranked in the top ten Amazon compiler books, etc.

Lawyers don't like geeks who have evolved their own private set of
rules in crummy chat rooms.

So don't make legal threats, even in jest.
 

spinoza1111

The book clearly says it's an interpreter.

We've been over this, Mister ADHD: an interpreter plus a front end
scanner and parser is a compiler. If you'd taken CS at uni instead of
psychology you would know this. You see, universities, unlike
Korporations, don't see the need to make money by running fast but
incorrect software to cheat people.

Your implicit definition of a compiler as a translator that generates
native object code is false. If it were true, it would have several
absurd results. It would exclude retargetable COMPILERS that generate C code
for portability. It would exclude .Net and Java compilers that do NOT
generate interpreted code, but code that is compiled by a small tool
(the JIT translator) at run time to native code. It would deprive most
early COMPILER developers of their Turing and other awards for
generating COMPILERS that generated interpreted code, such as the
developers of the Purdue University PUFFT compiler.

Your silly definition also invalidates pp. 1-2 of Aho, Sethi, et al.'s
Dragon Book, which defines a compiler as a language transformer,
specifically on p. 2 identifying directly executable machine language
as a subset of the possibilities.
I think I'll clarify that one, in any event.  The term "heap" is used
very heavily in a DOS environment.  The term "heap" is not used at all
in some Unix docs, but glibc uses it occasionally -- interestingly,
specifically to point out that malloc returns some pointers to space
allocated separately outside the heap.  (By unhappy coincidence, I know
WAY WAY too much about the details there, but they're not particularly
relevant.)

The point I was aiming for (but frankly didn't make properly) was that
the concept of the "heap" is not necessarily an intrinsic part of the
C language -- less so still is the specific memory layout, or the notion
that if you allocate enough stuff on "the heap" you will run into
the stack.  (In fact, on some of the systems I use, this is effectively
impossible, because the stack pointer and the "break" address are
far enough apart that you run into the resource limits on both long
before they come near each other.)
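
As a rough illustration (assuming a glibc-based Unix; the C standard
promises none of this), a quick test hints at the split: small
allocations typically come from the brk heap, while large requests are
usually satisfied by mmap, nowhere near the break:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        void *small = malloc(16);               /* likely from the brk heap */
        void *large = malloc(16 * 1024 * 1024); /* likely mmap'd instead */

        printf("break: %p\n", sbrk(0));         /* current "break" address */
        printf("small: %p\n", small);
        printf("large: %p\n", large);
        /* On many glibc systems "small" sits below the break while
           "large" is far from it; none of this is guaranteed by C. */
        free(large);
        free(small);
        return 0;
    }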

The C language as a syntax has nothing to do with runtime mechanisms.
However, had you attended a proper computer science program, you would
have learned that formal semantics is often explained by formal models
which aren't intended to be the only possible reality.

This goes back to Euclid. I'm sure some clowns in his audience at the
beach, while Euclid drew the diagram showing the Pythagorean theorem,
didn't "get" it, in the way you don't "get" it, and asked why they
should believe Euclid for this one triangle. Would the theorem hold at
the fag beach, they would ask, and then say, that's where you belong.

In fact, this is exactly what you did to Schildt. You mocked him for
using a simple model. And if you are gay, please don't bother telling
me that in your defense. I don't want to know, and since 1972 at least
I am aware that most "gay" people can be thugs, same as straights,
because most people confuse instrumental and communicative reason,
systematically.

You'd mock Euclid, and Turing.

You were just literally wrong when you said "the 'heap' is a DOS
term". You don't understand the meaning of a language standard, and
you also don't understand how flawed is the standard on which you
worked to advance your rather half-assed career.

It appears that the C99 standard did not explain semantics using any
one specific runtime model. However, this is a major flaw in the
standard, which, to satisfy greedy vendors, made too many things
"undefined" and is systematically written in a waffling, bureaucratese
style.
I think I'll probably clean that wording up at some point, probably around
the time I get to updating the page for the 4th edition.


Exactly.  My copy of the book has some notes in it, many of which I
didn't feel were worth listing.


That's my guess.  Note that I'm counting repetitions, so basically
every sample program in the book counts as an instance of the void
main error...

That is very dishonest. It's the same confusion the authors of the
anti-Schildt C FAQ made: between token and type. They counted separate
copies of citations of your stupid document as separate pieces of
evidence.

A distinct "error" isn't a recount of a previous error. What happened
is that you found the usual number of post-publication errors, padded
them with anti-Microsoft opinions, and maligned Schildt as a person.

Which reminds me: I want to do a poll on what happens on real systems
if you do putchar(EOF).
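
For what it's worth, a minimal test (assuming EOF is -1, as on common
implementations) shows the usual outcome: putchar converts the int to
unsigned char, so a 0xFF byte is written and that byte value, not an
error indication, is returned:

    #include <stdio.h>

    int main(void)
    {
        int r = putchar(EOF);   /* (unsigned char)EOF, typically 0xFF,
                                   is written to stdout */
        printf("\nputchar(EOF) returned %d; EOF here is %d\n", r, EOF);
        return 0;
    }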


Hee.

My assertion is not that my listing is a complete annotation of all
the nonsense,


The problem is that he's lying here; my page is not the "one authoritative
source".  Rather, the talk page for Herbert Schildt on Wikipedia contains
a number of debates about whether or not the "controversy" is justified,
which Spinny lost specifically on the grounds that someone pointed to my
page.  That has caused him to mistakenly think it's the sole authoritative
source, but in fact, I think anyone would consider the pages by Francis
or Clive on the same topic to be comparably authoritative -- both have
been at least as active in C standardization as I have.

You were their source and inspiration, jerk face.
So it's all tilting at windmills.  Someone accepted my page as an argument,
so he thinks if he can make it go away he can win the argument with the


What is your problem? If you have a complaint, make it to me. Stop
talking to others. It's cowardly.
 

Seebs

I have been a critic of pay structures that are based on hierarchies
exactly because they result in your most skilled employees either being
promoted out of their skill area (there is no reason to think that an
excellent programmer can be a manager) or leaving (actually often to be
re-employed as a consultant).

Yeah. One of the things I quite like about my employer's system: short
of the C*O level, you can promote as far as you want on a technical
track; you don't have to become a manager.

This can result in someone "taking orders" from another person who's two
or three ranks lower in the theoretical chart of titles, but who cares?
Managers manage, programmers program, everyone's happy.

-s
 

spinoza1111

OK, that tells us something about the programmers you normally come in
contact with. Contract workers come in two bands: those who are too
awful to employ permanently, and those who are so good that paying them
what they are worth as employees would distort the company pay structures.

The Troglodyte dances before the shadow on the wall
Sprites and spirits doth he try to call
While unseen outside there is a sun
Which outside, gives light to every one.

Yes, this abstract distinction you make exists. There are really good
programmers and many "studies" have found that they are light years
ahead of their mates...in some cases so many astronomical units ahead
that their coworkers are "offended" by them and start Seebie-style
whispering campaigns.

However, to believe that companies want to pay people what they're
worth ranks alongside trust in the Tooth Fairy. Companies in fact make
it their business to hire underqualified people (such as people without
training in academic CS) so as to enhance stock price.
I have been a critic of pay structures that are based on hierarchies
exactly because they result in your most skilled employees either being
promoted out of their skill area (there is no reason to think that an
excellent programmer can be a manager) or leaving (actually often to be
re-employed as a consultant).

Oh good. American and G-8 societies are already too unequal, and you
wish to make them more so. Programming "ability" is not a fixed thing,
and it has many dimensions. For example, Richard Heathfield has valuable
knowledge of details about C but almost no computing imagination.

Programming ability also disappears rapidly in cases of divorce and
substance abuse.

It is completely unlike medical or legal ability, because essential to
what we think of as the "ability" of a doctor or lawyer is their
ability to sign off on decisions that have important legal and
financial ramifications. Programmers have no such ability.

There are a few companies around that understand that employees should
be paid well for doing a good job irrespective of what that job is. I
have been known to suggest that managers at all levels should have their
pay tied to the performance/productivity of their 'team'.

We have to ask what it's for. If you're writing highly optimized C so
as to enable a financial firm to make an offer in an online market
that jacks up your stock price, and immediately cancels the offer
without purchasing a share (to name one cute post-panic trading game),
you are risking a market meltdown bigger than 2008 based on your games,
and your conduct should be criminalized by legislation if it has not
been already.

Married men and women with more than two children, as opposed to
single people, in my view deserve higher pay, and so do people whose
work is a net benefit to a real community (not some fanciful "open
source" community of pirates and slaves). Senior people, especially
people on the verge of retirement, deserve higher pay, as do people
who've troubled to get formal education in their professions (such as
a computer science degree).

I realize that this view radically differs from the typical programmer
view.

"Intelligence is a moral category" (Adorno). If you lie, cheat and
steal, misrepresenting the reputation of another, then as we see here,
you have a tendency to say shockingly unintelligent things ("the
'heap' is a DOS term": "Nilges ain't in Risks"). cf. also Habermas: a
communicative act such as the claim that a compiler can be one which
generates interpreted code has to be made in an environment of mutual
trust and a common shared decency.

However, Americans and their friends abroad seem to have a problem
with this notion. They want everybody to be "free", and then punish
people who freely elect Hamas by paying Israel to kill their children.
 

spinoza1111

Yeah. One of the things I quite like about my employer's system: short
of the C*O level, you can promote as far as you want on a technical
track; you don't have to become a manager.
The Programmer Peter Pan Syndrome is "I love to code and do not want
to be a manager". This is because the promotion to management is a
psychological symbol of having to grow up and eventually die.

The "technical track" usually dead ends at a cliff; programming
careers have been found on average too brief owing to technical change
to properly raise a family, since people with ten years or more
experience are either behind the curve, or perceived to be, or both.

Figure it out. If a large company has a stable and mature system
maintained and incrementally improved by a set of aging geeks that are
experts on this system and who (in order to do a good job) have not
enhanced their other skills, the aging geeks are a cost center. If the
company can find an Open Source solution to replace these people, it
will.

Geeks at Bell-Northern Research developed a compiler for PBX
programming, disruptively creating a new market in the 1970s. But as
early as 1980, the Ottawa compiler group, which contained very smart
people, was perceived by the suits as a cost center because the
compiler seemed to be...perfect. The Ottawa compiler group was
disbanded. But then, the Mountain View PBX people needed all sorts of
new features for new customers with new requirements, which created my
job.

But after I made the many changes needed, including a new compiler for
24 bits and a complete auto-installer, any suggestions I had as to
further modifying the compiler (such as using optimization to predict
bugs) were met, politely, with the response that Northern's strategy
would be less "proprietary" in future...even though like Apple, being
"proprietary" had made it successful. The upshot was that Northern
Telecom, Bell-Northern's parent, wasted megabucks on a variety of mad
schemes and some of its executives went to jail for stock
manipulations.

There is really no such thing as being "valued" in a company in the
way programmers fantasize they are "valued". It's like the skilled Jew
in the concentration camp who fantasizes that he'll be spared, that
the Nazis won't be so "irrational" in the large.

[No, corporations aren't Fascist dictatorships. Instead, underlying
mechanisms make rationality in the small into irrationality when it
scales up. Small businesses almost never develop Fascistic
pathologies.]

If these programmer Peter Pans lack formal education in computer
science, their next job will be at McDonald's since they have no way
to convince an employer to retrain them in a modern platform, and the
teaching profession is closed to them.

Peter, repeat after me: you want fries with that?

[Note: I don't mean to imply that Seebach is unemployable absent a
demand for C programmers. I do not know enough about him to know. But
I do know that programmer Peter Pans exist.]
 

Ben Bacarisse

spinoza1111 said:
Yes, and most literate people (Ben) are not of such a literal mind.

Lots of luck making the case that "parsing has nothing to do with C".

I don't want to. I wondered what type 1 grammars have to do with C
and why you limit the term "parsing" to this one type even if only "in
this context".

I think what has happened is that you expect people like me with
literal minds to auto-correct what you write until it makes technical
sense. I am happy to do that, but if I auto-correct what you say
about C I will end up with my (technical) opinions about it, not yours.

<snip>
 

spinoza1111

I don't want to.  I wondered what type 1 grammars have to do with C
and why you limit the term "parsing" to this one type even if only "in
this context".

...because in most computer science books, "parsing" is used as
opposed to "scanning": "scanning (aka lexical analysis) versus parsing
(syntactical analysis)".

Aho, Sethi, et al., COMPILERS: PRINCIPLES, TECHNIQUES, AND TOOLS, 2nd ed.,
p. 5: "The first phase of a compiler is called *lexical analysis* or
*scanning*".

p. 8: "The second phase of a compiler is called *syntactical analysis*
or *parsing*".

In this usage, "parsing" is limited, but, of course, the "parsing" of
Chomsky type (or level, or stage, or ****-all) 3 (or zero, or
"regular" or sod-all) == scanning.

Intellectual (and even moral) growth involves abandoning the habit of
rote terminology and the concomitant hostility towards the terminology
of the Other, and the appreciation of structures of wider and wider
scope. If you, like friend Peter, didn't take komputer science, then
you learned things bottom-up and by chance, whereas in uni you go more
top-down, learning an initial framework in which a set of words, not
one word, is the focus.

You also learn that everything has everything to do with everything,
the question being in what way we can sensibly explain.
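
To make the split concrete, here is a toy sketch (mine, not any
textbook's): the scanner below is the regular, type-3 part, handing out
one token per call, while the recursive-descent routines do the
context-free, type-2 work of matching nested parentheses, which no
finite automaton can do:

    #include <ctype.h>
    #include <stdio.h>

    static const char *src;  /* input being scanned */
    static int tok;          /* current token (one char, in this toy) */

    static int next(void)    /* scanner: the regular, type-3 part */
    {
        while (isspace((unsigned char)*src))
            src++;
        return *src ? *src++ : 0;
    }

    static int expr(void);   /* forward declaration for the recursion */

    static int primary(void) /* primary := DIGIT | '(' expr ')' */
    {
        int v;
        if (tok == '(') {
            tok = next();
            v = expr();
            tok = next();    /* consume the ')' */
        } else {
            v = tok - '0';   /* single-digit numbers only */
            tok = next();
        }
        return v;
    }

    /* expr := primary { ('+' | '*') primary }
       (equal precedence, left to right -- a toy, not C's rules) */
    static int expr(void)
    {
        int v = primary();
        while (tok == '+' || tok == '*') {
            int op = tok;
            tok = next();
            v = (op == '+') ? v + primary() : v * primary();
        }
        return v;
    }

    int main(void)
    {
        src = "(1+2)*3";
        tok = next();
        printf("%d\n", expr()); /* prints 9 */
        return 0;
    }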
 

Ben Bacarisse

spinoza1111 said:
On Jan 2, 3:28 pm, Seebs <[email protected]> wrote:

We've been over this, Mister ADHD: an interpreter plus a front end
scanner and parser is a compiler. If you'd taken CS at uni instead of
psychology you would know this.

That disagrees with the usage I learnt from my CS degree course.

There is some flexibility in these terms but I wonder what you call a
system that directly interprets a parsed form of the source code.
This seems to be what "little C" does, and I think it is useful to
have a term for such systems so as to distinguish them from
"compilers" that produce some form of target code (albeit sometimes an
intermediate code rather than machine instructions). I call them
interpreters. What do you call them if not interpreters?

<snip>
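
For concreteness, a minimal sketch of the kind of system I mean, with a
hand-built tree standing in for the parser's output; evaluation walks
the parsed form directly, and no target code of any kind is produced:

    #include <stdio.h>

    struct node {
        char op;              /* '+', '*', or 0 for a number leaf */
        int value;            /* used only when op == 0 */
        struct node *left, *right;
    };

    static int eval(const struct node *n)  /* the interpreter proper */
    {
        if (n->op == 0)
            return n->value;
        if (n->op == '+')
            return eval(n->left) + eval(n->right);
        return eval(n->left) * eval(n->right);  /* op == '*' */
    }

    int main(void)
    {
        /* the parsed form of 2 + 3 * 4 */
        struct node two   = { 0, 2, NULL, NULL };
        struct node three = { 0, 3, NULL, NULL };
        struct node four  = { 0, 4, NULL, NULL };
        struct node mul   = { '*', 0, &three, &four };
        struct node add   = { '+', 0, &two, &mul };

        printf("%d\n", eval(&add));  /* prints 14 */
        return 0;
    }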
 

Ben Bacarisse

spinoza1111 said:
...because in most computer science books, "parsing" is used as
opposed to "scanning": "scanning (aka lexical analysis) versus parsing
(syntactical analysis)".

Aho, Sethi, et al., COMPILERS: PRINCIPLES, TECHNIQUES, AND TOOLS, 2nd ed.,
p. 5: "The first phase of a compiler is called *lexical analysis* or
*scanning*".

That is still no answer. I don't have that book anymore but I don't
recall anything much about parsing type 1 grammars in it.


Do you not want me to correct your initial statement so that it makes
sense, then? I am pretty sure you meant "Chomsky type 2". Why not
just use the more usual term "context free grammars" so you don't
have to remember what type number they are?
 

Ben Bacarisse

Lorenzo Villari said:
I bought this one some years ago

http://www.amazon.com/C-Complete-Reference-4th-Ed/dp/0072121246

and "Little C" is introduced as "A C Interpreter" by the author. I don't
know if this is an error done by the translator from English, because
I've got the italian version, but I don't think so...

Yes. I don't think there is any doubt that the vast majority of
people (including the author) would call little C an interpreter.
 

Dennis \(Icarus\)

Ben Bacarisse said:
That disagrees with the usage I learnt from my CS degree course.

Same here.
There is some flexibility in these terms but I wonder what you call a
system that directly interprets a parsed form of the source code.
This seems to be what "little C" does, and I think it is useful to
have a term for such systems so as to distinguish them from
"compilers" that produce some form of target code (albeit sometimes an
intermediate code rather than machine instructions). I call them
interpreters. What do you call them if not interpreters?

The Dragon Book (Aho et al.) calls them interpreters, as do the other
compiler texts I have.

Dennis
 

Richard Bos

Seebs said:
I have something of one:

The fact is, you can learn a lot more sometimes from understanding why
something is wrong than you would from understanding why it is right.

That's true for reading G.B. Shaw, who is almost always wrong, but
almost always in intelligent, interesting ways.

It ain't always so for other dumb folks, bubba.

Richard
 

Richard Tobin

Ben Bacarisse said:
There is some flexibility in these terms but I wonder what you call a
system that directly interprets a parsed form of the source code.

The term "semi-compiler" has been used, but even that usually
implies something more than a parsed version of the source.

On the other hand, I don't think many people would dispute that a
system that converts a program to byte codes, which are then
interpreted, is a compiler. And of course even the machine
instructions generated by a native-code compiler may still be
"interpreted" by microcode.

I would say that the key feature of a compiler is that it translates
to something structurally simpler. Turning loops into tests and jumps,
structure field accesses into arithmetic and dereferencing, that sort
of thing. Some reduction in the complexity of the operations. A parser
on the other hand merely produces a more convenient representation of
the same operations, so a parser plus something that interprets the
parse tree is not a compiler. It's just a better interpreter than
one that operates on the program text directly.

A Fortran to C translator would be less likely to be called a compiler
than a Lisp to C translator, because the C constructs produced are likely
to be of comparable complexity to those in the original Fortran, but not
those in the original Lisp.
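
As a small illustration of "structurally simpler", here is the same
computation twice: once as a structured C loop, and once lowered by
hand to the tests and jumps a compiler might emit for it:

    #include <stdio.h>

    int main(void)
    {
        int i, sum = 0;
        int j = 0, sum2 = 0;

        for (i = 0; i < 5; i++)  /* the structured form */
            sum += i;

    top:                         /* the lowered form: tests and jumps */
        if (!(j < 5))
            goto done;
        sum2 += j;
        j++;
        goto top;
    done:
        printf("%d %d\n", sum, sum2);  /* prints 10 10 */
        return 0;
    }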

-- Richard
 

Seebs

That disagrees with the usage I learnt from my CS degree course.

That means you're too educated.

I think Spinny's actually made the key point himself, probably
unintentionally:

The point, of course, being that generally a "compiled language" is one
which doesn't have the interpreter. A normal C implementation doesn't
have an interpreter; it generates code which is run natively. Schildt
did not write a thing which generated native code, so it's not what is
normally called a "compiler" -- thus his emphasis on it being an
interpreter.

You could argue, in a sort of pedantic way, that the interpreter actually
compiles-to-X, where X is something other than native code, and then
something else interprets the X.

So, given Java as an example to draw from: Does Schildt's "C interpreter"
allow you to convert C programs to some kind of data file, which another
program or piece of hardware then executes?
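
For comparison, a minimal sketch of the compile-to-X scheme, using a
made-up three-instruction stack machine as X; the byte codes here could
just as well be written to a data file and executed later by a separate
program:

    #include <stdio.h>

    enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    int main(void)
    {
        /* the "compiled" form of: print(2 + 3) */
        const int code[] = { OP_PUSH, 2, OP_PUSH, 3,
                             OP_ADD, OP_PRINT, OP_HALT };
        int stack[16], sp = 0, pc = 0;

        for (;;) {               /* the interpreter of X */
            switch (code[pc++]) {
            case OP_PUSH:  stack[sp++] = code[pc++]; break;
            case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
            case OP_PRINT: printf("%d\n", stack[--sp]); break; /* prints 5 */
            case OP_HALT:  return 0;
            }
        }
    }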

-s
 

Keith Thompson

Richard Heathfield said:
According to your definition of "compiler", sure. Most people don't
think of it like that. Most people, erroneously in my view, think of a
compiler as generating (as you say later) native object code. My own
view is that a compiler is a program that takes as input a program
written in some language, and produces as output a program that does
the same thing, but written in another language. (The "identity"
compiler, which produces output written in the /same/ language, is
cute but trivial.) If the target language is native object code, all
well and good, but it doesn't have to be. An interpreter, on the other
hand, takes a program written in some language, and executes it. Thus,
we might think of a computer itself as a "machine code interpreter".
[...]

I would restrict the definition of "compiler" a bit more than that.
A translator that leaves substantial portions of the input language
unchecked and unmodified in the output is something I'd call a
preprocessor but not a compiler.

For example, the old Ratfor preprocessor, which translated a dialect
of Fortran (that had such things as if/then/else and structured loops
when Fortran itself didn't) into (then-) standard Fortran was not,
IMHO, a compiler. You could write a Fortran program that didn't
use any Ratfor-specific constructs, and the translator would leave
it unchanged. Even if there were errors in the input, the translator
would pass them through and leave it up to the Fortran compiler to
diagnose them.
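
To illustrate the pass-through property (a sketch with a made-up
"repeat N:" construct standing in for the Ratfor extensions): this toy
"preprocessor" rewrites the one construct it knows and copies all other
input through unexamined, errors and all, so it can never diagnose them:

    #include <stdio.h>

    int main(void)
    {
        char line[256];

        while (fgets(line, sizeof line, stdin) != NULL) {
            int n, consumed = 0;
            if (sscanf(line, "repeat %d:%n", &n, &consumed) == 1) {
                while (n-- > 0)          /* the one known construct */
                    fputs(line + consumed, stdout);
            } else {
                fputs(line, stdout);     /* everything else passes
                                            through unchecked */
            }
        }
        return 0;
    }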

Similarly, I don't consider the C preprocessor to be a compiler.

cfront, the old C++-to-C translator, on the other hand, was IMHO
a compiler. In theory, if there were *any* errors in the input,
cfront itself would diagnose them.

I don't claim that my definition is more correct than anyone else's
(well, mostly), but that's how I think of it.
 
