subroutine stack and C machine model

S

Seebs

Again you are confusing the order of evaluation of arguments to an operation
and order of evaluation of the operations themselves.

In his defense, Microsoft has a history of confusing those too, so we simply
can't tell from this description whether this supports his point or not. The
MS site describing C++ claims that precedence controls order of evaluation.
So far as I know, this is simply not true even of their compiler.

-s
 
S

spinoza1111

In his defense, Microsoft has a history of confusing those too, so we simply
can't tell from this description whether this supports his point or not.  The
MS site describing C++ claims that precedence controls order of evaluation.
So far as I know, this is simply not true even of their compiler.

Well, it was YOUR error to make it random in all cases as far as I can
tell (I am reading your famous standard now). You needed to use
existing art in compiler optimization to unscramble code that could
safely be resequenced, not break existing code.

In so claiming, MS is like Schildt following common sense and
declaring what was true during the K & R epoch (prior to the mess made
by "standardization" on behalf of greedy vendors). Unless the Optimize
switch is set in the On position, precedence and then left to right
order should control sequencing save in the special case of pre and
post increment.

Pre and post increments were at best useful stupid mistakes, since
they have no meaning when applied to something other than an lvalue.
You could have declared that in "standard" C they could only be used
with lvalues, but this would have required vendors to change the code.

It's quite possible that thousands of programmers and technical
writers believe wrong things about C, not because they're stupid as
you seem to believe, but because like Kant they are "citizens of a
better world" and either innocently unaware, or unable to credit, what
has been done to C.

A programming language that UNNECESSARILY violates the expectations of
the ordinary intelligent programmer is a crime against humanity. I'd
have thought we'd learned this by 1990, but this is the sort of thing
one learns when one takes a computer science class.

Isn't it.

Gerald Weinberg told a story in The Psychology of Computer Programming:
Levine the Genius Tailor. A man had a suit made by an incompetent
tailor who instructed him to walk like a cripple to make the suit
fit. Leaving the tailor shop, walking like a gimp, the man was
accosted by another who asked him who made his suit. Flattered, the
gimp said, "Levine". The man said "I believe I shall visit him! Why he
must be a genius to fit a cripple like you!"

I think the moral of this parable as regards C is clear, for C
programmers, especially post Standard, pride themselves on knowing
things that make no sense, cripple the mind, diminish the soul, and
arguably make them apt to engage in the politics of personal
destruction.

I think you had a chance to clear up a mess, and you blew it.
 
S

spinoza1111

Attempting to look this up, now, I discover that lots of FORTRAN doccy
seems to just say "modifying the loop counter is banned" or "would cause
great confusion" or similar

You appear to be saying that, sometimes you can, sometimes you can't.
Not a satisfactory state of affairs.

No, it wasn't. I was there: I debugged my first Fortran compiler in
machine language form. Different compilers had radically different
semantics because manufacturers wanted to call the shots. C99 returns
to these thrilling days of yesteryear, even to engaging people
innocent of comp sci to give the whole effort the right period flavor,
like fat re-enactors in Ted Turner's movie Gettysburg pretending to be
Johnny Rebs on short rations. The result? A mess.
Well, personally I use PHP and JavaScript, both of which follow the C
model.

They both suck.
Verifiable by whom or what? I tend to verify my own code.

Back to the future. In the 1970s, we learned to review code in
"structured walkthroughs" while suspending the universal
competitiveness of capitalism. Then that rat bastard Reagan got his
pimply butt elected. Now we "verify" our own code.

Bug o rama.

I "verified" my own code on the parser discussion. Ben Bacarisse sent
me the error (without recommending a solution). I realized I made the
same error I'd made when writing my book code, which I'd fixed in the
book code after discovering it through testing. I changed my solution
from one that parsed a complete expression or addFactor on the right
to one that iterated over addFactor and multFactor.

I fixed the error as far as I know. I thought of coding the solution
in C sharp to be empirically sure, but now that I've got Ben working
for me (that's a joke son) I need not.

If you faggots would stop your obscene bullying of people who seem to
be safe targets, you'd get more work done.
Zzzz zzzz - zonk!

(Cue snores of boredom)

Why is it fashionable to be bored? Just asking. Is it hip? Or just
stupid?
Oh! Has he stopped babbling now? Right. It certainly is more useful to
be able to modify the loop counters and it's well defined what happens,

I'm not babbling.

When it is useful, you can use a WHILE construct so as to notify the
code reader to watch for counter modification.
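
A minimal sketch of the contrast, with invented data and an invented
skip-the-zeros rule, just to show the shape of the two loops:

    #include <stdio.h>

    int main(void)
    {
        int data[] = {1, 2, 0, 0, 3};
        int n = 5;

        /* Counted for loop: the reader can assume i advances by 1 each pass. */
        for (int i = 0; i < n; i++)
            printf("%d ", data[i]);
        printf("\n");

        /* while loop: the heading warns that termination depends on how i is
           changed in the body, where runs of zeros are skipped in one go. */
        int i = 0;
        while (i < n) {
            if (data[i] == 0) {
                while (i < n && data[i] == 0)
                    i++;               /* counter modified inside the loop */
                continue;
            }
            printf("%d ", data[i]);
            i++;
        }
        printf("\n");
        return 0;
    }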

If you're writing a loop where the counter can change, a whole new set
of considerations arises as to VERIFYING whether your code works, esp.
if everybody hates you and nobody wants to review your code (I'm a
real popular guy: everybody loves to review my code. That's a joke,
son.)

The use of while signals that the loop termination condition follows.
unlike, as you say yourself, with FORTRAN (at least, up until I stopped
using it).

When writing C, you just have to learn to take care, is all, just as
with any program.

Good old patriarchal nonsense, honored more in the breach than in the
observance. Yore Daddy, as you saw him at the age of 4, never made a
mistake, and he haunts your worklife. Grow up. Because of the
complexity of software, we have known since Dijkstra that taking care
is no defense against errors. Arguably, smart people make more
mistakes when working alone. I'm an example.

The only sizeable bit of C I wrote (some 30k lines)
ran on a MicroVAX and used to run for 9 months at a time with no extra
page faults, no intervention, and no restarts. Why 9 months? Because of
the annual site-wide power outage for maintenance of the 240kV line that
supplied power to the site. So, writing reliable code with lots of
malloc() and free() usage is quite possible.

This is an elementary mistake, because "it runs, it consumes
resources, it gives answers, the user is happy" is in no way a
correctness proof. Formally, a correctness proof is a mathematical
proof from which a program can be derived. Informally, a correctness
proof is honest agreement by a group of peers, with none of their
managers present, no office politics, nobody terrorized by the threat
of loss of medical insurance, in France, that the program works.
 
S

spinoza1111

If you meant "why don't all compilers evaluate left to right?" then the
short answer (for C) is functions with a variable number of parameters.

Not needed in an OO language, since the varargs can be assembled into
a polymorphism. However, C Sharp also supports a parameter array. What
you don't need is the false freedom of passing A String of Random
Crap.
Consider:-

  void print (int count, ...);

where the count is followed by the specified number of strings (I'm
not claiming this is a good design).

You'd better not or I'll tell Mom.
  print (2, "red", "blue");

On a typical system the stack on entry to the function looks like this

   <ret addr>   <-- SP
   2
   "red"
   "blue"

No matter how many strings there are, the count is always at the top of
the stack, making it easy to find. The first C compilers used a stack
like this and the caller cleaned up the stack after the call (only the
caller knew how many items were on the stack). For efficiency reasons
Microsoft allowed you to specify (with a non-standard extension) that a
function used the "pascal" calling convention (even in a C program). These
evaluated arguments left to right and the called function did the
stack cleanup. These functions couldn't use varargs arguments.
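
For what it's worth, here is a minimal portable sketch of that
count-first style; the name print_strings and the use of <stdarg.h> are
my own illustration of the idea rather than anything from the original
post, and va_arg walks the arguments however the implementation
actually passes them:

    #include <stdarg.h>
    #include <stdio.h>

    /* Print 'count' strings passed as variable arguments. The named
       'count' parameter plays the role of the "2" at the top of the
       stack in the layout sketched above. */
    static void print_strings(int count, ...)
    {
        va_list ap;
        va_start(ap, count);
        for (int i = 0; i < count; i++)
            printf("%s\n", va_arg(ap, const char *));
        va_end(ap);
    }

    int main(void)
    {
        print_strings(2, "red", "blue");   /* prints "red" then "blue" */
        return 0;
    }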

I see your point. But aren't there other ways of handling the problem?
If you need the count "first", stack an extra pointer to the end of
the stack frame and process the variables "backwards" inside the
called routine. Or, after building the stack frame "backwards",
reshuffle the values you have stacked: at this point they are pure
values and are not harmed by repeated examinations as would be the
original operands, right? Don't violate left to right expectation.

Sounds to me that runtime developer convenience trumped common sense
and decency, and this convenience was mislabeled "efficiency".
< is there some gain in doing that? how many CPU cycles are gained?
< where is the advantage?
< are there theoretical problems?
< the gain from choosing one order [left to right or right to left]
< is that it removes the UB that comes with "no particular order"

giving the compiler the freedom to rearrange arguments could help
optimising compilers (though I think Java and C# consider it better on
balance to specify the order of evaluation).

This is NOT the job of a language designer. It's the job of the
compiler developer to find out when code can be moved around. It seems
to me to have been a miscarriage of common sense that people without a
single compsci class were doing compiler optimization without coding!

I think the Standards people used compiler optimization as an excuse
to make as many compilers (other than Microsoft) "standard" by ukase.
You see, a compiler doesn't need a Standard to allow it to in effect
move source code around (by transforming its internal representation
as a DAG or other structure).

It needs the laws of mathematics and the physics of computation.

If it's more efficient at run time to evaluate a+b as b+a you do so
not because some snot nosed standards writer with no compsci said you
could. You do so because you've verified that the two statements are
mathematically the same, and the target computer isn't some sort of
device that needs a and b to be in the original order.
Yes it will, it's modifying a more than once between sequence points.

Is your whitespace key borken? I've edited your code to make it
readable


this is undefined behaviour. Even this is

The problem isn't that order needed to be made an Eleusinian mystery.
It's that postincrement and preincrement needed to be restricted to
lvalues.
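
A small illustration of the two situations being argued about, with
invented variable names; the undefined lines are left commented out:

    #include <stdio.h>

    int main(void)
    {
        int a = 1;
        int i = 0;
        int arr[2] = {0, 0};

        /* Undefined behaviour: 'a' is modified twice with no intervening
           sequence point (pre-C11 wording). */
        /* a = a++; */

        /* Also undefined (the K&R1 example): 'i' is both read as an index
           and modified by i++ without a sequence point in between. */
        /* arr[i] = i++; */

        /* Note that the operand of ++ must already be a modifiable lvalue:
           something like (a + i)++ simply does not compile. */

        /* Well defined: each modification is separated by the sequence
           point at the end of its full expression. */
        a++;
        arr[i] = a;
        i++;

        printf("%d %d %d\n", a, arr[0], i);   /* prints "2 2 1" */
        return 0;
    }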
 
M

Moi

spinoza1111 wrote:


Thirdly, the decision was nothing to do with Peter Seebach, since it was
made long before he joined the C committee - order of evaluation was
unspecified in C89, and probably before.

.... except for the short-circuit cases, of course.

AvK
 
T

Tim Streater

spinoza1111 said:
No, it wasn't. I was there: I debugged my first Fortran compiler in
machine language form. Different compilers had radically different
semantics because manufacturers wanted to call the shots.

What do you expect? At that point vendors were the ones providing
compilers.
C99 returns to these thrilling days of ...

Zzzzz zzzz ...
They both suck.

Ah, that explains why you were infesting the PHP or as it might be JS NG
a couple of years ago with the same old irrelevant lefty twaddle we're
seeing now.
Why is it fashionable to be bored? Just asking. Is it hip? Or just
stupid?

Your impenetrable pop history and psychology is not even close to being
on-topic and is never germane to the issue. It obscures any point you
might be making.
This is an elementary mistake, because "it runs, it consumes
resources, it gives answers, the user is happy" is in no way a
correctness proof. Formally, a correctness proof is a mathematical
proof from which a program can be derived.

What proof is that, then, for my real-time software that accepted input
from a dozen or so hardware boxes via asynch connections, and a number
of users via the local network? All overseen by a small threads-kernel
we had and glued together with some small bits of assembler? The notion
of mathematical proof for software was a nice conceit 40 years ago when
I was first starting in this business. In the 40 years since then, until
now, I've NOT ONCE heard this concept pushed.
Informally, a correctness
proof is honest agreement by a group of peers, with none of their
managers present, no office politics, nobody terrorized by the threat
of loss of medical insurance, in France, that the program works.

Well my peers at the time all had their own projects and were happy to
trust me that the software worked. They used it from time to time and I
got feedback. The real users were the install/deinstall techs, and they
were quite happy too.

All these threats, terrorisings, bullying vendors are just in your
fevered imagination - except when you issue threats in the bullying
manner that you do.
 
R

robertwessel2


Three errors here (at least). Firstly, it isn't random. It's
unspecified. There's a difference. It's up to the implementor to
decide how best to do it, and that's because the best way may well
vary according to circumstances. Secondly, it isn't an error, but a
deliberate design decision. (Proof: raise a DR and see how far you
get.) Thirdly, the decision was nothing to do with Peter Seebach,
since it was made long before he joined the C committee - order of
evaluation was unspecified in C89, and probably before.


FWIW, my copy of K&R1 (from 1978, if anyone cares) says on page 49
(section 2.12, "Precedence and Order of Evaluation"):

"C, like most languages, does not specify in what order the operands
of an operator are evaluated. For example, in a statement like

x = f() + g();

f may be evaluated before g or vice versa; thus if either f or g
alters an external variable that the other depends on, x can depend on
the order of evaluation. Again, intermediate results can be stored in
temporary variables to ensure a particular sequence."

And this is preceded and followed by additional info, not least
specifying the order of evaluation of function parameters as
unspecified, explaining the multiple use/mod issue (“a[i] = i++;” is
undefined), and discussing the allowable rearrangement of associative
and commutative operators by the compiler.
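
A minimal demonstration of the passage quoted above; f and g are
invented, the second number printed depends on which order the compiler
picks, and the temporaries show K&R's suggested way of forcing a
particular sequence:

    #include <stdio.h>

    static int counter = 0;

    static int f(void) { counter = counter * 10 + 1; return 1; }
    static int g(void) { counter = counter * 10 + 2; return 2; }

    int main(void)
    {
        int x = f() + g();            /* either f or g may run first */
        /* counter is 12 if f ran first, 21 if g ran first;
           x is 3 either way. */
        printf("x = %d, counter = %d\n", x, counter);

        /* Forcing a particular sequence with temporaries, as K&R suggest: */
        counter = 0;
        int first  = f();             /* f is now guaranteed to run first */
        int second = g();
        int y = first + second;
        printf("y = %d, counter = %d\n", y, counter);   /* counter is 12 */
        return 0;
    }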
 
T

Tim Streater

FWIW, my copy of K&R1 (from 1978, if anyone cares) says on page 49
(section 2.12, "Precedence and Order of Evaluation"):

"C, like most languages, does not specify in what order the operands
of an operator are evaluated. For example, in a statement like

x = f() + g();

f may be evaluated before g or vice versa; thus if either f or g
alters an external variable that the other depends on, x can depend on
the order of evaluation. Again, intermediate results can be stored in
temporary variables to ensure a particular sequence."

My 2nd Ed (1988) has the identical text. None of this is a big deal,
it's well documented so I really don't know what Spinny's problem is
with it. There are thousands of C programmers in the world and none of
them gives a monkey's.
 
R

robertwessel2

My 2nd Ed (1988) has the identical text. None of this is a big deal,
it's well documented so I really don't know what Spinny's problem is
with it. There are thousands of C programmers in the world and none of
them gives a monkey's.


My main point is that this was standard in pre-ANSI K&R C (for which
the K&R1 is typically considered the specification).
 
S

spinoza1111

My 2nd Ed (1988) has the identical text. None of this is a big deal,
it's well documented so I really don't know what Spinny's problem is
with it. There are thousands of C programmers in the world and none of
them gives a monkey's.

OK, it started in K & R and wasn't fixed. Just made worse by way of
being Standardized, since the remit of the Standardizers was to
PROTECT VENDOR PROFITS.

The problem is that when a developer says "I don't give a rat's ass",
he means "I'm incompetent and incurious, and proud of it. It's just a
job."

This is a major scandal as far as I am concerned. It means that the
better developers, to make code more efficient and more readable in
many cases, relied on properties, not only of single compilers but
also of compiler families, here the left to right evaluation of two
function calls. Since most code is designed, coded and tested for
single compilers, their assumptions were confirmed.

Those developers were supported by Microsoft, one of the more ethical
vendors but widely hated by failures.

They, their code, and their own organizations were betrayed by a
standard which should have confirmed left to right evaluation without
the excuse of "optimization". Optimization considerations do not
belong in a language standard because optimization is not permitting a
vendor to call a weird order of evaluation standard. It's a series of
provable transformations applied to code that compiles, with each
transformation being mathematically valid, and no transformation being
applied to code unless that code is known to be free of side effects.

The anti-Schildt campaign scapegoated Schildt essentially, perhaps
unconsciously, to draw attention away from their much more serious
errors, and the phrase "a monkey's ass" is telling.

It's what people say when they've used software correctness the only
way they care to use it, to destroy people. Beyond that, they don't
give a rat's ass.
 
S

Seebs

OK, it started in K & R and wasn't fixed. Just made worse by way of
being Standardized, since the remit of the Standardizers was to
PROTECT VENDOR PROFITS.

You keep saying this, but you've never shown even a shred of evidence.

Inference: You lost a job, and since you obviously aren't personally
at fault for being completely unable to get even basic facts right about
a language you've spent years bashing, you decided to blame The Standard
because, hey, it's not you, that must be why it's bad.

If you have an argument any better than random internet rage, you could
always... hmm. Post some kind of evidence? Show how that evidence leads
to or supports your conclusion?

.... nah, that'd never happen. It's too crazy.

-s
 
S

spinoza1111

What do you expect? At that point vendors were the ones providing
compilers.

Zzzzz zzzz ...



Ah, that explains why you were infesting the PHP or as it might be JS NG
a couple of years ago with the same old irrelevant lefty twaddle we're
seeing now.

Don't think I ever posted to PHP or Javascript.
Your impenetrable pop history and psychology is not even close to being
on-topic and is never germane to the issue. It obscures any point you
might be making.

If it's pop, how is it impenetrable? Your anomie obscures it for you.
What proof is that, then, for my real-time software that accepted input
from a dozen or so hardware boxes via asynch connections, and a number
of users via the local network? All overseen by a small threads-kernel
we had and glued together with some small bits of assembler? The notion
of mathematical proof for software was a nice conceit 40 years ago when
I was first starting in this business. In the 40 years since then, until
now, I've NOT ONCE heard this concept pushed.

That's because you'd rather watch TV. In fact, a structured
walkthrough constitutes a community form of informal proof of
correctness, but the sort of trashing behavior we see here, as well as
the substance abuse, obesity, anomie, depression, racial prejudices
and boredom of corporate programmers, mean that today most
"programmers" lack the ability to conduct structured walkthroughs.
Well my peers at the time all had their own projects and were happy to
trust me that the software worked. They used it from time to time and I
got feedback. The real users were the install/deinstall techs, and they
were quite happy too.

Corporatese. Tell me, how is it possible for people to be "happy" on
the job? It's not a question of whether anyone was happy, it's a
question of whether your software was correct.
 
S

spinoza1111

He confirmed it, then, using "optimization" as a "reason", when the
issue of "optimization" has little to do with the standardization of a
language. K&R unwittingly, and C99 with malign intent, blessed invalid
reordering at a time when we already knew how to identify code that
could without error be reordered. Seebach went along because he'd not
completed a compilers course that had included the study of how we
identify code "known not to have side effects" and operations that can
be mathematically reordered.

Optimization was used as a pretext, an excuse.

The standard, statistically, continued to be Microsoft's C compilers
for better or worse. Better in that the developers at Microsoft seem
to have believed in precedence followed by left to right. Worse in the
case of the bug I found on behalf of Nash. Being passive-aggressive,
the Standardizers ignored Microsoft, being unwilling to confront its
market power.
 
S

spinoza1111

spinoza1111 wrote:


That doesn't in the slightest affect the argument you presented. If
failure to attend CS classes means one is unqualified to express
opinions about the subject or sit on CS-related standards committees,
then Dijkstra was unqualified. The reason for his failure to attend
classes is surely irrelevant. If one's inability to attend those
classes is sufficient for that person to be considered qualified,
then by your argument a myxomatosis-crazed rabbit is qualified.

Wow. Corporate thinking on drugs. My dear boy, once again. Computer
Science classes apart from seminars on breaking news DID NOT EXIST
when Dijkstra was starting out because he was busy CREATING THE
CONTENT for later classes, all the while enduring stupid (and I do
mean stupid) remarks like yours from corporate drones. Had they
existed, he would have been delighted to attend as I was delighted to
attend the first computer science class given at my university in
1970.

Peter Seebach is "no Edsger Dijkstra". Because he believes that some
sort of vague, inchoate "need to optimize" justifies not standardizing
order of evaluation, he thinks in terms of bullet points like any
corporate drone, or the military officers that used Power Point to get
good men killed in Fallujah because the bullet points said the town
was clear. Dijkstra would have asked why and would have discovered on
his own that you can preserve order of evaluation while transforming
order of execution in the back end.
 
S

spinoza1111

... except for the short-circuit cases, of course.

Oops. Yes. It is a constant that a||b and a&&b short circuit in any C
I know of. Likewise the standard could have made this LEFT TO RIGHT
rule orthogonal by applying it to the obvious cognates of logical
addition and multiplication, BUT the Standards creeps did not do so,
because vendors other than Microsoft wanted them to bless, as much as
possible, their compilers.

Turns out that "for most, nearly all" compilers, one can count on a()||
b() not evaluating b() when a() is true. But if you can't count on the
implied order of this for addition, this is wildly not orthogonal and
a mess.

Actual compilers (with exceptions such as function parameters)
evaluated left to right, enforcing a de facto standard, which I'm
willing to bet is followed by the actual code of people who here claim
that the rule is NOT to do so. But in order to create secrets, which
to the anti-intellectual trump knowledge, they gravely intone to saps
that the order of evaluation is undefined...to protect their jobs.
 
S

Seebs

He confirmed it, then,

The issue never came up. No one suggested that this should be changed.
using "optimization" as a "reason", when the
issue of "optimization" has little to do with the standardization of a
language.

Not true when that language is used for high-performance computing.
Many users depend on the sorts of compiler magic enabled by that
feature.
K&R unwittingly, and C99 with malign intent, blessed invalid
reordering at a time when we already knew how to identify code that
could without error be reordered.

Simply not true. This isn't "invalid reordering". It was a considered
decision, back in the day, to allow for this sort of reordering so that
compilers could better adapt code to processors with different semantics.
Seebach went along because he'd not
completed a compilers course that had included the study of how we
identify code "known not to have side effects" and operations that can
be mathematically reordered.

No. First off, again, the issue simply was not raised that I know of;
if it was, surely you can give us the defect number of the defect someone
raised that would have brought the question to light? No?

Secondly, it's not a question of compiler courses, but of philosophy -- a
subject you should appreciate! Philosophically, C's policy has been to
let you specify things if you mean to, and let you leave them unspecified
when you want the compiler to try to get better performance and you have
no preference between a couple of choices.
Optimization was used as a pretext, an excuse.

Again, unsupported.

Why do you keep making these dogmatic claims about events you obviously
never witnessed, in a field of inquiry where you've aggressively and
actively pursued militant ignorance?

-s
 
S

Seebs

Oops. Yes. It is a constant that a||b and a&&b short circuit in any C
I know of. Likewise the standard could have made this LEFT TO RIGHT
rule orthogonal by applying it to the obvious cognates of logical
addition and multiplication, BUT the Standards creeps did not do so,
because vendors other than Microsoft wanted them to bless, as much as
possible, their compilers.

Again, you keep making this stuff up.
Turns out that "for most, nearly all" compilers, one can count on a()||
b() not evaluating b() when a() is true.

Actually, you can count on that for every C compiler. It used to be the
case that you could count on it merely for every C compiler I have ever
heard of anyone using or encountering. But, thanks to the magic of
standardization, since 1989, it has been the case that you could count on
it for all C compilers -- because anything where you can't count on that
isn't a C compiler anymore. :)
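
A small sketch of that guarantee, with invented functions a and b: the
|| line has one possible output, while the + line may print its two
letters in either order.

    #include <stdio.h>

    static int a(void) { puts("a"); return 1; }
    static int b(void) { puts("b"); return 0; }

    int main(void)
    {
        /* Guaranteed by the standard: a() is evaluated first, and because
           it returns nonzero, b() is never called. Output is just "a". */
        if (a() || b())
            puts("short-circuited");

        /* Not guaranteed: with '+' the operands may be evaluated in
           either order, so "a" and "b" may appear in either order here. */
        int sum = a() + b();
        printf("sum = %d\n", sum);
        return 0;
    }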
But if you can't count on the
implied order of this for addition, this is wildly not orthogonal and
a mess.

I don't think the word "orthogonal" is what you mean. You're probably
trying for "parallel". "Orthogonality" is not the right word for "everything
behaves the same way".
Actual compilers (with exceptions such as function parameters)
evaluated left to right enforcing a defacto standard,

So you've claimed, but you haven't shown any evidence of it.
which I'm
willing to bet is followed by the actual code of people who here claim
that the rule is NOT to do so. But in order to create secrets, which
to the anti-intellectual trump knowledge, they gravely intone to saps
that the order of evaluation is undefined...to protect their jobs.

Beautiful. The actions of the committee are now simultaneously to deprive
compiler developers of jobs *and* to protect the jobs of the committee
members, most of whom are compiler developers.

You're now arguing, with what passes for sincerity from you, that something
which was published in every C reference from 1978 on, was intended to "create
secrets" when it was not changed in 1989. That's stupid. If we'd changed it,
you could argue that it was to create a secret -- people in the know would
rely on the now-specified order of evaluation, while most people would be
unable to do so. But as is, there is no secret, and never was.

I had failed to anticipate just how deep the rabbit hole goes. You
are amazing. Please continue to rant incoherently for my amusement. I
think I'll be making you into a drinking game shortly; please be sure to
post self-contradictory things with extra frequency on Friday and Saturday
nights.

-s
 
S

Seebs

My dear boy, once again. Computer
Science classes apart from seminars on breaking news DID NOT EXIST
when Dijkstra was starting out because he was busy CREATING THE
CONTENT for later classes, all the while enduring stupid (and I do
mean stupid) remarks like yours from corporate drones.

Doesn't matter.

You claimed that anyone who had not taken such classes was unqualified.
Now you invent a new category of people who can be qualified without taking
such classes.
Had they
existed, he would have been delighted to attend as I was delighted to
attend the first computer science class given at my university in
1970.

Fascinating. Was it because it was the first class they ever offered that
you kept the book, unopened, in mint condition?
Peter Seebach is "no Edsger Dijkstra".

This is entirely true, although people occasionally compare me to him for
various reasons. Mostly, as I recall, my obsession with doing things in
provably correct ways rather than relying on coincidences.
Because he believes that some
sort of vague, inchoate "need to optimize" justifies not standardizing
order of evaluation, he thinks in terms of bullet points like any
corporate drone, or the military officers that used Power Point to get
good men killed in Fallujah because the bullet points said the town
was clear.

I believe you should write the Guinness Book of World Records. While many
people regard the non sequitur as pretty much sewn up, I think you may be
a genuine contender.

Let's write this one out:
P1: Seebach believes that some sort of vague, inchoate "need to
optimize" justifies not standardizing order of evaluation.
C1: Seebach thinks in terms of bullet points like any corporate
drone.

I think, and I'm going a little out on a limb here, since I never actually
*finished* my philosophy degree, that there may be a tiny little flaw, barely
worth mentioning, which is that you have only one premise, and no connection
between that premise and your conclusion.

Actually, though, that's probably harmless, because I think your premise
is probably false. See, I've never met a vague, inchoate "need to optimize"
in my life. What I have run into is the general rule that compilers know
about things like cache lines, and while I've certainly heard of them, I'd
have a very hard time correctly writing code that would be nicely suited
to a variety of architectures. Even if I were good enough to do that, I'd
have learned on the 68k or something of similarly incandescent irrelevance
to the modern computing world. Whereas, interestingly enough, compiler
writers have a great deal of information about the current state of register
sets, etcetera. If one side of an expression can be elegantly implemented
using only three registers, but the other side needs six, and there's about
to be a function call which will smash a couple of them, why, that's the
sort of thing that might make it matter which order you evaluate things
in.

I don't think this is particularly vague or anything.
Dijkstra would have asked why and would have discovered on
his own that you can preserve order of evaluation while transforming
order of execution in the back end.

You can, but not always very well.

I believe it was Dijkstra who proved that you could always eliminate
goto statements -- and established the costs you would usually pay in
performance to do so. For most applications, the clarity is probably
worth it. It is, however, nice to have the option of saying "actually,
we do care about five cycles or two bytes of memory."

-s
 
S

Seebs

Similarly, Seebs was overly academic until he revealed that he hasn't
got a CS degree, at which point he becomes a "school-leaver" (forget
the psychology degree, because apparently that doesn't count). It's a
question of any stick to beat 'em with, and never mind that the stick
came out of a matchbox.

At this point, I feel obliged to help Spinny out: I never graduated from
high school, nor do I have a GED or anything comparable. Oh, and I
have a learning disorder! Also, depending on whom you ask, I may technically
be considered "gay". And I like cats but not dogs. Finally, and I think
this is particularly important, I really liked Death Magnetic and it's
probably my favorite Metallica album.

I await with bated breath discovering how this information can be used
to further explain my deep and abiding irrelevance to the field of modern
programming language design. If I don't see at least one comment about
how all competent programmers recognize that Cliff Burton was the soul
of Metallica within a week, I shall be devastated.

-s
 
K

Keith Thompson

Seebs said:
Why do you keep making these dogmatic claims about events you obviously
never witnessed, in a field of inquiry where you've aggressively and
actively pursued militant ignorance?

You don't really expect to get a meaningful answer to that question,
do you?
 
