Because you make, for example, the claim that nondeterminacy makes a
language more efficient.
It makes it more amenable to efficient implementations.
Whereas in my book I point out that strictly
speaking, a language can neither be efficient nor inefficient.
It can, however, be amenable to an efficient implementation, or extremely
difficult to implement efficiently.
You seem incapable of understanding how, when, and why we optimize.
In the real world, optimization relies on finite resources. One of the
things that people do when building optimizers is allocate finite
resources both to implementing the optimizer, and to the execution of
the optimizer.
There are compilers out there which can spend hours to days optimizing
specific hunks of code... And this can be *cost-effective*. Keep that
in mind.
So, the question is: Are there real-world cases where the option of
reordering evaluations would permit optimizations which would otherwise be
prohibited? The answer is a completely unambiguous "yes".
Now, you might argue, perhaps even persuasively, that either such
optimizations will change the behavior of the code, or they could be
performed anyway. True enough.
But! What you have apparently missed is that they might change the
behavior of the code *in a way that the user does not care about*.
Let's take the example from Schildt's text:
x = f1() + f2();
As he correctly notes, the order of execution of f1 and f2 is not defined.
Now, consider:
#include <stdio.h>

int
f1(void) {
    puts("f1");
    return 3;
}

int
f2(void) {
    puts("f2");
    return 5;
}
Obviously, these functions have side effects. Reordering them results in
an observable change.
However, it is not at all obvious that the user cares which is which.
If we were in a case where the ABI's rules made it more expensive to perform
operations in one order than another, the option of reordering them could
improve performance. If it produces changes in behavior, that *might* be
a problem -- unless those changes are harmless. If all it changes is the
order of log messages, the developer might well prefer a faster form to
a slower one, even though it "changes" the output -- because the change is
irrelevant.
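If you want to see it for yourself, here's a minimal driver (my own sketch,
not from Schildt's text). Whichever order the compiler picks, x comes out 8;
the only thing that can vary is the order of the two lines of output:

#include <stdio.h>

int f1(void) { puts("f1"); return 3; }
int f2(void) { puts("f2"); return 5; }

int
main(void) {
    int x = f1() + f2();   /* whether "f1" or "f2" prints first is unspecified */
    printf("x = %d\n", x); /* x is 8 either way */
    return 0;
}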
But it is what he said.
(1) Schildt was not denying a minor free(NULL) because especially in
teaching, de minimis non curat lex: the law does not deal in
trivialities. Only an INCOMPETENT teacher insists on this level of
detail when introducing students to essentials. The tyro has no use
for free(NULL).
Ahh, but this is not a book labeled "Baby's First C Primer". It claims
to be "The Complete Reference". You could dispute whether the original
text ought to have mentioned this, but consider the description of
free() on page 434:
free() must only be called with a pointer that was previously
allocated with the dynamic allocation system's functions (either
malloc(), realloc(), or calloc()). Using an invalid pointer in
the call will probably destroy the memory management system and
cause a system crash.
Here, we are not talking about an introduction to the essentials; we are
talking about a *reference work*, billed as such, and it states something
flatly untrue. It is absolutely the case that a null pointer is "an invalid
pointer".
(2) It's crazy to attack Herb for saying that in effect (x)[A(x)] (for
all x, property A is true) when he does not go on to say, there must
be only ONE free(). The student who's awake knows already that free(x)
returns x to available storage and that because of this x has no
referent. You're asking him when speaking to repeat other facts in
such a way that would only confuse. You say he's clear, and in this
you are right. You want him to be as unclear as the Standard would be
for the beginner!
People have asked on clc before why free(x) is failing when it worked the
previous time. After all, it was previously allocated.
Outside of programming, you need to start assuming the best of people
and not the worst.
I do! For instance, when I first saw one of your long posts on this topic,
I responded with careful thoughtful analysis to many of your claims and
asked for evidence for them, which was not forthcoming.
Having realized that you're generally unwilling to support your claims,
I've stopped bothering; now I'm just killing time and enjoying the fun.
For boneheads, who need to free() more than once because their code is
leaky.
Have you ever heard the phrase "belt and suspenders" applied to computing?
If you want to write robust code, it is not enough to be sure that someone
should have done something right -- even if you are that someone and have
verified it. You should be careful anyway.
Consider the following data structure:
struct foo {
    int x;
    int y;
    unsigned char *name;
};
You might find that you have a number of these, and that most but not all of
them have a name. What should foo_free() look like?
void
foo_free(struct foo *p) {
    if (p) {
        free(p->name);
        free(p);
    }
}
Now, why are we checking 'if (p)'? Because if an error elsewhere in the
program results in inadvertently passing a null pointer in, the reference
to p->name would invoke undefined behavior.
In pre-ISO C, you had to write:
if (p) {
    if (p->name)
        free(p->name);
    free(p);
}
and this usage, though common, was obnoxious -- and people sometimes forgot.
Fixing this improved the language.
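To see why that tolerance is worth having, here's a sketch built around the
same struct. foo_new() is hypothetical, invented purely for illustration; the
point is that the error path and the normal path can share one cleanup
routine only because free() and foo_free() both shrug at null pointers:

#include <stdlib.h>
#include <string.h>

struct foo {
    int x;
    int y;
    unsigned char *name;
};

void
foo_free(struct foo *p) {
    if (p) {
        free(p->name); /* fine even when name is null */
        free(p);
    }
}

/* Hypothetical constructor: a foo may legitimately have no name. */
struct foo *
foo_new(const char *name) {
    struct foo *p = malloc(sizeof *p);
    if (!p)
        return NULL;
    p->x = 0;
    p->y = 0;
    p->name = NULL;
    if (name) {
        p->name = malloc(strlen(name) + 1);
        if (!p->name) {
            foo_free(p); /* partial failure: one cleanup path for everything */
            return NULL;
        }
        strcpy((char *) p->name, name);
    }
    return p;
}

int
main(void) {
    struct foo *a = foo_new("example");
    struct foo *b = foo_new(NULL); /* nameless, and foo_free still copes */
    foo_free(a);
    foo_free(b);
    return 0;
}

Every object foo_new() returns, named or not, goes through the same
foo_free(), and nobody has to remember which fields were actually allocated.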
Naw, I just used a fancy French word. Do you know what a *frisson* is?
Yeah, I just don't see the relevance.
Schildt nowhere claimed that the stack must be laid out in any
particular way any more than a math teacher says that in order to
conform to Euclid, the triangle must be the same size as that which he
draws on the board.
Incorrect. A math teacher might refer to "a" triangle, but will rarely
refer to "the" triangle.
Again, I'll quote for you directly from the book:
Figure 16-1 shows conceptually how a C program would appear in memory.
The stack grows downward as it is used.
+----------------+
|     Stack      |
|       |        |
|       v        |
+----------------+
|       ^        |
|       |        |
|  Free memory   |
|      for       |
|   allocation   |
+----------------+
|Global variables|
+----------------+
|                |
|    Program     |
|                |
+----------------+
(Figure 16-1 lovingly reproduced in vi, use fixed-pitch font plz.)
He continues:
"Memory to satisfy a dynamic allocation request is taken from the
heap, starting just above the global variables and growing towards the
stack. As you might guess, under fairly extreme cases the stack may
run into the heap."
This is not a mere illustration of one possible way of doing things; this is
a clear assertion that they are done in a particular way, and that that
particular way has consequences the reader must be aware of.
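If you want to see how little that picture can be trusted, print a few
addresses yourself. This is a throwaway sketch; nothing in C says these
addresses compare in any particular order, and on real systems the layout
differs from Figure 16-1 often enough that it can't be taught as fact:

#include <stdio.h>
#include <stdlib.h>

int global;

int
main(void) {
    int local;
    void *dynamic = malloc(1);

    printf("global: %p\n", (void *) &global);
    printf("heap:   %p\n", dynamic);
    printf("stack:  %p\n", (void *) &local);

    free(dynamic);
    return 0;
}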
The student needs, at the cost of some illusions which can be
unlearned at a later date, to be helped over difficult ground. When
you've both taught and written a computer book, you'll have standing
in this field. Lists of errors don't give you standing.
Uh.
I have both taught (though not in college), and written a computer book.
I wrote a book on shell programming, about which only one complaint has been
made so far: the publisher insisted on tacking the word "beginning" onto
something that the most experienced shell programmers I know have all
considered rather advanced.
I have been, at some length, told that I did a very good job of presenting
the shell in a way that allows new readers to pick up the basics without
misleading them.
In your dreams. A stack, like a right triangle, is a "the" not an "a".
Herb, by referring to "the" stack was referring EITHER to the stack he
illustrated or its functional equivalent.
Ahh, but "its functional equivalent" might never "run into the heap".
He really was referring to a specific thing, not to an abstract
generalization.
In this you concede game, set and match. As I have shown, "clarity"
leads to understanding: understanding is knowledge of that which is
true.
You haven't shown it, you've asserted it. At most, you've established
that I was mistaken to claim that he was clear, but really, the dictionary
hasn't got your back this time.
It wouldn't have worked in my classes in C for the IBM Mainframe at
Trans Union in Chicago, and probably not even in my classes in C for
prospective computer science majors at Princeton. As it happened, a
few students at Trans Union complained that I used too much math. My
remit at Princeton wasn't to teach computer science and abstract data
structures. It was to get some students started in C.
That's nice.
Give newbies the credit most deserve.
I have found that relying on the reader's telepathy makes for a poor
learning experience. Fundamentally, while it's true that *most* readers
may be able to guess what you meant even when you say something else, it
is not true that *all* will -- and even "most" won't get it right all the
time.
If it were impossible to write easy-to-understand text without introducing
grave errors that the reader must magically guess were intended to be viewed
as possible examples rather than definitions, I think you would have a case;
we could reasonably assert that we have to make the books that will help the
largest number of people learn.
It's not impossible, though.
Kim King's _C Programming: A Modern Approach_ discusses "stacks" only in
terms of the abstract data type. It does not refer to the storage of
automatic objects as "a stack", and yet, I've yet to hear of anyone being
confused by King's particularly lucid explanation of how function calls
work.
I think you padded "C: the Complete Nonsense" by counting what Herb
did not say as positive sins of omission as if he should have written
a computer science treatise AND a standard.
Not really. If he says something which is untrue, that's a mark against
the accuracy of the book. Even just qualifying claims with phrases like
"on many common machines" would have been enough to keep me from worrying,
because it would warn the reader not to rely too heavily on them.
The problem with your model is that the reader might as well assume that,
while printf was used on some specific systems, other systems will have a
totally different formatted printing routine. Without some marker for
when he's talking about the language in general, and when he's talking about
a particular compiler, the reader can't be expected to know which things
generalize and which don't.
I don't think it would have been appropriate to prefix use of the
stack with a pompous prolegomenon on the Idea of the Stack.
Perhaps so -- in which case you could omit it entirely, since you can explain
C quite clearly without referring to "the stack".
Computer
people KNOW that things can be done in different ways.
But again, remember, we're talking about newbies. Notice how newbies come
here with questions like "I tried to type ^Z and it didn't get EOF", because
they *don't* know that things can be done multiple ways; they know that ^Z
is an EOF character, or that someone told them that anyway, and they're trying
to figure out what's happening.
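The portable story is simpler than any one key: read until getchar() returns
EOF and let the system decide how the user signals it (^D at the start of a
line on most Unix terminals, ^Z on a Windows console). A minimal sketch:

#include <stdio.h>

int
main(void) {
    int c; /* int, not char, so EOF can be told apart from every character */

    while ((c = getchar()) != EOF)
        putchar(c);

    return 0;
}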
-s