It makes it more amenable to efficient implementations.
THIS IS NONSENSE. "Amenable" is a word people use in low level
corporate jobs when they don't know what they are talking about. For
example, in a corporation, "I'm amenable to that" means "I'm getting
screwed, I have no health insurance, but you're the only employer in
East Jesus and I want to climb Saddle Mountain once more before I die,
therefore I will do as you say".
There are two ways to make code efficient:
(1) As in assembler, manipulate the code "by hand"
(2) Use a compiler optimizer
Given the nondeterminacy of a()+b(), it is impossible to do (1) in the
sense of computing something inside a() that is then used inside b().
And it is unnecessary for the order of evaluation of a()+b() to be
nondeterministic in order to do (2). This is because modern optimizing
compilers construct a data structure (often but not always a DAG)
which finds all knowable dependencies and can rearrange a and b only
when it's safe.
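To make the dependence point concrete, here is a sketch of my own (the
shared variable and the function names are invented for illustration):

static int shared;                      /* the dependence carrier */

static int a(void) { shared = 10; return 1; }
static int b(void) { return shared + 1; }   /* reads what a() wrote */

int sum(void)
{
    /* The language fixes the order of these two statements, yet an
     * optimizer remains free to rearrange the calls -- but only if its
     * dependence analysis proves them independent.  The write and read
     * of `shared` appear as an edge in its graph, so it leaves the
     * order alone; delete `shared` and the calls could be scheduled in
     * either order under the as-if rule. */
    int t1 = a();
    int t2 = b();
    return t1 + t2;
}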
Optimization works better on more-deterministic languages because the
higher determinism means that the optimizer has more information. For
example, C makes pointer analysis difficult, according to Aho et al.:
"Pointer-alias analysis for C programs is particularly difficult,
because C programs can perform arbitrary computations on pointers." -
Compilers: Principles, Techniques, and Tools, 2nd ed., p. 934.
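A small sketch of my own (the function names are invented) shows what
that difficulty looks like in practice; whether the compiler can hoist
the load of *factor depends entirely on what it can prove about
aliasing:

/* Without restrict, the compiler must assume that factor might point
 * into dst[], so *factor is reloaded on every iteration. */
void scale(int *dst, const int *factor, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] *= *factor;
}

/* With C99 restrict, the programmer promises no aliasing, and the
 * load of *factor can be hoisted out of the loop. */
void scale_restrict(int * restrict dst, const int * restrict factor, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] *= *factor;
}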
C programmers like to brag that their "language" is more
"efficient" (although as I point out in my own book, this statement is
another assault on clear English).
But on the one hand, C is significantly more difficult to optimize
automatically than C Sharp or Java. On the other, we find that the
vaunted ability to hand-optimize it is ringed about with weird and
non-orthogonal strictures created by vendor greed and the cowardice of
the people who make "standards".
It can, however, be amenable to an efficient implementation, or extremely
difficult to implement efficiently.
I'm not "amenable to that" for the reasons stated above.
In the real world, optimization relies on finite resources. One of the
things that people do when building optimizers is allocate finite
resources both to implementing the optimizer, and to the execution of
the optimizer.
There are compilers out there which can spend hours to days optimizing
specific hunks of code... And this can be *cost-effective*. Keep that
in mind.
So, the question is: Are there real-world cases where the option of
reordering evaluations would permit optimizations which would otherwise
be prohibited? The answer is a completely unambiguous "yes".
Now, you might argue, perhaps even persuasively, that either such
optimizations will change the behavior of the code, or they could be
performed anyway. True enough.
But! What you have apparently missed is that they might change the
behavior of the code *in a way that the user does not care about*.
Let's take the example from Schildt's text:
x = f1() + f2();
As he correctly notes, the order of execution of f1 and f2 is not defined.
Now, consider:
#include <stdio.h>

int
f1(void) {
    puts("f1");
    return 3;
}

int
f2(void) {
    puts("f2");
    return 5;
}
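For concreteness, here is a driver of my own (not part of the quoted
example) that exercises them:

/* assumes the #include <stdio.h> and the f1()/f2() definitions above */
int main(void)
{
    int x = f1() + f2();    /* prints "f1" then "f2", or "f2" then "f1";
                               which one is unspecified */
    printf("%d\n", x);      /* the sum is 8 either way */
    return 0;
}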
Obviously, these functions have side effects. Reordering them results in
an observable change.
However, it is not at all obvious that the user cares which is which.
"Cares about", like "amenable", is another corporate barbarism, and
this example is absolutely unprofessional and appalling.
"The user" is also barbaric. What this (to Dijkstra untranslatable)
word means is that the programmer wishes at the point of use to be **
relieved of the responsibility for thinking **.
And amazingly you have claimed that in all cases of the above it
doesn't matter, now or at any time, which string comes out first,
because you are too unimaginative to speculate that the output of
puts() may not be directed at a screen or a piece of paper.
It's as if you're unaware of one of the major and most useful features
of unix, piping and redirection. If one version of the code is plugged
into another program and the second program parses the output of the
first, you will BREAK the second program if you use a different
compiler that sequences differently! And because for most compilers a
de facto order is always enforced, you will never know this until the
last possible minute.
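Concretely, imagine a downstream consumer like the sketch below
(entirely hypothetical, from no post or book), fed by something like
./producer | ./consumer:

#include <stdio.h>
#include <string.h>

/* Hypothetical consumer.  It expects the producer's first line to be
 * "f1"; a compiler that makes the producer emit "f2" first silently
 * breaks it. */
int main(void)
{
    char line[64];

    if (fgets(line, sizeof line, stdin) == NULL)
        return 1;
    if (strcmp(line, "f1\n") != 0) {
        fprintf(stderr, "protocol error: expected \"f1\" first\n");
        return 1;
    }
    /* ... parse the rest of the stream ... */
    return 0;
}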
Furthermore, "getting answers in random order" is NOT "efficient". It
is wrong.
If we were in a case where the ABI's rules made it more expensive to perform
operations in one order than another, the option of reordering them could
improve performance. If it produces changes in behavior, that *might* be
a problem -- unless those changes are harmless. If all it changes is the
order of log messages, the developer might well prefer a faster form to
a slower one, even though it "changes" the output -- because the change is
irrelevant.
One version of a program displays messages (or sends them to another
program) when compiled with one C compiler in one order, and in a
different order when compiled with another. This is not acceptable at
all.
But it is what he said.
Ahh, but this is not a book labeled "Baby's First C Primer". It claims
to be "The Complete Reference". You could dispute whether the original
text ought to have mentioned this, but consider the description of
free() on page 434:
free() must only be called with a pointer that was previously
allocated with the dynamic allocation system's functions (either
malloc(), realloc(), or calloc()). Using an invalid pointer in
the call will probably destroy the memory management system and
cause a system crash.
Here, we are not talking about an introduction to the essentials; we are
talking about a *reference work*, billed as such, and it states something
flatly untrue. It is absolutely the case that a null pointer is "an invalid
pointer".
No, NULL is not "invalid" if, as you say, free(NULL) is valid. By
"invalid" Herb clearly means a pointer that doesn't point to an
allocated region, or one that has already been freed.
It's crazy to attack Herb for saying, in effect, (∀x)[A(x)] (for
all x, property A is true) when he does not go on to say that there
must be only ONE free(). The student who's awake already knows that
free(x) returns x to available storage and that because of this x has
no referent. You're asking Herb, when addressing that student, to
repeat other facts in a way that would only confuse. You say he's
clear, and in this you are right. You want him to be as unclear as the
Standard would be for the beginner!
People have asked on clc before why free(x) is failing when it worked the
previous time. After all, it was previously allocated.
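The usual shape of that question, reconstructed as a sketch (not an
actual clc post):

#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *buf = malloc(64);

    if (buf == NULL)
        return 1;
    strcpy(buf, "hello");

    free(buf);      /* fine: buf was previously allocated */
    free(buf);      /* undefined behavior: "previously allocated" is not
                       enough once it has already been freed */
    return 0;
}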
Outside of programming, you need to start assuming the best of people
and not the worst.
I do! For instance, when I first saw one of your long posts on this topic,
I responded with careful thoughtful analysis to many of your claims and
asked for evidence for them, which was not forthcoming.
Having realized that you're generally unwilling to support your claims,
I've stopped bothering; now I'm just killing time and enjoying the fun.
I have of course supported my claims. And now you confess to
unprofessional levels of insincerity. And isn't "killing time and
enjoying the fun" what trolls do? Fortunately, you're not a good
troll, instead you're just making a fool of yourself.
Have you ever heard the phrase "belt and suspenders" applied to computing?
Yes, from the sort of people who use "amenable". And if the emperor as
here has no clothes, a belt and suspenders won't help much.
If you want to write robust code, it is not enough to be sure that someone
should have done something right -- even if you are that someone and have
verified it. You should be careful anyway.
Starting by not using C.
Consider the following data structure:
struct foo {
    int x;
    int y;
    unsigned char *name;
};
You might find that you have a number of these, and that most but not all of
them have a name. What should foo_free() look like?
void
foo_free(struct foo *p) {
    if (p) {
        free(p->name);
        free(p);
    }
}
Now, why are we checking 'if (p)'? Because if an error elsewhere in the
program results in inadvertently passing a null pointer in, the reference
to p->name would invoke undefined behavior.
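A hedged usage sketch, assuming the struct foo and foo_free() shown
above; foo_new() is my invention and not from the post:

#include <stdlib.h>
#include <string.h>

/* Hypothetical constructor: name may be NULL for anonymous objects. */
struct foo *foo_new(const char *name)
{
    struct foo *p = malloc(sizeof *p);

    if (p == NULL)
        return NULL;
    p->x = 0;
    p->y = 0;
    p->name = NULL;
    if (name != NULL) {
        p->name = malloc(strlen(name) + 1);
        if (p->name == NULL) {
            free(p);
            return NULL;
        }
        strcpy((char *) p->name, name);
    }
    return p;
}

int main(void)
{
    struct foo *f = foo_new("example");   /* may be NULL on failure */

    /* ... use f only after checking for NULL ... */
    foo_free(f);    /* safe either way: foo_free() checks for NULL */
    return 0;
}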
This nonsense is what I find most tiresome about C, since its
incoherent claim to efficiency is undercut by the need to wear belt,
suspenders, two condoms and a raincoat.
In pre-ISO C, you had to write:
if (p) {
    if (p->name)
        free(p->name);
    free(p);
}
and this usage, though common, was obnoxious -- and people sometimes forgot.
Fixing this improved the language.
From a mess to a mess powered.
Yeah, I just don't see the relevance.
Incorrect. A math teacher might refer to "a" triangle, but will rarely
refer to "the" triangle.
Again, I'll quote for you directly from the book:
Figure 16-1 shows conceptually how a C program would appear in memory.
The stack grows downward as it is used.
+----------------+
|     Stack      |
|       |        |
|       v        |
+----------------+
|       ^        |
|       |        |
|  Free memory   |
|      for       |
|   allocation   |
+----------------+
|Global variables|
+----------------+
|                |
|    Program     |
|                |
+----------------+
(Figure 16-1 lovingly reproduced in vi, use fixed-pitch font plz.)
You missed Herb's "conceptually" and his "would". These words mean
that "this is an example, Otto".
He continues:
"Memory to satisfy a dynamic allocation request is taken from the
heap, starting just above the global variables and growing towards the
stack. As you might guess, under fairly extreme cases the stack may
run into the heap."
This is not a mere illustration of one possible way of doing things; this is
a clear assertion that they are done in a particular way, and that that
particular way has consequences the reader must be aware of.
I would not write it Herb's way, but it made sense, since everything
he's saying is under the scope of "conceptually" and "would". It is
subjunctive, one possibility among others. He's talking about the
non-virtual and constrained memory of his time, in which job one was
often preventing a stack/heap collision.
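For what it's worth, a particular implementation's layout can be peeked
at with a sketch like the one below (mine, purely illustrative: the
ordering of the regions, their contiguity, and the addresses themselves
are implementation-specific, and address-space randomization scrambles
them on most modern systems):

#include <stdio.h>
#include <stdlib.h>

int global_var;                     /* static storage ("Global variables") */

int main(void)
{
    int local_var;                  /* automatic storage (the "Stack") */
    void *heap_var = malloc(16);    /* dynamic storage ("Free memory
                                       for allocation") */

    printf("global : %p\n", (void *) &global_var);
    printf("heap   : %p\n", heap_var);
    printf("stack  : %p\n", (void *) &local_var);

    free(heap_var);
    return 0;
}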
Real programming students are, to a striking extent, selected from
populations excluded by classism and racism from high level university
education, and they often combine an interest in math with inability
to think in terms of abstractions. They need to see the abstraction
implemented in a "real world" situation once and can then be trusted
to generalize.
The great Edward G. Nilges, in chapter 2 (A Brief Introduction to
the .Net Framework) in his redoubtable book "Build Your Own
Goddamn .Net Language and Compiler" quotes Marx: "all that is solid
melts into air". This means that to work with any given generation of
technology, one needs to introduce mechanisms that are later out of
date as instances of the pure idea.
Uh.
I have both taught (though not in college), and written a computer book.
I wrote a book on shell programming, which has only had one complaint made
so far about it, which is that the publisher insisted on tacking the word
"beginning" onto something that the most experienced shell programmers I
know have all considered to be rather advanced.
I have been, at some length, told that I did a very good job of presenting
the shell in a way that allows new readers to pick up the basics without
misleading them.
Amazon link, please.
Ahh, but "its functional equivalent" might never "run into the heap".
He really was referring to a specific thing, not to an abstract
generalization.
It might not run into the heap but it will run out of room if the
programmer gets gay and recursively calls his code in a loop. And even
in a modern implementation, the heap is the other region beyond the
stack and the code. Even in a modern implementation it makes sense to
picture the stack and heap as fighting each other for storage while
the code stands idly by.
Indeed, at the most abstract level, what is there at runtime but some
sort of stack, some sort of heap, and a space for code? Do tell us Mr
Shell expert...
You haven't shown it, you've asserted it. At most, you've established
that I was mistaken to claim that he was clear, but really, the dictionary
hasn't got your back this time.
That's nice.
I have found that relying on the reader's telepathy makes for a poor
learning experience. Fundamentally, while it's true that *most* readers
may be able to guess what you meant even when you say something else, it
is not true that *all* will -- and even "most" won't get it right all the
time.
If it were impossible to write easy-to-understand text without introducing
grave errors that the reader must magically guess were intended to be viewed
as possible examples rather than definitions, I think you would have a case;
we could reasonably assert that we have to make that compromise.
You have no standing in either speaking about computer science or
practical instruction.
"How progress and regression are intertwined today, can be gleaned
from the concept of technical possibilities. The mechanical processes
of reproduction have developed independently of what is reproduced and
have become autonomous. They count as progressive, and anything which
does not take part in them, as reactionary and narrow-minded. Such
beliefs are promoted all the more, because the moment the super-
gadgets remain unused, they threaten to turn into unprofitable
investments. Since their development essentially concerns what under
liberalism was called “packaging,” and at the same time crushing the
thing itself under its own weight, which anyway remains external to
the apparatus, the adaptation of needs to this packaging has as its
consequence the death of the objective claim."
- TW Adorno, Minima Moralia
"The moment the super-gadgets are unused", writes Adorno, "they
threaten to turn into unprofitable investments". This updates Marx's
insight that the factory owner must run the factory day and night to
amortize his investment even if sleepless children must fall into the
machine and be killed.
C, as one of Adorno's "super-gadgets", needed from the start, and
continues to need, a Legion of the Undead to follow it. In the 1970s,
given that most of my friends were going "back to the land", I
wondered who would be interested in or support the new technology that
was already appearing. I was amazed to find that for material reasons,
these very hippie-assed scoundrels were driving the technological bus
by 1978.
This was because the super-gadgets of the time, representing such an
enormous risk and investment on the part of men who weren't hippies,
and who, like Ed Roberts, were former military sorts, required use in
the form of programming, and the entrepreneurs of that time were
willing to let hippies work on their systems as an alternative to
losing everything. The crackdown came as soon as Reagan was elected.
But by this time, people had been trained to follow the "super
gadgets" by way of negative and positive conditioning (where the
negative conditioning was almost as seen in the 1950s science fiction
novel The Atlantic Abomination).
However, for them to be loyal to abstract computer science, and to
criticise "gadgets" like C from this perspective, would have destroyed
wealth; therefore people were carefully divided into tribes, each
passionately loyal, not to truths of mathematics, but to C or
whatever.
It seemed to me at the time (for example, in Robert Tinney's crude
paintings of technical concepts on the covers of Byte Magazine) that
everyone was thinking in childish pictures and as a result becoming
the overly loyal followers of one paradigm or another, and this was
moronizing them while enriching the few. The men whom their loyalty
enriched didn't seem to me to give a rat's ass about software
correctness or the public interest.
Eerily, prophetically, writing in 1948, Adorno predicts "the death of
the objective claim" and here we see that death. Everyone's
passionately loyal, not to truth or even common decency, but to some
artifact, some gadget, some goddamn piece of shit programming language
past its sell-by date.
They do not know it, but what motivates them is the fact that rich
people need them to continue to use the artifact and to sing its
praises.