Benefit of not defining the order of execution


Tim Rentsch

Apparently several people are of the opinion that having language
semantics be more deterministic is better than being not as
deterministic, because... well, just because. To that I say,
just 'tisn't so.

Even if a clever optimizing compiler could recover (relative to C as
it is now) all the possible parallelism of a C-like language with a
defined left-to-right order of evaluation (and it can't, but never
mind that now), it still isn't automatically better to impose a
left-to-right evaluation order, or any fixed evaluation order. In
fact, doing that to C expressions would make C a worse language,
not a better one. Here's why.

If I see a piece of C code like, for example,

a = f( g(x), h(y) );

then I don't have to look at the definitions for g() and h() to know
they don't interact (by which I mean, interact in any nontrivial
way). The reason? If they did interact, the code would have been
written differently, to force a particular order of evaluation.


Of course, technically I don't know that g() and h() don't interact;
I know only that the person who wrote the code didn't think it was
important to force a particular order of evaluation. But knowing
the intent (or in this case, the lack of intent) of the code's
author is just as valuable here, or perhaps more valuable. I can
always go look at the definitions for g() and h() to see if they
interact, but I can't go back and look at what the code's author
was thinking.

Now consider the case where the language specifies a left-to-right
evaluation order. Let's look again at the example line. Now I have
to wonder if g() and h() interact; to find out I have to go read
their definitions. If they don't interact, I can breathe a big sigh
of relief and go back to reading the function where they were
called. But suppose they do interact; in that case I have to
wonder if the interaction was known and deliberate, or whether it
might have been unintentional. Short of going back and asking the
original author, there really isn't any way of knowing. Discovering
what the program does and what the program was intended to do has
become a lot more work.

Conversely, if I discover that g() and h() interact in C as it
is now, it's simply an error. The original programmer either
forgot something, or misunderstood something, or was confused
about something, or whatever; whichever of these is
the case, the program is simply in error, and I don't have to
wonder whether the interaction was intended or not -- it wasn't.[1]

Requiring a left-to-right evaluation order has made the job of code
reading harder rather than easier. And it encourages, even if only
indirectly, the writing of non-obvious code that depends on that
evaluation order. Both of those trends point in the wrong
direction.


[1] Of course, it's possible to construct situations where g() and
h() interact in a non-trivial way, yet the overall program behavior
is correct no matter which order is chosen. But the vast majority
of cases are not this way; moreover, any programmer who writes such
code without putting in a comment on that aspect deserves no better
treatment than a programmer who made an outright error, or at the
very least should be prepared for some sharp criticism during code
review.
[snip]

It's an interesting argument, but in my incredibly reliable
opinion, not a sound argument, either from a computer science
perspective, or from a software engineering perspective.

The notion that code such as

a = f( g(x), h(y) );

implies that the code reader has a right to expect that g and h
do not interact is in the nature of a pious expectation. There
is no contract here, no certification by the writer of the code,
no enforcement mechanism, and, for that matter, it is possible
that the writer of the code cannot know whether there is any
interaction.

One can argue just as well that unspecified order of evaluation
makes code reading harder rather than easier; when the order is
undefined the number of possible interpretations increases
substantially. In C it is surprisingly easy to write code that
invokes undefined behaviour, easy in quite unexpected and subtle
ways. Quite often the source of the undefined behaviour is
rooted in the looseness of ordering. The upshot is that
undefined order of evaluation is a source of coding error and a
source of increased code reading work.

There is one other way in which undefined order of evaluation
makes code reading harder; it is harder to walk through code and
"play computer" because the number of paths through the code
explodes.

There is an old maxim, KISS, that applies here. Undefined order
of evaluation violates that maxim. It takes extra expertise to
write and read code that is expected to work in an environment
where undefined order of evaluation is the norm. That expertise
may be desirable in special situations but that is no reason for
it to be a universal requirement.

There is a more general software engineering issue here. In
practice there are three main routes to creating robust software
- coding practice, testing, and use of trusted components. Good
coding practice is important, and desk checking, aka code review,
is valuable. The simple truth, though, is that it doesn't catch
everything.

Another important tool for creating robust software is testing,
both unit testing and regression testing. Now comes the
inconvenient truth: testing is not proof against order-of-
evaluation effects. Code may work perfectly when compiled with
one (conforming) compiler or group of compilers and yet fail when
compiled with others. Moreover, it may pass all tests today and
yet fail tomorrow when your compiler(s) are upgraded. In short,
undefined order of evaluation degrades the effectiveness of
testing.

And, sadly, it also degrades the trustworthiness of trusted
components - in the nature of things, if coding practice is
compromised and testing is compromised, then trustworthiness is
compromised.

In short, undefined order of evaluation in C is undesirable and
is not really necessary for compiling efficient code. However C
is what it is and is not likely to change, so the issue is
academic. It and the associated little bits of klutziness are
just something C programmers have to live with.

First let me try to summarize your comments.

1. My argument is flawed (or unsound) as regards f(g(x),h(y)):
a. no guarantee that g(x) and h(y) do not in fact interact.
b. the reader has no right to expect that they don't interact.
c. no guarantee that the author intended that they don't interact.
d. no guarantee that the author thought about interaction at all.

2. My argument is flawed as regards ease of reading:
a. Unspecified OoE => number of possible interpretations goes up.
b. Unspecified OoE often is the cause of undefined behavior.
c. Unspecified OoE makes it harder to play computer.

3. Unspecified OoE isn't as simple as settling on one particular
OoE. (The "KISS" principle.)

4. Three main techniques for creating good software: coding
practice, testing, "trusted components" (which I took to
mean modularity). Unspecified OoE degrades, or is not
effectively dealt with by, all three.

5. (Because of all the above) Unspecified OoE is undesirable.

6. Unspecified OoE is unnecessary for compiling efficient code.

7. All the above is (most probably) moot since in all likelihood
C won't change on this aspect.

Responding to the above, in order (or mostly in order) --

Comments 1a and 1b mischaracterize my position; I trust reading
my earlier posting makes that clear. Taken as stand-alone
statements (that is, including the "no" on each lettered line,
and not including the prefacing clause "My argument ..."),
statements 1a and 1b are true.

Comments 1c and 1d may reflect what I literally said (and if so
I'm sorry for the confusion), but they don't reflect the position
I was trying to convey. Of course we have no guarantees what the
original author intended or thought about. What we do know is an
either/or: either the author expects that g(x) and h(y) should
not interact, or the author is confused about how C works.
Similarly, as to actual program behavior, again an either/or:
either g(x) and h(y) don't interact (ie, in an interfering way),
or the program is wrong.

By analogy, it's very much like seeing an expression 'a[i]' that
does array indexing. We don't know that the variable 'i' is
in range for the array 'a' that it's indexing; but either it
is in range, or the program is wrong. Similarly, we don't know
that the author expected 'i' to be in range for 'a'; but we
do know that he should have expected it to be within range,
or he is confused about how C works.
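To make the analogy concrete (a trivial sketch; 'get_index' is a
hypothetical helper, not anything from the thread):

int a[10];
int i = get_index();   /* hypothetical; must yield 0..9 */
/* We can't see from here whether i is in range; but either it
 * is, or the program is simply wrong -- exactly the situation
 * with interacting subexpressions. */
int v = a[i];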

For 2a, the statement presupposes that it makes sense to consider
interpretations with non-trivial interaction as being meaningful.
With unspecified order of evaluation, there still is only one
interpretation, and that's the same interpretation as if we do
evaluation left-to-right (or right-to-left for that matter). Of
course, there is in addition the possibility that subexpressions
interact, but what that means is that the program is wrong.
Again this is similar to 'a[i]' -- we do need to check that 'i'
will be within range, but we don't need to think about what
happens if it isn't, because at that point we already know the
program is just wrong.

For 2b -- yes, undefined OoE can be a source of undefined
behavior. It's hard to consider this a serious objection,
because it's so easy to guard against it, by using assignment
statements. I expect most developers use a compiler switch that
warns them when a variable might be used uninitialized, and when
they get these warnings (if they are sensible) change the code so
that the warnings don't happen; the "when it doubt, spell it
out" principle. It's easy to apply this principle to potential
problems with order of evaluation.
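Applied to the running example, the principle is a two-line
change (a sketch; 'gv' and 'hv' are names invented here):

/* Possibly order-sensitive, if g() and h() interact: */
a = f( g(x), h(y) );

/* "When in doubt, spell it out" -- statements force the order: */
int gv = g(x);
int hv = h(y);
a = f(gv, hv);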

For 2c -- it's hard to know what to say about this, because the
argument seems so silly. Yes, at some purely local level, it's
easier to simulate a deterministic mechanism than a different
mechanism that imposes constraints on the program (and fails if
those constraints are not met). But doing this doesn't really
get us anything, except more effort spent simulating. By
contrast, with order of evaluation being unspecified, now when we
get to an expression we can check to see if the results depend on
the order of evaluation, and if they do, we can stop simulating
altogether. Making the simulation effort easier on a local level
hasn't made the overall task any easier; in fact usually it will
only make the overall task take longer.

For 3... First, for the sake of discussion I'll agree to
stipulate that the KISS principle should be observed in this
instance. But now comes the important second question -- what is
it that's important to keep simple? Allowing the rule used in
the language definition to be simple results in programs being
more complicated, because expressions may legally depend on
evaluation order; conversely, a formally more difficult rule in
the language definition results in simpler programs, because in
legal programs expressions aren't allowed to depend on
evaluation order. Simplifying the
space of legal programs is the more desirable metric here. It's
true that by doing that we have (at least to some degree) raised
the difficulty of /writing/ legal programs; but the programs
we've excluded are programs that are harder to understand than
the ones still allowed, and that tradeoff is the important one to
make in this circumstance. "Easy writing makes hard reading, and
vice versa" -- correct for programming as well as for regular
prose.

For 4 -- this one surprised me, because all of these can
be brought to bear (and relatively easily) on any potential
problems having to do with evaluation order.

For coding practices: first, on the process side, there is
visual inspection (which I think you noted at some point); and
second, during development, there are coding habits that can be
used very effectively (such as: put state-changing operators
only at the top level, don't access global variables in
expressions that also call functions, and allow function calls
along at most one branch of a multi-path operator (ie, like +,
but not like &&), among others).
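A small sketch of the last habit ('tax', 'shipping', and 'order'
are hypothetical names): keep function calls to at most one
branch of an operator like '+', so no call ordering within the
expression can matter.

/* Risky: two calls in one expression; if tax() and shipping()
 * share any state, the result may depend on the call order. */
total = tax(order) + shipping(order);

/* Habit applied: at most one call per multi-path operator. */
double t = tax(order);
total = t + shipping(order);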

For testing: automated inspection (a code analysis tool to
detect potential order dependencies -- more on this below). (Yes
I know some people wouldn't call this testing; it depends on
whether it's thought of as part of the development process or as
part of the QA process, but whatever.) Another tool is a C-to-C
translator that imposes an evaluation order on all expressions --
could produce a L-to-R version, an R-to-L version, a version that
chooses randomly at each expression between one or the other, and
compare each of these against the unaltered program during
regression testing.
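To illustrate (hypothetical output; no actual tool is being named
here), such a translator might rewrite each call to pin the order:

/* Original, order unspecified: */
x = func(foo(), bar());

/* Hypothetical left-to-right rewrite emitted by the translator: */
{
    int t1 = foo();     /* forced to run first  */
    int t2 = bar();     /* forced to run second */
    x = func(t1, t2);
}

/* The R-to-L variant swaps the two temporaries; diffing program
 * output across the variants during regression testing flags
 * order dependencies. */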

For modularity: for writing new code -- any function that
modifies global state can be made a 'void' function. For using
existing code -- make lists of functions in each module that (a)
modify global state, or (b) depend on global state for their
output; these lists can be consulted when reading client code,
to easily detect potential problems. Both of these techniques
will strengthen the quality of the modules provided, and not just
with respect to evaluation order.
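A minimal sketch of the first technique (hypothetical names):
making the state-modifying function 'void' keeps it out of larger
expressions entirely, so callers must sequence it as a statement.

static int hits = 0;                     /* module-private state */

void record_hit(void) { hits++; }        /* (a) modifies state   */
int  hit_count(void)  { return hits; }   /* (b) depends on state */

/* record_hit() returns nothing, so it cannot be buried inside an
 * argument list; client code is forced to write
 *     record_hit();
 *     report(hit_count());
 * with the order explicit. */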

For 5 -- I think I've made it clear above that (IMO, anyway) the
benefits for unspecified OoE were underestimated, and the costs
for preventing the problems are really not that high. But
there's another key factor, and that is the cost of writing
programs under a required (particular) OoE, and that cost is very
high. In general it's a bad idea to write code that depends on
order of evaluation, even if the order is well-defined. Also,
requiring a particular order of evaluation would distort how
expressions are written in certain cases; consider cases such
as (these examples are written using function calls, but other
state-changing operators such as ++ could also be substituted):

    f() - g()     OR   -g() + f()
    f() < g()     OR   g() > f()
    f()[g()]      OR   g()[f()]    (one function returns an int,
                                    the other a pointer).
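To spell out the first pair (assuming a hypothetical pop() with a
side effect on its stack argument): under a defined left-to-right
order the two spellings compute different values, and the
"clever" author picks whichever form encodes the call order
wanted.

/* Under a hypothetical defined left-to-right order: */
d1 = pop(s) - pop(s);    /* first pop minus second pop */
d2 = -pop(s) + pop(s);   /* second pop minus first pop */

/* In C as it is, both forms are simply buggy when the two pops
 * interact; with L-to-R they become distinct "succinct" idioms. */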

You know (meant in the generic sense of "you") that cases like
these will start to crop up the moment that there's a difference
between the forms in each pair; some developers won't be able to
resist the lure of writing "clever" code that gives such succinct
expression. Requiring a particular order of evaluation yields a
cost both in code comprehension and code modification; it would
be (please excuse me for choosing such an emphatic next word)
foolish to discount these costs or pretend they don't exist.

For 6 -- a statement along the lines of "we can still compile
efficient code" is interesting, because actually it undercuts the
argument. If it's true that clever optimizers can discover which
cases can have their orders of evaluation changed and which ones
cannot, then it's also possible to write a tool to detect where
unspecified order of evaluation might be a problem; it's the
same question! Conversely, if we can't discover such cases
automatically, then having a particular order of evaluation be
required means compilers won't be able to generate code that's as
efficient as having order of evaluation be unspecified. Either
way, it weakens the case for defining evaluation order.

For 7 -- Certainly I agree that C is unlikely to change on this
point. However, I think there being existing code doesn't change
this very much, because no working program would have its meaning
changed if evaluation order were fixed. Rather, the resistance
would come because (a) the argument would be made (whether
rightly or wrongly) that efficiency would suffer, but also (b)
the resulting language would encourage worse programming,
resulting in worse programs. This problem compounds with time;
even if a program with order-of-evaluation dependency can be
understood when it's first written, as maintenance occurs it's
only a matter of time before how it works isn't understood by
anyone.

I'd like to ask people who favor defining order of evaluation
to consider the commentary above, and then would ask: Does
this affect your position? (If so of course I'm happy to hear
it but...) if not then why not?
 

Tim Rentsch

Golden California Girls said:
Tim said:
If I see a piece of C code like, for example,

a = f( g(x), h(y) );

then I don't have to look at the definitions for g() and h() to know
they don't interact (by which I mean, interact in any nontrivial
way). The reason? If they did interact, the code would have been
written differently, to force a particular order of evaluation.

You don't know whether this code works only because it was never
intended to run on a system that does not evaluate in a
particular order -- i.e., non-conforming code.[1] You also don't
know whether it was written this way on purpose for some possibly
nefarious reason. But I agree that if it was intended to be
conforming, the order of evaluation should not cause a race.

Yes, I tried to clarify this confusion just now in my response to
Richard Harter.

In another language I actually came across an interaction in the
compiler for function calls of the s = strcat(str(x), str(y),
str(z)) type. The compiler did not allocate a separate buffer for
each returned string. The bug was acknowledged but not fixed; I
believe it was documented. FWIW, in this language strcat was the
only string-accepting call that could take a variable number of
arguments.

Which underscores the point that as soon as evaluation order is
defined then someone will start writing code that depends on it.

I tend to agree, but for a very different reason. At some point
in the not too distant future I can see an optimizer deciding
that f() and g() can be evaluated at the same time on different
CPUs. That makes a much more obvious race if there are any
interactions.

Yes, I hadn't considered that issue but it does seem relevant,
more so now that multi-core CPUs are becoming common. Of
course it isn't exactly the same issue, since C does specify
that function calls execute "atomically" with respect to
evaluation in the calling expression, but practically speaking
I think the effect is basically the same.

This will also require that the library have a way of declaring
dependencies that the optimizer can read, so it can decide what
can be done in parallel and what must be done single file. This
also permits the optimizer to catch the example as UB and issue a
warning. This kind of optimization leads to a language where the
author has to put in explicit sequence points so that parallelism
can be enhanced.

Presumably these declarations don't have to be required for Standard
library functions because an implementation "knows" how its library is
doing what it's doing. But probably the trend will take place on a
larger scale, at least in generated code files if not in source files.
 

Tim Rentsch

<snip>

[Kaz is arguing against C's undefined order of evaluation]

I'm not quite convinced, but I'm getting there

Apparently several people are of the opinion that having language
semantics be more deterministic is better than being not as
deterministic, because... well, just because.

I think Kaz's argument was a little more robust than that

My comments were intended as a response to comments given in
several different postings, not just the one; it simply happened
to be the first one in the list.

As for my phrasing -- in the different postings I was responding
to, there was (at least it seems to me that there was) a common
underlying belief that "more deterministic is better" should
just be granted as an axiom. Not that there weren't other
arguments, but they all seemed to rest, at least to some degree,
on this "axiom", and that's what I was trying to identify.
In hindsight I agree it would have been better if this had
been expressed more clearly.

rubbish. As others have pointed out you assume error-free programmers

This comment is responding out of context. Do you not read
ahead before writing responses?


My sentence here is part of the same context as the above,
clarifying what was said earlier.

or just didn't think. Most programmers assume left to right
evaluation and would be quite surprised if they were told they
were wrong.

Whether the latter statement is correct or not, what I wrote is
literally true: the person who wrote the code didn't think it
was important to force a particular order of evaluation. That
can be because (a) they understand the rules of C and have judged
that order of evaluation is unimportant, or (b) they don't
understand C's evaluation rules, or neglected to consider the
effect of evaluation order for some other reason. Normally I
would expect (a) rather than (b), but it's true both cases can
come up, and it's important to understand that, because the two
situations are quite different and should be treated differently.

I don't see that

Do you mean, you don't see why that's true, and just disagree, or
you don't see why that's true, and would like further explanation?
If the latter, can you say anything about what you find confusing?
If the former, what can you say about why you believe something
different?

this is crazy! You claim that making the code more predictable
makes the code harder to analyse!

Yes, in an important sense, which is discovering the intent
of the person who wrote the code in the first place. Also
please see below.

I cannot agree

Let's call the two cases A (no interaction of consequence) and B
(there is some interaction of consequence). Now consider the two
possibilities -- say determined (D) or non-determined (N) -- for
order of evaluation. Case A is basically the same for both N and
D. For case B:

         Program behavior            Programmer intent
         ----------------            -----------------
    N:   Goes boom!                  Confused about something

    D:   Work through what the       May have expected (A),
         interactions do, and        or may have expected (B);
         continue analysis.          can't (yet) tell which.

The choice of non-determined order of evaluation tells us more
about the programmer's understanding, and also helps prune the
analysis effort at an early stage. So we have more work to do
under D, along both of the two axes (that is, for both program
behavior and programmer intent).

Which part of the above reasoning would you say you don't agree
with?
 

Tim Rentsch

On 14 Feb, 04:51, Tim Rentsch <[email protected]> wrote: [...]
I know only that the person who wrote the code didn't think it was
important to force a particular order of evaluation.
or just didn't think. Most programmers assume left to right
evaluation and would be quite surprised if they were told they
were wrong.

"Most programmers"? =A0If that statement is based on actual data, that's
interesting (and disappointing); if not, it would be good to see some
actual data.

no data I'm afraid, but many programmers are remarkably poorly
informed about the tools they use.

Such programmers should be encouraged to become better educated during
their performance reviews. If subsequent to that encouragement they
can't or won't become better educated then they should be encouraged
to change professions.
 

Tim Rentsch

Since different people assume different orders (and some know the order
is unspecified as far as the language is concerned) forcing a specific
order on compilers will certainly still leave people getting it wrong
*and* will make some implementations less efficient. Seems like a
lose-lose option to me.

I disagree; it's not a lose-lose option. However the issue of
people's assumptions being satisfied or not satisfied is not
particularly important. What is important, IMNSHO, is that
outputs of working conforming programs can vary depending on
evaluation order choices by compilers. (Think regression tests
on log files.)
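For example (a hypothetical sketch of the log-file scenario;
'process', 'log_step', and 'logf' are invented names):

#include <stdio.h>

/* Both arguments append a line to the log, so two conforming
 * compilers may emit the lines in either order -- and a
 * byte-for-byte regression diff of the log then fails even
 * though both runs are acceptable. */
int log_step(FILE *logf, const char *tag)
{
    fprintf(logf, "step %s\n", tag);
    return 0;
}

/* ... */
process(log_step(logf, "A"), log_step(logf, "B"));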

Here is a point of disagreement. Even if a program has been
accepted by the compiler(s) in question, and even if testing
hasn't detected any problems, a program whose outputs are
(or can be) affected by different choices of evaluation
order is not one I would call "working". Perhaps it's fair
to say the bugs it has are subtle, but in my book it still
should be thought of as a buggy program, not a working
program.
 

Tim Rentsch

[... order of evaluation and statements in a log file ...]
Since the order of evaluation and thus the order of the log statements
(side effects that _do_ interact in a "nasty" way) is unspecified, your
testing scheme is broken if it assumes that your log statements are only
correct when in a certain order.

If you want them to appear in a particular order, make proper use of
sequence points to enforce that order. If you tell the compiler that
the order doesn't matter, which is what you're doing when you write code
such as above, then you shouldn't be surprised when it takes advantage
of that.

What you say is correct as far as it goes, but is not really to
the point, under the amiable assumption that this discussion has
a point. BTW in one sense it is a meta discussion since C is
what it is. On the other hand it is relevant because it involves
issues of programming practice that are peculiar to C.

Here I'm confused. Do you mean that order-of-evaluation issues
come up only with C? Or are you talking about something else?
I'm not sure what you're referring to.

The fundamental objection to not defining the order of
execution is that the course of execution is not well defined.
It can vary from environment to environment, from compiler to
compiler, and even from one release of a compiler to another.

You say that like it's automatically a bad thing. I believe
it's actually a good thing, for reasons that I've tried to explain.

Stephen says, in effect: if it matters to you, code in such a way
that the course of execution is well defined. Well, yes, that is what
people attempt to do. In turn this brings in a series of little
"real world" problems. Here are some of them. The first is that
we need an additional suite of coding standards. Thus

merge(sort(left),sort(right));

is a no-no. There are other little gotchas. Does your QA guy
know what they all are? Do you? Is there a standard list
somewhere? If there is, is it comprehensive and without error?

Let us assume that we have an appropriate coding standard. Can
you guarantee that it is not violated anywhere? Yes, you have
code reviews, but they are not 100% reliable, particularly if you
are incorporating code written by other parties. Is there a
super-lint that will identify all of these violations? And so
on.

No development process is perfect, partly because it's too
hard to make a perfect process, but also because a perfect
process is too expensive. Deciding what to do about bugs
caused by order-of-execution dependencies means comparing
the costs of such bugs, and the costs of finding or preventing
them, against the costs of other kinds of bugs, and the
costs of finding or preventing those bugs. Do you have
some data that suggests the ratios for OOE bugs are
significantly worse than those for other kinds of bugs?
If so I'm sure everyone would welcome it being reported.

Remember, your regression tests can't tell you in advance that
there are variations. What they do is compare what you get today
with what you got yesterday. The variations don't show up until
you move your builds from one environment to another.

You seem to take it as true, just as a matter of course, that testing
won't be effective in discovering problems with order-of-evaluation
dependency. It's true that a different kind of testing needs to be
done to discover such problems, but I just don't think it's that hard
to set up a C-to-C translator to "stress test" different evaluation
orders and put that into the QA process. Is one of these out there
in the open source community? I wonder...

The point of all this is that there are costs to not defining the
order of execution along with the purported benefits.

I don't think there's any disagreement on that point; the harder
question is, what are the costs and benefits on each side of the
issue? My difficulty with some of the pro-determinism statements
(without meaning to attribute them to any particular person) is that
they seem to imply (at least, sometimes) that a defined order of
evaluation doesn't carry any significant costs. If I think they do
carry significant costs, what am I to say to someone who says "No,
you're wrong, they don't carry any significant costs", and don't say
anything more? This question is somewhat unfair since such a blunt
statement hasn't been made; however, I think it does accurately
reflect the frustration that some people (and to be fair, on both
sides) have been feeling.
 

Stephen Sprunk

Tim said:
Here is a point of disagreement. Even if a program has been
accepted by the compiler(s) in question, and even if testing
hasn't detected any problems, a program whose outputs are
(or can be) affected by different choices of evaluation
order is not one I would call "working". Perhaps it's fair
to say the bugs it has are subtle, but in my book it still
should be thought of as a buggy program, not a working
program.

That depends on whether all the possible orders of evaluation result in
correct output, which in turn depends on the definition of "correct" in
the particular context. There may be multiple, equally-correct outputs.

S
 

Stephen Sprunk

Tim said:
Which underscores the point that as soon as evaluation order is
defined then someone will start writing code that depends on it.

One potential problem with leaving it unspecified is that some
programmers may determine the ordering that their particular
implementation uses and then write code that depends on that
order, only to find that the code does not work properly on
another implementation.

However, this is hardly the only example of such problems in C, and it's
hard to justify burdening all future implementations with a particular
order of evaluation without also "fixing" all other problems in this
class...

S
 

Richard Bos

Malcolm McLean said:
Mistakes creep in.

Yes, but they don't creep in after 250,000 lines. They creep in from the
start. You do module testing, and code reviews, right from the start.
If, at that point, one new (-ly changed) module is found to contain
assumptions about the order of execution, you change _that_ module
slightly. If, OTOH, you already have 250,000 lines of code, and you are
only now starting to worry about whether they're correct, then I repeat:
your lack of foresight doth not my problem make.

Richard
 

Tim Rentsch

[snip all actual response]

Be that as it may, I was impressed with your long posting; I will
respond when I get the chance.

Very well, sir, I look forward to seeing that and
will wait until after it's posted to give further
comment.
 

Tim Rentsch

Stephen Sprunk said:
That depends on whether all the possible orders of evaluation result in
correct output, which in turn depends on the definition of "correct" in
the particular context. There may be multiple, equally-correct outputs.

I take your point. I was meaning to use "affected" in the
sense of changing a correct output to an incorrect output,
but certainly it could be taken as any change at all,
which is not what I intended. Thank you for making the
point.
 

Kaz Kylheku

But it also means that someone might write code that depends on the order of
evaluation of parameters.

Someone might do that anyway. Many programmers think there is an evaluation
order.
This would be a nightmare to read.

But you're already writing code that depends on evaluation orders,
for instance:

{ foo(); bar(); } // Not nightmare to read

Why isn't that a nightmare to read, but this is?

x = func(foo(), bar()); // Nightmare!

Is it a nightmare because the return values of foo() and bar() are retained and
then used as the arguments to a function call?

Or is the discomfort emanating from the parentheses and commas instead of curly
braces and semicolons?
 

Keith Thompson

Kaz Kylheku said:
Someone might do that anyway. Many programmers think there is an evaluation
order.

The solution to that is education.
But you're already writing code that depends on evaluation orders,
for instance:

{ foo(); bar(); } // Not nightmare to read

Why isn't that a nightmare to read, but this is?

x = func(foo(), bar()); // Nightmare!

Is it a nightmare because the return values of foo() and bar() are
retained and then used as the arguments to a function call?

Or is the discomfort emanating from the parentheses and commas
instead of curly braces and semicolons?

Well, in C as it's currently defined, the latter is a nightmare
because the order of evaluation is unspecified.

But in most cases, in well-written code, the order of evaluation of
function arguments really doesn't matter. If I don't care about the
order of evaluation, I can just write
x = func(foo(), bar());
If I do care, I can write:
f = foo();
b = bar();
x = func(f, b);

Defining a particular order of evaluation would make it impossible, or
at least more difficult, to write code that expresses the idea that
the order of evaluation is irrelevant.

I'm not claiming that this is a very strong argument; after all, there
are plenty of cases where you *can't* specify that you don't care
about the order of evaluation (multiple statements, for example).
 

Richard Bos

Kaz Kylheku said:
The following is much more plausible: the reason some programmers assume
particular orders is that:

1. they believe there /is/ a defined order (computing is deterministic), and

Which is wrong in the premise, in the conclusion, _and_ in the
reasoning.
2. they have experimented with their favorite compiler to find out what
that order is.

More likely, they have been told this by their teachers, who _also_
learned C using Turbo C++, and Turbo C++ alone (or MSVC++ ditto, or
sometimes Ganuck ditto).
Most of the world drives on the right side of the road, whereas some of the
world drives on the left, creating discomfort for travelers. That doesn't mean
we should throw out traffic codes and drive on whatever side of the road we
like.

No, but neither should we force the same side of the road on all
countries. And some countries (read: implementations) have several
one-way roads (read: pass their arguments in registers, or do even
weirder things).

Richard
 

Tim Rentsch

[snip all except the final]
I'd like to ask people who favor defining order of evaluation
to consider the commentary above, and then would ask: Does
this affect your position? (If so of course I'm happy to hear
it but...) if not then why not?

Well, no, it doesn't. I will elucidate.

[approx 125 lines of elucidation snipped]

I really appreciate the time and thought that went into
this response.

Reading through the comments, it seems like what we're left
with is some combination of not yet understanding each other,
and some points of plain disagreement (or "failure to agree"
if that is more agreeable). It's not clear to me exactly
how much there is of each, nor just where the dividing
lines are. It's also not clear how much the points of
non-agreement might be brought closer together by further
conversations; my sense is that it's some but I'm really
not sure how much.

However, one definite impression I am left with is that
this medium isn't a good impedance match for resolving
those uncertainties. So, I will leave the conversation
here without any further comment for now; perhaps a
better opportunity will present itself so the conversation
might continue at some point in the future.
 
