Benefit of not defining the order of execution


Nate Eldredge

Richard Harter said:
I disagree; it's not a lose-lose option. However, the issue of
people's assumptions being satisfied or not satisfied is not
particularly important. What is important, IMNSHO, is that
outputs of working conforming programs can vary depending on
evaluation order choices by compilers.

But that's just the point: programs that depend on the evaluation order
are NOT conforming.

(Think regression tests on log files.)

Huh?
 

Keith Thompson

Nate Eldredge said:
But that's just the point: programs that depend on the evaluation order
are NOT conforming.

They're not portable, but there's nothing non-conforming about
depending on evaluation order as long as you avoid undefined behavior.
If a program's output depends on unspecified evaluation order, then it
can't be *strictly* conforming, but it can certainly be conforming.
Actually, the standard's definition of "conforming program" is very
weak, but as long as any difference in behavior isn't significant to
the program's functionality, such a program can even be portable.

Consider a program containing something like this:

double result = this() * that() + the_other();

Assuming that the three functions don't interact in any nasty ways
(depending on the same global variables, for example), the order of
evaluation shouldn't matter; result will get the same value for any of
the 6 possible orders. But suppose the functions contain trace
statements that (perhaps optionally) log information to a file. Then
the contents of the log can vary depending on the evaluation order,
and a testing scheme that looks for possible regressions by detecting
changes in the contents of the log will have problems.
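
For concreteness, here's a compilable sketch of that scenario; the function
bodies, trace messages, and return values are invented for illustration:

#include <stdio.h>

/* Each function logs a trace line as a side effect. */
double this(void)      { fprintf(stderr, "trace: this\n");      return 2.0; }
double that(void)      { fprintf(stderr, "trace: that\n");      return 3.0; }
double the_other(void) { fprintf(stderr, "trace: the_other\n"); return 4.0; }

int main(void) {
    /* result is 10.0 under every evaluation order, but the three
       trace lines may appear in whatever order the compiler picks. */
    double result = this() * that() + the_other();
    printf("%g\n", result);
    return 0;
}

A log-diffing regression test would flag a difference here even though the
program's real output is identical.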
 

jameskuyper

Nate said:
(e-mail address removed) (Richard Harter) writes: ....

But that's just the point: programs that depend on the evaluation order
are NOT conforming.

I think you mean "strictly conforming". Very few programs fail to meet
the extremely minimal requirements imposed upon "conforming" C code.
Most (possibly all) Lisp code qualifies, for instance.
 

Kaz Kylheku

If I had a situation where the order mattered, I would re-write it so
that it was obvious what order things happened in without having to
look things up.

Would you know how to force the order without looking anything up?

I suppose you would take advantage of the ``obvious'' fact that in

{ ... statement1 ; statement2 ; ... }

statement1 is fully evaluated before statement2. Well, how do you know that is
true without looking anything up? You ``just know'', right? But you must have
learned that somewhere.

So whatever you once learned in a textbook, course, or tutorial is good enough,
be it right or wrong! If you encounter anything which isn't obvious according
to what you know (without looking anything up), you will rewrite it.

Boy, what a gleaming, gem-like asset to a software engineering team you must be!

Maybe in a parallel universe where the C evaluation order is specified, you
would have learned about that in those courses, textbooks and tutorials, so
then you would consider evaluation order obvious, just like the way you now
regard sequenced execution of statements and full expressions to be obvious.

Many people ``learn'' that there is a C evaluation order in this universe, even
though there isn't one, and don't bother looking up the real information.
 

Kaz Kylheku

Richard said:
(e-mail address removed) writes:
[...]
I know only that the person who wrote the code didn't think it was
important to force a particular order of evaluation.
or just didn't think. Most programmers assume left to right
evaluation and would be quite surprised if they were told they
were wrong.
"Most programmers"? If that statement is based on actual data, that's
interesting (and disappointing); if not, it would be good to see some
actual data.

I doubt that there is any such data. It sounds plausible,
though, because people read ordinary text left to right.
Presumably they also read code the same way, i.e., by default
they interpret it left to right. This could create the
unconscious assumption that the evaluation is left to right, even
if they know better.

But it's only plausible.

There are people who strongly believe that the arguments are evaluated
right to left and pushed on to the stack as this is done.

The following is much more plausible: the reason some programmers assume
particular orders is that:

1. they believe there /is/ a defined order (computing is deterministic), and

2. they have experimented with their favorite compiler to find out what
that order is.

For this behavior, we have plenty of evidence just from the c.l.c archives
alone.
Thus when the
function is called, the leftmost argument is at a fixed offset from the
stack pointer, making life simple for the compiler.

Since different people assume different orders (and some know the order
is unspecified as far as the language is concerned) forcing a specific
order on compilers will certainly still leave people getting it wrong
*and* will make some implementations less efficient. Seems like a
loose-loose option to me.

That is a moronic argument. Obviously if there is a well-defined evaluation
order, programmers who think there is some other order will discover that they
are wrong when they run the code. Conforming compilers will insist on following
the standard-defined order.

Just because some people have the wrong preconception about what the order
should be doesn't constitute an argument against having /a/ defined evaluation
order which makes code more deterministic and portable.

Most of the world drives on the right side of the road, whereas some of the
world drives on the left, creating discomfort for travelers. That doesn't mean
we should throw out traffic codes and drive on whatever side of the road we
like.

Secondly, learn how to spell "lose". One o, not two.
 

Stephen Sprunk

Keith said:
Consider a program containing something like this:

double result = this() * that() + the_other();

Assuming that the three functions don't interact in any nasty ways
(depending on the same global variables, for example), the order of
evaluation shouldn't matter; result will get the same value for any of
the 6 possible orders. But suppose the functions contain trace
statements that (perhaps optionally) log information to a file. Then
the contents of the log can vary depending on the evaluation order,
and a testing scheme that looks for possible regressions by detecting
changes in the contents of the log will have problems.

Since the order of evaluation and thus the order of the log statements
(side effects that _do_ interact in a "nasty" way) is unspecified, your
testing scheme is broken if it assumes that your log statements are only
correct when in a certain order.

If you want them to appear in a particular order, make proper use of
sequence points to enforce that order. If you tell the compiler that
the order doesn't matter, which is what you're doing when you write code
such as above, then you shouldn't be surprised when it takes advantage
of that.
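
Concretely, such a rewrite might look like this (a sketch; the temporaries
are invented). Each statement's full expression ends at a sequence point,
so the three calls happen in source order:

double a = this();        /* evaluated first */
double b = that();        /* evaluated second */
double c = the_other();   /* evaluated third */
double result = a * b + c;

Now the trace output is the same on every conforming implementation, and a
log-comparing regression test sees stable results.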

S
 

Kaz Kylheku

But that's just the point: programs that depend on the evaluation order

You know, anyone can look intelligent in this newsgroup, when the discussion
revolves around the narrowest topics.

That's why I like this kind of discussion, which requires people to stray
outside of the proverbial box, and the answers don't come from a minimal amount
of reasoning applied to chapter and verse from a single ISO document.

This discussion asks you to think hypothetically. What if the order was
defined?

What you think is ``just the point'' isn't in fact the point in anything
you've quoted.
are NOT conforming.

Not that this is particularly relevant, but you might want to look up what a
``conforming'' program is, and what is ``strictly conforming''.
 

Stephen Sprunk

Kaz said:
Richard said:
On Tue, 17 Feb 2009 08:35:29 -0800, Keith Thompson

(e-mail address removed) writes:
[...]
I know only that the person who wrote the code didn't think it was
important to force a particular order of evaluation.
or just didn't think. Most programmers assume left to right
evaluation and would be quite surprised if they were told they
were wrong.
"Most programmers"? If that statement is based on actual data, that's
interesting (and disappointing); if not, it would be good to see some
actual data.
I doubt that there is any such data. It sounds plausible,
though, because people read ordinary text left to right.
Presumably they also read code the same way, i.e., by default
they interpret it left to right. This could create the
unconscious assumption that the evaluation is left to right, even
if they know better.

But it's only plausible.
There are people who strongly believe that the arguments are evaluated
right to left and pushed on to the stack as this is done.

The following is much more plausible: the reason some programmers assume
particular orders is that:

1. they believe there /is/ a defined order (computing is deterministic), and

2. they have experimented with their favorite compiler to find out what
that order is.

For this behavior, we have plenty of evidence just from the c.l.c archives
alone.
True.
Thus when the function is called, the leftmost argument is at a fixed offset
from the stack pointer, making life simple for the compiler.

This supports passing arguments right-to-left, but compilers on such a
system can evaluate those arguments left-to-right if they choose. I
know of a compiler on one system that evaluates right-to-left but passes
left-to-right...
That is a moronic argument. Obviously if there is a well-defined evaluation
order, programmers who think there is some other order will discover that they
are wrong when they run the code. Conforming compilers will insist on following
the standard-defined order.
True.

Just because some people have the wrong preconception about what the order
should be doesn't constitute an argument against having /a/ defined evaluation
order which makes code more deterministic and portable.

But what if the compiler could generate better code with a different
order and which way it's ordered doesn't actually matter to the
programmer? You've given up performance for no benefit other than
coddling programmers who haven't bothered to learn what C does and does
not specify. C is not a safe language for idiots; let them learn Java
or C# if they want to be protected from themselves.

Again, if the programmer cares what the order is, he can rewrite the
code in a way that guarantees it'll be executed "as if" it were in the
order specified.
Most of the world drives on the right side of the road, whereas some of the
world drives on the left, creating discomfort for travelers.

The discomfort is minor and short-lived, in my experience. Car makers
do make LHD and RHD cars for those who care, and one can drive from a
LHD to an RHD country and have their car work fine. Other vehicles,
like motorcycles, work the same in either type of country.
That doesn't mean we should throw out traffic codes and drive on whatever
side of the road we like.

Flawed argument. What your example shows is that there is actually no
need to specify which side of the road all cars must drive on, since
both cases can be accommodated; likewise, there is no need to specify in
what order C compilers evaluate expressions, since all orders can be
accommodated.

S
 

Nate Eldredge

As others have noted, this isn't true.

You are correct. Upon checking the definitions, I apparently meant to
say "strictly conforming".

On a related note, though, I wonder if a situation which depends on the
order of evaluation could not only see either order occur but actually
invoke undefined behavior. For example:

#include <stdio.h>
char buf[100];
int i = 0;
int f(void) { buf[i++] = 'f'; return 0; }
int g(void) { buf[i++] = 'g'; return 0; }
void foo(int a, int b) { }
int main(void) {
    foo(f(), g());
    printf("%.2s\n", buf);
    return 0;
}

Certainly either "fg" or "gf" is a legitimate output of this program.
But I wonder if the program is in violation of 6.5p2. buf is modified
twice. There are sequence points intervening (within f() and g()) but
they are not ordered with respect to each other. If it is in violation,
behavior is undefined and "your granny wears combat boots" is also a
legitimate output of the program.

If so, is it still true if the stores into `buf' are replaced by calls
to `putc'? In that case no object is explicitly modified, even though
the internal behavior is probably the same.
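
For reference, that variant might look like this (a sketch using putchar
rather than putc, so that no FILE argument is needed; the rest of the
program is unchanged):

int f(void) { putchar('f'); return 0; }
int g(void) { putchar('g'); return 0; }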
Richard Harter said:
Consider a program that writes a log file that records the
sequence of calls to functions. (This is a thought experiment -
real log files aren't quite that naive.) The program processes
data and delivers results. Ordinarily the log file is just a
side effect; it is not used in the program. However it can be
used in post mortems when the program dies unexpectedly.

It can also be used in regression tests. That is, we can have a
suite of test data and use a well-defined script to make a
series of runs of the program producing test output - including
log files. Whenever we do a new build or port to a new platform
we can compare the test script results with a reference. This
kind of testing is called regression testing. It is one of the
tools of good software engineering.

The issue here is that regression test suites are (should be)
sensitive to variations in the course of execution that do not
(or so we hope) affect the desired output of the program in
ordinary use.

True, but I would say that this is the sort of "false positive" that you
must be ready for when doing this sort of regression testing. The log
file will record not only the behavior of the program, but also choices
made by the compiler which it is at liberty to change between versions.
 

Kaz Kylheku

You are correct. Upon checking the definitions, I apparently meant to
say "strictly conforming".

The concept of a strictly conforming C program is bound up with lots of
other restrictions, such as not exceeding any implementation limits,
and not using any functionality whatsoever that isn't in the C standard.
No functions or headers that are not in the Library clause.

In real-world code, there are sometimes modules which could be part of a
strictly conforming program (but rarely are).
On a related note, though, I wonder if a situation which depends on the
order of evaluation could not only see either order occur but actually
invoke undefined behavior.

Yes. i = i++, etc.
#include <stdio.h>
char buf[100];
int i = 0;
int f(void) { buf[i++] = 'f'; return 0; }
int g(void) { buf[i++] = 'g'; return 0; }
void foo(int a, int b) { }
int main(void) {
    foo(f(), g());
    printf("%.2s\n", buf);
    return 0;
}

Certainly either "fg" or "gf" is a legitimate output of this program.

The program must produce one of those outputs.
But I wonder if the program is in violation of 6.5p2. buf is modified
twice.

No, because execution of functions is sequential. A sequence point occurs
before the call to f, and prior to returning from f. Same holds for g.
There are sequence points intervening (within f() and g()) but
they are not ordered with respect to each other.

Function execution is sequenced.

I want a real language in which it isn't!

The losers and sissies who need the cushy comfort of sequenced functions know
where to find C.

:)
 

Flash Gordon

Stephen said:
Kaz said:
Richard Harter wrote:
On Tue, 17 Feb 2009 08:35:29 -0800, Keith Thompson

(e-mail address removed) writes:
[...]
I know only that the person who wrote the code didn't think it was
important to force a particular order of evaluation.
or just didn't think. Most programmers assume left to right
evaluation and would be quite surprised if they were told they
were wrong.
"Most programmers"? If that statement is based on actual data, that's
interesting (and disappointing); if not, it would be good to see some
actual data.
I doubt that there is any such data. It sounds plausible,
though, because people read ordinary text left to right.
Presumably they also read code the same way, i.e., by default
they interpret it left to right. This could create the
unconscious assumption that the evaluation is left to right, even
if they know better.

But it's only plausible.
There are people who strongly believe that the arguments are
evaluated right to left and pushed on to the stack as this is done.

The following is much more plausible: the reason some programmers assume
particular orders is that:

1. they believe there /is/ a defined order (computing is
deterministic), and

2. they have experimented with their favorite compiler to find out
what that order is.

For this behavior, we have plenty of evidence just from the c.l.c
archives
alone.

True.

I've seen/heard the stack argument used. However, a lot of programmers
I've known started out with assembler, so they were intimately aware of
the possible parameter passing mechanisms.
This supports passing arguments right-to-left, but compilers on such a
system can evaluate those arguments left-to-right if they choose. I
know of a compiler on one system that evaluates right-to-left but passes
left-to-right...

It can be done, and with the suggested change it would have to be.
However, it would be inefficient on some implementations.

Firstly you've only addressed half of it. One of the "loose" is the loss
of efficiency. Another "loose" is you break any assembler code written
(for valid performance or other reasons) which either calls or is called
by C with more than one parameter being passed. Another lack of a win is
that you don't meet the expectations of the programmers who expect the
other order.
Obviously if there is a well-defined evaluation order, programmers who think
there is some other order will discover that they are wrong when they run the
code.

True.

Of course, some compiler vendors would probably decide to just ignore
the new standard because it would be too inefficient and would break too
much.
But what if the compiler could generate better code with a different
order and which way it's ordered doesn't actually matter to the
programmer? You've given up performance for no benefit other than
coddling programmers who haven't bothered to learn what C does and does
not specify. C is not a safe language for idiots; let them learn Java
or C# if they want to be protected from themselves.

You are also giving up compatibility with code compiled before the
change is implemented (for compilers where it is a change), giving up
compatibility with compilers for other languages on the same target
(unless they change at the same time) and I'm sure causing lots of other
grief.
Again, if the programmer cares what the order is, he can rewrite the
code in a way that guarantees it'll be executed "as if" it were in the
order specified.


The discomfort is minor and short-lived, in my experience. Car makers
do make LHD and RHD cars for those who care, and one can drive from a
LHD to an RHD country and have their car work fine. Other vehicles,
like motorcycles, work the same in either type of country.

I agree. I'm currently in a country where driving is on the opposite
side to where I live. It's not causing me any problems.
Flawed argument. What your example shows is that there is actually no
need to specify which side of the road all cars must drive on, since
both cases can be accommodated; likewise, there is no need to specify in
what order C compilers evaluate expressions, since all orders can be
accommodated.

Indeed. Of course, the order in which parameters are placed on the stack
(assuming a stack) is generally defined by something outside of C (at
least for interfacing to separate libraries) and this normally strongly
influences the order in which the parameters are evaluated for
efficiency reasons.
 

Kaz Kylheku

Firstly you've only addressed half of it.

How much of a moronic argument must I stoop to addressing?
One of the "loose" is the loss of efficiency.

This has been debunked elsewhere.
Another "loose" is you break any assembler code written
(for valid performance or other reasons) which either calls or is called
by C with more than one parameter being passed.

Evaluation order in C is not the same thing as calling conventions.

The addressing order in which arguments are placed on a stack-like structure
(if that is what is used) is not tied to the order in which the argument
expressions are evaluated to produce those values.

Suppose there are six parameters, four bytes wide each, and the first one is to
go to the lowest address (and the stack grows downward). The function call can
move the stack pointer by twenty four bytes, and then as the arguments are
evaluated, it can store them at increasing offsets from the stack pointer.

Evaluation is left to right; calling conventions are preserved.
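
The scheme can even be simulated at the source level (a sketch; the slot
layout and names are invented, and a real compiler would do this during
code generation rather than in C):

#include <stdio.h>

static int arg_slot[6];   /* stands in for the 24 bytes reserved below sp */

static int eval_arg(int n) {
    printf("evaluating argument %d\n", n);   /* observably left-to-right */
    return n * 100;
}

int main(void) {
    /* The whole argument area is "allocated" up front, so each argument,
       evaluated left to right, is stored at a fixed, increasing offset,
       exactly where the calling convention expects to find it. */
    for (int i = 0; i < 6; i++)
        arg_slot[i] = eval_arg(i + 1);
    printf("leftmost argument, lowest slot: %d\n", arg_slot[0]);
    return 0;
}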
Another lack of a win is
that you don't meet the expectations of the programmers who expect the
other order.

Programmers who currently expect an order of evaluation are simply wrong.

However, consideration should nevertheless be given to existing code bases that
depend on evaluation order from the compiler for which they are written.

A compiler can always implement a switch which selects either its legacy
evaluation order, or standard evaluation order.

There is definitely a need, when this kind of change is introduced, to have a
transition period.
You are also giving up compatibility with code compiled before the change is
implemented.

In fact this can be implemented in such a way that it's possible to freely mix
translation units compiled with strict evaluation, and those without.
 

Flash Gordon

Kaz said:
How much of a moronic argument must I stoop to addressing?

None. However, calling an argument moronic does not make it moronic.
This has been debunked elsewhere.

Nope. See below.
Evaluation order in C is not the same thing as calling conventions.

The addressing order in which arguments are placed on a stack-like structure
(if that is what is used) is not tied to the order in which the argument
expressions are evaluated to produce those values.

No, but there can be efficiency implications in not following it.
Suppose there are six parameters, four bytes wide each, and the first one is to
go to the lowest address (and the stack grows downward). The function call can
move the stack pointer by twenty-four bytes, and then, as the arguments are
evaluated, it can store them at increasing offsets from the stack pointer.

Evaluation is left to right; calling conventions are preserved.

At the cost of additional instructions to move the stack pointer back
and forth. Adding extra instructions decreases efficiency.
Programmers who currently expect an order of evaluation are simply wrong.

I agree they are wrong. However, you are arguing for changing it from
something they do not expect to something they do not expect. That is
not a win.
However, consideration should nevertheless be given to existing code bases that
depend on evaluation order from the compiler for which they are written.

A compiler can always implement a switch which selects either its legacy
evaluation order, or standard evaluation order.

There is definitely a need, when this kind of change is introduced, to have a
transition period.

With any change in the standard you get a transition period.
In fact this can be implemented in such a way that it's possible to freely mix
translation units compiled with strict evaluation, and those without.

Only by making the code less efficient on some processors, through such
measures as you describe.
 

Stephen Sprunk

Kaz said:
Programmers who currently expect an order of evaluation are simply wrong.

However, consideration should nevertheless be given to existing code
bases that depend on evaluation order from the compiler for which they
are written.

What about all that code out there that depends on a compiler's current
evaluation order which is different from your proposed standard? It
instantly breaks. Sure, it'd break if the compiler folks decided to do
something different, but the compiler folks are aware of that and the
order of evaluation they use is generally tied to system-specific
details inherent to the platform (e.g. on a system that passes arguments
right-to-left, a compiler that evaluates right-to-left today will almost
certainly stay that way forever).
A compiler can always implement a switch which selects either its legacy
evaluation order, or standard evaluation order.

As if compilers didn't have enough command line switches...
There is definitely a need, when this kind of change is introduced, to have a
transition period.

There is no objective need for the change in the first place. We've
gotten along fine with the order being unspecified for several decades
now, and it's not the kind of thing that the committee tends to address
this late in the game; they're now looking at ways to add functionality
to the language with new constructs, not change the meaning of old ones.

S
 

Stephen Sprunk

Golden said:
I tend to agree but for a very different reason. At some point in the not
too distant future I can see an optimizer deciding that f() and g() can be
evaluated at the same time on different CPUs. That makes a much more
obvious race if there are any interactions.

A less futuristic example is to consider what happens if f() and g() can
be inlined. Today, since the order of evaluation is unspecified, the
compiler is allowed to mix the execution of the two together with
optimal scheduling; in Kaz's proposal f() must be completed before g()
can start, so the compiler can't produce optimal code. In an extreme
case, like f() being all FP code and g() being all integer code, this
may double execution time.

Though we are all aware that the Standard says nothing about performance
directly, we all know that the Standard also jumps through a lot of
hoops (the "as if" rule, all of the unspecified, implementation-defined
and undefined behaviors, etc.) to allow optimizing code for performance
(rather than, e.g. safety or deterministic behavior), and this is a step
in the wrong direction.

S
 

Nate Eldredge

Stephen Sprunk said:
A less futuristic example is to consider what happens if f() and g()
can be inlined. Today, since the order of evaluation is unspecified,
the compiler is allowed to mix the execution of the two together with
optimal scheduling; in Kaz's proposal f() must be completed before g()
can start, so the compiler can't produce optimal code. In an extreme
case, like f() being all FP code and g() being all integer code, this
may double execution time.

I am confused again. From elsewhere in the thread, I had the impression
that under the current standard the compiler may execute f() and g() in
either order, but it still must execute them sequentially. So it seems
to me that if f() and g() are inlined, and the compiler wants to mix
their instructions together, it must do so in a manner that behaves "as
if" they were executed sequentially in one order or the other. E.g. in

int i = 10;

int f(void) { i++; i *= 2; return i; }
int g(void) { i++; i *= 3; return i; }
printf("%d %d\n", f(), g());

it can print "22 69" or "68 33". But it cannot move the `i++; i *= 3;' in g()
in between the `i++' and `i *= 2' in f() and print "24 72". I think
you'd need this code to invoke undefined behavior for that to be
legal, and the consensus seems to be that this code doesn't.

I guess there are three possibilities:

1. The ordering of f() and g() matters to the program.
2. The ordering of f() and g() doesn't matter to the program (they are
independent), and the compiler can prove it.
3. The ordering of f() and g() doesn't matter, but the compiler can't
prove it.

Code in case 1 is currently incorrect, but would be made correct (or at
least well-defined) by the proposal.

It is case 3 that the proposal would hurt. At present the compiler is
free to evaluate them in the more efficient order, at the expense of
causing case 1 to behave unpredictably. Case 2 in principle is
unaffected, since by the "as if" rule if the compiler knows the ordering
doesn't matter, it can choose the more efficient order anyway. But the
other issue is that under the proposal the compiler either has to try
and prove the independence, adding complexity, or else not try, dumping
everything into the less efficient case 3.
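
Case 3 is common in practice, for instance whenever the callees live in
another translation unit (a sketch; the names are invented):

/* f() and g() are defined in some other file, so without whole-program
   analysis the compiler cannot prove they are independent. */
int f(void);
int g(void);

int use(void) {
    /* Under the proposal, f() must act first here, even if the opposite
       order would schedule better on this target. */
    return f() + g();
}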
 

Stephen Sprunk

Nate said:
I am confused again. From elsewhere in the thread, I had the impression
that under the current standard the compiler may execute f() and g() in
either order, but it still must execute them sequentially. So it seems
to me that if f() and g() are inlined, and the compiler wants to mix
their instructions together, it must do so in a manner that behaves "as
if" they were executed sequentially in one order or the other.

There can be a difference between order of evaluation and order of
execution, as long as a conforming program can't tell the difference.
Sequence points don't always (or even often) directly translate into
sequential execution.

However, I think you are right. There must be a sequence point between
evaluation of the two functions, regardless of which is evaluated first.
If the functions interact, the "as if" rule would prevent the compiler
from certain optimizations that would violate the sequence point. If
the functions do not interact, the "as if" rule would allow them to
execute concurrently regardless of the sequence point or the order of
evaluation.

Good catch.
I guess there are three possibilities:

1. The ordering of f() and g() matters to the program.
2. The ordering of f() and g() doesn't matter to the program (they are
independent), and the compiler can prove it.
3. The ordering of f() and g() doesn't matter, but the compiler can't
prove it.

Code in case 1 is currently incorrect, but would be made correct (or at
least well-defined) by the proposal.

It is case 3 that the proposal would hurt. At present the compiler is
free to evaluate them in the more efficient order, at the expense of
causing case 1 to behave unpredictably. Case 2 in principle is
unaffected, since by the "as if" rule if the compiler knows the ordering
doesn't matter, it can choose the more efficient order anyway. But the
other issue is that under the proposal the compiler either has to try
and prove the independence, adding complexity, or else not try, dumping
everything into the less efficient case 3.

I think this is a much better statement of my case than what I've
managed so far...

S
 

jameskuyper

Stephen Sprunk wrote:
....
A less futuristic example is to consider what happens if f() and g() can
be inlined. Today, since the order of evaluation is unspecified, the
compiler is allowed to mix the execution of the two together with
optimal scheduling; ...

There is a sequence point at the beginning of every function call.
That sequence point guarantees that all of the side-effects of one
function call will be complete before the next one starts.
Interleaving of function calls is possible, but is constrained by the
requirement that side-effects of the function calls cannot be
interleaved.
 

Flash Gordon

Richard said:
Mea cullpa.

Unless the corrections are amusing, I ignore them anyway. I've previously
posted the reasons why my spelling is less than perfect.
 

Chris Dollin

jameskuyper said:
Stephen Sprunk wrote:
...

There is a sequence point at the beginning of every function call.
That sequence point guarantees that all of the side-effects of one
function call will be complete before the next one starts.
Interleaving of function calls is possible, but is constrained by the
requirement that side-effects of the function calls cannot be
interleaved.

... in a way that is visible.

If X, Y are distinct locations, and f updates X and doesn't read Y,
and g updates Y and doesn't read X, then the updates can be carried
out in either order. I think.
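
A compilable sketch of that situation (X, Y, f, and g are as described;
the harness is invented):

#include <stdio.h>

int X, Y;

int f(void) { X = 1; return 0; }   /* updates X, never reads Y */
int g(void) { Y = 2; return 0; }   /* updates Y, never reads X */

void h(int a, int b) { (void)a; (void)b; }

int main(void) {
    h(f(), g());
    printf("%d %d\n", X, Y);   /* always "1 2": the store order is unobservable */
    return 0;
}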
 
