On 10/12/2011 10:25 AM, (e-mail address removed) wrote:
....
There is no such statement in the C standard, and I can tell you
(from personal recollection) that requiring it was NOT WG14's intent
during the standardisation of C90. The BNF was intended to specify
ONLY the precedence of operators and NOT the evaluation order; that
was specified by the side-effect rules.
If you had looked up the section of the standard I pointed you to,
you would have seen both that the word used is "grouping", which
is clearly intended to distinguish it from execution order, and a
clear statement that the order of evaluation is unspecified. And
THAT was the intent of WG14 during the standardisation of C90.
I am fully aware that a lot of people are now claiming that the
BNF has always been meant to define the execution order, but that
flatly contradicts large chunks of other wording (especially the
side-effect rules). Whether it is now what compilers do, I don't
know (and don't much care, either).
The freedom of C implementations to rearrange the order of evaluation is
great, but it's not completely unconstrained.
While there are people who have misinterpreted the BNF as fully
specifying the execution order, I don't consider that to be a common
position among those most familiar with the standard. A more common
position, and IMO fully defensible, is that the BNF implies constraints
on the execution order. For instance, in ((a+b) + (c+d)), the a+b can be
executed before or after the (c+d), but both must be executed before the
final addition can be performed, because that addition requires the
results of those executions. The standard does not say anything
explicitly about that fact, because it doesn't need to - it's implicit
in what the standard does explicitly say about the dependency of the
final value on the values of the sub-expressions.
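To make that concrete for anyone following along, here's a small
illustration of my own (the names a, b, c and d are just stand-ins for
the sub-expressions):

#include <stdio.h>

/* Each function announces itself so the evaluation order is visible. */
static int a(void) { puts("a"); return 1; }
static int b(void) { puts("b"); return 2; }
static int c(void) { puts("c"); return 3; }
static int d(void) { puts("d"); return 4; }

int main(void)
{
    /* The four calls may be made in any order, and a()+b() may be
     * computed before or after c()+d(), but the outer addition
     * cannot be performed until both of its operand values exist. */
    int r = (a() + b()) + (c() + d());
    printf("r = %d\n", r);
    return 0;
}

A conforming implementation may print the four letters in any order,
but r is always 10, because the outer addition can't happen until both
partial sums have been computed.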
I've seen claims (possibly from you?) that it's possible for a
conforming implementation to generate code for a+b+c which calculates
the result of a+b after (sic!) the result of that very calculation has
already been added to c. However, when I asked for details of how that
was supposed to work, I learned that the supposedly-conforming code
calculated a+b twice - once for adding to c, and the second time for the
sole purpose (as far as I could tell) of "proving" that it's permissible
to compute it afterward.
I'll concede that the as-if rule allows spurious extra computations to
be inserted at any time, so long as they don't affect the final result.
However, for an implementation that pre-#defines __STDC_IEC_559__, it
seems to me that the "final result" necessarily includes the values of
testable floating-point environment flags; at least, if it occurs within
code that actually performs such tests.
You know this, but for the benefit of those who don't: C99 added
<fenv.h>, providing portable C support for such flags.
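For anyone who hasn't used it, here's a minimal sketch of the sort of
test I mean (my own code, assuming an implementation that honours the
FENV_ACCESS pragma):

#include <fenv.h>
#include <stdio.h>

/* Tell the translator that this code reads the FP environment. */
#pragma STDC FENV_ACCESS ON

int main(void)
{
    volatile double x = 1.0, y = 3.0;

    feclearexcept(FE_ALL_EXCEPT);
    double q = x / y;            /* 1/3 is not exactly representable */

    if (fetestexcept(FE_INEXACT))
        puts("FE_INEXACT is set");
    printf("q = %g\n", q);
    return 0;
}

If rearranging the arithmetic changed which flags are raised by the
time fetestexcept() is called, that's an observable difference - which
is what I mean by saying the "final result" includes the flag values.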
I remember a long discussion we had (partly off-line) in which you
claimed that C99's provision of such support, in that form, made the
situation worse than it would have been without it. You presented a list
of very real weaknesses in the specifications, primarily consisting of
things that are optional which, if I understood your arguments, could
only be considered useful if mandatory. It seems to me that if a feature
is optional, but I have a portable way of testing whether it's
supported, and a portable way of making use of it if it is supported,
that's unambiguously more useful than having nothing portably specified
about that feature - I never understood your claims to the contrary.
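In other words, something along these lines (a sketch of my own, not
code from the standard):

#include <stdio.h>

#ifdef __STDC_IEC_559__
#include <fenv.h>
#pragma STDC FENV_ACCESS ON

/* IEC 60559 support is guaranteed, so the exception flags are usable. */
static void report(double x, double y)
{
    feclearexcept(FE_ALL_EXCEPT);
    volatile double q = x / y;
    printf("%g / %g = %g (%s)\n", x, y, (double)q,
           fetestexcept(FE_INEXACT) ? "inexact" : "exact");
}
#else
/* No such guarantee: do the arithmetic but claim nothing about flags. */
static void report(double x, double y)
{
    printf("%g / %g = %g\n", x, y, x / y);
}
#endif

int main(void)
{
    report(1.0, 2.0);   /* exact in binary floating point */
    report(1.0, 3.0);   /* inexact */
    return 0;
}

Where __STDC_IEC_559__ is pre-#defined, the first branch is both usable
and meaningful; elsewhere the program still compiles and does something
sensible. That's the benefit I'm talking about.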