how long is double

M

Martijn Lievaart

"
The fourth paragraph of section 5 of the C++ standard: Except where
noted, the order of evaluation of operands of individual operators and
subexpressions of individual

expressions, and the order in which side effects take place, is
unspecified.

Got it. Thx.

M4
 
R

Ron Natalie

Suppose I have an expression like

double d = 1.1 + x + 2.2 + 3.3; // we have double x in scope

Here, must the compiler generate code that does 3 separate additions in
that order, rather than emitting code equivalent to the expression

double d = x + 6.6;

The compiler is free to reorder the expression. If you want to enforce ordering
you have to introduce sequence points in the calculation.
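For example (a minimal sketch; the names are just illustrative), splitting
the sum into separate statements puts a sequence point after each addition,
so the compiler must behave as if the additions happen in exactly this order:

double d = 1.1; // each full expression ends with a sequence point
d += x;         // so these additions now have a fixed order
d += 2.2;
d += 3.3;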
 
R

Ron Natalie

James Curran said:
According to the Note in the Standard (1.9.15, pg 7, PDF Pg 33):
"operators can be regrouped according to the usual mathematical
rules....[caveat about machines in which overflows produce an
exception]...the above expression statement can be rewritten by the
implementation in any of the above ways because the same result will occur."
First, Notes are non-normative.
Second, the regrouping it's talking about isn't just a reordering of the
order of evaluation. The operative description is at the beginning of
Section 5 (4th paragraph).
 
M

Michal Kowalski

IMHO a more likely explanation is that the 3 numbers in a + b + c are
added in a different order. That is a possible source of a difference in
the last bit of the precision.


Good catch. This is exactly what happens in the case of a test app I
compiled with VC 6.0 in Debug and Release.

Cumulative results (of summation) in both cases are as follows:

debug:
ST0 = -1.14359638839963047e+0001
ST0 = -1.15118805451323815e+0001
ST0 = -1.22593847193835845e+0001 <= end result

release:
ST0 = -7.47504174251202524e-0001
ST0 = -8.23420835387278838e-0001
ST0 = -1.22593847193835827e+0001 <= end result
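The effect is easy to reproduce directly (a sketch assuming ordinary IEEE
double arithmetic, e.g. SSE2 code generation; the values are just
illustrative):

#include <cstdio>

int main()
{
    double a = 0.1, b = 0.2, c = 0.3;
    std::printf("%.17g\n", (a + b) + c); // 0.60000000000000009
    std::printf("%.17g\n", a + (b + c)); // 0.59999999999999998
    return 0;
}

The two sums differ only in their last bits, just as in the ST0 dumps above.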


MK
 
E

Eric J. Kostelich

Is there a good test suite that will work for this? (Both for
detecting hardware failures/bugs and C++ compiler stupidities.)

William Kahan's well-known Paranoia program attempts to deduce,
in a portable way, the basic properties of floating-point arithmetic
on a given implementation. Versions are available for K&R C, Fortran 66,
and others at www.netlib.org/paranoia/. Though archaic, the Fortran
version should be compatible with modern Fortran compilers; the C
version may require modification to be compatible with ANSI C or C++.

--Eric
 
K

kanze

[ f'up comp.lang.c++ ]
In message <[email protected]>, Balog Pal <[email protected]>
writes
I am not sure that the compiler has that much freedom when the order
produces different results. This is not the same as the requirements
re order of evaluation of sub-expressions (i.e. that there is no
requirement).

I think so. If b is almost -c and a is much smaller, more precision is
lost if a is added to b first than when b is added to c first. There
is nothing the compiler can do about this. If this order changes due
to optimization switches, that may be seen as a QOI issue, but even
that is a bridge too far for me.

I think that this was Francis' point. According to the standard, a+b+c
is (a+b)+c. The compiler is free to rearrange this any way it pleases,
as long as the results are the same as if it had done (a+b)+c. On most
machines, with integer arithmetic, there is no problem. On no machine
that I know of, however, can the compiler legally rearrange floating
point, unless it absolutely knows the values involved.
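A quick sketch of why the rearrangement is visible (illustrative values,
taking the extreme case where b exactly cancels c):

#include <cstdio>

int main()
{
    double a = 1e-20, b = 1.0, c = -1.0;
    std::printf("%g\n", (a + b) + c); // 0: a is absorbed into b, then lost
    std::printf("%g\n", a + (b + c)); // 1e-20: the cancellation comes first
    return 0;
}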

There was quite a lot of discussion about this when the C standard was
first being written. K&R explicitly allowed rearrangement, even when it
would result in different results. In the end, the C standard decided
not to allow this.
 
K

kanze

James Curran said:
According to the Note in the Standard (1.9.15, pg 7, PDF Pg 33):
"operators can be regrouped according to the usual mathematical
rules....[caveat about machines in which overflows produce an
exception]...the above expression statement can be rewritten by the
implementation in any of the above ways because the same result will
occur."
I guess the key point is the meaning of "same result". I was told
(pre-C89) that a C compiler could assume infinite precision when
reordering floating point expressions, something not ruled out by that
statement, and not addressed (as far as I could see) elsewhere in the
Standard.

In K&R 1, there was an explicit license for the compiler to rearrange
expressions according to the usual laws of algebra, that is, without
considering possible overflow or rounding errors. The authors of the C
standard removed this liberty intentionally.

I suppose that a compiler writer could wriggle out on the grounds that
the standard doesn't require a minimum precision for floating point
arithmetic. On the other hand, the considerations of overflow would
still probably hold -- while most hardware will give the correct results
for integer arithmetic, provided they are representable, even if there
was an intermediate overflow, this is not generally the case for
floating point.
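A sketch of the overflow point (using unsigned integers, whose wraparound
is well defined; signed overflow is formally undefined behaviour):

#include <cstdio>
#include <climits>

int main()
{
    unsigned u = UINT_MAX;
    std::printf("%u\n", (u + 2u) - 2u); // UINT_MAX: the intermediate wrap cancels out

    double big = 1e308;
    std::printf("%g\n", (big + big) - big); // inf: the intermediate overflow is permanent
    return 0;
}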
 
P

Peter van Merkerk

Francis Glassborow said:
FP arithmetic is very sensitive to such things as rounding mode and
order of evaluation. On x86 architectures there are considerable
differences between calculations done entirely in registers and ones
where the intermediate results are written back to memory. My guess is
that in debug mode more intermediate results are being written back and
thereby are being stripped of guard digits. For example, your problem
with '0' can be the consequence of subtracting two values that are
almost equal and are actually 'equal' within the limits of the precision
supported by memory values (which often have lower precision than
register values). This is an interesting case because it means that the
heavily optimised (minimum of writing back) release version works as
naively expected while the debug version that adheres strictly to the
semantics of the abstract C++ machine fails.

That is correct. I ran into this problem in a project I did several
years ago; the release build produced slightly different results than
the debug build. The Microsoft compiler has an 'Improve Float
Consistency' option (/Op) which fixes this problem. Unfortunately,
enabling this option slows down floating-point-intensive code quite a
bit.
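The write-back effect can be sketched like this (it only shows up on
x87-style targets that keep intermediates at extended precision, e.g.
32-bit x86 built without SSE2; elsewhere both lines print the same):

#include <cstdio>

int main()
{
    double a = 1.0, b = 1e-17; // b is below half an ulp of a in 64-bit double
    volatile double stored = a + b; // forces a round trip through a 64-bit memory slot
    std::printf("%.17g\n", (a + b) - a); // may be near 1e-17 if kept in an 80-bit register
    std::printf("%.17g\n", stored - a);  // 0: the store rounded a + b back to 1.0
    return 0;
}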
 
M

Martijn Lievaart

I think that this was Francis' point. According to the standard, a+b+c
is (a+b)+c. The compiler is free to rearrange this any way it pleases,
as long as the results are the same as if it had done (a+b)+c. On most
machines, with integer arithmetic, there is no problem. On no machine
that I know of, however, can the compiler legally rearrange floating
point, unless it absolutely knows the values involved.

There was quite a lot of discussion about this when the C standard was
first being written. K&R explicitly allowed rearrangement, even when it
would result in different results. In the end, the C standard decided
not to allow this.

Then this seems to be a place where C and C++ differ; see the answer and
the quote from the C++ standard from Ron Natalie.

Can anyone confirm this?

M4
 
P

P.J. Plauger

Is there a good test suite that will work for this? (Both for
detecting hardware failures/bugs and C++ compiler stupidities.)

Not that a test program would eliminate the need to know what you're
doing, of course.

Fred Tydeman has an incredibly thorough test suite for floating-point
support. That's what we've used to hunt down the most subtle problems,
both in our own libraries and in the environments we build them on.
We have a product called a Quick Proofer which is way less thorough,
but still does a remarkably good job of highlighting FPP lapses.
And we're developing a very powerful set of math function tests in
house that we're not in a hurry to sell.

As for free stuff, there's bugger all out there that's worth the
bother.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 
F

Francis Glassborow

Ron Natalie said:
The compiler is free to reorder the expression. If you want to enforce ordering
you have to introduce sequence points in the calculation.

There are two places in the Standard that might be considered relevant:

1) 1.9 para 15 (which is a note and so non-normative but can give a clue
as to intent)

There it requires that the operators are really associative and
commutative -- and a non-normative footnote to the non-normative note
adds that this is never considered to be the case for overloaded
operators.

It then proceeds to give source code examples re int values and fails to
give any guidance in the case of floating point.

When I combine the above with C's rules (which explicitly forbid
re-ordering), I come to the conclusion that fp arithmetic operators are
not 'really' associative and commutative and so the limited licence to
regroup (note: not re-order) does not apply. But even if it did, the best
that could be achieved with the above is:

double d = 1.1 + x + (2.2 + 3.3);
which the compiler could transform to:

double d = 1.1 + x + 5.5;

Had the writers of that section meant re-order they could have said so.

2) Section 5 para 4 is actually no more helpful. Historically this
formulation has not been taken as a licence for re-ordering successive
applications of the same operator, other than where explicit licence is
granted (or can be deduced from the as-if rule). If op is a left-to-right
associative operator:

a op b op c;

must be evaluated as:
(a op b) op c;
and not as:
a op (b op c);

However:

a op1 b op2 c op1 d;

with op1 having a higher precedence than op2 allows for (a op1 b) and (c
op1 d) to be evaluated in either order. That has been the normal
interpretation of the rule with regard to order of evaluation of
sub-expressions.
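That licence is easy to observe with side effects (a sketch; trace is a
hypothetical helper added only to expose the order):

#include <cstdio>

int trace(int v) { std::printf("%d ", v); return v; }

int main()
{
    // The two products may be evaluated in either order, so the output
    // may be "1 2 3 4 " or "3 4 1 2 " (among other interleavings).
    int r = trace(1) * trace(2) + trace(3) * trace(4);
    std::printf("= %d\n", r); // = 14 either way
    return 0;
}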

It is probably safer with modern high-optimisation implementations to
write code with sequence points enforcing intent, but I can see nothing
in the C++ Standard that allows C++ to treat floating point expressions
differently from the way that a C compiler is required to.
 
F

Francis Glassborow

Martijn Lievaart said:
Then this seems a place where C and C++ differ, see the answer and quote
from the C++ standard from Ron Natalie.

Anyone who can confirm this?

I do not think so; I think C++ in the non-normative note [1.9 para 15]
was attempting to make current practice explicit -- i.e. regrouping.
Clause 5 para 4 largely paraphrases 6.5 paras 1-3 of the current C
Standard (or 6.3 in the old C Standard).

I do not think we intended any extra licence in C++ other than that
granted in C.
 
J

John Potter

Then this seems a place where C and C++ differ, see the answer and quote
from the C++ standard from Ron Natalie.
Anyone who can confirm this?

His quote is about order of evaluation. It relates to a*b+c*d where
either of a*b or c*d may be evaluated first. In the code under
discussion, a+b+c, it is (a+b)+c. c may be evaluated before a+b, but
changing it to a+(b+c) or (a+c)+b is only allowed by the as-if rule.
The non-normative note in 1.9/15 amplifies this. It seems that the
C++ rules are the same as the C rules.

I have no idea on what Ron based his other post, which claimed an
expression could be reordered giving different results. There is no
support in either of the cited passages.

John
 
