On 11/10/13 16:03, Alf P. Steinbach wrote:
But anyway, what does MinGW g++ do?
I'm no wizard with g++ options, so maybe the example below lacks the
specific option that will cause a bit of nuclear fireworks.
I'm just hoping "-Ofast" will suffice to get that awfully expensive and
unacceptable integer comparison in the "if" condition removed:
[code]
[D:\dev\test]
g++ (GCC) 4.7.2
[D:\dev\test]
g++ foo.cpp -fno-wrapv -Ofast
[D:\dev\test]
a
x*a = 2147483647, x+b = -2147483648
Oh thank Odin! The war is over!
[D:\dev\test]
[/code]
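(The foo.cpp source itself isn't quoted in this excerpt; a minimal
sketch of the kind of program under discussion, reconstructed from the
output above and the gcc diagnostic quoted further down, with the exact
condition and names being guesses, could look like this:)

[code]
// Hypothetical reconstruction -- not the original foo.cpp; the exact
// condition and the names are assumptions based on the program output
// and the "(X + c) >= X" diagnostic quoted further down.
#include <iostream>
#include <limits>
using namespace std;

template< int a, int b >
void foo( int x )
{
    // With x == INT_MAX and b == 1 the sum wraps to INT_MIN on ordinary
    // two's complement hardware, so this condition is false at run time.
    // An optimizer that assumes signed overflow never happens may treat
    // it as always true and execute the "if" body anyway.
    if( x + b > x )
    {
        cout << "x*a = " << x*a << ", x+b = " << x + b << endl;
        cout << "Firing nukes at ourselves!" << endl;
    }
    else
    {
        cout << "x*a = " << x*a << ", x+b = " << x + b << endl;
        cout << "Oh thank Odin! The war is over!" << endl;
    }
}

int main()
{
    foo<1, 1>( numeric_limits<int>::max() );
}
[/code]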
Oh dear, g++ also produced wrap-around behavior.
Which means that changing the standard in that direction would not
introduce any new inefficiency (assuming for the sake of discussion that
it is an inefficiency, which I'm not sure I agree with), but instead
capture existing practice, at least for these two compilers.
I've just tested gcc (version 4.5.1) with your code. The only change I
made was using "const char*" instead of "auto const", because my gcc is
a little older. And I have no "-Ofast", so I tested with -O2 and -Os:
$ g++ t2.cpp -o t2 -Os -Wall && ./t2
x*a = 2147483647, x+b = -2147483648
Oh thank Odin! The war is over!
$ g++ t2.cpp -o t2 -O2 -Wall && ./t2
t2.cpp: In function ‘void foo(int) [with int a = 1, int b = 1]’:
t2.cpp:30:31: instantiated from here
t2.cpp:17:5: warning: assuming signed overflow does not occur when
assuming that (X + c) >= X is always true
x*a = 2147483647, x+b = -2147483648
Firing nukes at ourselves!
Thanks for testing that.
Using -O2 makes no difference here :-(, with my main g++ installation,
but it's nice to see the issue for real.
Note that executing the "if" body even though its condition is false, a
condition that the programmer assumes to hold inside that body, is what
caused the self-immolation -- broken assumptions generally do wreak havoc.
Also note that the behavior is different with optimization (release
build) and without (which can be the case for a debug build), with no
apparent problem for the debug build.
So the gcc optimization yields
(1) broken assumptions, perhaps thereby causing a nuclear attack on
one's own position, and
(2) possibly/probably active sabotage of the debugging efforts to fix
that, since no problem is apparent when debugging,
which nastiness IMHO is a pretty steep cost for avoiding one or two
machine code instructions in a rare special case.
We are talking about /undefined behaviour/ here.
Yes, that's what this sub-thread is about and has been about, a special
case of formal UB.
Good observation.
And with well-defined signed arithmetic -- modular -- one would have
avoided that UB.
And one would then therefore also avoid the broken assumption, and
therefore also the resulting self-immolation or other unplanned effect.
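(In fact g++ already offers this as a non-standard extension: with the
-fwrapv switch, signed overflow is defined as modular. Assuming the same
t2.cpp as above, something like

  $ g++ t2.cpp -o t2 -O2 -fwrapv -Wall && ./t2

should then print the "war is over" line even at -O2, since the
optimizer is no longer allowed to assume that the overflowing comparison
always holds.)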
As mentioned earlier, and I think you agreed with that, formal UB is not
carte blanche for the compiler to do unreasonable things, such as
removing the function call in "x = foo( i++, i++ )", removing whole
loops, or -- except when the programmer tries to find out what's going
on by debugging -- executing "if" bodies when their conditions don't hold.
If you want to check
reality, you will have to do /far/ better than a single check of a
couple of compilers on one platform.
Give me test results from a dozen code cases in each of C and C++,
compiled with multiple compilers (at least MSVC, gcc, llvm and Intel),
with several versions, on at least Windows and Linux, each with 32-bit
and 64-bit targets, and each with a wide variety of command line
switches. Show me that /everything/ except gcc /always/ treats signed
overflow as modular, and I start to consider that there might be a pattern.
Then I'll send you to check perhaps 40 different embedded compilers for
30 different targets.
I believe those requirements are far more stringent than the reality
checking before the decision to make std::string's buffer guaranteed
contiguous, at the committee's meeting at Lillehammer in 2005.
However, while your requirements are unrealistic, it's unclear what they
are requirements for.
Some others and I are discussing the issue of whether it might be a good
idea (or not) to make signed arithmetic formally modular. As I've shown
you by linking to Google Groups' archive, this is an old discussion in
clc++. I referred to a thread from 9 years ago, but it goes even further
back: most of the arguments are well known; regarding this issue you
have contributed nothing new so far.
But your requirements statement above indicates that you are discussing
whether gcc is alone in its treatment of signed overflow.
I don't know, but I do think you're alone in discussing that.
Alternatively, I'll settle for quotations from the documentation for
said compilers guaranteeing this behaviour, if you don't want to test
them all.
I'll just note now that with guaranteed modular signed arithmetic, one
would more probably add an assertion about e.g. "x+b > 0".
Such simple assertions would then most probably catch the invalid actual
argument, and the added knowledge could in many cases also be used by
the compiler to optimize the code.
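(As a minimal sketch, reusing the hypothetical foo from the
reconstruction above and assuming wrapping semantics, so that the
assertion actually sees the wrapped, negative value:)

[code]
#include <cassert>
#include <limits>

template< int a, int b >
void foo( int x )
{
    // With modular (wrapping) signed arithmetic this expression has a
    // well-defined value, so for x == INT_MAX the assertion reliably
    // fires in a debug build, rather than the program having already
    // invoked UB by the time the check runs.
    assert( x + b > 0 );
    // ... proceed, using x + b as a known-positive position ...
}

int main()
{
    foo<1, 1>( std::numeric_limits<int>::max() );  // asserts, as intended
}
[/code]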
Of course, instead of CHANGING the semantics of existing types, one
could introduce new types, perhaps via a header like <stdint.h>.
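(A bare-bones sketch of what such a type might look like, with the name
wrap_int32 just made up for illustration; the arithmetic is done on
uint32_t internally, since unsigned overflow is already defined as
modular, and the conversion back to int32_t is implementation-defined
rather than undefined, wrapping on ordinary two's complement
implementations:)

[code]
#include <stdint.h>

// Hypothetical example: a 32-bit signed integer type with guaranteed
// modular arithmetic, built on top of well-defined unsigned wrap-around.
class wrap_int32
{
public:
    wrap_int32( int32_t v = 0 ): bits_( uint32_t( v ) ) {}

    // Conversion back to signed is implementation-defined (not UB) for
    // values above INT32_MAX; on two's complement machines it wraps.
    int32_t value() const { return int32_t( bits_ ); }

    wrap_int32 operator+( wrap_int32 other ) const
    { return wrap_int32( int32_t( bits_ + other.bits_ ) ); }

    wrap_int32 operator*( wrap_int32 other ) const
    { return wrap_int32( int32_t( bits_ * other.bits_ ) ); }

private:
    uint32_t bits_;
};
[/code]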
In passing, new types are also IMO a good solution for gcc's problems
with IEEE conformance (it's of two minds because the semantics change
with the options, and so it reports false guarantees via numeric_limits).
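(For example, to my knowledge the report below stays "true" even when
-Ofast/-ffast-math changes the actual floating point semantics, because
the value is baked into the library rather than derived from the
compilation flags:)

[code]
#include <iostream>
#include <limits>

int main()
{
    // Reports whether double claims full IEC 559 (IEEE 754) conformance.
    std::cout << std::boolalpha
              << std::numeric_limits<double>::is_iec559 << "\n";
}
[/code]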
So, the general idea is a win-win-win: getting rid of UB, detecting bugs
up front, getting optimization without counter-intuitive cleverness --
and incidentally getting the same behavior for release and debug
builds of the code, which I think is very much desirable.
- Alf