Strange - a simple assignment statement shows an error in VC++ but works in gcc!


Harald van Dijk

Tell me, do you check your integer additions for max conditions?

Can anyone here mention an architecture which crashed and did not wrap?

gcc has a -ftrapv option. lcc-win has a differently named but similar
option (and IIRC is a bit more reliable).

With no special options, just optimisations enabled, gcc neither crashes
nor wraps on integer overflow. (It would wrap, except it optimises on the
assumption that there is no overflow, so INT_MIN and INT_MAX+1 could
print out identical, yet not compare equal.)
 

jameskuyper

Rainer said:
You can substitute 'the consequences of' for 'the result of' if
you happen to like that better. This may be a Germanism.

As far as the standard is concerned, the result of an expression is
the value that it has (a value that, in this case, gets discarded).
The word "consequences" could be understood as including the side-
effects (in this case, the change in the value of i); that is not what
the standard means when it uses the term "result".

....
But 'absence of information about X' is nothing magical.

It's not absence of information, it's an absence of constraints; and
it is, furthermore, only an absence from the C standard; both
information and constraints might be available from other sources
(POSIX, implementation documentation, etc.).

My point was that, to many people, what is allowed by that absence
seems magical. For instance, they can't imagine any non-magical
mechanism whereby their program might abort before it even executes
the first line of main(), and as a result never actually executes the
line of code that gives the implementation permission to generate such
an executable.
If the behaviour is undefined, the value of the expression cannot be
defined,

I've already addressed that argument in the part you clipped; you
obviously found it unconvincing, and not worth responding to, so there
appears to be little point in further discussion.
 

Ben Bacarisse

Richard said:
Flash Gordon said:
Rainer said:
Rainer Weikusat wrote:
[...]
But i++ is inherently safe - at least on most common architectures. And
certainly there's nothing complicated going on, it is obvious what
happens.
[...]

No, it's not inherently safe.

int i = INT_MAX;
i++;

The "i++" in the above invokes undefined behavior.
'Undefined behaviour' cannot be 'invoked'. The correct statement would
be 'I [meaning you] have no idea what will happen because of that'. A
reason why this would be so could be that 'the result of evaluating
this particular expression is undefined',
From your wording I'm not sure if you realise that the behaviour of
the program (not just the result of the expression) is undefined on
executing that code.

Since one is a necessary consequence of the other, a claim like the
one above doesn't make much sense (and it is somewhat out-of-context
here, because the topic was the behaviour of this expression).

Wrong. The behaviour on that expression being executed could be that the
program crashes, rather than that it returns a value.

And yet I have hardly ever seen code which checks the boundaries for
integer addition/increment. Ever.

"ever" or "hardly ever"? The fact that you hedge one and not the other
suggests you have seen code that checks arithmetic ranges.
because normally the expected behaviour is to wrap.

But it is not usually correct. It is hard to think of a non-contrived
case where the wrapping behaviour is what the programmer wants. They
may not check, but they expect the program simply to go wrong when an
int overflows.
Tell me, do you check your integer additions for max conditions?

When it matters, yes. Often, I just accept that the program will go
horribly wrong, but then I don't write important code anymore. When I
used to, I checked.
Can anyone here mention an architecture which crashed and did not
wrap?

My Intel laptop, for one. Most implementations that I have used for
serious work could be told to trap on integer overflow. Some, like
big IBMs and the Alpha architecture, did this in hardware, but even when
this assist was not available, compiling with overflow trapping is a
great help to debugging.
This is typical c.l.c nonsense where reality is obscured. Billions of
lines of code rely on integers wrapping

I have to take your word for that, but given the odd effect of an
integer operation "wrapping", I would be surprised if all of this code
is correct.
and no one in their right mind
is going to tell me any different.

Ah. My mistake (again). Please ignore this post!
 

jameskuyper

Richard said:
Ben Bacarisse said: ....

I'd be surprised if it exists (in the volume claimed). It is
self-evident that it isn't correct, because it relies on undefined
behaviour.

Not all C code needs to be portable in order to be correct. Code which
relies on an implementation-specific promise about the behavior on
overflow could be perfectly correct, so long as such code is not
supposed to be portable to any implementation which fails to make that
promise.
 

Nate Eldredge

Ben Bacarisse said:
"ever" or "hardly ever"? The fact that you hedge one and not the other
suggests you have seen code that checks arithmetic ranges.

"He's hardly ever sick at sea!"
My Intel laptop, for one. Most implementations that I have used for
serious work could be told to trap on integer overflow. Some, like
big IBMs and the Alpha architecture, did this in hardware, but even when
this assist was not available, compiling with overflow trapping is a
great help to debugging.

What compiler was this, and how did you enable it? AFAIK, the x86
architecture doesn't trap automatically on integer overflow, but
requires the compiled code to explicitly check whether it has occurred
(though it does provide the INTO instruction to make this checking a
little more convenient). I don't believe I've seen a C compiler that
actually generated code to do this, however.
 

Ben Bacarisse

Richard Heathfield said:
Ben Bacarisse said:


Why?

Because the alternative would be even less productive. I don't want a
debate about exactly how much code relies on "silent wrapping" --
especially with Just Plain Richard.
I'd be surprised if it exists (in the volume claimed). It is
self-evident that it isn't correct, because it relies on undefined
behaviour.

There are other ways to define correct. Yours is perfectly
reasonable, but had I used it (and I often do) I would simply be
making a circular point. JPR knows the code is UB, so to say it is
incorrect because it is UB adds very little.

A less contentious phrasing might have been "I doubt that the code in
question behaves as intended even on implementations that provide ints
that wrap silently on overflow."
 

Flash Gordon

Rainer said:
Flash Gordon said:
Rainer said:
Rainer Weikusat wrote:
[...]
But i++ is inherently safe - at least on most common architectures. And
certainly there's nothing complicated going on, it is obvious what
happens.
[...]

No, it's not inherently safe.

int i = INT_MAX;
i++;

The "i++" in the above invokes undefined behavior.
'Undefined behaviour' cannot be 'invoked'. The correct statement would
be 'I [meaning you] have no idea what will happen because of that'. A
reason why this would be so could be that 'the result of evaluating
this particular expression is undefined',
From your wording I'm not sure if you realise that the behaviour of
the program (not just the result of the expression) is undefined on
executing that code.
Since one is a necessary consequence of the other, a claim like the
one above doesn't make much sense (and it is somewhat out-of-context
here, because the topic was the behaviour of this expression).
Wrong.

What topics I am discussing is up to me, not up to you (I know that's
not what you would like the 'wrong' to have applied to, but since that
doesn't make any sense anyway, I have chosen to interpret it sensibly).

The behaviour of the program being undefined is not a consequence of the
result of the expression being undefined, which is what I was referring
to. I see now that you intended it the other way around.
I wasn't writing about what might or might not happen when code
generated from input source code without a defined meaning would be
executed. It is useless to speculate about that, anyway.

As I said, I commented on that because the way you phrased your response
it was not clear that you realised the behaviour was undefined rather
than just the result.
No, it is the absence of anything specific. That's why it is called
_UN_DEFINED.

Which, in turn, is a definition of undefined behaviour. It is even in
the standard as a definition of it.
But you wrongly connected the 'mathematical' from the first clause
with the second, alluding to 'the mathematical value of the expression',
this being a term used in footnote 49, which is an explanation of how
to convert a 'large' value to a 'small' unsigned integer type.

Not that any of this would anyhow relate to my original statement
about how an expression claimed to have undefined behaviour could be
defined to have a value which causes the behaviour to be undefined
without - well - not having undefined behaviour.

As I say, it is quite clear what the standard means when it talks about
a result not being in the range of representable values for its type. It
obviously does not require the program to produce that result, since it
says the behaviour is undefined, therefore it can only mean what the
result would be mathematically rather than the undefined behaviour you
get when it is actually evaluated.
 

jameskuyper

George Peter Staplin wrote:
....
We aren't back in the bad old days where people generally had to pay for
each minute of Internet, and every message cost the person that was pulling
it from a news feed. Many of the approaches of modern Usenet were a result
of constraints like that, which no longer always apply.

My time is much more valuable to me than the cost of connecting to the
internet; that's been true for as long as I've had internet access.
The time I must waste reading enough of the trolls' messages to
realize that I should have ignored them is valuable enough to me to
justify my complaints about off-topic messages. Message filtering
helps, but I begrudge even the time that I have to spend figuring out
that someone belongs on my kill list. I shouldn't have to worry about
this; I wish comp.lang.c.moderated had a latency period that was short
enough to make it a viable alternative to comp.lang.c.
 

Richard Tobin

Richard Heathfield said:
Can you name any implementations that give an undertaking about
behaviour on signed integer overflow?

gcc with a suitable command-line flag.

-- Richard
 

CBFalconer

Richard said:
I don't know of any, but I do know that some compilers now
*assume* that overflow does not occur, and so do optimisations
inconsistent with wrapping.

There's not much we can do about array overruns and misuse of
pointers, due to the C construction methods, but on almost all
platforms we can efficiently detect and trap on integer overflow.
It would be very handy to have this available. Note that the
compiler has to be smart enough to eliminate unsigned overflow.
 

CBFalconer

Nate said:
.... snip ...


What compiler was this, and how did you enable it? AFAIK, the x86
architecture doesn't trap automatically on integer overflow, but
requires the compiled code to explicitly check whether it has
occurred (though it does provide the INTO instruction to make this
checking a little more convenient). I don't believe I've seen a C
compiler that actually generated code to do this, however.

I built a Pascal compiler about 25 years ago that did this.
 

Ike Naar

Can anyone here mention an architecture which crashed and did not wrap?
This is typical c.l.c nonsense where reality is obscured. Billions of
lines of code rely on integers wrapping and no one in their right mind
is going to tell me any different. But meanwhile in c.l.c ....

I seem to remember that the VAX could be configured to trap on integer
overflow. And, of course, the Burroughs (a.k.a Unisys A series).
 

Ben Bacarisse

Nate Eldredge said:
"He's hardly ever sick at sea!"


What compiler was this, and how did you enable it?

gcc -ftrapv is the most common way to get this behaviour nowadays.
The Alpha compiler was on an HP-UX system in the dim and distant
past. I don't remember them all. I certainly can't remember how to
enable the checking!
AFAIK, the x86
architecture doesn't trap automatically on integer overflow, but
requires the compiled code to explicitly check whether it has occurred
(though it does provide the INTO instruction to make this checking a
little more convenient). I don't believe I've seen a C compiler that
actually generated code to do this, however.

gcc generates a call to a function.
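A rough sketch of what such a runtime helper can look like; this is not libgcc's actual `__addvsi3`, just one UB-free way to detect signed overflow: do the addition in unsigned arithmetic (which is defined to wrap) and then inspect the signs.

```c
#include <limits.h>
#include <stdlib.h>

/* Trapping add, sketched without undefined behaviour: if both
   operands have the same sign but the result has the opposite
   sign, the signed addition overflowed. */
int trapping_add(int a, int b)
{
    unsigned int us = (unsigned int)a + (unsigned int)b;
    int s = (int)us;   /* implementation-defined conversion; on the
                          usual two's-complement targets it wraps */
    if ((a >= 0) == (b >= 0) && (s >= 0) != (a >= 0))
        abort();       /* overflow: same-sign operands, opposite-sign result */
    return s;
}
```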
 

Keith Thompson

Richard Heathfield said:
jameskuyper said:


Sure.

Can you name any implementations that give an undertaking about
behaviour on signed integer overflow? (I'll start. Borland's Turbo
C++ 2.0 gives an assurance in its documentation that overflow is
ignored.)

Saying that overflow is "ignored" doesn't necessarily tell you what
the result is going to be. It probably means that it does ordinary
two's-complement wraparound, but I don't know that I'd want to depend
on that unless the documentation actually guaranteed it.
 

CBFalconer

Ike said:
I seem to remember that the VAX could be configured to trap on
integer overflow. And, of course, the Burroughs (a.k.a Unisys A
series).

I advise ignoring, possibly plonking, Richard the nameless. He is
a troll with no idea of reality. The fact is that integer overflow
causes undefined behaviour. The reason is that the standard says
so.
 

CBFalconer

George said:
Antoninus Twink wrote:
.... snip ...
.... snip ...

It's only offensive if you choose to take it that way. People
often have other motives for saying such things. You don't
have to internalize it or believe it. So it's ultimately your
choice, and you pick your poison.

Twink, and Han, are monstrous trolls on c.l.c. They have shown no
signs of reforming, and most have them firmly plonked. Please
don't feed the trolls.
 

Nate Eldredge

Ben Bacarisse said:
gcc -ftrapv is the most common way to get this behaviour nowadays.
Thanks.

The Alpha compiler was on an HP-UX system in the dim and distant
past. I don't remember them all. I certainly can't remember how to
enable the checking!


gcc generates a call to a function.

So it does.

Interestingly, gcc 4.2.1 on my amd64 machine doesn't do it right. The
program

#include <stdio.h>
int twotimes(int x) { return 2 * x; }
int main(void) { printf("%d\n", twotimes(2000000000)); return 0; }

when compiled with -ftrapv, runs without error and prints -294967296.
gcc appears to be calling a function which checks to see if the
multiplication overflows a `long' (64 bits), rather than an `int'. I'm
compiling a more recent version now to see if this bug still exists.
 

jacob navia

Nate said:
"He's hardly ever sick at sea!"


What compiler was this, and how did you enable it? AFAIK, the x86
architecture doesn't trap automatically on integer overflow, but
requires the compiled code to explicitly check whether it has occurred
(though it does provide the INTO instruction to make this checking a
little more convenient). I don't believe I've seen a C compiler that
actually generated code to do this, however.

lcc-win generates this with the overflow checking option
 

jacob navia

Ben said:
gcc -ftrapv is the most common way to get this behaviour nowadays.
The Alpha compiler was on an HP-UX system in the dim and distant
past. I don't remember them all. I certainly can't remember how to
enable the checking!


gcc generates a call to a function.

as lcc-win does.
 

Rainer Weikusat

jameskuyper said:
As far as the standard is concerned, the result of an expression is
the value that it has

You are free to misunderstand me at leisure, but not to reinterpret in
the same way after a clarification.

[...]
It's not absence of information, it's an absence of constraints;

See above. You are shuffling around terms in order to confuse the
issue.

[...]
My point was that, to many people, what is allowed by that absence
seems magical.

Too many people cannot get their heads around the idea that
'undefined' implies 'nobody knows anything about it' and not 'program
will crash', and too many other people try to reinforce this notion by
wandering off into long trains of fantastic speculation.
For instance, they can't imagine any non-magical mechanism whereby
their program might abort before it even executes the first line of
main(), and as a result never actually executes the line of code
that gives the implementation permission to generate such an
executable.

Neither can I. But that I happen to have a compiler which translates
some 'terms' which (coincidentally) look like C, but which are not C
because their meaning is undefined into particular machine code whose
behaviour is completely well defined is as beside the point as
speculations (like above) regarding what 'mythical other compilers
could do' in such a case. This is either fiction or a collection of
facts about certain software.
I've already addressed that argument in the part you clipped;

You have expressed your conviction that 'addition in C' is certainly
(conceptually) identical to 'addition in mathematics', as defined on a
particular set (of the many sets it could be defined on), namely, the
set of integral numbers, and then argued based on that. That's as
(un)justified as being convinced that 'addition in C' is certainly
(conceptually) identical to 'addition in mathematics, as defined on
the 'mod 2^n' (two to the power of n)-ring' and arguing based on that.
I assume you understand where the second would lead.
 
