Making Fatal Hidden Assumptions


Chris Torek

Keith Thompson said:
Ask Jack to lend you his bottle. You'll soon change your mind.

To clarify a bit ...

A mathematician named Klein
Thought the Moebius band was divine
Said he, "If you glue
The edges of two
You'll get a weird bottle like mine!"

:)

(A Moebius band has only one side. It is a two-dimensional object
that exists only in a 3-dimensional [or higher] space. A Klein
bottle can only be made in a 4-dimensional [or higher] space, and
is likewise a two-dimensional surface with only one side. The
concept can be carried on indefinitely, but a Klein bottle is hard
enough to contemplate already.)
 

CBFalconer

Andrey said:
There are actual environments where 's - 1' alone is enough to cause a
crash. In fact, any non-flat memory model environment (i.e. environment
with 'segment:offset' pointers) would be a good candidate. The modern
x86 will normally crash, unless the implementation takes specific steps
to avoid it.

This illustrates the fact that usenet threads are uncontrollable.
I wrote the original to draw attention to hidden assumptions, and
it has immediately degenerated into thrashing about the one real
error in the sample code. I could have corrected and eliminated
that error by a slight code rework, but then I would have modified
Mr Hsieh's code. There were at least seven further assumptions,
most of which were necessary for the purposes of the code, but
strictly limited its applicability.

My aim was to get people to recognize and document such hidden
assumptions, rather than leaving them lying there to create sneaky
bugs in apparently portable code.

 

Jordan Abel

Ah, sorry. I didn't read the lot carefully enough.

I don't imagine it to be anything. I suspect others do, and that's why
there is a potential for accusations of racism.
 

Paul Keinanen

There are actual environments where 's - 1' alone is enough to cause a
crash. In fact, any non-flat memory model environment (i.e. environment
with 'segment:offset' pointers) would be a good candidate. The modern
x86 will normally crash, unless the implementation takes specific steps
to avoid it.

Exactly which x86 mode are you referring to?

16-bit real mode, virtual-8086 mode, or some 32-bit mode (which are,
after all, segmented modes with all segment registers holding the same
value)?

If s is stored in 16 bit mode in ES:DX with DX=0, then p=s-1 would
need to decrement ES by one and store 000F in DX. Why would reloading
ES cause any traps, since no actual memory reference is attempted ?
Doing p++ would most likely just increment DX by one to 0010, thus
ES:DX would point to s again, which is a legal address, but with a
different internal representation.

IIRC some 32 bit addressing mode would trap if one tried to load the
segment register, but again, how could the caller generate such
constructs as s = ES:0, at least from user mode? In practice s = ES:0
could only be set by a kernel mode routine calling a user mode
routine, so this is really an issue only with main() parameters.
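
To make the representation arithmetic concrete, here is a small hosted-C
sketch of the real-mode address calculation described above (the linear()
helper is purely illustrative; a real 8086 does this in hardware):

#include <stdio.h>
#include <stdint.h>

/* Real mode forms a 20-bit linear address as segment*16 + offset. */
static uint32_t linear(uint16_t seg, uint16_t off)
{
    return ((uint32_t)seg << 4) + off;
}

int main(void)
{
    uint16_t es = 0x1234, dx = 0x0000;    /* s   = ES:DX = 1234:0000 */
    uint16_t es1 = es - 1, dx1 = 0x000F;  /* s-1 =         1233:000F */

    printf("s     -> %05lX\n", (unsigned long) linear(es, dx));   /* 12340 */
    printf("s-1   -> %05lX\n", (unsigned long) linear(es1, dx1)); /* 1233F */
    /* Incrementing the offset gives 1233:0010 -- the same linear
       address as s, but a different internal representation, just as
       described above. No memory reference is made, hence no trap.  */
    printf("s-1+1 -> %05lX\n", (unsigned long) linear(es1, dx1 + 1)); /* 12340 */
    return 0;
}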

Paul
 

Andrew Reilly

Incorrect. It is not about "lawyers", it is about actual _crashes_. The
reason why 's - 1' itself can (and will) crash on certain platforms is
the same as the one that will make it crash in exactly the same way in
"assembly language" on such platforms.

No, my point was that the language lawyers have taken a perfectly
appealing and generally applicable abstraction, and outlawed certain
obvious constructions on the flimsy grounds that it was easier to
pervert the abstraction than to support it on some uncommon (or indeed
hypothetical) hardware.
Trying to implement the same code in assembly language on such a
platform would specifically force you to work around the potential
crash, sacrificing efficiency for safety. In other words, you'd be
forced to use different techniques for doing 's - 1' in contexts where
it might underflow and in contexts where it definitely will not
underflow.

The assembly language version of the algorithm would *not* crash, because
the assembly language of the perverted platform on which that was a
possibility would require a construction (probably using an explicit
integer array index, rather than pointer manipulation) that would cause
exactly zero inefficiency or impairment of safety. (Because the index is
only *used* in-bounds.)
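
As a sketch of the two constructions being contrasted here (my
illustration, not code from the thread):

#include <stddef.h>

/* Undefined: when this loop exits, p has been decremented to s - 1,
   an out-of-bounds pointer value, even though it is never
   dereferenced. Assumes n > 0. */
long sum_backwards_ptr(const int *s, size_t n)
{
    long sum = 0;
    for (const int *p = s + n - 1; p >= s; p--)  /* loop exit forms s-1 */
        sum += *p;
    return sum;
}

/* Well defined: the index is a plain integer, and s + i is only ever
   formed (and used) in bounds -- the "explicit integer array index"
   construction mentioned above. */
long sum_backwards_idx(const int *s, size_t n)
{
    long sum = 0;
    for (size_t i = n; i > 0; i--)
        sum += s[i - 1];
    return sum;
}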
C language, on the other hand, doesn't offer two different '-' operators
for these two specific situations. Instead C language outlaws (in
essence) pointer underflows.

No it doesn't. The C language allows "inner" pointers to be passed to
functions, with no other way for the function to tell whether s - 1 is
legal or illegal in any particular call context. It is therefore clear
what the abstraction of pointer arithmetic implies. That some platforms
(may) have a problem with this is not the language's fault. It's just a
bit harder to support C on them. That's OK. There are plenty of other
languages that don't allow that construct at all (or even have pointers as
such), and they were clearly the targets in mind for the people who
developed such hardware. The standard authors erred. It should have been
incumbent on implementers on odd platforms to support the full power of
the language or not at all, rather than for all other (C-like) platforms
to carry the oddness around with them, in their code. However, it's clear
that's a very old mistake, and no-one's going to back away from it now.
This is a perfectly reasonable approach for a higher level language.

C is not a higher-level language. It's a universal assembler. Pick
another one.
 

CBFalconer

It's not deprecated, it's illegal. Once you have involved UB all
bets are off. Without the p-1 the p++ statements are fine, as long
as they don't advance the pointer more than one past the end of the
object.
Correct, p now points to x, and a statement --p or p-- would be
illegal. However p++ would be legal. But *(++p) would be illegal,
because it dereferences past the confines of the object x.
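
Spelled out as a minimal sketch (an illustration, not code from the
original post):

int main(void)
{
    int x = 42;
    int *p = &x;  /* p points to x, a lone object: like an array of 1 */
    /* --p or p-- here would be undefined: a pointer before x         */
    p++;          /* legal: p is now the one-past-the-end pointer     */
    /* *p here would be undefined: it dereferences past the object    */
    p--;          /* legal again: back to &x                          */
    return *p;    /* 42 */
}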

 

Christian Bau

Andrew Reilly said:
K&R answers your question. If pa points to some element of an array,
then pa-1 points to the /previous element/. But what's the "previous
element" relative to the first element in the array? It doesn't exist. So
we have undefined behavior.

Only because the standard says so. Didn't have to be that way. There are
plenty of logically correct algorithms that could exist that involve
pointers that point somewhere outside of a[0..N]. As long as there's no
de-referencing, no harm, no foul. (Consider the simple case of iterating
through an array at a non-unit stride, using the normal p < s + N
termination condition. The loop finishes with p > s + N and the standard
says "pow, you're dead", when the semantically identical code written with
integer indexes has no legal problems.)

Consider a typical implementation with 32 bit pointers and objects that
can be close to 2 GByte in size.

typedef struct { char a [2000000000]; } giantobject;

giantobject anobject;

giantobject* p = &anobject;
giantobject* q = &anobject - 1;
giantobject* r = &anobject + 1;
giantobject* s = &anobject + 2;

It would be very hard to implement this in a way that both q and s would
be valid; for example, it would be very hard to achieve that q < p, p <
r and r < s are all true. If q and s cannot be both valid, and there
isn't much reason why one should be valid and the other shouldn't, then
neither can be used in a program with any useful guarantees by the
standard.
 

Christian Bau

Andrew Reilly said:
On Tue, 07 Mar 2006 13:28:37 -0500, Arthur J. O'Dwyer wrote:
K&R answers your question. If pa points to some element of an array,
then pa-1 points to the /previous element/. But what's the "previous
element" relative to the first element in the array? It doesn't exist.
So we have undefined behavior.

Only because the standard says so. Didn't have to be that way. There
are plenty of logically correct algorithms that could exist that involve
pointers that point somewhere outside of a[0..N]. As long as there's no
de-referencing, no harm, no foul. (Consider the simple case of
iterating through an array at a non-unit stride, using the normal p < s
+ N termination condition. The loop finishes with p > s + N and the
standard says "pow, you're dead", when the semantically identical code
written with integer indexes has no legal problems.)

The standard is specifically designed to allow for architectures where
constructing an invalid pointer value can cause a trap even if the pointer
is not dereferenced.

And are there any? Any in common use? Any where the equivalent (well
defined) pointer+offset code would be slower?

Question: If the C Standard guarantees that for any array a, &a [-1]
should be valid, should it also guarantee that &a [-1] != NULL and that
&a [-1] < &a [0]?

In that case, what happens when I create an array with a single element
that is an enormously large struct?
 

Christian Bau

Andrew Reilly said:
It's not always equivalent. The trouble starts with

char a[8];
char *p;

for ( p = a+1 ; p < a+8 ; p += 2 ) {}

intending that the loop terminates on p == a+9 (since it skips a+8). But
how do we know that a+9 > a+8 ? If the array is right at the top of some
kind of segment, the arithmetic might have wrapped round.

a+9 > a+8 because a + 9 - (a + 8) == 1, which is > 0. Doesn't matter if
the signed or unsigned pointer value wrapped around in an intermediate
term. On many machines that's how the comparison is done anyway. You're
suggesting that having the compiler ensure that a+8 doesn't wrap around
wrt a is OK, but a+9 is too hard. I don't buy it.

I just tried the following program (CodeWarrior 10 on MacOS X):

#include <stdio.h>

#define SIZE (50*1000000L)

typedef struct {
    char a [SIZE];
} bigstruct;

static bigstruct bigarray [8];

int main(void)
{
    printf("%lx\n", (unsigned long) &bigarray [0]);
    printf("%lx\n", (unsigned long) &bigarray [9]);
    printf("%lx\n", (unsigned long) &bigarray [-1]);

    if (&bigarray [-1] < &bigarray [0])
        printf ("Everything is fine\n");
    else
        printf ("The C Standard is right: &bigarray [-1] is broken\n");

    return 0;
}

The output is:

2008ce0
1cd30160
ff059c60
The C Standard is right: &bigarray [-1] is broken
 

Al Balmer

Keith Thompson said:

Ask Jack to lend you his bottle. You'll soon change your mind.

To clarify a bit ...

A mathematician named Klein
Thought the Moebius band was divine
Said he, "If you glue
The edges of two
You'll get a weird bottle like mine!"

:)

(A Moebius band has only one side. It is a two-dimensional object
that exists only in a 3-dimensional [or higher] space. A Klein
bottle can only be made in a 4-dimensional [or higher] space, and
is likewise a two-dimensional surface with only one side. The
concept can be carried on indefinitely, but a Klein bottle is hard
enough to contemplate already.)

But that was Felix. Who's Jack?
 

Al Balmer

C is not a higher-level language. It's a universal assembler. Pick
another one.

Nice parrot. I think the original author of that phrase meant it as a
joke.

I spent 25 years writing assembler. C is a higher-level language.
 

Keith Thompson

Andrew Reilly said:
C is not a higher-level language.

It's higher-level than some, lower than others. I'd call it a
medium-level language.
It's a universal assembler.

Not in any meaningful sense of the word "assembler".
 

Dik T. Winter

....
This is the first time I have seen this code, but:
....
This will not result in the desired answer on the Cray 1.
On the Cray 1 a byte pointer has the word address (64 bit words)
in the lower 48 bits and a byte offset in the upper 16 bits.
So this code actually tests whether the *word* address is even.
And so the code will fail to give the correct answer in the
following case:
char f[] = "0123456789";
int i;
f[1] = 0;
i = strlen(f + 2);
when f starts at an even word address it will give the answer 1
instead of the correct 8.

Note that here the byte offset in the pointer is ignored, so
d points to the integer that contains the character array:
"0\000234567".

Again a hidden assumption, I think. (It is exactly this hidden
assumption that made porting of a particular program extremely
difficult to the Cray 1. The assumption was that in a word
pointer the lowest bit was 0, and that bit was used for
administrative purposes.)
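
A rough sketch of that pointer layout, with a hypothetical helper, may
make the failure mode clearer (this illustrates the description above;
it is not real Cray code):

#include <stdint.h>

/* Cray-1 byte pointer as described: word address in the low 48 bits,
   byte offset within the 64-bit word in the upper 16 bits.          */
static uint64_t cray_byte_ptr(uint64_t word_addr, unsigned byte_off)
{
    return ((uint64_t)byte_off << 48) | (word_addr & ((1ULL << 48) - 1));
}

/* The alignment test quoted elsewhere in the thread, ((int)p) & (SW-1),
   looks only at the LOW bits -- i.e. the *word* address. So a pointer
   to byte 2 of an even-numbered word still "passes" the test, and the
   word-at-a-time scan then sees bytes before the start of the string. */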
 

Dik T. Winter

>
>
> I get queasy reading the rants against 1's complement architectures; I
> wish Seymour Cray were still around to address this.

There are quite a few niceties indeed. Negation of a number is really
simple, just a logical operation, and there are others. This means
simpler hardware for the basic operations on signed objects, except for
the carry. It was only when I encountered the PDP that I saw the first
2's complement machine.

On the other hand, when Seymour Cray started his own company, those
machines were 2's complement. And he shifted from 60 to 64 bit
words, but still retained octal notation (he did not like hexadecimal
at all).
 

Andrew Reilly

It's not deprecated, it's illegal. Once you have involved UB all
bets are off. Without the p-1 the p++ statements are fine, as long
as they don't advance the pointer more than one past the end of the
object.

It's no more "illegal" than any of the other undefined behaviour that you
pointed out in that code snippet. There aren't different classes of
undefined behaviour, are there?

I reckon I'll just go with the undefined flow, in the interests of
efficient, clean code on the architectures that I target. I'll make sure
that I supply a document specifying how the compilers must behave for all
of the undefined behaviours that I'm relying on, OK? I have no interest
in trying to make my code work on architectures for which they don't hold.

Of course, that list will pretty much just describe the usual flat-memory,
2's complement machine that is actually used in almost all circumstances
in the present day, anyway. Anyone using anything else already knows that
they're in a world of trouble and that all bets are off.
 

Dik T. Winter

Now responding to the basic article:

> #define hasNulByte(x) ((x - 0x01010101) & ~x & 0x80808080)
> Let us start with line 1! The constants appear to require that
> sizeof(int) be 4, and that CHAR_BIT be precisely 8. I haven't
> really looked too closely, and it is possible that the ~x term
> allows for larger sizeof(int), but nothing allows for larger
> CHAR_BIT.

It does not allow for larger sizeof(int) (as it does not allow for
other values of CHAR_BIT). When sizeof(int) > 4 it will only show
whether there is a zero byte in the low order four bytes. When
sizeof(int) < 4 it will give false positives. Both constants have
to be changed when sizeof(int) != 4. Moreover, it will not work on
1's complement or sign-magnitude machines. Using unsigned here is
most appropriate.
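
For what it's worth, the "use unsigned" suggestion also lets the two
constants be derived from the actual width of unsigned int rather than
hard-coded for four bytes. A sketch (mine, not from Mr Hsieh's
original, and still assuming CHAR_BIT == 8):

#define ONES  (~0u / 0xFFu)   /* 0x...01010101 for any sizeof(unsigned) */
#define HIGHS (ONES * 0x80u)  /* 0x...80808080                          */
#define hasNulByte(x) ((((x) - ONES) & ~(x)) & HIGHS)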
> if ((((int) p) & (SW - 1)) == 0) {
> Then we come to the purpose of the statement, which is to discover
> if the pointer is suitably aligned for an int. It does this by
> bit-anding with SW-1, which is the concealed sizeof(int)-1. This
> won't be very useful if sizeof(int) is, say, 3 or any other
> non-power-of-two. In addition, it assumes that an aligned pointer
> will have those bits zero. While this last is very likely in
> today's systems, it is still an assumption. The system designer is
> entitled to assume this, but user code is not.

It is false on the Cray 1 and its derivatives. See another article
by me where I show that it may give wrong answers.
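
In the spirit of the original article, that assumption can at least be
isolated and documented rather than left implicit. A sketch (my
illustration):

#include <stdint.h>

/* ASSUMPTION (not guaranteed by the C standard, and false on the
   Cray 1): the low-order bits of the integer form of a pointer
   reflect its byte alignment, and sizeof(int) is a power of two. */
static int int_aligned(const char *p)
{
    return ((uintptr_t)p & (sizeof(int) - 1)) == 0;
}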
 

Andrew Reilly

Only because the standard says so. Didn't have to be that way. There are
plenty of logically correct algorithms that could exist that involve
pointers that point somewhere outside of a[0..N]. As long as there's no
de-referencing, no harm, no foul. (Consider the simple case of iterating
through an array at a non-unit stride, using the normal p < s + N
termination condition. The loop finishes with p > s + N and the standard
says "pow, you're dead", when the semantically identical code written with
integer indexes has no legal problems.)

Consider a typical implementation with 32 bit pointers and objects that
can be close to 2 GByte in size.

Yeah, my world-view doesn't allow individual objects to occupy half the
address space or more. I'm comfortable with that restriction, but I can
accept that there may be others that aren't. They're wrong, of course :)
typedef struct { char a [2000000000]; } giantobject;

giantobject anobject;

giantobject* p = &anobject;
giantobject* q = &anobject - 1;
giantobject* r = &anobject + 1;
giantobject* s = &anobject + 2;

It would be very hard to implement this in a way that both q and s would
be valid; for example, it would be very hard to achieve that q < p, p <
r and r < s are all true. If q and s cannot be both valid, and there
isn't much reason why one should be valid and the other shouldn't, then
neither can be used in a program with any useful guarantees by the
standard.

Yes, very hard indeed. Partition your object or use a machine with bigger
addresses. Doesn't seem like a good enough reason to me to break a very
useful abstraction.

Posit: you've got N bits to play with, both for addresses and integers.
You need to be able to form a ptrdiff_t, which is a signed quantity, to
compute d = &anobject.a[i] - &anobject.a[j], for any indices i,j within the
range of the array. The range of signed quantities is just less than half
that of unsigned. That range must therefore define how large any
individual object can be. I.e., half of your address space. Neat, huh?

Yeah, yeah, for any complicated problem there's an answer that is simple,
neat and wrong.
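
Putting numbers on the posit: with N = 32, PTRDIFF_MAX is 2^31 - 1, so
element differences are only guaranteed representable for objects up to
half the 2^32-byte address space. A quick check in hosted C99 (my
sketch):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* With 32-bit pointers and a 32-bit ptrdiff_t this prints
       2147483647: differences only cover objects up to half of
       the 4 GB address space, as argued above.                 */
    printf("PTRDIFF_MAX = %jd\n", (intmax_t) PTRDIFF_MAX);
    return 0;
}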
 

Keith Thompson

Andrew Reilly said:
It's no more "illegal" than any of the other undefined behaviour that you
pointed out in that code snippet. There aren't different classes of
undefined behaviour, are there?

Right, "illegal" probably isn't the best word to describe undefined
behavior. An implementation is required to diagnose syntax errors and
constraint violations; it's specifically *not* required to diagnose
undefined behavior (though it's allowed to do so).
I reckon I'll just go with the undefined flow, in the interests of
efficient, clean code on the architectures that I target. I'll make sure
that I supply a document specifying how the compilers must behave for all
of the undefined behaviours that I'm relying on, OK? I have no interest
in trying to make my code work on architectures for which they don't hold.

Ok, you can do that if you like. If you can manage to avoid undefined
behavior altogether, your code is likely to work on *any* system with
a conforming C implementation; if not, it may break when ported to
some exotic system.

For example, code that makes certain seemingly reasonable assumptions
about pointer representations will fail on Cray vector systems. I've
run into such code myself; the corrected code was actually simpler and
cleaner.

If you write code that depends on undefined behavior, *and* there's a
real advantage in doing so on some particular set of platforms, *and*
you don't mind that your code could fail on other platforms, then
that's a perfectly legitimate choice. (If you post such code here in
comp.lang.c, you can expect us to point out the undefined behavior;
some of us might possibly be overly enthusiastic in pointing it out.)
Of course, that list will pretty much just describe the usual flat-memory,
2's complement machine that is actually used in almost all circumstances
in the present day, anyway. Anyone using anything else already knows that
they're in a world of trouble and that all bets are off.

All bets don't *need* to be off if you're able to stick to what the C
standard actually guarantees.
 

Andrew Reilly

Question: If the C Standard guarantees that for any array a, &a [-1]
should be valid, should it also guarantee that &a [-1] != NULL

Probably, since NULL has been given the guarantee that it's unique in some
sense. In an embedded environment, or assembly language, the construct
could of course produce NULL (for whatever value you pick for NULL), and
NULL would not be special. I don't know that insisting on the existence of
a unique and special NULL pointer value is one of the standard's crowning
achievements, either. It's convenient for lots of things, but it's just
not the way simple hardware works, particularly at the limits.
and that
&a [-1] < &a [0]

Sure, in the ptrdiff sense that I mentioned before.
I.e., (a - 1) - (a + 0) < 0 (indeed, identically -1)
In that case, what happens when I create an array with a single element
that is an enormously large struct?

Go nuts. If your address space is larger than your integer range (as is
the case for I32LP64 machines), your compiler might have to make sure that
it performs the difference calculation to sufficient precision.

I still feel comfortable about this failing to work for objects larger
than half the address space, or even for objects larger than the range of
an int. That's IMO, a much less uncomfortable restriction than the one
that the standard seems to have stipulated, which is that the simple and
obvious pointer arithmetic that you've used in your examples works in some
situations and doesn't work in others. (Remember: it's all good if those
array references are in a function that was itself passed (&foo[n], for
n>=1) as the argument.)

Cheers,
 

Keith Thompson

Andrew Reilly said:
Yeah, my world-view doesn't allow individual objects to occupy half the
address space or more. I'm comfortable with that restriction, but I can
accept that there may be others that aren't. They're wrong, of course :)

I can easily imagine a program that needs to manipulate a very large
data set (for a scientific simulation, perhaps). For a data set that
won't fit into memory all at once, loading as much of it as possible
can significantly improve performance.
typedef struct { char a [2000000000]; } giantobject;

giantobject anobject;

giantobject* p = &anobject;
giantobject* q = &anobject - 1;
giantobject* r = &anobject + 1;
giantobject* s = &anobject + 2;

It would be very hard to implement this in a way that both q and s would
be valid; for example, it would be very hard to achieve that q < p, p <
r and r < s are all true. If q and s cannot be both valid, and there
isn't much reason why one should be valid and the other shouldn't, then
neither can be used in a program with any useful guarantees by the
standard.

Yes, very hard indeed. Partition your object or use a machine with bigger
addresses. Doesn't seem like a good enough reason to me to break a very
useful abstraction.

Your "very useful abstraction" is not something that has *ever* been
guaranteed by any C standard or reference manual.
Posit: you've got N bits to play with, both for addresses and integers.
You need to be able to form a ptrdiff_t, which is a signed quantity, to
compute d = &anobject.a[i] - &anobject.a[j], for any indices i,j within the
range of the array. The range of signed quantities is just less than half
that of unsigned. That range must therefore define how large any
individual object can be. I.e., half of your address space. Neat, huh?


The standard explicitly allows for the possibility that pointer
subtraction within a single object might overflow (if so, it invokes
undefined behavior). Or, given that C99 requires 64-bit integer
types, making ptrdiff_t larger should avoid the problem for any
current systems (I don't expect to see full 64-bit address spaces for
a long time).

The standard is full of compromises. Not everyone likes all of them.
 
