Making Fatal Hidden Assumptions


Andrew Reilly

Arthur J. O'Dwyer said:
K&R answers your question. If pa points to some element of an array,
then pa-1 points to the /previous element/. But what's the "previous
element" relative to the first element in the array? It doesn't exist. So
we have undefined behavior.

Only because the standard says so. Didn't have to be that way. There are
plenty of logically correct algorithms that could exist that involve
pointers that point somewhere outside of a[0..N]. As long as there's no
de-referencing, no harm, no foul. (Consider the simple case of iterating
through an array at a non-unit stride, using the normal p < s + N
termination condition. The loop finishes with p > s + N and the standard
says "pow, you're dead", when the semantically identical code written with
integer indexes has no legal problems.)
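
For concreteness, a minimal sketch of the strided loop being described;
the array s, its length of 10, and the stride of 3 are illustrative,
not taken from any code in the thread:

int s[10];
int *p;

for (p = s; p < s + 10; p += 3) {
    /* uses s[0], s[3], s[6], s[9] */
}
/* After the final increment p == s + 12, more than one past the end
   of s. The standard makes merely computing that value undefined,
   even though it is never dereferenced, while the index version
   (i += 3, finishing at 12) is perfectly defined. */
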
The expression pa+1 is similar, but with one special case. If pa points
to the last element in the array, you might expect that pa+1 would be
undefined; but actually the C standard specifically allows you to
evaluate pa+1 in that case. Dereferencing that pointer, or incrementing
it /again/, however, invokes undefined behavior.
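
For illustration, the canonical use of that dispensation is the
ordinary loop bound; the array name and size here are arbitrary:

int a[10];
int *pa;

for (pa = a; pa < a + 10; pa++) {
    /* use *pa */
}
/* a + 10 is the one-past-the-end value: computing it and comparing
   against it is defined, but *(a + 10) or a + 11 would not be. */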

Basically: A C pointer must always point to something. "The
negative-oneth element of array a" is not "something."

Only because the standard says so. The standard is stupid, in that
respect.
 

Ben Pfaff

Andrew Reilly said:
It's precisely this sort of tomfoolery on the part of the C standards
committee that has brought the language into such ill-repute in recent
years. It's practically unworkable now, compared to how it was in (say)
the immediately post-ANSI-fication years.

The semantics you're complaining about (one-past-the-end) didn't
change in important ways from C89 to C99. I don't know why you'd
think the C99 semantics are unreasonable if you didn't think the
C89 semantics were unreasonable.
 

Andrew Reilly

The Rationale says the Committee considered defining
the effects at both ends (bilateral dispensations?), but rejected it for
efficiency reasons. Consider an array of large elements -- structs of
32KB size, say. A system that actually performed hardware checking of
pointer values could accommodate the one-past-the-end rule by allocating
just one extra byte after the end of the array, a byte that the special
pointer value could point at without setting off the hardware's alarms.
But one-before-the-beginning would require an extra 32KB, just to hold
data that could never be used ...
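
To make the quoted arithmetic concrete, a sketch with a hypothetical
32 KB element type (the names are illustrative):

struct big { char payload[32 * 1024]; };  /* 32 KB per element */
struct big arr[4];

/* &arr[4] is the one-past-the-end value: a checking implementation
   only needs one extra addressable byte after the array for it to
   point at.  &arr[0] - 1, by contrast, designates an address a full
   sizeof(struct big) == 32 KB below arr, storage that could never
   legitimately be used. */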

So the standards body broke decades of practice and perfectly safe and
reasonable code to support a *hypothetical* implementation that was so
stupid that it checked pointer values, rather than pointer *use*?
Amazing.
 

Keith Thompson

Andrew Reilly said:
K&R answers your question. If pa points to some element of an array,
then pa-1 points to the /previous element/. But what's the "previous
element" relative to the first element in the array? It doesn't exist. So
we have undefined behavior.

Only because the standard says so. Didn't have to be that way. There are
plenty of logically correct algorithms that could exist that involve
pointers that point somewhere outside of a[0..N]. As long as there's no
de-referencing, no harm, no foul. (Consider the simple case of iterating
through an array at a non-unit stride, using the normal p < s + N
termination condition. The loop finishes with p > s + N and the standard
says "pow, you're dead", when the semantically identical code written with
integer indexes has no legal problems.)

The standard is specifically designed to allow for architectures where
constructing an invalid pointer value can cause a trap even if the
pointer is not dereferenced.

For example, given:

int n;
int *ptr1 = &n;       /* ptr1 points to n */
int *ptr2 = ptr1 - 1; /* ptr2 points *before* n in memory; this
                         invokes undefined behavior. */

Suppose some CPU uses special instructions and registers for pointer
values. Suppose n happens to be allocated at the very beginning of
a memory segment. Just constructing the value ptr1-1 could cause a
trap of some sort. Or it could quietly yield a value such that
ptr2+1 != ptr1.

By saying that this is undefined behavior, the C standard isn't
forbidding you to do it; it's just refusing to tell you how it
behaves. If you're using an implementation that guarantees that this
will work the way you want it to, and if you're not concerned about
portability to other implementations, there's nothing stopping you
from doing it.

On the other hand, if the standard had defined the behavior of this
construct, it would have required all implementations to support it.
On a system that does strong checking of all pointer values, a C
compiler might have to generate inefficient code to meet the
standard's requirements.
 

Ben Pfaff

Andrew Reilly said:
So the standards body broke decades of practice and perfectly safe and
reasonable code to support a *hypothetical* implementation that was so
stupid that it checked pointer values, rather than pointer *use*?

You can still write your code to make whatever assumptions you
like. You just can't assume that it will work portably. If, for
example, you are writing code for a particular embedded
architecture with a given compiler, then it may be reasonable to
make assumptions beyond those granted by the standard.

In other words, the standard provides minimum guarantees. Your
implementation may provide stronger ones.
 

Andrew Reilly

Andrew Reilly said:
K&R answers your question. If pa points to some element of an array,
then pa-1 points to the /previous element/. But what's the "previous
element" relative to the first element in the array? It doesn't exist.
So we have undefined behavior.

Only because the standard says so. Didn't have to be that way. There
are plenty of logically correct algorithms that could exist that involve
pointers that point somewhere outside of a[0..N]. As long as there's no
de-referencing, no harm, no foul. (Consider the simple case of
iterating through an array at a non-unit stride, using the normal p < s
+ N termination condition. The loop finishes with p > s + N and the
standard says "pow, you're dead", when the semantically identical code
written with integer indexes has no legal problems.)

The standard is specifically designed to allow for architectures where
constructing an invalid pointer value can cause a trap even if the pointer
is not dereferenced.

And are there any? Any in common use? Any where the equivalent (well
defined) pointer+offset code would be slower?

I'll need that list to know which suppliers to avoid...
For example, given:

int n;
int *ptr1 = &n;       /* ptr1 points to n */
int *ptr2 = ptr1 - 1; /* ptr2 points *before* n in memory; this
                         invokes undefined behavior. */

Suppose some CPU uses special instructions and registers for pointer
values. Suppose n happens to be allocated at the very beginning of a
memory segment. Just constructing the value ptr1-1 could cause a trap
of some sort. Or it could quietly yield a value such that ptr2+1 !=
ptr1.

Suppose the computer uses tribits.

Standards are meant to codify common practice. If you want a language
that only has object references and array indices, there are plenty of
those to choose from.
By saying that this is undefined behavior, the C standard isn't
forbidding you to do it; it's just refusing to tell you how it behaves.

And that helps who?
If you're using an implementation that guarantees that this will work
the way you want it to, and if you're not concerned about portability to
other implementations, there's nothing stopping you from doing it.

Which implementations?
On the other hand, if the standard had defined the behavior of this
construct, it would have required all implementations to support it. On
a system that does strong checking of all pointer values, a C compiler
might have to generate inefficient code to meet the standard's
requirements.

That would be a *good* thing. Checking any earlier than at reference time
breaks what it is about C that makes it C.

OK, I've made enough of a fool of myself already. I'll go and have that
second cup of coffee for the morning, before I start going on about having
the standard support non-2's complement integers, or machines that have no
arithmetic right shifts...
 

Jordan Abel

Andrew Reilly said:
OK, I've made enough of a fool of myself already. I'll go and have that
second cup of coffee for the morning, before I start going on about having
the standard support non-2's complement integers, or machines that have no
arithmetic right shifts...

Arithmetic right shift isn't particularly useful on some machines that
do, notably twos-complement ones.
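
If the point is that an arithmetic right shift does not even match C's
signed division, a quick illustration; it assumes an implementation
where >> on a negative value happens to shift arithmetically (that
behaviour is implementation-defined):

#include <stdio.h>

int main(void)
{
    int a = -7;
    /* C99 division truncates toward zero, so a / 2 is -3.  An
       arithmetic right shift rounds toward minus infinity, so here
       a >> 1 is -4. */
    printf("a / 2 = %d, a >> 1 = %d\n", a / 2, a >> 1);
    return 0;
}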
 

Al Balmer

Andrew Reilly said:
It's precisely this sort of tomfoolery on the part of the C standards
committee that has brought the language into such ill-repute in recent
years. It's practically unworkable now, compared to how it was in (say)
the immediately post-ANSI-fication years.

Well, shucks, I manage to make it work pretty well most every day.
Does that mean I'm in ill-repute too?
 

Ben Pfaff

Andrew Reilly said:
And are there any? Any in common use?

x86 in non-flat protected mode would be one example. Attempting
to load an invalid value into a segment register causes a fault.
 

Al Balmer

Andrew Reilly said:
So the standards body broke decades of practice and perfectly safe and
reasonable code to support a *hypothetical* implementation that was so
stupid that it checked pointer values, rather than pointer *use*?
Amazing.

Decades of use? This isn't a new rule.

An implementation might choose, for valid reasons, to prefetch the
data that pointer is pointing to. If it's in a segment not allocated
....
 

Andrew Reilly

Ben Pfaff said:
x86 in non-flat protected mode would be one example. Attempting to load
an invalid value into a segment register causes a fault.

I wasn't aware that address arithmetic generally operated on the segment
register in that environment, rather than on the "pointer" register used within
the segment. I haven't coded in that environment myself, so I have no
direct experience to call on. My understanding was that the architecture
was intrinsically a segment+offset mechanism, so having the compiler
produce the obvious code in the offset value (i.e., -1) would not incur
the performance penalty that has been mentioned. (Indeed, it's loading
the segment register that causes the performance penalty, I believe.)
 

Andrew Reilly

Al Balmer said:
An implementation might choose, for valid reasons, to prefetch the data
that pointer is pointing to. If it's in a segment not allocated ...

Hypothetical hardware that traps on *speculative* loads isn't broken by
design? I'd love to see the initialization sequences, or the task
switching code that has to make sure that all pointer values are valid
before they're loaded. No, scratch that. I've got better things to do.

Cheers,
 

Ben Pfaff

Andrew Reilly said:
x86 in non-flat protected mode would be one example. Attempting to load
an invalid value into a segment register causes a fault.

I wasn't aware that address arithmetic generally operated on the segment
register in that environment, rather than on the "pointer" register used within
the segment. [...]

Address arithmetic might not, but the standard doesn't disallow
it. Other uses of invalid pointers, e.g. comparing a pointer
into a freed memory block against some other pointer, seem more
likely to do so.
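
A small sketch of the situation described, with arbitrary sizes and
names:

#include <stdlib.h>

int main(void)
{
    char *p = malloc(16);
    char *q = malloc(16);
    char *mid;

    if (p == NULL || q == NULL)
        return 1;
    mid = p + 8;          /* fine while the first block is live */
    free(p);
    /* The value of mid (and of p) is now indeterminate; any use of
       it, even an equality test like (mid == q) with no dereference,
       is undefined behaviour, and on a segmented implementation
       merely loading the stale segment value could fault. */
    free(q);
    return 0;
}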
 

Keith Thompson

Andrew Reilly said:
Andrew Reilly said:
On Tue, 07 Mar 2006 13:28:37 -0500, Arthur J. O'Dwyer wrote:
K&R answers your question. If pa points to some element of an array,
then pa-1 points to the /previous element/. But what's the "previous
element" relative to the first element in the array? It doesn't exist.
So we have undefined behavior.

Only because the standard says so. Didn't have to be that way. There
are plenty of logically correct algorithms that could exist that involve
pointers that point somewhere outside of a[0..N]. As long as there's no
de-referencing, no harm, no foul. (Consider the simple case of
iterating through an array at a non-unit stride, using the normal p < s
+ N termination condition. The loop finishes with p > s + N and the
standard says "pow, you're dead", when the semantically identical code
written with integer indexes has no legal problems.)

The standard is specifically designed to allow for architectures where
constructing an invalid pointer value can cause a trap even if the pointer
is not dereferenced.

And are there any? Any in common use? Any where the equivalent (well
defined) pointer+offset code would be slower?

I really don't know, but the idea of allowing errors to be caught as
early as possible seems like a good one.

[...]
Suppose the computer uses tribits.

Do you mean trinary digits rather than binary digits? The C standard
requires binary representation for integers.
Standards are meant to codify common practice. If you want a language
that only has object references and array indices, there are plenty of
those to choose from.


And that helps who?

It (potentially) helps implementers to generate the most efficient
possible code, and it helps programmers to know what's actually
guaranteed to work across all possible platforms with conforming C
implementations.

[...]
OK, I've made enough of a fool of myself already. I'll go and have that
second cup of coffee for the morning, before I start going on about having
the standard support non-2's complement integers, or machines that have no
arithmetic right shifts...

C99 allows signed integers to be represented in 2's-complement,
1's-complement, or signed-magnitude (I think I mispunctuated at least
one of those).

C has been implemented on machines that don't support floating-point,
or even multiplication and division, in hardware. The compiler just
has to do whatever is necessary to meet the standard's requirements.
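
As a sketch of "whatever is necessary", here is the kind of helper a
compiler might emit or call on a machine with no hardware multiplier;
the routine and its name are illustrative:

/* unsigned multiplication by shift-and-add */
unsigned mul(unsigned a, unsigned b)
{
    unsigned result = 0;

    while (b != 0) {
        if (b & 1)
            result += a;
        a <<= 1;
        b >>= 1;
    }
    return result;
}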
 

Keith Thompson

Andrew Reilly said:
So the standards body broke decades of practice and perfectly safe and
reasonable code to support a *hypothetical* implementation that was so
stupid that it checked pointer values, rather than pointer *use*?
Amazing.

Consider the alternative.

#define LEN 100
#define INC 5000
int arr[LEN];
int *ptr = arr;
/* A */
ptr += 2*INC;
ptr -= INC;
ptr -= INC;
/* B */
ptr -= INC;
ptr -= INC;
ptr += 2*INC;
/* C */

What you're suggesting, I think, is that ptr==arr should be true at
points A, B, and C, for any(?) values of LEN and INC. It happens to
work out that way sometimes (at least in the test program I just
tried), but I can easily imagine a system where guaranteeing this
would place an undue burden on the compiler.
 

Al Balmer

Andrew Reilly said:
Hypothetical hardware that traps on *speculative* loads isn't broken by
design? I'd love to see the initialization sequences, or the task
switching code that has to make sure that all pointer values are valid
before they're loaded. No, scratch that. I've got better things to do.

It doesn't have to make sure. It's free to segfault. You write funny
code, you pay the penalty (or your customers do.) Modern hardware does
a lot of speculation. It can preload or even precompute both branches
of a conditional, for example.
 

Eric Sosman

Andrew Reilly wrote on 03/07/06 16:41:
And are there any? Any in common use? Any where the equivalent (well
defined) pointer+offset code would be slower?

I've been told that IBM AS/400 bollixes the bogus
arithmetic, at least under some circumstances. A friend
told of fixing code that did something like

if (buffer_pos + amount_needed > buffer_limit) {
    ... enlarge the buffer ...
}
memcpy (buffer_pos, from_somewhere, amount_needed);
buffer_pos += amount_needed;

This looks innocuous to devotees of flat address spaces
(Flat-Earthers?), but it didn't work on AS/400. If the
sum `buffer_pos + amount_needed' went past the end of the
buffer, the result was some kind of NaP ("not a pointer")
and the comparison didn't kick in. Result: the code never
discovered that the buffer needed enlarging, and merrily
tried to memcpy() into a too-small area ...
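
One conventional repair, sketched with the same names as the fragment
above, is to compare the amount needed against the space remaining, so
that no out-of-range pointer is ever formed (a guess at the shape of
the fix, not the actual code in question):

if (amount_needed > (size_t) (buffer_limit - buffer_pos)) {
    /* ... enlarge the buffer ... */
}
memcpy (buffer_pos, from_somewhere, amount_needed);
buffer_pos += amount_needed;
/* buffer_limit and buffer_pos both point into (or one past) the live
   buffer, so the subtraction is well defined. */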

I have no personal experience of the AS/400, and I may
have misremembered some of what my friend related. Would
anybody with AS/400 knowledge care to comment?
 

Randy Howard

Andrew Reilly wrote:
Hypothetical hardware that traps on *speculative* loads isn't broken by
design? I'd love to see the initialization sequences, or the task
switching code that has to make sure that all pointer values are valid
before they're loaded. No, scratch that. I've got better things to do.

This is a lot of whining about a specific problem that can
easily be remedied just by changing the loop construction. The
whole debate is pretty pointless in that context, unless you
have some religious reason to insist upon the method in the
original.
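
For what it's worth, a generic sketch of that sort of rearrangement:
walking an array backwards without ever forming a pointer before its
first element (arr and its size are illustrative, not the code from
the original article):

int arr[8];
int *p;

for (p = arr + 8; p > arr; ) {
    --p;
    /* use *p: p runs from arr + 7 down to arr and never takes a
       value below arr */
}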
 

Robin Haigh

Andrew Reilly said:
It's precisely this sort of tomfoolery on the part of the C standards
committee that has brought the language into such ill-repute in recent
years. It's practically unworkable now, compared to how it was in (say)
the immediately post-ANSI-fication years.

The code in question could trivially have p replaced by (p+p_index)
everywhere, where p_index is an int, and all of the arithmetic currently
effected on p is instead effected on p_index. I.e., if p was set to s,
and p_index set to -1, and p_index++ appeared as the first element inside
the outer do loop.

So having the standard make the semantic equivalent "undefined" only
serves to make the standard itself ever more pointless.

It's not always equivalent. The trouble starts with

char a[8];
char *p;

for ( p = a+1 ; p < a+8 ; p += 2 ) {}

intending that the loop terminates on p == a+9 (since it skips a+8). But
how do we know that a+9 > a+8 ? If the array is right at the top of some
kind of segment, the arithmetic might have wrapped round.

To support predictable pointer comparisons out of range, the compiler would
have to allocate space with a safe buffer zone. Complications are setting
in.

Ints have the nice property that 0 is in the middle and we know how much
headroom we've got either side. So it's easy for the compiler to make the
int version work (leaving it to the programmer to take responsibility for
avoiding overflow, which is no big deal).
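
Concretely, the index form of the loop above stays within defined
behaviour, because the out-of-range value 9 is just an int:

char a[8];
int i;

for ( i = 1 ; i < 8 ; i += 2 ) {}
/* i finishes at 9; no pointer past a+8 is ever formed. */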

Pointers don't have that property. The compiler can't take sole
responsibility for avoiding overflow irrespective of what the programmer
does. If the programmer wants to go out of range and is at the same time
responsible for avoiding overflow, then he has to start worrying about
whereabouts his object is and what headroom he's got.

Bah, humbug. Think I'll go back to assembly language, where pointers do
what you tell them to, and don't complain about it to their lawyers.

Can't see how assembly programmers avoid the same kind of issue. I can see
how they could ignore it. The code will work most of the time.
 

CBFalconer

Ben said:
You can still write your code to make whatever assumptions you
like. You just can't assume that it will work portably. If,
for example, you are writing code for a particular embedded
architecture with a given compiler, then it may be reasonable
to make assumptions beyond those granted by the standard.

Which was the point of my little article in the first place. If,
for one reason or another, you need to make assumptions, document
what they are. To do this you first need to be able to recognize
those assumptions. You may easily find you could avoid making the
assumption in the first place without penalty, as in this
particular "p = p - 1;" case.

Why is everybody concentrating on the one real error in the example
code, and not on the hidden assumptions? Errors are correctable;
assumptions need to be recognized.

--
"If you want to post a followup via groups.google.com, don't use
the broken "Reply" link at the bottom of the article. Click on
"show options" at the top of the article, then click on the
"Reply" at the bottom of the article headers." - Keith Thompson
More details at: <http://cfaj.freeshell.org/google/>
Also see <http://www.safalra.com/special/googlegroupsreply/>
 
