Multi-dimensional arrays as one-dimensional arrays


Harald van Dijk

Harald van Dijk said:
Correct. And the result is specified as a pointer that you can convert
back to the original type to get something comparing equal to the
original pointer, but nothing more than that. In the special case of a
character type, the standard points out that the result points to the
first byte of the same object, and allows access to all bytes of the
same object. If it was necessary to explicitly specify this for
character types, why does it implicitly apply to other types as well?

The two cases aren't the same. In the case of, say, an int matrix, such
as

int m[3][5];

the storage for m is guaranteed to hold contiguous int objects, with the
same representation as an array of int. The "arrayness" is already
there.

int m[3][5] is guaranteed to hold contiguous bytes, with the same
representation as an array of unsigned char, right? I really don't see how
your explanation doesn't apply to that just as well.
Converting to (char*) is different because the object being pointed at
may be just a scalar;

Given int i;, &i may be treated as a pointer to the first element of an
array of length 1, and i has the same representation as int[1], does it
not?
It's because of imposing an array representation onto a non-array object
that character pointers are singled out in defining their conversion
semantics.

Even if that is the case, I still don't see where the conversion for other
types is defined at all.
 

Harald van Dijk

Which view is more reasonable:

So you're arguing for what the standard should say, instead of what it
does say? If so, I agree that the current wording is far from perfect, but
then there are a lot more options than only the literal text, and your
view. There are plenty of options in between.
 

Tim Rentsch

Harald van Dijk said:
So you're arguing for what the standard should say, instead of what it
does say? If so, I agree that the current wording is far from perfect, but
then there are a lot more options than only the literal text, and your
view. There are plenty of options in between.

It sounds like you're agreeing with me, at least in part, but let
me see if I can clarify what I'm saying.

I don't mean to argue for what the standard should say (at least,
not now). The question is, How are we to understand what the
standard does say?

One way to understand the standard is to treat it much like we
would a math textbook. Statements in the standard are "axioms";
using the axioms we prove "theorems" about what C must allow or
disallow, and how it behaves for those things it allows. This
view might also be called, or at least is very close to, a
"literal text" view.

At the other end of the spectrum, we can understand what the
standard says by judging what we believe the intentions were (or
perhaps are) of the committee members in writing what they did
(using "writing" in a collective sense here). Under this model,
the text in the standard provides hints as to what was intended,
and what was intended determines what we understand the standard
to say.

In between these two is looking for the most consistent reading.
This model is somewhat like formulating a scientific theory.
Statements in the standard are "facts", and to understand what
the standard says a "theory" is formed about what it means; one
"theory" is better than another if it's more consistent with all
"facts" stated in the standard. More directly, one reading is
better than another if it's more consistent with all statements
in the standard.

I read your question (and followup statements) as asking about
things more under the second model - for "what the standard
should say", substitute "what did the committee intend the
standard to say", and the match is pretty close.

My question ("Which view is more reasonable:...") was meant under
the third ("most consistent reading") model. I believe this
model is more productive than the other two for deciding how the
standard is to be understood. In the earlier post where two
alternatives were listed, I didn't mean to imply that these were
the only two alternatives possible, only to compare them to see
which is more consistent with all statements that the standard
makes. By all means, if you can suggest a third option that is
more consistent with the whole standard, I would like to hear it.

Or, in the alternative, if you would like to suggest a more
productive way for how we can understand what the standard means,
and why, I'd like to hear that too.
 

Tim Rentsch

Harald van Dijk said:
Harald van Dijk said:
On Thu, 04 Sep 2008 05:18:18 -0700, Tim Rentsch wrote:
With the exception of character types, does the standard describe
the conversion of an array to anything other than its initial
element?

Conversion of one pointer type to another is always allowed, subject
to the condition that the pointer in question is suitably aligned for
the new type.

Correct. And the result is specified as a pointer that you can convert
back to the original type to get something comparing equal to the
original pointer, but nothing more than that. In the special case of a
character type, the standard points out that the result points to the
first byte of the same object, and allows access to all bytes of the
same object. If it was necessary to explicitly specify this for
character types, why does it implicitly apply to other types as well?

The two cases aren't the same. In the case of, say, an int matrix, such
as

int m[3][5];

the storage for m is guaranteed to hold contiguous int objects, with the
same representation as an array of int. The "arrayness" is already
there.

int m[3][5] is guaranteed to hold contiguous bytes, with the same
representation as an array of unsigned char, right? I really don't see how
your explanation doesn't apply to that just as well.

Bytes aren't the same as characters; clearly the standard
distinguishes between them. The array m is guaranteed to be made
up of bytes, but that's different from saying it's guaranteed to
have contiguous unsigned char objects. Furthermore the paragraph
in question includes char and signed char as well, which don't
necessarily match the whole byte representation.

Converting to (char*) is different because the object being pointed at
may be just a scalar;

Given int i;, &i may be treated as a pointer to the first element of an
array of length 1, and i has the same representation as int[1], does it
not?

True but incidental to my point, which is that (char*) may point
"inside" an otherwise unitary object. The conversion to (char*)
_could_ have been defined as allowing access to only one byte of
an object rather than all of them; that would be inconvenient,
but it wouldn't be inconsistent.

Even if that is the case, I still don't see where the conversion for other
types is defined at all.

Do you believe that

int z[100] = {0};
const int *cpz = z;
printf( "*cpz == %d\n", *cpz );

has defined behavior? If so then the same statements that define
conversion from (int *) to (const int *) also define conversions
between other object pointer types (with the usual disclaimer
about alignment).
 

vippstar

Bytes aren't the same as characters; clearly the standard
distinguishes between them. The array m is guaranteed to be made
up of bytes, but that's different from saying it's guaranteed to
have contiguous unsigned char objects. Furthermore the paragraph
in question includes char and signed char as well, which don't
necessarily match the whole byte representation.

What? How does the standard distinguish between bytes and characters?
See 3.6, the definition of byte:
addressable unit of data storage large enough to hold any member of the basic character
set of the execution environment

An array m is guaranteed to have contiguous unsigned char objects,
precisely sizeof m unsigned chars. (or bytes)
 

Tim Rentsch

What? How does the standard distinguish between bytes and characters?
See 3.6, the definition of byte:


An array m is guaranteed to have contiguous unsigned char objects,
precisely sizeof m unsigned chars. (or bytes)

Some upthread context got lost. The section that guarantees we can
treat arbitrary bytes as character objects is (quoting...)

6.3.2.3p7, more specifically the last two sentences:

When a pointer to an object is converted to a pointer to a
character type, the result points to the lowest addressed
byte of the object. Successive increments of the result, up
to the size of the object, yield pointers to the remaining
bytes of the object.

This section is the only section that allows arbitrary bytes to be
treated as character objects (unsigned or otherwise). Hence this
section is necessary to treat bytes as characters.
 

James Kuyper

Tim said:
Harald van Dijk said:
Even if that is the case, I still don't see where the conversion for other
types is defined at all.

Do you believe that

int z[100] = {0};
const int *cpz = z;
printf( "*cpz == %d\n", *cpz );

has defined behavior? If so then the same statements that define
conversion from (int *) to (const int *) also define conversions
between other object pointer types (with the usual disclaimer
about alignment).

6.3.2.2p2 is the relevant statement: "For any qualifier q, a pointer to
a non-q-qualified type may be converted to a pointer to the q-qualified
version of the type; the values stored in the original and converted
pointers shall compare equal". In this context 'q' refers to the
qualifier "const".

Per 6.5.9p6, the fact that these pointers shall compare equal implies
that they point at the same object.

Given that we know that cpz points at z[0], I don't see any basis for
claiming that the behavior of that code is undefined. However, that
would no longer be the case if z were declared as, for example, an array
of doubles.

Your claim that "the same statements ... also define conversions between
other object pointer types" implies that 6.3.2.2p2 will also be
applicable to those other conversions. How would you interpret 6.3.2.2p2
as being applicable to, for instance, the conversion from double* to
int*? Modulo the usual disclaimer about alignment, of course.
 

James Kuyper

Tim Rentsch wrote:
....
I don't mean to argue for what the standard should say (at least,
not now). The question is, How are we to understand what the
standard does say?

One way to understand the standard is to treat it much like we
would a math textbook. Statements in the standard are "axioms";
using the axioms we prove "theorems" about what C must allow or
disallow, and how it behaves for those things it allows. This
view might also be called, or at least is very close to, a
"literal text" view.

At the other end of the spectrum, we can understand what the
standard says by judging what we believe the intentions were (or
perhaps are) of the committee members in writing what they did
(using "writing" in a collective sense here). Under this model,
the text in the standard provides hints as to what was intended,
and what was intended determines what we understand the standard
to say.

In between these two is looking for the most consistent reading.
This model is somewhat like formulating a scientific theory.
Statements in the standard are "facts", and to understand what
the standard says a "theory" is formed about what it means; one
"theory" is better than another if it's more consistent with all
"facts" stated in the standard. More directly, one reading is
better than another if it's more consistent with all statements
in the standard.

The best model for the standard is none of these things. It's a contract
between implementors of C and C developers. It's a contract negotiated
by the C committee, which implementors and developers are free to adopt
or ignore. The essence of the contract is that if developers write code
which adheres to the requirements of the contract, then implementors are
required to produce implementations which give that code the behavior
required by the contract.

As such, neither mathematics, nor literary criticism, nor science
provides the appropriate analogy for how this document should be read.
The right analogy is the legal system. Some obnoxious regulars regularly
refer to some of the other regulars as "Bible thumpers", because we
routinely cite sections of the standard. Well, lawyers are also noted
for their frequent use of citations, and that is a much more appropriate
analogy.

The intent of the lawmakers is always a relevant issue in a legal case,
but there are strict (though frequently debated) limits on how far a
judge should go in using the "intent" of the law to influence his
interpretation of what the law actually says.
 

James Kuyper

Tim said:
James Kuyper said:
Tim Rentsch wrote:
...
A plausible analysis, but not on point, since the example code
above doesn't cast array[1], it casts array, which allows access
to the whole object.
Citation, please - where does the standard say that such a conversion
allows access to the whole object? Where, in fact, does the standard say
anything at all about what you can do with the converted pointer value,
other than convert it back to its original type?

Which view is more reasonable:

A. Pointer conversion yields a pointer to the same object as
the original (assuming no alignment problems); or

This is clearly impossible in cases where the new pointer type points at
an object of a different size than the original. What I think was the
intent of the committee is that whenever a pointer conversion is
actually permitted, it results in a pointer to an object with the same
starting location in memory. I consider it a defect of the standard that
there is no wording anywhere which says so in the general case, only in
a couple of special cases.
B. Pointer conversion follows a strict constructionist view -
the only thing you can do with a converted pointer is
convert it back to the original type and compare it against
the unconverted original (assuming non-char types, etc)?

Of course, no one really believes (B); if they did, then they
should insist that a code sequence like

I believe that in a great many cases, (B) is the only thing the standard
actually says, and that in most of those cases this was in fact the
committee's intent. The following code fragment is not one of those cases:
int i = 0;
const int *p = &i;
return *p;

produces undefined behavior.

As I just explained in another message, this is covered by 6.3.2.2p2.
I should have responded earlier to this point, but I was too tired of
this subject to be interested in responding to the rest of your message
(and I'm still not interested in doing so).
 

Harald van Dijk

Harald van Dijk said:
The two cases aren't the same. In the case of, say, an int matrix,
such as

int m[3][5];

the storage for m is guaranteed to hold contiguous int objects, with
the same representation as an array of int. The "arrayness" is
already there.

int m[3][5] is guaranteed to hold contiguous bytes, with the same
representation as an array of unsigned char, right? I really don't see
how your explanation doesn't apply to that just as well.

Bytes aren't the same as characters; clearly the standard distinguishes
between them. The array m is guaranteed to be made up of bytes, but
that's different from saying it's guaranteed to have contiguous unsigned
char objects.

Do you think *((unsigned char *)&m + (0 ... 15*sizeof(int)-1)) are
invalid, or that they are not separate objects, or both? The only way they
can be valid is if the bytes themselves are separate objects (and they
match the definition of an object), because the result of unary * is only
defined if its operand points to a function or to an object, and clearly
it does not point to a function.
Furthermore the paragraph in question includes char and
signed char as well,

Yes, it does.
which don't necessarily match the whole byte
representation.

Which means the result may be less meaningful when you access an object as
an array of signed char.
[snip]
Even if that is the case, I still don't see where the conversion for
other types is defined at all.

Do you believe that

int z[100] = {0};
const int *cpz = z;
printf( "*cpz == %d\n", *cpz );

has defined behavior?

Yes. I agree with James Kuyper's explanation of why this is valid, but
would like to add that just as he mentions that it doesn't apply to
conversions between double* and int*, it also doesn't apply to conversions
between int(*)[] and int*.
 

Tim Rentsch

James Kuyper said:
Tim Rentsch wrote:
...

The best model for the standard is none of these things. It's a contract
between implementors of C and C developers. It's a contract negotiated
by the C committee, which implementors and developers are free to adopt
or ignore. The essence of the contract is that if developers write code
which adheres to the requirements of the contract, then implementors are
required to produce implementations which give that code the behavior
required by the contract.

As such, neither mathematics, nor literary criticism, nor science
provides the appropriate analogy for how this document should be read.
The right analogy is the legal system. Some obnoxious regulars regularly
refer to some of the other regulars as "Bible thumpers", because we
routinely cite sections of the standard. Well, lawyers are also noted
for their frequent use of citations, and that is a much more appropriate
analogy.

The intent of the lawmakers is always a relevant issue in a legal case,
but there are strict (though frequently debated) limits on how far a
judge should go in using the "intent" of the law to influence his
interpretation of what the law actually says.

I notice you didn't include a disclaimer that you are not a
lawyer. :)

If we take the legal system as our model for deciding questions
about the standard, then the only real test for comparing two
opposing interpretations would be to offer arguments before some
court, and let the court decide. I don't find this model very
useful, since there are no such courts.

Or, did you mean by using the analogy of the legal system that
people should argue incessantly and there is no particular metric
for deciding which views have more merit? I don't find this
model very useful either, for obvious reasons.
 

Tim Rentsch

James Kuyper said:
Tim said:
Harald van Dijk said:
On Tue, 09 Sep 2008 13:04:08 -0700, Tim Rentsch wrote: ...
It's because of imposing an array representation onto a non-array object
that character pointers are singled out in defining their conversion
semantics.
Even if that is the case, I still don't see where the conversion for other
types is defined at all.

Do you believe that

int z[100] = {0};
const int *cpz = z;
printf( "*cpz == %d\n", *cpz );

has defined behavior? If so then the same statements that define
conversion from (int *) to (const int *) also define conversions
between other object pointer types (with the usual disclaimer
about alignment).

6.3.2.2p2 is the relevant statement: "For any qualifier q, a pointer to
a non-q-qualified type may be converted to a pointer to the q-qualified
version of the type; the values stored in the original and converted
pointers shall compare equal". In this context 'q' refers to the
qualifier "const".

I'm assuming you meant to write 6.3.2.3p2 (for 6.3.2.2p2).
Per 6.5.9p6, the fact that these pointers shall compare equal implies
that they point at the same object.

Your logic is faulty. Two pointers can compare equal and yet still
not be pointing at the same object. The stated conclusion is just not
a valid deduction.
Given that we know that cpz points at z[0], I don't see any basis for
claiming that the behavior of that code is undefined. However, that
would no longer be the case if z were declared as, for example, an array
of doubles.

Does this mean you think

printf( "*(unsigned*)z == %u\n", *(unsigned*)z );

has defined behavior, or undefined behavior?
Your claim that "the same statements ... also define conversions between
other object pointer types" implies that 6.3.2.2p2 will also be
applicable to those other conversions.

It doesn't imply that, because 6.3.2.3p2 doesn't define the results of
access, it defines only the results of comparison.
How would you interpret 6.3.2.2p2
as being applicable to, for instance, the conversion from double* to
int*? Modulo the usual disclaimer about alignment, of course.

I don't, because 6.3.2.3p2 is only about comparison, not about access.
The analogous guarantee for double* and int* is made in 6.3.2.3p7;
the guarantee is weaker in this case (not counting alignment) because
double* and int* can't be compared directly, whereas const int* and
int* can.
 

Richard

Tim Rentsch said:
James Kuyper said:
Tim said:
On Tue, 09 Sep 2008 13:04:08 -0700, Tim Rentsch wrote: ...
It's because of imposing an array representation onto a non-array object
that character pointers are singled out in defining their conversion
semantics.
Even if that is the case, I still don't see where the conversion for other
types is defined at all.

Do you believe that

int z[100] = {0};
const int *cpz = z;
printf( "*cpz == %d\n", *cpz );

has defined behavior? If so then the same statements that define
conversion from (int *) to (const int *) also define conversions
between other object pointer types (with the usual disclaimer
about alignment).

6.3.2.2p2 is the relevant statement: "For any qualifier q, a pointer to
a non-q-qualified type may be converted to a pointer to the q-qualified
version of the type; the values stored in the original and converted
pointers shall compare equal". In this context 'q' refers to the
qualifier "const".

I'm assuming you meant to write 6.3.2.3p2 (for 6.3.2.2p2).
Per 6.5.9p6, the fact that these pointers shall compare equal implies
that they point at the same object.

Your logic is faulty. Two pointers can compare equal and yet still
not be pointing at the same object. The stated conclusion is just not
a valid deduction.


I'm interested in this in the real world. Any pointers I have seen which
compare equal are equal. They also point to the same object. Please
expand on this.
 

Tim Rentsch

Harald van Dijk said:
Harald van Dijk said:
On Tue, 09 Sep 2008 13:04:08 -0700, Tim Rentsch wrote:
The two cases aren't the same. In the case of, say, an int matrix,
such as

int m[3][5];

the storage for m is guaranteed to hold contiguous int objects, with
the same representation as an array of int. The "arrayness" is
already there.

int m[3][5] is guaranteed to hold contiguous bytes, with the same
representation as an array of unsigned char, right? I really don't see
how your explanation doesn't apply to that just as well.

Bytes aren't the same as characters; clearly the standard distinguishes
between them. The array m is guaranteed to be made up of bytes, but
that's different from saying it's guaranteed to have contiguous unsigned
char objects.

Do you think *((unsigned char *)&m + (0 ... 15*sizeof(int)-1)) are
invalid, or that they are not separate objects, or both? The only way they
can be valid is if the bytes themselves are separate objects (and they
match the definition of an object), because the result of unary * is only
defined if its operand points to a function or to an object, and clearly
it does not point to a function.

I believe the accesses *((unsigned char*)&m + i), 0 <= i < sizeof(int[3][5]),
are valid, but they are valid only because of statements made in 6.3.2.3p7.

Yes, it does.


Which means the result may be less meaningful when you access an object as
an array of signed char.

They are just as meaningful for signed char as for unsigned char,
because 6.3.2.3p7 applies equally to both.
[snip]
It's because of imposing an array representation onto a non-array
object that character pointers are singled out in defining their
conversion semantics.

Even if that is the case, I still don't see where the conversion for
other types is defined at all.

Do you believe that

int z[100] = {0};
const int *cpz = z;
printf( "*cpz == %d\n", *cpz );

has defined behavior?

Yes. I agree with James Kuyper's explanation of why this is valid, but
would like to add that just as he mentions that it doesn't apply to
conversions between double* and int*, it also doesn't apply to conversions
between int(*)[] and int*.

That reasoning is incorrect, as I explained in my response to his
comments.
 

Tim Rentsch

James Kuyper said:
Tim said:
James Kuyper said:
Tim Rentsch wrote:
...
A plausible analysis, but not on point, since the example code
above doesn't cast array[1], it casts array, which allows access
to the whole object.
Citation, please - where does the standard say that such a conversion
allows access to the whole object? Where, in fact, does the standard say
anything at all about what you can do with the converted pointer value,
other than convert it back to its original type?

Which view is more reasonable:

A. Pointer conversion yields a pointer to the same object as
the original (assuming no alignment problems); or

This is clearly impossible in cases where the new pointer type points at
an object of a different size than the original. What I think was the
intent of the committee is that whenever a pointer conversion is
actually permitted, it results in a pointer to an object with the same
starting location in memory. I consider it a defect of the standard that
there is no wording anywhere which says so in the general case, only in
a couple of special cases.
B. Pointer conversion follows a strict constructionist view -
the only thing you can do with a converted pointer is
convert it back to the original type and compare it against
the unconverted original (assuming non-char types, etc)?

Of course, no one really believes (B); if they did, then they
should insist that a code sequence like

I believe that in a great many cases, (B) is the only thing the standard
actually says, and that in most of those cases this was in fact the
committee's intent. The following code fragment is not one of those cases:

I notice you didn't answer the question about which view is more
reasonable.

Your comment in another post about using a legal system analogy is
illuminating. In law, it's perfectly acceptable to argue several
inconsistent theories at once; arguing one theory doesn't preclude
arguing another, inconsistent, theory in the very same breath. All
that matters is whether the arguments convince a jury (or judge).

Any model that explicitly allows several inconsistent interpretations
is not, IMO, an especially useful one for reading the C standard.

As I just explained in another message, this is covered by 6.3.2.2p2.

As I just explained in a response to that message, your
logic there was faulty; 6.3.2.3p2 makes a guarantee
only about comparison, not about access.
 

Bartc

Richard said:
I'm interested in this in the real world. Any pointers I have seen which
compare equal are equal. They also point to the same object. Please
expand on this.

Perhaps comparing a int* with a char* for example? They could contain the
same address but point to slightly different objects.
 

Tim Rentsch

Richard said:
Tim Rentsch said:
James Kuyper said:
Tim Rentsch wrote:

It's because of imposing an array representation onto a non-array object
that character pointers are singled out in defining their conversion
semantics.
Even if that is the case, I still don't see where the conversion for other
types is defined at all.

Do you believe that

int z[100] = {0};
const int *cpz = z;
printf( "*cpz == %d\n", *cpz );

has defined behavior? If so then the same statements that define
conversion from (int *) to (const int *) also define conversions
between other object pointer types (with the usual disclaimer
about alignment).

6.3.2.2p2 is the relevant statement: "For any qualifier q, a pointer to
a non-q-qualified type may be converted to a pointer to the q-qualified
version of the type; the values stored in the original and converted
pointers shall compare equal". In this context 'q' refers to the
qualifier "const".

I'm assuming you meant to write 6.3.2.3p2 (for 6.3.2.2p2).
Per 6.5.9p6, the fact that these pointers shall compare equal implies
that they point at the same object.

Your logic is faulty. Two pointers can compare equal and yet still
not be pointing at the same object. The stated conclusion is just not
a valid deduction.


I'm interested in this in the real world. Any pointers I have seen which
compare equal are equal. They also point to the same object. Please
expand on this.

Here is one example:

int m[3][5];
int *p = &m[1][5];
int *q = &m[2][0];
assert( p == q );

Even though p == q, accesses through p may not (definedly) affect the
value of *q. I believe some optimizers rely on such things being
true.

A second example:

void *stuff = malloc( 15 * sizeof (int) );
int (*p10)[10] = stuff;
int (*p15)[15] = stuff;
int (*r)[] = p10;
int (*s)[] = p15;
assert( r == s );

Here r == s, but accesses through r may affect only the first 10
elements, whereas accesses through s may affect all 15 elements.
Here the two objects overlap at the beginning, but have different
extents. Like the first case, I believe optimizers may rely on
changes through r not affecting (*s)[10] through (*s)[14].
 

James Kuyper

Tim said:
I'm assuming you meant to write 6.3.2.3p2 (for 6.3.2.2p2).

You're correct. Sorry for the typo.
Your logic is faulty. Two pointers can compare equal and yet still
not be pointing at the same object. The stated conclusion is just not
a valid deduction.

Keep in mind that 6.5.9p6 isn't of the form "if A, then B", which is of
course not reversible. It is of the form "if AND ONLY IF A, then B",
from which it is perfectly valid to conclude "if B, then A".

Sure; two null pointers could compare equal, but 'z' can't be null.

Two pointers past the end of the same array could compare equal, but
again 'z' can't be such a pointer.

The only way permitted by the standard for cpz to compare equal to z
without pointing at the same object is if it points one past the end of
an array object that happens to immediately precede 'z' in
memory. However, the reason for that exception is to allow commonplace
implementations where such a pointer is simultaneously a pointer
one-past-the end of one array AND a pointer to the first element of the
second. I will grant that the wording used falls a little short of
actually guaranteeing that. I am sure that the committee's intent was
that wherever the standard guarantees that two non-null pointers to
objects must compare equal, it should be taken as meaning that they
point at the same object.
Given that we know that cpz points at z[0], I don't see any basis for
claiming that the behavior of that code is undefined. However, that
would no longer be the case if z were declared as, for example, an array
of doubles.

Does this mean you think

printf( "*(unsigned*)z == %u\n", *(unsigned*)z );

has defined behavior, or undefined behavior?

The standard doesn't say where (unsigned*)z points; it follows that
dereferencing it could have undefined behavior. I consider this a defect
in the standard, and unlikely to have been the intent of the committee.
In practice, unless alignment is an issue, I wouldn't expect anything to
actually go wrong.
 

James Kuyper

Tim said:
I notice you didn't include a disclaimer that you are not a
lawyer. :)

My foreign-born in-laws constantly come to me for advice on the American
legal system, about which I know a great deal more than they do. In
particular, I know enough to include such a disclaimer with every answer
I give them.

Unlike true lawyers, there's no legal requirement that language lawyers
have credentials. Therefore I'm just as much of a language lawyer as
anyone, and far better qualified to be one than most (modesty is clearly
not one of my virtues :). No disclaimer is needed.

I apologize for taking your joking comment seriously; but I couldn't
come up with a good joking response.
If we take the legal system as our model for deciding questions
about the standard, then the only real test for comparing two
opposing interpretations would be to offers arguments before some
court, and let the court decide. I don't find this model very
useful, since there are no such courts.

The C committee is the relevant court, and you bring cases before that
court by filing Defect Reports. Just as with real courts, the C
committee sometimes makes bad or incorrect decisions, but they are
nonetheless the highest relevant authority. In principle, ISO has some
authority over them, but I doubt that ISO's authority would ever be used
to decide a technical issue over the meaning of the standard. ISO is
mainly concerned with procedural issues about how the standard is created.
 
