struct named 0

  • Thread starter Mohd Hanafiah Abdullah

xarax

Malcolm said:
But in this case we are not casting an integer 0 to a pointer, but a null
pointer to an integer.

No, you're not.

You're casting integer 0 into a null pointer, offsetting
that pointer by the distance to the member field, then
taking the address of that field (which converts the
pointer back to an integer -- that reverses whatever
happened in converting the integer to a pointer). That
leaves a simple integer constant that can be resolved
at compile time.

The & cancels out the ->. There is no load of a pointer
into an address register whatsoever and no runtime
dereference.
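For context, the construct being debated is the classic hand-rolled
offsetof idiom. Here is a minimal runnable sketch, assuming the doomdata
struct quoted later in the thread; MY_OFFSETOF is only an illustrative
name, and the portable alternative is the offsetof macro from <stddef.h>:

#include <stddef.h>
#include <stdio.h>

typedef struct {
    int a;
    int b;
} doomdata;

/* The hand-rolled idiom under discussion: many compilers fold it to a
   compile-time constant, but (as argued below) the Standard does not
   guarantee its behavior. */
#define MY_OFFSETOF(type, member) ((size_t)&((type *)0)->member)

int main(void)
{
    printf("offsetof:    %zu\n", offsetof(doomdata, b));  /* well defined */
    printf("hand-rolled: %zu\n", MY_OFFSETOF(doomdata, b));
    return 0;
}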
 

Old Wolf

Malcolm said:
Also consider if a null pointer is cast to an integral type, or if a pointer
derived by adding an offset to the null pointer is cast to an integer. This
cast is a simple bitwise conversion, so results will differ on a platform on
which NULL is not all bits zero.

Actually the cast is implementation-defined (and the implementation
can define it to be UB in some, or all, cases).
It would make sense for an implementation to give 0 for
(int)(void *)0, even if (void *)0 were not all bits zero.

The expression (int)&((doomdata*)0)->b is relying on 2 non-standard
things:
- the compiler doesn't actually dereference 0
- the cast to int works as if pointers are ints in a flat
memory model and (int)NULL == 0

It's irrelevant to this expression whether NULL is all-bits-zero
or not.
 

xarax

Jack Klein said:
(This deserves a separate thread, but since I asked the above
question here, I'll continue here too.)

As I understand it, the expression constitutes an access to the structure
member.

1. Does a member access constitute an access to the *whole* structure?
eg.:

struct A { int i; int _i; };
struct B { int i; float f; };
struct A a = {0};
struct B *pb = (struct B*)&a;
pb->i; //UB?
(*pb).i; //UB?

The term 'access' is really only used in the C standard in conjunction
with the volatile qualifier, where the wording is unfortunately vague
enough that it can be construed several different ways.

The two statements above actually have defined behavior, but not for
the reason you might think. The language guarantees that a pointer to
structure, suitably cast, is also a pointer to its first member. So
given that 'pb' holds any of the following:

- a pointer to any structure type whose first member is an int

- a pointer to an array of ints

- a pointer to a single int

...and the int pointed to has a valid value, the expressions, though
not recommended, will work as designed. The compiler must generate
code equivalent to *(int *)pb, and if there is actually an int there
all is well.
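As an aside, the first-member guarantee described above can be shown
with a small, well-defined sketch (reusing struct A from the quoted
question; the wrapper code is only illustrative):

#include <stdio.h>

struct A { int i; int _i; };

int main(void)
{
    struct A a = { 42, 0 };

    /* A pointer to a structure, suitably converted, also points to its
       first member, so reading through pi is well defined. */
    int *pi = (int *)&a;
    printf("%d\n", *pi);   /* prints 42 */
    return 0;
}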
Do I access the first int sub-object in `a' only, or do I access
the whole object `a'?

Now that depends on what you mean by access. Let's assume a Pentium
(1/2/3/4) platform with a typical compiler, which means the size of
either of your structures is 8 8-bit bytes. Now let's also assume
that this implementation allocates all structures on an address evenly
divisible by 8, a not uncommon performance feature of such
implementations.

With the assumptions above, if the processor needs to access the
structure from memory it will perform a 64-bit access physically, so
even though your code does not direct the abstract machine to touch
the value of other members in any way, the entire memory space holding
the structure will be physically read.
2. I see certain similarity between structs and arrays (in fact,
both are called "aggregates").
Why is it that for array:
&a[5];
doesn't constitute object access (6.5.3.2#3), whereas for struct:
&s.m;
&ps->m;
the expressions do constitute access?
Why is the language designed like this?

There are actually more differences than similarities between structs
and arrays, despite the fact that both are aggregates. Structs are
first class objects, meaning they can be assigned, passed to and
returned from functions by value, and their names are never implicitly
converted to pointers. Arrays are not first class objects and do not
share any of the characteristics above.

As for other differences in this particular case, this is spelled out
by paragraph 3 of 6.5.3.2 of C99:

[begin quotation]
The unary & operator returns the address of its operand. If the
operand has type ''type'', the result has type ''pointer to type''. If
the operand is the result of a unary * operator, neither that operator
nor the & operator is evaluated and the result is as if both were
omitted, except that the constraints on the operators still apply and
the result is not an lvalue. Similarly, if the operand is the result
of a [] operator, neither the & operator nor the unary * that is
implied by the [] is evaluated and the result is as if the & operator
were removed and the [] operator were changed to a + operator.
Otherwise, the result is a pointer to the object or function
designated by its operand.
[end quotation]

Note the differences between applying '&' to the result of a '*'
operator and to the result of a '[]' operator. In the former case,
neither '&' nor '*' are evaluated as such, but note "the constraints
on the operators still apply".
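To make the contrast concrete, here is a small sketch of the two forms
that C99 6.5.3.2#3 treats specially (the variable names are only
illustrative); under C99, neither declaration evaluates a dereference:

#include <stddef.h>

int main(void)
{
    int a[5];
    int *p = NULL;

    int *past = &a[5];   /* & and [] combine to a + 5: a valid
                            one-past-the-end pointer, no element is read */
    int *q = &*p;        /* & and * cancel: no dereference occurs,
                            q is simply a null pointer */
    (void)past;
    (void)q;
    return 0;
}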

Now let's back up to paragraph 1 of 6.5.3.2, which lists the
constraints for the unary '&' operator:

[begin quotation]
The operand of the unary & operator shall be either a function
designator, the result of a [] or unary * operator, or an lvalue that
designates an object that is not a bit-field and is not declared with
the register storage-class specifier.
[end quotation]

Notice that the expression under discussion,

(int)&((doomdata*)0)->a;

...is none of these things. Specifically, the operand of the '&'
operator, '((doomdata*)0)->a' is:

- not a function designator

- not the result of a [] operator

- not the result of a unary * operator

Yes, it is unary *. a->b is an alias for (*a).b
- and, because of the null pointer, not an lvalue

Finally consider one last thing, namely that regardless of whether
there is an actual access to an object, the expression explicitly
performs pointer arithmetic on a null pointer, and such use of a null
pointer is undefined in and of itself.

Therefore, & cancels out the -> to yield a simple
integer constant that is resolved at compile time.
 

Chris Torek

... the compiler will not generate a load into an address register,
because the & cancels out the ->. The compiler has all the information
it needs to resolve the expression to a constant offset value.

There is only one problem: the compiler really sucks. It does
only what it is absolutely forced to by the C Standard. The C
Standard *allows* it to follow the pointer first, and only then
compute the offset, so it does.

(Well, which "the" compiler are *you* talking about?)
 

xarax

Chris Torek said:
There is only one problem: the compiler really sucks. It does
only what it is absolutely forced to by the C Standard. The C
Standard *allows* it to follow the pointer first, and only then
compute the offset, so it does.

Following the pointer into memory would make it
impossible to determine the address of the field.

The & cancels out the apparent dereference.
 

Chris Torek

Following the pointer into memory would make it
impossible to determine the address of the field.

Yes. This is why the compiler throws the result away after following
the pointer.
The & cancels out the apparent dereference.

No, the "&" makes the compiler throw away the result of the dereference.

Unfortunately, by then it is too late.

I did say this compiler really sucks. But it conforms.
 

xarax

Chris Torek said:
Yes. This is why the compiler throws the result away after following
the pointer.


No, the "&" makes the compiler throw away the result of the dereference.

Unfortunately, by then it is too late.

I did say this compiler really sucks. But it conforms.

Totally ridiculous.
 

Old Wolf

xarax said:
No, you're not.

You're casting integer 0 into a null pointer,

Casting 0 to a pointer must give a null pointer.
offsetting that pointer by the distance to the member
field, then taking the address of that field
(which converts the pointer back to an integer

That conversion is implementation-defined
-- that reverses whatever happened in converting the
integer to a pointer).

There is no requirement for that to be true. In fact
it can't be true, if there is not an exact mapping
from integers to pointers (eg. segmented architecture,
IA64, etc.)

If you're still not convinced, imagine that NULL
lives at address 0xDEADBEEF. Then &((foo *)0)->bar
might be 0xDEADC00C, for example, and when that's
converted back to an integer you might get 0xDEADC00C
still. Not a very accurate struct offset.
Furthermore, if that is outside the range of signed int
(likely on a 32-bit system) you get undefined behaviour
(again).
That leaves a simple integer constant that can be resolved
at compile time.

There is no load of a pointer into an address register
whatsoever and no runtime dereference.

For your compiler on Tuesdays, perhaps. There is nothing
(except possible sales figures..) to stop a compiler from
loading 0 to an address register and then incrementing
it, causing a hardware exception.
 

Christian Bau

Jack Klein said:
I am unsure of your meaning here. Do you mean an integral constant at
the source level, as in:

double *dp = (double *) 0;
(double *) 0;
...or do you actually mean the value of an integer object, as in:

int x = 0;
double *dp = (double *) x;

Both.
If you mean the latter, I have never noticed anything in C99 requiring
the result be a null pointer.

Could you please clarify and include chapter & verse?

Not directly. The result of conversion from int to for example double*
is defined directly in one special case: When the int is a null pointer
constant, that is the value is 0, and it is an integer constant
expression. But since the result of a conversion only depends on the
value converted and nothing else, the result of converting _any_ int of
value 0 must always be the same. As it is a null pointer in some cases,
it must be a null pointer in all cases.
 

S.Tobias

Christian Bau said:
The result of conversion from int to for example double*
is defined directly in one special case: When the int is a null pointer
constant, that is the value is 0, and it is an integer constant
expression.

No, the value is not important here, you're misquoting the Standard.
The Standard says: "integer constant expression with the value 0",
so first you look up what is an integer constant expression, and
then you filter only those that have the value zero. Examples are:
0
0x0
0u
(1-1)
'\000'
(int)0.0
7/11
sizeof 13 - sizeof (int)
But since the result of a conversion only depends on the
value converted and nothing else, the result of converting _any_ int of
value 0 must always be the same. As it is a null pointer in some cases,
it must be a null pointer in all cases.

The Std does not say such a thing. If it wanted the value 0 to be
converted into a null pointer, it would say "integer expression with
the value 0" or just "integer value 0". Integer constant expression
is a special case that a compiler must recognize at compile-time.

Conversions between pointer and integer types are explicitly
implementation defined.
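A small sketch of the distinction being argued over, with illustrative
names: p1's initializer uses a null pointer constant, while p2's uses an
ordinary int object with value 0, whose conversion the Standard only
calls implementation-defined (whether it must also yield a null pointer
is exactly what is disputed below):

#include <stdio.h>

int main(void)
{
    double *p1 = (double *) 0;    /* 0 is a null pointer constant, so p1
                                     is guaranteed to be a null pointer */

    int zero = 0;
    double *p2 = (double *) zero; /* zero is an object, not an integer
                                     constant expression: the result is
                                     implementation-defined */

    printf("%d %d\n", p1 == 0, p2 == 0);
    return 0;
}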
 

Christian Bau

"S.Tobias said:
No, the value is not important here, you're misquoting the Standard.
The Standard says: "integer constant expression with the value 0",
so first you look up what is an integer constant expression, and
then you filter only those that have the value zero. Examples are:
0
0x0
0u
(1-1)
'\000'
(int)0.0
7/11
sizeof 13 - sizeof (int)

So you are saying that I should look at integer constant expressions
with a value of zero, instead of looking at integers with a value of
zero that are integer constant expressions???

The Std does not say such a thing.

The Standard says that the conversion is implementation-defined.
"Implementation-defined" implies "defined". "Defined" implies: The
implementation gives rules what the result will be if a value is
converted. The result of _any_ operation only depends on the values
involved (and of course on the definitions given by the C Standard or
the implementation) and nothing else; for example it is independent of
the representation of a value. Now add two and two together and you get
exactly what I said.
If it wanted the value 0 to be
converted into a null pointer, it would say "integer expression with
the value 0" or just "integer value 0".

That assumption is naive.
Integer constant expression
is a special case that a compiler must recognize at compile-time.

Only in order to allow an implicit conversion to an appropriate pointer
type that wouldn't be present otherwise. The conversion itself is
implementation defined, with one special case defined by the C Standard.
Conversions between pointer and integer types are explicitly
implementation defined.

As an implementation must conform to _everything_ that the C Standard
guarantees, an implementation is not absolutely free in how it can
define the result of that conversion. Whatever definition the C
implementor chooses, it must be consistent with the requirement that for
example (double *) 0 is a null pointer. If you look at the list of
situations where null pointer constants are recognised and handled
specially by the compiler, you will find that casts are not among them.
 

Chris Torek

Totally ridiculous.

Maybe so; and you can feel free to reject such compilers (I would
myself, given the option). But while there is "what I think the
Standard *should* require", there is also "what the Standard actually
requires". Sometimes they differ. (I will note here that I also
object to the C89 wording that makes &a[N] illegal even though a+N
is legal. I am not sure whether this is fixed in C99.)

One should feel free to set one's own standards to something other
than those provided by C89 or C99; but one should be aware of where
they may differ. As another example, I do not use compilers on
which only six monocase characters matter in external identifiers
-- but I know that C89 says this is all I can rely on. Any *sensible*
C environment does better, just as any sensible compiler avoids
unnecessary, wasteful pointer-following.

It is OK to require good sense, as long as you remember that the
C Standard does not. :)
 

J. J. Farrell

xarax said:
No, you're not.

You're casting integer 0 into a null pointer, offsetting
that pointer by the distance to the member field, then
taking the address of that field (which converts the
pointer back to an integer -- that reverses whatever
happened in converting the integer to a pointer). That
leaves a simple integer constant that can be resolved
at compile time.

The & cancels out the ->. There is no load of a pointer
into an address register whatsoever and no runtime
dereference.

Instead of continually repeating this, please prove the rest of
us wrong by quoting the sections of the Standard that require
the compiler to behave as you describe, and thus prevent the
construct resulting in undefined behaviour.
 

Jack Klein

Actually the cast is implementation-defined (and the implementation
can define it to be UB in some, or all, cases).

No, it can't. The term 'implementation-defined' under the C standard
does not allow an implementation to turn implementation-defined
behavior into undefined behavior. For implementation-defined
behavior, the implementation must perform consistently according to rules
which it must document.

A classic example of implementation-defined behavior is whether right
shifts of signed integer types sign-extend or shift in zero bits. The
implementation must specify which behavior it will produce under any
specific set of circumstances, and then must consistently provide
that behavior.
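A minimal illustration of that kind of implementation-defined behavior
(the exact output depends on the implementation, which must document its
choice):

#include <stdio.h>

int main(void)
{
    int x = -8;

    /* Implementation-defined: an arithmetic (sign-extending) right shift
       gives -2; a logical (zero-filling) right shift gives a large
       positive value. Either way, the implementation must behave
       consistently. */
    printf("%d\n", x >> 2);
    return 0;
}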
It would make sense for an implementation to give 0 for
(int)(void *)0, even if (void *)0 were not all bits zero.

The expression (int)&((doomdata*)0)->b is relying on 2 non-standard
things:
- the compiler doesn't actually dereference 0
- the cast to int works as if pointers are ints in a flat
memory model and (int)NULL == 0

It's irrelevant to this expression whether NULL is all-bits-zero
or not.

Agreed with everything else.
 

Jack Klein

So you are saying that I should look at integer constant expressions
with a value of zero, instead of looking at integers with a value of
zero that are integer constant expressions???

Yes, because an integer object, whatever its value, cannot be an integer
constant expression. The value of an object, even one with a const
qualifier, is not and never can be an integer constant expression in
C, although it can be down the hall for our friends in comp.lang.c++.

Here's the definition of the term "integer constant expression",
defined in paragraph 6 of 6.6 "Constant expressions" of the 1999
standard:

[begin]
An integer constant expression shall have integer type and shall
only have operands that are integer constants, enumeration constants,
character constants, sizeof expressions whose results are integer
constants, and floating constants that are the immediate operands of
casts. Cast operators in an integer constant expression shall only
convert arithmetic types to integer types, except as part of an
operand to the sizeof operator.
[end]

Notice that there is absolutely no mention of the value of an object,
whether const qualified or not.

Put it another way, an integer constant expression is the kind of
expression you need to give the size of an array:

int x [27]; /* OK */

...but:

const int array_size = 27;
int x [array_size]; /* illegal at file scope even with C99 */
/* legal as automatic at block scope */
/* as a variable length array in C99, */
/* completely illegal prior to C99 */
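Since the value of an object can never be an integer constant
expression, a common workaround (not from the thread, added here only
for illustration) is an enumeration constant, which is one:

enum { array_size = 27 };  /* an enumeration constant is an
                              integer constant expression */
int x [array_size];        /* OK even at file scope, in C90 and C99 */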
The Standard says that the conversion is implementation-defined.
"Implementation-defined" implies "defined". "Defined" implies: The
implementation gives rules what the result will be if a value is
converted. The result of _any_ operation only depends on the values
involved (and of course on the definitions given by the C Standard or
the implementation) and nothing else; for example it is independent of
the representation of a value. Now add two and two together and you get
exactly what I said.

No, you are wrong. There are several special cases for constant
expressions in source code, which are translated by the compiler at
compile time.

Consider:

char alert1 [3] = "\a";
char alert2 [3] = { '\\', 'a', '\0' };

The compiler performs not one but two compile time constant
conversions on the initialization for alter1. First it converts the
two source file characters \a into a single implementation-defined
character. Second, it converts the quoted string by stripping away
the quotes and placing a '\0' terminator at the end. The result is a
string with strlen() of 1.

The compiler performs two compile time constant conversions on alert2,
one similar to alert1 and the other completely different. The first
is to convert the source file characters \\ into a single
implementation-defined character in the execution character set.
Secondly, it converts 'a' from the source to the execution character
set, if they happen to be different. The result is a string with a
strlen() of 2.

If puts(alert1) causes your computer to emit an audible beep and a
newline, puts(alert2) will most certainly not cause a beep. Instead
it will send the two printable characters '\' and 'a' to the standard
output followed by a newline. Nary an audible sound to be heard.
That assumption is naive.

Your assumption that the value of any object can be a "constant
expression" is what is naive. The term "null pointer constant" is
specifically defined in paragraph 3 of 6.3.2.3 as:

[begin]
An integer constant expression with the value 0, or such an expression
cast to type void *, is called a null pointer constant.55) If a null
pointer constant is converted to a pointer type, the resulting
pointer, called a null pointer, is guaranteed to compare unequal
to a pointer to any object or function.
[end]

Notice the explicit "integer constant expression", not "integer
value", and as in the snippet I quoted above, "integer constant
expression" does not include the value of an integer object or any
other object.
Only in order to allow an implicit conversion to an appropriate pointer
type that wouldn't be present otherwise. The conversion itself is
implementation defined, with one special case defined by the C Standard.


As an implementation must conform to _everything_ that the C Standard
guarantees, an implementation is not absolutely free in how it can
define the result of that conversion. Whatever definition the C
implementor chooses, it must be consistent with the requirement that for
example (double *) 0 is a null pointer. If you look at the list of
situations where null pointer constants are recognised and handled
specially by the compiler, you will find that casts are not among them.

Yes, (double *)0 is a null pointer of type pointer to double.
But (double *)int_with_value_0 is not, because 0 in the first case is
an integer constant expression, but the value of the object
'int_with_value_0' is not.
 

Jack Klein

Jack Klein said:
On 28 Nov 2004 10:41:21 GMT, "S.Tobias"

typedef struct {
int a;
int b;
} doomdata;

Would this be correct?
(int)&((doomdata*)0)->a;

Technically it is still undefined behavior, as the semantics of the
expression dereference a null pointer.

/snip/

Notice that the expression under discussion,

(int)&((doomdata*)0)->a;

...is none of these things. Specifically, the operand of the '&'
operator, '((doomdata*)0)->a' is:

- not a function designator

- not the result of a [] operator

- not the result of a unary * operator

Yes, it is unary *. a->b is an alias for (*a).b

No, it is not. It is the result of the -> operator. Nor is what you
call an 'alias' the result of a unary * operator, either. It is the
result of the . operator. One of the operands to the . operator is the
result of the unary *, but the final expression is the result of the .
operator.

Given:

int x [3] = { 1, 2, 3 };

Then:

*x
*(x + 1)

Are expressions that are the result of unary * operator, but:

*x + 1

The expression above is the result of the '+' operator, one of whose
operands happens to be the result of the unary * operator.
Therefore, & cancels out the -> to yield a simple
integer constant that is resolved at compile time.

No, the standard specifically states that the combinations "&[]" and
"&*" cancel out. Go back and read the quotation from paragraph 3,
above. Where does it state that the combinations "&." or "&->" cancel
out? Nowhere. So they don't.
 

Jack Klein

/snip/

It does not dereference any pointer. The & cancels out
the ->.

I see now that you have repeated the statement above quite a few times
in this thread. I provided a much more detailed reply to one of your
direct responses to me farther down the thread, but I will repeat part
of it here:

The & operator does NOT cancel out the -> dereference operator, nor
does it cancel out the . operator. I quoted C99 6.5.3.2#3 from the
standard in where it states that the & operator essentially cancels
out the [] operator and the * operator. I know you have seen that
citation, you quoted it in a reply to my post that contained it.

If you want to keep contending that it cancels the . or ->
operators, put up or shut up. Cite a section of the C standard that
says so, as mine did for two other operators, or admit that you are
wrong.
 

Richard Bos

Christian Bau said:
As an implementation must conform to _everything_ that the C Standard
guarantees, an implementation is not absolutely free in how it can
define the result of that conversion. Whatever definition the C
implementor chooses, it must be consistent with the requirement that for
example (double *) 0 is a null pointer. If you look at the list of
situations where null pointer constants are recognised and handled
specially by the compiler, you will find that casts are not among them.

In addition to what Jack already wrote, that last statement is plain
wrong for the simple reason that there is no such list. What the
Standard says, in 6.3.2.3#3, is

# If a null pointer constant is converted to a pointer type, the
# resulting pointer, called a null pointer, is...

Since a cast to double * definitely (in fact, quite explicitly) _is_ a
conversion to pointer type, it is the very paragraph which defines both
null pointer constants and null pointers which demands that (double *)0
results in a null pointer.
As for an integer _object_, or any other non-constant integer
expression, with the value zero, the Standard makes no such exception
for them. The only exception is for null pointer _constants_, and that
exception is both made explicit, and mentioned again in 6.3.2.3#6 _as
the only exception_.

Richard
 

Old Wolf

Jack Klein said:
No, it can't. The term 'implementation-defined' under the C standard
does not allow an implementation to turn implementation-defined
behavior into undefined behavior. For implementation-defined
behavior, the implementation must perform consistently according to rules
which it must document.

Thanks for the clarification. I think what I was trying to
say is, the implementation could define it as a hardware
exception (or some other condition that terminates the program).
(Is that right?)
 

Christian Bau

In addition to what Jack already wrote, that last statement is plain
wrong for the simple reason that there is no such list. What the
Standard says, in 6.3.2.3#3, is

# If a null pointer constant is converted to a pointer type, the
# resulting pointer, called a null pointer, is...

Since a cast to double * definitely (in fact, quite explicitly) _is_ a
conversion to pointer type, it is the very paragraph which defines both
null pointer constants and null pointers which demands that (double *)0
results in a null pointer.

The "special handling" happens in cases like

int* p = 0; // Only legal because of special handling
if (p == 0)... // Only legal because of special handling

Both examples would be illegal if you replace 0 with 1. (double *) 1 is
perfectly legal, with implementation defined behavior.
As for an integer _object_, or any other non-constant integer
expression, with the value zero, the Standard makes no such exception
for them. The only exception is for null pointer _constants_, and that
exception is both made explicit, and mentioned again in 6.3.2.3#6 _as
the only exception_.

The Standard does not have an explicit written rule for converting
integers of value 0 that are not null pointer constants, but the
"implementation defined behavior" cannot distinguish between a zero that
is a null pointer constant and a zero that is not a null pointer
constant. The conversion is _only_ based on the value. The C Standard
gives an explicit guarantee (gives an explicit requirement for any
conforming implementation) that in a certain subset of all situations
where a value of 0 is converted, the result will be a null pointer. The
implementation has no choice but converting _all_ zero values to null
pointers.
 
