Trap Representations - c99 [again]


pemo

As far as I understand it, a trap representation [TR] means something like -
an uninitialised automatic variable might /implicitly/ hold a bit-pattern
that, if read, *might* cause a 'trap' (I'm not sure what 'a trap' means
here - anyone?).



I also read into this that, an *initialised* automatic variable, may never
hold a bit pattern that might, when read, cause a 'trap'. I.e., if an auto
is explicitly initialised, it may *never* hold a trap value; no matter how
it's been initialised - right?



So, firstly I'd like to know whether my interpretations of the standard are
correct here - always given that my phrasing and choice of words are not
contentious (words are always contentious to a degree, of course)?



Now, this second bit builds on my first bit. I think!



It seems to me that, for example, a char cannot possibly contain an implicit
set of bits that could cause a TR - or is it that I'm not considering some
possible alphabet/possible-machine where this is feasible [and, if I could -
is this really worth worrying about?]? For example, consider this code:



// ex1.
char x;
int i = x;



As 'x' is read, the c99 standard says that it may contain a TR - however,
given that a char is eight-bits (CHAR_BIT is always 8 bits right?), and that
there's an ASCII code for all 256 possible values of a CHAR_BIT that can
'accidentally' be set in 'x' - x can *never* hold a set of bits that could
cause a trap. Right?



Now, *if* there was such a set of bits, it seems to me that *if* an auto
could *always* contain this set, that that would be *great* - as it would
prevent a lot of bugs like the one in ex1 above. But given my example, this
doesn't seem possible - all possible bit patterns are legal here - and the
only way of knowing that the bits in 'x' are 'wrong', is to know that 'x'
wasn't explicitly initialised. Building upon that, every C compiler I've
ever used issues a warning along the lines of 'x' is not initialised before
it is used - if such a diagnostic is *required* by the c99 standard, then
traps should never occur - if of course you're paying attention to the
warnings your compiler issues!



Also, if it were possible [to always trap in such a situation], it would
require some runtime-checking right - either by the OS, or by the compiled
code itself? And the latter seems to go against a bit of the C ethos as I
understand it, i.e., that the compiler doesn't check [at compile time], nor
does the compiler generate code that checks at runtime - a C compiler
assumes that you should know what you're doing, and be ever diligent when
you use it [the C language]?



Lastly, am I right in thinking that TRs would simply /go away/ *if*
compilers/the-std *mandated* that every automatic be initialised -
whether it be a struct union or whatever? Does such a restraint seem
something that a /later/ incarnation of the C standard might impose - and
that the ground is being prepared via the introduction of TRs?



Oh - ok, there's another 'lastly': what was the rationale/driving-force
behind putting TRs into the standard - does anyone here know, or is this
part a question for comp.std.c?
 

Vladimir S. Oka

As far as I understand it, a trap representation [TR] means something
like - an uninitialised automatic variable might /implicitly/ hold a
bit-pattern that, if read, *might* cause a 'trap' (I'm not sure what
'a trap' means here - anyone?).

Well, I don't think I can give you a definitive answer, but I can point
to a few things I think you got wrong...

BTW, I think that "a trap" is meant to describe the facility that some
processors have that generates a software interrupt (a trap) in case an
operation is attempted on an illegal value of the register/memory
location.
I also read into this that, an *initialised* automatic variable, may
never hold a bit pattern that might, when read, cause a 'trap'. I.e.,
if an auto is explicitly initialised, it may *never* hold a trap
value; no matter how it's been initialised - right?

I think an automatic variable can be initialised from an argument passed
to the function that is itself a trap representation. Admittedly, reading
that argument may itself invoke a trap, so the variable may never actually
get set, but the undefined behaviour thus invoked could do just that.
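For illustration, a minimal sketch of that scenario (hypothetical names;
nothing here is guaranteed to actually trap on any given implementation):

  void f(int v)          /* v is copy-initialised from the caller's argument */
  {
      int local = v;     /* "explicitly initialised", but possibly from garbage */
      (void)local;
  }

  int main(void)
  {
      int u;             /* automatic, never initialised: indeterminate value */
      f(u);              /* merely reading u to pass it invokes undefined
                            behaviour; if u held a trap representation, local
                            above ends up "initialised" from it */
      return 0;
  }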
It seems to me that, for example, a char cannot possibly contain an
implicit set of bits that could cause a TR - or is it that I'm not
considering some possible alphabet/possible-machine where this is
feasible [and, if I could - is this really worth worrying about?]?

I think such a case is conceivable, although I can't give you an
example.
For example, consider this code:

// ex1.

char x;

int i = x;

As 'x' is read, the c99 standard says that it may contain a TR -
however, given that a char is eight-bits (CHAR_BIT is always 8 bits
right?),

Wrong. CHAR_BIT is guaranteed to be at least 8. Admittedly, most modern
machines have it as 8, but there were, and may yet be, machines with 9 or
more.
and that there's an ASCII code for all 256 possible values of

ASCII defines only values 0 to 127.
a CHAR_BIT that can 'accidentally' be set in 'x' - x can *never* hold
a set of bits that could cause a trap. Right?

Again, I think machines /may/ exist that break that assumption. Imagine
a hypothetical machine with CHAR_BIT == 8, but supporting only pure
ASCII (0 to 127) and all other values being trap representations.
Now, *if* there was such a set of bits, it seems to me that *if* an
auto could *always* contain this set, that that would be *great* - as
it would prevent a lot of bugs like the one in ex1 above. But given
my example, this doesn't seem possible - all possible bit patterns are
legal here - and the only way of knowing that the bits in 'x' are
'wrong', is to know that 'x' wasn't explicitly initialised. Building
upon that, every C compiler I've ever used issues a warning along the
lines of 'x' is not initialised before it is used - if such a
diagnostic is *required* by the c99 standard, then traps should never
occur - if of course you're paying attention to the warnings your
compiler issues!

Also, if it were possible [to always trap in such a situation], it
would require some runtime-checking right - either by the OS, or by
the compiled code itself?

As I mentioned earlier, some CPUs actually generate the trap for certain
values without the need for code to do anything special.
And the latter seems to go against a bit of the C ethos
as I understand it, i.e., that the compiler doesn't check [at compile
time], nor does the compiler generate code that checks at runtime - a
C compiler assumes that you should know what you're doing, and be ever
diligent when you use it [the C language]?

Lastly, am I right in thinking that TRs would simply /go away/ *if*
compilers/the-std *mandated* that every automatic be initialised -
whether it be a struct union or whatever? Does such a restraint seem
something that a /later/ incarnation of the C standard might impose -
and that the ground is being prepared via the introduction of TRs?

Oh - ok, there's another 'lastly' . what was the
rationale/driving-force behind putting TRs into the standard - does
anyone here know, or is this part a question for comp.std.c?

These I leave to my betters...
 

Michael Mair

pemo said:
As far as I understand it, a trap representation [TR] means something like -
an uninitialised automatic variable might /implicitly/ hold a bit-pattern
that, if read, *might* cause a 'trap' (I'm not sure what 'a trap' means
here - anyone?).

A trap representation for a certain type is essentially a bit pattern
which, when interpreted as the representation of an object of the
respective type, is invalid.
Such a pattern could arise through missing initialisation of an automatic
variable, writing a value to one union member and then reading another,
changing the representation by overlaying an array of unsigned char, and
so on.
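As a hedged sketch of the last two routes (the matching sizes and the
IEEE 754 layout below are assumptions, and nothing guarantees that either
pattern really is a trap representation on a given implementation):

  int main(void)
  {
      /* route 1: write one union member, read another */
      union { float f; unsigned int u; } pun;  /* assumes same size */
      pun.u = 0x7f800001u;        /* an IEEE 754 signalling-NaN pattern */
      float maybe_tr = pun.f;     /* may be a trap representation */

      /* route 2: overlay the object with unsigned char and scribble */
      double d = 1.0;
      unsigned char *bytes = (unsigned char *)&d;
      bytes[sizeof d - 1] ^= 0xff; /* the new pattern need not be a
                                      valid double any more */
      double also_maybe_tr = d;

      (void)maybe_tr; (void)also_maybe_tr;
      return 0;
  }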
I also read into this that, an *initialised* automatic variable, may never
hold a bit pattern that might, when read, cause a 'trap'. I.e., if an auto
is explicitly initialised, it may *never* hold a trap value; no matter how
it's been initialised - right?

If you assign a "valid" value to the variable, then yes.
But imagine you cause a signed integer overflow that results in a trap
representation (which is possible), or you initialise a float or an
integer (even an unsigned one) from a double value that cannot be
represented -- or the "initialiser" is itself invalid, or the
initialisation happens after undefined behaviour has already been invoked.
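Rough sketches of the overflow and conversion cases (whether any of them
actually yields a trap representation is entirely up to the implementation;
all of them are undefined behaviour):

  #include <limits.h>

  int main(void)
  {
      int i = INT_MAX;
      i = i + 1;            /* signed overflow: undefined behaviour; the
                               stored result, if any, could be a TR */

      double big = 1.0e300;
      int j = (int)big;     /* value not representable in int: undefined
                               behaviour, same caveat */

      unsigned int u = (unsigned int)-1.0e300;  /* double value that cannot
                                                   be represented in the
                                                   unsigned type: UB too */
      (void)i; (void)j; (void)u;
      return 0;
  }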

So, firstly I'd like to know whether my interpretations of the standard are
correct here - always given that my phrasing, and choice of words is not
contentious (words always are contentious to a degree of course)??

Try to be clear and concise.
The "uninitialised variable" example is nice enough but make
clear that it is only an example -- and do not cut away too
much meaning from trap representation.
Now, this second bit builds on my first bit. I think!
It seems to me that, for example, a char cannot possibly contain an implicit
set of bits that could cause a TR - or is it that I'm not considering some
possible alphabet/possible-machine where this is feasible [and, if I could -
is this really worth worrying about?]? For example, consider this code:

AFAIR, this is explicitly guaranteed by the standard for char
and signed char: They may contain padding bits but these must
be "non-trapping".
// ex1.

char x;
int i = x;

As 'x' is read, the c99 standard says that it may contain a TR - however,
given that a char is eight-bits (CHAR_BIT is always 8 bits right?), and that
there's an ASCII code for all 256 possible values of a CHAR_BIT that can
'accidentally' be set in 'x' - x can *never* hold a set of bits that could
cause a trap. Right?

Wrong argumentation, IMO.
Now, *if* there was such a set of bits, it seems to me that *if* an auto
could *always* contain this set, that that would be *great* - as it would
prevent a lot of bugs like the one in ex1 above. But given my example, this
doesn't seem possible - all possible bit patterns are legal here - and the
only way of knowing that the bits in 'x' are 'wrong', is to know that 'x'
wasn't explicitly initialised. Building upon that, every C compiler I've
ever used issues a warning along the lines of 'x' is not initialised before
it is used - if such a diagnostic is *required* by the c99 standard, then
traps should never occur - if of course you're paying attention to the
warnings your compiler issues!

Also, if it were possible [to always trap in such a situation], it would
require some runtime-checking right - either by the OS, or by the compiled
code itself? And the latter seems to go against a bit of the C ethos as I
understand it, i.e., that the compiler doesn't check [at compile time], nor
does the compiler generate code that checks at runtime - a C compiler
assumes that you should know what you're doing, and be ever diligent when
you use it [the C language]?

If the implementation supports that, say by a "noninit" bit for
every byte in hardware or whatever, it is free to provide it as
an extension.
Lastly, am I right in thinking that TRs would simply /go away/ *if*
compilers/the-std *mandated* that every automatic be initialised -
whether it be a struct union or whatever? Does such a restraint seem
something that a /later/ incarnation of the C standard might impose - and
that the ground is being prepared via the introduction of TRs?

No. You still can generate TRs in other ways.
Apart from that, the embedded people will not thank you for
cluttering their code with unnecessary initialisations to zero
if this costs time or space.
Oh - ok, there's another 'lastly' . what was the rationale/driving-force
behind putting TRs into the standard - does anyone here know, or is this
part a question for comp.std.c?

The rationale is freely available and can be downloaded.

I like to think of floating point numbers and floating point
"exceptions" for certain representations as good examples but
I did not get this from the rationale.


Cheers
Michael
 

Keith Thompson

pemo said:
As far as I understand it, a trap representation [TR] means something like -
an uninitialised automatic variable might /implicitly/ hold a bit-pattern
that, if read, *might* cause a 'trap' (I'm not sure what 'a trap' means
here - anyone?).

The standard doesn't say what a "trap" is. Colloquially, it's often
something like a segmentation fault.

For the most part, the standard uses the term "trap" only in the
context of a "trap representation". Any attempt to use a trap
representation invokes undefined behavior; that could be a seg fault,
an incorrect result, or even quiet "normal" behavior.
I also read into this that, an *initialised* automatic variable, may never
hold a bit pattern that might, when read, cause a 'trap'. I.e., if an auto
is explicitly initialised, it may *never* hold a trap value; no matter how
it's been initialised - right?

An auto object can hold a trap representation if it's been initialized
to a trap representation. This can happen only via undefined
behavior, but of course undefined behavior can do anything.
It seems to me that, for example, a char cannot possibly contain an implicit
set of bits that could cause a TR - or is it that I'm not considering some
possible alphabet/possible-machine where this is feasible [and, if I could -
is this really worth worrying about?]? For example, consider this code: [snip]
As 'x' is read, the c99 standard says that it may contain a TR - however,
given that a char is eight-bits (CHAR_BIT is always 8 bits right?), and that
there's an ASCII code for all 256 possible values of a CHAR_BIT that can
'accidentally' be set in 'x' - x can *never* hold a set of bits that could
cause a trap. Right?

CHAR_BIT is at least 8; it can be larger.

ASCII specifies only 128 values, not 256 -- but that's irrelevant,
since the standard doesn't require ASCII.

There's a specific guarantee that unsigned char has no padding bits
and no trap representations. There's no such guarantee for signed
char, or for plain char if it happens to be signed.
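That unsigned char guarantee is what makes byte-wise inspection of any
object safe, whatever the object happens to hold; a minimal sketch:

  #include <stdio.h>

  int main(void)
  {
      double d = 3.14;
      const unsigned char *p = (const unsigned char *)&d;
      size_t i;

      /* unsigned char has no padding bits and no trap representations,
         so reading every byte of d is always well-defined */
      for (i = 0; i < sizeof d; i++)
          printf("%02x ", p[i]);
      putchar('\n');
      return 0;
  }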
Now, *if* there was such a set of bits, it seems to me that *if* an auto
could *always* contain this set, that that would be *great* - as it would
prevent a lot of bugs like the one in ex1 above. But given my example, this
doesn't seem possible - all possible bit patterns are legal here - and the
only way of knowing that the bits in 'x' are 'wrong', is to know that 'x'
wasn't explicitly initialised. Building upon that, every C compiler I've
ever used issues a warning along the lines of 'x' is not initialised before
it is used - if such a diagnostic is *required* by the c99 standard, then
traps should never occur - if of course you're paying attention to the
warnings your compiler issues!

Warnings for uninitialized variables are not required. Such warnings
cannot be 100% accurate.

The C standard does not require the existence of a trap representation
for any type; it merely allows it for integer types other than
unsigned char, and for floating-point and pointer types. Requiring
some trap representation would be a burden; it would mean that, for
example, a 16-bit unsigned integer could not represent all the values
in the range 0..65535. Requiring some particular action on use of a
trap representation would be an even greater burden; depending on the
CPU, it could require an explicit check on each use of any value.
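To spell out the arithmetic: a 16-bit unsigned type has 2^16 = 65536 bit
patterns, and the range 0..65535 needs every one of them, so reserving even
a single pattern as a trap representation would shrink the representable
range (or force extra bits into the representation).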

[...]
Lastly, am I right in thinking that TRs would simply /go away/ *if*
compilers/the-std *mandated* that every automatic be initialised -
whether it be a struct union or whatever? Does such a restraint seem
something that a /later/ incarnation of the C standard might impose - and
that the ground is being prepared via the introduction of TRs?

No. Given C's ability to do low-level memory access, initializing all
objects could not eliminate trap representations. For example, suppose
there's some trap representation for type double. Even if no
arithmetic operation could produce a trap representation, it could
still be produced by reading a value from a file, or by aliasing a
double object with an array of unsigned char. The possibility of trap
representations cannot be eliminated without crippling C's ability to
do the kind of low-level operations at which it excels.
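A sketch of the file route (the file name is hypothetical; whether the
pattern read in is a trap representation depends entirely on the
implementation's double format):

  #include <stdio.h>

  int main(void)
  {
      double d;
      FILE *fp = fopen("blob.bin", "rb");   /* hypothetical input file */

      if (fp != NULL && fread(&d, sizeof d, 1, fp) == 1) {
          /* d now holds whatever bytes were in the file; nothing says
             that pattern is a valid double, so using d here may be use
             of a trap representation (undefined behaviour). */
          printf("%f\n", d);
      }
      if (fp != NULL)
          fclose(fp);
      return 0;
  }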
Oh - ok, there's another 'lastly' . what was the rationale/driving-force
behind putting TRs into the standard - does anyone here know, or is this
part a question for comp.std.c?

It's simply a way of acknowledging the reality that, on some systems,
certain values of some types are invalid, and can cause unpredictable
behavior if you try to use them. The existence of trap
representations doesn't require any particular behavior by any
implementation or program. It merely indicates certain circumstances
in which the standard cannot, and doesn't attempt to, define the
behavior.

Finally, a personal note. Both your question and my answer (which I
hope is useful to someone) seem to be quite pedantic. Perhaps it's
time for you to give up on the idea that pedantry is somehow a bad
thing, and that your earlier insulting posts on the topic of pedantry
were misplaced.
 

Jordan Abel

pemo said:
As far as I understand it, a trap representation [TR] means something like -
an uninitialised automatic variable might /implicitly/ hold a bit-pattern
that, if read, *might* cause a 'trap' (I'm not sure what 'a trap' means
here - anyone?).

The standard doesn't say what a "trap" is. Colloquially, it's often
something like a segmentation fault.

I'd say it's more likely to be a bus error [SIGBUS on unix systems,
though it could conceivably be lumped in as a segmentation fault on
other systems] or FPE than a segmentation fault.
CHAR_BIT is at least 8; it can be larger.

ASCII specifies only 128 values, not 256 -- but that's irrelevant,
since the standard doesn't require ASCII.

ASCII only specifies 95 values. (control characters are in a different
standard, IIRC)
There's a specific guarantee that unsigned char has no padding bits
and no trap representations. There's no such guarantee for signed
char, or for plain char if it happens to be signed.

Though, there is such a guarantee [if you read between the lines] for
int8_t, if present on your system. [256 possible values (-128..127) and
no padding bits, what other conclusion is there?]
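A compile-time way of phrasing that inference (int8_t is optional, hence
the guard; the negative-array-size trick stands in for a static assertion):

  #include <stdint.h>

  #ifdef INT8_MAX
  /* If int8_t exists it must be exactly 8 bits wide, two's complement,
     with no padding bits, so all 256 patterns are the values -128..127
     and none is left over to be a trap representation. */
  typedef char int8_covers_all_patterns[
      (INT8_MIN == -128 && INT8_MAX == 127) ? 1 : -1];
  #endif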
 

Jack Klein

pemo said:
As far as I understand it, a trap representation [TR] means something like -
an uninitialised automatic variable might /implicitly/ hold a bit-pattern
that, if read, *might* cause a 'trap' (I'm not sure what 'a trap' means
here - anyone?).

The standard doesn't say what a "trap" is. Colloquially, it's often
something like a segmentation fault.

I'd say it's more likely to be a bus error [SIGBUS on unix systems,
though it could conceivably be lumped in as a segmentation fault on
other systems] or FPE than a segmentation fault.
CHAR_BIT is at least 8; it can be larger.

ASCII specifies only 128 values, not 256 -- but that's irrelevant,
since the standard doesn't require ASCII.

ASCII only specifies 95 values. (control characters are in a different
standard, IIRC)

The original version of the ASCII standard did not include lowercase
letters and some of the punctuation marks. The 1967 version added
them. Both versions included the control characters, they have always
been part of the ASCII standard.

GIYF.
 

Barry Schwarz

As far as I understand it, a trap representation [TR] means something like -
an uninitialised automatic variable might /implicitly/ hold a bit-pattern
that, if read, *might* cause a 'trap' (I'm not sure what 'a trap' means
here - anyone?).



I also read into this that, an *initialised* automatic variable, may never
hold a bit pattern that might, when read, cause a 'trap'. I.e., if an auto
is explicitly initialised, it may *never* hold a trap value; no matter how
it's been initialised - right?

An automatic pointer that is initialized with the value returned from
malloc and then freed contains an indeterminate value. Accessing this
value in any way invokes undefined behavior and could be considered a
trap representation.
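A minimal sketch of that case:

  #include <stdlib.h>

  int main(void)
  {
      char *p = malloc(16);   /* p is explicitly initialised here */
      free(p);
      /* After the free, p's value is indeterminate: even a harmless-
         looking read such as `if (p == NULL)' is undefined behaviour,
         so the bit pattern in p is effectively a trap representation. */
      return 0;
  }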
So, firstly I'd like to know whether my interpretations of the standard are
correct here - always given that my phrasing, and choice of words is not
contentious (words always are contentious to a degree of course)??



Now, this second bit builds on my first bit. I think!



It seems to me that, for example, a char cannot possibly contain an implicit
set of bits that could cause a TR - or is it that I'm not considering some
possible alphabet/possible-machine where this is feasible [and, if I could -
is this really worth worrying about?]? For example, consider this code:



// ex1.
char x;
int i = x;



As 'x' is read, the c99 standard says that it may contain a TR - however,
given that a char is eight-bits (CHAR_BIT is always 8 bits right?), and that
there's an ASCII code for all 256 possible values of a CHAR_BIT that can
'accidentally' be set in 'x' - x can *never* hold a set of bits that could
cause a trap. Right?

You'll never know because the mere act of evaluating an uninitialized
variable invokes undefined behavior.
Now, *if* there was such a set of bits, it seems to me that *if* an auto
could *always* contain this set, that that would be *great* - as it would
prevent a lot of bugs like the one in ex1 above. But given my example, this
doesn't seem possible - all possible bit patterns are legal here - and the
only way of knowing that the bits in 'x' are 'wrong', is to know that 'x'
wasn't explicitly initialised. Building upon that, every C compiler I've
ever used issues a warning along the lines of 'x' is not initialised before
it is used - if such a diagnostic is *required* by the c99 standard, then
traps should never occur - if of course you're paying attention to the
warnings your compiler issues!

Consider

  long x = 42;   /* some value */
  double y, z;   /* assume sizeof(long) == sizeof(double) */
  memcpy(&y, &x, sizeof x);
  z = y;

There are no uninitialized variables but y could contain a TR.
Also, if it were possible [to always trap in such a situation], it would
require some runtime-checking right - either by the OS, or by the compiled
code itself? And the latter seems to go against a bit of the C ethos as I
understand it, i.e., that the compiler doesn't check [at compile time], nor
does the compiler generate code that checks at runtime - a C compiler
assumes that you should know what you're doing, and be ever diligent when
you use it [the C language]?



Lastly, am I right in thinking that TRs would simply /go away/ *if*
compilers/the-std *mandated* that every automatic be initialised -
whether it be a struct union or whatever? Does such a restraint seem
something that a /later/ incarnation of the C standard might impose - and
that the ground is being prepared via the introduction of TRs?



Oh - ok, there's another 'lastly' . what was the rationale/driving-force
behind putting TRs into the standard - does anyone here know, or is this
part a question for comp.std.c?


 

Chris Torek

For the most part, the standard uses the term "trap" only in the
context of a "trap representation". Any attempt to use a trap
representation invokes undefined behavior; that could be a seg fault,
an incorrect result, or even quiet "normal" behavior.

I am not sure if the following actually agrees with what the C99
Standard says, but let me try it out, and see if comp.lang.c
readers agree.

C has a "storage model" in which values are represented by
bit patterns stored in regions of memory ("objects"). These
regions are at least partly labeled by types, although in
various situations you can "paste on" a new type-label, or
temporarily obscure one with another.

Simple examples are not really controversial:

  double d = 3.14159265358979322384626433832795;
  unsigned char *cp = (unsigned char *)&d;
  char *s = "";
  size_t i;

  for (i = 0; i < sizeof d; i++) {
      printf("%s%2.2x", s, cp[i]);
      s = " ";
  }
  putchar('\n');

When I turn this into a full program and run it, I get:

18 2d 44 54 fb 21 09 40

on an Intel x86 machine. This shows the underlying representation
of a "double" holding a close approximation to pi.

Now, if you take any actual, digital computer memory (i.e., sample
a RAM chip of some sort) and examine the bits stored therein, they
always have some value. Even on powerup, when the memory is full
of random garbage (and hence ECC errors on systems with ECC), each
bit, taken in isolation, reads back as either a 0 or a 1.

What C99 attempts to acknowledge, using the phrase "trap representation",
is that there are some bit pattern(s) for some type(s) that *do
not* represent a value. That is, every value has some representation,
but not every representation represents "a value".

Many people seem to think this is nonsense, at least until you
remind them that IEEE-style "float" and "double" values can be
set to two different kinds of NaN (Not-a-Number) "values", called
"quiet" and "signaling" NaNs. In IEEE arithmetic, floating
point numbers can be "classified" as:

- zero (+0 and -0 both)
- normal (positive and negative)
- "denorm" (also positive and negative)
- infinity (+inf and -inf both), and
- NaN (signaling and quiet)

and there are rules as to what the results are when operating on
infinities and NaNs (0.0/0.0 = NaN, 1.0/0.0 = +Inf, -1.0/0.0 =
-Inf, normal + +Inf = +Inf, normal - +Inf = -Inf, and so on).
Moreover, upon loading a "NaN" bit pattern, if the type of the NaN
is "signaling NaN" (SNaN), you normally get a floating-point
exception (usually translated to SIGFPE in C). If the exception
is handled or ignored, the SNaN is changed into a quiet one (QNaN),
by setting the "quiet" bit. In other words, the bit pattern
magically changes from one instruction to the next, even if
the instructions are just load-and-store!

[Aside: it is kind of hard to get the SIGFPE for signaling NaNs on
IA32. Among other things, the treatment of NaN bit patterns
apparently varies between different versions of the architecture.
However, the magical "SNaN => QNaN" transformation has caused
interesting "features" in, e.g., Java, in which certain byte-swapping
floating-point operations sometimes produce incorrect results.]
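For the quiet side of this, a small demonstration assuming IEEE 754
doubles (nan() returns a quiet NaN where one exists; the signalling case
is left out precisely because, as noted above, its handling varies between
CPUs):

  #include <math.h>
  #include <stdio.h>

  int main(void)
  {
      double q = nan("");                  /* quiet NaN, if supported */
      printf("isnan(q) = %d\n", isnan(q));
      printf("q == q   = %d\n", q == q);   /* 0: a NaN never compares
                                              equal, even to itself */
      return 0;
  }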

Now, if floating-point data has representations that mean "not a
value" or "trap when you access this value", why cannot integer
representations have the same feature? The C99 standard acknowledges
the possibility, and generalizes it, giving what I think is the
real meaning for "trap representation":

A trap representation is any bit pattern that, when stored in an
object of some particular type, does not represent a value of that
type.

C99 says that "unsigned char" has no trap representations. In
other words, no matter what bit pattern is found in an "unsigned
char", it represents some value. Moreover, additional restrictions
on unsigned char make each <value,representation> pair unique, i.e.,
every value has just one unique representation, and -- because there
are no trap representations -- every representation has just one
unique value.

This is not true for other types (including on real machines, where
some have many ways to represent 0.0, and others have many different
bit patterns that are all "NaN"s). On typical "real" machines,
only floating-point bit patterns have this property, but C99 allows
all data types -- except of course "unsigned char" -- to have them.
If some object has a "valid" bit pattern stored in it, it has a
value; but if it has an "invalid" bit pattern stored in it, it has
a C99 "trap representation". Even if printf()ing will print
something sensible, it can still be labeled a "trap representation"
just by claiming it is an "invalid" value.
 

Keith Thompson

Chris Torek said:
I am not sure if the following actually agrees with what the C99
Standard says, but let me try it out, and see if comp.lang.c
readers agree.
[big snip]

I'm not going to quote most of what you wrote, because it would be
redundant to do so just for the sake of saying that I agree with all
of it. But to expand on one point:

[...]
If some object has a "valid" bit pattern stored in it, it has a
value; but if it has an "invalid" bit pattern stored in it, it has
a C99 "trap representation". Even if printf()ing will print
something sensible, it can still be labeled a "trap representation"
just by claiming it is an "invalid" value.

Suppose an implementation has 16-bit 2's-complement ints, with
INT_MIN == -32768 and INT_MAX == +32767. Given those values there are
no trap representations, because all 65536 possible 16-bit
representations correspond to valid values.

Now suppose a new version of the implementation changes INT_MIN to
-32767, declares in its documentation that the bit pattern that
formerly represented the value -32768 is a trap representation, *and
makes no other changes*. The new version is still conforming (though
perhaps silly). It happens that code that assigns the value -32768 to
an int variable doesn't trap, printing such a value results in
"-32768", and everything works consistently. The only difference is
that there's a trap representation, and any program that uses it
invokes undefined behavior -- which always manifests itself in this
benign manner.

Given that -32768 (or rather, the representation that formerly
corresponded to the value -32768) is a trap representation, the
implementation is free to do anything it likes with that
representation. No actual trap is ever required.
 

Walter Roberson

Chris Torek said:
C has a "storage model" in which values are represented by
bit patterns stored in regions of memory ("objects").
What C99 attempts to acknowledge, using the phrase "trap representation",
is that there are some bit pattern(s) for some type(s) that *do
not* represent a value. That is, every value has some representation,
but not every representation represents "a value".
Many people seem to think this is nonsense,

But not the ones who had to deal with loads of 0xdeadbeef ;-)
at least until you
remind them that IEEE-style "float" and "double" values can be
set to two different kinds of NaN (Not-a-Number) "values", called
"quiet" and "signaling" NaNs.
Moreover, upon loading a "NaN" bit pattern, if the type of the NaN
is "signaling NaN" (SNaN), you normally get a floating-point
exception (usually translated to SIGFPE in C).

Earlier this week I was reading a paper on the lesser known
properties of IEEE 754. If my memory serves me, signaling NANs
do not signal when loaded and stored, but -do- signal when
used in any arithmetic operation (unless, as you noted, the
exception is dealt with one way or another.)
 

Chris Torek

Earlier this week I was reading a paper on the lesser known
properties of IEEE 754. If my memory serves me, signaling NANs
do not signal when loaded and stored, but -do- signal when
used in any arithmetic operation (unless, as you noted, the
exception is dealt with one way or another.)

Well, in practice it is even more complicated than this, because
different CPUs handle them differently. Sufficiently old x86 CPUs
apparently never signal at all, for instance. The internal
implementation on the x86 (using an FPU stack that always keeps
the number in 80-bit format) makes everything tricky.

Some CPUs do not bother to implement some (or sometimes even
all) IEEE arithmetic in hardware, too, and rely on the operating
system to simulate the correct behavior. This is another fertile
area for, ah, "system disagreements". :)
 
