What should "0.0 ? 1 : 0" return?

Seungbeom Kim

Because of the vagaries of how floating point is represented,
I believe it is possible in principle for 0.0 to be seen as
true (that is, != 0) in a conforming implementation, if that
implementation (a) does not have an exact FP representation for zero,
and (b) has implementation-defined rounding rules which are defined
suitably. AFAIK both (a) and (b) may be true in a conforming
implementation, that is, I don't know of any requirement that
prevents the possibility of either (or of both together).

Even if it's possible that there isn't an exact representation for zero
and that '0.0' has to be represented as a non-zero, won't '0.0 == 0'
still be true because the right-hand side has to be converted to the
same non-zero FP value first? Then, the value of (bool)0.0, assuming
it is defined as '0.0 != 0', should be false as well.
In that case, the non-zero value may behave effectively as a substitute
for zero, or an "acting" zero, except that it might not print out as
an exact zero (just as an acting president doesn't look like the real
president, though having the same power :D).
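
A minimal test program for the three expressions discussed above (just a
sketch, assuming a hosted implementation with <stdio.h> and <stdbool.h>;
ordinary implementations print 0, 1 and 0):

#include <stdio.h>
#include <stdbool.h>

int main(void)
{
    printf("0.0 ? 1 : 0 -> %d\n", 0.0 ? 1 : 0);    /* expected 0 */
    printf("0.0 == 0    -> %d\n", 0.0 == 0);       /* expected 1 */
    printf("(bool)0.0   -> %d\n", (int)(bool)0.0); /* expected 0 */
    return 0;
}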
 
Ben Bacarisse

Seungbeom Kim said:
Even if it's possible that there isn't an exact representation for zero
and that '0.0' has to be represented as a non-zero, won't '0.0 == 0'
still be true because the right-hand side has to be converted to the
same non-zero FP value first? Then, the value of (bool)0.0, assuming
it is defined as '0.0 != 0', should be false as well.
In that case, the non-zero value may behave effectively as a substitute
for zero, or an "acting" zero, except that it might not print out as
an exact zero (just as an acting president doesn't look like the real
president, though having the same power :D).

The test involved in a conditional expression is whether the value of
the first expression "compares equal to zero". It's not absolutely
clear to me whether this is intended to mean "exp == 0" (i.e. to
reference the semantics of the == operator), but I can't think of any
better meaning.

That aside, 0.0 == 0 is interesting. On a machine that has an exact
floating zero, 0 must convert to it, but 0.0 need not (surely this is
unintended?). On one that does not have an exact floating zero, very
similar wording is used in 6.3.1.4 p2 (for the conversion) as is used in
6.4.4.2 p3 (for the representation of the constant):

"the result is either the nearest representable value, or the larger
or smaller representable value immediately adjacent to the nearest
representable value, chosen in an implementation-defined manner."

"the result is either the nearest higher or nearest lower
representable value, chosen in an implementation-defined manner"

but there's nothing to say that the implementation must choose in the
same way in both cases. A perverse implementation could "round" 0.0 up
and convert 0 by going the other way!
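
To make the "perverse implementation" scenario concrete, here is a sketch
(on any ordinary implementation the comparison prints 1; the point above is
only that the two implementation-defined choices are not required to agree):

#include <stdio.h>

int main(void)
{
    double from_constant = 0.0; /* constant rounded per 6.4.4.2 p3   */
    double from_int      = 0;   /* conversion rounded per 6.3.1.4 p2 */

    printf("%d\n", from_constant == from_int);
    return 0;
}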
 
Phil Carmody

Tim Rentsch said:
If you read the footnote to 5.2.4.2.2 p1, and also 5.2.4.2.2 p3,
I think you'll agree that the condition you describe need not
be an actual representable value in a particular conforming
implementation. The value does exist in the model, but the
model may not reflect what the implementation actually uses,
and even if it does, the implementation might not provide FP
numbers with f(1) == 0. Needless to say, I did consult this
section (and these paragraphs) before making my earlier
comments. So I still think it's possible for an implementation
to not have zero as a representable FP value.

I don't see how there can be any meaningful values for 5.2.4.2.2 p13
if zero is non-zero. Zero must be less than epsilon, as epsilon must
be greater than it. And zero must be expressible.

And zero must compare equal to negative zero. That can only be true
if zero is zero.

Anyway, logic aside, in this particular case does F.1 apply?
"""
An implementation that defines __STDC_IEC_559__ shall conform to the
specifications in this annex.
"""
Which, if it applies, makes them bang to rights.
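
One way to ask a given implementation whether it claims Annex F conformance
is to test the macro directly (a sketch; IEC 60559 formats do include an
exact zero and a negative zero):

#include <stdio.h>

int main(void)
{
#ifdef __STDC_IEC_559__
    puts("__STDC_IEC_559__ defined: Annex F applies, so exact zero");
    puts("(and negative zero) must be representable.");
#else
    puts("__STDC_IEC_559__ not defined: Annex F conformance not claimed.");
#endif
    return 0;
}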

Phil
--
I'd argue that there is much evidence for the existence of a God.
Pics or it didn't happen.
-- Tom (/. uid 822)
 
Martin Shobe

Tim Rentsch said:
If you read the footnote to 5.2.4.2.2 p1, and also 5.2.4.2.2 p3,
I think you'll agree that the condition you describe need not
be an actual representable value in a particular conforming
implementation. The value does exist in the model, but the
model may not reflect what the implementation actually uses,
and even if it does, the implementation might not provide FP
numbers with f(1) == 0. Needless to say, I did consult this
section (and these paragraphs) before making my earlier
comments. So I still think it's possible for an implementation
to not have zero as a representable FP value.

What about 6.7.9 paragraph 10?

10 If an object that has automatic storage duration is not initialized
explicitly, its value is indeterminate. If an object that has static or
thread storage duration is not initialized explicitly, then:
— if it has pointer type, it is initialized to a null pointer;
— if it has arithmetic type, it is initialized to (positive or unsigned) zero;
— if it is an aggregate, every member is initialized (recursively) according
  to these rules, and any padding is initialized to zero bits;
— if it is a union, the first named member is initialized (recursively)
  according to these rules, and any padding is initialized to zero bits;

How could a real type be initialized to positive or unsigned zero if
there isn't one?
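
For concreteness, the case that bullet covers is just this (a sketch;
nothing beyond 6.7.9 p10 is assumed):

#include <stdio.h>

static double d;   /* not explicitly initialized:
                      "(positive or unsigned) zero" per 6.7.9 p10 */

int main(void)
{
    printf("%d\n", d == 0.0);  /* expected to print 1 */
    return 0;
}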

Martin Shobe
 
Ralf Damaschke

Tim Rentsch said:
If you read the footnote to 5.2.4.2.2 p1, and also 5.2.4.2.2 p3,
I think you'll agree that the condition you describe need not
be an actual representable value in a particular conforming
implementation.

Sorry, I won't. The only weak point I see is that the standard
uses the term "model" which admittedly might mean that the
implementation may use completely different approaches. But my
preferred interpretation is that "model" means that the
representation is not broken down to bits (such that e.g. digits
might be represented by BCD) and that there must be a homomorphism
to the implementation.

The footnote says that the floating-point _arithmetic_ [emphasis by
me] may differ from the model; that does not affect the definition
of a model's floating-point number in p2.

P3 only introduces additional FP numbers (and only for value != 0).

-- Ralf
 
Ben Bacarisse

Bill Leary said:
"Ben Bacarisse" wrote in message


I encountered just this case quite a few years ago. K&R compiler, so
not as relevant to this exchange as it could be. It converted "0.0"
to one thing, which wasn't all bits zero. But converted "0," used in
an expression with a float, to all bits zero.

I think you quoted the wrong part. That looks like an example of
something I wrote a few paragraphs earlier:

| On a machine that has an exact floating zero, 0 must convert to it,
| but 0.0 need not (surely this is unintended?)."

<snip>
 
Ike Naar

"Ben Bacarisse" wrote in message


I encountered just this case quite a few years ago. K&R compiler, so not as
relevant to this exchange as it could be. It converted "0.0" to one thing,
which wasn't all bits zero. But converted "0," used in an expression with a
float, to all bits zero.

Thus:
float wocka = 0.0;
if (wocka == 0.0)

worked. But:
float wocka = 0;
if (wocka == 0.0)

didn't.

Could that have been caused by float-to-double conversion
in the if condition?

What would that compiler do with

float wocka = 0;
if (wocka == 0.0f)

or

double wocka = 0;
if (wocka == 0.0)

?
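
Here is a sketch that separates the two effects in question, namely which
zero the initializer produces and whether a float-to-double promotion takes
place in the comparison (on current implementations all four lines print 1):

#include <stdio.h>

int main(void)
{
    float  f_const = 0.0;  /* float from the floating constant  */
    float  f_int   = 0;    /* float from the converted integer  */
    double d_int   = 0;    /* double from the converted integer */

    printf("%d\n", f_const == 0.0);  /* float promoted to double  */
    printf("%d\n", f_int   == 0.0);  /* float promoted to double  */
    printf("%d\n", f_int   == 0.0f); /* no promotion involved     */
    printf("%d\n", d_int   == 0.0);  /* double compared to double */
    return 0;
}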
 
Ike Naar

Ben Bacarisse said:
That aside, 0.0 == 0 is interesting. On a machine that has an exact
floating zero, 0 must convert to it, but 0.0 need not (surely this is
unintended?). On one that does not have an exact floating zero, very
similar wording is used in 6.3.1.4 p2 (for the conversion) as is used in
6.4.4.2 p3 (for the representation of the constant):

"the result is either the nearest representable value, or the larger
or smaller representable value immediately adjacent to the nearest
representable value, chosen in an implementation-defined manner."

"the result is either the nearest higher or nearest lower
representable value, chosen in an implementation-defined manner"

but there's nothing to say that the implementation must choose in the
same way in both cases. A perverse implementation could "round" 0.0 up
and convert 0 by going the other way!

That also raises the question: in

static double d;

what would be the initial value of d?
0 or 0.0 ?
 
Ben Bacarisse

Ike Naar said:
That also raises the question: in

static double d;

what would be the initial value of d?
0 or 0.0 ?

I think it must be floating zero. My remarks about machines with no
exact zero are somewhat hypothetical. I am not yet convinced that such
a machine can support a conforming C implementation, but people whose
opinions I respect currently disagree.

As someone else has already pointed out (sorry, I don't recall who right
now), the rules for default initialisation of objects with static storage
duration state that arithmetic types are initialised to "(positive or
unsigned) zero;". I don't think that can mean anything but exact zero
(whatever that really means).
 
Robert Miles

Yes, my mistake (but the tested code is the initial one and has the
wrong value)

I have reported the issue on the "Visual C++ Language" forums (I can't
understand why they ditched the microsoft.* groups for this piece of
webforum junk by the way)

Partly because their connection to Google Groups allowed a large
inflow of spam and posts saying that other operating systems are
better.

Partly because they are moving away from supporting newsgroups at
all.
 
Tim Rentsch

Martin Shobe said:
What about 6.7.9 paragraph 10?

10 If an object that has automatic storage duration is not initialized
explicitly, its value is indeterminate. If an object that has static or
thread storage duration is not initialized explicitly, then:
* if it has pointer type, it is initialized to a null pointer;
* if it has arithmetic type, it is initialized to (positive or unsigned) zero;
* if it is an aggregate, every member is initialized (recursively) according
  to these rules, and any padding is initialized to zero bits;
* if it is a union, the first named member is initialized (recursively)
  according to these rules, and any padding is initialized to zero bits;

How could a real type be initialized to positive or unsigned zero if
there isn't one?

I take this to mean the initialization is done the same as saying
'={0}' for the initializer, except also saying the result will
never be (a representation for) negative zero. Of course it's
possible that it indicates a requirement elsewhere, and it's also
possible that it indicates a subconscious assumption that is not
actually a requirement, but I don't think this passage is meant to
/impose/ a requirement that zero be representable. So basically
I don't draw any conclusions just from this paragraph one way or
the other.
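
In other words, that reading treats the implicit initialization as if it
had been spelled with '={0}', something like this sketch (variable names
are mine, nothing more is assumed):

#include <stdio.h>

static double d;        /* implicit static initialization, 6.7.9 p10 */
static double e = {0};  /* the explicit '={0}' form mentioned above  */

int main(void)
{
    printf("%d\n", d == e);  /* 1 under that reading */
    return 0;
}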
 
Tim Rentsch

Ralf Damaschke said:
Tim Rentsch said:
If you read the footnote to 5.2.4.2.2 p1, and also 5.2.4.2.2 p3,
I think you'll agree that the condition you describe need not
be an actual representable value in a particular conforming
implementation.

Sorry, I won't. The only weak point I see is that the standard
uses the term "model" which admittedly might mean that the
implementation may use completely different approaches. But my
preferred interpretation is that "model" means that the
representation is not broken down to bits (such that e.g. digits
might be represented by BCD) and that there must be a homomorphism
to the implementation.

The footnote says that the floating-point _arithmetic_ [emphasis by
me] may differ from the model; that does not affect the definition
of a model's floating-point number in p2.

P3 only introduces additional FP numbers (and only for value != 0).

The point of mentioning paragraph 3 is that it only uses the word
'may'; it doesn't state any actual requirements that any
particular numbers, or forms of numbers, be representable.

The point of mentioning the footnote is that it gives a fairly
general license for implementations to use different schemes. For
example, I think it would be conforming to implement "floating
point" numbers using scaled, fixed-point arithmetic and use
hundreds or thousands of bits in each 'float' or 'double'.
Focusing on the word 'arithmetic' in the footnote seems like a
red herring to me - a change in how numbers are represented will
naturally lead to a change in how arithmetic works, but if
representation is kept constant then we wouldn't expect a big
variation in how arithmetic works.

For paragraph 1, I think the right word to focus on is not
'model' but 'characteristics' - "The /characteristics/ of
floating types are defined in terms of a model". The point of
this abstraction is to be able to talk about aspects of floating
point numbers without knowing anything about how they will be
represented. Indeed, if floating point numbers are required to
be represented in terms of the model of paragraphs 1 and 2, then
all that is needed is the parameters listed in paragraph 1 -
everything else can be derived from these (plus a little more
information like infinities, NaN's, unnormalized/subnormals, etc,
but that's fairly negligible). The point of the model is to be
able to talk about how we can expect floating-point numbers to
behave /without/ knowing specifically how they are represented.
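
For reference, the model of 5.2.4.2.2 p2 characterizes a floating-point
number x by a sign s, base b, exponent e, precision p and digits f_k
(paraphrasing, not quoting):

    x = s \cdot b^{e} \sum_{k=1}^{p} f_k \, b^{-k},
        \quad 0 \le f_k < b, \quad e_{\min} \le e \le e_{\max}

An exact zero in this model is the case where every f_k is 0; the
"f(1) == 0" condition mentioned earlier in the thread is the f_1 = 0 that
paragraph 3 uses when describing subnormal and unnormalized numbers.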

Finally, even if floating-point numbers are known to
be represented using the form shown in paragraph 2, I don't see
anything in the standard that requires any particular values be
representable; they just have to satisfy the various macro
definitions laid out in 5.2.4.2.2. I believe all of these can
be satisfied using a representation of the form shown in
paragraph 2, but without there being a representation that
is exactly zero.
 
Tim Rentsch

Phil Carmody said:
I don't see how there can be any meaningful values for 5.2.4.2.2 p13
if zero is non-zero.

I don't see what you're getting at here. Taking 'double' as
the canonical floating-point type, are you talking about
DBL_EPSILON, DBL_MIN, or DBL_TRUE_MIN? I don't see any
problem with any one of these, under a working assumption of
no representation for zero. What am I missing?

Zero must be less than epsilon, as epsilon must
be greater than it.

Certainly zero must be less than any positive number. That
doesn't mean zero is representable.

And zero must be expressible.

Isn't that exactly the question we are trying to answer, ie,
whether floating point types must have a representation for
zero? I don't see that the Standard's requirements imply
that.

And zero must compare equal to negative zero. That can only be true
if zero is zero.

Yes, if there is a zero, and if there is a negative zero. I
think the Standard is pretty clear that an implementation
need not have a floating-point negative zero.

Anyway, logic aside, in this particular case does F.1 apply?
"""
An implementation that defines __STDC_IEC_559__ shall conform to the
specifications in this annex.
"""
Which, if it applies, makes them bang to rights.

I'm pretty sure you're right that supporting IEEE floating-point
means there will be a representation for zero. In fact I would be
surprised (though not astonished) if any actual implementation
didn't have a floating-point representation for zero. My question
is, do the Standard's requirements imply that the floating-point
types /must/ have a representation for zero? So far I still think
the answer is no (at least in the hypothetical world of DS9000's,
etc).
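
For what it's worth, the <float.h> quantities that came up above can be
inspected directly (a sketch; DBL_TRUE_MIN is a C11 addition, hence the
guard):

#include <stdio.h>
#include <float.h>

int main(void)
{
    printf("DBL_EPSILON   = %g\n", DBL_EPSILON);
    printf("DBL_MIN       = %g\n", DBL_MIN);
#ifdef DBL_TRUE_MIN
    printf("DBL_TRUE_MIN  = %g\n", DBL_TRUE_MIN);
#endif
    printf("0.0 < DBL_MIN -> %d\n", 0.0 < DBL_MIN); /* zero is below every
                                                       positive representable
                                                       value */
    return 0;
}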
 
