assert( x > 0.0 && 1 && x == 0.0 ) holding


Daniel Vallstrom

I'm having problems with inconsistent floating point behavior
resulting in e.g.

assert( x > 0.0 && putchar('\n') && x == 0.0 );

holding. (Actually, my problem is the dual one where I get
failed assertions for assertions that at first thought ought
to hold, but that's not important.)

At the end is a full program containing the above seemingly
inconsistent assertion. On my x86 using gcc the assertion
doesn't fail when x is of type double.

AFAIK, what is happening is that at first x temporarily resides
in an 80-bit register with higher precision than the normal
64 bits. Hence x is greater than 0.0 at first even though
the "real" 64-bit value is 0.0. If you cast the value to long
double and print it you can see that it indeed is slightly
larger than 0.0 at first, but then becomes 0.0.

Is not failing "assert( x > 0.0 && 1 && x == 0.0 );"
acceptable?

What is the best workaround to the problem? One possibility is
using volatile intermediate variables, when needed, like this:

volatile double y = x;
assert( y > 0.0 && putchar('\n') && y == 0.0 );

Is that the best solution?


Daniel Vallstrom



/* Tests weird inconsistent floating point behavior resulting in
   something like "assert( x > 0.0 && 1 && x == 0.0 );" holding!
   Daniel Vallstrom, 041030.
   Compile with e.g.: gcc -std=c99 -pedantic -Wall -O -lm fpbug.c
*/

#include <stdio.h>
#include <math.h>
#include <assert.h>

int main( void )
{
    double x = nextafter( 0.0, -1.0 ) * nextafter( 0.0, -1.0 );

    /* The putchar-conjunct below is just something arbitrary in
       order to clear the x-register as a side-effect. At least
       that's what I guess is happening. */
    assert( x > 0.0 && putchar('\n') && x == 0.0 );

    return 0;
}
 

Elliott Back

Daniel said:
I'm having problems with inconsistent floating point behavior
resulting in e.g.

assert( x > 0.0 && putchar('\n') && x == 0.0 );

Testing whether a floating point number is equal to zero is an
indefinite thing, due to roundoff, truncation, and other FP errors. You
can never really be sure if it's equal, although the IEEE specification
will define 0 to be within a certain FP range.

Is there some other way you can structure the logic?
 

Chris Croughton

Is not failing "assert( x > 0.0 && 1 && x == 0.0 );"
acceptable?

It all depends (as the late Professor Joad used to say) on what you mean
by equality of floating point numbers. Or rather, on what your compiler
and processor think it means. Can you get your compiler to generate
assembler code and check? It may be that it is looking for "absolute
value less than a very small amount" for equality, and the value is not
quite zero. Or perhaps your multiply which generated it resulted in
underflow and a NAN ("Not A Number", a special value indicating that
something odd happened) which is evaluated as 'zero' for equality but
still has a sign.
What is the best workaround to the problem? One possibility is
using volatile intermediate variables, when needed, like this:

volatile double y = x;
assert( y > 0.0 && putchar('\n') && y == 0.0 );

Is that the best solution?

I don't understand what you are trying to achieve by having a condition
which should always evaluate as false like that. Or are you expecting y
(or x) to change value in the short time between testing y > 0 and y ==
0? The putchar() in the middle just confuses things more. Is this
something you found in a larger program? The nextafter() function call
would imply that.

Try putting in printfs tracing the value of x, with ridiculously high
precision:

printf("%-30.30g\n", x);

That should show whether the value is actually zero (with no rounding)
or not.

(I can't get it to fail even with very small values of x on my gcc
2.95.4 Debian GNU/Linux Duron 1200 system...)

Chris C
 

Daniel Vallstrom

Elliott Back said:
Testing whether a floating point number is equal to zero is an
indefinite thing, due to roundoff, truncation, and other FP errors. You
can never really be sure if it's equal, although the IEEE specification
will define 0 to be within a certain FP range.

I don't think that's the issue here. I don't mind the limited
precision. My problem is the flip-flop behavior, showing up as x
being one thing at first, then suddenly another thing without
anything happening in between. If it's the 0.0 you object to, you
can replace that with a y != 0.0. I'll add such an example at
the end of the post.

Is there some other way you can structure the logic?

Don't know what you mean by this. If you mean weakening the
assertions (in the real program) I would never do that. The
volatile solution seems better.


Daniel Vallstrom



/* Tests weird inconsistent floating point behavior resulting in
   something like "assert( x > y && 1 && x == y );" holding!
   Daniel Vallstrom, 041030.
   Compile with e.g.: gcc -std=c99 -pedantic -Wall -O -lm fpbug2.c
*/

#include <stdio.h>
#include <math.h>
#include <assert.h>

int main( void )
{
    double y = nextafter( 0.0, 1.0 );
    double x = y + 0x1.0p-2 * y;

    /* The putchar-conjunct below is just something arbitrary in
       order to clear the x-register as a side-effect. At least
       that's what I guess is happening. */
    assert( x > y && putchar('\n') && x == y );

    return 0;
}
 

Christian Bau

Elliott Back said:
Testing whether a floating point number is equal to zero is an
indefinite thing, due to roundoff, truncation, and other FP errors. You
can never really be sure if it's equal, although the IEEE specification
will define 0 to be within a certain FP range.

This is just stupid.

Testing whether a floating point number is equal to zero is a perfectly
reasonable thing to do on any C implementation that conforms to IEEE 754
- unfortunately, the x86 stack-based floating-point arithmetic is
absolutely braindamaged (SSE2 has fixed this, so the problem will go
away in the next few years) and does not conform to IEEE 754 in the
mode in which it is most commonly used.
 

Christian Bau

Chris Croughton said:
It all depends (as the late Professor Joad used to say) on what you mean
by equality of floating point numbers. Or rather, on what your compiler
and processor think it means. Can you get your compiler to generate
assembler code and check? It may be that it is looking for "absolute
value less than a very small amount" for equality, and the value is not
quite zero. Or perhaps your multiply which generated it resulted in
underflow and a NAN ("Not A Number", a special value indicating that
something odd happened) which is evaluated as 'zero' for equality but
still has a sign.

In both cases the implementation would not be conforming to IEEE 754
(which it isn't anyway, as the assert proves). In an implementation that
conforms to IEEE 754, x compares equal to zero if and only if x is
either a positive zero or x is a negative zero, no "less than some very
small amount" bullshit. And a NaN definitely doesn't compare equal to
anything, not even equal to itself, and most definitely not equal to
zero, and it also doesn't compare greater than anything, including zero.

I don't understand what you are trying to achieve by having a condition
which should always evaluate as false like that. Or are you expecting y
(or x) to change value in the short time between testing y > 0 and y ==
0? The putchar() in the middle just confuses things more. Is this
something you found in a larger program? The nextafter() function call
would imply that.

Looks like that is exactly what is happening: Either the value x is at
the same time greater than zero and equal to zero, or it changes between
the evaluation of (x > 0.0) and (x == 0.0). The first must not happen on
any implementation that conforms to IEEE 754, the second must not happen
on any Standard C implementation.
 

Tim Rentsch

I'm having problems with inconsistent floating point behavior
resulting in e.g.

assert( x > 0.0 && putchar('\n') && x == 0.0 );

holding. (Actually, my problem is the dual one where I get
failed assertions for assertions that at first thought ought
to hold, but that's not important.)

At the end is a full program containing the above seemingly
inconsistent assertion. On my x86 using gcc the assertion
doesn't fail when x is of type double.

AFAIK, what is happening is that at first x temporarily resides
in an 80-bit register with higher precision than the normal
64 bits. Hence x is greater than 0.0 at first even though
the "real" 64-bit value is 0.0. If you cast the value to long
double and print it you can see that it indeed is slightly
larger than 0.0 at first, but then becomes 0.0.

Is not failing "assert( x > 0.0 && 1 && x == 0.0 );"
acceptable?

What is the best workaround to the problem? One possibility is
using volatile intermediate variables, when needed, like this:

volatile double y = x;
assert( y > 0.0 && putchar('\n') && y == 0.0 );

Is that the best solution?

[program snipped]

It looks to me like your analysis is right, including what to do about
resolving it. One minor suggestion: you might try writing the assert
without an explicit temporary, thusly:

assert( *(volatile double *)&x > 0.0 && putchar('\n')
        && *(volatile double *)&x == 0.0 );

which has a somewhat nicer feel in the context of the assert usage.
If this has the same behavior as the assert code with the explicit
temporary (and I think it should) you might want to use this form
instead, perhaps with a suitable CPP macro for the 'volatile' access.
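
For instance, such a macro might look roughly like this (the name
VOLATILE_DOUBLE is only illustrative, not something from the thread):

/* Hypothetical helper: force a read of a double object through a
   volatile lvalue, discarding any extra precision held in a register. */
#define VOLATILE_DOUBLE(x) (*(volatile double *)&(x))

assert( VOLATILE_DOUBLE(x) > 0.0 && putchar('\n')
        && VOLATILE_DOUBLE(x) == 0.0 );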

Alternatively, you might use 'nextafter()' to compute the
smallest non-zero double, and test

assert( x >= smallest_nonzero_double && putchar('\n') && x == 0.0 );

This idea might give you another way of thinking about the problem
you're trying to solve. Floating point numbers are tricky; if you're
going to be testing them in assert's you probably want to think about
what conditions you're testing very, very carefully. Not that I
think you don't know that already. :)
 

Daniel Vallstrom

To clarify, the real construction I'm having problems with
looks something like this:

assert( y >= f(...) );

The assertion is correct (because previous calculations
effectively have yielded just that) but erroneously fails
because the f value gets extra precision yielding it
strictly larger than y.
At the end is a full program containing the above seemingly
inconsistent assertion. On my x86 using gcc the assertion
doesn't fail when x is of type double.

AFAIK, what is happening is that at first x temporarily resides
in an 80-bit register with higher precision than the normal
64 bits. Hence x is greater than 0.0 at first even though
the "real" 64-bit value is 0.0. If you cast the value to long
double and print it you can see that it indeed is slightly
larger than 0.0 at first, but then becomes 0.0.

Is not failing "assert( x > 0.0 && 1 && x == 0.0 );"
acceptable?

What is the best workaround to the problem? One possibility is
using volatile intermediate variables, when needed, like this:

volatile double y = x;
assert( y > 0.0 && putchar('\n') && y == 0.0 );

Is that the best solution?

[program snipped]

It looks to me like your analysis is right, including what to do about
resolving it.

Good to know. I'll go with the volatile solution.
One minor suggestion: you might try writing the assert
without an explicit temporary, thusly:

assert( *(volatile double *)&x > 0.0 && putchar('\n')
        && *(volatile double *)&x == 0.0 );

which has a somewhat nicer feel in the context of the assert usage.
If this has the same behavior as the assert code with the explicit
temporary (and I think it should) you might want to use this form
instead, perhaps with a suitable CPP macro for the 'volatile' access.

I thought about a volatile cast (only the simple
(volatile double) though; yours seems safer) but felt
very unsure of the meaning of a volatile cast. Having now
glanced through the standard I'm still unsure (even if it
might work on a test-example).

However, while looking through the standard, I found this
(C99, 6.5.4 Cast operators, p81):


86) If the value of the expression is represented with
greater precision or range than required by the type
named by the cast (6.3.1.8), then the cast specifies a
conversion even if the type of the expression is
the same as the named type.

Hence, a simple (double)x should work. But I tried that
before posting the original post and it didn't work! I.e.

assert( (double)x > 0.0 && putchar('\n') && x == 0.0 );

still doesn't fail. Looks like a bug in gcc. I'll add a full
program showing the bug at the end. To be fair, gcc doesn't
claim to be a C99 compiler but perhaps earlier standards also
guaranteed footnote 86) to hold? It sounds sensible. (The use
of nextafter is unimportant and can be coded in pre-C99.)
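
For instance, assuming IEEE 754 doubles with denormals (an assumption,
not something from the original post), the same starting value can be
obtained in pre-C99 code like this:

#include <float.h>

/* Smallest positive denormal double: what nextafter( 0.0, 1.0 )
   returns on an IEEE 754 implementation. */
double tiny = DBL_MIN * DBL_EPSILON;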

Alternatively, you might use 'nextafter()' to compute the
smallest non-zero double, and test

assert( x >= smallest_nonzero_double && putchar('\n') && x == 0.0 );

This is unacceptable since it would make the assertion
strictly weaker (the real assertion in the real program
that is; the above is strictly stronger).

This idea might give you another way of thinking about the problem
you're trying to solve. Floating point numbers are tricky; if you're
going to be testing them in assert's you probably want to think about
what conditions you're testing very, very carefully. Not that I
think you don't know that already. :)

I'm trying but gcc and the hardware failing me is not being helpful ;p


Daniel Vallstrom


/* Tests weird inconsistent floating point behavior resulting in something
   like "assert( (double)x > 0.0 && 1 && x == 0.0 );" holding! At least
   on my x86 the assertion doesn't fail using gcc. This shows a bug in gcc.
   Daniel Vallstrom, 041031.
   Compile with e.g.: gcc -std=c99 -pedantic -Wall -O -lm fpbug3.c
*/

#include <stdio.h>
#include <math.h>
#include <assert.h>

int main( void )
{
    double x = 0x1.0p-2 * nextafter( 0.0, 1.0 );

    /* The putchar-conjunct below is just something arbitrary in
       order to clear the x-register as a side-effect. At least
       that's what I guess is happening. */
    assert( (double)x > 0.0 && putchar('\n') && (double)x == 0.0 );

    return 0;
}
 

Chris Torek

To clarify, the real construction I'm having problems with
looks something like this:

assert( y >= f(...) );

The assertion is correct (because previous calculations
effectively have yielded just that) but erroneously fails
because the f value gets extra precision yielding it
strictly larger than y.

[massive snippage]
I thought about a volatile cast (only the simple
(volatile double) though; yours seems safer) but felt
very unsure of the meaning of a volatile cast.

The meaning is up to the implementation (but this is true even for
ordinary "volatile double" variables). Using a separate volatile
double variable will work on today's gcc variants, at least, and
in my opinion is a reasonable thing to ask of them.
However, while looking through the standard, I found [...]
Hence, a simple (double)x should work. But I tried that
before posting the original post and it didn't work!
... Looks like a bug in gcc. ... gcc doesn't
claim to be a C99 compiler but perhaps earlier standards also
guaranteed footnote 86) to hold?

Yes, this is "supposed to work" even in C89. GCC has an option,
"-ffloat-store", that gets it closer to conformance in both cases.
The problem is that you do not want to use -ffloat-store, not
only because it apparently does not always work[%], but also because
it has horrible effects on performance.

Ultimately, the problem boils down to "the x86 CPU does not implement
IEEE single- and double-precision floating point quickly, but rather
only IEEE-extended".[%%] With "-ffloat-store", gcc gives you a choice
between "goes fast and doesn't work" or "crawls so slowly as to be
useless, but works". :) The "volatile double" trick is a sort
of compromise that will probably get you there.

[% I am not sure what it is that is supposed to not-work when
using -ffloat-store.]

[%% There is a precision control field in the FPU control word,
but it affects only the mantissa, not the exponent. Depending on
your particular numbers, manipulating this field may or may not
suffice.]
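
[For what it's worth, on glibc/x86 that field can be set from C via the
non-standard <fpu_control.h> interface; a minimal sketch, assuming glibc,
would be:

#include <fpu_control.h>

static void set_x87_double_precision( void )
{
    fpu_control_t cw;

    _FPU_GETCW( cw );                            /* read the control word */
    cw = ( cw & ~_FPU_EXTENDED ) | _FPU_DOUBLE;  /* select 53-bit precision */
    _FPU_SETCW( cw );                            /* write it back */
}

As noted above, this narrows only the mantissa; the exponent range stays
the extended one, so behaviour near underflow/overflow can still differ
from true 64-bit arithmetic.]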
 

Mark McIntyre

To clarify, the real construction I'm having problems with
looks something like this:

assert( y >= f(...) );

The assertion is correct (because previous calculations
effectively have yielded just that) but erroneously fails
because the f value gets extra precision yielding it
strictly larger than y.

For floating point values of y other than zero and some integers, this is a
doomed exercise. I realise you've already considered and discounted the
inherent inaccuracy of floats, but except in special circumstances, you
can't do this kind of comparison.

And by the way, are you using an assert to perform a check in release code?
That's generally a bad idea.
 

Daniel Vallstrom

Mark McIntyre said:
For floating point values of y other than zero and some integers, this is a
doomed exercise. I realise you've already considered and discounted the
inherent inaccuracy of floats, but except in special circumstances, you
can't do this kind of comparison.

Sure you can. As said, the problem was that x86 floating point sucks
and that gcc is buggy. But even given that, the workaround is simple
and works fine. The workaround, as said, looks like this:

volatile double yvol = (double)y;
volatile double fvol = (double)f(...);
assert( yvol >= fvol );

The double casts guarantee this to work with any correct compiler.
And it also works with gcc since gcc handles volatiles in a reasonable
way.

Furthermore there is no performance hit to talk about, at least in this
case.
And by the way, are you using an assert to perform a check in release code?
That's generally a bad idea.

Where did this question come from? Anyway, if anything, the opposite
is true. For example, what about that big blackout in northeast
America some time ago? IIRC, the reason it got so widespread was
that persons supervising the grid didn't notice anything wrong for
hours. And the reason for that was because the system diagnosing the
grid had entered into a state not anticipated by the makers I think.
Instead of failing an assert --- which presumably would have
alerted the technicians --- the program kept on going as if
nothing was wrong, showing no changes. (That's the picture of the
blackout I have at least. Truthfully, I'm not completely sure that
all the details in that story are correct:)

Of course, if it's better that a program keeps going on, even if
its state is screwed up, then so be it. The point is that in many
situations it is better to keep the assertions active.

I'm a big fan of a Hoare flavored assert system, effectively
guaranteeing that the program is correct with high probability.
For example, say that you are coding a sorting function. Then,
besides assertions in the sorting function itself, there should
be a big assertion at the end, asserting that all the properties
that should hold at the end actually do so, e.g. that the array
is actually sorted.
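
A minimal sketch of what I mean (is_sorted and sort here are only
illustrative names, not code from the real program):

#include <stddef.h>
#include <assert.h>

static int is_sorted( const double *a, size_t n )
{
    size_t i;

    for ( i = 1; i < n; i++ )
        if ( a[i-1] > a[i] )
            return 0;
    return 1;
}

/* ... */

sort( a, n );
assert( is_sorted( a, n ) );   /* post-condition of the sorting function */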

Some assertions could take a lot of time and even change the
complexity of the program, from e.g. O(n) to O(n^2). Hence you
have to layer the assertions according to how much they slow
down the program.

If you feel that you can't use the assertion construction
provided by C you should implement your own equivalent system
rather than define NDEBUG.

In anticipation, a counter-argument to assertions that sometimes
comes up is that if you think assertions are so important you
should instead handle the cases properly. I.e. instead of doing
"assert(p)" you should do "if (!p) ...". That argument shows
a misunderstanding of what asserts are about, namely assuring
that the program is correct, not handling cases. Assertions
look like "assert(true)" and it's nonsense to instead go
"if (!true)...".


Daniel Vallstrom
 

Mark McIntyre

Sure you can.

Really? Go tell that to Goldberg et al.
As said, the problem was that x86 floating point sucks
and that gcc is buggy.

Maybe.
But even given that, the workaround is simple
and works fine. The workaround, as said, looks like this:

volatile double yvol = (double)y;

So you're telling me that casting FP values to doubles and using the
volatile keyword removes the imprecision inherent in storing irrational
numbers in finite binary format? If so, this is a special extension to gcc,
a) not guaranteed by the Standard and b) offtopic here.
Where did this question come from?

What does that matter? You post to CLC, you expect to find people noticing
all sorts of other things.
Anyway, if anything, the opposite is true.

assert() calls abort(). This is a bad way to handle errors.
For example, what about that big blackout in northeast
America some time ago? IIRC, the reason it got so widespread was
that persons supervising the grid didn't notice anything wrong for
hours. And the reason for that was because the system diagnosing the
grid had entered into a state not anticipated by the makers I think.
Instead of failing an assert --- which presumably would have
alerted the technicians --- the program kept on going as if
nothing was wrong, showing no changes.

Mhm. So an assert() would have blacked out the *entire* US, including
bringing down the computer systems, sealing the technicians in their
airtight computer facility etc. Good solution.
Of course, if it's better that a program keeps going on, even if
its state is screwed up, then so be it. The point is that in many
situations it is better to keep the asssertions active.

I disagree, but YMMV. Remind me not to ask you to work on any
mission-critical systems of mine.... :)
In anticipation, a counter-argument to assertions that sometimes
comes up is that if you think assertions are so important you
should instead handle the cases properly.

Indeed you should.
 

Tim Rentsch

Tim Rentsch said:
[lots snipped]

It looks to me like your analysis is right, including what to do about
resolving it.

Good to know. I'll go with the volatile solution.

After thinking about this more I'm coming around to the point of view
that using 'volatile' (or something similar) in the assertion is not
the right answer. But first let me address the use of 'volatile'.

I thought about a volatile cast (only the simple
(volatile double) though; yours seems safer) but felt
very unsure of the meaning of a volatile cast. Having now
glanced through the standard I'm still unsure (even if it
might work on a test-example).

The '*(volatile double *)&x' idea came up in a thread (in fact it was
a response from Chris Torek) answering a question of mine about
volatile. As I recall that thread also concluded that just casting
directly, as in '(volatile double) x' is not guaranteed to work. Also
I remember that the '*(volatile double *)&x' approach is mentioned
explicitly in the standard (or perhaps the rationale - I don't always
remember which is which) as the right way to do this kind of forced
access.

Using a volatile variable should work equally well. My model (well,
part of it anyway) for volatile is that assigning to a volatile variable
means a "store" must be done, and referencing a volatile variable
means a "load" must be done. I think that model is good operationally
even if it doesn't correspond exactly to what's said in the standard.

Furthermore, this suggests another means to accomplish the forced
access that doesn't suffer the restriction of needing an L-value
(which the address-casting approach has) or need another local
variable:

static inline int
double_GE( volatile double a, volatile double b )
{
    return a >= b;
}

. . .

assert( double_GE( y, f() ) );

Using this approach on the test program caused the "can't possibly
succeed" assertion to fail, as desired. Incidentally, leaving
off the 'volatile' specifiers on the parameters left the assertion
in the "can't possibly succeed, yet it does" state.

However, while looking through the standard, I found this
(C99, 6.5.4 Cast operators, p81):


86) If the value of the expression is represented with
greater precision or range than required by the type
named by the cast (6.3.1.8), then the cast specifies a
conversion even if the type of the expression is
the same as the named type.

Hence, a simple (double)x should work. But I tried that
before posting the original post and it didn't work!
[snip related test with gcc]

It seems right that the standard mandates that casting to double
should make the assertion do what you'd expect, and it therefore
seems right that the program not doing that means gcc has a bug.
Personally I think this specification is mildly insane; if x is a
double (and assuming x has a floating point value rather than NaN),
the expression 'x == (double)x' should ALWAYS be true. But even if
we accept that using a '(double)' cast will correct the 64/80 bit
problem, this still seems like the wrong way to solve the problem,
for the same reason that using a 'volatile' forced access seems
wrong - see below.

This is unacceptable since it would make the assertion
strictly weaker (the real assertion in the real program
that is; the above is strictly stronger).

I hear you. My next question is, what is it that you are really
trying to guarantee?

I'm trying but gcc and the hardware failing me is not being helpful ;p

No kidding.

Rewriting the assertion - whether using (double) or volatile or some
other mechanism - and doing nothing else isn't the right way to solve
the problem. Here's my reasoning.

Why are you writing the assertion in the first place? Presumably it's
to guarantee that some other piece of code that depends on that
assertion being true is going to execute at some point in the (perhaps
near) future. For example, if

assert( y >= f() );

perhaps we are going to form the expression 'y - f()' and depend on
that value being non-negative.

Whatever it is that we're depending on should be exactly expressed by
what is in the assertion (or at least, should be implied by what is in
the assertion). If we write the assertion one way, and the later code
a different way, that won't be true - the assertion could succeed, and
the later code fail, or vice versa. So, whatever it is we do, the
assertion should be written in the very same form as the code that
the assertion is supposed to protect. Thus, if we have

static inline double
double_minus( volatile double a, volatile double b )
{
    return a - b;
}

/* make sure y - f() is non-negative */
assert( double_minus( y, f() ) >= 0 );

then the later code should use

difference = double_minus( y, f() );

and not

difference = y - f();

You see what I mean? That's related to my question about
reformulating the assertion expression, so that the assertion
expression and the subsequent code that depends on it are guaranteed
to be in sync, whatever it is that the subsequent really needs to
guarantee. Because the subsequent code may have just the same
problems that the code in the assertion expression has.



Picking up an earlier part of the message:
To clarify, the real construction I'm having problems with
looks something like this:

assert( y >= f(...) );

The assertion is correct (because previous calculations
effectively have yielded just that) but erroneously fails
because the f value gets extra precision yielding it
strictly larger than y.

I've been in touch with the person who is now chairing the IEEE-754
revision committee, and he's going to take up this whole question with
the committee. He's asked for a code sample that illustrates the
'y >= f()' problem; if you get that to me I'll forward it to him.
If it's too large to post please feel free to send it to me in email.
 

Michael Mair

Mark said:
Really? Go tell that to Goldberg et al.

Why? As the OP used nextafter() in his failing example, at least this
is correct; and for many choices of f(), this also is possible.
If f() is keeping track of the way everything is rounded, then you
certainly can ask for >=. The point just is that you have to be
very careful.
BTW: I rather like the "extended" version of the Goldberg paper provided
by Sun.


Maybe.

With certainty.

So you're telling me that casting FP values to doubles and using the
volatile keyword removes the imprecision inherent in storing irrational
numbers in finite binary format? If so, this is a special extension to gcc,
a) not guaranteed by the Standard and b) offtopic here.

The gcc/x86 issue has been brought up time and again -- in fact,
my first post to c.l.c was about that. Most people initially
are not sure whether the double cast has to work as intended
(and also intended by the standard) or whether there is some
exception to the rule they do not know.
The reason it works with the volatile variable and the
volatile double * cast trick is that the volatile semantics
are implemented more faithfully.
Further: We are not talking about irrational numbers but rather
about numbers representable by long double and double.

And by the way, are you using an assert to perform a check in release code?
That's generally a bad idea.
[snip]
Anyway, if anything, the opposite is true.

assert() calls abort(). This is a bad way to handle errors.

Correct.
One remark: Many people create their own assertion macro which
does more cleanup, spits out more information and so on.
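
A rough sketch of what such a macro might look like (the name and the
details are only illustrative):

#include <stdio.h>
#include <stdlib.h>

#define MY_ASSERT(expr)                                              \
    do {                                                             \
        if ( !(expr) ) {                                             \
            fprintf( stderr, "%s:%d: assertion failed: %s\n",        \
                     __FILE__, __LINE__, #expr );                    \
            /* application-specific logging/cleanup could go here */ \
            abort();                                                 \
        }                                                            \
    } while (0)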

However, one usually places assertions where one thinks that
they always will hold and #defines them away for the release
version -- error checking and cleanup still has to take place.


Cheers,
Michael
 

Mark Piffer

[lotsoftextsnipped]
I hear you. My next question is, what is it that you are really
trying to guarantee?



No kidding.

Rewriting the assertion - whether using (double) or volatile or some
other mechanism - and doing nothing else isn't the right way to solve
the problem. Here's my reasoning.
[carefully crafted argument snipped]

I agree, no amount of reformulation trickery will make his problems go
away if the flaw is in ignoring a basic rule for floating point
calculations: avoid doing arithmetic with denormalized numbers. The
result x is obviously denormalized (or even underflowed) at its point
of use and therefore the 64-bit representation vanishes into 0.0. For
a calculation which may end up in a range close to 0, do
yourself a favour and limit its result to machine precision (i.e. if
|x| < epsilon then x = 0).
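
Expressed as code, that rule might look roughly like this (the choice of
threshold is problem-dependent; DBL_EPSILON below is only a placeholder):

#include <float.h>
#include <math.h>

/* Flush results that are smaller in magnitude than the chosen
   threshold to zero before they are compared or reused. */
if ( fabs( x ) < DBL_EPSILON )
    x = 0.0;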

Mark
 

Dan Pop

Daniel Vallstrom said:
I'm having problems with inconsistent floating point behavior
resulting in e.g.

assert( x > 0.0 && putchar('\n') && x == 0.0 );

holding. (Actually, my problem is the dual one where I get
failed assertions for assertions that at first thought ought
to hold, but that's not important.)

At the end is a full program containing the above seemingly
inconsistent assertion. On my x86 using gcc the assertion
doesn't fail when x is of type double.

AFAIK, what is happening is that at first x temporarily resides
in an 80-bit register with higher precision than the normal
64 bits. Hence x is greater than 0.0 at first even though
the "real" 64-bit value is 0.0. If you cast the value to long
double and print it you can see that it indeed is slightly
larger than 0.0 at first, but then becomes 0.0.

Is not failing "assert( x > 0.0 && 1 && x == 0.0 );"
acceptable?

Yes. However, assert((double)x > 0.0 && (double)x == 0.0) *must* fail.
It doesn't in gcc for x86; however, this is a well-known bug that the
gcc people don't seem very eager to fix.
What is the best workaround to the problem? One possibility is
using volatile intermediate variables, when needed, like this:

volatile double y = x;
assert( y > 0.0 && putchar('\n') && y == 0.0 );

Is that the best solution?

Yes, if it works. gcc is doing very aggressive optimisations in this
area and happily ignoring my casts above, although this is forbidden by
the standard. You may actually want to use memcpy to copy the value of
x into y and have a look at the generated code.
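
A sketch of that memcpy approach (the caveat about inspecting the
generated code applies; an aggressive optimiser might elide the copy):

#include <string.h>

double y;

memcpy( &y, &x, sizeof y );   /* force x through memory */
assert( y > 0.0 && putchar('\n') && y == 0.0 );
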
/* Tests weird inconsistent floating point behavior resulting in
something like "assert( x > 0.0 && 1 && x == 0.0 );" holding!
Daniel Vallstrom, 041030.
Compile with e.g.: gcc -std=c99 -pedantic -Wall -O -lm fpbug.c
*/

Adding -ffloat-store will fix your particular problem. However, this is
not a panacea for all the precision-related bugs of gcc, and it slows
down floating-point-intensive code. It is, however, the first thing you
may want to try when puzzled by gcc's behaviour.

Dan
 

Tim Rentsch

Mark McIntyre said:
So you're telling me that casting FP values to doubles and using the
volatile keyword removes the imprecision inherent in storing irrational
numbers in finite binary format? If so, this is a special extension to gcc,
a) not guaranteed by the Standard and b) offtopic here.

Since Daniel included a citation (it was section 6.5.4 I believe)
about '(double)' forcing a conversion in certain cases, the issue is
certainly on-topic, regardless of what the ultimate resolution of the
question [about '(double)' removing extra precision] might be.

That's also true for the question about whether 'volatile' implies
moving out of extended precision. Regardless of whether it does
or does not remove extended precision, it's on-topic to ask if
the standard requires it to.

Just by the way, I don't think the answer to either of those questions
is as clear-cut as the "not guaranteed" statement implies. So if
there's an argument supporting that position, it would be good to
hear it, including relevant citations.
 

Tim Rentsch

I agree, no amount of reformulation trickery will make his problems go
away if the flaw is in ignoring a basic rule for floating point
calculations: avoid doing arithmetic with denormalized numbers. The
result x is obviously denormalized (or even underflowed) at its point
of use and therefore the 64-bit representation vanishes into 0.0. For
a calculation which may end up in a range close to 0, do
yourself a favour and limit its result to machine precision (i.e. if
|x| < epsilon then x = 0).

It's true that the result here is denormalized in a sense (it's as
normalized as it's possible for this value to be, given how floating
point numbers are represented). But the larger problem is more
pervasive than just numbers close to zero, or denormalized numbers.

What is the right way to handle the larger problem?
 

Mark McIntyre

Why? As the OP used nextafter() in his failing example, at least this
is correct; and for many choices of f(), this also is possible.

I wonder if you actually *read* my original comment, which said exactly the
same thing as you did.
If f() is keeping track of the way everything is rounded, then you
certainly can ask for >=.

I suspect you can't, based on experience, but really don't have the energy
to argue, especially as my son is burbling in my left ear about some sort
of computerised walking car....
The point just is that you have to be very careful.

Indeed.


With certainty.

That was "maybe, but in the context of CLC topicality, who knows?"!
 

Christian Bau

I agree, no amount of reformulation trickery will make his problems go
away if the flaw is in ignoring a basic rule for floating point
calculations: avoid doing arithmetic with denormalized numbers.

Avoid doing floating point arithmetic with tiny numbers on a processor
that is so brain-damaged that it cannot decide if they are denormalized
or not. On a proper implementation of IEEE 754, denormalized numbers are
no problem.
 
