Daniel Vallstrom
I'm having problems with inconsistent floating point behavior
resulting in e.g.
assert( x > 0.0 && putchar('\n') && x == 0.0 );
holding. (Actually, my problem is the dual one, where I get
failed assertions for assertions that at first glance ought
to hold, but that's not important.)
At the end is a full program containing the above seemingly
inconsistent assertion. On my x86 using gcc the assertion
doesn't fail when x is of type double.
AFAIK, what is happening is that at first x temporarily resides
in an 80-bit register with higher precision than the normal
64 bits. Hence x is greater than 0.0 at first, even though
the "real" 64-bit value is 0.0. If you cast the value to long
double and print it, you can see that it indeed is slightly
larger than 0.0 at first, but then becomes 0.0.
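For illustration, here is a minimal sketch of that observation
(what it actually prints depends on the optimization level and on
whether the compiler still holds x in an 80-bit register at the
point of the cast):

#include <stdio.h>
#include <math.h>

int main( void )
{
    double x = nextafter( 0.0, -1.0 ) * nextafter( 0.0, -1.0 );

    /* If x is still in an extended-precision register here, the   */
    /* cast may show a tiny positive value; once x has been stored */
    /* to memory as a 64-bit double it prints as 0.                */
    printf( "as long double: %Lg\n", (long double) x );
    printf( "as double:      %g\n", x );

    return 0;
}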
Is it acceptable that "assert( x > 0.0 && 1 && x == 0.0 );"
does not fail?
What is the best workaround for the problem? One possibility is
to use volatile intermediate variables, when needed, like this:
volatile double y = x;
assert( y > 0.0 && putchar('\n') && y == 0.0 );
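Applied to the test program at the end, the workaround would look
roughly like this (a sketch, assuming that the store into the
volatile rounds the value down to a 64-bit double, so that all
later comparisons agree):

#include <math.h>
#include <assert.h>

int main( void )
{
    double x = nextafter( 0.0, -1.0 ) * nextafter( 0.0, -1.0 );

    /* Assumption: storing into the volatile forces the value out */
    /* to a 64-bit double in memory, so both comparisons below    */
    /* see the same value.                                         */
    volatile double y = x;

    assert( y == 0.0 );
    assert( !( y > 0.0 ) );

    return 0;
}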
Is that the best solution?
Daniel Vallstrom
/* Tests weird inconsistent floating point behavior resulting in
   something like "assert( x > 0.0 && 1 && x == 0.0 );" holding!

   Daniel Vallstrom, 041030.

   Compile with e.g.: gcc -std=c99 -pedantic -Wall -O -lm fpbug.c
*/

#include <stdio.h>
#include <math.h>
#include <assert.h>

int main( void )
{
    double x = nextafter( 0.0, -1.0 ) * nextafter( 0.0, -1.0 );

    /* The putchar-conjunct below is just something arbitrary in */
    /* order to clear the x-register as a side-effect. At least  */
    /* that's what I guess is happening.                         */
    assert( x > 0.0 && putchar('\n') && x == 0.0 );

    return 0;
}