Guest
Hello,
I have a problem with float comparisons. Yes, I have read FAQ point
14.4ff.
Assume this little code snippet:
#include <stdio.h>
#include <stdlib.h>  /* for EXIT_SUCCESS */

int main(int argc, char *argv[]) {
    double a = 3;
    double b = 5;

    if ((a / b) > 0.6)
        printf(">\n");
    else
        printf("<=\n");

    return EXIT_SUCCESS;
}
As you might guess, it sometimes prints ">" and sometimes "<=".
Now it is very important to me that the program behaves in a really, really
repeatable fashion. The threshold may be arbitrary, but what I want to
avoid at all costs are different results depending on, for example, which
compiler options I use.
In my "real world" program, a and b are ints, so I could solve the
problem by doing
if( (a * 10) > (b * 6) )
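
Spelled out as a complete program (just a sketch of that integer rewrite,
assuming the same 6/10 threshold and that a * 10 and b * 6 cannot overflow
an int), it would look like:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int a = 3;
    int b = 5;

    /* a/b > 6/10 is equivalent to a*10 > b*6 for positive b,
       so no floating point is involved at all. */
    if ((a * 10) > (b * 6))
        printf(">\n");
    else
        printf("<=\n");

    return EXIT_SUCCESS;
}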
But my question is: if they _were_ doubles, what should I do then? How do I
get a fully deterministic comparison, independent of architecture,
compilers and so on?
Regards,
January