I said I wouldn't try doubles in the real project, but it didn't prove
that difficult and I couldn't let it rest. Anyway, it didn't work as
well as I hoped. This is some of the output I got:
x = 64.200000 / (107.000000 / 5) = 64.200000 / 21.400000 = 3.000000 = 2
Notice the 3.000000 = 2? That's what's screwing my program over.
Anyway, as I understand from you guys, I need infinite bits to get 100%
accuracy. That's probably not going to happen anytime soon, so I need
another solution. Is there a way to make the program use the calculated
3.000000 instead of the 2.9999999999999999999999999 it actually
calculated? Is there some way to specify how accurate I need things?
Thanks for any help!
Ian said:
Jordi said:
I have made a little C program that generates the following output:
227.000000 / 5 = 45.400002
Here's the code:
#include <stdio.h>

int main(int argc, char* argv[]) {
    float var = 227;
    printf("%f / 5 = %f\n", var, var / 5);
    return 0;
}
Obviously, the output should be 45.4 (or 45.400000), but for some
mysterious reason C decided to add 0.000002* to the result. The error
might seem minor and this program is obviously trivial, but I need to
do similar calculations in a much bigger project and the problem I'm
encountering is caused by exactly this error.
* It seems that it is not 0.000002 that is added, but rather
0.0000015258789076710855... (which I found out by subtracting 45.4
from the result and then multiplying by 10,000,000,000,000,000)
Why does C do this and what can I do about it?
It doesn't; the floating-point hardware or emulation does.
Floating point isn't precise. Avoid floats and use doubles; even then,
you have to understand the limitations of floating-point math.
You must have a dodgy FPU; the result is 45.400000 on my system