Consider the following code:
#include <stdio.h>
#include <conio.h> /* for getch(); non-standard */

int main(int argc, char* argv[]) {
    double d1;
    d1 = 11.0;
    d1 -= 1.8;
    d1 -= 3.0;        // HERE THE DEBUGGER SHOWS 6.1999999999999993 !!!
    printf("%f", d1); // THE OUTPUT IS 6.200000 (ok, we have fewer digits)
    getch();
    return 0;
}
I don't understand it. Is the debugger showing me wrong data, or is
there something I don't know?
There is no exact value of 6.2 in binary floating point (nor of
1.8). 0.1 is an infinitely repeating fraction in binary. Also,
floating-point operations are subject to round-off error.
For example, (1 + 100000000000000000000) - 100000000000000000000 has
a darn good chance of coming out zero.
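Here is a minimal sketch you can compile to watch that happen
(assuming IEEE 754 doubles; 1e20 stands in for the long literal above):

#include <stdio.h>

int main(void) {
    double big = 1e20;
    /* Adjacent doubles near 1e20 are thousands apart, so adding 1.0
       rounds straight back to big and the difference is exactly 0. */
    double r = (1.0 + big) - big;
    printf("%f\n", r); /* prints 0.000000 */
    return 0;
}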
If it's not an exact integer, and it doesn't end in 5 (with trailing
0's removed), it doesn't have an exact representation in binary
floating point.
If it's not an exact integer, has at least two decimal places, and
it doesn't end in 25 or 75 (with trailing 0's removed), it doesn't
have an exact representation in binary floating point.
If it's not an exact integer, has at least three decimal places,
and it doesn't end in 125, 375, 625, or 875 (with trailing 0's
removed), it doesn't have an exact representation in binary floating
point.
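You can test those rules (they are necessary conditions, not
sufficient ones) by printing values with many more digits than %f's
default six; a minimal sketch:

#include <stdio.h>

int main(void) {
    /* These pass the rules and happen to be exact powers of two. */
    printf("%.60f\n", 0.5);   /* exactly 0.5 */
    printf("%.60f\n", 0.125); /* exactly 0.125 */
    /* These fail the rules, so the nearest double is stored instead. */
    printf("%.60f\n", 0.1);
    printf("%.60f\n", 6.2);
    return 0;
}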
It appears that the double value you printed had a roundoff error of 1 in
the least significant bit, making it match the 'Before' value below:
6.2 as double:
Before: 6.199999999999999289457264239899814128875732421875000000000000
Value: 6.200000000000000177635683940025046467781066894531250000000000
After: 6.200000000000001065814103640150278806686401367187500000000000
6.2 as float:
Before: 6.199999332427978515625000000000000000000000000000000000000000
Value: 6.199999809265136718750000000000000000000000000000000000000000
After: 6.200000286102294921875000000000000000000000000000000000000000
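Those neighbouring values can be reproduced with nextafter() and
nextafterf() from <math.h> (C99); a minimal sketch, printing 60
digits as above:

#include <stdio.h>
#include <math.h>

int main(void) {
    double v = 6.2;
    printf("Before: %.60f\n", nextafter(v, 0.0));  /* next double toward 0 */
    printf("Value:  %.60f\n", v);
    printf("After:  %.60f\n", nextafter(v, 10.0)); /* next double toward 10 */

    float f = 6.2f;
    printf("Before: %.60f\n", nextafterf(f, 0.0f));
    printf("Value:  %.60f\n", (double)f);
    printf("After:  %.60f\n", nextafterf(f, 10.0f));
    return 0;
}

(Link with -lm on Unix-like systems.)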
For IEEE doubles, which I suspect you are using, DBL_DIG is 15, and the
value your debugger displayed is off by only 1 in the 16th significant
digit, so you have nothing to complain about.
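To see the same digits your debugger shows, ask printf for more
precision than the default six digits (a minimal sketch re-running
your computation; DBL_DIG is in <float.h>):

#include <stdio.h>
#include <float.h>

int main(void) {
    double d1 = 11.0;
    d1 -= 1.8;
    d1 -= 3.0;
    printf("DBL_DIG = %d\n", DBL_DIG); /* 15 for IEEE doubles */
    printf("%f\n", d1);    /* 6.200000 - default 6 digits after the point */
    printf("%.17g\n", d1); /* 6.1999999999999993 - what the debugger shows */
    return 0;
}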
Gordon L. Burditt