CBFalconer said:
No, I think you miss the point. Floating point is inherently an
approximation, and the above says that anything within the
resolution (i.e. the approximation) is to be considered equal. Any
time you see (a == b) for floating point operands, it is probably a
bug. For example:
for (a = 1.0; a != 0; a -= 0.1) dosomethingwith(a);
While I agree that this is likely to fail ...
for (a = 1.0; !DUNEQUAL(a, 0.0); a -= 0.1) dosomethingwith(a);
.... there is no guarantee that this second form is correct, for the
definition supplied above. Why? Because each subtraction may invoke a
roundoff. After several roundings, you may be off by more than one lsb
for a number with magnitude 1, let alone one lsb of, say, 1e-7.
Note also that DUNEQUAL is not commutative, which fails the
principle of least astonishment.
In practice,
for (a = 1.0; a > 0; a -= 0.1) dosomethingwith(a);
may behave. But it is a crapshoot whether it does an extra iteration.
And, to make it robust, you could do
for (a = 1.0; a > 0.1/2; a -= 0.1) dosomethingwith(a);
or, to use the best approximation of each nominal value of a:
int i;
for (i = 10; i > 0; i--) { dosomethingwith (i*0.1); }
or, for the pragmatic types
for (a = 10; a > 0; a -= 1) dosomethingwith (a/10);
The last form is not guaranteed, but it performs correctly on any
floating-point system I am familiar with (except perhaps logarithmic
ones), because only a few bits are needed to represent the integer
values exactly.