Dik T. Winter said:
This is different. We want to compare with 1.0, and add epsilon to
avoid false negatives. But in this case we get a true negative.
On the other hand, if f is mathematically greater than 1 + epsilon,
but slightly smaller due to floating-point errors, we get a false
positive...
But indeed, comparing to 1.0 + epsilon is just as silly as comparing
to plain 1.0. The bottom line is that when you want to use
floating-point you better know what you are doing so that you can
judge how you wish to compare. (In all my years of programming in
numerical mathematics I rarely, if at all, coded a line like
f <= 1.0 + epsilon, but many lines like f <= 1.0.)
If you compare to 1.0 + epsilon you exclude the false negatives (since the
real value we are interested in is 1.0, not 1.0 + epsilon). If you compare
to 1.0 - epsilon you exclude the false positives.
When f is in the range 1 +/- epsilon we have a problem. It depends on what
you expect f to be the result of, and why you are making the comparison.
Let's say that you are writing some sort of lighting algorithm that requires
vectors to be either normalised or scaled to be under 1. If you want to
exclude / trap any unnormalised vectors, you probably expect most of the
vectors to be the result of dividing by a square root, and thus
mathematically unity. You probably want to accept anything a whisker over
1.0 in length.
On the other hand, if f is a probability, you probably want to compare to a
hard 1.0: f might easily be exactly 1.0, but if it is a whisker over, that
probably represents something bad elsewhere.
Unfortunately there aren't any hard and fast rules.