[ ... ]
Yes, those are the issues I was talking about when I said:
*For the nitpickers: yes, if the values are NaNs they are supposed to
compare unequal. And, of course, there are some values that sometimes
have multiple representations (for example, 0 and -0 are distinct values
that compare equal), so it is not true that two values are equal only if
they have the same bit pattern. But that's a completely different
discussion.
Somehow I seem to have missed that (part of that?) post, but I think
it's basically inaccurate.
On a real machine, a floating point comparison typically takes two
operands and produces some single-bit results (usually in a special
flags register). It starts by doing a floating point subtraction of
one of those operands from the other, and then examines the result of
that subtraction to see whether it's zero, negative, etc., and sets
flags appropriately based on those conditions.
Now, you (Pete) seem to be focusing primarily on the floating point
subtraction itself. While there's nothing exactly wrong with that,
it's a long way from the whole story. The floating point subtraction
just produces a floating point result -- and it's the checks I
mentioned (for zero and NaN) that actually determine the state of the
zero flag.
As such, far from being a peripheral detail important only to
nitpickers, this is really central, and the subtraction is nearly a
peripheral detail that happens to produce a value to be examined --
in particular, it's also perfectly reasonable (and fairly common) to
set the flags based on other operations as well as subtraction.
As I recall, the question that started this sub-thread was mostly one
of whether a floating point comparison was particularly expensive. In
this respect, I'd say Pete is basically dead-on: a floating point
comparison is quite fast not only on current hardware, but even on
ancient stuff (e.g. 4 clocks on a 486). Unless the data being
compared has been used recently enough to be in registers (or at
least cache) already, loading the data from memory will almost always
take substantially longer than doing the comparison itself.
I suspect, however, that this mostly missed the point that was
originally being made: that under _most_ circumstances, comparing
floating point numbers for equality is a
mistake. Since floating point results typically get rounded, you
usually want to compare based on whether the difference between the
two is smaller than some delta. This delta will depend on the
magnitude of the numbers involved. The standard library defines
FLT_EPSILON, DBL_EPSILON, and LDBL_EPSILON in float.h (with
counterparts for the same general idea elsewhere, such as
std::numeric_limits<T>::epsilon() in C++'s <limits>), each giving the
difference between 1 and the smallest representable value greater
than 1 for that type.
Therefore, if you're doing math with doubles (for example) you start
by estimating the amount of rounding that might happen based on what
you're doing. Let's assume you have a fairly well-behaved
computation, and it involves a dozen or so individual calculations.
You then do your comparison something like:
delta = fabs((val1+val2)/2.0) * DBL_EPSILON * 12.0;
if (fabs(val1-val2) <= delta)
// consider them equal.
else
// consider them unequal.
While the comparison itself is fast and cheap, it's really only a
small part of the overall job.