It's also possible that the version using only operator< can be
better optimized by the compiler---or the reverse.
If you call lhs.real() < rhs.real() and then lhs.real() ==
rhs.real(), the compiler will probably produce two comparisons
and two conditional jumps from those.
On most machines, a comparison will set status bits; the
compiler will likely see that the comparison has already been
done, and only test the status bits. Of course, the compiler
could also recognize that !x<y is the same as x>y, and recognize
that the status bits were already set for that as well. With a
good optimizing compiler, I'd expect no difference.
However, if you call lhs.real() < rhs.real() and then
!(lhs.real() < rhs.real()), it's theoretically possible that
the compiler will notice the pattern and create only one
comparison and one conditional jump.
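Concretely, the two variants under discussion might look something
like the following sketch. The names lexLess and lexLessOnly are
invented for illustration (std::complex deliberately provides no
operator<), and the second variant writes the "reals are equal" test
as a negated operator< rather than repeating the identical
comparison:

```cpp
#include <complex>

// Variant 1: lexicographic less-than using both operator< and
// operator== on the components.
bool lexLess(std::complex<double> const& lhs,
             std::complex<double> const& rhs)
{
    if (lhs.real() < rhs.real()) {
        return true;
    }
    if (lhs.real() == rhs.real()) {
        return lhs.imag() < rhs.imag();
    }
    return false;
}

// Variant 2: the same ordering expressed with operator< only; the
// second test is the negation of a comparison whose status bits
// the hardware may already have computed.
bool lexLessOnly(std::complex<double> const& lhs,
                 std::complex<double> const& rhs)
{
    if (lhs.real() < rhs.real()) {
        return true;
    }
    if (!(rhs.real() < lhs.real())) {   // i.e. the reals are equal
        return lhs.imag() < rhs.imag(); // (NaN aside)
    }
    return false;
}
```

(With NaNs in play, of course, neither variant is a strict weak
ordering; the discussion here is purely about the generated code.)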
Or not. Back in the old days, Fortran had a three way if
(arithmetic if), which didn't test for true/false, but for
positive/zero/negative. I would expect that handling such cases
would involve well-known optimization techniques. (Curiously,
both g++ and VC++ seem to use three comparison instructions in
both cases. On the AMD machine where I'm writing this, I'd have
expected something along the lines of:
    xorl    %eax, %eax
    ucomisd %xmm2, %xmm0
    jne     end
    ucomisd %xmm3, %xmm1
end:
    adcl    $0, %eax
    ret
for either of them, supposing I understand the instruction set
correctly---I've never really studied it. G++ is far from that.
(The issue on the 32 bit Intel architecture on which I have VC++
is more complex, because floating point comparison doesn't
directly affect the CPU status codes. Still, VC++ compares and
reads the status codes twice for the first double, where once
would definitely suffice.)