C++ doubt

Pete Becker

Peter said:
Essentially, 1.999999 and 2.0 are both valid representations of the same
number (remember the topic of significant digits?).

No, they're two distinct numbers.
Also, comparing doubles
in any way is an intensive CPU operation.

Not on any architecture I'm aware of. Two doubles are equal if they have
the same bit pattern.
 
Peter Gordon

If you *really* need to test equality you test equality. If you need
to test "close enough" you test "close enough." Exact equality of
floating point values is sometimes meaningful.
That's fine, but dangerous. You are highly likely to get compiler
differences. Not all of them obey the Banker's standard.
 
Pete Becker

Peter said:
That's fine, but dangerous. You are highly likely to get compiler
differences. Not all of them obey the Banker's standard.

It's not at all dangerous to understand the problem you're trying to
solve and use that knowledge to write code that solves it correctly.
It's extremely dangerous to use approximate solutions just because you
don't understand the problem well enough to get correct results.

Most programmers (including me) don't understand floating point math
well enough to use it in serious applications. Too many programmers (not
including me) assume that they can write something that's approximately
right and the result will be good enough.
 
Peter Gordon

It's not at all dangerous to understand the problem you're trying to
solve and use that knowledge to write code that solves it correctly.
It's extremely dangerous to use approximate solutions just because you
don't understand the problem well enough to get correct results.

Most programmers (including me) don't understand floating point math
well enough to use it in serious applications. Too many programmers (not
including me) assume that they can write something that's approximately
right and the result will be good enough.
In binary, there is no correct result, they are all approximations.
The point of having a standard is so every system calculates the
same approximation. The first group that "butted their heads" up
against this problem were bankers. The odd cent difference does
not matter much, but is not good enough for balancing books.
They defined, or paid to have defined, a standard for coping with
this imprecision. It has become the standard for the computer
industry.

However, let's agree that using an equality test on floating-point
numbers is not a good idea.
 
Pete Becker

Peter said:
In binary, there is no correct result, they are all approximations.

Nonsense. 0.5 + 0.5 is one, whether you do it in binary or decimal.
The point of having a standard is so every system calculates the
same approximation. The first group that "butted their heads" up
against this problem were bankers. The odd cent difference does
not matter much, but is not good enough for balancing books.
They defined, or paid to have defined, a standard for coping with
this imprecision. It has become the standard for the computer
industry.

Nope. It may be a standard for financial computations, but there is far
more to numerical analysis than financial computations.
However, let's agree that using an equality test on floating-point
numbers is not a good idea.

Since I've said at least three times that that's not the case, it's not
likely that we'll agree on it.
 
Gary Labowitz

Just for fun:

#include <iostream>
#include <ostream>
using namespace std;

int main()
{
    double d;
    float f;
    d = 1.1;
    f = 1.1;

    if (static_cast<float>(d) == f)
        cout << "They are equal.\n";
    else
        cout << "They are unequal.\n";
    // Shows equal.

    if (d == f)
        cout << "They are equal.\n";
    else
        cout << "They are unequal.\n";
    // Shows unequal.

    float tempf;
    double tempd;
    tempf = d;
    tempd = d;
    tempf = tempf && tempd;

    if (tempd == d)
        cout << "They are equal.\n";
    else
        cout << "They are unequal.\n";
    // Shows equal.

    return 0;
}
 
velthuijsen

Let me see if I get this right this time:
I made the (stupid) mistake of looking at the doubles in normal
decimal notation. If I'd done it in hex I'd have seen the last bytes as 0.
And using the normal math on the resulting number I should get the
extra digits.
Right?
 
Karl Heinz Buchegger

Let me see if I get this right this time:
I made the (stupid) mistake of looking at the doubles in normal
decimal notation. If I'd done it in hex I'd have seen the last bytes as 0.
And using the normal math on the resulting number I should get the
extra digits.
Right?

Right.

A somewhat similar example in decimal would be:
What is the value of 1.0 / 3.0 ?
If you insist on 4 decimal places this equals: 0.3333
But if you allow for 7 decimal places, you don't get 0.3333000
but you get 0.3333333
 
Howard

Gary Labowitz said:
Just for fun:

    float tempf;
    double tempd;
    tempf = d;
    tempd = d;
    tempf = tempf && tempd;  // did you mean tempd = ?

    if (tempd == d)          // or did you mean to check tempf here?
        cout << "They are equal.\n";
    else
        cout << "They are unequal.\n";
    // Shows equal.

    return 0;
}

I suspect you made a coding error there. You're computing a value for
tempf, but that's not what you're using in the comparison. Did you mean to
check the value of tempf, or to put the result of the && in the double
tempd? I suspect the latter, since that wouldn't lose precision, but you
didn't say.

-Howard
 
Randy

Gary said:
Just for fun:

#include <iostream>
#include <ostream>
using namespace std;

int main()
{
    double d;
    float f;
    d = 1.1;
    f = 1.1;

    if (static_cast<float>(d) == f)
        cout << "They are equal.\n";
    else
        cout << "They are unequal.\n";
    // Shows equal.

I got unequal.

    if (d == f)
        cout << "They are equal.\n";
    else
        cout << "They are unequal.\n";
    // Shows unequal.

I got unequal.

    float tempf;
    double tempd;
    tempf = d;
    tempd = d;
    tempf = tempf && tempd;

    if (tempd == d)
        cout << "They are equal.\n";
    else
        cout << "They are unequal.\n";
    // Shows equal.

Of course! tempd is a double and was made equal to d. The tempf is not
used, so there is no point in even having it. However, if you made a typo
and meant to type "if (tempf == d)", then I got unequal.
 
Randy

Of course! tempd is a double and was made equal to d. The tempf is not
used, so there is no point in even having it. However, if you made a typo
and meant to type "if (tempf == d)", then I got unequal.

I just re-read this, and it doesn't sound quite the way I wanted. No
offence was intended. :)
 
Pete Becker

Randy said:
I got unequal.

Some compilers by default don't do all of the adjustments to floating
point types that the standard requires, so the comparison would actually
be d as a double against f promoted to double. That makes for faster
computations, but technically doesn't comply with the C and C++
requirements. Check for a compiler switch that controls this, if you
prefer slow math. <g>
 
Gary Labowitz

Howard said:
I suspect you made a coding error there. You're computing a value for
tempf, but that's not what you're using in the comparison. Did you mean to
check the value of tempf, or to put the result of the && in the double
tempd? I suspect the latter, since that wouldn't lose precision, but you
didn't say.

The latter. I intended to take the "float" portion of the double and && in
the second half of the "double" portion with it.
The whole thing could have been better shown by displaying the bits of the
two variables. Oh well.
 
Peter Julian

Pete Becker said:
No, they're two distinct numbers.


Not on any architecture I'm aware of. Two doubles are equal if they have
the same bit pattern.


No, the result is not necessarily the same bit pattern. Different CPU
architectures and compilers have different floating point data
representations along with intermediate results that vary in precision (in
some cases, even the same architecture will generate different results in
debug than they will in release mode). And some decimal values, like 0.1,
can't be accurately represented in a floating point value.
 
Pete Becker

Peter said:
No, the result is not necessarily the same bit pattern.

Huh? Again: on every architecture I'm aware of, two doubles are equal if
they have the same bit pattern. It has nothing to do with where they
came from. It's simply a fact.
Different CPU
architectures and compilers have different floating point data
representations along with intermediate results that vary in precision (in
some cases, even the same architecture will generate different results in
debug than they will in release mode). And some decimal values, like 0.1,
can't be accurately represented in a floating point value.

Yes, but that has nothing to do with what I said, nor does it have
anything to do with your claim, which was that "comparing doubles in any
way is an intensive CPU operation." That is simply false. Comparing
doubles for exact equality is trivial.

It is not necessary to make the sign of the cross to ward off evil
whenever anyone mentions testing floating point values for equality.
What is necessary is objective analysis.
 
Pete Becker

Peter said:
That's exactly my point: the 2 doubles *must* have the same bit pattern. In
other words, this is guaranteed to fail unless 64-bit double precision or
rounded 32-bit precision is involved (the output is self-explanatory):

I have no idea what point you're trying to make here. I made the simple
statement that two doubles are equal if their representations have the
same bit pattern. It doesn't matter whether you got those two values
from doubles, floats, or ints. If they hold the same bit pattern they
are equal, and that is a meaningful and useful definition of equality*.
The fact that doing a computation in two different ways can produce
results that are not equal is irrelevant. Remember, all this started
from your assertion that "comparing doubles IN ANY WAY is an intensive
CPU operation [emphasis added]." That is not true, because you can
meaningfully compare doubles by treating them as suitably sized integral
values and comparing them as such. That is not an intensive CPU operation.

If your point is simply that doing what looks like the same computation
in two different ways can produce different results, then you're wasting
everyone's time, because that was established much earlier in this thread.


*For the nitpickers: yes, if the values are NaNs they are supposed to
compare unequal. And, of course, there are some values that sometimes
have multiple representations (for example, 0 and -0 are distinct values
that compare equal), so it is not true that two values are equal only if
they have the same bit pattern. But that's a completely different
discussion.
 
Gary Labowitz

Pete Becker said:
Peter said:
That's exactly my point: the 2 doubles *must* have the same bit pattern. In
other words, this is guaranteed to fail unless 64-bit double precision or
rounded 32-bit precision is involved (the output is self-explanatory):

I have no idea what point you're trying to make here. I made the simple
statement that two doubles are equal if their representations have the
same bit pattern. It doesn't matter whether you got those two values
from doubles, floats, or ints. If they hold the same bit pattern they
are equal, and that is a meaningful and useful definition of equality*.
The fact that doing a computation in two different ways can produce
results that are not equal is irrelevant. Remember, all this started
from your assertion that "comparing doubles IN ANY WAY is an intensive
CPU operation [emphasis added]." That is not true, because you can
meaningfully compare doubles by treating them as suitably sized integral
values and comparing them as such. That is not an intensive CPU operation.

If your point is simply that doing what looks like the same computation
in two different ways can produce different results, then you're wasting
everyone's time, because that was established much earlier in this thread.


*For the nitpickers: yes, if the values are NaNs they are supposed to
compare unequal. And, of course, there are some values that sometimes
have multiple representations (for example, 0 and -0 are distinct values
that compare equal), so it is not true that two values are equal only if
they have the same bit pattern. But that's a completely different
discussion.

Peter, perhaps you know this for sure: When we were first developing (circa
1956) it was pretty standard to do comparisons by subtracting one operand
from the other and checking the hardware zero and overflow flags. If the
hardware operation resulted in a zero result, then the operands were
considered equal. As you can guess, there are cases where the initial values
in bits were not exactly identical, but after scaling, conversions, and
other manipulations doing the subtraction, the result could be zero (all 0
bits, with underflow, if I recall right). Do modern CPUs operate in this
way? Or are they required to simulate it?
My mindset is that there is no actual "compare" of bit structures, but
manipulation to effect subtraction and a check of the result. Comments?
 
Pete Becker

Gary said:
Peter, perhaps you know this for sure: When we were first developing (circa
1956) it was pretty standard to do comparisons by subtracting one operand
from the other and checking the hardware zero and overflow flags. If the
hardware operation resulted in a zero result, then the operands were
considered equal. As you can guess, there are cases where the initial values
in bits were not exactly identical, but after scaling, conversions, and
other manipulations doing the subtraction, the result could be zero (all 0
bits, with underflow, if I recall right). Do modern CPUs operate in this
way? Or are they required to simulate it?
My mindset is that there is no actual "compare" of bit structures, but
manipulation to effect subtraction and a check of the result. Comments?

That's "Pete", by the way.

On the Intel architecture, for example, the representation of floating
point values is carefully contrived so that you can determine the
relative order of two values (other than NaNs) by comparing their bits
as if they represented signed integral types. Values are always
normalized (except, of course, for values that are too small to
normalize), the high-order bit contains the sign, the next few bits
contain a biased exponent (all zeros encodes the smallest [most negative]
exponent; an exponent of 0 is stored as 127 for floats, etc.), and the
remaining bits are the fraction. I assume the hardware takes advantage of this...
 
Jerry Coffin

[ ... ]
Huh? Again: on every architecture I'm aware of, two doubles are equal if
they have the same bit pattern. It has nothing to do with where they
came from. It's simply a fact.

Depending on implementation, two NaNs that have identical bit
patterns can still compare as being unequal.

In the other direction, two floating point numbers can have different
bit patterns and still compare as equal to each other (e.g. on an
Intel x86, 0.0 can be represented by a large number of different bit
patterns).
 
