C++ doubt


Pete Becker

Jerry said:
Depending on implementation, two NaNs that have identical bit
patterns can still compare as being unequal.

In the other direction, two floating point numbers can have different
bit patterns and still compare as equal to each other (e.g. on an
Intel x86, 0.0 can be represented by a large number of different bit
patterns).

Yes, those are the issues I was talking about when I said:

*For the nitpickers: yes, if the values are NaNs they are supposed to
compare unequal. And, of course, there are some values that sometimes
have multiple representations (for example, 0 and -0 are distinct values
that compare equal), so it is not true that two values are equal only if
they have the same bit pattern. But that's a completely different
discussion.
 

Jerry Coffin

[ ... ]
Yes, those are the issues I was talking about when I said:

*For the nitpickers: yes, if the values are NaNs they are supposed to
compare unequal. And, of course, there are some values that sometimes
have multiple representations (for example, 0 and -0 are distinct values
that compare equal), so it is not true that two values are equal only if
they have the same bit pattern. But that's a completely different
discussion.

Somehow I seem to have missed that (part of that?) post, but I think
it's basically inaccurate.

On a real machine, a floating point comparison typically takes two
operands and produces some single-bit results (usually in a special
flags register). It starts by doing a floating point subtraction of
one of those operands from the other, and then examines the result of
that subtraction to see whether it's zero, negative, etc., and sets
flags appropriately based on those conditions.

Now, you (Pete) seem to be focusing primarily on the floating point
subtraction itself. While there's nothing exactly wrong with that,
it's a long ways from the whole story. The floating point subtraction
just produces a floating point result -- and it's the checks I
mentioned (for zero and NaN) that actually determine the state of the
zero flag.

As such, far from being a peripheral detail important only to
nitpickers, this is really central, and the subtraction is nearly a
peripheral detail that happens to produce a value to be examined --
in particular, it's also perfectly reasonable (and fairly common) to
set the flags based on other operations as well as subtraction.

As I recall, the question that started this sub-thread was mostly one
of whether a floating point comparison was particularly expensive. In
this respect, I'd say Pete is basically dead-on: a floating point
comparison is quite fast not only on current hardware, but even on
ancient stuff (e.g. 4 clocks on a 486). Unless the data being
compared has been used recently enough to be in registers (or at
least cache) already, loading the data from memory will almost always
take substantially longer than doing the comparison itself.

I suspect, however, that this mostly missed the point the original
poster was attempting to make: that under _most_ circumstances,
comparing floating point numbers for exact equality is a mistake. Since
floating point results typically get rounded, you usually want to
compare based on whether the difference between the two is smaller than
some delta, and that delta will depend on the magnitude of the numbers
involved. The standard library provides FLT_EPSILON, DBL_EPSILON, and
LDBL_EPSILON in float.h (with aliases for the same general idea in other
headers); each defines the smallest difference that can be represented
between 1 and the next value larger than 1.

Therefore, if you're doing math with doubles (for example) you start
by estimating the amount of rounding that might happen based on what
you're doing. Let's assume you have a fairly well-behaved
computation, and it involves a dozen or so individual calculations.
You then do your comparison something like:

delta = fabs((val1+val2)/2.0) * DBL_EPSILON * 12.0;

if ( fabs(val1-val2) <= delta )
// consider them equal.
else
// consider them unequal.

While the comparison itself is fast and cheap, it's really only a
small part of the overall job.
 

Pete Becker

Jerry said:
Now, you (Pete) seem to be focusing primarily on the floating point
subtraction itself.

No, what I'm talking about is based on the specific representation, and
shortcuts that can be used in semi-numeric algorithms.
Jerry said:
While there's nothing exactly wrong with that,
it's a long ways from the whole story.

Of course. Remember, the context here is the assertion that "comparing
doubles in any way is an intensive CPU operation." That's far too broad
a generalization.
 

Gary Labowitz

Pete Becker said:
Gary said:

That's "Pete".

Gary said:
Perhaps you know this for sure: when we were first developing (circa
1956) it was pretty standard to do comparisons by subtracting one operand
from the other and checking the hardware zero and overflow flags. If the
hardware operation produced a zero result, the operands were considered
equal. As you can guess, there were cases where the initial bit patterns
were not exactly identical, but after the scaling, conversions, and other
manipulations involved in the subtraction, the result could be zero (all 0
bits, with underflow, if I recall right). Do modern CPUs operate in this
way? Or are they required to simulate it?
My mindset is that there is no actual "compare" of bit structures, but a
manipulation to effect subtraction and a check of the result. Comments?

On the Intel architecture, for example, the representation of floating
point values is carefully contrived so that you can determine the
relative order of two values (other than NaNs) by comparing their bits
as if they represented signed integral types. Values are always
normalized (except, of course, for values that are too small to
normalize), the high-order bit contains the sign, the next few bits
contain a biased exponent (all-zeros is the smallest [most negative]
exponent; an actual exponent of 0 is stored as 127 for floats, etc.), and
the remaining bits are the fraction. I assume the hardware takes
advantage of this...

Okay, Pete!
I worked on the microcode for the S/360 Mod 40, so I'm talking IBM mainframe
here. I'm sure the Intel architecture takes advantage of checking NaN and
sign differences before doing any data manipulations to compare values.
Special conditions like that are usually knocked out immediately. I always
wanted to study the Intel designs, but I kept putting it off until now I'm
not likely ever to look into it.
 

Peter Julian

Pete Becker said:
Huh? Again: on every architecture I'm aware of, two doubles are equal if
they have the same bit pattern. It has nothing to do with where they
came from. It's simply a fact.

That's exactly my point: the 2 doubles *must* have the same bit pattern. In
other words, this is guaranteed to fail unless 64-bit double precision or
rounded 32-bit precision is involved (the output is self-explanatory):

#include <iostream>
#include <iomanip>

int main()
{
    double d = 21.1;
    double d_result = 2.11 * 10.0;

    std::cout << "d = " << std::setprecision(20) << d;
    std::cout << "\nd_result = " << std::setprecision(20) << d_result;

    if (d == d_result)
    {
        std::cout << "\nthe exact same bit pattern.\n";
    }
    else
    {
        std::cout << "\nnot the same bit pattern.\n";
    }

    return 0;
}

Take note: some compilers will optimize the double into an int in release
mode. Some compilers will not. I'm not discussing integers here.

output:
d = 21.100000000000001
d_result = 21.099999999999998
not the same bit pattern.
Pete Becker said:
Yes, but that has nothing to do with what I said, nor does it have
anything to do with your claim, which was that "comparing doubles in any
way is an intensive CPU operation." That is simply false. Comparing
doubles for exact equality is trivial.

Do you still think it's trivial? Think again. It's not. In fact, it's as
time-consuming as it is inexact.

Pete Becker said:
It is not necessary to make the sign of the cross to ward off evil
whenever anyone mentions testing floating point values for equality.
What is necessary is objective analysis.

Versus proof? Please... read up on IEEE 754.
Test away-> http://babbage.cs.qc.edu/courses/cs341/IEEE-754.html
 
Hi, I am a B-Tech student and I have a few doubts in C++.
They are regarding the automation of a telescope.
I have to set up a telescope for remote operation.

If anybody can help, I will post a few problems.
 
Doubt

Hello, I've just started learning C++

I downloaded Turbo C++ and installed it on my PC.

There's a book that I'm referring to, and one of its programs for beginners isn't working.

/* Calculation of simple interest */
/* Author gekay Date: 25/05/2004 */
main( )
{
    int p, n ;
    float r, si ;

    p = 1000 ;
    n = 3 ;
    r = 8.5 ;

    /* formula for simple interest */
    si = p * n * r / 100 ;
    printf ( "%f" , si ) ;
}

it says ERROR NONAME00.cpp 12 : Function 'printf' should have a prototype
WARNING NONAME00.cpp 13: Function should have a value.

What is this? How can this be solved?

I've done wonders in Visual Basic but I'm a novice in C++

I need a helping hand!
 
