Linux oddity


lilburne

Mattias said:
It is strange, though, that static_cast changes the result in that
unexpected way.
On Linux r1 and r2 will differ

double tmp = val * multiplier;
int r1 = static_cast<int> (tmp);
int r2 = static_cast<int> (val * multiplier);

Shrug - you'd expect the optimizer to do the
calculation only once and, in any case, to produce the same code.
The only way that you can really determine what is going on
is to examine the compiler output (on g++ use the -S
option). But all FP calculations, and particularly
comparisons, should be done with caution: always be
prepared for inaccuracy and code for it.

An example of FP problems with division on my system.

#include <iostream>

int main()
{
    double d = 123456.789012345;
    double e = d;
    std::cout.precision(20);
    std::cout << d << std::endl;
    d *= 10.0;
    std::cout << d << std::endl;
    std::cout << "attempting multiplication" << std::endl;
    double f = d;
    d *= 0.1;
    std::cout << d << std::endl;
    if (e != d) {
        std::cout << "Not equal" << std::endl;
    }
    std::cout << "attempting division" << std::endl;
    f /= 10.0;
    std::cout << f << std::endl;
    if (e != f) {
        std::cout << "Not equal" << std::endl;
    }
    return 0;
}
 

Ron Natalie

Mattias Ekholm said:
It is strange, though, that static_cast changes the result in that
unexpected way.
On Linux r1 and r2 will differ

double tmp = val * multiplier;
int r1 = static_cast<int> (tmp);
int r2 = static_cast<int> (val * multiplier);
This is most likely a bug that nobody is worked up enough about to
argue over.

The Intel floating-point processor handles 80-bit floats internally. When it does
the store to the 64-bit double, it rounds (because most of the time
the processor is left in round-to-nearest mode). So the 64-bit value is rounded
up to exactly 480. The direct 80-bit value, in truncation mode, is
less than 480.0, so it stores 479. The conversion from 80-bit float to 32-bit int
does not use a 64-bit intermediary.
 

Keith S.

lilburne said:
Well, whether you do 64-bit or 80-bit FP operations isn't really the
issue. The problem is that code like

int i = 0.24*2000;

or

if (x == y) {
...
}

where x and y are doubles, are actually bugs if you care about accuracy.
FP calculations are essentially inaccurate and great care needs to be
taken to ensure the stability of FP results. This is one of the reasons
why we test our application on more than one architecture.

Code like int i = 0.24*2000 is not a bug. The user requested a valid
calculation, the fact that the hardware/software couldn't give the
correct result is the bug.

Anyhow, why does float to int conversion truncate? It makes no sense
to me; it should round. 0.999999 should convert to 1, not 0.


- Keith
 

lilburne

Keith said:
Code like int i = 0.24*2000 is not a bug. The user requested a valid
calculation, the fact that the hardware/software couldn't give the
correct result is the bug.

That is the nature of FP calculations. If you don't like it,
either don't use them, or program in such a way that
inaccurate calculations are expected and dealt with. See
"Seminumerical Algorithms" by Knuth. Digital is not analog.
Anyhow, why does float to int conversion truncate? It makes no sense
to me; it should round. 0.999999 should convert to 1, not 0.

It's been that way since way before languages like C or C++
were invented. If you want to round to the nearest integer
you add 0.5 first.
 

Keith S.

lilburne said:
It's been that way since way before languages like C or C++ were
invented. If you want to round to the nearest integer you add 0.5 first.

Err, stupidity may well go back a long way but why encourage it?

- Keith
 

lilburne

Keith said:
Err, stupidity may well go back a long way but why encourage it?

Because it has predictable behaviour, and there is a simple
solution of adding 0.5 before converting to an int.
 

Keith S.

lilburne said:
Because it has predictable behaviour, and there is a simple solution of
adding 0.5 before converting to an int.

But rounding is predictable. If the fractional part is
less than 0.5, round down, else round up. What's difficult
about that?

- Keith
 

Sam Holden

Keith S. said:
But rounding is predictable. If the fractional part is
less than 0.5, round down, else round up. What's difficult
about that?

double x = -10.9;

Many accountants would disagree with your simple rounding scheme.
As would many statisticians. As would many computer scientists.
 

Karl Heinz Buchegger

Keith S. said:
But rounding is predicatable.

Really?

The same problem you saw at the border of 0.999999998 to 1.0 occurs
at the border from 0.499999999 to 0.5.

That's the way it is, and there is nothing you (or I) can do about it.
It is an inherent property of how floating-point calculations are done on
a computer. Learn to live with it.

The next pitfall waiting for you is the comparison.
An experienced programmer doesn't write

if( SomeDoubleNumber == 0.24 )
...

for the very same reason. Depending on the history of SomeDoubleNumber
(what you did to that variable previously), the number in it may be greater
than 0.24 or less than 0.24, but it is almost never exactly 0.24.

The way to deal with it is to change your way of thinking. With floating-point
numbers you never ask: are they equal? Instead you ask: is the difference small
enough that I can treat them as being equal?

if( fabs( SomeDoubleNumber - 0.24 ) < SomeEpsilon )
// they are equal

What you insert for SomeEpsilon is again dependent on what you previously
did to SomeDoubleNumber.
If the fractional part is
less than 0.5, round down, else round up. What's difficult
about that?

Nothing. In the time it has taken you to write your reply you
could have implemented a round() function which does exactly what you
want. This round function is sufficient for you, but there are other
ways of rounding too, and that's why no such function comes
prebuilt.
 

Keith S.

Karl said:
Really?

the same problem you saw at the border of 0.999999998 to 1.0 occourse
at the border from 0.499999999 to 0.5

That's a different problem. The specific one I had was with the
fact that Linux gcc is doing the conversion from a float with
80-bit accuracy rather than 64-bit accuracy. By trying to be
more accurate, it is non-portable. Interestingly even M$ VC6
gets it right, along with the other platforms I need to support.

Oh well, more #ifdef linux_x86...

- Keith
 

Karl Heinz Buchegger

Keith S. said:
That's a different problem. The specific one I had was with the
fact that Linux gcc is doing the conversion from a float with
80-bit accuracy rather than 64-bit accuracy. By trying to be
more accurate, it is non-portable.

You are under a wrong impression:
floating-point arithmetic is never portable.
That's because C++ (as well as C) doesn't specify how floating-point
arithmetic has to be done. There are various schemes around,
and all of them suffer from the same problem (stuffing infinitely
many numbers into a fixed number of bytes) and have different
ways of dealing with it.
Interestingly even M$ VC6
gets it right, along with the other platforms I need to support.

There is no right or wrong.
Change the specific numbers and what you feel is 'right' turns around.
Oh well, more #ifdef linux_x86...

No. That's the wrong way.
Adding epsilons, rounding corrections and accounting for
numerical problems is the way to go.

double tmp = ...;

int i = static_cast<int>(tmp + 0.5);

or to take sign into account:

int round( double d )
{
    if( d > 0.0 )
        return static_cast<int>(d + 0.5);
    return static_cast<int>(d - 0.5);
}


"Working with floating point numbers is like moving
piles of sand. Every time you do it, you loose a
little sand and add a litle dirt."

It is this dirt you need to take into account. No floating
point hardware or system can change that. And a few #ifdef's
are certainly not the tool to deal with that fact.
 

Pete Becker

Keith S. said:
But rounding is predictable. If the fractional part is
less than 0.5, round down, else round up. What's difficult
about that?

That's one way to round. There are several more. You can round toward
zero (i.e. truncate, in typical implementations), round down (negative
numbers get more negative), round up. Then there's the question of what
to do with that 0.5. Most people round that one up. Banker's rounding
rounds to the nearest even value (1.5 goes to 2.0, 2.5 also goes to
2.0). That removes a slight bias.

For more details, see www.petebecker.com/js200007.html.
 

Ron Natalie

Keith S. said:
But rounding is predictable. If the fractional part is
less than 0.5, round down, else round up. What's difficult
about that?

Truncation is predictable as well. Predictability is good, but it's not
the reason.
 

Ron Natalie

Karl Heinz Buchegger said:
You are under a wrong impression:
floating-point arithmetic is never portable.

Still, the fact that storing a double to a variable changes its value is
really bogus.
 
