Linux oddity


Keith S.

Hi Folks,

When converting a double to an int, the result is not as
I'd expect on Linux:

#include <stdio.h>
int
main(int argc, char **argv)
{
    double val = 0.24;
    int multiplier = 2000;
    int result = static_cast<int>(val * multiplier);
    printf("result = %d (should be 480)\n", result);
    return 0;
}

The above code prints 480 on SunOS and Windows, but on
Linux with gcc 3.2 it prints 479. Is there a valid
explanation for this difference?

- Keith
 

Michael Lehn

Keith said:
Hi Folks,

When converting a double to an int, the result is not as
I'd expect on Linux:

#include <stdio.h>
int
main(int argc, char **argv)
{
    double val = 0.24;
    int multiplier = 2000;
    int result = static_cast<int>(val * multiplier);
    printf("result = %d (should be 480)\n", result);
    return 0;
}

The above code prints 480 on SunOS and Windows, but on
Linux with gcc 3.2 it prints 479. Is there a valid
explanation for this difference?

- Keith

Hmmm, seems to be a numerical problem:

#include <cstdio>
int
main(int argc, char **argv)
{
    double val = 0.24;
    int multiplier = 2000;
    int result = static_cast<int>(val * multiplier);
    printf("val * multiplier - 480 = %20.20f\n", val * multiplier - 480);
    return 0;
}

but it's strange. I assumed that the result would depend only on the CPU,
not the operating system...
 

Rob Williscroft

Keith S. wrote in
Hi Folks,

When converting a double to an int, the result is not as
I'd expect on Linux:

#include <stdio.h>
int
main(int argc, char **argv)
{
    double val = 0.24;
    int multiplier = 2000;
    int result = static_cast<int>(val * multiplier);
    printf("result = %d (should be 480)\n", result);
    return 0;
}

The above code prints 480 on SunOS and Windows, but on
Linux with gcc 3.2 it prints 479. Is there a valid
explanation for this difference?

The compilers are using different rounding rules, as I believe
they are allowed to do. Note that 0.24 can't on most machines
be represented as an exact value; what you actually get is
something like 0.23999999999997. So the two compilers that
give 480 are rounding up or to the nearest value, and the
Linux box is rounding down. It's the static_cast<int>() that
is doing the rounding.

Look up the functions std::floor() and std::ceil(), declared
in <cmath>.

HTH

Rob.
 

Keith S.

Rob said:
The compilers are using different rounding rules, as I believe
they are allowed to do. Note that 0.24 can't on most machines
be represented as an exact value; what you actually get is
something like 0.23999999999997. So the two compilers that
give 480 are rounding up or to the nearest value, and the
Linux box is rounding down. It's the static_cast<int>() that
is doing the rounding.

Understood, although I would have expected that static_cast<int>'s
behaviour was the same (with respect to rounding method) on
different platforms, especially when using the same compiler...

- Keith
 

Noah Roberts

Rob said:
Keith S. wrote in



The compilers are using different rounding rules, as I believe
they are allowed to do. Note that 0.24 can't on most machines
be represented as an exact value; what you actually get is
something like 0.23999999999997.

That is probably what is happening on the Linux machine.

So the two compilers that
give 480 are rounding up or to the nearest value, and the
Linux box is rounding down. It's the static_cast<int>() that
is doing the rounding.

AFAIK you always round down when converting to int.

Look up the functions std::floor() and std::ceil(), declared
in <cmath>.

HTH

Rob.

BTW, in the GNOME calculator on Linux I get 480. Check out their code and
see why.
 

Ron Natalie

Rob Williscroft said:
It's the static_cast<int>() that is doing the rounding.

It is NOT. The floating point to int conversion always ignores the fractional part.
It is what you said earlier: the conversion of the literal .24 to its double value
is picking which of the two representable values it falls between.
Look up the functions std::floor() and std::ceil(), declared
in <cmath>.

These aren't going to help.
 

Keith S.

Ron said:
It is NOT. The floating point to int conversion always ignores the fractional part.

Sorry, but this is not true. Try the code with gcc on Solaris
and you'll find that static_cast<int> *rounds* to the nearest
int, rather than *truncating* as gcc on Linux does.

int result = static_cast<int> (round(val * multiplier));

is the workaround I use to get consistent behaviour on all compilers
(i.e. to get Linux gcc to behave like everything else).

- Keith
 

Ron Natalie

Keith S. said:
Sorry, but this is not true. Try the code with gcc on Solaris
and you'll find that static_cast<int> *rounds* to the nearest
int, rather than *truncating* as gcc on Linux does.

Nonsense. I just tried it and it truncates. It has to. I've been writing
code for over a decade that relies on this behavior and I've never
come across a compiler that gets it wrong yet.

While rounding behavior in the FLOATING POINT calculations is at the
discretion of the compiler (and IEEE FP defaults to round-to-nearest), the
floating point to integer conversion in C and C++ is mandated to be
truncation. This runs into fun and games on the Pentium, as G++ as
well as several other compilers do something really stupid to accomplish
the truncation that kills performance (setting the FPU control word to
change the rounding mode).
int result = static_cast<int> (round(val * multiplier));

round() is neither floor() nor ceil().
The static_cast, by the way, is totally unnecessary (other than to suppress a
possible compiler warning).
 

Keith S.

Ron said:
Nonsense. I just tried it and it truncates. It has to. I've been writing
code for over a decade that relies on this behavior and I've never
come across a compiler that gets it wrong yet.

Oh all right then. Here is the result on Linux:

[keith@pc-keiths keith]$ uname -a
Linux pc-keiths 2.4.19-16mdkenterprise #1 SMP Fri Sep 20 17:34:59 CEST
2002 i686 unknown unknown GNU/Linux
[keith@pc-keiths keith]$ gcc test.cpp
[keith@pc-keiths keith]$ a.out
result = 479 (should be 480)

and here is the same code run on SunOS (VC6 gives the same result too):

45 otto% uname -a
SunOS otto 5.8 Generic_108528-09 sun4u sparc SUNW,Sun-Blade-100
46 otto% gcc test.cpp
47 otto% a.out
result = 480 (should be 480)

The static_cast, by the way, is totally unnecessary (other than to suppress a
possible compiler warning).

which is exactly why it's there :)

- Keith
 

lilburne

Keith said:
Hi Folks,

When converting a double to an int, the result is not as
I'd expect on Linux:

#include <stdio.h>
int
main(int argc, char **argv)
{
    double val = 0.24;
    int multiplier = 2000;
    int result = static_cast<int>(val * multiplier);
    printf("result = %d (should be 480)\n", result);
    return 0;
}

The above code prints 480 on SunOS and Windows, but on
Linux with gcc 3.2 it prints 479. Is there a valid
explanation for this difference?

Check the status of the floating point chip. The calculation
may be done with 80-bit precision in the first two cases and
64-bit precision in the latter case.
 

Ron Natalie

Keith S. said:
Oh all right then. Here is the result on Linux:
No, no, no. The conversion to int is NOT rounding. Try looking at the
floating point value BEFORE it is converted to int. In your Sun case it
is slightly more than 480; in the Linux case it is slightly less than 480.
THEN, when it is truncated to int, one gives 480 and the other 479.

The imprecision occurred when .24 was converted to a floating point number.
Try comparing the floating value val*multiplier with 480.0. It's less than 480.0
on Linux and greater than 480.0 on the Sun.
 

Keith S.

Ron said:
No, no, no. The conversion to int is NOT rounding. Try looking at the
floating point value BEFORE it is converted to int. In your Sun case it
is slightly more than 480; in the Linux case it is slightly less than 480.
THEN, when it is truncated to int, one gives 480 and the other 479.

The imprecision occurred when .24 was converted to a floating point number.
Try comparing the floating value val*multiplier with 480.0. It's less than 480.0
on Linux and greater than 480.0 on the Sun.

Hmm, you're right. However, this doesn't help the original problem,
which is that the behaviour differs between platforms,
and gcc on Linux seems to be the odd one out. Every other
platform/compiler gives the expected answer except Linux
(including my 27-year-old pocket calculator).


- Keith
 

Keith S.

lilburne said:
Check the status of the floating point chip. The calculation may be
done with 80-bit precision in the first two cases and 64-bit
precision in the latter case.

How do you do this?

- Keith
 

lilburne

Keith said:
Thanks, very interesting article. A shame that the
Linux developers couldn't see that pedantic accuracy
is less important than sensible results.

Well whether you do 64 bit or 80 bit FP operations isn't
really the issue. The problem is that code like

int i = 0.24*2000;

or

if (x == y) {
...
}

where x and y are doubles, are actually bugs if you care
about accuracy. FP calculations are essentially inaccurate
and great care needs to be taken to ensure the stability of
FP results. This is one of the reasons why we test our
application on more than one architecture.
 

Ron Natalie

lilburne said:
where x and y are doubles, are actually bugs if you care
about accuracy. FP calculations are essentially inaccurate

They are not "essentially inaccurate" unless you've got a really sloppy
implementation. The issue is that numbers which appear to be exactly
representable in decimal are NOT in floating point, yielding small errors.
 

lilburne

Ron said:
They are not "essentially inaccurate" unless you've got a really sloppy
implementation. The issue is that numbers which appear to be exactly
representable in decimal are NOT in floating point, yielding small errors.

Seems like you're saying that FP calculations are
"essentially inaccurate" too. The small error exhibited here
resulted in a gross difference in result when the integer
conversion took place.

Those that care about the maths go to great pain to avoid
instability in the expressions used, and are particularly
careful about rounding errors, and loss of significance.
 

Rob Williscroft

Ron Natalie wrote in
It is NOT. The floating point to int conversion always ignores the
fractional part. It is what you said earlier: the conversion of the
literal .24 to its double value is picking which of the two
representable values it falls between.

Right, thanks for the correction.

Rob.
 

Mattias Ekholm

It is strange, though, that static_cast changes the result in this
unexpected way.
On Linux, r1 and r2 will differ:

double tmp = val * multiplier;
int r1 = static_cast<int> (tmp);
int r2 = static_cast<int> (val * multiplier);

/Mattias
 
