(double one=1.) != 1. ?


Felix E. Klee

Hi,

when I compile the program below with
gcc -O0 -o ctest ctest.c
(Platform: Intel Celeron (Coppermine), LINUX 2.4.20) I get the
following output:

one != one_alt
one_alt != one
1. != one_alt
one_alt != 1.

This looks reasonable to me. After all, one and one_alt don't have the
same binary representation. When I compile it with
g++ -O0 -o ctest ctest.c
I get a different output, though:

one != one_alt
one_alt != one
1. == one_alt
one_alt == 1.

Could someone enlighten me why "1." and "one" seem to be different when
compiled with the C++ compiler g++?

Felix


The program ctest.c:

#include <stdio.h>

#define compare_a_b(a,b) \
printf("%s %s %s\n", #a, (a == b) ? "==" : "!=", #b)

int main() {
    unsigned char c[] = {2, 0, 0, 0, 0, 0, 240, 63};
    double one = 1., one_alt;
    int i;
    for (i = 0; i < 8; ++i)
        ((unsigned char*)&one_alt)[i] = c[i];
    compare_a_b(one, one_alt);
    compare_a_b(one_alt, one);
    compare_a_b(1., one_alt);
    compare_a_b(one_alt, 1.);

    return 0;
}
 

Ron Natalie

Felix E. Klee said:
unsigned char c[] = {2, 0, 0, 0, 0, 0, 240, 63};
double one = 1., one_alt;
int i;
for (i = 0; i < 8; ++i)
    ((unsigned char*)&one_alt)[i] = c[i];


Got me, all this is implementation dependent and you change the implementation.
 

Felix E. Klee

unsigned char c[] = {2, 0, 0, 0, 0, 0, 240, 63};
double one = 1., one_alt;
int i;
for (i = 0; i < 8; ++i)
    ((unsigned char*)&one_alt)[i] = c[i];


Got me, all this is implementation dependent and you change the implementation.


Hm, but shouldn't the binary representation of floating-point numbers be
independent of the compiler? After all, I ran both executables on the
same system, and floating-point arithmetic there is handled by the FPU. In
addition, even on different systems I would expect to get the same
output, as long as they both support the same IEEE standard and as long
as the behavior of my program is well defined by that standard.

Felix
 

Ron Natalie

Felix E. Klee said:
Hm, but shouldn't the binary representation of floating point numbers be
independent of the compiler?

The representation may be...but it's quite possible that the code generated to
do the comparisons may be different with different compilers.

Implementation dependent means just that IMPLEMENTATION DEPENDENT.
If you want details as to why the implementations do different things, try
asking on a gcc group.
 

Gianni Mariani

Felix said:
Hi,

when I compile the program below with
gcc -O0 -o ctest ctest.c
(Platform: Intel Celeron (Coppermine), LINUX 2.4.20) I get the
following output:

one != one_alt
one_alt != one
1. != one_alt
one_alt != 1.

This looks reasonable to me. After all, one and one_alt don't have the
same binary representation. When I compile it with
g++ -O0 -o ctest ctest.c
I get a different output, though:

one != one_alt
one_alt != one
1. == one_alt
one_alt == 1.

Could someone enlighten me why "1." and "one" seem to be different when
compiled with the C++ compiler g++?

I cannot reproduce your results. I'm using gcc version 3.3.1.

Both compilers give the second answer.
 

Andrey Tarasevich

Felix said:
unsigned char c[] = {2, 0, 0, 0, 0, 0, 240, 63};
double one = 1., one_alt;
int i;
for (i = 0; i < 8; ++i)
    ((unsigned char*)&one_alt)[i] = c[i];


Got me, all this is implementation dependent and you change the implementation.


Hm, but shouldn't the binary representation of floating point numbers be
independent of the compiler?


Formally speaking, no, it shouldn't.
After all, I let both executables run on
the same system and floating point arithmetic is handled by the FPU.

Not necessarily. An implementation might decide to use the FPU to handle
floating-point arithmetic. Or it might decide not to. In older times,
when an FPU was optional, some implementations would "emulate"
floating-point arithmetic using the main CPU. Sometimes this CPU-based
floating-point arithmetic used a binary representation different from
the one used by the FPU on the same platform.
 

Sean Kenwrick

Felix E. Klee said:
Hi,

when I compile the program below with
gcc -O0 -o ctest ctest.c
(Platform: Intel Celeron (Coppermine), LINUX 2.4.20) I get the
following output:

one != one_alt
one_alt != one
1. != one_alt
one_alt != 1.

This looks reasonable to me. After all, one and one_alt don't have the
same binary representation. When I compile it with
g++ -O0 -o ctest ctest.c
I get a different output, though:

one != one_alt
one_alt != one
1. == one_alt
one_alt == 1.

Could someone enlighten me why "1." and "one" seem to be different when
compiled with the C++ compiler g++?

Felix


The program ctest.c:

#include <stdio.h>

#define compare_a_b(a,b) \
printf("%s %s %s\n", #a, (a == b) ? "==" : "!=", #b)

int main() {
    unsigned char c[] = {2, 0, 0, 0, 0, 0, 240, 63};
    double one = 1., one_alt;
    int i;
    for (i = 0; i < 8; ++i)
        ((unsigned char*)&one_alt)[i] = c[i];
    compare_a_b(one, one_alt);
    compare_a_b(one_alt, one);
    compare_a_b(1., one_alt);
    compare_a_b(one_alt, 1.);

    return 0;
}


The only thing I can think of is that the literal 1.0 does not imply a type
of double, therefore the compiler might be treating it as a float.
Perhaps the g++ compiler is implicitly converting the one_alt variable to
float when you compare it to 1.0. That seems unlikely, though, since I
would imagine it would be more sensible to convert the 1.0 (i.e. the float)
to a double when comparing a float to a double.

To get to the bottom of this you might need to print out some more debug
output. E.g. change your macro to accept the size of each parameter as well
and to print that out (e.g. compare_a_b(1., one_alt, sizeof(1.), sizeof(one_alt))).
Also make the macro print out the value of each byte that makes up the
parameters:

#define compare_a_b(a, b, siz_a, siz_b) { \
    printf("siz_a=%d, siz_b=%d %s %s %s\n", (int)(siz_a), (int)(siz_b), \
           #a, (a == b) ? "==" : "!=", #b); \
    printf("a ="); \
    for (int k = 0; k < (int)(siz_a); k++) printf("%d ", ((unsigned char *)&a)[k]); \
    printf("\nb ="); \
    for (int k = 0; k < (int)(siz_b); k++) printf("%d ", ((unsigned char *)&b)[k]); \
    printf("\n"); }



then in your code:

compare_a_b(one_alt, 1., sizeof(one_alt), sizeof(1.));

This should help you get to the bottom of things.

Sean
 

Ron Natalie

Sean Kenwrick said:
The only thing I can think of is that the literal 1.0 does not imply a type
of double, therefore the compiler might be treating it as a float.

It's double unless the compiler is broken.
 

Felix E. Klee

I cannot reproduce your results. I'm using gcc version 3.3.1.

I am using
gcc version 3.3 20030226 (prerelease) (SuSE Linux)
Both compilers give the second answer.

I just checked with version 3.3.1. And now I also get the same results
with gcc and g++. However, I always get the first answer, not the
second.

Felix
 

Felix E. Klee

The only thing I can think of is that the literal 1.0 does not imply a type
of double

1.0 does imply a type of double. 1.0f would imply a type of float.

Felix
 

Felix E. Klee

Formally speaking, no, it shouldn't.

That's certainly true.
After all, I let both executables run on
the same system and floating point arithmetic is handled by the FPU.

Not necessary. [...]

Sorry for being unclear. I was speaking about the system I was executing
the program on, and on that system I know that FP arithmetic is handled
by the FPU. As a user of GMP and a long-time user of computers without an
FPU, I am aware that floating-point arithmetic can be emulated with
integer arithmetic. But, unless explicitly told to do so, a standard
gcc/g++ wouldn't create FP emulation code when targeting a Pentium III.

The most likely explanation for the differences in the output of the
gcc-compiled ctest and the output of the g++-compiled ctest is that, as
Ron already said, "the comparisons may be different with different
compilers".

Similarly, g++ seems to create different code for the comparisons "one
== one_alt" and "1. == one_alt". This is a bug if it violates the C++
standard (I assume that "one == 1." holds if "one" was initialized as
"double one = 1."). With version 3.3.1, however, this potential bug is not
there anymore (see my answer to Gianni).

Felix
 

Ron Natalie

Felix E. Klee said:
Similarly, g++ seems to create different code for the comparisons "one
== one_alt" and "1. == one_alt". This is a bug if the C++ standard is
violated (I assume that "one == 1" if "one" was initialized as "double
one = 1."). With version 3.3.1, however, this potential bug is not there
anymore (see my answer to Gianni).

It's not a violation of the C++ standard. The C++ standard doesn't place
any requirements on you making denormalized values by copying arbitrary
data over floats. Whatever the compiler wants to do with that is fine.
It should make any difference in legitimate calculations.
 

Felix E. Klee

It's not a violation of the C++ standard. The C++ standard doesn't place
any requirements on you making denormalized values by copying arbitrary
data over floats.

It's not a denormalized value on *my* system. In fact I got this exact
number out of a floating point calculation. I didn't make it up.
Whatever the compiler wants to do with that is fine.
It should make any difference in legitimate calculations.
^^^^^^
You meant shouldn't, right?

In that case I have to object. I found the bug while debugging code
similar to the following:

if (a >= -1. && a <= 1.)
    acos(a);

Under certain circumstances "a" might have the value of "one_alt" (see
my OP). In this case "a <= 1." returns true, but acos(a) returns NAN!
BTW, I know that floating point comparisons are a delicate subject, but
in this case I don't see any problem since the domain of acos is -
according to the C standard (and therefore probably the C++ standard,
though I couldn't find it explicitly in there) - [-1,1]. I now added
code similar to

if (a == 1.)
    a = 1.;
else if (a == -1.)
    a = -1.; // Don't know if this is necessary

Felix
 

Ron Natalie

Felix E. Klee said:
It's not a denormalized value on *my* system. In fact I got this exact
number out of a floating point calculation. I didn't make it up.
Sure enough, it's not; I just decoded that value by hand (assuming an LSB-first
IEEE encoding).

The value is NOT 1. It's slightly more than one. It is most certainly a domain
error for acos.

The format is

S EEEEEEEEEEE FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
0 01111111111 0000000000000000000000000000000000000000000000000010

E=1023, which is an exponent of zero. The fractional part is added to 1,
which makes the value 2*epsilon bigger than 1.0.

What calculation yielded this value?
 

Felix E. Klee

Under certain circumstances "a" might have the value of "one_alt" (see
my OP). In this case "a <= 1." returns true, but acos(a) returns NAN!

I forgot to mention that there actually might be an explanation for
problems like this. "a" might be slightly different from 1.0 and at the
same time have a different exponent than the default representation of
1.0. In this case the mantissa and exponent of "a" might have to be
adjusted during the comparison, and consequently the small difference
might get lost. However, this does not seem to be the case here. The
only difference seems to be in the mantissa:

one_alt = (2,0,0,0,0,0,240,63)
one = (0,0,0,0,0,0,240,63)

The code to output these numbers was similar to

unsigned char *c = (unsigned char*)&one;
cout << "(";
for (int i = 0; i < 8; ++i)
    cout << (short)c[i] << ((i < 7) ? "," : ")");

Felix
 

Julián Albo

Ron Natalie wrote:

I had a problem with gcc when comparing a value with 0.0: a value was
strictly positive, but n > 0 gave false. Perhaps this is another symptom
of the same problem?

The workaround I found is to define "double zero = 0.0;" and do the
comparison with zero. Note that zero is not const; declaring it as
const does not solve the problem.

I had this problem with gcc 3.3.2.

Regards.
 

Felix E. Klee

What calculation yielded this value?

I appended a program below that calculates and outputs (in the same
format I used before) this number. But why do you ask? Is this number
somehow special, i.e. should it normally not appear as a result of
calculations?

BTW, would you say that code like

if (x >= -1 && x <= 1)
    y = acos(x);

"calls for trouble"? Should I write

if (x >= -1+eps && x <= 1-eps)
    y = acos(x);
else if (x >= -1 && x < -1+eps)
    y = pi;
else if (x > 1-eps && x <= 1)
    y = 0;

instead, or is this a waste of space and precision?

Felix


The program (compile with gcc version 3.3 20030226 (prerelease) (SuSE
Linux) or similar on an Intel Celeron (Coppermine) or similar):

#include <iostream>
using namespace std;

int main() {
    double x = double(80)/100;
    double y = double(20)/100;
    double one_alt = (y-1) / (2*(x-1)) - 1;
    unsigned char *c = (unsigned char*)&one_alt;
    for (int i = 0; i < 8; ++i)
        cout << (short)c[i] << ((i < 7) ? "," : "\n");
    return 0;
}
 

Ron Natalie

Felix E. Klee said:
I appended a program below that calculates and outputs (in the same
format I used before) this number. But why do you ask? Is this number
somehow special, i.e. should it normally not appear as a result of
calculations?

It isn't special...I just wanted to know why you think it is the value 1?
double x = double(80)/100;
double y = double(20)/100;
double one_alt = (y-1) / (2*(x-1)) - 1;
unsigned char *c = (unsigned char*)&one_alt;
for (int i = 0; i < 8; ++i)
    cout << (short)c[i] << ((i < 7) ? "," : "\n");
return 0;


Ahh...You do know that .8 and .2 may end up not being precisely representable and
as a result introduce tiny errors in your calculation?
 

Felix E. Klee

It isn't special...I just wanted to know why you think it is the value 1?

That's not what I think. I know that floating point numbers that have a
precise representation in decimal form don't necessarily have a precise
representation in binary form. What was puzzling me was the fact that
"one_alt <= 1." returns "true", although one_alt is greater than 1.
(also puzzling was the fact that "one" is treated differently from 1.,
but let's forget about this for now).

As described in a previous posting, I understand that a comparison
"x<=y" might return a wrong result if the two numbers that are compared
don't have the same exponent. Let me give an example: Suppose that the
two numbers x and y have the internal binary representations "11*2^10"
and "1*2^11", respectively. Of course x is greater than y. Some weird
FPU (not that of a typical computer, of course), however, might convert
x to "1*2^11" before the comparison, and consequently "x<=y" returns
"true". Note that in the case of "one" and "one_alt" the exponents are
the same, so the above explanation is not applicable and I still don't
know what's going on.

I conclude that code like

if (x >= -1. && x <= 1.)
    y = acos(x);

is bad and should be replaced by

if (x >= -1+eps && x <= 1-eps)
    y = acos(x);
else if (x >= -1 && x < -1+eps)
    y = pi;
else if (x > 1-eps && x <= 1)
    y = 0;
I already posted that in my previous reply, but unfortunately I didn't
get a comment from you. Could you please give me one now?

Felix
 
