Code to print each part of double as a separate group of bits


Kai-Uwe Bux

Virtual_X said:
I have written another powerful function ("as I thought :)") to check
double equality with very high precision, because it checks equality for
every bit instead of using the "Microsoft MSDN method".
Here's my function (it depends on pb()):
bool d_eq(double x, double y)
{
    byte bx[8];
    byte by[8];
    pb(reinterpret_cast<unsigned char*>(&x), bx);
    pb(reinterpret_cast<unsigned char*>(&y), by);
    for (int i = 0; i < 8; i++)          // each byte of the double
        for (int o = 0; o < 8; o++)      // each bit of that byte
            if (bx[i].bit[o] != by[i].bit[o]) return false;

    return true;
}

Where [quoted from some other posting]:


struct byte
{
    bool bit[8]; // 8 = sizeof(double);
};

void pb(unsigned char *ch, byte *bin)
{
    for (int i = 0; i <= 7; i++)         // each byte of the double
        for (int o = 7; o >= 0; o--)     // each bit, most significant first
        {
            if (ch[i] & (1 << o))
                bin[i].bit[7 - o] = true;
            else
                bin[i].bit[7 - o] = false;
        }
}


May I ask what advantage d_eq(x,y) has compared to x==y?

you aren't able to use the operator "==" with floating-point numbers


That is incorrect.
"it's not a precision method"

That is a meaningless statement.
check the standard

The standard is with me on this issue (operator== tests for being equal, and
that's it).
For more info, check this link:
http://www.cprogramming.com/tutorial/floating_point/understanding_floating_point.html


That link says (among other things):

... why is it so hard to know when two floats are equal? In one sense, it
really isn't that hard; the == operator will, in fact, tell you if two
floats are exactly equal (i.e., match bit for bit). ...

which seems to agree with me.

Whereas this link is unrelated to C++, as C++ does not require IEEE
conformance.


d_eq compares each bit, so it's very precise.
Try this code:

double x = 215.256487876545;
double y = 215.256487876545;

cout << d_eq(x,y);

and then try

double x = 215.256487876545;
double y = 215.2564878765449; // I changed only the last digit and added
another digit to the fraction

cout << d_eq(x,y);

you will see the precision

The standard guarantees that you will get the same result with operator==
provided the type double on your implementation has the precision required
to distinguish x and y.

I think your d_eq() is a non-portable equivalent of

bool truly_equal ( double lhs, double rhs ) {
double volatile x = lhs;
double volatile y = rhs;
return ( x == y );
}

Here, I hope the volatile causes a write to memory so that any excess
precision that the parameters may have according to [5/10] goes away.

Absolutely. As you see in the Microsoft MSDN code, you must limit the
equality precision with

#define EPSILON 0.0001 // Define your own tolerance

In d_eq() you don't need that; just put in any numbers.
The function d_eq(), as I thought, is very important in advanced math
applications when you need high precision, e.g.
the number 215.256487876545 is not equal to 215.2564.

Actually, in advanced math you may need a different precision than 0.00001,
but you will practically always want a precision that is below the inherent
precision of the floating point type. Otherwise, it is very very hard to
ensure that loops terminate. Consider, for instance, the bisection method
for solving equations implemented like this:

double low = something;
double high = something_else;
while ( ! d_eq( low, high ) ) {
    double middle = (low + high) / 2.0;
    if ( f(low) and f(middle) have the same sign ) {
        low = middle;
    } else {
        high = middle;
    }
}

It may happen that the loop does not terminate.

That is why I doubt that your method yields the _desired_ result.
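(For illustration, a minimal compilable sketch of that bisection loop with an explicit tolerance; f and the bracketing interval are placeholders of mine, and the tolerance is deliberately far coarser than double's precision so the loop stops:)

#include <cmath>

double f(double x) { return x * x - 2.0; }    // placeholder: any continuous function

double bisect(double low, double high, double tol = 1e-9)
{
    while (high - low > tol) {                // stop once the bracket is narrower than tol
        double middle = (low + high) / 2.0;
        if (f(low) * f(middle) > 0.0)         // same sign: the sign change is in [middle, high]
            low = middle;
        else
            high = middle;
    }
    return (low + high) / 2.0;
}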


Best

Kai-Uwe Bux
 

Virtual_X

Virtual_X wrote:
As in IEEE 754, a double consists of:
a sign bit
11 bits for the exponent
52 bits for the fraction
I wrote this code to print the parts of a double as explained in IEEE 754.
I want to know if the code contains any bugs; I am still a C++ beginner.
#include <iostream>
using namespace std;
struct byte
{
bool bit[8]; // 8 = sizeof(double);
What a misleading comment! The struct is called 'byte'. What does the
size of the array (8) have to do with the size of 'double'? It is
a pure coincidence, isn't it?
Sorry, I don't know exactly what you mean,
but I think the remaining code will explain why I made the array of 8:
because a byte = 8 bits.
Whenever you write your code, remember that it's going to be read by
somebody at some point. If not, if you write your code once and never
let anybody (even yourself) look at it again, then why put any comments
in it at all?
The declaration of the 'bit' member is
bool bit[8];
The comment right next to it is
// 8 = sizeof(double)
Does that mean that the '8' in the declaration is used because on your
system 'sizeof(double)' yields 8? That's how I am reading it.
That was a mistake.
I changed the code and forgot to remove the comment, sorry :)
BTW, not all systems that support IEEE 754 double have their 'byte'
equal to 8 bits. So, your program is only portable to the systems
that share the size of 'char' with yours.
I think the code can be made portable if we use sizeof, so that we know
the size of double; and for unsigned char, maybe we can replace it with
bool (I didn't try that), which is always 1 byte.
I have written another powerful function ("as I thought :)") to check
double equality with very high precision, because it checks equality for
every bit instead of using the "Microsoft MSDN method".
Here's my function (it depends on pb()):
bool d_eq(double x, double y)
{
    byte bx[8];
    byte by[8];
    pb(reinterpret_cast<unsigned char*>(&x), bx);
    pb(reinterpret_cast<unsigned char*>(&y), by);
    for (int i = 0; i < 8; i++)          // each byte of the double
        for (int o = 0; o < 8; o++)      // each bit of that byte
            if (bx[i].bit[o] != by[i].bit[o]) return false;
    return true;
}
and the Microsoft MSDN method:
#include <stdio.h>   // for printf_s (Microsoft CRT)

#define EPSILON 0.0001 // Define your own tolerance
#define FLOAT_EQ(x,v) (((v - EPSILON) < x) && (x < (v + EPSILON)))

int main() {
    float a, b, c;
    a = 1.345f;
    b = 1.123f;
    c = a + b;
    // if (FLOAT_EQ(c, 2.468)) // Remove comment for correct result
    if (c == 2.468) // Comment this line for correct result
        printf_s("They are equal.\n");
    else
        printf_s("They are not equal! The value of c is %13.10f "
                 "or %f", c, c);
}
you can find it here: http://msdn2.microsoft.com/en-us/library/c151dt3s(VS.80).aspx
I always thought that C/C++ is really powerful.
Unless I am missing something, your method is attempting the
equivalent of the built-in == operator, which should never ever be
used to check equality on floating-point types, because floating-point
values are always rounded to the best binary representation.

No, it's a new idea for floating-point equality, not based on
the operator "==";
instead it compares each bit of both variables.
I wrote it because I know that the operator "==" can't be used with
floating-point numbers.
I know that; please read my function carefully to see exactly what it does,
and note that it depends on the first function, pb().

Your function does the equivalent of the == operator.
Test the above block, replacing == with a call to your d_eq(),
and watch your function give you unintended results. If you know
that, then why are you attempting to compare each bit exactly,
knowing that they will not be exactly the same?


I don't check if the two doubles are close enough, and you can test the
function;
instead I check if the two doubles are exactly the same (that's very
important in some
advanced math applications), and by changing some parts of the code to
use sizeof (to know the size of double in bytes) the code will be
portable enough.

I am NOT saying that your function checks if two doubles are close
enough, I am saying that it SHOULD!!! It is not very important in
advanced math applications, as you say, because any time any math is
performed on a double, the bits are not going to be the same as the
expected result. I challenge you to show me any math application where
the d_eq function has practical use and behaves as expected.

I am going to guess that you understand that you should not use == for
comparison of two doubles, but MISUNDERSTAND the REASON. I say this
because your function is performing the same action as the == operator
would, it is comparing the bits for equality. If your mother says
"don't put your hand on the stove" and you put your foot there
instead, you are still going to get burned.

If you still feel I am in error, then please present a compilable test
case using your function d_eq, and the == operator, and show how the
results differ, you may present some of this "advanced math" as well.


Some advanced math applications, like physics simulators: all
research now in that field tends toward high-precision
manipulation of floating-point numbers. Check links like:

http://crd.lbl.gov/~dhbailey/
http://crd.lbl.gov/~dhbailey/mpdist/
http://www.egenix.com/products/python/mxExperimental/mxNumber/mxNumber.pdf
http://www.myphysicslab.com/index.html
http://www.ode.org/

Google it to see more and more in that field.

And why, when using "Pi" in game development, don't we just say Pi=3.14
instead of a high-precision floating-point number like
Pi=3.141592653589793238? Because that precision definitely affects the
game graphics.

If you want to make a good program in that field, you must definitely
care about numerical precision.

Number 2: the code you want to test to check the difference between
"d_eq" and "==".

First, any time you use "==" it will not produce the same result. (That's
not just my claim; you can find it in the standard, or ask about it.)

Second, check this example:

double x = 0.00001;
double y = 0.000009 + 0.000001;

if (d_eq(x,y)) cout << "d_eq equivalent\n";
if (x == y) cout << "== equivalent";

This example will show you that you can't rely on "==" for floating-
point equivalence; try testing it on different architectures and check
the result.
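(A self-contained variant of that test, using memcmp as a stand-in for d_eq since it performs the same bit-for-bit comparison of the stored representations:)

#include <cstring>
#include <iostream>

// Bitwise comparison of the object representations -- what d_eq amounts to.
bool bitwise_equal(double a, double b)
{
    return std::memcmp(&a, &b, sizeof(double)) == 0;
}

int main()
{
    double x = 0.00001;
    double y = 0.000009 + 0.000001;

    std::cout << "bitwise: " << (bitwise_equal(x, y) ? "equal" : "not equal") << '\n';
    std::cout << "==     : " << ((x == y) ? "equal" : "not equal") << '\n';
    // For ordinary values like these the two lines agree: == already compares
    // the stored values.  (They can differ for +0.0 vs -0.0 and for NaN --
    // see James Kanze's posts further down.)
}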

Finally, maybe you are right and maybe not.
Thanks for your review.
 

Virtual_X

[snip]

Thanks a lot.
I admit you are right (I am not an expert, just a beginner).
 

Christopher

[snip]

You are claiming you wrote a function that compares every bit for
equality. Ok, I trust you did that.
Now tell me once again, what does operator == do? It checks every bit
for equality.
Can we please establish that:
"compares every bit for equality" == "compares every bit for
equality"???




[snip]
Again. You are saying your function is a "new idea" as opposed to ==,
but they do the same thing! You said so yourself!: "it check equality
for every bit"
operator == also checks every bit for equality!!!

How is "checks every bit for equality" != "it check equality for every
bit", other than some grammar issues??


http://crd.lbl.gov/~dhbailey/
http://crd.lbl.gov/~dhbailey/mpdist/
http://www.egenix.com/products/python/mxExperimental/mxNumber/mxNumber.pdf
http://www.myphysicslab.com/index.html
http://www.ode.org/

Google it to see more and more in that field.

And why, when using "Pi" in game development, don't we just say Pi=3.14
instead of a high-precision floating-point number like
Pi=3.141592653589793238? Because that precision definitely affects the
game graphics.
If you want to make a good program in that field, you must definitely
care about numerical precision.



Why are you posting web sites and making an argument that is for my
argument?
Yes, we all care about precision.
Yes, a lot of applications need high precision.
However, "high precision" != "check every bit for equality"

Number 2: the code you want to test to check the difference between
"d_eq" and "==".

First, any time you use "==" it will not produce the same result. (That's
not just my claim; you can find it in the standard, or ask about it.)

Second, check this example:
double x = 0.00001;
double y = 0.000009 + 0.000001;

if (d_eq(x,y)) cout << "d_eq equivalent\n";
if (x == y) cout << "== equivalent";

This example will show you that you can't rely on "==" for floating-
point equivalence; try testing it on different architectures and check
the result.

You are arguing for my argument again: "this example will show that
you can't rely on == for floating-point equivalence" is exactly
what I am telling you.
The point you are missing is that == is the same as checking every bit
for equality.

Your test case is incomplete.
Where did you state what the expected results are?
What output did you get from your test case to compare with the
expected results?
Where did you prove d_eq works as it is intended?
Where did you prove that d_eq works at all?
I assume you got no output at all from this test case...

Finally, maybe you are right and maybe not.
Thanks for your review.

You're welcome; however, it seems you think you're the expert. I just
program advanced math and graphics applications for a living.
From your original post:
"i want to know if the code contain any bug , i am still c++ beginner"
 

Christopher

[snip]
Thanks a lot.
I admit you are right (I am not an expert, just a beginner).


Do this for me, to clarify things and make certain we both understand
what the other is saying, because I have a feeling there is some
understanding being lost in interpretation of what was typed.

State what your argument is
State what my argument is

Then maybe we can make sense of things a little easier.
 

Virtual_X

Do this for me, to clarify things and make certain we both understand
what the other is saying, because I have a feeling there is some
understanding being lost in interpretation of what was typed.

State what your argument is
State what my argument is

Then maybe we can make sense of things a little easier.

Thanks for your time. As I said, I am just a beginner who wants to learn
more,
not to show off my abilities.
 

James Kanze

On Nov 1, 1:13 pm, Virtual_X <[email protected]> wrote:
Unless I am missing something, your method is attempting the
equivalent of the built in == operator,

Not really, since it will report that -0.0 is not equal to +0.0.
In sum, he will find that 0.0/-1.0 != 0.0/1.0. I don't think
I'd use his method, even.
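(A small self-contained illustration of the signed-zero point, assuming an IEEE representation; memcmp stands in for the bit-for-bit comparison that d_eq performs:)

#include <cstring>
#include <iostream>

int main()
{
    double pos = 0.0 / 1.0;     // +0.0
    double neg = 0.0 / -1.0;    // -0.0

    bool same_bits = std::memcmp(&pos, &neg, sizeof(double)) == 0;

    std::cout << "bit-for-bit: " << (same_bits ? "equal" : "not equal") << '\n';    // not equal
    std::cout << "operator== : " << ((pos == neg) ? "equal" : "not equal") << '\n'; // equal
}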
which should never ever be used to check equality on floating
point types.

Never is a strong word. You should use == to check for equality
of floating point types anytime you want to check for equality.
Depending on the application, of course, checking for exact
equality might not be what you want, but that's a different
issue.
Because floating points are always rounded to the best binary
representation.

That statement doesn't really mean anything. Machine floating
point values are not real numbers, and their arithmetic doesn't
obey the same rules as real numbers. For example, (a+b)+c is
not necessarily the same thing as a+(b+c). If you're going to
use machine floating point values, you have to understand
machine floating point arithmetic. Sometimes, it means that you
don't want to check for equality. Other times, it means that
equality actually works better than with real numbers. It all
depends---it's just different from the math you learned in high
school.
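(A quick sketch of the (a+b)+c versus a+(b+c) point, with values chosen so the difference shows up for IEEE doubles:)

#include <iostream>

int main()
{
    double a = 1e16, b = -1e16, c = 1.0;

    std::cout << "(a + b) + c = " << (a + b) + c << '\n';   // 1: a and b cancel first
    std::cout << "a + (b + c) = " << a + (b + c) << '\n';   // 0: c is lost when added to -1e16
}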
float x = 0.01f * 10.0f;
if( x == 0.1f )
{
// Will not always return true!!!
}
Hence the Microsoft method you found where some threshold is used.
Most people seem to code their own equality method that takes two
doubles and a threshold as parameters so you can check if the two
doubles are "close enough" to each other in the context they are used.
This isn't specific to MS or Linux environments, I've seen such
methods done in both. You wouldn't necessarily want to measure the feet a
car has traveled around the world to a precision of .00000001, but you
might for, say, the width of a skin cell in centimeters.

Most of the time, such "approximately equal" methods are not a
good solution. An equality function which isn't transitive
poses a number of problems of its own. In the end, you have to
understand the problem, do the analysis, and choose the
appropriate method. It requires real understanding; there are
no automatic solutions.
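(To make the transitivity remark concrete, here is a minimal sketch of the usual tolerance-based comparison; with these values a is "equal" to b and b is "equal" to c, yet a is not "equal" to c, so the relation is not an equivalence relation:)

#include <cmath>
#include <iostream>

// The usual "close enough" comparison with an absolute tolerance.
bool almost_equal(double x, double y, double tol)
{
    return std::fabs(x - y) < tol;
}

int main()
{
    double a = 0.0, b = 0.4, c = 0.8, tol = 0.5;

    std::cout << almost_equal(a, b, tol) << '\n';   // 1: a ~ b
    std::cout << almost_equal(b, c, tol) << '\n';   // 1: b ~ c
    std::cout << almost_equal(a, c, tol) << '\n';   // 0: but not a ~ c
}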
 

James Kanze

[...]
The standard guarantees that you will get the same result with
operator== provided the type double on your implementation has
the precision required to distinguish x and y.

The standard also requires +0.0 to compare equal to -0.0, and
the C99 standard has some requirements concerning NaN when the
implementation uses IEEE. (These will presumably be part of the
next C++ standard as well.)
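(The NaN case is the mirror image of the signed-zero case: a bit-for-bit comparison reports a NaN equal to a copy of itself, while operator== is required to report them unequal. A small sketch, assuming IEEE doubles:)

#include <cstring>
#include <iostream>
#include <limits>

int main()
{
    double x = std::numeric_limits<double>::quiet_NaN();
    double y = x;                                    // same object representation

    bool same_bits = std::memcmp(&x, &y, sizeof(double)) == 0;

    std::cout << "bit-for-bit: " << (same_bits ? "equal" : "not equal") << '\n';  // equal
    std::cout << "operator== : " << ((x == y) ? "equal" : "not equal") << '\n';   // not equal
}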
I think your d_eq() is a non-portable equivalent of
bool truly_equal ( double lhs, double rhs ) {
double volatile x = lhs;
double volatile y = rhs;
return ( x == y );
}
Here, I hope the volatile causes a write to memory so that any
excess precision that the parameters may have according to
[5/10] goes away.

It shouldn't be necessary. You're passing by value, so the
compiler has to initialize the "variables" with the double. It
can, of course, avoid this under the as if rule, but only if the
results are the same.

Not all compilers get this right, of course, and volatile might
help if you're trying to work around a compiler bug.
 

James Kanze

I don't think it's illegal. I think it's ill-defined.
That is, there is no promise the compiler will pack
the union the same way every time. It might not line
things up the same way. It might pad differently.

It's undefined behavior. The compiler might hide information
concerning the last assigned element of the union somewhere,
check it when you read, and reformat your hard disk if you read
anything other than the last element written.

Since it is undefined behavior, a compiler is free to do
anything with it, including defining it for that compiler. I
believe that some compilers (e.g. g++) do define things in a way
that allows accessing a different element than the last one
accessed to work.

There is, of course, a second problem with the original code:
how the compiler lays out bit fields is very implementation
dependent, and in fact does vary greatly from one compiler to
the next. Given the definition of double_bits, above, some
compilers will put the sign in the top bit, others in the lowest
bit, and some will put fraction in a different word than
exponent and sign. Using bit fields for this sort of thing is
far from portable.

The only more or less portable way of doing this is to use memcpy:

// needs <cstring> for memcpy and <cstdint> for uint64_t
double value ;                 // assume value has been given some value
uint64_t asUInt ;
memcpy( &asUInt, &value, sizeof( double ) ) ;
std::cout << "sign     : " << ((asUInt >> 63) & 0x1)
          << std::endl ;
std::cout << "exponent : " << ((asUInt >> 52) & 0x7FF)
          << std::endl ;
std::cout << "mantissa : "
          << ((asUInt & 0x000FFFFFFFFFFFFFULL)
                 // add the implicit leading bit unless the exponent field is 0
              | ((asUInt & 0x7FF0000000000000ULL) == 0
                 ? 0 : 0x0010000000000000ULL))
          << std::endl ;

Of course, even this still depends on the implementation 1)
using IEEE format, 2) having a 64-bit unsigned integer type
such as uint64_t, and 3) supporting long long, none of which
are required by the standard.
I seem to have a hazy memory of a compiler that
would change how it packed unions depending on
how stuff lined up at memory page boundaries.
So two instances of the same union might line
up differently.

That definitely wouldn't be conformant. Nor even usable: what
would happen when you had a pointer to the union and tried to use
it in a different compilation unit?
So I don't think you will get a compiler error.
And it will probably work fine *most* of the time.
It's just that it might stop working without warning.

I've used compilers which would optimize strangely in such
cases. For example, which would detect that you never read the
value written into x.value (since reading x.bits doesn't count,
according to the standard), and so suppressed the assignment.
The company from which I bought the compiler (Microsoft) is
still in business, and in fact, not doing too badly, although
I'm pretty sure that its current products do not use the
original code base. And according to the standard, it's a
legitimate optimization.

According to the standard, the "preferred" way of doing such
type punning is to use reinterpret_cast. Presumably, a compiler
should turn off all optimization when it sees a
reinterpret_cast. Except, of course, that unlike the above, the
results of a reinterpret_cast can span translation unit
boundaries, so the compiler might not see that a
reinterpret_cast was involved. And it's still formally
implementation defined (and doesn't always work with g++).

It's also possible to get the values for the mantissa and the
exponent by means of functions like frexp. Off hand, this
sounds (or sounded to me) unwieldy and slow, but when I actually
tried, it turned out to be not that complicated, and
surprisingly rapid, at least under Solaris on a Sun Sparc.
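(Along those lines, a small sketch using the standard frexp/ldexp functions; unlike the bit-twiddling above it does not assume an IEEE layout, though it reports the mantissa and exponent in frexp's normalized form, with the mantissa in [0.5, 1):)

#include <cmath>
#include <iostream>

int main()
{
    double value = 215.256487876545;

    int exponent = 0;
    double mantissa = std::frexp(value, &exponent);  // value == mantissa * 2^exponent

    std::cout << "mantissa : " << mantissa << '\n';  // in [0.5, 1) for normal values
    std::cout << "exponent : " << exponent << '\n';
    std::cout << "check    : " << std::ldexp(mantissa, exponent) << '\n';  // reconstructs value
}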
 
