i'm learning about the floating-point format that's used on my computer
(an apple mac, so powerpc)
i've written a function to print out the bits of a float so i can see
how floats are represented, and i also have a programmer's calculator
called BinCalc that shows the bits of whatever number you give it.
the bits my code shows and the bits the calculator shows, for the same
value, don't match. for example, for the number 1.0 my code says:
10111111_00000000_00000000_10000000
and the calculator says:
00111111_10000000_00000000_00000000
and for the value 1.6 my code prints:
10111111_11001101_11001100_11001100
and the calculator says:
00111111_11001100_11001100_11001101
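if i've understood the IEEE 754 single precision layout right, 1.0 should
come out as a 0 sign bit, an exponent field of 01111111 (127, which is the
bias, so an effective exponent of 0) and a fraction of all zeros, which
gives 00111111_10000000_00000000_00000000 -- the same as the calculator,
not what my code prints.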
the code i'm using to print the float bits is below.
the first obvious difference is the left-most bit, the high order bit.
that's the bit that says whether the value is negative or positive, the
sign bit, right? so it really looks like there's something wrong with my
code, because neither of the values in the two examples above is
negative, but both print outs from my code have the left-most bit set
to 1. and you'd have thought the format the calculator uses would be the
same format my computer uses, so the two representations should tally.
anyone know what's going on?
thanks, ben.
#include <stdio.h>

void bitfloatprint(float f)
{
    unsigned bytes = sizeof(float);  // number of bytes in a float
    char bits;                       // number of bits to shift mask over by

    while( bytes ) {  // one loop per byte in the float (HO to LO)
        for( bits = 7; bits >= 0; bits-- ) {
            putchar(
                ( *((unsigned char *)&f + bytes) & 1 << bits ) == 0 ? '0' : '1'
            );
            // casting in above line so that +1 means plus
            // one byte rather than plus one float
        }
        if( bytes != 1 )
            putchar('_');
        bytes--;
    }
    putchar('\n');
}

int main(void)
{
    float f = 1.0;
    bitfloatprint(f);
    return 0;
}
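
in case it's relevant, i also sketched another way of getting at the bits,
copying the float into an unsigned int with memcpy and printing from bit 31
down (this assumes unsigned int is 32 bits, which i believe it is on my
machine):

#include <stdio.h>
#include <string.h>

void bitfloatprint2(float f)
{
    unsigned int u;               // assumes unsigned int is 32 bits wide
    int i;

    memcpy(&u, &f, sizeof u);     // copy the float's bytes into the integer
    for( i = 31; i >= 0; i-- ) {  // most significant bit first
        putchar( (u >> i) & 1 ? '1' : '0' );
        if( i % 8 == 0 && i != 0 )
            putchar('_');         // underscore between bytes
    }
    putchar('\n');
}

int main(void)
{
    bitfloatprint2(1.0f);
    return 0;
}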