I ran into something today that I don't quite understand, and I don't know all
the nitty-gritty details of how Python stores and handles data internally.
(I think) I understand why, when you type in certain floating point values,
Python doesn't display exactly what you typed (because not all decimal
numbers are exactly representable in binary, and Python shows you the full
precision of what is representable). For example:
>>> 148.73
148.72999999999999

and

>>> 0.9
0.90000000000000002
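(If it helps to see exactly what is being stored, here's a quick sketch using the standard decimal module; passing a float straight to Decimal requires Python 2.7+/3.x:)

```python
from decimal import Decimal

# The exact binary value Python stores for the literal 148.73 --
# Decimal(float) shows every digit of the nearest double.
print(Decimal(148.73))
```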
So, I think I've got a pretty good handle on what the struct module is all
about. Let's take that number, 148.73, and use struct functions to look at
the raw bit pattern of what would be in a 32-bit register using IEEE754
float representation:
>>> from struct import pack, unpack
>>> x = 148.73
>>> hex(unpack('L', pack('f', x))[0])
'0x4314BAE1L'
That is, the four bytes representing this are 0x43, 0x14, 0xBA, 0xE1
Now let's go back the other way, starting with this 32 bit representation,
and turn it back into a float:
>>> unpack('>f', pack('BBBB', 0x43, 0x14, 0xBA, 0xE1))[0]
148.72999572753906
Hmmmm... Close, but I seem to be losing more than I would expect here. I
initially thought I should be able to get back at least what Python
previously displayed: 148.72999999999999
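(For what it's worth, the same loss shows up without any byte fiddling -- this sketch, assuming only the standard struct module, just round-trips the value through a 32-bit float:)

```python
from struct import pack, unpack

x = 148.73
# pack('f', ...) rounds the 64-bit Python float down to the nearest
# 32-bit IEEE-754 value; unpack('f', ...) widens that value back to 64 bits.
f32 = unpack('f', pack('f', x))[0]
print(f32)        # the nearest representable float32, shown as a double
print(f32 == x)   # False: the 32-bit detour loses precision
```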
I know there are 23 bits of mantissa in an IEEE-754 single-precision float,
plus the implicit leading '1'...
'0xe2f1a7'
Looks like it takes 6 * 4 = 24 bits to represent that as an int....
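(Here's a sketch that decodes 0x4314BAE1 field by field -- sign, biased exponent, and the 23 stored mantissa bits plus the implicit '1' -- to show exactly which value those 32 bits encode:)

```python
bits = 0x4314BAE1

sign = bits >> 31                        # 0 -> positive
exponent = ((bits >> 23) & 0xFF) - 127   # unbias the 8-bit exponent field
mantissa = bits & 0x7FFFFF               # the 23 stored fraction bits

# The implicit leading 1 gives 24 significant bits in total.
value = (-1) ** sign * (1 + mantissa / 2.0 ** 23) * 2.0 ** exponent
print(hex(mantissa), exponent)
print(value)
```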
I am starting to think my expectation is wrong...
If that's true, then I guess I'm confused about why Python displays
148.72999572753906 when you unpack the 4 bytes, implying a lot more
precision than was available in the original 32 bits. Is Python doing
64-bit floating point here? I'm obviously not understanding something...
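(On the 64-bit question, a quick check -- assuming CPython, where a float wraps a C double:)

```python
import struct
import sys

print(struct.calcsize('d'))      # size of a C double in bytes
print(sys.float_info.mant_dig)   # significand bits in a Python float
```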
help?
-ej