Simple addition

Mathieu Malaterre

Hi,

I tried a simple addition with python and I don't understand what is
going on:

$ python
>>> 464.73 + 279.78
744.50999999999999

Weird, isn't it? I use Python 2.3.

comments welcome
Mathieu
 
Brian

Mathieu said:
$ python
>>> 464.73 + 279.78
744.50999999999999

Weird, isn't it? I use Python 2.3.

Mathieu, you can find a full explanation of what you're seeing at the
following documentation link:
http://www.python.org/doc/current/tut/node15.html

From that page:

"Note that this is in the very nature of binary floating-point: this is
not a bug in Python, it is not a bug in your code either, and you'll see
the same kind of thing in all languages that support your hardware's
floating-point arithmetic (although some languages may not display the
difference by default, or in all output modes).

Python's builtin str() function produces only 12 significant digits, and
you may wish to use that instead. It's unusual for eval(str(x)) to
reproduce x, but the output may be more pleasant to look at:

>>> print str(0.1)
0.1

It's important to realize that this is, in a real sense, an illusion:
the value in the machine is not exactly 1/10, you're simply rounding the
display of the true machine value."
 
Piet van Oostrum

B> 0.1

B> It's important to realize that this is, in a real sense, an illusion: the
B> value in the machine is not exactly 1/10, you're simply rounding the
B> display of the true machine value."

On the other hand, python could have done better. There are algorithms to
print floating point numbers properly with a more pleasant output[1]:
in this particular case python could have given "0.1" also with "print 0.1".
Unfortunately most C libraries only use the stupid algorithm which often
gives some useless digits.

This is because ideally it should print the representation with the least
number of digits that when read back gives the same internal value as the
number printed. In this case that is obviously "0.1".
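A rough brute-force sketch of that rule (not the clever algorithm from [1], and not what Python 2.3 actually does): keep adding significant digits until the decimal string reads back to the same float.

def shortest_form(x):
    # Fewest significant digits whose decimal form reads back to exactly x.
    for ndigits in range(1, 18):        # 17 digits always suffice for a double
        s = "%.*g" % (ndigits, x)
        if float(s) == x:
            return s
    return "%.17g" % x

print(shortest_form(0.1))               # 0.1
print(shortest_form(464.73 + 279.78))   # 744.51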

[1] Guy L. Steele, Jr. Jon L. White, How to print floating-point numbers
accurately, Proceedings of the ACM SIGPLAN 1990 conference on Programming
language design and implementation, Pages: 112 - 126.
 
Terry Reedy

Piet van Oostrum said:
B> 0.1

B> It's important to realize that this is, in a real sense, an illusion: the
B> value in the machine is not exactly 1/10, you're simply rounding the
B> display of the true machine value."

> On the other hand, python could have done better.

Python gives you a choice between most exact and 'pleasant'. This *is*
better, in my opinion, than no choice.

> There are algorithms to print floating point numbers properly with a more
> pleasant output[1]: in this particular case python could have given "0.1"
> also with "print 0.1".

What? In 2.2:

>>> print 0.1
0.1

did this change in 2.3?

> Unfortunately most C libraries only use the stupid algorithm which often
> gives some useless digits.

They are not useless if you want more accuracy about what you have and what
you will get with further computation. Tracking error expansion is an
important part of designing floating point calculations.

> This is because ideally it should print the representation with the least
> number of digits that when read back gives the same internal value as the
> number printed. In this case that is obviously "0.1".

This is opinion, not fact. Opinions are divided.

Terry J. Reedy
 
Dan Bishop

Mathieu Malaterre said:
Hi,

I tried a simple addition with python and I don't understand what is
going on:

$ python
>>> 464.73 + 279.78
744.50999999999999

Your computer does arithmetic in binary. Neither of these numbers can be
exactly represented as a binary fraction.

464.73 = bin 111010000.10 11101011100001010001 11101011100001010001...
279.78 = bin 100010111.1 10001111010111000010 10001111010111000010...

They get rounded to the nearest 53-bit float:

464.73 ~= 0x1.D0BAE147AE148p+8
279.78 ~= 0x1.17C7AE147AE14p+8
--------------------
0x2.E8828F5C28F5Cp+8
= 0x1.744147AE147AEp+9 after normalization

The exact decimal equivalent of this sum is
744.509999999999990905052982270717620849609375. Python's repr()
rounds this to 17 significant digits, giving "744.50999999999999".
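In a newer Python (2.6+ for float.hex(); 2.7 or 3.2+ for passing a float to Decimal, neither of which the thread's 2.3 has), these numbers can be reproduced directly:

from decimal import Decimal

a, b = 464.73, 279.78
print(a.hex())          # 0x1.d0bae147ae148p+8
print(b.hex())          # 0x1.17c7ae147ae14p+8
print((a + b).hex())    # 0x1.744147ae147aep+9
print(Decimal(a + b))   # 744.509999999999990905052982270717620849609375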
 
Piet van Oostrum

B> It's important to realize that this is, in a real sense, an illusion: the
B> value in the machine is not exactly 1/10, you're simply rounding the
B> display of the true machine value."
TR> Python gives you a choice between most exact and 'pleasant'. This *is*
TR> better, in my opinion, than no choice.

0.10000000000000001 is not more exact than 0.1. It is a false illusion of
exactness.

There are algorithms to print floating point numbers properly with a more
pleasant output[1]: in this particular case python could have given "0.1"
also with "print 0.1".

TR> What? In 2.2:
TR> >>> print 0.1
TR> 0.1

TR> did this change in 2.3?

OK, my mistake, I should have left out the print. But you know what I mean.

TR> They are not useless if you want more accuracy about what you have and what
TR> you will get with further computation. Tracking error expansion is an
TR> important part of designing floating point calculations.

TR> This is opinion, not fact. Opinions are divided.

It would cause no errors, and it would prevent a lot of the questions about
this subject that appear here every few days. So what is the advantage of
printing 0.10000000000000001 or xx.xxx999999999998?
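The "no errors" part is easy to check with the numbers from this thread: the short and the long decimal forms denote the very same double.

print(float("744.51") == float("744.50999999999999"))   # True
print(float("0.1") == float("0.10000000000000001"))     # True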
 
Dan Bishop

Terry Reedy said:
Piet van Oostrum said:
[repr(0.1) is ugly!]
Unfortunately most C libraries only use the stupid algorithm which often
gives some useless digits.

They are not useless if you want more accuracy about what you have

Why not display the *exact* decimal representation,
"0.1000000000000000055511151231257827021181583404541015625"?
and what you will get with further computation.
Tracking error expansion is an
important part of designing floating point calculations.

We're talking about human-readable representation, not calculations.
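For reference, the decimal module in a newer Python (2.7 or 3.2+, where the Decimal constructor accepts floats) will print exactly that string, because building a Decimal from a float involves no rounding:

from decimal import Decimal

# Converting the double 0.1 to Decimal is exact; printing it shows every digit.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625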
 
Rainer Deyke

Dan said:
Why not display the *exact* decimal representation,
"0.1000000000000000055511151231257827021181583404541015625"?

This has my vote. Unfortunately Python seems incapable of figuring out all
of those digits.

Python 2.3.2 (#49, Oct 2 2003, 20:02:00) [MSC v.1200 32 bit (Intel)] on
win32
Type "help", "copyright", "credits" or "license" for more information.'0.1000000000000000100000000000000000000000000000000000000000000000'
 
Andrew Koenig

Unfortunately most C libraries only use the stupid algorithm which often
gives some useless digits.

Why not display the *exact* decimal representation,
"0.1000000000000000055511151231257827021181583404541015625"?

One can do better than that--or at least something I think is better.

Many moons ago, Guy Steele proposed an elegant pair of rules for converting
between decimal and internal floating-point, be it binary, decimal,
hexadecimal, or something else entirely:

1) Input (i.e. conversion from decimal to internal form) always yields
the closest (rounded) internal value to the given input.

2) Output (i.e. conversion from internal form to decimal) yields the
smallest number of significant digits that, when converted back to internal
form, yields exactly the same value.

This scheme is useful because, among other things, it ensures that all
numbers with only a few significant digits will convert to internal form and
back to decimal without change. For example, consider 0.1. Converting 0.1
to internal form yields the closest internal number to 0.1. Call that
number X. Then when we write X back out again, we *must* get 0.1, because
0.1 is surely the decimal number with the fewest significant digits that
yields X when converted.

I have suggested in the past that Python use these conversion rules. It
turns out that there are three strong arguments against it:

1) It would preclude using the native C library for conversions, and
would probably yield different results from C under some circumstances.

2) It is difficult to implement portably, and if it is not implemented
portably, it must be reimplemented for every platform.

3) It potentially requires unbounded-precision arithmetic to do the
conversions, although a clever implementation can avoid it most of the time.

I still think it would be a good idea, but I can see that it would be more
work than is feasible. I don't want to do the work myself, anyway :)
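To make point 3 concrete (a sketch with Python's fractions module in a newer Python, not what any real implementation does): exact rational arithmetic can decide, with no rounding error at all, how far a candidate decimal string is from the stored binary value and whether it reads back to the same float.

from fractions import Fraction

def exact_error(candidate, x):
    # Exact rational distance between a decimal string and the binary float x.
    return abs(Fraction(candidate) - Fraction(x))

x = 0.1
for s in ("0.1", "0.10000000000000001"):
    print(s, float(s) == x, exact_error(s, x))
# Both strings read back to the same float; "0.1" is the shortest one that does.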
 
Rod Haper

Rainer said:
Dan said:
Why not display the *exact* decimal representation,
"0.1000000000000000055511151231257827021181583404541015625"?


This has my vote. Unfortunately Python seems incapable of figuring out all
of those digits.

Python 2.3.2 (#49, Oct 2 2003, 20:02:00) [MSC v.1200 32 bit (Intel)] on
win32
Type "help", "copyright", "credits" or "license" for more information.

'0.1000000000000000100000000000000000000000000000000000000000000000'

Python 2.3.3 seems to be able to do it on Red Hat Linux 9.0:

[rodh@rodh rodh]$ python
Python 2.3.3 (#1, Dec 20 2003, 17:47:13)
[GCC 3.2.2 20030222 (Red Hat Linux 3.2.2-5)] on linux2
Type "help", "copyright", "credits" or "license" for more information.

Must be a M$ MSC problem.
 
Piet van Oostrum

AK> Many moons ago, Guy Steele proposed an elegant pair of rules for converting
AK> between decimal and internal floating-point, be it binary, decimal,
AK> hexadecimal, or something else entirely:

That was exactly what I was suggesting. I even included the bib reference.
 
