To get the accurate value of 1 - 0.999999999999999, how would I implement the algorithm in Python?
BTW, Windows's calculator gets the accurate value; does anyone know how to implement that?
Windows calculator is an application, not a programming language. Like
all applications, it has to deal with the finite accuracy of the
underlying processor and language, and choose an algorithm that will
please its users.
The Pentium chip (and its equivalents from AMD), used by Windows
machines and most others, has about 18 decimal digits of accuracy in its binary
floating point math. However, being binary, the data has to be
converted from decimal to binary (when the user types it in) and binary
to decimal (when displaying it). Either of those conversions may have
quantization errors, and it's up to the program to deal with those or
other inaccuracies.
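You can see the quantization directly in Python (a quick illustration; the decimal string 0.1 has no exact binary representation, so the stored double is a nearby value):
print("%.25f" % 0.1)   # show the stored double to 25 places
0.1000000000000000055511151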
If you subtract two values that are quite close together, either of
which may have quantization errors, then the relative error is
magnified: out of your 18 digits of internal accuracy, you now have only
about 2.
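That is exactly what happens to your original example when it's done in binary floats (a quick check in plain Python, whose floats are 64-bit doubles with roughly 15-17 significant digits rather than the chip's full 18):
print(1 - 0.999999999999999)   # both operands were quantized to binary
9.992007221626409e-16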
Therefore many programs more concerned with apparent accuracy will
ignore the binary floating point and do their work in decimal. That
eliminates only the quantization errors, not the calculation errors,
but it makes the user think he is getting more accuracy than he really is.
Since that seems to be your goal, I suggest you look into the Decimal
class, located in the stdlib module decimal:
import decimal
a = decimal.Decimal(5.1)   # note: 5.1 here is a binary float literal
print(a)
5.0999999999999996447286321199499070644378662109375
Note that you still seem to have some "error", since the value 5.1 is a binary float and has already been quantized before Decimal ever sees it. If you want to avoid the binary stuff entirely, go directly from string to Decimal:
b = decimal.Decimal("5.1")
print(b)
5.1
Back to your original contrived example:
c = decimal.Decimal("1.0")
d = decimal.Decimal("0.999999999999999")
print(c-d)
1E-15
The Decimal class has the disadvantage that it's tons slower on any modern machine I know of, but the advantage that you can specify how much precision you want it to use. It doesn't eliminate errors altogether, just one class of them:
e = decimal.Decimal("3.0")
print(c/e)
0.3333333333333333333333333333
That of course is the wrong answer. The "right" answer would never stop printing. We still have a finite number of digits.
print(c/e*e)
0.9999999999999999999999999999
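If you need more digits, you can raise the context precision (prec is the setting in the stdlib decimal module), but the result is still cut off somewhere:
decimal.getcontext().prec = 50   # ask for 50 significant digits
print(c/e)
0.33333333333333333333333333333333333333333333333333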
"Fixing" this is subject for another lesson, someday.