andresj
I was doing some programming in Python, and an idea came to mind:
using fractions instead of floats for expressions like 2/5.
The problem arises when you try to represent a number like 0.4 as a
float: Python will tell you that it is equal to 0.40000000000000002.
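For example, at the interactive prompt (this is a 2.x interpreter; the
exact digits printed may vary by version):

    >>> 0.4
    0.40000000000000002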
"This is easy to fix", you may say. "You just use the decimal.Decimal
class!". Well, firsly, there would be an excess of typing I would need
to do to calculate 0.4+0.6:
from decimal import Decimal
print Decimal("0.4")+Decimal("0.6")
Secondly, what happens if I need to add 1/3 and 0.4? I could use
Decimal to represent 0.4 exactly, but what about 1/3? Sure, I could
use _another_ class that works in a base in which 1/3 can be
represented exactly (base 3, for instance, rather than binary,
decimal, octal, or hexadecimal)... Not to mention the problem of
making those two different classes operate with each other...
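To make the interoperability problem concrete: today, Decimal and
float don't even mix, so I can't fake 1/3 with a float either (again
from a 2.x interpreter; the float is itself only an approximation):

    >>> from decimal import Decimal
    >>> Decimal("0.4") + 1.0/3
    Traceback (most recent call last):
      ...
    TypeError: unsupported operand type(s) for +: 'Decimal' and 'float'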
So the solution, I think, is a fraction type/class, similar to the
one found in Common Lisp. If you have used Common Lisp before, you
know you only need to type:
(+ 1/3 6/10)
to get the exact result, 14/15. (Yes, I also hate the
(operator arg1 arg2) syntax, but it's just an example.) I would like
to have something similar in Python, in which dividing two integers
gives you a fraction, instead of an integer (Python 2.x) or a float
(as decided for Python 3.x).
An implementation could look like this:

    class frac(object):  # PS: This (object) thing will be removed in Python 3.0, right?
        def __init__(self, numerator, denominator):
            pass
        def __add__(self, other):
            pass
        # ...
(I have an implementation of the frac class done (meaning: it works
for me), and although it's pretty dirty, I'd be happy to post it here
if you want it.)
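For concreteness, here is a minimal sketch of the general shape I
have in mind; this is not my actual implementation, just enough to
show the idea (Python 2.x, and the names are placeholders):

    def _gcd(a, b):
        # Euclid's algorithm, used to keep fractions in lowest terms.
        while b:
            a, b = b, a % b
        return a

    class frac(object):
        def __init__(self, numerator, denominator=1):
            if denominator == 0:
                raise ZeroDivisionError("frac with zero denominator")
            if denominator < 0:
                # Normalize so the sign lives in the numerator.
                numerator, denominator = -numerator, -denominator
            g = _gcd(abs(numerator), denominator)
            self.numerator = numerator // g
            self.denominator = denominator // g

        def __add__(self, other):
            if isinstance(other, (int, long)):
                other = frac(other)
            return frac(self.numerator * other.denominator +
                        other.numerator * self.denominator,
                        self.denominator * other.denominator)

        __radd__ = __add__

        def __repr__(self):
            if self.denominator == 1:
                return str(self.numerator)
            return "%d/%d" % (self.numerator, self.denominator)

With that, frac(1, 3) + frac(6, 10) prints as 14/15, and
frac(4, 2) prints as 2, since everything is reduced to lowest terms.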
My idea, in summary, is that a Python shell session like the
following would hold true:
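(Illustrative only; the exact repr format is open to bikeshedding.)

    >>> 6/10
    3/5
    >>> 1/3 + 6/10
    14/15
    >>> (1/3 + 6/10) * 15
    14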
I would like to get some feedback on this idea. Has this been posted
before? If so, was it rejected, and why?
Also, I would like to know if you have any improvements on the
initial design, and whether it would be appropriate to submit it as a
PEP.