If float were a perfect superset of int, and the only logical superset
when you want non-integers, then it'd be fine. But if you're mixing int
and Decimal, you have to explicitly convert,
I don't think so. Operations on mixed int/Decimal arguments return
Decimal. There's no conversion needed except to get the original Decimal
number in the first place. (Decimal is not a built-in and there's no
literal syntax for them.)
py> from decimal import Decimal as D
py> x, y = 1, D(2)
py> x/y
Decimal('0.5')
py> x//y
Decimal('0')
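For comparison, mixing int and float needs no explicit conversion
either; the int is coerced to float automatically (a quick example of my
own):
py> x, z = 1, 2.0
py> x/z
0.5
py> x//z
0.0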
whereas if you're mixing
int and float, you don't. Why is it required to be explicit with Decimal
but not float? Of all the possible alternate types, why float? Only
because...
Because float is built-in and Decimal is not. Because Decimal wasn't
introduced until Python 2.4, while the change to the division operator
began back in Python 2.2.
http://python.org/dev/peps/pep-0238/
Guido writes about why his decision to emulate C's division operator was
a mistake:
http://python-history.blogspot.com.au/2009/03/problem-with-integer-division.html
... it already existed. There's no particular reason to up-cast to
float, specifically, and it can cause problems with large integers -
either by losing accuracy, or by outright overflowing.
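For instance, something like this (Python 3 shown here) demonstrates
both failure modes:
py> n = 2**53 + 1
py> n/1              # up-cast to float: the final +1 is silently lost
9007199254740992.0
py> n//1             # stays an int, nothing lost
9007199254740993
py> 10**400 / 1      # too big to represent as a float at all
Traceback (most recent call last):
  ...
OverflowError: integer division result too large for a float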
If you reject C integer division, you have to do *something* with 1/2.
Ideally you'll get a number numerically equal to one half. It can't be a
Decimal, or a Fraction, because back in 2001 there were no Decimal or
Fraction types, and even now in 2014 they aren't built-ins.
(Besides, both of those choices have other disadvantages. Fractions are
potentially slow and painful, with excessive accuracy. See Guido's
comments in his blog post above. And Decimal uses base 10 floating point,
which is less suitable for serious numeric work than base 2 floats.)
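To illustrate both points with today's modules: converting a binary
float to a Fraction shows one flavour of the "excessive accuracy"
problem, and Decimal arithmetic is base-10 floating point with (by
default) 28 significant digits:
py> from fractions import Fraction
py> Fraction(0.1)
Fraction(3602879701896397, 36028797018963968)
py> from decimal import Decimal
py> Decimal(1)/Decimal(3)
Decimal('0.3333333333333333333333333333')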
Suppose you take an integer, multiply it by 10, and divide it by 5. In
theory, that's the same as multiplying by 2, right?
That's a bit of a contrived example. But go on.
Mathematically it
is. In C it might not be, because the multiplication might overflow; but
Python, like a number of other modern languages, has an integer type
that won't overflow.
Only since, um, version 2.2 I think. I don't have 2.1 easily available,
but here's 1.5:
[steve@ando ~]$ python1.5
Python 1.5.2 (#1, Aug 27 2012, 09:09:18) [GCC 4.1.2 20080704 (Red Hat
4.1.2-52)] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
Traceback (innermost last):
File "<stdin>", line 1, in ?
OverflowError: integer pow()
So don't forget the historical context of what you are discussing.
In Python 2, doing the obvious thing works:
x * 10 / 5 == x * 2
Ignoring old versions of Python 2.x, correct. But that is a contrived
example. Mathematically x/5*10 also equals 2*x, but not with C division
semantics. This is with Python 2.7:
py> x = 7
py> x/5*10, x*10/5, x*2
(10, 14, 14)
Let's try it with true (calculator) division:
py> from __future__ import division
py> x/5*10, x*10/5, x*2
(14.0, 14.0, 14)
With a bit of effort, I'm sure you can find values of x where they are
not all equal, but that's because floats only have a finite precision. In
general, true division is less surprising and causes fewer unexpected
truncation errors.
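For example, with a large enough integer the rounding that happens when
the quotient is converted to a float is where the equality breaks (true
division, e.g. Python 3):
py> x = 10**17 + 1
py> x/5*10, x*10/5, x*2
(2e+17, 2e+17, 200000000000000002)
py> x/5*10 == x*2
False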
In Python 3, you have to say "Oh but I want my integer division to
result in an integer":
x * 10 // 5 == x * 2
No, // doesn't mean "divide and coerce to an integer"; it is *floor*
division for ints and floats, and each type gets to choose both the
rounding (Decimal truncates towards zero) and the result type. Returning
an int is not compulsory:
py> x = D('5.875')
py> x, x//1
(Decimal('5.875'), Decimal('5'))
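The difference between flooring and truncating only shows up with
negative operands; compare (reusing the D alias from earlier):
py> -7//2            # int floor division rounds towards negative infinity
-4
py> D(-7)//2         # Decimal truncates towards zero instead
Decimal('-3')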
Yes, I can see that it's nice for simple interactive use. You type "1/2"
and you get back 0.5. But doesn't it just exchange one set of problems
("dividing integers by integers rounds")
It doesn't round, it floors (which for positive operands looks like
truncation):
[steve@ando ~]$ python2.7 -c "print round(799.0/100)"
8.0
[steve@ando ~]$ python2.7 -c "print 799/100"
7
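And with a negative operand you can see that it floors rather than
rounds or truncates:
[steve@ando ~]$ python2.7 -c "print (-799)/100"
-8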
for another set ("floating
point arithmetic isn't real number arithmetic")?
It's not as if avoiding that problem is an option.
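The usual binary floating point surprises are there in Python 2 and 3
alike, no matter how / treats ints:
py> 0.1 + 0.2
0.30000000000000004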