[Bryan]
your BigDec class is so fast it's like night and day compared to the standard
functions.
Not so: it does arithmetic much slower than the builtin arithmetic.
The only thing it's faster at is producing a decimal string at the
very end, and that only pays when the integer is so large that the
quadratic-time nature of the builtin binary-to-decimal conversion
becomes apparent. BigDec still uses the builtin binary-to-decimal
conversion for integers up to 5000 decimal digits, which is already
larger than virtually any real app ever uses. For ints much longer
than that, BigDec's conversion to decimal string takes time linear in
the number of decimal digits, an advantage over the quadratic-time
builtin conversion that grows without bound as the number of digits
increases. So the only thing it's good for is printing very
large powers in decimal. If you were willing to see your very large
powers in hex (base 16) instead, the builtin "print hex(large_power)"
is again much faster than BigDec.
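
As a rough illustration of the builtin behavior described above (a
Python 2-era timing sketch, not part of this exchange, with an
arbitrary exponent): hex() runs in time linear in the number of bits,
since each hex digit is just 4 bits of the long, while str() goes
through the quadratic binary-to-decimal conversion, so the gap keeps
widening as the power grows.

    import time

    n = 7 ** 100000        # a power with roughly 84,500 decimal digits

    start = time.time()
    h = hex(n)             # linear:  each hex digit is 4 bits of the long
    print "hex() took", time.time() - start, "seconds"

    start = time.time()
    d = str(n)             # quadratic binary-to-decimal conversion
    print "str() took", time.time() - start, "seconds"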
[Bryan]
are you implying that your BigDec isn't 100% portable or correct or has some
limitations?
Limitations? Heck, it doesn't even implement addition. The only
things it implements are multiplication and exponentiation of positive
integers (relying heavily on the builtin binary longs to do most of
that), and efficient conversion to decimal string. That's a tiny
subset of what builtin longs support.
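
BigDec's code doesn't appear in this exchange, so the Python 2-era
class below is only a toy with the same general shape of interface
(multiplication, exponentiation of positive integers, conversion to
decimal string). It keeps the value in base-10**9 limbs so that str()
is linear in the number of decimal digits, but it multiplies the limbs
itself schoolbook-style, so its arithmetic is even slower than leaning
on the builtin binary longs the way BigDec does. The name ToyDec and
every detail here are illustrative assumptions, not BigDec.

    class ToyDec:
        """Toy sketch, not BigDec:  a positive integer kept as base-10**9
        limbs (least-significant first) so that str() is linear."""

        BASE = 10 ** 9                  # 9 decimal digits per limb

        def __init__(self, n):
            assert n > 0
            self.limbs = []
            while n:
                n, r = divmod(n, self.BASE)
                self.limbs.append(r)

        def __mul__(self, other):
            # Schoolbook multiplication on decimal limbs -- correct, but
            # much slower than multiplying builtin longs directly.
            out = [0] * (len(self.limbs) + len(other.limbs))
            for i, a in enumerate(self.limbs):
                carry = 0
                for j, b in enumerate(other.limbs):
                    carry, out[i + j] = divmod(out[i + j] + a * b + carry,
                                               self.BASE)
                out[i + len(other.limbs)] += carry
            while len(out) > 1 and out[-1] == 0:
                out.pop()
            product = ToyDec(1)
            product.limbs = out
            return product

        def __pow__(self, exponent):
            # Binary exponentiation, positive integer exponents only.
            assert exponent > 0
            result, base = ToyDec(1), self
            while exponent:
                if exponent & 1:
                    result = result * base
                exponent >>= 1
                if exponent:
                    base = base * base
            return result

        def __str__(self):
            # Linear in the number of decimal digits:  zero-pad each limb
            # to 9 digits and join, most-significant limb first.
            pieces = [str(self.limbs[-1])]
            pieces += ['%09d' % limb for limb in reversed(self.limbs[:-1])]
            return ''.join(pieces)

    print ToyDec(7) ** 1000    # same digits as str(7 ** 1000)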
[Bryan]
why can't python longs or bigints use your implementation under the covers?
Because it would be insane <wink>. After a few years of work (if
speed-obsessed volunteers appear to do it), the new-in-2.4 decimal
module should approach comparable speeds. For more reasons than I'll
count now, trying to use the current version of the decimal module to
print large powers in decimal is slower than using builtin longs.
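
For the curious, pressing the decimal module into this job looks
roughly like the Python 2-era sketch below (not from this exchange;
the power and precision are arbitrary). The context precision has to
be at least the number of digits in the exact result, or the power is
silently rounded, and with the pure-Python decimal module of that era
the arithmetic runs far slower than computing the same power with
builtin longs and calling str() on it.

    import decimal

    decimal.getcontext().prec = 1000     # 7**1000 has 846 digits, so this
                                         # is enough for an exact result
    power = decimal.Decimal(7) ** 1000
    print power                          # same digits as str(7 ** 1000)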