inverse of a matrix with Fraction entries


casevh

casevh said:
I coded a quick matrix inversion function and measured running times
using GMPY2 rational and floating point types. For the floating point
tests, I used a precision of 1000 bits. With floating point values,
the running time grew as n^3. With rational values, the running time
grew as n^4*ln(n).
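
A minimal sketch of the kind of benchmark being described (the inversion
routine, matrix size, and entries are illustrative assumptions, not the
actual test code), using gmpy2's mpq and mpfr types:

import random
import time

import gmpy2
from gmpy2 import mpfr, mpq

def invert(A):
    """Invert an n-by-n matrix by Gauss-Jordan elimination.

    Works for any entry type supporting +, -, *, / and comparison,
    e.g. gmpy2.mpq (exact rationals) or gmpy2.mpfr (big floats).
    Assumes A is nonsingular.
    """
    n = len(A)
    one = A[0][0] ** 0  # multiplicative identity in the entry type
    # Augment A with the identity matrix on the right.
    M = [row[:] + [one if i == j else one * 0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Pivot on the first row with a nonzero entry in this column.
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        scale = one / M[col][col]
        M[col] = [x * scale for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]  # the right half is now the inverse

gmpy2.get_context().precision = 1000  # 1000-bit floats, as in the test
n = 30  # grow this to see the difference in scaling
Aq = [[mpq(random.randint(1, 99), random.randint(1, 99))
       for _ in range(n)] for _ in range(n)]
Af = [[mpfr(x) for x in row] for row in Aq]

for name, A in (("mpq", Aq), ("mpfr", Af)):
    start = time.perf_counter()
    invert(A)
    print(name, time.perf_counter() - start)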

Did you clear the denominators before you started?

-- [mdw]

No. It was more of an exercise in illustrating the difference between
arithmetic operations that run in constant time and those that run in
n*ln(n) time or worse.
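
For anyone following along, "clearing the denominators" would mean
scaling each row to integer entries first, for example by the LCM of the
row's denominators. A minimal sketch with stdlib Fractions (the helper
name is made up):

from fractions import Fraction
from math import lcm

def clear_denominators(row):
    """Scale a row of Fractions to integers; return (ints, scale)."""
    m = lcm(*(f.denominator for f in row))
    return [int(f * m) for f in row], m

row = [Fraction(1, 2), Fraction(2, 3), Fraction(3, 4)]
print(clear_denominators(row))  # ([6, 8, 9], 12)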

casevh
 

Steven D'Aprano

As you perform repeated calculations with rationals, the size of the
values (usually) keeps growing, where size is measured as the lengths of
the numerator and denominator. The speed and memory requirements of each
operation are no longer constant.
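
You can watch it happen with Python's Fraction; the particular series
here is just for illustration:

from fractions import Fraction

x = Fraction(0)
for k in range(1, 41):
    x += Fraction(1, k)  # partial sums of the harmonic series
    if k % 10 == 0:
        # The denominator's size in bits keeps climbing.
        print(k, x.denominator.bit_length())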

You're not comparing apples with apples. You're comparing arbitrary
precision calculations with fixed precision calculations. If you want
fixed memory requirements, you should use fixed-precision rationals. Most
rational implementations I know of have a method for limiting the
denominator to a maximum value (even if it's not especially convenient).
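
Python's fractions.Fraction, for example, has exactly such a method:

from fractions import Fraction

pi = Fraction(3141592653589793, 10**15)  # a decimal approximation of pi
# Best approximation with denominator <= 1000: 355/113
print(pi.limit_denominator(1000))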

On the other hand, if you want infinite precision, there are floating
point implementations that offer that too. How well do you think they
perform relative to rationals? (Hint: what are the memory requirements
for an infinite precision binary float equal to Fraction(1, 3)? *wink*)
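
For the record, 1/3 has no finite binary expansion, which you can see
from what an ordinary 53-bit float actually stores:

from fractions import Fraction

print(Fraction(1, 3))   # exact: 1/3
print(Fraction(1 / 3))  # the stored double: 6004799503160661/18014398509481984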

Forth originally didn't offer floats, because there is nothing you can do
with floats that can't be done slightly less conveniently but more
accurately with a pair of integers treated as a rational. Floats, after
all, *are* rationals, where the denominator is implied rather than
explicit.
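
Python even makes the implied denominator explicit:

print((0.75).as_integer_ratio())  # (3, 4): exactly representable
print((0.1).as_integer_ratio())   # (3602879701896397, 36028797018963968)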

I suspect that if rational arithmetic had been given half the attention
that floating point arithmetic has, most of the performance difficulties
would be significantly reduced. Perhaps not entirely eliminated, but I
would expect that for a fixed precision calculation we should see
equivalent big-Oh behaviour, differing only in the multiplicative
factors.

In any case, the real lesson of your benchmark is that infinite precision
is quite costly, no matter how you implement it :)
 

casevh

Steven D'Aprano said:
You're not comparing apples with apples. You're comparing arbitrary
precision calculations with fixed precision calculations. If you want
fixed memory requirements, you should use fixed-precision rationals.

In any case, the real lesson of your benchmark is that infinite precision
is quite costly, no matter how you implement it :)

I think most users are expecting infinite precision when they use
rationals. Trying to explain limited precision rational arithmetic
might be interesting.

Knuth described a "floating-slash" arithmetic that used a fixed total
number of bits shared between the numerator and the denominator (the
"slash" position floats), with a rounding algorithm that prefers "simple"
fractions over more complex ones. IIRC, the original paper was from the
1960s.
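
Fraction.limit_denominator isn't that algorithm, but it gives the
flavor: round after each operation to the nearest fraction that fits a
bit budget. A crude sketch, with an assumed 16-bit budget:

from fractions import Fraction

SLASH_BITS = 16  # illustrative per-result denominator budget

def slash_round(x):
    """Round x to the nearest Fraction whose denominator fits in
    SLASH_BITS bits (the real scheme budgets both components)."""
    return x.limit_denominator((1 << SLASH_BITS) - 1)

x = Fraction(0)
for k in range(1, 100):
    x = slash_round(x + Fraction(1, k))  # sizes stay bounded

print(x, x.denominator.bit_length())  # bit_length stays <= 16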

casevh
 

Steven D'Aprano

casevh said:
I think most users are expecting infinite precision when they use
rationals. Trying to explain limited precision rational arithmetic might
be interesting.

Most users are expecting infinite precision decimals when they use
floats, and get surprised and dismayed by things like this:

>>> 0.1 + 0.2 == 0.3
False

Trying to explain arithmetic on computers is interesting, no matter what
you use.
 
