You're not comparing apples with apples. You're comparing
arbitrary-precision calculations with fixed-precision calculations. If
you want fixed memory requirements, you should use fixed-precision
rationals. Most rational implementations I know of have a method for
limiting the denominator to a maximum value (even if it's not always
convenient).
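Python's fractions module is one example: Fraction.limit_denominator()
returns the closest fraction whose denominator stays under a bound you
choose. For instance:

    from fractions import Fraction

    # limit_denominator() caps the denominator, trading exactness
    # for a bounded representation.
    x = Fraction(355, 113) ** 2          # exact square: 126025/12769
    capped = x.limit_denominator(1000)   # nearest fraction with
                                         # denominator <= 1000
    print(x)
    print(capped)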
On the other hand, if you want infinite precision, there are
floating-point implementations that offer that too. How well do you
think they perform relative to rationals? (Hint: what are the memory
requirements for an infinite-precision binary float exactly equal to
fraction(1, 3)? *wink*)
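To spell out the hint: 1/3 has no terminating binary expansion, so a
binary float *exactly* equal to it would need unbounded storage, while
the rational gets by with two small integers. A quick sketch:

    from fractions import Fraction

    # Extracting binary digits of 1/3 one at a time just cycles
    # through 01 forever.
    x = Fraction(1, 3)
    bits = ''
    for _ in range(16):
        x *= 2
        bit = int(x)        # integer part is the next binary digit
        bits += str(bit)
        x -= bit

    print('0.' + bits)      # 0.0101010101010101...
    # Fraction(1, 3) itself stores just the two integers 1 and 3.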
Forth originally didn't offer floats, because there is nothing you can do
with floats that can't be done slightly less conveniently but more
accurately with a pair of integers treated as a rational. Floats, after
all, *are* rationals, where the denominator is implied rather than
explicit.
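Python will even show you the implied denominator:
float.as_integer_ratio() recovers the exact rational a given binary
float represents, which is always an integer over a power of two.

    # Every finite binary float is exactly some integer divided by
    # a power of two; as_integer_ratio() recovers that rational.
    print((0.75).as_integer_ratio())  # (3, 4)
    print((0.1).as_integer_ratio())   # (3602879701896397,
                                      #  36028797018963968)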
I suspect that if rational arithmetic had been given half the attention
that floating-point arithmetic has been given, most of the performance
difficulties would be significantly reduced. Perhaps not entirely
eliminated, but I would expect that for a fixed-precision calculation,
we should get equivalent big-Oh behaviour, differing only in the
multiplicative constants.
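As a sketch of what I mean (the class name FixedRational and the bound
MAX_DEN below are my own inventions, not an existing library): re-round
the denominator under a fixed bound after every operation, and each
value occupies bounded memory, so each operation on values of bounded
magnitude runs in constant time, just as with floats.

    from fractions import Fraction

    MAX_DEN = 2 ** 53   # arbitrary bound, roughly double precision

    class FixedRational:
        # Hypothetical fixed-precision rational: every result is
        # rounded back to a denominator no larger than MAX_DEN.
        def __init__(self, value):
            self.value = Fraction(value).limit_denominator(MAX_DEN)

        def __add__(self, other):
            return FixedRational(self.value + other.value)

        def __mul__(self, other):
            return FixedRational(self.value * other.value)

        def __repr__(self):
            return "FixedRational(%s)" % self.value

    x = FixedRational(Fraction(1, 3))
    print(x * x + FixedRational(Fraction(1, 7)))   # 16/63, exactly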
In any case, the real lesson of your benchmark is that infinite
precision is quite costly, no matter how you implement it.
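A rough illustration of where that cost comes from (illustrative only;
I haven't reproduced your benchmark): summing 1/k exactly makes the
denominator balloon, and every subsequent operation pays for those
extra bits, while the float equivalent stays at 64 bits throughout.

    from fractions import Fraction

    # Exact summation: the common denominator grows towards
    # lcm(1..49), so each addition handles ever-larger integers.
    total = Fraction(0)
    for k in range(1, 50):
        total += Fraction(1, k)

    print(total.denominator.bit_length(), "bits of exact denominator")
    print(sum(1.0 / k for k in range(1, 50)), "(float: constant size)")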