140 seems pretty extreme. Most tests I've run put numeric calculations at only 10-20 times slower than C.
A JPEG library with good optimization uses only integer algorithms, and
that is different from FP operations, which are already slow on Intel
CPUs.
But I find it strange that numerics (without using special extensions)
should be only 10-20 times slower than C. This is not what you find in
other benchmarks.
That 10-20 times slower metric was doing numerical integration in pure
Python and ANSI standard C, originally run using Python version 2.0 (2.3
is roughly 20% faster than 2.0). Certainly it was using floating point
math, but that suggests that the gap between Python and C is smaller for
FP math than for integer math.
For example, in the great language shootout
http://www.bagley.org/~doug/shootout/bench/sieve/
you find that the "Sieve of Eratosthenes", an integer algorithm, runs
207 times slower than C. I think the algorithms are comparable in their
use of instructions.
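For reference, the shootout-style sieve looks roughly like this in pure Python (my sketch, not the exact benchmark source; the benchmark re-creates the 8193-element flags list on every pass, which is the allocation behavior discussed below):

```python
def sieve(iterations=10, size=8192):
    """Shootout-style sieve: repeat the whole computation, re-creating
    the flags list each iteration, and return the final prime count."""
    count = 0
    for _ in range(iterations):
        flags = [True] * (size + 1)   # fresh 8193-element list every pass
        count = 0
        for i in range(2, size + 1):
            if flags[i]:
                count += 1
                # mark all multiples of the prime i as composite
                for k in range(i + i, size + 1, i):
                    flags[k] = False
    return count

print(sieve())   # number of primes <= 8192
```

The inner marking loop is where pure Python loses most of its time relative to C, since each `flags[k] = False` is a full interpreted bytecode step.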
I just sent the author an update to his version in Python that reduces
runtime from ~47 seconds to ~37 seconds, putting it at only ~160 times
slower.
Unfortunately the test still isn't fair. The Python version ends up
creating and destroying lists (arrays) of 8193 elements, and over the
running of the algorithm, will need to allocate and free 8 megs. It
turns out that creating and destroying lists is faster in Python than
refilling an existing list in place, but that really just means that
Python should probably have a better sequence initialization function.
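That claim is easy to check with `timeit` (the function names here are mine, purely for illustration): compare building a fresh 8193-element list with `[True] * n`, which runs at C speed, against resetting an existing list element by element in interpreted Python:

```python
import timeit

SIZE = 8193   # size of the sieve's flags list

def make_new():
    """Allocate a brand-new flags list (C-level sequence repeat)."""
    return [True] * SIZE

def fill_in_place(flags):
    """Reset an existing flags list with an interpreted Python loop."""
    for i in range(SIZE):
        flags[i] = True
    return flags

flags = [True] * SIZE
t_new = timeit.timeit(make_new, number=200)
t_fill = timeit.timeit(lambda: fill_in_place(flags), number=200)
print("new list:", t_new, "  in-place fill:", t_fill)
```

On every interpreter I've tried, the fresh allocation wins by a wide margin, which supports the point: create-and-destroy is cheap, and what's missing is an equally fast way to reinitialize a sequence in place.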
- Josiah