alexru said:
Is there any standardized interpreter speed evaluation tool? Say I
made a few changes in the interpreter code and want to know if those
changes made Python any better, which test should I use?
Not trying to be a smart-aleck, but the test you use should reflect your
definition of the phrase "any better." For example, suppose you decided
that you could speed things up by pre-calculating a few dozen megabytes
of data, and including that in the python.dll. This would change the
memory footprint of Python, and possibly the interpreter's startup/load
time, not just the runtime of whatever loop you are testing.
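If startup time is one of the things you care about, a rough relative
number can be had by timing a do-nothing child interpreter from a parent
process. A minimal sketch; the figure includes process-creation
overhead, so it is only good for comparing one build against another:

import subprocess
import sys
import time

# Launch a child interpreter that runs an empty program and time the
# whole thing from the parent.  Process-creation overhead is included,
# so treat the result as relative, not absolute.
start = time.time()
subprocess.call([sys.executable, "-c", "pass"])
print("interpreter startup, roughly: %.4f seconds" % (time.time() - start))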
But assuming you know all that and just want to do timings, there are
at least two stdlib approaches that can help.
The first is the time module. time.time() returns the wall-clock time
as a floating point number of seconds, and you can subtract two readings
within the same program to see how long something takes. time.clock()
is similar, but note that it measures wall-clock time only on Windows;
on Unix it measures processor time. This approach ignores interpreter
startup and does nothing to compensate for other processes running on
the system, but it is very useful and easy to run. The resolution of
each function varies by OS, so you may have to experiment to see which
one gives the most precision.
import time
start = time.time()
dotest()    # dotest() stands for whatever code you want to measure
print "Elapsed time", time.time() - start
The timeit module can be used within code for timing, or it can load
and run a test from the command line. In the latter form the overall
command includes the startup time for the interpreter, which might be
important (though the per-loop figures timeit prints measure only the
statement itself). It also executes the desired code repeatedly, so you
can get some form of averaging, or amortizing.
python -m timeit ...
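For example, to time summing a list both from the shell and from within
a program (the statement and setup here are placeholders for whatever
you actually want to measure):

python -m timeit -s "data = range(1000)" "sum(data)"

or, inside code:

import timeit

# setup runs once; the statement is executed number times and the
# total elapsed time is returned.
elapsed = timeit.timeit("sum(data)", setup="data = range(1000)", number=10000)
print("10000 runs took %.4f seconds" % elapsed)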
Note that due to system caching, doing multiple runs consecutively may
give different results than runs that are separated by other programs
running. And of course when you recompile and relink, the system
buffers will contain an unusual set of data. So there are ways of
flushing the system buffers to let programs start on equal footing; on
Linux, for example, running sync and then writing 3 to
/proc/sys/vm/drop_caches (as root) drops the page cache.
DaveA