Makoto Kuwata
I released Benchmarker 1.1.0.
http://pypi.python.org/pypi/Benchmarker/
Benchmarker is a small utility to benchmark your code.
Example
=======
ex.py::
    def fib(n):
        return n <= 2 and 1 or fib(n-1) + fib(n-2)

    from benchmarker import Benchmarker
    bm = Benchmarker(30)   # or Benchmarker(width=30, out=sys.stderr, header=True)

    ## Python 2.5 or later
    with bm('fib(n) (n=33)'):  fib(33)
    with bm('fib(n) (n=34)'):  fib(34)
    with bm('fib(n) (n=35)'):  fib(35)

    ## Python 2.4
    bm('fib(n) (n=33)').run(fib, 33)   # or .run(lambda: fib(33))
    bm('fib(n) (n=34)').run(fib, 34)   # or .run(lambda: fib(34))
    bm('fib(n) (n=35)').run(fib, 35)   # or .run(lambda: fib(35))

    ## print compared matrix
    bm.print_compared_matrix(sort=False, transpose=False)
Output::
    $ python ex.py
                         utime     stime     total      real
    fib(n) (n=33)        1.890     0.000     1.890     1.900
    fib(n) (n=34)        3.030     0.010     3.040     3.058
    fib(n) (n=35)        4.930     0.010     4.940     4.963
    ---------------------------------------------------------------
                          real      [01]      [02]      [03]
    [01] fib(n) (n=33)  1.900s         -     60.9%    161.2%
    [02] fib(n) (n=34)  3.058s    -37.9%         -     62.3%
    [03] fib(n) (n=35)  4.963s    -61.7%    -38.4%         -
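(How to read the compared matrix: the cell in row [i], column [j] shows how much
more real time entry [j] took than entry [i]; for example, fib(34) took about
60.9% more real time than fib(33), i.e. 3.058/1.900 - 1 ≈ 0.609.)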
Changes in release 1.1.0
========================
* Enhanced Benchmarker.run() to take function arguments.
  ::

      bm = Benchmarker()
      bm('fib(34)').run(fib, 34)     # same as .run(lambda: fib(34))
* (experimental) Enhanced Benchmarker.run() to use the function name as the
  title when no title is specified.
  ::

      def fib34(): fib(34)
      bm = Benchmarker()
      bm.run(fib34)                  # same as bm('fib34').run(fib34)
* Enhanced to support a compared matrix of benchmark results.
  ::

      bm = Benchmarker(9)
      bm('fib(30)').run(fib, 30)
      bm('fib(31)').run(fib, 31)
      bm('fib(32)').run(fib, 32)
      bm.print_compared_matrix(sort=False, transpose=False)
      ## output example
      #              utime     stime     total      real
      #fib(30)       0.440     0.000     0.440     0.449
      #fib(31)       0.720     0.000     0.720     0.722
      #fib(32)       1.180     0.000     1.180     1.197
      #--------------------------------------------------------------------------
      #                real      [01]      [02]      [03]
      #[01] fib(30)  0.4487s        -     60.9%    166.7%
      #[02] fib(31)  0.7222s   -37.9%         -     65.7%
      #[03] fib(32)  1.1967s   -62.5%    -39.6%         -
* Benchmark results are stored in Benchmarker.results as a list of tuples
  (a post-processing sketch follows this list).
  ::

      bm = Benchmarker()
      bm('fib(34)').run(fib, 34)
      bm('fib(35)').run(fib, 35)
      for result in bm.results:
          print result
      ## output example:
      #('fib(34)', 4.37, 0.02, 4.39, 4.9449)
      #('fib(35)', 7.15, 0.05, 7.20, 8.0643)
* Changed the time format from '%10.4f' to '%9.3f'.
* Changed to run a full GC for each benchmark (see the short illustration
  after this list).
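
The result tuples can also be post-processed directly. Here is a minimal
sketch, assuming each tuple is (label, utime, stime, total, real) as the
output example above suggests::

    bm = Benchmarker()
    bm('fib(34)').run(fib, 34)
    bm('fib(35)').run(fib, 35)

    ## dump the raw numbers as CSV lines
    ## (assumes the tuple layout (label, utime, stime, total, real))
    for label, utime, stime, total, real in bm.results:
        print "%s,%.3f,%.3f,%.3f,%.3f" % (label, utime, stime, total, real)

    ## find the entry with the smallest real (wall-clock) time
    fastest = sorted(bm.results, key=lambda t: t[-1])[0]
    print "fastest: %s (%.4fs real)" % (fastest[0], fastest[-1])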
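
For reference, running a full GC around a measurement by hand looks roughly
like this (just an illustration of the idea, not Benchmarker's internal code)::

    import gc, time

    gc.collect()                       # collect leftovers from earlier runs first,
                                       # so they are not charged to this measurement
    start = time.time()
    fib(34)                            # the code being measured
    print "real: %.3fs" % (time.time() - start)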