Serge Orlov
Martin said:
Felipe said:
I love benchmarks, so as I was testing the options, I saw something very
strange:
$ python2.4 -mtimeit 'x = range(100000); '
100 loops, best of 3: 6.7 msec per loop
$ python2.4 -mtimeit 'x = range(100000); del x[:]'
100 loops, best of 3: 6.35 msec per loop
$ python2.4 -mtimeit 'x = range(100000); x[:] = []'
100 loops, best of 3: 6.36 msec per loop
$ python2.4 -mtimeit 'x = range(100000); del x'
100 loops, best of 3: 6.46 msec per loop
Why is the first benchmark the slowest? I don't get it... could someone
test this, too?
In the first benchmark you need space for two lists, the old one and
the new one; in the other benchmarks you need only a single block of
memory (*). Drawing conclusions from here gets difficult: you would have
to study the malloc implementation to find out whether it works better in
one case than in the other. It could also be an issue of the processor
cache: one case may fit into the cache while the other may not.
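To spell out the two-lists point as I understand it, here is a minimal
sketch of what consecutive timeit iterations do in the first benchmark
compared with the clearing variants (range() is wrapped in list() only so
the snippet also runs on Python 3, where range() no longer returns a list):

# First benchmark: the new list is built while the previous one is still
# bound to x, so two 100000-element lists exist at the same time.
x = list(range(100000))      # iteration 1
x = list(range(100000))      # iteration 2: the old list is freed only
                             # after the new one has been built

# Clearing variants: the old list is emptied (or dropped) before the next
# iteration builds a fresh one, so only one large list is alive at a time.
x = list(range(100000))
del x[:]                     # same list object, now empty; x[:] = [] is
                             # equivalent, and del x drops the name entirely
x = list(range(100000))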
An addition to the previous message: now I follow you. The two arrays and
the cache seem to be the second reason for the slowdown, but iterating
backwards also contributes to it.
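For anyone who wants to retest this, below is a small sketch that uses the
timeit module instead of the command line. The statements are the ones from
the benchmarks above, with range() wrapped in list() so it also runs on
Python 3; the absolute numbers will of course vary with the machine and
interpreter version.

import timeit

# The four statements from the benchmarks above.
statements = [
    'x = list(range(100000))',
    'x = list(range(100000)); del x[:]',
    'x = list(range(100000)); x[:] = []',
    'x = list(range(100000)); del x',
]

for stmt in statements:
    timer = timeit.Timer(stmt)
    # best of 3 repeats of 100 loops each, mirroring the -mtimeit output.
    best = min(timer.repeat(repeat=3, number=100))
    print('%-40s %.2f msec per loop' % (stmt, best / 100.0 * 1000))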