Ben Bacarisse
Well, I've been having way too much fun(?) with this toy problem,
writing multiple solutions and then trying out different strategies
for packaging tests, measuring performance, and so forth.
======== OUTPUT (lightly edited to conserve space) ========
simple version of replace():
scans input twice
does not try to avoid recomputing string lengths
uses C library string functions
performing timing tests (4 repeats)
timing (length 4004, changing 2 occurrences of 2 chars to 2 chars) 20000 times
times (seconds): 0.56 0.54 0.54 0.54
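For readers following along, a "simple replace() that scans the input twice and uses C library string functions" might look something like the sketch below. This is my own guess at the shape of such a function, not code from the thread; the interface (malloc'd result, caller frees) is an assumption.

```c
#include <stdlib.h>
#include <string.h>

/* Sketch of a two-pass replace(): pass 1 counts occurrences of old
   in s, pass 2 builds a newly allocated copy with each occurrence
   replaced by new.  Leans on strstr/strlen/memcpy throughout and
   makes no attempt to cache string lengths, matching the version
   described above.  Returns NULL on empty pattern or malloc failure. */
char *replace(const char *s, const char *old, const char *new)
{
    size_t count = 0;

    if (*old == '\0')
        return NULL;                  /* empty pattern: refuse */

    /* pass 1: count occurrences */
    for (const char *p = strstr(s, old); p != NULL;
         p = strstr(p + strlen(old), old))
        count++;

    char *out = malloc(strlen(s) - count * strlen(old)
                       + count * strlen(new) + 1);
    if (out == NULL)
        return NULL;

    /* pass 2: copy, substituting at each match */
    char *dst = out;
    const char *src = s, *p;
    while ((p = strstr(src, old)) != NULL) {
        memcpy(dst, src, (size_t)(p - src));
        dst += p - src;
        memcpy(dst, new, strlen(new));
        dst += strlen(new);
        src = p + strlen(old);
    }
    strcpy(dst, src);                 /* tail after the last match */
    return out;
}
```

The repeated strlen() calls inside the loops are deliberate here, to match the "does not try to avoid recomputing string lengths" note in the output above.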
You time each call individually with a timer whose granularity you
don't report (it has a resolution of 1 microsecond, but I don't know
if it has that granularity).
The above times work out at about 27 microseconds per call, so your
method is probably OK, but it won't be on faster hardware. On my
2.53GHz Core2 Duo P8700-based laptop a similar function takes 1.6
microseconds per call, and summing individually measured times is not
reliable. [Aside: you can get it to be more reliable (under certain
assumptions) by taking the granularity into account and doing a little
statistics.]
Anyway, I just wanted to check: is a time more than 16 times slower
than mine a reasonable result on your machine?
Note: I am timing my function, not yours, although I doubt that much
of the factor of 16 is due to either my code or my system's libc.
<big snip>