On Mar 5, 11:55 pm, "Alf P. Steinbach" <[email protected]> wrote:
[...]
I've done some performance testing on Windows and Linux
-- www.webEbenezer.net/comparison.html. On Windows I use
clock() and on Linux I use gettimeofday(). From what I can
tell, gettimeofday() gives more accurate results than clock()
on Linux. Depending on how this thread works out, I may
start using the function Victor mentioned on Windows.
On Unix-based machines, clock() and gettimeofday() measure
different things. I use clock() when I want what clock()
measures, and gettimeofday() when I want what gettimeofday()
measures. For comparing algorithms to see which is more
effective, this means clock().
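To make the distinction concrete, here is a minimal sketch (mine, not
from either post) timing the same CPU-bound loop with both functions.
On an otherwise idle machine the two figures should roughly agree; on
a loaded one, the wall-clock figure will be larger:

    // clock() counts CPU time charged to the process;
    // gettimeofday() counts elapsed wall-clock time (POSIX).
    #include <cstdio>
    #include <ctime>
    #include <sys/time.h>

    int main()
    {
        std::clock_t c0 = std::clock();
        timeval t0;
        gettimeofday(&t0, 0);

        volatile double sink = 0.0;        // arbitrary CPU-bound work
        for (long i = 0; i < 100000000L; ++i)
            sink += i * 0.5;

        std::clock_t c1 = std::clock();
        timeval t1;
        gettimeofday(&t1, 0);

        std::printf("CPU: %fs  wall: %fs\n",
                    double(c1 - c0) / CLOCKS_PER_SEC,
                    (t1.tv_sec - t0.tv_sec)
                        + (t1.tv_usec - t0.tv_usec) / 1000000.0);
        return 0;
    }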
I've just retested the test that saves/sends a list<int> using
clock on Linux. The range of ratios from the Boost version to
my version was between 1.4 and 4.5. The thing about clock()
is that it only returns values in steps of 10,000: 10,000,
20,000, 30,000, 50,000, 60,000, etc.
This sounds like a defective (albeit legal) implementation.
Posix requires CLOCKS_PER_SEC to be 1000000 precisely so that
implementations can offer more precision if the system supports
it. Linux does. I'd file a bug report.
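If you want to see the granularity you're actually getting, something
like this quick sketch will report the step size (on many Linux
configurations, CLOCKS_PER_SEC is 1000000 but the observed step is
10000, i.e. 10 ms, which matches the values you quote):

    // Spin until clock()'s reported value changes, then print
    // the step size: the effective resolution of clock().
    #include <cstdio>
    #include <ctime>

    int main()
    {
        std::clock_t start = std::clock();
        std::clock_t next = start;
        while (next == start)          // busy-wait for the next tick
            next = std::clock();
        std::printf("CLOCKS_PER_SEC = %ld, observed step = %ld ticks\n",
                    long(CLOCKS_PER_SEC), long(next - start));
        return 0;
    }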
Of course, historically, a lot of systems had clocks generated
from the mains, which meant a CLOCKS_PER_SEC of 50 (in Europe)
or 60 (in North America). On such systems, better precision
simply wasn't available, and I've gotten into the habit of not
counting on values of benchmarks that run for less than about 5
minutes. So I would tend not to notice anomalies such as the
one you describe.
I would be more comfortable with it if I could get it to round
its results less coarsely. The range of results with
gettimeofday() for the same test is not so wide: between 2.0
and 2.8. I don't run any other programs while I'm testing,
apart from a shell, vi, and Firefox, and I definitely don't
start or stop any of those between the tests, so I'm of the
opinion that the elapsed-time results are meaningful.
The relative values are probably meaningful if the actual values
are large enough (a couple of minutes, at least) and they are
reproducible. The actual values, not really (but that's
generally not what you're interested in).
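For what it's worth, a sketch of the sort of harness that makes the
run-to-run spread visible. benchmarked() is a hypothetical stand-in
for whatever is being measured:

    #include <cstdio>
    #include <ctime>

    // Hypothetical stand-in for the code under test.
    void benchmarked()
    {
        volatile double sink = 0.0;
        for (long i = 0; i < 50000000L; ++i)
            sink += i;
    }

    int main()
    {
        double best = 1e300, worst = 0.0;
        for (int run = 0; run < 5; ++run) {
            std::clock_t c0 = std::clock();
            benchmarked();
            double s = double(std::clock() - c0) / CLOCKS_PER_SEC;
            if (s < best)  best = s;
            if (s > worst) worst = s;
            std::printf("run %d: %fs\n", run, s);
        }
        // If this spread is much over 10%, the numbers are suspect.
        std::printf("spread: %.1f%%\n", 100.0 * (worst - best) / best);
        return 0;
    }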
In my own tests, with clock(), under both Linux and Solaris, I
generally get differences from one run to the next of
considerably less than 10%. Which is about as accurate as
you're going to get, I think. Under Windows, I have to be more
careful about the surrounding environment, and even then, there
will be an outlier from time to time.
Except for the part about the functions being purely CPU-bound,
this describes my approach/intent.
Again, it depends on what you are trying to measure. If you
want to capture disk transfer speed, for example, then clock()
is NOT the function you want (except under Windows).
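A sketch of why: time a file read with both functions, and most of the
I/O wait simply doesn't show up in clock()'s figure ("data.bin" is a
hypothetical file name):

    // clock() sees only the CPU time spent in the read loop;
    // gettimeofday() sees the full elapsed time, including I/O waits.
    #include <cstdio>
    #include <ctime>
    #include <fstream>
    #include <vector>
    #include <sys/time.h>

    int main()
    {
        std::clock_t c0 = std::clock();
        timeval t0;
        gettimeofday(&t0, 0);

        std::ifstream in("data.bin", std::ios::binary);
        std::vector<char> buf(1 << 20);
        while (in.read(&buf[0], buf.size()))
            ;                          // read the whole file

        std::clock_t c1 = std::clock();
        timeval t1;
        gettimeofday(&t1, 0);

        std::printf("CPU: %fs  elapsed: %fs\n",
                    double(c1 - c0) / CLOCKS_PER_SEC,
                    (t1.tv_sec - t0.tv_sec)
                        + (t1.tv_usec - t0.tv_usec) / 1000000.0);
        return 0;
    }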