CPU time vs. wall time: why is wall time sometimes less than CPU time?


anto.anish

Hi -

I have been using clock() for calculating CPU time and time() for
calculating wall time. However, since time() does not provide
milli/microsecond accuracy, I started using gettimeofday() as below
to calculate wall time:

#include <sys/time.h>   // gettimeofday()

struct timeval tv1, tv2;
struct timezone tz1, tz2;

gettimeofday(&tv1, &tz1);
double time_start1 = (double)tv1.tv_sec + (double)tv1.tv_usec / 1000000.0;

// ALL ALGO PROCESSING HERE

gettimeofday(&tv2, &tz2);
double time_stop1 = (double)tv2.tv_sec + (double)tv2.tv_usec / 1000000.0;

LOGGER.info("BackgroundEstimationAlgoT Run WALL_TIME: %0.3f",
            time_stop1 - time_start1);
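
The clock()-based CPU-time measurement isn't shown above; a minimal
self-contained sketch of the usual pattern, assuming it wraps the same
region (the empty processing comment stands in for the real algorithm):

#include <cstdio>
#include <ctime>   // clock(), CLOCKS_PER_SEC

int main()
{
    clock_t c_start = clock();
    // ALL ALGO PROCESSING HERE
    clock_t c_stop = clock();
    // clock() counts in ticks; divide by CLOCKS_PER_SEC for seconds
    printf("CPU_TIME: %0.3f\n",
           (double)(c_stop - c_start) / CLOCKS_PER_SEC);
    return 0;
}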

At times, I have noticed that the calculated wall time is less than
the CPU time, and I am not sure why. I double-checked the code;
nothing seems to be wrong in the simple subtraction. My understanding
was always that wall time (elapsed time) stays higher than CPU time
(compute time).

I test my application on a Linux cluster with 24 compute nodes.
Any pointers as to why this is happening?

Thanks
Anish
 

James Kanze

> Hi -
>
> I have been using clock() for calculating CPU time and time()
> for calculating wall time. However, since time() does not
> provide milli/microsecond accuracy, I started using
> gettimeofday() as below to calculate wall time:
>
> struct timeval tv1, tv2;
> struct timezone tz1, tz2;
> gettimeofday(&tv1, &tz1);
> double time_start1 = (double)tv1.tv_sec + (double)tv1.tv_usec / 1000000.0;
> // ALL ALGO PROCESSING HERE
> gettimeofday(&tv2, &tz2);
> double time_stop1 = (double)tv2.tv_sec + (double)tv2.tv_usec / 1000000.0;
> LOGGER.info("BackgroundEstimationAlgoT Run WALL_TIME: %0.3f",
>             time_stop1 - time_start1);
>
> At times, I have noticed that the calculated wall time is less
> than the CPU time, and I am not sure why. I double-checked the
> code; nothing seems to be wrong in the simple subtraction. My
> understanding was always that wall time (elapsed time) stays
> higher than CPU time (compute time).

You don't seem to be measuring CPU time at all here, so I'm not
sure what you're comparing. clock() is supposed to return CPU
time, to the system's best approximation: it doesn't always, and
there will be some jitter, so it's quite possible for a
difference of two clock() values to come out larger than the
difference between two calls to time() (or gettimeofday()). The
differences shouldn't normally be very large, however.
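
A quick way to observe that jitter is to spin until the value
returned by clock() changes and print the step size; on some systems
the observed step is much coarser than 1/CLOCKS_PER_SEC, which is
exactly the granularity that can distort short measurements. A
minimal sketch:

#include <cstdio>
#include <ctime>

int main()
{
    printf("CLOCKS_PER_SEC = %ld\n", (long)CLOCKS_PER_SEC);
    clock_t prev = clock();
    for (int i = 0; i < 5; ++i) {
        clock_t cur;
        while ((cur = clock()) == prev)
            ;   // busy-wait until clock() ticks over
        printf("step: %ld ticks (%g s)\n",
               (long)(cur - prev),
               (double)(cur - prev) / CLOCKS_PER_SEC);
        prev = cur;
    }
    return 0;
}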
> I test my application on a Linux cluster with 24 compute
> nodes.

That could also affect things, if you were running multiple
threads. I would expect (but the standard doesn't say anything
about it) that clock() would return the sum of the times spent
in all of the process's threads.
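
A minimal sketch of that multithreaded case, assuming clock() sums CPU
time over all threads (which is what Linux does): two busy threads run
in parallel, and the clock() delta comes out at roughly twice the
gettimeofday() delta, i.e. CPU time exceeds wall time. The burn() busy
loop is a hypothetical stand-in for real work.

#include <cstdio>
#include <ctime>
#include <thread>
#include <sys/time.h>

static double wall_now()
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1000000.0;
}

static void burn()
{
    volatile double x = 0.0;
    for (long i = 0; i < 200000000L; ++i)
        x += i * 0.5;   // keep the CPU busy
}

int main()
{
    clock_t c0 = clock();
    double  w0 = wall_now();

    std::thread t1(burn);   // both threads burn CPU concurrently
    std::thread t2(burn);
    t1.join();
    t2.join();

    clock_t c1 = clock();
    double  w1 = wall_now();

    printf("CPU_TIME:  %0.3f s\n", (double)(c1 - c0) / CLOCKS_PER_SEC);
    printf("WALL_TIME: %0.3f s\n", w1 - w0);   // expect CPU ~= 2 x wall
    return 0;
}

(Compile with -std=c++11 -pthread.)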
 
