Allan said:
You are correct, it has nothing to do with the system time at all.
Is this strictly true? The time.h header also specifies CLOCKS_PER_SEC,
which one can use to find out the time in seconds. If CLOCKS_PER_SEC is
1000 then we know that the resolution is one microsecond.
You mean "millisecond," but it doesn't really matter:
CLOCKS_PER_SEC tells us the units in which `clock_t' expresses
its value, but not the accuracy with which that value is
measured.
("Huh?")
One light-year is the distance a photon travels in one
year in an undisturbed vacuum. The "units" program available
on many Unix systems tells me that this distance is 9.460528e+15
meters. Does that mean that the length of the light-year is
known to an accuracy of plus-or-minus half a meter? Of course
not: it just means that the meter is one of the standard units
in which length is expressed.
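Back to clock(): for concreteness, here is a minimal sketch of
CLOCKS_PER_SEC doing its one real job, converting a `clock_t'
difference into seconds. The busy loop is just a stand-in workload,
not anything from the original question.

#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t start = clock();

    /* Stand-in workload: burn some CPU time so there is
       something to measure. */
    volatile unsigned long sink = 0;
    for (unsigned long i = 0; i < 100000000UL; i++)
        sink += i;

    clock_t end = clock();

    /* CLOCKS_PER_SEC converts clock_t units into seconds;
       it says nothing about how finely clock() actually ticks. */
    double seconds = (double)(end - start) / CLOCKS_PER_SEC;
    printf("CPU time used: %f seconds\n", seconds);
    return 0;
}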
On the system I'm using at the moment, CLOCKS_PER_SEC is
one million, meaning that `clock_t' values are expressed to
a precision of one microsecond. But the underlying hardware
clock ticks at 100Hz, so clock() cannot actually measure an
interval shorter than ten milliseconds.
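If you want to see that gap between units and resolution for
yourself, something like the sketch below will show it: it spins
until clock() moves off its current value, then times one full step
between successive values. (The 100Hz figure is just this system;
your numbers will differ.)

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* Wait for clock() to move off its current value ... */
    clock_t prev = clock();
    clock_t now;
    do {
        now = clock();
    } while (now == prev);

    /* ... then time one full step between successive values. */
    prev = now;
    do {
        now = clock();
    } while (now == prev);

    /* The step is expressed in 1/CLOCKS_PER_SEC units, but its
       size reveals the real tick of the underlying clock. */
    printf("clock() advanced by %ld units = %g seconds\n",
           (long)(now - prev), (double)(now - prev) / CLOCKS_PER_SEC);
    return 0;
}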
Thought experiment: Express your age as a `clock_t' value
(ignoring possible overflow), using CLOCKS_PER_SEC as it's
defined on your favorite platform. Do you believe the answer?
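A back-of-the-envelope sketch, assuming (purely for illustration) an
age of 30 years and CLOCKS_PER_SEC of one million:

#include <stdio.h>

int main(void)
{
    /* Hypothetical: a 30-year age expressed in clock_t units,
       assuming CLOCKS_PER_SEC of one million. */
    double years   = 30.0;
    double seconds = years * 365.25 * 86400.0;  /* ~9.47e8 s */
    double clocks  = seconds * 1000000.0;       /* ~9.47e14  */

    printf("age in clock units: %.3e\n", clocks);
    /* A 32-bit clock_t tops out near 2.1e9, which at these units
       is roughly 36 minutes, so the value overflows long before
       it means anything.  And even with a wider clock_t, the last
       six digits claim a microsecond precision that nobody's
       birth certificate can back up. */
    return 0;
}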
Precision is one thing, accuracy is another.