Jordan Abel said:
I wonder how I can calculate how much time a routine spends?
I'd like to get the result formatted as minutes and seconds.
Why do you want to do this in C (or C++)? The minimum resolution of the C
time functions will be much larger than the time spent in the C routine.
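A minimal sketch of that standard-C approach, assuming clock() as the timer
and a hypothetical busy_work() as the routine under measurement.
CLOCKS_PER_SEC gives the nominal tick rate, but the actual granularity is
implementation-defined, which is exactly the resolution concern raised here:

#include <stdio.h>
#include <time.h>

/* Hypothetical stand-in for the routine being timed. */
static void busy_work(void)
{
    volatile unsigned long n = 0;
    unsigned long i;
    for (i = 0; i < 100000000UL; i++)
        n += i;
}

int main(void)
{
    clock_t start, end;
    double elapsed;
    int minutes;

    start = clock();
    busy_work();
    end = clock();

    /* clock() counts processor time in ticks; CLOCKS_PER_SEC is the
       tick rate, but the real granularity may be much coarser. */
    elapsed = (double)(end - start) / CLOCKS_PER_SEC;

    minutes = (int)(elapsed / 60.0);
    printf("elapsed: %d min %.3f sec\n", minutes, elapsed - 60.0 * minutes);
    return 0;
}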
Easy to say, given that among the absolutely nothing the standard says
about time_t is absolutely nothing about its resolution. [Elsewhere, it
also says nothing about the amount of time spent in any C routine.]
Exactly, _nothing_ is guaranteed for time_t or clock_t. It could be large.
It could be small. But when desktop CPUs are approaching 4 GHz (4 followed
by nine zeros, in hertz), and minicomputers and mainframes are faster
still, it's very likely it won't be small enough. In that case, he'll need
a CPU instruction or a high-speed hardware clock. If he is using a PC,
Intel's rdtsc (Read Time-Stamp Counter) instruction is 64 bits in size and
_guaranteed_ to increment by one on every CPU clock. The same or similar
would be true of a high-speed hardware clock. Calling a system function to
do the same thing that one or a few assembly instructions can do with much
less overhead doesn't make any sense in this situation, at least to me. I'm
not about to ignore reality, mathematics, or history just because it isn't
in _the_ specification somewhere.
Rod Pemberton
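A rough sketch of the rdtsc approach described above, using GCC-style
inline assembly on x86. This is compiler- and architecture-specific rather
than standard C, and the 4 GHz figure below is only a placeholder for the
machine's actual clock rate, which you would need to know to convert
cycles into seconds:

#include <stdio.h>

/* Read the 64-bit time-stamp counter via rdtsc, which returns the
   count in EDX:EAX. GCC-style inline assembly, x86 only. */
static unsigned long long read_tsc(void)
{
    unsigned int lo, hi;
    __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
    return ((unsigned long long)hi << 32) | lo;
}

int main(void)
{
    unsigned long long start, end;
    const double cpu_hz = 4.0e9;  /* placeholder: assumes a 4 GHz clock */

    start = read_tsc();
    /* ... routine being timed goes here ... */
    end = read_tsc();

    printf("cycles: %llu (~%.9f sec at %.0f Hz)\n",
           end - start, (double)(end - start) / cpu_hz, cpu_hz);
    return 0;
}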