response time

  • Thread starter José

José

I'm programming a simple script to calculate the response time from one
server to another.
I store the time in seconds in a variable. I fetch the web page, and I take the
time again; the difference is the time one server takes to fetch the page.
I can only calculate it in seconds. Is there a way to do it in milliseconds?

Thanks
 

Alex Martelli

José said:
I'm programming a simple script to calculate the response time from one
server to another.
I store the time in seconds in a variable. I fetch the web page, and I take the
time again; the difference is the time one server takes to fetch the page.
I can only calculate it in seconds. Is there a way to do it in
milliseconds?

After "import time", time.time() returns the time (elapsed since an
arbitrary epoch) with the unit of measure being the second, but the
precision being as high as the platform on which you're running will
allow. The difference between two results of calling time.time() is
therefore in seconds _and fractions_; whether the precision is (e.g.)
1/100 of a second, or 1/1000 of a second, or whatever, depends on
what platform you're running. In any case, just multiply that difference
by 1000 and you'll have it in milliseconds (possibly rounded e.g. to the
closest 10 milliseconds if your underlying platform doesn't provide
better precision than that, of course).
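A minimal sketch of that approach (the timing helper and the example URL are illustrative, not from the original script):

```python
import time

def elapsed_ms(func, *args, **kwargs):
    """Call func and return (its result, elapsed time in milliseconds)."""
    start = time.time()                      # seconds as a float
    result = func(*args, **kwargs)
    return result, (time.time() - start) * 1000.0   # fractional s -> ms

# To time a page fetch in modern Python you might wrap urllib, e.g.:
#   import urllib.request
#   _, ms = elapsed_ms(urllib.request.urlopen, "http://example.com")
# Here a short sleep stands in for the fetch:
_, ms = elapsed_ms(time.sleep, 0.05)
print("took about %.1f ms" % ms)
```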


Alex
 

John J. Lee

Alex Martelli said:
José said:
I'm programming a simple script to calculate the response time from one
server to another. [...]
I can only calculate it in seconds. Is there a way to do it in
milliseconds?

After "import time", time.time() returns the time (elapsed since an
arbitrary epoch) with the unit of measure being the second, but the
precision being as high as the platform on which you're running will
allow. The difference between two results of calling time.time() is
[...]

Also note that Windows' time(), in particular, has a precision of only
around 50 milliseconds (according to Tim Peters, so I haven't bothered
to test it myself ;-). Pretty strange.


John
 

John J. Lee

forgot to add: time.clock() might be more useful on Windows, if you
want high precision.
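A note for anyone reading this later: time.clock() measured wall-clock time with high resolution on Windows but CPU time on Unix, and it was eventually removed in Python 3.8. In modern Python, time.perf_counter() is the portable high-resolution choice; a minimal sketch:

```python
import time

# time.perf_counter() is a monotonic, high-resolution clock on every
# platform -- the modern replacement for the Windows-only advantage
# that time.clock() used to offer.
start = time.perf_counter()
time.sleep(0.01)
elapsed_ms = (time.perf_counter() - start) * 1000.0
print("%.3f ms" % elapsed_ms)
```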


John
 

Peter Hansen

John J. Lee said:
Alex Martelli said:
José said:
I'm programming a simple script to calculate the response time from one
server to another. [...]
I can only calculate it in seconds. Is there a way to do it in
milliseconds?

After "import time", time.time() returns the time (elapsed since an
arbitrary epoch) with the unit of measure being the second, but the
precision being as high as the platform on which you're running will
allow. The difference between two results of calling time.time() is
[...]

Also note that Windows' time(), in particular, has a precision of only
around 50 milliseconds (according to Tim Peters, so I haven't bothered
to test it myself ;-). Pretty strange.

Strange, but based on a relatively mundane thing: the frequency (14.31818MHz)
of the NTSC color sub-carrier which was used when displaying computer output
on a TV. This clock was divided by 3 to produce the 4.77MHz clock for the
original IBM PC (because oscillators were/are relatively expensive, so you
wanted to re-use them whenever possible, even if just a submultiple) and
then by 4 again to produce the clock signal that went to the chip involved
in time-keeping, which then counted on every edge using a 16-bit counter
which wrapped around every 65536 counts, producing one interrupt every
65536/(14.31818*1000000/12) seconds, or about 54.92 ms, which is about 18.2 ticks
per second. So other than it being closer to 55 ms than 50, you're right.

Google searches with "18.2 14.31818" will produce lots of background for
all that.
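The arithmetic above checks out in a few lines of Python:

```python
# Verifying the figures: the 14.31818 MHz NTSC clock is divided by 12
# (3, then 4) before reaching the timer chip's 16-bit counter.
ntsc_hz = 14.31818e6
timer_hz = ntsc_hz / 12          # ~1.19318 MHz into the timer
period_s = 65536 / timer_hz      # one 16-bit wraparound
print("tick period: %.2f ms" % (period_s * 1000))   # ~54.93 ms
print("ticks per second: %.1f" % (1.0 / period_s))  # ~18.2
```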

-Peter
 

José

Thank you very much. I don't have a problem with the precision because I run
it on a machine with Red Hat Linux, but thank you for the explanations
anyway.
 

John J. Lee

Peter Hansen said:
Strange, but based on a relatively mundane thing: the frequency (14.31818MHz)
of the NTSC color sub-carrier which was used when displaying computer output
on a TV. This clock was divided by 3 to produce the 4.77MHz clock for the [...]
in time-keeping, which then counted on every edge using a 16-bit counter
which wrapped around every 65536 counts, producing one interrupt every
65536/(14.31818*1000000/12) seconds, or about 54.92 ms, which is about 18.2 ticks
[...]

That doesn't explain it AFAICS -- why not use a different (smaller)
divisor? An eight bit counter would give about 0.2 ms resolution.


John
 

Peter Hansen

John J. Lee said:
Peter Hansen said:
Strange, but based on a relatively mundane thing: the frequency (14.31818MHz)
of the NTSC color sub-carrier which was used when displaying computer output
on a TV. This clock was divided by 3 to produce the 4.77MHz clock for the [...]
in time-keeping, which then counted on every edge using a 16-bit counter
which wrapped around every 65536 counts, producing one interrupt every
65536/(14.31818*1000000/12) seconds, or about 54.92 ms, which is about 18.2 ticks
[...]

That doesn't explain it AFAICS -- why not use a different (smaller)
divisor? An eight bit counter would give about 0.2 ms resolution.

Can you imagine the overhead of the DOS timer interrupt executing over 500
times a second?! It would have crippled the system. In fact, from what
I recall of the overhead associated with that interrupt, that might well
have consumed every last microsecond of CPU time.

Also, the hardware probably doesn't even support an "eight bit counter".
That is, there's a good chance that the behaviour described comes entirely
"for free", after setup, whereas using any other value would have required
a periodic reload, in software, which would have been deemed an unacceptable
burden on performance. I believe one of the first links to the Google
search I mentioned has the part number of the timer chip in question, so
you could investigate further if you're curious.

And if you wonder why Windows still had to stick with the same value,
well, let's just say that it's one of the best proofs that I've seen
that even Windows 98 is nothing more than a glossy GUI shell on top
of DOS.

-Peter
 

Alex Martelli

John said:
Peter Hansen said:
Strange, but based on a relatively mundane thing: the frequency
(14.31818MHz) of the NTSC color sub-carrier which was used when
displaying computer output
on a TV. This clock was divided by 3 to produce the 4.77MHz clock for
the [...]
in time-keeping, which then counted on every edge using a 16-bit counter
which wrapped around every 65536 counts, producing one interrupt every
65536/(14.31818*1000000/12) seconds, or about 54.92 ms, which is about 18.2 ticks
[...]

That doesn't explain it AFAICS -- why not use a different (smaller)
divisor? An eight bit counter would give about 0.2 ms resolution.

The original IBM PC (8088, 64KB of memory if you were lucky, and
two 160 KB floppies), which is where all of these numbers come from,
didn't exactly have all that much power to spare. Dealing with 18.2
clock interrupts a second was plenty -- dealing with way more was
probably considered out of the question by the original designers.

We _are_ talking about more than 20 years ago, after all (and I'm
sure none of those designers could possibly dream that their numbers
had to be chosen, not for ONE computer model, but for models that
would span 15 or more turns of Moore's Law's wheel...!).


Alex
 

Peter Hansen

Peter said:
John J. Lee said:
Peter Hansen said:
Strange, but based on a relatively mundane thing: the frequency (14.31818MHz)
of the NTSC color sub-carrier which was used when displaying computer output
on a TV. This clock was divided by 3 to produce the 4.77MHz clock for the [...]
in time-keeping, which then counted on every edge using a 16-bit counter
which wrapped around every 65536 counts, producing one interrupt every
65536/(14.31818*1000000/12) seconds, or about 54.92 ms, which is about 18.2 ticks
[...]

That doesn't explain it AFAICS -- why not use a different (smaller)
divisor? An eight bit counter would give about 0.2 ms resolution.

Can you imagine the overhead of the DOS timer interrupt executing over 500
times a second?! It would have crippled the system.

Oops: 5000 times a second, even worse. :) I have a vague memory that
the DOS timer interrupt could take well over a millisecond to execute
on the old machines, so it simply wasn't feasible in any case.
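The back-of-envelope numbers behind that, taking the rough ~1 ms per interrupt recalled above as the assumed service time:

```python
# An 8-bit counter on the same ~1.19 MHz timer input would wrap every
# 256 counts:
timer_hz = 14.31818e6 / 12
wrap_ms = 256 / timer_hz * 1000.0        # ~0.215 ms per wraparound
interrupts_per_s = 1000.0 / wrap_ms      # ~4660 interrupts a second
# At ~1 ms of handler time per interrupt, servicing them would demand
# several CPU-seconds of work per wall-clock second -- i.e., the
# machine simply couldn't keep up.
cpu_seconds_per_second = interrupts_per_s * 0.001
print("wrap every %.3f ms, ~%d interrupts/s, load %.1fx" %
      (wrap_ms, interrupts_per_s, cpu_seconds_per_second))
```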
 

Andrew Dalke

Alex:
We _are_ talking about more than 20 years ago, after all (and I'm
sure none of those designers could possibly dream that their numbers
had to be chosen, not for ONE computer model, but for models that
would span 15 or more turns of Moore's Law's wheel...!).

I am hoping for symbolic reasons that in another couple of years it
will be possible to buy a 4.77 GHz processor. Then place it
side-by-side with an original PC and gape at the differences. 1000x
clock speed (and 100,000x performance?), 200,000x more
memory, 1,000,000x more disk space.

Andrew
(e-mail address removed)
 

John J. Lee

Peter Hansen said:
Can you imagine the overhead of the DOS timer interrupt executing over 500
times a second?!

No.


It would have crippled the system. In fact, from what
I recall of the overhead associated with that interrupt, that might well
have consumed every last microsecond of CPU time.

I see. :)

[...]
burden on performance. I believe one of the first links to the Google
search I mentioned has the part number of the timer chip in question, so
you could investigate further if you're curious.
[...]

No thanks!-)


John
 

Emile van Sebille

Andrew Dalke:
I am hoping for symbolic reasons that in another couple of years it
will be possible to buy a 4.77 GHz processor.

Great! Another good reason to _not_ clean out the garage.

Emile van Sebille
(e-mail address removed)
 
