The meaning of Benchmark's times

Alex Young

Hi all,

I thought I understood what user and system times meant. Then I saw
this from Benchmark, while comparing response times of a WEBrick server
and a Mongrel server:

              user     system      total        real
webrick  29.580000   4.920000  34.500000 ( 87.191739)
mongrel  27.640000   4.540000  32.180000 ( 58.508197)

Clearly the mongrel server wins, but why does it only show in the
wallclock measurement? Where did the rest of the time go? This is
repeatable, so I don't think it's interference from any other processes
on the machine.

To be clear here, what I'm actually measuring is the time taken for
10000 queries to be serviced from a separate process, and the webrick
and mongrel servers are also each in their own process. Everything's on
localhost.
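
For reference, a harness along these lines produces output of that
shape (the ports, path and the single-threaded loop here are just
placeholders, not the exact script):

  require 'benchmark'
  require 'net/http'

  QUERIES = 10_000

  Benchmark.bm(8) do |b|
    { 'webrick' => 3000, 'mongrel' => 4000 }.each do |label, port|
      b.report(label) do
        QUERIES.times { Net::HTTP.get(URI("http://localhost:#{port}/")) }
      end
    end
  end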

Any ideas?
 
Brian Candler

> I thought I understood what user and system times meant. Then I saw
> this from Benchmark, while comparing response times of a WEBrick server
> and a Mongrel server:
>
>               user     system      total        real
> webrick  29.580000   4.920000  34.500000 ( 87.191739)
> mongrel  27.640000   4.540000  32.180000 ( 58.508197)
>
> Clearly the mongrel server wins, but why does it only show in the
> wallclock measurement?

With webrick, the process is spending that extra time waiting on some
external event; waiting doesn't consume CPU, so it shows up only in the
real (wallclock) column, not in user or system time.
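
You can see that split in isolation with a trivial sketch, independent
of any server: time spent blocked appears only under real, because user
and system count only CPU time charged to the process.

  require 'benchmark'

  Benchmark.bm(10) do |b|
    # Busy loop: burns CPU, so user time roughly equals real time.
    b.report('busy')     { 5_000_000.times { Math.sqrt(42) } }

    # Blocked in sleep: user/system stay near zero, real is ~2 seconds.
    b.report('sleeping') { sleep 2 }
  end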
> Where did the rest of the time go? This is
> repeatable, so I don't think it's interference from any other processes
> on the machine.

Something which might be hurting webrick is if it hasn't turned off
Nagle's algorithm (socket option TCP_NODELAY). If the process sends less
than a full packet (about 1500 bytes), the kernel can wait around 0.1
seconds to see if there's more to come before actually sending it.

However, in the real world this is unlikely to be a problem.
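
For reference, turning Nagle off on a plain Ruby TCP socket is just a
setsockopt call, something like this (the port is only an example):

  require 'socket'

  sock = TCPSocket.new('localhost', 3000)

  # Disable Nagle so small writes are sent immediately instead of
  # being held back by the kernel waiting for more data.
  sock.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_NODELAY, 1)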
> To be clear here, what I'm actually measuring is the time taken for
> 10000 queries to be serviced from a separate process, and the webrick
> and mongrel servers are also each in their own process. Everything's on
> localhost.

You'd need to be more specific than that. Are you opening a fresh TCP
connection for each query? Or are you sending multiple queries down the same
connection, using HTTP/1.1?
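
The two cases look quite different on the client side; roughly:

  require 'net/http'

  uri = URI('http://localhost:3000/')   # placeholder URL

  # A fresh TCP connection per query: pays connection setup every time.
  10_000.times { Net::HTTP.get(uri) }

  # One persistent HTTP/1.1 connection reused for all the queries.
  Net::HTTP.start(uri.host, uri.port) do |http|
    10_000.times { http.get(uri.path) }
  end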

Are you sending 10000 queries one after the other, or do you have (say) 100
query threads, each sending 100 queries?

The extra ~30 seconds of wallclock time averages out to only about 3ms
per query, so a 0.1-second Nagle stall per request could only account
for it if queries overlap: 0.1s divided by 3ms is roughly 32 queries in
flight at once.
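
A concurrent version, with 100 threads each sending 100 queries over a
persistent connection, would look something like:

  require 'net/http'

  uri = URI('http://localhost:3000/')   # placeholder URL

  threads = Array.new(100) do
    Thread.new do
      Net::HTTP.start(uri.host, uri.port) do |http|
        100.times { http.get(uri.path) }
      end
    end
  end
  threads.each { |t| t.join }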

Other possibilities might be writing to log files, if the process does an
open-write-close or write-flush each time. Running the process under
'strace' might give you a better idea.
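
For instance, there's a noticeable difference between reopening or
flushing the log on every request and keeping one buffered handle open:

  # Open-write-close on every request: an open, a write and a close
  # per hit, which strace will show very clearly.
  File.open('server.log', 'a') { |f| f.puts 'GET / 200' }

  # Keep one handle open and let Ruby buffer the writes.
  LOG = File.open('server.log', 'a')
  LOG.puts 'GET / 200'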

HTH,

Brian.
 
