Hi
I have been timing a client application's response time when invoking a
simple web method (it simply returns). A single test client invokes this
method synchronously, with no delay between receiving a response and
issuing the next request. The measured average is on the order of 11 ms,
and the web service is not servicing requests from any other clients.
When a second test client runs concurrently with the first, the average
rises to roughly 22 ms; the median, however, remains fairly constant
between the two cases.
Is there a reason why the average time would be affected so greatly by
the addition of a second client?
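For reference, the timing harness is roughly equivalent to the Python
sketch below (the endpoint URL and request count are stand-ins for the
actual values, not the real ones):

    import statistics
    import time
    import urllib.request

    URL = "http://localhost/service/noop"  # placeholder for the web method

    def time_requests(n=1000):
        # Invoke the method n times synchronously, with no think time,
        # recording each round-trip latency in milliseconds.
        latencies = []
        for _ in range(n):
            start = time.perf_counter()
            urllib.request.urlopen(URL).read()  # the method simply returns
            latencies.append((time.perf_counter() - start) * 1000.0)
        return latencies

    lat = time_requests()
    # A handful of slow outliers (e.g. requests queued behind the other
    # client) pull the mean up while leaving the median untouched.
    print("mean   = %.1f ms" % statistics.mean(lat))
    print("median = %.1f ms" % statistics.median(lat))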
Thanks
Rivaaj