Elbert Lev
Python 2.3 Windows NT 4.0 (512M, PIII-500)
Here are the results of a performance comparison of several strategies
for handling multiple sockets on the server side.
To run this comparison I wrote three servers and a client:
1. thread per connection (SocketServer based)
2. asynchat based
3. asynchat + worker thread
4. multithreaded client
All three servers read text strings from connected clients, convert
them to upper case, and return them to the client.
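For reference, a minimal sketch of the thread-per-connection variant.
This is my reconstruction, not the original code: it uses the modern
socketserver module (spelled SocketServer in the Python 2.3 of this
post), and the names UpperHandler and serve are mine.

```python
import socketserver
import threading

class UpperHandler(socketserver.StreamRequestHandler):
    """Read text lines from the client and echo them back uppercased."""
    def handle(self):
        for line in self.rfile:
            self.wfile.write(line.upper())

def serve(host="127.0.0.1", port=0):
    # port=0 asks the OS for a free port; ThreadingTCPServer spawns one
    # handler thread per accepted connection.
    server = socketserver.ThreadingTCPServer((host, port), UpperHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Each connection costs a thread, which is why the interesting question
is how this scales against a single select-based loop.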
In the asynchat + worker thread case, the worker thread waits on a
Queue and posts the uppercased data to a server queue, which the loop
checks periodically.
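The worker-thread arrangement described above can be sketched with two
queues. This is an illustrative reconstruction, assuming the modern
queue module (Queue in the post's Python 2.3); the names jobs, results,
and drain_results are mine, not from the original code.

```python
import queue
import threading

jobs = queue.Queue()     # the socket loop pushes (conn_id, data) here
results = queue.Queue()  # the worker pushes (conn_id, uppercased data) back

def worker():
    # Blocks on the job queue, so the socket loop never blocks on work.
    while True:
        conn_id, data = jobs.get()
        results.put((conn_id, data.upper()))
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

def drain_results(send):
    # Called from the loop after each poll timeout: flush any finished
    # replies back to their connections without blocking.
    while True:
        try:
            conn_id, data = results.get_nowait()
        except queue.Empty:
            return
        send(conn_id, data)
```

drain_results is the "periodic check" of the server queue; how it gets
called from the loop is the subject of the next paragraph.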
Because asyncore.loop does not return control to the caller, I had to
override it with an extended version:
def loop(timeout=30.0, use_poll=0, map=None, after_pool=None):
    # ... same setup as the stock asyncore.loop: default map to
    # asyncore.socket_map and pick poll_fun (poll or poll2) ...
    while map:
        poll_fun(timeout, map)
        if after_pool:
            after_pool(map)
In essence, each time the timeout expires I check the server queue.
The client is written so that after a connection is made it does not
start the conversation for 2 minutes, which lets me measure the idle
overhead. The timeout value has to be set reasonably low, 200-500 ms.
(With 400 sockets and timeout = 0.2, the overhead is 3-5 percent of
CPU time.)
Each client connection sends strings of random length in
range(1, 9999), reads the reply, and sleeps for up to 30 seconds;
then it repeats the loop.
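One client connection's send/reply/sleep cycle could look roughly like
this. It is a sketch under my own assumptions, not the original client:
one_round and client_loop are hypothetical names, f is a buffered
binary file object wrapping a connected socket, and I arbitrarily fill
the payload with lowercase letters.

```python
import random
import time

def one_round(f, rng):
    # Send one random-length lowercase line; return the server's reply.
    length = rng.randrange(1, 9999)                         # range(1, 9999)
    payload = bytes(rng.choices(range(97, 123), k=length))  # 'a'..'z'
    f.write(payload + b"\n")
    f.flush()
    return f.readline()

def client_loop(f, rounds, max_sleep=30.0, rng=None):
    # Repeat: send a string, read the reply, sleep up to max_sleep seconds.
    rng = rng or random.Random()
    reply = b""
    for _ in range(rounds):
        reply = one_round(f, rng)
        time.sleep(rng.uniform(0.0, max_sleep))
    return reply
```

With mostly-idle connections like these, the measurement is dominated
by per-socket polling overhead rather than by throughput.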
Here are the results in the form x/y, where x is the average percent
of CPU usage, y is the maximum percent of CPU usage, and NS is the
number of connected sockets.
 NS   thread per connection   asyncore    asyncore+worker
 64         0.5/2.0           2.7/6.9         2/6
128         1.0/3.0           7.4/22.8        6.6/19
256         1.6/6.0          26/43           21/33
400         3.5/9.0          52/75           46/67
The throughput and reply latency are measured on the client and are
approximately the same for all servers.
Any comments?