How to kill a SocketServer?


Jean-Pierre Bergamin

Me again... :)

Is there any possibility to kill a SocketServer that was started like this:


class ServerThread(threading.Thread):

    class MyHandler(SocketServer.StreamRequestHandler):
        def handle(self):
            line = self.rfile.readline()
            self.wfile.write(line.upper())

    def run(self):
        s = SocketServer.TCPServer(('', 1234), self.MyHandler)
        s.serve_forever()

    def stop_server(self):
        # Any chance to stop the server here???
        what_to_do()  # ????????

server_thread = ServerThread()
server_thread.start()
# do other things
server_thread.stop_server()

The problem is that I found no way to interrupt the blocking call to
self.rfile.readline(). Is there a way to do that?

Using an asynchronous server is not an option.


Thanks in advance

James
 

Diez B. Roggisch

The problem is that I found no way to interrupt the blocking call to
self.rfile.readline(). Is there a way to do that?

From the SocketServer docs:

---------------
serve_forever()

Handle an infinite number of requests. This simply calls handle_request()
inside an infinite loop.

---------------

So simply call handle_request yourself, like this:

while self.run_me:
    s.handle_request()


run_me is a variable that can be set to False in stop_server.
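A minimal sketch of this flag-plus-loop idea, written against Python 3's socketserver module (Python 2's SocketServer was renamed); the server timeout is an addition here, so the flag actually gets re-checked, which sidesteps the blocking-accept() problem raised in the follow-ups:

```python
import socket
import socketserver  # named SocketServer in Python 2
import threading

class MyHandler(socketserver.StreamRequestHandler):
    def handle(self):
        line = self.rfile.readline()
        self.wfile.write(line.upper())

class ServerThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.run_me = True
        # Port 0 lets the OS pick a free port; the original post used 1234.
        self.server = socketserver.TCPServer(('127.0.0.1', 0), MyHandler)
        # Without a timeout, handle_request() blocks in accept() forever
        # and run_me is never re-checked -- exactly the problem below.
        self.server.timeout = 0.2

    def run(self):
        while self.run_me:
            self.server.handle_request()
        self.server.server_close()

    def stop_server(self):
        self.run_me = False
```

With the timeout set, the thread notices a cleared flag within about 0.2 seconds of stop_server() being called.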
 

Josiah Carlson

Using an asynchronous server is not an option.

You don't need to answer, but I'm curious as to why this is not an
option. Do you not have select on your platform?

- Josiah
 

Jean-Pierre Bergamin

Diez said:
From the SocketServerDocs:

---------------
serve_forever()

Handle an infinite number of requests. This simply calls
handle_request() inside an infinite loop.

---------------

So simply call handle_request yourself, like this:

while self.run_me:
    s.handle_request()

I already have such a construct. The problem is the following:
the server waits for a connection, and SocketServer.TCPServer.handle_request()
calls self.socket.accept(). This call blocks until a connection is made.

I found no way to interrupt this call. I also tried:

def stop_server(self):
    self.socket.shutdown(2)
    self.socket.close()
    self.run_me = False

The accept() call still won't get interrupted. :-(

Other ideas?


James
 

Jean-Pierre Bergamin

Josiah said:
You don't need to answer, but I'm curious as to why this is not an
option. Do you not have select on your platform?

Since the whole application uses a lot of CPU power, the performance
(especially the reaction time) of an asynchronous server is too low.

We found that a threading server is a much better choice for this, but
we'd need a way to stop a server and start it again (with new parameters).


Regards

James
 

Donn Cave

Jean-Pierre Bergamin said:
... The problem is the following:
The server waits for a connection, and SocketServer.TCPServer.handle_request()
calls self.socket.accept(). This call blocks until a connection is made.

I found no way to interrupt this call. I also tried:

def stop_server(self):
    self.socket.shutdown(2)
    self.socket.close()
    self.run_me = False

The accept() call still won't get interrupted. :-(

Other ideas?

I seem to recall a very similar question last week, so you
might look around to see what answers that one got. There
was at least one that proposed a connection to the service.
That makes a lot of sense to me, especially if you're in a
position to change the service protocol if necessary.

Donn Cave, (e-mail address removed)
 

Peter Hansen

Jean-Pierre Bergamin said:
The accept() call still won't get interrupted. :-(

Other ideas?

You have three choices.

1. Run your server as a separate process, communicating with it
via some kind of RPC, and just kill it when desired.

2. Use non-blocking sockets. This is the standard and
simplest approach in many ways.

3. Arrange to have one thread open a socket connection to the
application, waking up your accept()ing thread. Then check
a flag which tells the server thread to exit.

By definition, blocking calls block, and you can't safely kill
a thread in Python, so pick one from the above and run with it...

-Peter
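Option 3 can be sketched roughly as follows (Python 3's socketserver; the class and method names here are invented for illustration). The stop() method clears a flag and then opens a throwaway connection to the server itself, which unblocks the accept() inside handle_request():

```python
import socket
import socketserver
import threading

class UpperHandler(socketserver.StreamRequestHandler):
    def handle(self):
        line = self.rfile.readline()
        if line:                       # the wake-up connection sends nothing
            self.wfile.write(line.upper())

class StoppableServer(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.keep_going = True
        self.server = socketserver.TCPServer(('127.0.0.1', 0), UpperHandler)

    def run(self):
        while self.keep_going:
            self.server.handle_request()   # blocks in accept()
        self.server.server_close()

    def stop(self):
        # Clear the flag first, then poke accept() awake with a throwaway
        # connection so the loop gets a chance to re-check the flag.
        self.keep_going = False
        socket.create_connection(self.server.server_address).close()
```

The ordering in stop() matters: the flag must be cleared before the wake-up connection is made, or the loop could go back into accept() with the flag still set.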
 

Julian Smith

You have three choices.

1. Run your server as a separate process, communicating with it
via some kind of RPC, and just kill it when desired.

2. Use non-blocking sockets. This is the standard and
simplest approach in many ways.

3. Arrange to have one thread open a socket connection to the
application, waking up your accept()ing thread. Then check
a flag which tells the server thread to exit.

I have a class that can be constructed from a socket and looks like a file
object, but whose blocking read() method can be interrupted by a different
thread. The read() method uses poll() to block on both the real underlying
file descriptor and an internal file descriptor created using os.pipe().

It works on OpenBSD and Cygwin, but I haven't tried it on anything else yet.
I'm a relative newcomer to Python, so I'm sure there are some subtleties
that I've missed.

See http://www.op59.net/cancelable.py if you're interested.
By definition, blocking calls block, and you can't safely kill
a thread in Python so pick one from the above and run with it...

-Peter


- Julian
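The same block-on-a-pipe idea can be sketched independently of cancelable.py. This version uses select.select() rather than poll() (the wrapper class name and API are invented here, not taken from Julian's code), but the mechanism is the one he describes: wait on both the real descriptor and an internal os.pipe(), so another thread can unblock the read by writing to the pipe:

```python
import os
import select
import socket

class CancelableSocket(object):
    """File-ish wrapper whose blocking read() another thread can interrupt."""

    def __init__(self, sock):
        self.sock = sock
        self._rpipe, self._wpipe = os.pipe()

    def read(self, size=4096):
        # Block on both the socket and the internal pipe; whichever
        # becomes readable first wins.
        ready, _, _ = select.select([self.sock, self._rpipe], [], [])
        if self._rpipe in ready:
            os.read(self._rpipe, 1)   # drain the wake-up byte
            return None               # canceled from another thread
        return self.sock.recv(size)

    def cancel(self):
        os.write(self._wpipe, b"x")
```

As with the original, this relies on select()/poll() accepting pipe descriptors, which holds on POSIX systems but not on native Windows, where select() only works on sockets.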
 

Peter Hansen

Julian said:
Peter Hansen said:
You have three choices.
[snip]

I have a class that can be constructed from a socket and looks like a file
object, but whose blocking read() method can be interrupted by a different
thread. The read() method uses poll() to block on both the real underlying
file descriptor and an internal file descriptor created using os.pipe().

It works on OpenBSD and Cygwin, but I haven't tried it on anything else yet.
I'm a relative newcomer to Python, so I'm sure there are some subtleties
that I've missed.

See http://www.op59.net/cancelable.py if you're interested.

Looks interesting. I guess I should have qualified my answer
by saying something like "platform-specific code might allow for
other solutions". :) I'm pretty sure cancelable.py's solution
wouldn't work on Windows (but if I'm wrong, then of course:
"amongst our choices are separate process, non-blocking socket,
connection from another thread, and cancelable.py, and of course
an almost fanatical devotion to the Pope...") ;-)

-Peter
 

Josiah Carlson

Since the whole application uses a lot of CPU power, the performance
(especially the reaction time) of an asynchronous server is too low.

We found that a threading server is a much better choice for this, but
we'd need a way to stop a server and start it again (with new parameters).

That is very interesting.

After doing some research into heavily multi-threaded servers in Python
a few years back, I discovered that for raw throughput, a properly
written async server could do far better than a threaded one.

If your request processing takes the most time, you may consider a
communication thread and a processing thread.

If your processing thread does a lot of waiting on a database or
something else, it may make sense to have one communication thread, and
a handful of database query threads.

What part of a request takes up the most time?

- Josiah
 

Peter Hansen

Josiah said:
That is very interesting.

After doing some research into heavily multi-threaded servers in Python
a few years back, I discovered that for raw throughput, a properly
written async server could do far better than a threaded one.

If your request processing takes the most time, you may consider a
communication thread and a processing thread.

If your processing thread does a lot of waiting on a database or
something else, it may make sense to have one communication thread, and
a handful of database query threads.

What part of a request takes up the most time?

The OP indicated that "esp. the reaction time" was his concern, not
throughput. If he's right about that, then in principle he could well
be right that an async server would not be the most appropriate.
(I don't really believe that either, though...)

-Peter
 

Donn Cave

Since the whole application uses a lot of CPU power, the performance
(especially the reaction time) of an asynchronous server is too low.

We found that a threading server is a much better choice for this, but
we'd need a way to stop a server and start it again (with new parameters).

That is very interesting.

After doing some research into heavily multi-threaded servers in Python
a few years back, I discovered that for raw throughput, a properly
written async server could do far better than a threaded one.

If your request processing takes the most time, you may consider a
communication thread and a processing thread.

If your processing thread does a lot of waiting on a database or
something else, it may make sense to have one communication thread, and
a handful of database query threads.

What part of a request takes up the most time?

I wonder if you're making it more complicated than necessary.

The application uses a lot of CPU as it handles multiple
concurrent client connections. That makes a pretty clear
case for some kind of parallel thread architecture, not
only to dispatch inputs promptly but also to take advantage
of commonly available SMP hardware. As you acknowledge
above, the details of that architecture depend a lot on
what exactly they're doing - how many connections over
time, the nature of the request processing, etc. But at
any rate, now they have a thread waiting in accept, and they
need to shake it loose. My answer is "define a shutdown
request as part of the service protocol", but maybe you
would have a better idea.

Donn Cave, (e-mail address removed)
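Donn's suggestion can be sketched concretely (Python 3's socketserver; the SHUTDOWN command name and the helper function are invented for this sketch, not part of any protocol in the thread). The shutdown request itself is what wakes the thread out of accept(), so no timeout or second socket is needed:

```python
import socket
import socketserver
import threading

class ProtocolHandler(socketserver.StreamRequestHandler):
    def handle(self):
        line = self.rfile.readline().strip()
        if line == b"SHUTDOWN":        # shutdown as part of the protocol
            self.server.keep_running = False
            self.wfile.write(b"BYE\n")
        else:
            self.wfile.write(line.upper() + b"\n")

def start_server():
    server = socketserver.TCPServer(('127.0.0.1', 0), ProtocolHandler)
    server.keep_running = True

    def loop():
        # The SHUTDOWN request itself unblocks accept(), so the flag
        # is re-checked immediately after it is handled.
        while server.keep_running:
            server.handle_request()
        server.server_close()

    thread = threading.Thread(target=loop)
    thread.start()
    return server, thread
```

A real deployment would want some authentication on the shutdown command, since otherwise any client can stop the service.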
 

Josiah Carlson

That is very interesting.
I wonder if you're making it more complicated than necessary.

The application uses a lot of CPU as it handles multiple
concurrent client connections. That makes a pretty clear
case for some kind of parallel thread architecture, not
only to dispatch inputs promptly but also to take advantage
of commonly available SMP hardware. As you acknowledge

Just because it handles multiple concurrent client connections doesn't
mean it is CPU-bound. Also, 'threading' in Python is not the same as
'threading' in C or other languages. I've got an SMP machine, and I've
used threads in Python, and no matter what you do, it doesn't use the
entirety of both processors (I've said as much in another discussion
thread). I don't know the case in other operating systems, but I would
imagine you get the same thing, likely the result of the Python GIL.

One should also be aware that Python threads take a nontrivial amount
of processor time to switch context, so many threads may not be as
helpful as you (or even the original poster) expect.

Now, Stackless with microthreads pretty much solves the "threads are
slow" problem, but thinking without threads will work on all Python
installations.

above, the details of that architecture depend a lot on
what exactly they're doing - how many connections over
time, the nature of the request processing, etc. But at

I agree, which is why I asked "what part of a request takes the most
time". Since the original poster hasn't replied yet, I expect he's
figured out what needs to be done in his situation.

any rate, now they have a thread waiting in accept, and they
need to shake it loose. My answer is "define a shutdown
request as part of the service protocol", but maybe you
would have a better idea.

Yeah, I would go asynchronous with a timeout.

while running:
    asyncore.poll(1)
sys.exit(0)

- Josiah
 
