David Jacobowitz
Hello, this is a question for all the perl people out there who have
written internet servers.
I currently have a perl-based server that acts as a hub for a simple
message-passing scheme. Clients periodically connect, send a message
to a user (the server puts the message in an in-memory queue for the
recipient), and check for their own messages, with the server splatting
the client's message queue back out and then deleting it.
Each transaction is small and short, and I have everything working
pretty well with a single-process Net::Server instance. And I'm not
doing anything funky with select(); I'm just answering and completing
each transaction in order. This seems to make sense to me, because
there is really not much work per transaction over and above reading
and writing data to the socket.
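For concreteness, the core of what I have now looks roughly like this
(the line protocol here is invented for the example; my real one
differs, but the shape is the same):

```perl
package MsgHub;
use strict;
use warnings;
use parent 'Net::Server';   # default single-process personality

my %queue;   # recipient => arrayref of pending messages (this process only)

sub process_request {
    my $self = shift;
    # Hypothetical line protocol: "SEND <user> <text>" leaves a message,
    # "RECV <user>" drains your own queue and ends the transaction.
    while ( my $line = <STDIN> ) {
        $line =~ s/\r?\n\z//;
        if ( $line =~ /^SEND\s+(\S+)\s+(.*)/ ) {
            push @{ $queue{$1} }, $2;
        }
        elsif ( $line =~ /^RECV\s+(\S+)/ ) {
            print "$_\n" for @{ $queue{$1} || [] };
            delete $queue{$1};    # splat the queue back out, then drop it
            last;
        }
    }
}

# MsgHub->run( port => 9000 );   # uncomment to actually start the server
```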
The thing is, I want this server to be able to take hundreds or maybe
thousands of connections per second. I don't think that will ever work
with a single process: my laptop seems to saturate around 600
transactions/sec, though the laptop is also running the client
processes.
With Net::Server, turning a server into a pre-forked server is pretty
easy. But then each process is independent from that point forward,
and so obviously message queues won't be shared between them. So, I'm
thinking of using IPC::Shareable.
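Something like this is what I had in mind -- a sketch only, and the
explicit shlock/shunlock plus the store-back through the tie (which
serializes the whole structure on every assignment, as I understand it)
are exactly the overhead I'm worried about:

```perl
use strict;
use warnings;
use IPC::Shareable;    # ties a hash to a SysV shared-memory segment

# Every pre-forked child ties the same hash to one shared segment,
# identified by the (made-up) glue key 'msgq'.
tie my %queue, 'IPC::Shareable', 'msgq', { create => 1 };

sub enqueue {
    my ( $to, $msg ) = @_;
    ( tied %queue )->shlock;      # serialize access across child processes
    my $list = $queue{$to} || [];
    push @$list, $msg;
    $queue{$to} = $list;          # store back so the tie picks up the change
    ( tied %queue )->shunlock;
}

sub drain {
    my ($user) = @_;
    ( tied %queue )->shlock;
    my $list = delete $queue{$user} || [];
    ( tied %queue )->shunlock;
    return @$list;
}
```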
But with all the IPC overhead, will this be any faster than the
single process? Is there an easy way to see where the cycles are
going? If most of the cycles are going to data-structure maintenance,
then I don't see a point in doing this work. If most of them are going
to handling socket stuff, then it would be a win, assuming it works.
Has anyone here made such a server? I'm curious for hints.
As an aside, my application does not require that any user be able to
leave a message for any other user, so it would be okay to segment the
message queues into groups. But for this to work, I'd need a way to
make sure that each client connection matches up with the same server
process on sequential accesses. I could do this, of course, by putting
another server in front of the other servers whose only job in life is
to track which back-end server the client first connected with and
then keep sending the client to the same back-end server on subsequent
connections. But in this case, I'm just creating more or less the same
bottleneck again.
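One stateless way to do that matching, I suppose, is to hash the user
ID straight to a backend, so nothing has to remember the mapping at
all. A sketch (the pool size and port layout here are invented):

```perl
use strict;
use warnings;
use Digest::MD5 qw(md5);

my $NUM_BACKENDS = 8;       # invented pool size
my $BASE_PORT    = 9000;    # invented: backends listen on 9000..9007

# Deterministic: the same user always hashes to the same slot, so
# sequential connections land on the same process with no tracking state.
sub backend_port_for {
    my ($user) = @_;
    my $slot = unpack( 'N', md5($user) ) % $NUM_BACKENDS;
    return $BASE_PORT + $slot;
}
```

The client (or a trivial redirecting stub) computes the port itself, so
there is no stateful front end to become the new bottleneck.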
Hints and ideas very much welcome.
thanks,
-- dave j