Julien Schmurfy
Hi,
I have an EventMachine Ruby server whose goal is to serve requests, but for
each incoming request it needs to make some requests to other servers. I know
this could easily be achieved in a fully asynchronous way, but we currently
have some restrictions which forbid that.
The current way we do it is by issuing the request and then putting the
thread to sleep; it is woken up when the answer arrives. The code that does
the request looks like this (I have kept only the essential parts; sendRequest
simply builds the request and sends it on the EventMachine socket):
def sendSyncRequest(obj_name, cmd_name, args = {}, peer = nil)
  raise "Cannot freeze main thread, synchronous call cannot be made in eventmachine thread !!!" if EM::reactor_thread?

  # issue the call and put the calling thread to sleep; the reactor thread
  # wakes it up when the answer (or an error) comes back
  th = Thread.current
  return_value = nil
  error = nil

  req = sendRequest(obj_name, cmd_name, args, peer)

  req.errback do |err|
    # runs in the reactor loop
    error = err
    th.wakeup
    th.priority = 1
    th.priority = 0
  end

  req.callback do |ret|
    # runs in the reactor loop
    return_value = ret
    th.wakeup
    th.priority = 1
    th.priority = 0
  end

  # go to sleep until one of the callbacks wakes us up
  sleep

  # return the result or raise the error
  raise error if error
  return_value
end
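
For context, this helper is called from plain worker threads outside the
reactor; a caller looks roughly like this (the thread setup and the
object/command names here are only illustrative, not our actual code):

  # Illustration only: a worker thread outside the reactor making a
  # blocking call through the helper above.
  Thread.new do
    begin
      result = sendSyncRequest("some_object", "some_command", :id => 42)
      # ... use result ...
    rescue => e
      # ... handle the remote error ...
    end
  end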
My problem is that 30% of the requests made this way take 200ms or a little
more to return, even though the answer packet arrived on the network in less
than 5ms (I used ngrep to sniff the traffic and check the timings).
The priority change in each callback made the problem a little less severe by
reducing that 30% figure, but I would like to find a better, real solution if
one exists; I hate writing code without understanding why things happen, and
that is exactly the case here...
I suppose the priority change reschedules the thread to the top of the run
queue, but I doubt this hack will keep working under high concurrency.
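
To make the question more concrete, this is the kind of alternative I have
been wondering about: blocking the calling thread on a Queue instead of
sleep/wakeup, so the reactor just pushes the outcome over. This is only a
sketch and I have not verified that it avoids the delay:

  require 'thread'

  # Sketch only: same synchronous wrapper, but the waiting thread blocks on
  # Queue#pop and the reactor callbacks push the outcome into the queue.
  def sendSyncRequestWithQueue(obj_name, cmd_name, args = {}, peer = nil)
    raise "synchronous call cannot be made in the eventmachine thread" if EM::reactor_thread?

    queue = Queue.new
    req = sendRequest(obj_name, cmd_name, args, peer)
    req.errback  { |err| queue.push([:error, err]) }   # runs in the reactor loop
    req.callback { |ret| queue.push([:ok, ret]) }      # runs in the reactor loop

    kind, value = queue.pop    # blocks until the reactor pushes something
    raise value if kind == :error
    value
  end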
I hope someone can help.