Raul Parolari
I am using Ruby (as a prototype) to communicate with a network of
solar cells (driven by firmware) over UDP. It works perfectly most of
the time, delivering messages with 1-5 millisecond precision, which is
even better than we need.
But once in a while, the following happens (on an Ubuntu machine) while
waiting for the response from the solar cells:
result = select([comm], nil, [comm], 0.050)
if result.nil?
# handle 50 msec timeout
else
# data
end
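For context, here is a minimal, self-contained sketch of the kind of
exchange this runs in (the address, port, and payload below are
placeholders, not our real values; only the select line comes from the
actual code):

require 'socket'

# placeholder setup; in the real code `comm` is the UDP socket to the solar-cell network
comm = UDPSocket.new
comm.connect('192.168.1.50', 9000)   # hypothetical address and port

comm.send("STATUS\n", 0)             # hypothetical request payload

# wait up to 50 msec for the reply
result = select([comm], nil, [comm], 0.050)
if result.nil?
  # handle 50 msec timeout (retry / log)
else
  data, _addr = comm.recvfrom(1500)  # read the datagram that select reported
  # process data
end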
Everything is perfect for several thousand messages, until this
happens: we receive data (in the "else" branch) 5 seconds later; that
is, the select remained blocked for 5 seconds without signaling the
50 msec timeout.
I checked the API (even going back to the Comer & Stevens books, since
the system calls map to the C ones), and I think the code (I only show
a small part above) is correct.
Finally, I concluded that the garbage collector must be kicking in,
and that for a few seconds (usually very close to 5) everything in
Ruby stops.
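If it really is the GC, I should be able to catch it in the act. A
rough instrumentation sketch I plan to try (assuming Ruby 1.9+, where
GC.count and GC::Profiler are available; the 0.5 second threshold is
arbitrary):

# enable the built-in GC profiler so long pauses show up in its report
GC::Profiler.enable

gc_before = GC.count
t0 = Time.now
result = select([comm], nil, [comm], 0.050)
elapsed = Time.now - t0

if elapsed > 0.5                        # far beyond the 50 msec timeout
  warn "select blocked #{elapsed}s; GC runs during the wait: #{GC.count - gc_before}"
  warn GC::Profiler.result              # per-collection pause times
end

If that confirms it, one mitigation I would try is GC.disable around
the time-critical exchange and an explicit GC.start between message
bursts.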
We were planning to port this area to C++ in any case; but at a certain
point the performance results were so accurate that we considered
keeping Ruby even for the communication layer (as it allows us, with a
bit of metaprogramming, to read configuration files on the fly; in
other words, all the things we love about Ruby).
If anyone has insight on this, let me know.
(Please just avoid answers like "if you want real time, use C, not
Ruby!"; we know that. The point is that Ruby has been a magnificent
surprise in this area, aside from what I described above, and I just
wonder if anyone has thoughts or insight on it.)
Raul Parolari