Rainer Weikusat
Charles DeRykus said: [...]
The solution is really simply to rate-limit requests being sent.
[...] This 'parallelism' idea is inherently broken because replies
aren't guaranteed to arrive.
Maybe I'm confused, but my understanding is that each boxcar on the
"POE-Zug" fires off its ICMP ping, waits for the user-configurable
timeout, and gathers any/all responses.
This description assumes a full-fledged 'virtual thread' per ping, but
it is generally correct: the 'parallelism' scheme requires an
individual timeout per request in flight in order to keep going in the
face of replies which may never arrive. The overall send rate is
throttled via the configurable parallelism setting.
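
Expressed outside of POE, the mechanism looks roughly like the
following minimal Python sketch; fake_icmp_echo and all of the numbers
are invented stand-ins rather than real ping code:

import asyncio

# At most 'parallelism' probes are in flight at any time, and each
# probe carries its own timeout, so a reply which never arrives cannot
# stall the rest of the train.

async def fake_icmp_echo(host):
    # Placeholder for sending an ICMP echo request and waiting for the
    # matching reply.
    await asyncio.sleep(0.05)

async def probe(host, timeout):
    try:
        await asyncio.wait_for(fake_icmp_echo(host), timeout)
        return True
    except asyncio.TimeoutError:
        # Individual timeout fired: give up on this host and move on.
        return False

async def sweep(hosts, parallelism=10, timeout=1.0):
    sem = asyncio.Semaphore(parallelism)  # limits requests in flight

    async def guarded(host):
        async with sem:
            return await probe(host, timeout)

    return await asyncio.gather(*(guarded(h) for h in hosts))

# asyncio.run(sweep(["10.0.0.%d" % i for i in range(1, 100)]))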
'Send 1000 requests as fast as you can, then do nothing for ten
seconds' is not the same as 'continue sending a request every 0.01s
for 10 seconds'. Simplifying again, an Ethernet is 'binary': at any
given time it is either 'in use' or 'not in use'. The bulk send means
it is 'in use' for a relatively long period at the beginning and will
be 'in use' for a similarly long period as soon as the replies start
arriving. Otherwise, it is 'in use' for many short periods and
'available' in between (the same holds for resources on the
sending/receiving host, where it means 'available to deal with
replies').
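
Rough sketch of the two send patterns, again as Python; 'send' is a
stand-in for whatever actually emits a request:

import asyncio

async def burst_then_wait(hosts, send):
    # 'Send 1000 requests as fast as you can, then do nothing': the
    # wire is busy in one long block at the start (and again when the
    # replies come back) and idle otherwise.
    for h in hosts:
        send(h)
    await asyncio.sleep(10)

async def paced(hosts, send, interval=0.01):
    # 'Keep sending a request every 0.01s': many short busy periods
    # with gaps in between, both on the wire and for whatever has to
    # deal with the replies on the sending host.
    for h in hosts:
        send(h)
        await asyncio.sleep(interval)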
If that rate is properly tuned to avoid saturating the network,
where does the train run off the tracks? (I'd guess even an
alternative micro-sleep between requests might potentially bog down
a network without tuning)
Even the slowest isochronous injection of 'packets' into a network
might be the straw that breaks the camel's back. That's why 'serious'
general-purpose algorithms for congestion avoidance are adaptive (and
usually not exactly simple). But it's getting late here, I still have
some (household) work to do, and I'm not entirely sober. Because of
this, I'm now terminating this with another 'vulgar appeal to common
sense': hearing works better when one stops shouting more frequently.
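
For what it's worth, 'adaptive' means something in the spirit of
TCP's additive-increase/multiplicative-decrease. A toy version of such
a rule, with made-up constants and lost replies standing in for a
congestion signal:

def adjust_rate(rate, loss_seen, step=1.0, backoff=0.5, floor=1.0):
    # Creep the send rate up slowly while things look fine, back off
    # sharply as soon as the network shows signs of stress.
    if loss_seen:
        return max(floor, rate * backoff)
    return rate + step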