mastermagrath
Hi all,
I've written a fairly simple tool that creates a UDP IO::Socket::INET object.
A loop is run which uses print to send payloads of between 1 and 1024 bytes.
Each send is separated by a delay of X milliseconds, which effectively lets the user choose the number of offered packets per second.
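
Stripped down, the sending side looks roughly like this (a simplified sketch; the port number and variable names are made up for this post):

    use strict;
    use warnings;
    use IO::Socket::INET;

    # UDP socket aimed at the loopback address (port is just an example)
    my $sock = IO::Socket::INET->new(
        Proto    => 'udp',
        PeerAddr => '127.0.0.1',
        PeerPort => 7777,
    ) or die "couldn't create socket: $!";

    # one datagram per payload size, 1 to 1024 bytes
    for my $size (1 .. 1024) {
        print $sock 'x' x $size;   # each print sends one UDP datagram
        # ...delay of X milliseconds here (see the timing loop below)
    }
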
The main thread creates a simple Tk GUI that lets the user select the byte size and the packets per second.
The GUI displays what the (theoretical) throughput should be, based on the selected byte size * packets per second, but it also tallies the actual number of packets sent and hence the actual throughput.
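
For what it's worth, the two figures are worked out along these lines (again simplified; the variable names are invented for this post):

    # theoretical throughput from the user's selections
    my $theoretical_Bps = $bytesize * $packetspersec;              # bytes per second

    # actual throughput from the running tally kept by the sender
    my $actual_Bps = ($packets_sent * $bytesize) / $elapsed_seconds;
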
As a test bed I simply connect to the loopback address 127.0.0.1 of my PC (yes, I use Windows!!).
Everything works perfectly until the user goes above ~30 packets per second. At that point the theoretical and actual throughputs and packet rates start to diverge: the actual number of packets sent falls below what the user selected (the theoretical rate).
I use Win32::Sleep((1 / $packetspersec) * 1000); in the loop to set the inter-packet delay to match the user-selected packets per second.
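
So the inner loop is effectively doing this (continuing the sketch above; $running, $payload and $packets_sent are my own variables):

    use Win32;   # for Win32::Sleep

    while ($running) {
        print $sock $payload;                        # send one datagram
        $packets_sent++;                             # tally for the GUI
        Win32::Sleep((1 / $packetspersec) * 1000);   # delay in milliseconds
    }
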
Any ideas what the problem might be?
Also, there seems to be an absolute maximum of around 100 packets per second. The thing is, I have a small console-based tool (written in C++, I think) that can generate much higher packet rates at any byte size, and which does transmit what it says it does.
Are there timing issues with threads, or do I need to look into setting my script to a higher priority?
Thanks in advance.