Alexander Lamb
Hello list,
I am implementing a very simple script to ping web servers or services (to monitor how our environment is functioning).
Some production URLs run on more than one host, so I start a thread for each separate URL.
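For context, the thread-per-URL wiring could be sketched like this (the URL list is a placeholder, and the comment marks where the real doPing/probe call would go; none of these names come from my actual script except doPing):

```ruby
# Hypothetical wiring: one thread per URL (URLs are placeholders).
urls = %w[http://host-a/ http://host-b/]

threads = urls.map do |u|
  Thread.new do
    # the real script would call doPing(u, probe) here;
    # we just record which URL this thread was given
    Thread.current[:url] = u
  end
end
threads.each(&:join)
threads.each { |t| puts t[:url] }
```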
The function run in each thread is (open-uri and timeout are required at the top of the script):

require 'open-uri'
require 'timeout'

def doPing(uri_string, probe)
  s = uri_string
  while true
    begin
      timeout(@seconds_before_timeout) do
        start = Time.new
        begin
          open(s) do |result|
            if result.status[0] != "200"
              probe.addToLogFile([s, 'ERR', 0, result.status[1]])
            else
              probe.addToLogFile([s, 'OK', Time.new - start, ''])
            end
          end
        rescue Exception
          probe.addToLogFile([s, 'ERR', 0, $!])
        end
      end
    rescue Timeout::Error
      probe.addToLogFile([s, 'ERR', 0, 'timeout'])
    end
    sleep(@seconds_between_ping)
  end
end
However, there is a problem. I also want to measure the round-trip time of each ping (these are only simple pings for the time being). As you can see, I take the local time before and after the call. But this doesn't work well with threads: since the process is shared among all the threads, the measured time depends on how many threads are running, and is not an accurate view of the actual time the ping takes.
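To make the scheduling effect concrete, here is a small self-contained demonstration (nothing here is from the monitoring script; busy_work merely stands in for a request). The same piece of work is timed once alone and once inside each of several competing threads, so the per-thread wall-clock readings can be compared against the solo one:

```ruby
require 'benchmark'

# busy_work stands in for the HTTP round trip being timed.
def busy_work
  100_000.times { |i| i * i }
end

# Time the work once with no competition...
solo = Benchmark.realtime { busy_work }

# ...then time it inside each of four concurrent threads.
per_thread = []
mutex = Mutex.new
threads = 4.times.map do
  Thread.new do
    t = Benchmark.realtime { busy_work }
    mutex.synchronize { per_thread << t }
  end
end
threads.each(&:join)

puts "solo:       #{solo.round(4)}s"
puts "per thread: #{per_thread.map { |t| t.round(4) }.inspect}"
```

On a loaded interpreter the per-thread readings tend to come out larger than the solo one, which is exactly the distortion described above.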
I can't make the code between the two timestamps a critical section reserved for one thread, because if open-uri blocks, it would prevent the other threads from pinging their URLs in the meantime.
Any ideas? Maybe I should use processes instead of threads?
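If processes turn out to be the way to go, a minimal sketch (assuming a Unix-like system where Kernel#fork is available; the target names and the sleep standing in for the HTTP round trip are placeholders) could give each ping its own process and report the timing back through a pipe, so no thread scheduling can skew the measurement:

```ruby
# Sketch: one child process per target, each timing its own work
# and reporting through a pipe (names and timings are placeholders).
targets = %w[a b c]

readers = targets.map do |name|
  reader, writer = IO.pipe
  fork do
    reader.close
    start = Time.now
    sleep 0.1                       # stands in for the HTTP round trip
    writer.puts "#{name} #{(Time.now - start).round(3)}"
    writer.close
  end
  writer.close
  reader
end

Process.waitall
lines = readers.map { |r| r.read.strip }
lines.each { |l| puts l }
```

Each child only competes with the others for CPU at the operating-system level, so its Time.now delta reflects its own round trip rather than the interpreter's thread scheduling.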
Thanks for any hints!
--
Alexander Lamb
Service d'Informatique Médicale
Hôpitaux Universitaires de Genève
(e-mail address removed)
+41 22 372 88 62
+41 79 420 79 73