I am using Mechanize to scrape some URLs.
begin
  page = agent.get(url)
rescue
  puts "oops!!"
end
This catches invalid URLs and the like, but how do I handle timeout errors?
In particular, this is the error I get:
request-header: accept => */*
request-header: user-agent => WWW-Mechanize/0.5.1
(http://rubyforge.org/projects/mechanize/)
/usr/local/lib/ruby/1.8/timeout.rb:54:in `rbuf_fill': execution expired
(Timeout::Error)
from /usr/local/lib/ruby/1.8/timeout.rb:56:in `timeout'
from /usr/local/lib/ruby/1.8/timeout.rb:76:in `timeout'
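From reading timeout.rb, it looks like Timeout::Error on Ruby 1.8 descends from Interrupt rather than StandardError, which would explain why my bare rescue misses it. Is naming the class explicitly the right fix? A sketch of what I have in mind (the messages are just mine):

require 'timeout'

begin
  page = agent.get(url)
rescue Timeout::Error
  # a bare rescue only catches StandardError, so the timeout
  # has to be named explicitly here
  puts "request timed out: #{url}"
rescue => e
  puts "oops!! (#{e.class}: #{e.message})"
end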
Could someone also point me to some URLs that always time out? I tried
writing an infinite while loop to simulate similar timeout errors, but
that did not work (or maybe I hit Ctrl-C too soon). The closest I have
come is the local setup sketched below.
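The idea is a local "black hole" server that accepts the TCP connection but never replies; the host, port, and thread handling here are just my guesses at a minimal setup:

require 'socket'

# Accept a TCP connection but never send a byte back, so any
# HTTP request against this port eventually hits the client's
# read timeout and raises Timeout::Error.
server = TCPServer.new('127.0.0.1', 8081)
Thread.new do
  client = server.accept
  sleep  # hold the connection open forever
end

# agent.get('http://127.0.0.1:8081/') should now time out
# (I believe Net::HTTP's default read timeout is 60 seconds).

Alternatively, Timeout.timeout(1) { sleep 5 } raises the same Timeout::Error class directly, with no network involved.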
Appreciate the help. Thanks!
Akanksha