Hi,

I've been having an intermittent problem with urllib.

The code below is run at an interval of around 15 minutes. It runs
fine for about 1-2 weeks, but then gets its knickers in a twist: it
never returns, and it never raises an exception either.
    import urllib  # (imported at module top in the real code)

    try:
        log("ABOUT TO USE urllib TO FETCH [" + self.url + "]")
        f = urllib.urlopen(self.url)
        temp_xml = f.read()
        log("DONE! GOT %d bytes for [%s]" % (len(temp_xml), self.url))  # [1]
        f.close()
        refreshed = True
    except:
        log("Error: fetching [" + self.url + "], retrying ...")  # [2]
        ...
In the 'broken' state I never see log message [1] or [2]; it just
sits in either the urlopen() or the read() forever.

The code is running in a separate thread.
The TCP timeout is set to 45 seconds.
It's running on Linux (Gentoo).
<complete-guess>
What I think is wrong is that this machine sits behind a somewhat
dodgy ADSL connection, and the server it's contacting is on a dodgy
ADSL connection too. I'm guessing that the socket is opened while the
ADSL modem is NATing the connection, but when the upstream internet
connection bounces, the NATed connection somehow stays up, leaving
the connection dangling: alive, but dead too...
</complete-guess>
I would have thought that some urllib-internal timeout would fix
this?! The closest thing I can find is the process-wide socket
default timeout (see the sketch below).
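
For reference, a minimal sketch of that idea, using
socket.setdefaulttimeout() (available since Python 2.3); the URL is
just a stand-in, and I haven't verified that this actually cures the
hang:

    import socket
    import urllib

    # Process-wide default: every socket created after this call
    # (including the ones urllib opens internally) times out instead
    # of blocking forever.  45s mirrors my TCP timeout setting.
    socket.setdefaulttimeout(45)

    url = "http://example.com/feed.xml"  # stand-in for self.url
    f = urllib.urlopen(url)  # a stall should now raise instead of hanging
    temp_xml = f.read()      # each blocking recv() gets the same timeout
    f.close()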
I could watch the thread from another thread, implementing my own
timeout... but then (AFAIK) there's no way to terminate a misbehaving
thread anyway; the best I could do is abandon it, along the lines of
the sketch below.
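
Roughly the watchdog shape I mean (fetch() and the result list are
just illustrative names). Note the hung worker can't be killed, only
orphaned, so it leaks until the process exits:

    import threading
    import urllib

    def fetch(url, result):
        # Runs in a throwaway daemon thread; if it hangs, we abandon it.
        f = urllib.urlopen(url)
        result.append(f.read())
        f.close()

    result = []
    t = threading.Thread(target=fetch, args=(self.url, result))
    t.setDaemon(True)  # a hung fetch won't keep the process alive at exit
    t.start()
    t.join(60)         # my own timeout: wait at most 60 seconds

    if t.isAlive():
        log("Error: fetch of [%s] timed out, abandoning thread" % self.url)
    elif result:
        temp_xml = result[0]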
Any suggestions?
thanks,
-Kingsley