urllib.urlretrieve problem


Ritesh Raj Sarraf


Hello Everybody,

I've got a small problem with urlretrieve: even passing a bad URL to
urlretrieve doesn't raise an exception. Or does it?

If yes, what exception is it? And how do I use it in my program? I've
searched a lot but haven't found anything helpful.

Example:

import urllib

try:
    urllib.urlretrieve("http://security.debian.org/pool/updates/main/p/perl/libparl5.6_5.6.1-8.9_i386.deb")
except IOError, X:
    DoSomething(X)
except OSError, X:
    DoSomething(X)

urllib.urlretrieve doesn't raise an exception even though there is no
package named libparl5.6.

Please help!

rrs
--
Ritesh Raj Sarraf
RESEARCHUT -- http://www.researchut.com
Gnupg Key ID: 04F130BC
"Stealing logic from one person is plagiarism, stealing from many is
research".
 

Larry Bates

I noticed you hadn't gotten a reply. When I execute this it puts the following
in the retrieved file:

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<HTML><HEAD>
<TITLE>404 Not Found</TITLE>
</HEAD><BODY>
<H1>Not Found</H1>
The requested URL /pool/updates/main/p/perl/libparl5.6_5.6.1-8.9_i386.deb
was not found on this server.<P>
</BODY></HTML>

You will probably need to use something else to first determine if the URL
actually exists.
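
One way to do that check, sketched below under the assumption of Python 2's
urllib2 (current when this thread was written; the output filename is my own
choice): urllib2.urlopen raises an HTTPError, a subclass of IOError, on a
404, unlike urllib.urlretrieve, which happily saves the error page.

import urllib
import urllib2

url = ("http://security.debian.org/pool/updates/main/p/perl/"
       "libparl5.6_5.6.1-8.9_i386.deb")
try:
    # urllib2.urlopen raises HTTPError on a 404; urlretrieve does not.
    urllib2.urlopen(url).close()
except urllib2.HTTPError, err:
    print "URL does not exist:", err.code
else:
    urllib.urlretrieve(url, "libparl5.6_5.6.1-8.9_i386.deb")

Note that this costs an extra request before the real download starts.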

Larry Bates
 

gene.tani

Mertz's "Text Processing in Python" book has a good discussion of
trapping 403s and 404s.

http://gnosis.cx/TPiP/

 

Ritesh Raj Sarraf


Larry said:
I noticed you hadn't gotten a reply. When I execute this it puts the
following in the retrieved file:

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<HTML><HEAD>
<TITLE>404 Not Found</TITLE>
</HEAD><BODY>
<H1>Not Found</H1>
The requested URL /pool/updates/main/p/perl/libparl5.6_5.6.1-8.9_i386.deb
was not found on this server.<P>
</BODY></HTML>

You will probably need to use something else to first determine if the URL
actually exists.

I'm happy that at least someone responded, as this was my first post to the
Python mailing list.

I'm coding a program for offline package management.
The link that I provided could be obsoleted by newer packages. That is where
my problem is. I want to know how to get an exception raised here so that,
depending on the type of exception, I can make my program respond
accordingly.

For example, for a temporary name resolution failure, Python raises an
exception, which I've handled well. The problem lies with obsolete URLs,
where no exception is raised and I end up with a 404 error page as my data.

Can we have an exception for that? Or can we check the return value of
urllib.urlretrieve to know whether it downloaded the desired file?
I think my problem is fixable with urllib.urlopen; I just find
urllib.urlretrieve more convenient and want to know if it can be done with
it.

Thanks for responding.

rrs
 

Diez B. Roggisch

Ritesh said:
Can we have an exception for that? Or can we check the return value of
urllib.urlretrieve to know whether it downloaded the desired file?

It makes no sense for urllib to raise an exception in such a case. From its
point of view, things worked perfectly - it got a result. There was no
network error whatsoever.

It's your application that is not happy with the result, but it has to
figure that out by itself.

You could, for instance, check what kind of result you got using the Unix
file command - it will tell you that you received an HTML file, not a .deb.

Or check the MIME type returned - it's text/html in your error case, and
most probably something like application/octet-stream otherwise.
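
A minimal sketch of that check, assuming Python 2's urllib: urlretrieve
returns a (filename, headers) tuple, where headers is a mimetools.Message.

import urllib

url = ("http://security.debian.org/pool/updates/main/p/perl/"
       "libparl5.6_5.6.1-8.9_i386.deb")
# urlretrieve returns the local filename and the response headers.
filename, headers = urllib.urlretrieve(url, "libparl5.6_5.6.1-8.9_i386.deb")
if headers.gettype() == 'text/html':
    print "got an HTML page (probably an error page), not a .deb"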

Regards,

Diez
 

Skip Montanaro

Diez> It makes no sense for urllib to raise an exception in such a
Diez> case. From its point of view, things worked perfectly - it got a
Diez> result. There was no network error whatsoever.

You can subclass FancyURLopener and define a method to handle 404s, 403s,
401s, etc. There should be no need to resort to grubbing around with file
extensions and such.
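
A minimal sketch of that approach, assuming Python 2's urllib (the
StrictOpener name is mine): overriding http_error_default restores plain
URLopener's behaviour of raising IOError for any HTTP error code.

import urllib

class StrictOpener(urllib.FancyURLopener):
    # FancyURLopener swallows HTTP errors and returns the error page;
    # raise IOError instead, as plain URLopener does, for 404s, 403s, etc.
    def http_error_default(self, url, fp, errcode, errmsg, headers):
        fp.close()
        raise IOError('http error', errcode, errmsg, headers)

try:
    StrictOpener().retrieve(
        "http://security.debian.org/pool/updates/main/p/perl/"
        "libparl5.6_5.6.1-8.9_i386.deb", "libparl5.6_5.6.1-8.9_i386.deb")
except IOError, err:
    print "retrieve failed:", err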

Skip
 

gene.tani

from urllib2 import urlopen

try:
    urlopen(someURL)   # someURL is a placeholder for the URL to test
except IOError, errobj:
    if hasattr(errobj, 'reason'):
        print "server doesn't exist, is down, DNS problem, or no net connection"
    if hasattr(errobj, 'code'):
        print errobj.code
 

Wade

Diez said:
Or check the MIME type returned - it's text/html in your error case, and
most probably something like application/octet-stream otherwise.

Also be aware that many web servers (especially IIS ones) are configured
to return some kind of custom page instead of a stock 404, so you might
be getting a 200 status code even though the page you requested is not
there. Depending on what site you are scraping, you might have to read
the page you got back to figure out whether it's what you wanted.
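
One heuristic for that, sketched under the assumption that the target is a
.deb (a .deb is an ar archive, so it starts with the magic string "!<arch>"):

import urllib

url = ("http://security.debian.org/pool/updates/main/p/perl/"
       "libparl5.6_5.6.1-8.9_i386.deb")
filename, headers = urllib.urlretrieve(url, "package.deb")
data = open(filename, 'rb').read(8)
# A custom error page is HTML no matter what status code the server sent;
# a real .deb starts with the ar-archive magic.
if not data.startswith('!<arch>'):
    print "got something other than a .deb (custom error page?)"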

-- Wade Leftwich
Ithaca, NY
 

Ritesh Raj Sarraf

Diez said:
You could, for instance, check what kind of result you got using the Unix
file command - it will tell you that you received an HTML file, not a .deb.

Or check the MIME type returned - it's text/html in your error case, and
most probably something like application/octet-stream otherwise.

Using the Unix file command is not possible at all. The whole goal of the
program is to help people download their packages on some other (high-speed)
machine, which could be running Windows, Mac OS X, Linux, et cetera. That is
why I'm sticking strictly to Python libraries.

The second suggestion sounds good. I'll look into that.

Thanks,

rrs
 
