Problem with urllib.urlretrieve

ralobao

Hi,

I am writing a program to download all images from a specified site.
It already works with most sites, but in some cases, like
www.slashdot.org, it only downloads 1 KB of the image. This 1 KB is an
HTML page with a 503 error.

What can I do to really get those images?

Thanks — your help is appreciated.
 
fishboy


I did something like this a while ago. I used websucker.py in the
Tools/ directory, and then added some conditionals to tell it to only
create files for certain extensions.

As to why it fails in your case (/me puts on psychic hat), I'm guessing
Slashdot does something to stop people from deep-linking its image
files, to block leeches.
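
If that guess is right, the server is rejecting requests that don't look like they came from a browser. A common workaround is to send a browser-like User-Agent and a Referer header pointing at the site itself, and to check whether the response is actually an image rather than an HTML error page. This is only a sketch using the modern `urllib.request` API (the original post used the old `urllib.urlretrieve`); the header values and the `fetch_image` helper name are illustrative, and whether it works depends entirely on what the site checks:

```python
import urllib.request

def fetch_image(url, referer):
    # Hypothetical helper: some sites return 503/403 to clients that
    # lack a browser-like User-Agent or a Referer from their own pages.
    req = urllib.request.Request(url, headers={
        "User-Agent": "Mozilla/5.0 (compatible; image-grabber)",
        "Referer": referer,
    })
    with urllib.request.urlopen(req) as resp:
        data = resp.read()
        content_type = resp.headers.get_content_type()
    # Guard against the "1 KB HTML error page instead of an image"
    # case described above.
    if content_type == "text/html":
        raise RuntimeError("Got an HTML page instead of an image: %s" % url)
    return data
```

If the site still refuses, it may be checking cookies or something else server-side, in which case there is no polite way around it from a script.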
 
