Nomen Nescio
I've installed scrapy and gotten a basic set-up working, and I have a
few odd questions whose answers I haven't been able to find in the
documentation.
I plan to run it occasionally from the command line or as a cron job,
to scrape new content from a few sites. To avoid duplication, I keep
two in-memory sets of longs holding the MD5 hashes of the URLs and
files already crawled, and the spider ignores anything it has seen
before. I need to load them from two disk files when the scrapy job
starts and save them back to disk when it ends. Are there hooks or
something similar for start-up and shut-down tasks?
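To make it concrete, here's a rough sketch of what I'm after, using
the spider_opened/spider_closed signals I found while skimming the
docs; the file names, class name, and callback names are just
placeholders, and I may well be misusing the API:

import pickle

import scrapy
from scrapy import signals

class NewContentSpider(scrapy.Spider):
    name = "newcontent"
    start_urls = ["http://example.com/"]

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super().from_crawler(crawler, *args, **kwargs)
        # Hook my own start-up and shut-down code into the crawl.
        crawler.signals.connect(spider.load_seen, signal=signals.spider_opened)
        crawler.signals.connect(spider.save_seen, signal=signals.spider_closed)
        return spider

    def load_seen(self, spider):
        # Restore the two hash sets from disk (assumes the files exist).
        with open("seen_urls.pickle", "rb") as f:
            self.seen_urls = pickle.load(f)
        with open("seen_files.pickle", "rb") as f:
            self.seen_files = pickle.load(f)

    def save_seen(self, spider):
        # Persist the hash sets so the next cron run can skip duplicates.
        with open("seen_urls.pickle", "wb") as f:
            pickle.dump(self.seen_urls, f)
        with open("seen_files.pickle", "wb") as f:
            pickle.dump(self.seen_files, f)

    def parse(self, response):
        # ... the actual scraping; new MD5 hashes get added to the sets here.
        pass

Is that the intended pattern, or is there a cleaner place for this?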
How can I insert a random waiting interval between consecutive HTTP
GET requests?
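From skimming the settings reference, something like this in
settings.py looks close to what I want, though I'm not sure I'm
reading it right:

# settings.py
DOWNLOAD_DELAY = 3  # base delay, in seconds, between requests
# If I understand the docs, this turns the fixed delay into a random
# one drawn from between 0.5 * DOWNLOAD_DELAY and 1.5 * DOWNLOAD_DELAY:
RANDOMIZE_DOWNLOAD_DELAY = True

Is that the right mechanism, or should this live in a downloader
middleware?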
Is there any way to set the proxy configuration in my Python code, or
do I have to set the environment variables http_proxy and https_proxy
before running scrapy?
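For instance, from poking at HttpProxyMiddleware it looks like a
proxy might be settable per request through the meta dict; is
something along these lines supported? The spider and the proxy
address below are made up:

import scrapy

class ProxyTestSpider(scrapy.Spider):
    name = "proxytest"
    start_urls = ["http://example.com/"]

    def start_requests(self):
        for url in self.start_urls:
            request = scrapy.Request(url, callback=self.parse)
            # Per-request proxy via the meta dict (placeholder address).
            request.meta["proxy"] = "http://127.0.0.1:8118"
            yield request

    def parse(self, response):
        self.logger.info("fetched %s via proxy", response.url)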
thanks