Jens Müller
Hello,
what would be best practice for speeding up a larger number of HTTP GET
requests done via urllib? Until now they are made in sequence, each request
taking up to one second. The results must be merged into a list, while the
original order need not be preserved.
I think speed could be improved by parallelizing. One could use multiple
threads.
Are there any Python best practices, or even existing modules, for creating
and handling a task queue with a fixed number of concurrent threads?
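
For concreteness, here is a rough sketch of the kind of thing I have in
mind, assuming Python 3 and the standard-library concurrent.futures
module (the URLs are just placeholders):

# A minimal sketch of the thread-pool approach, assuming Python 3.
# ThreadPoolExecutor manages a fixed-size pool of worker threads;
# the URL list below is only a placeholder.
import concurrent.futures
import urllib.request

urls = [
    "https://example.com/page1",
    "https://example.com/page2",
    "https://example.com/page3",
]

def fetch(url):
    # Each worker performs one blocking GET and returns the body.
    with urllib.request.urlopen(url, timeout=10) as response:
        return response.read()

# A fixed number of concurrent threads (here: 8); results are
# collected as they complete, so the original request order is
# not preserved -- which is fine for my use case.
results = []
with concurrent.futures.ThreadPoolExecutor(max_workers=8) as executor:
    futures = [executor.submit(fetch, url) for url in urls]
    for future in concurrent.futures.as_completed(futures):
        results.append(future.result())

Is something along these lines the idiomatic way to do it, or is the
classic queue.Queue plus threading.Thread pattern still preferred?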
Thanks and regards!