Brian Quinlan
Hey all,
I recently implemented a package that I'd like to have included in the
Python 3.x standard library (and maybe Python 2.x), and I'd love to
have the feedback of this list.
The basic idea is to implement an asynchronous execution method
patterned heavily on java.util.concurrent (but less lame because
Python has functions as first-class objects). Here is a fairly
advanced example:
import futures
import functools
import urllib.request

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

def load_url(url, timeout):
    return urllib.request.urlopen(url, timeout=timeout).read()

# Use a thread pool with 5 threads to download the URLs. Using a pool
# of processes would involve changing the initialization to:
#     with futures.ProcessPoolExecutor(max_processes=5) as executor
with futures.ThreadPoolExecutor(max_threads=5) as executor:
    future_list = executor.run_to_futures(
        [functools.partial(load_url, url, 30) for url in URLS])

# Check the results of each future.
for url, future in zip(URLS, future_list):
    if future.exception() is not None:
        print('%r generated an exception: %s' % (url, future.exception()))
    else:
        print('%r page is %d bytes' % (url, len(future.result())))
In this example, executor.run_to_futures() returns only when every URL
has been retrieved, but it is possible to return immediately, on the
first completion, or on the first failure, depending on the desired
work pattern.
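As a minimal sketch of those other work patterns: the return_when
argument and the FIRST_COMPLETED / FIRST_EXCEPTION / ALL_COMPLETED
constants used below are taken from the draft PEP, so treat the exact
names as assumptions about the API rather than a definitive example.

# Sketch only: return_when and the FIRST_COMPLETED / FIRST_EXCEPTION /
# ALL_COMPLETED constants are assumed from the draft PEP and may differ
# in the released package.
with futures.ThreadPoolExecutor(max_threads=5) as executor:
    # Block only until the first download finishes; the remaining calls
    # keep running and their futures can be inspected later.
    future_list = executor.run_to_futures(
        [functools.partial(load_url, url, 30) for url in URLS],
        return_when=futures.FIRST_COMPLETED)

    # To return as soon as any call raises an exception instead:
    #     return_when=futures.FIRST_EXCEPTION
    # To block until everything is done (the behaviour shown in the
    # example above):
    #     return_when=futures.ALL_COMPLETED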
The complete docs are here:
http://sweetapp.com/futures/
A draft PEP is here:
http://code.google.com/p/pythonfutures/source/browse/trunk/PEP.txt
And the code is here:
http://pypi.python.org/pypi/futures3/
All feedback appreciated!
Cheers,
Brian