Brian Quinlan
Hey all,
I've been working on a Java-style futures implementation in Python.
Futures are a way of representing asynchronous operations, e.g.
operations that are run in another thread or process. They are an easy
but powerful way of parallelizing sequential operations. They also
provide a consistent interface across implementations, e.g. they can
provide the same interface to threading and to multiprocessing.
For example:
def is_prime(n):
    "Return True iff n is prime"
    ...

def check_primes(numbers):
    return map(is_prime, numbers)
Could be parallelized as:
def check_primes(numbers):
    # ProcessPoolExecutor will create one worker process
    # per CPU if called without arguments. Using threads
    # here would be of little value because of the GIL.
    with futures.ProcessPoolExecutor() as executor:
        return executor.map(is_prime, numbers)
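For completeness, a runnable sketch of that example. The body of is_prime is illustrative (the original elides it), and the import assumes the spelling the package later landed under in the standard library, `from concurrent import futures`:

```python
import math
from concurrent import futures

def is_prime(n):
    "Return True iff n is prime"
    # Illustrative trial-division implementation; the original
    # post leaves the body elided.
    if n < 2:
        return False
    for i in range(2, math.isqrt(n) + 1):
        if n % i == 0:
            return False
    return True

def check_primes(numbers):
    # One worker process per CPU when called without arguments.
    with futures.ProcessPoolExecutor() as executor:
        return list(executor.map(is_prime, numbers))

if __name__ == '__main__':
    print(check_primes([2, 3, 4, 17, 18]))  # → [True, True, False, True, False]
```

Note that executor.map preserves input order, so the results line up with the input numbers just as with the built-in map.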
A more complex example:
def load_url(url, timeout):
    return urllib.request.urlopen(url, timeout=timeout).read()

# Download the content of some URLs - ignore failures.
def download_urls(urls, timeout=60):
    url_to_content = {}
    for url in urls:
        try:
            url_to_content[url] = load_url(url, timeout=timeout)
        except Exception:
            pass
    return url_to_content

... would be very much appreciated!

Cheers,
Brian
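That sequential loop is a natural fit for the executor.submit() / futures.as_completed() side of the API. A sketch of how a parallel version might look (the stdlib spelling `from concurrent import futures` and the max_workers value are assumptions here, not part of the original post):

```python
import urllib.request
from concurrent import futures

def load_url(url, timeout):
    return urllib.request.urlopen(url, timeout=timeout).read()

# Download the content of some URLs in parallel - ignore failures.
def download_urls(urls, timeout=60, max_workers=5):
    url_to_content = {}
    with futures.ThreadPoolExecutor(max_workers=max_workers) as executor:
        # submit() returns a future for each pending download.
        future_to_url = {
            executor.submit(load_url, url, timeout): url for url in urls
        }
        # as_completed() yields each future as its download finishes,
        # in completion order rather than submission order.
        for future in futures.as_completed(future_to_url):
            try:
                url_to_content[future_to_url[future]] = future.result()
            except Exception:
                pass  # ignore failures, as in the sequential version
    return url_to_content
```

A ThreadPoolExecutor fits here because the work is I/O-bound, so the GIL is not a bottleneck; swapping in ProcessPoolExecutor would give the same interface.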