parallel FTP uploads and pool size

ben

Hello,

I have a Python script that uploads multiple files from the local machine to a remote server in parallel over FTP, using a process pool:

from multiprocessing import Pool
p = Pool(processes=x)
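
For reference, the overall structure is roughly as follows. This is a simplified reconstruction, not the exact script: the host, credentials, and file list are placeholders, but upload_function, files, and FTP_Upload().multiupload() match the names in the traceback below.

import os
import glob
from ftplib import FTP
from multiprocessing import Pool

HOST = 'ftp.example.com'   # placeholder host
USER = 'user'              # placeholder credentials
PASSWD = 'password'

def upload_function(path):
    # Each pool worker opens its own FTP connection, so a pool of x
    # workers means up to x simultaneous logins on the server.
    ftp = FTP(HOST)
    ftp.login(USER, PASSWD)
    f = open(path, 'rb')
    try:
        ftp.storbinary('STOR ' + os.path.basename(path), f)
    finally:
        f.close()
        ftp.quit()

class FTP_Upload(object):
    def multiupload(self, x=10):
        files = glob.glob('/path/to/files/*')  # placeholder file list
        p = Pool(processes=x)
        p.map(upload_function, files)
        p.close()
        p.join()

if __name__ == '__main__':
    FTP_Upload().multiupload()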

Now as I increase the value of x, the overall upload time for all files drops, as expected. If I set x too high, however, an exception is thrown. The exact value of x at which this happens varies, but it is around 20:

Traceback (most recent call last):
  File "uploadFTP.py", line 59, in <module>
    FTP_Upload().multiupload()
  File "uploadFTP.py", line 56, in multiupload
    p.map(upload_function,files)
  File "/usr/lib64/python2.6/multiprocessing/pool.py", line 148, in map
    return self.map_async(func, iterable, chunksize).get()
  File "/usr/lib64/python2.6/multiprocessing/pool.py", line 422, in get
    raise self._value
EOFError

Now this is not a problem - 20 is more than enough - but I'm trying to understand the mechanisms involved, and why the exact number of processes at which this exception occurs seems to vary.

I guess it comes down to the current resources of the server itself, but any insight would be much appreciated!
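
My working theory is that the server caps the number of simultaneous control connections (for example, vsftpd has a max_clients setting), and that once the cap is hit a new worker's connection gets dropped; ftplib then sees the socket close, raises EOFError in that worker, and pool.map() re-raises it in the parent. If that's right, wrapping the per-file upload in a retry loop along these lines should tolerate the occasional dropped connection (this reuses upload_function from the sketch above; the attempt count and delay are arbitrary):

import time
from ftplib import all_errors

def upload_with_retry(path, attempts=5, delay=2):
    # ftplib.all_errors is a tuple that already includes EOFError,
    # which is what a worker sees when the server drops the connection.
    for attempt in range(attempts):
        try:
            upload_function(path)  # per-file upload from the sketch above
            return
        except all_errors:
            if attempt == attempts - 1:
                raise  # out of retries; let pool.map() see the error
            time.sleep(delay)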
 
