Multiprocessing taking too much time

Shailendra

Hi All,
I have the following situation.
==================PSEUDO CODE START==================
class holds_big_array:
    big_array  # holds a big array

    def get_some_element(self, cond):
        # return some data from the big array
==================PSEUDO CODE END====================
I wanted to use the multiprocessing module to parallelise calls to
"get_some_element". I used the following kind of code:

==================PSEUDO CODE START==================
pool = Pool(processes=2)
holder = holds_big_array()  # class instantiation

def callback_f(result):
    # do something with result
    ...

for args in inputs:  # loop many times
    pool.apply_async(holder.get_some_element, args, callback=callback_f)
pool.close()
pool.join()
==================PSEUDO CODE END====================
Note: I had to do something to enable the instance method to be pickled...

I tested this with a less-than-realistic size of big_array. My parallel
version runs much slower than the normal serial version (7-8 min vs.
10-20 sec serial). I was wondering what the possible reason could be.
Is it something to do with it being an instance method, so that some
locking makes the other processes wait? Any idea how to trace where
the program is spending its time?

Let me know if the information given is inadequate.

Thanks in advance.
Shailendra Vikas
 

John Nagle


It's hard to tell from your pseudo code, but it looks like each
access to the "big array" involves calling another process.

Calling a function in another process is done by creating an
object to contain the request, running it through "pickle" to convert
it to a stream of bytes, sending the stream of bytes through a socket or
pipe to the other process, running the byte stream through "unpickle" to
create an object like the original one, but in a different process, and
calling a function on the newly created object in the receiving process.
This entire sequence has to be done again in reverse
to get a reply back.

This is hundreds of times slower than a call to a local function.

The "multiprocessing module" is not a replacement for thread-level
parallelism. It looks like it is, but it isn't. It's only useful for
big tasks which require large amounts of computation and little
interprocess communication. Appropriately-sized tasks to send out
to another process are things like "parse large web page" or
"compress video file", not "access element of array".

John Nagle
 
