Wen Jiang
Hi,
I have been using pyMPI to parallelize my code and found that the
function mpi.send() leaks memory badly, so it is not really usable for
communicating large amounts of data. My runs actually fail once the
leak accumulates past 2 GB. I wonder whether others have had the same
experience or whether I did something wrong. I compiled Python 2.4,
MPICH 1.2.6, and pyMPI 2.1b4 on an Opteron cluster running Rocks 3.3.
Here is a small test script, run on 2 CPUs, that demonstrates the leak:
import mpi

n = 10000
i = 0
data = [0] * 40000                # a list of 40000 integers, sent repeatedly

while i < n:
    if mpi.rank == 1:
        mpi.send(data, 0)         # rank 1 sends the list to rank 0
    elif mpi.rank == 0:
        msg, status = mpi.recv()  # rank 0 receives each message
    i += 1                        # advance the loop counter
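To reproduce it, save the script and launch it on two processes. With
pyMPI the interpreter itself is MPI-aware, so (assuming a standard
install, and with leaktest.py as an example filename) something like
this should work:

    mpirun -np 2 pyMPI leaktest.py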
If one watches the memory usage with 'top', one can see that one
process uses a small, constant amount of memory (the receiver, rank 0),
while the other process uses more and more (the sender, rank 1).
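For anyone who wants to track the growth without watching 'top' by
hand, here is a minimal self-monitoring variant of the script. It
assumes Linux (it reads VmRSS from /proc/self/status); the rss_kb
helper and the print interval are illustrative choices of mine, not
part of pyMPI:

    import mpi

    def rss_kb():
        # Resident set size of this process in kB, read from the Linux
        # proc filesystem; returns -1 if the field is not found.
        for line in open('/proc/self/status'):
            if line.startswith('VmRSS:'):
                return int(line.split()[1])
        return -1

    data = [0] * 40000
    for i in range(10000):
        if mpi.rank == 1:
            mpi.send(data, 0)
        elif mpi.rank == 0:
            msg, status = mpi.recv()
        if i % 1000 == 0:
            # the sender's RSS climbs steadily; the receiver's stays flat
            print "rank %d iter %d rss %d kB" % (mpi.rank, i, rss_kb())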