Per B. Sederberg
Hi Everybody:
I'm having a difficult time figuring out a memory use problem. I
have a Python program that makes use of numpy and also calls a small C
module I wrote, because part of the simulation needed to loop and I got
a massive speedup by putting that loop in C. I'm basically
manipulating a bunch of matrices, so nothing too fancy.
That aside, when the simulation runs, it typically uses a relatively
small amount of memory (about 1.5% of the 4GB of RAM on my Linux
desktop) and this never increases. It can run for days without
growing beyond this, through many, many parameter-set iterations.
This is what happens both on my Ubuntu Linux machine with the
following Python specs:
Python 2.4.4c1 (#2, Oct 11 2006, 20:00:03)
[GCC 4.1.2 20060928 (prerelease) (Ubuntu 4.1.1-13ubuntu5)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> numpy.__version__
'1.0rc1'
and also on my Apple MacBook with the following Python specs:
Python 2.4.3 (#1, Apr 7 2006, 10:54:33)
[GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
Well, that is the case on two of my test machines, but not on the one
machine that I really wish would work, my lab's cluster, which would
give me a 20-fold increase in the number of processes I could run. On
that machine, each process is using 2GB of RAM after about an hour (and
the cluster MOM eventually kills them). I can watch the process eat
RAM at each iteration and never relinquish it. Here's the Python spec
of the cluster:
Python 2.4.4 (#1, Jan 21 2007, 12:09:48)
[GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-49)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> numpy.__version__
'1.0.1'
It also showed the same issue with the April 2006 2.4.3 release of Python.
I have tried using the gc module to force garbage collection after
each iteration, but it made no difference. I've done many newsgroup and
Google searches looking for known issues, but found none. The only major
difference I can see is that our cluster is stuck on a really old
version of gcc with the Red Hat Enterprise release that's on there, but I
found no suggestions of memory issues online.
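For reference, the forced collection I tried looks roughly like this. It's a minimal sketch, not my actual code: `run_iteration` stands in for the real simulation step that does the numpy and C-module work.

```python
import gc

def run_sweep(param_sets, run_iteration):
    """Run each parameter set, forcing a full GC pass after every one."""
    counts = []
    for params in param_sets:
        run_iteration(params)        # numpy + C-module work happens here
        counts.append(gc.collect())  # force a full collection pass
    # gc.collect() returns the number of unreachable objects it found;
    # a count that keeps growing each iteration would hint at reference
    # cycles, but in my case collection runs and memory still climbs.
    return counts
```

Watching the process RSS between iterations is how I can see the RAM grow; the collection counts themselves stay unremarkable.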
So, does anyone have any suggestions for how I can debug this problem?
If my program ate up memory on all machines, then I would know where
to start and would blame some horrible programming on my end. This
just seems like a less straightforward problem.
Thanks for any help,
Per