Ethan Metsger
Hello, all. I'm new to the group, but I've been doing Java programming for a
couple years fairly consistently.
First, the specs: I'm running a Fedora Core 3 Linux system with JRE 1.4.2
(Athlon XP 1800+/512MB RAM).
I am working on a research project for my master's thesis which requires that
I load several seconds of video into memory for processing. (We're attempting
to track human torsos.) Each frame is stored on disk as a PPM (for ease of
use and portability). I am using binary PPMs, which are more compact than the
ASCII variant but not significantly compressed, so each frame consumes around
250kB. My test data set is 110 frames (about 25MB). This is a small data
set, representing around 3.6s of video. I would not expect loading 110 PPM
objects to stress the JVM or the system.
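For concreteness, my loader does essentially what this sketch does
(simplified, with illustrative names; I'm assuming 8-bit P6 data with a
maxval of 255, and my real code differs in the details):

    import java.io.BufferedInputStream;
    import java.io.DataInputStream;
    import java.io.FileInputStream;
    import java.io.IOException;

    // Illustrative loader for binary (P6) PPM frames -- a sketch, not my
    // actual code.  Assumes 8-bit samples (maxval 255).
    public class PpmLoader {

        // Reads one binary PPM into a raw RGB byte array on the Java heap.
        public static byte[] loadFrame(String path) throws IOException {
            DataInputStream in = new DataInputStream(
                    new BufferedInputStream(new FileInputStream(path)));
            try {
                if (!"P6".equals(readToken(in))) {
                    throw new IOException("not a binary PPM: " + path);
                }
                int width  = Integer.parseInt(readToken(in));
                int height = Integer.parseInt(readToken(in));
                readToken(in); // maxval; assumed to be 255
                byte[] pixels = new byte[width * height * 3];
                in.readFully(pixels);
                return pixels;
            } finally {
                in.close();
            }
        }

        // Reads one whitespace-delimited header token, skipping # comments.
        private static String readToken(DataInputStream in) throws IOException {
            StringBuffer sb = new StringBuffer();
            int c;
            while ((c = in.read()) != -1) {
                if (c == '#') {                  // comment runs to end of line
                    while (c != -1 && c != '\n') c = in.read();
                } else if (Character.isWhitespace((char) c)) {
                    if (sb.length() > 0) break;  // token complete
                } else {
                    sb.append((char) c);
                }
            }
            return sb.toString();
        }
    }

So each frame ends up as a width * height * 3 byte array on the Java heap,
around 250kB per frame for my data.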
However, I run into java.lang.OutOfMemoryError when attempting to load
them all together. At present, I am trying to optimize the choice of the
torso segment, so I can examine each frame individually and see whether the
current pass has found a better candidate. I have threaded this
process using a thread pool (five threads, 200ms poll). Each worker also
spawns an external Process and blocks on it until the Process completes.
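For reference, the pattern is roughly the following hand-rolled pool (a
sketch with illustrative names, not my exact code; java.util.concurrent
isn't available on 1.4, so the queue and polling are done manually):

    import java.util.LinkedList;

    // Minimal fixed-size worker pool that polls a shared job queue every
    // 200ms -- a sketch of the pattern described above, not my actual code.
    public class FramePool {
        private final LinkedList queue = new LinkedList();

        public FramePool(int nThreads) {
            for (int i = 0; i < nThreads; i++) {
                new Thread(new Worker()).start();
            }
        }

        public void submit(Runnable job) {
            synchronized (queue) {
                queue.addLast(job);
            }
        }

        private class Worker implements Runnable {
            public void run() {
                while (true) {
                    Runnable job = null;
                    synchronized (queue) {
                        if (!queue.isEmpty()) {
                            job = (Runnable) queue.removeFirst();
                        }
                    }
                    if (job != null) {
                        job.run();   // e.g. fork a Process and waitFor() it
                    } else {
                        try {
                            Thread.sleep(200);   // 200ms poll interval
                        } catch (InterruptedException e) {
                            return;
                        }
                    }
                }
            }
        }
    }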
Unfortunately, the time will come when I will need to have the pixmap data
handy. I could reread it, but this would incur a performance hit that I'm not
really willing to take unless I absolutely have to. I have done some reading
on NIO, particularly using MappedByteBuffers, which proponents say can
increase performance of reads and writes.
But I am particularly interested in their memory-management properties.
Since a MappedByteBuffer allocates memory outside the JVM heap (the mapping
lives in native address space), will it clear up this problem? (Obviously,
others may arise.) Can I use
a MappedByteBuffer to improve not only R/W performance, but also memory issues
related to reading in data?
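Concretely, what I have in mind per frame is something like this (a sketch,
assuming read-only mappings of whole files):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    // Sketch: map one frame file into memory.  The mapping lives in the
    // process's virtual address space, not on the garbage-collected heap.
    public class MappedFrame {
        public static MappedByteBuffer map(String path) throws IOException {
            FileChannel ch = new FileInputStream(path).getChannel();
            try {
                // Read-only mapping of the whole file; pages are faulted
                // in by the OS on demand rather than copied into byte[]s.
                return ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            } finally {
                ch.close();  // the mapping stays valid after close()
            }
        }
    }

My understanding is that the mapped region is backed by the OS page cache
rather than by the Java heap, which is why I'm hoping it sidesteps the
OutOfMemoryError.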
If you could, please cc: me in addition to replying to the newsgroup.
Sincerely,
Ethan Metsger