James Aguilar
Oh wise readers of comp.lang.python,
Lend a newbie your ears. I have read several old articles from this
group about memory mapping and interprocess communication and have
Googled the sh** out of the internet, but have not found enough to
answer my questions.
Suppose that I am writing a ray tracer in Python. Well, perhaps not a
ray tracer. Suppose that I am writing a ray tracer that has to update
sixty times a second (Ignore for now that this is impossible and silly.
Ignore also that one would probably not choose Python to do such a
thing.). Ray tracing, as some of you may know, is an inherently
parallelizable task. Hence, I would love to split the task across my
quad-core CPU (Ignore also that such things do not exist yet.).
Because of the GIL, I need all of my work to be done in separate processes.
My vision for this is that I would create a controller process and read
in the data (the lights, the matrices describing all of the objects,
and their colors), putting it into a memory-mapped file. Then I would
create a single child process for each CPU and assign each one a range
of pixels to work on. I would say GO, and they would write the results
of their computations into an array in the memory-mapped file; once the
frame is complete, the parent process would pump it out to the frame
buffer. Meanwhile, the parent process would be collecting changes from
whatever is controlling the state of the world. As soon as the picture
is finished, the parent process would adjust the data in the file to
reflect the new state of the world, tell the child processes to go
again, and so on.
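In case that is too vague, here is a minimal sketch of the shape I have
in mind (assuming Unix fork() and an anonymous mmap that stays shared
across the fork; render_rows is just a stand-in, not a real tracer):

    import mmap
    import os

    WIDTH, HEIGHT = 320, 240
    N_WORKERS = 4
    FRAME_BYTES = WIDTH * HEIGHT * 3              # one byte per RGB channel

    # Anonymous mapping; MAP_SHARED is the default on Unix, so the same
    # pages are visible to every child created by fork().
    frame = mmap.mmap(-1, FRAME_BYTES)

    def render_rows(first_row, last_row):
        # Stand-in for the real ray tracer: fill the assigned rows.
        for y in range(first_row, last_row):
            for x in range(WIDTH):
                offset = (y * WIDTH + x) * 3
                frame[offset:offset + 3] = bytes((x % 256, y % 256, 0))

    rows_per_worker = HEIGHT // N_WORKERS
    children = []
    for i in range(N_WORKERS):
        pid = os.fork()
        if pid == 0:                              # child: render my slice, then exit
            render_rows(i * rows_per_worker, (i + 1) * rows_per_worker)
            os._exit(0)
        children.append(pid)

    for pid in children:                          # parent: wait for the frame
        os.waitpid(pid, 0)

    # The parent could now hand `frame` off to the frame buffer.
    print(len(frame), "bytes rendered")

Each worker writes only into its own slice of the mapped frame, so (if I
understand correctly) no locking should be needed for the pixel data itself.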
So, I have a couple of questions:
* Is there any way to have Python objects (such as a light or a color)
put themselves into a byte array and then pull themselves back out of
the same array without any extra work? If each of the children had to
load all of the values from the array, we would probably lose much of
the benefit of doing things this way. What I mean is, can I say to
Python, "Interpret this range of bytes as a Light object, interpret
this range of bytes as a Matrix," etc.? This is roughly equivalent to
simply static_casting a void * to an object type in C++. (The rough
ctypes sketch after this list shows the kind of thing I am imagining.)
* Are memory-mapped files fast enough to do something like this? The
whole idea is that I would avoid the cost of having the whole world
loaded into memory in every single process. With threads, this is not
a problem -- what I am trying to do is figure out the Pythonic way to
work around the impossibility of using more than one processor because
of the GIL.
* Are pipes a better idea? If so, how do I avoid the problem of
wasting extra memory by having all of the child processes hold all
of the data in memory as well?
* Are there any other shared memory models that would work for this
task?
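To make the first question more concrete, here is the kind of thing I am
picturing, sketched with ctypes (I am assuming Structure.from_buffer can
overlay a struct on a shared writable buffer without copying; the Light
layout is made up purely for illustration):

    import ctypes
    import mmap

    # A made-up fixed layout for a light, just to illustrate the idea.
    class Light(ctypes.Structure):
        _fields_ = [("x", ctypes.c_double),
                    ("y", ctypes.c_double),
                    ("z", ctypes.c_double),
                    ("r", ctypes.c_float),
                    ("g", ctypes.c_float),
                    ("b", ctypes.c_float)]

    shared = mmap.mmap(-1, 4096)              # anonymous shared mapping

    # "Interpret this range of bytes as a Light object" -- no copying,
    # the structure is just a view over the mapped memory.
    light = Light.from_buffer(shared, 0)
    light.x, light.y, light.z = 1.0, 2.0, 3.0
    light.r, light.g, light.b = 1.0, 1.0, 0.5

    # A second view over the same bytes sees the same values.
    print(Light.from_buffer(shared, 0).x)     # -> 1.0

If that works the way I hope, each child could overlay the same
structures on the shared mapping and read the scene without loading
its own copy of anything.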
OK, I think that is enough. I look forward eagerly to your replies!
Yours,
James Aguilar