Looking for an IPC solution

Laszlo Nagy

There are just so many IPC modules out there. I'm looking for a solution
for developing a new multi-tier application. The core application will
be running on a single computer, so the IPC should use shared
memory (or mmap) and have very short response times. But there will be a
tier that will hold application state for clients, and there will be
lots of clients, so that tier needs to go to different computers; the
same IPC should also be accessible over TCP/IP. Most messages will be
simple data structures, nothing complicated. The ability to run on PyPy
would be a plus, and so would running on both Windows and Linux.

I have seen a stand-alone, cross-platform IPC server before that could
serve "channels" and send/receive messages over these channels, but I
don't remember its name and now I cannot find it. Can somebody please help?

Thanks,

Laszlo
 
Marco Nawijn

There are just so many IPC modules out there. I'm looking for a solution
for developing a new multi-tier application. The core application will
be running on a single computer, so the IPC should use shared
memory (or mmap) and have very short response times. But there will be a
tier that will hold application state for clients, and there will be
lots of clients, so that tier needs to go to different computers; the
same IPC should also be accessible over TCP/IP. Most messages will be
simple data structures, nothing complicated. The ability to run on PyPy
would be a plus, and so would running on both Windows and Linux.

I have seen a stand-alone, cross-platform IPC server before that could
serve "channels" and send/receive messages over these channels, but I
don't remember its name and now I cannot find it. Can somebody please help?

Thanks,

Laszlo

Hi,

Are you aware of, and have you considered, zeromq (www.zeromq.org)? It does not prescribe a message format, so you could use things as simple as plain strings (JSON) or something more elaborate like Protobuf.
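A minimal sketch of the idea, assuming the pyzmq bindings are installed (the JSON payload here is purely illustrative): the same REQ/REP code runs over tcp://, and on Unix you could switch to local ipc:// transport by changing only the endpoint string.

```python
import threading
import zmq

ctx = zmq.Context.instance()

# Server side: REP socket. bind_to_random_port avoids hard-coding a port.
# On Unix the only change for local IPC would be e.g.:
#   server.bind("ipc:///tmp/app.sock")
server = ctx.socket(zmq.REP)
port = server.bind_to_random_port("tcp://127.0.0.1")

def serve():
    msg = server.recv_json()          # simple data structures as JSON
    server.send_json({"echo": msg})

t = threading.Thread(target=serve)
t.start()

# Client side: REQ socket; the send/recv API is the same for any transport.
client = ctx.socket(zmq.REQ)
client.connect(f"tcp://127.0.0.1:{port}")
client.send_json({"ping": 1})
reply = client.recv_json()
t.join()
print(reply)  # {'echo': {'ping': 1}}
client.close()
server.close()
```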

Marco
 
 
Paul Rubin

Laszlo Nagy said:
application will be running on a single computer, so the IPC should be
using shared memory (or mmap) and have very short response times.

Zeromq (suggested by someone) is an option, since it's pretty fast for
most purposes, but I don't think it uses shared memory. The closest
thing I can think of to what you're asking is MPI, which is intended for
scientific computation; I don't know of a general-purpose IPC layer
built on it, though I've thought it would be interesting. There are also
some shared-memory modules around, including POSH for shared objects,
but they don't switch between memory and sockets AFAIK.

Based on your description, maybe what you really want is Erlang, or
something like it for Python. There would be more stuff to do than just
supply an IPC library.
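One stdlib option that does come close to "same code over local pipes or sockets" is multiprocessing.connection; a minimal sketch (the authkey below is made up): the transport is chosen purely by the address type (TCP tuple, Unix-socket path, or Windows named pipe), while send()/recv() stay the same.

```python
import threading
from multiprocessing.connection import Listener, Client

# Port 0 lets the OS pick a free TCP port; a filesystem path here would
# mean a Unix-domain socket instead, with no other code changes.
listener = Listener(("127.0.0.1", 0), authkey=b"secret")
address = listener.address

def serve():
    with listener.accept() as conn:        # blocks until a client connects
        conn.send({"state": conn.recv()})  # echo the payload back
    listener.close()

t = threading.Thread(target=serve)
t.start()

with Client(address, authkey=b"secret") as conn:
    conn.send("ready")
    reply = conn.recv()
t.join()
print(reply)  # {'state': 'ready'}
```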
 
Laszlo Nagy

Zeromq (suggested by someone) is an option since it's pretty fast for
most purposes, but I don't think it uses shared memory.
Interesting question. The documentation says:

http://api.zeromq.org/2-1:zmq-ipc

"The inter-process transport is currently only implemented on operating
systems that provide UNIX domain sockets."

(Off-topic: would it be possible to add local IPC support for Windows
using mmap()? I have seen others do it.)

At least it is functional on Windows, and it excels on Linux. I just
need to make the transports configurable. Good enough for me.
The closest
thing I can think of to what you're asking is MPI, intended for
scientific computation. I don't know of general purpose IPC that uses
it though I've thought it would be interesting. There are also some
shared memory modules around, including POSH for shared objects, but
they don't switch between memory and sockets AFAIK.

Based on your description, maybe what you really want is Erlang, or
something like it for Python. There would be more stuff to do than just
supply an IPC library.
Yes, although I would really like to do this job in Python. I'm going to
run some tests with zeromq; if the speed is good enough for local
inter-process communication, then I'll give it a try.

Thanks,

Laszlo
 
Wolfgang Keller

There are just so many IPC modules out there. I'm looking for a
solution for developing a new a multi-tier application. The core
application will be running on a single computer, so the IPC should
be using shared memory (or mmap) and have very short response times.

The fastest IPC/RPC implementation for Python is probably
omniORBpy:

http://omniorb.sourceforge.net/

It's cross-platform, language-independent and standards-(CORBA-)
compliant.
I have seen a stand alone cross platform IPC server before that could
serve "channels", and send/receive messages using these channels. But
I don't remember its name and now I cannot find it. Can somebody
please help?

If it's just for "messaging", Spread should be interesting:

http://www.spread.org/

Also cross-platform & language-independent.

Sincerely,

Wolfgang
 
Aaron Brady

There are just so many IPC modules out there. I'm looking for a solution
for developing a new multi-tier application. The core application will
be running on a single computer, so the IPC should use shared
memory (or mmap) and have very short response times. But there will be a
tier that will hold application state for clients, and there will be
lots of clients, so that tier needs to go to different computers; the
same IPC should also be accessible over TCP/IP. Most messages will be
simple data structures, nothing complicated. The ability to run on PyPy
would be a plus, and so would running on both Windows and Linux.

I have seen a stand-alone, cross-platform IPC server before that could
serve "channels" and send/receive messages over these channels, but I
don't remember its name and now I cannot find it. Can somebody please help?

Thanks,

Laszlo

Hi Laszlo,

There aren't a lot of ways to create a Python object in an "mmap" buffer. "mmap" is conducive to arrays of fixed-size records. For variable-length structures like strings and lists, you need "dynamic allocation". The C functions "malloc" and "free" allocate memory space, much as file creation and deletion routines allocate disk space. However, "malloc" doesn't let you allocate memory space within memory that's already allocated. Operating systems don't provide that capability, and doing it yourself amounts to creating your own file system. Even if you did, you still might not be able to use existing libraries like the STL or Python, because one address might refer to different locations in different processes.

One solution is to keep a linked list of free blocks within your "mmap" buffer; it is prone to slow access times and fragmentation. Another solution is to create many small files with fixed-length names; the minimum file size on your system might become prohibitive depending on your constraints, since a 4-byte integer could occupy 4096 bytes or more on disk. Or you can serialize the arguments and return values of your functions, and make requests to a central process.
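A minimal stdlib sketch of the fixed-size-record use that "mmap" is suited to (the record layout, buffer size and offsets are illustrative): records can be read and written in place at computed offsets, with no dynamic allocation inside the buffer.

```python
import mmap
import struct

REC = struct.Struct("ii")                  # one record = two 4-byte ints
buf = mmap.mmap(-1, REC.size * 10)         # anonymous buffer for 10 records
                                           # (shared with child processes on fork)

REC.pack_into(buf, REC.size * 3, 7, 42)    # write record number 3 in place
record = REC.unpack_from(buf, REC.size * 3)
print(record)  # (7, 42)
buf.close()
```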
 
 
vasudevram

The fastest IPC/RPC implementation for Python is probably
omniORBpy:

http://omniorb.sourceforge.net/

It's cross-platform, language-independent and standards-(CORBA-)
compliant.

If it's just for "messaging", Spread should be interesting:

http://www.spread.org/

Also cross-platform & language-independent.

Sincerely,

Wolfgang

Though I'm not the OP, thanks for the info. Will put Spread on my stack to check out ...
 
Laszlo Nagy

Hi Laszlo,

There aren't a lot of ways to create a Python object in an "mmap" buffer. "mmap" is conducive to arrays of arrays. For variable-length structures like strings and lists, you need "dynamic allocation". The C functions "malloc" and "free" allocate memory space, and file creation and deletion routines operate on disk space. However "malloc" doesn't allow you to allocate memory space within memory that's already allocated. Operating systems don't provide that capability, and doing it yourself amounts to creating your own file system. If you did, you still might not be able to use existing libraries like the STL or Python, because one address might refer to different locations in different processes.

One solution is to keep a linked list of free blocks within your "mmap" buffer. It is prone to slow access times and segment fragmentation. Another solution is to create many small files with fixed-length names. The minimum file size on your system might become prohibitive depending on your constraints, since a 4-byte integer could occupy 4096 bytes on disk or more. Or you can serialize the arguments and return values of your functions, and make requests to a central process.
I'm not sure about the technical details, but I was told that the
multiprocessing module uses mmap() under Windows, and that it is faster
than TCP/IP. So I guess the same thing could be done from zmq under
Windows. (It is not a big concern; I plan to run the server on Unix.
Some clients might be running on Windows, but they will use TCP/IP.)
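For reference, later Python versions (3.8+) expose an mmap-backed shared-memory primitive directly in the stdlib; a minimal sketch (the segment size and payload are arbitrary):

```python
from multiprocessing import shared_memory  # Python 3.8+

# Create an mmap-backed segment; another process could attach to the same
# memory by name with shared_memory.SharedMemory(name=shm.name).
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b"hello"
data = bytes(shm.buf[:5])

shm.close()
shm.unlink()   # free the segment once every user has closed it
print(data)    # b'hello'
```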
 
Laszlo Nagy

The fastest IPC/RPC implementation for Python is probably
omniORBpy:

http://omniorb.sourceforge.net/

It's cross-platform, language-independent and standards-(CORBA-)
compliant.
I don't want to use IDL, though. Clients will be written in Python, and
it would be a waste of time to write IDL files.
If it's just for "messaging", Spread should be interesting:

http://www.spread.org/

Also cross-platform & language-independent.
Looks promising. This is what I have found about it:

http://stackoverflow.com/questions/35490/spread-vs-mpi-vs-zeromq

So, it really depends on whether you are trying to build a parallel
system or distributed system. They are related to each other, but the
implied connotations/goals are different. Parallel programming deals
with increasing computational power by using multiple computers
simultaneously. Distributed programming deals with reliable
(consistent, fault-tolerant and highly available) group of computers.

I don't know the full theory behind distributed or parallel
programming. ZMQ seems easier to use.
 
