Ruby dispatcher and work processes

Martin C.

I would like to implement an (open source) application platform
analogous to SAP's ABAP platform in Ruby. (See below for links to SAP
architecture diagrams.)

In part, this requires having different Ruby processes talk to each
other, specifically having one dispatcher process dishing out work to
standalone work processes.

I am at a loss, though, as to how to go about this. One stumbling
block I see is that if each work process is a Ruby process, I might
load Ruby code (files/programs) into memory in a work process but
couldn't release it again. (Obviously I don't want to incur the
expense of starting up Ruby for each incoming request.)

Any ideas on what existing frameworks I could look at? I was
wondering about using MagLev, especially to take advantage of storing
data in shared memory between processes in an easy way. (i.e. each work
process would be a MagLev instance).

Any comments or suggestions would be welcome.

Links to SAP ABAP architecture information:

http://help.sap.com/saphelp_NW70EHP1core/helpdata/en/fc/eb2e8a358411d1829f0000e829fbfe/content.htm

http://help.sap.com/SAPhelp_nw70/helpdata/en/84/54953fc405330ee10000000a114084/content.htm
 
Robert Klemme

> I would like to implement an (open source) application platform
> analogous to SAP's ABAP platform in Ruby. (See below for link to SAP
> architecture diagrams).

Maybe the announced Fairy framework is for you.

> In part, this requires having different Ruby processes talk to each
> other, specifically having one dispatcher process dishing out work to
> standalone work processes.

You could have a Queue instance and have several worker processes read
from it via DRb. That would be about the simplest scenario I can
think of.
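A minimal sketch of that setup, using the standard-library DRb and a
thread-safe Queue. The URI, port, and job format are illustrative, not
from the thread:

```ruby
require 'drb/drb'

# Dispatcher side: expose a thread-safe Queue over DRb so that
# separate worker processes can pop jobs from it.
queue = Queue.new
DRb.start_service('druby://localhost:8787', queue)

# Each worker process would then connect and pull work:
#
#   require 'drb/drb'
#   DRb.start_service
#   queue = DRbObject.new_with_uri('druby://localhost:8787')
#   loop do
#     job = queue.pop           # blocks until the dispatcher pushes work
#     # ... load and run the requested program ...
#   end

# The dispatcher hands out work simply by pushing onto the queue:
queue.push(program: 'report_01', params: { year: 2010 })
```

Because workers pull from the queue, idle workers pick up the next job
automatically, with no scheduling logic needed in the dispatcher.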

> I am at a loss though as to how to go about this. In part, one stumbling
> block I see is that if each work process is a Ruby process, I might load
> ruby code (files/programs) into memory in a work process, but I couldn't
> release it again. (Obviously I don't want to incur the expense of
> starting up Ruby for each incoming request).

You could terminate a worker process after it has processed a number
of requests or has run for a particular time, thus balancing
process-creation overhead against memory usage.
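A sketch of that recycling idea using fork and a pipe per worker
generation. All names, the batch size, and the job strings are
illustrative, and fork requires a Unix-like platform:

```ruby
# Each worker process serves at most MAX_REQUESTS jobs and then exits,
# releasing whatever program code it loaded; the supervisor loop below
# forks a fresh replacement for the next batch.
MAX_REQUESTS = 2
jobs = %w[rep01 rep02 rep03 rep04]
pids = []

jobs.each_slice(MAX_REQUESTS) do |batch|
  reader, writer = IO.pipe
  pid = fork do
    writer.close
    batch.size.times do
      job = reader.gets.chomp
      # ... load and execute the requested program here ...
      puts "worker #{Process.pid} handled #{job}"
    end
    # Falling off the end of the block exits the process,
    # freeing all memory the worker accumulated.
  end
  pids << pid
  reader.close
  batch.each { |job| writer.puts(job) }
  writer.close
  Process.wait(pid)
end
```

Tuning MAX_REQUESTS trades fork overhead against how long a bloated
worker is allowed to live.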

> Any ideas on what existing frameworks I could look at? I was
> wondering about using MagLev, especially to take advantage of storing
> data in shared memory between processes in an easy way. (i.e. each work
> process would be a MagLev instance).

Whether I would add the complexity of shared memory depends on the
amount of data that needs to be shared. If requests and responses are
small, I would certainly not use shmem. Also, if the data is read from
and stored elsewhere (e.g. an RDBMS), I'd probably not bother with
shmem.

Kind regards

robert


PS: Get well soon to your dog!

PPS: I see Pink does not need to wait any more.
 
Martin C.

Robert Klemme wrote in post #966549:
> Maybe the announced fairy framework is for you.

Fairy framework?

> You could have a Queue instance and have several worker processes read
> from it via DRb. That would be about the simplest scenario I can
> think of.

SAP's method of dispatching works on a "push" rather than a "pull"
model, as far as I can tell.

> You could terminate a worker process after it has processed a number
> of requests or has run for a particular time thus balancing process
> creation overhead and memory usage.

Sounds complex, but I guess it is worth looking into. Is there a way
of reloading a class instead? I.e. if all "programs" or "applications"
executed on the platform were implemented as a particular class,
perhaps reloading that class with new content could work?
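One way that reload idea might look, assuming each "program" is a
single class in its own file (the file, class, and method names here
are hypothetical). Two caveats: instances created from the old class
keep running the old code, and any other reference to the constant
keeps the old class alive, so memory is only reclaimed once nothing
points at it:

```ruby
require 'tmpdir'

# Drop the old constant and re-read the file, yielding a fresh class.
# load (unlike require) always re-evaluates the file.
def reload_program(const_name, path)
  Object.send(:remove_const, const_name) if Object.const_defined?(const_name)
  load path
  Object.const_get(const_name)
end

Dir.mktmpdir do |dir|
  path = File.join(dir, 'report.rb')

  File.write(path, "class Report; def run; 'v1'; end; end")
  puts reload_program(:Report, path).new.run   # => v1

  # The "program" is edited on disk; the work process picks it up
  # without restarting Ruby.
  File.write(path, "class Report; def run; 'v2'; end; end")
  puts reload_program(:Report, path).new.run   # => v2
end
```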

> Whether I would add the complexity of shared memory I would make
> dependent on the amount of data that needs to be shared. If requests
> and responses are small I would certainly not use shmem. Also, if the
> data is read and stored elsewhere (e.g. RDBMS) I'd probably not bother
> to use shmem.

The advantage is that if one process handles your current request and
a different one handles your next, they need only attach to the shared
memory, without unloading and reloading and without the overhead of
persisting to a database between requests. (Or that is at least the
thinking; SAP has obviously perfected this over literally decades.)
 
Tony Arcieri

> Any ideas on what existing frameworks I could look at? I was
> wondering about using MagLev, especially to take advantage of storing
> data in shared memory between processes in an easy way. (i.e. each work
> process would be a MagLev instance).


This may be a more general question about the ABAP architecture, but why
does it need both a database and shared state between workers?

This may just be an architectural preference on my part, but I prefer
my Ruby processes to be shared-nothing and totally stateless, with all
state stored in the database and only in the database.

If you really need shared state between workers, I'd suggest using JRuby and
having each worker run as a separate thread. JRuby provides concurrent
execution of Ruby code without a global interpreter lock, and all workers
share a heap.

IronRuby also supports this, as does the Rubinius "hydra" branch.
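A thread-per-worker sketch of that suggestion, using the
standard-library Thread and Queue. The same code runs on MRI, but only
JRuby (or the other implementations mentioned) executes the threads in
parallel; the worker count and the doubling are stand-ins for real
work:

```ruby
# Workers are threads sharing one heap; jobs and results flow through
# thread-safe Queues, so no explicit locking is needed here.
queue   = Queue.new
results = Queue.new

workers = 4.times.map do
  Thread.new do
    while (job = queue.pop) != :shutdown
      results << job * 2        # stand-in for real work on shared state
    end
  end
end

10.times { |i| queue.push(i) }
workers.size.times { queue.push(:shutdown) }  # one poison pill per worker
workers.each(&:join)

puts results.size               # => 10
```

Any worker can also read and write ordinary shared Ruby objects
directly, which is what makes this the in-process analogue of the
shared-memory idea.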
 
ara.t.howard

> Any ideas on what existing frameworks I could look at? I was
> wondering about using MagLev, especially to take advantage of storing
> data in shared memory between processes in an easy way. (i.e. each work
> process would be a MagLev instance).
>
> Any comments or suggestions would be welcome.

i think it's already written for you:

gem install slave

https://github.com/ahoward/slave/
 
