Using fork() on XP to run processes in parallel

Franklin H.

On XP:

I'd like to fork off multiple processes which later return data
(let's say a single hash reference) to the parent and terminate. The
reason I'd like to do it this way rather than sequentially is that
each process takes some time to complete, and I'd like to save time by
running them in parallel.

I've looked closely at perlipc but really can't make heads or tails
of how to apply it to what I'm doing. Where else should I get started?
 
Gunnar Hjalmarsson

Franklin said:
I'd like to fork off multiple processes which later return data
(let's say a single hash reference) to the parent and terminate. The
reason I'd like to do it this way rather than sequentially is that
each process takes some time to complete, and I'd like to save time by
running them in parallel.

I've looked closely at perlipc but really can't make heads or tails
of how to apply it to what I'm doing. Where else should I get started?

I, too, find that stuff difficult to get a grasp of, but I'm still
successfully forking multiple processes with the help of the CPAN module
Parallel::ForkManager.
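
For the archives, the basic pattern is only a few lines. An untested
sketch, with the job loop and the child's work as placeholders:

use strict;
use warnings;
use Parallel::ForkManager;

my $pm = Parallel::ForkManager->new(5);   # at most 5 children at a time

for my $job (1 .. 10) {
    $pm->start and next;    # parent gets the child's PID and moves on
    # ... child does its time-consuming work here ...
    $pm->finish;            # child terminates
}
$pm->wait_all_children;     # parent blocks until every child has exited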
 
xhoster

Franklin H. said:
On XP:

I'd like to fork off multiple processes which later return data
(let's say a single hash reference) to the parent and terminate.

If you use fork and only send the hash reference, the parent won't be able
to dereference it. You have to send the whole hash, for example serialized
with Storable or Data::Dumper.
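
Something along these lines (untested sketch, one child over a plain
pipe(); the hash contents are made up). Storable's freeze/thaw does the
actual copying:

use strict;
use warnings;
use Storable qw(freeze thaw);

pipe(my $reader, my $writer) or die "pipe: $!";

my $pid = fork();
die "fork: $!" unless defined $pid;

if ($pid == 0) {              # child
    close $reader;
    my %result = (status => 'done', count => 42);
    binmode $writer;
    print {$writer} freeze(\%result);   # ship the data itself, not a ref
    close $writer;
    exit 0;
}

close $writer;                # parent
binmode $reader;
my $frozen = do { local $/; <$reader> };   # slurp all the child wrote
close $reader;
waitpid($pid, 0);
my $result = thaw($frozen);   # a fresh hash in the parent's memory
print "count: $result->{count}\n";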
The
reason I'd like to do it this way rather than sequentially is that
each process takes some time to complete, and I'd like to save time by
running them in parallel.

If CPU is the bottleneck, this will only save time if you have a multi-CPU
machine. If IO latency is the bottleneck, it may be better to look into
non-blocking IO rather than fork.

I've looked closely at perlipc but really can't make heads or tails
of how to apply it to what I'm doing. Where else should I get started?

Even the parts on "Using open() for IPC" and "Safe Pipe Opens"?
That is probably where I would start.
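
For reference, the "Safe Pipe Opens" idiom from perlipc looks roughly
like this. A sketch only: the open('-|') form has historically been
unreliable under Windows' fork emulation, so it wants testing on XP, and
the key=value scheme is just an invented example:

use strict;
use warnings;

# open() with "-|" forks and wires the child's STDOUT to our handle
my $pid = open(my $from_child, '-|');
die "can't fork: $!" unless defined $pid;

if ($pid == 0) {
    # Child: emit simple key=value lines for the parent to parse.
    print "status=ok\n";
    print "count=42\n";
    exit 0;
}

# Parent: rebuild a hash from the child's output.
my %result;
while (my $line = <$from_child>) {
    chomp $line;
    my ($k, $v) = split /=/, $line, 2;
    $result{$k} = $v;
}
close $from_child;
print "count is $result{count}\n";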

Since you are on Windows, I'd also look into using threads (perldoc
threads).
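
An untested sketch of that: each thread's return value comes back
through join(), which clones the hash into the main thread, so no
serialization or temp files are needed. The work and hash keys are
placeholders:

use strict;
use warnings;
use threads;

my @threads;
for my $n (1 .. 4) {
    push @threads, threads->create(sub {
        # ... time-consuming work here ...
        return { job => $n, result => $n * $n };
    });
}

for my $thr (@threads) {
    my $data = $thr->join;   # the returned hash is cloned back to us
    print "job $data->{job} -> $data->{result}\n";
}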

Xho
 
xhoster

Gunnar Hjalmarsson said:
I, too, find that stuff difficult to get a grasp of, but I'm still
successfully forking multiple processes with the help of the CPAN module
Parallel::ForkManager.

But that module doesn't facilitate communication back to the parent,
AFAICT.

Xho
 
Gunnar Hjalmarsson

xhoster said:
But that module doesn't facilitate communication back to the parent,
AFAICT.

No, that's correct; you need to use one or more temporary files for
that. But it does facilitate the forking itself. ;-)
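
To make the temp-file route concrete, an untested sketch pairing
Parallel::ForkManager with Storable, one file per child (the file names
and the work are invented):

use strict;
use warnings;
use Parallel::ForkManager;
use Storable qw(store retrieve);
use File::Temp qw(tempdir);

my $dir = tempdir(CLEANUP => 1);
my $pm  = Parallel::ForkManager->new(4);

for my $job (1 .. 4) {
    $pm->start and next;
    my %result = (job => $job, value => $job ** 2);
    store(\%result, "$dir/job$job.sto");    # child writes its result
    $pm->finish;
}
$pm->wait_all_children;

for my $job (1 .. 4) {                      # parent collects the results
    my $result = retrieve("$dir/job$job.sto");
    print "job $result->{job}: $result->{value}\n";
}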
 
Franklin H.

xhoster said:
If CPU is the bottleneck, this will only save time if you have a multi-CPU
machine. If IO latency is the bottleneck, it may be better to look into
non-blocking IO rather than fork.

Actually, each process is an LWP POST request. The bottleneck is
on the remote server.

If you use fork and only send the hash reference, the parent won't be able
to dereference it. You have to send the whole hash, for example serialized
with Storable or Data::Dumper.

So perhaps it just makes sense to take the easy way out here and use
temporary files. :-(
 
Anno Siegel

Franklin H. said:
Actually, each process is an LWP POST request. The bottleneck is
on the remote server.

So perhaps it just makes sense to take the easy way out here and use
temporary files. :-(

Whether you use temp files or pipes or whatever, you will have to write
something to them that another process can decode as a hash. Storable
and Data::Dumper are likely candidates in either case.
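
For example, the Data::Dumper variant might go like this (untested; the
file name and hash contents are invented). Note that eval'ing a file
means trusting whatever wrote it; Storable's store/retrieve avoids that:

use strict;
use warnings;
use Data::Dumper;

my %result = (status => 'ok', count => 42);

# Writer side: dump the hash as Perl source.
local $Data::Dumper::Purity = 1;    # make self-references eval-safe
open my $out, '>', 'result.dump' or die "write: $!";
print {$out} Data::Dumper->Dump([ \%result ], ['hashref']);
close $out;

# Reader side: eval the dump to rebuild the hash.
my $code = do {
    open my $in, '<', 'result.dump' or die "read: $!";
    local $/;
    <$in>;
};
my $hashref;
eval $code;                          # assigns to $hashref
die $@ if $@;
print "count: $hashref->{count}\n";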

Anno
 
Franklin H.

Anno Siegel wrote in comp.lang.perl.misc:
Whether you use temp files or pipes or whatever, you will have to write
something to them that another process can decode as a hash. Storable
and Data::Dumper are likely candidates in either case.

Yep, I have had good luck with Data::Dumper in the past. I plan on
using it again.

Still, I am having some further problems with this. I posted under a new
topic to comp.lang.perl.misc with the subject "problem with forks writing
to files on XP", found on Google Groups at
http://groups-beta.google.com/group...9480e/abea508f93af34ac?hl=en#abea508f93af34ac
 
xhoster

Franklin H. said:
Actually, each process is an LWP POST request. The bottleneck is
on the remote server.

From your computer's perspective, the remote server is just a device for
IO, so that still holds. :)

But you might want to look at LWP::Parallel. (I can't vouch for it as I've
never needed it.) Also, you might need to consider, if you haven't
already, how the remote server will appreciate all the parallel attention
you will be bestowing on it.
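
Going by its documented interface, LWP::Parallel::UserAgent usage would
look about like this. Untested, as noted above, and the URLs and form
data are invented:

use strict;
use warnings;
use LWP::Parallel::UserAgent;
use HTTP::Request::Common qw(POST);

my $pua = LWP::Parallel::UserAgent->new;
$pua->timeout(30);

for my $url ('http://example.com/a', 'http://example.com/b') {
    my $req = POST $url, [ query => 'something' ];
    if (my $res = $pua->register($req)) {
        # register() hands back an error response if it couldn't queue it
        warn "could not register $url: ", $res->code, "\n";
    }
}

my $entries = $pua->wait;    # blocks until all registered requests finish
for my $key (keys %$entries) {
    my $res = $entries->{$key}->response;
    print $res->request->uri, " => ", $res->code, "\n";
}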

So perhaps it just makes sense to take the easy way out here and use
temporary files. :-(

That might be a good solution (depending on lots of unknowns here), but
those temporary files are still going to need to be parsed. If you just
write a hash-ref out to the temp file, you will be in the same boat as if
you pass just a hash-ref through a pipe. This is where "use threads" might
be an advantage.

Xho
 
