Dave Saville
I am writing a Perl daemon on a Raspberry Pi.
The Perl script talks and listens to another process that it has
started via fork/exec.
Normally when one forks, it is usual to close unneeded file handles.
The first question, then, is: should one close *all* the open handles
if you are going to call exec anyway?
Secondly, I was under the impression that it did not matter in which
order named pipes are opened. The forked process is reading one named
pipe and writing to a second. But more often than not my Perl script
hangs trying to open one:
open my $_STDOUT, '<', 'fifo_stdout' or die "Can't open fifo_stdout: $!";
never returns.
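For completeness, here are both pipe opens together. The order shown
and the name 'fifo_stdin' for the second pipe are stand-ins; I don't
know whether the order matters, which is part of the question.

# Parent side, after the fork.
open my $_STDIN, '>', 'fifo_stdin' or die "Can't open fifo_stdin: $!";
open my $_STDOUT, '<', 'fifo_stdout'   # <-- this is the open that hangs
    or die "Can't open fifo_stdout: $!";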
With two xterms I can "echo hi > pipe" and "cat < pipe", and it does
not matter which order I do them in - the first waits until the second
runs.
But surely open should not be trying to read, should it?
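(If it helps to frame the question: I gather a non-blocking sysopen
would return immediately - but that feels like papering over whatever
I have misunderstood about plain open.)

use Fcntl;  # for O_RDONLY and O_NONBLOCK

sysopen my $_STDOUT, 'fifo_stdout', O_RDONLY | O_NONBLOCK
    or die "Can't sysopen fifo_stdout: $!";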
TIA