Reassign or discard Popen().stdout from a server process

John O'Hagan

I'm starting a server process as a subprocess. Startup is slow and
unpredictable (around 3-10 seconds), so I'm reading from its stdout until I get
a line that tells me it's ready before proceeding. In simplified form:

import subprocess
proc = subprocess.Popen(['server', 'args'], stdout=subprocess.PIPE)
while proc.stdout.readline() != "Ready.\n":
    pass

Now I can start communicating with the server, but I eventually realised that
since I'm no longer reading stdout, the pipe buffer fills up with output from
the server; before long it blocks and the server stops working.

I can't keep reading because that will block - there won't be any more output
until I send some input, and I don't want it in any case.

To try to fix this I added:

proc.stdout = os.path.devnull

which has the effect of stopping the server from failing, but I'm not convinced
it's doing what I think it is. If I replace devnull in the above line with a
real file, it stays empty although I know there is more output, which makes me
think it hasn't really worked.

Simply closing stdout also seems to stop the crashes, but doesn't that mean
it's still being written to, but the writes are just silently failing? In
either case I'm wary of more elusive bugs arising from misdirected stdout.

Is it possible to re-assign the stdout of a subprocess after it has started?
Or just close it? What's the right way to read stdout up to a given line, then
discard the rest?

Thanks,

john
 
Nobody

> I can't keep reading because that will block - there won't be any more
> output until I send some input, and I don't want it in any case.
>
> To try to fix this I added:
>
> proc.stdout = os.path.devnull
>
> which has the effect of stopping the server from failing, but I'm not
> convinced it's doing what I think it is.

It isn't. os.path.devnull is a string, not a file. But even if you did:

proc.stdout = open(os.path.devnull, 'w')

that still wouldn't work.

> If I replace devnull in the above line with a real file, it stays empty
> although I know there is more output, which makes me think it hasn't
> really worked.

It hasn't.

> Simply closing stdout also seems to stop the crashes, but doesn't that mean
> it's still being written to, but the writes are just silently failing? In
> either case I'm wary of more elusive bugs arising from misdirected stdout.

If you close proc.stdout, the next time the server writes to its stdout,
it will receive SIGPIPE or, if it catches that, the write will fail with
EPIPE (write on pipe with no readers). It's up to the server how it deals
with that.
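
In code, that option might look like this (a rough sketch only, assuming
Python 3, where reads from the pipe return bytes, and the 'server' command
and "Ready." line from the original post):

import subprocess

proc = subprocess.Popen(['server', 'args'], stdout=subprocess.PIPE)
# Wait for the startup line, then give up our end of the pipe.
while proc.stdout.readline() != b"Ready.\n":
    pass
proc.stdout.close()  # the server's next write gets SIGPIPE or EPIPE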

> Is it possible to re-assign the stdout of a subprocess after it has started?

No.

> Or just close it? What's the right way to read stdout up to a given
> line, then discard the rest?

If the server can handle the pipe being closed, go with that. Otherwise,
options include redirecting stdout to a file and running "tail -f" on the
file from within Python, or starting a thread or process whose sole
function is to read and discard the server's output.
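
A sketch of the reader-thread option, under the same assumptions (Python 3,
and the 'server' command and "Ready." line from the original post):

import subprocess
import threading

def drain(pipe):
    # Read and discard everything the server writes, so the pipe buffer
    # never fills up and the server never blocks in write().
    for _ in iter(pipe.readline, b''):
        pass
    pipe.close()

proc = subprocess.Popen(['server', 'args'], stdout=subprocess.PIPE)
while proc.stdout.readline() != b"Ready.\n":
    pass
threading.Thread(target=drain, args=(proc.stdout,), daemon=True).start()
# ... carry on talking to the server here ...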
 
John O'Hagan

> It isn't. os.path.devnull is a string, not a file. But even if you did:
>
> proc.stdout = open(os.path.devnull, 'w')
>
> that still wouldn't work.

As mentioned earlier in the thread, I did in fact use open(); this was a typo,
[...]

> If the server can handle the pipe being closed, go with that. Otherwise,
> options include redirecting stdout to a file and running "tail -f" on the
> file from within Python, or starting a thread or process whose sole
> function is to read and discard the server's output.

Thanks, that's all clear now.

But I'm still a little curious as to why even unsuccessfully attempting to
reassign stdout seems to stop the pipe buffer from filling up.

John
 
Nobody

> But I'm still a little curious as to why even unsuccessfully attempting to
> reassign stdout seems to stop the pipe buffer from filling up.

It doesn't. If the server continues to run, then it's ignoring/handling
both SIGPIPE and the EPIPE error. Either that, or another process has the
read end of the pipe open (so no SIGPIPE/EPIPE), and the server is using
non-blocking I/O or select() so that it doesn't block writing its
diagnostic messages.
 
John O'Hagan

> It doesn't. If the server continues to run, then it's ignoring/handling
> both SIGPIPE and the EPIPE error. Either that, or another process has the
> read end of the pipe open (so no SIGPIPE/EPIPE), and the server is using
> non-blocking I/O or select() so that it doesn't block writing its
> diagnostic messages.

The server fails with stdout=PIPE if I don't keep reading it, but doesn't fail
if I do stdout=anything (I've tried files, strings, integers, and None) soon
after starting the process, without any other changes. How is that consistent
with either of the above conditions? I'm sure you're right, I just don't
understand.

Regards,

John
 
Nobody

> The server fails with stdout=PIPE if I don't keep reading it, but
> doesn't fail if I do stdout=anything (I've tried files, strings,
> integers, and None) soon after starting the process, without any other
> changes. How is that consistent with either of the above conditions? I'm
> sure you're right, I just don't understand.

What do you mean by "fail"? I wouldn't be surprised if it hung, due to the
write() on stdout blocking. If you reassign the .stdout member, the
existing file object is likely to become unreferenced, get garbage
collected, and close the pipe, which would prevent the server from
blocking (the write() will fail rather than blocking).

If the server puts the pipe into non-blocking mode, write() will fail with
EAGAIN if you don't read it but with EPIPE if you close the pipe. The
server may handle these cases differently.
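
Put another way, the "reassignment" only rebinds the Python attribute; its
real effect comes from what happens to the old pipe object. A sketch, assuming
CPython's reference counting and the 'server' command from the original post:

import os
import subprocess

proc = subprocess.Popen(['server', 'args'], stdout=subprocess.PIPE)
# Rebinding .stdout drops the only reference to the pipe object created by
# Popen; under CPython it is closed immediately, so the server sees the same
# thing it would see after an explicit proc.stdout.close().
proc.stdout = open(os.path.devnull, 'w')  # or None, a string, an int ...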
 
John O'Hagan

> What do you mean by "fail"? I wouldn't be surprised if it hung, due to the
> write() on stdout blocking. If you reassign the .stdout member, the
> existing file object is likely to become unreferenced, get garbage
> collected, and close the pipe, which would prevent the server from
> blocking (the write() will fail rather than blocking).
>
> If the server puts the pipe into non-blocking mode, write() will fail with
> EAGAIN if you don't read it but with EPIPE if you close the pipe. The
> server may handle these cases differently.

By "fail" I mean the server, which is the Fluidsynth soundfont rendering
program, stops producing sound in a way consistent with the blocked write() as
you describe. It continues to read stdin; in fact, Ctrl+C-ing out of the block
produces all the queued sounds at once.

What I didn't realise was that the (ineffective) reassignment of stdout has the
side-effect of closing it, because the old pipe object becomes unreferenced and
is garbage-collected, as you explain above. I asked on the Fluidsynth list, and
it currently ignores the case where the pipe it's writing to has been closed.
All makes sense now, thanks.


John
 
