Samuel A. Falvo II
I have a shell script script.sh that launches a Java process in the
background using the &-operator, like so:
#!/bin/bash
java ... arguments here ... &
In my Python code, I want to invoke this shell script using the
subprocess module. Here is my code:
def resultFromRunning_(command):
    """Invokes a shell command, and returns the stdout response.

    Args:
        command: A string containing the complete shell command to run.

    Returns:
        A string containing the output of the command executed.

    Raises:
        ShellError if a non-zero return code is returned from the shell.
        OSError if the command isn't found, has inappropriate
            permissions, etc.
    """
    L = log4py.Logger().get_instance()
    L.info("Executing: " + command)
    p = subprocess.Popen(
        command,
        shell=True,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        close_fds=True
    )
    outputChannel = p.stdout
    output = outputChannel.read()
    result = p.wait()
    if result:
        raise ShellError(command, result, output)
    L.info("Result = " + str(output))
    return output
When I run this code, it kicks off the shell script, which in turn
kicks off the Java process. However, the Python code never returns from
outputChannel.read() until I explicitly kill the Java process myself
with the kill command.
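To make the behavior easy to reproduce, here is a minimal stand-alone sketch with sleep standing in for the Java process (the hang only seems to depend on the backgrounded child inheriting the pipe, not on Java itself):

```python
import subprocess
import time

# Minimal reproduction: "sleep 2" stands in for the backgrounded Java
# process. The shell exits right away, but the backgrounded child still
# holds the write end of the stdout pipe, so read() blocks until it dies.
start = time.time()
p = subprocess.Popen(
    "sleep 2 &\necho started",
    shell=True,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
    close_fds=True,
)
output = p.stdout.read()   # blocks for ~2 seconds, not just until echo
p.wait()
elapsed = time.time() - start
print(output.strip(), elapsed > 1.5)
```

Even though the shell prints "started" and exits immediately, read() only returns once the backgrounded sleep terminates.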
I've researched this issue on Google and various other websites, and
maybe I'm missing the obvious, but I cannot seem to find any
documentation relevant to this problem. I find plenty of references to
bugs filed in the past that appear to have been fixed, and websites
discussing a 64K limit on Popen's pipe buffer, but nothing relevant to
my situation.
Can anyone inform me, or point me to the appropriate documentation on
how to properly invoke a shell command, such that any spawned child
processes don't cause Python to hang on me? I assume it has something
to do with process groups, but I'm largely ignorant of how to control
those.
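For completeness, the one workaround I've stumbled on (again with sleep standing in for the Java process) is redirecting the background command's output inside the shell, so the child no longer holds the pipe Python is reading; I'd still like to understand whether this is the right approach:

```python
import subprocess
import time

# Same setup as before, but the background child's output is redirected
# inside the shell command, so it no longer holds the write end of the
# pipe Python is reading; read() returns as soon as the shell exits.
start = time.time()
p = subprocess.Popen(
    "sleep 2 > /dev/null 2>&1 &\necho started",
    shell=True,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
    close_fds=True,
)
output = p.stdout.read()   # returns immediately with b"started\n"
p.wait()
elapsed = time.time() - start
print(output.strip(), elapsed < 1.0)
```

With the redirect in place, read() sees EOF as soon as the shell itself exits, regardless of how long the background child keeps running.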
Thanks in advance.