
I'm trying to write a Python script that starts a subprocess and writes to the subprocess's stdin. I'd also like to be able to determine an action to be taken if the subprocess crashes.

The process I'm trying to start is a program called Nuke, which has its own built-in version of Python that I'd like to submit commands to, and then tell it to quit after the commands execute. So far I've worked out that if I start Python at the command prompt and then start Nuke as a subprocess, I can type commands into Nuke. But I'd like to put all this in a script so that the master Python program can start Nuke, write to its standard input (and thus into its built-in version of Python), and tell it to do snazzy things, so I wrote a script that starts Nuke like this:

subprocess.call(["C:/Program Files/Nuke6.3v5/Nuke6.3", "-t", "E:/NukeTest/test.nk"])

Then nothing happens because nuke is waiting for user input. How would I now write to standard input?

I'm doing this because I'm running a plugin with nuke that causes it to crash intermittently when rendering multiple frames. So I'd like this script to be able to start nuke, tell it to do something and then if it crashes, try again. So if there is a way to catch a crash and still be OK then that'd be great.
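Roughly, this is the shape of what I have in mind (a sketch only: `sys.executable` reading a script from stdin stands in for the Nuke binary, and `run_once` is a helper name I made up for illustration):

```python
import subprocess
import sys

def run_once(commands):
    """Start the child interpreter, feed commands to its stdin, return its exit code.
    sys.executable with '-' (read program from stdin) stands in for Nuke here."""
    p = subprocess.Popen([sys.executable, "-"], stdin=subprocess.PIPE)
    p.communicate(input=commands.encode())
    return p.returncode

# If the child crashes (non-zero exit status), try again a few times.
for attempt in range(3):
    if run_once("print('render frames here')\n") == 0:
        break
```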

jonathan topf
  • If you are looking to quickly write a string to a subprocess's stdin, use the `input` argument of [`subprocess.run`](https://docs.python.org/3/library/subprocess.html#subprocess.run); e.g., `subprocess.run(['cat'], input='foobar'.encode('utf-8'))` – anishpatel Oct 25 '18 at 18:14

3 Answers


It might be better to use communicate:

from subprocess import Popen, PIPE, STDOUT
p = Popen(['myapp'], stdout=PIPE, stdin=PIPE, stderr=PIPE)
# On Python 3 the input must be bytes (or pass text=True to Popen)
stdout_data = p.communicate(input=b'data_to_write')[0]

"Better", because of this warning:

Use communicate() rather than .stdin.write, .stdout.read or .stderr.read to avoid deadlocks due to any of the other OS pipe buffers filling up and blocking the child process.
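If you really do need to send input more than once, you can fall back to writing to the pipe directly. A minimal sketch (it assumes a POSIX system, with `cat` standing in for the real long-running program; the buffer warning above still applies, so keep the amounts of data small):

```python
from subprocess import Popen, PIPE

# 'cat' stands in for a long-running child process; it echoes stdin to stdout
p = Popen(['cat'], stdin=PIPE, stdout=PIPE)

p.stdin.write(b'first\n')
p.stdin.flush()                 # make sure the child actually sees the data
first = p.stdout.readline()     # b'first\n'

p.stdin.write(b'second\n')
p.stdin.flush()
second = p.stdout.readline()    # b'second\n'

p.stdin.close()                 # send EOF so the child can exit
p.wait()
```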

jro
  • Aaah great, thanks for that. Also, if I just do an `import subprocess`, will I still need to import Popen, PIPE, etc.? – jonathan topf Dec 12 '11 at 15:26
  • 1
    No, you don't, but then you need to reference them like `subprocess.PIPE`. This method also imports everything in the subprocess module. The `from subprocess import PIPE` introduces into the current namespace, so that you can use just `PIPE`. – jro Dec 12 '11 at 15:54
  • The only problem I'm having with this method is that the program freezes up while the process thinks. I would like the Python script to launch the process and monitor its stdout from afar. – jonathan topf Dec 13 '11 at 04:49
  • You might want to take a look at [this question](http://stackoverflow.com/q/7977829/991521) about processing output in smaller pieces. Alternatively, something like a thread could help if you want execution without blocking your main thread. – jro Dec 13 '11 at 07:44
  • Thanks jro. Do you know if there is a way to keep the process open after the communicate, if for example I need to write to the stdin more than once? – jonathan topf Dec 13 '11 at 08:57
  • 3
    The `communicate` method reads data until EOF is received. If you want to interact with the process dynamically, access the pipe using `p.stdin.write('data')`. For the reading, see my previous comment. The warning is about this way of communicating though, so take care you don't fill up the buffers. Easiest way to verify this, is to just try it... – jro Dec 13 '11 at 09:04
  • Thanks jro, I'm trying that now. I'm running into problems with formatting strings for write() in Python 3.x, but getting there. Thanks again. – jonathan topf Dec 13 '11 at 09:10
  • 11
    For python 3.4 you need to do `p.communicate(input="data for input".encode())` – qed Nov 09 '14 at 02:16
  • @jro I suppose you might have wanted to set the stdout parameter of `p` to `stdout=STDOUT`? Otherwise no need to import it. –  Dec 09 '16 at 19:39
  • 2
    When you're searching the answer for the question above, for example if you need to execute "ping" command and `communicate` doesn't do stick, this answer is utterly unhelpful. – UpmostScarab May 21 '17 at 08:09

To clarify some points:

As jro has mentioned, the right way is to use `subprocess.Popen.communicate`.

Yet, when feeding the stdin using subprocess.communicate with input, you need to initiate the subprocess with stdin=subprocess.PIPE according to the docs.

Note that if you want to send data to the process’s stdin, you need to create the Popen object with stdin=PIPE. Similarly, to get anything other than None in the result tuple, you need to give stdout=PIPE and/or stderr=PIPE too.

Also, qed has mentioned in the comments that for Python 3.4 you need to encode the string, meaning you need to pass bytes to `input` rather than a string. This is not entirely true. According to the docs, if the streams were opened in text mode, the input should be a string (source is the same page).

If streams were opened in text mode, input must be a string. Otherwise, it must be bytes.

So, if the streams were not opened explicitly in text mode, then something like below should work:

import subprocess
command = ['myapp', '--arg1', 'value_for_arg1']
p = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
output = p.communicate(input='some data'.encode())[0]

I've left the stderr value above deliberately as STDOUT as an example.

That being said, sometimes you might want the output of another process rather than building it up from scratch. Let's say you want to run the equivalent of echo -n 'CATCH\nme' | grep -i catch | wc -m. This should normally return the number of characters in 'CATCH' plus a newline character, which results in 6. The point of the echo here is to feed the CATCH\nme data to grep. So we can feed the data to grep's stdin as a variable in the Python subprocess chain, and then pass its stdout as a PIPE to the wc process's stdin (in the meantime, getting rid of the extra newline character):

import subprocess

what_to_catch = 'catch'
what_to_feed = 'CATCH\nme'

# We create the first subprocess, note that we need stdin=PIPE and stdout=PIPE
p1 = subprocess.Popen(['grep', '-i', what_to_catch], stdin=subprocess.PIPE, stdout=subprocess.PIPE)

# We immediately run the first subprocess and get the result
# Note that we encode the data, otherwise we'd get a TypeError
p1_out = p1.communicate(input=what_to_feed.encode())[0]

# Well the result includes an '\n' at the end, 
# if we want to get rid of it in a VERY hacky way
p1_out = p1_out.decode().strip().encode()

# We create the second subprocess, note that we need stdin=PIPE
p2 = subprocess.Popen(['wc', '-m'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)

# We run the second subprocess feeding it with the first subprocess' output.
# We decode the output to convert to a string
# We still have a '\n', so we strip that out
output = p2.communicate(input=p1_out)[0].decode().strip()

This is somewhat different from the response here, where you pipe two processes directly without adding the data in Python.
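For reference, piping the two processes into each other directly (instead of round-tripping the intermediate output through Python as above) would look something like this:

```python
import subprocess

what_to_catch = 'catch'
what_to_feed = b'CATCH\nme'

# grep's stdout is wired straight into wc's stdin, like a shell pipeline
p1 = subprocess.Popen(['grep', '-i', what_to_catch],
                      stdin=subprocess.PIPE, stdout=subprocess.PIPE)
p2 = subprocess.Popen(['wc', '-m'],
                      stdin=p1.stdout, stdout=subprocess.PIPE)

p1.stdin.write(what_to_feed)
p1.stdin.close()     # EOF for grep
p1.stdout.close()    # drop our reference so p2 owns that pipe end
output = p2.communicate()[0].decode().strip()
print(output)        # '6', same result as the version above
```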

Hope that helps someone out.

eaydin

You can provide a file-like object to the stdin argument of subprocess.call().

The documentation for the Popen object applies here.
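For example, a minimal sketch (a temporary file and `cat` stand in for the real input and program):

```python
import subprocess
import tempfile

# Any real file object can serve as the child's stdin.
with tempfile.TemporaryFile() as f:
    f.write(b'some input\n')
    f.seek(0)                                # rewind so the child reads from the start
    ret = subprocess.call(['cat'], stdin=f)  # 'cat' echoes the file to stdout

print(ret)  # 0 on success
```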

To capture the output, you should instead use subprocess.check_output(), which takes similar arguments. From the documentation:

>>> subprocess.check_output(
...     "ls non_existent_file; exit 0",
...     stderr=subprocess.STDOUT,
...     shell=True)
b'ls: non_existent_file: No such file or directory\n'