Any time you are working with any form of inter-process communication, control flow needs to be carefully thought out. This remains the case with the file objects provided by this module (or the os module equivalents).
When reading output from a child process that writes a lot of data to standard error while the parent is reading from the child's standard output, a deadlock can occur. A similar situation can occur with other combinations of reads and writes. The essential factors are that more than _PC_PIPE_BUF bytes are being written by one process in a blocking fashion, while the other process is reading from the first process, also in a blocking fashion.
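For illustration, the following sketch shows the parent-side pattern that can hang, assuming a child script, slave.py, that writes a large amount of data to standard error (such as the one shown below). Whether it actually deadlocks depends on how much data the child writes relative to the pipe buffer size.

    import popen2

    # Problematic pattern: read the child's stdout to completion before
    # touching its stderr.  If the child fills the stderr pipe while the
    # parent is blocked here, neither process can make progress.
    r, w, e = popen2.popen3('python slave.py')
    data = r.readlines()      # may never return
    errors = e.readlines()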
There are several ways to deal with this situation.
The simplest application change, in many cases, will be to follow this model in the parent process:
    import popen2

    r, w, e = popen2.popen3('python slave.py')
    e.readlines()
    r.readlines()
    r.close()
    e.close()
    w.close()
with code like this in the child:
    import os
    import sys

    # note that each of these print statements
    # writes a single long string
    print >>sys.stderr, 400 * 'this is a test\n'
    os.close(sys.stderr.fileno())
    print >>sys.stdout, 400 * 'this is another test\n'
In particular, note that sys.stderr must be closed after writing all data, or readlines() won't return. Also note that os.close() must be used, as sys.stderr.close() won't close stderr (otherwise assigning to sys.stderr would silently close it, so no further errors could be printed).
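To make the distinction concrete, here is a minimal contrast mirroring the last lines of the child shown above:

    # Not sufficient: as noted above, this does not close stderr itself,
    # so the parent's e.readlines() never sees end-of-file.
    # sys.stderr.close()

    # Sufficient: closing the descriptor signals EOF to the parent.
    os.close(sys.stderr.fileno())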
Applications that need a more general approach should either integrate I/O over pipes with their select() loops, or use separate threads to read each of the individual files provided by whichever popen*() function or Popen* class was used.
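As an illustrative sketch of the thread-based variant (the drain() helper and the 'python slave.py' command are assumptions for the example, not part of this module), one thread can be devoted to each output pipe so that neither can fill up while the other is being read:

    import popen2
    import threading

    def drain(pipe, chunks):
        # Read until EOF so the child can never block on a full pipe.
        chunks.append(pipe.read())

    r, w, e = popen2.popen3('python slave.py')
    out, err = [], []
    t_out = threading.Thread(target=drain, args=(r, out))
    t_err = threading.Thread(target=drain, args=(e, err))
    t_out.start()
    t_err.start()
    w.close()                 # no input for the child
    t_out.join()
    t_err.join()
    r.close()
    e.close()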