I use a simple, general piece of code to read output from a (Linux)
binary by executing the command in a subprocess:
import subprocess

p = subprocess.Popen(mycommand, stdout=subprocess.PIPE)
for line in p.stdout:
    [..process line..]
I find this seems to be the default method, and it works well: whenever an output line arrives, it's processed, and when no line is available on stdout, the for-loop blocks until one arrives.
This is dependent on the command being run. Some flush their output
line by line, letting you process lines as they are emitted. But it is
more common for a command attached to a pipe to buffer its output and
flush only when the buffer becomes full, so you would see bursts of
lines at the Python end.
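As a sketch of that flushing behaviour (the child script below is a hypothetical stand-in for your real command): a child that flushes after each line delivers lines to the parent promptly. For external commands you cannot modify, prefixing them with `stdbuf -oL` (a GNU coreutils tool, not available everywhere) often forces line buffering.

```python
import subprocess
import sys

# Hypothetical stand-in for "mycommand": a child that emits three
# lines and flushes after each, so they reach the parent promptly.
child_code = r"""
import sys
for i in range(3):
    print("line", i)
    sys.stdout.flush()
"""

p = subprocess.Popen(
    [sys.executable, "-c", child_code],
    stdout=subprocess.PIPE,
    text=True,  # decode bytes to str, line-oriented iteration
)
lines = []
for line in p.stdout:
    lines.append(line.rstrip())
p.wait()
print(lines)
```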
However, in the meantime I want to constantly update a display of some
timer value, which should run constantly in parallel … so I'm thinking
of using a second subprocess for that timer job, but how do I code
that?
Usually you start each one (make the Popen call for each), then consume
the output as above. The commands commence running as soon as Popen()
returns.
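A minimal sketch of that, using `python -c` one-liners as hypothetical stand-ins for real commands; both children are already running before we read anything:

```python
import subprocess
import sys

# Two hypothetical commands started back to back; each begins running
# as soon as its Popen() returns, before we read any output.
p1 = subprocess.Popen([sys.executable, "-c", "print('first')"],
                      stdout=subprocess.PIPE, text=True)
p2 = subprocess.Popen([sys.executable, "-c", "print('second')"],
                      stdout=subprocess.PIPE, text=True)

# Both processes are running concurrently; now consume their output.
out1 = [line.rstrip() for line in p1.stdout]
out2 = [line.rstrip() for line in p2.stdout]
p1.wait()
p2.wait()
print(out1, out2)
```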
But I'd just use a Thread for your timer, running a little function
like:
def ticker():
    while busy:
        print("now =", time.time())
        time.sleep(1)
The busy variable can come from an outer scope (eg a global, or
better, from the function where you set this stuff up). You can set it
to False after your Popen(mycommand,...) output finishes, and then the
ticker Thread can notice that and exit its loop.
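Putting the pieces together, a minimal sketch (the `python -c` child here is a hypothetical stand-in for your real command):

```python
import subprocess
import sys
import threading
import time

busy = True  # shared flag; the ticker loop watches this

def ticker():
    # Print a timestamp once a second until busy goes False.
    while busy:
        print("now =", time.time())
        time.sleep(1)

t = threading.Thread(target=ticker)
t.start()

# Hypothetical stand-in for your real command.
p = subprocess.Popen([sys.executable, "-c", "print('work done')"],
                     stdout=subprocess.PIPE, text=True)
for line in p.stdout:
    pass  # ..process line.. (the ticker keeps printing meanwhile)
p.wait()

busy = False  # tell the ticker to stop
t.join()
```

A plain boolean works here because only the main thread writes it; if you want something more explicit, a threading.Event (set()/is_set()) is the idiomatic signalling primitive.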
Can I create another for-loop to catch its output? I guess not …
It needn’t produce output, or at least, not output you need to handle
yourself.
Btw, I also stumbled upon 'asyncio' but I have never used it … can
that be used to code my idea?
Dunno. I have yet to have a use for it myself.
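For what it's worth, asyncio does provide a subprocess API (asyncio.create_subprocess_exec) and cooperative timers via asyncio.sleep(), so both jobs can share one thread. A rough sketch, again with a hypothetical stand-in command:

```python
import asyncio
import sys
import time

async def main():
    # Hypothetical stand-in for your real command.
    proc = await asyncio.create_subprocess_exec(
        sys.executable, "-c", "print('work done')",
        stdout=asyncio.subprocess.PIPE,
    )

    async def ticker():
        # Cooperative timer: yields to the event loop while sleeping,
        # so it runs "in parallel" with the output reader below.
        while True:
            print("now =", time.time())
            await asyncio.sleep(1)

    tick_task = asyncio.create_task(ticker())

    lines = []
    while True:
        line = await proc.stdout.readline()
        if not line:  # EOF: the command has finished
            break
        lines.append(line.decode().rstrip())
    await proc.wait()

    tick_task.cancel()  # stop the ticker once output is done
    return lines

print(asyncio.run(main()))
```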
See the threading module for Threads, and the time module for
time.time() and time.sleep().
Cheers,
Cameron Simpson cs@cskk.id.au