Oh, it’s actually a looped thing. That’s a little harder to transform, although definitely not impossible. Depending on how quickly you need it to respond, and how frequently your other loop iterates (one second in your example), you could do something like this:
import subprocess

def spawn_longrunner():  # give it a better name based on what it actually does
    global longrunner
    longrunner = subprocess.Popen(["long_running_process"], close_fds=False)

spawn_longrunner()
while True:
    subprocess.run(["short_running_process"], capture_output=True, timeout=1)
    if longrunner.poll() is not None:  # poll() updates returncode; returncode alone stays None
        do_some_stuff()
        spawn_longrunner()
In effect, what this does is: Every time you finish one of the short-running processes, check if the long-running one has finished; if so, do the subsequent work, and then restart. (I’m assuming here that do_some_stuff() is relatively fast and doesn’t itself need to be parallelized against the short-running processes; if that’s not the case, you definitely want threads or something here.)
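If do_some_stuff() does turn out to be slow, here's a minimal sketch of the threaded version: the long-running process and its follow-up work live on a dedicated thread, so the main loop never waits on them. (The command lines here are hypothetical stand-ins for your actual long- and short-running processes, and do_some_stuff() is a placeholder.)

```python
import subprocess
import sys
import threading

def do_some_stuff():
    # Hypothetical stand-in for your follow-up work
    print("Doing some stuff!")

def watch_longrunner():
    # Dedicated thread: run the long-running process, do the
    # follow-up work, then restart it, without blocking the main loop.
    while True:
        proc = subprocess.Popen(
            [sys.executable, "-c", "import time; time.sleep(2)"],  # stand-in
            close_fds=False,
        )
        proc.wait()  # blocks only this thread
        do_some_stuff()

watcher = threading.Thread(target=watch_longrunner, daemon=True)
watcher.start()

# Main thread keeps cycling the short-running processes as before
results = []
for _ in range(3):
    done = subprocess.run(
        [sys.executable, "-c", "print('short job')"],  # stand-in
        capture_output=True, timeout=1,
    )
    results.append(done.stdout)
```

The daemon flag means the watcher dies with the main thread, so you don't need explicit shutdown logic for this sketch; a real version would want cleaner teardown.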
That might not be suitable, though. Trouble is, to get a more effective event-driven system, you would need to do a bigger transformation of your code. Here’s how you could do it with asyncio. There’s a lot more code here because I’ve gone for fully-runnable rather than any stubs; hopefully that’s useful.
import asyncio

# Be my own short-running process
import sys
if "subproc" in sys.argv:
    print("Hi, here's some output")
    import time
    time.sleep(0.5)
    print("Here's some more")
    # Uncomment to see the timeout in force
    #time.sleep(1.5)
    #print("I'll have timed out before this")
    sys.exit()
# End subprocess code, now back to the main

def do_some_stuff():
    print("Doing some stuff!")

async def long_running_processes():
    while True:
        global proc; proc = await asyncio.create_subprocess_exec("sleep", "10", close_fds=False)
        await proc.wait()  # Wait for termination
        do_some_stuff()

async def main():
    task = asyncio.create_task(long_running_processes())
    while True:
        proc = await asyncio.create_subprocess_exec("python3", "aioparallel.py", "subproc", stdout=asyncio.subprocess.PIPE)
        try:
            out, err = await asyncio.wait_for(proc.communicate(), timeout=1)
        except asyncio.TimeoutError:
            print("Stopped the subprocess after one second")
            proc.kill()
        else:
            print("Got %d lines of output" % len(out.decode().strip().split("\n")))

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
    loop.run_until_complete(main())
finally:
    proc.kill()
The key distinction here is that, instead of threads, we have tasks. Now, it’s entirely possible that this has NOTHING WHATSOEVER to do with your problem, and it’s all been a waste of time; but I have known weird things to happen with threads and subprocesses being mixed on different platforms. (And yes, for once that isn’t a euphemism for “on anything other than Linux”; in fact, subprocess issues happen on basically every platform, but they’re different issues. Isn’t cross-platform coding fun?)
As a side note, this handles the timeout directly, since there's no subprocess.run() involved. So you have the flexibility to do whatever you wish. I've written it to be broadly equivalent to run()'s timeout behaviour (kill the process and then raise rather than returning output) but I don't know what your actual requirements are here.
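For instance, since you control the timeout handling, you could be politer than an immediate kill(): ask the process to stop with terminate() first, and only kill() if it hasn't exited after a grace period. Here's a self-contained sketch of that policy (run_with_timeout and its timeout/grace parameters are my invention, not anything from your code):

```python
import asyncio
import sys

async def run_with_timeout(args, timeout=1.0, grace=0.5):
    # Run a subprocess; on timeout, send SIGTERM first, then SIGKILL
    # after a grace period. Returns captured stdout, or None if the
    # process had to be stopped.
    proc = await asyncio.create_subprocess_exec(*args, stdout=asyncio.subprocess.PIPE)
    try:
        out, err = await asyncio.wait_for(proc.communicate(), timeout)
        return out
    except asyncio.TimeoutError:
        proc.terminate()  # polite request to stop
        try:
            await asyncio.wait_for(proc.wait(), grace)
        except asyncio.TimeoutError:
            proc.kill()  # it ignored us; force the issue
            await proc.wait()
        return None

# One process that finishes in time, one that doesn't
fast = asyncio.run(run_with_timeout([sys.executable, "-c", "print('done')"]))
slow = asyncio.run(run_with_timeout([sys.executable, "-c", "import time; time.sleep(5)"]))
```

A well-behaved process gets a chance to flush buffers and clean up on SIGTERM, which run()'s behaviour doesn't give it; whether that matters depends on what your subprocesses actually do.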