Handling SIGINT in multiprocessing on Windows

On Linux, the following code lets the main process shut down its child process when a SIGINT is received (if Ctrl+C is pressed after the "Wait for event" message):

import multiprocessing as mp
import signal
import time

def func(shutdown_event):
    # Ignore Ctrl+C in the child; shutdown is driven by the event instead.
    signal.signal(signal.SIGINT, signal.SIG_IGN)
    print("Wait for event")
    is_set = shutdown_event.wait(timeout=10)
    print(f"Event outcome: {is_set}")

class Main:
    def run(self):
        self.shutdown_event = mp.Event()
        self.start_time = time.perf_counter()
        # Translate SIGINT in the main process into a shutdown event.
        signal.signal(signal.SIGINT, self.handler)
        child_process = mp.Process(target=func, args=(self.shutdown_event,))
        child_process.start()
        child_process.join(timeout=5)
        print(f"Exit code: {child_process.exitcode}")

    def handler(self, signum, frame):
        elapsed_time = time.perf_counter() - self.start_time
        print(f"Main process interrupted after {elapsed_time:.3f}s")
        self.shutdown_event.set()

if __name__ == "__main__":
    mp.set_start_method("spawn")
    Main().run()

If the code is run on Windows, the signal handler is only called once the join() call times out:

Wait for event
Main process interrupted after 5.043s
Exit code: None
Event outcome: True

After some digging I learned that on Windows most blocking calls aren’t woken up by an interrupt (see [Python-Dev] Interrupt thread.join() with Ctrl-C / KeyboardInterrupt on Windows).

Is there a good way to handle interrupts on Windows when using multiprocessing?

Further Context
I came across this because I am working on an application that uses a Manager to share data with a process running on a remote server. To keep the logs in one place I have an additional process running on the local machine that handles the log-messages. The goal is to shut everything down in the right sequence so no logs get lost.

As a sanity check, I would first try joining in a loop with a small timeout. Your code does work on Linux.
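A minimal sketch of that loop-join idea (`supervise` and `worker` are hypothetical names, not from the original code, and the event is set directly here to stand in for the signal handler firing):

```python
import multiprocessing as mp

def worker(shutdown_event):
    # Stand-in child: block until told to shut down (or time out).
    shutdown_event.wait(timeout=10)

def supervise(child, poll=0.1):
    # Join in a loop with a short timeout.  Each join() call still blocks
    # signal delivery on Windows, but only for up to `poll` seconds, so a
    # SIGINT handler in the main process runs within roughly `poll` seconds
    # instead of only after the full join timeout.
    while child.is_alive():
        child.join(timeout=poll)
    return child.exitcode

if __name__ == "__main__":
    mp.set_start_method("spawn")
    shutdown_event = mp.Event()
    child = mp.Process(target=worker, args=(shutdown_event,))
    child.start()
    shutdown_event.set()  # stand-in for the SIGINT handler setting the event
    print(f"Exit code: {supervise(child)}")  # Exit code: 0
```

This does not make join() itself interruptible; it only bounds how long the handler is delayed, which may or may not be acceptable depending on how responsive the shutdown needs to be.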

The issue with the code cited is not related to multiprocessing, at least not directly. But I have heard that on Windows the parent process may not receive the signal at all. Given the signal.signal(signal.SIGINT, signal.SIG_IGN) in your code, I suspect you were actually receiving the event in your child process; otherwise it does not make sense to me.


I tried joining with a smaller timeout than in the example code above, and the interrupt is handled shortly after the timeout, as before. Since then I’ve also found open issues related to this Windows-specific bug (e.g. Can’t gracefully ctrl+C multiprocessing pool on Python3 & Windows · Issue #82609 · python/cpython).

The signal does reach the parent process once the blocking call returns. I also tried adding a signal handler to the child process to see what happens, and it leads to the same behaviour: the handler gets called once the blocking event.wait() call times out.
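For reference, the child-side handler experiment might look something like this (a sketch; `make_sigint_handler` is a hypothetical helper name, and as noted above, on Windows the handler still only runs once the blocking wait returns):

```python
import multiprocessing as mp
import signal

def make_sigint_handler(shutdown_event):
    # Build a SIGINT handler that turns Ctrl+C into a shutdown request on
    # the shared event, instead of raising KeyboardInterrupt.
    def on_sigint(signum, frame):
        shutdown_event.set()
    return on_sigint

def func(shutdown_event):
    # Install the handler in the child instead of ignoring SIGINT outright.
    signal.signal(signal.SIGINT, make_sigint_handler(shutdown_event))
    print("Wait for event")
    is_set = shutdown_event.wait(timeout=10)
    print(f"Event outcome: {is_set}")
```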

I was referring to this line, which looked suspicious to me:

signal.signal(signal.SIGINT, signal.SIG_IGN)

Is the behavior the same without this line?


Without the signal.signal(signal.SIGINT, signal.SIG_IGN) line, a KeyboardInterrupt is raised in the child process, which causes the main process’s signal handler to be called shortly after. In this case the blocking event.wait() method is woken up by the interrupt, unlike process.join() in the main process.

Wait for event
Process Process-1:
Traceback (most recent call last):
  File "C:\Users\USER\AppData\Local\Programs\Python\Python313\Lib\multiprocessing\process.py", line 313, in _bootstrap
    self.run()
    ~~~~~~~~^^
  File "C:\Users\USER\AppData\Local\Programs\Python\Python313\Lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
    ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\USER\Documents\interrupt_example.py", line 8, in func
    is_set = shutdown_event.wait(timeout=10)
  File "C:\Users\USER\AppData\Local\Programs\Python\Python313\Lib\multiprocessing\synchronize.py", line 356, in wait
    self._cond.wait(timeout)
    ~~~~~~~~~~~~~~~^^^^^^^^^
  File "C:\Users\USER\AppData\Local\Programs\Python\Python313\Lib\multiprocessing\synchronize.py", line 268, in wait
    return self._wait_semaphore.acquire(True, timeout)
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^
KeyboardInterrupt
Main process interrupted after 1.605s
Exit code: 1

The reason I used that line is that I’d prefer not to interrupt the child process, instead shutting it down from the main process. It feels like that might not be possible (at least on Windows).
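If interrupting the child can’t be avoided, one possible compromise (a sketch, not something from the thread) is to let the child treat the KeyboardInterrupt itself as the shutdown request, so it still exits cleanly:

```python
import multiprocessing as mp

def func(shutdown_event):
    # Without SIG_IGN, Ctrl+C raises KeyboardInterrupt inside the blocking
    # wait().  Catching it lets the child exit cleanly (exit code 0)
    # instead of dying with a traceback (exit code 1).
    print("Wait for event")
    try:
        is_set = shutdown_event.wait(timeout=10)
    except KeyboardInterrupt:
        is_set = True  # treat Ctrl+C as an implicit shutdown request
    print(f"Event outcome: {is_set}")
    return is_set
```

The child still gets interrupted, but the interruption is confined to one well-defined place and the rest of its shutdown path runs normally.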

FYI, this is exactly how it works on Unix. Windows has some specifics, yeah.