Concerns regarding deprecation of "fork()" with alive threads


I’d like to initiate a discussion regarding the deprecation of the fork() usage with active background threads starting from Python 3.12. This deprecation warning was introduced in the following GitHub pull request:

I am the author of a library that makes use of threads internally. My library is designed to be compatible with multiprocessing regardless of the start method ("fork" or "spawn") selected by the user. I implemented my library to ensure full compatibility with the “fork” method by using Lock and os.register_at_fork() meticulously.

My project has worked well so far. Unfortunately, this new DeprecationWarning will make people worry about using my library in forked processes even though it is safe. Users may now encounter warnings, which is unacceptable to me as I aim for maximum usability in various scenarios. Losing compatibility with fork() would represent a significant regression.

I agree that mixing fork() and threads is a possible source of deadlock and we should discourage users from doing so. However, I think there are valid use cases, and I’d like to suggest a way for library authors to mark a thread as “enabled during fork” if they know what they’re doing.

A preliminary implementation could involve:

  • Introducing a mechanism to mark a Thread as “enabled during fork” before its execution.
  • Maintaining a private count of active threads marked as “enabled during fork”.
  • Modifying the warn_about_fork_with_threads() function to include an early exit without warning if the total number of threads equals the count of “enabled during fork” threads plus one.
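To make the proposal concrete, here is a rough pure-Python sketch of the counting idea. All names here are invented for illustration; the real warn_about_fork_with_threads() lives in C inside CPython and works differently:

```python
import threading

# Invented names: a sketch of "mark this thread as enabled during fork".
_fork_enabled_count = 0
_count_lock = threading.Lock()

def mark_fork_enabled(thread):
    """Mark a not-yet-started Thread as "enabled during fork"."""
    original_run = thread.run

    def run_counted():
        global _fork_enabled_count
        with _count_lock:
            _fork_enabled_count += 1
        try:
            original_run()
        finally:
            with _count_lock:
                _fork_enabled_count -= 1

    thread.run = run_counted
    return thread

def fork_warning_needed():
    """Early exit: warn only if there are threads *not* marked as enabled.
    The "+ 1" accounts for the thread that is calling fork() itself."""
    return threading.active_count() > _fork_enabled_count + 1
```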

I’m quite concerned by this and would love to see a possible solution. Let me know what you think we can consider doing.

Pinging @gpshead as he is the author of the pull request.

Thank you for your attention and consideration.


While that is a great thing for you to do, unfortunately on POSIX doing enough isn’t actually possible. It is impossible to write Python threads that are safe for use in an application that calls fork(), because the CPython runtime itself is by definition unsafe for use after fork(). No C API other than those listed as “async-signal-safe” in the signal-safety(7) Linux manual page and the related POSIX standard(s) can be presumed compatible with use in a forked child process. The CPython runtime requires far more than that.

“It seems to work well so far” is unfortunately what people have been living with for ages despite not accurately reflecting execution safety.

The new DeprecationWarning is intended to make developers realize that they have a potential problem to look into. Being a DeprecationWarning, it isn’t supposed to show up to end users of applications.

Q1: Is this showing up to end users of applications? Can you give us a reproducible example?

Q2: Would it be useful to improve the text of the DeprecationWarning coming from os.fork() to help developers who see it? Got suggestions?

Developers should respond by adjusting their use of multiprocessing or concurrent.futures to explicitly specify either the "forkserver" or "spawn" start method via a context. So while I understand how you feel… the message you don’t want to hear is that you are not technically “losing compatibility with fork()”, because, without knowing it, you never actually had it in the first place. :frowning: The warning is in no way intended to be seen as blaming you!
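For reference, explicitly selecting a start method via a context looks like the following minimal sketch ("spawn" also works on Windows, while "forkserver" is POSIX-only):

```python
import multiprocessing

def square(x):
    return x * x

if __name__ == "__main__":
    # Ask for a fork-free start method explicitly instead of relying on
    # the platform default (which is still "fork" on Linux before 3.14).
    ctx = multiprocessing.get_context("spawn")
    with ctx.Pool(processes=2) as pool:
        print(pool.map(square, [1, 2, 3]))  # [1, 4, 9]
```

Using a context object rather than the global multiprocessing.set_start_method() keeps the choice local to your library, so it cannot conflict with a different start method chosen by the application.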

The ultimate goal of the warning is to make Python developers whose applications are already in risky situations aware of that, so they can consider appropriate corrective coding, such as switching start methods ASAP if they use multiprocessing or concurrent.futures.

Q3: Would silencing this warning (and thus delaying developers’ awareness that they likely need to adapt their code to avoid the “fork” start method before the default multiprocessing start method is changed away from “fork”, per the deprecation process that has already started) help the world more than harm it?

Such code is already at risk and always has been. In some environments it already doesn’t work. This would be hiding a truthful warning from developers.

Unfortunately, there is no possible implementation of OS-based threading that could call a Python thread “safe” for use when something is forking. The CPython runtime internals will never be async-signal-safe for use after fork(). Thus any Python based thread is fundamentally unsafe at fork time. So adding APIs to mark Python threads as “okay for fork” vs “unknown” would be adding an API that cannot honestly make such guarantees because it doesn’t matter what the Python code design is, the very fact that it uses the Python VM in the thread at all is the problem. So no matter what we all wish were true, valid use cases aren’t possible.

Q4: So what’s the resolution here? The warning could be silenced until the default flip. BUT… that would preserve the hidden status quo of “your code has a potentially serious problem that you never realized”, which existing code has lived with for ages and only occasionally been tripped up by when the execution environment or system libraries change, requiring a systems-level debugging session to understand the likely deadlock. What people have been relying on as “seeming to work fine” already doesn’t work on various platform configurations; I expect that to increase over time.

Silencing an accurate warning is not without precedent. I originally had a much more verbose warning in place for any Linux user using the “fork” default context that we rapidly determined was too noisy, because most code just uses the default, and thus rolled it back.

In terms of deciding whether or not to remove this os.fork() warning in the presence of threads in 3.12.0, let’s let our release manager @thomas make that call.


P.S. I should’ve started my entire long reply off with “Thank you for testing with pre-releases! This kind of feedback is exactly why we want people to do so!” :slight_smile: :heart:


The Python tests are also affected. The multiprocessing tests and test_httpservers produce this warning. That is not good when we try to hunt stray warnings in tests, and since it is a deprecation warning, we cannot simply silence it. In the future it will become an error, and the tests will break.

What actions are expected from users of http.server when they see this warning? How can they solve the problem on their side?


Hi Greg, could you help add some more context to this? Signals are somewhat different from threads, so it’s not clear why being async-signal-safe is required for safe use of threads after a fork – well, at least not to me :slightly_smiling_face:

Since I would expect many other people to also not know, it may be good to have some more discussion and then perhaps add a summary in the “What’s new” file.

Signals actually interrupt the execution of other calls, and their handlers run in whatever context, thread, and stack (in most cases) happened to be live at that moment. The functions listed in the signal-safety man page are known to be safe to call because they are designed for this type of use. In particular, they are reentrant (a signal may have interrupted one of the APIs you want to use in the signal handler) and can be called on the same stack and in the same context without affecting an already-running call to the same function in the same thread.

Now, thread-safety is something different. Threads each use their own stack and so state kept on the stack or in the thread-local variables is not shared between threads. As a result, functions can be thread-safe, while not being async-signal safe. printf() is such a function.

Calling fork() inside a signal handler is also not necessarily async-signal-safe (it may end up calling an async-signal-unsafe function in one of the registered pthread_atfork() handlers). But I don’t think we’re talking about running multiprocessing from inside a signal handler :wink:

So I guess what I’m missing is the link between async-signal safe and having this as a requirement to make use of threads safe after calling fork() and perhaps you can help fill that gap, since the fork() man page isn’t clear on this. The OpenGroup fork() man page provides some more context, but only touches upon async-signal safety when it comes to running pthread_atfork() handlers, not in the general case, as implied by the Linux man page.

Now, in general, using fork() in a multi-threaded application will almost always cause problems one way or another (missing resource deallocation, I/O corruption, deadlocks, losing unfinished work, etc.), mostly due to the fact that only one thread survives the fork. As such, the warning is adequate and indeed useful, since a Python programmer may not be aware that e.g. some 3rd party module started background threads without the programmer knowing.



The posix page also mentions the general case (with the relevant part formatted as bold text):

  • A process shall be created with a single thread. If a multi-threaded process calls fork(), the new process shall contain a replica of the calling thread and its entire address space, possibly including the states of mutexes and other resources. Consequently, to avoid errors, the child process may only execute async-signal-safe operations until such time as one of the exec functions is called. When the application calls fork() from a signal handler and any of the fork handlers registered by pthread_atfork() calls a function that is not async-signal-safe, the behavior is undefined.

Thanks, @ronaldoussoren .

I wonder why these standards are not more explicit about why specifically only the async-signal safe functions are seen as being safe to call.

I guess the fear is that code in some other (now dead) thread may have introduced unfinished changes in global state, which could affect functions not specifically designed for out-of-order operation.

E.g. let’s say there was a printf() call in a thread that was interrupted by the fork(). Calling printf() in the single remaining thread could then result in data corruption in the buffers, or even in deadlocks due to mutexes still blocking access to the buffers.

That makes sense and bridges my gap in understanding why something that’s required for signal handlers now gets applied to a thread and fork context :slightly_smiling_face:


What doesn’t help is that different platform implementations can provide different guarantees. A libc could use pthread_atfork handlers to make printf safe when mixing threads and forks, even though this is not guaranteed by the spec. That can lead to code that works on some platforms but fails due to race conditions on others.



POSIX probably plays safe by using the most strict recommendation that’s guaranteed by the standard.

In essence: threads and fork don’t go well together; use one or the other.


POSIX documents the state of existing Unix implementations.

I’ve heard that the rule for adding something to POSIX is to show that three major Unix implementations do it the same way.

These days the Linux kernel API is arguably more important than POSIX.
After all, Windows is POSIX compliant!

I appreciate all your responses; they have helped me gain a better understanding of the rationale behind the deprecation. Thank you very much.

Initially, I believed that the combination of Lock and os.register_at_fork() was a safe solution, especially considering its early use by CPython to address issues related to multi-threading and forking: bpo-6721: Hold logging locks across fork().
However, it has become clear that this solution is challenging to implement correctly and can lead to subtle deadlocks (64aa6d2, 4c3da78, 99b00ef).
As a result, I concluded that the deprecation of multi-threading plus fork had occurred because it is so error-prone.

Now, I understand from this discussion that this workaround is actually not compliant with the latest POSIX standards (see pthread_atfork(3), pthread_atfork(3p), fork(2), fork(3p) and also this defect report). The problem is summarized in the z/OS documentation here:

The intended purpose of pthread_atfork() is to provide a mechanism for maintaining the consistency of mutex locks between parent and child processes. The handlers are expected to be straightforward programs, designed simply to manage the synchronization variables and must return to ensure that all registered handlers are called. Historically, the prepare handler acquired needed mutex locks, and the parent and child handlers released them. Unfortunately, this usage is not practical on the z/OS platform and is not guaranteed to be portable in the current UNIX standards. When the parent process is multi-threaded (invoked pthread_create() at least once), the child process can only safely call async-signal-safe functions before it invokes an exec() family function. This restriction was added to the POSIX.1 standard in 1996. Because functions such as pthread_mutex_lock() and pthread_mutex_unlock() are not async-signal-safe, unpredictable results may occur if they are used in a child handler.

While multi-threading with a “lock protected fork” may be viable on some operating systems, it cannot be relied upon in the general case.

The warning may appear to end users in at least two scenarios:

  • When they launch their application with warnings enabled (e.g., python -W default).
  • When they execute the test suite of my library (since I’m unit-testing the multi-threading + fork case, and pytest captures the warnings).

It’s true that in the most common case, they won’t see anything.
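The visibility difference is easy to demonstrate. This sketch (the module name mylib is a throwaway created just for the demo) writes a tiny library that emits a DeprecationWarning, then runs it with and without -W default:

```python
import os
import subprocess
import sys
import tempfile

# DeprecationWarning is ignored by default unless it originates in
# __main__; enabling warnings (python -W default, pytest filters, -X dev)
# makes it visible.
with tempfile.TemporaryDirectory() as tmp:
    with open(os.path.join(tmp, "mylib.py"), "w") as f:
        f.write(
            "import warnings\n"
            "def f():\n"
            "    warnings.warn('fork + threads', DeprecationWarning)\n"
        )
    env = dict(os.environ, PYTHONPATH=tmp)
    env.pop("PYTHONWARNINGS", None)  # avoid interference from the outer env
    env.pop("PYTHONDEVMODE", None)
    prog = "import mylib; mylib.f()"
    quiet = subprocess.run([sys.executable, "-c", prog],
                           capture_output=True, text=True, env=env)
    loud = subprocess.run([sys.executable, "-W", "default", "-c", prog],
                          capture_output=True, text=True, env=env)
    print("DeprecationWarning" in quiet.stderr)  # False: hidden by default
    print("DeprecationWarning" in loud.stderr)   # True: shown with -W default
```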

For example, the very last snippet in the multiprocessing documentation executed with -Wd would display a warning on Linux. It can be simplified as follows:

import multiprocessing

def worker(queue):
    while not queue.empty():
        item = queue.get()

if __name__ == "__main__":
    queue = multiprocessing.Queue()
    data = [1, 2, 3]

    for val in data:
        queue.put(val)

    process = multiprocessing.Process(target=worker, args=(queue,))
    process.start()
    process.join()
Every usage of multiprocessing.Queue() with the "fork" start method now needs careful consideration. On Linux, any program that calls put() before starting a process will trigger the warning internally, because put() starts a background feeder thread. I suppose this applies to other parts of the standard library as well, such as logging.QueueHandler for example.
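The feeder thread is easy to observe. The thread name checked below matches what CPython uses today, but it is an implementation detail, not a documented API:

```python
import multiprocessing
import threading

queue = multiprocessing.Queue()
before = {t.name for t in threading.enumerate()}
queue.put(1)   # the first put() lazily starts the feeder thread
queue.get()    # wait until the item has made it through the feeder
after = {t.name for t in threading.enumerate()}
print(sorted(after - before))  # typically ['QueueFeederThread']
```

Once that feeder thread exists, a subsequent os.fork() on a 3.12 interpreter will see a multi-threaded process and emit the DeprecationWarning.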

I’m not entirely sure if this idea is both useful and doable, but here’s what I’m thinking off the bat:

  1. Suggesting to use the "spawn" start method instead of "fork" for multi-threaded programs.
  2. Including the name of one of the background threads (e.g., "QueueFeederThread"), which is especially important if it originates from a third-party library. The user may not be directly using threading, which could make it challenging for them to pinpoint the source of the warning.

Additionally, it’s important to emphasize in the documentation that "fork" must not be used in a multi-threaded application. Currently, there’s only a faint warning: “Note that safely forking a multithreaded process is problematic.” We could explain why this combination isn’t a good idea and make it crystal clear that it’s likely incorrect and definitely not portable.

I guess no… :slight_smile:

Originally, my objective was to suppress the warning in what I believed to be a secure scenario, not silencing it globally. I concur that it’s crucial to promptly inform users about the risks associated with "fork" and initiate a transition to "spawn".

The fact that I’m here today serves as evidence of its effectiveness.

If we decide that combining multi-threading and forking should be universally disallowed, regardless of whether someone chooses to do so due to implicit OS support or any other reason, it logically follows that we should not offer an option to disable the warning.

The initial intent of flagging threads was to grant users and library authors the flexibility to make their own choice, along with the corresponding responsibilities and safety considerations. It’s not clear whether it’s “not safe at all” or “not safe on some OS”. It felt to me as if we were adding a quite strict rule for a problem that in practice can actually be worked around most of the time.

Still, I won’t deny that offering a way to break the standard’s recommendations is a bad idea. Therefore, the solution for library authors is surely to implement the feature differently or drop support for forking.


Ronald’s replies linked to the right place and explained things well, thanks!

“Async-signal-safe” is unfortunate naming: the term applies to post-fork() processes just as much as it does to code running within a C signal handler. Signal handlers came long before threads, which is when the term was defined; when threading was bolted on later, it made sense to apply the same set of restrictions post-fork for similar concurrency reasons, but the name “async-signal-safe” stuck.

Yes, it’s about global state - in particular locks, though any shared resource or process-stateful thing can have issues. memory allocators that involve locks are a frequent source of threading+fork deadlocks (ex: LD_PRELOAD tcmalloc to use a higher performance thread aware memory allocator instead of glibc’s malloc - or build and link your code against it rather than libc’s malloc), as are other platform libraries on some POSIXy OSes. Some system C libraries go to pains to try and keep more of their APIs lock-free and “safe” for use in the post-fork environment beyond what POSIX requires (glibc in particular) but that is less common and not really guaranteed by anything, thus it cannot be relied upon to be true nor is there a good way to detect when it is true.

macOS has been going this route with many system libraries already; that is why its default multiprocessing start method was changed away from “fork” several releases ago, because the default platform implementation already forced the issue upon users.

glibc being the norm on mainstream Linux distros is what has allowed us to use a normal deprecation cycle to change the default multiprocessing start method there.

As for whether or not this DeprecationWarning ever becomes an actual error, that is not decided. We wanted to surface the existing issue to developers to increase awareness of the design problem. It isn’t Python that imposes the restriction here, but the underlying platform. It is ideally an educational warning.

Currently the deprecation message points to the place where os.fork() is used. But some parts of the stdlib use it indirectly, so users of those APIs are left with a deprecation message that neither helps them find the place in their code that should be modified nor provides any suggestion for fixing the issue. So I think that we should do the following:

  1. For every feature which uses os.fork() indirectly (or can use it in some circumstances), its limitations in a multithreaded environment should be documented.
  2. The deprecation warning should “pop up” on the surface of the stdlib. If pty.fork() uses os.fork(), the deprecation warning should point to the line in the user code that uses pty.fork(), not to a line inside the pty module. And if some other stdlib function uses pty.fork(), the deprecation warning should point to the line in the user code that uses that function. If socketserver uses os.fork(), the user should get a deprecation warning when they create a forking server in a multithreaded environment (threads can also be started after the creation of the server, so it is complicated).
  3. All our tests should be warning-free. That means that when the warnings are turned into errors, only an optional part of the tests will be removed, and the remaining tests should be enough to test the remaining functions.
  4. If possible, there should be a way to specify that another method should be used instead of os.fork(), and to not use os.fork() by default if it is unsafe. Or use the alternatives unconditionally if this has no performance cost.

I’m afraid that there is too little time and too much work to do this in 3.12. I propose to revert the deprecation in 3.12 and start implementing the above plan for 3.13.


I understand the pain but I don’t think it is possible to surface such a warning at the ideal place all of the time. This warning serves as an indication of the fundamental low level problem. The why behind that and which code deserves “blame” for choosing something that happens to use os.fork and thus what workarounds or solutions are needed is rarely so simple and clear cut.

I don’t see that high bar of where-the-warning-shows-up criteria for enabling the warning ever being possible to meet.

The list of things in the stdlib that use os.fork() is very small:

  • os.fork() - blamed directly on use if threaded.
  • os.forkpty() - blamed directly on use if threaded.
  • socketserver.ForkingMixIn - ForkingMixIn.process_request’s os.fork call can be blamed. TODO: We can update the documentation to mention or link to the fork+threading problem.
  • http.server.CGIHTTPRequestHandler - CGIHTTPRequestHandler.run_cgi, called from its do_POST method, can be blamed. But as far as I can tell, nobody uses this ancient thing, so I’m proposing we just deprecate CGIHTTPRequestHandler like we already did with the cgi and cgitb modules.
  • multiprocessing and concurrent.futures.ProcessPool when using the multiprocessing “fork” start method (posix platform default start method until 3.14).
  • pty.fork() - The pty.fork() function’s os.forkpty or os.fork call can be blamed. Those seem obvious enough already. TODO: We can update documentation to mention this.
  • Various places in the CPython Lib/test/ tests do use os.fork() - but these are outside of the stdlib - they are internal only for us and we can deal with any of those that actually trigger a warning as we see fit; I doubt most of them are a problem. It remains a TODO for us to remediate those that remain in test code internally.

In general, adding a wrapper to catch and re-warn at a different stacklevel would be boilerplate code that slows everything down for normal non-warning use. I.e., we could take that approach, but it isn’t code I’d want.

3.12 supports something slightly better than “stacklevel=” in our warnings framework with the new warn(skip_file_prefixes=) feature. That lets code give a warning a more logical source location to show up at. It is only usable via a Python-level API call, not from any of the PyErr_Warn* APIs; making use of it from C code is thus pretty messy, though it might be usable to provide a stdlib API surface for where a warning appears (i.e., blaming the caller of pty.fork instead of the call to os.fork might be feasible that way?).
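For illustration, the Python-level API looks roughly like this. This is a sketch assuming Python 3.12+, and the library directory and function name below are hypothetical:

```python
import warnings

# Hypothetical library code. With skip_file_prefixes=, the warning is
# attributed to the first stack frame whose file is NOT under the given
# prefix, instead of relying on a hand-maintained stacklevel= count.
_LIB_DIR = "/usr/lib/python3.12/site-packages/mylib"  # hypothetical install dir

def fork_helper():
    warnings.warn(
        "this helper forks; avoid calling it with threads running",
        DeprecationWarning,
        skip_file_prefixes=(_LIB_DIR,),
    )
```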

I do not expect a skip_file_prefixes= warn call could meaningfully handle the socketserver.ForkingMixIn use case. There you really want to store the location, outside of the standard library, where a derived class instance was constructed (or even where the class was defined), and issue a warning blaming that long-ago spot from within the much later ForkingMixIn.process_request code that calls os.fork() and triggers the warning if threads exist. Nothing today can warn about some other code at a distance like that. (And it’d be a pretty esoteric feature request…)

There’s also this relevant issue: Support full stack trace extraction in warnings. · Issue #87693 · python/cpython · GitHub - warnings being focused on a single stack frame often misses the point and hides their utility. Why are we so single frame focused? What people often need to see is a stack trace so they can follow that, like a traceback, and identify the actual entry point where a change might make sense. Without having to rerun code with some unique setup where one specific warning is selected to turn into a raised exception.

To make people more aware, I think we ought to have this highlighted in the “What’s new” document and perhaps a section in the main documentation.

I’d like to start writing up something to that effect. Where would be the best place to have this: “What’s new”, next to os.fork() in a separate section, or in both places?


One concrete problem with “do stuff after fork() and hope that it works fine” is the problem of locks which are in an undefined state after fork(): see

Proposed documentation updates: gh-100228: Document the os.fork threads DeprecationWarning. by gpshead · Pull Request #109767 · python/cpython · GitHub

Thanks for writing this up. Please see my comments on the ticket.


I recall hitting a bunch of odd issues with fork when running Apache Airflow on macOS.

It all comes down to libraries not being safe to load for the first time post-fork.

Makes me wonder if there should be a new POSIX (or other) standard of some sort to natively allow fork and threads to be mixed.

Of course that’s not directly in Python’s scope, though folks do have cases to want this to work somehow.


It’s not implemented in operating systems because it is really hard to implement it. It’s a hard problem.

For example, if you have a lock and you fork, the new process has a single thread; all other threads are gone. What should be the state of the lock? The kernel doesn’t know about the lock. The libc doesn’t track locks. The answer is: the lock is in an undefined state. Well, that’s just one example. Anything which is “stateful” can have the same issue. Only a minority of objects and structures are “stateless”: mostly things managed by the kernel, not user space. For example, variables allocated on the stack are not cleaned up properly on fork(); nothing is cleaned up on fork(). Everything is manual, and you have to go the hard way of os.register_at_fork() :frowning: Moreover, os.register_at_fork() callbacks cause their own issues. For example, they can delay when a module is cleared at Python exit (during the Python finalization).

Similar problem: pthread_exit & PyThread_exit_thread from PyEval_RestoreThread etc. are harmful.

I don’t think that anyone wants to create a standard based on these complicated problems. For example, I would prefer the opposite: remove daemon threads from Python to fix the PyThread_exit_thread() issue.