Getting Rid of Daemon Threads

I have never said otherwise. I was simply delineating a potential answer to the problem.

They largely work, except when they don’t and crash miserably at shutdown (or lead to unexpected behavior).

That is not an excuse for hand-waving the problem away. Also, let’s not discuss Java’s problems here.

1 Like

The daemon thread exception handler should check if the interpreter is shutting down. If it is, just swallow the exception and let the thread exit. That’s mostly what is expected based on the docs. The surprising behavior is when you see NameError: sys.stdout not defined or similar.
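
Roughly, something like this can be sketched from user code today via threading.excepthook (a rough illustration only, not what the interpreter’s built-in handling does):

import sys
import threading

def quiet_daemon_excepthook(args):
    # Swallow exceptions from daemon threads once the interpreter is
    # shutting down; report everything else as usual.
    if sys.is_finalizing() and args.thread is not None and args.thread.daemon:
        return
    threading.__excepthook__(args)

threading.excepthook = quiet_daemon_excepthook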

2 Likes

Forcing people to add complicated C/C++/Rust code to pure-Python projects does not seem like a solution to me.

GitHub found 179k potential usages: Code search results · GitHub

Granted, there are other possible replacements that do not involve C or ctypes, but it isn’t a decision that should be taken lightly or quickly.

1 Like

Fundamentally, what we need here is a model for cancellation that can be applied at the OS level. It’ll take time for that to propagate to everyone who needs to know about it, but the main reason we can’t shut down daemon threads is that we can’t interrupt them.

If “everything” can be made interruptible/cancellable, then we have a chance of being able to replace daemon threads with regular threads and wait for them to clean up before shutting down.

3 Likes

Of course. Hence the clause you omitted: “And then a little later: rude termination” (which is the current situation already, but applied before the runtime dismantlement). It’s no worse than the current situation, and it offers blocking threads a non-polling avenue to notice shutdown (with exceptions like the one you cite).

This procedure arranges for threads to cease operation before the runtime data structures are shut down, so that they do not use those resources while they are being dismantled.

It’s not a cure-all, but it does offer a clean shutdown approach for pure-Python threads.

Thanks for all the feedback, folks. The many examples were very helpful! :heart:

Given all the feedback, it definitely seems like there isn’t a good alternative to daemon threads. I’ll probably drop most of this at this point.

Yeah, at the least I’d like to make them sound less convenient (to folks that don’t need them).

FWIW, my motivation isn’t necessarily about subinterpreters. Runtime (ergo per-interpreter) initialization and finalization are a bit of a mess, though much better than they used to be thanks to the hard work of many people.

My main interest in this area is to make init and fini easier to maintain and extend, with a discrete, well-encapsulated, component-oriented lifecycle and order of operations. This would not only benefit core development, but also improve our embedding story (which would be a win for the community).

One such component would be “threading”. Daemon threads are probably the most obvious case where the encapsulation I’m aiming for is currently deeply violated. That’s a big part of why the topic matters to me. I’m also tired of the steady stream of related bugs over the years; it feels like at least one a month. Perhaps we’ve finally solved the story for daemon threads with the current state of take_gil(), but my instinct is still “get rid of daemon threads if possible”.

Yeah, we don’t manage these threads very well. Mostly this pertains to the PyGILState_* API. FWIW, improving our management of these threads is on my near-term TODO list.

FWIW, interpreters created through the PEP 734 API already do not support daemon threads at all. It’s one of the options in PyInterpreterConfig, and the “zero” value for that option is “not allowed”. They’re also disabled by the “default” config (_PyInterpreterConfig_INIT) used with Py_NewInterpreterFromConfig().

I was hoping that would be sufficient, but given the examples in this thread, we may need to optionally loosen that restriction, or at least provide a better solution.

That’s a possibility I suppose. I’m personally reticent to go down the route of adding more complexity to finalization, but ultimately it might be unavoidable if there aren’t good alternatives to daemon threads.

Also a reasonable possibility.

Good point.

a big +1

6 Likes

Others have said most of what I want to say.

At work, we end up using daemon threads quite a bit. Code search hits aren’t really the right metric for load-bearing-ness, but in my main codebase:

λ rg -t py 'daemon=True' | wc -l
     269

Clarify Docs About Daemon Threads · Issue #125857 · python/cpython · GitHub mentions ctypes.pythonapi.PyThreadState_SetAsyncExc as the alternative for the most common use case we have. I’d assumed we didn’t have a wrapper in threading because we didn’t feel comfortable with people using it.
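
For reference, the usual shape of that ctypes recipe (a sketch, with a helper name of my own; it only takes effect while the target thread is running Python bytecode, not while it’s blocked in C code):

import ctypes
import threading

def async_raise(thread: threading.Thread, exc_type=SystemExit):
    # Ask the interpreter to raise exc_type in the target thread the next
    # time it executes Python bytecode; threads blocked in C calls are not
    # interrupted, which is one reason this isn't a full replacement.
    tid = ctypes.c_ulong(thread.ident)
    res = ctypes.pythonapi.PyThreadState_SetAsyncExc(
        tid, ctypes.py_object(exc_type))
    if res == 0:
        raise ValueError("invalid thread id")
    if res > 1:
        # More than one thread state was affected; undo and bail out.
        ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, None)
        raise SystemError("PyThreadState_SetAsyncExc failed")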

1 Like

Clean shutdown is required by applications that embed Python and start and shut down the interpreter repeatedly. In such applications, can a developer prohibit daemon threads, the way subinterpreters do?

On the other hand, most Python applications don’t require a 100% clean shutdown. Killing the process after calling the atexit functions is enough for them. Can we keep daemon threads available for such applications? Would daemon threads be harmful to such an application?

3 Likes

The use case in a project I’m involved in is that we have helper threads spawned by our custom logging handler. They are generally shut down gracefully as part of the execution of logging.shutdown(), which is done within the logging module’s atexit handler. According to the logging.shutdown() function’s docs, that handler is registered relatively early (so it can be expected to be executed relatively late):

When the logging module is imported, it registers this function as an exit handler (see atexit), so normally there’s no need to do that manually.

The reason we require those helper threads to have daemon=True is that we need them to survive until that atexit handler is executed. And, AFAIK, all non-daemon threads (at least the threading.Thread-created ones) are joined from the main thread before any atexit handlers are executed.
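
Roughly, the pattern looks like this (a simplified sketch with made-up names, not our actual handler code):

import atexit
import queue
import threading

log_queue: queue.Queue = queue.Queue()

def _worker():
    # Drain records until the sentinel arrives.
    while (item := log_queue.get()) is not None:
        ...  # emit the record somewhere

# daemon=True is needed: non-daemon threads are joined *before* atexit
# handlers run, so an atexit-based shutdown request would come too late.
_t = threading.Thread(target=_worker, daemon=True)
_t.start()

@atexit.register
def _shutdown():
    log_queue.put(None)  # ask the worker to finish
    _t.join()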


PS [EDIT] I also recall other cases where atexit-registered handlers were handy in the context of shutting down and joining threads; in all such cases daemon=True was necessary for the same reason.

Note: even in cases in which we fully controlled the registration of our atexit handlers, we still needed to use atexit.register() rather than threading._register_atexit() – given that the latter is just a CPython internal (not a public API with stability guarantees).

2 Likes

PS When we are talking here about daemon threads, do we also mean _thread.start_new_thread()-spawned ones? (Considering that they are daemonic in the sense of how they behave… After all, any threading.Thread-created threads are based on them anyway.)

1 Like

Good point. To provide the benefits I was originally looking for, “daemon thread” would have to include any thread not created through threading.Thread. That would definitely include those created using _thread.start_new_thread(), along with every one added using PyGILState_Ensure() and PyThreadState_New()/PyThreadState_Swap(). Disallowing any of those cases would be more problematic than just getting rid of Thread(daemon=True), which is part of why I realized it isn’t worth it.

1 Like

We can still track them and raise warnings at the end, yeah?

We often could, though I’m not sure those warnings would be actionably helpful, as they’re about things that may not be wholly under the control of the application/test developer. I really like Sam’s point that Python is the odd duck in terms of non-daemon threads even being a concept.

As far as I can tell, there will never be the OS platform APIs we’d want for clean thread interruption on POSIX. There may technically be ways to do it on some platforms, but “clean”, “maintainable”, and “robust in the face of arbitrary C/C++/Rust code in the process” are probably at odds with one another; everyone gets to fight over control of global state.

… anyways, onwards with the gh-87135: Hang non-main threads that attempt to acquire the GIL during finalization by jbms · Pull Request #105805 · python/cpython · GitHub bugfix backports for me. Hanging condemned threads being better than randomly crashing.

4 Likes

The note about atexit running after threads are joined (which is true, I double-checked the finalisation code) makes me wonder if we could improve things by switching to a two-phase shutdown process.

Specifically, add a new atexit.register_early callback list that gets executed before the thread join. Callbacks registered that way would be able to tell non-daemon threads to shut down, unlike regular exit handlers.

3 Likes

AnyIO just recently made a change to always use daemon threads in its “blocking portal” API. The reason for this is to enable “loitering event loops” - global async event loops running in the background until the process exits. This enables a technique for implementing synchronous wrappers for asynchronous APIs. Removing support for daemon threads would therefore be absolutely catastrophic to my use cases unless a new method for shutting down non-daemonic threads at exit is introduced.

4 Likes

wrt anyio blocking portal and similar:

The background I/O event loop case doesn’t seem like something that benefits from being a daemon thread (alternative method with a sync context manager that shuts down the background thread’s event loop and joins the thread at exit here).
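
Roughly, such an alternative could look like this (a minimal sketch with names of my own, not AnyIO’s actual API): the loop runs on a regular thread and is stopped and joined when the context exits.

import asyncio
import threading
from contextlib import contextmanager

@contextmanager
def background_loop():
    # Run an event loop on a regular (non-daemon) thread.
    loop = asyncio.new_event_loop()
    thread = threading.Thread(target=loop.run_forever)
    thread.start()
    try:
        # Submit work with asyncio.run_coroutine_threadsafe(coro, loop).
        yield loop
    finally:
        # Shut the loop down deterministically instead of relying on
        # daemon-thread teardown at interpreter exit.
        loop.call_soon_threadsafe(loop.stop)
        thread.join()
        loop.close()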

With that said, while I see some benefit to preventing daemon threads for those whose use cases are sensitive to the issues they pose, the structured solutions available require significantly more forethought in code structure for minimal gain to most people. They also essentially (though not strictly) require that extension code have a way to be signaled to stop, or otherwise be designed as interruptible. This is not a small lift in all cases.

I’m technically in favor of getting rid of all daemon threads, but there’s too much existing that this is likely to impact with large changes required to fix it.

I’d be relatively in favor of preventing daemon threads in subinterpreters altogether.
I’d be entirely in favor of the ability to configure a Python install to be built without support for daemon threads (see the embedded use cases).

I created a separate Ideas thread to discuss improving the ergonomics of running non-daemon background threads: Improving support for non-daemon background threads

1 Like

Currently, daemon threads can outright segfault if one of them decides to create a subinterpreter:

import threading
import _interpreters

def main():
    _interpreters.create()

threading.Thread(target=main, daemon=True).start()

Disallowing daemon threads only inside subinterpreters won’t fix this problem. Really, what we need is a better way for daemon threads to communicate with the main thread and find the right time to shut themselves down. The situation right now is that finalizing the main interpreter prevents future re-acquiring of the GIL, which is fine normally, but since subinterpreters can have their own GIL, this doesn’t end up blocking them and they crash. (I’m not sure how free-threading addresses this, but maybe it’s something that we could borrow for the default build.)

1 Like

I believe `atexit.register_early` pre-finalization callback API · Issue #126168 · python/cpython · GitHub would be enough to let daemon threads opt in to getting a shutdown request before the interpreter starts getting finalised.

If folks are willing to rely on a nominally-private-but-also-stable-for-years API, threading._register_atexit already provides that behaviour (at least as far back as 3.9).
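
A minimal sketch of that in use (variable names are mine): the callback runs before threading joins the remaining non-daemon threads, so the stop request arrives in time.

import threading

stop = threading.Event()

def worker():
    while not stop.wait(timeout=1.0):
        ...  # periodic background work on a regular, non-daemon thread

t = threading.Thread(target=worker)
t.start()

def shutdown():
    # Private but long-stable: runs before threading's shutdown join,
    # so the worker is told to stop while the join can still succeed.
    stop.set()
    t.join()

threading._register_atexit(shutdown)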

This seems incredibly disruptive. The concept of a daemon thread is universal and appears in many languages. Daemon threads are used in the standard library, in PyPI modules, and in production code. A GitHub code search shows many hits. Expect uses to increase when free-threading becomes standard.

1 Like