PEP 684: A Per-Interpreter GIL

Would Cython defer to CPython doing the multiple-interpreters check?

If there were an official mechanism to indicate (in)compatibility, then we’d likely use it.

Is there something inherent to Cython that makes it incompatible with multiple interpreters?

I think this is what I already answered above. However: I’m currently not sure whether we can support the entire current Cython feature-set with multiple interpreters. I’m mainly thinking about C functions that can access Python globals. (I’m also not sure this is something that Python can help with.) So even in the future, when we support it properly, it may be that some Cython modules will never be able to support multiple interpreters. But that’s very much a future problem…

Perhaps it is an error (in sqlite) to define a function in one interpreter and name it in a query submitted from another. I think there are environments where a new platform thread might invoke a callback, but I’m influenced by Java rather than C in that conviction.

Good point. The interpreter has to be identifiable in a thread-safe way before PyGILState_Ensure() completes. I wonder if only case-specific solutions can exist, in this case in the callback_context.
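
To illustrate the kind of case-specific solution I mean: a sketch, assuming the extension captures the interpreter while registering the callback and stashes it in the callback context (the struct and function names here are made up):

    #include "Python.h"

    /* Hypothetical callback context: filled in while holding the GIL of the
     * interpreter that registered the callback. */
    typedef struct {
        PyInterpreterState *interp;   /* which interpreter to re-enter */
        PyObject *func;               /* the Python callable (strong ref) */
    } callback_ctx;

    /* Invoked later, possibly on a thread the interpreter has never seen.
     * Instead of PyGILState_Ensure() (which is tied to the main interpreter),
     * create a fresh thread state for the captured interpreter. */
    static void
    invoke_callback(callback_ctx *ctx)
    {
        PyThreadState *tstate = PyThreadState_New(ctx->interp);
        PyEval_RestoreThread(tstate);      /* takes that interpreter's GIL */

        PyObject *res = PyObject_CallNoArgs(ctx->func);
        Py_XDECREF(res);                   /* error handling elided */

        PyThreadState_Clear(tstate);
        PyThreadState_DeleteCurrent();     /* releases the GIL, frees tstate */
    }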

At this point I’m strongly leaning toward adding a moduledef slot for “supports use in multiple interpreters” (i.e. an opt-in flag). However, I don’t see the point of a distinct “supports per-interpreter GIL” slot since there doesn’t seem to be much interest for one without the other currently.

Contrary to what PEP 489 says, the default would be “does not support use in multiple interpreters”. Ideally the opposite would be the default, but it seems like there are enough extensions out there that would be a problem, even among those that implement multi-phase init.

That said, I expect we could switch the default at some point in the future. With that in mind, it would make sense to add an explicit “does not support use in multiple interpreters” moduledef slot now (matching the current default).
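
To make that concrete, a module using multi-phase init might opt in with something like the sketch below. The slot name and value are made up for illustration; the actual spelling would be decided as part of the PEP and its implementation.

    #include "Python.h"

    static int
    mymod_exec(PyObject *module)
    {
        /* Per-module (and therefore per-interpreter) initialization goes here. */
        return 0;
    }

    static PyModuleDef_Slot mymod_slots[] = {
        {Py_mod_exec, mymod_exec},
        /* Made-up opt-in slot: "this module supports use in multiple
         * interpreters, including under a per-interpreter GIL". */
        {Py_mod_supports_multiple_interpreters, (void *)1},
        {0, NULL},
    };

    static struct PyModuleDef mymod_def = {
        PyModuleDef_HEAD_INIT,
        .m_name = "mymod",
        .m_size = 0,               /* no process-global module state */
        .m_slots = mymod_slots,
    };

    PyMODINIT_FUNC
    PyInit_mymod(void)
    {
        return PyModuleDef_Init(&mymod_def);
    }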

I’ve updated PEP 684 after the last set of feedback. You can see the changes in https://github.com/python/peps/pull/2807.

The PEP text is still at https://peps.python.org/pep-0684/.

Significant changes:

  • settled on keeping the allocators global but requiring that they all be thread-safe
  • the state of the existing “small block” allocator will be moved to PyInterpreterState
  • dropped references to mimalloc
  • simplified the C-API changes
  • clarified the situation with incompatible extension modules
  • proposed that extensions always opt in to per-interpreter GIL support with a new PyModuleDef slot (at least until we have enough evidence that multi-phase init is sufficient)
  • expanded “How to Teach This”

For me the most critical things to settle are:

  • Are we okay to require that the “mem” and “object” allocators be thread-safe, whereas currently we say they can rely on the GIL?
  • Can we avoid making extensions opt in to supporting per-interpreter GIL (if they already implement multi-phase init)?

Open questions (from the PEP):

  • Are we okay to require “mem” and “object” allocators to be thread-safe?
  • How would a per-interpreter tracemalloc module relate to global allocators?
  • Would the faulthandler module be limited to the main interpreter (like the signal module) or would we leak that global state between interpreters (protected by a granular lock)?
  • How likely is it that a module works under multiple interpreters (isolation) but doesn’t work under a per-interpreter GIL?
  • If it is likely enough, what can we do to help extension maintainers mitigate the problem and enjoy use under a per-interpreter GIL?
  • What would be a better (scarier-sounding) name for importlib.util.allow_all_extensions?
2 Likes

9 posts were split to a new topic: How to share module state among multiple instances of an extension module?

My vote goes to no: make 3.12 safe, then remove the limitations.
For example, PyMem_SetAllocator with PYMEM_DOMAIN_MEM or PYMEM_DOMAIN_OBJ could block creating independent GILs, and a new PyMem_SetGlobalAllocator could be added.

And, I guess setting memory allocators should be blocked if multiple GILs exist? Apparently, after Python is initialized, PyMem_SetAllocator should only be used for hooks that wrap the current allocator (is that right @vstinner?), but creating such a hook using PyMem_GetAllocator gets you a race condition. IMO the best thing the initial implementation can do is to fail, and leave a better solution for later.
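
For context, the hook pattern I mean looks roughly like this for PYMEM_DOMAIN_MEM (just a sketch; the hook here only delegates to the wrapped allocator):

    #include "Python.h"

    /* The allocator being wrapped (filled in by PyMem_GetAllocator()). */
    static PyMemAllocatorEx orig_mem;

    static void *
    hook_malloc(void *ctx, size_t size)
    {
        PyMemAllocatorEx *orig = (PyMemAllocatorEx *)ctx;
        /* ...record/trace the allocation here... */
        return orig->malloc(orig->ctx, size);
    }

    static void *
    hook_calloc(void *ctx, size_t nelem, size_t elsize)
    {
        PyMemAllocatorEx *orig = (PyMemAllocatorEx *)ctx;
        return orig->calloc(orig->ctx, nelem, elsize);
    }

    static void *
    hook_realloc(void *ctx, void *ptr, size_t new_size)
    {
        PyMemAllocatorEx *orig = (PyMemAllocatorEx *)ctx;
        return orig->realloc(orig->ctx, ptr, new_size);
    }

    static void
    hook_free(void *ctx, void *ptr)
    {
        PyMemAllocatorEx *orig = (PyMemAllocatorEx *)ctx;
        orig->free(orig->ctx, ptr);
    }

    static void
    install_mem_hook(void)
    {
        PyMemAllocatorEx hook = {
            .ctx = &orig_mem,
            .malloc = hook_malloc,
            .calloc = hook_calloc,
            .realloc = hook_realloc,
            .free = hook_free,
        };
        /* The race: another thread could install its own allocator
         * between these two calls. */
        PyMem_GetAllocator(PYMEM_DOMAIN_MEM, &orig_mem);
        PyMem_SetAllocator(PYMEM_DOMAIN_MEM, &hook);
    }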

A wrinkle is that PyMem_SetAllocator has no way to signal failure – it silently ignores errors. Guess it predates PyStatus?

IMO, the solution is to not opt in for now. If a synchronization/introspection API is missing, let’s add it after the PEP is in place. (IMO there are many issues in this area – that’s why I’m trying to convince Eric to make the initial implementation safe but limited.)

2 Likes

Agreed. The PEP shouldn’t need more than that.

That said, a thread-safety restriction on the allocators is the simplest way forward for a safe 3.12 (under a per-interpreter GIL). Or were you talking only about the constraint on extension modules?

Do you mean if someone sets a custom mem/object allocator then subinterpreters with their own GIL should not be allowed? That is reasonable, if we don’t have enough information to conclude that existing custom allocators (used with PyMem_SetAllocator()) are thread-safe.

What would this do?

Yeah, that’s a race we’d have to resolve. However, rather than disallowing it, I’d expect a solution with a granular global lock, like we have for the interpreters list.

Right. We’d have to do something like leave the current allocator in place and return. Then you’d have to call PyMem_GetAllocator() afterward to see if your allocator is set. A function that returned a result could be helpful.

Regardless, it would make more sense to me if we had a separate API for wrapping the existing allocator after init (e.g. PyMem_WrapAllocator()). Then PyMem_SetAllocator() would apply only to the actual allocator and only be allowed before runtime init. However, that is definitely not part of this PEP (nor necessary for it).
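
Sketched as a prototype, such an API might look something like this (entirely hypothetical; not something this PEP proposes):

    /* Hypothetical, not an existing or proposed CPython API: atomically wrap
     * the current allocator and report failure instead of silently ignoring it. */
    int PyMem_WrapAllocator(PyMemAllocatorDomain domain,
                            PyMemAllocatorEx *wrapper,
                            PyMemAllocatorEx *old_allocator);
    /* Returns 0 on success; -1 if wrapping is not currently allowed
     * (e.g. because it would race with interpreters running under their own GIL). */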

Agreed.

I was talking about both :)

Yes, that seems like the easiest safe way forward.

Same as PyMem_SetAllocator, but allow subinterpreters with their own GILs – i.e. that allocator would be assumed to be thread-safe.
(Yes, it needs a better name.)

Yes. It’s out of scope for this PEP, but:

We probably should expose an API for user-defined granular global locks. AFAIK we don’t have a good way to “allocate lock if not already allocated” that would work with multiple GILs.
Such a lock would be useful for one-per-process modules (the isolation opt-out), as well as for Marc-André’s use case. IMO, this should be addressed relatively quickly, so people don’t start writing extensions that are only usable in the main interpreter. (I see relying on a single main interpreter as technical debt. Eventually I’d like to allow a library to call Py_Initialize() without caring whether there’s already an interpreter around. The concept of a main interpreter complicates that, but if it’s contained in the core, it should be manageable.)
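
To illustrate the gap, this is roughly what an extension has to hand-roll today, using C11 atomics rather than anything from the C-API (just a sketch, error handling elided):

    #include "Python.h"
    #include <stdatomic.h>

    /* Lazily create one process-wide lock, safely even when called
     * concurrently from interpreters that do not share a GIL. */
    static _Atomic(PyThread_type_lock) global_lock;

    static PyThread_type_lock
    get_global_lock(void)
    {
        PyThread_type_lock lock = atomic_load(&global_lock);
        if (lock == NULL) {
            PyThread_type_lock fresh = PyThread_allocate_lock();
            PyThread_type_lock expected = NULL;
            if (atomic_compare_exchange_strong(&global_lock, &expected, fresh)) {
                lock = fresh;
            }
            else {
                /* Another thread won the race; discard ours and use theirs. */
                PyThread_free_lock(fresh);
                lock = expected;
            }
        }
        return lock;
    }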

1 Like

Thanks for clarifying. I agree that we should look into a new allocator set/get API that relates to interpreters. However, I don’t think this PEP needs that.

That’s a good idea. I’ll make a separate post just about this.

Regardless, I was hoping to leave specific APIs that help extension modules out of this PEP. From PEP 684:

We will work with popular extensions to help them support use in multiple interpreters. This may involve adding to CPython’s public C-API, which we will address on a case-by-case basis.

I’m sure we will add a fair number of utility APIs that might help extension maintainers reach multi-interpreter and per-interpreter GIL compatibility. It seems like the PEP would be out-of-phase with that effort, so it would be better to not include specific additions in the proposal.

+1

Yeah, that’s certainly something to look into (but not for this PEP). I know @steve.dower has some thoughts in this area, and certainly @vstinner does and I do. That said, I’d rather any further discussion on this get its own DPO thread, to avoid side-tracking the PEP discussion.

I started a thread at https://discuss.python.org/t/a-new-c-api-for-extensions-that-need-runtime-global-locks/20668.

1 Like

faulthandler, the crash-reporting feature, would remain per-process. Just as it can currently dump the traceback of each thread in the VM, it should presumably be extended to do that for each subinterpreter, so that it is clear which tracebacks belong to which interpreter.

The faulthandler.dump_traceback* APIs could just dump the thread stacks related to the calling interpreter. Or, easier: simply restrict all faulthandler APIs to being called from the main interpreter rather than allowing them from subinterpreters. Given that they deal with process-wide state, just don’t let subinterpreters call them at all.
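
The main-interpreter-only variant would just be a small guard at the top of each API, something like this sketch:

    #include "Python.h"

    /* Sketch of the guard such a restriction could use. */
    static int
    require_main_interpreter(const char *what)
    {
        if (PyInterpreterState_Get() != PyInterpreterState_Main()) {
            PyErr_Format(PyExc_RuntimeError,
                         "%s is only available in the main interpreter", what);
            return -1;
        }
        return 0;
    }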

1 Like

Will the per-interpreter GIL work in a WASM context, to bring parallelism to that web context as well?
(Pyodide and JupyterLite come to mind.)

It’s not a clear-cut answer as it all depends on how you want to utilize per-interpreter GILs. WebAssembly does not natively have threads, so it would be no different than the situation today. If those Emscripten-based WebAssembly runtimes support some version of threads and that can be used from a pthread API, then it should be transparent. But all of that is up to Pyodide and Emscripten.

2 Likes

CPython’s runtime relies on some global state that is shared between all interpreters. That will remain true with a per-interpreter GIL, though there will be less shared state.

From what I understand, WASM does not support any mechanism for sharing state between web workers (the only equivalent to threads of which I’m aware). So using multiple interpreters isn’t currently an option, regardless of a per-interpreter GIL. IIUC, at best you could run one runtime per web worker, which is essentially multiprocessing.

1 Like

I just want to add that a per-interpreter GIL would greatly increase Python’s usefulness for User-Defined Functions (UDFs) in DuckDB. DuckDB automatically parallelises SQL queries, including those with UDFs. However, thus far we have been severely blocked from doing this with Python as a UDF implementation language because of the GIL. The only way around this currently is to fork additional processes and ship inputs and outputs between processes, with all the associated headaches. So yes, please add this!

7 Likes

Thanks for the insight!

1 Like

Thanks for all the hard work and insights on this! Is PEP 684 still targeted for the 3.12 release?

Yeah, we’re still aiming for 3.12, assuming the PEP is accepted by the Steering Council.

5 Likes

On behalf of the Steering Council, I’m happy to report that we have accepted PEP 684.

@eric.snow, thanks for all of your efforts on this PEP and all of the supporting work it took to get us here over the years!

37 Likes

With just this PEP, is there a performance gain from running subinterpreters in threading.Thread objects, as opposed to just raw threading.Thread objects?

I’m trying to understand what additional level of concurrency we get via just this PEP. It sort of sounds like it’s the same as plain threading.Thread for now, until we get a per-interpreter GIL.

1 Like