PEP 703 (Making the Global Interpreter Lock Optional in CPython) acceptance

TBH, I’m not sure you would need any “branding appeal” for something that is already technically appealing. People who would benefit from it probably know what the GIL is, or have heard about it, and the implications of “no GIL” are more explicit and more easily understood than a vague “multicore”.

It depends on what you call “pure Python”. If it means that you don’t write any non-Python code yourself, then it’s easy: just call into NumPy or any other numerical library that releases the GIL before doing computations.

A more sophisticated example would be using Numba with nogil.
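As a minimal sketch of the first approach (assuming NumPy is installed): because NumPy releases the GIL inside large array operations, plain `threading` can already occupy several cores on a standard GIL build.

```python
import threading

import numpy as np  # third-party; assumed installed

results = {}

def work(i, a):
    # NumPy releases the GIL inside the matrix multiply, so these
    # threads can run on multiple cores even on a standard GIL build.
    results[i] = a @ a

arrays = [np.ones((200, 200)) for _ in range(4)]
threads = [threading.Thread(target=work, args=(i, a))
           for i, a in enumerate(arrays)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The Python-level code here is “pure Python” in the sense that you wrote no C yourself; the parallelism happens inside the library call.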


As per this discussion: PEP 703: Making the Global Interpreter Lock Optional (3.12 updates) - #14 by bluetech

As per What is "Pure Python?" - Stack Overflow, which matches what I think is commonly understood as “pure Python”, I mean:

“it’s all implemented in Python, and not (as is sometimes done) with parts written in C or other languages”

So no, by this definition of pure Python, I would exclude NumPy, or any third-party package that requires a compiler or a call to another process to install its source distribution.

I would think this isn’t true for asynchronous code, even if it’s pure Python?

Why would you exclude Numpy or any third-party package, but not CPython itself?

Are you even sure that the Python-level dependencies you’re using are all pure Python? They might have a C accelerator here and there.

In any case, even with only CPython and the stdlib, you can still benefit from multiple cores, for example by using multiprocessing or concurrent.futures.ProcessPoolExecutor, or by calling zlib or hashlib from multiple threads, since both can release the GIL while they work.
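As an illustrative stdlib-only sketch: hashlib releases the GIL while hashing sufficiently large buffers, so plain threads can hash on separate cores even with the GIL in place.

```python
import hashlib
import threading

digests = {}

def digest(i, data):
    # hashlib releases the GIL while hashing large buffers, so these
    # threads can hash on separate cores despite the GIL.
    digests[i] = hashlib.sha256(data).hexdigest()

blobs = [bytes([i]) * 1_000_000 for i in range(4)]
threads = [threading.Thread(target=digest, args=(i, b))
           for i, b in enumerate(blobs)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The same pattern works with zlib.compress on large inputs.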


I think this is straying off topic. The SC will weigh in on a name choice for the C macro (we’ve been asked to), and given the community contention around the term multicore we’re more likely to pick something different.

There’s no need to discuss what is and isn’t parallel here.


I still like the name Unlocked Python.

… but I’m likely in the minority.

It’s nice because it no longer has the GIL, and it sounds swell.

For explaining it to people: “Traditional Python is typically locked to running one piece of Python code at a time in a single process. Unlocked Python removes that limit, so multiple threads of Python code can run at once in a single process.”

It sounds nice, makes sense, and explains nicely for folks unfamiliar.


How about parallel python?

Considering that this version removes a lock, how about something like “unlocked interpreter” or “unlocked Python”?

I think “parallel” is a good word to use as a base, but it needs to be a little more specific: what removing the GIL gives us over the standard build is specifically parallel threading (the threading module), right? Multiprocessing in Python can already run different operations in parallel, as some people here have discussed ad nauseam. Threading, on the other hand, can’t achieve the same thing, since the GIL locks the whole interpreter instead of locking the specific resources a thread actually needs.

So the gist of it is: with a standard build, threading can only be concurrent, while with no GIL it can actually run in parallel.

Therefore, I suggest “parallel threading” or if you want it to be on the nose then “truly parallel threading” :smile:

Some potential issues with that name that I see are:

  • some people might not immediately recognize the difference between concurrent and parallel, but I’m not sure any other term would make that distinction clearer than “parallel” does.
  • “parallel threading” is two words, not one word, and so could perhaps be a bit long for use in the C macros/function names. So maybe there, “parallel” would be enough. But for the user-facing name (and in the documentation of those C macros/functions), I think using two words works all right. Prefixing it with the aforementioned “truly” could work to market it better in something like release highlights :slight_smile:
  • I suppose that on a CPU with only a single core/hardware thread, “truly parallel threading” still ends up being merely concurrent, since there’s no hardware support for actually running multiple operations at the same time. I’m not sure that matters, though: for people running Python on such CPUs, both builds will work roughly the same, since Python would be releasing the GIL on every thread switch anyway (the overhead of a switch could be a bit smaller with no GIL, but that’s a micro-optimization, I imagine).
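The concurrent-vs-parallel distinction above can be seen with a small, purely illustrative benchmark: on a GIL build the threaded run takes about as long as the serial one, while on a free-threaded build it can approach half that time (exact numbers depend on the machine).

```python
import threading
import time

N = 5_000_000

def count(n):
    # Pure-Python CPU-bound work: under the GIL, only one thread can
    # execute bytecode at a time, so threads add no speedup here.
    while n:
        n -= 1

# Serial baseline: run the work twice in one thread.
start = time.perf_counter()
count(N)
count(N)
serial = time.perf_counter() - start

# Threaded: same total work split across two threads.
start = time.perf_counter()
threads = [threading.Thread(target=count, args=(N,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.perf_counter() - start

print(f"serial: {serial:.2f}s  threaded: {threaded:.2f}s")
```

With the GIL, the two times come out roughly equal, which is exactly the “concurrent but not parallel” behavior being named here.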

I like nogil, for the reason that it is the only term that is completely accurate and also what the build has traditionally been known as. I’m very unconvinced that we need a marketing term that avoids using a negative to convince people that it’s useful. People are going to use whatever build of Python comes with all the packages that they want. The people who need convincing are package authors who know what the GIL is.

I don’t think any of the alternatives are good:

  • “Free Threading” is pretty ambiguous and not a well-defined term. I’ve seen it refer to a programming model without any locks at all, which is not the case here. If you google “free threading”, many of the top results are forums about nogil.
  • As people have mentioned, Python can already use multiple cores, even if you restrict yourself to the standard library.
  • “Unlocked” sounds like a marketing term

I find the heated discussion a bit odd… it’s just a name, after all :slight_smile:

In the past, we’ve always called this “free-threading”… Greg Stein was the first (IIRC) to try such a patch, back in 1999. And even Sam and the SC use the term, so why not simply stick with that instead of having heated discussions?

At the end of the day, it’s all going to be Python.


A name that will hopefully only be relevant for a release or two, at that!


FWIW, I appreciate the thought folks have put into naming this feature. I’m also confident that, at this point, the Steering Council has a good sense of an appropriate name to use, particularly for the technical aspects like the feature macro, as @gpshead said. Furthermore, I agree with @malemburg pretty much entirely.

One thing I want to clarify is that CPython already supports multi-core (AKA parallel programming), even with a GIL. I don’t just mean multiprocessing or Dask or releasing the GIL for blocking calls. I mean actually executing Python code in parallel, not just concurrently (which the GIL normally prevents in multi-threaded programs).

As of 3.12 you can use multiple interpreters (“subinterpreters”) that don’t share the GIL. (See PEP 684.) That means Python code can truly run in parallel in two threads if those threads are using different interpreters. Unfortunately, in 3.12 the feature is only accessible via the C-API.
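A purely illustrative sketch of that feature, using the private _xxsubinterpreters module that ships with CPython 3.12 — these names are not a stable API, and the public module proposed by PEP 554 may well look different:

```python
# Hypothetical sketch: _xxsubinterpreters is a private CPython 3.12
# module, not a supported API; PEP 554's public interface may differ.
try:
    import _xxsubinterpreters as interpreters
except ImportError:
    interpreters = None  # not available on this CPython version

if interpreters is not None:
    interp = interpreters.create()
    try:
        # As of PEP 684 each interpreter has its own GIL, so code run
        # here could execute in parallel with the main interpreter.
        interpreters.run_string(interp, "result = sum(range(10))")
    finally:
        interpreters.destroy(interp)
    status = "ran"
else:
    status = "unavailable"
```

To actually run the string in a separate thread (and thus in parallel), you would call run_string from a threading.Thread targeting that interpreter.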

I do have a PEP that proposes a stdlib module to expose the feature to Python code (PEP 554), but it didn’t make it in time for 3.12. (I’m also in the process of replacing PEP 554, since it is 7 years old and carries the accumulation of 7 years of discussion.) My plan is to target 3.13 and to publish a PyPI module in the coming month or two for use with 3.12.

As @malemburg said, PEP 703 is strictly about supporting free-threading (with a single interpreter).