PEP 749: Implementing PEP 649

It’s a PEP… about a PEP.

I have started work on implementing PEP 649, deferred evaluation of annotations, for Python 3.14. You can see the progress in PRs linked to Implement PEP 649 · Issue #119180 · python/cpython · GitHub, and in my branch most of the functionality is implemented:

>>> def f(x: SomeAlias): pass
>>> SomeAlias = int
>>> f.__annotations__
{'x': <class 'int'>}
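Conceptually, under PEP 649 the compiler no longer evaluates annotations at definition time; instead it synthesizes an `__annotate__` function that evaluates them on demand. A rough hand-written stand-in for that machinery (a sketch to illustrate the timing, not the real compiler output):

```python
# Rough sketch of PEP 649's deferred evaluation, using a plain closure.
# The real compiler synthesizes an __annotate__ function; this is a
# hand-written stand-in to illustrate when evaluation happens.
ns = {}

def annotate(format):
    # format == 1 corresponds to the VALUE format in PEP 649.
    return {"x": ns["SomeAlias"]}

# At "definition time" SomeAlias does not exist yet, and that's fine:
# nothing is evaluated until the annotations are actually requested.
ns["SomeAlias"] = int
assert annotate(1) == {"x": int}
```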

But during the implementation work, I encountered a number of areas where either the existing PEP was too vague, or I wanted to do things a little differently. Therefore, I wrote a new PEP to supplement PEP 649 with some additions and some tweaks to the behavior it introduces.

The abstract of PEP 749 lists the changes made:

  • from __future__ import annotations (PEP 563) will continue to exist with its current behavior at least until Python 3.13 reaches its end-of-life. Subsequently, it will be deprecated and eventually removed.
  • A new standard library module, annotations, is added to provide tooling for annotations. It will include the get_annotations() function, an enum for annotation formats, a ForwardRef class, and a helper function for calling __annotate__ functions.
  • Annotations in the REPL are lazily evaluated, just like other module-level annotations.
  • We specify the behavior of wrapper objects that provide annotations, such as classmethod() and code that uses functools.wraps().
  • There will not be a code flag for marking __annotate__ functions that can be run in a “fake globals” environment.
  • Setting the __annotations__ attribute directly will not affect the __annotate__ attribute.
  • We add functionality to allow evaluating type alias values and type parameter bounds and defaults (which were added by PEP 695 and PEP 696) using PEP 649-like semantics.
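As a rough illustration of how the proposed helper and format enum might fit together (names follow the PEP's draft and may change; this is a simplified sketch, not the stdlib implementation):

```python
import enum

class Format(enum.IntEnum):
    # Mirrors the format enum PEP 749 proposes (values assumed).
    VALUE = 1
    FORWARDREF = 2
    SOURCE = 3

def get_annotations(obj, format=Format.VALUE):
    # Simplified sketch: prefer the lazy __annotate__ hook if present,
    # otherwise fall back to a plain __annotations__ dict.
    annotate = getattr(obj, "__annotate__", None)
    if annotate is not None:
        return annotate(int(format))
    return dict(getattr(obj, "__annotations__", {}))

def f(x): ...
f.__annotate__ = lambda fmt: {"x": int}  # stand-in for compiler output
assert get_annotations(f) == {"x": int}
```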

I think the first two will be the most controversial. For both, the PEP includes a detailed discussion of alternatives and why I reject them.

The main open issue at the moment is the name of the new module: annotations (the PEP’s current proposal), annotools, annotationslib, something else?

My implementation plan at the moment is to initially land the implementation with the tweaks proposed in this PEP, then spend the next few months getting everything into CPython and testing with important third-party libraries. This process may lead us to find additional areas where PEP 649 needs tweaks, and we’ll add those tweaks to PEP 749. Once everything has stabilized, and hopefully still well before the Python 3.14 feature freeze, I’ll submit PEP 749 to the Steering Council.


The work being done is amazing, thank you!

I’m just wondering about this though. Why not get PEP 749 (in its current state) accepted before the implementation lands (more specifically: the bits not already accepted through 649)?

We already have a substantial set of details where divergence from 649 appears desirable, which you’ve thankfully written up. I suspect that any remaining tweaks would be smaller in volume, and could be handled as incremental updates.

But whatever their volume, I don’t see how currently-unknown future tweaks should hold up acceptance of what’s already on the table. I worry that landing stuff counter to what’s currently approved is not a good precedent, and ends up putting the SC in a bind, because a hypothetical “no” down the line would then come with large revert pain for everything that’s already grown on top.

PEP 749 is essentially a list of minor (or major) divergences from PEP 649. I anticipate that there will be more divergences that we identify over the next few months and add to PEP 749, and I’d like to present all of those to the SC at once to avoid going back and forth.

I think that’s a relatively minor concern in this case. The main problematic area would be the proposed new annotations module, but even for that, we know that we will need most of the code in that new module even if we were to implement PEP 649 exactly as written. If we merge the change to CPython now and the SC later tells me to put the code in inspect instead, it’s going to be easy enough to move the code.

I want to make sure that most of PEP 649/749 lands in the main branch soon so that we’ll have lots of time to test it and find issues with it before Python 3.14 becomes final.


Even if the SC haven’t been formally asked for approval yet, I assume they’ll still keep an eye on how things are going, and chime in if they have specific concerns. It’s also an area where “seems like a good idea in principle” may fail in practice once stdlib modules like dataclasses actually attempt to adapt to the changes, so I think it makes sense to ask the SC for ratification of details that have already been demonstrated to work rather than asking for approval for everything that is being tried along the way.

On PEP 749 itself, I don’t think the current notes for wrapper functions are quite right, as they won’t do the right thing when __annotations__ has already been populated on the wrapped function.

I previously posted some thoughts on how to handle that to `functools.update_wrapper` will require changes for PEP 649 compatibility · Issue #21 · larryhastings/co_annotations · GitHub and I still believe that adding a suitable __annotate__ implementation to the wrapper object is the better approach.

The introduction of the annotations module also avoids the circular dependency I mentioned in that issue (inspect depends on functools, so functools depending on an inspect.get_annotations API would pose a problem beyond the mere fact of inspect being an expensive dependency).

The other thing I noticed was that the “stringizer” used to make the SOURCE format reconstruction possible was being kept private.

While splitting that functionality out from ForwardRef itself definitely makes sense, it seems like it might be a useful API to expose in its own right (potentially using a name like EvalToSourceRef or similar). I guess keeping it private initially doesn’t rule out making it public later, though.

Regarding the future import: the PEP explains why eventual deprecation is better than immediate deprecation or immediately making it a no-op. What about making it a no-op eventually instead of immediately? That’s an alternative that I would have expected to be considered.

You already know my opinion on this, but: I would much prefer that the new module be called annotationslib or annotools rather than annotations. I think there’s too much potential for confusion with from __future__ import annotations. As well as human confusion, linters might get quite confused if you had both from __future__ import annotations and import annotations in the same module. Semantically those import statements are doing pretty different things, but I might have to add some special casing to Ruff to get it to understand that this doesn’t count as “redefined while unused”, for example.


Even though I agree that there might be initial confusion and that implementations will need some care, I personally like Jelle’s choice of annotations. In the long run (once __annotations__ is finally removed), it will be nice to live in a world where modules are named with ordinary words that I think are easier to read.


But ordinary words make searches much harder. I argued this point and lost in previous discussions, for example in packaging: there is a project and module called packaging that is effectively impossible to search for or reference (only code markup offers a slight distinction from the ordinary word and concept of packaging); there are also projects named installer and build (the latter even conflicting with the common local directory called build!).

Good names in my opinion are pip, importlib, graphlib, functools, etc.


What’s wrong with making inspect fast enough to just add the function there? It’s entirely made up of definitions, rather than doing work, so I assume the speed issue is because it imports 20 modules in case you call one of the functions that needs it. We can pretty easily halve the import time by moving most of the imports into the functions.

How quick does it need to load in order to be acceptable to not add an entirely new module just for this?


It’s true that inspect is mainly slow to import because it imports so many other modules. But ultimately the reason why it imports so many other modules is because it has so many functions that are doing such disparate and unrelated things. I like the idea of a new, slimmed-down standard-library module for this functionality as I think it starts to chip away at the root problem here, which is that the inspect API is honestly just too big for a single-file module. (If we were writing it from scratch today, I’d probably argue that it should be a package rather than a single-file module.)

Putting it in a separate module also fixes the cyclic dependency issue with functools that @ncoghlan mentioned in PEP 749: Implementing PEP 649 - #4 by ncoghlan.


As @AlexWaygood noted, the main benefit I see in a dedicated annotations handling module is that it means the modules that inspect depends on (like functools) can use it without introducing any circular dependency problems.

The lower level API doesn’t actually need to be public to serve that purpose, so keeping inspect as the public API would technically be fine.

That said, I personally find the argument about the “mechanics of annotations” and the “meaning of annotations” being sufficiently different topics that it’s worth having a module dedicated to the former sufficiently persuasive that I still agree it’s worth moving the public APIs that PEP 649 proposed adding to inspect and typing to a new dedicated module.

I do agree with @merwok about the hassles of using plain nouns as module names, though. How long will it take for a search for “python annotations” to actually start offering the module docs as an early search result, rather than leaving them buried under a pile of articles covering Python annotations in general? The fact annotationslib is free on PyPI is also a decent point in its favour, so it seems like a reasonable option to switch to rather than considering any more exotic alternatives (the main other name that occurred to me was evaltools, and that feels overly non-specific).


I weakly prefer annotationlib (no s) to annotationslib but I kind of like the plain annotations more than either. There’s something very satisfying about a feature moving from from __future__ import annotations to just import annotations. The future arrived!

It’s hard to predict the search-engine impact, especially as search engines have started devolving into AI nonsense. I’d rather have a nicer experience writing the code, personally.


If there’s going to be a new module with one of these names, I also prefer annotationlib over all the other options.

I’m less concerned about this. It doesn’t seem like this functionality is for most users - the point is to let most users “just” write annotations, and then the clever libraries know how to use the lib to understand them. I have no problem making the clever libraries do a bit more typing, especially if it preserves namespace for end users.


This is a good point, I’d probably prefer to use annotations for my own module and I’m unlikely to actually import this most of the time.

I suspect from annotationslib import get_annotations would be the most common form for the import statement, but for folks that genuinely prefer the short module name import annotationslib as annotations remains available.

I’d be surprised if anyone actually did that though (it isn’t like folks are running around writing import contextlib as context).

Each time I see folks writing import annotations, I actually like the name less and less, as I think “but I’m not importing any annotations, I’m importing a collection of utilities for working with annotations”.

(contrast that with the collections module or the types module, which literally do publish a set of collections and a set of interpreter data types respectively)


Googling “python typing” gives me the relevant docs for the library as the first result and that is also a very general noun. So I don’t expect it to be a problem with annotations.

Moreover, even in the unlikely case it doesn’t (or until it does) googling “python annotations lib” certainly will, and that’s just one keystroke more.

annotationslib would also become the second longest stdlib module name after multiprocessing, and while the latter’s length is justified by its subject matter, the former’s suffix extends an already lengthy name for little practical purpose. Either annotools, annolib or annotations would be much better in that regard.

(apologies for bikeshedding)


Thanks everyone for the feedback!

This is a key point I hadn’t considered: it’s likely that some people will write both from __future__ import annotations and import annotations in the same module, at least in the near future. That’s going to confuse linters, but more importantly it will look strange to users. @merwok also makes a good point about the confusion that can result from using a common word as the module name.

I’m now leaning towards using annotationlib as the module name; it sounds best to me among the options being considered.

The PEP has a section explaining why we can’t make it a no-op now. The same concerns would still apply to some extent if we make it a no-op in five(ish) years instead, though obviously the ecosystem would have had a lot more time to get ready for the change.

I think making it a no-op in the future instead could be an acceptable alternative. I’ll list it as a rejected alternative for now, but if others support this approach, I’d be open to switching to it.

I wouldn’t be opposed to making the stringizer public if there’s a good use case, but it’s going to be a very annoying object to deal with in user code (you can see my current draft implementation here: gh-119180: Add `annotations` module to support PEP 649 by JelleZijlstra · Pull Request #119891 · python/cpython · GitHub). The object needs to override basically all dunder methods, including __getattr__. (This is important because users might do e.g. if TYPE_CHECKING: import tensorflow, then in an annotation use tensorflow.SomeType. When the annotations are evaluated in FORWARDREF or SOURCE format, we want tensorflow to evaluate to a Stringizer, and then getting the SomeType attribute should evaluate to another Stringizer.) I’d prefer to keep this object hidden in the implementation, and instead expose APIs that return more tractable objects like strings and ForwardRefs.
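To illustrate why the object is awkward to expose: here is a toy version of the idea that handles only attribute access (the real implementation overrides nearly every dunder; the class name and details here are invented):

```python
class Stringizer:
    """Toy sketch of the 'stringizer' idea: an object that records
    how it is used, so that evaluating an annotation under fake
    globals can reconstruct the annotation's source text. The real
    object must override nearly all dunder methods; this sketch
    only handles attribute access."""

    def __init__(self, text):
        self.__text = text

    def __getattr__(self, name):
        # Attribute access produces another Stringizer, so chains like
        # tensorflow.SomeType keep accumulating source text.
        return Stringizer(f"{self.__text}.{name}")

    def __repr__(self):
        return self.__text

tf = Stringizer("tensorflow")
assert repr(tf.SomeType) == "tensorflow.SomeType"
```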

You’re probably right that this behaves more intuitively in the case where __annotations__ is modified on the wrapped function. It’s unfortunate though that this will make the implementation of update_wrapper more complex. If we generate a wrapper __annotate__, we can’t really have __annotate__ in either functools.WRAPPER_ASSIGNMENTS or functools.WRAPPER_UPDATES, so we may have to invent a new mechanism.


Yeah, I already resigned myself to it needing custom inline code, since I don’t see a reasonable way to avoid it that doesn’t cause other problems (the wrapper function is only 3 lines, but it needs to be a closure to work as desired).

To keep in the spirit of the existing parameters, we could add a “WRAPPER_DELEGATIONS” list, and a separate “SUPPORTED_WRAPPER_DELEGATIONS” tuple, with "__annotate__" as the only initial entry in both sequences.
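For illustration, a hypothetical version of that delegating closure (not the stdlib code; the helper name is invented):

```python
def _make_wrapper_annotate(wrapped):
    # Hypothetical sketch: the wrapper's __annotate__ simply delegates
    # to the wrapped callable, so later changes to the wrapped
    # function's annotations are still picked up lazily.
    def __annotate__(format):
        annotate = getattr(wrapped, "__annotate__", None)
        if annotate is not None:
            return annotate(format)
        # Pre-PEP 649 fallback: copy the eager annotations dict.
        return dict(getattr(wrapped, "__annotations__", {}))
    return __annotate__

def wrapped(x: int): ...
def wrapper(*args, **kwargs): ...
wrapper.__annotate__ = _make_wrapper_annotate(wrapped)
assert wrapper.__annotate__(1) == {"x": int}
```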

Isn’t that how it’s done for other future imports? It’s still valid to do from __future__ import generators even though that’s been mandatory since 2.3.

If possible, I think it would be great if we could avoid emitting a DeprecationWarning when from __future__ import annotations is used. Most code will continue to work just fine, and it would just become one of the warnings that get ignored by default because it spams too much.

Tools like pyupgrade or ruff can and will evolve to remove it automatically anyway.

That feels pretty different to me, because the generators future did end up actually being implemented in Python. But this future is never going to be implemented – it’s no longer the future! While it’s true that the vast majority of users will hopefully see no breakage at all when they switch from PEP 563 semantics to the new default of PEP 649, that’s not going to be true for 100% of users. For the small number of users who are going to have their code broken by this change, I’d much rather they have an explicit deprecation warning prompting them to remove the __future__ import ahead of time.

It will be much easier for people to diagnose the cause of their code breaking if they notice the deprecation warning, attempt to remove the __future__ import ahead of time, and find that their code breaks. If there’s no deprecation warning or eventual removal, and the __future__ just silently becomes a no-op in a certain version of Python as one of a thousand other changes, I think diagnosing the cause of the breakage to your code would be much harder.