A massive PEP 649 update, with some major course corrections

Howdy howdy. I’ve been doing a lot more thinking about PEP 649 since the last discussion topic from a few weeks back. I propose revising some important details, described below.

One proviso before I begin. It’s a gnarly topic, and my proposal has evolved a lot, and I feel like it’s been a real struggle to get the solution right. There may be important details I omitted, or text I neglected to update. So if something seems awful, or doesn’t make sense, or I’ve been self-contradictory, please start by making a (kind) request for clarification. Who knows, it may be an honest mistake–or I might even have a good reason that I neglected to mention. Anyway, please hold off tearing my proposal apart until I’ve confirmed the detail was intentional, ok?

Now that I’ve–finally!–posted this, I’m going to start updating PEP 649 in parallel. There’s still a chance to get this into Python 3.12, I think, but time is running short. Hopefully we won’t uncover any more overlooked details requiring major course corrections…!

My thanks to Carl Meyer, Eric V. Smith, and Mark Shannon for participating in the private email thread leading to this post. Extra-special thanks go to Carl for his massive contribution to this discussion! He must have spent countless hours corresponding with me, and he made numerous good suggestions which I’ve incorporated. I’m grateful he was willing to contribute so much of his time and expertise. (He also saved me from some dumb blunders! Phew!)

Here goes!


Let’s start with the easy stuff: renaming some things. First up is __co_annotations__. It was a placeholder name, a relic from the very early days of PEP 649 when the attribute actually stored a code object. It’s long past time we gave it a better name. Since “maybe we should change the name” has been a TBD item in PEP 649 for more than a year, and nobody has suggested anything, it seems it’s up to me. My best idea is __compute_annotations__. If you have a better suggestion, go ahead and post it, and I’ll consider it.

For the rest of this post I’ll use the name __compute_annotations__, even in historical contexts, just for consistency’s sake.

In the last go-round I also proposed a format parameter for inspect.get_annotations. format could be one of three values: VALUES, HYBRID, and STRINGS. I want to amend these names too.

First, they should be singular: change VALUES to VALUE, and change STRINGS to STRING.

Second, I’m not convinced STRING is the best name. I picked it because PEP 563 called these “stringized” annotations. But the name by itself doesn’t convey much–it’s pretty generic. Admittedly this isn’t a major concern. But if we put our minds to it maybe we can arrive at something better. So far my best ideas are SOURCE, CODE, and SOURCE_CODE. I note none of these are strictly accurate; it isn’t actual source code, it’s reconstructed source code. But RECONSTRUCTED_SOURCE seems too long… and, maybe someday, it actually will be the original source code, which would render the word RECONSTRUCTED in the name anachronistic. STRINGIZED also seems a little too long, and would also be inaccurate if we ever switch to preserving the actual source code. So far no really great name has suggested itself to me.

And now I’m not so sure about the HYBRID name either. Maybe PROXY is better? I think the format name should describe the value(s) you get back, as opposed to describing a “mode” that inspect.get_annotations is operating in. And the objects it creates for undefined symbols are proxies for the actual values. I have yet another name to propose here, but I might have a different use for that name coming a little later in this proposal. I’ll tell you about it then.

(I wouldn’t want to rename HYBRID to MOCK. The ForwardRef proxy objects aren’t mock objects, they’re text representations of an expression you might be able to evaluate later. Although there are some similarities, they don’t really behave like actual mock objects like unittest.mock.Mock.)

I have polls at the bottom of this post so you can vote on all this renaming. If you have an alternate name suggestion, please put that in a comment by itself, and we can use Discuss comment “hearts” to count as votes for that name. (I’d just add the alternate suggestions to the poll itself, but Discuss doesn’t let you modify polls once they’ve been open for like fifteen minutes.)

For the rest of this document, I’ll use the singular versions of the names so far–VALUE, HYBRID, and STRING.

(Also: everywhere I talk about inspect.get_annotations in this document, assume I’m talking about typing.get_type_hints too. For example, typing.get_type_hints will also support the format parameter and all these formats. I’m hoping I can just reimplement typing.get_type_hints on top of inspect.get_annotations so I only have to do this work in one place.)


Second change: so far PEP 649 specifies that the new behavior will initially be gated behind a from __future__ import. Several people, including myself, now think that we should just pull the trigger and make it default behavior in 3.12. What do you think? There’s a poll.


Third change: in the previous thread, Carl Meyer argued that there are use cases for requesting HYBRID format, then later evaluating the stringized values to get real values. Currently, users who enable PEP 563’s “stringized annotations” and later try to evaluate those strings have a lot of trouble doing so correctly. It can be hard to get the right globals() for an annotation, and handling closures correctly is nigh impossible. Carl wanted the placeholder values in HYBRID format to not simply be strings, but to be evaluatable objects that contain the strings and all the context needed to correctly evaluate them.

(If you aren’t current on the “stringizer” and “fake globals” runtime environment concepts this proposal is based on, please refer to my previous discussion thread where these concepts were first introduced.)
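As a quick refresher, here’s a tiny, heavily simplified sketch of the “stringizer” idea–an object whose operations build up source text instead of computing values. This is purely illustrative; the real implementation supports many more dunder methods.

class Stringizer:
    def __init__(self, text):
        self.text = text
    def __repr__(self):
        return self.text
    def __getattr__(self, name):
        return Stringizer(f"{self.text}.{name}")
    def __getitem__(self, key):
        return Stringizer(f"{self.text}[{key!r}]")

# In the "fake globals" runtime environment every name lookup yields
# one of these, so evaluating an annotation expression reconstructs
# its source text:
mod = Stringizer("mymodule")
print(mod.MyClass)                    # mymodule.MyClass
print(mod.MyClass[Stringizer("T")])   # mymodule.MyClass[T]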

Here’s how I want this to look from the user’s perspective:

  • These objects will be typing.ForwardRef objects. Actually I expect to hoist ForwardRef out of the typing module and put it somewhere else–possibly the inspect module, possibly into some internal module with an underscore. I don’t think we’ll need to reimplement it in C, but I’m leaving that as an open question for now.
  • ForwardRef objects will internally contain a reference to the globals, locals, and closure information necessary to evaluate the string. (Handling closures is messy but doable; we’ll have to reconstruct a dict out of the closures, and load that information into locals when calling eval.)
  • The API to ask a ForwardRef to evaluate itself should be: you call the ForwardRef object without arguments, as in, it supports __call__. This would evaluate the string and return the result. (Or it might raise an exception.) There’s a sketch of this just after this list. This is so obviously the correct API that no further discussion seems necessary. Currently ForwardRef objects do have an _evaluate method, but this is internal-only, unsupported, takes globals and locals arguments, and so on. (Maybe I can remove it when I add __call__, maybe not.)
  • Internally, ForwardRef will be the “stringizer”–instances of the ForwardRef class will be doing the stringizing. This is necessary for HYBRID format; it’s not viable to build the values with “stringizer” objects, then replace them with ForwardRef objects at the last minute. There may be objects with arbitrary internal references to the “stringizer” objects and it’s not reasonable to tear apart the constructed objects and replace their “stringizer” objects with ForwardRef objects.
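Here’s a hedged sketch of how that __call__ API might look in practice–assuming PEP 649 semantics are active, and with the format parameter and HYBRID constant as proposed in this post:

import inspect

def f(x: Undefined): ...          # "Undefined" isn't bound yet

ann = inspect.get_annotations(f, format=inspect.HYBRID)
ref = ann['x']                    # a ForwardRef proxy for "Undefined"

class Undefined: ...              # later, the missing name gets defined

value = ref()                     # evaluates "Undefined" in f's context
assert value is Undefined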

But now we have a problem: when stringizing, __call__ on a “stringizer” has to return a new “stringizer” object. How can ForwardRef.__call__ return the real value to the user, but also return a “stringizer” (aka ForwardRef) when stringizing? It’ll have to have a special “stringizer” mode that’s off by default. There’ll be an internal, unsupported bit of API that lets you turn on “stringizer mode” on a ForwardRef object. Don’t worry, we’ll turn off “stringizer mode” before the user ever sees the object. Internally, the “fake globals” runtime environment will keep track of every ForwardRef object it creates. When __compute_annotations__ finishes, it’ll iterate over all the ForwardRef objects and switch off “stringizer mode” on each one.

And speaking of “fake globals”. This environment will also need “fake locals”, to catch class namespace lookups for annotated methods. We’ll also have to create a “fake closure tuple” for __compute_annotations__ functions that use closures.

(I even considered creating “fake constants”, where we replace co_consts with tuple(ForwardRef(repr(c)) for c in co_consts). But this would mean creating a modified version of the code object and running that. It’s probably easier to leave the constants alone. If we’re generating STRING format, and any of the values in the resulting annotations are constants (e.g. manually stringized type hints), we’ll just call repr on them during the same pass when we extract the strings from the ForwardRef objects.)


On to the fourth change. Previously I proposed inspect.get_annotations would accept a format parameter, specifying the format for the annotation values. So far I’ve proposed three of these “formats”–VALUE, HYBRID, and STRING. I now propose a fourth: FORWARDREF. This would be like STRING format, but instead of the annotation values all being strings, they’d all be ForwardRef objects. (In case it’s helpful, I first proposed this format in this discuss thread.)

How is this different from HYBRID format? In HYBRID format, if the annotation refers to a global or local that’s been bound, it uses the real value. It’s only when the expression uses an undefined global or local that we create a “stringizer” to represent that missing name. In FORWARDREF format, every global or local (or closure) would be replaced with a “stringizer”. Every value in the dict it returns would be a ForwardRef object, guaranteed, whereas in HYBRID format the values you get may differ depending on what’s been imported so far. (Even annotation values that were constants would get wrapped in ForwardRef objects, for consistency’s sake.)
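Here’s a hedged illustration of the difference–again assuming PEP 649 semantics, with the output shown as I’d expect it rather than copied from a real build:

import inspect

class Known: ...

def f(x: Known, y: NotYetDefined): ...

inspect.get_annotations(f, format=inspect.HYBRID)
# {'x': <class 'Known'>, 'y': ForwardRef('NotYetDefined')}

inspect.get_annotations(f, format=inspect.FORWARDREF)
# {'x': ForwardRef('Known'), 'y': ForwardRef('NotYetDefined')}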

Why do I propose adding this? Because we get it for free. In order to compute HYBRID and STRING formats, inspect.get_annotations has to be able to create ForwardRef objects for every name. And in the case of STRING format, the last step, just before returning, is to extract the strings from the ForwardRef objects and build a dict mapping the annotation names to those strings. So with our current approach, we literally have to compute FORWARDREF format in order to compute STRING format.

I can’t think of an implementation of inspect.get_annotations that could support HYBRID format where we don’t essentially get FORWARDREF format for free. Even if in the future we stored the annotation strings from the source code in the .pyc file somewhere, and so STRING format was produced in a completely different way, we’d still need to support HYBRID format, which means we’d still have all the code needed to support FORWARDREF format too. So, as long as we have to permanently support all the functionality for this format, we might as well do the small amount of extra work to give it to the users, right?

I admit I haven’t come up with a convincing use case for it. The closest I get to a use case is, “it’s more consistent than HYBRID, but the values are evaluatable unlike STRING, and that seems like it could be useful.” But that’s pretty thin. So I’m not actually proposing adding it to PEP 649, per se. I included the proposal here so we could discuss it. There’ll be a poll about this at the end–should we add FORWARDREF or not? In particular, if you have a good use case for FORWARDREF format, please speak up!

For the rest of the document, I’ll describe FORWARDREF as if it’s an accepted part of the proposal. But to be clear: you shouldn’t interpret that as me trying to drum up support for it. I don’t really care whether we keep or reject FORWARDREF; I just want to do the right thing for Python users. If the community doesn’t want or need it, let’s reject it–that’s fine by me. I mean, hey, that reduces my workload! If only very slightly.

Now that I’ve introduced FORWARDREF format, let me stipulate that these formats will be defined as integer values, specifically:

VALUE=1
HYBRID=2
FORWARDREF=3
STRING=4

They won’t be Enum values, or strings, or instances of a custom class, etc.

The values are also guaranteed to be contiguous, and the inspect module will have attributes representing the minimum and maximum format values:

FORMAT_MIN = VALUE
FORMAT_MAX = STRING

This should prove useful for code working with different formats–more on this very soon.


Fifth, I previously asked the question: should __compute_annotations__ be a public or private API? Nearly all respondents said it should be private, at least for now. Since then I’ve realized __compute_annotations__ must be a public API, and for a very good reason: functools.partial, attrs, dataclasses, etc. Any code that wraps an existing class or function, returning a new object with the same or modified annotations, and which wants to support HYBRID, FORWARDREF, or STRING format, will have to write its own __compute_annotations__. And since you can write such code in pure Python, we need to support this API from Python.

Here’s where it gets a little messy. If we simply declared __compute_annotations__ to be a public API, and otherwise kept the API and implementation the same as previously proposed, third-party implementations of __compute_annotations__ would be maddening to write in Python. This is because of the “fake globals” runtime environment that makes HYBRID, FORWARDREF, and STRING formats possible. When run in this environment, these third-party __compute_annotations__ functions couldn’t do any real work–because any global symbol they referenced would be replaced with a “stringizer”! They couldn’t evaluate global values, call functions in global modules, etc. All their globals would be fakes.

Now, they could sidestep this by smuggling in the globals they need as default parameters:

def __compute_annotations__(self, inspect=inspect):
    # the default argument captured the real inspect module at
    # definition time, so it survives the "fake globals" substitution
    ...

Or they could do their work in a different method. Only the top-level call, the __compute_annotations__ call itself, runs in the “fake globals” runtime environment. They could call a different method through self, which would be a real value (because it’s an argument), and that function wouldn’t run in a “fake globals” runtime environment:

def __compute_annotations__(self):
    return self.actual_compute_annotations()

This means actual_compute_annotations could be written conventionally–looking up global values, calling functions in libraries, etc. All its globals would be real, and it would run normally.

But now they have the opposite problem: if they compute the annotations in a different function like actual_compute_annotations, the HYBRID, FORWARDREF, and STRING formats wouldn’t render properly, precisely because they’re not running in the “fake globals” runtime environment. How can they compute these other formats?

There’s a straightforward solution to this, and it ties back neatly to the fact that these are wrappers: their annotations are defined on the object they’re wrapping. They can simply call inspect.get_annotations on the original object. That would produce the original annotations in the correct format, and they can then modify the result as needed. Easy peasy.

Except… how would it know which format to ask for? As previously defined, __compute_annotations__ is never explicitly told what format it’s producing. It’s implicit in the runtime environment. True, you could use some coding tricks to sniff out what environment you’re running in, but even that is insufficient–FORWARDREF and STRING formats actually run in identical runtime environments. The difference between the two is in the cleanup pass run after __compute_annotations__ returns. Unless we change the API, it’s literally impossible for attrs et al to correctly support all formats.

It’s not a hard fix, but it feels like a big change: __compute_annotations__ must itself take the format parameter, specifying VALUE, HYBRID, FORWARDREF, and STRING formats. This allows third-party __compute_annotations__ functions to handle any format, because now we explicitly tell them exactly what we want.

The __compute_annotations__ functions generated by the CPython compiler won’t be sophisticated enough to handle HYBRID, FORWARDREF, and STRING formats themselves. They’ll only know how to compute VALUE format, aka real values. They’ll still get run in the “fake globals” runtime environment to produce the other formats. But I expect third-party __compute_annotations__ functions to directly support every format. So here’s how the API should work: if __compute_annotations__ supports the requested format, it must return a dict in that format, and if it doesn’t support that format, it must raise NotImplementedError(). The function would then get run in the “fake globals” runtime environment, requesting VALUE format.

Alas, the “fake globals” runtime environment is so obnoxious that we should never run any __compute_annotations__ function in that environment unless it explicitly opts in. Carl had the best suggestion for this: add a new flag to co_flags (the code object bitfield) that specifies “This code object supports being run in a fake globals runtime environment”. It’d be inconvenient for pure-Python wrapper libraries to set this flag for their __compute_annotations__ functions, but I think that’s for the best; I expect they’re going to be real code, with flow control and such, not a simple return statement. They’ll do most of their work by calling inspect.get_annotations on the thing they’re wrapping. (It may make sense for extension modules that create their own code objects to set the flag, I’m not sure.)

The logic inside inspect.get_annotations now works something like this pseudocode:

c_a = o.__compute_annotations__
try:
    # first, ask __compute_annotations__ for the format directly
    return c_a(format)
except NotImplementedError:
    # it only knows VALUE format; fall back to the "fake globals"
    # runtime environment, but only if its code object opted in
    if not supports_fake_globals(c_a.__code__):
        return {}
    c_a_with_fake_globals = rebind_with_fake_globals(c_a, format)
    return c_a_with_fake_globals(VALUE)

In the general case, it does mean raising an exception, which is a little slow. But this code path isn’t used for VALUE format, and in any case I don’t expect folks are examining annotations in performance-sensitive code.

Bringing it all together, here’s the new API definition for __compute_annotations__:

__compute_annotations__(format: int) -> dict

Returns a new dictionary object mapping attribute/parameter names to their annotation values.

Takes a format parameter specifying the format in which annotations values should be provided. Must be one of the following:

  • inspect.VALUE
    Values are the result of evaluating the annotation expressions.
  • inspect.STRING
    Values are the text string of the annotation as it appears in the source code. May only be approximate; whitespace may be normalized, and constant values may be optimized.
  • inspect.FORWARDREF
    Values are ForwardRef expression proxy objects, containing the string of the annotation value as per STRING format. The ForwardRef objects contain references to all the context needed (globals/locals/closure) to evaluate themselves correctly.
  • inspect.HYBRID
    Values are real annotation values (VALUE format) for defined values, and ForwardRef proxies (FORWARDREF format) for undefined values. Real objects may be exposed to, or contain references to, ForwardRef proxy objects.

If __compute_annotations__ doesn’t support the specified format, it must raise NotImplementedError(). __compute_annotations__ must always support VALUE format; it must not raise NotImplementedError() when called with format=VALUE.

When called with format=VALUE, __compute_annotations__ may raise NameError; it must not raise NameError when called requesting any other format.

If an object doesn’t have any annotations, __compute_annotations__ should preferably be deleted or set to None, rather than set to a function that returns an empty dict.

Here’s what a __compute_annotations__ function generated by the compiler would look like, if it was written in Python:

def __compute_annotations__(format):
    if format != 1:
        raise NotImplementedError()
    return { ... }

As mentioned before, the code object for this __compute_annotations__ function would have the special “safe for fake globals” flag set.

Note that we compare format to the hard-coded value 1. This is set in stone as the constant for VALUE format. There are various reasons it’s hard-coded here, but here’s the most important: when it’s run in a “fake globals” runtime environment, __compute_annotations__ can’t look up inspect.VALUE… because it’d get a ForwardRef! (However, when format is not 1, that means it’s not being run in a “fake globals” runtime environment, and therefore it’s safe to look up NotImplementedError. __compute_annotations__ can rely on the fact that it’ll only be asked for VALUE format when run in a “fake globals” runtime environment.)

Also, to clarify a topic that came up in private discussions: __compute_annotations__ functions generated by Python never cache anything. They recompute the annotations dict every time they’re called. This isn’t a requirement for the __compute_annotations__ API; third-party __compute_annotations__ functions can cache whatever they like. But the only caching of annotations defined in Python-the-language is the internal cache for the __annotations__ property in functions, classes, and modules. If the internal cache for the __annotations__ property is unset, and __compute_annotations__ is set, and the user asks for __annotations__, the getter will call __compute_annotations__ and cache and return the result.
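In pseudocode, that property getter behaves something like this (“_annotations_cache” is an invented name for the internal cache, not a real attribute):

@property
def __annotations__(self):
    # cache miss: compute once, in VALUE format, and remember the result
    if self._annotations_cache is None:
        if self.__compute_annotations__ is None:
            self._annotations_cache = {}
        else:
            self._annotations_cache = self.__compute_annotations__(VALUE)
    return self._annotations_cache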

Finally let’s consider what __compute_annotations__ might look like for a wrapper object. For simplicity, I’ll contrive a super-simple example. This class is a clone of functools.partial, but it only handles wrapping one argument, which is always named arg:

def __compute_annotations__(self, format):
    # delegate to the wrapped function, which produces its annotations
    # in whatever format the caller requested
    ann = inspect.get_annotations(self.wrapped_fn, format)
    # this wrapper consumes the parameter named 'arg', so drop its annotation
    del ann['arg']
    return ann
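To make that self-contained, here’s a hedged sketch of the full wrapper class the method might live in (the class name and attributes are invented for illustration):

import inspect

class PartialOneArg:
    def __init__(self, wrapped_fn, value):
        self.wrapped_fn = wrapped_fn
        self.value = value

    def __call__(self, *args, **kwargs):
        # always supplies the wrapped function's 'arg' parameter
        return self.wrapped_fn(*args, arg=self.value, **kwargs)

    def __compute_annotations__(self, format):
        ann = inspect.get_annotations(self.wrapped_fn, format)
        del ann['arg']
        return ann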

Our third-party wrapper’s __compute_annotations__ method doesn’t have to worry about running in a “fake globals” runtime environment, because it hasn’t set the special opt-in flag on its code object. But it also doesn’t need to implement any of the formats itself–it can rely on inspect.get_annotations to do all the hard work. All it really needs to do is adjust the computed annotations dict as needed, in this case removing the entry for 'arg'. Happily this __compute_annotations__ is forwards-compatible; if we add support for new formats in the future, it can rely on inspect.get_annotations to support that new format, and it doesn’t even need to change.

Of course, other wrappers may not be so lucky; they may need to modify annotation values, or add new ones. And they can’t do that for a new format they’ve never seen before. But defining __compute_annotations__ as a public API with this interface at least gives third-party code the chance to do that work and fully support all formats. (As I always think of it: we’re giving them “a lever and a place to stand”. Hat tip to my old pal Archimedes!) I’m optimistic that currently-maintained third party libraries will want to do this work and add first-class support for all annotation formats.

Oh, and, I defined FORMAT_MIN and FORMAT_MAX in case third-party code wants to pre-calculate all the formats for __compute_annotations__. This would permit them to iterate over all formats and cache the results.
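Here’s a hedged sketch of that pre-calculation, with FORMAT_MIN, FORMAT_MAX, and friends as proposed above:

import inspect

def all_format_annotations(o):
    # compute the annotations dict once for every supported format
    return {
        fmt: inspect.get_annotations(o, format=fmt)
        for fmt in range(inspect.FORMAT_MIN, inspect.FORMAT_MAX + 1)
    }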


One more messy topic: how should inspect.get_annotations behave when code manually modifies, overwrites, or deletes annotations?

Traditionally __annotations__ wasn’t a special object. It was just an attribute that stored a reference to a dict, and user code could modify the dict as it saw fit. This leaves open the definite possibility of user code manually changing the annotations on an object. User code could potentially:

  • modify the dict, adding/removing/changing keys and values,
  • set o.__annotations__ to a new value (hopefully another dict!), or
  • delete o.__annotations__.

If the user does any of these things, how should the output of inspect.get_annotations change?

First, I want to support this behavior as best I can. My starting goal is 100% backwards compatibility with existing code that manipulates o.__annotations__. Although I haven’t seen any code deleting o.__annotations__, I have seen reasonable code that overwrites or modifies o.__annotations__–and that code must continue to work. (So, I don’t propose changing o.__annotations__ to a read-only dict, or preventing the user from overwriting or deleting the attribute, or anything else that would break existing code.)

However, if you manually change __annotations__, that means __compute_annotations__ is now out-of-date. And there’s simply no viable way to automatically update __compute_annotations__ to match. What should Python do?

Once again I refer to the Zen: “in the face of ambiguity, refuse the temptation to guess”. If the user modifies, deletes, or overwrites o.__annotations__, we don’t know whether or not the output of o.__compute_annotations__ still matches the new annotations. Rather than keep it around, hoping that maybe it matches, o should drop its reference to __compute_annotations__. That way it can’t get called and we won’t generate stale values.

In the cases of overwriting or deleting o.__annotations__, we have it easy. o.__annotations__ is already a property; we just make the “setter” and “deleter” methods on o drop its reference to its __compute_annotations__. This is the first component to our solution.

But we don’t have a reasonable way of detecting when the user modifies the o.__annotations__ dict in place.

(Or do we? CPython 3.12 adds a new “watch” facility to PyDict, which lets a callback get notified any time a “watched” dict is modified. But this would be a pretty heavyweight solution. It’d require allocating memory for callback state for every __annotations__ dict generated, to let the callback map the annotations dict back to o, which we’d then need to look up somehow. And even then it wouldn’t notify you if code mutated a mutable value inside the dict. In any case I don’t want to define the language to depend on this implementation feature–after all, other implementations of Python may not have such a facility. And it’s not defined as part of the language–it’s not exposed anywhere in the language or library. By the same token, I don’t want to define o.__compute_annotations__ as returning a new subclass of dict that explicitly remembers when it’s been changed; I think this is too big and expensive for an incomplete solution, solving what is ultimately a small problem.)

What should we do? Let’s start by breaking down the problem into smaller chunks: what should inspect.get_annotations do for each of the supported formats?

For VALUE format, if o.__annotations__ is set, inspect.get_annotations(o, VALUE) will simply return a copy of it. So if the user overwrites or modifies o.__annotations__, VALUE format will automatically reflect those changes. And if the user deletes o.__annotations__, o will drop its reference to __compute_annotations__, and inspect.get_annotations(o, VALUE) will return an empty dict–which is the correct behavior. VALUE format already works fine in all scenarios.

What about HYBRID format? Consider this observation: if o.__annotations__ is set to a value, that means that the annotations dict must be computable–conceptually, all the values needed to compute the annotations are defined. Which means that if we computed HYBRID format right now, it would turn out identically to VALUE format! There wouldn’t be any undefined names we’d need to wrap with a ForwardRef.

Therefore, when you call inspect.get_annotations(o, HYBRID), the first step is to see if o.__annotations__ is set. If it is, return a copy of it, just like VALUE format does… because that’s the correct value. And if the user overwrites or modifies o.__annotations__, by definition o.__annotations__ must be set. So in all scenarios where user code modifies annotations dicts, HYBRID format simply works the same as VALUE–which means it’s in good shape too.

(HYBRID format only tries running o.__compute_annotations__ in a “fake globals” runtime environment if o.__annotations__ isn’t defined, and if o.__compute_annotations__(HYBRID) doesn’t return a dict. Since this presupposes that o.__annotations__ isn’t set, we simply can’t have the sticky problem of “the user modified the existing annotations dict” by definition.)

It’s STRING and FORWARDREF formats where we run into a problem. We can’t return o.__annotations__ like the other two formats. And we can’t simply turn the annotations values into strings with repr, like this:

return {k: repr(v) for k, v in o.__annotations__.items()}

because that computes the repr of the value of the annotation, rather than reproducing the original source code of the annotation. These two strings are often very, very different.
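A quick demonstration of the mismatch:

def f(x: int): ...

repr(f.__annotations__['x'])
# "<class 'int'>" -- but the original source text was just "int"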

If the user overwrites or deletes o.__annotations__, a request for STRING and FORWARDREF formats will return an empty dict, which is correct. The real unsolved problem here is when the user modifies o.__annotations__ in situ, then asks for STRING or FORWARDREF format. We don’t have a good way of detecting and handling this. In this case we’d call the out-of-date __compute_annotations__ method and return stale data.

A cursory examination of code in the wild suggests this won’t be a major problem. Most of the time, third-party code that manually creates annotations overwrites o.__annotations__, or sets them on an object that didn’t define any annotations at compile time. That will all work fine. Code that modifies the existing __annotations__ dict, on an object that had annotations defined at runtime, seems quite rare. In the discussion around this point, Carl found eight examples of existing code in published third-party libraries that modify o.__annotations__ directly, including three in attrs. The good news: only one of the eight would actually result in stale data–the others would all produce correct results in practice. (And nope, the bad one wasn’t in attrs.)

I think we’ve now whittled this problem to be small enough that we can just mention it in the inspect.get_annotations documentation, as follows:

If you directly modify the o.__annotations__ dict, by default these changes may not be reflected in the dictionary returned by inspect.get_annotations when requesting either STRING or FORWARDREF format. Rather than modifying o.__annotations__ directly, consider replacing o.__compute_annotations__ with a function that computes the annotations dicts with your desired values. Failing that, it’s best to overwrite o.__compute_annotations__ with None, or delete o.__compute_annotations__, to prevent inspect.get_annotations from generating stale results for STRING and FORWARDREF formats.
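In code, that last-resort guidance looks like this:

o.__annotations__['extra'] = int     # in-place modification...
o.__compute_annotations__ = None     # ...so prevent stale STRING/FORWARDREF results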

Now, let’s bring all these semantics together, and write a simplified pseudocode version of inspect.get_annotations. I’ll elide a lot of border case error handling code, and just concentrate on the main conceptual flow:

def get_annotations(o, format):
    if format == VALUE:
        return dict(o.__annotations__)

    if format == HYBRID:
        try:
            # if all the real values are computable, HYBRID is
            # identical to VALUE
            return dict(o.__annotations__)
        except NameError:
            pass

    if not hasattr(o, '__compute_annotations__'):
        return {}

    c_a = o.__compute_annotations__
    try:
        return c_a(format)
    except NotImplementedError:
        if not supports_fake_globals(c_a.__code__):
            return {}
        c_a_with_fake_globals = rebind_with_fake_globals(c_a, format)
        return c_a_with_fake_globals(VALUE)

It seems important that inspect.get_annotations should never itself raise NotImplementedError(). For example, hand-written __compute_annotations__ functions will often call inspect.get_annotations to actually calculate the annotations; they should be able to rely on inspect.get_annotations abstracting away this error state. Instead, whenever inspect.get_annotations is run on something where it can’t produce proper output, it returns an empty dict. This is already the defined API for inspect.get_annotations and I think it should be preserved.


Finally: PEP 649 never specified how it would interact with PEP 563, “Postponed Evaluation Of Annotations”, aka “stringized annotations”. If you activate from __future__ import annotations, should Python still generate __compute_annotations__ functions? I think the answer is “no”. It would complicate the implementation of 649, and there’s no user benefit to delayed evaluation of hard-coded strings.

However, Carl had a novel suggestion here to make the transition easier from 563 to 649–and it’s a good one. He proposed the following small hack: if o is an annotated object from a module that has from __future__ import annotations active, change inspect.get_annotations(o, STRING) to return the (stringized) annotations from o. This means that users currently relying on stringized annotations can immediately switch to calling inspect.get_annotations(o, STRING), then turn off the __future__ import at their leisure.

(You can’t directly detect whether or not a module has a particular from __future__ feature enabled. But there’s a reliable indirect way to tell: from __future__ import annotations really does import an object called annotations, an instance of __future__._Feature. You can just check to see if that exists.)
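Sketched out, that check relies only on documented behavior of the __future__ module:

import __future__

def has_stringized_annotations(module):
    # "from __future__ import annotations" binds the _Feature instance
    # as a module attribute named "annotations"
    return getattr(module, 'annotations', None) is __future__.annotations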

This will have the curious side effect of making this expression true:

inspect.get_annotations(o, STRING) == inspect.get_annotations(o, VALUE)

when o is defined in a module with stringized annotations enabled. Otherwise this expression would never be true. It’s a little weird, but if we document it and explain our reasons I think our users will thank us.

That’s the only change I plan to make in PEP 649 regarding PEP 563 and stringized annotations. I don’t plan to modify how stringized annotations work, and Python won’t generate __compute_annotations__ functions for any of the objects in a module when from __future__ import annotations is active.


Polls

Should we rename STRING format?

  • No, keep the name STRING format.
  • Yes, change it to STRINGIZED format.
  • Yes, change it to SOURCE format.
  • Yes, change it to CODE format.
  • Yes, change it to SOURCE_CODE format.
  • Yes, but my vote is for a new name in the comments.

Should we rename HYBRID format?

  • No, keep the name HYBRID format.
  • Yes, change it to PROXY format.
  • Yes, change it to FORWARDREF format. (I voted against the separate FORWARDREF format.)
  • Yes, but my vote is for a new name in the comments.

Should PEP 649 initially be gated behind a from __future__ declaration?

  • No, it should be the default behavior immediately.
  • Yes, let’s not make it default behavior right away.

Should inspect.get_annotations (and __compute_annotations__) support FORWARDREF format?

  • No. Why add support for something nobody needs? YAGNI.
  • Yes, we might as well / I have a good use case.

Isn’t that information encoded in co_flags on callables defined in that module?


Apparently so:

$ python -q
>>> def f():pass
...
>>> hex(f.__code__.co_flags)
'0x43'
>>>

$ python -q
>>> from __future__ import annotations
>>> def f(): pass
...
>>> hex(f.__code__.co_flags)
'0x1000043'
>>> 
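That extra high bit is CO_FUTURE_ANNOTATIONS, which you can confirm against the feature object itself:

>>> import __future__
>>> hex(__future__.annotations.compiler_flag)
'0x1000000'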

It’s a callable that causes the function to have __annotations__ when it didn’t before, yes?

… Why not just __annotate__?


I always thought of __co_annotations__ as “coannotations”, from Latin “co-“ meaning “together”, similar to coordinate. I think it’s a great name.

With regards to the future import, let me say as RM I am not comfortable with a change this massive–one that’s still being designed after the last alpha, less than a month before beta 1–going into 3.12 without a future import. I think I would prefer a future import even if it was targeting 3.13, because I think we will end up discovering quite a few bugs and unexpected interactions, and the easiest way to deal with them from a user’s perspective is to not opt in to the new behaviour… but I can live without a future import, if that’s the consensus, if we can get it into one of the first alphas.


I didn’t realize! That’s probably better.

One advantage of the “look up the _Feature attribute in the module” approach is that it hypothetically allows user code to opt out of this “return VALUE format for STRING format” hack. If you don’t want that behavior, simply rename (or delete) the _Feature object after importing it. Or even as part of the import:

from __future__ import annotations as goober

Oh my! That’s a great suggestion. Must have been staring me in the face for, what, most of two years?

The only downside is that it leaves us with two attributes with very similar names. That could be confusing, perhaps particularly for non-native speakers of English who aren’t familiar with the word.

But if Python can survive __getattr__ vs __getattribute__, maybe it can survive this.

I’m starting to fall in love with just calling it __annotate__. It has the “functions should be verbs” feature that I prefer, and naturally when you “annotate” you are left with “annotations”. Great stuff.

That’s a sweet thought, but I think it’s highly non-obvious. I think we can find a name with a more obvious meaning.

An entirely fair viewpoint, particularly from the 3.12 RM. I’m uncomfortable too, as I’d have to write all this at the last minute if it’s going into 3.12. It’s a big pile of work and time is running short.

My plan is to propose in the PEP what the majority says it wants. But the actual decision isn’t up to me; it’s up to the Steering Council. Their ruling on PEP 649 can modify it, e.g. “this must be gated with a from __future__ import”, “we’re too late for 3.12 and it should go into 3.13”, etc. Ultimately I’ll be relying on them to tell me how to proceed.


I realized a couple hours after I created this topic: oops, it should probably have been in the PEPs category. Sorry for the miscategorization, folks! I’d say “it won’t happen again”, but… it probably will.

edit: Apparently you can just move topics around willy-nilly. @davidism already moved this topic to the arguably-more-correct PEPs category. Thanks, @davidism!

A little peek behind the curtain for you. This discussion was borne out of a lot of thinking about 649, and a massive multi-week email thread between four people (but mostly volleys between Carl and myself). We proposed a lot of things, and opinions differed; some ideas were abandoned, others were merely tabled for later. Here’s the most interesting of those alternate ideas, boiled out of that discussion and presented here for your interest… in case you’re not already bored with reading about this!


In my previous big 649 thread, Petr Viktorin proposed we “just store the strings”, rather than reverse-engineer them by running __compute_annotations__ in a “fake globals” runtime environment. That idea has definite merit. I’m not averse to this approach, but tabled it for now for three reasons.

First, we already know we need to support HYBRID format, which itself requires all the same machinery we’d need to reverse-compute STRING format. So, in a way, by doing the work to implement HYBRID format–which we’re already committed to doing–we get STRING format for just a little more work. In contrast, “just store the strings” would require a lot of novel work: instrumenting the compiler to also store the source code for the annotations somewhere we can get at them at runtime, then modifying inspect.get_annotations to return that when asked for STRING format.

Second, there was some worry about the memory consumption of the annotation source code strings, particularly as they’d rarely get examined at runtime. Petr was aware of this, and as part of his initial proposal suggested adding a lazy-loading facility–though this proposal was only in the abstract. Adding a lazy-loading facility to modules is something we’ve been talking about for a while now, but no firm proposal exists yet, much less an implemented solution–in part because we haven’t really needed it. Adding such a facility would probably also have to be part of this approach, yet right now we don’t know how best to implement it. (I actually had a brainstorm about this a day or two back and posted a new topic outlining my idea. But so far that’s only an idea.)

Third, it’s not clear to me precisely what “just store the strings” means. The obvious meaning is, “store all the text of the annotation, starting immediately after the colon and continuing until immediately before either the comma or the close parenthesis that ends the annotation”. But this seems a little strange once you mix in newlines and comments. Consider this code sample:

def foo(a: typing.Union[
    int, # obviously!
    str, # if you think about it, we need this too
    unloaded_module.MonkeyBusiness # surprised? don't be!
    ]): pass

The literal “just store the strings” annotation for this would be

' typing.Union[\n    int, # obviously!\n    str, # if you think about it, we need this too\n    unloaded_module.MonkeyBusiness # surprised? don't be!\n    ]'

What a mess! Is that really what we want? If so, then okay, this isn’t a real concern. But if we want to “clean it up” before returning it for STRING format, we’d have to figure out what “cleaning it up” meant–how far to take it. Stripping the string of leading and trailing whitespace makes sense. Strip comments? Probably. Convert newlines into spaces, and normalize non-quoted spaces into a single space? Not sure.

I’m not claiming this third concern is a showstopper, just that it’s pretty up in the air at the moment.

I suspect if we ended up going this route for whatever reason, the answer to the third question should be to use the existing PEP 563 “AST to string” implementation (which implicitly “cleans up” all syntactic details that don’t exist in the AST), both because that implementation already exists and is battle-tested (and as far as I’m aware hasn’t really caused any trouble, either in terms of maintenance or users complaining about the strings it produces), and because it would preserve existing behavior perfectly for existing users of PEP 563.
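For illustration, ast.unparse takes essentially the same “AST to string” approach; here’s what it yields for the foo example above (whitespace normalized, comments gone):

import ast

src = '''
def foo(a: typing.Union[
    int, # obviously!
    str, # if you think about it, we need this too
    unloaded_module.MonkeyBusiness # surprised? don't be!
    ]): pass
'''
annotation = ast.parse(src).body[0].args.args[0].annotation
print(ast.unparse(annotation))
# typing.Union[int, str, unloaded_module.MonkeyBusiness]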

If this preserved the original source code apart from formatting cleanups, cool cool. But my understanding is that by the time the AST emerges from the compiler, some simple transformations have already been applied to it (strength reduction? constant folding?). If our goal with this is total fidelity to the original source code, this would be an improvement–many more optimizations get applied before the bytecode is finalized–but it wouldn’t be 100% faithful.


PEP 563 already takes care to avoid this problem, by skipping AST optimization of annotations entirely when from __future__ import annotations is active, see e.g. cpython/Python/ast_opt.c at main · python/cpython · GitHub and other checks for CO_FUTURE_ANNOTATIONS in that file.

There would be a slight wrinkle if we were reusing this implementation as part of PEP 649, in that we couldn’t (at least not easily) disable the AST optimizations only when building strings: there’s only one copy of the AST available in compilation. But I think there’d be little downside to just always skipping AST optimization of annotations; under 649 they still wouldn’t be executed by default, and we’ve already decided that introspecting them is not performance sensitive (raising and catching NotImplementedError will be much slower than whatever the performance difference is from not constant-folding.)

Anyway, this is all a bit of a digression, since the current plan is to reconstruct the strings at runtime via fake globals, not preserve them in compilation.


I personally don’t know how critical that is in the end, especially if string annotations are on their way out long-term. Are optimizations at the AST level going to be that transformative for anyone that it would break something (serious question; please speak up if it will!)?

I don’t think stringized annotations are on their way out, because their use cases are still relevant. My understanding is that documentation tools (e.g. pydoc) will want to examine the “stringized” (or original source code, etc) annotations for the foreseeable future.


From the documentation perspective: Sphinx may benefit from faithful source-code reproductions, especially as we have requests to render documentation annotations exactly as in source code (e.g. not resolving type aliases, etc). However, to allow cross-referencing to work, we manually convert Python objects to reStructuredText syntax. Perhaps a future Sphinx would allow passing through the pure string representation of annotations as found in source code, without automatic cross-referencing.

Hopefully of some use to explain requirements from one documentation tool wrt 649!

A

Thanks for thinking my quick naïve idea through!
And please let me know if my brainstorming isn’t useful.

I’m wondering if it’s possible to get the annotation’s tokens from the compiler. Then you could concatenate the relevant tokens. Without comments, and with normalized whitespace (which would involve listing pairs of token types that need a space inserted).
That would mean new optimizations in future Python versions don’t change the strings.

Hang on. That works for annotated functions, but not annotated classes or modules.

Thought experiment: I have a class containing no methods, in a module containing no functions. How do I use this technique to determine whether or not from __future__ import annotations is active for that class?

In the interest of doing something simple that works, I may yet stick to the “look for the annotations module attribute” approach.

Surely Sphinx is parsing the source code though, right? It’s not executing the code and trying to extract documentation at runtime…


Currently Sphinx (specifically, when using sphinx.ext.autodoc) does import objects[1] to document at runtime – see the warning at the top of the documentation. Moving to a parser-based approach is on the list of things to very strongly consider, but unfortunately I/we are constrained by resources and time (as always!).

A


  1. The main entry point is here (the object is stored as self.object): sphinx/sphinx/ext/autodoc/__init__.py at 188b869fa23d43be96b64e80987d12069743b9b5 · sphinx-doc/sphinx · GitHub ↩︎


That’s a good point. Classes and modules also have code objects, but as far as I know you can’t get to those after the class or module has been executed.