Finding edge cases for PEPs 484, 563, and 649 (type annotations)

Edge case: Import cycles

It’s common for annotations to require extra imports, and these imports can sometimes cause cycles. Example:

# x.py
from __future__ import annotations
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from y import Y

def xf(o: Y): ...

class X: ...

# y.py
from __future__ import annotations
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from x import X

def yf(o: X): ...

class Y: ...

Fails for…

  • PEP 484: you’d need to manually quote the types, e.g. def yf(o: "X"), to be able to import this code
  • PEPs 484, 563, and 649 if you try to use typing.get_type_hints

See also The `if TYPE_CHECKING` problem · Issue #1 · larryhastings/co_annotations · GitHub
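A minimal stdlib-only reproduction of the typing.get_type_hints failure (decimal.Decimal here is just a stand-in for a type that is only imported under TYPE_CHECKING; it is not from the example above):

```python
from typing import TYPE_CHECKING, get_type_hints

if TYPE_CHECKING:
    from decimal import Decimal  # visible to type checkers only

def pay(amount: "Decimal") -> None: ...

# At runtime the name was never imported, so resolving the hints fails:
try:
    get_type_hints(pay)
except NameError as exc:
    print(exc)  # name 'Decimal' is not defined

# Workaround: supply the missing name explicitly.
import decimal
hints = get_type_hints(pay, localns={"Decimal": decimal.Decimal})
print(hints)
```

The workaround puts the burden on whoever calls get_type_hints, which is exactly why `if TYPE_CHECKING` imports are a problem for all three PEPs.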

One more failure case for if TYPE_CHECKING:

  • PEP 649: help(xf) and help(yf) cannot show the type hints without manual quoting.
    • PEP 484 needs manual quoting anyway.
    • Sphinx autodoc and IPython support stringified annotations too.

FWIW, I’m generally in favor of PEP 649 (deferred evaluation) becoming the default. All issues it has that I know about† can be worked around by using a string literal in place of a type, which is no worse than the status quo of PEP 484 (runtime execution).

† Mainly: (1) Inability to define a class with members that recursively reference the parent class if the parent class uses a class decorator. (2) Inability to refer to a type only available inside an if TYPE_CHECKING block.

Edge case: resolving an introspected type annotation within an object scope

As demonstrated in this other discussion, using introspection to get a function’s return type and match it with an object available in its scope (or its parent’s scope) is not always possible without PEP 563. To properly resolve the return type, one needs to retrieve it exactly as it was typed in the source.

Fails for…

PEP 649. Not sure about PEP 484.

To be clear, the poster here wants the stringified version of the original type. This is not so easy if the type is e.g. tuple[T, T][int] since that has become tuple[int, int] by the time the annotation has been objectified (both with PEP 484 and with PEP 649).
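To illustrate the flattening (a small sketch, not from the original discussion):

```python
from typing import TypeVar

T = TypeVar("T")

# Without `from __future__ import annotations`, the annotation object is
# built eagerly and the TypeVar substitution happens immediately:
def pair() -> tuple[T, T][int]: ...

print(pair.__annotations__["return"])  # tuple[int, int]
```

Under PEP 563 the raw string 'tuple[T, T][int]' would be preserved instead, which is what makes recovering the source spelling possible there.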


Edge case with PEP 563: using annotations not defined in the module scope

(Please note, this is taken more-or-less verbatim from the related pydantic issue).


from __future__ import annotations
from pydantic import BaseModel

def main():
    from pydantic import PositiveInt

    class TestModel(BaseModel):
        foo: PositiveInt



This is not a fundamental problem with types being left as strings, but rather with how PEP 563 was implemented:

Annotations can only use names present in the module scope as postponed evaluation using local names is not reliable (with the sole exception of class-level names resolved by typing.get_type_hints() )

Of course, I’ve used from pydantic import PositiveInt above, but it could be any import or a custom type defined within the function, including another pydantic model. It could even be a simple type alias like Method = Literal['GET', 'POST'].
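The same failure can be reproduced with just the stdlib, using the Method alias mentioned above (a sketch; pydantic isn’t needed):

```python
from __future__ import annotations
import typing

def main():
    Method = typing.Literal['GET', 'POST']  # function-local alias

    class Request:
        method: Method

    # PEP 563 stores 'Method' as a string; get_type_hints() evaluates it
    # against the module namespace, where the local name does not exist.
    try:
        typing.get_type_hints(Request)
    except NameError as exc:
        return exc

error = main()
print(error)  # name 'Method' is not defined
```

Note that the failure happens even while the local name is still alive: get_type_hints simply has no way to see function locals.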

Personally I find this very confusing, as it’s very different from the way Python scoping works otherwise.

(Sorry if this has been mentioned above, I thought it best to add my example for completeness.)


Łukasz’s (@ambv) blog post on the topic contains several edge cases and explanation around them.


I’ve made a suggestion at Recursive dataclasses · Issue #2 · larryhastings/co_annotations · GitHub that I think could resolve all the PEP 649 edge cases mentioned here, with some tooling support. The idea is that tools that want to resolve annotations with special handling for forward references or runtime-undefined names can eval(somefunc.__co_annotations__.__code__, myglobals, {}) instead of calling somefunc.__co_annotations__() directly, where myglobals is a customized globals dictionary. Depending what exactly is added to the custom globals dictionary, this approach can solve a variety of use cases, including producing high-fidelity “stringified” annotations and allowing forward references in dataclass annotations (and in annotations generally). See the linked comment for a bit more detail.
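A rough sketch of the idea. PEP 649’s __co_annotations__ doesn’t exist yet, so an ordinary function stands in for it here; the trick relies on CPython letting eval() execute a function’s code object, and on exec/eval honoring __missing__ when the globals argument is a dict subclass:

```python
import builtins

class NameFallback(dict):
    # Customized globals: fall back to builtins, then to the bare name
    # itself, which effectively "stringifies" undefined forward references.
    def __missing__(self, name):
        return getattr(builtins, name, name)

def fake_co_annotations():
    # Stand-in for the code object PEP 649 would generate for
    # `def f(x: int, y: Undefined)`; Undefined is deliberately not defined.
    return {"x": int, "y": Undefined}

result = eval(fake_co_annotations.__code__,
              NameFallback(fake_co_annotations.__globals__), {})
print(result)  # {'x': <class 'int'>, 'y': 'Undefined'}
```

Depending on what __missing__ returns (the bare name, a ForwardRef, a sentinel), tools can get stringified annotations, delayed resolution, or something in between.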


That’s a neat idea, Carl. I like PEP 649 because it feels to me like the more correct way to do things. We want a mechanism to defer evaluation of some code-like thing, i.e. the type annotation. Storing the annotation as a simple text string is one way to defer evaluating it, but it has downsides: you lose lexical scoping, because the string object doesn’t know what lexical environment it was defined in.

I’ve done some work on introspection tools that use type annotations to generate entity-relationship diagrams. If PEP 649 was accepted, I would need a way to handle something like if TYPE_CHECKING: imports. Your proposal would help solve that.


Yesterday there was a mypy issue involving some real-world code that poses another interesting edge case: classes that mutually refer to each other in their base classes:

from dataclasses import dataclass
from typing import Generic, TypeVar

T = TypeVar("T")

@dataclass
class Parent(Generic[T]):
    key: str

class Child1(Parent["Child2"]): ...

class Child2(Parent["Child1"]): ...

(dataclass isn’t strictly necessary in this)

Mentioning it since I believe this kind of thing may be a problem for Larry Hastings’s forward class declaration idea.
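At runtime, the quoted mutual references survive only as unresolved ForwardRefs inside the original bases (a small sketch of the same shape, minus dataclass):

```python
import typing
from typing import Generic, TypeVar

T = TypeVar("T")

class Parent(Generic[T]):
    key: str

class Child1(Parent["Child2"]): ...

class Child2(Parent["Child1"]): ...

# The string in the base class is stored as an unresolved ForwardRef,
# since base classes must be evaluated eagerly at class creation time:
ref = typing.get_args(Child1.__orig_bases__[0])[0]
print(ref.__forward_arg__)  # Child2
```

Any runtime tool that wants the actual class has to resolve the ForwardRef itself, after both classes exist.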

Sorry for the late reply on these topics. I clearly should visit my local planning department more often :slight_smile:

Edge case: decorators that use inspect.signature + forward references, even though client code is not using annotations directly

This is somewhat similar to the forward-referencing annotations in dataclass, though we’re not looking at the contents of the annotation here.

inspect.signature is the stdlib’s high-level way to introspect callables. Currently it evaluates __annotations__ eagerly when given a function.

Here’s an example in which a decorator tries to report a parameter that it is effectively adding to a wrapped function’s signature.

import inspect

def with_loglevel_param(func):
    sig = inspect.signature(func)

    def wrapper(*args, loglevel=1, **kwargs):
        return func(*args, **kwargs)

    wrapper_sig = inspect.signature(wrapper)

    # Report the added loglevel parameter in the wrapped function's signature:
    wrapper.__signature__ = sig.replace(
        parameters=[
            *sig.parameters.values(),
            wrapper_sig.parameters['loglevel'],
        ],
    )
    return wrapper

@with_loglevel_param
def my_func(spam, ham: MyParamType): ...

class MyParamType: ...


  • :negative_squared_cross_mark: Eager evaluation (PEP-484, PEP-3107): includes a forward reference
  • :yellow_circle: Stringified postponed evaluation (PEP-563): it runs, though you could imagine issues figuring out which globals to evaluate each parameter’s annotations with, should with_loglevel_param and my_func both have annotations and be defined in separate modules
  • :negative_squared_cross_mark: :yellow_circle: Descriptor postponed evaluation (PEP-649): Using inspect.signature defeats the postponed evaluation

I could imagine a backwards-compatible change to inspect.Signature/Parameter introduced by PEP-649’s implementation would resolve this. For example:

  • it would continue to postpone the evaluation of __annotations__ until it is accessed on a parameter
  • replace(get_[return_]annotation=X) would treat X as a callable that returns the annotation value
  • replace([return_]annotation=X) would treat X as the value of the annotation (equivalent to replace(get_[return_]annotation=lambda: X))

This would do nothing to solve cases where the decorator evaluates the annotation directly, unless such decorators also find ways to defer evaluating annotations. That doesn’t really work if the annotations are needed immediately; for instance, I’m not sure dataclass could implement its ClassVar/InitVar support while deferring annotation evaluation until instantiation.

Another caveat is that this dissociates errors in annotation definitions even further, but from what I understand PEP-649 would end up showing the site of annotations’ definition in the stack trace. (Speaking of stack traces, this solution would add a few functions and frames when using inspect.)


I introduced sigtools.modifiers a while ago while migrating clize to rely on Python 3 features (annotations and keyword-only parameters) while keeping Python 2 compatibility. At the time, the docs prominently recommended modifiers.kwoargs, modifiers.autokwoargs and modifiers.annotate. Over time, I updated the docs to prioritize Python 3 syntax, then eventually removed mentions of sigtools.modifiers completely.

During this time, its usage spread, including in third-party tutorials, so I am committed to keeping support for it in future Python versions. (There are some usages detectable in a GitHub public search, but I imagine most users of clize don’t publish their work or run their scripts in a way that would surface DeprecationWarnings.) Anecdotally, clize and sigtools still use it despite officially dropping Python 2.7 support, as removing code that existed to guarantee Python 2 compatibility isn’t generally a high priority.

I recognize that it is somewhat unlikely that you’ll see code that mixes both styles like this:

def version_whiplash(one: Int, two: Int = 2): ...

But I imagine it could occur across modules in larger codebases. I suspect, but haven’t confirmed, that sigtools.modifiers could be updated to avoid eager evaluation.

modifiers.annotate also has the same problem, and in addition needs a way (under PEP-563) to assign arbitrary values as strings into annotations (it assigns only the __signature__ attribute rather than __annotations__) in a way that won’t confuse third-party tools reading from __signature__.

As a side note, I’ve been working on having sigtools support PEP-563, and gravitated toward a solution that pairs annotations with where they were defined, which builds something somewhat similar to what PEP-649 proposes, except on a per-parameter basis instead of on the whole __annotations__ dict (part of sigtools’ functionality is to attribute each parameter to the function that originally created it, e.g. through decorators, so two parameters’ annotations could be evaluated differently).

It does seem odd that PEP-563 changes the meaning of __annotations__ completely, making it more difficult to support both versions (or modules compiled with different future flags) than necessary.


I know this is old, but I think it’s still worth adding one more case (after a quick search I think this is new here).

See Is there a way to access parent nested namespaces? - #4 by dwreeves by @dwreeves.

Using from __future__ import annotations changes the reference count of the objects used in a type annotation, and so makes some code that is valid before PEP 563 impossible after it.

For example:

# from __future__ import annotations
import gc

def nested1():
    Bar = 'this is Bar'

    def nested2():
        class MyClass:
            bar: Bar
        return MyClass

    print('Total scopes referencing Bar:', len(gc.get_referrers(Bar)))
    return nested2

nested1()


I get Total scopes referencing Bar: 3 printed, but if I uncomment from __future__ import annotations, I get Total scopes referencing Bar: 1.

In other words having the Bar annotation as a python object keeps Bar from being GCed.

It seems this case is impossible to work around with PEP 563.

I’m confused about what you’re claiming here. Reference counts and GC are not part of the language specification. The message you reference uses sys._getframe() which also isn’t (the _ is meant as a hint there). So if you’re using this as an argument against PEP 563 I think you’re reaching. It’s like claiming that there’s a change because the size of the “python” executable changed.

I’m not arguing anything. Indeed I’m using from __future__ import annotations more and more myself.

I’m simply pointing out another edge case for PEP 563 - as per the title of this discussion.

One that is slightly subtler and harder to work around than the case I already provided above.

I only mentioned the sys._getframe() workaround, as you had previously suggested it.

Unlike your earlier edge case, this one leaves me cold – it doesn’t help me at all when deciding between the three proposed semantics.

It seems weird to frame this in terms of garbage collection. In case it wasn’t immediately obvious to anybody else, the problem is simply that Bar is not in nested2’s namespace because Bar in bar’s annotation is a string. You just need to reference Bar somehow, e.g. by doing def nested2(Bar=Bar): ....

It definitely wasn’t clear to me that that was what Samuel was on about…

Why would you care whether Bar is in nested2’s namespace? What does that phrase even mean to you?

IIUC Samuel wants to call sys._getframe() to extract the value of Bar from the callee’s local namespace and eval it e.g. by passing it to get_type_hints.

Having it be in the frame’s namespace is mainly useful for manually tracking which locals/globals are needed for a later call to get_type_hints. As a more expanded example, suppose we have a decorator that uses type annotations to add some functionality to a class (like serialization/deserialization). You want to do:

@configurable
class Foo:
  x: int
  y: ComplexType

Here the configurable decorator may want to evaluate ComplexType later. Ideally it would do something like:

def configurable(cls):
  annotations = get_type_hints(cls)

and be done, but it’s possible that evaluating type hints eagerly like this will fail. Maybe ComplexType is a recursive type defined later. One way to handle this case is to do something like:

from inspect import currentframe

def configurable(cls):
  current_frame = currentframe()
  caller_frame = current_frame.f_back
  # Save the caller frame and the class in a registry, and delay the call to
  # get_type_hints until later, using the caller frame's namespaces.
  return cls

I can add more code for a full example that works depending on whether the annotation’s names are in the namespace or not. I think these snippets capture the core idea; I’ve done similar things for runtime introspection needs.
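A fuller sketch along those lines (resolve_all and the registry shape are my own strawman, building on the snippet above):

```python
import sys
from typing import get_type_hints

_registry = []

def configurable(cls):
    # Save the caller's frame along with the class; resolve annotations later.
    _registry.append((cls, sys._getframe(1)))
    return cls

def resolve_all():
    # Evaluate each class's annotations in the namespaces that were in
    # effect where the class was defined.
    return {cls: get_type_hints(cls, globalns=frame.f_globals,
                                localns=frame.f_locals)
            for cls, frame in _registry}

@configurable
class Foo:
    x: int
    y: "ComplexType"  # quoted: not defined yet when the decorator runs

class ComplexType: ...  # e.g. a recursive type defined later

hints = resolve_all()[Foo]
print(hints)
```

One caveat: holding frames in the registry keeps their entire namespaces alive, so a real implementation would want to drop entries once resolved.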
