Finding edge cases for PEPs 484, 563, and 649 (type annotations)

If mod1.py contains:

from __future__ import annotations
from dataclasses import dataclass

class A: ...

@dataclass
class B:
    a: A

And mod2.py contains:

from dataclasses import dataclass
from typing import get_type_hints

from mod1 import B

@dataclass
class C(B):
    x: int

print(get_type_hints(B.__init__))  # works fine
print(get_type_hints(C.__init__))

Fails for PEP 563

Traceback (most recent call last):
  File "", line 11, in <module>
  File "/home/tmk/.conda/envs/py10/lib/python3.10/", line 1836, in get_type_hints
    value = _eval_type(value, globalns, localns)
  File "/home/tmk/.conda/envs/py10/lib/python3.10/", line 324, in _eval_type
    return t._evaluate(globalns, localns, recursive_guard)
  File "/home/tmk/.conda/envs/py10/lib/python3.10/", line 688, in _evaluate
    eval(self.__forward_code__, globalns, localns),
  File "<string>", line 1, in <module>
NameError: name 'A' is not defined

Though you can make this work by passing vars(sys.modules[B.__module__]) to get_type_hints().
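A self-contained sketch of that workaround (single module here, so the PEP 563 string annotation is written by hand; cross-module, the extra namespace argument is what makes the difference):

```python
import sys
from dataclasses import dataclass
from typing import get_type_hints

@dataclass
class B:
    a: "A"  # what PEP 563 turns the annotation into

class A: ...

# Resolve against the namespace of the module that defined B, rather
# than whatever module happens to call get_type_hints():
defining_ns = vars(sys.modules[B.__module__])
hints = get_type_hints(B.__init__, localns=defining_ns)
```

The same call form works when the caller lives in a different module that never imported A.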

Larry, that feels like an unnecessary putdown.


Brett can speak for himself, but the last note from him on that thread that I found said:

“I’m fine with tossing this whole idea out”

But anyway, the limitation there is that there’s no way (at least with mypy?) to define a recursive type. That’s a limitation, but I don’t know that it’s the kind of edge case being talked about here.

My understanding of this topic is that it’s about existing behavioral differences between the different PEPs. So I locally checked out the 649 branch and ran a portion of my tests to see the differences. The NamedTuple one looks like a minor bug. The dataclass one looks similar to the other dataclass issues. My intent here was mainly to document situations where runtime type behavior may change.

I will also clarify: while I use runtime type inspection heavily, to my knowledge both the 563 and 649 edge cases can be handled with manual string quoting, so there is always a fallback solution. As long as that works, I find it reasonable for a few edge cases to need manual string escaping.
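For concreteness, the manual-quoting fallback looks like this (SomeLater is a placeholder name for illustration):

```python
from typing import get_type_hints

# The author controls the exact string that is stored, so this resolves
# the same way under PEP 484, 563, and 649 semantics.
def f(x: "SomeLater") -> None: ...

class SomeLater: ...

# By the time anyone asks for hints, SomeLater exists at module scope.
print(get_type_hints(f))
```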

Then it seems you’ve misunderstood the topic. This is a call for people to describe what Brett called “edge cases”: situations where the technologies described in the PEPs don’t correctly handle a (valid) use case for annotations. Brett wants us to consider the design of each PEP, and what it does and doesn’t permit; “existing behavioral differences” suggests an examination of the current implementations of each of the PEPs, which is not the same thing. Bug reports, where the behavior is clearly not intended, are outside the intended scope of this discussion.

I don’t think there’s much of a distinction between the two; design and implementation strongly overlap for this issue. For some of the solutions mentioned for certain bugs, like suppressing name errors or lazy descriptors, it is unclear to me whether that’s a design choice or an implementation detail.

As an explicit example: are the dataclass examples bug reports or design issues? That’s unclear to me. The only example I listed that I’d consider a likely implementation detail is the NamedTuple one, which I mentioned in my first comment.

For any example left here, assume PEP 649 is perfectly implemented and there is still a scenario that does not work as coded. The workarounds just illustrate how to deal with the issue today, and are simply a way to help show further why an edge case is problematic.

If you’re not sure about PEP 649 semantics, I’m sure @larry and other folks can clarify.

That’s covered by my opening edge case around recursive types.

When bugs in specific PEP implementations do come up here (which is natural), it would be good to recognize them for what they are (bugs where the implementation deviates from the PEP) and to link to the place where they are being tracked.

PEPs aren’t perfect, neither are implementations. Both may have potential issues to address that these conversations can reveal.


Edge case: Cross-module dataclass inheritance breaks get_type_hints

This is from Issue 45524: Cross-module dataclass inheritance breaks get_type_hints - Python tracker

If the main module contains:

from __future__ import annotations

import foo
import dataclasses
import typing

@dataclasses.dataclass
class B(foo.A):
    pass

typing.get_type_hints(B.__init__)

And foo.py contains:

from __future__ import annotations

import collections
import dataclasses

@dataclasses.dataclass
class A:
    x: collections.OrderedDict

Then running the main module gives an error:

Traceback (most recent call last):
  File "...\", line 11, in <module>
  File "...\Lib\", line 2005, in get_type_hints
    value = _eval_type(value, globalns, localns)
  File "...\Lib\", line 336, in _eval_type
    return t._evaluate(globalns, localns, recursive_guard)
  File "...\Lib\", line 753, in _evaluate
    eval(self.__forward_code__, globalns, localns),
  File "<string>", line 1, in <module>
NameError: name 'collections' is not defined

Fails for …

  • PEP 563. Works if the __future__ statements are removed.
  • Works with PEP 649.
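If it helps to poke at this without two files on disk, here is a self-contained reproduction that synthesizes both modules in memory (the module names foo and fakemain are stand-ins; behavior as observed on recent CPython). It also shows the contrast: resolving the class succeeds, because class-level resolution walks the MRO and uses each defining module's own globals.

```python
import collections
import sys
import types
import typing

# Synthesize foo.py as an in-memory module so this is self-contained.
foo = types.ModuleType("foo")
sys.modules["foo"] = foo
exec(
    "from __future__ import annotations\n"
    "import collections\n"
    "import dataclasses\n"
    "\n"
    "@dataclasses.dataclass\n"
    "class A:\n"
    "    x: collections.OrderedDict\n",
    foo.__dict__,
)

# Synthesize the importing module the same way.
main = types.ModuleType("fakemain")
sys.modules["fakemain"] = main
exec(
    "from __future__ import annotations\n"
    "import dataclasses\n"
    "import foo\n"
    "\n"
    "@dataclasses.dataclass\n"
    "class B(foo.A):\n"
    "    pass\n",
    main.__dict__,
)
B = main.B

# The generated __init__ fails: its stringified annotations are
# evaluated against fakemain's namespace, which never imported
# collections.
try:
    typing.get_type_hints(B.__init__)
    init_resolved = True
except NameError:
    init_resolved = False

# Resolving the class succeeds: get_type_hints walks the MRO and
# evaluates each class's annotations in its defining module's globals.
class_hints = typing.get_type_hints(B)

del sys.modules["foo"], sys.modules["fakemain"]
```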

Edge case: Import cycles

It’s common for annotations to result in extra imports and these imports can sometimes cause cycles. Example:

# x.py
from __future__ import annotations
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from y import Y

def xf(o: Y): ...

class X: ...

# y.py
from __future__ import annotations
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from x import X

def yf(o: X): ...

class Y: ...

Fails for

  • PEP 484: you’d need to manually quote types, e.g. def yf(o: "X"), to be able to import this code.
  • PEPs 484, 563, and 649, if you try to use typing.get_type_hints.

See also The `if TYPE_CHECKING` problem · Issue #1 · larryhastings/co_annotations · GitHub
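The runtime side of that get_type_hints failure can be shown in a single file; here decimal.Decimal stands in for the cyclically-imported name (an assumption purely for illustration):

```python
from typing import TYPE_CHECKING, get_type_hints

if TYPE_CHECKING:
    from decimal import Decimal  # never executed at runtime

def xf(o: "Decimal") -> None: ...

# Whichever PEP produced the string, resolving it at runtime hits the
# same wall: the name only ever existed for the type checker.
try:
    get_type_hints(xf)
    resolved = True
except NameError:
    resolved = False
```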

One more failure case for if TYPE_CHECKING:

  • PEP 649: help(xf) and help(yf) cannot show the type hints without manual quoting.
    • PEP 484 needs manual quoting anyway.
    • Sphinx autodoc and IPython support stringified annotations too.

FWIW, I’m generally in favor of PEP 649 (deferred evaluation) becoming the default. All issues it has that I know about† can be worked around by using a string literal in place of a type, which is no worse than the status quo of PEP 484 (runtime execution).

† Mainly: (1) Inability to define a class with members that recursively reference the parent class if the parent class uses a class decorator. (2) Inability to refer to a type only available inside an if TYPE_CHECKING block.

Edge case: resolving an introspected type annotation within an object scope

As demonstrated in this other discussion, using introspection to get a function’s return type and match it with an object available in its scope (or its parent’s scope) is not always possible without PEP 563. To properly resolve the return type, you need to obtain it exactly as it was typed in the source.

Fails for…

PEP 649. Not sure about PEP 484.

To be clear, the poster here wants the stringified version of the original type. This is not so easy if the type is e.g. tuple[T, T][int] since that has become tuple[int, int] by the time the annotation has been objectified (both with PEP 484 and with PEP 649).
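A quick demonstration of that loss of fidelity: once the alias is parameterized, the original spelling is gone.

```python
from typing import TypeVar

T = TypeVar("T")

# The source-level spelling is tuple[T, T][int]; parameterizing the
# alias substitutes T immediately, so the object that annotation
# machinery ever sees is tuple[int, int].
alias = tuple[T, T][int]
print(repr(alias))  # tuple[int, int]
```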


Edge case with PEP 563: using annotations not defined in the module scope

(Please note, this is taken more-or-less verbatim from the related pydantic issue).


from __future__ import annotations
from pydantic import BaseModel

def main():
    from pydantic import PositiveInt

    class TestModel(BaseModel):
        foo: PositiveInt



This is not a fundamental problem with types being left as strings, but rather with how PEP 563 was implemented:

Annotations can only use names present in the module scope as postponed evaluation using local names is not reliable (with the sole exception of class-level names resolved by typing.get_type_hints() )
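The limitation reproduces without pydantic; a minimal sketch using only get_type_hints (make_func and Local are illustrative names):

```python
from __future__ import annotations
from typing import get_type_hints

def make_func():
    class Local: ...
    def f(x: Local) -> None: ...
    return f

f = make_func()

# PEP 563 stored the annotation as the string "Local", but Local only
# ever existed in make_func's local scope, which get_type_hints cannot
# see once the call has returned.
assert f.__annotations__["x"] == "Local"
try:
    get_type_hints(f)
    resolved = True
except NameError:
    resolved = False
```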

Of course, I’ve used from pydantic import PositiveInt above, but it could be any import or a custom type defined within the function, including another pydantic model. It could even be a simple type alias like Method = Literal['GET', 'POST'].

Personally I think this is very confusing, as it’s very different from the way Python scoping works otherwise.

(Sorry if this has been mentioned above, I thought it best to add my example for completeness.)


Łukasz’s (@ambv) blog post on the topic contains several edge cases and explanation around them.


I’ve made a suggestion at Recursive dataclasses · Issue #2 · larryhastings/co_annotations · GitHub that I think could resolve all the PEP 649 edge cases mentioned here, with some tooling support. The idea is that tools that want to resolve annotations with special handling for forward references or runtime-undefined names can eval(somefunc.__co_annotations__.__code__, myglobals, {}) instead of calling somefunc.__co_annotations__() directly, where myglobals is a customized globals dictionary. Depending what exactly is added to the custom globals dictionary, this approach can solve a variety of use cases, including producing high-fidelity “stringified” annotations and allowing forward references in dataclass annotations (and in annotations generally). See the linked comment for a bit more detail.


That’s a neat idea, Carl. I like PEP 649 because it feels to me like the more correct way to do things. We want a mechanism to defer evaluation of some code-like thing, i.e. the type annotation. Storing the annotation as a simple text string is one way to defer evaluating it, but it has downsides: e.g. you lose lexical scoping, because the string object doesn’t know what lexical environment it was inside.

I’ve done some work on introspection tools that use type annotations to generate entity-relationship diagrams. If PEP 649 were accepted, I would need a way to handle things like if TYPE_CHECKING: imports. Your proposal would help solve that.


Yesterday there was a mypy issue involving some real world code that poses another interesting edge case: classes that mutually refer to each other, but in their base class:

from dataclasses import dataclass
from typing import Generic, TypeVar

T = TypeVar("T")

@dataclass
class Parent(Generic[T]):
    key: str

class Child1(Parent["Child2"]): ...

class Child2(Parent["Child1"]): ...

(dataclass isn’t strictly necessary in this)

Mentioning this since I believe this kind of thing may be a problem for Larry Hastings’s forward class declaration idea.
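At runtime, the quoting in the base-class list is what makes the example importable at all. The PEPs only defer annotations, not base-class expressions, so a hypothetical unquoted version fails eagerly under all three:

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class Parent(Generic[T]):
    key: str

eager_eval_fails = False
try:
    # Base-class expressions are evaluated at class-creation time, so
    # Child2 would have to exist already; no annotation PEP defers this.
    class Child1(Parent[Child2]): ...
except NameError:
    eager_eval_fails = True
```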