Finding edge cases for PEPs 484, 563, and 649 (type annotations)

Edge case: Import cycles

It’s common for annotations to require extra imports, and those imports can sometimes cause cycles. Example:

x.py

from __future__ import annotations
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from y import Y

def xf(o: Y): ...

class X: ...

y.py

from __future__ import annotations
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from x import X

def yf(o: X): ...

class Y: ...

Fails for

  • PEP 484: you’d need to manually quote the types, e.g. def yf(o: "X"), just to be able to import this code
  • PEP 484, 563, and 649: if you try to use typing.get_type_hints (see the sketch just below this list)
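
For the typing.get_type_hints failure, a minimal sketch (assuming the x.py and y.py modules above): the name X is only imported under TYPE_CHECKING, so it is absent from y’s module globals at runtime and the string annotation can’t be resolved.

import typing
import y

# X was never imported at runtime, so resolving the annotation fails.
typing.get_type_hints(y.yf)  # NameError: name 'X' is not defined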

See also The `if TYPE_CHECKING` problem · Issue #1 · larryhastings/co_annotations · GitHub

One more failure case for `if TYPE_CHECKING`.

  • PEP 649: help(xf) and help(yf) cannot show the type hints without manual quoting.
    • PEP 484 needs manual quoting anyway.
    • Sphinx autodoc and IPython support stringified annotations too.

FWIW, I’m generally in favor of PEP 649 (deferred evaluation) becoming the default. All issues it has that I know about† can be worked around by using a string literal in place of a type, which is no worse than the status quo of PEP 484 (runtime execution).

† Mainly: (1) Inability to define a class with members that recursively reference the parent class if the parent class uses a class decorator. (2) Inability to refer to a type only available inside an if TYPE_CHECKING block.
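
If I read failure (1) correctly, a minimal sketch of the pattern looks like this (the Node class is hypothetical); as noted above, falling back to a string literal is the workaround:

from dataclasses import dataclass

@dataclass
class Node:
    value: int
    # Under PEP 649, a bare `parent: Node` annotation here hits failure (1):
    # the @dataclass decorator inspects __annotations__ while the name Node
    # is not yet bound, so the deferred evaluation raises NameError.
    # Quoting the name, exactly as you would under PEP 484, avoids it:
    parent: "Node | None" = None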

Edge case: resolving an introspected type annotation within an object scope

As demonstrated in this other discussion, using introspection to get a function’s return type and match it against an object available in its scope (or its parent’s scope) is not always possible without PEP 563. To resolve the return type properly, you need it exactly as it was written in the source.

Fails for…

PEP 649. Not sure about PEP 484.

To be clear, the poster here wants the stringified version of the original type. This is not so easy if the type is e.g. tuple[T, T][int] since that has become tuple[int, int] by the time the annotation has been objectified (both with PEP 484 and with PEP 649).
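
The loss of fidelity is easy to see with today’s runtime behaviour (a minimal sketch, Python 3.9+):

from typing import TypeVar

T = TypeVar("T")

alias = tuple[T, T]
print(alias[int])  # tuple[int, int] -- the original spelling tuple[T, T][int] is gone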


Edge case with PEP 563: using annotations not defined in the module scope

(Please note, this is taken more-or-less verbatim from the related pydantic issue).

Example:

from __future__ import annotations
from pydantic import BaseModel

def main():
    from pydantic import PositiveInt

    class TestModel(BaseModel):
        foo: PositiveInt

    print(TestModel(foo=1))

main()

This is not a fundamental problem with types being left as strings, but rather with how PEP 563 was implemented:

Annotations can only use names present in the module scope as postponed evaluation using local names is not reliable (with the sole exception of class-level names resolved by typing.get_type_hints())

Of course, I’ve used from pydantic import PositiveInt above, but it could be any import or custom type defined within the function, including another pydantic model. It could even be a simple type alias like Method = Literal['GET', 'POST'].
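
The same limitation shows up without pydantic; here is a minimal sketch using typing.get_type_hints (the Method alias and handler function are made up for illustration):

from __future__ import annotations
from typing import Literal, get_type_hints

def main():
    Method = Literal['GET', 'POST']  # a local type alias

    def handler(m: Method): ...

    # Under PEP 563 the annotation is stored as the string 'Method', and
    # get_type_hints only searches handler's module globals when resolving
    # it, so this raises NameError even though Method is in scope here.
    get_type_hints(handler)

main()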

Personally I think this is very confusing, as it’s very different from the way Python scoping works otherwise.

(Sorry if this has been mentioned above, I thought it best to add my example for completeness.)


Łukasz’s (@ambv) blog post on the topic contains several edge cases and explanations around them.


I’ve made a suggestion at Recursive dataclasses · Issue #2 · larryhastings/co_annotations · GitHub that I think could resolve all the PEP 649 edge cases mentioned here, with some tooling support. The idea is that tools that want to resolve annotations with special handling for forward references or runtime-undefined names can eval(somefunc.__co_annotations__.__code__, myglobals, {}) instead of calling somefunc.__co_annotations__() directly, where myglobals is a customized globals dictionary. Depending on what exactly is added to the custom globals dictionary, this approach can solve a variety of use cases, including producing high-fidelity “stringified” annotations and allowing forward references in dataclass annotations (and in annotations generally). See the linked comment for a bit more detail.
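
A minimal sketch of that idea, assuming the prototype’s __co_annotations__ attribute; the _LenientGlobals mapping and _Placeholder class are my own illustration of “what exactly is added”, not part of the proposal:

class _Placeholder:
    """Stands in for a name that is not defined at runtime."""
    def __init__(self, name):
        self.name = name
    def __getitem__(self, item):          # tolerate e.g. Y[int]
        return self
    def __getattr__(self, attr):          # tolerate e.g. y.Y
        return _Placeholder(f"{self.name}.{attr}")
    def __repr__(self):
        return self.name

class _LenientGlobals(dict):
    def __missing__(self, name):
        return _Placeholder(name)

def resolve_annotations(func):
    # Evaluate the deferred annotations code against a forgiving globals
    # mapping instead of calling func.__co_annotations__() directly, so
    # names that only exist under TYPE_CHECKING don't raise NameError.
    myglobals = _LenientGlobals(func.__globals__)
    return eval(func.__co_annotations__.__code__, myglobals, {})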


That’s a neat idea, Carl. I like PEP 649 because it feels to me like the more correct way to do things: we want a mechanism to defer evaluation of some code-like thing, i.e. the type annotation. Storing the annotation as a simple text string is one way to defer evaluating it, but it has downsides. For example, you lose lexical scoping because the string object doesn’t know what lexical environment it was written inside.

I’ve done some work on introspection tools that use type annotations to generate entity-relationship diagrams. If PEP 649 were accepted, I would need a way to handle things like if TYPE_CHECKING: imports. Your proposal would help solve that.


Yesterday there was a mypy issue involving some real-world code that poses another interesting edge case: classes that mutually refer to each other, but in their base classes:

from dataclasses import dataclass
from typing import Generic, TypeVar

T = TypeVar("T")

@dataclass
class Parent(Generic[T]):
    key: str

@dataclass
class Child1(Parent["Child2"]): ...

@dataclass
class Child2(Parent["Child1"]): ...

(dataclass isn’t strictly necessary for this)

Mentioning this since I believe this kind of thing may be a problem for Larry Hastings’ forward class declaration idea.