FWIW, I’m generally in favor of PEP 649 (deferred evaluation) becoming the default. All issues it has that I know about† can be worked around by using a string literal in place of a type, which is no worse than the status quo of PEP 484 (runtime execution).
Edge case: resolving an introspected type annotation within an object scope
As demonstrated in this other discussion, using introspection to get a function’s return type and match it with an object available in its scope (or its parent’s scope) is not always possible without PEP 563. To resolve the return type properly, you need to retrieve it exactly as it was written in the source.
To be clear, the poster here wants the stringified version of the original type. This is not so easy if the type is e.g. tuple[T, T][int] since that has become tuple[int, int] by the time the annotation has been objectified (both with PEP 484 and with PEP 649).
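A quick illustration (mine, not from the linked thread) of how the original spelling disappears once the annotation expression is evaluated:

from typing import TypeVar

T = TypeVar("T")

def f() -> tuple[T, T][int]: ...

# With PEP 484 semantics the annotation is evaluated eagerly, and the
# TypeVar substitution has already happened by the time we introspect:
print(f.__annotations__["return"])   # tuple[int, int]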
from __future__ import annotations

from pydantic import BaseModel

def main():
    from pydantic import PositiveInt

    # Under PEP 563 this breaks: pydantic tries to resolve the stringified
    # annotation 'PositiveInt' against module globals, where it is undefined.
    class TestModel(BaseModel):
        foo: PositiveInt

    print(TestModel(foo=1))

main()
This is not a fundamental problem with types being left as strings, but rather with how PEP 563 was implemented:
Annotations can only use names present in the module scope as postponed evaluation using local names is not reliable (with the sole exception of class-level names resolved by typing.get_type_hints())
Of course, I’ve used from pydantic import PositiveInt above, but it could be any import or a custom type defined within the function, including another pydantic model. It could even be a simple type alias like Method = Literal['GET', 'POST'].
Personally, I think this is very confusing, as it’s very different from the way Python scoping works everywhere else.
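For illustration, here is a minimal sketch of the same failure without pydantic: typing.get_type_hints() evaluates the stringified annotation against the function’s module globals, so a name that only exists locally can’t be found.

from __future__ import annotations   # PEP 563 semantics

import typing

def main():
    from decimal import Decimal   # local name, absent from module globals

    def f(x: Decimal) -> Decimal:
        return x

    typing.get_type_hints(f)   # NameError: name 'Decimal' is not defined

main()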
(Sorry if this has been mentioned above, I thought it best to add my example for completeness.)
I’ve made a suggestion at Recursive dataclasses · Issue #2 · larryhastings/co_annotations · GitHub that I think could resolve all the PEP 649 edge cases mentioned here, with some tooling support. The idea is that tools that want to resolve annotations with special handling for forward references or runtime-undefined names can call eval(somefunc.__co_annotations__.__code__, myglobals, {}) instead of calling somefunc.__co_annotations__() directly, where myglobals is a customized globals dictionary. Depending on what exactly is added to the custom globals dictionary, this approach can solve a variety of use cases, including producing high-fidelity “stringified” annotations and allowing forward references in dataclass annotations (and in annotations generally). See the linked comment for a bit more detail.
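To make that concrete, here is a rough sketch of the kind of thing a tool could do. The names LenientGlobals and _Unresolved are mine, and it assumes (as in the linked branch) that the __co_annotations__ code object takes no arguments and has no free variables, so eval() can run it directly:

import builtins

class _Unresolved:
    """Placeholder for a name that isn't defined at resolution time."""
    def __init__(self, name):
        self.name = name
    def __getitem__(self, item):
        # Keep subscriptions like Alias[X] representable as text.
        return _Unresolved(f"{self.name}[{item!r}]")
    def __repr__(self):
        return self.name

class LenientGlobals(dict):
    def __missing__(self, key):
        # Check builtins first so names like int and str resolve normally;
        # __missing__ pre-empts the interpreter's usual fallback to builtins.
        try:
            return getattr(builtins, key)
        except AttributeError:
            return _Unresolved(key)

def annotations_with_placeholders(func):
    # LOAD_GLOBAL honours __missing__ on dict subclasses, so undefined
    # names come back as placeholders instead of raising NameError.
    return eval(func.__co_annotations__.__code__,
                LenientGlobals(func.__globals__), {})

Depending on what the placeholders record, the same mechanism could produce high-fidelity stringified annotations or structured forward references.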
That’s a neat idea, Carl. I like PEP 649 because it feels to me like the more correct way to do things. We want a mechanism to defer evaluation of some code-like thing, i.e. the type annotation. Storing the annotation as a simple text string is one way to defer that evaluation, but it has downsides: e.g. you lose lexical scoping, because the string doesn’t know what lexical environment it was written inside.
I’ve done some work on introspection tools that use type annotations to generate entity-relationship diagrams. If PEP 649 were accepted, I would need a way to handle things like if TYPE_CHECKING: imports. Your proposal would help solve that.
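For reference, the pattern I mean looks like this (myapp.models and Customer are placeholder names):

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Seen only by type checkers; this import never runs, so the name
    # does not exist at runtime.
    from myapp.models import Customer

def lookup(customer_id: int) -> Customer:
    ...

A type checker is fine with this, but under plain PEP 649 calling lookup.__co_annotations__() raises NameError for Customer; evaluating the code object against a tolerant globals mapping, as sketched above, avoids that.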
Yesterday there was a mypy issue involving some real-world code that poses another interesting edge case: classes that mutually refer to each other via their base classes:
from dataclasses import dataclass
from typing import Generic, TypeVar

T = TypeVar("T")

@dataclass
class Parent(Generic[T]):
    key: str

@dataclass
class Child1(Parent["Child2"]): ...

@dataclass
class Child2(Parent["Child1"]): ...
(dataclass isn’t strictly necessary in this example.)
Mentioning this since I believe this kind of thing may be a problem for Larry Hastings’s forward class declaration idea.
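Worth spelling out why deferred annotations don’t help here: a base-class list is an ordinary expression, evaluated eagerly when the class statement runs, so neither PEP 563 nor PEP 649 applies to it. Continuing the example above, dropping the quotes fails immediately:

@dataclass
class Child1(Parent[Child2]): ...   # NameError: name 'Child2' is not defined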