Type annotations, PEP 649 and PEP 563

Yes, agreed, importing things in functions is neither common nor encouraged. And for the majority of use cases, even the current behavior of PEP 563 should work fine.

I think the main issue boils down to avoiding frustration from (and improving the inclusion of) newcomers and non-experts, in those particular corner cases.

Here’s an example that is a bit more reasonable and probably a bit more conventional (though a few lines longer):

from __future__ import annotations

import typing

from pydantic import BaseModel, PositiveInt

def main():
    class Foo(BaseModel):
        total: PositiveInt

    class Bar(BaseModel):
        foo: Foo

    # crashes with current PEP 563
    typing.get_type_hints(Bar)

main()

This would result in a NameError: name 'Foo' is not defined.

For a newcomer, after seeing the error, it would probably not be obvious that using the class Foo (declared inside the function) as a type annotation is not supported, even though Foo alone (without Bar) would be: Foo’s own annotations only reference PositiveInt, which lives in the module globals, while Bar’s annotation references Foo, which exists only in main()’s local scope.

In many cases it would probably be fine to just move those models outside the function, but it would not be obvious to newcomers why and when that is needed. And I wouldn’t expect people who use these libraries and tools to have enough expertise to know the underlying details, understand why the errors happen, and know how to handle them (I myself didn’t understand those details until recently :sweat_smile:).
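For completeness, there is a workaround for those who already know the details: passing the function’s local namespace explicitly through the public localns parameter of typing.get_type_hints. A minimal sketch with plain classes (Pydantic left aside to keep it short):

from __future__ import annotations

import typing

def main():
    class Foo:
        total: int

    class Bar:
        foo: Foo

    # typing.get_type_hints(Bar) alone raises the NameError above;
    # passing main()'s locals lets it resolve the stringified "Foo".
    print(typing.get_type_hints(Bar, localns={"Foo": Foo}))

main()

But again, knowing that this is what is needed (and where) is exactly the kind of expertise newcomers don’t have yet.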

Another corner case that could need to be able to import types used in annotations at runtime is avoiding cyclic imports (as in the second example in my previous post).
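(The usual way to break such cycles today is a typing.TYPE_CHECKING guard, which only works because PEP 563 never evaluates the annotations at runtime. A sketch with two hypothetical modules:

# a.py (hypothetical)
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from b import B  # imported only for the type checker: no runtime cycle

class A:
    b: B  # stored as the string "B", never evaluated at import time

# ...but typing.get_type_hints(A) now raises NameError at runtime,
# unless B is importable and passed in via globalns/localns.

So the cycle is avoided, at the cost that the annotations can no longer be resolved at runtime without extra work.)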


This possible paper cut becomes a bit more delicate/relevant given that many (most?) users of Pydantic come via FastAPI, and FastAPI is growing in adoption, in particular among newcomers to Python. That’s probably because the intention in FastAPI’s design and docs is to be easy to learn and use by everyone; in several cases I’ve seen, people are even migrating to Python from other languages to use FastAPI.

And I can imagine how frustrating it could be for all these developers, who are not experts in Python internals, not to be able to have some classes or imports inside of functions in some corner cases, even though in most cases it would work well. So, in the end, all this is just to better support those newcomers and non-experts, and to avoid non-obvious limitations in functionality.


Perhaps a beginner’s question: will it still be possible to use a ForwardRef("my_class") type annotation and resolve this ForwardRef at runtime? I am currently developing add-ons to Odoo, and the class models from the ORM are built at runtime and stored in a registry with a string as identifier. Today, via the ForwardRef mechanism, I am able to replace the ForwardRef with the real class at the end of the registry initialization by passing this registry as the localns for evaluating the ForwardRef. Will this use case still be possible?

from typing import ForwardRef

class A:
    attr: ForwardRef("dyn_class")

registry = {"dyn_class": type("dyn_class", (), {})}

# _evaluate is private API: it resolves the ForwardRef against the
# given globalns/localns (here, the registry built at runtime).
A.__annotations__["attr"] = A.__annotations__["attr"]._evaluate(None, registry)
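For what it’s worth, the same resolution also seems possible through the public API instead of the private _evaluate method; a minimal sketch:

from typing import get_type_hints

class A:
    attr: "dyn_class"  # a string annotation becomes a ForwardRef internally

registry = {"dyn_class": type("dyn_class", (), {})}

# get_type_hints evaluates the forward reference against localns
# and returns the fully resolved annotations.
resolved = get_type_hints(A, localns=registry)
assert resolved["attr"] is registry["dyn_class"]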

That is exactly my use-case of stringified annotations. PEP 563 indeed makes it much more reliable.

To explain a bit: I’m writing a tool, Griffe, that visits the AST of modules to extract useful information. It is able to rebuild an expression from nodes, in which each name is a struct containing both the name as written in the code, and the full, resolved path in the scope of its parent.
For compiled/builtin modules, it cannot load an AST, so it falls back on introspection. When the introspected module imports future annotations, great: I can simply compile the stringified annotations to transform them into the previously mentioned expressions (these annotations come, for example, from inspect.Signature.return_annotation). If the introspected module does not import future annotations, I have to handle an actual Python object, which is much more unpredictable.

The gist of it:

import ast

# get_annotation is Griffe's internal helper that turns an AST expression
# into the name-resolving structure described above
def _convert_object_to_annotation(obj, parent):
    # even when *we* import future annotations,
    # the object from which we get a signature
    # can come from modules which did *not* import them,
    # so inspect.signature returns actual Python objects
    # that we must deal with
    if not isinstance(obj, str):
        if hasattr(obj, "__name__"):
            # simple types like int, str, custom classes, etc.
            obj = obj.__name__
        else:
            # other, more complex types: hope for the best
            obj = repr(obj)
    try:
        annotation_node = compile(obj, mode="eval", filename="<>", flags=ast.PyCF_ONLY_AST, optimize=2)
    except SyntaxError:
        return obj
    return get_annotation(annotation_node.body, parent=parent)

Emphasis on repr(obj). Types imported from the typing module have good representation methods, but this is not enough; see the example below:

>>> from typing import List, Tuple
>>> T = List[Tuple[str, int]]
>>> repr(T)
'typing.List[typing.Tuple[str, int]]'

The issue here is that typing is unknown in the given scope, so I won’t be able to resolve it properly.

Even worse:

>>> TT = Tuple[T, T]
>>> repr(TT)
'typing.Tuple[typing.List[typing.Tuple[str, int]], typing.List[typing.Tuple[str, int]]]'

That makes a really long string, and T was lost in the process.

If the introspected module had stringified annotations instead, I would get 'TT' from, for example, inspect.signature, and I would be able to resolve it.
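To make that concrete, a small sketch (assuming the introspected module used the future import):

from __future__ import annotations

import inspect
from typing import List, Tuple

T = List[Tuple[str, int]]
TT = Tuple[T, T]

def f() -> TT: ...

# The annotation survives as the literal source text, which a tool
# like Griffe can then resolve in the module's own scope.
print(inspect.signature(f).return_annotation)  # 'TT'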


typedload does this.

I did it in the test suite because I needed local classes. I have no idea what the users are doing, but if Python allows defining a class inside a function, and it allows that function to have variables that are instances of that class, and it allows typing those variables, and it allows passing a reference to that class to another function… it seems strange that the other function holding the reference can’t use the types the way the original function can.

So in typedload it’s neither encouraged nor discouraged… it’s just a language feature that works in all other cases, and it would be incongruous to behave differently in this particular case.
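A distilled sketch of that situation (using plain typing machinery rather than typedload itself):

import typing

def make_type():
    class Point(typing.NamedTuple):
        x: int
        y: int

    class Segment(typing.NamedTuple):
        a: "Point"
        b: "Point"

    return Segment

def consumer(cls):
    # The class reference travels fine, but its stringified annotations
    # cannot be resolved here: "Point" only ever existed in
    # make_type()'s local scope.
    return typing.get_type_hints(cls)

try:
    consumer(make_type())
except NameError as error:
    print(error)  # name 'Point' is not defined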

I normally use string-based annotations (manually quoted, rather than via PEP 563’s future import) because I support py3.7-3.12.

In py3.7 I cannot use the | operator in annotations, and I find t.Union[...] very verbose and messy.

So, I nearly always use string-based typing like:

Python 3.7.16 (default, Jan 17 2023, 22:20:44)
Type "help", "copyright", "credits" or "license" for more information.
>>> def fun(a: 'int | str') -> None: pass
...
>>> def fun(a: int | str) -> None: pass
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for |: 'type' and 'type'

I see above that I could use from __future__ import annotations.
Are there any pros or cons of one vs the other?
Is the recommendation that I should be using the future import?
And would the future import change in different versions of py3.7 (i.e. would 3.7.0 behave the same as 3.7.16)?

Yes, there are pros and cons, but you would run into them when you moved to from __future__ import annotations (they’re mostly around resolving references to other things in your type annotations). If your code moves over without issue, then you should be fine.

Nope, that __future__ import has not changed since it was first introduced.
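To illustrate the difference on 3.7: with the future import, the annotation is stored as its source text and never evaluated, so the | syntax is accepted even though evaluating it would fail:

from __future__ import annotations

import typing

def fun(a: int | str) -> None: ...

# Stored as the literal source text, exactly like quoting it by hand:
print(fun.__annotations__)  # {'a': 'int | str', 'return': 'None'}

# The caveat mentioned above: anything that *evaluates* the string on
# 3.7, such as typing.get_type_hints(fun), still hits the TypeError.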