Hello,
I think I’m asking an obvious question, but maybe I’m missing something. When type annotations create cyclic import dependencies, the recommendation here is to guard the import with typing.TYPE_CHECKING, so that static checkers can still import the “other” module and access its types. This is all well and good for a static type checker.
At runtime, however, if I want to reflect on types using get_type_hints() then I get a NameError (expectedly so), because, drawing on the example in the docs above, the name 'bar' (used in the annotation 'bar.BarClass') cannot be found:
import typing
import foo
import bar
print(typing.get_type_hints(bar.BarClass.listifyme)) # {'return': list[bar.BarClass]}
print(typing.get_type_hints(foo.listify)) # NameError: name 'bar' is not defined
However, I know that bar was imported here and so I can pass it down:
print(typing.get_type_hints(foo.listify, globalns=globals())) # {'arg': <class 'bar.BarClass'>, 'return': list[bar.BarClass]}
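For anyone who wants to reproduce this without creating files, here is a self-contained sketch. The module contents (BarClass, listifyme, listify) are modeled on the cyclic-import example from the typing docs; building them with types.ModuleType is just a way to keep everything in one script.

```python
import sys
import types
import typing

# Simulate bar.py: defines BarClass with a self-referential annotation.
bar = types.ModuleType("bar")
exec(
    "class BarClass:\n"
    "    def listifyme(self) -> 'list[BarClass]':\n"
    "        return [self]\n",
    bar.__dict__,
)
sys.modules["bar"] = bar

# Simulate foo.py: imports bar only under TYPE_CHECKING, so at runtime
# the name 'bar' is absent from foo's global namespace.
foo = types.ModuleType("foo")
exec(
    "from typing import TYPE_CHECKING\n"
    "if TYPE_CHECKING:\n"
    "    import bar\n"
    "def listify(arg: 'bar.BarClass') -> 'list[bar.BarClass]':\n"
    "    return [arg]\n",
    foo.__dict__,
)
sys.modules["foo"] = foo

# Resolves fine: 'BarClass' lives in bar's own globals.
print(typing.get_type_hints(bar.BarClass.listifyme))

# Fails: 'bar' was only imported under TYPE_CHECKING.
try:
    typing.get_type_hints(foo.listify)
except NameError as exc:
    print(exc)  # name 'bar' is not defined

# Supplying a namespace that contains 'bar' fixes it.
print(typing.get_type_hints(foo.listify, globalns={"bar": bar}))
```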
Problem is that this solution doesn’t scale well when nesting function calls. For example, suppose a module utils.py which contains a helper function:
import typing

def get_type_hints(thing: typing.Any) -> dict:
    return typing.get_type_hints(thing, globalns=globals())
then that module’s global namespace doesn’t contain bar:
import utils
print(utils.get_type_hints(foo.listify)) # NameError: name 'bar' is not defined
It seems to me that, to make get_type_hints() work across multiple function calls, I’d have to pass around a namespace (or a copy thereof) containing the necessary names, and that doesn’t feel right to me. It gets more complicated still because I don’t always know which names this or that thing requires.
But… do I have another choice? Refactor and consolidate the multiple modules into one? Am I perhaps missing something that might help?
Would it make sense for get_type_hints() to return a “type unknown” sentinel instead of raising an error? Is incomplete type information — in some scenarios — better than none at all?
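To illustrate the idea, a best-effort variant is easy to prototype on top of __annotations__ (UNKNOWN and get_type_hints_partial are made-up names; this sketch skips some normalisation that typing.get_type_hints performs, e.g. it leaves a bare None annotation as None instead of type(None)):

```python
UNKNOWN = object()  # hypothetical "type unknown" sentinel

def get_type_hints_partial(func) -> dict:
    """Best-effort reflection: annotations whose names can't be
    resolved come back as UNKNOWN instead of the whole call
    raising NameError."""
    hints = {}
    for name, ann in getattr(func, "__annotations__", {}).items():
        if isinstance(ann, str):  # stringified / postponed annotation
            try:
                ann = eval(ann, func.__globals__)
            except NameError:
                ann = UNKNOWN
        hints[name] = ann
    return hints
```

With this, a function annotated with both 'int' and 'bar.BarClass' would still yield the resolvable int hints even when 'bar' is missing from the namespace.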
Thanks!
Jens