Deferred computation/evaluation for toplevels, imports and dataclasses

While investigating import times for pluggy and their effect on hatch, we noticed that deferring the imports of importlib.metadata, typing and inspect makes a very noticeable difference in import time:

  • 0.003 s !! when deferred
  • 0.046 s when eager (rounding errors apply to the components)
    • 0.024 s from importlib-metadata
    • 0.012 s from typing
    • 0.004 s from inspect

typing was only used for declaring type aliases and type annotations;
importlib.metadata was only used very late, for entry points.

Currently there are quite a few pain points when it comes to deferring those imports sensibly:

  • loss of sane usage of TypedDicts (these need an eager import of typing, or utterly reprehensible hacks)
  • painful deferring/local importing of importlib_metadata/inspect

I would like to propose expanding the deferred evaluation of type annotations so that it also applies to imported names and globals, such as classes with dependencies and decorated functions:

# excessively simplified example
import typing
import dataclasses

class Test:
    foo: int

def make_test() -> Test:
    return Test()  # only when this is called should the imports trigger and the class be created


This would be a large change to Python’s runtime semantics. Currently, libraries can do anything they want on import (i.e., imports can have side effects); this change would mean anything that happens on import now happens at some future time which may be very non-obvious. This is even more true of class definitions with decorators, where for instance a decorator may add a class to some kind of global registry.

To me the proposed change smacks of spooky action at a distance, which would be worse than Python’s current semantics, which are relatively straightforward (viz., the entire file is run top to bottom).

1 Like

Is this PEP 690?


Would putting most of these modules’ attributes behind __getattr__ be worthwhile?

With some basic profiling, the slowest parts of these modules could be isolated from the rest.

I think there are some very minor overheads related to adding internal modules (more modules need to be loaded), but if the result is that import typing is nearly instantaneous, that could be a pretty big win.
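As a sketch of what that could look like: PEP 562 lets a module define `__getattr__` at top level, so heavy attributes are only materialized on first access. Here a throwaway module built with `types.ModuleType` stands in for a stdlib module, and `inspect` plays the role of the slow dependency:

```python
import sys
import types

# Stand-in for a stdlib module: in real life this __getattr__ would live
# at the top level of e.g. typing/__init__.py (PEP 562).
lazy_demo = types.ModuleType("lazy_demo")

def _module_getattr(name):
    if name == "signature":
        import inspect  # the heavy import happens only on first access
        lazy_demo.signature = inspect.signature  # cache for later lookups
        return inspect.signature
    raise AttributeError(f"module 'lazy_demo' has no attribute {name!r}")

lazy_demo.__getattr__ = _module_getattr  # looked up in the module's __dict__
sys.modules["lazy_demo"] = lazy_demo

import lazy_demo as demo
sig = demo.signature(len)  # this attribute access triggers the inspect import
```

Importing `lazy_demo` itself stays nearly free; the cost is only paid by code that actually touches the slow attribute.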

With deferred annotations as a related topic, the case seems strongest for doing this with typing.

1 Like

It does seem that, even if we’re not prepared to adopt PEP 690 because of the changes in semantics, stdlib modules that do not have side-effects could be lazily loaded, either through __getattr__ or something like GitHub - scientific-python/lazy_loader: Populate library namespace without incurring immediate import costs. Much of the code there handles cases where imports fail, so it could be stripped down for use in stdlib to something quite compact.
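For reference, the stdlib already documents a recipe in this spirit: `importlib.util.LazyLoader` defers executing a module's body until the first attribute access. Condensed from the importlib docs:

```python
import importlib.util
import sys

def lazy_import(name):
    # Recipe from the importlib documentation: wrap the loader so that
    # the module body only executes on first attribute access.
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)
    return module

inspect = lazy_import("inspect")  # cheap: nothing has actually executed yet
sig = inspect.signature(len)      # first attribute access runs the real import
```

A stripped-down helper like this is roughly what a stdlib-internal lazy_loader could amount to, minus the import-failure handling.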

1 Like

Honestly, I’d just report these as performance bugs. That’s likely to be much easier to address than any sort of deferred loading proposal - as has been noted, that’s basically PEP 690, and it was rejected because the semantic implications were too great.


I’m not sure you can always avoid the performance issue without some form of lazy loading inside the libraries or a backwards compatibility break. I’m not so familiar with importlib.metadata but I know dataclasses for instance pulls in inspect which is fairly significant[1].

However you kind of end up following a chain downwards. Is the issue that inspect is a heavy import and so dataclasses should avoid it or is it that inspect should be made faster, possibly by avoiding or delaying heavier imports itself (can the ast import be delayed for instance)…

I think that both this suggestion and the rejected PEP 690 proposal have issues with changing existing behaviour in ways that can cause unexpected side effects. However, the lazy_loader module linked earlier and other methods I’ve seen seem like a lot of work to get around the lack of direct language support (and can be a pain when static analysis tools get involved).

  1. It takes about half the import time if you look at -Ximporttime, but if some of the other direct dependencies were removed from dataclasses they end up being imported by inspect anyway. ↩︎

FWIW, I’ve filed gh-109653: ``: improve import time by creating soft-deprecated members on demand by AlexWaygood · Pull Request #109651 · python/cpython · GitHub, which cuts down the import time of typing by around a third according to my measurements.


I wonder if something like

lazy import inspect

lazy def foo(...):

would make it more acceptable

it would be nice if there was a namespace/dict variant that could contain lazily computed members

it would help for type annotations and for heavy imports one only needs later (the code surrounding those is horrendous)

it would also help for delayed imports

the alternative would be to set up something like apipkg to generate certain objects

in that case there would be .pyi files with import specs, and a code generator would translate them to .py files with lazy imports (with the caveat that class/function definitions would be disallowed; only imports and very basic constants)
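The lazily-computed-members namespace mentioned above can be approximated today with `__getattr__`; a sketch (the `LazyNamespace` name is made up), mapping member names to zero-argument factories:

```python
import importlib

class LazyNamespace:
    """Namespace whose members are computed on first access, then cached."""

    def __init__(self, **factories):
        self._factories = factories  # name -> zero-argument callable

    def __getattr__(self, name):
        # Only reached when normal attribute lookup fails, i.e. on first access.
        factories = self.__dict__.get("_factories", {})
        if name in factories:
            value = factories[name]()
            setattr(self, name, value)  # cache: later lookups bypass __getattr__
            return value
        raise AttributeError(name)

ns = LazyNamespace(
    # the importlib.metadata import is only paid when ns.metadata is first touched
    metadata=lambda: importlib.import_module("importlib.metadata"),
)
```

Direct language support would mainly buy the same behaviour without the wrapper boilerplate and without confusing static analysis tools.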

I think in this case I would still expect the decorator itself to be evaluated, and whether the wrapped function (or class) can be deferred would depend on what the decorator does?

You could make a decorator wrapper that only does the import and the work on __call__ or __getattr__ and replaces itself at that point. It wouldn’t work for anything that needs to be there for static analysis, but you could make a @deferred_dataclass decorator that works at runtime, for instance.
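A runtime-only sketch of such a hypothetical @deferred_dataclass (the name comes from the post above; this simplified version caches the materialized class rather than literally replacing itself):

```python
class deferred_dataclass:
    """Defer the dataclasses import (and class processing) until first use."""

    def __init__(self, cls):
        self._cls = cls
        self._real = None

    def _materialize(self):
        if self._real is None:
            import dataclasses  # heavy import chain deferred until here
            self._real = dataclasses.dataclass(self._cls)
        return self._real

    def __call__(self, *args, **kwargs):
        return self._materialize()(*args, **kwargs)

    def __getattr__(self, name):
        return getattr(self._materialize(), name)

@deferred_dataclass
class Point:
    x: int
    y: int

p = Point(1, 2)  # dataclasses is imported here, not at module import time
```

Note the static-analysis caveat: `Point` is a wrapper instance here, so `isinstance()` checks against it won’t work and type checkers won’t see the real dataclass.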

Basically, only the expression that creates the decorator can be executed eagerly;

otherwise one would have to execute the just-lazily-defined thing in order to apply the decorator.

Looking at this thread again, I think __getattr__ laziness is quite powerful for these cases, both as a client and for potential use in the stdlib.

I’ve only used the following trick at the top level of a package, but it can be used anywhere:

import typing

if typing.TYPE_CHECKING:
    from ._foo import Foo  # static analysis sees the real name

def __getattr__(name):  # runtime lazy loading (PEP 562)
    if name == "Foo":
        from ._foo import Foo
        return Foo
    raise AttributeError(name)

For this to work, the import speed of typing is the most important thing.
There’s a funny thing going on here if you want to defer the import of TypedDict because typing imports are slow: if Foo above is a TypedDict, the trick would work just fine as an optimization if typing itself deferred everything except the TYPE_CHECKING variable.
But presently, of course, it doesn’t work well for that case.

This use case may impact how maintainers think about the typing module. If this usage pattern is one we want to encourage, then typing could begin chunking slow imports into internal modules, to make TYPE_CHECKING very fast to access. (There’s also the MYPY = False variable, which needs no import, if that’s still supported. But I’m not sure I like relying on that anymore, even if it does work.)

1 Like