To make it easier to check, we’ve created an interactive browser-based demo for PEP 810 that lets you experiment with lazy imports directly in your browser via Emscripten. The demo has a code editor mode and an interactive REPL (with the enhanced PyREPL for Chrome/Edge, and a fallback for other browsers). It includes a pre-loaded lazy_demo package with examples showing when modules are actually loaded and how deferred error handling works.
The demo runs Python 3.15 alpha compiled to WebAssembly, so networking and threading aren’t available, but it’s perfect for exploring the lazy imports feature.
Actually, the situation is even worse. Even without disabling lazy loading, the above pattern for “optional” dependencies doesn’t work, because dir(yourmodule) will always fail with an import error. And AFAIK dir is how autocomplete in interactive shells currently works.
More generally, as long as there are operations that trigger the reification of the whole module, lazy imports should NOT be recommended as a replacement for the try/except ImportError pattern. Pulling opt_dependency out to the top level as a lazy import currently ties the successful import of opt_dependency to common module-level operations such as dir and getattr (even if getattr is not used to access opt_dependency itself).
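For context, the pattern in question is the classic optional-dependency idiom, roughly as follows (opt_dependency is a hypothetical package name used for illustration):

```python
# Classic optional-dependency pattern that lazy imports are suggested to replace.
try:
    import opt_dependency  # hypothetical optional package
except ImportError:
    opt_dependency = None

def feature():
    # Degrade gracefully when the optional package is missing.
    if opt_dependency is None:
        raise RuntimeError("install opt_dependency to use this feature")
    return opt_dependency.do_work()
```

With this idiom the ImportError is handled once, at import time, and dir() on the containing module never fails; replacing it with a top-level lazy import moves the failure to whichever later operation happens to reify the module.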
This is IMHO an extremely unfortunate footgun. You can see that there are multiple people in this thread who expected this pattern to work (and the PEP itself even suggests it as a main use case for lazy imports).
Your concern is valid, but if you allow me the comparison, it’s essentially saying “as long as you need all the values, you can’t use the thing that lazily imports the values.” That’s the nature of lazy evaluation: deferring work until it’s needed. Of course you can say “but I just need the names,” but it’s unclear whether every use of mod.__dict__ just needs the names. In the examples you listed that may be true, but we did this because there is a lot of code that expects mod.__dict__['thing'] to just work. Also, interactive shells and autocomplete shouldn’t be the cases driving the compromises, because performance there doesn’t matter that much.
There’s another approach: don’t reify on __dict__ access. Keep lazy proxy objects in __dict__ and force manual reification only when actually using values.
This would mean dir(yourmodule) wouldn’t trigger imports, optional dependencies at module level would work, and autocomplete wouldn’t fail on missing deps. The cost is that it would break code expecting module.__dict__ to contain real objects, causing compatibility issues with existing tools.
We chose compatibility over this use case. It’s a legitimate design tradeoff, and you can reasonably argue either way.
>>> lazy import lel
>>> getattr(lel, "bar", None)
>>> getattr(lel, "foo", None)
<function foo at 0x7f8ed7c60930>
>>> lel.foo
<function foo at 0x7f8ed7c60930>
>>> lel.foo()
Traceback (most recent call last):
File "/home/pablogsal/github/lazy/lel.py", line 1, in <module>
lazy import blech
ImportError: deferred import of 'blech' raised an exception during resolution
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<python-input-4>", line 1, in <module>
lel.foo()
~~~~~~~^^
File "/home/pablogsal/github/lazy/lel.py", line 4, in foo
blech
ModuleNotFoundError: No module named 'blech'
But the issue here isn’t performance. The author of some library will move the import opt_dependency from inside the function to a top level lazy import (as is suggested by the PEP). The consequence of this action isn’t that autocomplete now runs slowly, it’s that autocomplete stops working, because dir(somemodule) now always raises an ImportError unless opt_dependency is installed.
How about a third option: do reify on explicit __dict__ access, but special case dir and getattr not to use __dict__ directly (dir wouldn’t reify anything and getattr would only reify the accessed key).
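A toy sketch of what that special-casing could look like, emulated in plain Python with a hypothetical LazyNamespace container (this is not PEP 810’s actual machinery): dir() reports the pending names without importing anything, and attribute access reifies only the one name requested.

```python
import importlib

class LazyNamespace:
    """Toy sketch, not PEP 810 semantics: dir() lists pending names
    without importing; attribute access imports only the requested module."""

    def __init__(self, names):
        self._pending = dict.fromkeys(names)

    def __dir__(self):
        return list(self._pending)  # no import triggered

    def __getattr__(self, name):
        if name in self._pending:
            mod = importlib.import_module(name)  # reify just this one key
            setattr(self, name, mod)             # cache; bypasses __getattr__ next time
            return mod
        raise AttributeError(name)

ns = LazyNamespace(["json", "nonexistent_mod"])
print(dir(ns))                  # both names listed, nothing imported
print(ns.json.dumps({"a": 1}))  # imports json only; nonexistent_mod stays untouched
```

Under this scheme autocomplete over the namespace would work even when a listed module is missing, since only a direct access to the missing name fails.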
Or even a fourth option: provide a way to lazy import with a default value. For example:
lazy import numpy as np or None
lazy from opt_dependency import expensive_func or default_impl
In this case, reification would attempt to do the normal import, but use the provided default in case of an Exception, instead of propagating it.
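Lacking such syntax, a rough eager approximation of the proposed “or default” semantics can be written today as a helper function; it loses the laziness, but shows the fallback behavior (import_or is a made-up name, not a real API):

```python
import importlib

def import_or(name, default=None):
    """Eager stand-in for the proposed 'lazy import X or DEFAULT' form:
    return the module if importable, otherwise the given default."""
    try:
        return importlib.import_module(name)
    except ImportError:
        return default

np = import_or("numpy")   # the real module, or None if numpy isn't installed
json = import_or("json")  # stdlib module, always present
print(json.dumps([1, 2]))
```

The proposed syntax would additionally defer the import attempt itself until first use, swallowing the ImportError at reification time rather than at the import statement.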
At the very least, even if you decide that you have to keep dir behaviour, it might make sense to at least provide an escape hatch similar to globals, but accessible even from outside the module.
This is incredibly cool! Having an interactive WebAssembly demo where people can experiment with the feature directly in their browser is brilliant. I wish more PEPs did this as it makes the proposal so much more tangible and accessible than just reading specs.
This distinction feels right to me. The contextlib.suppress pattern is really the exception, not the rule. Most with usage is about resource management and controlled entry/exit, not about catching exceptions from the body. The semantic overlap with try/except is much weaker than it initially appears.
The scenario with contextlib.suppress(ImportError) seems rare enough that it could be caught by linters, and the practical benefit for migration seems substantial. If the restriction stays, libraries like yours are stuck with hacks through 3.15, which undermines the goal of providing a standard mechanism.
I’m not sure this use case is compelling enough to drive the restriction. If someone needs eager imports under global lazy mode, they can use the filter mechanism, or just use a regular import statement outside a with block and rely on the filter. The forced-eager pattern seems like a niche need that doesn’t justify blocking the backwards-compatibility migration path you’ve identified.
Perhaps it would be worth exposing a __raw_dict__ property (name negotiable) so consumers could opt out of reification the same way calling globals() inside the module does?
When an import is lazy, __lazy_import__ is called instead of __import__. __lazy_import__ has the same function signature as __import__. It adds the module name to sys.lazy_modules, a set of fully-qualified module names that have been lazily imported at some point (primarily for diagnostics and introspection), and returns a `types.LazyImportType` object for the module.
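A minimal pure-Python sketch of that contract, with toy stand-ins for sys.lazy_modules and types.LazyImportType (names and details are illustrative, not the real implementation):

```python
import importlib

lazy_modules = set()  # toy stand-in for the proposed sys.lazy_modules

class LazyProxy:
    """Toy stand-in for types.LazyImportType: resolves the real module
    on first attribute access."""

    def __init__(self, name):
        self._name = name
        self._mod = None

    def __getattr__(self, attr):
        if self._mod is None:
            # First touch: perform the real import (reification).
            self._mod = importlib.import_module(self._name)
        return getattr(self._mod, attr)

def lazy_import(name, *args, **kwargs):
    """Sketch of __lazy_import__'s contract: record the fully-qualified
    name and return a proxy without importing anything."""
    lazy_modules.add(name)
    return LazyProxy(name)

json = lazy_import("json")
print("json" in lazy_modules)  # recorded before any real import happens
print(json.dumps({"x": 1}))    # first attribute access resolves the real module
```

The real implementation does considerably more (binding semantics, error chaining, reification of the module dict), but this is the observable shape of the hook as described.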
Traceback (most recent call last):
  File "//main.py", line 7, in <module>
    print(sys.lazy_modules)
          ^^^^^^^^^^^^^^^^
AttributeError: module 'sys' has no attribute 'lazy_modules'
It’s still not implemented. Notice there is a warning there saying the implementation may be out of sync with the PEP because we keep changing it, so please don’t worry if you find missing or incorrect things there.
Wow! Thanks for working on this and putting in such careful thought. Lazy imports are definitely a thing we want. Clearly it’s also a hard problem to solve for everyone.
I have some comments (and concerns) about the proposed feature. Sorry if I’m repeating things already said and/or answered, but this thread is too long to read through for the time I have to spare.
“lazy” actually means “maybe lazy”, which can be confusing (and are there any potential traps to thinking an import is lazy when it isn’t? There certainly can be for the reverse.)
I like the corresponding __lazy_import__
how much of the performance gain is due to type hints?
does vars() get the same treatment as globals()?
avoiding implicit laziness recursion makes sense
having a lazy import in one module not make import lazy in any other modules makes sense
the option to control laziness globally is nice, and probably worth the extra cognitive complexity
the lazy proxy + reification approach seems reasonable and straightforward, and realistically shouldn’t ever surface to users, which is good
why do some (but not all) operations other than name lookup (bound module) reify? Why not just name lookup? The rules there are probably going to trip people up and I’m not sure there’s a good way to avoid confusion.
supporting __lazy_modules__ as a bridge is a good idea
why isn’t sys.lazy_modules a dict mapping names to proxies that haven’t been reified yet? (I would expect the proxy to know the name of the module where it is bound, and its own name.)
where will the ImportError be raised for a lazy import if the module isn’t found? (normally you might wrap an import in a try/except ModuleNotFoundError, but that isn’t allowed for lazy imports)
what happens with from-import of attrs (not submodules)? Is that a syntax error, an import error, or does it just always do an eager import? Hmm, looks like it is lazy (which makes sense).
not a fan:
the new keyword seems like it will eventually become irrelevant, but will proliferate like async; FWIW, I would expect the new __lazy_modules__ to be the long-lived usage, not the new keyword, which would be a strike against adding the keyword
attrs of lazy module are not lazy (but maybe the performance gain isn’t big enough to be worth it?); I understand the downsides, but deep down in my gut lazy attrs seems right. Hopefully users split modules up into many smaller ones as a consequence. I suppose lazy attributes could be addressed separately.
no lazy imports outside global scope (I guess that’s fine since there are no lazy attrs)
extra complexity to figure out “is this import actually lazy?”
modifying the import system through importlib, rather than sys, is a new thing (not necessarily in a good way); let’s not start splitting that role between sys and importlib
if I import a module and it doesn’t show up in sys.modules then I might get confused; the explicit lazy keyword does help reduce the problem, but it might still throw people off
I’m not sure yet what I think about resolving import state at reification time rather than at the import statement; why not stash the module spec at the import statement?
relatedly, I’m a little turned off by the lazy from json import dumps example: the ImportError raising later feels a bit icky, though again the explicit lazy keyword helps reduce the ickiness a little
that said, the exception chaining is definitely the right approach, all other decisions being settled
I’m glad you’ve considered possible future considerations related to declarative metadata and lazy imports
the explanation about “Observable behavioral shifts (opt-in only)” is really nice
you’ve covered performance impact well
the get() method on the proxy objects seems consistent with other proxies we have (e.g. weakref)
placing lazy at the beginning of the statement seems reasonable
from my experience with subinterpreters, I’d say adding a corresponding eager keyword is something to do sooner rather than later
FWIW, another argument against a decorator approach is that decorators are currently evaluated in steps, rather than strictly being compiler directives
FWIW, I think we should explore the various use cases where importing a module has side effects, rather than mostly just being declaration code (effectively), which IIUC is why we can’t just have all imports be lazy. That exercise would likely illuminate the constraints that lazy imports have to work with. As it is, I’m not aware of any in-depth analysis; if I’m wrong about that then hurray. Otherwise I’m left having a hard time finding confidence about any lazy import mechanism.
Again, thanks for all the great work on this proposal. I’m glad we haven’t given up on the idea of lazy imports. Ultimately, the proposal (with some adjustments) might be enough to cover most of the key use cases, as well as provide a foundation for further improvement. (My uncertainty on that is my main concern on this overall.) I look forward to seeing where this discussion leads!
This also holds true for introspection needs. I’m thinking of a function in importlib or inspect that returns a (read-only proxy of) the non-reified module dict from a module object, or, less preferably, a dunder that does the same.
I think putting it into a module is better because accessing the non-reified stuff is an advanced feature which most user code won’t need to worry about, but will be useful in cases such as implementing dir().
Please no. While I understand your concerns about keyword proliferation, I believe the explicit syntax is essential and should remain the primary mechanism.
The lazy keyword makes the laziness visible exactly where it matters: at the import statement itself. When you see lazy import x, you immediately understand what’s happening. In contrast, with __lazy_modules__ = ["x"] followed by import x elsewhere in the file, you have classic spooky action at a distance. Someone reading the import statement has no idea it’s lazy without scrolling up to find the module-level declaration, which might be dozens of lines away. This violates the principle of locality that makes code maintainable.
The PEP itself emphasizes explicitness throughout: it’s literally in the title (“Explicit lazy imports”). The syntax is the better long-term solution by design. As both a library developer and user, I’d find a future where __lazy_modules__ is the norm genuinely harmful to code clarity. The keyword may feel like overhead now, but it will age much better than scattered magic lists controlling imports from afar. The transition period doesn’t justify years of ugliness. There have already been comments on why this cannot be the default as well.
There’s a fundamental contradiction in these concerns. On one hand, you’re worried about “extra complexity to figure out ‘is this import actually lazy?’” but on the other hand, you’re advocating for making attributes lazy, which would create vastly more complexity. The PEP makes a clear architectural choice: laziness happens at the module boundary, not at the attribute level. Once you access a lazy module at all (e.g., json.dumps), the entire module reifies. This is simple and predictable, and solves the actual problem which the PEP tackles: deferring expensive module initialization and not paying for the imports you don’t use.
I am not a core developer, but I am sure that making individual attributes lazy would explode the complexity budget in every direction. Consider what you’re actually asking for:
Every attribute access would need proxy checking and potential reification
from module import a, b, c would need to track which of a/b/c have been touched
Method calls like json.dumps(data) would need to distinguish between accessing dumps (lazy) and calling it (reifies)
The performance overhead would be present potentially on every single attribute access throughout the program. The interpreter would need to track all ways to get the attribute (directly, via dictionaries, via descriptors, via getattr…)
Debugging would become nightmarish: is this AttributeError because the attr doesn’t exist, or because reification failed?
All of this to avoid executing a module that you’re already actively using. If you’re calling json.dumps(), you clearly need the json module. The lazy loading already saved you the cost if you never touched it.
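To make the ergonomic cost concrete, here is a toy per-attribute-lazy version of from json import dumps, loads. Nobody has proposed exactly this implementation; it just shows how every call site would have to distinguish the thunk from the value, which whole-module reification avoids:

```python
import importlib

class LazyAttr:
    """Toy per-name thunk: one deferred import per imported attribute.
    Every use site must force it explicitly before using the value."""
    _unset = object()

    def __init__(self, module_name, attr):
        self.module_name = module_name
        self.attr = attr
        self.value = self._unset

    def force(self):
        if self.value is self._unset:
            mod = importlib.import_module(self.module_name)
            self.value = getattr(mod, self.attr)
        return self.value

# A per-name-lazy "from json import dumps, loads":
dumps = LazyAttr("json", "dumps")
loads = LazyAttr("json", "loads")

# Call sites can no longer treat dumps as a plain function.
print(dumps.force()({"a": 1}))
```

Making the forcing implicit instead (so call sites stay unchanged) is exactly where the proxy-checking-on-every-access overhead listed above comes from.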
But that’s the whole point! This has been discussed a bunch of times in this thread already. The reason (at least the part that I am interested in) is that finding the module spec is often one of the most expensive parts of importing, as each lookup traverses sys.path with multiple filesystem stat calls, and on slow filesystems each stat can take hundreds of milliseconds. In many cases, finding the spec costs more than executing the module itself. If you eagerly look up specs at the import statement, you’ve already paid most of the cost lazy imports are trying to avoid. The PEP chose full laziness (defer everything until first use) because filesystem operations are often the dominant cost, so lazy import foo does almost nothing: it just creates a proxy object. The tradeoff is that you can’t use it for cheap existence checking, but the performance win from avoiding filesystem operations is substantial, and it is what a lot of us want.
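You can get a feel for this split yourself by timing the two phases separately with importlib (numbers vary wildly by machine and filesystem; email.mime.image is just an arbitrary stdlib module chosen for illustration):

```python
import importlib.util
import sys
import time

name = "email.mime.image"    # arbitrary stdlib module
sys.modules.pop(name, None)  # drop any cached copy to force a fresh lookup

t0 = time.perf_counter()
spec = importlib.util.find_spec(name)  # finder/path traversal (stat calls)
t1 = time.perf_counter()
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)        # actually execute the module body
t2 = time.perf_counter()

print(f"find_spec:   {(t1 - t0) * 1e6:.0f} us")
print(f"exec_module: {(t2 - t1) * 1e6:.0f} us")
```

Note that find_spec for a dotted name still imports the parent packages; only the leaf lookup and execution are being timed here. On network filesystems the find_spec phase is typically where the pain is.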
I would say that this analysis has already been done extensively in PEP 690’s two discussion threads, and to some extent in this comment thread with feedback from maintainers of big libraries and companies who’ve deployed this in production, and in this PEP’s sections on motivation, how to teach and rejected ideas. With respect, it’s somewhat unfair to say you don’t have time to read through the discussion but then request analysis that’s partially covered by resources in that very discussion.
I really like this suggestion. It elegantly solves the asymmetry problem where code inside a module can use globals() to access lazy objects without reification, but code outside the module has no equivalent option as module.__dict__ always reifies.
I don’t see how this variety of lazy import is useful. If you have to be aware that something is lazily imported and do something explicit to reify it, why not just conditionally import it in the first place?