This just doesn’t match reality. It’s not “a handful” of modules with side effects. It’s everywhere. Real Python code is full of this stuff: logging setup, plugin registration, global state initialization, database connections, framework configuration. When the companies that are participating in the discussion tried to roll out something like PEP 690’s global approach, they hit a wall. Even with total control over their codebases (Meta has a monorepo!), the effort was massive. They needed complex filtering systems just to keep things working.
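To make that concrete, here is a minimal, self-contained sketch (every name in it is made up) of the kind of module that does real work as a side effect of being imported. Under an implicit “everything is lazy” flag, the registration below silently never runs until something touches an attribute of the module, which for a plugin module may be never:

```python
# plugins_audit.py (hypothetical): importing this module IS the registration.
import logging

PLUGINS = {}  # stands in for a real plugin registry living elsewhere

def register_plugin(name):
    def decorator(cls):
        PLUGINS[name] = cls          # side effect: runs at import time
        return cls
    return decorator

logging.basicConfig(level=logging.INFO)  # side effect: configures logging globally

@register_plugin("audit")                # side effect: self-registration at import time
class AuditPlugin:
    def run(self, event):
        logging.info("audit event: %s", event)
```

Nothing here is exotic; it’s exactly the logging/registration/configuration pattern described above, and an implicit global laziness setting changes when (or whether) any of it happens.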
PEP 690 got rejected for good reason.
I don’t know where you are getting this from. The comment you are answering gives very good reasons that you have dismissed as “it just helps the compiler”. Explicit beats implicit. When you see lazy import foo, you know immediately that something’s different. You know side effects are deferred. You know ImportErrors might show up later. Without that keyword, you’re left guessing based on some config file buried somewhere else. It’s also about responsibility: the person writing lazy import owns that decision and its consequences. In a world where everything might be lazy based on a global flag, nobody knows what’s going on without checking external state.
I think it is fair to say that this has been benchmarked extensively and there is more than enough evidence. It’s also not difficult to see why this could be the case. For example, the issue shows up on NFS-backed filesystems and distributed storage, where each stat() call has network latency. In production environments you can see 50-200ms per stat() call depending on network conditions. When you have dozens of imports and each one does multiple filesystem checks while traversing sys.path, you burn through seconds just finding modules before executing any Python code. In some measurements, spec finding accounts for 60-70% of total import time.
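If you want to see where the time goes, here is a rough sketch that splits an import into “find the spec” (the sys.path traversal and stat() calls) and “execute the module”. The module names are just examples, and on a warm local disk the find_spec numbers will be tiny; the point is that the find phase is pure filesystem probing, which is exactly the part that blows up on NFS:

```python
import importlib.util
import sys
import time

def timed_import(name):
    t0 = time.perf_counter()
    spec = importlib.util.find_spec(name)    # path traversal + stat() calls
    t1 = time.perf_counter()
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    spec.loader.exec_module(module)          # actually run the module body
    t2 = time.perf_counter()
    return (t1 - t0) * 1000, (t2 - t1) * 1000

for name in ("json", "csv", "argparse"):
    sys.modules.pop(name, None)              # force a fresh lookup
    find_ms, exec_ms = timed_import(name)
    print(f"{name}: find_spec {find_ms:.2f} ms, exec {exec_ms:.2f} ms")
```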
Memory savings are absolutely significant too. But the I/O cost is often the single biggest bottleneck. The folks at Bloomberg, Google, Meta, and HRT probably have similar stories. There were some links shared in the PEP and the discussion about that.
From what I can see, the community did explore alternatives, including the scientific Python approach and the LazyLoader approach (eager spec lookup, lazy execution). That’s in the rejected ideas section, though maybe it needs more detail on why full laziness was chosen. The fact that both the author of LazyLoader and the authors of the scientific Python solutions are backing the PEP should also say something about the merits of the proposal.
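For reference, the LazyLoader approach is roughly the recipe from the importlib documentation: the spec lookup (and all of its filesystem access) happens up front, and only the execution of the module body is deferred until the first attribute access:

```python
import importlib.util
import sys

def lazy_import(name):
    spec = importlib.util.find_spec(name)         # eager: the stat() calls happen now
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)                    # sets up deferral, doesn't run the body
    return module

json = lazy_import("json")   # no module code has run yet
json.dumps({"a": 1})         # first attribute access triggers the real load
```

That eager find_spec call is precisely the part that stays expensive on network filesystems, which is the distinction being drawn here.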
The downsides of losing cheap existence checking are real. But the mitigation seems straightforward: importlib.util.find_spec() covers that use case, and it’s explicit about what you’re doing. If you need to check whether a module exists without importing it, lazy imports probably aren’t the right tool anyway; the semantics get confusing (what does a lazy import inside try/except even mean?).
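Something like this (numpy is just an example of an optional dependency) checks for existence without running any of the module’s code:

```python
import importlib.util

# Is the optional dependency installed? find_spec() locates it on sys.path
# without executing it, so no import-time side effects fire here.
if importlib.util.find_spec("numpy") is not None:
    import numpy as np          # only now does the module actually load
else:
    np = None                   # fall back to a pure-Python code path
```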
The alternative (eager spec, lazy execution) gives you existence checking but loses most of the performance win on network filesystems and similar setups. From the discussion and the production deployments mentioned, it looks like they’re trading away a use case that has a clear workaround for performance gains that would otherwise be impossible. The users who have tried similar solutions clearly care about startup time, and they chose full laziness after trying both approaches.