PEP 810: Explicit lazy imports

Just for reference, a minimal version of this idea (which only works in the restricted cases where LazyLoader works) looks like this:

import importlib.util
import sys
import threading
from contextlib import contextmanager

_thread_local = threading.local()
_original_import = __import__

def _lazy_import_wrapper(name, globals=None, locals=None, fromlist=(), level=0):
    lazy_modules = getattr(_thread_local, 'lazy_modules', None)

    # Fall back to the real __import__ for anything we should not defer: no active
    # lazy set, relative imports, from-imports, unlisted names, or cached modules.
    if lazy_modules is None or level != 0 or fromlist or name not in lazy_modules or name in sys.modules:
        return _original_import(name, globals, locals, fromlist, level)

    try:
        spec = importlib.util.find_spec(name)
        if spec is None or spec.loader is None:
            return _original_import(name, globals, locals, fromlist, level)
        # Wrap the loader so the module body only runs on first attribute access.
        spec.loader = importlib.util.LazyLoader(spec.loader)
        module = importlib.util.module_from_spec(spec)
        sys.modules[name] = module
        spec.loader.exec_module(module)
        return module
    except Exception:
        return _original_import(name, globals, locals, fromlist, level)

# Install the wrapper (__builtins__ may be either the builtins module or its dict).
if isinstance(__builtins__, dict):
    __builtins__['__import__'] = _lazy_import_wrapper
else:
    __builtins__.__import__ = _lazy_import_wrapper

@contextmanager
def lazy_importer(lazy_modules):
    # On 3.15+, PEP 810 (the lazy keyword / __lazy_modules__) is expected to handle
    # this natively, so the backport does nothing there.
    if sys.version_info >= (3, 15):
        yield
        return
    prev = getattr(_thread_local, 'lazy_modules', None)
    _thread_local.lazy_modules = lazy_modules
    try:
        yield
    finally:
        _thread_local.lazy_modules = prev

Then you can use it like this:

from lazy_import_backport import lazy_importer
__lazy_modules__ = {"this", "collections"}
with lazy_importer(__lazy_modules__):
    import json                           # This is eager because it is not in __lazy_modules__
    import this                           # This is lazy (prints nothing even though it normally would)
    from collections import namedtuple    # This is eager because it is a from ... import ...

Both on <3.15 and under this PEP the import is made lazy (via different mechanisms).

For this idea you can use something like:

import sys
import importlib.util

def lazy_import(name):
    if sys.version_info >= (3, 15):
        return __lazy_import__(name)
    if name in sys.modules:
        return sys.modules[name]
    # Note: the spec is found eagerly here; only executing the module is deferred.
    spec = importlib.util.find_spec(name)
    spec.loader = importlib.util.LazyLoader(spec.loader)
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    spec.loader.exec_module(module)
    return module

Then, both under this PEP and on <3.15:

>>> import lazy_import
>>> this = lazy_import.lazy_import("this")
>>> this
The Zen of Python, by Tim Peters

Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
<module 'this' from '/home/pablogsal/github/lazy/Lib/this.py'>
10 Likes

This is a good point about the ergonomics of backwards compatibility, and thankfully the PEP now clarifies the semantics (thanks, Pablo):

A module may define a __lazy_modules__ variable in its global scope, which specifies which module names should be made potentially lazy (as if the lazy keyword was used). This variable is checked on each import statement to determine whether the import should be made potentially lazy. The check is performed by calling __contains__ on the __lazy_modules__ object with a string containing the fully qualified module name being imported.

So with this clarification, special-casing “*” is not needed (at least for this initial PEP): module authors who have validated lazy compatibility and want to provide backwards compatibility (marking potentially lazy imports without using the keyword) can simply define a container with arbitrary logic.

For example, if there are tests showing that all imports can and should be lazy, one can write:

class Everything:
    def __contains__(self, item):
        return True

__lazy_modules__ = Everything()
# or as a one-liner
__lazy_modules__ = type('', (), {'__contains__': lambda *_: True})()
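
Or, as a sketch of the “arbitrary logic” case (the package prefix and module names below are purely illustrative), a maintainer who has only validated laziness for certain modules could write:

class ValidatedLazy:
    # Only names we have actually tested under lazy imports are made potentially lazy;
    # __contains__ receives the fully qualified module name being imported.
    def __contains__(self, name):
        return name.startswith("mypackage.") or name in {"json", "tomllib"}

__lazy_modules__ = ValidatedLazy()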
2 Likes

I consider scientific-python’s lazy-loader the current best solution in this space. You might like others, but I don’t know that it matters much which of the available solutions you prefer – they exist and all have some of the same limitations.

The main issue with any current approach is that you can’t write ordinary Python code. There’s inherent overhead, especially when collaborating in a team with various levels of experience, in using anything which is not an import statement.

My post was partly responding to the idea that this proposal “offers little benefit”. From my perspective it offers a lot. If I haven’t made a convincing case for it at this point, I think it’s best that I give up on doing so, since this thread has many readers.

I didn’t find this to be a convincing argument against handling * or any other special spelling.

I like the idea of

__lazy_modules__ = True

The issue is, IMO, a matter of developer ergonomics rather than one of support and pure functionality.

I can personally live without “lazy everything” as a feature, but I wouldn’t really like to see the Everything class during code review (although it is kind of neat) as a workaround. I’d rather just list out all modules, even if it is a bit clunky and verbose. It can be automated with some effort if necessary.

3 Likes

I think this is the only place you have a point, but you don’t really provide any information about what the related endeavours are, how those clash with the design proposed in this PEP, and what the timeline for getting those designs into Python would be, if they were accepted. Without any concrete design discussion, especially since the proposal in this PEP is well thought out, waiting a couple of years is a ridiculous ask. However, if you have a proposal coming up in the coming weeks or months (ready for 3.15) that could replace this proposal with a more generic feature, that would be a reasonable ask.

So I’ll ask directly: is there currently a PEP in the works, ready for 3.15, whose design this PEP could interfere with? If the answer is no, I don’t see why this PEP should be held up.

__getattr__ is only called after the attribute has been looked up in the module’s __dict__.
So this is correct code for getting a module’s attribute:

   dict = *_PyObject_GetDictPtr(module); // Or the Stable ABI equivalent
   attr = PyDict_GetAttr(module, attr_name);
   if (attr == NULL) {
        // Then call __getattr__
   }

With PEP 810 this will result in attr assigned a PyLazyImportObject, will it not?

Hummm, there is no PyDict_GetAttr but I suppose you mean PyDict_GetItem and friends.

That will happen indeed and that’s by design, the same way accessing module.__dict__ in Python would give you the lazy proxy. I’m not looking to start a long debate here about how common this is or isn’t, but I wanted to honor your concern by doing some investigation into how real code actually uses these APIs.

I analyzed all 132 C-API import call sites in CPython’s standard library (Full data here). 27% use PyImport_ImportModuleAttrString() or PyImport_ImportModuleAttr(), which handle reification automatically. 54% import but never access attributes (it’s used to get the module state or other business). The remaining 20% use separate import + attribute access, and all 26 of those cases use PyObject_GetAttrString() or PyObject_GetAttr(), which trigger reification through the normal attribute protocol.

Critically, nothing in the standard library bypasses the attribute protocol by accessing the module dict directly after import (as of right now :wink:). There are zero instances of the pattern you describe (direct dict access after import to get a specific attribute). All the usages I have seen (like this one in the curses module) are to set stuff on the module.

Here is a quick matrix plot for your convenience:

Now, in the remote case where we still think we need mitigation, we’re happy to add new C-APIs now or later. There are a few options:

  • PyImport_ImportModuleReified() - would guarantee a real module object is returned with all attributes reified.

  • PyModule_GetDictReified() - would reify any lazy imports before returning the module’s dict.

  • PyDict_GetItemWithReification() (or similar) - would add reification logic directly to dict access operations, checking if a value is a lazy proxy and reifying it automatically.

We believe all of these would be premature. The common patterns work correctly with zero changes, while the vanishingly rare edge case can be easily addressed with a check for lazy objects and a call to resolve the lazy object if needed. If this ever becomes a real issue, we can add these APIs later without breaking anything. That said, if the SC thinks any of these APIs are required, we’re happy to add them of course.

9 Likes

This is my main and only point. The rest is to indicate why the benefits are not big enough to forgo precautionary measures / the extra effort to minimise or eliminate this risk.

So, in the more general space of deferred evaluation, there are roughly 4 levels of explicitness (from most to least):

  1. Explicit proxy object - everything is explicit here (think dask evaluation graph)
  2. Mutation - still no implicit substitutions, but the class changes shape (think importlib.util.LazyLoader)
  3. Object is replaced implicitly on a chosen set of operations - still not completely implicit. (This was thoroughly explored and implemented in Backquotes for deferred expression)
  4. Completely implicit namespace variable substitution on any reference. This was one of the initial starting points for “Deferred evaluation”. (https://github.com/DavidMertz/peps/blob/master/pep-9999.rst hinted at this to high degree.)

Approach (4) is the most implicit black magic there is in this area, and there was a non-trivial amount of effort to explore more explicit alternatives, because for the general “deferred evaluation” concept it is simply too implicit given Python’s philosophy and the way things are.


And this, although named “Explicit lazy imports”, is using the most implicit method there is in this space.

So what I am saying is that, at the very least, this should be given more time for things to come together.


So my only problem is with the “implicit design of variable substitution”.
I have nothing against the other parts of this:

  1. Syntax - lazy is a good keyword - it is my favourite - defer and others I don’t fancy that much.
  2. Import-specific things seem well thought out. I am quite confident that it does deliver the “explicit-granular-…” promise. There were comments that it could be made simpler, and maybe it could be subject to a couple more steps back, but if it goes as it is - I don’t mind - not my area here, so I don’t even have any significant opinion.

I never followed “lazy imports” because I never thought it would drop the bomb of such an implementation. I just assumed it would do something along the lines of “explicit proxy / mutation-LazyLoader-like” and would be nicely orthogonal and non-overlapping.

But it turned out differently, and the pace of this doesn’t make it easy to jump in and raise high-level considerations.


So to get back to implicitness.

This will not be equivalent:

try:
    ref = lazy_import('non_existent_package')
except:
    sys.exit(1)

Currently (with LazyLoader) it will terminate early; with the new lazy import statement it always runs without an error.

And this is another point: what is the point of an “import-specific” implementation if it doesn’t make any importing assurances? The current behaviour is much more explicit and useful.

lazy import does_not_exist

success…

I agree that implicit handling will not choke on any cases. But that is what “completely implicit” means - it passes silently on anything. I would argue that it would be better for it to choke a bit and let packages and the ecosystem adapt to a “non-choking” way instead, while slowly widening the “non-choking” space to its maximum possible extent.

There is very little information on the drawbacks of LazyLoader in PEP 810 – Explicit lazy imports | peps.python.org. Would you mind giving more information?

  1. What cases does it not work on?
  2. Could you give some examples where LazyLoader fails to deliver what this proposal manages well?

This is my challenge:

List the cases where LazyLoader fails, and if I find a way to make them work, you seriously reconsider adjusting the approach.

I don’t represent anyone; I’m speaking for myself as a community member who’s been following this discussion. When I said “we’ve listened” I meant the community has engaged with your points throughout this thread: you can see the responses from multiple people.

You’re saying I’m being “too personal”, but I’m talking about the pattern of discussion behavior, not attacking you personally. You’re jumping between multiple unrelated topics across many long posts, responding to your own posts, and asking for the PEP to be delayed for years for some hypothetical “more general deferred evaluation” approach. That’s not about you as a person; that’s about how this pattern is making the discussion really hard to follow for everyone trying to focus on the actual technical questions.

I am not surprised: I have yet to see someone who’s taking up too much space in a discussion actually realize it, as everyone always thinks they’re just taking their fair share.


Look, it’s totally fine to disagree with the PEP. You don’t need to agree with everyone. But you’re going on and on with the same vague arguments without engaging with the responses people keep giving you.

People have explained the design rationale to you multiple times, and your arguments keep being really vague. When someone points you to the PEP sections or explains why LazyLoader doesn’t work, you just shift to different broad concerns.

From what I can tell, this is sealioning. I think others have been answering your questions, but the pattern keeps being the same: you raise vague objections, people respond with specifics from the PEP, and you ignore those responses and shift to demanding more work from the authors or saying things need more time. That’s not productive technical discussion. Lots of interesting technical conversations are getting buried under you going on about how we need to wait years more for some hypothetical “more general deferred evaluation” approach.

I won’t be engaging further on this, and I’d respectfully recommend others also consider not engaging. I prefer to focus on the actual open technical questions, not keep going in circles on fundamental design choices that have already been thoroughly discussed and documented in the PEP.

16 Likes

My take on the subject:
@dg-pb’s point about the “4 levels of deferred magics” is right. Yet the long history of import-within-def also shows there is a robust, leverageable pattern.
Going fully lazy is a painkiller today and a footgun tomorrow, extending from silent errors at import time to the longest-time-scale version of “tomorrow”, i.e. inter-package compatibility testing.
Assuming this, patterns allowing “everything is lazy” should not be acceptable; every lazy import should be a special case.
The hidden magic should not conceal other things in plain sight, i.e. __dict__ access should reify (anyway, what would a non-reified dict contain? Most probably exactly nothing). During development, lazy and eager modules should have the same interface; if people notice strange things when interacting with a lazy module through autocompletion or inspection, that should warn them of the magic involved and that the module had better not be lazy-loaded. The benefits of lazy import should only matter at execution time, not at inspection/development time; the other way around is a code smell or a design flaw.
Code with or without lazy should be guaranteed to run the same, but this is impossible if imports have side effects, so lazy-with-side-effects should be strictly discouraged, and prevented if possible (how? I don’t know).
Some way of non-reifying access should be provided, and the path to the module should be resolved at lazy import declaration, minimally preventing impossible imports. Thus __lazy__.path should be public and potentially raise at resolve time. Perhaps several things like __version__ should also be eagerly obtained through __lazy__ for further compatibility checks (not inspection checks, which should not depend on laziness).
→ The hidden side effects and silenced impossible/incompatible imports are footguns that are impossible to fully cancel out; everything doable to mitigate them should be done. What is missing to allow a technical consensus is a proper eager resolve procedure.

Thanks a lot! This is indeed very nice.

(I actually searched before posting to see whether this question had already been raised, but couldn’t find it. Sorry for the noise)

2 Likes

I played with importlib.util.LazyLoader for a bit and it is quite easy to improve.
At least to the level that I could fathom.

What it is not able to handle is from package import object.
But is it worth the cost?


The extension is 74 lines of code:

  1. import_module-like interface
  2. handles submodules (and namespace packages)
  3. works in any place (same as import_module, naturally…)

from importlib.util import lazy_import

with timeit_ctx(print):     # 0.0004
    ttk = lazy_import('tkinter.ttk')
tk = sys.modules['tkinter']

print(type(tk))             # <class 'importlib.util._LazyModule'>
print(type(ttk))            # <class 'importlib.util._LazyModule'>

with timeit_ctx(print):     # 0.017
    repr(ttk)

print(type(tk))             # <class 'module'>
print(type(ttk))            # <class 'module'>

  4. Raises the usual errors at import definition

try:
    tkinter = lazy_import('nonexistent_tkinter')
except ModuleNotFoundError:
    print('Caught tkinter.')

  5. Bonus - import in the background

from importlib.util import background_import

with timeit_ctx(print):     # 0.0004
    tt = background_import('turtle')

print(type(tt))             # <class 'importlib.util._LazyModule'>
time.sleep(0.1)
print(type(tt))             # <class 'module'>

with timeit_ctx(print):     # 0.00001
    repr(tt)

print(type(tt))             # <class 'module'>

try:
    turtle = background_import('nonexistent_turtle')
except ModuleNotFoundError:
    print('Caught turtle.')

If anyone has cases that have issues with importlib.util.LazyLoader, I would be interested to hear about them.

“Hidden magic” feels like it describes many of the powerful tools we have in Python rather than something alien to it.

Lookups where you see one object when accessed from __dict__ and another when accessed directly feel like an extension of what you can do with non-data descriptors on classes. Perhaps if PEP 549 had been accepted this would already be normal in modules.


Yes. From imports are essential. Any solution which ignores from imports is ignoring one of the major reasons for lazy imports.

You need them to handle cases where the import exists to export the API to users. It would let people avoid having to write things like this: transformers/src/transformers/__init__.py at main · huggingface/transformers · GitHub

Alternatively, you’ll see tools using separate type stub files with the same basic idea. This requires duplicating all of your imports, which is annoying to maintain and easy to break.
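
(For context, the boilerplate being referred to is, roughly, the PEP 562 module-level __getattr__ re-export pattern. The sketch below is a generic illustration with placeholder names, not the actual transformers code, of why the imports end up duplicated for type checkers:)

# __init__.py of a hypothetical package
import importlib
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Imports duplicated here so type checkers and IDEs can see the re-exported names.
    from .models import Model
    from .pipelines import pipeline

_lazy_attrs = {"Model": ".models", "pipeline": ".pipelines"}

def __getattr__(name):  # PEP 562: called only when normal attribute lookup fails
    if name in _lazy_attrs:
        submodule = importlib.import_module(_lazy_attrs[name], __name__)
        return getattr(submodule, name)
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")

Under PEP 810, a single lazy from .models import Model line is intended to make this kind of boilerplate unnecessary.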

I have this context manager tool for lazy imports that I have used in applications, but static analysis really doesn’t understand it, since the names don’t actually exist in the module globals[1], and the replacement of __import__ won’t play nicely if something else wants to replace __import__ at the same time[2].


I’ll admit I’m not completely satisfied with the explanation for the rejection of a context manager over new syntax. I’ve been over those arguments here but I would also add that I wouldn’t have expected it to work by changing the compiled code, which seems to have been the mechanism that was considered.


  1. they are attached to the LazyImporter object which is accessed directly or through module __getattr__ ↩︎

  2. hence, fine in applications where I control that - but not for libraries where I would really like to use this but can’t control if something else also wants to replace __import__. ↩︎

2 Likes

I think from … import … not working is a pretty serious issue. While it can be partially worked around, doing so is tricky because it exposes the difference between from package import module and from package import attribute, which is otherwise mostly invisible to users. As an example, I’ll use astropy.units (a module) and astropy.__version__ (a string, defined in astropy/__init__.py):

# Currently
from astropy import units, __version__

# With PEP 810
lazy from astropy import units, __version__

With PEP 810, both continue to work as expected, after adding a single keyword.

With your proposed lazy_import function[1], I suddenly need to be aware of that distinction:

# This works when from-importing a submodule:
units = lazy_import('astropy.units')

# Workaround for from-importing an attribute:
astropy = lazy_import('astropy')
# ... then use `astropy.__version__` in the code instead of just `__version__`


# Note that trying either of those the other way around would fail:

# Raises `AttributeError: 'NoneType' object has no attribute 'loader'`
__version__ = lazy_import('astropy.__version__')

# Second line raises `AttributeError: module 'astropy' has no attribute 'units'. Did you mean: 'utils'?`
astropy = lazy_import('astropy')
astropy.units

Note that the error messages are either cryptic (in the first case) or misleading (in the second case).[2] We could probably add some code to improve them, but the underlying inconsistency would remain.

There is an important underlying difference here: in the PEP, both finding the spec and loading the module are lazy; in lazy_import, finding the spec is eager while only loading the module is lazy. But as mentioned previously, spec finding can make up a large fraction of the total import time; so while lazy_import has an advantage in these try/except scenarios[3], it has a severe performance disadvantage.
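
(As a concrete sketch of the workaround mentioned in footnote 3, using only the existing importlib.util.find_spec() API and an example module name: probe availability eagerly, and keep the expensive loading lazy.)

import importlib.util
import sys

# Eager check: fail fast if the package cannot be found at all...
if importlib.util.find_spec("astropy") is None:
    sys.exit("astropy is required but could not be found")

# ...while the actual (expensive) loading stays deferred, e.g. via
# `lazy import astropy` under PEP 810, or a LazyLoader-based helper before 3.15.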

This looks interesting! And I’m guessing it would get a significant performance boost under free-threading? It’s orthogonal to this PEP, so we shouldn’t discuss this here unless it directly affects the implementation of lazy imports; but I wonder if there’s enough interest in this to introduce some async import syntax, analogous to lazy import. :thinking:


  1. I’m assuming the example implementation from the importlib docs, which appears to match the behaviour in your code samples. ↩︎

  2. Tested with astropy 7.0.0. After writing this example, I noticed that the latest astropy release overrides __getattr__, so the second case should no longer raise an AttributeError. But my point still stands that the default behaviour would raise that error. ↩︎

  3. Which can be worked around with an explicit importlib.util.find_spec() call in the generic case, or with e.g. if sys.version_info > ... in some special cases ↩︎

1 Like

See: Async imports to reduce startup times

2 Likes

Why?
Should lazy import replace the majority of import statements?
Or is the aim to provide a tool for a few slow-to-load libraries?

If the latter, then this is not an issue at all.
If the former, then this implementation is hardly suitable for it - if used extensively, it is a ticking time bomb:

lazy from thing import thing
lazy from os.path import splitext, aabspath, future_error_here, and_here, and_here_as_well
# so far so good...

stem, _ = splitext('file.py')
# still good...

# Made use of one of the objects from `from` import, but it didn't expose any errors for its neighbours.

Without static type checking this is unsustainable.
And static type checking is optional (e.g. stdlib doesn’t use it).


Yes, this analogy is undeniably convenient.

But in contrast, I think, the consistency of behaviour of:

try:
    lazy import non_existent_package
except ModuleNotFoundError:
    ...

is a much more important place to put emphasis on.


Yes, it can. To be more precise, from what I have observed, 100-400 µs.
And this is quite fast.
Import times of slow-to-load libraries, where load times become a bottleneck, are of a much higher order. E.g.:

full_stdlib - 170 ms
numpy       - 100 ms
scipy       -  14 ms
scipy-stats - 500 ms
pandas      - 200 ms

There might be opportunities to optimize - but I wouldn’t expect anything major from a low-to-mid-effort attempt. (There might be surprises on the high-effort end, but I wouldn’t count on it either.)

From my experience, a big portion of slow-to-load packages are slow to load because of low-quality code, bloat and unnecessary dependencies.

Of course there is a portion of high-quality stuff that is slow to load, but in relation to the fast-to-load stuff the ratio is small.


If there is a place to apply Python’s Zen, it is this:

Explicit is better than implicit.
...
Simple is better than complex.
...
Errors should never pass silently.
Unless explicitly silenced.

Some decisions are being made based on only one of these, where the validity is arguable at best.
Whereas, the way I see it, this proposal violates the above to a very high degree, in their truest sense.


If the community cannot live without from module import attribute, then so be it, but I cannot stress enough the price being paid for this single feature (my GitHub crashed on the PR’s diff…).

Otherwise, most or all of the other benefits of this (such as filtering, dedicated syntax, …) can just as well be applied to the much more explicit approach of eagerly finding the spec, which has many other non-trivial benefits in several dimensions.

@dg-pb thank you for your continued engagement and for the energy you’ve brought to this discussion. Your position is clear, and your concerns have all been thoroughly documented in this thread by your many messages. You can be sure the Steering Council will have full visibility into your feedback when they review this PEP.

We (the authors) have carefully considered the trade-offs you’ve raised. Many of them were already addressed before, are discussed in the PEP, or are not problems at all. It is also understandable and totally fine if you aren’t satisfied with our answers or with the current state of things. While we understand and respect your perspective, we remain confident that PEP 810, as written, strikes the right balance for the Python community. The proposal addresses real, widespread pain points, and we’ve received overwhelming support from library maintainers, users, educators, organisations, and many community members who have validated that this design solves critical problems they face. Many of those users have expressed their enthusiasm and support in this discussion. Your voice is important, but it’s one voice among many. Unfortunately we don’t have anything additional to say that we haven’t said before or that isn’t in the PEP already, and we don’t want to keep going in circles. We have previously asked community members to please respect when we consider a topic closed (after stating their view, of course), so I will kindly and respectfully ask you the same thing again.

At this point, continuing to reiterate the same concerns or engaging in meta-discussion doesn’t add new information. Instead, it makes it significantly harder for the Steering Council, other reviewers, and community members to parse this thread and identify the distinct points of feedback. We need to keep this conversation followable and productive for everyone involved.

I appreciate your passion and your investment in Python’s future, but I need to ask you to please let the discussion move forward. We have nothing further to add on these specific points, and we’re going to proceed with the proposal as written.

I have no doubt that as a reasonable person you understand this position and you will respect it. I’m confident we can work within our community guidelines without needing moderator support.

Thank you again for taking the time to engage thoughtfully with this work.

28 Likes

One keyword has already elevated this discussion to nearly the longest thread in the forum (15 messages short), so let’s not pile on another one! :wink:

That’s an interesting idea, but you’re right that it’s orthogonal to this PEP. async import would have very different semantics and use cases compared to lazy imports. If you’re interested in exploring that, I’d suggest starting a separate discussion thread or opening a new topic, as it deserves its own conversation rather than getting mixed into the lazy imports discussion here.

For now, let’s keep this thread focused on PEP 810. Thanks for the thoughtful question!

6 Likes

Small update: We’ve added a bunch of updates to the PEP, with clarifications and improved language based on feedback from this thread and some private discussions. We’ve also updated the demo and the reference implementation, though keep in mind there may still be bugs and missing pieces as we continue refining things.

Also, as a small note, and as we said previously in this monster of a discussion, thank you for your understanding if we’re not responding to your comment directly. I can assure you we’re reading everything carefully and discussing it internally as a team. We’re also getting a bunch of private messages and emails with comments, enthusiastic support, kind words and suggestions. It’s genuinely helpful to hear how this would impact your work. Unfortunately, it’s impossible to respond to everything individually given the volume, but please know that even if we don’t reply, we’re listening and we value your input!

Thanks again for all the engagement!

15 Likes

I’m reluctant to comment since this thread is already huge. However, here’s my 2c. First, I think having a lazy import mechanism built into the language is a good idea. Almost every larger Python project wants such a thing. Yes, you can roll your own. Having a standard way would be better.

IMHO, Python’s import system (modules, packages, etc) could actually use a complete overhaul. E.g. have a builtin function like import("...") that provides the new behavior. You could do this as a 3rd party package but then you also need tooling like type checkers and IDEs to understand it. Anyhow, that’s completely out of scope of this PEP. A big overhaul like that doesn’t seem likely to happen.

In my own projects, I’ve been using a simple implementation of lazy importing. I let type checkers know about imports with the following pattern:

if TYPE_CHECKING:
    import scipy as sp
else:
    sp = load("scipy")

If this PEP gets approved, I can switch to it and simplify my code.
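
(For readers wondering about load: it isn’t defined in the snippet, but a minimal sketch of such a helper, assuming it follows the LazyLoader recipe shown earlier in the thread, could look like this:)

import importlib.util
import sys

def load(name):
    # Find the spec eagerly, but defer executing the module body until the
    # first attribute access, using importlib.util.LazyLoader.
    if name in sys.modules:
        return sys.modules[name]
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)
    return module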

12 Likes

Hi all,

thank you to the authors for writing the PEP in a way that is easy to follow and understand even for non-core developers. :blush:

I’d like to return to the following point which I don’t feel was fully addressed (apologies if I missed it)

import sys
__lazy_modules__ = ['x', 'y', 'z']
import x
import y
import z

# all the things

if sys.version_info[:2] > (3, 14):
    # ensure we didn't accidentally reify an import that should be lazy
    for mod in __lazy_modules__:
        assert is_lazy(mod)  # or however that might be written/spelled
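
(For comparison, with today’s LazyLoader-based approaches one possible spelling of such an is_lazy check, relying on the private importlib.util._LazyModule class, is sketched below; a PEP 810 equivalent would need whatever introspection hook the PEP ends up exposing.)

import importlib.util
import sys

def is_lazy(name):
    # True if `name` is already in sys.modules but its module body has not run yet.
    # type() does not touch module attributes, so this check does not trigger loading.
    mod = sys.modules.get(name)
    return mod is not None and type(mod) is importlib.util._LazyModule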

I’ve recently spent a fair amount of time trying to speed up a CLI application through inline imports (and even made my first CPython contribution to inline some imports in stdlib :slight_smile: ), and it is a surprisingly non-trivial, painstaking and iterative process, and at the end the result is very brittle unless one invests in non-trivial testing to ensure that modules meant to be lazy are indeed lazy.

The PEP already proposes a global flag for marking all imports as lazy (-X lazy_imports), and a mechanism to ensure certain imports are eager (Lazy imports filter). Would it be possible to have a global flag (something like -X warn_on_eager_lazy_imports) that would essentially be a global equivalent of the snippet proposed above? (Note that the snippet doesn’t work for lazy imports defined via the lazy keyword.) Especially for larger codebases, I think something like this would be hugely valuable.

btw: I really like that the PEP explicitly mentions that a dedicated how-to guide on this topic will be created, and mentions things like -X importtime that many people are unaware of.

1 Like