Re-reading the PEP, I see one point without much rationale. Sorry about pointing it out so late – it’s a big PEP, and it’s easy to miss the forest for the trees.
> “Transparent” means that besides the delayed import (and necessarily observable effects of that, such as delayed import side effects and changes to `sys.modules`), there is no other observable change in behavior: the imported object is present in the module namespace as normal and is transparently loaded whenever first used: its status as a “lazy imported object” is not directly observable from Python or from C extension code.
Is this a good constraint?
Lazy imports aren’t fully transparent, as they have “necessarily observable effects”. With that in mind, is preventing any other observable changes worth the implementation complexity?
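(An aside from me, not from the PEP:) the stdlib already ships a lazy-import mechanism that is deliberately *not* transparent – `importlib.util.LazyLoader`. Its documented recipe defers module execution until the first attribute access, and in CPython the lazy module object is observable (its type differs) until then:

```python
# The documented importlib recipe for lazy loading; execution of the module
# body is deferred until the first attribute access on the returned module.
import importlib.util
import sys

def lazy_import(name):
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)   # with LazyLoader, this does NOT run the body yet
    return module

mod = lazy_import("decimal")
# At this point, type(mod) reveals the laziness (CPython uses a _LazyModule
# subclass); touching any attribute triggers the real import:
value = mod.Decimal("1.5") + mod.Decimal("0.5")
```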
The complexity worries me. I understand that it can be easily added to current dicts, but it’ll be a burden for any future optimizations and implementations.
To make things clearer, consider semantics like the following. (I don’t see anything similar in Rejected ideas, hopefully it wasn’t floated earlier):
- A lazy `import foo` creates a global variable `__lazy:foo` (specially named, but otherwise normal), and sets it to a lazy object. (Another possibility is using a global dict.)
- `LOAD_GLOBAL` for potentially lazy objects (which are known at compile time) becomes a new instruction (or specialization), which:
  - tries loading `foo`, and if that doesn’t succeed:
    - loads `__lazy:foo` from globals (not builtins), resolves it, and stores the result as `foo`
  - replaces itself with plain `LOAD_GLOBAL`, if the specializing machinery allows that
- The module’s `__getattr__` tries resolving lazy objects the same way
- `set_lazy_imports` would work as in the PEP
- `importlib.resolve_lazy_imports(mod_or_dict)` or a similar API is added for resolving lazy objects explicitly
That would break more modules than the transparent way of modifying the module `__dict__`, but how serious would that be? ISTM that it would break modules that inspect module namespaces directly.
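For instance, here is what namespace-inspecting code would observe under these semantics (my illustration, using the hypothetical `__lazy:` naming scheme described above):

```python
# What introspection tools would see under the sketched semantics; the
# '__lazy:' slot and the sentinel object are illustrative only.
import types

mod = types.ModuleType("example")
mod.__dict__["__lazy:foo"] = object()   # stand-in for an unresolved lazy object

# The plain name is absent until first use, so such code notices a difference:
assert "foo" not in vars(mod)
assert not hasattr(mod, "foo")          # no module __getattr__ installed here
assert "__lazy:foo" in vars(mod)
```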
Would it be better than changing `__dict__`? I have no way of knowing. Even if I implement it, I can’t quite test it without access to huge real-world codebases, and mechanisms to patch third-party deps.
And that’s the main thing that makes me uneasy. Testing an implementation is a huge undertaking, since it involves adapting third-party code. I don’t think it can realistically be done outside Meta. If the proposed semantics are just a local optimum, or a Meta-specific one, we might get stuck in it.
I don’t know what to do about this, though. It doesn’t sound fair to ask Cinder folks to implement and test half-baked ideas.
So, my concrete question is: how important is the “transparency”?
(Apologies if this was discussed before – but if it was, it should be mentioned in the PEP.)
(This is a personal view; I don’t represent the SC here.)