I realize I failed to state this accurately: I do not think PEP 563 is the correct solution either. I think it carries almost entirely the same issues as this PEP does, and in many places even worse ones. I work with pydantic daily and have experienced the subtleties it carries. I use active type hints all the time and design a lot of code around them. I wrote a decently sized post a few months ago on how you can solve large chunks of this problem in a backportable way. I was mostly focused on preserving backportability, as well as some other active typing issues, and some ugly design choices were made around that.
Of the main problems being addressed, the first is forward references, in some flavor of self-referential or defined-later. Those, as you pointed out, are very common, and the current default workaround of stringifying them is not great. Frankly, it is also a major limiting factor in annotations, as you cannot easily use actual literals in annotations.
To my understanding of Python bytecode, it should be reasonably simple to create a new bytecode instruction, used by annotations in place of LOAD_NAME, that looks up the name and, if a NameError is raised, simply creates a ForwardRef, appends it to the globals dict, and returns that value. That would resolve all the current forward reference issues and be deterministic. Some version of this (like Carl's suggestion above) has already been determined to be required by this PEP anyway.
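As a rough illustration of the intended semantics, here is a pure-Python model of that lookup (not actual bytecode; load_name_or_forwardref is just an illustrative name):

from typing import ForwardRef

def load_name_or_forwardref(name, globals_dict):
    try:
        return eval(name, globals_dict)  # the normal lookup
    except NameError:
        ref = ForwardRef(name)
        globals_dict[name] = ref  # cache it, so later lookups see the same ref
        return ref

print(load_name_or_forwardref("int", globals()))           # <class 'int'>
print(load_name_or_forwardref("NotYetDefined", globals()))  # ForwardRef('NotYetDefined')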
For the second issue, my link above has a nice and "clean" design for dealing with circular references/import deferrals from pure Python. It also would allow you to preserve import information at runtime, which PEP 649 does not. A pure-Python version is crazy hacky but sorta "just works".
I am going to say there is really a "third" issue: the assigning and evaluating of annotations. That, IMO, is what is trading determinism for performance. This is where PEP 649 and PEP 563 do not behave like I expect Python code to behave.
I also think there is an easy halfway solution, where names in complex annotations are captured eagerly and the expression is evaluated lazily. That would preserve the current default annotation capture behavior and does not have the issue of changing depending on when you look at it. Depending on the mixture of class to function annotations, the size of this in memory may end up being smaller than keeping the class namespace alive anyway. And it does not interact with metaprogramming that modifies the namespace.
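Roughly, as a toy pure-Python model (illustrative only, not how the compiler would actually do it):

# The names dict/int/str are resolved eagerly, while the class body runs...
captured = (dict, int, str)

# ...but the expression dict[int, str] is only built when the annotation
# is actually read, so later rebindings of "dict" have no effect.
def annotation(c=captured):
    return c[0][c[1], c[2]]

print(annotation())  # dict[int, str]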
Oh! It does my heart good to read that. You don't even know how much!
I think it's entirely possible, yes. And I'm currently working on it.
The bad news is: what I'm working on is something of an overhaul of PEP 649. I'm nearly ready to post a new top-level topic about it here on the Discuss. Sadly not quite ready yet. Hopefully this week.
If I were you, I'd be worried, and with good reason! All I can say is, I'm doing my best over here, to ensure that PEP 649 considers all the ramifications of the change. And I'm working assiduously to ensure that all users' needs are met, and that 649 won't be regarded by history as a terrible mistake.
Hello, I'm the author of PEP 649. Thanks for taking the time to suggest your alternate proposal; I appreciate that you want to make Python better. Ultimately I don't prefer your approach. I also believe some of your criticisms of PEP 649 are mistaken. Please see my comments below.
I concede that I don't use metaprogramming in Python, and I'm not conversant in what might be common techniques in that world. I don't really understand the problem you describe.
However, I copied and pasted your class Bug: ... sample into a local source file, changed the last line to print the annotations, and ran it under Python 3.11. It showed val was annotated with the Correct class. I then added from __future__ import co_annotations and ran it under the most recent version of my co_annotations branch (hash 63b415c, dated April 19 2021). This also showed val was annotated with the Correct class. I checked, and yes, it was creating the __co_annotations__ attribute, and when I ran that method manually it returned the Correct annotation for val. So, as far as I can tell, the code sample you said would fail under PEP 649 actually works fine.
I appreciate that metaprogramming is a complicated discipline, and I'm willing to believe that PEP 649 can cause observable and possibly undesirable behavior changes in metaprogramming. I'd be interested if you could construct a test case that did demonstrate different results with the current co_annotations tree, particularly if it's a plausible example of real-world code, rather than a contrived and unlikely example. (There have been a lot of changes proposed to PEP 649, but they wouldn't affect this part of the mechanism, so the April 2021 version is fine for you to test with.)
No, but it would mean that classes simultaneously shadowing existing names and using those names as part of an annotation will have to find an alternate expression that resolves to their desired value, e.g.
import builtins

class A:
    clsvar: builtins.dict[int, str]
    def dict(self): ...
Perhaps the inconvenience of fixing these sorts of sites is offset by the improved readability, where the reader doesn't have to remember the order of execution in order to remember "which" dict was being used in the annotation.
Unfortunately, it's not as "reasonably simple" as you suggest.
First, you would need to create three new bytecodes: the replacement for LOAD_NAME you suggest, but also equivalents for LOAD_GLOBAL and LOAD_DEREF. All local variables referenced in an annotation would have to be relocated out of fast locals and into a closure (as already happens in PEP 649), because the Python compiler doesn't do dataflow analysis, and so doesn't know at compile time whether or not a particular local variable has been defined at any particular point.
Also, managing the ForwardRef instances is going to add complexity. The ForwardRef needs to be the "stringizer" class, which means implementing every dunder method and creating new stringizers in a kind of crazy way. Rather than expose that behavior to users, I propose to make it a "mode" that you can switch on and off, and I would shut it off on all ForwardRef objects before returning them to users.
But tracking all the ForwardRef objects that could be created is tricky. Consider this example:
import typing

class C:
    a: typing.ClassVar[undefined_a | undefined_b]
It isn't sufficient to simply iterate over all the final values in the annotations dict and shut off "stringizer" mode on all the top-level ForwardRef objects you see. ForwardRef objects may be buried inside another object; in the above example, ForwardRef('undefined_a | undefined_b') would be stored inside a typing.ClassVar. It's not reasonable to exhaustively recurse into all objects in an annotations dict to find all the ForwardRef objects that might be referenced somewhere inside.
Similarly, it's not sufficient to simply remember every ForwardRef object constructed by the "fake globals" dict's __missing__ method. In the above example, undefined_a | undefined_b is a ForwardRef constructed by calling ForwardRef('undefined_a').__or__(ForwardRef('undefined_b')). So this is a ForwardRef object created by another ForwardRef object, not by the __missing__ method.
My plan for the implementation of PEP 649 is to create a list to track every ForwardRef created in the computation of an annotations dict. It would be created by the "fake globals" environment, and every time anything created a new ForwardRef object (either the __missing__ method on the "fake globals" dict, or a dunder method on a ForwardRef object) the new object would be added to the list, and would also carry with it a reference to the list. After the __annotate__ method returns, but before I return the new annotations dict to the user, I iterate over the list and deactivate the mode on every ForwardRef object.
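A heavily simplified sketch of that bookkeeping (assuming a toy Stringizer class in place of the real ForwardRef mode, with only __or__ implemented):

class Stringizer:
    def __init__(self, source, registry):
        self.source = source
        self.registry = registry  # shared list of every stringizer created
        self.active = True        # the on/off "stringizer mode"
        registry.append(self)

    def __or__(self, other):
        # Dunder methods build new stringizers instead of real values.
        other_src = other.source if isinstance(other, Stringizer) else repr(other)
        return Stringizer(f"{self.source} | {other_src}", self.registry)

registry = []
combined = Stringizer("undefined_a", registry) | Stringizer("undefined_b", registry)
print(combined.source)  # undefined_a | undefined_b
print(len(registry))    # 3 -- including the nested one created by __or__
for ref in registry:    # deactivate them all before handing values to the user
    ref.active = False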
I think this will work. But there's an awful lot of magic behind the "stringizer" and the "fake globals" mode that permits it to work. I'm comfortable putting that magic into the Python library. I'm not comfortable building that much magic into the Python language.
I assume the "easy halfway" solution you mention is your "LOAD_NAME which creates a ForwardRef for missing symbols" proposal, incorporating the "stringizer" functionality (which you called DefRef in your "Alternatives" thread).
I don't like this approach. Also, it doesn't currently satisfy all the use cases satisfied by 649. The latter problem is fixable; I don't think the former problem is.
The latter problem is simply that you provide no recourse for getting the "stringized" annotations. Runtime documentation users enjoy the "stringized" annotations provided by PEP 563. Also, I made sure PEP 649 supported "stringized" annotations as a sort of failsafe. I worry there may be users out in the wild who haven't spoken up, who have novel and legitimate uses for "stringized" annotations. If we deprecate and remove the implementation of PEP 563, without providing an alternate method of producing "stringized" annotations, these users would have their use case taken away from them.
This part is relatively easy for you to fix: simply add to your proposal some mechanism to provide the "stringized" strings. Since you don't propose writing the annotations into their own function like PEP 649 does, I assume you'd more or less keep the PEP 563 approach around, but rename it to a different attribute (e.g. __stringized_annotations__). You might also have to propose a lazy-loading technology for it, as there was some concern that this approach would add memory bloat at Python runtime for an infrequently-used feature.
The reason why I still don't like this approach: I think o.__annotations__ should either return the values defined by the user, or fail noisily (e.g. with NameError). I don't think it's acceptable for the language to automatically and silently convert values defined by the user into proxy objects, and I consider the techniques necessary to create them to be too magical to define as part of the language. I don't remember your exact words, but I dimly remember you described PEP 649's approach of delaying evaluation as being "surprising" or "novel", which I interpreted as a criticism. That's fair, but I consider silently replacing missing symbols with proxy objects far more "surprising" and "novel", and I am definitely critical of this approach.
I'm the kind of guy who literally quotes the Zen of Python when debating technical issues, and I suggest the Zen has guidance here:
Special cases aren't special enough to break the rules. Although practicality beats purity.
I wish annotations weren't all special enough to need breaking the rules. But the Python world's experience with annotations over the last few years has shown that they are. Annotations are a complicated mess, and we're far past being able to solve them with something simple. Here, unfortunately, practicality is going to have to beat purity.
I concede that PEP 649's delayed evaluation of annotations is novel, and a small "breaking" of "rules". But I consider it far less novel, and a much smaller infraction, than changing the language to silently and automatically construct proxy objects where it would otherwise raise NameError.
Also:
Errors should never pass silently. Unless explicitly silenced.
This one PEP 649 obeys, and your proposal does not. I consider requesting SOURCE or FORWARDREF format an explicit request to silence NameError exceptions; your proposal silently and implicitly catches those exceptions and swaps in a proxy object.
If you still prefer your proposal, that's reasonable. But you're going to have to write your own PEP, and you should probably do it soon. I already revised PEP 649 this week, and resubmitted it to the Steering Council; they previously indicated they wanted to accept it, so it's possible they could accept it very soon. (Although they said that before the recent revisions. It's possible they'll find something they don't like, and reject the PEP or ask for changes, which could give you more time.) I suggest you write your PEP as a response to PEP 649, and simply cite the material in it, which will make your PEP far shorter and faster to write.
My guess is, wrappers (e.g. attrs) would have about the same experience under your proposal as under 649. In both cases, they'd have to know how to compute annotations with proxies, and stringized annotations. Also in both cases, if they wanted to create new annotations or modify existing ones, the easiest and likely best way would probably be to dynamically construct a small function/class with the annotations the way they want, then pull the annotations out of there in the appropriate format and put them in the annotations dict they're building. With 649, this would live in __annotate__; with your proposal, it would live wherever they created their annotations, presumably as part of __init__ or what have you.
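For illustration, under today's stock semantics that technique might look something like this (build_annotations is a hypothetical helper, not anything from attrs or the PEP):

def build_annotations(lines):
    # Let Python itself compute the annotations dict by executing a
    # throwaway class body, then harvest the result.
    body = "\n".join(f"    {line}" for line in lines)
    scratch = {}
    exec(f"class _Tmp:\n{body}\n", globals(), scratch)
    return dict(scratch["_Tmp"].__annotations__)

print(build_annotations(["x: int", "y: 'Node'"]))
# {'x': <class 'int'>, 'y': 'Node'}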
The danger is that putting class variable annotations inside a function will make it so they can no longer access names defined in the class namespace, because class namespaces are not visible in their nested functions. Fixing this would require some changes to how the symtable works, but as far as I can tell PEP 649 does not propose any such changes.
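A quick illustration of the scoping rule in question:

class C:
    X = int

    def method(self):
        return X  # class scope is NOT visible inside nested functions

C().method()  # NameError: name 'X' is not defined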
I talked with Jelle about this in person. Jelle was simply unfamiliar with that aspect of PEP 649: __annotate__ (the new name for __co_annotations__ and __compute_annotations__) can in fact see the class namespace. PEP 649 specifically adds this ability: it adds a new __locals__ attribute to the FunctionObject, ensures it's used as the locals dictionary when the function is run, and sets __locals__ to the class dict for __annotate__ functions on class methods. Jelle's sample code, when run with the April 21 2021 version of the co_annotations branch (with from __future__ import co_annotations active), happily produces the same results as when run with Python 3.11.
Thanks, I had indeed missed that aspect of the PEP.
However, now I can confirm the behavior change @zrothberg described above. Consider this code:
Correct = "module"

class Meta(type):
    def __new__(self, name, bases, ns):
        ns["Correct"] = "metaclass"
        return super().__new__(self, name, bases, ns)

class OtherClass(metaclass=Meta):
    Correct = "class"
    val1: Correct
    val2: lambda: Correct  # poor man's PEP 649 (real PEP 649 is smarter)

print(OtherClass.__annotations__["val1"])    # class
print(OtherClass.__annotations__["val2"]())  # module
When run with current(ish) main and with 3.7, I get
class
module
But with your branch (and an added from __future__ import co_annotations), I get
metaclass
module
That makes sense, because previously the annotations were evaluated before the metaclass mutated the namespace, and now they are evaluated after.
Is this a problem? Possibly for some users, but it seems unlikely to affect a lot of real use cases. (I can believe that metaclasses injecting names into the class namespace is common, but metaclasses injecting names that are also used as annotations within the class body has got to be less common.)
You can obviously construct similar cases where PEP 649 changes behavior because namespaces are mutable. For example:
x = "before"
def f(arg: x): pass
x = "after"
print(f.__annotations__["arg"]) # currently "before", will be "after"
I don't think the new behavior is bad, and I'm not aware of a concrete use case that would be affected by this change, but it is a change in user-visible behavior. The existence of such edge cases could serve as an argument for putting PEP 649 behind a future import at first.
Okay, I understand. This is the "because 649 can change the order things are evaluated in, this effect is observable at runtime" behavior. This is already mentioned in the PEP. I'm not sure this use case is novel or common enough to require special mention in the PEP, though I'm not strongly averse to doing so.
Should we expect metaprogramming use cases to run into this behavior in the wild? Is it common in metaprogramming to use a class attribute as an annotation, then overwrite that attribute with a different value in metaclass __new__?
When I get a second later, I am going to respond to the larger post with more details. But to give you a less weird-looking example, and one that demonstrates the problematic interaction with metaprogramming:
def makedict(self):
    return dict(self)

class CustomType(type):
    def __new__(mcls, name, bases, namespace, /, **kwargs):
        namespace["dict"] = makedict
        cls = super().__new__(mcls, name, bases, namespace, **kwargs)
        return cls

class OtherClass(metaclass=CustomType):
    test: dict[str, int]
    # the annotation dict is referencing makedict, not the builtin
I had tested this example against the previous branch. Can't quite remember the error; I believe it was an empty dict, or it raised an exception. I have some code right now that does roughly this, but generates the function and injects it into the class namespace dict. If the new version is posted, I can test against that one.
I believe you when you say "observable changes in runtime behavior can also be observed when using metaclasses". You don't need to contrive examples to prove that point.
The more interesting questions are:
1. are there examples of code using metaclasses in the wild that will be affected by the runtime changes from 649, and
2. are there common metaprogramming idioms that will be affected by the runtime changes from 649?
I'd like to add two more points of critique of your approach.
First, your approach does nothing to reduce the runtime cost of unused annotations, and in fact makes computing annotations more expensive. Every time there is an annotation that uses an undefined name, you'd have to eagerly create the stringizing ForwardRef. Assuming your ForwardRef behaves as I've proposed for mine, you'd have to track all of them that were created, so you could shut off the "stringizer" behavior after computing annotations was done. This is all extra code that would run every time annotations are bound, under your proposal, and the ForwardRef objects themselves have some memory cost.
One of the goals of 563 and 649 is to make annotations cheaper (both in CPU and memory) when they're not referenced at runtime. Static typing users annotate large portions of their codebase, but rarely examine those annotations at runtime, and 563 and 649 both speed up the loading of those modules. Your proposal would make importing them even slower, and use more memory, than "stock semantics" for annotations.
Second, your approach permanently stores ForwardRef objects in the annotations for every missing symbol. For users who have a simple in-module circular dependency, if they wanted to see "real values" in their annotations, they'd have to go back and manually find and evaluate the ForwardRef objects buried in their annotation values. But these could be anywhere: stored in collections, buried in other iterables, stored in arbitrary attributes. I'm not sure this replacement work could be done in a general-purpose way, so I don't think we could write a library function to do it for users.
With 563 and 649, in many circumstances you can simply wait until all the symbols are defined before attempting to evaluate them: with 649, simply by inspecting o.__annotations__ as normal, and with 563 by calling eval, inspect.get_annotations, or typing.get_type_hints. I think the 649 approach is nicer, but then that's not surprising, as I'm the author of 649.
As I wrote above, yes. Even the standard library does so.
I think there is some confusion about why someone would do this, because the code (with dict) is from an actual codebase I wrote. It is not a contrived example like you suggest.
In particular, I want to make sure this is clear:
This metaclass would incur side effects under 649.
They are also not equivalent in effect. The code this example is from needs to add it to the namespace before calling super, so that type.__new__ properly executes __set_name__, as it is a descriptor being added. This is EXTREMELY useful in runtime annotations, as automatic creation of descriptors is frequently used in place of Django-like descriptors.
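To illustrate the ordering constraint (a minimal made-up example): type.__new__ is what invokes __set_name__ on descriptors it finds in the namespace, so a descriptor injected after calling super().__new__ never receives that call.

class AutoDescriptor:
    def __set_name__(self, owner, name):
        print(f"__set_name__ called for {name!r}")

    def __get__(self, obj, objtype=None):
        return 42

class InjectingMeta(type):
    def __new__(mcls, name, bases, namespace, /, **kwargs):
        namespace["auto"] = AutoDescriptor()  # must happen *before* super().__new__
        return super().__new__(mcls, name, bases, namespace, **kwargs)

class Model(metaclass=InjectingMeta):  # prints: __set_name__ called for 'auto'
    pass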
Once you account for the fact that metaclass inheritance is also a thing, this gets more and more complex, and harder and harder for the end user to predict what will happen under PEP 649. Currently, the only way unexpected values get added to the default annotations is if the metaclass defines __prepare__, and I can honestly say I have only ever come up with one situation where that was needed or useful.
I want to state that it really is necessary at times to modify annotations prior to the creation of the class. So long as __co_annotations__ is a real function and not a code object, that should be trivial, as you can actually just wrap it with a function in __new__ that modifies it, so that once it's inspected later it resolves the same way.
For example:
class CustomType(type):
    def __new__(mcls, name, bases, namespace, /, **kwargs):
        if "__co_annotations__" in namespace:
            namespace["__co_annotations__"] = fix_ann(namespace["__co_annotations__"])
        cls = super().__new__(mcls, name, bases, namespace, **kwargs)
        return cls
Not what I was referring to. This one is easier to explain in code.
Given the following class:
class example:
    start: dict[int, str]
Default semantics roughly translates that to:
class example:
    __annotations__ = {}
    __annotations__["start"] = dict[int, str]
Your code approximately does this (ignoring the scoping/caching/ForwardRef stuff right now, so I can physically write it in pure Python):
class example:
    @classproperty  # I know this isn't real, but let's pretend for ease
    def __annotations__(cls):
        return cls.__co_annotations__()

    @staticmethod
    def __co_annotations__():
        return {"start": dict[int, str]}
I am suggesting to do something like this:
class example:
    __co_ann__ = []  # temporary list that type.__new__ won't
                     # add to cls.__dict__

    @classproperty  # I know this isn't real, but let's pretend for ease
    def __annotations__(cls):
        return cls.__co_annotations__()

    __co_ann__.append((dict, int, str))  # eager capture

    @staticmethod  # lazy evaluation
    def __co_annotations__(a=__co_ann__[0]):
        return {"start": a[0][a[1], a[2]]}  # may need to use a different
                                            # dict builder (dict from iterable)
Add something like tuple interning and you are only carrying around a few extra pointers in the __co_annotations__ default arguments after GC runs. For class objects, I believe this will actually end up with a smaller memory footprint, because the class namespace will not need to be held: without co_annotations it is GC'ed after type.__new__; with co_annotations it is roughly bound to the lifetime of the class. I don't think this really needs a new PEP, as you are just shifting some stuff around. You still capture the stated effect of the title of the PEP, deferred evaluation of annotations, just with the semantics of eager capture.
The only thing from my alternatives thread that I was suggesting should be integrated, especially if memory and correctness are primary concerns, is the forward references created by import hijacking. (As I wrote above, you can ignore most of that thread; it was written with backportability in mind, not modifying the ForwardRef class.) That directly reduces the number of forward references, and retains actual runtime information about forward references to imported modules.
With this setup, if you shift all the creation of forward references to occur where the annotation is written, then your __co_annotations__ function can just check whether any variable in the tuple is a forward reference, and resolve them before creating the dict. This should dramatically reduce the complications of resolving them in place. Also, any future optimizations of annotation resolving (like caching against the entire expression) can always be stapled on afterwards without much concern.
I am not really sure what the issue is that runtime documentation users need stringized annotations for, that fails to work correctly with the repr of the computed annotation. I would think, in fact, that the repr would be more useful, as it will actually keep the original module of the class in the name, not just the current name that is being viewed. I'm not overall familiar enough with this topic, though.
So I think this mostly depends on how it is handled.
For one, I think the behavior is reasonably straightforward.
class a:
    var: b

print(b)
# forwardref["b"]
Also, I think we both agree this example shouldn't be a NameError:
class a:
    var: b

class b: ...
But this one should:
class a:
    var: c

class b: ...
One approach is to add a __debug__ or runtime-flag-bound check (so it doesn't impact production workloads) that verifies, once a module is done importing, that every automatically created forward reference created inside it resolves to a real object, and raises an error otherwise. Combining this with the import-created forward references (from my other thread) should cover almost every use case. It doesn't do anything for forward refs created inside closures, though; that is kinda its own can of worms anyway.
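A rough sketch of what such a check could look like (check_module_forward_refs and recorded_refs are both hypothetical names; nothing like this exists today):

import sys

def check_module_forward_refs(module_name, recorded_refs):
    # Run only in debug mode; skipped entirely in production workloads.
    if not __debug__:
        return
    module = sys.modules[module_name]
    unresolved = [name for name in recorded_refs if not hasattr(module, name)]
    if unresolved:
        raise NameError(
            f"forward reference(s) never defined in {module_name}: {unresolved}"
        )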
Though I would also like to point out that this is frankly a class of bugs that IDEs, linters, and static analysis tools are already pretty great at catching. I do not think the same can be said for the class of bugs that would occur from metaclass side effects.
I really don't think we need a new PEP, just to clean up some oddities. I also understand if you don't feel this meshes well with your code, and I am willing to write a PEP, because it's an important issue. I also appreciate you taking the time to respond to me.
I see. Can you give me an example of an annotation in the standard library that would be affected by how 649 interacts with metaclasses?
As I understand it, you're saying that 649 will have an effect on metaclasses that implement their own __new__ which modifies the namespace object before calling super().__new__. I actually just grepped through the Python library for metaclass use. The only example I found where a metaclass __new__ modified the namespace before calling super().__new__ was EnumType in the enum package. And that only modifies names starting with an underscore, which means they're all private values.
So I must admit, I'm baffled by your assertion that 649 will change the behavior of metaclasses in the Python library in a way that will affect annotations in real-world code. What am I missing?
You've said several times that these are real problems in real code bases. But I'm guessing that this CustomType example is something you made up for illustrative purposes.
Can you give me a link to an example of existing real-world code that does this? You said you've done it; how about a link to your code? You've also mentioned Django several times; does Django do this?
I certainly believe you when you say that metaclass inheritance makes it hard to reason about how your code will behave, whether or not 649 is involved. If metaclasses are in the habit of rebinding Python builtins (e.g. dict), so that it changes the annotations used in classes that are instances of that metaclass, it certainly does seem hard to reason about.
PEP 649 is solving a problem for real-world users, in a way that is minimally disruptive for what I perceive as the largest communities of users. If you're right that PEP 649's semantics are going to cause problems in metaclasses, I want to understand what those problems are, and how widespread they're going to be. I appreciate you providing illustrations of what problems might look like, but I need something more concrete.
Can you give me a concrete example of metaclass code in an existing, popular package, that would be affected by 649, and cause problems for users of that package? And can you describe what those problems would be, and how hard it would be for the metaclass author to fix them?
Yes, PEP 649 __annotate__ functions (they've been renamed) are real functions, not code objects. Allowing user code to wrap them and modify their output is an explicit goal of the PEP. That's necessary for users like attrs and dataclasses.
But this is simply inefficient for the wide world of annotations. Annotations can use class variables, as you frequently observe; they can also use nonlocals, call functions, use the binary OR operator (|), and so on and so on. They are expressions; therefore they permit anything permitted in a Python expression. (Except under PEPs 563 and 649, where certain operators with side effects are disallowed: the walrus operator, yield, yield from, and await.)
In order to express the full flower of annotations, you need something as capable as bytecode, which is why I chose bytecode. Even AST is insufficient, as it requires compile-time-computed namespace information to resolve names correctly. And strings are insufficient, as early adopters of PEP 563 discovered.
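For instance, a (contrived) sketch whose annotations use a nonlocal, a function call, and the | operator; anything less capable than bytecode would have to represent all of these:

def outer():
    Node = dict  # a nonlocal referenced from the annotation below

    def inner(x: Node | None, y: type("Synthesized", (), {})):
        ...

    return inner

print(outer().__annotations__)
# {'x': dict | None, 'y': <class '__main__.Synthesized'>}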
In this new proposal of yours, you seem to be attempting to preserve the existing behavior of how annotations are defined in classes, where the __annotations__ dict is filled in piecemeal as the class body is executed. I admit that PEP 649 doesn't bother to try and preserve that behavior; I didn't consider it important, and I believed it to be an implementation detail that user code should not rely on. Are there important use cases where user code relies on inspecting the value of the class's __annotations__ dict from inside the class body?
The problem is that the repr of many type hints is a real mess. Consider this code:
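Imagine something along these lines, with MyHighLevelType as an assumed alias:

from typing import Optional

# Hypothetical high-level alias; the repr of the computed value is far
# noisier than the name the author wrote.
MyHighLevelType = Optional[dict[str, list[tuple[int, str]]]]

print(repr(MyHighLevelType))
# typing.Optional[dict[str, list[tuple[int, str]]]]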
Users vastly prefer to see MyHighLevelType in their documentation, rather than the repr of the value computed by that expression. In particular, some of the type hint objects defined in typing expand into messy, unreadable reprs festooned with [T] and so on. I'm sorry I don't have a more concrete example handy; the example I saw was maybe two years ago in an ancient discussion.
I don't know what you mean. For example:
If b hasn't been defined yet, then evaluating b should produce a NameError. I really don't understand what you're suggesting in your example; are you proposing that Python create a ForwardRef object and bind it to the name b? That seems like complete madness. But how else would print(b) print something, as per your example?
Even if that's not what you're proposing, I'm certain that you're proposing Python fix the reference to the non-existent b in this example by substituting a ForwardRef('b') object. The goal of your proposal is for Python to silence the NameError. This means Python has automatically silenced the NameError, which I don't like, because errors should not pass silently unless explicitly silenced. Your approach also forces the user to permanently deal with this ForwardRef object at runtime, which I also don't like.
The Python Steering Council has said they want to accept PEP 649. As Thomas Wouters, current sitting member of the SC, put it to me at PyCon last week: "We can't stay where we are [with stock semantics], and we can't go to 563, so our only choice is 649." So, yes, they are seriously considering accepting PEP 649.
Incidentally, accepting 649 would also reject an existing accepted PEP (PEP 563), and would deprecate and eventually remove its implementation, which has been available as a from __future__ import for several years now. This is unprecedented.
So, if you're summarizing this situation as "some oddities", either me, the Python core devs, the Python static typing community, the Python runtime-annotation-using community, and the Python Steering Council have all severely misunderstood what's going on, or maybe you've underestimated the complexity and the seriousness of the situation.
Yes, I think you should write your own PEP. But, again, you will have to move quickly, as the Steering Council has previously indicated they were ready to accept 649. They were just waiting for the update, and I published that about two weeks ago.
So, as a "heavy metaclass user" (even if most of it is just for replying to questions on the web), I have to say I actually enjoy the changes introduced by this PEP.
Sure, there is a change in behavior for some cases, and I am just now following @zrothberg's objections (really, I am a man from e-mail times, still getting used to this place). But so far, in cases like,
for example:
class Bug: ...
class Correct: ...

class CustomType(type):
    def __new__(mcls, name, bases, namespace, /, **kwargs):
        namespace["Correct"] = Bug
        cls = super().__new__(mcls, name, bases, namespace, **kwargs)
        return cls

class OtherClass(metaclass=CustomType):
    val: Correct

OtherClass.__annotations__
# {'val': <class 'Bug'>}
# should be {'val': <class 'Correct'>}
I really diverge on the idea that the "correct" result should be Correct. I think people can see that, if the metaclass author cares about annotations at all, the metaclass should have changed the annotation to Bug as well. The PEP as is will make that transparent for the metaclass author,
and if a different effect is wished, the metaclass can do that explicitly.
The "shadowing" of names in the class namespace for expressions in annotations also feels natural, and doubly so when we start coding with the mindset that, in annotations, forward references are a given (once this PEP is the new normal).
If anything, the PEP opens a lot of new possibilities for metaprogramming, especially for people like me (if any) who will disregard any advice along the lines of "this feature should be used for static type checking only".
And my final remark for now: any incompatibility errors induced by these behavior changes in existing code (again, if any) will be errors in static type checking contexts, and will be caught early in pipelines as soon as Python 3.12 is enabled on them: it will be a matter of modifying the code to work with 3.12, if that is desired.
OK, there are ways of writing specific code that will "break" in contexts other than static type checking with these changes, but one has to really mean it. Nonetheless, the new behavior just feels more intuitive (meeting my personal definitions of "Pythonic").
FWIW, I'm definitely not going to be ready to check in 649 by May 8th, the feature freeze for 3.12. Just a data point for, I dunno, people who steer things, maybe.