PEP 649: Deferred evaluation of annotations, tentatively accepted

I don’t think the current PEP as designed has really been vetted against metaprogramming or other class-creation side effects. The first problem that shows up quickly: previously, if you wanted to change the namespace dict of a class during evaluation you would use a custom __prepare__ method; now any modification of the namespace dict passed to the metaclass __new__ and __init__ can change the resulting annotations.

class Bug:...
class Correct:...

class CustomType(type):

    def __new__(mcls, name, bases, namespace, /, **kwargs):
        namespace["Correct"] = Bug
        cls = super().__new__(mcls, name, bases, namespace, **kwargs)
        return cls


class OtherClass(metaclass=CustomType):
    val:Correct

OtherClass.__annotations__
#{'val': <class 'Bug'>}
#should be {'val': <class 'Correct'>}

This means the change affects any class whose metaclass modifies the namespace dict. While fewer libraries need custom metaclasses these days, of a small sample of four libraries that I know use them (Django, pydantic, SQLAlchemy, and Python’s enum module), all but SQLAlchemy modify the namespace object. In Django, all but one of their metaclasses does so; the only one that did not was a testing-related metaclass.

This would also mean that the class namespace object, which would normally be discarded after the class object is returned, is now kept alive. The current behavior of dropping that object is documented.

Changing this to form a closure over the actual class.__dict__ object instead of the namespace dict seems like it would create even more chance of namespace collisions.

There are also some strange side effects related to method names clobbering type annotations.

Take the following code:

class A:
    clsvar:dict[int,str]
 
    def dict(self):...

The annotation now refers to the method dict instead of the builtin dict. This is going to artificially restrict method names.

It isn’t exactly rare for classes to shadow builtin names; the UUID class, for example, shadows multiple builtins (e.g. int, bytes, and hex). That means we could no longer reference these types in annotations.

class A:
    clsvar:dict[int,str]

    def dict(self,a:dict) -> dict: ...

That is something that actually works perfectly fine right now.

I know this has been discussed multiple times already but I am really having trouble understanding what about the runtime cost of annotations is so high that it makes sense to create a feature that fundamentally behaves nothing like the rest of the language.

Considering that stringified annotations are already at an acceptable performance level (whatever that means), what exactly about annotations causes enough overhead to warrant lazily capturing the names involved in the annotation?

Is this

def a(b:'str'):...

really that much cheaper than this?

def a(b:str):...

Or is the main issue that deeply nested annotations are expensive to compute? I am not really clear why the string version would be so much cheaper than the single object lookup version. This really feels like trading deterministic behavior for performance.

4 Likes

The issue with string literals as annotations is that writing code in quotes is both ugly and not syntax-checked by Python. With implied strings via PEP-563, the visual hackiness and parse-error typos are gone, but the strings still carry no context about how they should be evaluated. We’d rather people working in annotated code bases not need to reason at all about whether they need a stringified annotation in their typical daily flow of writing type-annotated code.

Your method names mirroring builtin type names example is a good one and is basically taking one of the examples in the PEP a little further by using the builtin name dict and pointing out where such a name can get reused for valid reasons. Thanks! This is a place where PEP-649 deferred evaluation can “go wrong”. But code like that is far less common than situations where people need to manually think about stringifying annotations due to forward references. The workaround for such code is to use _dict = dict, from typing import Dict, or other name aliases. We don’t anticipate this being a common need.
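
For concreteness, a minimal sketch of that aliasing workaround (the _dict name here is just one possible alias):

_dict = dict  # bind the builtin under another name before the class body shadows it

class A:
    clsvar: _dict[int, str]  # resolves to the builtin via the alias

    def dict(self, a: _dict) -> _dict: ...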

Continuing down the strings-as-annotations path also doesn’t solve the problem that annotations-as-strings introduced in the first place: runtime use of annotations (pydantic et al.). That problem prevented PEP-563 from ever becoming our default behavior.

The main issue PEP-649 aims to improve? Both PEP-563’s now-unrealized alternate future and manually stringified annotations had two motivations: one is allowing forward references; the other is module import-time performance and thus whole-program startup time. PEP 649 aims to resolve those conflicting goals with a more natural (in most cases) implementation.

This really feels like trading deterministic behavior for performance.

We don’t have deterministic behavior today given string annotations and PEP-563. There is no guarantee which state of program context said strings will be evaluated within. PEP-649’s deferred evaluation trades that non-deterministic behavior for an alternate one that gets rid of the use of strings, moving us to what appears to be a happier place.

1 Like

@larry The Python Steering Council is ready to officially accept PEP-649. How’s the implementation looking? Do you think an implementation could land in time for 3.12beta1?

Based on the poll results in Survey: Should PEP-649 implementation use another future import? we’d prefer to go without a new from __future__ import co_annotations as the PEP lays out and just change the default behavior.

(Understanding that feedback during people’s beta period testing could change our minds on the __future__ topic.)

4 Likes

I realized I failed to state this accurately: I do not think PEP-563 is the correct solution either. I think it carries almost entirely the same issues as this PEP does, and in many places even worse ones. I work with pydantic daily and have experienced the subtleties it carries. I use active type hints all the time and design a lot of code around them. I wrote a decently sized post a few months ago on how you can solve large chunks of this problem in a backportable way. I was mostly focused on preserving backportability, as well as some other active-typing issues, and some ugly design choices were made around that.

Of the main problems being resolved, the first is forward references, in some flavor of self-referential or defined-later. Those, as you pointed out, are very common, and the current default workaround of stringifying them is not great. Frankly, it is also a major limiting factor in annotations, as you cannot easily use actual literals in annotations.

To my understanding of Python bytecode, it should be reasonably simple to create a new bytecode instruction, used only by annotations in place of LOAD_NAME, that looks up the name and, if a NameError would be raised, simply creates a ForwardRef, adds it to the globals dict, and returns it. That would resolve all the current forward-reference issues and be deterministic. Some version of this (like Carl’s suggestion above) has already been determined to be required by this PEP anyway.
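
Roughly, something like this pure-Python sketch (the real thing would be a bytecode; typing.ForwardRef and the helper name are used here just as stand-ins):

import builtins
from typing import ForwardRef

def annotation_load_name(name, globals_dict):
    # Look the name up the way LOAD_NAME in an annotation would...
    if name in globals_dict:
        return globals_dict[name]
    if hasattr(builtins, name):
        return getattr(builtins, name)
    # ...but instead of raising NameError, create a ForwardRef, remember it
    # in the module globals, and return it so the annotation still builds.
    ref = ForwardRef(name)
    globals_dict[name] = ref
    return ref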

For the second issue, my link above has a nice and “clean” design for dealing with circular references/import deferrals from pure Python. It would also let you preserve import information at runtime, which PEP 649 does not. A pure-Python version is crazy hacky but sorta “just works”.

I am going to say there is really a “third” issue: the assigning and evaluating of annotations. That is, IMO, what is trading determinism for performance. This is where PEP-649 and 563 do not behave like I expect Python code to behave.

I also think there is an easy halfway solution where the names in complex annotations are captured eagerly and the expression is evaluated lazily. That would preserve the current default name-capture behavior and does not have the issue of the result changing depending on when you look at it. Depending on the mixture of class to function annotations, the memory footprint may end up smaller than keeping the class namespace alive anyway, and it does not interact with metaprogramming that modifies the namespace.

2 Likes

Oh! It does my heart good to read that. You don’t even know how much!

I think it’s entirely possible, yes. And I’m currently working on it.

The bad news is: what I’m working on is something of an overhaul of PEP 649. I’m nearly ready to post a new top-level topic about it here on the Discuss. Sadly not quite ready yet. Hopefully this week.

If I were you, I’d be worried, and with good reason! All I can say is, I’m doing my best over here, to ensure that PEP 649 considers all the ramifications of the change. And I’m working assiduously to ensure that all users’ needs are met, and that 649 won’t be regarded by history as a terrible mistake.

9 Likes

I’ve finally posted the overhaul I mentioned above. I look forward to your comments!

1 Like

I’ve updated PEP 649.

2 Likes

Hello, I’m the author of PEP 649. Thanks for taking the time to suggest your alternate proposal; I appreciate that you want to make Python better. Ultimately I don’t prefer your approach. I also believe some of your criticisms of PEP 649 are mistaken. Please see my comments below.

I concede that I don’t use metaprogramming in Python, and I’m not conversant in what might be common techniques in that world. I don’t really understand the problem you describe.

However, I copied and pasted your class Bug: ... sample into a local source file, changed the last line to print the annotations, and ran it under Python 3.11. It showed val was annotated with the Correct class. I then added from __future__ import co_annotations and ran it under the most recent version of my co_annotations branch (hash 63b415c, dated April 19 2021). This also showed val was annotated with the Correct class. I checked, and yes it was creating the __co_annotations__ attribute, and when I ran that method manually it returned the Correct annotation for val. So, as far as I can tell, the code sample you suggested would fail under PEP 649 actually works fine.

I appreciate that metaprogramming is a complicated discipline, and I’m willing to believe that PEP 649 can cause observable and possibly undesirable behavior changes in metaprogramming. I’d be interested if you could construct a test case that did demonstrate different results with the current co_annotations tree–particularly if it’s a plausible example of real-world code, rather than a contrived and unlikely example. (There have been a lot of changes proposed to PEP 649, but they wouldn’t affect this part of the mechanism, so the April 2021 version is fine for you to test with.)

No, but it would mean that classes simultaneously shadowing existing names and using those names as part of an annotation will have to find an alternate expression that resolves to their desired value, e.g.

import builtins
class A:
    clsvar: builtins.dict[int,str]
 
    def dict(self):...

Perhaps the inconvenience of fixing these sorts of sites is offset by the improved readability, where the reader doesn’t have to remember the order of execution in order to remember “which” dict was being used in the annotation.

Unfortunately, it’s not as “reasonably simple” as you suggest.

First, you would need to create three new bytecodes: LOAD_NAME as you suggest, but also LOAD_GLOBAL, and LOAD_DEREF. All local variables referenced in an annotation would have to be relocated out of fast locals and into a closure (as already happens in PEP 649) because the Python compiler doesn’t do dataflow analysis, and so doesn’t know at compile-time whether or not a particular local variable has been defined at any particular point.
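
For illustration, this is the kind of local-variable case being described; a deferred annotation function can only reach T through a closure cell, not fast locals:

def make_func():
    T = int
    # T is a local of make_func but is referenced in f's annotations; deferred
    # evaluation has to capture it via a closure cell because the compiler
    # cannot prove T is bound when the annotations are eventually computed.
    def f(x: T) -> T: ...
    return f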

Also, managing the ForwardRef instances is going to add complexity. The ForwardRef needs to be the “stringizer” class, which means implementing every dunder method and creating new stringizers in a kind of crazy way. Rather than expose that behavior to users, I propose to make it a “mode” that you can switch on and off, and I would shut it off on all ForwardRef objects before returning them to users.

But tracking all the ForwardRef objects that could be created is tricky. Consider this example:

    class C:
        a: typing.ClassVar[undefined_a | undefined_b]

It isn’t sufficient to simply iterate over all the final values in the annotations dict and shut off “stringizer” mode on all the top-level ForwardRef objects you see. ForwardRef objects may be buried inside another object; in the above example, ForwardRef('undefined_a | undefined_b') would be stored inside a typing.ClassVar. It’s not reasonable to exhaustively recurse into all objects in an annotations dict to find all the ForwardRef objects that might be referenced somewhere inside.

Similarly, it’s not sufficient to simply remember every ForwardRef object constructed by the “fake globals” dict’s __missing__ method. In the above example, undefined_a | undefined_b is a ForwardRef constructed by calling ForwardRef('undefined_a').__or__(ForwardRef('undefined_b')). So this is a ForwardRef object created by another ForwardRef object, not by the __missing__ method.

My plan for the implementation of PEP 649 is to create a list to track every ForwardRef created in the computation of an annotations dict. It would be created by the “fake globals” environment, and every time anything created a new ForwardRef object–either the __missing__ method on the “fake globals” dict, or a dunder method on a ForwardRef object–the new object would be added to the list, and also carry with it a reference to the list. After the __annotate__ method returns, but before I return the new annotations dict to the user, I iterate over the list and deactivate the mode on every ForwardRef object.
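
A minimal pure-Python sketch of that bookkeeping, with hypothetical class names (the real ForwardRef would implement many more dunders, and all of this would live in the implementation, not user code):

class TrackedForwardRef:
    """Stand-in for a ForwardRef in "stringizer" mode (hypothetical)."""
    def __init__(self, arg, registry):
        self.__forward_arg__ = arg
        self._registry = registry
        self._stringizer = True          # mode flag, switched off afterwards
        registry.append(self)            # every new instance is tracked

    def __or__(self, other):
        # Dunders called while stringizing build new, equally tracked refs.
        other_arg = getattr(other, "__forward_arg__", repr(other))
        return TrackedForwardRef(f"{self.__forward_arg__} | {other_arg}",
                                 self._registry)

class FakeGlobals(dict):
    """The "fake globals" environment: missing names become tracked refs."""
    def __init__(self, real_globals):
        super().__init__(real_globals)
        self.created = []                # shared registry of every ForwardRef

    def __missing__(self, name):
        return TrackedForwardRef(name, self.created)

# After running __annotate__ with a FakeGlobals instance, iterate over
# fake_globals.created and clear _stringizer on each ref before returning
# the annotations dict to the user.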

I think this will work. But there’s an awful lot of magic behind the “stringizer” and the “fake globals” mode that permits it to work. I’m comfortable putting that magic into the Python library. I’m not comfortable building that much magic into the Python language.

I assume the “easy half way” you mention is your “LOAD_NAME which creates a ForwardRef for missing symbols” proposal, incorporating the “stringizer” functionality (which you called DefRef in your “Alternatives” thread).

I don’t like this approach. Also, it doesn’t currently satisfy all the use cases satisfied by 649. The latter problem is fixable; I don’t think the former problem is.

The latter problem is simply that you provide no recourse for getting the “stringized” annotations. Runtime documentation users enjoy the “stringized” annotations provided by PEP 563. Also, I made sure PEP 649 supported “stringized” annotations as a sort of failsafe. I worry there may be users out in the wild who haven’t spoken up, who have novel and legitimate uses for “stringized” annotations. If we deprecate and remove the implementation of PEP 563, without providing an alternate method of producing “stringized” annotations, these users would have their use case taken away from them.

This part is relatively easy for you to fix: simply add to your proposal some mechanism to provide the “stringized” strings. Since you don’t propose writing the annotations into their own function like PEP 649 does, I assume you’d more or less keep the PEP 563 approach around, but rename it to a different attribute (e.g. __stringized_annotations__). You might also have to propose a lazy-loading technology for it, as there was some concern that this approach would add memory bloat at Python runtime for an infrequently-used feature.

The reason why I still don’t like this approach: I think o.__annotations__ should either return the values defined by the user, or fail noisily (e.g. with NameError). I don’t think it’s acceptable for the language to automatically and silently convert values defined by the user into proxy objects, and I consider the techniques necessary to create them to be too magical to define as part of the language. I don’t remember your exact words, but I dimly remember you described PEP 649’s approach of delaying evaluation as being “surprising” or “novel”, which I interpreted as a criticism. That’s fair, but I consider silently replacing missing symbols with proxy objects far more “surprising” and “novel”, and I am definitely critical of this approach.

I’m the kind of guy who literally quotes the Zen Of Python when debating technical issues, and I suggest the Zen has guidance here:

Special cases aren’t special enough to break the rules.
Although practicality beats purity.

I wish annotations weren’t all special enough to need breaking the rules. But the Python world’s experience with annotations over the last few years has shown that they are. Annotations are a complicated mess, and we’re far past being able to solve them with something simple. Here, unfortunately, practicality is going to have to beat purity.

I concede that PEP 649’s delayed evaluation of annotations is novel, and a small “breaking” of “rules”. But I consider it far less novel, and a much smaller infraction, than changing the language to silently and automatically construct proxy objects where it would otherwise raise NameError.

Also:

Errors should never pass silently.
Unless explicitly silenced.

This one PEP 649 obeys, and your proposal does not. I consider requesting SOURCE or FORWARDREF format an explicit request to silence NameError exceptions; your proposal silently and implicitly catches those exceptions and swaps in a proxy object.

If you still prefer your proposal, that’s reasonable. But you’re going to have to write your own PEP–and you should probably do it soon. I already revised PEP 649 this week, and resubmitted it to the Steering Council; they previously indicated they wanted to accept it, so it’s possible they could accept it very soon. (Although they said that before the recent revisions. It’s possible they’ll find something they don’t like, and reject the PEP or ask for changes, which could give you more time.) I suggest you can write your PEP as a response to PEP 649, and simply cite the material in it, which will make your PEP far shorter and faster to write.

1 Like

My guess is, wrappers (e.g. attrs) would have about the same experience under your proposal as under 649. In both cases, they’d have to know how to compute annotations with proxies, and stringized annotations. Also in both cases, if they wanted to create new annotations or modify existing ones, the easiest and likely best way would probably be to dynamically construct a small function/class with the annotations the way they want it, then pull out the annotations from there in the appropriate format and put them in the annotations dict they’re building. With 649, this would live in __annotate__, with your proposal it would live wherever they created their annotations, presumably as part of __init__ or what have you.

I looked into this example and I stumbled on another case that I’m not sure works well with PEP 649. Consider this case:

Correct = 1

class OtherClass:
    Correct = 2
    val1: Correct
    val2: lambda: Correct  # poor man's PEP 649

print(OtherClass.__annotations__["val1"])  # 2
print(OtherClass.__annotations__["val2"]())  # 1

The danger is that putting class variable annotations inside a function will make it so they can no longer access names defined in the class namespace, because class namespaces are not visible in their nested functions. Fixing this would require some changes to how the symtable works, but as far as I can tell PEP 649 does not propose any such changes.

I talked with Jelle about this in person. Jelle was simply unfamiliar with that aspect of PEP 649: __annotate__ (the new name for __co_annotations__ and __compute_annotations__) can in fact see the class namespace. PEP 649 specifically adds this ability: it adds a new __locals__ attribute to the FunctionObject, ensures it’s used as the locals dictionary when the function is run, and sets __locals__ to the class dict for __annotate__ functions on class methods. Jelle’s sample code, when run with the April 21 2021 version of the co_annotations branch (with from __future__ import co_annotations active), happily produces the same results as when run with Python 3.11.
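
For illustration only, a rough pure-Python approximation of that visibility (the real mechanism wires the class dict in via __locals__ at the C level):

Correct = 1

class OtherClass:
    Correct = 2

def fake_annotate():
    # Supplying the class dict as locals makes class-level names win over
    # module globals when the annotation expression is evaluated.
    return {"val1": eval("Correct", globals(), dict(vars(OtherClass)))}

print(fake_annotate()["val1"])  # 2, matching the Python 3.11 result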

False alarm! :smiley:

Thanks, I had indeed missed that aspect of the PEP.

However, now I can confirm the behavior change @zrothberg described above. Consider this code:

Correct = "module"

class Meta(type):
    def __new__(self, name, bases, ns):
        ns["Correct"] = "metaclass"
        return super().__new__(self, name, bases, ns)

class OtherClass(metaclass=Meta):
    Correct = "class"
    val1: Correct
    val2: lambda: Correct  # poor man's PEP 649 (real PEP 649 is smarter)

print(OtherClass.__annotations__["val1"])  # class
print(OtherClass.__annotations__["val2"]())  # module

When run with current(ish) main and with 3.7, I get

class
module

But with your branch (and an added from __future__ import co_annotations), I get

metaclass
module

That makes sense, because previously the annotations were evaluated before the metaclass mutated the namespace, and now they are evaluated after.

Is this a problem? Possibly for some users, but it seems unlikely to affect a lot of real use cases. (I can believe that metaclasses injecting names into the class namespace is common, but metaclasses injecting names that are also used as annotations within the class body has got to be less common.)

You can obviously construct similar cases where PEP 649 changes behavior because namespaces are mutable. For example:

x = "before"

def f(arg: x): pass

x = "after"

print(f.__annotations__["arg"])  # currently "before", will be "after"

I don’t think the new behavior is bad, and I’m not aware of a concrete use case that would be affected by this change, but it is a change in user-visible behavior. The existence of such edge cases could serve as an argument for putting PEP 649 behind a future import at first.

1 Like

Okay, I understand. This is the “because 649 can change the order things are evaluated in, this effect is observable at runtime” behavior. This is already mentioned in the PEP. I’m not sure this use case is novel or common enough to require special mention in the PEP, though I’m not strongly averse to doing so.

Should we expect metaprogramming use cases to run into this behavior in the wild? Is it common in metaprogramming to use a class attribute as an annotation, then overwrite that attribute with a different value in metaclass __new__?

3 Likes

Hold on, would this be problematic if one is reusing a type alias or type var?

T  = TypeVar('T', bound=Bound)

def f(x: T) -> Result[T]:
    ...

T = TypeVar('T', bound=Other)

def g(x: T) -> Nullable[T]:
    ...

Or would the annotate function close over these?

It would affect that code, yes. Do people write code like that? Mypy doesn’t allow it (example: mypy Playground).

4 Likes

Similarly mypy and pyright both treat it as a type error to redefine a type alias. A type alias is expected to be constant for type checkers.

3 Likes

When I get a second later I am going to respond to the larger post with more details. But to give you a less weird-looking example, here is one that demonstrates the problematic interaction with metaprogramming.

def makedict(self):
    return dict(self)

class CustomType(type):

    def __new__(mcls, name, bases, namespace, /, **kwargs):
        namespace["dict"] = makedict
        cls = super().__new__(mcls, name, bases, namespace, **kwargs)
        return cls

class OtherClass(metaclass=CustomType):
    test:dict[str,int]

# under PEP 649, the dict in the annotation refers to makedict, not the builtin

I had tested this example against the previous branch. I can’t quite remember the error; I believe it was either an empty dict or a raised exception. I have some code right now that does roughly this, but it generates the function and injects it into the class namespace dict. If the new version is posted I can test against that one.

I believe you when you say “observable changes in runtime behavior can also be observed when using metaclasses”. You don’t need to contrive examples to prove that point.

The more interesting questions are,

  • are there examples of code using metaclasses in the wild that will be affected by the runtime changes from 649, and
  • are there common metaprogramming idioms that will be affected by the runtime changes from 649?
2 Likes

I’d like to add two more points of critique of your approach.

First, your approach does nothing to reduce the runtime cost of unused annotations, and in fact makes computing annotations more expensive. Every time there is an annotation that uses an undefined name, you’d have to eagerly create the stringizing ForwardRef. Assuming your ForwardRef behaves as I’ve proposed for mine, you’d have to track all of them that were created, so you could shut off the “stringizer” behavior after computing annotations was done. This is all extra code that would run every time annotations are bound, under your proposal, and the ForwardRef objects themselves have some memory cost.

One of the goals of 563 and 649 is to make annotations cheaper (both in CPU and memory) when they’re not referenced at runtime. Static typing users annotate large portions of their codebase, but rarely examine those annotations at runtime, and 563 and 649 both speed up the loading of those modules. Your proposal would make importing them even slower, and use more memory, than “stock semantics” for annotations.

Second, your approach permanently stores ForwardRef objects in the annotations for every missing symbol. For users who have a simple in-module circular dependency, if they wanted to see “real values” in their annotations, they’d have to go back and manually find and evaluate the ForwardRef objects buried in their annotation values. But these could be anywhere–stored in collections, buried in other iterables, stored in arbitrary attributes. I’m not sure this replacement work could be done in a general-purpose way, so I don’t think we could write a library function to do it for users.

With 563 and 649, in many circumstances you can simply wait until all the symbols are defined before attempting to evaluate them–with 649, simply by inspecting o.__annotations__ as normal, and with 563 calling eval or inspect.get_annotations or typing.get_type_hints. I think the 649 approach is nicer, but then that’s not surprising as I’m the author of 649.

2 Likes

As I wrote above yes. Even the standard library does so.

I think there is some confusion about why someone would do this; the code (with dict) is from an actual codebase I wrote. It is not a contrived example as you suggest.

In particular, I want to make sure this is clear.
This metaclass would incur side effects under PEP 649:

class CustomType(type):

    def __new__(mcls, name, bases, namespace, /, **kwargs):
        namespace["dict"] = makedict
        cls = super().__new__(mcls, name, bases, namespace, **kwargs)
        return cls

This one would not.

class CustomType(type):

    def __new__(mcls, name, bases, namespace, /, **kwargs):
        cls = super().__new__(mcls, name, bases, namespace, **kwargs)
        cls.dict = makedict
        return cls

They are also not equivalent in effect. The code this example is from needs to add the object to the namespace before calling super() so that type.__new__ properly executes __set_name__, as it is a descriptor being added. This is EXTREMELY useful with runtime annotations, as automatic creation of descriptors is frequently used in place of Django-like explicit descriptors.
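
A small sketch of the difference, using a hypothetical Field descriptor:

class Field:
    def __set_name__(self, owner, name):
        self.name = name                 # only runs during class creation

class ViaNamespace(type):
    def __new__(mcls, name, bases, namespace, /, **kwargs):
        namespace["field"] = Field()     # type.__new__ will call __set_name__
        return super().__new__(mcls, name, bases, namespace, **kwargs)

class ViaSetattr(type):
    def __new__(mcls, name, bases, namespace, /, **kwargs):
        cls = super().__new__(mcls, name, bases, namespace, **kwargs)
        cls.field = Field()              # too late: __set_name__ is never called
        return cls

class WithNamespace(metaclass=ViaNamespace): ...
class WithSetattr(metaclass=ViaSetattr): ...

print(WithNamespace.field.name)             # 'field'
print(hasattr(WithSetattr.field, "name"))   # False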

Once you account for the fact that metaclass inheritance is also a thing, this gets more and more complex and harder and harder for the end user to predict what will happen under PEP 649. Currently, the only way unexpected values get added to the default annotations is if the metaclass defines __prepare__, and I can honestly say I have only ever come up with one situation where that was needed or useful.

I want to state that it really is necessary at times to modify annotations prior to the creation of the class. So long as __co_annotations__ is a real function and not a code object, that should be trivial, as you can just wrap it with a function in __new__ that modifies the result, so that once it is inspected later it resolves the same way.

For example

class CustomType(type):

    def __new__(mcls, name, bases, namespace, /, **kwargs):
        if "__co_annotations__" in namespace:
            namespace["__co_annotations__"] = fix_ann(namespace["__co_annotations__"])

        cls = super().__new__(mcls, name, bases, namespace, **kwargs)
        return cls
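
For concreteness, a hypothetical fix_ann could be as simple as wrapping the original function and patching the dict it returns:

def fix_ann(co_annotations):
    def wrapper(*args, **kwargs):
        ann = co_annotations(*args, **kwargs)
        # apply whatever change the metaclass needs before handing it back,
        # e.g. injecting an extra annotation (name chosen for illustration)
        ann.setdefault("injected_field", int)
        return ann
    return wrapper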

Not what I was referring to. This one is easier to explain in code.
Given the following class:

class example:
    start:dict[int,str]

Default semantics roughly translates that to

class example:
    __annotations__ = {}
    __annotations__["start"] = dict[int,str]

Your code, approximately (ignoring the scoping/caching/ForwardRef stuff right now so I can physically write it in pure Python):

class example:
    @classproperty # I know this isnt real but lets pretend for ease
    def __annotations__(cls):
        return cls.__co_annotations__()

    @staticmethod
    def __co_annotations__():
        return {"start":dict[int,str]}

I am suggesting to do something like this:

class example:
    __co_ann__ = [] # temporary list that type.__new__ won't
                     # add to cls.__dict__

    @classproperty # I know this isnt real but lets pretend for ease
    def __annotations__(cls):
        return cls.__co_annotations__()
    
    __co_ann__.append((dict,int,str)) # eager capture

    @staticmethod # lazy evaluation
    def __co_annotations__(a=__co_ann__[0]):
        return {"start":a[0][a[1],a[2]]} # may need to use a different 
                                         # dict builder (dict from iterable)

Add something like tuple interning and you are only carrying around a few extra pointers in the __co_annotations__ default arguments after GC runs. For class objects I believe this will actually end up with a smaller memory footprint, because the class namespace will not need to be held: without __co_annotations__ it is GC’ed after type.__new__, while with __co_annotations__ it is roughly bound to the lifetime of the class. I don’t think this really needs a new PEP, as you are just shifting some stuff around. You still get the stated effect of the title of the PEP, deferred evaluation of annotations, just with the semantics of eager name capture.

The only thing from my alternatives thread that I was suggesting should be integrated, especially if memory and correctness are primary concerns, is the forward references created by import hijacking. (As I wrote above, you can ignore most of that thread; it was written with backportability in mind, not modifying the ForwardRef class.) That directly reduces the number of forward references and retains actual runtime information about forward references to imported modules.

With this setup, if you shift all the creation of forward references to occur where the annotation is written, then your __co_annotations__ function can just check whether any value in the captured tuple is a forward reference and resolve them before building the dict. This should dramatically reduce the complications of resolving them in place. Also, any future optimizations of annotation resolution (like caching against the entire expression) can always be stapled on afterwards without much concern.
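
A small sketch of that resolution step (hypothetical helper, assuming the captured forward references are typing.ForwardRef instances):

from typing import ForwardRef

def resolve_captured(captured, globals_dict):
    # Replace any ForwardRef in the eagerly captured tuple with the value it
    # now resolves to; leave everything else untouched.
    return tuple(
        globals_dict[item.__forward_arg__] if isinstance(item, ForwardRef)
        else item
        for item in captured
    )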

I am not really sure what runtime documentation users need from stringified annotations that fails to work correctly with the repr of the computed annotation. I would think, in fact, that the repr would be more useful, as it will actually keep the original module of the class in the name, not just the name it is currently being viewed under. I’m not overall familiar enough with this topic, though.

So I think this mostly depends on how it is handled.

For one, I think the behavior is reasonably straightforward.

class a:
    var: b
print(b) 
# forwardref["b"] 

Also, I think we both agree this example shouldn’t raise a NameError:

class a:
    var:b

class b:...

But this one should

class a:
    var:c

class b:...

One approach is to add a check bound to __debug__ or a runtime flag (so it doesn’t impact production workloads) that verifies, once the module is done importing, that any automatically created forward references inside it resolve to real objects, and raises an error otherwise. Combining this with the import-created forward references (from my other thread) should cover almost every use case. It doesn’t do anything for forward refs created inside closures, though; that is kinda its own can of worms anyway.
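
A sketch of that check (hypothetical helper; it assumes the auto-created forward references are typing.ForwardRef instances left behind in the module namespace):

from typing import ForwardRef

def check_forward_refs(module):
    if not __debug__:
        return                            # skip the check when running with -O
    for name, value in vars(module).items():
        if isinstance(value, ForwardRef):
            raise NameError(
                f"forward reference {value.__forward_arg__!r} in module "
                f"{module.__name__!r} never resolved to a real object"
            )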

Though I would also like to point out that this is frankly a class of bugs that IDEs, linters, and static analysis tools are already pretty good at catching. I do not think the same can be said for the class of bugs that would occur from metaclass side effects.

I really don’t think we need a new PEP, just to clean up some oddities. I also understand if you don’t feel this meshes well with your code, and I am willing to write a PEP because it’s an important issue. I also appreciate you taking the time to respond to me.