I don’t think the current PEP as designed has really been vetted against metaprogramming or other class-body side effects. The first problem that shows up quickly: previously, if you wanted to change the namespace dict of a class during evaluation, you used a custom `__prepare__` method; now, any modification of the namespace dict passed to `__new__` and `__init__` can silently change what the annotations later evaluate to.
```python
class Bug: ...
class Correct: ...

class CustomType(type):
    def __new__(mcls, name, bases, namespace, /, **kwargs):
        namespace["Correct"] = Bug
        cls = super().__new__(mcls, name, bases, namespace, **kwargs)
        return cls

class OtherClass(metaclass=CustomType):
    val: Correct

OtherClass.__annotations__
# {'val': <class 'Bug'>}
# should be {'val': <class 'Correct'>}
```
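For contrast, a minimal sketch of the existing, sanctioned hook: `__prepare__` lets a metaclass substitute its own mapping before the class body runs, instead of mutating the namespace after the fact (the `LoggingDict`/`LoggingType` names here are purely illustrative):

```python
class LoggingDict(dict):
    # Observe every binding made while the class body executes.
    def __setitem__(self, key, value):
        print(f"binding {key!r}")
        super().__setitem__(key, value)

class LoggingType(type):
    @classmethod
    def __prepare__(mcls, name, bases, **kwargs):
        # Called before the class body runs; the returned mapping
        # becomes the namespace the body executes in.
        return LoggingDict()

class Example(metaclass=LoggingType):
    x: int = 1
```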
This means the change affects any class whose metaclass modifies the namespace dict. And while the number of libraries that need custom metaclasses has been shrinking, of a small sample of four libraries I know use them (Django, pydantic, SQLAlchemy, and Python’s own enum module), all but SQLAlchemy modify the namespace object. In Django, all but one of the metaclasses does so; the single exception was a testing-related metaclass.
This would also mean that the class namespace object, which is normally discarded after the class object is returned, is now kept alive. The current behavior of dropping the object is documented.
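A minimal sketch of that documented behavior today: `type.__new__` copies the namespace into the class `__dict__`, so once the class object exists, the original namespace dict is independent and free to be discarded:

```python
class Meta(type):
    def __new__(mcls, name, bases, namespace, /, **kwargs):
        cls = super().__new__(mcls, name, bases, namespace, **kwargs)
        # Mutating the namespace after creation has no effect, because
        # type.__new__ copied it into cls.__dict__.
        namespace["added_later"] = True
        return cls

class C(metaclass=Meta):
    pass

print("added_later" in C.__dict__)  # False: the namespace was copied
```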
Changing this to form a closure over the actual class `__dict__` object instead of the namespace dict seems like it would create even more opportunities for namespace collisions.
There are also some strange side effects where method names clobber the names used in type annotations. Take the following code:
```python
class A:
    clsvar: dict[int, str]
    def dict(self): ...
```
The annotation now refers to the function `dict` instead of the builtin `dict`. This is going to artificially restrict method names. It isn’t exactly rare for classes to shadow builtin names; the `UUID` class, for example, shadows multiple builtins (`int`, `bytes`, `hex`). That means we could no longer reference those types in the annotations.
```python
class A:
    clsvar: dict[int, str]
    def dict(self, a: dict) -> dict: ...
```
That is something that actually works perfectly fine right now.
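Under today’s eager evaluation, both the class-level annotation and the method’s own annotations are resolved before `def dict` rebinds the name in the class body, so everything refers to the builtin:

```python
print(A.__annotations__)       # {'clsvar': dict[int, str]}
print(A.dict.__annotations__)  # {'a': <class 'dict'>, 'return': <class 'dict'>}
```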
I know this has been discussed multiple times already, but I am really having trouble understanding what about the runtime cost of annotations is so high that it makes sense to create a feature that fundamentally behaves nothing like the rest of the language.
Considering that stringified annotations are already at an acceptable performance level (whatever that means), what exactly about annotations is causing enough overhead to warrant lazily capturing the names involved in them?
Is this

```python
def a(b: 'str'): ...
```

really that much cheaper than this?

```python
def a(b: str): ...
```
Or is the main issue that deeply nested annotations are expensive to compute? I am not really clear on why the string version would be so much cheaper than the single-object-lookup version. This really feels like trading deterministic behavior for performance.
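For what it’s worth, the comparison above is easy to measure with a `timeit` harness like the sketch below (I haven’t attached numbers; they will vary by machine and interpreter version):

```python
import timeit

# Cost of executing each def: storing a string constant as the annotation
# versus performing one name lookup for the builtin.
stringified = timeit.timeit("def a(b: 'str'): ...", number=1_000_000)
direct = timeit.timeit("def a(b: str): ...", number=1_000_000)
print(f"stringified: {stringified:.3f}s  direct: {direct:.3f}s")
```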