The typing spec includes a section on type erasure. In this section, it indicates that type checkers should generate an error if a generic attribute is accessed from a class object.
class Node[T]:
    x: T

Node[int].x = 1  # Error
Node[int].x  # Error
Node.x = 1  # Error
Node.x  # Error
This makes sense because the type of x depends on T, and all specializations of Node[T] share the same class object at runtime.
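The shared class object can be observed at runtime. A minimal sketch, using the pre-PEP 695 spelling so it runs on older interpreters:

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class Node(Generic[T]):
    x: T

# Subscripting produces generic aliases, but they all wrap the same
# underlying class object, so there is only one place for x to live.
assert Node[int].__origin__ is Node
assert Node[str].__origin__ is Node
```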
However, the spec also indicates that the following should be flagged as an error.
type(p).x # Error
This is problematic because there are legitimate use cases where this is a false positive: type(p) evaluates to a type that includes subclasses of Node. Consider the following:
class IntNode(Node[int]): ...

def func(p: Node[int]):
    type(p).x  # Error?

func(IntNode())
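At runtime this pattern can succeed because type(p) is the concrete subclass, not the bare generic class. A sketch in pre-PEP 695 syntax; the class-level value on IntNode is assumed for illustration:

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class Node(Generic[T]):
    x: T

class IntNode(Node[int]):
    x = 0  # the concrete subclass supplies a class-level value

def func(p: Node[int]) -> int:
    # p may be an IntNode, so type(p) can resolve x on the subclass.
    return type(p).x

assert func(IntNode()) == 0
```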
I'd like to propose a minor change to the typing spec to accommodate this legitimate use case. In particular, I'd like to propose that we simply delete the line type(p).x from the sample. Type checkers can then choose whether or not to generate an error in this situation.
I'll note that none of the major type checkers (pyright included, as of a recent change) emits an error in this situation, so this change would align the spec with existing type checker behavior.
In the linked example, model is being used as if it were a ClassVar, but it's not annotated as one.
For type(p).x to be safe, it seems like x should be annotated as a ClassVar.
So the problematic part of the spec is disallowing type variables in ClassVar.
Type checkers report problems with this:
class B[T]:
    x: ClassVar[T]
when these examples show why this should be allowed.
We could say that if a ClassVar has a type variable, that class is considered somewhat "abstract", and subclasses must specify the type variable in order not to be abstract (as they do in the examples presented by Eric).
This could let us know which classes it's safe to access this member on.
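A sketch of what that would look like, assuming the ClassVar[T] annotation were allowed (pre-PEP 695 syntax): B stays "abstract" until a subclass pins T and supplies a value.

```python
from typing import ClassVar, Generic, TypeVar

T = TypeVar("T")

class B(Generic[T]):
    x: ClassVar[T]  # currently rejected by type checkers per the spec

class BInt(B[int]):
    x = 1  # pins T to int and makes the class "concrete"

assert BInt.x == 1

# Accessing x on the "abstract" base fails at runtime, since the
# annotation alone creates no attribute:
try:
    B.x
except AttributeError:
    pass
```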
If type variables remain disallowed in ClassVar, it seems like the discussed access should remain disallowed to match. Otherwise, this looks like an inconsistency.
I don't have a clear opinion on what a type checker should do here, but I agree the spec is probably too restrictive.
But this week I came across something similar. It was actually an even more extreme case where x is ClassVar[T], which works on concrete subclasses because a metaclass is setting the class var.
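A rough sketch of that shape. The metaclass name and the value it derives are hypothetical, not the code from the actual case:

```python
from typing import Any, ClassVar, Generic, TypeVar

T = TypeVar("T")

class FillsClassVar(type):
    # Hypothetical metaclass: concrete subclasses get x set automatically.
    def __new__(mcls, name: str, bases: tuple, ns: dict[str, Any]):
        cls = super().__new__(mcls, name, bases, ns)
        # Only fill in x on proper subclasses of the generic base.
        is_subclass = any(isinstance(b, FillsClassVar) for b in cls.__mro__[1:])
        if is_subclass and "x" not in ns:
            cls.x = name  # derive some per-class value for the class var
        return cls

class Base(Generic[T], metaclass=FillsClassVar):
    x: ClassVar[T]  # the spec disallows this annotation today

class Concrete(Base[str]):
    pass

assert Concrete.x == "Concrete"
```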
I suspect we will wind up at least allowing the access with an error that is easy to disable; while the pattern isn't particularly safe, it is legitimate in some contexts.
class Node[T]:
    x: T
    ...

def func(p: Node[int]):
    type(p).x  # Error?
Is this a correct example? As far as I understand, the variable x is declared as an instance variable, so any access through the class should be an error.
Maybe you mean this code:
from typing import ClassVar

class Node[T]:
    x: ClassVar[T]

class IntNode(Node[int]):
    x: ClassVar[int] = 3

def func(p: Node[int]):
    type(p).x
    reveal_type(type(p).x)

func(IntNode())
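The runtime behavior matches this reading: the call with IntNode succeeds, while a bare Node instance would fail, which is presumably why the spec flags the access. A sketch in pre-PEP 695 syntax, with reveal_type omitted since it only exists for type checkers:

```python
from typing import ClassVar, Generic, TypeVar

T = TypeVar("T")

class Node(Generic[T]):
    x: ClassVar[T]

class IntNode(Node[int]):
    x: ClassVar[int] = 3

def func(p: Node[int]) -> int:
    return type(p).x

assert func(IntNode()) == 3  # works: type(p) is IntNode

# A bare Node has no class-level value, so the same access fails:
try:
    func(Node())
except AttributeError:
    pass
```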
When I do code reviews, I sometimes see code like this, and it is hard to explain convincingly that it is incorrectly typed (because the typing spec forbids type variables in class variables).
I encountered this mostly with this pattern:
the base class (almost always an ABC) wants to parameterize a class variable (e.g. nodeClass),
then the child class becomes parameterized and overrides nodeClass.
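A minimal sketch of that pattern; nodeClass is the name from the description above, and the surrounding classes are assumed for illustration:

```python
from abc import ABC
from typing import ClassVar, Generic, TypeVar

T = TypeVar("T")

class IntNode: ...

class Visitor(ABC, Generic[T]):
    # The base ABC parameterizes the class variable; the spec
    # currently disallows this annotation.
    nodeClass: ClassVar[type[T]]

class IntVisitor(Visitor[IntNode]):
    nodeClass = IntNode  # the parameterized child overrides it

assert IntVisitor.nodeClass is IntNode
```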
See full code example:
Also, I remember that there were threads discussing unnecessary restrictions for parametrized ClassVar:
Generally speaking, at least in my opinion, the idea (or rather the fix for these problems) is not bad.
This, however, might be a problem, as it would be bad to get errors in one type checker but not in another. As people before me have said, this might belong in its own "fix", while we keep your example invalid (because of the missing ClassVar).