from typing import Self, assert_type

class A[T]:
    x: T

    def __new__(cls: type[Self], x: T) -> Self:
        return super().__new__(cls)

    def __init__(self, x: T):
        self.x = x

    # [Edit] this definition is problematic, see discussion below
    @staticmethod
    def staticm[U](cls: type[U], x: T) -> U:
        return cls(x)

    @classmethod
    def classm(cls: type[Self], x: T) -> Self:
        return cls(x)
a1 = A[int].__new__(A[str], "foo") # error: "foo" incompatible with `int`
assert_type(a1, A[str])
a2 = A[int].staticm(A[str], "foo") # error: "foo" incompatible with `int`
assert_type(a2, A[str])
a3 = A[int].classm("foo") # error: "foo" incompatible with `int`
assert_type(a3, A[int])
The a3 case looks good to me. In both the a1 and a2 cases, both mypy and pyright seem to use the A[int] receiver for checking arguments but the A[str] argument for instantiation. This feels inconsistent to me. Instead, I would expect one of the following (made concrete in the sketch below):
1. Use the explicit type args from the receiver, A[int], and reject A[str] as a bad argument. The result type should be A[int].
2. Ignore the explicit type args from the receiver, A[int], and use A[str] to instantiate. The "foo" argument should be accepted, and the result type should be A[str].
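To make the two options concrete, here is how the a1 call above would be judged under each (a sketch of the expected checker behavior, not actual checker output):

a1 = A[int].__new__(A[str], "foo")
# option (1): error, A[str] is incompatible with the A[int] receiver; result type A[int]
# option (2): accepted, the receiver's type args are ignored and "foo" matches str; result type A[str]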
Because __new__, unlike static methods, can use Self, it might make sense to use (1) for __new__ but (2) for staticm. But since __new__ is arguably a static method, it also makes sense for the behavior to be consistent. I don’t feel strongly about that point.
I would argue that the existing behavior is buggy, so I would like to see type checkers agree on either (1) or (2) above. Maybe I am missing something, though? Thanks for your attention.
I think the definition of staticm in your example should be rejected; the use of T doesn’t make sense. For instance, pyright accepts these lines:
a2 = A.staticm(A[str], 1)
assert_type(a2, A[str])
But of course, it’s an A[int] at runtime. In general, using a class-scoped type variable in a staticmethod seems dubious, for the same reasons that ClassVars can’t use such type variables.
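For comparison, this is the ClassVar restriction being referenced (B is just an illustrative name, not from the thread):

from typing import ClassVar

class B[T]:
    default: ClassVar[T]  # rejected: a ClassVar type may not contain class-scoped type variables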
For the __new__ case, your option (1) makes sense to me. A[int] should specialize any attribute access, so the __new__ call should only be allowed if its arguments are compatible with A[int].
Mypy and pyright treat these the same way. They appear to push the class type parameter T into the static method. I see the argument that this is dubious, but unlike attributes, the occurrence of T in a method can actually vary, so I think the behavior is arguably reasonable.
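(The staticm1 and staticm2 definitions aren't quoted in this excerpt; from the descriptions below, they are presumably shaped roughly like this, with staticm1 reusing the class-scoped T and staticm2 depending only on its own type variables:)

class A[T]:
    def __init__(self, x: T):
        self.x = x

    @staticmethod
    def staticm1[U](cls: type[U], x: T) -> U:     # reuses the class-scoped T
        return cls(x)

    @staticmethod
    def staticm2[U, V](cls: type[U], x: V) -> U:  # uses only method-scoped variables
        return cls(x)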
For those, I think my ideal behavior would still be that staticm1 is an error. Instead of reusing the class-scoped variable, it should introduce a new type variable (which is what staticm2 does).
And for staticm2, it shouldn’t depend on T at all, so your call should succeed and be inferred as returning A[str], not A[int].
> my ideal behavior would still be that staticm1 is an error.
I can get behind this, but to be clear, I think this would represent a pretty major breaking change for mypy. Do you think it’s realistic that we could actually specify this and change mypy? Note that mypy also performs this “push down class tparams” trick when accessing regular instance methods off a class.
> for staticm2, it shouldn’t depend on T at all
I agree, and I unfortunately made a copy/paste error when I tested this (I called staticm1 instead of staticm2). Both mypy and pyright do the right thing here. I guess the question is whether, assuming mypy continues to push down class tparams, staticm1 should behave like staticm2.
If type parameters are pushed down (which is also a reasonable choice), then the staticm1 call should be an error because we already specialized to A[int], so passing A[str] is incompatible. Mypy and pyright already seem to do this correctly.
Makes sense. I am happy to get behind this. Basically this is behavior (1) from the OP.
For completeness of the discussion, behavior (2) is justifiable as well because the instantiation of T is fully determined by the arguments of the call. And indeed, the runtime type is actually A[str].
But I am fine either way, since I think both are reasonable.
I found a problematic case. Pushing class type parameters down interacts poorly with class methods where the class type parameter appears in the type of cls. The problematic case is foo below.
class A[T]:
    @classmethod
    def foo(cls: type[T]) -> T:
        return cls()
If we forget types for a moment: when we call A.foo, the function invokes A’s constructor and returns an instance of A, so the runtime type of the value returned from A.foo() is A.
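A quick runtime check confirms this (assuming the class above is defined at module scope in a script):

a = A.foo()
print(type(a))  # <class '__main__.A'>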
Mypy and pyright don’t behave the same way for A.foo() or A[int].foo(), and I can’t fully explain how each of them behaves (maybe the maintainers could weigh in?).
One thought: for any bound method, the type of the receiver must be assignable to the type of self (or cls in this case). In other words, type[A[T]] must be assignable to type[T], but this seems wrong, because it requires T to occur inside itself. So I think one way to explain this code is that it’s invalid at the definition site.
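Spelling that constraint out (informally, using <: for assignability):

# At the definition site, binding foo requires:
#   type[A[T]] <: type[T]      =>   A[T] <: T      # T would have to contain itself
# At the call site A[int].foo(), it requires:
#   type[A[int]] <: type[int]  =>   A[int] <: int  # clearly false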
(Maybe this is just too niche of an edge case for anyone to really bother?)
/Users/jelle/py/tmp/a_foo.py:6:13 - information: Type of "A.foo()" is "Unknown"
/Users/jelle/py/tmp/a_foo.py:7:13 - information: Type of "A[int].foo()" is "Unknown"
/Users/jelle/py/tmp/a_foo.py:7:20 - error: Cannot access attribute "foo" for class "type[A[int]]"
    Could not bind method "foo" because "type[A[int]]" is not assignable to parameter "cls"
      "type[A[int]]" is not assignable to "type[int]"
    Type "type[A[int]]" is not assignable to type "type[int]" (reportAttributeAccessIssue)
1 error, 0 warnings, 2 informations
For the first call, no type argument is given for A, so we get A[Any]. Then the call returns Any (or Unknown, which is a variant of Any). We pass type[A[Any]] to a parameter of type type[Any], which is valid.
The second call specializes T to int. Then we pass type[A[int]] to a parameter of type type[int], which pyright correctly identifies as invalid.
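As an aside (not from the thread itself): the conventional way to express "construct and return an instance of the class" is Self, rather than a class-scoped variable in the cls annotation, which sidesteps the binding problem entirely:

from typing import Self, assert_type

class A[T]:
    @classmethod
    def foo(cls) -> Self:
        return cls()

assert_type(A[int].foo(), A[int])  # binds fine; inferred as A[int]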