`cls: type[T]` and `super()`

Consider this metaclass definition:

class Meta(type):
    def __call__[T](cls: type[T]) -> T:
        return super(Meta, cls).__call__()

All type checkers (mypy/pyright/ty/pyrefly/zuban/pycroscope) error on super(Meta, cls), saying that this is an invalid super() call, because a value of type type[T] is not necessarily an instance or subtype of Meta. This error is correct! There is no constraint on T that allows us to say that this super() call is valid.
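To make that concrete (a sketch; Plain is a name introduced just for illustration): T can be solved to a class whose metaclass is plain type, in which case cls is not an instance or subclass of Meta and the two-argument super() call fails at runtime:

class Plain: ...  # metaclass is type, not Meta

# T solves to Plain; cls is the class object Plain, which is neither an
# instance nor a subclass of Meta, so super(Meta, cls) raises TypeError.
Meta.__call__(Plain)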

But now let’s sugar that explicit two-argument super call into its implicit zero-argument form: just super() instead of super(Meta, cls). This should be a no-op! These two forms have identical semantics. At runtime, the zero-argument form literally looks up the enclosing class (Meta) and the first argument of the enclosing function (cls) to provide the arguments. So now we have:

class Meta(type):
    def __call__[T](cls: type[T]) -> T:
        return super().__call__()

Even though the two examples have identical semantics at runtime, four of those type checkers (pyright, pyrefly, zuban, and pycroscope) now stop complaining about the super call.

This is relevant because the second form exists in the conformance suite, and I am trying to figure out how to make ty both correct and conformant here.

I don’t think that ignoring incorrect-super-arguments diagnostics for implicit super is correct, but that is the approach used by every type checker that currently passes this conformance test.

But I’m not sure how else this code should be written. I don’t see any way to express the invariant that type[T] in this case must be an instance of Meta. It can’t be expressed as an upper bound on T, since a bound constrains T itself, not the type object type[T]. It could be expressed explicitly with intersection types.
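For illustration, with a hypothetical Intersection special form (no such user-expressible construct exists in Python’s type system today, so no checker accepts this as written), the invariant would look something like:

class Meta(type):
    # Hypothetical: cls is both a type[T] and an instance of Meta.
    def __call__[T](cls: Intersection[type[T], Meta]) -> T:
        # With that constraint, the zero-argument super() would be valid.
        return super().__call__()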

Should type checkers assume that the explicitly annotated type of cls should be implicitly intersected with Meta? That would mean that both examples above should pass. But if that applies to cls in a metaclass method, it should also apply to self in any method; that’s quite a large change to type inference.

I’m unsure what the right approach is here, but I’m inclined to add an E? to that line in the conformance suite, since it seems to me that if anything it is more correct to error on that line than not to.


This seems like the right approach to me. Even with explicit user-expressible intersections, I think we would want this to be implicit instead of users needing to write this out themselves in this case.

I think this makes some sense, but as I observed, it’s a big change. (Especially if it’s broadened to apply to all self arguments – and there doesn’t seem to be any principled reason why it should apply only to cls in metaclass methods.)

Today all type checkers are fine with this kind of pattern, and it works fine at runtime, too:

class Meta(type):
    def __call__[T](cls: type[T]) -> T:
        return type.__call__(cls)
    
class Other: ...

Meta.__call__(Other)

If we were to implicitly intersect every self-argument annotation with the enclosing class (treating it as an implicit upper bound), then Meta.__call__(Other) would have to be an error.
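The analogous self pattern would have to become an error too, even though it is accepted everywhere and fine at runtime today (a minimal sketch; Base and Unrelated are names made up for illustration):

class Base:
    def identity[T](self: T) -> T:
        return self

class Unrelated: ...

# Fine at runtime, and accepted by type checkers today. Under implicit
# intersection, the argument would need to satisfy T & Base, and
# Unrelated is not a subtype of Base.
Base.identity(Unrelated())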

This version is accepted by pyrefly, zuban, and pyright (since they don’t validate zero-argument super) even though it fails at runtime on the super() call:

class Meta(type):
    def __call__[T](cls: type[T]) -> T:
        return super().__call__()
    
class Other: ...

Meta.__call__(Other)
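(The runtime failure: the zero-argument super() desugars to super(Meta, Other), and Other is neither an instance nor a subclass of Meta, so CPython raises TypeError: super(type, obj): obj must be an instance or subtype of type.)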

EDIT: Of course, calling unbound methods with an explicitly-provided self type like this is inherently unsound to begin with, since we allow subclass method overrides to narrow the self type (and that narrowing happens by default with the implicit self type when it is un-annotated). So this side effect of implicit intersection looks more like a benefit…
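A minimal sketch of that unsoundness (Base, Sub, and extra are illustrative names):

class Base:
    def method(self) -> int:
        return 0

class Sub(Base):
    extra: int = 1

    # The implicit self type here is Sub, narrowing Base.method's self.
    def method(self) -> int:
        return self.extra

def call_unbound(c: type[Base], obj: Base) -> int:
    # Checks against Base.method's signature (self: Base), but at
    # runtime c.method may be Sub.method, which expects a Sub.
    return c.method(obj)

call_unbound(Sub, Base())  # AttributeError: no attribute 'extra'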