Spec change: Inference with TypeVarTuple

The spec currently says that this code should be rejected by a type checker:

def f[*Ts](x: tuple[*Ts], y: tuple[*Ts]): ...

f((1,), ("1",))  # error, int and str are different types

This is inconsistent, because we allow this:

def f[T](x: T, y: T): ...

f(1, "1")  # ok, T is inferred as object or int | str or Literal[1, "1"], depending on type checker

Since this is inconsistent, I feel it is better to keep the behavior of TypeVarTuple similar to that of TypeVar, and I am proposing to change the spec accordingly.
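To make the parallel concrete, under the proposed change a checker could solve `*Ts` by joining the element types position-wise, just as it already does for a plain `T`. The sketch below is purely illustrative (the `join` and `solve_ts` helpers are hypothetical, not checker internals); it falls back to `object` where a real checker might infer a union:

```python
# Illustrative runtime sketch: mimic how a checker could solve *Ts for
# f((1,), ("1",)) by joining the element types position-wise.

def join(a: type, b: type) -> type:
    # A real checker would compute a union or a common supertype;
    # this sketch simply widens to object when the types differ.
    return a if a is b else object

def solve_ts(x: tuple, y: tuple) -> tuple[type, ...]:
    assert len(x) == len(y)  # arity still has to match exactly
    return tuple(join(type(a), type(b)) for a, b in zip(x, y))

# For f((1,), ("1",)), *Ts would be inferred as something like
# tuple[int | str] -- here widened to (object,).
print(solve_ts((1,), ("1",)))
```

Note that only the element-type inference changes under the proposal; a length mismatch between the two tuples would still be an error.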


I’ve actually thought this was meant to be this way for some reason, never thought it was just overlooked / forgotten.

+1 on this.

I agree.

There are use cases for preferring a stricter inference, but those use cases apply to TypeVar also, so they don’t argue in favor of preserving this inconsistency.

Seems logical to allow, and there is a reasonable interpretation that mirrors the one type checkers already implement for TypeVar.

For what it’s worth, this is on my list of things that need better specification language eventually, though it’s extremely low on the list, and I don’t see a reason to suggest postponing for a more rigorous definition.

We were just talking about this two days ago actually, wondering why it was special-cased like this, and came to the conclusion that it was strange.
Anyway, getting rid of special-casing is almost always a good idea AFAIK, so +1.


I think a lot of people would like normal type vars to infer like this:

def f[T](t1: T, t2: T): ...

f(1, "")  # expect error: Literal[""] is not assignable to int

For TypeVarTuple, the current rules are not very clear about what is and isn't allowed. The spec just says "no unions", which is quite underspecified. Are common supertypes allowed, for example?


Mailman 3 Review of PEP 646 (Variadic Generics) - Typing-sig - python.org has the original discussion on this. The reason, I think, was to support functions like this:

def add[*Shape](a: Tensor[*Shape], b: Tensor[*Shape]): ...

Here the inputs often need to match for the operation to make sense. If you try to add a (2, 2) tensor to a (3, 3) one, it will just crash and fail. So a solution like Tensor[Literal[2] | Literal[3], Literal[2] | Literal[3]] is undesirable. That is a simplification, since broadcasting rules exist ((2, 1) can usually be added to (2, 4)), but those were left out as too messy.
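As a runtime analogue of why strict inference was attractive for this use case, here is a minimal sketch (the `Tensor` class is hypothetical, not any real library's API) where shape-mismatched addition fails immediately:

```python
# Hypothetical minimal Tensor: exact-shape addition, no broadcasting.
# Strict *Shape inference would catch the mismatch statically instead
# of letting it surface as a runtime error like this.
class Tensor:
    def __init__(self, shape: tuple[int, ...]):
        self.shape = shape

    def __add__(self, other: "Tensor") -> "Tensor":
        if self.shape != other.shape:
            raise ValueError(f"shape mismatch: {self.shape} vs {other.shape}")
        return Tensor(self.shape)

Tensor((2, 2)) + Tensor((2, 2))    # ok: shapes match
# Tensor((2, 2)) + Tensor((3, 3))  # would raise ValueError at runtime
```

The counterargument in this thread is that the same strictness could be wanted for plain TypeVar too, so it does not justify treating TypeVarTuple differently.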

Edit: I’m unsure how likely that issue is to happen in practice, though, especially since tuple is the only covariant variadic type supported, while the others are all invariant.

In NumPy we’re not able to use TypeVarTuple for shape-typing at the moment. One reason is, like you mentioned, the invariance. Another is not being able to specify bound=int. So we instead use integer tuples for this. I can’t speak for other tensor libraries though.
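The integer-tuple approach described above can be sketched with a plain TypeVar bound to a shape tuple, which is expressible today (unlike a TypeVarTuple with bound=int). The `Array` class below is hypothetical, illustrating the pattern rather than NumPy's actual stubs:

```python
from typing import Generic, TypeVar

# A plain TypeVar can be bounded by tuple[int, ...] today; TypeVarTuple
# offers no equivalent of bound=int for its elements.
ShapeT = TypeVar("ShapeT", bound=tuple[int, ...])

class Array(Generic[ShapeT]):
    """Hypothetical array generic over its shape tuple."""
    def __init__(self, shape: ShapeT):
        self.shape = shape

def rows(a: "Array[tuple[int, int]]") -> int:
    # The shape type itself tells us this is a 2-D array.
    return a.shape[0]

print(rows(Array((3, 4))))
```

The trade-off is that a single bounded TypeVar cannot destructure individual dimensions the way `*Shape` can, which is part of why shape-typing libraries still want a usable TypeVarTuple.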