The spec currently says that this code should be rejected by a type checker:
def f[*Ts](x: tuple[*Ts], y: tuple[*Ts]): ...
f((1,), ("1",)) # error, int and str are different types
This is inconsistent, because we allow this:
def f[T](x: T, y: T): ...
f(1, "1") # ok, T is inferred as object or int | str or Literal[1, "1"], depending on type checker
I feel this is inconsistent, and that it is better to keep the behavior of TypeVarTuple similar to that of TypeVar. I am proposing to change the spec accordingly.
There are use cases for preferring a stricter inference, but those use cases apply to TypeVar as well, so they don't argue in favor of preserving this inconsistency.
Seems logical to allow, and there is a reasonable interpretation that mirrors the one type checkers already implement for TypeVar.
For what it's worth, this is on my list of things that need better specification language eventually, though it's extremely low on the list, and I don't see a reason to postpone this change for a more rigorous definition.
We were just talking about this two days ago actually, wondering why it was special-cased like this, and came to the conclusion that it was strange.
Anyway, getting rid of special-casing is almost always a good idea AFAIK, so +1.
I think a lot of people would like normal type vars to infer like this:
def f[T](t1: T, t2: T): ...
f(1, "") # expect error: Literal[""] is not assignable to int
For TVT, with the current rules it's not very clear what is and isn't allowed; the spec just says "no unions", which is quite underspecified. Are common supertypes allowed, for example?
Here the inputs often need to match to make sense. If you try to add a (2, 2) tensor to a (3, 3) one, it will just crash. So a solution like Tensor[Literal[2] | Literal[3], Literal[2] | Literal[3]] is undesirable. That is a simplification, since broadcasting rules exist ((2, 1) can usually be added to (2, 4)), but those were left out as messy.
Edit: I'm unsure how likely that issue is to happen in practice, though, especially since tuple is the only supported covariant variadic while the others are all invariant.
In NumPy we're not able to use TypeVarTuple for shape-typing at the moment. One reason is, like you mentioned, the invariance. Another is not being able to specify bound=int. So we instead use integer tuples for this. I can't speak for other tensor libraries, though.