I’m not entirely sure if this’d be ok (as I’d just change the title to Type Defaults for Type Parameters) but might it be better to use type parameters now that PEP 695 is accepted?
I think it’s fine to rename as you suggest, and it would improve clarity.
I recently wrote the typing conformance tests for PEP 646 and TypeVarTuples. When writing the tests, I carefully read every line and code sample in PEP 646. I realized when doing this that the authors of PEP 646 had made a number of changes after the last draft that I had reviewed (which was prior to its acceptance). That meant my knowledge (and the pyright implementation) were a little bit out of date with the final draft.
One of the changes that was added to PEP 646 late in the process is going to cause problems for PEP 696, and we’ll need to think about how we want to address this conflict.
The problem is in this section of the spec. Specifically, there’s an ambiguity in the case where a class or type alias is parameterized by a TypeVarTuple and one or more TypeVars.
Here’s the specific wording:
In order to substitute these type variables with supplied type arguments, any type variables at the beginning or end of the type parameter list first consume type arguments, and then any remaining type arguments are bound to the TypeVarTuple.
And also:
Note that the minimum number of type arguments in such cases is set by the number of TypeVars.
Let’s look at an example of where this is problematic for PEP 696.
type Foo[*Ts, T] = tuple[*Ts, T]
# Ts gets the type *tuple[int, str]
# T gets the type str
x1: Foo[int, str, str]
# This can also be specified as...
x2: Foo[*tuple[int, str], str]
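The splitting rule quoted above can be sketched as a small model (the helper name `split_type_args` is made up here; this is a toy illustration, not how any real type checker is implemented): TypeVars before and after the TypeVarTuple consume type arguments first, and the TypeVarTuple absorbs whatever remains.

```python
# Toy model of the PEP 646 argument-splitting rule (illustrative only).

def split_type_args(params, args):
    """params: e.g. ["*Ts", "T"]; args: e.g. ["int", "str", "str"]."""
    ts_index = next(i for i, p in enumerate(params) if p.startswith("*"))
    prefix = params[:ts_index]       # TypeVars before the TypeVarTuple
    suffix = params[ts_index + 1:]   # TypeVars after it
    if len(args) < len(prefix) + len(suffix):
        raise TypeError("too few type arguments")
    binding = dict(zip(prefix, args))                         # front TypeVars
    binding.update(zip(suffix, args[len(args) - len(suffix):]))  # back TypeVars
    # The TypeVarTuple gets everything in the middle.
    binding[params[ts_index]] = tuple(args[len(prefix):len(args) - len(suffix)])
    return binding

# type Foo[*Ts, T] specialized as Foo[int, str, str]:
print(split_type_args(["*Ts", "T"], ["int", "str", "str"]))
# {'T': 'str', '*Ts': ('int', 'str')}
```

Note that in this model the last positional argument always binds the trailing TypeVar, which is exactly what collides with defaults below.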
Now, consider if T has a default type, as allowed in PEP 696.
type Foo[*Ts, T = int] = tuple[*Ts, T]
# This is unambiguous
x1: Foo[int, str, str]
# I think this is unambiguous
x2: Foo[*tuple[int, str]]
Perhaps the resolution is to amend the section of the typing spec adopted from PEP 646 and clarify that Foo[int, str, str] in this example ignores the presence of a default type for T, and that if your intent is to use the default type for T, you must use Foo[*tuple[int, str]] instead.
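Under that resolution, the behavior could be modeled like this (a hypothetical sketch with a made-up helper; the "unpack" marker just stands in for a `*tuple[...]` argument): a suffix TypeVar's default applies only when the suffix arguments are omitted via an unpacked tuple, while a flat argument list always binds its last argument to the suffix TypeVar.

```python
# Sketch of the proposed resolution for `type Foo[*Ts, T = <default>]`.
# Illustrative only; not part of any spec or tool.

def bind_suffix_typevar(args, default):
    """args is either a flat list like ["int", "str", "str"], or a
    single unpacked tuple encoded as ("unpack", ("int", "str"))."""
    if args and args[0] == "unpack":
        # Foo[*tuple[int, str]] -> Ts = (int, str), T uses its default
        return {"Ts": args[1], "T": default}
    # Foo[int, str, str] -> the last argument binds T; default is ignored
    return {"Ts": tuple(args[:-1]), "T": args[-1]}

print(bind_suffix_typevar(["int", "str", "str"], "int"))
# Ts = (int, str), T = str (default ignored)
print(bind_suffix_typevar(("unpack", ("int", "str")), "int"))
# Ts = (int, str), T = int (default applies)
```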
The good news is that there’s not much use of PEP 646 yet (mypy just recently added support for it), so if we have to make minor changes to the TypeVarTuple portion of the spec to accommodate PEP 696, we can probably do so without a major backward compatibility concern.
This isn’t something that needs specifying in PEP 696 is it, just in the typing spec right?
It would be good to come to consensus on how to resolve the issue. Doing so will eliminate a potential objection for PEP 696’s acceptance.
Are you proposing the following?
type Foo[*Ts, T = int] = tuple[*Ts, T]
x2: Foo[*tuple[int, str]]
assert_type(x2, tuple[int, str, int]) # Ts bound to (int, str), T uses default
x3: Foo[int, str]
assert_type(x3, tuple[int, str]) # Ts bound to (int,), T bound to str
That seems strange to me; I would expect Foo[int, str] and Foo[*tuple[int, str]] to always be equivalent. I suppose that would imply that it’s not meaningful to use a TypeVar with a default after a TypeVarTuple.
Are you proposing the following?
That’s what I was proposing, but I don’t have a strong opinion. I’m fine with your proposal too. I just want to ensure that the intended behavior is clearly defined in the spec.
My preference would be to forbid it entirely, as it’s likely to be confusing. If we compare it to function signatures: an argument with a default value is allowed after *args, but it can only be specified by keyword. We currently cannot specify a type argument by keyword. If we could, I’d follow functions and only allow default type variables after a TypeVarTuple to be specified by name.
I think default type variables make PEP 637 worth revisiting later.
I’m currently working on adding support for PEP 696 in mypy. Even though forbidding TypeVar defaults after TypeVarTuples would simplify the implementation, I don’t believe that’s necessary. They can resolve just fine (and already do in pyright).
I agree with what Jelle said; it makes sense IMO that Foo[int, str] and Foo[*tuple[int, str]] are equivalent. I.e. if someone wants different arguments for Ts, they should need to specify all other TypeVars as well. I’m not sure that would even be used much, so it should probably be fine. Maybe it’s just worth adding the example to the PEP to clarify it.
If we want to forbid TypeVars with defaults that follow a TypeVarTuple, should we also forbid it at runtime? I.e., should def f[*Ts, T=int](): be a SyntaxError? My preference would be yes, and to relax this restriction if and when we get keyword-based type params (à la PEP 637).
Relatedly, in the PEP (PEP 696 – Type defaults for TypeVarLikes | peps.python.org) I don’t understand the sentence “This would mean that TypeVarLikes with defaults proceeding those with non-defaults can be checked at compile time.” The grammar as stated doesn’t check that: is it meant to say that the compiler should raise a SyntaxError when it encounters a type parameter with a default before one without a default? I’ll submit a PR to the PEP to clarify that.
Another question that came up while reading the PEP: the PEP says that TypeVar defaults are not valid in functions, essentially because it’s too hard to implement in a type checker.
Should it therefore be a SyntaxError to use a default in a generic function, like this:
def f[T = int](): ...
My preference would be to allow this at runtime, so we can allow type checkers which do want to implement support for defaults on generic functions to do so. The PEP could be changed to say that the semantics for defaults in generic functions are unspecified; type checkers may either raise an error if they encounter them or use some other semantics.
I proposed a change to the PEP to clarify this case and the one I discussed above: PEP 696: Proposed changes by JelleZijlstra · Pull Request #3638 · python/peps · GitHub.
I generally lean towards the runtime being laxer where adding errors would take work, to allow experimentation and easier forward compatibility. With a target Python version plus forward references, if we allow keyword-based type params in, say, 3.15, earlier versions could still likely use them, potentially wrapping some annotations as forward references.
I don’t have a strong preference though, and can buy making runtime and type checking produce errors consistently.
The Typing Council supports this PEP. Statement on behalf of the full council:
The Typing Council recommends accepting PEP 696, “Type Defaults for Type Parameters”. The Steering Council previously deferred making a decision on this PEP until it had seen the impact of PEP 695, “Type Parameter Syntax”. PEP 695 has been well-received by the typing community, and we believe that PEP 696 builds on the new syntax in a natural way.
The proposed feature improves the usability of generic classes by allowing sensible defaults to be specified for omitted type parameters. The PEP demonstrates the usefulness of this feature with two examples of type definitions in the standard library that would benefit from TypeVar defaults. The PEP allows libraries to evolve their typed interfaces better, since in many cases it would mean authors no longer have to trade off between more accurate typing and new errors and verbosity for existing users. The PEP has been extensively discussed and revised in public typing forums over the past two years and has community support. Reference implementations of the needed runtime and type checker changes are included.
The Steering Council decided to follow the Typing Council’s recommendation (and the 2023 SC recommendation, really) and accepts PEP 696: PEP 696 Type defaults for TypeVarLikes · Issue #177 · python/steering-council · GitHub
Thank you! I opened issues to track implementing the PEP:
- Implement PEP 696 (Type parameter defaults) · Issue #116126 · python/cpython · GitHub
- Spec: Add PEP 696 (Type parameter defaults) · Issue #1642 · python/typing · GitHub
Any help from readers here on either issue would be appreciated.
I am working on the implementation of the PEP and I’d like some more opinions on how PEP 696 should work with TypeVarTuples. The PEP’s spec is https://peps.python.org/pep-0696/#grammar-changes, giving the example:
class Qux[*Ts = *tuple[int, bool]]: ...
And in the grammar:
type_param_default:
| '=' e=expression
| '=' e=starred_expression
This implies that syntactically, starred expressions can be used in any type parameter default. However, presumably type checkers should require a * on all TypeVarTuple defaults, and reject it on TypeVar and ParamSpec.
So:
class A[T = *tuple[int]]: ... # OK at runtime, type checker error
class B[T = int]: ... # OK
class C[**P = *tuple[int]]: ... # OK at runtime, type checker error
class D[**P = [int, str]]: ... # OK
class E[*Ts = *tuple[int]]: ... # OK
class F[*Ts = int]: ... # OK at runtime, type checker error
It seems odd to allow these syntactic variants that are always rejected by type checkers.
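For illustration, the table above boils down to a single predicate (this toy classifier is entirely hypothetical, not part of any tool): under this reading, the runtime accepts everything the grammar allows, while type checkers accept a starred default exactly when the parameter is a TypeVarTuple.

```python
# Toy classifier for the A-F examples above (illustrative only).

def classify(param_kind, default_is_starred):
    """param_kind: 'TypeVar', 'ParamSpec', or 'TypeVarTuple'.
    Returns (ok_at_runtime, ok_for_type_checkers)."""
    runtime_ok = True  # the grammar accepts a starred default anywhere
    # Starred default iff the parameter is a TypeVarTuple:
    checker_ok = (param_kind == "TypeVarTuple") == default_is_starred
    return runtime_ok, checker_ok

# class A[T = *tuple[int]]  -> OK at runtime, type checker error
print(classify("TypeVar", True))        # (True, False)
# class E[*Ts = *tuple[int]] -> OK everywhere
print(classify("TypeVarTuple", True))   # (True, True)
# class F[*Ts = int]        -> OK at runtime, type checker error
print(classify("TypeVarTuple", False))  # (True, False)
```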
In contrast, the first draft of PEP 695 proposed:
class MyClass[T = int, *Ts = tuple[int, ...], **P = [int, str]]: ...
So without a * before the TypeVarTuple default.
I like the simplicity of this syntax, but it reduces the symmetry between the syntax for the default and for specialization.
I’m currently planning to implement the following:
- Do not allow * before TypeVar or ParamSpec defaults, as this is not legal in the type system, and in general the grammar only allows * in specific places where it is meaningful.
- Allow TypeVarTuple defaults both with and without a * prefix. This allows you to use e.g. *Ts = Unpack[Alias] if you want to use a default that does not support the * syntax.
- When performing type substitution at runtime, TypeVarTuple will error if the default is not e.g. a tuple.
Your proposal makes sense to me. I made the same assumption in pyright’s parser when I implemented this feature.
Reviewing gh-116126: Implement PEP 696 by JelleZijlstra · Pull Request #116129 · python/cpython · GitHub, I’m somewhat concerned about the way the runtime implementation distinguishes between a TypeVar that has no default and a TypeVar that defaults to None. Currently the PR implements the following behaviour:
- If a TypeVar defaults to None, the __default__ attribute of that TypeVar will be types.NoneType.
- If a TypeVar has no default, the __default__ attribute of that TypeVar will be None.
This is the behaviour that’s specified (in one slightly throwaway line) in the PEP, but I can’t see it being discussed in this thread at all before now. To me it feels somewhat unintuitive, quite subtle, and very easy to forget. Jelle’s done a good job at documenting this subtlety, but I’m still worried that this will cause confusion among users and result in lots of issues being filed at CPython.
I’d like to propose that we introduce a new sentinel value (with a nice repr) for the __default__ attribute to indicate that the TypeVar has no default: typing.NoDefault. Adding a dedicated sentinel for this purpose would have the added advantage that we would be able to represent the signature of the TypeVar constructor more easily in the docs. Currently the implementation PR includes this in the documentation as the new constructor signature for TypeVar:
.. class:: TypeVar(name, *constraints, bound=None, covariant=False, contravariant=False, infer_variance=False, default=<unrepresentable>)
If we used typing.NoDefault as the __default__ value for TypeVars that have no default, we could instead represent the signature for typing.TypeVar in the docs like this:
.. class:: TypeVar(name, *constraints, bound=None, covariant=False, contravariant=False, infer_variance=False, default=typing.NoDefault)
That has two advantages in terms of documentation: it’s self-evident from the constructor signature what effect failing to pass the default parameter has (it means you have no default); and it’s valid Python syntax, unlike <unrepresentable>.
I mostly agree with everything you’ve presented here. I would like to note, however, that this follows the behaviour that the bound argument currently has, which, however bizarre, doesn’t lead to any funky situations for type checkers, because to type checkers None is type(None) IIRC. It might be worth changing this as well if possible, since the behaviour is similarly confusing.
FWIW, the other ideas that were discussed (I think privately) were defaulting to typing.Unknown (which would similarly have needed to be added), which was turned down because typing doesn’t have access to your type checking mode at runtime, or even just object(), which I think falls victim to a lot of the issues that other sentinels have.
bound does not actually behave this way:
>>> print(TypeVar("T").__bound__)
None
>>> print(TypeVar("T", bound=None).__bound__)
None
This means runtime type checkers can’t distinguish between TypeVars with a bound of None and TypeVars with no bound. A bound of None is meaningful within the type system (here’s how it works with pyright: link), but it’s not practically useful because None has only one value.
Unlike with bounds, a default of None is actually useful, so we do need some mechanism to distinguish them. The NoneType/None convention that is currently implemented works, but it’s a little odd. I’m OK with switching to Alex’s idea above.