PEP 695: Type Parameter Syntax

PEP 695 is posted.

It proposes to add an improved syntax for specifying type parameters within a generic class, function, or type alias. It also introduces a new statement for declaring type aliases.
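For readers who haven't followed the typing-sig discussions, here is a minimal sketch of what the PEP replaces: today a type variable must be constructed explicitly at module scope, while the proposed syntax (shown in comments, since it requires the new grammar) declares it inline. The function name `first` is just an illustrative example, not from the PEP.

```python
from typing import TypeVar

# Today's spelling: the type variable lives at module scope.
T = TypeVar("T")

def first(items: list[T]) -> T:
    return items[0]

# Under PEP 695 the declaration moves inline (requires the new syntax):
#   def first[T](items: list[T]) -> T:
#       return items[0]

print(first([1, 2, 3]))  # 1
```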

This PEP has already gone through several cycles of discussions in the typing-sig forum and public (virtual) meet-ups in the Python typing community. Feedback from those discussions has been incorporated. We’re interested in getting feedback from the broader Python-dev community.


This is an incredible step forward!


I’m just curious, but why is this allowed:

S = 0
def outer1[S]():  # Type variable given same name as name from enclosing scope.
    S = 1  # Ordinary variable given same name as type variable from enclosing type scope.

It’s incredibly confusing to me. Does it serve a useful purpose to allow this? I ask because it’s a lot easier to allow something in the future than it is to disallow it.

I know this is minor, but the syntax highlighting for the new type soft keyword is confusing. Its use seems to be analogous to class or def, so ideally it should be green like them. (Initially, I didn’t like the syntax, but once I saw the analogy to class and def, I loved it.)

An earlier draft of the PEP disallowed name conflicts between type parameters and other names (local variables, etc.). Guido pointed out that there is no precedent for this in Python, which allows inner scopes to define new names that overlap with the names in outer scopes. The proposal was amended to be consistent with existing scoping rules in Python.
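To illustrate the existing precedent being referred to, an inner scope can already rebind a name from an enclosing scope without affecting it; a quick sketch (hypothetical names):

```python
S = 0  # module-level name

def outer():
    S = 1  # a new local that shadows the module-level S
    return S

assert outer() == 1
assert S == 0  # the module-level binding is untouched
```

The type-parameter scoping in the PEP follows the same rule: the inner binding simply shadows the outer one.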

I presume you’re referring to syntax highlighting in pylance. That’s something we can fix if/when the PEP is accepted. The current implementation is a proof of concept.


I believe they are referring to the highlighting in the PEP. type is highlighted the way it is because it’s a built-in; the highlighting wasn’t changed for the PEP. Highlighters will need to distinguish between type as a keyword and type as a built-in. Has an alternative like “alias” or “typealias” been considered? (Not that I’m particularly qualified to comment, but I personally don’t like “type” being used to mean something significantly different than the built-in. I do love everything else about this PEP :slight_smile: )


Okay, that makes sense! I guess that maybe type checkers or linters will warn about this?

Awesome, yeah, that makes sense.

Also, just want to say that this is a really well-written PEP. It’s really cool how variance can be inferred. I always had to look up covariant and contravariant, and it’ll be nice not to have to do that anymore!

What are the thoughts on a new keyword such as typedef, instead of making type even more confusing? It already has a dual purpose: it not only returns the type of an object but also serves to create new classes.

Edit: I think typedef is a good choice given its purpose in C, where it creates an alias for an existing type. Given the purpose of the PEP and how it would like to deprecate typing.TypeAlias, I believe it to be fitting.


I also think that the new type alias syntax has the most potential for bike shedding. I don’t think this pattern appears anywhere else in Python:

<keyword> <varname> = <value>

Maybe it could instead be modeled after import module as mod? Something like this?

alias (list[T] | set[T]) as ListOrSet[T]

Can also be multi-line of course:

alias (
    dict[str, Json] | list[Json] | str | int | float | bool | None
) as Json

But I also see that type is a good choice for a new (soft) keyword because it’s already the name of a built-in function. So maybe this instead:

type dict[str, int] as MyDict

I don’t think that would work well. Whereas module names are often short and have a clear syntax (names separated with a .), type aliases can get much more complex and often span multiple lines. That’s one of the reasons I use aliases, simple / short ones can just be inlined.

With that, I think it would be more difficult to identify the alias name at the end.
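The point about name position can be seen with today's implicit-alias spelling (a sketch using typing.Union, mirroring the Json alias from the example earlier in the thread): the name comes first, so it stays easy to find even when the right-hand side spans several lines.

```python
from typing import Union

# Today's implicit alias: the name leads, and the (recursive) value
# can sprawl across multiple lines without hiding it.
Json = Union[
    dict[str, "Json"],
    list["Json"],
    str, int, float, bool, None,
]
```

With a trailing `as Json`, the name would sit after all of that instead.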

I slightly prefer type as it’s shorter. Even with the dual purpose, I don’t think it’s an issue differentiating between the soft keyword and other uses.

It also better matches the existing soft keywords match and case IMO.

Can the ast representation be adjusted to use Name(..., ctx=Store()) instead of identifier? That would make it easier to implement for static analysis tools like pyflakes and pylint, which use the ast. The Name node also includes the precise position (of the name alone), which would make error highlighting easier.

There was a related discussion about identifier in match blocks, although nobody has gotten around to changing it yet. It’s probably not that important anymore, as most tools have added workarounds in the meantime.
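For context, this is the shape being asked for: an ordinary assignment target is already a Name node with a Store context and precise position information (a small illustration using a hypothetical alias name):

```python
import ast

tree = ast.parse("MyAlias = int")
target = tree.body[0].targets[0]

# The assignment target carries its own location and a Store context,
# which is what tools want for error highlighting.
assert isinstance(target, ast.Name)
assert isinstance(target.ctx, ast.Store)
print(target.id, target.lineno, target.col_offset)  # MyAlias 1 0
```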

FWIW I like the reuse of type as the soft keyword myself.

Introducing a new term is hard. But because this is already a built-in, most (not all) code has avoided ever using it as a variable name, so the potential for confusion on that front is lower. There will be confusion about its magic :magic_wand: sometimes-keyword, sometimes-built-in status, but I don’t expect many people will be surprised at that.

As a prefix it does have the smell of a var or let prefix-style keyword that some other languages use, but I don’t find this to be a problem given that its use case is solely typing-related.

I haven’t tried thinking of alternatives.


One thing people who haven’t been following closely might want to be aware of is the interaction with PEP 696 (default values for TypeVars). There is some discussion in this section of PEP 696.

Although it was anticipated, PEP 696 wasn’t a PEP at the time of PEP 695’s writing. I think the additional new syntax involved is quite obvious, but I’m not sure how the Steering Council orders decisions on these things.

I always had to look up covariant and contravariant, and it’ll be nice not to have to do that anymore!

I agree that automatic inference of variance in definitions is nice (like with generic protocols today!), but do note that users will still need to reason about variance to understand the type errors they get — so I can’t promise you’ll never have to look up those terms :slight_smile:
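For anyone who wants the refresher anyway, today's explicit spelling makes variance visible on the TypeVar itself; these are the declarations the PEP would let checkers infer (a quick sketch):

```python
from typing import TypeVar

# Explicit variance declarations, which PEP 695 would infer automatically:
T_co = TypeVar("T_co", covariant=True)              # usable in producer (return) positions
T_contra = TypeVar("T_contra", contravariant=True)  # usable in consumer (argument) positions

assert T_co.__covariant__
assert T_contra.__contravariant__
```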


One advantage of using type instead of something more specific such as typedef is that the keyword can be used for other type-related things in the future. For example, we could have:

from foo type import X

instead of

from typing import TYPE_CHECKING, Protocol

if TYPE_CHECKING:
    from foo import X

When is the __type_variables__ member of a class populated? Will the values stored in __type_variables__ be modifiable if I write a metaclass?

The __type_variables__ member would be populated at the same time as __name__, __module__, and __qualname__.
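In other words, it would be set as part of class creation itself, before any decorators or metaclass post-processing you layer on top runs your code. A quick check of how those existing dunders behave (the __type_variables__ attribute is of course hypothetical until the PEP lands):

```python
class Widget:
    pass

# __name__, __qualname__, and __module__ are populated when the class
# object is created; __type_variables__ would be filled in at the same
# point under the PEP.
assert Widget.__name__ == "Widget"
assert Widget.__qualname__ == "Widget"
assert isinstance(Widget.__module__, str)
```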

I’ve been playing around with the fork and I’m sorry if you’ve already answered this but you mention

Several classes in the typing module that are currently implemented in Python must be reimplemented in C. This includes: TypeVar , TypeVarTuple , ParamSpec , Generic , and Union . The new class TypeAliasType (described above) also must be implemented in C

Currently it looks to me like this isn’t the case. Is this something that was deemed not worth the time investment for now (i.e. it will happen if the PEP is accepted), or has your opinion on implementing this changed?

The current fork is just a proof of concept. It will require a rewrite by someone other than myself if the PEP is accepted. For example, the current fork doesn’t allow earlier TypeVars in the parameter list to be used in PEP 696 default type expressions.

Ah right, that makes sense

I may have an alternative syntax.

def generic(*e):
    def w(f):
        return f
    return w

@generic(T := int)
def foo(value: T):
    ...

foo(5)      # ok
foo("test") # wrong typing

I don’t know if it meets all the requirements in the PEP, but I think it might be an alternative syntax that is backward compatible.
The drawback is that T in this example is declared at module scope, and it is not enforced that the arguments of generic are walrus expressions.

@15r10nk, a decorator-based syntax was considered and rejected. The problem is that decorators are not evaluated until after the decorated object. That means T in your example would not be defined at the time the def foo statement is executed by the interpreter.

As you pointed out, this approach also declares T at the module scope, so there is no proper scope enforcement — which is one of the goals of the PEP.

And finally, a decorator-based syntax would look significantly different than any other language that we surveyed in the appendix of the PEP. It’s OK for Python to be different if there’s a good reason. But such differences create friction for developers who are coming from other languages. All else being equal, it’s better to choose a syntax that looks familiar to developers coming from other languages.


I agree with your points except the first one, because I don’t know where this undefined state would come from.


from inspect import signature

def generic(*e):
    def w(f):
        print(signature(f))
        return f
    return w

def test_signature(f):
    print(signature(f))
    return f

@test_signature
@generic(T := int)
def foo(value: T):
    ...

output (Python 3.10.8):

(value: int)
(value: int)

But you might be right. In the long term it would be better to have an explicit, direct syntax to express the intent. The only benefit here would be backward compatibility, which has no value in the long term. :+1: for the great work and thank you for your reply.