PEP 695: Type Parameter Syntax

Thoughts on PEP 695, rejection of PEP 677 and the future of typing syntax

Since @thomas / the Steering Council asked the community for more thoughts on PEP 695, I thought I’d write something up.

First and foremost, I think PEP 695 is really well thought through, and I admire all the effort and cleverness that went into it. I think the scoping challenges were formidable and the proposed solution addresses them well. The new syntax for generic classes and functions overall feels natural to me as a user of typed Python.

Summary of PEP 695

I saw a couple of messages on Discord and here about the PEP being hard to follow, so here’s a quick summary of the PEP (still contains jargon, but is at least shorter). PEP 695 comprises three things:

  1. A new syntax for generic classes and functions
  2. A new syntax for type aliases
  3. A way for type checkers to automatically infer variance of type variables

How do these things relate / why is this one PEP instead of two or three?

The new syntax isn’t just syntactic sugar, but an effort to clarify the scoping of type variables.
Type variables are conceptually meaningless outside of a scope that binds them. However, today’s syntax for type variables confuses that fact. This is particularly confusing when the type variable has properties attached (like variance or bounding), because those aren’t properties of the type variable itself, as much as properties of the class, function or alias that binds them.

The syntax changes thus target the places where you’d bind a type variable, mirroring precedent in other languages. The automatic inference of variance is primarily a usability improvement, but is part of this PEP because including it means we don’t need new syntax to explicitly express variance.
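To make the binding confusion concrete, here is a runnable sketch using today’s syntax (the names `Box` and `first` are invented for illustration): a type variable is declared at module scope, carries properties like a bound on the object itself, and is then reused across unrelated binding sites.

```python
from typing import Generic, TypeVar

# Today a type variable is declared at module scope, and properties like a
# bound are attached to the TypeVar object itself at declaration time...
T = TypeVar("T", bound=str)

# ...even though conceptually the bound is a property of each binding site.
# The same runtime object is routinely reused to parameterise unrelated scopes:
class Box(Generic[T]):
    def __init__(self, item: T) -> None:
        self.item = item

def first(items: list[T]) -> T:
    return items[0]

# The bound lives on the TypeVar object, not on Box or first:
print(T.__bound__)  # <class 'str'>
```

Under PEP 695 each binding site would declare its own parameter instead: `class Box[T]: ...` and `def first[T](items: list[T]) -> T: ...`, with nothing leaking into the enclosing scope.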

Rejection of PEP 677

Okay, I’ve sufficiently buried the lede, so here it is: I would be very surprised if the Steering Council accepted PEP 695. The Steering Council is largely the same as when it discussed PEP 677 (syntax for callable types). Every reason for rejecting PEP 677 applies to this PEP as well, except often stronger.

PEP 677 rejection notice. I excerpt the points below, per my interpretation; click through to the notice if you’re unfamiliar with it.

Let’s go through the reasons the SC provided for rejecting PEP 677:

  1. We feel we need to be cautious when introducing new syntax.
    […] A feature for use only in a fraction of type annotations, not something every Python user uses

This still clearly applies.

With PEP 677, you could have argued that it’s intuitive and mirrors existing Python syntax, that it’s a smaller change, that it’s easier for users to ignore. But PEP 695 changing how functions and classes can be declared is a big deal… It’s amongst the more personal changes you can make to a language. I expect Python users to have feelings about PEP 695 syntax.

And while I think the SC was wrong on PEP 677, they are absolutely right to be wary of syntax changes.
For example, the small syntax change in PEP 646 maybe flew under the radar, but now it’s causing a little bit of trouble for PEP 649.

PEP 695’s estimate is use in 14% of files with typing. I measured the prevalence of TypeVar on the corpus of hauntsaninja/mypy_primer (a tool that runs mypy and pyright over millions of lines of code). This is 127 projects (often name brand) that use typing in CI and close to ten million lines of code. It depends on how you count, but Callable was used in about 22% of files with typing and TypeVar was used in about 9%. PEP 696 (defaults for type variables) could change how much generics are used, but that remains to be seen.
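The counting itself is nothing fancy. A toy sketch of the per-file measurement (purely illustrative, not the actual mypy_primer methodology):

```python
import re

def feature_rate(files, pattern):
    """Among files that appear to use typing, the fraction matching pattern.

    A crude illustration of the counting described above; the real
    measurement over the mypy_primer corpus is more careful about
    what counts as a "file with typing".
    """
    typed = [src for src in files if "typing" in src or "->" in src]
    if not typed:
        return 0.0
    return sum(1 for src in typed if re.search(pattern, src)) / len(typed)

# A tiny stand-in corpus: three files use typing, one does not.
corpus = [
    "from typing import Callable\nf: Callable[[int], int]",
    "from typing import TypeVar\nT = TypeVar('T')",
    "def add(a: int, b: int) -> int: return a + b",
    "print('hello world')",
]
print(feature_rate(corpus, r"\bTypeVar\b"))   # 1 of 3 typed files
print(feature_rate(corpus, r"\bCallable\b"))  # 1 of 3 typed files
```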

  1. While the current Callable[x, y] syntax is not loved, it does work.
    This PEP isn’t enabling authors to express anything they cannot already.
    […] We can imagine a future where the syntax would desire to be expanded upon.

Unlike PEP 677, which was just sugar, PEP 695 does make generics conceptually clearer. But it’s still true that the syntax changes in PEP 695 do not enable things that aren’t already expressible. (For what it’s worth, this should be a point of pride. It’s great that users on all Python versions can typically benefit immediately from new typing features)

While type variables are relatively mature, I think they are still more susceptible to future changes than PEP 677 was (there was basically only one direction to take PEP 677, which is the extended syntax that PEP 677 discussed). For example, PEP 696 would involve a (straightforward) syntax addition to PEP 695. PEP 695 itself is adding automatic variance. One could imagine future kinds of TypeVarLikes (things like ParamSpec). Or maaaybe even a future where we model mutability explicitly, which would have implications for variance.

  1. In line with past SC guidance, we acknowledge challenges when syntax
    desires do not align between typing and Python itself.
    […] shifts us further in the direction of typing being its own mini-language

Changing function and class declarations for a typing specific feature doesn’t assuage this fear.

  1. We did not like the visual and cognitive consequence of multiple -> tokens in a def

The odds of someone proposing a syntax change that people aren’t concerned about the visual and cognitive consequences of are lower than the odds of Python no longer being dynamically typed :wink:

While I think the syntax is better than the status quo and blends quite nicely into existing typed Python, I empathise with worries about a relatively implicit way of defining symbols or more soft keywords and the overloading of type.

And of course, the Steering Council may have additional concerns that are more specific to PEP 695 (I can think of a few that might come up).

Future of typing syntax

Given the above, my best guess is that the SC will:

  • Ask for automatic variance inference to be its own PEP and then accept that PEP (variance is confusing, this is a thing that helps, it’s low burden for type checkers to implement since we already have it as part of PEP 544)
  • Defer syntax changes until the dust has settled on autovariance and PEP 696 (defaults for type variables), or reject the syntax changes outright

The rest of this section is written assuming this outcome.

I think if PEP 695 is rejected for effectively a superset of the reasons PEP 677 was rejected, this would be somewhat frustrating, on both “sides”. On the typing “side”, because a lot of effort and ingenuity is spent on PEPs like this and because these changes have the potential to help users (of typing). On the Steering Council “side” because it sucks to say no to a lot of well thought through work that benefits some users but may have global costs — especially if saying no for similar reasons to previous proposals.

I’d love more guidance from the Steering Council on syntax changes, particularly syntax changes that are aimed at ergonomic benefit. The rejection reasons for PEP 677 are quite broad:

  • re point 1, every syntax change will be a syntax change
  • re point 2, as mentioned, it should be a point of pride and strength that we can typically find kludgy ways to express things without new syntax. I found this point confusing at the time of PEP 677 too, especially since IMO PEP 677 did a good job anticipating future extensions: see the python-dev thread “Re: PEP 677 (Callable Type Syntax): Rejection notice.” on Mailman 3, and the reply
  • re point 3, this seems to rule out most syntax changes that are about typing ergonomics.
  • re point 4, every syntax change will have visual and cognitive consequences. This is subjective and it’s unclear a priori what the Steering Council’s bar here is.

I understand that it’s hard to give guidance here and it’s important to preserve optionality in both ways (to reject things for subjective reasons or accept things that in some ways contradict previous rejections).

To make things more concrete, here are a few random ideas that could be in the guidance action space:

  • Syntax changes targeting typing ergonomics should only touch parts of the type system that have not been changed in X years (addresses point 2 of PEP 677 rejection)
  • Syntax changes should look more like PEP 637 rather than being ergonomics-focussed (PEP 637 was also rejected, but the SC did say the typing argument was the strongest argument for that change) (addresses point 2 and point 3 of PEP 677 rejection)
  • Syntax changes should not make use of PEG features (mentioned in PEP 677 rejection) (addresses point 4 of PEP 677 rejection)
  • Syntax changes that likely affect <X% of Python files are unlikely to be considered (addresses point 3 of PEP 677 rejection)

Finally, and this is getting off topic, I think there’s often a nebulous desire expressed for typed Python to feel more cohesive with untyped Python (I think SC might even have said something on these lines, but I can’t find the source). I’d love opinions from everyone on what that means and recommendations on how to go about it (maybe in another thread), for instance:

  • On the typing side, this often results in a desire for ergonomic syntax, because ergonomic syntax is a way for something to feel native and cohesive.
  • For some users, cohesion means the ability to blur lines between runtime and static type checking. There are limits to what is even theoretically possible here, but for what it’s worth I think we’ve made good strides in recent years to make things more introspectable and future proof the runtime aspects of typing.
  • For some users, cohesion means powerful static type checking primitives that look more like writing Python than writing a declarative DSL.
  • For some users, cohesion could just mean better resources and documentation. Most non-typing features of Python have a decade (or two) headstart on typing features when it comes to building these resources.
  • For some developers, cohesion could mean building static analysis libraries that are easy to build tooling or custom static analysis on top of.
  • Finally, for some people, maybe this is just a polite way to say “this stuff looks different, get off my lawn” :wink: But don’t worry, we’ll win you over :slight_smile:

Just FYI, as an SC member, I won’t vote to accept something that tries to overload what the decorator syntax means in any magical way (i.e. if it isn’t just like any other decorator and thus just a thing you import for typing, I can’t support it).


I’m very glad to hear it, but I don’t understand the context. Are decorators — and I mean the symbol mydecorator in @mydecorator, or the function another() in @another(), or the expression in @lambda x: x (allowed since the decorator grammar was relaxed by PEP 614 in Python 3.9) — not always evaluated before their decorated functions? Why would that be magical if used for marking generic functions and classes without introducing a syntax change? I’m not being deliberately obtuse.

1 Like

Yes, the expression after the @ in decorator syntax is evaluated before the decorated function/class. That’s not typically what we mean by “evaluating a decorator,” though; typically we mean “calling the decorator – the result of evaluating the decorator expression – with the decorated function/class.” The purpose and typical use of decorator syntax is not for evaluating the decorator expression itself to have side effects.

So how would you bind names in a decorator to be used solely within the definition of the decorated function/class? The only possibility I can see is that evaluating the decorator expression has the globally visible side effect of binding names in the global scope, which the decorator then (again as a global side effect) deletes from the global scope when the decorator actually runs on the decorated thing. I think this is technically possible today, but I certainly wouldn’t advocate for it, and I think it would be reasonable to describe it as “overloading the decorator syntax in a magical way.”


Here’s a fuller example showing the full order in which the relevant pieces execute (the lettered print calls mark that order):

print("A")

def generic(t):
    print("C")
    def _func(f):
        print("E")
        return f
    return _func

@generic(print("B"))
def foo(x: print("D")):
    return x


This prints A → B → C → D → E. In particular, the annotation for x is printed before the decorator runs at all. If you instead had @generic(TypeVar("T")), the decorator would not have an opportunity to execute before x’s annotation is run. So even with stack-frame tricks, manipulating globals, or the function namespace, you can’t define T at the right time. If you delayed annotation evaluation you might be able to, but a feature that requires from __future__ import annotations / co_annotations-like behaviour is a warning sign: runtime type annotations are easiest to work with when nothing special happens to them, and I wouldn’t expect a decorator to require a future import to be usable.
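The delayed-evaluation point can be seen directly: under PEP 563 the annotation expression is never evaluated at definition time. A small demonstration (simulating `from __future__ import annotations` via the compiler flag, so it can sit mid-file):

```python
import __future__

src = (
    "def foo(x: print('never runs')):\n"
    "    return x\n"
)

# Compiling with the PEP 563 flag postpones annotation evaluation:
# the annotation is stored as a string and the print never executes.
code = compile(src, "<demo>", "exec", flags=__future__.annotations.compiler_flag)
ns = {}
exec(code, ns)  # nothing is printed here

print(ns["foo"].__annotations__["x"])  # stored source text, e.g. "print('never runs')"
```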

The next issue is the desired scope/validity of T. The goal with the new syntax is for

def foo[T](x: T):
    def _g(x: T) -> T: ...
    return _g

to be valid, but

def foo[T](x: T): ...

y: T  # error: T does not exist outside foo


to be an error. So T must exist at the time the function’s arguments are annotated (too early for the decorator), exist inside the function’s scope, and not exist outside the function.


See above, I had the same idea to hack it with globals and this can indeed be done.

But the “how” isn’t the point (yet). The decorator idea seems to have been rejected under the premise that it cannot be done. Seeing how it can be done, it should be given more consideration, including some effort to find a nice syntax both within what is currently possible and by introducing new syntax (which this PEP is doing anyway) that isn’t special-casing but might benefit the whole language.

Afterwards, there is still time to reject the idea on the basis that there truly isn’t a workable implementation. For the scoping, for example, one should at least consider PEP 649 which appears to solve that issue easily, if accepted.

PS: On the other point,

I do not agree with that statement universally. I think everyone is aware that a decorator-function-call @mydecorator() does something before it takes the function. Whether or not introducing the typevars, which live in the magic place of type annotations, is a side-effect is debatable. Particularly when that side-effect is only for the duration of the function definition, which again is doable as in my example above.
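For readers following along, here is a minimal sketch of the globals hack under discussion. The `generic` helper is hypothetical (not part of any PEP); it relies on the decorator expression being evaluated before the annotations, exactly the ordering shown earlier in the thread:

```python
import sys
from typing import TypeVar

def generic(*names):
    """Hypothetical sketch of the globals hack discussed above.

    Evaluating the decorator expression binds the named TypeVars in the
    caller's globals (so they are visible while the annotations below are
    evaluated); applying the decorator then cleans them up again.
    """
    caller_globals = sys._getframe(1).f_globals
    tvars = {name: TypeVar(name) for name in names}
    caller_globals.update(tvars)  # visible during annotation evaluation

    def _decorator(func):
        for name in tvars:
            caller_globals.pop(name, None)  # T only "exists" during the def
        return func

    return _decorator

@generic("T")
def first(items: list[T]) -> T:  # T resolves via the temporary global
    return items[0]

print(first([1, 2, 3]))  # 1
print("T" in globals())  # False: cleaned up after the definition
```

Whether this counts as “overloading the decorator syntax in a magical way” is exactly the judgement call being debated above.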

1 Like

Is it possible to use type comments (# type: ...) for PEP 695 instead? These have the advantage of being non-intrusive (minimal or no effect on runtime code or readability, and no changes to standard Python syntax) and are recognised by the compiler already.

The following is a syntax error when parsed with ast.parse(<source code>, type_comments=True), because type: ... is not allowed to be on a single line on its own:

# type: int

This leaves an opportunity to use the line above a function or class declaration (or either on top or below @decorators) as a place to bind type variables to the definition underneath.
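For background, the compiler already attaches type comments in recognised positions to the AST when asked, via the standard `ast` module:

```python
import ast

# A function-signature type comment sits on the first line of the body.
src = (
    "def f(a, b):\n"
    "    # type: (int, str) -> bool\n"
    "    return True\n"
)

# With type_comments=True the tokenizer recognises the comment and the
# parser attaches it to the FunctionDef node.
tree = ast.parse(src, type_comments=True)
func = tree.body[0]
print(func.type_comment)  # (int, str) -> bool
```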

Proposed syntax (taken from various examples in PEP 695):

# Upper bound
# type: [T: str]
class ClassA:
    def method1(self) -> T: ...

# Type variable in function signature
# type: [T]
def func(a: T, b: T) -> T: ...

# Constrained type specification (string quotes omitted for forward reference)
# type: [T: (ForwardReference, bytes)]
class ClassB: ...

# Parameterisation
# type: [T]
class ClassA(BaseClass[T], param=Foo[T]): ...

This has some of the same drawbacks as the rejected idea PEP 695: Prefix Clause, especially regarding scope clarity. However, some of the feedback on this thread about the information-denseness of the current proposal suggests that there is a fine balance between clarity of scope and a flood of square brackets appearing in a class or function declaration, and there’s not much consensus on where to draw the line.

1 Like

The comment would not bind the name T, so this would not work without changes to the compiler. And if we’re changing the compiler, we shouldn’t do it to introduce syntax that looks like a comment.


Hi, I am a long-time lurker and created an account to express my support for this proposal. I know that one user’s perspective is unlikely to make a difference in such a long and complex discussion, but I do believe that this PEP makes generics and type variables much easier to work with than they are today.

Most important to me is that, under PEP 695, parameters are declared exactly when they are needed and their usage is localized to there and there alone. This is addressed in the PEP as the second paragraph of Points of Confusion:

The scoping rules for type variables are difficult to understand. Type variables are typically allocated within the global scope, but their semantic meaning is valid only when used within the context of a generic class, function, or type alias. A single runtime instance of a type variable may be reused in multiple generic contexts, and it has a different semantic meaning in each of these contexts. This PEP proposes to eliminate this source of confusion by declaring type parameters at a natural place within a class, function, or type alias declaration statement.

I find this to be most compelling.

  • It’s confusing to declare a type variable in the global scope and have its meaning potentially change wherever it’s used.
  • It’s confusing that, despite this, the “bounds” of a type variable sometimes have to be defined ahead of time, but then also only narrowed by the type checker when the type variable is used.
  • It’s confusing that T = TypeVar("T") can refer to a class-scoped type parameter in one place and a function-scoped one in another, or that it can be imported into another module and take on semantics that are further detached from its declaration.

PEP 695 makes this all much clearer. As a user of typed Python, this example from the PEP is anecdotally very common:

# Here is an example of a generic function today.
from typing import TypeVar

_T = TypeVar("_T")

def func(a: _T, b: _T) -> _T: ...

# And the new syntax.
def func[T](a: T, b: T) -> T: ...

The new syntax more closely mirrors that found in other languages (described in the Appendix). It is more obvious with the new syntax that T is meaningful in the function, but not outside.

The post that I’m replying to says: “The new syntax for generic classes and functions overall feels natural to me as a user of typed Python.” I wholeheartedly agree. Having talked to developers IRL who are more familiar with other languages, this new syntax makes more intuitive sense in that way, too.

Finally, I believe the case for new syntax is stronger here than it was in PEP 677; there, the proposed syntax was conceptually equivalent to the existing syntax. PEP 695, by contrast, clarifies longstanding conceptual confusion about type variables and generics. It’s not “just” syntax—this is a meaningful improvement for users of typed Python.

I hope the SC accepts it.


(Apologies for the double post—my previous one was already long, and this is a reply to a post much further up.)

This was my experience (and that of coworkers who were unfamiliar with this) as well. Having to inherit from Generic and declare TypeVar is surprising and kludgey. This, in contrast, feels like the “one—and preferably only one” way to do this, or the way that things “should” work. This is especially true when considering the survey of other languages in the Appendix.

1 Like

If the main objection is that the proposed syntax is too “magical” then I think this might be a good compromise.

I do think it’s probably okay to hide the instantiation of the TypeVars behind some magic, though.

A proposal in a similar vein:

class A(Generic[T]) with (T: Any):
    def __init__(self): ...

def reduce(
    function: Callable[[T, S], T], sequence: Iterable[S], initial: T
) -> T with (T: Any, S: Any): ...

I think it might be good to make specifying the bound mandatory, and if there is no bound then you can use Any or object (using object actually seems a bit more correct to me).

Also, instead of re-purposing “with”, it could be done with “where” as a soft keyword:

class ClassA(Generic[T]) where (T: str):
    def method1(self) -> T: ...

On behalf of the Steering Council, we’d like to report that we are happy to accept PEP 695. Thanks to everyone for the reinvigorated discussion in the last few weeks – I look forward to this step forward for typing in Python.



Yes and no. As you pointed out, “With PEP 677, you could have argued that it’s intuitive and mirrors existing Python syntax, that it’s a smaller change”, while “PEP 695 changing how functions and classes can be declared is a big deal”. But the key detail is that PEP 677 proposed what I would argue was a bit of syntactic nicety for something that functioned fine without the PEP, whereas PEP 695 has a much broader impact in terms of improving your typed code. So in weighing impact against cost, I think a better comparison is decorators: not necessary, but a more demonstrable improvement to how your code reads, to the general semantics, and to minimizing errors than simplifying how you write Callable.

Or at least that’s how I approached the two PEPs when thinking about whether to accept/reject them.


This never got addressed.

I expect this is something that the implementation work will reveal.

Slices are not expected to ever be used in function or class definitions, so it seems like there is sufficient information to reason about what is what.

1 Like

Doesn’t this [T: str] syntax conflict with how slices are parsed?

The grammar changes introduced in this PEP are unambiguous. The : token is never interpreted as a slice when used in the context of a type parameter definition, just as : is never interpreted as a slice when it’s used today for parameter type annotations.

1 Like

But it would break my super important and not at all contrived class, as far as I understand.

class BoundedMeta(type):
    def __instancecheck__(self, obj):
        return isinstance(obj, float) and self.LO <= obj <= self.UP

class Bounded(metaclass=BoundedMeta):
    def __class_getitem__(cls, rng):
        lo, up = rng.start, rng.stop
        return type(f'Bounded[{lo}:{up}]', (cls,), {'LO': lo, 'UP': up})

print(isinstance(3.1, Bounded[2.3:5.5]))  # True

But it would break my super important and not at all contrived class, as far as I understand.

This PEP has no impact on how : is interpreted in the grammar today. That’s true for your code sample above or any other code that works today. So no fear, your super-important not-at-all-contrived class would continue to work just fine. :slight_smile:

The grammar changes this PEP introduces are for type parameter definitions only. For example, if you were to change your class to be generic with a type parameter named T, you would use the following syntax.

class Bounded[T: str](metaclass=BoundedMeta): ...

I might have missed it, but I believe this wasn’t addressed either. It’s likely just an implementation detail. However, the AST change would make it easier for linters (written in Python) to highlight the name specifically without needing to parse the expression themselves (using regex).

Congrats to the authors! One more thing to look forward to in 3.12.