PEP 695: Type Parameter Syntax

I just want to add to this discussion how important improving the ability to declare static types is for the future of Python. I’ve seen a lot of people express the fear that this will ruin Python’s simplicity.

But you must take into consideration the simplicity that comes from having libraries with good type annotations. Even if the average user will not declare complex types on a daily basis, the ability for big libraries to do so will make the user’s experience easier when the IDE warns them of bugs (pymongo could, for example, type-check Mongo queries against the collection type). An easier programming experience on the user’s side is part of what makes a language “simple” and “easy to use”.

I’m not saying we shouldn’t try to make these syntaxes as simple and pretty as possible, but as long as we’re careful it will be cost-effective in the long run.


I don’t think that the distance is an issue. The distance factor is exactly the same for functions and classes, but that is no problem.

If there is a problem with current syntax, I don’t think it is the code difference that is to blame.

The decorator idea is mentioned in “Rejected ideas” in the PEP:

We likewise considered prefix forms that looked like decorators (e.g., @using(S, T)). This idea was rejected because such forms would be confused with regular decorators, and they would not compose well with existing decorators. Furthermore, decorators are logically executed after the statement they are decorating, so it would be confusing for them to introduce symbols (type parameters) that are visible within the “decorated” statement, which is logically executed before the decorator itself.

You can use a more descriptive name but the real problem is that it does not make sense conceptually to say that the type variable itself is contravariant or covariant. Suppose we have this:

T = TypeVar('T', covariant=True)
S = TypeVar('S', contravariant=True)

class ClassA(Generic[T, S]): ...

class ClassB(Generic[T, S]): ...

It does not make sense here to say that either T or S is innately covariant or contravariant as a type variable. Rather the question is whether the parametrised type ClassA[T, S] is covariant or contravariant in its type parameters T or S. It could be that ClassA is covariant in T and contravariant in S but the reverse or something else might hold for ClassB. So the property of covariance is actually a relationship between the parametrised type and its type parameters. Conceptually it would make more sense to place this at the class statement like:

class ClassA(Generic[Covariant[T], Contravariant[S]]):

This PEP sort of leans towards that except with the suggestion that the covariance or contravariance would be inferred from the rest of the class body. So I think the suggestion is that you would do:

T = TypeVar('T', infer_variance=True)
S = TypeVar('S', infer_variance=True)

class ClassA[T, S]: ...

class ClassB[T, S]: ...

Then a checker would figure out the variances for ClassA[T, S] and ClassB[T, S] by looking at the attributes and methods of the classes.
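To make the inference idea concrete, here is a sketch (with hypothetical `Producer`/`Consumer` names) of the kind of class bodies a checker would inspect, written with today’s explicit-variance spelling for comparison. Under `infer_variance`, the checker would deduce the same variances from the usage positions alone:

```python
from typing import Generic, TypeVar

# With today's spelling the variance is declared on the TypeVar itself;
# under infer_variance a checker would deduce it from usage positions.
T_co = TypeVar("T_co", covariant=True)              # used only in return positions
S_contra = TypeVar("S_contra", contravariant=True)  # used only in parameter positions

class Producer(Generic[T_co]):
    # T_co appears only as a return type, so the class is covariant in it.
    def get(self) -> T_co: ...

class Consumer(Generic[S_contra]):
    # S_contra appears only as a parameter type, so the class is
    # contravariant in it.
    def put(self, item: S_contra) -> None: ...
```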

Ideally you would just do away with the TypeVar declarations altogether. The syntax in this PEP should mean that in the main places where a type variable is used there is a syntax and a scope to introduce the type variable without needing a separate TypeVar statement. There are basically three places where you might use a type variable:

  1. Function signature
  2. Class statement
  3. Type alias

The syntax in this PEP makes it possible to introduce the type variable without prior declaration in the first two of these (def func[T](...) and class A[T](...)), but not for type aliases, because the type statement itself doesn’t provide a way to do it. Doing it for a type alias would, I guess, look something like:

type[T] ListOrSet[T] = list[T] | set[T]

Here the syntax type[T] can introduce the type variable.

Thanks. I was misled by the fact that isinstance(1, float) is False. But I forgot that typing treats ints as substitutable for floats (for reasons that I will understand if I go and look them up, but which never seem obvious to me…)

OK, but I’m not sure how variance is relevant to my point. Your message where you introduced variance into the discussion:

wasn’t obviously a response to anything in particular (maybe that’s Discourse’s threading not being clear enough?) but in the context of what I thought we were discussing, it didn’t seem to answer my point that I think

def with_request[R, **P](f: Callable[Concatenate[Request, P], R]) -> Callable[P, R]:

is harder to understand and harder to look up if you’re trying to understand it than

R = TypeVar("R")
P = ParamSpec("P")

def with_request(f: Callable[Concatenate[Request, P], R]) -> Callable[P, R]:

Can you explain how variance is relevant to that question? Or if it isn’t, then respond to that point, please? (I already covered your previous response regarding duplication of assignments).

Oh, I missed that it is in the PEP:

type ListOrSet[T] = list[T] | set[T]

This does introduce the type variable T so that the TypeVar is not needed.

Because of the numeric tower:
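A quick runnable illustration of the mismatch mentioned above: the runtime check disagrees with the static rule, because PEP 484 special-cases int as acceptable wherever float is expected.

```python
def half(x: float) -> float:
    return x / 2

print(isinstance(1, float))  # False: at runtime an int is not a float
print(half(1))               # 0.5: type checkers still accept the int argument
```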

Neither of these issues is insurmountable:

from typing import TypeVar

def generic(*typespec):
    unset = object()
    glbls = globals()
    # Remember the previous binding of each name (if any) so it can be restored.
    store = {t.__name__: glbls.get(t.__name__, unset) for t in typespec}
    for t in typespec:
        glbls[t.__name__] = t
    def decorator(function):
        for name, value in store.items():
            if value is unset:
                del glbls[name]      # the name was newly injected: remove it
            else:
                glbls[name] = value  # restore the previous binding
        return function
    return decorator

@generic(TypeVar('T'))
def myfunction(x: T): ...

With the __coannotations__ magic, that would probably not even have to touch global scope.

Of course we haven’t really won anything by still needing to write out the TypeVar('T') here, so this is really only addressing the points made above.

I’m going to echo what some others are saying about this and other typing concepts being confusing. I consider myself an expert Python programmer, and I’m still regularly confused by typing despite spending many, many days studying and adding it in my libraries. This new syntax is confusing, and it takes so many pages to describe and justify that I just can’t follow it, despite really trying.

For regular users of Python, typing is optional, so if there are confusing constructs or difficult to type areas, they can mostly be ignored and go unused. But for better or worse, typing is not optional for maintainers, because users constantly ask for type annotations.

It also makes it that much harder for a regular user to contribute to libraries. They’ll have to read or add all these confusing constructs that they likely aren’t using in their own code, all to get CI to pass. This is the state of things now, let alone after also adding this new syntax.

The more complex typing becomes, the more likely I am to either get something wrong trying to use it, or to just throw up my hands and ignore it, benefiting no one. I would much rather see more effort go towards “how can we make typing fit easily with and accept real Python code” rather than adding more syntax.


I can’t find any mention in either the PEP or this topic of “slice”. Doesn’t this [T: str] syntax conflict with how slices are parsed, or at least how they’re read by developers?


My response to that was the prior variance discussion here.

For discoverability, my work pattern is mainly in an IDE, where I rely on hovering over variables/objects to see more about them. For VS Code, I know that if I hover over P/T it’ll show a hint that it’s a type variable/paramspec and let me click through to see where it’s defined. I’d guess other IDEs like PyCharm have similar support, while some editors may lack it.

Otherwise I’m unsure how discoverability works for syntax in general. How do people discover the meaning of the := expression? That feels comparable to me to introducing this syntax. I learn it from reading Python release notes plus teammates teaching me, but I’m aware that both are things a beginner may lack.

If that’s not enough, I can add that yes, I agree the new syntax is less explicit than ParamSpec/TypeVar for discoverability, and that the first few times it’s encountered it will be less easy to look up than the existing syntax.

There is a major difference in typing usage between Python and other languages: Python type hints can be used heavily at runtime. Pydantic, cattrs, and typeguard are all libraries that do heavy runtime type introspection. Having types live in a separate namespace/world could work for static type checkers, but it would not cooperate well with runtime type usage.
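A minimal illustration of the runtime side this refers to: libraries like pydantic introspect annotations at runtime, so annotations must resolve to real objects rather than live in a static-only world.

```python
from typing import get_type_hints

def scale(x: float, factor: int) -> float:
    return x * factor

# Annotations are ordinary runtime objects; introspection libraries
# build validators/converters from exactly this kind of mapping.
hints = get_type_hints(scale)
print(hints)  # maps 'x', 'factor', and 'return' to the actual type objects
```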

OK, I responded to that here. And others have made the same point. I don’t know if there’s anything further to discuss here - it looks like you simply disagree with me, and at this point I don’t think there’s much more I can add. I hope the PEP gets modified to take into account the feedback from the various experienced Python users with a casual level of typing knowledge who have commented here, but I don’t know how likely that is. It’s frankly rather too hard to engage with the discussions for me to do much else.


From my view, I feel like I agree with you that there is a learning trade-off. I wasn’t trying to dispute the learning impact on first encountering it. My view is that the benefit for existing files that make heavy use of generics is worth that trade-off, and that the trade-off is comparable to most other syntactic changes.


I think people who don’t write a lot of typing code may not realize how much of a pain it is to have to write type variables everywhere, check that they’re right, etc.

But code is also about readers, and this brings up another point (sorry for the giant edit) that I think may have been missed in the discussion (although it’s mentioned in the PEP): the new syntax is significantly more logical than the previous syntax. Type variables have an intuitive scope no matter how you declare them. Even if you declare a type variable the old way:

T = TypeVar('T')

it doesn’t have any real meaning outside some generic object. And it can take significant effort for the reader to figure out what that object is. Consider:

class X(Generic[T]):
    def f(self, a: T) -> None: ...
    def g(self, b: U) -> None: ...

Here, T is scoped to the class X, and is meaningful anywhere inside it, but U is scoped to the function g, and is only meaningful inside it. While T’s scope is made obvious by inheriting from Generic[T], U’s scope is not obvious, and the reader has to carefully check every enclosing function and class.

Ideally, the declaration should be right beside the start of the scope in which it’s valid. And while there is a reasonable place to do that for classes (in the inheritance list), there is currently no such place for functions. And ideally, that point for functions would be before the signature because the signature depends on the type variables.

Also, ideally, whatever notation we choose for functions should be the same for classes just to reduce cognitive load.

This PEP satisfies all of these things:

  • it defines type variables at the scope in which they are valid,
  • it defines function-scoped type variables before the signature, and
  • it uses the same notation for generic classes, functions, and type variables.

I understand the desire for conservatism, and I think we should definitely explore other possibilities, but so far, I think this notation seems to be the most logical to me from a typing perspective.


Maybe the PEP could benefit from having a larger example comparing the existing and proposed syntax. Many of the complaints above would apply equally to both but I agree that the new proposed syntax is a significant improvement.


True, but this is also a side-effect of our habit of reusing TypeVars throughout a module instead of creating them as necessary per function/scope when details like variance come into play.

Could a where clause be possible? That would push what would have been in the brackets to a separate line.


I really like the separate line idea if it is possible. I agree with Lukasz’s point about “the rather unprecedented density of information that would end up in a function and/or class signature”.

If there is a way to do it on a separate line, I think that ideally we would use the exact same notation for classes, functions, and type variables.


Personally, I tend to agree somewhat with both sides here: the current TypeVar and ParamSpec etc. declarations are a practical issue, but eliminating those declarations entirely is also problematic. I think that’s why the decorator syntax seems appropriate.

For what it is worth, I actually feel the @generic(TypeVar("T")) idea does win something, since it eliminates those global variables. It’s a bit verbose (even more characters than the current way), but IMO having more words is not necessarily an issue; you can pretty easily skip those decorator lines mentally, so they are less likely to become noise. If using a decorator is still considered problematic (not unreasonable, since most decorators do have meaning and you can’t just skip all @ lines), perhaps a syntax to embed it into the declaration line would be possible? Something like this:

def myfunction(x: X, y: Y) -> tuple[X, Y] with TypeVar("X"), TypeVar("Y"):

class Foo(list[T]) with TypeVar("T"):

The exact syntax can be discussed, the main idea I want to raise is embedding type variable declarations into the construct declaration may be a viable approach.


I think it’s problematic because of the reason Eric gave in PEP 695: Type Parameter Syntax - #19 by erictraut.

How about on the other side of the declaration? And do you really need to import TypeVar for this?

with type X: int, Y, *T, **P
def my_function(x: X, ...): ...

Still, I think the PEP’s notation has one other elegance to it: When you declare a class like this:

class C[T]: ...

you use it in code like this:

c = C[T]()

So the notation mirrors exactly how you use it. Maybe to reduce density, linters or type-checkers could flag high complexity and beg authors to break things up? Because it is unfortunate for very simple definitions to need two lines.
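For what it’s worth, the use-site subscription already works at runtime with the current spelling, so the declaration/use mirroring holds there too (a minimal sketch):

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class C(Generic[T]):
    pass

# Declaration `class C(Generic[T])` mirrors use: subscribe, then call.
# Subscription goes through __class_getitem__ and still produces a
# plain C instance.
c = C[int]()
print(type(c) is C)  # True
```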


But as I pointed out above, I don’t think that has really been considered in full.

That just isn’t true: the decorator expression is evaluated before the function. The function is then evaluated and passed to the decorator.
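This is easy to verify: the decorator expression is evaluated first, then the def statement (including default values), and only then is the decorator called with the new function. A small sketch that records the order:

```python
order = []

def deco_factory():
    order.append("decorator expression")
    def deco(fn):
        order.append("decorator call")
        return fn
    return deco

@deco_factory()
def f(x=order.append("function definition")):  # default evaluated at def time
    pass

print(order)
# ['decorator expression', 'function definition', 'decorator call']
```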

See my example above. You can probably make this tightly scoped within the new __coannotations__, too.

The decorators look almost exactly like C++ template<> syntax to me.