PEP 695: Type Parameter Syntax

Thanks. I was misled by the fact that isinstance(1, float) is False. But I forgot that typing treats ints as substitutable for floats (for reasons that I will understand if I go and look them up, but which never seem obvious to me…)

OK, but I’m not sure how variance is relevant to my point. Your message where you introduced variance into the discussion:

wasn’t obviously a response to anything in particular (maybe that’s Discourse’s threading not being clear enough?) but in the context of what I thought we were discussing, it didn’t seem to answer my point that I think

def with_request[R, **P](f: Callable[Concatenate[Request, P], R]) -> Callable[P, R]:
    ...

is harder to understand and harder to look up if you’re trying to understand it than

R = TypeVar("R")
P = ParamSpec("P")

def with_request(f: Callable[Concatenate[Request, P], R]) -> Callable[P, R]:
    ...

Can you explain how variance is relevant to that question? Or if it isn’t, then respond to that point, please? (I already covered your previous response regarding duplication of assignments).

Oh, I missed that it is in the PEP:

type ListOrSet[T] = list[T] | set[T]

This does introduce the type variable T so that the TypeVar is not needed.
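
For comparison, the pre-PEP-695 spelling of the same alias needs the TypeVar declared separately first (a sketch using today's typing module):

from typing import TypeAlias, TypeVar

T = TypeVar("T")
ListOrSet: TypeAlias = list[T] | set[T]   # old style: T has to exist already

# versus the PEP's form, where the [T] on the alias itself declares the type variable:
# type ListOrSet[T] = list[T] | set[T]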

Because of the numeric tower: https://peps.python.org/pep-0484/#the-numeric-tower.
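
Concretely, the special case is this (illustrative):

def double(x: float) -> float:
    return x * 2

double(1)                     # accepted by type checkers: int is substitutable for float
print(isinstance(1, float))   # False at runtime, which is what makes it surprising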

Neither of these issues is insurmountable:

from typing import TypeVar

def generic(*typespec):
    # Temporarily bind each type variable by name in the global namespace so
    # that the decorated function's annotations can refer to it. Note that
    # globals() here is the module defining generic(), so this works as
    # written only when used in that same module.
    unset = object()
    glbls = globals()
    # Remember what (if anything) each name was bound to before clobbering it.
    store = {t.__name__: glbls.get(t.__name__, unset) for t in typespec}
    for t in typespec:
        glbls[t.__name__] = t
    def decorator(function):
        # By the time the decorator is called, the function and its
        # annotations have already been evaluated, so the temporary
        # global bindings can be removed or restored.
        for name, value in store.items():
            if value is unset:
                del glbls[name]
            else:
                glbls[name] = value
        return function
    return decorator


@generic(TypeVar('T'))
def myfunction(x: T):
    ...
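
If I follow the trick correctly, the net effect is that the annotation keeps a reference to the TypeVar object while the temporary global binding is removed again (a quick check, assuming T wasn't already defined in the module):

print(myfunction.__annotations__)   # {'x': ~T}
print('T' in globals())             # False: the temporary binding was cleaned up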

With the __coannotations__ magic, that would probably not even have to touch global scope.

Of course we haven’t really won anything by still needing to write out the TypeVar('T') here, so this is really only addressing the points made above.

I’m going to echo what some others are saying about this and other typing concepts being confusing. I consider myself an expert Python programmer, and I’m still regularly confused by typing despite spending many, many days studying and adding it in my libraries. This new syntax is confusing, and it takes so many pages to describe and justify that I just can’t follow it, despite really trying.

For regular users of Python, typing is optional, so if there are confusing constructs or difficult to type areas, they can mostly be ignored and go unused. But for better or worse, typing is not optional for maintainers, because users constantly ask for type annotations.

It also makes it that much harder for a regular user to contribute to libraries. They’ll have to be reading or adding all these confusing constructs that they likely aren’t using in their own code, all to get CI to pass. This is the state of things now, let alone also adding this new syntax.

The more complex typing becomes, the more likely I am to either get something wrong trying to use it, or to just throw up my hands and ignore it, benefiting no one. I would much rather see more effort go towards “how can we make typing fit easily with and accept real Python code” rather than adding more syntax.

4 Likes

I can’t find any mention in either the PEP or this topic of “slice”. Doesn’t this [T: str] syntax conflict with how slices are parsed, or at least how they’re read by developers?

1 Like

My response to that was the prior variance discussion here.

For discoverability, my work pattern is mainly in an IDE, where I rely on hovering over variables/objects to see more about them. In VS Code I know that if I hover over P/T it’ll show a hint that it’s a type variable/ParamSpec and let me click through to where it’s defined. I’d guess other IDEs like PyCharm have similar support, while some editors may lack it.

Otherwise I’m unsure about discoverability for syntax in general. How do people discover the meaning of the := expression? That feels comparable to me to introducing this syntax. I find out from reading the Python release notes plus teammates teaching me, but I’m aware that both are things a beginner may lack.

I can add, if that’s not enough, that yes, I agree the new syntax is less explicit than ParamSpec/TypeVar for discoverability, and that the first few times it’s encountered it will be harder to look up than the existing syntax.

There is a major difference in how typing is used in Python vs other languages: Python type hints can be used heavily at runtime. Pydantic, cattrs, and typeguard are all libraries that do heavy runtime type introspection. Having types live in a separate namespace/world could work for static type checkers, but it would not cooperate well with runtime type usage.
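
A minimal sketch of the kind of runtime introspection those libraries depend on (the function here is purely illustrative):

from typing import get_type_hints

def greet(name: str, excited: bool = False) -> str:
    return f"Hello, {name}{'!' if excited else '.'}"

# Runtime code can recover the annotations and act on them, which is why
# type hints cannot simply live in a static-checker-only world.
print(get_type_hints(greet))
# {'name': <class 'str'>, 'excited': <class 'bool'>, 'return': <class 'str'>}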

OK, I responded to that here. And others have made the same point. I don’t know if there’s anything further to discuss here - it looks like you simply disagree with me, and at this point I don’t think there’s much more I can add. I hope the PEP gets modified to take into account the feedback from the various experienced Python users with a casual level of typing knowledge who have commented here, but I don’t know how likely that is. It’s frankly rather too hard to engage with the discussions for me to do much else.

1 Like

From my view, I agree with you that there is a learning trade-off. I wasn’t trying to dispute the learning impact of encountering the syntax for the first time. My view is that the benefit from usage in existing files that often use generics is worth that trade-off, and that the trade-off is comparable to most other syntactic changes.

1 Like

I think people who don’t write a lot of typing code may not realize how much of a pain it is to have to write type variables everywhere, check that they’re right, etc.

But code is also about readers, and that brings up another point (sorry for the giant edit) which I think may have been missed in the discussion (although it’s mentioned in the PEP): the new syntax is significantly more logical than the previous syntax. Type variables have an intuitive scope no matter how you declare them. Even if you declare a type variable the old way:

T = TypeVar('T')

it doesn’t have any real meaning outside some generic object. And it can take significant effort for the reader to figure out what that object is. Consider:

class X(Generic[T]):
  def f(self, a: T) -> None: ...
  def g(self, b: U) -> None: ...

Here, T is scoped to the class X, and is meaningful anywhere inside it, but U is scoped to the function g, and is only meaningful inside it. While T’s scope is made obvious by inheriting from Generic[T], U’s scope is not obvious, and the reader has to carefully check every enclosing function and class.

Ideally, the declaration should be right beside the start of the scope in which it’s valid. And while there is a reasonable place to do that for classes (in the inheritance list), there is currently no such place for functions. And ideally, that point for functions would be before the signature because the signature depends on the type variables.

Also, ideally, whatever notation we choose for functions should be the same for classes just to reduce cognitive load.

This PEP satisfies all of these things:

  • it defines type variables at the scope in which they are valid,
  • it defines function-scoped type variables before the signature, and
  • it uses the same notation for generic classes, functions, and type variables.
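
For comparison, here is roughly how the earlier X example would read under the proposed syntax (a sketch, assuming U really is only needed inside g); the scope of each type variable is visible exactly where it is declared:

class X[T]:
  def f(self, a: T) -> None: ...
  def g[U](self, b: U) -> None: ...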

I understand the desire for conservatism, and I think we should definitely explore other possibilities, but so far this notation seems the most logical to me from a typing perspective.

6 Likes

Maybe the PEP could benefit from having a larger example comparing the existing and proposed syntax. Many of the complaints above would apply equally to both but I agree that the new proposed syntax is a significant improvement.

2 Likes

True, but this is also a side-effect of our habit of reusing TypeVars throughout a module instead of creating them as necessary per function/scope when details like variance come into play.

Could a where clause be possible? That would push what would have been in the brackets to a separate line.

1 Like

I really like the separate line idea if it is possible. I agree with Lukasz’s point about “the rather unprecedented density of information that would end up in a function and/or class signature”.

If there is a way to do it on a separate line, I think that ideally we would use the exact same notation for classes, functions, and type variables.

1 Like

Personally, I tend to agree somewhat with both sides here: the current TypeVar and ParamSpec etc. declarations are a practical issue, but eliminating those declarations entirely is also problematic. I think that’s why the decorator syntax seems appropriate.

For what it is worth, I actually feel the @generic(TypeVar("T")) idea does win something, since it does eliminate those global variables. It’s a bit verbose (even more characters than the current way), but IMO having more words is not necessarily an issue; you can pretty easily skip those decorator lines mentally, so they are less likely to become noise. If using a decorator is still considered problematic (not unreasonable, since most decorators do have meaning and you can’t just skip all @ lines), perhaps a syntax to embed it into the declaration line would be possible? Something like this:

def myfunction(x: X, y: Y) -> tuple[X, Y] with TypeVar("X"), TypeVar("Y"):
    ...

class Foo(list[T]) with TypeVar("T"):
    ...

The exact syntax can be discussed; the main idea I want to raise is that embedding type variable declarations into the construct’s declaration line may be a viable approach.

2 Likes

I think it’s problematic because of the reason Eric gave in PEP 695: Type Parameter Syntax - #19 by erictraut

How about on the other side of the declaration? And do you really need to import TypeVar for this?

with type X: int, Y, *T, **P
def my_function(x: X, ...): ...

Still, I think the PEP’s notation has one other elegance to it: When you declare a class like this:

class C[T]: ...

you use it in code like this:

c = C[T]()

So the notation mirrors exactly how you use it. Maybe to reduce density, linters or type-checkers could flag high complexity and beg authors to break things up? Because it is unfortunate for very simple definitions to need two lines.

2 Likes

But as I pointed out above, I don’t think that has really been considered in full.

That just isn’t true: the decorator expression is evaluated before the function. The function is then evaluated and passed to the decorator.

See my example above. You can probably make this tightly scoped within the new __coannotations__, too.
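
A quick way to see that evaluation order for yourself (the names here are purely illustrative):

def announce(tag):
    # Evaluated when the @ line is reached, before the def below it.
    print("decorator expression evaluated:", tag)
    def wrap(fn):
        # Called only after the function object has been created.
        print("decorator applied to:", fn.__name__)
        return fn
    return wrap

@announce("generic(...)")
def f(x):
    ...

# Prints, in order:
# decorator expression evaluated: generic(...)
# decorator applied to: f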

The decorators look almost exactly like C++ template<> syntax to me.

2 Likes

Thoughts on PEP 695, rejection of PEP 677 and the future of typing syntax

Since @thomas / the Steering Council asked the community for more thoughts on PEP 695, I thought I’d write something up.

First and foremost, I think PEP 695 is really well thought through, and I admire all the effort and cleverness that went into it. I think the scoping challenges were formidable and the proposed solution addresses them well. The new syntax for generic classes and functions overall feels natural to me as a user of typed Python.

Summary of PEP 695

I saw a couple of messages on Discord and here about the PEP being hard to follow, so here’s a quick summary of the PEP (still contains jargon, but is at least shorter). PEP 695 comprises three things:

  1. A new syntax for generic classes and functions
  2. A new syntax for type aliases
  3. A way for type checkers to automatically infer variance of type variables

How do these things relate / why is this one PEP instead of two or three?

The new syntax isn’t just syntactic sugar, but an effort to clarify the scoping of type variables.
Type variables are conceptually meaningless outside of a scope that binds them. However, today’s syntax for type variables obscures that fact. This is particularly confusing when the type variable has properties attached (like variance or bounds), because those aren’t properties of the type variable itself so much as properties of the class, function or alias that binds them.

The syntax changes thus target the places where you’d bind a type variable, mirroring precedent in other languages. The automatic inference of variance is primarily a usability improvement, but is part of this PEP because including it means we don’t need new syntax to explicitly express variance.
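
In code, the first two pieces look like this (a sketch based on the PEP’s examples; the third piece, variance inference, has no surface syntax of its own):

# 1. New syntax for generic functions and classes:
def first[T](items: list[T]) -> T:
    return items[0]

class Pair[K, V]:
    def __init__(self, key: K, value: V) -> None:
        self.key = key
        self.value = value

# 2. New syntax for type aliases:
type ListOrSet[T] = list[T] | set[T]

# 3. Variance is inferred from how T, K and V are used, so there is no
#    covariant=True / contravariant=True to spell out.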

Rejection of PEP 677

Okay, I’ve sufficiently buried the lede, so here it is: I would be very surprised if the Steering Council accepted PEP 695. The Steering Council is largely the same as when it discussed PEP 677 (syntax for callable types). Every reason for rejecting PEP 677 applies to this PEP as well, except often stronger.

PEP 677 rejection notice. I excerpt the points below, as per my interpretation, but click through to the notice if unfamiliar.

Let’s go through the reasons the SC provided for rejecting PEP 677:

  1. We feel we need to be cautious when introducing new syntax.
    […] A feature for use only in a fraction of type annotations, not something every Python user uses

This still clearly applies.

With PEP 677, you could have argued that it’s intuitive and mirrors existing Python syntax, that it’s a smaller change, that it’s easier for users to ignore. But PEP 695 changing how functions and classes can be declared is a big deal… It’s amongst the more personal changes you can make to a language. I expect Python users to have feelings about PEP 695 syntax.

And while I think the SC was wrong on PEP 677, they are absolutely right to be wary of syntax changes.
For example, the small syntax change in PEP 646 maybe flew under the radar, but now it’s causing a little bit of trouble for PEP 649.

PEP 695 estimates use in 14% of files with typing. I measured the prevalence of TypeVar on the corpus of GitHub - hauntsaninja/mypy_primer: Run mypy and pyright over millions of lines of code. This is 127 projects (often name brand) that use typing in CI and close to ten million lines of code. It depends on how you count, but Callable was used in about 22% of files with typing and TypeVar was used in about 9%. PEP 696 (defaults for type variables) could change how much generics are used, but that remains to be seen.

  2. While the current Callable[x, y] syntax is not loved, it does work.
    This PEP isn’t enabling authors to express anything they cannot already.
    […] We can imagine a future where the syntax would desire to be expanded upon.

Unlike PEP 677, which was just sugar, PEP 695 does make generics conceptually clearer. But it’s still true that the syntax changes in PEP 695 do not enable things that aren’t already expressible. (For what it’s worth, this should be a point of pride. It’s great that users on all Python versions can typically benefit immediately from new typing features)

While type variables are relatively mature, I think they are still more susceptible to future changes than PEP 677 (there was basically only one direction to take PEP 677, which is the extended syntax that PEP 677 discussed). For example, PEP 696 would involve a (straightforward) syntax addition to PEP 695. PEP 695 itself is adding automatic variance. One could imagine future kinds of TypeVarLikes (things like ParamSpec). Or maaaybe even a future where we model mutability explicitly, which would have implications for variance.

  3. In line with past SC guidance, we acknowledge challenges when syntax
    desires do not align between typing and Python itself.
    […] shifts us further in the direction of typing being its own mini-language

Changing function and class declarations for a typing specific feature doesn’t assuage this fear.

  4. We did not like the visual and cognitive consequence of multiple -> tokens in a def

The odds of someone proposing a syntax change that people aren’t concerned about the visual and cognitive consequences of is lower than the odds of Python no longer being dynamically typed :wink:

While I think the syntax is better than the status quo and blends quite nicely into existing typed Python, I empathise with worries about a relatively implicit way of defining symbols, about more soft keywords, and about the overloading of type.

And of course, the Steering Council may have additional concerns that are more specific to PEP 695 (I can think of a few that might come up).

Future of typing syntax

Given the above, my best guess is that the SC will:

  • Ask for automatic variance inference to be its own PEP and then accept that PEP (variance is confusing, this is a thing that helps, it’s low burden for type checkers to implement since we already have it as part of PEP 544)
  • Defer syntax changes until the dust has settled on autovariance and PEP 696 (defaults for type variables), or reject the syntax changes outright

The rest of this section is written assuming this outcome.

I think if PEP 695 is rejected for effectively a superset of the reasons PEP 677 was rejected, this would be somewhat frustrating, on both “sides”. On the typing “side”, because a lot of effort and ingenuity is spent on PEPs like this and because these changes have the potential to help users (of typing). On the Steering Council “side” because it sucks to say no to a lot of well thought through work that benefits some users but may have global costs — especially if saying no for similar reasons to previous proposals.

I’d love more guidance from the Steering Council on syntax changes, particularly syntax changes that are aimed at ergonomic benefit. The rejection reasons for PEP 677 are quite broad:

  • re point 1, every syntax change will be a syntax change
  • re point 2, as mentioned, it should be a point of pride and strength that we can typically find kludgy ways to express things without new syntax. I found this point confusing at the time of PEP 677 too, especially since IMO PEP 677 did a good job anticipating future extensions: see Mailman 3 [Python-Dev] Re: PEP 677 (Callable Type Syntax): Rejection notice. - Python-Dev - python.org and the reply
  • re point 3, seems to rule out most syntax changes that are about typing ergonomics.
  • re point 4, every syntax change will have visual and cognitive consequences. This is subjective and it’s unclear a priori what the Steering Council’s bar here is.

I understand that it’s hard to give guidance here and it’s important to preserve optionality in both ways (to reject things for subjective reasons or accept things that in some ways contradict previous rejections).

To make things more concrete, here are a few random ideas that could be in the guidance action space:

  • Syntax changes targeting typing ergonomics should only touch parts of the type system that have not been changed in X years (addresses point 2 of PEP 677 rejection)
  • Syntax changes should look more like PEP 637 rather than being ergonomic focussed (PEP 637 was also rejected, but the SC did say the typing argument was the strongest argument for that change) (addresses point 2 and point 3 of PEP 677 rejection)
  • Syntax changes should not make use of PEG features (mentioned in PEP 677 rejection) (addresses point 4 of PEP 677 rejection)
  • Syntax changes that likely affect <X% of Python files are unlikely to be considered (addresses point 3 of PEP 677 rejection)

Finally, and this is getting off topic, I think there’s often a nebulous desire expressed for typed Python to feel more cohesive with untyped Python (I think SC might even have said something on these lines, but I can’t find the source). I’d love opinions from everyone on what that means and recommendations on how to go about it (maybe in another thread), for instance:

  • On the typing side, this often results in a desire for ergonomic syntax, because ergonomic syntax is a way for something to feel native and cohesive.
  • For some users, cohesion means the ability to blur lines between runtime and static type checking. There are limits to what is even theoretically possible here, but for what it’s worth I think we’ve made good strides in recent years to make things more introspectable and future proof the runtime aspects of typing.
  • For some users, cohesion means powerful static type checking primitives that look more like writing Python than writing a declarative DSL.
  • For some users, cohesion could just mean better resources and documentation. Most non-typing features of Python have a decade (or two) headstart on typing features when it comes to building these resources.
  • For some developers, cohesion could mean building static analysis libraries that are easy to build tooling or custom static analysis on top of.
  • Finally, for some people, maybe this is just a polite way to say “this stuff looks different, get off my lawn” :wink: But don’t worry, we’ll win you over :slight_smile:

26 Likes

Just FYI, as an SC member, I won’t vote to accept something that tries to overload what the decorator syntax means in any magical way (i.e. if it isn’t just like any other decorator and thus just a thing you import for typing, I can’t support it).

2 Likes

I’m very glad to hear it, but I don’t understand the context. Are decorators — and I mean the symbol mydecorator in @mydecorator, or the function another() in @another(), or the expression in the recently allowed @lambda x: x — not always evaluated before their decorated functions? Why would that be magical if used for marking generic functions and classes without introducing a syntax change? I’m not being deliberately obtuse.

1 Like

Yes, the expression after the @ in decorator syntax is evaluated before the decorated function/class. That’s not typically what we mean by “evaluating a decorator,” though; typically we mean “calling the decorator – the result of evaluating the decorator expression – with the decorated function/class.” The purpose and typical use of decorator syntax is not for evaluating the decorator expression itself to have side effects.

So how would you bind names in a decorator to be used solely within the definition of the decorated function/class? The only possibility I can see is that evaluating the decorator expression has the globally visible side effect of binding names in the global scope, which the decorator then (again as a global side effect) deletes from the global scope when the decorator actually runs on the decorated thing. I think this is technically possible today, but I certainly wouldn’t advocate for it, and I think it would be reasonable to describe it as “overloading the decorator syntax in a magical way.”

5 Likes