TypeExpr[] (formerly TypeForm[]) is a new way to spell variables that hold type expression objects at runtime. They are similar to type[] class objects, but additionally allow matching TypedDicts, generic lists, unions, literals, and other complex types. Being able to pass around type expressions is especially useful for runtime type checkers:
# A variable holding a type expression object. NOT a type alias.
INT_OR_STR: TypeExpr = int | str
# Call of a function which accepts a type expression as an argument
assert isassignable(1, INT_OR_STR) # like isinstance()
Draft 3 of this PEP is now ready for review. Please leave your comments in this thread.
Notable changes since draft 2
A new Rationale section explains & distinguishes the related concepts of “class objects”, “type expressions”, “annotation expressions”, and plain “objects”.
Explicit TypeExpr Values introduces the TypeExpr(...) syntax (with parentheses) to explicitly mark an expression to be interpreted as a type expression rather than as a value expression.
Implicit TypeExpr Values introduces rules for recognizing type expression objects in a value expression context.
How to Teach This is greatly extended, with interaction examples from the old Specification section, examples of how to write a function that manipulates TypeExpr values, and why you might want to import such functions from libraries rather than defining your own.
You may consider this off-topic, and that’s fine, but as a relative outsider, I’m baffled by the plethora of different “type like things” that exist in the static typing world. We have types (like int) and type[] things, and now TypeExpr[] things. As well as possibly more - the change from “type form” to “type expr” leaves me confused over what’s what.
Is there, or will there be, a place in the documentation where typing novices like myself who see references to type[] or TypeExpr[] can go to read up on what the differences are, how to distinguish between the different concepts, and whether any of it matters to me? I live in fear of getting a bug report for my code that says “use of int is incorrect, please use type[int]” or something similar…
To be clear, I’m not after clarification right now. I’m interested in how we make sure that the information is accessible in the long term. And the “How to Teach This” section of the PEP glosses over this by suggesting that most people won’t ever interact with these things - which, to be honest, doesn’t really seem like it addresses the point of the section, which is not “do we need to teach this”, but “how do we teach this”.
I don’t think the typing specification is intended as a document for non-expert users, is it? As far as the typing module docs are concerned, though, that would indeed be a reasonable place. Although I’m not convinced that it’s always particularly “beginner friendly”.
The problem with this sort of syntax is that it’s inherently an advanced feature, but the nature of type annotations in my experience is that it’s hard to avoid the advanced features because they crop up in places like the typeshed as annotations for parts of the stdlib. So users who don’t have the background in why these features are needed still need some sort of documentation. That’s where I think the “how to teach this” section of the PEP could help - it could give guidance on how to discuss the feature in terms that a non-expert can follow.
I think implementing your own function that uses TypeExpr as a hint and does not just pass it to another is an advanced/library use case. Calling a function that uses TypeExpr as a hint is a normal use case and should be explainable to the average Python developer. My short summary is:
from typing import Annotated

def func(x: type):
    ...

class Foo:
    ...

func(int)
func(str)
func(Foo)
func(int | str)              # Fail, not a type object
func(Annotated[int, "..."])  # Fail, not a type object
func(list[int])              # Fail, not a type object
func("Foo")                  # Fail, not a type object
An easy way to check whether something is a type object is type(x). type(int) is type. type(Foo) is type. type(list[int]) is GenericAlias. type(int | str) is UnionType.
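For example, these checks can be run directly on Python 3.10+ (needed for the | union syntax), with Foo defined as above:

import types

class Foo:
    ...

print(type(int) is type)                      # True
print(type(Foo) is type)                      # True
print(type(list[int]) is types.GenericAlias)  # True
print(type(int | str) is types.UnionType)     # True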
TypeExpr is the annotation to use when you do want the last 4 things to be acceptable - when you want to handle any value that is a valid type annotation even though it may not be a type object itself. The advanced part is that handling values like Annotated/forward references/unions/etc. at runtime is tricky. But a beginner can certainly use libraries that handle those values, in the same way that some libraries use metaclasses for core functionality even though most users shouldn't reach for them too often.
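A minimal sketch of what that looks like, assuming the proposed typing.TypeExpr from this PEP (it does not exist in typing today):

from typing import Annotated, TypeExpr  # TypeExpr is proposed by this PEP, not in typing yet

def accepts_type_expr(x: TypeExpr):
    ...

accepts_type_expr(int)                    # OK
accepts_type_expr(int | str)              # OK, unions are type expressions
accepts_type_expr(Annotated[int, "..."])  # OK, Annotated forms are type expressions
accepts_type_expr(list[int])              # OK, generic aliases are type expressions
accepts_type_expr("Foo")                  # OK, treated as a forward reference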
Thank you for mentioning that. I completely agree. I usually try to follow all Typing discussions but trying to read through the TypeForm discussion really confused me and I thought I was just too tired. Looking back at this I very much agree this PEP needs a better “How to Teach This” section.
Yes, agree, the PEP could be improved that way. And I also agree with your overall observation. This feature is kind of a quirk. I’m convinced it’s necessary, but at the same time it’s unfortunate that type isn’t sufficient.
I’ve been waiting for this for the better part of a decade now so I’m very grateful to David for pushing this on.
I don’t have any complaints about the PEP. The only comment I have is that I personally find the rationale for not widening type to be unconvincing, but not nearly enough to really fight this. I don’t think it’s that big a deal.
First of all, it’s obvious you’ve put a lot of thought and consideration into this draft, so thank you! I really like the new “Rationale” section; it clarified the proposal a lot for me. Two thoughts:
The “Implicit Annotation Expression Values” section seems a little out-of-place, unless I’ve missed something. Will this proposal lead to annotation expressions appearing in value expression contexts? (How? I thought this was about type expressions?) If not, why do we need these new rules?
I agree with the previous comments that the “How to Teach This” section could be improved. The current content of this section reads to me like an advanced usage tutorial, whereas I’d be looking for answers to more basic questions like:
How does this idea fit into the existing type system conceptually? (If I were teaching a course on typing, what section would this belong in?)
How do I explain the difference between type and TypeExpr?
Who needs to know about TypeExpr? You allude to this in the first paragraph, but I think it would be worth explicitly discussing what end users, library authors, maintainers of static typing tools, and maintainers of runtime typing tools need to know.
And a couple minor comments:
I’d suggest linking to Rejected Ideas: Accept arbitrary annotation expressions somewhere early in the PEP. One of the first questions that popped into my head was “Why TypeExpr and not AnnotExpr?”, and I suspect other readers will wonder the same thing after you’ve introduced type expressions vs. annotation expressions. A quick “See this rejected idea” note would clear that up.
There are a few remaining references to “TypeForm” in the “Implicit TypeExpr Values” section. You may want to do a quick search-and-replace =)
Thanks, that’s exactly the sort of explanation that I think is needed, and I think should be mentioned in the “How to teach this” section of the PEP, saying that it will be added to some easily accessible document (the typing documentation under TypeExpr is probably the right place).
Out of curiosity, where does type[...] fit into this? I found this section in the typing docs which suggests that type[X] means roughly the same as “type, but must be X or a subclass of it”. Is that correct?
Yes, that's the right idea behind type[X]. Some examples:
def func(x: type[int | str]):
    ...

class Foo:
    ...

class Bar(Foo):
    ...

def foo_func(x: type[Foo]):
    ...

func(int)      # Good
func(str)      # Good
func(Foo)      # Bad, Foo is not a subtype of int | str
foo_func(Foo)  # Good
foo_func(Bar)  # Good
foo_func(str)  # Bad, str is not a subtype of Foo
Classes that have a lot of subtypes (protocols or Exception) are where I tend to see type[X] used. The other way I see it commonly used is with a typevar; for example, a simple class decorator may have a signature like:
def class_dec[T](cls: type[T]) -> type[T]:
    ...
Edit: Another good example is the cast function from typing. Its signature today (the closest we have) is:
def cast[T](typ: type[T], val: object) -> T:
    ...
cast is one of those cases where it actually supports TypeExpr too, and after this PEP its signature should be updated accordingly.
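A sketch of what that updated signature might look like, using the TypeExpr[T] form the PEP proposes:

def cast[T](typ: TypeExpr[T], val: object) -> T:  # TypeExpr[T] as proposed by the PEP
    ...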
Today, type checkers special-case cast in practice to behave like the TypeExpr signature. With this PEP it would be possible for a user to define a my_cast function that behaves like cast.
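For instance, a minimal sketch of such a my_cast, again assuming the proposed TypeExpr is available (at runtime it does nothing, just like typing.cast; the TypeExpr[T] parameter is what lets type checkers infer T):

from typing import Any, TypeExpr  # TypeExpr is proposed by this PEP, not yet in typing

def my_cast[T](typ: TypeExpr[T], val: Any) -> T:
    # No runtime check here; trust the caller, exactly like typing.cast does.
    return val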
My understanding is that one of the projects of the Typing Council is to create a user-facing documentation counterpart to the current Typing Specification (which is geared more toward type checker implementors and experts).
I’ll revise the “How to Teach This” section with more content that could inform higher-level documentation about TypeExpr that appears elsewhere.
I’ll also try to make the section more standalone: Answers to a lot of the “why” questions appear elsewhere in the PEP, but I suspect it may be useful to at least recollect the high level information here.
A lot of the examples-style content in the current "How to Teach This" section was pulled out from the "Specification" section, but those examples are a bit advanced for the average user of TypeExpr. I'll probably make a new supersection outside the standard template, since I still think this content is valuable for the rare folks actually trying to write their own functions that take TypeExpr as input.
I’ve updated the PEP with a new How to Teach This section, written with the “casual typing users” audience in mind. @pf_moore I’d be particularly interested to see if it makes sense to you.
I’ve moved the content of the old “How to Teach This” section - which was aimed at “runtime type checker implementors” - to a new Advanced Examples section.
I’ve also applied other feedback earlier in this thread.
Awesome, yes that’s exactly the sort of explanation I wanted. I really appreciate this change.
One further useful addition to the PEP would be a statement on where this explanation will be published in the long term - from bitter experience with packaging, I’ve found that if there isn’t a plan in the PEP for where to give the information a permanent home, you end up with an undiscoverable mess of people quoting opaque PEP numbers, and fragmented documentation. I don’t know where the typing community is in terms of user documentation, so this may be too much to ask right now, but it’s worth keeping in mind.
Apologies if this has been asked before or if it’s addressed somewhere in the PEP that I haven’t seen, but why is TypeExpr used to declare type expressions in preference to TypeAlias – does the type expression need to be disambiguated at definition time? – and would type expressions then be expressible using new-style type aliases?
TypeAlias and TypeExpr are used in different situations:
A TypeAlias defines a type alias that can be used in a type annotation context:
StringGenerator: TypeAlias = Callable[[int], Iterator[str]]

def print_strings(gen: StringGenerator) -> None:
    for s in gen(5):
        print(s)
A TypeExpr defines a variable that can hold a type that can be used in a value context:
BINARY_FUNCTION: TypeExpr = Callable[[Any, Any], Any]

if isassignable(my_func, BINARY_FUNCTION):
    print("It's a binary function!")
elif callable(my_func):
    print("It's a non-binary function!")
else:
    print("It's not a function at all!")
Edit: A TypeAlias can be implicitly converted to TypeExpr, but not the reverse. So you could also write:
BinaryFunction: TypeAlias = Callable[[Any, Any], Any]
if isassignable(my_func, BinaryFunction): ...
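Reusing BINARY_FUNCTION from above, the reverse direction would be rejected by a type checker (a sketch, assuming the proposed TypeExpr):

def apply(cb: BINARY_FUNCTION) -> None:  # Error: a TypeExpr variable is not valid as an annotation
    ...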
This might be the wrong place for this, but I found some of the differences between type expressions and annotation expressions somewhat surprising and confusing. The PEP defers to the definition in the typing spec, which, I think, is the first time that definition is used outside of the spec itself. It's definitely good to have TypeExpr exactly match the spec's definition, but I think this PEP would be a good opportunity to maybe reevaluate some details of it.
In particular, TypeGuard/TypeIs being type expressions and P.args/P.kwargs only being annotation expressions stand out to me. The intuition the PEP and the spec use is that type expressions are ones that describe a type (i.e. a set of values), while annotation expressions are ones that add qualifiers to some type and are only valid in certain contexts.
The spec definitions for TypeGuard and TypeIs never mention them occurring anywhere outside of function return annotations, and only describe their effect on the way type checkers should treat calls to the annotated function. Looking at some non-standard usages of them in this pyright playground and this mypy playground, they also behave inconsistently, and there doesn't seem to be a clear set of values that TypeGuard contains. From what I can tell, the reason that a TypeGuard/TypeIs is considered a type expression is so that it can fit in a Callable[..., TypeGuard[...]] without special casing, not because it actually defines a type.
P.args and P.kwargs seem to be the opposite: they are considered annotation expressions because they shouldn't occur in most places type expressions can, but they do describe a type. Having them be annotation expressions, rather than type expressions that just aren't valid in most contexts, feels like an implementation detail rather than a sign that they are conceptually closer to a ClassVar than to, e.g., a type var tuple.
I don’t personally have historical context on why certain special forms were deemed type expressions and others were deemed annotation expressions in edge cases.
isassignable(value, list[Literal["Y", "N"]]) is OK.
Perhaps a rephrasing of “a value of Literal[...] type” would be more clear. Hopefully the example following the original statement is clear.
An earlier PEP draft provided a clarification in that Literal[] TypeExprs section that I might bring back:
However, Literal[...] itself is still a TypeExpr:
DIRECTION_TYPE: TypeExpr = Literal['left', 'right'] # OK
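A runtime checker could then accept such a value; hypothetical usage, following the isassignable examples above:

assert isassignable('left', DIRECTION_TYPE)
assert not isassignable('up', DIRECTION_TYPE)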