Is `assert_type` checking for exact match, or a subtype relationship?

Pyre still needs to add support for assert_type, which is important to address so that we can use the conformance suite.

One question that came up right away: is assert_type supposed to check for exact type match (so that it’s requiring specific type inference results, and implicitly requiring the type checker to use eager inference rather than just type constraints), or just a subtype relation?

It looks to me like it’s supposed to be exact-match based on this discussion and the fact that the stubs don’t just declare the signature as

def assert_type(__val: _T, __typ: Type[_T]) -> _T: ...

but I thought I would check.

The question is particularly interesting because the Pyre team has been considering changing our inference algorithm, which is very limited, and one of the options is to use a constraint-based system where we generally wouldn’t try to eagerly infer types, at which point the implementation of assert_type forcing an exact match would become slightly trickier.

It is meant to do an exact match. A subtype match would be less useful for the intended use cases.


Would you mind elaborating on what the “intended use cases” are and why a subtyping match is less useful? If I read it correctly, the link Steven referenced earlier suggested that, at least in that one particular instance, having assert_type do a structural subtyping match is more useful than having it do an exact type match.

There’s no real easy way to do a static exact type match just using existing typing constructs[1], whereas for a subtyping match you can essentially just do x: BaseType = returns_a_subtype() and the type checker would complain if the expression on the right didn’t evaluate to a subtype, so you don’t really need assert_type for that.

  1. you could probably construct something using generics taking advantage of variance rules, but it wouldn’t be pretty or concise ↩︎
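To make the assignment-based check concrete, here is a minimal sketch of the `x: BaseType = returns_a_subtype()` pattern described above (the class and function names are hypothetical, purely for illustration):

```python
class Base: ...
class Derived(Base): ...

def returns_a_subtype() -> Derived:
    return Derived()

# A static checker verifies that the right-hand side evaluates to a
# subtype of Base; if it didn't, this annotated assignment would be
# flagged. No assert_type is needed for a plain subtyping check.
x: Base = returns_a_subtype()
```

The same trick works with a function call: passing the value to a parameter annotated `Base` performs the identical subtype check.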


The intended use case is checking how type inference behaves in type checkers. The main use case I was thinking of when I proposed the feature was to write tests for typeshed, e.g. checking that the right overload is picked in complicated situations. It’s also used now in the conformance test suite the Typing Council has been putting together.

Here’s an example where using a subtype match would not work well (testing PEP 647 TypeGuard semantics):

from typing import assert_type, TypeGuard

def guard(x: object) -> TypeGuard[int]:
    return isinstance(x, int)

def check(x: int | str) -> None:
    assert_type(x, int | str)
    if guard(x):
        assert_type(x, int)
        assert_type(x, int | str)  # type checkers should *not* infer str here

If assert_type() did a subtype match, the last call would not be able to check that the type checker implemented PEP 647 correctly.


Thanks for the explanation. Yes, I do agree that just doing assignments / function calls / etc. would be sufficient to perform subtyping checks. But the question remains: what are the other cases where exact matching is useful?

Assuming you’re also implementing reveal_type(), I think the implementation of assert_type() should be to do the same as reveal_type() does to get a type for the first argument, then compare the resulting type against the second argument of assert_type().
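The comparison logic described above can be sketched as follows. This is a hypothetical outline, not any real checker's internals: `infer_type` and `is_equivalent` stand in for whatever the type checker exposes internally for inference (the same result `reveal_type()` would display) and for type equivalence.

```python
from typing import Callable, Optional

def check_assert_type(
    infer_type: Callable[[str], str],       # hypothetical: expression -> inferred type
    is_equivalent: Callable[[str, str], bool],  # hypothetical: type equivalence test
    expr: str,
    expected: str,
) -> Optional[str]:
    # Infer the same type that reveal_type() would display for the
    # first argument, then compare it against the second argument.
    inferred = infer_type(expr)
    if not is_equivalent(inferred, expected):
        return f'assert_type failed: inferred "{inferred}", expected "{expected}"'
    return None  # assertion passes: no diagnostic
```

In a real checker both helpers would operate on internal type representations rather than strings; the strings here just keep the sketch self-contained.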


Thanks for the example!

checking that the right overload is picked in complicated situations

Do you have more specific examples there?

Here’s my hypothesis: the core problem here is not that the subtyping check itself is insufficient. The problem is that the assertion is uni-directional and can only check one side of the bound. For example, the PEP 647 case can be tested by doing a subtyping check in the reverse direction, i.e. asserting that x is a supertype of int | str in the else branch.

The problem with an exact match assertion is that the notion can be hard to define precisely. The earlier link about protocol matching is one example – do you consider two structurally identical protocols with different names an “exact match”? Refinement is another example:

def check(x: A) -> None:
    if isinstance(x, B):
        assert_type(x, ???)

The precise type for x under the if branch should be Intersection[A, B], but that type cannot be expressed in the current type system. If you do strict “exact” matching, then assert_type(x, B) should always fail there, because B is not the same as Intersection[A, B]. In order to make assert_type useful in these cases, one would have to patch the definition of “exact” and add relaxations in certain cases, which may end up affecting other cases.
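For illustration, here is a runnable sketch of the refinement scenario (the classes A, B, and Both are hypothetical). Since Intersection[A, B] is not expressible, the best a subtyping-style workaround can do is assert each upper bound separately via annotated assignments:

```python
class A: ...
class B: ...

class Both(A, B):
    # A concrete type that is a subtype of both A and B, so the
    # refined branch below is actually reachable at runtime.
    ...

def check(x: A) -> None:
    if isinstance(x, B):
        # Intersection[A, B] cannot be written, but a checker can
        # still verify each bound via assignment-based subtype checks:
        as_a: A = x
        as_b: B = x

check(Both())
```

Neither assignment pins down the exact narrowed type; they only establish that it is a subtype of both A and B.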

Instead of doing exact matches, my proposal would be to just have two-way subtyping assertions: offer both a subtyping assertion assert_subtype(x, T) and a supertyping assertion assert_supertype(x, T), so that both upper and lower bounds can be asserted. The pre-existing assert_type(x, T) would not even need to be deprecated – it could just become an alias for both a subtype and a supertype assertion. I think this would be a bit cleaner than the status quo, as there wouldn’t be any ambiguity about what “exact match” means.
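A runtime sketch of the proposed pair. Both names (`assert_subtype`, `assert_supertype`) are the hypothetical functions from the proposal above, not part of any real typing module; like assert_type, they would be no-ops at runtime and only meaningful to a static checker:

```python
from typing import TypeVar

_T = TypeVar("_T")

def assert_subtype(val: _T, typ: object) -> _T:
    # Hypothetical: a static checker would verify that the inferred
    # type of `val` is a subtype of `typ`. No runtime effect.
    return val

def assert_supertype(val: _T, typ: object) -> _T:
    # Hypothetical dual: a checker would verify that the inferred
    # type of `val` is a supertype of `typ`. No runtime effect.
    return val

def assert_type_both(val: _T, typ: object) -> _T:
    # Under the proposal, assert_type(x, T) could be an alias for
    # asserting both directions at once.
    return assert_supertype(assert_subtype(val, typ), typ)
```
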

I think this would work in most cases, but there will be scenarios where problems occur.

Our prior assumption is that what gets printed by reveal_type() is for human consumption only and does not necessarily need to be valid Python type syntax. This is important because not all valid types are expressible in Python syntax (e.g. intersection types, callable types with additional restrictions on param names, etc.). Even for Python-expressible types, it is sometimes preferable to display not the most precise type the system can come up with, but a good-enough type that is cheap to search for and satisfies all existing constraints.


typeshed/test_cases/stdlib/builtins/ at main · python/typeshed · GitHub, and more generally the rest of the test_cases/ directory.

This doesn’t work well in the presence of Any, since it is a subtype and a supertype of everything. For example, if the type checker infers list[str] but we want list[Any], a call to assert_subtype(x, list[Any]); assert_supertype(x, list[Any]) would pass, because list[str] is both a subtype and supertype of list[Any].
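The problematic interaction can be seen with plain annotated assignments (a sketch; a static checker accepts both directions because of how Any behaves, so neither direction pins down the element type):

```python
from typing import Any

def make_strs() -> list[str]:
    return ["a", "b"]

x = make_strs()      # a checker infers list[str] here
y: list[Any] = x     # accepted: list[str] is compatible with list[Any]
z: list[str] = y     # also accepted: compatible in the other direction too
```

So a pair of one-directional assertions against list[Any] cannot distinguish an inferred list[str] from an inferred list[Any].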

You are right that there are some ambiguities, and we may have to address those in the spec. My thoughts are:

  • Identical structural types (e.g., protocols or TypedDicts with the same fields but different names) should match in assert_type().
  • The nested isinstance() case is not expressible with assert_type(); i.e., there is no way to write a successful assert_type() call in your example with nested isinstance() calls. That is unfortunate but manageable; you can use @Daverball’s trick above to at least check that the type is a subtype of both A and B.

I totally agree that not everything is subtyping. But to me this means we need additional assertions that properly take Any into account (e.g. assert_consistency_subtype() and assert_consistency_supertype()), as opposed to patching the problem with ad-hoc rules. I think this aligns with the proposal laid out in Kevin’s foundational work – type checkers already need to handle both normal “subtyping” relations (which satisfy transitivity) and, separately, “consistency subtyping” relations (which work with Any but do not satisfy transitivity).

For the github examples, that’s a lot for me to scan through :frowning: Would you mind pointing out just one of them that cannot be handled with subtyping or consistency-subtyping assertions?

because list[str] is both a subtype and supertype of list[Any]

Nit: I want to point out that this is not true if we follow Kevin’s definitions – list[str] and list[Any] are consistency-compatible with each other, but they are not related in the normal subtyping sense. In fact, the purpose of having the notion of “consistency subtyping” is precisely to be able to clear up this kind of confusion.


Thanks, those are good points. You are right that I confused subtyping with consistent subtyping.

I think a good way forward would be to add definitions for “subtyping” and “consistent subtyping” to the spec and use them for the specification of assert_type(). I believe it should be correct if it says that assert_type(x, T) means x should be both a subtype and a supertype of T. (With the understanding that everything is a consistent subtype of Any, but not everything is a subtype of Any.) We could add additional type assertion function as you suggest, but I don’t think they are as useful, and they are not necessary for improving the specification of assert_type.


Hi everyone, this is my first post in this community, so I’ll quickly introduce myself. I’ve started working on the Pyre team at Meta recently. Before that, I worked on Flow (our type system for JavaScript) for several years.

If assertions in the conformance suite are based on exact match (either via normalization or combined subtype+supertype checks), I worry that we might end up over-specifying. For example, in Jia’s refinement example, should a type checker be forbidden from inferring a more precise type?

Is it desirable to allow implementations to have different behavior at all? If so, then a weaker specification based on subtyping alone would be useful.

assert_type() is just one tool, and the existence of that tool doesn’t imply that type checkers should always behave exactly the same. Anyone writing test cases for typeshed or for the type spec conformance suite should be careful to only test specified behavior and avoid overly precise assertions.

However, as I wrote above, there are areas of the current specification where an exact type match is needed to check conformance with the spec. That makes assert_type() as currently envisioned a useful tool, and it can’t be expressed using other primitives.

@Jelle assert_type requiring an exact match seems extremely restrictive with respect to how type checkers can handle type inference, and (at least from my perspective) it seems to have been informed by assuming one specific approach to type inference.

It seems more appropriate to me for the conformance suite to be updated to allow a means of testing for conformance that doesn’t require an exact match, except in cases where the type system itself does. Your example with type guards could easily be checked for conformance with a set of consistency checks, one of which could ensure that int wasn’t narrowed out. This doesn’t require exact-matching semantics to catch the incorrect behavior.

There are a couple of topics discussed in this thread: 1) how does assert_type work?, and 2) how is assert_type used in the conformance tests?

Topic 1: How does assert_type work?

As Jelle said, assert_type does an exact type comparison. This is how it was designed, this is how it is implemented in mypy, pyright and pytype, and this is how it needs to work to support its intended use case. There are many libraries that now depend on this behavior in their tests. The pandas library is probably the most well known. I don’t think we can or should try to change it at this time.

If you see the need for a new mechanism that performs a “laxer” test than assert_type, you’re welcome to make the case for such a facility, but that would need to be different from (or a new variant of) assert_type.

If the intended behavior of assert_type is unclear in the current spec, we should work to clarify the intended behavior.

Part of the problem here is a lack of precise terminology, as Jelle and others noted above. That’s something I’d like to see us address in the typing spec, probably in the concepts chapter. I think the term “equivalent type” makes sense for this concept, and it’s the term I’ve been using in recent PEPs and the pyright documentation. By this, I mean two types that describe exactly the same set of allowable values. For example, list[Any] and list[int] are bidirectionally compatible, but they are not equivalent because the set of values described by list[Any] is different from the set of values described by list[int]. Two TypedDicts or Protocols with the same definition (but different names) are equivalent. Likewise, int | str is equivalent to str | int. And Literal[True, False] is equivalent to bool.

Topic 2: How is assert_type used in the type checker conformance test suite?

The current conformance suite is written in a way that assumes a manual “scoring” process. It would be nice to have a fully-automated conformance suite, but given the different ways that type checkers report errors (e.g. on different lines) and the fact that the spec allows variations in some areas, we couldn’t find a way to fully automate the tests. For details, refer to this documentation. The tests are written to make this scoring process as easy and error-free as possible. As Jelle mentioned, assert_type is one tool that we use to assist with this.

The latest published conformance tests use assert_type in 307 places, mostly in situations where the correct evaluated type is unambiguously dictated by the spec.
So far, there has been only one test where an assert_type assertion fails in one type checker but succeeds in another and both behaviors are currently allowed by the spec (due to an ambiguity that I hope we will eventually address). In this case, I added a comment so it’s clear to someone who is scoring the test that different types are acceptable from the perspective of the spec. I think these cases will be very rare, so it’s OK to deal with them as exceptions.


If that is the intended level of equivalence, I would feel mostly comfortable retracting my earlier statement that it’s too restrictive. However, I don’t think what you’ve said here matches what’s in the specification, and even your description has some minor ambiguity (at least without the additional context of prior discussions not linked here), due to the difference between a type annotation constraining the set of possible types and a value having exactly that type at runtime.

I’ll give it more thought later, it’s sure to come up again with other things I’m actively working on.