There are a couple of topics discussed in this thread: 1) how does `assert_type` work, and 2) how is `assert_type` used in the conformance tests?
Topic 1: How does `assert_type` work?
As Jelle said, `assert_type` does an exact type comparison. This is how it was designed, this is how it is implemented in mypy, pyright and pytype, and this is how it needs to work to support its intended use case. There are many libraries that now depend on this behavior in their tests; the `pandas` library is probably the most well known. I don’t think we can or should try to change it at this time.
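To make the exact-comparison behavior concrete, here is a minimal sketch (not drawn from any particular library’s tests): the second assertion fails even though `int` is assignable to `float`, because the two types are not the same.

```python
from typing import assert_type

def example(x: int) -> None:
    assert_type(x, int)    # OK: the inferred type of x is exactly int
    assert_type(x, float)  # type checker error: int is assignable to float,
                           # but the two types are not the same
```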
If you see the need for a new mechanism that performs a “laxer” test than `assert_type`, you’re welcome to make the case for such a facility, but that would need to be different from (or a new variant of) `assert_type`.
If the intended behavior of `assert_type` is unclear in the current spec, we should work to clarify it.
Part of the problem here is a lack of precise terminology, as Jelle and others noted above. That’s something I’d like to see us address in the typing spec, probably in the concepts chapter. I think the term “equivalent type” makes sense for this concept, and it’s the term I’ve been using in recent PEPs and the pyright documentation. By this, I mean two types that describe exactly the same set of allowable values. For example, `list[Any]` and `list[int]` are bidirectionally compatible, but they are not equivalent because the set of values described by `list[Any]` is different from the set of values described by `list[int]`. Two TypedDicts or Protocols with the same definition (but different names) are equivalent. Likewise, `int | str` is equivalent to `str | int`, and `Literal[True, False]` is equivalent to `bool`.
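As a rough illustration of how this terminology plays out with `assert_type` (a sketch of the behavior the spec’s notion of equivalence implies; individual checkers may differ on the `bool` case depending on how they normalize literal types):

```python
from typing import Any, Literal, assert_type

def f(a: list[Any], b: int | str, c: bool) -> None:
    assert_type(a, list[int])             # error: mutually assignable, but not equivalent
    assert_type(b, str | int)             # OK: int | str and str | int are equivalent
    assert_type(c, Literal[True, False])  # equivalent to bool, though checkers may
                                          # differ in how they report this case
```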
Topic 2: How is `assert_type` used in the type checker conformance test suite?
The current conformance suite is written in a way that assumes a manual “scoring” process. It would be nice to have a fully automated conformance suite, but given the different ways that type checkers report errors (e.g. on different lines) and the fact that the spec allows variations in some areas, we couldn’t find a way to fully automate the tests. For details, refer to this documentation. The tests are written to make this scoring process as easy and error-free as possible. As Jelle mentioned, `assert_type` is one tool that we use to assist with this.
The latest published conformance tests use `assert_type` in 307 places, mostly in situations where the correct evaluated type is unambiguously dictated by the spec.
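For context, a typical usage looks something like this (a hypothetical fragment, not an actual file from the suite): the asserted type records what the spec dictates, so a failing assertion points the scorer directly at a deviation.

```python
# Hypothetical conformance-style fragment: the spec unambiguously dictates
# the evaluated types here, so assert_type records the expected results.
from typing import TypeVar, assert_type

T = TypeVar("T")

def identity(value: T) -> T:
    return value

assert_type(identity(1), int)
assert_type(identity("x"), str)
```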
So far, there has been only one test where an `assert_type` assertion fails in one type checker but succeeds in another and both behaviors are currently allowed by the spec (due to an ambiguity that I hope we will eventually address). In this case, I added a comment so it’s clear to someone who is scoring the test that different types are acceptable from the perspective of the spec. I think these cases will be very rare, so it’s OK to deal with them as exceptions.