Type guards don’t only work with instances. They also work with types (`issubclass`), literals (equality check), and other special forms, like typed dicts. If you are suggesting `IsInstance` should maintain the current type guard semantics but also narrow in the negative case, I don’t think that’s a good name for it. If you are suggesting `IsInstance` should precisely mimic `isinstance` (by disallowing return types `isinstance` can’t handle), then we’re probably gonna be back here looking to add another type guard variant before long.
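For example, a type guard over literals is written with an equality/membership check rather than `isinstance` (a minimal sketch; `Mode` and `is_mode` are illustrative names):

```python
from typing import Literal, TypeGuard

Mode = Literal["r", "w"]

def is_mode(s: str) -> TypeGuard[Mode]:
    # Literals can't be checked with isinstance(); an equality
    # (membership) test does the narrowing instead.
    return s in ("r", "w")
```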
Yes, I view it as similar enough to fit. I’d also enjoy it if `isinstance` supported things like typed dicts and literals. It seems reasonable code to write:
```python
from typing import Literal, TypedDict

class Foo(TypedDict):
    ...

x = {}
if isinstance(x, Foo):  # a TypeError at runtime today
    ...

y = "..."
if isinstance(y, Literal["A", "B"]):  # also a TypeError at runtime today
    ...
```
Other forms could also be described with `isinstance` if generics worked with it. `issubclass(x, Foo)` is similar to `isinstance(x, type[Foo])`, and the latter is closer to how type checkers view types. The notion you are hitting on is partly that `isinstance`’s type argument and type annotations/generics are separate things today. There’s a separate proto-PEP that dives more into that topic.
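A small sketch of that `issubclass`/`type[Foo]` parallel (illustrative names; note that `isinstance(cls, type[Foo])` itself raises at runtime today):

```python
class Foo: ...

def f(cls: type[object]) -> None:
    if issubclass(cls, Foo):
        # Type checkers narrow ``cls`` from type[object] to type[Foo]
        # here, which is the same narrowing one would like to spell as
        # isinstance(cls, type[Foo]).
        obj: Foo = cls()
```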
There are runtime complications to actually supporting this, so I am not proposing that we change `isinstance` here, just that the spirit of the name does fit for these types, and runtime checking of them happens to be tricky. There exist libraries like typeguard (yes, the library’s name is typeguard, similar to this PEP’s concept) that are about extending runtime `isinstance`-style checks to support some of these cases. I mostly use typeguard as a fancier `isinstance`.
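A minimal sketch of that usage, assuming typeguard >= 3, where `check_type(value, expected_type)` raises `TypeCheckError` on a mismatch (`is_ab` is an illustrative name):

```python
from typing import Literal

from typeguard import TypeCheckError, check_type

def is_ab(y: object) -> bool:
    # Runtime check of a form isinstance() can't handle.
    try:
        check_type(y, Literal["A", "B"])
        return True
    except TypeCheckError:
        return False

print(is_ab("A"))  # True
print(is_ab("C"))  # False
```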
I think the fairer framing in this case is: “things in the stdlib should adhere to stdlib backwards compatibility”.
If the type specification wants to live outside of the stdlib, it can evolve as quickly as those who are driving it agree. But once it gets into the stdlib, it has to adhere to the promises of the stdlib.
The code works exactly as it used to, but the type checking may produce different errors. That’s a much less significant problem, and I think it’s unfair to characterize it as “breaking”.
It breaks my build (which checks for type errors). I think it is absolutely fair to characterize that as “breaking”, and I don’t think it is acceptable.
The tenor of your response comes over as “typing moves fast and breaks things”. That’s fine for Facebook. I don’t think it’s a good fit for Python.
Most projects have the CI pinned to a type checker version, just as they do with linters. Linters also change what they check and how they check it, and no one complains that they’re “breaking” things when they do that.
Here’s an example of pinning a version in the pre-commit hook from a project I follow.
Is there a reason you don’t want to pin the version of your type checker, but still consider behavior changes to be “breaking”?
> Is there a reason you don’t want to pin the version of your type checker, but still consider behavior changes to be “breaking”?
We do pin the version of our type checker, but from time to time we upgrade from one version of Python to a later version. When we do that, we usually need to upgrade the type checker so that we can use the latest Python syntax (for example, “match” statements). I don’t want that upgrade to break the build.
So when you upgrade Python version, nothing breaks except the type checking errors?
If we change TypeGuard it’s not the Python upgrade that will break your build, but the Mypy upgrade, right? And I think us (the community) mandating that Mypy upgrades don’t break builds is impossible, especially at this stage of typing maturity. I don’t see it as a reasonable goal.
Speaking from experience, I was very OK with Mypy upgrades breaking my builds since it usually meant things got better typed (safer). We always budgeted a little time to make it work just in case, just like we did for Python upgrades or the upgrades of any dependency.
I’m OK with mypy builds breaking because it now implements something correctly, or because it changed some behaviour not specified by the type system, but I’d be less happy if my correctly typed code suddenly started erroring because the underlying type system changed.
That’s not a real argument. Some things breaking (which they’ll do; one man’s bug is another man’s feature) should (typically) have no bearing on whether we feel free to break other things. Otherwise we could just throw away the backward-compatibility policy.
Sorry if it’s not clear, but I’m not making an argument. I was just asking if type checking errors were the only errors that he tends to get when he upgrades his Python version.
Of course, everyone has a different experience with typing. There are at least three camps discussed here:
- I hear you and Paul and some of the other voices that really want to mitigate typing-related churn. It’s not fun doing type annotations when you have other goals you want to meet.
- Yet as I said in my other comment, we should also think about what interface we present to future users who are coming to Python fresh.
- And then there are others who, like Tin in his last comment, share my experience of anticipation for new features that tend to make my code better typed.
For me, I run a type checker probably every five minutes. It is easier to run the type checker than it is to run the program, and the errors that it gives are better. This has made coding a lot faster for me, and has increased my confidence that things are working, which changes the kinds of tests that I need to write. This is why I love type checking in Python. (Funnily, I was totally against the idea when it was first proposed—I thought, “it’s optional. Great, I’ll never use it.”) So now that I love it, there are still many things that I’d like to see added (e.g., intersections, multiple dispatch, partial application, and parameter specification forwarding). I realize that these features may have some stability costs.
As for policy, as far as I know, there isn’t one yet when it comes to typing. This is one of the things that we should probably discuss in a more appropriate thread (maybe the governance thread?). I don’t know if we need a policy, or whether we can just trust that the “typing council” will take the time to listen and make good decisions that balance the needs of various participants. From what I’ve seen, the Python typing people seem (to me) to be extraordinary listeners. I feel like we’re in good hands.
I think it’s important for me to say this since I seem to be creating some confusion with my comments. I’m not trying to discount anyone else’s experience; I’m just describing mine. I think this is one of the shortcomings of online versus in-person meetings: it’s hard to know that other people are listening (I think this is what @pf_moore was feeling earlier in the thread). I hear what the stability camp is saying and their reasons for it. They’re good reasons.
Ah, I see, I thought you were making an argument through a rhetorical question, sorry!
I agree with essentially 100% of your post, except of course the backwards-compatibility part. The time to reflect on the interface we want to present to future Python users is when the PEP is open for discussion; after that, it’s essentially done. The interface is chosen and we have to live with our mistakes. In this case, there is an easy option: create a new type that does what this proposal wants, even if the name might get strange. People get used to strange, non-descriptive names, among which I would include `TypeGuard`. It took me a very long time to understand its semantics, partly because I don’t see any guarding going on anywhere. I’m not saying it’s a bad name, but it’s (IMO) not so good that we should desire to change its behaviour to a perceived better one.
Regarding typing backwards compatibility: the typing PEPs are, afaict, all part of the “Standards Track”, including this one. That is the same track that language changes belong to, so I would assume that the features introduced by typing PEPs are covered by the normal backwards-compatibility policy. Thus, I agree with those thinking this PEP should introduce a new type instead of introducing breaking behaviour.
I’ve just merged the upgrade from 3.9 to 3.11. Most of what broke was pylint changes (I would have liked to defer the pylint upgrade, but couldn’t) and `unittest.mock` complaining when one created an autospec mock from an already-mocked function. That was obviously mad, but it happened surprisingly often.
Pylint changes have been mitigated by temporarily disabling some of the noisier checkers. (But it paid for that pain by finding a genuine bug). (I am beginning to worry about pylint getting too opinionated, but OTOH having it enforce a single style is desirable.)
I haven’t upgraded mypy yet, and I expect that to create churn, but usually because it gets better at spotting problems, not because it complains about something valid which it didn’t previously complain about.
If we were to introduce a new construct, does anybody have a preference for the name?
So far I believe we’ve seen:
- StrictTypeGuard
- TypeNarrower
- TypeCheck
- IsInstance
I was thinking one of these might work too:
- TypeFilter
- TypeRefiner
One question: if we’re keeping `TypeGuard`, is there opposition to refining its definition to be truly narrowing? I.e.,
```python
def is_u(val: T) -> TypeGuard[U]: ...

def f(val: T):
    if is_u(val):
        ...  # Type of ``val`` is narrowed to ``T & U``, rather than ``U``.
    else:
        ...  # Type of ``val`` remains ``T``.
```
The `T & U` type is significantly more useful. It’s unlikely that anyone is counting on the current `U` behavior. And it doesn’t seem to contradict the documentation, if I’m reading it right.
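For example (a sketch; `is_sized` is an illustrative guard), intersection semantics would let the positive branch keep what is already known about the argument:

```python
from collections.abc import Iterable, Sized
from typing import TypeGuard

def is_sized(val: object) -> TypeGuard[Sized]:
    return hasattr(val, "__len__")

def f(val: Iterable[str]) -> None:
    if is_sized(val):
        # Under T & U semantics, ``val`` is Iterable[str] & Sized, so
        # both len() and the typed iteration below check. Under PEP 647
        # semantics, ``val`` is just Sized, and the loop is an error.
        print(len(val))
        for s in val:
            print(s.upper())
```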
This would break TypeGuards that narrow from e.g. `list[object]` to `list[int]`, since `list[object]` and `list[int]` do not intersect. I would prefer to keep TypeGuard’s current behavior in view of the previous points in this discussion.
I wonder where such a thing turns up?
I’m sure you realize that using `TypeGuard` in that way seems like it would hide bugs. If object A gives you a reference to a `list[object]`, and you use a type guard to re-interpret it as a `list[int]`, then you could hand it to object B, who only expects to pull integers out of it. But of course, object A can throw whatever it wants into the list.
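A sketch of that failure mode (names are illustrative); no line below is a static error, yet the last one fails at runtime:

```python
from typing import TypeGuard

def is_int_list(val: list[object]) -> TypeGuard[list[int]]:
    return all(isinstance(x, int) for x in val)

b_view: list[list[int]] = []

def object_b_take(nums: list[int]) -> None:
    b_view.append(nums)  # B keeps a reference, expecting only ints

a_list: list[object] = [1, 2]  # object A's list
if is_int_list(a_list):
    object_b_take(a_list)  # fine per the type checker

a_list.append("surprise")  # A can still insert a str afterwards
total = sum(b_view[0])     # TypeError at runtime, with no static error
```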
This is unlike the narrowing use of type guard (what’s shown in the documentation). That seems safe to me.
If we’re not changing type guard, then could we at least add a strong suggestion to the documentation to prefer the PEP 724 construct (whatever it ends up being called)? It seems much safer, and easier for readers to reason about.
Edit: I changed my mind about this paragraph:
I think it would be more idealistic to force people who are trying to convert lists of objects to lists of ints to write a function that checks the list element types, and then returns either (both options are sketched below):
- a cast list, to emphasize that they’re doing something really unsafe, or else
- a copy of the list.
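A sketch of those two options (hypothetical helper names):

```python
from typing import cast

def as_int_list_unsafe(val: list[object]) -> list[int] | None:
    # Option 1: cast, loudly marking the unsafe reinterpretation;
    # the caller still shares mutable state with other aliases.
    if all(isinstance(x, int) for x in val):
        return cast(list[int], val)
    return None

def as_int_list_copy(val: list[object]) -> list[int] | None:
    # Option 2: a fresh list, so existing aliases can't poison it.
    out: list[int] = []
    for x in val:
        if not isinstance(x, int):
            return None
        out.append(x)
    return out
```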
This behavior is already inconsistent across type checkers. How `isinstance` narrows in the positive case differs from how `TypeGuard` narrows in the positive case, and Pyright does a partial intersection, where it intersects with type variables.

The code below passes pyright but fails mypy because this behavior is currently unclear. If the goal is to keep current behavior, I don’t think that behavior is agreed upon today. I’ve only encountered false positives from mypy’s choice, where code is safe at runtime but fails at type checking time.
edit:
```python
from typing import TypeVar

from typing_extensions import TypeGuard

T = TypeVar("T")

def guard(x: object) -> TypeGuard[int]:
    ...

def foo(x: T) -> T:
    if guard(x):
        return x  # Is this int, or int & T? Mypy says int; pyright says int & T.
    return x
```
Accidentally posted while adding example.
edit 2: I’m neutral on this in the context of this specific PEP. I think intersection/narrowing behavior is currently undefined and is another area where the PEPs/documentation are ambiguous and could use a clearer spec.
This is in the motivation section of PEP 647, which introduced TypeGuard (search for `is_str_list`), so I don’t think it’s a very contrived case. My main takeaway from the discussion so far is that we cannot afford to change the specified semantics of PEP 647 TypeGuards.
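For reference, the example there is roughly:

```python
from typing import List, TypeGuard

def is_str_list(val: List[object]) -> TypeGuard[List[str]]:
    """Determines whether all objects in the list are strings."""
    return all(isinstance(x, str) for x in val)
```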
I do agree that if we end up with a second construct introduced by PEP 724, the documentation should heavily emphasize the new construct and encourage people to use it over TypeGuard.
I agree with that. But where exactly is MyPy’s behavior specified? The documentation says:
> `TypeGuard` aims to benefit type narrowing – a technique used by static type checkers to determine a more precise type of an expression within a program’s code flow.
That suggests the intersecting behavior.
Then, at the bottom, it notes:
> Note: `TypeB` need not be a narrower form of `TypeA` – it can even be a wider form. The main reason is to allow for things like narrowing `list[object]` to `list[str]` even though the latter is not a subtype of the former, since `list` is invariant.
Does that preclude intersection? Or does it just mean that the type guard has to work when `U` is wider than `T`?
> since `list[object]` and `list[int]` do not intersect.
Do you mean that it’s irreducible? That is,
```python
l: list[object] & list[int]

for x in l:
    reveal_type(x)  # object & int = int

l.append(3)    # Okay, since list[int] supports this.
l.append("a")  # Okay, since list[object] supports this.

def f(m: list[int]) -> None: pass
f(l)  # Okay, since l is narrower than list[int].

def g(m: list[object]) -> None: pass
g(l)  # Okay, since l is narrower than list[object].
```
(For people reading along, this looks to me like another example of this giant discussion. The question about whether type checkers should reduce empty intersections to `Never` complicates things too.)

If you agree with this, then it appears that @mdrissi is right that it’s underspecified. Pyright chose one semantics and MyPy chose another.
I realize that intersections may be very hard to implement, so if we’re going to specify things, why not say: a function that accepts `T` and returns `TypeGuard[U]` narrows its argument to:

- `T` in the negative case, and
- in the positive case, a type:
  - with an interface at least as wide as `U` but no wider than `T & U`, and
  - that can be passed to functions accepting `U`, and possibly (depending on the type checker) to functions that accept `T`.
I’m sure this could be worded better, but this gives type checkers a bit of freedom while giving users a clear guarantee. And it’s compatible with both MyPy’s and Pyright’s current behaviors.
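To make that wording concrete, a sketch (treating `T` and `U` as stand-in classes rather than type variables; all names are illustrative):

```python
from typing import TypeGuard

class T: ...  # the declared argument type
class U: ...  # the guarded type

def is_u(val: T) -> TypeGuard[U]: ...

def takes_u(val: U) -> None: ...
def takes_t(val: T) -> None: ...

def f(val: T) -> None:
    if is_u(val):
        takes_u(val)  # guaranteed to be accepted under the wording above
        takes_t(val)  # accepted only by checkers that keep T & U
    else:
        takes_t(val)  # negative case: ``val`` is still T
```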
I agree with you that, whatever naming convention is ultimately used, users should instinctively gravitate towards the one that provides what I and many in this thread believe to be the most intuitive semantics: strict narrowing. If the names chosen do not have this effect, I would question whether we are using the right names.
And while many in this thread have lamented breaking changes, breaking existing code (particularly code that, per mypy primer, is not all that common) is a one-time cost. The implications of these changes for user intuition are arguably permanent. Ergo, I don’t think we should rule out redefining `TypeGuard` and then adding a separate construct for the original behavior just yet.
To this point, if we were to pursue `TypeGuard` and `StrictTypeGuard`, I would find it hard to believe that users would intuitively use the more verbose form. So to me, this option is a non-starter.
On the other hand, I do think a compelling case was made for `IsInstance`, since the behavior mirrors the current narrowing behavior of `isinstance`.
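For readers skimming, a sketch of the behavior that comparison rests on: `isinstance` narrows in both branches, while a PEP 647 `TypeGuard` narrows only the positive one (the `reveal_type` comments show typical checker output):

```python
from typing import TypeGuard

def is_int(x: object) -> TypeGuard[int]:
    return isinstance(x, int)

def f(x: int | str) -> None:
    if isinstance(x, int):
        reveal_type(x)  # int
    else:
        reveal_type(x)  # str: isinstance narrows the negative case too

def g(x: int | str) -> None:
    if is_int(x):
        reveal_type(x)  # int
    else:
        reveal_type(x)  # still int | str under PEP 647
```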