PEP 724: Stricter Type Guards

In my opinion, we still don’t have one good example of non-strict type guards that would not be better written using strict type guards. Do you have an example from real code?

I totally agree.

Fair enough. I think that typing is not like the standard library since:

  • typing does not normally affect code runtime (except in runtime introspection) and changing type errors is much lower stakes,
  • type checkers can be pinned to a version even as python is upgraded, and
  • typing is in its infancy and really benefits from being able to swiftly correct design errors without long and labor-intensive deprecation periods.

Anyway, I think we should probably move this discussion to the governance thread. What do you think?


@ntessore’s example is_small_array is a fine example, IMO. I don’t know if it’s from “real code”, but it certainly looks like something that could be written in real code.

As @sirosen said, the goal here should be to describe to the type system the behaviour of existing code. So please don’t repeat the suggestion that is_small_array should be refactored - that’s putting the cart before the horse. Type annotations and type checkers are there to validate if your code is right, not to force you to write your code a certain way (whether or not that way is easier to prove correct). Maybe it’s not possible to do this in every case - some code uses runtime features that can’t be easily expressed in terms of static types, and that’s fine - but the existence of the current TypeGuard demonstrates clearly that this isn’t true in this case.

So we have an example. Are you now going to say that isn’t sufficient to question the idea of simply removing the current behaviour? If so, then what is your criterion for accepting that removal is going to cause problems for some users? Do you want two examples? A hundred? The PEP itself says that only 25 code bases were checked to ensure that they wouldn’t be affected by the change. Even one example that would be affected is equivalent to 4% of the test population. Is something that has a 4% chance of causing a problem acceptable? Yes, I know this is a silly argument. I’m trying to point out that the whole “how many examples can you come up with to support your case” argument is a bit silly - precisely because typing is now so widespread that getting meaningful samples has become essentially impossible…


One relevant piece of history: TypeGuard is closely based on TypeScript’s type guards. TypeScript had type guards years before Python and made the opposite decision here of only supporting strict type guards. There is an open TypeScript ticket for non-strict (Python-like) type guards that has real examples. The push to flip the behavior of TypeGuard comes from the various “bug reports” filed by users expecting strict behavior, who are surprised that the current behavior is as the spec stated.

The main cost of supporting both is user confusion and added complexity in the TypeGuard documentation. I think pyright did implement both strict and non-strict type guards in the past, so supporting both is definitely feasible. If we do support both strict and non-strict type guards, which one should be the default under the name TypeGuard? Backwards compatibility says TypeGuard stays non-strict; user expectations push towards making TypeGuard strict and adding a separate LaxTypeGuard.

I was curious, so I audited my own codebase’s usage of TypeGuard. There are about 20 of them, and I spotted 1 for which LaxTypeGuard would be correct. The code is:

def check_concrete_type(t: type[T]) -> Callable[[type], TypeGuard[type[T]]]:
    def _concrete_type(cls: type) -> TypeGuard[type[T]]:
        return inspect.isclass(cls) and not inspect.isabstract(cls) and issubclass(cls, t)
    return _concrete_type

So overall: I think lax type guards definitely have good evidence that they exist in the wild (the TypeScript issue being the best list), I lean towards negative narrowing being the better default for common usage, and I would personally be fine with adding either Lax or StrictTypeGuard. I feel that adding LaxTypeGuard, and recommending that users generally pick TypeGuard, is a better fit for expectations than adding StrictTypeGuard and adjusting the documentation to highlight strict first/more clearly.


I agree with your other two notes, but I disagree with this last point very, very strongly.
Typing has gone mainstream. Maybe pydantic and FastAPI were what put it “over the top”, maybe it was the addition of __class_getitem__ for the builtins in 3.9, maybe something else… Whatever it is, python typing has “hit it big”. Everyone who’s anyone is using it, watch out!

It’s still young relative to the stdlib or some other software projects, but it’s not very new anymore. Modern annotations have been widely available since python 3.5, so that’s 8 years of history.

I’d be happy to do so; certainly some of this is high-level directional stuff which is not specific to this PEP. I’ll see if there’s any useful contribution I can make on that thread.

On the other hand, as pertains to this PEP, the long-term view I’m promoting has some short term impact. At some point, the changes need to slow down. Is TypeGuard a strange hill to die on for this? Sure – as far as I’m concerned, I didn’t choose it. We’re talking about it because it’s the PEP which is on the table today. It could have been any proposal for a backwards incompatible change that got caught in this discussion. But we really need to start somewhere or we won’t make progress on stabilizing the behaviors.

For me, adding LaxTypeGuard and changing TypeGuard seems like it’s addressing some of the concerns – “can functions like X be described by the type system?” – but it punts on the bigger question of how typing can evolve to become more stable.

I 110% agree with you that TypeGuard being strict is a better default, and the naming would long term be better if we had TypeGuard/LaxTypeGuard.
But is it better by a wide margin, vs StrictTypeGuard/TypeGuard?

Given that StrictTypeGuard has the added benefit of being fully backwards compatible, and that I think TypeGuard/LaxTypeGuard is only marginally better, I can’t help but favor StrictTypeGuard.


The question that I have about this is, if this is the case, why are any of these changes being made as PEPs rather than just the tools working them out on their own? The way I see it, by the time things get to the stage of a PEP (which may do something like alter the CPython docs), the dust should pretty well have settled. This makes typing seem like a moving target and likely contributes to the perception that various typing constructs are “advanced topics” (because using typing means you have to stay abreast of whatever the latest changes are).


Absolutely. The idea that “rapid change is acceptable because typing is still developing” is, in all honesty, dangerously naïve. To give a very small example, I work on pip, and our code base is fully type checked. That’s purely internal, as we don’t expose a programmatic API, so changing our type annotations affects no-one but ourselves. But we use mypy in our CI, and if a type check were to fail, that would be a blocking issue that would prevent PRs from being merged[1]. If a check breaks because mypy changed something in a way that made our previously correct annotations now fail, that would require work to deal with it from an already extremely under-resourced maintainer team.

I just checked, and we have one use of TypeGuard in pip:

        def _should_install_candidate(
            candidate: Optional[InstallationCandidate],
        ) -> "TypeGuard[InstallationCandidate]":
            if installed_version is None:
                return True
            if best_candidate is None:
                return False
            return best_candidate.version > installed_version

It’s a local function with logic tied into variables in the surrounding code, and I have no idea how I’d change it, if it broke because TypeGuard suddenly started narrowing to None on a False return value. I think the code doesn’t care, but I can’t be sure. What I would almost certainly do is to change the return type to bool, effectively giving up on TypeGuard as a useful feature (at least in this context, and probably to a small extent in general). Is that really the result we want from this PEP?

  1. Yes, we can override the check, but we typically don’t. ↩︎

That may be a goal, but in practice code does accommodate the type checker. For example, you have to use LBYL with instance checks rather than branching with exceptions (EAFP) if you want the type checker to distinguish types in branches.

As for type guards, some patterns of usage will work, and others won’t. The point of this PEP is to allow new patterns to work, and make some old patterns incorrect.

I never said anything so extreme, so I really don’t understand this hyperbole. Let’s try to stay level-headed please.

I don’t consider it a great example because, as I said, it looks like it should have been broken into two pieces—regardless of typing, but just from the principle of the separation of concerns. But yes, it is an example. I think Mehdi’s example is much more convincing.

I don’t think it’s impossible. Maybe more projects should be added to the primer? Surveying type-checked code is an important tool in deciding type change impacts. Personally, I think it’s a lot better to gather information than it is to just imagine what code might be out there.

I think I should clarify what I meant: I’m not saying that no one is using typing. I’m saying that typing is in its infancy relative to all of the features that the typing community is waiting for. Python could stop being developed today, and I could still productively use it for a decade without wanting to switch language. Typing has come a long way, and does amazing things, but there are still features that we are desperately waiting for.

And this where my motivation lies. All changes need to balance the costs of the change:

  • induced maintenance work (refactoring, etc.),
  • induced false positives that break CI,
  • induced false negatives that hide bugs that would otherwise be found

—against the benefits of the change:
  • providing typing expressiveness,
  • making typing easier to understand,
  • repairing false positives and negatives, and
  • perfecting the future world of typing for future users.

So when I think about this PEP, I see a very tiny amount of induced maintenance. We have a couple examples so far that might need minor tweaks mainly to prevent false negatives (which are not a huge deal).

The benefits are significant: strict type guards are more useful, easier to understand, they repair a number of flaws in typeshed (e.g., isawaitable and isdataclass), and they make the future of typing significantly better for future users.
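As an illustration of the isawaitable case (my sketch, not code from typeshed): under lax semantics a False result leaves the union un-narrowed, so the final return does not type-check; strict semantics would narrow the negative branch to plain int.

```python
import asyncio
import inspect
from collections.abc import Awaitable


async def _force(aw: Awaitable[int]) -> int:
    return await aw


def resolve(obj: "Awaitable[int] | int") -> int:
    if inspect.isawaitable(obj):
        # Positive branch: obj is narrowed to an awaitable.
        return asyncio.run(_force(obj))
    # With a lax guard, obj is still ``Awaitable[int] | int`` here, so a
    # type checker flags this return.  A strict guard narrows it to int.
    return obj
```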

In case you haven’t seen the linked issues that were added to the PEP, here they are:

Many people have been asking for strict type guards. Of course that doesn’t prove that there aren’t just as many people who are perfectly happy with lax type guards. Although, it does seem that in the standard library, there is a need for strict type guards, but not for lax ones.

That’s why I like Mehdi’s proposal best. I like PEP 724’s proposal to make TypeGuard strict. Then, if there’s enough demand, we could consider adding lax type guards with a clumsier name (or flag). This will steer users towards the strict type guards, which is probably what they want. I feel like future users will thank us for making their lives easier.

I realize we may not agree on this. I know that I tend to put a lot of weight on the future. But hopefully you understand my point of view.


Speaking as someone who has maintained large code bases for Jupyter in both TypeScript and Python as they’ve evolved their typing features, we have long taken the stance of pinning the version of TypeScript/mypy until we are ready to adopt new features.

In both cases we are starting from an untyped system and trying to bootstrap a typed system, and things will not always go smoothly. The difference between TypeScript and Python is that we don’t necessarily control which type checker will be used by both maintainers and consumers of the libraries.

Perhaps what is required is a declared “requires-typing” version, similar to “requires-python” in pyproject.toml, where a library explicitly says which version of the typing spec they adhere to, and that could be honored by both IDEs and CLI tools, both within the library and from downstream consumers.


Apologies, but I genuinely don’t know what it is you are trying to say. Multiple people have said that they have code that uses the existing semantics, and very few people have said they have audited their existing usage to confirm it will still work under the changed semantics. I’m genuinely baffled as to what you’d need to accept that there’s a case for keeping the existing semantics - I’m not even asking you to agree, just to accept that there’s a case for it.

As a part of the “typing community” the key feature that I would like at this point is stability. And I agree that typing is in its infancy as far as that is concerned…

Obviously, and I didn’t say otherwise. But it’s important to acknowledge the limits of what you can find. We’ve struggled with that in packaging. There is a huge amount of usage in closed-source code, and it’s often using features in different ways than you see in open-source projects. I have no reason to believe typing would be any different in this regard.

I think you’re optimistic about “minor tweaks”. In the example I gave from pip, I’m not even sure if I’d know how to confirm that it still works with strict type guards, much less know how to refactor it.

We definitely don’t agree. As to whether I understand your point of view, all I can say is that it seems to me that you don’t value stability (and in particular, the stability of documented features) as highly as I do - if that’s not how you’d describe your point of view, then I’m afraid I don’t understand.

Also, I find your assertion that you “put a lot of weight [on] the future” very odd. I would say that I put a lot of weight on the future, and for me that means setting down firm roots, making sure the foundation of the typing ecosystem is stable, well-documented, and reliable. Once those foundations are achieved, then building new features on top will be far less disruptive to the community, and we all benefit.

def _should_install_candidate(
    candidate: InstallationCandidate,
) -> bool:
    if installed_version is None:
        return True
    return candidate.version > installed_version

if candidate is not None and _should_install_candidate(candidate):
    # install the candidate

In the absence of any other context, if I were writing something like this, I would accept the hard type (not None), and do the null check separately.


I generally agree with @NeilGirdhar’s view. I think there are very different stability expectations for a runtime library vs a static analysis tool. I consider it important that python runtime type behavior is stable. The type checker’s inference behavior, and which errors are shown, I view with the same expectations as pylint/ruff/similar tools. I expect all static analysis tools to be pinned to one version in CI. pip similarly pins to mypy==1.0.1. pip’s type errors are not the same on the latest version of mypy as on the pinned one (very similar, differing by ~3 errors). Pip’s type errors are more different if you run it with a different type checker.

My code changing behavior and failing is categorically different to me than my linter/type checker adding 5 new error messages in a release. Often I’ll look at a new error and decide it makes sense and correct it, or disagree/be unsure and maybe type: ignore it/revisit later. A few type ignores are pretty normal, and while having a heavy amount of ignores/casts is bad, the python type checking system is not an all-or-nothing affair for value. If most of my code passes type checking and a few places don’t, which I ignore/revisit later, that’s still value for me.

My experience with other static analysis tools like pylint is also that it is common for their errors to change across versions. Pylint had been in use for years prior to mypy, and even now I think pylint/flake8 have usage pretty comparable in magnitude (maybe larger/smaller) to mypy. Should those libraries also have strong stability expectations and be involved in PEPs for changes? Pylint and similar tools also commonly have IDE integration.

For a packaging comparison, I’d consider mypy/pyright’s exact errors closer to poetry/hatch’s configuration choices. Packaging libraries share common standards, but they also have the freedom to make many of their own maintenance decisions. Poetry deciding to deprecate a config feature in a month vs a year is their decision. This similarly applies to pip. Pip has its own deprecation policy that it follows separately from the python deprecation policy. Mypy similarly documents itself as not following SemVer because,

Mypy doesn’t use SemVer, since most minor releases have at least minor backward incompatible changes in typeshed, at the very least. Also, many type checking features find new legitimate issues in code. These are not considered backward incompatible changes, unless the number of new errors is very high.

Quoted directly from mypy’s release notes. Expecting type error stability across versions directly disagrees with mypy’s maintenance policy, and feels similar to saying that packaging libraries should have a deprecation policy similar to the python language’s.

The other aspect is that, in practice, much of the type-checking error instability a user sees does not come from type checkers. It comes from library stubs/types. For many python libraries, the type hints are incomplete. As they evolve, it is expected behavior for type checkers to report new errors. In the numpy/pandas/matplotlib data science ecosystem in particular, the types are in a lot of flux/evolution right now. It’s normal for updating your matplotlib version to impact the type errors reported. And as a user, whether backwards compatibility is broken due to the type checker or due to a library’s types changing feels very similar. Either way, I need to review new type errors and decide whether to adjust my code or ignore them.

edit: Another aspect is that mypy’s policy is one of several for type checkers. Each type checker has separate maintainers with some overlapping goals, but also separate goals. One type checker may value stability higher than another. Another may choose to update more frequently and have a different deprecation policy/standard on what is a reasonable change for errors. A PEP does not feel suited for deciding the maintenance policy of multiple libraries released separately from the python language, some by fully separate owners.


I don’t really see why that should be the case. A static analysis tool is basically just an application whose data is a program’s source code. Stability for such a tool is just as important as for any other tool, like a command-line tool that, say, converts BMP to PNG, or computes lexical statistics on a text. The thing that, in my mind, implies greater stability is the fact that something is in the stdlib. That would mean that something like mypy is free to evolve more rapidly in terms of what typing constructs it handles, but that the stdlib shouldn’t attempt to keep up with that.

I agree. This to me is the fundamental issue, and is one reason I have an overall pessimistic view of the various typing changes in Python. My perception is that the increasing spread of static typing is leading more people to spend more of their time trying to please the type checker, looking for new ways to write their code in order to chase some perceived benefit of being able to use a particular typing feature. The type checker is often not easing the work that people are already doing, but causing them to add an additional kind of work (typing-specific code gymnastics) to their load. This workload then spreads to everyone who has to interact with such code, let alone contribute to projects using typing, because it becomes part of everyone’s expectations that any work on writing Python will include some nonzero amount of typechecker wrangling.

I do wish we could stick to the original notion, which is that everything related to static typing is 100% optional in Python. To me that means that typing-related considerations should never have any influence on how code is written or what it actually does; it’s purely a convenience layered on top. It means that any proposal that envisions people refactoring their code to please a typechecker is prima facie misguided. If people want to do that, it’s their choice, but nothing in the official documentation, a PEP, the stdlib, etc., should contain even a whiff of a suggestion that anyone should ever do that.


It seems there’s two distinct camps here:

  • Backwards compatibility is sacred for Python, so it should also be for the type specification.
  • Backwards compatibility is not sacred for the type specification, and the benefits of breaking it outweigh the downsides in this specific case.

To move this forward, is the addition of StrictTypeGuard (instead of changing TypeGuard) more palatable?

Eric Traut had proposed StrictTypeGuard in the thread that the PEP was based on, but the responses had a slight preference for breaking backwards compatibility. I don’t believe anybody really opposed StrictTypeGuard though.

I think the responses so far provide strong evidence against the current proposal of simply changing the meaning of TypeGuard. Which is sad, because it seems clear to me that the newly proposed semantics are better, and it’s confusing for users to have two objects that do almost but not quite the same thing. However, for a feature specified in a PEP, maintaining compatibility is important.

I don’t like the name “StrictTypeGuard”. “Strict” can mean a lot of things, and it’s not particularly obvious that the new behavior is more “strict” than the existing one. It’s different, but not necessarily more strict.

If we add a new object, I think it would make sense to make it support only the “strict” version, in the terminology of the table in PEP 724. That is, we would require that the StrictTypeGuard return type R is consistent with the StrictTypeGuard input type I. Users who want the “non-strict” behavior would simply continue to use TypeGuard.

If so, we could say that the new construct always narrows the type to a type that is narrower than before. Could we come up with some name that incorporates this concept? Perhaps TypeNarrower.


I never said that there “wasn’t a case for it”. Just like you, I’m considering the case of keeping the existing semantics, but I personally think they should be removed. (More on why below.)

Maybe we should ask the PEP writer to add a migration guide? Ultimately, any flavor of type guard can be rewritten without type guards. They are typing sugar. The current type guard can always be rewritten:

def is_u(val: T) -> bool: ...  # defined as before, but just returning a Boolean.

and then used as follows:

def f(val: T):
    if is_u(val):
        u = cast(U, val)
        # Use u instead of val from here on to get the `U` type you wanted.
        # val is unchanged, as desired.

Let’s examine the future under your proposal of keeping TypeGuard versus the future that’s suggested by the PEP and pretend that we’re a new user who has to choose a type guard for a function that she’s writing, and ask which future is a better one to live in.

First, some background on the various type guards:

We have the current TypeGuard:

def is_u(val: T) -> TypeGuard[U]: ...

def f(val: T):
    if is_u(val):
        # Type of ``val`` is narrowed to ``U``.
        ...
    else:
        # Type of ``val`` remains as ``T``.
        ...

Now, the PEP 724 “strict” TypeGuard:

def is_u(val: T) -> TypeGuard[U]: ...

def f(val: T):
    if is_u(val):
        # Type of ``val`` is narrowed to ``T & U``.
        ...
    else:
        # Type of ``val`` is narrowed to ``T & Not[U]``.
        ...

Note that this is extremely logical since it can work exactly like an instance check for U.

And the proposed LaxTypeGuard:

def is_u(val: T) -> LaxTypeGuard[U]: ...

def f(val: T):
    if is_u(val):
        # Type of ``val`` is narrowed to ``T & U``.  Note the difference!
        ...
    else:
        # Type of ``val`` remains as ``T``.
        ...

Now let’s compare the benefits for a new user in each future.

In the “stable future” that you’re proposing I guess you want to keep the current type guard, and add the strict type guard? In that case, the stable future has the following problems:

  • There is doubt about which type guard is required, which requires a deep understanding of the documentation.
  • Using the current type guard requires learning a new reasoning pattern that is unlike instance checks.
  • The current type guard has a lot of surprising behavior, judging by all of the bug reports against it. In particular, it does not narrow T; it replaces T with U in the positive case. This will probably necessitate adding LaxTypeGuard anyway, and maybe deprecating the current form.

The “progressive” future that I’m proposing would have the strict type guard only. Thus,

  • There is no doubt about which type guard is required, which guides new users to the obvious choice.
  • Using the type guard can work exactly like an instance check, which makes it easy to understand.
  • If there’s a need, a LaxTypeGuard can be added. It has the benefit of mirroring the strict type guard, but without the surprising behavior. It’s a bit trickier to desugar than the current type guard, so the case for adding it is stronger too.

These are two futures that I was comparing, and this is the basis for my motivation. I understand the desire to mitigate upgrade pains, but I think they’re outweighed by the benefits of creating the better future.


I agree, even as someone who is actively asking for this future. But both of these subtly different things exist, which is the counterpoint which makes me uncomfortable with the change as currently proposed. I’d be happy to contribute a page to the standalone typing docs about “TypeGuard vs StrictTypeGuard” if things follow this path. And then both stdlib types could link to that.
With proper doc (in whatever form), is this still a major issue?

I appreciate that you’re thinking of alternate names. “X” and “StrictX” are harder to differentiate than “X” and “Y”.
Not against “TypeNarrower”, but just to offer an alternative, “InstanceCheck” also rings true.
But this reveals that you see the naming issue and likelihood of user confusion as much more severe than I. Perhaps so severe that no quality of documentation could alleviate the problem?

That’s a pretty different take on this from where I’m at. I read the PEP and felt like “this is an improvement, but wasn’t the rejected StrictTypeGuard option better because it’s backwards compatible?”

One thing I want to note about the TypeGuard usage I found browsing projects is that many guards are not inherently sound outside of their specific calling context.

For example, here’s something we have at my work:

class A(TypedDict):
    x: int

class B(TypedDict):
    y: int

class C:
    x = 1

def has_shape_a(z: Any) -> TypeGuard[A]:
    return isinstance(z, dict) and "x" in z

def has_shape_b(z: Any) -> TypeGuard[B]:
    return isinstance(z, dict) and "y" in z

def demuddle(w: A | B | C) -> int:
    if has_shape_a(w):
        return w["x"]
    if has_shape_b(w):
        return w["y"]
    if isinstance(w, C):
        return w.x
    raise NotImplementedError(...)

In their broader context, the guards are appropriate and check the shape of data from a store with new and legacy data formats.
As used, these guards could be made strict with no ill effect. But outside of that context, the guards themselves aren’t even valid under current semantics. Note how they fail to check value types. Asking if they would be valid under StrictTypeGuard is sort of a category mistake.

I haven’t looked into it, but I suspect that the pip internal check may be a similar case.


If I’m in a “camp”, it’s “Typing should aspire to allowing Python developers to annotate their code without refactoring it as much as possible.” The backwards compatibility arguments are, to me, secondary to whether type checkers are a place to decide which of two (feasible to check) idioms will be checked correctly.

That said, this thread has definitely made me more hesitant to use TypeGuards, and it makes me think I’ll probably need to give new features several years to mature before adopting them, even after landing in stdlib.

I have found typing an interesting puzzle, to see what it takes to describe the things we did 15 years ago with custom data structures to fit some really niche problems. If it stops being interesting and just becomes another maintenance headache to keep up with whatever mypy is doing this year, I’ll probably just stop doing it.


I think these are pretty good examples. At its core, a type guard is a way to define new cast rules. cast(str, x) also allows you to lie to the type checker and can be unsafe. It’s like an escape hatch for places where you, as the author, know more than the type checker can determine. Escape hatches are useful but can be easy to abuse/misuse.

If a typeguard function is intended to be general and called in many files, like a public api, then you’d have to consider any argument that could be passed. This applies well to libraries writing type hints for other users calling them, and to typeshed. The inspect module has a bunch of typeguard functions that fit this. If you have a typeguard function that’s only used in one file, you can make a lot more assumptions.

Explaining these differences well dives into type system/checker behavior and concepts a lot further than the standard library documentation does today, and I would not want the standard library documentation to gain a tutorial on narrowing/cast/typeguard practices. Instead, the “Static Typing with Python” typing documentation feels like a better spot. Especially since the backwards compatibility concern here exists because we added too much detail on typeguard behavior to the standard library documentation in the first place.

The awkward current spot is:

  1. Existing typeguard behavior is not the one a lot of users would expect or guess. The proposed pep is closer to how typescript has successfully had type guards for years. Changing the behavior from current to strict type guards is a backwards compatibility change in type checker inference.
  2. Explaining the differences between the two, and how the decision depends both on implementation and planned usage, feels like a more advanced topic than most typing features. Maybe a well-written tutorial is enough and this concern is overstated.
  3. Unlike normal runtime behavior, there is no clear process to deprecate type inference behavior. Type checkers are released on a separate release cycle from the language, there are multiple of them, each with their own cycle, and the type checker version a user runs is independent of what their libraries use. Adding constraints like “requires” fields to packaging metadata would likely be dependency pain. Warning when existing usage will change is difficult, especially given that which form to use is not something a type checker can determine. The core goal of a typeguard is to allow the user to introduce new casting rules that the type checker is not smart enough to know are appropriate for a given usage.

Number 3 is a general problem: today, type inference/checkers lack good ways to handle backwards incompatible changes. The closest I see is mypy primer, and sometimes, if a change causes a lot of projects to have new errors, a mypy maintainer will open prs to those projects assisting with the change. The implicit optional change is a good example, where the type system rules evolved in a backwards incompatible way from pep 484 years after it.

As someone who has read this discussion/similar topics and pondered the difference between the two, adding a new form StrictTypeGuard/TypeNarrower would work well for me. For the average user, I would be impressed/surprised if adding a new form didn’t create confusion. But keeping the current behavior also keeps a common confusion point.

On the positive side, for many usages, whichever one you pick works out the same. For the pip code: if that function were used in more spots, then it would make sense for it to be LaxTypeGuard. For the two spots it is used in, lax vs strict will lead to the same behavior in the end. This is why changing the behavior globally to the pep-proposed one ended up having mostly good mypy primer results, and most codebases had few/0 type error changes.

Edit: On the naming thing, I think the new behavior is closer to isinstance behavior, so names that highlight that are nice. At the same time, even the new behavior is, I think, still not the same as the real isinstance function. It is impossible today to write a custom function that has the same typing behavior as isinstance; type checkers special-case it as a built-in. isinstance is a form of native type guard, and the rules it follows are similar to, but different from, both lax and strict type guards. I’m not even sure whether isinstance differs from the strict type guard/pep proposal, as the exact rules for isinstance are effectively undefined behavior for type checkers today. The main ideas are documented in type checker docs, but the full rules are messier, and you will likely find type checker inconsistencies here.

Edit 2: It would be nice if, under this pep’s semantics, isinstance(obj, Foo) and using a function with TypeGuard[Foo] had the same behavior. I think there are some soundness holes, but pragmatically, making the behavior exactly the same is simpler to explain than saying that the new behavior narrows both positive and negative branches but the rules still may differ in subtle ways between normal isinstance and a user-defined type guard.


How about the simple TypeCheck? To my eyes, that suggests a binary nature of the outcome, in the sense that something failing a TypeCheck[T] isn’t a T.

(On that same note, I thought that TypeGuard was quite aptly named in that it doesn’t have a strict binary connotation.)


This seems like a reasonable idea, and suggests IsInstance[T] for the type name. Even if there are subtle differences, maybe documenting them under a note explaining why the semantics can’t be exactly the same would be enough.

For me, the two names would then make sense

  • TypeGuard - asserts that a value is of a given type, but says nothing when it can’t make that assertion.
  • IsInstance - confirms if a value is or is not of a given type.

“Guard” has the sense of a one-sided check, whereas “is” feels two-sided to me.

On a more general note, it would be nice if cases like isinstance, where type checkers special-case particular builtins and stdlib names (I believe dataclass is another one), were listed somewhere and documented. At the moment they tend to come up in discussions as unverified presumptions, which is not ideal.