PEP 781: Make ``TYPE_CHECKING`` a built-in constant

I like the principle of making the if TYPE_CHECKING: pattern cleaner, but I wonder if type checking could be one case of a more general pattern? It’s often recommended to use a multi-valued option rather than a boolean, so that the situations an option covers can be expanded over time.

If we had something like __mode__, we could use it to check for other modes of operation in addition to type checking. For example, I often think it would be nice to be able to include unit tests alongside the functions they cover, rather than in a separate module. I could imagine doing that with some kind of __mode__ enabled when pytest is running:

def plural(singular_word: str, plural_word: str | None = None, *, count: int) -> str:
    """Get a plural or singular version of a word to describe `count`."""
    return singular_word if count == 1 else (plural_word or f"{singular_word}s")


if "pytest" in __mode__:

    def test_plural() -> None:
        assert plural("example", count=1) == "example"
        assert plural("example", count=2) == "examples"
        assert plural("sheep", "sheep", count=2) == "sheep"
        assert plural("affix", "affixes", count=2) == "affixes"

If the interpreter were able to cheaply throw out the test block (or perhaps even strip it out at package-install time?), I wouldn’t feel bad about bloating a runtime module.

This is similar to the if __name__ == "__main__": pattern, but I’m imagining multiple modes could be enabled at once.

And for type checking, this could be something like:

if "typechecking" in __mode__:
    from typing_extensions import assert_type as assert_type
else:

    def assert_type(val: _T, typ: Any, /) -> _T:
        return val

What else could a mode be used for? Perhaps other expensive things done at runtime, like @dataclass() or enum definitions, as @steve.dower mentions above? For example, if a module’s __mode__ conditional blocks could be pre-processed before runtime (e.g. at install or build time), a regular class definition could be substituted for a dataclass or enum.
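
A rough sketch of what that substitution could look like, reusing the hypothetical __mode__ from above (the "dev" mode name and the Point class are made up, and no such pre-processing tool exists today):

if "dev" in __mode__:
    from dataclasses import dataclass

    @dataclass
    class Point:
        x: int
        y: int

else:
    # A build/install-time pre-processor could have emitted this plain class
    # instead, skipping the @dataclass machinery at import time.
    class Point:
        def __init__(self, x: int, y: int) -> None:
            self.x = x
            self.y = y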

3 Likes

Thank you for your important feedback.

This PEP does not completely replace typing.TYPE_CHECKING.
Users of tools that set typing.TYPE_CHECKING = True can continue to use typing.TYPE_CHECKING as before.

I will add a note in the “Backward Compatibility” section about this use case.

None of pyanalyze, typeguard, or beartype uses that pattern.

It is not as easy as you think; typing uses a lot of metaprogramming.

  • Reimplementing typing in C doesn’t reduce the time to import other modules from typing.
  • The metaprogramming parts are really hard to port to C.
  • Other Python implementations would keep using typing.py instead of a C implementation.

Additionally, this PEP can strip typing-only code from the bytecode. For example:

# https://github.com/sqlalchemy/sqlalchemy/blob/dabd77992d785cad89ed110acd2f648a454fb7ae/lib/sqlalchemy/sql/elements.py#L133-L191

@overload
def literal(
    value: Any,
    type_: _TypeEngineArgument[_T],
    literal_execute: bool = False,
) -> BindParameter[_T]: ...


@overload
def literal(
    value: _T,
    type_: None = None,
    literal_execute: bool = False,
) -> BindParameter[_T]: ...


@overload
def literal(
    value: Any,
    type_: Optional[_TypeEngineArgument[Any]] = None,
    literal_execute: bool = False,
) -> BindParameter[Any]: ...


def literal(
    value: Any,
    type_: Optional[_TypeEngineArgument[Any]] = None,
    literal_execute: bool = False,
) -> BindParameter[Any]:
    r"""Return a literal clause, bound to a bind parameter.
    ...

This code creates 4 function objects.
Three of them are registered for typing.get_overloads(), and one goes into the module namespace.

If you don’t need typing.get_overloads(), TYPE_CHECKING can skip creating the three overload functions at runtime. But this PEP can strip all of the code objects for those three functions from the bytecode (the pyc file).
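
As a rough, self-contained illustration (a toy compile()/marshal comparison, not SQLAlchemy’s real module; exact sizes vary by Python version), replacing a name the compiler can’t evaluate with a constant-false condition drops the nested code object from what would be written to the pyc:

import marshal

SRC = """
if TYPE_CHECKING:
    def literal(value, type_=None, literal_execute=False): ...
"""

kept = compile(SRC, "<module>", "exec")
stripped = compile(SRC.replace("TYPE_CHECKING", "False"), "<module>", "exec")

# The compiler can't evaluate the name TYPE_CHECKING, so the code object for
# literal() stays in co_consts; with a constant False test the dead branch
# (and its nested code object) is eliminated before bytecode is emitted.
print(len(marshal.dumps(kept)) > len(marshal.dumps(stripped)))  # True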

Since Python 3.14, each annotated def statement creates two function objects (one extra for the annotations; see PEP 649).
This would help WASM Python users write rich type hints.
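
For instance, on a 3.14 interpreter you can see both pieces (a small sketch of the PEP 649 behaviour, not anything specific to this PEP):

def literal(value: int, literal_execute: bool = False) -> int:
    return value

# The annotations are compiled into a separate, lazily-called function:
print(literal.__annotate__(1))
# {'value': <class 'int'>, 'literal_execute': <class 'bool'>, 'return': <class 'int'>}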

I think it would be good for the PEP to summarise more of the discussion, e.g. to discuss potential alternatives.

  • For instance, why not ask type checkers to treat any assignment like TYPE_CHECKING = False as defining a symbol that is true at type check time? mypy and pyright already do this.
  • It would also be good to talk about the footguns of __type_checking__ / TYPE_CHECKING. Use of this can totally break runtime typing and allows users to arbitrarily lie to type checkers (a short sketch follows below). This to me makes it feel like a feature for advanced users, whereas being a builtin makes it something a lot of Python devs will encounter.
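
A small sketch of that footgun (illustrative names only): the static checker is satisfied, but anything that resolves the annotations at runtime breaks.

from typing import TYPE_CHECKING, get_type_hints

if TYPE_CHECKING:
    from decimal import Decimal  # visible to the type checker only

def price(value: "Decimal") -> str:
    return f"${value:.2f}"

get_type_hints(price)  # NameError: name 'Decimal' is not defined
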
5 Likes

I think the Motivation section describes that already.

I don’t think that topic is appropriate for this PEP. The issue is not caused by this PEP.
If it is really a big problem, we should add a note to the documentation for TYPE_CHECKING right now.

1 Like

Is it possible to use TYPE_CHECKING as the constant?

Right now it’s possible to avoid a typing import with:

TYPE_CHECKING = False
if TYPE_CHECKING:
    from collections.abc import Iterable
    from typing import Any

It would be nice if we could keep using TYPE_CHECKING = False, and then when 3.14 is the lowest supported, remove TYPE_CHECKING = False altogether. I would not need to make any changes until October 2029 (when 3.13 is end-of-life).

This means I wouldn’t need to replace any code right now with __type_checking__ = False or try/except, and make sure I’m using the right version of mypy.

If not, this could be added to the PEP’s rejected ideas.

1 Like

I much prefer a new dunder. Not only is TYPE_CHECKING very shouty; the dunder also makes it clear that this is a special Python builtin. TYPE_CHECKING is not going away any time soon, so it can still be used for the time being, until, in a brighter future, we are able to use __type_checking__.

Edit: That said, I believe __type_checking__ should be a variable, not a keyword. Not only is a keyword unexpected; a variable also offers an easier forwards-compatible path.

1 Like

It is difficult. TYPE_CHECKING = True is possible now, and Sphinx does it.
Making TYPE_CHECKING a constant would break such code.

__type_checking__ is much rarer, and it is a reserved name.

Type checkers aren’t inspecting the value of typing.TYPE_CHECKING though - they’re substituting the entire meaning of specifically the TYPE_CHECKING that comes from typing (because type checkers are expected to replace that entire module with their own internal/static logic, not to execute anything from inside of it).

If a new variable is added anywhere besides the typing module, type checkers have to learn new rules about how to interpret it, even if the name is TYPE_CHECKING. That is why assigning to TYPE_CHECKING yourself doesn’t actually help the static checkers, and trying to cleverly assign it anything other than False doesn’t help them either.

Ultimately, if False: is just as good at avoiding actual execution while still having code included. Back when I was implementing type checking/code completions before the typing module existed, we deliberately said that we’d still treat code under if False as if it executes, so that you could import things that are helpful for type analysis but unnecessary at runtime. The typing.TYPE_CHECKING constant is just the more readable version of this (and it more cleanly allows checkers to exclude code paths that they can determine will not be taken).

All of this means that type checkers need a literal typing.TYPE_CHECKING today, and defining __main__.TYPE_CHECKING doesn’t (or shouldn’t) help them - that’s just a regular variable that they have to deal with, and setting it to False in code should mean that if __main__.TYPE_CHECKING blocks are excluded by the type checker (since they can determine that it’s always False).

So the idea is that “code needed for the type checker but not needed for the rest of the code [runtime]” is protected by something that always evaluates to False, but the type checker knows to ignore regardless of value.
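
A minimal sketch of that pattern as it works today with typing.TYPE_CHECKING (the module and class names are made up): the import exists only for static analysis, and the runtime never pays for it.

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Expensive and/or circular import that only the type checker needs.
    from heavy_orm import Session

def close_all(sessions: "list[Session]") -> None:
    for session in sessions:
        session.close()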

All of which is why I favour __type_checking__ as a built-in variable that is allowed to be overridden by code. Because type checkers should always treat if __type_checking__ blocks as if they are executed, and code that has to run on pre-3.14 can assign __type_checking__ = False unconditionally and the code will not be executed on any version.

I have no idea why Sphinx is doing this, but it will cause their interpretation of code to break. The biggest reason to protect code with if TYPE_CHECKING is to avoid circular imports (often required for type checking, but entirely unnecessary for Python).

I assume Sphinx is using exec or import to parse code (rather than compile or the ast module), in which case they aren’t a static type checker, and really ought to be skipping code intended only for static analysis.

2 Likes

@AA-Turner already addressed this point above:

1 Like

Regarding cross-version compatibility, globals()["__type_checking__"] = False will allow the name to be set on older versions without causing a syntax error on newer versions (even with the simplest keyword-based implementation).

Edit: that said, making __type_checking__ = False a legal statement (without allowing any other form of assignment to that target) would also be straightforward.
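
To make the first suggestion concrete, a sketch of how that spelling could be used, assuming the keyword-based implementation the PEP proposes (on 3.14+ the dict entry is simply ignored; on older versions it defines the name that the if statement looks up):

globals()["__type_checking__"] = False  # legal even if the name becomes a keyword

if __type_checking__:
    # Never executed at runtime; a (new enough) type checker treats it as taken.
    from collections.abc import Iterable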

3 Likes

I must be missing something. To me this feels like something that should absolutely be a variable in the built-in namespace. (With value False.) Why is that controversial?

8 Likes

I believe that the idea is to have it be exclusively and only False, to allow for compiler optimisations (e.g. reducing bytecode size).

Personally, I think making it a keyword is a mistake. I’d prefer to either have a normal variable in the built-in namespace, as you suggest, or to special-case the name to forbid assigning anything but False to it, to allow keeping the compiler optimisations. Either option is better than the keyword approach, in my view.

A

3 Likes

I’d rather not do this. I view if TYPE_CHECKING: as a hack. It’s a hack that’s necessary for some use cases, but if we’re talking about the evolution of the language, I’d prefer to make hacks unnecessary instead of embedding them in the grammar.

Here’s some ideas for what we can do to make if TYPE_CHECKING less important:

  • Make annotations not evaluated by default. This is done in Python 3.14.
  • Add a mechanism for lazy or typing-only imports. PEP 690 was rejected for being too magical, but I think there’s appetite for a more explicit version (lazy import foo?)
  • Perhaps add a mechanism for lazy decorators, which are not executed at runtime. (One-minute idea: @@final <newline> def foo(): ... would make it so foo.__decorators__, when accessed, evaluates to [final].)
  • Look closely at what typing.py and other expensive modules do at import time and make it faster.

These are just a few half-baked ideas, but they can all help make Python code faster and more readable in general, without embedding a hack into the language definition.

12 Likes

I follow @Jelle here, I think a type import feature would fit better. Prior discussion:

While typing.TYPE_CHECKING/__type_checking__ also allows code to be conditionally defined, it is most of the time used to lie to the type checker.

I largely agree with this, but I don’t view it as a hack. I view it as a necessary tool in the absence of something better, one that has sharp edges when used to lie rather than to avoid currently unresolved issues that aren’t the types themselves.[1]

Being a builtin name makes more sense to me than a keyword as a result. Pragmatism, and acknowledging that we don’t have shorter-term solutions here, makes me lean toward adopting this, but no further. This shouldn’t be a constant, and we should look for more widely applicable ways to improve the situation. That could be if False or __type_checking__: to get the existing if False optimizations while signaling the intent to a type checker, rather than making a new long-term constant.

I may have some changes to suggest for 3.15, but they’re too much to ask for this close to 3.14, and unfortunately attempts at benchmarking this in real-world, import-time-sensitive use got delayed.

Alternatively, since the only thing @final actually does is attempt to set obj.__final__ = True, we could special-case that as a classvar in the type specification, as brought up here

+1 to this. I think type import foo and type import bar from foo are the natural syntax here, building on what we already have, though? If runtime laziness is desired rather than deferral until introspection for module annotations, people can always inline an import instead.


  1. If anyone wants some ideas on preventing the sharp edges, here’s something I use in CI to ensure that the annotations I have are runtime-valid even with the tricks I use to defer their evaluation ↩︎
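
For example, a very small version of that kind of check (not the linked tool; the module name is hypothetical) could just force every annotation in a module to resolve:

import importlib
import inspect
import typing

def assert_hints_resolve(module_name: str) -> None:
    module = importlib.import_module(module_name)
    for _, obj in inspect.getmembers(module):
        if (inspect.isfunction(obj) or inspect.isclass(obj)) and obj.__module__ == module_name:
            typing.get_type_hints(obj)  # raises NameError if a hint can't resolve

assert_hints_resolve("mypackage.core")  # hypothetical module name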

1 Like

I agree that this is not a hack and would rather see this small PEP implemented now than postpone it for a much larger more intrusive PEP that, considering how PEP 690 went, I’m sure would take a long time to reach consensus with still a fairly high chance of rejection.

12 Likes

I am in full agreement with Jacob here.

1 Like

Since one of the motivations is compile-time code elimination, __type_checking__ must not be assignable, just as __debug__ is unassignable.

# a.py
if __type_checking__:
    print("hello")

# b.py
import builtins
builtins.__type_checking__ = True
import a  # Expect "hello" to be printed.

An unassignable builtin needs special casing in the runtime. Unlike __debug__, __type_checking__ is always False, so implementing it as a regular keyword, like False, is much simpler: zero code changes are needed in the runtime.

This complexity would be required not only in CPython but also in other implementations, so adding an unassignable builtin increases the complexity of the language. That is why I chose a regular keyword in the PEP.

The most controversial point is the convenience story until Python 3.13 goes EOL:

# if __type_checking__ is not assignable at all
try:
    TYPE_CHECKING = __type_checking__
except NameError:
    TYPE_CHECKING = False # or from typing import TYPE_CHECKING

# if `__type_checking__ = False` is allowed
__type_checking__ = False

Should we increase Python language complexity only for this convenience until 2030?

I thought “no” when writing the PEP, but I will reconsider it after implementing it.

I don’t think these improvements are useful for reducing pyc size.

See my comment in the previous thread.

SQLAlchemy doesn’t put all of its @overload definitions into if TYPE_CHECKING: blocks. (They don’t need typing.get_overloads() except for the execution_options method, and execution_options is not defined in elements.py.)
But replacing TYPE_CHECKING with False already reduces pyc size by ~10%.

Additionally, as I said in this comment, PEP 649 doubles the number of function objects.
So the if False hack still saves some RAM and import time.

Making type annotations zero-cost without if False is a really hard problem.
I am not sure we can solve it within 10 years.

1 Like