Type inference for function return types

Python type checkers already exhibit a certain degree of type inference, the most obvious cases being lambdas (in some cases) and local variables. It might be useful if we had type inference for function (and method) return types.

Having a way to make type checkers infer the return type would have some benefits:

  • less repeating ourselves, especially with unwieldy type hints like Callable[[Callable[P, T]], T] (I was reminded of this when we were discussing the wraps typing the other day); see the sketch after this list
  • (niche) the ability to return types that type checkers use internally. I’ve run into this with some internal types in the Mypy attrs plugin; there are types produced there that are not possible to actually express outside the plugin
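
To illustrate the first bullet, here's a minimal sketch (a hypothetical decorator factory, not from any real library) of the kind of signature that currently has to be spelled out by hand:

from collections.abc import Callable
from typing import ParamSpec, TypeVar

P = ParamSpec("P")
T = TypeVar("T")

def with_logging(prefix: str) -> Callable[[Callable[P, T]], Callable[P, T]]:
    def deco(func: Callable[P, T]) -> Callable[P, T]:
        def wrapper(*args: P.args, **kwargs: P.kwargs) -> T:
            print(prefix, func.__name__)
            return func(*args, **kwargs)
        return wrapper
    return deco

The outer annotation repeats information that is already fully determined by the inner def deco; that repetition is exactly what return type inference would remove.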

Downsides:

  • the inferred return type would not be available in a runtime context, but if that’s important, users can just spell it out explicitly
  • one more thing to learn
  • doesn’t actually allow anything new (in 99.99% of cases), just makes the experience more pleasant

Interestingly, Pyright already infers the types of functions with no return type annotation; Mypy doesn’t and treats them as Any. I assume changing that default would be a no-go for Mypy simply because of the backwards compatibility issues, and there’d be a weird edge case with a function that has no arguments and no return type:

# Ignored as untyped due to no annotations
def utcnow():
    return datetime.now(tz=utc)

What if we used a special symbol to opt into type inference for return types? For example, if we used ...:

def utcnow() -> ...:
    return datetime.now(tz=utc)

# or maybe
from typing import Inferred

def utcnow() -> Inferred:
    return datetime.now(tz=utc)

2 Likes

I don’t think this would be a backward compatibility issue for mypy. After all, the capability could be enabled by an opt-in configuration flag. Jukka (the original author of mypy) said that he soured on the idea of return type inference because of his early prototyping efforts. He also has some philosophical concerns which he discusses here.

As you note, pyright infers return types if the function doesn’t have an explicit return type annotation. My experience has been different from what Jukka describes. I find that pyright is able to infer the correct return type the vast majority of the time, and developers seem to really appreciate this. It eliminates the need to provide explicit return types in the majority of cases. In situations where pyright is unable to infer the correct result, it’s typically easy for the developer to add a correct return type annotation.

I’ll note that I copied the return type inference capability from TypeScript, where it also works well and is well appreciated by developers.

Incidentally, pyright technically violates the current draft of the Python typing spec which states:

For a checked function, the default annotation for arguments and for the return type is Any.

I think this is too restrictive, so I plan to argue in favor of changing this wording in the spec — to at least make it permissible for type checkers to infer return types if they choose.

What if we used a special symbol to opt into type inference for return types?

I think that’s unnecessary. If this is an opt-in capability, it should be a type checker configuration switch rather than something that requires opt-in on a per-function basis.

3 Likes

My general thought is that the Python type checking ecosystem could really benefit from a tool that infers type annotations and then adds them automatically and explicitly to the source. This would address most of the philosophical issues: you still get the documentation benefits of annotations, error messages and the debugging experience stay good, and there is no type checker performance cost.

Explicitly opting into inference on a per-function basis could be useful as a type checker flag if you want to ensure a human has thought about the signature for each function, or if type inference is unreliable. But as Eric discusses, if inference usually works well, I’m not sure how much need there is for the additional -> .... (I do have some thoughts about how the spec should navigate the question of inference, but it sounds like Eric is just foreshadowing the spec change for now.)

3 Likes

FWIW, I believe pyre has a pyre infer subcommand that does exactly that.

I’ll also note that Pylance, the VS Code language server built on pyright, provides an easy way to add an inferred return type to a function through the use of a one-click “code action”. This isn’t fully automated like pyre infer, but it allows for a more interactive approach for those who prefer that.

4 Likes

I’ll add a downside here:

The current default (of Any) works well for gradually typed code bases. By defaulting to Any, a lack of annotation helps identify places where a human has not yet reviewed whether the current typing and behavior are correct, or as correct as can currently be expressed. While I’m generally in favor of more inference and less manual typing, I don’t think typing in Python has evolved in a way that makes this a good thing to change by default. I would be okay if the wording were amended to allow it as a configuration option in type checkers. I would also be okay with a decorator which conveys that type inference should be used if possible. I would not want this on by default, as the inferred type may be wider or narrower than intended in code bases that predate typing and have not been fully updated, and for functions which are intentionally used with duck-typed behavior.

1 Like

That’s a really cool idea. However, I place a lot of value on source code conciseness, and in my ideal world the default would be inference only (no return type annotation), falling back on tooling to generate the type for me (what you’re proposing), with writing out the type explicitly as the last option. I’m leery of having tooling generate boilerplate for me from my Java days :wink:

I’m a long-time VS Code user; how do I trigger this?

1 Like

I’m curious how well that works. Long ago, when we were experimenting with tooling to infer types from the source code, my experience was that the inferred types were often “correct” as far as the source code goes, but either too verbose, or overspecified, or otherwise not close to what the programmer would write. For example, a function might look like it takes all numbers, but the programmer meant it to work on integers only, and other numeric types might be excluded by future changes to the code. Or something might be called with lists only but be carefully designed to work with sequences. Etc.
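
A minimal sketch (hypothetical code, not from that tooling) of the over-specification concern:

def window(values: list[int], size: int):
    # An inference tool reports the return type as list[int], because that is
    # what the slice produces today.  The author, however, may have intended
    # the contract to be Sequence[int], leaving room to return a tuple or a
    # lazy view in a future version.
    return values[:size]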

1 Like

A bit off topic: while I personally prefer return types to be inferred when not stated explicitly, I recommend always adding explicit return types, except for trivial functions:

  • Documentation (similar to function arguments)
  • This checks that a function really returns what the writer intended it to return
  • If the return type is not given and the implementation is wrong, it can be hard to spot the root cause of a problem: write_int(to_int(...)) yields “Can’t pass str to write_int” instead of “Can’t return str from to_int”. (Easy to spot in this toy example, but much harder in some real world code; see the sketch below.)
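
A minimal sketch of that last bullet (hypothetical functions), assuming a checker that infers the unannotated return type:

def write_int(value: int) -> None:
    print(value)

def to_int(raw: str):            # return type left to inference
    if not raw:
        return ""                # bug: meant to return 0
    return int(raw)

write_int(to_int("42"))          # error reported here, at the call site,
                                 # rather than at the faulty return statement

With an explicit -> int on to_int, the checker points at the return "" line instead.
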
1 Like

An opt-in flag could cause issues with PEP 561 packages. If a PEP 561 package that uses mypy for type checking enabled this configuration flag and omitted explicit return types, users of the package who don’t have the mypy flag enabled would silently lose some return types. If, on the other hand, the flag were implicitly enabled for all PEP 561 packages, there is some risk of backward compatibility issues due to additional inferred types, though it’s not clear what the impact would be. Then again, requiring explicit return types only in PEP 561 packages doesn’t seem like a viable option for pyright users.

Requiring a special marker for inferred return types would solve the above issues.

Also, if PEP 561 packages were to start relying on this more widely, differences in type inference semantics between type checkers would become more pronounced, as a larger fraction of types in public interfaces would be inferred. As PEP 484 and follow-up PEPs don’t specify type inference rules in any detail, and different type checkers have well-known differences in type inference behavior, enabling return type inference by default could reduce compatibility between type checkers, whereas everybody seems to agree that improving compatibility between tools is desirable and even important (including me). Perhaps we should first look into further standardization of type inference.

Note that the prototype that Eric referred to was from the very early days of Python static type checking. I think it’s quite possible that with all the improvements to the type system, stubs and type inference in the last 10 years, the quality of inferred types will be much more acceptable now.

I still think that having explicit return types often offers a net benefit, especially in contexts where you don’t have access to an IDE that can show the inferred return types for you (e.g. in many code review tools and when viewing code on the command line). For somebody who spends most of their time in an IDE that can perform type inference, I can imagine the experience will be different, but non-typing-aware tools are still widely used in my experience, including editors such as vim not configured to use an LSP implementation. I’m not saying that mypy won’t ever support inference of return types, though, especially if some of my above concerns can be addressed (or turn out to be insignificant after further analysis).

3 Likes

A PEP 561 package should provide full type annotations for any function or method included in its public interface and not rely on inference. As you point out, it’s important for library interfaces to be specified unambiguously, and type inference rules vary across type checkers. That observation is consistent with the guidance we’ve published here.

I’ll note that pyright incorporates a command-line option called --verifytypes that runs on an installed “py.typed” package and verifies whether its entire public interface is fully and unambiguously typed. Many library authors have adopted this tool to help ensure that their “py.typed” packages are “type complete”.

Return type inference is most useful for internal functions within a code base. I don’t recommend it for libraries.

2 Likes

I want to back this up by sharing my experience.

When I’m reading code that I’m not familiar with, it makes a huge difference whether it has explicit type annotations. So I would like defaults to encourage explicit type annotations.

1 Like

It’s been a few years since I used this as part of type adoption at Instagram; maybe someone from the Pyre team is around and has more recent experience. My recollection is that our experience roughly matched yours, but we did still find it useful. Our expectations from this kind of tooling were not to provide commit-ready type annotations, but just to make it easier and faster for a human to annotate. It’s usually easier for a human to correct an over-specified or overly complex type than it is to figure out the right type from nothing. The same was true of our use of runtime-profiled types from MonkeyType.

I recall one specific issue we ran into with over-specified inferred types: inheritance. It was common to get an inferred return type of None on a base method that was intended for override, where the correct annotation should have been Something | None. I’m curious how pyright’s inference handles this. Observing that A.meth() must always return None is not sufficient to conclude that a.meth() will return None, given a: A, because there may be a subtype B(A) with an override of the method.

FWIW, I checked this out in the pyright playground and the answer is that pyright will happily issue “incompatible override” errors based on its inferred return types. So if you have an inheritance hierarchy with method overrides where Pyright will infer incompatible return types (e.g. some implementations return None and some return int), and the real intended return type of the method is the union of those (int | None), you must add explicit return type annotations to avoid type errors.
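
A minimal sketch (hypothetical classes) of the hierarchy in question:

class Base:
    def fetch(self):        # inferred return type: None
        return None

class Concrete(Base):
    def fetch(self):        # inferred return type: int; flagged as an
        return 42           # incompatible override of Base.fetch

The intended contract was int | None; spelling that out explicitly on Base.fetch resolves the error.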

This violates the “gradual guarantee” (that removing annotations from a gradually-typed codebase should not cause new type errors to appear), but Python type checkers have never really taken the gradual guarantee as a hard requirement. If the runtime behavior is what they want, it might not be entirely clear to a new user that the way to fix those “incompatible override” errors is to add annotations.

1 Like

I’m not sure this is a good idea.

While inferring local variables is good, return types (and parameters) are part of the function signature/API, so this needs careful consideration.

If a missing annotation means that the type checker can infer the return type (instead of assuming Any), a seemingly innocent change or refactoring might silently change the signature of the function, without the person making the change realizing it. This might have problematic consequences, especially if, due to a small logic error, a return type changes from str to str | None, for example: the person writing this might not notice, type checking in the project might still pass, and only type checking downstream would notice the problem. This is worse yet if the return type inference is behind a feature flag. Today, with explicit type annotations, the type checker can immediately point out the issue.
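
A minimal sketch (hypothetical code) of that failure mode, assuming return type inference is in effect and load_name carries no annotation:

_DB: dict[int, str] = {1: "alice"}

def load_name(user_id: int):
    # Originally this raised KeyError for unknown ids, so the inferred return
    # type was str.  A later "innocent" refactor added this branch:
    if user_id not in _DB:
        return None
    return _DB[user_id]

The inferred signature silently widens from -> str to -> str | None. This module still type checks; only downstream code such as load_name(2).upper() starts reporting errors. With an explicit -> str, the checker would flag the new return None immediately.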

4 Likes

As a heavy user of mypy and a heavy user of certain metaprogramming capabilities like decorators/higher-order functions, it really does get painful to manually maintain return types for certain things. I favor explicit over implicit when it enhances the readability of the code, but there are types involving multiple layers of Callable (which itself is a little bit hard for a human to parse visually) where it would be better by far to allow readers to look directly at the nested functions to determine the return type of the wrapper, and putting those fully-specified type annotations in the actual code is a net negative regardless of how they get there (typed by a person or autofilled by a plugin).

def deco_fact(foo: int, bar: int) -> Callable[[Callable[[int, str], float]], Callable[[str], float]]:
    def deco(f: Callable[[int, str], float]) -> Callable[[str], float]:
        def wrapper(s: str) -> float:
            return f(foo + bar, s)
        return wrapper
    return deco

This would be many times easier to read and comprehend if it could be written:

def deco_fact(foo: int, bar: int) -> ...:
    def deco(f: Callable[[int, str], float]) -> ...:
        def wrapper(s: str) -> float:
            return f(foo + bar, s)
        return wrapper
    return deco

Incidentally, I’m in favor of the explicit marker, for several reasons:

  • I don’t think an all-or-nothing situation (checker runtime flag) is ideal for complex projects. Many return types may be implicit Any for “good” reason. Having to switch everything over to explicit Any could be a painful migration path for some users.
  • libraries could opt into this behavior in certain edge cases like what I mention above, where the type is well-defined and there’s negative utility conferred by making it explicit.
  • a reader of the code knows at a glance that the type of this function is well-defined - in other words, the programmer’s intent is more directly preserved, rather than being hidden in a configuration file.
  • in an IDE context, there’s now an obvious ‘thing’ over which someone can hover to get just the return type

2 Likes

The explicit marker, and this rationale for using it in place of overly complex, verbose, and repetitive types, are both reminiscent of the auto type in C++.

The usual way of solving this problem today in Python would be an appropriately named TypeAlias for the verbose type.
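
For instance, a sketch of that approach applied to the deco_fact example from earlier in the thread (the alias names are mine):

from collections.abc import Callable
from typing import TypeAlias

IntStrFunc: TypeAlias = Callable[[int, str], float]
StrFunc: TypeAlias = Callable[[str], float]
Deco: TypeAlias = Callable[[IntStrFunc], StrFunc]

def deco_fact(foo: int, bar: int) -> Deco:
    def deco(f: IntStrFunc) -> StrFunc:
        def wrapper(s: str) -> float:
            return f(foo + bar, s)
        return wrapper
    return deco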

4 Likes

I think this point needs more emphasis and (IMO) kills this idea. Type inference is great and awesome, but the return type really must be annotated explicitly. It’d be great if an IDE could auto-fill the expected type [1], but it should be written down in the code, so that downstream consumers know what to expect and any future changes have something to check against, rather than the checker assuming the change was desired.

Even languages with strict typing and clever type inference still require return types to be annotated, and I think this is the main reason. It’s part of the function signature that should be consciously designed by the developer, not an incidental detail.


  1. and this would let the author know if they had unexpected return values ↩︎

We do use TypeAlias occasionally. But that still requires either handcrafting or auto-generation (plus a manual move, since the auto-generator is going to put the type in-place), and it actually complicates things in practice, because then the reader needs to go reference something else (possibly defined a ways further up in the module), which is in essence a layer of indirection.

Another objection to hand-crafted types for situations like these is that it’s not necessarily easy to trust that the hand-crafted type is precise and accurate, especially since with many kinds of decorators you end up having to use a cast(<type>, wrapper) regardless, because of *args, **kwargs.

When implemented correctly, inference allows you to avoid handcrafting and the perils associated with complex handcrafted types, whereas an auto-generated type is indistinguishable from a handcrafted one and leads right back to the same set of verbosity and correctness questions.
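
A minimal sketch (hypothetical decorator) of the cast(<type>, wrapper) pattern mentioned above:

from collections.abc import Callable
from typing import Any, TypeVar, cast

F = TypeVar("F", bound=Callable[..., Any])

def traced(func: F) -> F:
    def wrapper(*args: Any, **kwargs: Any) -> Any:
        print("calling", func.__name__)
        return func(*args, **kwargs)
    # wrapper's own type is too loose to check against F, so the hand-crafted
    # claim has to be asserted rather than verified:
    return cast(F, wrapper)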

1 Like

I don’t believe this is true. I can think of several languages off the top of my head (Haskell, OCaml, TypeScript) that do not require return types to be defined explicitly.

3 Likes