In Python, there is no way to enforce that the return value of a function or method is used. Such enforcement is especially useful for functions and methods that are pure (i.e. have no side effects), where it is a common UX problem for the programmer to expect the call to modify some argument or the object the method is attached to.
I (inspired by @NeilGirdhar) propose adding typing.no_discard: a decorator to indicate that the return value is not intended to be discarded. The decorator can be applied to functions and methods to indicate that the return value of the function shouldn’t be discarded. It can also decorate a class to indicate that any function returning an instance of that class (or a subclass) is implicitly marked no_discard.
This proposal is very much based on [[nodiscard]] from C++17. Please refer to cppreference and the original C++ proposal for more information.
def no_discard(arg):
    """
    Decorator to indicate that the return value is not intended to be discarded.

    The argument can be a function/method or a class. If it is a function, the
    type checker will warn when a return value is discarded at the call site.
    If it is a class, all functions that return an instance of the class or a
    subclass will be implicitly marked `no_discard`.
    """
    return arg
#########################################
### Use case 1: function/method decorator
#########################################
@no_discard
def pure_func1(x):
    "A pure function - return value should be used."
    return x + x

class C1:
    "A class with a `no_discard` method"

    @no_discard
    @classmethod
    def from_path(cls, arg):
        "Construct C1 from some arg. Also pure."
        ...
        return C1()

tmp = pure_func1(10)  # OK
pure_func1(10)        # Warning from static type checker: return value not used

c1 = C1.from_path(...)  # OK
c1.from_path(...)       # Warning from static type checker: return value not used
###############################
### Use case 2: class decorator
###############################
@no_discard
class NC1:
    "A class that should not be discarded if returned from a function"
    ...

def f_that_returns_NC1():
    return NC1()

nc1 = f_that_returns_NC1()  # OK
f_that_returns_NC1()        # Warning from static type checker: return value not used
As Python doesn’t have support for this at the language level, the burden of implementing the warnings would fall on static type checkers like mypy and pyright.
Some criticisms I anticipate:
“This would make the language more verbose” - maybe; as such, I would document that this feature should be used sparingly and only for the most confusing cases.
I’d love to get your feedback on this addition - pitfalls, challenges, what we should change etc. Thanks.
My impression is that such an annotation is more useful in a language with more explicit memory management where you want to be extra sure callers know they’re getting ownership of a returned resource. I’ll have to skim the C++ proposal since it may provide more use cases, but off hand this doesn’t seem very important in Python where all memory management is automatic and variables are names in a namespace, not references/pointers.
Also my latest soapbox has become whining about how things that aren’t part of the type system shouldn’t be in typing. That ship might’ve sailed a while ago, though.
Not advocating that this is something suitable for Python, but for prior art (and naming), in Rust this is called #[must_use] (which removes the “double negative” of C++'s nodiscard):
Pyright kind of does this today, partially. It treats binary operators as if they were no_discard and warns if you have code like
a + b
where the result is discarded. In practice, binary operators do tend to be no_discard-like, although some libraries like Beam use the >> operator for its side effects. If we had an actual no_discard decorator, these false positives could be fixed, since Beam could avoid marking its own operators as no_discard.
I’m a +0.5 on this, mostly to avoid the false negatives I currently see with the existing heuristic version of this rule. I don’t recall seeing many complaints about the pyright rule, so I think this does mostly work in practice, and there are functions where discarding the result is a likely sign of a bug. In general it doesn’t make much sense to discard the result of a pure function, and at the moment we have no way to indicate purity either.
Edit: in pyright, this rule is called reportUnusedExpression and is on in strict mode. There is another version of this rule called reportUnusedCallResult, which behaves as if every function were no_discard unless it returns None, but that one is off even in strict mode and is likely way too noisy. Both of these rules could be replaced by a no_discard rule.
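Roughly, the difference between the two rules on a small example (the names here are made up for illustration, based on how the rules are described above):

def compute(x: int) -> int:
    return x * 2

a, b = 1, 2

a + b                # reportUnusedExpression: bare expression whose value is discarded
compute(a)           # reportUnusedCallResult: call whose non-None result is discarded
result = compute(a)  # OK: the result is bound to a name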
Thanks for linking me, and posting this as I had suggested. I’m not really proposing this though, although I’m very interested in the discussion.
I do find this proposal superior to the “Y solutions” in the original issue (e.g., metaclasses). However, I don’t think I would personally use this feature because I rarely accidentally discard values I don’t mean to, so the benefit isn’t big enough for me.
This related idea is also extremely interesting. Pure functions would be great to indicate for libraries like Jax that have a Jit that only works with pure functions. Are there many “must use” functions that aren’t pure (modulo logging)?
Unlike “must-use”, “pure” has the advantage that it’s an invariant that can be checked (on the decorated function) by type checkers.
I think the gradual way to do purity is to have a decorator that marks a function pure or impure. Impure functions can call any function. Pure functions can call only pure ones. A function that is unmarked has unknown purity, so pure functions would be allowed to call it. Typeshed would then need to mark functions. Depending on whether pure or impure is more common, one could be assumed as the default, but that would need practical testing with mypy primer to see whether it’s reasonable versus unknown purity.
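A minimal sketch of what that could look like, assuming the markers are plain no-op decorators (hypothetical names @pure and @impure) that only a static checker interprets:

from typing import Callable, TypeVar

F = TypeVar("F", bound=Callable[..., object])

def pure(func: F) -> F:
    """Hypothetical marker: this function has no side effects."""
    return func

def impure(func: F) -> F:
    """Hypothetical marker: this function may have side effects."""
    return func

@pure
def add(x: int, y: int) -> int:
    return x + y

@impure
def log_and_add(x: int, y: int) -> int:
    print(x, y)  # side effect, so the function must not be marked @pure
    return x + y

@pure
def double(x: int) -> int:
    return add(x, x)  # OK: a pure function calling another pure function
    # A checker would flag `return log_and_add(x, x)` here: pure calling impure.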
I think no_discard is a simpler case and probably fine to have by itself, though. It is possible for an impure function that has only “internal” side effects to make sense as no_discard, kind of like C++’s mutable keyword for member variables that are allowed to be modified in a const function.
Could you elaborate on what problem this is attempting to solve? Is the supposition that it’s a common programming error to call functions that return a value and accidentally drop the returned value on the floor? Is there a way to support that supposition with data? I’m a bit skeptical because I can’t remember this flavor of error showing up in any Python code that I’ve ever written or code reviewed. Perhaps you could provide some examples of bugs that you’ve seen that would be prevented by this mechanism.
I can see this being more important in a language like C++, especially for functions that allocate memory and rely on the caller to dispose of that memory. Dropping the result on the floor would result in a memory leak. But in Python, this isn’t an issue because of reference counting.
If we were to add something like @no_discard, I think it’s unlikely that any library authors would make use of it. It’s really up to consumers of a library to decide whether or not to consume returned values. I presume that the intended use is for internal code bases, not public libraries?
As @mdrissi mentioned, pyright already implements options reportUnusedExpression and reportUnusedCallResult. Have you tried these? Do they meet your needs?
The one thing I can think of is resources that need to be released promptly, which isn’t guaranteed to happen if you simply drop something on the floor. However, a simple “nodiscard” attribute won’t handle that. Consider:
# Bad:
data = open(fn).read()
# Good:
with open(fn) as f: data = f.read()
Marking open() as nodiscard wouldn’t solve this, since its return value IS being used.
I see a lot of comments saying C++’s [[nodiscard]] is useful for resource management - i.e. a function may allocate some memory. That’s very much not what [[nodiscard]] was designed for. Rather, it conveys intention: the return value of the function is meant to be used and not discarded. Nowhere in the C++ proposal was memory or resource management mentioned. Furthermore, modern C++ (since C++11) strongly favors RAII, which makes resource management a non-issue in this case, yet [[nodiscard]] was accepted into C++17.
My motivation for typing.no_discard came from a very common UX bug in open source libraries; see the following issues for examples. There is currently no mechanism in Python to convey that the return value of a function call should not be discarded (other than the user reading the documentation and the source). [[nodiscard]] simply conveys intention, and unintended usage can be caught by the static type checker.
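For illustration, here is a hypothetical example of that bug class (the names are made up, not taken from the linked issues): a pure method returns a new object, and the caller mistakes it for an in-place mutation.

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Config:
    retries: int = 3

    def with_retries(self, n: int) -> "Config":
        """Pure: returns a new Config rather than mutating self."""
        return replace(self, retries=n)

cfg = Config()
cfg.with_retries(5)        # Bug: the new Config is discarded; cfg is unchanged.
cfg = cfg.with_retries(5)  # Intended usage; @no_discard would let a checker flag the line above.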
I think typing.no_discard is very much similar to type annotations: it has almost zero impact at runtime, and exists in the language to convey intention and improve the developer experience. The Python VM doesn’t care about annotations in general (obviously there are libraries like Pydantic that actually take advantage of annotations at runtime for data validation).
Pyright’s builtin reportUnusedCallResult doesn’t work by default as it requires the user to turn it on. On the other hand, typing.no_discard allows a library developer to convey intention, and static type checkers should by default check for this.
See my comment above. The equivalent of data = open(fn).read() wouldn’t be caught by C++’s [[nodiscard]] either, as the return value is technically used by read(). [[nodiscard]] is designed to simply convey intention (that the return value should not be discarded), not to catch memory bugs.
Purity is interesting but very much out of scope for typing.no_discard. A function can be impure and still have a return value that’s meant to be used.
Additionally, I believe something like @pure wouldn’t help a JIT at all, since the burden is on the programmer to decide whether a function they wrote is pure or not. This is often not trivial (see this SO discussion), and I suspect there is a non-negligible chance the average user will mistakenly mark an impure function as @pure, further confusing any JIT compiler.
Yeah, which is what I was saying. Result: I have exactly zero idea in my head of what sorts of situations this would be useful for.
So, got any examples? I would love to know what sort of Python functions would require that their return values be checked. In C++, I can imagine a few (which may or may not be correct), such as essential error code returns, but that wouldn’t apply to Python (you’d use exceptions).
Whenever I see a motivation for a change that begins with “In Python, there is no way to enforce…” I get a queasy feeling.
There is currently no mechanism in Python to convey that the return value of a function call should not be discarded (other than for the user to read the documentation and the source).
Yes, and in my view reading the documentation is exactly what the user should do. This just falls into the category of “someone wrote buggy code because they didn’t know or think carefully about what they were doing”. And there’s nothing wrong with that! It’s a common enough situation. But I don’t see that it requires any changes to Python, or any solution other than “people need to read the documentation”.
In the same vein, one can argue that type annotations shouldn’t be in the Python language either, since users can just read the documentation to know what arguments to pass, or what return value they’ll receive. Obviously, we like type annotations, because they improve the developer experience, since the annotations make static analyzers smarter, and help the user write better code.
Based on your example and the linked GitHub issues I now see how this feature could be used with factory functions (“alternative constructors”). I also skimmed the C++ proposal document and see that as you said, it isn’t intended for resource management. And since C++ has constructor overloads, you’re less likely to need it for static factories, but that’s an OOP style thing I guess.
The interesting thing about the C++ proposal for me was that it hardly provides any motivation or use cases for nodiscard at all, other than “you might use this sometimes.” Rather, it seemed intended to standardize existing compiler-specific annotations. Ideally a PEP would do the same, but in this case there is a chicken-and-egg problem because a tool like mypy would have to decide to support the annotation without it being in the standard library, which feels like a stretch.
I have a hard time seeing something like nodiscard becoming mainstream in Python when there are straightforward existing solutions that can check for this at runtime (that’s what it looked like from the GitHub issues), plus the ability static analysis tools already have to do this analysis in expression contexts. That makes the use case exceptionally narrow. Classmethod factory functions are a very common pattern in Python in particular (since we can’t overload initializers), which for me adds weight to the “read the docs” argument. These functions should have docstrings indicating that they are factory functions and return type annotations matching the class type, which makes their usage extremely obvious.
——————
Brendan, my friend, this is a fair perspective, but it makes me wonder why you are concerned about a static analysis proposal when you don’t use static analysis? Just language bloat?
Correct, my no_discard proposal is very much geared towards the factory @classmethod use case, as that’s the one I originally had in mind. It’s good that you mentioned that C++ supports overloading constructors; I almost forgot why I wanted to write so many factory @classmethods in the first place.
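To make that concrete, here is a small hypothetical factory @classmethod of the kind I have in mind (where C++ would just add a constructor overload):

import math
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

    @classmethod
    def from_polar(cls, r: float, theta: float) -> "Point":
        """Pure alternative constructor; the proposed @no_discard would apply here."""
        return cls(r * math.cos(theta), r * math.sin(theta))

p = Point.from_polar(1.0, math.pi / 2)  # OK
Point.from_polar(2.0, 0.0)              # the kind of call @no_discard is meant to flag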
Could you take a look at this proposal? It’s a Python implementation of a run-time check. I think this is a common enough use case that I want to push for a new keyword in the language, and also support static analysis for it.