Right. Basically my view is we’ve already got more of this sort of thing than we need, and there’s too much churn going on to “fix” “problems” that can already be solved by humans reading documentation.
And we could fix nearly all traffic problems if drivers would just cooperate with each other and let people move efficiently. While we’re dreaming, can we solve the problems of humans not understanding floating-point, of humans misusing mutable default arguments, and of humans not understanding the “hobgoblins” paragraph in PEP 8?
In all seriousness: if the solution depends on humans reading documentation, it is not going to work. SOME people will read SOME documentation - and we can increase the proportion by making those docs easy to access (e.g. `help(x)` in the REPL, which I suspect is used far more often than the actual docs) - but anything that lets a computer do the checking is a benefit. That’s why, even though you could just write `# at this point, x must be a positive integer` as a comment, we have the assert statement - `assert isinstance(x, int) and x > 0` can be checked.
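To make the comment-vs-assert contrast concrete, here’s a minimal sketch (`set_count` is a hypothetical name, not from the thread):

```python
def set_count(x):
    # "# x must be a positive integer" would be just a comment -- nothing enforces it.
    # An assertion states the same contract in a machine-checkable way:
    assert isinstance(x, int) and x > 0, "x must be a positive integer"
    return x

set_count(3)      # passes the check
# set_count(0)    # raises AssertionError
```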
I don’t personally use type hints in any of my projects, but they are a valuable feature for those who want them.
That said, though, I do get the argument of “why should the language be bloated for the sake of type hints”. That’s a legit concern when it comes to syntax (every piece of syntax is a significant language burden), but I do think that simply adding a decorator - which will be a pass-through, making it have minimal cost at run-time - isn’t enough of a cost to worry about.
But on the other-other-other-other hand (I’ve lost track of the number of hands I’m using here), I have yet to see good justification for a no_discard decorator, so I’m -0 on it. Cost is relatively low, so if others see strong benefit, sure, go for it.
One data point that would be very helpful for gauging the value of this would be to go through a couple of files in the typeshed standard library stubs and add `no_discard` to them. You could make a draft PR to illustrate the kinds of functions involved and how common this decorator would be. That’d be a lot more concrete than describing a few situations.
This point gets back to something from the original discussion. If the main problem this is intended to fix is for factory class methods, then there was already a proposed solution: use a metaclass to define these methods and they won’t be available on the instance.
If people aren’t reading the documentation they aren’t going to even find such a method on their instance, and the “no discard” problem doesn’t arise in the first place. If they are reading docs, they should know how to use the methods properly.
It’s true that this doesn’t provide a helpful error, but IMO that’s moot because users will never even try to write the erroneous code.
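For the record, a minimal sketch of that metaclass trick (the names are illustrative, not from any library): the factory is defined on the metaclass, so it resolves on the class itself but instance attribute lookup never finds it.

```python
class FactoryMeta(type):
    def create(cls, value):
        # Defined on the metaclass: reachable as Widget.create,
        # but instance attribute lookup does not search the metaclass.
        obj = cls()
        obj.value = value
        return obj

class Widget(metaclass=FactoryMeta):
    pass

w = Widget.create(42)   # works: lookup goes through type(Widget)
# w.create(1)           # AttributeError: 'Widget' object has no attribute 'create'
```

So the erroneous call can’t even be written on an instance - the method simply isn’t there.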
It’s the caller’s prerogative what they do with the return value. If you write a function or method that has side effects, and the caller wants only those side effects, why should they have to write `__ = foo()` to pass type checking?
So methods with visible user side effects should avoid using `no_discard`. I think only “pure” methods should use `no_discard` - which includes some alternate constructors, but also things like `sorted` and most binary operators. Most functions in the `math` module are pure too, I think, and could be no-discard.
This is why I think that, in practice, `no_discard` is rather similar to tracking purity.
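The `sorted`/`list.sort` pair illustrates that split nicely (a small illustration of the point, not from the thread): the pure function is the natural `no_discard` candidate, while discarding the return value of its impure counterpart is routine and correct.

```python
nums = [3, 1, 2]

sorted(nums)    # pure: the result is the whole point, so discarding it is a likely bug
nums.sort()     # impure: sorts in place and returns None, so discarding is fine

result = sorted([3, 1, 2])  # intended use of the pure function
print(result)   # [1, 2, 3]
```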
I wrote this in the other thread but I’ll paraphrase the reply here as well:
It’s never up to the callee to tell the caller what to do. I should be allowed to call a pure function (a concept that doesn’t really exist in Python, at least not in the Haskell referential-transparency sense) 10 million times and ignore its return value. Maybe I wanted to heat my room a bit, and calling that function heats it to exactly the temperature that I want.
Also, what would the semantics be? Like if this is allowed:
```python
# Using a `NoDiscard` object because this is not
# complex enough to require a decorator IMO
def add(a: int, b: int) -> NoDiscard[int]:
    return a + b

tmp = add(1, 2)  # Return value is assigned but never used, so in reality it's discarded
print("NoDiscard did nothing but introduce irritation in the programmer!")
```
A proper NoDiscard, IMO, would really need to make sure the return value is actually *used*. That would require checking that, if the return value is a local variable, it is used in the local context; if nonlocal, that it is used in its enclosing context; and if global, that it is used anywhere in the code. And I don’t think anyone wants that!
I’d much rather we just see this as the learning opportunity that it is: classmethods as alternative constructors are a very common idiom in Python. If the authors of Pytorch Lightning wish to “idiot-proof” their code, they should use one of the suggestions presented above (or in the previous thread) instead. I really, really don’t think we should add a feature that will essentially only help people copy-pasting snippets from blogs without thinking.
A very nice library/framework I might add, really liked both the design and docs so no ill will directed towards them ↩︎
maybe “novice-proof” or “user-that-doesn’t-want-to-learn-python-proof” is better? ↩︎
An example of a non-pure function whose result should not be discarded is `asyncio.create_task()`. Currently the standard library relies on users reading the documentation:

> Save a reference to the result of this function, to avoid a task disappearing mid-execution. The event loop only keeps weak references to tasks. A task that isn’t referenced elsewhere may get garbage collected at any time, even before it’s done.
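For reference, the pattern the docs recommend looks roughly like this (a minimal sketch; the `background_tasks`/`worker` names are made up for illustration): keep a strong reference in a container, and drop it once the task completes.

```python
import asyncio

background_tasks = set()  # strong references, so tasks can't be collected mid-run

async def worker():
    return "done"

async def main():
    task = asyncio.create_task(worker())
    background_tasks.add(task)                        # save a reference, per the docs
    task.add_done_callback(background_tasks.discard)  # drop it once finished
    return await task

print(asyncio.run(main()))  # done
```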
I think my view is the opposite: if humans aren’t going to read documentation, nothing is ever going to work. Certainly we want to ease their path as much as possible, and I don’t mean that we need to expect every human to read every iota of documentation. But there’s no way that cobbling together a bunch of granular annotations is going to add up to what you can get a human to understand by telling them. So I don’t see it as useful to bend over too far backwards to get X% of the way there. If the process of annotating the code for machine readability winds up increasing the burden on humans, I see that as a regress, not progress.
Yes, and I consider this a wart in asyncio. Effectively, it means that asyncio as it stands is NOT sufficient to run an event loop, and you basically need some additional infrastructure of your own. Something like this:
```python
import asyncio, traceback

all_tasks = set()  # kinda like threading.all_threads()

def _task_done(task):
    all_tasks.discard(task)
    exc = task.exception()  # Also marks that the exception has been handled
    if exc: traceback.print_exception(type(exc), exc, exc.__traceback__)

def spawn(awaitable):
    """Spawn an awaitable as a stand-alone task"""
    task = asyncio.create_task(awaitable)
    all_tasks.add(task)
    task.add_done_callback(_task_done)
```
There, now you can `spawn(some_task())` and things will behave correctly. With threads, you get that simply by calling the standard library function, and the thread does exactly what it should.