These constructs obviously don’t belong in the same bucket as, say, implicit unicode/bytes conversion, where the behavior is so confusing that it has to be banned for everyone’s protection. I think the PEP’s argument is a little different: it’s saying that supporting these constructs requires maintaining extra special-case complexity in the interpreter, and this extra complexity isn’t worth it given that they’re confusing and don’t seem to have any reasonable use cases.
raise seems like a pretty different situation to me. Python’s already committed to the general principle that any arbitrary bytecode can raise an exception (signal handlers, typos, …), and that new exceptions replace old exceptions – but even there we take some effort to save the original exception in __context__, because experience showed that losing the original exception entirely was too confusing. OTOH there’s nowhere else in the language that an innocent-looking continue can cause a live exception to disappear without a trace.
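To make that concrete, here is a minimal sketch (my own illustration, not from the PEP; it requires Python 3.8+, where continue inside a finally clause is allowed at all) of a live exception disappearing without a trace:

def swallow():
    for i in range(3):
        try:
            1 / i          # raises ZeroDivisionError on the first iteration
        finally:
            continue       # the pending exception is silently discarded here
    return "finished without error"

print(swallow())           # prints "finished without error"; the error is gone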
Was this discussed on python-ideas? I don’t remember a discussion there, but I might have missed it in my skimming. If it hasn’t, it probably should spend some time in -ideas.
I feel this proposal wouldn’t have survived the gauntlet of -ideas; it proposes to change well-documented behavior because the behavior might be surprising if you don’t read the documentation.
I missed that when I skimmed the PEP – it’s mentioned in a single paragraph hidden behind a long argument trying to show that these constructs are confusing. That’s also what the OP (a PEP co-author) used to open the discussion here, and it’s been the entire focus of the discussion so far.
Anyway, when I first learned how to implement a language, some things were drilled into me: features should be “orthogonal”, so the user can combine them in any way they like; and implementation effort is not a reason to avoid supporting a feature properly.
I understand there are different ways to think about such things, but at this point neither of the lines of argument brought up in the PEP has convinced me. If MicroPython thinks this is not worth their effort they are welcome to deviate from Python 3.8 in this respect, but I don’t think a PEP that officially takes this out is a good idea. (Basically the only reason for submitting the PEP now is so they can continue to say “we fully implement Python 3.x.”)
The behaviour of return/break/continue within a finally is clear to me, and I believe it is clear to most Python programmers (at least it is clearly documented). It is not an issue that it complicates the compiler a bit. It is not an issue that MicroPython can deviate from CPython.
But there is another issue.
try:
    return foo()
finally:
    bar()
is equivalent to
try:
    res = foo()
except:
    bar()
    raise
else:
    bar()
    return res
except without raise is considered bad practice in general. There are exceptions: when the exception is saved or transferred to be handled in another place or at another time, or at least logged. This is why the bare except is not banned. In any case, some important exceptions should be checked and re-raised.
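As an illustration of that distinction (my own sketch, not from the PEP): a handler without raise can be acceptable when the exception is saved to be handled later or logged, provided the important exceptions are still re-raised:

errors = []
tasks = [lambda: 1 / 0, lambda: "ok"]      # illustrative callables
for task in tasks:
    try:
        task()
    except BaseException as exc:
        if isinstance(exc, (KeyboardInterrupt, SystemExit)):
            raise                          # "important" exceptions are re-raised
        errors.append(exc)                 # saved, to be handled or logged later
print(errors)                              # [ZeroDivisionError('division by zero')]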
But if the finally block contains return / break / continue,
try:
    return foo()
finally:
    return bar()
is equivalent to
try:
    res = foo()
except:
    return bar()
    raise
else:
    return bar()
    return res
The raise in the except block is never executed. This automatically makes this example an instance of bad practice.
The question is whether we should support a construction which always violates good practice and cannot be used in a good program. It silences KeyboardInterrupt, GeneratorExit, MemoryError, RecursionError and CancelledError, which can be raised by virtually any code and should not be silently ignored. The only way to fix it is to get rid of finally and use except.
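A sketch of that fix, using the foo()/bar() placeholders from the snippets above (with toy definitions so it can actually be run): the finally version silently discards a pending KeyboardInterrupt, while the except/else version lets it propagate.

def foo():
    raise KeyboardInterrupt                # stand-in for code that may be interrupted

def bar():
    return "cleanup result"

def silences_everything():
    try:
        return foo()
    finally:
        return bar()                       # the pending KeyboardInterrupt is lost here

def reraises_important_exceptions():
    try:
        res = foo()
    except BaseException:
        bar()
        raise                              # KeyboardInterrupt, MemoryError, ... still propagate
    else:
        bar()
        return res

print(silences_everything())               # prints "cleanup result"; no exception escapes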
Agreed, if we were to remove every bit of surprising code from the language, I can imagine it would be stripped bare of features and new features would hardly ever be added. I don’t think there’s any issue with something being initially surprising, as long as the documentation clearly states the behavior and it’s understandable to the majority of readers.
By getting rid of finally do you mean removing it entirely from the language, just that specific usage, or am I misunderstanding something? Removing finally entirely seems like it would be quite an involved deprecation process.
# From Lib/subprocess.py (Popen.__exit__):
def __exit__(self, exc_type, value, traceback):
    if self.stdout:
        self.stdout.close()
    if self.stderr:
        self.stderr.close()
    try:  # Flushing a BufferedWriter may raise an error
        if self.stdin:
            self.stdin.close()
    finally:
        if exc_type == KeyboardInterrupt:
            # https://bugs.python.org/issue25942
            # In the case of a KeyboardInterrupt we assume the SIGINT
            # was also already sent to our child processes. We can't
            # block indefinitely as that is not user friendly.
            # If we have not already waited a brief amount of time in
            # an interrupted .wait() or .communicate() call, do so here
            # for consistency.
            if self._sigint_wait_secs > 0:
                try:
                    self._wait(timeout=self._sigint_wait_secs)
                except TimeoutExpired:
                    pass
            self._sigint_wait_secs = 0  # Note that this has been done.
            return  # resume the KeyboardInterrupt

        # Wait for the process to terminate, to avoid zombies.
        self.wait()
# From Lib/multiprocessing/connection.py (PipeConnection._recv_bytes):
def _recv_bytes(self, maxsize=None):
    if self._got_empty_message:
        self._got_empty_message = False
        return io.BytesIO()
    else:
        bsize = 128 if maxsize is None else min(maxsize, 128)
        try:
            ov, err = _winapi.ReadFile(self._handle, bsize,
                                       overlapped=True)
            try:
                if err == _winapi.ERROR_IO_PENDING:
                    waitres = _winapi.WaitForMultipleObjects(
                        [ov.event], False, INFINITE)
                    assert waitres == WAIT_OBJECT_0
            except:
                ov.cancel()
                raise
            finally:
                nread, err = ov.GetOverlappedResult(True)
                if err == 0:
                    f = io.BytesIO()
                    f.write(ov.getbuffer())
                    return f
                elif err == _winapi.ERROR_MORE_DATA:
                    return self._get_more_data(ov, maxsize)
        except OSError as e:
            if e.winerror == _winapi.ERROR_BROKEN_PIPE:
                raise EOFError
            else:
                raise
    raise RuntimeError("shouldn't get here; expected KeyboardInterrupt")
The code is not simple. It silences arbitrary exceptions; this is not obvious, and was perhaps not intended. But it is not easy to say what the correct behavior should be in such cases.
I’m mostly responsible for this PEP, and honestly I didn’t think it would lead to such a backlash.
Anyway, by way of background, about a year or so ago I found that MicroPython crashed badly with return/break/continue within a finally. It took me a while to find a fix to the VM which was minimal and didn’t impact code size or RAM usage. Around that time I discussed the problem with Nick Coghlan and we came to the conclusion that the use of return/break/continue within a finally is probably not something you’d want to do anyway, since it silences all exceptions.
Since the CPython stdlib used this feature rarely, and the bug in MicroPython was never reported by any user, it was low priority, and eventually got fixed. Then, fast forward a year, CPython had a bug handling continue within a finally (so does MicroPython, currently not fixed), and I thought it might be worth writing a PEP to forbid it.
I didn’t want to mention MicroPython explicitly in the PEP, I wanted the PEP to stand on its own as an “enhancement” of Python regardless of any implementation. But it was the difficulty of implementing return/break/continue that led me to write the PEP.
In my own opinion, the two strongest reasons I see for adopting this PEP are:
1. the existing code in the stdlib that uses return within finally is really hard to understand (well, for me anyway, so maybe not such a big point), and therefore hard to maintain;
2. almost all other languages that support “finally” in some form have come to the general consensus that return should not be used in a finally block (see the PEP for links).
On the other hand, IMO a good reason to reject this PEP would be that it’s better to retain orthogonality of language constructs rather than add a special exception to forbid something (in the compiler, though it could also be done in the grammar).
I don’t want MicroPython to influence any decision on this PEP. As I mentioned above I thought this PEP was worth writing for the Python language itself. Whatever the outcome here MicroPython will try to follow suit, and if it can’t then it’ll be listed as a difference to CPython.
Reading the references in the PEP it seems to me that most languages implement this kind of construct but have style guides and/or linters that reject it. I would support a proposal to add this to PEP 8 (if it isn’t already there). I also won’t have hard feelings if MicroPython decides not to implement it.
I note that the toy examples are somewhat misleading – the functionality that may be useful is a conditional return (or break etc.) inside a finally block.
For example the return in the __exit__ method in subprocess.py is actually correct, since a return from __exit__ re-raises the exception that it was originally handling, and the only exceptions it could be cancelling would be from if self.stdin: self.stdin.close() or another KeyboardInterrupt. The comment at the top of the try block hints that close() indeed can raise: “Flushing a BufferedWriter may raise an error”. Since we’re already inside error handling (__exit__) it is reasonable to assume that the author fully intended the behavior of cancelling such errors.
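For readers less familiar with the protocol, here is a small sketch (my own, not the stdlib code) of why a plain return from __exit__ “re-raises” the original exception: the with statement suppresses the exception only if __exit__ returns a true value, so returning None lets it keep propagating.

class Demo:
    def __enter__(self):
        return self
    def __exit__(self, exc_type, value, traceback):
        if exc_type is KeyboardInterrupt:
            return                         # falsy: the KeyboardInterrupt keeps propagating
        return True                        # any other exception is suppressed

with Demo():
    raise ValueError("suppressed")         # swallowed, because __exit__ returned True

with Demo():
    raise KeyboardInterrupt                # still raised after __exit__ returns (ends the script)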
The Steering Council took a vote, and the outcome was that 4/4 members present agreed to reject the PEP, for the reasons I outlined in my previous message. Instead we welcome a PR for PEP 8 that recommends against this usage.
If you want to you can submit a PR to update your PEP with a rejection notice, or I can do it (probably just quoting what I said in my last message).
You’re also more than welcome to submit a PR for PEP 8.
Thanks @guido for the fast decision on the PEP (even if it didn’t go my way).
I’m happy to make a PR to update PEP 601.
I’m also happy to submit a PR to update PEP 8 to recommend against the usage. But in that case (and if a linter also warns about this use) wouldn’t the uses in the stdlib (e.g. in subprocess.py) then be considered bad practice and need to be changed?
By “no other reason”, could improving the overall readability of the section be considered a good reason, or does it have to involve a more functional modification? This may not apply to adhering to a single PEP 8 guideline (such as this upcoming one), but I think there’s a significant amount of value to be gained from improving the readability of a regularly visited portion of the standard library.
Improving the readability can result in users having a better understanding of what is being done within the code, and reduce the long term maintenance cost of enhancing or debugging the section in the future. Of course, this typically is not the largest priority, but gradually improving readability over time can certainly make an impactful difference.
Regarding the PEP 8 “reasons to ignore a guideline”, it also mentions that:
To be consistent with surrounding code that also breaks it (maybe for historic reasons) – although this is also an opportunity to clean up someone else’s mess (in true XP style).
Rather frequently, though, it seems that the process of cleaning up is dismissed in favor of adhering to the preexisting standard. What would be an example of an optimal time to “clean up someone else’s mess”, and how is that differentiated from maintaining a historical standard? To me at least, it seems that there’s not always a clear distinction between the two situations.
Note: The above questions are more in the context of CPython PRs, rather than in general.
Please no. That would open the floodgates of well-meaning but potentially breaking PRs. We don’t have enough reviewers to double-check that such improvements are actually correct.
I can definitely understand that concern, thank you for the clarification. My primary motivation for becoming a member of the triage team was to help improve the issue regarding the lack of reviewers for PRs and to help the core developers get through them more efficiently. It is most certainly not my intention to worsen that problem.
Apologies for the potentially destructive suggestion, but I’m glad that I brought it up. It’s very useful to be crystal clear on the subject for the purpose of PR reviews, so that I can provide more definitive answers to PR authors. From my impressions thus far in the last several months of reviewing PRs, this seems to be a rather commonly misunderstood area and admittedly one that I’ve not understood myself.
Thanks for understanding. We sometimes encourage people who have expressed genuine interest in contributing to submit a “trivial” PR as an exercise in learning how to go through the PR process (which as you know is quite a bit more involved for Python than it is for the typical GitHub project).
But I’ve also seen quite a few PRs that fix a trivial typo or apply a trivial formatting fix (often with PEP 8 as an excuse[1]) without further contributions by the same author, and I’m concerned that some people use their commit to plump up their resume with a bullet saying “Python Contributor” or something lofty like that, and this needs to be nipped in the bud, as such PRs do consume reviewer resources.
[1] Note that one of the section headings in PEP 8 is “A Foolish Consistency is the Hobgoblin of Little Minds”.
I can certainly see that being an issue. Many of those PRs are at least partially responsible for the current bottleneck.
Unfortunately, I think it’s inevitable that some people will take advantage of the system so they can become an effortless “Python Contributor”. I don’t think there’s an easy way to prevent that entirely without hindering those who genuinely want to learn, we can only somewhat limit the negative impact on reviewer time.
My first PR was a trivial documentation change, so I can certainly empathize with those looking for an introductory PR to gain experience with the workflow. When I had first started, I was entirely unaware of the existence of the “easy” keyword on bpo (“newcomer friendly” hadn’t existed), so I thought that making a minor phrasing improvement would provide a good introduction.
I would like to thank everyone who has made the process of contributing for the first time significantly easier or went out of their way to accommodate new contributors. If it weren’t for those efforts, there’s a good chance that I wouldn’t have started contributing myself.
I’ve always enjoyed that quote used in PEP 8, but I don’t think I fully appreciated its real-world application until now: conforming to standards exclusively for the sake of consistency ultimately serves no productive benefit, and can even become a detriment when obsessed over.
A later part of that quote from Emerson, “He may as well concern himself with his shadow on the wall”, could easily be translated to Python as “They may as well concern themselves with changing single quotes to double quotes”.