I had comments from multiple people soon after creating my previous post - thanks for that.
There are some suggested changes that I’d be happy to make to get the bugfix ready for commit.
Before going through with the remaining changes, I wanted to start a discussion around the request that the solution be fully backwards compatible (see the comment from @gpshead).
Brief issue context
If subprocess.run(cmd, timeout=T, text=True, capture_output=True) hits the timeout, the resulting subprocess.TimeoutExpired exception stores stdout and stderr as bytes (ignoring text=True), or, if no output was received, sets the attributes to None rather than the empty string.
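For illustration, here is a minimal reproduction sketch (the child command, timeout, and helper name are my own, chosen to trigger the bug; on affected Python versions the attribute comes back as bytes despite text=True):

```python
import subprocess
import sys

def stdout_type_after_timeout():
    """Return type(TimeoutExpired.stdout) after a deliberate timeout."""
    # Child prints some output, then stalls well past the timeout.
    cmd = [sys.executable, "-c",
           "import time; print('partial output', flush=True); time.sleep(30)"]
    try:
        subprocess.run(cmd, timeout=2, text=True, capture_output=True)
    except subprocess.TimeoutExpired as e:
        # On affected versions this is bytes despite text=True; it would
        # be None (not "") if the child had produced no output at all.
        return type(e.stdout)
    raise AssertionError("expected TimeoutExpired")

print(stdout_type_after_timeout())  # bytes on affected versions
```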
This requires a messy workaround to get the expected types, and the type confusion makes mypy very unhappy. In other words, correctly handling TimeoutExpired while keeping mypy happy can add tens of lines of boilerplate.
```python
import logging
import subprocess

try:
    p: subprocess.CompletedProcess[str] = subprocess.run(cmd, **kwargs)
except (subprocess.CalledProcessError, subprocess.TimeoutExpired) as e:
    if isinstance(e, subprocess.TimeoutExpired):
        # Workaround for https://github.com/python/cpython/issues/87597:
        # TimeoutExpired gives bytes or None rather than str.
        if isinstance(e.stdout, bytes):
            e.stdout = e.stdout.decode("utf-8")
        if e.stdout is None:
            e.stdout = ""
        if isinstance(e.stderr, bytes):
            e.stderr = e.stderr.decode("utf-8")
        if e.stderr is None:
            e.stderr = ""
    logging.error(
        "Command failed with stdout:\n%s\nstderr:\n%s", e.stdout, e.stderr
    )
```
Thoughts on backwards compatibility
Here’s my view on the backwards compatibility question (which I appreciate may not count for much here - I’m looking for more opinions and a clearer picture of CPython’s general policy).
This certainly seems to be a bug, and the issue is tagged as such (I'd be interested to hear an explanation if anyone feels differently).
I can’t see any reason this would be desirable behaviour; it seems simply to have been implemented incorrectly (with no associated test cases - my fix breaks no tests).
I wish this behaviour hadn’t been documented. Once it is documented, it becomes reasonable for people to rely on it, and it’s a user’s bug if they don’t handle it - which IMO changes this from a bug in the CPython implementation to a bug in the API, and makes the breaking change to fix the issue less feasible.
The suggestions on the PR to add further API clutter (e.g. a ‘decode’ attribute on top of things like the text alias for universal_newlines) seem unwarranted for fixing a bug.
Yes, people may have code that assumes the existing behaviour, but it was not documented (until the PR linked above) and there was no reason to expect this would be the behaviour.
I can understand not wanting to backport for this reason (although I’m not sure I agree that a bugfix shouldn’t be backported). But if a small, potentially breaking change cannot be committed to fix a bug, won’t this just end up producing incredibly convoluted/messy APIs and/or long waits for fixes?
Does the risk of the breaking change really outweigh the risk of the bug?
What’s the general policy of CPython around breaking changes for bugfixes? All I could find was Accepting Pull Requests, which says: "If there is a change to the semantics, then there needs to be a strong reason, because it will cause some peoples’ code to break. If you are unsure if the breakage is worth it, then ask on the Core Development Discourse category."
Any thoughts would be appreciated. I’d love to get a fix for this finished and committed; I’m just not sure I can face adding API clutter and/or waiting until 3.14!