I don’t really think it’s helpful to post an entirely different proposal in the same discussion thread…
(@steve.dower, thanks for your input. I’ll respond to that in a separate post.)
The informal poll currently has 54 responses. Activity has died down, so here’s a summary of the results.
Thank you to everybody who participated! I’ve gotten responses from developers of many notable libraries, among others (in alphabetical order): aiohttp, attrs, BeeWare, conda-forge, Coverage.py, DevPI, flake8, Flask, httpx, Hypothesis, Jupyter, multidict, NumPy, pre-commit, PyPy, pytest, setuptools-scm, SymPy, Tox, Twisted, urllib3, uvloop, virtualenv.
Annual releases sentiment
The idea of releasing Python annually was met with a warm reception but not unanimous support. Two-thirds of the respondents felt annual releases would be better, while 20% thought they would make things worse. I won’t focus on the positive support here since the arguments there are largely consistent with what PEP 602 says on the matter. Instead, let’s look at the criticism.
Notably, representatives of the following projects said that annual releases would feel worse: NumPy, PyPy, SymPy, BeeWare. The reservations come from the perception that an accelerated release cadence would accelerate the rate of change and increase the testing matrix.
In my view the former is unlikely to happen: an accelerated release cadence simply slices the existing rate of change into thinner, more frequent pieces, making consecutive releases more similar to each other. If anything, since the beta feature-freeze period stays as long as it is today, I’d expect the rate of change to slightly decrease.
The latter worry about the increased testing matrix was well summarized by Matti Picus, who highlighted that some projects test across multiple dimensions. For example, one dimension can be “Operating System”, while another can be “CPU architecture”. Each new “Python version” then multiplies the size of this matrix.
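The multiplicative effect is easy to see with a quick sketch (the dimension values here are made up for illustration and are not any project’s real matrix):

```python
from itertools import product

operating_systems = ["linux", "macos", "windows"]
cpu_architectures = ["x86_64", "aarch64", "ppc64le"]
python_versions = ["3.7", "3.8", "3.9"]

# Every combination is a separate CI job.
jobs = list(product(operating_systems, cpu_architectures, python_versions))
print(len(jobs))  # 3 * 3 * 3 = 27

# One extra Python version adds a whole OS x architecture plane of jobs.
python_versions.append("3.10")
jobs = list(product(operating_systems, cpu_architectures, python_versions))
print(len(jobs))  # 36, i.e. 9 more builds to run and maintain
```

With three other dimensions of size three each, every additional Python version costs nine new jobs, which is why projects with large matrices feel the cadence change more acutely.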
12 months of bugfix releases
Note: after the bugfix support period ends, Python still provides security updates for a release until five years after the original 3.X.0 release. This is true today and is not meant to change in PEP 602. For some reason this detail gets lost despite the diagrams in the PEP.
Decidedly fewer people were excited about the prospect of twelve monthly bugfix releases. Over a third of the respondents were concerned not only by the shortened support period but also by the additional churn for integrators due to the larger number of point releases. I also received some critical feedback about this piece of PEP 602 over Twitter.
I took this into account and checked whether we can stretch the bugfix support period back to 18 months without generating additional churn for core developers. It looks like we can, if we release bugfix updates every other month. This is still an improvement over the current practice of quarterly bugfix updates. I updated the PEP to reflect this.
Alpha releases are rarely used
Finally, it turns out very few projects are testing alpha releases of Python. The listed reasons are unavailability on CI systems, incompatible dependencies (including tooling like linters, testing frameworks, etc.), and additional workload.
While this is out of scope for PEP 602, from discussions at the core sprint it seems that it would make sense to set more concrete expectations for stability in the master branch. Something in the vein of Steve Dower’s suggestion of the “latest” development stream.
This was an informative endeavor on many levels. I might incorporate some of the above information into the PEP body somehow but I haven’t decided how yet. I’m trying my best to keep it simple and easy to digest.
Raw data from the poll: https://www.dropbox.com/s/ynv7xhcbi7jfow1/pep-0602-poll-responses.xlsx?dl=0
I sympathise quite a bit with this concern, as a past and present contributor to projects with such multi-dimensional testing matrices, such as Numba and PyArrow.
However, it seems that with this PEP the number of Python versions simultaneously in active bugfix support wouldn’t increase, right? Sure, there would be more versions in “security fix” mode, but that’s a slightly different thing.
Well, no. The number of versions receiving active bugfix releases has been 1–2 (due to the informal overlap at the discretion of the Release Manager) for many Python versions now, and yet the real support matrix for library maintainers has to include whatever is still distributed in supported operating systems. That roughly corresponds to our full five-year security support period.
The change that I made now to the PEP is to allow for a formal overlap to provide a guaranteed period of 18 months of bugfix releases. See the updated pictures in the post. I was initially worried that this overlap would put additional workload on core developers but looking at the release calendar of the previous versions I discovered the informal overlap has been a thing for a long time.
As a package maintainer, a yearly release cycle means I will end up skipping versions of Python (e.g. supporting new features in 3.8, 3.10, and 3.12 while skipping 3.9 and 3.11) because it’s just moving too fast. The net result is that python-dev has used up twice the effort and effectively slowed down my adoption of Python language features. For me, this is the ultimate waste of effort.
Would an alternating two-year release cycle address these concerns?
In even years (2020, 2022, …), produce an LTS version with a five-year support cycle; in the alternate years, produce a version with a two-year support cycle.
This would essentially allow users to choose the implementation that suits their use case.
One of the discussions we had at the sprints is that “support cycle” needs a definition, as many people have different understandings. This is why my suggestion earlier was so detailed, because in conversation it was very clear that the definitions are not shared.
I suspect in this case you’re referring to the bugfix phase of a release, and that the security-fix phase extends beyond the 5/2 years?
This is my main concern and why I would rather see a two year cycle.
Missing a year is fine if we don’t break anything between releases. I’d also be happy to reduce the rate of breaking releases in other ways, but reducing the overall rate of releases is the only way that can’t fail.
To the contrary, reducing the overall rate makes it more likely your project will break on a new release:
- the more time between releases, the more changes each holds; and
- the more time between releases, the less likely it is somebody else will trip over your problem and it will get fixed before you get to it.
This is why I disagree with the argument that if any one user will be skipping releases, that makes the quicker release cadence a futile exercise. Yes, you might be skipping even releases. But another user might not, and they will encounter and report problems before you even get to them. And by the way, it’s not just bugs in CPython I’m talking about. It’s also the need to update your project’s dependencies to work with the latest version (for example by fixing DeprecationWarnings). More frequent releases make it more likely that this sort of thing will be identified and fixed sooner.
Finally, I’m repeating this like a mantra now, quicker releases will be more similar to one another. Accelerating the calendar doesn’t magically provide us with more core developers or provide existing ones with more free time.
How can I make this clearer? It’s like having one big bus leaving your station every hour vs. a smaller one every 30 minutes. You’re more likely to find the big bus crowded because there’s less choice. A problem with the hourly bus (say, a blown tire) is also more painful than a similar problem of the bus that comes every 30 minutes.
I think our main disagreement is that you’re focusing on the magnitude of change per release, while I’m more worried about non-zero changes over time (e.g. “number of times per year I need to change something due to a CPython release”). Hence I think less frequent “must support” releases plus more frequent “may support” releases is better than only having the latter.
Practically speaking, a library must support all published releases; people start complaining otherwise.
It can skip some Python versions though, e.g. run tests only against odd releases.
These days we usually don’t run tests against all Python bugfix releases but choose one from the available set. Skipping even-numbered feature releases to reduce the test matrix wouldn’t change that much.
Or, more realistically, test the latest Python, the oldest supported one plus several arbitrary versions in between.
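That selection strategy can be sketched as a toy helper (this is illustrative only, not any project’s actual CI tooling):

```python
import random

def pick_test_versions(supported, extra=2, seed=0):
    """Pick the oldest and newest supported versions, plus a few in between.

    `supported` must be ordered oldest-to-newest. The RNG is seeded so that
    CI picks the same reduced matrix on every run.
    """
    oldest, newest = supported[0], supported[-1]
    middle = supported[1:-1]
    rng = random.Random(seed)
    picked = set(rng.sample(middle, min(extra, len(middle))))
    picked.update({oldest, newest})
    # Return in the original oldest-to-newest order.
    return sorted(picked, key=supported.index)

versions = pick_test_versions(["3.6", "3.7", "3.8", "3.9", "3.10", "3.11"])
print(versions)  # always includes "3.6" and "3.11", plus two in between
```

The endpoints matter most: the oldest version catches use of too-new features, and the newest catches upcoming breakage early.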
In an ecosystem where there’s a high level of interdependence between libraries, “may support” releases don’t really make much sense to me. All it takes is for one or two key libraries to not support a given release, and it becomes hard to impossible for any library to claim to support that release (sure, “We support Python 3.11, but are not responsible for any issues that are related to our dependencies” is a reasonable support statement, but in practice it doesn’t help users who can’t use my library on a version that I claim to support). So in practice, “may support” versions are “must support unless you’re OK with your users complaining”.
From an application-centric viewpoint, “may/must support” versions make more sense. But library support is more complex (it’s the same logic that says applications should pin their dependencies while libraries need to keep their dependency specifications broad)…
Precisely. This is why I don’t like the idea of “just skip a release” to help handle the faster rate of releases.
So this would work well with “test the fast track release, the current stable release, and as many security-fix releases as you can afford”? (Assuming we can get the various CI systems to pick up the faster releases in a timely manner)
It’s different because the stable release cycle is two years long. Debian is released approximately every two years.
That means a high chance of getting a four-year-old system Python, because Debian will pick the then-current stable Python at the moment of the Debian freeze.
I expect the same from other Linux distributions.
Fast releases fit only the Arch/Gentoo model, not Ubuntu/Fedora.
Actually, fast releases would look much closer to betas than to normal releases.
PEP 602 proposes a full-fledged release every year with a two-year support period; there is no fundamental difference between these versions.
In the conversations we had last week, aligning with the Linux distro release cycles didn’t seem to be this critical. Perhaps we need to consider that more strongly?
Luckily, the other important platforms don’t mind when we release, they’re far more flexible. So we can adapt to Linux here. Even with an annual release, we only gain a year at most.
(Alternatively, do we need to become the system Python? Or should we be looking for approaches to distribute the latest Python build sooner but on request and discourage users from relying on system Python having the latest features?)
So as a user, how would I install Python? Install 3.9, then 2019-09, then 2020-04, then 2020-12, then 3.10, then…? That sounds like a pretty messy setup, as the version style for Python varies. I don’t want to just take 3.x, because I’m an early adopter, and I like shiny new features. But I don’t want to just take the dated releases, as that makes it hard for me to discuss the version I’m on with other people who are focusing on 3.X releases.
And as a library maintainer, I still have to support 3.X and YYYY-MM releases, as my users could be using either. So it’s just the complex versioning with no corresponding benefit.
And as a packaging infrastructure developer, how the heck do we re-specify the `Requires-Python` tag? How would a library specify `>=3.10` in a way that also catches calendar-versioned releases that are post 3.10? Would the packaging libraries need to hard-code 3.X release dates somehow? Or would we expect package authors to remember to encode `>=3.10 OR >=2020-08`?
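To make the pitfall concrete, here’s a naive sketch using plain tuple comparison (a stand-in for real specifier logic, not how the `packaging` library actually works):

```python
def as_tuple(version):
    # Naive numeric parse; a stand-in for a real version parser.
    return tuple(int(part) for part in version.replace("-", ".").split("."))

def satisfies(version):
    # A typical range pin, ">=3.6, <4", where "<4" only ever
    # meant "not a hypothetical Python 4".
    return as_tuple("3.6") <= as_tuple(version) < as_tuple("4")

# A lower bound alone happens to work, since 2020 > 3 numerically...
print(satisfies("3.10"))     # True
# ...but the upper bound wrongly rejects every calendar-versioned release.
print(satisfies("2020-08"))  # False
```

So even where mixed schemes happen to sort “correctly”, existing upper-bound pins silently exclude every CalVer release, which is exactly the kind of breakage tooling would have to special-case.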
Basically, this seems like a solution that at best changes which groups of users see the complexity (I assume your proposal would be of benefit to other groups, like maybe standalone application developers?) And in practice, I think it will probably add complexity globally, even if it improves things for some subsets of users.
So I’m -1 on something like this.
In Fedora, we ship new Python releases as soon as the first alphas are out. Shipping a half dozen Python interpreters is quite easy. What users are sometimes concerned about is other 3rd party libraries that are not pip-installable (usually system library bindings such as libvirt, dnf, rpm…). Those are only built and shipped for the Python version you call “system Python”, and changing this Python version is a big coordinated process planned months and months ahead (the “integration” thing is a tad tricky). As a Linux distro, we already “distribute the latest Python build sooner”, and the release schedule still has a large impact on us.
I don’t think it’s possible to avoid an eventual increase in complexity here, as the size of CPython’s existing install base means that the only “low” complexity option is to continue with the status quo. The status quo isn’t actually simple, but people have had ~20 years to adapt to its particular flavour of complexity (as the essential approach hasn’t really changed much since Python 2.0). Unfortunately, the world around us doesn’t stand still, and a release model that was a generally good fit for software deployment models in the late 1990’s and early 2000’s isn’t necessarily the best approach for all situations in 2020+.
Thus the main potential benefit offered by Steve’s suggestion of a new production-ready release stream is that instead of needing to compromise between the interests of existing consumers that are well served by the current release model and those that aren’t, we can instead focus on designing a new release model that covers scenarios that aren’t as well supported as they could be today. (In particular, environments with sufficiently strong CI pipelines that adopting a new Python feature release isn’t much more eventful than any other code change).
Now, it may be that CalVer releases from trunk isn’t the best way to better adapt to those situations (the practical concerns you raise above are genuine problems). One alternative might be to run trunk in perpetual beta (ditching the alpha releases entirely), postpone feature freeze to the first release candidate (lengthening the rc phase accordingly), and commit to users that all CPython releases (even beta releases) will be considered suitable for use in production - the differences would be in API and ABI stability guarantees across updates, not in release quality.
Relative to the incremental feature release proposal in PEP 598, one key advantage of the perpetual beta model is that it wouldn’t change anything in the user experience after the X.Y.0 release in each release series, so folks that are happy with the status quo would only see the baseline feature release cadence change from 18 months to 24 months and no other changes.
Relative to the annual release cadence proposal in PEP 602, the same benefit would apply as for PEP 598 (i.e. minimal impact on folks that are happy with the status quo), but it should also provide the following benefits:
- for consumers that can consume feature updates easily, a perpetual beta release stream would offer an even lower feature latency than would be offered by an annual feature release cadence (but with more stability than can currently be found when attempting to consume trunk directly)
- more flexibility in adjusting where the release candidate phase falls in the years where we produce an X.Y.0 release (as both the PyCon US sprints and the core dev sprints would mainly be focused on the next beta and bug fix releases, rather than specifically the next X.Y.0 release)
So while I’m not quite prepared to withdraw PEP 598 in favour of Steve’s proposed model just yet, I’m intrigued enough by the concept that I’ve offered to co-author writing it up as a PEP (hopefully later this week). If I come out of that process convinced that the perpetual beta model is likely to be a better option overall than the incremental feature releases idea, then I’ll include the withdrawal of PEP 598 in the same PR that adds the perpetual beta write-up.
But if that new release model is ignored by people who are comfortable with the current release model (if not necessarily with the release frequency), then it may well be useless. For example, you can’t offer a new release model aimed at the needs of standalone application developers if library developers ignore it and hence don’t support that release stream.
My gut instinct is that any new release model (or stream in a multi-stream model like the one @steve.dower is proposing) must be supportable by library developers without significant extra effort, because there are very few usage models that don’t rely on the 3rd party library ecosystem. But I’ll admit I may be biased in that view.