[ACCEPTED] PEP 602: Annual Release Cycle for Python

If you’re biased this way then so am I :slight_smile:

If anything, my thinking on the two-stream model is too biased towards the library developers and against the application developers.

To answer the concrete question you asked:

>=3.10 OR >=2020.08 is an unnecessary condition, because the latter is implied by the former :slight_smile: But the idea is that your package “probably works on latest” and “works on >=3.???”.
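
For the curious, here’s a quick way to check that claim with the third-party packaging library, which implements the PEP 440 comparison rules that pip uses; the specifier and version numbers are just the ones from this example:

```python
# A quick check with the third-party "packaging" library
# (pip install packaging), which implements PEP 440 comparisons.
from packaging.specifiers import SpecifierSet

spec = SpecifierSet(">=3.10")
print("3.11" in spec)    # True
print("2020.8" in spec)  # also True: 2020 > 3 numerically, so >=3.10
                         # already admits every CalVer-style version
```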

The trick is that users must be on the latest fast track release. That’s an explicit requirement. It’s not enforced, of course, until someone comes and says “why doesn’t package X work on latest from two months ago” and you say “latest != two months ago; update and come back”. The CalVer is less about versioning and more about making it easy to see when a user is in the wrong place - they’ll show up with the version number, and you’ll know straight away either that you have a new/current problem, or that it’s irrelevant.

And once you know you have a problem, you fix it in a way that works on the latest stable 3.x version, as well as the latest release from the fast stream, and then you’re done. You don’t have e.g. 3.8, 3.9, 3.10 and 3.11 beta all active at once, just two versions. Which means the app developers are more likely to have to install dependencies from source than before, but the library developers don’t have to maintain as many Python versions.

It would of course be possible to keep the core idea, but adapt the versioning slightly:

Have odd minor releases (3.9.x / 3.11.x / …) follow the rolling release model, and even minor releases (3.10.x / 3.12.x / …) be the stable versions. That way, no gymnastics with the version numbers are necessary.

Additionally, it avoids things like “rolling alphas” or “rolling betas”, both of which would probably see little use, as distros, universities, enterprises etc. would (often by policy) not roll out a beta release.

PS. GNOME, for example, uses a very similar versioning scheme.

Oh. I didn’t realise that was what you meant.

So for that to work, libraries distributing non-universal wheels would have to release new binaries for every fast-track release. Otherwise, those users who you’re insisting must upgrade monthly would simply say “I can’t, there’s no new numpy/scipy/whatever release yet”. My instinct is that you’d have quite a job persuading the scientific Python stack to switch to a monthly release cycle…

For a proposal like this to work, you’d need a lot of tooling changes to enable binary extensions that work for multiple Python versions (on the fast track release path, at least). Unless I’m missing something here - this is something that’s come up before in these discussions.

Strong -1 on making it harder for end users (or anyone using libraries) to get access to binaries for libraries. Obviously, libraries will supply binaries for the main 3.X releases, but if we don’t encourage binaries on the fast track releases, people simply won’t use them. (My evidence for that assertion is anecdotal, but I’m pretty confident in it - on a personal note, I definitely wouldn’t be able to use the fast track stream if I couldn’t rely on binaries for projects like numpy, pandas, matplotlib, etc, existing).

I can’t tell if you are joking. How am I supposed to figure out that the latter is implied by the former? Do I have to look up some kind of table every time I want to compare Python versions?

The very idea of having two different versioning schemes for the same piece of software gets a strong -1 from me.

I’m actually fine with that as an outcome, as my mental model of the potential consumers of the fast track releases covers folks that are either:

  • operating a web service, and hence able to build and cache their own wheels for whichever version they’re using in their own build pipeline (a sketch of that step follows this list)
  • building an application (whether web, desktop, or mobile) or physical appliance that bundles its own Python, and are hence able to make their own wheels or pre-installed virtual environments at the same time as they make the rest of their software bundle
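
A minimal sketch of that pipeline step, assuming a pinned requirements.txt (the file names and paths are illustrative):

```python
# A minimal sketch of a build-pipeline step that pre-builds and caches
# wheels; assumes a pinned requirements.txt, and paths are illustrative.
import subprocess
import sys

# Build wheels for every pinned dependency into a local wheelhouse,
# compiling from sdist wherever no binary wheel exists for this Python.
subprocess.run(
    [sys.executable, "-m", "pip", "wheel",
     "-r", "requirements.txt", "-w", "wheelhouse"],
    check=True,
)

# Deployments then install purely from the cache, never hitting PyPI.
subprocess.run(
    [sys.executable, "-m", "pip", "install", "--no-index",
     "--find-links", "wheelhouse", "-r", "requirements.txt"],
    check=True,
)
```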

End users that say “Having wheel archives available from PyPI makes an enormous difference in release usability for me” would instead fall squarely into the category of folks for whom the status quo works reasonably well, and for them, the intended payoffs of a split release model would be:

  • higher proportion of wheels available from PyPI on X.Y.0 release days
  • fewer unexpected dependency breakages when upgrading to a new X.Y.0 feature release (due to more routine compatibility testing by folks that have opted in to the continuous beta stream)

(Edit: it also occurs to me that this approach would mean that “is compatible with the continuous beta stream of releases” would become another benefit for extension modules targeting the stable C ABI rather than the full CPython ABI)
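
As a rough illustration of what that opt-in looks like with setuptools today, here’s a sketch of an extension targeting the stable ABI; the package name and the 3.8 baseline are placeholders, not anything from the PEP:

```python
# setup.py - a minimal sketch of opting a C extension into the stable
# ABI with setuptools; "mypkg" and the 3.8 baseline are placeholders.
from setuptools import setup, Extension

setup(
    name="mypkg",
    version="1.0",
    ext_modules=[
        Extension(
            "mypkg._speedups",
            sources=["src/_speedups.c"],
            # Restrict the C API to the stable subset as of CPython 3.8
            define_macros=[("Py_LIMITED_API", "0x03080000")],
            # Tag the built extension as abi3 rather than version-specific
            py_limited_api=True,
        )
    ],
)
```

Built with `python setup.py bdist_wheel --py-limited-api cp38`, that should produce a single cp38-abi3 wheel that later CPython releases (including any continuous beta stream) can install unchanged.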

Not a joke. Version comparisons are done numerically (according to whichever PEP we wrote to define them), and since 2020 is greater than 3, the latter condition is always satisfied by the former.

Perhaps you were reading it differently than I intended? I recognise now that there are two interpretations of what the specification applies to. I was thinking of a Requires-Python spec, since Paul mentioned it, but under the more general interpretation, where you might use it as a shorthand, it could imply the opposite.

I read that as: any fast CalVer release is greater than any stable one, because 20XX is always greater than 3. A Python 3.12 released in 2025 would still be older than 2020.0 by the version comparison rules.

I found it very confusing.

But if 3.15 is released in e.g. 2025 (I haven’t done the math, sorry), it’s still considered “smaller” than 2020?

Yes, because it’s a straight numeric comparison.
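
Concretely, with the same packaging library mentioned earlier in the thread (the 3.15/2020.0 pair is from the question above):

```python
from packaging.version import Version

# Release numbers compare numerically, segment by segment, so the
# first segment settles it: 3 < 2020, regardless of release dates.
print(Version("3.15") < Version("2020.0"))  # True

# This is also why switching back later would be unsafe: a
# hypothetical post-CalVer "4.0" would sort as a downgrade.
print(Version("4.0") < Version("2020.0"))   # True
```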

Note that I’m not recommending this or saying it’s a good idea, just pointing out that it’s how the comparison would work. It’s safe to switch from SemVer or series versioning to CalVer, but not to then switch back or to mix them.

We can discuss it more when there’s a new thread for the full PEP, but my thinking is that you would never have a version restriction against the fast release series. So this side discussion is irrelevant.

OK. In which case count me as -1 on this proposal because I’m currently an early adopter and you’re putting me in a category of people for whom you’re proposing to introduce a slower release cadence :frowning:

(I seem to be accumulating a set of -1 votes, one for each interpretation of the proposal that I get offered. I think that means I’m -1 overall :wink:)

That’s all about the as-yet-unnumbered “Slow down stable releases but introduce a python-latest release channel” PEP. Out of curiosity, how are you feeling (numerically) about the proposal in PEP 602?

I’m fine with 602, as it delivers releases faster for me. But I’m not really the one who pays the cost (which is basically the RMs and 3rd party projects providing non-universal binaries, as well as possibly integrators who sync to Python releases) so I personally don’t think my vote is that important there.

Numerically, put me down as +0 I guess.

I see the virtue of a 2-year cycle, particularly if there’s a predictable beta as well as a release, e.g. betas in October in even years and full releases in October in odd years. You get much the same virtue with, say, betas in March and releases in October, which frames the discussion back in terms of cadence rather than duration.

A 2-year cycle with a 1-year cadence, e.g. with the 3.13 beta being released in October at the same time as the 3.12.0 final, might be useful. As for deprecation, marking something as deprecated at the same time as releasing a beta without that feature (or emphasising its deprecated status if it remains deprecated but not yet removed) has some potential.

+1 on PEP-602.

I run my tests on 3.8.0b4 locally and 3.7.4-[latest docker image] in CI. Our team follows micro versions.

If Python had a new 3.x release once a week we might skip some versions, but arguably it would kick us in the butt to improve our release process :slight_smile: Consider our project as an example of a Python end user: we have to track not only Python but also ~30 direct dependencies, ~100 transitive dependencies, and other random deps, like glibc CVEs. Python is not the bottleneck.
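
For what it’s worth, one way to keep that kind of multi-version testing cheap is a session runner like nox; this is only a sketch, and the interpreter list and requirements file are illustrative rather than our actual setup:

```python
# noxfile.py - a minimal sketch using nox (pip install nox) to run one
# test suite against several interpreters; the version list is illustrative.
import nox

@nox.session(python=["3.7", "3.8"])
def tests(session):
    session.install("-r", "requirements.txt")
    session.install("pytest")
    session.run("pytest")
```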

All this talk about too fast or too slow (in this thread) smells of bikeshedding and I think misses the point.

PEP-602 is an improvement, let’s take it :slight_smile:

This reads as “no point discussing slower or faster; just make it faster”, which is not an argument with any value. We don’t need to change anything at all, so if changing the rate of releases is “bikeshedding” then the status quo wins automatically, not the proposal.

(Just a general reflection on the comment in the hope of pushing us towards a decent discussion. Don’t take this as a personal attack - you’re certainly not the first to try this persuasive tactic :slight_smile: )

I prefer 1 year > 2 year > 1.5 year cycle.

Ubuntu, Debian, and RHEL (CentOS) are the most important Linux distributions. While I don’t know RHEL’s release cycle, Ubuntu LTS keeps a 2-year cycle and Debian also has a loose 2-year cycle.

If we continue the 1.5-year cycle:

  • Python 3.9 will be released around 2021-04, which is too late for the Debian 11 transition freeze.
    • If Debian 11 is released in 2021-07, it will ship Python 3.8, which was released in 2019-10.
  • Ubuntu 22.04 LTS will have a 1-year-old Python 3.9 (expected 2021-04).
  • Python 3.11 will be released around 2024-04, which is too late for Ubuntu 24.04 LTS.
    • Of course, we could shorten the release cycle in this specific case.

I think the current 1.5-year cycle is not good for these important distributions with their 2-year cycles.

I prefer a 1-year cycle because every Ubuntu LTS and Debian release would then ship a fresh and stable (6-month-old) Python.

If we don’t need a constant release schedule, I’d prefer a 1-to-2-year cycle: release 6 months before an Ubuntu LTS, or 2 months before a Debian transition freeze.

Update: I posted a mail about this thread to debian-python ML (thread).

It still needs more work (and feedback from @steve.dower as co-author) before it will be ready for assignment of a PEP number and its own discussion thread, but I’ve made enough progress on the “rolling release cadence” draft to be happy that I like it more than I like my own initial idea in PEP 598: https://github.com/ncoghlan/peps/pull/4/files

The major change relative to Steve’s write-up earlier in the thread is the use of the existing beta release numbering scheme for the fast track releases (and the design discussion has a section on why that is).

I’m on the team that maintains Python in RHEL, one of the distros you call most important. For RHEL, the release cycle of Python is completely irrelevant. We recently released RHEL 8 with Python 3.6. (And RHEL 7 only had Python 2.7 when it was released.) What’s relevant for us is being able to put new Pythons into Fedora early (because RHEL usually forks from a Fedora release, all RHEL work starts in Fedora). In terms of development, progress, and new stuff, I consider Fedora far more important than RHEL.

Miro, Is this the best link to Fedora’s release cycle? https://fedoraproject.org/wiki/Fedora_Release_Life_Cycle

Yes, it is, sorry for not providing it sooner.
