My thinking on the two-stream model is almost too biased towards the library developers and against the application developers.
To answer the concrete question you asked:
>=3.10 OR >=2020.08 is a redundant condition, because any version that satisfies the latter also satisfies the former. But the idea is that your package “probably works on latest” and “works on >=3.???”.
The trick is that users must be on the latest fast track release. That’s an explicit requirement. It’s not enforced, of course, until someone comes and says “why doesn’t package X work on latest from two months ago” and you say “latest != two months ago; update and come back”. The CalVer scheme is less about versioning and more about making it easy to see when a user is in the wrong place - they’ll show up with the version number, and you’ll know straight away either that you have a new/current problem, or that it’s irrelevant.
And once you know you have a problem, you fix it in a way that works on the latest stable 3.x version, as well as the latest release from the fast stream, and then you’re done. You don’t have e.g. 3.8, 3.9, 3.10 and 3.11 beta all active at once, just two versions. Which means the app developers are more likely to have to install dependencies from source than before, but the library developers don’t have to maintain as many Python versions.
It would of course be possible to keep the core idea, but adapt the versioning slightly:
Have odd minor releases (3.9.x / 3.11.x / …) be the rolling release model, and even minor releases (3.10 / 3.12 / …) be the stable version. That way, no gymnastics with the version numbers are necessary.
Additionally, it avoids things like “rolling alphas” or “rolling betas”, both of which would probably see little use, as distros, universities, enterprises etc. would (often by policy) not roll out a beta release.
PS. GNOME uses a very similar versioning scheme, for example.
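The odd/even split proposed above can be sketched with a tiny classifier. To be clear, both the function and the scheme are hypothetical illustrations, not anything CPython actually implements:

```python
def release_stream(version: str) -> str:
    """Classify a CPython version under the hypothetical odd/even scheme:
    odd minor numbers are rolling releases, even minor numbers are stable."""
    major, minor, *_ = (int(part) for part in version.split("."))
    return "rolling" if minor % 2 else "stable"

assert release_stream("3.9.1") == "rolling"   # odd minor: rolling stream
assert release_stream("3.10.0") == "stable"   # even minor: stable release
```

One nice property of this encoding is that ordinary version sorting keeps working: each stable release sorts directly after the rolling series that led up to it.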
So for that to work, libraries distributing non-universal wheels would have to release new binaries for every fast-track release. Otherwise, those users who you’re insisting must upgrade monthly would simply say “I can’t, there’s no new numpy/scipy/whatever release yet”. My instinct is that you’d have quite a job persuading the scientific Python stack to switch to a monthly release cycle…
For a proposal like this to work, you’d need a lot of tooling changes to enable binary extensions that work for multiple Python versions (on the fast track release path, at least). Unless I’m missing something here - this is something that’s come up before in these discussions.
Strong -1 on making it harder for end users (or anyone using libraries) to get access to binaries for libraries. Obviously, libraries will supply binaries for the main 3.X releases, but if we don’t encourage binaries on the fast track releases, people simply won’t use them. (My evidence for that assertion is anecdotal, but I’m pretty confident in it - on a personal note, I definitely wouldn’t be able to use the fast track stream if I couldn’t rely on binaries for projects like numpy, pandas, matplotlib, etc, existing).
I’m actually fine with that as an outcome, as my mental personas for the potential consumers of the fast track releases are folks that are either:
- operating a web service, and hence able to build and cache their own wheels for whichever version they’re using in their own build pipeline
- building an application (whether web, desktop, or mobile) or physical appliance that bundles its own Python, and are hence able to make their own wheels or pre-installed virtual environments at the same time as they make the rest of their software bundle
End users that say “Having wheel archives available from PyPI makes an enormous difference in release usability for me” would instead fall squarely into the category of folks for whom the status quo works reasonably well, and for them, the intended payoffs of a split release model would be:
- higher proportion of wheels available from PyPI on X.Y.0 release days
- fewer unexpected dependency breakages when upgrading to a new X.Y.0 feature release (due to more routine compatibility testing by folks that have opted in to the continuous beta stream)
(Edit: it also occurs to me that this approach would mean that “is compatible with the continuous beta stream of releases” would become another benefit for extension modules targeting the stable C ABI rather than the full CPython ABI)
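To make the stable-ABI point concrete: a wheel’s filename encodes compatibility tags, and it’s the ABI tag that determines whether the wheel must be rebuilt for each CPython feature release. A deliberately simplified parser, using made-up wheel names (real tag matching is more involved than splitting a filename):

```python
def wheel_tags(filename: str) -> tuple[str, str, str]:
    """Split '<name>-<version>-<python>-<abi>-<platform>.whl' into its tags."""
    stem = filename.removesuffix(".whl")
    *_, python_tag, abi_tag, platform_tag = stem.split("-")
    return python_tag, abi_tag, platform_tag

# Tied to CPython 3.10 only - needs a rebuild for every feature release:
assert wheel_tags("demo-1.0-cp310-cp310-manylinux1_x86_64.whl")[1] == "cp310"
# Stable-ABI wheel - usable on CPython 3.6 and every later release:
assert wheel_tags("demo-1.0-cp36-abi3-manylinux1_x86_64.whl")[1] == "abi3"
```

Under a rolling release stream, the `abi3` case is the one that lets a single binary keep working as the interpreter version ticks forward.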
Not a joke. Version comparisons are done numerically (according to whichever PEP we wrote to define them), and since 2020 is greater than 3, the latter condition is always satisfied by the former.
Perhaps you were reading it differently than I intended? I recognise now that there are two interpretations of what the specification applies to. I was thinking of a Requires-Python spec, since Paul mentioned it, but under the more general interpretation, where you might use it as a shorthand, it could imply the opposite.
Note that I’m not recommending this or saying it’s a good idea, just pointing out that it’s how the comparison would work. It’s safe to switch from SemVer or series versioning to CalVer, but not to then switch back or to mix them.
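The numeric comparison behind this can be shown with a minimal sketch. This ignores most of the real comparison rules defined in PEP 440 (pre-releases, epochs, and so on) and just compares release segments:

```python
def release_tuple(version: str) -> tuple[int, ...]:
    """Compare versions numerically, segment by segment (release part only)."""
    return tuple(int(part) for part in version.split("."))

# 2020 > 3, so every version satisfying >=2020.08 also satisfies >=3.10:
assert release_tuple("2020.08") > release_tuple("3.10")
# ...which is why switching from SemVer to CalVer is safe, but switching
# back is not: every post-switch version would compare as "older".
assert release_tuple("4.0") < release_tuple("2020.08")
```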
We can discuss it more when there’s a new thread for the full PEP, but my thinking is that you would never have a version restriction against the fast release series. So this side discussion is irrelevant.
I’m fine with PEP 602, as it delivers releases faster for me. But I’m not really the one who pays the cost (which falls mainly on the RMs and third-party projects providing non-universal binaries, as well as possibly integrators who sync to Python releases), so I personally don’t think my vote is that important there.
I see the virtue of a 2-year cycle, particularly if there’s a predictable beta as well as a predictable release, e.g. betas in October of even years and full releases in October of odd years. You kinda get the same virtue from, say, March betas and October releases, which frames the discussion back in terms of cadence rather than duration.
A 2-year cycle with a 1-year cadence, e.g. with the 3.13 beta being released in October at the same time as 3.12.0 final, might be useful. As for deprecation, marking something as deprecated at the same time as releasing a beta without that feature (or emphasising its deprecated status if it remains deprecated without yet being removed) has some potential.
I run my tests on 3.8.0b4 locally and 3.7.4-[latest docker image] in CI. Our team follows micro versions.
If Python had a new 3.x release once a week we might skip some versions, but arguably it would kick us in the butt to improve our release process. Consider our project as an example of a Python end user: we have to track not only Python but also ~30 direct dependencies, ~100 transitive dependencies, and other random deps, like glibc CVEs. Python is not the bottleneck.
All this talk about too fast or too slow (in this thread) smells of bikeshedding and I think misses the point.
This reads as “no point discussing slower or faster; just make it faster” which is not an argument with any value. We don’t need to change anything at all, so if changing the rate of releases is “bikeshedding” then the status quo wins automatically, not the proposal.
(Just a general reflection on the comment, in the hope of pushing us towards a decent discussion. Don’t take this as a personal attack - you’re certainly not the first to try this persuasive tactic.)
It still needs more work (and feedback from @steve.dower as co-author) before it will be ready for assignment of a PEP number and its own discussion thread, but I’ve made enough progress on the “rolling release cadence” draft to be happy that I like it more than I like my own initial idea in PEP 598: https://github.com/ncoghlan/peps/pull/4/files
The major change relative to Steve’s write-up earlier in the thread is the use of the existing beta release numbering scheme for the fast track releases (and the design discussion has a section on why that is).
I’m on the team that maintains Python in RHEL, one of the distros you call most important. For RHEL, the release cycle of Python is completely irrelevant. Recently, we’ve released RHEL 8 with Python 3.6. (And RHEL 7 only had Python 2.7 when it was released.) What’s relevant for us is to be able to put new Pythons in Fedora early (because RHEL usually forks from a Fedora release, all RHEL work starts in Fedora). In regard to development, progress, and new stuff, I consider Fedora far more important than RHEL.