In an ecosystem with a high level of interdependence between libraries, “may support” releases don’t really make much sense to me. All it takes is for one or two key libraries not to support a given release, and it becomes hard or impossible for any library to claim to support that release (sure, “We support Python 3.11, but are not responsible for any issues related to our dependencies” is a reasonable support statement, but in practice it doesn’t help users who can’t use my library on a version I claim to support). So in practice, “may support” versions are “must support, unless you’re OK with your users complaining”.
From an application-centric viewpoint, “may/must support” versions make more sense. But library support is more complex (it’s the same logic that says applications should pin their dependencies but libraries need to keep their dependency specifications broad)…
So this would work well with “test the fast track release, the current stable release, and as many security-fix releases as you can afford”? (Assuming we can get the various CI systems to pick up the faster releases in a timely manner)
It’s different because the stable release cycle is two years long: Debian releases approximately every two years.
It means a high chance of getting a four-year-old system Python, because Debian will pick whichever Python is the current stable release at the moment of the Debian freeze.
I expect the same from other Linux distributions.
A fast release cadence fits only the Arch/Gentoo model, not Ubuntu/Fedora.
Actually, fast releases look much closer to betas than to normal releases.
PEP 602 proposes a full-fledged release every year with a two-year support period; there is no fundamental difference between those versions.
In the conversations we had last week, aligning with the Linux distro release cycles didn’t seem to be this critical. Perhaps we need to consider that more strongly?
Luckily, the other important platforms don’t mind when we release, they’re far more flexible. So we can adapt to Linux here. Even with an annual release, we only gain a year at most.
(Alternatively, do we need to become the system Python? Or should we be looking for approaches to distribute the latest Python build sooner but on request and discourage users from relying on system Python having the latest features?)
So as a user, how would I install Python? Install 3.9, then 2019-09, then 2020-04, then 2020-12, then 3.10, then…? That sounds like a pretty messy setup, as the version style for Python varies. I don’t want to just take 3.x, because I’m an early adopter, and I like shiny new features. But I don’t want to just take the dated releases, as that makes it hard for me to discuss the version I’m on with other people who are focusing on 3.X releases.
And as a library maintainer, I still have to support 3.X and YYYY-MM releases, as my users could be using either. So it’s just the complex versioning with no corresponding benefit.
And as a packaging infrastructure developer, how the heck do we re-specify the Requires-Python tag? How would a library specify >=3.10 in a way that also catches calendar-versioned releases that are post-3.10? Would the packaging libraries need to hard-code 3.X release dates somehow? Or would we expect package authors to remember to encode >=3.10 OR >=2020-08?
Basically, this seems like a solution that at best changes which groups of users see the complexity (I assume your proposal would be of benefit to other groups, like maybe standalone application developers?) And in practice, I think it will probably add complexity globally, even if it improves things for some subsets of users.
In Fedora, we ship new Python releases as soon as the first alphas are out. Shipping a half dozen Python interpreters is quite easy. What users are sometimes concerned about is other 3rd party libraries that are not pip-installable (usually system library bindings such as libvirt, dnf, rpm…). Those are only built and shipped for the Python version you call “system Python”, and changing this Python version is a big coordinated process planned months and months ahead (the “integration” thing is a tad tricky). As a Linux distro, we already “distribute the latest Python build sooner”, and the release schedule still has a large impact on us.
I don’t think it’s possible to avoid an eventual increase in complexity here, as the size of CPython’s existing install base means that the only “low” complexity option is to continue with the status quo. The status quo isn’t actually simple, but people have had ~20 years to adapt to its particular flavour of complexity (as the essential approach hasn’t really changed much since Python 2.0). Unfortunately, the world around us doesn’t stand still, and a release model that was a generally good fit for software deployment models in the late 1990s and early 2000s isn’t necessarily the best approach for all situations in 2020+.
Thus the main potential benefit offered by Steve’s suggestion of a new production-ready release stream is that instead of needing to compromise between the interests of existing consumers that are well served by the current release model and those that aren’t, we can instead focus on designing a new release model that covers scenarios that aren’t as well supported as they could be today. (In particular, environments with sufficiently strong CI pipelines that adopting a new Python feature release isn’t much more eventful than any other code change).
Now, it may be that CalVer releases from trunk isn’t the best way to better adapt to those situations (the practical concerns you raise above are genuine problems). One alternative might be to run trunk in perpetual beta (ditching the alpha releases entirely), postpone feature freeze to the first release candidate (lengthening the rc phase accordingly), and commit to users that all CPython releases (even beta releases) will be considered suitable for use in production - the differences would be in API and ABI stability guarantees across updates, not in release quality.
Relative to the incremental feature release proposal in PEP 598, one key advantage of the perpetual beta model is that it wouldn’t change anything in the user experience after the X.Y.0 release in each release series, so folks that are happy with the status quo would only see the baseline feature release cadence change from 18 months to 24 months and no other changes.
Relative to the annual release cadence proposal in PEP 602, the same benefit would apply as for PEP 598 (i.e. minimal impact on folks that are happy with the status quo), but it should also provide the following benefits:
for consumers that can consume feature updates easily, a perpetual beta release stream would offer an even lower feature latency than would be offered by an annual feature release cadence (but with more stability than can currently be found when attempting to consume trunk directly)
more flexibility in adjusting where the release candidate phase falls in the years where we produce an X.Y.0 release (as both the PyCon US sprints and the core dev sprints would mainly be focused on the next beta and bug fix releases, rather than specifically the next X.Y.0 release)
So while I’m not quite prepared to withdraw PEP 598 in favour of Steve’s proposed model just yet, I’m intrigued enough by the concept that I’ve offered to co-author writing it up as a PEP (hopefully later this week). If I come out of that process convinced that the perpetual beta model is likely to be a better option overall than the incremental feature releases idea, then I’ll include the withdrawal of PEP 598 in the same PR that adds the perpetual beta write-up.
But if that new release model is ignored by people who are comfortable with the current release model (if not necessarily with the release frequency), then it may well be useless. For example, you can’t offer a new release model aimed at the needs of standalone application developers if library developers ignore it and hence don’t support that release stream.
My gut instinct is that any new release model (or stream in a multi-stream model like the one @steve.dower is proposing) must be supportable by library developers without significant extra effort, because there are very few usage models that don’t rely on the 3rd party library ecosystem. But I’ll admit I may be biased in that view.
My thinking on the two-stream model is, if anything, too biased towards the library developers and against the application developers.
To answer the concrete question you asked:
>=3.10 OR >=2020.08 is an unnecessary condition, because the latter is implied by the former. But the idea is that your package “probably works on latest” and “works on >=3.???”.
The trick is that users must be on the latest fast track release. That’s an explicit requirement. It’s not enforced, of course, until someone comes and says “why doesn’t package X work on latest from two months ago” and you say “latest != two months ago; update and come back”. The CalVer is less about versioning and more about making it easy to see when a user is in the wrong place - they’ll show up with the version number, and you’ll know straight away either that you have a new/current problem, or that it’s irrelevant.
And once you know you have a problem, you fix it in a way that works on the latest stable 3.x version, as well as the latest release from the fast stream, and then you’re done. You don’t have e.g. 3.8, 3.9, 3.10 and 3.11 beta all active at once, just two versions. Which means the app developers are more likely to have to install dependencies from source than before, but the library developers don’t have to maintain as many Python versions.
It would of course be possible to keep the core idea, but adapt the versioning slightly:
Have odd minor releases (3.9.x / 3.11.x / …) be the rolling release model, and even minor releases (3.10 / 3.12 / …) be the stable version. That way, no gymnastics with the version numbers are necessary.
Additionally, it avoids things like “rolling alphas” or “rolling betas”, both of which would probably see little use, as distros, universities, enterprises etc. would (often by policy) not roll out a beta release.
PS. GNOME uses a very similar versioning scheme, for example.
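As a sketch of how tooling could tell the two streams apart under that convention (purely hypothetical, modelled on the old GNOME and Linux kernel odd/even scheme; `release_stream` is an illustrative helper, not a real API):

```python
# Hypothetical helper: under an odd/even convention, the minor version
# number alone identifies which release stream a build belongs to.

def release_stream(major: int, minor: int) -> str:
    """Classify a release as rolling (odd minor) or stable (even minor)."""
    return "rolling" if minor % 2 == 1 else "stable"

print(release_stream(3, 9))   # rolling
print(release_stream(3, 10))  # stable
```

The appeal is that no new version syntax is needed: existing specifiers, installers, and CI configs keep working unchanged.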
So for that to work, libraries distributing non-universal wheels would have to release new binaries for every fast-track release. Otherwise, those users who you’re insisting must upgrade monthly would simply say “I can’t, there’s no new numpy/scipy/whatever release yet”. My instinct is that you’d have quite a job persuading the scientific Python stack to switch to a monthly release cycle…
For a proposal like this to work, you’d need a lot of tooling changes to enable binary extensions that work for multiple Python versions (on the fast track release path, at least). Unless I’m missing something here - this is something that’s come up before in these discussions.
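The reason binaries stop working across releases is visible in the wheel filename itself: the interpreter and ABI tags pin a non-universal wheel to one CPython feature release. A stdlib-only sketch of reading those tags (the parsing here is simplified relative to the real wheel spec):

```python
# Illustration: a wheel's filename encodes the interpreter/ABI/platform it
# was built for, which is why non-universal wheels need rebuilding for
# every new feature release. (Simplified parsing, illustrative only.)

def wheel_tags(filename: str) -> tuple:
    """Split the python/abi/platform tags out of a wheel filename."""
    name, version, py, abi, plat = filename[:-len(".whl")].split("-")
    return py, abi, plat

print(wheel_tags("numpy-1.19.0-cp38-cp38-win_amd64.whl"))
# ('cp38', 'cp38', 'win_amd64') -- only installable on CPython 3.8
```

A wheel tagged `abi3` (the stable C ABI) is the existing escape hatch: it stays installable on later CPython releases without a rebuild.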
Strong -1 on making it harder for end users (or anyone using libraries) to get access to binaries for libraries. Obviously, libraries will supply binaries for the main 3.X releases, but if we don’t encourage binaries on the fast track releases, people simply won’t use them. (My evidence for that assertion is anecdotal, but I’m pretty confident in it - on a personal note, I definitely wouldn’t be able to use the fast track stream if I couldn’t rely on binaries for projects like numpy, pandas, matplotlib, etc, existing).
I’m actually fine with that as an outcome, as my mental persona for the potential consumers of the fast track releases are folks that are either:
operating a web service, and hence able to build and cache their own wheels for whichever version they’re using in their own build pipeline
building an application (whether web, desktop, or mobile) or physical appliance that bundles its own Python, and are hence able to make their own wheels or pre-installed virtual environments at the same time as they make the rest of their software bundle
End users that say “Having wheel archives available from PyPI makes an enormous difference in release usability for me” would instead fall squarely into the category of folks for whom the status quo works reasonably well, and for them, the intended payoffs of a split release model would be:
higher proportion of wheels available from PyPI on X.Y.0 release days
fewer unexpected dependency breakages when upgrading to a new X.Y.0 feature release (due to more routine compatibility testing by folks that have opted in to the continuous beta stream)
(Edit: it also occurs to me that this approach would mean that “is compatible with the continuous beta stream of releases” would become another benefit for extension modules targeting the stable C ABI rather than the full CPython ABI)
Not a joke. Version comparisons are done numerically (according to whichever PEP we wrote to define them), and since 2020 is greater than 3, the latter condition is always satisfied by the former.
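A quick stdlib-only illustration of that ordering (hypothetical version strings; the real comparison logic lives in the packaging tooling, this just mimics its numeric release-segment comparison):

```python
# Illustration only: release segments compare numerically, so any
# CalVer-style year sorts after any 3.X release.

def release_tuple(version: str) -> tuple:
    """Parse a dotted release string into a tuple of ints for comparison."""
    return tuple(int(part) for part in version.split("."))

assert release_tuple("2020.8") >= release_tuple("3.10")  # 2020 > 3
assert release_tuple("3.9") < release_tuple("3.10")      # 9 < 10, numeric not string order
```

So a constraint like `>=3.10` already admits every dated release, which is why the `OR` clause adds nothing.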
Perhaps you were reading it differently than I intended? I recognise now that there are two interpretations of what the specification applies to. I was thinking of a Requires-Python spec, since Paul mentioned it, but under the more general interpretation, where you might use it as a shorthand, it could imply the opposite.
Note that I’m not recommending this or saying it’s a good idea, just pointing out that it’s how the comparison would work. It’s safe to switch from SemVer or series versioning to CalVer, but not to then switch back or to mix them.
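The one-way nature of that switch is easy to demonstrate with the same numeric comparison (hypothetical version sequence, stdlib only):

```python
# Why switching back from CalVer is unsafe: once dated releases exist,
# any later return to small major numbers sorts *before* them.

def release_tuple(version: str) -> tuple:
    """Parse a dotted release string into a tuple of ints for comparison."""
    return tuple(int(part) for part in version.split("."))

history = ["3.9", "2020.8", "2020.12", "4.0"]  # hypothetical release sequence
print(sorted(history, key=release_tuple))
# ['3.9', '4.0', '2020.8', '2020.12'] -- a later "4.0" sorts before the CalVer releases
```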
We can discuss it more when there’s a new thread for the full PEP, but my thinking is that you would never have a version restriction against the fast release series. So this side discussion is irrelevant.