PEP 596: Python 3.9 Release Schedule (doubling the release cadence)

And now back to the proposal for the cadence change itself (this is still me talking as an individual contributor - the SC doesn’t have a collective opinion yet, although we’re definitely sympathetic to the motivation behind the proposal).

As noted above, I think an annual cadence would align nicely with other events in the Python ecosystem (most notably PyCon US and the now annual core development sprints).

However, I think going to a full supported-for-6-years release every year would increase community support matrices too much (this increased support burden was the main concern that stalled the previous proposals for more frequent standard library updates that were independent of core interpreter updates).

We’re talking more test runs in CI, more binary wheels on PyPI, more of everything. All of that would be wasted collective effort and resources if the intended target audience for the new release cadence is folks who are going to be upgrading to new feature releases almost as soon as they’re available (and that’s presumably the case - folks who aren’t already asking “That feature is committed to the CPython repo, why can’t I already use it in production/at school/wherever?” aren’t bothered by our release cadence being relatively slow).

So my proposal would be to go for something that’s closer to a hybrid of the Fedora and Ubuntu lifecycles (as well as drawing inspiration from the Django release cadence): alternate between our existing regular support period (which I’ll call a “Standard Lifecycle Release”) and a new reduced support period that pretty much only lasts until the next feature release (which I’ll call a “Minimal Lifecycle Release”).

In this variant of the proposal, we’d have an alternating cadence where 3.8 would be a Standard Lifecycle Release, and receive bugfix releases for 2 years (until late 2021), and then security updates for a further 5 years after that (until late 2026). By contrast, 3.9 would be a Minimal Lifecycle Release (starting in late 2020), and its support period would be greatly truncated: after a final bug fix release alongside the 3.10.0 release (in late 2021), Python 3.9 would go into security-fix-only mode until the 3.10.1 maintenance release. This approach would mean that if you chose to migrate from 3.8 to 3.9, you’d also be committing to migrate to 3.10 almost as soon as it was available. Alternatively, folks that didn’t want to make that commitment could stick with 3.8 until 3.10 was available.

From a support matrix perspective, the idea would be that community projects would face the following combinations:

  • active bug fix branches: 1 (e.g. 3.8 prior to 3.9 release) or 2 (e.g. 3.8 and 3.9 prior to 3.10 release)
  • active security-only branches: usually 3, occasionally 4 (during the couple of months where a Minimal release is being dropped from the matrix)

That’s still a larger support matrix than today, but it’s not dramatically larger the way it would be if we did a standard lifecycle release every year.

To help folks keep track of which releases were supported and in what state, we’d need to enhance our download pages and our developer guide with a diagram and supporting table akin to those the Django project uses to explain their release status at https://www.djangoproject.com/download/#supported-versions

(Note: I’m deliberately avoiding the “LTS” term, since I want to keep that reserved for the separate “Extended (aka Enterprise) Lifecycle for Python” concept at https://github.com/elpython/elpython-meta/blob/master/README.md, which aims to reduce the developer experience harms of LTS Linux distros keeping legacy Python versions alive for extended periods in response to commercial demand)

Also speaking for just myself, I agree a somewhat faster release cycle would be good for us. I also agree that the support matrix could be overwhelming. But I just hate, hate, hate the idea that there are (intentional or accidental) semantics in the low bit of the minor version. When I encountered this convention years ago for Linux, every time it was significant I had to ask some expert what the convention was. Maybe we should just name the releases with short lifecycles betas? Or we could bump the major version each time after a new LTS release? (So e.g. 4.0 would be a short-lived release, and 4.1 would be the following LTS; then 5.0 would be the next short-lived release, etc.) I know that sounds crazy too, but let’s please get a little bit creative about this rather than settle for even/odd.

For those who don’t know, this is the same as what django does:

Starting with Django 2.0, version numbers will use a loose form of semantic versioning such that each version following an LTS will bump to the next “dot zero” version. For example: 2.0, 2.1, 2.2 (LTS), 3.0, 3.1, 3.2 (LTS), etc.

source

I think you’d still have to ask an expert to learn the convention though.

Can we bump the major version number? How much code will break if sys.version_info[0] == 4? We’ve just spent a decade training our users to use “python 3” as the name of the language – how much pushback and confusion will there be if we start talking about “python 3 version 4.0”?

Even if we can’t, we can still potentially do clever things with the version number. (E.g.: LTS are named like 3.10, 3.20, non-LTS are 3.11, 3.12, skipping versions as needed? Switch to calver using 3.<encoded date>?) But it might affect which options we consider.

It’s not just the number of active releases for users to deal with, it’s also the number of active releases for US. How do backports work in any of these models?

Genuine question here: how would the slow/fast (or LTS/short-lived) releases really be different from our current releases (the slow cycle) vs. building from master every month (the fast cycle)?

There’s obviously a bit of stabilisation that happens as part of any real release, but besides that, how is this actually different from a “current” release and an “alpha” release? I can’t imagine we’re really going to do that much more stabilisation work than we do for alphas, and if the compatibility constraints in the fast track are as stringent as in the slow one, what does anyone gain?

Now, I do acknowledge that we get seriously low usage of our alpha and beta releases (though I think the beta usage is picking up) simply because of the names. If all we’re really proposing is making the prereleases sound more stable, then maybe we ought to just slow down main releases and increase prereleases to the point where people can’t ignore them?

Heh, I didn’t know that. I thought I was inventing it on the spot. Thanks for confirming this isn’t completely crazy.

The main challenge/concern with the status quo appears to be around the feature freeze at X.Y.0, combined with the 18 month period between X.Y.0 releases, so one possible approach would be to move the feature freeze later in the life of a release series, while still locking things like the filesystem layout and the C ABI compatibility at the point where we do now.

So instead of starting their life already frozen, the lifecycle of a feature release would look like:

  • X.Y.0 sets the filesystem layout, language AST, byte code format and C ABI for the release series, may introduce new deprecation warnings, converts old deprecation warnings to errors, and may introduce other arguably incompatible changes that require entries in the porting guide
  • 12 months of “backwards compatible feature additions” (typically X.Y.1, X.Y.2)
  • 12 months in bug fix only mode (all feature development starts targeting “X.(Y+1).0” instead)

From a timing perspective, this would be pretty similar to what I suggested above, but the version numbers would be different: what would have been 3.10.0 in the first approach would just be 3.9.2 in this approach.

In this model, there would be a consistent 24 months between X.Y.0 releases and feature freeze releases, but those periods would be offset by 12 months. As a result, if you miss a particular feature deadline, there’s always another one at most 12 months away.

The conservative upgrade path would continue to be to go from feature freeze release to feature freeze release, but there would be a new eager upgrade path available that tracked the pre-feature-freeze releases.

+1. I spent about twenty minutes trying to describe exactly this last night and then gave up :-)

Why the 2 year expansion of security support compared to what we do now?

So would this be a reasonable expectation for 3.8/3.9?

  1. 2019-10-15: 3.8.0 (what we expect in a feature release today)
  2. 2020-04-15: 3.8.1 (features allowed, so long as code from 3.8.0 doesn’t break)
  3. 2020-10-15: 3.8.2 (features allowed)
  4. 2021-04-15: 3.8.3 (bugfixes only)
  5. 2021-10-15: 3.9.0 (what we expect in a feature release today)
  6. 2021-11-15: 3.8.4 (bugfixes only)
  7. 2022-04-15: 3.9.1 (features allowed)
    … and so on.

To me that screams “bite the bullet and switch to semver” where we bump the major version every 2 years and we don’t bother with intentional bugfix releases until we reach the point of no longer adding backwards-compatible features:

  1. 2019-10-15: 4.0.0
  2. 2020-04-15: 4.1.0
  3. 2020-10-15: 4.2.0
  4. 2021-04-15: 4.2.1
  5. 2021-10-15: 5.0.0
  6. 2021-11-15: 4.2.2
  7. 2022-04-15: 5.1.0

Otherwise this seems like we are getting into the situation that Guido pointed out where you need innate knowledge of what those versions represent in order to know if there’s new features or if it’s just a bugfix release (and you can’t rely on the micro number if we have to do an emergency bugfix release). IOW how am I supposed to know what could have changed in 3.8.2 versus 3.8.3 as a user?

It also feels like we’re working hard to avoid scaring the community into thinking Python 3 -> 4 is like 2 -> 3. If that’s what this scheme is meant to cover and people are worried, I say jump to Python 10 and basically start fresh with our numbering scheme.

How would this impact new features?

E.g. would positional-only arguments potentially have needed to wait 2 years under this proposal? I know the syntax is backwards-compatible, but there’s enough other things that have changed in terms of APIs that I think this proposal does push some changes out farther than 18 months (which I’m not claiming is a bad thing, but something to acknowledge).

Just an error regarding our current policy - I mixed up “security fixes until 5 years after release” and “5 years in security-fix only mode”. The resulting math seemed a bit off, but it didn’t click as to why until you pointed it out.

“Were any new features added in X.Y.Z?” isn’t a question users typically need to ask (except for purely academic purposes).

Instead, the questions they tend to ask are:

  • Can I install this version in parallel with my existing version without overwriting it? (No for different point releases in the same X.Y series, yes for different X.Y series)
  • Is my existing software at risk of failing to run? (Some risk for dot-zero releases, little or no risk for subsequent point releases)
  • Do I need to rebuild my extension modules? (Probably for dot-zero releases, no for subsequent point releases)
  • Do I need to recompile my pyc files? (Yes for dot-zero releases, no for subsequent point releases)
  • Does this package I want to use support the point release I am currently running? (Can already be “No” for earlier point releases, as the package may depend on a particular regression or compatibility issue being fixed in a later point release)
  • When will this release stop receiving security updates? (Until the next point release comes out for any given point release, and until 5 years after the dot-zero release for the release series as a whole)

I’m just looking for the lowest risk approach that gains us the reduced feature latency benefit that we’re after with the fewest undesirable consequences. “Move the feature freeze 12 months later in the release series lifecycle” is the best I’ve come up with so far, as it provides most of the desired benefits, while the downsides are ones that distributors and end users already have to deal with thanks to bug fixes and provisional APIs.

The compatibility issues with positional args weren’t inherent in the change itself - they arose from design choices in related APIs. For a non-zero point release, any approach that required a new porting note or deprecation warning would be deemed unacceptable, so folks would either need to design a more conservative API change (which is where I expect we’ll be for positional args in 3.8.0b2), or else wait for the next dot-zero release.

The whole reason “bugfix only” releases are a thing though, is that no-one trusts software developers who claim that they’re adding features but they’re definitely all backwards compatible.

Also, it seems like part of the value of short-cycle feature releases would be to ship disruptive changes there first, so we can shake out any issues there before we commit to supporting the new feature in an LTS?

I ask that pretty much every time a new version of any software I use comes out. And even more relevantly, I regularly want to know:

  • What version will the new feature that I know has been developed get released in?
  • Will there be new features in the upcoming release?

Both of these are easily answered by looking in release notes, but under the proposed scheme would be really hard to infer from the version number alone. Under the current scheme, they are easy to answer:

  • If the current version is X.Y.Z, then it will be in X.(Y+1).0
  • If the upcoming version is X.Y.Z, then yes if Z=0 else no.
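To make the point concrete, the current convention is simple enough to express in a couple of lines of Python (the `is_feature_release` helper here is purely illustrative, not an existing API):

```python
def is_feature_release(version: str) -> bool:
    """Under the current scheme, X.Y.0 introduces new features;
    any later X.Y.Z release contains only bug fixes."""
    major, minor, micro = (int(part) for part in version.split("."))
    return micro == 0

print(is_feature_release("3.8.0"))  # True: new features
print(is_feature_release("3.8.2"))  # False: bug fixes only
```

Under the proposed scheme, no such function can be written from the version number alone, which is exactly the objection.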

I’m with Guido that you shouldn’t need to ask an expert to understand the significance of a version number.

There are major comms downsides to any change. If “zero chance of end user confusion” is the bar we set, then we’re going to end up stuck with the status quo again.

However, actually changing the version numbers faster has major practical consequences beyond mere potential for confusion - more versions that can be parallel installed, more versions to build for, more versions to test against.

We don’t need to inflict that level of pain on the ecosystem if our goal is merely to reduce the shipping latency of standard library features - instead, we can choose to inflict the lesser pain of users not being able to infer whether a release is feature frozen or not merely from the version number.

One of the advantages of working against this fear is that we’ve ended up with a much more coherent language and runtime. If “anyone” could add “anything” without considering whether it would be best deferred to Python 4, I suspect we would have a pretty big mess by now.

(The C# team chooses a theme for each of their major releases, and the C++ committee has a detailed vision. I don’t know of any other languages that change as often without looking like completely random motion.)

Just to add here, I started a thread on python-dev back in April 2018 about switching to calendar versioning. The MM->MM3 migration seems to have ended up consolidating it with a thread it broke off from, but search for “Python version numbers” in this thread to see the discussion from back then.

At this point, I think the biggest problem with calendar versioning is not the psychological barrier of “a number other than 3” but rather the same problem with any versioning scheme that switches to a number other than 3, which is all the code that has erroneously conflated sys.version_info[0] == 3 with sys.version_info[0] > 2, and related problems.
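As a hypothetical sketch of that conflation (not taken from any particular codebase), the common mistake looks like this:

```python
import sys

# Fragile: written during the 2-to-3 transition, this check really meant
# "not Python 2", but it would wrongly report failure on a hypothetical
# Python 4 (or on a calver-style major version).
def supports_new_syntax_fragile(version_info=sys.version_info):
    return version_info[0] == 3

# Robust: compare against the minimum version actually required.
def supports_new_syntax(version_info=sys.version_info):
    return version_info[:2] >= (3, 0)

print(supports_new_syntax_fragile((4, 0, 0)))  # False - breaks on "Python 4"
print(supports_new_syntax((4, 0, 0)))          # True
```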

I have to say, though, if the biggest problem we have with PEP 596 is what versioning scheme to use, I consider that a pretty ringing endorsement for the PEP.

That’s not the biggest problem. ;-) There are a few proposals floating in this thread that accelerate releasing features to the wild but which all differ in various ways (which always happens when this topic comes up).

I think this is very quickly coming to a point where people will just have to write a PEP like Łukasz has and lay out:

  • The hypothetical release schedule for the next two Python releases
  • Support expectations
  • Potential impact on core devs, package maintainers, and users in terms of increased/decreased overhead

A simple cadence of annual releases is easy to remember and makes planning easier for everyone.

It may also help us sync up with other schedules (e.g. annual PyCon for communications, gcc schedule, ubuntu, fedora, etc.)… so overall less noise which is good.

Aye, agreed - I volunteer to write one for “Delaying the feature freeze for API additions” (which I should be able to get to on Saturday).

In relation to Paul’s concern regarding how users (and contributors!) would know whether or not a release series was still receiving feature additions, my proposal will be that we make the last feature addition release of the most recent stable series coincide with the first alpha release of the next development series. (i.e. 3.10a1 would be published 12 months after 3.9.0, and would indicate that the 3.9.x series had now entered its feature freeze phase)

I still prefer a scheme that means I can interpret a version number in isolation. Is 3.9.17 a feature release or not? Don’t know, it depends on when it was released in relation to 3.10a1.

Yep, and that’s the one criterion on which switching to full semantic versioning would be superior (as major feature releases would correspond to major version number bumps, minor feature releases would correspond to minor version bumps). There are just significant backwards compatibility and migration concerns associated with actually doing that which I would like to separate from the question of introducing a major/minor feature release split in the first place.

I’m about to start drafting this concept in PEP form, so that should be up later today.

Initial version of the “minor feature releases” PEP has been posted: https://github.com/python/peps/pull/1108/files

The proposed Python 3.9.0 timeline is actually only a few months longer than that in PEP 596, but the two PEPs diverge significantly after that (PEP 596 would have another major feature release take place in 2021, whereas PEP 598 proposes a series of minor feature releases targeting existing installations instead, with the next parallel installable feature release not happening until 2022)

For the “Why not full semantic versioning?” question, I kick that can down the road for a couple of years - it is clearly a good fit from a philosophical perspective, and it’s only the practical technical difficulties that make me want to separate it from the question of giving ourselves access to minor feature releases in the first place.