[ACCEPTED] PEP 602: Annual Release Cycle for Python

Hi @hroncok, I moved the discussion to Users so you can respond.

Thanks for your feedback! Having an annual release cycle has the nice property of aligning with Python community events, the two most important being PyCon US (+ the following sprints) and the annual week-long core sprint.

Moving the annual release calendar forward such that the final freeze happens in September would misalign it with those two events. Due to the fluidity of the calendar, for example in terms of the number of release candidates, this change wouldn’t actually guarantee Python would align with Fedora every year.

Your Plan B of always shipping Python six months after its release sounds to me like a big improvement over the current situation. You gave the example of Python 3.8 above. With annual Python releases, shipping 3.X.5 or 3.X.6 with Fedora will make it easier for you to ship related third-party projects that the community has already upgraded to work with that version of Python. That should help with the exhaustion you mentioned. What do you think?


Shipping 3.X.5 or 3.X.6 would certainly make things much easier and help with the exhaustion. However, Fedora would no longer be driving early adoption of new Python 3.X versions in various upstream projects as it currently does, and I’ve always considered that a benefit to the larger Python ecosystem, not to Fedora itself. This actually sounds like a big drawback compared with the current situation.

Some datapoints:

  • Fedora 29 Final was shipped with 3.7.0.
  • Fedora 26 Final was shipped with 3.6.1.
  • Fedora 24 Final was shipped with 3.5.1.
  • Fedora 21 Final was shipped with 3.4.1.
  • Fedora 18 Final was shipped with 3.3.0.
  • Fedora 15 Final was shipped with 3.2(.0).

We wanted to ship 3.8.0 in Fedora 31 but didn’t make it due to the closely misaligned schedules. I want to avoid this in the future, but the current annual release cycle proposal makes the misalignment permanent.

When we switch packages to a new Python version, we often discover bugs. I’d rather help discover them during the betas than during 3.X.5.

Having Fedora and Python release schedules closely aligned (as happens e.g. with Fedora and GNOME) would IMHO be hugely beneficial to both projects. Having Python releases scheduled as proposed is the worst possible combination with Fedora’s schedule – it either creates an impossible race we cannot win, or it drops the benefits of the “adapt soon” relationship Fedora has had with Python so far.

What is the maximum number of days this can be moved? Is it 0?

We can certainly live on the edge. I just don’t want to settle for permanent misalignment.

Since this has been moved over to the Users section, I think it would be a good idea to add a poll to the topic with something along the lines of “Which major release cadence do you prefer?”, including “9 months”, “12 months”, “18 months”, “24 months”, and “other” as options. Even if some of these are not open for direct consideration, it would be an easy way to get a rough idea of preferences.

The input form that @ambv created will be a great way to get more detailed feedback, but I suspect that it will end up leaving out quite a few users. The results of a single question poll give less information overall, but tend to be more widely appealing since they require far less time investment.

Also, could the post be edited to include a link to the input form at the bottom? Not everyone actively uses Twitter.


I’m just a Python user, but one advantage of longer release cycles that is not mentioned in the PEP is that it is easier to advocate for a version upgrade when there is a decent number of new features to enjoy.

I can always find one or two reasons to upgrade (dataclasses for 3.7, positional-only arguments for 3.8, …), rarely more, and I feel I might lose interest in following new releases when they’re smaller. This goes along with @pitrou’s comment saying that

it seems that our new features are more and more specific, less and less game-changing
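For illustration, a hedged sketch of the two headline features mentioned above (the names and values here are made up):

```python
# Illustrative only: dataclasses arrived in Python 3.7,
# positional-only parameters (the "/" marker) in Python 3.8.
from dataclasses import dataclass


@dataclass
class Release:
    version: str
    year: int


def schedule(version, /, *, year):
    # "version" can only be passed positionally, "year" only by keyword.
    return Release(version, year)


print(schedule("3.8", year=2019))  # Release(version='3.8', year=2019)
```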

Also, nice work on the graphics, @ambv. I found the first graphic with the visual representations of each release phase and minor version to be especially helpful.

Could we potentially upload it somewhere on python.org when/if this PEP is finalized? I think it would provide a particularly helpful and simple method of communicating the release phases to both users and package maintainers.

I’ve been talking about a possible alternative with a few people and wanted to summarise it here. I give permission to merge it into the PEP as a [rejected] alternative (or we can move the current proposal into the rejected ones, I guess :clown_face:)

Fast and Stable releases

The main goal is to minimize the number of releases in bugfix mode (that period between the x.y.0 release and entering security fix-only mode) and also minimize the time between new code/features being merged and becoming available to users.

In short, we have a stable release series that looks basically the same as our current releases (3.x.y). The main bugfix period becomes 2 years, and security-fix period remains as it is. The next release’s beta is 6 months (or 3, I don’t really mind) overlapping with the end of the previous version’s bugfix, so that we only ever have one stable release taking bugfixes.

We also have a fast release series that is calendar versioned (“2019-09”, “2019-10”, etc.) and releases from master every month (or I could see every three months being reasonable). This becomes like a continuous alpha release, and of course we core developers have to take compatibility seriously all the time to avoid breaking users unnecessarily.

In pictures (version numbers and months selected arbitrarily - can realign however we want):

One fast release each month, six-month beta periods and 24-month bugfix:

Alternatively, one fast release every three months and stable releases as above:

For users, the implications are:

  • if you’re on fast track release X-20YY, you won’t get bugfixes unless you update to (X+1)-20YY
  • if you’d prefer a slower release, there is exactly one current release, and potentially one beta release (for those whose job it is to make sure it works before upgrading their users’ CPython version)

For library developers:

  • your own stable releases should work with stable track CPython, and may work with fast track (if desired)
  • your own prereleases should probably work with fast track, if you want early-adopting users to be able to give you feedback
  • your minimal test matrix is “current fast track release” and “current stable/bugfix release”, optionally also testing the beta stable release when available

For core developers:

  • fast track and stable must install side-by-side
  • we have to become more careful about making any breaking changes (or face broken users within 1-3 months), and probably think about better use of runtime or per-module feature flags
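As a hedged sketch of what such a runtime feature check might look like (CPython offers no dedicated feature-flag API today, so this simply keys off the interpreter version):

```python
# Gate new behaviour on the running interpreter instead of breaking
# users of older releases outright.
import sys

# Assignment expressions (the walrus operator) landed in CPython 3.8.
HAS_WALRUS = sys.version_info >= (3, 8)


def describe():
    return "walrus available" if HAS_WALRUS else "walrus unavailable"


print(describe())
```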

I don’t really think it’s helpful to post an entirely different proposal in the same discussion thread…

(@steve.dower, thanks for your input. I’ll respond to that in a separate post.)

The informal poll currently has 54 responses on it. The activity died out so here’s the summary of the results.

Thank you to everybody that participated! I’ve gotten responses from developers of many notable libraries, among others (alphabetical order): aiohttp, attrs, BeeWare, conda-forge, Coverage.py, DevPI, flake8, Flask, httpx, Hypothesis, Jupyter, multidict, NumPy, pre-commit, PyPy, pytest, setuptools-scm, SymPy, Tox, Twisted, urllib3, uvloop, virtualenv.

Annual releases sentiment

The idea to release Python annually was met with a warm reception but not unanimous support. Two-thirds of the respondents felt annual releases would be better, while 20% thought they would make things worse. I won’t focus on the positive support here since the arguments there are largely consistent with what PEP 602 says on the matter. Instead, let’s look at the criticism.

Notably, representatives of the following projects said that annual releases would feel worse: NumPy, PyPy, SymPy, BeeWare. The reservations come from the perception that an accelerated release cadence would accelerate the rate of change and increase the testing matrix.

In my view the former is unlikely to happen, as the accelerated release cadence simply slices the existing rate of change more gradually, making consecutive releases more similar to each other. If anything, due to the beta feature freeze period staying as long as it was before, I’d expect the rate of change to slightly decrease.

The latter worry of the increased testing matrix was well summarized by Matti Picus who highlighted that some projects use multiple dimensions of testing. For example, one dimension can be “Operating System”, while another can be “CPU architecture”. Each new “Python version” in this matrix can then increase the number of tests significantly.
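A quick sketch of that multiplication effect (the dimension values below are made up for illustration):

```python
# Each added Python version multiplies the whole test matrix;
# it does not just add one job.
from itertools import product

operating_systems = ["linux", "macos", "windows"]
cpu_architectures = ["x86_64", "aarch64"]


def matrix_size(python_versions):
    return len(list(product(operating_systems, cpu_architectures, python_versions)))


print(matrix_size(["3.7", "3.8"]))         # 12 jobs
print(matrix_size(["3.7", "3.8", "3.9"]))  # 18 jobs
```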

12 months of bugfix releases

Note: after the bugfix support period, Python still provides security updates for a release until five years after the original 3.X.0 release. This is true today and is not meant to change in PEP 602. For some reason this detail gets lost despite the pictures in the PEP.

Decidedly fewer people were excited about the prospect of twelve monthly bugfix releases. Over 1/3 of the respondents were concerned by the shortened support period but also by the additional churn for integrators due to a larger number of point releases. I also received some critical feedback about this piece of PEP 602 over Twitter.

I took this into account and checked whether we can stretch the bugfix support period back to 18 months without generating additional churn for core developers. It looks like we can, if we release bugfix updates every other month. This is still an improvement over the current practice of quarterly bugfix updates. I updated the PEP to reflect this.

Alpha releases are rarely used

Finally, it turns out very few projects are testing alpha releases of Python. The listed reasons are unavailability on CI systems, incompatible dependencies (including tooling like linters, testing frameworks, etc.), and additional workload.

While this is out of scope for PEP 602, from discussions at the core sprint it seems that it would make sense to set more concrete expectations for stability in the master branch. Something in the vein of Steve Dower’s suggestion of the “latest” development stream.


This was an informative endeavor on many levels. I might incorporate some of the above information into the PEP body somehow but I haven’t decided how yet. I’m trying my best to keep it simple and easy to digest.

Raw data from the poll: Dropbox (file deleted)

I sympathise quite a bit with this concern, as a past and present contributor to projects with such multi-dimensional testing matrices, such as Numba and PyArrow.

However, it seems that with this PEP the number of Python versions simultaneously in active bugfix support wouldn’t increase, right? Sure, there would be more versions in “security fix” mode, but that’s a slightly different thing.

Well, no. The number of versions in active bugfix support has been 1-2 (due to the informal overlap at the discretion of the Release Manager) for many Python versions now, and yet the real support matrix for library maintainers has to include whatever is still distributed in supported operating systems. This roughly corresponds to our full five-year security support period.

The change that I made now to the PEP is to allow for a formal overlap to provide a guaranteed period of 18 months of bugfix releases. See the updated pictures in the post. I was initially worried that this overlap would put additional workload on core developers but looking at the release calendar of the previous versions I discovered the informal overlap has been a thing for a long time.


As a package maintainer, a yearly release cycle means I will end up skipping versions of Python (e.g. supporting new features in 3.8, 3.10, and 3.12 while skipping 3.9 and 3.11) because it’s just moving too fast. The net result is that python-dev has used up twice the effort and effectively slowed down my adoption of Python language features. For me, this is the ultimate waste of effort.


Would an alternating two-year release cycle address these concerns?

In even years (2020, 2022, …), produce an LTS version with a five-year support cycle; in the alternate years, a version with a two-year support cycle.

This would essentially allow users to choose the implementation that suits their use case.

One of the discussions we had at the sprints is that “support cycle” needs a definition, as many people understand it differently. This is why my earlier suggestion was so detailed: in conversation it was very clear that the definitions are not shared.

I suspect in this case you’re referring to the bug fix phase of a release? Security fix phase extends beyond the 5/2 years?

This is my main concern and why I would rather see a two year cycle.

Missing a year is fine if we don’t break anything between releases. I’d also be happy to reduce the rate of breaking releases in other ways, but reducing the overall rate of releases is the only way that can’t fail :wink:


To the contrary, reducing the overall rate makes it more likely your project will break on the new release:

  • the more time between releases, the more changes each holds; and
  • the more time between releases, the less likely it is somebody else will trip over your problem and it will get fixed before you get to it.

This is why I disagree with the argument that if any one user skips releases, the quicker release cadence becomes a futile exercise. Yes, you might be skipping even releases. But another user might not, and they will encounter and report problems before you even get to them. And by the way, it’s not just bugs in CPython I’m talking about. It’s also the need to update your project’s dependencies to work with the latest version (for example by fixing DeprecationWarnings). More frequent releases make it more likely that this sort of thing will be identified and fixed sooner.

Finally, I’m repeating this like a mantra now, quicker releases will be more similar to one another. Accelerating the calendar doesn’t magically provide us with more core developers or provide existing ones with more free time.

How can I make this clearer? It’s like having one big bus leaving your station every hour vs. a smaller one every 30 minutes. You’re more likely to find the big bus crowded because there’s less choice. A problem with the hourly bus (say, a blown tire) is also more painful than a similar problem with the bus that comes every 30 minutes.


I think our main disagreement is that you’re focusing on the magnitude of change per release, while I’m more worried about non-zero changes over time (e.g. “number of times per year I need to change something due to a CPython release”) – hence I think less frequent “must support” releases and more frequent “may support” releases are better than only having the latter.


In a practical sense, a library must support all published releases; people start complaining otherwise.
It can skip some Python versions in its test matrix, though, e.g. run tests only against odd releases.
We already don’t run tests against every Python bugfix release but choose one from the available set, so skipping even releases to reduce the test matrix doesn’t change things too much.
Or, more realistically, test the latest Python and the oldest supported one, plus several arbitrary versions in between.
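That last strategy could be sketched roughly like this (the version list and sample size are illustrative assumptions):

```python
# Pick the oldest and newest supported versions plus a small sample
# in between; the concrete versions here are made up for illustration.
supported = ["3.5", "3.6", "3.7", "3.8"]


def pick_versions(versions, extras=1):
    oldest, newest = versions[0], versions[-1]
    middle = versions[1:-1][:extras]  # a few arbitrary versions in between
    return [oldest, *middle, newest]


print(pick_versions(supported))  # ['3.5', '3.6', '3.8']
```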


In an ecosystem where there’s a high level of interdependence between libraries, “may support” releases don’t really make much sense to me. All it takes is for one or two key libraries to not support a given release, and it becomes hard to impossible for any library to claim to support that release (sure, “We support Python 3.11, but are not responsible for any issues that are related to our dependencies” is a reasonable support statement, but in practice it doesn’t help users who can’t use my library on a version that I claim to support). So in practice, “may support” versions are “must support unless you’re OK with your users complaining”.

From an application-centric viewpoint, “may/must support” versions make more sense. But library support is more complex (it’s the same logic that says applications should pin their dependencies while libraries need to keep their dependency specifications broad)…


Precisely. This is why I don’t like the idea of “just skip a release” to help handle the faster rate of releases.

So this would work well with “test the fast track release, the current stable release, and as many security-fix releases as you can afford”? (Assuming we can get the various CI systems to pick up the faster releases in a timely manner)