PEP 596: Python 3.9 Release Schedule (doubling the release cadence)

Does it really take 6-9 months after the release of 3.N.0 until I can say sudo apt-get install python3.N?

Actually, yes. :slight_smile: “Well funded” doesn’t mean that the additional work this implies can actually be funded! Also, it is a huge amount of work to validate and roll out new feature releases, so it’s a good thing that they happen relatively infrequently. I think it’s a deeply baked assumption across the ecosystem that point releases don’t require the same level of diligence. Yes, we occasionally get bitten by that (e.g. NewType), but we don’t have that many provisional APIs, so it’s a rare wound.


It depends on whether you’re more worried about “Our developers might use APIs that don’t exist yet in all our target environments” than “The Python developers might break an API that we’re using in a new point release”.

If the latter is more of a concern, then you’d just continue with the existing strategy of upgrading the CI pipeline to the new version before upgrading any DCs (data centres), and rely on either code review or static analysis to pick up on the use of newly introduced APIs.

If the former is a major concern, then the simplest fix would be to adopt an organisational rule prohibiting the migration of mission-critical services to new Python versions until those versions have hit their feature complete release date (remember: PEP 598 puts the 3.9 Feature Complete date 2 years after the Python 3.8.0 release date, so it’s entirely reasonable for orgs to decide to treat the entire incremental feature release period as an extended beta).

I added the sys.version_info.feature_complete flag to PEP 598 precisely so that that kind of policy would be easy to enforce programmatically.
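A minimal sketch of enforcing that policy programmatically, assuming the sys.version_info.feature_complete attribute proposed in PEP 598 (no released Python ships it, so the sketch treats its absence as “feature complete”):

```python
import sys

# PEP 598 proposes a sys.version_info.feature_complete flag; no released
# Python provides it, so fall back to True where it is absent.
feature_complete = getattr(sys.version_info, "feature_complete", True)

if not feature_complete:
    raise RuntimeError(
        "Policy: mission-critical services may only run on "
        "feature-complete Python releases"
    )
```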

However, if an organisation didn’t want to do either of those things, then the only comprehensive CI strategy would indeed be to test against both minor versions while the rollout was still in progress, such that instead of upgrading the CI pipeline in place, you’d instead have to do something like:

  1. Keep the existing pipeline in place to ensure compatibility with not-yet-upgraded DCs
  2. Start a new pipeline in parallel to ensure compatibility with upgraded DCs
  3. Once the second pipeline is passing, actually start upgrading DCs
  4. Once all DCs have been upgraded, retire the original pipeline

Or, if running two pipelines in parallel isn’t feasible, you’d need to run an interim pipeline that included a Python upgrade/downgrade step in order to test both versions until the rollout was complete.
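As a rough sketch of that interim approach (the interpreter names and the pytest invocation are assumptions, not something prescribed in this thread), the pipeline could run the same suite under both interpreters installed side by side instead of literally upgrading and downgrading:

```python
import subprocess
import sys

# Hypothetical pair: the version the not-yet-upgraded DCs run, and the
# version being rolled out.
INTERPRETERS = ["python3.8", "python3.9"]

for python in INTERPRETERS:
    print(f"Running the test suite under {python} ...")
    result = subprocess.run([python, "-m", "pytest", "tests/"])
    if result.returncode != 0:
        sys.exit(f"Tests failed under {python}; blocking the rollout")
```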

Joke answer: “it’s not a bug, it’s a feature.”

Serious answer: Bug fixes tend to matter to a tiny portion of the user base. If it’s possible to work around a bug, they’ve probably done so by the time the release with the fix is available. For the vast majority of users, the vast majority of bugfixes don’t matter, and their code works just as well on (e.g.) 3.5.0 as it does on 3.5.1.

But new features attract users like flies. As soon as you release a new feature in 3.5.1, people are going to go out of their way to use it, and then you have something that definitely doesn’t work on 3.5.0. That to me is the big difference (and I’ve thought about this a lot because this argument definitely has come up since the earliest days of Python versioning discussions).

PS. There’s an easy solution for NewType: from typing_extensions import NewType. But I will also accept the criticism that we could have handled the introduction of the typing module better.
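For concreteness, a minimal sketch of that workaround (UserId and get_user are made-up names; typing_extensions is the third-party backport package, installed via pip install typing-extensions):

```python
# Import NewType from the backport rather than typing, so the code
# behaves the same on point releases that predate the typing fix.
from typing_extensions import NewType

UserId = NewType("UserId", int)  # made-up example alias

def get_user(user_id: UserId) -> str:
    return f"user-{user_id}"

print(get_user(UserId(42)))  # -> user-42
```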


So, basically ignore X.Y.* until * == some future feature-frozen release point? It seems backwards to me, and difficult to communicate to users and internal customers.

One other problem came up in some random discussions. Imagine that Fedora, Red Hat, Debian, and Ubuntu all upgrade to different X.Y.* versions at different times. Even if they somehow manage to stay on the current release, each takes a different amount of time to push out new versions (and that’s not even counting their users’ adoption rates).

So now you have a script you want to run on all 4 Linux distros. Good luck keeping track of the minimum feature set you can safely write against. Even if you guard every possible use of a new feature with a conditional check, what will that do to your code, and how tedious will it be?
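To make that tedium concrete, here’s the kind of guard you’d end up writing around every incrementally added API (some_module and new_helper are invented for illustration; 3.9.2 stands in for a hypothetical incremental feature release):

```python
import sys

# Hypothetical: pretend some_module.new_helper was added in an
# incremental 3.9.2 feature release (neither actually exists).
if sys.version_info >= (3, 9, 2):
    from some_module import new_helper  # available on upgraded distros
else:
    def new_helper(value):
        # Hand-rolled fallback for distros still on an earlier 3.9.x
        return value
```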


I’ve pushed an update to the PEP based on this discussion: https://github.com/python/peps/pull/1129/files

And the answer is that if you’re worried about this kind of thing, you have to target feature compatibility with the last feature-complete release series, even if you’re actually running on the newer one that’s still accepting new features. That compatibility, together with testing on the latest feature release, then gives you a decent proxy for compatibility with any earlier feature release in the current series.

The difference relative to the status quo is that we’ll be distinguishing between “not production ready” (alpha, beta, release candidate) and “production ready, but not feature complete” (baseline feature release, incremental feature release). Folks with simple deployment targets will gain access to features earlier than they would today without needing to do anything new, while folks with complex deployment targets will need to care about the BFR/IFR/FCR (feature complete release) distinction (but at least they wouldn’t be getting hit with a new major feature release to add to their test matrices every 12 months).

If we go with year-based release numbers, remember that we are py3k and shouldn’t use the actual Gregorian year. We should be 3000-based. :slight_smile:

There’s even a nice way to map this to sys.version_info such that .major remains 3 without seeming strange.
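One plausible reading of that mapping (my guess; the post doesn’t spell it out): add 1000 to the Gregorian year, so a 2020 release becomes py3k year 3020, i.e. version 3.20, and .major stays 3:

```python
def py3k_version(gregorian_year: int) -> tuple:
    """Map a release year to a (major, minor) pair, py3k style."""
    py3k_year = gregorian_year + 1000       # e.g. 2020 -> 3020
    return (py3k_year // 1000, py3k_year % 1000)

assert py3k_version(2020) == (3, 20)   # .major is still 3
assert py3k_version(2029) == (3, 29)
```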

Note: I opened a new topic for PEP 602, which is the evolution of the original idea in this topic.

I can’t wait for Python 20.0!
