[OBSOLETE] PEP 596: Python 3.9 Release Schedule (doubling the release cadence)

Sorry, I totally disagree with this. Imagine the scenario in a corporate environment where different entities control rollout of new Python minor releases on their own cadence. Maybe some data centers get X.Y.Z+1 this week and another DC gets X.Y.Z+1 next week. Some upgrades fail so they get stuck on X.Y.Z for longer.

New backward compatible feature gets released in X.Y.Z+1, and some developers start using it. Now their code runs in some DCs but not others, and that can cause major breakages. You can argue that automation and other constraints would limit the use of the new X.Y.Z+1 feature until X.Y.Z+1 has been rolled out everywhere, but I claim that’s a nearly impossible state to guarantee.

Even in the open source world, I think introducing new features in a point-release is asking for a world of hurt. Let’s remember the lesson we learned with adding True and False in 2.2.1.


On a historical note, Guido and I talked about this recently. We used to just deprecate the last version when the new one was released. E.g. 1.4 support stopped when 1.5 was released. I think this was mostly because we had a much less sophisticated VCS and CI, but also because we could get away with it; the user base was much smaller then.

Obviously we can’t do that today.

That argument doesn’t hold, Barry - if you have that kind of environment, just decline to deploy (or strictly limit the use of) new Python versions until they hit FCR (the feature complete release), or else set Requires-Python appropriately on all your deployments, so DCs running old versions of Python will also keep running the compatible versions of everything else. An organization’s CI pipelines should also be restricted to the oldest still deployed IFR (incremental feature release), rather than the newest.
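For concreteness, here’s a minimal sketch of that metadata pin with setuptools (the project name and version bound are illustrative; the core metadata field Requires-Python is spelled python_requires in setup()):

```python
# Sketch: pip reads the resulting Requires-Python metadata and refuses to
# install this project on interpreters outside the declared range.
from setuptools import setup

setup(
    name="example-service",     # hypothetical project
    version="1.0",
    python_requires=">=3.5.2",  # e.g. exclude the pre-NewType 3.5.x releases
)
```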

2.2 was a long time ago, and Python app deployments have changed massively since then - we’re not talking about humans trying to cobble together a working system by downloading tarballs from an assortment of different websites anymore - we actually have machine readable metadata, and tools to automate putting together a consistent system.

(Plus the usual corporate argument applies: I put zero weight on proposals that ask volunteer publishers to do more work for the sake of enormous organizations that aren’t managing their development processes properly)


A more recent painful example was the introduction of typing.NewType in Python 3.5.2. I can forgive that one because typing is officially provisional.

The other problem with adding new features in point releases is that it conflates bug fixes and security fixes with new features. If we have to adopt X.Y.Z+1 to address a critical bug or security issue, that means that we’re also adopting all the new features in X.Y.Z+1. It will be difficult to know what new features have been added and to add blockers that prevent developers from using those new Z+1 features.

Take the NewType issue mentioned above. I’m not even sure how that would be workable. It would mean that we would either have to maintain a feature matrix (i.e. “You can’t use NewType because 3.5.1 is still deployed somewhere”) or keep all old versions live in your CI and run your CI/CD against all of those versions. How else would you catch that a developer snuck in a reference to NewType that works almost everywhere?


OK, I think I figured out a way to better express why the “But compatibility with older point releases…” argument confuses me. Here’s Barry’s hypothetical example on that front:

> Imagine the scenario in a corporate environment where different entities control rollout of new Python minor releases on their own cadence. Maybe some data centers get X.Y.Z+1 this week and another DC gets X.Y.Z+1 next week. Some upgrades fail so they get stuck on X.Y.Z for longer.
>
> New backward compatible feature gets released in X.Y.Z+1, and some developers start using it. Now their code runs in some DCs but not others, and that can cause major breakages. You can argue that automation and other constraints would limit the use of the new X.Y.Z+1 feature until X.Y.Z+1 has been rolled out everywhere, but I claim that’s a nearly impossible state to guarantee.

Now consider that same hypothetical scenario, but with the second paragraph adjusted to be about a bug fix rather than a new backwards compatible feature:

A bug gets fixed in X.Y.Z+1, and some developers start omitting their previous workaround for the standard API being broken (or perhaps are using the API in new code and don’t even realise that a workaround used to be required). Now their code runs in some DCs but not others, and that can cause major breakages. You can argue that automation and other constraints would limit the reliance on X.Y.Z+1 bug fixes until X.Y.Z+1 has been rolled out everywhere, but I claim that’s a nearly impossible state to guarantee.

Basically, I don’t understand how “New APIs that existing code necessarily isn’t using (as the API didn’t previously exist), and that are marked in the documentation as added or changed in a particular point release” can possibly be more of a threat to the reliability of staged rollouts than bug fixes in existing APIs. And if measures have been put in place to handle the “Don’t rely on bug fixes that may not have been rolled out universally yet” scenario, then those same measures (which are already needed today) will be just as effective in addressing the “Don’t rely on feature additions that may not have been rolled out universally yet” scenario that the incremental feature release proposal in PEP 598 introduces.

(I’ll also note that this particular argument concedes the point that PEP 598 would work in getting features into the hands of developers sooner, since it only causes new problems if developers are actually using the new features we’d be delivering to them. By contrast, PEP 596 won’t do anything to help a large proportion of developers, since it requires that a new major Python version be made available in their target environments, and that they switch their project to target it, rather than taking the much lower risk step of updating to a new point release of an existing major version)


You pin your CI to run the oldest still deployed Python in your production environments. If that’s Python 3.5.1, then that’s what you run in CI - you’ll pick up both reliance on missing features, and you’ll pick up missing workarounds for bugs that are still present in 3.5.1.
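As a minimal sketch, a CI job could enforce that pin up front (the version tuple below is an illustrative assumption about what’s oldest in production):

```python
# Fail fast if CI isn't running on the oldest Python still deployed in
# production; the pinned tuple is illustrative.
import sys

OLDEST_DEPLOYED = (3, 5, 1)

if sys.version_info[:3] != OLDEST_DEPLOYED:
    raise SystemExit(
        "CI must run Python %s, not %s"
        % (".".join(map(str, OLDEST_DEPLOYED)), sys.version.split()[0])
    )
```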

For generic publishers that aren’t targeting any particular environment, you’d test against the latest point release, and drop official support for older point releases as soon as a new one comes out (this is effectively what already happens today when public CI providers update their Python runtimes to a new point release).

The difference is that those kinds of bugs are less frequent, and the workarounds can live in local code unconditionally. It does mean you tend to carry those workarounds around for a long time.

But it means that if you’re pulling in Z+1 to fix a bug, you can only defend against incremental new features by backporting or reimplementing every one of them. That’s essentially what we did with NewType. But with this proposal, I now have to track every single new feature from .0 to .Z in addition to tracking bug fixes in those versions.

I don’t want to treat new features as something I have to defend against. :wink:

I think that’s a wildly optimistic view of corporate infrastructure, especially when there are hundreds of thousands of machines or more. :slight_smile: It effectively means you’d be pinning to point releases instead of major releases. Do you really want to do that?


But is that benefit to you in a well-funded corporate environment worth the cost of gating every feature addition to Python behind the backwards incompatible filesystem layout changes that mean major feature release adoption cycles are measured in months and years, rather than the weeks and months of minor point releases?

It’s also the case that the versionadded/versionchanged notes in Sphinx generate a machine-readable inventory of when new APIs were added, which means that a static analyser could be built that caught the use of APIs that didn’t exist in the older version.
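As a rough sketch of what such a checker might look like (the ADDED_IN table is a stand-in for data extracted from that inventory, and the sketch only handles plain module.attribute accesses):

```python
import ast

# Stand-in for an inventory derived from Sphinx versionadded/versionchanged data.
ADDED_IN = {
    ("typing", "NewType"): (3, 5, 2),
}
OLDEST_DEPLOYED = (3, 5, 1)  # oldest point release still in production

def find_too_new_apis(source, filename="<string>"):
    """Flag module.attribute uses added after the oldest deployed release.

    Sketch only: it doesn't resolve 'from module import name' style usage.
    """
    problems = []
    for node in ast.walk(ast.parse(source, filename)):
        if isinstance(node, ast.Attribute) and isinstance(node.value, ast.Name):
            added = ADDED_IN.get((node.value.id, node.attr))
            if added is not None and added > OLDEST_DEPLOYED:
                problems.append("%s:%d: %s.%s was added in %s" % (
                    filename, node.lineno, node.value.id, node.attr,
                    ".".join(map(str, added))))
    return problems

print(find_too_new_apis("import typing\nUserId = typing.NewType('UserId', int)"))
```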

Does it really take 6-9 months after the release of 3.N.0 until I can say sudo apt-get install python3.N?

Actually, yes. :slight_smile: “Well funded” doesn’t mean that the additional work this implies can actually be funded! Also, it is a huge amount of work to validate and roll out new feature releases, so it’s a good thing that they happen relatively infrequently. I think it’s a deeply baked assumption across the ecosystem that point releases don’t require the same level of diligence. Yes, we occasionally get bitten by that (e.g. NewType), but we don’t have that many provisional APIs, so it’s a rare wound.


It depends on whether you’re more worried about “Our developers might use APIs that don’t exist yet in all our target environments” than “The Python developers might break an API that we’re using in a new point release”.

If the latter is more of a concern, then you’d just continue with the existing strategy of upgrading the CI pipeline to the new version before upgrading any DCs, and rely on either code review or static analysis to pick up on the use of newly introduced APIs.

If the former is a major concern, then the simplest fix would be to adopt an organisational rule prohibiting the migration of mission critical services to new Python versions until those versions have hit their feature complete release date (remember: PEP 598 puts the 3.9 Feature Complete date 2 years after the Python 3.8.0 release date, so it’s entirely reasonable for orgs to decide to treat the entire incremental feature release period as an extended beta).

I added the sys.version_info.feature_complete flag to PEP 598 precisely so that that kind of policy would be easy to enforce programmatically.
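For illustration, such a policy gate could be a one-liner at service startup (feature_complete is the field PEP 598 proposes; the getattr default just lets the sketch run on interpreters that don’t have it):

```python
import sys

# Refuse to start mission-critical services on a release series that is
# still accepting new features under PEP 598's proposed scheme.
if not getattr(sys.version_info, "feature_complete", True):
    raise SystemExit("This service requires a feature-complete Python release")
```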

However, if an organisation didn’t want to do either of those things, then the only comprehensive CI strategy would indeed be to test against both minor versions while the rollout was still in progress, such that instead of upgrading the CI pipeline in place, you’d instead have to do something like:

  1. Keep the existing pipeline in place to ensure compatibility with not-yet-upgraded DCs
  2. Start a new pipeline in parallel to ensure compatibility with upgraded DCs
  3. Once the second pipeline is passing, actually start upgrading DCs
  4. Once all DCs have been upgraded, retire the original pipeline

Or, if running two pipelines in parallel isn’t feasible, you’d need to run an interim pipeline that included a Python upgrade/downgrade step in order to test both versions until the rollout was complete.
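A sketch of that interim approach, assuming both interpreters are available on the CI image (the paths are illustrative):

```python
# Run the test suite under both the oldest still-deployed point release and
# the one currently being rolled out.
import subprocess
import sys

INTERPRETERS = [
    "/opt/python/3.8.0/bin/python",  # oldest still-deployed point release
    "/opt/python/3.8.1/bin/python",  # release currently rolling out
]

for python in INTERPRETERS:
    print("Running test suite under", python)
    result = subprocess.run([python, "-m", "pytest"])
    if result.returncode != 0:
        sys.exit(result.returncode)
```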

Joke answer: “it’s not a bug, it’s a feature.”

Serious answer: Bug fixes tend to matter to a tiny portion of the user base. If it’s possible to work around the bug they probably have done so by the time the release with the fix is available. For the vast majority of users, the vast majority of bugfixes don’t matter, and their code works just as well on (e.g.) 3.5.0 as it does on 3.5.1.

But new features attract users like flies. As soon as you release a new feature in 3.5.1, people are going to go out of their way to use it, and then you have something that definitely doesn’t work on 3.5.0. That to me is the big difference (and I’ve thought about this a lot because this argument definitely has come up since the earliest days of Python versioning discussions).

PS. There’s an easy solution for NewType: from typing_extensions import NewType. But I will also accept the criticism that we could have handled the introduction of the typing module better.
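(For anyone hitting this today, the usual shape of that fix is an import fallback — a sketch, assuming typing_extensions is installed:)

```python
# Prefer the stdlib name; fall back to typing_extensions on interpreters
# whose typing module predates NewType (e.g. 3.5.0 and 3.5.1).
try:
    from typing import NewType
except ImportError:
    from typing_extensions import NewType

UserId = NewType("UserId", int)
```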


So, basically ignore X.Y.* until * == some future feature frozen release point? It seems backwards to me, and difficult to communicate to users and internal customers.

One other problem came up in some random discussions. Imagine that Fedora, Red Hat, Debian, and Ubuntu all upgrade to different X.Y.* versions at different times. Even if they somehow manage to stay on the current release, it takes a differing amount of time to push out new versions (not even talking about adoption rates of their consumers).

So now you have a script you want to run on all 4 Linux distros. Good luck keeping track of the minimum feature set you can safely write against. Even if you protect every possible new feature use with a conditional check, what will that do to your code and how tedious is it going to be?
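To make the tedium concrete, every use of a post-.0 addition ends up wrapped in a guard like this sketch (the version bound and the aliased helper are purely hypothetical):

```python
import sys

# Pretend a math.dist-style helper was added in a hypothetical 3.9.2
# incremental feature release; every older point release needs the local
# reimplementation, and every such feature needs its own guard.
if sys.version_info >= (3, 9, 2):
    from math import dist as distance  # assumed IFR addition, for illustration
else:
    def distance(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
```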


I’ve pushed an update to the PEP based on this discussion: https://github.com/python/peps/pull/1129/files

And the answer is that if you’re worried about this kind of thing, you have to target feature compatibility with the last feature complete release series, even if you’re actually running on the newer one that’s still accepting new features. That compatibility, together with testing on the latest feature release, then provides you with a decent proxy for compatibility with any earlier feature releases in the current series.

The difference relative to the status quo is that we’ll be distinguishing between “not production ready” (alpha, beta, release candidate) and “production ready, but not feature complete” (baseline feature release, incremental feature release). Folks with simple deployment targets will gain access to features earlier than they would today without needing to do anything new, while folks with complex deployment targets will need to care about the BFR/IFR/FCR distinction (but at least wouldn’t be getting hit with a new major feature release to add to their test matrices every 12 months).

If we go with year-based release numbers, remember that we are py3k and shouldn’t use the actual Gregorian year. We should be 3000-based. :slight_smile:

There’s even a nice way to map this to sys.version_info such that .major remains 3 without seeming strange.

Note: I opened a new topic for PEP 602 which is the evolution of the original idea in this topic.

I can’t wait for Python 20.0!


Google was returning this discussion as the first link for “python 3.9 release schedule”, including a super “helpful” info box with the now incorrect dates.

I’m able to edit the thread title, but if @ambv or an admin could link to https://www.python.org/dev/peps/pep-0596/ from the top of the first post, that would be handy.

Edit: the incorrect info box is gone now, so either someone at Google fixed it directly, or their AI figured it out automatically based on the edits @ambv and I made :slight_smile:
