It depends on whether you’re more worried about “Our developers might use APIs that don’t exist yet in all our target environments” than “The Python developers might break an API that we’re using in a new point release”.
If the latter is more of a concern, then you’d just continue with the existing strategy of upgrading the CI pipeline to the new version before upgrading any DCs, and rely on either code review or static analysis to pick up on the use of newly introduced APIs.
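To make that reviewable, the pattern reviewers or static analysis would look for is a version guard around any newly introduced API. A minimal sketch (using `math.dist()`, added in Python 3.8, purely as an illustrative "new API"):

```python
import math
import sys

# Guard pattern that code review / static analysis can enforce: only call a
# newly introduced API when the runtime actually provides it, with a
# fallback for not-yet-upgraded DCs.
if sys.version_info >= (3, 8):
    # math.dist() is new in 3.8
    distance = math.dist((0, 0), (3, 4))
else:
    # equivalent fallback available on older versions
    distance = math.hypot(3 - 0, 4 - 0)
```

Tools like flake8 plugins or mypy's `--python-version` option can flag unguarded uses of APIs newer than the oldest deployed version.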
If the former is a major concern, then the simplest fix would be to adopt an organisational rule prohibiting the migration of mission critical services to new Python versions until those versions have hit their feature complete release date (remember: PEP 598 puts the 3.9 Feature Complete date 2 years after the Python 3.8.0 release date, so it’s entirely reasonable for orgs to decide to treat the entire incremental feature release period as an extended beta).
I added the sys.version_info.feature_complete flag to PEP 598 precisely so that kind of policy would be easy to enforce programmatically.
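As a sketch of such an enforcement check: the flag below is the one *proposed* in PEP 598, so it doesn't exist on current interpreters; the `getattr` fallback treats interpreters without the flag as feature complete, since their minor versions no longer gain new APIs.

```python
import sys

def python_is_feature_complete() -> bool:
    """Report whether this interpreter has reached its feature complete date.

    Uses PEP 598's proposed sys.version_info.feature_complete flag.
    Interpreters that predate the proposal won't have the attribute, so
    they are treated as feature complete.
    """
    return getattr(sys.version_info, "feature_complete", True)

def enforce_feature_complete_policy() -> None:
    # Hypothetical startup gate for a mission critical service.
    if not python_is_feature_complete():
        raise RuntimeError(
            f"Python {sys.version_info.major}.{sys.version_info.minor} is "
            "still in its incremental feature release period; refusing to start"
        )
```

A service could call `enforce_feature_complete_policy()` at startup, or a CI job could run the same check against the interpreter it's about to certify.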
However, if an organisation didn’t want to do either of those things, then the only comprehensive CI strategy would indeed be to test against both minor versions while the rollout was still in progress, such that instead of upgrading the CI pipeline in place, you’d have to do something like:
- Keep the existing pipeline in place to ensure compatibility with not-yet-upgraded DCs
- Start a new pipeline in parallel to ensure compatibility with upgraded DCs
- Once the second pipeline is passing, actually start upgrading DCs
- Once all DCs have been upgraded, retire the original pipeline
Or, if running two pipelines in parallel isn’t feasible, you’d need to run an interim pipeline that included a Python upgrade/downgrade step in order to test both versions until the rollout was complete.
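That interim dual-version step could be as simple as driving the test suite under each interpreter in turn. A minimal sketch, assuming the two interpreters are installed side by side under the conventional `python3.X` names (adjust for your environment), and that the suite runs under pytest:

```python
import shutil
import subprocess
import sys

# Interpreters to cover during the rollout window (assumed names on the
# CI host).
VERSIONS = ["python3.8", "python3.9"]

def run_suite(interpreter: str) -> int:
    """Run the test suite under one interpreter, returning its exit code.

    Interpreters that aren't installed are skipped (exit code 0), so the
    same script works both before and after the CI host itself has been
    upgraded.
    """
    if shutil.which(interpreter) is None:
        print(f"skipping {interpreter}: not installed", file=sys.stderr)
        return 0
    return subprocess.run([interpreter, "-m", "pytest"]).returncode

# Interim pipeline step: fail if the suite fails under *either* version:
#     sys.exit(max(run_suite(v) for v in VERSIONS))
```

Once all DCs are on the new version, the list collapses back to a single entry and the pipeline reverts to the ordinary single-version shape.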