[REJECTED] RFC: PEP 608: Coordinated Python release

I would think the PEP is over-optimistic about what can be achieved for Red Hat/Mac/Windows.

Limiting the focus to the common ultra-basics would already be great:

  • zero constraints on Python delivery; just get community focus/effort on the bottlenecks so the whole ecosystem is ready sooner (the bottlenecks can vary at each version or beta cycle, depending on the PEPs),
  • likely suspects for “always a bottleneck”: CI chains + Cython + NumPy + Jupyter Notebook,
  • ugly patches are OK in alpha/beta to keep the bottlenecks out of the way for the rest of the community,

and maybe some environments should be dropped, at least at the alpha/beta stage, as they increase the effort for an ever-shrinking effect:

  • Windows 7/8 and 32-bit Windows (I see 32-bit mostly dying with Windows 7),
  • Old Mac/Linux versions.

I think this is an excellent idea in general, even if it will still take some time to hammer out the details. Testing some core packages for compatibility with the new release is very good for letting libraries know what to adapt to (generally, projects that are actively maintained would surely be willing to work on removing deprecated methods, or at the very least accept pull requests to that effect), and for having a wider range of projects available at release time.

Most projects (even ones as big as pandas) do not test against Python master, and this is something that could conceivably be done by CPython itself.

This mode of development is also not unprecedented; for example, there’s the community build for Scala or the Crater runs for Rust.

IMO, a minimal list to start with could also include Sphinx, particularly since we depend upon it for building Python’s documentation.

Also, I think it would be helpful to have more than one category of projects in the list, such as a “suggested release blocking” section that includes only a few stable projects and a “non-blocking” section that includes significantly more packages. This would allow us to expand the list further over time without increasing the maintenance cost as much for each addition. It could be just two sections, or perhaps many sections with increasing compatibility priorities.

I would propose that new packages added to the list start in a lower compatibility-priority section, moving up or down depending on whether they prove to be stable and responsive to fixes over time. Regardless of the priority level of any package, we would of course retain the discretion to treat an issue as a release blocker or not (as the PEP suggests).

These different sections could help give us an idea of where to focus our efforts, and give us a better understanding of compatibility issues. It’s far more meaningful if a known “stable” package suddenly breaks, in comparison to one that is new to the list or commonly has compatibility issues. In a way, it’s not entirely different from how we think of the buildbots.

My concerns about the ambiguity here remain. As a pip core developer, my question to you in that case would be, precisely what commitments are you expecting from the pip developers if this proposal were to be accepted? What would change for pip?

  • Are you expecting us to add the dev version of Python to our CI? Is “whatever Travis/Appveyor/Azure happen to have available” sufficient, or will we be expected to add a CPython build to our CI?
  • Are you expecting us to re-run our CI when a new Python release occurs (at the moment we only run CI when a pip commit happens)?
  • Are you expecting us to run CI repeatedly on our release tags and/or master? At the moment we run CI on commit but we don’t run CI again unless another commit occurs.
  • Are you expecting us to let the CPython project know if our tests fail? Do we need to triage such failures to confirm if they are caused by a CPython change or something else (environmental issues, pip relying on undocumented behaviour, etc)?
  • Who do you propose communicates these issues to our vendored dependencies? Are you OK with our policy that we generally don’t patch vendored dependencies, but wait for a downstream fix to be released?
  • Do you have any expectation that we prioritise fixing failures that are triggered by CPython changes? What if our test failures are delaying a release?

I could go on - there are many questions that would need answering here if this were to become a formal part of any process. Of course, if it’s simply a general “good intention” that we try to ensure pip works when a new Python version is released, then (a) that wouldn’t involve much extra work from the pip developers, but (b) it’s what we already do… And general “good intentions” don’t need a PEP :wink:

But as it stands, I’d personally be against pip agreeing to be part of this PEP. (The other pip developers may have different opinions - we’d have to discuss this internally and come up with a project response if it came to the point of signing this PEP off).


Hi Paul,

Thanks for these interesting questions. It seems like the PEP needs some clarification :slight_smile:

No.

It’s better if a selected project has a CI job to test the project on the next Python (the job doesn’t have to be mandatory), but it’s not required by the PEP.

My plan is that Python core developers (at least me) will add a CI for projects which are not yet tested on the next Python. Maybe maintainers of selected projects will be kind enough to help us with this task :wink:

No.

For the short term, my plan is more to have a script similar to my experimental https://github.com/vstinner/pythonci project, which would be run manually. Only the Python release manager is expected to run such a manual check.

But for the long term, it would be better to have a CI, so the checks run continuously.
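To make that concrete, here is a minimal sketch of what such a manual check could look like (this is not pythonci’s actual code; the interpreter path and the project list are illustrative assumptions):

```python
#!/usr/bin/env python3
"""Minimal sketch of a manual compatibility check for "selected projects".

NOT pythonci's actual code: the interpreter path and the project list
are illustrative assumptions.
"""
import subprocess
import sys

# A CPython binary built from master; run `./python -m ensurepip` first.
PYTHON = "./python"
SELECTED_PROJECTS = ["pip", "setuptools", "Cython"]

failures = []
for project in SELECTED_PROJECTS:
    print(f"=== {project} ===", flush=True)
    # --no-binary forces a from-source build: no wheels exist yet for an
    # unreleased Python, so this is what users would hit anyway.
    result = subprocess.run(
        [PYTHON, "-m", "pip", "install", "--no-binary", ":all:", project]
    )
    if result.returncode != 0:
        failures.append(project)

if failures:
    sys.exit(f"Broken on this Python: {', '.join(failures)}")
print("All selected projects install cleanly.")
```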

No.

For projects less active than CPython (e.g. if the CI is not run daily), we can do manual checks and/or have a separate CI.

IMHO we will need a separate CI anyway, especially to test Python changes on selected projects before merging a change. I’m thinking of changes known to be incompatible, like my https://github.com/python/cpython/pull/16959

No.

The Python release manager will have to check the status of the selected projects at each beta, rc and final release.

I hope that the maintainers of selected projects will be more proactive in reporting issues upstream, but it’s not a PEP requirement.

Honestly, I don’t want to go that far in terms of organizational detail in the PEP. I prefer to let developers organize themselves :slight_smile: Most developers are kind enough to report issues directly to the proper project.

The strict minimum required by the PEP is that the release manager detects that pip is broken. That would be an enhancement compared to the current blurry status quo.

In my experience, the issue is first reported to pip, and then reported to the downstream project. For example, I was somehow involved in reporting the deprecated collections ABC issue to html5lib and getting an html5lib release out. You may notice that Python core developers already coordinated with pip, waiting until pip was fixed before removing the collections ABC aliases :wink: Such coordination is not something new.
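For readers who missed that episode, the breakage came from removing the old ABC aliases from the `collections` module. A typical compatibility shim (a generic sketch, not html5lib’s actual patch) looks like:

```python
try:
    # Old spelling: deprecated since Python 3.3; the removal eventually
    # landed in Python 3.10.
    from collections import Mapping
except ImportError:
    # Future-proof spelling, available since Python 3.3.
    from collections.abc import Mapping
```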

Let me try to explain that differently. The PEP doesn’t require pip developers to do more work. The PEP gives pip developers a trigger to force core developers to revert a Python change if something goes wrong.

The PEP makes the assumption that all open source projects are understaffed and that CPython has a larger team than other selected projects. Even if it’s not written down, my expectation is that core developers will help to fix the broken selected projects, because these projects would block the final Python release.

My expectation is that if pip developers are simply too busy to fix an issue, CPython must be fixed so as not to bother the pip developers. Or the CPython release is blocked until pip is fixed, if it’s better to let pip developers fix the issue themselves. pip and CPython are not exclusive: there are Python core developers working on pip :wink:

But sometimes it’s just a matter of weeks or even days. That’s why the PEP gives the release manager the freedom to ignore a broken project for a release.

IMHO it’s fine if Cython is broken for the first Python beta releases, but it’s not OK for the first rc release. And it must block a final release.

Let’s say that Cython is badly broken on Python 3.9 because of a last-minute bugfix between two rc releases. Cython is fixed, but the Cython release manager is traveling. The Python release manager can try to coordinate the Python and Cython releases, or take into account that Cython is going to be released soon with a fix, and release Python anyway.

The coordination can be relaxed when a project is already fixed, but I would prefer not to release Python 3.9 final knowing that pip is badly broken, that no fix is available, and that no one is available to fix it. The PEP is a process to reduce the risk of ending up in such a situation, by making sure that bugs are detected before the final release.

If the PEP is modified to not require anything, IMHO the whole PEP becomes useless.

The “DeprecationWarning is being ignored” section of the PEP is a concrete example of such an issue. Developers are “expected” to take DeprecationWarning warnings into account, but they don’t. The result? Python frequently breaks random projects when removing a feature, even one that was deprecated for 10 years.
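A tiny illustration of why these warnings get missed (`old_api` is a made-up function): since Python 3.7 (PEP 565), DeprecationWarning is shown by default only when triggered by code running directly in `__main__`; a warning raised from inside a library is silently ignored.

```python
# deprecation_demo.py -- `old_api` is a hypothetical deprecated function.
import warnings

def old_api():
    warnings.warn("old_api() is deprecated", DeprecationWarning, stacklevel=2)

old_api()  # visible here only because the caller is __main__ (PEP 565)
```

In a CI job, running with `python -W error::DeprecationWarning` turns such warnings into failures, and `python -X dev` at least displays them all.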

The PEP must be strict on some specific points. I expect you to help me clarify what should be strict and what doesn’t have to be :slight_smile:

Technically, the Python docs can be built even if Sphinx is not compatible with the latest version of Python: you can use an older Python version. No?

But Python without pip is barely usable.

I’m thinking aloud about the strict minimum set of selected projects. Adding any project indirectly means adding many other dependencies, and Sphinx has many dependencies.

The PEP gives many new tasks to the release manager. I would prefer not to add confusion with “non-blocking projects”.

I would prefer to keep the PEP to the minimum requirements for a Python release. Obviously, everything that helps get a better release can be done without a PEP and is welcome (as already written in the PEP!) :slight_smile: You are free to add tons of CIs. You don’t need the approval of the Python core developers nor of the selected projects’ maintainers :wink:


I heard that such a “world rebuild” is also done across CPAN before a Perl release.

But there’s a big difference between trying to do this and requiring that we do this. So what you want to accomplish with the PEP is great, but it’s best-effort compared to requiring we do it. And if you mean to make this a goal and not a rule, then this should either be an informational or process PEP, or instead live in the devguide and not be a PEP at all.


What if this were describing a project that the PSF could offer a grant for?

Resourcing it is the big challenge - everyone should agree this is a good thing to achieve, but that doesn’t magically produce the ability to do it.

But perhaps if this PEP could become a statement of work, we’d be able to pay someone to do it.

@ewa.jodlowska - any thoughts?


What part of “it” would it pay for? Even if you pay someone to test all the listed projects and file issues and PRs for the detected regressions, the PRs must still be reviewed and approved by a core developer of each of those projects.

As for this PEP, I agree with others: this is much too big a hammer for the problem at hand.

Also, as the developer of a third-party project (not listed in the PEP, though), what would help us most for testing a new Python release is to have conda-forge or Anaconda binaries for it in time. Right now 3.8 is available from neither.

We discussed this proposal at the Steering Council meeting this week, and our key conclusion was that we don’t think a PEP is the right vehicle for pursuing this idea.

There’s no “CPython Buildbots” PEP for example, there’s just a note in PEP 101 to check the status of the stable buildbots, and then assorted bits of documentation on managing the Buildbot fleet.

(I’ll put my own suggestions for how to proceed in a separate post from the SC level feedback)

Rather than pursuing this as a release process change, I think it makes more sense to pursue this as a CI enhancement, similar to the refleak hunting build, or the speed.python.org performance benchmarking.

That way the exact set of projects tested can be reasonably fluid (rather than being inflexibly locked down in an approved PEP) and chosen both to exercise various aspects of the standard library and to assess core ecosystem compatibility implications for a change.

If there are any PEP level changes at all, it would just be in the form of an additional note in PEP 101, and even that might not be needed if the standard practice is to file release blocker bug reports when the compatibility testing finds problems.


Sorry for jumping into this a bit late; I was told about this discussion a few days ago by @steve.dower.

I have a script on my home computer that I run a few times a week; it builds me a virtual environment with the master branch of {cpython, cython, numpy, scipy, matplotlib, pandas, ipython/jupyter, …} (basically the whole scientific Python ecosystem through to my day-job code), and I do most of my day-to-day development on that machine in that environment. It is a terrible brute-force bash script that has things like where I have the various projects checked out hard-coded, but it does have the right build order baked in. I’m happy to share if people are interested.
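For anyone who wants the general shape without the hard-coded details, here is a rough sketch of the same idea (the install prefix, checkout locations and build order are assumptions, not the actual script):

```python
#!/usr/bin/env python3
"""Rough sketch of an ordered from-source rebuild against CPython master.

Not the script mentioned above: the install prefix, checkout locations
and project list are assumptions.
"""
import os
import subprocess

PYTHON = os.path.expanduser("~/cpython-master/bin/python3")
VENV = os.path.expanduser("~/envs/sci-master")

# Build order matters: Cython before NumPy, NumPy before everything else.
PROJECTS = ["cython", "numpy", "scipy", "matplotlib", "pandas"]

subprocess.run([PYTHON, "-m", "venv", VENV], check=True)
venv_python = os.path.join(VENV, "bin", "python")

for name in PROJECTS:
    repo = os.path.expanduser(f"~/src/{name}")  # assumed checkout location
    # --no-build-isolation: build each project against the Cython/NumPy
    # already installed in this environment, rather than isolated build
    # dependencies pulled from PyPI.
    subprocess.run(
        [venv_python, "-m", "pip", "install", "--no-build-isolation", repo],
        check=True,
    )
```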

I think it makes more sense to pursue this as a CI enhancement, similar to the refleak hunting build, or the speed.python.org performance benchmarking.

This makes a lot of sense to me. I think a CI running master-branch Python against the latest stable releases of projects (installed from PyPI!) would be a very good thing. Given that there will not be wheels yet, it may be worth using some sort of staged build.

I agree with @pf_moore’s concerns about this putting more burden on under-resourced projects, but we are going to see these failures one way or the other when Python is released so getting notified earlier would be better.

Running the master-CPython / master-project combination on CI is probably less valuable (but easier to get going, as most CI services have a ‘nightly’ option that many of us use already).

…if Django, why not matplotlib, Flask…

As the lead Matplotlib developer I am :+1: on that suggestion!


Would you mind sending me your script at vstinner@python.org? Thanks.

Responded to @vstinner via email.

Another thought that is worth surfacing publicly: it may be worth talking to the conda-forge folks. They have a correct machine-readable dependency graph, know where the stable sources are, and have build scripts for everything. I wonder if a pruned version of the graph they are using to run the py38 rebuild could be repurposed for this?
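To sketch what “pruning” could mean here (the dependency data below is invented for illustration, not real conda-forge metadata):

```python
from graphlib import TopologicalSorter  # stdlib since Python 3.9

# Toy data: package -> direct dependencies (NOT real conda-forge metadata).
DEPS = {
    "cython": set(),
    "numpy": {"cython"},
    "scipy": {"numpy"},
    "pandas": {"numpy"},
    "matplotlib": {"numpy"},
}

def prune(targets, deps):
    """Keep only the target packages plus their transitive dependencies."""
    keep, stack = set(), list(targets)
    while stack:
        pkg = stack.pop()
        if pkg not in keep:
            keep.add(pkg)
            stack.extend(deps.get(pkg, ()))
    return {pkg: deps.get(pkg, set()) & keep for pkg in keep}

subgraph = prune({"pandas", "matplotlib"}, DEPS)
print(list(TopologicalSorter(subgraph).static_order()))
# A valid rebuild order, e.g. ['cython', 'numpy', 'pandas', 'matplotlib']
```

The real input would come from conda-forge’s metadata; `graphlib` then gives a valid rebuild order for free.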


Please, no pandas. It’s popular, it’s easy to use, but it’s sloooooooow. You can do the same things pandas does with 2-3 more lines of Python or numpy code. IMHO it should not be considered a blocking project.

Excuse me for the spam, but I think there are many other interesting and popular projects:

(list of project links omitted)
and others that are very useful:

(list of project links omitted)
This PEP has been rejected. But thanks for the list of projects, we can consider them if someone works on a CI to test these projects on the master branch of Python.

I forgot pyperf :smiley:

Thank you Stinner, I think you’re doing a great job with CPython, judging from the release notes.

And your PEP was not bad. I simply suppose it’s too much of a hassle :smiley:

Maybe making DeprecationWarning on by default would be simpler. That way, the people that use those projects will also bother the projects’ devs to fix them :stuck_out_tongue:

And IMHO some of the projects you listed should simply be in the stdlib, such as numpy, pip, scipy, Sphinx, sqlalchemy and pytest. From my list, I would add paramiko, pipenv, openpyxl, crc16, psutil and arrow, even if maybe pipenv is too young (though it was adopted by PyCharm) and openpyxl and arrow have not-so-great implementations… :smiley: