Let me try to reply to everybody.
First of all, this PEP doesn't propose drastic changes: most items of the PEP were already done before the Python 3.8.0 final release, but it seems like most people didn't notice:
- Most if not all selected projects were compatible with Python 3.8 when Python 3.8.0 was released. This happened because many different people worked hard to make it happen.
- Python core developers and project maintainers test various projects on the next Python, especially during the beta phase.
- Bugs and regressions are reported to Python upstream.
- Incompatible changes are discussed on python-dev, the bug tracker, Twitter, on projects' bug trackers, etc. Some incompatible changes were reverted during the Python 3.8 beta phase.
- Some bugs were considered serious enough to get the "release blocker" priority, which blocks a release according to PEP 101. As far as I know, all these release blocker issues have been fixed.
- There are already multiple CIs running different projects on the next Python. For example, at Red Hat, we rebuilt Fedora on Python 3.8 beta: it's not just a "few projects", but the whole operating system: more than 200 Python packages (I don't know the exact number, maybe it's 500+; Miro knows that better than me, and maybe 200 was the number of packages broken by Python 3.8).
- The release manager is already free to change a release blocker issue to a deferred release blocker, or even "reject" the release blocker priority if they consider that it's not severe enough.
The Python 3.8 beta phase uncovered the PyCode_New() C API change: it was decided to revert this change and add a new function instead.
This PEP is not about doing new things, but more about better communicating around this work which is already done, and coordinating it better.
IMO the counter-productive part is rather that each Python release breaks tons of Python projects. This PEP proposes a concrete solution to make Python releases less painful for users.
Many users remember the pain of upgrading from Python 2 to Python 3. We are close to having Python 2 behind us. It's time to look at how to prevent such a situation from happening again.
It seems like the PEP doesn't clearly explain when compatibility is tested and who is going to pay attention to it.
I don’t expect that a single person, the release manager, will handle all issues. It’s the opposite: I would like to involve all Python core developers and all maintainers of all selected projects in the process. It’s more a human issue than a technical issue. That’s why the PEP title is “Coordinated Python release”: we need more interactions and coordination than what we have currently.
I don’t expect that selected projects will only be checked the day before the expected final release. No. My intent is to check these projects every day using a CI: see the “Distributed CI” section, where CI stands for Continuous Integration. The formal part is that the release manager is expected to send a report at each beta release. I expect that the release manager will point to the projects which need the most help to be updated.
One practical issue is that project dependencies can evolve more quickly than the PEP will be updated. So I chose to only select the dependencies which are the most likely to be broken, but also the dependencies which are the most commonly used. For example, urllib3 is the most downloaded library on PyPI. If urllib3 is broken, you should expect a huge number of projects to be broken on the next Python. On the other hand, I chose to ignore distlib, which is very specific to pip and packaging.
Obviously, if distlib is broken by Python 3.9, pip tests will fail, and so Python 3.9 will be indirectly blocked by distlib, even if it’s not explicitly listed in the selected projects.
I tried to explain that, albeit very briefly, in the “How projects are selected” section: “Some dependencies are excluded to reduce the list length.”
About “(most notably requests)”: requests is explicitly listed as a “project” in the PEP list. I’m aware of pip’s vendored dependencies (src/pip/_vendor/).
Obviously, I’m open to discussing the exact list of selected projects. It’s a Request For Comments and a draft PEP.
With my Red Hat hat on, the “compatibility” check basically means that building a Fedora package does not fail, knowing that tests are run as part of the build process. But this definition may be too specific to Fedora, since Fedora uses specific versions, which can be different from the versions chosen by the upstream project for its CI.
If a project has a CI running on the next Python, the CI is expected to pass: it must be possible to install dependencies, to build the package (e.g. build C extensions), and its test suite must pass.
The CI can be run by the project directly, or it can be a CI run by Python core developers, or both. That’s why I use the term “Distributed CI”, instead of requiring a single CI.
For example, when I talked with numpy developers, they told me that they like to control how dependencies are installed: which version, which OS, etc. (e.g. the OpenBLAS version).
Obviously, having multiple CIs testing different configurations is not counter-productive: they can detect more bugs. But we will have to decide at some point which CI is the one used to validate a Python version. Maybe this choice should be delegated to each selected project? I guess that the natural choice will be the upstream CI run by the project.
If Django decides not to support Python 3.9, maybe it’s a strong signal to Python core developers that something went wrong and that we have to discuss to understand why Django doesn’t want to upgrade. Maybe we are making too many incompatible changes and it’s time to slow down this trend?
Maybe Python core developers and other volunteers can offer their help to actually port Django. This happens very commonly: core developers who introduce incompatible changes often propose pull requests directly to update the projects broken by their change.
It happened for the new incompatible types.CodeType constructor: Pablo (and others) proposed pull requests to different projects. It also convinced me to introduce the new method CodeType.replace(): projects using it will no longer be broken if the CodeType constructor changes again (e.g. gets a new mandatory parameter). I proposed pull requests to different projects to use it.
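To illustrate the difference (a minimal sketch, not the actual pull requests): calling the CodeType constructor directly requires passing every field positionally, so any new mandatory parameter breaks the call, while replace() only names the fields that change:

```python
import types

def rename_code(func, filename):
    """Return a copy of func's code object with a new co_filename."""
    code = func.__code__
    # Fragile: types.CodeType(code.co_argcount, code.co_kwonlyargcount, ...)
    # must pass every field positionally, so a new mandatory parameter
    # (like posonlyargcount in Python 3.8) breaks the call.
    # Robust (Python 3.8+): only name the fields that change.
    return code.replace(co_filename=filename)

def f():
    pass

print(rename_code(f, "<patched>").co_filename)  # <patched>
```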
If Django doesn’t want to support Python 3.9, doesn’t want to accept pull requests, or pull requests cannot be written, well, the Python release manager should be able to exclude Django from the selected projects. I expect that such a decision will be a group decision.
Maybe ignoring Django is fine. But what about pip or urllib3? What if Python 3.9 is released even though we know that pip or urllib3 don’t want to or cannot support Python 3.9? Is Python 3.9 without pip/urllib3 still relevant? That’s also the question asked indirectly by the PEP.
For the specific case of Django, maybe the Django code base is too big and the Django release cycle too slow to include Django in the selected projects. I’m fine with dropping it from the PEP if you prefer. But it would be nice to have clear rules for including a project or not.
If you consider that the selected projects list is too long, we can make it way shorter. Maybe Python 3.9 should start only with pip and nothing else?
Ok, it’s now time for me to introduce a very experimental project that I started a few weeks ago: https://github.com/vstinner/pythonci
This project is supposed to be a way to test the selected projects on a custom Python binary with custom Python options: for example, using -X dev or -Werror (passed as command line arguments or as environment variables).
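To give an idea of the approach, here is a rough sketch of the concept (not pythonci’s actual code; the binary path and commands below are hypothetical):

```python
import subprocess

# Hypothetical custom Python binary, e.g. built from the master branch.
PYTHON = "/opt/python-master/bin/python3"

def check_project(source_dir, options=("-X", "dev")):
    """Install a project from its source tree, then run its test suite
    with a custom Python binary and custom command line options."""
    # Install the project and a test runner with the custom Python.
    subprocess.run([PYTHON, "-m", "pip", "install", ".", "pytest"],
                   cwd=source_dir, check=True)
    # Run the tests with the custom options: -X dev, -Werror, etc.
    proc = subprocess.run([PYTHON, *options, "-m", "pytest"], cwd=source_dir)
    return proc.returncode == 0
```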
I consider the project as experimental because I hit several issues:
- First, I chose to hardcode the commands used to install dependencies and to run the test suite of a project. I’m not sure that this approach is going to scale. I was scared when I saw the complexity of the tox.ini file of the coverage project.
- I wrote a task for coverage which uses tox, but I failed to run coverage with the specific custom Python binary (the task ignores the custom Python and uses “python3.7” instead).
- Python 3.9 already introduced incompatible changes which cause the job to fail early, while installing dependencies. In short, pip is currently somewhat broken on Python 3.9. In fact, pip has been fixed (the bundled html5lib no longer uses the collections ABC aliases but collections.abc; see the snippet after this list), but pythonci runs an old pip version which isn’t compatible with Python 3.9… I’m not sure why; I should investigate.
- All jobs fail very early with -Werror, because even pip emits many warnings (not only DeprecationWarning). My plan is to experiment with treating only DeprecationWarning as an error… But pip also fails with -W error::DeprecationWarning: again, because pythonci picks an outdated pip version.
- I wrote pythonci with different usages in mind: test the master branch of Python with a patch, test a project with -X dev, test a project with -Werror.
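For context on the html5lib issue mentioned above, the fix boils down to this kind of change (illustrative, not the actual patch):

```python
# The collections ABC aliases are deprecated since Python 3.3 and were
# removed in the Python 3.9 alphas, so the old spelling raises ImportError
# there. The usual compatibility shim:
try:
    from collections.abc import Mapping  # correct spelling since Python 3.3
except ImportError:
    from collections import Mapping      # old alias, removed in Python 3.9

print(Mapping)
```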
By the way, pythonci includes patches for pip and setuptools which fix a few warnings, to be able to experiment with -Werror.
In short, I would say that right now, Python 3.9 is in bad shape: it’s barely usable, and the most basic tools like pip are broken… Maybe I’m wrong and it will be fine in practice.
All these issues also convinced me to propose this PEP.
I don’t think that a single CI can answer all the open questions. Some jobs may only be relevant to Python core developers.
For example, I would like to drop the “U” mode of the open() function: https://bugs.python.org/issue37330 But I have no idea how many projects would be broken by this change… 4 months ago, when I tried, even building Python was broken… because of Sphinx… because docutils had been ignoring the DeprecationWarning since Python 3.4. Moreover, when I reported the issue to docutils with a patch… I was told that docutils was already fixed, but there was no release yet! (A new docutils version with the fix has been released in the meanwhile.)
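To show what this deprecation looks like in practice, a small demo (assuming a Python version where the “U” mode still exists):

```python
import warnings

# Treat DeprecationWarning as an error, like -W error::DeprecationWarning.
warnings.simplefilter("error", DeprecationWarning)

try:
    open(__file__, "rU")
except DeprecationWarning as exc:
    print(exc)  # "'U' mode is deprecated"

# Replacement: universal newlines are already the default in text mode.
with open(__file__, "r") as fp:
    pass
```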
It would be great to have a tool (like pythonci?) to check that the selected projects still run fine while working on an incompatible change: run the tool manually before merging a PR.
This is not a theoretical issue: pip was broken by the removal of the collections ABC aliases. It was an issue in html5lib which has been fixed: a new compatible pip version has since been released.