The release of pip 19.x seems to have broken installation for a large swathe of the ecosystem because of the changes with PEP 517. While we're working to figure out what to do, I can't help but wonder if there's anything we could have done to detect that this was going to be a problem before the release.
One idea that I've come to is that we probably could have detected this if we had some better "integration testing", using the master branch of pip to install some real projects - in a virtualenv, not in a virtualenv, etc.
I've already suggested on setuptools that we start testing against pip, and I'm thinking we may want to start testing against master for other tools as well. I'm wondering if maybe it would make sense for us to broaden the scope of this testing and create something akin to CPython's buildbots to detect early incompatibilities in Python's build ecosystem.
Roughly what I’m thinking would be that for each environment we could install the master branch of each of the relevant tools (pip, setuptools, virtualenv, tox, etc), and try to install a bunch of real projects from PyPI and see if they fail. To guard against noise when some project introduces a bug in its build, if there are failures we can re-run the failing installations using the release versions of the build tools and only hard-fail if the release tools succeed.
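To make the re-run idea concrete, here's a rough sketch of what that check could look like (a toy, not a proposed implementation: the project list, the `git+https` installs of master, the POSIX venv layout, and the outcome labels are all my own assumptions):

```python
import subprocess
import sys
import tempfile
import venv

# Assumed stand-ins for "master" vs. "released" build tools; a real job
# would cover virtualenv, tox, etc. as well.
MASTER = ["git+https://github.com/pypa/pip.git",
          "git+https://github.com/pypa/setuptools.git"]
RELEASED = ["pip", "setuptools"]


def try_install(project, tools):
    """Install the given build tools, then the project, in a throwaway venv."""
    with tempfile.TemporaryDirectory() as env_dir:
        venv.create(env_dir, with_pip=True)
        py = f"{env_dir}/bin/python"  # POSIX layout; Windows uses Scripts\
        for args in (["-m", "pip", "install", "--upgrade", *tools],
                     ["-m", "pip", "install", project]):
            if subprocess.run([py, *args], capture_output=True).returncode != 0:
                return False
        return True


def classify(project, install=try_install):
    """Hard-fail only when master fails but the released tools succeed."""
    if install(project, MASTER):
        return "pass"
    if install(project, RELEASED):
        return "regression"   # master broke an install the releases handle
    return "project-bug"      # fails both ways: noise from the project itself
```

The `install` parameter is there so the classification logic can be exercised without hitting the network; the point is just that a failure under master alone is the only signal worth paging anyone about.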