Some reasons:
- It will double the number of 3.x builds (& testing!) required by any medium-sized-or-larger Python project, which is a substantial burden.
- It will also plague infrastructure that has been built on the assumption that there's only one ABI per Python (minor) version, which has been true ~forever.
- I don't know whether the pip resolver would be capable of distinguishing the two ABIs when resolving the installation of a package, but I suspect not. That would be a huge usability issue (installing a package compiled for the wrong ABI, falling back to source builds of packages that aren't nogil-ready yet, etc.); see the sketch below this list for how ABI tags enter into wheel selection.
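
To illustrate that last point: wheel filenames carry an ABI tag, and an installer only accepts wheels whose tags match the running interpreter, so a second productive ABI means a second set of tags that every wheel-publishing project would have to cover. Below is a minimal sketch using the `packaging` library (the same tag machinery pip relies on); the package name `somepkg` and the `cp313t` nogil tag are made up here purely for illustration, not defined anywhere.

```python
# Minimal sketch of how an installer matches wheels to the running
# interpreter via ABI tags, using the `packaging` library.
# "somepkg" is a made-up project, and "cp313t" stands in for a
# hypothetical ABI tag that a nogil build might use (an assumption here).
from packaging.tags import sys_tags
from packaging.utils import parse_wheel_filename

candidate_wheels = [
    "somepkg-1.0-cp313-cp313-manylinux_2_17_x86_64.whl",   # regular ABI
    "somepkg-1.0-cp313-cp313t-manylinux_2_17_x86_64.whl",  # hypothetical nogil ABI
]

# Tags this interpreter build will accept (matching also depends on the
# platform tag, so results vary by OS/architecture).
supported = set(sys_tags())

for filename in candidate_wheels:
    _name, _version, _build, tags = parse_wheel_filename(filename)
    if tags & supported:
        print(f"{filename}: installable on this interpreter")
    else:
        print(f"{filename}: no matching tag -> fall back to building from source")
```

If no wheel matches, installers fall back to building from the sdist, which is exactly the usability cliff described above for packages that aren't nogil-ready yet.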
Not counting the debug ABI (which isn't generally something people distribute), there hasn't been a Python version with two production ABIs, so the above list is just the start of a likely large number of unintended side effects that someone will need to fix. IMO the onus is on the PEP to argue why the benefits of having parallel ABIs outweigh all that work.
I understand that having nogil available more quickly sounds like an appealing option, and you strengthen that point with…
… but IMO it's going to take a while to digest this either way, and the acceleration provided by parallel ABIs risks turning out to be a mirage while imposing very high costs on maintainers.
On that topic: by my reading of the stable ABI promises, if you intend to break them, a major version bump is unavoidable. And certainly, other pent-up changes will attach themselves to such an occasion – e.g. the packaging ecosystem is considering various flavours of large overhauls, cf.
But that’s to be expected IMO – nogil is an amazing piece of work, but it’s still not realistic to ask that all other brewing / pending changes in the rest of the Python ecosystem give nogil “exclusivity” on a major version bump.