(Maintainer of: PyPA: build, cibuildwheel; Scikit-build: scikit-build-core, scikit-build, cmake, ninja, a few others; also pybind11 and its examples, a bunch of Scikit-HEP stuff, plumbum, CLI11 (C++), and other things; also a frequent contributor to nox, and some conda-forge and homebrew recipes.)
First point: I don’t think the current situation is terrible - I think it’s a great step forward from the past setuptools/distutils monopoly, especially for compiled backends. Making extension modules with setuptools was/is really painful, and requires up to thousands of lines (14K in numpy’s distutils, IIRC) to work, and is very hard to maintain. Setuptools/distutils supports extension builds more out of necessity and its original use building CPython itself than because it was designed to build arbitrary user extensions. We are just now starting to see good options for extension-building backends built for PEP 517 (scikit-build-core & meson-python are recent additions that wrap two of the most popular existing build tools, cmake and meson). I don’t think finally seeing multiple usable options for build backends is bad!
On unification: I think unifying interfaces and providing small, modular libraries to help in that goal is a fantastic step forward. Certainly, in the compiled space, many/most users will want a build system like CMake or Meson - building a compiled extension from scratch is really hard, and not something I think we want to compete on. Reusing the years of work and thousands of pre-existing library integrations is valuable. I’d love to see more helper libraries, though - a public API for wheel would be really useful, for example.
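To give a sense of the scope such a helper library would cover: a wheel is just a zip file with a `*.dist-info` directory, but getting RECORD hashes and metadata right is fiddly enough that everyone shouldn’t reimplement it. A rough stdlib-only sketch of the core (the `write_wheel` helper and its signature are hypothetical - a real library would also handle tags, full metadata fields, and reproducible timestamps):

```python
import base64
import hashlib
import zipfile


def write_wheel(path, name, version, files):
    """Hypothetical helper: write a minimal pure-Python wheel.

    `files` maps archive names to bytes. RECORD is written last, listing
    the sha256 (urlsafe base64, unpadded) and size of every other file.
    """
    dist_info = f"{name}-{version}.dist-info"
    meta = {
        f"{dist_info}/METADATA": (
            f"Metadata-Version: 2.1\nName: {name}\nVersion: {version}\n"
        ).encode(),
        f"{dist_info}/WHEEL": (
            "Wheel-Version: 1.0\nGenerator: sketch\n"
            "Root-Is-Purelib: true\nTag: py3-none-any\n"
        ).encode(),
    }
    records = []
    with zipfile.ZipFile(path, "w") as zf:
        for arcname, data in {**files, **meta}.items():
            zf.writestr(arcname, data)
            digest = (
                base64.urlsafe_b64encode(hashlib.sha256(data).digest())
                .rstrip(b"=")
                .decode()
            )
            records.append(f"{arcname},sha256={digest},{len(data)}")
        # RECORD lists itself with no hash or size.
        records.append(f"{dist_info}/RECORD,,")
        zf.writestr(f"{dist_info}/RECORD", "\n".join(records) + "\n")
```

Every backend ends up writing some version of this; a shared, maintained implementation is exactly the kind of small, modular library I mean.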
pyproject-metadata is great; I’d like to see a bit more of this sort of thing, as it would make building custom backends easier. I’d also love to see more usage unification; config-settings in pip matching build, for example (at least for -C and lists; --config-settings unification might be too far gone).
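Whatever the frontend flag spelling, the backend ends up with a plain dict. A small sketch of parsing it (the `cmake.define.*` key convention follows scikit-build-core’s `-Ccmake.define.NAME=VALUE`; the helper itself is hypothetical):

```python
def cmake_defines(config_settings):
    """Hypothetical helper: collect `cmake.define.*` keys from the
    config_settings dict a PEP 517 frontend passes to the backend.

    Values are strings, or lists of strings when a key is repeated
    (frontends differ on whether repeats become lists - part of why
    unifying this behavior would help).
    """
    prefix = "cmake.define."
    defines = {}
    for key, value in (config_settings or {}).items():
        if key.startswith(prefix):
            defines[key[len(prefix):]] = value
    return defines


# e.g. `python -m build -Ccmake.define.BUILD_TESTING=OFF` arrives as:
settings = {"cmake.define.BUILD_TESTING": "OFF", "editable.rebuild": "true"}
```

The annoying part today is that users have to learn a different flag (and repeat-key behavior) per frontend to produce the same dict.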
On extensionlib: In my opinion, this must be an “extensions” PEP. I want both meson-python and scikit-build-core to work as PEP 517 builders first, so we have a good idea of everything required to make an “extensions” PEP. I also think we ideally should have a proof of concept of the idea (in extensionlib or as a hatch plugin). Also, for some projects, a native PEP 517 builder will probably remain ideal even after this. If your code is mostly (or in some cases, entirely) a compiled extension/library/app, then it would likely be best to just use the PEP 517 backend provided by your tool of choice. However, if you do have a mixed project, especially one that mixes compiled extensions (Rust compiled with cargo and C++ compiled with cmake or meson, for example), then being able to use these tools per extension would be highly valuable. It also allows the author to take advantage of things like Hatch’s pretty readme plugin or vcs plugins, etc. Source file collection is not unified, so if someone already knows hatchling, reusing hatchling and just adding a compiled extension via the extensions system would be nice. The key issue is handling config-settings - this would probably be the bulk of the PEP; for the toml settings, this is pretty easy, but we’d need a good way to pass through extension settings. You’d not pass in a list of files; you’d get out a list of produced artifacts and maybe a list of consumed files (for SDists). Things like cross-compiles are handled by the extension backend; it’s no different from cross-compiling as it is today. Another thing to handle is
get_requires_for_build_*, which is very important for compiled extension building, as such builds often depend on command-line tools that can optionally be pulled from PyPI.
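To make the hook shape concrete, here is a hypothetical stub - none of these class or method names are a real extensionlib API, they just illustrate the in/out contract described above: an extension builder declares its build-time requirements, and its build step returns the produced artifacts plus the source files it consumed (for SDists):

```python
class StubCMakeExtension:
    """Hypothetical per-extension builder; not a real extensionlib API."""

    def get_requires_for_build(self, config_settings=None):
        # Mirrors PEP 517's get_requires_for_build_*: command-line tools
        # that can optionally be pulled from PyPI as wheels.
        return ["cmake>=3.17", "ninja"]

    def build(self, output_dir, config_settings=None):
        # A real builder would invoke CMake/Ninja here. This stub only
        # reports what it would produce (artifacts for the main backend
        # to place in the wheel) and what it consumed (source files the
        # main backend should include in the SDist).
        produced = [f"{output_dir}/_core.abi3.so"]
        consumed = ["CMakeLists.txt", "src/core.cpp"]
        return produced, consumed
```

Note the direction of data flow: the main backend never hands the extension a file list; it asks the extension what came out and what went in, and config-settings pass-through is the part that still needs a real design.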
On conda vs. PyPI: I think both approaches have merits, and I don’t think one should be jettisoned in favor of the other, but we should do what we can to help these work together, and maybe learn from each other. Giving library authors the ability to produce their own wheels has benefits, such as better control over their library and rapid releases - sometimes conda releases get stuck for a while waiting for someone. Providing good tools to do it (like cibuildwheel & CI) has been huge, and I think the situation is better than Conda’s layers of tooling that makes tooling that injects tooling that duplicates tooling into tens of thousands of repositories. This has been patched so many times that it’s really hard to fix things that are clearly broken, like
CMAKE_GENERATOR, which is set to “Makefiles” even if make is not installed and Ninja is, etc.

Also, I spent several days trying to get the size of a clang-format install under some amount (500 MB, I think?) so it could be run within pre-commit.ci’s limits - and then I found the other pybind11 maintainers had deleted conda a year or two ago and had no intention of reinstalling it. Then someone produced a scikit-build/cibuildwheel binary of clang-tidy for PyPI - it was 2 MB, installed & ran pretty much instantly, and didn’t require conda to be preinstalled. The CMake file was less than a page, and the CI file was less than a page.

Also, due to the custom compiler toolchain, if a user wants to compile something locally, conda’s a mess. We get a pretty regular stream of users opening issues on pybind11 just because they are using the conda Python and don’t know why they can’t compile their own code. Conda’s designed around packages pre-built via conda-build, not built on the user’s system via standard tools.

On the flip side, Conda can package things that can’t be done as wheels (at least as easily), it can handle shared libraries without name mangling, and it has a (mostly) uniform compiling environment. And the central nature does allow central maintainers to help out with recipes a bit more easily. (Though I should mention that many of the “thousands” of maintainers are really just the original package submitters, just like on PyPI.)
Even for non-compiled backends, we wouldn’t have things like hatchling if the playing field hadn’t been opened up to multiple backends so the best could win out. And there’s a clear use case for flit-core, too, for building things that hatchling itself depends on, for example. ↩︎
It was “able” to because it had to be - there was no way to compete - but it wasn’t intended to be full featured. Things like selecting a C++ standard are missing. ↩︎