I don’t understand. What are these reasons to release a wheel at all, let alone preferentially, for a pure Python project? If my project is pure Python, and I’ve tested and verified that I can install it in local virtual machines with various Python versions from an sdist, then why would I need, or benefit from, a wheel?
The long and short of it (for me) is --only-binary :all: on pip and its friends. That ensures we’re never compiling an sdist when resolving.
There are a few reasons for that:
- Python Build Standalone: if your settings look like the settings of the machine that compiled it, you win the lottery. The lottery and reproducibility aren’t friends, though.
- Consumer Security: some corporations don’t allow running arbitrary code for package creation… for obvious reasons. And pip and friends build sdists into wheels before installation.
Since there’s no easy way for end users to say “I only need wheels from projects that compile code, and I’m fine with sdists from projects that don’t execute arbitrary code when building a wheel”, just blindly asking for only wheels is the simple, efficient, and easy way to go.
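For concreteness, here is a minimal sketch (just shelling out to pip, not any pip API) of what “blindly asking for only wheels” looks like in practice; the package name is a hypothetical example:

```python
# A minimal sketch: invoke pip with --only-binary :all: so the resolver and
# installer refuse sdists entirely and never run a build backend.
import subprocess
import sys

subprocess.check_call([
    sys.executable, "-m", "pip", "install",
    "--only-binary", ":all:",   # wheels only, for every package
    "somepackage",              # hypothetical package name
])
```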
But you’re also running into my earlier point: package authors shouldn’t have to care, but the community still benefits.
That focuses on install speed, but there are also things like quicker resolution, since the metadata is statically available.
But why does Pip do all of this extra work, when the sdist already contains every verbatim .py file that should end up getting copied into place, and doesn’t contain anything that requires preprocessing?
If that’s “because it doesn’t know”, why can’t the metadata in an sdist tell it that much?
If that’s “because we already have that and it’s what the wheel format is”, then why do we need two separate formats? Why can’t the wheel format also just include C source files alongside the compiled result for the people who are doing FOSS (and let people override the compiled result by recompiling locally with whatever optimizations, without needing to specify a separate install format)? Why can’t the metadata tell Pip that there’s still a compilation step that’s been omitted for whatever reason?
(This was a request to move to a different thread, which this now is.)
(I’m fine with this post being split into its own thread with the 3 posts above.)
I too was confused for a long time before this “clicked”.
In my understanding, the need to “build” a pure-Python package is a combination of historical factors, the way the packaging community is structured, and abstractions that try to work both for pure-Python packages and packages with compiled code.
Historically, setuptools was the de facto only way to build packages. setuptools supports C extensions, but even for pure Python code, the de facto standard setuptools metadata format was setup.py, which needs to be executed, so the notion of a “build” made perfect sense: building a pure Python package is, among other things, executing the setup.py to get the metadata. This was done in the sdist → wheel step. pip called setuptools to build a wheel from an sdist, and setuptools did this metadata resolution dance, plus optionally C compilation.
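To illustrate, here is a made-up, minimal setup.py of that historical kind (not from any real project): the metadata only exists once the script has actually been run, which is why even a pure Python package had a genuine “build” step.

```python
# Hypothetical historical-style setup.py: nothing here is static metadata;
# you have to execute the script to learn the package's name and version.
import re
from setuptools import setup, find_packages

# Compute the version at build time by reading it out of the source tree.
with open("mypkg/__init__.py", encoding="utf-8") as f:
    version = re.search(r'__version__ = "(.*?)"', f.read()).group(1)

setup(
    name="mypkg",               # hypothetical project name
    version=version,            # determined by running code, not declared
    packages=find_packages(),
)
```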
Then came PEPs 517 and 518, because setuptools was burdened with legacy and hard to evolve. These PEPs abstracted the “sdist → wheel” step into an interface that was no longer an ad-hoc interaction between pip and setuptools, but one following a standard that would enable alternatives to setuptools, now called “build backends”, to become usable by pip by implementing the same standard interface.
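Concretely, PEP 517 boils that interface down to a handful of hooks a build backend module exposes; a rough skeleton (hook names and signatures as defined in the PEP, bodies omitted) looks like this:

```python
# Skeleton of a PEP 517 build backend: pip (the build frontend) imports this
# module and calls these hooks instead of running setup.py directly.

def build_wheel(wheel_directory, config_settings=None, metadata_directory=None):
    """Build a .whl into wheel_directory and return its filename."""
    ...

def build_sdist(sdist_directory, config_settings=None):
    """Build a source distribution into sdist_directory and return its filename."""
    ...

# Optional hook that matters for resolvers: produce just the metadata cheaply,
# without doing a full wheel build.
def prepare_metadata_for_build_wheel(metadata_directory, config_settings=None):
    """Write a .dist-info directory and return its name."""
    ...
```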
For native code compilation, this was a great advance – cf. all the build backends specific to a certain C build system, like meson-python, scikit-build, pymsbuild, sip, and so on. The huge variety and poor standardization of C build systems is not something the packaging ecosystem can fix, so this build frontend/backend abstraction is useful to interface with many build systems. And of course, for compiled code, the notion of a build step makes perfect sense.
For pure Python projects, PEPs 517 and 518 were also a great advance, but for different reasons. There could now be dead-simple build backends with nice declarative metadata, like Flit. So the main role of this “build” step for pure Python projects was to turn the metadata written in the format chosen by the build backend into the wheel metadata format that pip understands.
Now, the point where this gets more confusing from the end-user perspective is PEP 621. That PEP standardized the pyproject.toml metadata that we know today. So, for the layman, it now intuitively makes less sense that you have to build the sdist into a wheel for pure Python code, because on the surface it looks like all build backends are equivalent, since they use the same metadata format. (Well, almost, because Poetry still doesn’t support PEP 621, cough.)
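One way to see what that standardization buys: any tool, not just the chosen build backend, can read the same [project] table straight out of pyproject.toml. A tiny sketch (assuming Python 3.11+ for tomllib and a pyproject.toml in the current directory):

```python
# Read PEP 621 metadata directly from pyproject.toml; no build backend involved.
import tomllib  # standard library since Python 3.11

with open("pyproject.toml", "rb") as f:
    project = tomllib.load(f)["project"]

print(project["name"])
print(project.get("version"))            # may be absent if listed in "dynamic"
print(project.get("dependencies", []))   # static dependency list, if declared
```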
This is only on the surface, though. There are still many things that could be standardized à la PEP 621 but currently aren’t. Off the top of my head:
- How dynamic fields are computed (e.g., various backends have features to read the version from a __version__ = ... assignment in a Python file, either by executing the file or parsing it; see the sketch after this list),
- How files from the sdist are included/excluded (you usually want just .py files, but projects need a way to include extra files like data files),
- And probably other things I’ve missed.
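As an aside, here is a minimal sketch of the “parse rather than execute” flavour of that first bullet: pulling __version__ out of a module without running it (my own illustration, not any particular backend’s code):

```python
# Find `__version__ = "..."` in a Python file by parsing it, never executing it.
import ast
from pathlib import Path

def read_version(path):
    tree = ast.parse(Path(path).read_text(encoding="utf-8"))
    for node in tree.body:
        if isinstance(node, ast.Assign):
            for target in node.targets:
                if isinstance(target, ast.Name) and target.id == "__version__":
                    return ast.literal_eval(node.value)
    raise ValueError(f"no __version__ assignment found in {path}")

# e.g. read_version("mypkg/__init__.py") might return "1.2.3"
```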
In theory, there could be a single de facto standard build backend for pure Python code, like setuptools used to be, which could be so lightweight as to be bundled in pip by default, and pure Python sdists found on PyPI using this backend would be practically equivalent to wheels, since pip could cheaply turn them into wheels or even bypass that step and just install them directly as an optimization.
In practice, there is currently no such standard build backend. I see no appetite for going back to setuptools, but no clear winner has emerged so far among Hatch, Flit, Poetry, PDM and a couple of others.
(Personally, I would dearly like to see a “go-to standard build backend for pure Python projects”, especially if it also had convenient features for managing pure-Python projects, like Hatch, and especially if Hatch gained support for managing Python versions. But it’s far from clear that PyPA should force that by choosing a tool and making it “official”, as opposed to merely waiting for the community to standardize by itself. I guess Pipenv is on everyone’s mind regarding this.)
Even if there were a clear winner, it would still be beneficial for pure Python projects to upload both an sdist and a wheel, because the sdist typically contains extra files like test files, which are just wasted bandwidth to download when you simply want to install the package. So, today, this sdist/wheel split serves many purposes: (1) it lets you download only what you’re interested in, (2) it allows many build backends to coexist, by making each build backend an abstraction behind the sdist → wheel step, and (3) it enables packages with compiled code to ship binary distributions, by building several wheels and encoding compatibility into wheel tags.
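If you want to see that split for yourself, you can just list what ships in each artifact of the same release; a quick sketch (the filenames here are hypothetical):

```python
# Compare what an sdist and a wheel of the same release actually contain.
import tarfile
import zipfile

with tarfile.open("mypkg-1.0.tar.gz") as sdist:
    print(sdist.getnames())      # sources plus tests, docs, packaging config...

with zipfile.ZipFile("mypkg-1.0-py3-none-any.whl") as wheel:
    print(wheel.namelist())      # just the importable files and .dist-info metadata
```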
(Sorry that this turned “blog post” sized.)
Shorter version:
Because pip’s dependency resolver needs to know all the dependencies of the project to be able to decide which version to install. There’s no (easy and general) way today to get a project’s dependencies without building a wheel (and you need to deal with things like dynamic dependencies). Wheels contain this information statically. So in general, if a project doesn’t have a wheel, pip needs to create an isolated (virtual) Python environment, install the build dependencies, and invoke the prepare-metadata-for-build-wheel hook just to find out whether the given version is compatible with the current Python interpreter. For wheels it’s much simpler: just download and parse a static file that tells you.
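To make “parse a static file” concrete, here is a small sketch of pulling the dependency list out of a wheel with only the standard library (my own illustration of the idea, not how pip is implemented):

```python
# Dependencies of a wheel live in a static *.dist-info/METADATA file inside the
# archive, so no build step is needed to read them.
import zipfile
from email.parser import Parser

def wheel_requirements(wheel_path):
    with zipfile.ZipFile(wheel_path) as whl:
        name = next(n for n in whl.namelist() if n.endswith(".dist-info/METADATA"))
        metadata = Parser().parsestr(whl.read(name).decode("utf-8"))
    return metadata.get_all("Requires-Dist") or []

# e.g. wheel_requirements("mypkg-1.0-py3-none-any.whl") -> ["requests>=2.0", ...]
```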
I think this is the key question, so I’m going to focus on that. While the use of two (or more) formats is largely a historical thing, there is still value in that.
Python packages vary a lot. Some are a pure set of .py files with static metadata. Some feature some kind of dynamic metadata (e.g. reading the version from git tags or Python files). Some feature “plain” C or Rust extensions. Some actually generate .py or C files using different tools. Some have non-Python dependencies. There are a lot of use cases to cover.
The current standards are doing their best to cover as many of them as possible, but there are drawbacks. PEP 517 makes it possible to provide a semi-consistent API for building a lot of different projects, effectively covering a lot of use cases. However, supporting these use cases requires a lot more complexity than your average “pure Python wheel”, so naturally, having and distributing two formats makes things faster. A wheel has everything static, so it can be processed and installed almost immediately. An sdist is “dynamic” and requires invoking the build system, which may have additional dependencies and can be slow.
I suppose you could try to devise a way of integrating both formats, but that would add complexity with no really clear advantage. Furthermore, covering different use cases with one file would inevitably cause it to become much larger. Just imagine that some C extensions link to large static libraries: the sdist is relatively small, but the wheels are huge. The opposite can also happen: if the sdist contains tests, it can be much larger than the wheels. Shoving everything into every wheel (and you need many wheels for many different targets) would cause a lot of duplication.
Another key thing here is that “pure python wheel” isn’t a clear enough descriptor for when the build step from sdist to wheel is “noise”.
It’s a bit of a historical thing at this point, but a huge example of this case is 2to3, where your sdist would be written in Python 2.x syntax, and the 2to3 settings would cause it to get “transpiled” to 3.x code upon build/install into a Python 3.x environment.
Another (still historical) example is pre-PEP 420 namespace packages, where __init__.py files would get generated.
The real key differentiator isn’t “is it pure Python”, but “are there build steps above and beyond unpacking the archive and copying files”, and as it turns out, the answer to that tends to be “yes” for a lot more packages than one would guess on the tin.
Of course, wheel files themselves include installation steps (for example, generating .pyc files), so we could bake in some level of “standard” build steps that an installer is expected to know how to do, just like there are standard install steps they are already expected to know how to do… but it’s a lot of effort for very little payoff when the cost of wheels that otherwise would not be needed is pretty low.
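For reference, that .pyc generation is about as mechanical as install-time steps get; a tiny sketch of roughly what installers do after copying files into place (the path is hypothetical):

```python
# Byte-compile the freshly installed .py files to .pyc, as installers do.
import compileall

compileall.compile_dir("venv/lib/python3.12/site-packages/mypkg", quiet=1)
```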