Python Packaging Strategy Discussion - Part 1

This is essentially what CPython offers as well, and gosh are there problems. Python is probably in a slightly different position than R, since there are so many Linux/Unix-like system tools written in Python that distros want to package, and those packaging efforts unfortunately conflict with the user experience if the user wants to use the distro-provided Python as a standalone component.

Combined with Steve’s comment on the Windows side of things, I am under the impression that R’s approach may not suit Python that well (or at least needs non-trivial modification to work for Python), since the two languages are used in scenarios that are too different.

1 Like

Hi,

I come from the world of Bazel [1] + Python where I’ve contributed a bit to the primary rule set (rules_python) and am actively working on new rules (rules_pycross).

I think Python usage with Bazel lands somewhere between “integrator” and “user”, with its own idiosyncrasies thrown into the mix. There’s no giant package repository like Conda; instead most users pull wheels and sdists from PyPI. But users also steer away from system-provided python. By default we pull builds from indygreg/python-build-standalone.

However, unlike most end-user situations, fully-integrated solutions are undesirable as they often quickly find themselves at odds with the ways in which Bazel wants to do things. For example:

  • In Bazel, build actions aim to be isolated and fully reproducible. For a particular set of inputs (including build tools, environment variables, etc.), the output should always be exactly the same. Bazel achieves this in part by sandboxing each action and paying close attention to details like redefining __DATE__ and __TIME__, sorting filenames and zeroing out timestamps in zip files, etc. Part of isolation is blocking network access by default, so a build step that tries to fetch its build dependencies as part of its execution (like pip or build, by default) will fail. Instead, these dependencies need to be known ahead of time.

  • Because build steps are reproducible, they can be aggressively cached, and Bazel does this at every possible step. It’s even popular to have this cache be remote and shared by many users working on the same codebase. Other caches, such as pip’s wheel cache, or any tool that tries to cache its own build artifacts, are redundant and sometimes problematic.

  • Bazel has its own parallel execution engine with built-in support for remote execution. Tools that attempt to parallelize themselves are often in conflict with this.

So I wanted to register a vote for small, specific tools like pypa/build, pypa/installer, and similar, at least as part of or in addition to any larger “do everything” solution.
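For concreteness, here is a rough sketch of how those small pieces compose programmatically with no network access at build time - assuming build and installer are already provisioned in the environment, and with a made-up project path:

```python
# Sketch only: drive pypa/build and pypa/installer directly, with all
# build-time dependencies assumed to be pre-provisioned (no isolation,
# no downloads).
import sys
import sysconfig

from build import ProjectBuilder                      # pypa/build
from installer import install                         # pypa/installer
from installer.destinations import SchemeDictionaryDestination
from installer.sources import WheelFile

# 1. Build a wheel from a source tree in the current environment.
wheel_path = ProjectBuilder("path/to/project").build("wheel", output_directory="dist")

# 2. Install it into the running interpreter's install scheme.
paths = sysconfig.get_paths()
destination = SchemeDictionaryDestination(
    {
        "purelib": paths["purelib"],
        "platlib": paths["platlib"],
        "headers": paths["include"],
        "scripts": paths["scripts"],
        "data": paths["data"],
    },
    interpreter=sys.executable,
    script_kind="posix",
)
with WheelFile.open(wheel_path) as source:
    install(source, destination, additional_metadata={"INSTALLER": b"example 0.1"})
```

The CLI equivalents (python -m build --no-isolation and python -m installer) cover roughly the same ground for script-driven setups.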

As an aside, the biggest missing piece for me is a resolver that, given a list of input packages, can produce a fully locked, cross-platform list of requirements - including build requirements and (as a bonus) external requirements in some fashion - without being heavily attached to an integrated tool that wants to manage the project. I know it’s a very challenging problem.


  1. Bazel: it’s a build + test + execution platform that grew out of Google’s own build system, Blaze. There are other tools with similar origins that likely share similar concerns, like Buck, Pants, etc. ↩︎

8 Likes

Thanks for the context on Python usage with Bazel, @jvolkman - interesting.

This part was a little confusing to me, because when I hear Bazel I hear “hermetic builds, no external dependencies”. After reading the docs of python-build-standalone, it seems like the Python interpreter is built to be fully self-contained (e.g., static linking against musl libc) and the only wheels on PyPI that can be used are the pure Python (or -any) ones. And everything else needs to be rebuilt from source, with the compiler toolchain and against CPython and other needed libraries in the Bazel build.

This is perhaps an example where a uniform interface would be helpful? Build tools want to default to parallel builds in most cases (nice if pip install mypkg takes 1 min instead of 10), but if they had a uniform CLI flag (now typically --parallel, -j or -n followed by an n_cpus integer) then you could always pass --flag-name 1 there without having to think about which tool accepts what flag.
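To make the friction concrete, here is a toy, entirely hypothetical sketch - today a wrapper has to know each tool’s own spelling, whereas one agreed flag would make the "cap at 1 CPU under Bazel" case trivial (the mapping below is illustrative, not a standard):

```python
# Hypothetical sketch: one "jobs" setting translated to each tool's spelling.
# Nothing here is standardized; the table is just what these tools happen to use.
TOOL_JOBS_FLAG = {
    "cmake --build": "--parallel",
    "ninja": "-j",
    "pytest (xdist)": "-n",
}

def jobs_args(tool: str, n_cpus: int) -> list[str]:
    """Return the tool-specific arguments that cap parallelism at n_cpus."""
    return [TOOL_JOBS_FLAG[tool], str(n_cpus)]

print(jobs_args("ninja", 1))  # ['-j', '1'], e.g. to behave inside a Bazel action
```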

You can certainly setup a Bazel environment where all dependencies are vendored in your own repo and everything is built from source. This is what Google does, and I’m sure some other large organizations using Bazel are as well.

But as a matter of practicality, modern Bazel usages regularly pull pre-built, sha256-verified third-party dependencies. The hermeticity comes from knowing all inputs to an action - some of which may have been downloaded - and preventing (or trying to) undeclared inputs from being available.

There is a -gnu variant built against glibc 2.17 which I believe is what most are using. I don’t interpret “standalone” as “statically linked”, but rather that you just download, extract, and run, without worrying about any install processes or assumed layouts on the host system. The system requirements for the different build variants are described here.

Indeed, nested parallelization seems like a solvable problem with the proper interfaces and plumbing. The build rule needs to tell Bazel how many CPUs it plans to consume (I believe this is possible) and also tell the build process how many CPUs should be consumed.

1 Like

Is the thinking that the better packaging world would have prebuilt non-Python code packaged?
Without that, each Python package that needs a non-Python dependency would be building it for itself, like now.

On macOS, is that Homebrew?
On Windows, is that the Windows Package Manager? Windows Package Manager 1.3 - Windows Command Line

(Maintainer of Briefcase (part of the BeeWare suite) which is either a user of packaging tools, or a packaging tool itself, depending on your definitions)

This thread has done a good job at highlighting that packaging is a complex problem for a variety of technical and historical reasons. However, IMHO, the biggest issue that exists isn’t technical - it’s communication.

For me, the most revealing part of the statement that “there are too many tools, and users are not sure which ones to use” isn’t that there are too many tools. The important part is that unless you’re knee-deep in discussions about packaging, it isn’t currently clear which of those tools should be used - and, the fact that it isn’t clear which one to use is, at least in part, the cause of there being too many tools.

As an example - the packaging.python.org tutorial was updated 6 months ago to use Hatch in its examples. This would seem to send a significant signal from the PyPA that Hatch is a “new default”; but I’m not aware of any formal statement of that intent. What does this decision mean for Setuptools (the old default)? Should it be considered deprecated? Should existing projects migrate to Hatch (or anticipate a migration in future - and if so, on what timeframe)? What does the introduction of Hatch as a new default mean for Flit, which is also a PyPA-managed tool? How does this decision fit into the longer-term vision and plans of PyPA members?

Whatever the outcome of the technical aspects of this (and future) strategy discussions, I’d suggest that the process of communicating that strategy - and communicating progress towards the desired future state - is just as important (if not more so) than the strategy itself. This is especially important given it’s going to take months or years to converge on that future state.

I acknowledge that the PyPA is more of an “accumulation of interested parties” rather than a formal standard-setting body. However, even an informal statement declaring a vision and desired future state for Python packaging would be invaluable in terms of setting community expectations, generating convergence towards that desired future state, and guiding the contributions of those who might be inclined to help get there.

Perhaps I’ve missed the point, and the purpose of this discussion is to determine what the strategy and vision should be, prior to publication as a PEP (or whatever instrument is appropriate). If that’s the case, consider this a hearty endorsement of that plan. However, if it isn’t, then above everything else I’d advocate for a clear articulation of whatever the final vision happens to be.

11 Likes

There is no formal statement because there is no formal recommendation. It was chosen as the default because it is easier for newcomers to use and because, at the time (and still today), it had stable releases supporting the latest packaging PEPs, in contrast to setuptools’ beta support.

It is also important to note that because of the pipenv debacle any official future recommendation would be kind of tricky.

No, and it cannot technically be deprecated until something else can build extension modules (which I am working on with Henry, albeit slowly). That is the final piece of the goal set out in the abstract of PEP 517.

No project should feel compelled to do anything, but if there is a desire to move metadata from setup.py to pyproject.toml then I personally would recommend Hatchling for ease of use, among other reasons that I plan to outline in a dedicated document soon.

Perhaps others could chime in on this one, but my interpretation of Flit is that it will be forever required to build the few core packages like packaging, because it vendors its dependencies, which makes bootstrapping easier for distributions.

That is an open question! The following is my personal opinion:

The original, and unchanging, philosophy of Hatch is that its defaults are based on accepted standards and that you can mix and match tools: if you only wanted it for the Hatchling build system you could do just that, or if you only wanted its environment management you could use only that. Basically, if you wanted to, you could use a different tool for every part of managing a Python project: creation, building, versioning, publishing, environment management, Python installation management, etc.

While I still think you can do all that and Hatch will always allow that, I was conflating could with should. I am increasingly of the opinion that the Python ecosystem would benefit from having a recommended user-facing packaging tool like Cargo and npm.

What opened my eyes recently was doing a month-long embed on a Rust team at work. I simply did not realize how much friction and uncertainty I mostly avoid because I am already experienced and know what to use, when, and how. If packaging in that ecosystem were not as streamlined as it is, onboarding and teaching beginners would be quite the task, and it would look similar to how Python is now, where each team or organization must document the workflow that it specifically has chosen.

I think this thread is talking about 5 different things that should be broken up:

  1. A recommended user-facing packaging/project management CLI and build backend
  2. The continuation of building core packaging logic in dedicated libraries like installer, which will make the ecosystem more accessible for distributions and things like Bazel, and also allow for easier maintenance of, and less dependence on, pip
  3. The ability to better integrate with the Conda ecosystem e.g. their packages being available outside that ecosystem
  4. The intricacies of native code (as I mentioned Henry has funding for and is working on this)
  5. Lock files (which there is already a thread for and Brett is incrementally working on)

I am trying to tackle 1, which is mostly a UX challenge and is the direction Hatch has currently chosen, in that it is a wrapper around many other tools and is based on plug-ins for extensibility. I don’t want to restate what has already been said, but I basically agree with everything in the second and third comments.

3 Likes

Fair enough… but as an external user who isn’t actively involved in the decision making process, what I observe is “Python/PyPA now recommends Hatch as a packaging tool”. At the very least, it’s an indicator that the powers that be consider it good enough to be a reasonable default. That may not have been an intentional recommendation - but it’s an implicit one, especially given the role setuptools has played in the last 20 years of Python packaging.

It may be tricky, but I don’t think that makes it any less important. Personally, I feel that a project that clearly articulates “Although we recommended X in the past, we’ve come to the conclusion that was a bad call” is a better outcome than never making a recommendation and leaving the community to flounder and produce a bunch of new competing standards as a result.

Fair enough; however, this kind of makes my point about focussing community attention. If I were in a position to help, should I focus on helping Hatch support extension modules, or on fixing the gaps/bugs in PEP 517 support for setuptools?

We may not be able to deprecate setuptools today; but indicating the intention to eventually deprecate setuptools, and the conditions that would lead to that deprecation, would be a valuable signal for the community - if only at the level of reducing the number of new libraries built on top of a tool that is planned for deprecation as soon as opportunity allows.

Completely agreed with this. But this is also an area where a semi-standards body like the PyPA is in a position to help by communicating current recommendations and intentions for future directions.

1 Like

I was looking at @mayeut’s Manylinux Timeline today, and something struck me that I think is perhaps relevant in the larger scheme here: despite Python 3.7’s EOL in less than half a year (and not being supported anymore by large parts of the PyData Ecosystem[1] for about a year already), 54%(!) of current Python package downloads use an almost-or-already EOL Python version.

Python 3.9 (released over 2 years ago) is at ~9% and everything above is a rounding error. This thread has mostly focused on the library side of packaging, but it’s indicative (IMO) of the compounding nature of these problems that we end up with very slow upgrade speeds, which become a larger and larger hurdle the longer things are left untouched (and prospective upgrade side effects accumulate) at every step of the way in the dependency chain.[2]

The point I’m trying to coax out here is that this 100’000 foot view on the aggregated delays in our upgrade story should be an aspect of that “packaging strategy discussion”[3]. More to the point, I fail to imagine how we could make a substantial dent in it without:

  • absolving users of being integrators (by default), as they otherwise have a daunting amount of work to go from Python 3.x to 3.y (will all packages be available? which package versions will involuntarily change? etc.)
  • being able to publish packages in the absence of library authors (because otherwise any downstream dependency then has to wait, which again compounds); this one is thornier, because it would be a big change (or new layer) in the PyPI social model, but OTOH, we cannot (and should not) force library authors to always be available e.g. to recompile & re-release when a new Python version drops.[4]

Taken together, this sounds to me like big flashing signs towards needing an official integration layer[5].

Out of curiosity, I wanted to check some current download numbers for conda-forge (which obviously is not reflected in the statistics from PyPI and thus Manylinux Timeline). It’s less trivial to get concrete numbers (they’re all aggregated over time), but – despite obvious shortcomings in the approach – I used the download numbers from the recently released cryptography 39.0, because it’s part of almost every non-trivial python environment (e.g. anyone who has any transitive dependency on requests)[6]. After almost exactly a week of being online, the data looks as follows:

| Python | linux-64 | linux-aarch64 | linux-ppc64le | osx-64 | osx-arm64 | win-64 | Sum | % C-F | % PyPI[7] |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 3.8 | 42'106 | 1'626 | 75 | 5'187 | 941 | 8'657 | 58'592 | 25.2% | 75.8% |
| 3.9 | 53'457 | 1'630 | 91 | 7'180 | 2'337 | 11'233 | 75'928 | 32.6% | 18.6% |
| 3.10 | 45'446 | 862 | 94 | 5'657 | 2'271 | 9'072 | 63'402 | 27.3% | 4.7% |
| 3.11 | 16'728 | 187 | 62 | 3'680 | 823 | 13'097 | 34'577 | 14.9% | 0.9% |
| Sum | 157'737 | 4'305 | 322 | 21'704 | 6'372 | 42'059 | 232'499 | 100% | 100% |
| % | 67.9% | 1.9% | 0.1% | 9.3% | 2.7% | 18.1% | 100% | | |

Now, I’d be the first to say that this is not a fair comparison for various reasons[8], but the qualitative difference still is somewhat striking to me – on Windows, Python 3.11 is even the dominant version already.

A shouty remark

The point here is not about conda, but about the difference it makes to have such an integration layer. Yes, floating such an idea begs an obvious question of who would do all that work, but as an indication, conda-forge has attracted ~4600 volunteer maintainers[9] who are willing to do that kind of integration for the wider Python ecosystem. To my mind, it wouldn’t be impossible that this kind of motivation could similarly be harnessed for an “official” integration layer directly.


  1. those following NEP 29. ↩︎

  2. I know of at least one maintained library that still has a hard pin on python 3.7 – not >=, but ==. ↩︎

  3. actually, this whole line of thought reminds me of @freakboy3742’s 2019 keynote about potential “black swan events” for Python ↩︎

  4. the stable ABI becoming more usable could help here, as would the ability of packages to mark themselves as pure python not just in the wheel tag but in the package metadata. ↩︎

  5. though I’ll concede the possibility that maybe all problems look like nails when I got an integration-shaped hammer ↩︎

  6. using the download numbers for the various python builds directly doesn’t work well because the different versions don’t get published at the same time, with differences in the amount of time being online obviously skewing the download numbers. ↩︎

  7. From the manylinux-timeline; only taking downloads for Python versions >=3.8 into account ↩︎

  8. e.g. PyPI serving a wider range of packages, and thus more of the “long tail”, conda-forge not having builds <3.8 anymore (obviously skewing the statistics compared to PyPI), not everyone using cryptography, etc. ↩︎

  9. whether they are also the upstream package author or just a volunteer; this is just counting so-called “feedstock maintainers”, not drive-by collaborators who open a PR. ↩︎

2 Likes

This is wrong on pretty much all levels:

  • There are multiple build backends capable of handling native code and producing Python extension modules. For the most general case (multi-language, C/C++/Cython/Fortran/Python/CUDA) I’d recommend meson-python and scikit-build-core. Then for Rust users there’s maturin.
    • Furthermore there’s enscons - I know less about it, but I believe it works. Given that SCons as a build system doesn’t come close to Meson or CMake in capabilities/performance, its usage probably won’t grow much.
  • I wouldn’t expect setuptools to be deprecated because those other things exist. It’s well-maintained and has a large user base. New projects are another story - authors should probably choose one of the other build backends indeed.
  • The PEP 517 goals have largely been achieved. The point is to be able to write new build backends (not replace distutils/setuptools with a single new thing), and those exist.
    • What’s left is mostly making it easier to write build backends. Two of the most painful things are dealing with wheel tags and with install schemes. Both are way hairier than they need to be, and not well documented - you typically need to go read the source code of some libraries, or piece things together from multiple places by trial and error.
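As a small illustration of the piecing-together that last item describes, the install-scheme side alone already involves platform- and environment-dependent lookups (a sketch using only the standard sysconfig module; output varies by system and Python version):

```python
# Where do a wheel's files actually land? The answer lives in sysconfig
# schemes, which differ per platform, per distro patching, and inside vs.
# outside a virtual environment.
import sysconfig

scheme = sysconfig.get_preferred_scheme("prefix")   # Python 3.10+
print(scheme)                                       # e.g. 'posix_prefix' or 'venv'
for key, path in sysconfig.get_paths(scheme).items():
    print(f"{key:>12} -> {path}")                   # purelib, platlib, scripts, ...
```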

The Hatch/Hatchling docs seem a little misleading: they claim to have a build system but do not warn users that it’s pure-Python only. The libextension thing doesn’t exist at this point (I can’t even find a previous post on it anymore), and even if it does come to fruition it’s very likely only going to address the simplest cases - e.g., a user has a few Cython/C extensions only, no external dependencies or other languages, and doesn’t need much from compilers or a workflow for developing native code. One build backend calling another is not a healthy idea in general.

Hatch as a workflow tool that wraps other tools seems quite nice, but we’re going to continue having multiple build systems and hence multiple build backends.

1 Like

The integration I would propose here is that those distros should put up a PyPI-compatible index of their own with their builds of packages on it and convince pip to look there first.[1] The innovation here is that the builds don’t have to be on PyPI proper, and the user doesn’t have to figure out what URL they should be putting in - it’s just an inherent property of the Python install they’re using (and it may be different in a venv based on that install, for example).

More confusing than Python crashing? Or the rest of the OS crashing?

In any case, the distributor opted into it. If their users get confused, maybe they could figure out how to be compatible with the defaults so they don’t have to warn on so many packages :wink:


  1. Unfortunately “first [index]” is not a concept that pip is familiar with, so that would be the work to do on the client side. Otherwise, it’s all about the integrator providing their packages in a compatible way. ↩︎

1 Like

I’m not saying it will be deprecated; I’m saying PEP 517 has not been fully achieved because there has not been a PEP about extension modules.

What do we need a PEP to define? I’ve successfully created a backend that builds extension modules (albeit for a narrow enough case that it’ll never take over the world, but that’s fine), so the fundamentals are sufficient.

Off the top of my head, what we’d need a PEP for in this area is to define far more ABI constraints than can be captured by the platform tag. Not every package needs to target a specific GPU, or a specific set of CPU extensions, or a specific version of a particular system library. But those that do are currently forced to support a likely range of them and handle it at runtime (or, more commonly, to force users to choose a specific package/feed at install time).
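For reference, the compatibility information a wheel can carry today is just the interpreter/ABI/platform tag triple - nothing about GPUs, CPU features, or specific system-library versions (a quick look using the existing packaging.tags API; output depends on the machine):

```python
# The tag triple is all a resolver sees; anything finer-grained (GPU arch,
# AVX-512, a particular system library version) has to be handled elsewhere.
from packaging import tags

for tag in list(tags.sys_tags())[:3]:
    print(tag)  # e.g. cp311-cp311-manylinux_2_17_x86_64 on a typical Linux box
```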

But that’s totally separate from PEP 517.

1 Like

This kind of thing btw contributes to the sense of there being “too many tools” from my perspective: it takes time to learn and evaluate each of these different tools. I don’t mind there being different tools and I don’t mind learning the one that I will use. However I don’t want to spend lots of time learning lots of different tools that I won’t end up using before I can even figure out which one I should use.

I guess Ralf read the Hatch docs just like I did, because it was mentioned above. I had to get a long way through before concluding, from the absence of any instructions for non-trivial build configuration, that it probably doesn’t provide anything to help with non-trivial builds. Had that been stated clearly up front, I would have stopped reading a lot earlier.

As Ralf says, that’s not a deficiency of Hatch, since it’s just a question of scope. It just means that I need to use Hatch with something else as a build system. What that means from my perspective, though, is that I just spent a bunch of time reading before realising that, while it looks nice in lots of ways, it does not solve any of what for me are the hard problems - so I still need to go and choose a build backend (which means learning more tools to evaluate them).

2 Likes

From PEP 517:

The goal of this PEP is to get distutils-sig out of the business of being a gatekeeper for Python build systems. If you want to use distutils, great; if you want to use something else, then that should be easy to do using standardized methods.

Unless I am profoundly misinterpreting that and our long-term goals, then that is not yet complete because there is no standardized method/interface for extension modules.

To be super specific here, for building a wheel with compiled bits you need 2 distinct things: a build backend and an extension module builder.

The build backend is the thing that users configure to ship the desired files in a zip archive, which we call a wheel. The extension module builder is the thing in charge of compiling, like CMake, which then communicates with the build backend to say what extra files should be included.

Mildly misinterpreting (to be fair, it’s ambiguous, unless you’re familiar with the author’s language) :slight_smile:

“Standardised methods” here basically implies that you specify your alternative to setup.py ... in pyproject.toml and anything that wants to run a build knows how to run it.
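Concretely, the “standardized methods” are the PEP 517 hooks: a backend is just an importable module exposing a couple of functions, named in pyproject.toml under [build-system]. A bare-bones sketch (the module name is hypothetical and the bodies are elided):

```python
# my_backend.py - hypothetical backend module, referenced from pyproject.toml:
#   [build-system]
#   requires = ["my_backend"]
#   build-backend = "my_backend"
# Frontends (pip, build, ...) import this module and call the hooks below.

def build_wheel(wheel_directory, config_settings=None, metadata_directory=None):
    """Build a .whl into wheel_directory and return its filename."""
    raise NotImplementedError  # compile/collect files, zip them, return the name

def build_sdist(sdist_directory, config_settings=None):
    """Build an sdist .tar.gz into sdist_directory and return its filename."""
    raise NotImplementedError

# Optional hook, e.g. for dynamically discovered build requirements:
def get_requires_for_build_wheel(config_settings=None):
    return []
```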

Sure, if that’s how you want to architect it. setuptools combines the two, as does pymsbuild, and so if you want to use either of those to build your extension then you also use them to package the wheel. conda-build uses a batch file/shell script, so you can build/install any way that you want/can and the tool figures out what to include.

Both are architectural choices. If your build backend wants to support interchangeable builders, it needs to provide a configuration option to its users. That’s all.

I think a build backend doing the part of an extension module builder should be discouraged.

Even if hypothetically all of the logic for packing the wheel, filtering files (and associated configuration), reproducibility, etc. was in a standalone library, the use of that library would still be undesirable compared to simply implementing an API that tells an arbitrary build backend “hey, I built a file at this path and it should be shipped”.
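A purely hypothetical illustration of that kind of handshake - no such interface is standardized today, and every name below is invented:

```python
# Invented interface, for illustration only: the extension module builder
# compiles and reports artifacts; the backend decides how to pack the wheel.
from typing import Protocol

class ExtensionBuilder(Protocol):
    def build(self, build_dir: str) -> list[tuple[str, str]]:
        """Compile and return (path_on_disk, path_inside_wheel) pairs."""

def add_built_artifacts(backend_includes: dict[str, str], builder: ExtensionBuilder) -> None:
    # "Hey, I built a file at this path and it should be shipped."
    for on_disk, in_wheel in builder.build("build"):
        backend_includes[on_disk] = in_wheel
```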

Assuming the latter was an option, wouldn’t you have chosen that for pymsbuild?

Probably not, but only because I wanted to go even more arbitrary than that library would’ve been :wink: zipfile suited me just fine (and I did structure the code to generate a laid-out wheel first, then just zip the whole thing, so that last step is very straightforward).

The question works both ways though,[1] basically coming down to “I only want to write the code that interests me,” which I think is totally fine. Unfortunately, nobody seems really committed to solving the single-source-cross-platform-with-existing-ABI-constraints-compilation problem, in large part because it requires solving the installing-non-Python-dependencies-as-part-of-build problem, which is considerably more difficult now that builds tend to happen in fresh, isolated environments.[2]

It sure seems like this is the problem to solve, though, and I think we’re heading towards a better understanding of what we need and which approaches are feasible. I don’t think it requires every build backend to have full support for native modules, though. It’s perfectly fine for that to constrain your choice of backend when setting up a project.


  1. If there was a library that could [cross-]compile native code on all platforms without [platform-specific] configuration, wouldn’t you just use it? ↩︎

  2. For clarity, I mean PEP 517 here, not hosted CI environments. I still think the latter are our best way out of the complexity here, as they’re the closest free thing we have to a Linux distro/conda environment that isn’t a Linux distro/conda environment. ↩︎

4 Likes

Suppose you want Rust’s cargo to be your extension module builder. A cargo invocation produces a shared library linked with the appropriate Python flags / shared libraries. Now hatch wants to put it in a wheel? Should work fine?
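For concreteness, the wiring might look roughly like this with Hatchling’s custom build-hook interface (registered via [tool.hatch.build.hooks.custom] in pyproject.toml); the crate layout, file names, and single-platform assumption are all made up:

```python
# hatch_build.py - rough sketch: let cargo do the compiling, then tell the
# backend to ship the artifact and tag the wheel as platform-specific.
import shutil
import subprocess

from hatchling.builders.hooks.plugin.interface import BuildHookInterface

class CargoBuildHook(BuildHookInterface):
    def initialize(self, version, build_data):
        subprocess.run(["cargo", "build", "--release"], check=True)
        built = "target/release/libmyext.so"        # hypothetical artifact name
        shipped = "src/mypkg/_myext.so"             # hypothetical package location
        shutil.copy(built, shipped)
        # Map "path on disk" -> "path inside the wheel", and mark it non-pure.
        build_data["force_include"][shipped] = "mypkg/_myext.so"
        build_data["pure_python"] = False
        build_data["infer_tag"] = True
```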

Remember nonstdlib (GitHub - dholth/nonstdlib: Python’s standard library, repackaged. Experimental.), which produces hundreds of wheels from a single sdist?

The PEP 517 and related formats are so trivial to produce that it may be difficult to provide value at a lower level.