User experience with porting off setup.py

I’m not sure if we are talking past each other here.

The point is that apparently you should not pin build dependencies in pyproject.toml but should instead give suitable ranges like setuptools > 62 etc. When you build with e.g. pip wheel . this will create an isolated environment and install the newest versions compatible with those ranges.

If you want deterministic builds then you want to be able to pin the versions of e.g. setuptools more precisely. The suggestion from @sbidoul is that you can do that by disabling build isolation and using pinned requirements like:

pip install -r requirements-pinned.txt
pip wheel --no-build-isolation .
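Here requirements-pinned.txt would contain exact pins for the build requirements, something like (versions purely illustrative):

# requirements-pinned.txt: exact pins recorded for the build dependencies
setuptools==68.2.2
wheel==0.41.2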

This puts me in the position of having to choose between either a) having build isolation but not pinned requirements, or b) having pinned requirements but not build isolation. For deterministic builds, though, I would ideally like to have both pinned requirements and build isolation at the same time.

I guess your suggestion is that I can just make my own clean environment for this but that argument applies to everything: why does any tool like pip need build isolation if I can just go make my own clean environment?

2 Likes

Mostly not, they weren’t really relevant to what I was working on, except as transitive dependencies. One or two hacks were required (I remember one assumed that “bash.exe” always existed on Windows, and occasionally some environment variables were needed to enable/disable certain options), but they were mostly within the area of having a couple of platform-agnostic extension modules and few/no external dependencies. (Numpy/sklearn/Pywin32 were not on the list, for example.)

I think that suggestion was specifically for @indygreg who wanted to use their existing environment. You could instead do both if you wanted to.

Yeah, build isolation in the Python packaging tools is yet another problem that distro maintainers have to keep switching off. We already have isolation built into our tools (osc build here in SUSE/openSUSE, mock in Fedora/RHEL; I am certain Debian has something as well). Yet another area where the old setup.py build worked better for us.

Matěj

I think there’s a key question here: what does “reproducible” mean to you? Since we are talking about building extension modules, are you also pinning e.g. the C compiler? What about glibc? Are you after byte-for-byte reproducible builds, or do you have some looser definition, and if so what’s acceptable to vary and what isn’t? None of this is represented natively in setup.py BTW, so pyproject.toml didn’t make the situation worse in any way from this perspective. If you want reproducible builds of wheels you basically need to do it in a container (like Anaconda does, and even then I don’t know if they go as far as setting SOURCE_DATE_EPOCH or somehow fixing the machine’s clock for any tools that may not support SOURCE_DATE_EPOCH).
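For what it’s worth, here is a rough sketch of just the timestamp part, assuming your backend honours SOURCE_DATE_EPOCH (setuptools/wheel do for archive member timestamps, as far as I know); the compiler, glibc, etc. still have to be controlled separately:

# Pin embedded timestamps to the last commit date so that rebuilding the same
# commit produces identical archive metadata (assumes a git checkout)
export SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)
python -m build --wheel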

Now if you are more after “reproducible” in terms of “I want a bill of materials for what was used to create this wheel”, then that’s getting into lock files, which I know some people have started having conversations about, including how that looks for sdists. And even then, that still doesn’t necessarily cover the C compiler, etc., so it still hinges on what exactly you’re trying to record for your view of “reproducible”.

5 Likes

1000+ on this one! The problem is that the current fashion is to hate the standard library and push everything to independently developed external packages. And because of the lack of a BDFL, an endless number of packaging systems grew up. The Venn diagram in the illuminating presentation by Anna-Lena Popkes is just an illustration of how grotesque the situation is.

Matěj

2 Likes

Within the Python packaging build system there is a distinction between “Python dependencies” and “non-Python dependencies”. Controlling non-Python dependencies would need to be done separately in this context, but pyproject.toml is used to specify the Python dependencies (the things that can be installed with pip install foo), so that is what I am talking about here.

One thing that I would like to do is pin the versions that are used when cibuildwheel builds the wheels that I upload to PyPI. That way, if I go back to the 1.1 release branch 6 months after the release of v1.1.0 to put out a v1.1.1 bugfix release, I can presume that the build will not have broken because of e.g. a new release of Cython or something like that.

The problem is that tools like build, cibuildwheel etc. will install the newest versions that are consistent with the constraints in pyproject.toml, so to control what gets built there I need to pin in pyproject.toml. The general guidance, though, seems to be that pinning or adding speculative upper-bound constraints is bad practice because it creates problems for downstream packagers.

I can disable build isolation and use something else to pin the versions for building wheels in CI, but this only works when building from VCS and not when building from an sdist. How can I communicate to someone who is looking at an sdist in 10 years’ time which versions were used to build the project when the release was made, in such a way that they can easily reproduce those versions?
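One possibility I can think of (purely a sketch, not an established convention, and the file name is made up) would be to ship the pins inside the sdist itself, e.g. a build-requirements-pinned.txt recorded at release time, and document that a future rebuilder can approximate the original environment with:

# build-requirements-pinned.txt would contain exact pins such as setuptools==68.2.2, cython==3.0.4
pip install -r build-requirements-pinned.txt
pip wheel --no-build-isolation .

That still gives the pins no status that tools understand automatically, though, which is really the gap here.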

2 Likes

It must be very disheartening for the volunteers who spent months and years of effort to address hard problems, write and debate standards, implement them in tools, fix bugs, and rework documentation, to see messages such as yours with a parade of strong negative words and dismissive, reductive and incorrect summaries of the situation.

Many here have agreed that the current state is not yet good enough and that things will continue to be improved, so maybe the negativity could be restrained.

13 Likes

By the way, can I just say thanks here to the various people who have been motivated by this discussion to submit PRs to the packaging guide? It’s much appreciated.

18 Likes

A concrete thing that would be nice is if a frontend like build could optionally take in a constraints file. This would allow you to fix concrete versions, while still having the frontend manage the isolated environment for you.

Of course, current constraints files don’t really support hashes, which is something one would really want in order to avoid propagating supply-chain attacks.

By the way, one useful thing that build does but pip wheel does not is to check for the presence of build requirements even when --no-isolation is set.
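In the meantime, a partial workaround for the constraints wish above, which I believe works with recent pip versions (worth verifying for your setup), is the PIP_CONSTRAINT environment variable, which pip also applies when it populates the isolated build environment:

# constraints-build.txt is a hypothetical file of exact pins for the build requirements
PIP_CONSTRAINT=$(pwd)/constraints-build.txt python -m build --wheel

It doesn’t cover hashes, though, so it only solves part of the problem.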

4 Likes

Exactly. Build isolation and dependency pinning are complementary functionality to achieve highly deterministic, reproducible builds. I want them both.

As a package maintainer, I want these properties by default because they put me in control of when things change. Any change of behavior is tracked by a VCS commit, not by spooky action at a distance when state on a server (like PyPI) changes or when I muck around in a local Python install or virtualenv that I later use for packaging.

This behavior gives me the guarantee that when I release new versions of my software - possibly from a legacy branch - I don’t have to worry about whether a new version of some tool, or the presence of unexpected software in the build environment, changes the built artifacts. I don’t want to be debugging unexpected changes to Python packaging behavior when trying to release a mitigation for a 0-day vulnerability or recover from a SEV-0. Rather, I want Python packaging behavior changes to go through the same code lifecycle change process (e.g. PRs + review) that every other code change goes through.

I also prefer this approach because it gives me - the package maintainer - a way to clearly articulate a set of known working versions and to define boundaries for support. If someone reports a bug, I can ask “did you reproduce it in exactly this environment?” This can save a lot of time triaging bugs.

I absolutely love that modern build frontends are isolated by default. The missing features here are a way to make the build frontend/backend install more deterministic and a more ergonomic way to pin dependencies in a pyproject.toml world.

5 Likes

Thank you for the kind words.

I’m so grateful for the folks that have devoted so much time and energy to improving packaging, pip, and pypi. You have made a huge difference over the past decade. I have personally benefited from your hard work and use these tools daily.

For those offering constructive comments and areas for improvement, I encourage you to think about small PRs or other ways that you can thoughtfully improve the docs and tools for future users. I get that packaging is frustrating at times. Together, we can improve it if we listen to each other with respect and gratitude.

3 Likes

Anna-Lena Popkes’s Venn diagram illustrates the huge growth that Python has had in the past decade. Ten years ago, data science and scientific programming were far less popular than today. The needs of these communities, especially the use of extensions, other languages, and specialized hardware, drove the creation of tools to meet their unique requirements.

As a 30-year-old language, Python has had to evolve and rely on the expertise of the community to thrive. Shared understanding and thoughtful discussions are good steps toward something better for users.

3 Likes

Most of my thoughts on @indygreg’s post have already been covered (it’s a very helpful user journey report illustrating both some general problems and a specific problem with “How do I port a setup.py based build process with CLI options to account for the build backend/frontend split when using declarative project metadata?”).

However one additional thing that occurred to me when reading it was that there’s no clear modern packaging equivalent to the Python 3 Q&A I maintained for a number of years during the Python 2 to 3 transition. That Q&A wasn’t authoritative (hence it being on my domains rather than PSF ones), but it was informed (and I tried to be clear on the differences between personal opinions and collective decisions). One of the biggest benefits of that doc is that I was able to give some insight into the “Why?” of various decisions without folks having to wade through entire PEPs and discussion threads, and without having to get broad consensus on publishing “the” answer to various questions rather than just my own answer. Even without official status, it ended up accumulating sufficient search ranking that folks either found it on their own, or received links to it as part of responses to their questions.

In a similar vein, most of the decade+ long transition in Python packaging falls under just a few broad categories (“improve security”, “improve reliability”, “prefer declarative configuration metadata to imperative command implementations”) with various practical barriers to making the transition entirely seamless, but from the outside the only readily visible piece is the progressive deprecation of the old imperative ways of doing things. The concrete reasons for specific changes are generally held either in GitHub threads or PEPs though, hence folks finding it hard to get clear answers to their entirely understandable “But why?” questions.

Alas, as @pf_moore and others have pointed out, it isn’t enough to realise “Hey, this would potentially be helpful”, as actually creating and maintaining such a doc requires a significant investment of time and energy.

4 Likes

Okay, so what I’m wondering at this point is: what are the key kinds of complexity found in these build processes? Specifically, what do people do with setup.py that’s difficult to do in pyproject.toml (or to migrate to a pyproject.toml-based approach), and that’s really valuable to the people doing it (rather than just being, e.g., some old legacy workaround that isn’t really needed any more)? Reading through the rest of the thread, I don’t get a clear picture of this. I do get a clear picture that many people are finding it difficult to do certain specific things, but not what those things are. For me, it’s only really ever involved turning some keyword arguments for the setup() call into corresponding entries in pyproject.toml. (And I was someone already accustomed to writing JSON by hand, so TOML seems trivial.) I do use Poetry, but I’m honestly considering dropping it and just using the built-in tools.
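To illustrate what I mean about turning setup() keyword arguments into pyproject.toml entries, for a simple pure-Python project the translation is mostly mechanical; something along these lines (names and versions made up):

# setup(name="demo", version="1.0", install_requires=["requests"], python_requires=">=3.8")
# becomes, in pyproject.toml:
[project]
name = "demo"
version = "1.0"
dependencies = ["requests"]
requires-python = ">=3.8"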

I honestly feel like we should start talking about getting people to “delete setup.py”. Of course, Setuptools is still the default backend, so it is not by any means deprecated. However, I can imagine a future in which it either eventually drops support for setup.py (and a bunch of legacy cruft is cleaned up), or it otherwise morphs into or is replaced by something that’s explicitly only a backend. Once it’s written, pyproject.toml is simpler, if only because it isn’t executable code used for configuration.

Did I misunderstand something? I know that distutils is being removed in 3.12 and that Setuptools is technically third-party, but I thought that pip wheel still exists, still sets up isolated build environments, and still provisions whatever version of Setuptools is necessary to function as a backend within that isolated build environment. Does that not, then, qualify as “a build frontend present in Python distributions by default”?

(Actually, I’m honestly not clear on why build exists in the first place.)


That said, I have some thoughts about organizing guide/tutorial content about pyproject.toml, which I think apply independently of whatever those issues might be.

Agreed 100%, but with the caveat that there is clearly more to talk about than just that use case.

The desire to “use setuptools” (i.e. specify it as a build backend) isn’t a problem AFAICT, and won’t be for some years yet. The problem is the desire to use setup.py. It’s exactly as Oscar says IMO.

I’m not comfortable with the idea that an officially blessed tutorial makes an arbitrary choice of tooling from among competing third-party options. The tutorial should IMO explain that a backend is being chosen (and briefly point out some common options), but then show how to choose the built-in, default option (i.e. setuptools, but still having it use pyproject.toml) and make it work (i.e., pip wheel, unless again I misunderstood something).

I think this observation is fair. It’s just, why not pick Setuptools to get started, and shuffle the discussion off to the side?

I think all these categories of users are important. I’m not convinced that they need separate guides. (The fact that the current guide can do what it does “without forcing a backend”, btw, makes me disinclined to agree that “choosing a backend is important”. Choosing Setuptools doesn’t feel like making a choice, for the most part. Although in the sketch below, I do propose describing it as such.)

Here’s how I think a unified explanation should be laid out, roughly:

For modern projects we don’t recommend using setup.py or setup.cfg any more. Instead, the metadata that used to go in setup.cfg, or as keyword parameters to the setup() call in setup.py, is now covered by pyproject.toml. If you’re starting fresh, you don’t have to worry about the history here. However, people maintaining older projects should be aware of these changes. Having code run at install time has numerous downsides [explain in detail]. [etc. etc. as necessary] If you’re in this situation, please read through in order to get a basic understanding of the new system; section X covering the detailed contents of pyproject.toml will also show how to migrate your old setup with minimal changes.

Here’s pyproject.toml. You use it to give metadata about the code that will be distributed (the [project] table), and to configure the tools that will prepare the code for distribution (the [build-system] table). Code can be distributed as an sdist, which is basically an archive of the source code, or as a wheel - the latter is necessary to handle compiled C extensions. We say that a wheel needs to be “built”, but for pure Python projects this generally just means bundling the Python source code together with some metadata. The advantage is that an installer can then directly copy files into place, since they’re now organized the way that they need to be within the site-packages directory of the target Python. [Ed.: Or something along these lines, basically just explain why wheels are important even for pure-Python, open-source projects]

Building a wheel generally involves having a frontend communicate with a backend. (Explain these terms in-line, in addition to linking the glossary, along with justifying why the split exists.) In modern Python, an up-to-date version of Setuptools will work fine as a backend, and Pip serves as a frontend. Some people prefer to use third-party tools to fill these roles - many of them provide their own frontend and backend, as well as offering other functionality such as uploading wheels to PyPI. (Pip does not do this; see the guide for Twine or the guide for manual upload if you want to use only native Python tools.) Skip to section Y if you’re interested in choosing a third-party frontend and/or backend.

[This is section X.] Here’s how you use the [build-system] table to tell Pip to use Setuptools as a backend. There’s also legacy support for using Setuptools if the table is omitted, but you shouldn’t rely on this because [explain why]. Here’s how you use the [project] table to give the necessary metadata. Only the name and version are strictly necessary, but others are important or useful because [explain why]. If you already have an old setup.cfg file lying around, here’s how to migrate it. If you already have an old setup.py file lying around, you might have to do some more complex analysis, but here are some hints on filling in the metadata from there.

After that, here’s how to use pip wheel in order to create the wheel. [Explain where the .whl file ends up, basic anatomy etc.]

[This is section Y. Now we can mention Flit, Hatchling, Poetry, PDM etc. ad nauseam.]

The idea is that people can be directed to skip over the parts that don’t apply to them, and there’s one overall canonical source of information. More specific guides could then be extracted from this, potentially (such as what @bryevdv outlined).
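As a concrete illustration of what section X would build up to, a minimal file for the default Setuptools backend might look roughly like this (the version bound and metadata are illustrative, not a recommendation):

[build-system]
requires = ["setuptools>=61"]
build-backend = "setuptools.build_meta"

[project]
name = "example-package"
version = "0.1.0"

followed by pip wheel . -w dist/ to produce the .whl file in dist/.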

2 Likes

Python isn’t the only language that has seen huge growth in the past decade. Nobody thought about running JS on the server in the 2000s, but Node.js is huge now. In the Node world, an equivalent diagram would be smaller and have less overlap between categories. A similar comparison could be done between Python and Rust, for example: there are fewer tools, and they have more capabilities. Not all solutions in the other ecosystems are perfect, or better than Python’s, but from an end-user perspective, having fewer choices is better.

I don’t really see much data-science/scientific-programming-specific tooling in the diagram by Anna-Lena Popkes. The only thing that would qualify IMO is conda. Many tools in the package-building group don’t support building extension modules.

5 Likes

We all hope that things improve in the future, but “continue to be improved” seems to operate on the assumption that things have been improving, which, as you can see, is far from a consensus opinion amongst downstream users.

(and, yes, part of this is because people had learnt to live with the pains of setuptools, and they don’t want to be exposed to new pains instead)

Right. Trying to find excuses (“it’s because of scientific computing”) doesn’t really help here. The proliferation of new backends/frontends does not seem to address novel use cases.

2 Likes

I don’t think there’s a proliferation of frontends - there’s pip, and there’s build. And build was created to stop every backend needing its own CLI. Other frontends are possible, certainly, but people mostly just use one of those.

For backends, we do have a lot, and I don’t honestly think it’s that helpful. PEP 517 was intended to allow other backends to exist[1], and it did that. But do we need so many backends that produce wheels from pure Python projects? Probably not. I don’t think every workflow tool needs to provide its own unique backend, for example. But that’s what happens when you open up to competition - people experiment. I hope that ultimately we end up with only a few backends - and in particular only one that is the obvious answer for new users writing their first pure Python package. But we’re in a transition[2] right now, and one downside of that is too much choice.


  1. It’s important to remember that at that time, setuptools was very different, and a lot of people wanted an alternative. ↩︎

  2. Yes, in the packaging ecosystem transitions take 5-10 years. We all wish they didn’t :slightly_frowning_face: ↩︎

7 Likes

As I remember it, a big factor in the original motivation for the frontend/backend split was that some scientific libraries wanted to be able to use alternatives to setuptools for building complex dependencies and extension modules. For example, both SciPy and NumPy have now migrated from distutils/setuptools to meson-python. I would like to migrate python-flint to meson-python as well, so that we can have a unified build system for managing and building both the underlying C libraries and also the extension modules.
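For what it’s worth, the backend switch itself is a small pyproject.toml change (this is my understanding of meson-python; the real work is writing the meson.build files):

[build-system]
requires = ["meson-python"]
build-backend = "mesonpy"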

Tools like meson-python rarely seem to be mentioned in all of these guides, though, and it seems that instead we have a proliferation of newer options that only handle the cases that are already well served by pip and setuptools.

7 Likes

Yes. The problem here is that a generic document is not the right place for genuinely complicated cases, and the simple cases are made confusing by the artificial complexity introduced by competing tools. So no-one wins.

@willingc pointed out the Diataxis framework as a way of structuring documentation to address this better. What we need is for people to work on a unified set of documentation based around this structure.

Does anyone want to form or be part of a packaging documentation working group?[1]


  1. I don’t, I’m rubbish at writing documentation… ↩︎

4 Likes