PEP 517 Backend bootstrapping


(Donald Stufft) #121

I’ve been doing a lot of thinking about this, and talking to a few folks who have a lot of experience with this type of problem in a variety of build systems with 10-20+ years of history behind them, trying to figure out how they’ve solved this problem in the general case, and where our solutions meaningfully differed and why.

Historically we’ve had setuptools acting as the only build backend, and due to the nature of how setuptools worked, any sort of build code that existed beyond what setuptools itself provided had to be developed inside of setup.py, with minimal ability to use third-party libraries to make reusable code. This led to a ton of custom one-off code that was poorly tested and poorly factored, and that ended up being cargo-culted around from project to project, often being slowly modified as time went on, so there were tons of copies of this code that were all slightly different, like mutating strains of a virus spreading throughout the packages in the Python ecosystem.

We developed PEP 517 with those experiences in mind, and thus we designed it to counter them. We knew one-off code had been a huge problem for us, so we purposely made it difficult to go about adding this one-off code to a package ever again. Obviously we didn’t make it impossible, or intreehooks wouldn’t exist, but we purposely made it as difficult as we reasonably could.

In thinking about all of this, part of me feels like maybe we (myself included) swung our mental pendulums too hard in the opposite direction and went to the other extreme, which isn’t inherently better, it’s just different. There are legitimate use cases for one-off code that our current system isn’t handling nearly as well as it could be.

For instance, one such thing that this one-off code allows is novel composition of existing build systems that maybe only makes sense for a single package, or small wrappers over existing build systems that provide some level of customization that maybe didn’t make sense in the original build backend but does for this one particular package. Now obviously one answer to that is that they can spin this code out into a custom library and depend on it. However, I think that encouraging people to make single-use libraries on PyPI is perhaps making the experience worse for users, not better. Obviously general purpose backends absolutely should be distributed independently on PyPI, but I feel like things that are obviously tied to one specific package should not live there. We should encourage producing build backends as libraries as the default case, but provide tooling to support the in-tree case as well.

Obviously we didn’t completely block out support for this today, because the intreehooks package exists, but I think that with some fairly simple additions to the spec we can natively support it rather than requiring a sort of meta backend.

Now of course, one of the use cases for this generic facility to have an in-tree build backend would be to provide a way for backends that wish to build themselves to do so without requiring a fiddly meta-bootstrapping process on the end user’s side of the equation. They would simply use whatever mechanism we implemented, and provide an implementation of themselves through it that is good enough to produce a wheel and nothing more.
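
To make that concrete, here is a very rough sketch (not something from the thread) of what such a zero-dependency, “good enough to produce a wheel” backend could look like, using only the standard library. The project name, version, and file layout are placeholders; a real backend would derive its own metadata and would likely also implement build_sdist and the other hooks.

```python
# Hypothetical, minimal bootstrap backend: a single build_wheel hook
# built on the standard library.  NAME/VERSION and the package layout
# are placeholders for whatever the real project provides.
import base64
import hashlib
import os
import zipfile

NAME, VERSION = "examplepkg", "1.0"
DIST_INFO = "{}-{}.dist-info".format(NAME, VERSION)
METADATA = "Metadata-Version: 2.1\nName: {}\nVersion: {}\n".format(NAME, VERSION)
WHEEL = "Wheel-Version: 1.0\nGenerator: bootstrap\nRoot-Is-Purelib: true\nTag: py3-none-any\n"


def _record_line(arcname, data):
    # RECORD entries are "path,sha256=<urlsafe-b64 digest>,<size>".
    digest = base64.urlsafe_b64encode(hashlib.sha256(data).digest())
    return "{},sha256={},{}".format(arcname, digest.rstrip(b"=").decode("ascii"), len(data))


def build_wheel(wheel_directory, config_settings=None, metadata_directory=None):
    # PEP 517 runs hooks with the cwd set to the source tree root.
    wheel_name = "{}-{}-py3-none-any.whl".format(NAME, VERSION)
    record = []
    with zipfile.ZipFile(os.path.join(wheel_directory, wheel_name), "w") as zf:
        # Copy the package's own importable source into the wheel.
        for root, _, files in os.walk(NAME):
            for fname in files:
                path = os.path.join(root, fname)
                with open(path, "rb") as f:
                    data = f.read()
                arcname = path.replace(os.sep, "/")
                zf.writestr(arcname, data)
                record.append(_record_line(arcname, data))
        for arcname, data in [
            (DIST_INFO + "/METADATA", METADATA.encode("utf-8")),
            (DIST_INFO + "/WHEEL", WHEEL.encode("utf-8")),
        ]:
            zf.writestr(arcname, data)
            record.append(_record_line(arcname, data))
        record.append(DIST_INFO + "/RECORD,,")
        zf.writestr(DIST_INFO + "/RECORD", "\n".join(record) + "\n")
    # PEP 517 says build_wheel returns the basename of the wheel it built.
    return wheel_name
```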

In thinking through all of this, and talking it over with others, I think I’ve convinced myself that we should:

  • Add a python-path key to the build-system table in pyproject.toml, with a restriction that the path must be relative and must resolve to a location inside the directory that contains the pyproject.toml (a rough frontend-side sketch follows this list).
  • Update PEP 517 to state that build-backends that are not packaged using another build-backend SHOULD utilize the python-path key and ensure that they do not introduce a cyclic build graph (e.g. foo can’t build-depends on bar which then build-depends on foo).
    • A key thing here is we don’t prescribe how this should be done. I can think of two possible solutions:
      • Provide a bare bones self building hook with zero dependencies and add the path to that using the python-path key (recommended).
      • Bundle their build dependencies as part of their sdist, and use the python-path key to add all of those build dependencies, and point their build-backend key to the “normal” import that they would otherwise use.
    • It might make more sense to make this a MUST; the SHOULD means that if a project is truly against putting in any effort to support automatically building from “zero” they can just omit support for it completely and still be complying with this PEP. However, the case of just vendoring your dependencies is fairly simple to implement, and makes the experience a lot more consistent for end users, with fewer footguns and less chance of users randomly hitting either the frontend or the backend with reports saying “hey this didn’t work”.
    • We should probably mention that front ends SHOULD reject cyclic dependency graphs when building from source.
  • Update pip so that --no-binary :all: does not mean “disable PEP 517 and never use a wheel at all”, but rather means “don’t use any existing binaries, and produce a new wheel file locally to install from”.
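
To make the first bullet concrete, here is a hypothetical sketch of how a frontend could honor such a python-path key before importing the backend. The key name follows the proposal above (not any finalized spec), and load_backend() plus the pre-parsed build_system dict are assumptions made purely for this example; they are not part of pip or PEP 517.

```python
# Hypothetical frontend-side handling of the proposed python-path key.
# build_system is the already-parsed [build-system] table from
# pyproject.toml, e.g. {"build-backend": "bootstrap_backend",
#                       "python-path": "tools/bootstrap"}.
import importlib
import os
import sys


def load_backend(source_dir, build_system):
    extra = build_system.get("python-path")
    if extra is not None:
        source_abs = os.path.abspath(source_dir)
        resolved = os.path.abspath(os.path.join(source_abs, extra))
        # Enforce the restriction: the path must be relative and must
        # resolve inside the directory containing pyproject.toml.
        if os.path.isabs(extra) or os.path.commonpath([source_abs, resolved]) != source_abs:
            raise ValueError("python-path must resolve inside the source tree")
        sys.path.insert(0, resolved)

    # "module:object" syntax per PEP 517's build-backend key; for
    # simplicity only a single attribute level is handled here.
    module_name, _, attr = build_system["build-backend"].partition(":")
    backend = importlib.import_module(module_name)
    return getattr(backend, attr) if attr else backend
```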

I think that this solves not only the bootstrapping problem, but also brings our stance back to a more moderate position that makes it easier for projects that should have one-off build code to do that, while still generally encouraging people to package truly generic build code as libraries on PyPI. Since the “one-off build code” solution even takes the same shape programmatically as the “library from PyPI” case, it provides a much simpler path for someone to start off with a one-off build backend and then, if it ends up growing into something more generally useful, extract it out into its own library and distribute it with minimal changes.

This also allows backends to weigh the trade-offs of how they might implement these constraints. If a project wants to expend minimal effort and does not believe that building from source is particularly important, they can simply dump all of their build dependencies into a _vendor directory in their sdist, at the cost of a larger sdist (but then, hopefully most people are installing from wheels anyways). If a project is willing to put in more effort, they can implement a minimal build system (either by making their actual build system able to operate in a minimal-dependency mode, or by adding a small one-off one).
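
As a rough illustration of that vendoring route (again, not from the thread), the in-tree shim could be as small as the following; realbackend and the hooks re-exported here are placeholders for whatever the project’s normal backend actually provides.

```python
# Hypothetical in-tree shim for the "vendor your build dependencies"
# approach: the sdist ships a _vendor/ directory next to this module,
# the proposed python-path key points at the directory containing the
# shim, and the shim makes _vendor importable before re-exporting the
# real backend's PEP 517 hooks.  "realbackend" is a placeholder name.
import os
import sys

sys.path.insert(
    0, os.path.join(os.path.dirname(os.path.abspath(__file__)), "_vendor")
)

from realbackend import build_sdist, build_wheel  # noqa: E402
```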

Of course, if a build backend wants to side step this by just depending on another build backend to package themselves, that is completely reasonable as well.

I realize that this is a bit of a reversal from my earlier position, but that comes from talking to people with long-standing build systems, who pretty much universally said that sometimes it’s necessary to have some custom one-off logic specific to a package, and from realizing how crummy it would be if every project like that needed a custom build hook on PyPI. Once I accepted that idea, I realized we had two use cases where this feature would be useful, which led me to reverse my position and aim for the more general solution.


(Bernat Gabor) #122

@dstufft I’m in agreement, though I’m still worried that such cycle detection and another type of build dependency (aka self-bootstrapping) will make build frontends significantly more complicated, but I guess there’s no easy way around it.


(Chris Rose) #123

One approach you might take is to bootstrap the project-local build backend before augmenting the sys.path with the other build backends. However, that’s probably an artificial limit.

The thing is, we are discussing a really narrow edge case here; if a package needs this ability, the developers in question are going to be really, really motivated to get it right. I think you’re probably okay.


(Paul Moore) #124

@dstufft This sounds good, although I also have some concerns that cycle detection might be a significant issue in practice. Given that you described cycle detection as a SHOULD requirement, I don’t see that as a major issue, but I’d hope that people don’t take it as an indication that pip will do this (I’d prefer to punt on this for pip, and get the basics working there, before worrying about cycle detection).

I agree, though, that the backend requirement is probably better as a MUST. That way, by making it a hard requirement, any failures can be considered backend errors (and in practice, backends that fail that requirement may still be usable in the majority of situations). Whereas with a SHOULD, all frontends have to be prepared to handle backends that introduce cyclic dependencies, and we’re back where we started (all the frontend can do in practice is fail, and it’s not even clear it would be possible to do so gracefully).


(Donald Stufft) #125

Yea. Conceptually it should be fairly simple to do: maintain a list that acts as a stack of the names of whatever build dependencies we’re building; as we recurse deeper into the build-dependency graph, check if the build dependency we’re about to process already exists in the stack; if so, error; if not, append it to the end of the stack and continue. That may be significantly harder to do in practice, so I left it as a SHOULD, though I think in any case we’ll want to implement it at some point, because otherwise the failure case is crummy (I think it would basically just fork-bomb the system and/or infinitely loop on building dependencies). It’s also reasonable to make it a SHOULD because it exists entirely as a sanity check that the build dependency chain of the build-backend makes sense; as long as it does make sense it should be a no-op.
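
A minimal sketch of that stack-based check, with a made-up build_requires mapping standing in for whatever metadata a real frontend would pull out of each sdist:

```python
# Illustrative only: the build_requires data is hypothetical, and a
# real frontend would discover it from each sdist's pyproject.toml.
build_requires = {
    "foo": ["bar"],
    "bar": ["baz"],
    "baz": ["foo"],  # introduces the cycle foo -> bar -> baz -> foo
}


class BuildCycleError(Exception):
    pass


def check_build_chain(name, stack=()):
    if name in stack:
        raise BuildCycleError(" -> ".join(stack + (name,)))
    for dep in build_requires.get(name, []):
        check_build_chain(dep, stack + (name,))
    # ...at this point a frontend would build a wheel for `name`...


check_build_chain("foo")  # raises BuildCycleError: foo -> bar -> baz -> foo
```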

The place where I think it would be most helpful is cases where a build-backend has its own build dependencies, which notably isn’t disallowed under my proposal here; it’s only disallowed if it would introduce a cycle. Even if a build-backend does everything right, taking on build dependencies is a bit of a risk. For instance, flit build-depends on requests, docutils, pytoml, and zipfile36 (in some cases). That’s completely fine today, since all of those dependencies rely on setuptools so no cycle exists; however, if say requests decided to switch from using setuptools to using flit, then a cycle would be introduced, through no changes in flit, that would need to be dealt with.

The strategies I could come up with to handle that edge case are:

  • Have the frontend check for cycles and present a good error message.
    • I suspect that long term, frontends are going to want to do this in either case, regardless of what the PEP says, although they may punt on it at first, because the way the error would be expressed otherwise is kind of crummy and we should probably at least fail gracefully in the presence of a compliant backend.
  • Disallow build backends from having build-dependencies.
    • This neatly sidesteps the issue, since it’s impossible to have a cycle if there are no build-depends. However, there’s no real way we can enforce it while also keeping the mechanism itself general, so the frontend would still be exposed to the possibility of a cycle in the case of a backend declaring a build-dependency anyways.

One thing we may want to add is some language pointing out that if a build-backend has build dependencies, future updates to those dependencies may introduce a cycle, and that they will likely want to consider having minimal build-dependencies and coming up with a strategy to ensure that those build dependencies don’t accidentally introduce a cycle (pin versions, have an out-of-band agreement not to do that, etc.).


(Paul Moore) #126

I suspect that the best approach here would be to keep the normative language simple and precise, but then add a separate “discussion” section that explains all of the background and potential issues.


(Donald Stufft) #127

Yea, that seems reasonable to me. The only reason I thought to add it to the PEP at all is that it’s a sort of subtle issue that a newcomer trying to build a brand new backend might not realize.


(Paul Ganssle) #128

I have to say, this sounds like the worst of all possible worlds, as far as I can tell. It sounds like it’s saying “we should do all the things proposed in this thread”, with new obligations for both frontends and backends, plus it complicates the semantics of sys.path for backends, all to solve a problem that, so far, no one has a real concrete use case for that we care about. Not to mention it barely solves the problem. Chances are, setuptools will just start shipping a vendored wheel as part of the sdist, so all the “can’t build from external wheels” people will end up totally unable to accomplish their goal without actually patching the sdist, which is harder for them than simply using the clean, existing infrastructure for this.

Why do we need cycle detection if we have a mechanism for self-bootstrapping? Why do we need a convenient syntax when it’s not at all a problem to have the convenient semantics be relegated to an additional layer of “meta” backends to avoid the additional complexity in implementing a backend and a frontend?


(Paul Ganssle) #129

I’d like to clarify that a wheel is not a “binary blob”. An sdist is a zip file containing all the files you want to install, plus some build configuration to put it all in the right place. A (pure python) wheel file is a zip file containing all the files you want to install, already arranged in such a way that pip can put them in the right place. In many ways, a pure python wheel is more, not less auditable than an sdist.


(Paul Moore) #130

Maybe I’m missing something here, because I don’t see the significant differences between the options that you do. I’m assuming that your preferred solution is still (some variation of) having a named file in the project root as the bootstrap backend? If so, could you clarify exactly what the content of pyproject.toml and that bootstrap file would be for setuptools? And what you think would need to be different under @dstufft’s proposal? Whenever I try to do so, I can’t work out how things would be different under the various proposals (I keep getting bogged down with the setuptools-needs-wheel-which-needs-setuptools… cycle).


(Paul Ganssle) #131

Arguing about this for so long has me largely convinced that we should just expect “root” backends to be able to bootstrap from wheels, possibly with the stipulation that front-ends MAY decline to consider non-universal wheels when satisfying build dependencies. We can then leave it up to front-ends to decide how that works: they can use cycle detection to keep the wheel use to a minimum, satisfy every build dependency with a pure Python wheel, or do something more complicated.

If we can come up with some concrete use case that we actually want to support that this doesn’t work for, then yeah, you are right that adding an option to have a bootstrap backend located in a “well-known location” is my preferred fall-back, but if that’s the case then we should do one or the other, not both.


(Chris Rose) #132

It’s not exactly one, but in the case of any package that ships compiled extensions – I hope setuptools won’t ever be one, but nonetheless! – the difference is pretty moot.


(Thomas Kluyver) #133

I think this “nonetheless!” point is part of where the disagreement is. The build systems we’re thinking about are pure Python and there’s no obvious reason to introduce compiled extensions to them. So a wheel of these isn’t a ‘binary blob’.

Maybe there’s something we’re missing which would call for build systems with compiled extensions? But in the search for concrete use cases, arguing that you can’t use a wheel of setuptools because it’s kind-of maybe like a binary blob isn’t terribly compelling.


(Paul Ganssle) #134

If the extraordinary case of a compiled build backend were ever to become common, the intreehooks approach would still be valid - intreehooks bootstraps itself from a wheel and the compiled backend would use intreehooks (or a successor package optimized for compiled backends). For these edge cases where we have zero examples of someone actually needing it, I think it’s sufficient to make it possible even if it’s not convenient.

As mentioned above, I’m perfectly fine with the requirement that frontends may specify that you can’t use non-universal wheels.


(Paul Moore) #135

The thing I’ve just realised is that pre-PEP 517, --no-binary :all: does actually download wheels, because setuptools has a setup_requires of wheel, which is satisfied by easy_install, not by pip. So the discussion of requiring pure-source builds needs to look at the requirements from that perspective: it’s not something that users have ever had in the past (even if they thought they did…)


(Donald Stufft) #136

No. It’s roughly @njs’s original proposal with some additional verbiage that explicitly lays out the fundamental requirements of reality under that proposal.

I think that people are probably not using --no-binary :all: because they thought it would be fun, and thus I assume that they have a legitimate use case for doing so. I care about supporting those users; whether you do or not is your prerogative, but it’s certainly not a universal opinion that we don’t care about those use cases.

This is a digression, but honestly the wheel project (or at least, bdist_wheel) should probably be combined with the setuptools project at this point. To the best of what I can tell, looking at rdepends in a variety of other package systems and going from my gut, roughly the only consumer of wheel is setuptools. If not as a first-class part of setuptools, then vendored, like packaging and such are. It’s kind of silly that in 2019, with PEP 517, we’re keeping that particular relic of when wheel was brand new around.

We know that every time you install setuptools in PEP 517 you’re going to need to install wheel regardless, and if setuptools is the only consumer of the wheel library then breaking them apart isn’t buying us much.

In any case, I believe my proposal does solve the bulk of the current issue we’re having here and allows front ends to build a complete dependency set, including any build dependencies, “from source”. I’ve put “from source” in quotes here, because that roughly translates to “from an sdist”. There is nothing (and there has never been anything) that mandates that an sdist contains only “source”; you can ship real binary artifacts (e.g. a .so) in them and nothing will stop you. So like every other part of sdists, this PEP relies on convention to say that sdists should generally truly be “from source”, but provides flexibility for cases where that doesn’t make sense.

The “clean, existing infrastructure” is neither clean nor truly existing.

Telling someone who just wants to pip install --no-binary :all: that they need to externally figure out how to bootstrap some build dependencies and make them available to pip makes it harder for end users and less clean. In many cases they won’t even know beforehand what their build dependencies are going to be; they’re just going to have to run pip, look at the error, do research to figure out how to build that dependency, build it, then run pip again, rinse and repeat until they finally manage to get it all. It’s going to be incredibly frustrating and error prone for end users.

It’s only “clean” from the point of view of a build backend author, who, under your proposal, gets to externalize the “uncleanliness” cost onto their users rather than solve that problem themselves. Externalizing costs doesn’t get rid of them, it just shifts the burden onto someone else, and in this case would end up requiring duplicative effort that could trivially be centralized.

Most of these users do not have any existing infrastructure set up to use pip with. If they did they would presumably be using that infrastructure to build their wheels instead of directly using pip.

Did you read my post? I’ve already answered this question:

We could try to go down the second route there and disallow self-bootstrapping build backends from having any dependencies, but as I already said, there’s no real way we can enforce that while also keeping the mechanism general enough to handle other useful cases. I think that it would be better to take an opportunity where there is a useful, general solution that solves multiple problems and apply that, rather than applying different, specific solutions for each problem. But we could go the specific route and remove the need to check dependency cycles by just disallowing self-hosting build backends from having any dependencies.

I have to ask again, did you even read my post?

As I said, many build systems provide the ability to have some sort of in tree build system specifically to handle one off cases. This would be useful for us to prevent these projects from polluting PyPI with single use dependencies that nobody else would use.

I personally consider it good engineering practice to attempt to produce generic solutions when you have multiple problems that can reasonably be solved by the same mechanism. That means there is less mental overhead for users, because there are simply fewer concepts they have to be familiar with and fewer moving parts in the system as a whole. It’s already the case that a common complaint from Python’s packaging users is that there are too many tools that each do slightly different things, and in order to meaningfully participate they have to install half a dozen things just to get started, so combining mechanisms means that we avoid adding even more random one-off tools to the toolbox.

A wheel is effectively a binary blob in packaging terms. It’s a built artifact. Whether or not the built artifact contains a .so or not is immaterial.

I don’t think that’s particularly relevant. Users also weren’t having their configured index servers, proxies, etc. respected, because pip had no control over setup_requires. So any logic that implies that pip install --no-binary :all: doesn’t apply to build dependencies because of setup_requires would equally mean that we should feel free to just YOLO download wheels from PyPI for build dependencies even if the user has configured things otherwise.

Even if it made sense, it’s not true that they never had it in the past either. Setuptools itself didn’t support installing from wheels until 38.2.0, which was released in Nov of 2017. Pip has had support for installing wheels (and thus, for disabling the installing of wheels) since 1.4, which was released in July of 2013. So they’ve been building build dependencies from source, in the case of both sdists and wheels, for roughly twice as long as the period in which they couldn’t prevent setup_requires from installing wheels.

The fact that they started getting build dependencies installed from wheels came from a setuptools update to support wheels at all, with no changes in pip at all, so IMO the assumption pip made when implementing --no-binary :all: (that setuptools was only going to be downloading sdists) got invalidated, and pip should have been updated to work with that.


(Paul Moore) #137

What I was trying to say here was that there’s a whole load of nuances around --no-binary :all: and similar cases that we’re ignoring here, and probably most of the users of that flag are too. That’s not to say that they don’t have valid use cases, but simply that the majority of users may well not have thought everything through (the ones who have are likely the ones who have set up their own infrastructure).

I’m not saying that - I’m simply saying that it’s quite possible that there are bugs/limitations lurking in how --no-binary :all: currently works that our users apparently haven’t noticed (or aren’t worried about) yet (nobody has raised a pip issue saying “even though I have --no-binary :all: my build is still downloading a wheel for the wheel project!”). So trading our current set of not-completely-understood edge cases for a different set isn’t exactly the end of the world. However, ignoring design problems once we’ve noticed them isn’t particularly good :frowning:

The main point here for me is that @pganssle is looking at this from a setuptools perspective, where they can’t move from a pre-PEP 517 build to a PEP 517 one because they want to stop using setup_requires and PEP 517 doesn’t let them do so without opening up a whole can of worms around source-only builds. To me that explains his frustration around the way the discussion keeps (from his perspective) digressing.

I think this is a much more important point here (even though it’s tangential to @pganssle’s backend bootstrapping problem). We have direct feedback from someone who is involved in build systems, saying that having an “in-tree hook” escape hatch is a very valuable feature. @dstufft suggests that his conversations with other people who have experience in this area gave the same sort of feedback.

So I think there’s a strong case for having some sort of “in-tree hook” support in PEP 517. If that support also solves setuptools’ self-hosting problem, then that’s great. If it doesn’t, then we need to look at why not, and see if we can find a solution to that problem (ideally, by designing a single solution that resolves both problems, but in the worst case we can have 2 separate solutions).

What I don’t think any of us want is an impasse, where pip needs to continue supporting --no-binary :all:, but setuptools can’t provide a build system that works with that feature.


(Barry Warsaw) #138

But a wheel can contain binary blobs, in the form of DLLs when extension modules are mixed in. Those won’t ever be acceptable to some downstream consumers.


(Chris Rose) #139

This. Also, if wheels are a requirement at any point in the stack, then downstreams all need to audit them for binaries, which imposes a nontrivial cost on users of these packages, and it’ll be a cost that won’t be easily shared, because each auditor will have different issues to consider in the process.


(Barry Warsaw) #140

The issue is of course different if pip et al build to ephemeral or local wheels, because then the binary artifacts are built locally. That’s acceptable.