What do you consider pain points of today's build system?

This is a follow-up to our[1] previous survey (what do you want to see in tomorrow’s CPython build system?).

So, now for the related, but slightly different, question: what do you consider pain points of today's build system?

We’re not looking for solutions nor a comparison of build systems. One item per message; vote using hearts.

By today's build system, we mean the *nix build (GNU Autotools + Make), the Windows build, and the docs build. We will be collecting thoughts from Committers first, and then move this to Core Development after a few days.


  1. @zware and /me ↩︎

It is multi-step (./configure && make), and if you have to re-run ./configure, it's going to take a while.

5 Likes

Note: There is a workaround: manually enable a cache with ./configure --cache-file=../python-config.cache (...).
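For anyone who hasn't tried it, here is a minimal sketch of that workflow; the cache file path is just the example from the post above, and the first run is still slow since it has to populate the cache:

```
# First run is still slow: it populates the cache with the configure check results
./configure --cache-file=../python-config.cache

# Later runs reuse the cached results and finish much faster
./configure --cache-file=../python-config.cache
make
```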

2 Likes

I was not aware of this, although it highlights a second issue: cache invalidation is a challenge (builds randomly fail if something changes and it doesn't notice), and the solution is very frequently to do a from-scratch build.

2 Likes

Even with the cache it seems that rerunning ./configure is slow (it definitely is for me on macOS), and it's a pain to have to remember the options (I know it saves them in the Makefile, but they're hard to find). I would think that discovery of the different (in)abilities of the C compiler and configuration of the command-line flags representing user options (e.g. debug mode) could be done using different tools.
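For what it's worth, a couple of ways to dig the saved options back out of an already-configured tree (a sketch, assuming a standard Autoconf-generated config.status; not a fix for the underlying discoverability problem):

```
# Ask the generated config.status what options configure was run with
./config.status --config

# Or grep them out of the generated Makefile
grep '^CONFIG_ARGS' Makefile
```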

I am still bummed that pretty much any file I touch causes _bootstrap_python to need to be rebuilt, after which a whole slew of generated files are regenerated and then recompiled, plus everything that depends on them.
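When this bites me, I poke at it with plain GNU Make options to see what is being rebuilt and why; a sketch, nothing CPython-specific (Python/ceval.c is just an arbitrary example of a touched file):

```
# Dry run: list what make would rebuild after a touch, without actually building
touch Python/ceval.c
make -n | head -40

# Basic debug output: make reports which targets it considers out of date
make --debug=b 2>&1 | less
```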

4 Likes

Defaults to “whatever is installed on my machine” rather than defining a consistent set of dependencies that a CPython version should have. (Arguably this is more of a “what is CPython” question than a build system limitation, but the build system is where it plays out. Also worth noting that Windows and I believe WASM don’t have this issue, as they always get dependencies during build.)

11 Likes

Difficult to understand, maintain and evolve.

2 Likes

I’m channelling others in agreement a bit here. Having lived primarily in a Bazel-based build system for so long, I cringe any time I have to touch any Makefile or configure.ac stuff. We lack:

  1. An explicitly specified full list of dependencies of every build step.
  2. Explicitly declared specific outputs from each build step.
  3. The ability to understand the build as a directed acyclic graph.

The above effectively includes Steve’s point about “using things from the local system” instead of having a controlled set of inputs:

  • If something is to be used from the local system, that should at least be an explicitly specified, selectable choice within the build system rather than an implied default behavior (see the sketch after this list).
  • I do realize this runs counter to the grand unix tradition in which autoconf really saved the day in the '90s - every big-project corporate software development environment I’ve worked on or with in the past 22 years has gone to great pains not to depend on the system, even when autoconf- and make-based. That way you can get the same rebuild twice, years apart.
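To make that concrete, a sketch of what explicit selection can look like with flags CPython's configure already has today (flag names from memory; check ./configure --help before relying on them):

```
# Opt in to specific system-provided libraries explicitly,
# instead of "whatever happens to be installed" being picked up implicitly
./configure --with-system-expat --with-system-libmpdec
```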

Technically it is possible to do all of that in a Makefile. But in practice it is easy to get it wrong, and hard to visualize and understand the when, where, and why, because it requires careful discipline rather than the build system forcing you to be explicit and get it right.

That anyone would bring up magic --flags to enable magic caching or suggest installing ccache and putting it first in your PATH… actually highlights these points. Those aren’t developer-friendly. If they’re so important, they should be the default without anyone needing to configure or specify anything.
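For context, this is roughly the manual setup being criticized; a sketch, and the ccache shim directory varies by distro (/usr/lib/ccache is the Debian/Ubuntu location):

```
# Put the ccache compiler shims first on PATH so every compile goes through the cache
export PATH=/usr/lib/ccache:$PATH
./configure && make -j8
```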

Generalizing: localized caching schemes injected within individual steps atop a build system to avoid work are an indicator that information we should have had has been lost along the way, and that we are trying to make up for the lack of a full graph view by other, less efficient means. (I’m not saying it isn’t useful, just that its existence is a symptom of a larger problem.)

Thinking more based on what hoops I’ve trained myself to jump through:

  1. Untrustable incremental builds. I don’t trust that our build system today ever works right incrementally after I’ve switched branches or pulled in updates. I routinely do ‘rm -rf * ; …/cpython/configure ; make -j20’ in my build directory whenever I’ve made any change in the source tree other than my own local branch edits, as that reliably rebuilds. I get build failures in other scenarios if I don’t.
  2. Needing to specify build parallelism. (“what am I, a farmer?” meme) I shouldn’t need to know how many cores I have; that’s the job of the build tool. Parallelizing everything based on locally available resources should be the default, without a flag (see the sketch after this list).
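Both items in one sketch: the from-scratch cycle I fall back to, with the parallelism derived from the machine instead of hard-coded. The paths are placeholders, and nproc is Linux coreutils (macOS would be sysctl -n hw.ncpu):

```
# Placeholder locations for the source checkout and the build directory
SRC=~/src/cpython
cd ~/build/cpython && rm -rf *

# Reconfigure and rebuild from scratch, since incremental builds can't be
# trusted after switching branches or pulling in updates
"$SRC"/configure
make -j"$(nproc)"
```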

That the autoconf configure exists as its own giant serialized linear script step, instead of a pile of independent actions in their own dependency graph, is an example of the lack of good build system plumbing. Most of what it does could be done in parallel. And most of what it does is also almost constant on a given system - the build system should understand this and not need to rerun all of those actions all the time. The more your transitive dependencies are fully controlled, instead of being picked up from the local external environment outside of the build system’s purview to analyze, the more plausible it becomes for the build system to understand that and avoid rerunning such actions when nothing should have changed.

If you read this far, I’m sorry. I wrote too much. :stuck_out_tongue: have a consolation cookie! :cookie:

9 Likes

(Moved to Core Development)