When you kick the packaging hornet's nest on Twitter, the hornets seem to want an opinionated, KISS solution

As much as I can see how cathartic it can be to complain about packaging, it’s not clear to me that there’s anything new here. I think almost all of these things are on our radar already, and most of them were discussed at the Python Packaging mini-summit and have specific action items (see here).

With regard to the complexity of setuptools, I think that’s largely a documentation issue. Speaking mostly for myself (though I think the other setuptools maintainers would agree), what we’re looking to do is:

  1. Make it possible for 90-95% of people to specify their configuration in a declarative metadata file (currently that is setup.cfg, but we’re happy to also support pyproject.toml), with first-class support for more complicated workflows in setup.py.
  2. Deprecate and remove the more complicated parts of the setup.py workflow, and get setuptools out of the business of being a command line application.
  3. Move to workflows based on specifications, so that other build tools can be built to those specifications.

The first item on this list is essentially moving closer to how cargo works in Rust - almost everyone specifies their builds in the declarative Cargo.toml, but some small fraction have more complicated builds and have to use build.rs.
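For readers unfamiliar with the declarative form, here is roughly what item 1 looks like with setuptools’ declarative `setup.cfg` support (project name, version, and dependencies are made up for illustration):

```ini
[metadata]
name = example-project
version = 0.1.0
description = Example of declarative setuptools configuration

[options]
packages = find:
install_requires =
    requests
```

With this in place, `setup.py` can shrink to a two-line shim (`from setuptools import setup; setup()`), much like the `Cargo.toml`/`build.rs` split described above.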

That is probably not the right issue; Issue #1688 has the more substantive discussion. But yes, if someone is willing to do the work, we would almost certainly accept it.


I think this depends on whether we want to continue to keep common metadata requirements a per-tool thing, or whether we have any interest in trying to standardize common things. For instance, every project will have a name and version number, so why should every tool have its own way of specifying that? Maybe we want that so tools can pull from __version__ in some instances and not in others, but I don’t know if we want that much duplication of effort either.

Well, as much as I understand it. That isn’t in the PEP, so it’s nothing more than me stating, at Nick’s request, that people should consider the stable ABI to reduce their maintenance cost from a release-cadence perspective.

Now hopefully this is all settled and things can move forward, but me saying something in an email most definitely does not make something true. :wink:

Correct, but that doesn’t mean efforts haven’t petered out a bit, e.g. editable installs. So re-raising things to try to help kickstart them again doesn’t hurt.

Are we getting to the point that maybe we need to start some packaging WGs to do targeted problem solving and to help keep momentum going on the key issues? For instance, should there be an editable install WG to go off and try to figure out a proposal to bring back here? Same thing for universal build tool to see how feasible that would be (and I’m not suggesting pip here on purpose)? How about other stuff on the list from the summit? Basically my worry is we are all trying to help solve all the problems and that leads to all of us being spread thin, making it easier to lose track of things and not be heading towards resolutions.


I wonder if the long-time almost-there-ness of PEP 517 et al. isn’t just a sign that the limits of volunteer labor for that kind of task have been reached, and that we should find a way to pay someone(s) to bring it over the line.


I think it’s a bit of both this and what Brett said - everyone wants to have their say, so people’s attention gets spread too thinly. Add to that the fact that we’re all volunteers and there are significant limits on how much time people can spend on any of this (without burning out).

Groups targeted with working on individual problems, quite possibly involving some sort of funded resource, sounds like a good approach. But for funding to be an option, we’d need those targeted problems clearly defined - the Packaging WG notes are a good start here, but they probably make a lot more sense to the people who were present, if I’m honest, and could do with some tidying up and ongoing maintenance to track progress and/or changing situations. Maybe funding a project manager to co-ordinate planning and prioritisation of all the various work items would be a good solution, with the actual implementation work being handled by volunteer working groups (possibly assisted by further funded resources, if specialised expertise or even just additional manpower is needed).

IMO, the other problem with the “almost there” nature of PEP 517 is that in one sense it’s a solution without a problem - it deliberately looks at providing a standard solution that any frontend can use to work with any backend. But the reality is that the only frontend in serious use is pip, and the only major backend is setuptools (flit is a great example of another backend, but because it only targets simple, pure-python projects, it doesn’t help with more complex problems like compiler configuration or editable installs). So with the more complex problems, we keep hitting cases where we don’t have any experience to address the question of “what was wrong with the old way?” So (for example) pip gets pressure to allow people to opt out of PEP 517 features, rather than there being pressure to find a solution within PEP 517.

This is the case, but it doesn’t have to be. One can write a custom import hook that rebuilds C extensions on demand if they’ve changed, and this could be exposed as an editable install.
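A minimal sketch of such a hook, assuming a hypothetical mapping from module names to source/artifact paths and delegating the actual compile to `setup.py build_ext --inplace` (none of this is an existing setuptools API):

```python
import importlib.abc
import os
import subprocess
import sys


class RebuildingFinder(importlib.abc.MetaPathFinder):
    """Rebuild an extension in place when its source is newer than the
    compiled artifact, then let the normal import machinery load it.

    Hypothetical sketch: the name->paths mapping and the rebuild command
    are illustrative, not an existing setuptools feature.
    """

    def __init__(self, tracked):
        # Map module name -> (source_path, built_path)
        self.tracked = tracked

    def needs_rebuild(self, source_path, built_path):
        if not os.path.exists(built_path):
            return True
        return os.path.getmtime(source_path) > os.path.getmtime(built_path)

    def find_spec(self, fullname, path=None, target=None):
        entry = self.tracked.get(fullname)
        if entry and self.needs_rebuild(*entry):
            # Delegate the actual compilation to the build backend.
            subprocess.run(
                [sys.executable, "setup.py", "build_ext", "--inplace"],
                check=True,
            )
        # Always return None so the regular finders perform the import.
        return None


# Activating the hook (paths are made up):
# sys.meta_path.insert(0, RebuildingFinder({"mymod": ("mymod.c", "mymod.so")}))
```

Because `find_spec` returns `None`, the hook only triggers the rebuild side effect; the standard finders still do the actual import afterwards.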

Only when you modify extension code, not pure Python code. So my experience is a bit different: I have one command to type when I modify extension code, yes, but that’s the regular experience when modifying any kind of C code (including outside of the Python world) – modify C code, then recompile. When I only modify Python code, though, I don’t have anything to type.

I will add that some extensions can be slow to build, so not having to re-run pip install every time I modify pure Python code is really important.


FYI y’all – https://github.com/takluyver/flit/pull/260 – flit master now has support for src/ directories. :slight_smile:


I hate to pile on but my monkey brain forces me to stress a few things that are important from my perspective:

  • This would mean Python would get an unconditional compile step, that you’d have to remember. And if you forget it, things wouldn’t break, they’d just behave weirdly.
  • This workflow is effectively already enforced when you use tox and the reason why most people don’t use tox for their main feedback loop is simply that the installation step is much too slow.
  • Most Python packages have no extensions, and of those that do, many just vendor some kind of library (like argon2, rapidjson, uvloop…) that almost never changes.
  • Python packages aren’t just for packages you upload to PyPI. There are good reasons to make your apps packages too, even if you never install them beyond pip install -e . in dev and pip install . in prod. I really don’t want the auto-reloader of my web app to have to run pip install .
  • As a side-note: pip install . takes longer than a full compilation of most of my Go projects.

So yeah, there’s really no way around proper editable installs in certain contexts, sorry. :worried:


Apparently I expressed myself quite badly :slightly_smiling_face: Let me try to address the responses. My alternative proposal to -e . is most definitely not to force a compilation step on everyone, but to have a command that knows when to do what (and when not to) before running the actual command, like how you’d use go run or cargo run instead of executing the built binary manually. And that command would go into the hypothetical tool that contains those other commands we don’t want to stuff into pip.

Using pip install directly for this would most definitely be unacceptably slow, but that wouldn’t be necessary. Part of the reason pip install . is slow is exactly because it’s not made for the development workflow, but targets redistribution, so it builds all the needed intermediates and final redistributables (e.g. wheels) that are totally unneeded during development. For setuptools, the underlying compilers already handle incremental compilation for extension modules (I hope?), and I can’t think of any technical challenge in implementing a copy-if-source-is-newer logic to put both built and pure-Python files in the destination, if we know which files are built.
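That copy-if-source-is-newer step is small in itself; a sketch (the function name is mine, not an existing API):

```python
import os
import shutil


def copy_if_newer(src, dst):
    """Copy src to dst only when dst is missing or older than src.

    Sketch of the incremental sync step described above; a real build
    backend would also track exactly which files it built versus copied.
    """
    if not os.path.exists(dst) or os.path.getmtime(src) > os.path.getmtime(dst):
        os.makedirs(os.path.dirname(dst) or ".", exist_ok=True)
        # copy2 preserves mtime, so "equal mtimes" means "up to date"
        shutil.copy2(src, dst)
        return True
    return False
```

Run over the set of built and pure-Python files, this skips everything that hasn’t changed, which is most of the cost of re-running a full install.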

So to sum up, I guess the main point I’m trying to make is that instead of bolting more things onto pip so it sort-of-kind-of works as something it’s not designed to be (a development tool), it’d be better to leave it alone and actually build a good development tool, since we already have most of the needed pieces for the latter. And on the other hand (I believe this is known, but I haven’t seen it mentioned here), there are actually important pieces missing to make pip install -e . possible.

If you ask a compiler to compile, it will compile. It doesn’t try to be smart by figuring out whether the sources have changed. That’s the job of the build tool, e.g. setuptools or make or ninja.

Figuring out C/C++ dependencies (e.g. header files) automatically is not that easy. I doubt setuptools knows how to do that. And it’s worse for Cython; I don’t think any build system out there knows how to collect Cython dependencies automatically (Cython can depend on C header files, but also on Cython include files and modules!).

You are correct, sorry for messing this up :frowning: I think my point would still stand though; it is better to place this responsibility on build backends (setuptools, meson, etc.), since they know best what they’re building, and let them tell the frontend what they did (and/or what the frontend should do with the result).

Good point, I didn’t think of Cython at all. Some sort of an escape hatch would be needed so a user can explicitly skip build.

Yup, that discussion definitely got me rethinking a lot. Cargo’s custom build script seems to work pretty well and can be used as an inspiration. The script (analogous to our PEP 517 backend) is one Rust program that emits information to tell Cargo (the PEP 517 frontend), including:

  • When should a rebuild be triggered
  • What the build script’s result contributes to the build environment (not sure what the analogy would be for Python)
  • What the frontend should do with the result (extra linker flags; the analogy would be what files the frontend should copy)

Why an “escape hatch”? Why not just keep the historical behaviour of pip install -e .? It seems there is ideological opposition to it and very little concern for the practical issues outlined by @hynek and me above.

+1 to that. It reinforces my feeling that editable installs can be delegated entirely to backends.

With such an approach the scope of standardization of editable installs could be reduced:

  • so that frontends can be aware that an editable install occurred (for instance, today pip is totally unaware that a distribution was installed with, say, flit install --symlink and therefore pip list and pip freeze can’t output meaningful information)
  • so tools such as tox can invoke editable installs in a uniform way, to be able to provide test stack traces that point to the original source code

The historical behavior has two problems:

First, the historical behavior involves setuptools ignoring the frontend (pip) and just YOLO’ing over the user’s environment, so it’s really not obvious how to generalize it to support other build backends in a consistent way. Even if we keep the backend doing most of the work, we still have to figure out how the frontend tells the backend where to put the files (which needs to respect the user’s configuration passed to the build frontend), and we need to figure out how to support uninstalls. Ideally in a way that’s consistent and generic enough to allow new tools and features to be invented in the future.

Second, the historical semantics have some very sharp edges that can easily bite users. Everyone agrees that the functionality is super useful, but it would be even nicer if we could keep that functionality without requiring every user to understand all the intricacies of which edits require which kinds of rebuilds and track them in their head. That’s an impossible task for a beginner, and even for experts it’s a waste of mental energy that could be spent on more productive things. Tracking this kind of thing is what build systems are for.

Like, yeah, Cython dependencies are complicated. But surely it’s still easier to teach a computer how to figure them out than to teach every user to figure them out in their head. And more reliable, too. It would be nice to at least have the option of using smart build systems in the future.

And yeah, in the meantime there’s nothing stopping individual backends from offering setuptools-style editable installs. Flit supports them today. So it’s not like this is an emergency that has to be solved yesterday.

That uniform invocation method might not even be necessary if tox grows a way to configure what its current usedevelop option does.

To reiterate the point here, even if pip install -e . doesn’t work for you, you can still use python setup.py develop, or flit install --symlink as appropriate. This will of course annoy the people who insist that everything should be available as a pip command, which leads us right back to @steve.dower’s comment earlier:


I broke out the “single tool” discussion as best as I could to Developing a single tool for building/developing projects. I can’t split a second time, so the editable install discussion can stay here or we can start a new discussion if we need to start writing down the exact requirements for an editable install that e.g. .pth files don’t solve.
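For concreteness about what .pth files do (and don’t) solve: the core of what `setup.py develop` leaves behind is just a one-line .pth file in site-packages, which the site machinery appends to sys.path at interpreter startup. A sketch (the file name here is hypothetical):

```python
import os


def write_editable_pth(site_packages, project_name, source_dir):
    """Write a one-line .pth file pointing at source_dir.

    This mirrors the mechanism `setup.py develop` relies on: Python's
    site module adds each line of a .pth file found in site-packages to
    sys.path at startup. The "__editable__" file name is hypothetical.
    """
    pth_path = os.path.join(site_packages, "__editable__.%s.pth" % project_name)
    with open(pth_path, "w") as f:
        f.write(source_dir + "\n")
    return pth_path
```

This makes pure-Python edits visible immediately, but it does nothing about metadata, uninstalls, or rebuilding extensions, which is exactly the gap in requirements worth writing down.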

Thanks for doing that Brett! I was definitely feeling that discussion was mixing in with the original conversation here.

In case someone’s wondering – I’ll be AWOL from this discussion for another week FWIW, because it’s the last week of college before end-sems, which means lots of submissions.