Sdists for pure-Python projects

Yep yep yep!

The question for me is… Do we even want to recommend that? If we expect that redistribution should start from the source/VCS, I think we’d make the opposite recommendation, to omit these data from the sdist.

1 Like

Is this not approximately the same as identifying how to build a wheel?

Basically to run the tests you need to:

  1. Build and install the project.
  2. Install some other dependencies for running the tests.
  3. Run some command that runs the tests.

Step 1 is already covered by PEP 517, whose specification essentially mirrors steps 2 and 3: the dependencies and command in PEP 517 are for building the wheel, whereas here they would be for running the tests.
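To make the analogy concrete, here is a minimal sketch of what such an interface could look like, modelled on PEP 517’s get_requires_for_build_wheel/build_wheel hooks. The hook names and signatures are invented for illustration; nothing like them is currently standardised:

```python
# Hypothetical "test hook" interface, by analogy with PEP 517.
# Neither hook name is standardised; this only illustrates the shape
# such a standard could take.
import subprocess
import sys


def get_requires_for_running_tests(config_settings=None):
    """Analogue of get_requires_for_build_wheel: the dependencies
    (beyond the built and installed project) needed to run the tests."""
    return ["pytest>=7"]


def run_tests(config_settings=None):
    """Analogue of build_wheel: run the test suite in the current
    environment, returning 0 on success."""
    return subprocess.call([sys.executable, "-m", "pytest"])
```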

Is running tests or building docs much harder to standardise than how to build a wheel?

Tools like nox, hatch, etc. are at a different level here, because they are for running things in different environments and for extra checks like linters that are not really needed by downstream consumers of the sdist. Of course, if you had standardised hooks to run the tests or build the docs, then there is no reason why a tool like nox or hatch could not make use of them, just as they might with the PEP 517 hooks.

One qualitative difference I would note between running tests and building the docs, as compared to building a wheel, is that the consumer is much more likely to be a downstream packager or someone doing things somewhat manually. That means I think it is a little more acceptable for things not to be fully automatic from pyproject.toml; e.g. we can assume that the packager might read some instructions and add a bit of package-specific configuration within their own metadata.

1 Like

I don’t know. I’m not even sure it’s something where “we should have a council, then we’d make decisions like this” would help. There’s no consensus, and so any decision is essentially arbitrary. If someone spent time investigating what the use cases were for sdists (beyond the basic one of being a fallback distribution method for when a wheel doesn’t exist) then that would provide more facts, but that’s a bunch of work that no-one’s actually done yet.

If you want an arbitrary decision, here’s mine:

  1. sdists MUST contain everything needed to build a wheel for the project on all supported platforms.
  2. sdists SHOULD contain everything needed to run the project test suite locally. This includes all test code and data, and a machine-readable list of libraries needed to run the test suite. This list does not have to be in a portable format - including it in the configuration of a task runner like tox or nox is acceptable (although something like a requirements file is better).
  3. sdists MAY include everything needed to build the project documentation, on the same basis as the test infrastructure above. However, it is acceptable for projects to include tests but not documentation, if they choose.
  4. sdists SHOULD NOT include tool/service configuration like CI configuration or scripts, VCS control files, or editor/development tool configuration files. Consumers wanting this sort of information should work from a VCS checkout or snapshot.

To repeat, this is nothing more than my personal, entirely arbitrary, view on what a recommendation for “what goes in a sdist” should be. If someone likes it and wants to write it up as a recommendation in the packaging guide, I’d be fine with that. If people disagree, that’s also fine - I’m not going to try to defend this or persuade people it’s “right”.

Source code is typically stored in a src directory these days. I’m not sure if the conventions for tests and docs are as clear-cut - tests and docs directories might work, but test runners and doc builders can be configured in tox.ini, noxfile.py, and possibly various requirements files. Finding all of those is potentially messy. But yes, I used the wrong word, I didn’t mean “standards” so much as “conventions”. And you may well be right, we might now be in a position where there’s enough consistency to make it practical to have some sensible default assumptions (looking back, it’s nearly 10 years since I looked into this in any real detail, so yeah, my instincts are probably badly out of date!)

I agree. I don’t think we want to be dealing with standard hooks to run tests or build docs. The main thing is to include the test or documentation sources. But whenever I’ve seen this come up, packagers usually do want some level of automation, so including either a machine-readable list of dependencies (a requirements file, for example) or the config for the project’s supported task runner (tox/nox configuration files) is a reasonable thing to expect.
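As a minimal sketch of what “machine-readable” could mean in practice (one option among several; the project name and extra name here are illustrative): if the test dependencies are declared as a test extra, they end up in the standard metadata, where a packager can read them with the standard library:

```python
# Read test dependencies declared as a "test" extra from an installed
# project's metadata. "example-project" is illustrative; requires
# Python 3.8+ for importlib.metadata.
from importlib.metadata import requires

test_deps = [
    req for req in (requires("example-project") or [])
    if 'extra == "test"' in req
]
print(test_deps)  # e.g. ['pytest>=7; extra == "test"']
```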

There’s nothing wrong with leaving some of this to the project maintainers, either. Some principles and conventions, plus helpful defaults from build backends, are plenty IMO[1]. Manually adding something like extra_sdist_files = ["tox.ini"] isn’t an unreasonable thing to ask.


  1. Plus tutorials that provide a suggested structure that follows those principles, so that beginners have something to start from. ↩︎

4 Likes

PEP 517 provides build hooks for building a wheel without caring whether you put the code under src or somewhere else. It’s the build backend’s job to figure out where the code is when building a wheel. A “test backend” or “docs backend” could likewise be responsible for reading config files and knowing where any necessary files are stored, as needed for running tests or building the docs.

I don’t think that the packagers should be running the same things as the project’s nox files. I have a nox file, and it sets up all the environments that I want to test in, but the packager only needs to test things in their environment. If they do need to test multiple environments (e.g. different Python versions), then I would expect them to have their own infrastructure to do that, and that their list of environments might not be the same as mine. A hook to run the tests downstream should just mean “run tests in this environment”, just like PEP 517 specifies “build a wheel for this environment”. Also, the tests involved should potentially be more limited than those that are used by the project itself.

1 Like

I’ll say that this aligns with my personal viewpoint: from an sdist and some developer documentation I can build a wheel, run the test suite to verify the wheel build was successful, and build the docs so I know how to use what I just built. Everything else that’s related to developing the distribution should stay in the project repository.

1 Like

I would like to echo Paul’s preference that this should not be a PEP. When we talk about MUST, SHOULD, etc., we are talking about what the user configures. Build backends cannot do much here unless they hardcode specific things that may change over time based on what the ecosystem uses.

Please do not prescribe what build backends do here.

I would strongly recommend we change this (or add an additional SHOULD) to: a source distribution must contain everything required to build a wheel without network access.

1 Like

I’m assuming this presupposes that the build back-end is already installed, as well as any dependencies it may need to perform the build?

Yes exactly. For example, I maintain bindings for a C library and the source distribution ships the downloaded source files for later compilation.

1 Like

I’m a little confused here, because there isn’t any guarantee that the wheel contents have anything to do with the contents of a supposedly corresponding GitHub repository, either.

If you need to build something from source and you have a security need to inspect the source, then you’ll have to inspect whatever it is you’re actually about to compile. I don’t understand how being assured that there is an identical copy on GitHub makes that any easier.


Anyway, the main thing I want to say here is that all this time later, it still strikes me as strange that we have two totally different formats that are both fundamentally an archive of code with metadata, that record functionally the same metadata, in different formats.

And that even when the code is pure Python, people are asked to upload both the sdist and the wheel, to serve different audiences, except that the sdist will be used as a fallback for the wheel (and not the other way around even though it should be perfectly technically possible), except that there are lots of disadvantages to using that fallback.

And that even when the code is pure Python, in order to install an sdist Pip needs to set up an isolated environment just so it can “build” a wheel by invoking a whole bunch of Setuptools machinery, which in the common case will effectively do nothing useful except pack up the same source tree again with different metadata (and without pyproject.toml or setup.py), just so that Pip can unpack it again following the standard rules for wheels.

And that even when the code is pure Python, the sdist will be expected to include either a setup.py or pyproject.toml (or both), even though there is fundamentally nothing left to “set up”. Even the dependencies will have already been written into the PKG-INFO of the sdist (roughly equivalent to the wheel’s METADATA, I think) - yet some backends will even generate a stub setup.py for an sdist when the project source only uses pyproject.toml.

As far as I’m aware, a wheel’s metadata records everything that an sdist’s metadata would need to, and the only things that fundamentally prevent someone from manually downloading and unpacking a wheel and then treating it like an sdist are a) the assumption, built into the wheel concept, that there’s no more “building” to do and b) the absence of setup.py and pyproject.toml.
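To check that claim on a concrete project, one can diff the two metadata files directly; a minimal stdlib sketch, with illustrative archive names:

```python
# Compare an sdist's PKG-INFO with the matching wheel's METADATA for a
# pure-Python project. Archive and path names are illustrative.
import tarfile
import zipfile

with tarfile.open("example-1.0.tar.gz") as sdist:
    pkg_info = sdist.extractfile("example-1.0/PKG-INFO").read().decode()

with zipfile.ZipFile("example-1.0-py3-none-any.whl") as wheel:
    metadata = wheel.read("example-1.0.dist-info/METADATA").decode()

# Lines present in one file but not the other; for a pure-Python
# project the core fields are typically identical.
print(sorted(set(pkg_info.splitlines()) ^ set(metadata.splitlines())))
```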

So here’s a strawman proposal for getting rid of separate sdists entirely:

  • There is one distributable artifact format: wheel.

  • When a wheel is downloaded, it’s unpacked into a temporary folder, instead of directly into its home in site-packages.

  • If the wheel has a none-any tag, it’s allowed to contain pyproject.toml and/or setup.py at top level.

  • If it doesn’t have that tag, or doesn’t have either pyproject.toml or setup.py, it’s an ordinary wheel and the next step can be skipped.

  • If it does have those things, the installer then sets up an isolated build environment around the temporary folder, and invokes a new hook from the build backend - as it now recognizes that it has unpacked the equivalent of an sdist. The contract is that instead of doing its own build isolation or zipping up a wheel afterwards, the backend is expected only to invoke whatever compilers are needed and arrange the temporary folder as if it were an unpacked wheel.

    • For projects that have to compile extensions, it could go through the existing Setuptools machinery with setup.py, or maybe in the future we have something better than that.

    • For pure-Python projects (maybe the build backend that was specified in pyproject.toml sees the absence of a setup.py and draws that inference), it could just delete the docs, tests and pyproject.toml itself.

  • Now the installer can follow the original “spreading” logic to put files into site-packages etc. the way it currently would for a wheel.

Am I missing a reason this is unworkable? I wouldn’t be surprised if I got some minor details wrong, but if there’s a conceptual problem with this approach then I definitely want to understand it.
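For clarity, here is the decision flow of the strawman as a minimal sketch. Every name in it is invented; this is the proposal’s control flow, not a real installer API:

```python
# Strawman installer flow: one artifact format (wheel), with an extra
# in-place build step for "sdist-like" wheels. All names are invented.
from pathlib import Path


def backend_prepare_in_place(tree: Path) -> None:
    """Hypothetical new backend hook: run any compilers needed and
    arrange `tree` as if it were an unpacked ordinary wheel."""


def spread_into_site_packages(tree: Path) -> None:
    """Existing installer logic: copy files into site-packages etc."""


def install(unpacked: Path, wheel_tag: str) -> None:
    sdist_like = wheel_tag.endswith("none-any") and (
        (unpacked / "pyproject.toml").exists()
        or (unpacked / "setup.py").exists()
    )
    if sdist_like:
        # Set up an isolated build environment around `unpacked`
        # (elided), then ask the backend to build in place.
        backend_prepare_in_place(unpacked)
    # The original "spreading" logic, as for any wheel today.
    spread_into_site_packages(unpacked)
```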

e: If this works, it would also placate my annoyance at the fact that Pip can build wheels but not sdists. Of course, it only includes the wheel-building functionality because it needs to install sdists indirectly via making a wheel; but still.

I think that his meaning was the other way around – if you expect the sdist to contain the same files from the relevant VCS repo, someone could give you a nasty surprise.
This is part of why, if I’ve followed their discussions correctly, Arch is moving to prefer repos – they view it as more secure in many cases, as the contents are more transparently knowable. Absent a good solution for signing (GPG is out and known to be poorly used, sigstore may be ascendant?), I think we can understand that preference.

Given that distros may prefer repos as sources of truth, I wouldn’t really be against dropping sdists… Except “why?” In the vast majority of cases, they represent 0 burden. Is it not a better goal to make publishing an sdist “really optional” – whatever that means in terms of tooling being graceful about their absence, but also community expectations and norms?

While I would prefer to drop sdists, my question is really why the repo should work any better as a source of truth than the sdist does. No matter how the source is provisioned to me, if I want to be sure it hasn’t been compromised before I hit return on the build command, I need to look at the files that said command will actually use.

I admit that I don’t fully follow the security rationale. It seems to me that it is equally hard to carefully inspect a repo vs the sdist. Perhaps the thought is that if it’s a community maintained project, we can hope that the maintainers would notice if something untoward were added to the repo. I’m not super convinced that there’s any security benefit to using a repo, but certainly there’s no loss.

But the reason to prefer the repo in order to be able to run tests makes plenty of sense. I’m not sure that including my tox.ini helps very much in this case, but including my test directory does. And tox/nox/CI configs are probably useful if it’s not obvious how to test the package.

1 Like

I just wanted to make it clear that this isn’t universal. For Bokeh we pre-build BokehJS, and manually insert an already-built BokehJS into the sdist. Building an sdist will not build BokehJS from its TypeScript source. [1] The sdist, for us, is for “building” the Python package, only.


  1. We do this because we also publish a canonical BokehJS to NPM and a public CDN, and the version included in a package install must match the published version exactly. Why must it match? It must match because I’ll quit open source entirely before exposing myself to a possible support burden resulting from subtly different versions floating around after being created by downstream (re-)packagers with whatever different JS toolchain versions they happen to have used. ↩︎

3 Likes

They can lead to surprising failure modes for end users when there isn’t a compatible wheel and the user does not have the required build tools installed. If the intent of a library is to provide prebuilt wheels, the library author may be better off not providing sdists, and instead directing users that can’t install it to report their system details (so the author can expand the wheel-building matrix) and to install from the repo (generally speaking, users with build tools installed will be capable of doing this). On that note, the internal package index at my work doesn’t have sdists, even for packages mirrored from elsewhere that provide one. If we need a package on a machine that isn’t supported, we build the wheel for that case once.

I’m not entirely sympathetic to those redistributing a library using sdists since there is no guarantee of what can or should go in them, it seems more appropriate for them to be using the actual source (repo) and preferably to actually work with the library author if they need any additional things included or documented about the build system. (such as implicit dependencies included for them by choice of build system that might not be universal)

The security argument is not entirely a red herring, but it leans in favor of repos and only at the level of social goodwill and who you place trust in. If you have not reviewed the source, you can only trust it to the extent you trust the maintainers and their practices. It doesn’t really matter how the source is provided to you for this. Still, if you have a level of willingness to have social trust for a well-known project and its committers, then the repo provides more information about when, how, and by whom different things were introduced. It’s a matter of risk profile, who you trust, and how much trust you are willing to extend, but most people do have a line somewhere where they are placing a level of faith in the community, and a reliance on community policing for suspicious things added to high profile projects.

Without a level of trust extended, the repo lets you start from a commit that has already been reviewed for security and review only what has changed, which is less expensive than freshly reviewing the entire project each time.

3 Likes

Count this as another vote for “--only-binary by default”, then? :slight_smile:

Related, and not sure if I mentioned it before, but I don’t really like the way that pyproject.toml interacts with the sdist model, in that it pushes the dev’s choice of backend onto the user. I can imagine why that might be a necessary evil for some large projects with native dependencies. But I imagine a scenario where someone with a pure Python project somehow overlooks uploading the wheel, and then the user is expected to have (or Pip installs) some heavyweight backend (that the dev had only chosen in order to get some nice command-line dev shortcuts) just so that the pure-Python code can be packed into a wheel and unpacked again.

To be clear, the idea here is just that the repo would have more eyes on it?

1 Like

The choice of backend is invisible to most users. Build frontends (build, etc.) take care of creating an environment, fetching the build dependencies, installing them, and then building a wheel. The only time a build backend leaks to the user is for packages that require some compilation (extensions). That’s because a user might want to pass extra compiler flags and other similar things. And when that happens, well, the user really has to know what options to pass via pip install -C or build -C. Each backend has different options.
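For concreteness, this is roughly all a frontend needs to read from the project before any of that happens; the values shown in the comments are illustrative:

```python
# What a build frontend learns from pyproject.toml before creating the
# isolated build environment. tomllib is stdlib from Python 3.11.
import tomllib

with open("pyproject.toml", "rb") as f:
    build_system = tomllib.load(f).get("build-system", {})

# e.g. ["poetry-core>=1.0.0"]: installed into the isolated environment
print(build_system.get("requires", []))
# e.g. "poetry.core.masonry.api": the PEP 517 hooks the frontend calls
print(build_system.get("build-backend", ""))
```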

Sure, but my complaint is that this “transparent” fetch could be a lot of overhead. For years I used Poetry for small projects mainly because my limited research suggested it would be the easiest way to make a clean break with pyproject.toml etc. Poetry itself is many times as much code as the largest project I released this way, and the pyproject.toml AFAICT specifies a dependency on the entire thing, not some sub-package that offers only the backend.

Good thing I uploaded wheels even though I didn’t understand why at the time (thanks to @pradyunsg for the excellent, quick reference on that btw).

This isn’t true. For an sdist, I need a Tar extractor (not available natively on Windows), then some way to go through each file and read the source.

For a repo, I could either clone it and load it up in my IDE with code inspections, or browse the files (potentially on my phone!) via GitHub etc. In addition, the repo is guaranteed to be in a format that some developer finds optimal to interact with.
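Concretely, even the scripted version of the sdist route means something like the following before any actual reading starts (a stdlib sketch; the archive name is illustrative):

```python
# Walk a pure-Python sdist and read each source file, stdlib only.
# The archive name is illustrative.
import tarfile

with tarfile.open("example-1.0.tar.gz") as sdist:
    for member in sdist.getmembers():
        if member.isfile() and member.name.endswith(".py"):
            source = sdist.extractfile(member).read().decode("utf-8")
            print(member.name, len(source.splitlines()), "lines")
```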

Poetry has poetry-core, which is lightweight and only includes the build backend AFAIK. Hatch has hatchling, PDM has pdm-backend. These, I think, only include what’s required to build packages.

If there is overhead (and there is when it comes to sdists), it’s actually in creating the environment and building the wheel. Though nowadays most projects publish wheels.

I’m not sure. My gut tells me there’s a more nuanced way that would be better, such as pip, on first interaction, asking the user to configure their defaults, or asking questions about how they want to use pip in order to set those defaults. I do think that --only-binary is a better default if that’s the only option for change in this regard, for the same reasons why I view sdists as less useful in a modern packaging setting.

That’s an aspect, but not really a direction I care to argue in, as in theory every dependency everyone uses should have at least one pair of eyes on it (their own), no matter where it comes from. Practically speaking, the fact that a reliable chain of history allows review to be done incrementally from version to version is a much more valuable point from that section.