What should a lockfile PEP (a PEP 665 successor) look like?

In an environment, a user could be interested in all entry points but also, probably most importantly, in the interpreter with importable modules. An application is essentially an environment where the user is only interested in the entry points provided by the package providing the application. That is how I distinguish the two.

Good questions! In my view, a lockfile should aim to provide as much of the information needed to reproduce the environment as possible, on a best-effort basis. It is inevitable that it will be incomplete when

  1. certain system dependencies are needed that cannot be described using Python tooling;
  2. certain build flags are needed that also cannot be described with Python tooling;
  3. dependencies are needed that require authentication, and the user has no means to authenticate.

The first one is a general problem that I consider entirely out of scope. The second is only relevant for source builds; in my opinion it is similar to the first and also out of scope. I think these two items are also best left to non-Python tooling to begin with, though hopefully in time they can cooperate/integrate better. As for the third, I actually don’t know the current state of this in Python tooling, as I hardly use, say, pip myself these days. Auth should be done at a different level anyway. And about not having the credentials: well, that’s fine, no? It’s essentially the same as when someone yanks a release from PyPI.

That the existing lockfiles don’t by themselves lead to fully reproducible builds is, I think, acceptable, since the available Python tooling is lacking for that. Though maybe there is more in pipenv; I have not checked.

If you consider other languages/frameworks, it is actually the same: none of them lead to actually reproducible builds by themselves. Haskell’s Stack can, when using its Nix integration. That these tools by themselves cannot create a reproducible build/environment/application is not a blocker the way I see it; you simply need far more information for that. But most of these tools do provide a good foundation on top of which tools such as Nix and Guix (and I am sure there is more tooling out there) can, with minimal effort, add the additional information and make reproducible builds.

Of course that doesn’t help the average Python user. That is why I think it is relevant to also offer the PEP 665 style reproducible install and have that as the default for installers.

I’m personally fine with that starting out of scope; we can bring it in later if performance calls for it. I personally wouldn’t mind a url-hint key or something that suggests a place to check before trying to hit a package server to find the appropriate file, but I don’t think that’s necessary for a v1/MVP approach. If I remember the PEP 665 discussions correctly, Pradyun may disagree. :sweat_smile:

Maybe interestingly, making this out of scope and part of the CLI/API of the tool doing the install opens up an interesting solution to the “I won’t know the hash of the wheel” problem. If wheel files that don’t have a hash recorded are marked as “must originate from a trusted source,” and the tool provides a way to explicitly mark a source of wheel files as secure (e.g. package server on the intranet, known location in the container image), then controlling those endpoints becomes the security mechanism instead of the hash (alone). This could also tie into supporting sdists by only allowing them from trusted sources. I don’t know how much of the concerns people had with PEP 665’s security strictness this alleviates in the end, though.
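A minimal sketch of how an installer could apply such a “hash or trusted source” policy (all names here are hypothetical; this is not an existing feature of pip or any installer):

```python
from urllib.parse import urlparse

def check_artifact(url, recorded_hash, actual_hash, trusted_hosts):
    """Decide whether an artifact may be installed under a hypothetical
    'hash or trusted source' policy."""
    if recorded_hash is not None:
        # A recorded hash is authoritative, regardless of where the
        # file came from.
        return actual_hash == recorded_hash
    # No hash recorded: the file must come from an explicitly trusted
    # endpoint (e.g. a package server on the intranet).
    return urlparse(url).hostname in trusted_hosts

# Hash recorded and matching: allowed from anywhere.
assert check_artifact("https://pypi.org/p/pkg.whl", "abc", "abc", set())
# No hash: only allowed from a trusted host.
assert check_artifact("https://pypi.intra/p/pkg.whl", None, "xyz", {"pypi.intra"})
assert not check_artifact("https://evil.example/p/pkg.whl", None, "xyz", {"pypi.intra"})
```

The same gate would naturally extend to sdists: allow them only when `recorded_hash` is present or the origin is trusted.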

For me it’s being able to rely on file hashes and the benefits that brings. Otherwise if you say, “I want pkg 1.2.3,” then you are not sure what you’re going to end up with (the whole “a build on Wednesday can be different than a build on Tuesday” problem).

Hence I have assumed at this point that I am going to have to come up with a proof of concept to gather feedback.

Here’s my perspective as a user who cares about supply chain and “build” at a large company that uses Python.

The way I think about lock files and reproducibility is that I care about it from a practical “bill of materials” sense, rather than a “pure reproducibility” sense like Nix or byte-for-byte installation output.

Don’t get me wrong. I love the idea of pure byte-for-byte installation output, but that’s not my primary concern. If I wanted that, I could use Nix or a private wheelhouse for example.

What I’m really looking for is a way to “lock” and verify the full list of transitive “inputs” or “ingredients” from a list of direct dependencies.

So that I can then install using the same trusted set of “inputs” elsewhere, on another machine, by referencing the lockfile. To be clear: I have no expectation that the installation is guaranteed to succeed on the other machine (because an sdist etc. may have dependencies required on the host), or that the resulting output will be byte-for-byte equivalent.

In an ideal world, I would be able to produce such a lockfile across some set of platforms and OS. I think I would expect the python_version and implementation name to be fixed, but perhaps most flexible would be to provide the expected list of “targets” as some sort of input specification.


Thank you. That’s a very helpful (for me, at least) explanation of how people can want lockfiles while still being OK with using sdists. Would it be fair if I were to characterise this view as being about focusing on verified inputs rather than verified outputs (which is, in my understanding, what the people interested in reproducible installs want)?

To be clear, I’m not proposing or arguing for any particular type of lockfile here, I’m just trying to establish some sort of shared terminology and understanding so that we can meaningfully discuss proposals.


If I understand Greg correctly, he wants to have all transitive sdist dependencies. That’s exactly what I want as well. This allows third-party tools such as Nix to take it as input and, with an additional description of potential non-Python deps and options per package (which is out of scope for a lock file), makes it possible to create reproducible builds.

I’m actually fine with a mix of bdist and sdist, to be honest. I am not a purist in that sense (at least not atm). I’m a pragmatist here and will accept whatever PyPI offers in terms of packages, but I would like the inputs locked and hashed so that I can copy them to other managed machines in my organisation and have a reasonable chance of installing the same packages. The resulting installed packages would be “the same”, but only in terms of the names of the direct packages installed (and their transitive deps), not byte-for-byte equivalence.

There are packages that are bdist-only, for example, so I’m not looking for a pure “build from source” solution. If I wanted that, I’d probably use Nix. I don’t see any real risk that a pragmatic solution would block a “pure reproducibility” solution like the one nixpkgs aims for. Nix is very capable at squeezing out all non-determinism, so I’m not too worried that any PEP produced here would make it incompatible with the Nix philosophy, fwiw.


I read Greg as wanting to allow sdist dependencies, but he’d be fine with dependencies getting locked as wheels - either pure Python wheels, or a set of wheels covering whatever target platforms he needs. Either seems plausible, so it would be useful to have terminology to distinguish them - maybe “source-only” input-focused lockfiles would be a reasonable term for lockfiles that disallow wheels entirely?


Yes @pf_moore, your understanding of my statement is correct.


Right, hence I’d say we should lock the sdist when possible. If there isn’t one, then indeed we skip the sdist for that package.

poetry2nix, the tool that takes poetry lock files and makes them usable with Nix, uses sdists by default, but it is possible to override/fall back to wheels.

For information, here are the default overrides poetry2nix adds. Furthermore, because poetry does not record build systems, a mapping of build system overrides is added as well. Note these are version-independent, which is not correct and should be fixed.

So I suppose we could discuss then whether we want

  1. source-only input lock file
  2. wheel-only input lock file
  3. mixed input lock file

I suppose 2) could then be split further into

a. all-wheels input lock file which records all available wheels
b. minimal-wheels input lock file, which records the minimal amount of wheels needed, i.e. PEP 665.

What I am looking for is 3) with, to be clear, the requirement that the sdist is always added when available.

I think 3 is what this thread is discussing. 3 is a superset of 1 and 2 and supports both bdist and sdist.

2 is PEP 665, which failed to gain sufficient support for approval. 1 is limited because it excludes users of wheel-only packages.

It may already be possible to have pip “prefer sdist” for power users (e.g. via --no-binary), so as long as that isn’t somehow blocked by a solution, I think your requirements are probably safe.

So it sounds like we would both like:
Option 3: locked inputs (with the option to prioritise sdists).

The next desire from my side would be a mechanism to collect enough metadata about the inputs for a set of “platform specs” so that a lockfile can be produced for multiple “platform specs”. Similar to poetry, but maybe a bit more specific in that it only locks supported target platforms. I’m not trying to produce a universal lockfile that I could expect to drop anywhere. I guess I wouldn’t mind if it could work everywhere; I’m just unclear whether that’s possible. Poetry seems to try, but it’s slow and may have other issues. The reasoning behind this request is that developers often develop on Windows or macOS, but CI or production is Linux. Yes, not ideal, but it’s very common. It would be “nice” if a lockfile could be produced on Windows or macOS and have a reasonable shot of installing on Linux (again, not expecting a guarantee, because an sdist etc. may have dependencies on compilers on the host).


So… does a PEP 665 style lockfile that includes sdists and doesn’t pin their build dependencies in any way work for people?


Wheel + sdist, run-time only, is what poetry does right now. What I am aiming for is locking the (Python) build dependencies as well, as this is already a known issue with poetry. Hence, just PEP 665 + sdist is not sufficient IMO.

For me, there’s a gradation of expectations:

  • strongest: for a lockfile without sdists, installation on another machine with same arch/OS must succeed
  • should-work (:crossed_fingers:): for a lockfile without sdists (or python-only sdists), installation on a different arch/OS should succeed
  • no guarantee: lockfile with sdist – best effort, but essentially there are arbitrarily many failure modes during compilation due to divergent host toolchains compared to the machine where the lockfile was created

You mean just for the project, not sdists, right?

My expectation on the lockfile is:

Given three sets of package files (sdist/wheel) A, B, and LF,
where LF is a subset of A, and B is a superset of A,
the lockfile is sufficient information to recreate LF from B without reference to A

To expand:

  • I don’t care about abstract packages, only concrete files
  • If I want my sdist to be reproducible, I’ll build a wheel and reference that instead of the sdist
  • I need to be able to change the index out from under the lockfile and still get the right files [1]

I’m okay with generating different lockfiles for different platforms. Maybe the format can allow multiple lockfiles to be stored in a single file, but that sounds like a tooling issue and not a conceptual feature.

  1. or an error if I mucked up and pointed at indexes that don’t contain the right files ↩︎
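The relationship between the three sets can be illustrated with plain Python sets of filenames (the filenames are made up for illustration):

```python
# A: the files visible when the lockfile was generated.
A = {"pkg-1.2.3.tar.gz", "pkg-1.2.3-py3-none-any.whl", "dep-0.9-py3-none-any.whl"}
# LF: the subset the resolver selected and the lockfile records.
LF = {"pkg-1.2.3-py3-none-any.whl", "dep-0.9-py3-none-any.whl"}
# B: a later index; it may have grown, but must still contain A.
B = A | {"pkg-1.2.4-py3-none-any.whl"}

assert LF <= A <= B
# Recreating LF from B needs only the names recorded in the lockfile:
assert {f for f in B if f in LF} == LF
```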


That definition appealed to the mathematician in me :slight_smile: Unfortunately, though, I’m not sure how to relate it to the real world (just like the sorts of maths I like the best!)

I don’t understand the relevance of A in the above. If we just had LF and B, with LF a subset of B, and we said “the lockfile is sufficient information to recreate LF from B”, how would that be different? And what is B in this? Is it the index (that you want to be able to change)?

Also, when you say “I need to be able to change the index out from under the lockfile” does that mean changing B? Is this where A comes in? You can substitute any set B as long as it’s a superset of A?


We need a real set of files to generate a lockfile - without it, it’s just an abstract list of packages and constraints. So there’s a prior resolve step that uses those (let’s call them) “requirements” against the real files in A to select the subset LF (which is also a set of files).

So yes, A and B are basically indexes. Or some combination of multiple indexes. I don’t really care, and I don’t think it matters for the lockfile provided they can be interpreted as a set (no duplicates) of files (sdists or wheels).

It means (Python syntax) B is not A, but set(B) >= set(A).

The superset bit just means that if B is missing files that are in A, I don’t expect to be able to recreate LF. To be concrete, if the lockfile contains a list of filenames, and none of the files in B have the name, the lockfile is not valid in that context and I don’t expect to get the original set of packages back.

Edit: Also, the “without reference to A” means that I need enough information in the lockfile to guarantee that the recreated LF is identical to the original one. I’m not allowed to go download the hashes from A to compare to the ones in B - I have to bring them in the lockfile if I’m going to use them later. (Thinking here of air-gapped systems that will have no PyPI access)

My definition of a lock file is a list of package names + pinned versions/hashes such that a tool that does not have any resolution logic will get me the final environment. Ideally the tool could fetch the requested version from PyPI, maybe build, verify hashes, and complete the install.

No knowledge of dependency metadata should be required after the lockfile is produced. What counts as dependency metadata is a bit debatable, though. Is environment marker evaluation allowed for multi-platform support? I’m fine producing one lock file per platform/environment and having the lockfile consumer not even need marker support.
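A toy sketch of what “one static lockfile per platform, no marker support in the consumer” could look like on the producer side; the `matches` helper and the entries are invented (a real tool would evaluate proper PEP 508 markers, e.g. with `packaging.markers`):

```python
# Hypothetical producer-side filtering: markers are evaluated when the
# lockfile is generated, so the per-platform output is marker-free.
def matches(markers: dict, target: dict) -> bool:
    """True if every recorded marker equals the target environment's value."""
    return all(target.get(k) == v for k, v in markers.items())

candidates = [
    {"name": "colorama", "markers": {"sys_platform": "win32"}},
    {"name": "uvloop", "markers": {"sys_platform": "linux"}},
    {"name": "requests", "markers": {}},  # unconditional
]
linux = {"sys_platform": "linux", "python_version": "3.11"}

# The per-platform lockfile is just the filtered, static list; the
# consumer needs no resolution or marker logic at all.
linux_lock = [c["name"] for c in candidates if matches(c["markers"], linux)]
assert linux_lock == ["uvloop", "requests"]
```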

A similar definition: valid input to pip install --no-deps that would satisfy a later pip check, even though the lockfile consumer may not even have the ability to do that check.

I mainly care about fully pinned versions. On the sliding scale from no pins to byte-for-byte reproducibility, complete pins are enough in practice for me. The main motivation is stability and not accidentally upgrading a library without testing it. I’m sure there are cases where the same pin but a non-reproducible build caused a failure, but I haven’t experienced that enough to require it.


No, also for each and every sdist. Hence recursively, and why you end up locking multiple environments: your environment of interest, but also all the ephemeral build environments needed to get to your environment of interest. This also means the lock file can contain multiple versions of a package.
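One hypothetical shape for such a recursive lock (every field name below is invented, purely to illustrate the structure): each sdist entry carries its own locked build environment, which is also how two versions of the same package can coexist:

```python
# Invented structure, not a proposed schema: runtime entries may each
# carry a "build-environment" list, recursively.
lock = {
    "environment": [
        {
            "name": "numpy",
            "version": "1.26.4",
            "sdist": {"hash": "sha256:..."},  # placeholder digest
            "build-environment": [
                {"name": "setuptools", "version": "69.0.0"},
                {"name": "cython", "version": "3.0.8"},
            ],
        },
        # A build environment may pin a *different* version of a
        # package than the runtime environment does.
        {"name": "setuptools", "version": "68.2.0"},
    ],
}

versions = {(p["name"], p["version"]) for p in lock["environment"]} | {
    (b["name"], b["version"])
    for p in lock["environment"]
    for b in p.get("build-environment", [])
}
# The same package appears twice, at two versions:
assert ("setuptools", "68.2.0") in versions
assert ("setuptools", "69.0.0") in versions
```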

How do you know where to get that sdist? And would you still hash your wheel? The wheel build is typically not reproducible, so somebody would really need to use your specific wheel.

Yes. In Nix, we pass one or more URLs to a fetcher, along with a filename and a hash. If the file is already in the store, it is reused; otherwise it is fetched. It does not matter where it comes from, since the name and hash are constant. Thus changing the index is no problem. As long as we include the full URL of an artifact, I think we will be fine.
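The fetcher behaviour described above can be sketched in a few lines of Python (the store layout and the `download` callback are placeholders, not Nix’s actual implementation):

```python
import hashlib
from pathlib import Path

def fetch(urls, filename, expected_sha256, store, download):
    """Content-addressed fetch: reuse a stored file if its hash matches,
    otherwise try each URL in turn. Which index the URLs point at does
    not matter, because the name and hash are constant."""
    target = Path(store) / filename
    if target.exists():
        if hashlib.sha256(target.read_bytes()).hexdigest() == expected_sha256:
            return target  # already in the store, reuse it
    for url in urls:
        data = download(url)  # placeholder downloader callback
        if hashlib.sha256(data).hexdigest() == expected_sha256:
            target.write_bytes(data)
            return target
    raise LookupError(f"no source provided {filename} with the expected hash")
```

A second call with the same filename and hash never touches the network, which is why swapping indexes underneath the lockfile is harmless in this model.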

I want to emphasize that if you lock on wheels, you essentially create a lock file format that is unusable for people doing scientific computing (e.g. deploying an app that does computations/simulations/analyses), as they often want to build the packages themselves to get the desired performance.

OK. So basically, unless I’m still misunderstanding, a list of wheels and their hashes is a sufficient lockfile for you. You need the hashes to confirm that whatever source your installer is using (your set “B”) contains the same files as the original set “A” (i.e., that if the filename is the same, the content is as well).

But (ironically, given your comment that you don’t care about abstract packages) it feels a bit abstract.

To make it concrete, would a lockfile spec that simply said a lockfile was a JSON file containing a list of {"filename": xxx, "hash": yyy} objects, be sufficient for you? And all an installer like pip was required to do was locate (from whatever sources it had access to, not specified in the lockfile specification) the various referenced files, confirm they matched the given hashes, and then install those files without doing any dependency resolution. Fail with an error if any file isn’t accessible, or if the hash doesn’t match.
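As a sketch of that minimal contract, using exactly the {"filename": xxx, "hash": yyy} schema proposed above (the helper names are mine, and “install” is reduced to returning the verified list):

```python
import hashlib
import json

def install_from_lockfile(lockfile_text, locate):
    """Resolve-free install: look up each recorded file by name, check
    its hash, and fail hard on any missing file or mismatch."""
    verified = []
    for entry in json.loads(lockfile_text):
        data = locate(entry["filename"])  # placeholder source lookup
        if data is None:
            raise FileNotFoundError(entry["filename"])
        if hashlib.sha256(data).hexdigest() != entry["hash"]:
            raise ValueError(f"hash mismatch for {entry['filename']}")
        verified.append(entry["filename"])
    return verified  # a real installer would now unpack/install these

# Usage: the "source" is just any mapping from filename to bytes.
files = {"pkg-1.0-py3-none-any.whl": b"contents"}
lock = json.dumps([{
    "filename": "pkg-1.0-py3-none-any.whl",
    "hash": hashlib.sha256(b"contents").hexdigest(),
}])
assert install_from_lockfile(lock, files.get) == ["pkg-1.0-py3-none-any.whl"]
```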

… but if you don’t want sdist reproducibility, you’d be happy for sdists to appear in the lockfile? In that case, would you expect the lockfile to include build requirements (like setuptools) and if so, do you need the lockfile to specify that those build requirements (presumably exact versions of them?) need to be available? How does the lockfile indicate to the installer that they are only needed at install time, not in the final environment? It’s also worth noting that currently, there’s no build tool that lets you override a declared build requirement of (say) setuptools, with a more specific requirement such as a specific version from a lockfile. So locking build requirements will involve non-trivial tooling changes. And conversely, not locking build requirements will risk failing on air-gapped systems where someone forgets to make build requirements available.

TBH, I’m just reiterating the questions that got raised when PEP 665 was still considering allowing sdists. And they are the questions that caused that PEP to decide to only support wheels. So there’s nothing really new here, we’re still hitting the same problems:

  1. Allowing sdists in lockfiles triggers a bunch of hard questions about precisely how much reproducibility people actually care about, as well as how to lock build requirements (or simply provide them, if isolated systems where “what’s in the lockfile” is all that can be assumed to be available).
  2. Limiting lockfiles to wheels leaves too many use cases unsatisfied, and it’s not clear that what remains will give sufficient benefit to justify standardisation (in particular, it won’t actually stop people saying that they need a lockfile standard!)

There’s a third possibility, which no-one has explored yet as far as I’m aware: simply look at what people are actually doing right now and standardise that, in effect formalising the current de facto approach(es). So basically, writing a PEP that defines something with roughly the semantics of pip’s “fully pinned requirements file with hashes included”. That may still fail, because it’s too vague to be of any practical use, but trying to be precise isn’t faring much better right now :slightly_frowning_face:
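For concreteness, the de facto format referred to here is pip’s fully pinned requirements file with --hash options, installed with pip install --require-hashes -r requirements.txt. A minimal illustration of its per-line shape (the digest below is a placeholder, not a real one):

```python
import re

# One logical line of a fully pinned, hashed requirements file;
# the digest is an illustrative placeholder, not a real sha256.
line = "requests==2.31.0 --hash=sha256:" + "0" * 64

m = re.fullmatch(
    r"(?P<name>[\w.-]+)==(?P<version>[\w.]+) --hash=sha256:(?P<digest>[0-9a-f]{64})",
    line,
)
assert m is not None
assert m["name"] == "requests" and m["version"] == "2.31.0"
```

Real files allow multiple --hash options per requirement (one per acceptable artifact), which is part of what any formalisation would need to pin down.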
