Thanks. I’m happy to know that uv allows it, although my point was that regardless of what lockers or workflow tools currently choose to do, I think the file format needs to support it. Users can choose a different workflow tool, or even build their own workflow using lower-level utilities, but we don’t want them to have to define their own lockfile format as well.
My preference is to favor non-conflicting dependencies for all scenarios in a single lockfile (the uv default) and that if there is a strong reason where I need conflicting dependencies for a particular scenario, that I could “escape” into a distinct lockfile.
I think Brett suggested we don’t focus on single or multiple files though, so I’ll try to avoid that topic.
I think that means my preference is to favor a multi-scenario lock approach, because it is a superset of the single-scenario approach. E.g. if I can lock for all scenarios by default, but with escape hatches for unique scenarios, then I’m happiest.
I’ve been thinking some more about the “many scenarios per lock” case, and I’ve come to the conclusion that I really don’t understand the workflow behind it. I’d like to, both on a personal level and also as I’ll need to make an informed decision on this PEP, and I suspect I can’t do that if I don’t have a good understanding of a key use case.
This may be too far off-topic (this discussion already covers way too much ground), so if anyone (especially @brettcannon) feels that it should be split off into a topic of its own, I’m happy with that. But I do think it’s relevant to the “purpose of lockfiles” question.
For background, my concept of locking is very much around the “lockfile as a specification for an environment” model. I see the most common locking scenario as being “here’s a set of requirements that I want to install, I’d like to be sure that if I do the same install in the future, I get the same result”. This is basically (as I understand it) what pip-tools does, and I cannot imagine any credible lockfile standard not supporting this.
Using a workflow manager like `uv`, my naive understanding is that if I do `uv add`, that records the new requirement, and simultaneously updates the lockfile to include it. That in itself seems fine to me - it’s a streamlined version of the approach I describe above.
Where things start to confuse me is if I want a different environment. From what I can see, uv doesn’t really have a concept of multiple environments: there’s a single “project environment”, and then… nothing? Maybe the idea is that environment managers like nox and tox handle those situations just fine, so uv doesn’t need to? But I still need to define the relevant dependency groups. So suppose I want to create some documentation for my project. I write it using Sphinx. So I want to create a new “docs” dependency group that installs the tools I need to build the docs. That again is OK: `uv add --group docs sphinx`. Note that I want the latest version of sphinx when I run the docs build, because I like shiny new things.
I assume that sphinx (and its dependencies, and any other docs tools I need) won’t be installed in the project environment. They shouldn’t be, I never asked for them to be after all. So I can now reference the “docs” dependency group in my nox configuration. Or maybe there is a uv command that I’ve missed to say “run sphinx using the docs dependency group, in a separate environment”? Anyway, everything’s OK so far.
Now I want to do some testing. So I set up a new group, “test”. That contains my project, plus its dependencies, plus the tools I need to run my tests. Note that unlike the docs group, this one does need the project included. Again, though, I don’t want my docs group installed when I’m running the tests. This may just be because I want my test environment to be “clean”, but it could be for a more fundamental reason (my tests depend on an old version of Sphinx, which doesn’t support features I use in my docs build).
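For concreteness, this is roughly what I have in mind in `pyproject.toml`, using PEP 735 `[dependency-groups]` syntax (the pins are illustrative, and the project itself would be installed into the test environment by the workflow tool rather than listed in the group):

```toml
[dependency-groups]
# Docs build: I always want the latest Sphinx here.
docs = ["sphinx"]
# Tests: the feature under test needs an older Sphinx, so it's capped.
test = ["pytest", "sphinx<8"]
```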
My understanding is that I’ve now managed to get sucked into “not recommended behaviour”, simply because I want my test and docs environments to be independent. But what I did seems entirely reasonable to me. So I’m confused as to what I did wrong. And in any case, why is uv interfering here? I’m running the docs and test jobs in different environments, using nox, so what’s it to uv how I define those groups?
This all seems wrong to me - I think using nox is wrong here, not least because I don’t see how nox can ask uv to populate the session environment with what uv locked (so I end up using pip, and I might as well have defined the doc dependencies in my noxfile). But conversely, I can’t see how using uv would work, because it only has one environment and I need multiple environments. Unless the idea is that I recreate the project environment each time I run a task with a different set of dependencies - but that feels like taking the whole “once installing is fast, we no longer have to fear doing it too much” idea way too far.
So I think I’ve misunderstood something about how these “multi-environment” lockfiles are meant to be used. I’d appreciate some clarification. And even though I’ve used uv as my example here, the answer doesn’t have to be in terms of uv. “This is how you do that in PDM (or Poetry)” is perfectly fine. As long as one of the tools that needs “multi-environment” lockfiles can explain how they would work in this sort of situation.
Sorry for the long message. I couldn’t work out how to describe the sort of situation that confuses me any more briefly. Thanks to anyone who took the time to read this far!
Nope, I think it’s a reasonable thing to discuss here to try and make sure everyone has a similar understanding.
There’s a single virtual environment at any one time in `.venv`, but I believe you can install into that environment whatever combination of things you have written down in your `pyproject.toml` via `uv sync`.
No, you just switch as necessary by running `uv sync` appropriately. You obviously can still use Tox and Nox for such things as well if you prefer.
I don’t think that’s true. I think the only thing uv does by default that might run counter to what you’re used to doing is pinning everything to the same package version regardless of what causes the package to be pulled in.
If you don’t use uv’s lock file, then nothing.
Why is that? uv is fast enough that some people are simply running `uv sync` when they `cd` into their project directory.
For PDM, I think you just create as many virtual environments as you want and then you install into them as necessary with the lock file. The similarity here is that the `sync` commands let you specify groups to use and such; it’s just a question of whether the tool expects you to bother with separate virtual environments or not.
The way I think of the “many scenarios per lock file” case is that it centralizes everything into a single file and makes it easier to have everything pin to the same version of a package if you want that. That could facilitate analysis of all potential scenarios at once, based on seeing them all in a single file. It might make the file smaller since you cut down on duplication. It also makes it easier to keep everything up-to-date when you update your `pyproject.toml`, as there’s only one file to update instead of N files for each scenario.
Thanks! Everything had gone silent, and I was worried I’d killed the discussion.
OK. I guess that’s not that much different than switching venvs (it doesn’t work in parallel, but even I’ll concede that’s an obscure case, and it’s a tool UI issue, not a standards one).
Yes, but that’s precisely the thing I have a problem with. As I said in my example, if I want to use newer Sphinx features in the docs I write, but I’m testing a feature of my app that relies on an older version of Sphinx, I have a problem. Yes, it can be done, but it’s “not recommended”, and worse (IMO), it violates my intent that these are independent tasks - docs build and testing are unrelated. I’m happy to simply dislike the workflow uv promotes, but with so much of the multi-scenario support in the standard being driven by uv’s experience, I want to make sure the standard isn’t taking on the same biases.
OK. I have no objection to making it easier to pin everything in conjunction. On the other hand, I do have an objection if it requires doing so. Or even if it makes it seem like pinning everything together is the expected behaviour. The standard should be impartial on UX matters, and IMO pinning everything as one vs pinning “roots” independently is very definitely one of those things.
Two examples of a lockfile with two “roots” (I don’t particularly like that term - is there a better one we can use?), one with them locked as a whole and one with them locked independently, might help me understand whether the current format achieves that impartiality. Unfortunately, I’m still struggling to really understand the multi-root format, so I don’t feel confident to write such an example myself.
It’s not because I know you don’t like this design, so I have tried to make sure your preferences are supported, too.
Correct, hence why I haven’t designed anything to make this the case.
You can call them dependency groups if you want, or direct/top-level requirements.
Let’s consider two dependency groups: new-sphinx and old-sphinx using Sphinx 8.1.3 and 7.4.7, respectively. The locker would lock for both dependency groups separately, creating a dependency graph for both. So far, 2 separate dependency groups with separate requirements. Now, you could keep it all separate, but I have had people push back on that before for being verbose (although that was a while ago, before I was a parent, so people’s views may have changed).
But what the locker can do now is merge the dependency graphs where there’s complete overlap. So if new-sphinx and old-sphinx only differ by, say, Sphinx and Pygments, then there would be two entries each for those two packages in the lock file and only one entry for every other package, with everything tied together to minimize duplication. That way the lock file has recorded the dependency graph for your two dependency groups independently without needlessly repeating itself.
And I’m purposefully not showing an example as this isn’t special to the format chosen; I can make this work today or with any other modification made to the PEP in the future. As I said, I planned on supporting this knowing it’s how you work.
Ah yeah – the current format does not require this. In fact, vis-a-vis some other designs, it makes it possible to have multiple “independent” sets of dependencies within a single lockfile.
And that’s why the current PEP doesn’t bother trying to support multiple files.
OK, so where do we sit with all of this, so that we can drive towards deciding what exact goal we want lock files to solve for, and I only have to update PEP 751 one last time?
Do you have a better feeling now as to why some have asked for a singular lock file that can handle multiple scenarios at once, @pf_moore?
Do the people who will be asked to implement this – e.g., @frostming, @radoering, @charliermarsh, possibly @ofek – happen to have a unified opinion around “lock for the scenario” versus “lock for any and all scenarios” (as outlined in PEP 751: now with graphs! - #158 by brettcannon)?
I would recommend that I not be included in the criteria for acceptance as I fundamentally disagree with that example:
In my view, lock files are only useful insofar as they can reproduce an environment, and therefore each combination there describes a completely different environment, albeit with some shared inputs. This seems to not be how tools like Poetry and uv think of things. I disagree with the desire for a lock file to be tied to a project/library because lock files, in my opinion, should generally only be about applications. This also seems to be at odds with the other tools and some users (this is in part because there is no standard to define direct dependencies of distinct environments/applications).
So, I think my opinion should not be considered.
I accept that there are some use cases that work better with such a multi-scenario lockfile, while also being reassured that the format will allow independently locked scenarios in such a lockfile. I don’t particularly like the workflows that take this approach (and in particular I don’t like the “lock everything as one by default” approach of uv) but that’s not relevant to this specification, so I’ll say no more on that.
So my questions have been addressed.
I would like the ultimate standard to allow multiple lockfiles, if only to support the “one lockfile per scenario” approach, but I’m not going to complain if the consensus is that a single lockfile is preferred - the same functionality is available either way, it’s just an organisational choice.
I don’t think pip is affected by any of this, for what it’s worth. As a lockfile writer, pip is only involved if someone wants to use `pip install --report` to generate a lockfile, and that will be a single scenario lockfile, which is uncontroversial. And as an installer, the only thing that matters to me is that installing won’t need a lockfile, and that’s never been in question.
I think your opinion is entirely valid (and in fact, I agree with it). But I think the “environment reproducing” lockfile remains a valid (and important) part of the lockfile spec, and will be supported regardless of the outcome of this debate[1].
What I think this does imply, though, is that the spec shouldn’t dictate that there is only one lockfile per project. It’s fine if there’s an obvious per-project name like `pyproject.lock`, but the spec should allow for per-environment names like `pyproject.docs.lock`.
[1] if that’s not the case, I’ll probably reject the PEP, but Brett knows this, so that isn’t going to happen
Not to muddy things if folks have already reached a clear consensus, but for folks grappling with what it means to lock multiple sets of requirements into one file or resolve them “together”, I’d just boost trying out an example scenario/workflow (like Paul’s) with uv’s “conflicting dependencies” feature. It really helped me understand how the end result in `uv.lock` differs, the possible ways to install from it, which packages end up resolved to multiple versions per dependency-group vs a single one, etc. Of course that’s not necessarily going to be how the standardized lockfile encodes this, but still helpful.
That’s the “lock file per scenario” approach, e.g. `requirements.txt` v2. So I wouldn’t take my example as “this is the way inputs should work”, more something I expected everyone here would understand.
That’s fine and I’m not particularly attached to a single lock file.
There is definitely no clear consensus unless/until the tool authors tell me they all want the same thing.
(The only tiny bit of nuance I would add here is that uv now allows you to declare combinations of extras as “conflicting”, and will error when you attempt to install them at the same time. The common example is: when working with PyTorch, you might want one extra for “PyTorch in CPU mode” and one extra for “PyTorch in GPU mode”. Those require different PyTorch versions, but the user knows they never need to be installed at the same time. So you can have subsets of the graph that are effectively independent lockfiles.)
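For reference, the declaration looks roughly like this in `pyproject.toml` (see uv’s documentation for the authoritative schema; this sketch is from memory and the extra names are just examples):

```toml
# "cpu" and "gpu" are extras defined in [project.optional-dependencies].
# Declaring them as conflicting lets uv resolve both into the lockfile
# while refusing to install them into the same environment.
[tool.uv]
conflicts = [
    [
        { extra = "cpu" },
        { extra = "gpu" },
    ],
]
```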
This is very well-put. I’m thinking about this a lot and will reply soon.
I don’t know how much this will help the decision, but I wanted to say that, as a user, I’ve been following this discussion with great interest (it even prompted me to register here). I definitely think some form of standardized lock file is needed, and I want to thank @brettcannon for spending a not inconsiderable amount of effort on this.
My own work generally involves short-term data analysis projects that are then archived and occasionally “resurrected” months or years later. From that perspective, an improved `requirements.txt` would suffice (when using `conda`, I’ve used conda-lock for this purpose).
To my mind, we should be thinking about what use cases require interoperable lock files, and leave the tool-specific capabilities to the `[tool]` section. The two that I recall being discussed in this thread are distribution of applications and facilitating CI builds.
I have no experience with CI (hopefully others can weigh in here), but can speak to application installation. As a user, I’m fine with a few lock files (one per platform?), but the risk of an “exponential explosion” of lock files concerns me. If every platform, every supported Python version, and every optional extra for an application requires its own lock file, finding and selecting the right file would not be user-friendly (not to mention spamming the project root directory, although that seems solvable). I don’t relish picking the right lock from a list of 48 files.
Separately, having lots of closely related but slightly different lock files seems like an opportunity for files to get out of sync; guaranteeing that nominally related locks are, in fact, related seems like a worthwhile goal.
In contrast, supporting `uv`’s multiple roots seems superfluous for the common specification. Instead, I would ask: is there a way to structure the lock file so that it works in some sensible way with other tools, but still provides enough that `uv` can support multiple roots (edit: within the `[tool]` section)?
I, perhaps naively, see two possible ways to make this work. Either the primary lock describes an environment that supports all possible roots (from which `uv` can select a subset for different subprojects), or one root is treated as the default (leaving some nodes unreachable from that default root), allowing `uv` to specify alternative entry points in the `[tool]` section. Perhaps this doesn’t even need to be decided in the PEP - different tools could choose one or the other (so long as the lock describes a valid environment).
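To illustrate the second option with entirely made-up key names (this is not a proposed schema, just the shape of the idea):

```toml
# Hypothetical sketch; none of these keys exist in any standard.
# The standard part of the lock records a single default root...
[default-root]
requires = ["myapp"]

# ...while alternative entry points into the same locked graph live in
# the tool section, for tools (like uv) that understand multiple roots.
[tool.uv.roots]
docs = ["sphinx"]
```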
To summarize: I would prefer to see a unified solution, but could accept that this may make more sense as an export format. But I think, from an end user perspective, lock files will be more useful if the lock files can describe multiple environments.
I am not sure what makes more sense in the end so I will just try to anticipate how we will probably handle the different possible outcomes for Poetry:
1. “lock for the scenario”
   Such a lock file cannot replace our internal `poetry.lock`, so it will not have an immediate impact on Poetry itself. It will be “just another export format”, so it will eventually be implemented in the `poetry-plugin-export` plugin. I do not think that Poetry will be able to read/install from such a lock file in the foreseeable future (just write it via the export plugin).
2. “lock for any and all scenarios”
   Since such a lock file has the potential to replace `poetry.lock`, we will try, and if it is possible we probably will replace `poetry.lock` with such a standardized lock file. Honestly, I do not expect Poetry to be able to install from a lock file created by another tool from the start, but to have to rely on some information in the `tool` section[1].
While 2. seems more attractive to me than 1., it also means more work for us, and depending on the exact format it may take us a while to support it.[2]
I’m not quite “decided” on anything here, but I’m gonna push myself to share my current perspective, which is: I think we should make replacing `uv.lock`, `poetry.lock`, `pdm.lock`, et al non-goals, and instead focus on the “requirements.txt v2” framing.
My thinking is as follows:
- We’re still evolving our understanding of this “universal resolution” problem… Even the formats themselves are still evolving, at least in our case. For example, since the start of the PEP, we added support for resolving conflicting extras and dependency groups. It’s a big change with a bunch of implications. I haven’t yet thought about how that would or wouldn’t work with the current proposal.
- Over the course of the thread, I got a little burned out (so I can only imagine how Brett feels — I’m sorry!), in part because I felt that I was having to advocate for a lot of the design and UX choices we’ve made in uv, which ultimately led me to feel like the standard was at risk of being coupled too heavily to specific tools and the workflows that accompany them. I felt similarly when I saw other tool authors or users advocating for changes that were needed to support their own tool — a totally reasonable thing to do, but I got worried about how it would all shake out in the design. It felt like we might be designing around the wrong primitive.
- As the discussion evolved, it seemed like we started to move away from the idea of having installer interoperability (i.e., that you could use Poetry to install from a uv-generated lockfile or vice versa), in which case… it just doesn’t seem like standardizing `uv.lock`/`poetry.lock`/`pdm.lock` really adds that much? It doesn’t really reduce lock-in, since we’re still expecting users to change out any tool-specific metadata, remove the lockfile, and re-create it with whatever tool they’re migrating towards. The motivating use-cases like “give a cloud provider a lockfile and have them provision the environment for you” also became more difficult to imagine, since the portion of the lockfile that you want to install is also coupled to individual tools and perhaps even their CLIs. I think the value from standardization, as written, is that no tool is privileged with respect to other tools (for example, Dependabot could support reading from them equally well). That is valuable. But is it worth it? Maybe we can achieve that same outcome with “requirements.txt v2” anyway? Looking at these varied formats and wanting to unify them under a standard is the right instinct, but I just don’t know if I’m convinced on the tradeoffs in practice.
- Finally, despite the time we’ve spent focusing on supporting all of the different tool use-cases, the standard doesn’t necessarily improve the experience for uv’s users, and would be a lot of work for other tools (e.g., Poetry) to adopt (in my opinion). I do think there are good reasons for uv (or any project) to adopt a standard even if it doesn’t improve the experience for uv-only users. But as a counter-example, the things that would be impactful for uv users would be pushing on standards around consistent metadata across distributions for a single package-version, making it easier for tools to reason about or remove dynamic metadata, and, in general, aligning on paradigms like universal resolution and declarative package management. But I don’t think we are aligned on those ideas.
If we strip the proposal back to a “requirements.txt v2”, I think we will be able to align far more quickly on the format itself, and have a much clearer set of motivations and use-cases. (We get to drop: extras, dependency groups, multiple roots — hopefully we can even drop `[tool]`?)
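To give a sense of what that buys us, a single-scenario lock could be little more than a flat list of pinned packages plus their artifacts. A purely hypothetical sketch (not a proposed schema; the key names are invented for illustration):

```toml
# Hypothetical "requirements.txt v2" shape, not PEP 751's actual format.
lock-version = "1"
requires-python = ">=3.9"

[[packages]]
name = "sphinx"
version = "8.1.3"
# A real lock would also record the exact artifact URL and its hash
# (secure-by-default); both are elided from this sketch.
```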
In uv, I would view this as an alternate format for `uv export`, which already exports a “single scenario” (e.g., a specific package, with a specific set of extras enabled), with platform markers, but in `requirements.txt` format. Then, we’d likely add support for “installing from a lockfile” to `uv pip` or elsewhere in the API. So it wouldn’t be the first-class format that we use in uv, but it would be fully interoperable with other tools, and users could export from `uv.lock` and then install that exported lockfile with `uv pip install` if they want.
I do think there are good arguments against “requirements.txt v2”:
- Is it worth standardizing? `requirements.txt` kind of works for this today, though it has the notable downsides that (1) it’s implementation-defined, and (2) it lacks the kind of granular information that you need for full reproducibility (like URLs, mapping packages to indexes, etc.). The standard would presumably solve those downsides.
- Will it make it harder for us to come up with a standardized format for `uv.lock`, `poetry.lock`, etc. in the future, since we’ll have to deal with compatibility with whatever format we come up with? I think it probably will — I could be wrong though. I do want to come back to this problem in the future when we understand it better, and I’d love to expand the format as part of that. So, if that ends up as an agreed-upon goal, are we better off not standardizing anything now?
- It adds yet another file format rather than reducing the number of file formats that users have to understand when interfacing with packaging tools. (That is: you now have `pylock.toml` in addition to `uv.lock`.)
Finally, I’m a little hung up on the question of “Will uv / Poetry users even use this format?” If so, why? Where / when / why do they use `requirements.txt` today? Maybe to get that Dependabot support? But then you have the painful problem of “always keeping a `pylock.toml` up-to-date with `uv.lock` so that Dependabot looks at it.” So it may only really be useful in practice as a way to, e.g., make a project installable by any tool even though it’s managed by uv, or similar (i.e., installer interop).
Still, I see some value to this as a primitive.
As a minor note, if we go down this route, we may want to consider calling this something other than a “lockfile”. It will already be confusing for users that they have both `pylock.toml` and `uv.lock` / `poetry.lock` / `pdm.lock` — referring to them all as “lockfiles” when they serve different roles could make things worse. (I expect I’ll get a lot of pushback here, but I’m already anticipating the number of questions we’ll get about why a project might want to include both `uv.lock` and `pylock.toml` as opposed to a single file.)
I want to be clear that supporting this standard (and aligning with standards more broadly) is important to me. So if the consensus here is such that we do want to pursue a replacement for `uv.lock`, `poetry.lock`, `pdm.lock`, et al, then I will do everything I can to ensure that the standard works for uv and that uv adopts the standard.
This is mostly what I’m looking for in some sort of standardization at this point. GitHub’s dependency graph, and tools like Dependabot that use it, are important. A tool being understood by GitHub makes it more appealing to use, and GitHub takes quite a while to improve ecosystem support. Even if pdm and uv locks were eventually understood by GitHub, any new tool would be at an immediate disadvantage.
Perhaps there’s some set of operations to standardize rather than a file format. “List all dependency names, and versions for each name” and “update dependency X to version >= N” would cover GitHub’s dependency graph and Dependabot. If we could somehow specify `dependency_introspection="uv|pdm|..."`, and GitHub (or other tools) knew to call that binary with the standard commands, each tool could still have its own lock format and features.
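Purely as a sketch of that idea (nothing like this exists today; the table name and commands are hypothetical):

```toml
# Hypothetical: no such table is defined by any current standard.
[dependency-introspection]
provider = "uv"  # the tool GitHub/Dependabot would invoke

# The provider would be expected to implement standardized commands, e.g.:
#   <provider> deps list              -> dependency names and versions
#   <provider> deps update NAME SPEC  -> update a dependency to >= N
```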
The key word there, though, is “kinda”. The info box at Requirements File Format - pip documentation v24.3.1 even says:

> The basic format is relatively stable and portable but the full syntax, as described here, is only intended for consumption by pip, and other tools should take that into account before using it for their own purposes.
You’re also missing the key motivator for me starting this work: `requirements.txt` is not secure-by-default; listing file hashes is an optional thing and currently most tools make it an opt-in experience.
Perhaps, but then you need to also define the wire protocol/encoding, at which point you have a file format.
Now I’m not opposed to this concept outright (although the bikeshedding over how to write this stuff down scares me), but I would still expect it to be secure-by-default, so there would be expectations around e.g., requiring hashes for listed files that would be installed.
I also don’t know if this is even amenable to GitHub or other potential tools like cloud hosting providers. Having to spin up an environment, do an install of a Python tool, and then run it for this information may (not) be too much to ask compared to a file format where they can process it in any way they feel is most advantageous to their use-case. They may be totally fine with this since they might be doing something like that now anyway, but I just don’t know.