Storing requirements for tasks in pyproject.toml (drafting a PEP)

Continuing the discussion from Meta: organizing several proposals related to the future of pyproject.toml:

I have just begun writing this up as a PEP locally, but I’m laying the idea out here - in more detail, but still informally - before I get too deep into it.

I pitched the concept in the previous thread like so:

I misspoke there. In my original conception, the build-wheel target would have been for dependencies at build time; as I was writing this post, I dropped it entirely.

Generally: the pyproject.toml spec is expanded to include optional [required-to] and [required-for] tables. There are a lot of use-cases to handle; this proposal is designed to push the complexity onto tool implementers - one writes things in pyproject.toml that are intuitive, and tools are expected to do the right thing.

The keys of required-to can have any name, like required-to.<task-name>, where <task-name> is some string describing a “task” that can be done with the code, such as running it, testing it, building a wheel, etc. Certain names have explicit meaning defined by the spec; names with a leading underscore are for “private” use by individual tool suites, or even given an idiosyncratic meaning by the developer. Other names not starting with an underscore are left up to the community to standardize; it is hoped that if multiple tools exist to perform a task that isn’t covered here, the authors can collaborate to name and define the task.

This scheme gives a consistent, regular, readable description of the purpose of each list of dependencies, while allowing generalization to any conceivable such purpose. The phrase “required to” is followed by some action that causes the requirement, while “required for” is followed by a noun describing the thing that imposes the requirement (and then by the corresponding action - it seems redundant to try to insert an extra “to” here).

The values for each such key are a list of requirements, in identical format to the current [project.dependencies].

Similarly, <task-name> keys under separate required-for.<extra-name> tables (i.e., sub-tables of required-for) are used to define additional dependencies needed for the task-name under the condition that the extra-name is available.

I plan to define the following task names and their semantics explicitly:

  • install - dependencies that must be installed when the wheel is installed (as well as when testing); i.e., dependencies required at runtime when the code from the wheel is used.

  • run - dependencies needed simply for running the code, without producing a wheel.

    • This is meant for simplicity for people who aren’t intending to build a wheel; however, it’s conceivable that the same code could be useful both as a library and as a Pipx-installed standalone application, and have different dependencies in those two contexts.
  • test - dependencies that must also be installed when testing the code, including the test harness itself. This allows for pinning a version of Pytest, for example.

    • A test runner would put dependencies from test into the test environment, as well as dependencies from install if that key is present, and run otherwise.
  • use-wheel - dependencies that must be installed when the wheel is installed, that are not used in testing. For example, an application distributed as a wheel, that uses Requests, might mock out all the networking calls for testing, and not want to include it in the test environment.

    • Conceivably, this could be used to share “third-party package data” - for example, if multiple image processing libraries want to provide the same sample data for tutorials. I don’t know how useful this is, but it seems pointless and impractical to forbid it.
  • build-doc - dependencies needed for preparing documentation, including the documentation writer itself. This allows for pinning a version of Sphinx, for example.

    • Conceivably, this could be [ab]used for building other distributables that are ancillary to the actual code. However, it would be better to figure out what those might be, and let others define separate names for such tasks.
  • develop - dependencies used in a development environment, such as Black, MyPy, pre-commit, a linter, etc. This would allow multiple devs on the same project to clone a consistent development environment.

Some equivalences:

  • The information currently in [project.dependencies] can now be split across [required-to.install] and [required-to.use-wheel] - similarly, [project.optional-dependencies.<extra-name>] across [required-for.<extra-name>.install] and [required-for.<extra-name>.use-wheel]. It would be an error for pyproject.toml to include both [project.dependencies] along with either of the corresponding required-to entries, and similarly for the optional dependencies. (“In the face of ambiguity, refuse the temptation to guess.”)

    • However, now it’s possible to make this split, and there’s a clear use case for it.
  • Other targets are approximated by third-party tools already. For example, Poetry’s [] would subsume many of the new entries. The new scheme allows for much more specificity.
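To illustrate the “refuse the temptation to guess” rule above, here is a minimal sketch of the validation a tool might perform. The function name and dict-based input are my own invention; only the table names come from the proposal.

```python
# Hypothetical check a tool might run after parsing pyproject.toml.
# Rejects files that declare dependencies both the old way
# ([project.dependencies]) and the new way ([required-to]).

def check_no_ambiguity(pyproject: dict) -> None:
    legacy = "dependencies" in pyproject.get("project", {})
    new_style = any(
        key in pyproject.get("required-to", {})
        for key in ("install", "use-wheel")
    )
    if legacy and new_style:
        raise ValueError(
            "Specify dependencies in [project.dependencies] or "
            "[required-to], not both."
        )

# New-style only: fine.
check_no_ambiguity({"project": {"name": "demo"},
                    "required-to": {"install": ["pandas"]}})
```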

Additional semantics:

  • If a [required-to.run] or [required-for.<extra-name>.run] entry is present (even if empty), but there is no [build-system] table nor any of the [required-to.install], [required-for.<extra-name>.install], [required-to.use-wheel], or [required-for.<extra-name>.use-wheel] entries, this signifies that the project is not intended to build a wheel, and tools shall not attempt to do so.

    • This allows for protection against Pip downloading an sdist of something intended to work only as an “in-place” application and trying to install it as a library.

    • We could perhaps relax the requirement to specify [project.name] and [project.version] in these cases. That seems worth separate discussion.

  • Aside from that, in general the run and install lists are meant to be fallbacks for each other. A script runner would set up dependencies from install if run is absent (as well as test, of course); a test harness would set up dependencies from run if install is absent. (Either would ignore use-wheel regardless.)
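A sketch of this fallback logic as a tool might implement it. This is purely illustrative (no tool implements it today); the function name and the dict representation of the [required-to] table are invented for the example.

```python
# Resolve the dependency list for a task under the proposed fallback
# semantics: run and install fall back to each other, and use-wheel is
# never pulled into script-runner or test environments.

def deps_for_task(required_to: dict, task: str) -> list[str]:
    if task == "run":
        # Script runner: use run, falling back to install.
        return required_to.get("run", required_to.get("install", []))
    if task == "test":
        # Test harness: test deps plus install (falling back to run),
        # ignoring use-wheel regardless.
        runtime = required_to.get("install", required_to.get("run", []))
        return required_to.get("test", []) + runtime
    return required_to.get(task, [])

tables = {"install": ["pandas"], "use-wheel": ["requests"],
          "test": ["pytest>=6"]}
deps_for_task(tables, "test")  # ['pytest>=6', 'pandas'] - no requests
deps_for_task(tables, "run")   # ['pandas'] - install used as fallback
```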

Other thoughts:

  • For symmetry, it would seem to make sense to include a build-wheel task. However, this is the one thing where it doesn’t make sense to offer extras-specific variants, and it would be completely redundant with [build-system].requires as things stand.

  • I know I originally was opposed to a [run] table; the new [required-to.run] is different in that it’s expressly only for describing dependencies (and not other environment setup). It’s also part of a more general system that tries to put the various use cases for Python code on relatively even footing (rather than holding wheel-building above all else).

  • The version of Python required for any task that ultimately involves executing the .py files in the project, ought to be the same regardless of that task. Therefore, there is no attempt to specify a Python version in the new tables; that’s what [project.requires-python] is for. While that corresponds to a core metadata field in wheels, that’s not to say that this is the sole purpose of the information; the documentation describes this value as simply “The Python version requirements of the project.”, and I’m not changing that. It’s fine if e.g. script runners don’t have this information; they are free to complain, or else try using whatever version is available.


I wonder if this should be named install. For me at installation-time the dependencies are things like pip, conda, and so on. So two things come to my mind…

One, I think a verb other than install should be used for this task. The verb run is already taken for some slightly different use case. I do not know what a better verb could be.

And second, is there a scenario where a project might need or want to declare that it needs specific things to be installed, for example this project can only be installed with conda or can only be installed with pip? In this case I would add a new task with the verb install where one could specify the installer(s): conda, pip, or whatever else.


I had thought of use-wheel before I opted to use that for the wheel-specific dependencies. Maybe that isn’t the right name for that context, either. Names are hard. I’d like the names not to be too clunky, but I agree that install could easily be misunderstood the way you describe.

That’s… a frightening proposition IMO :astonished: I can see the aesthetic argument for it, but it’s hard to imagine the practical use case. More to the point: at least as things currently stand, with the current wheel spec, by the time someone is actually trying to install the project, it’s too late. The wheel’s metadata doesn’t record anything analogous; so even if the developer wanted to specify, say, a minimum version of Pip to install the package, an older version of Pip which downloaded the wheel would have no way to know that it’s “forbidden” to unpack it.

This proposal looks like it adds a lot of things to pyproject.toml, many redundant with keys already present. So this is quite unlikely to fly, but I think for it to have any chance at all you should very clearly and concisely write up the current problems / pain points it’s trying to solve.


Of course. A PEP is supposed to have a Motivation section, after all. I’ll go over the bullet points here while I think about how to write it properly (I’m not sure if this is exhaustive, but it should be pretty close):

  • pyproject.toml seems like the one obvious place to put metadata about a project.
    • Everyone’s already using it that way anyway.
    • Nothing about the name [project] exactly screams “this corresponds one-to-one with entries in the METADATA file of a wheel”, but according to PEP 621 it all does (except for [project.entry-points], but that’s still specifically wheel-oriented).
      • On the other hand, many of those names make perfect sense in other contexts anyway.
    • Consequently, people expect to be able to store general metadata there - in particular, lists of dependencies.
  • Lots of users have “projects” that don’t fit the mold of the standard packaging flow.
    • “Project” is quite loosely defined overall, but we have some ideas about common forms they take.
      • Different kinds of projects will involve varying tasks, but again there are a few easily identified common tasks.
    • There’s a broad perception that packaging is hard.
      • Learning about wheel metadata is intimidating and unnecessary if you only care about requirements.
      • It’d be nice to be able to ease the transition - this was a huge point of contention in the discussion surrounding PEP 722/723.
      • One great way that could work is if users didn’t have to worry about the classic concept of packaging before starting to use pyproject.toml for its apparent natural purpose.
    • Many people don’t want or need to build wheels at all.
    • Even more people need to do other things besides build wheels.
      • Practically any “task” one could name here could reasonably have a list of dependencies.
  • However, pyproject.toml was designed specifically to store information used by the wheel-building process. It isn’t readily extensible to describe contexts other than wheel-building.
    • Some people try to work around this with requirements.txt; this comes across as clunky and ad-hoc (and Pip-specific).
    • Some try using Pipenv and the corresponding pipfile and pipfile.lock - this has its limitations, but more importantly it still scatters the metadata to separate, tool-specific files.
    • Some try [ab]using the “extras” mechanism - this comes with numerous downsides.
    • Various tools use the completely free-form [tool] table to express their opinions about what the metadata should look like. There’s no standardization or collaboration; compatibility is needlessly sacrificed.
    • Basically, every workaround feels “proprietary” even though it’s in plain sight.
    • Even many of these workarounds don’t make distinctions that would be useful to make, e.g. between various kinds of “dev dependency”.

Probably not all of that is directly relevant; I’ll need to go through some revisions, of course. Some of these thoughts probably belong in the “Rejected Ideas” section instead.

It looks this way because I’ve detailed semantics for a bunch of keys, all of which fundamentally work the same way.

First I should show an example, in case I accidentally oversold the complexity:

[required-to]
install = ["pandas"]
# Web requests will be mocked in testing; don't include this in test envs.
use-wheel = ["requests"]
build-doc = ["sphinx>=7"]
test = ["pytest>=6,!=7.1.0"]

# Define the "make-fancy-videos" extra for our project.
# This also empowers e.g. `future-pyrun ourproject[make-fancy-videos]`.
[required-for.make-fancy-videos]
install = ["Pillow", "imageio[ffmpeg]"]
# We need sophisticated algorithms to verify the accuracy of video output.
test = ["scipy"]

In the description I used some more dots in the names; that’s just an alternative way of describing the TOML structure.

This proposal is primarily in response to:

… except that I realized that the conditions to select a list of requirements could depend on both the “task” (the fundamental reason why the requirements are required) and the extras configuration, and it seems reasonable to treat them as mostly orthogonal.

Initially I basically just wanted to say that here is a namespace where you can put some lists of requirements; the lists work the same way as existing examples; here’s how the namespace is organized. But I got a pretty clear impression that that would not fly, e.g. (admittedly in a different context):

As regards the redundancy, this is because I see the existing design as non-extensible in a place that needs to be extensible; since solving the problem requires building something new anyway, I designed it for parallelism. In this approach, the task of building a wheel is treated essentially the same way as other tasks, rather than assuming that wheel building is the default and that dependencies related to installed wheels are the primary ones worth describing.

It’s also because I think it’s important to distinguish between wheel contents that are vs. are not expected in a test environment. ([project.dependencies] can’t do that; so currently users are stuck either creating test environments that include libraries for functionality that will be mocked out anyway, or using some totally custom, parallel setup to describe dependencies in the test environment vs. the wheel.)

That said, I’m open to considering other ways to arrange the data. For example, I considered doing it this way to avoid duplication:

dependencies = ["pandas"]
# Web requests will be mocked in testing; don't include this in test envs.
wheel-dependencies = ["requests"]
doc-dependencies = ["sphinx>=7"]
test-dependencies = ["pytest>=6,!=7.1.0"]

make-fancy-videos = ["Pillow", "imageio[ffmpeg]"]  # the extra's install deps

make-fancy-videos = ["scipy"]  # the extra's test deps


  • I felt these names were less clear and read less naturally;
  • Aesthetically I don’t like that non-wheel tasks come across as second-class citizens;
  • The “dependencies” tag gets spammed (to avoid conflict with other [project] keys and make the purpose clear) - namespaces are one honking great idea etc.;
  • The extension to extras seemed especially inelegant.

While I appreciate the time you spent writing this out concretely, I think this is just way too confusing for users (myself included). I don’t see this happening tbh.


I agree, as written it is too confusing. I think the first step[1] has to be a much more incremental change. That might boil down to the older question of how to turn requirements.txt into a standardized format that retains some of the flexibility people currently use it for.

It feels like a large piece of this proposal is working through that problem by making some specific choices (about names, and structure). I don’t think the community is going to agree on all of these choices, or even come close anytime soon. But there’s still a need there and a lot of it comes down to requirements files and/or lock-files.

  1. which might lead to something else entirely ↩︎


Hopefully this is the fault of my current writing. There’s a lot of my own thought process and background (citing previous discussion) in the way of the actual idea.

But in terms of “more incremental change” I think I’m at a loss. The only thing that comes to mind is to describe just the structure this time around and then decide on names later. I’d actually be fine with that, but I got the impression that it would lead to chaos or people not seeing a clear future direction.

I’m aiming at less than that - just the parts covered by PEP 508 requirements strings, not any of the Pip-specific functionality.

I think the problem is one that’s been mentioned more than once already - you need to describe what the problem is before describing the solution. You may think you’ve done that, but I can say that for myself at least, I’m still completely in the dark as to what you think are the problems you’re trying to address here.

Until I understand the problem, I’m frankly not interested in your proposed solution. I can’t evaluate it if I don’t know what it’s trying to fix.


I’m especially disturbed to hear this from you, because I thought I was implementing your idea, for your reasons. Specifically:

What I understood in that discussion is that we were on the same page about the problems with requirements: in particular, there are multiple, not very standardized extant ways of scattering information about requirements around the project - some in requirements.txt (not even a standardized filename!) and some in various parts of pyproject.toml.

In turn, people try using pyproject.toml for this because it looks like a natural place for “project” information, where “project” is loosely understood and not formally defined, but might take on some shapes that you identified in that discussion. I understand that you were brainstorming, and that the context was

However, my long-term vision is that we support all those cases, and indeed everything that remotely makes sense, as long as there is a reasonably elegant way to do it. At least, as long as by “support” we only mean standardizing some config data, so that third parties can make tools.

My long-term fear is that if we don’t do that, people will just abuse [tool] and pretend everything is supported, and there will be a huge mess of repositories x tools that could interoperate but don’t.


The current proposal is exactly my attempt to standardize requirements specifications - not “requirements files”, but explicitly picking up the idea of defining a section in pyproject.toml. My proposed section is called required-to, because I think that makes it read nicely. The keys are descriptions of reasons a given list of requirements would be required (and thereby, all sets of requirements are named). The requirements work in the common existing way that they would in the project.dependencies list or the simple cases in a requirements.txt file - i.e., PEP 508.

It’s abundantly clear to me that people want to make lists of requirements for non-wheel-related reasons; so I talk about some possible kinds of projects (again without trying to define anything formally) because that justifies storing that kind of information inside pyproject.toml. (And it does need to be justified - the entire discussion started from you pointing out that pyproject.toml currently is explicitly wheel-oriented.)

My one concession to “thinking longer term” that’s explicitly in this proposal (and now I realize that it would be fine to sever the idea here and introduce that later) is a [required-for] table, where each [required-for.<extra-name>] gives per-task lists of requirements that are specific to each of the project’s (still an informal concept!) extras.

If the idea came across as any more complicated than that, I guess that’s my fault.

Not at all.

When I talked about standardising requirements files, I meant literally nothing more than adding a new section, call it [requirements] for now, I really don’t care about the name at this point. That section contains a list of name = [list of requirements] items. The semantics are nothing more than “if a tool wants to read requirement set x, they can get it from the requirements.x value in pyproject.toml”.

I absolutely do not want to replace any of the existing fields in pyproject.toml, or suggest that tools stop using them. Nor do I see any point (in this context) in the idea of “tasks” that you’re introducing - there’s no such idea in how requirements files are currently used so why are you adding it?

Sorry if what I said wasn’t clear enough.

Also, even if we are “just replacing requirements files”, we still need to state the problem we are solving. In that case, the problem is “requirements files are non-standard and tied to pip”. The solution is a functionally near-equivalent but standardised location for the requirements. The issue that still needs to be discussed is whether a list of requirements (stripped of all the pip-specific options) is a sufficient replacement for requirement files, in the places they are currently used. If the answer is “no”, then the proposal isn’t going to work. If the answer is “yes”, then we can standardise this, but it won’t solve all of the issues around “projects that don’t build a wheel”, because some of those problems by definition can’t be solved with requirement files (or they would already have been solved that way!)

But the point I was making is that someone needs to understand those use cases. You can’t support them without knowing what the issues they have that need solving are. And you haven’t given any details about any of those use cases, hence you haven’t defined the problem, as I was saying.

Also, you haven’t given any reason why “a standardised replacement for requirements files” is a step towards that goal.

That’s not abusing [tool], it’s explicitly what it’s for! The problem is it (again, by definition) ties people to a single tool. If multiple tools are saving the same data in similar [tool] settings, then standardising a format for holding that data might be useful. Do you have any examples of that happening? I’m not sure I do. (“Where to find the Python code for the project” is one, but I don’t think there’s much agreement between tools on the details yet). And in any case, that’s still not addressing the question of why this would help support the cases you want to support, given that you haven’t clarified what they need yet.

… and yet, you haven’t explained why that’s worth doing. It feels like we’re talking in circles at the moment. You seem to think that what I (and others) have said is enough of a problem definition, and you can go to the solution. But we (or me, at least) are telling you it’s not. Certainly what I’ve posted has only ever been of the form “these are areas someone should look at” - I’m expecting you (or anyone else who makes a proposal) to do the investigation, because you’ll need to point to the problems people have said they have with such projects, and explain how your proposal addresses them (or doesn’t).

I think it did. Don’t worry, writing a proposal or PEP is hard, and going at something this complex as your first attempt is going to take some tries to get it right. That’s one of the things a PEP sponsor does - help you structure your proposal into a PEP.

If you genuinely want to just propose something like my “standardised requirements file” idea, I’d suggest getting rid of all the talk about multiple sections, abandoning the idea of replacing existing data in pyproject.toml, and explicitly giving up on trying to address the big picture “projects that don’t create wheels” debate. Instead, just start with:

  • Requirements files are useful, but not standardised. Here’s a proposed standard.
  • The aim is to allow people to move what’s in their requirements files into pyproject.toml.
  • Installers will need to provide a flag to say “install requirement list X from pyproject.toml” (replacing pip’s -r flag).
  • The pip-specific options in requirement files won’t be supported. This will cause the following use cases to not be supported (give examples here that you’ve found by discussing what’s needed by real projects).
  • As a result, when requirements files get desupported in favour of the new standard solution, projects relying on pip-specific options in requirements files will need to change.
  • This is what such projects will need to do. (This, along with the two previous bullet points, is basically the “backward compatibility” and “transition plan” sections of the PEP).

You’ll probably also have to discuss cases like pip-tools, which (in effect) use requirements files as a form of lock file. I should note that I’m not even sure that it’s possible to replace requirement files without having some form of lock file solution in place. There’s no point in standardising a “requirements file replacement” if we still have to support requirements files once it’s available… Go and look at the discussions around PEP 665 if you want to see where that will lead :slightly_frowning_face:

On the other hand, if you prefer to make a proposal based on your longer-term ideas, I can’t offer much help (beyond “clearly define the use cases you want to support”), sorry.


I had a whole response written out trying to explain further about the scope of the proposal and where I think I have actually done things that you say are missing. But it seemed very long-winded and I agree that we have been talking in circles. It feels like there’s a deeper underlying cause for my apparent inability to convey certain things, that I can’t quite grasp.

Instead, I think it will be better if I start by responding to your suggestion at the end. I thought I was prepared to give a detailed response already, but now I think I want to read and digest PEP 665 (and some references) fully first. Just two observations for now:

It hadn’t occurred to me that this could be a blocker. To me the lockfile problem ought to be entirely orthogonal. In my mind, a lockfile is just a transitively-closed list of locked dependencies; locked dependencies are just a kind of dependency; and transitive closure is a testable property. (In the worst case, you assume closure, and then maybe get an ImportError at runtime.)

So all that should be needed is a dependency specification that can include enough detail to describe a fully locked dependency, and the problem is already solved - a lockfile is just a list of those, in the same way that the solution to a system of equations is itself a system of equations, just a trivial one. And I already had a plan for that specification; it’s the second idea on my list in the meta thread. (One which I incidentally also think could allow for doing the work of PEP 725 more elegantly.)
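The claim that “transitive closure is a testable property” can be made concrete. Here is a minimal sketch under my own simplifying assumptions: the lock list is a set of package names, and we have a mapping from each package to the names it depends on (the graph here is invented for illustration).

```python
# A lock list is transitively closed iff every dependency of every
# listed package is itself listed - i.e., following dependency edges
# never leaves the locked set.

def is_transitively_closed(locked: set[str], deps: dict[str, set[str]]) -> bool:
    return all(deps.get(pkg, set()) <= locked for pkg in locked)

deps = {"requests": {"urllib3", "idna"}, "urllib3": set(), "idna": set()}
is_transitively_closed({"requests", "urllib3", "idna"}, deps)  # True
is_transitively_closed({"requests"}, deps)                     # False
```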

But it seems like others don’t see it that way, so I need to understand why first. Which is why I’m reading PEP 665 etc.

The proposal I outline above is designed not to actively conflict with my longer-term ideas, but it does not express them. That’s why when I try to explain use cases, I’m not talking about different kinds of projects, but about different kinds of tasks that projects might need performed (and different kinds of tools that might perform them). They’re all things that fit within the traditional packaging paradigm of building a wheel - because even projects that do want to build a wheel have other things they typically want to do with the code. (Except for “running a project” - but it’s not hard to imagine e.g. setting up separate “installed” and “sandboxed” deployments for the same application.)


To spare you some trouble (I have thought about this at length and am doing part-time research on it this quarter at work): the reason is that there are different levels of reproducibility that folks desire.

  1. The first step is the PEP 665 approach of everything being built and you simply fetch from the original endpoint
  2. Next are source distributions, which introduce a ton of complexity; without them, I’m positive no proposal will be accepted
  3. The final step is locking build environment dependencies (there is some talk about external non-Python dependencies but I view that as out of scope)

The format is not really the hard part but rather you have to:

  1. Concretely describe the use cases that would be satisfied
  2. Target the minimum level of reproducibility and basically have a visual aid like a flowchart showing what happens when

I spoke to Brett and the folks from and as far as I understand the future lock file will require as an input the matrix of targets that you wish to support (like LLVM’s target triples). There are some nuances to the rationale but my sense is that this is due to reduced complexity rather than Poetry’s approach of putting everything inside the lock file all the time.

Unfortunately you’re too late for at least part of it :wink:

The phrase “the future lock file” stands out for me here. Does this mean that I should expect that another proposal for a lockfile format will be put forward?

And that it will definitely use a separate file, and not work by adding data to pyproject.toml?

And that whatever proposals I make for pyproject.toml would have to be designed independently, and not have a chance to integrate with that?

That would at least mean I don’t have to worry about the risk of standardizing non-locked dependency specifications only to find that people won’t use them because they still want to use requirements.txt so that they can specify hashes and such.

On the other hand, it would mean I don’t even get to explore (or fully exposit) my idea about locked dependencies qua special case of … dependencies.

What isn’t clear to me is: is it intended that such lockfiles are distributed, or are they only a dev tool? Because if they’re only a dev tool I don’t think I understand the motivation for the security guarantees.

But if they’re supposed to be distributed, then how? They don’t ordinarily go into wheels, and if they were included as package data then the installer wouldn’t care about them (at least, the PEP doesn’t seem to say anything about pulling one out from a root wheel, or indeed where the installer is supposed to get them from at all.)

It seems for example like Poetry’s idea is that if you’re making a package then you use pyproject.toml normally; but if you’re making an application then instead you commit your lockfile to version control, don’t bother with PyPI and use GitHub etc. as your primary means of distribution. Is that how PEP 665 is supposed to work? Because in that case the security seems illusory to me. The end user needs to actually read the lockfile to verify that installation won’t try to connect to unwanted sites. As for hashes, if the installer reports that a hash didn’t match, the end user has no way to know whether that’s the dev’s fault or the result of a security breach on the wheel host. (For that matter, if the project is hosted in the same place as its dependencies, that lockfile itself could just as easily have been compromised.)

There is no timeline but my intention was to express the fact that the people I know that would put forth such a proposal (myself included) will not be doing the Poetry way of including all-the-things. It’s too complex and the task at hand is already difficult.

It will definitely be a separate file and I would be greatly opposed to any such proposal to extend pyproject.toml by thousands of lines. No other language that I know of puts its machine-readable data within the user-editable metadata file and I’m trying to be vocal in all of these discussions to prevent this scenario.

I don’t see how they would be related unless you have plans to allow for entries in the dependency array to point to “dependency files”.

They are certainly not meant for project.dependencies but they also aren’t really a dev tool. They are meant to encode everything you need to “reproduce” an environment.

I don’t understand what you mean, can you please explain further?

This is a good point. I’ve gotten used to the idea that Poetry will edit the file anyway (and I even made my own tool to modify a Poetry-based file!), but maybe we should be pushing back on that. The lock data needs to represent a solve, and in particular to find transitive dependencies (or else it’s fairly pointless); between that and hashes etc. it will pretty well always be vastly more data than the input (and nobody will sit there and compute the hashes manually). I had thought that unifying the description of a locked dependency with an input dependency made sense, and would be worth some hiccups (i.e. it makes no sense to specify a hash for something that isn’t pinned) if that data could be in the same place; but now the idea seems pointless.

(Although I still do eventually want to be able to describe individual dependencies in a more detailed way than PEP 508 - if only to include non-native stuff. Thinking really idealistically, maybe it would be neat to be able to, e.g., specify a GitHub repository, and have the installer somehow take the repository and solved version number, and locate the corresponding release file.)

I could imagine a mechanism to associate an input requirements list with a lockfile that was generated from it. But I don’t know that there would be any use for it. (Also, the solver would be tempted to update pyproject.toml to point at the lockfile it just created.)

Okay, but who else needs that information besides the devs, and how will they a) obtain and b) use it?

I don’t know how to say more about it than the rest of that paragraph. Maybe I misunderstood the threat model (it seems to be dependent on the answers to my other questions).

… Ah, one more thing:

I understood this as representing the least “strict” level of reproducibility. Should it mean more than just pinning versions? Or only that?

5 posts were split to a new topic: The purpose of a lock file