There have been various opinions rejecting the pyproject.toml [project] table for specifying the needs of a project not meant to be a wheel (e.g., a website, an application, a script file); let's call these application projects for short.
The dominant opinion is that the [project] table is unsuitable for applications because it requires the name and version fields:
I don't believe this opinion is strong enough to warrant rejecting [project] as a way to specify application projects. The stronger discontent with the status quo is not that these two metadata fields are awkward to use (in fact, applications can have meaningful names and versions[1]), but that there is no tool out there that supports managing an application project specified with the [project] table without one of the following limitations:
Intends the application project to be packaged and requires a [build-system] table:
Rye's rye sync, which creates the project's environment, will fail without a [build-system] because it must install the project as a package.
Hatch's hatch env create, which creates the project's environment(s), ignores the project metadata (see GitHub issue) if you do not intend to install the project as a package (skip-install = true), i.e., as something that would require the [build-system] table.
Ignores relevant metadata in the [project] table needed for the application environment:
Poetry's poetry install --no-root does not need a [build-system] defined, but it ignores [project] outright and instead uses its own [tool.poetry] (see this popular feature request from 2020 to use [project]).
Pip-tools' pip-compile and pip-sync can use the [project] table for defining and creating the application's environment, respectively, but they cannot enforce the Python version specified in project.requires-python.
Requires extra runtime information to treat the project as an application project:
Poetry, as before, requires adding --no-root to be application-friendly.
Of the tools listed, the closest to using the [project] table for applications is PDM; Poetry is a tiny step away, since a minimal [tool.poetry] table satisfies a minimal [project] table[2]. With the --no-self flag (--no-root in Poetry), they effectively specify, create, and install the application environment.
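As a rough sketch (the name, version, and dependencies are made up), this is the kind of minimal pyproject.toml such an application project would carry, with no [build-system] table because it is never built into a wheel:

```toml
[project]
name = "my-web-app"          # hypothetical application name
version = "2024.10.0"        # e.g., a date-based deployment version
requires-python = ">=3.11"
dependencies = [
    "flask>=3.0",
    "gunicorn>=21.2",
]
# No [build-system] table: the project is deployed into an environment,
# never built or published as a wheel.
```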
I hope that convinces folks to reconsider and use [project] to specify application projects. Recently, the story has changed from wanting to “specify the runtime environment of an application” (i.e., a [run] table) to generally accommodating different use-case environments (i.e., dependency groups). I understand that solving the application-project problem would take multiple PEPs, but those PEPs do limit future options (see the required name and version). PEP 735, if accepted as-is, will change the landscape so that people see the [project] table as only for packaging and [dependency-groups] as meant for application projects and others[3].
Many have claimed their application project does not have a meaningful name and version. I find it hard to believe there isn't a meaningful name for your application, and we should encourage versioned deployments of applications (albeit not force them; it's probably too late now). Despite this difference of opinion, there are no real side effects of including this metadata in the use or deployment of your application. ↩︎
As I'm writing this, I am pleasantly surprised. I had never thought of Poetry as application-friendly, since in the past I felt it was very packaging-oriented. ↩︎
Would anyone care to entertain the idea of a [project].type field? ↩︎
+1 on better support for projects that aren’t intended to be packaged as wheels. But I don’t think [project] is necessarily the right way of doing this. Most of the fields in the [project] table are designed around building and publishing wheels and sdists. I think a different table - maybe something like [application], although not all projects are applications, either - would be better.
What’s your use case for the [project] table? Specifically, what fields would you normally use and why is it useful that they go under [project] specifically?
My own answer: I use most of the required fields, and only version is particularly superfluous as I’m not really cutting releases. Personally I don’t find the version requirement that onerous, since it’s easy to put something there and leave it alone. The rest of the required fields are good practice for any kind of project, in my opinion: every project needs a name, a README, a brief description, etc.
It's less about [project] being especially useful and more about the cognitive burden of remembering that there are multiple options and deciding between them, if something else takes over this use case. Once I got used to the ecosystem, it felt natural to use the project table for any kind of project, and the fact that it does or doesn't create a wheel is incidental.
Plus, this pattern makes it simple to convert to a package later, if that ever makes sense.
I don’t think this has much to do with PEP 735, though. I think dependency groups are compatible with any use of the project table.
(Aside: I really need to make time to do more work on PEP 735, since I have ideas about allowing includes to cross with project.dependencies which need to turn into a proper spec.)
I contest this reading. [dependency-groups] doesn’t establish a space specifically for non-package projects!
Rather, it establishes a new namespace for dependencies – regardless of your project type – and carefully avoids requiring that a project be a package in order to use it. Necessarily, when you go from only having project to having project + dependency-groups, then anything which dependency-groups supports but which project does not is “only supported by dependency-groups.”
But that’s not the same as the new table being designed to satisfy “the same needs as project, but for different project types”. (And one of the problems with even trying to do that is that it’s been difficult to pin down exactly how non-package projects are shaped.)
There’s plenty of space to debate project table usage, or the addition of some parallel application or run table. But dependency-groups is intentionally positioned to be mostly orthogonal to those questions. I see it as a way for us to make progress even without trying to answer “what shapes of projects do we need to support”, and “can we support them all in one table, or one table per project shape?” and “are all of these project shapes mutually exclusive?” and so forth.
They are compatible with the applications use case. PEP 735 (the motivation section and others) is quite upfront about suggesting its proposal as (one of) the solutions to the application-project problem. @sirosen (I just saw your comment as I'm typing this; I think the verbiage in PEP 735, as I interpret it, strongly suggests this.)
So it comes down to which table the project-managing tool supports for use with application projects. It could be both. I am not against dependency groups in general, but I'm afraid of the tooling picking sides. For example, Hatch only supports dependency groups (via its own [tool.hatch.envs] table) as a solution and recently removed support for using the [project] table if your project is not intended to be installed/packaged. Changes like these detrimentally affect my workflow for application deployment (as one did last week), and discussion and PEPs influence tooling to change (which is why I've made a separate topic instead of continuing in this one).
To be fair to the tooling, I don't criticize them for taking a reactive stance and changing their features as the climate around this topic develops. I do love those that attempt to set the standard by being proactive, especially by proposing PEPs.
Being part of a solution for these projects doesn’t mean that it replaces the existing project table–it is designed to augment the existing capabilities while maintaining compatibility.
I don’t think there’s any reason to expect the bifurcation you’re worried about.
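For what it's worth, a minimal sketch of the two tables coexisting (names and pins are illustrative only): the abstract runtime dependencies stay in [project], and a group holds dependencies that [project] has no place for.

```toml
[project]
name = "my-package"
version = "1.2.3"
dependencies = ["requests>=2.31"]      # abstract runtime dependencies, unchanged

[dependency-groups]
# PEP 735 groups sit alongside [project] without replacing any of it.
test = ["pytest>=8", "coverage>=7"]
```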
Thanks for outlining its general intent. I would very much support dependency groups and PEP 735 if you were to express that same intent more explicitly in the section of PEP 735 on why the [project] table was rejected (by “you”, I of course mean all those involved in the draft).
If I were to rephrase why I wrote this topic: in short, it is a request to tool maintainers not to forgo the [project] table as a valid path in the future. PEP 735 makes it so easy to do so.
You may be only thinking of [project] in terms of dependencies. If so, then I can somewhat see your point (I don’t think it’s a significant concern, but I can see why you’re asking the question). But the majority of the values in [project] are unrelated to dependencies, and [dependency-groups] doesn’t affect them.
You're right, it's mostly dependencies, because that's the functional part of the [project] table for applications. Let me take the perspective of someone who has to deploy a Python application.
Wrapping back to your initial question…
Specifically, what fields would you normally use and why is it useful that they go under [project] specifically?
Here I'm saying there is a world where we can expand the purpose of pyproject.toml. Saying the [project] table or pyproject.toml “is meant only for packages” is as awkward and confusing as requiring name and version for an application workflow.
On PyPI, the term “project” basically refers to a package name. It has never really had a direct relation to the term “application”. Instead, it relates to dists: Glossary - Python Packaging User Guide. There are a lot of problems in the ecosystem already that come from recycling the same terminology over and over again. In a disconnected context it might make sense, but as things are right now, I don't think it would.
In your “rant”, you've brought up extras being a bad idea for concrete dependencies. I agree, it's the wrong mental model. However, I think people use extras this way because it is the only way you can share a resolved version (i.e., with concrete dependencies) of your application on PyPI:
See pdm-build-locked, which writes concrete dependencies (perhaps from pdm.lock) such that you can later install with mypkg[locked]. A similar feature can be found in Rust’s cargo install --locked.
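As a sketch of what that pattern amounts to in the built metadata (the pins below are made up), the wheel ends up carrying an extra whose contents are fully concrete:

```toml
[project.optional-dependencies]
# Hypothetical "locked" extra generated at build time (e.g., by pdm-build-locked),
# holding fully pinned dependencies alongside the abstract ones in project.dependencies.
locked = [
    "flask==3.0.3",
    "gunicorn==22.0.0",
    "werkzeug==3.0.3",
]
```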
So, however we decide to write this metadata (lockfiles and/or dependency groups), what also needs to change is PyPI[1]: there is currently no standard method to pip install an environment for deploying an application from a remote source[2].
Correct. And this comes from an assumption that PyPI is less a repository for apps and mostly one for libs. Historically, people have used different deployment/packaging techniques that don't involve publishing most app types to PyPI: Overview of Python Packaging - Python Packaging User Guide. Technically, putting a web app into a container also counts as packaging on some level, and in that case using requirements/constraints is perfectly fine.
Yep, I've seen such examples too. The recent one that comes to mind is gitlint. But again, the pins are for the env, not the app. Another reason the pins for app envs aren't usually on PyPI is that it has a somewhat different audience. People who want pinned and tested envs would normally go to the downstream redistributors; testing a lot of software together and providing it as a coherent repository is literally what the distro maintainers do.
Aha! So the remote bit of your use case is a missing piece of the puzzle for me. You want to shift the environment and workflow management into pip, which is essentially out of its scope as of today.
Though, your statement about being unable to pip install from remote is not exactly accurate.
The following works:
Note that neither is necessarily seeking to be a part of the metadata, because of the earlier considerations. In fact, PEP 735 specifically states that dependency groups are not to be exposed in that way. Perhaps a separate mechanism is necessary.
I'm not fully convinced of this… although I've likely had a very biased experience: I'm more likely to notice Python apps installed with pip/pipx than ones installed via a redistribution (e.g., an OS package).
and I totally used this just a week ago. Why isn't distributing Python apps this way more common?
To list a few pure Python applications (those that aren't lib/app hybrids like Flask, Click, Airflow, etc.) that insist on pip[x] install from PyPI and do not distribute a lock file (or zipapp, etc.) with their releases:
Some of these projects have lock files for dev and test purposes, but they neither distribute them nor use them in the build process. Is it that abstract dependencies are good enough? Or is pinning (or constraining the upper bound) in pyproject.toml a justifiable[1] evil (not-the-best-practice)?
OK, you've convinced me. PEP 735 does satisfy my immediate use case of deploying my app to my server[2]. I can just choose any compatible tool and deploy my application via a dependency group.
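Something like the following, as a rough sketch (the group name and pins are hypothetical):

```toml
[dependency-groups]
# A hypothetical deployment group: any tool implementing PEP 735 can install it
# into the target environment without treating the project as a package.
deploy = [
    "flask==3.0.3",
    "gunicorn==22.0.0",
]
```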
Sharing reproducible applications on PyPI and using a [project] non-package structure will have to wait. Reproducible deploys (without using containers) are much more important to me right now. ↩︎
It is neither missing nor rejected. Nor is it included. It’s out of scope.
Controlling the version of Python used in a new environment (as hatch, tox, &co allow) is an environment manager feature. It’s been a goal for PEP 735 to be implementable in a very wide range of tools like pip, pip-tools, hatch, tox, nox, poetry, pdm, and uv – and not all of these are environment managers.
If you want a standard which covers full specifications for environments, then you want something which is broader in scope than Dependency Groups.
I don’t think that’s a terrible thing to want – actually, I think it’s a very reasonable thing to want! – but it is not, IMO, our next adjacent move as a community. It’s really a different topic which would need a different spec.
One could criticize Dependency Groups for being too narrow and not solving enough use cases. I’d disagree – I think that by being narrow in what it does, it’s able to define something useful in a wide variety of use cases.
I think you should think about what the additional data you’d like added should do for the following tools:
pip-tools (pip-compile)
pip
tox when basepython is already specified / hatch when a Python version is specified
I don’t think it’s obvious. And even if you can come up with some reasonable answers like “ignore it”[1], that non-obviousness is a major issue. Users’ expectations are shaped by what they are able to express in the data.
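To make the non-obviousness concrete, here is a purely hypothetical sketch (PEP 735 defines no per-group Python constraint, and the conflict below is invented):

```toml
[dependency-groups]
test = ["pytest>=8"]
# requires-python = ">=3.12"   # hypothetical, non-standard field of the kind being asked for

[tool.hatch.envs.test]
python = "3.11"   # the environment manager already fixes the interpreter here;
                  # should the group's (hypothetical) constraint override it,
                  # error out, or be ignored?
```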
In summary, I am (politely, I hope! but) firmly saying “no” to the expansion of Dependency Groups to include Python version constraints or any other data that resembles a specification for an environment. Dependency Groups will be usable as inputs to an environment specification, and that, in my opinion, is evidence of the strength and benefits of clear separations of concerns.
which, for the record, I think is the worst possible answer ↩︎