Dynamic versions are convenient because we already need to track the version in git tags, so we decided to make git the source of truth and base package building on it.
This worked nicely because all package building and package management used to be procedural: you called setup.py, you called pip install, and it was you who decided whether something in the venv needed updating or a re-install. With lockfile-based package management, the workflow changes from procedural to declarative: you declare the list of packages you want, and the package manager materializes a coherent lockfile and environment with those packages present. In the case of {poetry,uv,..} run, you even skip the environment part and go directly to running your code with the packages you need (the environment becomes an implementation detail).
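To make the contrast concrete, here is a rough sketch of both workflows (the exact commands depend on your tool of choice; I’m using pip and uv as stand-ins):

```bash
# Procedural: you run each step and you decide when to re-install.
python -m venv .venv
.venv/bin/pip install -e .
.venv/bin/pip install requests
.venv/bin/python main.py

# Declarative: you state what you want, the tool materializes it.
uv add requests        # records the dependency in pyproject.toml
uv run python main.py  # lockfile and venv become implementation details
```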
While reading information from git is convenient for users, it has some non-trivial implications once a lockfile is involved. With static metadata, pyproject.toml is self-contained: we can read the pyproject.toml file(s) and use them for cache invalidation. If they match the lockfile and the user didn’t request an update, the resolution is fresh. Editables are .pth-linked, so they don’t need updating either (native packages and code generation come with their own heap of cache-invalidation problems, but that’s a separate discussion). We then do a venv freshness check just to be sure (optional, depending on your package manager flavor) and launch the user code.
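As a minimal sketch of that check, with hypothetical names and ignoring that real package managers record more inputs than a single hash:

```python
import hashlib
from pathlib import Path


def resolution_is_fresh(project_dir: Path, locked_hash: str, update_requested: bool) -> bool:
    """Static-metadata freshness check: the lockfile records a hash of
    pyproject.toml, so freshness is a plain file comparison and no
    project code ever has to run."""
    if update_requested:
        return False
    current = hashlib.sha256((project_dir / "pyproject.toml").read_bytes()).hexdigest()
    return current == locked_hash
```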
With dynamic metadata, all caching now depends on the state of the git repo! To determine whether the lockfile and the environment are fresh, we either need to make all package managers git-aware (and find some solution for other VCS), or we need to always run arbitrary code, which is slow (admittedly this is mainly a problem for uv, but uv sync often runs faster than starting a Python interpreter), fallible, and has a (solvable) bootstrapping problem. It also means the state moves from a plain file to something only accessible through a helper program.
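Sketched out, the two options look roughly like this (hypothetical helper names; hatch version stands in for any tool that can report a dynamically computed version):

```python
import subprocess
from pathlib import Path


def version_from_git(project_dir: Path) -> str:
    # Option 1: make the package manager VCS-aware. Fast, but git-specific
    # (other VCS need their own code paths), and the version is only
    # reachable through a helper program instead of a plain file.
    result = subprocess.run(
        ["git", "describe", "--tags", "--abbrev=0"],
        cwd=project_dir, capture_output=True, text=True, check=True,
    )
    return result.stdout.strip().removeprefix("v")


def version_from_build_backend(project_dir: Path) -> str:
    # Option 2: run arbitrary project code. Works for any dynamic source,
    # but needs an interpreter and the build backend installed: slow,
    # fallible, and with a bootstrapping problem (you need an environment
    # to compute the environment).
    result = subprocess.run(
        ["hatch", "version"],
        cwd=project_dir, capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()
```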
In other ecosystems, at least Cargo and npm, there have only ever been static metadata and static versions, and tooling developed around that (unlike Python, which started with only the ability to run arbitrary code; pyproject.toml isn’t that old yet). In Rust, there are for example https://github.com/crate-ci/cargo-release and https://github.com/MarcoIeni/release-plz. The latter inverts control by making the version change and creating the git tag for you. It has built-in compatibility checks, so it knows which level of version bump is right for the next release. I fully acknowledge that in Python this would be a costly change to a stable workflow, and that the Python ecosystem isn’t as far along with publishing tools.
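For illustration, and hedging that I know these tools from their docs rather than daily use: cargo-release bumps the version, commits, tags and publishes in one go (it dry-runs by default), while release-plz picks the bump level for you and opens a release PR:

```bash
cargo release patch --execute   # bump Cargo.toml, commit, tag, publish
release-plz release-pr          # let the tool pick the bump and open a PR
```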
(Disclaimer: I’ve never published a real JS package myself, so please correct me if I’m wrong here.) In the JavaScript ecosystem, npm version has an on-by-default option to create a git tag (npm-version | npm Docs). This is also a shift of control, where the tagging isn’t done by the git CLI (git tag), but by the npm CLI (npm version). This results in a workflow such as:
```bash
npm version patch  # Update package.json, commit and tag
git push --tags origin main
npm publish
```
npm also supports reading the version from git with npm version from-git (npm-version | npm Docs), reusing the latest git tag. Yarn 1 only seems to support creating git tags, not reading the version from them (yarn version | Yarn).
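For example, assuming the current commit already carries a tag (hypothetical version number):

```bash
git tag v1.2.3         # created by you or by CI
npm version from-git   # set package.json to the version of the latest git tag
npm publish
```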
I’m not saying that we can’t or shouldn’t make dynamic metadata work with lockfiles, but I want to point out the high cost this has for package managers and the jump in complexity, and to show the pros and cons of workflows centered around static metadata.