PEP 621: how to specify dependencies?

They are not actually: one is a “standard” that is specific to Python while the other is a more generic standard that spans multiple languages and tools.

So, you agree that the TOML approach has more advantages than PEP 508 from a metadata file standpoint?

Because you know the specification. What about new users or occasional Python developers?

And yet, when I see the number of files that get it wrong (or use programmatic checks instead), I feel that it might not be the clearest specification.

I chose it because of the following main reasons:

  • Readability (debatable)
  • Discoverability: it’s easier to find if a dependency exists in a dict than in a list
  • Explicitness
  • Programmatic manipulation: You can’t easily manipulate PEP-508 strings to change parts of it compared to TOML elements.
  • Consistency with what exists in other popular languages. This was the principal factor that led me to this decision.

Nothing stops you from loading that list into a dictionary once you read the file. And they’re equal.
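
To illustrate, a minimal stdlib-only sketch of "loading that list into a dictionary": the requirement strings and the name-extraction regex here are illustrative, not from any standard library helper.

```python
import re

# A PEP 508 dependency list as it might appear in project metadata.
requirements = [
    "requests[security]>=2.8.1; python_version < '2.7'",
    "cachy~=0.3.0",
    "tomlkit",
]

def project_name(requirement):
    """Extract the leading distribution name from a PEP 508 string."""
    match = re.match(r"[A-Za-z0-9](?:[A-Za-z0-9._-]*[A-Za-z0-9])?", requirement)
    return match.group(0)

# Index the list by name for cheap "does this dependency exist?" lookups.
by_name = {project_name(req): req for req in requirements}
print("cachy" in by_name)  # → True
```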

Not sure what part of it you consider to be more explicit?

Why? What’s wrong with “you update the property, and then call str() on it”?
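
For context, a minimal sketch of that workflow, assuming the `packaging` library is installed (it is what pip itself vendors): parse the string, mutate the attribute, serialize back.

```python
from packaging.requirements import Requirement
from packaging.specifiers import SpecifierSet

req = Requirement("cachy[msgpack]~=0.3.0; python_version >= '3.6'")

# The parsed object exposes structured attributes...
print(req.name)            # cachy
print(sorted(req.extras))  # ['msgpack']

# ...which can be updated in place, then turned back into a string.
req.specifier = SpecifierSet(">=0.4,<1.0")
print(str(req))
```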

The biggest problem here is that getting to where other languages are would be a lot of pain. So the question is: are we prepared to hurt a lot over the next few years just to be on par with other languages? And, while getting there, also losing some current features (e.g. the copy-paste-ability of specifications)?


That’s an extra step just to circumvent an issue that could be solved at the specification level.

You have the names of the elements specified directly in the file (like extras), which helps make it more self-documenting.

That’s an extra dependency you need, when you could instead rely on the fact that any TOML parser will return native types that are easy to manipulate.

So, we just give up and don’t try to improve on what we have? Shouldn’t this be a goal in itself? To provide a user/developer experience that is on par with what other languages have?

That being said, one reason other languages were able to pull this off is because they mostly have one or two tools of reference, instead of several tools like we have in Python, so making a transition like this is easier.

I think Donald’s point was:

  • one is a string following a single specification: PEP-508,
  • the other is a string following & combining several specifications: TOML & PEP-508.

If we find out that most people outside of our bubble prefer the TOML way, then the answer should be a resounding “yes” from all of us here.

That’s a good thing! There should be a core dependency parser to avoid duplicate work by setuptools, Poetry, Hatch, Flit, etc.


I don’t think this is an issue, because even in your specification, validating that e.g. the version strings are valid requires an extra step. So given that you already need an extra step, that extra step might as well do this operation automatically too.

AFAIK the only part of PEP-508 that doesn’t specify the key explicitly is the extras. You get just as explicit with platform/Python-version keys. So in this sense the table format helps a bit, but just for extras.

The validation TOML offers is at best light validation. You can specify a lot of entries that are valid TOML but incorrect Python specifications. Package names, versions and Python-requirement specifiers come immediately to mind (not to mention what happens when someone passes a list instead of a dict as a value in the table, or uses an integer as a key instead of a string, etc.). Considering that for good UX you’d want to validate all of these, you’re already looking at an extra dependency.

We should improve things, but we need to balance the benefits against the price we as a community have to pay to get there. And in this case, from what I’ve seen so far, we’re talking about marginal benefits with a significant resource investment to get there.

We need to work with what we have, not what we wish we had. Going down the TOML path will put strain on the entire ecosystem, not just 1-2 tools in Python, sadly.

I agree that the TOML approach has advantages when you’re utilizing multiple features of PEP 508 at once. I think it’s either slightly worse or about the same in the simple cases.

Like I said, I don’t think this is a situation where either solution is just better across the board for the end users’ experience. I think you can construct real-world situations where either one “wins” depending on which aspects you personally want to optimize for.

If one solution was just better in every situation I think you’d see a lot more enthusiasm for standardizing on that one solution. When there is no clear winner, status quo is typically the winner.

I would be very interested in hearing from people like @rhettinger who teach or have taught Python professionally. Do any of us here have regular interactions with newcomers outside of bug reports?

When it comes to a PEP, the PEP author makes a call and the PEP delegate either agrees or doesn’t. :slight_smile:

Oh, I’m not. We will reach a conclusion somehow.

FYI I asked once on Twitter which people preferred who knew both formats, and the results were inconclusive/leaned towards PEP 508. But once again, that was a selected audience that had exposure to both.

I think we have acknowledged that everyone in this conversation is bringing biases based on the tool(s) they maintain and what that tool currently supports. That pretty much guarantees a clean answer will not happen among ourselves.

So, how do we want to settle this? A poll here that we promote as widely as possible? A bunch of individual polls where we then come back with the results? We reach out to trainers and teachers and ask them to talk to their current classes to see if beginners have a true preference?

I think that the unstated assumption here is that we don’t realize what people find difficult about using our tools because we don’t find it difficult, but I think it’s actually really hard to get the information we want here.

In some ways, people already involved in Python packaging are the perfect people to ask about it, because we’re the ones dealing with a diverse group of users, dealing with bug reports, etc. We also tend to do the most complicated things with packaging and know the right way to do things. Also, many of us were motivated to get involved in packaging to fix problems we had ourselves.

I am not saying we should ignore our users (quite the contrary), but I also think that we need to acknowledge that often if you ask beginners whether they like X or Y, they’re often making that choice without a deeper context, and in the end we would get a worse experience by taking them at their word. (I say this as someone who has made UX suggestions that were accepted in beta tests and come to regret them enough times to feel hesitant about it.)

I don’t think we should be polling people for preferences. I think if we can agree that the question of which way to specify dependencies comes down to (or would be significantly informed by) a disputed factual question, we should come up with a strategy to determine the truth.

That said, I’m not convinced that our differences really come down to factual questions. People don’t usually complain or have problems with PEP 508, and they don’t really complain about Poetry’s way of specifying dependencies. That suggests to me that both are good enough and that people won’t be actively confused by using either one. Given that PEP 508 is already standardized and in wide use, that we cannot deprecate it in favor of a TOML-based system (which won’t work with all config systems), and that people will need to learn it anyway, I’d say a tie should go to PEP 508. Maybe the result would be different if we were designing this from scratch in a vacuum, but I find it much more plausible that people will lament the proliferation of ways to declare dependencies than that they will lament the fact that we’re using a compact DSL.


I haven’t formally polled this, but my gut-feel (based on many conversations, for example, I’ve just spent an entire day at EuroPython chatting with attendees about all kinds of stuff) is that the most popular approach would be “just pick one and tell me exactly what to do”.

And I agree with Paul (all of the Pauls :wink: ): existing standard wins over writing a new standard.


Is this true though?

Both pipenv and Poetry changed the metadata format and I didn’t see any backlash for that. So I think people are more than willing to follow any new standard.

And since we are specifying a brand new standard anyway I don’t see why we couldn’t go all in.

Note that last I heard, the long-term plan is to deprecate the concept of “extras” and make them into regular packages with [brackets] in the name. (So e.g. we’ll have a requests[security]-2.24.0-py2.py3-none-any.whl, which is a regular package that contains no code, and depends on the appropriate versions of requests + pyOpenSSL.)

So it doesn’t make a lot of sense for a new format today to hardcode the string extras, or to split up the extra from the package name.

Honestly, this is the first time I’m hearing of it, and I’d like to know the rationale behind it, because it seems like a bad idea.

And I gave the example of extras but that applies to markers and git dependencies too, for instance.


This is off-topic of course, but that sounds like an absolutely horrible plan.


It is indeed off-topic, but what is relevant here is that the concept of extras is in discussion (more precisely, there have been some comments, but no-one has really had the energy yet to make anything of it). We may end up looking at something more like how Unix distributions do things, with “recommends”, for example.

The term “extra” is pretty well-known in the Python packaging community, so even if we decide to, changing it won’t be easy - but promoting “extra” to a named element of a dependency specification (rather than just a specific bit of syntax the way PEP 508 has it) will make it even harder to “rebrand” the idea.

I don’t have a particular opinion on the matter here (other than saying that I find the terminology around extras pretty confusing, personally) but I think that’s the key thing to take away from @njs’ comment.


[mod hat on] Please don’t make this kind of content-free judgement about other folk’s work without even knowing the details; it’s not helpful for having good technical discussions. [mod hat off]

FWIW, my personal feeling is that most of this discussion is a red herring / classic bikeshedding. IMO the two genuinely complicated parts are learning the operators for version comparisons, and mini-language for markers. These still exist in the “exploded” version, they just have a TOML-shaped frame around them instead of a PEP 508-shaped frame.

I appreciate the comparison here, but to me it’s pretty underwhelming once you fix the spacing in the PEP 508 version to make it readable. There are a few edge cases that might favor one or the other, but seriously, who cares about "cachy ~= 0.3.0" versus cachy = "^0.3.0"?
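
For what it’s worth, the two spellings are close to equivalent: PEP 440’s `~=0.3.0` pins the 0.3.x series much like Poetry’s `^0.3.0` does. A quick check with the `packaging` library (assumed installed):

```python
from packaging.specifiers import SpecifierSet

spec = SpecifierSet("~=0.3.0")  # compatible release: >=0.3.0, ==0.3.*
print(spec.contains("0.3.5"))   # True
print(spec.contains("0.4.0"))   # False
```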

I think the only reason other languages use TOML/JSON is because they started out using those formats for all their metadata, so it wasn’t worth the bother of specifying a complete DSL for dependencies. Python OTOH doesn’t have that history and already has the DSL specified, so we might as well use it.

Anyway, that’s my 2 cents. IMO the most productive thing would be to pick one and move on :slight_smile:


That said, I guess there is a substantive question here, which is how to encode extensions to PEP 508. AFAIK every pinning format has a mechanism for doing things like requesting in-place installs, allow dev-dependencies, specifying hashes, etc., and it would be good to have a place to put those. Maybe that should be a separate thread though, since this one is so deep in the weeds?

I definitely could have been clearer about explaining it, but this is the key benefit pipenv/Pipfile get from moving the package names out as TOML table keys: it allows the value to be a string for a plain dependency, or an inline table for something more complicated.

That said, it would be reasonable to say that these standard fields are intended for the kind of dependency metadata that can go into a built package, and anything like editable installs will remain in tool specific formats for now.

Interestingly, many tools that used the exploded form for specification from the beginning still invent a DSL at some point anyway. Cargo, for example, uses exploded TOML, but dependencies in Cargo.lock (which is also TOML!) use a string form of the same specification. Bundler uses a Ruby-based DSL for specification, but the DSL is compiled into a string form in Gemfile.lock, even though the rest of the file is in a YAML-ish custom syntax that clearly supports structured data.

I don’t know their reasoning behind it, but it does not seem that strange to me in practice to have different dependency specification formats for humans and machines.