Implementation variants: rehashing and refocusing

The discussions around metadata extension for implementation variants have gotten long enough that I thought it was time for another condensation/refocusing effort. I hope this is useful and does not detract from the other discussions. The threads that this discussion is drawn from are:

I have tried to capture topics in rough priority order, though some of these are too interrelated to be addressed independently of one another.

1. How filenames for variants can be differentiated

2. How variants play into the wheel candidate selection process (“finding” in pip terms)

3. How variants get selected on the client’s system

4. How lockfiles should handle variants

Plan going forward

I think the next step is to produce prototype implementations. I personally plan on trying an approach that consists of:

  • Produce a PEP 517 build backend (or some other scheme) to produce wheel files that demonstrate a way of embedding the additional variant metadata
  • Modify warehouse and/or devpi to present a per-package (not per-file) summary
  • Modify pip to add a variant-optimization step before the finding process, as broadly described in Selecting variant wheels according to a semi-static specification - #99 by msarahan
  • Implement variant metadata as part of the build tag (see the sketch after this list)
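
As a rough sketch of that last item (the 1cu121 build tag convention below is purely hypothetical, not an agreed encoding), the existing packaging library can already split a variant label out of a PEP 427 build tag:

from packaging.utils import parse_wheel_filename

# PEP 427 build tags must start with a digit; a hypothetical convention
# could append a variant label after that leading number.
name, version, build, tags = parse_wheel_filename(
    "demo_pkg-1.0-1cu121-cp312-cp312-manylinux_2_28_x86_64.whl"
)
print(name, version, build)  # demo-pkg 1.0 (1, 'cu121')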

General test cases for proof-of-concept:

  • static definition of variants to use, not tackling dynamic detection of system state (a toy sketch of the first three cases follows this list)
  • Matching a specified variant where the variant is required (this is the MPI use case, and perhaps somewhat the CPU SIMD use case)
  • Graceful fallback when the selected variant is not available
  • Matching a specified variant where the variant is optional, and we optimize by some criteria (this is the CUDA/cuDNN use case)
  • Implicitly “activating” a variant, such that subsequent installations will align with this variant (this is the OpenMP use case)
  • In all of these, performance will be important, so the behavior will be benchmarked while scaling the number of versions of dummy packages and the number of variants per package/version.
  • Demonstrate compatibility behavior with unmodified installers (pip, uv, environment managers)
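
To make the first three cases concrete, here is a toy sketch of what a semi-static variant specification might boil down to; the variant labels and names are all hypothetical:

# Ordered preference list, statically configured by the user (hypothetical).
PREFERRED_VARIANTS = ["cu121", "cu118", "cpu"]

def pick_variant(available: set[str]) -> str | None:
    """Return the most-preferred available variant, or None to fall back
    to an unvarianted wheel."""
    for variant in PREFERRED_VARIANTS:
        if variant in available:
            return variant
    return None

print(pick_variant({"cu118", "cpu"}))  # cu118
print(pick_variant({"rocm6"}))         # None -> graceful fallback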

I plan not to undertake these questions in the prototype, in the hope of keeping the scope as small as possible:

  • How should users install support for a variant? I’m going to assume that any custom environment marker is already available, and hard-code where necessary.
  • How should system property detection be done? I think the process-based approach of standalone executables is probably the right path, but I’d like to avoid creating any such executable for now. The hard-coded static variant metadata will be sufficient for the proof-of-concept.
  • Lockfiles. I assume that lockfiles will require additional development, but that this development will be additive to the prototype, rather than requiring changes to it. I don’t know how valid that assumption is, but I’m making it in the name of reducing scope.

I welcome feedback, especially if any part of the proposed prototype is a non-starter.

To be clear, I didn’t propose either of these two concepts. I was merely reporting the two flavours of lockfile that came up in the discussions on the new lockfile PEP (specifically, the ones that people seemed to think needed to be supported in any viable lockfile proposal).

As I say, the current consensus in the lockfile discussions is that it’s essential to support both forms.

I would strongly recommend that anyone interested in this proposal should review the discussions in the lockfiles thread that I linked just before the post of mine you quoted.

To save anyone time, the lockfile thread is at Lock files, again (but this time w/ sdists!).

I do not intend to address lockfiles with an initial prototype, and I hope we can discuss the other, more fundamental issues first.

I think that the critical question here is whether variants are different distribution names or different wheels/builds for the same distribution: variant distributions or variant builds. My preference is to have variant builds/wheels that all have the same distribution name and are built from the same sdist because in the cases I am familiar with that is literally what is happening.

Just having variant builds at all, and being able to specify them explicitly in requirements, is already a big topic/change in itself. I think that is what leads some people to point towards variant distributions as the quicker fix, but ultimately it would be better to make variant builds work. That just needs a lot of thrashing out, though, before even considering automatic selection in any detail, because it needs changes in many places:

  1. The database of installed distributions (originally PEP 376) would need to represent which variant build is installed (whether it was installed by pip, conda, apt etc).
  2. There needs to be a way to encode the variants in wheel filenames (PEP 427).
  3. There needs to be a way for tools to discover what variants are possible for a given distribution and version combination.
  4. There needs to be a way to encode the variants in PEP 440 requirements so that a user can request a particular variant from pip, or so that another wheel or a requirements.txt file can require a particular variant.
  5. There needs to be a way to build particular variants from sdists for the case when installation falls back on sdists (PEP 517).
  6. There also needs to be a way for maintainers/distributors to build and encode particular variants which is separate from the case where e.g. pip attempts the build. Tooling like cibuildwheel etc would need to be changed to build the variants.

Lastly, to add to the test cases for proof-of-concept: there needs to be a transition plan for any proposal. The question is what happens when e.g. an old version of pip encounters a distribution that is shipping new build variants on PyPI that it doesn’t understand. That is not just a hypothetical or short-term transitional problem, because old versions of pip are everywhere, will continue to be, and will continue to be used, often by inexperienced users.

4 Likes

Understood, and that seems like a fine decision to me (after all, there is no standard for lockfiles yet). But designing a solution that is incompatible with the requirements that we’ve established for a lockfile standard when it is finalised feels like it’s a non-starter for me, so it’s worth keeping those requirements in mind even if you don’t plan on doing anything more than that.

I agree that this would be the best approach. And I also agree that it’s a big change. As long as it’s clear that this is a lot of work, and it won’t be a simple or quick process, I think that’s fine. But there’s a very real risk that people will get burned out before we reach a solution, and that’s something to consider (for comparison, this feels like a bigger proposal than lockfiles, which have been under discussion for years).

3 Likes

I agree that figuring out the wheel/sdist relationship is crucial here. It seems like how wheels are connected to sdists in these variant cases is intimately related to Enforcing consistent metadata for packages. Some of the considerations raised by @rgommers here are particularly pertinent. If variants are different wheels with the same distribution and come from the same sdist, completely static sdist metadata may not be sufficient to capture the requirements of the different wheels built from that sdist. Concretely, to Oscar’s list above I think we would need to augment point 5 with “when building a particular variant from the sdist, it must be possible to modify additional metadata (such as dependencies) for the wheel”. I just want to make that explicit.

2 Likes

Yes, it should work like extras. It can still be possible to statically encode what the additional requirements for a build variant should be, just as it is in the case of extras.

I think the reason I didn’t mention this point in my list is that in my mind it already seems clear that the concept of build variants should be merged with the concept of extras. What Python does with extras is like what Rust does with features. There, a crate has features which can be enabled or disabled. Enabling a feature has two effects:

  • It modifies the build by conditionally including the code in the crate being built.
  • It pulls in additional otherwise optional dependencies.

This is also how it works in e.g. Meson, where options both alter the build and determine which dependencies are required. This is also what happens with autotools when you do e.g. ./configure --enable-blas, which again both alters the build and adds a requirement for a BLAS library. I assume that most build systems support this concept of building optional features that require optional dependencies.

Python’s extras are just a limiting case of Rust’s features, where the build is not altered but additional dependencies are pulled in. That works okay for pure Python projects where there isn’t much “building” going on and it is just as easy to test for other libraries at runtime as at build time. It does not really work if you actually need to “build the feature”, as is the case for Python packages with native code and native dependencies.

1 Like

I think Cargo features are a great analog to consider. Yes, I could foresee unifying the treatment there. I suspect that any such unification would require a reworking of the core metadata spec since we would need to make Provides-Extra a subfield of some other Feature (placeholder name) field. At the implementation level I don’t think introducing a compatibility layer to support both the old and new metadata for extras would be terribly difficult (famous last words), but it would definitely be a prerequisite for moving forward there.

Naively, I wouldn’t expect this particular piece (the changes to extras) to impact the wheel spec materially in a way that would require changes in concert with How to reinvent the wheel. That may be moot, though, since the other changes required to support variants might lump this change into that category anyway, in which case we might be more willing to consider breaking changes if it helps.

1 Like

I don’t think this is true, and I’ll try to illustrate where I see the distinction. Specifically, the “hierarchy of variability” that I see emerging is this:

  1. distribution (sdist/repo/local source tree): the actual source code being built
  2. build variant: most projects don’t have build options, so they have exactly one of these (which is also the only option that Python-level tooling currently supports). Projects like NumPy or PyTorch can have a large number of build options, hence attempting to use wheels for distribution can sometimes feel like a “square peg in a round hole” problem
  3. binary build (wheels): a fully pre-built binary artifact that can be installed and used without needing to build anything locally
  4. optional dependencies (extras): enabling extra features in an already installed component based on the presence of additional optional dependencies rather than changing anything in the component itself.

I do think there is an analogous relationship between build variants and installation extras, but that relationship isn’t “they’re the same thing”, it’s “build variants are to sdists and other source artifacts as extras are to wheels and already installed modules”.

At an ecosystem level, rather than trying to enforce “all wheels for a given distribution must have the same metadata”, we’d instead aim to enforce “all wheels for a given distribution build variant must have the same metadata”. For the vast majority of distributions that only have one build variant (their default build), those two statements would have the same effect, but a project like PyTorch could just define the relevant build variants rather than attempting to devise environment markers to express all the conditionality they would need to express.

I think viewing the problem this way also helps to illustrate why it has seemed so intractable for so long: the exact nature of the build variants needed is inherently a project specific concern, so attempting to find a grand unified categorisation scheme that could cover the whole of PyPI is a task doomed to failure.

And it’s at that point where the similarity to extras comes back into consideration: the core metadata spec defines extras as arbitrary strings and the only way to get the specific extras installed is to explicitly request them (or have another package declare them as a dependency).

It may be that a similar scheme might be viable for build variants:

  1. Define a way for projects to declare build variant names in their source metadata
  2. Define a file naming scheme for build variant sdists (with build variant wheel names being derived from that using the existing scheme for deriving wheel names from sdist names and target platform tags). For example: torch__build_cuda12_1-2.3.tar.xz
  3. Define a way to request that a build backend emit a variant sdist that defaults to producing wheels for the requested build variant (including collecting any additional build dependencies)
  4. Define an optional way to request that a build backend emit a variant wheel directly (optional since there is a fallback to request the variant sdist and build that)
  5. Define a way to declare a dependency on a specific build variant of a given distribution, using a syntax inspired by, but not the same as extras (e.g. maybe torch(cuda12_1)[regular, extras, here]).
  6. Define a process for resolving dependencies on build variants (first try a wheel for that variant, then look for the matching variant sdist, and then finally fall back to the base sdist and ask it to build a wheel for the requested variant)

Note: using a __ (double underscore) as the build variant separator has benefits and downsides: it means that older installation tools will still be able to install the build variant sdist, but they’ll misinterpret the distribution name. The clean error resulting from instead having an unknown character appear in the distribution name section of the filename as a build variant separator might be a better option overall, especially if it matched the build variant dependency declaration syntax (e.g. torch(cuda12_1)-12.3.tar.xz).
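
To make the syntax in point 5 concrete, here is a hypothetical parser sketch; neither the parenthesised variant syntax nor this grammar is standardized anywhere:

import re

# Hypothetical grammar: name(variant)[extra, extra]
REQ = re.compile(
    r"^(?P<name>[A-Za-z0-9._-]+)"
    r"(?:\((?P<variant>[A-Za-z0-9._]+)\))?"
    r"(?:\[(?P<extras>[^\]]*)\])?$"
)

m = REQ.match("torch(cuda12_1)[regular, extras, here]")
print(m.group("name"), m.group("variant"), m.group("extras"))
# torch cuda12_1 regular, extras, here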

2 Likes

It is always unnecessary to generate variant sdists. The original sdist can already build the variant wheel if the build backend is passed the requested variant information like variants = ['cu12']. You would always want to build the wheel directly rather than producing an intermediate sdist that is almost identical to the original sdist.
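
For example, a build frontend could forward the requested variant through the existing PEP 517 config_settings mechanism; the "variants" key here is an invented convention, not part of any spec:

import importlib

# Load the project's declared build backend (module name is illustrative).
backend = importlib.import_module("some_build_backend")

# build_wheel already accepts arbitrary config settings per PEP 517;
# the "variants" key would need to be standardized.
backend.build_wheel("dist", config_settings={"variants": ["cu12"]})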

Why not just use the same syntax as for extras, i.e. torch[cuda12_1]? An installer already needs to consult the torch distribution to discover what cuda12_1 means as either an extra or a build variant. We just need to ensure that the metadata describing build variants is found in the same places as the metadata that describes extras.

Does the installed package data (PEP 376) actually store anything about extras?

As far as I know extras are only an install-time concept: after installation you simply have some distributions installed.

If extras are only an install time concept then what benefit is there in having them be distinct from build variants?

Resolvers need to use extras to distinguish candidates. That wouldn’t be the case with variants (I don’t think). Using extra syntax for variants would be very difficult for pip to handle, as we would need to know what’s an extra and what’s a variant before we access the distribution metadata.

That sucks, and it’s a really bad implementation detail, but extras are very tricky to handle in resolvers[1]. Basically, it’s going to be extremely important that any design - and especially one that impacts extras in any way - is demonstrated to be implementable with the resolver technology we have available, and that’s far from self-evident…


  1. at least, in pip’s resolver, and we tried really hard to find a cleaner approach but couldn’t :slightly_frowning_face: ↩︎

1 Like

I might not understand this enough but…

If the variants have different requirements (different dependencies) then would the resolver also need to use variants to distinguish the candidates?

In other build systems variant builds (“features” etc) are very often connected to optional dependencies whether those are build-time or run-time dependencies.

How does pip use extras to distinguish candidates without looking at the distribution metadata to find out what the extras are?

If I ask for foo[bar] how does pip use the bar part when distinguishing candidates for foo without actually looking at the metadata in foo that says what bar actually means?

I’m assuming here that “candidates” means “potentially allowable versions of a distribution”.

At the point we’re building the part of the dependency tree that introduces an extra, all we have is a requirement (which may refer to a non-existent extra). So we build that section of the tree on the assumption that the extra exists. Later, if we actually need to look up the dependencies (because we’ve committed to exploring that part of the dependency tree, and therefore we’re prepared to pay the cost of fetching the metadata) that’s when we discover if it’s valid (and if not, we discard that part of the dependency tree).

There’s an example in the resolvelib source of how to handle extras - it’s extremely simplified compared to pip’s implementation, but it covers the basic idea of needing to introduce “synthetic” candidates that reflect the existence of extras. The doc comment in resolvelib/examples/extras_provider.py at main · sarugaku/resolvelib · GitHub gives an overview:

Python package dependencies can include “extras”, which are additional
dependencies that are installed “on demand”. For instance, project X could
have an additional set of dependencies if PDF generation features are needed.
These can be defined for an extra “pdf” and requested on install as X[pdf].

The basic resolvelib algorithm cannot handle extras, as it builds a dependency
graph which needs to be static - the edges (dependencies) from a node
(candidate) must be fixed. Extras break this assumption.

To model projects with extras, we define a candidate as being a project with a
specific set of dependencies. This introduces a problem, as the resolver could
produce a solution that demands version 1.0 of X[foo] and version 2.0 of
X[bar]. This is impossible, as there is actually only one project X to be
installed. To address this, we inject an additional dependency for every
candidate with an extra - X[foo] version v depends on X version v. By doing
this, we constrain the solution to require a unique version of X.
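
In toy form (this is not pip’s or resolvelib’s actual data model), the injected dependency looks like this:

from dataclasses import dataclass

@dataclass(frozen=True)
class Candidate:
    project: str               # e.g. "X"
    version: str
    extra: str | None = None   # set for synthetic candidates like X[foo]

def dependencies(candidate: Candidate) -> list[str]:
    deps: list[str] = []       # the project's declared dependencies go here
    if candidate.extra is not None:
        # The injected edge: X[foo] version v depends on X version v,
        # forcing a single version of X across all of its extras.
        deps.append(f"{candidate.project}=={candidate.version}")
    return deps

print(dependencies(Candidate("X", "1.0", extra="foo")))  # ['X==1.0']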

I don’t think it’s productive to go too deeply into how pip models extras at this point, though. The key thing is that as a pip maintainer, I have no idea if it’s even possible to implement the sort of unification of variants and extras that you’re suggesting[1].

It would be much easier to validate any proposal in this area if it didn’t require changes to the resolution algorithm. I don’t know if that’s feasible, though. Replacing the resolver algorithm is possible in theory, but it’s not something I’d want a proposal for build variants to depend on…


  1. And even if it’s possible, doing so without serious performance degradation would be very challenging ↩︎

Suppose that at this point you fetch the metadata and it turns out that some of the extras are in fact build variants. Let’s say the requirement was foo[A,B] and when you fetch the metadata it turns out that A means a build variant and B is an extra.

Is it a problem that this distinction was not known until this time?

Regardless of whether A or B are build variants or extras, either can entail additional requirements in the dependency tree. The difference is that if A is a build variant rather than an extra, then that affects which wheels are acceptable, or how a wheel should be built if building from sdist.

I misread this as “Using additional syntax…” for a moment, and was confused.

Reading properly, yeah, this was the main thought I had when forming my suggestion: for resolution purposes, each build variant is effectively a different distribution, while extras just declare additional optional dependencies.

Using parentheses for that is actually inspired by a feature in Fedora’s package installer, where “category(name)” provides an aliasing mechanism for packages (e.g. “pydist(numpy)” to look up a Python package by its PyPI name rather than its Fedora one).

@oscarbenjamin is right that we wouldn’t need to define variant sdists, though. Instead, we would only need to define a way to generate variant metadata, which can all be included in the one sdist (likely even in the one metadata file, similar to the way extras are)

It’s also possible to link the two systems via higher-level packages mapping their extras to different build variants of lower-level packages, so it would be necessary to record in the installation metadata which build variant is currently installed. That would also suggest that depending on “name” would accept any variant (grabbing the default if nothing else is installed or requested), while “name()” would specifically require the default build variant, and error out if that conflicted with other requirements.

My last comment made me realise there is a succinct way to explain the key difference between depending on build variants and extras:

For a given distribution, you can request as many of its extras as you like, and that’s fine as long as their declared dependencies don’t conflict.

By contrast, for build variants, you must install exactly one into any given environment. They inherently conflict with each other and cannot be mixed and matched the way extras can.

The confusion when comparing this with a system like Cargo is that Cargo is a tool for declaring Rust source dependencies. That means you can mix and match feature dependencies the way you can mix and match extras in Python: since you’re building your own copy of each crate, you’re not depending on a prebuilt binary published to an artifact repository.

5 Likes

We need to be clear about our terminology here. When I say that we merge the idea of “build variants” and “extras” what I mean is that they both end up being “features” that are enabled or disabled. You have a distribution foo and it has optional features A, B, C, D. You can request to install e.g.:

foo
foo[A]
foo[B,C]
...

At the pure requirements level, e.g. when distinguishing candidates, there is no distinction between features that affect the build and features that do not. The features should match the semantic behaviour of extras so that requirements involving them can be combined symbolically, like:

foo > 1.0 && foo[A]  -->  foo[A] > 1.0
foo[A] && foo[B]     -->  foo[A,B]

Some features will be mutually exclusive so there needs to be a way to indicate that in the metadata. Note that extras can be mutually exclusive as well if their combined requirements are unsatisfiable.
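
A minimal sketch of that symbolic combination, using the packaging library and treating feature names syntactically as extras, per the proposal above:

from packaging.requirements import Requirement

def merge(a: Requirement, b: Requirement) -> Requirement:
    """Union the requested features, intersect the version constraints."""
    assert a.name == b.name
    extras = ",".join(sorted(a.extras | b.extras))
    extras_part = f"[{extras}]" if extras else ""
    return Requirement(f"{a.name}{extras_part}{a.specifier & b.specifier}")

print(merge(Requirement("foo>1.0"), Requirement("foo[A]")))  # foo[A]>1.0
print(merge(Requirement("foo[A]"), Requirement("foo[B]")))   # foo[A,B]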

When you build a wheel for foo, some of those features get baked in or out, so if you do

pip wheel .[A]

then what comes out the other side might be

foo-1.0-cp312-cp312-win_amd64+A+B.whl

where I am putting the A and B tags in the platform tag. There can be different cases for what happens with foo’s features, e.g.:

  1. Feature A was requested in the build and was therefore enabled and baked in to the wheel.
  2. Feature B was not explicitly requested but the build backend detected something in the system and decided to enable it so it is also baked into the wheel.
  3. Feature C was not requested and the build backend decided to disable it. This wheel cannot satisfy foo[C] because the C feature was not built.
  4. Feature D remains an optional feature of the wheel that can be enabled by installing some additional dependencies.

The last case (feature D) is how current extras work. The fact that the feature remains optional after building a wheel is what distinguishes extras from other features.

Exactly how these cases work obviously needs to be specified somehow in the metadata. When selecting wheels, pip needs to know somehow that the wheel with A+B can satisfy the requirement foo[A,D], because D is an optional feature of the wheel (an extra).
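
As a toy sketch of that selection rule (all names here are hypothetical): a wheel is acceptable if every requested feature is either baked in at build time or still available as an install-time extra:

def wheel_satisfies(requested: set[str], baked_in: set[str], optional: set[str]) -> bool:
    return requested <= (baked_in | optional)

# The A+B wheel above, with D left as an optional (extra-style) feature:
print(wheel_satisfies({"A", "D"}, baked_in={"A", "B"}, optional={"D"}))  # True
# ...but it cannot satisfy foo[C], since C was disabled at build time:
print(wheel_satisfies({"C"}, baked_in={"A", "B"}, optional={"D"}))       # False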

Attaching the feature tags to the distribution name potentially makes more sense conceptually than using the platform tag. I imagine that using the platform tag plays nicer during the transition, where old tooling interacts with new wheels, although a full specification of everything would be needed before we could evaluate that.

1 Like

It makes sense to me to think of these variants as just another kind of extra, but there are practical differences.

I think the biggest difference is that “build-time” extras generate different[1] wheels, while “install-time” (i.e. status quo) extras do not. That makes a big difference in terms of the infrastructure, and it might make sense to separate them purely on those grounds.


  1. mutually-exclusive? ↩︎

1 Like

I realize I didn’t add anything new to the discussion with that :sweat_smile: but I was trying to summarize a scenario I was imagining:

My impression is that a build-time extra can induce requirements on later installations: e.g. if I pip install numpy[mkl], then later packages should install their own mkl variant. If this is the behavior, I’m worried it’ll be confusing and lead to the same “why am I compiling this???” confusion when someone tries to install a package that clearly has a wheel available for their system.

Major packages might be expected to make all the variant wheels available (although it could be a pain for them), but lots of downstream packages will not, and this can cause a lot of unexpected compilation (or failed installs).

1 Like

I think it is important to remember that not everyone is downloading wheels from PyPI. I expect that many projects would have features that can only be selected when building from source and that are not available from PyPI, or at least that the PyPI wheels would only use a particular set of features. It might even be that different features are used for, say, conda packages vs PyPI wheels vs Debian packages vs local builds. Then the purpose of having metadata about the build variants is not always that an installer ends up trying to satisfy them, but instead that an installer can recognise when binaries from different sources are incompatible, like mixing wheels from PyPI into a conda environment. We already have many situations where wheels/binaries are different/incompatible, but currently there is no way to encode any build differences in metadata.

An example is that the SciPy wheels on PyPI could require numpy[openblas32_pypi] and then pip can see that they are incompatible with a conda install of NumPy or a local build that hasn’t been built with the openblas32_pypi feature. The openblas32_pypi feature might even be unbuildable for a PEP 517 build frontend since the actual process of building it involves running the whole build in a manylinux docker container, building a BLAS library, running auditwheel etc.

I think most projects will not want to have build variants at all, but there are important cases where it is useful to do so. Of those that do, I also don’t think many will want to upload many variant binaries, so in practice they will need to settle on a small set of useful options for their binaries.

2 Likes