Implementation variants: rehashing and refocusing

While looking up the current macOS platform compatibility tags, I stumbled across an issue suggesting that the “optimise for portability or for the current system?” logic for macOS is already problematic, because x86-64 wheels are preferred to universal2 ones when both are available: Order of architectures in platform tags on macOS · Issue #381 · pypa/packaging · GitHub

That’s not an issue, it’s the right thing to do. And it has worked just fine for several years now. The arguments around universal2 being more portable are a bit nonsensical. If the installer knows it’s on x86-64, then grab the x86-64 wheel. It has very little to do with portability, unless you want to manually copy envs around between machines (which is unsupported by Python packaging). universal2 wheels are an anomaly, and it’s debatable whether they should exist at all, just like we don’t have combined 32/64-bit Windows wheels. Please do not mix that into this discussion - it’s hard enough to even follow along with this thread already.

1 Like

Copying environments between compatible machines is entirely supported. You have to be careful about it to ensure you don’t mess up and inadvertently depend on things you’re not shipping, but it’s supported. There are lots of packaging tools that rely on this (including some that are mentioned in the linked ticket, but also things like conda-pack, shiv, and more).

The reason the issue hasn’t been pushed further under the status quo is that there are ways to ensure the x86_64 wheels are excluded when the environment builder knows it really wants the universal2 ones, and the problem only comes up when specifically building portable environments.

The connection to this thread is just to emphasise that we definitely need at least the ability to choose between “optimise for this hardware” and “optimise for portability” when selecting between available variants (recognising that that’s likely to actually be a spectrum between “highest performance with most restrictive hardware selection” and “broadest hardware compatibility with either lowest common denominator runtime performance or larger packages containing multiple implementation variants”).

This is definitely a critically important part of this addition, but I think we need to do better than making users choose between portability and speed at package install time. Instead, I hope we can get to a place like we’ve discussed above, where the variant can be somewhat orthogonal to the exact package specs, and we can have two different environment specs, one capturing a portable set of specs and the other a reproducible set. The default install behavior should be to always attempt an install optimised for the hardware at hand, but it should be easy to create environment specs that are more portable, and just as easy to take portable specs to a different system and use them to create a hardware-optimized environment there.

The user experience in question, to my mind, is whether people create environments iteratively from the CLI and then “freeze” them to create the environment spec to pass around, or instead write a more complete environment spec file and then create an env from it. The former case will be really hard to retroactively make portable. The latter case should be fine.

1 Like

I think we have a very similar idea for how this might work, Doug. I think we need more than just iteration over variants, though. I think we need a resolver step that operates on variant variables and values. This would give us the ability to express relationships among variant variables and values, which is going to be really important for things like mutual exclusion relationships between OpenMP and BLAS implementations.

Here’s a rough diagram.

I’m working first on a tool that produces packages with some prototype variant metadata, but coding this is my next step.

For the “backtracking” that I propose here, my hope is to just use resolvelib’s existing code. I haven’t looked at it closely, but the conceptual problem is the same. It may differ in how the constraints get formulated for input to the solver, but the idea is to avoid the need for a more complicated separate solver implementation.

1 Like

My immediate thought is that “collect variants by recursing the dependency tree” is problematic, because the dependency tree doesn’t exist (as a completely defined entity) at this point. Because Python packages can have different dependencies depending on which version or even which wheel you select, the dependency tree is only discovered incrementally, during the resolution process. That’s the key point I’ve been trying to get across, but which no-one seems to be picking up on.

For a very simple example, suppose we have package A. Version 2.0 of A depends on B; version 1.0 does not. B has a variant X. If I request installation of A, do I need variant X? At this point, I don’t know whether I’ll pick A 1.0 or A 2.0 (it might depend on whether A 2.0 has a compatible wheel for my platform, for example).
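The shape of that problem can be sketched in a few lines of Python. The package names and the index dictionary here are invented purely for illustration; in a real resolver, each entry would only become known after downloading and inspecting a wheel or sdist:

```python
# Hypothetical per-candidate metadata, discovered one candidate at a
# time during resolution (it is not available all at once up front).
INDEX = {
    ("A", "1.0"): {"requires": []},      # no dependency on B
    ("A", "2.0"): {"requires": ["B"]},   # depends on B, which has variant X
}

def needs_variant_x(chosen_version: str) -> bool:
    """Whether variant X of B is even relevant depends on which
    version of A the resolver eventually picks."""
    return "B" in INDEX[("A", chosen_version)]["requires"]

# Before resolution, "do I need variant X?" has no single answer:
assert needs_variant_x("1.0") is False
assert needs_variant_x("2.0") is True
```

The point of the sketch is that `needs_variant_x` cannot be evaluated until a concrete version of A has been chosen, which is exactly what the resolver has not yet done when variants would need to be collected.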

Now imagine that happening at the bottom of a deep dependency tree, with boto (which has hundreds, if not thousands, of versions) somewhere in the middle.

No practical algorithm exists which allows you to consider “the dependency tree” as a concrete, fully-known, entity. This is a fundamental complexity of Python packaging, which most other dependency resolution problems don’t have to deal with. We’ve had extended discussions about the possibility of requiring all wheels for a given version of a project to have the same metadata (which would be a step towards addressing this, but would not be sufficient by itself) and even that has proved impossible to get consensus on.

The purpose of the dependency tree idea is to allow discovery of the set of variants that needs to be considered for a given set of packages. If discovery is too hard, we can back off and just rely on some public list of all variants, along with the user’s choice of which ones to pre-install to enable. It puts more onus on the user to opt in to these things, rather than being given the choice to enable them where relevant.

I have been envisioning inverting this problem. You don’t ask whether you need a variant for a particular package. You ask what set of variants is “best” according to some priority ordering, and then you take the corresponding package sets (which kind of behave like mini-indexes), and install packages using the normal algorithms in these sets.

One thing I haven’t really settled on is how variant-less packages fit in here. They should probably be available in every variant package set, so that we can fulfill dependencies. Maybe another way to do it is to say that “the installation from the variant package set only installs packages that have variants” and “dependencies are satisfied from the non-variant package set as a later step.”
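To make the “mini-index” framing concrete, here is a minimal sketch of the selection logic. Every name, data structure, and filename below is invented for illustration; this is just the fallback idea, not a real installer:

```python
# Variant package sets behave like mini-indexes: consult them in
# priority order, falling back to the variant-less ("default") set
# for any package that has no variant builds.
VARIANT_SETS = {
    "x86_64_v4": {"python_flint": "python_flint-1.0-x86_64_v4.whl"},
    "default": {
        "python_flint": "python_flint-1.0-win_amd64.whl",
        "requests": "requests-2.32.0-py3-none-any.whl",
    },
}

def pick_wheel(name, variant_priority=("x86_64_v4", "default")):
    """Return (variant, wheel) for the first set that carries `name`."""
    for variant in variant_priority:
        wheel = VARIANT_SETS.get(variant, {}).get(name)
        if wheel is not None:
            return variant, wheel
    raise LookupError(f"no wheel found for {name}")

# A package with variant builds comes from the best-matching set...
assert pick_wheel("python_flint")[0] == "x86_64_v4"
# ...while variant-less dependencies fall through to the default set.
assert pick_wheel("requests")[0] == "default"
```

This corresponds to the second option above: variant-less packages live only in the default set, and dependency satisfaction falls through to it automatically rather than being a separate later step.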

This is all hand-waving in the absence of a working demo, but I think it is worthwhile to build that demo and poke at it to understand what the hard limitations might be.

FWIW, I think this is helpful and I support it. Maybe in the context of variants, there’s room for a higher-order definition that relaxes some of the need for every wheel of a given version to align. If you resolve variant values first, then can you say that every package with a given variant value must have the same metadata? This allows lots of nice variation between variant values while also preserving the benefits of metadata sameness in other ways. Older installers that do not understand variants would just not see these packages at all - it would require some specification of variant to see them. This is related to the way that PyTorch handles their wheels, where the variant metadata is encoded in the index folders instead of in each wheel itself.

1 Like

I thought that changing the syntax of requirements would invalidate all metadata standards that involve requirements, i.e. it would require a metadata 3.0 anyway. Part of my thinking in suggesting the use of extras was that the requirements syntax would be unchanged, so that at least some standards and tooling based on it would be unaffected.

Also, the reason I proposed using the platform tag in the wheel filename is that I hoped older installers would ignore wheels with unrecognised platform tags. More details need to be worked out, but I hoped there could be a way for a project to upload wheels/sdists that make things no worse for old installers while still achieving the new behaviour for new installers.

2 Likes

That would be the reason I described the approach of adding new fields to improve backwards compatibility as counterintuitive 🙂

The trick is that it is changing the permitted contents of existing fields (whether syntactically or semantically) that causes problems for existing clients. New fields are inherently ignored by existing clients, so as long as the “status quo” metadata continues to be published in the existing fields in the same way it has historically, it is possible to avoid a major version bump.

You definitely incur extra complexity doing things that way, but the pay-off is in smoother potential rollout plans for new functionality.

The situation with wheel filenames is similar: as long as older clients see the filenames they expect for default variants, the primary consideration for non-default variants is that older clients should fail to install such wheels rather than seeming to succeed but getting their installed distribution metadata wrong somehow. Putting the variant info in one of the existing fields may prove a convenient path to that outcome, but adding a completely new field may be judged even better (since it will fail early, at the filename partitioning step).

1 Like

I was imagining that you could do something similar with extras. In the python-flint case the current situation is that you have one wheel for Windows which is what gets installed:

python_flint-1.0-cp312-cp312-win_amd64.whl

Hypothetically in future we add some other wheels so it looks like:

python_flint-1.0-cp312-cp312-win_amd64.whl
python_flint-1.0-cp312-cp312-win_amd64+x86_64_v4.whl

Old versions of pip ignore the new file and continue to install the same wheel as before. New versions of pip allow you to select the alternative wheel explicitly:

pip install python-flint[x86_64_v4]

In the python-flint installation instructions we tell users something like:

Run ... command to find out if your CPU has AVX512, then install the latest version of pip and run pip install python-flint[x86_64_v4] to get a Flint build with the fft_small module and assembly enabled.

In future someone might want to add this sort of thing to a distribution requirement somewhere so you have an sdist with:

Requires-Dist: python_flint[x86_64_v4]

That is a problem because then both old and new installers could see this metadata (unless it only exists in wheels that old installers would ignore). A solution is that the default wheel can have an empty extras field sort of like:

extras = {
   'x86_64_v4': []
}

Then old installers install the old wheel and find the empty extras specification and consider it satisfied. New installers could know to check for variants metadata rather than just extras and then use that to select the other wheel.

This approach works for the python-flint case, where there is always a clear fallback: the status quo wheel, which is acceptable even if suboptimal. I’m not sure what a good fallback scenario looks like for other cases under any of the proposals. In cases where a basic pip install foo already doesn’t work (you need to use a custom index, etc.), then I guess the fallback is less of a concern.

There has been a lot of confusion about this sort of thing in this thread so let me be clear that for python-flint in particular there would never be a reason for another project to require a particular variant like this. I mention this only as a hypothetical example to consider how installing could work with this metadata.

The challenge here is that, in isolation, you can’t tell whether this is referring to an extra or a variant. That ambiguity is a problem, not a benefit, since it means the build tooling can’t provide any hints that this might pose a backwards compatibility problem with older installation clients.

By contrast, if new syntax is defined, then this would be disallowed:

Requires-Dist: python_flint(x86_64_v4)

And the build tooling could recommend replacing it with this:

Requires-Dist: python_flint
Provides-Extra: x86_64_v4
Requires-Dist-Variant: python_flint(x86_64_v4); extra == "x86_64_v4"

and handling detection of the more optimised version of the dependency at runtime.

There’d still be some compatibility issues with that (old tools wouldn’t handle dependencies declared on the python_flint[x86_64_v4] extra correctly, since they’d ignore the Requires-Dist-Variant field), but if anyone did run into problems, the discrepancy would be much easier to detect than it would be if the only way to identify the problem was to look at the content of the extras definitions, rather than the use of a new metadata field that old installers ignore.

Some of those potential problems could also be mitigated by having PyPI initially require that variant dependency declarations be limited to non-default variants until variant support in installation clients becomes more widespread. That is, the above example, when published on PyPI, would have to be written as:

Requires-Dist: python_flint
Provides-Variant: x86_64_v4
Requires-Dist-Variant: python_flint(x86_64_v4); variant == "x86_64_v4"

That way, default variants wouldn’t be able to transitively bring in dependencies on non-default variants; you’d only get one by explicitly requesting it at the top level of the installation request, which would only be accepted if the installer being used understood build variants.

It’s not elegant, but it’s still a smoother transition path than having to bump the major metadata version, or having every declaration of a dependency on an extra becoming a potential installation inconsistency trap.

Assuming pip is using packaging to do this, the newer wheels with extra info will be ignored as long as we don’t introduce a new tag portion to the file name:

For me, the question is: can this be done using user-chosen code to construct such an ordering?

Who is expected to choose the code that implements that API? And does it run at install time every time?

That’s heading towards making the platform tag something much more flexible (which I’m not arguing against, but it could lead to macOS compatibility being decided not by packaging but by code the user chooses).

That’s where my head is currently at as well. Letting the user run some code to dump out a JSON file representing their wheel tag preferences or something and then installers can use that instead of the broadly compatible tag ordering that they do now.

When do you see that code executing?

Hopefully installers can support that via separate lock files targeting the different scenarios.

Short answer: that would be up to the authors of installation and locking tools that wanted to pick non-default variants without requiring them to be explicitly requested.

Longer answer:

I suspect there would be four main approaches to handling the variant selector logic modules:

  • ignore them completely, only install non-default variants when explicitly requested (e.g. a basic installer that pushes the entire problem of selecting non-default variants off to locking tools would work this way)
  • only check for known variant selection schemes in the explicitly declared dependencies (allowing them to be checked and the preferred variant order for the current platform determined before the main resolution process starts)
  • allow checking for known variant selection schemes for all dependencies encountered during the dependency resolution process (whether those dependencies are explicitly declared direct dependencies or implicit transitive ones)
  • don’t support dynamic variant selector code execution at all, and instead accept config information that specifies known variant selectors and the preferred variant order to be used when resolving (i.e. require the selector logic to be run externally to produce the config metadata for the resolver. Depending on the use case, the variant order might even just be specified directly by a human writing values into a config file). This would have two subvariants like the above (config consulted only for direct dependencies, or for both direct and transitive), and could also appear in a form where the static config complemented dynamic variant selection rather than replacing it entirely.

The first and last cases are the ones I’d consider most important to consider when defining the underlying metadata, as if those work, then the dynamic configuration can come later as a convenience feature rather than as an essential required component of the system design (potential standardisation of the variant selector runtime APIs would then come later still based on observation of common patterns across selector APIs, rather than needing to be specified up front).

I’ve been doing a lot of reading, thinking, and off-line discussing and have a couple of observations and/or questions.

ISTM that there are two “classes” of selectors or variants [1]. There are static variants that are tied closely to the hardware or other environmental settings of the machine in question. These could be CPU and GPU architectures, OS, possibly compiler, etc. It’s not that these variants are set in stone (more on that below), but that one could in theory run some program to query the system and output a configuration file defining these selector dimensions. There are also “dynamic” variants, which is where I put BLAS, LAPACK, OpenMP [2]. I call these dynamic because, IIUC, they aren’t necessarily tied to specific machine definitions, and one could theoretically choose any particular ABI, but all packages installed in the same environment must be ABI compatible.

In some of my discussions I’ve also called dynamic selectors “narrowing variants” and static ones “non-narrowing”, meaning it might be possible to allow any compatible static variant to be installed, but once the installation commits to a dynamic variant choice, it narrows any other choices to require the same ABI. I wonder if this is an accurate or even helpful distinction?

Why it might matter: I could imagine defining static variants in a config file, written by a program that’s run when the virtual environment is created, not when pip install [3] is run. Let me further imagine that this writes a file called pyvariants.toml sitting right next to pyvenv.cfg. Relying on this config file as the source of truth removes the need to define a pip plugin API that then has to be surfaced in uv and every other installation tool. All you need to do is define the config file format and let each tool read it as it sees fit. You wouldn’t even really need to run the platform analysis tool – it would “just” be a convenience for generating an accurate pyvariants.toml file. You could instead create that file in your editor, and in fact, if the tool doesn’t analyze the system in the way you want, or you’re trying to create a cross-platform venv, you would edit the file anyway to make your choices known to the installation tool. Further, we leave it up to the venv-creation tool to decide how, or even if, to run this tool.
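To make the idea concrete, a hypothetical pyvariants.toml might look something like the following. Every table and key name here is invented for illustration; the actual format would need to be standardised:

```toml
# Hypothetical static variant configuration, generated at venv
# creation time (or hand-edited), sitting next to pyvenv.cfg.

[static]
cpu = "x86_64_v4"    # result of querying the local CPU
gpu = "cuda12"       # detected GPU runtime, if any

[preferences]
# Pre-narrowing hints for "dynamic" (ABI-narrowing) variants:
blas = ["openblas", "mkl"]   # try OpenBLAS builds first, then MKL
```

A cross-platform or portable venv would simply carry a hand-edited version of this file, with no platform analysis tool involved at all.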

This does mean that the static variant part would be tied to a venv, but that seems okay to me. I’m skeptical about whether you really can refresh a venv for new hardware [4] with any high degree of fidelity. Is it better to just nuke the venv and recreate it than to try to find which packages need to be uninstalled and reinstalled with new variant choices?

It doesn’t make sense for dynamic (“narrowing”?) variants to be captured directly in pyvariants.toml since you can’t know this until you start to resolve dependencies. You could however include a user’s preferences to “pre-narrow” the choices. Otherwise, as the resolution proceeds, you’d capture those dynamic choices in a JSON file (since it wouldn’t generally be intended to be easily human readable/writable) so that you could better ensure a compatible choice of variants as the installation process proceeds, but also importantly, so that subsequent pip install commands would know if there are narrowed variants in play from previous runs, and select compatible ones by default.
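The narrowed-state file described above could be as simple as the following sketch (again, all field names are entirely hypothetical), recording both which dynamic variants have been committed to and which installed distribution forced each choice:

```json
{
  "narrowed": {
    "blas": "openblas",
    "openmp": "llvm"
  },
  "narrowed_by": {
    "blas": "numpy-2.1.0",
    "openmp": "scipy-1.14.0"
  }
}
```

A subsequent pip install run would read this file, constrain its wheel selection to the already-narrowed variants, and append any further narrowing it performs.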

An important question to answer is whether variants are part of the wheel resolution algorithm or not. If not, does only package+version (i.e. distribution) matter? Implied here is that you can’t backtrack when pip install finds an incompatible set of dynamic variants. It would simply bail with an error, which wouldn’t be a great user experience, but would (I think) keep the scope of changes to the resolution algorithm to a minimum. Once you find a distribution that matches in package+version, you select a compatible wheel, but if you can’t find a compatible wheel, you don’t try to find a different package+version that might have a compatible wheel (how likely is that scenario anyway?). I think one important consideration here is that I expect these types of variant-driven venvs to be rare and/or specialized, so we should make life better for these users [5] without making things more complicated for the majority of package consumers – or risking the introduction of new bugs in algorithms that are battle-tested and in very widespread use.

Another random thought involves trying to build sdists when you don’t have a compatible variant available. I think this is just too difficult a problem to try to solve for this use case, if it’s even solvable at all. I wonder whether it’s time to disable sdist builds by default [6]?

It also seems to me that staged uploads to PyPI are an important enabling feature because you really want to be able to upload a suite of variant wheels, test them to make sure your variant resolution process works, and then publish them all in one fell swoop for public consumption.

I have no doubt that I’m missing lots of key details and maybe I’m wildly off base but I wanted to at least get some of my thoughts out there for consideration. Thanks for listening!


  1. terminology will be important to nail down once we get to the PEP stage ↩︎

  2. as described in the incredibly well-written and useful pypackaging-native key issue description ↩︎

  3. or any equivalent ↩︎

  4. e.g. when you install a new GPU ↩︎

  5. perhaps without completely solving all issues ↩︎

  6. The clever approach of hacking external binary wheel downloads into the “build” step of the sdist is useful, but should be considered a temporary hack until a more principled approach can be developed; any “disable sdist builds by default” approach would have to have per-package overrides for backward compatibility. ↩︎

8 Likes

(Apologies for the long lag in my participation. Work…)

What is the use case for expressing a dependency on a specific variant? We can’t do that today, and it’s not something I anticipated needing, which is why I’ve been looking at variants as a step in the file selection process rather than the dependency resolution process.

Instead of variant, substitute “B has a wheel built for one CPU architecture, but not for the architecture of the current host”. Pip would either choose the sdist (if one is available) or refuse to install that version of A (possibly going back into the resolver). Right?

We should treat variants the same way.

If a variant can be selected based on the selector rules, use that file to determine that package’s dependencies. If no variant can be selected, that version of the package is not viable for the current environment.

Forcing variants to have the same dependencies would limit their utility for breaking up some of the really big packages we have to deal with. It would also limit their ability to solve some other things that have come up in these discussions like providing Linux-distro-specific wheel builds that rely on system packages for libraries. If we don’t mix selectors with the resolver, though, then I think we avoid the complexity issues you’re worried about and we also get to address some of those other use cases.

I like this distinction.

The approach of saying that the dynamic settings could be pre-configured in a file associated with the virtualenv also makes a ton of sense. It’s a step up from requiring the user to provide the value on the command line, which I think came up elsewhere in the thread.

Yes, exactly. File selection is not part of version resolution. I think I’ve been under the mistaken impression that pip would backtrack if there was no compatible file available for the selected version. If that’s not the case, I’m OK with sticking with the error reporting behavior for exactly the reason Barry lays out here: it limits the impact of this change to the existing already very complicated resolver code.

I’d be OK with that change, but I think it’s orthogonal to the variant question. Some of the packagers already don’t deliver sdists specifically because they don’t want most of their users trying to figure out how to build their packages themselves.

I think you’re missing my point. I’m thinking of the OpenBLAS style of variant, where everything in an environment needs to use the same variant. I’m also assuming that we continue with the current logic that pip uses, of finding one wheel corresponding to each package/version, and resolving using those. And finally, let’s pretend there are no sdists, as they only complicate things.

Now, for A 1.0, we choose one of A-1.0-BLAS1 or A-1.0-BLAS2.
Similarly, for A 2.0, we choose one of A-2.0-BLAS1 or A-2.0-BLAS2.
For B, we can only choose B-1.0-BLAS1 (there is no BLAS2 variant).

For A 1.0, it doesn’t matter whether we choose BLAS1 or BLAS2, nothing else is getting installed. But for A 2.0, we must choose A-2.0-BLAS1, because we depend on B, and there’s no BLAS2 compatible wheel for B.

But how do we know this without starting the resolution process, and reading A’s dependencies? Which is what we don’t want to do because then we hit the combinatorial explosion of variants x versions x packages…

(It’s actually worse than this, because A-2.0-BLAS1 might depend on B, but A-2.0-BLAS2 might not. Dependencies differing between wheels like this is something I wish wasn’t allowed, but unfortunately it can, and does, happen. If we allow for this, things get even more messy - but I’m assuming you already have a headache, so I won’t elaborate on this any further for now 🙂)
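The constraint being described can be written out as a tiny consistency check. The wheel availability data below is invented to match the A/B example above; this only demonstrates the logical constraint, not how a resolver would discover it:

```python
# Which (version, BLAS variant) wheels exist, per the example:
WHEELS = {
    "A": {("1.0", "BLAS1"), ("1.0", "BLAS2"),
          ("2.0", "BLAS1"), ("2.0", "BLAS2")},
    "B": {("1.0", "BLAS1")},                 # no BLAS2 build of B
}
DEPENDS = {("A", "1.0"): [], ("A", "2.0"): ["B"]}

def viable(a_version, blas):
    """An environment is viable only if every installed package has a
    wheel for the single BLAS variant chosen for the environment."""
    if (a_version, blas) not in WHEELS["A"]:
        return False
    for dep in DEPENDS[("A", a_version)]:
        if not any(v == blas for (_, v) in WHEELS[dep]):
            return False
    return True

# A 1.0 works with either BLAS; A 2.0 is forced onto BLAS1 by B:
assert viable("1.0", "BLAS1") and viable("1.0", "BLAS2")
assert viable("2.0", "BLAS1") and not viable("2.0", "BLAS2")
```

The catch is exactly the one identified above: evaluating `viable` requires knowing `DEPENDS`, which the installer only learns by reading A’s dependency metadata during resolution.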

I honestly don’t see any way of incorporating BLAS-style variants without them being involved in the resolution process. CPU variants, on the other hand, just feel like additional forms of platform tag, and should be relatively easy to incorporate in the way you suggest. Maybe that’s why we’re misunderstanding each other?

Actually, maybe what we really need to do is to split the proposal into two parts - one for CPU instruction set style variants, where there’s no requirement for all wheels to use the same variant, and another for BLAS style variants, where it’s necessary for all installed wheels in an environment to use the same variant? I’m not sure there’s enough in common between the two use cases to make it productive to treat them as the same problem.

2 Likes

You could be right.

I’ve been thinking somewhat along the lines of what Barry said earlier in the thread, though I haven’t expressed it well. The BLAS dependency use case requires user input before choosing any packages. Once the user tells the installer which BLAS variant to use, that variant is used to choose the file. If the user wants BLAS2 and there is no variant of B for BLAS2, the installer should report that and fail (or backtrack; it’s still not clear to me whether that’s an option). That’s the right outcome, and I don’t expect the installer to work any harder to tell me that if I’d specified BLAS1 it could have installed everything I asked for.

2 Likes