We are happy to announce that “PEP 817 – Wheel Variants: Beyond Platform Tags” has been merged, and the full text of the PEP is now available on peps.python.org.
History of Conversation
- [February 2021] What to do about GPUs? (and the built distributions that support them)
- [May 2024] Selecting variant wheels according to a semi-static specification
- [June 2024] Implementation variants: rehashing and refocusing
- [August 2025] WheelNext & Wheel Variants: An update, and a request for feedback!
- The primary concern in this thread was about security. As a result of that thread, we changed the wheel variant design to an opt-in design: “All the tools that need to query variant providers and are run in a security-sensitive context, MUST NOT install or run code from any untrusted package for variant resolution without explicit user opt-in.” With the expectation that installers vendor the most commonly used providers (which the pip and uv authors seemed willing to do), this should strike a better balance between security and usability.
TL;DR
PEP 817 – Wheel Variants: Beyond Platform Tags is a proposal that addresses limitations in how Python packages handle hardware-dependent builds (GPUs, CPU instruction sets, BLAS variants, etc.). PEP 817 enables automatic selection of optimized wheel builds based on system hardware. Instead of separate indexes, package names, or manual selection, users run `pip install <package>` and the right variant is automatically installed.
Proposal to help guide this conversation
To help this discussion remain organized, we’re committing to the following:
We’ll post a summary every 1-2 weeks (depending on discussion volume) capturing the major points, concerns, and themes being raised across the thread. These summaries will help new readers catch up and ensure no voices are lost in a lengthy conversation. We commit to representing all viewpoints fairly and accurately - please call us out if you feel something has been misrepresented, and we’ll correct it immediately.
We hope this will make it easy for anyone to quickly follow and catch up on the conversation.
The Problem (Why This Matters)
Current State of Pain
The Python packaging ecosystem struggles with hardware-dependent packages:
- PyTorch publishes 7 different variants (CPU-only, multiple CUDA versions, ROCm, XPU), but users must manually select one: `pip install torch --index-url="https://download.pytorch.org/whl/cu129"`
- CuPy publishes 55 different packages (`cupy-cuda100`, `cupy-cuda101`, …, `cupy-rocm-6-3`) because there is no standard way to express variants
- JAX requires users to play with complex combinations of extras: `pip install jax[cuda13]` (12 different extras exist, many overlapping)
- NumPy/SciPy cannot easily offer BLAS/LAPACK variants (OpenBLAS vs MKL) without duplicating wheel builds or package names.
Workarounds and Their Costs
Each workaround has serious drawbacks:
| Approach | Cost |
|---|---|
| Separate indexes | Manual installation steps, security risks (combining indexes), separate infrastructure |
| Package name variants (`xgboost` vs `xgboost-cpu`) | Dependency confusion, potential file conflicts, name-squatting attacks |
| Bundled “mega-wheels” | Excessive binary size, wasted bandwidth, exceeds PyPI size limits |
| Extras mechanism (`jax[cuda12]`, `jax[cuda13]`) | Non-exclusivity, broken defaults (`pip install jax` is unusable without extras) |
| Source distribution workarounds | Requires source build, security risk (arbitrary code execution), no `--only-binary` support, breaks installers’ caching assumptions |
This fragmentation is especially painful for scientific computing and AI/ML, where ~40% of Python developers now work, according to the 2024 Python Developers Survey.
The Proposed Solution
The design proposed in this PEP matters to end users (simpler and more robust installs, smaller downloads, increased performance for some packages) as well as to package maintainers (it simplifies packaging, and the design is extensible as new hardware and complex dependencies show up).
Please see the quotes in the Motivation section of the PEP for different perspectives on why this proposed design matters.
What This Looks Like in Practice
Before (PyTorch today):

```
pip install torch --index-url="https://download.pytorch.org/whl/cu129"
# or
pip install torch-cu129  # hypothetical separate package name
```
After (with PEP 817):

```
pip install torch
# Installer detects your CUDA 12.9 GPU and installs torch-2.9.0-...-cuda129_openblas.whl
# If no GPU: installs torch-2.9.0-...-null.whl (CPU-only)
```
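To make the “installer detects your GPU” step a bit more concrete, here is a minimal, hypothetical sketch of the kind of hardware probe a CUDA variant provider plugin might perform. The function name and the nvidia-smi probing strategy are illustrative only; they are not the provider API defined by PEP 817 or implemented in variantlib.

```python
# Hypothetical sketch of a hardware probe a CUDA variant provider might run.
# Not the PEP 817 / variantlib provider API; for illustration only.
import shutil
import subprocess


def detect_nvidia_driver_version() -> str | None:
    """Return the NVIDIA driver version reported by nvidia-smi, or None."""
    if shutil.which("nvidia-smi") is None:
        return None  # no NVIDIA tooling found -> fall back to the CPU-only ("null") variant
    try:
        result = subprocess.run(
            ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
            capture_output=True,
            text=True,
            check=True,
            timeout=5,
        )
    except (subprocess.SubprocessError, OSError):
        return None  # treat any probing failure as "no usable GPU"
    lines = result.stdout.strip().splitlines()
    return lines[0].strip() if lines else None


if __name__ == "__main__":
    driver = detect_nvidia_driver_version()
    if driver:
        print(f"NVIDIA driver {driver} found; a CUDA variant could be selected")
    else:
        print("No NVIDIA driver found; the CPU-only variant would be selected")
```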
Implementation Status & Path Forward
Reference Implementations
- variantlib: A library with a reference implementation of all parts of the proposed standard, which we expect many packaging tools and installers to use (a simplified selection sketch follows this list).
- uv client: Astral’s package manager has variant support (currently in a separate branch as a prototype)
- WheelNext Index: Community initiative with wheel index demo
- Build Backends: example changes are provided (overall fairly simple changes)
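To give a feel for what a resolution library like variantlib does conceptually, here is a deliberately simplified, hypothetical sketch of variant selection: providers report which properties the current system supports, and the installer picks the highest-priority candidate wheel whose required properties are all supported. All names and property strings below are illustrative assumptions; the real data model and API are specified in PEP 817.

```python
# Hypothetical, simplified model of variant selection (not the actual
# variantlib API): pick the first candidate wheel, in priority order,
# whose required variant properties are all supported on this system.

def select_variant(
    candidates: list[tuple[str, set[str]]],  # (variant label, required properties), best first
    supported: set[str],                     # properties reported by the installed providers
) -> str | None:
    for label, required in candidates:
        if required <= supported:  # every required property is supported
            return label
    return None


# Illustrative property strings; the real format is defined by PEP 817.
candidates = [
    ("cuda129_openblas", {"nvidia :: cuda :: 12.9", "blas :: openblas"}),
    ("cuda126_openblas", {"nvidia :: cuda :: 12.6", "blas :: openblas"}),
    ("null", set()),  # fallback "null" variant (CPU-only) requires nothing special
]
supported = {"nvidia :: cuda :: 12.9", "blas :: openblas"}

print(select_variant(candidates, supported))  # -> "cuda129_openblas"
```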
@mgorny @konstin @rgommers @atalman @charliermarsh @msarahan @seemethere @barry @dstufft @aterrel