PEP Proposal - Platform‑Aware GPU Packaging and Installation for Python

Hi @d8ahazard, thanks for sharing your thoughts here.

> Add an accelerator suffix to the existing platform tag

Note that per https://peps.python.org/pep-0425/#compressed-tag-sets, the `.` character is already used to delineate multiple tags in a compressed tag set, so using it to create an ‘accelerator suffix’ here would probably not be possible.
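For illustration, here is a minimal sketch (assuming the `packaging` library is installed) of how that delimiter is interpreted today: `packaging.tags.parse_tag` expands a compressed tag set by splitting on `.`, so a dot-separated ‘accelerator suffix’ would be parsed as an additional, independent platform tag rather than a qualifier on the existing one.

```python
from packaging.tags import parse_tag

# parse_tag expands a compressed tag set by splitting each component on ".",
# so the platform portion below is read as two separate platform tags.
tags = parse_tag("cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64")
for tag in sorted(str(t) for t in tags):
    print(tag)
# cp312-cp312-manylinux2014_x86_64
# cp312-cp312-manylinux_2_17_x86_64
```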

Since accelerators are not mutually exclusive, how would a resolver automatically select a single wheel on a platform that has both CUDA and ROCm available, without the user explicitly specifying one? Are ‘capabilities’ comparable across these architectures? Would explicit specification always be required? A hypothetical sketch of the problem follows below.
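To make the ambiguity concrete, here is a purely hypothetical sketch (the `detected_accelerators` helper and the suffixed wheel filenames are invented for illustration, not real pip or packaging behaviour): if tag selection were driven by runtime detection, a machine with both an NVIDIA and an AMD GPU would produce two equally valid accelerator tags, and the resolver would have no cross-vendor ordering to break the tie.

```python
# Hypothetical only: detected_accelerators() and the suffixed wheel names
# below do not exist anywhere; they just illustrate the selection problem.
def detected_accelerators() -> set[str]:
    # Pretend both runtimes were found on this machine.
    return {"cuda12", "rocm6"}

candidate_wheels = {
    "cuda12": "example_pkg-1.0-cp312-cp312-linux_x86_64_cuda12.whl",
    "rocm6": "example_pkg-1.0-cp312-cp312-linux_x86_64_rocm6.whl",
}

matches = [candidate_wheels[acc] for acc in sorted(detected_accelerators())]
if len(matches) > 1:
    # There is no vendor-neutral "capability" ordering to pick a winner here.
    print("Ambiguous candidates:", matches)
```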

There is a lot of prior discussion about these issues at https://discuss.python.org/t/what-to-do-about-gpus-and-the-built-distributions-that-support-them/; you may want to review that thread if you haven’t already.