Sorry, I think I’m being messy by mixing ideas about how things are today and how they might be.
Right now, nothing detects the CUDA version. It is on the user to specify it, and they have to specify it somehow for every one of their packages that uses CUDA.
So why does cudf-cu12 use the build backend? Because the wheels are hosted externally. The build backend is some sleight of hand to save the user from needing to use `--extra-index-url`.
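Concretely, without that sleight of hand the user would have to point pip at the external index themselves, something like (assuming the wheels are hosted on NVIDIA's index; check the cuDF install docs for the exact URL):

```shell
# Manually tell pip where the externally hosted wheels live.
# The build backend exists so users don't have to do this.
pip install cudf-cu12 --extra-index-url=https://pypi.nvidia.com
```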
Any idea about dispatch among implementations is speculative.
In my ideal world, the user would first configure their installer (let’s say pip) and change some setting that sets a preference for NVIDIA gpus to be used. The user then installs something like JAX or PyTorch, each of which indicate some support for NVIDIA gpus. These dispatch to an NVIDIA-provided package that inspects hardware. The hardware metadata is returned to the installer for JAX or PyTorch, which then use it to map to their known distributions.
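To make that concrete, the installer-side preference might look something like this. This is purely hypothetical; no such setting exists in pip today, and the key name is made up for illustration:

```ini
# Hypothetical ~/.config/pip/pip.conf -- the hardware-preference
# key does not exist; it sketches the "configure your installer
# once, up front" step described above.
[install]
hardware-preference = nvidia-gpu
```

With something like that in place, JAX or PyTorch would only need to declare that they support NVIDIA GPUs, and the installer plus an NVIDIA-provided inspection package would handle the rest.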
cudf-cu12 is a hack to work around the limitations described in What to do about GPUs? (and the built distributions that support them) - #64 by msarahan.
The ideal situation is to have just cudf with variant dispatch, and to eliminate the hacks once they can be safely removed.