Would this be as good a time as any to do this? This has been a big thorn in my side as well. At the very least it should be possible to have an Informational PEP to reserve namespaces for specific usages (à la `tool` tables in PEP 518).
I think so. The thing to check is whether there are any “official” settings already in use, or any common setuptools ones, particularly those that might be intended to bleed into any/all packages in an install, rather than just one.
It seems like a reasonable assumption that if values are being ignored by a backend, the user is going to be less than happy if it’s silent. Any other ways we can test this, though?
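To make the failure mode concrete: a backend could cheaply detect and surface ignored settings rather than dropping them silently. A minimal sketch, assuming a hypothetical set of backend-recognized keys (the names below are made up for illustration):

```python
import warnings

# Hypothetical set of config-settings keys this backend understands.
KNOWN_SETTINGS = {"build-option", "parallel"}

def check_config_settings(config_settings):
    """Warn about any config settings this backend would silently ignore."""
    for key in config_settings or {}:
        if key not in KNOWN_SETTINGS:
            warnings.warn(
                f"config setting {key!r} is not recognized and will be ignored",
                stacklevel=2,
            )

def build_wheel(wheel_directory, config_settings=None, metadata_directory=None):
    check_config_settings(config_settings)
    ...  # the actual build is elided
```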

Vocab for below. Based on the GNU autotools convention that CPython itself uses:
- Host: the system that will run the binary. Embedded device, etc.
Strange. I could have sworn everyone called that a target. Hearing “host”, I could very easily read the opposite meaning into it: the host being that which hosts the compiler.
It comes from GNU autotools, which CPython uses. When you cross-compile CPython, you specify `--build` and `--host`, and the terms are used internally in a couple of places. `target` has a different meaning in GNU autotools too.
I put that there just so we’d all be on the same page. I’ve found that when host/target terminology gets mixed with build/host terminology, or anything else, things get really confusing really fast. (You’re right that most people use “target.”)
I think the most realistic thing we can hope for is a standard sysconfig “dump” tool that can be run on the host platform to produce all the information needed to build on the build platform.
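For what such a “dump” tool might emit, here is a minimal sketch; the JSON layout is an assumption, not a proposed standard. You would run it under the host platform’s interpreter and ship the output to the build machine:

```python
import json
import sys
import sysconfig

def dump():
    # Roughly everything a build backend typically asks sysconfig for.
    data = {
        "platform": sysconfig.get_platform(),
        "version": list(sys.version_info[:3]),
        "paths": sysconfig.get_paths(),
        "config_vars": sysconfig.get_config_vars(),
    }
    # default=str papers over the occasional non-JSON-serializable value.
    json.dump(data, sys.stdout, indent=2, default=str)

if __name__ == "__main__":
    dump()
```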
Bit late to respond to this; a standard tool to do this would be perfect for PyO3. In particular, an extension to PEP 517 only really helps with cross-compiling Python packages. If users are trying to cross-compile a Rust program with embedded Python, then PEP 517 is not relevant.
Reviving this thread: I have a draft PEP for some of the issues discussed here at Draft PEP9999: Standardized Config Settings for Cross-Compiling by benfogle · Pull Request #1 · benfogle/cross-compile-pep-draft · GitHub
Feedback is welcome, and let me know if this should be spun off into its own thread.
This, combined with `zig cc`, could make module cross-compilation relatively simple.
https://andrewkelley.me/post/zig-cc-powerful-drop-in-replacement-gcc-clang.html
Unfortunately, the compiler isn’t the hard part. It doesn’t seem to be in this thread, but we definitely looked at Zig back when discussing this last time around. Communicating to all the build backends that they should cross-compile, selecting the machine they should be compiling for, and helping them find the headers, libs and options they need to build is the hard part. Those are all Python-specific.
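For a sense of scale: the plumbing already exists in the form of PEP 517’s config_settings channel; what is missing is agreed-upon keys. A sketch of what a frontend-side invocation could look like, with entirely hypothetical key names (not from the draft PEP):

```python
from pyproject_hooks import BuildBackendHookCaller, default_subprocess_runner

hooks = BuildBackendHookCaller(
    source_dir=".",
    build_backend="setuptools.build_meta",
    runner=default_subprocess_runner,
)
# "cross-host" and "cross-sysconfig-data" are invented for this example;
# agreeing on names like these is exactly the unsolved part.
hooks.build_wheel(
    "dist",
    config_settings={
        "cross-host": "aarch64-linux-gnu",
        "cross-sysconfig-data": "host-sysconfig.json",
    },
)
```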

I have a draft PEP for some of the issues discussed here at Draft PEP9999: Standardized Config Settings for Cross-Compiling by benfogle · Pull Request #1 · benfogle/cross-compile-pep-draft · GitHub
Sorry, I missed this when you posted it - I think I was already on holiday by then. I’ll take a look this week.
Somewhat of a tangent: I explored using Zig to build extension modules a while ago:
I’ve been playing around with the Zig toolchain. One very interesting thing I found recently is that Zig’s toolchain, being completely self-contained, portable, and extremely compact, is being distributed on PyPI as an installable Python package. Combined with the fact that the Zig compiler has first-class support for compiling C code, this means any sdist can acquire a working C compiler by simply adding ziglang as a PEP 517 build dependency. With some glue code around the compiler, I was able to …
Unfortunately I have not found a good use case for this. Hopefully someone will if I can manage to remind everyone well enough!
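For anyone who wants to poke at the idea: since the ziglang wheel exposes the toolchain via `python -m ziglang`, a build script can shell out to it as a (cross-)compiler. A rough sketch; the target triple and file names are just examples, and a real extension module would also need the host’s Python headers:

```python
import subprocess
import sys

# Zig's clang-compatible C driver, invoked from the PyPI "ziglang" wheel.
subprocess.check_call([
    sys.executable, "-m", "ziglang", "cc",
    "-target", "aarch64-linux-gnu",  # cross-compile for 64-bit ARM Linux
    "-shared", "-fPIC",
    "hello.c",
    "-o", "hello.so",
])
```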
Not sure about a use case specifically for packages building themselves, but it does sound cool.
My use case for cross-compiling more generally is to be able to build Docker containers locally on macOS (or other non-Linux systems) without a VM. We use Bazel to build software, including Docker images, and unlike standard Dockerfiles, which are basically scripts executed in a container (and on macOS, within a VM), Bazel creates the proper tarball structure using artifacts it’s built previously.
Bazel has pretty good support for cross-compilation, and for languages like Go, or even C++ with bazel-zig-cc, cross-compilation can “just work”: a Docker image produced on a Mac will execute on a Linux host. But for Python, Bazel will just naively dump packages with Mach-O binaries into the container, which obviously won’t work on Linux. Cross-compilation for Python packages would help with that.
With Microsoft’s and Qualcomm’s push towards Arm64, and potentially RISC-V joining the party in the future, it might be interesting to revisit this discussion.
Debian is now finding itself going through a `sysconfigdata_name` migration, away from distro-specific values towards the same values as upstream.
The history here is that we had to remove the platform name from `sysconfigdata_name`, as it was not very stable on our kFreeBSD architectures, but the multiarch tuple is canonical. These kFreeBSD architectures are no longer part of Debian, and Python rejected our patch to not include the architecture in `sysconfigdata_name` when multiarch was set (GH-128879).
We have around 30 packages that are setting `_PYTHON_SYSCONFIGDATA_NAME` explicitly somewhere, presumably to ease cross-compilation. It would be nice to have a standard interface for looking up these names that we can migrate them to.
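For reference, this is roughly the private mechanism those packages rely on: on POSIX, sysconfig consults `_PYTHON_SYSCONFIGDATA_NAME` to pick which `_sysconfigdata` module to import, so a cross build can point it at data generated for the host. A sketch; the module name below is an example value:

```python
import os

# Must be set before sysconfig first loads its data, and the named module
# must be importable (e.g. copied from the host's build onto sys.path).
os.environ["_PYTHON_SYSCONFIGDATA_NAME"] = "_sysconfigdata__linux_aarch64-linux-gnu"

import sysconfig
print(sysconfig.get_config_var("EXT_SUFFIX"))  # host value, e.g. ".cpython-312-aarch64-linux-gnu.so"
```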
We’re contemplating adding a simple utility to provide these values with our Python interpreters, but obviously a standard supported upstream would be 100x better.
So, bump