Hi, I’m one of the RustPython developers, and during work hours I maintain a tightly-coupled C++/Rust project of about 200k lines.
I’d like to comment on some of the points raised in the post and the thread. I’m still getting used to Discourse, so please excuse any missing quotes.
Questions about RustPython
RustPython isn’t something that can be considered in this PEP in the short term. RustPython and CPython are not semantically compatible across many layers of their implementation.
That said, RustPython has a sizable set of pure-Rust libraries backing well-working Python stdlib modules, so it could serve as a reference when introducing Rust versions of certain libraries. I don’t believe it is directly related to this PEP, though.
RustPython has its own approach to running without a GIL, but it’s not compatible with CPython’s nogil direction.
If there’s one aspect of RustPython worth highlighting in this PEP, it’s that it has achieved a surprising amount of CPython compatibility with a very small number of contributors. I rarely contribute directly to CPython’s C code, but I’m very familiar with reading it. Whenever we implement an equivalent feature in RustPython, the resulting Rust code is usually much smaller, with no reference-counting boilerplate, and the error handling is much clearer.
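To give a rough sense of what I mean (a made-up snippet, not actual RustPython or CPython code): error paths propagate with ?, and owned values are released when they go out of scope, so there is no DECREF/goto-error bookkeeping on each exit path.

```rust
// Illustrative sketch only. In C, every early return needs matching
// Py_DECREF calls (usually funneled through a goto error label); here the
// `?` operator propagates the failure and Drop handles the cleanup.
fn parse_int_pair(a: &str, b: &str) -> Result<(i64, i64), std::num::ParseIntError> {
    let x = a.trim().parse::<i64>()?; // early return on error, nothing to clean up
    let y = b.trim().parse::<i64>()?;
    Ok((x, y))
}
```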
bindgen
I think there need to be good guidelines on how bindgen should be used. bindgen generates both data structure definitions and function bindings. Function bindings are usually reliable, but data structure definitions often are not. If we rely on bindgen for those, we must run the generated tests to verify compatibility.
In base64, the code currently uses a direct definition of PyModuleDef. To be safe, either:
- verify struct size via tests, or
- let C create the struct and only access it through FFI.
As far as I can tell, cargo test for cpython_sys currently doesn’t run the generated tests (I might have missed something).
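For context, the generated tests I mean are bindgen’s layout tests. They look roughly like the sketch below; the struct here is a hand-written stand-in, and the expected numbers assume a typical 64-bit, non-debug, non-free-threaded build:

```rust
// Hand-written stand-in for a bindgen-generated binding of PyModuleDef_Base;
// the field layout mirrors the C struct on a typical 64-bit non-debug build.
// Illustrative only.
#[repr(C)]
struct PyObjectStub {
    ob_refcnt: isize,
    ob_type: *mut core::ffi::c_void,
}

#[repr(C)]
struct PyModuleDefBaseStub {
    ob_base: PyObjectStub,
    m_init: Option<unsafe extern "C" fn() -> *mut core::ffi::c_void>,
    m_index: isize,
    m_copy: *mut core::ffi::c_void,
}

// bindgen emits checks in this spirit: if the Rust definition drifts from the
// C header, `cargo test` fails instead of memory silently getting corrupted.
#[test]
fn bindgen_style_layout_test() {
    assert_eq!(std::mem::size_of::<PyModuleDefBaseStub>(), 40);
    assert_eq!(std::mem::align_of::<PyModuleDefBaseStub>(), 8);
}
```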
I’m not saying this PEP must adopt the following idea, but from experience, defining data structures on the Rust side and generating C headers with cbindgen can be safer than generating Rust code with bindgen. That said, since rust-in-cpython focuses on writing stdlib modules, where C doesn’t need to call Rust, there may be limited motivation to use cbindgen.
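As a sketch of that direction (names are made up, not taken from the rust-in-cpython branch), Rust would own the definition and cbindgen would emit the matching C declaration, so there is only one source of truth for the layout:

```rust
// Rust owns the layout; cbindgen reads #[repr(C)] items and writes the matching
// C struct into a generated header, roughly:
//
//   typedef struct B64EncodeOptions { uint32_t wrap_column; bool pad_output; } B64EncodeOptions;
//
// so the C side cannot drift from this definition. (Names are illustrative.)
#[repr(C)]
pub struct B64EncodeOptions {
    pub wrap_column: u32,
    pub pad_output: bool,
}
```

Something like cbindgen --lang c --output _base64_rs.h (flags from memory, so treat them as approximate) would then regenerate the header as part of the build.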
This perspective comes from my experience with mixed C++/Rust projects; CPython, being a C/Rust rather than a C++/Rust project, may run into fewer of these issues.
clinic
All Python functions will end up exposed as extern "C". For now, I’d actually suggest considering cbindgen for this:
Each module could run cbindgen to produce a C header that includes all FFI functions along with their original comments. Then maybe the clinic tooling could operate directly on those headers without major changes? I’m not totally sure, since I don’t fully understand clinic, but it seems like it could require less modification than the other two suggested methods.
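To make that slightly more concrete (purely a sketch, since I don’t know what clinic actually needs as input): cbindgen copies /// doc comments into the generated header, so a module function like the one below would come out as a documented C prototype that a clinic-like step could then consume.

```rust
/// _base64.encoded_len(input_len)
///
/// Return the length of the base64-encoded output for `input_len` input bytes.
/// (Illustrative name and signature; whatever annotation format clinic expects
/// could ride along in this doc comment and land in the generated header.)
#[no_mangle]
pub extern "C" fn _base64_encoded_len(input_len: usize) -> usize {
    // Padded base64: every 3 input bytes (rounded up) become 4 output bytes.
    (input_len + 2) / 3 * 4
}
```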
When Rust penetrates deeper than the module boundary and this approach breaks down, we’ll have better insight for future decisions anyway.
ABI
I’m not sure how far the Rust implementation will expand, but compared to Pants, CPython’s requirements seem much simpler. If we connect this with the clinic/cbindgen idea, we could enforce a policy that every exported symbol must be declared in a properly generated C header. Since the only stable ABI in Rust is the C ABI, having the headers fully specify the exported surface remains reasonable until Rust APIs are officially exposed to users.
Build time
Ideally, Rust debug builds shouldn’t be too slow. But many Rust libraries lean heavily on proc-macros, which can significantly impact build times. For example, RustPython has far less code and functionality than CPython, yet it takes ~5× longer to build, and the gap is even bigger for incremental builds.
If build time is a major concern, guidelines limiting unnecessary proc-macro usage may help. On the external tooling side, we can also hope that LLVM will support faster Rust debug builds later, since Python is a priority project for the LLVM project.
I don’t worry much about generics in rust-in-cpython. Unlike RustPython, rust-in-cpython must expose a C interface, which discourages abusing generics.
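A quick illustration of why (made-up names): extern "C" functions cannot be generic, so generic helpers get pinned to concrete instantiations right at the exported surface.

```rust
// A generic helper can exist internally...
fn checksum<T: AsRef<[u8]>>(data: T) -> u32 {
    data.as_ref().iter().fold(0u32, |acc, &b| acc.wrapping_add(u32::from(b)))
}

// ...but the exported symbol must be concrete, so each module only pays the
// monomorphization cost for the instantiations it actually exposes.
#[no_mangle]
pub extern "C" fn checksum_bytes(ptr: *const u8, len: usize) -> u32 {
    // SAFETY: the caller must pass a valid, readable pointer/length pair.
    let data = unsafe { std::slice::from_raw_parts(ptr, len) };
    checksum(data)
}
```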
From a build-time perspective, keeping one crate per module, as _base64 does now, is very appealing.
Using unsafe
In my opinion, completely eliminating unsafe from base64 isn’t the right goal.
Rust guarantees that code outside an unsafe {} block is safe. Anything the compiler cannot verify must be wrapped in unsafe {}. Wrapping unsafe internals in a “safe” API means the programmer is manually guaranteeing safety.
Some guarantees can be established through review and careful implementation, but FFI safety often cannot be fully guaranteed due to inherent interface limitations. If we hide unsafe behind safe APIs even where true safety can’t be guaranteed, then we lose track of which code must be treated with caution.
So instead of trying too hard to remove unsafe, it’s better to encourage properly marking the genuinely unsafe code and minimizing it where possible.
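A small made-up example of the difference: the first version hides an obligation it cannot actually check, while the second keeps it greppable.

```rust
// Misleading: the signature claims safety, but nothing here can verify that
// `ptr`/`len` describe valid memory, so the "safe" label is only cosmetic.
pub fn read_header(ptr: *const u8, len: usize) -> u32 {
    let bytes = unsafe { std::slice::from_raw_parts(ptr, len) };
    u32::from_le_bytes([bytes[0], bytes[1], bytes[2], bytes[3]])
}

// Honest: the obligation is pushed to the caller and documented, and an audit
// can grep for `unsafe` to find every call site that actually needs review.
/// # Safety
/// `ptr` must point to at least `len` readable bytes, and `len` must be >= 4.
pub unsafe fn read_header_unchecked(ptr: *const u8, len: usize) -> u32 {
    let bytes = unsafe { std::slice::from_raw_parts(ptr, len) };
    u32::from_le_bytes([bytes[0], bytes[1], bytes[2], bytes[3]])
}
```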
Rust benefits vs. FFI cost
Rust reduces memory-related bugs, but across FFI boundaries things can actually become less safe than building everything with a single C compiler. The more FFI boundaries there are, the more type information is lost and the higher the binding risk.
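A tiny illustration of that information loss (again made up): a rich Rust type degrades to a bare integer at the boundary, so a property the compiler used to prove becomes a runtime check that someone has to remember to write.

```rust
// Inside Rust, only these two states are representable.
pub enum Mode {
    Standard,
    UrlSafe,
}

// Across the C boundary the same parameter is just an integer; the invalid
// values that the type system ruled out now have to be rejected by hand.
#[no_mangle]
pub extern "C" fn set_mode(raw_mode: u32) -> bool {
    let _mode = match raw_mode {
        0 => Mode::Standard,
        1 => Mode::UrlSafe,
        _ => return false,
    };
    true
}
```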
Usually, early Rust adoption increases the FFI surface area: it reduces problems inside the Rust codebase but creates new ones at the same time. Then, over time, as Rust takes over more internals, the boundary shrinks and things feel cleaner again.
From that perspective, starting with modules is a positive direction: a lot of code, limited boundaries.
Questions
Shipping strategy: Will the Rust extension only support the nogil build? If so, that might help reduce some FFI complexity.
Duplication:
Python currently ships duplicate C and Python implementations for some modules. If this PEP considers moving some stdlib pieces to Rust, could Rust implementations also coexist as duplicates? If so, a guideline for maintaining different implementations of the same feature would be great. Having separate module paths and build flags would allow experimentation, and then Rust could be flipped on by default once it’s stable. It would work like a sort of feature-level incubator. If possible, I’d love to see this code used: GitHub - RustPython/pymath (while working on it, I learned that dealing with FMA is way nicer in Rust than in C. Thanks, tim-one.)
If this proposal moves forward, I’m ready to dedicate a significant portion of my 2026 open-source time to it. As mentioned, I’m experienced with large-scale Rust FFI using bindgen, and I’m fairly familiar with Python internals as well. Please feel free to poke me if I can help.
Finally, I’m genuinely impressed that the CPython community is open to such a bold direction. I’m curious to see how this proposal plays out, and I’ll be following this thread with great interest. Cheers!