I understand why you changed the scope of the Pre-PEP, but with statements like these throughout this topic it does feel a bit disingenuous. I won’t put words in your or any of the other authors’ mouths, but I personally get the feeling that the end state described in this Pre-PEP is not the end state you desire. To me this also makes the discussion here less productive.
As many have said, optional extension modules can already be written in Rust and published, so that doesn’t really require a PEP. Perhaps we can start an effort to document those better and see what can be learned from them to inform a future PEP to introduce Rust as a dependency for CPython, but I also think that doesn’t need a PEP.
If the end goal is indeed to include Rust then we should just be honest and open about that, including the fact that we do not have an exact timeline yet. Then we can discuss the merits of that proposal and, if we want to pursue it, what that timeline should look like.
If I were opposed to making Rust a dependency of CPython, for whatever reason, I would definitely be frustrated by the current discussion. It feels like we’re putting the project on a trajectory towards doing so while wording it in such a way as to avoid having that explicit discussion. That is likely not your intent, and the step to downscope seemed reasonable, but since then it feels like some of the participants in this discussion have been talking past each other.
They’ve said repeatedly, in response to input on this thread, that the process will be two-stage. It’s reasonable of them to do so, and it’s reasonable of us to grant them the privilege of materializing a future PEP on the basis of what will be learned from this one.
Let’s be careful to understand the limits of text-based communication before inserting such accusations.
I would feel insulted and trolled by a statement like that, if I were @emmatyping. She stated that:
To insinuate that she and the PEP authors are less-than-candid about their long-term intentions seems unreasonable. Perhaps you didn’t intend it that way, but I feel compelled to say that it comes across that way to someone like me, who has zero skin in this game.
All true enough, but the well-reasoned response has also been given:
In summary, I’ve read this thread with pleasure for the tightly reasoned arguments and respectful tone of the PEP proponents. Hats off to you - this is what makes Python great.
My point was that if that is the intention, we should discuss that intention and figure out what we do and do not yet understand about those steps, and which of them we can take right now.
With free-threading a similar approach was taken. The Steering Council was very explicit about what the future should/would look like, but the exact timeline was to be determined based on doing the actual work. It is okay if there are still unknowns when taking big steps like these, but at least there should be an agreement on what the project is working towards.
But shouldn’t we then also have the discussion about whether we want to introduce Rust to core at all? A couple of posts ago @malemburg said that the discussion was going way off-topic, but if this Pre-PEP is the first step towards introducing Rust to core, I think it wasn’t.
Anyway, seeing the number of hearts, I guess my interpretation of this discussion isn’t shared by others, so I’ll let it rest. I genuinely hope we can reach consensus among all (core) contributors on this topic, as I think it has a lot of potential.
We have one (for checking COM reference counting) - it’s been around so long that more people have forgotten about it than remember it - and if you’re happy to scatter ugly macros throughout the entire codebase that only have an effect in MSVC with a certain flag, then we can go ahead. (Spoiler: nobody is okay with this.)
Though we could also “solve” refcounting with the bare minimum of C++ needed to use an internal RAII class. Potentially that could even be code neutral, and only require C++ for a validation build (though we’d have to replace PyObject * throughout with some non-pointer typedef that would be defined differently in that build). Replacing large portions of the codebase with an entirely different language isn’t necessary if we’re willing to accept typedefs or macros and a horrific PR that touches thousands of lines instead.
Ultimately, the effort required appears to be greater than validating refcounting in most cases, which are generally quite simple. And more importantly, we have a number of regular contributors (mostly core devs at this point) who do review C PRs closely for correctness (as Brett asked). We’re not exactly in a place where we’re struggling to get refcounting right on a regular enough basis that excluding all of that experience and replacing it with new contributors would be a net positive. Maybe it’s a long-term benefit, and in 5/10/? years we’ll be back where we are today (and presumably will be better off after that), but that needs to be justified.
I wish we thought more in terms of distributions. A distribution can include any libraries it wants by default, including Rust-ified[1] versions of stdlib modules, so that its users get them by default. We could add them to our own distributions (the Windows/macOS/eventual standalone Linux ones) without needing to add them to the source distribution, though it’s also reasonable for us not to. But third party distros should be more than welcome to do it if it provides value to their users, and we should be encouraging that, if only because it takes some of the pressure off us being forced to accept every single idea upstream before anyone tries them downstream.
I think that’s what’s happening now, but people were getting far too hung up on the final decision when that decision cannot reasonably be discussed at this time. It’s nuanced.
One possible argument is “we can’t make Rust required because it doesn’t have X”. This isn’t a useful comment in this discussion if X is something that Rust could gain in the future[1].
A different argument would be “we will never be able to make Rust required because it will never have X”. That argument would be relevant to this discussion because it would derail the end goal, but I don’t think there are any real examples for this case.
It’s not disingenuous to have a long-term goal and propose an incremental step toward achieving it. This PEP is basically asking permission to perform an experiment, to gather information that is needed to make an informed decision later. The experiment can’t be performed on PyPI; it has to involve modifying the CPython distribution, because nothing else will have the same reach.
I think this is an overly generous interpretation of what’s been proposed here (though the proposal could easily be reframed in this way).
The problem with putting it this way is that anyone who currently contributes to CPython is then entirely welcome to object solely on the basis of “I don’t want to be your test subject”, and nobody outside of that group has any standing to argue with it. So it’s a pretty weak approach that I think is doomed to fail. “We’re just experimenting on you” isn’t at all conducive to forcing people to participate.
Similarly, we can object to this angle because of the outsized impact compared to a more constrained experiment, and the actual substance of the proposal isn’t relevant - “we don’t want to experiment on millions of Python users” is sufficient to block any proposal that frames itself as an experiment.
I personally don’t mind you weakening the proposal like this, but I assume that’s not your intent, so I’d be very careful about trying this approach.
I should have expected that “it’s an experiment” has a less positive connotation outside of research.
I didn’t mean to frame it as an experiment on anyone. Other PEPs have similarly started with an “experimental phase” (e.g. the JIT and free-threading). The experimentation here is with tooling and workflows.
But I don’t want to weaken someone else’s proposal. If that framing is harmful, pretend I didn’t say it; it’s just semantics.
(Sorry, I know this is a bit of a conversational throwback. I meant to follow up earlier.)
Is that a yes to both, then? Can all of the Rust overhead costs be shared between two binaries that utilise Rust? We won’t have to debate the relative value of using Rust against the (last I checked) 4MB increase in footprint from another copy of the Rust core runtime, plus whatever the dependencies cost, every time another corner of the standard library wants to use Rust?
The 4MiB was because the Rust standard library is distributed in a precompiled “release optimizations + debug symbols” configuration and rustc didn’t have internal support for stripping debug symbols.
The default “Hello, World!” from a freshly generated Rust project, in its default build configuration, as generated for the x86_64 Linux target with Rust 1.91, is 463KiB before stripping and 354KiB once stripped; the toolchain is also smarter now about what it includes.
(They’ve since added internal support for stripping binaries but, according to the Profiles section of Cargo’s manual, `strip = "none"` is still the default for the release profile, and I can’t remember whether the bundled LLVM stripping and GNU strip are equally effective.)
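For anyone who wants to try it, opting in is a one-line change to the release profile in Cargo.toml. Per the Profiles chapter of Cargo’s manual, the accepted values are `"none"` (the default), `"debuginfo"`, and `"symbols"`:

```toml
# Cargo.toml
[profile.release]
# "debuginfo" strips only debug info; "symbols" also strips the symbol table.
strip = "symbols"
```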
At the request/approval of @emmatyping, I’m closing the topic. The discussion seems to be going a bit in circles, and at this point it’s probably best to wait for the actual PEP.