PEP 11: Proposal to promote AArch64 platforms to Tier-1

Hello CPython community!

Since last year I’ve been active in the CPython community looking after Arm platforms. At EuroPython 2023 I gave a talk about “Python on Arm” in which I highlighted two main takeaways: public benchmarks and Tier-1 support for AArch64 platforms. I’m already working on the first item and making progress with @ambv. With this post I want to start a conversation on the Tier-1 promotion of Arm platforms.

The scope of the proposal is to promote the following platforms from Tier-2 to Tier-1:

  • aarch64-unknown-linux-gnu (glibc, gcc), (glibc, clang)
  • aarch64-apple-darwin (clang)

During EuroPython 2023 I had a conversation with a few core developers; amongst other things, we discussed the Tier-1 promotion. We went through the differences between the Tier-2 and Tier-1 requirements and analysed what it means in practical terms for a platform to be there. I report the gist of the conversation here:

  • CI failures block releases: I guess this boils down to having aarch64 agents for running checks when developers raise PRs on GitHub. I expect that emulation is not an acceptable solution, so we should go with native agents (see the small sanity-check sketch after this list). Before progressing any further, I’d like to know what has been tried so far to enable aarch64 checks on PRs. For instance, I’ve seen that GitHub has Apple Silicon M1 runners in public beta and Arm-based hosted runners in private beta.
  • Changes which would break the main branch are not allowed to be merged; any breakage should be fixed or reverted immediately: this led to a support story by Arm. It has been explained to me that what you have seen over time is that failures on aarch64 were happening because of issues in the kernel, compilers, and glibc, not because of Python itself. This is because you are using the latest snapshots of the stack, so you tend to find issues in those components. Ideally, as discussed, you raise bugs which get addressed in a timely fashion. We acknowledge there is a problem due to maturity and hardware availability, and we expect the situation to improve over time. While we get there, I offer to be the contact person (diego.russo@arm.com) for any Arm-related issue (compilers and kernel) so I can follow up with the right teams. It’s worth noting that this is not a replacement for upstream bug trackers, but a more direct channel of communication between CPython developers and Arm (addressing specific questions, nags, clarifications, heads-ups, etc.)
  • All core developers are responsible for keeping main, and thus these platforms, working: the core developers I spoke with during EuroPython felt positive about the change. All major cloud providers have AArch64 VMs, Macs have completely transitioned to Apple Silicon, and more and more Arm laptops are available on the market; I hope the rest of the CPython developers share the same view and feel positive about this promotion.
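
To make the first point a bit more concrete, here is a minimal sketch (my own illustration, not an existing CPython check) of the kind of architecture sanity check a PR job could run. Note that under user-mode emulation the reported machine is typically still “aarch64”, so this only confirms what the runner advertises, not that the hardware is native:

```python
import platform
import sys

# Hedged sketch: print the architecture this CI job reports.
# Linux aarch64 runners report "aarch64"; Apple Silicon macOS reports "arm64".
# This cannot tell native hardware apart from emulation, which is exactly
# why native runners matter for Tier-1.
machine = platform.machine()
print(f"platform.machine() = {machine!r} on {sys.platform}")

if machine not in {"aarch64", "arm64"}:
    sys.exit("not running on an AArch64 runner")
```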

Please share your thoughts about the proposal and ask away if you have any questions; I’ll do my best to reply! :slight_smile: Also, if I’m missing anything from my analysis, please let me know.

In case we decide to carry on with the proposal, what’s the best way forward? Should I create a PEP for the promotion, or would a PR to change PEP 11 be sufficient?

Thanks!

10 Likes

We just turned on Apple Silicon testing in GitHub Actions. Otherwise that’s it to my knowledge.

For which platform? If we are talking about aarch64-apple-darwin then that’s probably a matter of seeing CI pass and then updating the PEP. For aarch64-unknown-linux-gnu it’s having PR CI support for the hardware.

I think you’re getting ahead of yourself, as the CI issue has not been addressed yet. But since this isn’t an OS promotion, my guess is getting CI set up for PRs, seeing everything pass, and then updating PEP 11.

Thanks for taking this on!

In addition to the recent aarch64-apple-darwin CI runner support that Brett mentioned, it looks like we also cross-compile (but don’t test) aarch64-pc-windows-msvc in CI.

I also recently added JIT CI jobs that cross-compile aarch64-unknown-linux-gnu. We do run the tests under emulation, but lots of tests have to be skipped because of that. It’s really nice to have for JIT development, but I don’t think it’s enough for the tier one support you’re proposing.
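For anyone curious, the shape of those emulation-related skips is roughly the following; the environment variable here is purely illustrative, not what the actual workflows set:

```python
import os
import unittest

# Illustrative flag only; NOT the mechanism the real CI uses, it just shows
# the shape of the guard applied when running under qemu-user emulation.
EMULATED = os.environ.get("CPYTHON_TESTS_EMULATED") == "1"

class LowLevelBehaviourTests(unittest.TestCase):
    @unittest.skipIf(EMULATED, "unreliable under qemu-user emulation")
    def test_native_only_path(self):
        # Tests that rely on faithful low-level behaviour (signals, timing,
        # memory protection) are the usual candidates for this guard.
        self.assertTrue(True)

if __name__ == "__main__":
    unittest.main()
```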

Yep, aarch64-apple-darwin can be promoted to Tier 1 now; the PR adding it to CI, now that GitHub finally offered it, went in yesterday. A PR to PEP 11 makes sense for that one.

I’m not worried about Linux aarch64 “only” being Tier 2; Tier 2 is still fully supported and important to the world. The important thing is that Tier 2 issues still block a release. Being Tier 1 means we’ve done the work to make many core devs’ lives easier by having enough people, rather than just a couple, with access to and willingness to use the tools needed to support the platform.

The primary hold-up for 2 vs 1 is a lack of reliable CI on GitHub for the platform.

In this case, it also happens to reflect the reality that most developers do not have access to the platform, despite how widely deployed it is in the server-side and embedded worlds, because it is not what the laptops, desktops, and workstations people actually use are based on. The same holds true for Windows aarch64 (which is only Tier 3 because of a lack of core dev capabilities/interest).

If any core devs lack local access to aarch64 Linux machines and want it for Python purposes, we should coordinate getting them a Raspberry Pi 5 for that purpose. (This is different from my Tier 3 “Raspbian” support: I run that buildbot in aarch32 mode so that we keep the widely deployed 32-bit Arm Linux working.)

3 Likes

Q: Do the GitHub macOS M1 aarch64 runners support containers? If so, a native Linux aarch64 container run on one of those workers could potentially get us what we need for the aarch64 platform to be Tier 1.

I understand that Apple’s aarch64 has some non-standard extensions; I don’t know how much of those bleed through to user-land aarch64 processes, though. If visible differences include things like allowing unaligned memory accesses in situations where a standard arm64 platform would trigger a process failure, that could be a platform coverage deficiency.
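
As a rough illustration of the kind of user-land probe I mean (my own sketch, not an existing test), ctypes can issue a genuinely misaligned load; whether it succeeds or faults depends on the CPU and OS configuration, which is exactly the sort of difference that could hide behind macOS-only coverage:

```python
import ctypes

# Hedged sketch: perform a 32-bit read from a deliberately misaligned address
# and see whether the platform tolerates it in user space.
buf = ctypes.create_string_buffer(b"\x01\x02\x03\x04\x05\x06\x07\x08")
misaligned = ctypes.addressof(buf) + 1  # odd address, not 4-byte aligned

value = ctypes.c_uint32.from_address(misaligned).value
print(f"unaligned 32-bit load returned 0x{value:08x}")
```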

Agreed. Having had to support emulated platforms at work before hardware was available, I can say it isn’t sufficient for calling something Tier 1 supported. Emulation has its uses, but you wind up spending most of your time fighting deficiencies in the emulation rather than actual platform support issues. How much will vary based on the specific emulation, but that is the point: it is a lot of additional complexity.

Hello, thanks a lot for your feedback and extra information!

I think it makes sense to decouple the two platforms then.

Let’s see aarch64-apple-darwin first.

This is great! I can see the commit that has enabled it. Well done.

Can I carry on and create the PEP 11 PR?

For aarch64-unknown-linux-gnu, instead, we need more discussion on enabling the CI checks, which seem to be the main blocker for Tier-1.

Correct, emulation is not the way forward for Tier-1 support :slight_smile:

I guess the long-term solution is for GitHub to provide AArch64 agents in the same way it provides other platforms. This is not available yet, as it is still in private beta, so for the time being we need to explore further.

It’s not a bad idea at all. If the core developer has an Apple Silicon machine, another option is to use something like Parallels (or similar) to virtualise Linux. Not ideal, but it’s an alternative.

This is one of the questions I asked myself as soon as I saw the M1 availability on GitHub. According to this discussion, running Docker on the Apple Silicon runners is not possible. :frowning: Unfortunately this is a no-go.

Q. Are we open to having self-hosted agents?
If we get hold of some AArch64 machines, we could attach them to GitHub so we can run AArch64 checks. Could this be a workable solution while we wait for GitHub to provide AArch64 agents?

Finally, regarding aarch64-pc-windows-msvc, I feel like this is the best we can do at the moment. Let’s monitor the situation.

I think I’ve addressed all your comments! Thanks

For what it’s worth, I’ve personally had a great experience using Multipass for this.

1 Like

Is it possible to work around this limitation by using QEMU instead of Docker? I know that aarch64 Linux under QEMU works fine on my M3 laptop, but I also have no idea if there are subtle issues.

According to the discussion about Docker linked earlier, the GitHub runners don’t support nested virtualisation (or rather, Apple’s virtualisation framework doesn’t, at least not for macOS guests). The runners are VMs themselves.

Well, it’s better than no pre-commit CI job. Testing on “bare metal” may catch further issues closer to the CPU, such as ones involving the Memory Tagging Extension (MTE). I once had to debug an MTE bug that only occurred on bare metal, not in a QEMU VM running on x86.
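
For reference, a quick hedged sketch (not part of the test suite) to see whether an aarch64 Linux box advertises MTE at all:

```python
# Hedged sketch: report whether the "mte" feature flag is advertised in
# /proc/cpuinfo on aarch64 Linux. If the flag is absent (for example under
# some QEMU configurations or on older silicon), MTE-specific bugs simply
# cannot reproduce there.
def has_mte() -> bool:
    try:
        with open("/proc/cpuinfo") as cpuinfo:
            return any(
                line.startswith("Features") and "mte" in line.split()
                for line in cpuinfo
            )
    except OSError:
        return False

if __name__ == "__main__":
    print("MTE advertised:", has_mte())
```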

Please do!

I don’t think we have had such a discussion, so there’s no known answer. I think if the self-hosted runners were shown to be reliable then it would be reasonable. But ultimately I think that’s up to the release managers, Developers-in-residence, and the SC to decide.

GitHub explicitly does not recommend using self-hosted runners on public repositories.

There have been multiple GitHub-related security incidents involving self-hosted runners in the past. It’s up to whoever runs them to build sufficient infrastructure around securing them, as well as ensuring they are stateless and reliable. Would we accept a check that is Required for Tier 1 depending on infrastructure without an SLO similar to that of the GitHub Actions runners themselves? The just-in-time runners docs behind the previous link include details that can help create that, but realize it is effectively all about someone taking on that infrastructure burden.

Treating them like we do buildbots and only running on them when a core dev explicitly requests it post-code-review might be sufficient to alleviate security concerns (at least as far as existing buildbot owners are concerned). But that would not be in the spirit of Tier 1 today, as a manually triggered post-review action is not something we’re likely to want marked as Required before merging.

2 Likes

Folks,

Thanks again for engaging in the discussion. I guess the ideal solution is to have real hardware underneath; any attempt to rely on emulation/virtualisation might introduce side effects that counterbalance the benefits of testing on AArch64.

I am curious to know how you did it :slight_smile:

Without reporting the whole comment of @gpshead: I totally agree with everything you said. TBH I wasn’t aware of the GitHub recommendation, and that alone is sufficient for not pursuing this avenue.

The only option is really to have hosted GitHub Arm runners, and we are in luck here :star_struck: We have GitHub’s attention on this: I’ve filled out the form for access to the private beta and we have been prioritised. What I need now is an admin of the Python organisation on GitHub to continue the discussion, as we need to look at a few restrictions.

Please can someone with admin rights drop me an email to continue the discussion? We will update here as soon as we have more news.

Thanks!

2 Likes

@diegor ping, would you like to do the honours? :slight_smile:

@hugovk apologies for the late reply. I will do it in the next few days, as I need to sort out an internal process to set up the contribution to the peps repository. As soon as I have it, I’ll do the PR.

Thanks for bearing with me :slight_smile:

2 Likes

Finally I created the PR to promote aarch64-apple-darwin to Tier-1: PEP 11: Promote `aarch64-apple-darwin` to tier 1 by diegorusso · Pull Request #3705 · python/peps · GitHub

Feel free to review it and leave comments (although it’s a trivial one).

Thanks for your patience.

3 Likes

I already merged the PR. :slightly_smiling_face:

4 Likes

That was fast! Thanks @brettcannon, much appreciated.

For aarch64-unknown-linux-gnu there are discussions going on and I’ll update you as soon as I have some news (hopefully positive).

Now that aarch64-apple-darwin is officially in Tier-1, shouldn’t we also move it in the buildbot configuration?

Tagging @vstinner as suggested by the git blame :slight_smile: