PEP 730: Adding iOS as a supported platform

@misl6 Thanks for responding - I didn’t want to move forward before confirming that these plans were going to be compatible with Kivy’s needs. Based on this feedback, it sounds like you’re broadly happy with the direction this PEP is heading (please correct me if I’ve misread that).

One detail I wanted to clarify:

To clarify - are you advocating that we should support static linking of modules in some form, or are you saying that you prefer the dylib approach that this PEP describes, but just hadn’t got around to sorting out the details within Kivy?

Yes! Thank you for sharing.

I prefer the dylib in an Umbrella Framework that this PEP describes.

From App Store Review Guidelines (I’m that kind of developer that really reads legal docs :sweat_smile:) :

2.5.2 Apps should be self-contained in their bundles, and may not read or write data outside the designated container area, nor may they download, install, or execute code which introduces or changes features or functionality of the app, including other apps. Educational apps designed to teach, develop, or allow students to test executable code may, in limited circumstances, download code provided that such code is not used for other purposes. Such apps must make the source code provided by the app completely viewable and editable by the user.

I guess these apps (Terminals, Pythonista, etc …) can be included in the “Educational apps” exclusion.
However, yeah, most apps would not need to (and should not) install new features or update packages without doing a new app release.

Instead, about the PEP 730 description:

iOS will not provide a “universal” wheel format. Instead, wheels will be provided for each ABI-arch combination. At present, no binary merging is required. There is only one on-device architecture; and simulator binaries are not considered to be distributable artefacts, so only one architecture is needed to build a simulator.

iOS wheels will use tags:

  • iOS_12_0_iphoneos_arm64
  • iOS_12_0_iphonesimulator_arm64
  • iOS_12_0_iphonesimulator_x86_64

In these tags, “12.0” is the minimum supported iOS version. The choice of minimum supported iOS version is a decision of whoever compiles CPython for iOS. At time of writing, iOS 12.0 exposes most significant iOS features, while reaching near 100% of devices.

That means that the developer (and the tool that manages it) will need to maintain two different Python “environments” (one for the simulator, and one for the real device).

:bulb: Why not use a “universal” wheel format? (which contains an xcframework instead of a Framework)

By using an xcframework, we can have a single Python “environment” to manage. (the single file size will be higher, but it’s still smaller than downloading 2x wheels)

I figured something like this might be happening. If Pythonista etc are able to slide under the review radar, then that’s great. However, I wouldn’t want to bet any part of the Python-on-iOS ecosystem on that window being open for an arbitrary app. Essentially we should be assuming on-device pip isn’t possible, and if someone is able to build an app that gets through review that does use pip, then they win a prize :slight_smile:

The primary reason for not using a universal wheel format is that the universal wheels experience on macOS has been a bit of a nightmare :slight_smile:

A lot of the scientific community has given up on the universal2 format on macOS, and only produces architecture specific wheels. NumPy and Pandas, for instance, only produce x86_64 and arm64 wheels. Pillow is in the same category. This is because the underlying libraries that they’re integrating with don’t play nice with a single-pass, multi-architecture compilation. C header and source files for non-macOS platforms generally aren’t set up to deal with multi-architecture compilation. And if you publish a wheel that has development headers (as NumPy does), those headers need to be “universal” compatible, which the vast majority of non-macOS code isn’t set up to do.

On top of that, there’s also the question of what architectures would be included in a “universal” wheel format for iOS. If iOS support had existed 10 years ago, and we used a universal iOS wheel format, we would have required different universal wheel formats for:

  1. armv6 + armv7 on device, plus x86 on simulator
  2. armv6 + armv7 + armv7s on device, plus x86 on simulator
  3. armv6 + armv7 + armv7s + arm64 on device, plus x86 and x86_64 on simulator
  4. armv7 + armv7s + arm64 on device, plus x86 + x86_64 on simulator
  5. arm64 on device, plus x86_64 + arm64 on simulator

There might even be a couple of other combinations - I’d need to check the exact timelines on CPU release dates and device EOLs. (5) would be the current state for a “universal” iOS wheel; however, it’s reasonable to expect that in a year or two, we’ll only need arm64 on device and arm64 on simulator, so that’s another universal format; and if there emerges an actual need for an arm64e on device, that’s another format again.

My overall point - “universal” on iOS isn’t a stable concept, and it would require considerable ongoing maintenance from a standards perspective.

The alternative - single architecture wheels - allows the producer of the wheels to determine what platforms they are able to support, and the consumer of the wheels can decide which architectures they want to support in their apps. Consider the following use cases:

  1. I don’t have an x86_64 development machine. Therefore, I don’t have to download the x86_64 simulator wheel. If the wheel is large, this could be a major disk space saving.
  2. Apple announces a new CPU architecture for devices. A library maintainer can publish a single new wheel and make deployment onto this architecture possible; all existing wheels continue to work.
  3. A library maintainer or app developer decides that they need to drop support for one architecture because of some technical detail of the platform, or availability of testing hardware. They can do so by publishing wheels for all platforms except for the one they have deprecated.
  4. A library maintainer or app developer decides that they can’t adopt a new architecture, for the same reasons as (3). Again, they don’t publish wheels for the new architecture.

It’s also entirely likely, based on how the CPython configure script works, that a new iOS CPU architecture wouldn’t require any changes to CPython, other than a modification to PEP 11 to add it to the Tier 3 list, and a modification to PyPI to accept the new architecture for uploads.

FWIW: BeeWare did use a “universal” wheel format originally; we decided to move away from that because of the experience of managing those wheels, plus conversations with people in the scientific Python community.

I agree that this makes the end user’s life marginally more complicated, because they need to do multiple pip passes to get all the wheels they need for all the architectures they need to support; however, this is something that can be easily managed with tooling. Briefcase does this presently; and the end user’s experience isn’t any more complicated than “add the library you want to a list of dependencies”. The code to manage multiple architectures is not especially complicated; it might even be something that can be upstreamed into pip (although there’s a bigger set of changes needed in pip to improve the experience of crossenv installation).
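To illustrate the "multiple pip passes" idea, here's a minimal sketch of what such tooling might do. The helper name, tag list, and `app_packages.<tag>` directory layout are illustrative assumptions, not Briefcase’s actual implementation; the pip options used (`--platform`, `--only-binary`, `--target`) are real pip flags for installing wheels for a foreign platform.

```python
# Sketch only: build one `pip install` invocation per iOS platform tag,
# each installing into its own target directory.
def pip_commands(requirements, platform_tags, base_dir="app_packages"):
    commands = []
    for tag in platform_tags:
        commands.append([
            "pip", "install",
            "--platform", tag,          # only accept wheels matching this tag
            "--only-binary", ":all:",   # cross-installs can't build from source
            "--target", f"{base_dir}.{tag}",
            *requirements,
        ])
    return commands

for cmd in pip_commands(
    ["numpy"],
    ["ios_12_0_iphoneos_arm64", "ios_12_0_iphonesimulator_arm64"],
):
    print(" ".join(cmd))
```

Each command would then be run as a subprocess, with the per-architecture `site-packages` directories selected or merged at app build time.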

It’s also worth noting that if a library maintainer does want to publish a single wheel covering multiple platforms, they can - the wheel tagging format allows for multiple platform tags (e.g., how Pandas publishes manylinux wheels). So - if someone wants to publish universal/fat wheels, they can - there just won’t be a “universal2023” shorthand for encompassing multiple tags, and the tooling for creating universal wheels will be up to the person making the wheels… but that was going to be true regardless.
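For reference, a compressed tag set in a wheel filename is just a `.`-separated list of tags. A quick sketch (the `pkg` project name and version are made up for illustration):

```python
# Hypothetical wheel filename carrying a compressed (multi-platform) tag set.
filename = "pkg-1.0-cp312-cp312-ios_12_0_iphoneos_arm64.ios_12_0_iphonesimulator_arm64.whl"

# Per the wheel filename convention, "-" separates the filename components,
# and "." separates the entries of a compressed tag set.
name, version, py_tag, abi_tag, platform_part = filename[:-len(".whl")].split("-")
platform_tags = platform_part.split(".")
print(platform_tags)
```

A single file with a compressed tag set is considered compatible with every tag in the set.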

From the perspective of the PEP: whatever the final outcome, this topic definitely needs to be mentioned in the “rejected ideas” section, so I’ll add that to the list of v2 edits as soon as a consensus is reached on this topic.

In terms of the specific wheel structure - the iOS wheels we’ve been using (e.g., Files ::) are structurally almost identical to what is distributed for macOS, Windows etc - that is, the wheel file contains dylib files “in-situ”, not Frameworks in a special location. We then use an Xcode build step to post-process the dylibs out of the site-packages folder where the wheel is installed, and into the Frameworks folder. These libraries need to be code signed anyway, so a post-processing step is inevitable; and there’s the added benefit that we don’t need to patch every PEP 518 build system tool to teach them how to format iOS-compatible wheels.


In conversation with @pradyunsg, he noted the section on iOS minimum version tags is ambiguous with regards to how future versions will be accommodated.

The intention was to follow the model of macOS. Someone installing wheels will nominate the minimum iOS version they want to support; compatible tags will be every full version number between 12.0 and that version. On macOS, this minimum version is implied by the machine that is doing the installation; for iOS, it will need to be explicitly provided because of cross-compilation. So - if I build a project that targets iOS 15 as a minimum, the list of compatible tags will be considered ios_12_0_*, ios_13_0_*, ios_14_0_* and ios_15_0_*. This will require some modifications to various packaging tooling to incorporate this version compatibility scheme.
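A rough sketch of that expansion, assuming (as in the examples in this thread) that only whole-number iOS versions with a ".0" minor carry distinct tags; the function name is hypothetical, not part of any packaging tool:

```python
# Hypothetical helper: expand a nominated minimum iOS version into the list
# of compatible platform tags, oldest first.
def compatible_ios_tags(target_version, floor=12, sdk="iphoneos", arch="arm64"):
    return [
        f"ios_{major}_0_{sdk}_{arch}"
        for major in range(floor, target_version + 1)
    ]

print(compatible_ios_tags(15))
```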

I also noticed that the proposed wheel tags are capitalised, and they shouldn’t be (I’m guessing my editor got “helpful” with the capitalization). The first part of the tag should match sys.platform, which means the wheel tags will be:

  • ios_12_0_iphoneos_arm64
  • ios_12_0_iphonesimulator_arm64
  • ios_12_0_iphonesimulator_x86_64

Absolutely agree. (This is also a nice way to discourage it, since it may not be clear to everyone that this behavior goes against Apple policies.)

Yes, unfortunately, I noticed that. And IMHO (ATM, see below) that’s not optimal, at least on macOS, as it creates an additional layer of complexity when trying to package a macOS Python app.

But … we may take advantage of the upcoming work that needs to be done for iOS (the .dylib → Framework tool), to also ease the process of packaging a macOS universal2 app, as I feel it can share most of the logic.

I think you already guessed from my previous statement regarding the tooling, that I already changed my mind about having a universal iOS wheel. :smiley:

The idea of having a universal iOS wheel was appetizing at the moment. But I only later imagined (and your examples helped to) the burden of maintaining universal iOS wheels.

:heavy_plus_sign: Additionally, what happens if we decide to include tvOS or visionOS support in a year or two?

Since the developer may want to add support for tvOS and visionOS in the same Xcode project, we will still be forced to write and maintain the tooling that “mixes” iOS, tvOS, and visionOS binaries.

If having (and maintaining) universal iOS wheels is a burden, a tvOS+iOS+visionOS+whateverOS wheel is complete madness (and impractical).

So, I’m fine with the single architecture wheels (and I started to reconsider my ideas about the need for universal2 macOS wheels too).


To be sure, the behavior goes against Apple’s policies, but on-device pip or using python is definitely “possible” once you have root access.

To be useful (or more useful than just as an educational app) an app also needs to communicate with the internet or specific other apps, in order to get data files. If it can download data files, then it would also be possible to download code – you don’t need pip or a command shell for this, right? So, if the app embeds a Python interpreter, doesn’t this then open a potential backdoor? The app itself could be totally innocuous and legit, but if this is possible, then I don’t see how those apps can ever pass Apple review, unless the included Python interpreter for iOS itself will also be modified/restricted. Is this correct? Or is this too far-fetched?


FWIW: The patches that I’ve currently got for iOS support also support tvOS and watchOS. I’m nowhere near close to proposing those two as Tier 3 platforms, but in conversations this week, the opinion seemed to be that as long as those changes didn’t break anything for the platforms that are supported, they could be included.

I haven’t done any work with visionOS yet… but if someone wants to buy me a toy… :stuck_out_tongue:


You’re correct - there’s effectively no functional difference (from the perspective of an app) between using requests to download a JSON blob, using requests to download a source file and then execute it, and running pip.

I have no idea how Apple can/does police this; all I can tell you is that there are CPython-based apps in the Apple App Store, so they’re not getting rejected out of hand. My best guess is that they have some “smarts” to try to identify those kinds of usage patterns during the review process, but they’re fundamentally relying on the fact that they have the ability to arbitrarily yank an app from the store if it is revealed to be a problem, and the App Store guidelines give them legal coverage to do so.


It’s worth noting that CPython-based apps are not the only ones that may download and execute code that was not included during the app review.
See: case

But, unless you use that “feature” to avoid App Store review, or to include malicious code, Apple is not going to start banning most of the apps that are on the App Store.

All web browsers download untrusted code from the Internet and execute it. Is it so different if it’s Python?


It is. If you’re downloading code in a browser, the code runs in a JavaScript sandbox; the iOS App Store rules additionally require that the browser engine be Apple’s own WKWebView, so it’s Apple’s JavaScript sandbox. That means the opportunities for jailbreaking or security exploits are significantly reduced. Python code is talking directly to the executable, unsandboxed.


No: a normal iOS app is an executable containing native code, so the “sandbox” is implemented using kernel-level access controls, just like on any other Unix platform.

It looks like Apple actually does use them in some contexts:

total 208
-rw-r--r--  1 root  wheel  119024 Nov 10  2022 arm64-apple-ios.swiftdoc
-rw-r--r--  1 root  wheel  134666 Nov 10  2022 arm64-apple-ios.swiftinterface
-rw-r--r--  1 root  wheel  119024 Nov 10  2022 arm64e-apple-ios.swiftdoc
-rw-r--r--  1 root  wheel  134667 Nov 10  2022 arm64e-apple-ios.swiftinterface

total 208
-rw-r--r--  1 root  wheel  119036 Nov 10  2022 arm64-apple-ios-simulator.swiftdoc
-rw-r--r--  1 root  wheel  134676 Nov 10  2022 arm64-apple-ios-simulator.swiftinterface
-rw-r--r--  1 root  wheel  119036 Nov 10  2022 x86_64-apple-ios-simulator.swiftdoc
-rw-r--r--  1 root  wheel  134677 Nov 10  2022 x86_64-apple-ios-simulator.swiftinterface

But this confuses things even more, because they use arm64 rather than aarch64, which is the normalized form used by the GNU tools.

So for those places which must use triplets (and I’m neutral on whether that should include _multiarch), I think the best course is to follow what Rust does, since we’re likely to have closer contact with them in the future, and they’re more consistent with autoconf.

This also applies to sysconfig.get_platform.

platform.node() - the user-provided name of the device, as returned by the [[UIDevice currentDevice] name] system call (e.g., "Janes-iPhone"). For simulated devices, this will be the name of the development computer running the simulator.

Including this in the list implies that platform.node would return something different to os.uname().nodename. Is this actually the case? I don’t have a physical iPhone to test on.

platform seems like a better place for that, because it already has various platform-specific functions with a consistent naming convention. And this would also solve the next issue:

platform.machine() - The device model returned by [[UIDevice currentDevice] model] (e.g., "iPhone13,2"); or "iPhoneSimulator" for simulated devices.

This is inconsistent with the existing platforms, where platform.machine returns the architecture (e.g. arm64 or x86_64).

Instead, how about adding a new function platform.ios_ver(), returning a namedtuple of (release, model, simulator)?

And if you can just do model == "iPhoneSimulator", is the simulator flag even needed?

Thanks - I’m aware of that - and of the multi-layered security around iOS apps (I recently retired as an Apple software dev). But I’m still not completely convinced that having these kinds of apps (with an embedded Python interpreter) does not change the way possible exploits can be quickly activated. For instance, when a new zero-day is discovered either in Apple code or in Python - or in how Python interfaces with iOS - I would think that having these apps around could make it quite a bit easier to activate the exploit. Or not?

If your app provides pip, or has some other way to run exec() or a REPL interface, then you might have an easier time activating an exploit, because you could potentially experiment with injecting different payloads without needing to recompile the app.

However, this isn’t an iOS-specific attack vector. Executing any user supplied code is an inherent security risk. Apple’s approach to securing against this vector would appear to be grounded in the App Store review guidelines; they generally discourage vectors for adding code, and reserve the right to yank your app if it appears to be a security issue.


While Rust is definitely a common target, I’d argue that every binary iOS package needs to interact with Apple’s tooling; only a subset of those need to interact with Rust.

autoconf tooling isn’t a good guide here either, because their support for iOS is… spotty. Aside from the arm64 vs aarch64 schism, they recognise ios as an OS, but not any other Apple platform; and they don’t have any representation of simulators at all. One component of the reference patches is an update to config.sub that adds these details; I’ve tried to submit this patch upstream to autotools, but their patch submission process appears to be a black hole you throw code into and hope it gets merged.

As for _multiarch; the discussions I had at the core team sprint seemed to suggest that _multiarch is an intentionally undocumented internal detail that assumes whatever value is helpful for the platform in question.

Counterpoint: sys.getandroidapilevel() exists (for which there should probably be an iOS analog as well, since there’s a comparable minimum iOS version), so there’s history of the sys module being used for low level platform descriptors.

That said - the other detail to consider is that sys is a compiled module, whereas platform is a Python module, which alters how hard it is to access various APIs. It seems best suited to values that are defined by the configure/Makefile, or can be satisfied by C system calls (calling [UIDevice currentDevice] would be difficult inside sys - possible, but inelegant).

The documentation for platform.machine() describes the value as “The machine type… empty string if it can’t be determined”. This defaults to CPU architecture because that’s about all you can really tell about most laptops and servers; however, I’d argue “iPhone13,2” or “iPhoneSimulator” is a much more accurate description of “the machine type” than “arm64”, especially given that platform.architecture() and platform.processor() exist.

The reference patches include this, mostly as a utility method for the other platform APIs; but it could be documented and exposed in the same way as mac_ver().

Sure - we could omit it and require the user to write a check against the model string; however, given that it’s known to be a boolean property, and comparing with a string is potentially error prone (e.g., spelling errors, capitalization inconsistencies, …), it seems like a no-brainer to include it somewhere as a boolean.

platform.is_simulator() perhaps?
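For concreteness, a sketch of the shape such an API might take, modeled on platform.mac_ver(). The field names follow the namedtuple suggested earlier in the thread, and the stubbed return values are made up for illustration - this is not a committed design:

```python
from collections import namedtuple

# Hypothetical return type for a platform.ios_ver(), mirroring mac_ver().
IOSVersionInfo = namedtuple("IOSVersionInfo", ["release", "model", "is_simulator"])

def ios_ver():
    # A real implementation would query UIDevice / the OS; this stub only
    # demonstrates the proposed shape, with made-up values.
    return IOSVersionInfo(release="15.4", model="iPhone13,2", is_simulator=False)

info = ios_ver()
print(info.release, info.model, info.is_simulator)
```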


platform.node() and os.uname().nodename return the same value: the machine’s network name for the simulator, and the device name (“Janes-iPhone”) on device. The platform.node() value is implemented directly by the call to uname().
