AOT instead of JIT

Self-contained Python executables have been possible for decades, since the language runtime is embeddable, and supports the notions of both “frozen” modules and zip archives as module import path entries.
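For illustration, the standard library already covers the zip-archive half of this idea on its own: a minimal sketch using zipapp (the myapp/ directory and file names are placeholders), which bundles an application into a single archive the interpreter can run directly. It still needs an installed interpreter; tools like PyInstaller add the embedded runtime on top.

```python
# Minimal sketch: bundle a package directory into one runnable archive.
# Assumes a layout like  myapp/__main__.py  (placeholder names).
import zipapp

zipapp.create_archive(
    "myapp",                              # source directory with __main__.py
    target="myapp.pyz",                   # resulting single-file archive
    interpreter="/usr/bin/env python3",   # shebang so it can be run directly
)
# Run it with:  python myapp.pyz   (or ./myapp.pyz on Unix)
```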

PyOxidizer is a recent option for that approach (it embeds the application and a Python runtime in a Rust-built executable), but other tools like PyInstaller have been offering the feature since the Python 2 days.

(As @Rosuav pointed out, the big downsides of this approach are that the executables get quite large and they’re not portable across target platforms anymore)

4 Likes

I’m aware of the existing solutions, but I find them sub-optimal: the packaged applications are not compiled to machine code, and the tools are not part of an official build process. What I mean is more than merely bundling the entire VM with a “launcher” into the standalone package, which is what I currently do with various solutions like BeeWare.

My concern is at a lower level: embedding only the necessary runtime facilities and only the packages that are actually used (from the standard library and elsewhere), while ensuring (a) a standardized build process and (b) optimization.

For instance, when I compile a Go package, I simply run “go build” and everything falls into place. I aspire to have a similar experience with Python, and I believe this could be seamlessly integrated into the compile and build process using full AOT compilation.

I raised this topic because the proposed JIT needs a build process to generate IR with patching templates, so I think a full AOT compiler with a proper, standard build process would be a better investment, while CPython could still be installed and used to execute scripts as part of Python’s toolchain if needed.

1 Like

This just…isn’t how it works, is the thing. There was never a point at which “AOT compiler” or “JIT compiler” were equally possible. The reason the JIT is a thing right now is outlined in the PEP 744 thread:

  • work was done for 3.12 that formalized the interpreter into a DSL
  • the idea of a copy-and-patch JIT method was recently developed and published
  • @brandtbucher spent the time to actually implement the thing!

The copy-and-patch design made the JIT vastly more feasible to implement. There is no design sitting around to make an AOT compiler for Python that will Just Work™ as a drop-in-replacement in the same way. And there is no pool of developers to be deployed to build such a thing[1]

It just doesn’t make sense to ask “why a JIT and not a full AOT compiler” when no one even has a way to build the latter right now.[2]


  1. at least, not in their volunteer time…many corporate developers have tried in the past ↩︎

  2. unless you just want the “bundled executable” thing…which has been available for ages ↩︎

4 Likes

I appreciate your response. I see the self-contained executables I’m describing as something different, with different tools, from what has previously been available (and which I have personally used and still use).

I believe the core team has done a pretty amazing job in the last two releases (3.11 and 3.12) to improve Python’s performance, so I am not here to express dissatisfaction; quite the contrary.

The “copy-and-patch JIT” is still new to me. I watched @brandtbucher’s presentation last year, and as a Python developer and lover of the language, I would like to thank him in any case.

I am aware of the constraints, and I fully understand that compiling a dynamic language is notoriously difficult. But dynamic dispatch and dynamic type resolution are also possible in machine code, so even with little to no performance gain, I would be happy with partial results if I could eliminate the need to deploy the entire VM wherever I want to run a Python program.

This is what tools like Nuitka already do. This technique doesn’t need to be part of the reference interpreter, because it doesn’t rely on the reference interpreter at runtime.
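As a rough illustration of that workflow (treat the flags as a sketch; they can vary between Nuitka versions):

```python
# app.py -- an ordinary Python program; no source changes are needed to compile it.
def main() -> None:
    print("hello from a compiled binary")

if __name__ == "__main__":
    main()

# Typical Nuitka invocations (run in a shell; exact flags may differ by version):
#   python -m nuitka app.py               # executable that still uses an installed runtime
#   python -m nuitka --standalone app.py  # bundle the runtime alongside the executable
#   python -m nuitka --onefile app.py     # pack everything into a single binary
```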

6 Likes

Thanks for the suggestion, but I think it does not solve the problem of having an officially maintained solution, or at least an official “compile option”.

I hope the compiler (the code generator, not the JIT) and the build system stay out of CPython itself and are maintained separately.

How does Nuitka not fit this description?

2 Likes

So it needs to be official…

… but also not official? I am confused.

Short answer:
There is no shortage of compilers for Python, but as long as they are not part of the evolution process of the language itself, they can’t be well maintained (Nuitka has compatibility up to 3.11 only).

Longer answer:
With Python 3.5, type hints (PEP 484) were introduced, reflecting both interest in and an orientation towards some form of gradual static typing.

Now, the “Faster CPython” initiative reflects the need to improve Python’s performance, especially given that Python’s current use cases are very demanding, and the demand for performance and predictability is expected to grow.

The response in the previous releases was very good, and the proposed JIT is a clear signal that there is a vision and great interest in some sort of “compiled Python”. But unless type information is utilized and, at some point, enforced, neither a JIT nor AOT compilation is going to significantly boost performance (the current JIT gives about 5%).

Whether with JIT/VM or AOT, Python is going to be more and more strict and static by default in order to make a real difference. I feel the recent developments are a clear signal that the community at large is ready.

So what about the VM? The VM was an appealing idea for various reasons, but I think making a new programming language today that needs a VM is an exceptionally bad idea. I don’t want to see Python evolve into another Java or C# with their clunky VMs! I think Go and Swift are good examples of VM-less, garbage-collected languages.

I think with the current JIT proposal we are really talking about a roadmap, and I believe investing in a roadmap based on AOT compilation, eventually making static typing the default, would be a better approach: a compiler infrastructure with a build system (both of which are needed for the JIT anyway), kept separate from CPython, that could eventually evolve into some “compiled Python” implementation with static type checking, without fragmentation.

… but also not official? I am confused.

away from CPython’s codebase.

Nope, I am still confused.

1 Like

I don’t think this is true at all. I’ve seen no discussions about making Python less dynamic (or as you say, more static). Maybe I’m misunderstanding what you mean by more static.

5 Likes

I am not stating a fact; I feel it is going to be the case sooner or later.

“static” as in static type checking.

3- “There are already many compilers for Python”:

Yes, and they are widely used in production environments. This is certainly true of scientific python.

The center of Python’s community is the Steering Council and PEP process.

Yes, but as far as I know (speaking as a longtime core developer), the SC and core developers in general do not want responsibility for any AOT compiler, let alone an “official” one. We definitely do not want there to be just one. Python and CPython development is sometimes influenced by the existence of multiple compilers; py2wasm is a new one, announced a week ago.

People will hesitate to adopt any solution and integrate it into a production environment if it is not widely known …

Existing compilers are widely known – and used.

It is currently intended that static typing never become mandatory.

1 Like

Thanks for the clarifications. I am not stating definitive facts; I’m trying to provide points of view.

The approach in scientific computing is not always systematic; it’s often messy and lacks a strict “systems” mentality. For example, much of the ML space is about writing messy code that implements some algorithm and relies on some sort of compiler for acceleration, or to take advantage of certain hardware features. Once the model or the result is there, the system around it gets implemented in a different manner (e.g. using C/C++).

I think my points can be better summarized as follows:

If there is interest in, and plans for, a compiler with a JIT as proposed, why not include static type checking during compilation, even if only as an optional flag?

If the above is possible, why opt for JIT? Why not simply choose AOT with injected runtime support, also as an optional flag?

I’m not saying these things are easy or can be done all at once. What I’m saying is that the proposed JIT is going to take many releases before it shows good results, and during that time many, if not most, of the codebases that need a JIT will be rewritten in AOT-compiled languages like Go and Rust.

Nonetheless, I appreciate the progress being made, and I think the past releases showed very good results.

Once again, you’re welcome to write an AOT compiler for Python. Otherwise, you’re just casting aspersions and asking other people to do things differently, without any evidence to show why.

Because that would be a duplication of information. Type information about what the code is actually doing is going to be far more valuable than the type hints provided by annotations: the JIT doesn’t gain anything by looking at static typing information. If you don’t believe this, look at how a tracing JIT works.
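A toy illustration of why (the function and values here are made up): the annotation claims int, but what a JIT specializes on is the types it actually observes at run time, and nothing forces those to match the annotation.

```python
# Hypothetical example: the annotation says `int`, but nothing enforces it at run time.
from decimal import Decimal

def total(values: list[int]) -> int:
    acc = 0
    for v in values:
        acc += v          # a JIT specializes this on the types it *observes*
    return acc

total([1, 2, 3])                        # observed: int     -> integer fast path
total([1.5, 2.5])                       # observed: float   -> a different path
total([Decimal("1.1"), Decimal("2")])   # observed: Decimal -> generic path
```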

If you want static type checking plus compilation, you could also look into something like mypyc.
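A minimal sketch of that route (the module name fib.py is a placeholder; mypyc ships with mypy): you keep ordinary annotated Python, and mypyc compiles it into a C extension that also enforces the annotated types at the call boundaries.

```python
# fib.py -- ordinary annotated Python; mypyc compiles it into a C extension.
def fib(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Compile (in a shell):  pip install mypy && mypyc fib.py
# Afterwards `import fib` loads the compiled extension module, and a call like
# fib("10") raises TypeError because the annotations are enforced at the boundary.
```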

  • JIT is easier to implement
  • it requires fewer changes in the Python code that is being sped up
  • it probably requires fewer changes in the CPython code base
  • it doesn’t require any changes in workflow for end users
  • it can be done incrementally more easily than an AOT compiler

Well, that depends on how you define “good”. The JIT already provides tiny speedups right now.

Yes. And? Why is this a bad thing? CPython’s biggest strength has always been how easily it integrates with other languages via C APIs.

But also: these code bases have always existed, and both Go and Rust have been available for quite a few years, and yet there are still code bases that would benefit from a JIT. So what is changing now that suddenly means these codebases are going to vanish before the JIT provides “good” results?

And: unless you fundamentally redesign large parts of Python, no AOT compiler for Python is going to be able to keep up with Go/Rust/C. This starts with fundamental things like Python having only arbitrary-precision integers (“BigInt”) by default, so there is always going to be a benefit to writing in one of those languages.
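A quick illustration of the integer point: Python ints grow to arbitrary precision by default, which a straightforward AOT translation cannot simply map onto a machine word the way Go’s int64 or C’s long is.

```python
# Python ints never overflow; they grow to arbitrary precision as needed.
x = 2 ** 100
print(x)               # 1267650600228229401496703205376
print(x.bit_length())  # 101 -- far beyond a 64-bit machine word
```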

1 Like

you’re welcome to write an AOT compiler for Python

I don’t need to; others have done a pretty good job within their capacity, and many of those compilers could simply be adopted and optimized.

asking other people to do things differently

I don’t think so. I am not asking for anything; I’m discussing the topic to get opinions.

I never asked for a compiler for Python, and to be honest I never cared for one. I am happy doing things in Python within its limits while doing other things in other languages (I use Rust with Python a lot!).

The trigger for this discussion is that if there is going to be a compile process with a JIT and a build system for Python, as proposed, then why not skip straight to something that I suppose could be better.

without any evidence to show why.

I think I have a pretty good argument, but I definitely don’t expect it to make sense universally.

Do you know how annoying it is when someone posts a thread in “Ideas”, but once the idea gets some negative feedback, claims that it’s not actually an idea? I have no idea how I’m supposed to react to this. Is it a serious proposal or isn’t it? Are you just throwing shade at current work, or are you actually discussing something that can be improved? Please, STOP making us nail jelly to the wall!

7 Likes