Mojo = Python with C++/GPU performance?

It seems that Chris Lattner (creator of LLVM and Swift), Tim Davis (a TensorFlow & ML veteran) et al. threw Python, C++, Rust & compilers into a blender and came up with some pretty cool stuff.

In addition to a unified AI framework, they came up with a new programming language that’s essentially an extension of Python, and which describes itself as follows:

Mojo is still young, but it is designed to become a superset of Python over time.

As such, I imagine this would be of interest to people here, and since I haven’t seen a thread on this yet, I thought I’d open one. I believe it touches at least on the typing side, and probably also on the faster-CPython & nogil efforts.

Among the things they’ve added are:

  • explicit control over memory layout (think class X: but with struct – see the sketch after this list) & allocation
  • metaprogramming
  • lifetimes (Rust-inspired, apparently with less of a learning curve)
  • built-in keywords for SIMD-vectorization, parallelization & tiling, including “autotune”-ing this on the fly to any architecture the respective code runs on
  • all the while keeping native interoperability with all Python packages
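
To make the struct point concrete, here is a minimal sketch paraphrased from the launch docs (the language is young, so keywords and exact syntax may well shift, and Pair is just an illustrative name):

    struct Pair:
        # Fields are declared with var and a fixed type, so the memory
        # layout is known at compile time (no dynamic __dict__ as in a class).
        var first: Int
        var second: Int

        fn __init__(inout self, first: Int, second: Int):
            self.first = first
            self.second = second

        fn sum(self) -> Int:
            return self.first + self.second

    fn main():
        let p = Pair(3, 4)   # a plain value with struct layout, no heap boxing
        print(p.sum())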

A lot of the work leverages the MLIR effort in LLVM that’s been a hotbed of AI development, which – to my rough understanding – does most or all of the heavy lifting of abstracting over all the different hardware optimizations. Still, I’d honestly be very impressed if they can pull off even just a fraction of that.

They claim immediate 2.5-3x speed-ups on PyTorch/TensorFlow[1], and speed-ups of several thousand times over vanilla Python on things like matrix-matrix multiplication or the Mandelbrot set. Obviously, no one is seriously programming the latter two in native Python, but keeping (much of) Python’s expressivity without sacrificing much performance (they claim to beat C++ too) would unlock a lot of potential.

Announcement: Modular: Our launch & what's next
Keynote: https://www.youtube.com/watch?v=-3Kf2ZZU-dg [2]
Engine Details: Modular: Inference Engine
GitHub (currently empty[3]): GitHub - modularml/mojo: The Mojo Programming Language


  1. Comes with big claims too: no rewriting, no retraining, no quantization, no conversion, no plugins, just an out-of-the-box performance boost on any cloud, and all in a multi-{model,framework,device,cloud} wrapper. ↩︎

  2. The most “hands-on” highlight of the keynote starts at 34:00 ↩︎

  3. open-sourcing is planned eventually, as per the keynote ↩︎

2 Likes

So it’s currently entirely proprietary? I’m not going to touch it, then. I’ve had too many bad experiences with corporate-locked versions of free systems.

5 Likes

I see statements about wanting to be a “superset of Python” in the FAQ, but I’m not sure I understand what this means exactly.
My general impression is that this is a completely new language (cool), targeting primarily AI use-cases (trendy), and borrowing inspiration from Python (sure, why not) – but it doesn’t look like it aims to be an actual alternative Python implementation (beyond a possible interpretation of “superset”), and it definitely doesn’t look like it aims to support native extensions targeting the Python C-API.

2 Likes

The language seems to be adding C++-like features on top of Python, which doesn’t seem like a terrific idea for usability (and I say that as someone who writes C++ on a daily basis).

5 Likes

It’s my impression that they do, given that they say the entire Python ecosystem (including all its many extensions targeting the C-API) is natively usable.

Yeah, the template metaprogramming looks very C++-flavoured, but now that I’ve looked at the docs, they do criticize the C++ variant of it. I guess they need some form of compile-time programming to get their SIMD & parallelization magic to work without extreme code duplication. I have no idea if those problems could be solved otherwise, though I don’t find the func[params](signature) syntax (which they describe as an extension of PEP 695) too offensive.
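
For reference, that parametric style looks roughly like this in their docs (scale is a made-up name of mine, and the type names are as I read them in the launch material, so treat the details as approximate):

    # width is a compile-time parameter, so a single definition can be
    # instantiated for whatever vector width the target hardware offers,
    # instead of duplicating the function per width.
    fn scale[dt: DType, width: Int](x: SIMD[dt, width], factor: SIMD[dt, width]) -> SIMD[dt, width]:
        return x * factor

    fn main():
        # Explicit 4-wide float32 vectors; the dt/width parameters of scale
        # are inferred from the argument types at the call site.
        let v = SIMD[DType.float32, 4](1.0, 2.0, 3.0, 4.0)
        let f = SIMD[DType.float32, 4](2.0)   # a single value splats across all lanes
        print(scale(v, f))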

To me Mojo seems more like Cython (especially the .pyx part). Whether your title makes sense depends on how you interpret the =; it can be correct in some senses, but certainly not, like, mathematically.

3 Likes

It’s definitely early days for Mojo. I plan to talk to Chris about how he plans to make it a true Python superset that will run existing Python code out of the box.

So far, Mojo is a separate language (they have struct but not class, Int but not int, etc.), and their “CPython compatibility” strategy involves calling a helper function to which you pass the name of a module; it imports and executes that module in CPython and returns a Mojo proxy object. Mojo then treats CPython as a separate language runtime, and when values are passed between Mojo and CPython they are “marshalled” (boxing/unboxing Int/int, etc.). This is syntactically very smooth, because Mojo and Python have compatible syntax (so you can write e.g. x+y where x is a Mojo value and y a CPython object), but it doesn’t speed up running CPython code at all.
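
In concrete terms, the pattern in their notebooks looks roughly like this (the import path for the interop helper seems to be a moving target across releases, and the numpy usage is just my example):

    from python import Python

    fn main() raises:
        # Imports and runs the module inside an embedded CPython interpreter;
        # np is a Mojo proxy object wrapping the returned PyObject.
        let np = Python.import_module("numpy")
        let arr = np.arange(6).reshape(2, 3)
        # Attribute access and operators on the proxy are dispatched back to
        # CPython, so this part runs at ordinary CPython/NumPy speed.
        print(arr * 2)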

I presume that the reported speedups on PyTorch etc. are obtained by rewriting key PyTorch kernels in Mojo, applying the optimizations shown in the notebooks on the Mojo site.

18 Likes