JIT performance possibilities

You can point out ways in which Python has drawbacks compared to other languages. Just make sure not to lie about or misrepresent the efforts others have made, especially after being explicitly told not to do that.

If, while doing that, you insult the developers of that language, it would get flagged. (And if it’s completely off topic it would probably get flagged as well.)

There is plenty of point in this forum existing, and no, it is not used to suppress information. You can have a perfectly reasonable discussion about these things here if you do it respectfully. (And I also think that others could respond less defensively/more respectfully as well.)

This thread, though, is not really the right place to have a general discussion about these things. At this point the discussion has hijacked the thread, which is supposed to be about the PEP proposal. Probably the last ~20 posts should be moved to a different thread.


As a seasoned core developer I am a bit surprised by the defensiveness of some responses here. I did contribute my share of performance improvements over time (though I’m not involved in the impressive work sparked by @markshannon a couple of years ago), and I cannot imagine being offended by the suggestion that CPython’s interpreter is historically slow because, well… it is.

A lot of work was poured into projects like Cython or Numba, for example, to avoid interpreter overhead in specific cases. Past projects like Psyco or Unladen Swallow, to name a couple, have tried to speed up the interpreter. PyPy, at the time, was started with the belief that it was difficult to make CPython faster, and that an entirely different architecture and community setting were necessary. These are facts that are well-known by anyone who has been interested in the subject of Python performance in the last 20 years.

And, yes, CPython and its ecosystem can still be plenty fast for some tasks, especially where its open architecture and rich C API allow for fruitful collaboration between bytecode and native code.
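A tiny stdlib-only illustration of that bytecode/native split (the function names here are my own, and exact timings will vary by machine and interpreter version): the manual loop below runs entirely in bytecode, while the built-in `sum` does the same accumulation in C.

```python
from timeit import timeit

data = list(range(100_000))

def manual_sum(xs):
    # pure-bytecode loop: every iteration goes through the interpreter
    total = 0
    for x in xs:
        total += x
    return total

# built-in sum() performs the same accumulation inside native code
t_loop = timeit(lambda: manual_sum(data), number=50)
t_builtin = timeit(lambda: sum(data), number=50)

print(f"bytecode loop: {t_loop:.4f}s  built-in sum: {t_builtin:.4f}s")
```

The built-in is typically several times faster here, though the ratio depends on the Python version; this is the same effect, in miniature, that makes NumPy-style extensions pay off.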


I think that’s where the problem arises. If one isn’t familiar with the history it is easy to make statements that seem dismissive of all the work that’s been done so far.


This is a pretty good starting point:

Most ideas that have been suggested have been tried already. E.g., tagged pointers for things like integers that fit into a machine word are a common implementation technique. I did a quick-and-dirty prototype some years ago during a core sprint. It was easier than I expected to get it mostly working; making it work for real would be massively more work. The performance gains were not amazing.
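For readers unfamiliar with the technique, here is a toy sketch of the idea (this models only the tagging arithmetic, not how CPython actually represents objects): small integers are encoded inline in the word with a low tag bit set, so they never need a heap allocation.

```python
# Toy model of pointer tagging in a machine word (illustrative only).
# Real implementations tag actual machine pointers; here the "word"
# is just a Python int.

def tag_int(value):
    # store the integer inline, shifted left with the low bit set
    return (value << 1) | 1

def is_tagged_int(word):
    # aligned heap pointers have a zero low bit, so bit 0 marks inline ints
    return word & 1 == 1

def untag_int(word):
    # arithmetic right shift recovers the original (possibly negative) value
    return word >> 1

word = tag_int(42)
assert is_tagged_int(word)
assert untag_int(word) == 42
```

The appeal is that arithmetic on small ints can skip allocation and pointer dereferencing entirely; the cost, as noted above, is that every piece of code touching object words (including C extensions) has to understand the tagging scheme.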

Python is hard to optimize for a number of reasons, and so comparing it to something like JavaScript is not very helpful. For CPython, one of the big constraints is compatibility with C extensions. The extensions are a big part of Python’s success. For many other language implementations, doing an extension like numpy (where you add new types to the language that integrate fairly seamlessly) is not so easy. Alternative Python implementations have struggled to efficiently support extensions (both pypy and Skybison, for example). The Skybison example is interesting in that the core part was quite a bit faster than CPython (as I understand) but once C extensions were used, a lot of the gains disappeared.

A lot of work is still going on (faster-cpython project, free threading, C API overhaul). If you are interested in this kind of thing, do some reading first to find out what’s been done and what’s being worked on.


Regarding general performance of Python

I think Python is not as slow as some think.

Naive Python usage is often slower in comparison. However, there are many ways to make it faster if one is artful in writing efficient Python, and with an appropriate combination of optimisation techniques it is often possible to write code whose performance is highly competitive with other languages.
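One concrete example of such a technique (with illustrative sizes of my own choosing): picking the right data structure can change the complexity class, e.g. doing membership tests against a set instead of a list.

```python
from timeit import timeit

haystack_list = list(range(50_000))
haystack_set = set(haystack_list)
needle = 49_999  # worst case for the linear list scan

# list membership scans every element; set membership is a hash lookup
t_list = timeit(lambda: needle in haystack_list, number=200)
t_set = timeit(lambda: needle in haystack_set, number=200)

print(f"list: {t_list:.4f}s  set: {t_set:.6f}s")
```

The gap grows with the size of the collection, which is exactly why this kind of "artfulness" can matter more than raw interpreter speed.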

(provided, of course, that one doesn’t compare apples with pears)

In time, with all the effort being put into it, I think Python will be as fast as other languages at a similar abstraction level, without one needing to be very artful about it.

Regarding JIT

A quick read of Python 3.13 gets a JIT answered most of my questions about what it is and how it differs from other JITs.

Of course, this is only a high-level overview, but it was enough for me to put it into general context.

A few questions for those who know more about it:

  • Is this article accurate/correct?
  • Is something important missing?
  • Is there something that should be corrected?

I get the impression from previous discussions that the two of us have had that we do not write similar code for solving similar problems.

It is also often not possible to write Python code that is highly competitive with other languages. I have hit these limits many times and done measurements, benchmarks, tried many variations of optimisation and so on. Many times I have ended up writing at least part of the code in another language like C or writing wrapper code to call into something like a C library.

I have used other languages like this because I found that there was just no way to get within a factor of 10x (sometimes 100x!) of the speed that the C library achieves, and for me that is very often too big a speed difference to pass over. If that kind of speed difference isn’t a problem for you then you are likely writing very different code from me for solving very different problems.
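For what it’s worth, the quickest version of that "call into a C library" route doesn’t even need a hand-written extension module; ctypes can load a shared library directly. A rough sketch (library name resolution is platform-specific, so this may not work everywhere):

```python
import ctypes
import ctypes.util

# Locate and load the system C math library; find_library may return
# None on platforms where the name cannot be resolved.
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature: double cos(double)
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = (ctypes.c_double,)

# Call straight into the C library, with no wrapper module at all
print(libm.cos(0.0))
```

This is fine for a quick experiment, but as the next paragraph notes, turning something like this into a robustly packaged wrapper is where the real work starts.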

It is a strength that Python makes it relatively easy to call into other languages, but as the maintainer of a Python package that wraps a C library I can tell you that packaging it up is a lot of work. I would rather the need for this were much smaller, so that we could all do it less often and just write/use Python code that runs fast enough for the task at hand.