CPython uses its own implementation of big integer arithmetic. It’s mature and stable, but its algorithms are usually not the best known in the field. It seems we sometimes already pay a high price for this; see e.g. the recent story with the integer-to-string conversion length limitation.
Of course, this situation might change. CPython could adopt better algorithms (implemented in C or even in pure Python). But one could argue that it’s not the right project for this. Why not depend instead on some external library that does the same job and has a more appropriate community?
GNU GMP was proposed for this several times in the past (see PR#46139 and PR#66121). Maybe it’s time to try one last time? IIUIC, the major non-technical concern about using GMP was the license.
Licensing issues are beyond my comprehension, so I would like to see a yes/no-type answer here: does it really stop us? This seems to be a non-issue for many projects with liberal licenses (e.g. Julia) and even for proprietary software (such as the Wolfram Engine or Maple).
The major technical problem with using GNU GMP is its “memory management”: when GMP encounters a memory allocation failure, it just aborts the program. But my homework shows that this issue can be solved by using the low-level mpn_*() GMP functions together with custom memory allocation (using setjmp/longjmp to recover from allocation failures). The ZZ library demonstrates this approach, providing an alternative interface to GNU GMP with a Libtommath-like API.
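For contrast, here is the behavior any replacement must preserve (my own illustration, not code from ZZ): in today’s CPython an allocation failure inside an integer operation surfaces as a catchable MemoryError and the interpreter keeps running, whereas stock GMP would abort() the whole process. The setjmp/longjmp wrapper restores exactly this property at the C level.

```python
# CPython turns a failed allocation inside an int operation into a
# catchable MemoryError -- the interpreter survives.  Stock GMP would
# abort() the whole process on the same failure, which is why a wrapper
# (like the ZZ library's setjmp/longjmp trick) is needed before GMP
# could back the builtin int type.

try:
    # Asks for far more memory than any machine has (~2**59 bytes on a
    # 64-bit build), so the allocation fails immediately.
    x = 1 << (1 << 62)
except (MemoryError, OverflowError):  # OverflowError on 32-bit builds
    x = None

print("interpreter still alive, x =", x)
```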
On top of the ZZ library, the python-gmp extension was implemented. I think it is developed enough to play with: the extension has been tested by the CI tests of some mathematical packages like mpmath and diofant (a SymPy fork). In principle, the same trick could allow using GMP for integer arithmetic in the CPython core.
Wait! But we already have the gmpy2 package. Or python-flint. Ah, and Sage integers. (Did I forget something from the zoo of Python interfaces to GMP?) All of them suffer from the memory management problem mentioned above, but that’s probably not a big deal for an external extension. Why not simply point people who care about big integers to these packages?
I believe there are reasons: an external integer type comes with a severe speed loss for small operands (~ machine-sized integers), since some optimization techniques aren’t available for extension types. But practical integer-heavy applications usually mix operands of different sizes. Taking the mpmath CI tests as a poor man’s benchmark, we see that using GMP gives a noticeable speedup (on CPython 3.14): 6m50s without vs 4m42s with. On the other hand, the latter result is close to PyPy3.11 without GMP (4m59s) (and much worse than PyPy with GMP). These results are explained by statistics on operand sizes in mpmath’s tests: ~95% of operands are integers of less than 10 bits, ~99% less than 53 bits. I would say this is a typical situation for applications like computer algebra systems or arbitrary-precision arithmetic packages: lots of small integers, yet still a possible net win from GMP due to better timings for the small share of “big enough” integers.
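To make the operand-size argument concrete, here is a micro-sketch of my own (not the mpmath numbers above): multiplying machine-sized builtin ints costs tens of nanoseconds, while multi-thousand-bit operands cost orders of magnitude more, so even a ~1% share of “big” operands can dominate the total runtime, and that is exactly where GMP’s faster large-operand algorithms pay off.

```python
import random
import timeit

random.seed(0)

def bench(bits, number=10_000):
    """Average time of one multiplication of two `bits`-bit builtin ints."""
    # Force the top bit so the operands really have `bits` bits.
    a = random.getrandbits(bits) | (1 << (bits - 1))
    b = random.getrandbits(bits) | (1 << (bits - 1))
    return timeit.timeit(lambda: a * b, number=number) / number

small = bench(10)              # ~95% of mpmath's operands are this small
medium = bench(53)             # ~99% fit here
big = bench(100_000, number=100)  # rare, but each one is far more expensive

print(f"10-bit: {small:.2e}s  53-bit: {medium:.2e}s  100000-bit: {big:.2e}s")
```

The absolute numbers are machine-dependent, but the gap between the small and big cases is consistently several orders of magnitude, which is why a workload that is 99% small integers can still be bottlenecked by the remaining 1%.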
With GMP we would get the best performance for operands of all sizes, less maintenance burden for core developers, and some simplification in the Python ecosystem (projects like SymPy, mpmath or Sage could unconditionally use builtin integers). Of course, GMP is not the only option (here is an incomplete list of arbitrary-precision arithmetic software; Libtommath was already mentioned). Though it looks to be the most popular, most developed and most optimized one; its performance is hard to beat.