Faster large integer multiplication

Details are what sink grand schemes, and there’s no way to out-guess what they may turn out to be.

You never know. For example, in the old days GMP was rejected in many projects because its reaction to running out of memory was "kill the process". That's no good in the CPython context. CPython reacts to a failed memory allocation request by freeing the temporary memory it allocated along the way, putting everything back in a "sane" state, and raising a MemoryError exception a program can catch.

```python
>>> x = 1 << 1000000000000
Traceback (most recent call last):
...
MemoryError
>>> # And now I can continue as if nothing bad happened.
```
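To make "an exception a program can catch" concrete, here's a minimal sketch of the recovery pattern this behavior enables. The helper name is my own invention, and whether a given huge allocation actually raises MemoryError (rather than, say, getting OOM-killed by the OS) depends on the platform's memory configuration:

```python
def try_big_shift(nbits):
    """Attempt to build a huge integer; recover instead of dying.

    Because CPython frees its temporaries and raises MemoryError on a
    failed allocation, the interpreter is still in a sane state in the
    except clause and the program can simply carry on.
    """
    try:
        return 1 << nbits
    except MemoryError:
        return None  # allocation failed; everything else is intact

print(try_big_shift(10))  # → 1024; modest sizes work fine
# try_big_shift(1_000_000_000_000) may return None on a machine that
# can't satisfy the allocation, instead of killing the process.
```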

Does GMP still work that way? I don’t know. I know they changed things to allow using an external (to GMP) memory allocator, but I have no idea whether they went on to move Heaven and Earth (as CPython does) to support safe recovery from an allocation failure. That has to be forced to work - it doesn’t happen by magic.

Details matter a whole lot, and they’re often in areas that catch us by surprise.

As Oscar just mentioned in a different reply, a predictable one is sorting out GNU licenses. I’m not a lawyer, and we would have to pay one to get a semi-credible answer about what including LGPL-licensed code would imply for us.

Note that I’m not saying that anything of this sort kills the idea. I am saying that any number of surprises can pop up that would kill it. There’s just no way to know without putting major effort into getting most of the way to working code. Which I can pretty much guarantee will be more work than you’re expecting :wink:. CPython is not a new project and we can’t “break” anything anymore.

As I briefly hinted at before, while I wouldn't oppose it, I'm not that keen on it either. Even if CPython did use mpz integers, I'd still install gmpy2, because it's the many functions it supports that are the real value for me.