Why? It looks like the Python code Neil has now (str_to_int() in gh-90716) is already faster, though likely with the same asymptotic behavior (it's basically the same algorithm, but top-down recursive instead of bottom-up). It is limited to decimal string inputs, but that's because nobody in real life gives a rip about strings in bases other than 10 and powers of 2 (and the latter are already fast).
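For concreteness, here's a minimal sketch of the top-down idea. To be clear, this is my illustration, not Neil's actual code from gh-90716, and the name (str_to_int_sketch) and cutoff value are made up:

def str_to_int_sketch(s, cutoff=3000):
    # Sketch only -- not the gh-90716 code; the cutoff is invented for
    # illustration. Split the digit string near the middle, convert the
    # halves recursively, and recombine with one big multiply, so
    # CPython's subquadratic (Karatsuba) int multiplication does the
    # heavy lifting.
    if len(s) <= cutoff:
        return int(s)  # small enough that plain quadratic int() wins
    mid = len(s) // 2
    lo = len(s) - mid  # number of digits in the low half
    return str_to_int_sketch(s[:mid]) * 10**lo + str_to_int_sketch(s[mid:])

Note that 10**lo gets recomputed all over the recursion here; caching those powers is where the dict mentioned later comes in.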
That 4300-digit limit doesn’t make sense to me, and I haven’t been able to find the reasoning behind it. Here it is with the currently released 3.10.6 on my box:
$ py -m timeit -s "s = '9' * 4300; i = int(s)" "int(s)"
5000 loops, best of 5: 84.8 usec per loop
$ py -m timeit -s "s = '9' * 4300; i = int(s)" "str(i)"
1000 loops, best of 5: 271 usec per loop
How can operations that can be performed well over a thousand times per second (from the timings above, roughly 11,800/sec for int(s) and 3,700/sec for str(i)) be considered a vector for “a DoS attack”?
But the limit is also so small that asymptotic behavior becomes increasingly irrelevant. In general, the fancier the algorithm, the higher the overheads to keep track of it all. That’s why, e.g., nobody on Earth uses FFT-based integer multiplication until operands get “very” large.
For smaller inputs, I expect the code in Neil’s PR becomes more attractive, because it doesn’t incur the overhead of building temporary lists. It builds a dict to memoize the powers actually needed, but that’s O(log log n) space (tiny).
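The memoization might look something like this. Again, this is my sketch, not the PR’s code; pow10 and str_to_int_memo are invented names, and I split at power-of-two digit counts so the same few exponents recur:

_pow10_cache = {}

def pow10(e):
    # Memoize 10**e. Only a handful of distinct exponents (all powers
    # of two in this scheme) are ever requested, so the dict stays tiny.
    if e not in _pow10_cache:
        _pow10_cache[e] = 10 ** e
    return _pow10_cache[e]

def str_to_int_memo(s, cutoff=3000):
    if len(s) <= cutoff:
        return int(s)
    # Peel off the largest power-of-two digit count strictly below
    # len(s); recursive calls keep asking pow10 for the same few
    # exponents, which is what makes the memoization pay off.
    lo = 1 << ((len(s) - 1).bit_length() - 1)
    return str_to_int_memo(s[:-lo]) * pow10(lo) + str_to_int_memo(s[-lo:])

The point of splitting at powers of two is that the cache is shared across the whole recursion, so each big power of 10 is built at most once.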