Hello everyone,

I was writing some code that involves a number with 23 million digits. After running the code, it threw an error saying that it cannot perform division on the given number. The number is made by exponentiation of 2. I read somewhere that using multiplication instead of exponentiation allows operations like this to work. Can anybody help me with this? How am I supposed to write code that multiplies 2 that many times?

Floating point division has limits that integer division can get around - at the cost of, well, being integer division, and thus only giving you the quotient with no remainder. You can get integer division using the double-slash "floor divide" operator. Might not solve your problem, but it's worth a try!
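To make the distinction concrete, here's a small sketch (the number is shrunk to 2,000 bits so it runs instantly, but the same applies at 23 million digits): true division `/` converts to a float, which overflows for huge integers, while floor division `//` stays in exact integer arithmetic.

```python
# True division (/) goes through float and overflows for huge integers;
# floor division (//) stays exact integer arithmetic.
big = 2 ** 2_000  # far beyond the float range (~1.8e308)

try:
    big / 3
except OverflowError as e:
    print("true division failed:", e)

q = big // 3  # exact integer quotient, no float involved
print(q.bit_length())  # still a roughly 2000-bit integer
```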

Oh thank you so much. Tbh I didn't expect a reply until tomorrow so TYSM!

Your idea seems quite reasonable and I tried it and well, at least it isn't giving an error in the first place. The code is just taking its time to give out the output, like it's still calculating it or whatever. But at least it kinda works.

Thanks Chris!

No probs! People are on this Discourse at all times of the day, so there's a bit of a luck factor in exactly when you'll get a response.

You started with a 23 million digit number. That's pretty whopping, but far from the maximum that Python can handle. However, you may find that the slowest part isn't the division itself, but the displaying of the result on your screen. What were you dividing it by? If you were dividing by a fairly small number, you'll still have millions of digits, and depending on your computer, that can be slow to display.

Actually the number was in a while loop in which, for one condition, the number is tripled and increased by one, and in the other condition the number is halved, i.e. divided by 2. So it was a continuous loop, so I guess it might take some time considering it's tripling the number several times during the process.
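(For anyone following along, the loop being described is presumably something like the following sketch; the starting value here is tiny so it finishes instantly, unlike the 23-million-digit case.)

```python
# Minimal sketch of the loop described above (assumed 3n+1-style rules):
# odd -> triple and add one, even -> halve with floor division.
n = 27  # small starting value for illustration
steps = 0
while n != 1:
    if n % 2:          # odd
        n = 3 * n + 1
    else:              # even
        n //= 2        # floor division keeps everything as exact integers
    steps += 1
print(steps)  # 27 takes 111 steps to reach 1
```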

BTW do you know if it might be an IDE problem? What I mean to ask is: are there certain IDEs that are slow in big calculations and such? I am using PyCharm right now.

It absolutely IS possible, but again, it's much more likely to be slowing down the output (due to rendering it within the IDE) than slowing down the actual run time. You should be able to use a process monitor like `top` on a Unix system or the Task Manager on Windows to see what's happening - if the CPU time is in the Python process, you're fine, but if it's in the IDE, it's slowing you down.

P.S. there is also `divmod` if the remainder is important.
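A quick illustration: `divmod` returns quotient and remainder in one call.

```python
# divmod(a, b) returns (a // b, a % b) in a single call.
q, r = divmod(17, 5)
print(q, r)  # 3 2
assert q * 5 + r == 17  # the two pieces reassemble the original
```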

Ah, Collatz I conjecture. Have fun!

I found writing them out in binary backwards was quite interesting, but never considered going that big…

Lol I wasn't expecting anyone to guess it but cheers

Some back-of-the-envelope calculations, aided by use of Python's `timeit` module, give some idea of what sort of running times to expect (though I'm guessing about what you want to do), which is helpful if you want to determine whether what you're trying to do is reasonable or whether it's something that'll take longer than the age of the universe to complete…

Let's suppose for the sake of argument that you want to apply the Collatz iteration starting with the second-largest-known Mersenne prime, `2**p - 1`, where `p = 77232917`, and that you're interested in counting how many steps it takes to reach `1`.

Python's `timeit` module helps a lot with figuring out some basic timings.

The initial computation of `2**p - 1` is fairly fast considering the size of the number it's creating - a few hundred milliseconds:

```
mdickinson@lovelace ~ % python -m timeit -s 'p = 77232917' 'n = 2**p - 1'
1 loop, best of 5: 311 msec per loop
```

It's significantly faster if we use bit shifting instead of the power operation:

```
mdickinson@lovelace ~ % python -m timeit -s 'p = 77232917' 'n = (1<<p) - 1'
100 loops, best of 5: 3.96 msec per loop
```

But we're not terribly interested in this timing, since it's going to be negligible compared to the time to carry out the Collatz iteration to completion. So let's look at that. There are two types of steps: multiply by 3 and add 1, and divide by 2. Here are some timings for integers of the above size:

```
$ python -m timeit -s 'p = 77232917; n = (1<<p) - 1' 'm = 3 * n + 1'
50 loops, best of 5: 6.24 msec per loop
$ python -m timeit -s 'p = 77232917; n = (1<<p) - 1' 'm = n // 2'
10 loops, best of 5: 21.3 msec per loop
```

Again, shifting right by 1 is faster than dividing by 2:

```
$ python -m timeit -s 'p = 77232917; n = (1<<p) - 1' 'm = n >> 1'
50 loops, best of 5: 4.31 msec per loop
```

The other thing you need to be able to do is tell whether a huge integer is even or odd, but using bit operations that's orders of magnitude faster than either of the above operations (it's a constant-time operation), so we can pretty much ignore it:

```
$ python -m timeit -s 'p = 77232917; n = (1<<p) - 1' 'm = n & 1'
10000000 loops, best of 5: 38.1 nsec per loop
```

So we've got some steps taking ~6 ms, and some taking ~4 ms. Let's be generous and assume that you have a faster machine than mine (which wouldn't be much of a stretch), and that on your machine each step takes around 1 millisecond. (The times get slower as the numbers get bigger, but also faster as the numbers gradually decrease to 1, so our average over the entire process could be a little smaller, but 1 msec still seems like a good ballpark figure.)

So how many steps might we expect to need? Well, there's a heuristic for the number of steps required for a large "random" number, based on the assumption that steps of the form `n = n // 2` occur with roughly the same frequency as the double-step sequence `n = (3n + 1) // 2`. That gives a prediction of the number of steps as `3 / log(4/3) * log(n)`. For us, `n = 2**p - 1` so `log(n)` is `p * log(2)` (near enough), and we get `3 / log(4/3) * log(2) * p`, or around 560 million steps.
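Plugging the numbers into that formula directly confirms the ballpark:

```python
from math import log

p = 77232917
# Heuristic prediction for a large "random" n with log(n) = p * log(2).
estimate = 3 / log(4 / 3) * log(2) * p
print(f"{estimate:.3e}")  # roughly 5.6e8, i.e. around 560 million steps
```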

But there's an interesting twist: we don't have a large *random* number. We have a large number of a very special form: `2**p - 1`. And for numbers of that form, with a bit of work you can convince yourself that the first `2*p` steps of the Collatz iteration are very predictable: they alternate between steps of the form `n = 3n + 1` and steps of the form `n = n // 2`, and after `2*p` steps you've grown that original value to *exactly* `3**p - 1`. (To convince yourself of this, try it on a smaller number like `2**10 - 1`: the first 20 steps of the Collatz iteration will convert that to `3**10 - 1`.)
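That small-case check is quick to run:

```python
# Starting from 2**10 - 1, the first 2*10 = 20 Collatz steps should
# land exactly on 3**10 - 1, as claimed above.
p = 10
n = 2 ** p - 1
for _ in range(2 * p):
    n = 3 * n + 1 if n % 2 else n // 2
assert n == 3 ** p - 1
print(n)  # 59048, which is 3**10 - 1
```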

After that things become a lot less predictable, and it's reasonable to try to apply the heuristic above (but now to `3**p - 1` instead of `2**p - 1`). So we end up with `(2 + 3 log(3) / log(4/3)) p` steps, or around `13.5 p` steps. For `p = 77232917` that's a bit over a billion steps.
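Again, a direct evaluation of the refined estimate:

```python
from math import log

p = 77232917
# 2*p predictable initial steps, then the heuristic applied to 3**p - 1,
# whose log is (near enough) p * log(3).
estimate = (2 + 3 * log(3) / log(4 / 3)) * p
print(f"{estimate:.4e}")  # about 1.04e9, i.e. a bit over a billion steps
```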

So for 1 billion steps at 1 step per millisecond, you're looking at around 1 million seconds, or around eleven and a half days. So not at all unreasonable (but bear in mind that our heuristics and assumptions are *very* hand-wavy, and it could easily be several weeks or just a couple of days instead).

As @Rosuav points out, you want to avoid actually printing any of these huge numbers out at any stage: the basic Collatz operations take time linear in the number of bits of the number you're operating on, but converting a Python integer to a string is a quadratic-time process in Python versions earlier than 3.12, and still superlinear in Python >= 3.12. Either way, printing any of these numbers will slow things down a lot.

For what it's worth, with a couple of tricks I was able to reduce the running time on my machine to a few hours, to confirm that the stopping time for `2**77_232_917 - 1` is exactly `1039248803` (so indeed a bit over a billion), which matches what the OEIS has at A181777. Happy to share Python code if that's of interest.

Yeah sure I would love to take a look at your code once!

The main observation is that given a huge integer `n`, we can determine the next few Collatz iteration steps for `n` just by looking at its trailing bits. For example, suppose that `n` is greater than `8` and its last three bits are (in binary) `011` (for example, `n=11`, `n=19`, etc.). Then without knowing anything about the other bits of `n`, we already know that:

- `n` is odd, so the next step will transform `n` to `3n + 1`
- after that transformation, the last three bits of `n` must be `010`, so `n` is even, so the following step will be `n -> n / 2`
- after the division by `2`, the last *two* bits of `n` are `01` (at this point we no longer have any information about the third bit), so `n` is odd and we again do a `3n + 1` step
- now the last two bits are `00`, so we get two `n / 2` steps in succession
- at this point we can't say anything more without examining more bits of `n`.

So we end up with a total of 5 steps, and plugging through the maths, the overall transformation maps `n` to `(9n + 5) / 8`.

So in general, we can examine the last few bits of `n`, do a cheap small-integer computation to compute in advance what transformation will be applied for the next few steps, and then apply that transformation to the full `n`. That greatly reduces the number of operations that are being performed at the full precision of `n`.

Below is code that does this; it completes in under 5 hours on my machine. There are a couple of additional things going on:

- We use `gmpy2` for the large integer arithmetic. For me, this speeds things up by around a factor of 4.
- We apply the above approach in two stages: we cache transformations based on the last 16 bits of `n` (there are only 65536 of these, so space requirements are small), and then use those cached transformations to compute transformations on the fly based on the last 1024 bits of `n`.
- Since we're interested in Mersenne numbers and we know that for an input `n = 2**p - 1` the first `2*p` steps convert `n` to `3**p - 1`, we can just start with `3**p - 1` instead (and then add `2*p` to the final step count). This only saves about 15% of the computation time - significant, but not huge.

I'm sure that there are many ways that this could be sped up further.

Here's the code:

```
from functools import cache


@cache
def superstep(n, t):
    """
    Determine the next few steps that would be applied by the Collatz iteration.

    Given an integer *n*, examine the *t* low-order bits of *n* to find the next
    Collatz steps that would be applied to *n*, and the resulting overall
    transformation.

    Returns integers a, b, s, steps, where a, b and s represent the transformation
    n |-> a * n + b >> s, and steps is the number of basic steps that the
    transformation represents.
    """
    a, b, s, steps = 1, 0, 0, 0
    while t:
        if n & 1:
            n = 3 * n + 1 & ~(~0 << t)
            a, b = 3 * a, 3 * b + (1 << s)
        else:
            t -= 1
            n >>= 1
            s += 1
        steps += 1
    return a, b, s, steps


def hyperstep(n, t, chunk_size):
    """
    Same as superstep, but chunks the transformation computation.
    """
    a, b, s, steps = 1, 0, 0, 0
    while t:
        t2 = min(chunk_size, t)
        a1, b1, s1, steps1 = superstep(n & ~(~0 << t2), t2)
        a, b, s, steps = a1 * a, a1 * b + (b1 << s), s1 + s, steps1 + steps
        n, t = ((a1 * n + b1) & ~(~0 << t)) >> s1, t - s1
    return a, b, s, steps


# Parameters
BITS = 16
BITS2 = 1024

# Derived parameters
MASK = ~(~0 << BITS)
MASK2 = ~(~0 << BITS2)


def stopping_time(n):
    """
    Find the number of steps required to reach 1 in the Collatz iteration.
    """
    total_steps = 0
    while n.bit_length() > BITS2:
        a, b, s, steps = hyperstep(n & MASK2, BITS2, chunk_size=BITS)
        n = a * n + b >> s
        total_steps += steps
    while n.bit_length() > BITS:
        a, b, s, steps = superstep(n & MASK, BITS)
        n = a * n + b >> s
        total_steps += steps
    while n > 1:
        n = 3 * n + 1 if n & 1 else n >> 1
        total_steps += 1
    return total_steps


def mersenne_stopping_time(p):
    """Stopping time for the Mersenne number 2**p - 1."""
    return stopping_time(3**p - 1) + 2 * p


# Use gmpy2 to speed up arithmetic with huge numbers.
from gmpy2 import mpz

# Quick tests of known good values.
assert mersenne_stopping_time(mpz(86_243)) == 1_158_876
assert mersenne_stopping_time(mpz(756_839)) == 10_197_081

# And the value we're really interested in.
print(mersenne_stopping_time(mpz(77_232_917)))
```