Is there any way to make this code effectively use all available CPU threads?

I would like this code to be able to utilize all available CPU threads if possible. I'm a total newb when it comes to coding and have no idea where to start with adding the needed multithreading.


```python
import time

def nextInt(input1, max_input):
    start_time = time.time()  # Record the start time
    cont = True

    try:
        input1 = int(input1)
    except ValueError:
        return 'ERROR'

    while cont and input1 <= max_input:
        steps = 0
        input2 = input1

        while input2 != 1:
            steps += 1
            if input2 % 2 == 0:
                input2 = input2 // 2
            else:
                input2 = 3 * input2 + 1
        value = [input1, steps, input2]
        if input1 != max_input:
            input1 += 1
        else:
            cont = False

    end_time = time.time()  # Record the end time
    total_time = end_time - start_time
    print(f"Total running time: {total_time} seconds")
    input("Press any key to exit")

max_value = int(input('Enter the maximum input value: '))
nextInt(1, max_value)
```

Python isn’t the best language to make your code concurrent, because GIL.
Now, what exactly is this supposed to do? The way I see it, you calculate the sum of steps it takes to divide all numbers from 1 to max?

The GIL is completely irrelevant here. Regardless of the language, single-threaded code is single-threaded code.

To make this use more than one CPU core, you first have to separate the job into multiple threads that can run at the same time. What you have here is completely serial.

I didn’t say that the GIL and single-threaded code are related in any way, did I? And this is basically a loop where one variable goes from 1 to the max number, and you have another variable that is incremented by 1 in each iteration. This is perfectly parallelizable if this is all that’s needed.

No, you didn’t. What you said was that the GIL means this won’t work, which is wrong. More accurately, this won’t work because it’s single-threaded, and if it were multi-threaded, it might be possible to parallelize it. And if it’s parallelized and properly done using something like numpy, it can be run across multiple CPU cores even with a GIL.
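As a sketch of that numpy idea (my own illustration, not anyone's code from this thread): the step counts for a whole range of starting values can be advanced in lockstep with vectorized array operations, so the per-element arithmetic runs in NumPy's C loops rather than the Python interpreter.

```python
import numpy as np

def collatz_steps_vectorized(max_n):
    """Collatz step counts for every starting value 1..max_n,
    advanced one step per pass over the whole array."""
    n = np.arange(1, max_n + 1, dtype=np.int64)
    steps = np.zeros(max_n, dtype=np.int64)
    active = n != 1  # values that have not reached 1 yet
    while active.any():
        even = active & (n % 2 == 0)
        odd = active & ~even
        n[even] //= 2          # halve the even entries in place
        n[odd] = 3 * n[odd] + 1
        steps[active] += 1
        active = n != 1        # finished entries drop out
    return steps
```

This does redundant passes once most values have finished, so it's a sketch of the vectorization idea rather than a tuned implementation.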

It annoys me when people blame the GIL for every single problem with Python and multithreading. I have had plenty of multithreaded Python programs, including some that have consumed more than 100% CPU time. Can you stop blaming the wrong thing?

Well, first of all I didn’t say that it wouldn’t work, especially because of GIL. Read it again if you need to.
Second, all code originally is single-threaded. It doesn’t mean you cannot parallelize it. In fact, I can probably write around 10 different concurrent versions of that code if I think about it a little, and 3-4 versions if I don’t. Actually, I already did: I made and tried 3 different concurrent versions of that code. I just need OP to verify what the final output should be.
I have no idea what kind of code you think is “concurrent”, but parallelizing loops like that one is called an embarrassingly parallelizable problem.

Why did you mention the GIL at all? Were you, or were you not, blaming it for parallelization problems? If you were, then my point still stands. If you weren’t, then it’s not relevant, and you shouldn’t have called it out as a problem.

The word “especially” was not in your post. You said that Python isn’t the best language, because GIL. You blamed the GIL and nothing but the GIL.

Writing concurrent code in Python will often make it run slower than single-threaded code, because GIL. It would work just fine, run just fine, do the job just fine, and be 2-3 times slower than the same exact thing single-threaded. Even for embarrassingly parallel problems that have read-only data in separate memory spaces and do not intersect. Because GIL is shit.
When you need to process 12 terabytes of data and try to make it concurrent because your case is a perfect example of an embarrassingly parallel problem, and then you write all proper code and algorithms and it runs 3 times slower than single-threaded, then you suddenly realize that
writing concurrent code in Python is not the best idea. Because GIL. GIL can go and … well, you know.

No, this code is iterating through the operations of the Collatz Conjecture or “3n + 1 problem”. It’s an inherently iterative process AFAICT so I’m not sure there’s any concurrent improvement to speed up the result for a single input. [1] Memoization seems like a more plausible approach to improvement if building up history is an option (and for that each input could be executed concurrently).

  1. speculation, I could certainly be wrong ↩︎


Hmm, I think you are right. It seems he is indeed trying to do that one, and if that is the case, then it probably would not be concurrent indeed. The provided code doesn’t seem to do that, though. Maybe it’s a bug, or it’s unfinished.

Ok, I think that the outer loop is making different starting numbers for the Collatz conjecture, while the inner loop supposedly makes the sequence for the provided number. In this case we can make the starting numbers concurrent.

And that is flat-out wrong. It is actually not hard to make heavy use of threads if you design your Python program properly (mostly, by using numpy for the heavy lifting). But thank you for making your GIL hatred obvious. You do not understand it and you hate it, therefore it must be to blame for everything.

Maybe you just need to learn how to write parallelizable code in Python? If you write a bubble sort in Python and it’s slow on a billion-item list, does it mean that Python’s list object is slow, or that you picked the wrong way to do things?

Even if it IS fundamentally impossible to multithread, it’s not that hard to spawn multiple Python processes. It really sounds like you attempted the naive method, found that it didn’t work, and immediately blamed Python instead of doing a few minutes of research.

Thanks for confirming. It definitely looked very serial to me, which is why I said what I did.

Building up history absolutely IS possible. If you can design a pure function that does part of the work, it’s trivially easy to decorate it in functools.cache. That might be a good strategy.
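For instance (a sketch of the functools.cache idea; the function name and shape are mine, not OP's):

```python
from functools import cache

@cache
def collatz_steps(n):
    """Pure function: number of Collatz steps from n down to 1.
    The cache means any value reached before is answered without
    recomputing its whole tail."""
    if n == 1:
        return 0
    if n % 2 == 0:
        return 1 + collatz_steps(n // 2)
    return 1 + collatz_steps(3 * n + 1)
```

One caveat: this recursive form can hit Python's recursion limit on very long orbits, so an iterative version with an explicit dict would be safer for large ranges.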

You know neither my problem nor my code and yet you are making assumptions about my competency and what I did. I have noticed before that you actually seem to believe you know better than the top experts in the field, and you keep talking this way.
Yes, what I said is true. Not because it’s my opinion, but because of facts and numbers. Not only me, but many people have done it, measured it and posted results.
And handing your “workload” to another library doesn’t make Python any more concurrent.
Before you actually speak you should go and try to write some concurrent code in Python and another language, then compare the numbers. Because the way you talk actually sounds more like you read some stuff without actual experience and just have faith in CPython more than anything else.
And I also won’t respond about this topic to you anymore.

It’s certainly technically possible, I guess “is permissible” would have been a better word choice for my intent.

Okay. Go ahead and write Python code that doesn’t use any other language or library. That includes CPython’s own code. I’ll wait.

No? Okay then. So it’s not cheating to write your logic in one language and let the heavy lifting be done by a library. Awesome.

You are the one who made generalizations. You never said anything about your specific code, you just blamed the GIL for everything. If you’d said “the nature of MY PROBLEM made the GIL impossible to use”, that would be different, but that’s not what you said.


I don’t fully understand what your code is doing, but it looks like each iteration is dependent on the previous iteration. To spread this type of calculation across multiple cores you need to be able to split the computation work into independent blocks, which may be impossible.

If you would like the code to run faster then you could try to remove unneeded steps from the code. As a side note you could remove the variable cont from your code and use break to break out of the loop.
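For example, the cont flag can be dropped in favor of break like this (a sketch with the inner loop condensed; the function and variable names are mine):

```python
def count_steps(start, stop):
    """Collatz step counts for each starting value from start
    to stop, leaving the outer loop with break instead of a
    cont flag."""
    results = []
    while True:
        n, steps = start, 0
        while n != 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            steps += 1
        results.append(steps)
        if start == stop:
            break  # replaces the cont = False bookkeeping
        start += 1
    return results
```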

Best wishes,

The computations of the outer loop (checking the length of the Collatz’s orbit for all integers from input1 to max_input) are independent from each other. They can be done separately.


How could this be implemented into the code though? After reviewing the comments here, I realize I am in way over my head.

The first step is to figure out which parts can be done in parallel, and find a way to call a function or something for each one. Then you can think about using multiprocessing or any of the nice high level wrappers around it. But the first thing has to be dividing the problem into parts that don’t depend on each other.

You cannot do internal loop concurrently because it uses computation of each iteration for the next iteration. But you can parallelize your external loop. This will do approximately what your posted code does, but generating your numbers concurrently. Another question is - why do you specifically need concurrency? If you want to optimize it, there are other ways to make it run faster.

```python
import multiprocessing

def collatz_sequence(input_value):
    steps = 0
    while input_value != 1:
        if input_value % 2 == 0:
            input_value //= 2
        else:
            input_value = input_value * 3 + 1
        steps += 1
    return steps

def parallel_collatz(start_values):
    with multiprocessing.Pool() as pool:
        results = pool.map(collatz_sequence, start_values)
    return results

if __name__ == "__main__":
    start_values = list(range(1, 10000))

    results = parallel_collatz(start_values)

    for start, steps in zip(start_values, results):
        print(f"Starting from {start}, steps: {steps}")
```