# Python loop duration

I am trying to execute this code to try how loops work with high iterations:

```python
N=200
ae=[1]*(N+2)
bp=[1]*(N+2)
ap=[1]*(N+2)
aw=[1]*(N+2)
T=[0]*(N+2)
Ti=[1]*(N+2)
dif=[1]*(N+2)
p=0
z=0

while p<150000:
    for k in range(1,N+1):
        T[k]=(ae[k]*T[k+1]+aw[k]*T[k-1]+bp[k])/ap[k]
        dif[k]=abs(T[k]-Ti[k])
    T[N+1]=(aw[N+1]*T[N]+bp[N+1])/ap[N+1]
    Ti[k]=T[k]
    z=z+1
    p=p+1

print(p)
```

However, it takes a long time to execute, around 3 minutes or so. On the other hand, if I run exactly the same code in Matlab, it only takes 5 seconds. Why does this happen? Is something wrong with my computer settings, or is this just how Python works?

Thank you very much

It takes about 15-20 seconds on my computer (depending on the Python version), which is now almost 10 years old and was nothing special when I bought it, so it's hard to understand the complaint.

Hi,

So, do you know what that means? Why does it take that long on my computer?

I just bought it this year.

My 10-year-old PC took 41 seconds.

Python is built not for speed, but for convenience.

One of its features is the ability to extend it with modules written in compiled languages that give it the speed without sacrificing the convenience.

If you're going to do a lot of computation, have a look at, say, `numpy`.

You will have to share the details of what your system is for us to guess.
What else was running on your computer?
What OS?
What CPU model?
How much memory?

But with the same computer, if I run the same code in Matlab, it goes so fast. However, with Python it is so slow. Same CPU, same memory, same characteristics.

But this is simple code; I don't understand why I have to use libraries.

On my computer, this code takes about 7 seconds, but it overflows: I get lots of `inf` values in the array `T`. Once this starts happening, things might sometimes slow down dramatically on some architectures?
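For illustration (a generic example, not the OP's exact recurrence): Python floats are IEEE-754 doubles, and plain arithmetic that leaves the representable range silently produces `inf` rather than raising an error:

```python
import math

# Python floats are IEEE-754 doubles; the largest finite value is about 1.8e308.
x = 1e308
x = x * 10            # leaves the representable range
print(x)              # inf
print(math.isinf(x))  # True
```

Once `inf` values appear in `T`, every subsequent update that touches them stays `inf`.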

Do you mean your arrays to be integers? Or floating point numbers?

Are you sure about `for k in range(1,N+1)`? Python uses zero-based arrays. Do you mean there to be a line which is always working on the last entry: `T[N+1]=(aw[N+1]*T[N]+bp[N+1])/ap[N+1]`?

I don't think the inf is why it happens, because the real program has realistic values; here I was just trying to fix my problem with how long everything takes to execute. I have now tried making my arrays floats, and the same thing keeps happening.
In general, the program works well, because if I try the same thing in Matlab, it doesn't happen.

Matlab immediately delegates the work to underlying C code that is optimized for array computations, and that is why it is fast. Python does not have any such optimized array code built-in. However, the `numpy` library also delegates work to underlying C code that is optimized for array computations, which is why `numpy` code can be as fast as Matlab.
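As a rough illustration of that delegation (a generic elementwise update, not the OP's exact recurrence, which carries a dependency from one element to the next), compare a pure-Python loop against a single vectorized NumPy call:

```python
import time
import numpy as np

N = 1_000_000
data = list(range(N))

# Pure-Python loop: one interpreted bytecode dispatch per element.
t0 = time.perf_counter()
out_py = [x * 2.0 + 1.0 for x in data]
t_py = time.perf_counter() - t0

# NumPy: the same arithmetic done in one call into compiled C loops.
arr = np.arange(N, dtype=np.float64)
t0 = time.perf_counter()
out_np = arr * 2.0 + 1.0
t_np = time.perf_counter() - t0

print(f"list comprehension: {t_py:.4f}s, numpy: {t_np:.4f}s")
```

On typical hardware the NumPy version is one to two orders of magnitude faster, which is roughly the gap the OP is seeing against Matlab.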

Lesson: The correct analogue to Matlab is not plain Python by itself, it is "Python + numpy".

FWIW your code runs in under 10 seconds on my M3 MBP.


Can you actually do their calculation with NumPy? Seems tricky.

Are we flexing our hardware here? 8.7 seconds, Intel 14700KF

More useful information, though, would be what version of Python this is being run on, which could make a HUGE difference. @rotara03 what version were you using? I tested it on CPython 3.13. Any version from 3.11 onward can be considered current, and anything 3.8 onward is reasonably recent, but the further back you are from there, the less advantages you'll see. Notably, Python 2.7 is now quite ancient, AND it has a number of significant architectural differences. Your code will probably still run, but I did find a huge performance penalty. In fact, it's possible that something actually isn't working at all, as it's been six minutes and not finished yet; I'll leave it burning a CPU core for a while and see if it finishes, but otherwise I'll assume that this completely doesn't work in Py2.

Soâ€¦ make sure youâ€™re using Python 3.
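A quick way to check (assuming a typical installation where `python3` is on the PATH):

```shell
python3 --version
python3 -c 'import sys; print(sys.version_info.major)'   # prints 3
```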

@Rosuav In Python 2 you get ints/longs and they grow quite large and thus slow.

Can you actually do their calculation with NumPy? Seems tricky.

I am responding to their statement, taken at face value:

On the other hand, if I execute the exactly same code with matlab, …

Hmm. Conceptually, that should be the same in Py3, with all ints being longs; but yes, there have been a number of performance enhancements on the Py3 int type.

I gave up on the Py2 one after 17 minutes, and that was after changing it to xrange instead of range (since that's a fairer comparison). There must be something that isn't behaving the same way; possibly because int/int → int instead of int/int → float. And that, in turn, likely implies that the OP was not using Py2. Still, use of older Pythons definitely impacts performance, although I was unable to recreate the OP's level of slowdown (e.g. 3.6 and 3.8 took about 50% longer than 3.13, but not minutes).

But Py2 has `/` as integer division and Py3 will convert those to floats. So you start to get floats in your array and eventually overflows everywhere. In Py2 you stay with BigInts. edit: you snuck it in!

I don't think this loop is particularly useful, because I don't think it's a complete example of the computation in question; in particular, it just overflows.


How can I run Python with NumPy? Do I have to define every variable with `np.array()`, or is there a command that sets that for all my variables?

Not really an architectural difference, rather one of the classic semantic ones. In 2.x, the two `/`s in there perform floor division, which keeps the numbers integer (really `long`) and they do get quite large overall. So 3.x is performing faster by just losing that arbitrary precision (which as far as I know Matlab is also doing).
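A minimal sketch of that semantic difference, as seen from Python 3:

```python
# Python 3: `/` is true division and always yields a float;
# `//` is floor division and keeps integers integer.
print(7 / 2)    # 3.5
print(7 // 2)   # 3
# Under Python 2, a plain `7 / 2` between ints would also have floored
# to 3, so the recurrence above stays in arbitrary-precision integers there.
```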

If OP is somehow stuck running the code on 2.x then that explains everything, including