Infinite recursion

In Python version 3.13…
I noticed that it is possible to create infinite recursion: the stack does not overflow. This could be attributed to tail-call optimization, but memory measurements showed that memory is consumed until it runs out.
Here is the code for reproducing such a recursion:

def foo(depth = None):
    print(foo.count)
    foo.count  = 1
    return foo()

foo.count = 1
print(foo())

It is noteworthy that if you remove the default argument, or run this code on version 3.11…, a stack overflow error is raised as expected.

5 Likes

I think this is because of an “optimisation” that notices that it can eliminate a new stack frame. But you need a core dev to confirm.

1 Like

This is neat but I don’t think it’s a problem (I’m not sure if you’re suggesting it is). It has always been possible to write an infinite loop in Python. It’s arguably a required feature for a programming language. As long as you can get out with an interrupt, it’s okay.

The fact that infinite loops are now possible with tail call optimization is actually cool and a useful feature in some scenarios. In purely functional languages where you can rely on this behavior, you might write an event loop this way:

def handle_next(state):
    do_stuff_with(state)
    ...
    handle_next(state)
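Standard CPython performs no tail-call optimization, so written literally like this the loop would eventually exhaust the stack. A common workaround in Python is a trampoline; here is a minimal sketch (trampoline and countdown are illustrative names, not from the post above):

```python
def trampoline(fn, *args):
    """Repeatedly invoke thunks until a non-callable result appears,
    keeping the Python stack flat regardless of 'recursion' depth."""
    result = fn(*args)
    while callable(result):
        result = result()
    return result

def countdown(n):
    # Instead of calling countdown(n - 1) directly (which would grow
    # the stack), return a thunk for the trampoline to invoke.
    if n == 0:
        return "done"
    return lambda: countdown(n - 1)

print(trampoline(countdown, 100_000))  # "done", far past the recursion limit
```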

It’s interesting that it’s running out of memory eventually, though. It doesn’t seem like it should, and I wonder if there is some further optimization possible there.

2 Likes

An endless loop is one thing, optimization is another. There is no optimization in the code that I provided, because the stack frames are kept: you can see this if you measure RAM consumption while the code runs, it grows exponentially.

1 Like

So the limit configurable with sys.setrecursionlimit just gets ignored?

Yes. I got to 1_000_000 and interrupted it myself. If you remove the argument or run on 3.11, the built-in RecursionError is raised at the limit as expected.

Sorry for my English.

A variation that does something after the call:

import sys

print(sys.getrecursionlimit())
print(sys.version)

def foo(depth = None):
    if foo.count < 2000:
        print(foo.count)
        foo.count += 1
        foo()
        foo.count -= 1
        print(foo.count)

foo.count = 1
foo()

Abbreviated output (Attempt This Online!):

1000
3.13.0 (main, Nov  9 2024, 10:04:25) [GCC 14.2.1 20240910]
1
2
3
...
1998
1999
1999
1998
...
3
2
1

The fact it doesn’t respect the sys recursion limit seems to be a bug IMO.

6 Likes

You’re missing the + in your example:

def foo(depth = None):
    print(foo.count)
    foo.count += 1
    return foo()

foo.count = 1
print(foo())

From my understanding what was changed in CPython 3.13 is that a Python function call no longer results in a C stack frame being created. In other words the C code in the interpreter no longer calls itself recursively to implement a Python function call. That means that the C stack size is not a reason to limit recursion any more.

However the Python call stack still exists and the Python call frames are still there. They just live in the heap. The interpreter is supposed to check how many of those there are and compare it with the recursion limit before each function call.
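For comparison, this is the behavior the limit check is supposed to give: an ordinary recursive call raises RecursionError once the number of live Python frames reaches the limit.

```python
import sys

sys.setrecursionlimit(200)  # keep the demo small

def f(n):
    return f(n + 1)

try:
    f(0)
except RecursionError as exc:
    print("raised:", exc)
```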

My guess here is that in 3.13 something like the adaptive interpreter optimises this into something that no longer checks the recursion limit, e.g. maybe the optimiser sees that this function foo is calling itself and reduces it to a loop.

Reduce the amount of printing and you can bork your system faster:

def foo(depth = None):
    if foo.count % 1000000 == 0:
        print(foo.count)
    foo.count += 1
    return foo()

foo.count = 1
print(foo())

If I run that then in a couple of seconds it consumes gigabytes of RAM and Ctrl-C locks up:

$ python t.py 
1000000
2000000
3000000
4000000
5000000
6000000
7000000
8000000
9000000
10000000
11000000
12000000
13000000
14000000
15000000
16000000
17000000
18000000
19000000
^C^C^C^C^C^C^C^C^C^C^C^C^C^C^\Quit (core dumped)

Trusty Ctrl-\ still works to exit with a core dump though.

Here is one from years ago which achieves much faster builtin performance borkage:

# DANGEROUS:
a = [...]
a.extend(iter(a))

Depending on OS and other things it might be that you need a hard reboot to recover from these things so make sure to save your work etc before trying.
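The mechanism there is that extend() appends items while its argument iterates over the same, growing list, so the iterator is never exhausted. A bounded version (capped with islice so it actually terminates) makes this visible:

```python
from itertools import islice

a = [1, 2, 3]
it = iter(a)
# Each item read from `it` is appended back onto `a`, so the iterator
# always has more to yield; islice caps it at 10 items so we can stop.
a.extend(islice(it, 10))
print(len(a))  # 13: the original 3 items plus 10 copied from the front
```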

1 Like

Wouldn’t emitting a Warning be a good compromise then, so the user is warned, and knows how to exit the code? Something like

RecursionWarning: Running this code may break Python. If you want to exit, use 'CTRL + \'

Obviously that should only be emitted when it can cause actual harm, so only on the OS, version, and implementation combinations that actually break.

IMO informing the user about such errors is quite important, as it might reduce the number of people who run into problems from running simple code.
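For illustration, a sketch of what that could look like with the existing warnings machinery. Note that RecursionWarning here is hypothetical: CPython defines no such class today.

```python
import warnings

class RecursionWarning(Warning):
    """Hypothetical warning class; not part of CPython."""

# Emit the proposed warning through the standard warnings machinery.
warnings.warn(
    "Running this code may break Python. If you want to exit, use CTRL + \\.",
    RecursionWarning,
)
```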

Wasn’t that in 3.11?

Inlined Python function calls

Yes, I misremembered.

That is what RecursionError is for. It just isn’t being raised here probably because of a bug.

I think you misunderstand what Ctrl-\ is. In my terminal on Linux, Ctrl-\ sends SIGQUIT, which is a more forceful way of telling a process (not necessarily Python) to stop.
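You can see the difference from Python itself on a Unix system: CPython installs a handler for SIGINT (Ctrl-C) that turns it into KeyboardInterrupt, while SIGQUIT (Ctrl-\) keeps its OS default of terminating the process with a core dump, which is why it still works when Ctrl-C is swallowed. A quick Unix-only check:

```python
import signal

# Ctrl-C sends SIGINT; CPython installs default_int_handler for it,
# which raises KeyboardInterrupt in the main thread.
print(signal.getsignal(signal.SIGINT))
# Ctrl-\ sends SIGQUIT; Python leaves it at the OS default (SIG_DFL),
# which terminates the process with a core dump.
print(signal.getsignal(signal.SIGQUIT))
```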

But if you remove the default argument, the recursion-depth error is raised as it should be again.

def foo():
    if foo.count % 1000000 == 0:
        print(foo.count)
    foo.count += 1
    return foo()

foo.count = 1
print(foo())

1 Like

I suppose I got too excited because I’d like to see real TCO in Python (even though there’s no plan for it as far as I know). I just think it’s neat!

1 Like

Isn’t this an example of the Halting problem - Wikipedia

1 Like

This is what makes me think it is to do with the optimising interpreter or one of the other optimisation things that have recently been added to CPython. The optimising interpreter will see when some function is being called lots of times and then try to optimise its byte code. It will do that using various heuristics and other things and I can well imagine that having a parameter even if unused might affect the decisions it makes. If one of the optimiser’s code paths has a bug then the manifestation of that bug could be very sensitive to details in the code.

To be clear I think this is a bug. It would be better to open a github issue rather than discussing it here.

1 Like

It looks like what is happening is something like this:

The CALL instruction takes the function that is called, makes a new stack frame, and then goes back to where the interpreter checks that there is still enough room left before reaching the recursion limit, then goes on to the beginning of the called function. If there isn’t enough room left, the interpreter raises a RecursionError.

If one function is being called at a particular place, and the number of arguments given is the same as the number of arguments that function takes, then the CALL instruction gets replaced with a CALL_PY_EXACT_ARGS. This instruction will check that it isn’t about to run out of room, and if it is, it turns the CALL_PY_EXACT_ARGS back into a CALL. If that’s not the case, then a new stack frame is made and it skips straight to the start of the called function.

However, if the number of arguments in the call is different from the number of arguments that the function takes, like in the case where there is a default argument, then the CALL instead becomes a CALL_PY_GENERAL. This instruction also, at the end, skips straight to the start of the called function. But, it omits the part where it checks that it isn’t running out of room. So, it ends up never checking the recursion limit at all, and the RecursionError isn’t raised.

So I agree that this is probably a bug.

3 Likes

I put up an issue and a fix here: Runaway recursion on 3.13 and higher for _PY_FRAME_GENERAL · Issue #132744 · python/cpython · GitHub

4 Likes

Thank you for your responsiveness