Cached integers and big integers at compile time

So I know CPython caches the integers from -5 up to 256 (so the fact that 256 is 256 evaluates to True is understandable to me).
BUT
I observed that if I pass a number above 2147483648 (might be off a bit) into a calculation, then even if the result falls back into the -5 to 256 range, a new int object will be created and kept.
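
(For context, here is the cache behavior I mean. This is CPython-specific, int identity is an implementation detail, and the names below are just for illustration.)

one = 1                  # a literal: the parser hands back the cached object
n = 10                   # runtime operand, so the compiler can't fold anything
print((n - 9) is one)    # True: small results of runtime arithmetic reuse the cache
big1 = n ** 100          # computed at runtime, far outside the cache range
big2 = n ** 100
print(big1 is big2)      # False: two distinct big-int objects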

*** edited unnecessary stuff out. Sorry guys, I misunderstood the disassembly.

In [88]: dis("y = 2 ** 31 % (2**31-1)")
  1           0 LOAD_CONST               0 (1)
              2 STORE_NAME               0 (y)
              4 LOAD_CONST               1 (None)
              6 RETURN_VALUE

So the 1 is calculated by the compiler, but the output still suggests a new int object is created:

In [91]: x = 1

In [92]: y = 2 ** 31 % (2**31-1)

In [93]: y
Out[93]: 1

In [94]: x is y
Out[94]: False
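
To rule out IPython doing something special, the same thing reproduces with plain compile()/exec(). A sketch; the False on the last line is what I get on my interpreter, and I gather newer CPython versions may differ:

one = 1  # the cached small int

# Compile the assignment on its own, the way IPython compiles each cell.
code = compile("y = 2 ** 31 % (2**31 - 1)", "<test>", "exec")
print(code.co_consts)    # (1, None): the expression was folded at compile time

ns = {}
exec(code, ns)
print(ns["y"] is one)    # False here: the folded 1 is a fresh object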

So is the issue here some kind of integer limit (that makes it create a new int object)?
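
One more data point that makes me suspect CPython's internal 30-bit digits rather than a 32-bit machine limit. The digit-size framing is just my guess, and the results shown are what I get on my interpreter:

one = 1
ns = {}

# Divisor 2**30 - 1 fits in a single 30-bit digit.
exec(compile("z = 2 ** 30 % (2**30 - 1)", "<one-digit>", "exec"), ns)
print(ns["z"] is one)    # True: the remainder comes back as the cached 1

# Divisor 2**31 - 1 needs a second digit.
exec(compile("y = 2 ** 31 % (2**31 - 1)", "<two-digit>", "exec"), ns)
print(ns["y"] is one)    # False here: a fresh int object is created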

Would love an explanation of this behavior, thanks!

Very odd.

I don’t know if it is a bug, but I’ve reported it on the bug tracker.

https://bugs.python.org/issue46961