Python-3.11.0 Build failure

My Python build on x86 architecture is failing with a MemoryError. I cross-checked the available memory as well, and everything seems normal. The error occurred while executing the make command, at the step that builds the frozen modules. Any idea how to fix this?

Can you post the actual output you’re seeing? There’s no way to diagnose what’s happening without the build logs.

./Programs/_freeze_module importlib._bootstrap /mydir/zuhu/python/Python-3.11.0/Lib/importlib/_bootstrap.py Python/frozen_modules/importlib._bootstrap.h

I see that ./Programs/_freeze_module is a program that the Python build has generated, and it’s trying to create a new file by executing this program, which takes an input path and an output path.

Immediately after this command (which is taken from the Makefile) executes, I get a MemoryError.
I’m afraid I can’t upload the build logs right now, as I don’t have access to them. When I went on to debug, I saw the program returning NULL for marshalled at the line

    PyObject *marshalled = compile_and_marshal(name, text);

in Programs/_freeze_module.c, which then goes into PyErr_Print(); a few lines below to print the error.
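
To illustrate what I mean, here is a minimal, simplified sketch of that error path (this is not the actual Programs/_freeze_module.c; compile_and_marshal_sketch is a made-up stand-in): a compile helper returns NULL when anything inside the parser or compiler raises, and the caller surfaces the pending exception, in my case a MemoryError, via PyErr_Print().

    /* Simplified sketch of the error path described above; the helper name
     * is made up, and this is not the real Programs/_freeze_module.c. */
    #include <Python.h>

    /* Hypothetical stand-in for compile_and_marshal(): compile source text
     * and return NULL if the parser/compiler raised (e.g. a MemoryError). */
    static PyObject *
    compile_and_marshal_sketch(const char *name, const char *text)
    {
        PyObject *code = Py_CompileString(text, name, Py_file_input);
        if (code == NULL) {
            return NULL;  /* an exception (such as MemoryError) is now pending */
        }
        /* the real helper would marshal `code` into a bytes object here */
        return code;
    }

    int
    main(void)
    {
        Py_Initialize();

        PyObject *marshalled = compile_and_marshal_sketch("<frozen test>", "x = 1\n");
        if (marshalled == NULL) {
            PyErr_Print();   /* prints the pending exception and traceback */
            Py_Finalize();
            return 1;
        }

        Py_DECREF(marshalled);
        Py_Finalize();
        return 0;
    }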

That’s very odd, we’ve not seen that problem on any of our buildbots. What OS are you using? What configure options did you use? Does the problem still occur if you try a completely clean checkout? Where did you get your source code? (If using GitHub, you should be on the 3.11 branch but not on 3.11.0.)

How much memory is free in your system?

I downloaded the gzipped source tarball from the official Python website. When I debugged further, I found that p->error_indicator is being set to 1 in the file_rule function in Parser/parser.c, but when I try to step into the if condition in file_rule, which is

    if (
        (a = statements_rule(p), !p->error_indicator)  // statements?
        &&
        (endmarker_var = _PyPegen_expect_token(p, ENDMARKER))  // token='ENDMARKER'
    )

The number of statements to track is too large to step through; however, I know that this is the function causing the error. What is the significance of the file_rule function in the parser?
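
To make sure I’m reading that generated code correctly, here is a simplified standalone sketch of the same comma-operator pattern (the Parser struct and sub-rule below are made up for illustration; the real definitions live in the CPython sources):

    /* Simplified sketch of the generated-parser pattern quoted above: each
     * alternative calls a sub-rule, then checks p->error_indicator using
     * the C comma operator. The types here are made up for illustration. */
    #include <stdio.h>

    typedef struct {
        int error_indicator;   /* set to 1 when a sub-rule hits an error */
    } Parser;

    static int
    statements_rule(Parser *p)
    {
        /* pretend the sub-rule failed, e.g. because an allocation failed */
        p->error_indicator = 1;
        return 0;
    }

    int
    main(void)
    {
        Parser parser = { .error_indicator = 0 };
        Parser *p = &parser;
        int a;

        if (
            (a = statements_rule(p), !p->error_indicator)   /* statements? */
        )
        {
            printf("alternative matched: %d\n", a);
        }
        else {
            printf("error_indicator was set, so the whole rule fails\n");
        }
        return 0;
    }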

398603.768 MB of space is free.

Also, the architecture is TNS/X NonStop. I don’t think Python currently supports it, though I may be wrong. But we are trying to bring Python up on this platform, so any help of any sort would be appreciated.

I don’t know, but I recommend that you answer my other questions, and I suggest that you try a checkout from GitHub.

--with-system-ffi --prefix=/usr are the configure options; the OS that we use is a wrapper on top of Linux.

I will try a git checkout from the source repository, but wouldn’t it all be the same, since I downloaded 3.11.0 from the official website itself?

Also, what is the significance of the file_rule function in Parser/parser.c that I mentioned above?

I don’t know that part of the compiler, sorry. (If someone else is reading this who does, please don’t be quiet!)

But since you have a lot of memory available, and I’ve never heard of your platform before, I recommend that you look into the possibility of a compiler or C runtime bug – even if it’s a wrapper over Linux, maybe it’s an older version of Linux or has an older version of GCC or clang? Or an unusual malloc library (given that you’re apparently seeing a malloc failure)?
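
One quick check, if it helps: a small probe compiled with the same toolchain can show whether malloc starts failing far below the amount of memory the OS claims is free. This is only a rough sketch; the doubling sizes and the 1 GiB cap are arbitrary choices.

    /* Rough malloc probe: keep doubling the allocation size and touch the
     * memory, to see where (if anywhere) allocations start to fail. The
     * starting size and the 1 GiB cap are arbitrary. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int
    main(void)
    {
        size_t size = 1024;                    /* start at 1 KiB */
        const size_t limit = (size_t)1 << 30;  /* stop after 1 GiB */

        while (size <= limit) {
            void *p = malloc(size);
            if (p == NULL) {
                printf("malloc failed at %zu bytes\n", size);
                return 1;
            }
            memset(p, 0, size);                /* force the pages to be touched */
            free(p);
            printf("malloc of %zu bytes succeeded\n", size);
            size *= 2;
        }
        printf("all allocations up to %zu bytes succeeded\n", limit);
        return 0;
    }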

I still recommend checking out from GitHub for two reasons:

  • To ensure you’re not dealing with a corrupt tarball (or broken unzip/tar utility) – you didn’t say whether you verified the checksum
  • To see if possibly something changed in 3.11.1

Wait that can’t be right. That would be nearly 400 GB of free memory. Is this an absolute monster of a machine? Or is that a typo? Or perhaps your system is lying to you? Or are you quoting free disk space?

Oh wait it’s the parser, not the compiler. You may have to take into account the possibility that it’s a stack overflow. Try increasing the C stack size (sorry, not a Linux user, no idea how to do that).

I have seen test_largefile fail on our platform 9 out of 10 times that we have run it. What is the maximum space that test_largefile might require? I was quoting free disk space, and yes, it is a huge computing machine with a lot of processes running on it. I will confirm the RAM size shortly.

Currently we are using 32 MB as the stack size. Will this be enough?
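
For reference, here is a minimal sketch of how the limit could be read and raised from C, assuming the POSIX getrlimit/setrlimit interface is available on our platform (the 64 MiB value is only an example):

    /* Minimal sketch for reporting and raising the soft stack limit,
     * assuming the platform provides POSIX getrlimit()/setrlimit(). */
    #include <stdio.h>
    #include <sys/resource.h>

    int
    main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_STACK, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        printf("stack soft limit: %llu, hard limit: %llu\n",
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);

        rl.rlim_cur = 64ULL * 1024 * 1024;     /* ask for 64 MiB, as an example */
        if (rl.rlim_cur > rl.rlim_max) {
            rl.rlim_cur = rl.rlim_max;         /* cannot exceed the hard limit */
        }
        if (setrlimit(RLIMIT_STACK, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }
        printf("new soft limit: %llu\n", (unsigned long long)rl.rlim_cur);
        return 0;
    }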

That’s an issue unrelated to the MemoryError in parser.c from _freeze_module. I would just skip the test_largefile test. If you need to know, just read the source for that test; that’s what I would have to do in order to answer it.

(Honestly, if you’re seeing a crash in _freeze_module, I’m not sure how you get to run test_largefile?)

I meant that test_largefile failed for our other Python builds, like 3.6, and yes, it skips the test like you mentioned.

We haven’t yet built 3.11, as the build fails.