OK, this worries me a lot. After reading the documentation for shared memory (multiprocessing.shared_memory — Shared memory for direct access across processes — Python 3.11.1 documentation), I had been planning how to implement software for fast data exchange between the processes controlling a robot. Sharing numpy arrays between those processes looked like the ideal way to do this.
However, by chance I stumbled over this bug before I had even written the first line of code:
resource tracker destroys shared memory segments when other processes should still have valid access · Issue #82300 · python/cpython · GitHub — and its impact looks terrible!
How can it be that this bug has existed for four major versions and is not even mentioned in the docs?
I made a quick test:
- process A creates a shared memory and keeps running
- process B accesses the memory, does something and closes it, but does not unlink, then exits
- process C tries to access the memory: it is not around any more, because it got cleaned up against my will when process B exited
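The three steps above can be sketched as a single script (a minimal repro, not production code): the main interpreter plays process A and C, and a separately launched interpreter plays process B. I use `subprocess` rather than `multiprocessing.Process` so that B really is an independent process with its own resource tracker; whether the segment survives depends on your Python version and platform.

```python
# Minimal repro sketch of the A/B/C scenario (POSIX only).
# On affected Python versions, process B's resource tracker
# unlinks the segment when B exits, so process C cannot attach.
import subprocess
import sys
from multiprocessing import shared_memory

# Process A: create a segment and keep running.
shm_a = shared_memory.SharedMemory(create=True, size=16)

# Process B: an independent interpreter attaches, does something,
# closes the segment, and exits -- it never calls unlink().
child = f"""
from multiprocessing import shared_memory
shm = shared_memory.SharedMemory(name={shm_a.name!r})
shm.close()
"""
subprocess.run([sys.executable, "-c", child], check=True)

# Process C: try to attach again.
try:
    shm_c = shared_memory.SharedMemory(name=shm_a.name)
    shm_c.close()
    outcome = "survived"
except FileNotFoundError:
    outcome = "destroyed"  # the bug: B's exit unlinked the segment
print("segment", outcome)

# Clean up from A's side.
shm_a.close()
if outcome == "survived":
    shm_a.unlink()
```

On an affected Linux build this prints `segment destroyed` (often alongside a "leaked shared_memory objects" warning from B's resource tracker); on a fixed build it prints `segment survived`.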
In my opinion, this bug is so bad that it basically makes using shared memory on Linux impossible. Why is it not mentioned in the documentation? Is there a good way to work around it (one that does not involve compiling a modified version of Python myself)?
I have to say I am truly shocked about that.
UPDATE: OK, it seems there is a patch - I tested it and it seems to work. It is beyond me why, after all this time, this is not mentioned in the documentation or even implemented as the default behaviour.
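For anyone hitting the same wall without wanting to patch Python: one workaround that circulates for this bug is to have the *attaching* process manually unregister the segment from its own resource tracker, so that the tracker does not unlink the segment on exit. This is a sketch under assumptions, not an official API - it relies on the undocumented `multiprocessing.resource_tracker` module and the private `shm._name` attribute, so it may break between versions:

```python
# Workaround sketch (POSIX only, relies on undocumented internals):
# the attaching process unregisters the segment from its resource
# tracker before exiting, so the tracker leaves the segment alone.
import subprocess
import sys
from multiprocessing import shared_memory

# "Process A": create the segment and keep it alive.
shm_a = shared_memory.SharedMemory(create=True, size=16)
shm_a.buf[:5] = b"hello"

# "Process B": an independent interpreter attaches, unregisters,
# closes (but does not unlink), and exits.
child = f"""
from multiprocessing import shared_memory, resource_tracker
shm = shared_memory.SharedMemory(name={shm_a.name!r})
# The workaround: stop B's resource tracker from tracking (and
# later unlinking) this segment. shm._name is a private attribute.
resource_tracker.unregister(shm._name, "shared_memory")
shm.close()
"""
subprocess.run([sys.executable, "-c", child], check=True)

# "Process C": attach again; without the unregister call above,
# the segment may already have been unlinked when B exited.
shm_c = shared_memory.SharedMemory(name=shm_a.name)
survived = bytes(shm_c.buf[:5]) == b"hello"
print("segment survived:", survived)

shm_c.close()
shm_a.close()
shm_a.unlink()  # A deliberately destroys the segment at the end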