Should SharedMemory.close() be called explicitly, or should we just rely on __del__()?

Hi community. I’m using multiprocessing.shared_memory.SharedMemory to transfer tensors between processes. I tried to encapsulate it in a Block class, but a segmentation fault occurs because the SharedMemory.__del__() method closes the memoryview. You can reproduce it with this snippet:

import numpy as np
from multiprocessing.shared_memory import SharedMemory

class Block(object):
    def __init__(self, nbytes, shape, dtype):
        self._nbytes = nbytes
        self._shape = shape
        self._dtype = dtype

    def get_ndarray(self):
        # shm is a local: it is collected when this method returns, and its
        # __del__() closes the mmap, leaving the returned ndarray dangling.
        shm = SharedMemory(name=self._shm_name)
        return np.ndarray(self._shape, dtype=self._dtype, buffer=shm.buf)

    def set_ndarray(self, arr):
        # Same problem here: shm is not retained after this method returns.
        shm = SharedMemory(create=True, size=self._nbytes)
        shm_arr = np.ndarray(self._shape, dtype=self._dtype, buffer=shm.buf)
        shm_arr[:] = arr[:]
        self._shm_name = shm.name

arr = np.random.rand(1, 100)
block = Block(arr.nbytes, arr.shape, arr.dtype)
block.set_ndarray(arr)
arr_on_shm = block.get_ndarray()
print(f'arr_on_shm: {arr_on_shm}')  # use-after-free: crashes here

I know it’s more like a misuse than a bug. My question is: since explicitly calling the SharedMemory.close() method is emphasized in the documentation, should we also be calling it implicitly in the SharedMemory.__del__() method? Or could we give users more of a hint instead of just letting them crash with a segmentation fault? Thank you!

This doesn’t sound like an enhancement idea for Python. Please use the #users category, thanks.

OK, moved.

I don’t know much about multiprocessing, but as a general rule, you should never rely on __del__ and should always prefer an explicit close operation (or a context manager / with statement).
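
For example, here’s a minimal sketch using contextlib.closing, which guarantees close() is called even if an exception is raised (SharedMemory itself doesn’t act as a context manager, at least in the versions I’ve checked, so closing() is one way to get with-statement behaviour):

from contextlib import closing
from multiprocessing.shared_memory import SharedMemory

# closing() calls shm.close() when the with block exits,
# even if an exception is raised inside it.
with closing(SharedMemory(create=True, size=8)) as shm:
    shm.buf[0] = 42  # work with the memory while the handle is open

# The handle is closed here; the block itself persists until unlink().
shm.unlink()  # the creating process normally removes the block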

Since you say that the documentation emphasizes explicitly calling the SharedMemory.close() method, I think that it is likely to be a good idea to call the close method rather than hope for the destructor method to be called at the right time.

Is there a reason you think the docs may be wrong?

By the way, even if this is a misuse of the SharedMemory, a segfault is Bad with a capital B. You should report this as a bug. Is it numpy or Python segfaulting?

Hi Steven, I know we shouldn’t rely on the __del__() method, and I’ve never taken that to be a big deal, but sometimes it behaves unexpectedly, as in the code I posted.

When I see the documentation emphasizing that users should call SharedMemory.close() explicitly, I take it to mean that I should manage the lifecycle manually. But in practice it sits somewhere between automatic garbage collection and manual management, and that’s the part of the docs I find vague.

Back to this problem: I think the reason is that the SharedMemory.__del__() method calls the SharedMemory.close() method, which releases the memoryview and closes the mmap: cpython/shared_memory.py at main · python/cpython · GitHub

But a numpy.ndarray that takes the memoryview from SharedMemory as its buffer (a typical use case in the official documentation) doesn’t know that the memoryview and mmap have been closed. So when users try to access that ndarray, the program crashes with a segfault. If I remove lines 226-231 from the aforementioned file, the problem disappears, but I’m not sure whether that would lead to a memory leak.

For now I think it’s more of a Python problem, since numpy has no way of knowing where the buffer came from.
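
Here is a stripped-down version of the failure mode, just to illustrate that chain (whether you get a segfault, garbage, or something that appears to work may depend on the platform and numpy version; the block is also never unlinked here, since this is only a demo):

import numpy as np
from multiprocessing.shared_memory import SharedMemory

def make_view():
    shm = SharedMemory(create=True, size=8)
    # shm is a local: once this function returns, __del__() -> close()
    # releases the memoryview and closes the mmap behind the view.
    return np.ndarray((1,), dtype=np.float64, buffer=shm.buf)

view = make_view()
print(view[0])  # use-after-free: may segfault or print garbage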

The point about relying on __del__ is that you shouldn’t assume that it’ll be called promptly, or that it’ll be called at all; but if it IS called, it’s correct for resources to be released. So I would say the correct thing to do is to retain a reference to the SharedMemory until you know that you won’t be using it any more (most likely as an attribute of the Block), with a context manager to guarantee disposal.

Relying on __del__ can cause resource leaks in certain circumstances, but refusing to release resources there would also lead to leaks, and probably much more common ones. Retaining the objects in question is usually the easiest way to indicate that you’re still using the underlying memory.
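
An untested sketch of what I mean (the attribute and method names here are just examples, not anything from the library):

import numpy as np
from multiprocessing.shared_memory import SharedMemory

class Block:
    def __init__(self, nbytes, shape, dtype):
        self._nbytes = nbytes
        self._shape = shape
        self._dtype = dtype
        self._shm = None  # retained handle keeps the mmap alive

    def set_ndarray(self, arr):
        self._shm = SharedMemory(create=True, size=self._nbytes)
        shm_arr = np.ndarray(self._shape, dtype=self._dtype,
                             buffer=self._shm.buf)
        shm_arr[:] = arr[:]
        self._shm_name = self._shm.name

    def get_ndarray(self):
        if self._shm is None:
            # attaching from another process: keep the handle around too
            self._shm = SharedMemory(name=self._shm_name)
        # the returned view is only valid while self._shm stays open
        return np.ndarray(self._shape, dtype=self._dtype,
                          buffer=self._shm.buf)

    def close(self):
        if self._shm is not None:
            self._shm.close()
            self._shm = None

    # context-manager support, so "with Block(...) as b:" guarantees disposal
    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.close()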

Thank you for the reply! I fully agree that __del__ is necessary to avoid leaks. I’m wondering whether there’s a proper way to avoid the segfault, or at least to give users some hint when it’s about to happen? Maybe I hit a corner case, but it really took me some time to figure out 😂.

When you construct it, assign it to self.mem or something. That should work!

Yeah, it works, but it requires users to use it correctly. I’m wondering whether it’s possible to prevent this kind of misuse at the library level.

In general, if you’re using something, retain a reference to it. The part that’s weird is that you can get a view that you then hand off to numpy, and which doesn’t seem to hold a reference back to the original shared memory object. That IS a bit odd, but I’m not sure whether it’s possible to bind the returned ndarray to the SharedMemory in any useful way, so the easiest would be to bind it to the Block instead.

You shouldn’t get a seg fault. A seg fault is an unambiguous sign of a serious bug in the implementation, not in your code. Your code is only revealing the existence of that bug.

The only question is whether it is a bug in numpy or in the Python interpreter. Please report it as a bug so it can be fixed.

Hi Steven, thx for the advice! I reported the bug to the numpy community: BUG: Got Segmentation Fault when use-after-free a ndarray using SharedMemory as its buffer · Issue #23305 · numpy/numpy · GitHub