Why is CPython multiprocessing using named semaphores?

Hello everyone,

According to cpython/Modules/_multiprocessing/semaphore.c at a03ec20bcdf757138557127689405b3a525b3c44 · python/cpython · GitHub, multiprocessing.SemLock always uses a named semaphore, created with sem_open. In regular glibc environments this may not be a relevant factor, as the number of named semaphores is only limited by system resources. However, when using musl (as done e.g. by Alpine), only the POSIX-guaranteed minimum of 256 named semaphores per process is supported.

This problem can be seen in "Python multiprocessing No file descriptors available error inside docker alpine" on Stack Overflow; in effect it limits the number of Queues available on these platforms. So is there a reason for CPython to use named POSIX semaphores, especially as their names are randomly generated anyway (Lib/multiprocessing/synchronize.py, line 122)?
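
For illustration, here is a minimal sketch of the kind of code that runs into the limit (the exact failure point and error message depend on the libc; on glibc the loop normally completes):

```python
# Each multiprocessing.Lock (and each Queue, which uses such locks and
# semaphores internally) opens one named POSIX semaphore via sem_open.
# On a musl-based system such as Alpine this loop is expected to fail with
# an OSError once the per-process limit (256, the POSIX-guaranteed minimum)
# is reached; on glibc it normally runs to completion.
import multiprocessing

locks = []
try:
    for _ in range(1000):
        locks.append(multiprocessing.Lock())
except OSError as exc:
    print(f"failed after creating {len(locks)} locks: {exc}")
```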

Note that I am not particularly an expert in C or POSIX semaphores, but maybe in-memory (unnamed) semaphores would be an option too, since they would not require a call to sem_open.

Thanks for your opinions,
prauscher

How can you pass anonymous semaphores between unrelated processes? See the SemLock.rebuild method.
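
For reference, this is roughly what the relevant code in Lib/multiprocessing/synchronize.py does (a paraphrased sketch, not a verbatim copy of the CPython source): the pickled state is a small tuple that includes the semaphore's name, and the receiving process re-opens the semaphore from that state.

```python
# Paraphrased sketch of SemLock pickling in Lib/multiprocessing/synchronize.py
# (not a verbatim copy of the CPython source).
import sys
import _multiprocessing
from multiprocessing import context


class SemLock:
    def __getstate__(self):
        # Only picklable while a child process is being spawned.
        context.assert_spawning(self)
        sl = self._semlock
        if sys.platform == 'win32':
            h = context.get_spawning_popen().duplicate_for_child(sl.handle)
        else:
            h = sl.handle
        # The semaphore's name travels along with the pickled state.
        return (h, sl.kind, sl.maxvalue, sl.name)

    def __setstate__(self, state):
        # In the child, _rebuild() re-opens the semaphore; on POSIX this
        # works because the name identifies it across processes.
        self._semlock = _multiprocessing.SemLock._rebuild(*state)
```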


This is a platform problem. Musl, or at least Alpine, should do something about their limitation.

As Antoine noted, these are passed between processes via pickle, which uses that rebuild method to reconstruct them in the other process.
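
A small demonstration of that path (a sketch assuming the "spawn" start method, where the child is a fresh interpreter rather than a fork): the lock is pickled as part of the Process arguments and rebuilt, by name, on the other side.

```python
# Sketch: with the "spawn" start method the child does not inherit the
# parent's memory, so the Lock is pickled along with the Process arguments
# and rebuilt in the child from the name of its underlying semaphore.
import multiprocessing as mp


def worker(lock):
    with lock:
        print("child acquired the lock that was pickled and rebuilt")


if __name__ == "__main__":
    ctx = mp.get_context("spawn")
    lock = ctx.Lock()
    p = ctx.Process(target=worker, args=(lock,))
    p.start()
    p.join()
```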


Thanks for pointing out the rebuild via pickle to me. What is not yet clear to me is why a semaphore needs to be pickled for transfer to a child process at all. As far as I know, multiprocessing objects that use semaphores, like Queue etc., may only be inherited from the parent process, not transported via IPC, or am I missing something here?

In this case, unnamed semaphores would be enough, right?

I do understand how this can be seen as an Alpine/musl problem, but I also get their point of sticking to POSIX, which only guarantees up to 256 named semaphores. As a Python developer using multiprocessing on Alpine, I feel a bit caught between two stools here :wink:

Semaphores and queues can be transported over queues, i.e. IPC.

Well, I understand that musl wants to keep to a dirt-simple implementation, but that comes with consequences.

That said, there’s a good point being made on the musl mailing list:

Named semaphores are incredibly inefficient. On a system with reasonable normal page size of 4k, named sems have 25500% overhead vs an anon sem. On a system with large 64k pages, that jumps to 409500% overhead. Moreover, Linux imposes limits on the number of mmaps a process has (default limit is 64k, after consolidation of adjacent anon maps with same permissions), and each map also contributes to open file limits, etc.

Even just using the musl limit of 256 named sems (which is all that POSIX guarantees you; requiring more is nonportable), you’re wasting 1M of memory on a 4k-page system (16M on a 64k-page system) on storage for the semaphores, and probably that much or more again for all of the inodes, vmas, and other kernel bookkeeping. At thousands of named sems, it gets far worse.

glibc does the same thing, allocating a separate page per named semaphore.
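
For what it's worth, the quoted figures seem to assume a 16-byte sem_t and one dedicated page mmap'd per named semaphore; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the quoted numbers, assuming
# sizeof(sem_t) == 16 bytes (the size the percentages imply) and one full
# page allocated per named semaphore.
page_4k, page_64k, sem_size = 4096, 65536, 16

print((page_4k - sem_size) / sem_size * 100)   # 25500.0  -> "25500% overhead"
print((page_64k - sem_size) / sem_size * 100)  # 409500.0 -> "409500% overhead"
print(256 * page_4k)                           # 1048576  -> ~1M on a 4k-page system
print(256 * page_64k)                          # 16777216 -> ~16M on a 64k-page system
```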