Unbuffered queue

With the availability of free-threaded Python, it seems likely that threading will become more prominent in the ecosystem, so better tools to manage concurrency would be a good idea.

Currently, a major missing building block at the mid level is the unbuffered (or rendezvous) queue: Python has the very low-level tools (semaphores, mutexes, barriers) in threading and higher-level constructs like queues, but because Queue(maxsize=0) is an unbounded queue, it cannot support rendezvous.
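
For example, with the existing Queue a put() succeeds immediately even when nobody is ready to receive, so there is no way to get handoff semantics out of it:

```python
import queue

q = queue.Queue(maxsize=0)  # maxsize=0 means "unbounded", not "no buffer"
q.put("hello")              # returns immediately, even with no consumer waiting
print(q.qsize())            # 1 -- the item just sits in the buffer
```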

As the name indicates, an unbuffered queue has no buffer: a producer and a consumer perform the handoff synchronously. It is thus both an exchange of data between threads and a synchronisation primitive, not unlike a Barrier(2) except directional (it has a producer side and a consumer side). This gives it natural backpressure and limited resource consumption (e.g. no work producing items nobody is ready to consume), possibly at some cost in latency.
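
To make the semantics concrete, here is a minimal sketch of such a rendezvous handoff built from primitives already in the stdlib (the class name and API are placeholders for illustration, not a proposal for the actual interface):

```python
import queue
import threading

class UnbufferedQueue:
    """Sketch of a rendezvous queue: put() does not return until a
    consumer has actually received the item."""

    def __init__(self):
        # At most one "offer" (item plus handoff event) is pending at a time.
        self._slot = queue.Queue(maxsize=1)

    def put(self, item):
        handed_off = threading.Event()
        self._slot.put((item, handed_off))  # blocks while another offer is pending
        handed_off.wait()                   # rendezvous: wait for a consumer to take it

    def get(self):
        item, handed_off = self._slot.get()  # blocks until a producer offers an item
        handed_off.set()                     # release the waiting producer
        return item
```

A real version would also need things like timeouts and close/shutdown semantics, which is where most of the API design questions would be.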

Unbuffered queues are probably most famous from Go, where they are the default mode for channels, but they’re also used in languages like Kotlin and Rust.

Such a queue could live either in queue alongside the existing buffered queues, or in threading alongside the lower-level primitives.

Does it need to be in the stdlib, at least initially? I’m really excited to see more “structured concurrency” tools, but my impression is that there are a lot of possibilities and it’s not yet clear which will be the most important[1]. So for now, I feel as though we should be putting effort into experimenting in 3rd party modules, rather than jumping straight to putting things in the stdlib.


  1. a bit like with asyncio, where experience with 3rd party libraries like Trio resulted in a rethink of some parts of the stdlib API design


I have a Channel class in my cs.queues module on PyPI, which is one of these. Feel free to use it.
