Asyncio synchronization

Since all tasks run in a single thread, in what cases are they needed?

By nature, asynchronous tasks can overlap - one may pause for I/O and the event loop begins running another task instead. So even though everything is in a single thread (there is no parallelism), if two tasks have some kind of dependency or share some state then synchronization is still required to ensure things happen in a predictable order.

You do tend to need synchronization less often (imo), but it’s still a kind of concurrent programming, so unless all your tasks are isolated from each other you may still have to synchronize to make things deterministic.


According to the documentation (see Concurrency and Multithreading), as far as I can read, that cannot happen :roll_eyes:

Locks, as used with threads, make no sense in pure async code.

You would use state machines to make sure that streams of async events are handled in an appropriate way.

I’ve never needed locks in asyncio code, but that doesn’t mean the need can’t arise. However, they are definitely unnecessary if there are no await points in the locked block. It would have to be something like:

async def task():
    async with lock:            # an asyncio.Lock guarding some resource
        use_resource()
        await something
        use_resource_again()    # the same resource, after the await

where you need a guarantee that the resource won’t be used by any other task while you’re awaiting the thing in the middle. Not an impossible situation, to be sure, but certainly unusual.

they [asyncio's primitives] are different to those [threading's] and should only be used for tasks within a single thread

that makes the most sense for tasks in a single thread, and the ‘Not thread-safe’ note just seems confusing

That’s true, but there’s no particular reason you can’t mix asyncio and threads - there are several viable hybrid systems. So knowing that these locks cannot be safely used across threads is important.
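One hybrid pattern, sketched under my own naming: a worker OS thread hands a signal to the event loop. Because `asyncio.Event` is not thread-safe, the thread must not call `done.set()` directly; it schedules the call on the loop with `call_soon_threadsafe`, which is documented as safe to call from another thread.

```python
import asyncio
import threading

async def main() -> str:
    loop = asyncio.get_running_loop()
    done = asyncio.Event()

    def worker() -> None:
        # Runs in another OS thread; it must not touch the Event directly,
        # so it asks the loop to call done.set() on its own thread.
        loop.call_soon_threadsafe(done.set)

    threading.Thread(target=worker).start()
    await done.wait()
    return "signalled from the worker thread"

result = asyncio.run(main())
print(result)
```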

The documentation for asyncio Synchronization Primitives seems comprehensive.

asyncio primitives are not thread-safe, therefore they should not be used for OS thread synchronization (use threading for that)

For instance, when implementing an event-driven architecture, you can make use of the synchronization primitive called Event. However, it’s important to note that you cannot trigger a coroutine in another thread using this synchronization primitive. In such cases, you would need to employ threading synchronization primitives, e.g., threading.Event.
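A small sketch of `asyncio.Event` used correctly, i.e. entirely within one event loop (the `waiter`/`producer` names and the `order` list are mine, added so the sequencing is visible):

```python
import asyncio

order = []

async def waiter(event: asyncio.Event) -> None:
    order.append("waiting")
    await event.wait()          # suspends until the event is set
    order.append("received")

async def producer(event: asyncio.Event) -> None:
    order.append("setting")
    event.set()                 # wakes the waiter; safe only on this loop's thread

async def main() -> None:
    event = asyncio.Event()
    await asyncio.gather(waiter(event), producer(event))

asyncio.run(main())
print(order)  # ['waiting', 'setting', 'received']
```

Calling `event.set()` from a different OS thread, as the post notes, is where this breaks down and `threading.Event` (or `call_soon_threadsafe`) is needed instead.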

Fine, I’ll try a different word - asynchronous tasks can interleave. It’s literally the whole point of using asynchronous I/O - a task that is blocked (I’ll call it task A) lets the event loop resume, and the event loop starts executing another one (say task B). At a point in the future the stopped task A is resumed because the disk or network has produced whatever it was waiting for. The effect of this, from the perspective of task A itself, is that task B has executed after A started and before A finished. If A and B share a resource, you may need to control access to it.
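The A/B story above can be traced directly. In this sketch `asyncio.sleep(0)` plays the role of the blocking I/O: it suspends task A, the loop runs task B to completion, and then A resumes — so from A's perspective, B executed in the middle of it.

```python
import asyncio

trace = []

async def task_a() -> None:
    trace.append("A started")
    await asyncio.sleep(0)   # stand-in for I/O: yields control to the loop
    trace.append("A resumed")

async def task_b() -> None:
    trace.append("B ran while A was suspended")

async def main() -> None:
    await asyncio.gather(task_a(), task_b())

asyncio.run(main())
print(trace)
```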

That, and in any concurrent programming the equivalent of Events/condition variables/binary semaphores and similar are just plain useful in whatever language. So it’s nice to have asyncio versions even if they are less critical.

All that said, this is also good advice imo:

If tractable for a given set of requirements, an architecture that explicitly models state transitions can be a lot easier to reason about.


This is a distinction without a difference. Unless you have fewer threads than CPU cores, threads interleave too, and it’s no different.

The important difference between asyncio and threading is that with threads, context switching may occur at any time, whereas with asyncio, it happens only at an await point. Thus, the need for locks with threaded code can happen without a visible await point, but the need for locks in asyncio code happens only when you can see a potential context switch inside the guarded block.

For example, this line of code (on its own) cannot require a lock in asyncio tasks:

global jobs_done; jobs_done += 1

But in threaded code, it might. The bytecode for this operation is not truly in-place (for integers): it first performs the addition and then stores the result back. Will it ever break your code? Maybe not. It also depends on your Python interpreter (CPython may behave differently from PyPy, which may behave differently from Jython, etc.) and version (this could become safe, or unsafe, in any update). Does this need a lock? You can certainly hide the lock by doing this with a thread-safe queue or something, but does the queue then need a lock? Probably, to be safe. Hence the higher-throughput non-thread-safe options provided in asyncio, which can make more assumptions.
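You can see the separate load/add/store steps with the stdlib `dis` module. This is CPython-specific, and the exact opcode names vary between versions (e.g. `INPLACE_ADD` vs `BINARY_OP`), but the shape — a load, an add, then a store a thread switch could land between — is the point:

```python
import dis
import io

def bump() -> None:
    global jobs_done
    jobs_done += 1

# Capture the disassembly as text rather than printing straight to stdout
buf = io.StringIO()
dis.dis(bump, file=buf)
listing = buf.getvalue()
print(listing)  # shows LOAD_GLOBAL ... add ... STORE_GLOBAL as distinct steps
```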

The rest of your post, I agree with; the importance is the interleaving, aka “concurrency”. Parallelism is a higher-throughput form of concurrency than pure interleaving, but both are concurrent.
