One of the main issues with attempting to dynamically switch between the two is that asyncio.run
is really slow compared to a regular synchronous function call:
$ python3 -m timeit -s "def f(): pass" "f()"
5000000 loops, best of 5: 41 nsec per loop
$ python3 -m timeit -s "from asyncio import run" -s "async def cr(): pass" "run(cr())"
5000 loops, best of 5: 69.4 usec per loop
Note the difference in units: nanoseconds for a regular synchronous function call, microseconds to start an event loop, run a coroutine, and then shut the event loop down again.
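To make that cost concrete, here is a minimal sketch of the kind of synchronous facade that pays the asyncio.run overhead on every call (fetch_data and fetch_data_async are hypothetical names, not any particular library's API):

import asyncio

async def fetch_data_async(key: str) -> str:
    # Stand-in async implementation; a real one would await network I/O.
    return f"value for {key}"

def fetch_data(key: str) -> str:
    # Synchronous facade: starts an event loop, runs the coroutine, and
    # shuts the loop down again on every single call.
    return asyncio.run(fetch_data_async(key))

print(fetch_data("example"))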
In the other direction, the overhead of using asyncio.to_thread to run synchronous APIs in an async context is lower than that of using asyncio.run in a synchronous context, since the event loop sets up and manages a thread pool executor that lives as long as the event loop does. It's still not negligible, though: for a do-nothing function, the benchmark below is roughly one order of magnitude slower than the bare asyncio.run call above, rather than the three orders of magnitude separating asyncio.run from a plain function call:
$ python3 -m timeit -s "from asyncio import run, to_thread" -s "def f(): pass" "run(to_thread(f))"
500 loops, best of 5: 650 usec per loop
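For the reverse direction, a sketch of an async facade that offloads a blocking implementation with asyncio.to_thread might look like this (read_config and read_config_async are again hypothetical names):

import asyncio

def read_config(path: str) -> str:
    # Stand-in blocking implementation; a real one would do file or network I/O.
    return f"contents of {path}"

async def read_config_async(path: str) -> str:
    # Async facade: dispatches the blocking call to the event loop's default
    # thread pool executor so the loop itself is never blocked.
    return await asyncio.to_thread(read_config, path)

print(asyncio.run(read_config_async("app.toml")))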
Thus, library authors who want to offer native support for both sync and async usage often end up gritting their teeth and duplicating their API surface in order to offer the best possible performance for both usage models.
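In practice that duplication tends to look something like the following sketch, where the loop-agnostic logic is shared but the sync and async I/O paths are written out twice (ExampleClient, get, and aget are illustrative names rather than any real library's API):

import asyncio

class ExampleClient:
    def _build_request(self, key: str) -> str:
        # Loop-agnostic logic can be shared between the two surfaces.
        return f"GET /items/{key}"

    def get(self, key: str) -> str:
        # Sync surface: a plain function call with no event loop involved.
        request = self._build_request(key)
        return f"sync response to {request!r}"

    async def aget(self, key: str) -> str:
        # Async surface: mirrors get(), but awaits its (simulated) I/O.
        request = self._build_request(key)
        await asyncio.sleep(0)  # stand-in for real awaitable I/O
        return f"async response to {request!r}"

client = ExampleClient()
print(client.get("42"))
print(asyncio.run(client.aget("42")))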