Asyncio for files

These coroutines could be implemented within the asyncio module, which already defines both high-level and low-level API layers.
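
As a minimal sketch of what such a high-level coroutine could look like, assuming it simply delegates the blocking call to the event loop's default executor (the write_file name and signature are hypothetical, not an existing asyncio API):

import asyncio


async def write_file(path, data):
    """Hypothetical high-level coroutine: run the blocking write in the
    default thread pool so the event loop is not blocked."""
    loop = asyncio.get_running_loop()

    def blocking_write():
        with open(path, mode='wb') as f:
            return f.write(data)

    return await loop.run_in_executor(None, blocking_write)


# Example usage:
# asyncio.run(write_file('example.test', b'0' * 1024))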

However, to justify their presence in the asyncio package, these coroutines would need to outperform the io module; at the very least, they should not end up slower than the plain blocking approach.

For example, aiofiles is consistently slower than plain io on my machine with an SSD. See the benchmark below:

Benchmark
import asyncio
import time
import timeit

import aiofiles


async def aio():
    # aiofiles: one 1 GiB write, then 1024 writes of 1 MiB each.
    t = timeit.default_timer()
    async with aiofiles.open('aio.test', mode='wb') as f:
        await f.write(b'0' * 1024 * 1024 * 1024)

    t = timeit.default_timer() - t
    print('aio: ', t)

    t = timeit.default_timer()
    async with aiofiles.open('aio.test', mode='wb') as f:
        for i in range(1024):
            await f.write(b'0' * 1024 * 1024)

    t = timeit.default_timer() - t
    print('aio small chunks: ', t)


def io():
    # Blocking io baseline: the same two write patterns.
    t = timeit.default_timer()
    with open('io.test', mode='wb') as f:
        f.write(b'0' * 1024 * 1024 * 1024)

    t = timeit.default_timer() - t
    print('io: ', t)

    t = timeit.default_timer()
    with open('io.test', mode='wb') as f:
        for i in range(1024):
            f.write(b'0' * 1024 * 1024)

    t = timeit.default_timer() - t
    print('io small chunks: ', t)


async def run():
    # Blocking baseline first, then the aiofiles version; the pause keeps
    # the two measurements separated.
    io()

    time.sleep(1.0)

    await aio()


asyncio.run(run())
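
For comparison, a stdlib-only variant could offload the same write with asyncio.to_thread (Python 3.9+), which is conceptually similar to delegating the work to a thread pool. This is a sketch to append to the benchmark above, not part of the original measurement; the function and file name are illustrative:

async def to_thread_write():
    # Offload the single 1 GiB write to a worker thread and time it,
    # reusing the imports from the benchmark above.
    t = timeit.default_timer()

    def blocking_write():
        with open('to_thread.test', mode='wb') as f:
            f.write(b'0' * 1024 * 1024 * 1024)

    await asyncio.to_thread(blocking_write)

    t = timeit.default_timer() - t
    print('to_thread: ', t)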

Given the complexity of these libraries, their maintenance burden is too high. Furthermore, the performance gain from using the operating system's async I/O facilities might not be noticeable in Python, since CPython itself could become the bottleneck in this scenario.