AssertionError in asyncio.streams _drain_helper

I’m seeing this assert being hit in production. Is this something wrong with my code, or have I found an asyncio issue? If the latter, what would you need to track it down?

More specifically, in what context did the assertion error occur and can it be replicated? Any tracebacks would be helpful. It could also be useful to know what async libraries are being used, if any (other than asyncio, of course).

It does seem atypical, but there’s no real way to accurately diagnose it (or potentially fix it) without knowing how it occurred in the first place.

The traceback is,

  File "hypercorn/asyncio/tcp_server.py", line 81, in protocol_send
    await self.writer.drain()
  File "asyncio/streams.py", line 387, in drain
    await self._protocol._drain_helper()
  File "asyncio/streams.py", line 194, in _drain_helper
    assert waiter is None or waiter.cancelled()
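
For reference, the helper that raises is small; in CPython 3.8 (which these streams.py line numbers appear to correspond to) it reads approximately as follows. Note there is only a single _drain_waiter slot per protocol:

async def _drain_helper(self):
    if self._connection_lost:
        raise ConnectionResetError('Connection lost')
    if not self._paused:
        return
    waiter = self._drain_waiter
    assert waiter is None or waiter.cancelled()
    waiter = self._loop.create_future()
    self._drain_waiter = waiter
    await waiter

So the assert fires when a second coroutine reaches _drain_helper while the first coroutine’s waiter is still pending.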

The stream writer (self.writer below),

<StreamWriter transport=<asyncio.sslproto._SSLProtocolTransport object at 0x7f3db1520460> reader=<StreamReader waiter=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7f3db1585940>()]> transport=<asyncio.sslproto._SSLProtocolTransport object at 0x7f3db1520460>>>

and the protocol and waiter,

<asyncio.streams.StreamReaderProtocol object at 0x7f3db19db400>
<Future pending cb=[<TaskWakeupMethWrapper object at 0x7f3db19d4ca0>()]>

The code that triggers the error (rarely),

self.writer.write(event.data)
await self.writer.drain()

I can’t reproduce it and only see it in production occasionally.

Hmm, I’d recommend opening an issue on bugs.python.org. Since it’s not reproducible and only happens occasionally, it could well be a subtle race condition. Be sure to add “yselivanov” and “asvetlov” (maintainers of asyncio) to the nosy list. In the meantime, I’ll look further into it and see if I can find a potential cause for the issue. Thanks for bringing it to our attention.

Additional information, such as how the StreamWriter object is being instantiated, the surrounding context in your protocol_send(), and the platform used, may be helpful.

Edit: Also, be sure to mention the specific Python version used.

I figured this out. It turns out I had two tasks writing to the stream and then awaiting the drain. In the unlikely case that one task was waiting to drain and the second then tried to as well, this error would be raised. Hence the solution was to wrap the write and drain calls in a lock, as sketched below.
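
A minimal sketch of that fix, assuming a single StreamWriter shared between tasks (the Sender class and send() method here are illustrative, not Hypercorn’s actual code):

import asyncio

class Sender:
    def __init__(self, writer: asyncio.StreamWriter) -> None:
        self.writer = writer
        self._lock = asyncio.Lock()  # serializes write + drain pairs

    async def send(self, data: bytes) -> None:
        # Only one task may be awaiting drain() at a time; a second
        # concurrent drain() trips the assert in _drain_helper.
        async with self._lock:
            self.writer.write(data)
            await self.writer.drain()

Strictly it’s the concurrent drain() calls that race, but holding the lock across both calls also keeps each task’s write from interleaving with another’s.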

I think https://github.com/python/cpython/pull/19240 (https://bugs.python.org/issue40124) will help clarify this for future users.

Thanks for following up on the issue and reporting back with your solution! I’ll take a look at the bpo issue and PR.