Add `task_factory` to asyncio.start_server and friends

Should we add a `task_factory` parameter to `asyncio.start_server` and `EventLoop.create_server`?

These are used to accept connections and spawn tasks to handle them. From what I can see in the code, they ultimately delegate to EventLoop.create_task. But since we have TaskGroups now, it’d be useful to be able to pass in a taskgroup.create_task factory instead.

This would enable any web framework to handle graceful shutdown in an easier manner, and just seems like a useful thing to do.

The core idea here is a good one.

It’s already possible to use these with task groups by passing a synchronous callback instead of a coroutine, which then spawns the task in the group:

    async with asyncio.TaskGroup() as tg:
        def handle_in_taskgroup(r, w):
            tg.create_task(handle_connection(r, w))

        server = await asyncio.start_server(handle_in_taskgroup, port=port)

(This is unsafe unless eager task execution is turned on)
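For reference, eager task execution is available in Python 3.12+ via `asyncio.eager_task_factory`. A minimal sketch of the snippet above with it enabled (the port number and echo handler are purely illustrative):

```python
import asyncio

async def handle_connection(reader, writer):
    # Illustrative handler: echo one line back, then close.
    data = await reader.readline()
    writer.write(data)
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    # Python 3.12+: with the eager task factory installed,
    # tg.create_task() runs the handler up to its first await
    # synchronously, inside the accept callback, rather than
    # scheduling it for a later loop iteration.
    asyncio.get_running_loop().set_task_factory(asyncio.eager_task_factory)

    async with asyncio.TaskGroup() as tg:
        def handle_in_taskgroup(r, w):
            tg.create_task(handle_connection(r, w))

        server = await asyncio.start_server(handle_in_taskgroup, port=8888)
        async with server:
            await server.serve_forever()
```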

Certainly it would be useful to have a more convenient way of doing it. The OP’s suggested `task_factory` parameter would look like this:

    server = await asyncio.start_server(handle_connection, port=port, task_factory=tg.create_task)

Certainly simpler, but why not have a task group parameter?

    server = await asyncio.start_server(handle_connection, port=port, task_group=tg)

The obvious answer is that this is less general, but you can always fall back to the style of the first snippet if you want more flexibility. I think making the common case extra convenient is best. It also makes the documentation a bit more evocative: it suggests to readers that perhaps they could be using task groups, even if they hadn’t previously considered the possibility, so it could push people in the right direction. task_factory sounds really abstract, so I don’t think it would prompt anyone to consider task groups if they weren’t already.

For clarity, are you wanting them to inherit the behaviour of cancelling all tasks on the first exception too? Or does the server code trap BaseException? Or…

For clarity, are you wanting them to inherit the behaviour of cancelling all tasks on the first exception too?

Yes, that’s right (but of course your connection handlers can always catch and suppress exceptions if that’s desired). So task_group=... would be functionally identical to my first snippet with handle_in_taskgroup and (I believe) Tim’s task_factory suggestion. It’s also the same way that trio.serve_tcp() works (except that it can also spawn its own internal nursery, which is very nice, but I’m not suggesting that for asyncio).
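For instance, a handler can suppress its own failures so one bad connection doesn’t cancel the rest of the group. The names here are illustrative, and the always-failing handler just stands in for real per-connection logic:

```python
import asyncio

async def handle_connection(reader, writer):
    # Stand-in for real per-connection logic; this one always fails.
    raise ValueError("bad request")

async def supervised(reader, writer):
    # Catch and log handler errors so they never propagate into the
    # TaskGroup and cancel the sibling connection tasks.
    try:
        await handle_connection(reader, writer)
    except Exception as exc:
        print(f"handler failed: {exc!r}")
    finally:
        if writer is not None:
            writer.close()
```

You would then hand `supervised` (rather than `handle_connection` itself) to whatever spawns the per-connection tasks.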

Just to be even more clear(!), it doesn’t need to be the same task group that the server is running in. In fact, a common pattern I’ve found is for the server to run in an inner task group, so that during shutdown you can cancel just that one first (which makes the server stop accepting new connections but allows existing connections to continue), then sleep a while before forcibly cancelling any straggling connections that don’t shut down quickly enough.

    async with asyncio.TaskGroup() as handler_tg:
        async with asyncio.TaskGroup() as server_tg:
            for port in ports_list:
                server = await asyncio.start_server(connection_handler, port=port, task_group=handler_tg)

The complication with asyncio’s task groups is that you can’t cancel them directly, but you can always either wrap each one in its own task and cancel that, or start a task in the group that raises an exception and catch it just outside.
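A sketch of the wrap-in-its-own-task workaround, where the hour-long sleep stands in for real server work:

```python
import asyncio

async def run_group(started: asyncio.Event):
    # A TaskGroup wrapped in its own task: cancelling that task
    # cancels the whole group and everything inside it.
    async with asyncio.TaskGroup() as tg:
        tg.create_task(asyncio.sleep(3600))  # stand-in for server work
        started.set()

async def main():
    started = asyncio.Event()
    group_task = asyncio.create_task(run_group(started))
    await started.wait()
    group_task.cancel()          # cancel the wrapper task...
    try:
        await group_task         # ...which cancels every task in the group
    except asyncio.CancelledError:
        return "cancelled"
    return "finished"

print(asyncio.run(main()))  # -> cancelled
```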

This pattern is what I’ve been thinking about with aiotools.PersistentTaskGroup (which resembles the concept of Kotlin’s SupervisorScope). Many server applications are composed of a nested tree of such “task scopes”, which should be shut down gracefully in an orderly fashion.

Though what I’m still not sure about is the design of “scoped” handler interfaces for continuously streamed results and exceptions, in addition to the current loop-level fallback exception handler.

Please have a look at: Server-oriented task scope design