PEP 806: Mixed sync/async context managers with precise async marking

Hi all - after some earlier discussion, I’m pleased to present PEP 806 for discussion. It’s in most ways a very small proposal, which would make a surprisingly large difference for my code!

Feedback, questions, requests for clarification, etc. all most welcome.

Abstract

Python allows the with and async with statements to handle multiple context managers in a single statement, so long as they are all synchronous or all asynchronous, respectively. When mixing synchronous and asynchronous context managers, developers must either nest statements deeply or resort to risky workarounds such as overuse of AsyncExitStack.

We therefore propose to allow with statements to accept both synchronous and asynchronous context managers in a single statement by prefixing individual async context managers with the async keyword.

This change eliminates unnecessary nesting, improves code readability, and improves ergonomics without making async code any less explicit.


For example:

async def process_data():
    async with acquire_lock() as lock:
        with temp_directory() as tmpdir:
            async with connect_to_db(cache=tmpdir) as db:
                with open('config.json', encoding='utf-8') as f:
                    # We're now 16 spaces deep before any actual logic
                    config = json.load(f)
                    await db.execute(config['query'])
                    # ... more processing

becomes

async def process_data():
    with (
        async acquire_lock() as lock,
        temp_directory() as tmpdir,
        async connect_to_db(cache=tmpdir) as db,
        open('config.json', encoding='utf-8') as f,
    ):
        config = json.load(f)
        await db.execute(config['query'])
        # ... more processing
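
For comparison, the flat workaround available today goes through contextlib.AsyncExitStack, as mentioned in the abstract. A rough sketch, reusing the same hypothetical helpers as above:

import contextlib
import json

async def process_data():
    async with contextlib.AsyncExitStack() as stack:
        # Each manager is registered with the stack, which unwinds them
        # in reverse order on exit.
        lock = await stack.enter_async_context(acquire_lock())
        tmpdir = stack.enter_context(temp_directory())
        db = await stack.enter_async_context(connect_to_db(cache=tmpdir))
        f = stack.enter_context(open('config.json', encoding='utf-8'))
        config = json.load(f)
        await db.execute(config['query'])
        # ... more processing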
32 Likes

I’m a big +1 on this, as I think this would help to improve readability. Are the parentheses necessary, however, or could it be written as a one-liner?

The PEP proposes allowing the one-line form, including a simple with async ctx(): without parentheses.

That said, I think it will remain better style to write async with ctx(): for this case, and would only recommend using the new feature for multi-line parenthesized cases.
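
To illustrate with a placeholder ctx():

async with ctx():      # existing form; still the clearer style for one item
    ...

with async ctx():      # allowed under the PEP, but not recommended on its own
    ...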

1 Like

The PEP only gives examples of

with (
    async acquire_lock() as l,
    open("file") as f,
):
    ...

Or

with async acquire_lock() as l:
    ...

So, to clarify, would

with async acquire_lock() as l, open("file") as f:
    ...

Be allowed?

1 Like

I have a related question: what happens with with async followed by parentheses? Is the code

with async (
    foo_context() as foo,
    bar_context() as bar,
):
    ...

legal, and if so, are both foo and bar asynchronous, or only foo?

Joe Gottman

1 Like

Thanks for asking! The async keyword is specified in relation to the ast.withitem, and so this would be a SyntaxError.

If you play around with adding parens in the current with statement, you’ll find that the only valid placements are (1) around the entire list of context managers, as in with ( ... ):, or (2) around an individual context-manager-returning expression, as in with ctx1() as a, (foo().ctx_attr) as b:. The PEP doesn’t propose changing this, only allowing you to prepend async to each ast.withitem.
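
For illustration, with placeholder names (ctx1, ctx2, foo, foo_context, and bar_context are not real APIs):

# Valid today and under the PEP: parentheses around the whole list of items.
with (
    ctx1() as a,
    ctx2() as b,
):
    ...

# Also valid: parentheses around an individual context-manager-returning
# expression.
with ctx1() as a, (foo().ctx_attr) as b:
    ...

# Not valid, today or under the PEP: one async applied to a parenthesized
# group of items.
# with async (foo_context() as foo, bar_context() as bar):  # SyntaxError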

1 Like

I’m +1 on the PEP!

It’s maybe a little unfortunate that the motivating examples use file operations as examples of sync operations. I’ve already seen comments on Reddit to the tune of “hurr durr they should be using async wrappers for files instead” (which, while a complex topic, isn’t incorrect in the general sense). I imagine the main motivating cases are actually the Trio cancel-scope managers and their ilk?

Some examples I’ve encountered where I’ve needed to mix sync and async context managers are our internal library for tracing (with TRACER.span():) and the timeout managers in my Quattro library (which are modeled after the Trio ones). But I understand you can’t stick those into a PEP.

Edit: maybe examples using pytest.raises(…) and contextlib.suppress(…) would be clearer.
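
For instance, under the proposed syntax (not valid Python today), mixing contextlib.suppress() with a hypothetical async context manager might look like:

import contextlib

async def fetch_or_ignore():
    with (
        contextlib.suppress(KeyError),
        async open_async_store() as store,  # hypothetical async context manager
    ):
        return await store.get('maybe-missing-key')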

I’m a bit skeptical of this proposal because it weakens Python’s principle that async code should be visually explicit. One of the best design decisions with async/await was making asynchronous code unmistakably distinct: when you see async with, you immediately know “this can suspend execution.” The proposal blurs this by allowing with async ctx(), which looks like a sync statement with an async detail buried inside. At a glance, you lose the clear signal that this block involves async operations. The current forced nesting actually serves a purpose: it makes you think about the async/sync boundary and makes it visually obvious which scopes are async.

The motivating example showing “16 spaces deep” is already a code smell that should be refactored regardless of async: you shouldn’t have four nested context managers in the first place. If deep nesting is genuinely necessary, the explicitness of separate async with blocks is worth the verbosity because it maintains clarity about where suspension points exist. In my view the ergonomic improvement isn’t worth compromising the visual distinction that makes async code safe and maintainable.

2 Likes

I had to give this some significant thought and look at my own use, personal and professional.

I don’t personally see allowing mixing these as a positive after doing so.

I couldn’t find any examples in my own use that even came close to paralleling this, and I do think that this compromises the primary benefit of async as a keyword in the first place: the high visibility and clear ordering of context switches. (Note that these context switches also happen on exiting async context managers.)

I’m going to critique the example, heavily, but the same critiques likely apply to any other code that would seem to benefit from this, as I believe there’s a reason I can’t find any code in my own use that would even remotely benefit from this.

The example given seems pretty weak. I’d like to see a real-world example where you actually need to acquire these resources, in this particular order, all at once, using a mix of sync and async context managers to do it. If the ordering is unimportant (only that everything is acquired before continuing and released afterward), or even if only the acquisition of the lock at the outermost level matters, you can already split this into two groups: those that are async and those that are not.

What’s the lock even for? Temp directories should always be sufficiently unique, file access can be in exclusive mode, and databases typically provide their own locking as needed. If multiple things would be frequently updating the same config file, and this is intentional, you should be using a write queue rather than locked access, or even storing the configuration in the database.

Edit: I think that this is essentially already acting as a visual indication of code that should be refactored, not of code that the language needs to accommodate better.

1 Like

A simple question: are you using asyncio.timeout()? If you value the benefit of explicit suspension points (like I do, and you imply you do), you should be using something else instead (either a home-grown solution, or quattro.fail_after()/quattro.move_on_after() and friends), since asyncio.timeout() is an async context manager without any possible suspension points inside. (Fun fact: I raised this point while working on it with Andrew, but we were ultimately overruled.)

And I personally have had need to combine quattro.fail_after() with taskgroups etc, so that would be my concrete example.
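
Concretely, a sketch of that combination, assuming (as with the Trio originals) that quattro.fail_after() is a synchronous context manager, and using asyncio.TaskGroup from the stdlib:

import asyncio
import quattro

# Today: one extra indentation level per sync/async switch.
async def run_jobs(jobs):
    with quattro.fail_after(5.0):
        async with asyncio.TaskGroup() as tg:
            for job in jobs:
                tg.create_task(job())

# Under the PEP (proposed syntax, not valid today):
async def run_jobs_pep806(jobs):
    with (
        quattro.fail_after(5.0),
        async asyncio.TaskGroup() as tg,
    ):
        for job in jobs:
            tg.create_task(job())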

No. If I need a timeout on a coroutine that doesn’t provide it, I don’t use asyncio.timeout, but asyncio.wait_for or asyncio.wait (depending on the cancellation semantics needed in case of timeout) with a timeout parameter. While structured in code differently than the [edit: closest] context manager equivalent, the semantics are in general better.