Is there a way to define a function that can be used both as a normal function and as an async function?

As title. Currently you have to define two distinct functions.

I’m not proposing a solution. I say “not” because I’m really doubtful about it.

Instead of writing:

async def f():
    [some code]
    await some_coro()
    await some_task

you would also be able to write it as:

def f() -> MaybeCoro[[], None]:
    [some code]
    some_coro()
    some_task

I know, this is almost surely impossible, since the type hint in this case would have to act as a sort of keyword for the parser. Furthermore, in this case all coroutines and tasks would be awaited by default, and I don’t know if there’s a case in which it’s good not to await them and, if so, how you would do it in the second snippet (maybe using asyncio.create_task()?)
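For reference, the standard way to start a coroutine without awaiting it right away is asyncio.create_task(); a minimal sketch (the function names here are illustrative):

```python
import asyncio

async def background():
    await asyncio.sleep(0)
    return "done"

async def main():
    # create_task() schedules the coroutine immediately without awaiting it
    task = asyncio.create_task(background())
    # ... other work could run here while the task makes progress ...
    return await task  # await later, or skip this for fire-and-forget

print(asyncio.run(main()))
```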

You can use unasync to strip async/await from your code. If you don’t need to support Python 2 you can drive your coroutine without asyncio.

Well, it is a solution, but you have to put all your async code in a separate folder.

Can’t we apply duck typing to functions and coroutines in some way?
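Something in that spirit already works at the call site: you can duck-type the returned value with inspect.isawaitable() and only await it when needed. A sketch, where maybe_await is a made-up helper name:

```python
import asyncio
import inspect

async def maybe_await(value):
    """Await the value if it is awaitable, otherwise return it as-is."""
    if inspect.isawaitable(value):
        return await value
    return value

def sync_f():
    return 1

async def async_f():
    return 2

async def main():
    # the caller no longer cares which kind of function it was given
    return await maybe_await(sync_f()), await maybe_await(async_f())

print(asyncio.run(main()))
```

The caller must itself be async, though, so this only pushes the problem one level up the call stack.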

If you don’t need to support Python 2 you can drive your coroutine without asyncio:

Here’s an example of driving a coroutine-based application without asyncio:

import asyncio
import sniffio
import httpx
import requests


def is_async():
    """
    A function to determine if running in a sync or async context
    """
    try:
        return sniffio.current_async_library()
    except sniffio.AsyncLibraryNotFoundError:
        return None

# this will be a low-level function away from any business logic in your application
async def some_coro():
    if backend := is_async():
        # this async/await call requires an event loop to work
        async with httpx.AsyncClient() as client:
            return backend, (await client.get("https://httpbin.org/get")).content

    # as long as the "bottom" of your async/await call stack doesn't use an
    # eventloop you can drive your coroutine with a sync function
    return backend, requests.get("https://httpbin.org/get").content

async def f():
    return await some_coro()

def run(backend, corofn, /, *args, **kwargs):
    if backend == "sync":
        # if you try and run sync code in a callback you'll block your eventloop
        assert not is_async()
        try:
            # here we implement the first step of an eventloop
            # and breakout with a runtime error if the corofn needs an eventloop
            next(corofn(*args, **kwargs).__await__())
        except StopIteration as ex:
            return ex.value
        # in a real eventloop you would have to process values yielded from
        # your coroutine, schedule them and call next(...) when they are "done"
        # here - we just bail out.
        raise RuntimeError(f"{corofn=} did not complete synchronously")
    if backend == "asyncio":
        return asyncio.run(corofn(*args, **kwargs))
    raise RuntimeError(f"unsupported {backend=}")


# the top of your application needs to decide if you want to run sync or async and stick to it
if __name__ == "__main__":
    print(run("sync", f))
    print(run("asyncio", f))

A better solution in general is to write your library using https://pypi.org/project/anyio/ and require synchronous applications to call into your library using result = anyio.run(your_library, backend='asyncio')

It seems that anyio is an abstraction (a facade?) over asyncio and other Python async programming libraries. It is not an abstraction over async and sync functions.

Well, again, it is effective and maybe better than unasync, but you have to duplicate your code anyway.

It provides anyio.run to execute a coroutine function synchronously.

I don’t see the duplication you’re referring to

Ok, I skipped this part:

So, to run the async function synchronously, you want to reimplement asyncio and run it async anyway?

So, to run the async function synchronously, you want to reimplement asyncio and run it async anyway?

The main difference between async and sync is how file descriptors are read from and written to. Async programming allows you to multiplex tasks around a single select.select call; asyncio does this with coroutine functions. Some coroutine functions can run synchronously (they throw StopIteration before yielding), and you can design your codebase so that all your coroutine functions work this way when a ContextVar flag is set.
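A sketch of that idea (the flag name and helper are hypothetical): when the ContextVar is set, the coroutine takes the blocking path and completes without yielding, so a single next() call drives it to completion:

```python
import asyncio
from contextvars import ContextVar

# hypothetical flag: when True, coroutines must complete without yielding
SYNC_MODE: ContextVar[bool] = ContextVar("SYNC_MODE", default=False)

async def get_data():
    if SYNC_MODE.get():
        return "sync result"      # blocking path: never yields
    await asyncio.sleep(0)        # async path: yields to the event loop
    return "async result"

def run_sync(corofn, *args):
    token = SYNC_MODE.set(True)
    try:
        # first (and only) step: StopIteration carries the return value
        next(corofn(*args).__await__())
    except StopIteration as ex:
        return ex.value
    finally:
        SYNC_MODE.reset(token)
    raise RuntimeError("coroutine needed an event loop")

print(run_sync(get_data))
```

The same coroutine still works under a real event loop when the flag is unset.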

I’m also not sure what drives your desire to write all your business logic to support arbitrary sync or async entry points. For example, urllib3 needs one codebase with Python 2 sync support and Python 3 anyio support; what’s driving your use case?

The problem was with an “old” server written with aiohttp. Many times I faced the problem that a normal function had to become async, or the other way around. I then had to change all the other functions B that called that function A, the functions C that called B, and so on.

As title. Currently you have to define two distinct functions.

The problem was with an “old” server written with aiohttp. Many times I faced the problem that a normal function had to become async, or the other way around. I then had to change all the other functions B that called that function A, the functions C that called B, and so on.

Once you’ve converted everything that needs coroutine functions, you’ll be able to use them in both sync def and async def contexts; you don’t need to write two distinct functions.

I’m not completely sure how does this relate exactly to what you had in mind, but I have had a common pattern where users could write a function that would be called at runtime, usually as some kind of an event handler callback or similar. Since the runtime code is async, it can call both async and regular functions, so I wanted to give the users the flexibility to define either, depending on their use case.

Traditionally, I would simply pass the function through the asyncio.coroutine decorator, but that has been deprecated and is scheduled for removal in 3.10. So I have my own reimplementation, purely for this purpose:

from functools import wraps
from inspect import iscoroutinefunction
from typing import Callable


def coroutine(fn: Callable) -> Callable:
    """Decorator to convert a regular function to a coroutine function.

    Since asyncio.coroutine is set to be removed in 3.10, this allows
    awaiting a regular function. Not useful as a @-based decorator,
    but very helpful for inline conversions of unknown functions, and
    especially lambdas.
    """
    if iscoroutinefunction(fn):
        return fn

    @wraps(fn)
    async def _wrapper(*args, **kwargs):
        return fn(*args, **kwargs)

    return _wrapper
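Hypothetical usage, repeated as a standalone sketch (the decorator is condensed from above, and on_event is a made-up runtime callback): the async runtime can accept either kind of handler and normalize it through coroutine():

```python
import asyncio
from functools import wraps
from inspect import iscoroutinefunction

def coroutine(fn):
    # same decorator as above, condensed so this example runs on its own
    if iscoroutinefunction(fn):
        return fn
    @wraps(fn)
    async def _wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return _wrapper

async def on_event(handler):
    # the runtime doesn't care whether the user handed us sync or async code
    return await coroutine(handler)()

async def async_handler():
    return "async handler"

print(asyncio.run(on_event(lambda: "sync handler")))
print(asyncio.run(on_event(async_handler)))
```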

This is really good, but what about await? An async function must be awaited, and if you await something, it must be awaitable. So if I define the undecorated function, there is nothing to await.

Maybe there is some asyncio function that emulates await?

if I define the undecorated function, there is nothing to await

Yes, that is why I mentioned that I’m not sure it solves your problem completely. But I think there is a conceptual problem here: non-blocking (async) code can execute blocking (sync) code, but the opposite is by definition simply not possible. The only way to execute async code from sync code is to encapsulate it using asyncio.run or something similar. So the abstraction works in only one direction.
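A minimal sketch of that encapsulation (the function names are illustrative): the sync entry point owns the event loop via asyncio.run():

```python
import asyncio

async def compute():
    # async code, free to await other coroutines internally
    await asyncio.sleep(0)
    return 7

def sync_entry_point():
    # the only way sync code can consume the coroutine:
    # spin up an event loop and run it to completion
    return asyncio.run(compute())

print(sync_entry_point())
```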