The function itself is not defined with async def, but FastAPI takes care of running the application asynchronously.
Question: Do I still need to use a cache specifically designed for asynchronous code, like aiocache or aiomcache, or would I only need that for endpoints defined with async def? (FastAPI allows both.)
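For a pure function like this, one minimal sketch is plain functools.lru_cache from the standard library; the function name and arguments below are made up for illustration:

```python
from functools import lru_cache

# Hypothetical stand-in for the real computation; lru_cache is safe here
# because the result depends only on the arguments and there are no side effects.
@lru_cache(maxsize=1024)
def expensive_computation(x: int, y: int) -> int:
    return x ** y

# In a plain `def` FastAPI endpoint you would just call it:
# @app.get("/compute")
# def compute(x: int, y: int):
#     return {"result": expensive_computation(x, y)}
```

Since FastAPI runs plain def endpoints in a threadpool rather than on the event loop, an ordinary synchronous cache like this doesn't block anything; an async-aware cache only becomes relevant inside async def endpoints, where a blocking cache backend (e.g. a network round trip to Redis) would stall the event loop.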
Ignore the name of the function. I was too lazy to type and just copied it. It is a function that does some computations; the return value depends only on the arguments, and it has no side effects.
Oh OK, no worries. Doesn’t uvicorn work by spawning a new subprocess for each HTTP request? If so, a naive cache won’t be shared between subprocesses, and each cache would be garbage collected when the subprocess serving its request terminates.
It’s common for web servers to use Redis or something similar for caching, but maybe that’s overkill for this situation if you don’t need persistence (saving the cache to disk) between server reboots.
If the return value is deterministic and the same for all clients, it doesn’t matter. But in general it’s non-trivial: what if two cache misses for the same key occur at the same time? Which one wins? So I’d look through the asyncio docs, and look at popular async libraries on PyPI to see if such a feature already exists.
My knowledge of it is very limited, but I think each web worker is a separate process, while each request (handled by a given worker) is run in a threadpool. Each worker will have its own cache, yes, but it should persist and be available to all requests handled by that worker.
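That per-worker model can be simulated without a web server: a module-level lru_cache lives as long as the process, so once one "request" (thread) has populated it, later requests served by the same worker's threadpool get hits. A small sketch (the counter is only there to make cache hits observable):

```python
from concurrent.futures import ThreadPoolExecutor
from functools import lru_cache
import threading

call_count = 0
count_lock = threading.Lock()

@lru_cache(maxsize=None)
def compute(n: int) -> int:
    # Count real executions so cache hits are observable.
    global call_count
    with count_lock:
        call_count += 1
    return n * n

compute(7)  # first request populates the per-process cache

# Simulate many later requests served by this worker's threadpool:
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(compute, [7] * 100))
```

Across multiple worker processes each one would build its own copy of this cache, which is exactly why a shared backend like Redis comes up once you scale past one worker.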