Reduce the overhead of functools.lru_cache for functions with no parameters

Can you show there’s enough of a performance improvement on subsequent calls to the functions to warrant the maintenance cost of special-casing for this?

I believe I can; I’ll try to create a PoC this weekend. Looking at the implementation of lru_cache, I also think it’s pretty simple to do - I was going to use the PyObject member that’s currently always a PyDict as the place to store the returned object directly.
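At the Python level, the special case amounts to storing the single result in one slot instead of a dict. A minimal pure-Python sketch of the idea (hypothetical - the real change would be in the C implementation of functools):

```python
import functools

_SENTINEL = object()  # distinguishes "not yet computed" from a cached None

def cache_no_args(func):
    """Hypothetical decorator: cache the result of a zero-argument function
    in a single slot, skipping key construction, hashing, and dict lookups."""
    result = _SENTINEL

    @functools.wraps(func)
    def wrapper():
        nonlocal result
        if result is _SENTINEL:
            result = func()
        return result

    return wrapper
```

The sentinel is needed so that a function legitimately returning None is still only called once.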

I think most of the savings will be memory from not allocating the PyDict, but the hashing and PyDict calls will also be removed.

And to be honest, I’d never heard of the idea of using functools.lru_cache for this before. Maybe it’s a trick that’s only common in Django, which makes me think that Django could expose a custom version of this if the performance improvement warrants it?

This pattern is pretty prevalent, far beyond just Django. If you browse through the first few pages of GitHub code search you can find quite a few examples.
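For context, the pattern in question is just lru_cache applied to a zero-argument function so its body runs at most once (the function name and body here are illustrative):

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # maxsize is irrelevant: there is only one cache entry
def get_settings():
    # stand-in for an expensive one-time computation or I/O
    return {"debug": False, "timeout": 30}

# The body runs on the first call; every later call returns the same cached object.
assert get_settings() is get_settings()
```

Every one of those calls still builds a key tuple, hashes it, and does a dict lookup - the overhead this issue proposes to remove.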

@once might be semantically better, but it doesn’t exist in the stdlib, and people (myself included) have clearly cottoned on to the idea of using lru_cache as a simple, copy-paste-free replacement. Plus it’s faster (using the once implementation above):
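The once implementation referenced isn’t reproduced in this thread; a typical thread-safe version of the two benchmarked functions might look like this (a sketch, not the exact code that produced the timings):

```python
import threading
from functools import lru_cache

def once(func):
    """A typical 'run at most once' decorator, guarded by a lock so the
    wrapped function cannot run twice under concurrent first calls."""
    lock = threading.Lock()
    sentinel = object()
    result = sentinel

    def wrapper():
        nonlocal result
        if result is sentinel:
            with lock:
                if result is sentinel:  # double-checked locking
                    result = func()
        return result

    return wrapper

@once
def return_1_once():
    return 1

@lru_cache(maxsize=None)
def return_1_lru_cache():
    return 1
```

The Python-level wrapper and sentinel check plausibly explain why once loses to the C-implemented lru_cache in the timings below, despite lru_cache doing strictly more work per call.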

In [14]: %timeit return_1_once()
98.8 ns ± 4.51 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)

In [15]: %timeit return_1_lru_cache()
56.7 ns ± 1.79 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)