Simplify lru_cache

I’ve just read through this thread as a curious observer. One thing I found odd is that so much of it was focused on performance benchmarking, which to me seems like it should be only a 4th- or 5th-level concern for this code. Correctness (in single- and multi-threaded contexts, and under re-entrancy) and maintainability are at least two much more important goals.

As for whether this “just” “shifts the work” of bookkeeping to the dict code: I should think that would be universally accepted as a good thing. I don’t understand the argument for why that would be bad.

I greatly appreciate @iritkatriel pointing out that we want to encourage people to run experiments, even on code that we think may be “solved”.

Finally - the suggestion seems to be that an API addition to the dict code might be a more tenable path. I very much doubt that if the OP had come here saying “I have some API additions to dict that I’d like to make, which would simplify some toy caching implementations I’ve been playing with” he’d have been greeted any more warmly. That seems quite a bit less promising as an opening move than “I have a simplification of existing code that also happens to be faster.”

That said, I would definitely welcome those specific dict additions, because I have badly wanted them in the past. So +1 from me on that.

@kenahoo Thanks for the encouragement!

I suspect I’m (mostly) to blame for the performance focus? I got into this investigation mainly to see whether it’s possible at all, and whether it makes the code simpler.

And yes, I mostly picked lru_cache as an example to show what the existing dict can already (almost) do.
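For anyone who hasn’t seen the trick, here is a minimal pure-Python sketch of the idea (not the actual patch, which lives in C): since 3.7, a plain dict preserves insertion order, so “evict the oldest” and “move to the end” fall out of ordinary dict operations. The simplifications (positional args only, no `typed=`, no locking) are mine.

```python
from functools import wraps

def lru_cache_sketch(maxsize=128):
    """Sketch of an LRU cache on a plain dict, using only
    insertion-order preservation (Python 3.7+)."""
    def decorator(func):
        cache = {}

        @wraps(func)
        def wrapper(*args):
            key = args                       # simplified: no kwargs, no typed=
            try:
                value = cache.pop(key)       # hit: remove the entry...
            except KeyError:
                value = func(*args)          # miss: compute the value
                if len(cache) >= maxsize:
                    # The least recently used entry is simply the first key.
                    del cache[next(iter(cache))]
            cache[key] = value               # ...and reinsert as most recent
            return value

        return wrapper
    return decorator
```

The pop-and-reinsert is the “(almost)”: it looks the key up twice per hit, which is exactly the kind of overhead an implementation inside dict itself can avoid.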

Then Serhiy pointed me to his past re-implementations, and as far as I could tell, his otherwise excellent work was shrugged off because the performance was awful. When I got the first results showing that performance actually improved with a careful implementation, I naturally focused on that.

And then there was some back-and-forth because Raymond couldn’t believe that performance could actually improve, and I still had to post benchmark numbers etc. So that hijacked the discussion a bit.

Yes, I agree that the performance differences are fairly small. I like that the new approach is a lot simpler and calls out to user code less often.
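Concretely (my own illustration, not from the patch): every dict operation on a user-defined key can call back into user `__hash__`/`__eq__`, so each extra bookkeeping step per hit is another window where arbitrary user code runs mid-update.

```python
class NoisyKey:
    """A key that reports every callback into user code."""
    def __init__(self, n):
        self.n = n
    def __hash__(self):
        print("user __hash__ called")
        return hash(self.n)
    def __eq__(self, other):
        print("user __eq__ called")
        return self.n == other.n

d = {}
d[NoisyKey(1)] = "cached"    # one __hash__ call
_ = d[NoisyKey(1)]           # one __hash__ plus one __eq__ call
```

Fewer such call-outs per cache hit means fewer re-entrancy hazards to reason about.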

Yes, those two things are actually what I mostly sought feedback on. But the discussion went elsewhere.

So I am working on another prototype that essentially turns the compact ‘entries’ table in the regular dict into a circular buffer. Existing dict operations won’t be affected (nor does the new approach need more memory).

But we can then support all the operations of both OrderedDict and lru_cache essentially for free: without any extra memory, and with vastly simpler and shorter code.
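To make that concrete, here is a pure-Python toy of what “entries table as a circular buffer” could look like: holes left behind by move-to-end, lazy reclamation at the head, and occasional compaction (which the compact dict already does anyway on resize). The class, names, and the 2x-capacity slack are all mine, not the prototype’s; in the real dict the slack corresponds to the headroom the entries table already carries.

```python
class RingLRU:
    """Toy model: recency order kept in a ring of entry slots."""

    def __init__(self, maxsize):
        self.maxsize = maxsize
        self.cap = 2 * maxsize              # slack so holes amortize away
        self.entries = [None] * self.cap    # (key, value) or None (a hole)
        self.index = {}                     # key -> slot in self.entries
        self.head = 0                       # oldest occupied slot
        self.tail = 0                       # next slot to write
        self.used = 0                       # occupied slots (live + holes)

    def _reclaim(self):
        # Holes at the head are free to reuse: just walk past them.
        while self.used and self.entries[self.head] is None:
            self.head = (self.head + 1) % self.cap
            self.used -= 1

    def _compact(self):
        # Rare: squeeze out all holes, as dict does when it resizes.
        live, pos = [], self.head
        for _ in range(self.used):
            if self.entries[pos] is not None:
                live.append(self.entries[pos])
            pos = (pos + 1) % self.cap
        self.entries = live + [None] * (self.cap - len(live))
        self.index = {k: i for i, (k, _) in enumerate(live)}
        self.head, self.tail, self.used = 0, len(live), len(live)

    def _append(self, key, value):
        self._reclaim()
        if self.used == self.cap:
            self._compact()
        self.entries[self.tail] = (key, value)
        self.index[key] = self.tail
        self.tail = (self.tail + 1) % self.cap
        self.used += 1

    def get(self, key):
        pos = self.index[key]               # KeyError here is a cache miss
        _, value = self.entries[pos]
        self.entries[pos] = None            # leave a hole behind...
        self._append(key, value)            # ...and rewrite as most recent
        return value

    def put(self, key, value):
        if key in self.index:
            self.entries[self.index[key]] = None
        elif len(self.index) >= self.maxsize:
            self._reclaim()                 # head now sits on the oldest live entry
            old_key, _ = self.entries[self.head]
            self.entries[self.head] = None
            del self.index[old_key]
        self._append(key, value)
```

The point of the toy: move-to-end and evict-oldest become amortized O(1) ring operations, with no linked list and no per-entry pointers.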

I suspect multi-threading and correctness concerns should be smaller in the long run, because my prototype only adds very limited extra processing to the dict, in places that already need to hold a lock on the dict (implicitly, via the GIL). Nothing needs more user-code calls than what dict already does.

Of course, I can only say ‘in the long run’, because the existing code has the benefit of already being written, reviewed, tested, and used in production.