Hello! Herein I briefly tell the sad tale of
functools.cached_property, and request your input on next steps to give it a happier future.
Once upon a time, someone invented the clever idea of using a “non-data descriptor” (that is, one defining only
__get__, which has lower precedence than an entry in an instance dictionary) to create “cached properties” which compute a value only once per instance and cache it in the instance dict itself. This has some neat properties (it’s very simple; the storage and lifetime of the cache are neatly and inherently tied to the lifetime of the instance; subsequent accesses of the cached value have exactly the same cost as any other normal attribute access) and some downsides (mostly that it only works on objects with a mutable __dict__).
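For anyone who hasn’t seen it, the technique looks roughly like this (a minimal sketch of the classic pre-stdlib pattern, not any particular project’s implementation):

```python
# A non-data descriptor: it defines only __get__, so once a value
# lands in the instance dict, that entry shadows the descriptor and
# __get__ is never called again for that instance.

class cached_property:
    def __init__(self, func):
        self.func = func
        self.__doc__ = func.__doc__

    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, instance, owner=None):
        if instance is None:
            return self
        value = self.func(instance)
        # After this assignment, later accesses are plain (cheap)
        # instance-attribute lookups.
        instance.__dict__[self.name] = value
        return value


class Circle:
    computations = 0  # just to observe how often the getter runs

    def __init__(self, radius):
        self.radius = radius

    @cached_property
    def area(self):
        Circle.computations += 1
        return 3.141592653589793 * self.radius ** 2
```

The getter runs once per instance; everything after that is an ordinary attribute access against the instance dict.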
This technique became very popular, at least in some corners of the Python community (notably web frameworks), and eventually pretty much every popular web framework (and many other projects) had cargo-culted their own basically-identical implementation of it.
Back in the Python 3.8 timeframe, I had the bright idea to add this to functools so everyone wouldn’t have to maintain their own version of it. And I did just that.
Sadly, in implementing the PR, I strayed from the path of righteousness (that is, just port the same simple battle-tested implementation everyone was already using) and succumbed to feature creep. A commenter suggested that
cached_property should be thread-safe via locking, and pointed to an existing implementation that included locking. So I added locking in the same manner, and it was reviewed and merged.
Some time later, people actually using it with threads observed that the locking has a critical performance problem: it locks the entire cached property across all instances, not just a single instance. As a result, across the web one can now find warnings to avoid
functools.cached_property, and most of the projects that previously maintained their own version have just continued to do so.
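To make the problem concrete, here is a simplified sketch of the locking shape that shipped (not the actual stdlib source): one lock lives on the descriptor itself, so it is shared by every instance of every class that uses that property.

```python
import threading

class locking_cached_property:
    """Simplified sketch of the 3.8-era locking shape: ONE lock per
    descriptor (per property definition), not one per instance."""

    def __init__(self, func):
        self.func = func
        self.lock = threading.RLock()

    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, instance, owner=None):
        if instance is None:
            return self
        with self.lock:
            # Even when two threads touch *different* instances,
            # they serialize here on the shared descriptor lock.
            if self.name not in instance.__dict__:
                instance.__dict__[self.name] = self.func(instance)
            return instance.__dict__[self.name]
```

With an expensive getter, two threads computing the property on two unrelated instances cannot overlap at all, which is the performance problem described above.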
How can we rescue
cached_property? Some options that have been proposed, with pros and cons (thanks @Jelle for originally framing these 3 options):
------ OPTION 1: give a couple versions’ warning in deprecation docs and then just remove the locking
On the plus side, this allows us to arrive, with minimal churn and within a few versions, at the simplest behavior that empirically worked just fine for most users of
cached_property before it was added to stdlib. Normal Python data structures and instance methods don’t inherently protect you from multi-threading data races, you have to provide your own synchronization that meets your specific needs; it’s not clear why
cached_property should be any different.
On the minus side, if anyone out there is using stdlib
cached_property, and depending on the current locking to guarantee synchronization, and they don’t notice any of the deprecation warnings in the docs, we may be gifting them with an unpleasant debugging puzzle. At this point this is hypothetical (despite some level of internet noise about locking issues with
cached_property, nobody has yet turned up to say “I’m relying on locking”), but it does seem possible.
------ OPTION 2: add a lock=True kwarg or add a new cached_property_no_lock
We could add a keyword argument to allow turning off the locking, or just add a separate
cached_property_no_lock. This could be done with or without deprecating the current locking behavior.
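One possible shape for the keyword API (names and structure here are purely illustrative, not a concrete proposal):

```python
import functools

class _no_lock_cached_property:
    # The classic lock-free non-data descriptor (illustrative).
    def __init__(self, func):
        self.func = func

    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, instance, owner=None):
        if instance is None:
            return self
        value = self.func(instance)
        instance.__dict__[self.name] = value
        return value


def cached_property(func=None, *, lock=True):
    """Hypothetical lock= keyword: lock=True delegates to today's
    stdlib behavior, lock=False opts out of locking entirely."""
    def wrap(f):
        if lock:
            return functools.cached_property(f)
        return _no_lock_cached_property(f)
    # Support both @cached_property and @cached_property(lock=False).
    return wrap if func is None else wrap(func)
```

The `func=None` default keeps bare `@cached_property` working unchanged, so existing users see no difference unless they pass `lock=False`.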
On the positive side, this avoids any breakage for current users; any behavior change is opt-in.
On the negative side, if we don’t deprecate, then forever the probably-better-for-most-users version of
cached_property has to live under the ugly name
cached_property_no_lock, and beginners are likely to stumble onto the version that is probably worse for them. (And unless we also do option 3, we forever keep under the most accessible name a basically-broken
cached_property that doesn’t seem to meet anyone’s needs.) And if we do deprecate, we create a lot of thrash and churn for anyone who has already adopted
cached_property: first we ask them to add
lock=False, and then probably a few years down the road we switch the default, and then later we maybe ask them to stop using the
lock keyword altogether.
We could also attempt to make the deprecation less noisy by only issuing the warning for users who appear to be relying on the locking (e.g. by issuing the warning only when there is lock contention.) But this is a pretty unreliable and non-deterministic heuristic to base a deprecation warning on.
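The heuristic could look roughly like this (a sketch only; the non-determinism is visible in the non-blocking probe, which only fires when another thread happens to hold the lock at that instant):

```python
import threading
import warnings

class warning_cached_property:
    """Sketch: warn only when another thread is actually holding the
    lock, i.e. when someone appears to rely on the synchronization.
    Illustrative only, not proposed stdlib code."""

    def __init__(self, func):
        self.func = func
        self.lock = threading.RLock()

    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, instance, owner=None):
        if instance is None:
            return self
        if not self.lock.acquire(blocking=False):
            # Contention detected: this caller would have blocked.
            warnings.warn(
                "relying on cached_property locking is deprecated",
                DeprecationWarning,
                stacklevel=2,
            )
            self.lock.acquire()  # now block, as today
        try:
            if self.name not in instance.__dict__:
                instance.__dict__[self.name] = self.func(instance)
            return instance.__dict__[self.name]
        finally:
            self.lock.release()
```

Uncontended (e.g. single-threaded) users never see the warning, which is the point, but whether a contended user sees it depends entirely on thread timing.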
------ OPTION 3: make the locking per-instance
We could try to update the locking code so that it locks per-instance instead of per-cached-property.
On the positive side, this preserves the guarantee that for a given instance a cached property will execute only once, even in the presence of threads. Someone out there may currently be using
cached_property and depending on that.
On the negative side, this will likely be complex to implement and hard to get right (nobody has yet offered an implementation), it will probably continue to cause some issues with pickling, and the value of it is hypothetical, not user-requested. (In the typical use cases for
cached_property where it’s just computing some cached value specific to the instance based on other immutable fields of the instance, it doesn’t really matter if on rare occasions it could be computed twice for the same instance.)
Also, this option doesn’t fully preserve compatibility either: it’s theoretically possible that someone out there is currently implicitly protecting global state with their
cached_property lock, and even just changing the lock to per-instance could cause subtle bugs for them.
If we did create a robust implementation of per-instance method locking, it would seem more flexible to put this implementation in its own
lock decorator which can be optionally composed with
cached_property for those who need it.
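A composable decorator along those lines might look like this. This is a sketch only: `locked` is a hypothetical name, and the implementation assumes instances are hashable and weak-referenceable, which itself hints at why a fully general version is hard to get right.

```python
import functools
import threading
import weakref

def locked(func):
    """Hypothetical per-instance lock decorator, composable with a
    (lock-free) cached_property. Assumes instances are hashable and
    support weak references."""
    locks = weakref.WeakKeyDictionary()  # instance -> its own lock
    meta = threading.Lock()              # guards lazy lock creation

    @functools.wraps(func)
    def wrapper(self):
        with meta:
            lock = locks.setdefault(self, threading.RLock())
        with lock:
            # Only calls on *this* instance serialize here.
            return func(self)

    return wrapper


class Circle:
    def __init__(self, radius):
        self.radius = radius

    # Composition: cached_property handles caching, locked handles
    # per-instance synchronization of the first computation.
    @functools.cached_property
    @locked
    def area(self):
        return 3.141592653589793 * self.radius ** 2
```

Users who don’t need synchronization simply don’t stack the extra decorator, and the caching and locking concerns stay independent.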
What do you think of these options? Which seems the least bad? Are there additional options, or additional pros/cons of the above options, that we should consider?
Thanks for your consideration!