So I think there are 2 main points to consider.
1. Observability: partial proxy (option 1) vs full proxy (option 2).
And I think that if this is going to be implemented to maximise usefulness, it might be better to go with option 2.
Let's take an example (which is not new, but bear with me):
```python
from external_library import get_something_with_default

result = get_something_with_default(
    'a',
    default=`expensive_function()`,
)
# What if the "black-box" function uses `is` or `type`?
```
So ideally one would want to be able to use this functionality in any code and be at least 99.99% sure that everything will work as expected.
Otherwise, this feature will not be compatible with most existing code. And if external libraries needed an extra banner saying "deferred evaluation compatible", that sounds like a very bad deal to me.
The two main culprits, as I see it (there might be more), are `is` and `type`.
Although option 1 is much simpler, since one can actually build such an object in pure Python, implementing this in the standard library offers greater capabilities, and I think those should be made use of.
So I suggest that the deferred proxy object should ideally be a 99.99% substitute, as opposed to the 98% substitute one gets if `is` and `type` do not act on the evaluated value.
In this case, special cases for `type` and `is` would be needed, and some convenient back-door access to the underlying `ProxyObject` would need to be devised.
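To make the limitation of option 1 concrete, here is a minimal sketch of a pure-Python partial proxy (all names are hypothetical, not a proposed API). Dunder methods can be forwarded to the evaluated value, but `is` and `type` cannot be intercepted from pure Python:

```python
class DeferredProxy:
    """Hypothetical pure-Python partial proxy (option 1 sketch)."""

    def __init__(self, func):
        self._func = func
        self._evaluated = False
        self._value = None

    def _evaluate(self):
        # Evaluate lazily, at most once.
        if not self._evaluated:
            self._value = self._func()
            self._evaluated = True
        return self._value

    # Dunder methods *can* be forwarded to the evaluated value...
    def __eq__(self, other):
        return self._evaluate() == other

    def __repr__(self):
        return repr(self._evaluate())


deferred = DeferredProxy(lambda: 42)
print(deferred == 42)         # True  -- forwarding works
print(type(deferred) is int)  # False -- `type` sees the proxy itself

maybe_none = DeferredProxy(lambda: None)
print(maybe_none is None)     # False -- `is` compares the proxy's identity,
                              # even though the evaluated value *is* None
```

Any black-box code that checks `x is None` or `type(x) is SomeType` will see the proxy rather than the value, which is exactly the 98% problem described above.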
2. Variable binding (at definition vs at evaluation)
I am not sure about this one, but I might have a good starting point for thinking about it: serialisation. Say:
```python
import pickle
from external_library import factorial

class A:
    def __init__(self, a):
        self.a = a

b = 100_000
inst = A(a=`factorial(b)`)
inst_des = pickle.loads(pickle.dumps(inst))
```
What would make the most sense to happen here?
If this was a lambda, i.e. `lambda: factorial(b)`, it would fail to pickle. And it should, because the values within are subject to change.
But if this was NOT to behave like a `lambda`, and we were binding variables at definition, this could have a well-defined and fixed behaviour.
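For reference, this is the behaviour of plain lambdas today (standard Python, no new syntax): pickle serialises functions by reference, and a lambda has no importable name, so the dump fails.

```python
import pickle

b = 100_000
try:
    pickle.dumps(lambda: b * 2)
except (pickle.PicklingError, AttributeError) as e:
    print(f"as expected, the lambda does not pickle: {e!r}")
```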
An advantage of binding at definition would be a much more robust object that can be serialised and used in loops:

```python
expr = 1
for a in [1, 2, 3, 4]:
    expr = `expr + a`
```
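To see why binding at evaluation makes loops like this fragile, compare the classic closure pitfall with today's lambdas (standard Python, nothing hypothetical here):

```python
# Late binding: every lambda closes over the same `a`, so all of them
# see its final value once the loop has finished.
callbacks = [lambda: a + 1 for a in [1, 2, 3, 4]]
print([f() for f in callbacks])    # [5, 5, 5, 5]

# Binding at definition, simulated with a default argument, gives the
# result the loop presumably intended.
callbacks = [lambda a=a: a + 1 for a in [1, 2, 3, 4]]
print([f() for f in callbacks])    # [2, 3, 4, 5]
```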
A disadvantage is that one could not use it for cases such as "A non-backdoor approach for typing forward references".
However, if one wants `lambda` behaviour even when the "binding at definition" approach is taken:

```python
c = `(lambda: a + b)()`
```

Then one gets lambda behaviour, and serialisation fails, as expected.
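To make that concrete, here is a minimal sketch of how the two behaviours could coexist, using a hypothetical pure-Python stand-in for the deferred object (the class and its names are illustrative only, not part of any proposal):

```python
import pickle
from dataclasses import dataclass
from math import factorial  # a named function, picklable by reference

@dataclass
class DeferredCall:
    """Hypothetical stand-in: snapshots func and args at definition."""
    func: object
    args: tuple = ()

    def evaluate(self):
        return self.func(*self.args)

# Binding at definition: only picklable state, so it survives a round trip.
b = 5
snap = DeferredCall(factorial, (b,))   # binds b = 5 now
b = 100_000                            # later rebinding is irrelevant
restored = pickle.loads(pickle.dumps(snap))
print(restored.evaluate())             # 120, the value bound at definition

# Opting back into lambda behaviour: works, but (correctly) refuses
# to pickle, because the lambda has no importable name.
x, y = 1, 2
late = DeferredCall(lambda: x + y)
print(late.evaluate())                 # 3, re-reads x and y at evaluation
try:
    pickle.dumps(late)
except (pickle.PicklingError, AttributeError) as e:
    print(f"serialisation fails, as expected: {e!r}")
```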
Personally, I would favour "binding at definition", mainly because lambda already does binding at evaluation.
A new construct brings more to the table if it behaves differently, rather than sharing the same "issues" as something that already exists, so that the two can cover different cases.
Especially since there would still be a way (although slightly more verbose) to get `lambda` behaviour within it.