Lazy evaluation of expressions

But then we would have type declarations affecting the runtime behaviour of code, which they’re explicitly not supposed to do in Python.

2 Likes

A related issue to argument passing (see the post above) is reassignment. Consider:

x: Lazy = 1 + 2  # Typed, and therefore clear.
x = 3 + 4  # Not typed; what should be done?

Does this get translated into:

x: Lazy = Lazy(lambda: 1 + 2)
x = Lazy(lambda: 3 + 4)

Or:

x: Lazy = Lazy(lambda: 1 + 2)
x = 3 + 4

I’m favouring the second: to get the first behaviour, you would need to annotate the second line as well.
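
For illustration, here is how the two interpretations would differ observably at runtime, assuming a hypothetical Lazy wrapper class (nothing here is part of any spelled-out semantics):

class Lazy:
    # hypothetical wrapper: just holds an unevaluated thunk
    def __init__(self, thunk):
        self._thunk = thunk

x = Lazy(lambda: 1 + 2)
x = 3 + 4                # 2nd interpretation: plain eager rebinding
print(type(x).__name__)  # int -> the wrapper is gone

x = Lazy(lambda: 1 + 2)
x = Lazy(lambda: 3 + 4)  # 1st interpretation: rebinding stays lazy
print(type(x).__name__)  # Lazy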

1 Like

Scala lets you write functions that capture the AST of expressions provided as arguments instead of their evaluated results:

def foo(x: Expr[Unit]) = x

The function foo accepts a single argument that must be an expression returning Unit (Scala’s equivalent of Python’s None), and returns the AST of this expression as it was captured at the call site.

So the expression foo(print("hello")) does not actually print anything, it just returns an AST node representing the call to print. This can be used with macros to generate code that defers the evaluation of expressions similarly to how your Lazy type would work.
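
The closest analogue in Python today is passing a callable explicitly, so the deferral is visible at the call site rather than inferred from types; a rough sketch:

def foo(x):
    # x is a zero-argument callable, not an evaluated result
    return x

deferred = foo(lambda: print("hello"))  # prints nothing yet
deferred()                              # prints "hello" only now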

This is a really neat feature, but it relies on the fact that Scala is statically typed. A design where the compiler interprets the same piece of syntax differently depending on type information requires a global type checker built into the language. Even if you could get everyone on board with the feature, it would be impossible to implement in a satisfying way.

1 Like

But Python already does. Consider:

from dataclasses import dataclass

@dataclass
class C0:
    a: int = 0

@dataclass
class C1:
    a = 0

C0.a is an instance variable and C1.a is a class variable, so the semantics are completely different, and the only difference between them is the annotation.
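
The difference is visible at runtime; continuing the example above:

from dataclasses import fields

print([f.name for f in fields(C0)])  # ['a'] -> annotated, so a dataclass field
print([f.name for f in fields(C1)])  # []    -> unannotated, ignored by dataclass
print('a' in C0(1).__dict__)         # True  -> set per instance by __init__
print('a' in C1().__dict__)          # False -> only found on the class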

No, dataclasses, a library, does. The core language does not. That is the promise that was made.

2 Likes

Well, strictly speaking the core language does, by updating the class’s __annotations__ attribute at runtime so that the library can act accordingly.

But C functions do not have annotations. The typeshed project may help, but requiring each call to check for a lazy-annotated argument at runtime sounds like significant overhead to me.
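
To make the overhead concrete, here is a sketch of the check that would be needed on every call; typing.get_type_hints is real, the Lazy marker is hypothetical:

import typing

def has_lazy_params(func):
    # Per-call check sketch. C builtins carry no annotations, so
    # get_type_hints() simply returns {} for them and every argument
    # would have to be evaluated eagerly.
    try:
        hints = typing.get_type_hints(func)
    except TypeError:
        return False  # not introspectable: fall back to eager evaluation
    return any(getattr(hint, "__name__", None) == "Lazy"
               for hint in hints.values())

Running even this much introspection before every single call would dwarf the cost of many of the calls themselves.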

1 Like

Here is an example of a possible problem:

def foo(arg, default):
    # don't want to evaluate a lazy default in this message;
    # it should be printed as-is
    print(f"Debug: called with: {arg}, {default}")
    ...
    ...
    # but do want to evaluate it in this message
    print(f"Debug: returning {default}")
    return default
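
A sketch of the dilemma, assuming a hypothetical Lazy wrapper: either formatting forces evaluation, which breaks the first message, or it does not, and the second message needs an explicit call:

class Lazy:
    # hypothetical wrapper, only to illustrate the two conflicting needs
    def __init__(self, thunk):
        self._thunk = thunk

    def __repr__(self):
        return f"Lazy({self._thunk!r})"  # deliberately does NOT evaluate

    def force(self):
        return self._thunk()  # evaluation happens only when asked for

default = Lazy(lambda: 40 + 2)
print(f"Debug: called with: {default!r}")     # prints the wrapper, unevaluated
print(f"Debug: returning {default.force()}")  # prints 42, evaluated on demand

Getting both lines right requires the distinction to be visible in the code somewhere, as above.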

And importantly, none of this changes how the caller behaves in any situation. The decorator is able to manipulate the class, but the rules for how everyone else interacts with that class are not affected in any way. When you do C0.a, C0().a, C1.a, and C1().a, language rules determine what happens, and those rules have nothing to do with the annotations or the classes.

When I write a function call, I can guarantee that its behaviour prior to calling that function is the same regardless of which function it is. And this is important. I can write a very simple form of tracing by doing this:

import some_module

calculate = some_module.calculate  # keep a reference to the original

def trace_calculate(*a, **kw):
    print("Calculating", a, kw)
    ret = calculate(*a, **kw)
    print("Result", ret)
    return ret

some_module.calculate = trace_calculate

This would break if annotations could make a function’s arguments lazy. IMO this alone is enough to kill the proposal, at least in its current form. Annotations simply are not designed to make this sort of fundamental change. Not in Python.

3 Likes

Yep, I was thinking of typeshed stubs. If a C function doesn’t have a stub, arguments are always unwrapped when passed to it. The default behaviour would be to unwrap whenever the type is unknown. This is to stay compatible with existing, pre-Lazy code.

I’m probably misunderstanding you. Your trace_calculate function doesn’t declare its arguments as Lazy, so they will be evaluated. As I said, I think I’m not understanding you?

arg and default are not typed as Lazy, so they will be passed to foo already evaluated. If they were typed as Lazy, they would be evaluated on first use and the cached value used thereafter. This, evaluation on first use with caching, is the intended behaviour.
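
A minimal sketch of that intended behaviour, evaluate on first use with caching, assuming a hypothetical wrapper class like this:

_UNSET = object()

class Lazy:
    # hypothetical wrapper: evaluate on first use, cache, reuse thereafter
    def __init__(self, thunk):
        self._thunk = thunk
        self._value = _UNSET

    def force(self):
        if self._value is _UNSET:
            self._value = self._thunk()
            self._thunk = None  # release the closure once evaluated
        return self._value

calls = 0
def expensive():
    global calls
    calls += 1
    return 42

x = Lazy(expensive)
assert x.force() == 42
assert x.force() == 42
assert calls == 1  # the thunk ran exactly once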

Yes, other languages that I have used with a feature similar to the proposed lazy evaluation are Scala (your example), Swift, Kotlin, and Mathematica. Kotlin even uses a type called Lazy to do the wrapping! Mathematica is interesting because it is an untyped language: you annotate arguments to prevent evaluation (so it is very similar to this proposal).

I wrote that example after the use of type hints at runtime was ruled out, or at least that is how I understood it.

trace_calculate is currently a function that forwards arguments completely unchanged to an underlying function, without caller or callee being able to tell the difference (unless they are doing stack inspection or something).

Any proposal that breaks this behavior has essentially a 0% chance of being accepted. Whatever you think of needs to be able to deal with this example.

1 Like

Yes, and that is true even if the underlying function declared them as lazy. That means that simply adding a wrapper around a function will change its semantics.
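
A sketch of the problem, with the proposal’s hypothetical behaviour described in comments (none of this is implemented today):

def calculate(formula: str, value: "Lazy[int]"):  # hypothetical annotation
    return formula  # never touches value, so it would never be evaluated

def trace_calculate(*a, **kw):  # no Lazy annotations here
    return calculate(*a, **kw)

# Direct call: under the proposal the argument would arrive unevaluated,
# because calculate's parameter is marked Lazy.
# Via trace_calculate: *a and **kw are not marked Lazy, so the argument
# would be evaluated eagerly at the outer call, before calculate is reached.
# Same call site, different behaviour, purely because a wrapper was added.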

2 Likes

Because trace_calculate’s arguments are not typed as Lazy, this function would operate unchanged (exactly as it currently does).

But that is the same as and and or in Python today (and the same as the other languages listed above that have this proposed lazy evaluation feature). E.g.:

def trace_calc(a, b):
    ...
    result = a and b
    ...
    return result

b is always evaluated; it loses its laziness.
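
The same loss happens with plain function calls today: arguments are evaluated at the call site before any short-circuiting inside the function can help:

def trace_calc(a, b):
    return a and b  # short-circuits here, but too late for the caller

def noisy():
    print("b was evaluated")
    return 99

trace_calc(0, noisy())  # prints "b was evaluated" even though a and b
                        # never needs b: evaluation happened at the call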

import math

def calculate(formula: str, **exprs: Lazy[...]):
    ...

calculate("a or b", a=1, b=math.factorial(1000))

Is this going to eagerly calculate the value for b, or is it not?

Will that still be true if the tracer function is inserted?

This is not laziness; it is conditional evaluation.

I think I’m done trying to explain this. I’ve worded it in as many ways as I can, so now there is only one thing left: Go and implement it. Once you’ve done that, you will be able to explain this and other issues.

There is a fundamental misconception here about the possibility of using type annotations as part of the runtime. Type annotations in Python are for static typing, which means that a static type checker that is not part of the runtime reads the annotations. This includes reading things like stub files that are completely unavailable to the runtime. There are some limited situations in the language where, for convenience, the runtime uses the annotations, such as for dataclasses. In general, though, annotations need to be understood as not being something that basic runtime features can depend on.

While the annotations may be attached to some functions at runtime, it would not be remotely feasible to inspect those annotations for every parameter as part of every function call. The compiler would have to generate all kinds of weird code for every call site based on the possibility that any parameter of anything that is called may or may not be marked as lazy. In a language that checks types at compile time the situation is very different, because the compiler has full access to exactly which parameters are lazy when generating the code.

The proposal as suggested, in which the type of the parameter determines whether what is passed to it is lazy, is also not something that is going to be accepted in Python, because of the action-at-a-distance effect. If there are to be lazy expressions in Python, then the marker that makes the expression lazy would need to be part of the syntactic expression, like lambda: x + y, or otherwise be explicit at the call site rather than on the parameter.
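
For illustration, here is a call-site-explicit version of the earlier calculate example; the laziness is spelled out in the expression itself, so there is no annotation lookup and no action at a distance (the callable-based protocol is just a sketch):

import math

def calculate(formula: str, **exprs):
    # values are plain zero-argument callables; the callee invokes only
    # the ones it needs, and wrappers can forward them unchanged
    if formula == "a or b":
        return exprs["a"]() or exprs["b"]()
    raise ValueError(formula)

calculate("a or b", a=lambda: 1, b=lambda: math.factorial(1000))
# returns 1 without ever computing the factorial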

1 Like