Lazy evaluation of expressions

The compiler knows the types, but it doesn’t enforce them; it does, however, put them into __annotations__. E.g.:

def f(a: int) -> int:
    return a

print(f.__annotations__)

Prints:

{'a': <class 'int'>, 'return': <class 'int'>}
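
To see the non-enforcement directly, call f with a mismatched argument; no error is raised:

print(f('hello'))  # prints 'hello'; the annotation is recorded but never checked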

Therefore the compiler, not the runtime, will transform assignments to Lazy variables.

I am going to bow out of this thread; the OP seems unwilling to try to properly understand the counter-arguments, and I don’t believe there is any value in continuing this discussion.

There seems to be a misunderstanding about how the existing short-circuit operators, and & or, work in Python. First, short-circuiting is a form of lazy evaluation; see Short-circuit evaluation - Wikipedia. Second, wrapping the operators causes their laziness to be lost. An example suggested by others in this thread is a trace_calc wrapper, but as demonstrated below this doesn’t work for and & or:

def trace_calc(calc, *args, **kwargs):
    result = calc(*args, **kwargs)
    print(f'{result}, {args}, {kwargs}')
    return result

If we define an or expression that has side effects:

a = True
def b():
    print('Evaluate b')
    return True

print(a or b()) 

It prints “True” and, importantly, doesn’t print “Evaluate b”, because b isn’t evaluated. However, if the or expression is wrapped in trace_calc, the laziness is lost:

print(trace_calc(lambda l, r: l or r, a, b())) 

This prints “Evaluate b”, then “True, (True, True), {}”, then “True”. The “Evaluate b” is the important part: it shows that the evaluation has been changed by the wrapping.

Therefore, in existing Python, you cannot write a general expression wrapper that does not change the evaluation.

The proposed Lazy does not have this problem.
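
So the examples below can be run by hand, here is a minimal sketch of what the Lazy wrapper could look like; this exact class is my illustration of the described semantics (including the caching discussed later), not part of the proposal:

class Lazy:
    """Minimal sketch: wrap a zero-argument callable, evaluate once, cache the result."""
    _UNSET = object()

    def __init__(self, thunk):
        self._thunk = thunk  # the unevaluated expression, as a zero-argument callable
        self._value = Lazy._UNSET

    def __call__(self):
        if self._value is Lazy._UNSET:
            self._value = self._thunk()  # evaluated on first use only
        return self._value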

def my_or(l, r: Lazy):
    if l:
        return True
    return r()

print(my_or(a, Lazy(lambda: b())))  # After compiler re-write

Prints “True” and not “Evaluate b”, showing that it behaves like the built-in or. When wrapped in trace_calc:

print(trace_calc(my_or, a, Lazy(lambda: b())))

This prints “True” (plus trace_calc’s trace line) and not “Evaluate b”, which shows that Lazy variables are passed on unevaluated.

Note that my_or was just an example; it is not proposed that either and or or be changed.

Citation does not support the given claim.

import time


def slow():
    time.sleep(3600)

    return False


def func(a, lazy):
    if a:
        return True

    return lazy()


print(func(True, lambda: slow()))  # slow() is never evaluated

Is there a particular reason to avoid using a lambda in this case? What is Lazy trying to achieve that a lambda couldn’t already handle?

For your exact example the advantage of Lazy is:

  1. The call to evaluate lazy inside func would be inserted by the compiler, i.e. the programmer would write:
def func(a, lazy: Lazy):
    if a:
        return True
    return lazy

Translated by compiler into:

def func(a, lazy: Lazy):
    if a:
        return True
    return lazy()  # Call added by compiler
  2. The wrapping when calling func is automatic, i.e. the programmer would write:
func(a, slow())

Translated by compiler into:

func(a, Lazy(lambda: slow()))
  3. (Not relevant to this example.) The result of the evaluation is cached, as the demo below shows.
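
As a demonstration of point 3, using the sketch Lazy class from earlier in the thread (my illustration, not compiler output), the wrapped expression runs once and later calls reuse the cached value:

def noisy():
    print('Evaluate noisy')
    return 42

lazy = Lazy(lambda: noisy())
print(lazy())  # prints 'Evaluate noisy', then 42
print(lazy())  # prints only 42; the cached result is reused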

Does this mean lazy is evaluated before being returned? Why? Who is actually using lazy?

def func(a, lazy: Lazy):
    if a:
        return True
    return lazy()  # Call added by compiler

def func2(lazy: Lazy):
    return lazy  # not used 

func2(func(1, lazy))  # not used

In the example above, lazy is never actually used.

Why cache the result? Is it that expensive to compute? And is caching itself really free?


Here’s an example of a class designed for expensive computations that are evaluated only when used. You can pass an instance of this class around at no cost.

import time


class Slow:
    def __init__(self, value):
        self._value = value

    def __int__(self):
        return self._value

    def __add__(self, other):
        print("using __add__...")
        time.sleep(1)
        return self._value + other

    def __iadd__(self, other):
        print("using __iadd__...")
        time.sleep(1)
        self._value += other
        return self

    def __radd__(self, other):
        print("using __radd__...")
        time.sleep(1)
        return other + self._value


# Example usage
slow = Slow(10)

print(slow + 1)  # __add__
print(2 + slow)  # __radd__
slow += 3  # __iadd__
print(int(slow))

recursions = 100


def func(slow, recursion=0):
    if recursion > recursions:
        return time.time()
    else:
        return func(slow, recursion + 1)


t = time.time()
s = (func(slow) - t)
print('recursions/s:', recursions // s)

If the return type is annotated as Lazy, the value is returned unevaluated. func2 gives no return type, therefore the value is evaluated when it is returned.

Some applications I used this for are expensive; e.g. I wanted the ease of use of a variable, without a surprise like a recalculation.

I don’t think you would use Lazy for something like Slow.

What exactly makes lazy lazy in this case?

  1. If a variable/attribute/return-value is typed as Lazy and is assigned to:

a. If the value to be assigned is of type Lazy, then it is assigned unchanged.

b. If the value to be assigned is not of type Lazy, or is an expression, it is wrapped as Lazy(lambda: <expr>).

  2. If a variable/attribute/return-value is not typed as Lazy and is assigned to:

a. If the value to be assigned is of type Lazy, then it is unwrapped by calling the Lazy object before assignment.

b. If the value to be assigned is not of type Lazy, it is a normal assignment (see the sketch after this list).
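
A sketch of my reading of the four rules as before/after pairs; the rewrites shown in the comments are illustrative, not actual compiler output, and Lazy is the sketch class from earlier in the thread:

# As the programmer writes it:    # As the compiler would rewrite it:
a: Lazy = 1 + 2                   # a = Lazy(lambda: 1 + 2)  (rule 1b: wrap the expression)
b: Lazy = a                       # b = a                    (rule 1a: already Lazy, unchanged)
c = b                             # c = b()                  (rule 2a: unwrap by calling)
d = 3                             # d = 3                    (rule 2b: normal assignment)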

I mean, where is the lazy evaluation actually happening? You’re managing it strictly and explicitly using type hints*; that’s just normal evaluation. It’s no different from the Slow class example: standard evaluation behavior.

This would qualify as lazy evaluation, if it didn’t raise ZeroDivisionError: division by zero:

def func(x):
    return 42


func(1 / 0)

In the example above, the x parameter is never used, so it’s never evaluated.

Laziness in evaluation refers to how the interpreter or compiler evaluates code. Python is not a lazy language. For a true lazy language, look at Haskell.

*Using type hints has no effect at runtime, as explained earlier in a previous post.

The proposal is to change the compiler so that, at compile time rather than runtime, code annotated with Lazy gets transformed.

As an aside, transformation of code based on type annotations already happens in Python, notably in dataclasses.
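
For example, the @dataclass decorator reads a class’s __annotations__ when the class statement executes and generates methods from them:

from dataclasses import dataclass

@dataclass
class Point:
    x: int  # fields are discovered by reading the class's __annotations__
    y: int

print(Point(1, 2))  # Point(x=1, y=2): __init__ and __repr__ were generated from the annotations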

Back to the proposal. If your example was changed to use a Lazy annotation:

def func(x: Lazy):
    return 42

Then the call:

func(1 / 0)

Wouldn’t cause an exception because the compiler would have translated it into:

func(Lazy(lambda: 1 / 0))

Before execution.
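
With the sketch Lazy class from earlier in the thread, that rewritten call can be checked by hand; x is never called, so the division never runs:

def func(x):
    return 42

print(func(Lazy(lambda: 1 / 0)))  # prints 42; 1 / 0 is never evaluated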

(Copying a comment I made in a different thread)

Having a language feature gated behind the use of type annotations is a major change in Python’s design philosophy. Even dataclasses can be used without type annotations, if you want to.

I’m not saying that it’s impossible (maybe typing is now ubiquitous enough that it’s time to allow features to not support untyped code), but it would need to be agreed as a matter of principle, with consensus from the core devs/SC.

Going against the “typing is optional” principle in an ordinary feature proposal is a good way to get your proposal rejected, I’m afraid.

To be explicit, I’ve only been skimming this discussion - if the proposal includes a way to use it without needing type annotations, what I say above doesn’t apply. But it sounds very much like the annotation is an integral part of the proposal.

You can use annotations, much like the @dataclass decorator does, but the interpreter doesn’t care. Honestly, neither do I; if I wanted to enforce types, I’d be using C.

def example(a: int, b: str) -> bool:
    return str(a) == b

print(example.__annotations__)
# {'a': <class 'int'>, 'b': <class 'str'>, 'return': <class 'bool'>}

It can be used manually, but there isn’t much point; therefore I would say that it would need core-dev/SC imprimatur. How does that happen?

In the context of this thread it won’t happen. You have proposed an unworkable idea and then ignored explanations as to why it is unworkable. No core dev can come along and somehow resolve that even if they wanted to.

If the compiler generates different code that does different things, that most certainly affects runtime behaviour.