Personally, my interest in this, besides it being a very cool and impressive feature as quoted earlier, is in using it as part of a “durable execution” mechanism.
I have a program that runs some flow/task with multiple steps, and it has to wait to be fed an external event between steps. However, my program may need to scale down, update, restart, or otherwise stop while it is waiting.
My ideal solution for this would be a generator. I would call next() to run a step, then pickle the generator and persist it somewhere. If at any point my program stops, the next time it starts it can just pick up where it left off, and .send() the generator the event when it arrives.
The code for implementing the tasks then becomes a lot easier to read and follow. I have a single function that always keeps its state between steps. So I can define a variable locally and use it in another step, even if that happens in a different process! The flow is very easy to follow and it just looks like a normal function.
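To make this concrete, here is a minimal sketch of what such a task could look like. The names (order_flow, the order/payment events) are hypothetical, invented just for illustration. The next()/send() part works today; the final step is exactly where the idea currently breaks down, since CPython refuses to pickle generator objects:

```python
import pickle

def order_flow():
    # Step 1: create an order, then suspend until a payment event arrives.
    order_id = "order-123"  # hypothetical local state
    payment = yield "awaiting-payment"
    # Step 2: order_id is still in scope here, even though an
    # external event arrived in between the two steps.
    shipment = yield f"shipping {order_id} paid via {payment}"
    return f"{order_id} delivered by {shipment}"

task = order_flow()
print(next(task))                # run to the first wait point
print(task.send("credit-card"))  # feed the external event, run the next step

# This is the missing piece: persisting the suspended generator.
# Today pickle raises TypeError, because generators are not picklable.
try:
    pickle.dumps(task)
except TypeError as e:
    print("not picklable:", e)
```

If generators were picklable, the pickle.dumps() call above would succeed, and a restarted process could pickle.loads() the bytes and .send() the next event into the restored generator.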
Instead, right now, I have a class with multiple functions, each representing a step, plus another function, something like get_steps, that declares the order of the steps. It’s much less readable, and every step also needs its own name, which is just an extra burden. In addition, I have to keep track myself of the current state of the task and which step it’s on.
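For comparison, a rough sketch of that step-class pattern (again with hypothetical names, and a plain dict standing in for whatever persistence layer is actually used) shows the extra bookkeeping involved:

```python
class OrderFlow:
    """Each step is a separate named method; local state cannot be
    shared between steps, so it has to go through an explicit dict."""

    def create_order(self, state, event):
        state["order_id"] = "order-123"
        return "awaiting-payment"

    def ship_order(self, state, event):
        return f"shipping {state['order_id']} paid via {event}"

    def get_steps(self):
        # The order of the steps must be declared separately.
        return [self.create_order, self.ship_order]

# The caller must persist both the shared state dict and the
# index of the current step between events.
flow = OrderFlow()
state, step_index = {}, 0
for event in [None, "credit-card"]:
    step = flow.get_steps()[step_index]
    print(step(state, event))
    step_index += 1
```

Everything the generator version keeps implicitly in its frame (locals, the current position in the flow) has to be managed by hand here.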
I think picklable generators would be a very powerful primitive for this, and for many other use cases.
I do want to recognize that this would probably require significant work to add to CPython, and as with every idea here, this discussion is worth much less without someone who is willing to implement it, or at least a proof of concept.