I’d recommend you start a PR now to change the PEP and (maybe after some review) mention it to the SC so they can take a look.
(Not speaking on behalf of the SC, but that’s what I have done on some previous PEPs and they never got upset at me.)
Hmm… I was also thinking that [x for x in *z] would be a better syntax for this, but this is a reasonable counterpoint, I guess.
The thing that I find most unintuitive about this syntax is how the name binding to the elements of the iterable interacts with this in expressions, so for example:
a = [*(i + 2) for i in [[1, 2, 3]] * 2]
I’m guessing that this unravels to:
a = []
for _intermediate in [[1, 2, 3]] * 2:
    for i in _intermediate:
        a.append(i + 2)
Which gives [3, 4, 5, 3, 4, 5], but that feels very weird to me, because i is explicitly being bound to the elements of the thing being iterated over. The parens usually mean “do this before you do anything else”, and * (in this context) usually means “unpack the thing I am attached to”, not “reach in and change the meaning of a variable in an expression”, so I would expect it to unravel to something more like:
a = []
for i in [[1, 2, 3]] * 2:
    a.extend(i + 2)
which would just raise an exception when you try to add 2 to a list.
I can’t think of any way to spell this on the left side where it’s obvious that the i name is being applied to all elements of the iterables within it. Something like [it for it in *its] feels to me more like saying “expand its and then bind it to every element of the expanded iterable”.
No, that unrolls to
a = []
for i in [[1, 2, 3]] * 2:
    a.extend(i + 2)
Which raises a TypeError, because i + 2 tries to add a list and an integer.
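A quick runnable check of that unrolling (written with today’s explicit-loop spelling, since the PEP syntax isn’t available yet):

```python
a = []
try:
    for i in [[1, 2, 3]] * 2:
        a.extend(i + 2)  # i is a list, so list + int raises TypeError
except TypeError:
    print("TypeError: can't add a list and an int")
```

Nothing is ever appended, because i + 2 fails before extend is called.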
I don’t know where you’re getting the semantics you’re imagining; they are not part of the PEP.
Oh, then I guess that is fairly intuitive (once you get past the stage of “what is this star doing here”), but it also seems somewhat less useful to me, since it means you need a nested generator if you want to apply anything to each of the elements. Then again, languages with flatMap-style operations usually do the mapping first and then the flattening, not the other way around, so maybe that’s the more common use case? Feels weird to me.
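To make that contrast concrete, here is a sketch of both behaviours using today’s explicit loops (the comprehensions in the comments use the proposed syntax and are illustrative only):

```python
data = [[1, 2, 3]] * 2

# Flattening alone — what [*i for i in data] would produce under the PEP:
flat = []
for i in data:
    flat.extend(i)
print(flat)  # → [1, 2, 3, 1, 2, 3]

# To also transform each element you need a nested generator,
# i.e. the equivalent of [*(x + 2 for x in i) for i in data]:
shifted = []
for i in data:
    shifted.extend(x + 2 for x in i)
print(shifted)  # → [3, 4, 5, 3, 4, 5]
```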
I think I must have misunderstood what was happening in the one example that has a non-trivial expression (the one with y := [i, i+1]).
Sorry that this took a bit longer than I had expected, @pablogsal, but I put in (and @Jelle approved) a PR for some wording changes related to this initial feedback this morning (diff). The biggest change was to the Rationale section (which was substantially rearranged/reworded), but I also made a few other small wording tweaks throughout the document.
Just wanted to give a heads-up about the changes; if there’s a better way to let the SC know about revisions like this (presumably there will be more after the next round of feedback), please let me know!
new_list = []
for x in its:
    new_list.extend(expr)

new_set = set()
for x in its:
    new_set.update(expr)

new_dict = {}
for x in dicts:
    new_dict.update(expr)
Shouldn’t x here be expr or vice versa? I think it’s a place that was missed during refactoring.
Or am I missing something?
Thanks for taking a detailed look!
That code is actually what I intended to write since the thing we’re unpacking can be any arbitrary expression (it doesn’t just have to be x). It also mirrors the examples above that use the proposed new syntax:
new_list = [*expr for x in its]
new_set = {*expr for x in its}
new_dict = {**expr for d in dicts}
Ah okay, my bad!
Yeah, then it makes sense, thank you!
On that note, couldn’t we write this as applying a function to x?
new_list = [*f(x) for x in its]
new_set = {*f(x) for x in its}
new_dict = {**g(d) for d in dicts}
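For comparison, the closest current-Python spelling of the list case uses itertools.chain.from_iterable; the f below is a hypothetical per-element expansion for illustration, not something from the PEP:

```python
from itertools import chain

def f(x):
    # hypothetical expansion: each element yields itself and ten times itself
    return [x, x * 10]

its = [1, 2, 3]

# today's equivalent of the proposed [*f(x) for x in its]
new_list = list(chain.from_iterable(f(x) for x in its))
print(new_list)  # → [1, 10, 2, 20, 3, 30]
```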
This is nit-picking to the point of being off topic, but:
From _weakrefset.py:
# current:
return self.__class__(e for s in (self, other) for e in s)
# proposed:
return self.__class__(*s for s in (self, other))
IMO, this one would be cleaner to simplify to:
return self.__class__(*self, *other)
instead of using the new syntax. Mostly just noting this in case the implementation involves making the changes proposed in the PEP.
Your code doesn’t mean the same thing: it unpacks into n arguments, while the original and proposed code both pass a single iterable argument. In fact, a single iterable is the only thing WeakSet will accept, so your code wouldn’t work.
Ugh. Generator comprehensions as function arguments always mess me up, and I’d just read that section. I will probably avoid f(*x for x in xs) like the plague, since I will absolutely expect them to expand as *args.
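A small sketch of why the two spellings differ at the call site; count_args here is just a stand-in for a constructor like WeakSet’s, not real library code:

```python
def count_args(*args):
    # stand-in for a constructor that takes a single iterable argument
    return len(args)

a, b = {1, 2}, {3, 4}

# a generator expression as the sole argument is ONE argument:
one = count_args(e for s in (a, b) for e in s)

# star-unpacking at the call site spreads into FOUR arguments:
four = count_args(*a, *b)

print(one, four)  # → 1 4
```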
I had played with this idea when drafting the PEP but went with expr because I didn’t want to accidentally imply that the expression in that spot needed to be a function call (though perhaps I’m overthinking that…). I’m not at all opposed to changing it, though, if that’s clearer.
Thanks for the changes, and I think following up here is a good way to share them with the SC.
I get this backwards too, but is this really an issue in practice?
If you’re in the REPL, just verify the results. If you’re in your text editor, add a type hint like x: list[int] = [...] if you’re unclear about the order of the nested for ys in xs for y in ys loop mechanics. This isn’t 2015; we have very useful static-analysis tools and pretty solid LSP implementations.
This may be anecdotal, but I’ve been writing Python professionally for over a decade now and I constantly get this wrong. Yeah, linters and type checkers help, but it’s a sharp edge. Or if not sharp, definitely not ergonomic. I’d love to have this PEP in.
I’d love it. IMO it’ll be an easier way to unpack nested containers than itertools.chain or sum (why sum???)
I love this PEP!!! Isn’t it so much better than itertools.chain or sum (wait, why would I even use sum?)
Hi, welcome to the Python Discourse. It would be appreciated if you wrote your whole post in English, as that is easier to moderate.
After careful deliberation, the Python Steering Council is pleased to accept PEP 798 – Unpacking in Comprehensions.
We are accepting this PEP with one modification: we require that both synchronous and asynchronous generator expressions use explicit loops rather than yield from for unpacking operations.
The Steering Council believes that simplicity and consistency are paramount here. The delegation behaviour provided by yield from adds semantic complexity for advanced use cases involving .send(), .throw(), and .close() that are rarely relevant when writing comprehensions. We don’t believe that developers writing comprehensions should have to think about the differences between sync and async generator semantics or about generator delegation protocols. We firmly believe that the mental model should be as simple as possible and as symmetric as possible between all kinds of comprehensions. The straightforward semantics of explicit loops provide a uniform mental model that works the same way regardless of context, and also provides better parity with the function-like versions, such as itertools.chain.from_iterable. For the rare cases where someone actually needs delegation behaviour, the Steering Council believes they should use an explicit generator function with yield from rather than a comprehension.
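As an illustration of that modification (a sketch, not the PEP’s reference implementation): under the accepted semantics, a generator expression like (*x for x in its) behaves like the explicit nested loop below, not like the yield from version, even though both produce the same values under plain iteration:

```python
def flatten_loop(its):
    # accepted semantics: an explicit nested loop
    for x in its:
        for item in x:
            yield item

def flatten_delegate(its):
    # rejected alternative: yield from, which would also forward
    # .send()/.throw()/.close() to each sub-iterable
    for x in its:
        yield from x

data = [[1, 2], [3]]
print(list(flatten_loop(data)))      # → [1, 2, 3]
print(list(flatten_delegate(data)))  # → [1, 2, 3] (same values when simply iterated)
```

The difference only surfaces when the generator protocol methods are used, which is exactly the complexity the SC chose to keep out of comprehensions.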
Congratulations, and thank you for your excellent work on this PEP. We look forward to seeing this feature in Python 3.15!
— The Python Steering Council
Very exciting! Thanks @pablogsal and the rest of the SC, and huge thanks to everyone here (and in the other thread) for your feedback along the way!
Expect some PRs from me in the near future!