You’re speaking purely from the perspective of performance optimization, when really the more important quality of a meaningful language construct IMHO is whether it’s able to reduce the cognitive load required in reading/writing a common code pattern. It’s why we got the `@decorator` syntax when all it does is `f = decorator(f)`. It’s why we got comprehensions when `for` loops can already do the same thing. It’s why we got the `match`-`case` statement when `if`-`elif`-`else` would do. These syntactic sugars were introduced because they allow us to read/write those common code patterns in ways that fit our mental model.
To be sure, no language construct can do anything that isn’t already possible with a language’s existing facilities, so long as the language is already Turing-complete.
Python has a great assortment of higher-order functions and transformation functions, built-in and in the stdlib, to support the paradigm of functional programming. But currently functional programming in Python simply isn’t great to read or write because:
- If we compose function calls with `f(g(x))`, we read `f` before `g` when the code actually flows from `g` to `f`, so we always have to search inwards for the starting point of the code.
- If we split function calls into separate statements, i.e. `temp = g(x); result = f(temp)`, we are forced to perform intermediate assignments even when the assignments are otherwise meaningless and only add noise and boilerplate to the code.
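Both complaints can be seen side by side in a small made-up transformation (the helper names here are mine, purely for illustration):

```python
# Hypothetical text-processing steps, named for illustration only.
def tokenize(s):
    return s.split()

def drop_empty(tokens):
    return [t for t in tokens if t]

def count(tokens):
    return len(tokens)

text = "the quick  brown fox"

# Style 1: nested calls -- read outside-in, opposite to execution order.
n = count(drop_empty(tokenize(text)))

# Style 2: intermediate assignments -- executes top-down, but the
# temporary names carry no meaning of their own.
tokens = tokenize(text)
nonempty = drop_empty(tokens)
n2 = count(nonempty)

assert n == n2 == 4
```

Neither style matches how we narrate the computation: "take the text, tokenize it, drop the empties, count them."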
A good pipeline syntax would address both of these issues, expressing the actual flow of the program with minimal noise.
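Absent dedicated syntax, the flow can at least be approximated in today’s Python with a tiny helper built on `functools.reduce` (a sketch; the name `pipe` is my own, not a stdlib function):

```python
from functools import reduce

def pipe(value, *funcs):
    """Thread value through funcs left to right: pipe(x, g, f) == f(g(x))."""
    return reduce(lambda acc, fn: fn(acc), funcs, value)

# Reads in execution order, with no throwaway intermediate names.
result = pipe("  Hello, World  ", str.strip, str.lower, len)
print(result)  # 12
```

This captures the reading order, but a real pipeline syntax could do it without the function-call ceremony and with better support from type checkers and tooling.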