Would the comprehension equivalent of your example be simpler? Legibility is also a matter of perception: if you’re implementing this kind of pipe-like operation with functional constructs, the readability of the code will depend on a few things, foremost indentation, comments, etc. Functional expressions are often nicer than their imperative equivalents, and much more elegant.
And I should also add that there are many legitimate use cases for pipe operations in the real world: a lot of data engineering / ETL code is just pipes, a sequence of transformations, each implemented by chaining multiple smaller steps into a single statement that consumes the output of the previous transformation. A lot of Pandas code can be written this way, and provided it is indented and commented properly, it is quite readable, and nice too.
Here are two examples from recent work projects, the first from catastrophe loss modelling:
(
    pd.merge_asof(
        sorted_loss_tiv_ratios_df,
        enhancement_params_df,
        left_on='loss_tiv_ratio',
        right_on='mean_damage_ratio',
        direction='nearest',
    )
    .set_index('loss_tiv_ratio')
    .loc[loss_tiv_ratios, :]
    .reset_index(drop=True)
    ['enhancement_factor']
)
A second example (also from catastrophe loss modelling):
(
    hazard_table
    .drop(columns=drop_columns)
    .loc[1:, keep_columns]
    .astype(float)
    .fillna(0)
    .rename(columns=output_columns)
    .reset_index(drop=True)
)
I won’t attempt the imperative equivalent of this in Pandas, but it wouldn’t look pretty: when you see people write it, it often involves iterating over rows and performing row-wise operations. Slow and horrible.
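To make the contrast concrete, here’s a minimal self-contained sketch along the lines of the second example: the toy data and column names are made up (the real tables aren’t shown above), and the chained pipeline is set side by side with the row-wise version it replaces.

```python
import pandas as pd

# Toy stand-in for hazard_table (data and column names are hypothetical).
hazard_table = pd.DataFrame({
    "id": [1, 2, 3],
    "wind": ["10", "20", None],
    "flood": ["1", None, "3"],
})
keep_columns = ["wind", "flood"]

# Chained version: one readable pipeline, vectorised throughout.
chained = (
    hazard_table
    .loc[1:, keep_columns]
    .astype(float)
    .fillna(0)
    .reset_index(drop=True)
)

# Row-wise imperative version: same result, more code, and slow at scale.
rows = []
for _, row in hazard_table.iloc[1:].iterrows():
    rows.append({
        col: float(row[col]) if pd.notna(row[col]) else 0.0
        for col in keep_columns
    })
imperative = pd.DataFrame(rows)
```

Both produce the same frame, but the chained form states the transformations in order, while the loop buries them in per-row bookkeeping.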
My examples use Pandas, but I’m trying to draw an analogy with piped operations in general. What the OP (@dg-pb) proposes would, I believe, enable users to write this kind of piped code where you can build and compose several pipelines cleanly into one, in a very concise way.
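For what it’s worth, Pandas’ existing DataFrame.pipe already gestures at this composition style within one library: a minimal sketch (the pipeline functions and data here are made up for illustration), where each stage is a plain function and the chain composes them in reading order.

```python
import pandas as pd

# Hypothetical pipeline stages, each a plain DataFrame -> DataFrame function.
def clean(df):
    # Drop incomplete rows and renumber.
    return df.dropna().reset_index(drop=True)

def to_float(df):
    # Coerce all columns to floats.
    return df.astype(float)

def scale(df, factor):
    # Rescale every value.
    return df * factor

df = pd.DataFrame({"x": ["1", "2", None], "y": ["4", None, "6"]})

# .pipe threads the frame through each stage, top to bottom.
result = (
    df
    .pipe(clean)
    .pipe(to_float)
    .pipe(scale, factor=10)
)
```

The proposal would generalise this shape beyond objects that happen to provide a .pipe method.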