I’ve been supporting this proposal from the start, but I have never said you should ALWAYS use it. Use it when it makes sense, like every other new feature. I’m personally against autoformatters in general, so I’ve stayed out of that wing of the discussion; I suppose maybe my silence was taken as support for that view?
I haven’t seen people in general arguing along these lines with respect to PEP 736 as a language feature; I’m not sure anyone has. The conversation about linters/formatters is a different story. Obviously, if linters/formatters adopt auto-conversion by default, that forces users of those tools into the shorthand syntax in all cases because, as you point out, a linter can’t look at the semantics of the code and determine, based on the writer’s preference, whether a particular instance should be converted or not.
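(For anyone skimming, a minimal sketch of the rewrite in question. The function and variables are invented for illustration, and the shorthand is PEP 736’s proposed syntax, which doesn’t parse on any released Python, so it’s shown in a comment.)

```python
def connect(host, port, timeout):
    print(f"connecting to {host}:{port} (timeout={timeout})")

host = "localhost"
port = 8080
timeout = 5.0

# Today's explicit spelling:
connect(host=host, port=port, timeout=timeout)

# What an auto-converting formatter would emit under PEP 736
# (proposed syntax, not valid on any released Python):
# connect(host=, port=, timeout=)
```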
I guess I don’t yet know whether I would want my linter/formatter to flag or convert these; I’d want to use the feature for a bit and see if I like it. I also suspect that I would be in favor of using this feature in all cases where it’s applicable, but I haven’t really looked at code with an eye towards this to be sure. If my suspicion is right, then I wouldn’t be upset by my linter/formatter automatically converting. In other words, I’m curious to try out the feature a bit and find cases where I would prefer NOT to use the shorthand syntax. But I 100% recognize that this whole paragraph is my personal opinion and others will validly have different opinions.
For the short term, my opinion is that the safe route is for linters/formatters to support the shorthand notation or the longhand notation, but not to convert one way or the other unless the user opts into that behavior.
I want to look at the PEP language again, but if it needs a stronger warning against blindly applying the transformation from f(x=x) to f(x=) in all cases, I’m not opposed to that at all.
As with any other language feature, the programmer should exercise their own judgement about whether to use it in any given context. We do not recommend enforcing a rule to use the feature in all cases where it may be applicable.
And Joshua extended this above:
As described above, we propose that a reasonable rule of thumb would be to use this in cases where a parameter and its argument have the same semantics in order to reduce unintentional desynchronisation without causing inappropriate coupling.
Maybe more emphasis on this point could be included in the “How to Teach This” section, where examples could be given of when users might consider using this feature and when they might consider not using it.
Do we have concrete examples of when not to use this feature in this thread already? I’ll take a glance through.
edit:
Actually, there are two things to warn against. It sounds like there are cases where one should avoid converting f(x=x) to f(x=), but we should also warn against renaming variables for the sole purpose of using this feature. In some cases that might be a fine thing to do, but in others it should likewise be avoided.
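(A hypothetical example of that second case; the functions and names here are invented:)

```python
def lookup_oncall():
    return "oncall@example.com"

def send_email(recipient, subject, body):
    print(f"to={recipient}: {subject}")

report = "CPU at 95% for 10 minutes"

# The local name carries context that the parameter name doesn't:
escalation_contact = lookup_oncall()
send_email(recipient=escalation_contact, subject="Alert", body=report)

# Renaming escalation_contact to `recipient` would enable the PEP 736
# shorthand send_email(recipient=, subject="Alert", body=report), but
# it would erase the fact that this particular recipient is the
# on-call escalation contact.
```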
What I mean here is that module-level configuration, set from one place and then affecting the results other callers get, is questionable design. If a module is a private singleton in an application, I think it is manageable. But if you hope for re-use, then it’s a bad idea, as different parts of the same application might want different configuration.
I think this is what @boxed means by “universally seen as horribly bad”. Several libraries that started this way have since introduced a context object class to avoid this problem, and that’s the usual way to proceed in new code.
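For concreteness, a minimal sketch of the two designs, with invented names:

```python
# Module-level configuration: questionable for reusable code, since one
# caller's set_precision() silently changes what every other caller gets.
_precision = 2

def set_precision(p):
    global _precision
    _precision = p

def fmt(value):
    return f"{value:.{_precision}f}"

# The usual remedy: a context object, so different parts of the same
# application can hold different configurations side by side.
class Formatter:
    def __init__(self, precision=2):
        self.precision = precision

    def fmt(self, value):
        return f"{value:.{self.precision}f}"

reports = Formatter(precision=4)
ui = Formatter(precision=1)
print(reports.fmt(3.14159), ui.fmt(3.14159))  # 3.1416 3.1
```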
I read your post as pointing out, through irony, that PEP 736 makes this anti-pattern more attractive to the inexperienced programmer.
The idea that linters should enforce every possible syntactic construct is weird. I wouldn’t want a linter to start enforcing the assignment expression (walrus) operator everywhere possible, for example.
But then, I have a deep dislike of Black, so it’s no wonder that I disagree with its designers’ mindset as well.
The key sentence in PEP 736 is that the new syntax “will be interpreted exactly equivalently” to the current verbose syntax. This implies the produced AST will be equivalent. The code isn’t different semantically at all; it’s just shorter, i.e. it reads (and writes!) quicker.
Therefore, there’s no danger in making this transformation. It produces consistent results across the board, inviting more use, which, in line with the PEP, encourages more consistent variable naming, encourages the use of keyword arguments, and reduces verbosity.
Of course, this transformation can only be performed when the config file for the codebase specifies Python 3.13+; otherwise the code would no longer run on older versions of Python.
Ultimately, that’s for the SC to decide. If the PEP passes, then I believe the message is that it’s clearly better to have it than not; we don’t usually disturb the status quo for marginal wins.
Due to the sheer decrease in character count, information density is lowered. I believe this will be a net positive.
Is the equal sign too insignificant for visibility? I don’t think so. It’s already used as a postfix symbol in f-string expressions without issue. And it occupies just as much space as the asterisk that signifies variadic arguments.
But again, it’s up to the SC. If this gets accepted, Black intends to embrace it.
Does black also recommend transforming all instances of f"x={x}" to f"{x=}"? Those are also entirely equivalent, but express a slight difference in intent (if you rename the variable x, the second one will have the label follow the rename, the first one won’t).
This has to be a non-argument; otherwise black should remove all unnecessary whitespace and reduce all indentation to one space per level. Being shorter and quicker to read/write is not the measure of what a linter should do.
That is not how density works. It is the same amount of information expressed in fewer characters, so the density is higher.
But tbf, I have zero interest in black’s opinions anyway, and this stance hasn’t improved my view of the project. I am still weakly in favor of this proposal, but if it results in PRs “improving” the code with this change because black/the SC said so (we are already getting PRs with nothing but “black said so”), then I am strongly against.
Ironically, the AST for the f-string case is not equivalent: f"{x=}" means f"x={x!r}".
Consequently, Black doesn’t touch it. In fact, currently it doesn’t touch string insides (besides reindenting docstrings and tidying escape sequences).
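To make the non-equivalence concrete (the = specifier needs Python 3.8+):

```python
x = "spam"
print(f"x={x}")    # x=spam   -- plain interpolation, uses str(x)
print(f"{x=}")     # x='spam' -- debug form, equivalent to f"x={x!r}"
print(f"x={x!r}")  # x='spam' -- the expanded equivalent
```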
I’m sorry, yes, I meant to say the opposite but I hope you got my meaning.
I mean, technically? But I am not sure whether you think lower or higher information density is better. IMO, it has little to do with readability unless it’s at the extremes, so I am not convinced that it’s a useful metric here.
I would be interested to hear an example of a case where someone would not want to use this feature, contrasted with a case where they would want to use it. (That is, “I would never use this feature” doesn’t count.)
I’d avoid using this feature in any case where there was mixed use of transparently passed and non-transparently passed kwargs. It invites asking “was leaving this out intentional?” when something is done differently, and while a comment could explain that it wasn’t a mistake, so would just spelling it out.
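A hypothetical sketch of what I mean, with invented names:

```python
def render(title, width, height, padding):
    print(title, width, height, padding)

title = "Report"
width = 800
height = 600

# Three of the four arguments could use the shorthand; under PEP 736
# this call might be written render(title=, width=, height=,
# padding=width // 40), which invites the question "was padding= left
# long-form on purpose, or is it a mistake?"  Spelling all four out
# keeps the asymmetry unremarkable:
render(title=title, width=width, height=height, padding=width // 40)
```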