Full disclosure: I’m not sure this is a good idea (since it is seriously cryptic when you first encounter it, and hard to look up on your own), so I have precisely zero plans to pursue it any further myself. I do, however, find it to be an intriguing idea (prompted by this post in the PEP 736 discussion thread), so it seemed worth sharing in case someone else liked it enough to pick it up and run with it.
The core of the idea:
A standalone @ symbol on the right hand side of an assignment expression or a regular (not augmented) assignment statement with a single target becomes a shorthand that means “assignment target”. Exactly what that means depends on the nature of the assignment target (more details on that below).
When the assignment target is an identifier, @'' or @"" can be used to mean “the assignment target as a string” (useful for APIs like collections.namedtuple and typing.NewType)
In function calls using keyword arguments, @ and @'' are resolved as if the parameter name were a local variable (so param_name=@ would pass a local variable named param_name, and param_name=ns[@''] would be equivalent to param_name=ns["param_name"])
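The keyword-argument resolution can be illustrated with today's spelling (names like `call_api`, `timeout`, and `ns` are purely illustrative; `@` and `@''` are of course not real syntax):

```python
timeout = 30
ns = {"timeout": 60}

def call_api(timeout):
    return timeout

# Hypothetical: call_api(timeout=@) would pass the local of the same name,
# i.e. today's:
assert call_api(timeout=timeout) == 30

# Hypothetical: call_api(timeout=ns[@'']) would subscript by the parameter
# name as a string, i.e. today's:
assert call_api(timeout=ns["timeout"]) == 60
```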
Handling different kinds of assignment targets:
identifiers: @ is an ordinary variable lookup for that name
dotted attributes: @ looks up the corresponding attribute rather than setting it. The target expression up to the last dot is only evaluated once for the entire statement rather than being evaluated multiple times
subscript or slice: @ looks up the corresponding container subscript rather than setting it. The target container expression and subscript expression are only evaluated once for the entire statement rather than being evaluated multiple times
tuple unpacking: not allowed (specifically due to star unpacking)
multiple targets: not allowed (which target should @ refer to? Leftmost? Rightmost? Tuple of all targets?)
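The single-evaluation rule for attribute and subscript targets can be sketched with temporaries in today's Python (the `Config` class, `_obj`, `_key`, and `subscript_expression` are illustrative stand-ins, not part of the proposal):

```python
class Config:
    def __init__(self):
        self.nested = self
        self.attribute = "MiXeD"

some = Config()

# Proposed: some.nested.attribute = @.lower()
# "some.nested" would be evaluated exactly once, roughly as:
_obj = some.nested
_obj.attribute = _obj.attribute.lower()
assert some.attribute == "mixed"

calls = []
def subscript_expression():
    calls.append(1)  # track how many times the subscript is evaluated
    return 0

container = ["HELLO"]
# Proposed: container[subscript_expression()] = @.lower()
# The subscript expression would be evaluated exactly once, roughly as:
_key = subscript_expression()
container[_key] = container[_key].lower()
assert container == ["hello"] and len(calls) == 1
```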
Examples:
Vector = typing.NewType(@'', tuple[float, ...])
some_target = @ if @ is not None else []
call_with_local_args(some_target=@)
some.nested.attribute = @.lower()
some_container[subscript_expression()] = @.lower()
some.nested.container[x:y] = reversed(@)
running_tally = 0
running_tallies = [(running_tally := @ + len(x)) for x in items]
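For comparison, here is what a few of the examples above would mean in today's Python, with the `@` references expanded back out by hand (values are illustrative):

```python
import typing

# Vector = typing.NewType(@'', tuple[float, ...]) would mean:
Vector = typing.NewType("Vector", tuple[float, ...])
assert Vector.__name__ == "Vector"

# some_target = @ if @ is not None else [] would mean:
some_target = None
some_target = some_target if some_target is not None else []
assert some_target == []

# running_tallies = [(running_tally := @ + len(x)) for x in items]
# would reference the walrus target, i.e. today's:
items = ["a", "bb", "ccc"]
running_tally = 0
running_tallies = [(running_tally := running_tally + len(x)) for x in items]
assert running_tallies == [1, 3, 6]
```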
Disallowed due to the potential for ambiguity:
a = b = c = d = @
(a, b, *c, d) = reversed(@)
(a, b, c, d) = reversed(@) # Could potentially be allowed
a += @ # Could potentially be allowed
Note that it isn’t that there’s no reasonable meaning for these (see the various comments about that later in the thread), it’s that the use cases for them are weak and they’d necessarily be harder to interpret than the simpler cases above. If they’re left out initially, it’s straightforward to add them later. If they’re included up front and are later found to be problematic, they’re hard to get rid of.
The mnemonic for remembering this use of the @ (“at”) symbol is the acronym “AT” for “Assignment Target”. Aside from the shared symbol, it has no connection to function decorators or matrix multiplication. As a style recommendation, it would be worth pointing out that combining assignment target references and the matrix multiplication operator in the same expression is intrinsically hard to read (due to the potential confusion between the two uses).
Edit: added a clarifying note to the list of arguably ambiguous targets
The = @.lower() examples reminded me of the previous .= assignment discussion. Still not sure whether it would be a good idea, but the @ form is more general than the .= special case. Definitely intriguing …
I was pretty skeptical when I started reading, but the examples section really sold it to me!
So many of these are things I’ve had to do many times and shrugged about the verbosity.
Especially these:
some_target = @ if @ is not None else []
some.nested.attribute = @.lower()
(and similar)
In fact, the function call examples (PEP 736-like) seemed the least useful and intuitive to me, and I’m not quite sure I follow how they’re consistent with the other assignment targets.
The idea wouldn’t be useful in a regular for loop, but could reasonably be defined in comprehensions and generator expressions:
[@ for semantic_name in container if @]
Less useful though, since it’s already acceptable style to use one letter iteration variable names in such cases.
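Expanded into today's Python, the comprehension form would read as follows (`container` and its contents are illustrative):

```python
# Hypothetical: [@ for semantic_name in container if @]
# would be today's:
container = [0, 3, "", "spam", None, 7]
result = [semantic_name for semantic_name in container if semantic_name]
assert result == [3, "spam", 7]  # only the truthy items survive the filter
```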
I listed those separately because they’re genuinely different from the pure assignment case. Specifically, they’re looking up the given name in a different context from the one where the name is getting set.
Both tuple unpacking and multi-assignment could be allowed by having @ be a tuple in those cases.
My problem with that for multi-assignment is that it feels hard to remember (since rightmost and leftmost target remain plausible alternative meanings)
For tuple unpacking, the restriction could be limited to cases without star unpacking rather than all tuple targets, but it just doesn’t seem useful enough to justify that complexity.
I like the idea a lot, and I don’t quite see what can be potentially ambiguous about the two usages quoted above. The reversed example in particular makes the intent of the code that much more pronounced.
I do think that we should disallow mixed usage of matrix multiplication and the proposed assignment target reference as a special case though.
Maybe I’m missing something obvious, but I don’t even see what can be ambiguous about allowing a starred expression in the assignment targets, where the quoted example would intuitively translate into:
(a, b, *c, d) = reversed((a, b, *c, d))
Also note that we should preserve the data types of tuples and/or lists in the assignment targets when evaluating them on the RHS:
[a, b] = @ + [] # ok
a, b = @ + [] # TypeError: can only concatenate tuple (not "list") to tuple
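The distinction drawn in the comments above is just today's concatenation semantics, which can be checked directly:

```python
# list + list works:
assert [1, 2] + [] == [1, 2]

# tuple + list raises TypeError, as the comment above says:
raised = False
try:
    (1, 2) + []
except TypeError as exc:
    raised = True
    message = str(exc)
assert raised and "concatenate" in message
```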
This would be ambiguous if the function returns a value to an assignment target:
target = call_with_local_args(some_target=@) # is @ target or some_target?
Thanks for sharing this interesting idea! I think the use of @ to refer to assignment targets could be a neat feature, especially for scenarios where we need to reference the target itself, like in the examples with collections.namedtuple and typing.NewType.
However, I do agree that the initial learning curve might be steep for newcomers, and the potential for confusion with existing uses of @ (decorators and matrix multiplication) is a concern.
It would be great to see more community feedback on this proposal, especially regarding the specific cases where it’s disallowed and how those could be handled differently.
I almost wrote “Python is not Perl” as one of my reasons for being dubious about the idea. I changed the expression of that sentiment to the direct comment about it being cryptic instead, as I suspect there may be folks around now whose reaction to my first framing would be “What’s Perl?”
My current summary of my own feelings: “Wow, that’s cool… I think I hate it”. Super useful (so it would get used), but super cryptic and not pretty to read (so I’d feel bad every time I used it, or saw someone else use it). I want to like it because of its utility and practicality, but the way it looks… ugh.
That said, I still feel much the same way about the genuinely popular addition that is the walrus operator, so maybe my aesthetic judgement isn’t the one to rely on here
(Squaring a matrix in-place is amusing in its old school emoticon like nature, though: A @=@ is somewhat similar to the historical way of writing the “amazed” expression that is now rendered as an emoji)
I only left out string forms for the non-identifier cases because I couldn’t think of a reasonable use case for them.
Defining it to be the literal source code target string regardless of the target kind wouldn’t be ambiguous though (and wouldn’t change the meaning for identifiers). That way APIs that accept comma separated strings that are expected to match the names of tuple assignment targets could be passed @'' as shown, while those that accept iterables could be passed @''.split(',').
It would also strengthen the case for allowing @ for target tuples in general (since it would be odd if @'' was allowed, but @ wasn’t).
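A sketch of the “literal target source text” reading of @'' in today's Python (illustrative only; the strings below are written out by hand where @'' would appear):

```python
from collections import namedtuple

# Hypothetical: Point = namedtuple(@'', "x y")
# @'' would be the literal target text "Point", i.e. today's:
Point = namedtuple("Point", "x, y")   # namedtuple accepts comma separated names
p = Point(1, 2)
assert (p.x, p.y) == (1, 2)

# For a tuple target like "x, y = ...", @'' would be the text "x, y",
# and @''.split(',') would suit APIs that take iterables of names:
fields = [name.strip() for name in "x, y".split(",")]
assert fields == ["x", "y"]
```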
In my opinion, Jax solved this problem through better design. Instead of creating the symbols by binding a name, you do:
@jit
def some_function(x):
    ...
The decorator replaces the passed-in value x with a symbol x, produces the expression graph, compiles it, and then calls it with the passed-in arguments.
Similarly, SymPy could have been designed this way:
@sympy.symbolize
def f(x, y, z, u, n, a, b, c, d, e):
    ...  # Use symbols here.
    return expression_root

expression_graph = f()
That is not a nice design for something that is often used interactively or in notebooks, short scripts etc. I guess jax has a more limited scope for how the symbols are used but needing to have all code in functions would be quite limiting for most SymPy users (including many who are not really “programmers” as such).
Other computer algebra systems often just allow symbols to appear automagically without needing to be declared (which is both good and bad). Some, like Matlab and Julia, which are programming languages with symbolic capability, allow a nicer declaration, something like:
@syms x, y, z, t
Those languages could be extended to support this in a nicer way but Python can’t because it lacks macros and it would be considered out of scope for this sort of thing to be built in. The duplication in x, y = symbols('x, y') is of course not unique to SymPy and applies to the other situations mentioned in this thread as well which might equally be solved by macros in other languages.
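The duplication in x, y = symbols('x, y') can be shown with a minimal stand-in (this `Symbol`/`symbols` pair is a toy sketch, not SymPy's actual implementation):

```python
class Symbol:
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return self.name

def symbols(names):
    # Accept either comma or space separated names, like SymPy's helper.
    return tuple(Symbol(n) for n in names.replace(",", " ").split())

# The names appear twice: once as assignment targets, once in the string.
x, y = symbols("x, y")
assert (x.name, y.name) == ("x", "y")
```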
Fair enough, I haven’t seen enough real life SymPy code to know.
Anyway, I think this usage in SymPy and similarly in (now-deprecated) type variables (TypeVar) is the most compelling use case to me. (Are there any other similar cases?)
The other examples seem to just illustrate how this feature makes typing code easier, but reading code slightly harder (at least to my eyes). I think it should be the other way around: we should try to make reading code easier.
I’m going to heart this idea since I think it’s a worthwhile and interesting discussion even though I don’t think it would be a good addition.
This is way off topic, and I’m not trying to suggest how SymPy should be designed, but I’m surprised no-one has mentioned abusing sys._getframe(1).f_locals to have a function that sets variables in the parent frame. With that you could simply do something like
>>> from sympy import Symbol
>>> import sys
>>> def declare_symbols(varnames):
...     vars = varnames.split()
...     locals = sys._getframe(1).f_locals
...     for var in vars:
...         locals[var] = Symbol(var)
...
>>> declare_symbols("x y z a b c")
>>> expr = x * y + z - (a + b + c)
>>> expr
-a - b - c + x*y + z
Part of it is the symbol @ itself: the mnemonic @ => AT => “Assignment Target” is fine, but it is a long story to remember, and one has to know it first.
An alternative would be a keyword that reads in English as what is meant, so that the meaning can be guessed from more common knowledge.
I wouldn’t know which one, since I don’t know English well enough: thesame, idem, ditto?