Why are some @expressions syntax errors?

(Albert Hopkins) #1

I have the following 3 functions:

def dec1(func):
    return func

def dec2():
    return dec1

def dec3():
    return dec2

If I apply the @ decorator syntax to a function then:

  1. @dec1 works as expected.
  2. @dec2() works as expected.
  3. @dec3()() is a SyntaxError.

Just by looking at it, 3 doesn’t seem that different from 2 (which in turn doesn’t seem that different from 1). So why is it a syntax error?

Of course, not using the @ syntax (e.g. foo = dec3()()(foo)) works as expected.
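The manual form can be checked directly (a minimal sketch using the three functions above plus a throwaway foo; note the grammar restriction discussed here was later lifted by PEP 614 in Python 3.9):

```python
def dec1(func):
    return func

def dec2():
    return dec1

def dec3():
    return dec2

def foo():
    return "hello"

# the decorator machinery applied by hand -- legal in any Python version,
# even where @dec3()() was rejected by the decorator grammar
foo = dec3()()(foo)
print(foo())  # hello
```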

(Albert Hopkins) #2

For that matter this works:

dec = lambda func: func

@dec
def myfunc():
    ...

This is a SyntaxError:

@(lambda func: func)
def myfunc():
    ...

Which should be equivalent to:

def myfunc():
    ...

myfunc = (lambda func: func)(myfunc)

These are obviously useless examples, but I’m just wondering why they are not allowed.

(Jack Jansen) #3

The grammar indeed shows that after the @ you can have only a dotted name followed by an optional arglist. So it looks quite a bit like an expression but it really isn’t.

See https://docs.python.org/3/reference/grammar.html
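For reference, the decorator rule in the pre-3.9 grammar was a single production (PEP 614 later relaxed it to accept any expression):

```
decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE
```

That is why `@dec2()` parses (a dotted name plus one optional call) while `@dec3()()` and `@(lambda func: func)` do not.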

(Nathaniel J. Smith) #4

PEP 318 cites this email as the source of the restriction: https://mail.python.org/pipermail/python-dev/2004-August/046711.html

(Albert Hopkins) #5

Thanks for the responses. Good to know. I wonder if the gut feeling paid off. It kind of would be nice if it were an expression, but I can see how that might get confusing/ugly.

(Jeroen Demeyer) #6

I have also been bitten by this. Personally, I think that it was a mistake to not allow arbitrary expressions. Just because it’s ugly doesn’t mean that it should be forbidden. As far as I know, it’s the only place in the Python grammar with such a strange rule…

(Nick Coghlan) #7

https://bugs.python.org/issue19660 is a more in-depth discussion of a proposal to drop the restriction.

Actually doing so would require a PEP, since it’s a syntax change, and the biggest challenge it would face is that most of the currently disallowed cases end up being more readable when the decorator expression is refactored into an appropriately named function.

The context where the original syntactic restriction was imposed was one where the existing code pattern the new syntax was intended to replace was either:

def my_method():
    ...
my_method = classmethod(my_method)

or:

def my_function():
    ...
my_function = some_decorator_factory(*some_args)(my_function)

So the original syntactic restriction had the effect of making it more obvious which cases of function post-modification were suitable for refactoring to use the new more declarative decorator syntax, and which should continue to be handled as imperative post-modification. It also had the benefit of almost entirely eliminating the risk of cryptic one-liners that overshadowed the trailing function definition as a potential point of concern. So while the PEP specifically cites Guido’s post about it as BDFL, he was far from being the only one concerned by the prospect of folks getting carried away with overly complex decorator expressions, and it turned out that having to nest things inside a no-op function call was enough of a hint to get folks not to do that kind of thing.

The situation today is different, in that we have a decade-and-a-half of experience with the restricted syntax to set the precedent for what “reasonable decorator syntax usage” looks like, and a couple of currently disallowed cases where readability would remain quite reasonable (i.e. using subscripting instead of a function call, doing an attribute lookup after the function call).
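A sketch of the subscripting case mentioned above (all names here are hypothetical, chosen only to make the shape concrete): under the restricted grammar, a subscript expression after the @ was a SyntaxError, so a trivial named factory was needed to restore a legal "dotted name plus call" shape:

```python
# hypothetical registry mapping names to decorators
decorators = {"validate": lambda f: f}

# the restricted grammar rejected:  @decorators["validate"]
# so a tiny helper turns the subscript into a plain call:
def pick(name):
    return decorators[name]

@pick("validate")
def handler():
    return "ok"

print(handler())  # ok
```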

So while a PEP to change this would still face a lot of challenges (since any grammar change has a non-trivial implementation cost not only for the reference interpreter, but also for developers of other implementations and tools that are sensitive to the exact language grammar), it would have a chance of success if it was able to identify cases where a suboptimal API design was chosen to fit within the constraints of the decorator syntax, even though there were other options that would have been more readable (or equally readable with less code complexity) if the syntax had permitted them.

(Jeroen Demeyer) #8

What I still do not get is why that gut feeling ended up affecting the grammar. There are plenty of ways to write ugly, difficult-to-understand Python code. But somehow, for this one special case of decorators, it was felt that the grammar needed to be restricted. As others have said: style issues belong in a style guide, not in the grammar.

Besides, which of these is more readable?

myinteract = interact.options(manual=True)

@myinteract(x=(0,10))
def f(x):
    ...


def f(x):
    ...
f = interact.options(manual=True)(x=(0,10))(f)


@interact.options(manual=True)(x=(0,10))
def f(x):
    ...

Personally, I find the last one the most readable. For context, see https://github.com/jupyter-widgets/ipywidgets/pull/771#issuecomment-262684060

(Guido van Rossum) #9

Personally I find anything that calls the result of a call an abomination. My brain just isn’t wired to parse that, and when I encounter e.g. interact.options(manual=True)(x=(0,10)) I usually miss the fact that this is a call of a call, and I end up having to stop and analyze that expression in depth before I can go on understanding the rest of the code. This doesn’t happen with nested calls for me.

I still can’t explain this (maybe it’s just that I grew up with Algol and Pascal, or my Haskell aversion is showing 🙂), but it is there nevertheless.

(Nick Coghlan) #10

@jdemeyer You’re disadvantaging the first case by deliberately giving the interim result an uninformative name, and you’re also thoroughly familiar with the interact.options API, so it doesn’t cause a mental hiccup for you to have to read it in a larger expression.

Contrast that to:

interact_on_request = interact.options(manual=True)

@interact_on_request(x=(0,10))
def f(x):
    ...
Now someone that’s never seen the interact.options API before will still know what that code is doing, without having to try to find the interact.options documentation (which Google really struggles to locate, even when qualified as “interact options jupyter”).
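The named-intermediate pattern can be sketched with a stand-in for interact.options (the options factory and the config attribute below are hypothetical, just to make the shape runnable):

```python
# hypothetical stand-in for a decorator factory like interact.options
def options(**opts):
    def configured(**more):
        def deco(func):
            # record the merged configuration on the decorated function
            func.config = {**opts, **more}
            return func
        return deco
    return configured

# give the interim result an informative name, then decorate with it
interact_on_request = options(manual=True)

@interact_on_request(x=(0, 10))
def f(x):
    return x

print(f.config)  # {'manual': True, 'x': (0, 10)}
```

The decorated function is unchanged apart from the attached configuration, which is the declarative reading the named intermediate is meant to convey.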