PEP 657 -- Include Fine Grained Error Locations in Tracebacks

It bit me so hard I had to read Jelle’s explanation three times before I
got it. So I agree that this will be a common user confusion.

Thanks! I think that this is something that will need some adaptation from users to match expectations as I mentioned in PEP 657 -- Include Fine Grained Error Locations in Tracebacks - #8 by pablogsal.

In any case, we can fine-tune this as part of the implementation, as it doesn’t really modify the proposal, so I would prefer to discuss it later, if the PEP is accepted, to keep the discussion focused.

I made the same mistake, but I think the lesson to learn is that the error is entirely inside the ^ range – which should make it simpler to understand.
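To make that mental model concrete, here is a minimal, self-contained reproduction (the dictionary contents are made up for illustration): only one subscript in the chain actually raises, but the error falls entirely inside the sub-expression that would be highlighted.

```python
# Hypothetical data chosen so that an intermediate lookup returns None.
x = {'z': {'x': {'y': None}}}

try:
    # x['z']['x']['y'] evaluates to None, so the final ['z'] subscript
    # is the one that raises; the whole failing expression is inside
    # the range the ^ markers would cover.
    x['z']['x']['y']['z']
except TypeError as e:
    print(e)  # 'NoneType' object is not subscriptable
```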

Maybe using two different markers can alleviate the ambiguity? For example:

 File "", line 6, in lel
    return 1 + foo(a,b,c=x['z']['x']['y']['z']['y'], d=e)
TypeError: 'NoneType' object is not subscriptable

I’d personally prefer to just teach the mental model instead of having a mixed annotation like that. In terms of aesthetics, it doesn’t look as good. Beyond that, we would end up with manual metadata propagation, which would complicate the compiler and be very error-prone: we would need to declare a starting point per node (e.g. Subscript in this case, BinOp in the division case) in all the places they might end up in the compiler. Plus it would cost an additional byte for every potential instruction.

In some cases, the range that could potentially be highlighted might include code on consecutive lines. The same situation is present in the current much improved highlighting of SyntaxErrors in Python 3.10 (thanks Pablo!). However, tracebacks only show a single line of code, and thus do not include all of the potentially problematic code.

It might be helpful to append something like --- to the range indicated by the ^s as an indication that the location of the error continues beyond the code shown.

For example, if x[1][2][3] is None, then the location of an error in the following code:

a = x[1][2

would be shown as follows:

a = x[1][2

Admittedly, in this simple example, one can clearly see an unclosed bracket indicating that the code continues below. However, in more complex cases (especially for SyntaxErrors spanning multiple tokens), such an indication might be helpful.
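As a concrete (hypothetical) illustration of the multi-line case: the failing subscript below spans two physical lines, yet a traceback that shows only a single line of code would cut the highlighted range short.

```python
# Hypothetical nested list chosen so that an intermediate lookup is None.
x = [[[None]]]

try:
    # The failing subscript expression continues onto the next line;
    # a single-line traceback display could not show the whole range.
    a = x[0][0][0][
        1]
except TypeError as e:
    print(e)  # 'NoneType' object is not subscriptable
```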

(I initially thought of using ... instead of --- but thought that the latter was visually more appealing in this context.)


After experimenting some more, I think that --> might look even better:

Traceback (most recent call last): 
   File "", line 1
     a = {'a': 123 
 SyntaxError: invalid syntax. Perhaps you forgot a comma?
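For reference, an error like the one in that traceback can be reproduced with compile() — here with a completed version of the snippet, since the quoted one is truncated. The missing comma between the dict items makes it invalid syntax; on Python 3.10+ the message may include the "Perhaps you forgot a comma?" suggestion, though the exact text can vary between versions.

```python
# Hypothetical completion of the truncated snippet above: two dict items
# with the comma between them missing.
source = "a = {'a': 123 'b': 456}"

try:
    compile(source, "<example>", "exec")
except SyntaxError as e:
    print(e.msg)  # e.g. "invalid syntax. Perhaps you forgot a comma?"
```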

+1 on Jelle’s comment on what should be highlighted and Guido’s summary.

I wanted to point that out, but also the inverse: consider a complex expression like the following (imagine the placeholder names to be much longer):

foo = bar(

The “failing evaluation” may easily be multi-line, e.g. from 4:2 until 6:7.
I guess this information also needs to be preserved; even if it’s not trivial to underline the failing op in a plain-text traceback, a richer environment could highlight the background of the failing op.
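On preserving that information: PEP 657 proposes exposing the start and end positions of each instruction on code objects via co_positions(), which is exactly what a richer environment could use to highlight a multi-line span. A minimal sketch, assuming an interpreter that implements the PEP (Python 3.11+):

```python
def f():
    return 1 + 2

# co_positions() yields one (lineno, end_lineno, col_offset, end_col_offset)
# tuple per instruction; guard for interpreters that predate PEP 657.
if hasattr(f.__code__, "co_positions"):
    for pos in f.__code__.co_positions():
        print(pos)
else:
    print("co_positions() not available on this interpreter")
```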

This is a fantastic improvement that I’m looking forward to seeing implemented! However, I was rather confused by the opt-out behavior in the PEP: the opt-out section discusses what, based on this discussion, seems to be the current proposed mechanism (an env var and a -X runtime flag), while in the rejected alternatives section, the Configure flag section refers to what I presume was the previous mechanism, which tied it to the optimization flag.

Presuming I’ve interpreted this correctly, it seems you might want to update the configure flag section to refer to the current mechanism, consider adding a rejected-alternatives item describing why tying the opt-out to the optimization flag (which naively seemed logical) wasn’t ideal, and perhaps even add an explicit clarification in the opt-out section that the optimization flag doesn’t trigger the behavior. Or, if this is not the case and I am still confused, clarify it with whatever is the case :)

No, the configure flag refers to the autotools configure step, in which you basically set up a bunch of preprocessor macros. This means that the rejected idea is to configure this at CPython (the interpreter) compile time (configure time).

Sorry for the confusion; I should have quoted the specific part of that section I was referring to.

Yup, I did infer that, though it would be very helpful to include your very clear and explicit statement of it above in the PEP (per the Zen of Python). As-is, the section leaves several of those critical details implicit and uses compiled/compilation to refer to two very different things (compilation of Python itself and compilation of pyc files), and it took a careful reading and some background familiarity with C and the CPython build process to be sure I was interpreting it correctly. Without that, I could certainly see many Python developers being confused, or thinking this refers to runtime flags, site configuration, or something else.

However, the confusion I actually meant to refer to above was in the sentence at the end of the section that presumably is intended to refer to the alternative currently described in the PEP,

For these reasons we have decided to use the -O flag to opt-out of this behaviour.

as well as a reference in the first sentence (emphasis mine)

Having a configure flag to opt out of the overhead even when executing Python in non-optimized mode

which doesn’t seem to be up to date with the latest changes, which use a dedicated env var and -X runtime flag to opt out rather than the -O optimization runtime flag, if I’m understanding the flow of events correctly. Assuming this is right, these passages should be updated to reflect that, and the previous approach of using the optimization runtime flag to control this behavior should presumably be documented as a separate item in the rejected alternatives section. Thanks!