PEP 657 -- Include Fine Grained Error Locations in Tracebacks

6 Likes

I very, very much like this proposal. However, I would suggest a small change. The PEP as currently written includes the following:

Maintaining the current behavior, only a single line will be displayed in tracebacks. For instructions that span multiple lines (the end offset and the start offset belong to different lines), the end offset will be set to 0 (meaning it is unavailable)

Other enhanced traceback packages [1] often display multiple lines surrounding the actual line where an error was located. Such packages might want to highlight any problem code spanning multiple lines. For such tools, it would be very useful to have the end offset NOT set to zero when it is on a different line. If the end line was recorded, CPython could simply use the fact that the end line was different from the beginning line the same way that "end offset == 0" is currently proposed; other packages could make use of the entire information as needed.
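In sketch form, a traceback renderer could consume the sentinel as currently proposed like this (render_carets is a hypothetical helper, not part of any real API):

  def render_carets(source_line, start_offset, end_offset):
      # Hypothetical helper: end offset 0 is the proposed marker for
      # "instruction spans multiple lines", so fall back to underlining
      # the whole line in that case.
      if end_offset == 0:
          return "^" * len(source_line)
      return " " * start_offset + "^" * (end_offset - start_offset)

  line = "return 1 + foo(a,b,c=x['z']['x']['y']['z']['y'], d=e)"
  print(line)
  print(render_carets(line, 21, 42))  # underlines x['z']['x']['y']['z']

If an end line were also recorded, the multi-line check could instead be end_lineno != lineno, and tools that want the full range would still have it.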

[1] An example of such a package is GitHub - aroberge/friendly: Aimed at Python beginners: replacing standard traceback by something easier to understand. In some cases, friendly already shows similar enhanced information about the location in tracebacks, as shown in the picture below. A list of other enhanced traceback packages can be found at the bottom of Some thoughts on the design of friendly — friendly-traceback 0.3.142 documentation

2 Likes

Thank you for bringing this up. We do mention in the rejected ideas section that something like this ought to be useful for external tools, but it can't really be taken advantage of by CPython itself without making big changes to the traceback machinery.

Long term this might be useful to add (I think end line numbers or deltas would also compress fairly well, since most bytecodes will just be on the same line), but I think we would like to keep it out of scope for this PEP, to keep the implementation simple and the overhead low.

I've definitely been bitten by this issue before (not knowing which part of a complex expression threw an error). I'm excited that it's finally getting fixed!

I do have a piece of feedback. In this example from the PEP:

  File "test.py", line 6, in lel
    return 1 + foo(a,b,c=x['z']['x']['y']['z']['y'], d=e)
                         ^^^^^^^^^^^^^^^^^^^^^
TypeError: 'NoneType' object is not subscriptable

my first instinct was that the highlighted expression x['z']['x']['y']['z'] was None. Thinking about it some more, I realized it actually means that x['z']['x']['y'] is None, because the caret points at the piece of AST being evaluated, not the part that's incorrectly None.
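To make the distinction concrete, here is a minimal reproduction (the dictionary contents are assumed for illustration; they are not from the PEP):

  x = {'z': {'x': {'y': None}}}
  x['z']['x']['y']['z']  # TypeError: 'NoneType' object is not subscriptable
  # The highlight covers the whole failing subscript expression
  # x['z']['x']['y']['z'], even though it is the inner lookup
  # x['z']['x']['y'] that evaluated to None.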

I'm worried that this will end up being a common user confusion. I don't have a good suggestion for fixing it, though, so maybe it's just something people will have to learn.

3 Likes

Maybe we should highlight the text most closely associated with the operation that fails rather than with the failing subexpression? E.g. if we have f() + g() and the + operation fails, the caret should only point at +. In Jelle's example (which also bothered me when I first saw it) I would highlight ['z'] only.

4 Likes

That's a good idea. I think the best approach is some hybrid thing where we highlight the range and some position inside it. Many people mentioned that a single caret can be hard to read, especially for people with impaired vision, which motivated this change.

We will explore that, but one of the problems I can see with it is that it requires more manual intervention in the compiler and has more room for error. On the other hand, the current approach is still complex, but we can reuse a lot of machinery and always get correct ranges (as long as the line numbers are correct). Given how many inaccuracies the old manual AST positions have, we are advocating for the least amount of manual work, at least for the first version of this.

I agree with @aroberge here. While I understand that there's a current prototype that doesn't store end line numbers at all, it seems a shame to drop that information, since it is present in the AST. Surely some creative encoding of the data could support this with little overhead (since most instructions don't span multiple lines).

I also think that a more creative solution ought to be found for the case where the offset does not fit in an 8-bit byte.

Finally, I think the discussion should be reopened on how to disable this. The argument that if the info isn't present it might cause crashes seems to be defeated by the possibility of the info being absent when -OO is used. (Also, I think we should not tie this to -OO.)

1 Like

Thinking about it some more, I realized it actually means that x['z']['x']['y'] is None, because the caret points at the piece of AST being evaluated, not the part that's incorrectly None.

Indeed, we will think about it, but fixing this is not trivial. Notice that the error is not saying "this is None" but that "None cannot be indexed". Therefore the highlighted range is the indexing operation, not the part that is None.

It is the same as if you do

  X = None
  X[7]

What is wrong is X[7], not X.

One of the important points of preserving the AST ranges is that if you take the code represented by the range, it will always be a valid syntactic structure, while in the original example ['z'] is not (it is actually valid as a list, but that would be the wrong one :wink: ).

In any case, we can fine-tune this as part of the implementation, but it doesn't really modify the proposal, so I would prefer to discuss it later if the PEP is accepted.

(Meta: can you use quoting instead of saying "this" or "that"? Messages may appear that make such references ambiguous.)

(Meta: can you use quoting instead of saying "this" or "that"? Messages may appear that make such references ambiguous.)

I will do it in my next messages, but I'm writing this on my phone and that makes quoting very hard :frowning:

1 Like

I am afraid I don't understand this. What makes -OO special is that an entirely different pyc file is generated for it, so we can choose not to include the extra information there and thus not read it. On the other hand, running without -OO uses regular pyc files, so we can read the information knowing it will be there. The problem arises when you have a regular pyc file without the information. The marshal code would need some extra information in the header of said pyc file to know that we won't find the extra information in there, and we would need to propagate that through all unmarshal APIs. In short, there are two things to deactivate:

  • Overhead in memory
  • Overhead in pyc files.

The second needs either an entirely new set of pyc files with a different extension, or some flag plus changes in the import mechanism plus new APIs to know whether the information is there or not and not crash when reading it.

What had been proposed is to always write the information and then have a flag (something different from -OO) that throws away the extra information after reading it and places None in the field. The advantage of this is that we don't need to care about the complexity of pyc files with or without the information, and we can select the behaviour at runtime. This still has some challenges: it may slow down the code object constructor to look up the global configuration, and/or it may need some new APIs in the marshal module, which makes things more complex.

The rationale says the data types were chosen in a way that tries to minimize the impact on the size of code objects in memory. However, it doesn't say whether variable-length encodings were seriously considered. It seems likely to me that a variable-length encoding could permit offsets bigger than 255 to be represented without increasing memory over the current proposal for most projects. It even seems possible that a more clever encoding could be chosen that uses less memory than the current proposal.

In the schemes I'm thinking of, the two offsets would be stored as (start_offset, length) across one or more bytes. In the simplest scheme, if the first bit is 0, then the data for both offsets would be contained in two bytes: the next 8 bits would be the start offset, and the remaining 7 the length. That would let you represent start offsets up to 255 and lengths up to 127 in two bytes.
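Here is a minimal sketch of that simplest scheme, assuming the two-byte layout just described (one flag bit, then 8 bits of start offset, then 7 bits of length); the function names are illustrative, not a proposed API:

  def encode_position(start_offset, length):
      # Pack (start_offset, length) into two bytes with the flag bit 0.
      if start_offset > 255 or length > 127:
          # A real encoding would fall back to a longer form here.
          raise NotImplementedError("multi-byte fallback not sketched")
      word = (start_offset << 7) | length  # top (flag) bit stays 0
      return word.to_bytes(2, "big")

  def decode_position(data):
      word = int.from_bytes(data[:2], "big")
      assert word >> 15 == 0, "longer forms not handled in this sketch"
      return ((word >> 7) & 0xFF, word & 0x7F)

  assert decode_position(encode_position(42, 21)) == (42, 21)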

In a more clever scheme, the most common "rectangle" of (start_offset, length) pairs could be represented in one byte, pairs up to (127, 127) in two bytes (encompassing all PEP-8 compliant code), and larger pairs in three or more bytes. That could potentially be more memory efficient than the current proposal. However, it would depend on knowing more about the real-world distribution of (start_offset, length) pairs.

We would be very happy to include this, in fact, if everyone is ok with the potential extra overhead. The encoding would be something like a bytearray of pairs (index, delta), where the index is the index of a bytecode instruction and the delta is how many lines of difference the end line has with respect to the initial one. We can encode big deltas in the same fashion as with lnotab (255 means "read the next offset").
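As a rough illustration, such a side table could look like the following sketch, which assumes instruction indices fit in one byte and uses the lnotab-style convention that a 255 delta byte means "add 255 and keep reading" (none of this is a committed format):

  def encode_end_line_deltas(entries):
      # entries: (instruction_index, line_delta) pairs for the rare
      # instructions whose end line differs from their start line.
      out = bytearray()
      for index, delta in entries:
          out.append(index)
          while delta >= 255:  # 255 is the continuation marker
              out.append(255)
              delta -= 255
          out.append(delta)
      return bytes(out)

  def decode_end_line_deltas(data):
      entries, i = [], 0
      while i < len(data):
          index = data[i]; i += 1
          delta = 0
          while data[i] == 255:  # accumulate continuation bytes
              delta += 255; i += 1
          delta += data[i]; i += 1
          entries.append((index, delta))
      return entries

  assert (decode_end_line_deltas(encode_end_line_deltas([(3, 2), (7, 300)]))
          == [(3, 2), (7, 300)])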

In the worst case this may still represent a non-trivial amount, though, and it is not currently usable in CPython, so that's why we decided to defer adding this field.

I think this is a great idea and will be quite a big improvement for debugging. I think the small space cost (+22%?) is minor relative to the value of the feature. Putting the table in a separate section of the .pyc file seems like a nice optimization. If we don't want to complicate the interpreter with extra options to exclude it, perhaps we could provide a tool that strips that table from .pyc files. Then, someone who is very space-constrained could use that tool. It would be kind of similar to the code minifiers used for JavaScript.

That's a good idea, but I really don't want to complicate unmarshalling and pyc file reading, at least for the initial version of this proposal. The problem with an extra table, different files, or some header flag in pyc files is that it adds quite a lot of complexity to the import mechanism and unmarshalling, and I really don't want to have to add complexity in that region of the interpreter for this proposal.

I think there are more interesting options in having a single flag that deactivates the generation and just places None in the new field instead. There will be an overhead of one object per code object, but that's much, much lower than one byte per instruction.

That makes sense; the overhead when column offsets are disabled is minimal that way, and no new pyc format is required.

Note however that maybe we should consider a radically different pyc format, to speed up loading code. Just not in this PEP. But we should be careful to separate the API and functionality (and their motivation) from the space and speed considerations (which are based on the current marshaling approach).

Finally, should out-of-process debuggers be able to access this data? That complicated PEP 626, but I think it was important. Is it important here?

(Meta: Discourse tells me I should give it a rest. Goodnight!)

Excellent point! I think it is important, and that's why we provide Python and C APIs to fetch the data (for regular debuggers and tools). And since this field is as accessible as any other field in code objects, and will not require code execution to obtain it, out-of-process debuggers could just fetch the memory and interpret it (every version of the interpreter may change the internal representation, but never in a way that requires executing code or chasing extra pointers).
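For example, with the co_positions() API described in the PEP, an in-process tool could walk the per-instruction position data like this (a sketch; it only runs on an interpreter that actually implements the PEP):

  def dump_positions(func, limit=5):
      # co_positions() yields one (line, end_line, col, end_col) tuple
      # per instruction; any element may be None if unavailable.
      for i, pos in enumerate(func.__code__.co_positions()):
          if i == limit:
              break
          line, end_line, col, end_col = pos
          print(f"instruction {i}: line={line}, cols={col}..{end_col}")

  def example(x):
      return x['a']['b']

  dump_positions(example)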

Thanks for the comment! We actually did consider variable-length encodings, but by definition they do not minimize the memory layout. As you can see in the PEP, we went and checked a lot of PyPI packages, and an extremely small number of lines exceed 255 characters, so we went with the current proposal.

If we collectively believe supporting arbitrarily big column numbers is more important than the extra complexity and size increase, we could probably go with some encoding similar to lnotab, and maybe two different byte arrays in the code object instead of one.

In any case, the important thing to highlight is that the APIs we offer allow improving the encoding in the future without breaking consumers, so technically the internal choice of representation is not intrinsically linked to the proposal and can be discussed and implemented in the future.

That misinterpretation bit me too.

It bit me so hard I had to read Jelle's explanation three times before I got it. So I agree that this will be a common user confusion.

Quoting is also hard from email. Discuss seems to strip email quotes (lines beginning with one or more '>') and I haven't yet worked out what sort of markup to use in its place.