PEP 657 -- Include Fine Grained Error Locations in Tracebacks

Thank you for bringing this up. We do mention in the Rejected Ideas section that something like this ought to be useful for external tools, but that it can’t really be taken advantage of by CPython itself without making big changes to the traceback machinery.

Long term this might be useful to add (I think end line numbers or deltas would also compress fairly well, since most bytecodes will just be on the same line), but I think we would like to keep it out of scope for this PEP to keep the implementation simple and the overhead low.

I’ve definitely been bitten by this issue before (not knowing which part of a complex expression threw an error). I’m excited that it’s finally getting fixed!

I do have a piece of feedback. In this example from the PEP:

  File "", line 6, in lel
    return 1 + foo(a,b,c=x['z']['x']['y']['z']['y'], d=e)
                         ^^^^^^^^^^^^^^^^^^^^^
TypeError: 'NoneType' object is not subscriptable

my first instinct was that the highlighted expression x['z']['x']['y']['z'] was None. Thinking about it some more, I realized it actually means that x['z']['x']['y'] is None, because the caret points at the piece of AST being evaluated, not the part that’s incorrectly None.
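To make the distinction concrete, here is a minimal reproduction of that failure mode (hypothetical nested data; not taken from the PEP itself):

```python
# Hypothetical nested data reproducing the PEP example's failure mode.
x = {'z': {'x': {'y': None}}}

# x['z']['x']['y'] evaluates to None; the traceback highlights the
# subscript operation that fails on it, not the subexpression that is None.
try:
    x['z']['x']['y']['z']
except TypeError as exc:
    print(exc)  # 'NoneType' object is not subscriptable
```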

I’m worried that this will end up being a common user confusion. I don’t have a good suggestion for fixing it, though, so maybe it’s just something people will have to learn.


Maybe we should highlight the text most closely associated with the operation that fails rather than with the failing subexpression? E.g. if we have f() + g() and the + operation fails, the caret should only point at +. In Jelle’s example (which also bothered me when I first saw it) I would highlight ['z'] only.


That’s a good idea. I think the best approach is some hybrid where we highlight the range and some position inside it. Many people mentioned that a single caret can be hard to read, especially for people with impaired vision, which is what motivated this change.

We will explore that, but one of the problems I can see with it is that it requires more manual intervention in the compiler and has more room for error. On the other hand, the current approach, while still complex, lets us reuse a lot of machinery and always get correct ranges (as long as the line numbers are correct). Given how many inaccuracies the old manual AST positions have, we are advocating for the least amount of manual work, at least for the first version of this.

I agree with @aroberge here. While I understand that there’s a current prototype that doesn’t store end line numbers at all, it seems a shame to drop that information, since it is present in the AST. Surely some creative encoding of the data could support this with little overhead (since most instructions don’t span multiple lines).

I also think that a more creative solution ought to be found for the case where the offset does not fit in an 8-bit byte.

Finally, I think the discussion should be reopened on how to disable this. The argument that if the info isn’t present it might cause crashes seems to be defeated by the possibility of the info being absent when -OO is used. (Also, I think we should not tie this to -OO.)


Thinking about it some more, I realized it actually means that x['z']['x']['y'] is None, because the caret points at the piece of AST being evaluated, not the part that’s incorrectly None.

Indeed, we will think about it, but fixing this is not trivial. Notice that the error is not saying “this is None” but rather “None cannot be indexed”. Therefore the highlighted range is the indexing operation, not the part that is None.

It is the same as if you do:
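The code snippet that belonged here appears to have been lost; judging from the sentence that follows, it was presumably something along these lines (a reconstruction, using the same X and index):

```python
X = None

# The failing operation is the indexing X[7]; X itself is a perfectly
# valid (if None-valued) name, so the error points at the subscript.
try:
    X[7]
except TypeError as exc:
    print(exc)  # 'NoneType' object is not subscriptable
```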


What is wrong is X[7], not X.

One of the important points of preserving the AST ranges is that if you take the code represented by the range, it will always be a valid syntactic structure, while in the original example ['z'] is not (it is actually valid as a list literal, but that would be the wrong one :wink: ).
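This can be checked with the standard ast module: the full highlighted range parses as the subscript operation that failed, while the trailing ['z'] alone parses as something else entirely:

```python
import ast

# The full highlighted range is a valid expression: a Subscript node,
# i.e. exactly the indexing operation that raised.
full = ast.parse("x['z']['x']['y']['z']", mode="eval").body
print(type(full).__name__)  # Subscript

# "['z']" on its own also happens to parse, but as a List literal,
# not as the subscript operation that actually failed.
partial = ast.parse("['z']", mode="eval").body
print(type(partial).__name__)  # List
```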

In any case, we can fine-tune this as part of the implementation, but it doesn’t really modify the proposal, so I would prefer to discuss it later if the PEP is accepted.

(Meta: can you use quoting instead of saying “this” or “that”? Messages may appear that make such references ambiguous.)


I will do it in my next messages, but I’m writing this on my phone and that makes quoting very hard :frowning:


I am afraid I don’t understand this. What makes -OO special is that an entirely different pyc file is generated for it, so we can choose not to include the extra information there and therefore not to read it. On the other hand, runs without -OO use regular pyc files, so we can read the information knowing it will be there. The problem arises when you have a regular pyc file without the information. The marshal code would need some extra information in the header of said pyc file to know that the extra information won’t be found there, and we would need to propagate that through all the unmarshal APIs. In short, there are two things to deactivate:

  • Overhead in memory
  • Overhead in pyc files.

The second needs either an entirely new set of pyc files with a different extension, or some flag plus changes in the import mechanism plus new APIs to know whether the information is there and not crash when reading it.

What has been proposed is to always write the information and then have a flag (something different from -OO) that throws away the extra information after reading it and places None in the field. The advantage of this is that we don’t need to care about the complexity of pyc files with or without the information, and we can select the behaviour at runtime. This still has some challenges: it may slow down the code object constructor to consult the global configuration, and/or it may need some new APIs in the marshal module, which makes things more complex.

The rationale says the data types were chosen in a way that tries to minimize the impact of the size of code objects in memory. However, it doesn’t say whether variable-length encodings were seriously considered. It seems likely to me that a variable-length encoding could permit offsets bigger than 255 to be represented without increasing memory over the current proposal for most projects. It also even seems possible that a more clever encoding could be chosen that uses less memory than the current proposal.

In the schemes I’m thinking of, the two offsets would be stored as (start_offset, length) across one or more bytes. In the simplest scheme, if the first bit is 0, then the data for both offsets would be contained in two bytes: the next 8 bits would be the start offset, and the remaining 7 the length. That would let you represent start offsets up to 255 and lengths up to 127 in two bytes.

In a more clever scheme, the most common “rectangle” of (start_offset, length) pairs could be represented in one byte, pairs up to (127, 127) in two bytes (encompassing all PEP-8 compliant code), and larger pairs in three or more bytes. That could potentially be more memory efficient than the current proposal. However, it would depend on knowing more about the real-world distribution of (start_offset, length) pairs.
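The simplest scheme described above could be sketched as follows (hypothetical helper names; this is an illustration of the proposed bit layout, not part of the PEP):

```python
def encode_pair(start: int, length: int) -> bytes:
    """Pack (start_offset, length) into two bytes: a 0 flag bit,
    then 8 bits of start offset and 7 bits of length."""
    if not (0 <= start <= 255 and 0 <= length <= 127):
        raise ValueError("pair does not fit in the two-byte form")
    value = (start << 7) | length  # 15 payload bits; top (flag) bit stays 0
    return bytes([value >> 8, value & 0xFF])

def decode_pair(data: bytes) -> tuple[int, int]:
    """Inverse of encode_pair for the two-byte form."""
    value = (data[0] << 8) | data[1]
    assert value >> 15 == 0, "a set flag bit would signal a longer form"
    return (value >> 7) & 0xFF, value & 0x7F
```

A set flag bit would then signal the longer, multi-byte form for pairs that do not fit in fifteen bits.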

We would be very happy to include this, in fact, if everyone is OK with the potential extra overhead. The encoding would be something like a byte array of (index, delta) pairs, where the index is the index of a bytecode instruction and the delta is how many lines the end line differs from the initial one. We can encode big deltas in the same fashion as with lnotab (255 means “read the next offset”).
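The lnotab-style escape for big deltas could look like this (a sketch with hypothetical helper names, not the actual CPython encoding):

```python
def encode_delta(delta: int) -> bytes:
    """255 acts as an escape byte meaning 'add 255 and keep reading'."""
    out = bytearray()
    while delta >= 255:
        out.append(255)
        delta -= 255
    out.append(delta)
    return bytes(out)

def decode_delta(data: bytes, i: int = 0) -> tuple[int, int]:
    """Return (delta, next_index) decoded starting at position i."""
    delta = 0
    while data[i] == 255:
        delta += 255
        i += 1
    return delta + data[i], i + 1
```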

In the worst case this may still represent a non-trivial amount, though, and it is not currently usable inside CPython itself, so that’s why we decided to defer adding this field.

I think this is a great idea and will be quite a big improvement for debugging. I think the small space cost (+22%?) is minor relative to the value of the feature. Putting the table in a separate section of the .pyc file seems like a nice optimization. If we don’t want to complicate the interpreter with extra options to exclude it, perhaps we could provide a tool that strips that table from .pyc files. Then someone who is very space constrained could use that tool. It would be somewhat similar to the code minifiers used in JavaScript.

That’s a good idea, but I really don’t want to complicate unmarshalling and pyc file reading for the initial version of this proposal, at least. The problem with the extra table, different files, or some header flag in pyc files is that it adds quite a lot of complexity to the import mechanism and unmarshalling, and I really don’t want to add complexity in that region of the interpreter for this proposal.

I think a more interesting option is having a single flag that deactivates the generation and just places None in the new field instead. There will be an overhead of one object per code object, but that’s much, much lower than one byte per instruction.

That makes sense, the overhead when column offsets are disabled is minimal that way, and no new pyc format is required.

Note, however, that maybe we should consider a radically different pyc format to speed up loading code. Just not in this PEP. But we should be careful to separate the API and functionality (and their motivation) from the space and speed considerations (which are based on the current marshaling approach).

Finally, should out-of-process debuggers be able to access this data? That complicated PEP 626, but I think it was important. Is it important here?

(Meta: Discourse tells me I should give it a rest. Goodnight!)

Excellent point! I think it is important; that’s why we provide Python and C APIs to fetch the data (for regular debuggers and tools). And as this field is as accessible as any other field in code objects and will not require code execution to obtain, out-of-process debuggers could just fetch the memory and interpret it (every version of the interpreter may change the internal representation, but never in a way that requires executing code or chasing extra pointers).

Thanks for the comment! We actually did consider variable-length encodings, but by definition they do not minimize the memory layout. As you can see in the PEP, we went and checked a lot of PyPI packages, and an extremely small number of lines exceed 255 columns, so we went with the current proposal.

If we collectively believe that supporting arbitrarily big column numbers is more important than the extra complexity and size increase, we could probably go with some encoding similar to lnotab, and maybe two different byte arrays in the code object instead of one.

In any case, the important thing to highlight is that the APIs we offer allow improving the encoding in the future without breaking consumers, so technically the internal choice of representation is not intrinsically linked to the proposal and can be discussed and implemented in the future.

That misinterpretation bit me too.

It bit me so hard I had to read Jelle’s explanation three times before I
got it. So I agree that this will be a common user confusion.

Quoting is also hard from email. Discourse seems to strip email quotes (lines beginning with one or more ‘>’) and I haven’t yet worked out what sort of markup to use in its place.

It bit me so hard I had to read Jelle’s explanation three times before I
got it. So I agree that this will be a common user confusion.

Thanks! I think that this is something that will need some adaptation from users to match expectations as I mentioned in PEP 657 -- Include Fine Grained Error Locations in Tracebacks - #8 by pablogsal.

In any case, we can fine-tune this as part of the implementation as this doesn’t really modify the proposal so I would prefer to discuss this later if the PEP is accepted to keep the discussion focused.

I made the same mistake, but I think the lesson to learn is that the error is entirely inside the ^ range – which should make it simpler to understand.
