PEP 722/723 decision

So this was not an easy decision to make between PEP 722 and PEP 723, which made being the PEP delegate hard (and that’s ignoring all of the controversy surrounding me being the delegate to begin with). I think pretty much everyone acknowledged that either solution would be great to have, but neither PEP explicitly stood out as objectively better than the other (I saw plenty of comments along the lines of, “I would accept either,” right after the commenter expressed a preference). So this took a lot of thought, reflection, and honestly gut feel to come to a conclusion.

In the end I have decided to accept PEP 723 with the condition that we are okay introducing the [run] table to pyproject.toml (i.e., I don’t want /// pyproject to have anything that’s invalid in pyproject.toml; it should at best be a subset of what’s available, not a superset). I’m assuming we are okay with this after our last discussion about how [project] is really meant for what ends up in wheel files, but since PEP 723 has knock-on effects I want to make sure I’m not misreading the room or going to get accused of sneaking in a change to pyproject.toml via this PEP. If @pf_moore wants, I can make the call about our general comfort on this topic as well.

So, as for why I went with PEP 723 over 722 (although it was admittedly a tough call), I have a couple of reasons. For anyone who views them as weak, or thinks they could have gone the other way, just assume my gut gave me the ultimate guidance. :grin:

First, I realized you are going to need to learn whichever format is chosen, as neither of them is self-explanatory as to what effects it can have. With either you can probably work out that it is listing dependencies, but neither is self-documenting enough for you to understand that, e.g., pipx run script.py will do something magical with what’s listed inside the file. For me, that weakens PEP 722’s argument of being easier to read, since you’re going to have to read up on how to use it appropriately anyway.

Second, the user study suggests that learning TOML isn’t that big of a burden. While I fully admit PEP 722 is easier to explain, it seems the delta between the two PEPs is not enough to outright reject PEP 723 for being too burdensome for folks who never go beyond the script-writing stage of Python development.

Third, with [run] being in pyproject.toml, it will help with documentation. Having what PEP 723 provides be what you would put into pyproject.toml allows for a “doubling up” on documentation and help. Unfortunately, with PEP 722 it is a separate thing to learn (e.g., if I were to accept PEP 722 there was going to be a need to add a “requires Python” field, but it would be bespoke to this mechanism, just like how to list dependencies already is; this would continue to be a concern for any other metadata we add in the future). This isn’t a massive burden from PEP 722’s side, but it is a perk.

Fourth, I do appreciate that PEP 723 helps migrate people to pyproject.toml if that day comes for them. Admittedly that won’t apply to everyone, but for those it does apply to, it will be a benefit. And assuming my first and second points are not off, this benefit for those who go as far as a pyproject.toml will not come at a (great) cost to those who never migrate past PEP 723. Heck, maybe it will even help demystify pyproject.toml for folks who should migrate to that layout anyway, since they already did some of the upfront work.

Fifth, from a tooling perspective, PEP 723 is rather straightforward: apply line.removeprefix("#").removeprefix(" ") to each line you find between the /// pyproject and /// markers, then simply pass what you have to a TOML parser to tell you whether everything is okay. The skew between tools potentially implementing things differently is very much minimized in this regard, which will be useful when we have tools written in Python, Rust, TypeScript, etc. wanting access to the contained data. Compare this to PEP 722, where you would have to handle comments appropriately, etc., in a unique, bespoke fashion.

So that’s my reasoning. I think you could have argued in either direction, but thinking long-term (i.e. decades), I think PEP 723 will be the (slightly) better outcome for us. But even if I’m wrong, it luckily won’t be the worst mistake I’ve ever made for Python. :sweat_smile:

I obviously want to thank both @pf_moore and @ofek for their PEPs and at least attempting to reconcile their differences to avoid needing separate PEPs. And also thanks to everyone who participated constructively in the discussions around these PEPs. Thanks to @courtneywebster for setting up and running a user study on these PEPs, as well as my co-workers for listening to me go on and on about this decision. Ditto for my spouse, Andrea, who put up with me talking through this decision during our vacation last week.

Regardless of whether your preferred PEP was chosen, hopefully everyone can at least agree that we are getting something that will greatly benefit the community (I for one already have plans on how to use this to hopefully great effect)!

62 Likes

Thank you Brett for spearheading this effort and everyone else for the feedback throughout. I especially would like to thank Paul for being the original one to show that this was very desirable to the community at large :slightly_smiling_face:

I am extremely appreciative of Courtney for conducting that user study. From my statistics experience I always knew that surprisingly small sample sizes can, in many cases, represent populations, but until now I didn’t realize how much information could be gleaned from direct interaction with a handful of folks. Very cool!

My goal is to implement this in Hatch and release by the end of the year, hopefully sooner.

edit: I also forgot to thank Adam Turner for sponsoring my PEP and teaching me best practices during PR reviews!!!

26 Likes

Thanks Brett for making this decision. I think this will make a big difference for people who want to use 3rd party Python packages but find managing virtual environments more of an overhead than they are comfortable with. Thanks to Ofek for championing the TOML format, and for his willingness to look for a compromise solution (even if we didn’t manage to achieve that in the end).

I also agree with Ofek’s comments about the user study. Whenever I see this sort of work done, I’m amazed by how much can be gained from it - we need to do more of this sort of thing, and I am really grateful to Courtney for putting the study together and interpreting the results.

Hopefully, someone will be able to find the time to implement PEP 723 for pipx now. The current support for declaring dependencies is still unreleased, and it would be great if we could avoid the transition cost of releasing it and switching to PEP 723 shortly afterwards. PEP 723 support in pip-run would also be good, but we can’t avoid a transition there, as the existing mechanisms have been around for some time.

I’m not 100% sure what you want from me here. My view is that PEP 723 doesn’t require us to add [run] to pyproject.toml, but places specific requirements on any PEP that does so. Are you suggesting that we extend this to add [run] to pyproject.toml right now? Because if so, then I do have some concerns as I don’t see what the intended semantics would be - PEP 723 quite correctly didn’t specify them (it defines semantics specifically in terms of “running the script”), so we need some form of standard to fill that gap.

I’m fine with “[run] in pyproject.toml” being a new PEP, which I have no problem being PEP delegate for. And I’m fine with PEP 723’s statements being a pre-requisite for any such PEP. But unless I’m missing something, I don’t think we have enough detail to introduce [run] into pyproject.toml without its own PEP. If you’re suggesting otherwise, then I guess I would like you to describe how you expect it to work.

21 Likes

I appreciate everyone involved, although I feel like all the extended discussion only resulted in a spec that will require me to copy-paste from a template.

There’s something about the comments + /// + pyproject + [run] + dependencies = [ + quoting + ] + closing /// that simply does not want to stay in my memory.

:man_shrugging:

Don’t worry, tooling will offer commands to do everything for you :wink:

1 Like

Thanks to Brett, Ofek, Paul and Courtney for their work on the two PEPs.

I put together a quick parser yesterday and I wanted to check it against the reference implementation given in the PEP. However, it seems that the reference implementation for reading a block doesn’t work and gives an AttributeError, unless I’m missing something?

Reference Code
import re
import tomllib

REGEX = r'(?m)^# /// (?P<type>[a-zA-Z0-9-]+)$\s(?P<content>(^#(| .*)$\s)+)^# ///$'


def read(script: str) -> dict | None:
    name = 'pyproject'
    matches = list(
        filter(lambda m: m.group('type') == name, re.finditer(REGEX, script))
    )
    if len(matches) > 1:
        raise ValueError(f'Multiple {name} blocks found')
    elif len(matches) == 1:
        return tomllib.loads(matches[0])
    else:
        return None

sample = """
# /// pyproject
# [run]
# requires-python = ">=3.11"
# dependencies = [
#   "requests<3",
#   "rich",
# ]
# ///
"""

print(read(sample))

Output:

AttributeError: 're.Match' object has no attribute 'replace'

It looks like it needs to extract the block from the re.match and remove the "# " from each line before passing to tomllib.loads. Something like:

    elif len(matches) == 1:
        content = matches[0].group("content")
        data = "\n".join(line[2:] for line in content.split("\n"))
        return tomllib.loads(data)

The reference implementation as-is works perfectly. I suspect, since your implementation seems more complex (why is that?), that there is just a bug.

edit: your implementation could be simplified to the following:

import re

# /// pyproject
# [project]
# requires-python = ">=3.11"
#
# dependencies = [
#   "requests<3",
#   "rich",
# ]
# ///

# /// foo
# bar bar
# ///

# /// bar
# foo foo
# ///

from collections.abc import Iterator

REGEX = r'(?m)^# /// (?P<type>[a-zA-Z0-9-]+)$\s(?P<content>(^#(| .*)$\s)+)^# ///$'

def parse(script: str) -> Iterator[tuple[str, str]]:
    for match in re.finditer(REGEX, script):
        yield match.group('type'), ''.join(
            line[2:] if line.startswith('# ') else line[1:]
            for line in match.group('content').splitlines(keepends=True)
        )


def main():
    with open(__file__, encoding='utf-8') as f:
        for name, content in parse(f.read()):
            print(name)
            print(content)


if __name__ == '__main__':
    main()

The section I posted under the Reference Code section is copy/pasted from the ‘Reference Implementation’ section of the PEP I linked.

I’ve only added the sample (copied from the PEP again) as an inline string to run against the PEP code.

This is not my implementation.

Ah, I see that there is a small diff between my local script and the block in the PEP. Thanks! I will open a PR to rectify that and mark as final.

Also, feel free to use the example script I posted above as I think that satisfies what you’re trying to do.

1 Like

PR is open: PEP 723: Mark as final by ofek · Pull Request #3505 · python/peps · GitHub

I also added an example of the current topic of parsing a stream of metadata blocks :slightly_smiling_face:

1 Like

That’s ok, I mostly wanted to see what happened running the reference implementation on potentially common “wrong” data.

Blocks like:

# /// block
# <data>
<Code starts without closing the block.>

The reference ignores this (I currently raise an exception here; I can see this being a common mistake that could lead to headaches, with someone trying to figure out why their requirements aren’t being used).

and

# /// block-1
# <data>
# /// block-2
# ...
# ///

The reference puts /// block-2 inside /// block-1.

I feel like this is likely to be an error but I also guess there could be valid data like that inside the block? I’m currently treating this as an error but perhaps I shouldn’t.

With the implementation you posted, I was also getting confused about blocks not getting picked up, until I realised that the regex doesn’t accept underscores in TYPE names and I was using underscores in my examples.

My implementation is a bit more fiddly, both to pick out these potential errors but also to avoid importing re if possible (this is probably me being overly focused on import times).

1 Like

I think that, given the PEP explicitly states that the reference implementation is definitive, you’d have to accept this as valid.

Similarly, unclosed blocks, as per the behaviour of the PEP code, should be ignored (and probably consume the rest of the file? I haven’t checked whether that’s the behaviour, but I suspect it is).
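For what it’s worth, a quick experiment (my own check, using the regex from the PEP’s reference implementation) suggests an unclosed block simply fails to match at all, rather than greedily consuming the rest of the file into a spurious block:

```python
import re

# Regex from the PEP's reference implementation.
REGEX = r'(?m)^# /// (?P<type>[a-zA-Z0-9-]+)$\s(?P<content>(^#(| .*)$\s)+)^# ///$'

# A block that is opened but never closed before code begins.
unclosed = """\
# /// pyproject
# [run]
import requests  # code starts without a closing '# ///' line
"""

matches = list(re.finditer(REGEX, unclosed))
print(matches)  # [] -- the unclosed block is silently ignored
```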

I think we need to be very cautious about changing the parsing behaviour here - the PEP argued convincingly (at least to the extent that it got accepted :wink:) that parsing was simple, you just use the regex from the PEP. If we start changing that specification, we risk making it difficult to parse the data in languages other than Python, and we could end up with subtle but important differences in what various implementations will accept.

@brettcannon as PEP delegate here, would you agree with the above?

2 Likes

The PEP does not say the reference implementation is definitive; the text specification is stated as definitive.

In circumstances where there is a discrepancy between the text specification and the regular expression, the text specification takes precedence.

The text specification states:

Any Python script may have top-level comment blocks that start with the line # /// TYPE where TYPE determines how to process the content, and ends with the line # ///. Every line between these two lines MUST be a comment starting with #. If there are characters after the # then the first character MUST be a space.
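That per-line rule is mechanical enough to check directly; a minimal validator sketch (purely illustrative, my own code, not from the PEP):

```python
def is_valid_block_line(line: str) -> bool:
    """Check the quoted rule: the line MUST be a comment starting with '#',
    and if any characters follow the '#', the first MUST be a space."""
    if not line.startswith("#"):
        return False
    rest = line[1:]
    return not rest or rest.startswith(" ")
```

So `# foo` and a bare `#` pass, while `#foo` and any non-comment line fail.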

The specification states what is and isn’t a valid block; based on my reading, this would mean that the ‘double’ block would be valid and I shouldn’t error there. The reason I wanted to test what the regex implementation did was to know whether it would instead ignore the first part and start a block at the second part, as technically everything from # /// block-2 to # /// is a valid block that gets merged into the first block (though this would make block-1 an incomplete block).

However, the unclosed block is still invalid, and I think it’s completely fair behaviour to raise an error; as a user I would appreciate getting that error over having the invalid block silently ignored. The PEP dictates what a valid block is, but doesn’t require any particular behaviour on finding something invalid, other than erroring when two blocks of the same TYPE are found.


On that point the PEP states:

When there are multiple comment blocks of the same TYPE defined, tools MUST produce an error.

I’ve interpreted this as applying when multiple comment blocks of the same TYPE are encountered, as the only way to know whether multiple are defined is to consume the whole file.

OH!!! Did that change at some point? I’m pretty sure that this was brought up in discussion and the regex (sorry, I should have been explicit that I meant the regex, not the implementation) was definitive.

My apologies, I missed that the text was stated as definitive. Although if the text doesn’t tell you the answer, I’d argue that the quoted sentence implies that the regex provides the definition (the regex is described as canonical, and there’s no discrepancy, just something the regex covers but the text doesn’t).

But you’re right, this is a (rather frustrating, IMO) case where the PEP is underspecified, and we therefore have the potential for different tools to interpret the same file in different ways. That’s disappointing, as my experience suggests we’re going to get issues because of this. PEPs shouldn’t be changed after acceptance, so we’re either going to have to bend the rules or live with the ambiguity :slightly_frowning_face:

1 Like

PEP 1 has provisions for changes to Accepted PEPs, with SC (in this case PEP delegate) approval:

In general, PEPs are no longer substantially modified after they have reached the Accepted, Final, Rejected or Superseded state. Once resolution is reached, a PEP is considered a historical document rather than a living specification. Formal documentation of the expected behavior should be maintained elsewhere, such as the Language Reference for core features, the Library Reference for standard library modules or the PyPA Specifications for packaging.

If changes based on implementation experience and user feedback are made to Standards track PEPs while in the Provisional or (with SC approval) Accepted state, they should be noted in the PEP, such that the PEP accurately describes the implementation at the point where it is marked Final.

(And the PEP hasn’t actually been marked as accepted yet (open PR: python/peps#3505), perhaps the PEP delegate could modify acceptance to “acceptance once X changes are made”.)

1 Like

Yes IIRC I changed that based on feedback from you.

That is correct.

My view is that a file containing an unclosed block is valid, and the unclosed block does not count as anything, as described by the text and encoded in the regular expression. I am a hard -1 on changing the wording to require tools to account for this use case, but I am willing to update the text to make that explicit. If tools want to provide extra error checking then they can, but this should not be a requirement.

Yeah, let’s see what @brettcannon thinks. I’ll be honest, I’m not the most unbiased person to comment here, I felt that PEP 722 was specifically the “easy to parse” alternative, and it’s rather too easy for me to end up taking an “I told you so” stance :frowning:

Thanks @DavidCEllis for testing this out and finding these issues - it’s immensely valuable to get this dealt with before the PEP starts getting used in the wild. And honestly, I think practicality should probably beat purity here, the important thing is to tighten up the spec now, not to stand on process[1].

Sigh. I obviously didn’t follow through and make sure the text was sufficiently precise. I’m going to claim that’s because I was working on my own PEP, but honestly I dropped the ball there. Sorry.

My recollection of the discussion isn’t clear, let alone what I was thinking at the time, but I think I assumed that the end result would be that the regex would be definitive because it matched the text, not that the regex could give incorrect results (i.e., the regex was a reference implementation of the syntax defined in the text). It’s water under the bridge now, though.

What matters most at this point, I think, is that there’s some implementable behaviour that doesn’t have ambiguities. If you don’t want to cover it in the text, then I think we need to state explicitly that tools are allowed to use the regex as their implementation, and furthermore that files must not include constructs that can’t be parsed with the regex. That puts the edge cases firmly in “undefined behaviour” territory, which is not ideal, but is probably the best we can do. And it makes the regex into a “conformance test” for valid input data, which is a good thing to have.

It would defeat the object of the PEP if files were allowed to have data that could only be parsed correctly by some implementations of the PEP. We might as well have simply left things with the syntax being tool-defined in that case…


  1. although I hope we don’t set a precedent by doing so ↩︎

3 Likes

If Brett allows then I am fine adding a line to the open PR expressing your idea about putting increased emphasis on the regular expression.

I’m considering ‘valid’ as meaning ‘interpreted as a metadata block by the parser’ in what I’ve stated. So when I say invalid I mean that it’s (likely) intended as a metadata block, but due to incompleteness it’s ignored/rejected by the parser.

I don’t want to interpret as an error a block that the reference would accept[1], and ideally I don’t want to accept something the reference will ignore, so I wanted to investigate malformed data and compare behaviour to check that mine matches. However, when the reference ignores something that I would consider was intended as a metadata block, I’d like to be permitted to provide potentially helpful errors/warnings.

I honestly wasn’t trying to pick holes in anything, I just wanted to clarify some behaviour that I needed a working reference for :sweat: .


  1. I may look at ways of storing potential errors relating to the block format within a pyproject block to use as additional information if/when the toml parsing fails though. ↩︎

I assume that the vast majority of people implementing the PEP will simply copy the regex and never even think of checking it against the text. I’m not sure that “increased emphasis” helps here, though, I’d see it more as people expecting the regex to be the reference implementation of the parsing described by the text. That’s what “regular expression that MAY be used to parse the metadata” means to me - you can use this and get a standard-conforming parser, or you can write your own based on the text.

The idea that “the text specification takes precedence” for me means that no-one needs to reverse-engineer a regex to work out how to parse the format - it does not allow for the text and the regex to give different results when parsing the same input[1]. I think that’s the case here, it’s just that both the text and the regex optimistically assume that they are being fed good data, so there’s no real consideration of error cases.

Maybe the solution is to say “the regex implements the specification described above, and may be used by tools to extract comment blocks”. That makes it clear that the two forms are equivalent.

But this is separate from the two “problem” cases @DavidCEllis noted, both of which are treated the same by both the regex and the textual specification (and the reference implementation, as that just uses the regex). The two cases are unclosed blocks and nested blocks (lines with the syntax of a block header inside a block).

  • A parser that handles an unclosed block by reporting an error or warning that states “block # /// xxx was not closed, the remainder of the file was ignored” is in conformance with the spec, and therefore acceptable. The regex won’t return the unclosed block as a match, so you’d need to write your own parser to be able to spot this case and report it, but that’s acceptable (if you want to bother).
  • Currently, nested blocks are invalid, because the only defined block type is pyproject, which must contain TOML, and lines starting with /// are not valid TOML. So for now, at least, it’s possible to identify, and report as malformed, such files. Naive parsing of the data as TOML won’t give a particularly helpful message by default, but that’s a quality of implementation issue. Future PEPs that introduce new block types will need to either disallow lines that look like a block header, or tackle the nesting problem some other way. But that’s their problem, not ours right now.

So I think we’re OK in practice. The two cases (unterminated and nested blocks) are invalid under both the text and the regex, and they can be reported to the user with a helpful message.

Did I miss anything?

No problem at all - making sure that implementations can actually implement the spec is crucial, so your feedback is welcome. And (luckily :wink:) I don’t think you’ve found any holes, just spotted some error cases that aren’t immediately clear from the spec.


  1. After all, if the regex has the potential to be incorrect, then it’s not acceptable for the reference implementation - which must be correct - to use it! ↩︎

3 Likes