PEP 722: Dependency specification for single-file scripts

True. The nested loop was (I think) a holdover from when it was possible to have multiple blocks in the file. Having formally said that only the first valid block needs to be parsed does make the code less tricky.

There’s something a little unnerving in the first loop in your version, though. I think it’s because if there’s no dependency block, the second loop still gets executed (although it does nothing because we’re at EOF). Personally, I’d want to add comments to your version whereas I didn’t feel the need with mine. It’s very much just a coding style question, though.

But I don’t think this is what Petr was talking about, because both versions seem to me to be equally translatable (or not) to other languages.

Edit: I decided to go with your version, with a couple of comments; it is cleaner. Thanks. I also fixed a bug in the empty line handling (break rather than continue, which terminated the block prematurely).
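For anyone following along, here is a minimal sketch of the two-loop structure being discussed. It is simplified relative to the actual reference implementation in the PEP (trailing-comment handling is omitted, and the marker regex is loose), but it shows the two points above: if there's no block, the second loop runs but does nothing because we're at EOF, and a blank comment line inside the block needs continue rather than break.

```python
import re

# Simplified marker regex; the PEP's reference implementation is stricter.
DEPENDENCY_BLOCK_RE = re.compile(r"(?i)^#\s*script\s+dependencies:\s*$")

def read_dependency_block(lines):
    lines = iter(lines)
    # First loop: scan for the block marker; stops at EOF if there is none.
    for line in lines:
        if DEPENDENCY_BLOCK_RE.match(line):
            break
    # Second loop: consume the block (does nothing if we're already at EOF).
    for line in lines:
        if not line.startswith("#"):
            break  # first non-comment line ends the block
        content = line[1:].strip()
        if not content:
            continue  # blank comment line: skip it, don't end the block
        yield content

script = """\
# Script Dependencies:
#    requests
#
#    rich
import requests
"""
print(list(read_dependency_block(script.splitlines())))  # ['requests', 'rich']
```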


Running your reference implementation on the example you give only obtains:


Is it intended to stop on comment lines/blank lines? Or should that be continue instead of break after the split?

Edit: I think you edited to fix this as I replied.


I’ve updated the implementation in viv to use this revised spec.

I did, however, modify the reference implementation in the currently rendered version since, based on my reading of the spec, I think it’s supposed to continue rather than break on comment-only lines, but I might be misinterpreting the expected behavior.

Folks can test locally using python3 <(curl -fsSL run --script ./ if they’d like.

Nope. See Record the top-level names of a wheel in `METADATA`? - #52 by thejcannon as a discussion about recording at least the top-level names.

I too would be surprised. :wink: But rejection of both PEPs is also possible, so who knows. As of right now I’m trying not to bias myself until we can try some user studies and see what the reactions are (I already have an opinion simply based on personal experience, but I want to avoid as much bias as possible in making the final decision by not unconsciously discounting any feedback we get from the target audience of the PEPs).

Honestly, the August 14th was more to make sure Ofek was serious enough to write a PEP and to try and get overall PEP discussions done without them dragging on for a month. I definitely do not consider that deadline a hard one but more of an aspirational one. It seems this topic, though, is nicely staying on-topic and reaching convergence, so I’m not concerned about PEP 722 (I haven’t read the PEP 723 thread yet, though :sweat_smile:).


OK, the discussion here seems to have died down, and we’ve reached the 14th, so I’m going to say that PEP 722 is ready for approval.

@brettcannon I’m happy if you want to wait to give PEP 723 some additional time, or if you want to delay in case there’s still a possibility of @ofek and I coming up with some sort of merged proposal. There’s no rush here, I simply wanted to formally confirm that PEP 722 is ready when you are.


I’ve been exploring making a basic tool to launch scripts based on this specification, plus a non-standard x-requires-python block that gets used with ‘pyenv’ or ‘py’ to find the appropriate python executable. (I probably won’t make it build the appropriate Python with pyenv if it’s missing, but I may make it output the command you would need).

With respect to a proposed TOML-based format from a merged proposal, I’d note one thing. Despite implementing this in Python, I’ve tried to make the time from start to running a script as fast as possible when a cached venv can be used[1], and merely importing a TOML parser library doubles that time before doing any parsing[2]. This probably doesn’t matter if you’ve decided to implement such a thing in Rust, and you may consider the overall time still small enough not to care, but I did want to point it out.

  1. This is somewhat limited by the launch time of Python itself, but it’s easy to make it much slower by importing certain modules. ↩︎

  2. I tested rtoml, pytomlpp, tomllib and tomlkit on my hardware - 2x was the best case. ↩︎


This interests me greatly because responsiveness of the Hatch CLI is something I try to optimize. Do you have stats on the import times of each library that you tested?

The case that needs to be as fast as possible is the frequent case where you iterate many times on your script and/or reuse it many times, but without changing the metadata block. You can detect that case by using the dependency block string as cache key, and just skip importing a TOML library if the metadata is the same string as the previous run.

(Also, you’re comparing code that you have purposefully optimized for startup time with code that might not have received startup time optimization.)


It depends what the TOML block ends up looking like as to whether the cache of the exact text is enough. (I’d like to share the env between scripts with the same dependencies so I need it not to have any extra unnecessary details). Perhaps the current proposed [run] block will be fine, but the proposal seems to have changed every time I look at that thread.

The current code parses the block and compares the parsed details to a cache and can do that before any of the toml libraries I’ve found have finished importing. Skipping the parsing step in the initial comparison is a possibility but it’s not necessary with the PEP 722 format. (I’m not going to write a TOML parser just to optimise it for import time for this one use case but I don’t think that’s what you were suggesting).

It’s hard to say what the impact would be in the context of hatch. For instance tomllib looks to be somewhere in the region of 2%[1] of your start time based on python -Ximporttime -c "import hatch.cli". However unlike my tool, hatch is already importing some of the dependencies tomllib requires. So for instance import tomllib might be a 2ms import for hatch, but a 22ms import for a new project. I don’t think you’d see any noticeable difference with any of the other libraries (except tomlkit would probably take longer).

I’m not claiming import time is the most important thing in general, just that I’d like to keep it down for something like this that is intended to launch small scripts if possible.

  1. This is just on my development machine, which is not a super stable benchmarking tool. ↩︎


Hm, in my mind the most important case is when the script is not changing at all. That’s the “simple distribution (via email or something)” use case that is a major motivating factor here. I would think that the developer of the script would already have an environment with the dependencies, in many cases.

I guess this depends on workflow and speed is always nice. I’m just pointing this out because if I were hellbent on optimizing time-to-launch, I’d consider checking the file’s mtime before parsing anything.
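Roughly what I mean by the mtime check, as a sketch (the cache layout is made up for illustration):

```python
import os
import tempfile

def can_skip_parse(script_path: str, cache: dict) -> bool:
    # If the script's mtime matches the cached value, nothing needs to be
    # read or parsed at all; the cached environment can be reused directly.
    entry = cache.get(script_path)
    return entry is not None and os.stat(script_path).st_mtime == entry["mtime"]

# Demo with a throwaway script file.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("# Script Dependencies:\n#    requests\n")
    path = f.name

cache = {}
print(can_skip_parse(path, cache))  # False: script not seen before
cache[path] = {"mtime": os.stat(path).st_mtime, "env": "cached-venv-path"}
print(can_skip_parse(path, cache))  # True: unchanged, skip parsing
os.unlink(path)
```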


Thanks for this - this sort of practical experience is extremely useful for ensuring the final standard is as good as we can make it.

Personally, I do consider startup time of importance. I’ve been looking at how to design reusable environments so that we don’t need to install anything that’s already available[1] so I’d really like it if I didn’t lose any time I gain from that to importing a TOML parser…

  1. yes, I know I’m reinventing nix :slightly_smiling_face: ↩︎


@brettcannon I think I need to add two more things based on comments that came up today.

  1. Time taken to parse TOML, to the “Rejected options” entry on using TOML.
  2. “Just have a runtime function to install dependencies” to “Rejected options”.

Neither of these are critical to the approval process, but I’d like to add them for completeness. I’ll try to do them tomorrow.


Out of curiosity, how are you timing this? Maybe I’m doing it wrong but python -m timeit "import tomli" isn’t nearly that slow (on my machine).

edit: ah yes, per below this is definitely caching the import. timeit is not for this I guess!

Part of the goal for my use case is that if the tool is fast enough then I don’t need to build an environment with the dependencies as the developer. I’m not sure there’s much to gain from mtime caching over parsing the current block (it takes something like 0.2ms to actually parse) and it’s not going to allow me to share environments between scripts with the same dependencies.

I’m using python -Ximporttime -c "import tomllib"; it gives a nice breakdown of all the modules being pulled in on import. I’m also using hyperfine for rough timing, including the time taken for Python to launch.

I’m fairly sure that your command imports it once and then just looks it up in sys.modules for every subsequent iteration.


I was thinking you could parse on any cache miss, and then cache the link from script to env. So if you parse a new script’s requirements and already have a compatible environment, you could re-use it and save that link.

I don’t know if this is a viable design in your case, just seemed like something I’d want to do if startup time was a concern here.

I always do the following (and run it a few times):

python -m timeit -n 1 -r 1 "import ..."

You could simply cache a file path => TOML string dict if that is the approach you choose.

Case in point. After installing pytomlpp in a venv, I can observe a startup time that is about twice as much as the time to start Python itself, which is in line with your argument.

$ hyperfine --warmup 10 "python -c ''"
Benchmark 1: python -c ''
  Time (mean ± σ):       8.1 ms ±   0.9 ms    [User: 7.8 ms, System: 0.8 ms]
  Range (min … max):     7.0 ms …  11.4 ms    210 runs
(venv) ~/snakerun $ hyperfine --warmup 10 "python -c 'import pytomlpp'"
Benchmark 1: python -c 'import pytomlpp'
  Time (mean ± σ):      23.8 ms ±   1.1 ms    [User: 19.0 ms, System: 4.6 ms]
  Range (min … max):    22.1 ms …  26.6 ms    116 runs

But once I change the start of pytomlpp/__init__.py from

import os
from typing import Any, BinaryIO, Dict, TextIO, Union

from . import _impl

FilePathOrObject = Union[str, TextIO, BinaryIO, os.PathLike]

to

from __future__ import annotations

import os
# from typing import Any, BinaryIO, Dict, TextIO, Union

from . import _impl

#FilePathOrObject = Union[str, TextIO, BinaryIO, os.PathLike]

I get

(venv) ~/snakerun $ hyperfine --warmup 10 "python -c 'import pytomlpp'"
Benchmark 1: python -c 'import pytomlpp'
  Time (mean ± σ):      12.4 ms ±   0.9 ms    [User: 9.6 ms, System: 2.7 ms]
  Range (min … max):    11.2 ms …  16.5 ms    223 runs

So by shipping its type hints separately you would reduce the import time to ~half of the time to start up Python. And I did this without looking deeper into any of what pytomlpp does.

Bottom line: you spent a little time and effort optimizing your code for startup time (e.g., you say you didn’t use regular expressions for that reason). You could also put a little time and effort into contributing import performance improvements to one of these libraries (for example the typing change in pytomlpp, or refactoring tomllib, which, from a cursory glance, uses regular expressions and compiles them on import).

(Sorry, I originally posted a draft of this by accident.)


It’s fair, but I’m not concerned with performance on the sub millisecond level for this specific task. I’m actually somewhat concerned that I’d implement this and the extra lookups would take just as long in the end.

I’m perfectly willing to put time and effort into contributing performance improvements where I believe it is appropriate to do so. [example]

I do not believe either of your suggestions are appropriate recommendations. Both would generate an extra maintenance burden on the developers for little[1] or no[2] noticeable performance gain.

  1. Typing is widespread enough that removing import typing from one module probably won’t make a difference for most projects as it’s being imported somewhere else. Maintaining separate type stub files that are accurate is not an insignificant maintenance burden for development though. ↩︎

  2. The tomllib regexes are used in the main function of the library, the import performance boost would be lost as soon as you actually parse a toml file, which is the whole point of importing the library. ↩︎

I’d suggest that discussions on how to optimise this (or any) particular implementation are off-topic here. The basic message is that “for some applications, TOML parsing has a non-trivial impact on performance”. And that’s relevant because startup time for a simple script is important - this has been noted on many occasions.

Whether script runners can be optimised, or caching can improve performance, is simply demonstrating that “it’s harder to write a performant script runner with TOML data than with a simpler structure”.

And even then, whether the difference matters is something that will ultimately be decided by choosing one proposal over the other. Not by people demonstrating that optimisation is possible.


To lean into the “it’s an optimization thing”: if you don’t already cache, then I suspect the TOML reading is minor compared to the network overhead of communicating with PyPI.