The entire point of this PEP is for use cases where a user isn’t using any of these tools. If someone is already using e.g. poetry then they can easily just keep using poetry.
The single-file use case doesn’t use any existing packaging tools because it can’t; that’s the whole point. If a tool other than a script runner wants to go above and beyond and add support for this format then that’s the decision of that tool’s author and its community, but nothing about this format or the use case it targets compels tool authors to do so. Indeed I would not expect e.g. syntax highlighting for the PEP 722 comments or any automated tool support whatsoever, because if I’m bringing additional dependency management tooling into a project then I now have a project with things like separate development and runtime requirements and have outgrown the use case this is intended for.
Is saving a trivial amount of work for tool implementers seriously more important than what users of Python actually need? I could write a parser for the proposed PEP 722 format in less than an hour (conservatively), given that the hardest part is the actual PEP 508 specifiers, for which parsers are already available. And if I did want to reuse existing code that supports pyproject.toml, once you have a list of specifiers any dependency handling flow for that list is exactly the same.
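For a sense of scale, here is a minimal sketch of such a parser, assuming the ## Script Dependencies: block format from the PEP draft and the third-party packaging library for the PEP 508 parsing (the helper name is mine, and the block handling is deliberately simplified):

from packaging.requirements import Requirement  # existing PEP 508 parser

def read_dependency_block(path):
    """Collect requirements from a '## Script Dependencies:' comment block."""
    deps, in_block = [], False
    with open(path, encoding="utf-8") as f:
        for line in f:
            stripped = line.strip()
            if not in_block:
                in_block = stripped == "## Script Dependencies:"
            elif stripped.startswith("##"):
                body = stripped[2:].strip()
                if body:  # tolerate blank '##' lines inside the block
                    deps.append(Requirement(body))
            else:
                break  # the block ends at the first non-'##' line
    return deps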
I agree with those 2 points (very important in my opinion):
I do not understand why tools like Poetry or Hatch would care about PEP 722 in the short term. Maybe there is a use case, but it is not obvious to me just yet. At least, it is not my expectation at all that Poetry or Hatch should support PEP 722.
There is a whole world of things that can be written in a pyproject.toml file (the [tool.XXX] sections and [build-system]). It is already quite unclear how [project] could fit in a single-file script use case (see also “Projects that aren’t meant to generate a wheel and pyproject.toml” again). But I get a headache thinking about what would happen if we said everything that goes in pyproject.toml can now be written inside any Python script. What if users expect all tools that store their config in [tool.XXX] to also support the embedded version? Would that be the expectation from the PEP? If it is a subset of pyproject.toml that is allowed to be embedded, which subset?
Of course, I can see the appeal of having the configuration settings of things like linting, formatting, or maybe even whole packaging metadata (including the build system) in a single-file script, but that is a very different scope from what PEP 722 is aiming for. And I guess theoretically it could work, and if it were to happen that could be great. But in practice I can see quite a bit of chaos happening.
If people want to pursue this path, fair enough, I am not against it (for what it’s worth), but clearly it cannot be written within a week (it might well be that the “within a week” was not meant literally, but for sure don’t take my quoting of it literally). I believe there is a whole bunch of things that need to be clarified; in my opinion this would be on a completely different scale compared to PEP 722. If it has to be, then let’s drop PEP 722 and restart from scratch with proof-of-concept implementations, soft approval from tools, and so on.
Aside: Maybe I am wrong, but I feel like people are writing pyproject.toml when they actually mean only its [project] section, and it is a bit frustrating to me, because then it feels like people are talking past each other: good arguments are made but ignored, and bad arguments are made but not refuted. I know I have written this a couple of times already in these past few days, but I feel it is important enough to repeat.
I’m quite confused by the dismissal of the fact that tools like Hatch, Poetry and Visual Studio Code will most certainly be asked to support this. This will never be implemented in Python itself so a tool will be a requirement. And then if, as has been said, this is only for script runners then what tools are being talked about other than pipx? If this is such a small use case then why are we standardizing? Just use pipx.
Alternatively, if this is not just for some theoretical large number of existing script runners then this will most certainly be implemented by the projects I mentioned.
If we take Hatch as an example, the interface would be simply hatch run /path/to/script.py [args] and Hatch will manage the environment for that script. You would also be able to enter a shell into that environment like other projects by doing hatch -p /path/to/script.py shell since the project flag would learn that project metadata could be read from a single file.
Brett can speak more about Visual Studio Code, but I assume (especially since he seems to be anticipating scheduling work) that they will implement this, and as part of that I would be extremely surprised if syntax highlighting were not planned, since they take design very seriously.
I don’t think that use case is being dismissed, it would definitely make sense for project management tools that can run projects to also be able to run single-file scripts.
The pushback seems to be on the suggestion that we need all the capabilities (and complexity) of pyproject.toml (or even just the standard project metadata fields), when it’s not clear that anything beyond dependencies (and maybe requires-python) would make sense for quick scripts that will not be distributed on PyPI or installed as a dependency.
If pyproject.toml can be embedded in a single-file script, then all tools that read configuration might now get requests to read their configuration from a script instead of a separate file. If only dependencies can be embedded in scripts, then tools that run scripts will get requests to support that, but otherwise the current configuration method remains the same. By the time you need all the features of pyproject.toml, you may as well write a pyproject.toml next to the script file, but that’s not needed in the case being described here.
I don’t anticipate any significant updates to this version - most of the arguments have been made by this point, and I’ve done my best to provide responses to them all in the PEP. I’ll give people a chance to read the new version and offer their comments. If there’s anything further that comes out of that which needs a change to the PEP I’ll do that, and I’ll then submit the PEP for approval. I’ll leave it up to @brettcannon as to how long he wants to wait for any other proposals to be submitted at that point.
The rest of this message is in response to @ofek’s comments since my last post. Sorry for making this a “dual purpose” post, but I don’t want to wait another 8 hours to respond.
Thanks for this - I was going to ask how you expected hatch to need to support PEP 722, but this answers that question. So basically, you plan on having hatch implement its own version of pipx run. That’s fine, and I guess it fits with the “one tool to do everything” philosophy, but I’m not sure I’d consider it a critically important feature to add. I don’t think I’d personally bother using a heavyweight tool like hatch in the shebang line of all my personal utilities. Something lightweight like pip-run seems far more appropriate to me. But yes, I do know that’s a personal view.
But I really don’t see why you’re so concerned about the cost of adding it. Obviously there’s all of the environment management, but that has nothing to do with this PEP. The only impact the PEP would have on how a hatch run command would work is the code to read dependency data from the script. That is literally less than 25 lines of code, including imports and comments. The reference implementation is in the PEP, and you’re welcome to copy it. I’d honestly be impressed if you could write code to read a TOML form of the dependency data in less than that.
Even if there’s existing code to read pyproject.toml, that code wouldn’t work directly with an embedded version of that file. Presumably the embedded version would be written as a comment block, so the existing code would need modifying to ignore that. Again, I’m finding it difficult to imagine that additional code would be more complex to add than copying a 25-line implementation of PEP 722.
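For comparison, here is a minimal sketch of what that modification might look like; the # /// delimiter is a made-up placeholder for whatever comment-block convention got specified, and the function name is mine:

import tomllib  # Python >= 3.11

def read_embedded_toml(path):
    """Pull a TOML document out of a '# ///' ... '# ///' comment block."""
    lines, collecting = [], False
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip() == "# ///":  # hypothetical delimiter
                if collecting:
                    return tomllib.loads("".join(lines))
                collecting = True
            elif collecting:
                lines.append(line.removeprefix("# "))
    raise ValueError("no embedded TOML block found")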
So I’m baffled as to why this PEP is considered such a huge impact on such tools. Yes, I can see that adding a command to implement a “run a script” command is a non-trivial new feature, but that’s true whatever syntax is involved. And no-one is forcing you to add such a feature - this feels like another example of people assuming that the mere existence of this PEP will cause a huge new level of interest in commands to run single file scripts.
Yes, I hope this PEP encourages innovation and experimentation in the area of tools to run scripts with their dependencies. Yes, I would like to see better solutions than we have now. But no-one is forced to work on this - there are projects like pip-run and pipx working on it, and providing a solution for now. This PEP will (hopefully) help projects that want to focus on mechanisms, by providing an “off the shelf” data format to start from. But I doubt it’s going to trigger a massive groundswell of interest that results in users demanding that all of the existing Python workflow tools add support for running scripts.
Based on the format suggestion I made, a quick attempt yields 20 lines, not code-golfed, with basic error handling[1].
import ast, tomllib, sys  # assumes Python >=3.11

# get docstring; does not need to import modules used by myscript.py!
a = ast.parse(open("myscript.py").read())
doc = ast.get_docstring(a)

# get toml-in-docstring
def cut_toml(doc: str) -> str:
    res, count = [], 0
    for line in doc.splitlines():
        if count == 1:
            res.append(line)
        count += int(line.strip() == "```toml")  # marker up for bikeshedding
    if count != 2:
        raise ValueError("Needs to have exactly 2 markers for embedded toml!")
    # last line of res is second marker
    return "\n".join(res[:-1])

# off to the races...
reqs = tomllib.loads(cut_toml(doc))
If you take a look at my example, there is no comment block. Not saying that this is the be all and end all, but at least there is a concrete suggestion that doesn’t need extra characters like # in front of the requirements.
it should be clear that this is for illustrative purposes, not a production-ready implementation ↩︎
Thank you for the detailed arguments you put into the PEP.
I will not respond to everything (to avoid repeating already-made arguments).
I do not agree that TOML would be a significant learning burden. In practice, all the beginner-type people are just going to copy-paste the format found on StackOverflow or wherever, so whether it is actually TOML under the hood or some other special format makes little difference. The format specified in the PEP is not free of cognitive overhead either (for example, the need to double the # for comments, or the exact keywords Script Dependencies: with a colon and case-sensitive, are things you could forget just as well as the [project] brackets or whatever part of TOML). To me, the two options seem comparable on that front.
I don’t really understand the argument about TOML being more complicated to edit while retaining comments and style. Yes, it’s a burden, but aren’t most of the tools which are going to implement this spec tools that already need to deal with pyproject.toml files? Ok, that’s not the case for pipx, but it doesn’t seek to edit the file IIUC. The existing tools I’d expect to implement this are things like Hatch and VSCode. They already have this need for full-fledged projects. The users and use case for single-file scripts aren’t the same as for full-blown projects, but the tools largely are, aren’t they? (And if they aren’t, isn’t it a long-term goal to make them so?)
Most importantly, I would like to comment on this:
By reserving metadata types starting with X-, the specification allows experimentation with additional data before standardising.
and this:
While I am aware that “you should” statements can be frustrating from someone who is not [1] a meaningful contributor to packaging, I would like to say that, as a user of Python packaging, the last thing I want is more innovation and experimentation for tool UX. There are already way too many different tools for managing full-fledged packages, and the resulting confusion is one of the main problems with packaging as it stands. Metadata for single-file scripts has comparatively little prior art, so there is a unique chance to get it right from the start and not have lots of diverging tools and formats.
I don’t really understand the argument about TOML being more complicated to edit while retaining comments and style. Yes, it’s a burden, but aren’t most of the tools which are going to implement this spec tools that already need to deal with pyproject.toml files? Ok, that’s not the case for pipx, but it doesn’t seek to edit the file IIUC. The existing tools I’d expect to implement this are things like Hatch and VSCode. They already have this need for full-fledged projects. The users and use case for single-file scripts aren’t the same as for full-blown projects, but the tools largely are, aren’t they? (And if they aren’t, isn’t it a long-term goal to make them so?)
My tool viv, and pip-run, fall I think into this category of not intending to serve the same purpose as project managers like hatch, poetry, or pdm. I don’t already handle pyproject.toml because it’s outside the scope of my intended feature set, which is to make using/running existing Python packages or scripts easier.
An admittedly self-imposed constraint on my own package is that, besides pip, I will not use any libraries outside the standard library, including a separate TOML parser.
I have provisionally added support, based on the reference implementation, for the style proposed in PEP 722, as an additional example to have alongside pipx and pip-run.
The example script from the proposal could be run (on *nix machines) with the below command: python3 <(curl -fsSL viv.dayl.in/viv.py) run -s https://raw.githubusercontent.com/daylinmorgan/viv/main/examples/pep722.py
I don’t want to let this drag out by having 5 competing PEPs, because I have already been messaged by someone wanting to propose yet another alternative. The fact that I’m willing to even consider another PEP is a bonus.
I only have so much time and MS is already being nice enough to let me do this on some work time and spend the money to do user studies (because they are not free), and that’s not counting everyone’s salary on my team when they are doing this instead of working on other things. So anything taking more time – such as having to have multiple user studies to cover every proposal out there, more for me to read and consider, etc. – does not come for free for me.
I think I have done it before. Usually, though, delegates just refuse to consider another PEP and force everyone to agree.
I don’t think you mean anything negative about this, but I feel like I’m being questioned/pressured unfairly to have to justify how I’m trying to be accommodating to this alternative proposal, more than some people would be if they were in my position. If you want me to consider an alternative to Paul’s PEP as a PEP delegate simultaneously, these are my stipulations. If Paul would rather ask someone else to be PEP delegate who has more time to consider more PEPs, then that’s fine and I won’t be offended if he chooses someone else (but I don’t expect he will, since I’m in a unique position to help with this topic; not everyone has the time or inclination to be put in the time-consuming, stressful position of being a PEP delegate).
Not really. @ofek has been around long enough to know what goes into a PEP, and Paul already wrote a PEP that can be referenced from Ofek’s PEP, which means the PEP can very much focus on the concrete proposal instead of the reason for wanting any of this. Now if Ofek tells me he is about to go on vacation or something, then he and I can talk about that (he knows how to reach me and knows he can chat with me if he ends up needing more time).
No worries!
I appreciate that and I don’t want to be in that position either if it can be avoided. But I will say upfront I will reject all PEPs if a specific one doesn’t make sense. No one writing a PEP should have any illusion that just because they put the work in their PEP it means it will get accepted (go check out peps.python.org and see how many of my own PEPs have been rejected to know that it very much happens).
That’s going to be up to Ofek to clarify in his PEP.
If you look at it from the perspective of a child in a programming class at school I can see where people are coming from. A teacher might get their students to first install Python. But then what comes next? Is Hatch going to make the most sense, or some simpler tool (for instance, if this is simple enough to implement you could make it work with a .pyz and have a custom experience for that class alone)? At that point do you view it as “graduating up” to Hatch, or do you simply start with Hatch?
Thanks so much!
@ofek , do you think you can have a draft up by Monday, August 14th?
Nice! Could this be split into a separate thread? I don’t think it’s reasonable or fair to mix the two discussions, and I don’t think having “slow mode” on for discussion of a new PEP is a good idea. @davidism I know you’re a moderator, and apparently I’m not allowed to tag the moderators group as a whole, so maybe you could do this if no-one else sees it?
Regarding the PEP itself, I will note that (not surprisingly!) it directly clashes with a number of the “rejected alternatives” in PEP 722. I’d really rather not post all of the arguments from that PEP into Discourse, just to re-hash old debates, as that will waste a lot of everyone’s time. Can I therefore ask that you respond in your PEP to all of those points (maybe in a big “Rejected Alternatives” section of your own?) as if I had raised them all here. Even if all you want to say is that you don’t agree with the objections raised in PEP 722, then could you do that explicitly so that it’s clear that your position is that the objections I give are simply a matter of opinion that you disagree with?
Also, could you make it clear in your PEP that the whole “pyproject.toml for projects that don’t build into a wheel” discussion is still unresolved, as your PEP rather critically depends on the outcome of that…
I’ve added a couple more points to the PEP draft as review comments. I know that’s not the ideal place to do so, but until your PEP has a dedicated thread, that seemed like the best approach.
Other posts
Some other points, related to earlier posts about PEP 722. Again, sorry for the “combined” post, but I didn’t want to wait another 8 hours to cover these.
You did see the point I made in PEP 722 that we can’t assume all clients will have access to the Python AST module? And that the Python 3.11 AST module (for example) might fail to parse Python 3.12 code?
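To illustrate that second point, here is a quick sketch; the type-parameter syntax used is new in Python 3.12, so the stdlib parser in 3.11 or earlier rejects the script outright:

import ast

# Hypothetical script using PEP 695 type-parameter syntax (3.12+ only).
source = "def first[T](items: list[T]) -> T:\n    return items[0]\n"

try:
    ast.parse(source)
except SyntaxError:
    # On Python 3.11 and earlier this branch is taken, so an AST-based
    # reader cannot extract metadata from an otherwise valid script.
    print("host interpreter cannot parse this script")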
I assume you disagree with those points, so I guess yeah, your code seems fine if you want to take that stance.
And most of what’s on such places will not use TOML for quite some time yet, as it will date from before that was added as a standard. So copy/paste won’t be enough, they’ll have to add a TOML wrapper. As opposed to PEP 722, where they will have to add a header, and a ## to each line. Yes, both need editing, but IMO PEP 722 is easier.
It’s not, but there are a number of other ways of writing the latter:
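For instance, here is a sketch using a hypothetical rich requirement, showing three spellings that tomllib decodes to the identical value:

import tomllib  # Python >= 3.11

# The same PEP 508 requirement as a basic string, a multi-line basic
# string, and a multi-line basic string with a line-ending backslash.
variants = [
    'dep = "rich >= 13.0; python_version >= \'3.8\'"',
    'dep = """rich >= 13.0; python_version >= \'3.8\'"""',
    'dep = """\\\nrich >= 13.0; python_version >= \'3.8\'"""',
]
assert len({tomllib.loads(t)["dep"] for t in variants}) == 1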
(Yes, I know the last one is pathological, but for a requirement with complex markers someone might do that sort of thing).
The point is that TOML is a complex format, and saying it’s easy based on the fact that you assume people will “only encounter the easy bits” is at best risky, and at worst naive.
Well, the fact that very few libraries offer editing suggests that it’s true. Personally, I wouldn’t even know how to spec code to parse, change and write arbitrary TOML without losing at least some formatting choices (for example that string split over 2 lines with a backslash…)
I have no idea what tools you are thinking about, but most of the ones I am thinking of[1] wouldn’t.
I want to address this specifically, as I think it’s a key point in the whole “why are you ignoring the survey?” argument as well.
I absolutely agree that the packaging ecosystem is confusing and unwelcoming for beginners at the moment. There are multiple tools claiming to be the “right way” to do workflow, and they are all competing with each other. New users are faced with everyone telling them that “tool X is best, you should use it” (and often “you should change your workflow to fit how it does things”!). And they don’t even agree on what X is! Having a single, well-understood and “official” approach would be a significant improvement. But that’s a long way off. I doubt Ofek would be willing to abandon hatch to support poetry, for example, and I’m sure the same is true the other way around - so how do we choose?
But use cases like single-file scripts are largely outside of this conflict. There are currently no well-known solutions in this area (pip-run is niche, and pipx support isn’t released yet). If I’m honest, I think that having poetry and hatch support PEP 722 would worsen the situation, because it would suck people writing single scripts into the whole “which workflow tool” debate. That’s why I chose the terms “innovation” and “experimentation”. I’d like to see smaller tools look at solutions that meet user needs in this area, before tools like hatch come along and say “you should all run your scripts in ~/.local/bin with hatch, and configure your environment cache like this, and actually, why not stop writing scripts and just define your scripts in pyproject.toml, and now that you use hatch why not switch your poetry projects to hatch…”
(Sorry @ofek, that’s a dreadful caricature of some of the more evangelical advocates of workflow tools - I only used hatch because it’s the tool I know best, and because normally, everyone has a go at conda over this. I certainly don’t want to suggest this is common, or that hatch is a particularly bad example. But it does happen.)
But putting aside hyperbole, I confidently expect that ultimately, there will be a single “best practice” for running scripts. Personally, I’d like it not to be one of the big workflow tools, but for it to get built into something like the py launcher. But before that can happen we need developers to experiment, in order to work out the best ways of doing things like managing and caching environments[2]. I didn’t mean anything more than that, and I certainly wasn’t advocating for multiple solutions in the longer term.
Actually, I was going to make this a footnote, but I want to call it out explicitly. One of the reasons I see “must be usable in tools not written in Python, that can’t easily parse Python source” as an important requirement is because I’d ultimately like to see the py launcher handle this sort of “script runner” role. And the existing launchers are written in C and Rust.
Also, as per my previous comment, if we can get some sort of “run script with dependencies” into the launcher, why use a workflow tool intended for project developers, rather than just the standard launcher? (And if the launcher is the long-term goal but doesn’t handle this yet, why not use a small dedicated tool as an interim solution?)
Runners like pipx and pip-run, audit tools that scan a directory to find out what libraries are used, environment builders that build a combined environment to support a set of scripts… ↩︎
And by the way, the priorities and trade-offs for managing environments for scripts are very different than those for managing project environments. ↩︎
Hi all, as someone who has been learning a ton about how packaging works in Python (and reading a lot of these long threads) in recent months, I’m very excited to see one of these discussions happening live and hoped to chime in with my experience.
As a starting point, I really like the idea of PEP722 and would desperately like to see it adopted.
I’ve also been a strong convert to Hatch (thank you Ofek!) and use that exclusively for all of my own projects now, but see this use-case as completely unrelated to what I use (and am actively teaching a lot of people to use) Hatch for.
I work at a manufacturing company where I’m the technical lead for a small team doing data analysis for manufacturing lines in Python and presenting a lot of those findings in a custom built web app (using Django), so have decent exposure to both the data science and web dev sides. Most of the people on my team do not have a software background at all, they are instead primarily manufacturing engineers coming from plants who are learning Python on the fly.
Also, I’m currently in a master’s program for data science that is pretty heavily targeted at a similar demographic (people who have an undergraduate engineering degree, worked in an industrial or business setting, and now want to pivot into data science/analysis). Again, a lot of smart people coming in who often only have a basic familiarity with Python from a course or two in undergrad maybe a decade ago, but who are now having to learn it again (while also often having to learn R for a different class at the same time).
I think a lot of people here are vastly overestimating the understanding of beginning users, even people with backgrounds in technical fields like engineering.
Most of the people I work with have no idea what a shebang line is, what the $PATH variable is, what virtual environments are, what TOML is, etc.
Almost all of them have a folder (usually named “Python Scripts”) full of random scripts that do one simple thing, because bash/PowerShell/etc. are much, much, much harder to learn than Python, and the primary way the scripts are shared with other people is by email/Teams/Slack (the actual file if I’m lucky, often just all of the contents of the file copied and pasted into a message).
Plenty of people who have asked me for help with something in Python could not tell me how they even installed Python in the first place and have no idea how to manage dependencies. No one has the slightest idea how to properly package up and distribute a script as an installable wheel with entry points defined.
Part of what has been confusing for a lot of the people I work with and why packaging seems complex and not user-friendly, is that there is essentially no on-ramp between:
“Here’s a single file script using only the standard library that you can run with python my_script.py”
and
“You need to have a project directory with this special file in a format you’ve never heard of before, preferably with a nested src layout that you don’t see the point of, and then your code needs to be built into another format you’ve never heard of before, and then to use it you need to create a virtual environment following these steps and remember to activate that environment each time you want to use it”
The simpler the solution, the better it will be for all of the people I know (and I don’t think it could really get any simpler than the current draft of PEP 722).
The opening example in the PEP is just about perfect, my only suggestion would be to use a more “magical” looking set of beginning characters, maybe #% to somewhat copy the Jupyter world.
# In order to run, this script needs the following 3rd party libraries
#
## Script Dependencies:
## requests
## rich
It reads like plain English, it’s incredibly simple to parse, and even if you don’t know that there are special tools that could run this script for you, it’s not the same huge leap to look at that and say:
“I know about pip, I should probably do pip install requests and pip install rich”
I don’t think anything using TOML is going to be beginner-user friendly, since all of the beginners I’ve worked with do not even know what TOML is.
This is true, but do they need to know what TOML is to understand the equally-magical formatting in e.g. this post? It seems like there’s a balance to strike between “human-readable format” and “clearly a format, not just prose”. In my experience[1], it just takes a while for people to understand that exact formatting is important.
It feels like the discussion has bounced back and forth between a desire to standardize and a desire for experimentation. If the tools haven’t been written and the best practices are unknown, why codify the format in a PEP? It just seems like a constraint on future experimentation, even just for yourself. [2]
It makes sense to figure out a standard format to allow multiple tools that do different things with the same info. But when those tools don’t exist and don’t even have an author planning to create them, there aren’t enough use-cases or feature requests to figure out what the right format should be.
I totally get the desire to standardize early, before an ecosystem of different things develop and standardization gets harder, but maybe having pipx out there for a while is necessary to clarify what people want.
FWIW tomli parses all of these correctly and consistently, even the pathological case if there isn’t trailing whitespace (and there shouldn’t be).
similar to yours, in that I work with a lot of Python novices ↩︎
and if no one else actually bothers to write a tool for this, then hey, whatever you implement is the standard. ↩︎
We don’t need all the capabilities (and complexity) of pyproject.toml (or even just the standard project metadata fields), when it’s not clear that anything beyond dependencies (and maybe requires-python) would make sense for quick scripts that will not be distributed on PyPI or installed as a dependency.
If pyproject.toml can be embedded in a single-file script, then all tools that read configuration might now get requests to read their configuration from a script instead of a separate file. If only dependencies can be embedded in scripts, then tools that run scripts will get requests to support that, but otherwise the current configuration method remains the same. By the time you need all the features of pyproject.toml, you may as well write a pyproject.toml next to the script file, but that’s not needed in the case being described here.
If pyproject.toml had the feature of building multiple different binaries from the same directory, that would help, but there’s no such thing. I would prefer that as the solution rather than embedded TOML.
Basically, because Brett expressed an interest in the format being standardised, so that VS Code could support it in some sort of “add a dependency to this script” action. So there is an interest from other tools in supporting scripts that declare their own dependencies, but only if it’s standardised (which is something I can understand, pip has a similar policy).
Not really. It’s come up a few times, but people seem to keep missing this point - the target audience is people writing scripts where the important point is explicitly that there’s no need to have a “build” step, they are just “write and run”.
FYI I plan to add some form of support to the Python Launcher for Unix for whichever PEP gets accepted (probably via a py run subcommand, once subcommands even exist and I have had time to verify my in-development py pip subcommand works appropriately). As for the Windows launcher, the tricky bit would probably be how to do the installation for such an official tool (i.e., do you assume pip is installed?). I can cheat with the Unix version since I get to do what I want (which is probably download and cache pip.pyz like I am doing with py pip).
To be clear, Jupyter doesn’t have some special support for #%, correct? I know about magic commands via %% and so I assume you’re suggesting a take on that and not something already supported?
I will only respond to this because I think it actually condenses the core of the issue under discussion: where we want to be in 10 years, what world we’re trying to build.
In my ideal packaging world, there is one tool that
most people use, beginners and advanced users alike (but it will never be used by everybody, and that’s fine)
can run scripts
can run REPLs
can install Python and manage Python versions
can manage projects
can install projects
can manage lock files
etc.
reads one data format (TOML, PEP 621) for all metadata
(That is why I’m fond of Rye, which, in spite of being “yet another tool”, is the first tool to date, AFAIK, to handle the “can install Python and manage Python versions” bullet.)
(And in contrast, there can be as many build backends as useful, though preferably one “canonical” build backend for pure-Python projects, e.g., hatchling.)
I think the “too many tools” mantra is well-accepted by now. What are your reasons to think that in an ideal world of packaging that we arrive at in 10 years, there are still different tools for scripts and for projects[1]? How does that allow better UX?
In a footnote, you write “And by the way, the priorities and trade-offs for managing environments for scripts are very different than those for managing project environments.”. Can you explain what you mean by that?
I’ve hesitated to write “scripts and projects” because I view the line between the two as blurrier than the PEP text. ↩︎