That’s totally fine! Sharing the blog post was to make sure what it proposed seemed reasonable for today (my takeaway being that people are supportive of it). I then tacked on my “I wish we were using a standardized file to record what has been installed”, and that led to this discussion, which gets us to what will be reasonable eventually. You know I’m always supportive of standardizing stuff, so I’m still happy with where this has gone so far!
That would be great! I personally would support standardizing on a way to embed runtime requirements so it isn’t solely a pipx thing and something VS Code could utilize.
And that last point is why we are currently planning to help users get to that stage upfront (at least to start).
A multi-root workspace if you wanted to have all of the problems open at once instead of viewing each problem as an independent project. Otherwise, one big project where you create a package per solution.
Yes, because stage 1 leads you into stage 2 quickly and the leap is small when tooling can help you write down your dependencies. But stage 2 requires a way to write down those dependencies which we currently can’t do in a standardized way, hence us going straight to stage 3 where the baseline use case of being able to share things and not panic if you break your virtual environment is supported upfront. It’s all a question of which frustrations you’re trying to avoid.
I think to make stages 1 and 2 simpler we would need to be able to specify dependencies inside of a script, so the script is self-contained and the guesswork of figuring out what to install is taken out of the equation. After that comes the transparent creation of virtual environments and installation of dependencies as an inherent part of execution. That would take the workflow from:
- Write code (`.py` file)
- Write down dependencies (`requirements.txt`)
- Create virtual environment (`venv`)
- Install dependencies (`pip`)
- Run code (`python`)
to:
- Write code w/ dependencies (`.py` file)
- Run code (`pipx run`)
Both allow for redistribution and reproducible results, but the former leads naturally into more complicated flows while the latter is much simpler and thus has fewer stumbling blocks.
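To make that second workflow concrete, here is a purely hypothetical sketch of what a self-contained script with its dependencies written down inline might look like. The `# Requirements:` comment format is an assumption for illustration only; it isn’t an existing standard, and whether `pipx run` (or anything else) could parse it is exactly what would need to be standardized:

```python
# Requirements:
#     requests
#     rich

# A self-contained script: the dependencies it needs are recorded right in
# the file, so a runner could create an environment and install them before
# executing the code below.
import requests
from rich import print

response = requests.get("https://peps.python.org/api/peps.json")
print(f"Fetched metadata for {len(response.json())} PEPs")
```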
I think the other question with this hypothetical is whether that latter approach is enough to go from “simple, self-contained, and no control” to “requirements file or `pyproject.toml`”? Or is that too much of a leap from stage 1/2 to stage 3? It’s probably fine as long as we never let the in-file dependency list allow for decisions, so no extras or anything else where the user might need to provide input.
I could imagine a world where, if people could specify dependencies in a file, VS Code’s Run button (the green play button in the UI) would inspect the file and do the whole virtual environment creation and installation on the fly, much like `pipx run` would (heck, if we standardized the naming of the temp directory we could even reuse an existing one if people wanted to). That way beginners wouldn’t even have to think about it. We could also provide code actions to help write down any packages necessary when an import clearly isn’t from the stdlib (which is where Record the top-level names of a wheel in `METADATA`? comes into play to help with that). We could even warn the user that we don’t think running the file will succeed because they haven’t written down any dependencies (or they are running without a virtual environment).
One of the trickiest things we have to balance in VS Code is that golden path where we have to guess versus asking the user to participate in making decisions. In general we lean towards the latter because we get yelled at less that way (although, as you can tell from the blog post, we are getting asked to be opinionated to help beginners out). But if we had a more restrictive flow for the simple case where guessing wasn’t a concern, that would make it easier for everyone: we get to follow a standard/common practice that no one will argue with us over, and users get exactly what they were expecting.