The `~/.python` directory should be either in `~/.local` or `~/.cache` in the modern Unix XDG world.
Another use of non-local environments is for contexts such as Docker, where one really wants to isolate the system dependencies from the application, yet the application environment is not confined to a particular part of the filesystem. I know that some people are happy to use the `-u` feature for this, but there are legitimate reasons to use a virtualenv in Docker containers, such as multi-stage builds.
It seems to me that it’s a fairly uncontroversial statement to point out that both points can be true:
- “.venv” is highly convenient
- “.venv” does not cover the span of legitimate uses
Something that I am struggling with is “what problem are we trying to solve?”. I think the answer is twofold:
- `pip` should probably default to using isolated environments for installs.
- We need to lower the friction to getting started with a Python environment.
So I think we’d be setting out a standard here so that both `pip` and `python` would discover and use the appropriate environment. Is this a reasonable conclusion, @brettcannon?
If so, there are two parts to this:
- Environment discovery
- Environment creation
@dstufft makes a good point that we want to avoid lock-in to a single environment. I really like @brettcannon’s discussions on both
_py_launcher_. I wonder whether they would have benefited from greater visibility, e.g. here on Discourse? The idea of letting this be implemented by binaries on
`$PATH` that facilitate environment discovery seems highly promising. We’ve mentioned alternative platforms as one motivational use case, but I am not confident that there wouldn’t be other beneficiaries of this flexibility.
For example, using just a discoverer mechanism, the bundled discoverer might look like:
```python
import os
import pathlib

if __name__ == "__main__":
    # VIRTUAL_ENV
    if 'VIRTUAL_ENV' in os.environ:
        interpreter_path = (pathlib.Path(os.environ['VIRTUAL_ENV']) / "bin" / "python").resolve()
        print(interpreter_path)
    # .venv
    local_interpreter_path = pathlib.Path.cwd() / ".venv" / "bin" / "python"
    if local_interpreter_path.exists():
        print(local_interpreter_path.resolve())
```
This is probably too primitive; I’d perhaps want to include names in this interface as well.
Conda could then bundle their own discoverer. @dstufft’s example of multiple-platform environments would then either be a new discoverer that users need to install, or a modification to the example I give above. I am not worried about the details here, as you might imagine. Now, PyCharm et al. can use this mechanism to identify the environments available for a particular project. Crucially, unlike a physical in-tree mapping, this system allows anyone controlling `$PATH` to add new environment discoverers, which feels like a much more flexible, extensible approach.
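To make the consuming side concrete, here is a minimal sketch of what a tool like an IDE might do, assuming a purely hypothetical convention where any executable on `$PATH` named `python-discover-<something>` is a discoverer that prints one interpreter path per line (the prefix, function names, and output format are all my inventions, not any agreed standard):

```python
import os
import subprocess

# Hypothetical convention: executables named "python-discover-*" on $PATH
# are discoverers that print one interpreter path per line.
PREFIX = "python-discover-"

def find_discoverers():
    """Collect unique discoverer executables from every $PATH entry."""
    seen = {}
    for directory in os.get_exec_path():
        try:
            entries = os.listdir(directory)
        except OSError:
            continue  # skip unreadable or missing PATH entries
        for name in entries:
            path = os.path.join(directory, name)
            if name.startswith(PREFIX) and os.access(path, os.X_OK):
                seen.setdefault(name, path)  # first hit on PATH wins
    return list(seen.values())

def discover_interpreters():
    """Run each discoverer and gather the interpreter paths it reports."""
    interpreters = []
    for exe in find_discoverers():
        result = subprocess.run([exe], capture_output=True, text=True)
        interpreters.extend(line for line in result.stdout.splitlines() if line)
    return interpreters
```

A real design would also need to settle ordering and priority between discoverers, and a richer output format (names, versions); this only illustrates the `$PATH`-based extension point.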
Environment creation is more tricky. This might be where `python` has a default that, in the absence of any discovered environments, creates a `.venv`.
I think this should work with tools like Hatch that support their own environment management. I could see Hatch defining a discoverer that exposes all of its environments, with priority given to the first (default) env.
Let me finish on this note: I’ve not been hugely involved in these conversations, and I might be repeating old lines of discussion or missing some obvious points. If so, let me know!
I am a physicist, and I use `.venv` in all of my projects (except a Docker environment that runs the “core” part of my analysis package, as @tacaswell describes) ↩︎
I suppose that they were narrower scoped conversations at the time of writing, but have since broadened given the substantial overlap of these new discussions. ↩︎
Of course, this means that these discoverers need to be on the system `$PATH`, which might be a pain if one wanted to use a discoverer from PyPI … but that’s already a chicken-and-egg problem anyway. ↩︎
From what I’ve heard, I’m feeling really left out by the fact that @brettcannon’s launcher is for Unix and the Windows launcher shipped with Python is missing all these neat features.
When I need to get something done quickly, I always start by creating a virtual environment at `.venv`. I even wrote a small tool to do just that. So it would be the perfect default location for me. Clear winner, no debate.
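For what it’s worth, the stdlib `venv` module covers the “small tool” case directly; a minimal sketch (the helper name is mine, not the poster’s actual tool):

```python
import venv
from pathlib import Path

def make_default_env(project_dir=".", with_pip=True):
    """Create an in-tree ".venv" for a project, the convention discussed here.

    with_pip=True bootstraps pip via ensurepip so the environment is
    immediately usable for installs.
    """
    env_dir = Path(project_dir) / ".venv"
    venv.create(env_dir, with_pip=with_pip)
    return env_dir
```

This is essentially `python -m venv .venv` wrapped in a function, which is what makes `.venv` such a low-friction default.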
Anytime I do actual regular work it is in `tox`-managed virtual environments. So I follow tox naming rules, no surprise. And I guess that if I were to use a “dev workflow tool” such as Poetry, Hatch, or PDM, then I would not need to know where the virtual environments are because I would always use their
If I need to do anything that is a bit more involved (maybe working on two libraries at the same time to debug something tricky, or deploying something on a production machine, or anything else a bit out of the ordinary) where a single `.venv` does not cut it, then I will create virtual environments by hand, with names and locations that I will pick on the spot depending on the actual task, which can be anything. I do not think there is any rule or logic that can be predicted here, and in my opinion trying to make up rules here seems like it would be a waste of time and energy.
If there was something like a `.venv` file (or any kind of pointer to the actual environment), then this pointer would need to be kept up-to-date, which means we would most likely have a tool to manage its content, and this tool should probably offer
I imagine many of us do something very similar. For posterity, I end up doing something like
```shell
cd $(mktemp -d)
echo "layout python3" >> .envrc && direnv allow .
```
probably 2+ times per day. So I can see a need for this to be immediate, e.g. if `venv` was used by default when no environments could be discovered.
I think the default structure of what Hatch does is ideal:
```python
from base64 import urlsafe_b64encode
from hashlib import sha256

hashed_root = sha256(str(project_root).encode('utf-8')).digest()
checksum = urlsafe_b64encode(hashed_root).decode('utf-8')[:8]
virtual_env_path = data_directory / normalized_project_name / checksum / venv_name
```
The data directory in this standardized approach would be `platformdirs.user_data_dir('.python', appauthor=False) / 'env'`.
Note that it is necessary to incorporate the path to the project because the same name might be used elsewhere, perhaps for testing. IDEs like VSCode have that information necessarily so they would be able to resolve the path to the virtual environment.
Is Hatch able to detect (and possibly garbage collect) orphaned virtual environments?
I wonder if there are cases where I would want to run a third-party tool outside of `hatch run` or `hatch shell` (so that this tool needs to know Hatch’s naming logic for virtual environments). And if I understood correctly, the `venv_name` part is a user-defined variable (that cannot be inferred by a third-party tool), right?
Orphaned as in the project directory no longer exists? Not yet.
That is a good point but not exclusive to Hatch as the name of the environment would need to be known in the case of all tools. I think the solution is still Brett’s Python launcher idea where there’s some communication mechanism that each tool exposes.
This thread is just about what the path should be, so I thought I would chime in, since I think I came up with the most appropriate way to isolate them.
Ok, that was just out of curiosity. Seems like everything is in place to make it possible anyway. Maybe it is an idea for a plugin.
I understood it as: the point is that the path to the environment should be inferred without external input. So in the case of Hatch, it’s all good up until the `venv_name`. I think Poetry has (or had, last time I looked into it years ago) only one environment per project (and per Python interpreter), so it can be inferred (there is also some kind of hash of the project’s path). I do not know how PDM does it, except in the
One environment per project will not work for standardization.
Right, I re-read the proposal and the discussion. I understand better now.
And enough people feel that way that I don’t think we can ignore that use case overall. This is actually why I started this conversation about whether we can come up with some guidelines for tool creators and integrators to follow beyond just `.venv`, so we can support the multiple-environment scheme (although it honestly seems to mostly be in some global directory instead of being local).
This is why I’m proposing the `.venv` file idea, as that works around the symlink issue.
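To make the pointer idea concrete, here is a minimal sketch of how a tool might resolve `.venv` whether it is an in-tree directory or a file pointing at an environment stored elsewhere. The exact file format is an open question in this thread; this sketch assumes the file holds a single path, and the function name is mine:

```python
from pathlib import Path

def resolve_venv(workspace):
    """Resolve a workspace's environment under the proposed ".venv" scheme.

    A ".venv" directory is the environment itself; a ".venv" file is read
    as a pointer whose contents name an environment stored elsewhere.
    """
    marker = Path(workspace) / ".venv"
    if marker.is_dir():
        return marker
    if marker.is_file():
        target = Path(marker.read_text().strip())
        return target if target.is_dir() else None
    return None
```

Because the file is read rather than followed by the filesystem, it sidesteps the platform and tooling problems that symlinks bring.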
How important is this to people? I assume this is mostly for automated cleanup of orphaned environments? We could suggest tools record the workspace the environments are meant for in some text file or something.
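A minimal sketch of that suggestion, assuming a hypothetical convention where each environment directory carries an `owner.txt` (a made-up name) recording the workspace it was created for:

```python
from pathlib import Path

def find_orphans(envs_root):
    """Yield environment dirs whose recorded workspace no longer exists.

    Assumes each env dir contains an "owner.txt" holding the absolute
    path of the workspace it was created for.
    """
    for env_dir in Path(envs_root).iterdir():
        owner_file = env_dir / "owner.txt"
        if not owner_file.is_file():
            continue  # not tracked by this convention; leave it alone
        workspace = Path(owner_file.read_text().strip())
        if not workspace.exists():
            yield env_dir
```

This only catches the deleted-workspace case; a workspace that was moved rather than deleted would still leave a stale recorded path behind.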
So the plan was to always bring that discussion here, but I have been waiting on some critical feedback from … conda: Review the proposal to develop a JSON schema and approach to facilitating environment/interpreter discovery · Issue #11283 · conda/conda · GitHub (I have gotten some tacit confirmation that conda likes the idea).
I am hoping to use the code in the Python Launcher to handle environment discovery in VS Code, which would mean some form of Windows support. So I assume it will happen eventually.
To try and refocus this conversation, my questions for everyone are:
- What do you think of the `.venv` file idea as a cheap, simple way to tie a workspace to a virtual environment stored elsewhere?
- Is there a directory where people would install virtual environments that we can recommend to tools to use?
- Is there some naming/structure scheme within that global directory that we can recommend to tools for having multiple environments for an associated workspace (like what @ofek suggested in Setting up some guidelines around discovering/finding/naming virtual environments - #13 by ofek)?
I get it if the answer to the above questions is “don’t need it”, but I will say this is not a theoretical issue; we have constant problems trying to find people’s environments properly in VS Code, and right now it’s a jumble of custom code per environment management tool we choose to support (which is a similar problem for the Python Launcher). My planned solution is Support a way for other tools to assist in environment/interpreter discovery · Discussion #168 · brettcannon/python-launcher · GitHub (which I will bring here for proper discussion when I’m ready to start implementing it), but I’m not sure whether that’s a bit too heavy-handed for common cases (although it will totally meet my needs and everything I’m asking about). But if all we can agree on is what’s in PEP 704 for the situation where one only needs a single virtual environment and chooses to store it locally, then so be it.
- I think the `.venv` file idea is great. I’ve been using it for a couple of years and it’s been really nice and flexible.
I’ve shared this setup with n=2 novice Python users and they’ve found it really easy to use, in combination with a shell plugin that automatically a) activates and deactivates venvs based on `.venv` in the cwd, and b) suggests creating venvs when entering directories with pyproject.toml or setup.py (and without a venv active).
Orphaned virtualenvs haven’t been a concern for me; it’s been easy to clean them up if I feel the need to. I think an individual tool could easily keep track of which virtualenvs it’s created and detect orphaned ones.
`platformdirs.PlatformDirs("virtualenvs").user_data_dir` seems like a solid choice (although I currently just use
@ofek’s suggestion is good, although I’d hash `sys.implementation.cache_tag` or something in there as well. And maybe use hexdigest instead of base64 for simplicity. Could also be worth hashing in the tool name.
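A sketch of that variant, combining all three suggested inputs (the function name and default tool name are illustrative, not any tool’s actual scheme):

```python
import sys
from hashlib import sha256

def env_checksum(project_root, tool_name="sometool"):
    """Short hex checksum binding an environment to a project path, the
    interpreter (via cache_tag, e.g. "cpython-312"), and the creating tool.
    """
    key = "\n".join([str(project_root), sys.implementation.cache_tag, tool_name])
    return sha256(key.encode("utf-8")).hexdigest()[:8]
```

Two different projects, interpreters, or tools then land in different directories, while the result stays fully deterministic for any tool that knows the three inputs.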
I only brought it up because @pf_moore mentioned he wanted something like this:
To me personally, it’s not very important, and something that could be quite easily handled with a dedicated discovery/cleanup tool. Compared to the UX benefits, it’s IMO an unnecessarily hard problem to solve well in a more integrated fashion.
including all the edge cases of project/folder/path mutation that happen completely outside of the purview of any specific tool, e.g. because the user just drag-and-drops a project folder somewhere else. ↩︎
If I understand you correctly, then, you’re proposing that whilst a “discovery-binary” based approach is the big-picture solution (in your view), the `.venv` simple case is one that we could deliver first and enshrine in a simpler PEP?
i.e., they would form complementary solutions, and e.g. `.venv` support could be realised in practice using the big-picture environment-discovery binary down the road?
I feel that there are both merits and demerits to this idea:
- A smaller scope for the initial PEP is more likely to win people over, provided it does not prevent other uses
- A smaller-scoped PEP is easier to actually implement as a proof of concept
- Enshrining `.venv` in its own PEP is a harder requirement than making it a default part of environment discovery
If my reasoning aligns with yours, then why not limit the scope further to a single environment? To be clear, this would not be the end-goal for environment management (and indeed we’d immediately plan the next phase), just the smallest deliverable that helps the most people.
If only binary discovery were the official mechanism for locating venvs, then we might dissuade people from explicitly looking for the `.venv` location, and instead have them use this more general discovery pattern. This would mean a smaller “surface” to maintain. ↩︎
This is a heavy assertion: that most users only need a one-to-one mapping of environments. ↩︎
I rarely use virtual environments stored outside of the project directory, so my needs are limited here (mostly only for cases when I’m using tools that insist on out-of-tree virtual environments). With that said:
- The `.venv` file seems like a good idea, although existing tools may be confused by it being a file rather than a directory. How about a distinct name, like
- A common directory is more important to me than where it is. But note that the Windows “appdata” scheme uses per-application directories, which means I don’t consider that suitable (I don’t want to have to look in all of `%APPDATA%\VSCode`…, plus the
- I’d rather have a file in the virtual environment naming its “owner”. Having to decode a hashed checksum plus partial name is not something I can do easily (if at all). My typical requirement is “what the heck was I using the environment `C:\Users\Gustav\AppData\Local\hatch\env\virtual\b--0rwWMov\b` for?” Do I still have the source directory? Go on: from that filename, can anyone answer that question for me?
There are lots of other things I’d like to do with shared environments (basically, task-based environments that can be used whenever appropriate for a project), but those are out of scope for this discussion (and don’t seem to be a common pattern people use). For this discussion, being able to programmatically convert in either direction between the project location and the venv location is sufficient for my needs. Having all tools use in-tree venvs would also be sufficient, though
Correct. Basically keep the simple case simple, while making the more complicated possible.
I’ve honestly gone back and forth on this. On the one hand, the `.venv` name is already ignored by a lot of `.gitignore` files out there. But you’re right, there’s potential for some tool breakage. My assumption, though, is that those tools already have to be resilient in the face of `.venv` being anything else, not having appropriate permissions, etc., and so it being a file wouldn’t break too much. But maybe the discoverability of the file would be harmed if it shared the
If it helps, I’d say that tools are much harder to fix than `.gitignore` files. As much as I like `.venv`, it’s already used for a different meaning. So let’s choose a new name, e.g.
I agree; I could be persuaded either way on whether the venv file should be named `.venv` or something different. As you say, either is potentially disruptive.
I’m not sure `.venv` is necessarily in that many gitignore files. Virtualenv adds a `.gitignore` file inside any virtualenv it creates, and I tend to rely on that making virtual environments “invisible” to git. I suspect many other users might do the same.
People are going to have to adapt either way, I guess.
As mentioned in the OP, it is in the default one that most people use: gitignore/Python.gitignore at 4488915eec0b3a45b5c63ead28f286819c0917de · github/gitignore · GitHub