PyEM manages project-level virtual environments

I’m not sure where I should post this, but figured I should post it somewhere.

First off, the project: https://github.com/uranusjr/pyem

Install it with pipx install pyem so it is globally available. Plain pip works fine too since it has no hard dependencies right now, but pipx is still nicer. It provides a single command, pyem, which has two modes:

  • pyem venv manages virtual environments in a Python project (marked by pyproject.toml).
  • pyem [cmd] runs [cmd] in the active virtual environment, or another one you specified with --spec.
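
A rough illustration of the two modes (treat the exact argument forms as approximate; the README is authoritative):

    $ cd myproject/                # marked by pyproject.toml
    $ pyem venv add 3.7            # mode 1: manage envs under .venvs
    $ pyem pytest                  # mode 2: run pytest in the active env
    $ pyem --spec 3.7 pytest       # …or in a specific one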

Backstory: I ran into another conversation the other day complaining about how virtual environments are difficult for newcomers to use and a pain to teach. The conversation went on to lament that __pypackages__ (PEP 582) should be accepted and how nicely Node does things, but I started to think the complaints don’t line up with the proposed solution. It does solve the problem to a degree (like Node does), but brings other problems you wouldn’t have with virtual environments.

So the problem people have the most with virtual environments is the activate/deactivate dance. It’s easy to forget until it’s etched into your muscle memory. You can do something like ./venv-3.5/bin/python, but that’s even more confusing for newcomers (those different lib/bin/scripts paths really don’t help either).

Tools like Poetry and Pipenv also manage environments, but they make a lot of assumptions about your workflow (e.g. one env per project at a time). PyEM tries to slot into the middle: it makes as few assumptions as possible, but hides away the most problematic part of virtual environments. It keeps a directory .venvs that holds multiple virtual environments, identified by their backing interpreter (the identifier is called a quintuplet since it has five parts). Environments are added and removed with pyem venv add/remove, which saves some typing over python3.7 -m venv --prompt=$(basename $PWD) .venvs/my-venv-identifier, and the quintuplet is set up so you can identify an environment with short names.
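
For example, on a hypothetical CPython 3.7 on Linux (the quintuplet spelled out below is illustrative, not the exact format):

    $ pyem venv add 3.7
    $ ls .venvs
    cpython-3.7.5-linux-x86_64-9f3a    # the “quintuplet”: five parts identifying the interpreter
    $ pyem --spec 3.7 python -V        # short names resolve against the quintuplet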

There are a few edge cases that I hope to improve (especially how you can’t run shell builtins right now), but it’s in a good enough state to cover my own use cases. I’m interested in hearing how others might use it, and where this idea can be taken forward. I’m especially interested in the quintuplet idea (which is IMO the largest missing piece in PEP 582).

So this is sort of like npx/npm, but for virtual environments?

This has actually caused me to consider adding automatic detection of in-project virtual environments to the Python Launcher for UNIX, and using those implicitly.

Kind of, I guess. A venv for each Python.

If there’s ever any consensus on how to structure that… there’s close to none right now.

How is this different from using tox’s --devenv?

P.S. I usually manage venvs with pyenv-virtualenv, which also lets me set per-folder envs via a .python-version file and auto-activates them when I cd there. Another thing I like is that the actual venvs are stored separately from the project’s file tree.
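
Concretely, that workflow looks something like this (version numbers are only examples):

    $ pyenv virtualenv 3.7.5 myproject    # env is stored under ~/.pyenv, not in the project
    $ cd myproject/
    $ pyenv local myproject               # writes .python-version
    $ python -V                           # the shims auto-activate the env from here on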

The main difference from tox (or most virtualenv tools) is that you don’t activate the env. I feel this is what trips up newcomers the most.

Most virtualenv workflows let you activate once, and subsequent commands are exactly like interacting with the global env. This causes the least friction if the user is already used to interacting with the global Python, but newcomers tend to forget that initial activate step, and then all their subsequent commands act on the wrong environment. PyEM’s approach is to make the venv’s presence explicit in every command, eliminating the initial activate step; if the user misses a step they can correct it right there, rather than restarting from the very beginning (because all the previous commands went to the wrong environment).
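
A sketch of the contrast (POSIX paths; the pyem lines assume the .venvs layout described above):

    # Activate-once workflow: forget the first line, and every later
    # command silently acts on the global environment.
    $ source .venvs/my-env/bin/activate
    $ pip install requests
    $ python app.py

    # PyEM: the env is part of every command, so a missed step is
    # caught and fixed at that one command.
    $ pyem pip install requests
    $ pyem python app.py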

As for pyenv-virtualenv, it’s more personal preference TBH. I dislike pyenv with a passion; the shims have always caused me more trouble than benefit, and the implicit environment switch via an invisible .python-version file is annoying to debug if you misplace one somewhere in the search path. And it is not viable for me now anyway, since it doesn’t work on Windows.


(Edit: Sent too early)

The out-of-project vs in-project env question is an endless debate I’m not going to get involved in anymore (Pipenv went through multiple flame wars on this :stuck_out_tongue:). The two approaches are not compatible, and a tool must take a side. Just choose another tool if it’s a deal breaker.

One problem I see with many Python dep/venv management tools is that each works well on its own, but once you add other tools, things break. For example, it took me some time to figure out how to properly set up Poetry + pyenv-virtualenv + tox. You said you want to make fewer assumptions; maybe that naturally mitigates this problem, but I still suggest testing it with as many workflows out there as possible.

I completely agree, and this was made exactly because I got frustrated juggling multiple tools across projects. I’ve tested it in as many scenarios as I could (on various projects I’m involved in, including ones using Pipenv, Poetry, and Flit), but eventually I can only think of so many ways to do things, and need adoption for the coverage to be comprehensive.

What problems did you run into with that setup? I can imagine Poetry + tox being problematic (since tox doesn’t allow much customisation of the setup step), but Poetry works well with virtualenv in my experience, and shouldn’t cause too much trouble with pyenv-virtualenv either.

I commend you on the new tool. I like the idea and the fresh approach it offers.

I use conda as my default everything. It’s great in many ways, but its design keeps me from trying new tools like pipx, Poetry, and Pipenv; most of them only have partial conda integration. Since workflows were mentioned earlier, I’ll just mention integration with conda envs. In particular, I wonder how pyem will handle pip install pyem inside a conda environment.

There are many small problems, but all I remember now are:

  1. Poetry has to be installed in the same environment that tox runs the tests in; using whitelist_externals causes problems.
  2. Disabling Poetry’s built-in virtual environment management solves most of them.

pipx is for installing global command-line tools written as Python packages, and is orthogonal to conda. You can definitely use them together; they don’t really interfere with each other.

The idea will probably work in general, but a conda env needs to be created by conda create (which is not compatible with venv or virtualenv), so I’ll need to make it explicit (say pyem venv add --conda 3.7 to use conda create instead of python -m venv). There are likely some other edge cases that need to be considered. It seems like a fun exercise to get myself more familiar with conda, though; I’ll find some time to play with the idea.
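
Something like this, roughly (both the --conda flag and the env naming here are hypothetical):

    $ pyem venv add --conda 3.7
    # which would translate to something like:
    $ conda create --prefix .venvs/<conda-env-identifier> python=3.7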

Yes; because during install poetry renames pyproject.toml to a temporary file and generates a setup.py file, which completely scuttles parallel builds with tox.

My workaround is to enable tox’s isolated builds option; that creates a source distribution in a separate venv first, then installs that source distribution (using pip) into the target test venvs. No need to whitelist poetry, and parallel builds run beautifully.
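
For reference, the relevant configuration is roughly this (the build backend string depends on your Poetry version):

    $ cat tox.ini
    [tox]
    isolated_build = true

    $ cat pyproject.toml    # excerpt
    [build-system]
    requires = ["poetry>=0.12"]
    build-backend = "poetry.masonry.api"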

The price is doubled-up pytest version pins, or some kind of external machinery to translate poetry dev dependencies to tox command-line arguments or environment variables that can then be used in tox configuration substitutions.

What I did is install poetry inside each tox environment, then use that poetry to install the dependencies used in tests. So far so good.
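
Roughly this, as a sketch (you may need skipsdist/skip_install tweaks depending on the project):

    $ cat tox.ini
    [tox]
    skipsdist = true

    [testenv]
    deps = poetry
    commands_pre = poetry install
    commands = poetry run pytest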