Some of you might have heard of the mythical tox 4; some even feared it would never happen. While a first public release is still months away, it has reached a point where I'm happy for a few brave souls to give it a try. It's available to install from here: https://pypi.org/project/tox/4.0.0a2.
If you can spare a few minutes, please try it and let me know how it goes; that would be much appreciated. At the moment I'm mostly interested in things that work with tox 3 but break with tox 4. This is a complete rewrite of tox 3, Python 3.6+ only, with lots of performance and feature improvements. However, for now it's more important to find things that broke, so let's focus on that. Until it becomes stable it installs under the tox4 console entry point and uses the .tox4 working directory (so as not to break v3), which means you can try it and use it in parallel with tox 3.
Just to state the obvious: by design, this completely breaks all existing plugins. It also has a few breaking changes for users, e.g. isolated builds are used by default. But it comes with some benefits too: it should be faster, more correct, and it has full support for package dependencies and requirements.txt. That is, if you add a new dependency, either for your package or within requirements.txt, tox will identify the addition and auto-install just what has been added; and if you remove dependencies, it recreates the environment from scratch (at some later point we may implement figuring out what to uninstall when you remove deps, to avoid the need to recreate).
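To make the dependency handling concrete, here is a minimal tox.ini sketch (the deps and commands are placeholders, not taken from any particular project):

    [testenv]
    deps =
        -r requirements.txt
        pytest
    commands = pytest {posargs}

With tox 4, adding a line to requirements.txt or to deps should install just the new entries, while removing one recreates the environment from scratch.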
We no longer build the entire config world at startup (with its subprocess calls); instead we load all configs lazily, so we only materialize options when we must. For example, if you don't use env_site_package_dir, we don't call a subprocess during the config phase to calculate it.
Also, tox now has subcommands:
tox4 --help
run (r) run environments
run-parallel (p) run environments in parallel
depends (de) visualize tox environment dependencies
list (l) list environments
devenv (d) sets up a development environment at ENVDIR based on the env's tox configuration specified
config (c) show tox configuration
quickstart (q) Command-line script to quickly create a tox config file for a Python project
legacy (le) legacy entry-point command
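For example (py39 is just an illustrative environment name), the new sub-command spelling sits alongside the old one, which should keep working through the legacy entry-point:

    tox4 run -e py39       # new-style invocation; "tox4 r -e py39" is the short form
    tox4 list              # list the environments defined in the configuration
    tox4 -e py39           # old-style invocation, handled by the legacy command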
As an OS packager, who wants to make a big set of software play nicely together, I have one wish. I see the command name is tox4, but the importable package is tox, so it's not possible to install tox 3 and tox 4 side by side. Would it be possible to rename the (hopefully internal) import name to tox4?
(I know pip and other PyPA tools won’t support “side-by-side” installs unless the distribution name is changed as well, but pip is more for isolated environments than tools you just might want to install system-wide.)
As a plugin author, I'm looking forward to hearing how to fix everything :)
Is it too early to ask how to add a --runner?
The idea is that it will replace tox 3. I named the executable tox4 only to allow testing in parallel with tox 3, installed in different virtual environments (e.g. via pipx). I'll change it to just tox once it comes out of beta. I'd say cleaning up the plugin system and opening up that part is likely coming in late February. First I'm focusing on getting it working stably, and then on making it extensible.
Is the emphasis on functionality that existed in Tox 3? I'm interested in some new things, but tox4 quickstart gives me nothing (it doesn't seem to create or modify anything?) and documentation is non-existent at the moment.
For the next few weeks I'd like to get to a point where we have the same functionality. What would you like to know? There's no documentation yet, but I can give you the answers here.
I’m particularly interested in devenv. How does this work, basically? Is this similar to the environment pipenv and poetry would set up for a project? Does it support multiple development environments that can be switched between?
Also, how will environments be named in Tox 4? Does the new Python discovery logic affect it? One issue I've had with Tox previously is that it's very non-straightforward to set up multiple interpreters of the same X.Y version, e.g. 32-bit vs 64-bit, or Windows vs WSL. Does Tox 4 improve on this? How does the new naming scheme (if there is one) work with devenv (if devenv is what I'm guessing it is)?
devenv is not a tox 4 thing; it already exists in tox 3. tox devenv -e py39 venv is basically tox -e py39 --develop where the virtual environment is created under the venv folder rather than the default .tox/py39. Everything else is the same. It's a way of creating a tox environment with a develop install outside of the private-looking .tox4/py39 folder (though note it's not actually private otherwise).
I don't think this is a one-to-one mapping. But then again, I've never looked at poetry/pipenv past their tutorials, so I couldn't tell you exactly what they do or how much of it maps back.
I'm not sure what new Python discovery logic you're referring to; tox does not implement Python discovery itself. It delegates that job to virtualenv, so everything from User Guide - virtualenv is relevant. The Python spec string is the tox env-name string, with factor name contraction.
As far as sub-versions and 32 vs 64 bit go, virtualenv now accepts forms like py-32 and py-64 and selects the appropriate bitness. Selecting WSL vs Windows is a bit trickier; there's no easy trait in the spec to specify this. Note, though, that you could write a virtualenv plugin that does this; or, as a one-off, you can use the --discover flag (also available in tox 3 - Configuration - tox) to inject the Python you want to use as the first one to be tried, and thus create the environment with that Python.
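As a one-off example (the interpreter path is purely illustrative), injecting a specific Python to be tried first could look like:

    tox -e py39 --discover /usr/local/bin/python3.9

This only changes which interpreter virtualenv tries first; it doesn't add any new vocabulary to the environment name itself.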
devenv also takes a -e flag (py, as in this example, is the default) with which you can specify which tox environment's dev version you want to create.
I assume you meant tox --devenv since --develop is the option for installing the package as editable. That’d make it not very useful for me, unfortunately. The biggest issue I had with --devenv is there’s basically no way to actually use the populated environment without manually digging into the .tox directory, making it no better than a manually created virtual environment. I was looking for a way to run an arbitrary command in a previously populated environment similar to pipenv run and poetry run.
The problem with both solutions is there's no way to write this in the configuration file, so every developer working on the project is on their own to ensure the environments are set up correctly. The root issue (now that I'm thinking about it) may be the basepython vocabulary; either it needs to be more flexible, like Rust's target triples, or be extendable in a way that can be enforced in the configuration.
tox --devenv -e py39 venv, where the target virtual environment is created within the in-line venv folder rather than the hidden-ish .tox/py39.
And yes, it installs the package in the (non-PEP-517-defined) editable mode. This is how it works in tox 3, and it's also how it works in tox 4. Though tox 4 introduces sub-commands, so:
tox --devenv -e py39 venv becomes
tox devenv -e py39 venv.
You can always activate the environment under .tox/py39 the same way you'd activate the in-line venv, or type .tox/py39/bin/python instead of venv/bin/python. In either case the only difference is the folder your virtual environment is based in. I'd argue it's better than a manually created environment, because tox already knows how your project needs to be set up (e.g. what extras to install, what env-vars to set, what to pass along, what to strip, what flags to pass to pip, what requirements have already been installed beforehand, what new requirements need to be installed, etc.). I'd be interested to hear what more you'd want.
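A small shell session (POSIX shell; py39 and venv are just the names used above) to illustrate that the two differ only in location:

    tox devenv -e py39 venv          # create the development environment in ./venv
    source venv/bin/activate         # activate it like any other virtual environment
    # or reuse the environment tox already manages, without devenv:
    source .tox/py39/bin/activate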
We could add a tox exec sub-command that allows you to do so, though no one has requested it yet. Then you could do something like tox exec devenv_name -- python -m whatever some args. This would save you typing devenv_name/bin/python -m whatever some args, and could also handle environment variable mangling.
The vocabulary has been extended in the last two years to allow expressing 32/64 bitness. For further extensions we need answers to two questions:
how would you express it,
how would you discover such interpreters.
What’s your proposal for differentiating/discovering WSL vs Windows?
I couldn't find my notes on this, but off the top of my head, my conclusion when I looked into this topic previously was that a platform identifier needs to include:
Python implementation name and version
CPU arch (not just 32/64 but something like wheels’ x86_64 to tell apart Apple M1 vs Intel)
sys.platform (anything more specific is likely too niche)
So maybe something as specific as cp36-windows-x86_64 would be needed.
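As a rough sketch (this is not tox code, just an illustration using the standard library of how such an identifier could be derived from the running interpreter):

    import platform
    import sys

    # implementation tag, e.g. 'cp' for CPython, 'pp' for PyPy
    impl = {"CPython": "cp", "PyPy": "pp"}.get(platform.python_implementation(), "py")
    version = "{}{}".format(*sys.version_info[:2])   # e.g. '39'
    plat = sys.platform                              # e.g. 'linux', 'win32', 'darwin'
    arch = platform.machine().lower()                # e.g. 'x86_64', 'arm64', 'amd64'

    print(f"{impl}{version}-{plat}-{arch}")          # e.g. 'cp39-linux-x86_64'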
So Python implementation name and version are already supported, and similarly the 32/64 bitness. However, differentiating Apple M1 vs Intel, or specifying the platform, is less about the Python spec and more about skipping that tox env entirely. Can you detail how one can differentiate between Rosetta and native M1, or how you would know the difference between WSL and native Windows? Can you invoke WSL apps from the native Windows world?
If you run Python in WSL, it thinks it's on whatever version of Linux you're running, even if you run it from Windows. Likewise, if you run native Python from WSL, it thinks it's on Windows.
Think of WSL as a virtual machine, rather than a Cygwin-like wrapper around the OS. There is no overlap between “inside” and “outside” WSL (and when there is, it’s almost always unintentional and broken).
In this case, at best we can again talk about platform-dependent environments (e.g. run only if the platform is X or Y), because there's no way to run both WSL and native Windows Pythons from the same side: you're either within WSL and must run the WSL one, or you're outside and must run natively.
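For that platform-dependent approach, the existing platform setting (a regular expression matched against sys.platform) already covers it; a minimal sketch, with the environment names chosen only for illustration:

    [testenv:py39-linux]
    platform = linux
    commands = pytest {posargs}

    [testenv:py39-windows]
    platform = win32
    commands = pytest {posargs}

Inside WSL the interpreter reports linux, and natively on Windows it reports win32, so only the matching environment runs on each side; the other is skipped.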
Yeah, that’s right. It’s possible to cross the boundary, but only in the same sense that it’s possible to open an SSH connection to another machine entirely (including the implications that the user must tell you which machine to connect to, and everything you want to run must already be there and able to run totally natively).