Providing a way to specify how to run tests (and docs?)

Other things you most likely also need to support for a minimal viable product, before actually running the tests (a sketch follows this list):

  • Discovering the defined targets (and selecting which ones to run)
  • Altering the current working directory
  • Setting environment variables
  • Passing through environment variables (deciding which should always be passed through and which should be removed)
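
As a minimal sketch of a runner handling those four concerns (every name here is hypothetical, not an existing API):

import os
import subprocess

# Hypothetical target table: command, working directory, variables
# to set, and an allowlist of variables to pass through.
TARGETS = {
    "py-unit": {
        "cmd": ["pytest", "tests/unit"],
        "cwd": ".",
        "set_env": {"PYTHONHASHSEED": "0"},
        "pass_env": ["HOME", "PATH", "TMPDIR"],
    },
}

def run(selected):
    for name in selected:  # discovery + selection
        target = TARGETS[name]
        # Start from a clean environment, keep only allowlisted
        # variables, then apply the target-specific ones.
        env = {k: v for k, v in os.environ.items() if k in target["pass_env"]}
        env.update(target["set_env"])
        subprocess.run(target["cmd"], cwd=target["cwd"], env=env, check=True)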

And then the not-must-have, but probably nice-to-have, concepts for more robust/powerful usage (again sketched below the list):

  • per-target temporary folder
  • setup/teardown commands
  • environment reuse between runs
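
Again purely as an illustrative sketch (hypothetical names, layered on top of the loop above), these three could look like:

import hashlib
import shutil
import subprocess
import tempfile
from pathlib import Path

def create_env(env_dir, requires):
    # Stub: a real runner would build a virtual environment here
    # and install `requires` into it.
    env_dir.mkdir(parents=True)

def run_robust(name, target):
    # Per-target temporary folder, recreated fresh for every run.
    tmp = Path(tempfile.gettempdir(), f"runner-{name}")
    shutil.rmtree(tmp, ignore_errors=True)
    tmp.mkdir(parents=True)

    # Environment reuse: only (re)create the env when its
    # requirements change, keyed by a hash of the specification.
    key = hashlib.sha256(" ".join(target["requires"]).encode()).hexdigest()[:12]
    env_dir = Path(".envs", f"{name}-{key}")
    if not env_dir.exists():
        create_env(env_dir, target["requires"])

    # Setup/teardown commands bracketing the main command.
    for cmd in target.get("setup", []):
        subprocess.run(cmd, check=True)
    try:
        subprocess.run(target["cmd"], check=True)
    finally:
        for cmd in target.get("teardown", []):
            subprocess.run(cmd, check=True)

tox itself already covers these via the {envtmpdir} substitution, commands_pre/commands_post, and virtual environments that are reused between runs unless you pass --recreate.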

Most Django projects do, because the Django version is included in the target name. Similarly, many people like having with-coverage and without-coverage variants (often with a -cov suffix). I’ve seen a few projects that also separate unit and integration tests into separate targets, so you get a quick test env plus a slower but more thorough one.

There’s a plan to add that. The interface hasn’t been groomed or implemented just yet, but it might become a reality next year. (PS: tox is always all lowercase.)

This is likely the easiest path ahead. It would probably only need to declare the dependencies and the default target(s) for OS repackagers to call. E.g., it could specify that for OS repackaging the style checks are not needed, so only the py targets should be called (and the target names can be interpreted by the tool in question). Something like:

[project.tasks]
requires = ["tox>=4"]
test-target = ["py-unit", "py-integration"]

This does imply that test runners would need to support a PEP 517-style API that can take the target list. Alternatively, we could make the interface CLI-bound:

[project.tasks]
requires = ["tox>=4"]
target = ["tox", "-e", "py-unit", "py-integration"]

I prefer the PEP 517-style interface though, because we could then add a get_valid_targets endpoint that returns not just test targets but lint targets too.
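
To make that concrete: the backend side could be a pair of module-level hooks in the spirit of PEP 517 build backends. Everything below (hook names, return shape) is made up for illustration; nothing like it is standardized:

# Hypothetical task-backend module that a tool such as tox could expose.

def get_valid_targets():
    # Discovered from the tool's own configuration; not just test
    # targets but lint targets too, tagged by kind.
    return {
        "py-unit": {"kind": "test"},
        "py-integration": {"kind": "test"},
        "style": {"kind": "lint"},
    }

def run_targets(targets, config_settings=None):
    # Run the requested targets and report success/failure, taking
    # config_settings in the same spirit as the PEP 517 hooks.
    for name in targets:
        print(f"running {name}")  # a real backend would invoke the target here
    return 0

A frontend (e.g. an OS repackager’s tooling) would then install requires, import the backend, and call run_targets(["py-unit", "py-integration"]) without ever touching the tool’s CLI.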
