Is it preferable to add test dependencies as an extras_require instead of in tox.ini?

Adding test dependencies inside tox.ini is clearly not an ideal approach, as it creates a hard dependency on a specific testing tool, making it impossible to automate installation of test dependencies for users who do not want to use tox.

In the past there was a setuptools tests_require option, which was deprecated, and if I remember correctly there was a suggestion to use extras to achieve the same functionality.

The main benefit of using extras is that it is very simple for a developer to do a “pip install foo[test]”. I have seen it used in several projects.

Another benefit of this approach is that it keeps all dependencies in a single place, instead of spreading them across multiple files.
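For illustration, declaring such an extra in setup.cfg could look like this (a minimal sketch; the dependency names are examples only):

```ini
# setup.cfg -- minimal sketch; the dependency names are examples only
[options.extras_require]
test =
    pytest
    pytest-cov
```

With that in place, pip install foo[test] installs the package together with its test dependencies.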

Still, I am raising the question here because I want some feedback from others, and to see if there are any downsides to doing this, or a better alternative. Please refrain from suggesting features that are not yet working, like PEP 621.

1 Like

At our organisation, we used to have a test extra, but we realised that we weren’t distributing the tests with the package, and the test dependencies only added confusion for downstream users. Instead, we simply put the test requirements in “tests/requirements.txt”, then documented that, for testing, users need to run pip install -r tests/requirements.txt before running the test script (pytest, in our case).
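The file itself is just a plain list of packages, e.g. (contents illustrative):

```
# tests/requirements.txt -- illustrative contents
pytest
pytest-cov
```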

This is for private use, however, and I know that there’s an argument from distro packagers that the tests need to be shipped with the package, in which case the test extra seems like the perfect solution.

2 Likes

I think a tests extra is perfectly fine, even if it sometimes does end up in a package where it doesn’t make sense. And pip install foo[test] or even pip install -e .[test] (-e for “editable”, . for “current directory”) is indeed simple.

A test-requirements.txt file would work, but it’s just as tool-specific as tox configuration. If you keep it a simple list of package names it’s probably OK, but the rest of it can tie you to pip. Extras, on the other hand, are standardized at least a bit.
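For what it’s worth, the two can also be combined so the list lives only in the packaging metadata: tox can install an extra instead of repeating the dependencies (a minimal sketch, assuming a test extra is declared as in the example above):

```ini
# tox.ini -- minimal sketch, assuming the package declares a "test" extra
[tox]
envlist = py39

[testenv]
extras = test
commands = pytest {posargs}
```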

1 Like

“Preferable”? That’s difficult to answer. There are active discussions ongoing at the moment because there’s no standardised definition of how extras work. So doing anything “complicated” with them has risks, because you’re relying on implementation-defined behaviour. But conversely, your point about tying yourself to a specific tool is valid.

As with a lot of things, the best answer is probably “some day we might have a good standard for this, but getting there depends on unpaid volunteer work, and resources are extremely limited, so don’t hold your breath”.

I strongly prefer using an extras marker for test dependencies, as it keeps your project dependencies all in one place, so fewer files to manage.

There’s very little reason/benefit to not distribute your tests within your source distribution (you don’t have to install them, as you can strip away the tests when generating the wheel).
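A rough sketch of that setup with setuptools (assuming the tests live in a top-level tests/ directory):

```ini
# setup.cfg -- illustrative: keep tests out of the installed package
[options]
packages = find:

[options.packages.find]
exclude =
    tests
    tests.*
```

Together with a graft tests line in MANIFEST.in, the tests travel in the sdist but are not installed from the wheel.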

2 Likes

Same here, I use a bunch of dev_* extras, and I distribute the tests (and docs, etc.) with the source distribution. I am aware that it feels a bit like a “misuse” of extras, but that’s the best compromise I found. Apart from the fact that it feels like a misuse, I haven’t seen any downsides. (Extras of packages are not really published anywhere, so it doesn’t really confuse anyone: “what is this dev_test extra, should I install it?”).

I used a dev-requirements.txt or something like that for a while, but in the end it didn’t feel right either (I don’t remember why; the fact that it’s kind of specific to pip might have played a role).

Poetry has dev-dependencies. But then tox is not really able to pick up those dependencies (there should definitely be a plugin that does that). I don’t know if there is a need for a standard in this case; it feels to me like it could stay tool-specific (setuptools, Poetry) and plugins should make the link (between tox and Poetry, for example). But a standard would still be welcome, I guess.

As others noted, the inclusion of tests with the source is a strong reason for adding an extra. While many projects tend to optimize their wheels for size, that should not be the case for the source distribution.

I did have some very good reasons for including tests even inside the wheel. When you write a library that supports plugins, sooner or later you will want to share some of the test code (like fixtures) and allow plugins to reuse them. If you include the tests, you make it much easier for consumers to use them.
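As a sketch of what that reuse can look like (the names here are hypothetical): with the tests shipped in the wheel, the library can register its shared fixtures as a pytest plugin, and consumers get them just by installing the package:

```ini
# setup.cfg -- hypothetical example: expose shared fixtures as a pytest plugin
[options.entry_points]
pytest11 =
    mylib = mylib.testing.fixtures
```

pytest loads anything registered under the pytest11 entry-point group, so fixtures defined in mylib.testing.fixtures become available to any plugin author’s test suite automatically.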

The disk footprint is minimal, and if system packagers (rpms, debs, …) really want to avoid including the tests, they can always produce two different packages, one with the core and one with the tests (mainly the extra), but I would prefer to delegate that extra maintenance cost to them :wink:

PS. Note that there are specific projects where tests are a big deal and bundling everything really does have measurable costs, but most do not.

While I agree that tests should be distributed with sdists, I feel that they generally shouldn’t be installed/distributed with wheels, as the necessary information about how to run the tests (Do you run pytest, unittest, or nose? With what options? Etc.) is not present when installed, and thus the tests are of little use. When tests are not present when installed, this means that the test requirements are of no relevance to end-users, and thus they shouldn’t be declared as extras. I see having a “hard dependency” on tox as a non-issue, as your tests already have a hard dependency on pytest/unittest/nose/whatever.

1 Like

I would be much more on board with a “tests” dependency if it were not exposed in the package metadata. I find it very weird that you can do pip install mypkg[tests] or mypkg[dev] and get mypkg plus all the dependencies to run the tests or for the dev environment. To the extent that there are tools that list available extras, I like it even less that these things would be listed (I don’t use any such thing, but I can imagine them being useful).
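For reference, this is roughly how such an extra shows up in a wheel’s METADATA (an illustrative excerpt):

```
Provides-Extra: tests
Requires-Dist: pytest ; extra == 'tests'
```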

It might make sense to add a concept to allow marking certain extras as dev-facing and not user-facing.

That said, I’m not entirely sure I’d adopt such a thing anyway. There are probably some dependencies that are independent or semi-independent of the specific tests I’m trying to run, but usually it wouldn’t be appropriate to re-use the list of test dependencies anywhere except in a specific testenv. It would be annoying to have half my configuration (dependencies) in setup.cfg and the rest of it (test command, environment variables, etc.) in tox.ini. Even if both could go in the same file, I’d still want to keep them close to one another so I don’t have to jump around.

3 Likes

This actually came up as part of the discussions of PEP 621, but people were not all comfortable with the idea of enshrining that specific workflow (at least as part of PEP 621).

In Fedora we generally run the tests after building the RPM, but they’re not included in the RPM.

I don’t see the problem. IMO this fits nicely with extras – these are extra packages for an extra use case. With pip you often need pip install -e .[tests], but the documentation could tell you that.
For any extra, you’d want to study the documentation to see what it’s for. I guess that if tools that list them and allow installing them become common, allowing some kind of docstrings for extras would be even more useful than marking them as dev-facing or user-facing.

I’ve always seen extras as a sort of mechanism for feature flags, which is why this kind of thing jars on me, and also probably why the ideas of default extras and “recommended extras” keep coming up.

I don’t think it’s a huge deal that it’s exposed like that, but it’s just very strange to ship an implementation detail like, “These are the things to install when developing this module”. It’s not something you’d ever need or want to expose to your end-users.

Maybe it’s overkill to design a whole system around having a difference between “local extras” and “feature extras” or something, but I can imagine that it wouldn’t be terribly difficult to add a hook to PEP 517 backends for requesting extras (or something extra-like) not present in the generated METADATA. I know that there is also some interest from a few corners in creating standardized test entry points à la PEP 517, so maybe that would be more fruitful than pursuing a distinction between local and feature extras.

Quoting myself in The 'extra' environment marker and its operators

PEP 426 described extras in more detail (saying they’re for “optional dependencies”). But that PEP was withdrawn, and did not clarify what optional dependencies mean either.

[E]xtras are so under-specified that different package users and packagers have different, conflicting ideas what they are exactly. They come complaining when PyPA tools do not do what they expect, and there’s no way to tell them what’s wrong or right because there are absolutely no rules. Hell, even PyPA tools do things differently within themselves. To untangle this, PyPA needs to write down the semantics of the extras feature (what extras are), and come up with rules (what packagers and users should expect) from those semantics.

IMO this whole topic is futile until someone writes and pushes through a PEP to define what extras actually are. All opinions and preferences are equally correct (or wrong) until then.

That reminded me of a missing feature of extras that I have needed multiple times: OR between extras. It is currently not possible to “require foo or bar”. If you have a library that needs at least one of foo or bar, there is no way to declare it.

I have no idea what something like that would even mean. If I tried to install A which depended on B or C, how would pip decide which of B or C to install? (And “it doesn’t matter” isn’t realistically an option - people would be bound to end up depending on whatever implementation-defined answer we came up with if the spec didn’t say).

The logic I’ve seen people ask for here is “if C is installed, include its requirement, otherwise include B” (noting that you could swap the order and nobody really cares as long as we define an order and document what it will be). (And yes, this could cause an upgrade of C, but that seems to be okay, and in any case is a design decision that could be specified and doesn’t have to be perfect, just decided.)

Probably the challenge is that many users think in terms of adding individual packages to an existing environment, while most of the devs seem to think about solving an environment all at once. Both are valid, but lead to different ideas about valuable features.

Making this one work in the single-solve-step approach is definitely more complex, but might work if it’s only impacted by explicit constraints (for example, “pip install A” gets you A+B, “pip install A C” gets you A+C, but “pip install A D” where D depends on C gets you A+B+C+D, because there’s no explicit requirement for C and so the A->B requirement is the one kept).