Unittest: Fail if zero tests were discovered

I would propose having unittest’s test runner exit non-zero if 0 tests were discovered, instead of exiting 0. Discovering 0 tests usually means the tests weren’t run correctly; finding 0 tests is not a successful test run.

It could make sense to put this behind an optional command line argument to avoid breaking existing users.

One could write a custom test runner to do this, but I would suggest that it should be built-in, and possibly even default.
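A minimal sketch of doing it today with such a wrapper (the run_tests_or_fail name and the start directory are just illustrative, not an existing API):

```python
# Sketch of a wrapper you can use today: run discovery, then fail the
# process if nothing was actually run.
import sys
import unittest


def run_tests_or_fail(start_dir="."):
    suite = unittest.defaultTestLoader.discover(start_dir)
    result = unittest.TextTestRunner().run(suite)
    if result.testsRun == 0:
        print("error: no tests were discovered", file=sys.stderr)
        sys.exit(1)
    sys.exit(0 if result.wasSuccessful() else 1)


if __name__ == "__main__":
    run_tests_or_fail()
```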

Background:

In Debian, we like to run tests on all our modules and applications at build time, and when their dependencies change. Automated tests are a major help in managing complex systems with many moving parts.

It’s pretty common to have packages that run 0 tests, because the test runner needs to be told exactly where to look for tests. If tests move between versions, or infrastructure changes, we could go from successfully running a package’s test suite to not running any tests, because we can’t find them. This problem should be reported as a failure, so we can investigate and fix the issue. Currently, if we want to do that, we probably have to parse build log output.

pytest implemented an exit code of 5 for finding 0 tests, and seems to have been happy enough with this change not to implement an opt-out (though one can be achieved with a plugin).
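For reference, if someone did want to keep “no tests collected” as a success under pytest, a small conftest.py plugin along these lines is the usual approach (a sketch using the pytest_sessionfinish hook; pytest.ExitCode.NO_TESTS_COLLECTED is 5):

```python
# conftest.py -- sketch of a plugin-level opt-out under pytest:
# turn "no tests collected" (exit code 5) back into a success.
import pytest


def pytest_sessionfinish(session, exitstatus):
    if exitstatus == pytest.ExitCode.NO_TESTS_COLLECTED:
        session.exitstatus = pytest.ExitCode.OK
```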


I am sure that this has been discussed before, but I could not find an issue.
@vstinner Victor, what do you remember?

We do this at work; it has been very useful for catching cases where tests were not actually run because someone forgot an absltest.main() call at the end of the file. See absl-py · PyPI, but I don’t recall whether that bit is in the open-source absl code (I’m not writing this from a device where I can go check right now, but I doubt it is; IIRC it relies on some surrounding infrastructure).

But doing it from the unittest module seems difficult. When do you trigger it and how? An atexit handler at import time? What if something that isn’t a test imports unittest? Boom.

I’m sympathetic to the request: it’s a demonstrably helpful failsafe concept. But for the stdlib we need a practical way to do it, one that adds meaningful value yet doesn’t blow up when it shouldn’t.

For people’s own projects they can just use pytest.

Thanks, I had searched and not found it, either 🙂

My proposal would be to do it in unittest’s runner (the unittest.main module). I don’t think it really makes sense to do it in the test suite itself. The problem we are chasing is that we haven’t found the suite.

In the specific use case for Debian, we would like to have this behaviour at scale, not as something we have to patch into every package’s test suite separately. We have common code that most Python packages use to run their test suites, so we could make that pass an extra argument. Packages without test suites that are currently running 0 tests successfully would have to explicitly disable tests, which seems like the right way to go these days. I guess we could also look at changing our default test runner from unittest to pytest to achieve this, but that brings in extra dependencies at build time, everywhere, which isn’t great.
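To make that concrete, here’s the sort of thing our shared build helper could do today, behind an opt-in flag (a sketch; the --fail-if-no-tests flag and the helper itself are hypothetical):

```python
# Hypothetical shared build-helper behaviour, behind an opt-in flag:
# count the discovered tests before running them and fail the build
# if there are none.
import argparse
import sys
import unittest

parser = argparse.ArgumentParser()
parser.add_argument("--fail-if-no-tests", action="store_true")
parser.add_argument("--start-dir", default=".")
args = parser.parse_args()

suite = unittest.defaultTestLoader.discover(args.start_dir)
if args.fail_if_no_tests and suite.countTestCases() == 0:
    print("error: test discovery found no tests", file=sys.stderr)
    sys.exit(2)

result = unittest.TextTestRunner().run(suite)
sys.exit(0 if result.wasSuccessful() else 1)
```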


Recently, I modified test.libregrtest to fail if “no tests ran”, to help me with test.bisect_cmd. Previously, I made mistakes in bisection when testing an old Python version that didn’t have the test: the tool treated this case as “success”.

Issue: Make Python test suite fail if no tests ran · Issue #98903 · python/cpython · GitHub


It’s interesting that pytest has an exit code for the “no tests ran” case: exit code 5. Maybe we should modify test.libregrtest to use the same exit code (currently, it uses exit code 4 for this case).


FWIW I checked our internal implementation; indeed it is not in absltest, but in our internal code deriving from it. It is the atexit handler I was thinking of. It merely checks a global (if not _is_actually_tested: in our equivalent of unittest/absltest) that is set by the runner itself; after emitting an instructive error message if possible, it does an exit(4) equivalent, as 4 matched an error code that I think Bazel used or uses as an indication of “testing was requested but no tests ran”. I’m not particular about which exit code is used; just define it via a unittest module constant that others could override if they need to for their test infrastructure. The important thing is to fail.

It’s got one additional false-positive hack in the check: we only check (and exit with an error) if an environment variable that Bazel and our test infrastructure are expected to always set when running tests is present. If it isn’t, we assume this wasn’t a test run, so no check is done. We can’t make such an assumption in the Python stdlib.
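Roughly, the pattern looks like this (a sketch with hypothetical names: _tests_were_run, MY_TEST_ENV_MARKER, NO_TESTS_RAN_EXITCODE; our real code differs in the details):

```python
# Sketch of the failsafe described above: an atexit hook that fails the
# process if the runner never flagged that tests actually ran, guarded
# by an environment variable the test infrastructure always sets.
import atexit
import os
import sys

NO_TESTS_RAN_EXITCODE = 4   # whatever convention your infrastructure expects
_tests_were_run = False     # flipped to True by the test runner itself


def mark_tests_ran():
    global _tests_were_run
    _tests_were_run = True


def _check_tests_ran():
    # Only meaningful when the infrastructure says this is a test run.
    if not os.environ.get("MY_TEST_ENV_MARKER"):
        return
    if not _tests_were_run:
        print("error: testing was requested but no tests ran "
              "(missing unittest.main()/absltest.main() call?)",
              file=sys.stderr, flush=True)
        # os._exit() makes sure we really leave with this code even
        # though we are inside an atexit hook.
        os._exit(NO_TESTS_RAN_EXITCODE)


atexit.register(_check_tests_ran)
```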

python -m test -j0 -m 'nosuchtest' passes, because test_pep646_syntax still passes: it runs doctest on its own docstring, and that test does not get filtered out.

Aha, that’s a bug that should be fixed 🙂 Usually, I only use -m pattern with a test name, like: python -m test test_os -m test_access.


I have filed GH-102051 to implement this.
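Once that lands, downstream tooling could consume the exit code along these lines (a sketch; it assumes an interpreter with GH-102051 merged, and I believe the chosen code will match pytest’s 5, but check the PR for the final value):

```python
# Caller-side sketch: how packaging tooling might consume the new
# "no tests ran" exit code.
import subprocess
import sys

proc = subprocess.run([sys.executable, "-m", "unittest", "discover", "-v"])
if proc.returncode == 5:
    sys.exit("test discovery found no tests -- failing the build")
elif proc.returncode != 0:
    sys.exit("test suite failed")
```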


The new Python 3.12 behaviour surprises users when all tests are skipped: see 3.12 starts to return an error when all tests in a test file are skipped · Issue #113661 · python/cpython · GitHub.

It seems to me that the case “no tests were found at all” is fundamentally different from the case “one or more tests were found, but all of them were skipped”. The first can be caused by failing to configure test discovery properly; the second is much more likely to be an intentional result. I can see why someone might want to flag the second as a failure, but I would expect it to be treated as a success by default. (Similarly, I can imagine someone wanting to treat “no tests found” as a success, confidently expecting no tests to be found - but this should be rare, since someone who expects that no tests will be found may have a hard time justifying use of the test runner!)

I hadn’t considered that case in my PR. I’d agree that exiting 0 there makes more sense.
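For what it’s worth, one way to express that distinction (a sketch, not the actual fix for the issue):

```python
# Sketch: only report "no tests found" when the run produced neither
# executed nor skipped tests.
_NO_TESTS_EXITCODE = 5


def exit_code_for(result):
    nothing_found = result.testsRun == 0 and len(result.skipped) == 0
    if nothing_found:
        return _NO_TESTS_EXITCODE   # discovery was likely misconfigured
    return 0 if result.wasSuccessful() else 1

# usage: raise SystemExit(exit_code_for(runner.run(suite)))
```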