Parametrized unit tests with information inferred at run time

I’m using the unittest module in a way it probably wasn’t intended to be used. I’m not doing anything undocumented; I’m just not using it to run unit tests per se.

Without going into a lot of detail, I’m using it as the core of a test runner that tests integration with a Windows kernel driver: the actual tests are implemented as executable programs that talk to said driver. Based on the machine the tests are executed on, and on a few other options, I select the programs that need to run and the command-line arguments they require, invoke them, and interpret the results.
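
To give a rough sense of what a single test boils down to, it is essentially "run an executable that exercises the driver and interpret how it exited". The program name and arguments below are invented purely for illustration:

import subprocess
import unittest

class DriverIntegrationTest(unittest.TestCase):
    def runTest(self):
        # Launch one of the test executables that talks to the driver
        # and treat a zero exit code as a pass.
        completed = subprocess.run(["driver_test.exe", "--case", "basic-ioctl"])
        self.assertEqual(completed.returncode, 0)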

I ended up using unittest for this to avoid re-implementing a lot of its core functionality, and because it makes it easy to get a nice test report at the end.

In order to do this, I extend unittest.TestCase with a class that looks more or less like this:

import unittest

class NumbersTest(unittest.TestCase):
    def __init__(self, value: int, test_name: str):
        setattr(self, test_name, self._test_method)
        super().__init__(methodName=test_name)
        self._value = value

    def _test_method(self):
        self.assertEqual(self._value % 2, 0)

I do that weird thing in __init__ in order to have unique names for each test. In this case, constructing the tests and running them looks like this:

    tests = []
    for i in range(0, 6):
        tests.append(NumbersTest(i, f"test_{i}"))
    test_suite = unittest.TestSuite(tests)
    runner = unittest.TextTestRunner()
    result = runner.run(test_suite)

This generates really nice output, with a clear, unique name for each failed test:

.F.F.F
======================================================================
FAIL: test_1 (__main__.NumbersTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File ".\test_x.py", line 28, in _test_method
    self.assertEqual(self._value % 2, 0)
AssertionError: 1 != 0

======================================================================
FAIL: test_3 (__main__.NumbersTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File ".\test_x.py", line 28, in _test_method
    self.assertEqual(self._value % 2, 0)
AssertionError: 1 != 0

======================================================================
FAIL: test_5 (__main__.NumbersTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File ".\test_x.py", line 28, in _test_method
    self.assertEqual(self._value % 2, 0)
AssertionError: 1 != 0

----------------------------------------------------------------------
Ran 6 tests in 0.001s

FAILED (failures=3)

I know that I can obtain a similar behavior by using subtests:

import unittest

class NumbersTest(unittest.TestCase):
    def runTest(self):
        for i in range(0, 6):
            with self.subTest(i=i):
                self.assertEqual(i % 2, 0)

tests = [NumbersTest()]
test_suite = unittest.TestSuite(tests)
runner = unittest.TextTestRunner()
result = runner.run(test_suite)

But this has two disadvantages.

First, the output is slightly harder to read: FAIL: runTest (__main__.NumbersTest) (i=1) vs. the previous FAIL: test_1 (__main__.NumbersTest) (and note that in my actual code the parameters are more complex than a simple integer). This is not a huge issue, but depending on the test runner being used, the output may change in ways that make it harder to navigate.

And second, as stated in the intro, I don’t know ahead of time all the possible parameters a test may need. I could do an ugly hack: set those in a global and have the test look at that global, something like the sketch below, but in my opinion that is not a real solution. Maybe the docs explain how to do this and I missed it?
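
To make it concrete, the global hack I have in mind would look roughly like this (the global name is made up, and the parameters would really be computed at run time rather than hard-coded):

import unittest

# Filled in by the runner just before the suite is executed.
PARAMETERS: list = []

class NumbersTest(unittest.TestCase):
    def runTest(self):
        for value in PARAMETERS:
            with self.subTest(value=value):
                self.assertEqual(value % 2, 0)

PARAMETERS.extend(range(6))  # decided at run time in the real code
unittest.TextTestRunner().run(unittest.TestSuite([NumbersTest()]))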

So, a few questions:

  1. Is my constructor hacky/touching things it shouldn’t touch? Is there a better way of doing this?
  2. Is there a way of having parametrized tests with parameters constructed on the fly (either with unittest or with another unit test framework)?

EDIT: I may as well combine the two versions: extend unittest.TestCase with a constructor that takes an array of test parameters, then use a subtest for each entry in that array and simply accept the difference in the output. But my questions still stand.
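
For what it’s worth, that combined version would look roughly like this (just a sketch of the idea, not code I have settled on):

import unittest

class NumbersTest(unittest.TestCase):
    def __init__(self, values: list):
        # The parameters are still decided at run time and passed in here...
        super().__init__()
        self._values = values

    def runTest(self):
        # ...but each one is reported as a subtest of runTest.
        for value in self._values:
            with self.subTest(value=value):
                self.assertEqual(value % 2, 0)

tests = [NumbersTest(list(range(6)))]
runner = unittest.TextTestRunner()
runner.run(unittest.TestSuite(tests))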