@nedbat Just wonder if there is a mode to get only file to file coverage info instead of line level, so that the tool would run in a more performing way? In our scenario we only need file to file call relationship in dynamic analysis.
Coverage.py doesn’t have a way to do that. How much overhead is coverage adding to your test suite? Make sure you have the C tracer, not the Python tracer: run `coverage debug sys` and look for the “CTracer” line.
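As a quick programmatic sanity check (a sketch, not an official API — it assumes the `coverage.tracer` extension module name used by recent coverage.py releases), you can probe for the C tracer directly:

```python
# Rough proxy for the "CTracer" line in "coverage debug sys":
# the fast tracer lives in the compiled extension module coverage.tracer.
try:
    from coverage.tracer import CTracer  # C extension; absent on pure-Python installs
    has_ctracer = True
except ImportError:
    has_ctracer = False

print("CTracer available:", has_ctracer)
```

If this prints `False`, coverage.py falls back to the much slower pure-Python tracer, which alone can explain a large overhead.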
@nedbat If I may ask another question in the thread.
If I run coverage with `coverage run -m pytest` on a group of pytest test files, the report from `coverage report` will be the aggregated result for all of those test cases. Is there a way to separate the result by test file (group the results by filename)? We need coverage info at the file level, so right now I have to run coverage once per test file, which brings a performance overhead that in the worst case reaches 60% of the original test run time.
Sorry, I’m not sure what you are seeing. Coverage.py never reports for each test case. `coverage report` (or `coverage html`) produce reports that list your files. You can run it once, and it will give you one report with a coverage percentage for each file. You are doing something very different, but I’m not sure what.
Let me clarify my scenario further: if I have test_1.py and test_2.py under a folder and I run coverage against pytest on that folder, it runs pytest on both files and reports on all files that get called by the two tests. What I need is a report that gives coverage on a per-test basis, something like:
```
For test_1.py:
file1.py    6    1    83%
file3.py    2    1    10%

For test_2.py:
file2.py    1    1    15%
file4.py    5    1     2%
```
Right now the tool generates something like the following; it doesn’t indicate which test case each entry belongs to:
@PythonCHB Thanks for your reply.
I understood that and have already implemented a solution that way, but it is performance-heavy to run the tool on each test file, especially for a large repository with hundreds of test files. If I could kick off a single pytest run over all the tests and get the results grouped by test file, that would be the most desirable outcome.
Coverage.py includes a feature called dynamic contexts that can be used to record which tests ran which lines: Measurement contexts — Coverage.py 7.2.3 documentation. It can produce an HTML report that annotates each source line with the list of tests that ran it.
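For reference, dynamic contexts can be enabled with a small configuration change, using the documented `dynamic_context` option:

```ini
# .coveragerc
[run]
dynamic_context = test_function
```

With this set, each recorded line is tagged with the name of the test function that executed it.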
That doesn’t give you a percentage for each test, but the data is in the .coverage data file, and so could possibly be used to get the reports you want with a single test suite run.
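As a sketch of what digging into that data could look like: the `CoverageData` API exposes `measured_contexts`, `set_query_contexts`, and `lines`, so the data file can be queried one context (one test) at a time. This prints executed line counts, not percentages; the `.coverage` path is the default location and an assumption here:

```python
import os
from coverage import CoverageData

# Sketch: list executed lines per dynamic context (i.e. per test),
# assuming a .coverage file recorded with dynamic contexts enabled.
if os.path.exists(".coverage"):
    data = CoverageData()  # defaults to the ".coverage" file in the cwd
    data.read()
    for ctx in sorted(data.measured_contexts()):
        data.set_query_contexts([ctx])  # restrict queries to this test's context
        for fname in sorted(data.measured_files()):
            executed = data.lines(fname) or []
            print(f"{ctx or '<no context>'}: {fname}: {len(executed)} lines executed")
```

Turning the executed-line counts into per-test percentages would additionally require the total statement count for each file, which coverage computes during reporting rather than storing in the data file.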
I’m curious though how you will use those reports? How many different reports will that produce?