Tips For Running KUnit Tests
Using kunit.py run (“kunit tool”)
Running from any directory
It can be handy to create a bash function like:
function run_kunit() {
  ( cd "$(git rev-parse --show-toplevel)" && ./tools/testing/kunit/kunit.py run "$@" )
}
Note
Early versions of kunit.py (before 5.6) didn’t work unless run from the kernel root, hence the use of a subshell and cd.
Running a subset of tests
kunit.py run accepts an optional glob argument to filter tests. The format is "<suite_glob>[.test_glob]".
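The matching semantics can be sketched with Python’s fnmatch (an illustration of the concept, not kunit.py’s actual implementation; the suite and test names below are hypothetical):

```python
from fnmatch import fnmatch

def matches(filter_glob: str, suite: str, test: str) -> bool:
    """Match a "<suite_glob>[.test_glob]" filter against one test."""
    if "." in filter_glob:
        suite_glob, test_glob = filter_glob.split(".", 1)
    else:
        # No test glob given: every test in a matching suite is selected.
        suite_glob, test_glob = filter_glob, "*"
    return fnmatch(suite, suite_glob) and fnmatch(test, test_glob)

# 'sysctl*' selects every test in any suite whose name starts with "sysctl".
print(matches("sysctl*", "sysctl_test", "sysctl_read_test"))           # True
# 'sysctl*.*write*' narrows that down to just the "write" tests.
print(matches("sysctl*.*write*", "sysctl_test", "sysctl_write_test"))  # True
print(matches("sysctl*.*write*", "sysctl_test", "sysctl_read_test"))   # False
```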
Say we wanted to run the sysctl tests; we could do so via:
$ echo -e 'CONFIG_KUNIT=y\nCONFIG_KUNIT_ALL_TESTS=y' > .kunit/.kunitconfig
$ ./tools/testing/kunit/kunit.py run 'sysctl*'
We can filter down to just the “write” tests via:
$ echo -e 'CONFIG_KUNIT=y\nCONFIG_KUNIT_ALL_TESTS=y' > .kunit/.kunitconfig
$ ./tools/testing/kunit/kunit.py run 'sysctl*.*write*'
We’re paying the cost of building more tests than we need this way, but it’s easier than fiddling with .kunitconfig files or commenting out kunit_suite’s.
However, if we wanted to define a set of tests in a less ad hoc way, the nexttip is useful.
Defining a set of tests
kunit.py run (along with build, and config) supports a --kunitconfig flag. So if you have a set of tests that you want to run on a regular basis (especially if they have other dependencies), you can create a specific .kunitconfig for them.
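Such a file is just a .config fragment. A hypothetical .kunitconfig for KUnit’s own tests might contain something like (the exact option list is illustrative):

```
CONFIG_KUNIT=y
CONFIG_KUNIT_TEST=y
CONFIG_KUNIT_EXAMPLE_TEST=y
```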
E.g. kunit has one for its tests:
$ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit/.kunitconfig

Alternatively, if you’re following the convention of naming your file .kunitconfig, you can just pass in the dir, e.g.
$ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit

Note
This is a relatively new feature (5.12+) so we don’t have any conventions yet about what files should be checked in versus just kept around locally. It’s up to you and your maintainer to decide if a config is useful enough to submit (and therefore have to maintain).
Note
Having .kunitconfig fragments in a parent and child directory is iffy. There’s discussion about adding an “import” statement in these files to make it possible to have a top-level config run tests from all child directories. But that would mean .kunitconfig files are no longer just simple .config fragments.
One alternative would be to have kunit tool recursively combine configs automagically, but tests could theoretically depend on incompatible options, so handling that would be tricky.
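The difficulty can be illustrated with a toy Python merge (hypothetical; kunit tool does not do this): combining .config-style fragments is trivial right up until two fragments disagree on an option.

```python
def merge_fragments(*fragments: str) -> dict:
    """Merge .config-style fragments, raising on conflicting values.

    Toy model only: ignores comments and the "# CONFIG_FOO is not set"
    convention that real .config files use.
    """
    merged = {}
    for fragment in fragments:
        for line in fragment.splitlines():
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            name, _, value = line.partition("=")
            if name in merged and merged[name] != value:
                raise ValueError(f"conflict on {name}: {merged[name]} vs {value}")
            merged[name] = value
    return merged

parent = "CONFIG_KUNIT=y\nCONFIG_PCI=y"
child = "CONFIG_KUNIT=y\nCONFIG_PCI=n"  # hypothetical child depending on an incompatible option
try:
    merge_fragments(parent, child)
except ValueError as e:
    print(e)  # conflict on CONFIG_PCI: y vs n
```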
Setting kernel commandline parameters
You can use--kernel_args to pass arbitrary kernel arguments, e.g.
$ ./tools/testing/kunit/kunit.py run --kernel_args=param=42 --kernel_args=param2=false
Generating code coverage reports under UML
Note
TODO(brendanhiggins@google.com): There are various issues with UML and versions of gcc 7 and up. You’re likely to run into missing .gcda files or compile errors.
This is different from the “normal” way of getting coverage information that is documented in Using gcov with the Linux kernel.
Instead of enabling CONFIG_GCOV_KERNEL=y, we can set these options:
CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y
CONFIG_GCOV=y
Putting it together into a copy-pastable sequence of commands:
# Append coverage options to the current config
$ ./tools/testing/kunit/kunit.py run --kunitconfig=.kunit/ --kunitconfig=tools/testing/kunit/configs/coverage_uml.config

# Extract the coverage information from the build dir (.kunit/)
$ lcov -t "my_kunit_tests" -o coverage.info -c -d .kunit/

# From here on, it's the same process as with CONFIG_GCOV_KERNEL=y
# E.g. can generate an HTML report in a tmp dir like so:
$ genhtml -o /tmp/coverage_html coverage.info
If your installed version of gcc doesn’t work, you can tweak the steps:
$ ./tools/testing/kunit/kunit.py run --make_options=CC=/usr/bin/gcc-6
$ lcov -t "my_kunit_tests" -o coverage.info -c -d .kunit/ --gcov-tool=/usr/bin/gcov-6
Alternatively, LLVM-based toolchains can also be used:
# Build with LLVM and append coverage options to the current config
$ ./tools/testing/kunit/kunit.py run --make_options LLVM=1 --kunitconfig=.kunit/ --kunitconfig=tools/testing/kunit/configs/coverage_uml.config
$ llvm-profdata merge -sparse default.profraw -o default.profdata
$ llvm-cov export --format=lcov .kunit/vmlinux -instr-profile default.profdata > coverage.info
# The coverage.info file is in lcov-compatible format and it can be used to e.g. generate an HTML report
$ genhtml -o /tmp/coverage_html coverage.info
Running tests manually
Running tests without using kunit.py run is also an important use case. Currently it’s your only option if you want to test on architectures other than UML.
As running the tests under UML is fairly straightforward (configure and compile the kernel, run the ./linux binary), this section will focus on testing non-UML architectures.
Running built-in tests
When setting tests to =y, the tests will run as part of boot and print results to dmesg in TAP format. So you just need to add your tests to your .config, build and boot your kernel as normal.
So if we compiled our kernel with:
CONFIG_KUNIT=y
CONFIG_KUNIT_EXAMPLE_TEST=y
Then we’d see output like this in dmesg signaling the test ran and passed:
TAP version 14
1..1
    # Subtest: example
    1..1
    # example_simple_test: initializing
    ok 1 - example_simple_test
ok 1 - example
Running tests as modules
Depending on the tests, you can build them as loadable modules.
For example, we’d change the config options from before to:
CONFIG_KUNIT=y
CONFIG_KUNIT_EXAMPLE_TEST=m
Then after booting into our kernel, we can run the test via:
$ modprobe kunit-example-test
This will then cause it to print TAP output to stdout.
Note
The modprobe will not have a non-zero exit code if any test failed (as of 5.13). But kunit.py parse would; see below.
Note
You can set CONFIG_KUNIT=m as well; however, some features will not work and thus some tests might break. Ideally tests would specify they depend on KUNIT=y in their Kconfig’s, but this is an edge case most test authors won’t think about. As of 5.13, the only difference is that current->kunit_test will not exist.
Pretty-printing results
You can use kunit.py parse to parse dmesg for test output and print out results in the same familiar format that kunit.py run does.
$ ./tools/testing/kunit/kunit.py parse /var/log/dmesg
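Conceptually, the parsing step just scans the log for (K)TAP result lines; a toy Python version (not the real kunit.py parser) might look like:

```python
import re

def summarize_tap(log: str) -> dict:
    """Count passing/failing result lines in TAP-style output (toy parser).

    Counts subtests and parent tests alike; the real parser tracks nesting.
    """
    results = {"pass": 0, "fail": 0}
    for line in log.splitlines():
        m = re.match(r"\s*(ok|not ok)\s+\d+", line)
        if m:
            results["pass" if m.group(1) == "ok" else "fail"] += 1
    return results

log = """\
TAP version 14
1..1
    # Subtest: example
    1..1
    ok 1 - example_simple_test
ok 1 - example
"""
print(summarize_tap(log))  # {'pass': 2, 'fail': 0}
```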
Retrieving per suite results
Regardless of how you’re running your tests, you can enable CONFIG_KUNIT_DEBUGFS to expose per-suite TAP-formatted results:
CONFIG_KUNIT=y
CONFIG_KUNIT_EXAMPLE_TEST=m
CONFIG_KUNIT_DEBUGFS=y
The results for each suite will be exposed under /sys/kernel/debug/kunit/<suite>/results. So using our example config:
$ modprobe kunit-example-test > /dev/null
$ cat /sys/kernel/debug/kunit/example/results
... <TAP output> ...

# After removing the module, the corresponding files will go away
$ modprobe -r kunit-example-test
$ cat /sys/kernel/debug/kunit/example/results
/sys/kernel/debug/kunit/example/results: No such file or directory

Generating code coverage reports
See Using gcov with the Linux kernel for details on how to do this.
The only vaguely KUnit-specific advice here is that you probably want to build your tests as modules. That way you can isolate the coverage of your tests from other code executed during boot, e.g.
# Reset coverage counters before running the test.
$ echo 0 > /sys/kernel/debug/gcov/reset
$ modprobe kunit-example-test
Test Attributes and Filtering
Test suites and cases can be marked with test attributes, such as the speed of the test. These attributes will later be printed in test output and can be used to filter test execution.
Marking Test Attributes
Tests are marked with an attribute by including a kunit_attributes object in the test definition.
Test cases can be marked using the KUNIT_CASE_ATTR(test_name, attributes) macro to define the test case instead of KUNIT_CASE(test_name).
static const struct kunit_attributes example_attr = {
	.speed = KUNIT_VERY_SLOW,
};

static struct kunit_case example_test_cases[] = {
	KUNIT_CASE_ATTR(example_test, example_attr),
};
Note
To mark a test case as slow, you can also use KUNIT_CASE_SLOW(test_name). This is a helpful macro as the slow attribute is the most commonly used.
Test suites can be marked with an attribute by setting the “attr” field in thesuite definition.
static const struct kunit_attributes example_attr = {
	.speed = KUNIT_VERY_SLOW,
};

static struct kunit_suite example_test_suite = {
	...,
	.attr = example_attr,
};
Note
Not all attributes need to be set in a kunit_attributes object. Unset attributes will remain uninitialized and act as though the attribute is set to 0 or NULL. Thus, if an attribute is set to 0, it is treated as unset. These unset attributes will not be reported and may act as a default value for filtering purposes.
Reporting Attributes
When a user runs tests, attributes will be present in the raw kernel output (in KTAP format). Note that attributes will be hidden by default in kunit.py output for all passing tests, but the raw kernel output can be accessed using the --raw_output flag. This is an example of how test attributes for test cases will be formatted in kernel output:
# example_test.speed: slow
ok 1 example_test
This is an example of how test attributes for test suites will be formatted inkernel output:
KTAP version 2
# Subtest: example_suite
# module: kunit_example_test
1..3
...
ok 1 example_suite
Additionally, users can output a full report of tests with their attributes, using the command line flag --list_tests_attr:
kunit.py run "example" --list_tests_attr

Note
This report can be accessed when running KUnit manually by passing in the module_param kunit.action=list_attr.
Filtering
Users can filter tests using the --filter command line flag when running tests. As an example:
kunit.py run --filter speed=slow
You can also use the following operations on filters: “<”, “>”, “<=”, “>=”,“!=”, and “=”. Example:
kunit.py run --filter "speed>slow"

This example will run all tests with speeds faster than slow. Note that the characters < and > are often interpreted by the shell, so they may need to be quoted or escaped, as above.
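The comparison semantics can be sketched in Python (a toy model; the numeric speed encoding here is an assumption for illustration, ordered slowest to fastest, not KUnit’s actual enum values):

```python
import operator
import re

# Speeds ordered from slowest to fastest (illustrative encoding).
SPEEDS = {"very_slow": 0, "slow": 1, "normal": 2}
OPS = {"=": operator.eq, "!=": operator.ne, "<": operator.lt,
       "<=": operator.le, ">": operator.gt, ">=": operator.ge}

def speed_passes(filter_expr: str, test_speed: str) -> bool:
    """Evaluate one "speed<op><value>" filter against a test's speed."""
    op, value = re.match(r"speed(<=|>=|!=|[=<>])(\w+)", filter_expr).groups()
    return OPS[op](SPEEDS[test_speed], SPEEDS[value])

# "speed>slow" keeps tests faster than slow, i.e. "normal" ones.
print(speed_passes("speed>slow", "normal"))     # True
print(speed_passes("speed>slow", "very_slow"))  # False
```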
Additionally, you can use multiple filters at once. Simply separate filtersusing commas. Example:
kunit.py run --filter "speed>slow, module=kunit_example_test"

Note
You can use this filtering feature when running KUnit manually by passing the filter as a module param: kunit.filter="speed>slow, speed<=normal".
Filtered tests will not run or show up in the test output. You can use the --filter_action=skip flag to skip filtered tests instead. These tests will be shown in the test output but will not run. To use this feature when running KUnit manually, use the module param kunit.filter_action=skip.
Rules of Filtering Procedure
Since both suites and test cases can have attributes, there may be conflicts between attributes during filtering. The process of filtering follows these rules:
- Filtering always operates at a per-test level.
- If a test has an attribute set, then the test’s value is filtered on.
- Otherwise, the value falls back to the suite’s value.
- If neither are set, the attribute has a global “default” value, which is used.
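These fallback rules can be modeled in a few lines of Python (a sketch, not kernel code; the helper and variable names are made up, and the "normal" default matches the documented default speed):

```python
# Illustrative global defaults; "normal" is the documented default speed.
DEFAULTS = {"speed": "normal", "module": None}

def effective_attr(name: str, case_attrs: dict, suite_attrs: dict):
    """Resolve an attribute per the filtering rules: case, then suite, then default."""
    if case_attrs.get(name) is not None:
        return case_attrs[name]
    if suite_attrs.get(name) is not None:
        return suite_attrs[name]
    return DEFAULTS[name]

suite = {"speed": "slow"}
print(effective_attr("speed", {"speed": "very_slow"}, suite))  # very_slow (case wins)
print(effective_attr("speed", {}, suite))                      # slow (falls back to suite)
print(effective_attr("speed", {}, {}))                         # normal (global default)
```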
List of Current Attributes
speed
This attribute indicates the speed of a test’s execution (how slow or fast thetest is).
This attribute is saved as an enum with the following categories: “normal”, “slow”, or “very_slow”. The assumed default speed for tests is “normal”. This indicates that the test takes a relatively trivial amount of time (less than 1 second), regardless of the machine it is running on. Any test slower than this could be marked as “slow” or “very_slow”.
The macro KUNIT_CASE_SLOW(test_name) can be easily used to set the speed of a test case to “slow”.
module
This attribute indicates the name of the module associated with the test.
This attribute is automatically saved as a string and is printed for each suite. Tests can also be filtered using this attribute.
is_init
This attribute indicates whether the test uses init data or functions.
This attribute is automatically saved as a boolean and tests can also be filtered using this attribute.