Using KUnit¶
The purpose of this document is to describe what KUnit is, how it works, how it is intended to be used, and all the concepts and terminology that are needed to understand it. This guide assumes a working knowledge of the Linux kernel and some basic knowledge of testing.
For a high level introduction to KUnit, including setting up KUnit for your project, see Getting Started.
Organization of this document¶
This document is organized into two main sections: Testing and Isolating Behavior. The first covers what unit tests are and how to use KUnit to write them. The second covers how to use KUnit to isolate code and make it possible to unit test code that was otherwise un-unit-testable.
Testing¶
What is KUnit?¶
“K” is short for “kernel” so “KUnit” is the “(Linux) Kernel Unit Testing Framework.” KUnit is intended first and foremost for writing unit tests; it is general enough that it can be used to write integration tests; however, this is a secondary goal. KUnit has no ambition of being the only testing framework for the kernel; for example, it does not intend to be an end-to-end testing framework.
What is Unit Testing?¶
A unit test is a test that exercises code at the smallest possible scope, a unit of code. In the C programming language that is a function.
Unit tests should be written for all the publicly exposed functions in a compilation unit; that is, all the functions that are exported as part of a class (defined below) plus all functions which are not static.
Writing Tests¶
Test Cases¶
The fundamental unit in KUnit is the test case. A test case is a function with the signature void (*)(struct kunit *test). It calls a function to be tested and then sets expectations for what should happen. For example:
void example_test_success(struct kunit *test)
{
}

void example_test_failure(struct kunit *test)
{
        KUNIT_FAIL(test, "This test never passes.");
}
In the above example, example_test_success always passes because it does nothing; no expectations are set, so all expectations pass. On the other hand, example_test_failure always fails because it calls KUNIT_FAIL, which is a special expectation that logs a message and causes the test case to fail.
Expectations¶
An expectation is a way to specify that you expect a piece of code to do something in a test. An expectation is called like a function. A test is made by setting expectations about the behavior of a piece of code under test; when one or more of the expectations fail, the test case fails and information about the failure is logged. For example:
void add_test_basic(struct kunit *test)
{
        KUNIT_EXPECT_EQ(test, 1, add(1, 0));
        KUNIT_EXPECT_EQ(test, 2, add(1, 1));
}
In the above example, add_test_basic makes a number of expectations about the behavior of a function called add; the first parameter is always of type struct kunit *, which contains information about the current test context; the second parameter, in this case, is what the value is expected to be; the last value is what the value actually is. If add passes all of these expectations, the test case add_test_basic will pass; if any one of these expectations fails, the test case will fail.
It is important to understand that a test case fails when any expectation is violated; however, the test will continue running, potentially trying other expectations until the test case ends or is otherwise terminated. This is as opposed to assertions, which are discussed later.
To learn about more expectations supported by KUnit, see Test API.
Note
A single test case should be pretty short, pretty easy to understand, and focused on a single behavior.
For example, if we wanted to properly test the add function above, we would create additional test cases which would each test a different property that an add function should have, like this:
void add_test_basic(struct kunit *test)
{
        KUNIT_EXPECT_EQ(test, 1, add(1, 0));
        KUNIT_EXPECT_EQ(test, 2, add(1, 1));
}

void add_test_negative(struct kunit *test)
{
        KUNIT_EXPECT_EQ(test, 0, add(-1, 1));
}

void add_test_max(struct kunit *test)
{
        KUNIT_EXPECT_EQ(test, INT_MAX, add(0, INT_MAX));
        KUNIT_EXPECT_EQ(test, -1, add(INT_MAX, INT_MIN));
}

void add_test_overflow(struct kunit *test)
{
        KUNIT_EXPECT_EQ(test, INT_MIN, add(INT_MAX, 1));
}
Notice how it is immediately obvious which properties we are testing for.
Assertions¶
KUnit also has the concept of an assertion. An assertion is just like an expectation except that the assertion immediately terminates the test case if it is not satisfied.
For example:
static void mock_test_do_expect_default_return(struct kunit *test)
{
        struct mock_test_context *ctx = test->priv;
        struct mock *mock = ctx->mock;
        int param0 = 5, param1 = -5;
        const char *two_param_types[] = {"int", "int"};
        const void *two_params[] = {&param0, &param1};
        const void *ret;

        ret = mock->do_expect(mock,
                              "test_printk", test_printk,
                              two_param_types, two_params,
                              ARRAY_SIZE(two_params));
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ret);
        KUNIT_EXPECT_EQ(test, -4, *((int *)ret));
}
In this example, the method under test should return a pointer to a value, so if the pointer returned by the method is null or an errno, we don’t want to bother continuing the test since the following expectation could crash the test case. ASSERT_NOT_ERR_OR_NULL(…) allows us to bail out of the test case if the appropriate conditions have not been satisfied to complete the test.
Test Suites¶
Now obviously one unit test isn’t very helpful; the power comes from having many test cases covering all of a unit’s behaviors. Consequently it is common to have many similar tests; in order to reduce duplication in these closely related tests most unit testing frameworks (including KUnit) provide the concept of a test suite. A test suite is just a collection of test cases for a unit of code with a set up function that gets invoked before every test case and then a tear down function that gets invoked after every test case completes.
Example:
static struct kunit_case example_test_cases[] = {
        KUNIT_CASE(example_test_foo),
        KUNIT_CASE(example_test_bar),
        KUNIT_CASE(example_test_baz),
        {}
};

static struct kunit_suite example_test_suite = {
        .name = "example",
        .init = example_test_init,
        .exit = example_test_exit,
        .test_cases = example_test_cases,
};
kunit_test_suite(example_test_suite);
In the above example the test suite, example_test_suite, would run the test cases example_test_foo, example_test_bar, and example_test_baz; each would have example_test_init called immediately before it and would have example_test_exit called immediately after it. kunit_test_suite(example_test_suite) registers the test suite with the KUnit test framework.
Note
A test case will only be run if it is associated with a test suite.
For more information on these types of things see the Test API.
Isolating Behavior¶
The most important aspect of unit testing that other forms of testing do not provide is the ability to limit the amount of code under test to a single unit. In practice, this is only possible by being able to control what code gets run when the unit under test calls a function, and this is usually accomplished through some sort of indirection where a function is exposed as part of an API such that the definition of that function can be changed without affecting the rest of the code base. In the kernel this primarily comes from two constructs: classes, which are structs that contain function pointers provided by the implementer, and architecture-specific functions, which have definitions selected at compile time.
Classes¶
Classes are not a construct that is built into the C programming language; however, it is an easily derived concept. Accordingly, pretty much every project that does not use a standardized object oriented library (like GNOME’s GObject) has its own slightly different way of doing object oriented programming; the Linux kernel is no exception.
The central concept in kernel object oriented programming is the class. In the kernel, a class is a struct that contains function pointers. This creates a contract between implementers and users since it forces them to use the same function signature without having to call the function directly. In order for it to truly be a class, the function pointers must specify that a pointer to the class, known as a class handle, be one of the parameters; this makes it possible for the member functions (also known as methods) to have access to member variables (more commonly known as fields) allowing the same implementation to have multiple instances.
Typically a class can be overridden by child classes by embedding the parent class in the child class. Then when a method provided by the child class is called, the child implementation knows that the pointer passed to it is of a parent contained within the child; because of this, the child can compute the pointer to itself because the pointer to the parent is always a fixed offset from the pointer to the child; this offset is the offset of the parent contained in the child struct. For example:
struct shape {
        int (*area)(struct shape *this);
};

struct rectangle {
        struct shape parent;
        int length;
        int width;
};

int rectangle_area(struct shape *this)
{
        struct rectangle *self = container_of(this, struct rectangle, parent);

        return self->length * self->width;
};

void rectangle_new(struct rectangle *self, int length, int width)
{
        self->parent.area = rectangle_area;
        self->length = length;
        self->width = width;
}
In this example (as in most kernel code) the operation of computing the pointer to the child from the pointer to the parent is done by container_of.
Faking Classes¶
In order to unit test a piece of code that calls a method in a class, the behavior of the method must be controllable; otherwise the test ceases to be a unit test and becomes an integration test.
A fake just provides an implementation of a piece of code that is different from what runs in a production instance, but behaves identically from the standpoint of the callers; this is usually done to replace a dependency that is hard to deal with, or is slow.
A good example for this might be implementing a fake EEPROM that just stores the “contents” in an internal buffer. For example, let’s assume we have a class that represents an EEPROM:
struct eeprom {
        ssize_t (*read)(struct eeprom *this, size_t offset, char *buffer, size_t count);
        ssize_t (*write)(struct eeprom *this, size_t offset, const char *buffer, size_t count);
};
And we want to test some code that buffers writes to the EEPROM:
struct eeprom_buffer {
        ssize_t (*write)(struct eeprom_buffer *this, const char *buffer, size_t count);
        int (*flush)(struct eeprom_buffer *this);
        size_t flush_count; /* Flushes when buffer exceeds flush_count. */
};

struct eeprom_buffer *new_eeprom_buffer(struct eeprom *eeprom);
void destroy_eeprom_buffer(struct eeprom_buffer *eeprom_buffer);
We can easily test this code by faking out the underlying EEPROM:
struct fake_eeprom {
        struct eeprom parent;
        char contents[FAKE_EEPROM_CONTENTS_SIZE];
};

ssize_t fake_eeprom_read(struct eeprom *parent, size_t offset, char *buffer, size_t count)
{
        struct fake_eeprom *this = container_of(parent, struct fake_eeprom, parent);

        count = min(count, FAKE_EEPROM_CONTENTS_SIZE - offset);
        memcpy(buffer, this->contents + offset, count);

        return count;
}

ssize_t fake_eeprom_write(struct eeprom *parent, size_t offset, const char *buffer, size_t count)
{
        struct fake_eeprom *this = container_of(parent, struct fake_eeprom, parent);

        count = min(count, FAKE_EEPROM_CONTENTS_SIZE - offset);
        memcpy(this->contents + offset, buffer, count);

        return count;
}

void fake_eeprom_init(struct fake_eeprom *this)
{
        this->parent.read = fake_eeprom_read;
        this->parent.write = fake_eeprom_write;
        memset(this->contents, 0, FAKE_EEPROM_CONTENTS_SIZE);
}
We can now use it to test struct eeprom_buffer:
struct eeprom_buffer_test {
        struct fake_eeprom *fake_eeprom;
        struct eeprom_buffer *eeprom_buffer;
};

static void eeprom_buffer_test_does_not_write_until_flush(struct kunit *test)
{
        struct eeprom_buffer_test *ctx = test->priv;
        struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
        struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
        char buffer[] = {0xff};

        eeprom_buffer->flush_count = SIZE_MAX;

        eeprom_buffer->write(eeprom_buffer, buffer, 1);
        KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);

        eeprom_buffer->write(eeprom_buffer, buffer, 1);
        KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0);

        eeprom_buffer->flush(eeprom_buffer);
        KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
        KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
}

static void eeprom_buffer_test_flushes_after_flush_count_met(struct kunit *test)
{
        struct eeprom_buffer_test *ctx = test->priv;
        struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
        struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
        char buffer[] = {0xff};

        eeprom_buffer->flush_count = 2;

        eeprom_buffer->write(eeprom_buffer, buffer, 1);
        KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);

        eeprom_buffer->write(eeprom_buffer, buffer, 1);
        KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
        KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
}

static void eeprom_buffer_test_flushes_increments_of_flush_count(struct kunit *test)
{
        struct eeprom_buffer_test *ctx = test->priv;
        struct eeprom_buffer *eeprom_buffer = ctx->eeprom_buffer;
        struct fake_eeprom *fake_eeprom = ctx->fake_eeprom;
        char buffer[] = {0xff, 0xff};

        eeprom_buffer->flush_count = 2;

        eeprom_buffer->write(eeprom_buffer, buffer, 1);
        KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0);

        eeprom_buffer->write(eeprom_buffer, buffer, 2);
        KUNIT_EXPECT_EQ(test, fake_eeprom->contents[0], 0xff);
        KUNIT_EXPECT_EQ(test, fake_eeprom->contents[1], 0xff);
        /* Should have only flushed the first two bytes. */
        KUNIT_EXPECT_EQ(test, fake_eeprom->contents[2], 0);
}

static int eeprom_buffer_test_init(struct kunit *test)
{
        struct eeprom_buffer_test *ctx;

        ctx = kunit_kzalloc(test, sizeof(*ctx), GFP_KERNEL);
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);

        ctx->fake_eeprom = kunit_kzalloc(test, sizeof(*ctx->fake_eeprom), GFP_KERNEL);
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->fake_eeprom);
        fake_eeprom_init(ctx->fake_eeprom);

        ctx->eeprom_buffer = new_eeprom_buffer(&ctx->fake_eeprom->parent);
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx->eeprom_buffer);

        test->priv = ctx;

        return 0;
}

static void eeprom_buffer_test_exit(struct kunit *test)
{
        struct eeprom_buffer_test *ctx = test->priv;

        destroy_eeprom_buffer(ctx->eeprom_buffer);
}
KUnit on non-UML architectures¶
By default KUnit uses UML as a way to provide dependencies for code under test. Under most circumstances KUnit’s usage of UML should be treated as an implementation detail of how KUnit works under the hood. Nevertheless, there are instances where being able to run architecture-specific code or test against real hardware is desirable. For these reasons KUnit supports running on other architectures.
Running existing KUnit tests on non-UML architectures¶
There are some special considerations when running existing KUnit tests on non-UML architectures:
- Hardware may not be deterministic, so a test that always passes or fails when run under UML may not always do so on real hardware.
- Hardware and VM environments may not be hermetic. KUnit tries its best to provide a hermetic environment to run tests; however, it cannot manage state that it doesn’t know about outside of the kernel. Consequently, tests that may be hermetic on UML may not be hermetic on other architectures.
- Some features and tooling may not be supported outside of UML.
- Hardware and VMs are slower than UML.
None of these are reasons not to run your KUnit tests on real hardware; they are only things to be aware of when doing so.
The biggest impediment will likely be that certain KUnit features and infrastructure may not support your target environment. For example, at this time the KUnit Wrapper (tools/testing/kunit/kunit.py) does not work outside of UML. Unfortunately, there is no way around this. Using UML (or even just a particular architecture) allows us to make a lot of assumptions that make it possible to do things which might otherwise be impossible.
Nevertheless, all core KUnit framework features are fully supported on all architectures, and using them is straightforward: all you need to do is take your kunitconfig, your Kconfig options for the tests you would like to run, and merge them into whatever config you are using for your platform. That’s it!
For example, let’s say you have the following kunitconfig:
CONFIG_KUNIT=y
CONFIG_KUNIT_EXAMPLE_TEST=y
If you wanted to run this test on an x86 VM, you might add the following config options to your .config:
CONFIG_KUNIT=y
CONFIG_KUNIT_EXAMPLE_TEST=y
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y
All these new options do is enable support for a common serial console needed for logging.
Next, you could build a kernel with these tests as follows:
make ARCH=x86 olddefconfig
make ARCH=x86
Once you have built a kernel, you could run it on QEMU as follows:
qemu-system-x86_64 -enable-kvm \
        -m 1024 \
        -kernel arch/x86_64/boot/bzImage \
        -append 'console=ttyS0' \
        --nographic
Interspersed in the kernel logs you might see the following:
TAP version 14
        # Subtest: example
        1..1
        # example_simple_test: initializing
        ok 1 - example_simple_test
ok 1 - example
Congratulations, you just ran a KUnit test on the x86 architecture!
In a similar manner, KUnit and KUnit tests can also be built as modules, so if you wanted to run tests in this way you might add the following config options to your .config:
CONFIG_KUNIT=m
CONFIG_KUNIT_EXAMPLE_TEST=m
Once the kernel is built and installed, a simple
modprobe example-test
…will run the tests.
Writing new tests for other architectures¶
The first thing you must do is ask yourself whether it is necessary to write a KUnit test for a specific architecture, and then whether it is necessary to write that test for a particular piece of hardware. In general, writing a test that depends on having access to a particular piece of hardware or software (not included in the Linux source repo) should be avoided at all costs.
Even if you only ever plan on running your KUnit test on your hardware configuration, other people may want to run your tests and may not have access to your hardware. If you write your test to run on UML, then anyone can run your tests without knowing anything about your particular setup, and you can still run your tests on your hardware setup just by compiling for your architecture.
Important
Always prefer tests that run on UML to tests that only run under a particular architecture, and always prefer tests that run under QEMU or another easy to obtain (and monetarily free) software environment to tests that require a specific piece of hardware.
Nevertheless, there are still valid reasons to write an architecture- or hardware-specific test: for example, you might want to test some code that really belongs in arch/some-arch/*. Even so, try your best to write the test so that it does not depend on physical hardware: if some of your test cases don’t need the hardware, only require the hardware for the tests that actually need it.
Now that you have narrowed down exactly what bits are hardware-specific, the actual procedure for writing and running the tests is pretty much the same as writing normal KUnit tests. One special caveat is that you have to reset hardware state in between test cases; if this is not possible, you may only be able to run one test case per invocation.
KUnit debugfs representation¶
When KUnit test suites are initialized, they create an associated directory in /sys/kernel/debug/kunit/<test-suite>. The directory contains one file:
- results: “cat results” displays the results of each test case and the results of the entire suite for the last test run.
The debugfs representation is primarily of use when KUnit test suites are run in a native environment, either as modules or builtin. Having a way to display results like this is valuable as otherwise results can be intermixed with other events in dmesg output. The maximum size of each results file is KUNIT_LOG_SIZE bytes (defined in include/kunit/test.h).