Linux Kernel Selftests

The kernel contains a set of “self tests” under the tools/testing/selftests/ directory. These are intended to be small tests to exercise individual code paths in the kernel. Tests are intended to be run after building, installing, and booting a kernel.

You can find additional information on the Kselftest framework and how to write new tests using the framework on the Kselftest wiki:

https://kselftest.wiki.kernel.org/

On some systems, hot-plug tests could hang forever waiting for cpu and memory to be ready to be offlined. A special hot-plug target is created to run the full range of hot-plug tests. In default mode, hot-plug tests run in safe mode with a limited scope. In limited mode, the cpu-hotplug test is run on a single cpu as opposed to all hotplug capable cpus, and the memory hotplug test is run on 2% of hotplug capable memory instead of 10%.

kselftest runs as a userspace process. Tests that can be written/run in userspace may wish to use the Test Harness. Tests that need to be run in kernel space may wish to use a Test Module.

Running the selftests (hotplug tests are run in limited mode)

To build the tests:

$ make -C tools/testing/selftests

To run the tests:

$ make -C tools/testing/selftests run_tests

To build and run the tests with a single command, use:

$ make kselftest

Note that some tests will require root privileges.

Kselftest supports saving output files in a separate directory and then running tests. To locate output files in a separate directory, two syntaxes are supported. In both cases the working directory must be the root of the kernel source tree. This is also applicable to the “Running a subset of selftests” section below.

To build and save output files in a separate directory with O=:

$ make O=/tmp/kselftest kselftest

To build and save output files in a separate directory with KBUILD_OUTPUT:

$ export KBUILD_OUTPUT=/tmp/kselftest; make kselftest

The O= assignment takes precedence over the KBUILD_OUTPUT environment variable.

The above commands by default run the tests and print a full pass/fail report. Kselftest supports a “summary” option to make it easier to understand the test results. Please find the detailed individual test results for each test in /tmp/testname file(s) when the summary option is specified. This is also applicable to the “Running a subset of selftests” section below.

To run kselftest with the summary option enabled:

$ make summary=1 kselftest

Running a subset of selftests

You can use the “TARGETS” variable on the make command line to specify a single test to run, or a list of tests to run.

To run only tests targeted for a single subsystem:

$ make -C tools/testing/selftests TARGETS=ptrace run_tests

You can specify multiple tests to build and run:

$ make TARGETS="size timers" kselftest

To build and save output files in a separate directory with O=:

$ make O=/tmp/kselftest TARGETS="size timers" kselftest

To build and save output files in a separate directory with KBUILD_OUTPUT:

$ export KBUILD_OUTPUT=/tmp/kselftest; make TARGETS="size timers" kselftest

Additionally you can use the “SKIP_TARGETS” variable on the make command line to specify one or more targets to exclude from the TARGETS list.

To run all tests but a single subsystem:

$ make -C tools/testing/selftests SKIP_TARGETS=ptrace run_tests

You can specify multiple tests to skip:

$ make SKIP_TARGETS="size timers" kselftest

You can also specify a restricted list of tests to run together with adedicated skiplist:

$ make TARGETS="bpf breakpoints size timers" SKIP_TARGETS=bpf kselftest

See the top-level tools/testing/selftests/Makefile for the list of all possible targets.

Running the full range hotplug selftests

To build the hotplug tests:

$ make -C tools/testing/selftests hotplug

To run the hotplug tests:

$ make -C tools/testing/selftests run_hotplug

Note that some tests will require root privileges.

Install selftests

You can use the kselftest_install.sh tool to install selftests in the default location, which is tools/testing/selftests/kselftest, or in a user-specified location.

To install selftests in default location:

$ cd tools/testing/selftests
$ ./kselftest_install.sh

To install selftests in a user specified location:

$ cd tools/testing/selftests
$ ./kselftest_install.sh install_dir

Running installed selftests

Kselftest install as well as the Kselftest tarball provide a script named “run_kselftest.sh” to run the tests.

You can simply do the following to run the installed Kselftests. Please note some tests will require root privileges:

$ cd kselftest
$ ./run_kselftest.sh

Packaging selftests

In some cases packaging is desired, such as when tests need to run on a different system. To package selftests, run:

$ make -C tools/testing/selftests gen_tar

This generates a tarball in the INSTALL_PATH/kselftest-packages directory. By default, .gz format is used. The tar format can be overridden by specifying a FORMAT make variable. Any value recognized by tar’s auto-compress option is supported, such as:

$ make -C tools/testing/selftests gen_tar FORMAT=.xz

make gen_tar invokes make install, so you can use it to package a subset of tests by using the variables specified in the “Running a subset of selftests” section:

$ make -C tools/testing/selftests gen_tar TARGETS="bpf" FORMAT=.xz

Contributing new tests

In general, the rules for selftests are:

  • Do as much as you can if you’re not root;
  • Don’t take too long;
  • Don’t break the build on any architecture, and
  • Don’t cause the top-level “make run_tests” to fail if your feature is unconfigured.
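The last rule is commonly met via kselftest's exit-code convention: a test that cannot run returns the skip code rather than failing. A minimal standalone sketch follows; the KSFT_PASS/KSFT_SKIP values match tools/testing/selftests/kselftest.h, while feature_configured() is a hypothetical stand-in for a real probe such as checking that a /proc or /sys entry exists:

```c
/* Exit-code convention from tools/testing/selftests/kselftest.h */
#define KSFT_PASS 0
#define KSFT_SKIP 4

/* Hypothetical stand-in for probing whether the feature under test
 * is configured in the running kernel. */
static int feature_configured(int present)
{
        return present;
}

/* Pick the exit code for a test run: skip cleanly when the feature
 * is unconfigured so the top-level "make run_tests" does not fail. */
int test_exit_code(int feature_present)
{
        if (!feature_configured(feature_present))
                return KSFT_SKIP;       /* reported as skipped, not failed */
        return KSFT_PASS;
}
```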

Contributing new tests (details)

  • Use TEST_GEN_XXX if such binaries or files are generated during compiling.

    TEST_PROGS, TEST_GEN_PROGS mean it is the executable tested by default.

    TEST_CUSTOM_PROGS should be used by tests that require custom build rules and prevent common build rule use.

    TEST_PROGS are for test shell scripts. Please ensure the shell script has its exec bit set. Otherwise, lib.mk run_tests will generate a warning.

    TEST_CUSTOM_PROGS and TEST_PROGS will be run by common run_tests.

    TEST_PROGS_EXTENDED, TEST_GEN_PROGS_EXTENDED mean it is the executable which is not tested by default. TEST_FILES, TEST_GEN_FILES mean it is the file which is used by a test.

  • First use the headers inside the kernel source and/or git repo, and then the system headers. Headers for the kernel release as opposed to headers installed by the distro on the system should be the primary focus to be able to find regressions.

  • If a test needs specific kernel config options enabled, add a config file in the test directory to enable them.

    e.g.: tools/testing/selftests/android/config
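Putting the variables above together, a minimal selftest Makefile might look like the following. This is a hedged sketch; “mytest” and its files are hypothetical, but the variable names and the lib.mk include follow the conventions described above:

```make
# Hypothetical tools/testing/selftests/mytest/Makefile
TEST_GEN_PROGS := mytest        # built from mytest.c by the common rules
TEST_PROGS := mytest.sh         # shell script; must have its exec bit set
TEST_FILES := testdata.txt      # data file used by the test

include ../lib.mk
```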

Test Module

Kselftest tests the kernel from userspace. Sometimes things need testing from within the kernel; one method of doing this is to create a test module. We can tie the module into the kselftest framework by using a shell script test runner. kselftest/module.sh is designed to facilitate this process. There is also a header file provided to assist writing kernel modules that are for use with kselftest:

  • tools/testing/selftests/kselftest_module.h
  • tools/testing/selftests/kselftest/module.sh

How to use

Here we show the typical steps to create a test module and tie it intokselftest. We use kselftests for lib/ as an example.

  1. Create the test module
  2. Create the test script that will run (load/unload) the module, e.g. tools/testing/selftests/lib/printf.sh
  3. Add line to config file, e.g. tools/testing/selftests/lib/config
  4. Add test script to makefile, e.g. tools/testing/selftests/lib/Makefile
  5. Verify it works:
# Assumes you have booted a fresh build of this kernel tree
cd /path/to/linux/tree
make kselftest-merge
make modules
sudo make modules_install
make TARGETS=lib kselftest

Example Module

A bare bones test module might look like this:

// SPDX-License-Identifier: GPL-2.0+
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include "../tools/testing/selftests/kselftest/module.h"

KSTM_MODULE_GLOBALS();

/*
 * Kernel module for testing the foobinator
 */

static int __init test_function()
{
        ...
}

static void __init selftest(void)
{
        KSTM_CHECK_ZERO(do_test_case("", 0));
}

KSTM_MODULE_LOADERS(test_foo);
MODULE_AUTHOR("John Developer <jd@fooman.org>");
MODULE_LICENSE("GPL");

Example test script

#!/bin/bash
# SPDX-License-Identifier: GPL-2.0+
$(dirname $0)/../kselftest/module.sh "foo" test_foo

Test Harness

The kselftest_harness.h file contains useful helpers to build tests. The test harness is for userspace testing; for kernel space testing see Test Module above.

The tests from tools/testing/selftests/seccomp/seccomp_bpf.c can be used as an example.

Example

#include "../kselftest_harness.h"

TEST(standalone_test) {
  do_some_stuff;
  EXPECT_GT(10, stuff) {
     stuff_state_t state;
     enumerate_stuff_state(&state);
     TH_LOG("expectation failed with state: %s", state.msg);
  }
  more_stuff;
  ASSERT_NE(some_stuff, NULL) TH_LOG("how did it happen?!");
  last_stuff;
  EXPECT_EQ(0, last_stuff);
}

FIXTURE(my_fixture) {
  mytype_t *data;
  int awesomeness_level;
};

FIXTURE_SETUP(my_fixture) {
  self->data = mytype_new();
  ASSERT_NE(NULL, self->data);
}

FIXTURE_TEARDOWN(my_fixture) {
  mytype_free(self->data);
}

TEST_F(my_fixture, data_is_good) {
  EXPECT_EQ(1, is_my_data_good(self->data));
}

TEST_HARNESS_MAIN

Helpers

TH_LOG(fmt, ...)

Parameters

fmt
format string
...
optional arguments

Description

TH_LOG(format, ...)

Optional debug logging function available for use in tests. Logging may be enabled or disabled by defining TH_LOG_ENABLED. E.g., #define TH_LOG_ENABLED 1

If no definition is provided, logging is enabled by default.

If there is no way to print an error message for the process running the test (e.g. not allowed to write to stderr), it is still possible to get the ASSERT_* number for which the test failed. This behavior can be enabled by writing _metadata->no_print = true; before the check sequence that is unable to print. When an error occurs, instead of printing an error message and calling abort(3), the test process calls _exit(2) with the assert number as argument, which is then printed by the parent process.
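The _exit(2)-based reporting can be sketched in isolation: the child conveys the failing assert’s number through its exit status and the parent recovers it with WEXITSTATUS(). This is a simplified standalone illustration of the mechanism, not the harness code itself; failing_step() is hypothetical:

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* The child "fails" at the given assert number and reports it via
 * _exit(); the parent reads it back without needing stderr. */
int failing_step(int assert_number)
{
        pid_t pid = fork();

        if (pid == 0)
                _exit(assert_number);   /* child: no printing required */

        int status;
        waitpid(pid, &status, 0);
        return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```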

TEST(test_name)

Defines the test function and creates the registration stub

Parameters

test_name
test name

Description

TEST(name) { implementation }

Defines a test by name. Names must be unique and tests must not be run in parallel. The implementation containing block is a function and scoping should be treated as such. Returning early may be performed with a bare “return;” statement.

EXPECT_* and ASSERT_* are valid in a TEST() { } context.

TEST_SIGNAL(test_name,signal)

Parameters

test_name
test name
signal
signal number

Description

TEST_SIGNAL(name, signal) { implementation }

Defines a test by name and the expected term signal. Names must be unique and tests must not be run in parallel. The implementation containing block is a function and scoping should be treated as such. Returning early may be performed with a bare “return;” statement.

EXPECT_* and ASSERT_* are valid in a TEST() { } context.
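The harness runs each test in a forked child, and for a TEST_SIGNAL() test the parent checks that the child was terminated by the expected signal. The parent-side check can be sketched standalone, without the harness; term_signal_of() is hypothetical:

```c
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Run fn() in a forked child; return the signal that terminated it,
 * or 0 if it exited normally. */
int term_signal_of(void (*fn)(void))
{
        pid_t pid = fork();

        if (pid == 0) {
                fn();
                _exit(0);               /* no signal: normal exit */
        }

        int status;
        waitpid(pid, &status, 0);
        return WIFSIGNALED(status) ? WTERMSIG(status) : 0;
}

static void dies_by_segv(void)
{
        raise(SIGSEGV);                 /* default action kills the process */
}

static void exits_cleanly(void)
{
}
```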

FIXTURE_DATA(datatype_name)

Wraps the struct name so we have one less argument to pass around

Parameters

datatype_name
datatype name

Description

FIXTURE_DATA(datatype_name)

This call may be used when the type of the fixture data is needed. In general, this should not be needed unless the self is being passed to a helper directly.

FIXTURE(fixture_name)

Called once per fixture to setup the data and register

Parameters

fixture_name
fixture name

Description

FIXTURE(fixture_name) { type property1; ... };

Defines the data provided to TEST_F()-defined tests as self. It should be populated and cleaned up using FIXTURE_SETUP() and FIXTURE_TEARDOWN().

FIXTURE_SETUP(fixture_name)

Prepares the setup function for the fixture. _metadata is included so that EXPECT_* and ASSERT_* work correctly.

Parameters

fixture_name
fixture name

Description

FIXTURE_SETUP(fixture_name) { implementation }

Populates the required “setup” function for a fixture. An instance of the datatype defined with FIXTURE_DATA() will be exposed as self for the implementation.

ASSERT_* are valid for use in this context and will preempt the execution of any dependent fixture tests.

A bare “return;” statement may be used to return early.

FIXTURE_TEARDOWN(fixture_name)

Parameters

fixture_name
fixture name

Description

_metadata is included so that EXPECT_* and ASSERT_* work correctly.

FIXTURE_TEARDOWN(fixture_name) { implementation }

Populates the required “teardown” function for a fixture. An instance of the datatype defined with FIXTURE_DATA() will be exposed as self for the implementation to clean up.

A bare “return;” statement may be used to return early.

FIXTURE_VARIANT(fixture_name)

Optionally called once per fixture to declare fixture variant

Parameters

fixture_name
fixture name

Description

FIXTURE_VARIANT(fixture_name) { type property1; ... };

Defines the type of constant parameters provided to FIXTURE_SETUP() and TEST_F() as variant. Variants allow the same tests to be run with different arguments.

FIXTURE_VARIANT_ADD(fixture_name,variant_name)

Called once per fixture variant to setup and register the data

Parameters

fixture_name
fixture name
variant_name
name of the parameter set

Description

FIXTURE_VARIANT_ADD(fixture_name, variant_name) { .property1 = val1; ... };

Defines a variant of the test fixture, provided to FIXTURE_SETUP() and TEST_F() as variant. Tests of each fixture will be run once for each variant.

TEST_F(fixture_name,test_name)

Emits test registration and helpers for fixture-based test cases

Parameters

fixture_name
fixture name
test_name
test name

Description

TEST_F(fixture, name) { implementation }

Defines a test that depends on a fixture (e.g., is part of a test case). Very similar to TEST() except that self is the setup instance of the fixture’s datatype exposed for use by the implementation.

Warning: use of ASSERT_* here will skip TEARDOWN.

TEST_HARNESS_MAIN()

Simple wrapper to run the test harness

Parameters

Description

TEST_HARNESS_MAIN

Use once to append a main() to the test file.

Operators

Operators for use in TEST() and TEST_F(). ASSERT_* calls will stop test execution immediately. EXPECT_* calls will emit a failure warning, note it, and continue.
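The continue-vs-stop semantics can be illustrated with simplified standalone macros. This is only a sketch: the real harness macros also log failures and support the trailing TH_LOG() block syntax:

```c
static int failures;

/* Simplified EXPECT_EQ: note the failure and keep executing. */
#define EXPECT_EQ(expected, seen) \
        do { if ((expected) != (seen)) failures++; } while (0)

/* Simplified ASSERT_EQ: note the failure and stop the test at once. */
#define ASSERT_EQ(expected, seen) \
        do { if ((expected) != (seen)) { failures++; return failures; } } while (0)

/* Three failing checks, but the failing ASSERT in the middle stops
 * execution, so only two failures are ever recorded. */
int demo_test(void)
{
        failures = 0;
        EXPECT_EQ(1, 2);        /* fails; execution continues */
        ASSERT_EQ(1, 2);        /* fails; returns immediately */
        EXPECT_EQ(1, 2);        /* never reached */
        return failures;
}
```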

ASSERT_EQ(expected,seen)

Parameters

expected
expected value
seen
measured value

Description

ASSERT_EQ(expected, measured): expected == measured

ASSERT_NE(expected,seen)

Parameters

expected
expected value
seen
measured value

Description

ASSERT_NE(expected, measured): expected != measured

ASSERT_LT(expected,seen)

Parameters

expected
expected value
seen
measured value

Description

ASSERT_LT(expected, measured): expected < measured

ASSERT_LE(expected,seen)

Parameters

expected
expected value
seen
measured value

Description

ASSERT_LE(expected, measured): expected <= measured

ASSERT_GT(expected,seen)

Parameters

expected
expected value
seen
measured value

Description

ASSERT_GT(expected, measured): expected > measured

ASSERT_GE(expected,seen)

Parameters

expected
expected value
seen
measured value

Description

ASSERT_GE(expected, measured): expected >= measured

ASSERT_NULL(seen)

Parameters

seen
measured value

Description

ASSERT_NULL(measured): NULL == measured

ASSERT_TRUE(seen)

Parameters

seen
measured value

Description

ASSERT_TRUE(measured): measured != 0

ASSERT_FALSE(seen)

Parameters

seen
measured value

Description

ASSERT_FALSE(measured): measured == 0

ASSERT_STREQ(expected,seen)

Parameters

expected
expected value
seen
measured value

Description

ASSERT_STREQ(expected, measured): !strcmp(expected, measured)

ASSERT_STRNE(expected,seen)

Parameters

expected
expected value
seen
measured value

Description

ASSERT_STRNE(expected, measured): strcmp(expected, measured)

EXPECT_EQ(expected,seen)

Parameters

expected
expected value
seen
measured value

Description

EXPECT_EQ(expected, measured): expected == measured

EXPECT_NE(expected,seen)

Parameters

expected
expected value
seen
measured value

Description

EXPECT_NE(expected, measured): expected != measured

EXPECT_LT(expected,seen)

Parameters

expected
expected value
seen
measured value

Description

EXPECT_LT(expected, measured): expected < measured

EXPECT_LE(expected,seen)

Parameters

expected
expected value
seen
measured value

Description

EXPECT_LE(expected, measured): expected <= measured

EXPECT_GT(expected,seen)

Parameters

expected
expected value
seen
measured value

Description

EXPECT_GT(expected, measured): expected > measured

EXPECT_GE(expected,seen)

Parameters

expected
expected value
seen
measured value

Description

EXPECT_GE(expected, measured): expected >= measured

EXPECT_NULL(seen)

Parameters

seen
measured value

Description

EXPECT_NULL(measured): NULL == measured

EXPECT_TRUE(seen)

Parameters

seen
measured value

Description

EXPECT_TRUE(measured): 0 != measured

EXPECT_FALSE(seen)

Parameters

seen
measured value

Description

EXPECT_FALSE(measured): 0 == measured

EXPECT_STREQ(expected,seen)

Parameters

expected
expected value
seen
measured value

Description

EXPECT_STREQ(expected, measured): !strcmp(expected, measured)

EXPECT_STRNE(expected,seen)

Parameters

expected
expected value
seen
measured value

Description

EXPECT_STRNE(expected, measured): strcmp(expected, measured)