4. Getting the code right
While there is much to be said for a solid and community-oriented design process, the proof of any kernel development project is in the resulting code. It is the code which will be examined by other developers and merged (or not) into the mainline tree. So it is the quality of this code which will determine the ultimate success of the project.
This section will examine the coding process. We’ll start with a look at a number of ways in which kernel developers can go wrong. Then the focus will shift toward doing things right and the tools which can help in that quest.
4.1. Pitfalls
4.1.1. Coding style
The kernel has long had a standard coding style, described in Documentation/process/coding-style.rst. For much of that time, the policies described in that file were taken as being, at most, advisory. As a result, there is a substantial amount of code in the kernel which does not meet the coding style guidelines. The presence of that code leads to two independent hazards for kernel developers.
The first of these is to believe that the kernel coding standards do not matter and are not enforced. The truth of the matter is that adding new code to the kernel is very difficult if that code is not coded according to the standard; many developers will request that the code be reformatted before they will even review it. A code base as large as the kernel requires some uniformity of code to make it possible for developers to quickly understand any part of it. So there is no longer room for strangely-formatted code.
Occasionally, the kernel’s coding style will run into conflict with an employer’s mandated style. In such cases, the kernel’s style will have to win before the code can be merged. Putting code into the kernel means giving up a degree of control in a number of ways - including control over how the code is formatted.
The other trap is to assume that code which is already in the kernel is urgently in need of coding style fixes. Developers may start to generate reformatting patches as a way of gaining familiarity with the process, or as a way of getting their name into the kernel changelogs - or both. But pure coding style fixes are seen as noise by the development community; they tend to get a chilly reception. So this type of patch is best avoided. It is natural to fix the style of a piece of code while working on it for other reasons, but coding style changes should not be made for their own sake.
The coding style document also should not be read as an absolute law which can never be transgressed. If there is a good reason to go against the style (a line which becomes far less readable if split to fit within the 80-column limit, for example), just do it.
Note that you can also use the clang-format tool to help you with these rules, to quickly re-format parts of your code automatically, and to review full files in order to spot coding style mistakes, typos and possible improvements. It is also handy for sorting #includes, for aligning variables/macros, for reflowing text and other similar tasks. See the file Documentation/dev-tools/clang-format.rst for more details.
Some basic editor settings, such as indentation and line endings, will be set automatically if you are using an editor that is compatible with EditorConfig. See the official EditorConfig website for more information: https://editorconfig.org/
4.1.2. Abstraction layers
Computer Science professors teach students to make extensive use of abstraction layers in the name of flexibility and information hiding. Certainly the kernel makes extensive use of abstraction; no project involving several million lines of code could do otherwise and survive. But experience has shown that excessive or premature abstraction can be just as harmful as premature optimization. Abstraction should be used to the level required and no further.
At a simple level, consider a function which has an argument which is always passed as zero by all callers. One could retain that argument just in case somebody eventually needs to use the extra flexibility that it provides. By that time, though, chances are good that the code which implements this extra argument has been broken in some subtle way which was never noticed - because it has never been used. Or, when the need for extra flexibility arises, it does not do so in a way which matches the programmer’s early expectation. Kernel developers will routinely submit patches to remove unused arguments; they should, in general, not be added in the first place.
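The pattern can be sketched in a few lines of C; all of the names here are hypothetical, invented purely for illustration:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

struct widget { int id; };

/* Over-designed version: every caller passes 0 for "flags", so the
 * flags != 0 path is never exercised and may be silently broken. */
static int widget_setup_flex(struct widget *w, unsigned int flags)
{
	if (flags)		/* dead, untested "flexibility" */
		return -EINVAL;
	w->id = 1;
	return 0;
}

/* Preferred version: drop the argument until a real user shows up;
 * adding it back later is a routine, mechanical change. */
static int widget_setup(struct widget *w)
{
	w->id = 1;
	return 0;
}
```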
Abstraction layers which hide access to hardware - often to allow the bulk of a driver to be used with multiple operating systems - are especially frowned upon. Such layers obscure the code and may impose a performance penalty; they do not belong in the Linux kernel.
On the other hand, if you find yourself copying significant amounts of code from another kernel subsystem, it is time to ask whether it would, in fact, make sense to pull out some of that code into a separate library or to implement that functionality at a higher level. There is no value in replicating the same code throughout the kernel.
4.1.3. #ifdef and preprocessor use in general
The C preprocessor seems to present a powerful temptation to some C programmers, who see it as a way to efficiently encode a great deal of flexibility into a source file. But the preprocessor is not C, and heavy use of it results in code which is much harder for others to read and harder for the compiler to check for correctness. Heavy preprocessor use is almost always a sign of code which needs some cleanup work.
Conditional compilation with #ifdef is, indeed, a powerful feature, and it is used within the kernel. But there is little desire to see code which is sprinkled liberally with #ifdef blocks. As a general rule, #ifdef use should be confined to header files whenever possible. Conditionally-compiled code can be confined to functions which, if the code is not to be present, simply become empty. The compiler will then quietly optimize out the call to the empty function. The result is far cleaner code which is easier to follow.
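As a sketch of that pattern (CONFIG_FROBNICATE and the function names are invented; in a real driver the #ifdef block would live in a header, with the option set by Kconfig):

```c
#include <assert.h>

/* Would normally be set (or not) by Kconfig; left undefined here to
 * show the "feature absent" case. */
/* #define CONFIG_FROBNICATE */

#ifdef CONFIG_FROBNICATE
int frobnicate(int n);			/* real implementation elsewhere */
#else
static inline int frobnicate(int n)
{
	return 0;			/* empty stub; calls optimize away */
}
#endif

/* Callers are completely free of #ifdef clutter. */
int do_work(int n)
{
	return n + frobnicate(n);
}
```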
C preprocessor macros present a number of hazards, including possible multiple evaluation of expressions with side effects and no type safety. If you are tempted to define a macro, consider creating an inline function instead. The code which results will be the same, but inline functions are easier to read, do not evaluate their arguments multiple times, and allow the compiler to perform type checking on the arguments and return value.
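A classic illustration of the multiple-evaluation hazard (the macro and function names are invented):

```c
#include <assert.h>

/* The macro evaluates its winning argument twice: once in the
 * comparison and once to produce the result. */
#define MAX_MACRO(a, b) ((a) > (b) ? (a) : (b))

/* The inline function evaluates each argument exactly once, and the
 * compiler type-checks both arguments and the return value. */
static inline int max_inline(int a, int b)
{
	return a > b ? a : b;
}
```

With int i = 7, MAX_MACRO(i++, 6) increments i twice and evaluates to 8, while max_inline(i++, 6) increments i once and returns 7 - a difference that can hide truly obscure bugs.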
4.1.4. Inline functions
Inline functions present a hazard of their own, though. Programmers can become enamored of the perceived efficiency inherent in avoiding a function call and fill a source file with inline functions. Those functions, however, can actually reduce performance. Since their code is replicated at each call site, they end up bloating the size of the compiled kernel. That, in turn, creates pressure on the processor’s memory caches, which can slow execution dramatically. Inline functions, as a rule, should be quite small and relatively rare. The cost of a function call, after all, is not that high; the creation of large numbers of inline functions is a classic example of premature optimization.
In general, kernel programmers ignore cache effects at their peril. The classic time/space tradeoff taught in beginning data structures classes often does not apply to contemporary hardware. Space is time, in that a larger program will run slower than one which is more compact.
More recent compilers take an increasingly active role in deciding whether a given function should actually be inlined or not. So the liberal placement of “inline” keywords may not just be excessive; it could also be irrelevant.
4.1.5. Locking
In May, 2006, the “Devicescape” networking stack was, with great fanfare, released under the GPL and made available for inclusion in the mainline kernel. This donation was welcome news; support for wireless networking in Linux was considered substandard at best, and the Devicescape stack offered the promise of fixing that situation. Yet, this code did not actually make it into the mainline until June, 2007 (2.6.22). What happened?
This code showed a number of signs of having been developed behind corporate doors. But one large problem in particular was that it was not designed to work on multiprocessor systems. Before this networking stack (now called mac80211) could be merged, a locking scheme needed to be retrofitted onto it.
Once upon a time, Linux kernel code could be developed without thinking about the concurrency issues presented by multiprocessor systems. Now, however, this document is being written on a dual-core laptop. Even on single-processor systems, work being done to improve responsiveness will raise the level of concurrency within the kernel. The days when kernel code could be written without thinking about locking are long past.
Any resource (data structures, hardware registers, etc.) which could be accessed concurrently by more than one thread must be protected by a lock. New code should be written with this requirement in mind; retrofitting locking after the fact is a rather more difficult task. Kernel developers should take the time to understand the available locking primitives well enough to pick the right tool for the job. Code which shows a lack of attention to concurrency will have a difficult path into the mainline.
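The shape of the rule can be sketched in userspace C, with a POSIX mutex standing in for the kernel’s spinlock_t or struct mutex (the data and function names are invented for illustration; kernel code would use spin_lock()/mutex_lock() instead):

```c
#include <assert.h>
#include <pthread.h>

/* The lock is declared next to the data it protects, and a comment
 * records that relationship for future readers. */
static pthread_mutex_t stats_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long packet_count;	/* protected by stats_lock */

void count_packet(void)
{
	pthread_mutex_lock(&stats_lock);
	packet_count++;			/* read-modify-write: not atomic */
	pthread_mutex_unlock(&stats_lock);
}

unsigned long read_packet_count(void)
{
	unsigned long n;

	pthread_mutex_lock(&stats_lock);
	n = packet_count;
	pthread_mutex_unlock(&stats_lock);
	return n;
}
```

Without the lock, two threads executing count_packet() concurrently could each read the same old value and lose an increment.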
4.1.6. Regressions
One final hazard worth mentioning is this: it can be tempting to make a change (which may bring big improvements) which causes something to break for existing users. This kind of change is called a “regression,” and regressions have become most unwelcome in the mainline kernel. With few exceptions, changes which cause regressions will be backed out if the regression cannot be fixed in a timely manner. Far better to avoid the regression in the first place.
It is often argued that a regression can be justified if it causes things to work for more people than it creates problems for. Why not make a change if it brings new functionality to ten systems for each one it breaks? The best answer to this question was expressed by Linus in July, 2007:
    So we don't fix bugs by introducing new problems. That way lies madness, and nobody ever knows if you actually make any real progress at all. Is it two steps forwards, one step back, or one step forward and two steps back?
(https://lwn.net/Articles/243460/).
An especially unwelcome type of regression is any sort of change to the user-space ABI. Once an interface has been exported to user space, it must be supported indefinitely. This fact makes the creation of user-space interfaces particularly challenging: since they cannot be changed in incompatible ways, they must be done right the first time. For this reason, a great deal of thought, clear documentation, and wide review for user-space interfaces is always required.
4.2. Code checking tools
For now, at least, the writing of error-free code remains an ideal that few of us can reach. What we can hope to do, though, is to catch and fix as many of those errors as possible before our code goes into the mainline kernel. To that end, the kernel developers have put together an impressive array of tools which can catch a wide variety of obscure problems in an automated way. Any problem caught by the computer is a problem which will not afflict a user later on, so it stands to reason that the automated tools should be used whenever possible.
The first step is simply to heed the warnings produced by the compiler. Contemporary versions of gcc can detect (and warn about) a large number of potential errors. Quite often, these warnings point to real problems. Code submitted for review should, as a rule, not produce any compiler warnings. When silencing warnings, take care to understand the real cause and try to avoid “fixes” which make the warning go away without addressing its cause.
Note that not all compiler warnings are enabled by default. Build the kernel with “make KCFLAGS=-W” to get the full set.
The kernel provides several configuration options which turn on debugging features; most of these are found in the “kernel hacking” submenu. Several of these options should be turned on for any kernel used for development or testing purposes. In particular, you should turn on:
- FRAME_WARN to get warnings for stack frames larger than a given amount. The output generated can be verbose, but one need not worry about warnings from other parts of the kernel.
- DEBUG_OBJECTS will add code to track the lifetime of various objects created by the kernel and warn when things are done out of order. If you are adding a subsystem which creates (and exports) complex objects of its own, consider adding support for the object debugging infrastructure.
- DEBUG_SLAB can find a variety of memory allocation and use errors; it should be used on most development kernels.
- DEBUG_SPINLOCK, DEBUG_ATOMIC_SLEEP, and DEBUG_MUTEXES will find a number of common locking errors.
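Taken together, a .config fragment for a development kernel might look like this (the FRAME_WARN threshold is an arbitrary example value; the architecture’s default may differ):

```
CONFIG_FRAME_WARN=1024
CONFIG_DEBUG_OBJECTS=y
CONFIG_DEBUG_SLAB=y
CONFIG_DEBUG_SPINLOCK=y
CONFIG_DEBUG_ATOMIC_SLEEP=y
CONFIG_DEBUG_MUTEXES=y
```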
There are quite a few other debugging options, some of which will be discussed below. Some of them have a significant performance impact and should not be used all of the time. But some time spent learning the available options will likely be paid back many times over in short order.
One of the heavier debugging tools is the locking checker, or “lockdep.” This tool will track the acquisition and release of every lock (spinlock or mutex) in the system, the order in which locks are acquired relative to each other, the current interrupt environment, and more. It can then ensure that locks are always acquired in the same order, that the same interrupt assumptions apply in all situations, and so on. In other words, lockdep can find a number of scenarios in which the system could, on rare occasion, deadlock. This kind of problem can be painful (for both developers and users) in a deployed system; lockdep allows them to be found in an automated manner ahead of time. Code with any sort of non-trivial locking should be run with lockdep enabled before being submitted for inclusion.
As a diligent kernel programmer, you will, beyond doubt, check the return status of any operation (such as a memory allocation) which can fail. The fact of the matter, though, is that the resulting failure recovery paths are, probably, completely untested. Untested code tends to be broken code; you could be much more confident of your code if all those error-handling paths had been exercised a few times.
The kernel provides a fault injection framework which can do exactly that, especially where memory allocations are involved. With fault injection enabled, a configurable percentage of memory allocations will be made to fail; these failures can be restricted to a specific range of code. Running with fault injection enabled allows the programmer to see how the code responds when things go badly. See Documentation/fault-injection/fault-injection.rst for more information on how to use this facility.
Other kinds of errors can be found with the “sparse” static analysis tool. With sparse, the programmer can be warned about confusion between user-space and kernel-space addresses, mixture of big-endian and small-endian quantities, the passing of integer values where a set of bit flags is expected, and so on. Sparse must be installed separately (it can be found at https://sparse.wiki.kernel.org/index.php/Main_Page if your distributor does not package it); it can then be run on the code by adding “C=1” to your make command.
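The mechanism behind the address-space warnings can be sketched as follows; this mirrors, in simplified form, how the kernel defines __user so that sparse sees the annotation while an ordinary compiler sees nothing (the function here is hypothetical):

```c
#include <assert.h>
#include <stddef.h>

/* Under sparse (__CHECKER__), __user marks a pointer as belonging to
 * another address space which must not be dereferenced directly;
 * under a normal compiler it expands to nothing. */
#ifdef __CHECKER__
#define __user __attribute__((noderef, address_space(1)))
#else
#define __user
#endif

/* sparse would complain if this function dereferenced buf directly;
 * real kernel code must go through copy_from_user() instead. */
long my_write(const char __user *buf, size_t count)
{
	/* (void)*buf;   <- sparse: "dereference of noderef expression" */
	return (long)count;
}
```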
The “Coccinelle” tool (http://coccinelle.lip6.fr/) is able to find a wide variety of potential coding problems; it can also propose fixes for those problems. Quite a few “semantic patches” for the kernel have been packaged under the scripts/coccinelle directory; running “make coccicheck” will run through those semantic patches and report on any problems found. See Documentation/dev-tools/coccinelle.rst for more information.
Other kinds of portability errors are best found by compiling your code for other architectures. If you do not happen to have an S/390 system or a Blackfin development board handy, you can still perform the compilation step. A large set of cross compilers for x86 systems can be found at
Some time spent installing and using these compilers will help avoid embarrassment later.
4.3. Documentation
Documentation has often been more the exception than the rule with kernel development. Even so, adequate documentation will help to ease the merging of new code into the kernel, make life easier for other developers, and will be helpful for your users. In many cases, the addition of documentation has become essentially mandatory.
The first piece of documentation for any patch is its associated changelog. Log entries should describe the problem being solved, the form of the solution, the people who worked on the patch, any relevant effects on performance, and anything else that might be needed to understand the patch. Be sure that the changelog says why the patch is worth applying; a surprising number of developers fail to provide that information.
Any code which adds a new user-space interface - including new sysfs or /proc files - should include documentation of that interface which enables user-space developers to know what they are working with. See Documentation/ABI/README for a description of how this documentation should be formatted and what information needs to be provided.
The file Documentation/admin-guide/kernel-parameters.rst describes all of the kernel’s boot-time parameters. Any patch which adds new parameters should add the appropriate entries to this file.
Any new configuration options must be accompanied by help text which clearly explains the options and when the user might want to select them.
Internal API information for many subsystems is documented by way of specially-formatted comments; these comments can be extracted and formatted in a number of ways by the “kernel-doc” script. If you are working within a subsystem which has kerneldoc comments, you should maintain them and add them, as appropriate, for externally-available functions. Even in areas which have not been so documented, there is no harm in adding kerneldoc comments for the future; indeed, this can be a useful activity for beginning kernel developers. The format of these comments, along with some information on how to create kerneldoc templates, can be found at Documentation/doc-guide/.
Anybody who reads through a significant amount of existing kernel code will note that, often, comments are most notable by their absence. Once again, the expectations for new code are higher than they were in the past; merging uncommented code will be harder. That said, there is little desire for verbosely-commented code. The code should, itself, be readable, with comments explaining the more subtle aspects.
Certain things should always be commented. Uses of memory barriers should be accompanied by a line explaining why the barrier is necessary. The locking rules for data structures generally need to be explained somewhere. Major data structures need comprehensive documentation in general. Non-obvious dependencies between separate bits of code should be pointed out. Anything which might tempt a code janitor to make an incorrect “cleanup” needs a comment saying why it is done the way it is. And so on.
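For reference, a kerneldoc comment has roughly this shape; the function itself is hypothetical, invented to carry the example:

```c
#include <assert.h>

/**
 * widget_count_active - count the active widgets in an array
 * @widgets: array of widget states, nonzero meaning active
 * @n: number of elements in @widgets
 *
 * Walks @widgets and counts the entries currently marked active.
 *
 * Return: the number of active widgets.
 */
int widget_count_active(const int *widgets, int n)
{
	int i, count = 0;

	for (i = 0; i < n; i++)
		if (widgets[i])
			count++;
	return count;
}
```

The opening "/**" marker is what tells the kernel-doc script to extract the comment; see Documentation/doc-guide/ for the full syntax.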
4.4. Internal API changes
The binary interface provided by the kernel to user space cannot be broken except under the most severe circumstances. The kernel’s internal programming interfaces, instead, are highly fluid and can be changed when the need arises. If you find yourself having to work around a kernel API, or simply not using a specific functionality because it does not meet your needs, that may be a sign that the API needs to change. As a kernel developer, you are empowered to make such changes.
There are, of course, some catches. API changes can be made, but they need to be well justified. So any patch making an internal API change should be accompanied by a description of what the change is and why it is necessary. This kind of change should also be broken out into a separate patch, rather than buried within a larger patch.
The other catch is that a developer who changes an internal API is generally charged with the task of fixing any code within the kernel tree which is broken by the change. For a widely-used function, this duty can lead to literally hundreds or thousands of changes - many of which are likely to conflict with work being done by other developers. Needless to say, this can be a large job, so it is best to be sure that the justification is solid. Note that the Coccinelle tool can help with wide-ranging API changes.
When making an incompatible API change, one should, whenever possible, ensure that code which has not been updated is caught by the compiler. This will help you to be sure that you have found all in-tree uses of that interface. It will also alert developers of out-of-tree code that there is a change that they need to respond to. Supporting out-of-tree code is not something that kernel developers need to be worried about, but we also do not have to make life harder for out-of-tree developers than it needs to be.
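For example (all names invented): if a registration function gains a required argument, renaming it at the same time turns every stale caller into a hard compile error rather than a runtime surprise:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

struct widget { int id; };
struct owner { const char *name; };

/* Old interface, now removed entirely:
 *
 *	int register_widget(struct widget *w);
 *
 * Because the old name no longer exists, any caller still using it
 * fails to compile, which flushes out every un-updated user, both
 * in-tree and out-of-tree. */
int register_widget_owned(struct widget *w, struct owner *owner)
{
	if (!w || !owner)
		return -EINVAL;
	return 0;
}
```

Changing only the semantics while keeping the old name and prototype would instead let stale callers compile cleanly and misbehave at run time.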