A hybrid kernel is an operating system kernel whose architecture attempts to combine aspects and benefits of microkernel and monolithic kernel architectures used in operating systems.[1][2]
The traditional kernel categories are monolithic kernels and microkernels (with nanokernels and exokernels seen as more extreme versions of microkernels). The "hybrid" category is controversial, due to the similarity of hybrid kernels and ordinary monolithic kernels; the term has been dismissed by Linus Torvalds as simple marketing.[3]
The idea behind a hybrid kernel is to have a kernel structure similar to that of a microkernel, but to implement that structure in the manner of a monolithic kernel. In contrast to a microkernel, all (or nearly all) operating system services in a hybrid kernel are still in kernel space. There are none of the reliability benefits of having services in user space, as with a microkernel. However, just as with an ordinary monolithic kernel, there is none of the performance overhead for message passing and context switching between kernel and user mode that normally comes with a microkernel.
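The trade-off can be illustrated with a deliberately simplified C sketch (every name below is hypothetical and stands in for real kernel machinery; this is not code from any actual kernel). In the hybrid/monolithic path a request is an ordinary function call within kernel space, while in the microkernel path it becomes a message exchange with a user-space server, which is where the context-switch cost comes from.

```c
#include <stddef.h>
#include <stdio.h>

struct msg { int op; int fd; void *buf; size_t len; long result; };
enum { OP_READ = 1 };

/* Stubs standing in for real kernel machinery (hypothetical). */
static long vfs_read(int fd, void *buf, size_t len) {
    (void)fd; (void)buf;
    return (long)len;                     /* pretend the whole request succeeded */
}
static void ipc_send(int port, struct msg *m)    { (void)port; m->result = (long)m->len; }
static void ipc_receive(int port, struct msg *m) { (void)port; (void)m; }

/* Hybrid/monolithic style: the file service lives in kernel space, so a
   system call resolves to a direct function call in the same address space. */
long sys_read_hybrid(int fd, void *buf, size_t len) {
    return vfs_read(fd, buf, len);
}

/* Microkernel style: the file service is a separate user-space server, so
   the kernel relays a message and context-switches to and from that server. */
long sys_read_micro(int fd, void *buf, size_t len) {
    struct msg m = { OP_READ, fd, buf, len, 0 };
    ipc_send(1, &m);        /* context switch to the file server */
    ipc_receive(1, &m);     /* context switch back to the caller */
    return m.result;
}

int main(void) {
    char buf[16];
    printf("hybrid path read %ld bytes, micro path read %ld bytes\n",
           sys_read_hybrid(0, buf, sizeof buf),
           sys_read_micro(0, buf, sizeof buf));
    return 0;
}
```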
One prominent example of a hybrid kernel is the Microsoft Windows NT kernel that powers all operating systems in the Windows NT family, up to and including Windows 11 and Windows Server 2022, and powers Windows Phone 8, Windows Phone 8.1, Windows 10 Mobile, and the Xbox One and Xbox Series consoles.
Windows NT was the first Windows operating system based on a hybrid kernel. The hybrid kernel was designed as a modified microkernel, influenced by the Mach microkernel developed by Richard Rashid at Carnegie Mellon University, but without meeting all of the criteria of a pure microkernel. NT-based Windows is classified as a hybrid kernel (or a macrokernel[4]) rather than a monolithic kernel because the emulation subsystems run in user-mode server processes, rather than in kernel mode as in a monolithic kernel, and because many of its design goals resemble those of Mach (in particular the separation of OS personalities from a general kernel design). Conversely, the reason NT is not a microkernel system is that most of the system components run in the same address space as the kernel, as would be the case with a monolithic design (in a traditional monolithic design, there would not be a microkernel per se, but the kernel would implement broadly similar functionality to NT's microkernel and kernel-mode subsystems).
The primary operating system personality on Windows is the Windows API, which is always present. The emulation subsystem which implements the Windows personality is called the Client/Server Runtime Subsystem (csrss.exe). On versions of NT prior to 4.0, this subsystem process also contained the window manager, graphics device interface, and graphics device drivers. For performance reasons, however, in version 4.0 and later, these modules (which are often implemented in user mode even on monolithic systems, especially those designed without internal graphics support) run as a kernel-mode subsystem.[4]
Applications that run on NT are written to one of the OS personalities (usually the Windows API), and not to the native NT API, for which documentation is not publicly available (with the exception of routines used in device driver development). An OS personality is implemented via a set of user-mode DLLs (see dynamic-link library), which are mapped into application processes' address spaces as required, together with an emulation subsystem server process (as described previously). Applications access system services by calling into the OS personality DLLs mapped into their address spaces, which in turn call into the NT run-time library (ntdll.dll), also mapped into the process address space. The NT run-time library services these requests by trapping into kernel mode to either call kernel-mode Executive routines or make local procedure calls (LPCs) to the appropriate user-mode subsystem server processes, which in turn use the NT API to communicate with application processes, the kernel-mode subsystems, and each other.[5]
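As a concrete illustration of this layering, the minimal C sketch below (assuming a Windows build environment; the file path is only a placeholder) calls the documented Windows API. The application never calls NT system services directly: the Win32 DLLs forward the request to NtCreateFile in ntdll.dll, which traps into kernel mode so the Executive can service it.

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* The application calls the Windows API (the Win32 personality),
       implemented by user-mode DLLs mapped into this process. */
    HANDLE h = CreateFileW(L"C:\\example.txt",      /* placeholder path */
                           GENERIC_READ, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFileW failed: %lu\n", GetLastError());
        return 1;
    }

    /* Under the hood, the Win32 DLLs called NtCreateFile in ntdll.dll,
       which trapped into kernel mode so the Executive could open the file;
       the handle returned here refers to a kernel object. */
    CloseHandle(h);
    return 0;
}
```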
XNU is the kernel that Apple Inc. acquired and developed for use in the macOS, iOS, watchOS, and tvOS operating systems and released as free and open-source software as part of the Darwin operating system. XNU is an acronym for X is Not Unix.[6]
Originally developed by NeXT for the NeXTSTEP operating system, XNU was a hybrid kernel combining version 2.5 of the Mach kernel with components from 4.3BSD and an object-oriented API for writing drivers called Driver Kit.
After Apple acquired NeXT, the Mach component was upgraded to OSFMK 7.3,[7] which is a microkernel.[8] Apple uses a heavily modified OSFMK 7.3 functioning as a hybrid kernel with parts of FreeBSD included.[7] (OSFMK 7.3 includes applicable code from the University of Utah Mach 4 kernel and from the many Mach 3.0 variants that forked off from the original Carnegie Mellon University Mach 3.0 kernel.) The BSD components were upgraded with code from the FreeBSD project, and the Driver Kit was replaced with a C++ API for writing drivers called I/O Kit.
Like some other modern kernels, XNU is a hybrid, containing features of both monolithic kernels and microkernels and attempting to make the best use of both technologies: the message-passing capability of microkernels enables greater modularity and lets larger portions of the OS benefit from protected memory, while the speed of monolithic kernels is retained for certain critical tasks.
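The Mach heritage remains visible from user space: programs on macOS can use Mach IPC primitives directly. A minimal sketch (macOS only, with error handling kept to a minimum) that asks the kernel for a Mach port, the endpoint type used for message passing in XNU:

```c
/* Build on macOS: cc mach_port_demo.c -o mach_port_demo */
#include <mach/mach.h>
#include <stdio.h>

int main(void)
{
    mach_port_t port = MACH_PORT_NULL;

    /* Ask the Mach layer inside XNU for a new port receive right;
       Mach ports are the endpoints of XNU's message-passing IPC. */
    kern_return_t kr = mach_port_allocate(mach_task_self(),
                                          MACH_PORT_RIGHT_RECEIVE, &port);
    if (kr != KERN_SUCCESS) {
        fprintf(stderr, "mach_port_allocate failed: %d\n", kr);
        return 1;
    }
    printf("allocated Mach port name: 0x%x\n", port);

    /* Drop the receive right again. */
    mach_port_mod_refs(mach_task_self(), port, MACH_PORT_RIGHT_RECEIVE, -1);
    return 0;
}
```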
As to the whole "hybrid kernel" thing - it's just marketing. It's "Oh, those microkernels had good PR, how can we try to get good PR for our working kernel? Oh, I know, let's use a cool name and try to imply that it has all the PR advantages that that other system has."

— Linus Torvalds[3]