NVM Express (NVMe) or Non-Volatile Memory Host Controller Interface Specification (NVMHCIS) is an open, logical-device interface specification for accessing a computer's non-volatile storage media, usually attached via the PCI Express bus. The initial NVM stands for non-volatile memory, which is often NAND flash memory that comes in several physical form factors, including solid-state drives (SSDs), PCIe add-in cards, and M.2 cards, the successor to mSATA cards. NVM Express, as a logical-device interface, has been designed to capitalize on the low latency and internal parallelism of solid-state storage devices.[2]
Architecturally, the logic for NVMe resides in and is executed by the NVMe controller chip, which is physically co-located with the storage media, usually an SSD. Version changes to NVMe, e.g., from 1.3 to 1.4, are incorporated within the storage media and do not affect PCIe-compatible components such as motherboards and CPUs.[3]
By its design, NVM Express allows host hardware and software to fully exploit the levels of parallelism possible in modern SSDs. As a result, NVM Express reduces I/O overhead and brings various performance improvements relative to previous logical-device interfaces, including multiple long command queues and reduced latency. Previous interface protocols such as AHCI were developed for use with far slower hard disk drives (HDDs), where a very lengthy delay (relative to CPU operations) exists between a request and data transfer, where data rates are much slower than RAM speeds, and where disk rotation and seek time give rise to further optimization requirements.
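To make the queueing model concrete, the following C sketch models NVMe's paired submission and completion queues as simple ring buffers. It is a minimal illustration, not specification text: names such as sq_tail and the QUEUE_DEPTH value are invented for the example, and the doorbell write that notifies real hardware is reduced to a comment.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define QUEUE_DEPTH 64  /* illustrative; NVMe queues can hold up to 65,536 entries */

struct sqe { uint8_t opcode; uint16_t cid; uint32_t nsid; uint8_t rest[56]; }; /* simplified 64-byte submission entry */
struct cqe { uint16_t cid; uint16_t status; uint8_t rest[12]; };               /* simplified 16-byte completion entry */

struct queue_pair {
    struct sqe sq[QUEUE_DEPTH];  /* submission ring: host produces, device consumes */
    struct cqe cq[QUEUE_DEPTH];  /* completion ring: device produces, host consumes */
    uint32_t sq_tail;            /* next free submission slot */
    uint32_t cq_head;            /* next completion to reap */
};

/* Queue a command: fill the entry at the tail, advance the tail, and (on real
   hardware) write the new tail to the queue's doorbell register. */
static void submit(struct queue_pair *qp, uint8_t opcode, uint16_t cid)
{
    struct sqe *e = &qp->sq[qp->sq_tail];
    memset(e, 0, sizeof *e);
    e->opcode = opcode;
    e->cid = cid;
    qp->sq_tail = (qp->sq_tail + 1) % QUEUE_DEPTH;
    /* doorbell write would go here */
}

int main(void)
{
    static struct queue_pair per_core_qp;  /* each CPU core can own its own pair */
    submit(&per_core_qp, 0x02, 1);  /* 0x02 = NVM Read opcode */
    submit(&per_core_qp, 0x01, 2);  /* 0x01 = NVM Write opcode */
    printf("commands queued: %u\n", per_core_qp.sq_tail);
    return 0;
}

Because each core can be given its own queue pair, commands can be issued without the single lock-protected queue that AHCI requires.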
NVM Express devices are chiefly available in the form of standard-sized PCI Express expansion cards[4] and as 2.5-inch form-factor devices that provide a four-lane PCI Express interface through the U.2 connector (formerly known as SFF-8639).[5][6] Storage devices using U.2 and the M.2 specification that support NVM Express as the logical-device interface are a popular use case for NVMe and have become a dominant form of solid-state storage for servers, desktops, and laptops alike.
Specifications for NVMe released to date include:[7]
1.0e (January 2013)
1.1b (July 2014), which adds standardized Command Sets for better compatibility across different NVMe devices, a Management Interface that provides standardized tools for managing NVMe devices, simplifying administration, and Transport Specifications that define how NVMe commands are transported over various physical interfaces, enhancing interoperability.[8]
1.2 (November 2014)
1.2a (October 2015)
1.2b (June 2016)
1.2.1 (June 2016), which introduces the following new features over version 1.1b: Multi-Queue support for multiple I/O queues, enhancing data throughput and performance; Namespace Management, allowing dynamic creation, deletion, and resizing of namespaces for greater flexibility; and Endurance Management, which monitors and manages SSD wear levels, optimizing performance and extending drive life.[9]
1.3 (May 2017)
1.3a (October 2017)
1.3b (May 2018)
1.3c (May 2018)
1.3d (March 2019), which since version 1.2.1 added Namespace Sharing, allowing multiple hosts to access a single namespace and facilitating shared storage environments; Namespace Reservation, providing mechanisms for hosts to reserve namespaces, preventing conflicts and ensuring data integrity; and Namespace Priority, which sets priority levels for different namespaces, optimizing performance for critical workloads.[10][11]
1.4 (June 2019)
1.4a (March 2020)
1.4b (September 2020)
1.4c (June 2021), which has the following new features compared to 1.3d: IO Determinism, ensuring consistent latency and performance by isolating workloads; Namespace Write Protect, preventing data corruption and unauthorized modifications; Persistent Event Log, which stores event logs in non-volatile memory, aiding diagnostics and troubleshooting; and the Verify command, which checks the integrity of stored data.[12][13]
2.0d (January 2024),[15] which, compared to 1.4c, introduces Zoned Namespaces (ZNS), organizing data into zones for efficient write operations, reducing write amplification and improving SSD longevity; Key Value (KV), for efficient storage and retrieval of key-value pairs directly on the NVMe device, bypassing traditional file systems; and Endurance Group Management, which manages groups of SSDs based on their endurance, optimizing usage and extending lifespan.[16][15][17]
2.1 (August 2024),[1] which introduces Live Migration, maintaining service availability while workloads are migrated; Key Per I/O, applying encryption keys at a per-operation level; NVMe-MI High Availability Out-of-Band Management, for managing NVMe devices outside of regular data paths; and NVMe Network Boot / UEFI, for booting from NVMe devices over a network.[18]
Intel SSD 750 series, an SSD that uses NVM Express, in the form of a PCI Express 3.0 ×4 expansion card (front and rear views)
Historically, most SSDs used buses such as SATA,[19] SAS,[20][21] or Fibre Channel for interfacing with the rest of a computer system. Since SSDs became available in mass markets, SATA has become the most typical way of connecting SSDs in personal computers; however, SATA was designed primarily for interfacing with mechanical hard disk drives (HDDs), and it became increasingly inadequate for SSDs, which improved in speed over time.[22] For example, within about five years of mainstream mass-market adoption (2005–2010), many SSDs were already held back by the comparatively slow data rates available for hard drives; unlike hard disk drives, some SSDs are limited by the maximum throughput of SATA.
High-end SSDs had been made using the PCI Express bus before NVMe, but they used non-standard interfaces, such as a SAS-to-PCIe bridge[23] or emulation of a hardware RAID controller.[24] By standardizing the interface of SSDs, operating systems only need one common device driver to work with all SSDs adhering to the specification. It also means that each SSD manufacturer does not have to design specific interface drivers. This is similar to how USB mass storage devices are built to follow the USB mass-storage device class specification and work with all computers, with no per-device drivers needed.[25]
The first details of a new standard for accessing non-volatile memory emerged at the Intel Developer Forum 2007, when NVMHCI was shown as the host-side protocol of a proposed architectural design that had the Open NAND Flash Interface Working Group (ONFI) on the memory (flash) chip side.[28] An NVMHCI working group led by Intel was formed that year. The NVMHCI 1.0 specification was completed in April 2008 and released on Intel's web site.[29][30][31]
Technical work on NVMe began in the second half of 2009.[32] The NVMe specifications were developed by the NVM Express Workgroup, which consists of more than 90 companies; Amber Huffman of Intel was the working group's chair. Version 1.0 of the specification was released on 1 March 2011,[33] while version 1.1 of the specification was released on 11 October 2012.[34] Major features added in version 1.1 are multi-path I/O (with namespace sharing) and arbitrary-length scatter-gather I/O. At the time, it was expected that future revisions would significantly enhance namespace management.[32] Because of its feature focus, NVMe 1.1 was initially called "Enterprise NVMHCI".[35] An update for the base NVMe specification, called version 1.0e, was released in January 2013.[36] In June 2011, a Promoter Group led by seven companies was formed.
The first commercially available NVMe chipsets were released by Integrated Device Technology (89HF16P04AG3 and 89HF32P08AG3) in August 2012.[37][38] The first NVMe drive, Samsung's XS1715 enterprise drive, was announced in July 2013; according to Samsung, this drive supported read speeds of 3 GB/s, six times faster than their previous enterprise offerings.[39] The LSI SandForce SF3700 controller family, released in November 2013, also supports NVMe.[40][41] A Kingston HyperX "prosumer" product using this controller was showcased at the Consumer Electronics Show 2014 and promised similar performance.[42][43] In June 2014, Intel announced their first NVM Express products, the Intel SSD data center family that interfaces with the host through the PCI Express bus, which includes the DC P3700 series, the DC P3600 series, and the DC P3500 series.[44] As of November 2014, NVMe drives are commercially available.
In March 2014, the group incorporated to become NVM Express, Inc., which as of November 2014 consists of more than 65 companies from across the industry. NVM Express specifications are owned and maintained by NVM Express, Inc., which also promotes industry awareness of NVM Express as an industry-wide standard. NVM Express, Inc. is directed by a thirteen-member board of directors selected from the Promoter Group, which includes Cisco, Dell, EMC, HGST, Intel, Micron, Microsoft, NetApp, Oracle, PMC, Samsung, SanDisk and Seagate.[45]
In September 2016, the CompactFlash Association announced that it would be releasing a new memory card specification, CFexpress, which uses NVMe.[citation needed]
The NVMe Host Memory Buffer (HMB) feature was added in version 1.2 of the NVMe specification.[46] HMB allows SSDs to utilize the host's DRAM, which can improve I/O performance for DRAM-less SSDs.[47] For example, the SSD controller can use the HMB to cache the FTL table, which can improve I/O performance.[48] NVMe 2.0 added the optional Zoned Namespaces (ZNS) and Key-Value (KV) features, and support for rotating media such as hard drives. ZNS and KV allow data to be mapped directly to its physical location in flash memory, providing direct access to data on an SSD.[49] ZNS and KV can also decrease write amplification of flash media.
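As a rough illustration of the zoned model, the following C sketch captures the core ZNS constraint: writes within a zone must land exactly at the zone's write pointer, so the host writes sequentially and reclaims space by resetting a whole zone. This is a simplified model with invented names (zone_append, zone_reset), not the specification's interface.

#include <stdint.h>
#include <stdio.h>

struct zone {
    uint64_t start_lba;  /* first logical block of the zone */
    uint64_t capacity;   /* writable blocks in the zone */
    uint64_t write_ptr;  /* offset of the next writable block */
};

/* Append n blocks at lba; rejected unless the write is exactly sequential. */
static int zone_append(struct zone *z, uint64_t lba, uint64_t n)
{
    if (lba != z->start_lba + z->write_ptr)
        return -1;       /* out-of-order write: not allowed in a zone */
    if (z->write_ptr + n > z->capacity)
        return -1;       /* zone is full */
    z->write_ptr += n;   /* actual data transfer omitted */
    return 0;
}

/* Resetting the zone is the only way to make its blocks writable again. */
static void zone_reset(struct zone *z) { z->write_ptr = 0; }

int main(void)
{
    struct zone z = { .start_lba = 0, .capacity = 1024, .write_ptr = 0 };
    printf("sequential append: %d\n", zone_append(&z, 0, 16));  /* succeeds */
    printf("random write: %d\n", zone_append(&z, 512, 16));     /* rejected */
    zone_reset(&z);
    return 0;
}

Because the device never has to accept an overwrite in the middle of a zone, it can avoid the read-modify-write and garbage-collection cycles that cause write amplification on conventional SSDs.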
Almost all early NVMe solid-state drives were HHHL (half height, half length) or FHHL (full height, half length) add-in cards (AICs) with a PCIe 2.0 or 3.0 interface. An HHHL NVMe solid-state drive card is easy to insert into a PCIe slot of a server.
SATA Express allows the use of two PCI Express 2.0 or 3.0 lanes and two SATA 3.0 (6 Gbit/s) ports through the same host-side SATA Express connector (but not both at the same time). SATA Express supports NVMe as the logical device interface for attached PCI Express storage devices. It is electrically compatible with MultiLink SAS, so a backplane can support both at the same time.
U.2, formerly known as SFF-8639, uses the same physical port as SATA Express but allows up to four PCI Express lanes. Available servers can combine up to 48 U.2 NVMe solid-state drives.[50]
U.3 (SFF-TA-1001) builds on the U.2 specification and uses the same SFF-8639 connector. A single "tri-mode" (PCIe/SATA/SAS) backplane receptacle can handle all three types of connections, with the controller automatically detecting the type of connection used; under U.2, by contrast, separate controllers are needed for SATA/SAS and NVMe. U.3 devices are required to be backwards-compatible with U.2 hosts, but U.2 drives are not compatible with U.3 hosts.[51][52]
M.2, formerly known as the Next Generation Form Factor (NGFF), is a common form factor for NVMe solid-state drives. Interfaces provided through the M.2 connector include PCI Express 3.0 or higher (up to four lanes).
NVM Express over Fabrics (NVMe-oF) is the concept of using a transport protocol over a network to connect remote NVMe devices, in contrast to regular NVMe, where physical NVMe devices are connected to a PCIe bus either directly or over a PCIe switch. In August 2017, a standard for using NVMe over Fibre Channel (FC) was submitted by the standards organization International Committee for Information Technology Standards (INCITS), and this combination is often referred to as FC-NVMe or sometimes NVMe/FC.[53]
As of May 2021, supported NVMe transport protocols are:
RDMA (over InfiniBand, RoCE, or iWARP)
Fibre Channel (FC-NVMe)
TCP (NVMe/TCP)
The Advanced Host Controller Interface (AHCI) has the benefit of wide software compatibility, but has the downside of not delivering optimal performance when used with SSDs connected via the PCI Express bus. As a logical-device interface, AHCI was developed when the purpose of a host bus adapter (HBA) in a system was to connect the CPU/memory subsystem with a much slower storage subsystem based on rotating magnetic media. As a result, AHCI introduces certain inefficiencies when used with SSD devices, which behave much more like RAM than like spinning media.[71]
The NVMe device interface has been designed from the ground up, capitalizing on the lower latency and parallelism of PCI Express SSDs, and complementing the parallelism of contemporary CPUs, platforms and applications. At a high level, the basic advantages of NVMe over AHCI relate to its ability to exploit parallelism in host hardware and software, manifested by the differences in command queue depths, efficiency of interrupt processing, the number of uncacheable register accesses, etc., resulting in various performance improvements.[71][72]: 17–18
The table below summarizes high-level differences between the NVMe and AHCI logical-device interfaces.
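Maximum queue depth: AHCI offers one command queue with up to 32 commands per queue; NVMe allows up to 65,535 queues with up to 65,536 commands each.
Uncacheable register accesses (2,000 cycles each): AHCI needs six per non-queued command and nine per queued command; NVMe needs two per command.
MSI-X and interrupt steering: AHCI provides a single interrupt with no steering; NVMe supports up to 2,048 MSI-X interrupts.
Parallelism and multiple threads: AHCI requires a synchronization lock to issue a command; NVMe requires no locking.
Efficiency for 4 KB commands: AHCI must fetch command parameters in two serialized host DRAM reads; NVMe fetches them in a single 64-byte read.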
Intel sponsored an NVM Express driver for FreeBSD's head and stable/9 branches.[78][79] The nvd(4) and nvme(4) drivers have been included in the GENERIC kernel configuration by default since FreeBSD version 10.2, released in 2015.[80]
With the release of the iPhone 6S and 6S Plus, Apple introduced the first mobile deployment of NVMe over PCIe in smartphones.[85] Apple followed with the first-generation iPad Pro and first-generation iPhone SE, which also use NVMe over PCIe.[86]
Intel published an NVM Express driver for Linux on 3 March 2011,[87][88][89] which was merged into the Linux kernel mainline on 18 January 2012 and released as part of version 3.3 of the Linux kernel on 19 March 2012.[90] The Linux kernel has supported the NVMe Host Memory Buffer[91] since version 4.13.1,[92] with a default maximum size of 128 MB.[93] The Linux kernel has supported NVMe Zoned Namespaces since version 5.9.
Apple introduced software support for NVM Express in OS X Yosemite 10.10.3. The NVMe hardware interface was introduced in the 2016 MacBook and MacBook Pro.[94]
Development work to support NVMe in OpenBSD was started in April 2014 by a senior developer formerly responsible for USB 2.0 and AHCI support.[96] Support for NVMe was enabled in the OpenBSD 6.0 release.[97]
Arca Noae provides an NVMe driver for ArcaOS as of April 2021. The driver requires advanced interrupts as provided by the ACPI PSD running in advanced interrupt mode (mode 2), and thus also requires the SMP kernel.[98]
Intel has provided an NVMe driver for VMware,[100] which is included in vSphere 6.0 and later builds, supporting various NVMe devices.[101] As of vSphere 6 update 1, VMware's VSAN software-defined storage subsystem also supports NVMe devices.[102]
Microsoft added native support for NVMe to Windows 8.1 and Windows Server 2012 R2.[72][103] Native drivers for Windows 7 and Windows Server 2008 R2 were added in updates.[104] Many vendors have released their own Windows drivers for their devices as well. There are also manually customized installer files available to install a specific vendor's driver on any NVMe device, such as using a Samsung NVMe driver with a non-Samsung NVMe device, which may be needed for additional features, performance, and stability.[105]
Support for NVMe HMB was added in the Windows 10 Anniversary Update (version 1607) in 2016.[46] In Microsoft Windows from Windows 10 1607 to Windows 11 23H2, the maximum HMB size is 64 MB. Windows 11 24H2 raises the maximum HMB size to 1/64 of system RAM; for example, a system with 16 GB of RAM allows an HMB of up to 256 MB.[106]
Support for NVMe ZNS and KV was added in Windows 10 version 21H2 and Windows 11 in 2021.[107] The OpenFabrics Alliance maintains an open-source NVMe Windows driver for Windows 7/8/8.1 and Windows Server 2008 R2/2012/2012 R2, developed from the baseline code submitted by several promoter companies in the NVMe workgroup, specifically IDT, Intel, and LSI.[108] The current release is 1.5, from December 2016.[109]
^"NVM Express". NVM Express, Inc.Archived from the original on 2019-12-05. Retrieved2017-01-24.NVMe is designed from the ground up to deliver high bandwidth and low latency storage access for current and future NVM technologies.
^ Werner Fischer; Georg Schönberger (2015-06-01). "Linux Storage Stack Diagram". Thomas-Krenn.AG. Archived from the original on 2019-06-29. Retrieved 2015-06-08.