| VMware ESXi | |
|---|---|
| Developer | VMware (Broadcom) |
| Initial release | March 23, 2001 (2001-03-23) |
| Platform | IA-32 (x86-32) (discontinued from 4.0 onwards),[3] x86-64, ARM[4] |
| Type | Native hypervisor (type 1) |
| License | Proprietary |
| Website | www |
VMware ESX (formerly named ESXi) and a different, historic VMware ESX[5] are enterprise-class, type-1 hypervisors developed by VMware, now a subsidiary of Broadcom, for deploying and serving virtual computers. As type-1 hypervisors, they are not software applications installed on an operating system (OS); instead, they include and integrate vital OS components, such as a kernel.[6]
Prior to ESXi 3.0 (released in 2008), there was only the original hypervisor, named ESX; for a while both hypervisor products existed, until the final 4.1 release of the historic ESX in 2010. ESXi replaced the Service Console (a rudimentary operating system) with a more closely integrated OS. ESX/ESXi is the primary component in the VMware Infrastructure software suite.[7] However, from version 9.0, VMware renamed ESXi to ESX, despite that name having already been used for their earlier hypervisor product.[8]
The name ESX originated as an abbreviation of Elastic Sky X.[9][10] In September 2004, the replacement for ESX was internally called VMvisor, but was later changed to ESXi (the "i" in ESXi stood for "integrated").[11][12]
ESX runs on bare metal (without running an operating system),[13] unlike other VMware products.[14] It includes its own kernel. In the historic VMware ESX, a Linux kernel was started first[15] and then used to load a variety of specialized virtualization components, including ESX, otherwise known as the vmkernel component.[16] The Linux kernel was the primary virtual machine; it was invoked by the service console. At normal run-time, the vmkernel ran on the bare computer, and the Linux-based service console ran as the first virtual machine. VMware dropped development of the historic ESX at version 4.1, and now exclusively uses ESXi (since renamed to ESX in 2025), which does not include a Linux kernel at all.[17]
The vmkernel is a microkernel[18] with three interfaces: hardware, guest systems, and the service console (Console OS).
The vmkernel handles CPU and memory directly, using scan-before-execution (SBE) to handle special or privileged CPU instructions[19][20] and the SRAT (system resource allocation table) to track allocated memory.[21]
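The actual vmkernel implementation is not public, but the idea behind scan-before-execution can be pictured as a pass that inspects a guest code block for privileged instructions before letting it run natively, diverting anything privileged to a supervised path. The following C sketch is illustrative only: the opcode values are real x86 encodings, but the handler functions are hypothetical stand-ins, and real decoding is far more involved.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical entry points for the two execution paths. */
extern void emulate_block(const uint8_t *code, size_t len);
extern void run_natively(const uint8_t *code, size_t len);

/* Recognize a few real privileged x86 opcodes: HLT, CLI, STI,
 * MOV to/from control registers, and the LGDT/LIDT/LMSW group. */
static bool is_privileged(const uint8_t *p, size_t remaining)
{
    switch (p[0]) {
    case 0xF4:                        /* HLT */
    case 0xFA:                        /* CLI */
    case 0xFB:                        /* STI */
        return true;
    case 0x0F:                        /* two-byte opcode escape */
        if (remaining < 2)
            return false;
        return p[1] == 0x01 ||        /* LGDT/LIDT/LMSW group */
               p[1] == 0x20 ||        /* MOV r, CRn */
               p[1] == 0x22;          /* MOV CRn, r */
    default:
        return false;
    }
}

/* Scan a guest code block before execution; anything privileged
 * forces the slow, emulated path for the whole block. */
void scan_before_execution(const uint8_t *code, size_t len)
{
    /* Real x86 decoding is variable-length; this byte-wise scan is a
     * deliberate over-approximation that may also flag operand bytes. */
    for (size_t i = 0; i < len; i++) {
        if (is_privileged(&code[i], len - i)) {
            emulate_block(code, len); /* slow, supervised path */
            return;
        }
    }
    run_natively(code, len);          /* fast, unmodified path */
}
```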
Access to other hardware (such as network or storage devices) takes place using modules. At least some of the modules derive from modules used in the Linux kernel. To access these modules, an additional module called vmklinux implements the Linux module interface. According to the README file, "This module contains the Linux emulation layer used by the vmkernel."[22]
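The shape of such an emulation layer can be sketched as follows. The vmklinux interface itself is internal to ESX, so every vmk_* name below is invented for illustration; the point is simply that a Linux driver keeps calling familiar Linux-looking entry points, which the shim forwards to the hypervisor's own driver layer.

```c
#include <stddef.h>

/* Minimal stand-in for the Linux-side type a driver expects. */
struct pci_driver {
    const char *name;
    int  (*probe)(void *pci_dev);
    void (*remove)(void *pci_dev);
};

/* Hypothetical vmkernel-native driver descriptor and registrar;
 * these names are invented, not the real vmklinux API. */
struct vmk_driver {
    const char *name;
    int  (*attach)(void *device);
    void (*detach)(void *device);
};
extern int vmk_register_driver(struct vmk_driver *drv);

/* The emulation layer: Linux-style API in, vmkernel API out. */
int pci_register_driver(struct pci_driver *ldrv)
{
    static struct vmk_driver shim;
    shim.name   = ldrv->name;
    shim.attach = ldrv->probe;   /* same signature in this toy model */
    shim.detach = ldrv->remove;
    return vmk_register_driver(&shim);
}
```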
The vmkernel uses device drivers[22] that mostly equate to those described in VMware's hardware compatibility list.[23] All these modules fall under the GPL. Programmers have adapted them to run with the vmkernel: VMware Inc. has changed the module-loading and some other minor things.[22]
In the historic ESX, the Service Console is a vestigial general-purpose operating system, most significantly used as a bootstrap for the VMware kernel (the vmkernel) and secondarily as a management interface. Both of these Console OS functions were deprecated when historic ESX development stopped at version 4.1, so the next version, 5.0, was ESXi only.[24] The Service Console was, for all intents and purposes, the operating system used to interact with VMware ESX and the virtual machines running on the server.


In the event of a hardware error, the vmkernel can catch a machine check exception.[25] This results in an error message displayed on a purple diagnostic screen, colloquially known as the purple screen of death (PSoD, cf. blue screen of death (BSoD)).
Upon displaying a purple diagnostic screen, the vmkernel writes debug information to the core dump partition. This information, together with the error codes displayed on the purple diagnostic screen, can be used by VMware support to determine the cause of the problem.
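The failure path described above amounts to: catch the machine check, surface the error codes on the diagnostic screen, and persist state for later analysis. The real vmkernel code is not public, so the sketch below uses entirely invented names to show only the sequence of steps.

```c
#include <stdint.h>

/* Hypothetical machine-check state; real banks carry more fields. */
struct mce_state {
    uint64_t status;    /* machine-check bank status / error codes */
    uint64_t address;   /* faulting physical address, if valid */
};

/* Invented stand-ins for the diagnostic facilities. */
extern void show_purple_screen(const struct mce_state *mce);
extern void write_core_dump(const char *partition,
                            const struct mce_state *mce);
extern void halt_system(void);

void machine_check_handler(const struct mce_state *mce)
{
    show_purple_screen(mce);            /* error codes for support */
    write_core_dump("coredump", mce);   /* debug info for analysis */
    halt_system();                      /* host cannot safely continue */
}
```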
VMware ESX used to be available in two main types: ESX (version 4.1 and earlier) and ESXi (version 3.5 onwards), but as of version 5, the original ESX has been discontinued in favor of ESXi (since renamed to ESX from version 9.0 onwards).
Historic ESX and ESXi before version 5.0 do not support Windows 8/Windows Server 2012. These Microsoft operating systems can only run on ESXi 5.x or later.[26]
VMware ESX (formerly ESXi) is a smaller-footprint version of ESX that does not include the ESX Service Console nor use a Linux kernel. Before Broadcom acquired VMware, it was available, without the need to purchase a vCenter license, as a free download from VMware, with some features disabled.[27][28][29]
ESXi stood for "ESX integrated".[30]
VMware ESX (formerly ESXi) originated as a compact version of the historic VMware ESX that allowed for a smaller, 32 MB disk footprint on the host. With a simple console used mostly for network configuration, and management through the remote VMware Infrastructure Client interface, this allows more resources to be dedicated to the guest environments.
Two variations of ESXi exist:
- VMware ESXi Installable
- VMware ESXi Embedded Edition
The same media can be used to install either of these variations depending on the size of the target media.[31] One can upgrade ESXi to VMware Infrastructure 3[32] or to VMware vSphere 4.0 ESXi.
ESXi was originally named VMware ESX Server ESXi edition; through several revisions, the product eventually became VMware ESXi 3. New editions then followed, from ESXi 3.5 up to ESXi 8, before being renamed to ESX with version 9.
Christoph Hellwig, a Linux kernel developer, sued VMware on March 5, 2015, alleging that VMware had misappropriated portions of the Linux kernel.[33][34] Following a dismissal by the court in 2016, Hellwig announced he would file an appeal.[35]
The appeal was decided in February 2019 and again dismissed by the German court, on the basis of not meeting the "procedural requirements for the burden of proof of the plaintiff".[36]
In the last stage of the lawsuit, in March 2019, the Hamburg Higher Regional Court also rejected the claim on procedural grounds. Following this, VMware officially announced that it would remove the code in question.[37] Hellwig subsequently withdrew his case and refrained from further legal action.[38]
The following products operate in conjunction with ESX:
Network connectivity between ESX hosts and the VMs running on them relies on virtual NICs (inside the VM) and virtual switches. The latter exist in two versions: the 'standard' vSwitch, allowing several VMs on a single ESX host to share a physical NIC, and the 'distributed vSwitch', where the vSwitches on different ESX hosts together form one logical switch. Cisco offers, in their Cisco Nexus product line, the Nexus 1000v, an advanced version of the standard distributed vSwitch. A Nexus 1000v consists of two parts: a supervisor module (VSM) and, on each ESX host, a virtual Ethernet module (VEM). The VSM runs as a virtual appliance within the ESX cluster or on dedicated hardware (Nexus 1010 series), and the VEM runs as a module on each host and replaces a standard dvS (distributed virtual switch) from VMware.
Configuration of the switch is done on the VSM using the standard NX-OS CLI. It offers capabilities to create standard port-profiles, which can then be assigned to virtual machines using vCenter.
There are several differences between the standard dvS and the N1000v; one is that the Cisco switch generally has full support for network technologies such as LACP link aggregation, while the VMware switch supports newer features such as routing based on physical NIC load. However, the main difference lies in the architecture: the Nexus 1000v works in the same way as a physical Ethernet switch, while the dvS relies on information from ESX. This has consequences, for example, in scalability, where the limit for a N1000v is 2048 virtual ports against 60000 for a dvS.
The Nexus 1000v is developed in co-operation between Cisco and VMware and uses the API of the dvS.[43]
Because VMware ESX is a leader in the server-virtualization market,[44] software and hardware vendors offer a range of tools to integrate their products or services with ESX. Examples are the products from Veeam Software, with backup and management applications[45] and a plugin to monitor and manage ESX using HP OpenView,[46] and Quest Software, with a range of management and backup applications; most major backup-solution providers have plugins or modules for ESX. Using Microsoft System Center Operations Manager (SCOM) 2007/2012 with a Bridgeways ESX management pack gives the user a real-time ESX datacenter health view.
Hardware vendors such as Hewlett Packard Enterprise and Dell include tools to support the use of ESX(i) on their hardware platforms. An example is the ESX module for Dell's OpenManage management platform.[47]
VMware has added a Web Client[48] since v5, but it works on vCenter only and does not contain all features.[49]
As of September 2020, these are the known limitations of VMware ESXi 7.0 U1.
Some maximums in ESXi Server 7.0 may influence the design of data centers:[50][51]
In terms of performance, virtualization imposes a cost in the additional work the CPU has to perform to virtualize the underlying hardware. Instructions that perform this extra work, and other activities that require virtualization, tend to lie in operating system calls. In an unmodified operating system, OS calls introduce the greatest portion of virtualization "overhead".[citation needed]
Paravirtualization or other virtualization techniques may help with these issues. VMware developed the Virtual Machine Interface for this purpose, and selected operating systems currently[update] support this. A comparison between full virtualization and paravirtualization for the ESX Server[52] shows that in some cases paravirtualization is much faster.
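The contrast can be sketched roughly as follows: under full virtualization, an unmodified guest executes a privileged instruction and the hypervisor must trap and emulate it, whereas a paravirtualized guest kernel is modified to call the hypervisor explicitly. The interface below is invented for illustration and is not the actual Virtual Machine Interface specification.

```c
#include <stdint.h>

/* Full virtualization: the guest runs the ordinary privileged
 * instruction; in a trap-and-emulate model, the CPU faults into the
 * hypervisor, which decodes and emulates it. Cost: a full
 * trap/decode/emulate round trip per privileged instruction. */
void guest_disable_interrupts_fullvirt(void)
{
    __asm__ volatile ("cli");   /* faults into the hypervisor */
}

/* Paravirtualization: the guest kernel calls the hypervisor through
 * a known entry point instead of trapping. Both names below are
 * hypothetical, not part of any real interface. */
extern void hypercall(uint64_t op, uint64_t arg);
#define HC_SET_INTERRUPT_MASK 1

void guest_disable_interrupts_paravirt(void)
{
    hypercall(HC_SET_INTERRUPT_MASK, 0);   /* one explicit call */
}
```

The saving comes from replacing many implicit faults with fewer explicit, potentially batched, hypervisor calls, which is consistent with the cited result that paravirtualization can be much faster for OS-call-heavy workloads.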
When using the advanced and extended network capabilities provided by the Cisco Nexus 1000v distributed virtual switch, the following network-related limitations apply:[43]
Regardless of the type of virtual SCSI adapter used, there are these limitations:[53]