22. Intel Trust Domain Extensions (TDX)

Intel’s Trust Domain Extensions (TDX) protect confidential guest VMs from the host and physical attacks by isolating the guest register state and by encrypting the guest memory. In TDX, a special module running in a special mode sits between the host and the guest and manages the guest/host separation.

22.1. TDX Host Kernel Support

TDX introduces a new CPU mode called Secure Arbitration Mode (SEAM) and a new isolated range pointed to by the SEAM Range Register (SEAMRR). A CPU-attested software module called ‘the TDX module’ runs inside the new isolated range to provide the functionality to manage and run protected VMs.

TDX also leverages Intel Multi-Key Total Memory Encryption (MKTME) to provide crypto-protection to the VMs. TDX reserves part of MKTME KeyIDs as TDX private KeyIDs, which are only accessible within the SEAM mode. BIOS is responsible for partitioning legacy MKTME KeyIDs and TDX KeyIDs.

Before the TDX module can be used to create and run protected VMs, it must be loaded into the isolated range and properly initialized. The TDX architecture doesn’t require the BIOS to load the TDX module, but the kernel assumes it is loaded by the BIOS.

22.1.1. TDX boot-time detection

The kernel detects TDX by detecting TDX private KeyIDs during kernel boot. The dmesg line below shows TDX being enabled by the BIOS:

[..] virt/tdx: BIOS enabled: private KeyID range: [16, 64)

22.1.2. TDX module initialization

The kernel talks to the TDX module via the new SEAMCALL instruction. The TDX module implements SEAMCALL leaf functions to allow the kernel to initialize it.

If the TDX module isn’t loaded, the SEAMCALL instruction fails with a special error. In this case the kernel fails the module initialization and reports the module isn’t loaded:

[..] virt/tdx: module not loaded

Initializing the TDX module consumes roughly 1/256th of the system RAM size to use as ‘metadata’ for the TDX memory. It also takes additional CPU time to initialize that metadata along with the TDX module itself. Neither is trivial, so the kernel initializes the TDX module at runtime, on demand.

Besides initializing the TDX module, a per-cpu initialization SEAMCALL must be done on one cpu before any other SEAMCALLs can be made on that cpu.

The kernel provides two functions, tdx_enable() and tdx_cpu_enable(), to allow the user of TDX to enable the TDX module and to enable TDX on the local cpu, respectively.

Making a SEAMCALL requires that VMXON has been done on that CPU. Currently only KVM implements VMXON. For now, neither tdx_enable() nor tdx_cpu_enable() does VMXON internally (it is not trivial); both depend on the caller to guarantee that.

To enable TDX, the caller of TDX should: 1) temporarily disable CPU hotplug; 2) do VMXON and tdx_cpu_enable() on all online cpus; 3) call tdx_enable(). For example:

        cpus_read_lock();
        on_each_cpu(vmxon_and_tdx_cpu_enable, NULL, 1);
        ret = tdx_enable();
        cpus_read_unlock();
        if (ret)
                goto no_tdx;
        // TDX is ready to use

And the caller of TDX must guarantee that tdx_cpu_enable() has been done successfully on a cpu before it runs any other SEAMCALL on that cpu. A typical usage is to do both VMXON and tdx_cpu_enable() in the CPU hotplug online callback, and to refuse to online the cpu if tdx_cpu_enable() fails.
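
Below is a minimal sketch of such an online callback. do_vmxon()/do_vmxoff() are hypothetical stand-ins for the caller’s own VMX handling (e.g. KVM’s) and are not kernel APIs; tdx_cpu_enable() is the only TDX API used:

        /*
         * Sketch of a CPU hotplug online callback for a TDX user.
         * do_vmxon()/do_vmxoff() are hypothetical, not kernel APIs.
         * Registered via e.g. cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, ...).
         */
        static int tdx_user_cpu_online(unsigned int cpu)
        {
                int ret;

                ret = do_vmxon();               /* caller-provided VMXON */
                if (ret)
                        return ret;

                ret = tdx_cpu_enable();         /* per-cpu init SEAMCALL */
                if (ret)
                        do_vmxoff();            /* refuse to online this cpu */

                return ret;
        }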

Users can consult dmesg to see whether the TDX module has been initialized.

If the TDX module is initialized successfully, dmesg shows something like below:

[..] virt/tdx: 262668 KBs allocated for PAMT
[..] virt/tdx: module initialized

If the TDX module failed to initialize, dmesg shows the failure:

[..] virt/tdx: module initialization failed ...

22.1.3. TDX Interaction with Other Kernel Components

22.1.3.1. TDX Memory Policy

TDX reports a list of “Convertible Memory Regions” (CMRs) to tell the kernel which memory is TDX compatible. The kernel needs to build a list of memory regions (out of CMRs) as “TDX-usable” memory and pass those regions to the TDX module. Once this is done, those “TDX-usable” memory regions are fixed during the module’s lifetime.

To keep things simple, currently the kernel simply guarantees that all pages in the page allocator are TDX memory. Specifically, the kernel uses all system memory in the core-mm “at the time of TDX module initialization” as TDX memory, and in the meantime, refuses to online any non-TDX memory in memory hotplug.
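
The hotplug part of this policy can be enforced through the memory-hotplug notifier chain. The sketch below is illustrative only; range_is_tdx_memory() is a hypothetical helper standing in for the kernel’s internal check against the “TDX-usable” memory list:

        /*
         * Sketch: refuse to online memory that is not "TDX-usable".
         * range_is_tdx_memory() is hypothetical.
         */
        static int tdx_memory_notifier(struct notifier_block *nb,
                                       unsigned long action, void *v)
        {
                struct memory_notify *mn = v;

                if (action != MEM_GOING_ONLINE)
                        return NOTIFY_OK;

                return range_is_tdx_memory(mn->start_pfn, mn->nr_pages) ?
                       NOTIFY_OK : NOTIFY_BAD;
        }

        static struct notifier_block tdx_memory_nb = {
                .notifier_call = tdx_memory_notifier,
        };

        /* register_memory_notifier(&tdx_memory_nb) during TDX boot-time setup. */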

22.1.3.2. Physical Memory Hotplug

Note TDX assumes convertible memory is always physically present during the machine’s runtime. A non-buggy BIOS should never support hot-removal of any convertible memory. This implementation doesn’t handle ACPI memory removal but depends on the BIOS to behave correctly.

22.1.3.3. CPU Hotplug

The TDX module requires that the per-cpu initialization SEAMCALL be done on one cpu before any other SEAMCALLs can be made on that cpu. The kernel provides tdx_cpu_enable() to let the user of TDX do that when it wants to use a new cpu for a TDX task.

TDX doesn’t support physical (ACPI) CPU hotplug. During machine boot, TDX verifies all boot-time present logical CPUs are TDX compatible before enabling TDX. A non-buggy BIOS should never support hot-add/removal of physical CPUs. Currently the kernel doesn’t handle physical CPU hotplug, but depends on the BIOS to behave correctly.

Note TDX works with CPU logical online/offline, thus the kernel still allows offlining a logical CPU and onlining it again.

22.1.3.4. Erratum

The first few generations of TDX hardware have an erratum. A partial write to a TDX private memory cacheline will silently “poison” the line. Subsequent reads will consume the poison and generate a machine check.

A partial write is a memory write where a write transaction of less than a cacheline lands at the memory controller. The CPU does these via non-temporal write instructions (like MOVNTI), or through UC/WC memory mappings. Devices can also do partial writes via DMA.

Theoretically, a kernel bug could do a partial write to TDX private memory and trigger an unexpected machine check. What’s more, the machine check code will present these as “Hardware error” when they were, in fact, a software-triggered issue. But in the end, this issue is hard to trigger.

If the platform has such an erratum, the kernel prints an additional message in the machine check handler to tell the user that the machine check may be caused by a kernel bug on TDX private memory.

22.1.3.5. Kexec

Currently kexec doesn’t work on the TDX platforms with the aforementioned erratum. It fails when loading the kexec kernel image. Otherwise it works normally.

22.1.3.6. Interaction with S3 and deeper states

TDX cannot survive S3 and deeper states. The hardware resets and disables TDX completely when the platform goes to S3 and deeper. Both TDX guests and the TDX module get destroyed permanently.

The kernel uses S3 for suspend-to-ram, and uses S4 and deeper states for hibernation. Currently, for simplicity, the kernel chooses to make TDX mutually exclusive with S3 and hibernation.

The kernel disables TDX during early boot when hibernation support is available:

[..] virt/tdx: initialization failed: Hibernation support is enabled

Add the ‘nohibernate’ kernel command line option to disable hibernation in order to use TDX.

ACPI S3 is disabled during kernel early boot if TDX is enabled. The user needs to turn off TDX in the BIOS in order to use S3.

22.2. TDX Guest Support

Since the host cannot directly access guest registers or memory, much normal functionality of a hypervisor must be moved into the guest. This is implemented using a Virtualization Exception (#VE) that is handled by the guest kernel. Some #VEs are handled entirely inside the guest kernel, but others require the hypervisor to be consulted.

TDX includes new hypercall-like mechanisms for communicating from the guest to the hypervisor or the TDX module.

22.2.1. New TDX Exceptions

TDX guests behave differently from bare-metal and traditional VMX guests. In TDX guests, otherwise normal instructions or memory accesses can cause #VE or #GP exceptions.

Instructions marked with an ‘*’ conditionally cause exceptions. The details for these instructions are discussed below.

22.2.1.1. Instruction-based #VE

  • Port I/O (INS, OUTS, IN, OUT)

  • HLT

  • MONITOR, MWAIT

  • WBINVD, INVD

  • VMCALL

  • RDMSR*, WRMSR*

  • CPUID*

22.2.1.2. Instruction-based #GP

  • All VMX instructions: INVEPT, INVVPID, VMCLEAR, VMFUNC, VMLAUNCH, VMPTRLD, VMPTRST, VMREAD, VMRESUME, VMWRITE, VMXOFF, VMXON

  • ENCLS, ENCLU

  • GETSEC

  • RSM

  • ENQCMD

  • RDMSR*, WRMSR*

22.2.1.3. RDMSR/WRMSR Behavior

MSR access behavior falls into three categories:

  • #GP generated

  • #VE generated

  • “Just works”

In general, the #GP MSRs should not be used in guests. Their use likely indicates a bug in the guest. The guest may try to handle the #GP with a hypercall but it is unlikely to succeed.

The #VE MSRs are typically able to be handled by the hypervisor. Guests can make a hypercall to the hypervisor to handle the #VE.

The “just works” MSRs do not need any special guest handling. They might be implemented by directly passing through the MSR to the hardware or by trapping and handling in the TDX module. Other than possibly being slow, these MSRs appear to function just as they would on bare metal.

22.2.1.4. CPUID Behavior

For some CPUID leaves and sub-leaves, the virtualized bit fields of CPUID return values (in guest EAX/EBX/ECX/EDX) are configurable by the hypervisor. For such cases, the Intel TDX module architecture defines two virtualization types:

  • Bit fields for which the hypervisor controls the value seen by the guest TD.

  • Bit fields for which the hypervisor configures the value such that the guest TD either sees their native value or a value of 0. For these bit fields, the hypervisor can mask off the native values, but it can not turn on values.

A #VE is generated for CPUID leaves and sub-leaves that the TDX module does not know how to handle. The guest kernel may ask the hypervisor for the value with a hypercall.

22.2.2. #VE on Memory Accesses

There are essentially two classes of TDX memory: private and shared. Private memory receives full TDX protections. Its content is protected against access from the hypervisor. Shared memory is expected to be shared between guest and hypervisor and does not receive full TDX protections.

A TD guest is in control of whether its memory accesses are treated as private or shared. It selects the behavior with a bit in its page table entries. This helps ensure that a guest does not place sensitive information in shared memory, exposing it to the untrusted hypervisor.

22.2.2.1. #VE on Shared Memory

Access to shared mappings can cause a #VE. The hypervisor ultimately controls whether a shared memory access causes a #VE, so the guest must be careful to only reference shared pages for which it can safely handle a #VE. For instance, the guest should be careful not to access shared memory in the #VE handler before it reads the #VE info structure (TDG.VP.VEINFO.GET).

Shared mapping content is entirely controlled by the hypervisor. The guest should only use shared mappings for communicating with the hypervisor. Shared mappings must never be used for sensitive memory content like kernel stacks. A good rule of thumb is that hypervisor-shared memory should be treated the same as memory mapped to userspace. Both the hypervisor and userspace are completely untrusted.

MMIO for virtual devices is implemented as shared memory. The guest must be careful not to access device MMIO regions unless it is also prepared to handle a #VE.

22.2.2.2. #VE on Private Pages

An access to private mappings can also cause a #VE. Since all kernel memory is also private memory, the kernel might theoretically need to handle a #VE on arbitrary kernel memory accesses. This is not feasible, so TDX guests ensure that all guest memory has been “accepted” before memory is used by the kernel.

A modest amount of memory (typically 512M) is pre-accepted by the firmware before the kernel runs to ensure that the kernel can start up without being subjected to a #VE.

The hypervisor is permitted to unilaterally move accepted pages to a “blocked” state. However, if it does this, page access will not generate a #VE. It will, instead, cause a “TD Exit” where the hypervisor is required to handle the exception.

22.2.3. Linux #VE handler

Just like page faults or #GP’s, #VE exceptions can either be handled or be fatal. Typically, an unhandled userspace #VE results in a SIGSEGV. An unhandled kernel #VE results in an oops.

Handling nested exceptions on x86 is typically nasty business. A #VE could be interrupted by an NMI which triggers another #VE and hilarity ensues. The TDX #VE architecture anticipated this scenario and includes a feature to make it slightly less nasty.

During #VE handling, the TDX module ensures that all interrupts (including NMIs) are blocked. The block remains in place until the guest makes a TDG.VP.VEINFO.GET TDCALL. This allows the guest to control when interrupts or a new #VE can be delivered.

However, the guest kernel must still be careful to avoid potential #VE-triggering actions (discussed above) while this block is in place. While the block is in place, any #VE is elevated to a double fault (#DF) which is not recoverable.
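
The resulting handler flow looks roughly like the sketch below. struct ve_info and tdx_get_ve_info() are declared in asm/tdx.h; the handle_*() helpers and the exact dispatch are illustrative, not the literal in-kernel code:

        /*
         * Illustrative sketch of kernel #VE dispatch.  The #VE info must be
         * fetched first: the TDG.VP.VEINFO.GET TDCALL is also what re-arms
         * NMI/#VE delivery.
         */
        static int handle_kernel_ve(struct pt_regs *regs)
        {
                struct ve_info ve;

                tdx_get_ve_info(&ve);                   /* TDG.VP.VEINFO.GET */

                switch (ve.exit_reason) {
                case EXIT_REASON_MSR_READ:
                case EXIT_REASON_MSR_WRITE:
                        return handle_msr(regs, &ve);   /* hypercall to host */
                case EXIT_REASON_CPUID:
                        return handle_cpuid(regs, &ve); /* hypercall to host */
                case EXIT_REASON_EPT_VIOLATION:
                        return handle_mmio(regs, &ve);  /* see MMIO handling */
                default:
                        return -EIO;                    /* unhandled: oops */
                }
        }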

22.2.4. MMIO handling

In non-TDX VMs, MMIO is usually implemented by giving a guest access to a mapping which will cause a VMEXIT on access, and then the hypervisor emulates the access. That is not possible in TDX guests because VMEXIT will expose the register state to the host. TDX guests don’t trust the host and can’t have their state exposed to the host.

In TDX, MMIO regions typically trigger a #VE exception in the guest. The guest #VE handler then emulates the MMIO instruction inside the guest and converts it into a controlled TDCALL to the host, rather than exposing guest state to the host.

MMIO addresses on x86 are just special physical addresses. They can theoretically be accessed with any instruction that accesses memory. However, the kernel instruction decoding method is limited. It is only designed to decode instructions like those generated by io.h macros.

MMIO access via other means (like structure overlays) may result in an oops.
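
For example, a driver running in a TDX guest should stick to the io.h accessors; the physical address and register offsets below are made up for illustration:

        #include <linux/io.h>
        #include <linux/printk.h>

        /* Hypothetical device: offsets 0x00 (ctrl) and 0x04 (status). */
        static void mmio_example(phys_addr_t phys_addr)
        {
                void __iomem *regs = ioremap(phys_addr, 0x1000);
                u32 status;

                if (!regs)
                        return;

                /* Fine in a TDX guest: io.h accessors can be decoded on #VE. */
                writel(0x1, regs + 0x00);
                status = readl(regs + 0x04);
                pr_info("device status: %#x\n", status);

                /*
                 * Risky in a TDX guest: a plain load/store through a structure
                 * overlay may use an instruction the #VE handler cannot decode
                 * and can end in an oops.
                 */

                iounmap(regs);
        }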

22.2.5. Shared Memory Conversions

All TDX guest memory starts out as private at boot. This memory can not be accessed by the hypervisor. However, some kernel users like device drivers might have a need to share data with the hypervisor. To do this, memory must be converted between shared and private. This can be accomplished using some existing memory encryption helpers:

  • set_memory_decrypted() converts a range of pages to shared.

  • set_memory_encrypted() converts memory back to private.

Device drivers are the primary user of shared memory, but there’s no need to touch every driver. DMA buffers and ioremap() do the conversions automatically.
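
When a driver does need an explicit long-lived shared buffer, the conversion looks roughly like the sketch below; the page must be converted back to private before it is returned to the page allocator:

        #include <linux/errno.h>
        #include <linux/gfp.h>
        #include <linux/set_memory.h>

        /* Sketch: share one page with the hypervisor, then take it back. */
        static int shared_buffer_example(void)
        {
                unsigned long buf = __get_free_page(GFP_KERNEL | __GFP_ZERO);
                int ret;

                if (!buf)
                        return -ENOMEM;

                ret = set_memory_decrypted(buf, 1);     /* convert to shared */
                if (ret) {
                        free_page(buf);
                        return ret;
                }

                /* ... communicate with the hypervisor through the buffer ... */

                ret = set_memory_encrypted(buf, 1);     /* back to private */
                if (!ret)
                        free_page(buf);         /* only free once private again */

                return ret;
        }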

TDX uses SWIOTLB for most DMA allocations. The SWIOTLB buffer is converted to shared on boot.

For coherent DMA allocation, the DMA buffer gets converted on the allocation. Check force_dma_unencrypted() for details.
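
In other words, ordinary coherent-DMA driver code needs no TDX-specific changes; an allocation like the one sketched below comes back already converted to shared:

        #include <linux/dma-mapping.h>
        #include <linux/gfp.h>

        /* Unmodified driver code: under TDX the buffer is already shared. */
        static void *alloc_device_ring(struct device *dev, size_t size,
                                       dma_addr_t *dma_handle)
        {
                return dma_alloc_coherent(dev, size, dma_handle, GFP_KERNEL);
        }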

22.3. Attestation

Attestation is used to verify the TDX guest trustworthiness to other entities before provisioning secrets to the guest. For example, a key server may want to use attestation to verify that the guest is the desired one before releasing the encryption keys to mount the encrypted rootfs or a secondary drive.

The TDX module records the state of the TDX guest in various stages of the guest boot process using the build time measurement register (MRTD) and runtime measurement registers (RTMR). Measurements related to the guest initial configuration and firmware image are recorded in the MRTD register. Measurements related to initial state, kernel image, firmware image, command line options, initrd, ACPI tables, etc. are recorded in RTMR registers. For more details, as an example, please refer to the TDX Virtual Firmware design specification, section titled “TD Measurement”. At TDX guest runtime, the attestation process is used to attest to these measurements.

The attestation process consists of two steps: TDREPORT generation and Quote generation.

The TDX guest uses TDCALL[TDG.MR.REPORT] to get the TDREPORT (TDREPORT_STRUCT) from the TDX module. TDREPORT is a fixed-size data structure generated by the TDX module which contains guest-specific information (such as build and boot measurements), the platform security version, and the MAC to protect the integrity of the TDREPORT. A user-provided 64-byte REPORTDATA is used as input and included in the TDREPORT. Typically it can be some nonce provided by the attestation service so the TDREPORT can be verified uniquely. More details about the TDREPORT can be found in the Intel TDX Module specification, section titled “TDG.MR.REPORT Leaf”.
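
From userspace, the TDREPORT is typically fetched through the TDX guest driver. The sketch below assumes the /dev/tdx_guest character device and its TDX_CMD_GET_REPORT0 ioctl (documented separately with the TDX guest driver); treat the exact interface as an assumption rather than part of this document:

        /* Sketch: fetch a TDREPORT with a 64-byte REPORTDATA nonce. */
        #include <fcntl.h>
        #include <string.h>
        #include <sys/ioctl.h>
        #include <unistd.h>
        #include <linux/tdx-guest.h>    /* assumed uAPI header */

        static int get_tdreport(const unsigned char nonce[TDX_REPORTDATA_LEN],
                                unsigned char tdreport[TDX_REPORT_LEN])
        {
                struct tdx_report_req req = { 0 };
                int fd, ret;

                memcpy(req.reportdata, nonce, TDX_REPORTDATA_LEN);

                fd = open("/dev/tdx_guest", O_RDWR);
                if (fd < 0)
                        return -1;

                ret = ioctl(fd, TDX_CMD_GET_REPORT0, &req);
                if (!ret)
                        memcpy(tdreport, req.tdreport, TDX_REPORT_LEN);

                close(fd);
                return ret;
        }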

After getting the TDREPORT, the second step of the attestation process is to send it to the Quoting Enclave (QE) to generate the Quote. The TDREPORT by design can only be verified on the local platform, as the MAC key is bound to the platform. To support remote verification of the TDREPORT, TDX leverages the Intel SGX Quoting Enclave to verify the TDREPORT locally and convert it to a remotely verifiable Quote. The method of sending the TDREPORT to the QE is implementation specific. Attestation software can choose whatever communication channel is available (e.g. vsock or TCP/IP) to send the TDREPORT to the QE and receive the Quote.

22.4. References

TDX reference material is collected here:

https://www.intel.com/content/www/us/en/developer/articles/technical/intel-trust-domain-extensions.html