Sole-tenancy overview

This document describes sole-tenant nodes. For information about how to provision VMs on sole-tenant nodes, see Provisioning VMs on sole-tenant nodes.

Sole-tenancy lets you have exclusive access to a sole-tenant node, which is a physical Compute Engine server that is dedicated to hosting only your project's VMs. Use sole-tenant nodes to keep your VMs physically separated from VMs in other projects, or to group your VMs together on the same host hardware as shown in the following diagram. You can also create a sole-tenant node group and specify whether you want to share it with other projects or with the entire organization.

Figure 1: A multi-tenant host versus a sole-tenant node.

VMs running on sole-tenant nodes can use the same Compute Engine features as other VMs, including transparent scheduling and block storage, but with an added layer of hardware isolation. To give you full control over the VMs on the physical server, each sole-tenant node maintains a one-to-one mapping to the physical server that is backing the node.

Within a sole-tenant node, you can provision multiple VMs on machine types of various sizes, which lets you efficiently use the underlying resources of the dedicated host hardware. Also, if you choose not to share the host hardware with other projects, you can meet security or compliance requirements with workloads that require physical isolation from other workloads or VMs. If your workload requires sole tenancy only temporarily, you can modify VM tenancy as necessary.

Sole-tenant nodes can help you meet dedicated hardware requirements for bring your own license (BYOL) scenarios that require per-core or per-processor licenses. When you use sole-tenant nodes, you have some visibility into the underlying hardware, which lets you track core and processor usage. To track this usage, Compute Engine reports the ID of the physical server on which a VM is scheduled. Then, by using Cloud Logging, you can view the historical server usage of a VM.
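
For example, the following gcloud CLI command is a minimal sketch of one way to look up the ID of the physical server that is currently backing a VM. It assumes that the resourceStatus.physicalHost field is populated for the VM; the VM name my-vm and the zone us-central1-a are hypothetical placeholders.

    # Print the ID of the physical server that currently backs the VM.
    gcloud compute instances describe my-vm \
        --zone=us-central1-a \
        --format="value(resourceStatus.physicalHost)"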

To optimize the use of the host hardware, you can do the following:

Through a configurable host maintenance policy, you can control the behavior of sole-tenant VMs while their host is undergoing maintenance. You can specify when maintenance occurs, and whether the VMs maintain affinity with a specific physical server or are moved to other sole-tenant nodes within a node group.

Workload considerations

The following types of workloads might benefit from using sole-tenant nodes:

  • Gaming workloads with performance requirements

  • Finance or healthcare workloads with security and compliance requirements

  • Windows workloads with licensing requirements

  • Machine learning, data processing, or image rendering workloads. For these workloads, consider reserving GPUs.

  • Workloads requiring increased I/O operations per second (IOPS) and decreased latency, or workloads that use temporary storage in the form of caches, processing space, or low-value data. For these workloads, consider reserving Local SSD disks.

Node templates

A node template is a regional resource that defines the properties of each node in a node group. When you create a node group from a node template, the properties of the node template are immutably copied to each node in the node group.

When you create a node template, you must specify a node type. You can optionally specify node affinity labels when you create a node template. You can only specify node affinity labels on a node template; you can't specify them on a node group.
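
For example, the following gcloud CLI command is a minimal sketch that creates a node template with a node type and a custom node affinity label. The template name, region, and the workload=sensitive label are hypothetical placeholders.

    # Create a node template that uses the n2-node-80-640 node type and a
    # custom affinity label.
    gcloud compute sole-tenancy node-templates create my-node-template \
        --region=us-central1 \
        --node-type=n2-node-80-640 \
        --node-affinity-labels=workload=sensitive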

Node types

When configuring a node template, specify a node type to apply to all nodes within a node group created based on the node template. The sole-tenant node type, referenced by the node template, specifies the total amount of vCPU cores and memory for nodes created in node groups that use that template. For example, the n2-node-80-640 node type has 80 vCPUs and 640 GB of memory.

The VMs that you add to a sole-tenant node must have the same machine type as the node type that you specify in the node template. For example, n2 sole-tenant node types are only compatible with VMs created with the n2 machine type. You can add VMs to a sole-tenant node until the total amount of vCPUs or memory exceeds the capacity of the node.

When you create a node group using a node template, each node in the node group inherits the node template's node type specifications. A node type applies to each individual node within a node group, not to all of the nodes in the group uniformly. So, if you create a node group with two nodes that are both of the n2-node-80-640 node type, each node is allocated 80 vCPUs and 640 GB of memory.

Depending on your workload requirements, you might fill the node with multiple smaller VMs running on machine types of various sizes, including predefined machine types, custom machine types, and machine types with extended memory. When a node is full, you cannot schedule additional VMs on that node.

The following table displays the available node types. To see a list of the node types available for your project, run the gcloud compute sole-tenancy node-types list command or create a nodeTypes.list REST request.

| Node type | Processor | vCPU | GB | vCPU:GB | Sockets | Cores:Socket | Total cores | Max VMs allowed |
|---|---|---|---|---|---|---|---|---|
| a2-highgpu-node-96-680 | Cascade Lake | 96 | 680 | 1:7.08 | 2 | 24 | 48 | 8 |
| a2-megagpu-node-96-1360 | Cascade Lake | 96 | 1360 | 1:14.17 | 2 | 24 | 48 | 1 |
| a2-ultragpu-node-96-1360-lssd¹ | Cascade Lake | 96 | 1360 | 1:14.17 | 2 | 24 | 48 | 1 |
| a3-highgpu-node-208-1872-lssd¹ | Sapphire Rapids | 208 | 1872 | 1:9 | 2 | 56 | 112 | 8 |
| a3-megagpu-node-208-1872-lssd¹ | Sapphire Rapids | 208 | 1872 | 1:9 | 2 | 56 | 112 | 1 |
| c2-node-60-240 | Cascade Lake | 60 | 240 | 1:4 | 2 | 18 | 36 | 15 |
| c3-node-176-352 | Sapphire Rapids | 176 | 352 | 1:2 | 2 | 48 | 96 | 44 |
| c3-node-176-704 | Sapphire Rapids | 176 | 704 | 1:4 | 2 | 48 | 96 | 44 |
| c3-node-176-704-lssd | Sapphire Rapids | 176 | 704 | 1:4 | 2 | 48 | 96 | 40 |
| c3-node-176-1408 | Sapphire Rapids | 176 | 1408 | 1:8 | 2 | 48 | 96 | 44 |
| c3d-node-360-708 | AMD EPYC Genoa | 360 | 708 | 1:2 | 2 | 96 | 192 | 34 |
| c3d-node-360-1440 | AMD EPYC Genoa | 360 | 1440 | 1:4 | 2 | 96 | 192 | 40 |
| c3d-node-360-2880 | AMD EPYC Genoa | 360 | 2880 | 1:8 | 2 | 96 | 192 | 40 |
| c4-node-192-384 | Emerald Rapids | 192 | 384 | 1:2 | 2 | 60 | 120 | 26 |
| c4-node-192-720 | Emerald Rapids | 192 | 720 | 1:3.75 | 2 | 60 | 120 | 26 |
| c4-node-192-1488 | Emerald Rapids | 192 | 1,488 | 1:7.75 | 2 | 60 | 120 | 26 |
| c4a-node-72-144 | Google Axion | 72 | 144 | 1:2 | 1 | 80 | 80 | 22 |
| c4a-node-72-288 | Google Axion | 72 | 288 | 1:4 | 1 | 80 | 80 | 22 |
| c4a-node-72-576 | Google Axion | 72 | 576 | 1:8 | 1 | 80 | 80 | 36 |
| c4d-node-384-720 | AMD EPYC Turin | 384 | 744 | 1:2 | 2 | 96 | 192 | 24 |
| c4d-node-384-1488 | AMD EPYC Turin | 384 | 1488 | 1:4 | 2 | 96 | 192 | 25 |
| c4d-node-384-3024 | AMD EPYC Turin | 384 | 3024 | 1:8 | 2 | 96 | 192 | 25 |
| g2-node-96-384 | Cascade Lake | 96 | 384 | 1:4 | 2 | 28 | 56 | 8 |
| g2-node-96-432 | Cascade Lake | 96 | 432 | 1:4.5 | 2 | 28 | 56 | 8 |
| h3-node-88-352 | Sapphire Rapids | 88 | 352 | 1:4 | 2 | 48 | 96 | 1 |
| h4d-node-384-744 | AMD EPYC Turin | 384 | 768 | 1:2 | 2 | 96 | 192 | 1 |
| h4d-node-384-1488 | AMD EPYC Turin | 384 | 1536 | 1:4 | 2 | 96 | 192 | 1 |
| m1-node-96-1433 | Skylake | 96 | 1433 | 1:14.93 | 2 | 28 | 56 | 1 |
| m1-node-160-3844 | Broadwell E7 | 160 | 3844 | 1:24 | 4 | 22 | 88 | 4 |
| m2-node-416-8832 | Cascade Lake | 416 | 8832 | 1:21.23 | 8 | 28 | 224 | 1 |
| m2-node-416-11776 | Cascade Lake | 416 | 11776 | 1:28.31 | 8 | 28 | 224 | 2 |
| m3-node-128-1952 | Ice Lake | 128 | 1952 | 1:15.25 | 2 | 36 | 72 | 2 |
| m3-node-128-3904 | Ice Lake | 128 | 3904 | 1:30.5 | 2 | 36 | 72 | 2 |
| m4-node-224-2976 | Emerald Rapids | 224 | 2976 | 1:13.3 | 2 | 112 | 224 | 1 |
| m4-node-224-5952 | Emerald Rapids | 224 | 5952 | 1:26.7 | 2 | 112 | 224 | 1 |
| n1-node-96-624 | Skylake | 96 | 624 | 1:6.5 | 2 | 28 | 56 | 96 |
| n2-node-80-640 | Cascade Lake | 80 | 640 | 1:8 | 2 | 24 | 48 | 80 |
| n2-node-128-864 | Ice Lake | 128 | 864 | 1:6.75 | 2 | 36 | 72 | 128 |
| n2d-node-224-896 | AMD EPYC Rome | 224 | 896 | 1:4 | 2 | 64 | 128 | 112 |
| n2d-node-224-1792 | AMD EPYC Milan | 224 | 1792 | 1:8 | 2 | 64 | 128 | 112 |
| n4-node-224-1372 | Emerald Rapids | 224 | 1372 | 1:6 | 2 | 60 | 120 | 90 |

¹ Node types with Local SSD: The -lssd suffix indicates that Local SSD disks are attached to the node. For A3 High, A3 Mega, and A2 Ultra node types, Local SSD disks are also attached by default when no suffix is specified. To provision these node types without Local SSD disks, use the -nolssd suffix (for example, a3-megagpu-node-208-1872-nolssd or a2-ultragpu-node-96-1360-nolssd).
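
To check which of these node types you can use in a particular zone, you can use the commands mentioned earlier. The following sketch assumes the hypothetical zone us-central1-a.

    # List the node types that are available in one zone.
    gcloud compute sole-tenancy node-types list \
        --filter="zone:us-central1-a"

    # Show the vCPU and memory details of a specific node type.
    gcloud compute sole-tenancy node-types describe n2-node-80-640 \
        --zone=us-central1-a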

For information about the prices of these node types, see sole-tenant node pricing.

All nodes let you schedule VMs of different shapes. N node types are general-purpose nodes, on which you can schedule custom machine type instances. For recommendations about which node type to choose, see Recommendations for machine types. For information about performance, see CPU platforms.

Node groups and VM provisioning

Sole-tenant node templates define the properties of a node group, and you must create a node template before creating a node group in a Google Cloud zone. When you create a group, specify the host maintenance policy for VMs on the node group, the number of nodes for the node group, and whether to share it with other projects or with the entire organization.

A node group can have zero or more nodes; for example, you can reduce the number of nodes in a node group to zero when you don't need to run any VMs on nodes in the group, or you can enable the node group autoscaler to manage the size of the node group automatically.

Before provisioning VMs on sole-tenant nodes, you must create a sole-tenant node group. A node group is a homogeneous set of sole-tenant nodes in a specific zone. Node groups can contain multiple VMs from the same machine series running on machine types of various sizes, as long as the machine type has 2 or more vCPUs.

When you create a node group, enable autoscaling so that the size of the group adjusts automatically to meet the requirements of your workload. If your workload requirements are static, you can manually specify the size of the node group.
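
For example, the following gcloud CLI command is a minimal sketch that creates a node group from an existing node template and enables the node group autoscaler. The group name, zone, node template, and size limits are hypothetical placeholders.

    # Create a node group with two initial nodes and let the autoscaler grow
    # it to at most ten nodes.
    gcloud compute sole-tenancy node-groups create my-node-group \
        --zone=us-central1-a \
        --node-template=my-node-template \
        --target-size=2 \
        --autoscaler-mode=on \
        --min-nodes=2 \
        --max-nodes=10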

After creating a node group, you can provision VMs on the group or on a specific node within the group. For further control, use node affinity labels to schedule VMs on any node with matching affinity labels.
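
For example, the following gcloud CLI sketch provisions a VM on the node group by using the --node-group flag. The VM name, zone, and machine type are hypothetical placeholders; the machine type must belong to the same machine series as the node type of the group.

    # Create a VM on any node in the node group.
    gcloud compute instances create my-sole-tenant-vm \
        --zone=us-central1-a \
        --machine-type=n2-standard-8 \
        --node-group=my-node-group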

After you've provisioned VMs on node groups, and optionally assigned affinity labels to provision VMs on specific node groups or nodes, consider labeling your resources to help manage your VMs. Labels are key-value pairs that can help you categorize your VMs so that you can view them in aggregate for reasons such as billing. For example, you can use labels to mark the role of a VM, its tenancy, the license type, or its location.

Host maintenance policy

Depending on your licensing scenarios and workloads, you might want to limit the number of physical cores used by your VMs. The host maintenance policy that you choose might depend on, for example, your licensing or compliance requirements, or you might want to choose a policy that lets you limit the usage of physical servers. With all of these policies, your VMs remain on dedicated hardware.

When you schedule VMs on sole-tenant nodes, you can choose from the following three different host maintenance policy options, which let you determine how and whether Compute Engine live migrates VMs during host events, which occur approximately every 4 to 6 weeks. During maintenance, Compute Engine live migrates, as a group, all of the VMs on the host to a different sole-tenant node, but, in some cases, Compute Engine might break up the VMs into smaller groups and live migrate each smaller group of VMs to separate sole-tenant nodes.

Default host maintenance policy

This is the default host maintenance policy, and VMs on node groups configured with this policy follow traditional maintenance behavior for non-sole-tenant VMs. That is, depending on the VM's on-host maintenance setting, VMs live migrate to a new sole-tenant node in the node group before a host maintenance event, and this new sole-tenant node only runs the customer's VMs.

This policy is most suitable for per-user or per-device licenses that require live migration during host events. This setting doesn't restrict the migration of VMs to within a fixed pool of physical servers, and is recommended for general workloads that have no physical server requirements and that don't require existing licenses.

Because, with this policy, VMs live migrate to any server without considering existing server affinity, this policy is not suitable for scenarios that require minimizing the use of physical cores during host events.

The following figure shows an animation of the Default host maintenance policy.

Figure 2: Animation of the Default host maintenance policy.

Restart in place host maintenance policy

When you use this host maintenance policy, Compute Engine stops VMs during host events, and then restarts the VMs on the same physical server after the host event. You must set the VM's on-host maintenance setting to TERMINATE when using this policy.
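
As a hedged sketch, the following gcloud CLI commands create a node group that uses this policy and a VM whose on-host maintenance setting is TERMINATE. The resource names, zone, node template, and machine type are hypothetical placeholders, and the maintenance policy value shown for the node group is an assumption about the flag's accepted values.

    # Create a node group that restarts VMs in place during host events.
    gcloud compute sole-tenancy node-groups create my-restart-group \
        --zone=us-central1-a \
        --node-template=my-node-template \
        --target-size=1 \
        --maintenance-policy=restart-in-place

    # Create a VM on that group with on-host maintenance set to TERMINATE.
    gcloud compute instances create my-restart-vm \
        --zone=us-central1-a \
        --machine-type=n2-standard-8 \
        --node-group=my-restart-group \
        --maintenance-policy=TERMINATE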

This policy is most suitable for workloads that are fault-tolerant and can experience approximately one hour of downtime during host events, workloads that must remain on the same physical server, workloads that don't require live migration, or if you have licenses that are based on the number of physical cores or processors.

With this policy, the instance can be assigned to the node group by using node-name, node-group-name, or node affinity labels.

The following figure shows an animation of the Restart in place maintenance policy.

Figure 3: Animation of the Restart in place host maintenance policy.

Migrate within node group host maintenance policy

When using this host maintenance policy, Compute Engine live migrates VMs within a fixed-size group of physical servers during host events, which helps limit the number of unique physical servers used by the VM.

This policy is most suitable for high-availability workloads with licenses that are based on the number of physical cores or processors, because with this host maintenance policy, each sole-tenant node in the group is pinned to a fixed set of physical servers, unlike the default policy, which lets VMs migrate to any server.

To confirm the capacity for live migration, Compute Engine reserves 1 holdback node for every 20 nodes that you reserve. The following figure shows an animation of the Migrate within node group host maintenance policy.

Figure 4: Animation of the Migrate within node group host maintenance policy.

The following table shows how many holdback nodes Compute Engine reserves depending on how many nodes you reserve for your node group.

| Total nodes in group | Holdback nodes reserved for live migration |
|---|---|
| 1 | Not applicable. Must reserve at least 2 nodes. |
| 2 to 20 | 1 |
| 21 to 40 | 2 |
| 41 to 60 | 3 |
| 61 to 80 | 4 |
| 81 to 100 | 5 |

Pin an instance to multiple node groups

You can pin an instance to multiple node groups using the node-group-name affinity label under the following conditions:

  • The instance that you want to pin is using a default host maintenance policy (Migrate VM instance).
  • The host maintenance policy of all the node groups that you want to pin the instance to is migrate within node group. If you try to pin an instance to node groups with different host maintenance policies, the operation fails with an error.

For example, if you want to pin an instance test-node to two node groups node-group1 and node-group2, verify the following:

  • The host maintenance policy of test-node is Migrate VM instance.
  • The host maintenance policy of node-group1 and node-group2 is migrate within node group.
Note: If the instance's on-host maintenance policy is Terminate, you can pin the instance to only a single node group.

You cannot assign an instance to any specific node with the node-name affinity label. You can use any custom node affinity labels for your instances as long as they are assigned using node-group-name and not node-name.
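
For example, the following gcloud CLI sketch pins the instance test-node to both node groups through a node affinity file. The file name, zone, and machine type are hypothetical placeholders, and the sketch assumes that node-group1 and node-group2 already exist and use the migrate within node group policy.

    # Write a node affinity file that targets both node groups.
    cat > affinity.json <<'EOF'
    [
      {
        "key": "compute.googleapis.com/node-group-name",
        "operator": "IN",
        "values": ["node-group1", "node-group2"]
      }
    ]
    EOF

    # Create the instance with that affinity configuration. The default
    # on-host maintenance setting (Migrate VM instance) is kept.
    gcloud compute instances create test-node \
        --zone=us-central1-a \
        --machine-type=n2-standard-8 \
        --node-affinity-file=affinity.json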

Maintenance windows

If you are managing workloads that might be sensitive to the performance impact of live migration, for example, finely tuned databases, you can determine when maintenance begins on a sole-tenant node group by specifying a maintenance window when you create the node group. You can't modify the maintenance window after you create the node group.

Maintenance windows are 4-hour blocks of time that you can use to specify when Google performs maintenance on your sole-tenant VMs. Maintenance events occur approximately once every 4 to 6 weeks.

The maintenance window applies to all VMs in the sole-tenant node group, and it only specifies when the maintenance begins. Maintenance is not guaranteed to finish during the maintenance window, and there is no guarantee on how frequently maintenance occurs. Maintenance windows are not supported on node groups with the Migrate within node group host maintenance policy.
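
As a hedged sketch, the following gcloud CLI command creates a node group with a maintenance window that starts at 08:00 UTC. The group name, zone, and node template are hypothetical placeholders, and the exact flag name and accepted start times are assumptions to verify against your gcloud CLI version.

    # Create a node group whose maintenance window starts at 08:00 UTC.
    gcloud compute sole-tenancy node-groups create my-window-group \
        --zone=us-central1-a \
        --node-template=my-node-template \
        --target-size=2 \
        --maintenance-window-start-time=08:00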

Simulate a host maintenance event

You can simulate a host maintenance event to test how your workloads that are running on sole-tenant nodes behave during a host maintenance event. This lets you see the effects of the sole-tenant VM's host maintenance policy on the applications running on the VMs.
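
For example, the following gcloud CLI sketch triggers a simulated maintenance event on a single VM; the VM name and zone are hypothetical placeholders.

    # Simulate a host maintenance event for one VM.
    gcloud compute instances simulate-maintenance-event my-sole-tenant-vm \
        --zone=us-central1-a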

Host errors

When there is a rare critical hardware failure on the host, whether sole-tenant or multi-tenant, Compute Engine does the following:

  1. Retires the physical server and its unique identifier.

  2. Revokes your project's access to the physical server.

  3. Replaces the failed hardware with a new physical server that has a new unique identifier.

  4. Moves the VMs from the failed hardware to the replacement node.

  5. Restarts the affected VMs if you configured them to automatically restart.

Node affinity and anti-affinity

Sole-tenant nodes make sure that your VMs don't share a host with VMs from other projects unless you use shared sole-tenant node groups. With shared sole-tenant node groups, other projects within the organization can provision VMs on the same host. However, you still might want to group several workloads together on the same sole-tenant node or isolate your workloads from one another on different nodes. For example, to help meet some compliance requirements, you might need to use affinity labels to separate sensitive workloads from non-sensitive workloads.

When you create a VM, you request sole-tenancy by specifying node affinity or anti-affinity, referencing one or more node affinity labels. You specify custom node affinity labels when you create a node template, and Compute Engine automatically includes some default affinity labels on each node. By specifying affinity when you create a VM, you can schedule VMs together on a specific node or nodes in a node group. By specifying anti-affinity when you create a VM, you can make sure that certain VMs are not scheduled together on the same node or nodes in a node group.
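
As a hedged illustration, the following gcloud CLI sketch creates a VM that is scheduled only on nodes that carry a hypothetical workload=sensitive affinity label (affinity) and never on nodes labeled env=test (anti-affinity). The label keys and values, file name, VM name, zone, and machine type are assumptions for this example.

    # Write a node affinity file with an affinity (IN) rule and an
    # anti-affinity (NOT_IN) rule.
    cat > node-affinity.json <<'EOF'
    [
      {
        "key": "workload",
        "operator": "IN",
        "values": ["sensitive"]
      },
      {
        "key": "env",
        "operator": "NOT_IN",
        "values": ["test"]
      }
    ]
    EOF

    # Create the VM with those scheduling rules applied.
    gcloud compute instances create my-affinity-vm \
        --zone=us-central1-a \
        --machine-type=n2-standard-8 \
        --node-affinity-file=node-affinity.json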

Node affinity labels are key-value pairs assigned to nodes, and are inherited from a node template. Affinity labels let you:

  • Control how individual VM instances are assigned to nodes.
  • Control how VM instances created from a template, such as those created by a managed instance group, are assigned to nodes.
  • Group sensitive VM instances on specific nodes or node groups, separate from other VMs.

Default affinity labels

Compute Engine assigns the following default affinity labels to each node:

  • A label for the node group name:
    • Key: compute.googleapis.com/node-group-name
    • Value: Name of the node group.
  • A label for the node name:
    • Key: compute.googleapis.com/node-name
    • Value: Name of the individual node.
  • A label for the projects the node group is shared with:
    • Key: compute.googleapis.com/projects
    • Value: Project ID of the project containing the node group.

Custom affinity labels

You can create custom node affinity labels when you create a node template. These affinity labels are assigned to all nodes in node groups created from the node template. You can't add more custom affinity labels to nodes in a node group after the node group has been created.

For information about how to use affinity labels, see Configuring node affinity.

Pricing

  • To help you minimize the cost of your sole-tenant nodes, Compute Engine provides committed use discounts (CUDs) and sustained use discounts (SUDs). Note that for sole-tenancy premium charges, you can receive only flexible CUDs and SUDs, but not resource-based CUDs.

  • Because you are already billed for the vCPU and memory of your sole-tenant nodes, you don't pay extra for the VMs that you create on those nodes.

  • If you provision sole-tenant nodes with GPUs or Local SSD disks, you are billed for all of the GPUs or Local SSD disks on each node that you provision. The sole-tenancy premium is based only on the vCPUs and memory that you use for the sole-tenant node, and doesn't include GPUs or Local SSD disks.

For more information, see Sole-tenant node pricing.

Availability

  • Sole-tenant nodes are available in select zones. To help ensure high availability, schedule VMs on sole-tenant nodes in different zones.

  • Before using GPUs or Local SSD disks on sole-tenant nodes, make sure you have enough GPU or Local SSD quota in the zone where you are reserving the resource.

  • Compute Engine supports GPUs on n1, g2, a2-highgpu, a2-megagpu, a2-ultragpu, a3-highgpu, and a3-megagpu sole-tenant node types that are in zones with GPU support. The following table shows the types of GPUs that you can attach to n1, g2, a2, and a3 nodes and how many GPUs you must attach when you create the node template.

    | GPU type | GPU quantity | Sole-tenant node type |
    |---|---|---|
    | NVIDIA A100 40GB | 8 | a2-highgpu |
    | NVIDIA A100 40GB | 16 | a2-megagpu |
    | NVIDIA A100 80GB | 8 | a2-ultragpu |
    | NVIDIA H100 | 8 | a3-highgpu |
    | NVIDIA H100 | 8 | a3-megagpu |
    | NVIDIA L4 | 8 | g2 |
    | NVIDIA P100 | 4 | n1 |
    | NVIDIA P4 | 4 | n1 |
    | NVIDIA T4 | 4 | n1 |
    | NVIDIA V100 | 8 | n1 |
  • Compute Engine supports Local SSD disks on n1, n2, n2d, g2, a2-ultragpu, a3-highgpu, and a3-megagpu sole-tenant node types that are used in zones that support those machine series.

Restrictions

  • You can't use sole-tenant VMs with the following machine series and types: T2D, T2A, E2, C2D, A3 Ultra, A3 Edge, A4, A4X, G4, or bare metal instances.

  • Sole-tenant VMs can't specify a minimum CPU platform.

  • You can't migrate a VM to a sole-tenant node if that VM specifies a minimum CPU platform. To migrate a VM to a sole-tenant node, remove the minimum CPU platform specification by setting it to automatic before updating the VM's node affinity labels.

  • Sole-tenant nodes don't support preemptible VM instances.

  • For information about the limitations of using Local SSD disks on sole-tenant nodes, see Local SSD data persistence.

  • For information about how using GPUs affects live migration, see the limitations of live migration.

  • Sole-tenant nodes with GPUs don't support VMs without GPUs.

  • Only N1, N2, N2D, and N4 sole-tenant nodes support overcommitting CPUs.

  • C3, C3D, C4, C4A, C4D, and N4 VMs are scheduled with alignment to the underlying NUMA architecture of the sole-tenant node. Scheduling full and sub-NUMA VM shapes on the same node might lead to fragmentation, whereby a larger shape can't be scheduled even though several smaller shapes with the same total resource requirements can.

  • C3 and C4 sole-tenant nodes require that VMs have the same vCPU-to-memory ratio as the node type. For example, you can't place a c3-standard VM on a -highmem node type.

  • You can't update the maintenance policy on a live node group.

What's next
