Sole-tenancy overview
This document describes sole-tenant nodes. For information about how to provision VMs on sole-tenant nodes, see Provisioning VMs on sole-tenant nodes.
Sole-tenancy lets you have exclusive access to a sole-tenant node, which is a physical Compute Engine server that is dedicated to hosting only your project's VMs. Use sole-tenant nodes to keep your VMs physically separated from VMs in other projects, or to group your VMs together on the same host hardware as shown in the following diagram. You can also create a sole-tenant node group and specify whether you want to share it with other projects or with the entire organization.
VMs running on sole-tenant nodes can use the same Compute Engine features as other VMs, including transparent scheduling and block storage, but with an added layer of hardware isolation. To give you full control over the VMs on the physical server, each sole-tenant node maintains a one-to-one mapping to the physical server that is backing the node.
Within a sole-tenant node, you can provision multiple VMs on machine types of various sizes, which lets you efficiently use the underlying resources of the dedicated host hardware. Also, if you choose not to share the host hardware with other projects, you can meet security or compliance requirements with workloads that require physical isolation from other workloads or VMs. If your workload requires sole tenancy only temporarily, you can modify VM tenancy as necessary.
Sole-tenant nodes can help you meet dedicated hardware requirements for bring your own license (BYOL) scenarios that require per-core or per-processor licenses. When you use sole-tenant nodes, you have some visibility into the underlying hardware, which lets you track core and processor usage. To track this usage, Compute Engine reports the ID of the physical server on which a VM is scheduled. Then, by using Cloud Logging, you can view the historical server usage of a VM.
You can optimize the use of the host hardware in several ways, for example, by sharing sole-tenant node groups with other projects or by overcommitting CPUs on sole-tenant VMs.
Through a configurable host maintenance policy, you can control the behavior of sole-tenant VMs while their host is undergoing maintenance. You can specify when maintenance occurs, and whether the VMs maintain affinity with a specific physical server or are moved to other sole-tenant nodes within a node group.
Workload considerations
The following types of workloads might benefit from using sole-tenant nodes:
- Gaming workloads with performance requirements
- Finance or healthcare workloads with security and compliance requirements
- Windows workloads with licensing requirements
- Machine learning, data processing, or image rendering workloads. For these workloads, consider reserving GPUs.
- Workloads requiring increased I/O operations per second (IOPS) and decreased latency, or workloads that use temporary storage in the form of caches, processing space, or low-value data. For these workloads, consider reserving Local SSD disks.
Node templates
A node template is a regional resource that defines the properties of each node in a node group. When you create a node group from a node template, the properties of the node template are immutably copied to each node in the node group.
When you create a node template, you must specify a node type. You can optionally specify node affinity labels when you create a node template. You can specify node affinity labels only on a node template, not on a node group.
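For example, a minimal sketch of creating a node template with the gcloud CLI might look like the following. The template name, region, and affinity label are hypothetical values chosen for illustration.

```bash
# Create a node template in us-central1 that uses the n2-node-80-640 node type
# and attaches a custom node affinity label (workload=frontend). The resource
# name, region, and label are illustrative.
gcloud compute sole-tenancy node-templates create example-template \
    --node-type=n2-node-80-640 \
    --node-affinity-labels=workload=frontend \
    --region=us-central1
```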
Node types
When configuring a node template, specify a node type to apply to all nodes within a node group created based on the node template. The sole-tenant node type, referenced by the node template, specifies the total number of vCPUs and amount of memory for nodes created in node groups that use that template. For example, the n2-node-80-640 node type has 80 vCPUs and 640 GB of memory.
The VMs that you add to a sole-tenant node must have the same machine type as the node type that you specify in the node template. For example, n2 sole-tenant node types are only compatible with VMs created with the n2 machine type. You can continue adding VMs to a sole-tenant node as long as the combined vCPU and memory requirements of those VMs don't exceed the capacity of the node.
When you create a node group using a node template, each node in the node group inherits the node template's node type specifications. A node type applies to each individual node within a node group, not to the group as a whole. So, if you create a node group with two nodes that are both of the n2-node-80-640 node type, each node is allocated 80 vCPUs and 640 GB of memory.
Depending on your workload requirements, you might fill the node with multiple smaller VMs running on machine types of various sizes, including predefined machine types, custom machine types, and machine types with extended memory. When a node is full, you cannot schedule additional VMs on that node.
The following table displays the available node types. To see a list of the node types available for your project, run the gcloud compute sole-tenancy node-types list command or create a nodeTypes.list REST request.
| Node type | Processor | vCPU | Memory (GB) | vCPU:GB ratio | Sockets | Cores per socket | Total cores | Max VMs allowed |
|---|---|---|---|---|---|---|---|---|
| a2-highgpu-node-96-680 | Cascade Lake | 96 | 680 | 1:7.08 | 2 | 24 | 48 | 8 |
| a2-megagpu-node-96-1360 | Cascade Lake | 96 | 1360 | 1:14.17 | 2 | 24 | 48 | 1 |
| a2-ultragpu-node-96-1360-lssd¹ | Cascade Lake | 96 | 1360 | 1:14.17 | 2 | 24 | 48 | 1 |
| a3-highgpu-node-208-1872-lssd¹ | Sapphire Rapids | 208 | 1872 | 1:9 | 2 | 56 | 112 | 8 |
| a3-megagpu-node-208-1872-lssd¹ | Sapphire Rapids | 208 | 1872 | 1:9 | 2 | 56 | 112 | 1 |
| c2-node-60-240 | Cascade Lake | 60 | 240 | 1:4 | 2 | 18 | 36 | 15 |
| c3-node-176-352 | Sapphire Rapids | 176 | 352 | 1:2 | 2 | 48 | 96 | 44 |
| c3-node-176-704 | Sapphire Rapids | 176 | 704 | 1:4 | 2 | 48 | 96 | 44 |
| c3-node-176-704-lssd | Sapphire Rapids | 176 | 704 | 1:4 | 2 | 48 | 96 | 40 |
| c3-node-176-1408 | Sapphire Rapids | 176 | 1408 | 1:8 | 2 | 48 | 96 | 44 |
| c3d-node-360-708 | AMD EPYC Genoa | 360 | 708 | 1:2 | 2 | 96 | 192 | 34 |
| c3d-node-360-1440 | AMD EPYC Genoa | 360 | 1440 | 1:4 | 2 | 96 | 192 | 40 |
| c3d-node-360-2880 | AMD EPYC Genoa | 360 | 2880 | 1:8 | 2 | 96 | 192 | 40 |
| c4-node-192-384 | Emerald Rapids | 192 | 384 | 1:2 | 2 | 60 | 120 | 26 |
| c4-node-192-720 | Emerald Rapids | 192 | 720 | 1:3.75 | 2 | 60 | 120 | 26 |
| c4-node-192-1488 | Emerald Rapids | 192 | 1488 | 1:7.75 | 2 | 60 | 120 | 26 |
| c4a-node-72-144 | Google Axion | 72 | 144 | 1:2 | 1 | 80 | 80 | 22 |
| c4a-node-72-288 | Google Axion | 72 | 288 | 1:4 | 1 | 80 | 80 | 22 |
| c4a-node-72-576 | Google Axion | 72 | 576 | 1:8 | 1 | 80 | 80 | 36 |
| c4d-node-384-720 | AMD EPYC Turin | 384 | 744 | 1:2 | 2 | 96 | 192 | 24 |
| c4d-node-384-1488 | AMD EPYC Turin | 384 | 1488 | 1:4 | 2 | 96 | 192 | 25 |
| c4d-node-384-3024 | AMD EPYC Turin | 384 | 3024 | 1:8 | 2 | 96 | 192 | 25 |
| g2-node-96-384 | Cascade Lake | 96 | 384 | 1:4 | 2 | 28 | 56 | 8 |
| g2-node-96-432 | Cascade Lake | 96 | 432 | 1:4.5 | 2 | 28 | 56 | 8 |
| h3-node-88-352 | Sapphire Rapids | 88 | 352 | 1:4 | 2 | 48 | 96 | 1 |
| h4d-node-384-744 | AMD EPYC Turin | 384 | 768 | 1:2 | 2 | 96 | 192 | 1 |
| h4d-node-384-1488 | AMD EPYC Turin | 384 | 1536 | 1:4 | 2 | 96 | 192 | 1 |
| m1-node-96-1433 | Skylake | 96 | 1433 | 1:14.93 | 2 | 28 | 56 | 1 |
| m1-node-160-3844 | Broadwell E7 | 160 | 3844 | 1:24 | 4 | 22 | 88 | 4 |
| m2-node-416-8832 | Cascade Lake | 416 | 8832 | 1:21.23 | 8 | 28 | 224 | 1 |
| m2-node-416-11776 | Cascade Lake | 416 | 11776 | 1:28.31 | 8 | 28 | 224 | 2 |
| m3-node-128-1952 | Ice Lake | 128 | 1952 | 1:15.25 | 2 | 36 | 72 | 2 |
| m3-node-128-3904 | Ice Lake | 128 | 3904 | 1:30.5 | 2 | 36 | 72 | 2 |
| m4-node-224-2976 | Emerald Rapids | 224 | 2976 | 1:13.3 | 2 | 112 | 224 | 1 |
| m4-node-224-5952 | Emerald Rapids | 224 | 5952 | 1:26.7 | 2 | 112 | 224 | 1 |
| n1-node-96-624 | Skylake | 96 | 624 | 1:6.5 | 2 | 28 | 56 | 96 |
| n2-node-80-640 | Cascade Lake | 80 | 640 | 1:8 | 2 | 24 | 48 | 80 |
| n2-node-128-864 | Ice Lake | 128 | 864 | 1:6.75 | 2 | 36 | 72 | 128 |
| n2d-node-224-896 | AMD EPYC Rome | 224 | 896 | 1:4 | 2 | 64 | 128 | 112 |
| n2d-node-224-1792 | AMD EPYC Milan | 224 | 1792 | 1:8 | 2 | 64 | 128 | 112 |
| n4-node-224-1372 | Emerald Rapids | 224 | 1372 | 1:6 | 2 | 60 | 120 | 90 |
¹ Node types with Local SSD: The -lssd suffix indicates that Local SSD disks are attached to the node. For A3 High, A3 Mega, and A2 Ultra node types, Local SSD disks are also attached by default when no suffix is specified. To provision these node types without Local SSD disks, use the -nolssd suffix (for example, a3-megagpu-node-208-1872-nolssd or a2-ultragpu-node-96-1360-nolssd).
For information about the prices of these node types, see sole-tenant node pricing.
All nodes let you schedule VMs of different shapes. The n node types are general-purpose nodes, on which you can schedule custom machine type instances. For recommendations about which node type to choose, see Recommendations for machine types. For information about performance, see CPU platforms.
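As a quick sketch, you can check which node types are offered in a particular zone with the gcloud CLI; the zone shown here is illustrative.

```bash
# List the sole-tenant node types that are available in a given zone.
gcloud compute sole-tenancy node-types list --zones=us-central1-a
```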
Node groups and VM provisioning
Sole-tenant node templates define the properties of a node group, and you must create a node template before creating a node group in a Google Cloud zone. When you create a group, specify the host maintenance policy for VMs on the node group, the number of nodes for the node group, and whether to share it with other projects or with the entire organization.
A node group can have zero or more nodes; for example, you can reduce the number of nodes in a node group to zero when you don't need to run any VMs on nodes in the group, or you can enable the node group autoscaler to manage the size of the node group automatically.
Before provisioning VMs on sole-tenant nodes, you must create a sole-tenant node group. A node group is a homogeneous set of sole-tenant nodes in a specific zone. Node groups can contain multiple VMs from the same machine series running on machine types of various sizes, as long as the machine type has 2 or more vCPUs.
When you create a node group, enable autoscaling so that the size of the group adjusts automatically to meet the requirements of your workload. If your workload requirements are static, you can manually specify the size of the node group.
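For example, a hedged sketch of creating an autoscaled node group from the template in the earlier example might look like the following. The group name, template name, zone, and sizes are illustrative values.

```bash
# Create a node group of two nodes from example-template and let the node
# group autoscaler grow it to at most five nodes. Names, zone, and sizes are
# illustrative.
gcloud compute sole-tenancy node-groups create example-group \
    --node-template=example-template \
    --target-size=2 \
    --autoscaler-mode=on \
    --min-nodes=2 \
    --max-nodes=5 \
    --zone=us-central1-a
```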
After creating a node group, you can provision VMs on the group or on a specific node within the group. For further control, use node affinity labels to schedule VMs on any node with matching affinity labels.
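As a sketch, provisioning a VM on any node in the group might look like the following; to target a specific node instead, the instance create command also accepts a --node flag. The VM name, machine type, group, and zone are illustrative.

```bash
# Provision a VM on any node in the node group. The VM's machine series must
# match the group's node type (n2 in this sketch).
gcloud compute instances create example-vm \
    --machine-type=n2-standard-8 \
    --node-group=example-group \
    --zone=us-central1-a
```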
After you've provisioned VMs on node groups, and optionally assigned affinity labels to provision VMs on specific node groups or nodes, consider labeling your resources to help manage your VMs. Labels are key-value pairs that can help you categorize your VMs so that you can view them in aggregate for reasons such as billing. For example, you can use labels to mark the role of a VM, its tenancy, the license type, or its location.
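For example, a minimal sketch of attaching resource labels to an existing VM follows; the label keys and values are hypothetical.

```bash
# Attach resource labels to an existing VM so that you can filter and report
# on it later, for example in billing exports. Labels, VM name, and zone are
# illustrative.
gcloud compute instances add-labels example-vm \
    --labels=tenancy=sole-tenant,role=database,license=byol \
    --zone=us-central1-a
```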
Host maintenance policy
Depending on your licensing scenarios and workloads, you might want to limit the number of physical cores used by your VMs. The host maintenance policy that you choose might depend on your licensing or compliance requirements, for example, or you might want a policy that lets you limit the usage of physical servers. With all of these policies, your VMs remain on dedicated hardware.
When you schedule VMs on sole-tenant nodes, you can choose from the following three host maintenance policy options, which let you determine how and whether Compute Engine live migrates VMs during host events, which occur approximately every 4 to 6 weeks. During maintenance, Compute Engine live migrates, as a group, all of the VMs on the host to a different sole-tenant node. In some cases, however, Compute Engine might break up the VMs into smaller groups and live migrate each smaller group of VMs to a separate sole-tenant node.
Default host maintenance policy
This is the default host maintenance policy. VMs on node groups configured with this policy follow the traditional maintenance behavior for non-sole-tenant VMs. That is, depending on the VM's on-host maintenance setting, VMs live migrate to a new sole-tenant node in the node group before a host maintenance event, and this new sole-tenant node runs only the customer's VMs.
This policy is most suitable for per-user or per-device licenses that require live migration during host events. This setting doesn't restrict migration of VMs to within a fixed pool of physical servers, and it is recommended for general workloads that have no physical server requirements and don't rely on existing licenses.
Because this policy live migrates VMs to any server without considering existing server affinity, it is not suitable for scenarios that require minimizing the use of physical cores during host events.
The following figure shows an animation of the Default host maintenance policy.

Restart in place host maintenance policy
When you use this host maintenance policy, Compute Engine stops VMs during host events, and then restarts the VMs on the same physical server after the host event. You must set the VM's on-host maintenance setting to TERMINATE when using this policy.
This policy is most suitable for workloads that are fault-tolerant and can tolerate approximately one hour of downtime during host events, workloads that must remain on the same physical server, workloads that don't require live migration, or workloads with licenses that are based on the number of physical cores or processors.
With this policy, the instance can be assigned to the node group by using the node-name or node-group-name affinity labels, or by using node affinity labels.
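As a hedged sketch, creating a VM for a node group that uses this policy might look like the following; --maintenance-policy=TERMINATE sets the VM's on-host maintenance behavior, and the VM name, node group, and zone are hypothetical.

```bash
# Create a VM on the node group and set its on-host maintenance behavior to
# TERMINATE, as required by the restart-in-place policy. Names and zone are
# illustrative.
gcloud compute instances create example-vm \
    --machine-type=n2-standard-8 \
    --node-group=example-group \
    --maintenance-policy=TERMINATE \
    --zone=us-central1-a
```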
The following figure shows an animation of the Restart in place host maintenance policy.

Migrate within node group host maintenance policy
When you use this host maintenance policy, Compute Engine live migrates VMs within a fixed-size group of physical servers during host events, which helps limit the number of unique physical servers used by the VM.
This policy is most suitable for high-availability workloads with licenses that are based on the number of physical cores or processors, because with this host maintenance policy, each sole-tenant node in the group is pinned to a fixed set of physical servers. This differs from the default policy, which lets VMs migrate to any server.
To ensure capacity for live migration, Compute Engine reserves 1 holdback node for every 20 nodes that you reserve. The following figure shows an animation of the Migrate within node group host maintenance policy.

The following table shows how many holdback nodes Compute Engine reserves depending on how many nodes you reserve for your node group.
| Total nodes in group | Holdback nodes reserved for live migration |
|---|---|
| 1 | Not applicable. Must reserve at least 2 nodes. |
| 2 to 20 | 1 |
| 21 to 40 | 2 |
| 41 to 60 | 3 |
| 61 to 80 | 4 |
| 81 to 100 | 5 |
Pin an instance to multiple node groups
You can pin an instance to multiple node groups by using the node-group-name affinity label under the following conditions:
- The instance that you want to pin uses the default host maintenance policy (Migrate VM instance).
- The host maintenance policy of all the node groups that you want to pin the instance to is Migrate within node group. If you try to pin an instance to node groups with different host maintenance policies, the operation fails with an error.
For example, if you want to pin an instance test-node to two node groups, node-group1 and node-group2, verify the following:
- The host maintenance policy of test-node is Migrate VM instance.
- The host maintenance policy of node-group1 and node-group2 is Migrate within node group.
You cannot assign an instance to any specific node by using the node-name affinity label. You can use any custom node affinity labels for your instances as long as the instances are assigned the node-group-name label and not the node-name label.
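As a hedged sketch, a node affinity file that pins a VM to either of two hypothetical node groups, node-group1 and node-group2, might look like the following; the file name, VM name, machine type, and zone are also illustrative.

```bash
# Write a node affinity file that allows the VM to land in node-group1 or
# node-group2, then reference it at instance creation.
cat > affinity.json <<'EOF'
[
  {
    "key": "compute.googleapis.com/node-group-name",
    "operator": "IN",
    "values": ["node-group1", "node-group2"]
  }
]
EOF

gcloud compute instances create test-node \
    --machine-type=n2-standard-8 \
    --node-affinity-file=affinity.json \
    --zone=us-central1-a
```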
Maintenance windows
If you are managing workloads, such as finely tuned databases, that might be sensitive to the performance impact of live migration, you can determine when maintenance begins on a sole-tenant node group by specifying a maintenance window when you create the node group. You can't modify the maintenance window after you create the node group.
Maintenance windows are 4-hour blocks of time that you can use to specify when Google performs maintenance on your sole-tenant VMs. Maintenance events occur approximately once every 4 to 6 weeks.
The maintenance window applies to all VMs in the sole-tenant node group, and it only specifies when the maintenance begins. Maintenance is not guaranteed to finish during the maintenance window, and there is no guarantee on how frequently maintenance occurs. Maintenance windows are not supported on node groups with the Migrate within node group host maintenance policy.
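As a sketch, and assuming the node group create command accepts a --maintenance-window-start-time flag, specifying a window when you create the group might look like the following; the group name, template, size, start time, and zone are illustrative.

```bash
# Create a node group whose maintenance window starts at 08:00 UTC.
# The --maintenance-window-start-time flag shown here is an assumption; names,
# zone, and size are illustrative.
gcloud compute sole-tenancy node-groups create example-group \
    --node-template=example-template \
    --target-size=2 \
    --maintenance-window-start-time=08:00 \
    --zone=us-central1-a
```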
Simulate a host maintenance event
You can simulate a host maintenance event to test how your workloads that are running on sole-tenant nodes behave during a host maintenance event. This lets you see the effects of the sole-tenant VM's host maintenance policy on the applications running on the VMs.
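For example, a minimal sketch of triggering a simulated maintenance event on a single VM follows; the VM name and zone are illustrative.

```bash
# Trigger a simulated maintenance event on a VM to observe how its host
# maintenance policy behaves.
gcloud compute instances simulate-maintenance-event example-vm \
    --zone=us-central1-a
```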
Host errors
When there is a rare critical hardware failure on the host (sole-tenant or multi-tenant), Compute Engine does the following:
- Retires the physical server and its unique identifier.
- Revokes your project's access to the physical server.
- Replaces the failed hardware with a new physical server that has a new unique identifier.
- Moves the VMs from the failed hardware to the replacement node.
- Restarts the affected VMs if you configured them to automatically restart.
Node affinity and anti-affinity
Sole-tenant nodes make sure that your VMs don't share a host with VMs from other projects, unless you use shared sole-tenant node groups. With shared sole-tenant node groups, other projects within the organization can provision VMs on the same host. However, you might still want to group several workloads together on the same sole-tenant node or isolate your workloads from one another on different nodes. For example, to help meet some compliance requirements, you might need to use affinity labels to separate sensitive workloads from non-sensitive workloads.
When you create a VM, you request sole-tenancy by specifying node affinity or anti-affinity, referencing one or more node affinity labels. You specify custom node affinity labels when you create a node template, and Compute Engine automatically includes some default affinity labels on each node. By specifying affinity when you create a VM, you can schedule VMs together on a specific node or nodes in a node group. By specifying anti-affinity when you create a VM, you can make sure that certain VMs are not scheduled together on the same node or nodes in a node group.
Node affinity labels are key-value pairs assigned to nodes, and are inherited from a node template. Affinity labels let you:
- Control how individual VM instances are assigned to nodes.
- Control how VM instances created from a template, such as those created by a managed instance group, are assigned to nodes.
- Group sensitive VM instances on specific nodes or node groups, separate from other VMs.
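For example, a hedged sketch of anti-affinity follows, assuming a hypothetical custom label workload=sensitive defined on the node template; the file name, VM name, machine type, and zone are also illustrative.

```bash
# Keep this VM off any node labeled workload=sensitive by using the NOT_IN
# operator in a node affinity file.
cat > anti-affinity.json <<'EOF'
[
  {
    "key": "workload",
    "operator": "NOT_IN",
    "values": ["sensitive"]
  }
]
EOF

gcloud compute instances create example-vm-2 \
    --machine-type=n2-standard-4 \
    --node-affinity-file=anti-affinity.json \
    --zone=us-central1-a
```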
Default affinity labels
Compute Engine assigns the following default affinity labels to each node:
- A label for the node group name:
  - Key: compute.googleapis.com/node-group-name
  - Value: Name of the node group.
- A label for the node name:
  - Key: compute.googleapis.com/node-name
  - Value: Name of the individual node.
- A label for the projects the node group is shared with:
  - Key: compute.googleapis.com/projects
  - Value: Project ID of the project containing the node group.
Custom affinity labels
You can create custom node affinity labels when you create a node template. These affinity labels are assigned to all nodes in node groups created from the node template. You can't add more custom affinity labels to nodes in a node group after the node group has been created.
For information about how to use affinity labels, see Configuring node affinity.
Pricing
To help you minimize the cost of your sole-tenant nodes, Compute Engine provides committed use discounts (CUDs) and sustained use discounts (SUDs). Note that for sole-tenancy premium charges, you can receive only flexible CUDs and SUDs, not resource-based CUDs.
Because you are already billed for the vCPUs and memory of your sole-tenant nodes, you don't pay extra for the VMs that you create on those nodes.
If you provision sole-tenant nodes with GPUs or Local SSD disks, you are billed for all of the GPUs or Local SSD disks on each node that you provision. The sole-tenancy premium is based only on the vCPUs and memory that you use for the sole-tenant node, and doesn't include GPUs or Local SSD disks.
For more information, see Sole-tenant node pricing.
Availability
Sole-tenant nodes are available in select zones. For high availability, schedule VMs on sole-tenant nodes in different zones.
Before using GPUs or Local SSD disks on sole-tenant nodes, make sure you have enough GPU or Local SSD quota in the zone where you are reserving the resource.
Compute Engine supports GPUs on n1, g2, a2-highgpu, a2-megagpu, a2-ultragpu, a3-highgpu, and a3-megagpu sole-tenant node types that are in zones with GPU support. The following table shows the types of GPUs that you can attach to n1, g2, a2, and a3 nodes, and how many GPUs you must attach when you create the node template.

| GPU type | GPU quantity | Sole-tenant node type |
|---|---|---|
| NVIDIA A100 40GB | 8 | a2-highgpu |
| NVIDIA A100 40GB | 16 | a2-megagpu |
| NVIDIA A100 80GB | 8 | a2-ultragpu |
| NVIDIA H100 | 8 | a3-highgpu |
| NVIDIA H100 | 8 | a3-megagpu |
| NVIDIA L4 | 8 | g2 |
| NVIDIA P100 | 4 | n1 |
| NVIDIA P4 | 4 | n1 |
| NVIDIA T4 | 4 | n1 |
| NVIDIA V100 | 8 | n1 |

Compute Engine supports Local SSD disks on n1, n2, n2d, g2, a2-ultragpu, a3-highgpu, and a3-megagpu sole-tenant node types that are used in zones that support those machine series.
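As a sketch, and assuming the node template create command accepts an --accelerator flag, attaching GPUs when you create an n1 node template might look like the following; the template name, region, and GPU count follow the table above, but the exact flag syntax is an assumption.

```bash
# Create an n1 node template with 4 NVIDIA T4 GPUs attached per node.
# The --accelerator flag usage shown here is assumed; template name and
# region are illustrative.
gcloud compute sole-tenancy node-templates create gpu-template \
    --node-type=n1-node-96-624 \
    --accelerator=type=nvidia-tesla-t4,count=4 \
    --region=us-central1
```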
Restrictions
- You can't use sole-tenant VMs with the following machine series and types: T2D, T2A, E2, C2D, A3 Ultra, A3 Edge, A4, A4X, G4, or bare metal instances.
- Sole-tenant VMs can't specify a minimum CPU platform.
- You can't migrate a VM to a sole-tenant node if that VM specifies a minimum CPU platform. To migrate a VM to a sole-tenant node, remove the minimum CPU platform specification by setting it to automatic before updating the VM's node affinity labels.
- Sole-tenant nodes don't support preemptible VM instances.
- For information about the limitations of using Local SSD disks on sole-tenant nodes, see Local SSD data persistence.
- For information about how using GPUs affects live migration, see the limitations of live migration.
- Sole-tenant nodes with GPUs don't support VMs without GPUs.
- Only N1, N2, N2D, and N4 sole-tenant nodes support overcommitting CPUs.
- C3, C3D, C4, C4A, C4D, and N4 VMs are scheduled with alignment to the underlying NUMA architecture of the sole-tenant node. Scheduling full and sub-NUMA VM shapes on the same node might lead to fragmentation, whereby a larger VM shape can't be scheduled even though several smaller shapes with the same total resource requirements can.
- C3 and C4 sole-tenant nodes require that VMs have the same vCPU-to-memory ratio as the node type. For example, you can't place a c3-standard VM on a -highmem node type.
- You can't update the maintenance policy on a live node group.
What's next
- Learn how to create, configure, and consume your sole-tenant nodes.
- Learn how to overcommit CPUs on sole-tenant VMs.
- Learn how to bring your own licenses.
- Review our best practices for using sole-tenant nodes to run VM workloads.