Provision VMs on sole-tenant nodes
This page describes how to provision VMs on sole-tenant nodes, which are physical servers that run VMs only from a single project. Before provisioning VMs on sole-tenant nodes, read the sole-tenant node overview.
Before you begin
- Before provisioning VMs on sole-tenant nodes, check your quota. Depending on the number and size of nodes that you reserve, you might need to request additional quota.
- Create a sole-tenant node template.
- Create a sole-tenant node group.
- If you haven't already, set up authentication. Authentication verifies your identity for access to Google Cloud services and APIs. To run code or samples from a local development environment, you can authenticate to Compute Engine by selecting one of the following options:
Select the tab for how you plan to use the samples on this page:
Console
When you use the Google Cloud console to access Google Cloud services and APIs, you don't need to set up authentication.
gcloud
Install the Google Cloud CLI. After installation, initialize the Google Cloud CLI by running the following command:

gcloud init
If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.
Note: If you installed the gcloud CLI previously, make sure you have the latest version by running gcloud components update.
- Set a default region and zone.
REST
To use the REST API samples on this page in a local development environment, you use the credentials you provide to the gcloud CLI.
Install the Google Cloud CLI. After installation, initialize the Google Cloud CLI by running the following command:

gcloud init
If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.
Note: If you installed the gcloud CLI previously, make sure you have the latest version by running gcloud components update.
For more information, see Authenticate for using REST in the Google Cloud authentication documentation.
Create a sole-tenant node template
Sole-tenant node templates are regional resources that specify properties for sole-tenant node groups. You must create a node template before you create a node group. However, if you're using the Google Cloud console, you must create node templates during the creation of a sole-tenant node group.
Permissions required for this task
To perform this task, you must have the following permissions:
- compute.nodeTemplates.create on the project or organization
Console
You must create a sole-tenant node template before you create a node group. If you're using the Google Cloud console, you create the node template during the creation of a sole-tenant node group. The new node template is created in the same region that you specify in the node group properties.
In the Google Cloud console, go to the Sole-tenant nodes page.
Click Create node group.
Specify a Name for the node group.
Specify a Region to create the node template in. You can use the node template to create node groups in any zone of this region.
Specify the Zone and click Continue.
In the Node template list, click Create node template to begin creating a sole-tenant node template.
Specify a Name for the node template.
Specify the Node type for each sole-tenant node in the node group to create based on this node template.
Optional: you can also specify the following properties for the node template.
- Add a local SSD and GPU accelerator.
- Select Enable CPU overcommit to control CPU overcommit levels for each VM scheduled on the node.
- Add Node affinity labels. Affinity labels let you logically group nodes and node groups, and later, when provisioning VMs, you can specify affinity labels on the VMs to schedule VMs on a specific set of nodes or node groups. For more information, see Node affinity and anti-affinity.
Click Create to finish creating your node template.
Optional: to add a new sole-tenant node template in a different region, repeat the preceding steps.
To view the node templates, click Node templates in the Sole-tenant nodes page.
gcloud
Use the gcloud compute sole-tenancy node-templates create command to create a node template:
gcloud compute sole-tenancy node-templates create TEMPLATE_NAME \
    --node-type=NODE_TYPE \
    [--region=REGION \]
    [--node-affinity-labels=AFFINITY_LABELS \]
    [--accelerator type=GPU_TYPE,count=GPU_COUNT \]
    [--disk type=local-ssd,count=DISK_COUNT,size=DISK_SIZE \]
    [--cpu-overcommit-type=CPU_OVERCOMMIT_TYPE]
Replace the following:
- TEMPLATE_NAME: the name for the new node template.
- NODE_TYPE: the node type for sole-tenant nodes created based on this template. Use the gcloud compute sole-tenancy node-types list command to get a list of the node types available in each zone.
- REGION: the region to create the node template in. You can use this template to create node groups in any zone of this region.
- AFFINITY_LABELS: the keys and values, [KEY=VALUE,...], for affinity labels. Affinity labels let you logically group nodes and node groups and later, when provisioning VMs, you can specify affinity labels on the VMs to schedule VMs on a specific set of nodes or node groups. For more information, see Node affinity and anti-affinity.
- GPU_TYPE: the type of GPU for each sole-tenant node created based on this node template. For information on the zonal availability of the required sole-tenant node type, use the gcloud compute sole-tenancy node-types list command. For example, to see zones for a2-highgpu node types, add the flag --filter="name~'a2-highgpu'". For the GPU and node types available, see the table in the GPU_COUNT description.
- GPU_COUNT: the number of GPUs to attach to each sole-tenant node. The value for GPU_COUNT depends on the GPU_TYPE and the sole-tenant node type. Set GPU_COUNT to the value shown in the following table:

  Node type    GPU_TYPE               GPU_COUNT
  a2-highgpu   nvidia-a100-40gb       8
  a2-megagpu   nvidia-a100-40gb       16
  a2-ultragpu  nvidia-a100-80gb       8
  a3-highgpu   nvidia-h100-80gb       8
  a3-megagpu   nvidia-h100-mega-80gb  8
  g2           nvidia-l4              8
  n1           nvidia-tesla-p100      4
  n1           nvidia-tesla-p4        4
  n1           nvidia-tesla-t4        4
  n1           nvidia-tesla-v100      8

- DISK_COUNT: number of Local SSD disks. Set to 16 or 24. This parameter is not required for A2 Ultra, A3 High, and A3 Mega node types because they include a fixed number of Local SSD disks.
- DISK_SIZE: optional value for the partition size of the local SSD in GB. The only supported partition size is 375, and if you do not set this value it defaults to 375.
- CPU_OVERCOMMIT_TYPE: the overcommit type for CPUs on a VM. Set to enabled or none.
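For example, a minimal invocation might look like the following sketch; the template name (n2-template), region, and affinity label are hypothetical placeholders, and n2-node-80-640 is one of the available sole-tenant node types:

gcloud compute sole-tenancy node-templates create n2-template \
    --node-type=n2-node-80-640 \
    --region=us-central1 \
    --node-affinity-labels=environment=prod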
REST
Use the nodeTemplates.insert method to create a node template:
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/nodeTemplates

{
  "name": "TEMPLATE_NAME",
  "nodeType": "NODE_TYPE",
  "nodeAffinityLabels": {
    "KEY": "VALUE",
    ...
  },
  "accelerators": [
    {
      "acceleratorType": "GPU_TYPE",
      "acceleratorCount": GPU_COUNT
    }
  ],
  "disks": [
    {
      "diskType": "local-ssd",
      "diskSizeGb": DISK_SIZE,
      "diskCount": DISK_COUNT
    }
  ],
  "cpuOvercommitType": CPU_OVERCOMMIT_TYPE
}

Replace the following:
- PROJECT_ID: the project ID.
- REGION: the region to create the node template in. You can use this template to create node groups in any zone of this region.
- TEMPLATE_NAME: the name for the new node template.
- NODE_TYPE: the node type for sole-tenant nodes created based on this template. Use the nodeTypes.list method to get a list of the node types available in each zone.
- KEY: the nodeAffinityLabels value that specifies the key portion of a node affinity label expressed as a key-value pair. Affinity labels let you logically group nodes and node groups, and later, when provisioning VMs, you can specify affinity labels on the VMs to schedule VMs on a specific set of nodes or node groups. For more information, see Node affinity and anti-affinity.
- VALUE: the nodeAffinityLabels value that specifies the value portion of a node affinity label key-value pair.
- GPU_TYPE: the type of GPU for each sole-tenant node created based on this node template. For information on the zonal availability of the required sole-tenant node type, use the nodeTypes.list method. For example, to see zones for a2-highgpu node types, use the filter name~"a2-highgpu.*". For the GPU and node types available, see the table in the GPU_COUNT description.
- GPU_COUNT: the number of GPUs for each sole-tenant node created based on this node template. The value for GPU_COUNT depends on the GPU_TYPE and the sole-tenant node type. Set GPU_COUNT to the value shown in the following table:

  Node type    GPU_TYPE               GPU_COUNT
  a2-highgpu   nvidia-a100-40gb       8
  a2-megagpu   nvidia-a100-40gb       16
  a2-ultragpu  nvidia-a100-80gb       8
  a3-highgpu   nvidia-h100-80gb       8
  a3-megagpu   nvidia-h100-mega-80gb  8
  g2           nvidia-l4              8
  n1           nvidia-tesla-p100      4
  n1           nvidia-tesla-p4        4
  n1           nvidia-tesla-t4        4
  n1           nvidia-tesla-v100      8

- DISK_SIZE: optional value for the partition size of the local SSD in GB. The only supported partition size is 375, and if you don't set this value it defaults to 375.
- DISK_COUNT: number of Local SSD disks. Set to 16 or 24. This parameter is not required for A3 High and A3 Mega node types because they include a fixed number of Local SSD disks.
- CPU_OVERCOMMIT_TYPE: CPU overcommit type. Set to enabled, none, or CPU_OVERCOMMIT_TYPE_UNSPECIFIED.
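If you're calling the REST API from a shell, a minimal request might look like the following sketch; the project ID (my-project), region, template name, and node type are hypothetical placeholders, and the gcloud CLI supplies the access token:

curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d '{"name": "n2-template", "nodeType": "n2-node-80-640"}' \
    "https://compute.googleapis.com/compute/v1/projects/my-project/regions/us-central1/nodeTemplates"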
Create a sole-tenant node group
With the previously created sole-tenant node template, create a sole-tenant node group. A sole-tenant node group inherits properties specified by the sole-tenant node template and has additional values that you must specify.
Permissions required for this task
To perform this task, you must have the following permissions:
- compute.nodeTemplates.use on the node template
- compute.nodeGroups.create on the project or organization
Console
In the Google Cloud console, go to the Sole-tenant nodes page.
Click Create node group to begin creating a node group.
Specify a Name for the node group.
Specify the Region for the node group to display the available node templates in that region.
Specify the Zone within the region to create the node group in.
Specify the Node template to create the node group, or click Create node template to create a new sole-tenant node template. The selected node template is applied to the node group.
Choose one of the following for the Autoscaling mode for the node group autoscaler:
- Off: Manually manage the size of the node group.
- On: Have nodes automatically added to or removed from the node group.
- Only scale out: Add nodes to the node group when extra capacity is required.
Specify the Number of nodes for the group. If you enable the node group autoscaler, specify a range for the size of the node group. You can manually change the values later.
Set the Maintenance policy of the sole-tenant node group in the Configure Maintenance Settings section to one of the following values. The maintenance policy lets you configure the behavior of VMs on the node group during host maintenance events. For more information, see Maintenance policies.
- Default
- Restart in place
- Migrate within node group
You can choose between regular maintenance windows and advanced maintenance control to handle maintenance for your sole-tenant node group, as follows:
- Maintenance Window: Select the time period during which you want planned maintenance events to happen for the nodes in the sole-tenant node groups.
- Opt-in for advanced maintenance control for sole-tenancy: Advanced maintenance control for sole-tenancy lets you control planned maintenance events for sole-tenant node groups and minimize maintenance-related disruptions. To opt in to advanced maintenance control, click the Opt-in for advanced maintenance control for sole-tenancy toggle to the on position. If you choose to use this option for node maintenance, the Maintenance window field is disabled, and maintenance occurs as configured in advanced maintenance control.
Note that advanced maintenance control only supports the Default maintenance policy.
Configure the share settings by specifying one of the following in Configure share settings:
- To share the node group with all projects in your organization, choose Share this node group with all projects within the organization.
- To share the node group with specific projects within your organization, choose Share this node group with selected projects within the organization.
- If you do not want to share the node group, choose Do not share this node group with other projects. For more information about sharing node groups, see Share sole-tenant node groups.
ClickCreate to finish creating the node group.
gcloud
Run the gcloud compute sole-tenancy node-groups create command to create a node group based on a previously created node template:
gcloud compute sole-tenancy node-groups create GROUP_NAME \
    --node-template=TEMPLATE_NAME \
    --target-size=TARGET_SIZE \
    [--zone=ZONE \]
    [--maintenance-policy=MAINTENANCE_POLICY \]
    [--maintenance-window-start-time=START_TIME \]
    [--autoscaler-mode=AUTOSCALER_MODE \
    --min-nodes=MIN_NODES \
    --max-nodes=MAX_NODES]
Replace the following:
- GROUP_NAME: the name for the new node group.
- TEMPLATE_NAME: the name of the node template to use to create this group.
- TARGET_SIZE: the number of nodes to create in the group.
- ZONE: the zone to create the node group in. This must be in the same region as the node template on which you are basing the node group.
- MAINTENANCE_POLICY: the maintenance policy for the node group. For more information, see Maintenance policies. This must be one of the following values:
  - default
  - restart-in-place
  - migrate-within-node-group
  Alternatively, you can opt in to advanced maintenance control for the sole-tenant node group by using the --maintenance-interval flag. For more information, see Enable advanced maintenance control on a sole-tenant node.
- START_TIME: the start time in GMT for the maintenance window for the VMs in this node group. Set to one of: 00:00, 04:00, 08:00, 12:00, 16:00, or 20:00. If not set, the node group does not have a set maintenance window.
- AUTOSCALER_MODE: the autoscaler policy for the node group. This must be one of the following values:
  - off: manually manage the size of the node group.
  - on: have nodes automatically added to or removed from the node group.
  - only-scale-out: add nodes to the node group when extra capacity is required.
- MIN_NODES: the minimum size of the node group. The default value is 0 and must be an integer value less than or equal to MAX_NODES.
- MAX_NODES: the maximum size of the node group. This must be less than or equal to 100 and greater than or equal to MIN_NODES. Required if AUTOSCALER_MODE is not set to off.
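For example, the following sketch creates a two-node group with autoscaling enabled; the group and template names, zone, and size limits are hypothetical placeholders:

gcloud compute sole-tenancy node-groups create n2-group \
    --node-template=n2-template \
    --target-size=2 \
    --zone=us-central1-a \
    --maintenance-policy=migrate-within-node-group \
    --autoscaler-mode=on \
    --min-nodes=2 \
    --max-nodes=10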
REST
Use the nodeGroups.insert method to create a node group based on a previously created node template:
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/nodeGroups?initialNodeCount=TARGET_SIZE

{
  "nodeTemplate": "regions/REGION/nodeTemplates/TEMPLATE_NAME",
  "name": "GROUP_NAME",
  "maintenancePolicy": MAINTENANCE_POLICY,
  "maintenanceWindow": {
    "startTime": "START_TIME"
  },
  "autoscalingPolicy": {
    "mode": AUTOSCALER_MODE,
    "minNodes": MIN_NODES,
    "maxNodes": MAX_NODES
  }
}
Replace the following:
- PROJECT_ID: the project ID.
- ZONE: the zone to create the node group in. This must be in the same region as the node template on which you are basing the node group.
- TARGET_SIZE: the number of nodes to create in the group.
- REGION: the region to create the node group in. You must have a node template in the selected region.
- TEMPLATE_NAME: the name of the node template to use to create this group.
- GROUP_NAME: the name for the new node group.
- MAINTENANCE_POLICY: the maintenance policy for the node group. This must be one of the following values:
  - DEFAULT
  - RESTART_IN_PLACE
  - MIGRATE_WITHIN_NODE_GROUP
  Alternatively, you can opt in to advanced maintenance control for the sole-tenant node group by using the maintenanceInterval field. For more information, see Enable advanced maintenance control on a sole-tenant node.
- START_TIME: the start time in GMT for the maintenance window for the VMs in this node group. Set to one of: 00:00, 04:00, 08:00, 12:00, 16:00, or 20:00. If not set, the node group does not have a set maintenance window.
- AUTOSCALER_MODE: the autoscaler policy for the node group. This must be one of the following values:
  - OFF: manually manage the size of the node group.
  - ON: have nodes automatically added to or removed from the node group.
  - ONLY_SCALE_OUT: add nodes to the node group when extra capacity is required.
- MIN_NODES: the minimum size of the node group. The default is 0 and must be an integer value less than or equal to MAX_NODES.
- MAX_NODES: the maximum size of the node group. This must be less than or equal to 100 and greater than or equal to MIN_NODES. Required if AUTOSCALER_MODE is not set to OFF.
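As a rough illustration, a shell-based request for this method might look like the following sketch; the project ID, zone, and resource names are hypothetical placeholders:

curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d '{
          "name": "n2-group",
          "nodeTemplate": "regions/us-central1/nodeTemplates/n2-template",
          "maintenancePolicy": "MIGRATE_WITHIN_NODE_GROUP"
        }' \
    "https://compute.googleapis.com/compute/v1/projects/my-project/zones/us-central1-a/nodeGroups?initialNodeCount=2"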
Provision a sole-tenant VM
After creating a node group based on a previously created node template, you can provision individual VMs on a sole-tenant node group.
To provision a VM on a specific node or node group that has affinity labels that match those you previously assigned to the node template, follow the standard procedure for creating a VM instance, and assign affinity labels to the VM.
Or, you can use the following procedure to provision a VM on a sole-tenant node from the node group details page. Based on the node group that you provision VMs on, Compute Engine assigns affinity labels.
Console
In the Google Cloud console, go to the Sole-tenant nodes page.
Click Node groups.
Click the Name of the node group to provision a VM instance on. Optionally, to provision a VM on a specific sole-tenant node, click the name of that node.
Click Create instance to provision a VM instance on this node group, note the values automatically applied for the Name, Region, and Zone, and modify those values as necessary.
Select a Machine configuration by specifying the Machine family, Series, and Machine type. Choose the Series that corresponds to the sole-tenant node type.
Modify the Boot disk, Firewall, and other settings as necessary.
Click Sole Tenancy, note the automatically assigned Node affinity labels, and use Browse to adjust as necessary.
Click Management, and for On host maintenance, choose one of the following:
- Migrate VM instance (recommended): VM migrated to another node in the node group during maintenance events.
- Terminate: VM stopped during maintenance events.
Choose one of the following for the Automatic restart:
- On (recommended): Automatically restarts VMs if they are stopped for maintenance events.
- Off: Does not automatically restart VMs after a maintenance event.
Click Create to finish creating your sole-tenant VM.
gcloud
Use the gcloud compute instances create command to provision a VM on a sole-tenant node group:
gcloud compute instances create VM_NAME \
    [--zone=ZONE \]
    --image-family=IMAGE_FAMILY \
    --image-project=IMAGE_PROJECT \
    --node-group=GROUP_NAME \
    --machine-type=MACHINE_TYPE \
    [--maintenance-policy=MAINTENANCE_POLICY \]
    [--accelerator type=GPU_TYPE,count=GPU_COUNT \]
    [--local-ssd interface=SSD_INTERFACE \]
    [--restart-on-failure]
The --restart-on-failure flag indicates whether sole-tenant VMs restart after stopping. This flag is enabled by default. Use --no-restart-on-failure to disable it.
Replace the following:
- VM_NAME: the name of the new sole-tenant VM.
- ZONE: the zone to provision the sole-tenant VM in.
- IMAGE_FAMILY: the image family of the image to use to create the VM.
- IMAGE_PROJECT: the image project of the image family.
- GROUP_NAME: the name of the node group to provision the VM on.
- MACHINE_TYPE: the machine type of the sole-tenant VM. Use the gcloud compute machine-types list command to get a list of available machine types for the project.
- MAINTENANCE_POLICY: specifies restart behavior of sole-tenant VMs during maintenance events. Set to one of the following:
  - MIGRATE: VM migrated to another node in the node group during maintenance events.
  - TERMINATE: VM stopped during maintenance events.
- GPU_TYPE: type of GPU. Set to one of the accelerator types specified when the node template was created.
- GPU_COUNT: number of GPUs of the total specified by the node template to attach to this VM. Default value is 1.
- SSD_INTERFACE: type of local SSD interface. You can only set this for instances created from a node template with local SSD support. If you specify this while creating the instance, and the node template does not support local SSD, instance creation fails. Set to nvme if the boot disk image drivers are optimized for NVMe, otherwise set to scsi. Specify this flag and a corresponding value once for each local SSD partition.
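For example, the following sketch provisions a VM on a hypothetical node group named n2-group, using a Debian 12 public image; the VM name, zone, and machine type are placeholders:

gcloud compute instances create my-st-vm \
    --zone=us-central1-a \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --node-group=n2-group \
    --machine-type=n2-standard-8 \
    --maintenance-policy=MIGRATE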
REST
Use the instances.insert method to provision a VM on a sole-tenant node group:
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/VM_ZONE/instances

{
  "machineType": "zones/MACHINE_TYPE_ZONE/machineTypes/MACHINE_TYPE",
  "name": "VM_NAME",
  "scheduling": {
    "onHostMaintenance": MAINTENANCE_POLICY,
    "automaticRestart": RESTART_ON_FAILURE,
    "nodeAffinities": [
      {
        "key": "compute.googleapis.com/node-group-name",
        "operator": "IN",
        "values": ["GROUP_NAME"]
      }
    ]
  },
  "networkInterfaces": [
    {
      "network": "global/networks/NETWORK",
      "subnetwork": "regions/REGION/subnetworks/SUBNETWORK"
    }
  ],
  "guestAccelerators": [
    {
      "acceleratorType": GPU_TYPE,
      "acceleratorCount": GPU_COUNT
    }
  ],
  "disks": [
    {
      "boot": true,
      "initializeParams": {
        "sourceImage": "projects/IMAGE_PROJECT/global/images/family/IMAGE_FAMILY"
      }
    },
    {
      "type": "SCRATCH",
      "initializeParams": {
        "diskType": "zones/LOCAL_SSD_ZONE/diskTypes/local-ssd"
      },
      "autoDelete": true,
      "interface": "SSD_INTERFACE"
    }
  ]
}
Replace the following:
- PROJECT_ID: the project ID.
- VM_ZONE: the zone to provision the sole-tenant VM in.
- MACHINE_TYPE_ZONE: the zone of the machine type.
- MACHINE_TYPE: the machine type of the sole-tenant VM. Use the machineTypes.list method to get a list of available machine types for the project.
- VM_NAME: the name of the new sole-tenant VM.
- MAINTENANCE_POLICY: specifies restart behavior of sole-tenant VMs during maintenance events. Set to one of the following:
  - MIGRATE: VM migrated to another node in the node group during maintenance events.
  - TERMINATE: VM stopped during maintenance events.
- RESTART_ON_FAILURE: indicates whether sole-tenant VMs restart after stopping. Default is true.
- GROUP_NAME: the name of the node group to provision the VM on.
- NETWORK: the URL of the network resource for this VM.
- REGION: the region containing the subnetwork for this VM.
- SUBNETWORK: the URL of the subnetwork resource for this VM.
- GPU_TYPE: the type of GPU. Set to one of the accelerator types specified when the node template was created.
- GPU_COUNT: the number of GPUs of the total specified by the node template to attach to this VM. Default value is 1.
- IMAGE_PROJECT: the image project of the image family.
- IMAGE_FAMILY: the image family of the image to use to create the VM.
- LOCAL_SSD_ZONE: the local SSD's zone.
- SSD_INTERFACE: the type of local SSD interface. Set to NVME if the boot disk image drivers are optimized for NVMe, otherwise set to SCSI.
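A shell-based version of this request might look like the following sketch; the project ID, zone, VM name, node group, and image are hypothetical placeholders, and the optional accelerator and local SSD disks are omitted:

curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d '{
          "name": "my-st-vm",
          "machineType": "zones/us-central1-a/machineTypes/n2-standard-8",
          "scheduling": {
            "onHostMaintenance": "MIGRATE",
            "automaticRestart": true,
            "nodeAffinities": [
              {
                "key": "compute.googleapis.com/node-group-name",
                "operator": "IN",
                "values": ["n2-group"]
              }
            ]
          },
          "networkInterfaces": [
            { "network": "global/networks/default" }
          ],
          "disks": [
            {
              "boot": true,
              "initializeParams": {
                "sourceImage": "projects/debian-cloud/global/images/family/debian-12"
              }
            }
          ]
        }' \
    "https://compute.googleapis.com/compute/v1/projects/my-project/zones/us-central1-a/instances"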
Provision a group of sole-tenant VMs
Managed instance groups (MIGs) let you provision a group of identical sole-tenant VMs. Affinity labels let you specify the sole-tenant node or node group on which to provision the group of sole-tenant VMs.
For regional MIGs, you must create node groups in each of the regional MIG's zones, and you must specify node affinities for those node groups in the regional MIG's instance template, as shown in the sketch that follows.
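For example, a node affinity file passed to the instance template with --node-affinity-file might list one node group per zone of the regional MIG; the group names below are hypothetical placeholders:

[
  {
    "key": "compute.googleapis.com/node-group-name",
    "operator": "IN",
    "values": ["node-group-us-central1-a", "node-group-us-central1-b"]
  }
]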
Warning: For regional MIGs, you must set the MIG's target distribution shape to EVEN. If you use a different shape, the MIG's VMs might not be created in your sole-tenant nodes.
gcloud
Use the gcloud compute instance-templates create command to create a managed instance group template for a group of VMs to create on a sole-tenant node group:

gcloud compute instance-templates create INSTANCE_TEMPLATE \
    --machine-type=MACHINE_TYPE \
    --image-project=IMAGE_PROJECT \
    --image-family=IMAGE_FAMILY \
    --node-group=GROUP_NAME \
    [--accelerator type=GPU_TYPE,count=GPU_COUNT \]
    [--local-ssd interface=SSD_INTERFACE]
Replace the following:
- INSTANCE_TEMPLATE: the name for the new instance template.
- MACHINE_TYPE: the machine type of the sole-tenant VM. Use the gcloud compute machine-types list command to get a list of available machine types for the project.
- IMAGE_PROJECT: the image project of the image family.
- IMAGE_FAMILY: the image family of the image to use to create the VM.
- GROUP_NAME: the name of the node group to provision the VM on. Alternatively, if you want to use this instance template to create a regional MIG that exists in more than one zone, use the --node-affinity-file flag to specify a list of values for the regional MIG's node groups.
- GPU_TYPE: type of GPU. Set to one of the accelerator types specified when the node template was created.
- GPU_COUNT: number of GPUs of the total specified by the node template to attach to this VM. Default value is 1.
- SSD_INTERFACE: type of local SSD interface. Set to nvme if the boot disk image drivers are optimized for NVMe, otherwise set to scsi. Specify this flag and a corresponding value once for each local SSD partition.
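For example, the following sketch creates an instance template that pins VMs to a hypothetical node group named n2-group; the template name, machine type, and image are placeholders:

gcloud compute instance-templates create st-template \
    --machine-type=n2-standard-8 \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --node-group=n2-group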
Use the gcloud compute instance-groups managed create command to create a managed instance group within your sole-tenant node group:

gcloud compute instance-groups managed create INSTANCE_GROUP_NAME \
    --size=SIZE \
    --template=INSTANCE_TEMPLATE \
    --zone=ZONE
Replace the following:
- INSTANCE_GROUP_NAME: the name for this instance group.
- SIZE: the number of VMs to include in this instance group. Your node group must have enough resources to accommodate the instances in this managed instance group. Use the managed instance group autoscaler to automatically manage the size of managed instance groups.
- INSTANCE_TEMPLATE: the name of the instance template to use to create this MIG. The template must have one or more node affinity labels pointing to the appropriate node groups.
- ZONE: the zone to create the managed instance group in. For a regional MIG, replace the --zone flag with the --region flag and specify a region; also add the --zones flag to specify all of the zones where the node groups exist.
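For example, the following sketch creates a four-VM MIG from the hypothetical template in the previous sketch; the group name and zone are placeholders:

gcloud compute instance-groups managed create st-mig \
    --size=4 \
    --template=st-template \
    --zone=us-central1-a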
REST
Use the instanceTemplates.insert method to create a managed instance group template within your sole-tenant node group:

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/TEMPLATE_ZONE/instance-templates

{
  "name": "INSTANCE_TEMPLATE",
  "properties": {
    "machineType": "zones/MACHINE_TYPE_ZONE/machineTypes/MACHINE_TYPE",
    "scheduling": {
      "nodeAffinities": [
        {
          "key": "compute.googleapis.com/node-group-name",
          "operator": "IN",
          "values": ["GROUP_NAME"]
        }
      ]
    },
    "networkInterfaces": [
      {
        "network": "global/networks/NETWORK",
        "subnetwork": "regions/REGION/subnetworks/SUBNETWORK"
      }
    ],
    "guestAccelerators": [
      {
        "acceleratorType": GPU_TYPE,
        "acceleratorCount": GPU_COUNT
      }
    ],
    "disks": [
      {
        "boot": true,
        "initializeParams": {
          "sourceImage": "projects/IMAGE_PROJECT/global/images/family/IMAGE_FAMILY"
        }
      },
      {
        "type": "SCRATCH",
        "initializeParams": {
          "diskType": "zones/LOCAL_SSD_ZONE/diskTypes/local-ssd"
        },
        "autoDelete": true,
        "interface": "SSD_INTERFACE"
      }
    ]
  }
}

Replace the following:
- PROJECT_ID: the project ID.
- TEMPLATE_ZONE: the zone to create the instance template in.
- INSTANCE_TEMPLATE: the name of the new instance template.
- MACHINE_TYPE_ZONE: the zone of the machine type.
- MACHINE_TYPE: the machine type of the sole-tenant VM. Use the machineTypes.list method to get a list of available machine types for the project.
- GROUP_NAME: name of the node group to provision the VM on. If you want to use this instance template to create a regional MIG that exists in more than one zone, specify a list of node groups that exist in the same zones as the regional MIG's zones.
- NETWORK: the URL of the network resource for this instance template.
- REGION: the region containing the subnetwork for this instance template.
- SUBNETWORK: the URL of the subnetwork resource for this instance template.
- GPU_TYPE: the type of GPU. Set to one of the accelerator types specified when the node template was created.
- GPU_COUNT: the number of GPUs of the total specified by the node template to attach to this VM. Default value is 1.
- IMAGE_PROJECT: the image project of the image family.
- IMAGE_FAMILY: the image family of the image to use to create the VM.
- LOCAL_SSD_ZONE: the local SSD's zone.
- SSD_INTERFACE: the type of local SSD interface. Set to NVME if the boot disk image drivers are optimized for NVMe, otherwise set to SCSI.
Use the instanceGroupManagers.insert method to create a MIG within your sole-tenant node group based on the previously created instance template. Or, if you want to create a regional MIG, use the regionInstanceGroupManagers.insert method and specify the region and zones of all of the node groups as specified in the instance template.
For example, to create a zonal MIG, use the following request:

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instanceGroupManagers

{
  "baseInstanceName": "NAME_PREFIX",
  "name": "INSTANCE_GROUP_NAME",
  "targetSize": SIZE,
  "instanceTemplate": "global/instanceTemplates/INSTANCE_TEMPLATE"
}

Replace the following:
- PROJECT_ID: the project ID.
- ZONE: the zone to create the managed instance group in.
- NAME_PREFIX: the prefix name for each of the instances in the managed instance group.
- INSTANCE_GROUP_NAME: the name for the instance group.
- SIZE: the number of VMs to include in this instance group. Your node group must have enough resources to accommodate the instances in this managed instance group. Use the managed instance group autoscaler to automatically manage the size of managed instance groups.
- INSTANCE_TEMPLATE: the URL of the instance template to use to create this group. The template must have a node affinity label pointing to the appropriate node group.
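A shell-based version of this request might look like the following sketch; the project ID, zone, and resource names are hypothetical placeholders:

curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d '{
          "baseInstanceName": "st-vm",
          "name": "st-mig",
          "targetSize": 4,
          "instanceTemplate": "global/instanceTemplates/st-template"
        }' \
    "https://compute.googleapis.com/compute/v1/projects/my-project/zones/us-central1-a/instanceGroupManagers"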
Configure node affinity labels
Node affinity labels let you logically group node groups and schedule VMs on a specific set of node groups. You can also use node affinity labels to schedule VMs on node groups across different zones, and still keep the node groups in a logical group. The following procedure is an example of using affinity labels to associate VMs with a specific node group that is used for production workloads. This example shows how to schedule a single VM, but you could also use managed instance groups to schedule a group of VMs.
gcloud
Use the gcloud compute sole-tenancy node-templates create command to create a node template with a set of affinity labels for a production workload:

gcloud compute sole-tenancy node-templates create prod-template \
    --node-type=n1-node-96-624 \
    --node-affinity-labels workload=frontend,environment=prod
Use the gcloud compute sole-tenancy node-templates describe command to view the node affinity labels assigned to the node template.
Use the gcloud compute sole-tenancy node-groups create command to create a node group that uses the production template:

gcloud compute sole-tenancy node-groups create prod-group \
    --node-template=prod-template \
    --target-size=1
For your production VMs, create a node-affinity-prod.json file to specify the affinity of your production VMs. For example, you might create a file that specifies that VMs run only on nodes with both the workload=frontend and environment=prod affinities. Create the node affinity file by using Cloud Shell or create it in a location of your choice.

[
  {
    "key" : "workload",
    "operator" : "IN",
    "values" : ["frontend"]
  },
  {
    "key" : "environment",
    "operator" : "IN",
    "values" : ["prod"]
  }
]

Use the node-affinity-prod.json file with the gcloud compute instances create command to schedule a VM on the node group with matching affinity labels:

gcloud compute instances create prod-vm \
    --node-affinity-file node-affinity-prod.json \
    --machine-type=n1-standard-2
Use the gcloud compute instances describe command and check the scheduling field to view the node affinities assigned to the VM.
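For example, to print only the scheduling field for the prod-vm created earlier (the --zone value is a placeholder for wherever the VM was created):

gcloud compute instances describe prod-vm \
    --zone=us-central1-a \
    --format="yaml(scheduling)"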
Configure node anti-affinity labels
Node affinity labels can be configured as anti-affinity labels to prevent VMs from running on specific nodes. For example, you can use anti-affinity labels to prevent VMs that you are using for development purposes from being scheduled on the same nodes as your production VM. The following example shows how to use affinity labels to prevent VMs from running on specific node groups. This example shows how to schedule a single VM, but you could also use managed instance groups to schedule a group of VMs.
gcloud
For development VMs, specify the affinity of your development VMs by creating a node-affinity-dev.json file with Cloud Shell, or by creating it in a location of your choice. For example, create a file that configures VMs to run on any node group with the workload=frontend affinity as long as it is not environment=prod:

[
  {
    "key" : "workload",
    "operator" : "IN",
    "values" : ["frontend"]
  },
  {
    "key" : "environment",
    "operator" : "NOT_IN",
    "values" : ["prod"]
  }
]

Use the node-affinity-dev.json file with the gcloud compute instances create command to create the development VM:

gcloud compute instances create dev-vm \
    --node-affinity-file=node-affinity-dev.json \
    --machine-type=n1-standard-2
Use the gcloud compute instances describe command and check the scheduling field to view the node anti-affinities assigned to the VM.
What's next
- For information about sole-tenant node pricing, see Sole-tenant nodes pricing.
- For information about how to enable autoscaling on sole-tenant node groups, see Node group autoscaler.
- For more information about bringing existing licenses to Google Cloud, see Bring existing licenses.
- For more information about sole-tenant nodes, see Sole-tenant nodes.