Bulk create HPC-optimized instances with H4D
This document explains how to create, in bulk, a large number of high performance computing (HPC) virtual machine (VM) instances that are identical and independent from each other. The instances use H4D machine types and run on reserved blocks of capacity.
For more information about creating VMs in bulk, see About bulk creation of VMs. To create instances in bulk that don't use reservations for enhanced cluster management capabilities, see Create VMs in bulk instead.
To learn about other ways to create large clusters of tightly coupled H4D VMs, see the Overview of HPC cluster creation page.
Before you begin
Choose a consumption option: to create compute instances in bulk and enable enhanced cluster management capabilities, you can choose a Future Reservation in Calendar mode or Spot VMs.
If you choose to use Spot VMs, the VMs might not be compactly collocated. Also, Spot VMs can be preempted as needed, and they aren't eligible for managing host maintenance events for groups of VMs.
Obtain capacity: the process to obtain capacity differs for each consumption option.
To learn more, see Choose a consumption option and obtain capacity.
- If you haven't already, set up authentication. Authentication verifies your identity for access to Google Cloud services and APIs. To run code or samples from a local development environment, you can authenticate to Compute Engine by selecting one of the following options:
Select the tab for how you plan to use the samples on this page:
Console
When you use the Google Cloud console to access Google Cloud services and APIs, you don't need to set up authentication.
gcloud
Install the Google Cloud CLI. After installation, initialize the Google Cloud CLI by running the following command:
gcloud init
If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.
Note: If you installed the gcloud CLI previously, make sure you have the latest version by running gcloud components update.
- Set a default region and zone.
REST
To use the REST API samples on this page in a local development environment, you use the credentials you provide to the gcloud CLI.
Install the Google Cloud CLI. After installation, initialize the Google Cloud CLI by running the following command:
gcloud init
If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.
Note: If you installed the gcloud CLI previously, make sure you have the latest version by running gcloud components update. For more information, see Authenticate for using REST in the Google Cloud authentication documentation.
Required roles
To get the permissions that you need to create VMs in bulk, ask your administrator to grant you the following IAM roles on the project:
- Compute Instance Admin (v1) (roles/compute.instanceAdmin.v1)
- Compute Network Admin (roles/compute.networkAdmin)
For more information about granting roles, see Manage access to projects, folders, and organizations.
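For example, if you have permission to manage IAM policies on the project, you can grant these roles with the gcloud CLI. The project ID and user account in the following sketch are placeholders:
# Grant the Compute Instance Admin (v1) role to a user (placeholder values)
gcloud projects add-iam-policy-binding example-hpc-project \
    --member="user:hpc-admin@example.com" \
    --role="roles/compute.instanceAdmin.v1"

# Grant the Compute Network Admin role to the same user
gcloud projects add-iam-policy-binding example-hpc-project \
    --member="user:hpc-admin@example.com" \
    --role="roles/compute.networkAdmin"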
These predefined roles contain the permissions required to create VMs in bulk. To see the exact permissions that are required, expand the Required permissions section:
Required permissions
The following permissions are required to create VMs in bulk:
- compute.instances.create on the project
- To use a custom image to create the VM: compute.images.useReadOnly on the image
- To use a snapshot to create the VM: compute.snapshots.useReadOnly on the snapshot
- To use an instance template to create the VM: compute.instanceTemplates.useReadOnly on the instance template
- To specify a subnet for your VM: compute.subnetworks.use on the project or on the chosen subnet
- To specify a static IP address for the VM: compute.addresses.use on the project
- To assign an external IP address to the VM when using a VPC network: compute.subnetworks.useExternalIp on the project or on the chosen subnet
- To assign a legacy network to the VM: compute.networks.use on the project
- To assign an external IP address to the VM when using a legacy network: compute.networks.useExternalIp on the project
- To set VM instance metadata for the VM: compute.instances.setMetadata on the project
- To set tags for the VM: compute.instances.setTags on the VM
- To set labels for the VM: compute.instances.setLabels on the VM
- To set a service account for the VM to use: compute.instances.setServiceAccount on the VM
- To create a new disk for the VM: compute.disks.create on the project
- To attach an existing disk in read-only or read-write mode: compute.disks.use on the disk
- To attach an existing disk in read-only mode: compute.disks.useReadOnly on the disk
You might also be able to get these permissions with custom roles or other predefined roles.
Overview
Creating HPC instances in bulk with the H4D machine type includes the following steps:
- Optional: Create Virtual Private Cloud networks.
- Optional: Create a placement policy if you aren't creating the compute instances on the same block or sub-block.
- Create H4D instances in bulk.
Optional: Create Virtual Private Cloud networks
When you create a compute instance, you can specify a VPC network and subnet. If you omit this configuration, the default network and subnet are used.
- If you want to configure the H4D instances to use Cloud RDMA, then complete the steps in this section.
- If you don't want to use Cloud RDMA, then you can skip this section and use the default network instead.
To use Cloud RDMA with H4D instances, you must have at least two networks configured, one for each type of network interface (NIC):
- NIC type GVNIC: uses the gve driver for TCP/IP and internet traffic for normal VM-to-VM and VM-to-internet communication.
- NIC type IRDMA: uses the IDPF and iRDMA drivers for Cloud RDMA networking between instances.
Instances that use Cloud RDMA can have only one IRDMA interface. You can add up to nine additional GVNIC network interfaces, for a total of 10 vNICs per instance.
To set up the Falcon VPC networks to use with your instances, you can either follow the documented instructions or use the provided script.
Instruction guides
To create the networks, you can use the following instructions:
- To create the host networks for the GVNIC network interfaces, see Create and manage VPC networks. If you are configuring only one GVNIC network interface, you can use the default VPC network and the auto subnet that's in the same region as the instance.
- To create a network for the IRDMA network interface, see Create a VPC network with a Falcon VPC network profile. Use the default value for the maximum transmission unit (MTU) for a Falcon VPC network, which is 8896.
Script
You can create up to nine gVNIC network interfaces and one IRDMA network interface per instance. Each network interface must attach to a separate network. To create the networks, you can use the following script, which creates the networks and subnets for the gVNIC interfaces and one Falcon VPC network for IRDMA.
- Optional: Before running the script, list the Falcon VPC network profiles to verify that one is available:
gcloud compute network-profiles list
Copy the following code and run it in a Linux shell window.
#!/bin/bash

# Set the number of GVNIC interfaces to create. You can create up to 9.
NUM_GVNIC=NUMBER_OF_GVNIC

# Create regular VPC networks and subnets for the GVNIC interfaces
for N in $(seq 0 $(($NUM_GVNIC - 1))); do
  gcloud compute networks create GVNIC_NAME_PREFIX-net-$N \
      --subnet-mode=custom
  gcloud compute networks subnets create GVNIC_NAME_PREFIX-sub-$N \
      --network=GVNIC_NAME_PREFIX-net-$N \
      --region=REGION \
      --range=10.$N.0.0/16
  gcloud compute firewall-rules create GVNIC_NAME_PREFIX-internal-$N \
      --network=GVNIC_NAME_PREFIX-net-$N \
      --action=ALLOW \
      --rules=tcp:0-65535,udp:0-65535,icmp \
      --source-ranges=10.0.0.0/8
done

# Create SSH firewall rules
gcloud compute firewall-rules create GVNIC_NAME_PREFIX-ssh \
    --network=GVNIC_NAME_PREFIX-net-0 \
    --action=ALLOW \
    --rules=tcp:22 \
    --source-ranges=IP_RANGE

# Optional: Create a firewall rule for the external IP address for the
# first GVNIC network interface
gcloud compute firewall-rules create GVNIC_NAME_PREFIX-allow-ping-net-0 \
    --network=GVNIC_NAME_PREFIX-net-0 \
    --action=ALLOW \
    --rules=icmp \
    --source-ranges=IP_RANGE

# Create a Falcon VPC network for the Cloud RDMA network interface
gcloud compute networks create RDMA_NAME_PREFIX-irdma \
    --network-profile=ZONE-vpc-falcon \
    --subnet-mode=custom

# Create a subnet in the Falcon VPC network
gcloud compute networks subnets create RDMA_NAME_PREFIX-irdma-sub \
    --network=RDMA_NAME_PREFIX-irdma \
    --region=REGION \
    --range=10.2.0.0/16   # offset to avoid overlap with GVNIC subnet ranges
Replace the following:
- NUMBER_OF_GVNIC: the number of GVNIC interfaces to create. Specify a number from 1 to 9.
- GVNIC_NAME_PREFIX: the name prefix to use for the regular VPC networks and subnets that use the GVNIC NIC type.
- REGION: the region where you want to create the networks. This must correspond to the zone specified for the --network-profile flag when creating the Falcon VPC network. For example, if you specify the zone as europe-west4-b, then your region is europe-west4.
- IP_RANGE: the range of IP addresses outside of the VPC network to use for the SSH firewall rules. As a best practice, specify the specific IP address ranges that you need to allow access from, rather than all IPv4 or IPv6 sources. Don't use 0.0.0.0/0 or ::/0 as a source range, because this allows traffic from all IPv4 or IPv6 sources, including sources outside of Google Cloud.
- RDMA_NAME_PREFIX: the name prefix to use for the Falcon VPC network and subnet that uses the IRDMA NIC type.
- ZONE: the zone where you want to create the networks and compute instances. Use either us-central1-a or europe-west4-b.
Optional: To verify that the VPC network resources are created successfully, check the network settings in the Google Cloud console:
- In the Google Cloud console, go to the VPC networks page.
- Search the list for the networks that you created in the previous step.
- To view the subnets, firewall rules, and other network settings, click the name of the network.
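Alternatively, a quick check from the gcloud CLI can confirm the same resources. The following sketch assumes the name-prefix placeholders used in the script; replace them with your values:
# List the VPC networks created by the script
gcloud compute networks list \
    --filter="name~'GVNIC_NAME_PREFIX-net-|RDMA_NAME_PREFIX-irdma'"

# List the subnets in the region that you used
gcloud compute networks subnets list \
    --regions=REGION \
    --filter="name~'GVNIC_NAME_PREFIX-sub-|RDMA_NAME_PREFIX-irdma-sub'"

# List the firewall rules that apply to the gVNIC networks
gcloud compute firewall-rules list \
    --filter="network~'GVNIC_NAME_PREFIX-net-'"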
Optional: Create a placement policy
Tip: If you want your VMs to be placed in a single block or in adjacent blocks, specify VM placement by creating a placement policy. However, if you want your VMs to be on a specific block, skip this step and provide the name of the block in the reservation affinity when you create the instances.
You can specify VM placement by creating a compact placement policy. When you apply a compact placement policy to your VMs, Compute Engine makes best-effort attempts to create VMs that are as close to each other as possible. If your application is latency-sensitive and you want the VMs to be closer together (maximum compactness), then specify the maxDistance field (Preview) when creating a compact placement policy. A lower maxDistance value ensures closer VM placement, but it also increases the chance that some VMs won't be created.
gcloud
To create a compact placement policy, use the gcloud beta compute resource-policies create group-placement command:
gcloud beta compute resource-policies create group-placement POLICY_NAME \
    --collocation=collocated \
    --max-distance=MAX_DISTANCE \
    --region=REGION
Replace the following:
- POLICY_NAME: the name of the compact placement policy.
- MAX_DISTANCE: the maximum distance configuration for your VMs. The value must be 3 to place VMs in adjacent blocks, or 2 to place VMs in the same block. For information about the maximum number of VMs supported for each maxDistance value per machine series, see About compact placement policies in the Compute Engine documentation.
- REGION: the region where you want to create the compact placement policy. Specify a region in which the machine type that you want to use is available. For information about regions, see Available regions and zones.
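For example, assuming a hypothetical policy named h4d-cluster-policy that places VMs in the same block in europe-west4, the command might look like the following sketch:
gcloud beta compute resource-policies create group-placement h4d-cluster-policy \
    --collocation=collocated \
    --max-distance=2 \
    --region=europe-west4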
REST
To create a compact placement policy, make a POST request to the beta resourcePolicies.insert method. In the request body, include the collocation field set to COLLOCATED, and the maxDistance field.
POST https://compute.googleapis.com/compute/beta/projects/PROJECT_ID/regions/REGION/resourcePolicies

{
  "name": "POLICY_NAME",
  "groupPlacementPolicy": {
    "collocation": "COLLOCATED",
    "maxDistance": MAX_DISTANCE
  }
}

Replace the following:
- PROJECT_ID: your project ID.
- POLICY_NAME: the name of the compact placement policy.
- MAX_DISTANCE: the maximum distance configuration for your VMs. The value must be 3 to place VMs in adjacent blocks, or 2 to place VMs in the same block. For information about the maximum number of VMs supported for each maxDistance value per machine series, see About compact placement policies in the Compute Engine documentation.
- REGION: the region where you want to create the compact placement policy. Specify a region in which the machine type that you want to use is available. For information about regions, see Available regions and zones.
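For example, you might send this request with curl, authenticating with your gcloud credentials. The file name policy.json is a placeholder for a file that contains the request body shown above with your values filled in:
# Submit the placement policy request (policy.json holds the request body)
curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d @policy.json \
    "https://compute.googleapis.com/compute/beta/projects/PROJECT_ID/regions/REGION/resourcePolicies"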
Create VM instances in bulk
The instructions in this section describe how to create H4D VMs in bulk.
Review the following limitations before creating H4D instances with Cloud RDMA:
- You can't use live migration during host maintenance events with instances that have a Cloud RDMA network interface. You must configure the instances to terminate during maintenance events.
- The gVNIC network interface can have only one IPv6 address, either internal or external, but not both.
- You can use only IPv4 addresses with IRDMA network interfaces and Falcon VPC networks.
gcloud
To create VMs in bulk, use the gcloud compute instances bulk create command.
The parameters that you need to specify depend on the consumption option that you are using for this deployment. Select the tab that corresponds to your consumption option's provisioning model.
Reservation-bound
Start with the following gcloud compute instances bulk create command.
gcloud compute instances bulk create \
    --name-pattern=NAME_PATTERN \
    --count=COUNT \
    --machine-type=MACHINE_TYPE \
    --image-family=IMAGE_FAMILY \
    --image-project=IMAGE_PROJECT \
    --instance-termination-action=DELETE \
    --maintenance-policy=TERMINATE \
    --region=REGION \
    --boot-disk-type=hyperdisk-balanced \
    --boot-disk-size=DISK_SIZE
Complete the following steps:
Replace the following:
- NAME_PATTERN: the name pattern for the instances. For example, using vm-# for the name pattern generates instances with names such as vm-1 and vm-2, up to the number specified by the --count flag.
- COUNT: the number of instances to create.
- MACHINE_TYPE: the machine type to use for the instances. Use one of the H4D machine types, for example h4d-highmem-192-lssd.
- IMAGE_FAMILY: the image family of the OS image that you want to use, for example rocky-linux-9-optimized-gcp. For a list of supported OS images, see Supported operating systems. Choose an OS image version that supports the IRDMA interface.
- IMAGE_PROJECT: the project ID for the OS image, for example, rocky-linux-cloud.
- REGION: specify a region in which the machine type that you want to use is available, for example europe-west4. For information about available regions, see Available regions and zones.
- DISK_SIZE: Optional: the size of the boot disk in GiB. The value must be a whole number.
Optional: If you chose to use a compact placement policy, include the --resource-policies flag:

--resource-policies=POLICY_NAME \

Replace POLICY_NAME with the name of the compact placement policy.
To specify the reservation, do one of the following:
If you are using a placement policy or if VMs can be placed anywhere in your reservation, then add the following flags to the command:
--provisioning-model=RESERVATION_BOUND \
--reservation-affinity=specific \
--reservation=RESERVATION_NAME \
Replace RESERVATION_NAME with the name of the reservation, for example, h4d-highmem-exfr-prod.

If you aren't using a compact placement policy and you want the instances placed in a specific block, then add the following flags to the command:
--provisioning-model=RESERVATION_BOUND \
--reservation-affinity=specific \
--reservation=RESERVATION_BLOCK_NAME \
Replace RESERVATION_BLOCK_NAME with the name of a block in the reservation, for example, h4d-highmem-exfr-prod/reservationBlocks/h4d-highmem-exfr-prod-block-1.
To view the reservation name or the available reservation blocks, see View capacity. You can also inspect the reservation with the gcloud commands sketched below.
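For example, the following commands show the reservations in your project. The reservation name and zone in this sketch are placeholders:
# List the reservations in your project
gcloud compute reservations list

# Describe a reservation to confirm its machine type and capacity
gcloud compute reservations describe RESERVATION_NAME \
    --zone=ZONE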
Optional: To configure the instances to use Cloud RDMA, add flags similar to the following to the command. This example configures two GVNIC network interfaces and one IRDMA network interface:
--network-interface=nic-type=GVNIC,network=GVNIC_NAME_PREFIX-net-0,subnet=GVNIC_NAME_PREFIX-sub-0,stack-type=STACK_TYPE,address=EXTERNAL_IPV4_ADDRESS \
--network-interface=nic-type=GVNIC,network=GVNIC_NAME_PREFIX-net-1,subnet=GVNIC_NAME_PREFIX-sub-1,no-address \
--network-interface=nic-type=IRDMA,network=RDMA_NAME_PREFIX-irdma,subnet=RDMA_NAME_PREFIX-irdma-sub,stack-type=IPV4_ONLY,no-address \
Replace the following:
- GVNIC_NAME_PREFIX: the name prefix you used when creating the VPC network and subnet for the GVNIC interface. For the first GVNIC network interface, you can omit the network and subnet flags to use the default network instead.
- STACK_TYPE: Optional: the stack type for the GVNIC network interface. STACK_TYPE must be one of IPV4_ONLY or IPV4_IPV6. The default value is IPV4_ONLY.
- EXTERNAL_IPV4_ADDRESS: Optional: a static external IPv4 address to use with the network interface. You must have previously reserved an external IPv4 address. Do one of the following:
  - Specify a valid IPv4 address from the subnet.
  - Use the flag no-address instead if you don't want the network interface to have an external IP address.
  - Specify address='' if you want the interface to receive an ephemeral external IP address.
  To specify an external IPv6 address, use the flag --external-ipv6-address instead.
- RDMA_NAME_PREFIX: the name prefix that you used when creating the VPC network and subnet for the IRDMA interface.
- Optional: Add additional flags to customize the rest of the instance properties, as needed.
- Run the command.
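For reference, a fully assembled command might look like the following sketch. The name pattern, reservation, placement policy, network names, and region are example values only, not defaults:
# Example bulk create of eight reservation-bound H4D instances (placeholder values)
gcloud compute instances bulk create \
    --name-pattern=h4d-vm-# \
    --count=8 \
    --machine-type=h4d-highmem-192-lssd \
    --image-family=rocky-linux-9-optimized-gcp \
    --image-project=rocky-linux-cloud \
    --instance-termination-action=DELETE \
    --maintenance-policy=TERMINATE \
    --region=europe-west4 \
    --boot-disk-type=hyperdisk-balanced \
    --boot-disk-size=200 \
    --provisioning-model=RESERVATION_BOUND \
    --reservation-affinity=specific \
    --reservation=h4d-highmem-exfr-prod \
    --resource-policies=h4d-cluster-policy \
    --network-interface=nic-type=GVNIC,network=hpc-net-0,subnet=hpc-sub-0,no-address \
    --network-interface=nic-type=IRDMA,network=hpc-irdma,subnet=hpc-irdma-sub,stack-type=IPV4_ONLY,no-address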
Spot
Start with the following gcloud compute instances bulk create command.
gcloud compute instances bulk create \
    --name-pattern=NAME_PATTERN \
    --count=COUNT \
    --machine-type=MACHINE_TYPE \
    --image-family=IMAGE_FAMILY \
    --image-project=IMAGE_PROJECT \
    --region=REGION \
    --boot-disk-type=hyperdisk-balanced \
    --boot-disk-size=DISK_SIZE \
    --provisioning-model=SPOT \
    --instance-termination-action=TERMINATION_ACTION
Complete the following steps:
Replace the following:
- NAME_PATTERN: the name pattern for the instances. For example, using vm-# for the name pattern generates instances with names such as vm-1 and vm-2, up to the number specified by the --count flag.
- COUNT: the number of instances to create.
- MACHINE_TYPE: the machine type to use for the instances. Use one of the H4D machine types, for example h4d-highmem-192-lssd.
- IMAGE_FAMILY: the image family of the OS image that you want to use, for example rocky-linux-9-optimized-gcp. For a list of supported OS images, see Supported operating systems. Choose an OS image version that supports the IRDMA interface.
- IMAGE_PROJECT: the project ID for the OS image, for example, rocky-linux-cloud.
- REGION: specify a region in which the machine type that you want to use is available, for example europe-west4. For information about available regions, see Available regions and zones.
- DISK_SIZE: Optional: the size of the boot disk in GiB. The value must be a whole number.
- TERMINATION_ACTION: the action to take when Compute Engine preempts the instance, either STOP (default) or DELETE.
Important: Make sure your application can handle preemption. For example, we recommend that you handle preemption by specifying a shutdown script during instance creation, as sketched after this list. Learn how to handle preemption with a shutdown script.
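A minimal sketch of the shutdown-script approach follows. It assumes that the bulk create command accepts the same --metadata-from-file flag as gcloud compute instances create, and the script path, its contents, and the Cloud Storage bucket are placeholders for whatever cleanup your application needs:
# Add this flag to the bulk create command so that each Spot VM runs the
# script when Compute Engine preempts it (assumes --metadata-from-file is
# available for bulk create, as it is for instance create)
--metadata-from-file=shutdown-script=cleanup.sh

# cleanup.sh (illustrative contents)
#!/bin/bash
# Copy application checkpoints to Cloud Storage before the VM shuts down
gsutil -m rsync -r /scratch/checkpoints gs://BUCKET_NAME/checkpoints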
Optional: If you chose to use a compact placement policy, then add the following flag to the command:
--resource-policies=POLICY_NAME \
Replace POLICY_NAME with the name of the compact placement policy.

Optional: To configure the instances to use Cloud RDMA, add flags similar to the following to the command. This example configures two GVNIC network interfaces and one IRDMA network interface:
--network-interface=nic-type=GVNIC,network=GVNIC_NAME_PREFIX-net-0,subnet=GVNIC_NAME_PREFIX-sub-0,stack-type=STACK_TYPE,address=EXTERNAL_IPV4_ADDRESS \
--network-interface=nic-type=GVNIC,network=GVNIC_NAME_PREFIX-net-1,subnet=GVNIC_NAME_PREFIX-sub-1,no-address \
--network-interface=nic-type=IRDMA,network=RDMA_NAME_PREFIX-irdma,subnet=RDMA_NAME_PREFIX-irdma-sub,stack-type=IPV4_ONLY,no-address \
Replace the following:
- GVNIC_NAME_PREFIX: the name prefix you used when creating the VPC network and subnet for the GVNIC interface. For the first GVNIC network interface, you can omit the network and subnet flags to use the default network instead.
- STACK_TYPE: Optional: the stack type for the GVNIC network interface. STACK_TYPE must be one of IPV4_ONLY or IPV4_IPV6. The default value is IPV4_ONLY.
- EXTERNAL_IPV4_ADDRESS: Optional: a static external IPv4 address to use with the network interface. You must have previously reserved an external IPv4 address. Do one of the following:
  - Specify a valid IPv4 address from the subnet.
  - Use the flag no-address instead if you don't want the network interface to have an external IP address.
  - Specify address='' if you want the interface to receive an ephemeral external IP address.
  To specify an external IPv6 address, use the flag --external-ipv6-address instead.
- RDMA_NAME_PREFIX: the name prefix that you used when creating the VPC network and subnet for the IRDMA interface.
- Optional: Add additional flags to customize the rest of the instance properties, as needed.
- Run the command.
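After the command completes, you can confirm that the instances exist and that they use the Spot provisioning model. The name filter in this sketch assumes the vm-# name pattern:
# List the new instances and their provisioning model
gcloud compute instances list \
    --filter="name~'^vm-'" \
    --format="table(name, zone, status, scheduling.provisioningModel)"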
REST
To create VM instances in bulk, make a POST request to the instances.bulkInsert method.
The parameters that you need to specify depend on the consumption option that you are using for this deployment. Select the tab that corresponds to your consumption option's provisioning model.
Reservation-bound
Start with the following POST request to the instances.bulkInsert method.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/bulkInsert

{
  "namePattern": "NAME_PATTERN",
  "count": "COUNT",
  "instanceProperties": {
    "machineType": "MACHINE_TYPE",
    "disks": [
      {
        "boot": true,
        "initializeParams": {
          "diskSizeGb": "DISK_SIZE",
          "diskType": "hyperdisk-balanced",
          "sourceImage": "projects/IMAGE_PROJECT/global/images/family/IMAGE_FAMILY"
        },
        "mode": "READ_WRITE",
        "type": "PERSISTENT"
      }
    ],
    "scheduling": {
      "provisioningModel": "RESERVATION_BOUND",
      "instanceTerminationAction": "DELETE",
      "onHostMaintenance": "TERMINATE",
      "automaticRestart": true
    }
  }
}

Complete the following steps:
Replace the following:
- PROJECT_ID: the project ID of the project where you want to create the instances.
- ZONE: specify a zone in which the machine type that you want to use is available. If you are using a compact placement policy, then use a zone in the same region as the compact placement policy. For information about the regions where H4D machine types are available, see Available regions and zones.
- NAME_PATTERN: the name pattern for the instances. For example, using vm-# for the name pattern generates instances with names such as vm-1 and vm-2, up to the number specified by the count field.
- COUNT: the number of instances to create.
- MACHINE_TYPE: the machine type to use for the instances. Use one of the H4D machine types, for example h4d-highmem-192-lssd.
- DISK_SIZE: the size of the boot disk in GiB.
- IMAGE_PROJECT: the project ID for the OS image, for example, rocky-linux-cloud.
- IMAGE_FAMILY: the image family of the OS image that you want to use, for example rocky-linux-9-optimized-gcp. For a list of supported OS images, see Supported operating systems. Choose an OS image version that supports the IRDMA interface.
Optional: If you chose to use a compact placement policy, include the resourcePolicies parameter in the request body as part of the "instanceProperties" parameter:

"resourcePolicies": [
  "projects/PROJECT_ID/regions/REGION/resourcePolicies/POLICY_NAME"
],

Replace POLICY_NAME with the name of the compact placement policy.
To specify the reservation, do one of the following:
If you are using a placement policy or if VMs can be placed anywhere in your reservation, then add the following to the request body as part of the "instanceProperties" parameter:

"reservationAffinity": {
  "consumeReservationType": "SPECIFIC_RESERVATION",
  "key": "compute.googleapis.com/reservation-name",
  "values": [
    "RESERVATION_NAME"
  ]
},

Replace RESERVATION_NAME with the name of the reservation, for example, h4d-highmem-exfr-prod.

If you aren't using a compact placement policy and you want the instances placed in a specific block, then add the following to the request body as part of the "instanceProperties" parameter:

"reservationAffinity": {
  "consumeReservationType": "SPECIFIC_RESERVATION",
  "key": "compute.googleapis.com/reservation-name",
  "values": [
    "RESERVATION_BLOCK_NAME"
  ]
},

Replace RESERVATION_BLOCK_NAME with the name of a block in the reservation, for example, h4d-highmem-exfr-prod/reservationBlocks/h4d-highmem-exfr-prod-block-1.
To view the reservation name or the available reservation blocks, see View capacity.
If you want to configure the instances to use Cloud RDMA, then include a parameter block similar to the following in the request body as part of the "instanceProperties" parameter. This example configures two GVNIC network interfaces and one IRDMA network interface:

"networkInterfaces": [
  {
    "network": "GVNIC_NAME_PREFIX-net-0",
    "subnetwork": "GVNIC_NAME_PREFIX-sub-0",
    "accessConfigs": [
      {
        "type": "ONE_TO_ONE_NAT",
        "name": "External IP",
        "natIP": "EXTERNAL_IPV4_ADDRESS"
      }
    ],
    "stackType": "IPV4_ONLY",
    "nicType": "GVNIC"
  },
  {
    "network": "GVNIC_NAME_PREFIX-net-1",
    "subnetwork": "GVNIC_NAME_PREFIX-sub-1",
    "stackType": "IPV4_ONLY",
    "nicType": "GVNIC"
  },
  {
    "network": "RDMA_NAME_PREFIX-irdma",
    "subnetwork": "RDMA_NAME_PREFIX-irdma-sub",
    "stackType": "IPV4_ONLY",
    "nicType": "IRDMA"
  }
],

Replace the following:
- GVNIC_NAME_PREFIX: the name prefix that you used when creating the VPC network and subnet for the GVNIC interface. For the first GVNIC network interface, you can omit the network and subnetwork fields to use the default network instead.
- EXTERNAL_IPV4_ADDRESS: Optional: a static external IPv4 address to use with the network interface. You must have previously reserved an external IPv4 address.
- RDMA_NAME_PREFIX: the name prefix you used when creating the VPC network and subnet for the IRDMA interface.
- Optional: Customize the rest of the instance properties, as needed.
- Submit the request.
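For example, you can submit the request with curl, authenticating with your gcloud credentials. The file name bulk-request.json is a placeholder for a file that contains the request body you assembled in the previous steps:
# Submit the bulkInsert request (bulk-request.json holds the request body)
curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d @bulk-request.json \
    "https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/bulkInsert"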
Spot
Start with the following POST request to the instances.bulkInsert method.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/bulkInsert

{
  "namePattern": "NAME_PATTERN",
  "count": "COUNT",
  "instanceProperties": {
    "machineType": "MACHINE_TYPE",
    "disks": [
      {
        "boot": true,
        "initializeParams": {
          "diskSizeGb": "DISK_SIZE",
          "diskType": "hyperdisk-balanced",
          "sourceImage": "projects/IMAGE_PROJECT/global/images/family/IMAGE_FAMILY"
        },
        "mode": "READ_WRITE",
        "type": "PERSISTENT"
      }
    ],
    "scheduling": {
      "provisioningModel": "SPOT",
      "instanceTerminationAction": "TERMINATION_ACTION"
    }
  }
}

Complete the following steps:
Replace the following:
- PROJECT_ID: the project ID of the project where you want to create the instances.
- ZONE: specify a zone in which the machine type that you want to use is available. If you are using a compact placement policy, then use a zone in the same region as the compact placement policy. For information about the regions where H4D machine types are available, see Available regions and zones.
- NAME_PATTERN: the name pattern for the instances. For example, using vm-# for the name pattern generates instances with names such as vm-1 and vm-2, up to the number specified by the count field.
- COUNT: the number of instances to create.
- MACHINE_TYPE: the machine type to use for the instances. Use one of the H4D machine types, for example h4d-highmem-192-lssd.
- DISK_SIZE: the size of the boot disk in GiB.
- IMAGE_PROJECT: the project ID for the OS image, for example, rocky-linux-cloud.
- IMAGE_FAMILY: the image family of the OS image that you want to use, for example rocky-linux-9-optimized-gcp. For a list of supported OS images, see Supported operating systems. Choose an OS image version that supports the IRDMA interface.
- TERMINATION_ACTION: the action to take when Compute Engine preempts the instance, either STOP (default) or DELETE.
Important: Make sure your application can handle preemption. For example, we recommend that you handle preemption by specifying a shutdown script during instance creation. Learn how to handle preemption with a shutdown script.
Optional: If you chose to use a compact placement policy, include the resourcePolicies parameter as part of the "instanceProperties" parameter:

"resourcePolicies": [
  "projects/PROJECT_ID/regions/REGION/resourcePolicies/POLICY_NAME"
]
If you want to configure the instances to use Cloud RDMA, then include a parameter block similar to the following in the request body as part of the "instanceProperties" parameter. This example configures two GVNIC network interfaces and one IRDMA network interface:

"networkInterfaces": [
  {
    "network": "GVNIC_NAME_PREFIX-net-0",
    "subnetwork": "GVNIC_NAME_PREFIX-sub-0",
    "accessConfigs": [
      {
        "type": "ONE_TO_ONE_NAT",
        "name": "External IP",
        "natIP": "EXTERNAL_IPV4_ADDRESS"
      }
    ],
    "stackType": "IPV4_ONLY",
    "nicType": "GVNIC"
  },
  {
    "network": "GVNIC_NAME_PREFIX-net-1",
    "subnetwork": "GVNIC_NAME_PREFIX-sub-1",
    "stackType": "IPV4_ONLY",
    "nicType": "GVNIC"
  },
  {
    "network": "RDMA_NAME_PREFIX-irdma",
    "subnetwork": "RDMA_NAME_PREFIX-irdma-sub",
    "stackType": "IPV4_ONLY",
    "nicType": "IRDMA"
  }
],

Replace the following:
- GVNIC_NAME_PREFIX: the name prefix that you used when creating the VPC network and subnet for the GVNIC interface. For the first GVNIC network interface, you can omit the network and subnetwork fields to use the default network instead.
- EXTERNAL_IPV4_ADDRESS: Optional: a static external IPv4 address to use with the network interface. You must have previously reserved an external IPv4 address.
- RDMA_NAME_PREFIX: the name prefix you used when creating the VPC network and subnet for the IRDMA interface.
- Optional: Customize the rest of the instance properties, as needed.
- Submit the request.
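The method returns a zone operation. One way to confirm the result, sketched here, is to check recent operations in the zone and then list the new instances; the name filter assumes the vm-# name pattern:
# Check the most recent operations in the zone
gcloud compute operations list \
    --zones=ZONE \
    --sort-by=~insertTime \
    --limit=5

# List the newly created instances
gcloud compute instances list \
    --zones=ZONE \
    --filter="name~'^vm-'"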
What's next
- View H4D cluster topology.
- Connect to Linux VMs.
- Set up and scale MPI applications on H4D VMs with Cloud RDMA.