Create GPU VMs in bulk

You can create a group of virtual machines (VMs) that have attached graphics processing units (GPUs) by using the bulk creation process. With the bulk creation process, you get upfront validation, where the request fails fast if it is not feasible. Also, if you use the region flag, the bulk creation API automatically chooses the zone that has the capacity to fulfill the request.

To learn more about bulk creation, see About bulk creation of VMs. To learn more about creating VMs with attached GPUs, see Overview of creating an instance with attached GPUs.

Before you begin

Required roles

To get the permissions that you need to create VMs, ask your administrator to grant you the Compute Instance Admin (v1) (roles/compute.instanceAdmin.v1) IAM role on the project. For more information about granting roles, see Manage access to projects, folders, and organizations.

This predefined role contains the permissions required to create VMs. To see the exact permissions that are required, expand the Required permissions section:

Required permissions

The following permissions are required to create VMs:

  • compute.instances.create on the project
  • To use a custom image to create the VM: compute.images.useReadOnly on the image
  • To use a snapshot to create the VM: compute.snapshots.useReadOnly on the snapshot
  • To use an instance template to create the VM: compute.instanceTemplates.useReadOnly on the instance template
  • To specify a subnet for your VM: compute.subnetworks.use on the project or on the chosen subnet
  • To specify a static IP address for the VM: compute.addresses.use on the project
  • To assign an external IP address to the VM when using a VPC network: compute.subnetworks.useExternalIp on the project or on the chosen subnet
  • To assign a legacy network to the VM: compute.networks.use on the project
  • To assign an external IP address to the VM when using a legacy network: compute.networks.useExternalIp on the project
  • To set VM instance metadata for the VM: compute.instances.setMetadata on the project
  • To set tags for the VM: compute.instances.setTags on the VM
  • To set labels for the VM: compute.instances.setLabels on the VM
  • To set a service account for the VM to use: compute.instances.setServiceAccount on the VM
  • To create a new disk for the VM: compute.disks.create on the project
  • To attach an existing disk in read-only or read-write mode: compute.disks.use on the disk
  • To attach an existing disk in read-only mode: compute.disks.useReadOnly on the disk

You might also be able to get these permissions with custom roles or other predefined roles.
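For example, an administrator can grant this role with the gcloud CLI. In the following sketch, the project ID and user email are placeholder values:

# Grant the Compute Instance Admin (v1) role on a project.
# "example-project" and "user:alex@example.com" are placeholders; use your own values.
gcloud projects add-iam-policy-binding example-project \
    --member="user:alex@example.com" \
    --role="roles/compute.instanceAdmin.v1"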

Overview

When creating VMs with attached GPUs by using the bulk creation method, you can choose to create VMs in a region (such as us-central1) or in a specific zone (such as us-central1-a).

If you choose to specify a region, Compute Engine places the VMs in any zone within the region that supports GPUs.
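For example, the same bulk request can be scoped to a region or pinned to a zone. The following sketch assumes a G2 machine type and a Debian 12 boot image; substitute your own values:

# Region-scoped request: Compute Engine picks any zone in us-central1
# that has capacity for the GPUs attached to this machine type.
gcloud compute instances bulk create \
    --name-pattern="my-test-vm-#" \
    --count=2 \
    --region=us-central1 \
    --machine-type=g2-standard-4 \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --on-host-maintenance=TERMINATE

# Zone-scoped request: identical, except that --zone pins the placement to one zone.
gcloud compute instances bulk create \
    --name-pattern="my-test-vm-#" \
    --count=2 \
    --zone=us-central1-a \
    --machine-type=g2-standard-4 \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --on-host-maintenance=TERMINATE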

Machine types

The accelerator-optimized machine family contains multiple machine types.

Each accelerator-optimized machine type has a specific model of NVIDIA GPUs attached to support the recommended workload type.

Accelerator-optimized A series machine types are designed for high performance computing (HPC), artificial intelligence (AI), and machine learning (ML) workloads.

For these machine types, the GPU model is automatically attached to the instance.

Accelerator-optimized G series machine types are designed for workloads such as NVIDIA Omniverse simulation workloads, graphics-intensive applications, video transcoding, and virtual desktops. These machine types support NVIDIA RTX Virtual Workstations (vWS).

For these machine types, the GPU model is automatically attached to the instance.

  • A4X (NVIDIA GB200 Superchips)
    (nvidia-gb200)
  • A4 (NVIDIA B200)
    (nvidia-b200)
  • A3 Ultra (NVIDIA H200)
    (nvidia-h200-141gb)
  • A3 Mega (NVIDIA H100)
    (nvidia-h100-mega-80gb)
  • A3 High (NVIDIA H100)
    (nvidia-h100-80gb)
  • A3 Edge (NVIDIA H100)
    (nvidia-h100-80gb)
  • A2 Ultra (NVIDIA A100 80GB)
    (nvidia-a100-80gb)
  • A2 Standard (NVIDIA A100)
    (nvidia-a100-40gb)
  • G4 (NVIDIA RTX PRO 6000)
    (nvidia-rtx-pro-6000)
    (nvidia-rtx-pro-6000-vws)
  • G2 (NVIDIA L4)
    (nvidia-l4)
    (nvidia-l4-vws)
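If you want to confirm which of the GPU models in the preceding list a particular zone offers before you send a bulk request, you can query the accelerator types. The zone name below is only an example:

# List the GPU (accelerator) models that are available in one zone.
gcloud compute accelerator-types list --filter="zone:us-central1-a"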

Create groups of A4X, A4, and A3 Ultra

To create instances in bulk for the A4X, A4, and A3 Ultra machine series, see the Deployment options overview in the AI Hypercomputer documentation.

Create groups of A3, A2, G4, and G2 VMs

This section explains how you can create instances in bulk for the A3 High, A3 Mega, A3 Edge, A2 Standard, A2 Ultra, G4, and G2 machine series by using the Google Cloud CLI or REST.

gcloud

To create a group of VMs, use the gcloud compute instances bulk create command. For more information about the parameters and how to use this command, see Create VMs in bulk.

Note: The following example outlines the required flags. For a list of optional flags that you can include, see the optional flags section. Depending on your workload requirements, or if you need to use a Windows operating system, you might need to include one or more of these optional flags.

Example

This example creates two VMs that have attached GPUs by running the following command:

gcloud compute instances bulk create \
    --name-pattern="my-test-vm-#" \
    --region=REGION \
    --count=2 \
    --machine-type=MACHINE_TYPE \
    --boot-disk-size=200 \
    --image=IMAGE \
    --image-project=IMAGE_PROJECT \
    --on-host-maintenance=TERMINATE

Replace the following:

  • REGION: the region for the VMs. This region must support your selected GPU model.
  • MACHINE_TYPE: the accelerator-optimized machine type that you selected.
  • IMAGE: the OS image that you want to use.
  • IMAGE_PROJECT: the project that contains the OS image.

If successful, the output is similar to the following:

NAME          ZONE
my-test-vm-1  us-central1-b
my-test-vm-2  us-central1-b
Bulk create request finished with status message: [VM instances created: 2, failed: 0.]
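To confirm which zone the bulk API selected, you can list the new instances. The filter below assumes the my-test-vm-# name pattern from the preceding command:

# Show the name, zone, and status of the VMs created by the bulk request.
gcloud compute instances list --filter="name~my-test-vm"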

Optional flags

To further configure your instance to meet your workload or operating system needs, include one or more of the following flags when you run the gcloud compute instances bulk create command.

Provisioning model

Sets the provisioning model for the instance. Specify either SPOT or FLEX_START. FLEX_START isn't supported for G4 instances. If you don't specify a model, then the standard model is used. For more information, see Compute Engine instances provisioning models.

--provisioning-model=PROVISIONING_MODEL

Virtual workstation

Specifies an NVIDIA RTX Virtual Workstation (vWS) for graphics workloads. This feature is supported only for G4 and G2 instances.

--accelerator=type=VWS_ACCELERATOR_TYPE,count=VWS_ACCELERATOR_COUNT

Replace the following:

  • For VWS_ACCELERATOR_TYPE, choose one of the following:
    • For G4 instances, specify nvidia-rtx-pro-6000-vws
    • For G2 instances, specify nvidia-l4-vws
  • For VWS_ACCELERATOR_COUNT, specify the number of virtual GPUs that you need.

Local SSD

Attaches one or more Local SSDs to your instance. Local SSDs can be used for fast scratch disks or for feeding data into the GPUs while preventing I/O bottlenecks.

--local-ssd=interface=nvme \
--local-ssd=interface=nvme \
--local-ssd=interface=nvme ...

For the maximum number of Local SSD disks that you can attach per VM instance, see Local SSD limits.

Network interface

Attaches multiple network interfaces to your instance. For g4-standard-384 instances, you can attach up to two network interfaces. You can use this flag to create an instance with dual network interfaces (2x 200 Gbps). Each network interface must be in a unique VPC network. Dual network interfaces are only supported on g4-standard-384 machine types.

--network-interface=network=VPC_NAME_1,subnet=SUBNET_NAME_1,nic-type=GVNIC \
--network-interface=network=VPC_NAME_2,subnet=SUBNET_NAME_2,nic-type=GVNIC

Replace the following:

  • VPC_NAME: the name of your VPC network.
  • SUBNET_NAME: the name of the subnet that is part of the specified VPC network.
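As a sketch of how these optional flags combine with the required ones, the following command requests Spot G2 instances with a vWS GPU and one Local SSD. The machine type, region, and counts are illustrative, and IMAGE and IMAGE_PROJECT are the same placeholders as in the earlier example:

# Illustrative G2 bulk request that combines several optional flags.
gcloud compute instances bulk create \
    --name-pattern="my-g2-vm-#" \
    --count=2 \
    --region=us-central1 \
    --machine-type=g2-standard-8 \
    --accelerator=type=nvidia-l4-vws,count=1 \
    --provisioning-model=SPOT \
    --local-ssd=interface=nvme \
    --boot-disk-size=200 \
    --image=IMAGE \
    --image-project=IMAGE_PROJECT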

REST

Use the instances.bulkInsert method with the required parameters to create multiple VMs in a zone. For more information about the parameters and how to use this command, see Create VMs in bulk.

Note: The following example outlines the required fields. For a list of optional fields that you can include, see the optional flags section. Depending on your workload requirements, or if you need to use a Windows operating system, you might need to include one or more of these optional fields.

Example

This example creates two VMs that have attached GPUs by using the following specifications:

  • VM names: my-test-vm-1, my-test-vm-2
  • Each VM has two GPUs attached, specified by using the appropriate accelerator-optimized machine type

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/instances/bulkInsert

{
  "namePattern": "my-test-vm-#",
  "count": "2",
  "instanceProperties": {
    "machineType": MACHINE_TYPE,
    "disks": [
      {
        "type": "PERSISTENT",
        "initializeParams": {
          "diskSizeGb": "200",
          "sourceImage": SOURCE_IMAGE_URI
        },
        "boot": true
      }
    ],
    "name": "default",
    "networkInterfaces": [
      {
        "network": "projects/PROJECT_ID/global/networks/default"
      }
    ],
    "scheduling": {
      "onHostMaintenance": "TERMINATE",
      "automaticRestart": true
    }
  }
}

Replace the following:

  • PROJECT_ID: your project ID
  • REGION: the region for the VMs. This region must support your selected GPU model.
  • MACHINE_TYPE: the machine type that you selected. Choose a machine type from one of the supported accelerator-optimized machine series (A3, A2, G4, or G2).

  • SOURCE_IMAGE_URI: the URI for the specific image or image family that you want to use.

    For example:

    • Specific image: "sourceImage": "projects/rocky-linux-cloud/global/images/rocky-linux-8-optimized-gcp-v20220719"
    • Image family: "sourceImage": "projects/rocky-linux-cloud/global/images/family/rocky-linux-8-optimized-gcp"

    When you specify an image family, Compute Engine creates a VM from the most recent, non-deprecated OS image in that family. For more information about when to use image families, see Image family best practices.
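One way to send this request is with curl and an access token from the gcloud CLI. This sketch assumes that you saved the JSON body shown above, with the placeholders filled in, to a file named request.json:

# Send the bulkInsert request; request.json holds the request body from the example.
curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d @request.json \
    "https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/instances/bulkInsert"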

Optional flags

To further configure your instance to meet your workload or operating system needs, include one or more of the following fields when you call the instances.bulkInsert method.

Provisioning model

To lower your costs, you can specify a different provisioning model by adding the "provisioningModel": "PROVISIONING_MODEL" field to the scheduling object in your request. If you specify to create Spot VMs, then the onHostMaintenance and automaticRestart fields are ignored. For more information, see Compute Engine instances provisioning models.

"scheduling": {
  "onHostMaintenance": "terminate",
  "provisioningModel": "PROVISIONING_MODEL"
}

Replace PROVISIONING_MODEL with one of the following:

  • STANDARD: (Default) A standard instance.
  • SPOT: A Spot VM.
  • FLEX_START: A Flex Start VM. Flex-start VMs run for up to seven days and can help you acquire high-demand resources like GPUs at a discounted price. This provisioning model isn't supported for G4 instances.

Virtual workstation

Specifies an NVIDIA RTX Virtual Workstation (vWS) for graphics workloads. This feature is supported only for G4 and G2 instances.

"guestAccelerators": [
  {
    "acceleratorCount": VWS_ACCELERATOR_COUNT,
    "acceleratorType": "projects/PROJECT_ID/zones/ZONE/acceleratorTypes/VWS_ACCELERATOR_TYPE"
  }
]

Replace the following:

  • For VWS_ACCELERATOR_TYPE, choose one of the following:
    • For G4 instances, specify nvidia-rtx-pro-6000-vws
    • For G2 instances, specify nvidia-l4-vws
  • For VWS_ACCELERATOR_COUNT, specify the number of virtual GPUs that you need.

Local SSD

Attaches one or more Local SSDs to your instance. Local SSDs can be used for fast scratch disks or for feeding data into the GPUs while preventing I/O bottlenecks.

{
  "type": "SCRATCH",
  "autoDelete": true,
  "initializeParams": {
    "diskType": "projects/PROJECT_ID/zones/ZONE/diskTypes/local-nvme-ssd"
  }
}

For the maximum number of Local SSD disks that you can attach per VM instance, see Local SSD limits.

Network interface

Attaches multiple network interfaces to your instance. For g4-standard-384 instances, you can attach up to two network interfaces. This creates an instance with dual network interfaces (2x 200 Gbps). Each network interface must be in a unique VPC network. Dual network interfaces are only supported on g4-standard-384 machine types.

"networkInterfaces": [
  {
    "network": "projects/PROJECT_ID/global/networks/VPC_NAME_1",
    "subnetwork": "projects/PROJECT_ID/regions/REGION/subnetworks/SUBNET_NAME_1",
    "nicType": "GVNIC"
  },
  {
    "network": "projects/PROJECT_ID/global/networks/VPC_NAME_2",
    "subnetwork": "projects/PROJECT_ID/regions/REGION/subnetworks/SUBNET_NAME_2",
    "nicType": "GVNIC"
  }
]

Replace the following:

  • VPC_NAME: the name of your VPC network.
  • SUBNET_NAME: the name of the subnet that is part of the specified VPC network.
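As a sketch of how these optional fields fit into a request body, the following example adds Spot provisioning and a vWS GPU to the instanceProperties of a hypothetical G2 bulk request. The machine type and counts are illustrative, PROJECT_ID and SOURCE_IMAGE_URI are the same placeholders as earlier, and the accelerator type uses the short-name form shown in the N1 example later on this page:

# Illustrative request body that combines optional fields; save it as
# request.json and send it as shown in the earlier curl example.
cat > request.json <<'EOF'
{
  "namePattern": "my-g2-vm-#",
  "count": "2",
  "instanceProperties": {
    "machineType": "g2-standard-8",
    "guestAccelerators": [
      {
        "acceleratorCount": 1,
        "acceleratorType": "nvidia-l4-vws"
      }
    ],
    "scheduling": {
      "provisioningModel": "SPOT"
    },
    "disks": [
      {
        "type": "PERSISTENT",
        "boot": true,
        "initializeParams": {
          "diskSizeGb": "200",
          "sourceImage": "SOURCE_IMAGE_URI"
        }
      }
    ],
    "networkInterfaces": [
      {
        "network": "projects/PROJECT_ID/global/networks/default"
      }
    ]
  }
}
EOF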

Create groups of N1 general-purpose VMs

You can create a group of VMs with attached GPUs by using either the Google Cloud CLI or REST.

This section describes how to create multiple VMs using the following GPU types:

NVIDIA GPUs:

  • NVIDIA T4: nvidia-tesla-t4
  • NVIDIA P4: nvidia-tesla-p4
  • NVIDIA P100: nvidia-tesla-p100
  • NVIDIA V100: nvidia-tesla-v100

NVIDIA RTX Virtual Workstation (vWS) (formerly known as NVIDIA GRID):

  • NVIDIA T4 Virtual Workstation: nvidia-tesla-t4-vws
  • NVIDIA P4 Virtual Workstation: nvidia-tesla-p4-vws
  • NVIDIA P100 Virtual Workstation: nvidia-tesla-p100-vws

    For these virtual workstations, an NVIDIA RTX Virtual Workstation (vWS) license is automatically added to your instance.

gcloud

To create a group of VMs, use the gcloud compute instances bulk create command. For more information about the parameters and how to use this command, see Create VMs in bulk.

Example

The following example creates two VMs with attached GPUs by using the following specifications:

  • VM names: my-test-vm-1, my-test-vm-2
  • VMs created in any zone in us-central1 that supports GPUs
  • Each VM has two T4 GPUs attached, specified by using the accelerator type and accelerator count flags
  • Each VM has GPU drivers installed
  • Each VM uses the Deep Learning VM image pytorch-latest-gpu-v20211028-debian-10

gcloud compute instances bulk create \
    --name-pattern="my-test-vm-#" \
    --count=2 \
    --region=us-central1 \
    --machine-type=n1-standard-2 \
    --accelerator type=nvidia-tesla-t4,count=2 \
    --boot-disk-size=200 \
    --metadata="install-nvidia-driver=True" \
    --scopes="https://www.googleapis.com/auth/cloud-platform" \
    --image=pytorch-latest-gpu-v20211028-debian-10 \
    --image-project=deeplearning-platform-release \
    --on-host-maintenance=TERMINATE --restart-on-failure

If successful, the output is similar to the following:

NAME          ZONE
my-test-vm-1  us-central1-b
my-test-vm-2  us-central1-b
Bulk create request finished with status message: [VM instances created: 2, failed: 0.]
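Because the install-nvidia-driver=True metadata triggers driver installation on first boot, you can verify the GPUs once a VM is running. The zone below matches the sample output and might differ in your project:

# Run nvidia-smi on one of the new VMs to confirm that both T4 GPUs are visible.
# Driver installation can take a few minutes after the VM first boots.
gcloud compute ssh my-test-vm-1 --zone=us-central1-b --command="nvidia-smi"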

REST

Use the instances.bulkInsert method with the required parameters to create multiple VMs in a zone. For more information about the parameters and how to use this command, see Create VMs in bulk.

Example

The following example creates two VMs with attached GPUs by using the following specifications:

  • VM names: my-test-vm-1, my-test-vm-2
  • VMs created in any zone in us-central1 that supports GPUs
  • Each VM has two T4 GPUs attached, specified by using the accelerator type and accelerator count fields
  • Each VM has GPU drivers installed
  • Each VM uses the Deep Learning VM image pytorch-latest-gpu-v20211028-debian-10

Replace PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-central1/instances/bulkInsert

{
  "namePattern": "my-test-vm-#",
  "count": "2",
  "instanceProperties": {
    "machineType": "n1-standard-2",
    "disks": [
      {
        "type": "PERSISTENT",
        "initializeParams": {
          "diskSizeGb": "200",
          "sourceImage": "projects/deeplearning-platform-release/global/images/pytorch-latest-gpu-v20211028-debian-10"
        },
        "boot": true
      }
    ],
    "name": "default",
    "networkInterfaces": [
      {
        "network": "projects/PROJECT_ID/global/networks/default"
      }
    ],
    "guestAccelerators": [
      {
        "acceleratorCount": 2,
        "acceleratorType": "nvidia-tesla-t4"
      }
    ],
    "scheduling": {
      "onHostMaintenance": "TERMINATE",
      "automaticRestart": true
    },
    "metadata": {
      "items": [
        {
          "key": "install-nvidia-driver",
          "value": "True"
        }
      ]
    }
  }
}

What's next?

