Create an A3 High or A2 instance


This document explains how to create a virtual machine (VM) instance that uses a machine type from the A2 or A3 High accelerator-optimized machine series.

For A3 High machine types, this document only covers machine types that have fewer than 8 GPUs attached. These A3 High machine types with fewer than 8 GPUs can only be created as Spot VMs or Flex-start VMs. To create an A3 instance that has 8 GPUs attached, see Create an A3 Mega, A3 High, or A3 Edge instance with GPUDirect enabled.

To create multiple A3 or A2 VMs, you can also use one of the following options:

Before you begin

Required roles

To get the permissions that you need to create VMs, ask your administrator to grant you the Compute Instance Admin (v1) (roles/compute.instanceAdmin.v1) IAM role on the project. For more information about granting roles, see Manage access to projects, folders, and organizations.

This predefined role contains the permissions required to create VMs. To see the exact permissions that are required, expand the Required permissions section:

Required permissions

The following permissions are required to create VMs:

  • compute.instances.create on the project
  • To use a custom image to create the VM: compute.images.useReadOnly on the image
  • To use a snapshot to create the VM: compute.snapshots.useReadOnly on the snapshot
  • To use an instance template to create the VM: compute.instanceTemplates.useReadOnly on the instance template
  • To specify a subnet for your VM: compute.subnetworks.use on the project or on the chosen subnet
  • To specify a static IP address for the VM: compute.addresses.use on the project
  • To assign an external IP address to the VM when using a VPC network: compute.subnetworks.useExternalIp on the project or on the chosen subnet
  • To assign a legacy network to the VM: compute.networks.use on the project
  • To assign an external IP address to the VM when using a legacy network: compute.networks.useExternalIp on the project
  • To set VM instance metadata for the VM: compute.instances.setMetadata on the project
  • To set tags for the VM: compute.instances.setTags on the VM
  • To set labels for the VM: compute.instances.setLabels on the VM
  • To set a service account for the VM to use: compute.instances.setServiceAccount on the VM
  • To create a new disk for the VM: compute.disks.create on the project
  • To attach an existing disk in read-only or read-write mode: compute.disks.use on the disk
  • To attach an existing disk in read-only mode: compute.disks.useReadOnly on the disk

You might also be able to get these permissions with custom roles or other predefined roles.

Create a VM that has attached GPUs

You can create an A2 or A3 accelerator-optimized VM by using the Google Cloud console, Google Cloud CLI, or REST.

Console

  1. In the Google Cloud console, go to the Create an instance page.

    Go to Create an instance
  2. In the Name field, enter a unique name for your instance. See Resource naming convention.
  3. Select a region and zone where these GPU machine types are available. See GPU regions and zones.
  4. In the machine types section, select GPUs.
    1. In the GPU type list, select the GPU type.
      • For A2 accelerator-optimized VMs, select either NVIDIA A100 40GB or NVIDIA A100 80GB.
      • For A3 accelerator-optimized VMs, select NVIDIA H100 80GB.
    2. In the Number of GPUs list, select the number of GPUs. Note: Each accelerator-optimized machine type has a fixed number of GPUs attached. If you adjust the number of GPUs, the machine type changes.
  5. Configure the boot disk as follows:
    1. In the OS and storage section, click Change. This opens the Boot disk configuration page.
    2. On the Boot disk configuration page, do the following:
      1. On the Public images tab, choose a supported Compute Engine image or Deep Learning VM Images.
      2. Specify a boot disk size of at least 40 GiB.
      3. To confirm your boot disk options, click Select.
  6. Configure the provisioning model. In the Advanced options section, under VM provisioning model, select a provisioning model. Note: A3 machine types with fewer than 8 GPUs can only be created as Spot VMs or Flex-start VMs. When creating these specific A3 machine types, you must select either Spot or Flex-start. For A2 machine types, you can select any of the provisioning models.
  7. Optional: In the On VM termination list, select what happens when Compute Engine preempts a Spot VM or when a Flex-start VM reaches the end of its run duration:
    • To stop the VM, select Stop (default).
    • To delete the VM, select Delete.
  8. To create and start the VM, clickCreate.

gcloud

To create and start a VM, use the gcloud compute instances create command with the following flags. VMs with GPUs can't live migrate, so make sure that you set the --maintenance-policy=TERMINATE flag.

The sample command also shows the --provisioning-model flag, which sets the provisioning model for the VM. This flag is required when creating A3 machine types with fewer than 8 GPUs and must be set to either SPOT or FLEX_START. For A2 machine types, this flag is optional. If you don't specify a model, then the standard provisioning model is used. For more information, see Compute Engine instances provisioning models.

  gcloud compute instances create VM_NAME \
      --machine-type=MACHINE_TYPE \
      --zone=ZONE \
      --boot-disk-size=DISK_SIZE \
      --image=IMAGE \
      --image-project=IMAGE_PROJECT \
      --maintenance-policy=TERMINATE \
      --provisioning-model=PROVISIONING_MODEL
Replace the following:
  • VM_NAME: the name for the new VM.
  • MACHINE_TYPE: an A2 machine type, or an A3 machine type with 1, 2, or 4 GPUs. For A3 machine types, you must specify a provisioning model.
  • ZONE: the zone for the VM. This zone must support your selected GPU model.
  • DISK_SIZE: the size of your boot disk in GiB. Specify a boot disk size of at least 40 GiB.
  • IMAGE: an operating system image that supports GPUs. If you want to use the latest image in an image family, replace the --image flag with the --image-family flag and set its value to an image family that supports GPUs. For example: --image-family=rocky-linux-8-optimized-gcp.
    You can also specify a custom image or Deep Learning VM Images.
  • IMAGE_PROJECT: the Compute Engine image project that the OS image belongs to. If you are using a custom image or Deep Learning VM Images, specify the project that those images belong to.
  • PROVISIONING_MODEL: the provisioning model to use to create the VM. You can specify either SPOT or FLEX_START. This flag is required when creating A3 VMs with fewer than 8 GPUs. If you omit the --provisioning-model flag, then the standard provisioning model is used.
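As a quick sanity check, the command can be assembled in a small wrapper script and echoed before it is run. This is a sketch: the name, zone, and image values below are illustrative assumptions, not defaults from this document.

```shell
#!/bin/sh
# Illustrative values only; substitute your own project settings.
VM_NAME="my-a3-vm"
MACHINE_TYPE="a3-highgpu-1g"      # A3 with fewer than 8 GPUs: Spot or Flex-start only
ZONE="us-central1-a"
PROVISIONING_MODEL="SPOT"

# Assemble the command shown above; echo it first as a dry run.
CMD="gcloud compute instances create $VM_NAME \
  --machine-type=$MACHINE_TYPE \
  --zone=$ZONE \
  --boot-disk-size=200GB \
  --image-family=debian-13 \
  --image-project=debian-cloud \
  --maintenance-policy=TERMINATE \
  --provisioning-model=$PROVISIONING_MODEL"
echo "$CMD"
# To actually create the VM on an authenticated workstation: eval "$CMD"
```

Echoing first makes it easy to confirm that the required Spot or Flex-start flag is present before the instance is created.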

REST

Send a POST request to the instances.insert method. VMs with GPUs can't live migrate, so make sure that you set the onHostMaintenance parameter to TERMINATE.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances

{
  "machineType": "projects/PROJECT_ID/zones/ZONE/machineTypes/MACHINE_TYPE",
  "disks": [
    {
      "type": "PERSISTENT",
      "initializeParams": {
        "diskSizeGb": "DISK_SIZE",
        "sourceImage": "SOURCE_IMAGE_URI"
      },
      "boot": true
    }
  ],
  "name": "VM_NAME",
  "networkInterfaces": [
    {
      "network": "projects/PROJECT_ID/global/networks/NETWORK"
    }
  ],
  "scheduling": {
    "onHostMaintenance": "terminate",
    "automaticRestart": true
  }
}
Replace the following:
  • VM_NAME: the name for the new VM.
  • PROJECT_ID: your project ID.
  • ZONE: the zone for the VM. This zone must support your selected GPU model.
  • MACHINE_TYPE: an A2 machine type, or an A3 machine type with 1, 2, or 4 GPUs. For A3 machine types, you must specify a provisioning model.
  • PROVISIONING_MODEL: the provisioning model for the VM. Specify either SPOT or FLEX_START. This field is required when creating A3 VMs with fewer than 8 GPUs. For A2 VMs, this field is optional; if you don't specify a model, then the standard provisioning model is used. For more information, see Compute Engine instances provisioning models.
  • SOURCE_IMAGE_URI: the URI for the specific image or image family that you want to use. For example:
    • Specific image: "sourceImage": "projects/rocky-linux-cloud/global/images/rocky-linux-10-optimized-gcp-v20251017"
    • Image family: "sourceImage": "projects/rocky-linux-cloud/global/images/family/rocky-linux-10-optimized-gcp"
    When you specify an image family, Compute Engine creates a VM from the most recent, non-deprecated OS image in that family. For more information about when to use image families, see Image families best practices.
  • DISK_SIZE: the size of your boot disk in GiB. Specify a boot disk size of at least 40 GiB.
  • NETWORK: the VPC network that you want to use for the VM. You can specify `default` to use your default network.
Additional settings:
  • To specify a provisioning model, add the "provisioningModel": "PROVISIONING_MODEL" field to the scheduling object in your request. This field is required for A3 machine types with fewer than 8 GPUs. If you create Spot VMs, then the onHostMaintenance and automaticRestart fields are ignored.

    "scheduling": {
      "onHostMaintenance": "terminate",
      "automaticRestart": true,
      "provisioningModel": "PROVISIONING_MODEL"
    }
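Assuming an authenticated gcloud CLI, the request can be sent from the shell with curl. The body below is a trimmed sketch that shows only the machine type and scheduling fields; a real request also needs the disks and networkInterfaces fields shown above, and the project, zone, and name values are placeholders.

```shell
#!/bin/sh
# Placeholder values for illustration.
PROJECT_ID="my-project"
ZONE="us-central1-a"
VM_NAME="my-a3-vm"

# Trimmed request body: machine type plus the scheduling block from above.
BODY=$(cat <<EOF
{
  "name": "$VM_NAME",
  "machineType": "projects/$PROJECT_ID/zones/$ZONE/machineTypes/a3-highgpu-1g",
  "scheduling": {
    "onHostMaintenance": "terminate",
    "automaticRestart": true,
    "provisioningModel": "SPOT"
  }
}
EOF
)
echo "$BODY"

# To send the request (requires authentication), uncomment:
# curl -X POST \
#   -H "Authorization: Bearer $(gcloud auth print-access-token)" \
#   -H "Content-Type: application/json" \
#   -d "$BODY" \
#   "https://compute.googleapis.com/compute/v1/projects/$PROJECT_ID/zones/$ZONE/instances"
```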

Install drivers

For the VM to use the GPU, you need to install the GPU driver on your VM.
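After installation, a quick way to confirm that the driver can see the GPU is nvidia-smi. This sketch degrades gracefully on machines where the driver isn't installed yet:

```shell
#!/bin/sh
# Check whether the NVIDIA driver is installed and can see the GPU.
if command -v nvidia-smi >/dev/null 2>&1; then
  STATUS=$(nvidia-smi --query-gpu=name,driver_version --format=csv,noheader 2>/dev/null \
    || echo "driver present, but the GPU query failed")
else
  STATUS="nvidia-smi not found: install the GPU driver first"
fi
echo "$STATUS"
```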

Examples

In these examples, most of the VMs are created by using the Google Cloud CLI. However, you can also use either the Google Cloud console or REST to create these VMs.

The following examples show how to create an A3 Spot VM by using a standard OS image, and an A2 VM by using a Deep Learning VM Images image.

Create an A3 Spot VM by using the Debian 13 OS image family

This example creates an A3 (a3-highgpu-1g) Spot VM by using the Debian 13 OS image family.

gcloud compute instances create VM_NAME \
    --project=PROJECT_ID \
    --zone=ZONE \
    --machine-type=a3-highgpu-1g \
    --provisioning-model=SPOT \
    --maintenance-policy=TERMINATE \
    --image-family=debian-13 \
    --image-project=debian-cloud \
    --boot-disk-size=200GB \
    --scopes=https://www.googleapis.com/auth/cloud-platform

Replace the following:

  • VM_NAME: the name of your VM instance
  • PROJECT_ID: your project ID
  • ZONE: the zone for the VM instance

Create an A2 VM with a Vertex AI Workbench user-managed notebooks instance on the VM

This example creates an A2 Standard (a2-highgpu-1g) VM by using the tf2-ent-2-3-cu110 Deep Learning VM Images image. In this example, optional flags such as boot disk size and scope are specified.

Using DLVM images is the easiest way to get started because these images already have the NVIDIA drivers and CUDA libraries pre-installed.

These images also provide performance optimizations.

The following DLVM images are supported for NVIDIA A100:

  • common-cu110: NVIDIA driver and CUDA pre-installed
  • tf-ent-1-15-cu110: NVIDIA driver, CUDA, and TensorFlow Enterprise 1.15.3 pre-installed
  • tf2-ent-2-1-cu110: NVIDIA driver, CUDA, and TensorFlow Enterprise 2.1.1 pre-installed
  • tf2-ent-2-3-cu110: NVIDIA driver, CUDA, and TensorFlow Enterprise 2.3.1 pre-installed
  • pytorch-1-6-cu110: NVIDIA driver, CUDA, and PyTorch 1.6 pre-installed

For more information about the DLVM images that are available, and the packages installed on the images, see the Deep Learning VM documentation.

gcloud compute instances create VM_NAME \
    --project=PROJECT_ID \
    --zone=ZONE \
    --machine-type=a2-highgpu-1g \
    --maintenance-policy=TERMINATE \
    --image-family=tf2-ent-2-3-cu110 \
    --image-project=deeplearning-platform-release \
    --boot-disk-size=200GB \
    --metadata="install-nvidia-driver=True,proxy-mode=project_editors" \
    --scopes=https://www.googleapis.com/auth/cloud-platform

Replace the following:

  • VM_NAME: the name of your VM instance
  • PROJECT_ID: your project ID
  • ZONE: the zone for the VM instance

The preceding example command also generates a Vertex AI Workbench user-managed notebooks instance for the VM. To access the notebook, in the Google Cloud console, go to the Vertex AI Workbench > User-managed notebooks page.

Go to the User-managed notebooks page

Multi-Instance GPU

A Multi-Instance GPU partitions a single NVIDIA A100 or NVIDIA H100 GPU within the same VM into as many as seven independent GPU instances. They run simultaneously, each with its own memory, cache, and streaming multiprocessors. This setup enables the NVIDIA A100 and H100 GPUs to deliver consistent quality of service (QoS) at up to 7x higher utilization compared to earlier GPU models.

You can create up to seven Multi-Instance GPU instances. For A100 40GB GPUs, each instance is allocated 5 GB of memory. With A100 80GB GPUs, the allocated memory doubles to 10 GB each. With H100 80GB GPUs, each instance is likewise allocated 10 GB of memory.

For more information about using Multi-Instance GPUs, see the NVIDIA Multi-Instance GPU User Guide.

To create Multi-Instance GPUs, complete the following steps:

  1. Create an A2 (A100) or A3 (H100) accelerator-optimized VM instance.

  2. Connect to the VM instance. For more information, see Connect to Linux VMs or Connect to Windows VMs.

  3. Enable NVIDIA GPU drivers.

    Pro Tip: You can skip this step by creating VMs with Deep Learning VM Images. Each Deep Learning VM Images image has an NVIDIA GPU driver pre-installed.

  4. Enable Multi-Instance GPUs.

    sudo nvidia-smi -mig 1
  5. Review the Multi-Instance GPU shapes that are available.

    sudo nvidia-smi mig --list-gpu-instance-profiles

    The output is similar to the following:

    +-----------------------------------------------------------------------------+
    | GPU instance profiles:                                                      |
    | GPU   Name             ID    Instances   Memory     P2P    SM    DEC   ENC  |
    |                              Free/Total   GiB              CE    JPEG  OFA  |
    |=============================================================================|
    |   0  MIG 1g.10gb       19     7/7        9.62       No     16     1     0   |
    |                                                             1     1     0   |
    +-----------------------------------------------------------------------------+
    |   0  MIG 1g.10gb+me    20     1/1        9.62       No     16     1     0   |
    |                                                             1     1     1   |
    +-----------------------------------------------------------------------------+
    |   0  MIG 1g.20gb       15     4/4        19.50      No     26     1     0   |
    |                                                             1     1     0   |
    +-----------------------------------------------------------------------------+
    |   0  MIG 2g.20gb       14     3/3        19.50      No     32     2     0   |
    |                                                             2     2     0   |
    +-----------------------------------------------------------------------------+
    |   0  MIG 3g.40gb        9     2/2        39.25      No     60     3     0   |
    |                                                             3     3     0   |
    +-----------------------------------------------------------------------------+
    ...
  6. Create the Multi-Instance GPU instances (GI) and associated compute instances (CI) that you want. You can create these instances by specifying either the full or shortened profile name, the profile ID, or a combination of both. For more information, see Creating GPU Instances.

    The following example creates two MIG 3g.40gb GPU instances by using the profile ID (9).

    The -C flag is also specified, which creates the associated compute instances for the required profile.

    sudo nvidia-smi mig -cgi 9,9 -C
  7. Check that the two Multi-Instance GPUs are created:

    sudo nvidia-smi mig -lgi
  8. Check that both the GIs and corresponding CIs are created.

    sudo nvidia-smi

    The output is similar to the following:

    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 525.125.06   Driver Version: 525.125.06   CUDA Version: 12.0     |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |                               |                      |               MIG M. |
    |===============================+======================+======================|
    |   0  NVIDIA H100 80G...  Off  | 00000000:04:00.0 Off |                   On |
    | N/A   33C    P0    70W / 700W |     39MiB / 81559MiB |     N/A      Default |
    |                               |                      |              Enabled |
    +-------------------------------+----------------------+----------------------+
    |   1  NVIDIA H100 80G...  Off  | 00000000:05:00.0 Off |                   On |
    | N/A   32C    P0    69W / 700W |     39MiB / 81559MiB |     N/A      Default |
    |                               |                      |              Enabled |
    +-------------------------------+----------------------+----------------------+
    ...
    +-----------------------------------------------------------------------------+
    | MIG devices:                                                                |
    +------------------+----------------------+-----------+-----------------------+
    | GPU  GI  CI  MIG |         Memory-Usage |        Vol|         Shared        |
    |      ID  ID  Dev |           BAR1-Usage | SM     Unc| CE  ENC  DEC  OFA  JPG|
    |                  |                      |        ECC|                       |
    |==================+======================+===========+=======================|
    |  0    1   0   0  |     19MiB / 40192MiB | 60      0 |  3   0    3    0    3 |
    |                  |      0MiB / 65535MiB |           |                       |
    +------------------+----------------------+-----------+-----------------------+
    |  0    2   0   1  |     19MiB / 40192MiB | 60      0 |  3   0    3    0    3 |
    |                  |      0MiB / 65535MiB |           |                       |
    +------------------+----------------------+-----------+-----------------------+
    ...
    +-----------------------------------------------------------------------------+
    | Processes:                                                                  |
    |  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
    |        ID   ID                                                   Usage      |
    |=============================================================================|
    |  No running processes found                                                 |
    +-----------------------------------------------------------------------------+
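To run a workload on one specific MIG instance, you typically pin it by device UUID. This sketch lists the UUIDs when run on the VM and falls back to a message elsewhere; the train.py name in the comment is a placeholder, not a script from this document.

```shell
#!/bin/sh
# List GPU and MIG device UUIDs (meaningful only where the driver is installed).
if command -v nvidia-smi >/dev/null 2>&1; then
  MIG_LIST=$(nvidia-smi -L)
else
  MIG_LIST="nvidia-smi not found: run this on the VM after installing drivers"
fi
echo "$MIG_LIST"

# Pin a process to one MIG instance by its UUID, for example:
#   CUDA_VISIBLE_DEVICES=MIG-<uuid> python3 train.py
```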

What's next?

Except as otherwise noted, the content of this page is licensed under theCreative Commons Attribution 4.0 License, and code samples are licensed under theApache 2.0 License. For details, see theGoogle Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2025-12-15 UTC.