Create an A3, A2, or G2 VM
This document explains how to create a VM that uses a machine type from the A3 High, A3 Mega, A3 Edge, A2, and G2 machine series. To learn more about creating VMs with attached GPUs, see Overview of creating an instance with attached GPUs.

Tip: When provisioning A3 Ultra machine types, you must reserve capacity to create VMs or clusters, use Spot VMs, or create a resize request in a MIG. For more information about the parameters to set when creating an A3 Ultra VM, see Create an A3 Ultra or A4 instance.

Before you begin
- To review limitations and additional prerequisite steps for creating instances with attached GPUs, such as selecting an OS image and checking GPU quota, see Overview of creating an instance with attached GPUs.
- If you haven't already, then set up authentication. Authentication is the process by which your identity is verified for access to Google Cloud services and APIs. To run code or samples from a local development environment, you can authenticate to Compute Engine by selecting one of the following options:
Select the tab for how you plan to use the samples on this page:
Console
When you use the Google Cloud console to access Google Cloud services and APIs, you don't need to set up authentication.
gcloud
After installing the Google Cloud CLI, initialize it by running the following command:

gcloud init
If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.
Note: If you installed the gcloud CLI previously, make sure you have the latest version by running gcloud components update.
- Set a default region and zone.
REST
To use the REST API samples on this page in a local development environment, you use the credentials you provide to the gcloud CLI.
After installing the Google Cloud CLI, initialize it by running the following command:

gcloud init
If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.
For more information, see Authenticate for using REST in the Google Cloud authentication documentation.
Required roles
To get the permissions that you need to create VMs, ask your administrator to grant you the Compute Instance Admin (v1) (roles/compute.instanceAdmin.v1) IAM role on the project. For more information about granting roles, see Manage access to projects, folders, and organizations.
This predefined role contains the permissions required to create VMs. To see the exact permissions that are required, expand the Required permissions section:
Required permissions
The following permissions are required to create VMs:
- compute.instances.create on the project
- To use a custom image to create the VM: compute.images.useReadOnly on the image
- To use a snapshot to create the VM: compute.snapshots.useReadOnly on the snapshot
- To use an instance template to create the VM: compute.instanceTemplates.useReadOnly on the instance template
- To assign a legacy network to the VM: compute.networks.use on the project
- To specify a static IP address for the VM: compute.addresses.use on the project
- To assign an external IP address to the VM when using a legacy network: compute.networks.useExternalIp on the project
- To specify a subnet for your VM: compute.subnetworks.use on the project or on the chosen subnet
- To assign an external IP address to the VM when using a VPC network: compute.subnetworks.useExternalIp on the project or on the chosen subnet
- To set VM instance metadata for the VM: compute.instances.setMetadata on the project
- To set tags for the VM: compute.instances.setTags on the VM
- To set labels for the VM: compute.instances.setLabels on the VM
- To set a service account for the VM to use: compute.instances.setServiceAccount on the VM
- To create a new disk for the VM: compute.disks.create on the project
- To attach an existing disk in read-only or read-write mode: compute.disks.use on the disk
- To attach an existing disk in read-only mode: compute.disks.useReadOnly on the disk
You might also be able to get these permissions with custom roles or other predefined roles.
Create a VM that has attached GPUs
You can create an A3 High, A3 Mega, A3 Edge, A2, or G2 accelerator-optimized VM by using the Google Cloud console, Google Cloud CLI, or REST.

To make some customizations to your G2 VMs, you might need to use the Google Cloud CLI or REST. See G2 limitations.
Console
In the Google Cloud console, go to the Create an instance page.

Specify a Name for your VM. See Resource naming convention.

Select a region and zone where GPUs are available. See the list of available GPU regions and zones.
In the Machine configuration section, select the GPUs machine family.

Complete one of the following steps to select either a predefined or custom machine type based on the machine series:

For all GPU machine series, you can select a predefined machine type as follows:

In the GPU type list, select your GPU type.
- For A3 High, A3 Mega, or A3 Edge accelerator-optimized VMs, select NVIDIA H100 80GB or NVIDIA H100 80GB MEGA.
- For A2 accelerator-optimized VMs, select either NVIDIA A100 40GB or NVIDIA A100 80GB.
- For G2 accelerator-optimized VMs, select NVIDIA L4.

In the Number of GPUs list, select the number of GPUs.

Note: Each accelerator-optimized machine type has a fixed number of GPUs attached. If you adjust the number of GPUs, the machine type changes.
For the G2 machine series, you can select a custom machine type as follows:
- In the GPU type list, select NVIDIA L4.
- In the Machine type section, select Custom.
- To specify the number of vCPUs and the amount of memory for the instance, drag the sliders or enter the values in the text boxes. The console displays an estimated cost for the instance as you change the number of vCPUs and memory.
Optional: The G2 machine series supports NVIDIA RTX Virtual Workstations (vWS) for graphics workloads. If you plan on running graphics-intensive workloads on your G2 VM, select Enable Virtual Workstation (NVIDIA GRID).
In the Boot disk section, click Change. This opens the Boot disk configuration page.

On the Boot disk configuration page, do the following:
- On the Public images tab, choose a supported Compute Engine image or Deep Learning VM Images.
- Specify a boot disk size of at least 40 GB.
- To confirm your boot disk options, click Select.
Optional: Configure the provisioning model. For example, if your workload is fault-tolerant and can withstand possible VM preemption, consider using Spot VMs to reduce the cost of your VMs and the attached GPUs. For more information, see GPUs on Spot VMs. To do this, complete the following steps:
- In the Availability policies section, select Spot from the VM provisioning model list. This setting disables automatic restart and host maintenance options for the VM.
- Optional: In the On VM termination list, select what happens when Compute Engine preempts the VM:
  - To stop the VM during preemption, select Stop (default).
  - To delete the VM during preemption, select Delete.
To create and start the VM, click Create.
gcloud
To create and start a VM, use the gcloud compute instances create command with the following flags. VMs with GPUs can't live migrate, so make sure that you set the --maintenance-policy=TERMINATE flag.
The following optional flags are shown in the sample command:
- The --provisioning-model=SPOT flag, which configures your VMs as Spot VMs. If your workload is fault-tolerant and can withstand possible VM preemption, consider using Spot VMs to reduce the cost of your VMs and the attached GPUs. For more information, see GPUs on Spot VMs. For Spot VMs, the automatic restart and host maintenance options flags are disabled.
- The --accelerator flag to specify a virtual workstation. NVIDIA RTX Virtual Workstations (vWS) are supported only for G2 VMs.

gcloud compute instances create VM_NAME \
    --machine-type=MACHINE_TYPE \
    --zone=ZONE \
    --boot-disk-size=DISK_SIZE \
    --image=IMAGE \
    --image-project=IMAGE_PROJECT \
    --maintenance-policy=TERMINATE \
    [--provisioning-model=SPOT] \
    [--accelerator=type=nvidia-l4-vws,count=VWS_ACCELERATOR_COUNT]
Replace the following:
- VM_NAME: the name for the new VM.
- MACHINE_TYPE: the machine type that you selected. Choose from one of the following:
  - An A3 machine type.
  - An A2 machine type.
  - A G2 machine type. G2 machine types also support custom memory. Memory must be a multiple of 1024 MB and within the supported memory range. For example, to create a VM with 4 vCPUs and 19 GB of memory, specify --machine-type=g2-custom-4-19456.
- ZONE: the zone for the VM. This zone must support your selected GPU model.
- DISK_SIZE: the size of your boot disk in GB. Specify a boot disk size of at least 40 GB.
- IMAGE: an operating system image that supports GPUs. If you want to use the latest image in an image family, replace the --image flag with the --image-family flag and set its value to an image family that supports GPUs. For example: --image-family=rocky-linux-8-optimized-gcp. You can also specify a custom image or Deep Learning VM Images.
- IMAGE_PROJECT: the Compute Engine image project that the OS image belongs to. If using a custom image or Deep Learning VM Images, specify the project that those images belong to.
- VWS_ACCELERATOR_COUNT: the number of virtual GPUs that you need.
REST
Send a POST request to the instances.insert method. VMs with GPUs can't live migrate, so make sure you set the onHostMaintenance parameter to TERMINATE.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances

{
  "machineType": "projects/PROJECT_ID/zones/ZONE/machineTypes/MACHINE_TYPE",
  "disks": [
    {
      "type": "PERSISTENT",
      "initializeParams": {
        "diskSizeGb": "DISK_SIZE",
        "sourceImage": "SOURCE_IMAGE_URI"
      },
      "boot": true
    }
  ],
  "name": "VM_NAME",
  "networkInterfaces": [
    {
      "network": "projects/PROJECT_ID/global/networks/NETWORK"
    }
  ],
  "scheduling": {
    "onHostMaintenance": "terminate",
    "automaticRestart": true
  }
}
Replace the following:
- VM_NAME: the name for the new VM.
- PROJECT_ID: your project ID.
- ZONE: the zone for the VM. This zone must support your selected GPU model.
- MACHINE_TYPE: the machine type that you selected. Choose from one of the following:
  - An A3 machine type.
  - An A2 machine type.
  - A G2 machine type. G2 machine types also support custom memory. Memory must be a multiple of 1024 MB and within the supported memory range. For example, to create a VM with 4 vCPUs and 19 GB of memory, specify g2-custom-4-19456.
- SOURCE_IMAGE_URI: the URI for the specific image or image family that you want to use. For example:
  - Specific image: "sourceImage": "projects/rocky-linux-cloud/global/images/rocky-linux-8-optimized-gcp-v20220719"
  - Image family: "sourceImage": "projects/rocky-linux-cloud/global/images/family/rocky-linux-8-optimized-gcp"
- DISK_SIZE: the size of your boot disk in GB. Specify a boot disk size of at least 40 GB.
- NETWORK: the VPC network that you want to use for the VM. You can specify `default` to use your default network.
- If your workload is fault-tolerant and can withstand possible VM preemption, consider using Spot VMs to reduce the cost of your VMs and the attached GPUs. For more information, see GPUs on Spot VMs. To specify Spot VMs, add the "provisioningModel": "SPOT" option to your request. For Spot VMs, the automatic restart and host maintenance options flags are disabled.

  "scheduling": {
    "provisioningModel": "SPOT"
  }

- For G2 VMs, NVIDIA RTX Virtual Workstations (vWS) are supported. To specify a virtual workstation, add the `guestAccelerators` option to your request. Replace VWS_ACCELERATOR_COUNT with the number of virtual GPUs that you need.

  "guestAccelerators": [
    {
      "acceleratorCount": VWS_ACCELERATOR_COUNT,
      "acceleratorType": "projects/PROJECT_ID/zones/ZONE/acceleratorTypes/nvidia-l4-vws"
    }
  ]
Install drivers
For the VM to use the GPU, you need to install the GPU driver on your VM.
Examples
In these examples, most of the VMs are created by using the Google Cloud CLI. However, you can also use either the Google Cloud console or REST to create these VMs.

The following examples show how to create VMs using the following images:
- Deep Learning VM Images. This example uses the A2 Standard (a2-highgpu-1g) VM.
- Container-Optimized OS (COS) image. This example uses either an a3-highgpu-8g or a3-edgegpu-8g VM.
- Public image. This example uses a G2 VM.
COS (A3 Edge/High)
You can create either a3-edgegpu-8g or a3-highgpu-8g VMs that have attached H100 GPUs by using Container-Optimized OS (COS) images.

For detailed instructions on how to create these a3-edgegpu-8g or a3-highgpu-8g VMs that use Container-Optimized OS, see Create an A3 VM with GPUDirect-TCPX enabled.
Public OS image (G2)
You can create VMs that have attached GPUs by using either a public image that is available on Compute Engine or a custom image.

To create a VM that uses the most recent, non-deprecated image from the Rocky Linux 8 optimized for Google Cloud image family, uses the g2-standard-8 machine type, and has an NVIDIA RTX Virtual Workstation, complete the following steps:

Create the VM. In this example, optional flags such as boot disk type and size are also specified.

gcloud compute instances create VM_NAME \
    --project=PROJECT_ID \
    --zone=ZONE \
    --machine-type=g2-standard-8 \
    --maintenance-policy=TERMINATE --restart-on-failure \
    --network-interface=nic-type=GVNIC \
    --accelerator=type=nvidia-l4-vws,count=1 \
    --image-family=rocky-linux-8-optimized-gcp \
    --image-project=rocky-linux-cloud \
    --boot-disk-size=200GB \
    --boot-disk-type=pd-ssd
Replace the following:
- VM_NAME: the name of your VM.
- PROJECT_ID: your project ID.
- ZONE: the zone for the VM.
Install the NVIDIA driver and CUDA. For NVIDIA L4 GPUs, CUDA version XX or higher is required.
DLVM image (A2)
Using DLVM images is the easiest way to get started because these images already have the NVIDIA drivers and CUDA libraries pre-installed.
These images also provide performance optimizations.
The following DLVM images are supported for NVIDIA A100:
- common-cu110: NVIDIA driver and CUDA pre-installed
- tf-ent-1-15-cu110: NVIDIA driver, CUDA, TensorFlow Enterprise 1.15.3 pre-installed
- tf2-ent-2-1-cu110: NVIDIA driver, CUDA, TensorFlow Enterprise 2.1.1 pre-installed
- tf2-ent-2-3-cu110: NVIDIA driver, CUDA, TensorFlow Enterprise 2.3.1 pre-installed
- pytorch-1-6-cu110: NVIDIA driver, CUDA, PyTorch 1.6 pre-installed
For more information about the DLVM images that are available, and the packages installed on the images, see the Deep Learning VM documentation.
Create a VM using the tf2-ent-2-3-cu110 image and the a2-highgpu-1g machine type. In this example, optional flags such as boot disk size and scope are specified.

gcloud compute instances create VM_NAME \
    --project PROJECT_ID \
    --zone ZONE \
    --machine-type a2-highgpu-1g \
    --maintenance-policy TERMINATE \
    --image-family tf2-ent-2-3-cu110 \
    --image-project deeplearning-platform-release \
    --boot-disk-size 200GB \
    --metadata "install-nvidia-driver=True,proxy-mode=project_editors" \
    --scopes https://www.googleapis.com/auth/cloud-platform
Replace the following:
- VM_NAME: the name of your VM.
- PROJECT_ID: your project ID.
- ZONE: the zone for the VM.
The preceding example command also generates a Vertex AI Workbench user-managed notebooks instance for the VM. To access the notebook, in the Google Cloud console, go to the Vertex AI Workbench > User-managed notebooks page.
Multi-Instance GPU (A3 and A2 VMs only)
A Multi-Instance GPU partitions a single NVIDIA H100 or A100 GPU within the same VM into as many as seven independent GPU instances. They run simultaneously, each with its own memory, cache, and streaming multiprocessors. This setup enables the NVIDIA H100 or A100 GPU to deliver guaranteed quality of service (QoS) at up to 7x higher utilization compared to earlier GPU models.

You can create up to seven Multi-Instance GPUs. For A100 40GB GPUs, each Multi-Instance GPU is allocated 5 GB of memory. With the A100 80GB and H100 80GB GPUs, the allocated memory doubles to 10 GB each.
For more information about using Multi-Instance GPUs, see the NVIDIA Multi-Instance GPU User Guide.
To create Multi-Instance GPUs, complete the following steps:
Create an A3 High, A3 Mega, A3 Edge, or A2 accelerator-optimized VM.
Enable NVIDIA GPU drivers.

Pro Tip: You can skip this step by creating VMs with Deep Learning VM Images. Each Deep Learning VM image has an NVIDIA GPU driver pre-installed.
Enable Multi-Instance GPUs:
sudo nvidia-smi -mig 1
Review the Multi-Instance GPU shapes that are available.
sudo nvidia-smi mig --list-gpu-instance-profiles
The output is similar to the following:
+-----------------------------------------------------------------------------+
| GPU instance profiles:                                                      |
| GPU   Name             ID    Instances   Memory     P2P    SM    DEC   ENC  |
|                              Free/Total   GiB              CE    JPEG  OFA  |
|=============================================================================|
|   0  MIG 1g.10gb       19     7/7         9.62      No     16     1     0   |
|                                                             1     1     0   |
+-----------------------------------------------------------------------------+
|   0  MIG 1g.10gb+me    20     1/1         9.62      No     16     1     0   |
|                                                             1     1     1   |
+-----------------------------------------------------------------------------+
|   0  MIG 1g.20gb       15     4/4        19.50      No     26     1     0   |
|                                                             1     1     0   |
+-----------------------------------------------------------------------------+
|   0  MIG 2g.20gb       14     3/3        19.50      No     32     2     0   |
|                                                             2     2     0   |
+-----------------------------------------------------------------------------+
|   0  MIG 3g.40gb        9     2/2        39.25      No     60     3     0   |
|                                                             3     3     0   |
+-----------------------------------------------------------------------------+
.......
Create the Multi-Instance GPU (GI) and associated compute instances (CI) that you want. You can create these instances by specifying either the full or shortened profile name, the profile ID, or a combination of both. For more information, see Creating GPU Instances.
The following example creates two MIG 3g.40gb GPU instances by using the profile ID (9). The -C flag is also specified, which creates the associated compute instances for the required profile.

sudo nvidia-smi mig -cgi 9,9 -C
Check that the two Multi-Instance GPUs are created:
sudo nvidia-smi mig -lgi
Check that both the GIs and corresponding CIs are created.
sudo nvidia-smi
The output is similar to the following:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.125.06   Driver Version: 525.125.06   CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA H100 80G...  Off  | 00000000:04:00.0 Off |                   On |
| N/A   33C    P0    70W / 700W |     39MiB / 81559MiB |     N/A      Default |
|                               |                      |              Enabled |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA H100 80G...  Off  | 00000000:05:00.0 Off |                   On |
| N/A   32C    P0    69W / 700W |     39MiB / 81559MiB |     N/A      Default |
|                               |                      |              Enabled |
+-------------------------------+----------------------+----------------------+
......
+-----------------------------------------------------------------------------+
| MIG devices:                                                                |
+------------------+----------------------+-----------+-----------------------+
| GPU  GI  CI  MIG |         Memory-Usage |        Vol|        Shared         |
|      ID  ID  Dev |           BAR1-Usage | SM     Unc| CE  ENC  DEC  OFA  JPG|
|                  |                      |        ECC|                       |
|==================+======================+===========+=======================|
|   0   1   0   0  |    19MiB / 40192MiB  | 60      0 |  3   0    3    0    3 |
|                  |     0MiB / 65535MiB  |           |                       |
+------------------+----------------------+-----------+-----------------------+
|   0   2   0   1  |    19MiB / 40192MiB  | 60      0 |  3   0    3    0    3 |
|                  |     0MiB / 65535MiB  |           |                       |
+------------------+----------------------+-----------+-----------------------+
......
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
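When checking the MIG setup from a script rather than by eye, the instance count can be extracted from captured `nvidia-smi mig -lgi`-style output. The sample text below is an illustrative stand-in (running the real command requires a VM with MIG-enabled GPUs), and the parsing pattern is ours, not an official interface:

```shell
# Illustrative sketch: count GPU instances in captured `nvidia-smi mig -lgi`
# style output. The sample lines mimic two 3g.40gb instances on GPU 0.
sample_output='|   0  MIG 3g.40gb          9        1          0:4     |
|   0  MIG 3g.40gb          9        2          4:4     |'

# Each GPU instance row contains a "MIG <profile>" token; count those rows.
gi_count=$(printf '%s\n' "$sample_output" | grep -c 'MIG ')
echo "GPU instances found: ${gi_count}"   # prints "GPU instances found: 2"
```

On a real VM, you would pipe `sudo nvidia-smi mig -lgi` into the same `grep -c` instead of using a sample string.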
What's next?
- Learn more about GPU platforms.
- Add Local SSDs to your instances. Local SSD devices pair well with GPUs when your apps require high-performance storage.
- Install the GPU drivers.
- If you enabled an NVIDIA RTX Virtual Workstation, install a driver for the virtual workstation.
- To handle GPU host maintenance, see Handling GPU host maintenance events.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-07-14 UTC.