Create an N1 VM that has attached GPUs
This document explains how to create a VM that has attached GPUs and uses an N1 machine family. You can use most N1 machine types except the N1 shared-core machine types.
Before you begin
- For additional prerequisite steps, such as selecting an OS image and checking GPU quota, review the overview document.
- If you haven't already, set up authentication. Authentication verifies your identity for access to Google Cloud services and APIs. To run code or samples from a local development environment, you can authenticate to Compute Engine by selecting one of the following options:
Select the tab for how you plan to use the samples on this page:
Console
When you use the Google Cloud console to access Google Cloud services and APIs, you don't need to set up authentication.
gcloud
Install the Google Cloud CLI. After installation, initialize the Google Cloud CLI by running the following command:
gcloud init
If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.
Note: If you installed the gcloud CLI previously, make sure you have the latest version by running gcloud components update.
- Set a default region and zone, for example as shown below.
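Setting default values means that you don't have to repeat the --region and --zone flags in later gcloud commands. The following is a minimal sketch that assumes the illustrative defaults us-central1 and us-central1-a; substitute the region and zone you plan to use:

gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a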
REST
To use the REST API samples on this page in a local development environment, you use the credentials you provide to the gcloud CLI.
Install the Google Cloud CLI. After installation, initialize the Google Cloud CLI by running the following command:
gcloud init
If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.
Note: If you installed the gcloud CLI previously, make sure you have the latest version by running gcloud components update.
For more information, see Authenticate for using REST in the Google Cloud authentication documentation.
Required roles
To get the permissions that you need to create VMs, ask your administrator to grant you the Compute Instance Admin (v1) (roles/compute.instanceAdmin.v1) IAM role on the project. For more information about granting roles, see Manage access to projects, folders, and organizations. An example grant command follows the permissions list below.
This predefined role contains the permissions required to create VMs. To see the exact permissions that are required, expand the Required permissions section:
Required permissions
The following permissions are required to create VMs:
- compute.instances.create on the project
- To use a custom image to create the VM: compute.images.useReadOnly on the image
- To use a snapshot to create the VM: compute.snapshots.useReadOnly on the snapshot
- To use an instance template to create the VM: compute.instanceTemplates.useReadOnly on the instance template
- To specify a subnet for your VM: compute.subnetworks.use on the project or on the chosen subnet
- To specify a static IP address for the VM: compute.addresses.use on the project
- To assign an external IP address to the VM when using a VPC network: compute.subnetworks.useExternalIp on the project or on the chosen subnet
- To assign a legacy network to the VM: compute.networks.use on the project
- To assign an external IP address to the VM when using a legacy network: compute.networks.useExternalIp on the project
- To set VM instance metadata for the VM: compute.instances.setMetadata on the project
- To set tags for the VM: compute.instances.setTags on the VM
- To set labels for the VM: compute.instances.setLabels on the VM
- To set a service account for the VM to use: compute.instances.setServiceAccount on the VM
- To create a new disk for the VM: compute.disks.create on the project
- To attach an existing disk in read-only or read-write mode: compute.disks.use on the disk
- To attach an existing disk in read-only mode: compute.disks.useReadOnly on the disk
You might also be able to get these permissions with custom roles or other predefined roles.
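If you administer the project yourself, the following is a minimal sketch of granting this role with the gcloud CLI; PROJECT_ID and USER_EMAIL are placeholders for your project and the account that needs to create VMs:

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:USER_EMAIL" \
    --role="roles/compute.instanceAdmin.v1"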
Overview
The following GPU models can be attached to VMs that use the N1 machine family.

NVIDIA GPUs:
- NVIDIA T4: nvidia-tesla-t4
- NVIDIA P4: nvidia-tesla-p4
- NVIDIA P100: nvidia-tesla-p100
- NVIDIA V100: nvidia-tesla-v100

NVIDIA RTX Virtual Workstation (vWS) (formerly known as NVIDIA GRID):
- NVIDIA T4 Virtual Workstation: nvidia-tesla-t4-vws
- NVIDIA P4 Virtual Workstation: nvidia-tesla-p4-vws
- NVIDIA P100 Virtual Workstation: nvidia-tesla-p100-vws

For these virtual workstations, an NVIDIA RTX Virtual Workstation (vWS) license is automatically added to your instance.
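GPU availability varies by zone. To confirm which of these accelerator types your project can use in a particular zone, you can list them with the gcloud CLI; the zone in this sketch is only an example:

gcloud compute accelerator-types list --filter="zone:us-central1-a"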
Create a VM that has attached GPUs
You can create an N1 VM that has attached GPUs by using the Google Cloud console, the Google Cloud CLI, or REST.
Console
In the Google Cloud console, go to the Create an instance page.
Specify a Name for your VM. See Resource naming convention.
Select a region and zone where GPUs are available. See the list of available GPU zones.
In the Machine configuration section, select the GPUs machine family, and then do the following:
- In the GPU type list, select one of the GPU models supported on N1 machines.
- In the Number of GPUs list, select the number of GPUs.
If your GPU model supports NVIDIA RTX Virtual Workstations (vWS) for graphics workloads, and you plan on running graphics-intensive workloads on this VM, select Enable Virtual Workstation (NVIDIA GRID).
In the Machine type list, select one of the preset N1 machine types. Alternatively, you can specify custom machine type settings.
In the Boot disk section, click Change. This opens the Boot disk configuration page.
On the Boot disk configuration page, do the following:
- On the Public images tab, choose a supported Compute Engine image or Deep Learning VM Images.
- Specify a boot disk size of at least 40 GB.
- To confirm your boot disk options, click Select.
Optional: In the VM provisioning model list, select a provisioning model.
To create and start the VM, click Create.
gcloud
To create and start a VM, use the gcloud compute instances create command with the following flags.
If your workload is fault-tolerant or can start at any time, then consider using a different provisioning model to reduce your costs. To change your provisioning model, include the --provisioning-model=PROVISIONING_MODEL flag in the command. For more information about the available models, see Compute Engine instances provisioning models.
gcloud compute instances create VM_NAME \
    --machine-type MACHINE_TYPE \
    --zone ZONE \
    --boot-disk-size DISK_SIZE \
    --accelerator type=ACCELERATOR_TYPE,count=ACCELERATOR_COUNT \
    [--image IMAGE | --image-family IMAGE_FAMILY] \
    --image-project IMAGE_PROJECT \
    --maintenance-policy TERMINATE \
    [--provisioning-model=PROVISIONING_MODEL]
Replace the following:
- VM_NAME: the name for the new VM.
- MACHINE_TYPE: the machine type that you selected for your VM.
- ZONE: the zone for the VM. This zone must support the GPU type.
- DISK_SIZE: the size of your boot disk in GB. Specify a boot disk size of at least 40 GB.
- IMAGE or IMAGE_FAMILY that supports GPUs. Specify one of the following:
  - IMAGE: the required version of a public image. For example, --image debian-10-buster-v20200309.
  - IMAGE_FAMILY: an image family. This creates the VM from the most recent, non-deprecated OS image. For example, if you specify --image-family debian-10, Compute Engine creates a VM from the latest version of the OS image in the Debian 10 image family.
  You can also specify a custom image or Deep Learning VM Images.
- IMAGE_PROJECT: the Compute Engine image project that the image family belongs to. If using a custom image or Deep Learning VM Images, specify the project that those images belong to.
- PROVISIONING_MODEL: the provisioning model for the VM. Specify either SPOT or FLEX_START. If you don't specify a provisioning model, the standard model is used. This flag is optional.
- ACCELERATOR_COUNT: the number of GPUs that you want to add to your VM. See GPUs on Compute Engine for a list of GPU limits based on the machine type of your VM.
- ACCELERATOR_TYPE: the GPU model that you want to use. If you plan on running graphics-intensive workloads on this VM, use one of the virtual workstation models. Choose one of the following values:
  NVIDIA GPUs:
  - NVIDIA T4: nvidia-tesla-t4
  - NVIDIA P4: nvidia-tesla-p4
  - NVIDIA P100: nvidia-tesla-p100
  - NVIDIA V100: nvidia-tesla-v100

  NVIDIA RTX Virtual Workstation (vWS) (formerly known as NVIDIA GRID):
  - NVIDIA T4 Virtual Workstation: nvidia-tesla-t4-vws
  - NVIDIA P4 Virtual Workstation: nvidia-tesla-p4-vws
  - NVIDIA P100 Virtual Workstation: nvidia-tesla-p100-vws

  For these virtual workstations, an NVIDIA RTX Virtual Workstation (vWS) license is automatically added to your instance.
Example
For example, you can use the following gcloud command to start an Ubuntu 22.04 VM with 1 NVIDIA T4 GPU and 2 vCPUs in the us-east1-d zone.
gcloud compute instances create gpu-instance-1 \
    --machine-type n1-standard-2 \
    --zone us-east1-d \
    --boot-disk-size 40GB \
    --accelerator type=nvidia-tesla-t4,count=1 \
    --image-family ubuntu-2204-lts \
    --image-project ubuntu-os-cloud \
    --maintenance-policy TERMINATE
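After the command completes, you can check that the GPU was attached as expected by describing the instance. This sketch reuses the example name and zone above; the --format expression simply narrows the output to the accelerator configuration:

gcloud compute instances describe gpu-instance-1 \
    --zone us-east1-d \
    --format="value(guestAccelerators)"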
REST
Identify the GPU type that you want to add to your VM. Submit a GET request to list the GPU types that are available to your project in a specific zone.
To create VMs at a discounted price, you can specify a different provisioning model by adding the "provisioningModel": "PROVISIONING_MODEL" field to the scheduling object in your request. For more information about the available models, see Compute Engine instances provisioning models.
GET https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/acceleratorTypes
Replace the following:
- PROJECT_ID: your project ID.
- ZONE: the zone from which you want to list the available GPU types.
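If you want to issue this request from a shell, one common approach (an assumption, not the only option) is to pass a gcloud access token to curl:

curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    "https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/acceleratorTypes"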
Send a POST request to the instances.insert method. Include the acceleratorType parameter to specify which GPU type you want to use, and include the acceleratorCount parameter to specify how many GPUs you want to add. Also set the onHostMaintenance parameter to TERMINATE.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances

{
  "machineType": "projects/PROJECT_ID/zones/ZONE/machineTypes/MACHINE_TYPE",
  "disks": [
    {
      "type": "PERSISTENT",
      "initializeParams": {
        "diskSizeGb": "DISK_SIZE",
        "sourceImage": "projects/IMAGE_PROJECT/global/images/family/IMAGE_FAMILY"
      },
      "boot": true
    }
  ],
  "name": "VM_NAME",
  "networkInterfaces": [
    {
      "network": "projects/PROJECT_ID/global/networks/NETWORK"
    }
  ],
  "guestAccelerators": [
    {
      "acceleratorCount": ACCELERATOR_COUNT,
      "acceleratorType": "projects/PROJECT_ID/zones/ZONE/acceleratorTypes/ACCELERATOR_TYPE"
    }
  ],
  "scheduling": {
    "automaticRestart": true,
    "onHostMaintenance": "TERMINATE",
    "provisioningModel": "PROVISIONING_MODEL"
  }
}

Replace the following:
- VM_NAME: the name of the VM.
- PROJECT_ID: your project ID.
- ZONE: the zone for the VM. This zone must support the GPU type.
- MACHINE_TYPE: the machine type that you selected for the VM. See GPUs on Compute Engine to see what machine types are available based on your chosen GPU count.
- IMAGE or IMAGE_FAMILY: specify one of the following:
  - IMAGE: the required version of a public image. For example, "sourceImage": "projects/debian-cloud/global/images/debian-10-buster-v20200309".
  - IMAGE_FAMILY: an image family. This creates the VM from the most recent, non-deprecated OS image. For example, if you specify "sourceImage": "projects/debian-cloud/global/images/family/debian-10", Compute Engine creates a VM from the latest version of the OS image in the Debian 10 image family.
  You can also specify a custom image or Deep Learning VM Images.
- IMAGE_PROJECT: the Compute Engine image project that the image family belongs to. If using a custom image or Deep Learning VM Images, specify the project that those images belong to.
- DISK_SIZE: the size of your boot disk in GB. Specify a boot disk size of at least 40 GB.
- NETWORK: the VPC network that you want to use for the VM. You can specify default to use your default network.
- ACCELERATOR_COUNT: the number of GPUs that you want to add to your VM. See GPUs on Compute Engine for a list of GPU limits based on the machine type of your VM.
- ACCELERATOR_TYPE: the GPU model that you want to use. If you plan on running graphics-intensive workloads on this VM, use one of the virtual workstation models. Choose one of the following values:
  NVIDIA GPUs:
  - NVIDIA T4: nvidia-tesla-t4
  - NVIDIA P4: nvidia-tesla-p4
  - NVIDIA P100: nvidia-tesla-p100
  - NVIDIA V100: nvidia-tesla-v100

  NVIDIA RTX Virtual Workstation (vWS) (formerly known as NVIDIA GRID):
  - NVIDIA T4 Virtual Workstation: nvidia-tesla-t4-vws
  - NVIDIA P4 Virtual Workstation: nvidia-tesla-p4-vws
  - NVIDIA P100 Virtual Workstation: nvidia-tesla-p100-vws

  For these virtual workstations, an NVIDIA RTX Virtual Workstation (vWS) license is automatically added to your instance.
- PROVISIONING_MODEL: the provisioning model for the VM. Specify either SPOT or FLEX_START. If you don't specify a provisioning model, the standard model is used. This property is optional. For more information on provisioning models, see Compute Engine instances provisioning models.
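As with the GET request, you can send this POST request from a shell with curl. The following sketch assumes that you saved the request body shown above to a local file named request.json (an illustrative file name):

curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d @request.json \
    "https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances"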
Install drivers
To install the drivers, choose one of the following options:
- If you plan to run graphics-intensive workloads, such as those for gaming and visualization, install drivers for the NVIDIA RTX Virtual Workstation.
- For most workloads, install the GPU drivers.
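Once a driver is installed, you can typically confirm that the guest OS sees the GPU by running the NVIDIA driver's nvidia-smi utility on the VM. The instance name and zone below reuse the earlier gcloud example and are only placeholders:

gcloud compute ssh gpu-instance-1 \
    --zone us-east1-d \
    --command "nvidia-smi"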
What's next?
- Learn more about GPU platforms.
- Add Local SSDs to your instances. Local SSD devices pair well with GPUs when your apps require high-performance storage.
- Install the GPU drivers. If you enabled an NVIDIA RTX Virtual Workstation, install a driver for the virtual workstation.
- To handle GPU host maintenance, see Handling GPU host maintenance events.