Install GPU drivers


After you create a virtual machine (VM) instance with one or more GPUs, your system requires NVIDIA device drivers so that your applications can access the device. Make sure your VM instances have enough free disk space: choose at least 40 GB for the boot disk when creating the new VM.
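For example, a minimal sketch of creating a GPU VM that meets this disk-size guidance; the instance name, zone, machine type, and image shown here are placeholders, so adjust them to your setup:

  # Example only: create an N1 VM with one T4 GPU and a 50 GB boot disk.
  gcloud compute instances create my-gpu-vm \
      --zone=us-central1-a \
      --machine-type=n1-standard-4 \
      --accelerator=count=1,type=nvidia-tesla-t4 \
      --image-family=debian-12 \
      --image-project=debian-cloud \
      --boot-disk-size=50GB \
      --maintenance-policy=TERMINATE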

To install the drivers, you have two options to choose from:

  • Install GPU drivers on VMs by using the CUDA Toolkit guides.
  • Install GPU drivers on VMs by using the installation script.

NVIDIA driver, CUDA toolkit, and CUDA runtime versions

There are different versioned components of drivers and runtime that might be needed in your environment. These include the following components:

  • NVIDIA driver
  • CUDA toolkit
  • CUDA runtime

When installing these components, you can configure your environment to suit your needs. For example, if you have an earlier version of TensorFlow that works best with an earlier version of the CUDA toolkit, but the GPU that you want to use requires a later version of the NVIDIA driver, then you can install an earlier version of the CUDA toolkit along with a later version of the NVIDIA driver.

However, you must make sure that your NVIDIA driver and CUDA toolkit versions are compatible. For CUDA toolkit and NVIDIA driver compatibility, see the NVIDIA documentation about CUDA compatibility.
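As a quick compatibility check on an existing VM, you can compare the installed driver version against the toolkit's compiler version (a minimal sketch, assuming the toolkit was installed under the default /usr/local/cuda prefix):

  # Print the installed NVIDIA driver version.
  nvidia-smi --query-gpu=driver_version --format=csv,noheader

  # Print the installed CUDA compiler (toolkit) version, if a toolkit is present.
  /usr/local/cuda/bin/nvcc --version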

Understand NVIDIA driver branches

NVIDIA provides the following three driver branches:

  • Long-Term Support Branch (LTSB): this branch prioritizes stability and minimizes maintenance, with an extended support lifecycle of three years. The latest LTSB tested and verified by Google is R580, which has an end of support date in August 2028.
  • Production Branch (PB): this branch provides performance enhancements and support for the latest hardware. It fully supports production workloads but has a shorter support lifecycle of up to one year. The latest PB tested and verified by Google is R570, which has an end of support date in February 2026.
  • New Feature Branch (NFB): this branch is for early adopters to test new features and is not recommended for production environments.

For production workloads, use either the Production Branch or the Long-Term Support Branch. For more details about the NVIDIA branches, see the NVIDIA documentation.

Recommended NVIDIA driver branches

Use the table in this section to help you determine the best NVIDIA driver branch for your GPU machine type.

Note: For all machine types, you can use any of the branches listed in the Supported branches column. However, the recommended branch is tested and verified to work on Compute Engine, provides the longest support window, and requires the least maintenance. Typically, the recommended branch is a long-term support branch, unless the machine type requires a more recent production branch.

In the following table, EOS indicates that NVIDIA lists that branch as reaching end of support. N/A indicates that the specified operating system (OS) can't run on the machine type.

Machine type | GPU model | Supported branches | Recommended branch (EOS date) | Minimum driver for recommended branch
A4X | NVIDIA Blackwell GB200 Superchip | R570 or later | R580 (Aug 2028) | Linux: 580.82.07 or later; Windows: N/A
A4 | NVIDIA Blackwell B200 | R570 or later | R580 (Aug 2028) | Linux: 580.82.07 or later; Windows: N/A
A3 Ultra | NVIDIA H200 | R570 or later | R580 (Aug 2028) | Linux: 580.82.07 or later; Windows: N/A
A3 Mega, High, Edge | NVIDIA H100 | R535 or later | R535 (Jun 2026) | Linux: 535.230.02 or later; Windows: N/A
G4 | NVIDIA RTX PRO 6000 | R580 or later | R580 (Aug 2028) | Linux: 580.95.05 or later; Windows: 581.42 or later
G2 | NVIDIA L4 | R535 or later | R535 (Jun 2026) | Linux: 535.230.02 or later; Windows: 538.67 or later
A2 Standard, A2 Ultra | NVIDIA A100 | R535 or later | R535 (Jun 2026) | Linux: 535.230.02 or later; Windows: 538.67 or later
N1 | NVIDIA T4 | R535 or later | R535 (Jun 2026) | Linux: 535.230.02 or later; Windows: 538.67 or later
N1 | NVIDIA V100, P100, P4 | R535 to R580¹ | R535 (Jun 2026) | Linux: 535.230.02 or later; Windows: 538.67 or later

¹ NVIDIA announced that R580 is the last driver branch to support the Pascal (P4 and P100) and Volta (V100) architectures.

Install GPU drivers on VMs by using CUDA Toolkit guides

One way to install the NVIDIA driver on most VMs is to install the CUDA Toolkit.

Note: With the exception of Windows, these instructions don't work on VMs that have Secure Boot enabled. For VMs that have Secure Boot enabled, see Installing GPU drivers (Secure Boot VMs).

To install the CUDA Toolkit, complete the following steps:

  1. Select a CUDA Toolkit version that supports the driver version that you need.

    Machine type | GPU model | Recommended CUDA Toolkit
    A4X | NVIDIA Blackwell GB200 Superchip | CUDA 12.8.1 or later
    A4 | NVIDIA Blackwell B200 | CUDA 12.8.1 or later
    A3 Ultra | NVIDIA H200 | CUDA 12.4 or later
    G4 | NVIDIA RTX PRO 6000 | CUDA 13.1 or later
    G2 | NVIDIA L4 | CUDA 12.2.2 or later
    A3 Mega, High, Edge | NVIDIA H100 | CUDA 12.2.2 or later
    A2 Standard, A2 Ultra | NVIDIA A100 | CUDA 12.2.2 or later
    N1 | NVIDIA T4 | CUDA 12.2.2 or later
    N1 | NVIDIA V100, P100, P4 | CUDA 12.2.2 to CUDA 12 (final version)¹

    ¹ CUDA Toolkit 12 is the last release to support the Pascal (P4 and P100) and Volta (V100) architectures. NVIDIA announced that offline compilation and library support for these architectures is removed starting with the CUDA Toolkit 13.0 major version release. For more information, see the NVIDIA 13.0 driver release notes.

  2. Connect to the VM where you want to install the driver.

  3. On your VM, download and install the CUDA Toolkit. To find the CUDA Toolkit package and installation instructions, see CUDA Toolkit Archive in the NVIDIA documentation.
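For example, a minimal sketch of installing the toolkit from NVIDIA's apt repository on an Ubuntu 22.04 x86_64 VM; the toolkit release used here (12.4) is an assumption, so pick the version from the preceding table and follow the exact commands that the CUDA Toolkit Archive lists for your OS:

    # Add NVIDIA's CUDA repository keyring (Ubuntu 22.04 x86_64 shown).
    wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
    sudo dpkg -i cuda-keyring_1.1-1_all.deb

    # Install the chosen CUDA Toolkit release (12.4 used only as an example).
    sudo apt-get update
    sudo apt-get install -y cuda-toolkit-12-4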

Install GPU drivers on VMs by using installation script

You can use the following scripts to automate the installation process. To review these scripts, see the GitHub repository.

Note: These scripts won't work on Linux VMs that have Secure Boot enabled. For Linux VMs that have Secure Boot enabled, see Installing GPU drivers (Secure Boot VMs).

Linux

Use these instructions to install GPU drivers on a running VM.

Supported operating systems

The Linux installation script was tested on the following operating systems:

  • Debian 12
  • Red Hat Enterprise Linux (RHEL) 8 and 9
  • Rocky Linux 8 and 9
  • Ubuntu 22 and 24

If you use this script on other operating systems, the installation might fail. This script can install the NVIDIA driver as well as the CUDA Toolkit.

To install the GPU drivers and CUDA Toolkit, complete the following steps:

  1. If you have version 2.38.0 or later of the Ops Agent collecting GPU metrics on your VM, you must stop the agent before you can install or upgrade your GPU drivers by using this installation script.

    To stop the Ops Agent, run the following command:

    sudo systemctl stop google-cloud-ops-agent
  2. Ensure that Python 3 is installed on your operating system.

  3. Download the installation script.

    curl -L https://storage.googleapis.com/compute-gpu-installation-us/installer/latest/cuda_installer.pyz --output cuda_installer.pyz
  4. Run the installation script.

    sudo python3 cuda_installer.pyz install_driver --installation-mode=INSTALLATION_MODE --installation-branch=BRANCH

    • INSTALLATION_MODE: the installation method. Use one of the following values:
      • repo: (Default) installs the driver from the official NVIDIA package repository.
      • binary: installs the driver using the binary installation package.
    • BRANCH: the driver branch you want to install. Use one of the following values:
      • prod: (Default) the production branch. This branch is qualified for use in production environments for enterprise and data center GPUs.
      • nfb: the new feature branch. This branch includes the latest updates for early adopters. This branch is not recommended for production environments.
      • lts: the long-term support branch. This branch is maintained for a longer period than a normal production branch.

    The script takes some time to run. It will restart your VM. When the VM restarts, run the script again to continue the installation.

  5. Verify the installation. See Verify the GPU driver install.

  6. You can also use this tool to install the CUDA Toolkit. To install the CUDA Toolkit, run the following command:

    sudo python3 cuda_installer.pyz install_cuda --installation-mode=INSTALLATION_MODE --installation-branch=BRANCH

    Make sure that you use the same values for INSTALLATION_MODE and BRANCH as you did during driver installation.

    The script will take a while to run. It will restart your VM. When the VM restarts, run the script again to continue the installation.

  7. Verify the CUDA Toolkit installation.

    python3 cuda_installer.pyz verify_cuda
  8. After you complete the installation, you must reboot the VM.
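For example, a complete run with the repo installation mode and the production branch might look like the following sequence; rerun each command after the VM restarts, as described in the steps above:

  sudo python3 cuda_installer.pyz install_driver --installation-mode=repo --installation-branch=prod
  sudo python3 cuda_installer.pyz install_cuda --installation-mode=repo --installation-branch=prod
  python3 cuda_installer.pyz verify_cuda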

Linux (startup script)

Use these instructions to install GPU drivers during startup of a VM.

Supported operating systems

The Linux installation script was tested on the following operating systems:

  • Debian 12
  • Red Hat Enterprise Linux (RHEL) 8 and 9
  • Rocky Linux 8 and 9
  • Ubuntu 22 and 24

If you use this script on other operating systems, the installation might fail. This script can install the NVIDIA driver as well as the CUDA Toolkit.

Use the following startup script to automate the driver and CUDA Toolkit installation:

#!/bin/bash
if test -f /opt/google/cuda-installer
then
  exit
fi

mkdir -p /opt/google/cuda-installer
cd /opt/google/cuda-installer/ || exit

if test -f cuda_installation
then
  exit
fi

curl -fSsL -O https://storage.googleapis.com/compute-gpu-installation-us/installer/latest/cuda_installer.pyz
python3 cuda_installer.pyz install_cuda

You can append the --installation-mode=INSTALLATION_MODE and --installation-branch=BRANCH flags to the install command to indicate which installation mode and driver branch you want installed, as shown in the example after the following list.

  • INSTALLATION_MODE: the installation method. Use one of the following values:
    • repo: (Default) installs the driver from the official NVIDIA package repository.
    • binary: installs the driver using the binary installation package.
  • BRANCH: the driver branch you want to install. Use one of the following values:
    • prod: (Default) the production branch. This branch is qualified for use in production environments for enterprise and data center GPUs.
    • nfb: the new feature branch. This branch includes the latest updates for early adopters. This branch is not recommended for production environments.
    • lts: the long-term support branch. This branch is maintained for a longer period than a normal production branch.
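A minimal sketch of passing this startup script when you create the VM, assuming you saved the script locally as install_gpu.sh; the instance name, zone, image, and GPU type are placeholders:

  gcloud compute instances create my-gpu-vm \
      --zone=us-central1-a \
      --machine-type=n1-standard-4 \
      --accelerator=count=1,type=nvidia-tesla-t4 \
      --image-family=debian-12 \
      --image-project=debian-cloud \
      --boot-disk-size=50GB \
      --maintenance-policy=TERMINATE \
      --metadata-from-file=startup-script=install_gpu.sh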

Windows

This installation script can be used on GPU instances that have Secure Boot enabled. It supports Windows Server 2019 and later.

This script installs an NVIDIA RTX Virtual Workstation (vWS) compatible driver. If you don't have a vWS license, vWS features won't be available on your instance.

Open a PowerShell terminal as an administrator, then complete the following steps:

  1. Download the script.

    Invoke-WebRequest https://github.com/GoogleCloudPlatform/compute-gpu-installation/raw/main/windows/install_gpu_driver.ps1 -OutFile C:\install_gpu_driver.ps1
  2. Run the script.

    C:\install_gpu_driver.ps1

    The script takes some time to run. No command prompts are given during the installation process. Once the script exits, the driver is installed.

    This script installs the drivers in the following default location on your VM: C:\Program Files\NVIDIA Corporation\.

  3. Verify the installation. See Verify the GPU driver install.

Install GPU drivers (Secure Boot VMs)

These instructions are for installing GPU drivers on Linux VMs that use Secure Boot.

GPU Support

The procedures in this section support all GPU models that are available on Compute Engine.

You can't use these procedures to install drivers on Secure Boot instances that have NVIDIA RTX Virtual Workstation (vWS) versions of our GPUs attached.

If you are using either a Windows VM or a Linux VM that doesn't use Secure Boot, review the installation instructions earlier on this page (CUDA Toolkit guides or installation scripts) instead.

Installation of the driver on a Secure Boot VM is different for Linux VMs, because these VMs require all kernel modules to have a trusted certificate signature.

Installation

You can use one of the following options for installing drivers that have trusted certificates:

  • Create a trusted certificate for your drivers. For this option, choose from the following:
    • Automated method: use an image building tool to create boot images that have trusted certificates for your drivers installed.
    • Manual method: generate your own certificate and use it to sign the GPU driver's kernel modules.
  • Use pre-signed drivers with an existing trusted certificate. This method only supports Ubuntu.

Self-signing (automated)

Supported operating systems:

This automated self-signing method was tested on the following operating systems:

  • Debian 12
  • Red Hat Enterprise Linux (RHEL) 8 and 9
  • Rocky Linux 8 and 9
  • Ubuntu 22 and 24

Procedure

To create an OS image that has self-signed certificates, complete the following steps:

    1. In the Google Cloud console, activate Cloud Shell.

      At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.

  1. Download the cuda_installer tool. To download the latest version of the script, run the following command:

    curl -L https://storage.googleapis.com/compute-gpu-installation-us/installer/latest/cuda_installer.pyz --output cuda_installer.pyz
  2. Build an image that has Secure Boot enabled by running the following command. The image creation process can take up to 20 minutes.

    PROJECT=PROJECT_ID
    ZONE=ZONE
    BASE_IMAGE=BASE_IMAGE_NAME
    SECURE_BOOT_IMAGE=IMAGE_NAME

    python3 cuda_installer.pyz build_image \
      --project $PROJECT \
      --vm-zone $ZONE \
      --base-image $BASE_IMAGE $SECURE_BOOT_IMAGE

    Replace the following:

    • PROJECT_ID: ID of the project to create the image in.
    • ZONE: zone where the temporary build VM is created. For example, us-west4-a.
    • IMAGE_NAME: name of the image that will be created.
    • BASE_IMAGE_NAME: select one of the following:

      • debian-12
      • rhel-8 or rhel-9
      • rocky-8 or rocky-9
      • ubuntu-22 or ubuntu-24

    You can also add the --family NAME flag to add the new image to an image family.

    To see all the customization options for the image, run python3 cuda_installer.pyz build_image --help. You can also review the documentation for the cuda_installer tool on GitHub.

  3. Verify the image. Use the following steps to verify that the image has Secure Boot enabled and can create GPU instances that have NVIDIA drivers installed.

    1. Create a test VM instance to verify that your image is properly configured and the GPU drivers load successfully. The following example creates an N1 machine type with a single NVIDIA T4 accelerator attached. However, you can use any supported GPU machine type of your choice.

      TEST_INSTANCE_NAME=TEST_INSTANCE_NAME
      ZONE=ZONE

      gcloud compute instances create $TEST_INSTANCE_NAME \
        --project=$PROJECT \
        --zone=$ZONE \
        --machine-type=n1-standard-4 \
        --accelerator=count=1,type=nvidia-tesla-t4 \
        --create-disk=auto-delete=yes,boot=yes,device-name=$TEST_INSTANCE_NAME,image=projects/$PROJECT/global/images/$SECURE_BOOT_IMAGE,mode=rw,size=100,type=pd-balanced \
        --shielded-secure-boot \
        --shielded-vtpm \
        --shielded-integrity-monitoring \
        --maintenance-policy=TERMINATE

      Replace the following:

      • TEST_INSTANCE_NAME: a name for the test VM instance
      • ZONE: a zone that has T4 GPUs or the GPU of your choice. For more information, see GPU regions and zones.
    2. Check that Secure Boot is enabled by running the mokutil --sb-state command on the test VM by using gcloud compute ssh.

      gcloud compute ssh --project=$PROJECT --zone=$ZONE $TEST_INSTANCE_NAME --command "mokutil --sb-state"
    3. Verify that the driver is installed by running the nvidia-smi command on the test VM by using gcloud compute ssh.

      gcloud compute ssh --project=$PROJECT --zone=$ZONE $TEST_INSTANCE_NAME --command "nvidia-smi"

      If you installed the CUDA Toolkit, you can use the cuda_installer tool to verify the installation as follows:

      gcloud compute ssh --project=$PROJECT --zone=$ZONE $TEST_INSTANCE_NAME --command "python3 cuda_installer.pyz verify_cuda"
  4. Clean up. After you verify that the customized image works, there's no need to keep the verification VM around. To delete the VM, run the following command:

    gcloud compute instances delete --zone=$ZONE --project=$PROJECT $TEST_INSTANCE_NAME
  5. Optional: To delete the disk image you created, run the following command:

    gcloud compute images delete --project=$PROJECT $SECURE_BOOT_IMAGE
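If you keep the image (or added it to an image family with the --family flag), you can create GPU VMs from it the same way as the test instance. A minimal sketch, using a hypothetical image family name:

  gcloud compute instances create my-secure-gpu-vm \
      --project=$PROJECT \
      --zone=$ZONE \
      --machine-type=n1-standard-4 \
      --accelerator=count=1,type=nvidia-tesla-t4 \
      --image-family=my-gpu-secure-boot \
      --image-project=$PROJECT \
      --boot-disk-size=100GB \
      --shielded-secure-boot \
      --shielded-vtpm \
      --shielded-integrity-monitoring \
      --maintenance-policy=TERMINATE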

Self-signing (manual)

Supported operating systems

This manual self-signing method was tested on the following operating systems:

  • Debian 12
  • Red Hat Enterprise Linux (RHEL) 8 and 9
  • Rocky Linux 8 and 9
  • Ubuntu 22 and 24

Overview

The installation, signing, and image creation process is as follows:

  1. Generate your own certificate to be used for signing the driver.
  2. Create a VM to install and sign the GPU driver. To create the VM, you can use the OS of your choice. When you create the VM, you must disable Secure Boot. You don't need to attach any GPUs to the VM.
  3. Install and sign the GPU driver and, optionally, the CUDA Toolkit.
  4. Create a disk image based on the machine with the self-signed driver, adding your certificate to the list of trusted certificates.
  5. Use the image to create GPU VMs that have Secure Boot enabled.

Image creation

    1. In the Google Cloud console, activate Cloud Shell.

      At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.

  1. Generate your own certificate by using OpenSSL. With OpenSSL, the signing and verification for Secure Boot is done by using regular Distinguished Encoding Rules (DER)-encoded X.509 certificates. Run the following command to generate a new self-signed X.509 certificate and an RSA private key file.

    openssl req -new -x509 -newkey rsa:2048 -keyout private.key -outform DER -out public.der -noenc -days 36500 -subj "/CN=Graphics Drivers Secure Boot Signing"
  2. Create a VM to install the self-signed driver. When you create the VM, you don't need to attach any GPUs or enable Secure Boot. You can use a standard E2 machine type that has at least 40 GB of disk space available so that the installation process can succeed.

    INSTANCE_NAME=BUILD_INSTANCE_NAME
    DISK_NAME=IMAGE_NAME
    ZONE=ZONE
    PROJECT=PROJECT_ID
    OS_IMAGE=IMAGE_DETAILS

    # Create the build VM
    gcloud compute instances create $INSTANCE_NAME \
      --zone=$ZONE \
      --project=$PROJECT \
      --machine-type=e2-standard-4 \
      --create-disk=auto-delete=yes,boot=yes,name=$DISK_NAME,$OS_IMAGE,mode=rw,size=100,type=pd-balanced \
      --no-shielded-secure-boot

    Replace the following:

    • BUILD_INSTANCE_NAME: name of the VM instance used to build the image.
    • IMAGE_NAME: name of the disk image.
    • ZONE: zone to create the VM in.
    • PROJECT_ID: ID of the project you want to use to build the new disk image in.
    • IMAGE_DETAILS: the image family and project for your selected base OS image:

      • Debian 12: "image-family=debian-12,image-project=debian-cloud"
      • RHEL 8: "image-family=rhel-8,image-project=rhel-cloud"
      • RHEL 9: "image-family=rhel-9,image-project=rhel-cloud"
      • Rocky Linux 8: "image-family=rocky-linux-8,image-project=rocky-linux-cloud"
      • Rocky Linux 9: "image-family=rocky-linux-9,image-project=rocky-linux-cloud"
      • Ubuntu 22: "image-family=ubuntu-2204-lts-amd64,image-project=ubuntu-os-cloud"
      • Ubuntu 24: "image-family=ubuntu-2404-lts-amd64,image-project=ubuntu-os-cloud"
  3. Copy the generated private key file to the VM. To sign the driver file, you need to have the newly generated key pair available on the VM.

    gcloud compute scp --zone $ZONE --project $PROJECT private.key $INSTANCE_NAME:~/private.key
    gcloud compute scp --zone $ZONE --project $PROJECT public.der $INSTANCE_NAME:~/public.der
  4. Install and sign the driver. The installation and signing of the driver and CUDA Toolkit are handled by the installation script that's also used for installations that don't use Secure Boot. To install and sign the driver, complete the following steps:

    1. Connect with SSH to the VM:

      gcloud compute ssh --zone $ZONE --project $PROJECT $INSTANCE_NAME
    2. Verify that the private and public keys were properly copied:

      ls private.key public.der
    3. Download the driver installation script:

      curl -L https://storage.googleapis.com/compute-gpu-installation-us/installer/latest/cuda_installer.pyz --output cuda_installer.pyz
    4. Run the driver installation with your signing keys, as shown in the following command. The build machine restarts during setup. After the build machine restarts, connect to the VM using SSH and rerun the script to resume the installation.

      sudo python3 cuda_installer.pyz install_driver --secure-boot-pub-key=public.der --secure-boot-priv-key=private.key --ignore-no-gpu

      If you want to install the CUDA Toolkit at the same time, you can do so with the following command.

      sudo python3 cuda_installer.pyz install_cuda --ignore-no-gpu

      You might see some error or warning messages. These are the result of no GPU being detected and are expected. The system reboots after the CUDA Toolkit installation completes. After reconnecting, you can continue to the next steps.

    5. Remove the certificate files, as they are no longer needed on the temporary machine. For better security, use shred instead of the rm command. The keys shouldn't be present on the final disk image.

      shred -uz private.key public.der
    6. Shut down the VM so that you can use its disk to create the new image.

      sudo shutdown now
  5. Prepare the base disk image. To create a new disk image that can be used to create instances with Secure Boot enabled, you need to configure the image to trust your newly generated key. The new disk image still accepts the default certificates used by the operating system. To prepare the base image, complete the following steps.

    1. Download the default certificates. Use the following commands to download the MicWinProPCA2011_2011-10-19.crt and MicCorUEFCA2011_2011-06-27.crt certificates:

      curl -L https://storage.googleapis.com/compute-gpu-installation-us/certificates/MicCorUEFCA2011_2011-06-27.crt --output MicCorUEFCA2011_2011-06-27.crt
      curl -L https://storage.googleapis.com/compute-gpu-installation-us/certificates/MicWinProPCA2011_2011-10-19.crt --output MicWinProPCA2011_2011-10-19.crt
    2. Verify the certificates:

      cat <<EOF >>check.sha1
46def63b5ce61cf8ba0de2e6639c1019d0ed14f3  MicCorUEFCA2011_2011-06-27.crt
580a6f4cc4e4b669b9ebdc1b2b3e087b80d0678d  MicWinProPCA2011_2011-10-19.crt
EOF

      sha1sum -c check.sha1
    3. Create an image based on the disk of the temporary VM. You can add --family=IMAGE_FAMILY_NAME as an option so that the image is set as the latest image in a given image family. Creation of the new image might take a couple of minutes.

      Run the following command in the same directory as your public.der file and the downloaded certificates.

      SECURE_BOOT_IMAGE=IMAGE_NAME

      gcloud compute images create $SECURE_BOOT_IMAGE \
        --source-disk=$DISK_NAME \
        --source-disk-zone=$ZONE \
        --project=$PROJECT \
        --signature-database-file=MicWinProPCA2011_2011-10-19.crt,MicCorUEFCA2011_2011-06-27.crt,public.der \
        --guest-os-features="UEFI_COMPATIBLE"

      You can verify that the public key of your certificate is attached to this new image by running the following command:

      gcloud compute images describe --project=$PROJECT $SECURE_BOOT_IMAGE
  6. Verify the new image. You can create a GPU VM by using the new disk image. For this step, we recommend an N1 machine type with a single T4 accelerator and Secure Boot enabled. However, the image also supports other types of GPUs and machine types.

    1. Create a test GPU VM:

      TEST_GPU_INSTANCE=TEST_GPU_INSTANCE_NAME
      ZONE=ZONE

      gcloud compute instances create $TEST_GPU_INSTANCE \
        --project=$PROJECT \
        --zone=$ZONE \
        --machine-type=n1-standard-4 \
        --accelerator=count=1,type=nvidia-tesla-t4 \
        --create-disk=auto-delete=yes,boot=yes,device-name=$TEST_GPU_INSTANCE,image=projects/$PROJECT/global/images/$SECURE_BOOT_IMAGE,mode=rw,size=100,type=pd-balanced \
        --shielded-secure-boot \
        --shielded-vtpm \
        --shielded-integrity-monitoring \
        --maintenance-policy=TERMINATE

      Replace the following:

      • TEST_GPU_INSTANCE_NAME: name of the GPU VM instance that you are creating to test the new image.
      • ZONE: zone that has T4 GPUs or another GPU of your choice. For more information, see GPU regions and zones.
    2. Check that Secure Boot is enabled by running the mokutil --sb-state command on the test VM by using gcloud compute ssh.

      gcloud compute ssh --project=$PROJECT --zone=$ZONE $TEST_GPU_INSTANCE --command "mokutil --sb-state"
    3. Verify that the driver is installed by running the nvidia-smi command on the test VM by using gcloud compute ssh.

      gcloud compute ssh --project=$PROJECT --zone=$ZONE $TEST_GPU_INSTANCE --command "nvidia-smi"

      If you installed the CUDA Toolkit, you can use the cuda_installer tool to verify the installation as follows:

      gcloud compute ssh --project=$PROJECT --zone=$ZONE $TEST_GPU_INSTANCE --command "python3 cuda_installer.pyz verify_cuda"
  7. Clean up. After you verify that the new image works, there's no need to keep the temporary VM or the verification VM around. The disk image you created doesn't depend on them in any way. You can delete them with the following commands:

    gcloud compute instances delete --zone=$ZONE --project=$PROJECT $INSTANCE_NAME
    gcloud compute instances delete --zone=$ZONE --project=$PROJECT $TEST_GPU_INSTANCE

    We don't advise that you store your Secure Boot signing certificate in an unencrypted state on your disk. If you'd like to securely store the keys in a way that they can be shared with others, you can use Secret Manager to keep your data safe, as sketched at the end of this procedure.

    When you no longer need the files on your disk, it's best to safely remove them by using the shred tool. Run the following command:

    # Safely delete the key pair from your system
    shred -uz private.key public.der
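    If you decide to keep the signing key pair, one option mentioned above is Secret Manager. A minimal sketch with hypothetical secret names (run this before shredding the local files):

    # Store the key pair as secrets (hypothetical secret names).
    gcloud secrets create gpu-sb-private-key --replication-policy="automatic" --data-file=private.key
    gcloud secrets create gpu-sb-public-cert --replication-policy="automatic" --data-file=public.der

    # Later, retrieve the private key when you need to sign a new driver build.
    gcloud secrets versions access latest --secret=gpu-sb-private-key > private.key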

Pre-signed (Ubuntu only)

These instructions are only available for Secure Boot Linux VMs that run Ubuntu 18.04, 20.04, and 22.04 operating systems. Support for more Linux operating systems is in progress.

To install GPU drivers on your Ubuntu VMs that use Secure Boot, complete thefollowing steps:

  1. Connect to the VM where you want to install the driver.

  2. Update the repository.

     sudo apt-get update
  3. Search for the most recent NVIDIA kernel module package or the version you want. This package contains the NVIDIA kernel modules signed by the Ubuntu key. If you want to find an earlier version, change the number for the tail parameter. For example, specify tail -n 2.

    Ubuntu PRO and LTS

    For Ubuntu PRO and LTS, run the following command:

    NVIDIA_DRIVER_VERSION=$(sudo apt-cache search 'linux-modules-nvidia-[0-9]+-gcp$' | awk '{print $1}' | sort | tail -n 1 | head -n 1 | awk -F"-" '{print $4}')

    Ubuntu PRO FIPS

    For Ubuntu PRO FIPS, run the following commands:

    1. Enable Ubuntu FIPS updates.

      sudo ua enable fips-updates
    2. Shut down and reboot.

      sudo shutdown -r now
    3. Get the latest package.

      NVIDIA_DRIVER_VERSION=$(sudo apt-cache search 'linux-modules-nvidia-[0-9]+-gcp-fips$' | awk '{print $1}' | sort | tail -n 1 | head -n 1 | awk -F"-" '{print $4}')

    You can check the selected driver version by running echo $NVIDIA_DRIVER_VERSION. The output is a version string such as 455.

  4. Install the kernel module package and corresponding NVIDIA driver.

    Note: Installing the package might upgrade your kernel.
     sudo apt install linux-modules-nvidia-${NVIDIA_DRIVER_VERSION}-gcp nvidia-driver-${NVIDIA_DRIVER_VERSION}

    If the command fails with a package not found error, the latest NVIDIA driver might be missing from the repository. Retry the previous step and select an earlier driver version by changing the tail number (see the example at the end of these steps).

  5. Verify that the NVIDIA driver is installed. You might need to reboot the VM.

  6. If you rebooted the VM to verify the NVIDIA driver, you need to reset the NVIDIA_DRIVER_VERSION variable after the reboot by rerunning the command that you used in step 3.

  7. Configure APT to use the NVIDIA package repository.

    1. To help APT pick the correct dependency, pin the repositories as follows:

      sudo tee /etc/apt/preferences.d/cuda-repository-pin-600 > /dev/null <<EOL
Package: nsight-compute
Pin: origin *ubuntu.com*
Pin-Priority: -1

Package: nsight-systems
Pin: origin *ubuntu.com*
Pin-Priority: -1

Package: nvidia-modprobe
Pin: release l=NVIDIA CUDA
Pin-Priority: 600

Package: nvidia-settings
Pin: release l=NVIDIA CUDA
Pin-Priority: 600

Package: *
Pin: release l=NVIDIA CUDA
Pin-Priority: 100
EOL

    2. Install software-properties-common. This is required if you are using Ubuntu minimal images.

      sudo apt install software-properties-common
    3. Set the Ubuntu version.

      Ubuntu 18.04

      For Ubuntu 18.04, run the following command:

      export UBUNTU_VERSION=ubuntu1804/x86_64

      Ubuntu 20.04

      For Ubuntu 20.04, run the following command:

      export UBUNTU_VERSION=ubuntu2004/x86_64

      Ubuntu 22.04

      For Ubuntu 22.04, run the following command:

      export UBUNTU_VERSION=ubuntu2204/x86_64
    4. Download the cuda-keyring package.

      wget https://developer.download.nvidia.com/compute/cuda/repos/$UBUNTU_VERSION/cuda-keyring_1.0-1_all.deb
    5. Install the cuda-keyring package.

      sudo dpkg -i cuda-keyring_1.0-1_all.deb
    6. Add the NVIDIA repository.

      sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/$UBUNTU_VERSION/ /"

    If prompted, select the default action to keep your current version.

  8. Find the compatible CUDA driver version.

    The following script determines the latest CUDA driver version that is compatible with the NVIDIA driver that you just installed:

     CUDA_DRIVER_VERSION=$(apt-cache madison cuda-drivers | awk '{print $3}' | sort -r | while read line; do
        if dpkg --compare-versions $(dpkg-query -f='${Version}\n' -W nvidia-driver-${NVIDIA_DRIVER_VERSION}) ge $line ; then
           echo "$line"
           break
        fi
     done)

    You can check the CUDA driver version by running echo $CUDA_DRIVER_VERSION. The output is a version string such as 455.32.00-1.

  9. Install CUDA drivers with the version identified from the previous step.

     sudo apt install cuda-drivers-${NVIDIA_DRIVER_VERSION}=${CUDA_DRIVER_VERSION} cuda-drivers=${CUDA_DRIVER_VERSION}

  10. Optional: Hold back dkms packages.

    After enabling Secure Boot, all kernel modules must be signed to be loaded. Kernel modules built by dkms don't work on the VM because they aren't properly signed by default. This is an optional step, but it can help prevent you from accidentally installing other dkms packages in the future.

    To hold dkms packages, run the following command:

     sudo apt-get remove dkms && sudo apt-mark hold dkms
  11. Install CUDA Toolkit and runtime.

    Pick the suitable CUDA version. The following script determines the latest CUDA version that is compatible with the CUDA driver that you just installed:

     CUDA_VERSION=$(apt-cache showpkg cuda-drivers | grep -o 'cuda-runtime-[0-9][0-9]-[0-9],cuda-drivers [0-9\\.]*' | while read line; do
        if dpkg --compare-versions ${CUDA_DRIVER_VERSION} ge $(echo $line | grep -Eo '[[:digit:]]+\.[[:digit:]]+') ; then
           echo $(echo $line | grep -Eo '[[:digit:]]+-[[:digit:]]')
           break
        fi
     done)

    You can check the CUDA version by running echo $CUDA_VERSION. The output is a version string such as 11-1.

  12. Install the CUDA package.

     sudo apt install cuda-${CUDA_VERSION}
  13. Verify the CUDA installation.

    sudo nvidia-smi
    /usr/local/cuda/bin/nvcc --version

    The first command prints the GPU information. The second command prints the installed CUDA compiler version.
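As referenced in step 4, if the newest kernel module package isn't yet available in the repository, you can select the second-newest package by changing the tail parameter of the step 3 command, for example:

  NVIDIA_DRIVER_VERSION=$(sudo apt-cache search 'linux-modules-nvidia-[0-9]+-gcp$' | awk '{print $1}' | sort | tail -n 2 | head -n 1 | awk -F"-" '{print $4}')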

Verify the GPU driver install

After completing the driver installation steps, verify that the driver installed and initialized properly.

Linux

Connect to the Linux instance and use the nvidia-smi command to verify that the driver is running properly.

sudo nvidia-smi

The output is similar to the following:

  +-----------------------------------------------------------------------------------------+
  | NVIDIA-SMI 580.82.07              Driver Version: 580.82.07      CUDA Version: 13.0     |
  +-----------------------------------------+------------------------+----------------------+
  | GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
  | Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
  |                                         |                        |               MIG M. |
  |=========================================+========================+======================|
  |   0  Tesla T4                       On  |   00000000:00:04.0 Off |                    0 |
  | N/A   53C    P8             17W /   70W |       0MiB /  15360MiB |      0%      Default |
  |                                         |                        |                  N/A |
  +-----------------------------------------+------------------------+----------------------+

  +-----------------------------------------------------------------------------------------+
  | Processes:                                                                              |
  |  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
  |        ID   ID                                                               Usage      |
  |=========================================================================================|
  |  No running processes found                                                             |
  +-----------------------------------------------------------------------------------------+

If this command fails, check whether GPUs are attached to the VM. To check for any NVIDIA PCI devices, run the following command:

sudo lspci | grep -i "nvidia"
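If GPUs are attached but nvidia-smi still fails, you can also check whether the NVIDIA kernel modules loaded (a general Linux check, not specific to Compute Engine):

# List loaded NVIDIA kernel modules.
lsmod | grep nvidia

# Inspect kernel messages for driver load errors.
sudo dmesg | grep -i nvidia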

Windows Server

Connect to the Windows Server instance, open a PowerShell terminal, and then run the following command to verify that the driver is running properly.

nvidia-smi

The output is similar to the following:

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 538.67                 Driver Version: 538.67       CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                     TCC/WDDM  | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA L4                    WDDM  | 00000000:00:03.0 Off |                    0 |
| N/A   66C    P8              17W /  72W |    128MiB / 23034MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A      4888    C+G   ...CBS_cw5n1h2txyewy\TextInputHost.exe    N/A      |
|    0   N/A  N/A      5180    C+G   ....Search_cw5n1h2txyewy\SearchApp.exe    N/A      |
+---------------------------------------------------------------------------------------+

