Using the Compute Engine persistent disk CSI Driver

Google Kubernetes Engine (GKE) provides a simple way for you to automatically deploy and manage the Compute Engine persistent disk Container Storage Interface (CSI) Driver in your clusters. The Compute Engine persistent disk CSI Driver is always enabled in Autopilot clusters and can't be disabled or edited. In Standard clusters, you must enable the Compute Engine persistent disk CSI Driver.

The Compute Engine persistent disk CSI Driver version is tied to GKE version numbers. The Compute Engine persistent disk CSI Driver version is typically the latest driver available at the time that the GKE version is released. The drivers update automatically when the cluster is upgraded to the latest GKE patch.

Note: Because the Compute Engine persistent disk CSI Driver and some of the other associated CSI components are deployed as separate containers, they incur resource usage (VM CPU, memory, and boot disk) on Kubernetes nodes. VM CPU usage is typically tens of millicores and memory usage is typically tens of MB. Boot disk usage is mostly incurred by the logs of the CSI driver and other system containers in the Deployment.

Benefits

Using the Compute Engine persistent disk CSI Driver provides the following benefits:

  • GKE deploys and manages the driver for you, so you don't have to install it manually.
  • The driver is updated automatically when your cluster is upgraded to the latest GKE patch, so you receive fixes without manual intervention.
  • StorageClasses that use the driver support the capabilities shown later on this page, such as volume expansion (allowVolumeExpansion) and per-class filesystem and disk-type settings.

Before you begin

Before you start, make sure that you have performed the following tasks:

  • Enable the Google Kubernetes Engine API.
  • If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running the gcloud components update command. Earlier gcloud CLI versions might not support running the commands in this document. Note: For existing gcloud CLI installations, make sure to set the compute/region property. If you primarily use zonal clusters, set compute/zone instead. By setting a default location, you can avoid errors in the gcloud CLI like the following: One of [--zone, --region] must be supplied: Please specify location. You might need to specify the location in certain commands if the location of your cluster differs from the default that you set.
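
    For example, to set a default region for the gcloud CLI (a sketch; substitute a region close to your clusters), you can run:

    gcloud config set compute/region us-central1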

Enable the Compute Engine persistent disk CSI Driver

To enable the Compute Engine persistent disk CSI Driver in existing Standard clusters, use the Google Cloud CLI or the Google Cloud console.

To enable the driver on an existing cluster, complete the following steps:

gcloud

gcloud container clusters update CLUSTER-NAME \
    --update-addons=GcePersistentDiskCsiDriver=ENABLED

Replace CLUSTER-NAME with the name of the existing cluster.
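
To confirm that the add-on is enabled, you can inspect the cluster's add-on configuration. The following is a sketch; it assumes the addonsConfig.gcePersistentDiskCsiDriverConfig.enabled field is present in the cluster description:

gcloud container clusters describe CLUSTER-NAME \
    --format="value(addonsConfig.gcePersistentDiskCsiDriverConfig.enabled)"

The command should print True when the driver is enabled.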

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

  2. In the cluster list, click the name of the cluster you want to modify.

  3. Under Features, next to the Compute Engine persistent disk CSI Driver field, click Edit Compute Engine CSI driver.

  4. Select the Enable Compute Engine Persistent Disk CSI Driver checkbox.

  5. Click Save Changes.

After you have enabled the Compute Engine persistent disk CSI Driver, you can use it in Kubernetes volumes by specifying pd.csi.storage.gke.io as the driver and provisioner name.

Disable the Compute Engine persistent disk CSI Driver

You can disable the Compute Engine persistent disk CSI Driver for Standard clusters by using the Google Cloud CLI or the Google Cloud console.

If you disable the driver, then any Pods currently using PersistentVolumes owned by the driver do not terminate. Any new Pods that try to use those PersistentVolumes also fail to start.
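
Before you disable the driver, you might want to check whether any existing PersistentVolumes are backed by it. One way to do this (a sketch that uses standard kubectl output formatting, not a command from the original procedure) is:

kubectl get pv -o custom-columns='NAME:.metadata.name,CSI_DRIVER:.spec.csi.driver'

Volumes that show pd.csi.storage.gke.io in the CSI_DRIVER column are owned by the Compute Engine persistent disk CSI Driver.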

Note: Because the gcePersistentDisk volume type is migrated to the Compute Engine persistent disk CSI Driver in version 1.22 and later, if you disable the persistent disk CSI driver, the gcePersistentDisk volume type also stops working.

Warning: There is a known issue in GKE 1.21 and earlier if you are using the in-tree persistent disk driver and want to delete regional disks. A Compute Engine regional disk can leak when its related PersistentVolume resource is deleted. This problem can be detected when your API call to delete the regional disk fails and returns an error code other than NotFound. For more information, see this GitHub issue.

To disable the driver on an existing Standard cluster, complete the following steps:

gcloud

gcloud container clusters update CLUSTER-NAME \
    --update-addons=GcePersistentDiskCsiDriver=DISABLED

Replace CLUSTER-NAME with the name of the existing cluster.

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

  2. In the cluster list, click the name of the cluster you want to modify.

  3. Under Features, next to the Compute Engine persistent disk CSI Driver field, click Edit Compute Engine CSI driver.

  4. Clear the Enable Compute Engine Persistent Disk CSI Driver checkbox.

  5. Click Save Changes.

Use the Compute Engine persistent disk CSI Driver for Linux clusters

The following sections describe the typical process for using a Kubernetes volume backed by a CSI driver in GKE. These sections are specific to clusters using Linux.

Create a StorageClass

After you enable the Compute Engine persistent disk CSI Driver, GKE automatically installs the following StorageClasses:

  • standard-rwo, using balanced persistent disk
  • premium-rwo, using SSD persistent disk

For Autopilot clusters, the default StorageClass is standard-rwo, which uses the Compute Engine persistent disk CSI Driver. For Standard clusters, the default StorageClass uses the Kubernetes in-tree gcePersistentDisk volume plugin.
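
If you want standard-rwo to become the default StorageClass on a Standard cluster, one possible approach (a sketch that uses the standard Kubernetes default-class annotation, and assumes the in-tree default class is named standard; confirm the names with kubectl get sc first) is to move the annotation between the two classes:

kubectl patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
kubectl patch storageclass standard-rwo -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'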

You can find the names of your installed StorageClasses by running the following command:

kubectl get sc
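
The output is similar to the following; the exact classes, policies, and ages depend on your cluster mode and version (illustrative only):

NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
premium-rwo          pd.csi.storage.gke.io   Delete          WaitForFirstConsumer   true                   5d
standard (default)   kubernetes.io/gce-pd    Delete          Immediate              true                   5d
standard-rwo         pd.csi.storage.gke.io   Delete          WaitForFirstConsumer   true                   5d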

You can also install a different StorageClass that uses the Compute Engine persistent disk CSI Driver by adding pd.csi.storage.gke.io in the provisioner field.

For example, you could create a StorageClass using the following file, which is named pd-example-class.yaml.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pd-example
provisioner: pd.csi.storage.gke.io
# Recommended setting. Delays the binding and provisioning of a PersistentVolume until a Pod that uses the
# PersistentVolumeClaim is created and scheduled on a node.
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  type: pd-balanced

You can specify the following Persistent Disk types in the type parameter:

  • pd-balanced
  • pd-ssd
  • pd-standard
  • pd-extreme (supported on GKE version 1.26 and later)

If using pd-standard or pd-extreme, see Unsupported machine types for additional usage restrictions.

When you use the pd-extreme option, you must also add the provisioned-iops-on-create field to your manifest. This field must be set to the same value as the provisionedIOPS value that you specified when you created your persistent disk.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pd-extreme-example
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  type: pd-extreme
  provisioned-iops-on-create: '10000'

After creating the pd-example-class.yaml file, run the following command:

kubectl create -f pd-example-class.yaml

Create a PersistentVolumeClaim

You can create a PersistentVolumeClaim that references the Compute Engine persistent disk CSI Driver's StorageClass.

The following file, named pvc-example.yaml, uses the pre-installed storage class standard-rwo:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: podpvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: standard-rwo
  resources:
    requests:
      storage: 6Gi

After creating the PersistentVolumeClaim manifest, run the following command:

kubectl create -f pvc-example.yaml

In the pre-installed StorageClass (standard-rwo), volumeBindingMode is set to WaitForFirstConsumer. When volumeBindingMode is set to WaitForFirstConsumer, the PersistentVolume is not provisioned until a Pod referencing the PersistentVolumeClaim is scheduled. If volumeBindingMode in the StorageClass is set to Immediate (or it's omitted), a persistent-disk-backed PersistentVolume is provisioned after the PersistentVolumeClaim is created.
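
You can observe this behavior by checking the claim's status (a quick check, not part of the original steps):

kubectl get pvc podpvc

With the standard-rwo class, the STATUS column shows Pending until a Pod that uses the claim is scheduled, and Bound afterward.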

Create a Pod that consumes the volume

When using Pods with PersistentVolumes, we recommend that you use a workload controller (such as a Deployment or StatefulSet). Although you wouldn't typically use a standalone Pod, the following example uses one for simplicity.

The following example consumes the volume that you created in the previous section:

apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
  - name: web-server
    image: nginx
    volumeMounts:
    # The path in the container where the volume will be mounted.
    - mountPath: /var/lib/www/html
      # The name of the volume that is being defined in the "volumes" section.
      name: mypvc
  volumes:
  - name: mypvc
    persistentVolumeClaim:
      # References the PersistentVolumeClaim created earlier.
      claimName: podpvc
      readOnly: false
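
If you prefer a managed workload, as recommended above, a minimal single-replica Deployment that mounts the same claim could look like the following sketch (not from the original page). Because the claim is ReadWriteOnce, keep replicas at 1, and delete the standalone Pod first so the two workloads don't contend for the same volume:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server-deployment
spec:
  replicas: 1
  # Recreate avoids two Pods trying to attach the same ReadWriteOnce volume during updates.
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: web-server
  template:
    metadata:
      labels:
        app: web-server
    spec:
      containers:
      - name: web-server
        image: nginx
        volumeMounts:
        # Same mount path as the standalone Pod example.
        - mountPath: /var/lib/www/html
          name: mypvc
      volumes:
      - name: mypvc
        persistentVolumeClaim:
          # References the PersistentVolumeClaim created earlier.
          claimName: podpvc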

Use the Compute Engine persistent disk CSI Driver for Windows clusters

The following sections describe the typical process for using a Kubernetes volume backed by a CSI driver in GKE. These sections are specific to clusters using Windows.

Ensure that the following requirements are met (you can verify the versions with the commands after this list):

  • Cluster version is 1.19.7-gke.2000, 1.20.2-gke.2000, or later.
  • Node version is 1.18.12-gke.1203, 1.19.6-gke.800, or later.
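
To check which versions your control plane and nodes are running, you can use, for example (a sketch; substitute your cluster name and, if needed, its location):

gcloud container clusters describe CLUSTER-NAME \
    --format="value(currentMasterVersion,currentNodeVersion)"
kubectl get nodes

The VERSION column of kubectl get nodes shows the GKE version of each node.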

Create a StorageClass

Creating a StorageClass for Windows is very similar to creating one for Linux. Be aware that the StorageClass installed by default will not work for Windows because the file system type is different: the Compute Engine persistent disk CSI Driver for Windows requires NTFS as the file system type.

For example, you could create a StorageClass using the following file named pd-windows-class.yaml. Make sure to add csi.storage.k8s.io/fstype: NTFS to the parameters list:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pd-sc-windows
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  type: pd-balanced
  csi.storage.k8s.io/fstype: NTFS

Create a PersistentVolumeClaim

After creating a StorageClass for Windows, you can now create a PersistentVolumeClaim that references that StorageClass:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: podpvc-windows
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: pd-sc-windows
  resources:
    requests:
      storage: 6Gi
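
As with the Linux examples, save the manifests to files and create the resources with kubectl. The page names only pd-windows-class.yaml; the PersistentVolumeClaim filename below is illustrative:

kubectl create -f pd-windows-class.yaml
kubectl create -f pvc-windows-example.yaml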

Create a Pod that consumes the volume

The following example consumes the volume that you created in the previous task:

apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  # Node selector to ensure the Pod runs on a Windows node.
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: iis-server
    # The container image to use.
    image: mcr.microsoft.com/windows/servercore/iis
    ports:
    - containerPort: 80
    volumeMounts:
    # The path in the container where the volume will be mounted.
    - mountPath: /var/lib/www/html
      name: mypvc
  volumes:
  - name: mypvc
    persistentVolumeClaim:
      # References the PersistentVolumeClaim created earlier.
      claimName: podpvc-windows
      readOnly: false

Use the Compute Engine persistent disk CSI Driver with non-default filesystem types

The default filesystem type for Compute Engine persistent disks in GKE is ext4. You can also use the xfs storage type as long as your node image supports it. See Storage driver support for a list of supported drivers by node image.

The following example shows you how to use xfs as the default filesystem type instead of ext4 with the Compute Engine persistent disk CSI Driver.

Create a StorageClass

  1. Save the following manifest as a YAML file named pd-xfs-class.yaml:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: xfs-class
    provisioner: pd.csi.storage.gke.io
    parameters:
      # The type of Compute Engine persistent disk to provision.
      type: pd-balanced
      # Specify "xfs" as the filesystem type.
      csi.storage.k8s.io/fstype: xfs
    volumeBindingMode: WaitForFirstConsumer
  2. Apply the manifest:

    kubectl apply -f pd-xfs-class.yaml

Create a PersistentVolumeClaim

  1. Save the following manifest as pd-xfs-pvc.yaml:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: xfs-pvc
    spec:
      # References the StorageClass created earlier.
      storageClassName: xfs-class
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          # The amount of storage requested.
          storage: 10Gi
  2. Apply the manifest:

    kubectl apply -f pd-xfs-pvc.yaml

Create a Pod that consumes the volume

  1. Save the following manifest as pd-xfs-pod.yaml:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pd-xfs-pod
    spec:
      containers:
      - name: cloud-sdk
        image: google/cloud-sdk:slim
        # Keep the container running for 1 hour.
        args: ["sleep", "3600"]
        volumeMounts:
        # The path in the container where the volume will be mounted.
        - mountPath: /xfs
          name: xfs-volume
      # Define the volumes available to the containers in the Pod.
      volumes:
      - name: xfs-volume
        persistentVolumeClaim:
          # References the PersistentVolumeClaim created earlier.
          claimName: xfs-pvc
  2. Apply the manifest:

    kubectl apply -f pd-xfs-pod.yaml

Verify that the volume was mounted correctly

  1. Open a shell session in the Pod:

    kubectl exec -it pd-xfs-pod -- /bin/bash
  2. Look for xfs partitions:

    df -aTh --type=xfs

    The output should be similar to the following:

    Filesystem     Type  Size  Used Avail Use% Mounted on
    /dev/sdb       xfs    30G   63M   30G   1% /xfs

View logs for Compute Engine persistent disk CSI Driver

You can use Cloud Logging to view events that relate to the Compute Engine persistent disk CSI Driver. Logs can help you troubleshoot issues.

For more information about Cloud Logging, see Viewing your GKE logs.

To view logs for the Compute Engine persistent disk CSI Driver, complete the following steps:

  1. Go to the Cloud Logging page in the Google Cloud console.

  2. To filter the log entries to show only those related to the Compute Engine persistent disk CSI Driver, run the following Cloud Logging query:

    resource.type="k8s_container"
    resource.labels.project_id="PROJECT_ID"
    resource.labels.location="LOCATION"
    resource.labels.cluster_name="CLUSTER_NAME"
    resource.labels.namespace_name="kube-system"
    resource.labels.container_name="gce-pd-driver"

    Replace the following:

    • PROJECT_ID: your Google Cloud project ID.
    • LOCATION: the Compute Engine region or zone of the cluster.
    • CLUSTER_NAME: the name of your cluster.
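
    If you prefer the command line, you can pass the same filter to gcloud logging read (a sketch; adjust the limit to your needs):

    gcloud logging read '
      resource.type="k8s_container"
      resource.labels.cluster_name="CLUSTER_NAME"
      resource.labels.namespace_name="kube-system"
      resource.labels.container_name="gce-pd-driver"' \
      --project=PROJECT_ID --limit=20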

Known issues

Unsupported machine types

If you are using the C3 series machine family, the pd-standard persistent disk type is not supported.

If you attempt to run a Pod on a machine, and the Pod uses an unsupported persistent disk type, you will see a warning message like the following emitted on the Pod:

AttachVolume.Attach failed for volume "pvc-d7397693-5097-4a70-9df0-b10204611053" : rpc error: code = Internal desc = unknown Attach error: failed when waiting for zonal op: operation operation-1681408439910-5f93b68c8803d-6606e4ed-b96be2e7 failed (UNSUPPORTED_OPERATION): [pd-standard] features are not compatible for creating instance.

If your cluster has multiple node pools with different machine families, you can use node taints and node affinity to limit where workloads can be scheduled. For example, you can use this approach to restrict a workload using pd-standard from running on an unsupported machine family, as shown in the sketch after this paragraph.
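
As an illustration, one possible way to keep such a workload off C3 nodes is a node affinity rule against the cloud.google.com/machine-family node label (a sketch; confirm the labels present on your nodes, for example with kubectl get nodes --show-labels, before relying on them; the claim name is hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: pd-standard-workload
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          # Keep this Pod off C3 nodes, where pd-standard is not supported.
          - key: cloud.google.com/machine-family
            operator: NotIn
            values:
            - c3
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - mountPath: /data
      name: data
  volumes:
  - name: data
    persistentVolumeClaim:
      # Hypothetical claim backed by a pd-standard StorageClass.
      claimName: pd-standard-claim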

If you are using the pd-extreme persistent disk type, you need to ensure that your disk is attached to a VM instance with a suitable machine shape. To learn more, refer to Machine shape support.
