Access Filestore instances with the Filestore CSI driver

The Filestore CSI driver is the primary way for you to use Filestore instances with Google Kubernetes Engine (GKE). The Filestore CSI driver provides a fully-managed experience powered by the open source Google Cloud Filestore CSI driver.

The Filestore CSI driver version is tied to Kubernetes minor version numbers. The Filestore CSI driver version is typically the latest driver available at the time that the Kubernetes minor version is released. The drivers update automatically when the cluster is upgraded to the latest GKE patch.

Note: Because the Filestore CSI driver and some of the other associated CSI components are deployed as separate containers, they incur resource usage (VM CPU, memory, and boot disk) on Kubernetes nodes. VM CPU usage is typically tens of millicores and memory usage is typically tens of MiB. Boot disk usage is mostly incurred by the logs of the CSI driver and other system containers in the Deployment. For details about pricing for Autopilot and Standard clusters, see Google Kubernetes Engine pricing.

Benefits

The Filestore CSI driver provides the following benefits:

  • You have access to fully-managed NFS storage through the Kubernetes APIs (kubectl).

  • You can use the GKE Filestore CSI driver to dynamically provision your PersistentVolumes.

  • You can use volume snapshots with the GKE Filestore CSI driver. CSI volume snapshots can be used to create Filestore backups.

    A Filestore backup creates a differential copy of the file share, including all file data and metadata, and stores it separately from the instance. You can restore this copy to a new Filestore instance only. Restoring to an existing Filestore instance is not supported. You can use the CSI volume snapshot API to trigger Filestore backups by adding a type: backup field in the volume snapshot class, as shown in the VolumeSnapshotClass sketch after this list.

  • You can use volume expansion with the GKE Filestore CSI driver. Volume expansion lets you resize your volume's capacity.

  • You can access existing Filestore instances by using pre-provisioned Filestore instances in Kubernetes workloads. You can also dynamically create or delete Filestore instances and use them in Kubernetes workloads with a StorageClass or a Deployment.

  • Supports Filestore multishares for GKE. This feature lets you create a Filestore instance and allocate multiple smaller NFS-mounted PersistentVolumes for it simultaneously across any number of GKE clusters.

  • Supports Basic HDD tier with a minimum capacity of 100 GiB.
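
If you use volume snapshots to create Filestore backups as described above, the snapshot class needs the type: backup parameter. The following is a minimal sketch of such a VolumeSnapshotClass; the class name is hypothetical, and it assumes the snapshot.storage.k8s.io/v1 CRDs are installed in your cluster:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: filestore-backup-snapshot-class   # hypothetical name
driver: filestore.csi.storage.gke.io
deletionPolicy: Delete
parameters:
  type: backup                            # triggers a Filestore backup instead of an in-place snapshot

A VolumeSnapshot that references this class triggers a Filestore backup of the underlying share.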

Requirements

To use the Filestore CSI driver, your clusters must meet the following requirements:

    • The required GKE version depends on the Filestore service tier, and the features you intend to use. See the Supported service tiers, protocols, and GKE versions table for supported combinations.
    • The Filestore CSI driver is supported for clusters using Linux nodes only. Windows Server nodes are not supported.
    • The minimum instance size is at least 100 GiB for the Basic HDD tier and at least 1 TiB for the Zonal, Regional, and Enterprise service tiers.

    To learn more, see Service tiers.

Supported service tiers, protocols, and GKE versions

The following table outlines the supported combinations of Filestore service tiers, protocols, and the minimum required GKE versions for use with the Filestore CSI driver.

Service tier                    | Share type               | GKE minimum version for NFSv3 | GKE minimum version for NFSv4.1
Enterprise                      | Single share, multishare | 1.25                          | 1.33 (single share only)
Zonal (1 TiB - 9.75 TiB)        | Single share             | 1.31                          | 1.33
Zonal (10 TiB - 100 TiB)        | Single share             | 1.27                          | 1.33
Regional                        | Single share             | 1.33.4-gke.1172000            | 1.33.4-gke.1172000
Basic HDD (100 GiB - 63.9 TiB)  | Single share             | 1.33                          | Not supported
Basic HDD                       | Single share             | 1.21                          | Not supported
Basic SSD                       | Single share             | 1.21                          | Not supported

Filestore uses the NFSv3 file system protocol on the Filestore instance by default and supports any NFSv3-compatible client.

Before you begin

Before you start, make sure that you have performed the following tasks:

  • Enable the Cloud Filestore API and the Google Kubernetes Engine API.
  • If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running the gcloud components update command. Earlier gcloud CLI versions might not support running the commands in this document. Note: For existing gcloud CLI installations, make sure to set the compute/region property. If you use primarily zonal clusters, set the compute/zone instead. By setting a default location, you can avoid errors in the gcloud CLI like the following: One of [--zone, --region] must be supplied: Please specify location. You might need to specify the location in certain commands if the location of your cluster differs from the default that you set.
  • Ensure that you have an existing Autopilot or Standard cluster. If you need one, create an Autopilot cluster. The Filestore CSI driver is enabled by default for Autopilot clusters.

  • If you want to use Filestore on a Shared VPC network, see the additional setup instructions in Use Filestore with Shared VPC.

Enable the Filestore CSI driver on your Standard cluster

Note: The Filestore CSI driver is enabled by default for Autopilot clusters.

To enable the Filestore CSI driver on Standard clusters, use the Google Cloud CLI or the Google Cloud console.

To enable the driver on an existing Standard cluster, complete the following steps:

gcloud

gcloud container clusters update CLUSTER_NAME \
    --update-addons=GcpFilestoreCsiDriver=ENABLED

Replace CLUSTER_NAME with the name of the existing cluster.

Console

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. In the cluster list, click the name of the cluster you want to modify.

  3. Under Features, next to the Filestore CSI driver field, click Edit Filestore CSI driver.

  4. Select the Enable Filestore CSI driver checkbox.

  5. Click Save Changes.

If you want to use Filestore on a Shared VPC network, see Enable the Filestore CSI driver on a new cluster with Shared VPC.

After you enable the Filestore CSI driver, you can use the driver in Kubernetes volumes by specifying the driver and provisioner name: filestore.csi.storage.gke.io.

Disable the Filestore CSI driver

You can disable the Filestore CSI driver on an existing Autopilot or Standard cluster by using the Google Cloud CLI or the Google Cloud console.

Note: We strongly recommend that you don't manually disable the Filestore CSI driver on Autopilot clusters. Doing so causes any Pods using PersistentVolumes owned by the driver to fail to terminate. New Pods attempting to use those PersistentVolumes will also fail to start.
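
For example, on a Standard cluster you can disable the driver with the gcloud CLI by switching the add-on value used earlier to DISABLED; this sketch mirrors the enable command shown previously:

gcloud container clusters update CLUSTER_NAME \
    --update-addons=GcpFilestoreCsiDriver=DISABLED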

Access pre-existing Filestore instances using the Filestore CSI driver

This section describes the typical process for using a Kubernetes volume to access pre-existing Filestore instances using the Filestore CSI driver in GKE:

Create a PersistentVolume and a PersistentVolumeClaim to access the instance

  1. Create a manifest file like the one shown in the following example, and name it preprov-filestore.yaml:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: PV_NAME
    spec:
      storageClassName: ""
      capacity:
        storage: 1Ti
      accessModes:
      - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      volumeMode: Filesystem
      csi:
        driver: filestore.csi.storage.gke.io
        volumeHandle: "modeInstance/FILESTORE_INSTANCE_LOCATION/FILESTORE_INSTANCE_NAME/FILESTORE_SHARE_NAME"
        volumeAttributes:
          ip: FILESTORE_INSTANCE_IP
          volume: FILESTORE_SHARE_NAME
          protocol: FILESYSTEM_PROTOCOL
      claimRef:
        name: PVC_NAME
        namespace: NAMESPACE
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: PVC_NAME
      namespace: NAMESPACE
    spec:
      accessModes:
      - ReadWriteMany
      storageClassName: ""
      resources:
        requests:
          storage: 1Ti
  2. To create the PersistentVolumeClaim and PersistentVolume resources based on the preprov-filestore.yaml manifest file, run the following command:

    kubectl apply -f preprov-filestore.yaml

To specify the NFSv4.1 file system protocol, set the protocol field to NFS_V4_1 in the volumeAttributes field of a PersistentVolume object. To use the NFSv3 file system protocol, set the protocol field to NFS_V3 or omit the protocol field.
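
For example, the csi block from the preceding manifest would look like the following when NFSv4.1 is selected; this is only a fragment of the full PersistentVolume, with the other placeholders unchanged:

csi:
  driver: filestore.csi.storage.gke.io
  volumeHandle: "modeInstance/FILESTORE_INSTANCE_LOCATION/FILESTORE_INSTANCE_NAME/FILESTORE_SHARE_NAME"
  volumeAttributes:
    ip: FILESTORE_INSTANCE_IP
    volume: FILESTORE_SHARE_NAME
    protocol: NFS_V4_1   # use NFS_V3 or omit this field for NFSv3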

Then, proceed to create a Deployment that consumes the volume.

Create a volume using the Filestore CSI driver

The following sections describe the typical process for using a Kubernetes volume backed by a Filestore CSI driver in GKE:

Create a StorageClass

After you enable the Filestore CSI driver, GKE automatically installs default StorageClasses for provisioning Filestore instances.

Each StorageClass is only available in GKE clusters running its respective supported GKE versions. For the supported versions required for each service tier, see Requirements.

You can find the name of your installed StorageClass by running the following command:

kubectl get sc
Note: The pre-installed StorageClasses use volumeBindingMode: WaitForFirstConsumer. This means that the Filestore instance is not provisioned immediately after creating the PersistentVolumeClaim. The instance is only created when a Pod that references the PersistentVolumeClaim is scheduled.

You can also install a different StorageClass that uses the Filestore CSI driver by adding filestore.csi.storage.gke.io in the provisioner field.

Filestore needs to know on which network to create the new instance. The automatically installed StorageClasses use the default network created for GKE clusters. If you have deleted this network or want to use a different network, you must create a new StorageClass as described in the following steps. Otherwise, the automatically installed StorageClasses won't work.

  1. Save the following manifest as filestore-example-class.yaml:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: filestore-example
    provisioner: filestore.csi.storage.gke.io
    volumeBindingMode: Immediate
    allowVolumeExpansion: true
    parameters:
      tier: standard
      network: default

    In the manifest, the tier parameter sets the Filestore service tier (standard provisions a Basic HDD instance), and the network parameter specifies the VPC network on which to create the instance.

  2. To create a StorageClass resource based on the filestore-example-class.yaml manifest file, run the following command:

    kubectl create -f filestore-example-class.yaml

If you want to use Filestore on a Shared VPC network, see Create a StorageClass when using the Filestore CSI driver with Shared VPC.

Use a PersistentVolumeClaim to access the volume

You can create a PersistentVolumeClaim resource that references the Filestore CSI driver's StorageClass.

You can use either a pre-installed or custom StorageClass.

The following example manifest file creates a PersistentVolumeClaim that references the StorageClass named filestore-example.

  1. Save the following manifest file as pvc-example.yaml:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: podpvc
    spec:
      accessModes:
      - ReadWriteMany
      storageClassName: filestore-example
      resources:
        requests:
          storage: 1Ti
  2. To create a PersistentVolumeClaim resource based on the pvc-example.yaml manifest file, run the following command:

    kubectl create -f pvc-example.yaml

Create a Deployment that consumes the volume

The following example Deployment manifest consumes the PersistentVolumeClaim resource named podpvc that you created from pvc-example.yaml.

Multiple Pods can share the same PersistentVolumeClaim resource.

  1. Save the following manifest as filestore-example-deployment.yaml:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-server-deployment
      labels:
        app: nginx
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
            volumeMounts:
            - mountPath: /usr/share/nginx/html
              name: mypvc
          volumes:
          - name: mypvc
            persistentVolumeClaim:
              claimName: podpvc
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: podpvc
    spec:
      accessModes:
      - ReadWriteMany
      storageClassName: filestore-example
      resources:
        requests:
          storage: 1Ti
  2. To create a Deployment based on the filestore-example-deployment.yaml manifest file, run the following command:

    kubectl apply -f filestore-example-deployment.yaml
  3. Confirm the Deployment was successfully created:

    kubectl get deployment

    It might take a while for Filestore instances to complete provisioning. Before that, deployments won't report a READY status. You can check the progress by monitoring your PVC status by running the following command:

    kubectl get pvc

    You should see the PVC reach a BOUND status when the volume provisioning completes.
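
After the claim is bound, you can confirm that the replicas share the same Filestore-backed volume. The following sketch writes a file through the Deployment and reads it back; the file contents and path under the mount point are arbitrary examples:

# Write a file into the mounted Filestore share through one replica.
kubectl exec deploy/web-server-deployment -- sh -c 'echo "hello from filestore" > /usr/share/nginx/html/index.html'

# Read it back; every replica sees the same content because all Pods mount the same share.
kubectl exec deploy/web-server-deployment -- cat /usr/share/nginx/html/index.html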

Label Filestore instances

You can use labels to group related instances and store metadata about an instance. A label is a key-value pair that helps you organize your Filestore instances. You can attach a label to each resource, then filter the resources based on their labels.

You can provide labels by using the labels key in StorageClass.parameters. A Filestore instance can be labeled with information about what PersistentVolumeClaim/PersistentVolume the instance was created for. Custom label keys and values must comply with the label naming convention. See the Kubernetes storage class example to apply custom labels to the Filestore instance.
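
As a sketch, a custom StorageClass that applies labels to the Filestore instances it provisions might look like the following; it assumes the labels parameter accepts a comma-separated list of key=value pairs, and the key and value names are hypothetical:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: filestore-labeled-example
provisioner: filestore.csi.storage.gke.io
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
  tier: standard
  network: default
  labels: team=storage,environment=test   # hypothetical custom labels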

Use NFSv4.1 file system protocol with Filestore

The Filestore CSI driver supports the NFSv4.1 file system protocol with GKE version 1.33 or later. For static provisioning, set the protocol field to NFS_V4_1 in the volumeAttributes field of a PersistentVolume object.

For dynamic provisioning, set the protocol field to NFS_V4_1 in the parameters of a StorageClass object.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: enterprise-multishare-rwx
provisioner: filestore.csi.storage.gke.io
parameters:
  tier: enterprise
  multishare: "true"
  instance-storageclass-label: "enterprise-multishare-rwx"
  protocol: NFS_V4_1
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true

You cannot mount the Filestore instance with the NFSv4.1 protocol with mountOptions set to nfsvers=3 in the StorageClass object.

Use fsgroup with Filestore volumes

Kubernetes uses fsGroup to change permissions and ownership of the volume to match a user-requested fsGroup in the Pod's SecurityContext. An fsGroup is a supplemental group that applies to all containers in a Pod. You can apply an fsgroup to volumes provisioned by the Filestore CSI driver.
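
The following minimal Pod sketch shows where fsGroup goes; it reuses the podpvc claim from the earlier example, and the group ID 1001 is an arbitrary assumption:

apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-example
spec:
  securityContext:
    fsGroup: 1001            # supplemental group applied to all containers and to the volume
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - mountPath: /data
      name: filestore-volume
  volumes:
  - name: filestore-volume
    persistentVolumeClaim:
      claimName: podpvc      # PVC provisioned earlier with the filestore-example StorageClass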

Configure IP access rules with Filestore volumes

Filestore supports IP-based access control rules for volumes. This feature is available on GKE clusters running version 1.29.5 or later.

This feature allows administrators to specify which IP address ranges are allowed to access a Filestore instance provisioned dynamically through GKE. This enhances security by restricting access to only authorized clients, especially in scenarios where the GKE cluster's IP range is too broad, potentially exposing the Filestore instance to unauthorized users or applications.

These rules can be configured directly through the Filestore API, or through the Filestore CSI driver when a volume is created. You can provide the selected configuration in JSON format in the StorageClass using the nfs-export-options-on-create parameter.

The following example manifest shows how to specify the configuration:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: filestore-example
provisioner: filestore.csi.storage.gke.io
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
  tier: "enterprise"
  nfs-export-options-on-create: '[{"accessMode": "READ_WRITE", "ipRanges": ["10.0.0.0/24"], "squashMode": "ROOT_SQUASH", "anonUid": "1003", "anonGid": "1003"}, {"accessMode": "READ_WRITE", "ipRanges": ["10.0.0.0/28"], "squashMode": "NO_ROOT_SQUASH"}]'
Note: Use the node primary IP range instead of the Pod IP in the nfs-export-options-on-create parameter.

Security options

Filestore IP access rules simplify the configuration of shared file storage permissions for your GKE workloads. However, to use them effectively, you need to understand how Filestore manages file ownership and access.

Recommendations

  • Initial Setup: Always start with at least one NFS export rule that specifies an administrator range with READ_WRITE permissions and allows NO_ROOT_SQUASH access (see the sketch after this list). Use this access to create directories, set permissions, and assign ownership as needed.
  • Security: Enable root squashing (ROOT_SQUASH) to enhance security. Note that after a volume is created, you can only modify the access rules through the Filestore API.
  • Shared Access: Use fsGroup in your Pod security contexts to manage group ownership of shared volumes. Make sure your fsGroup setting doesn't conflict with the ROOT_SQUASH mode; combining them returns an Access denied error message.
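
As a sketch of the initial-setup recommendation, the following nfs-export-options-on-create value combines a small administrator range that keeps root access with a root-squashed rule for the broader client range; the IP ranges and UID/GID values are hypothetical:

# First rule: small admin range with NO_ROOT_SQUASH for initial directory and ownership setup.
# Second rule: broader client range with ROOT_SQUASH enabled.
parameters:
  tier: "enterprise"
  nfs-export-options-on-create: '[{"accessMode": "READ_WRITE", "ipRanges": ["10.0.1.0/29"], "squashMode": "NO_ROOT_SQUASH"}, {"accessMode": "READ_WRITE", "ipRanges": ["10.0.0.0/24"], "squashMode": "ROOT_SQUASH", "anonUid": "1003", "anonGid": "1003"}]'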

Use Filestore with Shared VPC

This section covers how to use a Filestore instance on a Shared VPC network from a service project.

Set up a cluster with Shared VPC

To set up your clusters with a Shared VPC network, follow these steps:

  1. Create a host and service project.
  2. Enable the Google Kubernetes Engine API on both your host and service projects.
  3. In your host project, create a network and a subnet.
  4. Enable Shared VPC in the host project.
  5. On the host project, grant the Host Service Agent User role to the service project's GKE service account.
  6. Enable private service access on the Shared VPC network.
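
As an illustration of the last step, private services access is typically set up by reserving an address range in the host project and creating a peering to the Service Networking API. The following gcloud sketch assumes the range name and prefix length; adjust them for your network:

# Reserve an IP range for private services access in the host project.
gcloud compute addresses create filestore-psa-range \
    --project=HOST_PROJECT_ID \
    --global \
    --purpose=VPC_PEERING \
    --prefix-length=20 \
    --network=NETWORK_NAME

# Create the private connection to Google services using that range.
gcloud services vpc-peerings connect \
    --project=HOST_PROJECT_ID \
    --service=servicenetworking.googleapis.com \
    --ranges=filestore-psa-range \
    --network=NETWORK_NAME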

Enable the Filestore CSI driver on a new cluster with Shared VPC

To enable the Filestore CSI driver on a new cluster with Shared VPC, follow these steps:

  1. Verify the usable subnets and secondary ranges. When creating a cluster, you must specify a subnet and the secondary IP address ranges to be used for the cluster's Pods and Services.

    gcloud container subnets list-usable \
        --project=SERVICE_PROJECT_ID \
        --network-project=HOST_PROJECT_ID

    The output is similar to the following:

    PROJECT          REGION       NETWORK     SUBNET  RANGE
    HOST_PROJECT_ID  us-central1  shared-net  tier-1  10.0.4.0/22
    ┌──────────────────────┬───────────────┬─────────────────────────────┐
    │ SECONDARY_RANGE_NAME │ IP_CIDR_RANGE │            STATUS           │
    ├──────────────────────┼───────────────┼─────────────────────────────┤
    │ tier-1-pods          │ 10.4.0.0/14   │ usable for pods or services │
    │ tier-1-services      │ 10.0.32.0/20  │ usable for pods or services │
    └──────────────────────┴───────────────┴─────────────────────────────┘
  2. Create a GKE cluster. The following examples show how you can use the gcloud CLI to create an Autopilot or Standard cluster configured for Shared VPC. The examples use the network, subnet, and range names from Creating a network and two subnets.

    Autopilot

    gcloud container clusters create-auto tier-1-cluster \
        --project=SERVICE_PROJECT_ID \
        --region=COMPUTE_REGION \
        --network=projects/HOST_PROJECT_ID/global/networks/NETWORK_NAME \
        --subnetwork=projects/HOST_PROJECT_ID/regions/COMPUTE_REGION/subnetworks/SUBNET_NAME \
        --cluster-secondary-range-name=tier-1-pods \
        --services-secondary-range-name=tier-1-services

    Standard

    gcloud container clusters create tier-1-cluster \
        --project=SERVICE_PROJECT_ID \
        --zone=COMPUTE_REGION \
        --enable-ip-alias \
        --network=projects/HOST_PROJECT_ID/global/networks/NETWORK_NAME \
        --subnetwork=projects/HOST_PROJECT_ID/regions/COMPUTE_REGION/subnetworks/SUBNET_NAME \
        --cluster-secondary-range-name=tier-1-pods \
        --services-secondary-range-name=tier-1-services \
        --addons=GcpFilestoreCsiDriver
  3. Create firewall rules to allow communication between nodes, Pods, and Services in your cluster. The following example shows how you can create a firewall rule named my-shared-net-rule-2.

    gcloud compute firewall-rules create my-shared-net-rule-2 \
        --project HOST_PROJECT_ID \
        --network=NETWORK_NAME \
        --allow=tcp,udp \
        --direction=INGRESS \
        --source-ranges=10.0.4.0/22,10.4.0.0/14,10.0.32.0/20

    In the example, the source range IP values come from the previous step, where you verified the usable subnets and secondary ranges.

Create a StorageClass when using the Filestore CSI driver with Shared VPC

The following example shows how you can create a StorageClass when using the Filestore CSI driver with Shared VPC:

cat << EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: filestore-sharedvpc-example
provisioner: filestore.csi.storage.gke.io
parameters:
  network: "projects/HOST_PROJECT_ID/global/networks/SHARED_VPC_NAME"
  connect-mode: PRIVATE_SERVICE_ACCESS
  reserved-ip-range: RESERVED_IP_RANGE_NAME
allowVolumeExpansion: true
EOF

Replace the following:

  • HOST_PROJECT_ID: the ID or name of the host project of the Shared VPC network.
  • SHARED_VPC_NAME: the name of the Shared VPC network you created earlier.
  • RESERVED_IP_RANGE_NAME: the name of the specific reserved IP address range to provision the Filestore instance in. This field is optional. If a reserved IP address range is specified, it must be a named address range instead of a direct CIDR value.

If you want to provision a volume backed by Filestore multishares on GKE clusters running version 1.23 or later, see Optimize storage with Filestore multishares for GKE.

Reconnect Filestore single share volumes

If you are using Filestore with the basic HDD, basic SSD, or enterprise (single share) tier, you can follow these instructions to reconnect your existing Filestore instance to your GKE workloads.

  1. Find the details of your pre-provisioned Filestore instance by following the instructions in Getting information about a specific instance.

  2. Redeploy your PersistentVolume specification. In the volumeAttributes field, modify the following fields to use the same values as your Filestore instance from step 1:

    • ip: Modify this value to the pre-provisioned Filestore instance IP address.
    • volume: Modify this value to the pre-provisioned Filestore instance's share name. In the claimRef, make sure you reference the same PersistentVolumeClaim that you redeploy in the next step. (A fragment showing these fields appears after these steps.)
  3. Redeploy your PersistentVolumeClaim specification.

  4. Check the binding status of your PersistentVolumeClaim and PersistentVolume by running kubectl get pvc.

  5. Redeploy your Pod specification and ensure that your Pod is able to access the Filestore share again.
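
For reference, the fields that you update in step 2 are limited to the csi block of the PersistentVolume; this fragment uses the same placeholders as the pre-provisioned example earlier in this page:

csi:
  driver: filestore.csi.storage.gke.io
  volumeHandle: "modeInstance/FILESTORE_INSTANCE_LOCATION/FILESTORE_INSTANCE_NAME/FILESTORE_SHARE_NAME"
  volumeAttributes:
    ip: FILESTORE_INSTANCE_IP      # IP address of your existing instance (step 1)
    volume: FILESTORE_SHARE_NAME   # share name of your existing instance (step 1)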

