Access Filestore instances with the Filestore CSI driver
The Filestore CSI driver is the primary way for you to use Filestore instances with Google Kubernetes Engine (GKE). The Filestore CSI driver provides a fully managed experience powered by the open source Google Cloud Filestore CSI driver.
The Filestore CSI driver version is tied to Kubernetes minor version numbers. The Filestore CSI driver version is typically the latest driver available at the time that the Kubernetes minor version is released. The drivers update automatically when the cluster is upgraded to the latest GKE patch.
Note: Because the Filestore CSI driver and some of the other associated CSI components are deployed as separate containers, they incur resource usage (VM CPU, memory, and boot disk) on Kubernetes nodes. VM CPU usage is typically tens of millicores and memory usage is typically tens of MiB. Boot disk usage is mostly incurred by the logs of the CSI driver and other system containers in the Deployment. For details about pricing for Autopilot and Standard clusters, see Google Kubernetes Engine pricing.

Benefits
The Filestore CSI driver provides the following benefits:
- You have access to fully managed NFS storage through the Kubernetes APIs (`kubectl`).
- You can use the GKE Filestore CSI driver to dynamically provision your PersistentVolumes.
- You can use volume snapshots with the GKE Filestore CSI driver. CSI volume snapshots can be used to create Filestore backups. A Filestore backup creates a differential copy of the file share, including all file data and metadata, and stores it separately from the instance. You can restore this copy to a new Filestore instance only. Restoring to an existing Filestore instance is not supported. You can use the CSI volume snapshot API to trigger Filestore backups by adding a `type: backup` field in the volume snapshot class, as shown in the sketch after this list.
- You can use volume expansion with the GKE Filestore CSI driver. Volume expansion lets you resize your volume's capacity.
- You can access existing Filestore instances by using pre-provisioned Filestore instances in Kubernetes workloads. You can also dynamically create or delete Filestore instances and use them in Kubernetes workloads with a StorageClass or a Deployment.
- Supports Filestore multishares for GKE. This feature lets you create a Filestore instance and allocate multiple smaller NFS-mounted PersistentVolumes for it simultaneously across any number of GKE clusters.
- Supports Basic HDD tier with a minimum capacity of 100 GiB.
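For reference, a minimal VolumeSnapshotClass sketch that sets the backup type is shown below. The class name is a placeholder, and the deletionPolicy value is an assumption you should adjust for your retention needs.

```yaml
# Sketch of a VolumeSnapshotClass that makes CSI volume snapshots trigger
# Filestore backups. The metadata name is a placeholder.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: filestore-backup-snapshot-class
driver: filestore.csi.storage.gke.io
parameters:
  type: backup            # tells the Filestore CSI driver to create a backup
deletionPolicy: Delete    # assumption; use Retain to keep backups after snapshot deletion
```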
Requirements
To use the Filestore CSI driver, your clusters must meet the following requirements:
- The required GKE version depends on the Filestore service tier and the features you intend to use. See the Supported service tiers, protocols, and GKE versions table for supported combinations.
- The Filestore CSI driver is supported for clusters using Linux nodes only. Windows Server nodes are not supported.
- The minimum instance size is at least 100 GiB for the Basic HDD tier and at least 1 TiB for the Zonal, Regional, and Enterprise service tiers. To learn more, see Service tiers.
Supported service tiers, protocols, and GKE versions
The following table outlines the supported combinations of Filestore service tiers, protocols, and the minimum required GKE versions for use with the Filestore CSI driver.
| Service tier | Share type | GKE minimum version for NFSv3 | GKE minimum version for NFSv4.1 |
|---|---|---|---|
| Enterprise | Single share, multishare | 1.25 | 1.33 (single share only) |
| Zonal (1 TiB - 9.75 TiB) | Single share | 1.31 | 1.33 |
| Zonal (10 TiB - 100 TiB) | Single share | 1.27 | 1.33 |
| Regional | Single share | 1.33.4-gke.1172000 | 1.33.4-gke.1172000 |
| Basic HDD (100 GiB - 63.9 TiB) | Single share | 1.33 | Not supported |
| Basic HDD (1 TiB - 63.9 TiB) | Single share | 1.21 | Not supported |
| Basic SSD | Single share | 1.21 | Not supported |
Filestore uses the NFSv3 file system protocol on the Filestore instance by default and supports any NFSv3-compatible client.
Before you begin
Before you start, make sure that you have performed the following tasks:
- Enable the Cloud Filestore API and the Google Kubernetes Engine API. Enable APIs
- If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running the `gcloud components update` command. Earlier gcloud CLI versions might not support running the commands in this document.
  Note: For existing gcloud CLI installations, make sure to set the `compute/region` property. If you use primarily zonal clusters, set the `compute/zone` instead. By setting a default location, you can avoid errors in the gcloud CLI like the following: `One of [--zone, --region] must be supplied: Please specify location.` You might need to specify the location in certain commands if the location of your cluster differs from the default that you set.
- Ensure that you have an existing Autopilot or Standard cluster. If you need one, create an Autopilot cluster. The Filestore CSI driver is enabled by default for Autopilot clusters.
- If you want to use Filestore on a Shared VPC network, see the additional setup instructions in Use Filestore with Shared VPC.
Enable the Filestore CSI driver on your Standard cluster
Note: The Filestore CSI driver is enabled by default for Autopilot clusters. To enable the Filestore CSI driver on Standard clusters, use the Google Cloud CLI or the Google Cloud console.
To enable the driver on an existing Standard cluster, complete the following steps:
gcloud
```
gcloud container clusters update CLUSTER_NAME \
    --update-addons=GcpFilestoreCsiDriver=ENABLED
```

Replace CLUSTER_NAME with the name of the existing cluster.
Console
1. Go to the Google Kubernetes Engine page in the Google Cloud console.
2. In the cluster list, click the name of the cluster you want to modify.
3. Under Features, next to the Filestore CSI driver field, click Edit Filestore CSI driver.
4. Select the Enable Filestore CSI driver checkbox.
5. Click Save Changes.
If you want to use Filestore on a Shared VPC network, see Enable the Filestore CSI driver on a new cluster with Shared VPC.
After you enable the Filestore CSI driver, you can use the driver in Kubernetes volumes by specifying the driver and provisioner name: `filestore.csi.storage.gke.io`.
Disable the Filestore CSI driver
You can disable the Filestore CSI driver on an existing Autopilot or Standard cluster by using the Google Cloud CLI or the Google Cloud console.
Note: We strongly recommend not to manually disable the Filestore CSI driver on Autopilot clusters. Doing so causes any Pods using PersistentVolumes owned by the driver to fail to terminate. New Pods attempting to use those PersistentVolumes will also fail to start.

gcloud
```
gcloud container clusters update CLUSTER_NAME \
    --update-addons=GcpFilestoreCsiDriver=DISABLED \
    --region REGION
```

Replace the following values:

- CLUSTER_NAME: the name of the existing cluster.
- REGION: the region for your cluster (such as us-central1).
Console
1. In the Google Cloud console, go to the Google Kubernetes Engine menu.
2. In the cluster list, click the name of the cluster you want to modify.
3. Under Features, next to the Filestore CSI driver field, click Edit Filestore CSI driver.
4. Clear the Enable Filestore CSI driver checkbox.
5. Click Save Changes.
Access pre-existing Filestore instances using the Filestore CSI driver
This section describes the typical process for using a Kubernetes volume to access pre-existing Filestore instances using the Filestore CSI driver in GKE:
Create a PersistentVolume and a PersistentVolumeClaim to access the instance
Create a manifest file like the one shown in the following example, and name it preprov-filestore.yaml:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: PV_NAME
spec:
  storageClassName: ""
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
  csi:
    driver: filestore.csi.storage.gke.io
    volumeHandle: "modeInstance/FILESTORE_INSTANCE_LOCATION/FILESTORE_INSTANCE_NAME/FILESTORE_SHARE_NAME"
    volumeAttributes:
      ip: FILESTORE_INSTANCE_IP
      volume: FILESTORE_SHARE_NAME
      protocol: FILESYSTEM_PROTOCOL
  claimRef:
    name: PVC_NAME
    namespace: NAMESPACE
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: PVC_NAME
  namespace: NAMESPACE
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Ti
```

To create the PersistentVolumeClaim and PersistentVolume resources based on the preprov-filestore.yaml manifest file, run the following command:

```
kubectl apply -f preprov-filestore.yaml
```
To specify the NFSv4.1 file system protocol, set the protocol field to NFS_V4_1 in the volumeAttributes field of a PersistentVolume object. To use the NFSv3 file system protocol, set the protocol field to NFS_V3 or omit the protocol field.
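For example, a sketch of a pre-provisioned PersistentVolume pinned to NFSv4.1 might look like the following. The placeholders match the manifest above, and the PV name is a hypothetical example:

```yaml
# Sketch: pre-provisioned Filestore PersistentVolume using the NFSv4.1 protocol.
# Placeholders (FILESTORE_*) follow the manifest above; the PV name is hypothetical.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: filestore-nfsv41-pv
spec:
  storageClassName: ""
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteMany
  csi:
    driver: filestore.csi.storage.gke.io
    volumeHandle: "modeInstance/FILESTORE_INSTANCE_LOCATION/FILESTORE_INSTANCE_NAME/FILESTORE_SHARE_NAME"
    volumeAttributes:
      ip: FILESTORE_INSTANCE_IP
      volume: FILESTORE_SHARE_NAME
      protocol: NFS_V4_1
```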
Then, proceed to create a Deployment that consumes the volume.
Create a volume using the Filestore CSI driver
The following sections describe the typical process for using a Kubernetesvolume backed by a Filestore CSI driver in GKE:
- Create a StorageClass
- Use a PersistentVolumeClaim to access the volume
- Create a Deployment that consumes the volume
Create a StorageClass
After you enable the Filestore CSI driver, GKE automatically installs the following StorageClasses for provisioning Filestore instances:
- zonal-rwx, using the Filestore zonal tier.
- enterprise-rwx, using the Filestore enterprise tier, where each Kubernetes PersistentVolume maps to a Filestore instance.
- enterprise-multishare-rwx, using the Filestore enterprise tier, where each Kubernetes PersistentVolume maps to a share of a given Filestore instance. To learn more, see Filestore multishares for Google Kubernetes Engine.
- standard-rwx, using the Filestore basic HDD service tier.
- premium-rwx, using the Filestore basic SSD service tier.
Each StorageClass is only available in GKE clusters running their respective supported GKE version numbers. For a list of supported versions required for each service tier, see Requirements.
You can find the name of your installed StorageClass by running the following command:

```
kubectl get sc
```

Note: For StorageClasses that use volumeBindingMode: WaitForFirstConsumer, the Filestore instance is not provisioned immediately after creating the PersistentVolumeClaim. The instance is only created when a Pod that references the PersistentVolumeClaim is scheduled.

You can also install a different StorageClass that uses the Filestore CSI driver by adding `filestore.csi.storage.gke.io` in the provisioner field.
Filestore needs to know on which network to create the new instance. The automatically installed StorageClasses use the default network created for GKE clusters. If you have deleted this network or want to use a different network, you must create a new StorageClass as described in the following steps. Otherwise, the automatically installed StorageClasses won't work.
Save the following manifest as filestore-example-class.yaml:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: filestore-example
provisioner: filestore.csi.storage.gke.io
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
  tier: standard
  network: default
```

From the manifest, consider the following parameter configuration:

- Setting `volumeBindingMode` to `Immediate` allows the provisioning of the volume to begin immediately. This is possible because Filestore instances are accessible from any zone. Therefore, GKE does not need to know the zone where the Pod is scheduled, in contrast with Compute Engine persistent disk. When set to `WaitForFirstConsumer`, GKE begins provisioning only after the Pod is scheduled. For more information, see VolumeBindingMode.
- Any supported Filestore tier can be specified in the `tier` parameter (for example, BASIC_HDD, BASIC_SSD, ZONAL, or ENTERPRISE).
- The `network` parameter can be used when provisioning Filestore instances on non-default VPCs. Non-default VPCs require special firewall rules to be set up.
- The `protocol` parameter can be used to set the file system protocol of the Filestore instance. It can take the following values: NFS_V3 and NFS_V4_1. The default protocol is NFS_V3.

To create a StorageClass resource based on the filestore-example-class.yaml manifest file, run the following command:

```
kubectl create -f filestore-example-class.yaml
```
If you want to use Filestore on a Shared VPC network, see Create a StorageClass when using the Filestore CSI driver with Shared VPC.
Use a PersistentVolumeClaim to access the volume
You can create a PersistentVolumeClaim resource that references the Filestore CSI driver's StorageClass.

You can use either a pre-installed or custom StorageClass.

The following example manifest file creates a PersistentVolumeClaim that references the StorageClass named filestore-example.
Save the following manifest file as pvc-example.yaml:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: podpvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: filestore-example
  resources:
    requests:
      storage: 1Ti
```

To create a PersistentVolumeClaim resource based on the pvc-example.yaml manifest file, run the following command:

```
kubectl create -f pvc-example.yaml
```
Create a Deployment that consumes the volume
The following example Deployment manifest consumes the PersistentVolumeClaim created in pvc-example.yaml.

Multiple Pods can share the same PersistentVolumeClaim resource.
Save the following manifest as filestore-example-deployment.yaml:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: mypvc
      volumes:
      - name: mypvc
        persistentVolumeClaim:
          claimName: podpvc
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: podpvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: filestore-example
  resources:
    requests:
      storage: 1Ti
```

To create a Deployment based on the filestore-example-deployment.yaml manifest file, run the following command:

```
kubectl apply -f filestore-example-deployment.yaml
```

Confirm the Deployment was successfully created:

```
kubectl get deployment
```

It might take a while for Filestore instances to complete provisioning. Before that, Deployments won't report a READY status. You can check the progress by monitoring your PVC status by running the following command:

```
kubectl get pvc
```

You should see the PVC reach a BOUND status when the volume provisioning completes.
Label Filestore instances
You can use labels to group related instances and store metadata about an instance. A label is a key-value pair that helps you organize your Filestore instances. You can attach a label to each resource, then filter the resources based on their labels.
You can provide labels by using the labels key in StorageClass.parameters. A Filestore instance can be labeled with information about what PersistentVolumeClaim/PersistentVolume the instance was created for. Custom label keys and values must comply with the label naming convention. See the Kubernetes storage class example to apply custom labels to the Filestore instance, or the sketch that follows.
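As a sketch, a custom StorageClass that applies instance labels might look like the following. The label keys and values are placeholders, and the comma-separated key=value format of the labels parameter is an assumption based on the linked example; verify it against that example before use.

```yaml
# Sketch of a StorageClass that applies custom labels to provisioned Filestore
# instances. Label keys and values are placeholders; the value format is an
# assumption based on the linked Kubernetes storage class example.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: filestore-labeled-example
provisioner: filestore.csi.storage.gke.io
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
  tier: standard
  network: default
  labels: "team=storage,env=test"
```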
Use NFSv4.1 file system protocol with Filestore
The Filestore CSI driver supports the NFSv4.1 file system protocol with GKE version 1.33 or later. For static provisioning, set the protocol field to NFS_V4_1 in the volumeAttributes field of a PersistentVolume object.
For dynamic provisioning, set the protocol field to NFS_V4_1 in the parameters of a StorageClass object, as shown in the following example:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: enterprise-multishare-rwx
provisioner: filestore.csi.storage.gke.io
parameters:
  tier: enterprise
  multishare: "true"
  instance-storageclass-label: "enterprise-multishare-rwx"
  protocol: NFS_V4_1
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```

You cannot mount the Filestore instance with the NFSv4.1 protocol with mountOptions set to nfsvers=3 in the StorageClass object.
Use fsgroup with Filestore volumes
Kubernetes uses fsGroup to change permissions and ownership of the volume to match a user-requested fsGroup in the Pod's SecurityContext. An fsGroup is a supplemental group that applies to all containers in a Pod. You can apply an fsGroup to volumes provisioned by the Filestore CSI driver, as in the sketch after this paragraph.
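A minimal sketch of applying an fsGroup to a Filestore-backed volume follows. It assumes the podpvc PersistentVolumeClaim from the earlier example, and the group ID 1000 is an arbitrary placeholder:

```yaml
# Sketch: a Pod whose Filestore volume ownership is adjusted to supplemental group 1000.
# Assumes the podpvc PersistentVolumeClaim created earlier on this page.
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-example
spec:
  securityContext:
    fsGroup: 1000          # placeholder group ID applied to the mounted volume
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - mountPath: /data
      name: filestore-volume
  volumes:
  - name: filestore-volume
    persistentVolumeClaim:
      claimName: podpvc
```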
Configure IP access rules with Filestore volumes
Filestore supports IP-based access control rules for volumes. This feature is available on GKE clusters running version 1.29.5 or later.
This feature allows administrators to specify which IP address ranges are allowed to access a Filestore instance provisioned dynamically through GKE. This enhances security by restricting access to only authorized clients, especially in scenarios where the GKE cluster's IP range is too broad, potentially exposing the Filestore instance to unauthorized users or applications.
These rules can be configured directly through the Filestore API, or through the Filestore CSI driver when a volume is created. You can provide the selected configuration in JSON format in the StorageClass using the nfs-export-options-on-create parameter.
The following example manifest shows how to specify the configuration:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: filestore-example
provisioner: filestore.csi.storage.gke.io
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
  tier: "enterprise"
  nfs-export-options-on-create: '[{"accessMode":"READ_WRITE","ipRanges":["10.0.0.0/24"],"squashMode":"ROOT_SQUASH","anonUid":"1003","anonGid":"1003"},{"accessMode":"READ_WRITE","ipRanges":["10.0.0.0/28"],"squashMode":"NO_ROOT_SQUASH"}]'
```

Security options
Filestore IP access rules simplify the configuration of shared file storage permissions for your GKE workloads. However, understanding how they manage file ownership and access requires grasping a few key concepts:
- NFS and user mappings: NFS (Network File System) is the protocol used by Filestore. It works by mapping users on client systems (your GKE Pods) to users on the Filestore server. If a file on the server is owned by user ID 1003, and a client connects with user ID 1003, they'll have access to the file.
- Root squashing and anonUid: Root squashing (ROOT_SQUASH) is a security feature that prevents clients from accessing the Filestore instance with full root privileges. When root squashing is enabled, root users on client systems are mapped to a non-privileged user specified by the anonUid setting. No root squashing (NO_ROOT_SQUASH) allows clients to access the Filestore instance with full root privileges, which is convenient for initial setup but less secure for regular operations.
- Initial setup and permissions: By default, a new Filestore instance is owned entirely by the root user. If you enable root squashing without first setting up permissions for other users, you'll lose access. This is why you need at least one NFS export rule with NO_ROOT_SQUASH to initially configure access for other users and groups.
Recommendations
- Initial setup: Always start with at least one NFS export rule that specifies an administrator range with READ_WRITE permissions and allows NO_ROOT_SQUASH access. Use this access to create directories, set permissions, and assign ownership as needed.
- Security: Enable root squashing (ROOT_SQUASH) to enhance security. Note that after a volume is created, you can only modify the access rules through the Filestore API.
- Shared access: Use fsGroup in your Pod security contexts to manage group ownership of shared volumes. Make sure not to overlap your setting with the ROOT_SQUASH mode. Doing so returns an Access denied error message.
Use Filestore with Shared VPC
This section covers how to use a Filestore instance on a Shared VPC network from a service project.
Set up a cluster with Shared VPC
To set up your clusters with a Shared VPC network, follow these steps:
- Create a host and service project.
- Enable the Google Kubernetes Engine API on both your host and service projects.
- In your host project, create a network and a subnet.
- Enable Shared VPC in the host project.
- On the host project, grant the HostServiceAgent user role binding for the service project's GKE service account.
- Enable private service access on the Shared VPC network.
Enable the Filestore CSI driver on a new cluster with Shared VPC
To enable the Filestore CSI driver on a new cluster with Shared VPC, follow these steps:
Verify the usable subnets and secondary ranges. When creating a cluster, you must specify a subnet and the secondary IP address ranges to be used for the cluster's Pods and Services.
```
gcloud container subnets list-usable \
    --project=SERVICE_PROJECT_ID \
    --network-project=HOST_PROJECT_ID
```

The output is similar to the following:

```
PROJECT            REGION       NETWORK     SUBNET  RANGE
HOST_PROJECT_ID    us-central1  shared-net  tier-1  10.0.4.0/22
┌──────────────────────┬───────────────┬─────────────────────────────┐
│ SECONDARY_RANGE_NAME │ IP_CIDR_RANGE │            STATUS           │
├──────────────────────┼───────────────┼─────────────────────────────┤
│ tier-1-pods          │ 10.4.0.0/14   │ usable for pods or services │
│ tier-1-services      │ 10.0.32.0/20  │ usable for pods or services │
└──────────────────────┴───────────────┴─────────────────────────────┘
```

Create a GKE cluster. The following examples show how you can use the gcloud CLI to create an Autopilot or Standard cluster configured for Shared VPC. The following examples use the network, subnet, and range names from Creating a network and two subnets.
Autopilot
```
gcloud container clusters create-auto tier-1-cluster \
    --project=SERVICE_PROJECT_ID \
    --region=COMPUTE_REGION \
    --network=projects/HOST_PROJECT_ID/global/networks/NETWORK_NAME \
    --subnetwork=projects/HOST_PROJECT_ID/regions/COMPUTE_REGION/subnetworks/SUBNET_NAME \
    --cluster-secondary-range-name=tier-1-pods \
    --services-secondary-range-name=tier-1-services
```

Standard
```
gcloud container clusters create tier-1-cluster \
    --project=SERVICE_PROJECT_ID \
    --zone=COMPUTE_REGION \
    --enable-ip-alias \
    --network=projects/HOST_PROJECT_ID/global/networks/NETWORK_NAME \
    --subnetwork=projects/HOST_PROJECT_ID/regions/COMPUTE_REGION/subnetworks/SUBNET_NAME \
    --cluster-secondary-range-name=tier-1-pods \
    --services-secondary-range-name=tier-1-services \
    --addons=GcpFilestoreCsiDriver
```

Create firewall rules to allow communication between nodes, Pods, and Services in your cluster. The following example shows how you can create a firewall rule named my-shared-net-rule-2.

```
gcloud compute firewall-rules create my-shared-net-rule-2 \
    --project HOST_PROJECT_ID \
    --network=NETWORK_NAME \
    --allow=tcp,udp \
    --direction=INGRESS \
    --source-ranges=10.0.4.0/22,10.4.0.0/14,10.0.32.0/20
```

In the example, the source ranges IP values come from the previous step where you verified the usable subnets and secondary ranges.
Create a StorageClass when using the Filestore CSI driver with Shared VPC
The following example shows how you can create a StorageClass when using theFilestore CSI driver with Shared VPC:
```
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: filestore-sharedvpc-example
provisioner: filestore.csi.storage.gke.io
parameters:
  network: "projects/HOST_PROJECT_ID/global/networks/SHARED_VPC_NAME"
  connect-mode: PRIVATE_SERVICE_ACCESS
  reserved-ip-range: RESERVED_IP_RANGE_NAME
allowVolumeExpansion: true
EOF
```

Replace the following:

- HOST_PROJECT_ID: the ID or name of the host project of the Shared VPC network.
- SHARED_VPC_NAME: the name of the Shared VPC network you created earlier.
- RESERVED_IP_RANGE_NAME: the name of the specific reserved IP address range to provision the Filestore instance in. This field is optional. If a reserved IP address range is specified, it must be a named address range instead of a direct CIDR value.
If you want to provision a volume backed by Filestore multishares on GKE clusters running version 1.23 or later, see Optimize storage with Filestore multishares for GKE.
Reconnect Filestore single share volumes
If you are using Filestore with the basic HDD, basic SSD, or enterprise (single share) tier, you can follow these instructions to reconnect your existing Filestore instance to your GKE workloads.
Find the details of your pre-provisioned Filestore instance by following the instructions in Getting information about a specific instance.
Redeploy your PersistentVolume specification. In the volumeAttributes field, modify the following fields to use the same values as your Filestore instance from step 1 (a sketch follows these steps):

- ip: Modify this value to the pre-provisioned Filestore instance IP address.
- volume: Modify this value to the pre-provisioned Filestore instance's share name.

In the claimRef, make sure you reference the same PersistentVolumeClaim that you redeploy in the next step.
Redeploy your PersistentVolumeClaim specification.
Check the binding status of your PersistentVolumeClaim and PersistentVolume by running `kubectl get pvc`.

Redeploy your Pod specification and ensure that your Pod is able to access the Filestore share again.
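For reference, a sketch of the updated PersistentVolume follows. The instance location, instance name, IP address 10.0.0.2, share name vol1, and resource names are all placeholders for the values you looked up in step 1; the claimRef must match the PersistentVolumeClaim you redeploy.

```yaml
# Sketch: PersistentVolume updated to reconnect an existing Filestore instance.
# All names, the IP address, and the share name below are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: preprov-filestore-pv
spec:
  storageClassName: ""
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: filestore.csi.storage.gke.io
    volumeHandle: "modeInstance/us-central1-a/my-filestore-instance/vol1"
    volumeAttributes:
      ip: 10.0.0.2      # pre-provisioned instance IP address from step 1
      volume: vol1      # pre-provisioned instance share name from step 1
  claimRef:
    name: preprov-filestore-pvc
    namespace: default
```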
What's next
- Learn how to deploy a stateful Filestore workload on GKE.
- Learn how to share a Filestore enterprise instance with multiple Persistent Volumes.
- Learn how to use volume expansion.
- Learn how to use volume snapshots.
- Read more about the CSI driver on GitHub.