GKE security overview
Google Kubernetes Engine (GKE) provides many ways to help secure your workloads. Protecting workloads in GKE involves many layers of the stack, including the contents of your container image, the container runtime, the cluster network, and access to the cluster API server.
It's best to take a layered approach to protecting your clusters and workloads. You can apply the principle of least privilege to the level of access provided to your users and your application. In each layer, your organization might need to make different tradeoffs to allow the right level of flexibility and security to securely deploy and maintain your workloads. For example, some security settings might be too constraining for certain types of applications or use cases to function without significant refactoring.
This document provides an overview of each layer of your infrastructure, and shows how you can configure its security features to best suit your needs.
This document is for Security specialists who define, govern, and implement policies and procedures to protect an organization's data from unauthorized access. To learn more about common roles and example tasks that we reference in Google Cloud content, see Common GKE user roles and tasks.
Note: GKE Autopilot clusters implement many security configurations for you. For details, refer to Autopilot security capabilities.

Authentication and authorization
Kubernetes supports two types of authentication:
- User accounts are accounts that are known to Kubernetes, but are not managed by Kubernetes. For example, you cannot create or delete them using kubectl.
- Service accounts are accounts that are created and managed by Kubernetes, but can only be used by Kubernetes-created entities, such as Pods.
In a GKE cluster, Kubernetes user accounts are managed by Google Cloud and may be either a Google Account or an IAM service account.
Once authenticated, you need to authorize these identities to create, read, update, or delete Kubernetes resources.
Despite the similar names, Kubernetes service accounts and Google Cloud service accounts are different entities. Kubernetes service accounts are part of the cluster in which they are defined and are typically used within that cluster. By contrast, Google Cloud service accounts are part of a Google Cloud project, and can easily be granted permissions both within clusters and to Google Cloud project clusters themselves, as well as to any Google Cloud resource using Identity and Access Management (IAM). This makes Google Cloud service accounts more powerful than Kubernetes service accounts; in order to follow the security principle of least privilege, you should consider using Google Cloud service accounts only when their capabilities are required.
To configure more granular access to Kubernetes resources at the cluster level or within Kubernetes namespaces, you use Role-Based Access Control (RBAC). RBAC allows you to create detailed policies that define which operations and resources you allow users and service accounts to access. With RBAC, you can control access for Google Accounts, Google Cloud service accounts, and Kubernetes service accounts. To further simplify and streamline your authentication and authorization strategy for GKE, you should ensure that the legacy Attribute-Based Access Control is disabled so that Kubernetes RBAC and IAM are the sources of truth.
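For illustration, the following manifest is a minimal sketch of a namespace-scoped Role and RoleBinding that grant a Google Account read-only access to Pods. The namespace, resource names, and email address are placeholders, not values from this document.

```yaml
# Hypothetical example: grant read-only Pod access in the "dev" namespace
# to a Google Account. All names and the email address are placeholders.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
- kind: User
  name: example-user@example.com   # Google Account (placeholder)
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```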
For more information:
- Read the GKE RBAC documentation.
- Learn about supported authentication methods when connecting to the Kubernetes API server in Authenticating to the Kubernetes API server.
Control plane security
In GKE, the Kubernetes control plane components are managed and maintained by Google. The control plane components host the software that runs the Kubernetes control plane, including the API server, scheduler, controller manager, and the etcd API. If the cluster runs etcd database instances on the control plane VMs, these instances are also managed and maintained by Google.
You can access the control plane using a DNS-based endpoint (recommended), IP-based endpoints, or both. If you use IP-based endpoints, you can protect the Kubernetes API server by using authorized networks and not enabling the external endpoint of the control plane. This lets you assign an internal IP address to the control plane and disable access on the external IP address. If you use a DNS-based endpoint, you can use IAM and VPC Service Controls to secure your control plane access with both identity and network-aware policies.
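As a rough sketch, assuming an existing cluster that uses IP-based endpoints, the following commands restrict control plane access to an authorized network and then disable the external endpoint. The cluster name, location, and CIDR range are placeholders; verify the flags against your gcloud version and cluster configuration.

```bash
# Sketch: allow API server access only from an approved CIDR range.
gcloud container clusters update CLUSTER_NAME \
    --location us-central1 \
    --enable-master-authorized-networks \
    --master-authorized-networks 203.0.113.0/28

# Sketch: disable the external control plane endpoint so that only the
# internal IP address is reachable (requires a private cluster setup).
gcloud container clusters update CLUSTER_NAME \
    --location us-central1 \
    --enable-private-endpoint
```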
You can handle cluster authentication in Google Kubernetes Engine by using IAM as the identity provider. For information on authentication, see Authenticating to the Kubernetes API server.
Another way to help secure your control plane is to ensure that you are doing credential rotation on a regular basis. When credential rotation is initiated, the SSL certificates and cluster certificate authority are rotated. This process is automated by GKE and also ensures that your control plane IP address rotates.
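A minimal sketch of starting and completing a rotation with gcloud follows; the cluster name and location are placeholders, and nodes, kubeconfig entries, and API clients must be updated before the rotation is completed.

```bash
# Sketch: begin rotating the control plane credentials and IP address.
gcloud container clusters update CLUSTER_NAME \
    --location us-central1 \
    --start-credential-rotation

# ...after node pools have been recreated and clients updated, finish it:
gcloud container clusters update CLUSTER_NAME \
    --location us-central1 \
    --complete-credential-rotation
```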
For more information:
- Read more about control plane security.
- Read the Role-Based Access Control documentation.
- Follow the Credential Rotation guide.
Node security
GKE deploys your workloads on Compute Engine instances running in your Google Cloud project. These instances are attached to your GKE cluster as nodes. The following sections show you how to leverage the node-level security features available to you in Google Cloud.
Container-Optimized OS
By default, GKE nodes use Google's Container-Optimized OS as the operating system on which to run Kubernetes and its components. Container-Optimized OS implements several advanced features for enhancing the security of GKE clusters, including:
- Locked-down firewall
- Read-only filesystem where possible
- Limited user accounts and disabled root login
GKE Autopilot nodes always use Container-Optimized OS as the operating system.
Warning: Loading kernel modules that aren't shipped with Container-Optimized OS is unsupported. Doing so can lead to security vulnerabilities and reliability issues.

Node upgrades
A best practice is to patch your OS on a regular basis. From time to time, security issues in the container runtime, Kubernetes itself, or the node operating system might require you to upgrade your nodes more urgently. When you upgrade a node, its software is upgraded to the latest versions.
GKE clusters support automatic upgrades. In Autopilot clusters, automatic upgrades are always enabled. You can also manually upgrade the nodes in a Standard cluster.
Protect nodes from untrusted workloads
For clusters that run unknown or untrusted workloads, a good practice is to protect the operating system on the node from the untrusted workload running in a Pod.
For example, multi-tenant clusters, such as those operated by software-as-a-service (SaaS) providers, often execute unknown code submitted by their users. Security research is another application where workloads may need stronger isolation than nodes provide by default.
You can enable GKE Sandbox on your cluster to isolate untrusted workloads in sandboxes on the node. GKE Sandbox is built using gVisor, an open source project.
Secure instance metadata
GKE uses instance metadata from the underlying Compute Engine instances to provide nodes with credentials and configurations that are used to bootstrap nodes and to connect to the control plane. This metadata contains sensitive information that Pods on the node don't need access to, such as the node's service account key.
You can lock down sensitive instance metadata paths by using Workload Identity Federation for GKE. Workload Identity Federation for GKE enables the GKE metadata server in your cluster, which filters requests to sensitive fields such as kube-env.
Workload Identity Federation for GKE is always enabled in Autopilot clusters. In Standard clusters, Pods have access to instance metadata unless you manually enable Workload Identity Federation for GKE.
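As an illustrative sketch for a Standard cluster, the following commands enable Workload Identity Federation for GKE on the cluster and switch an existing node pool to the GKE metadata server; the cluster, node pool, project, and location names are placeholders.

```bash
# Sketch: enable Workload Identity Federation for GKE on the cluster.
gcloud container clusters update CLUSTER_NAME \
    --location us-central1 \
    --workload-pool=PROJECT_ID.svc.id.goog

# Sketch: have an existing node pool serve the GKE metadata server,
# which shields sensitive instance metadata from Pods.
gcloud container node-pools update NODE_POOL_NAME \
    --cluster CLUSTER_NAME \
    --location us-central1 \
    --workload-metadata=GKE_METADATA
```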
Network security
Most workloads running in GKE need to communicate with other services that could be running either inside or outside of the cluster. You can use several different methods to control what traffic is allowed to flow through your clusters and their Pods.
Limit Pod-to-Pod communication
By default, all Pods in a cluster can be reached over the network via their Pod IP address. Similarly, by default, egress traffic allows outbound connections to any address accessible in the VPC into which the cluster was deployed.
Cluster administrators and users can lock down the ingress and egress connections created to and from the Pods in a namespace by using network policies. By default, when there are no network policies defined, all ingress and egress traffic is allowed to flow into and out of all Pods. Network policies allow you to use labels to define the traffic flowing through your Pods.
Once a network policy is applied in a namespace, all traffic is dropped to and from Pods that don't match the configured labels. As part of creating clusters or namespaces, you can apply a default deny policy to both ingress and egress of every Pod to ensure that all new workloads added to the cluster must explicitly authorize the traffic they require.
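For example, a minimal default-deny NetworkPolicy for a single namespace might look like the following sketch; the namespace name is a placeholder, and network policy enforcement must be available in the cluster for it to take effect.

```yaml
# Sketch: with an empty podSelector and no ingress or egress rules listed,
# all traffic to and from Pods in "my-namespace" is denied until more
# specific policies allow it. The namespace name is a placeholder.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: my-namespace
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
```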
For more information:
- Read more about network policies
- Follow the network policy tutorial
- Read more about default policies
Filter load balanced traffic
To load balance your Kubernetes Pods with a network load balancer, you need to create a Service of type LoadBalancer that matches your Pods' labels. With the Service created, you will have an external-facing IP that maps to ports on your Kubernetes Pods. Filtering authorized traffic is achieved at the node level by kube-proxy, which filters based on IP address.
To configure this filtering, you can use the loadBalancerSourceRanges configuration of the Service object. With this configuration parameter, you can provide a list of CIDR ranges that you would like to allow for access to the Service. If you do not configure loadBalancerSourceRanges, all addresses are allowed to access the Service via its external IP.
For cases in which external access to the Service is not required, consider using an internal load balancer. The internal load balancer also respects the loadBalancerSourceRanges when it is necessary to filter out traffic from inside of the VPC.
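The following Service manifest is a sketch that combines both ideas: an internal load balancer (via the networking.gke.io/load-balancer-type annotation) that only accepts traffic from an allowed CIDR range through loadBalancerSourceRanges. The Service name, selector, ports, and ranges are placeholders; omit the annotation for an external load balancer.

```yaml
# Sketch: internal LoadBalancer Service restricted to one source range.
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
  loadBalancerSourceRanges:
  - 10.0.0.0/8   # only clients in this range can reach the Service
```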
For more information, follow the internal load balancing tutorial.
Filter traffic outside of the cluster
To control the flow of network traffic between external entities and your cluster, use Cloud Next Generation Firewall. You can use firewall configurations to, for example, block outgoing traffic from your Pods to unapproved destinations.
Firewall configurations aren't enough to control which registries the container images in your cluster come from. To limit container image pulls to a set of approved registries, see Block container images from unapproved registries.
Secure your workloads
Kubernetes allows users to quickly provision, scale, and update container-based workloads. This section describes tactics that administrators and users can employ to limit the effect a running container can have on other containers in the same cluster, the nodes where containers can run, and the Google Cloud services enabled in users' projects.
Limit privileges for containerized Pod processes
Limiting the privileges of containerized processes is important for the overall security of your cluster. GKE Autopilot clusters always restrict specific privileges, as described in Autopilot security capabilities.
GKE also allows you to set security-related options via the Security Context on both Pods and containers. These settings let you change security attributes of your processes, such as the following (see the sketch after this list):
- User and group to run as
- Available Linux capabilities
- Ability to escalate privileges
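As a sketch, a Pod manifest that applies these settings through securityContext might look like the following; the Pod name and image path are placeholders.

```yaml
# Sketch: a Pod that drops optional privileges. All names are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000          # user to run as
    runAsGroup: 1000         # group to run as
  containers:
  - name: app
    image: us-docker.pkg.dev/PROJECT_ID/REPO/app:latest
    securityContext:
      allowPrivilegeEscalation: false   # block privilege escalation
      capabilities:
        drop: ["ALL"]                   # remove all Linux capabilities
      readOnlyRootFilesystem: true
```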
To enforce these restrictions at the cluster level rather than at the Pod or container levels, use the PodSecurity admission controller. Cluster administrators can use PodSecurity admission to ensure that all Pods in a cluster or namespace adhere to a pre-defined policy in the Pod Security Standards. You can also set custom Pod security policies at the cluster level by using Gatekeeper.
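For example, a namespace can be opted in to the restricted Pod Security Standard by labeling it for the PodSecurity admission controller, as in this sketch; the namespace name and pinned version are placeholders.

```yaml
# Sketch: enforce the "restricted" Pod Security Standard in one namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: restricted-workloads
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
```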
Note: You can't use these policies to override the built-in security configurations in GKE Autopilot.

The GKE node operating systems, both Container-Optimized OS and Ubuntu, apply the default Docker AppArmor security policies to all containers started by Kubernetes. You can view the profile's template on GitHub. Among other things, the profile denies the following abilities to containers:
- Write files directly in /proc/
- Write to files that are not in a process ID directory (/proc/<number>)
- Write to files in /proc/sys other than /proc/sys/kernel/shm*
- Mount filesystems
Note: The default AppArmor profile is applied by the node even when the container.apparmor.security.beta.kubernetes.io/container-name annotations are missing from a Pod. As a result, security scanners that only examine Kubernetes API resources might falsely identify Pods without this annotation as missing an AppArmor profile, even though the AppArmor profile is applied at the node level.

For more information:
- Read the Pod Security Context documentation.
- Learn more about existing protections in the Container-Optimized OS AppArmor documentation.
Give Pods access to Google Cloud resources
Your containers and Pods might need access to other resources in Google Cloud. There are three ways to do this.
Workload Identity Federation for GKE (recommended)
The most secure way to authorize Pods to access Google Cloud resources is with Workload Identity Federation for GKE. Workload Identity Federation for GKE allows a Kubernetes service account to run as an IAM service account. Pods that run as the Kubernetes service account have the permissions of the IAM service account.
Workload Identity Federation for GKE can be used with GKE Sandbox.
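One common pattern is to annotate a Kubernetes service account with the IAM service account it should impersonate, as in the following sketch; the project, namespace, and account names are placeholders, and the corresponding roles/iam.workloadIdentityUser binding on the IAM service account is assumed to exist.

```yaml
# Sketch: a Kubernetes ServiceAccount that impersonates an IAM service
# account through Workload Identity Federation for GKE. Pods that use this
# ServiceAccount receive the IAM service account's permissions.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  namespace: default
  annotations:
    iam.gke.io/gcp-service-account: my-app@PROJECT_ID.iam.gserviceaccount.com
```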
Node service account
In Standard clusters, your Pods can also authenticate to Google Cloud using the credentials of the service account used by the node's Compute Engine virtual machine (VM).
Caution: Any Pod running in the cluster can use the node's service account credentials from the VM metadata. If you use this method, create and configure a custom service account that has the minimum IAM roles that are required by the Pods running in the cluster.

This approach is not compatible with GKE Sandbox because GKE Sandbox blocks access to the Compute Engine metadata server.
Service account JSON key (not recommended)
Caution: Service account keys are a security risk if not managed correctly. You should choose a more secure alternative to service account keys whenever possible. If you must authenticate with a service account key, you are responsible for the security of the private key and for other operations described by Best practices for managing service account keys. If you are prevented from creating a service account key, service account key creation might be disabled for your organization. For more information, see Managing secure-by-default organization resources. If you acquired the service account key from an external source, you must validate it before use. For more information, see Security requirements for externally sourced credentials.
You can grant credentials for Google Cloud resources to applications by using the service account key. This approach is strongly discouraged because of the difficulty of securely managing account keys.
If you choose this method, use custom IAM service accounts for each application so that applications have the minimal necessary permissions. Grant each service account the minimum IAM roles that are needed for its paired application to operate successfully. Keeping the service accounts application-specific makes it easier to revoke access in the case of a compromise without affecting other applications. After you have assigned your service account the correct IAM roles, you can create a JSON service account key, and then mount the key into your Pod using a Kubernetes Secret.
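As a sketch of this discouraged approach, assuming the JSON key has already been stored in a Kubernetes Secret named app-sa-key, a Pod might mount it and point GOOGLE_APPLICATION_CREDENTIALS at it like this; all names and paths are placeholders.

```yaml
# Sketch (discouraged): mount a service account key from a Secret so that
# Google Cloud client libraries pick it up via GOOGLE_APPLICATION_CREDENTIALS.
apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
spec:
  containers:
  - name: app
    image: us-docker.pkg.dev/PROJECT_ID/REPO/app:latest
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /var/secrets/google/key.json
    volumeMounts:
    - name: sa-key
      mountPath: /var/secrets/google
      readOnly: true
  volumes:
  - name: sa-key
    secret:
      secretName: app-sa-key   # Secret created from the downloaded JSON key
```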
Use Binary Authorization
Binary Authorization is a service on Google Cloud that provides software supply-chain security for applications that run in the cloud. Binary Authorization works with images that you deploy to GKE from Artifact Registry or another container image registry.
With Binary Authorization enforcement, you can ensure that internal processes that safeguard the quality and integrity of your software have successfully completed before an application is deployed to your production environment. For instructions about creating a cluster with Binary Authorization enabled, visit Creating a cluster in the Binary Authorization documentation.
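As a hedged sketch, a cluster with enforcement against the project's default (singleton) policy might be created like this; the cluster name and location are placeholders, and the flag value should be checked against your gcloud version.

```bash
# Sketch: create a cluster that enforces the project's Binary Authorization policy.
gcloud container clusters create CLUSTER_NAME \
    --location us-central1 \
    --binauthz-evaluation-mode=PROJECT_SINGLETON_POLICY_ENFORCE
```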
With Binary Authorization continuous validation (CV), you can ensure that container images associated with Pods are regularly monitored for conformance with your evolving internal processes.
Audit logging
Audit logging provides a way for administrators to retain, query, process, and alert on events that occur in your GKE environments. Administrators can use the logged information for forensic analysis, real-time alerting, or cataloging how a fleet of GKE clusters is being used and by whom.
By default, GKE logs Admin Activity logs. You can optionally also log Data Access events, depending on the types of operations you are interested in inspecting.
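As an illustrative fragment (not a complete policy file), Data Access logging for the GKE API can be turned on through the project's IAM policy auditConfigs, for example when applying a policy with gcloud projects set-iam-policy:

```yaml
# Sketch: audit configuration fragment enabling Data Access logs for GKE.
auditConfigs:
- service: container.googleapis.com
  auditLogConfigs:
  - logType: ADMIN_READ
  - logType: DATA_READ
  - logType: DATA_WRITE
```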
For more information:
- Follow the GKE audit logging tutorial.
- Read more about Cloud Audit Logs.
Built-in security measures
GKE enforces specific restrictions on what you can do to system objects in your clusters. When you perform an operation like creating or patching a workload, an admission webhook named warden-validating validates your request against a set of restricted operations and decides whether to allow the request.
Admission errors that are caused by this policy are similar to the following:
GKE Warden rejected the request because it violates one or more constraints.

Autopilot cluster security measures
Autopilot clusters apply multiple security settings based on our expertise and industry best practices. For details, see Security measures in Autopilot.
Standard cluster security measures
Standard clusters are more permissive by default than Autopilot clusters. GKE Standard clusters have the following security settings:
- You can't update the ServiceAccount used by GKE-managed system workloads, such as workloads in the kube-system namespace.
- You can't bind the cluster-admin default ClusterRole to the system:anonymous, system:unauthenticated, or system:authenticated groups.