Harden your cluster's security
This document provides best practices for improving the security of your Google Kubernetes Engine (GKE) environments. Security specialists who define, govern, and implement policies and procedures can use these best practices to protect their organization's data.
You should already be familiar with the following:
New GKE clusters implement many of the best practices in this document by default. Autopilot mode clusters have a stricter default security posture than Standard mode clusters.
To implement and enforce the best practices in this document across your organization, consider the following services:
- Security Command Center: automatically check whether your clusters implement many of these best practices and check for other common misconfigurations.
- Organization Policy Service: enforce specific best practices on GKE resources in an organization, folder, or project. Specific sections in this document have links to the Google Cloud console for you to apply managed constraints for those recommendations.
Google Cloud environment design
The following sections describe security measures that you should consider when you plan and design your resources in Google Cloud. Cloud architects should use these recommendations when planning and defining Google Cloud architecture.
Best practices
Plan your Google Cloud resource structure
Recommended: implement the enterprise foundations blueprint, which is a complete foundation for your enterprise environment based on our best practices.
The architecture of your Google Cloud organizations, folders, and projects affects your security posture. Design these foundational resources in a way that enables governance and security controls at scale across your services.
Plan multi-tenant environments
Recommended: implement Google Cloud and GKE best practices for multi-tenant enterprise platforms.
Many GKE customers manage distributed teams, with separate engineering workflows and responsibilities. These multi-tenant environments must have shared infrastructure that all of your developers can use, while restricting access to components based on roles and responsibilities. The enterprise application blueprint builds on the enterprise foundations blueprint to help you to deploy internal developer platforms in multi-tenant environments.
For more information, see the following documents:
Use tags to group Google Cloud resources
Recommended: use tags to organize GKE resources for conditional policy enforcement and improved accountability across your teams.
Tags are metadata that you can attach to resources in your organizations, folders, and projects to identify business dimensions across your Google Cloud resource hierarchy. You can attach tags to GKE clusters and node pools, and then use those tags to conditionally apply organization policies, IAM policies, or firewall policies.
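As a minimal sketch, the following commands create an organization-level tag key and value with the Resource Manager CLI; the environment key, prod value, and ORGANIZATION_ID placeholder are hypothetical, and the exact parent format for tag values is an assumption here.

gcloud resource-manager tags keys create environment \
    --parent=organizations/ORGANIZATION_ID

# Assumes the namespaced parent form ORGANIZATION_ID/KEY_SHORT_NAME.
gcloud resource-manager tags values create prod \
    --parent=ORGANIZATION_ID/environment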
For more information, see the following documents:
Plan your VPC networks
Recommended: implement Google Cloud and GKE best practices for VPC network design.
Your VPC network design and the features that you use impact your network security. Plan your networks based on your Google Cloud resource hierarchy and your security objectives. For more information, see the following documents:
Design an incident response plan
Recommended: create and maintain an incident response plan that meets your security and reliability goals.
Security incidents can occur even when you implement every possible security control. An incident response plan helps you to identify potential gaps in your security controls, respond quickly and effectively to various types of incidents, and reduce downtime during an outage. For more information, see the following documents:
Google Cloud network security
The following sections provide security recommendations for your VPC networks. Network architects and network administrators should apply these recommendations to reduce the attack surface at the network level and to limit the impact of unintended network access.
Best practices
Use least-privilege firewall rules
Recommended: when you create firewall rules, use the principle of least privilege to provide access only for the required purpose. Ensure that your firewall rules don't conflict with, or override, the GKE default firewall rules when possible.
GKE creates default VPC firewall rules to enable system functionality and to enforce good security practices. If you create permissive firewall rules with a higher priority than a default firewall rule (for example, a firewall rule that allows all ingress traffic for debugging), your cluster is at risk of unintended access.
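As a sketch of least-privilege rule design, the following hypothetical rule allows only HTTPS ingress from a known corporate range to nodes tagged frontend, at a numerically higher priority value (that is, lower priority) than the GKE default rules; the network name, target tag, and source range are placeholders.

gcloud compute firewall-rules create allow-corp-https-to-frontend \
    --network=example-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:443 \
    --source-ranges=203.0.113.0/24 \
    --target-tags=frontend \
    --priority=1100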
Use Shared VPC for cross-project traffic
Recommended: use Shared VPC to let resources in multiple projects communicate with each other by using internal IP addresses.
Resources in different projects in your organization might need to communicate with each other. For example, frontend services in a GKE cluster in one project might need to communicate with backend Compute Engine instances in a different project.
For more information, see the following documents:
Use separate networks to isolate environments
Recommended: use separate Shared VPC networks for staging, test, and production environments.
Isolate your development environments from each other to reduce the impact and risk of unauthorized access or disruptive bugs. For more information, see Multiple host projects.
Immutable security settings
The following sections provide security recommendations that you can configure only when you create clusters or node pools. You can't update existing clusters or node pools to change these settings. Platform admins should apply these recommendations to new clusters and node pools.
Use least-privilege IAM node service accounts
Recommended: use a custom IAM service account for your GKE clusters and node pools instead of using the default Compute Engine service account.
GKE uses IAM service accounts that are attached to your nodes to run system tasks like logging and monitoring. At a minimum, these node service accounts must have the Kubernetes Engine Default Node Service Account (roles/container.defaultNodeServiceAccount) role on your project. By default, GKE uses the Compute Engine default service account, which is automatically created in your project, as the node service account.
If you use the Compute Engine default service account for other functions in your project or organization, the service account might have more permissions than GKE needs, which could expose you to security risks.
Best practice: Instead of using the Compute Engine default service account, create a custom service account for your nodes to use and give it only the permissions that GKE needs to run system tasks. For more information, see Configure a custom node service account. The service account that's attached to your nodes should be used only by system workloads that perform tasks like logging and monitoring. For your own workloads, provision identities using Workload Identity Federation for GKE.
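A minimal sketch of this setup with the gcloud CLI might look like the following; the service account name, PROJECT_ID, cluster, node pool, and location values are placeholders.

# Create a dedicated service account for nodes.
gcloud iam service-accounts create gke-node-sa \
    --display-name="GKE node service account"

# Grant only the role that GKE nodes need for system tasks.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:gke-node-sa@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/container.defaultNodeServiceAccount"

# Attach the custom service account to a new node pool.
gcloud container node-pools create example-pool \
    --cluster=example-cluster \
    --location=us-central1 \
    --service-account=gke-node-sa@PROJECT_ID.iam.gserviceaccount.com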
To enforce this recommendation in your organization, use the constraints/container.managed.disallowDefaultComputeServiceAccount managed Organization Policy constraint. To review this managed constraint in the Google Cloud console, go to the Policy details page.
Use a Container-Optimized OS node image
Recommended: unless you have a specific requirement to use Ubuntu or Windows, use the Container-Optimized OS node image for your nodes.
Container-Optimized OS is built, optimized, and hardened specifically for running containers. Container-Optimized OS is the only supported node image for Autopilot mode, and is the default node image for Standard mode.
For more information, see the following documents:
Node security configuration
The following sections provide security recommendations for GKE node configuration. Platform admins and security engineers should apply these recommendations to improve the integrity of your GKE nodes.
Best practices
Use Shielded GKE Nodes
Recommended: enable Shielded GKE Nodes, secure boot, and integrity monitoring in all clusters and node pools.
Shielded GKE Nodes provides verifiable identity and integrity checks that improve the security of your nodes. Shielded GKE Nodes and features like node integrity monitoring and secure boot are always enabled in Autopilot clusters. In Standard clusters, do the following:
- Don't disable Shielded GKE Nodes in your clusters.
- Enable secure boot in all of your node pools.
- Don't disable integrity monitoring in your node pools.
For more information about how to enable these features, see Using Shielded GKE Nodes.
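For example, in Standard mode you might keep Shielded GKE Nodes on at the cluster level and create a node pool with secure boot and integrity monitoring enabled, as in this sketch; the cluster, node pool, and location names are placeholders.

# Shielded GKE Nodes is on by default in new clusters; don't turn it off.
gcloud container clusters create example-cluster \
    --location=us-central1 \
    --enable-shielded-nodes

# Create a node pool with secure boot and integrity monitoring.
gcloud container node-pools create shielded-pool \
    --cluster=example-cluster \
    --location=us-central1 \
    --shielded-secure-boot \
    --shielded-integrity-monitoring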
Note: In Container-Optimized OS, secure boot doesn't change whether you can load unsigned modules. If you want to change that setting for Container-Optimized OS nodes, see Configure secure kernel module loading.
To enforce this recommendation in your organization, use the constraints/container.managed.enableShieldedNodes managed Organization Policy constraint. To review this managed constraint in the Google Cloud console, go to the Policy details page.
Disable the insecure kubelet read-only port
Recommended: disable the kubelet read-only port and switch any workloads that use port 10255 to use the more secure port 10250 instead.
The kubelet process running on nodes serves a read-only API using the insecure port 10255. Kubernetes doesn't perform any authentication or authorization checks on this port. The kubelet serves the same endpoints on the more secure, authenticated port 10250.
For more information, see Disable the kubelet read-only port in GKE clusters.
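As a hedged sketch for Standard clusters, one way to disable the port is through a node system configuration file applied to a node pool; the file, cluster, and node pool names are placeholders, and the insecureKubeletReadonlyPortEnabled field is an assumption about the current node system configuration schema, so verify it against the linked page before using it.

cat <<EOF > node-system-config.yaml
kubeletConfig:
  insecureKubeletReadonlyPortEnabled: false
EOF

gcloud container node-pools create hardened-pool \
    --cluster=example-cluster \
    --location=us-central1 \
    --system-config-from-file=node-system-config.yaml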
To enforce this recommendation in your organization, use the constraints/container.managed.disableInsecureKubeletReadOnlyPort managed Organization Policy constraint. To review this managed constraint in the Google Cloud console, go to the Policy details page.
Access control
The following sections provide recommendations for restricting unauthorized access in your cluster. Security engineers and identity and account admins should apply these recommendations to reduce your attack surface and to limit the impact of unauthorized access.
Best practices
Restrict access to cluster API discovery
Recommended: restrict access to your control plane and nodes from the internet to prevent unintended access to cluster API discovery endpoints.
By default, Kubernetes creates clusters with a permissive set of default API discovery roles. These default roles give broad access to information about a cluster's APIs to various default groups, such as system:authenticated. These default roles don't represent a meaningful level of security for GKE clusters. For example, the system:authenticated group, which can read information about APIs like CustomResources, is assigned to any authenticated user (including anyone with a Google account).
To restrict access to your cluster discovery APIs, do the following:
- Restrict access to the control plane: use only the DNS-based endpoint for control plane access. If you use IP-based endpoints, restrict access to a set of known address ranges by configuring authorized networks.
- Configure private nodes: disable the external IP addresses of your nodes, so that clients outside of your network can't access the nodes.
For more information, see About network isolation.
If you don't enable these network isolation features, treat all API discovery information (especially the schema of CustomResources, APIService definitions, and discovery information hosted by extension API servers) as publicly disclosed.
Place teams and environments in separate namespaces or clusters
Give teams least-privilege access to Kubernetes by creating separate namespaces or clusters for each team and environment. For each namespace or cluster, assign cost centers and labels for accountability and chargeback.
You can use IAM and RBAC permissions together with namespaces to restrict user interactions with cluster resources in the Google Cloud console. For more information, see Enable access and view cluster resources by namespace.
Use the principle of least privilege in access policies
Recommended: give developers only the access that they need to deploy and manage applications in their namespace, especially in production environments. When you design your access control policies, map out the tasks that your users need to do in the cluster and give them only the permissions that allow them to do those tasks.
In GKE, you can use IAM and Kubernetes role-based access control (RBAC) to give permissions on resources. These access control mechanisms work together. To reduce the complexity of managing access, do the following:
- To give access to your project or to Google Cloud resources, use IAM roles.
- To give access to Kubernetes resources in your cluster, such as namespaces, use RBAC (see the example after this list).
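For example, a namespace-scoped RBAC policy for an application team might look like the following sketch; the namespace, group, and resource lists are hypothetical and should be narrowed to what your teams actually need, and the Google Group subject assumes that Google Groups for RBAC is configured (described later in this document).

kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-deployer
  namespace: team-a
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-deployers
  namespace: team-a
subjects:
- kind: Group
  name: team-a-devs@example.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: app-deployer
  apiGroup: rbac.authorization.k8s.io
EOF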
For more information about planning and designing IAM and RBAC policies, see the following documents:
Use Workload Identity Federation for GKE to access Google Cloud APIs
Recommended: to access Google Cloud resources from your GKE workloads, use Workload Identity Federation for GKE.
Workload Identity Federation for GKE is the recommended way to authenticate to Google Cloud APIs. You can grant IAM roles on various resources to principals in your cluster, such as specific Kubernetes ServiceAccounts or Pods. Workload Identity Federation for GKE also protects sensitive metadata on your nodes and provides a more secure authentication workflow than alternatives like static token files.
Workload Identity Federation for GKE is always enabled in Autopilot clusters. In Standard clusters, enable Workload Identity Federation for GKE for all clusters and node pools. Additionally, follow these recommendations:
- If you use Google Cloud client libraries in your application code, then don't distribute Google Cloud credentials to your workloads. Code that uses client libraries automatically retrieves credentials for Workload Identity Federation for GKE.
- Use a separate namespace and ServiceAccount for every workload that needs a distinct identity. Grant IAM permissions to specific ServiceAccounts.
For more information, see Authenticate to Google Cloud APIs from GKE workloads.
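The following sketch shows the service account impersonation flavor of this flow on a Standard cluster; your setup might instead grant IAM roles directly to the Kubernetes ServiceAccount principal. The project, cluster, namespace, Kubernetes ServiceAccount, and IAM service account names are placeholders.

# Enable Workload Identity Federation for GKE on the cluster.
gcloud container clusters update example-cluster \
    --location=us-central1 \
    --workload-pool=PROJECT_ID.svc.id.goog

# Create a dedicated namespace and ServiceAccount for the workload.
kubectl create namespace payments
kubectl create serviceaccount payments-sa --namespace payments

# Let the Kubernetes ServiceAccount impersonate an IAM service account.
gcloud iam service-accounts add-iam-policy-binding app-gsa@PROJECT_ID.iam.gserviceaccount.com \
    --role=roles/iam.workloadIdentityUser \
    --member="serviceAccount:PROJECT_ID.svc.id.goog[payments/payments-sa]"

kubectl annotate serviceaccount payments-sa --namespace payments \
    iam.gke.io/gcp-service-account=app-gsa@PROJECT_ID.iam.gserviceaccount.com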
To enforce this recommendation in your organization, use the constraints/container.managed.enableWorkloadIdentityFederation managed Organization Policy constraint. To review this managed constraint in the Google Cloud console, go to the Policy details page.
Use groups to manage access
Recommended: in your access policies, give permissions to groups of users instead of to individuals.
When you manage users in groups, your identity management system and identity administrators can centrally control identities by modifying user membership in various groups. This type of management removes the need to update your RBAC or IAM policies every time that a specific user needs updated permissions.
You can specify Google Groups in your IAM or RBAC policies. For more information, see the following documents:
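For example, assuming that Google Groups for RBAC is set up with the reserved gke-security-groups group in your domain, you could create a cluster with the security group flag and then bind a role to a group, as in this sketch; the domain, group, namespace, and cluster names are placeholders.

gcloud container clusters create example-cluster \
    --location=us-central1 \
    --security-group="gke-security-groups@example.com"

kubectl create rolebinding team-a-view \
    --namespace=team-a \
    --clusterrole=view \
    --group=team-a-devs@example.com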
To enforce this recommendation in your organization, use the constraints/container.managed.enableGoogleGroupsRBAC managed Organization Policy constraint. To review this managed constraint in the Google Cloud console, go to the Policy details page.
Restrict anonymous access to cluster endpoints
Recommended: prevent anonymous requests to all cluster endpoints except for health check endpoints, in all Autopilot and Standard clusters.
By default, Kubernetes assigns the system:anonymous user and the system:unauthenticated group to anonymous requests to cluster endpoints. If your RBAC policies give this user or group additional permissions, an anonymous user might be able to compromise the security of a service or the cluster itself.
In GKE version 1.32.2-gke.1234000 and later, you can limit the set of endpoints that anonymous requests can reach to only the /healthz, /livez, and /readyz Kubernetes API server health check endpoints. Anonymous access to these health check endpoints is required to verify that a cluster is operating correctly.
To limit anonymous access to cluster endpoints, specify LIMITED for the --anonymous-authentication-config flag when you use the gcloud CLI or the GKE API to create or update Standard and Autopilot clusters. GKE rejects anonymous requests to cluster endpoints that aren't the health check endpoints during authentication. Anonymous requests don't reach the endpoints, even if your RBAC policies grant access to anonymous users and groups. Rejected requests return an HTTP status of 401.
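For example, using the flag described in this section, you might update an existing cluster as follows; the cluster and location names are placeholders.

gcloud container clusters update example-cluster \
    --location=us-central1 \
    --anonymous-authentication-config=LIMITED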
To enforce this recommendation in your organization, folder, or project by using an organization policy, create a custom constraint with the resource.anonymousAuthenticationConfig.mode condition. For more information and for an example constraint, see Restrict actions on GKE resources using custom organization policies.
Don't rely on this capability alone to secure your cluster. Implement additional security measures like the following:
GKE network security
The following sections provide recommendations to improve network security in your clusters. Network administrators and security engineers should apply these recommendations to protect workloads and infrastructure from unintended external or internal access.
Best practices
Restrict access to the control plane
Recommended: enable the DNS-based endpoint for control plane access and disable all IP-based control plane endpoints.
By default, external entities, such as clients on the internet, can reach your control plane. You can restrict who can access your control plane by configuring network isolation.
To isolate your control plane, do one of the following:
- Use only the DNS-based endpoint (recommended): enable the DNS-based endpoint for the control plane and disable internal and external IP-based endpoints. All control plane access must use the DNS-based endpoint. You can use VPC Service Controls to control who can access the DNS-based endpoint.
To enforce this recommendation in your organization, use the constraints/container.managed.enableControlPlaneDNSOnlyAccess managed Organization Policy constraint. To review this managed constraint in the Google Cloud console, go to the Policy details page.
- Disable the external IP-based endpoint: remove the external IP address of the control plane. Clients that are outside your VPC network can't use the external IP address to access the control plane.
This option works well if you use technologies like Cloud Interconnect and Cloud VPN to connect your company network to your VPC network.
- Use authorized networks with the external IP-based endpoint: restrict access to the external IP-based endpoint to only a trusted range of source IP addresses.
This option works well if you don't have existing VPN infrastructure, or if you have remote users or branch offices that access your clusters by using the public internet.
In most scenarios, use only the DNS-based endpoint for control plane access. If you have to enable the IP-based endpoint, use authorized networks to limit control plane access to the following entities:
- The IP address ranges that you specify.
- GKE nodes in the same VPC network as the cluster.
- Google-reserved IP addresses for cluster management purposes.
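If you do keep the IP-based endpoint, the following sketch restricts it to a single trusted range with authorized networks; the CIDR range, cluster, and location values are placeholders.

gcloud container clusters update example-cluster \
    --location=us-central1 \
    --enable-master-authorized-networks \
    --master-authorized-networks=203.0.113.0/28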
Isolate your nodes from the internet
By default, all GKE nodes have an external IP address that clients on the internet can reach. To remove this external IP address, enable private nodes.
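For example, assuming that your cluster version supports changing this setting in place, you might enable private nodes like this; the cluster and location names are placeholders.

gcloud container clusters update example-cluster \
    --location=us-central1 \
    --enable-private-nodes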
To enforce this recommendation in your organization, use the constraints/container.managed.enablePrivateNodes managed Organization Policy constraint. To review this managed constraint in the Google Cloud console, go to the Policy details page.
Restrict network traffic among Pods
Recommended: control Pod-to-Pod network traffic by using NetworkPolicies, a service mesh, or both.
By default, every Pod in your cluster can communicate with every other Pod. Restricting network access among services makes it much more difficult for attackers to move laterally in your cluster. Your services also gain some protection against accidental or deliberate denial-of-service incidents. Depending on your requirements, use one or both of the following methods to restrict Pod-to-Pod traffic:
- Use Cloud Service Mesh if you want features like load balancing, service authorization, throttling, quota, and metrics. A service mesh is useful if you have large numbers of distinct services that have complex interactions with each other.
- Use Kubernetes NetworkPolicies if you want a basic traffic flow control mechanism. To verify that your NetworkPolicies work as expected, configure network policy logging. A minimal NetworkPolicy example follows this list.
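As a minimal sketch, assuming that NetworkPolicy enforcement is enabled on the cluster, the following policy allows ingress to backend Pods in a namespace only from Pods labeled app: frontend; the namespace and labels are hypothetical.

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
  namespace: team-a
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
EOF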
To enforce this recommendation in your organization, use the
constraints/container.managed.enableNetworkPolicy managed Organization Policy constraint. To review this managed constraint in the Google Cloud console, go to the Policy details page.
Sensitive data protection
The following sections provide recommendations for encrypting data and protecting sensitive information like credentials. Security engineers and platform admins should apply these recommendations to reduce the risk of unintended access to critical data.
Best practices
Encrypt workload data in use
Use hardware-based memory encryption to protect data that's in use by your workloads by using Confidential GKE Nodes. You can choose a Confidential Computing technology based on your requirements. For more information, see Encrypt workload data in-use with Confidential GKE Nodes.
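For example, one Confidential Computing option is AMD SEV on N2D machines, which you might enable at cluster creation as in this sketch; the cluster name, location, and machine type are placeholders, and other technologies and machine families are available.

gcloud container clusters create confidential-cluster \
    --location=us-central1 \
    --machine-type=n2d-standard-4 \
    --enable-confidential-nodes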
Store secrets outside of your cluster
Recommended: use an external secret manager like Secret Manager to store sensitive data, such as API keys, outside of your cluster.
In Kubernetes, you can store sensitive data in Secrets in your cluster. You can use Secrets to provide confidential data to applications without including that data in the application code. However, storing this data in your cluster has risks like the following:
- Anyone who can create Pods in a namespace can read the data of any Secret in that namespace.
- Anyone with RBAC or IAM access to read all Kubernetes API objects can read Secrets.
Because of these risks, create Secrets in your cluster only when you can't provide that data to your workloads in any other way. We recommend the following methods, in order of preference, to store and access your sensitive data:
- Secret Manager client libraries: programmatically access secrets from your application code by using the Secret Manager API with Workload Identity Federation for GKE. For more information, see Access secrets stored outside GKE clusters using client libraries.
- Secret Manager data as mounted volumes: provide sensitive data to your Pods as mounted volumes by using the Secret Manager add-on for GKE. This method is useful if you can't modify your application code to use the Secret Manager client libraries. For more information, see Use Secret Manager add-on with Google Kubernetes Engine.
- Third-party secret management tools: third-party tools like HashiCorp Vault provide secret management capabilities for Kubernetes workloads. These tools require more initial configuration than Secret Manager, but are a more secure option than creating Secrets in the cluster. To configure a third-party tool for secret management, see the provider's documentation. Additionally, consider the following recommendations:
- If the third-party tool runs in a cluster, use a different cluster than the cluster that runs your workloads.
- Use Cloud Storage or Spanner to store the tool's data.
- Use an internal passthrough Network Load Balancer to expose the third-party secret management tool to Pods that run in your VPC network.
- Use Kubernetes Secrets (not recommended): if none of the preceding options is suitable for your use case, you can store the data as Kubernetes Secrets. Google Cloud encrypts data at the storage layer by default. This default storage-layer encryption includes the database that stores the state of your cluster, which is based on either etcd or Spanner. Additionally, you can encrypt these Secrets at the application layer with a key that you manage. For more information, see Encrypt secrets at the application layer.
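For the Secret Manager options, the following sketch creates a secret and grants read access to the IAM service account that a workload uses through Workload Identity Federation for GKE; the secret name, value, and service account are placeholders.

gcloud secrets create payments-api-key --replication-policy=automatic

echo -n "example-value" | gcloud secrets versions add payments-api-key --data-file=-

gcloud secrets add-iam-policy-binding payments-api-key \
    --member="serviceAccount:app-gsa@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/secretmanager.secretAccessor"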
Workload security
The following sections provide recommendations for improving the security of your cluster against workload issues. Security engineers and platform admins should apply these recommendations to improve the protection of GKE infrastructure from workloads.
Best practices
Isolate workloads by using GKE Sandbox
Recommended: use GKE Sandbox to prevent malicious code from affecting the host kernel on your cluster nodes.
You can run containers in a sandboxed environment to mitigate most container escape attacks, also called local privilege escalation attacks. As described in GKE security bulletins, this type of attack lets an attacker gain access to the host VM of the container. The attacker can use this host access to access other containers on the same VM. GKE Sandbox can help to limit the impact of these attacks.
Use GKE Sandbox in scenarios like the following:
- You have workloads that run untrusted code.
- You want to limit the impact if an attacker compromises a container in the workload.
For more information, see Harden workload isolation with GKE Sandbox.
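In Standard mode, a sketch of this setup is a gVisor node pool plus a Pod that requests the gvisor RuntimeClass; the node pool, cluster, location, Pod, and image names are placeholders.

gcloud container node-pools create sandbox-pool \
    --cluster=example-cluster \
    --location=us-central1 \
    --sandbox type=gvisor

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: untrusted-workload
spec:
  runtimeClassName: gvisor
  containers:
  - name: app
    image: us-docker.pkg.dev/PROJECT_ID/example-repo/untrusted-app:latest
EOF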
Restrict the ability for workloads to self-modify
Recommended: use admission controllers to prevent workloads from self-modifying, or to prevent the modification of risky workload attributes like ServiceAccounts.
Certain Kubernetes workloads, especially system workloads, have permission to self-modify. For example, some workloads vertically autoscale themselves. While convenient, self-modification can allow an attacker who has already compromised a node to escalate further in the cluster. For example, an attacker could have a workload in a namespace change itself to run as a more privileged ServiceAccount in the same namespace.
Unless necessary, don't give Pods permission to self-modify. If some Pods must self-modify, use Policy Controller to limit what the workloads can change. For example, you can use the NoUpdateServiceAccount constraint template to prevent Pods from changing their ServiceAccount. When you create a policy, exclude any cluster management components from your constraints, like in the following example:
parameters:
  allowedGroups:
  - system:masters
  allowedUsers:
  - system:addon-manager

Policy-based enforcement
The following sections provide recommendations for using policies to enforce security constraints across multiple resources. Identity and account admins and security engineers should apply these recommendations to maintain the compliance of clusters and workloads with organizational security requirements.
Best practices
Enforce policies across the Google Cloud resource hierarchy
Recommended: to enforce security practices in your organization, folder, or project, use Organization Policy Service.
With Organization Policy, you can centrally define constraints and enforce them at various levels of your resource hierarchy. Various Google Cloud products publish managed constraints that let you apply best practice recommendations for that product. For example, GKE publishes managed constraints for many of the best practices in this document.
For more information about how to enable Organization Policy, see Creating and managing organization policies.
Enforce policies during workload admission
Recommended: use an admission controller like Policy Controller or the PodSecurity admission controller to review incoming API requests and enforce policies on those requests.
Admission controllers intercept authenticated, authorized requests to the Kubernetes API to perform validation or mutation tasks before allowing a resource to persist in the API.
You can use the following methods for admission control in GKE clusters:
- Policy Controller: control workload admission at scale across multiple GKE clusters.
- PodSecurity admission controller: enforce the Kubernetes Pod Security Standards by applying predefined policies to entire clusters or to specific namespaces. A minimal example follows this list.
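For example, to apply the restricted Pod Security Standard to a single namespace with the built-in PodSecurity admission controller, you can label the namespace as in this sketch; the namespace name is a placeholder.

kubectl label namespace team-a \
    pod-security.kubernetes.io/enforce=restricted \
    pod-security.kubernetes.io/warn=restricted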
Cluster management
The following sections provide recommendations for managing your clusters over time, such as upgrading, monitoring, and configuring logs. Security engineers, platform admins, and SREs should use these recommendations to maintain the security posture of the GKE platform.
Best practices
Upgrade your GKE infrastructure regularly
Recommended: keep your GKE version up to date to access new security features and apply security patches. Use release channels, accelerated patch auto-upgrades, and automatic node upgrades.
Kubernetes and GKE frequently release new patch versions that include security improvements and vulnerability fixes. For all clusters, GKE automatically upgrades the control plane to more stable minor versions and patch versions.
To ensure that your GKE cluster runs an up-to-date version, do the following:
- Enroll your clusters in a release channel. Autopilot clusters are always enrolled in a release channel.
- For clusters that are in a release channel, enable accelerated patch auto-upgrades to get security patch versions as soon as they're available in your release channel.
- For Standard clusters that aren't in a release channel, enable automatic node upgrades. Node auto-upgrade is enabled by default for clusters created using the Google Cloud console since June 2019, and for clusters created using the GKE API starting on November 11, 2019.
- If you use maintenance policies, use a maintenance window to let GKE auto-upgrade your nodes at least once a month.
- For node pools that don't use node auto-upgrades, upgrade the node pools at least once a month on your own schedule.
- Track the GKE security bulletins and the GKE release notes for information about security patches.
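A sketch of these settings with the gcloud CLI might look like the following; the cluster, node pool, location, and maintenance window values are placeholders.

# Enroll the cluster in a release channel.
gcloud container clusters update example-cluster \
    --location=us-central1 \
    --release-channel=regular

# Ensure node auto-upgrade is on for a node pool.
gcloud container node-pools update example-pool \
    --cluster=example-cluster \
    --location=us-central1 \
    --enable-autoupgrade

# Define a recurring weekend maintenance window.
gcloud container clusters update example-cluster \
    --location=us-central1 \
    --maintenance-window-start=2026-01-03T04:00:00Z \
    --maintenance-window-end=2026-01-03T08:00:00Z \
    --maintenance-window-recurrence="FREQ=WEEKLY;BYDAY=SA,SU"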
Enable security bulletin notifications
Recommended: configure notifications for new security bulletins that affect your cluster.
When security bulletins are available that are relevant to your cluster, GKE publishes notifications about those events as messages to Pub/Sub topics that you configure. You can receive these notifications on a Pub/Sub subscription, integrate with third-party services, and receive notifications in Cloud Logging.
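A hedged sketch of the Pub/Sub configuration follows; the topic, cluster, and project names are placeholders, and the filter value shown assumes the SecurityBulletinEvent event type is accepted in the notification filter.

gcloud pubsub topics create gke-security-bulletins

gcloud container clusters update example-cluster \
    --location=us-central1 \
    --notification-config="pubsub=ENABLED,pubsub-topic=projects/PROJECT_ID/topics/gke-security-bulletins,filter=SecurityBulletinEvent"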
To enforce this recommendation in your organization, use the constraints/container.managed.enableSecurityBulletinNotifications managed Organization Policy constraint. To review this managed constraint in the Google Cloud console, go to the Policy details page.
Configure log collection
Recommended: to reduce operational overhead and to maintain a consolidated view of your logs, implement a consistent logging strategy across your clusters. Don't disable log collection in your Standard clusters.
GKE clusters send specific logs to Google Cloud Observability. You can optionally configure the collection of additional types of logs. In addition to system and workload logs, all GKE clusters send the following audit logs to Logging:
- Kubernetes audit logs: a chronological record of calls that have been made to the Kubernetes API server. Kubernetes audit log entries are useful for investigating suspicious API requests, for collecting statistics, or for creating monitoring alerts for unwanted API calls.
- GKE audit logs: a record of administrative and access activities for the GKE API.
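For example, to keep system and workload log collection enabled on a Standard cluster, you might set the logging components explicitly; the cluster and location names are placeholders.

gcloud container clusters update example-cluster \
    --location=us-central1 \
    --logging=SYSTEM,WORKLOAD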
For more information, see the following documents:
To enforce this recommendation in your organization, use the constraints/container.managed.enableCloudLogging managed Organization Policy constraint. To review this managed constraint in the Google Cloud console, go to the Policy details page.
Monitor your resources for security issues
Use the GKE security posture dashboard and Security Command Center to monitor your clusters and workloads for potential issues. You can use these services to check for active vulnerabilities, threats, and security bulletins that affect your GKE infrastructure.
Default security configurations
The following sections describe options that are configured by default in new clusters to mitigate specific security concerns, like vulnerabilities or risks. Security engineers and platform admins should validate that existing clusters use these settings.
Best practices
Leave legacy client authentication methods disabled
Recommended: disable legacy API server authentication methods like static certificates and passwords.
There are several methods of authenticating to the Kubernetes API server. In GKE, the supported methods are service account bearer tokens, OAuth tokens, and X.509 client certificates. The gcloud CLI uses OAuth tokens to authenticate users for GKE.
Legacy authentication methods like static passwords are disabled, because these methods increase the attack surface for cluster compromises. In Autopilot clusters, you can't enable or use these authentication methods.
Use one of the following methods to authenticate to the Kubernetes API server:
- Users: use the gcloud CLI to let GKE authenticate users, generate OAuth access tokens for the cluster, and keep the tokens up-to-date.
- Applications: use Workload Identity Federation to let applications in Google Cloud or in other environments authenticate to your cluster.
For more information about how to authenticate and how to disable legacy authentication methods, see Authenticate to the Kubernetes API server.
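For example, you might create a cluster without issuing a legacy client certificate and then authenticate through OAuth-based credentials from the gcloud CLI, as in this sketch; the cluster and location names are placeholders.

gcloud container clusters create example-cluster \
    --location=us-central1 \
    --no-issue-client-certificate

gcloud container clusters get-credentials example-cluster \
    --location=us-central1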
To enforce this recommendation in your organization, use the constraints/container.managed.disableLegacyClientCertificateIssuance managed Organization Policy constraint. To review this managed constraint in the Google Cloud console, go to the Policy details page.
Leave ABAC disabled
Recommended: use IAM and RBAC to control access in GKE. Don't enable attribute-based access control (ABAC).
ABAC is a legacy authorization method that's disabled by default in all GKE clusters, and can't be enabled in Autopilot clusters.
To enforce this recommendation in your organization, use the constraints/container.managed.disableABAC managed Organization Policy constraint. To review this managed constraint in the Google Cloud console, go to the Policy details page.
Leave the DenyServiceExternalIPs admission controller enabled
Recommended: don't disable the DenyServiceExternalIPs admission controller.
This admission controller blocks Services from using ExternalIPs and mitigates GCP-2020-015. This admission controller is enabled by default in clusters that were created on GKE version 1.21 and later. For clusters that were originally created on an earlier GKE version, enable the admission controller:
gcloud container clusters update CLUSTER_NAME \
    --location=LOCATION \
    --no-enable-service-externalips

To enforce this recommendation in your organization, use the constraints/container.managed.denyServiceExternalIPs managed Organization Policy constraint. To review this managed constraint in the Google Cloud console, go to the Policy details page.
What's next
- Read the GKE security overview.
- Review the GKE shared responsibility model.
- Learn more about access control in GKE.
- Read the GKE network overview.
- Read the GKE multi-tenancy overview.