Isolate your workloads in dedicated node pools

This page shows you how to reduce the risk of privilege escalation attacks in your cluster by telling Google Kubernetes Engine (GKE) to schedule your workloads on a separate, dedicated node pool away from privileged GKE-managed workloads. You should use this approach only if you can't use GKE Sandbox, which is the recommended approach for node isolation and also provides other hardening benefits for your workloads.

This page is for Security specialists who require a layer of isolation on workloads but can't use GKE Sandbox. To learn more about common roles and example tasks that we reference in Google Cloud content, see Common GKE user roles and tasks.

This page applies to Standard clusters without node auto-provisioning. To separate workloads in Autopilot clusters and in Standard clusters with node auto-provisioning enabled, refer to Configure workload separation in GKE.

Overview

GKE clusters use privileged GKE-managed workloads to enable specific cluster functionality and features, such as metrics gathering. These workloads are given special permissions to run correctly in the cluster.

Workloads that you deploy to your nodes might be compromised by a malicious entity. Running these workloads alongside privileged GKE-managed workloads means that an attacker who breaks out of a compromised container can use the credentials of the privileged workload on the node to escalate privileges in your cluster.

Prevent container breakouts

Your primary defense should be your applications. GKE has multiple features that you can use to harden your clusters and Pods. In most cases, we strongly recommend using GKE Sandbox to isolate your workloads. GKE Sandbox is based on the gVisor open source project and implements the Linux kernel API in userspace. Each Pod runs on a dedicated kernel that sandboxes applications to prevent access to privileged system calls in the host kernel. Workloads running in GKE Sandbox are automatically scheduled on separate nodes, isolated from other workloads.
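
As a minimal sketch of the GKE Sandbox alternative, you create a node pool with the gVisor sandbox type and then set the gvisor RuntimeClass in your Pod specification. The pool name sandbox-pool and the Pod name sandboxed-app are hypothetical; CLUSTER_NAME follows the placeholder convention used elsewhere on this page.

gcloud container node-pools create sandbox-pool \
    --cluster=CLUSTER_NAME \
    --sandbox type=gvisor

apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app   # hypothetical Pod name
spec:
  runtimeClassName: gvisor   # runs the Pod in GKE Sandbox on the sandbox-enabled nodes
  containers:
  - name: app
    image: ubuntu
    command: ["/bin/sleep", "inf"]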

You should also follow the recommendations in Harden your cluster's security.

Avoid privilege escalation attacks

If you can't use GKE Sandbox, and you want an extra layer of isolation in addition to other hardening measures, you can use node taints and node affinity to schedule your workloads on a dedicated node pool. A node taint tells GKE to avoid scheduling workloads that don't have a corresponding toleration, such as GKE-managed workloads, on those nodes. The node affinity on your own workloads tells GKE to schedule your Pods on the dedicated nodes.
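
To see which taints already exist on your nodes before you add your own, you can list them with kubectl. This custom-columns query is an illustrative sketch, not a required step in the workflow on this page:

kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints'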

Caution: Node isolation is an advanced defense-in-depth mechanism that you should only use alongside other isolation features, such as minimally privileged containers and service accounts. Node isolation might not cover all escalation paths and should never be used as a primary security boundary. We don't recommend this approach unless you can't use GKE Sandbox.

Limitations of node isolation

  • Attackers can still initiate Denial-of-Service (DoS) attacks from the compromised node.
  • Compromised nodes can still read many resources, including all Pods and namespaces in the cluster.
  • Compromised nodes can access Secrets and credentials used by every Pod running on that node.
  • Using a separate node pool to isolate your workloads can impact your cost efficiency, autoscaling, and resource utilization.
  • Compromised nodes can still bypass egress network policies.
  • Some GKE-managed workloads must run on every node in your cluster, and are configured to tolerate all taints.
  • If you deploy DaemonSets that have elevated permissions and can tolerate any taint, those Pods might be a pathway for privilege escalation from a compromised node. You can audit DaemonSet tolerations with the command after this list.
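
One option for auditing DaemonSet tolerations is the following kubectl query, shown here as an illustrative sketch. A toleration with an empty key and operator: Exists tolerates every taint:

kubectl get daemonsets --all-namespaces \
    -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,TOLERATIONS:.spec.template.spec.tolerations'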

How node isolation works

To implement node isolation for your workloads, you must do the following:

  1. Taint and label a node pool for your workloads.
  2. Update your workloads with the corresponding toleration and node affinity rule.

This guide assumes that you start with one node pool in your cluster. Using node affinity in addition to node taints isn't mandatory, but we recommend it because you benefit from greater control over scheduling.

Before you begin

Before you start, make sure that you have performed the following tasks:

  • Choose a specific name for the node taint and the node label that you want to use for the dedicated node pools.

Taint and label a node pool for your workloads

Best practice: To prevent the kubelet from modifying node labels that you use for workload isolation, prefix your label keys with node-restriction.kubernetes.io/.

Create a new node pool for your workloads and apply a node taint and a node label. When you apply a taint or a label at the node pool level, any new nodes, such as those created by autoscaling, automatically get the specified taints and labels.

You can also add node taints and node labels to existing node pools. If you use the NoExecute effect, GKE evicts any Pods running on those nodes that don't have a toleration for the new taint.
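
For example, assuming that your gcloud CLI version supports the --node-taints and --node-labels flags on node-pools update, a command like the following adds the taint and label to an existing node pool. Be aware that these flags typically replace the node pool's existing taints and labels rather than appending to them:

gcloud container node-pools update POOL_NAME \
    --cluster=CLUSTER_NAME \
    --node-taints=TAINT_KEY=TAINT_VALUE:TAINT_EFFECT \
    --node-labels=node-restriction.kubernetes.io/LABEL_KEY=LABEL_VALUE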

For workload isolation, always use the node-restriction.kubernetes.io/ prefix for your node labels and for the corresponding selectors in your Pod manifests. This prefix prevents an attacker from using the node's credentials to set or modify the labels that use this prefix. For more information, see Node isolation/restriction in the Kubernetes documentation.

To add a taint and a label to a new node pool, run the following command:

gcloud container node-pools create POOL_NAME \
    --cluster=CLUSTER_NAME \
    --node-taints=TAINT_KEY=TAINT_VALUE:TAINT_EFFECT \
    --node-labels=node-restriction.kubernetes.io/LABEL_KEY=LABEL_VALUE

Replace the following:

  • POOL_NAME: the name of the new node pool for your workloads.
  • CLUSTER_NAME: the name of your GKE cluster.
  • TAINT_KEY=TAINT_VALUE: a key-value pair associated with a scheduling TAINT_EFFECT. For example, workloadType=untrusted.
  • TAINT_EFFECT: one of the following effect values: NoSchedule, PreferNoSchedule, or NoExecute. NoExecute provides a better eviction guarantee than NoSchedule.
  • node-restriction.kubernetes.io/LABEL_KEY=LABEL_VALUE: key-value pairs for the node labels, which correspond to the selectors that you specify in your workload manifests. The node-restriction.kubernetes.io/ prefix prevents the node credentials from being used to set these key-value pairs on nodes.
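
For example, with the workloadType=untrusted taint and label used on this page, and hypothetical pool and cluster names, the command might look like the following:

gcloud container node-pools create untrusted-pool \
    --cluster=my-cluster \
    --node-taints=workloadType=untrusted:NoExecute \
    --node-labels=node-restriction.kubernetes.io/workloadType=untrusted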

Add a toleration and a node affinity rule to your workloads

After you taint the dedicated node pool, no workloads can schedule on it unless they have a toleration corresponding to the taint that you added. Add the toleration to the specification for your workloads to let those Pods schedule on your tainted node pool.

If you labeled the dedicated node pool, you can also add a node affinity rule to tell GKE to only schedule your workloads on that node pool.

The following example adds a toleration for your taint (such as workloadType=untrusted:NoExecute) and a node affinity rule for your node label (such as workloadType=untrusted).

kind: Deployment
apiVersion: apps/v1
metadata:
  name: my-app
  namespace: default
  labels:
    app: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      tolerations:
      - key: TAINT_KEY
        operator: Equal
        value: TAINT_VALUE
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-restriction.kubernetes.io/LABEL_KEY
                operator: In
                values:
                - "LABEL_VALUE"
      containers:
      - name: sleep
        image: ubuntu
        command: ["/bin/sleep", "inf"]

Replace the following:

  • TAINT_KEY: the taint key that you applied to your dedicated node pool.
  • TAINT_VALUE: the taint value that you applied to your dedicated node pool.
  • LABEL_KEY: the node label key that you applied to your dedicated node pool.
  • LABEL_VALUE: the node label value that you applied to your dedicated node pool.
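
As an illustration, with the workloadType=untrusted taint and label from earlier on this page, the relevant part of the Pod template would look like this sketch:

      tolerations:
      - key: workloadType
        operator: Equal
        value: untrusted
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-restriction.kubernetes.io/workloadType
                operator: In
                values:
                - "untrusted"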

When you update your Deployment with kubectl apply, GKE recreates the affected Pods. The node affinity rule forces the Pods onto the dedicated node pool that you created. The toleration allows only those Pods to be placed on the nodes.
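
For example, assuming that you saved the manifest to a hypothetical file named my-app.yaml:

kubectl apply -f my-app.yaml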

Verify that the separation works

To verify that the scheduling works correctly, run the following command and check whether your workloads are on the dedicated node pool:

kubectl get pods -o=wide
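
The NODE column in the output shows where each Pod is running. To cross-check that those nodes belong to the dedicated pool, you can list the nodes that carry your label, assuming the label key and value that you chose earlier:

kubectl get nodes -l node-restriction.kubernetes.io/LABEL_KEY=LABEL_VALUE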

Recommendations and best practices

After setting up node isolation, we recommend that you do the following:

  • Restrict specific node pools to GKE-managed workloads only by adding the components.gke.io/gke-managed-components taint. Adding this taint prevents your own Pods from scheduling on those nodes, improving the isolation.
  • When creating new node pools, prevent most GKE-managed workloads from running on those nodes by adding your own taint to those node pools.
  • Whenever you deploy new workloads to your cluster, such as when installing third-party tooling, audit the permissions that the Pods require. When possible, avoid deploying workloads that use elevated permissions to shared nodes. A quick audit sketch follows this list.
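
One way to spot-check for elevated permissions is to list which containers request privileged mode. This jsonpath query is an illustrative sketch and only covers the privileged flag, not other capabilities or host namespaces:

kubectl get pods --all-namespaces \
    -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.containers[*].securityContext.privileged}{"\n"}{end}'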

What's next
