Configure workload separation in GKE

This page shows you how to tell Google Kubernetes Engine (GKE) to schedule your Pods together, separately, or in specific locations.

Workload separation lets you use taints and tolerations to tell GKE to separate Pods onto different nodes, to place Pods on nodes that meet specific criteria, or to schedule specific workloads together. What you need to do to configure workload separation depends on your GKE cluster configuration. The following table describes the differences:

Workload separation configuration

Autopilot and Standard with node auto-provisioning

Add a toleration for a specific key:value pair to your Pod specification, and select that key:value pair using a nodeSelector. GKE creates nodes, applies the corresponding node taint, and schedules the Pod on the node.

For instructions, refer to Separate workloads in Autopilot clusters on this page.

Standard without node auto-provisioning

  1. Create a node pool with a node taint and a node label.
  2. Add a toleration for that taint to the Pod specification.

For instructions, refer to Isolate your workloads in dedicated node pools.

Caution: With this method, if existing tainted nodes don't have enough resources to support a Pod with a toleration, the Pod remains in the Pending state.
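
As a sketch of the Standard approach without node auto-provisioning, the following command creates a node pool that carries both the taint and the matching label. The pool name, cluster name, and the group=jobs key-value pair are illustrative placeholders:

gcloud container node-pools create POOL_NAME \
    --cluster=CLUSTER_NAME \
    --node-taints=group=jobs:NoSchedule \
    --node-labels=group=jobs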

This guide uses an example scenario in which you have two workloads, a batch job and a web server, that you want to separate from each other.

When to use workload separation in GKE

Workload separation is useful when you have workloads that perform different roles and shouldn't run on the same underlying machines. Some example scenarios include the following:

  • You have a batch coordinator workload that creates Jobs that you want to keep separate.
  • You run a game server with a matchmaking workload that you want to separate from session Pods.
  • You want to separate parts of your stack from each other, such as separating a server from a database.
  • You want to separate some workloads for compliance or policy reasons.

Warning: Workload separation should never be used as a primary security boundary. It is not a method of isolating untrusted workloads, and doesn't mitigate all escalation paths. Workload separation is not intended for use as a defense mechanism. To learn about the risks of using Kubernetes scheduling as an isolation method, see Avoiding privilege escalation attacks.

Pricing

In Autopilot clusters, you're billed for the resources that your Pods request while running. For details, refer to Autopilot pricing. Pods that use workload separation have higher minimum resource requests enforced than regular Pods.

In Standard clusters, you're billed based on the hardware configuration and size of each node, regardless of whether Pods are running on the nodes. For details, refer to Standard pricing.

Before you begin

Before you start, make sure that you have performed the following tasks:

  • Enable the Google Kubernetes Engine API.
  • If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running the gcloud components update command. Earlier gcloud CLI versions might not support running the commands in this document. Note: For existing gcloud CLI installations, make sure to set the compute/region property. If you primarily use zonal clusters, set the compute/zone property instead. By setting a default location, you can avoid errors in the gcloud CLI like the following: One of [--zone, --region] must be supplied: Please specify location. You might need to specify the location in certain commands if the location of your cluster differs from the default that you set.
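
For example, one way to set a default location (the us-central1 values are illustrative):

# Set a default region for regional clusters.
gcloud config set compute/region us-central1

# Or, if you primarily use zonal clusters, set a default zone instead.
gcloud config set compute/zone us-central1-a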

Separate workloads in Autopilot clusters

To separate workloads from each other, add a toleration and a node selector to each workload specification that defines the node on which the workload should run. This method also works on Standard clusters that have node auto-provisioning enabled.

  1. Save the following manifest as web-server.yaml:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-server
    spec:
      replicas: 6
      selector:
        matchLabels:
          pod: nginx-pod
      template:
        metadata:
          labels:
            pod: nginx-pod
        spec:
          tolerations:
          - key: group
            operator: Equal
            value: "servers"
            effect: NoSchedule
          nodeSelector:
            group: "servers"
          containers:
          - name: web-server
            image: nginx

    This manifest includes the following fields:

    • spec.tolerations: GKE can place the Pods on nodes that have the group=servers:NoSchedule taint. GKE can't schedule Pods that don't have this toleration on those nodes.
    • spec.nodeSelector: GKE must place the Pods on nodes that have the group: servers node label.

    GKE adds the corresponding labels and taints to nodes that GKE automatically provisions to run these Pods.

  2. Save the following manifest as batch-job.yaml:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: batch-job
    spec:
      completions: 5
      backoffLimit: 3
      ttlSecondsAfterFinished: 120
      template:
        metadata:
          labels:
            pod: pi-pod
        spec:
          restartPolicy: Never
          tolerations:
          - key: group
            operator: Equal
            value: "jobs"
            effect: NoSchedule
          nodeSelector:
            group: "jobs"
          containers:
          - name: pi
            image: perl
            command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]

    This manifest includes the following fields:

    • spec.tolerations: GKE can place the Pods on nodes that have the group=jobs:NoSchedule taint. GKE can't schedule Pods that don't have this toleration on those nodes.
    • spec.nodeSelector: GKE must place the Pods on nodes that have the group: jobs node label.

    GKE adds the corresponding labels and taints to nodes that GKE automatically provisions to run these Pods.

  3. Deploy the workloads:

    kubectl apply -f batch-job.yaml -f web-server.yaml

When you deploy the workloads, GKE does the following for each workload:

  1. GKE looks for existing nodes that have the corresponding node taint and node label specified in the manifest. If nodes exist and have available resources, GKE schedules the workload on the node.
  2. If GKE doesn't find an eligible existing node to schedule the workload, GKE creates a new node and applies the corresponding node taint and node label based on the manifest. GKE places the Pod on the new node.

The presence of the NoSchedule effect in the node taint ensures that workloads without a toleration don't get placed on the node.
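
As a quick check, you can inspect the taints on any node that GKE provisioned; NODE_NAME is a placeholder for a node name from your cluster:

kubectl describe node NODE_NAME | grep Taints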

Verify the workload separation

List your Pods to find the names of the nodes:

kubectl get pods --output=wide

The output is similar to the following:

NAME                          READY   ...   NODE
batch-job-28j9h               0/1     ...   gk3-sandbox-autopilot-nap-1hzelof0-ed737889-2m59
batch-job-78rcn               0/1     ...   gk3-sandbox-autopilot-nap-1hzelof0-ed737889-2m59
batch-job-gg4x2               0/1     ...   gk3-sandbox-autopilot-nap-1hzelof0-ed737889-2m59
batch-job-qgsxh               0/1     ...   gk3-sandbox-autopilot-nap-1hzelof0-ed737889-2m59
batch-job-v4ksf               0/1     ...   gk3-sandbox-autopilot-nap-1hzelof0-ed737889-2m59
web-server-6bb8cd79b5-dw4ds   1/1     ...   gk3-sandbox-autopilot-nap-1eurxgsq-f2f3c272-n6xm
web-server-6bb8cd79b5-g5ld6   1/1     ...   gk3-sandbox-autopilot-nap-1eurxgsq-9f447e18-275z
web-server-6bb8cd79b5-jcdx5   1/1     ...   gk3-sandbox-autopilot-nap-1eurxgsq-9f447e18-275z
web-server-6bb8cd79b5-pxdzw   1/1     ...   gk3-sandbox-autopilot-nap-1eurxgsq-ccd22fd9-qtfq
web-server-6bb8cd79b5-s66rw   1/1     ...   gk3-sandbox-autopilot-nap-1eurxgsq-ccd22fd9-qtfq
web-server-6bb8cd79b5-zq8hh   1/1     ...   gk3-sandbox-autopilot-nap-1eurxgsq-f2f3c272-n6xm

This output shows that the batch-job Pods and the web-server Pods always run on different nodes.
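
You can also confirm that the automatically provisioned nodes carry the expected label and taint. This sketch assumes the group=servers label from the earlier manifest:

kubectl get nodes --selector=group=servers \
    -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'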

Limitations of workload separation with taints and tolerations

You can't use the following key prefixes for workload separation:

  • GKE- and Kubernetes-specific keys
  • *cloud.google.com/
  • *kubelet.kubernetes.io/
  • *node.kubernetes.io/

You should use your own, unique keys for workload separation.
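
For example, a custom key such as dedicated-group (a name chosen here for illustration) stays clear of the reserved prefixes:

tolerations:
- key: dedicated-group
  operator: Equal
  value: "batch"
  effect: NoSchedule
nodeSelector:
  dedicated-group: "batch"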

Separate workloads in Standard clusters without node auto-provisioning

Separating workloads in Standard clusters without node auto-provisioning requires that you manually create node pools with the appropriate node taints and node labels to accommodate your workloads. For instructions, refer to Isolate your workloads in dedicated node pools. Only use this approach if you have specific requirements that require you to manually manage your node pools.

Create a cluster with node taints

When you create a cluster in GKE, you can assign node taints to the cluster. This assigns the taints to all nodes created with the cluster.

If you create a node pool, the node pool does not inherit taints from the cluster. If you want taints on the node pool, you must use the --node-taints flag when you create the node pool.

If you create a Standard cluster with node taints that have the NoSchedule effect or the NoExecute effect, GKE can't schedule some GKE managed components, such as kube-dns or metrics-server, on the default node pool that GKE creates when you create the cluster. GKE can't schedule these components because they don't have the corresponding tolerations for your node taints. You must add a new node pool that satisfies one of the following conditions:

  • No taints
  • A taint that has the PreferNoSchedule effect
  • The components.gke.io/gke-managed-components=true:NoSchedule taint

Any of these conditions allow GKE to schedule GKE managed components in the new node pool.
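
For example, a node pool that satisfies the last condition could be created as follows; the pool name is a placeholder:

gcloud container node-pools create MANAGED_COMPONENTS_POOL \
    --cluster=CLUSTER_NAME \
    --node-taints=components.gke.io/gke-managed-components=true:NoSchedule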

For instructions, refer to Isolate workloads on dedicated nodes.

gcloud

Create a cluster with node taints:

gcloud container clusters create CLUSTER_NAME \
    --node-taints KEY=VALUE:EFFECT

Replace the following:

  • CLUSTER_NAME: the name of the new cluster.
  • EFFECT: one of the following effects: PreferNoSchedule, NoSchedule, or NoExecute.
  • KEY=VALUE: a key-value pair associated with the EFFECT.
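
For example, the following invocation (with an illustrative cluster name and taint) creates a cluster whose default nodes carry a group=servers:NoSchedule taint:

gcloud container clusters create example-cluster \
    --node-taints group=servers:NoSchedule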

Console

Create a cluster with node taints:

  1. In the Google Cloud console, go to the Create a Kubernetes cluster page.

  2. Configure your cluster as desired.

  3. From the navigation pane, under Node Pools, expand the node pool you want to modify, and then click Metadata.

  4. In the Node taints section, click Add Taint.

  5. In the Effect drop-down list, select the desired effect.

  6. Enter the desired key-value pair in the Key and Value fields.

  7. Click Create.

API

When you use the API to create a cluster, include the nodeTaints field under nodeConfig:

POST https://container.googleapis.com/v1/projects/PROJECT_ID/zones/COMPUTE_ZONE/clusters

{
  'cluster': {
    'name': 'example-cluster',
    'nodeConfig': {
      'nodeTaints': [
        {
          'key': 'special',
          'value': 'gpu',
          'effect': 'PreferNoSchedule'
        }
      ]
      ...
    }
    ...
  }
}
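
One way to send this request, assuming you save the body above as strict JSON (double quotes) in a file named cluster.json and authenticate with your gcloud credentials:

curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d @cluster.json \
    "https://container.googleapis.com/v1/projects/PROJECT_ID/zones/COMPUTE_ZONE/clusters"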

Remove all taints from a node pool

To remove all taints from a node pool, run the following command:

gcloud beta container node-pools update POOL_NAME \
    --node-taints="" \
    --cluster=CLUSTER_NAME

Replace the following:

  • POOL_NAME: the name of the node pool to change.
  • CLUSTER_NAME: the name of the cluster of the node pool.
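
To confirm that the taints are gone, you can describe the node pool; the format expression shown is one way to print only the taint configuration:

gcloud container node-pools describe POOL_NAME \
    --cluster=CLUSTER_NAME \
    --format="yaml(config.taints)"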
