Place GKE Pods in specific zones
This page shows you how to tell Google Kubernetes Engine (GKE) to run your Pods on nodes in specific Google Cloud zones using zonal topology. This type of placement is useful in situations such as the following:
- Pods must access data that's stored in a zonal Compute Engine persistent disk.
- Pods must run alongside other zonal resources such as Cloud SQL instances.
You can also use zonal placement with topology-aware traffic routing to reduce latency between clients and workloads. For details about topology-aware traffic routing, see Topology aware routing.
Using zonal topology to control Pod placement is an advanced Kubernetes mechanism that you should only use if your situation requires that Pods run in specific zones. In most production environments, we recommend that you use regional resources, which is the GKE default, when possible.
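Topology-aware routing is configured on a Service. As a minimal sketch (the Service name and selector here are hypothetical placeholders, not names from this page), a Service can opt in to keeping traffic within the client's zone:

```yaml
# Hypothetical Service that opts in to topology-aware routing.
apiVersion: v1
kind: Service
metadata:
  name: my-zonal-service  # hypothetical name
  annotations:
    # Lets Kubernetes route traffic to endpoints in the client's
    # zone when enough endpoints are available there.
    service.kubernetes.io/topology-mode: Auto
spec:
  selector:
    app: my-zonal-app  # hypothetical label
  ports:
  - port: 80
```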
Zonal placement methods
Zonal topology is built into Kubernetes with the `topology.kubernetes.io/zone: ZONE` node label. To tell GKE to place a Pod in a specific zone, use one of the following methods:
- nodeAffinity: Specify a nodeAffinity rule in your Pod specification for one or more Google Cloud zones. This method is more flexible than a nodeSelector because it lets you place Pods in multiple zones.
- nodeSelector: Specify a nodeSelector in your Pod specification for a single Google Cloud zone.
- Compute classes: Configure your Pod to use a GKE compute class. This approach lets you define a prioritized list of sets of Google Cloud zones, and it lets GKE move the workload dynamically to the most preferred set of zones when nodes are available in those zones. For more information, see About custom compute classes.
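GKE applies this label to every node automatically. To see which zone each node in your cluster is in, you can run the following command against your cluster (the `-L` flag adds the label value as an output column):

```
kubectl get nodes -L topology.kubernetes.io/zone
```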
Considerations
Zonal Pod placement using zonal topology has the following considerations:
- The cluster must be in the same Google Cloud region as the requested zones.
- In Standard clusters, you must use node auto-provisioning or create node pools with nodes in the requested zones. Autopilot clusters automatically manage this process for you.
- Standard clusters must be regional clusters.
Pricing
Zonal topology is a Kubernetes scheduling capability and is offered at no extra cost in GKE.
For pricing details, see GKE pricing.
Before you begin
Before you start, make sure that you have performed the following tasks:
- Enable the Google Kubernetes Engine API.
- If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running the `gcloud components update` command. Earlier gcloud CLI versions might not support running the commands in this document.
  Note: For existing gcloud CLI installations, make sure to set the `compute/region` property. If you primarily use zonal clusters, set the `compute/zone` property instead. By setting a default location, you can avoid errors in the gcloud CLI like the following: `One of [--zone, --region] must be supplied: Please specify location.` You might need to specify the location in certain commands if the location of your cluster differs from the default that you set.
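For example, to set a default region for the gcloud CLI (`us-central1` here is only an example value; substitute your own region):

```
gcloud config set compute/region us-central1
```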
- Ensure that you have an existing GKE cluster in the same Google Cloud region as the zones in which you want to place your Pods. To create a new cluster, see Create an Autopilot cluster.
Place Pods in multiple zones using nodeAffinity
Kubernetes nodeAffinity provides a flexible scheduling control mechanism that supports multiple label selectors and logical operators. Use nodeAffinity if you want to let Pods run in one of a set of zones (for example, in either `us-central1-a` or `us-central1-f`).
1. Save the following manifest as `multi-zone-affinity.yaml`:

   ```yaml
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: nginx-deployment
   spec:
     replicas: 3
     selector:
       matchLabels:
         app: nginx-multi-zone
     template:
       metadata:
         labels:
           app: nginx-multi-zone
       spec:
         containers:
         - name: nginx
           image: nginx:latest
           ports:
           - containerPort: 80
         affinity:
           nodeAffinity:
             requiredDuringSchedulingIgnoredDuringExecution:
               nodeSelectorTerms:
               - matchExpressions:
                 - key: topology.kubernetes.io/zone
                   operator: In
                   values:
                   - us-central1-a
                   - us-central1-f
   ```

   This manifest creates a Deployment with three replicas and places the Pods in `us-central1-a` or `us-central1-f` based on node availability.

   Ensure that your cluster is in the `us-central1` region. If your cluster is in a different region, change the zones in the `values` field of the manifest to valid zones in your cluster region.

   Optional: If you are provisioning TPU VMs, use an AI zone, like `us-central1-ai1a`. AI zones are specialized locations within Google Cloud regions that are optimized for AI/ML workloads.

2. Create the Deployment:

   ```shell
   kubectl create -f multi-zone-affinity.yaml
   ```

   GKE creates the Pods on nodes in one of the specified zones. Multiple Pods might run on the same node. You can optionally use Pod anti-affinity to tell GKE to place each Pod on a separate node.
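As a sketch of that anti-affinity option, the following fragment could be added under the Pod template's `affinity` section, next to the nodeAffinity rule, so that no two replicas carrying the `app: nginx-multi-zone` label share a node (this fragment is illustrative and not part of the manifest above):

```yaml
# Add under spec.template.spec.affinity in the Deployment.
podAntiAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  - labelSelector:
      matchLabels:
        app: nginx-multi-zone
    # Treat each node as a separate placement domain.
    topologyKey: kubernetes.io/hostname
```

With `required...` semantics, a fourth replica stays Pending until a fourth eligible node exists; use `preferredDuringSchedulingIgnoredDuringExecution` if you want best-effort spreading instead.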
Place Pods in a single zone using a nodeSelector
To place Pods in a single zone, use a nodeSelector in the Pod specification. A nodeSelector is equivalent to a `requiredDuringSchedulingIgnoredDuringExecution` nodeAffinity rule that has a single zone specified.
1. Save the following manifest as `single-zone-selector.yaml`:

   ```yaml
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: nginx-singlezone
   spec:
     replicas: 3
     selector:
       matchLabels:
         app: nginx-singlezone
     template:
       metadata:
         labels:
           app: nginx-singlezone
       spec:
         nodeSelector:
           topology.kubernetes.io/zone: "us-central1-a"
         containers:
         - name: nginx
           image: nginx:latest
           ports:
           - containerPort: 80
   ```

   This manifest tells GKE to place all replicas in the Deployment in the `us-central1-a` zone.

2. Create the Deployment:

   ```shell
   kubectl create -f single-zone-selector.yaml
   ```
Prioritize Pod placement in selected zones using a compute class
GKE compute classes provide a control mechanism that lets you define a list of node configuration priorities. Zonal preferences let you define the zones that you want GKE to place Pods in. Defining zonal preferences in compute classes requires GKE version 1.33.1-gke.1545000 or later.
Caution: We don't recommend combining zonal preferences in compute classes with the other zonal topology methods described earlier in this document.
The following example creates a compute class that specifies a list of preferred zones for Pods.
These steps assume that your cluster is in the `us-central1` region. If your cluster is in a different region, change the values of the zones in the manifest to valid zones in your cluster region.
1. Save the following manifest as `zones-custom-compute-class.yaml`:

   ```yaml
   apiVersion: cloud.google.com/v1
   kind: ComputeClass
   metadata:
     name: zones-custom-compute-class
   spec:
     priorities:
     - location:
         zones: [us-central1-a, us-central1-b]
     - location:
         zones: [us-central1-c]
     activeMigration:
       optimizeRulePriority: true
     nodePoolAutoCreation:
       enabled: true
     whenUnsatisfiable: ScaleUpAnyway
   ```

   This compute class manifest changes scaling behavior as follows:

   - GKE tries to place Pods in either `us-central1-a` or `us-central1-b`.
   - If `us-central1-a` and `us-central1-b` don't have available capacity, GKE tries to place Pods in `us-central1-c`.
   - If `us-central1-c` doesn't have available capacity, the `whenUnsatisfiable: ScaleUpAnyway` field makes GKE place the Pods in any available zone in the region.
   - If a zone that has a higher priority in the compute class becomes available later, the `activeMigration.optimizeRulePriority: true` field makes GKE move the Pods to that zone from any lower-priority zones. This migration uses the Pod Disruption Budget to ensure service availability.

2. Create the compute class:

   ```shell
   kubectl create -f zones-custom-compute-class.yaml
   ```

   GKE creates a custom compute class that your workloads can reference.
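Because active migration honors Pod Disruption Budgets, you might pair the compute class with a PDB for the workload that uses it. A minimal sketch, assuming a Deployment labeled `app: nginx-zonal-preferences` with three replicas:

```yaml
# Hypothetical PDB that keeps at least two replicas available
# while GKE migrates Pods to higher-priority zones.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: nginx-zonal-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: nginx-zonal-preferences
```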
3. Save the following manifest as `custom-compute-class-deployment.yaml`:

   ```yaml
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: nginx-zonal-preferences
   spec:
     replicas: 3
     selector:
       matchLabels:
         app: nginx-zonal-preferences
     template:
       metadata:
         labels:
           app: nginx-zonal-preferences
       spec:
         nodeSelector:
           cloud.google.com/compute-class: "zones-custom-compute-class"
         containers:
         - name: nginx
           image: nginx:latest
           ports:
           - containerPort: 80
   ```

4. Create the Deployment:

   ```shell
   kubectl create -f custom-compute-class-deployment.yaml
   ```
Target AI zones
AI zones are specialized zones used for AI/ML training and inference workloads. These zones provide significant ML accelerator capacity. For more information, see the AI zones documentation.
Note: In this document and the GKE documentation, "standard zones" or "zones" refer to non-AI zones within a Google Cloud region. Before you use an AI zone in GKE, consider the following characteristics:
- AI zones are physically separate from standard zones to provide additional storage space and power. This separation might result in higher latency, which is generally tolerable for AI/ML workloads.
- AI zones have a suffix with the `ai` notation. For example, an AI zone in the `us-central1` region is named `us-central1-ai1a`.
- Currently, only TPU VMs are supported.
- The cluster's control plane runs in one or more standard zones within the same region as the AI zone.
You can run VMs without attached TPUs in an AI zone only if you meet the following requirements:
- You are already running other workloads that use TPU VMs in the same zone.
- The non-TPU VMs are either Spot VMs, tied to a reservation, or part of a node pool with a specific accelerator-to-general-purpose VM ratio.
AI zones share components, such as networking connections and software rollouts, with standard zones that have the same suffix within the same region. For high-availability workloads, we recommend that you use different zones. For example, avoid using both `us-central1-ai1a` and `us-central1-a` for high availability.
By default, GKE doesn't deploy your workloads in AI zones. To usean AI zone, you must configure one of the following options:
- (Recommended) ComputeClasses: set your highest priority to request on-demand TPUs in an AI zone. ComputeClasses help you define a prioritized list of hardware configurations for your workloads. For an example, see About ComputeClasses.
- Node auto-provisioning: use a `nodeSelector` or `nodeAffinity` in your Pod specification to instruct node auto-provisioning to create a node pool in the AI zone. If your workload doesn't explicitly target an AI zone, node auto-provisioning considers only standard zones or zones from `--autoprovisioning-locations` when creating new node pools. This configuration ensures that workloads that don't run AI/ML models remain in standard zones unless you explicitly configure otherwise. For an example of a manifest that uses a `nodeSelector`, see Set the default zones for auto-created nodes.
- GKE Standard: if you directly manage your node pools, use an AI zone in the `--node-locations` flag when you create a node pool. For an example, see Deploy TPU workloads in GKE Standard.
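For illustration, a hedged sketch of a Pod template fragment that targets an AI zone through the zone label (the zone name, accelerator type, and topology values are examples only; adjust them to your region and TPU configuration):

```yaml
# Fragment of a Pod template spec; all values are illustrative.
nodeSelector:
  topology.kubernetes.io/zone: us-central1-ai1a  # example AI zone
  cloud.google.com/gke-tpu-accelerator: tpu-v5p-slice  # example TPU type
  cloud.google.com/gke-tpu-topology: 2x2x1  # example TPU topology
```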
Verify Pod placement
To verify Pod placement, list the Pods and check the node labels. Multiple Pods might run on a single node, so you might not see Pods spread across multiple zones if you used nodeAffinity.
1. List your Pods:

   ```shell
   kubectl get pods -o wide
   ```

   The output is a list of running Pods and the corresponding GKE node.

2. Describe the nodes:

   ```shell
   kubectl describe node NODE_NAME | grep "topology.kubernetes.io/zone"
   ```

   Replace `NODE_NAME` with the name of the node. The output is similar to the following:

   ```
   topology.kubernetes.io/zone: us-central1-a
   ```
If you want GKE to spread your Pods evenly across multiple zones for improved failover across multiple failure domains, use `topologySpreadConstraints`.
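A minimal sketch of a `topologySpreadConstraints` entry that asks the scheduler to keep the replica count balanced across zones (the label is illustrative; match it to your own Pod labels):

```yaml
# Add under the Pod template spec.
topologySpreadConstraints:
- maxSkew: 1  # zones may differ by at most one matching Pod
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: DoNotSchedule
  labelSelector:
    matchLabels:
      app: nginx-multi-zone  # illustrative label
```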
What's next
- Separate GKE workloads from each other
- Keep network traffic in the same topology as the node
- Spread Pods across multiple failure domains
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2026-02-19 UTC.