Place GKE Pods in specific zones

This page shows you how to tell Google Kubernetes Engine (GKE) to run your Pods on nodes in specific Google Cloud zones using zonal topology. This type of placement is useful in situations such as the following:

  • Pods must access data that's stored in a zonal Compute Engine persistent disk.
  • Pods must run alongside other zonal resources, such as Cloud SQL instances.

You can also use zonal placement with topology-aware traffic routing to reduce latency between clients and workloads. For details about topology-aware traffic routing, see Topology aware routing.

Using zonal topology to control Pod placement is an advanced Kubernetes mechanism that you should only use if your situation requires that Pods run in specific zones. In most production environments, we recommend that you use regional resources, which is the GKE default, when possible.

Zonal placement methods

Zonal topology is built into Kubernetes with the topology.kubernetes.io/zone: ZONE node label. To tell GKE to place a Pod in a specific zone, use one of the following methods:

  • nodeAffinity: Specify a nodeAffinity rule in your Pod specification for one or more Google Cloud zones. This method is more flexible than a nodeSelector because it lets you place Pods in multiple zones.
  • nodeSelector: Specify a nodeSelector in your Pod specification for a single Google Cloud zone.

  • Compute classes: Configure your Pod to use a GKE compute class. This approach lets you define a prioritized list of sets of Google Cloud zones. GKE can dynamically move the workload to the most preferred set of zones when nodes become available in those zones. For more information, see About custom compute classes.
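
Each of these methods relies on the topology.kubernetes.io/zone node label. As a quick, illustrative check, you can list the value of that label for your existing nodes; the -L flag adds a column that shows the label value for each node:

    kubectl get nodes -L topology.kubernetes.io/zone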

Considerations

Zonal Pod placement using zonal topology has the following considerations:

  • The cluster must be in the same Google Cloud region as the requested zones.
  • In Standard clusters, you must use node auto-provisioning or create node pools with nodes in the requested zones. Autopilot clusters automatically manage this process for you.
  • Standard clusters must be regional clusters.
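
If you're not sure which region and zones your cluster uses, the following command is one way to check. This is an illustrative sketch: CLUSTER_NAME is a placeholder, and the example assumes a regional cluster in us-central1. The locations field lists the zones in which the cluster's nodes run.

    gcloud container clusters describe CLUSTER_NAME \
        --region=us-central1 \
        --format="value(locations)"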

Pricing

Zonal topology is a Kubernetes scheduling capability and is offered at no extra cost in GKE.

For pricing details, see GKE pricing.

Before you begin

Before you start, make sure that you have performed the following tasks:

  • Enable the Google Kubernetes Engine API.
  • If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running the gcloud components update command. Earlier gcloud CLI versions might not support running the commands in this document. Note: For existing gcloud CLI installations, make sure to set the compute/region property. If you primarily use zonal clusters, set compute/zone instead. By setting a default location, you can avoid errors in the gcloud CLI like the following: One of [--zone, --region] must be supplied: Please specify location. You might need to specify the location in certain commands if the location of your cluster differs from the default that you set. An example command for setting the default region follows this list.
  • Ensure that you have an existing GKE cluster in the same Google Cloud region as the zones in which you want to place your Pods. To create a new cluster, see Create an Autopilot cluster.
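
For example, to set the default region that's mentioned in the preceding note, you could run the following command (an illustrative sketch that assumes your clusters are in us-central1):

    gcloud config set compute/region us-central1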

Place Pods in multiple zones using nodeAffinity

Kubernetes nodeAffinity provides a flexible scheduling control mechanism that supports multiple label selectors and logical operators. Use nodeAffinity if you want to let Pods run in one of a set of zones (for example, in either us-central1-a or us-central1-f).

  1. Save the following manifest as multi-zone-affinity.yaml:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx-multi-zone
      template:
        metadata:
          labels:
            app: nginx-multi-zone
        spec:
          containers:
          - name: nginx
            image: nginx:latest
            ports:
            - containerPort: 80
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: topology.kubernetes.io/zone
                    operator: In
                    values:
                    - us-central1-a
                    - us-central1-f

    This manifest creates a Deployment with three replicas and places the Pods in us-central1-a or us-central1-f based on node availability.

    Ensure that your cluster is in the us-central1 region. If your cluster is in a different region, change the zones in the values field of the manifest to valid zones in your cluster region.

    Optional: If you are provisioning TPU VMs, use an AI zone, like us-central1-ai1a. AI zones are specialized locations that are optimized for AI/ML workloads within Google Cloud regions.

  2. Create the Deployment:

    kubectl create -f multi-zone-affinity.yaml

    GKE creates the Pods on nodes in one of the specified zones. Multiple Pods might run on the same node. You can optionally use Pod anti-affinity to tell GKE to place each Pod on a separate node.
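
    For example, the following fragment is a minimal, illustrative sketch of a preferred Pod anti-affinity rule that you could add under the affinity field of the manifest in step 1; it assumes the app: nginx-multi-zone label from that manifest:

        # Illustrative fragment only: add under spec.template.spec.affinity
        # to discourage scheduling multiple replicas on the same node.
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: nginx-multi-zone
              topologyKey: kubernetes.io/hostname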

Place Pods in a single zone using a nodeSelector

To place Pods in a single zone, use a nodeSelector in the Pod specification. A nodeSelector is equivalent to a requiredDuringSchedulingIgnoredDuringExecution nodeAffinity rule that has a single zone specified.
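
For reference, the single-zone nodeSelector that's used in the following manifest corresponds to this illustrative nodeAffinity fragment:

    # Illustrative fragment: the nodeAffinity form of a single-zone nodeSelector.
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values:
              - us-central1-a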

Caution: Pods that you place in a single zone by using zonal topology might not be covered by the Autopilot service level agreement (SLA), which covers Autopilot Pods in multiple zones. The Compute Engine SLA continues to apply.
  1. Save the following manifest as single-zone-selector.yaml:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-singlezone
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx-singlezone
      template:
        metadata:
          labels:
            app: nginx-singlezone
        spec:
          nodeSelector:
            topology.kubernetes.io/zone: "us-central1-a"
          containers:
          - name: nginx
            image: nginx:latest
            ports:
            - containerPort: 80

    This manifest tells GKE to place all replicas in the Deployment in the us-central1-a zone.

  2. Create the Deployment:

    kubectl create -f single-zone-selector.yaml

Prioritize Pod placement in selected zones using a compute class

GKE compute classes provide a control mechanism that lets you define a list of node configuration priorities. Zonal preferences let you define the zones that you want GKE to place Pods in. Defining zonal preferences in compute classes requires GKE version 1.33.1-gke.1545000 or later.

Caution: We don't recommend combining zonal preferences in compute classes with the zonal topology methods that are described earlier on this page.

The following example creates a compute class that specifies a list of preferred zones for Pods.

These steps assume that your cluster is in the us-central1 region. If your cluster is in a different region, change the values of the zones in the manifest to valid zones in your cluster region.

  1. Save the following manifest as zones-custom-compute-class.yaml:

    apiVersion: cloud.google.com/v1
    kind: ComputeClass
    metadata:
      name: zones-custom-compute-class
    spec:
      priorities:
      - location:
          zones: [us-central1-a, us-central1-b]
      - location:
          zones: [us-central1-c]
      activeMigration:
        optimizeRulePriority: true
      nodePoolAutoCreation:
        enabled: true
      whenUnsatisfiable: ScaleUpAnyway

    This compute class manifest changes scaling behavior as follows:

     1. GKE tries to place Pods in either us-central1-a or us-central1-b.
     2. If us-central1-a and us-central1-b don't have available capacity, GKE tries to place Pods in us-central1-c.
     3. If us-central1-c doesn't have available capacity, the whenUnsatisfiable: ScaleUpAnyway field makes GKE place the Pods in any available zone in the region.
     4. If a zone that has higher priority in the compute class becomes available later, the activeMigration.optimizeRulePriority: true field makes GKE move the Pods to that zone from any lower-priority zones. This migration uses the Pod Disruption Budget to ensure service availability.
  2. Create the compute class:

    kubectl create -f zones-custom-compute-class.yaml

    GKE creates a custom compute class that your workloads can reference.

  3. Save the following manifest as custom-compute-class-deployment.yaml:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-zonal-preferences
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx-zonal-preferences
      template:
        metadata:
          labels:
            app: nginx-zonal-preferences
        spec:
          nodeSelector:
            cloud.google.com/compute-class: "zones-custom-compute-class"
          containers:
          - name: nginx
            image: nginx:latest
            ports:
            - containerPort: 80
  4. Create the Deployment:

    kubectl create -f custom-compute-class-deployment.yaml
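
Because active migration respects Pod Disruption Budgets, you can optionally define one for this workload. The following manifest is a minimal, illustrative sketch; the PodDisruptionBudget name is hypothetical, and it assumes the app: nginx-zonal-preferences label from the Deployment above:

    # Illustrative sketch: keep at least two replicas available while
    # GKE migrates Pods between zones.
    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: nginx-zonal-preferences-pdb
    spec:
      minAvailable: 2
      selector:
        matchLabels:
          app: nginx-zonal-preferences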

Target AI zones

AI zones are specialized zones used for AI/ML training and inference workloads. These zones provide significant ML accelerator capacity. For more information, see the AI zones documentation.

Note: In this document and the GKE documentation, "standard zones" or "zones" refer to non-AI zones within a Google Cloud region.

Before you use an AI zone in GKE, consider the following characteristics:

By default, GKE doesn't deploy your workloads in AI zones. To use an AI zone, you must configure one of the following options:

  • (Recommended) ComputeClasses: set your highest priority to request on-demand TPUs in an AI zone. ComputeClasses help you define a prioritized list of hardware configurations for your workloads. For an example, see About ComputeClasses.
  • Node auto-provisioning: use a nodeSelector or nodeAffinity in your Pod specification to instruct node auto-provisioning to create a node pool in the AI zone. If your workload doesn't explicitly target an AI zone, node auto-provisioning considers only standard zones or zones from --autoprovisioning-locations when creating new node pools. This configuration ensures that workloads that don't run AI/ML models remain in standard zones unless you explicitly configure otherwise. For an example of a manifest that uses a nodeSelector, see Set the default zones for auto-created nodes.
  • GKE Standard: if you directly manage your node pools, use an AI zone in the --node-locations flag when you create a node pool. For an example, see Deploy TPU workloads in GKE Standard.
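
As an illustration of the node auto-provisioning option, the following Pod specification fragment targets the example AI zone that's named earlier on this page; treat the zone name as a placeholder for a valid AI zone in your cluster's region:

    # Illustrative fragment: target an AI zone with the standard zone label.
    nodeSelector:
      topology.kubernetes.io/zone: "us-central1-ai1a"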

Verify Pod placement

To verify Pod placement, list the Pods and check the node labels. Multiple Pods might run on a single node, so you might not see Pods spread across multiple zones if you used nodeAffinity.

  1. List your Pods:

    kubectl get pods -o wide

    The output is a list of running Pods and the corresponding GKE node.

  2. Describe the nodes:

    kubectl describe node NODE_NAME | grep "topology.kubernetes.io/zone"

    Replace NODE_NAME with the name of the node.

    The output is similar to the following:

    topology.kubernetes.io/zone: us-central1-a

If you want GKE to spread your Pods evenly across multiple zones for improved failover across multiple failure domains, use topologySpreadConstraints.
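
The following fragment is a minimal, illustrative topology spread constraint; it assumes the app: nginx-multi-zone label from the earlier example and would be added under spec.template.spec of the Deployment:

    # Illustrative fragment: spread replicas evenly across zones.
    topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: nginx-multi-zone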

What's next
