Add and manage node pools
This page shows you how to add, manage, scale, upgrade, and delete node pools that run in your Google Kubernetes Engine (GKE) Standard clusters, in order to optimize your GKE Standard clusters for performance and scalability. The node pools in a Standard cluster include Standard node pools and Autopilot-managed node pools. Aside from the section about how to Upgrade a node pool, the information in this document applies specifically to Standard node pools. You also learn how to deploy Pods to specific Standard node pools, and about the implications of node pool upgrades on running workloads.
This page is for Operators, Cloud architects, and Developers who need to create and configure clusters, and deploy workloads on GKE. To learn more about common roles and example tasks that we reference in Google Cloud content, see Common GKE user roles and tasks.
Before reading this page, ensure that you're familiar with node pools.
Before you begin
Before you start, make sure that you have performed the following tasks:
- Enable the Google Kubernetes Engine API.
- If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running the gcloud components update command. Earlier gcloud CLI versions might not support running the commands in this document.
Note: For existing gcloud CLI installations, make sure to set the compute/region property. If you primarily use zonal clusters, set the compute/zone property instead. By setting a default location, you can avoid errors in the gcloud CLI like the following: One of [--zone, --region] must be supplied: Please specify location. You might need to specify the location in certain commands if the location of your cluster differs from the default that you set. An example of setting a default location appears after this list.
- Ensure that you have an existing Standard cluster.
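For reference, setting a default location with the gcloud CLI is a single command per property; the region and zone values below are examples only, so substitute the location where your clusters run:

# Set a default region for regional clusters (example value).
gcloud config set compute/region us-central1
# Or set a default zone if you primarily use zonal clusters (example value).
gcloud config set compute/zone us-central1-a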
Set up IAM service accounts for GKE
GKE uses IAM service accounts that are attached to your nodes to run system tasks like logging and monitoring. At a minimum, these node service accounts must have the Kubernetes Engine Default Node Service Account (roles/container.defaultNodeServiceAccount) role on your project. By default, GKE uses the Compute Engine default service account, which is automatically created in your project, as the node service account.
To grant the roles/container.defaultNodeServiceAccount role to the Compute Engine default service account, complete the following steps:
Console
- Go to the Welcome page:
- In the Project number field, click Copy to clipboard.
- Go to the IAM page:
- Click Grant access.
- In the New principals field, specify the following value:

  PROJECT_NUMBER-compute@developer.gserviceaccount.com

  Replace PROJECT_NUMBER with the project number that you copied.
- In the Select a role menu, select the Kubernetes Engine Default Node Service Account role.
- Click Save.
gcloud
- Find your Google Cloud project number:
gcloud projects describe PROJECT_ID \
    --format="value(projectNumber)"

Replace PROJECT_ID with your project ID.

The output is similar to the following:
12345678901
- Grant the roles/container.defaultNodeServiceAccount role to the Compute Engine default service account:

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
    --role="roles/container.defaultNodeServiceAccount"

Replace PROJECT_NUMBER with the project number from the previous step.
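Optionally, you can confirm that the role was granted by inspecting the project's IAM policy. The following sketch uses standard gcloud output filtering; PROJECT_ID and PROJECT_NUMBER are the same values used in the preceding steps:

# List the roles bound to the Compute Engine default service account.
gcloud projects get-iam-policy PROJECT_ID \
    --flatten="bindings[].members" \
    --format="table(bindings.role)" \
    --filter="bindings.members:PROJECT_NUMBER-compute@developer.gserviceaccount.com"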
Add a node pool to a Standard cluster
You can add a new node pool to a GKE Standard cluster using the gcloud CLI, the Google Cloud console, or Terraform. GKE also supports node auto-provisioning, which automatically manages the node pools in your cluster based on scaling requirements.
Create and use a minimally-privileged Identity and Access Management (IAM) service account for your node pools to use instead of the Compute Engine default service account. For instructions to create a minimally-privileged service account, refer to Hardening your cluster's security.
gcloud
To create a node pool, run the gcloud container node-pools create command:
gcloud container node-pools create POOL_NAME \
    --cluster CLUSTER_NAME \
    --location=CONTROL_PLANE_LOCATION \
    --service-account SERVICE_ACCOUNT

Replace the following:
- POOL_NAME: the name of the new node pool.
- CLUSTER_NAME: the name of your existing cluster.
- CONTROL_PLANE_LOCATION: the Compute Engine location of the control plane of your cluster. Provide a region for regional clusters, or a zone for zonal clusters.
- SERVICE_ACCOUNT: the name of the IAM service account for your nodes to use. We strongly recommend that you specify a minimally-privileged IAM service account that your nodes can use instead of the Compute Engine default service account. To learn how to create a minimally-privileged service account, see Use a least privilege service account.
To specify a custom service account in the gcloud CLI, add the following flag to your command:
--service-account=SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com
Replace SERVICE_ACCOUNT_NAME with the name of your minimally-privileged service account.
For a full list of optional flags that you can specify, refer to the gcloud container node-pools create documentation.
The output is similar to the following:
Creating node pool POOL_NAME...done.
Created [https://container.googleapis.com/v1/projects/PROJECT_ID/zones/us-central1/clusters/CLUSTER_NAME/nodePools/POOL_NAME].
NAME: POOL_NAME
MACHINE_TYPE: e2-medium
DISK_SIZE_GB: 100
NODE_VERSION: 1.21.5-gke.1302

In this output, you see details about the node pool, such as the machine type and GKE version running on the nodes.
Occasionally, the node pool is created successfully but the gcloud command times out instead of reporting the status from the server. To check the status of all node pools, including those not yet fully provisioned, use the following command:
gcloud container node-pools list --cluster CLUSTER_NAME \
    --location=CONTROL_PLANE_LOCATION

Console
To add a node pool to an existing Standard cluster, perform the following steps:
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the cluster list, click the name of the Standard cluster you want to modify.
Click add_box Add node pool.
Configure your node pool.
In the navigation menu, click Security.
- Optionally, specify a custom IAM service account for your nodes:
- In the Advanced settings page, expand the Security section.
- In the Service account menu, select your preferred service account.
We strongly recommend that you specify a minimally-privileged IAM service account that your nodes can use instead of the Compute Engine default service account. To learn how to create a minimally-privileged service account, see Use a least privilege service account.
Click Create to add the node pool.
Terraform
Use one of the following examples:
- Add a node pool that uses the Compute Engine default IAM service account:

resource "google_container_node_pool" "default" {
  name    = "gke-standard-regional-node-pool"
  cluster = google_container_cluster.default.name
}

- Add a node pool that uses a custom IAM service account:
Create an IAM service account and grant it the roles/container.defaultNodeServiceAccount role on the project:

resource "google_service_account" "default" {
  account_id   = "service-account-id"
  display_name = "Service Account"
}

data "google_project" "project" {}

resource "google_project_iam_member" "default" {
  project = data.google_project.project.project_id
  role    = "roles/container.defaultNodeServiceAccount"
  member  = "serviceAccount:${google_service_account.default.email}"
}

Create a node pool that uses the new service account:

resource "google_container_node_pool" "default" {
  name    = "gke-standard-regional-node-pool"
  cluster = google_container_cluster.default.name

  node_config {
    service_account = google_service_account.default.email
  }
}
To learn more about using Terraform, see Terraform support for GKE.
View node pools in a Standard cluster
gcloud
To list all the node pools of a Standard cluster, run the gcloud container node-pools list command:
gcloud container node-pools list --cluster CLUSTER_NAME \
    --location=CONTROL_PLANE_LOCATION

To view details about a specific node pool, run the gcloud container node-pools describe command:
gcloud container node-pools describe POOL_NAME \
    --cluster CLUSTER_NAME \
    --location=CONTROL_PLANE_LOCATION

Replace the following:
- CLUSTER_NAME: the name of the cluster.
- POOL_NAME: the name of the node pool to view.
- CONTROL_PLANE_LOCATION: the Compute Engine location of the control plane of your cluster. Provide a region for regional clusters, or a zone for zonal clusters.
Console
To view node pools for a Standard cluster, perform the following steps:
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the cluster list, click the name of the Standard cluster.
Click the Nodes tab.
Under Node Pools, click the name of the node pool you want to view.
Scale a node pool
You can scale your node pools up or down to optimize for performance and cost. With GKE Standard node pools, you can scale a node pool horizontally by changing the number of nodes in the node pool, or scale a node pool vertically by changing the machine attribute configuration of the nodes.
Horizontally scale by changing the node count
gcloud
To resize a cluster's node pools, run the gcloud container clusters resize command:
gcloud container clusters resize CLUSTER_NAME \
    --location=CONTROL_PLANE_LOCATION \
    --node-pool POOL_NAME \
    --num-nodes NUM_NODES

Replace the following:
- CLUSTER_NAME: the name of the cluster to resize.
- CONTROL_PLANE_LOCATION: the Compute Engine location of the control plane of your cluster. Provide a region for regional clusters, or a zone for zonal clusters.
- POOL_NAME: the name of the node pool to resize.
- NUM_NODES: the number of nodes in the pool in a zonal cluster. If you use multi-zonal or regional clusters, NUM_NODES is the number of nodes for each zone the node pool is in.
Repeat this command for each node pool. If your cluster has only one node pool, omit the --node-pool flag.
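For example, the following command scales a node pool to three nodes per zone; the cluster name, location, and node pool name are illustrative placeholders:

gcloud container clusters resize example-cluster \
    --location=us-central1 \
    --node-pool default-pool \
    --num-nodes 3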
Console
To resize a cluster's node pools, perform the following steps:
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the cluster list, click the name of the Standard cluster you want to modify.
Click the Nodes tab.
In the Node Pools section, click the name of the node pool that you want to resize.
Click edit Resize.
In the Number of nodes field, enter the number of nodes that you want in the node pool, and then click Resize.
Repeat for each node pool as needed.
Vertically scale by changing the node machine attributes
You can modify the node pool's configured machine type, disk type, and disk size.
When you edit one or more of these machine attributes, GKE updates the nodes to the new configuration using the upgrade strategy configured for the node pool. If you configure the blue-green upgrade strategy, you can migrate the workloads from the original nodes to the new nodes, and roll back to the original nodes if the migration fails. Inspect the upgrade settings of the node pool to ensure that the configured strategy is how you want your nodes to be updated.
Update at least one of the machine attribute flags in the following command:

gcloud container node-pools update POOL_NAME \
    --cluster CLUSTER_NAME \
    --location=CONTROL_PLANE_LOCATION \
    --machine-type MACHINE_TYPE \
    --disk-type DISK_TYPE \
    --disk-size DISK_SIZE

Omit any flags for machine attributes that you don't want to change. However, you must use at least one machine attribute flag, otherwise the command fails.
Replace the following:
- POOL_NAME: the name of the node pool to resize.
- CLUSTER_NAME: the name of the cluster to resize.
- CONTROL_PLANE_LOCATION: the Compute Engine location of the control plane of your cluster. Provide a region for regional clusters, or a zone for zonal clusters.
- MACHINE_TYPE: the type of machine to use for the nodes. To learn more, see gcloud container node-pools update.
- DISK_TYPE: the type of the node VM boot disk; must be one of pd-standard, pd-ssd, or pd-balanced.
- DISK_SIZE: the size of the node VM boot disks in GB. Defaults to 100 GB.
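For example, the following command changes only the machine type and leaves the disk attributes unchanged; the node pool name, cluster name, location, and machine type are illustrative:

gcloud container node-pools update example-pool \
    --cluster example-cluster \
    --location=us-central1 \
    --machine-type e2-standard-4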
This change requires recreating the nodes, which can cause disruption to your running workloads. For details about this specific change, find the corresponding row in the manual changes that recreate the nodes using a node upgrade strategy without respecting maintenance policies table. To learn more about node updates, see Planning for node update disruptions.
Caution: GKE immediately begins recreating the nodes for this change using the node upgrade strategy, regardless of active maintenance policies. GKE depends on resource availability for the change. Disabling node auto-upgrades doesn't prevent this change. Ensure that your workloads running on the nodes are prepared for disruption before you initiate this change.

Upgrade a node pool
By default, Standard node pools have auto-upgrade enabled, and all Autopilot-managed node pools in Standard clusters always have auto-upgrade enabled. Node auto-upgrades ensure that your cluster's control plane and node version remain in sync and in compliance with the Kubernetes version skew policy, which ensures that control planes are compatible with nodes up to two minor versions earlier than the control plane. For example, Kubernetes 1.34 control planes are compatible with Kubernetes 1.32 nodes.
Avoid disabling node auto-upgrades with Standard node pools so that your cluster benefits from the upgrades listed in the preceding paragraph.
With GKE Standard node pool upgrades, you can choose among three configurable upgrade strategies: surge upgrades, blue-green upgrades, and autoscaled blue-green upgrades (Preview). Autopilot-managed node pools in Standard clusters always use surge upgrades.
For Standard node pools, choose a strategy and use the parameters to tune the strategy to best fit your cluster environment's needs.
How node upgrades work
While a node is being upgraded, GKE stops scheduling new Pods onto it, and attempts to schedule its running Pods onto other nodes. This is similar to other events that re-create the node, such as enabling or disabling a feature on the node pool.
During automatic or manual node upgrades, PodDisruptionBudgets (PDBs) and the Pod termination grace period are respected for a maximum of 1 hour. If Pods running on the node can't be scheduled onto new nodes after one hour, GKE initiates the upgrade anyway. This behavior applies even if you configure your PDBs to always have all of your replicas available by setting the maxUnavailable field to 0 or 0%, or by setting the minAvailable field to 100% or to the number of replicas. In all of these scenarios, GKE deletes the Pods after one hour so that the node deletion can happen.
If a workload running in a Standard node pool requires more flexibility with graceful termination, use blue-green upgrades, which provide settings for additional soak time to extend PDB checks beyond the one-hour default.
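Before you start an upgrade, you can optionally review the PodDisruptionBudgets in your cluster to anticipate how the one-hour limit might affect your workloads; this check is not part of the upgrade itself:

# List all PDBs and their allowed disruptions across namespaces.
kubectl get poddisruptionbudgets --all-namespaces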
To learn more about what to expect during node termination in general, see the topic about Pods.
The upgrade is only complete when all nodes have been recreated and the cluster is in the new state. When a newly upgraded node registers with the control plane, GKE marks the node as schedulable.
New node instances run the new Kubernetes version together with updated versions of the node's system components.
For a node pool upgrade to be considered complete, all nodes in the node pool must be recreated. If an upgrade started but then didn't complete and is in a partially upgraded state, the node pool version might not reflect the version of all of the nodes. To learn more, see Some node versions don't match the node pool version after an incomplete node pool upgrade. To determine that the node pool upgrade finished, check the node pool upgrade status. If the upgrade operation is beyond the retention period, then check that each individual node version matches the node pool version.
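As a quick way to compare individual node versions against the node pool version, you can list the nodes along with the GKE node pool label described later on this page:

# The VERSION column shows each node's kubelet version; the extra column shows its node pool.
kubectl get nodes -L cloud.google.com/gke-nodepool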
Save your data to persistent disks before upgrading
Before upgrading a node pool, you must ensure that any data you need to keep is stored in a Pod by using persistent volumes, which use persistent disks. Persistent disks are unmounted, rather than erased, during upgrades, and their data is transferred between Pods.
The following restrictions pertain to persistent disks:
- The nodes on which Pods are running must be Compute Engine VMs.
- Those VMs need to be in the same Compute Engine project and zone as the persistent disk.
To learn how to add a persistent disk to an existing node instance, see Adding or resizing zonal persistent disks in the Compute Engine documentation.
Manually upgrade a node pool
You can manually upgrade the version of a Standard node pool or an Autopilot-managed node pool in a Standard cluster. You can match the version of the control plane, or use a previous version that is still available and is compatible with the control plane. You can manually upgrade multiple node pools in parallel, whereas GKE automatically upgrades only one node pool at a time.
When GKE upgrades a node pool, either manually or automatically, GKE removes any labels you added to individual nodes using kubectl. Any other types of changes to a GKE cluster which recreate the nodes also remove the labels. To avoid losing labels, apply labels to node pools instead.
Before you manually upgrade your node pool, consider the following conditions:
- Upgrading a node pool may disrupt workloads running in that node pool. To avoid this, you can create a new node pool with the required version and migrate the workload. After migration, you can delete the old node pool.
- If you upgrade a node pool that has an Ingress in an errored state, the instance group doesn't sync. To work around this issue, first check the status by using the kubectl get ing command. If the instance group is not synced, re-apply the manifest that you used to create the Ingress, as shown in the example after this list.
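The workaround in the preceding list might look like the following sketch; the manifest file name is an assumption, so use the file that you originally applied for the Ingress:

# Check the Ingress status for errors.
kubectl get ing
# Re-apply the original Ingress manifest (hypothetical file name).
kubectl apply -f ingress.yaml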
You can manually upgrade your node pools to a version compatible with the control plane:
- For Standard node pools, you can use the Google Cloud console or the Google Cloud CLI.
- For Autopilot-managed node pools, you can only use the Google Cloud CLI.
Console
To upgrade a Standard node pool using the Google Cloud console, perform the following steps:
Go to the Google Kubernetes Engine page in the Google Cloud console.
Click the name of the cluster.
On the Cluster details page, click the Nodes tab.
In the Node Pools section, click the name of the node pool that you want to upgrade.
Click edit Edit.
Click Change under Node version.
Select the required version from the Node version drop-down list, then click Change.
It may take several minutes for the node version to change.
gcloud
The following variables are used in the commands in this section:
- CLUSTER_NAME: the name of the cluster that contains the node pool to upgrade.
- NODE_POOL_NAME: the name of the node pool to upgrade.
- CONTROL_PLANE_LOCATION: the location (region or zone) of the control plane, such as us-central1 or us-central1-a.
- VERSION: the Kubernetes version to which the nodes are upgraded. For example, --cluster-version=1.34.1-gke.1293000 or --cluster-version=latest.
Upgrade a node pool:
gcloud container clusters upgrade CLUSTER_NAME \
    --node-pool=NODE_POOL_NAME \
    --location=CONTROL_PLANE_LOCATION

To specify a different version of GKE on the nodes, use the optional --cluster-version flag:
gcloud container clusters upgrade CLUSTER_NAME \
    --node-pool=NODE_POOL_NAME \
    --location=CONTROL_PLANE_LOCATION \
    --cluster-version VERSION

For more information about specifying versions, see Versioning.
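For example, the following command upgrades a node pool to the example version shown earlier in this section; the cluster name, node pool name, and zone are illustrative placeholders:

gcloud container clusters upgrade example-cluster \
    --node-pool=default-pool \
    --location=us-central1-a \
    --cluster-version=1.34.1-gke.1293000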
For more information, refer to the gcloud container clusters upgrade documentation.
Deploy a Pod to a specific node pool
You can explicitly deploy a Pod to a specific node pool by using a nodeSelector in your Pod manifest. nodeSelector schedules Pods onto nodes with a matching label.
All GKE node pools have labels with the following format: cloud.google.com/gke-nodepool: POOL_NAME. Add this label to the nodeSelector field in your Pod as shown in the following example:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    cloud.google.com/gke-nodepool: POOL_NAME

For more information, see Assigning Pods to Nodes.
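After you apply a manifest like the preceding one, you can confirm where the Pod was scheduled; the manifest file name here is an assumption:

kubectl apply -f nginx-pod.yaml
# The NODE column of the output shows the node that the Pod was scheduled on.
kubectl get pod nginx -o wide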
As an alternative to a node selector, you can use node affinity. Use node affinity if you want a "soft" rule where the Pod attempts to meet the constraint, but is still scheduled even if the constraint can't be satisfied. For more information, see Node affinity. You can also specify resource requests for the containers.
Downgrade a node pool
You can downgrade a node pool if GKE completed the node pool upgrade. GKE can't downgrade a partially upgraded node pool; it can only roll it back. If you attempt to trigger a downgrade of a partially upgraded node pool, GKE doesn't update any of the nodes in the node pool with that operation. To check whether a node pool isn't completely upgraded and can only be rolled back, first check the node pool upgrade status. If the upgrade was cancelled or failed, you can confirm that the node pool upgrade never completed by checking whether any nodes in the node pool are still running the previous version.
If a node pool upgrade didn't complete and you want to roll back upgraded nodes to their previous version, follow the instructions to roll back a node pool upgrade. If a node pool upgrade is complete and you want to reverse it because it caused issues for your workloads, you can downgrade the node pool to an earlier version by following the instructions in this section. Review the limitations before downgrading a node pool.
Use the blue-green node upgrade strategy if you need to mitigate the risk of node pool upgrades impacting your workloads. With this strategy, you can roll back an in-progress upgrade to the original nodes if the upgrade is unsuccessful.
- Set a maintenance exclusion for the cluster to prevent the node pool from being automatically upgraded by GKE after being downgraded (see the example after these steps).
- To downgrade to an earlier version, follow the steps to manually upgrade a node pool and specify an earlier version.
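As a sketch of the first step, the following command sets a maintenance exclusion with the gcloud CLI; the exclusion name, time window, and scope value are illustrative, so check the gcloud container clusters update reference for the exact flags and scope values supported by your gcloud CLI version:

gcloud container clusters update CLUSTER_NAME \
    --location=CONTROL_PLANE_LOCATION \
    --add-maintenance-exclusion-name=hold-after-downgrade \
    --add-maintenance-exclusion-start=2026-03-01T00:00:00Z \
    --add-maintenance-exclusion-end=2026-03-15T00:00:00Z \
    --add-maintenance-exclusion-scope=no_minor_or_node_upgrades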
Delete a node pool
Deleting a node pool deletes the nodes and all running workloads. By default, GKE doesn't respect PodDisruptionBudget settings when deleting node pools. To learn how to change this setting, see Update a node pool to respect PDBs during node pool deletion.
For more information about how deleting a node pool affects your workloads, including interactions with node selectors, see Deleting node pools.
You can only delete Standard node pools. You can't delete the Autopilot-managed node pools that run in Standard clusters; GKE automatically cleans these up when they're no longer needed.
Delete a node pool using the gcloud CLI or the Google Cloud console:
gcloud
To delete a node pool, run the gcloud container node-pools delete command:
gcloud container node-pools delete POOL_NAME \
    --cluster CLUSTER_NAME \
    --location=CONTROL_PLANE_LOCATION

Replace the following:
- POOL_NAME: the name of the node pool to delete.
- CLUSTER_NAME: the name of your cluster.
- CONTROL_PLANE_LOCATION: the Compute Engine location of the control plane of your cluster. Provide a region for regional clusters, or a zone for zonal clusters.
Console
To delete a node pool, perform the following steps:
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the cluster list, click the name of the Standard cluster you want to modify.
Click the Nodes tab.
In the Node Pools section, click delete next to the node pool you want to delete.
When prompted to confirm, click Delete.
Update a node pool to respect PDBs during node pool deletion
By default, GKE doesn't respect PodDisruptionBudget settings when deleting node pools. However, you can update the configuration of the node pool to have GKE respect PDBs during node pool deletion, for up to one hour. Respecting PDBs gives workloads the opportunity to move to other nodes in the cluster. This setting doesn't affect node pool deletion during cluster deletion.
To update the setting for your node pool, run the following gcloud CLI command:
gcloud container node-pools update POOL_NAME \
    --cluster=CLUSTER_NAME \
    --location=CONTROL_PLANE_LOCATION \
    --respect-pdb-during-node-pool-deletion

Replace the following:
- POOL_NAME: the name of the node pool.
- CLUSTER_NAME: the name of your cluster.
- CONTROL_PLANE_LOCATION: the Compute Engine location of the control plane of your cluster. Provide a region for regional clusters, or a zone for zonal clusters.
You can also use the --respect-pdb-during-node-pool-deletion flag when you add a node pool.
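Based on that statement, a node pool creation command with this flag would look like the following sketch; confirm the flag's availability in your gcloud CLI version before relying on it:

gcloud container node-pools create POOL_NAME \
    --cluster CLUSTER_NAME \
    --location=CONTROL_PLANE_LOCATION \
    --respect-pdb-during-node-pool-deletion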
To remove this configuration and use the default setting of not respecting PDBs during node pool deletion, see the next section.
Update a node pool to not respect PDBs during node pool deletion
You can update a node pool to revert to the default setting of not respecting PDBs during node pool deletion.
To update the setting, run the following gcloud CLI command:
gcloud container node-pools update POOL_NAME \
    --cluster=CLUSTER_NAME \
    --location=CONTROL_PLANE_LOCATION \
    --no-respect-pdb-during-node-pool-deletion

Replace the following:
- POOL_NAME: the name of the node pool.
- CLUSTER_NAME: the name of your cluster.
- CONTROL_PLANE_LOCATION: the Compute Engine location of the control plane of your cluster. Provide a region for regional clusters, or a zone for zonal clusters.
Migrate nodes to a different machine type
To learn about different approaches for moving workloads between machine types, for example, to migrate to a newer machine type, see Migrate nodes to a different machine type.
Migrate workloads between node pools
To migrate workloads from one node pool to another node pool, see Migrate workloads between node pools. For example, you can use these instructions if you're replacing an existing node pool with a new node pool and you want to ensure that the workloads move to the new nodes from the existing nodes.
Troubleshoot
For troubleshooting information, see Troubleshoot Standard node pools and Troubleshoot node registration.
What's next
- Learn about auto-provisioning node pools.
- Learn how GKE can automatically repair unhealthy nodes.
- Learn how to configure kubelet and sysctl using Node System Configuration.