gcloud alpha container node-pools update
- NAME
- gcloud alpha container node-pools update - updates a node pool in a running cluster
- SYNOPSIS
gcloud alpha container node-pools update NAME (--accelerator=[type=TYPE,[count=COUNT,gpu-driver-version=GPU_DRIVER_VERSION,gpu-partition-size=GPU_PARTITION_SIZE,gpu-sharing-strategy=GPU_SHARING_STRATEGY,max-shared-clients-per-gpu=MAX_SHARED_CLIENTS_PER_GPU],…] | --confidential-node-type=CONFIDENTIAL_NODE_TYPE | --containerd-config-from-file=PATH_TO_FILE | --enable-confidential-nodes | --enable-gvnic | --enable-image-streaming | --enable-insecure-kubelet-readonly-port | --enable-kernel-module-signature-enforcement | --enable-private-nodes | --enable-queued-provisioning | --flex-start | --labels=[KEY=VALUE,…] | --logging-variant=LOGGING_VARIANT | --max-run-duration=MAX_RUN_DURATION | --network-performance-configs=[PROPERTY=VALUE,…] | --node-labels=[NODE_LABEL,…] | --node-locations=ZONE,[ZONE,…] | --node-taints=[NODE_TAINT,…] | --resource-manager-tags=[KEY=VALUE,…] | --storage-pools=STORAGE_POOL,[…] | --system-config-from-file=PATH_TO_FILE | --tags=[TAG,…] | --windows-os-version=WINDOWS_OS_VERSION | --workload-metadata=WORKLOAD_METADATA | --autoscaled-rollout-policy=[wait-for-drain-duration=WAIT-FOR-DRAIN-DURATION] --enable-blue-green-upgrade --enable-surge-upgrade --max-surge-upgrade=MAX_SURGE_UPGRADE --max-unavailable-upgrade=MAX_UNAVAILABLE_UPGRADE --node-pool-soak-duration=NODE_POOL_SOAK_DURATION --standard-rollout-policy=[batch-node-count=BATCH_NODE_COUNT,batch-percent=BATCH_NODE_PERCENTAGE,batch-soak-duration=BATCH_SOAK_DURATION,…] | --boot-disk-provisioned-iops=BOOT_DISK_PROVISIONED_IOPS --boot-disk-provisioned-throughput=BOOT_DISK_PROVISIONED_THROUGHPUT --disk-size=DISK_SIZE --disk-type=DISK_TYPE --machine-type=MACHINE_TYPE | --enable-autoprovisioning --enable-autoscaling --location-policy=LOCATION_POLICY --max-nodes=MAX_NODES --min-nodes=MIN_NODES --total-max-nodes=TOTAL_MAX_NODES --total-min-nodes=TOTAL_MIN_NODES | --enable-autorepair --enable-autoupgrade | --node-drain-grace-period-seconds=NODE_DRAIN_GRACE_PERIOD_SECONDS --node-drain-pdb-timeout-seconds=NODE_DRAIN_PDB_TIMEOUT_SECONDS --respect-pdb-during-node-pool-deletion) [--async] [--cluster=CLUSTER] [--location=LOCATION | --region=REGION | --zone=ZONE, -z ZONE] [GCLOUD_WIDE_FLAG …]
- DESCRIPTION
(ALPHA) gcloud alpha container node-pools update updates a node pool in a Google Kubernetes Engine cluster.
- EXAMPLES
- To turn on node autoupgrade in "node-pool-1" in the cluster "sample-cluster", run:
gcloud alpha container node-pools update node-pool-1 --cluster=sample-cluster --enable-autoupgrade
- POSITIONAL ARGUMENTS
NAME - The name of the node pool.
- REQUIRED FLAGS
- Exactly one of these must be specified:
--accelerator=[type=TYPE,[count=COUNT,gpu-driver-version=GPU_DRIVER_VERSION,gpu-partition-size=GPU_PARTITION_SIZE,gpu-sharing-strategy=GPU_SHARING_STRATEGY,max-shared-clients-per-gpu=MAX_SHARED_CLIENTS_PER_GPU],…] - Attaches accelerators (e.g. GPUs) to all nodes.
type - (Required) The specific type (e.g. nvidia-tesla-t4 for NVIDIA T4) of accelerator to attach to the instances. Use gcloud compute accelerator-types list to learn about all available accelerator types.
count - (Optional) The number of accelerators to attach to the instances. The default value is 1.
gpu-driver-version - (Optional) The NVIDIA driver version to install. GPU_DRIVER_VERSION must be one of:
`default`: Install the default driver version for this GKE version. For GKE version 1.30.1-gke.1156000 and later, this is the default option.
`latest`: Install the latest driver version available for this GKE version. Can only be used for nodes that use Container-Optimized OS.
`disabled`: Skip automatic driver installation. You must manually install a driver after you create the cluster. For GKE version 1.30.1-gke.1156000 and earlier, this is the default option. To manually install the GPU driver, refer to https://cloud.google.com/kubernetes-engine/docs/how-to/gpus#installing_drivers.
gpu-partition-size - (Optional) The GPU partition size used when running multi-instance GPUs. For information about multi-instance GPUs, refer to: https://cloud.google.com/kubernetes-engine/docs/how-to/gpus-multi
gpu-sharing-strategy - (Optional) The GPU sharing strategy (e.g. time-sharing) to use. For information about GPU sharing, refer to: https://cloud.google.com/kubernetes-engine/docs/concepts/timesharing-gpus
max-shared-clients-per-gpu - (Optional) The max number of containers allowed to share each GPU on the node. This field is used together with gpu-sharing-strategy.
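For example, a hypothetical invocation attaching two NVIDIA T4 GPUs with the default driver to each node (the count and pool names are illustrative, not defaults):
gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --accelerator=type=nvidia-tesla-t4,count=2,gpu-driver-version=default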
--confidential-node-type=CONFIDENTIAL_NODE_TYPE - Recreates all the nodes in the node pool as Confidential VMs (https://docs.cloud.google.com/compute/docs/about-confidential-vm).
CONFIDENTIAL_NODE_TYPE must be one of: sev, sev_snp, tdx, disabled.
--containerd-config-from-file=PATH_TO_FILE - Path of the YAML file that contains containerd configuration entries, such as configuring access to private image registries.
For detailed information on the configuration usage, please refer to https://cloud.google.com/kubernetes-engine/docs/how-to/customize-containerd-configuration.
Note: Updating the containerd configuration of an existing cluster or node pool requires recreation of the existing nodes, which might cause disruptions in running workloads.
Use a full or relative path to a local file containing the value of containerd_config.
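As a hedged sketch of what such a file can contain, the private-registry example below follows the customization guide linked above; the secret URI, domain, and file name are placeholders, not a definitive schema:
# containerd-config.yaml (illustrative)
privateRegistryAccessConfig:
  enabled: true
  certificateAuthorityDomainConfig:
  - gcpSecretManagerCertificateConfig:
      secretURI: "projects/my-project/secrets/my-ca-cert/versions/latest"
    fqdns:
    - "private-registry.example.com"
It might then be applied with an invocation like:
gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --containerd-config-from-file=containerd-config.yaml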
--enable-confidential-nodes - Recreates all the nodes in the node pool as Confidential VMs (https://docs.cloud.google.com/compute/docs/about-confidential-vm).
--enable-gvnic - Enable the use of GVNIC for this cluster. Requires re-creation of nodes using either a node-pool upgrade or node-pool creation.
--enable-image-streaming - Specifies whether to enable image streaming on the node pool.
--enable-insecure-kubelet-readonly-port - Enables the kubelet's insecure read-only port.
To disable the read-only port on a cluster or node pool, set the flag to --no-enable-insecure-kubelet-readonly-port.
--enable-kernel-module-signature-enforcement - Enforces that kernel modules are signed on all nodes in the node pool. This setting overrides the cluster-level setting. For example, if the cluster disables enforcement, you can enable enforcement only for a specific node pool. When the policy is modified on an existing node pool, nodes will be immediately recreated to use the new policy. Use --no-enable-kernel-module-signature-enforcement to disable.
Examples:
gcloud alpha container node-pools update node-pool-1 --enable-kernel-module-signature-enforcement
--enable-private-nodes - Enables provisioning nodes with private IP addresses only.
The control plane still communicates with all nodes through private IP addresses only, regardless of whether private nodes are enabled or disabled.
--enable-queued-provisioning - Marks the node pool as queued only. This means that all new nodes can be obtained only through queuing via the ProvisioningRequest API.
gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --enable-queued-provisioning
…and other required parameters; for more details, see: https://cloud.google.com/kubernetes-engine/docs/how-to/provisioningrequest
--flex-start - Starts the node pool with the Flex Start provisioning model.
gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --flex-start
and other required parameters; for more details, see: https://cloud.google.com/kubernetes-engine/docs/how-to/provisioningrequest
--labels=[KEY=VALUE,…] - Labels to apply to the Google Cloud resources of node pools in the Kubernetes Engine cluster. These are unrelated to Kubernetes labels. Warning: Updating these labels will cause the node(s) to be recreated.
Examples:
gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --labels=label1=value1,label2=value2
--logging-variant=LOGGING_VARIANT - Specifies the logging variant that will be deployed on all the nodes in the node pool. If the node pool doesn't specify a logging variant, then the logging variant specified for the cluster will be deployed on all the nodes in the node pool. Valid logging variants are MAX_THROUGHPUT, DEFAULT.
LOGGING_VARIANT must be one of:
DEFAULT - 'DEFAULT' variant requests minimal resources but may not guarantee high throughput.
MAX_THROUGHPUT - 'MAX_THROUGHPUT' variant requests more node resources and is able to achieve logging throughput up to 10MB per sec.
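For example, an illustrative invocation selecting the high-throughput variant (the pool and cluster names are placeholders):
gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --logging-variant=MAX_THROUGHPUT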
--max-run-duration=MAX_RUN_DURATION - Limits the runtime of each node in the node pool to the specified duration.
gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --max-run-duration=3600s
--network-performance-configs=[PROPERTY=VALUE,…] - Configures network performance settings for the node pool. If this flag is not specified, the pool will be created with its default network performance configuration.
total-egress-bandwidth-tier - Total egress bandwidth is the available outbound bandwidth from a VM, regardless of whether the traffic is going to internal IP or external IP destinations. The following tier values are allowed: [TIER_UNSPECIFIED, TIER_1]
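For example, an illustrative invocation setting the Tier 1 egress bandwidth tier (names are placeholders):
gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --network-performance-configs=total-egress-bandwidth-tier=TIER_1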
--node-labels=[NODE_LABEL,…] - Replaces all the user-specified Kubernetes labels on all nodes in an existing node pool with the given labels.
Examples:
gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --node-labels=label1=value1,label2=value2
Updating the node pool's --node-labels flag applies the labels to the Kubernetes Node objects for existing nodes in-place; it does not re-create or replace nodes. New nodes, including ones created by resizing or re-creating nodes, will have these labels on the Kubernetes API Node object. The labels can be used in the nodeSelector field, as sketched below; see https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/ for examples.
Note that Kubernetes labels, intended to associate cluster components and resources with one another and manage resource lifecycles, are different from Google Kubernetes Engine labels that are used for the purpose of tracking billing and usage information.
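As a hedged sketch of how such a label is consumed on the Kubernetes side (the pod name and image are illustrative, not part of this command), a pod can target the labeled nodes via nodeSelector:
# Illustrative pod spec: schedules only onto nodes carrying label1=value1,
# e.g. nodes in a pool updated with --node-labels=label1=value1.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  nodeSelector:
    label1: value1
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9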
--node-locations=ZONE,[ZONE,…] - Set of zones in which the node pool's nodes should be located. Changing the locations for a node pool will result in nodes being either created or removed from the node pool, depending on whether locations are being added or removed.
Multiple locations can be specified, separated by commas. For example:
gcloud alpha container node-pools update node-pool-1 --cluster=sample-cluster --node-locations=us-central1-a,us-central1-b
--node-taints=[NODE_TAINT,…] - Replaces all the user-specified Kubernetes taints on all nodes in an existing node pool, which can be used with tolerations for pod scheduling.
Examples:
gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --node-taints=key1=val1:NoSchedule,key2=val2:PreferNoSchedule
To read more about node-taints, see https://cloud.google.com/kubernetes-engine/docs/node-taints.
--resource-manager-tags=[KEY=VALUE,…] - Replaces all the user-specified resource manager tags on all nodes in an existing node pool in a Standard cluster with the given comma-separated resource manager tags that have the GCE_FIREWALL purpose.
Examples:
gcloud alpha container node-pools update example-node-pool --resource-manager-tags=tagKeys/1234=tagValues/2345
gcloud alpha container node-pools update example-node-pool --resource-manager-tags=my-project/key1=value1
gcloud alpha container node-pools update example-node-pool --resource-manager-tags=12345/key1=value1,23456/key2=value2
gcloud alpha container node-pools update example-node-pool --resource-manager-tags=
All nodes, including nodes that are resized or re-created, will have the specified tags on the corresponding Instance object in the Compute Engine API. You can reference these tags in network firewall policy rules. For instructions, see https://cloud.google.com/firewall/docs/use-tags-for-firewalls.
--storage-pools=STORAGE_POOL,[…] - A list of storage pools where the node pool's boot disks will be provisioned. Replaces all the current storage pools of an existing node pool with the specified storage pools.
STORAGE_POOL must be in the format projects/project/zones/zone/storagePools/storagePool
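For example, an illustrative invocation (the project, zone, and pool names are placeholders):
gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --storage-pools=projects/my-project/zones/us-central1-a/storagePools/my-storage-pool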
--system-config-from-file=PATH_TO_FILE - Path of the YAML/JSON file that contains the node configuration, including Linux kernel parameters (sysctls) and kubelet configs.
Examples:
kubeletConfig:
  cpuManagerPolicy: static
  memoryManager:
    policy: Static
  topologyManager:
    policy: BestEffort
    scope: pod
linuxConfig:
  sysctl:
    net.core.somaxconn: '2048'
    net.ipv4.tcp_rmem: '4096 87380 6291456'
  hugepageConfig:
    hugepage_size2m: '1024'
    hugepage_size1g: '2'
  swapConfig:
    enabled: true
    bootDiskProfile:
      swapSizeGib: 8
  cgroupMode: 'CGROUP_MODE_V2'
  nodeKernelModuleLoading:
    policy: 'ENFORCE_SIGNED_MODULES'
List of supported kubelet configs in 'kubeletConfig'.

KEY                                     VALUE
cpuManagerPolicy                        either 'static' or 'none'
cpuCFSQuota                             true or false (enabled by default)
cpuCFSQuotaPeriod                       interval (e.g., '100ms'. The value must be between 1ms and 1 second, inclusive.)
memoryManager                           specify memory manager policy
topologyManager                         specify topology manager policy and scope
podPidsLimit                            integer (The value must be greater than or equal to 1024 and less than 4194304.)
containerLogMaxSize                     positive number plus unit suffix (e.g., '100Mi', '0.2Gi'. The value must be between 10Mi and 500Mi, inclusive.)
containerLogMaxFiles                    integer (The value must be between [2, 10].)
imageGcLowThresholdPercent              integer (The value must be between [10, 85], and lower than imageGcHighThresholdPercent.)
imageGcHighThresholdPercent             integer (The value must be between [10, 85], and greater than imageGcLowThresholdPercent.)
imageMinimumGcAge                       interval (e.g., '100s', '1m'. The value must be less than '2m'.)
imageMaximumGcAge                       interval (e.g., '100s', '1m'. The value must be greater than imageMinimumGcAge.)
evictionSoft                            specify eviction soft thresholds
evictionSoftGracePeriod                 specify eviction soft grace period
evictionMinimumReclaim                  specify eviction minimum reclaim thresholds
evictionMaxPodGracePeriodSeconds        integer (Max grace period for pod termination during eviction, in seconds. The value must be between [0, 300].)
shutdownGracePeriodSeconds              integer (Grace period for pods terminating on node shutdown, in seconds. Allowed values: 0, 30, 120.)
shutdownGracePeriodCriticalPodsSeconds  integer (Grace period for critical pods terminating on node shutdown, in seconds. The value must be between [0, 120] and less than shutdownGracePeriodSeconds.)
allowedUnsafeSysctls                    list of sysctls (Allowlisted groups: 'kernel.shm*', 'kernel.msg*', 'kernel.sem', 'fs.mqueue.*', and 'net.*', and sysctls under the groups.)
singleProcessOomKill                    true or false
maxParallelImagePulls                   integer (The value must be between [2, 5].)

List of supported keys in memoryManager in 'kubeletConfig'.

KEY     VALUE
policy  either 'Static' or 'None'

List of supported keys in topologyManager in 'kubeletConfig'.

KEY     VALUE
policy  either 'none' or 'best-effort' or 'single-numa-node' or 'restricted'
scope   either 'pod' or 'container'

List of supported keys in evictionSoft in 'kubeletConfig'.

KEY                VALUE
memoryAvailable    quantity (e.g., '100Mi', '1Gi'. Represents the amount of memory available before soft eviction. The value must be at least 100Mi and less than 50% of the node's memory.)
nodefsAvailable    percentage (e.g., '20%'. Represents the nodefs available before soft eviction. The value must be between 10% and 50%, inclusive.)
nodefsInodesFree   percentage (e.g., '20%'. Represents the nodefs inodes free before soft eviction. The value must be between 5% and 50%, inclusive.)
imagefsAvailable   percentage (e.g., '20%'. Represents the imagefs available before soft eviction. The value must be between 15% and 50%, inclusive.)
imagefsInodesFree  percentage (e.g., '20%'. Represents the imagefs inodes free before soft eviction. The value must be between 5% and 50%, inclusive.)
pidAvailable       percentage (e.g., '20%'. Represents the pid available before soft eviction. The value must be between 10% and 50%, inclusive.)

List of supported keys in evictionSoftGracePeriod in 'kubeletConfig'.

KEY                VALUE
memoryAvailable    duration (e.g., '30s', '1m'. The grace period for soft eviction for this resource. The value must be positive and no more than '5m'.)
nodefsAvailable    duration (e.g., '30s', '1m'. The grace period for soft eviction for this resource. The value must be positive and no more than '5m'.)
nodefsInodesFree   duration (e.g., '30s', '1m'. The grace period for soft eviction for this resource. The value must be positive and no more than '5m'.)
imagefsAvailable   duration (e.g., '30s', '1m'. The grace period for soft eviction for this resource. The value must be positive and no more than '5m'.)
imagefsInodesFree  duration (e.g., '30s', '1m'. The grace period for soft eviction for this resource. The value must be positive and no more than '5m'.)
pidAvailable       duration (e.g., '30s', '1m'. The grace period for soft eviction for this resource. The value must be positive and no more than '5m'.)

List of supported keys in evictionMinimumReclaim in 'kubeletConfig'.

KEY                VALUE
memoryAvailable    percentage (e.g., '5%'. Represents the minimum reclaim threshold for memory available. The value must be positive and no more than 10%.)
nodefsAvailable    percentage (e.g., '5%'. Represents the minimum reclaim threshold for nodefs available. The value must be positive and no more than 10%.)
nodefsInodesFree   percentage (e.g., '5%'. Represents the minimum reclaim threshold for nodefs inodes free. The value must be positive and no more than 10%.)
imagefsAvailable   percentage (e.g., '5%'. Represents the minimum reclaim threshold for imagefs available. The value must be positive and no more than 10%.)
imagefsInodesFree  percentage (e.g., '5%'. Represents the minimum reclaim threshold for imagefs inodes free. The value must be positive and no more than 10%.)
pidAvailable       percentage (e.g., '5%'. Represents the minimum reclaim threshold for pid available. The value must be positive and no more than 10%.)

List of supported sysctls in 'linuxConfig'.

KEY                                                 VALUE
net.core.netdev_max_backlog                         Any positive integer, less than 2147483647
net.core.rmem_default                               Must be between [2304, 2147483647]
net.core.rmem_max                                   Must be between [2304, 2147483647]
net.core.wmem_default                               Must be between [4608, 2147483647]
net.core.wmem_max                                   Must be between [4608, 2147483647]
net.core.optmem_max                                 Any positive integer, less than 2147483647
net.core.somaxconn                                  Must be between [128, 2147483647]
net.ipv4.tcp_rmem                                   Any positive integer tuple
net.ipv4.tcp_wmem                                   Any positive integer tuple
net.ipv4.tcp_tw_reuse                               Must be {0, 1, 2}
net.ipv4.tcp_mtu_probing                            Must be {0, 1, 2}
net.ipv4.tcp_max_orphans                            Must be between [16384, 262144]
net.ipv4.tcp_max_tw_buckets                         Must be between [4096, 2147483647]
net.ipv4.tcp_syn_retries                            Must be between [1, 127]
net.ipv4.tcp_ecn                                    Must be {0, 1, 2}
net.ipv4.tcp_congestion_control                     Supported values for COS: 'reno', 'cubic', 'bbr', 'lp', 'htcp'. Supported values for Ubuntu: 'reno', 'cubic', 'bbr', 'lp', 'htcp', 'vegas', 'dctcp', 'bic', 'cdg', 'highspeed', 'hybla', 'illinois', 'nv', 'scalable', 'veno', 'westwood', 'yeah'.
net.netfilter.nf_conntrack_max                      Must be between [65536, 4194304]
net.netfilter.nf_conntrack_buckets                  Must be between [65536, 524288]. Recommend setting: nf_conntrack_max = nf_conntrack_buckets * 4
net.netfilter.nf_conntrack_tcp_timeout_close_wait   Must be between [60, 3600]
net.netfilter.nf_conntrack_tcp_timeout_time_wait    Must be between [1, 600]
net.netfilter.nf_conntrack_tcp_timeout_established  Must be between [600, 86400]
net.netfilter.nf_conntrack_acct                     Must be {0, 1}
kernel.shmmni                                       Must be between [4096, 32768]
kernel.shmmax                                       Must be between [0, 18446744073692774399]
kernel.shmall                                       Must be between [0, 18446744073692774399]
kernel.perf_event_paranoid                          Must be {-1, 0, 1, 2, 3}
kernel.sched_rt_runtime_us                          Must be [-1, 1000000]
kernel.softlockup_panic                             Must be {0, 1}
kernel.yama.ptrace_scope                            Must be {0, 1, 2, 3}
kernel.kptr_restrict                                Must be {0, 1, 2}
kernel.dmesg_restrict                               Must be {0, 1}
kernel.sysrq                                        Must be [0, 511]
fs.aio-max-nr                                       Must be between [65536, 4194304]
fs.file-max                                         Must be between [104857, 67108864]
fs.inotify.max_user_instances                       Must be between [8192, 1048576]
fs.inotify.max_user_watches                         Must be between [8192, 1048576]
fs.nr_open                                          Must be between [1048576, 2147483584]
vm.dirty_background_ratio                           Must be between [1, 100]
vm.dirty_background_bytes                           Must be between [0, 68719476736]
vm.dirty_expire_centisecs                           Must be between [0, 6000]
vm.dirty_ratio                                      Must be between [1, 100]
vm.dirty_bytes                                      Must be between [0, 68719476736]
vm.dirty_writeback_centisecs                        Must be between [0, 1000]
vm.max_map_count                                    Must be between [65536, 2147483647]
vm.overcommit_memory                                Must be one of {0, 1, 2}. Not supported on machines with less than 15 GB memory.
vm.overcommit_ratio                                 Must be between [0, 100]
vm.vfs_cache_pressure                               Must be between [0, 100]
vm.swappiness                                       Must be between [0, 200]
vm.watermark_scale_factor                           Must be between [10, 3000]
vm.min_free_kbytes                                  Must be between [67584, 1048576]

List of supported hugepage sizes in 'hugepageConfig'.

KEY              VALUE
hugepage_size2m  Number of 2M huge pages, any positive integer
hugepage_size1g  Number of 1G huge pages, any positive integer

List of supported keys in 'swapConfig' under 'linuxConfig'.

KEY                       VALUE
enabled                   boolean
encryptionConfig          specify encryption settings for the swap space
bootDiskProfile           specify swap on the node's boot disk
ephemeralLocalSsdProfile  specify swap on the local SSD shared with pod ephemeral storage
dedicatedLocalSsdProfile  specify swap on a new, separate local NVMe SSD exclusively for swap

List of supported keys in 'encryptionConfig' under 'swapConfig'.

KEY       VALUE
disabled  boolean

List of supported keys in 'bootDiskProfile' under 'swapConfig'.

KEY              VALUE
swapSizeGib      integer
swapSizePercent  integer

List of supported keys in 'ephemeralLocalSsdProfile' under 'swapConfig'.

KEY              VALUE
swapSizeGib      integer
swapSizePercent  integer

List of supported keys in 'dedicatedLocalSsdProfile' under 'swapConfig'.

KEY        VALUE
diskCount  integer

List of supported keys in 'nodeKernelModuleLoading'.

KEY     VALUE
policy  ENFORCE_SIGNED_MODULES, DO_NOT_ENFORCE_SIGNED_MODULES, POLICY_UNSPECIFIED

The upper limit for total allocated hugepage size differs based upon machine size.
- On machines with less than 30 GB of memory: 60% of the total memory. For example, on an e2-standard-2 machine with 8 GB of memory, you can't allocate more than 4.8 GB for hugepages.
- On machines with more than 30 GB of memory: 80% of the total memory. For example, on c4a-standard-8 machines with 32 GB of memory, hugepages cannot exceed 25.6 GB.
1G hugepages are only available in the following machine families: c3, m2, c2d, c3d, h3, m3, a2, a3, g2.
Supported values for 'cgroupMode' under 'linuxConfig'.
CGROUP_MODE_V1: Use cgroupv1 on the node pool.
CGROUP_MODE_V2: Use cgroupv2 on the node pool.
CGROUP_MODE_UNSPECIFIED: Use the default GKE cgroup configuration.
Supported values for 'transparentHugepageEnabled' under 'linuxConfig', which controls transparent hugepage support for anonymous memory.
TRANSPARENT_HUGEPAGE_ENABLED_ALWAYS: Transparent hugepage is enabled system-wide.
TRANSPARENT_HUGEPAGE_ENABLED_MADVISE: Transparent hugepage is enabled inside MADV_HUGEPAGE regions. This is the default kernel configuration.
TRANSPARENT_HUGEPAGE_ENABLED_NEVER: Transparent hugepage is disabled.
TRANSPARENT_HUGEPAGE_ENABLED_UNSPECIFIED: Default value. GKE will not modify the kernel configuration.
Supported values for 'transparentHugepageDefrag' under 'linuxConfig', which defines the transparent hugepage defrag configuration on the node.
TRANSPARENT_HUGEPAGE_DEFRAG_ALWAYS: An application requesting THP will stall on allocation failure and directly reclaim pages and compact memory in an effort to allocate a THP immediately.
TRANSPARENT_HUGEPAGE_DEFRAG_DEFER: An application will wake kswapd in the background to reclaim pages and wake kcompactd to compact memory so that THP is available in the near future. It is the responsibility of khugepaged to then install the THP pages later.
TRANSPARENT_HUGEPAGE_DEFRAG_DEFER_WITH_MADVISE: An application will enter direct reclaim and compaction like always, but only for regions that have used madvise(MADV_HUGEPAGE); all other regions will wake kswapd in the background to reclaim pages and wake kcompactd to compact memory so that THP is available in the near future.
TRANSPARENT_HUGEPAGE_DEFRAG_MADVISE: An application will enter direct reclaim and compaction like always, but only for regions that have used madvise(MADV_HUGEPAGE); all other regions will wake kswapd in the background to reclaim pages and wake kcompactd to compact memory so that THP is available in the near future.
TRANSPARENT_HUGEPAGE_DEFRAG_NEVER: An application will never enter direct reclaim or compaction.
TRANSPARENT_HUGEPAGE_DEFRAG_UNSPECIFIED: Default value. GKE will not modify the kernel configuration.
Supported values for 'policy' under 'nodeKernelModuleLoading'.
POLICY_UNSPECIFIED: Default behavior. GKE selects the image based on node type. For CPU and TPU nodes, the image will not allow loading external kernel modules. For GPU nodes, the image will allow loading any module, whether it is signed or not.
ENFORCE_SIGNED_MODULES: Enforced signature verification: node pools will use a Container-Optimized OS image configured to allow loading of Google-signed external kernel modules. Loadpin is enabled but configured to exclude modules, and kernel module signature checking is enforced.
DO_NOT_ENFORCE_SIGNED_MODULES: Do not enforce kernel module signature enforcement. Mirrors existing DEFAULT behavior.
Note: updating the system configuration of an existing node pool requires recreation of the nodes, which might cause a disruption.
Use a full or relative path to a local file containing the value of system_config.
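Putting this together, an illustrative invocation applying a local configuration file such as the YAML example above (the path is a placeholder):
gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --system-config-from-file=./system-config.yaml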
--tags=[TAG,…] - Replaces all the user-specified Compute Engine tags on all nodes in an existing node pool with the given tags (comma separated).
Examples:
gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --tags=tag1,tag2
New nodes, including ones created by resize or recreate, will have these tags on the Compute Engine API instance object, and these tags can be used in firewall rules. See https://cloud.google.com/sdk/gcloud/reference/compute/firewall-rules/create for examples.
--windows-os-version=WINDOWS_OS_VERSION - Specifies the Windows Server image to use when creating a Windows node pool. Valid values are "ltsc2019" (the LTSC2019 server image) and "ltsc2022" (the LTSC2022 server image). If the node pool doesn't specify a Windows Server image OS version, ltsc2019 is used by default.
WINDOWS_OS_VERSION must be one of: ltsc2019, ltsc2022.
--workload-metadata=WORKLOAD_METADATA - Type of metadata server available to pods running in the node pool.
WORKLOAD_METADATA must be one of:
EXPOSED - [DEPRECATED] Pods running in this node pool have access to the node's underlying Compute Engine Metadata Server.
GCE_METADATA - Pods running in this node pool have access to the node's underlying Compute Engine Metadata Server.
GKE_METADATA - Run the Kubernetes Engine Metadata Server on this node. The Kubernetes Engine Metadata Server exposes a metadata API to workloads that is compatible with the V1 Compute Metadata APIs exposed by the Compute Engine and App Engine Metadata Servers. This feature can only be enabled if Workload Identity is enabled at the cluster level.
GKE_METADATA_SERVER - [DEPRECATED] Run the Kubernetes Engine Metadata Server on this node. The Kubernetes Engine Metadata Server exposes a metadata API to workloads that is compatible with the V1 Compute Metadata APIs exposed by the Compute Engine and App Engine Metadata Servers. This feature can only be enabled if Workload Identity is enabled at the cluster level.
SECURE - [DEPRECATED] Prevents pods not in hostNetwork from accessing certain VM metadata, specifically kube-env, which contains Kubelet credentials, and the instance identity token. This is a temporary security solution available while the bootstrapping process for cluster nodes is being redesigned with significant security improvements. This feature is scheduled to be deprecated in the future and later removed.
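For example, a hypothetical invocation switching the pool to the GKE metadata server (the cluster must already have Workload Identity enabled; names are placeholders):
gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --workload-metadata=GKE_METADATA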
- Or at least one of these can be specified:
- Upgrade settings
--autoscaled-rollout-policy=[wait-for-drain-duration=WAIT-FOR-DRAIN-DURATION] - Autoscaled rollout policy options for blue-green upgrade.
wait-for-drain-duration - (Optional) Time in seconds to wait after cordoning the blue pool before draining the nodes.
Examples:
gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --enable-blue-green-upgrade --autoscaled-rollout-policy=""
gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --enable-blue-green-upgrade --autoscaled-rollout-policy=wait-for-drain-duration=7200s
--enable-blue-green-upgrade - Changes node pool upgrade strategy to blue-green upgrade.
--enable-surge-upgrade - Changes node pool upgrade strategy to surge upgrade.
--max-surge-upgrade=MAX_SURGE_UPGRADE - Number of extra (surge) nodes to be created on each upgrade of the node pool.
Specifies the number of extra (surge) nodes to be created during this node pool's upgrades. For example, running the following command will result in creating an extra node each time the node pool is upgraded:
gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --max-surge-upgrade=1 --max-unavailable-upgrade=0
Must be used in conjunction with '--max-unavailable-upgrade'.
--max-unavailable-upgrade=MAX_UNAVAILABLE_UPGRADE - Number of nodes that can be unavailable at the same time on each upgrade of the node pool.
Specifies the number of nodes that can be unavailable at the same time during this node pool's upgrades. For example, assume the node pool has 5 nodes; running the following command will result in 3 nodes being upgraded in parallel (1 + 2), while always keeping at least 3 (5 - 2) available each time the node pool is upgraded:
gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --max-surge-upgrade=1 --max-unavailable-upgrade=2
Must be used in conjunction with '--max-surge-upgrade'.
--node-pool-soak-duration=NODE_POOL_SOAK_DURATION - Time in seconds to be spent waiting during blue-green upgrade before deleting the blue pool and completing the upgrade.
gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --node-pool-soak-duration=600s
--standard-rollout-policy=[batch-node-count=BATCH_NODE_COUNT,batch-percent=BATCH_NODE_PERCENTAGE,batch-soak-duration=BATCH_SOAK_DURATION,…] - Standard rollout policy options for blue-green upgrade.
Batch sizes are specified by one of batch-node-count or batch-percent. The duration between batches is specified by batch-soak-duration.
gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --standard-rollout-policy=batch-node-count=3,batch-soak-duration=60s
gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --standard-rollout-policy=batch-percent=0.3,batch-soak-duration=60s
- Or at least one of these can be specified:
- Node config
--boot-disk-provisioned-iops=BOOT_DISK_PROVISIONED_IOPS - Configure the Provisioned IOPS for the node pool boot disks. Only valid for hyperdisk-balanced boot disks.
--boot-disk-provisioned-throughput=BOOT_DISK_PROVISIONED_THROUGHPUT - Configure the Provisioned Throughput for the node pool boot disks. Only valid for hyperdisk-balanced boot disks.
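For example, an illustrative invocation tuning a hyperdisk-balanced boot disk (the IOPS and throughput values are placeholders, not recommendations):
gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --disk-type=hyperdisk-balanced --boot-disk-provisioned-iops=10000 --boot-disk-provisioned-throughput=600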
--disk-size=DISK_SIZE - Size for node VM boot disks in GB. Defaults to 100GB.
--disk-type=DISK_TYPE - Type of the node VM boot disk. For version 1.24 and later, defaults to pd-balanced. For versions earlier than 1.24, defaults to pd-standard.
DISK_TYPE must be one of: pd-standard, pd-ssd, pd-balanced, hyperdisk-balanced, hyperdisk-extreme, hyperdisk-throughput.
--machine-type=MACHINE_TYPE - The type of machine to use for nodes. Defaults to e2-medium. The list of predefined machine types is available using the following command:
gcloud compute machine-types list
You can also specify custom machine types by providing a string with the format "custom-CPUS-RAM" where "CPUS" is the number of virtual CPUs and "RAM" is the amount of RAM in MiB.
For example, to create a node pool using custom machines with 2 vCPUs and 12 GB of RAM:
gcloud alpha container node-pools update high-mem-pool --machine-type=custom-2-12288
- Or at least one of these can be specified:
- Cluster autoscaling
--enable-autoprovisioning - Enables Cluster Autoscaler to treat the node pool as if it was autoprovisioned.
Cluster Autoscaler will be able to delete the node pool if it's unneeded.
--enable-autoscaling - Enables autoscaling for a node pool.
Enables autoscaling in the node pool specified by --node-pool, or the default node pool if --node-pool is not provided. If not already set, --max-nodes or --total-max-nodes must also be set.
--location-policy=LOCATION_POLICY - Location policy specifies the algorithm used when scaling up the node pool.
BALANCED - A best-effort policy that aims to balance the sizes of available zones.
ANY - Instructs the cluster autoscaler to prioritize utilization of unused reservations, and reduces preemption risk for Spot VMs.
LOCATION_POLICY must be one of: BALANCED, ANY.
--max-nodes=MAX_NODES - Maximum number of nodes per zone in the node pool.
Maximum number of nodes per zone to which the node pool specified by --node-pool (or default node pool if unspecified) can scale. Ignored unless --enable-autoscaling is also specified.
--min-nodes=MIN_NODES - Minimum number of nodes per zone in the node pool.
Minimum number of nodes per zone to which the node pool specified by --node-pool (or default node pool if unspecified) can scale. Ignored unless --enable-autoscaling is also specified.
--total-max-nodes=TOTAL_MAX_NODES - Maximum number of all nodes in the node pool.
Maximum number of all nodes to which the node pool specified by --node-pool (or default node pool if unspecified) can scale. Ignored unless --enable-autoscaling is also specified.
--total-min-nodes=TOTAL_MIN_NODES - Minimum number of all nodes in the node pool.
Minimum number of all nodes to which the node pool specified by --node-pool (or default node pool if unspecified) can scale. Ignored unless --enable-autoscaling is also specified.
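For example, an illustrative invocation enabling autoscaling between 1 and 5 nodes per zone (names and bounds are placeholders):
gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --enable-autoscaling --min-nodes=1 --max-nodes=5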
- Or at least one of these can be specified:
- Node management
--enable-autorepair - Enable node autorepair feature for a node pool.
gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --enable-autorepair
See https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-repair for more info.
--enable-autoupgrade - Sets autoupgrade feature for a node pool.
gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --enable-autoupgrade
See https://cloud.google.com/kubernetes-engine/docs/node-auto-upgrades for more info.
- Or at least one of these can be specified:
- Node drain settings
--node-drain-grace-period-seconds=NODE_DRAIN_GRACE_PERIOD_SECONDS - The grace period in seconds for nodes to drain before being forcefully removed.
--node-drain-pdb-timeout-seconds=NODE_DRAIN_PDB_TIMEOUT_SECONDS - The timeout in seconds for the node pool to be drained.
--respect-pdb-during-node-pool-deletion - Whether to respect PDBs when deleting nodes in the node pool.
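For example, an illustrative invocation combining the drain settings (the durations are placeholders):
gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --respect-pdb-during-node-pool-deletion --node-drain-grace-period-seconds=300 --node-drain-pdb-timeout-seconds=600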
- OPTIONAL FLAGS
--async - Return immediately, without waiting for the operation in progress to complete.
--cluster=CLUSTER - The name of the cluster. Overrides the default container/cluster property value for this command invocation.
- At most one of these can be specified:
--location=LOCATION - Compute zone or region (e.g. us-central1-a or us-central1) for the cluster. Overrides the default compute/region or compute/zone value for this command invocation. Prefer using this flag over the --region or --zone flags.
--region=REGION - Compute region (e.g. us-central1) for a regional cluster. Overrides the default compute/region property value for this command invocation.
--zone=ZONE, -z ZONE - Compute zone (e.g. us-central1-a) for a zonal cluster. Overrides the default compute/zone property value for this command invocation.
- GCLOUD WIDE FLAGS
- These flags are available to all commands:
--access-token-file, --account, --billing-project, --configuration, --flags-file, --flatten, --format, --help, --impersonate-service-account, --log-http, --project, --quiet, --trace-token, --user-output-enabled, --verbosity.
Run $ gcloud help for details.
- NOTES
- This command is currently in alpha and might change without notice. If this command fails with API permission errors despite specifying the correct project, you might be trying to access an API with an invitation-only early access allowlist. These variants are also available:
gcloud container node-pools update
gcloud beta container node-pools update