gcloud alpha container node-pools update

NAME
gcloud alpha container node-pools update - updates a node pool in a running cluster
SYNOPSIS
gcloud alpha container node-pools update NAME (--accelerator=[type=TYPE,[count=COUNT,gpu-driver-version=GPU_DRIVER_VERSION,gpu-partition-size=GPU_PARTITION_SIZE,gpu-sharing-strategy=GPU_SHARING_STRATEGY,max-shared-clients-per-gpu=MAX_SHARED_CLIENTS_PER_GPU],…] | --confidential-node-type=CONFIDENTIAL_NODE_TYPE | --containerd-config-from-file=PATH_TO_FILE | --enable-confidential-nodes | --enable-gvnic | --enable-image-streaming | --enable-insecure-kubelet-readonly-port | --enable-kernel-module-signature-enforcement | --enable-private-nodes | --enable-queued-provisioning | --flex-start | --labels=[KEY=VALUE,…] | --logging-variant=LOGGING_VARIANT | --max-run-duration=MAX_RUN_DURATION | --network-performance-configs=[PROPERTY=VALUE,…] | --node-labels=[NODE_LABEL,…] | --node-locations=ZONE,[ZONE,…] | --node-taints=[NODE_TAINT,…] | --resource-manager-tags=[KEY=VALUE,…] | --storage-pools=STORAGE_POOL,[…] | --system-config-from-file=PATH_TO_FILE | --tags=[TAG,…] | --windows-os-version=WINDOWS_OS_VERSION | --workload-metadata=WORKLOAD_METADATA | --autoscaled-rollout-policy=[wait-for-drain-duration=WAIT-FOR-DRAIN-DURATION] --enable-blue-green-upgrade --enable-surge-upgrade --max-surge-upgrade=MAX_SURGE_UPGRADE --max-unavailable-upgrade=MAX_UNAVAILABLE_UPGRADE --node-pool-soak-duration=NODE_POOL_SOAK_DURATION --standard-rollout-policy=[batch-node-count=BATCH_NODE_COUNT,batch-percent=BATCH_NODE_PERCENTAGE,batch-soak-duration=BATCH_SOAK_DURATION,…] | --boot-disk-provisioned-iops=BOOT_DISK_PROVISIONED_IOPS --boot-disk-provisioned-throughput=BOOT_DISK_PROVISIONED_THROUGHPUT --disk-size=DISK_SIZE --disk-type=DISK_TYPE --machine-type=MACHINE_TYPE | --enable-autoprovisioning --enable-autoscaling --location-policy=LOCATION_POLICY --max-nodes=MAX_NODES --min-nodes=MIN_NODES --total-max-nodes=TOTAL_MAX_NODES --total-min-nodes=TOTAL_MIN_NODES | --enable-autorepair --enable-autoupgrade | --node-drain-grace-period-seconds=NODE_DRAIN_GRACE_PERIOD_SECONDS --node-drain-pdb-timeout-seconds=NODE_DRAIN_PDB_TIMEOUT_SECONDS --respect-pdb-during-node-pool-deletion) [--async] [--cluster=CLUSTER] [--location=LOCATION | --region=REGION | --zone=ZONE, -z ZONE] [GCLOUD_WIDE_FLAG …]
DESCRIPTION
(ALPHA) gcloud alpha container node-pools update updates a node pool in a Google Kubernetes Engine cluster.
EXAMPLES
To turn on node autoupgrade in "node-pool-1" in the cluster "sample-cluster", run:
gcloud alpha container node-pools update node-pool-1 --cluster=sample-cluster --enable-autoupgrade
POSITIONAL ARGUMENTS
NAME
The name of the node pool.
REQUIRED FLAGS
Exactly one of these must be specified:
--accelerator=[type=TYPE,[count=COUNT,gpu-driver-version=GPU_DRIVER_VERSION,gpu-partition-size=GPU_PARTITION_SIZE,gpu-sharing-strategy=GPU_SHARING_STRATEGY,max-shared-clients-per-gpu=MAX_SHARED_CLIENTS_PER_GPU],…]
Attaches accelerators (e.g. GPUs) to all nodes.
type
(Required) The specific type (e.g. nvidia-tesla-t4 for NVIDIA T4) of accelerator to attach to the instances. Use gcloud compute accelerator-types list to learn about all available accelerator types.
count
(Optional) The number of accelerators to attach to the instances. The default value is 1.
gpu-driver-version
(Optional) The NVIDIA driver version to install. GPU_DRIVER_VERSION must be one of:
`default`: Install the default driver version for this GKE version. For GKE version 1.30.1-gke.1156000 and later, this is the default option.
`latest`: Install the latest driver version available for this GKE version. Can only be used for nodes that use Container-Optimized OS.
`disabled`: Skip automatic driver installation. You must manually install a driver after you create the cluster. For GKE version 1.30.1-gke.1156000 and earlier, this is the default option. To manually install the GPU driver, refer to https://cloud.google.com/kubernetes-engine/docs/how-to/gpus#installing_drivers.
gpu-partition-size
(Optional) The GPU partition size used when running multi-instance GPUs. For information about multi-instance GPUs, refer to: https://cloud.google.com/kubernetes-engine/docs/how-to/gpus-multi
gpu-sharing-strategy
(Optional) The GPU sharing strategy (e.g. time-sharing) to use. For information about GPU sharing, refer to: https://cloud.google.com/kubernetes-engine/docs/concepts/timesharing-gpus
max-shared-clients-per-gpu
(Optional) The max number of containers allowed to share each GPU on the node. This field is used together with gpu-sharing-strategy.
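For example, to attach two NVIDIA T4 GPUs per node with the default driver (illustrative type and count values), a command along these lines could be used:

gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --accelerator=type=nvidia-tesla-t4,count=2,gpu-driver-version=default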
--confidential-node-type=CONFIDENTIAL_NODE_TYPE
Recreate all the nodes in the node pool to be confidential VM (https://docs.cloud.google.com/compute/docs/about-confidential-vm). CONFIDENTIAL_NODE_TYPE must be one of: sev, sev_snp, tdx, disabled.
--containerd-config-from-file=PATH_TO_FILE
Path of the YAML file that contains containerd configuration entries like configuring access to private image registries.

For detailed information on the configuration usage, please refer to https://cloud.google.com/kubernetes-engine/docs/how-to/customize-containerd-configuration.

Note: Updating the containerd configuration of an existing cluster or node pool requires recreation of the existing nodes, which might cause disruptions in running workloads.

Use a full or relative path to a local file containing the value of containerd_config.
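As a rough sketch, a file passed to this flag for access to a private registry secured by a custom CA might look like the following. The registry domain, Secret Manager path, and file name are illustrative; the authoritative schema is described at the page linked above:

privateRegistryAccessConfig:
  enabled: true
  certificateAuthorityDomainConfig:
  - fqdns:
    - my-registry.example.com
    gcpSecretManagerCertificateConfig:
      secretURI: projects/my-project/secrets/my-ca-certificate/versions/latest

gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --containerd-config-from-file=containerd-config.yaml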

--enable-confidential-nodes
Recreate all the nodes in the node pool to be confidential VM (https://docs.cloud.google.com/compute/docs/about-confidential-vm).
--enable-gvnic
Enable the use of GVNIC for this cluster. Requires re-creation of nodes using either a node-pool upgrade or node-pool creation.
--enable-image-streaming
Specifies whether to enable image streaming on the node pool.
--enable-insecure-kubelet-readonly-port
Enables the Kubelet's insecure read-only port.

To disable the read-only port on a cluster or node pool, set the flag to --no-enable-insecure-kubelet-readonly-port.
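For example (illustrative names):

gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --no-enable-insecure-kubelet-readonly-port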

--enable-kernel-module-signature-enforcement
Enforces that kernel modules are signed on all nodes in the node pool. This setting overrides the cluster-level setting. For example, if the cluster disables enforcement, you can enable enforcement only for a specific node pool. When the policy is modified on an existing node pool, nodes will be immediately recreated to use the new policy. Use --no-enable-kernel-module-signature-enforcement to disable.

Examples:

gcloud alpha container node-pools update node-pool-1 --enable-kernel-module-signature-enforcement
--enable-private-nodes
Enables provisioning nodes with private IP addresses only.

The control plane still communicates with all nodes through private IP addresses only, regardless of whether private nodes are enabled or disabled.
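For example (illustrative names):

gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --enable-private-nodes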

--enable-queued-provisioning
Mark the node pool as Queued only. This means that all new nodes can be obtained only through queuing via the ProvisioningRequest API.
gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --enable-queued-provisioning

and other required parameters. For more details see: https://cloud.google.com/kubernetes-engine/docs/how-to/provisioningrequest
--flex-start
Start the node pool with the Flex Start provisioning model.
gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --flex-start

and other required parameters. For more details see: https://cloud.google.com/kubernetes-engine/docs/how-to/provisioningrequest
--labels=[KEY=VALUE,…]
Labels to apply to the Google Cloud resources of node pools in the Kubernetes Engine cluster. These are unrelated to Kubernetes labels. Warning: updating these labels causes the node(s) to be recreated.

Examples:

gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --labels=label1=value1,label2=value2
--logging-variant=LOGGING_VARIANT
Specifies the logging variant that will be deployed on all the nodes in the node pool. If the node pool doesn't specify a logging variant, then the logging variant specified for the cluster will be deployed on all the nodes in the node pool. Valid logging variants are MAX_THROUGHPUT, DEFAULT. LOGGING_VARIANT must be one of:
DEFAULT
'DEFAULT' variant requests minimal resources but may not guarantee high throughput.
MAX_THROUGHPUT
'MAX_THROUGHPUT' variant requests more node resources and is able to achieve logging throughput up to 10MB per sec.
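For example, to request the high-throughput logging agent for a node pool (illustrative names):

gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --logging-variant=MAX_THROUGHPUT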
--max-run-duration=MAX_RUN_DURATION
Limit the runtime of each node in the node pool to the specified duration.
gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --max-run-duration=3600s
--network-performance-configs=[PROPERTY=VALUE,…]
Configures network performance settings for the node pool. If this flag is not specified, the pool will be created with its default network performance configuration.
total-egress-bandwidth-tier
Total egress bandwidth is the available outbound bandwidth from a VM, regardless of whether the traffic is going to internal IP or external IP destinations. The following tier values are allowed: [TIER_UNSPECIFIED, TIER_1]
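For example, to request Tier 1 egress bandwidth for the node pool (illustrative names):

gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --network-performance-configs=total-egress-bandwidth-tier=TIER_1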
--node-labels=[NODE_LABEL,…]
Replaces all the user specified Kubernetes labels on all nodes in an existing node pool with the given labels.

Examples:

gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --node-labels=label1=value1,label2=value2

Updating the node pool's --node-labels flag applies the labels to the Kubernetes Node objects for existing nodes in-place; it does not re-create or replace nodes. New nodes, including ones created by resizing or re-creating nodes, will have these labels on the Kubernetes API Node object. The labels can be used in the nodeSelector field. See https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/ for examples.

Note that Kubernetes labels, intended to associate cluster components and resources with one another and manage resource lifecycles, are different from Google Kubernetes Engine labels that are used for the purpose of tracking billing and usage information.

--node-locations=ZONE,[ZONE,…]
Set of zones in which the node pool's nodes should be located. Changing the locations for a node pool will result in nodes being either created or removed from the node pool, depending on whether locations are being added or removed.

Multiple locations can be specified, separated by commas. For example:

gcloud alpha container node-pools update node-pool-1 --cluster=sample-cluster --node-locations=us-central1-a,us-central1-b
--node-taints=[NODE_TAINT,…]
Replaces all the user specified Kubernetes taints on all nodes in an existing node pool, which can be used with tolerations for pod scheduling.

Examples:

gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --node-taints=key1=val1:NoSchedule,key2=val2:PreferNoSchedule

To read more about node-taints, see https://cloud.google.com/kubernetes-engine/docs/node-taints.

--resource-manager-tags=[KEY=VALUE,…]
Replaces all the user specified resource manager tags on all nodes in an existing node pool in a Standard cluster with the given comma-separated resource manager tags that have the GCE_FIREWALL purpose.

Examples:

gcloud alpha container node-pools update example-node-pool --resource-manager-tags=tagKeys/1234=tagValues/2345
gcloud alpha container node-pools update example-node-pool --resource-manager-tags=my-project/key1=value1
gcloud alpha container node-pools update example-node-pool --resource-manager-tags=12345/key1=value1,23456/key2=value2
gcloud alpha container node-pools update example-node-pool --resource-manager-tags=

All nodes, including nodes that are resized or re-created, will have the specified tags on the corresponding Instance object in the Compute Engine API. You can reference these tags in network firewall policy rules. For instructions, see https://cloud.google.com/firewall/docs/use-tags-for-firewalls.

--storage-pools=STORAGE_POOL,[…]
A list of storage pools where the node pool's boot disks will be provisioned. Replaces all the current storage pools of an existing node pool with the specified storage pools.

STORAGE_POOL must be in the format projects/project/zones/zone/storagePools/storagePool
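For example (illustrative project, zone, and pool names):

gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --storage-pools=projects/my-project/zones/us-central1-a/storagePools/my-storage-pool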

--system-config-from-file=PATH_TO_FILE
Path of the YAML/JSON file that contains the node configuration, including Linux kernel parameters (sysctls) and kubelet configs.

Examples:

kubeletConfig:
  cpuManagerPolicy: static
  memoryManager:
    policy: Static
  topologyManager:
    policy: BestEffort
    scope: pod
linuxConfig:
  sysctl:
    net.core.somaxconn: '2048'
    net.ipv4.tcp_rmem: '4096 87380 6291456'
  hugepageConfig:
    hugepage_size2m: '1024'
    hugepage_size1g: '2'
  swapConfig:
    enabled: true
    bootDiskProfile:
      swapSizeGib: 8
  cgroupMode: 'CGROUP_MODE_V2'
  nodeKernelModuleLoading:
    policy: 'ENFORCE_SIGNED_MODULES'

List of supported kubelet configs in 'kubeletConfig'.

KEY                                     VALUE
cpuManagerPolicy                        either 'static' or 'none'
cpuCFSQuota                             true or false (enabled by default)
cpuCFSQuotaPeriod                       interval (e.g., '100ms'. The value must be between 1ms and 1 second, inclusive.)
memoryManager                           specify memory manager policy
topologyManager                         specify topology manager policy and scope
podPidsLimit                            integer (The value must be greater than or equal to 1024 and less than 4194304.)
containerLogMaxSize                     positive number plus unit suffix (e.g., '100Mi', '0.2Gi'. The value must be between 10Mi and 500Mi, inclusive.)
containerLogMaxFiles                    integer (The value must be between [2, 10].)
imageGcLowThresholdPercent              integer (The value must be between [10, 85], and lower than imageGcHighThresholdPercent.)
imageGcHighThresholdPercent             integer (The value must be between [10, 85], and greater than imageGcLowThresholdPercent.)
imageMinimumGcAge                       interval (e.g., '100s', '1m'. The value must be less than '2m'.)
imageMaximumGcAge                       interval (e.g., '100s', '1m'. The value must be greater than imageMinimumGcAge.)
evictionSoft                            specify eviction soft thresholds
evictionSoftGracePeriod                 specify eviction soft grace period
evictionMinimumReclaim                  specify eviction minimum reclaim thresholds
evictionMaxPodGracePeriodSeconds        integer (Max grace period for pod termination during eviction, in seconds. The value must be between [0, 300].)
shutdownGracePeriodSeconds              integer (Grace period for pods terminating on node shutdown, in seconds. Allowed values: 0, 30, 120.)
shutdownGracePeriodCriticalPodsSeconds  integer (Grace period for critical pods terminating on node shutdown, in seconds. The value must be between [0, 120] and less than shutdownGracePeriodSeconds.)
allowedUnsafeSysctls                    list of sysctls (Allowlisted groups: 'kernel.shm*', 'kernel.msg*', 'kernel.sem', 'fs.mqueue.*', and 'net.*', and sysctls under the groups.)
singleProcessOomKill                    true or false
maxParallelImagePulls                   integer (The value must be between [2, 5].)
List of supported keys in memoryManager in 'kubeletConfig'.
KEY     VALUE
policy  either 'Static' or 'None'
List of supported keys in topologyManager in 'kubeletConfig'.
KEY     VALUE
policy  either 'none' or 'best-effort' or 'single-numa-node' or 'restricted'
scope   either 'pod' or 'container'
List of supported keys in evictionSoft in 'kubeletConfig'.
KEY                VALUE
memoryAvailable    quantity (e.g., '100Mi', '1Gi'. Represents the amount of memory available before soft eviction. The value must be at least 100Mi and less than 50% of the node's memory.)
nodefsAvailable    percentage (e.g., '20%'. Represents the nodefs available before soft eviction. The value must be between 10% and 50%, inclusive.)
nodefsInodesFree   percentage (e.g., '20%'. Represents the nodefs inodes free before soft eviction. The value must be between 5% and 50%, inclusive.)
imagefsAvailable   percentage (e.g., '20%'. Represents the imagefs available before soft eviction. The value must be between 15% and 50%, inclusive.)
imagefsInodesFree  percentage (e.g., '20%'. Represents the imagefs inodes free before soft eviction. The value must be between 5% and 50%, inclusive.)
pidAvailable       percentage (e.g., '20%'. Represents the pid available before soft eviction. The value must be between 10% and 50%, inclusive.)
List of supported keys in evictionSoftGracePeriod in 'kubeletConfig'.
KEY                VALUE
memoryAvailable    duration (e.g., '30s', '1m'. The grace period for soft eviction for this resource. The value must be positive and no more than '5m'.)
nodefsAvailable    duration (e.g., '30s', '1m'. The grace period for soft eviction for this resource. The value must be positive and no more than '5m'.)
nodefsInodesFree   duration (e.g., '30s', '1m'. The grace period for soft eviction for this resource. The value must be positive and no more than '5m'.)
imagefsAvailable   duration (e.g., '30s', '1m'. The grace period for soft eviction for this resource. The value must be positive and no more than '5m'.)
imagefsInodesFree  duration (e.g., '30s', '1m'. The grace period for soft eviction for this resource. The value must be positive and no more than '5m'.)
pidAvailable       duration (e.g., '30s', '1m'. The grace period for soft eviction for this resource. The value must be positive and no more than '5m'.)
List of supported keys in evictionMinimumReclaim in 'kubeletConfig'.
KEY                VALUE
memoryAvailable    percentage (e.g., '5%'. Represents the minimum reclaim threshold for memory available. The value must be positive and no more than 10%.)
nodefsAvailable    percentage (e.g., '5%'. Represents the minimum reclaim threshold for nodefs available. The value must be positive and no more than 10%.)
nodefsInodesFree   percentage (e.g., '5%'. Represents the minimum reclaim threshold for nodefs inodes free. The value must be positive and no more than 10%.)
imagefsAvailable   percentage (e.g., '5%'. Represents the minimum reclaim threshold for imagefs available. The value must be positive and no more than 10%.)
imagefsInodesFree  percentage (e.g., '5%'. Represents the minimum reclaim threshold for imagefs inodes free. The value must be positive and no more than 10%.)
pidAvailable       percentage (e.g., '5%'. Represents the minimum reclaim threshold for pid available. The value must be positive and no more than 10%.)
List of supported sysctls in 'linuxConfig'.
KEY                                                 VALUE
net.core.netdev_max_backlog                         Any positive integer, less than 2147483647
net.core.rmem_default                               Must be between [2304, 2147483647]
net.core.rmem_max                                   Must be between [2304, 2147483647]
net.core.wmem_default                               Must be between [4608, 2147483647]
net.core.wmem_max                                   Must be between [4608, 2147483647]
net.core.optmem_max                                 Any positive integer, less than 2147483647
net.core.somaxconn                                  Must be between [128, 2147483647]
net.ipv4.tcp_rmem                                   Any positive integer tuple
net.ipv4.tcp_wmem                                   Any positive integer tuple
net.ipv4.tcp_tw_reuse                               Must be {0, 1, 2}
net.ipv4.tcp_mtu_probing                            Must be {0, 1, 2}
net.ipv4.tcp_max_orphans                            Must be between [16384, 262144]
net.ipv4.tcp_max_tw_buckets                         Must be between [4096, 2147483647]
net.ipv4.tcp_syn_retries                            Must be between [1, 127]
net.ipv4.tcp_ecn                                    Must be {0, 1, 2}
net.ipv4.tcp_congestion_control                     Supported values for COS: 'reno', 'cubic', 'bbr', 'lp', 'htcp'. Supported values for Ubuntu: 'reno', 'cubic', 'bbr', 'lp', 'htcp', 'vegas', 'dctcp', 'bic', 'cdg', 'highspeed', 'hybla', 'illinois', 'nv', 'scalable', 'veno', 'westwood', 'yeah'.
net.netfilter.nf_conntrack_max                      Must be between [65536, 4194304]
net.netfilter.nf_conntrack_buckets                  Must be between [65536, 524288]. Recommend setting: nf_conntrack_max = nf_conntrack_buckets * 4
net.netfilter.nf_conntrack_tcp_timeout_close_wait   Must be between [60, 3600]
net.netfilter.nf_conntrack_tcp_timeout_time_wait    Must be between [1, 600]
net.netfilter.nf_conntrack_tcp_timeout_established  Must be between [600, 86400]
net.netfilter.nf_conntrack_acct                     Must be {0, 1}
kernel.shmmni                                       Must be between [4096, 32768]
kernel.shmmax                                       Must be between [0, 18446744073692774399]
kernel.shmall                                       Must be between [0, 18446744073692774399]
kernel.perf_event_paranoid                          Must be {-1, 0, 1, 2, 3}
kernel.sched_rt_runtime_us                          Must be [-1, 1000000]
kernel.softlockup_panic                             Must be {0, 1}
kernel.yama.ptrace_scope                            Must be {0, 1, 2, 3}
kernel.kptr_restrict                                Must be {0, 1, 2}
kernel.dmesg_restrict                               Must be {0, 1}
kernel.sysrq                                        Must be [0, 511]
fs.aio-max-nr                                       Must be between [65536, 4194304]
fs.file-max                                         Must be between [104857, 67108864]
fs.inotify.max_user_instances                       Must be between [8192, 1048576]
fs.inotify.max_user_watches                         Must be between [8192, 1048576]
fs.nr_open                                          Must be between [1048576, 2147483584]
vm.dirty_background_ratio                           Must be between [1, 100]
vm.dirty_background_bytes                           Must be between [0, 68719476736]
vm.dirty_expire_centisecs                           Must be between [0, 6000]
vm.dirty_ratio                                      Must be between [1, 100]
vm.dirty_bytes                                      Must be between [0, 68719476736]
vm.dirty_writeback_centisecs                        Must be between [0, 1000]
vm.max_map_count                                    Must be between [65536, 2147483647]
vm.overcommit_memory                                Must be one of {0, 1, 2}. Not supported on machines with less than 15 GB memory.
vm.overcommit_ratio                                 Must be between [0, 100]
vm.vfs_cache_pressure                               Must be between [0, 100]
vm.swappiness                                       Must be between [0, 200]
vm.watermark_scale_factor                           Must be between [10, 3000]
vm.min_free_kbytes                                  Must be between [67584, 1048576]
List of supported hugepage size in 'hugepageConfig'.
KEY              VALUE
hugepage_size2m  Number of 2M huge pages, any positive integer
hugepage_size1g  Number of 1G huge pages, any positive integer
List of supported keys in 'swapConfig' under 'linuxConfig'.
KEY                       VALUE
enabled                   boolean
encryptionConfig          specify encryption settings for the swap space
bootDiskProfile           specify swap on the node's boot disk
ephemeralLocalSsdProfile  specify swap on the local SSD shared with pod ephemeral storage
dedicatedLocalSsdProfile  specify swap on a new, separate local NVMe SSD exclusively for swap
List of supported keys in 'encryptionConfig' under 'swapConfig'.
KEY       VALUE
disabled  boolean
List of supported keys in 'bootDiskProfile' under 'swapConfig'.
KEY              VALUE
swapSizeGib      integer
swapSizePercent  integer
List of supported keys in 'ephemeralLocalSsdProfile' under 'swapConfig'.
KEY              VALUE
swapSizeGib      integer
swapSizePercent  integer
List of supported keys in 'dedicatedLocalSsdProfile' under 'swapConfig'.
KEY        VALUE
diskCount  integer
List of supported keys in 'nodeKernelModuleLoading'.
KEY     VALUE
policy  ENFORCE_SIGNED_MODULES, DO_NOT_ENFORCE_SIGNED_MODULES, POLICY_UNSPECIFIED
The upper limit for total allocated hugepage size differs based upon machine size.
  • On machines with less than 30 GB memory: 60% of the total memory. For example, on an e2-standard-2 machine with 8 GB of memory, you can't allocate more than 4.8 GB for hugepages.
  • On machines with more than 30 GB memory: 80% of the total memory. For example, on c4a-standard-8 machines with 32 GB of memory, hugepages cannot exceed 25.6 GB.

1G hugepages are only available in the following machine families: c3, m2, c2d, c3d, h3, m3, a2, a3, g2.

Supported values for 'cgroupMode' under 'linuxConfig'.

  • CGROUP_MODE_V1: Use cgroupv1 on the node pool.
  • CGROUP_MODE_V2: Use cgroupv2 on the node pool.
  • CGROUP_MODE_UNSPECIFIED: Use the default GKE cgroup configuration.

Supported values for 'transparentHugepageEnabled' under 'linuxConfig', which controls transparent hugepage support for anonymous memory.

  • TRANSPARENT_HUGEPAGE_ENABLED_ALWAYS: Transparent hugepage is enabled system wide.
  • TRANSPARENT_HUGEPAGE_ENABLED_MADVISE: Transparent hugepage is enabled inside MADV_HUGEPAGE regions. This is the default kernel configuration.
  • TRANSPARENT_HUGEPAGE_ENABLED_NEVER: Transparent hugepage is disabled.
  • TRANSPARENT_HUGEPAGE_ENABLED_UNSPECIFIED: Default value. GKE will not modify the kernel configuration.

Supported values for 'transparentHugepageDefrag' under 'linuxConfig', which defines the transparent hugepage defrag configuration on the node.

  • TRANSPARENT_HUGEPAGE_DEFRAG_ALWAYS: It means that an application requesting THP will stall on allocation failure and directly reclaim pages and compact memory in an effort to allocate a THP immediately.
  • TRANSPARENT_HUGEPAGE_DEFRAG_DEFER: It means that an application will wake kswapd in the background to reclaim pages and wake kcompactd to compact memory so that THP is available in the near future. It is the responsibility of khugepaged to then install the THP pages later.
  • TRANSPARENT_HUGEPAGE_DEFRAG_DEFER_WITH_MADVISE: It means that an application will enter direct reclaim and compaction like always, but only for regions that have used madvise(MADV_HUGEPAGE); all other regions will wake kswapd in the background to reclaim pages and wake kcompactd to compact memory so that THP is available in the near future.
  • TRANSPARENT_HUGEPAGE_DEFRAG_MADVISE: It means that an application will enter direct reclaim and compaction like always, but only for regions that have used madvise(MADV_HUGEPAGE); all other regions will wake kswapd in the background to reclaim pages and wake kcompactd to compact memory so that THP is available in the near future.
  • TRANSPARENT_HUGEPAGE_DEFRAG_NEVER: It means that an application will never enter direct reclaim or compaction.
  • TRANSPARENT_HUGEPAGE_DEFRAG_UNSPECIFIED: Default value. GKE will not modify the kernel configuration.

Supported values for 'policy' under 'nodeKernelModuleLoading'.

  • POLICY_UNSPECIFIED: Default behavior. GKE selects the image based on node type. For CPU and TPU nodes, the image will not allow loading external kernel modules. For GPU nodes, the image will allow loading any module, whether it is signed or not.
  • ENFORCE_SIGNED_MODULES: Enforced signature verification: node pools will use a Container-Optimized OS image configured to allow loading of Google-signed external kernel modules. Loadpin is enabled but configured to exclude modules, and kernel module signature checking is enforced.
  • DO_NOT_ENFORCE_SIGNED_MODULES: Do not enforce kernel module signature verification. Mirrors existing DEFAULT behavior.

Note: updating the system configuration of an existing node pool requires recreation of the nodes, which might cause a disruption.

Use a full or relative path to a local file containing the value of system_config.
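For example, assuming the configuration shown above is saved locally as system-config.yaml (an illustrative file name):

gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --system-config-from-file=system-config.yaml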

--tags=[TAG,…]
Replaces all the user specified Compute Engine tags on all nodes in an existing node pool with the given tags (comma separated).

Examples:

gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --tags=tag1,tag2

New nodes, including ones created by resize or recreate, will have these tags on the Compute Engine API instance object and these tags can be used in firewall rules. See https://cloud.google.com/sdk/gcloud/reference/compute/firewall-rules/create for examples.

--windows-os-version=WINDOWS_OS_VERSION
Specifies the Windows Server image to use when creating a Windows node pool. Valid values are "ltsc2019" and "ltsc2022", selecting the LTSC2019 or LTSC2022 server image respectively. If the node pool doesn't specify a Windows Server image version, ltsc2019 is used by default. WINDOWS_OS_VERSION must be one of: ltsc2019, ltsc2022.
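For example (illustrative names):

gcloud alpha container node-pools update windows-pool-1 --cluster=example-cluster --windows-os-version=ltsc2022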
--workload-metadata=WORKLOAD_METADATA
Type of metadata server available to pods running in the node pool. WORKLOAD_METADATA must be one of:
EXPOSED
[DEPRECATED] Pods running in this node pool have access to the node's underlying Compute Engine Metadata Server.
GCE_METADATA
Pods running in this node pool have access to the node's underlying Compute Engine Metadata Server.
GKE_METADATA
Run the Kubernetes Engine Metadata Server on this node. The Kubernetes Engine Metadata Server exposes a metadata API to workloads that is compatible with the V1 Compute Metadata APIs exposed by the Compute Engine and App Engine Metadata Servers. This feature can only be enabled if Workload Identity is enabled at the cluster level.
GKE_METADATA_SERVER
[DEPRECATED] Run the Kubernetes Engine Metadata Server on this node. The Kubernetes Engine Metadata Server exposes a metadata API to workloads that is compatible with the V1 Compute Metadata APIs exposed by the Compute Engine and App Engine Metadata Servers. This feature can only be enabled if Workload Identity is enabled at the cluster level.
SECURE
[DEPRECATED] Prevents pods not in hostNetwork from accessing certain VM metadata, specifically kube-env, which contains Kubelet credentials, and the instance identity token. This is a temporary security solution available while the bootstrapping process for cluster nodes is being redesigned with significant security improvements. This feature is scheduled to be deprecated in the future and later removed.
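For example, to run the GKE metadata server on a node pool in a cluster that has Workload Identity enabled (illustrative names):

gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --workload-metadata=GKE_METADATA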
Or at least one of these can be specified:
Upgrade settings
--autoscaled-rollout-policy=[wait-for-drain-duration=WAIT-FOR-DRAIN-DURATION]
Autoscaled rollout policy options for blue-green upgrade.
wait-for-drain-duration
(Optional) Time in seconds to wait after cordoning the blue pool before draining the nodes.

Examples:

gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --enable-blue-green-upgrade --autoscaled-rollout-policy=""
gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --enable-blue-green-upgrade --autoscaled-rollout-policy=wait-for-drain-duration=7200s
--enable-blue-green-upgrade
Changes node pool upgrade strategy to blue-green upgrade.
--enable-surge-upgrade
Changes node pool upgrade strategy to surge upgrade.
--max-surge-upgrade=MAX_SURGE_UPGRADE
Number of extra (surge) nodes to be created on each upgrade of the node pool.

Specifies the number of extra (surge) nodes to be created during this node pool's upgrades. For example, running the following command will result in creating an extra node each time the node pool is upgraded:

gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --max-surge-upgrade=1 --max-unavailable-upgrade=0

Must be used in conjunction with '--max-unavailable-upgrade'.

--max-unavailable-upgrade=MAX_UNAVAILABLE_UPGRADE
Number of nodes that can be unavailable at the same time on each upgrade of the node pool.

Specifies the number of nodes that can be unavailable at the same time during this node pool's upgrades. For example, assume the node pool has 5 nodes; running the following command will result in having 3 nodes being upgraded in parallel (1 + 2), while always keeping at least 3 (5 - 2) available each time the node pool is upgraded:

gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --max-surge-upgrade=1 --max-unavailable-upgrade=2

Must be used in conjunction with '--max-surge-upgrade'.

--node-pool-soak-duration=NODE_POOL_SOAK_DURATION
Time in seconds to be spent waiting during blue-green upgrade before deleting the blue pool and completing the upgrade.
gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --node-pool-soak-duration=600s
--standard-rollout-policy=[batch-node-count=BATCH_NODE_COUNT,batch-percent=BATCH_NODE_PERCENTAGE,batch-soak-duration=BATCH_SOAK_DURATION,…]
Standard rollout policy options for blue-green upgrade.

Batch sizes are specified by one of batch-node-count or batch-percent. The duration between batches is specified by batch-soak-duration.

gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --standard-rollout-policy=batch-node-count=3,batch-soak-duration=60s
gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --standard-rollout-policy=batch-percent=0.3,batch-soak-duration=60s
Or at least one of these can be specified:
Node config
--boot-disk-provisioned-iops=BOOT_DISK_PROVISIONED_IOPS
Configure the Provisioned IOPS for the node pool boot disks. Only valid for hyperdisk-balanced boot disks.
--boot-disk-provisioned-throughput=BOOT_DISK_PROVISIONED_THROUGHPUT
Configure the Provisioned Throughput for the node pool boot disks. Only valid for hyperdisk-balanced boot disks.
--disk-size=DISK_SIZE
Size for node VM boot disks in GB. Defaults to 100GB.
--disk-type=DISK_TYPE
Type of the node VM boot disk. For version 1.24 and later, defaults to pd-balanced. For versions earlier than 1.24, defaults to pd-standard. DISK_TYPE must be one of: pd-standard, pd-ssd, pd-balanced, hyperdisk-balanced, hyperdisk-extreme, hyperdisk-throughput.
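For example, to move the node pool's boot disks to 200 GB pd-ssd disks (illustrative names and sizes):

gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --disk-type=pd-ssd --disk-size=200GB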
--machine-type=MACHINE_TYPE
The type of machine to use for nodes. Defaults to e2-medium. The list of predefined machine types is available using the following command:
gcloud compute machine-types list

You can also specify custom machine types by providing a string with the format "custom-CPUS-RAM" where "CPUS" is the number of virtual CPUs and "RAM" is the amount of RAM in MiB.

For example, to create a node pool using custom machines with 2 vCPUs and 12 GBof RAM:

gcloud alpha container node-pools update high-mem-pool --machine-type=custom-2-12288
Or at least one of these can be specified:
Cluster autoscaling
--enable-autoprovisioning
Enables Cluster Autoscaler to treat the node pool as if it was autoprovisioned.

Cluster Autoscaler will be able to delete the node pool if it's unneeded.

--enable-autoscaling
Enables autoscaling for a node pool.

Enables autoscaling in the node pool specified by --node-pool or the default node pool if --node-pool is not provided. If not already set, --max-nodes or --total-max-nodes must also be set.

--location-policy=LOCATION_POLICY
Location policy specifies the algorithm used when scaling up the node pool.
  • BALANCED - Is a best effort policy that aims to balance the sizes of available zones.
  • ANY - Instructs the cluster autoscaler to prioritize utilization of unused reservations, and reduces preemption risk for Spot VMs.

LOCATION_POLICY must be one of: BALANCED, ANY.

--max-nodes=MAX_NODES
Maximum number of nodes per zone in the node pool.

Maximum number of nodes per zone to which the node pool specified by --node-pool (or default node pool if unspecified) can scale. Ignored unless --enable-autoscaling is also specified.

--min-nodes=MIN_NODES
Minimum number of nodes per zone in the node pool.

Minimum number of nodes per zone to which the node pool specified by --node-pool (or default node pool if unspecified) can scale. Ignored unless --enable-autoscaling is also specified.

--total-max-nodes=TOTAL_MAX_NODES
Maximum number of all nodes in the node pool.

Maximum number of all nodes to which the node pool specified by --node-pool (or default node pool if unspecified) can scale. Ignored unless --enable-autoscaling is also specified.

--total-min-nodes=TOTAL_MIN_NODES
Minimum number of all nodes in the node pool.

Minimum number of all nodes to which the node pool specified by --node-pool (or default node pool if unspecified) can scale. Ignored unless --enable-autoscaling is also specified.
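For example, to let the node pool scale between 1 and 3 nodes per zone, or alternatively between 1 and 10 nodes across all zones (illustrative names and limits):

gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --enable-autoscaling --min-nodes=1 --max-nodes=3
gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --enable-autoscaling --total-min-nodes=1 --total-max-nodes=10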

Or at least one of these can be specified:
Node management
--enable-autorepair
Enable node autorepair feature for a node pool.
gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --enable-autorepair

See https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-repair for more info.

--enable-autoupgrade
Sets autoupgrade feature for a node pool.
gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --enable-autoupgrade

See https://cloud.google.com/kubernetes-engine/docs/node-auto-upgrades for more info.

Or at least one of these can be specified:
Node drain settings
--node-drain-grace-period-seconds=NODE_DRAIN_GRACE_PERIOD_SECONDS
The grace period in seconds for nodes to drain before being forcefully removed.
--node-drain-pdb-timeout-seconds=NODE_DRAIN_PDB_TIMEOUT_SECONDS
The timeout in seconds for the node pool to be drained.
--respect-pdb-during-node-pool-deletion
Whether to respect PDBs when deleting nodes in the node pool.
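For example, to give each node 300 seconds to drain and to respect PodDisruptionBudgets during node pool deletion (illustrative values):

gcloud alpha container node-pools update node-pool-1 --cluster=example-cluster --node-drain-grace-period-seconds=300 --respect-pdb-during-node-pool-deletion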
OPTIONAL FLAGS
--async
Return immediately, without waiting for the operation in progress to complete.
--cluster=CLUSTER
The name of the cluster. Overrides the default container/cluster property value for this command invocation.
At most one of these can be specified:
--location=LOCATION
Compute zone or region (e.g. us-central1-a or us-central1) for the cluster. Overrides the default compute/region or compute/zone value for this command invocation. Prefer using this flag over the --region or --zone flags.
--region=REGION
Compute region (e.g. us-central1) for a regional cluster. Overrides the default compute/region property value for this command invocation.
--zone=ZONE, -z ZONE
Compute zone (e.g. us-central1-a) for a zonal cluster. Overrides the default compute/zone property value for this command invocation.
GCLOUD WIDE FLAGS
These flags are available to all commands: --access-token-file, --account, --billing-project, --configuration, --flags-file, --flatten, --format, --help, --impersonate-service-account, --log-http, --project, --quiet, --trace-token, --user-output-enabled, --verbosity.

Run $ gcloud help for details.

NOTES
This command is currently in alpha and might change without notice. If this command fails with API permission errors despite specifying the correct project, you might be trying to access an API with an invitation-only early access allowlist. These variants are also available:
gcloud container node-pools update
gcloud beta container node-pools update
