gcloud alpha container node-pools create

NAME
gcloud alpha container node-pools create - create a node pool in a running cluster
SYNOPSIS
gcloud alpha container node-pools create NAME
    [--accelerator=[type=TYPE,[count=COUNT,gpu-driver-version=GPU_DRIVER_VERSION,gpu-partition-size=GPU_PARTITION_SIZE,gpu-sharing-strategy=GPU_SHARING_STRATEGY,max-shared-clients-per-gpu=MAX_SHARED_CLIENTS_PER_GPU],…]]
    [--accelerator-network-profile=ACCELERATOR_NETWORK_PROFILE]
    [--additional-node-network=[network=NETWORK_NAME,subnetwork=SUBNETWORK_NAME,…]]
    [--additional-pod-network=[subnetwork=SUBNETWORK_NAME,pod-ipv4-range=SECONDARY_RANGE_NAME,[max-pods-per-node=NUM_PODS],…]]
    [--async]
    [--autoscaled-rollout-policy=[wait-for-drain-duration=WAIT-FOR-DRAIN-DURATION]]
    [--boot-disk-kms-key=BOOT_DISK_KMS_KEY]
    [--boot-disk-provisioned-iops=BOOT_DISK_PROVISIONED_IOPS]
    [--boot-disk-provisioned-throughput=BOOT_DISK_PROVISIONED_THROUGHPUT]
    [--cluster=CLUSTER] [--confidential-node-type=CONFIDENTIAL_NODE_TYPE]
    [--containerd-config-from-file=PATH_TO_FILE]
    [--data-cache-count=DATA_CACHE_COUNT] [--disable-pod-cidr-overprovision]
    [--disk-size=DISK_SIZE] [--disk-type=DISK_TYPE]
    [--enable-autoprovisioning] [--enable-autorepair]
    [--no-enable-autoupgrade] [--enable-blue-green-upgrade]
    [--enable-confidential-nodes] [--enable-confidential-storage]
    [--enable-gvnic] [--enable-image-streaming]
    [--enable-insecure-kubelet-readonly-port]
    [--enable-kernel-module-signature-enforcement]
    [--enable-nested-virtualization] [--enable-private-nodes]
    [--enable-queued-provisioning] [--enable-surge-upgrade] [--flex-start]
    [--image-type=IMAGE_TYPE] [--labels=[KEY=VALUE,…]]
    [--linux-sysctls=KEY=VALUE,[KEY=VALUE,…]]
    [--local-ssd-encryption-mode=LOCAL_SSD_ENCRYPTION_MODE]
    [--logging-variant=LOGGING_VARIANT]
    [--machine-type=MACHINE_TYPE, -m MACHINE_TYPE]
    [--max-pods-per-node=MAX_PODS_PER_NODE]
    [--max-run-duration=MAX_RUN_DURATION]
    [--max-surge-upgrade=MAX_SURGE_UPGRADE; default=1]
    [--max-unavailable-upgrade=MAX_UNAVAILABLE_UPGRADE]
    [--metadata=KEY=VALUE,[KEY=VALUE,…]]
    [--metadata-from-file=KEY=LOCAL_FILE_PATH,[…]]
    [--min-cpu-platform=PLATFORM]
    [--network-performance-configs=[PROPERTY=VALUE,…]]
    [--node-group=NODE_GROUP] [--node-labels=[NODE_LABEL,…]]
    [--node-locations=ZONE,[ZONE,…]]
    [--node-pool-soak-duration=NODE_POOL_SOAK_DURATION]
    [--node-taints=[NODE_TAINT,…]] [--node-version=NODE_VERSION]
    [--num-nodes=NUM_NODES]
    [--opportunistic-maintenance=[node-idle-time=NODE_IDLE_TIME,window=WINDOW,min-nodes=MIN_NODES,…]]
    [--performance-monitoring-unit=PERFORMANCE_MONITORING_UNIT]
    [--placement-policy=PLACEMENT_POLICY] [--placement-type=PLACEMENT_TYPE]
    [--preemptible] [--resource-manager-tags=[KEY=VALUE,…]]
    [--sandbox=[type=TYPE]]
    [--secondary-boot-disk=[disk-image=DISK_IMAGE,[mode=MODE],…]]
    [--shielded-integrity-monitoring] [--shielded-secure-boot]
    [--sole-tenant-min-node-cpus=SOLE_TENANT_MIN_NODE_CPUS]
    [--sole-tenant-node-affinity-file=SOLE_TENANT_NODE_AFFINITY_FILE]
    [--spot]
    [--standard-rollout-policy=[batch-node-count=BATCH_NODE_COUNT,batch-percent=BATCH_NODE_PERCENTAGE,batch-soak-duration=BATCH_SOAK_DURATION,…]]
    [--storage-pools=STORAGE_POOL,[…]] [--system-config-from-file=PATH_TO_FILE]
    [--tags=TAG,[TAG,…]] [--threads-per-core=THREADS_PER_CORE]
    [--tpu-topology=TPU_TOPOLOGY] [--windows-os-version=WINDOWS_OS_VERSION]
    [--workload-metadata=WORKLOAD_METADATA]
    [--create-pod-ipv4-range=[KEY=VALUE,…] | --pod-ipv4-range=NAME]
    [--enable-autoscaling --location-policy=LOCATION_POLICY --max-nodes=MAX_NODES --min-nodes=MIN_NODES --total-max-nodes=TOTAL_MAX_NODES --total-min-nodes=TOTAL_MIN_NODES]
    [--enable-best-effort-provision --min-provision-nodes=MIN_PROVISION_NODES]
    [--ephemeral-storage[=[local-ssd-count=LOCAL-SSD-COUNT]] | --ephemeral-storage-local-ssd[=[count=COUNT]] | --local-nvme-ssd-block[=[count=COUNT]] | --local-ssd-count=LOCAL_SSD_COUNT | --local-ssd-volumes=[[count=COUNT],[type=TYPE],[format=FORMAT],…]]
    [--location=LOCATION | --region=REGION | --zone=ZONE, -z ZONE]
    [--reservation=RESERVATION --reservation-affinity=RESERVATION_AFFINITY]
    [--scopes=[SCOPE,…]; default="gke-default" --service-account=SERVICE_ACCOUNT]
    [GCLOUD_WIDE_FLAG …]
DESCRIPTION
(ALPHA) gcloud alpha container node-pools create facilitates the creation of a node pool in a Google Kubernetes Engine cluster. A variety of options exists to customize the node configuration and the number of nodes created.
EXAMPLES
To create a new node pool "node-pool-1" with the default options in the cluster "sample-cluster", run:
gcloud alpha container node-pools create node-pool-1 --cluster=sample-cluster

The new node pool will show up in the cluster after all the nodes have been provisioned.

To create a node pool with 5 nodes, run:

gcloud alpha container node-pools create node-pool-1 --cluster=sample-cluster --num-nodes=5
POSITIONAL ARGUMENTS
NAME
The name of the node pool to create.
FLAGS
--accelerator=[type=TYPE,[count=COUNT,gpu-driver-version=GPU_DRIVER_VERSION,gpu-partition-size=GPU_PARTITION_SIZE,gpu-sharing-strategy=GPU_SHARING_STRATEGY,max-shared-clients-per-gpu=MAX_SHARED_CLIENTS_PER_GPU],…]
Attaches accelerators (e.g. GPUs) to all nodes.
type
(Required) The specific type (e.g. nvidia-tesla-t4 for NVIDIA T4) of accelerator to attach to the instances. Use gcloud compute accelerator-types list to learn about all available accelerator types.
count
(Optional) The number of accelerators to attach to the instances. The default value is 1.
gpu-driver-version
(Optional) The NVIDIA driver version to install. GPU_DRIVER_VERSION must be one of:
`default`: Install the default driver version for this GKE version. For GKE version 1.30.1-gke.1156000 and later, this is the default option.
`latest`: Install the latest driver version available for this GKE version. Can only be used for nodes that use Container-Optimized OS.
`disabled`: Skip automatic driver installation. You must manually install a driver after you create the cluster. For GKE version 1.30.1-gke.1156000 and earlier, this is the default option. To manually install the GPU driver, refer to https://cloud.google.com/kubernetes-engine/docs/how-to/gpus#installing_drivers.
gpu-partition-size
(Optional) The GPU partition size used when running multi-instance GPUs. For information about multi-instance GPUs, refer to: https://cloud.google.com/kubernetes-engine/docs/how-to/gpus-multi
gpu-sharing-strategy
(Optional) The GPU sharing strategy (e.g. time-sharing) to use. For information about GPU sharing, refer to: https://cloud.google.com/kubernetes-engine/docs/concepts/timesharing-gpus
max-shared-clients-per-gpu
(Optional) The max number of containers allowed to share each GPU on the node. This field is used together with gpu-sharing-strategy.
--accelerator-network-profile=ACCELERATOR_NETWORK_PROFILE
Accelerator Network Profile that will be used by the node pool.

Currently only the auto value is supported. A compatible Accelerator machine type needs to be specified with the --machine-type flag. An Accelerator Network Profile will be created if it does not exist.

--additional-node-network=[network=NETWORK_NAME,subnetwork=SUBNETWORK_NAME,…]
Attach an additional network interface to each node in the pool. This parameter can be specified up to 7 times.

e.g. --additional-node-network network=dataplane,subnetwork=subnet-dp

network
(Required) The network to attach the new interface to.
subnetwork
(Required) The subnetwork to attach the new interface to.
--additional-pod-network=[subnetwork=SUBNETWORK_NAME,pod-ipv4-range=SECONDARY_RANGE_NAME,[max-pods-per-node=NUM_PODS],…]
Specify the details of a secondary range to be used for an additional pod network. Not needed if you use a "host" typed NIC from this network. This parameter can be specified up to 35 times.

e.g. --additional-pod-network subnetwork=subnet-dp,pod-ipv4-range=sec-range-blue,max-pods-per-node=8.

subnetwork
(Optional) The name of the subnetwork to link the pod network to. If not specified, the pod network defaults to the subnet connected to the default network interface.
pod-ipv4-range
(Required) The name of the secondary range in the subnetwork. The range must hold at least (2 * MAX_PODS_PER_NODE * MAX_NODES_IN_RANGE) IPs.
max-pods-per-node
(Optional) Maximum number of pods per node that can utilize this ipv4-range. Defaults to the NodePool value (if specified) or the Cluster value.
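As a rough sizing sketch of the (2 * MAX_PODS_PER_NODE * MAX_NODES_IN_RANGE) rule above (the numbers here are hypothetical examples, not values from this reference):

```shell
# Hypothetical sizing for a secondary pod range: 8 pods per node, room for
# 100 nodes in the range. Minimum IPs = 2 * MAX_PODS_PER_NODE * MAX_NODES_IN_RANGE.
MAX_PODS_PER_NODE=8
MAX_NODES_IN_RANGE=100
MIN_IPS=$((2 * MAX_PODS_PER_NODE * MAX_NODES_IN_RANGE))

# Smallest power-of-two block covering MIN_IPS, and its CIDR prefix length.
BLOCK=1
PREFIX=32
while [ "$BLOCK" -lt "$MIN_IPS" ]; do
  BLOCK=$((BLOCK * 2))
  PREFIX=$((PREFIX - 1))
done
echo "need ${MIN_IPS} IPs -> /${PREFIX} (${BLOCK} addresses)"
```

With these example values the range must hold 1600 IPs, so the secondary range would need at least a /21 (2048 addresses).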
--async
Return immediately, without waiting for the operation in progress to complete.
--autoscaled-rollout-policy=[wait-for-drain-duration=WAIT-FOR-DRAIN-DURATION]
Autoscaled rollout policy options for blue-green upgrade.
wait-for-drain-duration
(Optional) Time in seconds to wait after cordoning the blue pool before draining the nodes.

Examples:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --enable-blue-green-upgrade --autoscaled-rollout-policy=""
gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --enable-blue-green-upgrade --autoscaled-rollout-policy=wait-for-drain-duration=7200s
--boot-disk-kms-key=BOOT_DISK_KMS_KEY
The Customer Managed Encryption Key used to encrypt the boot disk attached to each node in the node pool. This should be of the form projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME]. For more information about protecting resources with Cloud KMS Keys please see: https://cloud.google.com/compute/docs/disks/customer-managed-encryption
--boot-disk-provisioned-iops=BOOT_DISK_PROVISIONED_IOPS
Configure the Provisioned IOPS for the node pool boot disks. Only valid for hyperdisk-balanced boot disks.
--boot-disk-provisioned-throughput=BOOT_DISK_PROVISIONED_THROUGHPUT
Configure the Provisioned Throughput for the node pool boot disks. Only valid for hyperdisk-balanced boot disks.
--cluster=CLUSTER
The cluster to add the node pool to. Overrides the default container/cluster property value for this command invocation.
--confidential-node-type=CONFIDENTIAL_NODE_TYPE
Enable confidential nodes for the node pool. Enabling Confidential Nodes will create nodes using Confidential VM: https://cloud.google.com/compute/confidential-vm/docs/about-cvm. CONFIDENTIAL_NODE_TYPE must be one of: sev, sev_snp, tdx, disabled.
--containerd-config-from-file=PATH_TO_FILE
Path of the YAML file that contains containerd configuration entries like configuring access to private image registries.

For detailed information on the configuration usage, please refer to https://cloud.google.com/kubernetes-engine/docs/how-to/customize-containerd-configuration.

Note: Updating the containerd configuration of an existing cluster or node pool requires recreation of the existing nodes, which might cause disruptions in running workloads.

Use a full or relative path to a local file containing the value of containerd_config.

--data-cache-count=DATA_CACHE_COUNT
Specifies the number of local SSDs to be utilized for GKE Data Cache in the node pool.
--disable-pod-cidr-overprovision
Disables pod CIDR overprovisioning on nodes. Pod CIDR overprovisioning is enabled by default.
--disk-size=DISK_SIZE
Size for node VM boot disks in GB. Defaults to 100GB.
--disk-type=DISK_TYPE
Type of the node VM boot disk. For version 1.24 and later, defaults to pd-balanced. For versions earlier than 1.24, defaults to pd-standard. DISK_TYPE must be one of: pd-standard, pd-ssd, pd-balanced, hyperdisk-balanced, hyperdisk-extreme, hyperdisk-throughput.
--enable-autoprovisioning
Enables Cluster Autoscaler to treat the node pool as if it was autoprovisioned.

Cluster Autoscaler will be able to delete the node pool if it's unneeded.

--enable-autorepair
Enable node autorepair feature for a node pool.
gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --enable-autorepair

Node autorepair is enabled by default for node pools using COS, COS_CONTAINERD, UBUNTU or UBUNTU_CONTAINERD as a base image; use --no-enable-autorepair to disable.

See https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-repair for more info.

--enable-autoupgrade
Sets autoupgrade feature for a node pool.
gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --enable-autoupgrade

See https://cloud.google.com/kubernetes-engine/docs/node-auto-upgrades for more info.

Enabled by default, use --no-enable-autoupgrade to disable.

--enable-blue-green-upgrade
Changes node pool upgrade strategy to blue-green upgrade.
--enable-confidential-nodes
Enable confidential nodes for the node pool. Enabling Confidential Nodes will create nodes using Confidential VM: https://cloud.google.com/compute/confidential-vm/docs/about-cvm.
--enable-confidential-storage
Enable confidential storage for the node pool. Enabling Confidential Storage will create a boot disk with confidential mode.
--enable-gvnic
Enable the use of gVNIC for this cluster. Requires re-creation of nodes using either a node-pool upgrade or node-pool creation.
--enable-image-streaming
Specifies whether to enable image streaming on the node pool.
--enable-insecure-kubelet-readonly-port
Enables the Kubelet's insecure read-only port.

To disable the read-only port on a cluster or node pool, set the flag to --no-enable-insecure-kubelet-readonly-port.

--enable-kernel-module-signature-enforcement
Enforces that kernel modules are signed on all nodes in the node pool. This setting overrides the cluster-level setting. For example, if the cluster disables enforcement, you can enable enforcement only for a specific node pool. When the policy is modified on an existing node pool, nodes will be immediately recreated to use the new policy. Use --no-enable-kernel-module-signature-enforcement to disable.

Examples:

gcloud alpha container node-pools create node-pool-1 --enable-kernel-module-signature-enforcement
--enable-nested-virtualization
Enables the use of nested virtualization on the node pool. Defaults to false. Can only be enabled on the UBUNTU_CONTAINERD base image, or the COS_CONTAINERD base image with version 1.28.4-gke.1083000 and above.
--enable-private-nodes
Enables provisioning nodes with private IP addresses only.

The control plane still communicates with all nodes through private IP addresses only, regardless of whether private nodes are enabled or disabled.

--enable-queued-provisioning
Mark the node pool as Queued only. This means that all new nodes can be obtained only through queuing via the ProvisioningRequest API.
gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --enable-queued-provisioning
and other required parameters. For more details see: https://cloud.google.com/kubernetes-engine/docs/how-to/provisioningrequest
--enable-surge-upgrade
Changes node pool upgrade strategy to surge upgrade.
--flex-start
Start the node pool with the Flex Start provisioning model.
gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --flex-start
and other required parameters. For more details see: https://cloud.google.com/kubernetes-engine/docs/how-to/provisioningrequest
--image-type=IMAGE_TYPE
The image type to use for the node pool. Defaults to server-specified.

Image Type specifies the base OS that the nodes in the node pool will run on. If an image type is specified, that will be assigned to the node pool and all future upgrades will use the specified image type. If it is not specified, the server will pick the default image type.

The default image type and the list of valid image types are available using the following command:

gcloud container get-server-config
--labels=[KEY=VALUE,…]
Labels to apply to the Google Cloud resources of node pools in the Kubernetes Engine cluster. These are unrelated to Kubernetes labels. Warning: Updating this label will cause the node(s) to be recreated.

Examples:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --labels=label1=value1,label2=value2
--linux-sysctls=KEY=VALUE,[KEY=VALUE,…]
(DEPRECATED) Linux kernel parameters to be applied to all nodes in the new node pool as well as the pods running on the nodes.

Examples:

gcloud alpha container node-pools create node-pool-1 --linux-sysctls="net.core.somaxconn=1024,net.ipv4.tcp_rmem=4096 87380 6291456"

The --linux-sysctls flag is deprecated. Please use --system-config-from-file instead.

--local-ssd-encryption-mode=LOCAL_SSD_ENCRYPTION_MODE
Encryption mode for Local SSDs on the node pool. LOCAL_SSD_ENCRYPTION_MODE must be one of: STANDARD_ENCRYPTION, EPHEMERAL_KEY_ENCRYPTION.
--logging-variant=LOGGING_VARIANT
Specifies the logging variant that will be deployed on all the nodes in the node pool. If the node pool doesn't specify a logging variant, then the logging variant specified for the cluster will be deployed on all the nodes in the node pool. Valid logging variants are MAX_THROUGHPUT, DEFAULT. LOGGING_VARIANT must be one of:
DEFAULT
'DEFAULT' variant requests minimal resources but may not guarantee high throughput.
MAX_THROUGHPUT
'MAX_THROUGHPUT' variant requests more node resources and is able to achieve logging throughput up to 10MB per sec.
--machine-type=MACHINE_TYPE, -m MACHINE_TYPE
The type of machine to use for nodes. Defaults to e2-medium. The list of predefined machine types is available using the following command:
gcloud compute machine-types list

You can also specify custom machine types by providing a string with the format "custom-CPUS-RAM" where "CPUS" is the number of virtual CPUs and "RAM" is the amount of RAM in MiB.

For example, to create a node pool using custom machines with 2 vCPUs and 12 GBof RAM:

gcloud alpha container node-pools create high-mem-pool --machine-type=custom-2-12288
--max-pods-per-node=MAX_PODS_PER_NODE
The max number of pods per node for this node pool.

This flag sets the maximum number of pods that can be run at the same time on a node. This will override the value given with the --default-max-pods-per-node flag set at the cluster level.

Must be used in conjunction with '--enable-ip-alias'.

--max-run-duration=MAX_RUN_DURATION
Limit the runtime of each node in the node pool to the specified duration.
gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --max-run-duration=3600s
--max-surge-upgrade=MAX_SURGE_UPGRADE; default=1
Number of extra (surge) nodes to be created on each upgrade of the node pool.

Specifies the number of extra (surge) nodes to be created during this node pool's upgrades. For example, running the following command will result in creating an extra node each time the node pool is upgraded:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --max-surge-upgrade=1 --max-unavailable-upgrade=0

Must be used in conjunction with '--max-unavailable-upgrade'.

--max-unavailable-upgrade=MAX_UNAVAILABLE_UPGRADE
Number of nodes that can be unavailable at the same time on each upgrade of the node pool.

Specifies the number of nodes that can be unavailable at the same time during this node pool's upgrades. For example, running the following command will result in 3 nodes being upgraded in parallel (1 + 2), while always keeping at least 3 (5 - 2) available each time the node pool is upgraded:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --num-nodes=5 --max-surge-upgrade=1 --max-unavailable-upgrade=2

Must be used in conjunction with '--max-surge-upgrade'.

--metadata=KEY=VALUE,[KEY=VALUE,…]
Compute Engine metadata to be made available to the guest operating system running on nodes within the node pool.

Each metadata entry is a key/value pair separated by an equals sign. Metadata keys must be unique and less than 128 bytes in length. Values must be less than or equal to 32,768 bytes in length. The total size of all keys and values must be less than 512 KB. Multiple arguments can be passed to this flag. For example:

--metadata key-1=value-1,key-2=value-2,key-3=value-3

Additionally, the following keys are reserved for use by Kubernetes Engine:

  • cluster-location
  • cluster-name
  • cluster-uid
  • configure-sh
  • enable-os-login
  • gci-update-strategy
  • gci-ensure-gke-docker
  • instance-template
  • kube-env
  • startup-script
  • user-data

Google Kubernetes Engine sets the following keys by default:

  • serial-port-logging-enable

See also Compute Engine's documentation on storing and retrieving instance metadata.

--metadata-from-file=KEY=LOCAL_FILE_PATH,[…]
Same as --metadata except that the value for the entry will be read from a local file.
--min-cpu-platform=PLATFORM
When specified, the nodes for the new node pool will be scheduled on hosts with the specified CPU architecture or a newer one.

Examples:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --min-cpu-platform=PLATFORM

To list available CPU platforms in a given zone, run:

gcloud beta compute zones describe ZONE --format="value(availableCpuPlatforms)"

CPU platform selection is available only in selected zones.

--network-performance-configs=[PROPERTY=VALUE,…]
Configures network performance settings for the node pool. If this flag is not specified, the pool will be created with its default network performance configuration.
total-egress-bandwidth-tier
Total egress bandwidth is the available outbound bandwidth from a VM, regardless of whether the traffic is going to internal IP or external IP destinations. The following tier values are allowed: [TIER_UNSPECIFIED, TIER_1]
--node-group=NODE_GROUP
Assign instances of this pool to run on the specified Google Compute Engine node group. This is useful for running workloads on sole tenant nodes.

To see available sole tenant node-groups, run:

gcloud compute sole-tenancy node-groups list

To create a sole tenant node group, run:

gcloud compute sole-tenancy node-groups create [GROUP_NAME] --location [ZONE] --node-template [TEMPLATE_NAME] --target-size [TARGET_SIZE]

See https://cloud.google.com/compute/docs/nodes for more information on sole tenancy and node groups.

--node-labels=[NODE_LABEL,…]
Applies the given Kubernetes labels on all nodes in the new node pool.

Examples:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --node-labels=label1=value1,label2=value2

Updating the node pool's --node-labels flag applies the labels to the Kubernetes Node objects for existing nodes in-place; it does not re-create or replace nodes. New nodes, including ones created by resizing or re-creating nodes, will have these labels on the Kubernetes API Node object. The labels can be used in the nodeSelector field. See https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/ for examples.

Note that Kubernetes labels, intended to associate cluster components and resources with one another and manage resource lifecycles, are different from Google Kubernetes Engine labels that are used for the purpose of tracking billing and usage information.

--node-locations=ZONE,[ZONE,…]
The set of zones in which the node pool's nodes should be located.

Multiple locations can be specified, separated by commas. For example:

gcloud alpha container node-pools create node-pool-1 --cluster=sample-cluster --node-locations=us-central1-a,us-central1-b
--node-pool-soak-duration=NODE_POOL_SOAK_DURATION
Time in seconds to be spent waiting during blue-green upgrade before deleting the blue pool and completing the upgrade.
gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --node-pool-soak-duration=600s
--node-taints=[NODE_TAINT,…]
Applies the given Kubernetes taints on all nodes in the new node pool, which can be used with tolerations for pod scheduling.

Examples:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --node-taints=key1=val1:NoSchedule,key2=val2:PreferNoSchedule

To read more about node taints, see https://cloud.google.com/kubernetes-engine/docs/node-taints.

--node-version=NODE_VERSION
The Kubernetes version to use for nodes. Defaults to server-specified.

The default Kubernetes version is available using the following command.

gcloud container get-server-config
--num-nodes=NUM_NODES
The number of nodes in the node pool in each of the cluster's zones. Defaults to 3.

Exception: when --tpu-topology is specified for multi-host TPU machine types, the number of nodes defaults to (product of topology)/(number of chips per VM).
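To make that default concrete, here is a sketch of the (product of topology)/(chips per VM) arithmetic; the topology and chips-per-VM values are hypothetical examples, not defaults from this reference:

```shell
# Hypothetical multi-host TPU example: --tpu-topology=2x2x4 on a machine type
# with 4 TPU chips per VM. Default node count = (product of topology) / (chips per VM).
TOPOLOGY="2x2x4"
CHIPS_PER_VM=4

PRODUCT=1
for dim in $(echo "$TOPOLOGY" | tr 'x' ' '); do
  PRODUCT=$((PRODUCT * dim))
done
NUM_NODES=$((PRODUCT / CHIPS_PER_VM))
echo "topology ${TOPOLOGY} -> ${PRODUCT} chips -> ${NUM_NODES} nodes"
```

With these example values, a 2x2x4 topology yields 16 chips, so the pool defaults to 4 nodes.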

--opportunistic-maintenance=[node-idle-time=NODE_IDLE_TIME,window=WINDOW,min-nodes=MIN_NODES,…]
Opportunistic maintenance options.

node-idle-time: Time to be spent waiting for the node to be idle before starting maintenance, ending with 's'. Example: "3.5s"

window: The window of time in which opportunistic maintenance can run, ending with 's'. Example: a setting of 14 days (1209600s) implies that opportunistic maintenance can only be run in the 2 weeks leading up to the scheduled maintenance date. Setting 28 days (2419200s) allows opportunistic maintenance to run at any time in the scheduled maintenance window.

min-nodes: Minimum number of nodes in the node pool to be available during the opportunistically triggered maintenance.

gcloud alpha container node-pools create example-cluster --opportunistic-maintenance=node-idle-time=600s,window=600s,min-nodes=2
--performance-monitoring-unit=PERFORMANCE_MONITORING_UNIT
Sets the Performance Monitoring Unit level. Valid values are architectural, standard and enhanced. PERFORMANCE_MONITORING_UNIT must be one of:
architectural
Enables architectural PMU events tied to non last level cache (LLC) events.
enhanced
Enables most documented core/L2 and LLC PMU events.
standard
Enables most documented core/L2 PMU events.
--placement-policy=PLACEMENT_POLICY
Indicates the desired resource policy to use.
gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --placement-policy my-placement
--placement-type=PLACEMENT_TYPE
Placement type lets you define the type of node placement within this node pool.

UNSPECIFIED - No requirements on the placement of nodes. This is the default option.

COMPACT - GKE will attempt to place the nodes in close proximity to each other. This helps to reduce the communication latency between the nodes, but imposes additional limitations on the node pool size.

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --placement-type=COMPACT

PLACEMENT_TYPE must be one of: UNSPECIFIED, COMPACT.

--preemptible
Create nodes using preemptible VM instances in the new node pool.
gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --preemptible

New nodes, including ones created by resize or recreate, will use preemptible VM instances. See https://cloud.google.com/kubernetes-engine/docs/preemptible-vm for more information on how to use Preemptible VMs with Kubernetes Engine.

--resource-manager-tags=[KEY=VALUE,…]
Applies the specified comma-separated resource manager tags that have the GCE_FIREWALL purpose to all nodes in the new node pool.

Examples:

gcloud alpha container node-pools create example-node-pool --resource-manager-tags=tagKeys/1234=tagValues/2345
gcloud alpha container node-pools create example-node-pool --resource-manager-tags=my-project/key1=value1
gcloud alpha container node-pools create example-node-pool --resource-manager-tags=12345/key1=value1,23456/key2=value2
gcloud alpha container node-pools create example-node-pool --resource-manager-tags=

All nodes, including nodes that are resized or re-created, will have the specified tags on the corresponding Instance object in the Compute Engine API. You can reference these tags in network firewall policy rules. For instructions, see https://cloud.google.com/firewall/docs/use-tags-for-firewalls.

--sandbox=[type=TYPE]
Enables the requested sandbox on all nodes in the node pool.

Examples:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --sandbox="type=gvisor"

The only supported type is 'gvisor'.

--secondary-boot-disk=[disk-image=DISK_IMAGE,[mode=MODE],…]
Attaches secondary boot disks to all nodes.
disk-image
(Required) The full resource path to the source disk image to create the secondary boot disks from.
mode
(Optional) The configuration mode for the secondary boot disks. The defaultvalue is "CONTAINER_IMAGE_CACHE".
--shielded-integrity-monitoring
Enables monitoring and attestation of the boot integrity of the instance. The attestation is performed against the integrity policy baseline. This baseline is initially derived from the implicitly trusted boot image when the instance is created.
--shielded-secure-boot
The instance will boot with secure boot enabled.
--sole-tenant-min-node-cpus=SOLE_TENANT_MIN_NODE_CPUS
An integer value that specifies the minimum number of vCPUs that each sole tenant node must have to use CPU overcommit. If not specified, the CPU overcommit feature is disabled.
--sole-tenant-node-affinity-file=SOLE_TENANT_NODE_AFFINITY_FILE
JSON/YAML file containing the configuration of the desired sole tenant nodes that this node pool could be backed by. These rules filter the nodes according to their node affinity labels. A node's affinity labels come from the node template of the group the node is in.

The file should contain a list of JSON/YAML objects. For an example, see https://cloud.google.com/compute/docs/nodes/provisioning-sole-tenant-vms#configure_node_affinity_labels. The following list describes the fields:

key
Corresponds to the node affinity label keys of the Node resource.
operator
Specifies the node selection type. Must be one of:
IN: Requires Compute Engine to seek for matched nodes.
NOT_IN: Requires Compute Engine to avoid certain nodes.
values
Optional. A list of values which correspond to the node affinity label values of the Node resource.
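Putting the key, operator, and values fields together, a minimal affinity file might look like the following sketch; the label key and node group name here are hypothetical placeholders, not values from this reference:

```yaml
# Hypothetical sole-tenant node affinity file (YAML form): a list of objects,
# each selecting nodes by an affinity label from the node group's node template.
- key: compute.googleapis.com/node-group-name   # hypothetical label key
  operator: IN                                  # match nodes carrying this label
  values:
  - my-sole-tenant-group                        # hypothetical node group name
```

Such a file would then be passed via --sole-tenant-node-affinity-file=affinity.yaml.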
--spot
Create nodes using spot VM instances in the new node pool.
gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --spot

New nodes, including ones created by resize or recreate, will use spot VM instances.

--standard-rollout-policy=[batch-node-count=BATCH_NODE_COUNT,batch-percent=BATCH_NODE_PERCENTAGE,batch-soak-duration=BATCH_SOAK_DURATION,…]
Standard rollout policy options for blue-green upgrade.

Batch sizes are specified by one of batch-node-count or batch-percent. The duration between batches is specified by batch-soak-duration.

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --standard-rollout-policy=batch-node-count=3,batch-soak-duration=60s
gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --standard-rollout-policy=batch-percent=0.3,batch-soak-duration=60s
--storage-pools=STORAGE_POOL,[…]
A list of storage pools where the node pool's boot disks will be provisioned.

STORAGE_POOL must be in the format projects/project/zones/zone/storagePools/storagePool

--system-config-from-file=PATH_TO_FILE
Path of the YAML/JSON file that contains the node configuration, including Linux kernel parameters (sysctls) and kubelet configs.

Examples:

kubeletConfig:
  cpuManagerPolicy: static
  memoryManager:
    policy: Static
  topologyManager:
    policy: BestEffort
    scope: pod
linuxConfig:
  sysctl:
    net.core.somaxconn: '2048'
    net.ipv4.tcp_rmem: '4096 87380 6291456'
  hugepageConfig:
    hugepage_size2m: '1024'
    hugepage_size1g: '2'
  swapConfig:
    enabled: true
    bootDiskProfile:
      swapSizeGib: 8
  cgroupMode: 'CGROUP_MODE_V2'

List of supported kubelet configs in 'kubeletConfig'.

KEY                               VALUE
cpuManagerPolicy                  either 'static' or 'none'
cpuCFSQuota                       true or false (enabled by default)
cpuCFSQuotaPeriod                 interval (e.g., '100ms'. The value must be between 1ms and 1 second, inclusive.)
memoryManager                     specify memory manager policy
topologyManager                   specify topology manager policy and scope
podPidsLimit                      integer (The value must be greater than or equal to 1024 and less than 4194304.)
containerLogMaxSize               positive number plus unit suffix (e.g., '100Mi', '0.2Gi'. The value must be between 10Mi and 500Mi, inclusive.)
containerLogMaxFiles              integer (The value must be between [2, 10].)
imageGcLowThresholdPercent        integer (The value must be between [10, 85], and lower than imageGcHighThresholdPercent.)
imageGcHighThresholdPercent       integer (The value must be between [10, 85], and greater than imageGcLowThresholdPercent.)
imageMinimumGcAge                 interval (e.g., '100s', '1m'. The value must be less than '2m'.)
imageMaximumGcAge                 interval (e.g., '100s', '1m'. The value must be greater than imageMinimumGcAge.)
evictionSoft                      specify eviction soft thresholds
evictionSoftGracePeriod           specify eviction soft grace period
evictionMinimumReclaim            specify eviction minimum reclaim thresholds
evictionMaxPodGracePeriodSeconds  integer (Max grace period for pod termination during eviction, in seconds. The value must be between [0, 300].)
allowedUnsafeSysctls              list of sysctls (Allowlisted groups: 'kernel.shm*', 'kernel.msg*', 'kernel.sem', 'fs.mqueue.*', and 'net.*', and sysctls under the groups.)
singleProcessOomKill              true or false
maxParallelImagePulls             integer (The value must be between [2, 5].)
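As an illustration, a kubeletConfig fragment that tunes pod PID limits, log rotation, and image garbage collection within the ranges above might look like the following. This is a hypothetical sketch; the values are examples, not recommendations.

```yaml
# Hypothetical kubeletConfig fragment for --system-config-from-file.
# Each value is illustrative and must stay within the documented range.
kubeletConfig:
  podPidsLimit: 2048                 # >= 1024 and < 4194304
  containerLogMaxSize: '100Mi'       # between 10Mi and 500Mi, inclusive
  containerLogMaxFiles: 5            # between 2 and 10
  imageGcLowThresholdPercent: 60     # [10, 85], lower than the high threshold
  imageGcHighThresholdPercent: 80    # [10, 85], greater than the low threshold
  imageMinimumGcAge: '90s'           # less than 2m
```

Saved to a local file, such a fragment would be passed with --system-config-from-file=PATH_TO_FILE as described above.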
List of supported keys in memoryManager in 'kubeletConfig'.

KEY     VALUE
policy  either 'Static' or 'None'

List of supported keys in topologyManager in 'kubeletConfig'.

KEY     VALUE
policy  either 'none', 'best-effort', 'single-numa-node' or 'restricted'
scope   either 'pod' or 'container'
List of supported keys in evictionSoft in 'kubeletConfig'.

KEY                VALUE
memoryAvailable    quantity (e.g., '100Mi', '1Gi'. Represents the amount of memory available before soft eviction. The value must be at least 100Mi and less than 50% of the node's memory.)
nodefsAvailable    percentage (e.g., '20%'. Represents the nodefs available before soft eviction. The value must be between 10% and 50%, inclusive.)
nodefsInodesFree   percentage (e.g., '20%'. Represents the nodefs inodes free before soft eviction. The value must be between 5% and 50%, inclusive.)
imagefsAvailable   percentage (e.g., '20%'. Represents the imagefs available before soft eviction. The value must be between 15% and 50%, inclusive.)
imagefsInodesFree  percentage (e.g., '20%'. Represents the imagefs inodes free before soft eviction. The value must be between 5% and 50%, inclusive.)
pidAvailable       percentage (e.g., '20%'. Represents the pid available before soft eviction. The value must be between 10% and 50%, inclusive.)

List of supported keys in evictionSoftGracePeriod in 'kubeletConfig'.

KEY                VALUE
memoryAvailable    duration (e.g., '30s', '1m'. The grace period for soft eviction for this resource. The value must be positive and no more than '5m'.)
nodefsAvailable    duration (e.g., '30s', '1m'. The grace period for soft eviction for this resource. The value must be positive and no more than '5m'.)
nodefsInodesFree   duration (e.g., '30s', '1m'. The grace period for soft eviction for this resource. The value must be positive and no more than '5m'.)
imagefsAvailable   duration (e.g., '30s', '1m'. The grace period for soft eviction for this resource. The value must be positive and no more than '5m'.)
imagefsInodesFree  duration (e.g., '30s', '1m'. The grace period for soft eviction for this resource. The value must be positive and no more than '5m'.)
pidAvailable       duration (e.g., '30s', '1m'. The grace period for soft eviction for this resource. The value must be positive and no more than '5m'.)

List of supported keys in evictionMinimumReclaim in 'kubeletConfig'.

KEY                VALUE
memoryAvailable    percentage (e.g., '5%'. Represents the minimum reclaim threshold for memory available. The value must be positive and no more than 10%.)
nodefsAvailable    percentage (e.g., '5%'. Represents the minimum reclaim threshold for nodefs available. The value must be positive and no more than 10%.)
nodefsInodesFree   percentage (e.g., '5%'. Represents the minimum reclaim threshold for nodefs inodes free. The value must be positive and no more than 10%.)
imagefsAvailable   percentage (e.g., '5%'. Represents the minimum reclaim threshold for imagefs available. The value must be positive and no more than 10%.)
imagefsInodesFree  percentage (e.g., '5%'. Represents the minimum reclaim threshold for imagefs inodes free. The value must be positive and no more than 10%.)
pidAvailable       percentage (e.g., '5%'. Represents the minimum reclaim threshold for pid available. The value must be positive and no more than 10%.)
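Taken together, the three eviction maps can be combined in a single kubeletConfig block. A hypothetical sketch, with illustrative values inside the documented bounds:

```yaml
# Hypothetical eviction tuning fragment; thresholds are examples only.
kubeletConfig:
  evictionSoft:
    memoryAvailable: '200Mi'             # >= 100Mi, < 50% of node memory
    nodefsAvailable: '20%'               # between 10% and 50%
  evictionSoftGracePeriod:
    memoryAvailable: '30s'               # positive, no more than 5m
    nodefsAvailable: '1m'
  evictionMinimumReclaim:
    memoryAvailable: '5%'                # positive, no more than 10%
  evictionMaxPodGracePeriodSeconds: 60   # between 0 and 300
```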
List of supported sysctls in 'linuxConfig'.

KEY                                                 VALUE
net.core.netdev_max_backlog                         Any positive integer, less than 2147483647
net.core.rmem_default                               Must be between [2304, 2147483647]
net.core.rmem_max                                   Must be between [2304, 2147483647]
net.core.wmem_default                               Must be between [4608, 2147483647]
net.core.wmem_max                                   Must be between [4608, 2147483647]
net.core.optmem_max                                 Any positive integer, less than 2147483647
net.core.somaxconn                                  Must be between [128, 2147483647]
net.ipv4.tcp_rmem                                   Any positive integer tuple
net.ipv4.tcp_wmem                                   Any positive integer tuple
net.ipv4.tcp_tw_reuse                               Must be one of {0, 1, 2}
net.ipv4.tcp_mtu_probing                            Must be one of {0, 1, 2}
net.ipv4.tcp_max_orphans                            Must be between [16384, 262144]
net.ipv4.tcp_max_tw_buckets                         Must be between [4096, 2147483647]
net.ipv4.tcp_syn_retries                            Must be between [1, 127]
net.ipv4.tcp_ecn                                    Must be one of {0, 1, 2}
net.ipv4.tcp_congestion_control                     Supported values for COS: 'reno', 'cubic', 'bbr', 'lp', 'htcp'. Supported values for Ubuntu: 'reno', 'cubic', 'bbr', 'lp', 'htcp', 'vegas', 'dctcp', 'bic', 'cdg', 'highspeed', 'hybla', 'illinois', 'nv', 'scalable', 'veno', 'westwood', 'yeah'.
net.netfilter.nf_conntrack_max                      Must be between [65536, 4194304]
net.netfilter.nf_conntrack_buckets                  Must be between [65536, 524288]. Recommended setting: nf_conntrack_max = nf_conntrack_buckets * 4
net.netfilter.nf_conntrack_tcp_timeout_close_wait   Must be between [60, 3600]
net.netfilter.nf_conntrack_tcp_timeout_time_wait    Must be between [1, 600]
net.netfilter.nf_conntrack_tcp_timeout_established  Must be between [600, 86400]
net.netfilter.nf_conntrack_acct                     Must be one of {0, 1}
kernel.shmmni                                       Must be between [4096, 32768]
kernel.shmmax                                       Must be between [0, 18446744073692774399]
kernel.shmall                                       Must be between [0, 18446744073692774399]
kernel.perf_event_paranoid                          Must be one of {-1, 0, 1, 2, 3}
kernel.sched_rt_runtime_us                          Must be between [-1, 1000000]
kernel.softlockup_panic                             Must be one of {0, 1}
kernel.yama.ptrace_scope                            Must be one of {0, 1, 2, 3}
kernel.kptr_restrict                                Must be one of {0, 1, 2}
kernel.dmesg_restrict                               Must be one of {0, 1}
kernel.sysrq                                        Must be between [0, 511]
fs.aio-max-nr                                       Must be between [65536, 4194304]
fs.file-max                                         Must be between [104857, 67108864]
fs.inotify.max_user_instances                       Must be between [8192, 1048576]
fs.inotify.max_user_watches                         Must be between [8192, 1048576]
fs.nr_open                                          Must be between [1048576, 2147483584]
vm.dirty_background_ratio                           Must be between [1, 100]
vm.dirty_background_bytes                           Must be between [0, 68719476736]
vm.dirty_expire_centisecs                           Must be between [0, 6000]
vm.dirty_ratio                                      Must be between [1, 100]
vm.dirty_bytes                                      Must be between [0, 68719476736]
vm.dirty_writeback_centisecs                        Must be between [0, 1000]
vm.max_map_count                                    Must be between [65536, 2147483647]
vm.overcommit_memory                                Must be one of {0, 1, 2}
vm.overcommit_ratio                                 Must be between [0, 100]
vm.vfs_cache_pressure                               Must be between [0, 100]
vm.swappiness                                       Must be between [0, 200]
vm.watermark_scale_factor                           Must be between [10, 3000]
vm.min_free_kbytes                                  Must be between [67584, 1048576]
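For example, a linuxConfig sysctl fragment that follows the nf_conntrack_max = nf_conntrack_buckets * 4 recommendation above could look like this (a hypothetical sketch; values are illustrative and within the documented ranges):

```yaml
# Hypothetical sysctl fragment; note that sysctl values are strings.
linuxConfig:
  sysctl:
    net.core.somaxconn: '4096'                   # [128, 2147483647]
    net.netfilter.nf_conntrack_buckets: '65536'  # [65536, 524288]
    net.netfilter.nf_conntrack_max: '262144'     # buckets * 4, within [65536, 4194304]
    vm.max_map_count: '262144'                   # [65536, 2147483647]
```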
List of supported hugepage sizes in 'hugepageConfig'.

KEY              VALUE
hugepage_size2m  Number of 2M huge pages, any positive integer
hugepage_size1g  Number of 1G huge pages, any positive integer

List of supported keys in 'swapConfig' under 'linuxConfig'.

KEY                       VALUE
enabled                   boolean
encryptionConfig          specify encryption settings for the swap space
bootDiskProfile           specify swap on the node's boot disk
ephemeralLocalSsdProfile  specify swap on the local SSD shared with pod ephemeral storage
dedicatedLocalSsdProfile  specify swap on a new, separate local NVMe SSD exclusively for swap

List of supported keys in 'encryptionConfig' under 'swapConfig'.

KEY       VALUE
disabled  boolean

List of supported keys in 'bootDiskProfile' under 'swapConfig'.

KEY              VALUE
swapSizeGib      integer
swapSizePercent  integer

List of supported keys in 'ephemeralLocalSsdProfile' under 'swapConfig'.

KEY              VALUE
swapSizeGib      integer
swapSizePercent  integer

List of supported keys in 'dedicatedLocalSsdProfile' under 'swapConfig'.

KEY        VALUE
diskCount  integer
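As a sketch, a swapConfig that places swap on a dedicated local NVMe SSD might be written as follows (the diskCount value is illustrative):

```yaml
# Hypothetical swapConfig fragment under linuxConfig.
linuxConfig:
  swapConfig:
    enabled: true
    encryptionConfig:
      disabled: false          # keep swap encryption on
    dedicatedLocalSsdProfile:
      diskCount: 1             # one local NVMe SSD reserved for swap
```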
Allocated hugepage size should not exceed 60% of available memory on the node. For example, c2d-highcpu-4 has 8GB of memory, so the total memory allocated to 2M and 1G hugepages should not exceed 8GB * 0.6 = 4.8GB.

1G hugepages are only available in the following machine families: c3, m2, c2d, c3d, h3, m3, a2, a3, g2.

Supported values for 'cgroupMode' under 'linuxConfig'.

  • CGROUP_MODE_V1: Use cgroupv1 on the node pool.
  • CGROUP_MODE_V2: Use cgroupv2 on the node pool.
  • CGROUP_MODE_UNSPECIFIED: Use the default GKE cgroup configuration.

Supported values for 'transparentHugepageEnabled' under 'linuxConfig', which controls transparent hugepage support for anonymous memory.

  • TRANSPARENT_HUGEPAGE_ENABLED_ALWAYS: Transparent hugepage is enabled system wide.
  • TRANSPARENT_HUGEPAGE_ENABLED_MADVISE: Transparent hugepage is enabled inside MADV_HUGEPAGE regions. This is the default kernel configuration.
  • TRANSPARENT_HUGEPAGE_ENABLED_NEVER: Transparent hugepage is disabled.
  • TRANSPARENT_HUGEPAGE_ENABLED_UNSPECIFIED: Default value. GKE will not modify the kernel configuration.

Supported values for 'transparentHugepageDefrag' under 'linuxConfig', which defines the transparent hugepage defrag configuration on the node.

  • TRANSPARENT_HUGEPAGE_DEFRAG_ALWAYS: An application requesting THP will stall on allocation failure and directly reclaim pages and compact memory in an effort to allocate a THP immediately.
  • TRANSPARENT_HUGEPAGE_DEFRAG_DEFER: An application will wake kswapd in the background to reclaim pages and wake kcompactd to compact memory so that THP is available in the near future. It is the responsibility of khugepaged to then install the THP pages later.
  • TRANSPARENT_HUGEPAGE_DEFRAG_DEFER_WITH_MADVISE: An application will enter direct reclaim and compaction like always, but only for regions that have used madvise(MADV_HUGEPAGE); all other regions will wake kswapd in the background to reclaim pages and wake kcompactd to compact memory so that THP is available in the near future.
  • TRANSPARENT_HUGEPAGE_DEFRAG_MADVISE: An application will enter direct reclaim and compaction like always, but only for regions that have used madvise(MADV_HUGEPAGE); all other regions will wake kswapd in the background to reclaim pages and wake kcompactd to compact memory so that THP is available in the near future.
  • TRANSPARENT_HUGEPAGE_DEFRAG_NEVER: An application will never enter direct reclaim or compaction.
  • TRANSPARENT_HUGEPAGE_DEFRAG_UNSPECIFIED: Default value. GKE will not modify the kernel configuration.
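A hypothetical linuxConfig fragment selecting the madvise/defer combination described above:

```yaml
# Hypothetical transparent hugepage fragment; both keys are optional and
# default to leaving the kernel configuration unmodified.
linuxConfig:
  transparentHugepageEnabled: TRANSPARENT_HUGEPAGE_ENABLED_MADVISE
  transparentHugepageDefrag: TRANSPARENT_HUGEPAGE_DEFRAG_DEFER
```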

Note: updating the system configuration of an existing node pool requires recreation of the nodes, which might cause a disruption.

Use a full or relative path to a local file containing the value of system_config.

--tags=TAG,[TAG,…]
Applies the given Compute Engine tags (comma separated) on all nodes in the new node pool. Example:
gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --tags=tag1,tag2

New nodes, including ones created by resize or recreate, will have these tags on the Compute Engine API instance object and can be used in firewall rules. See https://cloud.google.com/sdk/gcloud/reference/compute/firewall-rules/create for examples.

--threads-per-core=THREADS_PER_CORE
The number of visible threads per physical core for each node. To disable simultaneous multithreading (SMT), set this to 1.
--tpu-topology=TPU_TOPOLOGY
The desired physical topology for the PodSlice. Example:
gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --tpu-topology=TPU_TOPOLOGY
--windows-os-version=WINDOWS_OS_VERSION
Specifies the Windows Server image to use when creating a Windows node pool. Valid variants are "ltsc2019" and "ltsc2022", meaning the LTSC2019 or LTSC2022 server image. If the node pool doesn't specify a Windows Server image OS version, LTSC2019 is used by default. WINDOWS_OS_VERSION must be one of: ltsc2019, ltsc2022.
--workload-metadata=WORKLOAD_METADATA
Type of metadata server available to pods running in the node pool. WORKLOAD_METADATA must be one of:
EXPOSED
[DEPRECATED] Pods running in this node pool have access to the node's underlying Compute Engine Metadata Server.
GCE_METADATA
Pods running in this node pool have access to the node's underlying Compute Engine Metadata Server.
GKE_METADATA
Run the Kubernetes Engine Metadata Server on this node. The Kubernetes Engine Metadata Server exposes a metadata API to workloads that is compatible with the V1 Compute Metadata APIs exposed by the Compute Engine and App Engine Metadata Servers. This feature can only be enabled if Workload Identity is enabled at the cluster level.
GKE_METADATA_SERVER
[DEPRECATED] Run the Kubernetes Engine Metadata Server on this node. The Kubernetes Engine Metadata Server exposes a metadata API to workloads that is compatible with the V1 Compute Metadata APIs exposed by the Compute Engine and App Engine Metadata Servers. This feature can only be enabled if Workload Identity is enabled at the cluster level.
SECURE
[DEPRECATED] Prevents pods not in hostNetwork from accessing certain VM metadata, specifically kube-env, which contains Kubelet credentials, and the instance identity token. This is a temporary security solution available while the bootstrapping process for cluster nodes is being redesigned with significant security improvements. This feature is scheduled to be deprecated in the future and later removed.
At most one of these can be specified:
--create-pod-ipv4-range=[KEY=VALUE,…]
Create a new pod range for the node pool. The name and range of the pod range can be customized via optional name and range keys.

name specifies the name of the secondary range to be created.

range specifies the IP range for the new secondary range. This can either be a netmask size (e.g. "/20") or a CIDR range (e.g. "10.0.0.0/20"). If a netmask size is specified, the IP is automatically taken from the free space in the cluster's network.

Must be used in VPC native clusters. Cannot be used in conjunction with the --pod-ipv4-range option.

Examples:

Create a new pod range with a default name and size.

gcloud alpha container node-pools create --create-pod-ipv4-range ""

Create a new pod range named my-range with a netmask of size 21.

gcloud alpha container node-pools create --create-pod-ipv4-range name=my-range,range=/21

Create a new pod range with a default name and the primary range of 10.100.0.0/16.

gcloud alpha container node-pools create --create-pod-ipv4-range range=10.100.0.0/16

Create a new pod range with the name my-range and a default range.

gcloud alpha container node-pools create --create-pod-ipv4-range name=my-range

--pod-ipv4-range=NAME
Set the pod range to be used as the source for pod IPs for the pods in this node pool. NAME must be the name of an existing subnetwork secondary range in the subnetwork for this cluster.

Must be used in VPC native clusters. Cannot be used with --create-pod-ipv4-range.

Examples:

Specify a pod range called other-range:

gcloud alpha container node-pools create --pod-ipv4-range other-range
Cluster autoscaling
--enable-autoscaling
Enables autoscaling for a node pool.

Enables autoscaling in the node pool specified by --node-pool, or the default node pool if --node-pool is not provided. If not already set, --max-nodes or --total-max-nodes must also be specified.

--location-policy=LOCATION_POLICY
Location policy specifies the algorithm used when scaling up the node pool.
  • BALANCED - Is a best effort policy that aims to balance the sizes of available zones.
  • ANY - Instructs the cluster autoscaler to prioritize utilization of unused reservations, and reduces preemption risk for Spot VMs.

LOCATION_POLICY must be one of: BALANCED, ANY.

--max-nodes=MAX_NODES
Maximum number of nodes per zone in the node pool.

Maximum number of nodes per zone to which the node pool specified by --node-pool (or the default node pool if unspecified) can scale. Ignored unless --enable-autoscaling is also specified.

--min-nodes=MIN_NODES
Minimum number of nodes per zone in the node pool.

Minimum number of nodes per zone to which the node pool specified by --node-pool (or the default node pool if unspecified) can scale. Ignored unless --enable-autoscaling is also specified.

--total-max-nodes=TOTAL_MAX_NODES
Maximum number of all nodes in the node pool.

Maximum number of all nodes to which the node pool specified by --node-pool (or the default node pool if unspecified) can scale. Ignored unless --enable-autoscaling is also specified.

--total-min-nodes=TOTAL_MIN_NODES
Minimum number of all nodes in the node pool.

Minimum number of all nodes to which the node pool specified by --node-pool (or the default node pool if unspecified) can scale. Ignored unless --enable-autoscaling is also specified.

Specifies the minimum number of nodes to be created when best effort provisioning is enabled.
--enable-best-effort-provision
Enables best effort provisioning for nodes.
--min-provision-nodes=MIN_PROVISION_NODES
Specifies the minimum number of nodes to be provisioned during creation
At most one of these can be specified:
--ephemeral-storage[=[local-ssd-count=LOCAL-SSD-COUNT]]
Parameters for the ephemeral storage filesystem. If unspecified, ephemeral storage is backed by the boot disk.

Examples:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --ephemeral-storage local-ssd-count=2

'local-ssd-count' specifies the number of local SSDs to use to back ephemeral storage. Local SSDs use NVMe interfaces. For first- and second-generation machine types, a nonzero count field is required for local SSD to be configured. For third-generation machine types, the count field is optional because the count is inferred from the machine type.

See https://cloud.google.com/compute/docs/disks/local-ssd for more information.

--ephemeral-storage-local-ssd[=[count=COUNT]]
Parameters for the ephemeral storage filesystem. If unspecified, ephemeral storage is backed by the boot disk.

Examples:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --ephemeral-storage-local-ssd count=2

'count' specifies the number of local SSDs to use to back ephemeral storage. Local SSDs use NVMe interfaces. For first- and second-generation machine types, a nonzero count field is required for local SSD to be configured. For third-generation machine types, the count field is optional because the count is inferred from the machine type.

See https://cloud.google.com/compute/docs/disks/local-ssd for more information.

--local-nvme-ssd-block[=[count=COUNT]]
Adds the requested local SSDs on all nodes in default node pool(s) in the new cluster.

Examples:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --local-nvme-ssd-block count=2

'count' must be between 1 and 8.

New nodes, including ones created by resize or recreate, will have these local SSDs.

For first- and second-generation machine types, a nonzero count field is required for local SSD to be configured. For third-generation machine types, the count field is optional because the count is inferred from the machine type.

See https://cloud.google.com/compute/docs/disks/local-ssd for more information.

--local-ssd-count=LOCAL_SSD_COUNT
--local-ssd-count is the equivalent of using --local-ssd-volumes with type=scsi,format=fs.

The number of local SSD disks to provision on each node, formatted and mounted in the filesystem.

Local SSDs have a fixed 375 GB capacity per device. The number of disks that can be attached to an instance is limited by the maximum number of disks available on a machine, which differs by compute zone. See https://cloud.google.com/compute/docs/disks/local-ssd for more information.

--local-ssd-volumes=[[count=COUNT],[type=TYPE],[format=FORMAT],…]
Adds the requested local SSDs on all nodes in default node pool(s) in the new cluster.

Examples:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --local-ssd-volumes count=2,type=nvme,format=fs

'count' must be between 1 and 8.

'type' must be either scsi or nvme.

'format' must be either fs or block.

New nodes, including ones created by resize or recreate, will have these local SSDs.

Local SSDs have a fixed 375 GB capacity per device. The number of disks that can be attached to an instance is limited by the maximum number of disks available on a machine, which differs by compute zone. See https://cloud.google.com/compute/docs/disks/local-ssd for more information.

At most one of these can be specified:
--location=LOCATION
Compute zone or region (e.g. us-central1-a or us-central1) for the cluster. Overrides the default compute/region or compute/zone value for this command invocation. Prefer using this flag over the --region or --zone flags.
--region=REGION
Compute region (e.g. us-central1) for a regional cluster. Overrides the default compute/region property value for this command invocation.
--zone=ZONE, -z ZONE
Compute zone (e.g. us-central1-a) for a zonal cluster. Overrides the default compute/zone property value for this command invocation.
Specifies the reservation for the node pool.
--reservation=RESERVATION
The name of the reservation, required when --reservation-affinity=specific.
--reservation-affinity=RESERVATION_AFFINITY
The type of the reservation for the node pool. RESERVATION_AFFINITY must be one of: any, none, specific.
Options to specify the node identity.
Scopes options.
--scopes=[SCOPE,…]; default="gke-default"
Specifies scopes for the node instances.

Examples:

gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --scopes=https://www.googleapis.com/auth/devstorage.read_only
gcloud alpha container node-pools create node-pool-1 --cluster=example-cluster --scopes=bigquery,storage-rw,compute-ro

Multiple scopes can be specified, separated by commas. Various scopes are automatically added based on feature usage. Such scopes are not added if an equivalent scope already exists.

  • monitoring-write: always added to ensure metrics can be written
  • logging-write: added if Cloud Logging is enabled (--enable-cloud-logging/--logging)
  • monitoring: added if Cloud Monitoring is enabled (--enable-cloud-monitoring/--monitoring)
  • gke-default: added for Autopilot clusters that use the default service account
  • cloud-platform: added for Autopilot clusters that use any other service account

SCOPE can be either the full URI of the scope or an alias. Default scopes are assigned to all instances. Available aliases are:

Alias                  URI
bigquery               https://www.googleapis.com/auth/bigquery
cloud-platform         https://www.googleapis.com/auth/cloud-platform
cloud-source-repos     https://www.googleapis.com/auth/source.full_control
cloud-source-repos-ro  https://www.googleapis.com/auth/source.read_only
compute-ro             https://www.googleapis.com/auth/compute.readonly
compute-rw             https://www.googleapis.com/auth/compute
datastore              https://www.googleapis.com/auth/datastore
default                https://www.googleapis.com/auth/devstorage.read_only
                       https://www.googleapis.com/auth/logging.write
                       https://www.googleapis.com/auth/monitoring.write
                       https://www.googleapis.com/auth/pubsub
                       https://www.googleapis.com/auth/service.management.readonly
                       https://www.googleapis.com/auth/servicecontrol
                       https://www.googleapis.com/auth/trace.append
gke-default            https://www.googleapis.com/auth/devstorage.read_only
                       https://www.googleapis.com/auth/logging.write
                       https://www.googleapis.com/auth/monitoring
                       https://www.googleapis.com/auth/service.management.readonly
                       https://www.googleapis.com/auth/servicecontrol
                       https://www.googleapis.com/auth/trace.append
logging-write          https://www.googleapis.com/auth/logging.write
monitoring             https://www.googleapis.com/auth/monitoring
monitoring-read        https://www.googleapis.com/auth/monitoring.read
monitoring-write       https://www.googleapis.com/auth/monitoring.write
pubsub                 https://www.googleapis.com/auth/pubsub
service-control        https://www.googleapis.com/auth/servicecontrol
service-management     https://www.googleapis.com/auth/service.management.readonly
sql (deprecated)       https://www.googleapis.com/auth/sqlservice
sql-admin              https://www.googleapis.com/auth/sqlservice.admin
storage-full           https://www.googleapis.com/auth/devstorage.full_control
storage-ro             https://www.googleapis.com/auth/devstorage.read_only
storage-rw             https://www.googleapis.com/auth/devstorage.read_write
taskqueue              https://www.googleapis.com/auth/taskqueue
trace                  https://www.googleapis.com/auth/trace.append
userinfo-email         https://www.googleapis.com/auth/userinfo.email
DEPRECATION WARNING: The https://www.googleapis.com/auth/sqlservice account scope and the sql alias do not provide SQL instance management capabilities and have been deprecated. Please use https://www.googleapis.com/auth/sqlservice.admin or sql-admin to manage your Google SQL Service instances.
--service-account=SERVICE_ACCOUNT
The Google Cloud Platform Service Account to be used by the node VMs. If a service account is specified, the cloud-platform and userinfo.email scopes are used. If no Service Account is specified, the project default service account is used.
GCLOUD WIDE FLAGS
These flags are available to all commands: --access-token-file, --account, --billing-project, --configuration, --flags-file, --flatten, --format, --help, --impersonate-service-account, --log-http, --project, --quiet, --trace-token, --user-output-enabled, --verbosity.

Run $ gcloud help for details.

NOTES
This command is currently in alpha and might change without notice. If this command fails with API permission errors despite specifying the correct project, you might be trying to access an API with an invitation-only early access allowlist. These variants are also available:
gcloud container node-pools create
gcloud beta container node-pools create

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2025-12-16 UTC.