Create a VPC-native cluster
This page explains how to configure VPC-native clusters in Google Kubernetes Engine (GKE).
To learn more about the benefits and requirements of VPC-native clusters, see the overview for VPC-native clusters.
For GKE Autopilot clusters, VPC-native networks are enabled by default and can't be overridden.
Before you begin
Before you start, make sure that you have performed the following tasks:
- Enable the Google Kubernetes Engine API.
- If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running the gcloud components update command. Earlier gcloud CLI versions might not support running the commands in this document. Note: For existing gcloud CLI installations, make sure to set the compute/region property. If you primarily use zonal clusters, set the compute/zone property instead. By setting a default location, you can avoid errors in the gcloud CLI like the following: One of [--zone, --region] must be supplied: Please specify location. You might need to specify the location in certain commands if the location of your cluster differs from the default that you set.
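For example, if you work mostly with regional clusters, you can set a default region with a command like the following; the region shown here is only an illustration:

gcloud config set compute/region us-central1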
Limitations
You can't convert a VPC-native cluster into a routes-based cluster, and you can't convert a routes-based cluster into a VPC-native cluster.
VPC-native clusters require VPC networks. Legacy networks are not supported.
As with any GKE cluster, Service (ClusterIP) addresses are only available from within the cluster. If you need to access a Kubernetes Service from VM instances outside of the cluster, but within the cluster's VPC network and region, create an internal passthrough Network Load Balancer.
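For reference, the following is a minimal sketch of a Service manifest that requests an internal passthrough Network Load Balancer; the Service name, the app selector, and the ports are placeholders rather than values from this page:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: example-internal-service    # hypothetical name
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: example-app                 # hypothetical label selector
  ports:
  - port: 80
    targetPort: 8080
EOF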
If you use all of the Pod IP addresses in a subnet, you can't replace the subnet's secondary IP address range without putting the cluster into an unstable state. However, you can create additional Pod IP address ranges using discontiguous multi-Pod CIDR.
Create a cluster
This section shows you how to complete the following tasks at cluster creation time:
- Create a cluster and subnet simultaneously.
- Create a cluster in an existing subnet.
- Create a cluster and select the control plane IP address range.
- Create a cluster with dual-stack networking in a new subnet (available in Autopilot clusters version 1.25 or later, and Standard clusters version 1.24 or later).
- Create a dual-stack cluster and a dual-stack subnet simultaneously (available in Autopilot clusters version 1.25 or later, and Standard clusters version 1.24 or later).

You can also create a cluster and enable auto IP address management in your cluster (Preview), which means that GKE automatically creates subnets and manages IP addresses for you. For more information, see Use auto IP address management.
After you create the cluster, you can modify access to the cluster's control plane. To learn more, see Customize your network isolation in GKE.
Create a cluster and subnet simultaneously
The following directions demonstrate how to create a VPC-native GKE cluster and subnet at the same time. The secondary range assignment method is managed by GKE when you perform these two steps with one command.
If you use Shared VPC, you can't simultaneously create the cluster and subnet. Instead, a Network administrator in the Shared VPC host project must create the subnet first. Then you can create the cluster in an existing subnet with a secondary range assignment method of user-managed.
gcloud
To create a VPC-native cluster and subnet simultaneously, run the following command:
gcloud container clusters create CLUSTER_NAME \
    --location=COMPUTE_LOCATION \
    --enable-ip-alias \
    --create-subnetwork name=SUBNET_NAME,range=NODE_IP_RANGE \
    --cluster-ipv4-cidr=POD_IP_RANGE \
    --services-ipv4-cidr=SERVICES_IP_RANGE

Replace the following:
- CLUSTER_NAME: the name of the GKE cluster.
- COMPUTE_LOCATION: the Compute Engine location for the cluster.
- SUBNET_NAME: the name of the subnet to create. The subnet's region is the same region as the cluster (or the region containing the zonal cluster). Use an empty string (name="") if you want GKE to generate a name for you.
- NODE_IP_RANGE: an IP address range in CIDR notation, such as 10.5.0.0/20, or the size of a CIDR block's subnet mask, such as /20. This is used to create the subnet's primary IP address range for nodes. If omitted, GKE chooses an available IP range in the VPC with a size of /20.
- POD_IP_RANGE: an IP address range in CIDR notation, such as 10.0.0.0/14, or the size of a CIDR block's subnet mask, such as /14. This is used to create the subnet's secondary IP address range for Pods. If omitted, GKE uses a randomly chosen /14 range containing 2^18 addresses. The automatically chosen range is randomly selected from 10.0.0.0/8 (a range of 2^24 addresses) and does not include IP address ranges allocated to VMs, existing routes, or ranges allocated to other clusters. The automatically chosen range might conflict with reserved IP addresses, dynamic routes, or routes within VPCs that peer with this cluster. If you use any of these, you should specify --cluster-ipv4-cidr to prevent conflicts.
- SERVICES_IP_RANGE: an IP address range in CIDR notation, such as 10.4.0.0/19, or the size of a CIDR block's subnet mask, such as /19. This is used to create the subnet's secondary IP address range for Services. If omitted, GKE uses /20, the default Services IP address range size.
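As an illustration, the following invocation uses the example values from the descriptions above; the cluster name, subnet name, location, and CIDR values are placeholders:

gcloud container clusters create example-cluster \
    --location=us-central1 \
    --enable-ip-alias \
    --create-subnetwork name=example-subnet,range=10.5.0.0/20 \
    --cluster-ipv4-cidr=10.0.0.0/14 \
    --services-ipv4-cidr=10.4.0.0/19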
Console
You can't create a cluster and subnet simultaneously using the Google Cloud console. Instead, first create a subnet, then create the cluster in an existing subnet.
API
To create a VPC-native cluster, define an IPAllocationPolicy object in your cluster resource:

{
  "name": CLUSTER_NAME,
  "description": DESCRIPTION,
  ...
  "ipAllocationPolicy": {
    "useIpAliases": true,
    "createSubnetwork": true,
    "subnetworkName": SUBNET_NAME
  },
  ...
}

The createSubnetwork field automatically creates and provisions a subnetwork for the cluster. The subnetworkName field is optional; if left empty, a name is automatically chosen for the subnetwork.
After you create the cluster, you can modify access to the cluster's control plane. To learn more, see Customize your network isolation in GKE.
Create a cluster in an existing subnet
The following instructions demonstrate how to create a VPC-native GKE cluster in an existing subnet with your choice of secondary range assignment method.
gcloud
To use a secondary range assignment method of managed by GKE, run the following command:

gcloud container clusters create CLUSTER_NAME \
    --location=COMPUTE_LOCATION \
    --enable-ip-alias \
    --subnetwork=SUBNET_NAME \
    --cluster-ipv4-cidr=POD_IP_RANGE \
    --services-ipv4-cidr=SERVICES_IP_RANGE

To use a secondary range assignment method of user-managed, run the following command:

gcloud container clusters create CLUSTER_NAME \
    --location=COMPUTE_LOCATION \
    --enable-ip-alias \
    --subnetwork=SUBNET_NAME \
    --cluster-secondary-range-name=SECONDARY_RANGE_PODS \
    --services-secondary-range-name=SECONDARY_RANGE_SERVICES
Replace the following:
- CLUSTER_NAME: the name of the GKE cluster.
- COMPUTE_LOCATION: the Compute Engine location for the cluster.
- SUBNET_NAME: the name of an existing subnet. The subnet's primary IP address range is used for nodes. The subnet must exist in the same region as the one used by the cluster. If omitted, GKE attempts to use a subnet in the default VPC network in the cluster's region.

If the secondary range assignment method is managed by GKE:

- POD_IP_RANGE: an IP address range in CIDR notation, such as 10.0.0.0/14, or the size of a CIDR block's subnet mask, such as /14. This is used to create the subnet's secondary IP address range for Pods. If you omit the --cluster-ipv4-cidr option, GKE chooses a /14 range (2^18 addresses) automatically. The automatically chosen range is randomly selected from 10.0.0.0/8 (a range of 2^24 addresses) and won't include IP address ranges allocated to VMs, existing routes, or ranges allocated to other clusters. The automatically chosen range might conflict with reserved IP addresses, dynamic routes, or routes within VPCs that peer with this cluster. If you use any of these, you should specify --cluster-ipv4-cidr to prevent conflicts.
- SERVICES_IP_RANGE: an IP address range in CIDR notation (for example, 10.4.0.0/19) or the size of a CIDR block's subnet mask (for example, /19). This is used to create the subnet's secondary IP address range for Services.

If the secondary range assignment method is user-managed (an end-to-end example follows this list):

- SECONDARY_RANGE_PODS: the name of an existing secondary IP address range in the specified SUBNET_NAME. GKE uses the entire subnet secondary IP address range for the cluster's Pods.
- SECONDARY_RANGE_SERVICES: the name of an existing secondary IP address range in the SUBNET_NAME.
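For example, a user-managed setup might first create the subnet with named secondary ranges, and then reference those names when creating the cluster. The subnet, network, range names, location, and CIDRs below are placeholders:

# Create a subnet with two named secondary ranges.
gcloud compute networks subnets create example-subnet \
    --network=example-network \
    --region=us-central1 \
    --range=10.5.0.0/20 \
    --secondary-range=pods-range=10.0.0.0/14,services-range=10.4.0.0/19

# Create a cluster that references those ranges.
gcloud container clusters create example-cluster \
    --location=us-central1 \
    --enable-ip-alias \
    --subnetwork=example-subnet \
    --cluster-secondary-range-name=pods-range \
    --services-secondary-range-name=services-range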
Console
- In the Google Cloud console, go to the Create an Autopilot cluster page.
Go to Create an Autopilot cluster
You can also complete this task by creating a Standard cluster.
- From the navigation pane, under Cluster, click Networking.
- Under Control Plane Access, configure access to the control plane endpoints.
- In the Cluster networking section, in the Network drop-down list, select a VPC.
- In the Node subnet drop-down list, select a subnet for the cluster.
- Ensure that the Enable VPC-native traffic routing (uses alias IP) checkbox is selected.
- Select the Automatically create secondary ranges checkbox if you want the secondary range assignment method to be managed by GKE. Clear this checkbox if you have already created secondary ranges for the chosen subnet and would like the secondary range assignment method to be user-managed.
- In the Pod address range field, enter a Pod range, such as 10.0.0.0/14.
- In the Service address range field, enter a Service range, such as 10.4.0.0/19.
- Configure your cluster.
- Click Create.
Terraform
You can create a VPC-native cluster with Terraform using a Terraform module.
For example, you can add the following block to your Terraform configuration:
module"gke"{source="terraform-google-modules/kubernetes-engine/google"version="~> 12.0"project_id="PROJECT_ID"name="CLUSTER_NAME"region="COMPUTE_LOCATION"network="NETWORK_NAME"subnetwork="SUBNET_NAME"ip_range_pods="SECONDARY_RANGE_PODS"ip_range_services="SECONDARY_RANGE_SERVICES"}Replace the following:
PROJECT_ID: your project ID.CLUSTER_NAME: the name of the GKEcluster.COMPUTE_LOCATION: theCompute Engine locationfor the cluster. For Terraform, the Compute Engine region.NETWORK_NAME: the name of an existing network.SUBNET_NAME: the name of an existing subnet. Thesubnet's primary IP address range is used for nodes. The subnet must existin the same region as the one used by the cluster.SECONDARY_RANGE_PODS: the name of an existingsecondary IP address range inSUBNET_NAME.SECONDARY_RANGE_SERVICES: the name of an existingsecondary IP address range inSUBNET_NAME.
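After you fill in these values, you can apply the configuration with the standard Terraform workflow, for example:

terraform init    # download the module and providers
terraform plan    # preview the cluster and networking changes
terraform apply   # create the VPC-native cluster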
API
When you create a VPC-native cluster, you define an IPAllocationPolicy object. You can reference existing subnet secondary IP address ranges, or you can specify CIDR blocks. Reference existing subnet secondary IP address ranges to create a cluster whose secondary range assignment method is user-managed. Provide CIDR blocks if you want the range assignment method to be managed by GKE.

{
  "name": CLUSTER_NAME,
  "description": DESCRIPTION,
  ...
  "ipAllocationPolicy": {
    "useIpAliases": true,
    "clusterIpv4CidrBlock": string,
    "servicesIpv4CidrBlock": string,
    "clusterSecondaryRangeName": string,
    "servicesSecondaryRangeName": string
  },
  ...
}

This object includes the following values:

- "clusterIpv4CidrBlock": the CIDR range for Pods. This determines the size of the secondary range for Pods, and can be in CIDR notation, such as 10.0.0.0/14. An empty space with the given size is chosen from the available space in your VPC. If left blank, a valid range is found and created with a default size.
- "servicesIpv4CidrBlock": the CIDR range for Services. See the description of "clusterIpv4CidrBlock".
- "clusterSecondaryRangeName": the name of the secondary range for Pods. The secondary range must already exist and belong to the subnetwork associated with the cluster.
- "servicesSecondaryRangeName": the name of the secondary range for Services. The secondary range must already exist and belong to the subnetwork associated with the cluster.
After you create the cluster, you can modify access to the cluster's control plane. To learn more, see Customize your network isolation in GKE.
Create a cluster and select the control plane IP address range
By default, clusters running version 1.29 or later use the primary subnet range to provision the internal IP address assigned to the control plane endpoint. You can override this default by selecting a different subnet range, but only at cluster creation time.
The following sections show you how to create a cluster and override the subnet range.
gcloud
gcloud container clusters create CLUSTER_NAME \
    --enable-private-nodes \
    --private-endpoint-subnetwork=SUBNET_NAME \
    --location=COMPUTE_LOCATION

Where:
- The enable-private-nodes flag is optional and tells GKE to create the cluster with private nodes.
- The private-endpoint-subnetwork flag defines the IP address range of the control plane internal endpoint. You can use the master-ipv4-cidr flag instead of the private-endpoint-subnetwork flag to provision the internal IP address for the control plane. To choose which flag to use, consider the following configurations:
  - If you create a cluster with the enable-private-nodes flag, the master-ipv4-cidr and private-endpoint-subnetwork flags are optional.
  - If you use the private-endpoint-subnetwork flag, GKE provisions the control plane internal endpoint with an IP address from the range that you define.
  - If you use the master-ipv4-cidr flag, GKE creates a new subnet from the values that you provide. GKE provisions the control plane internal endpoint with an IP address from this new range.
  - If you omit both the private-endpoint-subnetwork and the master-ipv4-cidr flags, GKE provisions the control plane internal endpoint with an IP address from the cluster's subnetwork.
Replace the following:
- CLUSTER_NAME: the name of the GKE cluster.
- SUBNET_NAME: the name of an existing subnet used to provision the internal IP address.
- COMPUTE_LOCATION: the Compute Engine location for the cluster.
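Alternatively, the following sketch uses the master-ipv4-cidr flag so that GKE creates a new range for the control plane endpoint instead of drawing from an existing subnet; the cluster name, location, and /28 CIDR are placeholders:

gcloud container clusters create example-cluster \
    --location=us-central1 \
    --enable-private-nodes \
    --master-ipv4-cidr=172.16.0.32/28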
GKE creates a cluster with Private Service Connect. After you create the cluster, you can modify access to the cluster's control plane. To learn more, see Customize your network isolation in GKE.
Console
To assign a subnet to the control plane of a new cluster, you must add a subnet first. Complete the following steps:
- In the Google Cloud console, go to the Create an Autopilot cluster page.
Go to Create an Autopilot cluster
You can also complete this task by creating a Standard cluster.
- In the Standard or Autopilot section, click Configure.
- For the Name, enter your cluster name.
- From the navigation pane, under Cluster, click Networking.
- Under Control Plane Access, configure access to the control plane endpoints.
- In the Cluster networking section, select the Override control plane's default private endpoint subnet checkbox.
- In the Private endpoint subnet list, select your created subnet.
- Click Done. Add additional authorized networks as needed.
Project metadata for secondary IP address ranges
When you create or upgrade a GKE cluster, GKE automatically adds project-level metadata entries like google_compute_project_metadata to track secondary IP address range usage, including in Shared VPC environments. This metadata verifies that GKE correctly allocates IP addresses for Pods and Services, which helps prevent conflicts.
GKE automatically manages this metadata.
Warning: Don't manually modify or remove this metadata, because doing so might cause issues with IP address allocation and disrupt the operation of your GKE cluster.
Note: If you use Infrastructure as Code (IaC) tools, such as Terraform, you must configure your IaC tool to ignore these metadata entries to prevent configuration drift.
The metadata has the following format:

key: gke-REGION-CLUSTER_NAME-GKE_UID-secondary-ranges
value: pods:VPC_NETWORK:VPC_SUBNETWORK:CLUSTER_PODS_SECONDARY_RANGE_NAME

where:

- REGION: the Google Cloud region where the cluster is located.
- CLUSTER_NAME: the name of the GKE cluster.
- GKE_UID: a unique identifier for the GKE cluster.
- VPC_NETWORK: the name of the VPC network used by the cluster.
- VPC_SUBNETWORK: the name of the subnetwork within the VPC network used by the cluster.
- CLUSTER_PODS_SECONDARY_RANGE_NAME: the name of the secondary IP address range that is used for the cluster's Pods.

The VPC_NETWORK and VPC_SUBNETWORK variables refer to the network and subnetwork used by the cluster, which could be a Shared VPC host network or the cluster's own VPC network.

Create a cluster with dual-stack networking
You can create a cluster with IPv4/IPv6 dual-stack networking on a new or existing dual-stack subnet. Dual-stack subnets are available in Autopilot clusters version 1.25 or later, and Standard clusters version 1.24 or later. Dual-stack subnets are not supported with Windows Server node pools.
Note: You can configure your cluster to have nodes with internal (private) or external (public) access by using the enable-private-nodes flag. The enable-private-nodes flag only affects the IPv4 addresses of the nodes in your cluster. Therefore, this flag doesn't affect the configuration of the IPv4/IPv6 dual-stack networking that you define in this section.
Before setting up dual-stack clusters, we recommend that you complete the following actions:
- Learn more about the benefits and requirements of GKE clusters with dual-stack networking.
- See the restrictions and limitations of dual-stack networking.
In this section, you create a dual-stack subnet first and use this subnet to create a cluster.
To create a dual-stack subnet, run the following command:
gcloud compute networks subnets create SUBNET_NAME \
    --stack-type=ipv4-ipv6 \
    --ipv6-access-type=ACCESS_TYPE \
    --network=NETWORK_NAME \
    --range=PRIMARY_RANGE \
    --region=COMPUTE_REGION

Replace the following:
- SUBNET_NAME: the name of the subnet that you choose.
- ACCESS_TYPE: the routability to the public internet. Use INTERNAL for internal IPv6 addresses or EXTERNAL for external IPv6 addresses. If --ipv6-access-type is not specified, the default access type is EXTERNAL.
- NETWORK_NAME: the name of the network that will contain the new subnet. This network must meet the following conditions:
  - It must be a custom mode VPC network. For more information, see how to switch a VPC network from auto mode to custom mode.
  - If you replace ACCESS_TYPE with INTERNAL, the network must use Unique Local IPv6 Unicast Addresses (ULA).
- PRIMARY_RANGE: the primary IPv4 address range for the new subnet, in CIDR notation. For more information, see Subnet ranges.
- COMPUTE_REGION: the compute region for the cluster.
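For example, the following command creates a dual-stack subnet with external IPv6 addresses; the subnet, network, range, and region values are placeholders:

gcloud compute networks subnets create example-dual-stack-subnet \
    --stack-type=ipv4-ipv6 \
    --ipv6-access-type=EXTERNAL \
    --network=example-network \
    --range=10.10.0.0/20 \
    --region=us-central1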
To create a cluster with a dual-stack subnet, either use the gcloud CLI or the Google Cloud console:
gcloud
For Autopilot clusters, run the following command:
gcloud container clusters create-auto CLUSTER_NAME \
    --location=COMPUTE_LOCATION \
    --network=NETWORK_NAME \
    --subnetwork=SUBNET_NAME

Replace the following:

- CLUSTER_NAME: the name of your new Autopilot cluster.
- COMPUTE_LOCATION: the Compute Engine location for the cluster.
- NETWORK_NAME: the name of a VPC network that contains the subnet. This VPC network must be a custom mode VPC network. For more information, see how to switch a VPC network from auto mode to custom mode.
- SUBNET_NAME: the name of the dual-stack subnet.

GKE Autopilot clusters default to a dual-stack cluster when you use a dual-stack subnet. After cluster creation, you can update the Autopilot cluster to be IPv4-only.
For Standard clusters, run the following command:
gcloud container clusters create CLUSTER_NAME \
    --enable-ip-alias \
    --enable-dataplane-v2 \
    --stack-type=ipv4-ipv6 \
    --network=NETWORK_NAME \
    --subnetwork=SUBNET_NAME \
    --location=COMPUTE_LOCATION

Replace the following:

- CLUSTER_NAME: the name of the new cluster.
- NETWORK_NAME: the name of a VPC network that contains the subnet. This VPC network must be a custom mode VPC network that uses Unique Local IPv6 Unicast Addresses (ULA). For more information, see how to switch a VPC network from auto mode to custom mode.
- SUBNET_NAME: the name of the subnet.
- COMPUTE_LOCATION: the Compute Engine location for the cluster.
Console
- In the Google Cloud console, go to the Create an Autopilot cluster page.
Go to Create an Autopilot cluster
You can also complete this task by creating a Standard cluster.
- In the Standard or Autopilot section, click Configure.
- Configure your cluster as needed.
- From the navigation pane, under Cluster, click Networking.
- Under Control Plane Access, configure access to the control plane endpoints.
- In the Cluster networking section, in the Network list, select the name of your network.
- In the Node subnet list, select the name of your dual-stack subnet.
- For Standard clusters, select the IPv4 and IPv6 (dual stack) radio button. This option is available only if you selected a dual-stack subnet. Autopilot clusters default to a dual-stack cluster when you use a dual-stack subnet.
- Click Create.
Create a dual-stack cluster and a subnet simultaneously
You can create a subnet and a dual-stack cluster simultaneously. GKE creates an IPv6 subnet and assigns an external IPv6 primary range to the subnet.
If you use Shared VPC, you can't simultaneously create the cluster and subnet. Instead, a Network Admin in the Shared VPC host project must create the dual-stack subnet first.
For Autopilot clusters, run the following command:
gcloud container clusters create-auto CLUSTER_NAME \
    --location=COMPUTE_LOCATION \
    --network=NETWORK_NAME \
    --create-subnetwork name=SUBNET_NAME

Replace the following:

- CLUSTER_NAME: the name of your new Autopilot cluster.
- COMPUTE_LOCATION: the Compute Engine location for the cluster.
- NETWORK_NAME: the name of a VPC network that contains the subnet. This VPC network must be a custom mode VPC network that uses Unique Local IPv6 Unicast Addresses (ULA). For more information, see how to switch a VPC network from auto mode to custom mode.
- SUBNET_NAME: the name of the new subnet. GKE can create the subnet based on your organization policies:
  - If your organization policies allow dual-stack, and the network is custom mode, GKE creates a dual-stack subnet and assigns an external IPv6 primary range to the subnet.
  - If your organization policies don't allow dual-stack, or if the network is in auto mode, GKE creates a single-stack (IPv4) subnet.
For Standard clusters, run the following command:
gcloud container clusters create CLUSTER_NAME \
    --enable-ip-alias \
    --stack-type=ipv4-ipv6 \
    --ipv6-access-type=ACCESS_TYPE \
    --network=NETWORK_NAME \
    --create-subnetwork name=SUBNET_NAME,range=PRIMARY_RANGE \
    --location=COMPUTE_LOCATION

Replace the following:

- CLUSTER_NAME: the name of the new cluster that you choose.
- ACCESS_TYPE: the routability to the public internet. Use INTERNAL for internal IPv6 addresses or EXTERNAL for external IPv6 addresses. If --ipv6-access-type is not specified, the default access type is EXTERNAL.
- NETWORK_NAME: the name of the network that will contain the new subnet. This network must meet the following conditions:
  - It must be a custom mode VPC network. For more information, see how to switch a VPC network from auto mode to custom mode.
  - If you replace ACCESS_TYPE with INTERNAL, the network must use Unique Local IPv6 Unicast Addresses (ULA).
- SUBNET_NAME: the name of the new subnet that you choose.
- PRIMARY_RANGE: the primary IPv4 address range for the new subnet, in CIDR notation. For more information, see Subnet ranges.
- COMPUTE_LOCATION: the Compute Engine location for the cluster.
Update the stack type
You can change the stack type of an existing cluster or update an existing subnet to a dual-stack subnet.
Update the stack type on an existing cluster
Before you change the stack type on an existing cluster, consider the following limitations:

- Changing the stack type is supported in new GKE clusters running version 1.25 or later. GKE clusters that have been upgraded from version 1.24 to version 1.25 or 1.26 might get validation errors when enabling dual-stack networking. If you encounter errors, contact the Google Cloud support team.
- Changing the stack type is a disruptive operation because GKE restarts components in both the control plane and the nodes.
- GKE respects your configured maintenance windows when recreating nodes. This means that the new stack type won't be operational on the cluster until the next maintenance window occurs. If you prefer not to wait, you can manually upgrade the node pool by setting the --cluster-version flag to the same GKE version that the control plane is already running. You must use the gcloud CLI if you use this workaround. For more information, see caveats for maintenance windows.
- Changing the stack type does not automatically change the IP family of existing Services. The following conditions apply:
  - If you change a single-stack cluster to dual-stack, the existing Services remain single stack.
  - If you change a dual-stack cluster to single stack, the existing Services with IPv6 addresses go into an error state. Delete the Service and create one with the correct ipFamilies (a sketch follows this list). To learn more, see an example of how to set up a Deployment.
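For reference, the following is a minimal sketch of a dual-stack Service that sets ipFamilies explicitly; the Service name, the app selector, and the port are placeholders:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: example-dual-stack-service   # hypothetical name
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
  - IPv4
  - IPv6
  selector:
    app: example-app                 # hypothetical label selector
  ports:
  - port: 80
EOF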
To update an existing VPC-native cluster, you can use the gcloud CLI or the Google Cloud console:
gcloud
Run the following command:
gcloud container clusters update CLUSTER_NAME \
    --stack-type=STACK_TYPE \
    --location=COMPUTE_LOCATION

Replace the following:
- CLUSTER_NAME: the name of the cluster that you want to update.
- STACK_TYPE: the stack type. Replace with one of the following values:
  - ipv4: to update a dual-stack cluster to an IPv4-only cluster. GKE uses the primary IPv4 address range of the cluster's subnet.
  - ipv4-ipv6: to update an existing IPv4 cluster to dual-stack. You can only change a cluster to dual-stack if the underlying subnet supports dual-stack. To learn more, see Update an existing subnet to a dual-stack subnet.
- COMPUTE_LOCATION: the Compute Engine location for the cluster.
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.
Next to the cluster you want to edit, click Actions, then click Edit.
In the Networking section, next to Stack type, click Edit.
In the Edit stack type dialog, select the checkbox for the cluster stack type you need.
Click Save Changes.
Update an existing subnet to a dual-stack subnet

Updating a subnet to dual-stack is available in Autopilot clusters version 1.25 or later, and Standard clusters version 1.24 or later.
To update an existing subnet to a dual-stack subnet, run the following command. Updating a subnet does not affect any existing IPv4 clusters in the subnet.
gcloud compute networks subnets update SUBNET_NAME \
    --stack-type=ipv4-ipv6 \
    --ipv6-access-type=ACCESS_TYPE \
    --region=COMPUTE_REGION

Replace the following:

- SUBNET_NAME: the name of the subnet.
- ACCESS_TYPE: the routability to the public internet. Use INTERNAL for internal IPv6 addresses or EXTERNAL for external IPv6 addresses. If --ipv6-access-type is not specified, the default access type is EXTERNAL.
- COMPUTE_REGION: the compute region for the cluster.
Verify the stack type, Pod, and Service IP address ranges
After you create a VPC-native cluster, you can verify its Pod and Service ranges.
gcloud
To verify the cluster, run the following command:
gcloud container clusters describe CLUSTER_NAME

The output has an ipAllocationPolicy block. The stackType field describes the type of network definition. For each type, you can see the following network information:
IPv4 network information:
- clusterIpv4Cidr is the secondary range for Pods.
- servicesIpv4Cidr is the secondary range for Services.
IPv6 network information (if a cluster has dual-stack networking):
- ipv6AccessType: the routability to the public internet. INTERNAL for internal IPv6 addresses and EXTERNAL for external IPv6 addresses.
- subnetIpv6CidrBlock: the secondary IPv6 address range for the new subnet.
- servicesIpv6CidrBlock: the address range assigned for the IPv6 Services on the dual-stack cluster.
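For example, to print only the ipAllocationPolicy block, you can add a format projection; the cluster name and location are placeholders:

gcloud container clusters describe example-cluster \
    --location=us-central1 \
    --format="yaml(ipAllocationPolicy)"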
Console
To verify the cluster, perform the following steps:
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the cluster list, click the name of the cluster you want to inspect.
The secondary ranges are displayed in the Networking section:
- Pod address range is the secondary range for Pods.
- Service address range is the secondary range for Services.
Delete your cluster
To delete your cluster, follow the steps in Deleting a cluster.
GKE tries to clean up the created subnetwork when the cluster is deleted. But if the subnetwork is being used by other resources, GKE does not delete the subnetwork, and you must manage the lifecycle of the subnetwork yourself.
Advanced configuration for internal IP addresses
The following sections show how to use non-RFC 1918 private IP address rangesand how to enable privately used public IP address ranges.
Use non-RFC 1918 IP address ranges
GKE clusters can use IP address ranges outside of the RFC 1918 ranges for nodes, Pods, and Services. See valid ranges in the VPC network documentation for a list of non-RFC 1918 private ranges that can be used as internal IP addresses for subnet ranges.
This feature is not supported with Windows Server node pools.
Non-RFC 1918 private ranges are subnet ranges. You can use them exclusively or in conjunction with RFC 1918 subnet ranges. Nodes, Pods, and Services continue to use subnet ranges as described in IP ranges for VPC-native clusters. If you use non-RFC 1918 ranges, keep the following in mind:

Subnet ranges, even those using non-RFC 1918 ranges, must be assigned manually or by GKE before the cluster's nodes are created. You can't switch to, or stop using, non-RFC 1918 subnet ranges for node or Service IP addresses on an existing cluster unless you replace the cluster. However, you can add additional Pod CIDR ranges, including non-RFC 1918 ranges, to an existing VPC-native cluster. For more information about adding additional Pod CIDR ranges, see expand the IP address ranges of a GKE cluster.

Internal passthrough Network Load Balancers only use IP addresses from the subnet's primary IP address range. To create an internal passthrough Network Load Balancer with a non-RFC 1918 address, your subnet's primary IP address range must be non-RFC 1918.

Destinations outside your cluster might have difficulty receiving traffic from private, non-RFC 1918 ranges. For example, RFC 1112 (class E) private ranges are typically used as multicast addresses. If a destination outside of your cluster can't process packets whose sources are private IP addresses outside of the RFC 1918 range, you can do the following:

- Use an RFC 1918 range for the subnet's primary IP address range. This way, nodes in the cluster use RFC 1918 addresses.
- Ensure that your cluster is running the IP masquerade agent and that the destinations are not in the nonMasqueradeCIDRs list. This way, packets sent from Pods have their sources changed (SNAT) to node addresses, which are RFC 1918.
Enable privately used external IP address ranges
GKE clusters can privately use certain external IP address ranges as internal, subnet IP address ranges. You can privately use any external IP address except for certain restricted ranges, as described in the VPC network documentation. This feature is not supported with Windows Server node pools.

Your cluster must be a VPC-native cluster in order to use privately used external IP address ranges. Routes-based clusters are not supported.

Privately used external ranges are subnet ranges. You can use them exclusively or in conjunction with other subnet ranges that use private addresses. Nodes, Pods, and Services continue to use subnet ranges as described in IP ranges for VPC-native clusters. Keep the following in mind when reusing external IP addresses privately:

When you use an external IP address range as a subnet range, your cluster can no longer communicate with systems on the internet that use that external range. The range becomes an internal IP address range in the cluster's VPC network.

Subnet ranges, even those that privately use external IP address ranges, must be assigned manually or by GKE before the cluster's nodes are created. You can't switch to, or stop using, non-RFC 1918 subnet ranges for node or Service IP addresses on an existing cluster unless you replace the cluster. However, you can add additional Pod CIDR ranges, including non-RFC 1918 ranges, to an existing VPC-native cluster. For more information about adding additional Pod CIDR ranges, see expand the IP address ranges of a GKE cluster.
GKE by default implements SNAT on the nodes to external IP destinations. If you have configured the Pod CIDR to use external IP addresses, the SNAT rules apply to Pod-to-Pod traffic. To avoid this, you have two options:

- Create your cluster with the --disable-default-snat flag. For more details about this flag, refer to IP masquerading in GKE.
- Configure the ip-masq-agent ConfigMap, including in the nonMasqueradeCIDRs list at least the Pod CIDR, the Service CIDR, and the nodes subnet (a sketch follows this section).

For Standard clusters, if the cluster version is 1.14 or later, both options work. If your cluster version is earlier than 1.14, you can only use the second option (configuring ip-masq-agent).
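The following is a minimal sketch of the second option, an ip-masq-agent ConfigMap; the CIDRs shown are placeholders that you must replace with your cluster's Pod CIDR, Service CIDR, and node subnet range:

kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    nonMasqueradeCIDRs:
      - 10.0.0.0/14    # Pod CIDR (placeholder)
      - 10.4.0.0/19    # Service CIDR (placeholder)
      - 10.5.0.0/20    # node subnet primary range (placeholder)
    masqLinkLocal: false
    resyncInterval: 60s
EOF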
What's next
- Read the GKE network overview.
- Learn about internal load balancing.
- Learn about configuring authorized networks.
- Learn about creating cluster network policies.