Customize your network isolation in GKE
This page explains how to configure network isolation for Google Kubernetes Engine (GKE) clusters when you create or update your cluster.
Plan and design your cluster network isolation with your organization's network architects, network administrators, or any other network engineering team responsible for defining, implementing, and maintaining the network architecture.
How cluster network isolation works
In a GKE cluster, network isolation depends on who can access the cluster components and how. You can control:
- Control plane access: You can customize external access, limited access, or unrestricted access to the control plane.
- Cluster networking: You can choose who can access the nodes in Standard clusters, or the workloads in Autopilot clusters.
Before you create your cluster, consider the following:
- Who can access the control plane and how is the control plane exposed?
- How are your nodes or workloads exposed?
To answer these questions, follow the plan and design guidelines in About network isolation.
Restrictions and limitations
By default, GKE creates your clusters as VPC-native clusters. VPC-native clusters don't support legacy networks.
Node pool-level Pod secondary ranges: when creating a GKE cluster, if you specify a Pod secondary range smaller than /24 per node pool using the UI, you might encounter the following error:

```
Getting Pod secondary range 'pod' must have a CIDR block larger or equal to /24
```

GKE does not support specifying a range smaller than /24 at the node pool level. However, specifying a smaller range at the cluster level is supported. You can do this by using the Google Cloud CLI with the `--cluster-ipv4-cidr` argument. For more information, see Creating a cluster with a specific CIDR range.
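For example, a minimal sketch (the cluster name and range below are placeholders, not values from this page) of setting the Pod range at the cluster level instead of the node pool level:

```
# Hypothetical example: set the cluster-level Pod IPv4 range at creation time.
gcloud container clusters create example-cluster \
    --cluster-ipv4-cidr=10.0.0.0/21
```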
Expand the following sections to view the rules around IP address ranges and traffic when creating a cluster.
Control plane
- You can add up to 100 authorized networks, including both external and internal IP addresses, in a project. For more information, refer to Define the IP addresses that can access the control plane.
For clusters that were created on versions prior to 1.29, you can add up to only 50 authorized networks if your cluster meets the following conditions. You can check these conditions by running the `gcloud container clusters describe` command:
  - The privateClusterConfig resource does not exist.
  - Or, if privateClusterConfig exists, the resource has both of the following values:
    - The peeringName field is empty or doesn't exist.
    - The privateEndpoint field doesn't have any value assigned.
- While GKE can detect overlap with the control plane address block, it cannot detect overlap within a Shared VPC network.
Cluster networking
- Internal IP addresses for nodes come from the primary IP address range of the subnet you choose for the cluster. Pod IP addresses and Service IP addresses come from two secondary IP address ranges of that same subnet. For more information, see IP ranges for VPC-native clusters.
- GKE supports any internal IP address ranges, including private ranges (RFC 1918 and other private ranges) and privately used external IP address ranges. See the VPC documentation for a list of valid internal IP address ranges.
- If you expand the primary IP range of a subnet to accommodate additional nodes, then you must add the expanded subnet's primary IP address range to the list of authorized networks for your cluster. If you don't, ingress-allow firewall rules relevant to the control plane aren't updated, and new nodes created in the expanded IP address space won't be able to register with the control plane. This can lead to an outage where new nodes are continuously deleted and replaced. Such an outage can happen when performing node pool upgrades or when nodes are automatically replaced due to liveness probe failures.
- All nodes in a cluster with only private nodes are created without an external IP address; they have limited access to Google Cloud APIs and services. To provide outbound internet access for your private nodes, you can use Cloud NAT.
- Private Google Access is enabled automatically when you create a cluster unless you are using Shared VPC. You must not disable Private Google Access unless you are using NAT to access the internet.
Before you begin
Before you start, make sure that you have performed the following tasks:
- Enable the Google Kubernetes Engine API.
- If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running the `gcloud components update` command. Earlier gcloud CLI versions might not support running the commands in this document.
Note: For existing gcloud CLI installations, make sure to set the compute/region property. If you primarily use zonal clusters, set the compute/zone property instead. By setting a default location, you can avoid errors in the gcloud CLI like the following: One of [--zone, --region] must be supplied: Please specify location. You might need to specify the location in certain commands if the location of your cluster differs from the default that you set.
Configure control plane access
When you create a GKE cluster in any version by using the Google Cloud CLI, or in version 1.29 and later by using the console, the control plane is accessible through the following interfaces:
DNS-based endpoint
Access to the control plane depends on the DNS resolution of the source traffic. Enable the DNS-based endpoint to get the following benefits:
- Create a dynamic access policy based on IAM policies.
- Access the control plane from other VPC networks or external locations without the need to set up bastion hosts or proxy nodes.
To configure access to the DNS-based endpoint, see Define access to the DNS-based endpoint.
IP-based endpoints
Access to the control plane endpoints depends on the source IP address and is controlled by your authorized networks. You can manage access to the IP-based endpoints of the control plane in the following ways:
- Enable or disable the IP-based endpoint.
- Enable or disable the external endpoint to allow access from external traffic. The internal endpoint is always enabled when you enable the control plane IP-based endpoints.
- Add authorized networks to allowlist or deny access from public IP addresses. If you don't configure authorized networks, the control plane is accessible from any external IP address. This includes public internet addresses and Google Cloud external IP addresses, with no restrictions.
- Allowlist or deny access from any or all private IP addresses in the cluster.
- Allowlist or deny access from Google Cloud external IP addresses, which are external IP addresses assigned to any VM used by any customer hosted on Google Cloud.
Caution: We don't recommend that you allowlist access from all Google Cloud external IP addresses to the control plane, because it provides minimal security benefits. Consider adding authorized networks to restrict access to the control plane from specific ranges that you control, or use a DNS-based endpoint.
- Allowlist or deny access from IP addresses in other Google Cloud regions.
Review the limitations of using IP-based endpoints before you define the control plane access.
Create a cluster and define control plane access
To create or update an Autopilot or a Standard cluster, use either the Google Cloud CLI or the Google Cloud console.
Console
To create a cluster, complete the following steps:
Go to the Google Kubernetes Engine page in the Google Cloud console.
Click Create.
Configure the attributes of your cluster based on your project needs.
In the navigation menu, click Networking.
Under Control Plane Access, configure the control plane endpoints:
- Select the Access using DNS checkbox to enable the control plane DNS-based endpoint.
Optional: Enable authentication to the DNS-based endpoint by using Kubernetes credentials:
- To use Kubernetes ServiceAccount bearer tokens, select the Enable Kubernetes tokens via DNS checkbox.
- To use X.509 client certificates, select the Enable Kubernetes certificates via DNS checkbox.
ServiceAccount tokens and client certificates can add complexity and management overhead to your security configuration. Unless your use case requires one of these authentication methods, use IAM credentials to authenticate to your control plane.
Select the Access using IPv4 addresses checkbox to enable the control plane IP-based endpoints. Use the configuration included in Define the IP addresses that can access the control plane to customize access to the IP-based endpoints.
gcloud
For Autopilot clusters, run the following command:
```
gcloud container clusters create-auto CLUSTER_NAME \
    --enable-ip-access \
    --enable-dns-access
```

For Standard clusters, run the following command:

```
gcloud container clusters create CLUSTER_NAME \
    --enable-ip-access \
    --enable-dns-access
```

Replace the following:
- CLUSTER_NAME: the name of your cluster.
Both commands include flags that enable the following:
- `--enable-dns-access`: Enables access to the control plane by using the DNS-based endpoint of the control plane. If you specify this flag, you can also specify the following optional flags:
  - `--enable-k8s-tokens-via-dns`: Enables authentication to the DNS-based endpoint by using Kubernetes ServiceAccount bearer tokens.
  - `--enable-k8s-certs-via-dns`: Enables authentication to the DNS-based endpoint by using Kubernetes X.509 client certificates.
  ServiceAccount tokens and client certificates can add complexity and management overhead to your security configuration. Unless your use case requires one of these authentication methods, use IAM credentials to authenticate to your control plane.
- `--enable-ip-access`: Enables access to the control plane by using IPv4 addresses. If you want to disable both the internal and external endpoints of the control plane, use the `--no-enable-ip-access` flag instead.
Note: The `get-credentials` command automatically uses the IP-based endpoint if IP-based endpoint access is enabled. To instruct `get-credentials` to use the DNS-based endpoint, add the `--dns-endpoint` flag when running the `get-credentials` command.
Use the flags listed in Define the IP addresses that can access the control plane to customize access to the IP-based endpoints.
Define access to the DNS-based endpoint
You can manage authentication and authorization to the DNS-based endpoint by configuring the IAM permission container.clusters.connect. To configure this permission, assign one of the following IAM roles to your Google Cloud project:
- roles/container.developer
- roles/container.viewer
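For example, a minimal sketch (the project ID and user email are placeholders) of granting one of these roles with the gcloud CLI:

```
# Hypothetical example: grant roles/container.developer to a user
# so that they can connect to the DNS-based endpoint.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member=user:USER_EMAIL \
    --role=roles/container.developer
```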
Optionally, you can manage the reachability of the DNS-based endpoint by using the following features:
VPC Service Controls: the DNS-based endpoint supports VPC Service Controls to add a layer of security to your control plane access. VPC Service Controls work consistently across Google Cloud APIs.
Access to the DNS-based endpoint from clients with no public internet access: the DNS-based endpoint is accessible through Google Cloud APIs that are available on the public internet. To access the DNS-based endpoint from private clients, you can use Private Google Access, a Cloud NAT gateway, or Private Service Connect for Google Cloud APIs.
Note: If you want to use a Private Service Connect endpoint to access the DNS-based endpoint, in addition to creating an endpoint, you must also create DNS records for the gke.goog domain. With this configuration, GKE reroutes requests for the *.gke.goog domain to the internal IP address of the Private Service Connect endpoint, not the default public Google IP address (see the sketch after this list).
Access to the DNS-based endpoint from on-premises clients: the DNS-based endpoint is accessible by on-premises clients through Private Google Access. To configure Private Google Access, complete the steps in Configure Private Google Access for on-premises hosts.
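Referring to the Private Service Connect note above, the following is a minimal sketch, not a definitive configuration, of the DNS records involved; the zone name, network name, and endpoint IP are placeholders:

```
# Hypothetical example: create a private zone for gke.goog and point
# *.gke.goog at the internal IP of a Private Service Connect endpoint.
gcloud dns managed-zones create gke-goog \
    --dns-name="gke.goog." \
    --visibility=private \
    --networks=NETWORK_NAME \
    --description="Route *.gke.goog to a Private Service Connect endpoint"

gcloud dns record-sets create "*.gke.goog." \
    --zone=gke-goog \
    --type=A \
    --ttl=300 \
    --rrdatas=PSC_ENDPOINT_IP
```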
Define the IP addresses that can access the control plane
To define the IP addresses that can access the control plane, complete the following steps:
Console
- Under Control Plane Access, select Enable authorized networks.
- Click Add authorized network.
- Enter a Name for the network.
- For Network, enter a CIDR range that you want to grant access to your cluster control plane.
- Click Done.
- Add additional authorized networks as needed.
Define the control plane IP address firewall rules
To define the control plane IP address firewall rules, complete the following steps:
- Expand the Show IP address firewall rules section.
- Select the Access using the control plane's external IP address checkbox to allow access to the control plane from public IP addresses.
  Best practice: Define control plane authorized networks to restrict access to the control plane.
- Select the Access using the control plane's internal IP address from any region checkbox. Internal IP addresses from any Google Cloud region can access the control plane internal endpoint.
- Select Enforce authorized networks on the control plane's internal endpoint. Only the IP addresses that you defined in the Add authorized networks list can access the control plane internal endpoint. The internal endpoint is enabled by default.
- Select Add Google Cloud external IP addresses to authorized networks. All public IP addresses from Google Cloud can access the control plane.
gcloud
You can configure the IP addresses that can access the control plane external and internal endpoints by using the following flags:

- `--enable-private-endpoint`: Specifies that access to the external endpoint is disabled. Omit this flag if you want to allow access to the control plane from external IP addresses. In this case, we strongly recommend that you control access to the external endpoint with the `--enable-master-authorized-networks` flag.
- `--enable-master-authorized-networks`: Specifies that access to the external endpoint is restricted to IP address ranges that you authorize.
- `--master-authorized-networks`: Lists the CIDR values for the authorized networks. The list is comma-delimited. For example, `8.8.8.8/32,8.8.8.0/24`.
  Best practice: Use the `--enable-master-authorized-networks` flag so that access to the control plane is restricted.
- `--enable-authorized-networks-on-private-endpoint`: Specifies that access to the internal endpoint is restricted to IP address ranges that you authorize with the `--enable-master-authorized-networks` flag.
- `--no-enable-google-cloud-access`: Denies access to the control plane from Google Cloud external IP addresses. Note that updating this setting does not take effect immediately. It might take several hours for GKE to propagate and enforce the firewall rule changes.
- `--enable-master-global-access`: Allows access from IP addresses in other Google Cloud regions.

You can continue to configure the cluster network by defining node or Pod isolation at the cluster level.
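For example, a minimal sketch (the cluster name and CIDR range are placeholders) that keeps the external endpoint enabled but restricts it to a single range that you control:

```
# Hypothetical example: allow only 203.0.113.0/24 to reach the
# control plane's external endpoint.
gcloud container clusters create example-cluster \
    --enable-ip-access \
    --enable-master-authorized-networks \
    --master-authorized-networks=203.0.113.0/24
```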
You can also create a cluster and define attributes at the cluster level, such as node network and subnet, IP stack type, and IP address allocation. To learn more, see Create a VPC-native cluster.
Modify the control plane access
To change control plane access for a cluster, use either the gcloud CLI or the Google Cloud console.
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the cluster list, click the cluster name.
In the Cluster details tab, under Control Plane Networking, click Edit.
In the Edit control plane networking dialog, modify the control plane access based on your use case requirements.
gcloud
Run the following command and append the flags that meet your use case. You can use the following flags:

- `--enable-dns-access`: Enables access to the control plane by using the DNS-based endpoint of the control plane. If you specify this flag, you can also specify the following optional flags:
  - `--enable-k8s-tokens-via-dns`: Enables authentication to the DNS-based endpoint by using Kubernetes ServiceAccount bearer tokens.
  - `--enable-k8s-certs-via-dns`: Enables authentication to the DNS-based endpoint by using Kubernetes X.509 client certificates.
  ServiceAccount tokens and client certificates can add complexity and management overhead to your security configuration. Unless your use case requires one of these authentication methods, use IAM credentials to authenticate to your control plane.
- `--enable-ip-access`: Enables access to the control plane by using IPv4 addresses. If you want to disable both the internal and external endpoints of the control plane, use the `--no-enable-ip-access` flag instead.
- `--enable-private-endpoint`: Specifies that access to the external endpoint is disabled. Omit this flag if you want to allow access to the control plane from external IP addresses. In this case, we strongly recommend that you control access to the external endpoint with the `--enable-master-authorized-networks` flag.
- `--enable-master-authorized-networks`: Specifies that access to the external endpoint is restricted to IP address ranges that you authorize.
- `--master-authorized-networks`: Lists the CIDR values for the authorized networks. The list is comma-delimited. For example, `8.8.8.8/32,8.8.8.0/24`.
  Best practice: Use the `--enable-master-authorized-networks` flag so that access to the control plane is restricted.
- `--enable-authorized-networks-on-private-endpoint`: Specifies that access to the internal endpoint is restricted to IP address ranges that you authorize with the `--enable-master-authorized-networks` flag.
- `--no-enable-google-cloud-access`: Denies access to the control plane from Google Cloud external IP addresses. Note that updating this setting does not take effect immediately. It might take several hours for GKE to propagate and enforce the firewall rule changes.
- `--enable-master-global-access`: Allows access from IP addresses in other Google Cloud regions.

```
gcloud container clusters update CLUSTER_NAME
```

Replace CLUSTER_NAME with the name of the cluster.
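For example, a minimal sketch (the cluster name, CIDR range, and flag combination are illustrative placeholders) that restricts the external endpoint of an existing cluster and denies access from Google Cloud external IP addresses:

```
# Hypothetical example: restrict the external endpoint to one range
# and deny access from Google Cloud external IP addresses.
gcloud container clusters update example-cluster \
    --enable-master-authorized-networks \
    --master-authorized-networks=203.0.113.0/24 \
    --no-enable-google-cloud-access
```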
Verify your control plane configuration
You can view your cluster's endpoints by using the gcloud CLI or the Google Cloud console.
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the cluster list, click the cluster name.
In the Cluster details tab, under Control plane, you can check the following characteristics of the control plane endpoints:
- DNS endpoint includes the name of the DNS-based endpoint of your cluster, if you've enabled this endpoint.
- Control plane access using IPv4 addresses includes the status of the IP-based endpoint. If enabled, you can see information about the public and private endpoints.
- Access using control plane's internal IP address from any region shows the status as Enabled when the control plane can be accessed by IP addresses from other Google Cloud regions.
- Authorized networks shows the list of CIDRs that can access the control plane, if you've enabled authorized networks.
- Enforce authorized networks on control plane's internal endpoint shows the Enabled status if only the CIDRs in the Authorized networks field can access the internal endpoint.
- Add Google Cloud external IP addresses to authorized networks shows the Enabled status if external IP addresses from Google Cloud can access the control plane.
To modify any attribute, click Edit next to Control plane access using IPv4 addresses and adjust the settings based on your use case.
gcloud
To verify the control plane configuration, run the following command:
```
gcloud container clusters describe CLUSTER_NAME
```

The output has a controlPlaneEndpointsConfig block that describes the network definition. You can see an output similar to the following:

```
controlPlaneEndpointsConfig:
  dnsEndpointConfig:
    allowExternalTraffic: true
    endpoint: gke-dc6d549babec45f49a431dc9ca926da159ca-518563762004.us-central1-c.autopush.gke.goog
  ipEndpointsConfig:
    authorizedNetworksConfig:
      cidrBlocks:
      - cidrBlock: 8.8.8.8/32
      - cidrBlock: 8.8.8.0/24
      enabled: true
      gcpPublicCidrsAccessEnabled: false
      privateEndpointEnforcementEnabled: true
    enablePublicEndpoint: false
    enabled: true
    globalAccess: true
    privateEndpoint: 10.128.0.13
```

In this example, the cluster has the following configuration:
- Both DNS and IP-address based endpoints are enabled.
- Authorized networks are enabled and the CIDR ranges are defined. These authorized networks are enforced for the internal IP address.
- Access to the control plane from Google Cloud external IP addresses is denied.
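To print only this block, you can filter the describe output with gcloud's built-in formatting (a sketch; the cluster name is a placeholder):

```
# Show only the control plane endpoint configuration.
gcloud container clusters describe CLUSTER_NAME \
    --format="yaml(controlPlaneEndpointsConfig)"
```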
Examples of control plane access configuration
This section details the configuration of the following network isolation examples. Evaluate these examples for similarity to your use case:
- Example 1: The control plane is accessible from certain IP addresses that you define. These might include IP addresses from other Google Cloud regions or Google-reserved IP addresses.
- Example 2: The control plane is not accessible by any external IP address.
Example 1: The control plane is accessible from certain IP addresses
In this section, you create a cluster with the following network isolation configurations:
- The control plane has the DNS-based endpoint enabled.
- The control plane has the external endpoint enabled in addition to the internal endpoint, which is enabled by default.
- The control plane has authorized networks defined, allowing only the following authorized networks to reach the control plane:
- A range of external IP addresses that you define.
- All the internal IP addresses in your cluster.
- Google Cloud external IP addresses.
To create this cluster, use either the Google Cloud CLI or the Google Cloud console.
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.
Click Create.
Configure your cluster to suit your requirements.
In the navigation menu, click Networking.
Under Control Plane Access, configure the control plane endpoints:
- Select the Access using DNS checkbox.
- Select the Access using IPv4 addresses checkbox.
Select Enable authorized networks.
Click Add authorized network.
Enter a Name for the network.
For Network, enter a CIDR range that you want to grant access to your cluster control plane.
Click Done.
Add additional authorized networks as needed.
Expand the Show IP address firewall rules section.
Select Access using the control plane's internal IP address from any region. Internal IP addresses from any Google Cloud region can access the control plane over the internal IP address.
Select Add Google Cloud external IP addresses to authorized networks. All external IP addresses from Google Cloud can access the control plane.
You can continue configuring the cluster network by defining node or Pod isolation at the cluster level.
gcloud
Run the following command:
```
gcloud container clusters create-auto CLUSTER_NAME \
    --enable-dns-access \
    --enable-ip-access \
    --enable-master-authorized-networks \
    --enable-master-global-access \
    --master-authorized-networks CIDR1,CIDR2,...
```

Replace the following:
- CLUSTER_NAME: the name of the GKE cluster.
- CIDR1,CIDR2,...: a comma-delimited list of CIDR values for the authorized networks. For example, `8.8.8.8/32,8.8.8.0/24`.
Example 2: The control plane is accessible from internal IP addresses
In this section, you create a cluster with the following network isolation configurations:
- The control plane has the DNS-based endpoint enabled.
- The control plane has the external endpoint disabled.
- The control plane has authorized networks enabled.
- All access to the control plane over the internal IP address from any Google Cloud region is allowed.
- Google Cloud external IP addresses don't have access to your cluster.
You can create this cluster by using the Google Cloud CLI or the Google Cloud console.
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.
Click Create.
Configure your cluster to suit your requirements.
In the navigation menu, click Networking.
Under Control Plane Access, configure the control plane endpoints:
- Select the Access using DNS checkbox.
- Select the Access using IPv4 addresses checkbox.
Expand the Show IP address firewall rules section.
Unselect Access using the control plane's external IP address. The control plane is not accessible by any external IP address.
Under Control Plane Access, select Enable authorized networks.
Select the Access using the control plane's internal IP address from any region checkbox. Internal IP addresses from any Google Cloud region can access the control plane over the internal IP address.
You can continue the cluster network configuration by defining node or Pod isolation at the cluster level.
gcloud
Run the following command:
```
gcloud container clusters create-auto CLUSTER_NAME \
    --enable-dns-access \
    --enable-ip-access \
    --enable-private-endpoint \
    --enable-master-authorized-networks \
    --master-authorized-networks CIDR1,CIDR2,... \
    --no-enable-google-cloud-access \
    --enable-master-global-access
```

Replace the following:
- CLUSTER_NAME: the name of the cluster.
- CIDR1,CIDR2,...: a comma-delimited list of CIDR values for the authorized networks. For example, `8.8.8.8/32,8.8.8.0/24`.
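To confirm that the external endpoint is disabled after creation, you can query the field shown in the verification section earlier (a sketch; the field path follows the controlPlaneEndpointsConfig output above):

```
# Prints False when the control plane's external endpoint is disabled.
gcloud container clusters describe CLUSTER_NAME \
    --format="value(controlPlaneEndpointsConfig.ipEndpointsConfig.enablePublicEndpoint)"
```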
Configure cluster networking
In this section, you configure your cluster to have nodes with internal (private) or external (public) access. GKE lets you combine node network configurations depending on the type of cluster that you use:
- Standard cluster: You can create or update your node pools to provision private or public nodes in the same cluster. For example, if you create a node pool with private nodes, then GKE provisions its nodes with only internal IP addresses. GKE doesn't modify existing node pools. You can also define the default networking configuration at the cluster level. GKE applies this default network configuration only when new node pools don't have any network configuration defined.
- Autopilot clusters: You can create or update your cluster to define the default network configuration for all your workloads. GKE schedules new and existing workloads on public or private nodes based on your configuration. You can also explicitly define the cluster network configuration of an individual workload.
Configure your cluster
In this section, you configure the cluster networking at the cluster level. GKE considers this configuration when your node pool or workload doesn't have this configuration defined.
To define cluster-level configuration, use either the Google Cloud CLI or theGoogle Cloud console.
Note: If you use VPC-native clusters, you can create subnets with IPv4/IPv6 dual-stack networking. However, dual-stack subnets are not affected by the cluster networking configuration described in the following section. The configuration of cluster networking at the cluster level only affects the IPv4 addresses of the nodes in your cluster.

Console
Create a cluster
Go to the Google Kubernetes Engine page in the Google Cloud console.
Click Create, then in the Standard or Autopilot section, click Configure.
Configure your cluster to suit your requirements.
In the navigation menu, click Networking.
In the Cluster networking section, complete the following based on your use case:
- Select Enable private nodes to provision nodes with only internal IP addresses (private nodes), which prevents external clients from accessing the nodes. You can change these settings at any time.
- Unselect Enable private nodes to provision nodes with only external IP addresses (public nodes), which enables external clients to access the nodes.
In the Advanced networking options section, configure additional VPC-native attributes. To learn more, see Create a VPC-native cluster.
Click Create.
Update an existing cluster
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the cluster list, click the cluster name.
In the Default New Node-Pool Configuration tab, under Private Nodes, click Edit Private Nodes.
In the Edit Private Nodes dialog, do any of the following:
- Select Enable private nodes to provision nodes with only internal IP addresses (private nodes), which prevents external clients from accessing the nodes. You can change these settings at any time.
- Unselect Enable private nodes to provision nodes with only external IP addresses (public nodes), which enables external clients to access the nodes.
Click Save changes.
gcloud
Use any of the following flags to define the cluster networking:
- `--enable-private-nodes`: To provision nodes with only internal IP addresses (private nodes). Consider the following conditions when using this flag:
  - The `--enable-ip-alias` flag is required when using `--enable-private-nodes`.
  - The `--master-ipv4-cidr` flag is optional to create private subnets. If you use this flag, GKE creates a new subnet that uses the values you defined in `--master-ipv4-cidr` and uses the new subnet to provision the internal IP address for the control plane.
- `--no-enable-private-nodes`: To provision nodes with only external IP addresses (public nodes).

In Autopilot clusters, create or update the cluster with the `--enable-private-nodes` flag.
To create a cluster, use the following command:

```
gcloud container clusters create-auto CLUSTER_NAME \
    --enable-private-nodes \
    --enable-ip-alias
```

To update a cluster, use the following command:

```
gcloud container clusters update CLUSTER_NAME \
    --enable-private-nodes \
    --enable-ip-alias
```

The cluster update takes effect only after all the node pools have been re-scheduled. This process might take several hours.
In Standard clusters, create or update the cluster with the `--enable-private-nodes` flag.
To create a cluster, use the following command:

```
gcloud container clusters create CLUSTER_NAME \
    --enable-private-nodes \
    --enable-ip-alias
```

To update a cluster, use the following command:

```
gcloud container clusters update CLUSTER_NAME \
    --enable-private-nodes \
    --enable-ip-alias
```

The cluster update takes effect only on new node pools. GKE doesn't update this configuration on existing node pools.
The cluster-level configuration is overridden by the network configuration at the node pool or workload level.
Configure your node pools or workloads
To configure private or public nodes at the workload level for Autopilot clusters, or at the node pool level for Standard clusters, use either the Google Cloud CLI or the Google Cloud console. If you don't define the network configuration at the workload or node pool level, GKE applies the default configuration at the cluster level.
Console
In Standard clusters, complete the following steps:
Go to the Google Kubernetes Engine page in the Google Cloud console.
On the Cluster details page, click the name of the cluster you want to modify.
Click Add Node Pool.
Configure the Enable private nodes checkbox based on your use case:
- Select Enable private nodes to provision nodes with only internal IP addresses (private nodes).
- Unselect Enable private nodes to provision nodes with only external IP addresses (public nodes), which enables external clients to access the nodes. You can change this configuration at any time.
Configure your new node pool.
Click Create.
To learn more about node pool management, see Add and manage node pools.
gcloud
In Autopilot clusters and in Standard node pools that use node auto-provisioning, add the following nodeSelector to your Pod specification:

```
cloud.google.com/private-node=true
```

Use private-node=true to schedule a Pod on nodes that have only internal IP addresses (private nodes). GKE recreates your Pods on private nodes or public nodes, based on your configuration. To avoid workload disruption, migrate each workload independently and monitor the migration.
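The following is a minimal sketch of a Pod manifest that uses this nodeSelector; the Pod name and container image are hypothetical placeholders, not values from this page:

```
# Hypothetical Pod that requests scheduling on private nodes only.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: private-only-pod
spec:
  nodeSelector:
    cloud.google.com/private-node: "true"
  containers:
  - name: app
    image: registry.example.com/app:latest
EOF
```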
In Standard clusters, to provision nodes with private IP addresses in an existing node pool, run the following command:
```
gcloud container node-pools update NODE_POOL_NAME \
    --cluster=CLUSTER_NAME \
    --enable-private-nodes \
    --enable-ip-alias
```

Replace the following:
- NODE_POOL_NAME: the name of the node pool that you want to edit.
- CLUSTER_NAME: the name of your existing cluster.
Use any of the following flags to define the node pool networking configuration:
- `--enable-private-nodes`: To provision nodes with only internal IP addresses (private nodes).
- `--no-enable-private-nodes`: To provision nodes with only external IP addresses (public nodes).
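For example, a minimal sketch (the pool and cluster names are placeholders) of creating a new node pool that uses private nodes while the rest of the cluster keeps its defaults:

```
# Hypothetical example: add a private-nodes-only pool to an existing cluster.
gcloud container node-pools create private-pool \
    --cluster=example-cluster \
    --enable-private-nodes
```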
Advanced configurations
The following sections describe advanced configurations that you might want when configuring your cluster network isolation.
Using Cloud Shell to access a cluster with external endpoint disabled
If the external endpoint of your cluster's control plane is disabled, you can't access your GKE control plane with Cloud Shell. If you want to use Cloud Shell to access your cluster, we recommend that you enable the DNS-based endpoint.
To verify access to your cluster, complete the following steps:
If you have enabled the DNS-based endpoint, run the following command to get credentials for your cluster:

```
gcloud container clusters get-credentials CLUSTER_NAME \
    --dns-endpoint
```

If you have enabled the IP-based endpoint, run the following command to get credentials for your cluster:

```
gcloud container clusters get-credentials CLUSTER_NAME \
    --project=PROJECT_ID \
    --internal-ip
```

Replace PROJECT_ID with your project ID.

Use kubectl in Cloud Shell to access your cluster:

```
kubectl get nodes
```

The output is similar to the following:

```
NAME                                       STATUS   ROLES    AGE    VERSION
gke-cluster-1-default-pool-7d914212-18jv   Ready    <none>   104m   v1.21.5-gke.1302
gke-cluster-1-default-pool-7d914212-3d9p   Ready    <none>   104m   v1.21.5-gke.1302
gke-cluster-1-default-pool-7d914212-wgqf   Ready    <none>   104m   v1.21.5-gke.1302
```
The `get-credentials` command automatically uses the DNS-based endpoint if IP-based endpoint access is disabled.
Add firewall rules for specific use cases
This section explains how to add a firewall rule to a cluster. By default, firewall rules restrict your cluster control plane to only initiate TCP connections to your nodes and Pods on ports 443 (HTTPS) and 10250 (kubelet). For some Kubernetes features, you might need to add firewall rules to allow access on additional ports. Don't create firewall rules or hierarchical firewall policy rules that have a higher priority than the automatically created firewall rules.

Note: These ports (443 and 10250) refer to the ports exposed by your nodes and Pods, not the ports exposed by any Kubernetes Services. For example, if the cluster control plane attempts to access a service on port 443, but the service is implemented by a Pod using port 9443, the traffic is blocked by the firewall unless you add a firewall rule to explicitly allow ingress to port 9443.

Kubernetes features that require additional firewall rules include:
- Admission webhooks
- Aggregated API servers
- Webhook conversion
- Dynamic audit configuration
- Generally, any API that has a ServiceReference field requires additional firewall rules.
Adding a firewall rule allows traffic from the cluster control plane to all of the following:
- The specified port of each node (hostPort).
- The specified port of each Pod running on these nodes.
- The specified port of each Service running on these nodes.
To learn about firewall rules, refer to Firewall rules in the Cloud Load Balancing documentation.
To add a firewall rule in a cluster, you need to record the cluster control plane's CIDR block and the target used. After you have recorded these values, you can create the rule.
View control plane's CIDR block
You need the cluster control plane's CIDR block to add a firewall rule.
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the cluster list, click the cluster name.
In the Details tab, under Networking, take note of the value in the Control plane address range field.
gcloud
Run the following command:
```
gcloud container clusters describe CLUSTER_NAME
```

Replace CLUSTER_NAME with the name of your cluster.

In the command output, take note of the value in the masterIpv4CidrBlock field.
View existing firewall rules
You need to specify the target (in this case, the destination nodes) that the cluster's existing firewall rules use.
Console
Go to the Firewall policies page in the Google Cloud console.
In the Filter table for VPC firewall rules, enter `gke-CLUSTER_NAME`.
In the results, take note of the value in the Targets field.
gcloud
Run the following command:
```
gcloud compute firewall-rules list \
    --filter 'name~^gke-CLUSTER_NAME' \
    --format 'table(
        name,
        network,
        direction,
        sourceRanges.list():label=SRC_RANGES,
        allowed[].map().firewall_rule().list():label=ALLOW,
        targetTags.list():label=TARGET_TAGS
    )'
```

In the command output, take note of the value in the Targets field.

To view firewall rules for a Shared VPC, add the `--project HOST_PROJECT_ID` flag to the command.
Add a firewall rule
Console
Go to the Firewall policies page in the Google Cloud console.
Click Create Firewall Rule.
For Name, enter the name for the firewall rule.
In the Network list, select the relevant network.
In Direction of traffic, click Ingress.
In Action on match, click Allow.
In the Targets list, select Specified target tags.
For Target tags, enter the target value that you noted previously.
In the Source filter list, select IPv4 ranges.
For Source IPv4 ranges, enter the cluster control plane's CIDR block.
In Protocols and ports, click Specified protocols and ports, select the checkbox for the relevant protocol (tcp or udp), and enter the port number in the protocol field.
Click Create.
gcloud
Run the following command:
```
gcloud compute firewall-rules create FIREWALL_RULE_NAME \
    --action ALLOW \
    --direction INGRESS \
    --source-ranges CONTROL_PLANE_RANGE \
    --rules PROTOCOL:PORT \
    --target-tags TARGET
```

Replace the following:
- FIREWALL_RULE_NAME: the name you choose for the firewall rule.
- CONTROL_PLANE_RANGE: the cluster control plane's IP address range (masterIpv4CidrBlock) that you collected previously.
- PROTOCOL:PORT: the port and its protocol, tcp or udp.
- TARGET: the target (Targets) value that you collected previously.
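For example, a minimal sketch with hypothetical values (the control plane range, port, and target tag below stand in for the values you collected) that allows the control plane to reach an admission webhook on port 8443:

```
# Hypothetical example: let the control plane (172.16.0.0/28 here)
# reach webhook Pods served on port 8443.
gcloud compute firewall-rules create allow-control-plane-webhook \
    --action ALLOW \
    --direction INGRESS \
    --source-ranges 172.16.0.0/28 \
    --rules tcp:8443 \
    --target-tags gke-example-cluster-1a2b3c4d-node
```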
To add a firewall rule for a Shared VPC, add the following flags to the command:
- `--project HOST_PROJECT_ID`
- `--network NETWORK_ID`

Granting private nodes outbound internet access
To provide outbound internet access for your private nodes, such as to pull images from an external registry, use Cloud NAT to create and configure a Cloud Router. Cloud NAT lets private nodes establish outbound connections over the internet to send and receive packets.
The Cloud Router allows all your nodes in the region to use Cloud NAT for all primary and alias IP ranges. It also automatically allocates the external IP addresses for the NAT gateway.
For instructions to create and configure a Cloud Router, refer to Create a Cloud NAT configuration using Cloud Router in the Cloud NAT documentation.
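As an illustration only, a minimal sketch (the router name, NAT gateway name, network, and region are placeholders) of the Cloud Router and Cloud NAT setup that the linked guide walks through:

```
# Hypothetical example: create a Cloud Router, then a NAT gateway that
# covers all subnet ranges and auto-allocates external IP addresses.
gcloud compute routers create example-router \
    --network=NETWORK_NAME \
    --region=REGION

gcloud compute routers nats create example-nat \
    --router=example-router \
    --region=REGION \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
```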
Note: To provide outbound internet access for your Pods and Services, configure Cloud NAT to Specify subnet ranges for NAT.

Deploying a Windows Server container application
To learn how to deploy a Windows Server container application to a cluster with private nodes, refer to the Windows node pool documentation.
What's next
- Learn more about network isolation in GKE.
- Learn how to create a VPC-native cluster.
- Learn more about Private Service Connect.
- Learn how to install kubectl and configure cluster access.