Create an internal load balancer
This page explains how to create an internal passthrough Network Load Balancer or internal load balancer on Google Kubernetes Engine (GKE). To create an external passthrough Network Load Balancer, see Create a Service of type LoadBalancer.
Before reading this page, ensure that you're familiar with the following concepts:
- LoadBalancer Service.
- LoadBalancer Service parameters.
- Backend service-based external passthrough Network Load Balancer.
Using internal passthrough Network Load Balancer
Internal passthrough Network Load Balancers make your cluster's Services accessible to clients located in your cluster's VPC network and to clients in networks that are connected to your cluster's VPC network. Clients in your cluster's VPC network can be nodes or Pods of your cluster, or they can be VMs outside of your cluster. For more information about connectivity from clients in connected networks, see Internal passthrough Network Load Balancers and connected networks.
Using GKE subsetting
GKE subsetting improves the scalability of internal LoadBalancer Services because it uses GCE_VM_IP network endpoint groups (NEGs) as backends instead of instance groups. When GKE subsetting is enabled, GKE creates one NEG per compute zone per internal LoadBalancer Service.
The externalTrafficPolicy of the Service controls node membership in the GCE_VM_IP NEG backends. For more information, see Node membership in GCE_VM_IP NEG backends.
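For example, the following is a minimal sketch of switching the policy on an existing Service with kubectl. The Service name ilb-svc matches the example later on this page; with GKE subsetting, Local generally limits NEG membership to nodes that run serving Pods, while Cluster allows a broader set of nodes.

```
# Sketch: change the traffic policy on an existing LoadBalancer Service.
# With GKE subsetting, Local generally limits GCE_VM_IP NEG membership to
# nodes that run serving Pods; Cluster allows a broader set of nodes.
kubectl patch service ilb-svc \
    -p '{"spec":{"externalTrafficPolicy":"Local"}}'
```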
Using zonal affinity
Note: Zonal affinity is in the Preview stage. When you enable zonal affinity in an internal passthrough Network Load Balancer, GKE routes traffic that originates from a zone to nodes and Pods within that same zone. If there are no healthy Pods in the zone, GKE routes traffic to another zone. This implementation optimizes for latency and cost.
To enable zonal affinity in a GKE cluster, you must have GKE subsetting enabled.
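As a minimal sketch (the Service name and selector here are placeholders; the full example appears later on this page), zonal affinity is requested by setting trafficDistribution: PreferClose in the Service spec:

```
# Sketch only: request zonal affinity on an internal LoadBalancer Service.
apiVersion: v1
kind: Service
metadata:
  name: example-ilb-svc
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  # Prefer routing traffic to endpoints in the client's zone.
  trafficDistribution: PreferClose
  selector:
    app: example-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
```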
Requirements and limitations
The following are the requirements and limitations for internal load balancers.
Requirements
GKE subsetting has the following requirements and limitations:
- You can enable GKE subsetting in new and existing Standard clusters in GKE versions 1.18.19-gke.1400 and later. GKE subsetting cannot be disabled once it has been enabled.
- GKE subsetting is disabled by default in Autopilot clusters. However, you can enable it after you create the cluster.
- GKE subsetting requires that the HttpLoadBalancing add-on is enabled. This add-on is enabled by default. In Autopilot clusters, you cannot disable this required add-on.
- Quotas for Network Endpoint Groups apply. Google Cloud creates one GCE_VM_IP NEG per internal LoadBalancer Service per zone.
- Quotas for forwarding rules, backend services, and health checks apply. For more information, see Quotas and limits.
- GKE subsetting cannot be used with the annotation to share a backend service among multiple load balancers, alpha.cloud.google.com/load-balancer-backend-share.
- You must have Google Cloud CLI version 345.0.0 or later (see the example check after this list).
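The following commands are one possible way to confirm those prerequisites; they are a sketch rather than part of this page's procedure, and the cluster name, location, and output format are assumptions:

```
# Check the installed gcloud CLI version (must be 345.0.0 or later).
gcloud version

# Check whether the HttpLoadBalancing add-on is enabled; an empty
# httpLoadBalancing value (no "disabled: true") means it is enabled.
gcloud container clusters describe CLUSTER_NAME \
    --location=COMPUTE_LOCATION \
    --format="value(addonsConfig.httpLoadBalancing)"
```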
Zonal affinity has the following requirements:
- You can enable zonal affinity in new and existing clusters in GKE version 1.33.3-gke.1392000 and later.
- You must have GKE subsetting enabled.
- You must ensure that the HttpLoadBalancing add-on is enabled for your cluster. This add-on is enabled by default and allows the cluster to manage load balancers that use backend services.
- You must include spec.trafficDistribution: PreferClose in the LoadBalancer Service manifest.
The LoadBalancer Service manifest can use either externalTrafficPolicy: Local or externalTrafficPolicy: Cluster.
Limitations
Internal passthrough Network Load Balancers
- For clusters running Kubernetes version 1.7.4 and later, you can use internal load balancers with custom-mode subnets in addition to auto-mode subnets.
- Clusters running Kubernetes version 1.7.X and later support using a reserved IP address for the internal passthrough Network Load Balancer if you create the reserved IP address with the --purpose flag set to SHARED_LOADBALANCER_VIP. Refer to Enabling Shared IP for step-by-step directions. GKE only preserves the IP address of an internal passthrough Network Load Balancer if the Service references an internal IP address with that purpose. Otherwise, GKE might change the load balancer's IP address (spec.loadBalancerIP) if the Service is updated (for example, if ports are changed).
- Even if the load balancer's IP address changes (see previous point), the spec.clusterIP remains constant.
- Internal UDP load balancers don't support using sessionAffinity: ClientIP.
Before you begin
Before you start, make sure that you have performed the following tasks:
- Enable the Google Kubernetes Engine API.
- If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running the gcloud components update command. Earlier gcloud CLI versions might not support running the commands in this document. Note: For existing gcloud CLI installations, make sure to set the compute/region property. If you use primarily zonal clusters, set the compute/zone instead. By setting a default location (see the example after this list), you can avoid errors in the gcloud CLI like the following: One of [--zone, --region] must be supplied: Please specify location. You might need to specify the location in certain commands if the location of your cluster differs from the default that you set.
- Ensure that you have an existing Autopilot or Standard cluster. To create a new cluster, see Create an Autopilot cluster.
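For example, assuming you work primarily with regional clusters, you might set a default location like this (the region and zone values are placeholders):

```
# Set a default compute region for the gcloud CLI.
gcloud config set compute/region us-central1

# Or, for zonal clusters, set a default compute zone instead.
gcloud config set compute/zone us-central1-a
```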
Enable GKE subsetting in a cluster
You can enable GKE subsetting for an existing cluster using the gcloud CLI or the Google Cloud console. You cannot disable GKE subsetting after you have enabled it.
Console
In the Google Cloud console, go to the Google Kubernetes Engine page.
In the cluster list, click the name of the cluster you want to modify.
Under Networking, next to the Subsetting for L4 Internal Load Balancers field, click edit Enable subsetting for L4 internal load balancers.
Select the Enable subsetting for L4 internal load balancers checkbox.
Click Save Changes.
gcloud
```
gcloud container clusters update CLUSTER_NAME \
    --enable-l4-ilb-subsetting
```

Replace the following:
CLUSTER_NAME: the name of the cluster.
Enabling GKE subsetting does not disrupt existing internal LoadBalancer Services. If you want to migrate existing internal LoadBalancer Services to use backend services with GCE_VM_IP NEGs as backends, you must deploy a replacement Service manifest. For more details, see Node grouping in the LoadBalancer Service concepts documentation.
Deploy a workload
The following manifest describes a Deployment that runs a sample web application container image.
Save the manifest as ilb-deployment.yaml:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ilb-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ilb-deployment
  template:
    metadata:
      labels:
        app: ilb-deployment
    spec:
      containers:
      - name: hello-app
        image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
```

Apply the manifest to your cluster:

```
kubectl apply -f ilb-deployment.yaml
```
Create an internal LoadBalancer Service
(Optional) Disable automatic VPC firewall rules creation:
While GKE automatically creates VPC firewall rules to allow traffic to your internal load balancer, you have the option to disable automatic VPC firewall rule creation and manage firewall rules on your own. You can disable VPC firewall rules only if you have enabled GKE subsetting for your internal LoadBalancer Service. However, managing VPC firewall rules is optional, and you can rely on the automatic rules.
Before you disable automatic VPC firewall rule creation, ensure that you define allow rules that permit traffic to reach your load balancer and application Pods.
For more information on managing VPC firewall rules, see Manage automatic firewall rule creation. To learn how to disable automatic firewall rule creation, see User-managed firewall rules for GKE LoadBalancer Services.
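If you do manage firewall rules yourself, the following is a sketch of an allow rule. All names, ranges, and tags are hypothetical placeholders; adapt the network, source ranges, target tags, and ports to your own cluster and Service.

```
# Sketch: allow client traffic in the VPC to reach the ports used by the
# internal load balancer backends. All values are placeholders; align them
# with your subnet ranges and node tags.
gcloud compute firewall-rules create allow-ilb-backend-traffic \
    --network=NETWORK_NAME \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:80,tcp:8080 \
    --source-ranges=10.0.0.0/8 \
    --target-tags=NODE_TAG
```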
The following example creates an internal LoadBalancer Service using TCP port 80. GKE deploys an internal passthrough Network Load Balancer whose forwarding rule uses port 80, but then forwards traffic to backend Pods on port 8080:

Save the manifest as ilb-svc.yaml:

```
apiVersion: v1
kind: Service
metadata:
  name: ilb-svc
  # Request an internal load balancer.
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  # Evenly route external traffic to all endpoints.
  externalTrafficPolicy: Cluster
  # Prioritize routing traffic to endpoints that are in the same zone.
  trafficDistribution: PreferClose
  selector:
    app: ilb-deployment
  # Forward traffic from TCP port 80 to port 8080 in backend Pods.
  ports:
  - name: tcp-port
    protocol: TCP
    port: 80
    targetPort: 8080
```

Your manifest must contain the following:
- A name for the internal LoadBalancer Service, in this case ilb-svc.
- An annotation that specifies that you require an internal LoadBalancer Service. For GKE versions 1.17 and later, use the annotation networking.gke.io/load-balancer-type: "Internal" as shown in the example manifest. For earlier versions, use cloud.google.com/load-balancer-type: "Internal" instead.
- The type: LoadBalancer.
- A spec: selector field to specify the Pods the Service should target, for example, app: hello.
- Port information:
  - The port represents the destination port on which the forwarding rule of the internal passthrough Network Load Balancer receives packets.
  - The targetPort must match a containerPort defined on each serving Pod (see the sketch after this list).
  - The port and targetPort values don't need to be the same. Nodes always perform destination NAT, changing the destination load balancer forwarding rule IP address and port to a destination Pod IP address and targetPort. For more details, see Destination Network Address Translation on nodes in the LoadBalancer Service concepts documentation.

Your manifest can contain the following:

- The spec.ipFamilyPolicy and ipFamilies to define how GKE allocates IP addresses to the Service. GKE supports either single-stack (IPv4 only or IPv6 only), or dual-stack IP LoadBalancer Services. A dual-stack LoadBalancer Service is implemented with two separate internal passthrough Network Load Balancer forwarding rules: one for IPv4 traffic and one for IPv6 traffic. The GKE dual-stack LoadBalancer Service is available in version 1.29 or later. To learn more, see IPv4/IPv6 dual-stack Services.
- A spec.trafficDistribution to define how GKE routes incoming traffic (Preview). If you set this field to PreferClose, GKE routes traffic that originates from a zone to nodes and Pods within that same zone. If there are no healthy Pods in the zone, then GKE routes traffic to another zone. If you include this field, you must have GKE subsetting enabled.

Note: You can also enable or disable zonal affinity on an existing internal LoadBalancer Service by using the kubectl edit svc service-name command. The kubectl edit command opens the existing load balancer's Service manifest in your configured text editor, where you can add or remove the spec.trafficDistribution parameter and save changes. Traffic might be briefly interrupted when you modify this parameter.

For more information, see LoadBalancer Service parameters.
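As a sketch of the targetPort-to-containerPort relationship, the Deployment's Pod template could declare the container port explicitly. The hello-app sample listens on port 8080; the ports section here is illustrative and is not copied from this page's Deployment manifest.

```
# Pod template excerpt: targetPort 8080 in the Service matches the port the
# serving container listens on.
    spec:
      containers:
      - name: hello-app
        image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
        ports:
        - containerPort: 8080
```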
Apply the manifest to your cluster:
```
kubectl apply -f ilb-svc.yaml
```
Get detailed information about the Service:
```
kubectl get service ilb-svc --output yaml
```

The output is similar to the following:

```
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/neg: '{"ingress":true}'
    cloud.google.com/neg-status: '{"network_endpoint_groups":{"0":"k8s2-pn2h9n5f-default-ilb-svc-3bei4n1r"},"zones":["ZONE_NAME","ZONE_NAME","ZONE_NAME"]}'
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"networking.gke.io/load-balancer-type":"Internal"},"name":"ilb-svc","namespace":"default"},"spec":{"externalTrafficPolicy":"Cluster","ports":[{"name":"tcp-port","port":80,"protocol":"TCP","targetPort":8080}],"selector":{"app":"ilb-deployment"},"type":"LoadBalancer"}}
    networking.gke.io/load-balancer-type: Internal
    service.kubernetes.io/backend-service: k8s2-pn2h9n5f-default-ilb-svc-3bei4n1r
    service.kubernetes.io/firewall-rule: k8s2-pn2h9n5f-default-ilb-svc-3bei4n1r
    service.kubernetes.io/firewall-rule-for-hc: k8s2-pn2h9n5f-l4-shared-hc-fw
    service.kubernetes.io/healthcheck: k8s2-pn2h9n5f-l4-shared-hc
    service.kubernetes.io/tcp-forwarding-rule: k8s2-tcp-pn2h9n5f-default-ilb-svc-3bei4n1r
  creationTimestamp: "2022-07-22T17:26:04Z"
  finalizers:
  - gke.networking.io/l4-ilb-v2
  - service.kubernetes.io/load-balancer-cleanup
  name: ilb-svc
  namespace: default
  resourceVersion: "51666"
  uid: d7a1a865-7972-44e1-aa9e-db5be23d6567
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: 10.88.2.141
  clusterIPs:
  - 10.88.2.141
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: tcp-port
    # Kubernetes automatically allocates a port on the node during the
    # process of implementing a Service of type LoadBalancer.
    nodePort: 30521
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: ilb-deployment
  sessionAffinity: None
  trafficDistribution: PreferClose
  type: LoadBalancer
status:
  # IP address of the load balancer forwarding rule.
  loadBalancer:
    ingress:
    - ip: 10.128.15.245
```

The output has the following attributes:
- The IP address of the internal passthrough Network Load Balancer's forwarding rule is included in status.loadBalancer.ingress. This IP address is different from the value of clusterIP. In this example, the load balancer's forwarding rule IP address is 10.128.15.245.
- Any Pod that has the label app: ilb-deployment is a serving Pod for this Service. These are the Pods that receive packets routed by the internal passthrough Network Load Balancer.
- Clients call the Service by using this loadBalancer IP address and the TCP destination port specified in the port field of the Service manifest. For complete details about how packets are routed once received by a node, see Packet processing.
- GKE assigned a nodePort to the Service; in this example, port 30521 is assigned. The nodePort is not relevant to the internal passthrough Network Load Balancer.
Inspect the Service network endpoint group:
```
kubectl get svc ilb-svc -o=jsonpath="{.metadata.annotations.cloud\.google\.com/neg-status}"
```

The output is similar to the following:

```
{"network_endpoint_groups":{"0":"k8s2-knlc4c77-default-ilb-svc-ua5ugas0"},"zones":["ZONE_NAME"]}
```

The response indicates that GKE has created a network endpoint group named k8s2-knlc4c77-default-ilb-svc-ua5ugas0. This annotation is present in Services of type LoadBalancer that use GKE subsetting and is not present in Services that do not use GKE subsetting.
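You can optionally inspect the NEG itself with the Google Cloud CLI. The following is a sketch: the NEG name comes from the annotation above, and the zone value is a placeholder.

```
# Describe the GCE_VM_IP NEG that GKE created for the Service.
gcloud compute network-endpoint-groups describe \
    k8s2-knlc4c77-default-ilb-svc-ua5ugas0 \
    --zone=COMPUTE_ZONE
```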
Verify internal passthrough Network Load Balancer components
This section shows you how to verify the key components of your internal passthrough Network Load Balancer.
Verify that your Service is running:
```
kubectl get service SERVICE_NAME --output yaml
```

Replace SERVICE_NAME with the name of the Service manifest.

If you enabled zonal affinity, the output includes the parameter spec.trafficDistribution with the field set to PreferClose.

Verify the internal passthrough Network Load Balancer's forwarding rule IP address. The internal passthrough Network Load Balancer's forwarding rule IP address is 10.128.15.245 in the example included in the Create an internal LoadBalancer Service section. Verify that this forwarding rule is included in the list of forwarding rules in the cluster's project by using the Google Cloud CLI:

```
gcloud compute forwarding-rules list --filter="loadBalancingScheme=INTERNAL"
```

The output includes the relevant internal passthrough Network Load Balancer forwarding rule, its IP address, and the backend service referenced by the forwarding rule (k8s2-pn2h9n5f-default-ilb-svc-3bei4n1r in this example).

```
NAME                                        ... IP_ADDRESS    ... TARGET
...
k8s2-tcp-pn2h9n5f-default-ilb-svc-3bei4n1r      10.128.15.245     ZONE_NAME/backendServices/k8s2-pn2h9n5f-default-ilb-svc-3bei4n1r
```

Describe the load balancer's backend service by using the Google Cloud CLI:

```
gcloud compute backend-services describe k8s2-tcp-pn2h9n5f-default-ilb-svc-3bei4n1r \
    --region=COMPUTE_REGION
```

Replace COMPUTE_REGION with the compute region of the backend service.

If you enabled zonal affinity:
- The networkPassThroughLbTrafficPolicy.zonalAffinity.spillover field should be set to ZONAL_AFFINITY_SPILL_CROSS_ZONE.
- The networkPassThroughLbTrafficPolicy.zonalAffinity.spilloverRatio field should be set to 0 or not be included.
The output includes the backend GCE_VM_IP NEG or NEGs for the Service (k8s2-pn2h9n5f-default-ilb-svc-3bei4n1r in this example).

```
backends:
- balancingMode: CONNECTION
  group: .../ZONE_NAME/networkEndpointGroups/k8s2-pn2h9n5f-default-ilb-svc-3bei4n1r
...
kind: compute#backendService
loadBalancingScheme: INTERNAL
name: aae3e263abe0911e9b32a42010a80008
networkPassThroughLbTrafficPolicy:
  zonalAffinity:
    spillover: ZONAL_AFFINITY_SPILL_CROSS_ZONE
protocol: TCP
...
```

If you disabled zonal affinity, the networkPassThroughLbTrafficPolicy.zonalAffinity.spillover field should be set to ZONAL_AFFINITY_DISABLED or not be included. Note that zonal affinity is automatically disabled if your cluster is on a version earlier than 1.33.3-gke.1392000.
Determine the list of nodes in a subset for a service:
```
gcloud compute network-endpoint-groups list-network-endpoints NEG_NAME \
    --zone=COMPUTE_ZONE
```

Replace the following:

- NEG_NAME: the name of the network endpoint group created by the GKE controller.
- COMPUTE_ZONE: the compute zone of the network endpoint group to operate on.
Determine the list of healthy nodes for an internal passthrough Network Load Balancer:
```
gcloud compute backend-services get-health SERVICE_NAME \
    --region=COMPUTE_REGION
```

Replace the following:

- SERVICE_NAME: the name of the backend service. This value is the same as the name of the network endpoint group created by the GKE controller.
- COMPUTE_REGION: the compute region of the backend service to operate on.
Test connectivity to the internal passthrough Network Load Balancer
From a client in the same region as the cluster, run the following command:
```
curl LOAD_BALANCER_IP:80
```

Replace LOAD_BALANCER_IP with the load balancer's forwarding rule IP address.

The response shows the output of ilb-deployment:

```
Hello, world!
Version: 1.0.0
Hostname: ilb-deployment-77b45987f7-pw54n
```

The internal passthrough Network Load Balancer is only accessible within the same VPC network (or a connected network). By default, the load balancer's forwarding rule has global access disabled, so client VMs, Cloud VPN tunnels, or Cloud Interconnect attachments (VLANs) must be located in the same region as the internal passthrough Network Load Balancer. To support clients in all regions, you can enable global access on the load balancer's forwarding rule by including the global access annotation in the Service manifest.
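As a sketch, the global access annotation is added alongside the internal load balancer annotation in the Service metadata; verify the exact annotation key supported by your GKE version on the LoadBalancer Service parameters page.

```
apiVersion: v1
kind: Service
metadata:
  name: ilb-svc
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
    # Allow clients in any region of the VPC network (and connected
    # networks) to reach the forwarding rule.
    networking.gke.io/internal-load-balancer-allow-global-access: "true"
```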
Delete the internal LoadBalancer Service and load balancer resources
You can delete the Deployment and Service using kubectl delete or the Google Cloud console.
kubectl
Delete the Deployment
To delete the Deployment, run the following command:
```
kubectl delete deployment ilb-deployment
```

Delete the Service
To delete the Service, run the following command:
```
kubectl delete service ilb-svc
```

Console
Delete the Deployment
To delete the Deployment, perform the following steps:
Go to the Workloads page in the Google Cloud console.
Select the Deployment you want to delete, then click delete Delete.
When prompted to confirm, select the Delete Horizontal Pod Autoscaler associated with selected Deployment checkbox, then click Delete.
Delete the Service
To delete the Service, perform the following steps:
Go to the Services & Ingress page in the Google Cloud console.
Select the Service you want to delete, then click delete Delete.
When prompted to confirm, click Delete.
Shared IP
The internal passthrough Network Load Balancer allows the sharing of a Virtual IP address amongst multiple forwarding rules. This is useful for expanding the number of simultaneous ports on the same IP or for accepting UDP and TCP traffic on the same IP. It allows a maximum of 50 exposed ports per IP address. Shared IPs are supported natively on GKE clusters with internal LoadBalancer Services. When deploying, the Service's loadBalancerIP field is used to indicate which IP should be shared across Services.
Limitations
A shared IP for multiple load balancers has the following limitations andcapabilities:
- Each forwarding rule can have up to five ports (contiguous or non-contiguous), or it can be configured to match and forward traffic on all ports. If an internal LoadBalancer Service defines more than five ports, the forwarding rule is automatically set to match all ports.
- A maximum of ten Services (forwarding rules) can share an IP address. This results in a maximum of 50 ports per shared IP.
- Each forwarding rule that shares the same IP address must use a unique combination of protocols and ports. Therefore, every internal LoadBalancer Service must use a unique set of protocols and ports.
- A combination of TCP-only and UDP-only Services is supported on the same shared IP; however, you cannot expose both TCP and UDP ports in the same Service.
Enabling Shared IP
To enable internal LoadBalancer Services to share a common IP, follow these steps:

Create a static internal IP with --purpose SHARED_LOADBALANCER_VIP. An IP address must be created with this purpose so that it can be shared. If you create the static internal IP address in a Shared VPC, you must create the IP address in the same service project as the instance that will use the IP address, even though the value of the IP address will come from the range of available IPs in a selected shared subnet of the Shared VPC network. Refer to reserving a static internal IP on the Provisioning Shared VPC page for more information.

Deploy up to ten internal LoadBalancer Services using this static IP in the loadBalancerIP field. The internal passthrough Network Load Balancers are reconciled by the GKE service controller and deploy using the same frontend IP.
The following example demonstrates how this is done to support multiple TCP and UDP ports against the same internal load balancer IP.
Create a static IP in the same region as your GKE cluster. The subnet must be the same subnet that the load balancer uses, which by default is the same subnet that is used by the GKE cluster node IPs.
If your cluster and the VPC network are in the same project:
```
gcloud compute addresses create IP_ADDR_NAME \
    --project=PROJECT_ID \
    --subnet=SUBNET \
    --addresses=IP_ADDRESS \
    --region=COMPUTE_REGION \
    --purpose=SHARED_LOADBALANCER_VIP
```

If your cluster is in a Shared VPC service project but uses a Shared VPC network in a host project:

```
gcloud compute addresses create IP_ADDR_NAME \
    --project=SERVICE_PROJECT_ID \
    --subnet=projects/HOST_PROJECT_ID/regions/COMPUTE_REGION/subnetworks/SUBNET \
    --addresses=IP_ADDRESS \
    --region=COMPUTE_REGION \
    --purpose=SHARED_LOADBALANCER_VIP
```

Replace the following:

- IP_ADDR_NAME: a name for the IP address object.
- SERVICE_PROJECT_ID: the ID of the service project.
- PROJECT_ID: the ID of your project (single project).
- HOST_PROJECT_ID: the ID of the Shared VPC host project.
- COMPUTE_REGION: the compute region containing the shared subnet.
- IP_ADDRESS: an unused internal IP address from the selected subnet's primary IP address range. If you omit specifying an IP address, Google Cloud selects an unused internal IP address from the selected subnet's primary IP address range. To determine an automatically selected address, run gcloud compute addresses describe (see the example after this list).
- SUBNET: the name of the shared subnet.
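For example, this is a sketch of looking up the reserved address; the flags match those above, and the format string is an assumption:

```
# Show the IP address that was reserved for IP_ADDR_NAME.
gcloud compute addresses describe IP_ADDR_NAME \
    --region=COMPUTE_REGION \
    --format="value(address)"
```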
Save the following TCP Service configuration to a file named tcp-service.yaml and then deploy it to your cluster. Replace IP_ADDRESS with the IP address you chose in the previous step.

```
apiVersion: v1
kind: Service
metadata:
  name: tcp-service
  namespace: default
  # Request an internal load balancer.
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  # Use an IP address that you create.
  loadBalancerIP: IP_ADDRESS
  selector:
    app: myapp
  ports:
  - name: 8001-to-8001
    protocol: TCP
    port: 8001
    targetPort: 8001
  - name: 8002-to-8002
    protocol: TCP
    port: 8002
    targetPort: 8002
  - name: 8003-to-8003
    protocol: TCP
    port: 8003
    targetPort: 8003
  - name: 8004-to-8004
    protocol: TCP
    port: 8004
    targetPort: 8004
  - name: 8005-to-8005
    protocol: TCP
    port: 8005
    targetPort: 8005
```

Apply this Service definition against your cluster:
```
kubectl apply -f tcp-service.yaml
```

Save the following UDP Service configuration to a file named udp-service.yaml and then deploy it. It also uses the IP_ADDRESS that you specified in the previous step.

```
apiVersion: v1
kind: Service
metadata:
  name: udp-service
  namespace: default
  # Request an internal load balancer.
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  # Use the same IP address that you used for the TCP Service.
  loadBalancerIP: IP_ADDRESS
  selector:
    app: my-udp-app
  ports:
  - name: 9001-to-9001
    protocol: UDP
    port: 9001
    targetPort: 9001
  - name: 9002-to-9002
    protocol: UDP
    port: 9002
    targetPort: 9002
```

Apply this file against your cluster:
```
kubectl apply -f udp-service.yaml
```

Validate that the VIP is shared amongst load balancer forwarding rules by listing them out and filtering for the static IP. This shows that there is a UDP and a TCP forwarding rule both listening across seven different ports on the shared IP_ADDRESS, which in this example is 10.128.2.98.

```
gcloud compute forwarding-rules list | grep 10.128.2.98
ab4d8205d655f4353a5cff5b224a0dde  us-west1  10.128.2.98  UDP  us-west1/backendServices/ab4d8205d655f4353a5cff5b224a0dde
acd6eeaa00a35419c9530caeb6540435  us-west1  10.128.2.98  TCP  us-west1/backendServices/acd6eeaa00a35419c9530caeb6540435
```
Known issues
Connection timeout every 10 minutes
Internal LoadBalancer Services created with GKE subsetting might observe traffic disruptions roughly every 10 minutes. This bug has been fixed in the following versions (one way to check your cluster's version appears after this list):
- 1.18.19-gke.1700 and later
- 1.19.10-gke.1000 and later
- 1.20.6-gke.1000 and later
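One way to check whether your cluster runs a fixed version is to read the cluster's version fields with the gcloud CLI; this is a sketch, and the location flag and field names are assumptions based on the clusters API:

```
# Show the control plane and node versions for the cluster.
gcloud container clusters describe CLUSTER_NAME \
    --location=COMPUTE_LOCATION \
    --format="value(currentMasterVersion,currentNodeVersion)"
```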
Error creating load balancer in Standard tier
When you create an internal passthrough Network Load Balancer in a project with the project default network tier set to Standard, the following error message appears:

```
Error syncing load balancer: failed to ensure load balancer: googleapi: Error 400: STANDARD network tier (the project's default network tier) is not supported: Network tier other than PREMIUM is not supported for loadBalancingScheme=INTERNAL., badRequest
```

To resolve this issue in GKE versions earlier than 1.23.3-gke.900, configure the project default network tier to Premium.
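The following is a sketch of setting the project default network tier with the gcloud CLI; it requires appropriate project permissions, and you should verify the command against the current Compute Engine documentation:

```
# Set the project's default network tier to Premium.
gcloud compute project-info update --default-network-tier=PREMIUM
```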
This issue is resolved in GKE versions 1.23.3-gke.900 and later when GKE subsetting is enabled.
The GKE controller creates internal passthrough Network Load Balancers in the Premium network tier even if the project default network tier is set to Standard.
What's next
- Read the GKE network overview.
- Learn more about Compute Engine load balancers.
- Learn how to create a VPC-native cluster.
- Troubleshoot load balancing in GKE.
- Learn about IP masquerade agent.
- Learn about configuring authorized networks.