External proxy Network Load Balancer overview

This document introduces the concepts that you need to understand to configure a Google Cloud external proxy Network Load Balancer.

The external proxy Network Load Balancer is a reverse proxy load balancer that distributes TCP traffic coming from the internet to virtual machine (VM) instances in your Google Cloud Virtual Private Cloud (VPC) network. When using an external proxy Network Load Balancer, incoming TCP or SSL traffic is terminated at the load balancer. A new connection then forwards traffic to the closest available backend by using either TCP or SSL (recommended). For more use cases, see Proxy Network Load Balancer overview.

External proxy Network Load Balancers let you use a single IP address for all users worldwide. The load balancer automatically routes traffic to the backends that are closest to the user.

In this example, SSL traffic from users in City A and City B is terminated at the load balancing layer, and a separate connection is established to the selected backend.

Figure: Cloud Load Balancing with SSL termination.
Note: Although external proxy Network Load Balancers can support HTTPS traffic, you should use an external Application Load Balancer for HTTPS traffic instead. External Application Load Balancers support a number of HTTP-specific features, including routing by HTTP request path and balancing by request rate. For more details, see the External Application Load Balancer overview.

Modes of operation

You can configure an external proxy Network Load Balancer in the following modes:

  • A classic proxy Network Load Balancer is implemented on globally distributed Google Front Ends (GFEs). This load balancer can be configured to handle either TCP or SSL traffic by using either a target TCP proxy or a target SSL proxy, respectively. With the Premium Tier, this load balancer can be configured as a global load balancing service. With the Standard Tier, this load balancer is configured as a regional load balancing service. Classic proxy Network Load Balancers can also be used for other protocols that use SSL, such as WebSockets and IMAP over SSL.
  • A global external proxy Network Load Balancer is implemented on globally distributed GFEs and supports advanced traffic management capabilities. This load balancer can be configured to handle either TCP or SSL traffic by using either a target TCP proxy or a target SSL proxy, respectively. This load balancer is configured as a global load balancing service with the Premium Tier. Global external proxy Network Load Balancers can also be used for other protocols that use SSL, such as WebSockets and IMAP over SSL.
  • A regional external proxy Network Load Balancer is implemented on the open source Envoy proxy software stack. It can handle only TCP traffic. This load balancer is configured as a regional load balancing service that can use either Premium or Standard Tier.

Identify the mode

To determine the mode of a load balancer, complete the following steps.

Console

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. On the Load Balancers tab, the load balancer type, protocol, and region are displayed. If the region is blank, then the load balancer is global.

    The following table summarizes how to identify the mode of the load balancer.

    | Load balancer mode | Load balancer type | Access type | Region |
    |---|---|---|---|
    | Classic proxy Network Load Balancer | Network (Proxy classic) | External | |
    | Global external proxy Network Load Balancer | Network (Proxy) | External | |
    | Regional external proxy Network Load Balancer | Network (Proxy) | External | Specifies a region |

gcloud

  1. Use the gcloud compute forwarding-rules describe command:

    gcloud compute forwarding-rules describe FORWARDING_RULE_NAME
  2. In the command output, check the load balancing scheme, region, and network tier. The following table summarizes how to identify the mode of the load balancer.

    | Load balancer mode | Load balancing scheme | Forwarding rule | Network tier |
    |---|---|---|---|
    | Classic proxy Network Load Balancer | EXTERNAL | Global | Standard or Premium |
    | Global external proxy Network Load Balancer | EXTERNAL_MANAGED | Global | Premium |
    | Regional external proxy Network Load Balancer | EXTERNAL_MANAGED | Regional | Standard or Premium |
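
    For example, a minimal check might look like the following sketch. The forwarding rule name and the scope flag are placeholders; use --region=REGION instead of --global for a regional rule.

        # Hypothetical example: list forwarding rules with the fields used to identify the mode.
        gcloud compute forwarding-rules list \
            --format="table(name, region, loadBalancingScheme, networkTier)"

        # Describe a specific rule and print only the relevant fields.
        gcloud compute forwarding-rules describe FORWARDING_RULE_NAME \
            --global \
            --format="yaml(IPAddress, loadBalancingScheme, networkTier)"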
Important: After you create a load balancer, you can't edit its mode. Instead, you must delete the load balancer and create a new one.

Architecture

The following diagrams show the components of external proxy Network Load Balancers.

Regional

This diagram shows the components of a regional external proxy Network Load Balancer deployment.

Figure: Regional external proxy Network Load Balancer components.

The following are components of external proxy Network Load Balancers.

Proxy-only subnet

Note: Proxy-only subnets are only required for regional external proxy Network Load Balancers.

The proxy-only subnet provides a set of IP addresses that Google uses to run Envoy proxies on your behalf. You must create one proxy-only subnet in each region of a VPC network where you use load balancers. The --purpose flag for this proxy-only subnet is set to REGIONAL_MANAGED_PROXY. All regional Envoy-based load balancers in the same region and VPC network share a pool of Envoy proxies from the same proxy-only subnet.

Backend VMs or endpoints of all load balancers in a region and a VPC network receive connections from the proxy-only subnet.
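
As a minimal sketch (the network name, region, and IP range below are placeholder assumptions), a proxy-only subnet might be created as follows:

    # Hypothetical example: create the proxy-only subnet that regional Envoy-based
    # load balancers in this region and network share.
    gcloud compute networks subnets create proxy-only-subnet \
        --purpose=REGIONAL_MANAGED_PROXY \
        --role=ACTIVE \
        --region=us-central1 \
        --network=lb-network \
        --range=10.129.0.0/23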

Points to remember:

  • Proxy-only subnets are only used for Envoy proxies, not your backends.
  • The IP address of the load balancer is not located in the proxy-only subnet. The load balancer's IP address is defined by its external managed forwarding rule.

Forwarding rules and IP addresses

Forwarding rules route traffic by IP address, port, and protocol to a load balancing configuration that consists of a target proxy and a backend service.

IP address specification. Each forwarding rule references a single IP address that you can use in DNS records for your application. You can either reserve a static IP address or let Cloud Load Balancing assign one for you. We recommend that you reserve a static IP address. Otherwise, you must update your DNS record with the newly assigned ephemeral IP address whenever you delete a forwarding rule and create a new one.

Port specification. External forwarding rules used in the definition of this load balancer can reference exactly one port from 1-65535. If you want to support multiple consecutive ports, you need to configure multiple forwarding rules. Multiple forwarding rules can be configured with the same virtual IP address and different ports; therefore, you can proxy multiple applications with separate custom ports to the same TCP proxy virtual IP address. For more details, see Port specifications for forwarding rules.

The following table shows the forwarding rule requirements for external proxy Network Load Balancers.

| Load balancer mode | Network Service Tier | Forwarding rule, IP address, and load balancing scheme | Routing from the internet to the load balancer frontend |
|---|---|---|---|
| Classic proxy Network Load Balancer | Premium Tier | Global external forwarding rule; global external IP address; load balancing scheme: EXTERNAL | Requests routed to the GFEs that are closest to the client on the internet. |
| Classic proxy Network Load Balancer | Standard Tier | Regional external forwarding rule; regional external IP address; load balancing scheme: EXTERNAL | Requests routed to a GFE in the load balancer's region. |
| Global external proxy Network Load Balancer | Premium Tier | Global external forwarding rule; global external IP address; load balancing scheme: EXTERNAL_MANAGED | Requests routed to the GFEs that are closest to the client on the internet. |
| Regional external proxy Network Load Balancer | Premium and Standard Tier | Regional external forwarding rule; regional external IP address; load balancing scheme: EXTERNAL_MANAGED | Requests routed to the Envoy proxies in the same region as the load balancer. |
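
As a sketch of the classic (global, Premium Tier) case, you might reserve a static address and create the forwarding rule as follows. The resource names and port are placeholder assumptions, and the target TCP proxy is assumed to already exist.

    # Hypothetical example: reserve a static global external IPv4 address, then
    # create a global external forwarding rule (load balancing scheme EXTERNAL)
    # that sends TCP traffic on port 443 to an existing target TCP proxy.
    gcloud compute addresses create my-lb-ipv4 \
        --ip-version=IPV4 \
        --global

    gcloud compute forwarding-rules create my-tcp-forwarding-rule \
        --global \
        --load-balancing-scheme=EXTERNAL \
        --address=my-lb-ipv4 \
        --target-tcp-proxy=my-tcp-proxy \
        --ports=443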

Forwarding rules and VPC networks

This section describes how forwarding rules used by external proxy Network Load Balancers are associated with VPC networks.

| Load balancer mode | VPC network association |
|---|---|
| Global external proxy Network Load Balancer, Classic proxy Network Load Balancer | No associated VPC network. The forwarding rule always uses an IP address that is outside the VPC network; therefore, the forwarding rule has no associated VPC network. |
| Regional external proxy Network Load Balancer | The forwarding rule's VPC network is the network where the proxy-only subnet has been created. You specify the network when you create the forwarding rule. |

Target proxies

External proxy Network Load Balancers terminate connections from the client and create new connections to the backends. The target proxy routes these new connections to the backend service.

Depending on the type of traffic your application needs to handle, you can configure an external proxy Network Load Balancer with either a target TCP proxy or a target SSL proxy.

  • Target TCP proxy: Configure the load balancer with a target TCP proxy if you're expecting TCP traffic.
  • Target SSL proxy: Configure the load balancer with a target SSL proxy if you're expecting encrypted client traffic. This type of load balancer is intended for non-HTTP(S) traffic only. For HTTP(S) traffic, we recommend that you use an external Application Load Balancer.

By default, the target proxy does not preserve the original client IP address and port information. You can preserve this information by enabling the PROXY protocol on the target proxy.
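
For instance, a target TCP proxy with the PROXY protocol enabled might be created as follows. This is a minimal sketch; the proxy and backend service names are placeholders.

    # Hypothetical example: create a target TCP proxy that prepends the PROXY
    # protocol version 1 header so backends can see the original client address.
    gcloud compute target-tcp-proxies create my-tcp-proxy \
        --backend-service=my-backend-service \
        --proxy-header=PROXY_V1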

The following table shows the target proxy requirements for external proxy Network Load Balancers.

| Load balancer mode | Network Service Tier | Target proxy |
|---|---|---|
| Classic proxy Network Load Balancer | Premium Tier | targetTcpProxies or targetSslProxies |
| Classic proxy Network Load Balancer | Standard Tier | targetTcpProxies or targetSslProxies |
| Global external proxy Network Load Balancer | Premium Tier | targetTcpProxies or targetSslProxies |
| Regional external proxy Network Load Balancer | Premium and Standard Tier | regionTargetTcpProxies |

SSL certificates

SSL certificates are only required if you're deploying a global external proxy Network Load Balancer or a classic proxy Network Load Balancer with a target SSL proxy.

External proxy Network Load Balancers using target SSL proxies require private keys and SSL certificates as part of the load balancer configuration.

  • Google Cloud provides two configuration methods for assigning private keys and SSL certificates to target SSL proxies: Compute Engine SSL certificates and Certificate Manager. For a description of each configuration, see Certificate configuration methods in the SSL certificates overview.

  • Google Cloud provides two certificate types: self-managed and Google-managed. For a description of each type, see Certificate types in the SSL certificates overview. A minimal self-managed certificate example follows this list.
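
As a sketch assuming the Compute Engine self-managed method (the file names and resource names are placeholders), you might create a certificate and reference it from a target SSL proxy like this:

    # Hypothetical example: create a self-managed Compute Engine SSL certificate
    # from local PEM files, then attach it to a new target SSL proxy.
    gcloud compute ssl-certificates create my-ssl-cert \
        --certificate=cert.pem \
        --private-key=key.pem

    gcloud compute target-ssl-proxies create my-ssl-proxy \
        --backend-service=my-backend-service \
        --ssl-certificates=my-ssl-cert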

Backend services

Backend services direct incoming traffic to one or more attached backends. Each backend is composed of an instance group or network endpoint group and information about the backend's serving capacity. Backend serving capacity can be based on CPU or requests per second (RPS).

Each load balancer has a single backend service resource that specifies the health check to be performed for the available backends.

Changes made to the backend service are not instantaneous. It can take several minutes for changes to propagate to GFEs. To ensure minimal interruptions to your users, you can enable connection draining on backend services. Such interruptions might happen when a backend is terminated, removed manually, or removed by an autoscaler. To learn more about using connection draining to minimize service interruptions, see Enabling connection draining.
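
For example, connection draining might be enabled on an existing global backend service like this (a sketch; the service name and timeout value are placeholders):

    # Hypothetical example: give established connections up to 300 seconds to
    # complete before a backend is removed.
    gcloud compute backend-services update my-backend-service \
        --global \
        --connection-draining-timeout=300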

For more information about the backend service resource, see Backend services overview.

The following table specifies the different backends supported on the backend service of external proxy Network Load Balancers.

| Load balancer mode | Instance groups | Zonal NEGs | Internet NEGs | Serverless NEGs | Hybrid NEGs | Private Service Connect NEGs | GKE |
|---|---|---|---|---|---|---|---|
| Classic proxy Network Load Balancer | | | | | | | Use standalone zonal NEGs |
| Global external proxy Network Load Balancer* | | GCE_VM_IP_PORT type endpoints* | | | | | |
| Regional external proxy Network Load Balancer | | GCE_VM_IP_PORT type endpoints | Regional NEGs only | | | | Add a Private Service Connect NEG |

* Global external proxy Network Load Balancers support IPv4 and IPv6 (dual stack) instance groups and zonal NEG backends with GCE_VM_IP_PORT endpoints.

Backends and VPC networks

For global external proxy Network Load Balancer and classic proxy Network Load Balancer backends, all backend instances from instance group backends and all backend endpoints from NEG backends must be located in the same project. However, an instance group backend or a NEG can use a different VPC network in that project. The different VPC networks don't need to be connected using VPC Network Peering because GFEs communicate directly with backends in their respective VPC networks.

For regional external proxy Network Load Balancer backends, the following applies:

  • For instance groups, zonal NEGs, and hybrid connectivity NEGs, all backends must be located in the same project and region as the backend service. However, a load balancer can reference a backend that uses a different VPC network in the same project as the backend service. Connectivity between the load balancer's VPC network and the backend VPC network can be configured using either VPC Network Peering, Cloud VPN tunnels, Cloud Interconnect VLAN attachments, or a Network Connectivity Center framework.

    Backend network definition

    • For zonal NEGs and hybrid NEGs, you explicitly specify the VPC network when you create the NEG.
    • For managed instance groups, the VPC network is defined in the instance template.
    • For unmanaged instance groups, the instance group's VPC network is set to match the VPC network of the nic0 interface for the first VM added to the instance group.

    Backend network requirements

    Your backend's network must satisfy one of the following network requirements:

    • The backend's VPC network must exactly match the forwarding rule's VPC network.

    • The backend's VPC network must be connected to the forwarding rule's VPC network using VPC Network Peering. You must configure subnet route exchanges to allow communication between the proxy-only subnet in the forwarding rule's VPC network and the subnets used by the backend instances or endpoints.

    • Both the backend's VPC network and the forwarding rule's VPC network must be VPC spokes attached to the same Network Connectivity Center hub. Import and export filters must allow communication between the proxy-only subnet in the forwarding rule's VPC network and the subnets used by backend instances or endpoints.

  • For all other backend types, all backends must be located in the same VPC network and region.

Backends and network interfaces

If you use instance group backends, packets are always delivered to nic0. If you want to send packets to non-nic0 interfaces (either vNICs or Dynamic Network Interfaces), use NEG backends instead.

If you use zonal NEG backends, packets are sent to whatever network interface is represented by the endpoint in the NEG. The NEG endpoints must be in the same VPC network as the NEG's explicitly defined VPC network.

Protocol for communicating with the backends

When you configure a backend service for an external proxy Network Load Balancer, you set the protocol that the backend service uses to communicate with the backends.

  • For classic proxy Network Load Balancers, you can choose either TCP or SSL.
  • For global external proxy Network Load Balancers, you can choose either TCP or SSL.
  • For regional external proxy Network Load Balancers, you can use TCP.

The load balancer uses only the protocol that you specify, and does not attemptto negotiate a connection with the other protocol.
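
A sketch of creating a TCP backend service and its health check might look like the following. The health check name, port, and service name are placeholder assumptions, and the EXTERNAL scheme matches the classic mode.

    # Hypothetical example: create a TCP health check and a global backend
    # service that communicates with its backends over TCP.
    gcloud compute health-checks create tcp my-tcp-health-check \
        --port=443

    gcloud compute backend-services create my-backend-service \
        --global \
        --protocol=TCP \
        --load-balancing-scheme=EXTERNAL \
        --health-checks=my-tcp-health-check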

Firewall rules

The following firewall rules are required:

  • For classic proxy Network Load Balancers, an ingress allow firewall rule to permit traffic from GFEs to reach your backends.

  • For global external proxy Network Load Balancers, an ingress allow firewall rule to permit traffic from GFEs to reach your backends.

  • For regional external proxy Network Load Balancers, an ingress firewall rule to permit traffic from the proxy-only subnet to reach your backends.

  • An ingress allow firewall rule to permit traffic from the health check probe ranges to reach your backends. For more information about health check probes and why it's necessary to allow traffic from them, see Probe IP ranges and firewall rules.

Firewall rules are implemented at the VM instance level, not at the GFE proxy level. You cannot use firewall rules to prevent traffic from reaching the load balancer.

The ports for these firewall rules must be configured as follows:

  • Allow traffic to the destination port for each backend service's health check.
  • For instance group backends: determine the ports to be configured by the mapping between the backend service's named port and the port numbers associated with that named port on each instance group. Port numbers can vary between instance groups assigned to the same backend service.
  • For GCE_VM_IP_PORT zonal NEG backends: allow traffic to the port numbers of the endpoints.

The following table summarizes the required source IP address ranges for the firewall rules.

| Load balancer mode | Health check source ranges | Request source ranges |
|---|---|---|
| Global external proxy Network Load Balancer | 35.191.0.0/16 and 130.211.0.0/22; for IPv6 traffic to the backends: 2600:2d00:1:b029::/64 | The source of GFE traffic depends on the backend type. For instance groups and zonal NEGs (GCE_VM_IP_PORT): 130.211.0.0/22 and 35.191.0.0/16; for IPv6 traffic to the backends: 2600:2d00:1:1::/64 |
| Classic proxy Network Load Balancer | 35.191.0.0/16 and 130.211.0.0/22 | These ranges apply to both health check probes and requests from the GFE. |
| Regional external proxy Network Load Balancer*, † | 35.191.0.0/16 and 130.211.0.0/22; for IPv6 traffic to the backends: 2600:2d00:1:b029::/64 | These ranges apply to health check probes. |

* Allowing traffic from Google's health check probe ranges isn't required for hybrid NEGs. However, if you're using a combination of hybrid and zonal NEGs in a single backend service, you need to allow traffic from the Google health check probe ranges for the zonal NEGs.

† For regional internet NEGs, health checks are optional. Traffic from load balancers using regional internet NEGs originates from the proxy-only subnet and is then NAT-translated (by using Cloud NAT) to either manually or automatically allocated NAT IP addresses. This traffic includes both health check probes and user requests from the load balancer to the backends. For details, see Regional NEGs: Use a Cloud NAT gateway.

Important: Make sure that you allow packets from the full health check ranges. If your firewall rule allows packets from only a subset of the ranges, you might see health check failures because the load balancer can't communicate with your backends. This causes connection timeouts.
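
As a sketch for a GFE-based (classic or global) deployment, a single ingress rule covering both the GFE and health check ranges might look like this. The network name, target tag, and port are placeholder assumptions.

    # Hypothetical example: allow TCP traffic from the GFE and health check
    # source ranges to backends tagged my-backend-tag on port 443.
    gcloud compute firewall-rules create allow-gfe-and-health-checks \
        --network=lb-network \
        --direction=INGRESS \
        --action=ALLOW \
        --rules=tcp:443 \
        --source-ranges=130.211.0.0/22,35.191.0.0/16 \
        --target-tags=my-backend-tag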

Source IP addresses

The source IP address for packets, as seen by the backends, is not the Google Cloud external IP address of the load balancer. In other words, there are two TCP connections.

For classic proxy Network Load Balancers and global external proxy Network Load Balancers:

  • Connection 1, from the original client to the load balancer (GFE):

    • Source IP address: the original client (or external IP address if the client is behind a NAT gateway or a forward proxy).
    • Destination IP address: your load balancer's IP address.
  • Connection 2, from the load balancer (GFE) to the backend VM or endpoint:

    • Source IP address: an IP address in one of the ranges specified in Firewall rules (130.211.0.0/22 or 35.191.0.0/16).
    • Destination IP address: the internal IP address of the backend VM or container in the VPC network.

For regional external proxy Network Load Balancers:

  • Connection 1, from the original client to the load balancer (proxy-only subnet):

    • Source IP address: the original client (or external IP address if the client is behind a NAT gateway or a forward proxy).
    • Destination IP address: your load balancer's IP address.
  • Connection 2, from the load balancer (proxy-only subnet) to the backend VM or endpoint:

    • Source IP address: an IP address in the proxy-only subnet that is shared among all the Envoy-based load balancers deployed in the same region and network as the load balancer.

    • Destination IP address: the internal IP address of the backend VM or container in the VPC network.

Open ports

External proxy Network Load Balancers are reverse proxy load balancers. The load balancer terminates incoming connections, and then opens new connections from the load balancer to the backends. These load balancers are implemented by using Google Front End (GFE) proxies worldwide.

GFEs have several open ports to support other Google services that run on the same architecture. When you run a port scan, you might see other open ports for other Google services running on GFEs.

Running a port scan on the IP address of a GFE-based load balancer isn't useful from an auditing perspective for the following reasons:

  • A port scan (for example, with nmap) generally expects no response packet or a TCP RST packet when performing TCP SYN probing. GFEs send SYN-ACK packets in response to SYN probes only for ports on which you have configured a forwarding rule. GFEs only send packets to your backends in response to packets sent to your load balancer's IP address and the destination port configured on its forwarding rule. Packets that are sent to a different IP address or port aren't sent to your backends.

    GFEs implement security features such as Google Cloud Armor. With Cloud Armor Standard, GFEs provide always-on protection from volumetric and protocol-based DDoS attacks and SYN floods. This protection is available even if you haven't explicitly configured Cloud Armor. You are charged only if you configure security policies or if you enroll in Managed Protection Plus.

  • Packets sent to the IP address of your load balancer can be answered by any GFE in Google's fleet; however, scanning a load balancer IP address and destination port combination only interrogates a single GFE per TCP connection. The IP address of your load balancer isn't assigned to a single device or system. Thus, scanning the IP address of a GFE-based load balancer doesn't scan all the GFEs in Google's fleet.

With that in mind, the following are some more effective ways to audit the security of your backend instances:

  • A security auditor should inspect the forwarding rule configuration for the load balancer. The forwarding rules define the destination port for which your load balancer accepts packets and forwards them to the backends. For GFE-based load balancers, each external forwarding rule can only reference a single destination TCP port.

  • A security auditor should inspect the firewall rule configuration applicable to backend VMs. The firewall rules that you set block traffic from the GFEs to the backend VMs, but they don't block incoming traffic to the GFEs. For best practices, see the firewall rules section.

Shared VPC architecture

Regional external proxy Network Load Balancers and classic proxy Network Load Balancers support deployments that use Shared VPC networks. Shared VPC lets you maintain a clear separation of responsibilities between network administrators and service developers. Your development teams can focus on building services in service projects, and the network infrastructure teams can provision and administer load balancing. If you're not already familiar with Shared VPC, read the Shared VPC overview documentation.

IP address: An external IP address must be defined in the same project as the load balancer.

Forwarding rule: The external forwarding rule must be defined in the same project as the backend instances (the service project).

Target proxy: The target TCP or SSL proxy must be defined in the same project as the backend instances.

Backend components: For classic proxy Network Load Balancers, a global backend service must be defined in the same project as the backend instances. These instances must be in instance groups attached to the backend service as backends. Health checks associated with backend services must be defined in the same project as the backend service. For regional external proxy Network Load Balancers, the backend VMs are typically located in a service project. A regional backend service and health check must be defined in that service project.

Note: Global external proxy Network Load Balancers do not support deployments that use Shared VPC networks.

Traffic distribution

When you add a backend instance group or NEG to a backend service, you specify a load balancing mode, which defines a method that measures the backend load and target capacity.

For external proxy Network Load Balancers, the balancing mode can be CONNECTION or UTILIZATION:

  • If the load balancing mode is CONNECTION, the load is spread based on the total number of connections that the backend can handle.
  • If the load balancing mode is UTILIZATION, the load is spread based on the utilization of instances in an instance group. This balancing mode applies to VM instance group backends only.

The distribution of traffic across backends is determined by the balancing modeof the load balancer.
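
For example, a backend might be attached with the CONNECTION balancing mode like this (a sketch; the instance group name, zone, and connection limit are placeholder assumptions):

    # Hypothetical example: attach an instance group with a target capacity of
    # 100 connections per instance.
    gcloud compute backend-services add-backend my-backend-service \
        --global \
        --instance-group=my-instance-group \
        --instance-group-zone=us-central1-a \
        --balancing-mode=CONNECTION \
        --max-connections-per-instance=100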

Classic proxy Network Load Balancer

For the classic proxy Network Load Balancer, the balancing mode is used to select the most favorable backend (instance group or NEG). Traffic is then distributed in a round-robin fashion among instances or endpoints within the backend.

How connections are distributed

A classic proxy Network Load Balancer can be configured as a global load balancing service with Premium Tier, and as a regional service in the Standard Tier.

The balancing mode and choice of target determine backend fullness from the perspective of each zonal GCE_VM_IP_PORT NEG, zonal instance group, or zone of a regional instance group. Traffic is then distributed within a zone by using consistent hashing.

For Premium Tier

  • You can have only one backend service, and the backend service can have backends in multiple regions. For global load balancing, you deploy your backends in multiple regions, and the load balancer automatically directs traffic to the region closest to the user. If a region is at capacity, the load balancer automatically directs new connections to another region with available capacity. Existing user connections remain in the current region.

  • Google advertises your load balancer's IP address from all points of presence, worldwide. Each load balancer IP address is global anycast.

  • If you configure a backend service with backends in multiple regions, Google Front Ends (GFEs) attempt to direct requests to healthy backend instance groups or NEGs in the region closest to the user.

For Standard Tier

  • Google advertises your load balancer's IP address from points of presence associated with the forwarding rule's region. The load balancer uses a regional external IP address.

  • You can only configure backends in the same region as the forwarding rule. The load balancer only directs requests to healthy backends in that one region.

Global external proxy Network Load Balancer

For the global external proxy Network Load Balancer, traffic distribution is based on the load balancing mode and the load balancing locality policy.

The balancing mode determines the weight and fraction of traffic to be sent to each group (instance group or NEG). The load balancing locality policy (LocalityLbPolicy) determines how backends within the group are load balanced.

When a backend service receives traffic, it first directs traffic to a backend (instance group or NEG) according to the backend's balancing mode. After a backend is selected, traffic is then distributed among instances or endpoints in that backend group according to the load balancing locality policy.


How connections are distributed

A global external proxy Network Load Balancer can be configured as a global load balancing service with Premium Tier.

The balancing mode and choice of target determine backend fullness from the perspective of each zonal GCE_VM_IP_PORT NEG or zonal instance group. Traffic is then distributed within a zone by using consistent hashing.

  • You can have only one backend service, and the backend service can have backends in multiple regions. For global load balancing, you deploy your backends in multiple regions, and the load balancer automatically directs traffic to the region closest to the user. If a region is at capacity, the load balancer automatically directs new connections to another region with available capacity. Existing user connections remain in the current region.

  • Google advertises your load balancer's IP address from all points of presence, worldwide. Each load balancer IP address is global anycast.

  • If you configure a backend service with backends in multiple regions, Google Front Ends (GFEs) attempt to direct requests to healthy backend instance groups or NEGs in the region closest to the user.

Regional external proxy Network Load Balancer

For regional external proxy Network Load Balancers, traffic distribution is based on the load balancing mode and the load balancing locality policy.

The balancing mode determines the weight and fraction of traffic that should be sent to each backend (instance group or NEG). The load balancing locality policy (LocalityLbPolicy) determines how backends within the group are load balanced.

When a backend service receives traffic, it first directs traffic to a backend (instance group or NEG) according to its balancing mode. After a backend is selected, traffic is then distributed among instances or endpoints in that backend group according to the load balancing locality policy.


Session affinity

Session affinity lets you configure the load balancer's backend service to send all requests from the same client to the same backend, as long as the backend is healthy and has capacity.

External proxy Network Load Balancers offer the following types of session affinity:
  • None

    A session affinity setting of NONE does not mean that there is no session affinity. It means that no session affinity option is explicitly configured.

    Hashing is always performed to select a backend. A session affinity setting of NONE means that the load balancer uses a 5-tuple hash to select a backend. The 5-tuple hash consists of the source IP address, the source port, the protocol, the destination IP address, and the destination port.

    A session affinity of NONE is the default value.

  • Client IP affinity

    Client IP session affinity (CLIENT_IP) is a 2-tuple hash created from the source and destination IP addresses of the packet. Client IP affinity forwards all requests from the same client IP address to the same backend, as long as that backend has capacity and remains healthy.

    When you use client IP affinity, keep the following in mind:

    • The packet destination IP address is only the same as the load balancer forwarding rule's IP address if the packet is sent directly to the load balancer.
    • The packet source IP address might not match an IP address associated with the original client if the packet is processed by an intermediate NAT or proxy system before being delivered to a Google Cloud load balancer. In situations where many clients share the same effective source IP address, some backend VMs might receive more connections or requests than others.

Keep the following in mind when configuring session affinity:

  • Don't rely on session affinity for authentication or security purposes. Session affinity can break whenever the number of serving and healthy backends changes. For more details, see Losing session affinity.

  • The default values of the --session-affinity and --subsetting-policy flags are both NONE, and only one of them at a time can be set to a different value (see the example that follows this list).
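
As a sketch, client IP affinity might be enabled on an existing backend service like this. The service name is a placeholder, and --global assumes a GFE-based deployment.

    # Hypothetical example: route requests from the same client IP address to the
    # same backend by switching the session affinity from NONE to CLIENT_IP.
    gcloud compute backend-services update my-backend-service \
        --global \
        --session-affinity=CLIENT_IP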

Losing session affinity

All session affinity options require the following:

  • The selected backend instance or endpoint must remain configured as a backend. Session affinity can break when one of the following events occurs:
    • You remove the selected instance from its instance group.
    • Managed instance group autoscaling or autohealing removes the selected instance from its managed instance group.
    • You remove the selected endpoint from its NEG.
    • You remove the instance group or NEG that contains the selected instance or endpoint from the backend service.
  • The selected backend instance or endpoint must remain healthy. Session affinity can break when the selected instance or endpoint fails health checks.
  • For global external proxy Network Load Balancers and classic proxy Network Load Balancers, session affinity can break if a different first-layer Google Front End (GFE) is used for subsequent requests or connections. A different first-layer GFE might be selected if the routing path from a client on the internet to Google changes between requests or connections.
All session affinity options have the following additional requirements:
  • The instance group or NEG that contains the selected instance or endpoint must not be full as defined by its target capacity. (For regional managed instance groups, the zonal component of the instance group that contains the selected instance must not be full.) Session affinity can break when the instance group or NEG is full and other instance groups or NEGs are not. Because fullness can change in unpredictable ways when using the UTILIZATION balancing mode, you should use the RATE or CONNECTION balancing mode to minimize situations when session affinity can break.

  • The total number of configured backend instances or endpoints must remain constant. When at least one of the following events occurs, the number of configured backend instances or endpoints changes, and session affinity can break:

    • Adding new instances or endpoints:

      • You add instances to an existing instance group on the backend service.
      • Managed instance group autoscaling adds instances to a managed instance group on the backend service.
      • You add endpoints to an existing NEG on the backend service.
      • You add non-empty instance groups or NEGs to the backend service.
    • Removing any instance or endpoint, not just the selected instance or endpoint:

      • You remove any instance from an instance group backend.
      • Managed instance group autoscaling or autohealing removes any instance from a managed instance group backend.
      • You remove any endpoint from a NEG backend.
      • You remove any existing, non-empty backend instance group or NEG from the backend service.
  • The total number of healthy backend instances or endpoints must remain constant. When at least one of the following events occurs, the number of healthy backend instances or endpoints changes, and session affinity can break:

    • Any instance or endpoint passes its health check, transitioning from unhealthy to healthy.
    • Any instance or endpoint fails its health check, transitioning from healthy to unhealthy or timeout.

Failover

Failover for external proxy Network Load Balancers works as follows:

  • If a backend becomes unhealthy, traffic is automatically redirected to healthy backends within the same region.
  • If all backends within a region are unhealthy, traffic is distributed to healthy backends in other regions (global and classic modes only).
  • If all backends are unhealthy, the load balancer drops traffic.

Load balancing for GKE applications

If you are building applications in Google Kubernetes Engine, you can use standalone NEGs to load balance traffic directly to containers. With standalone NEGs, you are responsible for creating the Service object that creates the NEG, and then associating the NEG with the backend service so that the load balancer can connect to the Pods.

For related documentation, see Container-native load balancing through standalone zonal NEGs.
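
For instance, after GKE has created the standalone zonal NEG for your Service, attaching it to the load balancer's backend service might look like this (a sketch; the NEG name, zone, and connection limit are placeholder assumptions):

    # Hypothetical example: attach a standalone zonal NEG created by GKE to an
    # existing global backend service.
    gcloud compute backend-services add-backend my-backend-service \
        --global \
        --network-endpoint-group=my-gke-neg \
        --network-endpoint-group-zone=us-central1-a \
        --balancing-mode=CONNECTION \
        --max-connections-per-endpoint=100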

Limitations

  • You can't create a regional external proxy Network Load Balancer in Premium Tier using the Google Cloud console. Additionally, only regions supporting Standard Tier are available for these load balancers in the Google Cloud console. Use either the gcloud CLI or the API instead.

  • Google Cloud does not make any guarantees on the lifetime of TCP connections when you use external proxy Network Load Balancers. Clients should be resilient to dropped connections, both due to broader internet issues and due to regularly scheduled restarts of the proxies managing the connections.

  • The following limitations apply only to classic proxy Network Load Balancers and global external proxy Network Load Balancers that are deployed with a target SSL proxy:

    • Classic proxy Network Load Balancers and global external proxy Network Load Balancers do not support client certificate-based authentication, also known as mutual TLS authentication.

    • Classic proxy Network Load Balancers and global external proxy Network Load Balancers support only lowercase characters in domains in a common name (CN) attribute or a subject alternative name (SAN) attribute of the certificate. Certificates with uppercase characters in domains are returned only when set as the primary certificate in the target proxy.

