Backend service-based external passthrough Network Load Balancer overview
External passthrough Network Load Balancers are regional, Layer 4 load balancers that distribute external traffic among backends (instance groups or network endpoint groups (NEGs)) in the same region as the load balancer. These backends must be in the same region and project but can be in different VPC networks. These load balancers are built on Maglev and the Andromeda network virtualization stack.
External passthrough Network Load Balancers can receive traffic from:
- Any client on the internet
- Google Cloud VMs with external IPs
- Google Cloud VMs that have internet access through Cloud NAT or instance-based NAT
External passthrough Network Load Balancers are not proxies. The load balancer itself doesn't terminate user connections. Load-balanced packets are sent to the backend VMs with their source and destination IP addresses, protocol, and, if applicable, ports, unchanged. The backend VMs then terminate user connections. Responses from the backend VMs go directly to the clients, not back through the load balancer. This process is known as direct server return (DSR).
Backend service-based external passthrough Network Load Balancers support the following features:
- Managed and unmanaged instance group backends. Backend service-based external passthrough Network Load Balancers support both managed and unmanaged instance groups as backends. Managed instance groups automate certain aspects of backend management and provide better scalability and reliability compared to unmanaged instance groups.
- Zonal NEG backends. Backend service-based external passthrough Network Load Balancers support using zonal NEGs with GCE_VM_IP endpoints. Zonal NEG GCE_VM_IP endpoints let you do the following:
  - Forward packets to any network interface, not just nic0.
  - Place the same GCE_VM_IP endpoint in two or more zonal NEGs connected to different backend services.
- Support for multiple protocols. Backend service-based external passthrough Network Load Balancers can load-balance TCP, UDP, ESP, GRE, ICMP, and ICMPv6 traffic.
- Support for IPv6 connectivity. Backend service-based external passthrough Network Load Balancers can handle both IPv4 and IPv6 traffic.
- Fine-grained traffic distribution control. A backend service allows traffic to be distributed according to the configured session affinity, connection tracking policy, and weighted load balancing settings. The backend service can also be configured to enable connection draining and designate failover backends for the load balancer. Most of these settings have default values that let you get started quickly. For more information, see Traffic distribution for external passthrough Network Load Balancers.
- Support for non-legacy, regional health checks. Backend service-based external passthrough Network Load Balancers support regional health checks, which can use any supported health check protocol.
- Google Cloud Armor integration. Google Cloud Armor supports advanced network DDoS protection for external passthrough Network Load Balancers. For more information, see Configure advanced network DDoS protection.
GKE integration. If you are building applications in GKE, we recommend that you use the built-in GKE Service controller, which deploys Google Cloud load balancers on behalf of GKE users. This is the same as the standalone load balancing architecture described on this page, except that its lifecycle is fully automated and controlled by GKE.
Architecture
The following diagram illustrates the components of an external passthrough Network Load Balancer:
The load balancer is made up of several configuration components. A single load balancer can have the following:
- One or more regional external IP addresses
- One or more regional external forwarding rules
- One regional external backend service
- One or more backends: either all instance groups or all zonal NEG backends (GCE_VM_IP endpoints)
- A health check associated with the backend service
Additionally, you must create firewall rules that allow your load balancing traffic and health check probes to reach the backend VMs.
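The following gcloud sketch shows how these components fit together. It is a minimal illustration rather than a complete setup guide: the resource names, region, and zone (nlb-ipv4, nlb-tcp-hc, nlb-backend-service, nlb-ig, us-central1) are placeholders, and the instance group and firewall rules are assumed to be created separately.

```
# Reserve a regional external IPv4 address.
gcloud compute addresses create nlb-ipv4 \
    --region=us-central1

# Regional TCP health check used by the backend service.
gcloud compute health-checks create tcp nlb-tcp-hc \
    --region=us-central1 \
    --port=80

# Regional external backend service.
gcloud compute backend-services create nlb-backend-service \
    --region=us-central1 \
    --load-balancing-scheme=EXTERNAL \
    --protocol=TCP \
    --health-checks=nlb-tcp-hc \
    --health-checks-region=us-central1

# Attach an existing instance group as a backend.
gcloud compute backend-services add-backend nlb-backend-service \
    --region=us-central1 \
    --instance-group=nlb-ig \
    --instance-group-zone=us-central1-a

# Frontend: a forwarding rule that references the reserved address.
gcloud compute forwarding-rules create nlb-fr-tcp \
    --region=us-central1 \
    --load-balancing-scheme=EXTERNAL \
    --address=nlb-ipv4 \
    --ip-protocol=TCP \
    --ports=80 \
    --backend-service=nlb-backend-service \
    --backend-service-region=us-central1
```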
IP address
An external passthrough Network Load Balancer requires at least one forwarding rule. The forwarding rule references a regional external IP address that is accessible anywhere on the internet.
For IPv4 traffic, the forwarding rule references a single regional external IPv4 address. Regional external IPv4 addresses come from a pool unique to each Google Cloud region. The IPv4 address can be assigned either by specifying a reserved external IP address or by letting Google Cloud automatically assign an ephemeral IPv4 address.

For IPv6 traffic, the forwarding rule references a /96 range of IPv6 addresses from a dual-stack or IPv6-only subnet. The subnet must have an assigned external IPv6 subnet range in the VPC network. External IPv6 addresses are available only in Premium Tier. The /96 IPv6 address range can be assigned by specifying a reserved external IPv6 address, specifying a custom ephemeral IPv6 address, or letting Google Cloud automatically assign an ephemeral IPv6 address. To specify a custom ephemeral IPv6 address, you must use the gcloud CLI or the API. The Google Cloud console doesn't support specifying custom ephemeral IPv6 addresses for forwarding rules.

For more details about IPv6 support, see the VPC documentation on IPv6 subnet ranges and IPv6 addresses.
Use a reserved IP address for the forwarding rule if you need to keep the address associated with your project for reuse after you delete a forwarding rule or if you need multiple forwarding rules to reference the same IP address.
External passthrough Network Load Balancers support both Standard Tier and Premium Tier for regional external IPv4 addresses. Both the IP address and the forwarding rule must use the same network tier. Regional external IPv6 addresses are only available in the Premium Tier.
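As a minimal sketch of reserving an address in each tier, the following commands use placeholder names and region; whichever tier you choose must match the tier of the forwarding rule that later references the address.

```
# Reserve a Premium Tier regional external IPv4 address (Premium is the default tier).
gcloud compute addresses create nlb-ipv4-premium \
    --region=us-central1 \
    --network-tier=PREMIUM

# Reserve a Standard Tier regional external IPv4 address; the forwarding rule
# that uses it must also be Standard Tier.
gcloud compute addresses create nlb-ipv4-standard \
    --region=us-central1 \
    --network-tier=STANDARD
```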
Forwarding rule
A regional external forwarding rule specifies the protocol and ports on which the load balancer accepts traffic. Because external passthrough Network Load Balancers are not proxies, they pass traffic to backends on the same protocol and ports, if the packet carries port information. The forwarding rule in combination with the IP address forms the frontend of the load balancer.

The load balancer preserves the source IP addresses of incoming packets. The destination IP address for incoming packets is an IP address associated with the load balancer's forwarding rule.

Incoming traffic is matched to a forwarding rule, which is a combination of a particular IP address (either an IPv4 address or an IPv6 address range), protocol, and, if the protocol is port-based, one port, a range of ports, or all ports. The forwarding rule then directs traffic to the load balancer's backend service.

If the forwarding rule references an IPv4 address, the forwarding rule is not associated with any subnet. That is, its IP address comes from outside of any Google Cloud subnet range.
If the forwarding rule references a /96 IPv6 address range, the forwarding rule must be associated with a subnet, and that subnet must be (a) dual-stack and (b) have an external IPv6 subnet range (--ipv6-access-type set to EXTERNAL). The subnet that the forwarding rule references can be the same subnet used by the backend instances; however, backend instances can use a separate subnet if chosen. When backend instances use a separate subnet, the following must be true:

An external passthrough Network Load Balancer requires at least one forwarding rule. Forwarding rules can be configured to direct traffic coming from a specific range of source IP addresses to a specific backend service (or target instance). For details, see traffic steering. You can define multiple forwarding rules for the same load balancer as described in Multiple forwarding rules.

If you want the load balancer to handle both IPv4 and IPv6 traffic, create two forwarding rules: one rule for IPv4 traffic that points to IPv4 (or dual-stack) backends, and one rule for IPv6 traffic that points only to dual-stack backends. It's possible to have an IPv4 and an IPv6 forwarding rule reference the same backend service, but the backend service must reference dual-stack backends.
Forwarding rule protocols
External passthrough Network Load Balancers support the following protocol options for each forwarding rule: TCP, UDP, and L3_DEFAULT.

Use the TCP and UDP options to configure TCP or UDP load balancing. The L3_DEFAULT protocol option enables an external passthrough Network Load Balancer to load balance TCP, UDP, ESP, GRE, ICMP, and ICMPv6 traffic.

In addition to supporting protocols other than TCP and UDP, L3_DEFAULT makes it possible for a single forwarding rule to serve multiple protocols. For example, IPsec services typically handle some combination of ESP and UDP-based IKE and NAT-T traffic. The L3_DEFAULT option allows a single forwarding rule to be configured to process all of those protocols.

Forwarding rules using the TCP or UDP protocols can reference a backend service using either the same protocol as the forwarding rule or a backend service whose protocol is UNSPECIFIED. L3_DEFAULT forwarding rules can only reference a backend service with protocol UNSPECIFIED.
If you're using the L3_DEFAULT protocol, you must configure the forwarding rule to accept traffic on all ports. To configure all ports, either set --ports=ALL by using the Google Cloud CLI, or set allPorts to True by using the API.
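The following gcloud sketch pairs an L3_DEFAULT forwarding rule with a backend service whose protocol is UNSPECIFIED. The resource names, region, and referenced health check are placeholders.

```
# Backend service protocol must be UNSPECIFIED to pair with an L3_DEFAULT rule.
gcloud compute backend-services create nlb-l3-backend-service \
    --region=us-central1 \
    --load-balancing-scheme=EXTERNAL \
    --protocol=UNSPECIFIED \
    --health-checks=nlb-tcp-hc \
    --health-checks-region=us-central1

# L3_DEFAULT forwarding rules must accept traffic on all ports (--ports=ALL).
gcloud compute forwarding-rules create nlb-fr-l3 \
    --region=us-central1 \
    --load-balancing-scheme=EXTERNAL \
    --ip-protocol=L3_DEFAULT \
    --ports=ALL \
    --backend-service=nlb-l3-backend-service \
    --backend-service-region=us-central1
```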
The following table summarizes how to use these settings for different protocols.
| Traffic to be load balanced | Forwarding rule protocol | Backend service protocol |
|---|---|---|
| TCP | TCP | TCP or UNSPECIFIED |
| TCP | L3_DEFAULT | UNSPECIFIED |
| UDP | UDP | UDP or UNSPECIFIED |
| UDP | L3_DEFAULT | UNSPECIFIED |
| ESP, GRE, ICMP/ICMPv6 (echo request only) | L3_DEFAULT | UNSPECIFIED |
Multiple forwarding rules
You can configure multiple regional external forwarding rules for the same external passthrough Network Load Balancer. Each forwarding rule can have a different regional external IP address, or multiple forwarding rules can have the same regional external IP address.

Configuring multiple regional external forwarding rules can be useful for these use cases:

- You need to configure more than one external IP address for the same backend service.
- You need to configure different protocols or non-overlapping ports or port ranges for the same external IP address.
- You need to steer traffic from certain source IP addresses to specific load balancer backends.

Google Cloud requires that incoming packets match no more than one forwarding rule. Except for steering forwarding rules, which are discussed in the next section, two or more forwarding rules that use the same regional external IP address must have unique protocol and port combinations according to these constraints (a gcloud sketch follows this list):
- A forwarding rule configured for all ports of a protocol prevents the creation of other forwarding rules using the same protocol and IP address. Forwarding rules using TCP or UDP protocols can be configured to use all ports, or they can be configured for specific ports. For example, if you create a forwarding rule using IP address 198.51.100.1, the TCP protocol, and all ports, you cannot create any other forwarding rule using IP address 198.51.100.1 and the TCP protocol. You can create two forwarding rules, both using the IP address 198.51.100.1 and the TCP protocol, if each one has unique ports or non-overlapping port ranges. For example, you can create two forwarding rules using IP address 198.51.100.1 and the TCP protocol, where one forwarding rule's ports are 80,443 and the other uses the port range 81-442.
- Only one L3_DEFAULT forwarding rule can be created per IP address. This is because the L3_DEFAULT protocol uses all ports by definition. In this context, the term all ports includes protocols without port information.
- A single L3_DEFAULT forwarding rule can coexist with other forwarding rules that use specific protocols (TCP or UDP). The L3_DEFAULT forwarding rule can be used as a last resort when forwarding rules using the same IP address but more specific protocols exist. An L3_DEFAULT forwarding rule processes packets sent to its destination IP address if and only if the packet's destination IP address, protocol, and destination port don't match a protocol-specific forwarding rule. To illustrate this, consider these two scenarios. Forwarding rules in both scenarios use the same IP address 198.51.100.1.
  - Scenario 1. The first forwarding rule uses the L3_DEFAULT protocol. The second forwarding rule uses the TCP protocol and all ports. TCP packets sent to any destination port of 198.51.100.1 are processed by the second forwarding rule. Packets using different protocols are processed by the first forwarding rule.
  - Scenario 2. The first forwarding rule uses the L3_DEFAULT protocol. The second forwarding rule uses the TCP protocol and port 8080. TCP packets sent to 198.51.100.1:8080 are processed by the second forwarding rule. All other packets, including TCP packets sent to different destination ports, are processed by the first forwarding rule.
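The sketch below shows these coexistence rules in practice, assuming 198.51.100.1 is a regional external address reserved in your project and that the referenced backend services (placeholder names) already exist.

```
# Two TCP rules can share one IP address when their ports don't overlap.
gcloud compute forwarding-rules create nlb-fr-web \
    --region=us-central1 \
    --load-balancing-scheme=EXTERNAL \
    --address=198.51.100.1 \
    --ip-protocol=TCP \
    --ports=80,443 \
    --backend-service=nlb-backend-service \
    --backend-service-region=us-central1

gcloud compute forwarding-rules create nlb-fr-other \
    --region=us-central1 \
    --load-balancing-scheme=EXTERNAL \
    --address=198.51.100.1 \
    --ip-protocol=TCP \
    --ports=81-442 \
    --backend-service=nlb-backend-service \
    --backend-service-region=us-central1

# A single L3_DEFAULT rule can coexist on the same IP address as a last resort
# for packets that don't match the protocol-specific rules above.
gcloud compute forwarding-rules create nlb-fr-l3-default \
    --region=us-central1 \
    --load-balancing-scheme=EXTERNAL \
    --address=198.51.100.1 \
    --ip-protocol=L3_DEFAULT \
    --ports=ALL \
    --backend-service=nlb-l3-backend-service \
    --backend-service-region=us-central1
```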
Forwarding rule selection
Google Cloud selects one or zero forwarding rules to process an incoming packet by using this elimination process, starting with the set of forwarding rule candidates that match the destination IP address of the packet:

1. Eliminate forwarding rules whose protocol doesn't match the packet's protocol, except for L3_DEFAULT forwarding rules. Forwarding rules using the L3_DEFAULT protocol are never eliminated by this step because L3_DEFAULT matches all protocols. For example, if the packet's protocol is TCP, only forwarding rules using the UDP protocol are eliminated.
2. Eliminate forwarding rules whose port doesn't match the packet's port. Forwarding rules configured for all ports are never eliminated by this step because an all-ports forwarding rule matches any port.
3. If the remaining forwarding rule candidates include both L3_DEFAULT and protocol-specific forwarding rules, eliminate the L3_DEFAULT forwarding rules. If the remaining forwarding rule candidates are all L3_DEFAULT forwarding rules, none are eliminated at this step.
4. At this point, the remaining forwarding rule candidates fall into one of the following categories:
   - A single forwarding rule remains that matches the packet's destination IP address, protocol, and port, and it is used to route the packet.
   - Two or more forwarding rule candidates remain that match the packet's destination IP address, protocol, and port. This means that the remaining candidates include steering forwarding rules (discussed in the next section). Select the steering forwarding rule whose source range includes the most specific (longest prefix match) CIDR containing the packet's source IP address. If no steering forwarding rules have a source range that includes the packet's source IP address, select the parent forwarding rule.
   - Zero forwarding rule candidates remain, and the packet is dropped.
When using multiple forwarding rules, make sure that you configure the software running on your backend VMs so that it binds to all of the external IP addresses of the load balancer's forwarding rules.
Traffic steering
Forwarding rules for external passthrough Network Load Balancers can be configured to direct traffic coming from a specific source IP address or a range of IP addresses to a specific backend service (or target instance).

Traffic steering is useful for troubleshooting and for advanced configurations. With traffic steering, you can direct certain clients to a different set of backends, a different backend service configuration, or both. For example:

- Traffic steering lets you create two forwarding rules that direct traffic to the same backend (instance group or NEG) by way of two backend services. The two backend services can be configured with different health checks, different session affinities, or different traffic distribution control policies (connection tracking, connection draining, and failover).
- Traffic steering lets you create a forwarding rule to redirect traffic from a low-bandwidth backend service to a high-bandwidth backend service. Both backend services contain the same set of backend VMs or endpoints, but they are load balanced with different weights using weighted load balancing.
- Traffic steering lets you create two forwarding rules that direct traffic to different backend services, with different backends (instance groups or NEGs). For example, one backend could be configured using different machine types in order to better process traffic from certain source IP addresses.

Traffic steering is configured with a forwarding rule API parameter called sourceIPRanges. Forwarding rules that have at least one source IP range configured are called steering forwarding rules.

A steering forwarding rule can use the sourceIPRanges parameter to specify a comma-separated list of up to 64 source IP addresses or IP address ranges. You can update this list of source IP address ranges at any time.

Each steering forwarding rule requires that you first create a parent forwarding rule. The parent and steering forwarding rules share the same regional external IP address, IP protocol, and port information; however, the parent forwarding rule does not have any source IP address information. For example (a gcloud sketch follows this list):
- Parent forwarding rule: IP address: 198.51.100.1, IP protocol: TCP, ports: 80
- Steering forwarding rule: IP address: 198.51.100.1, IP protocol: TCP, ports: 80, sourceIPRanges: 203.0.113.0/24
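A minimal sketch of this parent/steering pair follows, assuming the address is already reserved and that both backend services (placeholder names nlb-backend-service and nlb-steering-backend-service) already exist; the --source-ip-ranges flag sets the sourceIPRanges parameter.

```
# Parent rule: no source IP ranges.
gcloud compute forwarding-rules create nlb-fr-parent \
    --region=us-central1 \
    --load-balancing-scheme=EXTERNAL \
    --address=198.51.100.1 \
    --ip-protocol=TCP \
    --ports=80 \
    --backend-service=nlb-backend-service \
    --backend-service-region=us-central1

# Steering rule: same IP address, protocol, and port, plus source IP ranges,
# pointing at a second backend service.
gcloud compute forwarding-rules create nlb-fr-steering \
    --region=us-central1 \
    --load-balancing-scheme=EXTERNAL \
    --address=198.51.100.1 \
    --ip-protocol=TCP \
    --ports=80 \
    --source-ip-ranges=203.0.113.0/24 \
    --backend-service=nlb-steering-backend-service \
    --backend-service-region=us-central1
```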
A parent forwarding rule that points to a backend service can be associated with a steering forwarding rule that points to a backend service or a target instance.

For a given parent forwarding rule, two or more steering forwarding rules can have overlapping, but not identical, source IP address ranges and IP addresses. As an example, one steering forwarding rule can have the source IP range 203.0.113.0/24 and another steering forwarding rule for the same parent can have the source IP address 203.0.113.0.

You must delete all steering forwarding rules before you can delete the parent forwarding rule upon which they depend.

To learn how incoming packets are processed when steering forwarding rules are used, see Forwarding rule selection.
Session affinity behavior across steering changes
This section describes the conditions under which session affinity might break when the source IP address ranges configured for a steering forwarding rule are updated:

- If an existing connection continues to match the same forwarding rule after you change the source IP ranges for a steering forwarding rule, session affinity doesn't break. If your change results in an existing connection matching a different forwarding rule, then:
  - Session affinity always breaks under these circumstances:
    - The newly matched forwarding rule directs an established connection to a backend service (or target instance) that doesn't reference the previously selected backend VM.
    - The newly matched forwarding rule directs an established connection to a backend service that does reference the previously selected backend VM, but the backend service is not configured to persist connections when backends are unhealthy, and the backend VM fails the backend service's health check.
  - Session affinity might break when the newly matched forwarding rule directs an established connection to a backend service, and the backend service does reference the previously selected VM, but the backend service's combination of session affinity and connection tracking mode results in a different connection tracking hash.
Preserving session affinity across steering changes
This section describes how to avoid breaking session affinity when the source IP ranges for steering forwarding rules are updated:

- Steering forwarding rules pointing to backend services. If both the parent and the steering forwarding rule point to backend services, you need to manually make sure that the session affinity and connection tracking policy settings are identical. Google Cloud does not automatically reject configurations if they are not identical.
- Steering forwarding rules pointing to target instances. A parent forwarding rule that points to a backend service can be associated with a steering forwarding rule that points to a target instance. In this case, the steering forwarding rule inherits session affinity and connection tracking policy settings from the parent forwarding rule.

For instructions on how to configure traffic steering, see Configure traffic steering.
Regional backend service
Each external passthrough Network Load Balancer has one regional backend service that defines the behavior of the load balancer and how traffic is distributed to its backends. The name of the backend service is the name of the external passthrough Network Load Balancer shown in the Google Cloud console.
Each backend service defines the following backend parameters:
- Protocol. A backend service accepts traffic on the IP address and ports (if configured) specified by one or more regional external forwarding rules. The backend service passes packets to backend VMs while preserving the packet's source and destination IP addresses, protocol, and, if the protocol is port-based, the source and destination ports. Backend services used with external passthrough Network Load Balancers support the following protocol options: TCP, UDP, or UNSPECIFIED.
  Backend services with the UNSPECIFIED protocol can be used with any forwarding rule regardless of the forwarding rule protocol. Backend services with a specific protocol (TCP or UDP) can only be referenced by forwarding rules with the same protocol (TCP or UDP). Forwarding rules with the L3_DEFAULT protocol can only refer to backend services with the UNSPECIFIED protocol. See Forwarding rule protocol specification for a table with possible forwarding rule and backend service protocol combinations.
- Traffic distribution. A backend service allows traffic to be distributed according to the configured session affinity, connection tracking policy, and weighted load balancing settings. The backend service can also be configured to enable connection draining and designate failover backends for the load balancer. Most of these settings have default values that let you get started quickly. For more information, see Traffic distribution for external passthrough Network Load Balancers.
- Health check. A backend service must have an associated regional health check.
- Backends. Each backend service operates in a single region and distributes traffic to either instance groups or zonal NEGs in the same region. You can use either instance groups or zonal NEGs, but not a combination of both, as backends for an external passthrough Network Load Balancer:
  - If you choose instance groups, you can use unmanaged instance groups, zonal managed instance groups, regional managed instance groups, or a combination of instance group types.
  - If you choose zonal NEGs, you must use GCE_VM_IP zonal NEGs.
Each instance group or NEG backend has an associated VPC network, even if that instance group or NEG hasn't been connected to a backend service yet. For more information about how a network is associated with each type of backend, see Instance group backends and network interfaces and Zonal NEG backends and network interfaces.
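The following is a minimal sketch of attaching each backend type to a regional backend service; the backend service, instance group, and NEG names are placeholders, and a single backend service must use only one backend type.

```
# Add a regional managed instance group as a backend.
gcloud compute backend-services add-backend nlb-backend-service \
    --region=us-central1 \
    --instance-group=nlb-rmig \
    --instance-group-region=us-central1

# Or add a zonal NEG (GCE_VM_IP endpoints) as a backend to a backend service
# that uses only NEG backends.
gcloud compute backend-services add-backend nlb-neg-backend-service \
    --region=us-central1 \
    --network-endpoint-group=nlb-neg \
    --network-endpoint-group-zone=us-central1-a
```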
Instance groups
An external passthrough Network Load Balancer distributes connections among backend VMs contained within managed or unmanaged instance groups. Instance groups can be regional or zonal in scope.

Each instance group has an associated VPC network, even if that instance group hasn't been connected to a backend service yet. For more information about how a network is associated with instance groups, see Instance group backends and network interfaces.

The external passthrough Network Load Balancer is highly available by design. There are no special steps needed to make the load balancer highly available because the mechanism doesn't rely on a single device or VM instance. You only need to make sure that your backend VM instances are deployed to multiple zones so that the load balancer can work around potential issues in any given zone.

Regional managed instance groups. Use regional managed instance groups if you can deploy your software by using instance templates. Regional managed instance groups automatically distribute traffic among multiple zones, providing the best option to avoid potential issues in any given zone.

An example deployment using a regional managed instance group is shown here. The instance group has an instance template that defines how instances should be provisioned, and each group deploys instances within three zones of the us-central1 region.

External passthrough Network Load Balancer with a regional managed instance group

Zonal managed or unmanaged instance groups. Use zonal instance groups in different zones (in the same region) to protect against potential issues in any given zone.

An example deployment using zonal instance groups is shown here. This load balancer provides availability across two zones.
External passthrough Network Load Balancer with zonal instance groups
Zonal NEGs
An external passthrough Network Load Balancer distributes connections among GCE_VM_IP endpoints contained within zonal network endpoint groups. These endpoints must be located in the same region as the load balancer. For some recommended zonal NEG use cases, see Zonal network endpoint groups overview.

Endpoints in the NEG must be primary internal IPv4 addresses of VM network interfaces that are in the same subnet and zone as the zonal NEG. The primary internal IPv4 address from any network interface of a multi-NIC VM instance can be added to a NEG as long as it is in the NEG's subnet.

Zonal NEGs support both IPv4 and dual-stack (IPv4 and IPv6) VMs. For both IPv4 and dual-stack VMs, it is sufficient to specify only the VM instance when attaching an endpoint to a NEG. You don't need to specify the endpoint's IP address. The VM instance must always be in the same zone as the NEG.

Each zonal NEG has an associated VPC network and a subnet, even if that zonal NEG hasn't been connected to a backend service yet. For more information about how a network is associated with zonal NEGs, see Zonal NEG backends and network interfaces.
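The following is a minimal sketch of creating such a NEG; the NEG name, zone, network, and subnet are placeholders.

```
# Create a zonal NEG with GCE_VM_IP endpoints; the network and subnet
# can't be changed after creation.
gcloud compute network-endpoint-groups create nlb-neg \
    --zone=us-central1-a \
    --network-endpoint-type=gce-vm-ip \
    --network=lb-network \
    --subnet=lb-subnet
```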
Instance group backends and network interfaces
Within a given (managed or unmanaged) instance group, all VM instances must have their nic0 network interfaces in the same VPC network.

- For managed instance groups (MIGs), the VPC network for the instance group is defined in the instance template.
- For unmanaged instance groups, the VPC network for the instance group is defined as the VPC network used by the nic0 network interface of the first VM instance that you add to the unmanaged instance group.

With instance group backends, the load balancer only distributes packets to nic0 interfaces. If you want to receive traffic on a non-nic0 network interface (vNICs or Dynamic Network Interfaces), you must use zonal NEGs with GCE_VM_IP endpoints.

Note: Cloud Armor features such as advanced network DDoS protection and network edge security policies aren't supported for backend instance VMs using Dynamic NICs.

Zonal NEG backends and network interfaces
When you create a new zonal NEG with GCE_VM_IP endpoints, you must explicitly associate the NEG with a subnetwork of a VPC network before you can add any endpoints to the NEG. Neither the subnet nor the VPC network can be changed after the NEG is created.

Within a given NEG, each GCE_VM_IP endpoint actually represents a network interface. The network interface must be in the subnetwork associated with the NEG. From the perspective of a Compute Engine instance, the network interface can use any identifier. From the perspective of being an endpoint in a NEG, the network interface is identified by using its primary internal IPv4 address.

There are two ways to add a GCE_VM_IP endpoint to a NEG (both forms are shown in the sketch after this list):

- If you specify only a VM name (without any IP address) when adding an endpoint, Google Cloud requires that the VM has a network interface in the subnetwork associated with the NEG. The IP address that Google Cloud chooses for the endpoint is the primary internal IPv4 address of the VM's network interface in the subnetwork associated with the NEG.
- If you specify both a VM name and an IP address when adding an endpoint, the IP address that you provide must be a primary internal IPv4 address for one of the VM's network interfaces. That network interface must be in the subnetwork associated with the NEG. Note that specifying an IP address is redundant because there can only be a single network interface that is in the subnetwork associated with the NEG.
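A minimal sketch of both forms, using placeholder VM names and an illustrative IP address:

```
# Form 1: specify only the VM; Google Cloud picks the primary internal IPv4
# address of the VM's interface in the NEG's subnet.
gcloud compute network-endpoint-groups update nlb-neg \
    --zone=us-central1-a \
    --add-endpoint="instance=vm-a1"

# Form 2: specify the VM and an IP address; the address must be the primary
# internal IPv4 address of the VM's interface in the NEG's subnet.
gcloud compute network-endpoint-groups update nlb-neg \
    --zone=us-central1-a \
    --add-endpoint="instance=vm-a2,ip=10.1.2.3"
```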
A Dynamic NIC can't be deleted if the Dynamic NIC is an endpoint of a load-balanced network endpoint group.
Backend services and VPC networks
The backend service isn't associated with any VPC network; however, each backend instance group or zonal NEG is associated with a VPC network, as noted previously. As long as all backends are located in the same region and project, and as long as all backends are of the same type (instance groups or zonal NEGs), you can add backends that use either the same or different VPC networks.

To distribute packets to a non-nic0 interface, you must use zonal NEG backends (with GCE_VM_IP endpoints).
Dual-stack backends (IPv4 and IPv6)
If you want the load balancer to use dual-stack backends that handle both IPv4 and IPv6 traffic, note the following requirements:

- Backends must be configured in dual-stack subnets that are in the same region as the load balancer's IPv6 forwarding rule. For the backends, you can use a subnet with the ipv6-access-type set to either EXTERNAL or INTERNAL. If the backend subnet's ipv6-access-type is set to INTERNAL, you must use a different IPv6-only subnet or dual-stack subnet with ipv6-access-type set to EXTERNAL for the load balancer's external forwarding rule.
- Backends must be configured to be dual-stack with stack-type set to IPV4_IPV6. If the backend subnet's ipv6-access-type is set to EXTERNAL, you must also set the --ipv6-network-tier to PREMIUM. For instructions, see Create an instance template with IPv6 addresses; a minimal sketch follows this list.
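The sketch below shows the relevant flags on an instance template, under the assumption that a dual-stack subnet (placeholder name lb-dual-stack-subnet) with ipv6-access-type set to EXTERNAL already exists; the image family is illustrative.

```
# Dual-stack instance template: stack-type IPV4_IPV6 and Premium Tier IPv6
# (required when the subnet's ipv6-access-type is EXTERNAL).
gcloud compute instance-templates create dual-stack-template \
    --region=us-central1 \
    --subnet=lb-dual-stack-subnet \
    --stack-type=IPV4_IPV6 \
    --ipv6-network-tier=PREMIUM \
    --image-family=debian-12 \
    --image-project=debian-cloud
```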
IPv6-only backends
If you want the load balancer to use IPv6-only backends, note the following requirements:

- IPv6-only instances are supported in managed and unmanaged instance groups. No other backend type is supported.
- Backends must be configured in either dual-stack or IPv6-only subnets that are in the same region as the load balancer's IPv6 forwarding rule. For the backends, you can use a subnet with the ipv6-access-type set to either INTERNAL or EXTERNAL. If the backend subnet's ipv6-access-type is set to INTERNAL, you must use a different IPv6-only subnet with ipv6-access-type set to EXTERNAL for the load balancer's external forwarding rule.
- Backends must be configured to be IPv6-only with the VM stack-type set to IPV6_ONLY. If the backend subnet's ipv6-access-type is set to EXTERNAL, you must also set the --ipv6-network-tier to PREMIUM. For instructions, see Create an instance template with IPv6 addresses.

Note that IPv6-only VMs can be created under both dual-stack and IPv6-only subnets, but dual-stack VMs can't be created under IPv6-only subnets.
Health checks
Health check information is used to determine eligible backends for new connections, and you can control whether existing connections persist on unhealthy backends. For more information about eligible backends, see Traffic distribution for external passthrough Network Load Balancers.
Health check type, protocol, and port
The load balancer's backend service must reference a regional health check, using any supported health check protocol and port. The health check's protocol and port details don't have to match the load balancer's backend service protocol and forwarding rule IP address and port information.

Because all supported health check protocols rely on TCP, when you use an external passthrough Network Load Balancer to balance connections and traffic for other protocols, backend VMs must run a TCP-based server to answer health check probers. For example, you can use an HTTP health check combined with running an HTTP server on each backend VM. In this example, your scripts or software are responsible for configuring the HTTP server so that it returns status 200 only when the software listening to load-balanced connections is operational.
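As a minimal sketch, a regional HTTP health check like the following could back that pattern; the name, port, and request path (/healthz) are placeholders for whatever your HTTP server actually exposes.

```
# Regional HTTP health check; the HTTP server on each backend VM should
# return 200 only when the load-balanced software is healthy.
gcloud compute health-checks create http nlb-http-hc \
    --region=us-central1 \
    --port=80 \
    --request-path=/healthz
```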
For more information about supported health check protocols and ports, see Health check categories, protocols, and ports and How health checks work.
Health check packets
For instance group backends, health check probers send packets to the nic0 network interface of each backend VM. For GCE_VM_IP zonal NEG backends, health check probers send packets to the network interface in the VPC network of the NEG. Health check packets have the following characteristics:

- A source IP address from the relevant health check probe IP range.
- A destination IP address that matches each IP address of a forwarding rule that references the external passthrough Network Load Balancer's backend service.
- A destination port that matches the port number you specify in the health check.

Software running on the backend VMs must bind to and listen on the relevant IP address and port combinations. The simplest way to accomplish this is to configure the software to bind to and listen on the relevant ports of any of the VM's IP addresses (0.0.0.0). For more information, see Destination for probe packets.
Firewall rules
Because external passthrough Network Load Balancers are passthrough load balancers, you control access to the load balancer's backends using Google Cloud firewall rules. You must create ingress allow firewall rules or an ingress allow hierarchical firewall policy to permit health checks and the traffic that you're load balancing.

Forwarding rules and ingress allow firewall rules or hierarchical firewall policies work together in the following way: a forwarding rule specifies the protocol and, if defined, port requirements that a packet must meet to be forwarded to a backend VM. Ingress allow firewall rules control whether the forwarded packets are delivered to the VM or dropped. All VPC networks have an implied deny ingress firewall rule that blocks incoming packets from any source. The Google Cloud default VPC network includes a limited set of pre-populated ingress allow firewall rules.

- To accept traffic from any IP address on the internet, you must create an ingress allow firewall rule with the 0.0.0.0/0 source range. To only allow traffic from certain IP address ranges, use more restrictive source ranges.
- As a security best practice, your ingress allow firewall rules should only permit the IP protocols and ports that you need. Restricting the protocol (and, if possible, port) configuration is especially important when using forwarding rules whose protocol is set to L3_DEFAULT. L3_DEFAULT forwarding rules forward packets for all supported IP protocols (on all ports if the protocol and packet have port information).
- External passthrough Network Load Balancers use Google Cloud health checks. Therefore, you must always allow traffic from the health check IP address ranges. These ingress allow firewall rules can be made specific to the protocol and ports of the load balancer's health check, as sketched after this list.
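A minimal sketch of both rules follows; the network name, target tags, ports, and source ranges are placeholders, and you should verify the probe ranges against the health check IP ranges documentation before relying on them.

```
# Allow the load-balanced traffic itself (here, TCP 80 and 443 from anywhere;
# narrow the source ranges where possible).
gcloud compute firewall-rules create allow-nlb-traffic \
    --network=lb-network \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:80,tcp:443 \
    --source-ranges=0.0.0.0/0 \
    --target-tags=nlb-backends

# Allow Google Cloud health check probes to reach the backends; restrict the
# rule to the health check's protocol and port.
gcloud compute firewall-rules create allow-nlb-health-checks \
    --network=lb-network \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:80 \
    --source-ranges=35.191.0.0/16,130.211.0.0/22 \
    --target-tags=nlb-backends
```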
IP addresses for request and return packets
When a backend VM receives a load-balanced packet from a client, the packet's source and destination are as follows:
- Source: the external IP address associated with a Google Cloud VM or internet-routable IP address of a system connecting to the load balancer.
- Destination: the IP address of the load balancer's forwarding rule.
Because the load balancer is a pass-through load balancer (not a proxy), packets arrive bearing the destination IP address of the load balancer's forwarding rule. Configure software running on backend VMs to do the following:

- Listen on (bind to) the load balancer's forwarding rule IP address or any IP address (0.0.0.0 or ::)
- If the load balancer forwarding rule's protocol supports ports: listen on (bind to) a port that's included in the load balancer's forwarding rule
Return packets are sent directly from the load balancer's backend VMs to the client. The return packet's source and destination IP addresses depend on the protocol:

- TCP is connection-oriented, so backend VMs must reply with packets whose source IP addresses match the forwarding rule's IP address so that the client can associate the response packets with the appropriate TCP connection.
- UDP, ESP, GRE, and ICMP are connectionless. Backend VMs can send response packets whose source IP addresses either match the forwarding rule's IP address or match any assigned external IP address for the VM. Practically speaking, most clients expect the response to come from the same IP address to which they sent packets.
The following table summarizes sources and destinations for response packets:
| Traffic type | Source | Destination |
|---|---|---|
| TCP | The IP address of the load balancer's forwarding rule | The requesting packet's source |
| UDP, ESP, GRE, ICMP | For most use cases, the IP address of the load balancer's forwarding rule1 | The requesting packet's source. |
1 When a VM has an external IP address or when you are using Cloud NAT, it is also possible to set the response packet's source IP address to the VM NIC's primary internal IPv4 address. Google Cloud or Cloud NAT changes the response packet's source IP address to either the NIC's external IPv4 address or a Cloud NAT external IPv4 address in order to send the response packet to the client's external IP address. Not using the forwarding rule's IP address as a source is an advanced scenario because the client receives a response packet from an external IP address that doesn't match the IP address to which it sent a request packet.
Return path
External passthrough Network Load Balancers use special routes outside of your VPC network to direct incoming requests and health check probes to each backend VM.

The load balancer preserves the source IP addresses of packets. Responses from the backend VMs go directly to the clients, not back through the load balancer. The industry term for this is direct server return.
Outbound internet connectivity from backends
VM instances configured as an external passthrough Network Load Balancer's backend endpoints can initiate connections to the internet using the load balancer's forwarding rule IP address as the source IP address of the outbound connection.

Generally, a VM instance always uses its own external IP address or Cloud NAT to initiate connections. You use the forwarding rule IP address to initiate connections from backend endpoints only in special scenarios, such as when you need VM instances to originate and receive connections at the same external IP address, and you also need the backend redundancy provided by the external passthrough Network Load Balancer for inbound connections.

Outbound packets sent from backend VMs directly to the internet have no restrictions on traffic protocols and ports. Even if an outbound packet is using the forwarding rule's IP address as the source, the packet's protocol and source port don't have to match the forwarding rule's protocol and port specification. However, inbound response packets must match the forwarding rule's IP address, protocol, and destination port. For more information, see Paths for external passthrough Network Load Balancers and external protocol forwarding.

Additionally, any responses to the VM's outbound connections are subject to load balancing, just like all the other incoming packets meant for the load balancer. This means that responses might not arrive on the same backend VM that initiated the connection to the internet. If the outbound connections and load-balanced inbound connections share common protocols and ports, then you can try one of the following suggestions:

- Synchronize outbound connection state across backend VMs, so that connections can be served even if responses arrive at a backend VM other than the one that initiated the connection.
- Use a failover configuration, with a single primary VM and a single backup VM. Then, the active backend VM that initiates the outbound connections always receives the response packets.

This path to internet connectivity from an external passthrough Network Load Balancer's backends is the default intended behavior according to Google Cloud's implied firewall rules. However, if you have security concerns about leaving this path open, you can use targeted egress firewall rules to block unsolicited outbound traffic to the internet.
Shared VPC architecture
Except for the IP address, all of the components of an external passthrough Network Load Balancer must exist in the same project. The following table summarizes Shared VPC components for external passthrough Network Load Balancers:
| IP address | Forwarding rule | Backend components |
|---|---|---|
| A regional external IP address must be defined in either the same project as the load balancer or the Shared VPC host project. | A regional external forwarding rule must be defined in the same project as the instances in the backend service. | The regional backend service must be defined in the same project and same region where the backends (instance group or zonal NEG) exist. Health checks associated with the backend service must be defined in the same project and the same region as the backend service. |
Traffic distribution
External passthrough Network Load Balancers support a variety of traffic distribution customization options, including session affinity, connection tracking, weighted load balancing, and failover. For details about how external passthrough Network Load Balancers distribute traffic, and how these options interact with each other, see Traffic distribution for external passthrough Network Load Balancers.
Limitations
You cannot use the Google Cloud console to do the following tasks:
- Create or modify an external passthrough Network Load Balancer whose forwarding rule uses the L3_DEFAULT protocol.
- Create or modify an external passthrough Network Load Balancer whose backend service protocol is set to UNSPECIFIED.
- Create or modify an external passthrough Network Load Balancer that configures a connection tracking policy.
- Create or modify source IP-based traffic steering for a forwarding rule.

Use either the Google Cloud CLI or the REST API instead.

External passthrough Network Load Balancers don't support VPC Network Peering.
Pricing
For pricing information, see Pricing.
What's next
- To configure an external passthrough Network Load Balancer with a backend service for TCP or UDP traffic only (supporting IPv4 and IPv6 traffic), see Set up an external passthrough Network Load Balancer with a backend service.
- To configure an external passthrough Network Load Balancer for multiple IP protocols (supporting IPv4 and IPv6 traffic), see Set up an external passthrough Network Load Balancer for multiple IP protocols.
- To configure an external passthrough Network Load Balancer with a zonal NEG backend, see Set up an external passthrough Network Load Balancer with zonal NEGs.
- To configure an external passthrough Network Load Balancer with a target pool, see Set up an external passthrough Network Load Balancer with a target pool.
- To learn how to transition an external passthrough Network Load Balancer from a target pool backend to a regional backend service, see Transitioning an external passthrough Network Load Balancer from a target pool to a backend service.
- To use prebuilt Terraform templates to streamline the setup and management of Google Cloud's networking infrastructure, explore the Simplified Cloud Networking Configuration Solutions GitHub repository.