Passthrough Network Load Balancer overview

Passthrough Network Load Balancers are regional Layer 4 passthrough load balancers. These load balancers distribute traffic among backends in the same region as the load balancer. As the name suggests, passthrough Network Load Balancers are not proxies. Load-balanced packets are received by backend VMs with the packet's source and destination IP addresses, protocol, and, if the protocol is port-based, the source and destination ports unchanged. Load-balanced connections are terminated at the backends. Responses from the backend VMs go directly to the clients, not back through the load balancer. The industry term for this is direct server return (DSR).

The following diagram shows a sample passthrough Network Load Balancer architecture.

Figure: Passthrough Network Load Balancer architecture.

You'd use a passthrough Network Load Balancer in the following circumstances:

  • You need to forward original client packets to the backends un-proxied, for example, because you need the client source IP address to be preserved.
  • You need to load balance TCP, UDP, ESP, GRE, ICMP, or ICMPv6 traffic, or you need to load balance a TCP port that isn't supported by other load balancers.
  • It is acceptable to have SSL traffic decrypted by your backends instead of by the load balancer. The passthrough Network Load Balancer cannot perform this task. When the backends decrypt SSL traffic, there is a greater CPU burden on the VMs.
  • You are able to manage the backend VMs' SSL certificates yourself. Google-managed SSL certificates are only available for proxy load balancers.
  • You have an existing setup that uses a passthrough load balancer, and you want to migrate it without changes.

Passthrough Network Load Balancers are available in the following modes of deployment.

External passthrough Network Load Balancer

Load balances traffic that comes from clients on the internet.

  • Scope: Regional
  • Traffic type: TCP, UDP, ESP, GRE, ICMP, and ICMPv6
  • Network service tier: Premium or Standard
  • Load-balancing scheme: EXTERNAL
  • IP address: IPv4 and IPv6
  • Frontend ports: A single port, a range of ports, or all ports
  • Links: Architecture details

Internal passthrough Network Load Balancer

Load balances traffic within your VPC network or networks connected to your VPC network.

  • Scope: Regional
  • Traffic type: TCP, UDP, ICMP, ICMPv6, SCTP, ESP, AH, and GRE
  • Network service tier: Premium
  • Load-balancing scheme: INTERNAL
  • IP address: IPv4 and IPv6
  • Frontend ports: A single port, a range of ports, or all ports
  • Links: Architecture details

The load-balancing scheme is an attribute on the forwarding rule and the backend service of a load balancer and indicates whether the load balancer can be used for internal or external traffic.
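As a quick way to see this attribute on existing resources, the following gcloud commands are a minimal sketch; the resource names and region (my-forwarding-rule, my-backend-service, us-central1) are placeholders rather than values from this page:

# Show the load-balancing scheme (EXTERNAL or INTERNAL) recorded on a
# regional forwarding rule; replace the names and region with your own.
gcloud compute forwarding-rules describe my-forwarding-rule \
    --region=us-central1 \
    --format="value(loadBalancingScheme)"

# The same attribute also appears on the regional backend service.
gcloud compute backend-services describe my-backend-service \
    --region=us-central1 \
    --format="value(loadBalancingScheme)"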

External passthrough Network Load Balancers

External passthrough Network Load Balancers are regional, Layer 4 load balancers that distribute external traffic among backends (instance groups or network endpoint groups (NEGs)) in the same region as the load balancer. These backends must be in the same region and project but can be in different VPC networks. These load balancers are built on Maglev and the Andromeda network virtualization stack.

External passthrough Network Load Balancers can receive traffic from:

  • Any client on the internet
  • Google Cloud VMs with external IPs
  • Google Cloud VMs that have internet access through Cloud NAT or instance-based NAT

External passthrough Network Load Balancers are not proxies. The load balancer itself doesn't terminate user connections. Load-balanced packets are sent to the backend VMs with their source and destination IP addresses, protocol, and, if applicable, ports, unchanged. The backend VMs then terminate user connections. Responses from the backend VMs go directly to the clients, not back through the load balancer. This process is known as direct server return (DSR).

The following diagram shows an external passthrough Network Load Balancer that is configured in the us-central1 region with its backends located in the same region. Traffic is routed from a user in Singapore to the load balancer in us-central1 (forwarding rule IP address 120.1.1.1).

If the IP address of the load balancer is in the Premium Tier, the traffic traverses Google's high-quality global backbone with the intent that packets enter and exit a Google edge peering point as close as possible to the client. If the IP address of the load balancer is in the Standard Tier, the traffic enters and exits the Google network at a peering point closest to the Google Cloud region where the load balancer is configured.

Figure: External passthrough Network Load Balancer traffic routing in Premium and Standard Network Tiers.
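If you want the Standard Tier behavior described here, one approach is to reserve the forwarding rule's IP address in that tier. The following gcloud command is a minimal sketch; the address name and region are placeholders:

# Reserve a regional external IPv4 address in the Standard network tier.
# Omit --network-tier, or set it to PREMIUM, to use the Premium Tier instead.
gcloud compute addresses create nlb-standard-ip \
    --region=us-central1 \
    --network-tier=STANDARD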

The architecture of an external passthrough Network Load Balancer depends on whether you use a backend service or a target pool to set up the backend.

Backend service-based load balancers

External passthrough Network Load Balancers can be created with a regional backend service that defines the behavior of the load balancer and how it distributes traffic to its backends. Backend service-based external passthrough Network Load Balancers support IPv4 and IPv6 traffic, multiple protocols (TCP, UDP, ESP, GRE, ICMP, and ICMPv6), managed and unmanaged instance group backends, and zonal network endpoint group (NEG) backends with GCE_VM_IP endpoints. They also support fine-grained traffic distribution controls and failover policies, and they let you use non-legacy health checks that match the type of traffic (TCP, SSL, HTTP, HTTPS, or HTTP/2) that you are distributing.
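The following gcloud commands are a minimal sketch of a backend service-based external passthrough Network Load Balancer; the names, region, zone, and instance group (nlb-health-check, nlb-backend-service, web-ig, and so on) are illustrative placeholders rather than values from this page:

# Regional TCP health check used by the backend service.
gcloud compute health-checks create tcp nlb-health-check \
    --region=us-central1 \
    --port=80

# Regional backend service with the EXTERNAL load-balancing scheme.
gcloud compute backend-services create nlb-backend-service \
    --region=us-central1 \
    --load-balancing-scheme=EXTERNAL \
    --protocol=TCP \
    --health-checks=nlb-health-check \
    --health-checks-region=us-central1

# Attach an existing instance group as a backend.
gcloud compute backend-services add-backend nlb-backend-service \
    --region=us-central1 \
    --instance-group=web-ig \
    --instance-group-zone=us-central1-a

# External forwarding rule that sends TCP port 80 traffic to the backend service.
gcloud compute forwarding-rules create nlb-forwarding-rule \
    --region=us-central1 \
    --load-balancing-scheme=EXTERNAL \
    --ip-protocol=TCP \
    --ports=80 \
    --backend-service=nlb-backend-service \
    --backend-service-region=us-central1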

Load balancing to Google Kubernetes Engine (GKE) is handled by using the built-in GKE Service controller. In addition, backend service-based external passthrough Network Load Balancers are supported with App Hub.

For architecture details and more information about supported features, see Backend service-based external passthrough Network Load Balancer overview.

You can transition an existing target pool-based load balancer to use a backend service instead. For instructions, see Migrate load balancers from target pools to backend services.

Target pool-based load balancers

A target pool is the legacy backend supported with external passthrough Network Load Balancers. A target pool defines a group of instances that should receive incoming traffic from the load balancer.

Target pool-based load balancers support either TCP or UDP traffic. Forwarding rules for target pool-based external passthrough Network Load Balancers only support external IPv4 addresses.
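For comparison, a target pool-based setup can be sketched with gcloud as follows; the pool, instance, and forwarding rule names are placeholders, and the instances are assumed to already exist:

# Legacy target pool with two existing instances as members.
gcloud compute target-pools create legacy-pool \
    --region=us-central1

gcloud compute target-pools add-instances legacy-pool \
    --region=us-central1 \
    --instances=vm-1,vm-2 \
    --instances-zone=us-central1-a

# External forwarding rule that sends TCP port 80 traffic to the target pool.
gcloud compute forwarding-rules create legacy-nlb-rule \
    --region=us-central1 \
    --ip-protocol=TCP \
    --ports=80 \
    --target-pool=legacy-pool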

For architecture details, see Target pool-based external passthrough Network Load Balancer overview.

Internal passthrough Network Load Balancers

Internal passthrough Network Load Balancers distribute traffic among internal virtual machine (VM) instances in the same region in a Virtual Private Cloud (VPC) network. They enable you to run and scale your services behind an internal IP address that is accessible only to systems in the same VPC network or systems connected to your VPC network.

These load balancers are built on the Andromeda network virtualization stack. They support only regional backends so that you can autoscale across a region, protecting your service from zonal failures. Additionally, this load balancer can only be configured in the Premium Tier.
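The following gcloud commands are a minimal sketch of an internal passthrough Network Load Balancer with an instance group backend; the names, network, subnet, region, and zone are placeholders rather than values from this page:

# Regional TCP health check for the internal backend service.
gcloud compute health-checks create tcp ilb-health-check \
    --region=us-central1 \
    --port=80

# Regional backend service with the INTERNAL load-balancing scheme.
gcloud compute backend-services create ilb-backend-service \
    --region=us-central1 \
    --load-balancing-scheme=INTERNAL \
    --protocol=TCP \
    --health-checks=ilb-health-check \
    --health-checks-region=us-central1

gcloud compute backend-services add-backend ilb-backend-service \
    --region=us-central1 \
    --instance-group=app-ig \
    --instance-group-zone=us-central1-a

# Internal forwarding rule with an internal IP address in the given subnet.
gcloud compute forwarding-rules create ilb-forwarding-rule \
    --region=us-central1 \
    --load-balancing-scheme=INTERNAL \
    --ip-protocol=TCP \
    --ports=80 \
    --network=my-vpc \
    --subnet=my-subnet \
    --backend-service=ilb-backend-service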

Internal passthrough Network Load Balancers address many use cases. The following sections showcase a few high-level examples.

Access to connected networks

You can access an internal passthrough Network Load Balancer in your VPC network from a connected network by using the following:

  • VPC Network Peering
  • Cloud VPN and Cloud Interconnect

For detailed examples, see Internal passthrough Network Load Balancers and connected networks.

Three-tier web service

You can use internal passthrough Network Load Balancers in conjunction with other load balancers. For example, if you incorporate external Application Load Balancers, the external Application Load Balancer is the web tier and relies on services behind the internal passthrough Network Load Balancer.

The following diagram depicts an example of a three-tier configuration that uses external Application Load Balancers and internal passthrough Network Load Balancers:

Figure: Three-tier web app with an external Application Load Balancer and an internal passthrough Network Load Balancer.

Three-tier web service with global access

If you enable global access, your web-tier VMs can be in another region, as shown in the following diagram.

This multi-tier application example shows the following:

  • A globally available internet-facing web tier that load balances traffic with an external Application Load Balancer.
  • An internal backend load-balanced database tier in the us-east1 region that is accessed by the global web tier.
  • A client VM that is part of the web tier in the europe-west1 region that accesses the internal load-balanced database tier located in us-east1.
Figure: Three-tier web app with an external Application Load Balancer, global access, and an internal passthrough Network Load Balancer.
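Global access is enabled on the internal forwarding rule. As a minimal sketch, assuming a forwarding rule named db-ilb-forwarding-rule in us-east1 (a placeholder name), the update could look like this:

# Allow clients in any region of the VPC network, and in connected
# networks, to reach this internal forwarding rule.
gcloud compute forwarding-rules update db-ilb-forwarding-rule \
    --region=us-east1 \
    --allow-global-access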

Using internal passthrough Network Load Balancers as next hops

You can use an internal passthrough Network Load Balancer as the next gateway to which packets are forwarded along the path to their final destination. To do this, you set the load balancer as the next hop in a static route.
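As a minimal sketch, assuming an internal forwarding rule named ilb-forwarding-rule in us-central1 and a network named my-vpc (placeholder names), a static route with the load balancer as its next hop could be created like this:

# Static route that sends traffic for 10.50.0.0/16 (an example destination
# range) to the internal load balancer's forwarding rule as the next hop.
gcloud compute routes create route-via-ilb \
    --network=my-vpc \
    --destination-range=10.50.0.0/16 \
    --next-hop-ilb=ilb-forwarding-rule \
    --next-hop-ilb-region=us-central1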

A next hop internal passthrough Network Load Balancer processes packets for all supported traffic regardless of the forwarding rule and backend service protocols that you specified when you created the load balancer.

This feature supports using either IPv4 or IPv6 addresses.

Here is a sample architecture using an internal passthrough Network Load Balancer as the next hop to a NAT gateway. You can route traffic to your firewall or gateway virtual appliance backends through an internal passthrough Network Load Balancer.

Figure: NAT use case.

Additional use cases include:

  • Hub and spoke: Exchanging next-hop routes by using VPC Network Peering. You can configure a hub-and-spoke topology with your next-hop firewall virtual appliances located in the hub VPC network. Routes that use the load balancer as a next hop in the hub VPC network can be usable in each spoke network.
  • Load balancing to multiple network interfaces (nic0 through nic7) on the backend VMs.

For more information about these use cases, see Internal passthrough Network Load Balancers as next hops.

Internal passthrough Network Load Balancers and GKE

For details about how GKE creates internal passthrough Network Load Balancers for Services, see LoadBalancer Service concepts in the GKE documentation.
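As a rough sketch of what the GKE Service controller consumes, the following manifest requests an internal passthrough Network Load Balancer for a Service of type LoadBalancer by using the annotation that recent GKE versions document for internal load balancing; the Service name, selector, and ports are placeholders, and a Deployment labeled app: my-app is assumed to already exist in the cluster:

# Apply a LoadBalancer Service annotated to request an internal
# passthrough Network Load Balancer (placeholder names throughout).
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-app-ilb
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
EOF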

Internal passthrough Network Load Balancers and App Hub

Resources used by internal passthrough Network Load Balancers can be designated as services in App Hub applications.

Private Service Connect port mapping

Private Service Connect port mapping services use port mapping NEGs and have configurations similar to internal passthrough Network Load Balancers. However, port mapping services don't load balance traffic. Instead, Private Service Connect forwards traffic to service ports on service producer VMs based on the client destination port that receives the traffic.

If you send traffic to a port mapping service from the same VPC network as the service, the packets are dropped.

For more information, see About Private Service Connect port mapping.

