Internal passthrough Network Load Balancer overview

An internal passthrough Network Load Balancer is a regional load balancer that is built on the Andromeda network virtualization stack.

Internal passthrough Network Load Balancers distribute traffic among internal virtual machine (VM) instances in the same region in a Virtual Private Cloud (VPC) network. They enable you to run and scale your services behind an internal IP address that is accessible only to systems in the same VPC network or systems connected to your VPC network.

Use an internal passthrough Network Load Balancer in the following circumstances:

  • You need a high-performance, pass-through Layer 4 load balancer for TCP, UDP, ICMP, ICMPv6, SCTP, ESP, AH, and GRE protocols.
  • If serving traffic through TLS (SSL), it is acceptable to have SSL traffic terminated by your backends instead of by the load balancer. The internal passthrough Network Load Balancer cannot terminate SSL traffic.
  • You need to forward the original packets unproxied. For example, if you need the client source IP address to be preserved.
  • You have an existing setup that uses a pass-through load balancer, and you want to migrate it without changes.

Internal passthrough Network Load Balancers address many use cases. For a few high-level examples, see Passthrough Network Load Balancer overview.

How internal passthrough Network Load Balancers work

An internal passthrough Network Load Balancer has a frontend (the forwarding rule) and a backend (the backend service). You can use either instance groups or GCE_VM_IP zonal NEGs as backends on the backend service. This example shows instance group backends.

Figure: High-level internal passthrough Network Load Balancer example.

Unlike a proxy load balancer, an internal passthrough Network Load Balancer doesn't terminate connections from clients and then open new connections to backends. Instead, an internal passthrough Network Load Balancer routes connections directly from clients to eligible backends, without a proxy between the clients and backends. Responses from each selected backend are delivered using direct server return. For more information, see Traffic distribution for internal passthrough Network Load Balancers and IP addresses for request and return packets.

The load balancer monitors backend health by using health check probes. For more information, see the Health check section.

The Google Cloud Linux guest environment, Windows guest environment, or an equivalent process configures each backend VM with the IP address of the load balancer. For VMs created from Google Cloud images, the Guest agent (formerly, the Windows Guest Environment or Linux Guest Environment) installs the local route for the load balancer's IP address. Google Kubernetes Engine instances based on Container-Optimized OS implement this by using iptables instead.
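
If you want to confirm that a backend VM has accepted the load balancer's address, you can inspect the local routing table on the VM. This is a minimal sketch; 10.10.10.9 is a placeholder for your forwarding rule's IP address, and the exact route entry depends on the guest environment version.

```
# Run on a backend VM (Linux). The guest agent typically installs the
# load balancer's IP address as a local route; 10.10.10.9 is a
# placeholder for your forwarding rule's address.
ip route show table local | grep 10.10.10.9
```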

Google Cloud virtual networking manages traffic delivery and scaling as appropriate.

Protocols, scheme, and scope

Each internal passthrough Network Load Balancer supports the following:

  • One backend service with load balancing scheme INTERNAL and a supported protocol. For more information, see backend service.
  • Backend VMs specified as either instance groups or GCE_VM_IP zonal NEGs, but not a combination of both.
  • Support for IPv4 and IPv6 traffic when using instance group backends. Zonal network endpoint groups (NEGs) with GCE_VM_IP endpoints only support IPv4 traffic.
  • One or more forwarding rules, each using either the TCP, UDP, or L3_DEFAULT protocol matching the backend service's protocol.
  • Each forwarding rule with its own unique IP address or multiple forwarding rules that share a common IP address.
  • Each forwarding rule with up to five ports or all ports.
  • If global access is enabled, clients in any region.
  • If global access is disabled, clients in the same region as the load balancer.
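
As an illustration of how these pieces fit together, the following gcloud sketch creates a TCP internal passthrough Network Load Balancer with an instance group backend. All names, the region, and the IP address are placeholders; adjust them for your environment.

```
# Regional TCP health check (placeholder name and region).
gcloud compute health-checks create tcp hc-tcp-80 \
    --region=us-west1 --port=80

# Internal backend service that uses the health check.
gcloud compute backend-services create be-ilb \
    --load-balancing-scheme=internal --protocol=tcp \
    --region=us-west1 \
    --health-checks=hc-tcp-80 --health-checks-region=us-west1

# Attach an existing instance group as a backend.
gcloud compute backend-services add-backend be-ilb \
    --region=us-west1 \
    --instance-group=ig-a --instance-group-zone=us-west1-a

# Forwarding rule (frontend) with an internal IPv4 address.
gcloud compute forwarding-rules create fr-ilb \
    --region=us-west1 --load-balancing-scheme=internal \
    --network=lb-network --subnet=lb-subnet \
    --address=10.10.10.9 --ip-protocol=TCP --ports=80 \
    --backend-service=be-ilb --backend-service-region=us-west1
```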

Client access

By default, the load balancer only supports clients that are in the same region as the load balancer. Clients can be in the same network as the load balancer or in a VPC network connected by using VPC Network Peering. You can enable global access to allow clients from any region to access your internal passthrough Network Load Balancer.

Figure: Internal passthrough Network Load Balancer with global access.

The following summarizes client access.

Global access disabled:

  • Clients must be in the same region as the load balancer. They also must be in the same VPC network as the load balancer or in a VPC network that is connected to the load balancer's VPC network by using VPC Network Peering.
  • On-premises clients can access the load balancer through Cloud VPN tunnels or VLAN attachments. These tunnels or attachments must be in the same region as the load balancer.

Global access enabled:

  • Clients can be in any region. They still must be in the same VPC network as the load balancer or in a VPC network that's connected to the load balancer's VPC network by using VPC Network Peering.
  • On-premises clients can access the load balancer through Cloud VPN tunnels or VLAN attachments. These tunnels or attachments can be in any region.

IP addresses for request and return packets

When a backend VM receives a load-balanced packet from a client, the packet's source and destination are as follows:

  • Source: the client's internal IPv4 address, internal IPv6 address, or an IPv4 address from one of the client's alias IPv4 ranges.
  • Destination: the IP address of the load balancer's forwarding rule. The forwarding rule uses either a single internal IPv4 address or an internal IPv6 address range.

Because the load balancer is a pass-through load balancer (not a proxy), packets arrive bearing the destination IP address of the load balancer's forwarding rule. Configure software running on backend VMs to do the following:

  • Listen on (bind to) the load balancer's forwarding rule IP address or any IP address (0.0.0.0 or ::).
  • If the load balancer forwarding rule's protocol supports ports: Listen on (bind to) a port that's included in the load balancer's forwarding rule.
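
For example, a simple way to satisfy both requirements is to run a server that listens on all addresses. The following one-liner is only an illustration; port 80 is a placeholder for whatever port your forwarding rule uses.

```
# Listens on 0.0.0.0, which covers the VM's own addresses and the load
# balancer's forwarding rule IP. Requires root for ports below 1024.
sudo python3 -m http.server 80 --bind 0.0.0.0
```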

Return packets are sent directly from the load balancer's backend VMs to the client. The return packet's source and destination IP addresses depend on the protocol:

  • TCP is connection-oriented, so backend VMs must reply with packets whose source IP addresses match the forwarding rule's IP address so that the client can associate the response packets with the appropriate TCP connection.
  • UDP is connectionless, so backend VMs can send response packets whose source IP addresses either match the forwarding rule's IP address or match any assigned IP address for the VM. Practically speaking, most clients expect the response to come from the same IP address to which they sent packets.

The following table summarizes sources and destinations for response packets:

Traffic type | Source | Destination
TCP | The IP address of the load balancer's forwarding rule | The requesting packet's source
UDP | For most use cases, the IP address of the load balancer's forwarding rule¹ | The requesting packet's source

¹ It is possible to set the response packet's source to the VM NIC's primary internal IPv4 address or an alias IP address range. If the VM has IP forwarding enabled, arbitrary IP address sources can also be used. Not using the forwarding rule's IP address as a source is an advanced scenario because the client receives a response packet from an internal IP address that doesn't match the IP address to which it sent a request packet.
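
If you want UDP responses to leave the backend with the forwarding rule's IP address as their source, one straightforward approach is to bind the UDP socket to that address explicitly. This is a minimal sketch; 10.10.10.9 and port 5000 are placeholders.

```
# Minimal UDP echo bound to the forwarding rule's (placeholder) address,
# so replies are sourced from that address rather than the VM's own IP.
python3 - <<'EOF'
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("10.10.10.9", 5000))
while True:
    data, addr = sock.recvfrom(2048)
    sock.sendto(data, addr)
EOF
```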

Architecture

An internal passthrough Network Load Balancer with multiple backends distributes connections among all of those backends. For information about the distribution method and its configuration options, see traffic distribution.

You can use either instance groups or zonal NEGs, but not a combination of both,as backends for an internal passthrough Network Load Balancer:

  • If you choose instance groups, you can use unmanaged instance groups, zonal managed instance groups, regional managed instance groups, or a combination of instance group types.
  • If you choose zonal NEGs, you must use GCE_VM_IP zonal NEGs.

High availability describes how to design an internal load balancer that isn't dependent on a single zone.

Instances that participate as backend VMs for internal passthrough Network Load Balancers must be running the appropriate Linux or Windows guest environment or other processes that provide equivalent functionality. This guest environment must be able to contact the metadata server (metadata.google.internal, 169.254.169.254) to read instance metadata so that it can generate local routes to accept traffic sent to the load balancer's internal IP address.

The earlier high-level example shows traffic distribution among VMs located in two separate instance groups. Traffic sent from the client instance to the IP address of the load balancer (10.10.10.9) is distributed among backend VMs in either instance group. Responses sent from any of the serving backend VMs are delivered directly to the client VM.

You can use an internal passthrough Network Load Balancer with either a custom mode or auto mode VPC network. You can also create internal passthrough Network Load Balancers with an existing legacy network.

Internal IP address

Internal passthrough Network Load Balancers support IPv4-only, dual-stack, and IPv6-only subnets. For more information about each one, see Types of subnets.

An internal passthrough Network Load Balancer requires at least one forwarding rule. The forwarding rule references the internal IP address:

  • For IPv4 traffic, the forwarding rule references an IPv4 address from the primary IPv4 subnet range.
  • For IPv6 traffic, the forwarding rule references a /96 range of internal IPv6 addresses from the subnet's /64 internal IPv6 address range. The subnet must be either a dual-stack or single-stack IPv6-only subnet with an internal IPv6 address range (ipv6-access-type set to INTERNAL). The IPv6 address range can be a reserved static address or an ephemeral address.

    For more information about IPv6 support, see the VPC documentation about IPv6 subnet ranges and IPv6 addresses.
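
For example, a dual-stack subnet that can host an IPv6 forwarding rule might be created as follows. The network name, subnet name, region, and IPv4 range are placeholders.

```
# Dual-stack subnet with an internal IPv6 range, suitable for an
# internal passthrough Network Load Balancer's IPv6 forwarding rule.
gcloud compute networks subnets create lb-subnet-dual \
    --network=lb-network --region=us-west1 \
    --range=10.10.10.0/24 \
    --stack-type=IPV4_IPV6 --ipv6-access-type=INTERNAL
```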

Firewall configuration

Internal passthrough Network Load Balancers require hierarchical firewall policies and VPC firewall rules that allow ingress traffic from health check probe ranges and from your clients to reach the backend VMs. Google Cloud doesn't create these rules for you.
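
As a sketch of the kind of rules involved, the following commands allow health check probes and client traffic to reach tagged backend VMs. The network name, tag, client source range, and ports are placeholders; 130.211.0.0/22 and 35.191.0.0/16 are the documented Google Cloud health check probe ranges at the time of writing.

```
# Allow health check probes to reach the backend VMs.
gcloud compute firewall-rules create fw-allow-health-check \
    --network=lb-network --direction=ingress --action=allow \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=lb-backend --rules=tcp

# Allow client traffic to the load-balanced port.
gcloud compute firewall-rules create fw-allow-lb-clients \
    --network=lb-network --direction=ingress --action=allow \
    --source-ranges=10.0.0.0/8 \
    --target-tags=lb-backend --rules=tcp:80
```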

For more information, see Configuring firewall rules.

Forwarding rules

A forwarding rule specifies the protocol and ports on which the load balancer accepts traffic. Because internal passthrough Network Load Balancers aren't proxies, they pass traffic to backends on the same protocol and port.

An internal passthrough Network Load Balancer requires at least one internal forwarding rule. You can define multiple forwarding rules for the same load balancer.

If you want the load balancer to handle both IPv4 and IPv6 traffic, create two forwarding rules: one rule for IPv4 traffic that points to IPv4 (or dual-stack) backends, and one rule for IPv6 traffic that points only to dual-stack backends. It's possible to have an IPv4 and an IPv6 forwarding rule reference the same backend service, but the backend service must reference dual-stack backends.

The forwarding rule must reference a specific subnet in the same VPC network and region as the load balancer's backend components. This requirement has the following implications:

  • The subnet that you specify for the forwarding rule doesn't need to be the same as any of the subnets used by backend VMs; however, the subnet must be in the same region as the forwarding rule.
  • For IPv4 traffic, the internal forwarding rule references a regional internal IPv4 address from the primary IPv4 address range of the subnet that you select. The IPv4 address can be assigned either by specifying a reserved internal IPv4 address, specifying a custom ephemeral IPv4 address, or letting Google Cloud automatically assign an ephemeral IPv4 address.
  • For IPv6 traffic, the forwarding rule references a /96 range of IPv6 addresses from the subnet's /64 internal IPv6 address range. The subnet must be a dual-stack subnet with the ipv6-access-type set to INTERNAL. The /96 IPv6 address range can be assigned by either specifying a reserved internal IPv6 address, specifying a custom ephemeral IPv6 address, or letting Google Cloud automatically assign an ephemeral IPv6 address.

    To specify a custom ephemeral IPv6 address, you must use the gcloud CLI or the API. The Google Cloud console doesn't support specifying custom ephemeral IPv6 addresses for forwarding rules.

Forwarding rule protocols

Internal passthrough Network Load Balancers support the following IPv4 protocol options for each forwarding rule: TCP, UDP, or L3_DEFAULT.

Internal passthrough Network Load Balancers support the following IPv6 protocol options for each forwarding rule: TCP or UDP.

The L3_DEFAULT option enables you to load balance TCP, UDP, ICMP, ICMPv6, SCTP, ESP, AH, and GRE protocols.

In addition to supporting protocols other than TCP and UDP, the L3_DEFAULT option makes it possible for a single forwarding rule to simultaneously forward traffic for multiple protocols. For example, in addition to making HTTP requests, you can also ping the load balancer IP address.

Forwarding rules that use the TCP or UDP protocols can reference a backend service by using either the same protocol as the forwarding rule or a backend service using the UNSPECIFIED protocol.

If you're using the L3_DEFAULT protocol, you must configure the forwarding rule to accept traffic on all ports. To configure all ports, either set --ports=ALL by using the Google Cloud CLI, or set allPorts to True by using the API.

The following table summarizes how to use these settings for different protocols:

Traffic to be load-balanced | Forwarding rule protocol | Backend service protocol
TCP (IPv4 or IPv6) | TCP | TCP or UNSPECIFIED
UDP (IPv4 or IPv6) | UDP | UDP or UNSPECIFIED
TCP, UDP, ICMP, ICMPv6, SCTP, ESP, AH, and GRE | L3_DEFAULT | UNSPECIFIED
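
For instance, an L3_DEFAULT frontend paired with an UNSPECIFIED backend service might be configured as follows. Names and the region are placeholders, and the example reuses the earlier hypothetical health check.

```
# Backend service that accepts any supported protocol.
gcloud compute backend-services create be-ilb-l3 \
    --load-balancing-scheme=internal --protocol=UNSPECIFIED \
    --region=us-west1 \
    --health-checks=hc-tcp-80 --health-checks-region=us-west1

# L3_DEFAULT forwarding rules must accept traffic on all ports.
gcloud compute forwarding-rules create fr-ilb-l3 \
    --region=us-west1 --load-balancing-scheme=internal \
    --network=lb-network --subnet=lb-subnet \
    --ip-protocol=L3_DEFAULT --ports=ALL \
    --backend-service=be-ilb-l3 --backend-service-region=us-west1
```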

Forwarding rules and global access

An internal passthrough Network Load Balancer's forwarding rules are regional, even when global access is enabled. After you enable global access, the regional internal forwarding rule's allowGlobalAccess flag is set to true.
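
For example, assuming the hypothetical forwarding rule from earlier, global access can be enabled on an existing rule with a single update:

```
# Allow clients in any region to reach this (placeholder) forwarding rule.
gcloud compute forwarding-rules update fr-ilb \
    --region=us-west1 --allow-global-access
```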

Forwarding rules and port specifications

When you create an internal forwarding rule, you must choose one of the following port specifications:

  • Specify at least one and up to five ports, by number.
  • Specify ALL to forward traffic on all ports.

An internal forwarding rule that supports either all TCP ports or all UDP ports allows backend VMs to run multiple applications, each on its own port. Traffic sent to a given port is delivered to the corresponding application, and all applications use the same IP address.

When you need to forward traffic on more than five specific ports, combine firewall rules with forwarding rules. When you create the forwarding rule, specify all ports, and then create ingress allow firewall rules that only permit traffic to the desired ports. Apply the firewall rules to the backend VMs.
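
A sketch of this pattern, with placeholder names and ports, might look like the following: the forwarding rule forwards every port, and a firewall rule limits which ports the backends actually accept.

```
# Forward all TCP ports on the (placeholder) frontend.
gcloud compute forwarding-rules create fr-ilb-all-ports \
    --region=us-west1 --load-balancing-scheme=internal \
    --network=lb-network --subnet=lb-subnet \
    --ip-protocol=TCP --ports=ALL \
    --backend-service=be-ilb --backend-service-region=us-west1

# Permit only the ports you actually serve on the backend VMs.
gcloud compute firewall-rules create fw-allow-selected-ports \
    --network=lb-network --direction=ingress --action=allow \
    --source-ranges=10.0.0.0/8 \
    --target-tags=lb-backend --rules=tcp:8080-8090,tcp:9000
```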

You cannot modify a forwarding rule after you create it. If you need to change the specified ports or the internal IP address for an internal forwarding rule, you must delete and recreate it.

Multiple forwarding rules for a single backend service

You can configure multiple internal forwarding rules that all reference the same internal backend service. An internal passthrough Network Load Balancer requires at least one internal forwarding rule.

Configuring multiple forwarding rules for the same backend service lets you do the following:

  • Assign multiple IP addresses to the load balancer. You can create multiple forwarding rules, each using a unique IP address. Each forwarding rule can specify all ports or a set of up to five ports.

  • Assign a specific set of ports, using the same IP address, to the load balancer. You can create multiple forwarding rules sharing the same IP address, where each forwarding rule uses a specific set of up to five ports. This is an alternative to configuring a single forwarding rule that specifies all ports.

For more information about scenarios involving two or more internal forwarding rules that share a common internal IP address, see Multiple forwarding rules with the same IP address.
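
As a sketch, two forwarding rules can share one internal IP address when the address is reserved with the SHARED_LOADBALANCER_VIP purpose. Names, ports, and the region are placeholders.

```
# Reserve an internal address that multiple forwarding rules can share.
gcloud compute addresses create ilb-shared-vip \
    --region=us-west1 --subnet=lb-subnet \
    --purpose=SHARED_LOADBALANCER_VIP

# Two frontends on the same IP, each with its own set of ports.
gcloud compute forwarding-rules create fr-ilb-web \
    --region=us-west1 --load-balancing-scheme=internal \
    --network=lb-network --subnet=lb-subnet \
    --address=ilb-shared-vip --ip-protocol=TCP --ports=80,443 \
    --backend-service=be-ilb --backend-service-region=us-west1

gcloud compute forwarding-rules create fr-ilb-admin \
    --region=us-west1 --load-balancing-scheme=internal \
    --network=lb-network --subnet=lb-subnet \
    --address=ilb-shared-vip --ip-protocol=TCP --ports=8443 \
    --backend-service=be-ilb --backend-service-region=us-west1
```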

When using multiple internal forwarding rules, make sure that you configure the software running on your backend VMs to bind to all of the forwarding rule IP addresses or to any address (0.0.0.0 for IPv4 or :: for IPv6). The destination IP address for a packet delivered through the load balancer is the internal IP address associated with the corresponding internal forwarding rule. For more information, see TCP and UDP request and return packets.

Backend service

Each internal passthrough Network Load Balancer has one regional internal backend service that defines backend parameters and behavior. The name of the backend service is the name of the internal passthrough Network Load Balancer shown in the Google Cloud console.

Each backend service defines the following backend parameters:

  • Protocol. A backend service supports IPv4 and IPv6 traffic. If the protocol has a concept of a port (like TCP or UDP), the backend service delivers packets to backend VMs on the same destination port to which traffic was sent.

    Backend services support the following IPv4 protocol options: TCP, UDP, or UNSPECIFIED.

    Backend services support the following IPv6 protocol options: TCP or UDP. The backend service protocol must coordinate with the forwarding rule protocol.

    For a table with possible forwarding rule and backend service protocol combinations, see Forwarding rule protocol specification.

  • Traffic distribution. A backend service allows traffic to be distributed according to a configurable session affinity.

  • Health check. A backend service must have an associated health check.

Each backend service operates in a single region and distributes traffic for backend VMs in a single VPC network.

Instance group backends and network interfaces

Within a given (managed or unmanaged) instance group, all VM instances must have their nic0 network interfaces in the same VPC network.

  • For managed instance groups (MIGs), the VPC network for the instance group is defined in the instance template.
  • For unmanaged instance groups, the VPC network for the instance group is defined as the VPC network used by the nic0 network interface of the first VM instance that you add to the unmanaged instance group.

Each member VM can optionally have additional network interfaces (vNICs or Dynamic Network Interfaces), but each network interface must attach to a different VPC network. These networks must also be different from the VPC network associated with the instance group.

A Dynamic NIC can be deleted if the Dynamic NIC belongs to a VM that's a member of a load-balanced instance group.

Zonal NEG backends and network interfaces

When you create a new zonal NEG with GCE_VM_IP endpoints, you must explicitly associate the NEG with a subnetwork of a VPC network before you can add any endpoints to the NEG. Neither the subnet nor the VPC network can be changed after the NEG is created.

Within a given NEG, each GCE_VM_IP endpoint actually represents a network interface. The network interface must be in the subnetwork associated with the NEG. From the perspective of a Compute Engine instance, the network interface can use any identifier. From the perspective of being an endpoint in a NEG, the network interface is identified by using its primary internal IPv4 address.

There are two ways to add a GCE_VM_IP endpoint to a NEG:

  • If you specify only a VM name (without any IP address) when adding an endpoint, Google Cloud requires that the VM has a network interface in the subnetwork associated with the NEG. The IP address that Google Cloud chooses for the endpoint is the primary internal IPv4 address of the VM's network interface in the subnetwork associated with the NEG.
  • If you specify both a VM name and an IP address when adding an endpoint, the IP address that you provide must be a primary internal IPv4 address for one of the VM's network interfaces. That network interface must be in the subnetwork associated with the NEG. Note that specifying an IP address is redundant because there can only be a single network interface that is in the subnetwork associated with the NEG.
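
For illustration, the following sketch creates a GCE_VM_IP zonal NEG and adds an endpoint by VM name only, letting Google Cloud pick the interface in the NEG's subnetwork. Names and the zone are placeholders.

```
# Zonal NEG associated with a specific network and subnetwork.
gcloud compute network-endpoint-groups create neg-a \
    --zone=us-west1-a \
    --network-endpoint-type=GCE_VM_IP \
    --network=lb-network --subnet=lb-subnet

# Add a VM as an endpoint; the primary internal IPv4 address of its
# interface in lb-subnet is used automatically.
gcloud compute network-endpoint-groups update neg-a \
    --zone=us-west1-a \
    --add-endpoint=instance=vm-a1
```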

A Dynamic NIC can't be deleted if the Dynamic NIC is an endpoint of a load-balanced network endpoint group.

Backend service network specification

You can explicitly associate a VPC network with a backend service by using the --network flag when you create the backend service. The backend service network rules go into effect immediately.
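
For example, to pin a new, hypothetical backend service to a particular network at creation time (placeholder names again):

```
# Explicitly associate the backend service with lb-network.
gcloud compute backend-services create be-ilb-custom \
    --load-balancing-scheme=internal --protocol=tcp \
    --region=us-west1 --network=lb-network \
    --health-checks=hc-tcp-80 --health-checks-region=us-west1
```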

If you omit the --network flag when you create a backend service, Google Cloud uses one of the following qualifying events to implicitly set the backend service's associated VPC network. After it is set, the backend service's associated VPC network cannot be changed:

  • If the backend service doesn't yet have any associated backends, and you configure the first forwarding rule to reference the backend service, Google Cloud sets the backend service's associated VPC network to the VPC network that contains the subnet used by this forwarding rule.

  • If the backend service doesn't yet have a forwarding rule referencing it, and you add the first instance group backend to the backend service, Google Cloud sets the backend service's VPC network to the VPC network associated with that first instance group backend. This means that the backend service's associated VPC network is the network used by the nic0 network interface of each VM in the first instance group backend.

  • If the backend service doesn't yet have a forwarding rule referencing it, and you add the first zonal NEG backend (with GCE_VM_IP endpoints) to the backend service, Google Cloud sets the backend service's VPC network to the VPC network associated with that first NEG backend.

After the backend service's VPC network has been set by a qualifying event, any additional forwarding rules, backend instance groups, or backend zonal NEGs that you add must follow the backend service network rules.

Backend service network rules

The following rules apply after a backend service is associated with a specific VPC network:

  • When configuring a forwarding rule to reference the backend service, the forwarding rule must use a subnet of the backend service's VPC network. The forwarding rule and backend service cannot use different VPC networks, even if those networks are connected—for example, using VPC Network Peering.

  • When adding instance group backends, one of the following must be true:

    • The VPC network associated with the instance group—that is, the VPC network used by the nic0 network interface of each VM in the instance group—must match the backend service's associated VPC network.
    • Each backend VM must have a non-nic0 interface in the VPC network associated with the backend service.
  • When adding zonal NEG backends with GCE_VM_IP endpoints, the VPC network associated with the NEG must match the VPC network associated with the backend service.

Dual-stack backends (IPv4 and IPv6)

If you want the load balancer to use dual-stack backends that handle both IPv4 and IPv6 traffic, note the following requirements:

  • Backends must be configured in dual-stack subnets that are in the same region as the load balancer's IPv6 forwarding rule. For the backends, you can use a subnet with the ipv6-access-type set to either INTERNAL or EXTERNAL. If the backend subnet's ipv6-access-type is set to EXTERNAL, you must use a different dual-stack or IPv6-only subnet with ipv6-access-type set to INTERNAL for the load balancer's internal forwarding rule. For more information, see Add a dual-stack subnet.
  • Backends must be configured to be dual-stack with stack-type set to IPV4_IPV6. If the backend subnet's ipv6-access-type is set to EXTERNAL, you must also set the --ipv6-network-tier to PREMIUM. For more information, see Create an instance template with IPv6 addresses.
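
A dual-stack instance template for such backends might look like the following sketch. The template name and subnet are placeholders, and the subnet is assumed to have ipv6-access-type set to INTERNAL (so --ipv6-network-tier isn't needed).

```
# Instance template whose VMs get both IPv4 and internal IPv6 addresses.
gcloud compute instance-templates create template-dual-stack \
    --region=us-west1 \
    --subnet=lb-subnet-dual \
    --stack-type=IPV4_IPV6
```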

IPv6-only backends

If you want the load balancer to use IPv6-only backends, note the following requirements:

  • IPv6-only instances are only supported in unmanaged instance groups. No other backend type is supported.
  • Backends must be configured in either dual-stack or IPv6-only subnets that are in the same region as the load balancer's IPv6 forwarding rule. For the backends, you can use a subnet with the ipv6-access-type set to either INTERNAL or EXTERNAL. If the backend subnet's ipv6-access-type is set to EXTERNAL, you must use a different IPv6-only subnet with ipv6-access-type set to INTERNAL for the load balancer's internal forwarding rule.
  • Backends must be configured to be IPv6-only with the VM stack-type set to IPV6_ONLY. If the backend subnet's ipv6-access-type is set to EXTERNAL, you must also set the --ipv6-network-tier to PREMIUM. For more information, see Create an instance template with IPv6 addresses.

Note that IPv6-only VMs can be created under both dual-stack and IPv6-only subnets, but dual-stack VMs can't be created under IPv6-only subnets.

Backend subsetting

Backend subsetting is an optional feature that improves performance by limiting the number of backends to which traffic is distributed.

We recommend that you only enable subsetting if you need to support more than 250 backend VMs on a single load balancer. For more information, see Backend subsetting for internal passthrough Network Load Balancer.

Health check

Health check information is used to determine eligible backends for new connections, and you can control whether existing connections persist on unhealthy backends. For more information about eligible backends, see Traffic distribution for internal passthrough Network Load Balancers.

Health check type, protocol, and port

The load balancer's backend service must reference a global or regional health check, using any supported health check protocol and port. The health check's protocol and port don't have to match the protocol and ports used by the load balancer's backend service and forwarding rules.

Because all supported health check protocols rely on TCP, when you use an internal passthrough Network Load Balancer to balance connections and traffic for other protocols, backend VMs must run a TCP-based server to answer health check probers. For example, you can use an HTTP health check combined with running an HTTP server on each backend VM. In this example, your scripts or software are responsible for configuring the HTTP server so that it returns status 200 only when the software listening to load-balanced connections is operational.
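
For example, you might pair a regional HTTP health check (placeholder name, port, and path) with a small HTTP endpoint on each backend that returns 200 only while the load-balanced service is working:

```
# Regional HTTP health check used by the backend service.
gcloud compute health-checks create http hc-http-healthz \
    --region=us-west1 --port=80 --request-path=/healthz
```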

For more information about supported health check protocols and ports, see Health check categories, protocols, and ports, and How health checks work.

Health check packets

Health check probers send packets to the backend VM network interface that's in the VPC network that matches the Backend service network specification. Health check packets have the following characteristics:

  • Source IP address from the relevant health check probe IP range.
  • Destination IP address that matches each IP address of a forwarding rule that references the internal passthrough Network Load Balancer backend service.
  • Destination port that matches the port number you specify in the health check.

Software running on the backend VMs must bind to and listen on the relevant IP address and port combinations. The simplest way to accomplish this is to configure software to bind to and listen on the relevant ports on any of the VM's IP addresses (0.0.0.0). For more information, see Destination for probe packets.

High availability architecture

The internal passthrough Network Load Balancer is highly available by design. There are no special steps to make the load balancer highly available because the mechanism doesn't rely on a single device or VM instance.

To deploy your backend VM instances to multiple zones, follow these deployment recommendations:

  • Use regional managed instance groups if you can deploy your software by using instance templates. Regional managed instance groups automatically distribute backend VMs among multiple zones, providing the best option to avoid potential issues in any given zone. A sketch of this approach follows this list.

  • If you use zonal managed instance groups or unmanaged instance groups, use multiple instance groups in different zones (in the same region) for the same backend service. Using multiple zones protects against potential issues in any given zone.
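
As a sketch of the first recommendation, a regional managed instance group spreads VMs created from a (placeholder) instance template across zones in the region:

```
# Regional MIG that distributes backend VMs across zones in us-west1.
gcloud compute instance-groups managed create ig-regional \
    --region=us-west1 --size=3 --template=example-template
```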

Shared VPC architecture

The following summarizes the component requirements for internal passthrough Network Load Balancers used with Shared VPC networks. For an example, see creating an internal passthrough Network Load Balancer on the Provisioning Shared VPC page.

IP address

An internal IP address must be defined in the same project as the backend VMs.

For the load balancer to be available in a Shared VPC network, the internal IP address must be defined in the same service project where the backend VMs are located, and it must reference a subnet in the desired Shared VPC network in the host project. The address itself comes from the primary IP range of the referenced subnet.

If you create an internal IP address in a service project and the IP address subnet is in the service project's VPC network, your internal passthrough Network Load Balancer is local to the service project. It isn't local to any Shared VPC host project.

Forwarding rule

An internal forwarding rule must be defined in the same project as the backend VMs.

For the load balancer to be available in a Shared VPC network, the internal forwarding rule must be defined in the same service project where the backend VMs are located, and it must reference the same subnet (in the Shared VPC network) that the associated internal IP address references.

If you create an internal forwarding rule in a service project and the forwarding rule's subnet is in the service project's VPC network, your internal passthrough Network Load Balancer is local to the service project. It isn't local to any Shared VPC host project.

Backend components

In a Shared VPC scenario, backend VMs are located in a service project. A regional internal backend service and health check must be defined in that service project.

Traffic distribution

Internal passthrough Network Load Balancers support a variety of traffic distribution customization options, including session affinity, connection tracking, and failover. For details about how internal passthrough Network Load Balancers distribute traffic, and how these options interact with each other, see Traffic distribution for internal passthrough Network Load Balancers.

Quotas and limits

For information about quotas and limits, see Load balancing resource quotas.

Limitations

  • You cannot use the Google Cloud console to create an internal passthrough Network Load Balancer with zonal NEG backends.
