Internal proxy Network Load Balancers and connected networks

This page describes scenarios for accessing an internal load balancer in your Virtual Private Cloud (VPC) network from a connected network. Before reviewing the information on this page, you should already be familiar with internal load balancing concepts.

Use VPC Network Peering

When you use VPC Network Peering to connect your VPC network to another network, Google Cloud shares subnet routes between the networks. The subnet routes allow traffic from the peer network to reach internal load balancers in your network. Access is allowed if the following are true:

  • You create ingress firewall rules to allow traffic from client VMs in the peer network. Google Cloud firewall rules aren't shared among networks when using VPC Network Peering.
  • For regional internal Application Load Balancers, client virtual machine (VM) instances in the peer network must be located in the same region as your internal load balancer. This restriction is waived if you configure global access.

You cannot selectively share only some internal passthrough Network Load Balancers, regional internal proxy Network Load Balancers, or internal Application Load Balancers by using VPC Network Peering. All internal load balancers are shared automatically. You can limit access to the load balancer's backends by using ingress firewall rules applicable to the backend VM instances.
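
For illustration, the following gcloud sketch creates such an ingress allow rule in the load balancer's network. The rule name, the peer client range (10.5.0.0/24), the target tag, and the ports are hypothetical placeholders, not values defined on this page.

    # Allow TCP traffic from client VMs in the peered network (hypothetical
    # range 10.5.0.0/24) to backend VMs tagged lb-backends in lb-network.
    gcloud compute firewall-rules create allow-peer-clients \
        --network=lb-network \
        --direction=INGRESS \
        --action=ALLOW \
        --rules=tcp:80,tcp:443 \
        --source-ranges=10.5.0.0/24 \
        --target-tags=lb-backends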

Use Cloud VPN and Cloud Interconnect

You can access an internal load balancer from a peer network that is connected through a Cloud VPN tunnel or through a VLAN attachment for a Dedicated Interconnect or Partner Interconnect connection. The peer network can be an on-premises network, another Google Cloud VPC network, or a virtual network hosted by a different cloud provider.

Access through Cloud VPN tunnels

You can access an internal load balancer through a Cloud VPN tunnel when all of the following conditions are met.

In the internal load balancer's network

  • Both the Cloud VPN gateway and tunnel(s) must be located in the same region as the load balancer when global access is disabled. If global access is enabled on the load balancer's forwarding rule, this restriction is waived. (A setup sketch follows this list.)
  • Routes must provide response paths from proxy systems back to the on-premises or peer network where the client is located. If you're using Cloud VPN tunnels with dynamic routing, consider the dynamic routing mode of the load balancer's Cloud VPN network. The dynamic routing mode determines which custom dynamic routes are available to proxies in the proxy-only subnet.
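
As a rough illustration of the first condition, this sketch places an HA VPN gateway, Cloud Router, and one tunnel in REGION_A, the load balancer's region. All resource names, the peer address, the shared secret, and the BGP parameters are hypothetical placeholders, and the matching on-premises VPN and BGP configuration is not shown.

    # HA VPN gateway and Cloud Router in the load balancer's region.
    gcloud compute vpn-gateways create vpn-gw-a --network=lb-network --region=REGION_A
    gcloud compute routers create router-a --network=lb-network --region=REGION_A --asn=65001

    # Represent the on-premises VPN device, then create one tunnel toward it.
    gcloud compute external-vpn-gateways create on-prem-gw --interfaces=0=203.0.113.1
    gcloud compute vpn-tunnels create tunnel-a-0 \
        --region=REGION_A \
        --vpn-gateway=vpn-gw-a \
        --peer-external-gateway=on-prem-gw \
        --peer-external-gateway-interface=0 \
        --interface=0 \
        --ike-version=2 \
        --shared-secret=EXAMPLE_SECRET \
        --router=router-a

    # BGP session so that the Cloud Router can exchange dynamic routes
    # (and therefore response paths) with the on-premises router.
    gcloud compute routers add-interface router-a \
        --interface-name=if-tunnel-a-0 \
        --vpn-tunnel=tunnel-a-0 \
        --ip-address=169.254.0.1 \
        --mask-length=30 \
        --region=REGION_A
    gcloud compute routers add-bgp-peer router-a \
        --peer-name=bgp-on-prem \
        --interface=if-tunnel-a-0 \
        --peer-ip-address=169.254.0.2 \
        --peer-asn=65002 \
        --region=REGION_A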

In the peer network

The peer network must have at least one Cloud VPN tunnel with routes to the subnet where the internal load balancer is defined.

If the peer network is another Google Cloud VPC network:

  • The peer network's Cloud VPN gateway and tunnel(s) can be located in any region.

  • For Cloud VPN tunnels that use dynamic routing, the dynamic routing mode of the VPC network determines which routes are available to clients in each region. To provide a consistent set of custom dynamic routes to clients in all regions, use global dynamic routing mode, as shown in the sketch after this list.

  • Ensure that on-premises or peer network firewalls permit packets sent to the IP address of the load balancer's forwarding rule and response packets received from that IP address.
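
For example, if the peer is another VPC network, switching it to global dynamic routing mode is a single command; the network name peer-network is a hypothetical placeholder.

    # Let the peer network's Cloud Routers share learned routes with clients
    # in every region, not only the region of each Cloud Router.
    gcloud compute networks update peer-network --bgp-routing-mode=global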

The following diagram highlights key concepts when accessing an internal load balancer by way of a Cloud VPN gateway and its associated tunnel. Cloud VPN securely connects your on-premises network to your Google Cloud VPC network by using Cloud VPN tunnels.

Figure: Internal load balancing and Cloud VPN.

Note the following configuration elements associated with this example:

  • In the lb-network, a Cloud VPN tunnel that uses dynamic routing has been configured. The VPN tunnel, gateway, and Cloud Router are all located in REGION_A, the same region where the internal load balancer's components are located.
  • Ingress allow firewall rules have been configured to apply to the backend VMs in instance groups A and B so that they can receive traffic from IP addresses in the VPC network (10.1.2.0/24) and from the on-premises network (192.168.1.0/24). No egress deny firewall rules have been created, so the implied allow egress rule applies.
  • Packets sent from clients in the on-premises network, including from 192.168.1.0/24, to the IP address of the internal load balancer, 10.1.2.99, are delivered directly to a healthy backend VM, such as vm-a2, according to the configured session affinity.
  • Replies sent from the backend VMs (such as vm-a2) are delivered through the VPN tunnel to the on-premises clients. (A connectivity check sketch follows this list.)
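
To check this path end to end, a Connectivity Test can simulate a packet from an on-premises client to the load balancer's forwarding-rule address. The test name, the client address 192.168.1.10, the project ID, and the port are hypothetical placeholders chosen to match the example above.

    # Simulate a TCP packet from an on-premises client to the forwarding-rule
    # IP address of the internal load balancer (10.1.2.99) in lb-network.
    gcloud network-management connectivity-tests create onprem-to-ilb \
        --source-ip-address=192.168.1.10 \
        --destination-ip-address=10.1.2.99 \
        --destination-port=80 \
        --protocol=TCP \
        --destination-network=projects/PROJECT_ID/global/networks/lb-network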

To troubleshoot Cloud VPN, see Cloud VPN troubleshooting.

Access through Cloud Interconnect

You can access an internal load balancer from an on-premises peer network that is connected to the load balancer's VPC network when all of the following conditions are met in the internal load balancer's network:

  • Both the VLAN attachment and Cloud Router must be located in the same region as the load balancer when global access is disabled. If global access is enabled on the load balancer's forwarding rule, this restriction is waived. (A setup sketch follows this list.)

  • On-premises routers must provide response paths from the load balancer's backends to the on-premises network. VLAN attachments for both Dedicated Interconnect and Partner Interconnect must use Cloud Routers, so custom dynamic routes provide the response paths. The set of custom dynamic routes that the backends learn depends on the dynamic routing mode of the load balancer's network.

  • Ensure that on-premises firewalls permit packets sent to the IP address of the load balancer's forwarding rule and response packets received from that IP address.
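
As an illustration of the first condition, the following sketch creates a Cloud Router and a Partner Interconnect VLAN attachment in the load balancer's region. The names, ASN, and edge availability domain are hypothetical placeholders.

    # Cloud Router and Partner Interconnect VLAN attachment in REGION_A, the
    # same region as the load balancer (required unless global access is
    # enabled on the forwarding rule).
    gcloud compute routers create router-ic-a --network=lb-network --region=REGION_A --asn=65001
    gcloud compute interconnects attachments partner create attach-a \
        --region=REGION_A \
        --router=router-ic-a \
        --edge-availability-domain=availability-domain-1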

Use global access with Cloud VPN and Cloud Interconnect

By default, clients must be in the same region as the load balancer, and in the same VPC network or in a VPC network connected by using VPC Network Peering. You can enable global access to allow clients from any region to access your load balancer.

For cross-region internal proxy Network Load Balancers, global access is enabled by default; clients from any region can access your load balancer. For regional internal proxy Network Load Balancers, when you enable global access, the following resources can be located in any region (a sketch for enabling global access follows the list):
  • Cloud Routers
  • Cloud VPN gateways and tunnels
  • VLAN attachments
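
For a regional internal load balancer, global access is a property of the forwarding rule. The following is a minimal sketch, assuming a hypothetical forwarding rule named ilb-fr in REGION_A.

    # Enable global access so that clients, Cloud Routers, Cloud VPN tunnels,
    # and VLAN attachments in any region can reach the load balancer.
    gcloud compute forwarding-rules update ilb-fr \
        --region=REGION_A \
        --allow-global-access

    # Verify the setting.
    gcloud compute forwarding-rules describe ilb-fr \
        --region=REGION_A \
        --format="get(allowGlobalAccess)"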

In the following diagram:

  • The load balancer's frontend and backends are in the REGION_A region.
  • The Cloud Router is in the REGION_B region.
  • The Cloud Router peers with the on-premises VPN router.
  • The Border Gateway Protocol (BGP) peering session can be established through Cloud VPN or through Cloud Interconnect with a Dedicated Interconnect or Partner Interconnect connection.
Figure: Internal load balancing with global access.

The VPC network's dynamic routing mode is set to global to enable the Cloud Router in REGION_B to advertise the subnet routes for subnets in any region of the load balancer's VPC network.
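
To confirm that the REGION_B Cloud Router is advertising subnets from other regions, you can inspect its status; the router name router-b is a hypothetical placeholder.

    # List the BGP peer status, including the routes that the Cloud Router
    # advertises to the on-premises router.
    gcloud compute routers get-status router-b \
        --region=REGION_B \
        --format="yaml(result.bgpPeerStatus)"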

Multiple egress paths

In production environments, you should use multiple Cloud VPN tunnels or VLAN attachments for redundancy. This section discusses requirements when using multiple tunnels or VLAN attachments.

In the following diagram, two Cloud VPN tunnels connect lb-network to an on-premises network. Although Cloud VPN tunnels are used here, the same principles apply to Cloud Interconnect.

Figure: Internal load balancing and multiple Cloud VPN tunnels.

You must configure each tunnel or VLAN attachment in the same region as the internal load balancer. This requirement is waived if you enable global access.

Multiple tunnels or VLAN attachments can provide additional bandwidth or can serve as standby paths for redundancy.

Keep in mind the following points:

  • If the on-premises network has two routes with the same priority, each with a destination of 10.1.2.0/24 and a next hop corresponding to a different VPN tunnel in the same region as the internal load balancer, traffic can be sent from the on-premises network (192.168.1.0/24) to the load balancer by using equal-cost multipath (ECMP).
  • After packets are delivered to the VPC network, the internal load balancer distributes them to backend VMs according to the configured session affinity.
  • If the lb-network has two routes, each with the destination 192.168.1.0/24 and a next hop corresponding to a different VPN tunnel, responses from backend VMs can be delivered over either tunnel according to the priority of the routes in the network. If different route priorities are used, one tunnel can serve as a backup for the other. If the same priority is used, responses are delivered by using ECMP, as in the sketch after this list.
  • Replies sent from the backend VMs (such as vm-a2) are delivered directly to the on-premises clients through the appropriate tunnel. From the perspective of lb-network, if routes or VPN tunnels change, traffic might egress by using a different tunnel. This might result in TCP session resets if an in-progress connection is interrupted.
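
For Classic VPN tunnels that use static routing, the two equal-priority return routes described above could be created as follows; the tunnel names are hypothetical placeholders. With dynamic routing, Cloud Router learns equivalent routes over BGP instead.

    # Two equal-priority routes to the on-premises range, one per tunnel.
    # Equal priorities cause responses to be spread across the tunnels with
    # ECMP; giving one route a larger (worse) priority value would make that
    # tunnel a standby path instead.
    gcloud compute routes create route-onprem-tunnel-1 \
        --network=lb-network \
        --destination-range=192.168.1.0/24 \
        --next-hop-vpn-tunnel=tunnel-1 \
        --next-hop-vpn-tunnel-region=REGION_A \
        --priority=1000
    gcloud compute routes create route-onprem-tunnel-2 \
        --network=lb-network \
        --destination-range=192.168.1.0/24 \
        --next-hop-vpn-tunnel=tunnel-2 \
        --next-hop-vpn-tunnel-region=REGION_A \
        --priority=1000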
