Hybrid connectivity network endpoint groups overview
Cloud Load Balancing supports load balancing traffic to endpoints that extend beyond Google Cloud, such as on-premises data centers and other public clouds that you can use hybrid connectivity to reach.

A hybrid strategy is a pragmatic way to adapt to changing market demands and incrementally modernize your applications. It can be a temporary hybrid deployment that enables migration to a modern cloud-based solution, or a permanent fixture of your organization's IT infrastructure.

Setting up hybrid load balancing also lets you bring the benefits of Cloud Load Balancing's networking capabilities to services running on your existing infrastructure outside of Google Cloud.
Hybrid load balancing is supported on the following Google Cloud load balancers:
- External Application Load Balancers: global external Application Load Balancers, classic Application Load Balancers, and regional external Application Load Balancers
- Internal Application Load Balancers: cross-region internal Application Load Balancers and regional internal Application Load Balancers
- External proxy Network Load Balancers: global external proxy Network Load Balancers, classic proxy Network Load Balancers, and regional external proxy Network Load Balancers
- Internal proxy Network Load Balancers: regional internal proxy Network Load Balancers and cross-region internal proxy Network Load Balancers
On-premises and other cloud services are treated like any other Cloud Load Balancing backend. The key difference is that you use a hybrid connectivity NEG to configure the endpoints of these backends. The endpoints must be valid IP:port combinations that your load balancer can reach by using hybrid connectivity products such as Cloud VPN, Cloud Interconnect, or Router appliance VMs.
Use case: Routing traffic to an on-premises location or another cloud
The simplest use case for hybrid NEGs is routing traffic from a Google Cloud load balancer to an on-premises location or to another cloud environment. Clients can originate traffic from the public internet, from within Google Cloud, or from an on-premises client.
Note: Hybrid connectivity NEGs are configured only within a specific zone. If you require regional availability, you can use a regional internet NEG as a workaround. By creating a regional internet NEG with an INTERNET_FQDN_PORT endpoint type, you can use a fully qualified domain name (FQDN) that resolves to a private IP address through a private Cloud DNS zone. This approach allows the load balancer to route traffic to your private on-premises or multi-cloud backends at a regional level, avoiding a single zonal dependency. For more details, see the documentation for Internet NEGs.

Public clients
You can use an external Application Load Balancer with a hybrid NEG backend to route traffic from external clients to a backend on-premises or in another cloud network. You can also enable the following value-added networking capabilities for your services on-premises or in other cloud networks:
With the global external Application Load Balancer and classic Application Load Balancer, you can:
- Use Google's global edge infrastructure to terminate user connections closer to the user, thus decreasing latency.
- Protect your service with Google Cloud Armor, an edge DDoS defense and WAF security product available to all services accessed through an external Application Load Balancer.
- Enable your service to optimize delivery using Cloud CDN. With Cloud CDN, you can cache content close to your users. Cloud CDN provides capabilities like cache invalidation and Cloud CDN signed URLs.
- Use Google-managed SSL certificates. You can reuse certificates and private keys that you already use for other Google Cloud products. This eliminates the need to manage separate certificates.
The following diagram demonstrates a hybrid deployment with an external Application Load Balancer.
Hybrid connectivity with a global external Application Load Balancer. In this diagram, traffic from clients on the public internet enters your private on-premises or cloud network through a Google Cloud load balancer, such as the external Application Load Balancer. When traffic reaches the load balancer, you can apply network edge services such as Google Cloud Armor DDoS protection or Identity-Aware Proxy (IAP) user authentication.
- With the regional external Application Load Balancer, you can route external traffic to endpoints that are within the same Google Cloud region as the load balancer's resources. Use this load balancer if you need to serve content from only one geolocation (for example, to meet compliance regulations) or if you want to use the Standard Network Service Tier.
How the request is routed (whether to a Google Cloud backend or to an on-premises or other cloud endpoint) depends on how your URL map is configured. Depending on your URL map, the load balancer selects a backend service for the request. If the selected backend service has been configured with a hybrid connectivity NEG (used for non-Google Cloud endpoints only), the load balancer forwards the traffic across Cloud VPN, Cloud Interconnect, or Router appliance VMs to its intended external destination.
Internal clients (within Google Cloud or on-premises)
You can also set up a hybrid deployment for clients internal to Google Cloud. In this case, client traffic originates from the Google Cloud VPC network, your on-premises network, or from another cloud, and is routed to endpoints on-premises or in other cloud networks.
The regional internal Application Load Balancer is a regional load balancer, which means that it can only route traffic to endpoints within the same Google Cloud region as the load balancer's resources. The cross-region internal Application Load Balancer is a multi-region load balancer that can load balance traffic to backend services that are globally distributed.
The following diagram demonstrates a hybrid deployment with a regional internal Application Load Balancer.
Use case: Migrate to the cloud
Migrating an existing service to the cloud lets you free up on-premises capacity and reduce the cost and burden of maintaining on-premises infrastructure. You can temporarily set up a hybrid deployment that lets you route traffic to both your current on-premises service and a corresponding Google Cloud service endpoint.
The following diagram demonstrates this setup with an internal Application Load Balancer. If you are using an internal Application Load Balancer to handle internal clients, you can configure the Google Cloud load balancer to use weight-based traffic splitting to split traffic across the two services. Traffic splitting lets you start by sending 0% of traffic to the Google Cloud service and 100% to the on-premises service. You can then gradually increase the proportion of traffic sent to the Google Cloud service. Eventually, you send 100% of traffic to the Google Cloud service, and you can retire the on-premises service.
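As a sketch of what weight-based traffic splitting can look like, the following imports a URL map for a regional internal Application Load Balancer that sends 90% of traffic to the on-premises backend service and 10% to the Google Cloud one. All names (`migration-map`, `on-prem-backend-service`, `cloud-backend-service`, the project ID, and the region) are hypothetical placeholders; adjust the weights over time as the migration progresses.

```shell
# Hypothetical resources: replace PROJECT_ID, region, and service names.
cat <<'EOF' > migration-map.yaml
name: migration-map
defaultService: projects/PROJECT_ID/regions/us-central1/backendServices/on-prem-backend-service
hostRules:
- hosts:
  - '*'
  pathMatcher: matcher1
pathMatchers:
- name: matcher1
  defaultService: projects/PROJECT_ID/regions/us-central1/backendServices/on-prem-backend-service
  routeRules:
  - priority: 1
    matchRules:
    - prefixMatch: /
    routeAction:
      weightedBackendServices:
      # Start heavily weighted toward the on-premises service, then shift.
      - backendService: projects/PROJECT_ID/regions/us-central1/backendServices/on-prem-backend-service
        weight: 90
      - backendService: projects/PROJECT_ID/regions/us-central1/backendServices/cloud-backend-service
        weight: 10
EOF

gcloud compute url-maps import migration-map \
    --source=migration-map.yaml \
    --region=us-central1
```

To retire the on-premises service, re-import the URL map with weights of 0 and 100, then remove the on-premises backend service once traffic has fully drained.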
Hybrid architecture
This section describes the load balancing architecture and resources required to configure a hybrid load balancing deployment.
On-premises and other cloud services are like any other Cloud Load Balancing backend. The key difference is that you use a hybrid connectivity NEG to configure the endpoints of these backends. The endpoints must be valid IP:port combinations that your clients can reach through hybrid connectivity, such as Cloud VPN, Cloud Interconnect, or a Router appliance VM.
Regional versus global
Cloud Load Balancing routing depends on the scope of the configured load balancer:
External Application Load Balancer and external proxy Network Load Balancer. These load balancers can be configured for either global or regional routing depending on the network tier that is used. You create the load balancer's hybrid NEG backends in the same regions where hybrid connectivity has been configured. Non-Google Cloud endpoints must also be configured accordingly to take advantage of proximity-based load balancing.
Cross-region internal Application Load Balancer and cross-region internal proxy Network Load Balancer. These are multi-region load balancers that can load balance traffic to backend services that are globally distributed. You create the load balancer's hybrid NEG backends in the same regions where hybrid connectivity has been configured. Non-Google Cloud endpoints must also be configured accordingly to take advantage of proximity-based load balancing.
Regional internal Application Load Balancer and regional internal proxy Network Load Balancer. These are regional load balancers. That is, they can only route traffic to endpoints within the same region as the load balancer. The load balancer components must be configured in the same region where hybrid connectivity has been configured. By default, clients accessing the load balancer must also be in the same region. However, if you enable global access, clients from any region can access the load balancer.
For example, if the Cloud VPN gateway or the Cloud Interconnect VLAN attachment is configured in REGION_A, the resources required by the load balancer (such as the backend service, hybrid NEG, and forwarding rule) must be created in the REGION_A region. By default, clients accessing the load balancer must also be in the REGION_A region. However, if you enable global access, clients from any region can access the load balancer.
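As a minimal sketch of the global access option, the following creates an internal forwarding rule in the same region as the hybrid connectivity resources and enables global access. The network, subnet, proxy, and rule names are hypothetical placeholders.

```shell
# Hypothetical names; REGION_A is the region where hybrid connectivity
# (Cloud VPN / Cloud Interconnect) is configured.
gcloud compute forwarding-rules create my-int-alb-rule \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=my-vpc \
    --subnet=my-subnet \
    --region=REGION_A \
    --ports=80 \
    --target-http-proxy=my-target-proxy \
    --target-http-proxy-region=REGION_A \
    --allow-global-access
```

Without `--allow-global-access`, only clients in REGION_A can reach the load balancer's IP address.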
Network connectivity requirements
Before you configure a hybrid load balancing deployment, you must have the following resources set up:
Google Cloud VPC network. A VPC network configured inside Google Cloud. This is the VPC network used to configure Cloud Interconnect or Cloud VPN and Cloud Router. It is also the network where you create the load balancing resources (forwarding rule, target proxy, backend service, and so on). On-premises, other cloud, and Google Cloud subnet IP addresses and IP address ranges must not overlap. When IP addresses overlap, subnet routes are prioritized over remote connectivity.
Hybrid connectivity. Your Google Cloud and on-premises or other cloud environments must be connected through hybrid connectivity, using either Cloud Interconnect VLAN attachments, Cloud VPN tunnels with Cloud Router, or Router appliance VMs. We recommend that you use a high availability connection. A Cloud Router enabled with global dynamic routing learns about the specific endpoint by using BGP and programs it into your Google Cloud VPC network. Regional dynamic routing is not supported. Static routes are also not supported.
The Cloud Router must also advertise the following routes to your on-premises environment:
- Ranges used by Google's health check probes: 35.191.0.0/16 and 130.211.0.0/22. This is required for global external Application Load Balancers, classic Application Load Balancers, global external proxy Network Load Balancers, and classic proxy Network Load Balancers.
- The range of the region's proxy-only subnet, for Envoy-based load balancers—regional external Application Load Balancers, regional internal Application Load Balancers, cross-region internal Application Load Balancers, regional external proxy Network Load Balancers, cross-region internal proxy Network Load Balancers, and regional internal proxy Network Load Balancers. Advertising the region's proxy-only subnet is also required for distributed Envoy health checks to work. Distributed Envoy health checks are the default health check mechanism for zonal hybrid connectivity NEGs (that is, NON_GCP_PRIVATE_IP_PORT endpoints) behind Envoy-based load balancers.
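A sketch of the Cloud Router advertisement described above might look as follows. The router name, region, and the proxy-only subnet range 10.129.0.0/23 are hypothetical; substitute your own values. Note that custom advertisement mode replaces the default advertisements, so the sketch also re-advertises all subnets explicitly.

```shell
# Hypothetical router "hybrid-router" in us-central1; 10.129.0.0/23 is a
# placeholder for your region's proxy-only subnet range.
gcloud compute routers update hybrid-router \
    --region=us-central1 \
    --advertisement-mode=custom \
    --set-advertisement-groups=all_subnets \
    --set-advertisement-ranges=35.191.0.0/16,130.211.0.0/22,10.129.0.0/23
```

The health check ranges (35.191.0.0/16 and 130.211.0.0/22) are only needed for load balancers that use centralized health checks; the proxy-only subnet range is only needed for Envoy-based load balancers.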
You can use either the same network or a different VPC network within the same project to configure both hybrid networking (Cloud Interconnect or Cloud VPN) and the load balancer. Note the following:
If you use different VPC networks, the two networks must be connected using VPC Network Peering, or they must be VPC spokes on the same Network Connectivity Center (NCC) hub.
If you use the same VPC network, ensure that your VPC network's subnet CIDR ranges don't conflict with your remote CIDR ranges. When IP addresses overlap, subnet routes are prioritized over remote connectivity.
Network endpoints (IP:Port) on-premises or in other clouds. One or more IP:Port network endpoints configured within your on-premises or other cloud environments, routable using Cloud Interconnect, Cloud VPN, or a Router appliance VM. If there are multiple paths to the IP endpoint, routing follows the behavior described in the VPC routes overview and the Cloud Router overview.

Firewall rules on your on-premises or other cloud. The following firewall rules must be created on your on-premises or other cloud environment:
- Ingress allow firewall rules to allow traffic from Google's health-checking probes to your endpoints. The ranges to be allowed are 35.191.0.0/16 and 130.211.0.0/22. Note that these ranges must also be advertised by Cloud Router to your on-premises network. For more details, see Probe IP ranges and firewall rules.
- Ingress allow firewall rules to allow traffic that is being load balanced to reach the endpoints.
- For Envoy-based load balancers—regional external Application Load Balancers, regional internal Application Load Balancers, cross-region internal Application Load Balancers, regional external proxy Network Load Balancers, cross-region internal proxy Network Load Balancers, and regional internal proxy Network Load Balancers—you also need to create a firewall rule to allow traffic from the region's proxy-only subnet to reach the endpoints that are on-premises or in other cloud environments.
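The exact firewall configuration on the on-premises or other cloud side depends entirely on your vendor. As an illustration only, a Linux host using iptables could express the intent of the rules above like this, for a backend listening on port 80 (10.129.0.0/23 is a hypothetical stand-in for the region's proxy-only subnet range):

```shell
# Illustrative sketch only — translate to your firewall vendor's syntax.
# Allow Google's centralized health check probe ranges.
iptables -A INPUT -p tcp --dport 80 -s 35.191.0.0/16  -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -s 130.211.0.0/22 -j ACCEPT
# For Envoy-based load balancers: allow the region's proxy-only subnet
# (placeholder range), which carries both data-plane traffic and
# distributed Envoy health checks.
iptables -A INPUT -p tcp --dport 80 -s 10.129.0.0/23  -j ACCEPT
```

Whichever product you use, the rules must cover both health check probes and the load-balanced traffic itself.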
Load balancer components
Depending on the type of load balancer, you can set up a hybrid load-balancing deployment by using either the Standard or the Premium Network Service Tier. A hybrid load balancer requires special configuration only for the backend service. The frontend configuration is the same as for any other load balancer. The Envoy-based load balancers—regional external Application Load Balancers, regional internal Application Load Balancers, cross-region internal Application Load Balancers, regional external proxy Network Load Balancers, cross-region internal proxy Network Load Balancers, and regional internal proxy Network Load Balancers—require an additional proxy-only subnet to run Envoy proxies on your behalf.
Frontend configuration
No special frontend configuration is required for hybrid load balancing. Forwarding rules are used to route traffic by IP address, port, and protocol to a target proxy. The target proxy then terminates connections from clients.
URL maps are used by HTTP(S) load balancers to set up URL-based routing of requests to the appropriate backend services.
For more details on each of these components, refer to the architecture sections of the specific load balancer overviews:
- External Application Load Balancer
- Internal Application Load Balancer
- External proxy Network Load Balancer
Backend service
Backend services provide configuration information to the load balancer. Load balancers use the information in a backend service to direct incoming traffic to one or more attached backends.
To set up a hybrid load balancing deployment, you configure the load balancer with backends that are both within and outside of Google Cloud.
Non-Google Cloud backends (on-premises or other cloud)
Any destination that you can reach using Google's hybrid connectivity products (Cloud VPN, Cloud Interconnect, or Router appliance VMs), and that can be reached with a valid IP:Port combination, can be configured as an endpoint for the load balancer.

Configure your non-Google Cloud backends as follows:
- Add each non-Google Cloud network endpoint's IP:Port combination to a hybrid connectivity network endpoint group (NEG). Make sure this IP address and port is reachable from Google Cloud by using hybrid connectivity (Cloud VPN, Cloud Interconnect, or Router appliance VMs). For hybrid connectivity NEGs, you set the network endpoint type to NON_GCP_PRIVATE_IP_PORT.
- While creating the NEG, specify a Google Cloud zone that minimizes the geographic distance between Google Cloud and your on-premises or other cloud environment. For example, if you are hosting a service in an on-premises environment in Frankfurt, Germany, you can specify the europe-west3-a Google Cloud zone when you create the NEG.
- Add this hybrid connectivity NEG as a backend for the backend service.
A hybrid connectivity NEG must only include non-Google Cloud endpoints. Traffic might be dropped if a hybrid NEG includes endpoints for resources within a Google Cloud VPC network, such as forwarding rule IP addresses for internal passthrough Network Load Balancers. Configure Google Cloud endpoints as directed in the next section.
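The steps above can be sketched with two gcloud commands. The NEG name, network name, and the endpoint 192.0.2.10:443 are hypothetical placeholders; the endpoint must be reachable over your hybrid connectivity.

```shell
# Create a hybrid connectivity NEG in a zone close to the on-premises
# environment (Frankfurt example from the text).
gcloud compute network-endpoint-groups create on-prem-neg \
    --network-endpoint-type=NON_GCP_PRIVATE_IP_PORT \
    --zone=europe-west3-a \
    --network=my-vpc

# Add an on-premises endpoint (placeholder IP:port) to the NEG.
gcloud compute network-endpoint-groups update on-prem-neg \
    --zone=europe-west3-a \
    --add-endpoint="ip=192.0.2.10,port=443"
```

The NEG is then attached to a backend service with `gcloud compute backend-services add-backend`, as covered later in this section.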
Google Cloud backends
Configure your Google Cloud endpoints as follows:
- Create a separate backend service for the Google Cloud backends.
- Configure multiple backends (either GCE_VM_IP_PORT zonal NEGs or instance groups) within the same region in which you have set up hybrid connectivity.
Additional points for consideration:
Each hybrid connectivity NEG can only contain network endpoints of the same type (NON_GCP_PRIVATE_IP_PORT).

You can use a single backend service to reference both Google Cloud-based backends (using zonal NEGs with GCE_VM_IP_PORT endpoints) and on-premises or other cloud backends (using hybrid connectivity NEGs with NON_GCP_PRIVATE_IP_PORT endpoints). No other combination of mixed backend types is allowed. Cloud Service Mesh does not support mixed backend types in a single backend service.

Note: For GKE deployments, mixed NEG backends are only supported with standalone NEGs.
The backend service's load balancing scheme must be one of the following:
- EXTERNAL_MANAGED for global external Application Load Balancers, regional external Application Load Balancers, global external proxy Network Load Balancers, and regional external proxy Network Load Balancers
- EXTERNAL for classic Application Load Balancers and classic proxy Network Load Balancers
- INTERNAL_MANAGED for internal Application Load Balancers and internal proxy Network Load Balancers
- INTERNAL_SELF_MANAGED, which is supported for Cloud Service Mesh multi-environment deployments with hybrid connectivity NEGs
The backend service protocol must be one of HTTP, HTTPS, or HTTP2 for the Application Load Balancers, and either TCP or SSL for the proxy Network Load Balancers. For the list of backend service protocols supported by each load balancer, see Protocols from the load balancer to the backend.

The balancing mode for the hybrid NEG backend must be RATE for Application Load Balancers and CONNECTION for proxy Network Load Balancers. For details on balancing modes, see Backend services overview.

To add more network endpoints, update the backends attached to your backend service.
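Putting the scheme, protocol, and balancing-mode requirements together, a backend service for a regional external Application Load Balancer with a hybrid NEG backend could be sketched as follows. All resource names, the region, the zone, and the rate limit are hypothetical placeholders.

```shell
# Backend service: EXTERNAL_MANAGED scheme and HTTP protocol, per the
# requirements for a regional external Application Load Balancer.
gcloud compute backend-services create hybrid-backend-service \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --protocol=HTTP \
    --health-checks=my-health-check \
    --health-checks-region=us-central1 \
    --region=us-central1

# Attach the hybrid NEG with balancing mode RATE (required for
# Application Load Balancers; use CONNECTION for proxy NLBs).
gcloud compute backend-services add-backend hybrid-backend-service \
    --region=us-central1 \
    --network-endpoint-group=on-prem-neg \
    --network-endpoint-group-zone=us-central1-a \
    --balancing-mode=RATE \
    --max-rate-per-endpoint=100
```

Adding more endpoints later means updating the NEG itself; the backend attachment doesn't need to change.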
If you're using distributed Envoy health checks with hybrid connectivity NEG backends (supported only for Envoy-based load balancers), make sure that you configure unique network endpoints for all the NEGs attached to the same backend service. Adding the same network endpoint to multiple NEGs results in undefined behavior.
Centralized health checks
Centralized health checks, when using hybrid NEGs, are required for global external Application Load Balancers, classic Application Load Balancers, global external proxy Network Load Balancers, and classic proxy Network Load Balancers. Other Envoy-based load balancers use distributed Envoy health checks as described in the following section.
For NON_GCP_PRIVATE_IP_PORT endpoints outside Google Cloud, create firewall rules on your on-premises and other cloud networks. Contact your network administrator for this. The Cloud Router used for hybrid connectivity must also advertise the ranges used by Google's health check probes. The ranges to be advertised are 35.191.0.0/16 and 130.211.0.0/22.
For other types of backends within Google Cloud, create firewall rules on Google Cloud as demonstrated in this example.
Related documentation:
- Set up a global external Application Load Balancer with hybrid connectivity
- Set up a classic Application Load Balancer with hybrid connectivity
Distributed Envoy health checks
Your health check configuration varies depending on the type of load balancer:
- Global external Application Load Balancer, classic Application Load Balancer, global external proxy Network Load Balancer, and classic proxy Network Load Balancer. These load balancers don't support distributed Envoy health checks. They use Google's centralized health checking mechanism as described in the section Centralized health checks.
- Regional external Application Load Balancer, regional internal Application Load Balancer, regional external proxy Network Load Balancer, regional internal proxy Network Load Balancer, cross-region internal proxy Network Load Balancer, and cross-region internal Application Load Balancer. These load balancers use distributed Envoy health checks to check the health of hybrid NEGs. The health check probes originate from the Envoy proxy software itself. Each backend service must be associated with a health check that checks the health of the backends. Health check probes originate from the Envoy proxies in the proxy-only subnet in the region. For the health check probes to function correctly, you must create firewall rules in the external environment that allow traffic from the proxy-only subnet to reach your external backends.
For NON_GCP_PRIVATE_IP_PORT endpoints outside Google Cloud, you must create these firewall rules on your on-premises and other cloud networks. Contact your network administrator for this. The Cloud Router that you use for hybrid connectivity must also advertise the region's proxy-only subnet range.
Distributed Envoy health checks are created by using the same Google Cloud console, gcloud CLI, and API processes as centralized health checks. No other configuration is required.
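For example, a regional HTTP health check that an Envoy-based load balancer would then run in distributed fashion is created exactly like any other health check. The name, region, port, path, and timing values below are hypothetical.

```shell
# Standard health check creation — nothing hybrid-specific is needed.
gcloud compute health-checks create http my-health-check \
    --region=us-central1 \
    --port=443 \
    --request-path=/healthz \
    --check-interval=10s \
    --timeout=5s \
    --healthy-threshold=2 \
    --unhealthy-threshold=3
```

Note the constraints listed below (for example, no gRPC health checks) before choosing the health check type.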
Points to note:
- gRPC health checks are not supported.
- Health checks with PROXY protocol v1 enabled are not supported.
- If you use mixed NEGs where a single backend service has a combination of zonal NEGs (GCE_VM_IP_PORT endpoints within Google Cloud) and hybrid NEGs (NON_GCP_PRIVATE_IP_PORT endpoints outside Google Cloud), you need to set up firewall rules to allow traffic from Google health check probe IP ranges (130.211.0.0/22 and 35.191.0.0/16) to the zonal NEG endpoints on Google Cloud. This is because zonal NEGs use Google's centralized health checking system.
- Because the Envoy data plane handles health checks, you cannot use the Google Cloud console, the API, or the gcloud CLI to check the health status of these external endpoints. For hybrid NEGs with Envoy-based load balancers, the Google Cloud console shows the health check status as N/A. This is expected.
- Every Envoy proxy assigned to the proxy-only subnet in the region in the VPC network initiates health checks independently. Therefore, you might see an increase in network traffic because of health checking. The increase depends on the number of Envoy proxies assigned to your VPC network in a region, the amount of traffic received by these proxies, and the number of endpoints that each Envoy proxy needs to health check. In the worst case scenario, network traffic because of health checks increases at a quadratic (O(n^2)) rate.
- Health check logs for distributed Envoy health checks don't include detailed health states. For details about what is logged, see Health check logging. To further troubleshoot connectivity from Envoy proxies to your NEG endpoints, you should also check the respective load balancer logs.
Related documentation:
- Set up a regional external Application Load Balancer with hybrid connectivity
- Set up a regional internal Application Load Balancer with hybrid connectivity
- Set up a cross-region internal proxy Network Load Balancer with hybrid connectivity
Limitations
- The Cloud Router used for hybrid connectivity must be enabled with global dynamic routing. Regional dynamic routing and static routes are not supported.
- For the Envoy-based regional load balancers—regional external Application Load Balancers, regional external proxy Network Load Balancers, regional internal proxy Network Load Balancers, and regional internal Application Load Balancers—hybrid connectivity must be configured in the same region as the load balancer. If they are configured in different regions, you might see backends as healthy, but client requests won't be forwarded to the backends.
- The considerations for encrypted connections from the load balancer to the backends documented here also apply to non-Google Cloud backend endpoints configured in the hybrid connectivity NEG.
- Make sure that you also review the security settings on your hybrid connectivity configuration. HA VPN connections are encrypted by default (IPsec). Cloud Interconnect connections are not encrypted by default. For more details, see the Encryption in transit whitepaper.
Logging
Requests proxied to an endpoint in a hybrid NEG are logged to Cloud Logging in the same way that requests for other backends are logged. If you enable Cloud CDN for your global external Application Load Balancer, cache hits are also logged.
For more information, see:
- External Application Load Balancer logging and monitoring
- Internal Application Load Balancer logging and monitoring
Quota
You can configure as many hybrid NEGs with network endpoints as permitted by your existing network endpoint group quota. For more information, see NEG backends and Endpoints per NEG.
What's next
- Set up a classic Application Load Balancer with hybrid connectivity
- Set up a regional external Application Load Balancer with hybrid connectivity
- Set up a regional internal Application Load Balancer with hybrid connectivity
- Set up a regional internal proxy Network Load Balancer with hybrid connectivity
- Set up a cross-region internal proxy Network Load Balancer with hybrid connectivity
- Set up a regional external proxy Network Load Balancer with hybrid connectivity
- To learn more about hybrid connectivity with Cloud Service Mesh, see the Cloud Service Mesh hybrid connectivity overview.
- To configure Cloud Service Mesh for hybrid deployments, see Network edge services for multi-environment deployments.
Last updated 2026-02-18 UTC.