Set up a cross-region internal proxy Network Load Balancer with hybrid connectivity
This page shows how to deploy a cross-region internal proxy Network Load Balancer to load balance traffic to network endpoints that are on-premises or in other public clouds and that are reachable by using hybrid connectivity.

If you haven't already done so, review the Hybrid connectivity NEGs overview to understand the network requirements to set up hybrid load balancing.
Setup overview
The example sets up a cross-region internal proxy Network Load Balancer for mixed zonal and hybrid connectivity NEG backends, as shown in the following figure:

You must configure hybrid connectivity before setting up a hybrid load balancing deployment. Depending on your choice of hybrid connectivity product, use either Cloud VPN or Cloud Interconnect (Dedicated or Partner).
Permissions
To set up hybrid load balancing, you must have the following permissions:
On Google Cloud
- Permissions to establish hybrid connectivity between Google Cloud and your on-premises environment or other cloud environments. For the list of permissions needed, see the relevant Network Connectivity product documentation.
- Permissions to create a hybrid connectivity NEG and the load balancer. The Compute Load Balancer Admin role (roles/compute.loadBalancerAdmin) contains the permissions required to perform the tasks described in this guide.
On your on-premises environment or other cloud environment
- Permissions to configure network endpoints that allow services on your on-premises environment or other cloud environments to be reachable from Google Cloud by using an IP:Port combination. For more information, contact your environment's network administrator.
- Permissions to create firewall rules on your on-premises environment or other cloud environments to allow Google's health check probes to reach the endpoints.
Additionally, to complete the instructions on this page, you need to create a hybrid connectivity NEG, a load balancer, and zonal NEGs (and their endpoints) to serve as Google Cloud-based backends for the load balancer.

You should be either a project Owner or Editor, or you should have the following Compute Engine IAM roles.
| Task | Required role |
|---|---|
| Create networks, subnets, and load balancer components | Compute Network Admin (roles/compute.networkAdmin) |
| Add and remove firewall rules | Compute Security Admin (roles/compute.securityAdmin) |
| Create instances | Compute Instance Admin (roles/compute.instanceAdmin) |
Establish hybrid connectivity
Your Google Cloud and on-premises environment or other cloud environments must be connected through hybrid connectivity by using either Cloud Interconnect VLAN attachments or Cloud VPN tunnels with Cloud Router or Router appliance VMs. We recommend that you use a high availability connection.
A Cloud Router enabled with global dynamic routing learns about the specific endpoint through Border Gateway Protocol (BGP) and programs it into your Google Cloud VPC network. Regional dynamic routing is not supported. Static routes are also not supported.
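If your VPC network currently uses regional dynamic routing, you can switch its dynamic routing mode. This is a sketch, assuming an existing VPC network named NETWORK (the placeholder used throughout this page):

```shell
# Enable global dynamic routing so routes that Cloud Router learns over BGP
# are programmed into all regions of the VPC network.
gcloud compute networks update NETWORK \
    --bgp-routing-mode=global

# Confirm the routing mode.
gcloud compute networks describe NETWORK \
    --format="value(routingConfig.routingMode)"
```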
You can use either the same VPC network or a different VPC network within the same project to configure both hybrid networking (Cloud Interconnect, Cloud VPN, or a Router appliance VM) and the load balancer. Note the following:

If you use different VPC networks, the two networks must be connected by using VPC Network Peering, or they must be VPC spokes on the same Network Connectivity Center hub.
If you use the same VPC network, ensure that your VPC network's subnet CIDR ranges don't conflict with your remote CIDR ranges. When IP addresses overlap, subnet routes are prioritized over remote connectivity.
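One way to check for conflicts is to list the subnet ranges already in use in the network and compare them against your remote CIDR ranges. This sketch assumes the NETWORK name used elsewhere on this page:

```shell
# List subnet CIDR ranges in the VPC network; compare these against your
# on-premises or other cloud ranges to rule out overlap.
gcloud compute networks subnets list \
    --network=NETWORK \
    --format="table(name,region.basename(),ipCidrRange)"
```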
For instructions, see the Cloud Interconnect and Cloud VPN documentation.

Important: Don't proceed with the instructions on this page until you set up hybrid connectivity between your environments.

Set up your environment that is outside Google Cloud
Perform the following steps to set up your on-premises environment or other cloud environment for hybrid load balancing:

- Configure network endpoints to expose on-premises services to Google Cloud (IP:Port).
- Configure firewall rules on your on-premises environment or other cloud environment.
- Configure Cloud Router to advertise certain required routes to your private environment.
Set up network endpoints
After you set up hybrid connectivity, you configure one or more network endpoints within your on-premises environment or other cloud environments that are reachable through Cloud Interconnect, Cloud VPN, or Router appliance by using an IP:port combination. This IP:port combination is configured as one or more endpoints for the hybrid connectivity NEG that is created in Google Cloud later in this process.
If there are multiple paths to the IP endpoint, routing follows the behavior described in the Cloud Router overview.
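Before continuing, it can help to confirm that an endpoint is reachable over the hybrid connection. The following sketch, run from a VM inside the connected VPC network, uses the ON_PREM_IP_ADDRESS_1 and PORT_1 placeholder values that also appear later on this page:

```shell
# From a VM inside the VPC network, verify that the on-premises endpoint
# responds at its IP:port combination.
curl --connect-timeout 5 http://ON_PREM_IP_ADDRESS_1:PORT_1/
```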
Set up firewall rules
The following firewall rules must be created on your on-premises environment or other cloud environment:

- Create an ingress allow firewall rule in on-premises or other cloud environments to allow traffic from the region's proxy-only subnet to reach the endpoints.
Allowing traffic from Google's health check probe ranges isn't required for hybrid NEGs. However, if you're using a combination of hybrid and zonal NEGs in a single backend service, you need to allow traffic from the Google health check probe ranges for the zonal NEGs.
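The exact syntax depends on the firewall in your environment. As an illustrative sketch only (this is not Google Cloud configuration), on a Linux endpoint that uses iptables, ingress allow rules for both regions' proxy-only subnet ranges might look like the following; PROXY_ONLY_SUBNET_RANGE1, PROXY_ONLY_SUBNET_RANGE2, and PORT_1 stand for the values used elsewhere on this page:

```shell
# Allow TCP traffic from both proxy-only subnet ranges to the service port.
iptables -A INPUT -p tcp -s PROXY_ONLY_SUBNET_RANGE1 --dport PORT_1 -j ACCEPT
iptables -A INPUT -p tcp -s PROXY_ONLY_SUBNET_RANGE2 --dport PORT_1 -j ACCEPT
```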
Advertise routes
Configure Cloud Router to advertise the following custom IP ranges to your on-premises environment or other cloud environment:

- The range of the region's proxy-only subnet.
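For example, assuming a Cloud Router named ROUTER with a BGP peer named PEER (hypothetical names), the proxy-only subnet ranges can be advertised as custom ranges alongside the subnet routes:

```shell
# Advertise the subnet routes plus the proxy-only subnet ranges to the peer.
gcloud compute routers update-bgp-peer ROUTER \
    --peer-name=PEER \
    --region=REGION_A \
    --advertisement-mode=CUSTOM \
    --set-advertisement-groups=ALL_SUBNETS \
    --set-advertisement-ranges=PROXY_ONLY_SUBNET_RANGE1,PROXY_ONLY_SUBNET_RANGE2
```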
Set up the Google Cloud environment
For the following steps, make sure you use the same VPC network (called NETWORK in this procedure) that was used to configure hybrid connectivity between the environments.

Additionally, make sure the regions used (called REGION_A and REGION_B in this procedure) are the same as those used to create the Cloud VPN tunnels or Cloud Interconnect VLAN attachments.
Configure the backend subnets
Use these subnets to create the load balancer's zonal NEG backends:
Console
In the Google Cloud console, go to the VPC networks page.

Go to the network that was used to configure hybrid connectivity between the environments.

In the Subnets section:

- Set the Subnet creation mode to Custom.
- In the New subnet section, enter the following information:
  - Provide a Name for the subnet.
  - Select a Region: REGION_A
  - Enter an IP address range.
  - Click Done.

Click Create.

To add more subnets in different regions, click Add subnet and repeat the previous steps for REGION_B.
gcloud
Create subnets in the network that was used to configure hybrid connectivity between the environments.

gcloud compute networks subnets create SUBNET_A \ --network=NETWORK \ --range=LB_SUBNET_RANGE1 \ --region=REGION_A

gcloud compute networks subnets create SUBNET_B \ --network=NETWORK \ --range=LB_SUBNET_RANGE2 \ --region=REGION_B
API
Make a POST request to the subnetworks.insert method. Replace PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_A/subnetworks
{
  "name": "SUBNET_A",
  "network": "projects/PROJECT_ID/global/networks/NETWORK",
  "ipCidrRange": "LB_SUBNET_RANGE1",
  "region": "projects/PROJECT_ID/regions/REGION_A"
}

Make another POST request to the subnetworks.insert method:

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_B/subnetworks
{
  "name": "SUBNET_B",
  "network": "projects/PROJECT_ID/global/networks/NETWORK",
  "ipCidrRange": "LB_SUBNET_RANGE2",
  "region": "projects/PROJECT_ID/regions/REGION_B"
}

Replace the following:

- SUBNET_A and SUBNET_B: the names of the subnets
- LB_SUBNET_RANGE1 and LB_SUBNET_RANGE2: the IP address ranges for the subnets
- REGION_A and REGION_B: the regions where you have configured the load balancer
Configure the proxy-only subnet
A proxy-only subnet provides a set of IP addresses that Google uses to run Envoy proxies on your behalf. The proxies terminate connections from the client and create new connections to the backends.
This proxy-only subnet is used by all Envoy-based load balancers in the same region of the VPC network. There can only be one active proxy-only subnet for a given purpose, per region, per network.
Important: Don't try to assign addresses from the proxy-only subnet to your load balancer's forwarding rule or backends. You assign the forwarding rule's IP address and the backend instance IP addresses from a different subnet range (or ranges), not this one. Google Cloud reserves this subnet range for Google Cloud-managed proxies.

Console

If you're using the Google Cloud console, you can wait and create the proxy-only subnet later on the Load balancing page.
If you want to create the proxy-only subnet now, use the following steps:
In the Google Cloud console, go to the VPC networks page.

- Click the name of the VPC network.
- On the Subnets tab, click Add subnet.
- Provide a Name for the proxy-only subnet.
- In the Region list, select REGION_A.
- In the Purpose list, select Cross-region Managed Proxy.
- In the IP address range field, enter 10.129.0.0/23.
- Click Add.

Create the proxy-only subnet in REGION_B:
- Click Add subnet.
- Provide a Name for the proxy-only subnet.
- In the Region list, select REGION_B.
- In the Purpose list, select Cross-region Managed Proxy.
- In the IP address range field, enter 10.130.0.0/23.
- Click Add.
gcloud
Create the proxy-only subnets with the gcloud compute networks subnets create command.

gcloud compute networks subnets create PROXY_SN_A \ --purpose=GLOBAL_MANAGED_PROXY \ --role=ACTIVE \ --region=REGION_A \ --network=NETWORK \ --range=PROXY_ONLY_SUBNET_RANGE1

gcloud compute networks subnets create PROXY_SN_B \ --purpose=GLOBAL_MANAGED_PROXY \ --role=ACTIVE \ --region=REGION_B \ --network=NETWORK \ --range=PROXY_ONLY_SUBNET_RANGE2

Replace the following:

- PROXY_SN_A and PROXY_SN_B: the names of the proxy-only subnets
- PROXY_ONLY_SUBNET_RANGE1 and PROXY_ONLY_SUBNET_RANGE2: the IP address ranges for the proxy-only subnets
- REGION_A and REGION_B: the regions where you have configured the load balancer
API
Create the proxy-only subnets with the subnetworks.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_A/subnetworks
{
  "name": "PROXY_SN_A",
  "ipCidrRange": "PROXY_ONLY_SUBNET_RANGE1",
  "network": "projects/PROJECT_ID/global/networks/NETWORK",
  "region": "projects/PROJECT_ID/regions/REGION_A",
  "purpose": "GLOBAL_MANAGED_PROXY",
  "role": "ACTIVE"
}

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_B/subnetworks
{
  "name": "PROXY_SN_B",
  "ipCidrRange": "PROXY_ONLY_SUBNET_RANGE2",
  "network": "projects/PROJECT_ID/global/networks/NETWORK",
  "region": "projects/PROJECT_ID/regions/REGION_B",
  "purpose": "GLOBAL_MANAGED_PROXY",
  "role": "ACTIVE"
}

Create firewall rules
In this example, you create the following firewall rules for the zonal NEG backends on Google Cloud:

- fw-allow-health-check: An ingress rule, applicable to the instances being load balanced, that allows traffic from Google Cloud health checking systems (130.211.0.0/22 and 35.191.0.0/16). This example uses the target tag allow-health-check to identify the zonal NEGs to which it should apply.
- fw-allow-ssh: An ingress rule that allows incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP range for this rule; for example, you can specify just the IP ranges of the systems from which you will initiate SSH sessions. This example uses the target tag allow-ssh to identify the VMs to which it should apply.
- fw-allow-proxy-only-subnet: An ingress rule that allows connections from the proxy-only subnet to reach the zonal NEG backends.
Console
In the Google Cloud console, go to the Firewall policies page.
Click Create firewall rule to create the rule to allow traffic from health check probes:

- Enter a Name of fw-allow-health-check.
- For Network, select NETWORK.
- For Targets, select Specified target tags.
- Populate the Target tags field with allow-health-check.
- Set Source filter to IPv4 ranges.
- Set Source IPv4 ranges to 130.211.0.0/22 and 35.191.0.0/16.
- For Protocols and ports, select Specified protocols and ports.
- Select TCP and then enter 80 for the port number.
- Click Create.
Click Create firewall rule again to create the rule to allow incoming SSH connections:

- Name: fw-allow-ssh
- Network: NETWORK
- Priority: 1000
- Direction of traffic: ingress
- Action on match: allow
- Targets: Specified target tags
- Target tags: allow-ssh
- Source filter: IPv4 ranges
- Source IPv4 ranges: 0.0.0.0/0
- Protocols and ports: Choose Specified protocols and ports.
- Select TCP and then enter 22 for the port number.
- Click Create.
Click Create firewall rule again to create the rule to allow incoming connections from the proxy-only subnet:

- Name: fw-allow-proxy-only-subnet
- Network: NETWORK
- Priority: 1000
- Direction of traffic: ingress
- Action on match: allow
- Targets: Specified target tags
- Target tags: allow-proxy-only-subnet
- Source filter: IPv4 ranges
- Source IPv4 ranges: PROXY_ONLY_SUBNET_RANGE1 and PROXY_ONLY_SUBNET_RANGE2
- Protocols and ports: Choose Specified protocols and ports.
- Select TCP and then enter 80 for the port number.
- Click Create.
gcloud
Create the fw-allow-health-check rule to allow the Google Cloud health checks to reach the backend instances on TCP port 80:

gcloud compute firewall-rules create fw-allow-health-check \ --network=NETWORK \ --action=allow \ --direction=ingress \ --target-tags=allow-health-check \ --source-ranges=130.211.0.0/22,35.191.0.0/16 \ --rules=tcp:80
Create the fw-allow-ssh firewall rule to allow SSH connectivity to VMs with the network tag allow-ssh. When you omit source-ranges, Google Cloud interprets the rule to mean any source.

gcloud compute firewall-rules create fw-allow-ssh \ --network=NETWORK \ --action=allow \ --direction=ingress \ --target-tags=allow-ssh \ --rules=tcp:22
Create an ingress allow firewall rule for the proxy-only subnet to allow the load balancer to communicate with backend instances on TCP port 80:

gcloud compute firewall-rules create fw-allow-proxy-only-subnet \ --network=NETWORK \ --action=allow \ --direction=ingress \ --target-tags=allow-proxy-only-subnet \ --source-ranges=PROXY_ONLY_SUBNET_RANGE1,PROXY_ONLY_SUBNET_RANGE2 \ --rules=tcp:80
Set up the zonal NEG
For Google Cloud-based backends, we recommend that you configure multiple zonal NEGs in the same regions where you configured hybrid connectivity.

For this example, we set up a zonal NEG (with GCE_VM_IP_PORT type endpoints) in the REGION_A region. First create the VMs in the NEG_ZONE1 zone. Then create a zonal NEG in NEG_ZONE1 and add the VMs' network endpoints to the NEG. To support high availability, we set up a similar zonal NEG in the REGION_B region. If backends in one region happen to be down, traffic fails over to the other region.
Create VMs
Console
In the Google Cloud console, go to the VM instances page.
Repeat steps 3 to 8 for each VM, using the following name and zone combinations.

- Name: vm-a1
  - Zone: NEG_ZONE1 in the region REGION_A
  - Subnet: SUBNET_A
- Name: vm-b1
  - Zone: NEG_ZONE2 in the region REGION_B
  - Subnet: SUBNET_B
Click Create instance.

Set the name as indicated in the preceding step.

For the Region, choose as indicated in the earlier step.

For the Zone, choose as indicated in the earlier step.

In the Boot disk section, ensure that Debian GNU/Linux 12 (bookworm) is selected for the boot disk options. Click Choose to change the image if necessary.

In the Advanced options section, expand Networking, and then do the following:

- Add the following Network tags: allow-ssh, allow-health-check, and allow-proxy-only-subnet.
- In the Network interfaces section, click Add a network interface, make the following changes, and then click Done:
  - Network: NETWORK
  - Subnetwork: as indicated in the earlier step.
  - Primary internal IP: Ephemeral (automatic)
  - External IP: Ephemeral
Expand Management. In the Automation field, copy and paste the following script contents. The script contents are identical for all VMs:

#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://metadata.google.internal/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
systemctl restart apache2
Click Create.
gcloud
Create the VMs by running the following command, using these combinations for the name of the VM and its zone. The script contents are identical for both VMs.
- VM_NAME of vm-a1
  - Zone GCP_NEG_ZONE as NEG_ZONE1 in the region REGION_A
  - Subnet LB_SUBNET_NAME as SUBNET_A
- VM_NAME of vm-b1
  - Zone GCP_NEG_ZONE as NEG_ZONE2 in the region REGION_B
  - Subnet LB_SUBNET_NAME as SUBNET_B

gcloud compute instances create VM_NAME \ --zone=GCP_NEG_ZONE \ --image-family=debian-12 \ --image-project=debian-cloud \ --tags=allow-ssh,allow-health-check,allow-proxy-only-subnet \ --subnet=LB_SUBNET_NAME \ --metadata=startup-script='#! /bin/bash apt-get update apt-get install apache2 -y a2ensite default-ssl a2enmod ssl vm_hostname="$(curl -H "Metadata-Flavor:Google" \ http://metadata.google.internal/computeMetadata/v1/instance/name)" echo "Page served from: $vm_hostname" | \ tee /var/www/html/index.html systemctl restart apache2'
Create the zonal NEG
Console
To create a zonal network endpoint group:
In the Google Cloud console, go to the Network Endpoint Groups page.
Repeat steps 3 to 8 for each zonal NEG, using the following name and zone combinations:

- Name: neg-1
  - Zone: NEG_ZONE1 in the region REGION_A
  - Subnet: SUBNET_A
- Name: neg-2
  - Zone: NEG_ZONE2 in the region REGION_B
  - Subnet: SUBNET_B
Click Create network endpoint group.

Set the name as indicated in the preceding step.

Select the Network endpoint group type: Network endpoint group (Zonal).

Select the Network: NETWORK

Select the Subnetwork as indicated in the earlier step.

Select the Zone as indicated in the earlier step.

Enter the Default port: 80.

Click Create.
Add endpoints to the zonal NEG:
In the Google Cloud console, go to the Network Endpoint Groups page.
Click the Name of the network endpoint group created in the previous step. You see the Network endpoint group details page.

In the Network endpoints in this group section, click Add network endpoint. You see the Add network endpoint page.
Select a VM instance to add its internal IP addresses as network endpoints. In the Network interface section, the name, zone, and subnet of the VM are displayed.

Enter the IP address of the new network endpoint.

Select the Port type.

- If you select Default, the endpoint uses the default port 80 for all endpoints in the network endpoint group. This is sufficient for our example because the Apache server is serving requests at port 80.
- If you select Custom, enter the Port number for the endpoint to use.

To add more endpoints, click Add network endpoint and repeat the previous steps.

After you add all the endpoints, click Create.
gcloud
Create zonal NEGs (with GCE_VM_IP_PORT endpoints) using the following name, zone, and subnet combinations. Use the gcloud compute network-endpoint-groups create command.

- Name: neg1
  - Zone GCP_NEG_ZONE: NEG_ZONE1 in the region REGION_A
  - Subnet LB_SUBNET_NAME: SUBNET_A
- Name: neg2
  - Zone GCP_NEG_ZONE: NEG_ZONE2 in the region REGION_B
  - Subnet LB_SUBNET_NAME: SUBNET_B

gcloud compute network-endpoint-groups create GCP_NEG_NAME \ --network-endpoint-type=GCE_VM_IP_PORT \ --zone=GCP_NEG_ZONE \ --network=NETWORK \ --subnet=LB_SUBNET_NAME

You can either specify a port using the --default-port option while creating the NEG, or specify a port number for each endpoint as shown in the next step.
Add endpoints to neg1 and neg2.

gcloud compute network-endpoint-groups update neg1 \ --zone=NEG_ZONE1 \ --add-endpoint='instance=vm-a1,port=80'
gcloud compute network-endpoint-groups update neg2 \ --zone=NEG_ZONE2 \ --add-endpoint='instance=vm-b1,port=80'
Set up the hybrid connectivity NEG
When creating the NEG, use a zone that minimizes the geographic distance between Google Cloud and your on-premises or other cloud environment. If you're using Cloud Interconnect, choose a zone in the same region where the Cloud Interconnect attachment was configured.

Hybrid NEGs support only distributed Envoy health checks.
Console
To create a hybrid connectivity network endpoint group:
In the Google Cloud console, go to the Network Endpoint Groups page.

Click Create network endpoint group.
Repeat steps 4 to 9 for each hybrid NEG, using the following name and zone combinations.

- Name ON_PREM_NEG_NAME: hybrid-1
  - Zone: ON_PREM_NEG_ZONE1
  - Subnet: SUBNET_A
- Name ON_PREM_NEG_NAME: hybrid-2
  - Zone: ON_PREM_NEG_ZONE2
  - Subnet: SUBNET_B
Set the name as indicated in the previous step.

Select the Network endpoint group type: Hybrid connectivity network endpoint group (Zonal).

Select the Network: NETWORK

For the Subnet, choose as indicated in the previous step.

For the Zone, choose as indicated in the previous step.

Enter the Default port.

Click Create.
Add endpoints to the hybrid connectivity NEG:
In the Google Cloud console, go to the Network Endpoint Groups page.

Click the Name of the network endpoint group created in the previous step. You see the Network endpoint group details page.

In the Network endpoints in this group section, click Add network endpoint. You see the Add network endpoint page.

Enter the IP address of the new network endpoint.

Select the Port type.

- If you select Default, the endpoint uses the default port for all endpoints in the network endpoint group.
- If you select Custom, you can enter a different Port number for the endpoint to use.

To add more endpoints, click Add network endpoint and repeat the previous steps.

After you add all the non-Google Cloud endpoints, click Create.
gcloud
Create a hybrid connectivity NEG that uses the following name combinations. Use the gcloud compute network-endpoint-groups create command.

- Name ON_PREM_NEG_NAME: hybrid1
  - Zone ON_PREM_NEG_ZONE: ON_PREM_NEG_ZONE1
- Name ON_PREM_NEG_NAME: hybrid2
  - Zone ON_PREM_NEG_ZONE: ON_PREM_NEG_ZONE2

gcloud compute network-endpoint-groups create ON_PREM_NEG_NAME \ --network-endpoint-type=NON_GCP_PRIVATE_IP_PORT \ --zone=ON_PREM_NEG_ZONE \ --network=NETWORK
Add the on-premises backend VM endpoints to ON_PREM_NEG_NAME:

gcloud compute network-endpoint-groups update ON_PREM_NEG_NAME \ --zone=ON_PREM_NEG_ZONE \ --add-endpoint="ip=ON_PREM_IP_ADDRESS_1,port=PORT_1" \ --add-endpoint="ip=ON_PREM_IP_ADDRESS_2,port=PORT_2"

You can use this command to add the network endpoints you previously configured on-premises or in your cloud environment. Repeat --add-endpoint as many times as needed.
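To check your work, you can list the endpoints that are now registered in the NEG (a sketch using the placeholder names from this page):

```shell
# List the endpoints registered in the hybrid connectivity NEG.
gcloud compute network-endpoint-groups list-network-endpoints ON_PREM_NEG_NAME \
    --zone=ON_PREM_NEG_ZONE
```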
Configure the load balancer
Console
Note: You cannot use the Google Cloud console to create a load balancer that has mixed zonal and hybrid connectivity NEG backends in a single backend service. Use either gcloud or the REST API instead.

gcloud
Define the TCP health check with the gcloud compute health-checks create tcp command.

gcloud compute health-checks create tcp gil4-basic-check \ --use-serving-port \ --global
Create the backend service and enable logging with the gcloud compute backend-services create command.

gcloud compute backend-services create BACKEND_SERVICE \ --load-balancing-scheme=INTERNAL_MANAGED \ --protocol=TCP \ --enable-logging \ --logging-sample-rate=1.0 \ --health-checks=gil4-basic-check \ --global-health-checks \ --global
Add backends to the backend service with the gcloud compute backend-services add-backend command. Run the command once for each zonal NEG:

gcloud compute backend-services add-backend BACKEND_SERVICE \ --global \ --balancing-mode=CONNECTION \ --max-connections-per-endpoint=MAX_CONNECTIONS \ --network-endpoint-group=neg1 \ --network-endpoint-group-zone=NEG_ZONE1

gcloud compute backend-services add-backend BACKEND_SERVICE \ --global \ --balancing-mode=CONNECTION \ --max-connections-per-endpoint=MAX_CONNECTIONS \ --network-endpoint-group=neg2 \ --network-endpoint-group-zone=NEG_ZONE2

For details about configuring the balancing mode, see the gcloud CLI documentation for the --max-connections-per-endpoint flag. For MAX_CONNECTIONS, enter the maximum concurrent connections for the backend to handle.

Add the hybrid NEGs as backends to the backend service.
gcloud compute backend-services add-backend BACKEND_SERVICE \ --global \ --balancing-mode=CONNECTION \ --max-connections-per-endpoint=MAX_CONNECTIONS \ --network-endpoint-group=hybrid1 \ --network-endpoint-group-zone=ON_PREM_NEG_ZONE1

gcloud compute backend-services add-backend BACKEND_SERVICE \ --global \ --balancing-mode=CONNECTION \ --max-connections-per-endpoint=MAX_CONNECTIONS \ --network-endpoint-group=hybrid2 \ --network-endpoint-group-zone=ON_PREM_NEG_ZONE2

For details about configuring the balancing mode, see the gcloud CLI documentation for the --max-connections-per-endpoint flag. For MAX_CONNECTIONS, enter the maximum concurrent connections for the backend to handle.

Create the target proxy.
Create the target proxy with the gcloud compute target-tcp-proxies create command.

gcloud compute target-tcp-proxies create gil4-tcp-proxy \ --backend-service=BACKEND_SERVICE \ --global
Create two forwarding rules, one with a VIP IP_ADDRESS1 in REGION_A and another one with a VIP IP_ADDRESS2 in REGION_B. For the forwarding rule's IP address, use the LB_SUBNET_RANGE1 or LB_SUBNET_RANGE2 IP address range. If you try to use the proxy-only subnet, forwarding rule creation fails. For custom networks, you must reference the subnet in the forwarding rule. Note that this is the VM subnet, not the proxy subnet.
Use the gcloud compute forwarding-rules create command with the correct flags.

gcloud compute forwarding-rules create gil4-forwarding-rule-a \ --load-balancing-scheme=INTERNAL_MANAGED \ --network=NETWORK \ --subnet=SUBNET_A \ --subnet-region=REGION_A \ --address=IP_ADDRESS1 \ --ports=80 \ --target-tcp-proxy=gil4-tcp-proxy \ --global
gcloud compute forwarding-rules create gil4-forwarding-rule-b \ --load-balancing-scheme=INTERNAL_MANAGED \ --network=NETWORK \ --subnet=SUBNET_B \ --subnet-region=REGION_B \ --address=IP_ADDRESS2 \ --ports=80 \ --target-tcp-proxy=gil4-tcp-proxy \ --global
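Optionally, confirm that both forwarding rules were created and note their VIPs before testing (a sketch using the rule names from this page):

```shell
# List the forwarding rules created above and show their IP addresses.
gcloud compute forwarding-rules list \
    --filter="name ~ gil4-forwarding-rule" \
    --format="table(name,IPAddress,target)"
```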
Test the load balancer
Create a VM instance to test connectivity
Create client VMs in the REGION_A and REGION_B regions:

gcloud compute instances create l4-ilb-client-a \ --image-family=debian-12 \ --image-project=debian-cloud \ --network=NETWORK \ --subnet=SUBNET_A \ --zone=NEG_ZONE1 \ --tags=allow-ssh
gcloud compute instances create l4-ilb-client-b \ --image-family=debian-12 \ --image-project=debian-cloud \ --network=NETWORK \ --subnet=SUBNET_B \ --zone=NEG_ZONE2 \ --tags=allow-ssh
Use SSH to connect to each client instance.
gcloud compute ssh l4-ilb-client-a \ --zone=NEG_ZONE1
gcloud compute ssh l4-ilb-client-b \ --zone=NEG_ZONE2
Verify that the IP address is serving its hostname.
Verify that the client VM can reach both IP addresses. The command should succeed and return the name of the backend VM that served the request:

curl IP_ADDRESS1

curl IP_ADDRESS2
Run 100 requests
Run 100 curl requests and confirm from the responses that they are load balanced.

Verify that the client VM can reach both IP addresses. The command should succeed and return the name of the backend VM that served the request:

{
  RESULTS=
  for i in {1..100}
  do
    RESULTS="$RESULTS:$(curl --silent IP_ADDRESS1)"
  done
  echo "***"
  echo "*** Results of load-balancing to IP_ADDRESS1: "
  echo "***"
  echo "$RESULTS" | tr ':' '\n' | grep -Ev "^$" | sort | uniq -c
  echo
}

{
  RESULTS=
  for i in {1..100}
  do
    RESULTS="$RESULTS:$(curl --silent IP_ADDRESS2)"
  done
  echo "***"
  echo "*** Results of load-balancing to IP_ADDRESS2: "
  echo "***"
  echo "$RESULTS" | tr ':' '\n' | grep -Ev "^$" | sort | uniq -c
  echo
}
Test failover
Verify failover to backends in the REGION_A region when backends in the REGION_B region are unhealthy or unreachable. We simulate this by removing all the backends from REGION_B:

gcloud compute backend-services remove-backend BACKEND_SERVICE \ --global \ --network-endpoint-group=neg2 \ --network-endpoint-group-zone=NEG_ZONE2
Use SSH to connect to the client VM in REGION_B.

gcloud compute ssh l4-ilb-client-b \ --zone=NEG_ZONE2
Send requests to the load-balanced IP address in the REGION_B region. The command output should display responses from backend VMs in REGION_A.

{
  RESULTS=
  for i in {1..100}
  do
    RESULTS="$RESULTS:$(curl --silent IP_ADDRESS2)"
  done
  echo "***"
  echo "*** Results of load-balancing to IP_ADDRESS2: "
  echo "***"
  echo "$RESULTS" | tr ':' '\n' | grep -Ev "^$" | sort | uniq -c
  echo
}
What's next
- Convert proxy Network Load Balancer to IPv6
- Internal proxy Network Load Balancer overview
- Proxy-only subnets for Envoy-based load balancers
- Clean up a load balancing setup
Last updated 2025-10-24 UTC.