Set up a cross-region internal proxy Network Load Balancer with hybrid connectivity

This page shows how to deploy a cross-region internal proxy Network Load Balancer to load balance traffic to network endpoints that are on-premises or in other public clouds and that are reachable by using hybrid connectivity.

If you haven't already done so, review the Hybrid connectivity NEGs overview to understand the network requirements to set up hybrid load balancing.

Setup overview

The example sets up a cross-region internal proxy Network Load Balancer for mixed zonal and hybrid connectivity NEG backends, as shown in the following figure:

Cross-region internal proxy Network Load Balancer example for mixed zonal and hybrid connectivity NEG backends.

You must configure hybrid connectivity before setting up a hybrid load balancing deployment. Depending on your choice of hybrid connectivity product, use either Cloud VPN or Cloud Interconnect (Dedicated or Partner).

Permissions

To set up hybrid load balancing, you must have the following permissions:

  • On Google Cloud

    • Permissions to establish hybrid connectivity between Google Cloud and your on-premises environment or other cloud environments. For the list of permissions needed, see the relevant Network Connectivity product documentation.
    • Permissions to create a hybrid connectivity NEG and the load balancer. The Compute Load Balancer Admin role (roles/compute.loadBalancerAdmin) contains the permissions required to perform the tasks described in this guide.
  • On your on-premises environment or other non-Google Cloud cloud environment

    • Permissions to configure network endpoints that allow services on your on-premises environment or other cloud environments to be reachable from Google Cloud by using an IP:Port combination. For more information, contact your environment's network administrator.
    • Permissions to create firewall rules on your on-premises environment or other cloud environments to allow Google's health check probes to reach the endpoints.

Additionally, to complete the instructions on this page, you need to create a hybrid connectivity NEG, a load balancer, and zonal NEGs (and their endpoints) to serve as Google Cloud-based backends for the load balancer.

You should be either a project Owner or Editor, or you should have the following Compute Engine IAM roles.

  • Create networks, subnets, and load balancer components: Compute Network Admin (roles/compute.networkAdmin)
  • Add and remove firewall rules: Compute Security Admin (roles/compute.securityAdmin)
  • Create instances: Compute Instance Admin (roles/compute.instanceAdmin)

Establish hybrid connectivity

Your Google Cloud and on-premises environment or other cloud environments must be connected through hybrid connectivity by using either Cloud Interconnect VLAN attachments or Cloud VPN tunnels with Cloud Router or Router appliance VMs. We recommend that you use a high availability connection.

A Cloud Router enabled with global dynamic routing learns about the specific endpoint through Border Gateway Protocol (BGP) and programs it into your Google Cloud VPC network. Regional dynamic routing is not supported. Static routes are also not supported.
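To confirm that the Cloud Router has learned your on-premises routes over BGP, you can inspect its status. This is an optional check; ROUTER_NAME is a hypothetical placeholder for your Cloud Router's name, which is not defined elsewhere on this page.

```shell
# Show BGP session state and the routes learned from the on-premises peer.
# ROUTER_NAME is a hypothetical placeholder for your Cloud Router.
gcloud compute routers get-status ROUTER_NAME \
    --region=REGION_A
```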

You can use either the same VPC network or a different VPC network within the same project to configure both hybrid networking (Cloud Interconnect, Cloud VPN, or a Router appliance VM) and the load balancer. Note the following:

  • If you use different VPC networks, the two networks must be connected by using VPC Network Peering, or they must be VPC spokes on the same Network Connectivity Center hub.

  • If you use the same VPC network, ensure that your VPC network's subnet CIDR ranges don't conflict with your remote CIDR ranges. When IP addresses overlap, subnet routes are prioritized over remote connectivity.
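If you choose the different-networks option, a VPC Network Peering configuration might be sketched as follows. This is a hedged sketch: the peering names and the HYBRID_NETWORK placeholder are hypothetical, and each side of the peering must be created separately.

```shell
# Hypothetical sketch: peer the load balancer's network with the network
# that terminates the Cloud VPN tunnel or Cloud Interconnect attachment.
# HYBRID_NETWORK and the peering names are placeholders, not from this guide.
gcloud compute networks peerings create lb-to-hybrid \
    --network=NETWORK \
    --peer-network=HYBRID_NETWORK

gcloud compute networks peerings create hybrid-to-lb \
    --network=HYBRID_NETWORK \
    --peer-network=NETWORK
```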

For instructions, see the Cloud Interconnect, Cloud VPN, and Router appliance documentation.

Important: Don't proceed with the instructions on this page until you set uphybrid connectivity between your environments.

Set up your environment that is outside Google Cloud

Perform the following steps to set up your on-premises environment or other cloud environment for hybrid load balancing:

  • Configure network endpoints to expose on-premises services to Google Cloud (IP:Port).
  • Configure firewall rules on your on-premises environment or other cloud environment.
  • Configure Cloud Router to advertise certain required routes to your private environment.

Set up network endpoints

After you set up hybrid connectivity, you configure one or more network endpoints within your on-premises environment or other cloud environments that are reachable through Cloud Interconnect, Cloud VPN, or Router appliance by using an IP:port combination. This IP:port combination is configured as one or more endpoints for the hybrid connectivity NEG that is created in Google Cloud later in this process.

If there are multiple paths to the IP endpoint, routing follows the behavior described in the Cloud Router overview.
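As a quick sanity check before you create the hybrid NEG, you can verify from a VM inside the VPC network that an on-premises endpoint is reachable over the hybrid connection. ON_PREM_IP_ADDRESS_1 and PORT_1 are the same placeholders used later when adding endpoints to the hybrid NEG.

```shell
# Run from a VM inside the VPC network. Expects the on-premises service
# to answer on its IP:Port over the hybrid connection.
curl http://ON_PREM_IP_ADDRESS_1:PORT_1
```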

Set up firewall rules

The following firewall rules must be created on your on-premises environment or other cloud environment:

  • Create an ingress allow firewall rule in your on-premises environment or other cloud environment to allow traffic from the region's proxy-only subnet to reach the endpoints.
  • Allowing traffic from Google's health check probe ranges isn't required for hybrid NEGs. However, if you're using a combination of hybrid and zonal NEGs in a single backend service, you need to allow traffic from the Google health check probe ranges for the zonal NEGs.
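What the ingress rule looks like depends on your environment's firewall product. As a hypothetical sketch for a Linux endpoint host that uses iptables, allowing the proxy-only subnet ranges (the ranges you assign to the proxy-only subnets later in this guide) to reach a service on PORT_1 might look like this:

```shell
# Hypothetical example for a Linux endpoint host; adapt to your
# on-premises or cloud firewall product. PORT_1 is the endpoint's port,
# and the source ranges are the proxy-only subnet ranges you choose later.
sudo iptables -A INPUT -p tcp --dport PORT_1 \
    -s PROXY_ONLY_SUBNET_RANGE1 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport PORT_1 \
    -s PROXY_ONLY_SUBNET_RANGE2 -j ACCEPT
```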

Advertise routes

Configure Cloud Router to advertise the following custom IP ranges to your on-premises environment or other cloud environment:

  • The range of the region's proxy-only subnet.
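For example, assuming a Cloud Router named ROUTER_NAME with a BGP peer named PEER_NAME (both hypothetical placeholders), a custom advertisement for the proxy-only subnet range might look like the following sketch. Repeat it for the router in REGION_B with PROXY_ONLY_SUBNET_RANGE2.

```shell
# Advertise the proxy-only subnet range to the on-premises peer.
# ROUTER_NAME and PEER_NAME are hypothetical placeholders.
gcloud compute routers update-bgp-peer ROUTER_NAME \
    --peer-name=PEER_NAME \
    --region=REGION_A \
    --advertisement-mode=custom \
    --set-advertisement-ranges=PROXY_ONLY_SUBNET_RANGE1
```

Note that custom advertisement mode replaces the default advertisement; if you also want to keep advertising the VPC subnets, include --set-advertisement-groups=all_subnets.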

Set up the Google Cloud environment

For the following steps, make sure you use the same VPC network (called NETWORK in this procedure) that was used to configure hybrid connectivity between the environments.

Additionally, make sure the regions used (called REGION_A and REGION_B in this procedure) are the same as those used to create the Cloud VPN tunnel or Cloud Interconnect VLAN attachments.

Configure the backend subnets

Use these subnets to create the load balancer's zonal NEG backends:

Console

  1. In the Google Cloud console, go to the VPC networks page.

    Go to VPC networks

  2. Go to the network that was used to configure hybrid connectivity between the environments.

  3. In the Subnets section:

    • Set the Subnet creation mode to Custom.
    • In the New subnet section, enter the following information:
      • Provide a Name for the subnet.
      • Select a Region: REGION_A
      • Enter an IP address range.
    • Click Done.
  4. Click Create.

  5. To add more subnets in different regions, click Add subnet and repeat the previous steps for REGION_B.

gcloud

  1. Create subnets in the network that was used to configure hybrid connectivity between the environments.

    gcloud compute networks subnets create SUBNET_A \
        --network=NETWORK \
        --range=LB_SUBNET_RANGE1 \
        --region=REGION_A

    gcloud compute networks subnets create SUBNET_B \
        --network=NETWORK \
        --range=LB_SUBNET_RANGE2 \
        --region=REGION_B

API

Make a POST request to the subnetworks.insert method. Replace PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_A/subnetworks
{
  "name": "SUBNET_A",
  "network": "projects/PROJECT_ID/global/networks/NETWORK",
  "ipCidrRange": "LB_SUBNET_RANGE1",
  "region": "projects/PROJECT_ID/regions/REGION_A"
}

Make a POST request to the subnetworks.insert method. Replace PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_B/subnetworks
{
  "name": "SUBNET_B",
  "network": "projects/PROJECT_ID/global/networks/NETWORK",
  "ipCidrRange": "LB_SUBNET_RANGE2",
  "region": "projects/PROJECT_ID/regions/REGION_B"
}

Replace the following:

  • SUBNET_A and SUBNET_B: the names of the subnets
  • LB_SUBNET_RANGE1 and LB_SUBNET_RANGE2: the IP address ranges for the subnets
  • REGION_A and REGION_B: the regions where you have configured the load balancer

Configure the proxy-only subnet

A proxy-only subnet provides a set of IP addresses that Google uses to run Envoy proxies on your behalf. The proxies terminate connections from the client and create new connections to the backends.

This proxy-only subnet is used by all cross-region Envoy-based load balancers in the same region of the VPC network. There can only be one active proxy-only subnet for a given purpose, per region, per network.

Important: Don't try to assign addresses from the proxy-only subnet to your load balancer's forwarding rule or backends. You assign the forwarding rule's IP address and the backend instance IP addresses from a different subnet range (or ranges), not this one. Google Cloud reserves this subnet range for Google Cloud-managed proxies.
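Because only one active GLOBAL_MANAGED_PROXY subnet is allowed per region per network, it can help to check for existing ones before creating new subnets:

```shell
# List any proxy-only subnets already active in this network.
gcloud compute networks subnets list \
    --network=NETWORK \
    --filter="purpose=GLOBAL_MANAGED_PROXY"
```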

Console

If you're using the Google Cloud console, you can wait and create the proxy-only subnet later on the Load balancing page.

If you want to create the proxy-only subnet now, use the following steps:

  1. In the Google Cloud console, go to the VPC networks page.

    Go to VPC networks

  2. Click the name of the VPC network.
  3. On the Subnets tab, click Add subnet.
  4. Provide a Name for the proxy-only subnet.
  5. In the Region list, select REGION_A.
  6. In the Purpose list, select Cross-region Managed Proxy.
  7. In the IP address range field, enter 10.129.0.0/23.
  8. Click Add.

Create the proxy-only subnet in REGION_B:

  1. Click Add subnet.
  2. Provide a Name for the proxy-only subnet.
  3. In the Region list, select REGION_B.
  4. In the Purpose list, select Cross-region Managed Proxy.
  5. In the IP address range field, enter 10.130.0.0/23.
  6. Click Add.

gcloud

Create the proxy-only subnets with the gcloud compute networks subnets create command.

    gcloud compute networks subnets create PROXY_SN_A \
        --purpose=GLOBAL_MANAGED_PROXY \
        --role=ACTIVE \
        --region=REGION_A \
        --network=NETWORK \
        --range=PROXY_ONLY_SUBNET_RANGE1

    gcloud compute networks subnets create PROXY_SN_B \
        --purpose=GLOBAL_MANAGED_PROXY \
        --role=ACTIVE \
        --region=REGION_B \
        --network=NETWORK \
        --range=PROXY_ONLY_SUBNET_RANGE2

Replace the following:

  • PROXY_SN_A and PROXY_SN_B: the names of the proxy-only subnets
  • PROXY_ONLY_SUBNET_RANGE1 and PROXY_ONLY_SUBNET_RANGE2: the IP address ranges for the proxy-only subnets
  • REGION_A and REGION_B: the regions where you have configured the load balancer

API

Create the proxy-only subnets with the subnetworks.insert method, replacing PROJECT_ID with your project ID.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_A/subnetworks
    {
      "name": "PROXY_SN_A",
      "ipCidrRange": "PROXY_ONLY_SUBNET_RANGE1",
      "network": "projects/PROJECT_ID/global/networks/NETWORK",
      "region": "projects/PROJECT_ID/regions/REGION_A",
      "purpose": "GLOBAL_MANAGED_PROXY",
      "role": "ACTIVE"
    }

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_B/subnetworks
    {
      "name": "PROXY_SN_B",
      "ipCidrRange": "PROXY_ONLY_SUBNET_RANGE2",
      "network": "projects/PROJECT_ID/global/networks/NETWORK",
      "region": "projects/PROJECT_ID/regions/REGION_B",
      "purpose": "GLOBAL_MANAGED_PROXY",
      "role": "ACTIVE"
    }

Create firewall rules

In this example, you create the following firewall rules for the zonal NEG backends on Google Cloud:

  • fw-allow-health-check: An ingress rule, applicable to the instances being load balanced, that allows traffic from Google Cloud health checking systems (130.211.0.0/22 and 35.191.0.0/16). This example uses the target tag allow-health-check to identify the backend VMs to which it should apply.
  • fw-allow-ssh: An ingress rule that allows incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP range for this rule; for example, you can specify just the IP ranges of the systems from which you will initiate SSH sessions. This example uses the target tag allow-ssh to identify the VMs to which it should apply.
  • fw-allow-proxy-only-subnet: An ingress rule that allows connections from the proxy-only subnet to reach the zonal NEG backends.

Console

  1. In the Google Cloud console, go to the Firewall policies page.

    Go to Firewall policies

  2. Click Create firewall rule to create the rule to allow traffic from health check probes:

    1. Enter a Name of fw-allow-health-check.
    2. For Network, select NETWORK.
    3. For Targets, select Specified target tags.
    4. Populate the Target tags field with allow-health-check.
    5. Set Source filter to IPv4 ranges.
    6. Set Source IPv4 ranges to 130.211.0.0/22 and 35.191.0.0/16.
    7. For Protocols and ports, select Specified protocols and ports.
    8. Select TCP and then enter 80 for the port number.
    9. Click Create.
  3. Click Create firewall rule again to create the rule to allow incoming SSH connections:

    1. Name: fw-allow-ssh
    2. Network: NETWORK
    3. Priority: 1000
    4. Direction of traffic: ingress
    5. Action on match: allow
    6. Targets: Specified target tags
    7. Target tags: allow-ssh
    8. Source filter: IPv4 ranges
    9. Source IPv4 ranges: 0.0.0.0/0
    10. Protocols and ports: Choose Specified protocols and ports.
    11. Select TCP and then enter 22 for the port number.
    12. Click Create.
  4. Click Create firewall rule again to create the rule to allow incoming connections from the proxy-only subnet:

    1. Name: fw-allow-proxy-only-subnet
    2. Network: NETWORK
    3. Priority: 1000
    4. Direction of traffic: ingress
    5. Action on match: allow
    6. Targets: Specified target tags
    7. Target tags: allow-proxy-only-subnet
    8. Source filter: IPv4 ranges
    9. Source IPv4 ranges: PROXY_ONLY_SUBNET_RANGE1 and PROXY_ONLY_SUBNET_RANGE2
    10. Protocols and ports: Choose Specified protocols and ports.
    11. Select TCP and then enter 80 for the port number.
    12. Click Create.

gcloud

  1. Create the fw-allow-health-check rule to allow the Google Cloud health checks to reach the backend instances on TCP port 80:

    gcloud compute firewall-rules create fw-allow-health-check \
        --network=NETWORK \
        --action=allow \
        --direction=ingress \
        --target-tags=allow-health-check \
        --source-ranges=130.211.0.0/22,35.191.0.0/16 \
        --rules=tcp:80

  2. Create the fw-allow-ssh firewall rule to allow SSH connectivity to VMs with the network tag allow-ssh. When you omit --source-ranges, Google Cloud interprets the rule to mean any source.

    gcloud compute firewall-rules create fw-allow-ssh \
        --network=NETWORK \
        --action=allow \
        --direction=ingress \
        --target-tags=allow-ssh \
        --rules=tcp:22

  3. Create an ingress allow firewall rule for the proxy-only subnet to allow the load balancer to communicate with backend instances on TCP port 80:

    gcloud compute firewall-rules create fw-allow-proxy-only-subnet \
        --network=NETWORK \
        --action=allow \
        --direction=ingress \
        --target-tags=allow-proxy-only-subnet \
        --source-ranges=PROXY_ONLY_SUBNET_RANGE1,PROXY_ONLY_SUBNET_RANGE2 \
        --rules=tcp:80

Set up the zonal NEG

For Google Cloud-based backends, we recommend that you configure multiple zonal NEGs in the same region where you configured hybrid connectivity.

For this example, we set up a zonal NEG (with GCE_VM_IP_PORT type endpoints) in the REGION_A region. First create the VMs in the NEG_ZONE1 zone. Then create a zonal NEG in NEG_ZONE1 and add the VMs' network endpoints to the NEG. To support high availability, we set up a similar zonal NEG in the REGION_B region. If backends in one region happen to be down, traffic fails over to the other region.

Create VMs

Console

  1. In the Google Cloud console, go to the VM instances page.

    Go to VM instances

  2. Repeat steps 3 to 8 for each VM, using the following name and zone combinations.

    • Name of vm-a1
      • Zone: NEG_ZONE1 in the region REGION_A
      • Subnet: SUBNET_A
    • Name of vm-b1
      • Zone: NEG_ZONE2 in the region REGION_B
      • Subnet: SUBNET_B
  3. Click Create instance.

  4. Set the name as indicated in the preceding step.

  5. For the Region, choose as indicated in the earlier step.

  6. For the Zone, choose as indicated in the earlier step.

  7. In the Boot disk section, ensure that Debian GNU/Linux 12 (bookworm) is selected for the boot disk options. Click Choose to change the image if necessary.

  8. In the Advanced options section, expand Networking, and then do the following:

    • Add the following Network tags: allow-ssh, allow-health-check, and allow-proxy-only-subnet.
    • In the Network interfaces section, click Add a network interface, make the following changes, and then click Done:
      • Network: NETWORK
      • Subnetwork: as indicated in the earlier step.
      • Primary internal IP: Ephemeral (automatic)
      • External IP: Ephemeral
    • Expand Management. In the Automation field, copy and paste the following script contents. The script contents are identical for all VMs:

      #! /bin/bash
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://metadata.google.internal/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | \
      tee /var/www/html/index.html
      systemctl restart apache2
  9. Click Create.

gcloud

Create the VMs by running the following command, using these combinations for the name of the VM and its zone. The script contents are identical for both VMs.

  • VM_NAME of vm-a1
    • The zone GCP_NEG_ZONE as NEG_ZONE1 in the region REGION_A
    • The subnet LB_SUBNET_NAME as SUBNET_A
  • VM_NAME of vm-b1

    • Zone GCP_NEG_ZONE as NEG_ZONE2 in the region REGION_B
    • Subnet LB_SUBNET_NAME as SUBNET_B

    gcloud compute instances create VM_NAME \
        --zone=GCP_NEG_ZONE \
        --image-family=debian-12 \
        --image-project=debian-cloud \
        --tags=allow-ssh,allow-health-check,allow-proxy-only-subnet \
        --subnet=LB_SUBNET_NAME \
        --metadata=startup-script='#! /bin/bash
          apt-get update
          apt-get install apache2 -y
          a2ensite default-ssl
          a2enmod ssl
          vm_hostname="$(curl -H "Metadata-Flavor:Google" \
          http://metadata.google.internal/computeMetadata/v1/instance/name)"
          echo "Page served from: $vm_hostname" | \
          tee /var/www/html/index.html
          systemctl restart apache2'

Create the zonal NEG

Console

To create a zonal network endpoint group:

  1. In the Google Cloud console, go to the Network Endpoint Groups page.

    Go to Network Endpoint Groups

  2. Repeat steps 3 to 8 for each zonal NEG, using the following name and zone combinations:

    • Name: neg-1
      • Zone: NEG_ZONE1 in the region REGION_A
      • Subnet: SUBNET_A
    • Name: neg-2
      • Zone: NEG_ZONE2 in the region REGION_B
      • Subnet: SUBNET_B
  3. Click Create network endpoint group.

  4. Set the name as indicated in the preceding step.

  5. Select the Network endpoint group type: Network endpoint group (Zonal).

  6. Select the Network: NETWORK

  7. Select the Subnetwork as indicated in the earlier step.

  8. Select the Zone as indicated in the earlier step.

  9. Enter the Default port: 80.

  10. Click Create.

Add endpoints to the zonal NEG:

  1. In the Google Cloud console, go to the Network Endpoint Groups page.

    Go to Network Endpoint Groups

  2. Click the Name of the network endpoint group created in the previous step. You see the Network endpoint group details page.

  3. In the Network endpoints in this group section, click Add network endpoint. You see the Add network endpoint page.

  4. Select a VM instance to add its internal IP addresses as network endpoints. In the Network interface section, the name, zone, and subnet of the VM are displayed.

  5. Enter the IP address of the new network endpoint.

  6. Select the Port type.

    1. If you select Default, the endpoint uses the default port 80 for all endpoints in the network endpoint group. This is sufficient for our example because the Apache server is serving requests at port 80.
    2. If you select Custom, enter the Port number for the endpoint to use.
  7. To add more endpoints, click Add network endpoint and repeat the previous steps.

  8. After you add all the endpoints, click Create.

gcloud

  1. Create zonal NEGs (with GCE_VM_IP_PORT endpoints) using the following name, zone, and subnet combinations. Use the gcloud compute network-endpoint-groups create command.

    • Name: neg-1
      • Zone GCP_NEG_ZONE: NEG_ZONE1 in the region REGION_A
      • Subnet LB_SUBNET_NAME: SUBNET_A
    • Name: neg-2
      • Zone GCP_NEG_ZONE: NEG_ZONE2 in the region REGION_B
      • Subnet LB_SUBNET_NAME: SUBNET_B

    gcloud compute network-endpoint-groups create GCP_NEG_NAME \
        --network-endpoint-type=GCE_VM_IP_PORT \
        --zone=GCP_NEG_ZONE \
        --network=NETWORK \
        --subnet=LB_SUBNET_NAME

    You can either specify a port using the --default-port option while creating the NEG, or specify a port number for each endpoint as shown in the next step.

  2. Add endpoints to neg-1 and neg-2.

    gcloud compute network-endpoint-groups update neg-1 \
        --zone=NEG_ZONE1 \
        --add-endpoint='instance=vm-a1,port=80'

    gcloud compute network-endpoint-groups update neg-2 \
        --zone=NEG_ZONE2 \
        --add-endpoint='instance=vm-b1,port=80'
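To confirm that the endpoints were attached, you can list them. This sketch assumes the NEG names neg-1 and neg-2 from the preceding list:

```shell
# List the endpoints attached to each zonal NEG.
gcloud compute network-endpoint-groups list-network-endpoints neg-1 \
    --zone=NEG_ZONE1
gcloud compute network-endpoint-groups list-network-endpoints neg-2 \
    --zone=NEG_ZONE2
```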

Set up the hybrid connectivity NEG

When creating the NEG, use a zone that minimizes the geographic distance between Google Cloud and your on-premises or other cloud environment.

If you're using Cloud Interconnect, make sure that the zone used to create the NEG is in the same region where the Cloud Interconnect attachment was configured.

Hybrid NEGs support only distributed Envoy health checks.

Console

To create a hybrid connectivity network endpoint group:

  1. In the Google Cloud console, go to the Network Endpoint Groups page.

    Go to Network endpoint groups

  2. Click Create network endpoint group.

  3. Repeat steps 4 to 9 for each hybrid NEG, using the following name and zone combinations.

    • Name ON_PREM_NEG_NAME: hybrid-1
      • Zone: ON_PREM_NEG_ZONE1
      • Subnet: SUBNET_A
    • Name ON_PREM_NEG_NAME: hybrid-2
      • Zone: ON_PREM_NEG_ZONE2
      • Subnet: SUBNET_B
  4. Set the name as indicated in the previous step.

  5. Select the Network endpoint group type: Hybrid connectivity network endpoint group (Zonal).

  6. Select the Network: NETWORK

  7. For the Subnet, choose as indicated in the previous step.

  8. For the Zone, choose as indicated in the previous step.

  9. Enter the Default port.

  10. Click Create.

Add endpoints to the hybrid connectivity NEG:

  1. In the Google Cloud console, go to the Network Endpoint Groups page.

    Go to Network Endpoint Groups

  2. Click the Name of the network endpoint group created in the previous step. You see the Network endpoint group details page.

  3. In the Network endpoints in this group section, click Add network endpoint. You see the Add network endpoint page.

  4. Enter the IP address of the new network endpoint.

  5. Select the Port type.

    1. If you select Default, the endpoint uses the default port for all endpoints in the network endpoint group.
    2. If you select Custom, you can enter a different Port number for the endpoint to use.
  6. To add more endpoints, click Add network endpoint and repeat the previous steps.

  7. After you add all the non-Google Cloud endpoints, click Create.

gcloud

  1. Create a hybrid connectivity NEG that uses the following name combinations. Use the gcloud compute network-endpoint-groups create command.

    • Name ON_PREM_NEG_NAME: hybrid-1
      • Zone ON_PREM_NEG_ZONE: ON_PREM_NEG_ZONE1
    • Name ON_PREM_NEG_NAME: hybrid-2
      • Zone ON_PREM_NEG_ZONE: ON_PREM_NEG_ZONE2

    gcloud compute network-endpoint-groups create ON_PREM_NEG_NAME \
        --network-endpoint-type=NON_GCP_PRIVATE_IP_PORT \
        --zone=ON_PREM_NEG_ZONE \
        --network=NETWORK

  2. Add the on-premises backend VM endpoints to ON_PREM_NEG_NAME:

    gcloud compute network-endpoint-groups update ON_PREM_NEG_NAME \
        --zone=ON_PREM_NEG_ZONE \
        --add-endpoint="ip=ON_PREM_IP_ADDRESS_1,port=PORT_1" \
        --add-endpoint="ip=ON_PREM_IP_ADDRESS_2,port=PORT_2"

You can use this command to add the network endpoints you previously configured on premises or in your cloud environment. Repeat --add-endpoint as many times as needed.
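You can verify the hybrid NEG's contents the same way as for zonal NEGs:

```shell
# List the on-premises endpoints attached to the hybrid NEG.
gcloud compute network-endpoint-groups list-network-endpoints ON_PREM_NEG_NAME \
    --zone=ON_PREM_NEG_ZONE
```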

Configure the load balancer

Console

Note: You cannot use the Google Cloud console to create a load balancer that has mixed zonal and hybrid connectivity NEG backends in a single backend service. Use either gcloud or the REST API instead.

gcloud

  1. Define the TCP health check with the gcloud compute health-checks create tcp command.

    gcloud compute health-checks create tcp gil4-basic-check \
        --use-serving-port \
        --global
  2. Create the backend service and enable logging with the gcloud compute backend-services create command.

    gcloud compute backend-services create BACKEND_SERVICE \
        --load-balancing-scheme=INTERNAL_MANAGED \
        --protocol=TCP \
        --enable-logging \
        --logging-sample-rate=1.0 \
        --health-checks=gil4-basic-check \
        --global-health-checks \
        --global
  3. Add the zonal NEGs to the backend service with the gcloud compute backend-services add-backend command. Run the command once per NEG; add-backend accepts one backend per invocation.

    gcloud compute backend-services add-backend BACKEND_SERVICE \
        --global \
        --balancing-mode=CONNECTION \
        --max-connections-per-endpoint=MAX_CONNECTIONS \
        --network-endpoint-group=neg-1 \
        --network-endpoint-group-zone=NEG_ZONE1

    gcloud compute backend-services add-backend BACKEND_SERVICE \
        --global \
        --balancing-mode=CONNECTION \
        --max-connections-per-endpoint=MAX_CONNECTIONS \
        --network-endpoint-group=neg-2 \
        --network-endpoint-group-zone=NEG_ZONE2

    For details about configuring the balancing mode, see the gcloud CLI documentation for the --max-connections-per-endpoint flag. For MAX_CONNECTIONS, enter the maximum number of concurrent connections for the backend to handle.

  4. Add the hybrid NEGs as backends to the backend service, again one NEG per command:

    gcloud compute backend-services add-backend BACKEND_SERVICE \
        --global \
        --balancing-mode=CONNECTION \
        --max-connections-per-endpoint=MAX_CONNECTIONS \
        --network-endpoint-group=hybrid-1 \
        --network-endpoint-group-zone=ON_PREM_NEG_ZONE1

    gcloud compute backend-services add-backend BACKEND_SERVICE \
        --global \
        --balancing-mode=CONNECTION \
        --max-connections-per-endpoint=MAX_CONNECTIONS \
        --network-endpoint-group=hybrid-2 \
        --network-endpoint-group-zone=ON_PREM_NEG_ZONE2

    For details about configuring the balancing mode, see the gcloud CLI documentation for the --max-connections-per-endpoint parameter. For MAX_CONNECTIONS, enter the maximum number of concurrent connections for the backend to handle.

  5. Create the target proxy.

    Create the target proxy with the gcloud compute target-tcp-proxies create command.

    gcloud compute target-tcp-proxies create gil4-tcp-proxy \
        --backend-service=BACKEND_SERVICE \
        --global
  6. Create two forwarding rules, one with a VIP IP_ADDRESS1 in REGION_A and another one with a VIP IP_ADDRESS2 in REGION_B. For the forwarding rule's IP address, use the LB_SUBNET_RANGE1 or LB_SUBNET_RANGE2 IP address range. If you try to use the proxy-only subnet, forwarding rule creation fails.

    For custom networks, you must reference the subnet in the forwarding rule. Note that this is the VM subnet, not the proxy subnet.

    Use the gcloud compute forwarding-rules create command with the correct flags.

    gcloud compute forwarding-rules create gil4-forwarding-rule-a \
        --load-balancing-scheme=INTERNAL_MANAGED \
        --network=NETWORK \
        --subnet=SUBNET_A \
        --subnet-region=REGION_A \
        --address=IP_ADDRESS1 \
        --ports=80 \
        --target-tcp-proxy=gil4-tcp-proxy \
        --global

    gcloud compute forwarding-rules create gil4-forwarding-rule-b \
        --load-balancing-scheme=INTERNAL_MANAGED \
        --network=NETWORK \
        --subnet=SUBNET_B \
        --subnet-region=REGION_B \
        --address=IP_ADDRESS2 \
        --ports=80 \
        --target-tcp-proxy=gil4-tcp-proxy \
        --global
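Before testing, you can read back the two VIPs that the forwarding rules hold:

```shell
# Print the IP address of each forwarding rule.
gcloud compute forwarding-rules describe gil4-forwarding-rule-a \
    --global --format="get(IPAddress)"
gcloud compute forwarding-rules describe gil4-forwarding-rule-b \
    --global --format="get(IPAddress)"
```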

Test the load balancer

Create a VM instance to test connectivity

  1. Create client VMs in the REGION_A and REGION_B regions:

    gcloud compute instances create l4-ilb-client-a \
        --image-family=debian-12 \
        --image-project=debian-cloud \
        --network=NETWORK \
        --subnet=SUBNET_A \
        --zone=NEG_ZONE1 \
        --tags=allow-ssh

    gcloud compute instances create l4-ilb-client-b \
        --image-family=debian-12 \
        --image-project=debian-cloud \
        --network=NETWORK \
        --subnet=SUBNET_B \
        --zone=NEG_ZONE2 \
        --tags=allow-ssh
  2. Use SSH to connect to each client instance.

    gcloud compute ssh l4-ilb-client-a \
        --zone=NEG_ZONE1

    gcloud compute ssh l4-ilb-client-b \
        --zone=NEG_ZONE2
  3. Verify that the IP address is serving its hostname.

    • Verify that the client VM can reach both IP addresses. The command should succeed and return the name of the backend VM that served the request:

      curl IP_ADDRESS1

      curl IP_ADDRESS2

Run 100 requests

Run 100 curl requests and confirm from the responses that they are load balanced.

  • Verify that the client VM can reach both IP addresses. The command should succeed and return the name of the backend VM that served the request:

    {
      RESULTS=
      for i in {1..100}
      do
        RESULTS="$RESULTS:$(curl --silent IP_ADDRESS1)"
      done
      echo "***"
      echo "*** Results of load-balancing to IP_ADDRESS1: "
      echo "***"
      echo "$RESULTS" | tr ':' '\n' | grep -Ev "^$" | sort | uniq -c
      echo
    }

    {
      RESULTS=
      for i in {1..100}
      do
        RESULTS="$RESULTS:$(curl --silent IP_ADDRESS2)"
      done
      echo "***"
      echo "*** Results of load-balancing to IP_ADDRESS2: "
      echo "***"
      echo "$RESULTS" | tr ':' '\n' | grep -Ev "^$" | sort | uniq -c
      echo
    }

Test failover

  1. Verify failover to backends in the REGION_A region when backends in the REGION_B region are unhealthy or unreachable. We simulate this by removing all the backends from REGION_B:

    gcloud compute backend-services remove-backend BACKEND_SERVICE \
        --global \
        --network-endpoint-group=neg-2 \
        --network-endpoint-group-zone=NEG_ZONE2
  2. Use SSH to connect to the client VM in REGION_B.

    gcloud compute ssh l4-ilb-client-b \
        --zone=NEG_ZONE2
  3. Send requests to the load-balanced IP address in the REGION_B region. The command output should display responses from backend VMs in REGION_A.

    {
      RESULTS=
      for i in {1..100}
      do
        RESULTS="$RESULTS:$(curl --silent IP_ADDRESS2)"
      done
      echo "***"
      echo "*** Results of load-balancing to IP_ADDRESS2: "
      echo "***"
      echo "$RESULTS" | tr ':' '\n' | grep -Ev "^$" | sort | uniq -c
      echo
    }



Last updated 2025-10-24 UTC.