Set up a cross-region internal proxy Network Load Balancer with VM instance group backends

This document provides instructions for configuring a cross-region internal proxy Network Load Balancer for your services that run on Compute Engine VMs.

Before you begin

Before following this guide, familiarize yourself with the following:

Permissions

To follow this guide, you must be able to create instances and modify a network in a project. You must be either a project owner or editor, or you must have all of the following Compute Engine IAM roles.

  • Create networks, subnets, and load balancer components: Compute Network Admin
  • Add and remove firewall rules: Compute Security Admin
  • Create instances: Compute Instance Admin

For more information, see the following guides:

Setup overview

You can configure the load balancer as shown in the following diagram:

Cross-region internal proxy Network Load Balancer high availability deployment.

As shown in the diagram, this example creates a cross-region internal proxy Network Load Balancer in a VPC network, with one backend service and two backend managed instance groups (MIGs) in the REGION_A and REGION_B regions.

The diagram shows the following:

  1. A VPC network with the following subnets:

    • Subnet SUBNET_A and a proxy-only subnet in REGION_A.
    • Subnet SUBNET_B and a proxy-only subnet in REGION_B.

    You must create proxy-only subnets in each region of a VPC network where you use cross-region internal proxy Network Load Balancers. The region's proxy-only subnet is shared among all cross-region internal proxy Network Load Balancers in the region. Source addresses of packets sent from the load balancer to your service's backends are allocated from the proxy-only subnet. In this example, the proxy-only subnet in the REGION_A region has a primary IP address range of 10.129.0.0/23, and the proxy-only subnet in REGION_B has a primary IP address range of 10.130.0.0/23, which is the recommended subnet size.

  2. A high availability setup with managed instance group backends for Compute Engine VM deployments in the REGION_A and REGION_B regions. If the backends in one region are down, traffic fails over to the other region.

  3. A global backend service that monitors the usage and health of backends.

  4. A global target TCP proxy, which receives a request from the user and forwards it to the backend service.

  5. Global forwarding rules, which have the regional internal IP address of your load balancer and can forward each incoming request to the target proxy.

    The internal IP address associated with the forwarding rule can come from a subnet in the same network and region as the backends. Note the following conditions:

    • The IP address can (but does not need to) come from the same subnet as the backend instance groups.
    • The IP address must not come from a reserved proxy-only subnet that has its --purpose flag set to GLOBAL_MANAGED_PROXY.
    • If you want to use the same internal IP address with multiple forwarding rules, set the IP address --purpose flag to SHARED_LOADBALANCER_VIP.
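
    The last condition can be satisfied by reserving the address before you create any forwarding rules. A minimal sketch, assuming the subnet from this example; the address name ilb-shared-vip and the address 10.1.2.98 are illustrative:

    ```shell
    # Sketch: reserve a static internal IPv4 address that multiple
    # forwarding rules can share. "ilb-shared-vip" is an illustrative name.
    gcloud compute addresses create ilb-shared-vip \
        --region=REGION_A \
        --subnet=SUBNET_A \
        --addresses=10.1.2.98 \
        --purpose=SHARED_LOADBALANCER_VIP
    ```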

Configure the network and subnets

Within the VPC network, configure a subnet in each region where your backends are configured. In addition, configure a proxy-only subnet in each region where you want to configure the load balancer.

This example uses the following VPC network, regions, and subnets:

  • Network. The network is a custom mode VPC network named NETWORK.

  • Subnets for backends.

    • A subnet named SUBNET_A in the REGION_A region uses 10.1.2.0/24 for its primary IP range.
    • A subnet named SUBNET_B in the REGION_B region uses 10.1.3.0/24 for its primary IP range.
  • Subnets for proxies.

    • A subnet named PROXY_SN_A in the REGION_A region uses 10.129.0.0/23 for its primary IP range.
    • A subnet named PROXY_SN_B in the REGION_B region uses 10.130.0.0/23 for its primary IP range.

Cross-region internal proxy Network Load Balancers can be accessed from any region within the VPC network, so clients in any region can reach your load balancer's backends.

Note: Subsequent steps in this guide use the network, region, and subnet parameters as outlined here.

Configure the backend subnets

Console

  1. In the Google Cloud console, go to the VPC networks page.

    Go to VPC networks

  2. Click Create VPC network.

  3. Provide a Name for the network.

  4. In the Subnets section, set the Subnet creation mode to Custom.

  5. Create a subnet for the load balancer's backends. In the New subnet section, enter the following information:

    • Provide a Name for the subnet.
    • Select a Region: REGION_A
    • Enter an IP address range: 10.1.2.0/24
  6. Click Done.

  7. Click Add subnet.

  8. Create a subnet for the load balancer's backends. In the New subnet section, enter the following information:

    • Provide a Name for the subnet.
    • Select a Region: REGION_B
    • Enter an IP address range: 10.1.3.0/24
  9. Click Done.

  10. Click Create.

gcloud

  1. Create the custom VPC network with the gcloud compute networks create command:

    gcloud compute networks create NETWORK \
        --subnet-mode=custom

  2. Create a subnet in the NETWORK network in the REGION_A region with the gcloud compute networks subnets create command:

    gcloud compute networks subnets create SUBNET_A \
        --network=NETWORK \
        --range=10.1.2.0/24 \
        --region=REGION_A

  3. Create a subnet in the NETWORK network in the REGION_B region with the gcloud compute networks subnets create command:

    gcloud compute networks subnets create SUBNET_B \
        --network=NETWORK \
        --range=10.1.3.0/24 \
        --region=REGION_B
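
    To confirm that the subnets were created as expected, you can list them. A sketch, using the placeholders above:

    ```shell
    # Sketch: list the backend subnets created in the custom network.
    gcloud compute networks subnets list \
        --network=NETWORK \
        --filter="region:( REGION_A REGION_B )"
    ```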

Terraform

To create the VPC network, use the google_compute_network resource.

resource "google_compute_network" "default" {
  auto_create_subnetworks = false
  name                    = "lb-network-crs-reg"
  provider                = google-beta
}

To create the VPC subnets in the lb-network-crs-reg network, use the google_compute_subnetwork resource.

resource "google_compute_subnetwork" "subnet_a" {
  provider      = google-beta
  ip_cidr_range = "10.1.2.0/24"
  name          = "lbsubnet-uswest1"
  network       = google_compute_network.default.id
  region        = "us-west1"
}

resource "google_compute_subnetwork" "subnet_b" {
  provider      = google-beta
  ip_cidr_range = "10.1.3.0/24"
  name          = "lbsubnet-useast1"
  network       = google_compute_network.default.id
  region        = "us-east1"
}

API

Make a POST request to the networks.insert method. Replace PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks

{
 "routingConfig": {
   "routingMode": "regional"
 },
 "name": "NETWORK",
 "autoCreateSubnetworks": false
}

Make a POST request to the subnetworks.insert method. Replace PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_A/subnetworks

{
 "name": "SUBNET_A",
 "network": "projects/PROJECT_ID/global/networks/NETWORK",
 "ipCidrRange": "10.1.2.0/24",
 "region": "projects/PROJECT_ID/regions/REGION_A"
}

Make a POST request to the subnetworks.insert method. Replace PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_B/subnetworks

{
 "name": "SUBNET_B",
 "network": "projects/PROJECT_ID/global/networks/NETWORK",
 "ipCidrRange": "10.1.3.0/24",
 "region": "projects/PROJECT_ID/regions/REGION_B"
}

Configure the proxy-only subnet

A proxy-only subnet provides a set of IP addresses that Google Cloud uses to run Envoy proxies on your behalf. The proxies terminate connections from the client and create new connections to the backends.

This proxy-only subnet is used by all cross-region Envoy-based load balancers in the same region of the VPC network. There can only be one active proxy-only subnet for a given purpose, per region, per network.

Important: Don't try to assign addresses from the proxy-only subnet to your load balancer's forwarding rule or backends. You assign the forwarding rule's IP address and the backend instance IP addresses from a different subnet range (or ranges), not this one. Google Cloud reserves this subnet range for Google Cloud-managed proxies.
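
The recommended /23 size leaves generous headroom for proxy scaling. The address count of a CIDR block is 2^(32 − prefix length), which you can check with shell arithmetic:

```shell
# Address counts for candidate proxy-only subnet sizes.
echo $((2 ** (32 - 23)))   # /23, the size this guide recommends
echo $((2 ** (32 - 26)))   # /26, a much smaller pool
```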

Console

If you're using the Google Cloud console, you can wait and create the proxy-only subnet later on the Load balancing page.

If you want to create the proxy-only subnet now, use the following steps:

  1. In the Google Cloud console, go to the VPC networks page.

    Go to VPC networks

  2. Click the name of the VPC network.
  3. On the Subnets tab, click Add subnet.
  4. Provide a Name for the proxy-only subnet.
  5. Select a Region: REGION_A
  6. In the Purpose list, select Cross-region Managed Proxy.
  7. In the IP address range field, enter 10.129.0.0/23.
  8. Click Add.

Create the proxy-only subnet in REGION_B

  1. On the Subnets tab, click Add subnet.
  2. Provide a Name for the proxy-only subnet.
  3. Select a Region: REGION_B
  4. In the Purpose list, select Cross-region Managed Proxy.
  5. In the IP address range field, enter 10.130.0.0/23.
  6. Click Add.

gcloud

Create the proxy-only subnets with the gcloud compute networks subnets create command.

    gcloud compute networks subnets create PROXY_SN_A \
        --purpose=GLOBAL_MANAGED_PROXY \
        --role=ACTIVE \
        --region=REGION_A \
        --network=NETWORK \
        --range=10.129.0.0/23

    gcloud compute networks subnets create PROXY_SN_B \
        --purpose=GLOBAL_MANAGED_PROXY \
        --role=ACTIVE \
        --region=REGION_B \
        --network=NETWORK \
        --range=10.130.0.0/23
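
    To verify that a proxy-only subnet was reserved with the intended purpose and role, describe it. A sketch:

    ```shell
    # Sketch: confirm the purpose, role, and range of the proxy-only subnet.
    gcloud compute networks subnets describe PROXY_SN_A \
        --region=REGION_A \
        --format="value(purpose,role,ipCidrRange)"
    ```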

Terraform

To create the VPC proxy-only subnets in the lb-network-crs-reg network, use the google_compute_subnetwork resource.

resource "google_compute_subnetwork" "proxy_subnet_a" {
  provider      = google-beta
  ip_cidr_range = "10.129.0.0/23"
  name          = "proxy-only-subnet1"
  network       = google_compute_network.default.id
  purpose       = "GLOBAL_MANAGED_PROXY"
  region        = "us-west1"
  role          = "ACTIVE"
  lifecycle {
    ignore_changes = [ipv6_access_type]
  }
}

resource "google_compute_subnetwork" "proxy_subnet_b" {
  provider      = google-beta
  ip_cidr_range = "10.130.0.0/23"
  name          = "proxy-only-subnet2"
  network       = google_compute_network.default.id
  purpose       = "GLOBAL_MANAGED_PROXY"
  region        = "us-east1"
  role          = "ACTIVE"
  lifecycle {
    ignore_changes = [ipv6_access_type]
  }
}

API

Create the proxy-only subnets with the subnetworks.insert method, replacing PROJECT_ID with your project ID.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_A/subnetworks

    {
      "name": "PROXY_SN_A",
      "ipCidrRange": "10.129.0.0/23",
      "network": "projects/PROJECT_ID/global/networks/NETWORK",
      "region": "projects/PROJECT_ID/regions/REGION_A",
      "purpose": "GLOBAL_MANAGED_PROXY",
      "role": "ACTIVE"
    }

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_B/subnetworks

    {
      "name": "PROXY_SN_B",
      "ipCidrRange": "10.130.0.0/23",
      "network": "projects/PROJECT_ID/global/networks/NETWORK",
      "region": "projects/PROJECT_ID/regions/REGION_B",
      "purpose": "GLOBAL_MANAGED_PROXY",
      "role": "ACTIVE"
    }

Configure firewall rules

This example uses the following firewall rules:

  • fw-ilb-to-backends. An ingress rule, applicable to the instances being load balanced, that allows incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP address range for this rule; for example, you can specify just the IP address ranges of the system from which you initiate SSH sessions. This example uses the target tag allow-ssh to identify the VMs that the firewall rule applies to.

  • fw-healthcheck. An ingress rule, applicable to the instances being load balanced, that allows all TCP traffic from the Google Cloud health checking systems (in 130.211.0.0/22 and 35.191.0.0/16). This example uses the target tag load-balanced-backend to identify the VMs that the firewall rule applies to.

  • fw-backends. An ingress rule, applicable to the instances being load balanced, that allows TCP traffic on ports 80, 443, and 8080 from the internal proxy Network Load Balancer's managed proxies. This example uses the target tag load-balanced-backend to identify the VMs that the firewall rule applies to.

Without these firewall rules, the default deny ingress rule blocks incoming traffic to the backend instances.

The target tags define the backend instances. Without the target tags, the firewall rules apply to all of your backend instances in the VPC network. When you create the backend VMs, make sure to include the specified target tags, as shown in Creating a managed instance group.
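
If you narrow the health-check rule's source ranges, it's easy to check that a candidate address still falls inside Google's documented prober ranges (130.211.0.0/22 and 35.191.0.0/16). A sketch using Python's stdlib ipaddress module from the shell; the helper name in_hc_range is illustrative:

```shell
# Check whether an IP falls inside Google's health-check prober ranges.
in_hc_range() {
  python3 - "$1" <<'EOF'
import ipaddress, sys
ip = ipaddress.ip_address(sys.argv[1])
nets = [ipaddress.ip_network("130.211.0.0/22"),
        ipaddress.ip_network("35.191.0.0/16")]
print(any(ip in n for n in nets))
EOF
}

in_hc_range 130.211.1.5   # inside 130.211.0.0/22
in_hc_range 130.211.4.5   # outside both ranges (the /22 ends at 130.211.3.255)
```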

Console

  1. In the Google Cloud console, go to the Firewall policies page.

    Go to Firewall policies

  2. Click Create firewall rule to create the rule to allow incoming SSH connections:

    • Name: fw-ilb-to-backends
    • Network: NETWORK
    • Direction of traffic: Ingress
    • Action on match: Allow
    • Targets: Specified target tags
    • Target tags: allow-ssh
    • Source filter: IPv4 ranges
    • Source IPv4 ranges: 0.0.0.0/0
    • Protocols and ports:
      • Choose Specified protocols and ports.
      • Select the TCP checkbox, and then enter 22 for the port number.
  3. Click Create.

  4. Click Create firewall rule a second time to create the rule to allow Google Cloud health checks:

    • Name: fw-healthcheck
    • Network: NETWORK
    • Direction of traffic: Ingress
    • Action on match: Allow
    • Targets: Specified target tags
    • Target tags: load-balanced-backend
    • Source filter: IPv4 ranges
    • Source IPv4 ranges: 130.211.0.0/22 and 35.191.0.0/16
    • Protocols and ports:

      • Choose Specified protocols and ports.
      • Select the TCP checkbox, and then enter 80 for the port number.

      As a best practice, limit this rule to just the protocols and ports that match those used by your health check. If you use tcp:80 for the protocol and port, Google Cloud can use HTTP on port 80 to contact your VMs, but it cannot use HTTPS on port 443 to contact them.

  5. Click Create.

  6. Click Create firewall rule a third time to create the rule to allow the load balancer's proxy servers to connect to the backends:

    • Name: fw-backends
    • Network: NETWORK
    • Direction of traffic: Ingress
    • Action on match: Allow
    • Targets: Specified target tags
    • Target tags: load-balanced-backend
    • Source filter: IPv4 ranges
    • Source IPv4 ranges: 10.129.0.0/23 and 10.130.0.0/23
    • Protocols and ports:
      • Choose Specified protocols and ports.
      • Select the TCP checkbox, and then enter 80, 443, 8080 for the port numbers.
  7. Click Create.

gcloud

  1. Create the fw-ilb-to-backends firewall rule to allow SSH connectivity to VMs with the network tag allow-ssh. When you omit source-ranges, Google Cloud interprets the rule to mean any source.

    gcloud compute firewall-rules create fw-ilb-to-backends \
        --network=NETWORK \
        --action=allow \
        --direction=ingress \
        --target-tags=allow-ssh \
        --rules=tcp:22

  2. Create the fw-healthcheck rule to allow Google Cloud health checks. This example allows all TCP traffic from health check probers; however, you can configure a narrower set of ports to meet your needs.

    gcloud compute firewall-rules create fw-healthcheck \
        --network=NETWORK \
        --action=allow \
        --direction=ingress \
        --source-ranges=130.211.0.0/22,35.191.0.0/16 \
        --target-tags=load-balanced-backend \
        --rules=tcp

  3. Create the fw-backends rule to allow the internal proxy Network Load Balancer's proxies to connect to your backends. Set source-ranges to the allocated ranges of your proxy-only subnets, for example, 10.129.0.0/23 and 10.130.0.0/23.

    gcloud compute firewall-rules create fw-backends \
        --network=NETWORK \
        --action=allow \
        --direction=ingress \
        --source-ranges=SOURCE_RANGE \
        --target-tags=load-balanced-backend \
        --rules=tcp:80,tcp:443,tcp:8080
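
    After creating the rules, you can review what each one admits. A sketch:

    ```shell
    # Sketch: list the ingress rules scoped to this network.
    gcloud compute firewall-rules list \
        --filter="network=NETWORK" \
        --format="table(name,sourceRanges.list(),targetTags.list())"
    ```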

API

Create the fw-ilb-to-backends firewall rule by making a POST request to the firewalls.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
 "name": "fw-ilb-to-backends",
 "network": "projects/PROJECT_ID/global/networks/NETWORK",
 "sourceRanges": [
   "0.0.0.0/0"
 ],
 "targetTags": [
   "allow-ssh"
 ],
 "allowed": [
   {
     "IPProtocol": "tcp",
     "ports": [
       "22"
     ]
   }
 ],
 "direction": "INGRESS"
}

Create the fw-healthcheck firewall rule by making a POST request to the firewalls.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
 "name": "fw-healthcheck",
 "network": "projects/PROJECT_ID/global/networks/NETWORK",
 "sourceRanges": [
   "130.211.0.0/22",
   "35.191.0.0/16"
 ],
 "targetTags": [
   "load-balanced-backend"
 ],
 "allowed": [
   {
     "IPProtocol": "tcp"
   }
 ],
 "direction": "INGRESS"
}

Create the fw-backends firewall rule, which allows TCP traffic from the proxy-only subnet ranges, by making a POST request to the firewalls.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
 "name": "fw-backends",
 "network": "projects/PROJECT_ID/global/networks/NETWORK",
 "sourceRanges": [
   "10.129.0.0/23",
   "10.130.0.0/23"
 ],
 "targetTags": [
   "load-balanced-backend"
 ],
 "allowed": [
   {
     "IPProtocol": "tcp",
     "ports": [
       "80"
     ]
   },
   {
     "IPProtocol": "tcp",
     "ports": [
       "443"
     ]
   },
   {
     "IPProtocol": "tcp",
     "ports": [
       "8080"
     ]
   }
 ],
 "direction": "INGRESS"
}

Create a managed instance group

This section shows how to create a template and a managed instance group. The managed instance group provides VM instances running the backend servers of an example cross-region internal proxy Network Load Balancer. For your instance group, you can define an HTTP service and map a port name to the relevant port. The load balancer's backend service forwards traffic to the named ports. Traffic from clients is load balanced to backend servers. For demonstration purposes, backends serve their own hostnames.

Console

  1. In the Google Cloud console, go to the Instance templates page.

    Go to Instance templates

    1. Click Create instance template.
    2. For Name, enter gil4-backendeast1-template.
    3. Ensure that the Boot disk is set to a Debian image, such as Debian GNU/Linux 12 (bookworm). These instructions use commands that are only available on Debian, such as apt-get.
    4. Click Advanced options.
    5. Click Networking and configure the following fields:
      1. For Network tags, enter allow-ssh and load-balanced-backend.
      2. For Network interfaces, select the following:
        • Network: NETWORK
        • Subnet: SUBNET_B
    6. Click Management. Enter the following script into the Startup script field.

      #! /bin/bash
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://169.254.169.254/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | \
      tee /var/www/html/index.html
      systemctl restart apache2

    7. Click Create.

    8. Click Create instance template.

    9. For Name, enter gil4-backendwest1-template.

    10. Ensure that the Boot disk is set to a Debian image, such as Debian GNU/Linux 12 (bookworm). These instructions use commands that are only available on Debian, such as apt-get.

    11. Click Advanced options.

    12. Click Networking and configure the following fields:

      1. For Network tags, enter allow-ssh and load-balanced-backend.
      2. For Network interfaces, select the following:
        • Network: NETWORK
        • Subnet: SUBNET_A
    13. Click Management. Enter the following script into the Startup script field.

      #! /bin/bash
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://169.254.169.254/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | \
      tee /var/www/html/index.html
      systemctl restart apache2

    14. Click Create.

  2. In the Google Cloud console, go to the Instance groups page.

    Go to Instance groups

    1. Click Create instance group.
    2. Select New managed instance group (stateless). For more information, see Stateless or stateful MIGs.
    3. For Name, enter gl4-ilb-miga.
    4. For Location, select Single zone.
    5. For Region, select REGION_A.
    6. For Zone, select ZONE_A.
    7. For Instance template, select gil4-backendwest1-template.
    8. Specify the number of instances that you want to create in the group.

      For this example, specify the following options under Autoscaling:

      • For Autoscaling mode, select Off: do not autoscale.
      • For Maximum number of instances, enter 2.

      Optionally, in the Autoscaling section of the UI, you can configure the instance group to automatically add or remove instances based on instance CPU usage.

    9. Click Create.

    10. Click Create instance group.

    11. Select New managed instance group (stateless). For more information, see Stateless or stateful MIGs.

    12. For Name, enter gl4-ilb-migb.

    13. For Location, select Single zone.

    14. For Region, select REGION_B.

    15. For Zone, select ZONE_B.

    16. For Instance template, select gil4-backendeast1-template.

    17. Specify the number of instances that you want to create in the group.

      For this example, specify the following options under Autoscaling:

      • For Autoscaling mode, select Off: do not autoscale.
      • For Maximum number of instances, enter 2.

      Optionally, in the Autoscaling section of the UI, you can configure the instance group to automatically add or remove instances based on instance CPU usage.

    18. Click Create.

gcloud

The gcloud CLI instructions in this guide assume that you are using Cloud Shell or another environment with bash installed.

  1. Create a VM instance template with an HTTP server by using the gcloud compute instance-templates create command.

    gcloud compute instance-templates create gil4-backendwest1-template \
        --region=REGION_A \
        --network=NETWORK \
        --subnet=SUBNET_A \
        --tags=allow-ssh,load-balanced-backend \
        --image-family=debian-12 \
        --image-project=debian-cloud \
        --metadata=startup-script='#! /bin/bash
          apt-get update
          apt-get install apache2 -y
          a2ensite default-ssl
          a2enmod ssl
          vm_hostname="$(curl -H "Metadata-Flavor:Google" \
          http://169.254.169.254/computeMetadata/v1/instance/name)"
          echo "Page served from: $vm_hostname" | \
          tee /var/www/html/index.html
          systemctl restart apache2'

    gcloud compute instance-templates create gil4-backendeast1-template \
        --region=REGION_B \
        --network=NETWORK \
        --subnet=SUBNET_B \
        --tags=allow-ssh,load-balanced-backend \
        --image-family=debian-12 \
        --image-project=debian-cloud \
        --metadata=startup-script='#! /bin/bash
          apt-get update
          apt-get install apache2 -y
          a2ensite default-ssl
          a2enmod ssl
          vm_hostname="$(curl -H "Metadata-Flavor:Google" \
          http://169.254.169.254/computeMetadata/v1/instance/name)"
          echo "Page served from: $vm_hostname" | \
          tee /var/www/html/index.html
          systemctl restart apache2'

  2. Create a managed instance group in each zone with the gcloud compute instance-groups managed create command.

    gcloud compute instance-groups managed create gl4-ilb-miga \
        --zone=ZONE_A \
        --size=2 \
        --template=gil4-backendwest1-template

    gcloud compute instance-groups managed create gl4-ilb-migb \
        --zone=ZONE_B \
        --size=2 \
        --template=gil4-backendeast1-template
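
    The console steps map the named port http to port 80 on each instance group. If you follow the gcloud path and want the same named-port mapping (optional in this example), a sketch:

    ```shell
    # Sketch: map the named port "http" to port 80 on both MIGs.
    gcloud compute instance-groups set-named-ports gl4-ilb-miga \
        --named-ports=http:80 \
        --zone=ZONE_A

    gcloud compute instance-groups set-named-ports gl4-ilb-migb \
        --named-ports=http:80 \
        --zone=ZONE_B
    ```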

API

Create the instance templates with the instanceTemplates.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/instanceTemplates

{
  "name": "gil4-backendwest1-template",
  "properties": {
    "machineType": "e2-standard-2",
    "tags": {
      "items": [
        "allow-ssh",
        "load-balanced-backend"
      ]
    },
    "metadata": {
      "kind": "compute#metadata",
      "items": [
        {
          "key": "startup-script",
          "value": "#! /bin/bash\napt-get update\napt-get install apache2 -y\na2ensite default-ssl\na2enmod ssl\nvm_hostname=\"$(curl -H \"Metadata-Flavor:Google\" \\\nhttp://169.254.169.254/computeMetadata/v1/instance/name)\"\necho \"Page served from: $vm_hostname\" | \\\ntee /var/www/html/index.html\nsystemctl restart apache2"
        }
      ]
    },
    "networkInterfaces": [
      {
        "network": "projects/PROJECT_ID/global/networks/NETWORK",
        "subnetwork": "regions/REGION_A/subnetworks/SUBNET_A",
        "accessConfigs": [
          {
            "type": "ONE_TO_ONE_NAT"
          }
        ]
      }
    ],
    "disks": [
      {
        "index": 0,
        "boot": true,
        "initializeParams": {
          "sourceImage": "projects/debian-cloud/global/images/family/debian-12"
        },
        "autoDelete": true
      }
    ]
  }
}

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/instanceTemplates

{
  "name": "gil4-backendeast1-template",
  "properties": {
    "machineType": "e2-standard-2",
    "tags": {
      "items": [
        "allow-ssh",
        "load-balanced-backend"
      ]
    },
    "metadata": {
      "kind": "compute#metadata",
      "items": [
        {
          "key": "startup-script",
          "value": "#! /bin/bash\napt-get update\napt-get install apache2 -y\na2ensite default-ssl\na2enmod ssl\nvm_hostname=\"$(curl -H \"Metadata-Flavor:Google\" \\\nhttp://169.254.169.254/computeMetadata/v1/instance/name)\"\necho \"Page served from: $vm_hostname\" | \\\ntee /var/www/html/index.html\nsystemctl restart apache2"
        }
      ]
    },
    "networkInterfaces": [
      {
        "network": "projects/PROJECT_ID/global/networks/NETWORK",
        "subnetwork": "regions/REGION_B/subnetworks/SUBNET_B",
        "accessConfigs": [
          {
            "type": "ONE_TO_ONE_NAT"
          }
        ]
      }
    ],
    "disks": [
      {
        "index": 0,
        "boot": true,
        "initializeParams": {
          "sourceImage": "projects/debian-cloud/global/images/family/debian-12"
        },
        "autoDelete": true
      }
    ]
  }
}

Create a managed instance group in each zone with the instanceGroupManagers.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE_A/instanceGroupManagers

{
  "name": "gl4-ilb-miga",
  "zone": "projects/PROJECT_ID/zones/ZONE_A",
  "instanceTemplate": "projects/PROJECT_ID/global/instanceTemplates/gil4-backendwest1-template",
  "baseInstanceName": "gl4-ilb-miga",
  "targetSize": 2
}

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE_B/instanceGroupManagers

{
  "name": "gl4-ilb-migb",
  "zone": "projects/PROJECT_ID/zones/ZONE_B",
  "instanceTemplate": "projects/PROJECT_ID/global/instanceTemplates/gil4-backendeast1-template",
  "baseInstanceName": "gl4-ilb-migb",
  "targetSize": 2
}

Configure the load balancer

This example shows you how to create the following cross-region internal proxy Network Load Balancer resources:

  • A global TCP health check.
  • A global backend service with the two MIGs as backends.
  • A global target proxy.
  • Two global forwarding rules with regional internal IP addresses. For the forwarding rule's IP address, use the SUBNET_A or SUBNET_B IP address range. If you try to use the proxy-only subnet, forwarding rule creation fails.

Proxy availability

Sometimes Google Cloud regions don't have enough proxy capacity for a new load balancer. If this happens, the Google Cloud console provides a proxy availability warning message when you are creating your load balancer. To resolve this issue, you can do one of the following:

  • Select a different region for your load balancer. This can be a practicaloption if you have backends in another region.
  • Select a VPC network that already has an allocatedproxy-only subnet.
  • Wait for the capacity issue to be resolved.

Console

Start your configuration

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. Click Create load balancer.
  3. For Type of load balancer, select Network Load Balancer (TCP/UDP/SSL) and click Next.
  4. For Proxy or passthrough, select Proxy load balancer and click Next.
  5. For Public facing or internal, select Internal and click Next.
  6. For Cross-region or single region deployment, select Best for cross-region workloads and click Next.
  7. Click Configure.

Basic configuration

  1. Provide a Name for the load balancer.
  2. For Network, select NETWORK.

Configure the frontend with two forwarding rules

  1. Click Frontend configuration.
    1. Provide a Name for the forwarding rule.
    2. In the Subnetwork region list, select REGION_A.

      Reserve a proxy-only subnet

    3. If you already configured the proxy-only subnet, the Reserve subnet button isn't displayed. You can continue with the next steps.
    4. In the Subnetwork list, select SUBNET_A.
    5. In the IP address list, click Create IP address. The Reserve a static internal IP address page opens.
      • Provide a Name for the static IP address.
      • In the Static IP address list, select Let me choose.
      • In the Custom IP address field, enter 10.1.2.99.
      • Select Reserve.
  2. Click Done.
  3. To add the second forwarding rule, click Add frontend IP and port.
    1. Provide a Name for the forwarding rule.
    2. In the Subnetwork region list, select REGION_B.

      Reserve a proxy-only subnet

    3. Because you have already configured the proxy-only subnet, the Reserve subnet button isn't displayed. You can continue with the next steps.
    4. In the Subnetwork list, select SUBNET_B.
    5. In the IP address list, click Create IP address. The Reserve a static internal IP address page opens.
      • Provide a Name for the static IP address.
      • In the Static IP address list, select Let me choose.
      • In the Custom IP address field, enter 10.1.3.99.
      • Select Reserve.
  4. Click Done.
Configure the backend service

  1. Click Backend configuration.
  2. In the Create or select backend services list, click Create a backend service.
  3. Provide a Name for the backend service.
  4. For Protocol, select TCP.
  5. For Named Port, enter http.
  6. In the Backend type list, select Instance group.
  7. In the Health check list, click Create a health check, and then enter the following information:
    • In the Name field, enter global-http-health-check.
    • In the Protocol list, select HTTP.
    • In the Port field, enter 80.
    • Click Create.
  8. In the New backend section:
    1. In the Instance group list, select gl4-ilb-miga in REGION_A.
    2. Set Port numbers to 80.
    3. For the Balancing mode, select Connection.
    4. Click Done.
    5. To add another backend, click Add backend.
    6. In the Instance group list, select gl4-ilb-migb in REGION_B.
    7. Set Port numbers to 80.
    8. Click Done.

Review the configuration

  1. Click Review and finalize.
  2. Review your load balancer configuration settings.
  3. Click Create.

gcloud

  1. Define the TCP health check with the gcloud compute health-checks create tcp command.

    gcloud compute health-checks create tcp global-health-check \
        --use-serving-port \
        --global

  2. Define the backend service with the gcloud compute backend-services create command.

    gcloud compute backend-services create gl4-gilb-backend-service \
        --load-balancing-scheme=INTERNAL_MANAGED \
        --protocol=TCP \
        --enable-logging \
        --logging-sample-rate=1.0 \
        --health-checks=global-health-check \
        --global-health-checks \
        --global

  3. Add backends to the backend service with the gcloud compute backend-services add-backend command.

    gcloud compute backend-services add-backend gl4-gilb-backend-service \
        --balancing-mode=CONNECTION \
        --max-connections=50 \
        --instance-group=gl4-ilb-miga \
        --instance-group-zone=ZONE_A \
        --global

    gcloud compute backend-services add-backend gl4-gilb-backend-service \
        --balancing-mode=CONNECTION \
        --max-connections=50 \
        --instance-group=gl4-ilb-migb \
        --instance-group-zone=ZONE_B \
        --global

  4. Create the target proxy.

    Create the target proxy with the gcloud compute target-tcp-proxies create command.

    gcloud compute target-tcp-proxies create gilb-tcp-proxy \
        --backend-service=gl4-gilb-backend-service \
        --global
  5. Create two forwarding rules: one with the VIP 10.1.2.99 in REGION_A and another with the VIP 10.1.3.99 in REGION_B. For more information, see Reserve a static internal IPv4 address.

    For custom networks, you must reference the subnet in the forwarding rule. Note that this is the VM subnet, not the proxy subnet.

    Use the gcloud compute forwarding-rules create command with the correct flags.

    gcloud compute forwarding-rules create gil4forwarding-rule-a \
        --load-balancing-scheme=INTERNAL_MANAGED \
        --network=NETWORK \
        --subnet=SUBNET_A \
        --subnet-region=REGION_A \
        --address=10.1.2.99 \
        --ports=80 \
        --target-tcp-proxy=gilb-tcp-proxy \
        --global
    gcloud compute forwarding-rules create gil4forwarding-rule-b \
        --load-balancing-scheme=INTERNAL_MANAGED \
        --network=NETWORK \
        --subnet=SUBNET_B \
        --subnet-region=REGION_B \
        --address=10.1.3.99 \
        --ports=80 \
        --target-tcp-proxy=gilb-tcp-proxy \
        --global

API

Create the health check by making a POST request to the healthChecks.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/healthChecks
{
  "name": "global-health-check",
  "type": "TCP",
  "tcpHealthCheck": {
    "portSpecification": "USE_SERVING_PORT"
  }
}

Create the global backend service by making a POST request to the backendServices.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendServices
{
  "name": "gl4-gilb-backend-service",
  "backends": [
    {
      "group": "projects/PROJECT_ID/zones/ZONE_A/instanceGroups/gl4-ilb-miga",
      "balancingMode": "CONNECTION"
    },
    {
      "group": "projects/PROJECT_ID/zones/ZONE_B/instanceGroups/gl4-ilb-migb",
      "balancingMode": "CONNECTION"
    }
  ],
  "healthChecks": [
    "projects/PROJECT_ID/global/healthChecks/global-health-check"
  ],
  "loadBalancingScheme": "INTERNAL_MANAGED"
}

Create the target TCP proxy by making a POST request to the targetTcpProxies.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/targetTcpProxies
{
  "name": "l4-ilb-proxy",
  "service": "projects/PROJECT_ID/global/backendServices/gl4-gilb-backend-service"
}


Create the forwarding rules by making POST requests to the globalForwardingRules.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/forwardingRules
{
  "name": "gil4forwarding-rule-a",
  "IPAddress": "10.1.2.99",
  "IPProtocol": "TCP",
  "portRange": "80-80",
  "target": "projects/PROJECT_ID/global/targetTcpProxies/l4-ilb-proxy",
  "loadBalancingScheme": "INTERNAL_MANAGED",
  "subnetwork": "projects/PROJECT_ID/regions/REGION_A/subnetworks/SUBNET_A",
  "network": "projects/PROJECT_ID/global/networks/NETWORK",
  "networkTier": "PREMIUM"
}
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/forwardingRules
{
  "name": "gil4forwarding-rule-b",
  "IPAddress": "10.1.3.99",
  "IPProtocol": "TCP",
  "portRange": "80-80",
  "target": "projects/PROJECT_ID/global/targetTcpProxies/l4-ilb-proxy",
  "loadBalancingScheme": "INTERNAL_MANAGED",
  "subnetwork": "projects/PROJECT_ID/regions/REGION_B/subnetworks/SUBNET_B",
  "network": "projects/PROJECT_ID/global/networks/NETWORK",
  "networkTier": "PREMIUM"
}

Test the load balancer

Create a VM instance to test connectivity

  1. Create a client VM in each of the REGION_A and REGION_B regions:

    gcloud compute instances create l4-ilb-client-a \
        --image-family=debian-12 \
        --image-project=debian-cloud \
        --network=NETWORK \
        --subnet=SUBNET_A \
        --zone=ZONE_A \
        --tags=allow-ssh
    gcloud compute instances create l4-ilb-client-b \
        --image-family=debian-12 \
        --image-project=debian-cloud \
        --network=NETWORK \
        --subnet=SUBNET_B \
        --zone=ZONE_B \
        --tags=allow-ssh
  2. Use SSH to connect to each client instance.

    gcloud compute ssh l4-ilb-client-a --zone=ZONE_A
    gcloud compute ssh l4-ilb-client-b --zone=ZONE_B
  3. Verify that the IP address is serving its hostname.

    • Verify that the client VM can reach both IP addresses. The command should succeed and return the name of the backend VM that served the request:

      curl 10.1.2.99
      curl 10.1.3.99

Test failover

  1. Verify failover to backends in the REGION_A region when backends in the REGION_B region are unhealthy or unreachable. To simulate failover, remove all backends from REGION_B:

    gcloud compute backend-services remove-backend gl4-gilb-backend-service \
        --instance-group=gl4-ilb-migb \
        --instance-group-zone=ZONE_B \
        --global
  2. Connect using SSH to a client VM in REGION_B.

    gcloud compute ssh l4-ilb-client-b \
        --zone=ZONE_B
  3. Send requests to the load-balanced IP address in the REGION_B region. The command output shows responses from backend VMs in REGION_A:

    {
    RESULTS=
    for i in {1..100}
    do
      RESULTS="$RESULTS:$(curl 10.1.3.99)"
    done
    echo "***"
    echo "*** Results of load-balancing to 10.1.3.99: "
    echo "***"
    echo "$RESULTS" | tr ':' '\n' | grep -Ev "^$" | sort | uniq -c
    echo
    }

Additional configuration options

This section expands on the configuration example to provide alternative and additional configuration options. All of the tasks are optional. You can perform them in any order.

PROXY protocol for retaining client connection information

The internal proxy Network Load Balancer terminates TCP connections from the client and creates new connections to the VM instances. By default, the original client IP and port information is not preserved.

To preserve and send the original connection information to your instances, enable PROXY protocol (version 1). This protocol sends an additional header that contains the source IP address, destination IP address, and port numbers to the instance as a part of the request.

Make sure that the internal proxy Network Load Balancer's backend instances are running HTTP or HTTPS servers that support PROXY protocol headers. If the HTTP or HTTPS servers are not configured to support PROXY protocol headers, the backend instances return empty responses. For example, the PROXY protocol doesn't work with the Apache HTTP Server software. You can use different web server software, such as Nginx.
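For Nginx, accepting PROXY protocol headers takes a small amount of server configuration. The following is an illustrative sketch, not a complete or recommended configuration; it uses this page's example proxy-only subnet ranges (10.129.0.0/23 and 10.130.0.0/23) and the standard ngx_http_realip_module directives:

```nginx
server {
    # Accept the PROXY protocol header on incoming connections.
    listen 80 proxy_protocol;

    # Trust connections from the proxy-only subnets as sources of
    # PROXY protocol client information.
    set_real_ip_from 10.129.0.0/23;
    set_real_ip_from 10.130.0.0/23;
    real_ip_header proxy_protocol;

    location / {
        # $proxy_protocol_addr holds the original client IP address.
        add_header X-Client-IP $proxy_protocol_addr;
    }
}
```

With real_ip_header set to proxy_protocol, Nginx logs and access controls see the original client address instead of the proxy-only subnet address.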

If you set the PROXY protocol for user traffic, you must also set it for your health checks. If you are checking health and serving content on the same port, set the health check's --proxy-header to match your load balancer setting.
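For example, assuming the global-health-check TCP health check created earlier in this guide, matching the health check's header setting might look like the following sketch:

```shell
gcloud compute health-checks update tcp global-health-check \
    --proxy-header=PROXY_V1 \
    --global
```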

The PROXY protocol header is typically a single line of user-readable text in the following format:

PROXY TCP4 <client IP> <load balancing IP> <source port> <dest port>\r\n

Following is an example of the PROXY protocol:

PROXY TCP4 192.0.2.1 198.51.100.1 15221 110\r\n

In the preceding example, the client IP is 192.0.2.1, the load balancing IP is 198.51.100.1, the client port is 15221, and the destination port is 110.

In cases where the client IP is not known, the load balancer generates a PROXY protocol header in the following format:

PROXY UNKNOWN\r\n
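Backends that consume this header split it on spaces before reading the rest of the request. The following hypothetical Python helper (not part of any Google Cloud library) sketches how the two header forms shown above can be parsed:

```python
def parse_proxy_v1(line: str):
    """Parse a PROXY protocol v1 header line.

    Returns (client_ip, lb_ip, client_port, dest_port), or None when the
    load balancer could not determine the client ("PROXY UNKNOWN").
    """
    if not line.endswith("\r\n"):
        raise ValueError("PROXY v1 header must end with CRLF")
    fields = line.rstrip("\r\n").split(" ")
    if fields[:2] == ["PROXY", "UNKNOWN"]:
        return None
    if len(fields) != 6 or fields[0] != "PROXY" or fields[1] not in ("TCP4", "TCP6"):
        raise ValueError(f"malformed PROXY header: {line!r}")
    _, _, client_ip, lb_ip, client_port, dest_port = fields
    return client_ip, lb_ip, int(client_port), int(dest_port)

# The example header from this page:
print(parse_proxy_v1("PROXY TCP4 192.0.2.1 198.51.100.1 15221 110\r\n"))
# → ('192.0.2.1', '198.51.100.1', 15221, 110)
```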

Update PROXY protocol header for target TCP proxy

The example load balancer setup on this page shows you how to enable the PROXY protocol header while creating the internal proxy Network Load Balancer. Use these steps to change the PROXY protocol header for an existing target TCP proxy.

Console

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. Click Edit for your load balancer.

  3. Click Frontend configuration.

  4. Change the value of the Proxy protocol field to On.

  5. Click Update to save your changes.

gcloud

In the following command, edit the --proxy-header field and set it to either NONE or PROXY_V1, depending on your requirement.

gcloud compute target-tcp-proxies update int-tcp-target-proxy \
    --proxy-header=[NONE | PROXY_V1]

Use the same IP address between multiple internal forwarding rules

For multiple internal forwarding rules to share the same internal IP address, you must reserve the IP address and set its --purpose flag to SHARED_LOADBALANCER_VIP.

gcloud

gcloud compute addresses create SHARED_IP_ADDRESS_NAME \
    --region=REGION \
    --subnet=SUBNET_NAME \
    --purpose=SHARED_LOADBALANCER_VIP

Enable session affinity

The example configuration creates a backend service without session affinity.

These procedures show you how to update a backend service for an example load balancer so that the backend service uses client IP affinity or generated cookie affinity.

When client IP affinity is enabled, the load balancer directs a particular client's requests to the same backend VM based on a hash created from the client's IP address and the load balancer's IP address (the internal IP address of an internal forwarding rule).
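Conceptually, client IP affinity behaves like a deterministic hash over the two addresses. The following Python sketch is purely illustrative; Google Cloud's actual hashing algorithm is internal and not specified here:

```python
import hashlib

def pick_backend(client_ip: str, lb_ip: str, backends: list) -> str:
    """Map a (client IP, load balancer IP) pair to one backend deterministically."""
    digest = hashlib.sha256(f"{client_ip}|{lb_ip}".encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(backends)
    return backends[index]

backends = ["vm-a1", "vm-a2", "vm-b1", "vm-b2"]
# The same client maps to the same backend while the backend set is stable.
assert pick_backend("192.0.2.1", "10.1.2.99", backends) == \
       pick_backend("192.0.2.1", "10.1.2.99", backends)
```

Note that affinity of this kind breaks when the backend set changes (for example, during autoscaling), which is why session affinity is best treated as an optimization rather than a guarantee.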

Console

To enable client IP session affinity:

  1. In the Google Cloud console, go to the Load balancing page.
    Go to Load balancing
  2. Click Backends.
  3. Click the name of the backend service you created for this example and click Edit.
  4. On the Backend service details page, click Advanced configuration.
  5. Under Session affinity, select Client IP from the menu.
  6. Click Update.

gcloud

Use the following gcloud command to update the BACKEND_SERVICE backend service, specifying client IP session affinity:

gcloud compute backend-services update BACKEND_SERVICE \
    --global \
    --session-affinity=CLIENT_IP

Enable connection draining

You can enable connection draining on backend services to ensure minimal interruption to your users when an instance that is serving traffic is terminated, removed manually, or removed by an autoscaler. To learn more about connection draining, read the Enabling connection draining documentation.
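For example, assuming the gl4-gilb-backend-service backend service from this page, connection draining could be enabled with a 300-second timeout (a sketch; choose a timeout that fits your workloads):

```shell
gcloud compute backend-services update gl4-gilb-backend-service \
    --global \
    --connection-draining-timeout=300
```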

What's next


Last updated 2025-12-15 UTC.