Set up a cross-region internal Application Load Balancer with VM instance group backends

This document provides instructions for configuring a cross-region internal Application Load Balancer for your services that run on Compute Engine virtual machine (VM) instances.

Before you begin

Before following this guide, familiarize yourself with the following:

Set up an SSL certificate resource

Create a Certificate Manager SSL certificate resource as described in the following:

We recommend using a Google-managed certificate.

Permissions

To follow this guide, you must be able to create instances and modify a network in a project. You must be either a project owner or editor, or you must have all of the following Compute Engine IAM roles:

  • Create networks, subnets, and load balancer components: Compute Network Admin
  • Add and remove firewall rules: Compute Security Admin
  • Create instances: Compute Instance Admin
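
For example, to grant one of these roles, you can use the gcloud projects add-iam-policy-binding command. The following sketch grants the Compute Network Admin role; USER_EMAIL is a placeholder for the user's email address.

    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member="user:USER_EMAIL" \
        --role="roles/compute.networkAdmin"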

For more information, see the following guides:

Setup overview

You can configure the load balancer as shown in the following diagram:

Cross-region internal Application Load Balancer high availability deployment.

As shown in the diagram, this example creates a cross-region internal Application Load Balancer in a VPC network, with one backend service and two backend managed instance groups in the REGION_A and REGION_B regions.

The diagram shows the following:

  1. A VPC network with the following subnets:

    • Subnet SUBNET_A and a proxy-only subnet in REGION_A.
    • Subnet SUBNET_B and a proxy-only subnet in REGION_B.

    You must create proxy-only subnets in each region of a VPC network where you use cross-region internal Application Load Balancers. The region's proxy-only subnet is shared among all cross-region internal Application Load Balancers in the region. Source addresses of packets sent from the load balancer to your service's backends are allocated from the proxy-only subnet. In this example, the proxy-only subnet for the region REGION_A has a primary IP address range of 10.129.0.0/23, and the proxy-only subnet for REGION_B has a primary IP address range of 10.130.0.0/23, which is the recommended subnet size.

  2. A high availability setup with managed instance group backends for Compute Engine VM deployments in the REGION_A and REGION_B regions. If the backends in one region are down, traffic fails over to the other region.

  3. A global backend service that monitors the usage and health of backends.

  4. A global URL map that parses the URL of a request and forwards requests to specific backend services based on the host and path of the request URL.

  5. A global target HTTP or HTTPS proxy, which receives a request from the user and forwards it to the URL map. For HTTPS, configure a global SSL certificate resource. The target proxy uses the SSL certificate to decrypt SSL traffic if you configure HTTPS load balancing. The target proxy can forward traffic to your instances by using HTTP or HTTPS.

  6. Global forwarding rules, which hold the regional internal IP address of your load balancer and forward each incoming request to the target proxy.

    The internal IP address associated with the forwarding rule can come from a subnet in the same network and region as the backends. Note the following conditions:

    • The IP address can (but does not need to) come from the same subnet as the backend instance groups.
    • The IP address must not come from a reserved proxy-only subnet that has its --purpose flag set to GLOBAL_MANAGED_PROXY.
    • If you want to use the same internal IP address with multiple forwarding rules, set the IP address --purpose flag to SHARED_LOADBALANCER_VIP, as shown in the example after this list.
  7. Optional: Configure DNS routing policies of type GEO to route client traffic to the load balancer VIP in the region closest to the client.
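
For example, to reserve an internal IP address that multiple forwarding rules can share, set the --purpose flag when you reserve the address. The following is a minimal sketch; SHARED_VIP_NAME is a placeholder name chosen for this example.

    gcloud compute addresses create SHARED_VIP_NAME \
        --region=REGION_A \
        --subnet=SUBNET_A \
        --purpose=SHARED_LOADBALANCER_VIP \
        --addresses=10.1.2.99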

Configure the network and subnets

Within the VPC network, configure a subnet in each region where your backends are configured. In addition, configure a proxy-only subnet in each region where you want to configure the load balancer.

This example uses the following VPC network, region, and subnets:

  • Network. The network is a custom mode VPC network named NETWORK.

  • Subnets for backends.

    • A subnet named SUBNET_A in the REGION_A region uses 10.1.2.0/24 for its primary IP range.
    • A subnet named SUBNET_B in the REGION_B region uses 10.1.3.0/24 for its primary IP range.
  • Subnets for proxies.

    • A subnet named PROXY_SN_A in the REGION_A region uses 10.129.0.0/23 for its primary IP range.
    • A subnet named PROXY_SN_B in the REGION_B region uses 10.130.0.0/23 for its primary IP range.

Cross-region internal Application Load Balancers can be accessed from any region within the VPC network. So clients from any region can globally access your load balancer backends.

Note: Subsequent steps in this guide use the network, region, and subnet parameters as outlined here.

Configure the backend subnets

Console

  1. In the Google Cloud console, go to the VPC networks page.

    Go to VPC networks

  2. Click Create VPC network.

  3. Provide a Name for the network.

  4. In the Subnets section, set the Subnet creation mode to Custom.

  5. Create a subnet for the load balancer's backends. In the New subnet section, enter the following information:

    • Provide a Name for the subnet.
    • Select a Region: REGION_A
    • Enter an IP address range: 10.1.2.0/24
  6. Click Done.

  7. Click Add subnet.

  8. Create a subnet for the load balancer's backends. In the New subnet section, enter the following information:

    • Provide a Name for the subnet.
    • Select a Region: REGION_B
    • Enter an IP address range: 10.1.3.0/24
  9. Click Done.

  10. Click Create.

gcloud

  1. Create the custom VPC network with the gcloud compute networks create command:

    gcloud compute networks create NETWORK \
        --subnet-mode=custom

  2. Create a subnet in the NETWORK network in the REGION_A region with the gcloud compute networks subnets create command:

    gcloud compute networks subnets create SUBNET_A \
        --network=NETWORK \
        --range=10.1.2.0/24 \
        --region=REGION_A

  3. Create a subnet in the NETWORK network in the REGION_B region with the gcloud compute networks subnets create command:

    gcloud compute networks subnets create SUBNET_B \
        --network=NETWORK \
        --range=10.1.3.0/24 \
        --region=REGION_B

Terraform

To create the VPC network, use the google_compute_network resource.

resource "google_compute_network" "default" {  auto_create_subnetworks = false  name                    = "lb-network-crs-reg"  provider                = google-beta}

To create the VPC subnets in the lb-network-crs-reg network, use the google_compute_subnetwork resource.

resource "google_compute_subnetwork" "subnet_a" {  provider      = google-beta  ip_cidr_range = "10.1.2.0/24"  name          = "lbsubnet-uswest1"  network       = google_compute_network.default.id  region        = "us-west1"}
resource "google_compute_subnetwork" "subnet_b" {  provider      = google-beta  ip_cidr_range = "10.1.3.0/24"  name          = "lbsubnet-useast1"  network       = google_compute_network.default.id  region        = "us-east1"}

API

Make a POST request to the networks.insert method. Replace PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks

{
  "routingConfig": {
    "routingMode": "regional"
  },
  "name": "NETWORK",
  "autoCreateSubnetworks": false
}

Make a POST request to the subnetworks.insert method. Replace PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_A/subnetworks

{
  "name": "SUBNET_A",
  "network": "projects/PROJECT_ID/global/networks/NETWORK",
  "ipCidrRange": "10.1.2.0/24",
  "region": "projects/PROJECT_ID/regions/REGION_A"
}

Make a POST request to the subnetworks.insert method. Replace PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_B/subnetworks

{
  "name": "SUBNET_B",
  "network": "projects/PROJECT_ID/global/networks/NETWORK",
  "ipCidrRange": "10.1.3.0/24",
  "region": "projects/PROJECT_ID/regions/REGION_B"
}

Configure the proxy-only subnet

A proxy-only subnet provides a set of IP addresses that Google Cloud uses to run Envoy proxies on your behalf. The proxies terminate connections from the client and create new connections to the backends.

This proxy-only subnet is used by all Envoy-based regional load balancers in the same region as the VPC network. There can only be one active proxy-only subnet for a given purpose, per region, per network.

Important: Don't try to assign addresses from the proxy-only subnet to your load balancer's forwarding rule or backends. You assign the forwarding rule's IP address and the backend instance IP addresses from a different subnet range (or ranges), not this one. Google Cloud reserves this subnet range for Google Cloud-managed proxies.
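
Before you create a proxy-only subnet, you can check whether a region already has one. The following sketch lists subnets in the network whose purpose is GLOBAL_MANAGED_PROXY; the filter expression is one possible way to narrow the output.

    gcloud compute networks subnets list \
        --network=NETWORK \
        --filter="purpose=GLOBAL_MANAGED_PROXY"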

Console

If you're using the Google Cloud console, you can wait and create the proxy-only subnet later on the Load balancing page.

If you want to create the proxy-only subnet now, use the following steps:

  1. In the Google Cloud console, go to the VPC networks page.

    Go to VPC networks

  2. Click the name of the VPC network.
  3. On the Subnet tab, click Add subnet.
  4. Provide a Name for the proxy-only subnet.
  5. Select a Region: REGION_A
  6. In the Purpose list, select Cross-region Managed Proxy.
  7. In the IP address range field, enter 10.129.0.0/23.
  8. Click Add.

Create the proxy-only subnet in REGION_B

  1. On the Subnet tab, click Add subnet.
  2. Provide a Name for the proxy-only subnet.
  3. Select a Region: REGION_B
  4. In the Purpose list, select Cross-region Managed Proxy.
  5. In the IP address range field, enter 10.130.0.0/23.
  6. Click Add.

gcloud

Create the proxy-only subnets with the gcloud compute networks subnets create command.

    gcloud compute networks subnets create PROXY_SN_A \
        --purpose=GLOBAL_MANAGED_PROXY \
        --role=ACTIVE \
        --region=REGION_A \
        --network=NETWORK \
        --range=10.129.0.0/23

    gcloud compute networks subnets create PROXY_SN_B \
        --purpose=GLOBAL_MANAGED_PROXY \
        --role=ACTIVE \
        --region=REGION_B \
        --network=NETWORK \
        --range=10.130.0.0/23

Terraform

To create the VPC proxy-only subnet in the lb-network-crs-reg network, use the google_compute_subnetwork resource.

resource "google_compute_subnetwork" "proxy_subnet_a" {  provider      = google-beta  ip_cidr_range = "10.129.0.0/23"  name          = "proxy-only-subnet1"  network       = google_compute_network.default.id  purpose       = "GLOBAL_MANAGED_PROXY"  region        = "us-west1"  role          = "ACTIVE"  lifecycle {    ignore_changes = [ipv6_access_type]  }}
resource "google_compute_subnetwork" "proxy_subnet_b" {  provider      = google-beta  ip_cidr_range = "10.130.0.0/23"  name          = "proxy-only-subnet2"  network       = google_compute_network.default.id  purpose       = "GLOBAL_MANAGED_PROXY"  region        = "us-east1"  role          = "ACTIVE"  lifecycle {    ignore_changes = [ipv6_access_type]  }}

API

Create the proxy-only subnets with the subnetworks.insert method, replacing PROJECT_ID with your project ID.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_A/subnetworks

    {
      "name": "PROXY_SN_A",
      "ipCidrRange": "10.129.0.0/23",
      "network": "projects/PROJECT_ID/global/networks/NETWORK",
      "region": "projects/PROJECT_ID/regions/REGION_A",
      "purpose": "GLOBAL_MANAGED_PROXY",
      "role": "ACTIVE"
    }

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_B/subnetworks

    {
      "name": "PROXY_SN_B",
      "ipCidrRange": "10.130.0.0/23",
      "network": "projects/PROJECT_ID/global/networks/NETWORK",
      "region": "projects/PROJECT_ID/regions/REGION_B",
      "purpose": "GLOBAL_MANAGED_PROXY",
      "role": "ACTIVE"
    }

Configure firewall rules

This example uses the following firewall rules:

  • fw-ilb-to-backends. An ingress rule, applicable to the instances being load balanced, that allows incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP address range for this rule; for example, you can specify just the IP address ranges of the system from which you initiate SSH sessions. This example uses the target tag allow-ssh to identify the VMs that the firewall rule applies to.

  • fw-healthcheck. An ingress rule, applicable to the instances being load balanced, that allows all TCP traffic from the Google Cloud health checking systems (in 130.211.0.0/22 and 35.191.0.0/16). This example uses the target tag load-balanced-backend to identify the VMs that the firewall rule applies to.

  • fw-backends. An ingress rule, applicable to the instances being load balanced, that allows TCP traffic on ports 80, 443, and 8080 from the internal Application Load Balancer's managed proxies. This example uses the target tag load-balanced-backend to identify the VMs that the firewall rule applies to.

Without these firewall rules, the default deny ingress rule blocks incoming traffic to the backend instances.

The target tags define the backend instances. Without the target tags, the firewall rules apply to all of your backend instances in the VPC network. When you create the backend VMs, make sure to include the specified target tags, as shown in Creating a managed instance group.

Console

  1. In the Google Cloud console, go to the Firewall policies page.

    Go to Firewall policies

  2. Click Create firewall rule to create the rule to allow incoming SSH connections:

    • Name: fw-ilb-to-backends
    • Network: NETWORK
    • Direction of traffic: Ingress
    • Action on match: Allow
    • Targets: Specified target tags
    • Target tags: allow-ssh
    • Source filter: IPv4 ranges
    • Source IPv4 ranges: 0.0.0.0/0
    • Protocols and ports:
      • Choose Specified protocols and ports.
      • Select the TCP checkbox, and then enter 22 for the port number.
  3. Click Create.

  4. Click Create firewall rule a second time to create the rule to allow Google Cloud health checks:

    • Name: fw-healthcheck
    • Network: NETWORK
    • Direction of traffic: Ingress
    • Action on match: Allow
    • Targets: Specified target tags
    • Target tags: load-balanced-backend
    • Source filter: IPv4 ranges
    • Source IPv4 ranges: 130.211.0.0/22 and 35.191.0.0/16
    • Protocols and ports:

      • Choose Specified protocols and ports.
      • Select the TCP checkbox, and then enter 80 for the port number.

      As a best practice, limit this rule to just the protocols and ports that match those used by your health check. If you use tcp:80 for the protocol and port, Google Cloud can use HTTP on port 80 to contact your VMs, but it cannot use HTTPS on port 443 to contact them.

  5. Click Create.

  6. Click Create firewall rule a third time to create the rule to allow the load balancer's proxy servers to connect to the backends:

    • Name: fw-backends
    • Network: NETWORK
    • Direction of traffic: Ingress
    • Action on match: Allow
    • Targets: Specified target tags
    • Target tags: load-balanced-backend
    • Source filter: IPv4 ranges
    • Source IPv4 ranges: 10.129.0.0/23 and 10.130.0.0/23
    • Protocols and ports:
      • Choose Specified protocols and ports.
      • Select the TCP checkbox, and then enter 80, 443, 8080 for the port numbers.
  7. Click Create.

gcloud

  1. Create the fw-ilb-to-backends firewall rule to allow SSH connectivity to VMs with the network tag allow-ssh. When you omit source-ranges, Google Cloud interprets the rule to mean any source.

    gcloud compute firewall-rules create fw-ilb-to-backends \
        --network=NETWORK \
        --action=allow \
        --direction=ingress \
        --target-tags=allow-ssh \
        --rules=tcp:22

  2. Create the fw-healthcheck rule to allow Google Cloud health checks. This example allows all TCP traffic from health check probers; however, you can configure a narrower set of ports to meet your needs, as shown in the example after these steps.

    gcloud compute firewall-rules create fw-healthcheck \
        --network=NETWORK \
        --action=allow \
        --direction=ingress \
        --source-ranges=130.211.0.0/22,35.191.0.0/16 \
        --target-tags=load-balanced-backend \
        --rules=tcp

  3. Create the fw-backends rule to allow the internal Application Load Balancer's proxies to connect to your backends. Set source-ranges to the allocated ranges of your proxy-only subnets, for example, 10.129.0.0/23 and 10.130.0.0/23.

    gcloud compute firewall-rules create fw-backends \
        --network=NETWORK \
        --action=allow \
        --direction=ingress \
        --source-ranges=SOURCE_RANGE \
        --target-tags=load-balanced-backend \
        --rules=tcp:80,tcp:443,tcp:8080
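
If your health check uses only HTTP on port 80, you can narrow the fw-healthcheck rule after creating it. The following is a minimal sketch of one possible narrowing:

    gcloud compute firewall-rules update fw-healthcheck \
        --rules=tcp:80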

Terraform

To create the firewall rules, use the google_compute_firewall resource.

resource "google_compute_firewall" "fw_healthcheck" {  name          = "gl7-ilb-fw-allow-hc"  provider      = google-beta  direction     = "INGRESS"  network       = google_compute_network.default.id  source_ranges = ["130.211.0.0/22", "35.191.0.0/16", "35.235.240.0/20"]  allow {    protocol = "tcp"  }}
resource "google_compute_firewall" "fw_ilb_to_backends" {  name          = "fw-ilb-to-fw"  provider      = google-beta  network       = google_compute_network.default.id  source_ranges = ["0.0.0.0/0"]  allow {    protocol = "tcp"    ports    = ["22", "80", "443", "8080"]  }}
resource "google_compute_firewall" "fw_backends" {  name          = "gl7-ilb-fw-allow-ilb-to-backends"  direction     = "INGRESS"  network       = google_compute_network.default.id  source_ranges = ["10.129.0.0/23", "10.130.0.0/23"]  target_tags   = ["http-server"]  allow {    protocol = "tcp"    ports    = ["80", "443", "8080"]  }}

API

Create the fw-ilb-to-backends firewall rule by making a POST request to the firewalls.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls{ "name": "fw-ilb-to-backends", "network": "projects/PROJECT_ID/global/networks/NETWORK", "sourceRanges": [   "0.0.0.0/0" ], "targetTags": [   "allow-ssh" ], "allowed": [  {    "IPProtocol": "tcp",    "ports": [      "22"    ]  } ],"direction": "INGRESS"}

Create the fw-healthcheck firewall rule by making a POST request to the firewalls.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls{ "name": "fw-healthcheck", "network": "projects/PROJECT_ID/global/networks/NETWORK", "sourceRanges": [   "130.211.0.0/22",   "35.191.0.0/16" ], "targetTags": [   "load-balanced-backend" ], "allowed": [   {     "IPProtocol": "tcp"   } ], "direction": "INGRESS"}

Create the fw-backends firewall rule to allow TCP traffic from the proxy subnet by making a POST request to the firewalls.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls{ "name": "fw-backends", "network": "projects/PROJECT_ID/global/networks/NETWORK", "sourceRanges": [   "10.129.0.0/23",   "10.130.0.0/23" ], "targetTags": [   "load-balanced-backend" ], "allowed": [   {     "IPProtocol": "tcp",     "ports": [       "80"     ]   }, {     "IPProtocol": "tcp",     "ports": [       "443"     ]   },   {     "IPProtocol": "tcp",     "ports": [       "8080"     ]   } ], "direction": "INGRESS"}

Create a managed instance group

This section shows how to create a template and a managed instance group. The managed instance group provides VM instances running the backend servers of an example cross-region internal Application Load Balancer. For your instance group, you can define an HTTP service and map a port name to the relevant port. The backend service of the load balancer forwards traffic to the named ports. Traffic from clients is load balanced to backend servers. For demonstration purposes, backends serve their own hostnames.
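
For example, after the instance groups exist, you can map the named port http to port 80 with the gcloud compute instance-groups set-named-ports command. The following is a sketch that uses the group name and zone placeholder from this guide:

    gcloud compute instance-groups set-named-ports gl7-ilb-mig-a \
        --named-ports=http:80 \
        --zone=ZONE_A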

Console

  1. In the Google Cloud console, go to the Instance templates page.

    Go to Instance templates

    1. Click Create instance template.
    2. For Name, enter gil7-backendeast1-template.
    3. Ensure that the Boot disk is set to a Debian image, such as Debian GNU/Linux 12 (bookworm). These instructions use commands that are only available on Debian, such as apt-get.
    4. Click Advanced options.
    5. Click Networking and configure the following fields:
      1. For Network tags, enter allow-ssh and load-balanced-backend.
      2. For Network interfaces, select the following:
        • Network: NETWORK
        • Subnet: SUBNET_B
    6. Click Management. Enter the following script into the Startup script field.

      #! /bin/bash
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://169.254.169.254/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | \
      tee /var/www/html/index.html
      systemctl restart apache2

    7. Click Create.

    8. Click Create instance template.

    9. For Name, enter gil7-backendwest1-template.

    10. Ensure that the Boot disk is set to a Debian image, such as Debian GNU/Linux 12 (bookworm). These instructions use commands that are only available on Debian, such as apt-get.

    11. Click Advanced options.

    12. Click Networking and configure the following fields:

      1. For Network tags, enter allow-ssh and load-balanced-backend.
      2. For Network interfaces, select the following:
        • Network: NETWORK
        • Subnet: SUBNET_A
    13. Click Management. Enter the following script into the Startup script field.

      #! /bin/bash
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://169.254.169.254/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | \
      tee /var/www/html/index.html
      systemctl restart apache2

    14. Click Create.

  2. In the Google Cloud console, go to the Instance groups page.

    Go to Instance groups

    1. Click Create instance group.
    2. Select New managed instance group (stateless). For more information, see Stateless or stateful MIGs.
    3. For Name, enter gl7-ilb-mig-a.
    4. For Location, select Single zone.
    5. For Region, select REGION_A.
    6. For Zone, select ZONE_A.
    7. For Instance template, select gil7-backendwest1-template.
    8. Specify the number of instances that you want to create in the group.

      For this example, specify the following options under Autoscaling:

      • For Autoscaling mode, select Off: do not autoscale.
      • For Maximum number of instances, enter 2.

      Optionally, in the Autoscaling section of the UI, you can configure the instance group to automatically add or remove instances based on instance CPU usage.

    9. Click Create.

    10. Click Create instance group.

    11. Select New managed instance group (stateless). For more information, see Stateless or stateful MIGs.

    12. For Name, enter gl7-ilb-mig-b.

    13. For Location, select Single zone.

    14. For Region, select REGION_B.

    15. For Zone, select ZONE_B.

    16. For Instance template, select gil7-backendeast1-template.

    17. Specify the number of instances that you want to create in the group.

      For this example, specify the following options under Autoscaling:

      • For Autoscaling mode, select Off: do not autoscale.
      • For Maximum number of instances, enter 2.

      Optionally, in the Autoscaling section of the UI, you can configure the instance group to automatically add or remove instances based on instance CPU usage.

    18. Click Create.

gcloud

The gcloud CLI instructions in this guide assume that you are using Cloud Shell or another environment with bash installed.

  1. Create a VM instance template with an HTTP server by using the gcloud compute instance-templates create command.

    gcloud compute instance-templates create gil7-backendwest1-template \
        --region=REGION_A \
        --network=NETWORK \
        --subnet=SUBNET_A \
        --tags=allow-ssh,load-balanced-backend \
        --image-family=debian-12 \
        --image-project=debian-cloud \
        --metadata=startup-script='#! /bin/bash
          apt-get update
          apt-get install apache2 -y
          a2ensite default-ssl
          a2enmod ssl
          vm_hostname="$(curl -H "Metadata-Flavor:Google" \
          http://169.254.169.254/computeMetadata/v1/instance/name)"
          echo "Page served from: $vm_hostname" | \
          tee /var/www/html/index.html
          systemctl restart apache2'

    gcloud compute instance-templates create gil7-backendeast1-template \
        --region=REGION_B \
        --network=NETWORK \
        --subnet=SUBNET_B \
        --tags=allow-ssh,load-balanced-backend \
        --image-family=debian-12 \
        --image-project=debian-cloud \
        --metadata=startup-script='#! /bin/bash
          apt-get update
          apt-get install apache2 -y
          a2ensite default-ssl
          a2enmod ssl
          vm_hostname="$(curl -H "Metadata-Flavor:Google" \
          http://169.254.169.254/computeMetadata/v1/instance/name)"
          echo "Page served from: $vm_hostname" | \
          tee /var/www/html/index.html
          systemctl restart apache2'

  2. Create a managed instance group in the zone with the gcloud compute instance-groups managed create command.

    gcloud compute instance-groups managed create gl7-ilb-mig-a \
        --zone=ZONE_A \
        --size=2 \
        --template=gil7-backendwest1-template

    gcloud compute instance-groups managed create gl7-ilb-mig-b \
        --zone=ZONE_B \
        --size=2 \
        --template=gil7-backendeast1-template

Terraform

To create the instance template, use the google_compute_instance_template resource.

resource "google_compute_instance_template" "instance_template_a" {  name         = "gil7-backendwest1-template"  provider     = google-beta  machine_type = "e2-small"  region       = "us-west1"  tags         = ["http-server"]  network_interface {    network    = google_compute_network.default.id    subnetwork = google_compute_subnetwork.subnet_a.id    access_config {      # add external ip to fetch packages    }  }  disk {    source_image = "debian-cloud/debian-11"    auto_delete  = true    boot         = true  }  # install nginx and serve a simple web page  metadata = {    startup-script = <<-EOF1      #! /bin/bash      set -euo pipefail      export DEBIAN_FRONTEND=noninteractive      apt-get update      apt-get install -y nginx-light jq      NAME=$(curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/hostname")      IP=$(curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip")      METADATA=$(curl -f -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/attributes/?recursive=True" | jq 'del(.["startup-script"])')      cat <<EOF > /var/www/html/index.html      <pre>      Name: $NAME      IP: $IP      Metadata: $METADATA      </pre>      EOF    EOF1  }  lifecycle {    create_before_destroy = true  }}
resource "google_compute_instance_template" "instance_template_b" {  name         = "gil7-backendeast1-template"  provider     = google-beta  machine_type = "e2-small"  region       = "us-east1"  tags         = ["http-server"]  network_interface {    network    = google_compute_network.default.id    subnetwork = google_compute_subnetwork.subnet_b.id    access_config {      # add external ip to fetch packages    }  }  disk {    source_image = "debian-cloud/debian-11"    auto_delete  = true    boot         = true  }  # install nginx and serve a simple web page  metadata = {    startup-script = <<-EOF1      #! /bin/bash      set -euo pipefail      export DEBIAN_FRONTEND=noninteractive      apt-get update      apt-get install -y nginx-light jq      NAME=$(curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/hostname")      IP=$(curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip")      METADATA=$(curl -f -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/attributes/?recursive=True" | jq 'del(.["startup-script"])')      cat <<EOF > /var/www/html/index.html      <pre>      Name: $NAME      IP: $IP      Metadata: $METADATA      </pre>      EOF    EOF1  }  lifecycle {    create_before_destroy = true  }}

To create the managed instance group, use the google_compute_region_instance_group_manager resource.

resource "google_compute_region_instance_group_manager" "mig_a" {  name     = "gl7-ilb-miga"  provider = google-beta  region   = "us-west1"  version {    instance_template = google_compute_instance_template.instance_template_a.id    name              = "primary"  }  base_instance_name = "vm"  target_size        = 2}
resource "google_compute_region_instance_group_manager" "mig_b" {  name     = "gl7-ilb-migb"  provider = google-beta  region   = "us-east1"  version {    instance_template = google_compute_instance_template.instance_template_b.id    name              = "primary"  }  base_instance_name = "vm"  target_size        = 2}

API

Create the instance template with the instanceTemplates.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/instanceTemplates

{
  "name": "gil7-backendwest1-template",
  "properties": {
    "machineType": "e2-standard-2",
    "tags": {
      "items": [
        "allow-ssh",
        "load-balanced-backend"
      ]
    },
    "metadata": {
      "kind": "compute#metadata",
      "items": [
        {
          "key": "startup-script",
          "value": "#! /bin/bash\napt-get update\napt-get install apache2 -y\na2ensite default-ssl\na2enmod ssl\nvm_hostname=\"$(curl -H \"Metadata-Flavor:Google\" \\\nhttp://169.254.169.254/computeMetadata/v1/instance/name)\"\necho \"Page served from: $vm_hostname\" | \\\ntee /var/www/html/index.html\nsystemctl restart apache2"
        }
      ]
    },
    "networkInterfaces": [
      {
        "network": "projects/PROJECT_ID/global/networks/NETWORK",
        "subnetwork": "regions/REGION_A/subnetworks/SUBNET_A",
        "accessConfigs": [
          {
            "type": "ONE_TO_ONE_NAT"
          }
        ]
      }
    ],
    "disks": [
      {
        "index": 0,
        "boot": true,
        "initializeParams": {
          "sourceImage": "projects/debian-cloud/global/images/family/debian-12"
        },
        "autoDelete": true
      }
    ]
  }
}

Create a managed instance group in each zone with the instanceGroupManagers.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE_A/instanceGroupManagers

{
  "name": "gl7-ilb-mig-a",
  "zone": "projects/PROJECT_ID/zones/ZONE_A",
  "instanceTemplate": "projects/PROJECT_ID/global/instanceTemplates/gil7-backendwest1-template",
  "baseInstanceName": "gl7-ilb-mig-a",
  "targetSize": 2
}
Create the second instance template with the instanceTemplates.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/instanceTemplates

{
  "name": "gil7-backendeast1-template",
  "properties": {
    "machineType": "e2-standard-2",
    "tags": {
      "items": [
        "allow-ssh",
        "load-balanced-backend"
      ]
    },
    "metadata": {
      "kind": "compute#metadata",
      "items": [
        {
          "key": "startup-script",
          "value": "#! /bin/bash\napt-get update\napt-get install apache2 -y\na2ensite default-ssl\na2enmod ssl\nvm_hostname=\"$(curl -H \"Metadata-Flavor:Google\" \\\nhttp://169.254.169.254/computeMetadata/v1/instance/name)\"\necho \"Page served from: $vm_hostname\" | \\\ntee /var/www/html/index.html\nsystemctl restart apache2"
        }
      ]
    },
    "networkInterfaces": [
      {
        "network": "projects/PROJECT_ID/global/networks/NETWORK",
        "subnetwork": "regions/REGION_B/subnetworks/SUBNET_B",
        "accessConfigs": [
          {
            "type": "ONE_TO_ONE_NAT"
          }
        ]
      }
    ],
    "disks": [
      {
        "index": 0,
        "boot": true,
        "initializeParams": {
          "sourceImage": "projects/debian-cloud/global/images/family/debian-12"
        },
        "autoDelete": true
      }
    ]
  }
}

Create a managed instance group in each zone with the instanceGroupManagers.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE_B/instanceGroupManagers

{
  "name": "gl7-ilb-mig-b",
  "zone": "projects/PROJECT_ID/zones/ZONE_B",
  "instanceTemplate": "projects/PROJECT_ID/global/instanceTemplates/gil7-backendeast1-template",
  "baseInstanceName": "gl7-ilb-mig-b",
  "targetSize": 2
}

Configure the load balancer

This example shows you how to create the following cross-region internal Application Load Balancer resources:

  • A global HTTP health check.
  • A global backend service with the managed instance groups as the backend.
  • A URL map. Make sure to refer to a global URL map for the target HTTP(S) proxy. A global URL map routes requests to a global backend service based on rules that you define for the host and path of an incoming URL, as illustrated in the sketch after this list. A global URL map can be referenced by a global target proxy rule.
  • A global SSL certificate (for HTTPS).
  • A global target proxy.
  • Two global forwarding rules with regional IP addresses. For the forwarding rule's IP address, use the SUBNET_A or SUBNET_B IP address range. If you try to use the proxy-only subnet, forwarding rule creation fails.
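
For example, after you create the URL map later in this section, you can add host and path rules to it. The following is a sketch; example-matcher and VIDEO_BACKEND_SERVICE are hypothetical names used only for illustration:

    gcloud compute url-maps add-path-matcher gl7-gilb-url-map \
        --path-matcher-name=example-matcher \
        --default-service=BACKEND_SERVICE_NAME \
        --path-rules='/video/*=VIDEO_BACKEND_SERVICE' \
        --new-hosts=example.net \
        --global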

Console

Select the load balancer type

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. Click Create load balancer.
  3. For Type of load balancer, select Application Load Balancer (HTTP/HTTPS) and click Next.
  4. For Public facing or internal, select Internal and click Next.
  5. For Cross-region or single region deployment, select Best for cross-region workloads and click Next.
  6. Click Configure.

Basic configuration

  1. Provide a Name for the load balancer.
  2. For Network, select NETWORK.

Configure the frontend with two forwarding rules

For HTTP:

  1. Click Frontend configuration.
    1. Provide a Name for the forwarding rule.
    2. In the Subnetwork region list, select REGION_A.

      Reserve a proxy-only subnet

    3. If you already configured the proxy-only subnet, the Reserve subnet button isn't displayed. You can continue with the next steps.
    4. In the Subnetwork list, select SUBNET_A.
    5. In the IP address list, click Create IP address. The Reserve a static internal IP address page opens.
      • Provide a Name for the static IP address.
      • In the Static IP address list, select Let me choose.
      • In the Custom IP address field, enter 10.1.2.99.
      • Select Reserve.
  2. Click Done.
  3. To add the second forwarding rule, click Add frontend IP and port.
    1. Provide a Name for the forwarding rule.
    2. In the Subnetwork region list, select REGION_B.

      Reserve a proxy-only subnet

    3. Because you have already configured the proxy-only subnet, the Reserve subnet button isn't displayed. You can continue with the next steps.
    4. In the Subnetwork list, select SUBNET_B.
    5. In the IP address list, click Create IP address. The Reserve a static internal IP address page opens.
      • Provide a Name for the static IP address.
      • In the Static IP address list, select Let me choose.
      • In the Custom IP address field, enter 10.1.3.99.
      • Select Reserve.
  4. Click Done.

For HTTPS:

To assign an SSL certificate to the target HTTPS proxy of the load balancer, you need to use a Certificate Manager certificate.

  1. Click Frontend configuration.
    1. Provide a Name for the forwarding rule.
    2. In the Protocol field, select HTTPS (includes HTTP/2).
    3. Ensure that the Port is set to 443.
    4. In the Subnetwork region list, select REGION_A.

      Reserve a proxy-only subnet

    5. Because you have already configured the proxy-only subnet, the Reserve subnet button isn't displayed. You can continue with the next steps.
    6. In the Subnetwork list, select SUBNET_A.
    7. In the IP address list, click Create IP address. The Reserve a static internal IP address page opens.
      • Provide a Name for the static IP address.
      • In the Static IP address list, select Let me choose.
      • In the Custom IP address field, enter 10.1.2.99.
      • Select Reserve.
    8. Click Add certificate to select an existing certificate or create a new certificate.

      If you already have an existing Certificate Manager certificate to select, do the following:

      1. Click Add Certificate.
      2. Click Select an existing certificate and select the certificate from the list of certificates.
      3. Click Select.

      After you select the new Certificate Manager certificate, it appears in the list of certificates.

      To create a new Certificate Manager certificate, do the following:

      1. Click Add Certificate.
      2. Click Create a new certificate.
      3. To create a new certificate, follow the steps starting from step 3 as outlined in any one of the following configuration methods in the Certificate Manager documentation:
    9. Select an SSL policy from the SSL policy list. If you have not created any SSL policies, a default Google Cloud SSL policy is applied.
    10. Click Done.

    Add the second frontend configuration:

    1. Provide a Name for the frontend configuration.
    2. In the Protocol field, select HTTPS (includes HTTP/2).
    3. Ensure that the Port is set to 443.
    4. In the Subnetwork region list, select REGION_B.

      Reserve a proxy-only subnet

    5. Because you have already configured the proxy-only subnet, the Reserve subnet button isn't displayed. You can continue with the next steps.
    6. In the Subnetwork list, select SUBNET_B.
    7. In the IP address list, click Create IP address. The Reserve a static internal IP address page opens.
      • Provide a Name for the static IP address.
      • In the Static IP address list, select Let me choose.
      • In the Custom IP address field, enter 10.1.3.99.
      • Select Reserve.
    8. Click Add certificate and then select an existing certificate or create a new certificate.
    9. Select an SSL policy from the SSL policy list. If you have not created any SSL policies, a default Google Cloud SSL policy is applied.
    10. Click Done.
    Configure the backend service

    1. Click Backend configuration.
    2. In the Create or select backend services list, click Create a backend service.
    3. Provide a Name for the backend service.
    4. For Protocol, select HTTP.
    5. For Named Port, enter http.
    6. In the Backend type list, select Instance group.
    7. In the Health check list, click Create a health check, and then enter the following information:
      • In the Name field, enter global-http-health-check.
      • In the Protocol list, select HTTP.
      • In the Port field, enter 80.
      • Click Create.
    8. In the New backend section:
      1. In the Instance group list, select gl7-ilb-mig-a in REGION_A.
      2. Set Port numbers to 80.
      3. For the Balancing mode, select Utilization.
      4. Click Done.
      5. To add another backend, click Add backend.
      6. In the Instance group list, select gl7-ilb-mig-b in REGION_B.
      7. Set Port numbers to 80.
      8. Click Done.

    Configure the routing rules

    1. Click Routing rules.
    2. For Mode, select Simple host and path rule.
    3. Ensure that there is only one backend service for any unmatched host and any unmatched path.

    Review the configuration

    1. Click Review and finalize.
    2. Review your load balancer configuration settings.
    3. Click Create.

gcloud

  1. Define the HTTP health check with the gcloud compute health-checks create http command.

    gcloud compute health-checks create http global-http-health-check \
        --use-serving-port \
        --global

  2. Define the backend service with the gcloud compute backend-services create command.

    gcloud compute backend-services create BACKEND_SERVICE_NAME \
        --load-balancing-scheme=INTERNAL_MANAGED \
        --protocol=HTTP \
        --enable-logging \
        --logging-sample-rate=1.0 \
        --health-checks=global-http-health-check \
        --global-health-checks \
        --global

  3. Add backends to the backend service with the gcloud compute backend-services add-backend command.

    gcloud compute backend-services add-backend BACKEND_SERVICE_NAME \
        --balancing-mode=UTILIZATION \
        --instance-group=gl7-ilb-mig-a \
        --instance-group-zone=ZONE_A \
        --global

    gcloud compute backend-services add-backend BACKEND_SERVICE_NAME \
        --balancing-mode=UTILIZATION \
        --instance-group=gl7-ilb-mig-b \
        --instance-group-zone=ZONE_B \
        --global

  4. Create the URL map with the gcloud compute url-maps create command.

    gcloud compute url-maps create gl7-gilb-url-map \
        --default-service=BACKEND_SERVICE_NAME \
        --global

  5. Create the target proxy.

    For HTTP:

    Create the target proxy with the gcloud compute target-http-proxies create command.

    gcloud compute target-http-proxies create gil7-http-proxy \
        --url-map=gl7-gilb-url-map \
        --global

    For HTTPS:

    To create a Google-managed certificate, see the following documentation:

    After you create the Google-managed certificate, attach the certificate directly to the target proxy. Certificate maps are not supported by cross-region internal Application Load Balancers.
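
    One possible flow for creating a Google-managed certificate with Certificate Manager uses DNS authorization. The following is a sketch under that assumption; gilb-dns-authz is a name chosen for this example, and you must also add the CNAME record returned by the first command to your DNS zone before the certificate can be provisioned.

    gcloud certificate-manager dns-authorizations create gilb-dns-authz \
        --domain="DOMAIN_NAME"

    gcloud certificate-manager certificates create gilb-certificate \
        --domains="DOMAIN_NAME" \
        --dns-authorizations=gilb-dns-authz \
        --scope=all-regions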

    To create a self-managed certificate, see the following documentation:

    Assign your file paths to variable names.

    export LB_CERT=PATH_TO_PEM_FORMATTED_FILE
    export LB_PRIVATE_KEY=PATH_TO_LB_PRIVATE_KEY_FILE

    Create an SSL certificate with all-regions scope using the gcloud certificate-manager certificates create command.

    gcloud certificate-manager certificates create gilb-certificate \
        --private-key-file=$LB_PRIVATE_KEY \
        --certificate-file=$LB_CERT \
        --scope=all-regions

    Use the SSL certificate to create a target proxy with the gcloud compute target-https-proxies create command.

    gcloud compute target-https-proxies create gil7-https-proxy \
        --url-map=gl7-gilb-url-map \
        --certificate-manager-certificates=gilb-certificate \
        --global
  6. Create two forwarding rules, one with a VIP (10.1.2.99) in the REGION_A region and another one with a VIP (10.1.3.99) in the REGION_B region. For more information, see Reserve a static internal IPv4 address.

    For custom networks, you must reference the subnet in the forwarding rule. Note that this is the VM subnet, not the proxy subnet.

    For HTTP:

    Use the gcloud compute forwarding-rules create command with the correct flags.

    gcloud compute forwarding-rules create FWRULE_A \
        --load-balancing-scheme=INTERNAL_MANAGED \
        --network=NETWORK \
        --subnet=SUBNET_A \
        --subnet-region=REGION_A \
        --address=10.1.2.99 \
        --ports=80 \
        --target-http-proxy=gil7-http-proxy \
        --global

    gcloud compute forwarding-rules create FWRULE_B \
        --load-balancing-scheme=INTERNAL_MANAGED \
        --network=NETWORK \
        --subnet=SUBNET_B \
        --subnet-region=REGION_B \
        --address=10.1.3.99 \
        --ports=80 \
        --target-http-proxy=gil7-http-proxy \
        --global

    For HTTPS:

    Use the gcloud compute forwarding-rules create command with the correct flags.

    gcloud compute forwarding-rules create FWRULE_A \
        --load-balancing-scheme=INTERNAL_MANAGED \
        --network=NETWORK \
        --subnet=SUBNET_A \
        --subnet-region=REGION_A \
        --address=10.1.2.99 \
        --ports=443 \
        --target-https-proxy=gil7-https-proxy \
        --global

    gcloud compute forwarding-rules create FWRULE_B \
        --load-balancing-scheme=INTERNAL_MANAGED \
        --network=NETWORK \
        --subnet=SUBNET_B \
        --subnet-region=REGION_B \
        --address=10.1.3.99 \
        --ports=443 \
        --target-https-proxy=gil7-https-proxy \
        --global

Terraform

To create the health check, use the google_compute_health_check resource.

resource "google_compute_health_check" "default" {
  provider = google-beta
  name     = "global-http-health-check"

  http_health_check {
    port_specification = "USE_SERVING_PORT"
  }
}

To create the backend service, use the google_compute_backend_service resource.

resource "google_compute_backend_service" "default" {
  name                  = "gl7-gilb-backend-service"
  provider              = google-beta
  protocol              = "HTTP"
  load_balancing_scheme = "INTERNAL_MANAGED"
  timeout_sec           = 10
  health_checks         = [google_compute_health_check.default.id]

  backend {
    group           = google_compute_region_instance_group_manager.mig_a.instance_group
    balancing_mode  = "UTILIZATION"
    capacity_scaler = 1.0
  }

  backend {
    group           = google_compute_region_instance_group_manager.mig_b.instance_group
    balancing_mode  = "UTILIZATION"
    capacity_scaler = 1.0
  }
}

To create the URL map, use the google_compute_url_map resource.

resource "google_compute_url_map" "default" {
  name            = "gl7-gilb-url-map"
  provider        = google-beta
  default_service = google_compute_backend_service.default.id
}

To create the target HTTP proxy, use the google_compute_target_http_proxy resource.

resource "google_compute_target_http_proxy" "default" {
  name     = "gil7target-http-proxy"
  provider = google-beta
  url_map  = google_compute_url_map.default.id
}

To create the forwarding rules, use the google_compute_global_forwarding_rule resource.

resource "google_compute_global_forwarding_rule" "fwd_rule_a" {
  provider              = google-beta
  depends_on            = [google_compute_subnetwork.proxy_subnet_a]
  ip_address            = "10.1.2.99"
  ip_protocol           = "TCP"
  load_balancing_scheme = "INTERNAL_MANAGED"
  name                  = "gil7forwarding-rule-a"
  network               = google_compute_network.default.id
  port_range            = "80"
  target                = google_compute_target_http_proxy.default.id
  subnetwork            = google_compute_subnetwork.subnet_a.id
}

resource "google_compute_global_forwarding_rule" "fwd_rule_b" {
  provider              = google-beta
  depends_on            = [google_compute_subnetwork.proxy_subnet_b]
  ip_address            = "10.1.3.99"
  ip_protocol           = "TCP"
  load_balancing_scheme = "INTERNAL_MANAGED"
  name                  = "gil7forwarding-rule-b"
  network               = google_compute_network.default.id
  port_range            = "80"
  target                = google_compute_target_http_proxy.default.id
  subnetwork            = google_compute_subnetwork.subnet_b.id
}

To learn how to apply or remove a Terraform configuration, see Basic Terraform commands.

API

Create the health check by making a POST request to the healthChecks.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/healthChecks

{
  "name": "global-http-health-check",
  "type": "HTTP",
  "httpHealthCheck": {
    "portSpecification": "USE_SERVING_PORT"
  }
}

Create the global backend service by making a POST request to the backendServices.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendServices

{
  "name": "BACKEND_SERVICE_NAME",
  "backends": [
    {
      "group": "projects/PROJECT_ID/zones/ZONE_A/instanceGroups/gl7-ilb-mig-a",
      "balancingMode": "UTILIZATION"
    },
    {
      "group": "projects/PROJECT_ID/zones/ZONE_B/instanceGroups/gl7-ilb-mig-b",
      "balancingMode": "UTILIZATION"
    }
  ],
  "healthChecks": [
    "projects/PROJECT_ID/global/healthChecks/global-http-health-check"
  ],
  "loadBalancingScheme": "INTERNAL_MANAGED"
}

Create the URL map by making a POST request to the urlMaps.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/urlMaps

{
  "name": "l7-ilb-map",
  "defaultService": "projects/PROJECT_ID/global/backendServices/BACKEND_SERVICE_NAME"
}

For HTTP:

Create the target HTTP proxy by making a POST request to the targetHttpProxies.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/targetHttpProxies

{
  "name": "l7-ilb-proxy",
  "urlMap": "projects/PROJECT_ID/global/urlMaps/l7-ilb-map"
}

Create the forwarding rule by making a POST request to the forwardingRules.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/forwardingRules

{
  "name": "FWRULE_A",
  "IPAddress": "10.1.2.99",
  "IPProtocol": "TCP",
  "portRange": "80-80",
  "target": "projects/PROJECT_ID/global/targetHttpProxies/l7-ilb-proxy",
  "loadBalancingScheme": "INTERNAL_MANAGED",
  "subnetwork": "projects/PROJECT_ID/regions/REGION_A/subnetworks/SUBNET_A",
  "network": "projects/PROJECT_ID/global/networks/NETWORK",
  "networkTier": "PREMIUM"
}

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/forwardingRules

{
  "name": "FWRULE_B",
  "IPAddress": "10.1.3.99",
  "IPProtocol": "TCP",
  "portRange": "80-80",
  "target": "projects/PROJECT_ID/global/targetHttpProxies/l7-ilb-proxy",
  "loadBalancingScheme": "INTERNAL_MANAGED",
  "subnetwork": "projects/PROJECT_ID/regions/REGION_B/subnetworks/SUBNET_B",
  "network": "projects/PROJECT_ID/global/networks/NETWORK",
  "networkTier": "PREMIUM"
}

For HTTPS:

Read the certificate and private key files, and then create the SSL certificate. The following example shows how to do this with Python.

from pathlib import Path
from pprint import pprint
from typing import Union

from googleapiclient import discovery


def create_regional_certificate(
    project_id: str,
    region: str,
    certificate_file: Union[str, Path],
    private_key_file: Union[str, Path],
    certificate_name: str,
    description: str = "Certificate created from a code sample.",
) -> dict:
    """
    Create a regional SSL self-signed certificate within your Google Cloud project.

    Args:
        project_id: project ID or project number of the Cloud project you want to use.
        region: name of the region you want to use.
        certificate_file: path to the file with the certificate you want to create in your project.
        private_key_file: path to the private key you used to sign the certificate with.
        certificate_name: name for the certificate once it's created in your project.
        description: description of the certificate.

    Returns:
        Dictionary with information about the new regional SSL self-signed certificate.
    """
    service = discovery.build("compute", "v1")

    # Read the cert into memory
    with open(certificate_file) as f:
        _temp_cert = f.read()

    # Read the private_key into memory
    with open(private_key_file) as f:
        _temp_key = f.read()

    # Now that the certificate and private key are in memory, you can create the
    # certificate resource
    ssl_certificate_body = {
        "name": certificate_name,
        "description": description,
        "certificate": _temp_cert,
        "privateKey": _temp_key,
    }
    request = service.regionSslCertificates().insert(
        project=project_id, region=region, body=ssl_certificate_body
    )
    response = request.execute()
    pprint(response)

    return response

Create the target HTTPS proxy by making a POST request to the targetHttpsProxies.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/targetHttpsProxies

{
  "name": "l7-ilb-proxy",
  "urlMap": "projects/PROJECT_ID/global/urlMaps/l7-ilb-map",
  "sslCertificates": [
    "projects/PROJECT_ID/global/sslCertificates/SSL_CERT_NAME"
  ]
}

Create the forwarding rule by making a POST request to the globalForwardingRules.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/forwardingRules

{
  "name": "FWRULE_A",
  "IPAddress": "10.1.2.99",
  "IPProtocol": "TCP",
  "portRange": "443-443",
  "target": "projects/PROJECT_ID/global/targetHttpsProxies/l7-ilb-proxy",
  "loadBalancingScheme": "INTERNAL_MANAGED",
  "subnetwork": "projects/PROJECT_ID/regions/REGION_A/subnetworks/SUBNET_A",
  "network": "projects/PROJECT_ID/global/networks/NETWORK",
  "networkTier": "PREMIUM"
}

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/forwardingRules

{
  "name": "FWRULE_B",
  "IPAddress": "10.1.3.99",
  "IPProtocol": "TCP",
  "portRange": "443-443",
  "target": "projects/PROJECT_ID/global/targetHttpsProxies/l7-ilb-proxy",
  "loadBalancingScheme": "INTERNAL_MANAGED",
  "subnetwork": "projects/PROJECT_ID/regions/REGION_B/subnetworks/SUBNET_B",
  "network": "projects/PROJECT_ID/global/networks/NETWORK",
  "networkTier": "PREMIUM"
}

Test the load balancer

Create a VM instance to test connectivity

  1. Create a client VM:

    gcloud compute instances create l7-ilb-client-a \
        --image-family=debian-12 \
        --image-project=debian-cloud \
        --network=NETWORK \
        --subnet=SUBNET_A \
        --zone=ZONE_A \
        --tags=allow-ssh

    gcloud compute instances create l7-ilb-client-b \
        --image-family=debian-12 \
        --image-project=debian-cloud \
        --network=NETWORK \
        --subnet=SUBNET_B \
        --zone=ZONE_B \
        --tags=allow-ssh

  2. Use SSH to connect to each client instance.

    gcloud compute ssh l7-ilb-client-a --zone=ZONE_A

    gcloud compute ssh l7-ilb-client-b --zone=ZONE_B
  3. Verify that the IP address is serving its hostname:

    • Verify that the client VM can reach both IP addresses. The command returns the name of the backend VM which served the request:

      curl 10.1.2.99

      curl 10.1.3.99

      For HTTPS testing, replace curl with the following command line:

      curl -k 'https://DOMAIN_NAME:443' --connect-to DOMAIN_NAME:443:10.1.2.99:443

      curl -k 'https://DOMAIN_NAME:443' --connect-to DOMAIN_NAME:443:10.1.3.99:443

      Replace DOMAIN_NAME with your application domain name, for example, test.example.com.

      The -k flag causes curl to skip certificate validation.

    • Optional: Use the configured DNS record to resolve the IP address closest to the client VM. For example, DNS_NAME can be service.example.com.

      curl DNS_NAME

Run 100 requests and confirm that they are load balanced

For HTTP:

  {
    RESULTS=
    for i in {1..100}
    do
      RESULTS="$RESULTS:$(curl --silent 10.1.2.99)"
    done
    echo ""
    echo " Results of load-balancing to 10.1.2.99: "
    echo "***"
    echo "$RESULTS" | tr ':' '\n' | grep -Ev "^$" | sort | uniq -c
    echo
  }

  {
    RESULTS=
    for i in {1..100}
    do
      RESULTS="$RESULTS:$(curl --silent 10.1.3.99)"
    done
    echo ""
    echo " Results of load-balancing to 10.1.3.99: "
    echo "***"
    echo "$RESULTS" | tr ':' '\n' | grep -Ev "^$" | sort | uniq -c
    echo
  }

For HTTPS:

In the following scripts, replace DOMAIN_NAME with your application domain name, for example, test.example.com.

  {
    RESULTS=
    for i in {1..100}
    do
      RESULTS="$RESULTS:$(curl -k -s 'https://DOMAIN_NAME:443' --connect-to DOMAIN_NAME:443:10.1.2.99:443)"
    done
    echo ""
    echo " Results of load-balancing to 10.1.2.99: "
    echo "***"
    echo "$RESULTS" | tr ':' '\n' | grep -Ev "^$" | sort | uniq -c
    echo
  }

  {
    RESULTS=
    for i in {1..100}
    do
      RESULTS="$RESULTS:$(curl -k -s 'https://DOMAIN_NAME:443' --connect-to DOMAIN_NAME:443:10.1.3.99:443)"
    done
    echo ""
    echo " Results of load-balancing to 10.1.3.99: "
    echo "***"
    echo "$RESULTS" | tr ':' '\n' | grep -Ev "^$" | sort | uniq -c
    echo
  }

Test failover

  1. Verify failover to backends in the REGION_A region when backends in the REGION_B region are unhealthy or unreachable. To simulate failover, remove all backends from REGION_B:

    gcloud compute backend-services remove-backend BACKEND_SERVICE_NAME \
        --instance-group=gl7-ilb-mig-b \
        --instance-group-zone=ZONE_B \
        --global

  2. Connect using SSH to a client VM in REGION_B.

    gcloud compute ssh l7-ilb-client-b \
        --zone=ZONE_B

  3. Send requests to the load balanced IP address in the REGION_B region. The command output shows responses from backend VMs in REGION_A.

    In the following script, replace DOMAIN_NAME with your application domain name, for example, test.example.com.

    {
      RESULTS=
      for i in {1..100}
      do
        RESULTS="$RESULTS:$(curl -k -s 'https://DOMAIN_NAME:443' --connect-to DOMAIN_NAME:443:10.1.3.99:443)"
      done
      echo "***"
      echo "*** Results of load-balancing to 10.1.3.99: "
      echo "***"
      echo "$RESULTS" | tr ':' '\n' | grep -Ev "^$" | sort | uniq -c
      echo
    }
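
After you finish testing, you can restore the original configuration by re-adding the REGION_B backend to the backend service:

    gcloud compute backend-services add-backend BACKEND_SERVICE_NAME \
        --balancing-mode=UTILIZATION \
        --instance-group=gl7-ilb-mig-b \
        --instance-group-zone=ZONE_B \
        --global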

Additional configuration options

This section expands on the configuration example to provide alternative andadditional configuration options. All of the tasks are optional. You canperform them in any order.

Enable session affinity

These procedures show you how to update a backend service for the exampleregional internal Application Load Balancer or cross-region internal Application Load Balancerso that the backend serviceuses generated cookie affinity, header field affinity, or HTTP cookie affinity.

When generated cookie affinity is enabled, the load balancer issues a cookieon the first request. For each subsequent request with the same cookie, the loadbalancer directs the request to the same backend virtual machine (VM) instanceor endpoint. In this example, the cookie is namedGCILB.

When header field affinity is enabled, the load balancer routes requests tobackend VMs or endpoints in a network endpoint group (NEG) based on the value ofthe HTTP header named in the--custom-request-header flag.Header field affinity is only valid ifthe load balancing locality policy is eitherRING_HASH orMAGLEV and thebackend service's consistent hash specifies the name of the HTTP header.

When HTTP cookie affinity is enabled, the load balancer routes requests to backend VMs or endpoints in a NEG, based on an HTTP cookie named in the HTTP_COOKIE flag with the optional --affinity-cookie-ttl flag. If the client doesn't provide the cookie in its HTTP request, the proxy generates the cookie and returns it to the client in a Set-Cookie header. HTTP cookie affinity is only valid if the load balancing locality policy is either RING_HASH or MAGLEV and the backend service's consistent hash specifies the HTTP cookie.
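
Because header field and HTTP cookie affinity depend on the backend service's consistentHash and localityLbPolicy settings, one way to configure them is to export the backend service to YAML, edit it, and import it again (the same export-and-import pattern shown in the outlier detection section later in this document). The following fragment is a minimal sketch for header field affinity; the header name X-Session-ID is a hypothetical example:

    # Relevant fields of the exported backend service YAML:
    sessionAffinity: HEADER_FIELD
    localityLbPolicy: RING_HASH
    consistentHash:
      httpHeaderName: X-Session-ID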

Console

To enable or change session affinity for a backend service:

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. Click Backends.
  3. Click gil7-backend-service (the name of the backend service you created for this example), and then click Edit.
  4. On the Backend service details page, click Advanced configuration.
  5. Under Session affinity, select the type of session affinity that you want.
  6. Click Update.

gcloud

Use the following Google Cloud CLI command to update the backend service with the type of session affinity that you want:

    gcloud compute backend-services update gil7-backend-service \
        --session-affinity=[GENERATED_COOKIE | HEADER_FIELD | HTTP_COOKIE | CLIENT_IP] \
        --global
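
For example, to enable generated cookie affinity with a one-hour cookie lifetime, you might run the following; the TTL value is illustrative:

    gcloud compute backend-services update gil7-backend-service \
        --session-affinity=GENERATED_COOKIE \
        --affinity-cookie-ttl=3600 \
        --global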

API

To set session affinity, make a `PATCH` request to the backendServices/patch method.

    PATCH https://compute.googleapis.com/compute/v1/projects/[PROJECT_ID]/global/backendServices/gil7-backend-service

    {
      "sessionAffinity": ["GENERATED_COOKIE" | "HEADER_FIELD" | "HTTP_COOKIE" | "CLIENT_IP" ]
    }

Restrict which clients can send traffic to the load balancer

You can restrict clients from connecting to an internal Application Load Balancer forwarding rule VIP by configuring egress firewall rules on these clients. Set these firewall rules on specific client VMs based on service accounts or tags.

You can't use firewall rules to restrict inbound traffic to specific internal Application Load Balancer forwarding rule VIPs. Any client on the same VPC network and in the same region as the forwarding rule VIP can generally send traffic to the forwarding rule VIP.

Additionally, all requests to backends come from proxies that use IP addresses in the proxy-only subnet range. It isn't possible to create firewall rules that allow or deny ingress traffic on these backends based on the forwarding rule VIP used by a client.

Here are some examples of how to use egress firewall rules to restrict traffic to the load balancer's forwarding rule VIP.

Console

To identify the client VMs, tag the specific VMs you want to restrict. These tags are used to associate firewall rules with the tagged client VMs. Then, add the tag to the TARGET_TAG field in the following steps.

Use either a single firewall rule or multiple rules to set this up.

Single egress firewall rule

You can configure one firewall egress rule to deny all egress traffic going from tagged client VMs to a load balancer's VIP.

  1. In the Google Cloud console, go to the Firewall rules page.

    Go to Firewall rules

  2. Click Create firewall rule to create the rule to deny egress traffic from tagged client VMs to a load balancer's VIP.

    • Name: fr-deny-access
    • Network: lb-network
    • Priority: 100
    • Direction of traffic: Egress
    • Action on match: Deny
    • Targets: Specified target tags
    • Target tags: TARGET_TAG
    • Destination filter: IP ranges
    • Destination IP ranges: 10.1.2.99
    • Protocols and ports:
      • Choose Specified protocols and ports.
      • Select the TCP checkbox, and then enter 80 for the port number.
  3. Click Create.

Multiple egress firewall rules

A more scalable approach involves setting two rules: a default, low-priority rule that restricts all clients from accessing the load balancer's VIP, and a second, higher-priority rule that allows a subset of tagged clients to access the VIP. Only tagged VMs can access the VIP.

  1. In the Google Cloud console, go to the Firewall rules page.

    Go to Firewall rules

  2. Click Create firewall rule to create the lower-priority rule to deny access by default:

    • Name: fr-deny-all-access-low-priority
    • Network: lb-network
    • Priority: 200
    • Direction of traffic: Egress
    • Action on match: Deny
    • Targets: Specified target tags
    • Target tags: TARGET_TAG
    • Destination filter: IP ranges
    • Destination IP ranges: 10.1.2.99
    • Protocols and ports:
      • Choose Specified protocols and ports.
      • Select the TCP checkbox, and then enter 80 for the port number.
  3. Click Create.

  4. Click Create firewall rule to create the higher-priority rule to allow traffic from certain tagged instances:

    • Name: fr-allow-some-access-high-priority
    • Network: lb-network
    • Priority: 100
    • Direction of traffic: Egress
    • Action on match: Allow
    • Targets: Specified target tags
    • Target tags: TARGET_TAG
    • Destination filter: IP ranges
    • Destination IP ranges: 10.1.2.99
    • Protocols and ports:
      • Choose Specified protocols and ports.
      • Select the TCP checkbox, and then enter 80 for the port number.
  5. Click Create.

gcloud

To identify the client VMs, tag the specific VMs you want to restrict. Then, add the tag to the TARGET_TAG field in these steps.

Use either a single firewall rule or multiple rules to set this up.

Single egress firewall rule

You can configure one firewall egress rule to deny all egress traffic going from tagged client VMs to a load balancer's VIP.

gcloud compute firewall-rules create fr-deny-access \
    --network=lb-network \
    --action=deny \
    --direction=egress \
    --rules=tcp \
    --priority=100 \
    --destination-ranges=10.1.2.99 \
    --target-tags=TARGET_TAG

Multiple egress firewall rules

A more scalable approach involves setting two rules: a default, low-priority rule that restricts all clients from accessing the load balancer's VIP, and a second, higher-priority rule that allows a subset of tagged clients to access the load balancer's VIP. Only tagged VMs can access the VIP.

  1. Create the lower-priority rule:

    gcloud compute firewall-rules create fr-deny-all-access-low-priority \
        --network=lb-network \
        --action=deny \
        --direction=egress \
        --rules=tcp \
        --priority=200 \
        --destination-ranges=10.1.2.99
  2. Create the higher-priority rule:

    gcloud compute firewall-rules create fr-allow-some-access-high-priority \
        --network=lb-network \
        --action=allow \
        --direction=egress \
        --rules=tcp \
        --priority=100 \
        --destination-ranges=10.1.2.99 \
        --target-tags=TARGET_TAG

To use service accounts instead of tags to control access, use the --target-service-accounts option instead of the --target-tags flag when creating firewall rules.
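
For example, a service-account-based version of the single deny rule might look like the following; the rule name fr-deny-access-sa and SA_EMAIL (the client VMs' service account) are hypothetical placeholders:

    gcloud compute firewall-rules create fr-deny-access-sa \
        --network=lb-network \
        --action=deny \
        --direction=egress \
        --rules=tcp \
        --priority=100 \
        --destination-ranges=10.1.2.99 \
        --target-service-accounts=SA_EMAIL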

Scale restricted access to internal Application Load Balancer backends based on subnets

Maintaining separate firewall rules or adding new load-balanced IP addresses to existing rules as described in the previous section becomes inconvenient as the number of forwarding rules increases. One way to prevent this is to allocate forwarding rule IP addresses from a reserved subnet. Then, traffic from tagged instances or service accounts can be allowed or blocked by using the reserved subnet as the destination range for firewall rules. This lets you effectively control access to a group of forwarding rule VIPs without having to maintain per-VIP firewall egress rules.

Here are the high-level steps to set this up, assuming that you will create all the other required load balancer resources separately.

gcloud

  1. Create a regional subnet to use for allocating load-balanced IP addresses for forwarding rules:

    gcloud compute networks subnets create l7-ilb-restricted-subnet \
        --network=lb-network \
        --region=us-west1 \
        --range=10.127.0.0/24
  2. Create a forwarding rule that takes an address from the subnet. The following example uses the address 10.127.0.1 from the subnet created in the previous step.

    gcloud compute forwarding-rules create l7-ilb-forwarding-rule-restricted \
        --load-balancing-scheme=INTERNAL_MANAGED \
        --network=lb-network \
        --subnet=l7-ilb-restricted-subnet \
        --address=10.127.0.1 \
        --ports=80 \
        --global \
        --target-http-proxy=gil7-http-proxy
  3. Create a firewall rule to restrict traffic destined for the IP address range of the forwarding rule subnet (l7-ilb-restricted-subnet):

    gcloud compute firewall-rules create restrict-traffic-to-subnet \
        --network=lb-network \
        --action=deny \
        --direction=egress \
        --rules=tcp:80 \
        --priority=100 \
        --destination-ranges=10.127.0.0/24 \
        --target-tags=TARGET_TAG
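
If a subset of clients still needs access, you can pair this deny rule with a higher-priority allow rule that uses the same subnet range as its destination, mirroring the tag-based pattern shown earlier. A sketch; the rule name allow-traffic-to-subnet and the ALLOWED_TAG tag are hypothetical:

    gcloud compute firewall-rules create allow-traffic-to-subnet \
        --network=lb-network \
        --action=allow \
        --direction=egress \
        --rules=tcp:80 \
        --priority=50 \
        --destination-ranges=10.127.0.0/24 \
        --target-tags=ALLOWED_TAG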

Use the same IP address between multiple internal forwarding rules

For multiple internal forwarding rules to share the same internal IP address, you must reserve the IP address and set its --purpose flag to SHARED_LOADBALANCER_VIP.

gcloud

gcloud compute addresses create SHARED_IP_ADDRESS_NAME \
    --region=REGION \
    --subnet=SUBNET_NAME \
    --purpose=SHARED_LOADBALANCER_VIP
If you need to redirect HTTP traffic to HTTPS, you can create two forwarding rules that use a common IP address. For more information, see Set up HTTP-to-HTTPS redirect for internal Application Load Balancers.
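
As an illustration, two such forwarding rules might reference the reserved address by name, one per protocol. This sketch assumes an HTTPS target proxy named gil7-https-proxy in addition to the gil7-http-proxy from this example; the forwarding rule names are hypothetical:

    # HTTP forwarding rule on port 80, using the shared VIP.
    gcloud compute forwarding-rules create gil7-fr-http \
        --load-balancing-scheme=INTERNAL_MANAGED \
        --network=lb-network \
        --subnet=SUBNET_NAME \
        --address=SHARED_IP_ADDRESS_NAME \
        --ports=80 \
        --global \
        --target-http-proxy=gil7-http-proxy

    # HTTPS forwarding rule on port 443, using the same shared VIP.
    gcloud compute forwarding-rules create gil7-fr-https \
        --load-balancing-scheme=INTERNAL_MANAGED \
        --network=lb-network \
        --subnet=SUBNET_NAME \
        --address=SHARED_IP_ADDRESS_NAME \
        --ports=443 \
        --global \
        --target-https-proxy=gil7-https-proxy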

Configure DNS routing policies

If your clients are in multiple regions, you might want to make your cross-region internal Application Load Balancer accessible by using VIPs in these regions. You can use DNS routing policies of type GEO to route client traffic to the load balancer VIP in the region closest to the client. This multi-region setup minimizes latency and network transit costs. In addition, it lets you set up a DNS-based, global, load balancing solution that provides resilience against regional outages.

Cloud DNS supports health checking and enables automatic failover when the endpoints fail their health checks. During a failover, Cloud DNS automatically adjusts the traffic split among the remaining healthy endpoints. For more information, see Manage DNS routing policies and health checks.

gcloud

To create a DNS entry with a 30-second TTL, use the gcloud dns record-sets create command.

gcloud dns record-sets create DNS_ENTRY --ttl="30" \
    --type="A" --zone="service-zone" \
    --routing-policy-type="GEO" \
    --routing-policy-data="REGION_A=gil7-forwarding-rule-a@global;REGION_B=gil7-forwarding-rule-b@global" \
    --enable-health-checking

Replace the following:

  • DNS_ENTRY: the DNS or domain name of the record set

    For example, service.example.com

  • REGION_A and REGION_B: the regions where you have configured the load balancer
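
To confirm that the record set was created with the GEO routing policy, you can list the record sets in the zone; this check assumes the example zone name service-zone:

    gcloud dns record-sets list --zone="service-zone"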

API

Create the DNS record by making a POST request to the ResourceRecordSets.create method. Replace PROJECT_ID with your project ID.

POST https://www.googleapis.com/dns/v1/projects/PROJECT_ID/managedZones/SERVICE_ZONE/rrsets

{
  "name": "DNS_ENTRY",
  "type": "A",
  "ttl": 30,
  "routingPolicy": {
    "geo": {
      "items": [
        {
          "location": "REGION_A",
          "healthCheckedTargets": {
            "internalLoadBalancers": [
              {
                "loadBalancerType": "globalL7ilb",
                "ipAddress": "IP_ADDRESS",
                "port": "80",
                "ipProtocol": "tcp",
                "networkUrl": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
                "project": "PROJECT_ID"
              }
            ]
          }
        },
        {
          "location": "REGION_B",
          "healthCheckedTargets": {
            "internalLoadBalancers": [
              {
                "loadBalancerType": "globalL7ilb",
                "ipAddress": "IP_ADDRESS_B",
                "port": "80",
                "ipProtocol": "tcp",
                "networkUrl": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
                "project": "PROJECT_ID"
              }
            ]
          }
        }
      ]
    }
  }
}

Update client HTTP keepalive timeout

The load balancer created in the previous steps has been configured with a default value for the client HTTP keepalive timeout.

To update the client HTTP keepalive timeout, use the following instructions.

Console

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. Click the name of the load balancer that you want to modify.
  3. Click Edit.
  4. Click Frontend configuration.
  5. Expand Advanced features. For HTTP keepalive timeout, enter a timeout value.
  6. Click Update.
  7. To review your changes, click Review and finalize, and then click Update.

gcloud

For an HTTP load balancer, update the target HTTP proxy by using the gcloud compute target-http-proxies update command:

    gcloud compute target-http-proxies update TARGET_HTTP_PROXY_NAME \
        --http-keep-alive-timeout-sec=HTTP_KEEP_ALIVE_TIMEOUT_SEC \
        --global

For an HTTPS load balancer, update the target HTTPS proxy by using the gcloud compute target-https-proxies update command:

    gcloud compute target-https-proxies update TARGET_HTTPS_PROXY_NAME \
        --http-keep-alive-timeout-sec=HTTP_KEEP_ALIVE_TIMEOUT_SEC \
        --global

Replace the following:

  • TARGET_HTTP_PROXY_NAME: the name of the target HTTP proxy.
  • TARGET_HTTPS_PROXY_NAME: the name of the target HTTPS proxy.
  • HTTP_KEEP_ALIVE_TIMEOUT_SEC: the HTTP keepalive timeout value, from 5 to 600 seconds.
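
To verify the change, you can read the httpKeepAliveTimeoutSec field back from the proxy; this check covers the HTTP case:

    gcloud compute target-http-proxies describe TARGET_HTTP_PROXY_NAME \
        --global \
        --format="get(httpKeepAliveTimeoutSec)"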

Enable outlier detection

You can enable outlier detection on global backend services to identify unhealthy serverless NEGs and reduce the number of requests sent to the unhealthy serverless NEGs.

Outlier detection is enabled on the backend service by using one of the following methods:

  • The consecutiveErrors method (outlierDetection.consecutiveErrors), in which a 5xx series HTTP status code qualifies as an error.
  • The consecutiveGatewayFailure method (outlierDetection.consecutiveGatewayFailure), in which only the 502, 503, and 504 HTTP status codes qualify as an error.

Use the following steps to enable outlier detection for an existing backend service. Note that even after enabling outlier detection, some requests can be sent to the unhealthy service and return a 5xx status code to the clients. To further reduce the error rate, you can configure more aggressive values for the outlier detection parameters. For more information, see the outlierDetection field.
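
For example, if you prefer the consecutiveGatewayFailure method, the relevant fragment of the backend service configuration might look like the following; the values are illustrative, not recommendations:

    outlierDetection:
      consecutiveGatewayFailure: 5
      enforcingConsecutiveGatewayFailure: 100
      interval:
        seconds: 1
      baseEjectionTime:
        seconds: 30
      maxEjectionPercent: 50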

Console

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. Click the name of the load balancer whose backend service you want to edit.

  3. On the Load balancer details page, click Edit.

  4. On the Edit cross-region internal Application Load Balancer page, click Backend configuration.

  5. On the Backend configuration page, click Edit for the backend service that you want to modify.

  6. Scroll down and expand the Advanced configurations section.

  7. In the Outlier detection section, select the Enable checkbox.

  8. Click Edit to configure outlier detection.

    Verify that the following options are configured with these values:

    Property                        Value
    Consecutive errors              5
    Interval                        1000
    Base ejection time              30000
    Max ejection percent            50
    Enforcing consecutive errors    100

    In this example, the outlier detection analysis runs every one second. If the number of consecutive HTTP 5xx status codes received by an Envoy proxy is five or more, the backend endpoint is ejected from the load-balancing pool of that Envoy proxy for 30 seconds. When the enforcing percentage is set to 100%, the backend service enforces the ejection of unhealthy endpoints from the load-balancing pools of those specific Envoy proxies every time the outlier detection analysis runs. If the ejection conditions are met, up to 50% of the backend endpoints from the load-balancing pool can be ejected.

  9. Click Save.

  10. To update the backend service, click Update.

  11. To update the load balancer, on the Edit cross-region internal Application Load Balancer page, click Update.

gcloud

  1. Export the backend service into a YAML file.

    gcloud compute backend-services export BACKEND_SERVICE_NAME \
        --destination=BACKEND_SERVICE_NAME.yaml \
        --global

    Replace BACKEND_SERVICE_NAME with the name of the backend service.

  2. Edit the YAML configuration of the backend service to add the fields for outlier detection shown in the following YAML configuration, in the outlierDetection section:

    In this example, the outlier detection analysis runs every one second. If the number of consecutive HTTP 5xx status codes received by an Envoy proxy is five or more, the backend endpoint is ejected from the load-balancing pool of that Envoy proxy for 30 seconds. When the enforcing percentage is set to 100%, the backend service enforces the ejection of unhealthy endpoints from the load-balancing pools of those specific Envoy proxies every time the outlier detection analysis runs. If the ejection conditions are met, up to 50% of the backend endpoints from the load-balancing pool can be ejected.

    name: BACKEND_SERVICE_NAME
    backends:
    - balancingMode: UTILIZATION
      capacityScaler: 1.0
      group: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_A/networkEndpointGroups/SERVERLESS_NEG_NAME
    - balancingMode: UTILIZATION
      capacityScaler: 1.0
      group: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_B/networkEndpointGroups/SERVERLESS_NEG_NAME_2
    outlierDetection:
      baseEjectionTime:
        nanos: 0
        seconds: 30
      consecutiveErrors: 5
      enforcingConsecutiveErrors: 100
      interval:
        nanos: 0
        seconds: 1
      maxEjectionPercent: 50
    port: 80
    selfLink: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendServices/BACKEND_SERVICE_NAME
    sessionAffinity: NONE
    timeoutSec: 30
    ...

    Replace the following:

    • BACKEND_SERVICE_NAME: the name of the backend service
    • PROJECT_ID: the ID of your project
    • REGION_A and REGION_B: the regions where the load balancer has been configured
    • SERVERLESS_NEG_NAME: the name of the first serverless NEG
    • SERVERLESS_NEG_NAME_2: the name of the second serverless NEG
  3. Update the backend service by importing the latest configuration.

    gcloud compute backend-services import BACKEND_SERVICE_NAME \
        --source=BACKEND_SERVICE_NAME.yaml \
        --global

    Outlier detection is now enabled on the backend service.
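
To confirm the imported settings, you can describe the backend service and print only the outlierDetection block:

    gcloud compute backend-services describe BACKEND_SERVICE_NAME \
        --global \
        --format="yaml(outlierDetection)"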

What's next
