Set up a cross-region internal Application Load Balancer with VM instance group backends
This document provides instructions for configuring a cross-region internal Application Load Balancer for your services that run on Compute Engine virtual machine (VM) instances.
Before you begin
Before following this guide, familiarize yourself with the following:
- Internal Application Load Balancer overview, including the Limitations section
- VPC firewall rules overview
Set up an SSL certificate resource
Create a Certificate Manager SSL certificate resource as described in one of the following:
- Deploy a global self-managed certificate.
- Create a Google-managed certificate issued by your Certificate Authority Service instance.
- Create a Google-managed certificate with DNS authorization.
We recommend using a Google-managed certificate.
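The linked pages cover the full flow; as a rough sketch, and assuming a hypothetical domain and hypothetical resource names, creating a Google-managed certificate with DNS authorization with the gcloud CLI looks something like the following. You also need to add the returned CNAME record to your DNS zone, as described in the Certificate Manager documentation.

```shell
# Hypothetical names and domain; substitute your own.
gcloud certificate-manager dns-authorizations create www-example-dns-auth \
    --domain="www.example.com"

# The command above returns a CNAME record to add to your DNS configuration.
# After adding it, create the certificate that references the authorization:
gcloud certificate-manager certificates create www-example-cert \
    --domains="www.example.com" \
    --dns-authorizations=www-example-dns-auth
```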
Permissions
To follow this guide, you must be able to create instances and modify a network in a project. You must be either a project owner or editor, or you must have all of the following Compute Engine IAM roles.
| Task | Required role |
|---|---|
| Create networks, subnets, and load balancer components | Compute Network Admin |
| Add and remove firewall rules | Compute Security Admin |
| Create instances | Compute Instance Admin |
Setup overview
You can configure the load balancer as shown in the following diagram:
As shown in the diagram, this example creates a cross-region internal Application Load Balancer in a VPC network, with one backend service and two backend managed instance groups in the REGION_A and REGION_B regions.
The diagram shows the following:
A VPC network with the following subnets:

- Subnet SUBNET_A and a proxy-only subnet in REGION_A.
- Subnet SUBNET_B and a proxy-only subnet in REGION_B.

You must create proxy-only subnets in each region of a VPC network where you use cross-region internal Application Load Balancers. The region's proxy-only subnet is shared among all cross-region internal Application Load Balancers in the region. Source addresses of packets sent from the load balancer to your service's backends are allocated from the proxy-only subnet. In this example, the proxy-only subnet for the REGION_A region has a primary IP address range of 10.129.0.0/23, and the one for REGION_B has a primary IP address range of 10.130.0.0/23, which is the recommended subnet size.

A high availability setup with managed instance group backends for Compute Engine VM deployments in the REGION_A and REGION_B regions. If the backends in one region are down, traffic fails over to the other region.

A global backend service that monitors the usage and health of backends.

A global URL map that parses the URL of a request and forwards requests to specific backend services based on the host and path of the request URL.

A global target HTTP or HTTPS proxy that receives a request from the user and forwards it to the URL map. For HTTPS, configure a global SSL certificate resource. The target proxy uses the SSL certificate to decrypt SSL traffic if you configure HTTPS load balancing. The target proxy can forward traffic to your instances by using HTTP or HTTPS.

Global forwarding rules that have the regional internal IP address of your load balancer and forward each incoming request to the target proxy.
The internal IP address associated with the forwarding rule can come from a subnet in the same network and region as the backends. Note the following conditions:

- The IP address can (but does not need to) come from the same subnet as the backend instance groups.
- The IP address must not come from a reserved proxy-only subnet that has its --purpose flag set to GLOBAL_MANAGED_PROXY.
- If you want to use the same internal IP address with multiple forwarding rules, set the IP address --purpose flag to SHARED_LOADBALANCER_VIP.
Optional: Configure DNS routing policies of type GEO to route client traffic to the load balancer VIP in the region closest to the client.
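A sketch of such a routing policy, assuming a hypothetical private zone and record name and using the two forwarding-rule addresses reserved later in this guide (10.1.2.99 and 10.1.3.99), might look like this:

```shell
# Hypothetical zone and hostname; REGION_A and REGION_B are placeholders
# for the regions used in this guide.
gcloud dns record-sets create service.internal.example.com. \
    --zone=PRIVATE_ZONE_NAME \
    --type=A \
    --ttl=30 \
    --routing-policy-type=GEO \
    --routing-policy-data="REGION_A=10.1.2.99;REGION_B=10.1.3.99"
```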
Configure the network and subnets
Within the VPC network, configure a subnet in each region where your backends are configured. In addition, configure a proxy-only subnet in each region in which you want to configure the load balancer.

This example uses the following VPC network, regions, and subnets:
Network. The network is a custom mode VPC network named NETWORK.

Subnets for backends.

- A subnet named SUBNET_A in the REGION_A region uses 10.1.2.0/24 for its primary IP range.
- A subnet named SUBNET_B in the REGION_B region uses 10.1.3.0/24 for its primary IP range.

Subnets for proxies.

- A subnet named PROXY_SN_A in the REGION_A region uses 10.129.0.0/23 for its primary IP range.
- A subnet named PROXY_SN_B in the REGION_B region uses 10.130.0.0/23 for its primary IP range.
Cross-region internal Application Load Balancers can be accessed from any region within the VPC network, so clients from any region can globally access your load balancer backends.
Note: Subsequent steps in this guide use the network, region, and subnet parameters as outlined here.

Configure the backend subnets
Console
In the Google Cloud console, go to the VPC networks page.

Click Create VPC network.

Provide a Name for the network.

In the Subnets section, set the Subnet creation mode to Custom.

Create a subnet for the load balancer's backends. In the New subnet section, enter the following information:

- Provide a Name for the subnet.
- Select a Region: REGION_A
- Enter an IP address range: 10.1.2.0/24

Click Done.

Click Add subnet.

Create a second subnet for the load balancer's backends. In the New subnet section, enter the following information:

- Provide a Name for the subnet.
- Select a Region: REGION_B
- Enter an IP address range: 10.1.3.0/24

Click Done.

Click Create.
gcloud
Create the custom VPC network with the gcloud compute networks create command:

```
gcloud compute networks create NETWORK \
    --subnet-mode=custom
```

Create a subnet in the NETWORK network in the REGION_A region with the gcloud compute networks subnets create command:

```
gcloud compute networks subnets create SUBNET_A \
    --network=NETWORK \
    --range=10.1.2.0/24 \
    --region=REGION_A
```

Create a subnet in the NETWORK network in the REGION_B region with the gcloud compute networks subnets create command:

```
gcloud compute networks subnets create SUBNET_B \
    --network=NETWORK \
    --range=10.1.3.0/24 \
    --region=REGION_B
```
Terraform
To create the VPC network, use the google_compute_network resource.

```
resource "google_compute_network" "default" {
  auto_create_subnetworks = false
  name                    = "lb-network-crs-reg"
  provider                = google-beta
}
```

To create the VPC subnets in the lb-network-crs-reg network, use the google_compute_subnetwork resource.

```
resource "google_compute_subnetwork" "subnet_a" {
  provider      = google-beta
  ip_cidr_range = "10.1.2.0/24"
  name          = "lbsubnet-uswest1"
  network       = google_compute_network.default.id
  region        = "us-west1"
}

resource "google_compute_subnetwork" "subnet_b" {
  provider      = google-beta
  ip_cidr_range = "10.1.3.0/24"
  name          = "lbsubnet-useast1"
  network       = google_compute_network.default.id
  region        = "us-east1"
}
```

API
Make a POST request to the networks.insert method. Replace PROJECT_ID with your project ID.

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks

{
  "routingConfig": {
    "routingMode": "regional"
  },
  "name": "NETWORK",
  "autoCreateSubnetworks": false
}
```

Make a POST request to the subnetworks.insert method. Replace PROJECT_ID with your project ID.

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_A/subnetworks

{
  "name": "SUBNET_A",
  "network": "projects/PROJECT_ID/global/networks/NETWORK",
  "ipCidrRange": "10.1.2.0/24",
  "region": "projects/PROJECT_ID/regions/REGION_A"
}
```

Make a POST request to the subnetworks.insert method. Replace PROJECT_ID with your project ID.

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_B/subnetworks

{
  "name": "SUBNET_B",
  "network": "projects/PROJECT_ID/global/networks/NETWORK",
  "ipCidrRange": "10.1.3.0/24",
  "region": "projects/PROJECT_ID/regions/REGION_B"
}
```

Configure the proxy-only subnet
A proxy-only subnet provides a set of IP addresses that Google Cloud uses to run Envoy proxies on your behalf. The proxies terminate connections from the client and create new connections to the backends.

This proxy-only subnet is used by all Envoy-based regional load balancers in the same region as the VPC network. There can only be one active proxy-only subnet for a given purpose, per region, per network.

Important: Don't try to assign addresses from the proxy-only subnet to your load balancer's forwarding rule or backends. You assign the forwarding rule's IP address and the backend instance IP addresses from a different subnet range (or ranges), not this one. Google Cloud reserves this subnet range for Google Cloud-managed proxies.

Console
If you're using the Google Cloud console, you can wait and create the proxy-only subnet later on the Load balancing page.

If you want to create the proxy-only subnet now, use the following steps:

In the Google Cloud console, go to the VPC networks page.

- Click the name of the VPC network.
- On the Subnets tab, click Add subnet.
- Provide a Name for the proxy-only subnet.
- Select a Region: REGION_A
- In the Purpose list, select Cross-region Managed Proxy.
- In the IP address range field, enter 10.129.0.0/23.
- Click Add.

Create the proxy-only subnet in REGION_B:

- On the Subnets tab, click Add subnet.
- Provide a Name for the proxy-only subnet.
- Select a Region: REGION_B
- In the Purpose list, select Cross-region Managed Proxy.
- In the IP address range field, enter 10.130.0.0/23.
- Click Add.
gcloud
Create the proxy-only subnets with the gcloud compute networks subnets create command.

```
gcloud compute networks subnets create PROXY_SN_A \
    --purpose=GLOBAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=REGION_A \
    --network=NETWORK \
    --range=10.129.0.0/23
```

```
gcloud compute networks subnets create PROXY_SN_B \
    --purpose=GLOBAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=REGION_B \
    --network=NETWORK \
    --range=10.130.0.0/23
```
Terraform
To create the VPC proxy-only subnets in the lb-network-crs-reg network, use the google_compute_subnetwork resource.

```
resource "google_compute_subnetwork" "proxy_subnet_a" {
  provider      = google-beta
  ip_cidr_range = "10.129.0.0/23"
  name          = "proxy-only-subnet1"
  network       = google_compute_network.default.id
  purpose       = "GLOBAL_MANAGED_PROXY"
  region        = "us-west1"
  role          = "ACTIVE"
  lifecycle {
    ignore_changes = [ipv6_access_type]
  }
}

resource "google_compute_subnetwork" "proxy_subnet_b" {
  provider      = google-beta
  ip_cidr_range = "10.130.0.0/23"
  name          = "proxy-only-subnet2"
  network       = google_compute_network.default.id
  purpose       = "GLOBAL_MANAGED_PROXY"
  region        = "us-east1"
  role          = "ACTIVE"
  lifecycle {
    ignore_changes = [ipv6_access_type]
  }
}
```

API
Create the proxy-only subnets with the subnetworks.insert method, replacing PROJECT_ID with your project ID.

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_A/subnetworks

{
  "name": "PROXY_SN_A",
  "ipCidrRange": "10.129.0.0/23",
  "network": "projects/PROJECT_ID/global/networks/NETWORK",
  "region": "projects/PROJECT_ID/regions/REGION_A",
  "purpose": "GLOBAL_MANAGED_PROXY",
  "role": "ACTIVE"
}
```

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_B/subnetworks

{
  "name": "PROXY_SN_B",
  "ipCidrRange": "10.130.0.0/23",
  "network": "projects/PROJECT_ID/global/networks/NETWORK",
  "region": "projects/PROJECT_ID/regions/REGION_B",
  "purpose": "GLOBAL_MANAGED_PROXY",
  "role": "ACTIVE"
}
```

Configure firewall rules
This example uses the following firewall rules:
- fw-ilb-to-backends. An ingress rule, applicable to the instances being load balanced, that allows incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP address range for this rule; for example, you can specify just the IP address ranges of the systems from which you initiate SSH sessions. This example uses the target tag allow-ssh to identify the VMs that the firewall rule applies to.
- fw-healthcheck. An ingress rule, applicable to the instances being load balanced, that allows all TCP traffic from the Google Cloud health checking systems (in 130.211.0.0/22 and 35.191.0.0/16). This example uses the target tag load-balanced-backend to identify the VMs that the firewall rule applies to.
- fw-backends. An ingress rule, applicable to the instances being load balanced, that allows TCP traffic on ports 80, 443, and 8080 from the internal Application Load Balancer's managed proxies. This example uses the target tag load-balanced-backend to identify the VMs that the firewall rule applies to.
Without these firewall rules, the default deny ingress rule blocks incoming traffic to the backend instances.

The target tags define the backend instances. Without the target tags, the firewall rules apply to all of your backend instances in the VPC network. When you create the backend VMs, make sure to include the specified target tags, as shown in Creating a managed instance group.
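If you ever need to add these tags to an existing VM instead (for example, a backend created outside the instance templates in this guide), one way is the gcloud compute instances add-tags command; the VM name here is hypothetical:

```shell
# Hypothetical VM name; ZONE_A is a placeholder for the VM's zone.
gcloud compute instances add-tags my-backend-vm \
    --zone=ZONE_A \
    --tags=allow-ssh,load-balanced-backend
```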
Console
In the Google Cloud console, go to the Firewall policies page.

Click Create firewall rule to create the rule to allow incoming SSH connections:

- Name: fw-ilb-to-backends
- Network: NETWORK
- Direction of traffic: Ingress
- Action on match: Allow
- Targets: Specified target tags
- Target tags: allow-ssh
- Source filter: IPv4 ranges
- Source IPv4 ranges: 0.0.0.0/0
- Protocols and ports:
  - Choose Specified protocols and ports.
  - Select the TCP checkbox, and then enter 22 for the port number.

Click Create.

Click Create firewall rule a second time to create the rule to allow Google Cloud health checks:

- Name: fw-healthcheck
- Network: NETWORK
- Direction of traffic: Ingress
- Action on match: Allow
- Targets: Specified target tags
- Target tags: load-balanced-backend
- Source filter: IPv4 ranges
- Source IPv4 ranges: 130.211.0.0/22 and 35.191.0.0/16
- Protocols and ports:
  - Choose Specified protocols and ports.
  - Select the TCP checkbox, and then enter 80 for the port number.

As a best practice, limit this rule to just the protocols and ports that match those used by your health check. If you use tcp:80 for the protocol and port, Google Cloud can use HTTP on port 80 to contact your VMs, but it cannot use HTTPS on port 443 to contact them.

Click Create.

Click Create firewall rule a third time to create the rule to allow the load balancer's proxy servers to connect to the backends:

- Name: fw-backends
- Network: NETWORK
- Direction of traffic: Ingress
- Action on match: Allow
- Targets: Specified target tags
- Target tags: load-balanced-backend
- Source filter: IPv4 ranges
- Source IPv4 ranges: 10.129.0.0/23 and 10.130.0.0/23
- Protocols and ports:
  - Choose Specified protocols and ports.
  - Select the TCP checkbox, and then enter 80, 443, 8080 for the port numbers.

Click Create.
gcloud
Create the fw-ilb-to-backends firewall rule to allow SSH connectivity to VMs with the network tag allow-ssh. When you omit source-ranges, Google Cloud interprets the rule to mean any source.

```
gcloud compute firewall-rules create fw-ilb-to-backends \
    --network=NETWORK \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-ssh \
    --rules=tcp:22
```

Create the fw-healthcheck rule to allow Google Cloud health checks. This example allows all TCP traffic from health check probers; however, you can configure a narrower set of ports to meet your needs.

```
gcloud compute firewall-rules create fw-healthcheck \
    --network=NETWORK \
    --action=allow \
    --direction=ingress \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=load-balanced-backend \
    --rules=tcp
```

Create the fw-backends rule to allow the internal Application Load Balancer's proxies to connect to your backends. Set source-range to the allocated ranges of your proxy-only subnets, for example, 10.129.0.0/23 and 10.130.0.0/23.

```
gcloud compute firewall-rules create fw-backends \
    --network=NETWORK \
    --action=allow \
    --direction=ingress \
    --source-ranges=source-range \
    --target-tags=load-balanced-backend \
    --rules=tcp:80,tcp:443,tcp:8080
```
Terraform
To create the firewall rules, use the google_compute_firewall resource.

```
resource "google_compute_firewall" "fw_healthcheck" {
  name          = "gl7-ilb-fw-allow-hc"
  provider      = google-beta
  direction     = "INGRESS"
  network       = google_compute_network.default.id
  source_ranges = ["130.211.0.0/22", "35.191.0.0/16", "35.235.240.0/20"]
  allow {
    protocol = "tcp"
  }
}

resource "google_compute_firewall" "fw_ilb_to_backends" {
  name          = "fw-ilb-to-fw"
  provider      = google-beta
  network       = google_compute_network.default.id
  source_ranges = ["0.0.0.0/0"]
  allow {
    protocol = "tcp"
    ports    = ["22", "80", "443", "8080"]
  }
}

resource "google_compute_firewall" "fw_backends" {
  name          = "gl7-ilb-fw-allow-ilb-to-backends"
  direction     = "INGRESS"
  network       = google_compute_network.default.id
  source_ranges = ["10.129.0.0/23", "10.130.0.0/23"]
  target_tags   = ["http-server"]
  allow {
    protocol = "tcp"
    ports    = ["80", "443", "8080"]
  }
}
```

API
Create the fw-ilb-to-backends firewall rule by making a POST request to the firewalls.insert method, replacing PROJECT_ID with your project ID.

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "name": "fw-ilb-to-backends",
  "network": "projects/PROJECT_ID/global/networks/NETWORK",
  "sourceRanges": ["0.0.0.0/0"],
  "targetTags": ["allow-ssh"],
  "allowed": [{ "IPProtocol": "tcp", "ports": ["22"] }],
  "direction": "INGRESS"
}
```

Create the fw-healthcheck firewall rule by making a POST request to the firewalls.insert method, replacing PROJECT_ID with your project ID.

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "name": "fw-healthcheck",
  "network": "projects/PROJECT_ID/global/networks/NETWORK",
  "sourceRanges": ["130.211.0.0/22", "35.191.0.0/16"],
  "targetTags": ["load-balanced-backend"],
  "allowed": [{ "IPProtocol": "tcp" }],
  "direction": "INGRESS"
}
```

Create the fw-backends firewall rule to allow TCP traffic from the proxy subnets by making a POST request to the firewalls.insert method, replacing PROJECT_ID with your project ID.

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "name": "fw-backends",
  "network": "projects/PROJECT_ID/global/networks/NETWORK",
  "sourceRanges": ["10.129.0.0/23", "10.130.0.0/23"],
  "targetTags": ["load-balanced-backend"],
  "allowed": [
    { "IPProtocol": "tcp", "ports": ["80"] },
    { "IPProtocol": "tcp", "ports": ["443"] },
    { "IPProtocol": "tcp", "ports": ["8080"] }
  ],
  "direction": "INGRESS"
}
```

Create a managed instance group
This section shows how to create a template and a managed instance group. The managed instance group provides VM instances running the backend servers of an example cross-region internal Application Load Balancer. For your instance group, you can define an HTTP service and map a port name to the relevant port. The backend service of the load balancer forwards traffic to the named ports. Traffic from clients is load balanced to backend servers. For demonstration purposes, backends serve their own hostnames.
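As a sketch of the named-port mapping mentioned above, assuming the instance group and zone names used in this guide's example, you could map the port name http to port 80 (the port served by the Apache backends) like this:

```shell
# gl7-ilb-mig-a is the example instance group created in this section;
# ZONE_A is a placeholder for its zone.
gcloud compute instance-groups set-named-ports gl7-ilb-mig-a \
    --zone=ZONE_A \
    --named-ports=http:80
```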
Console
In the Google Cloud console, go to the Instance templates page.

- Click Create instance template.
- For Name, enter gil7-backendeast1-template.
- Ensure that the Boot disk is set to a Debian image, such as Debian GNU/Linux 12 (bookworm). These instructions use commands that are only available on Debian, such as apt-get.
- Click Advanced options.
- Click Networking and configure the following fields:
  - For Network tags, enter allow-ssh and load-balanced-backend.
  - For Network interfaces, select the following:
    - Network: NETWORK
    - Subnet: SUBNET_B
- Click Management. Enter the following script into the Startup script field.

```
#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://169.254.169.254/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
systemctl restart apache2
```

- Click Create.

Create a second instance template:

- Click Create instance template.
- For Name, enter gil7-backendwest1-template.
- Ensure that the Boot disk is set to a Debian image, such as Debian GNU/Linux 12 (bookworm). These instructions use commands that are only available on Debian, such as apt-get.
- Click Advanced options.
- Click Networking and configure the following fields:
  - For Network tags, enter allow-ssh and load-balanced-backend.
  - For Network interfaces, select the following:
    - Network: NETWORK
    - Subnet: SUBNET_A
- Click Management. Enter the following script into the Startup script field.

```
#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://169.254.169.254/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
systemctl restart apache2
```

- Click Create.
In the Google Cloud console, go to the Instance groups page.

- Click Create instance group.
- Select New managed instance group (stateless). For more information, see Stateless or stateful MIGs.
- For Name, enter gl7-ilb-mig-a.
- For Location, select Single zone.
- For Region, select REGION_A.
- For Zone, select ZONE_A.
- For Instance template, select gil7-backendwest1-template.
- Specify the number of instances that you want to create in the group.

For this example, specify the following options under Autoscaling:

- For Autoscaling mode, select Off: do not autoscale.
- For Maximum number of instances, enter 2.

Optionally, in the Autoscaling section of the UI, you can configure the instance group to automatically add or remove instances based on instance CPU usage.

- Click Create.

Create a second instance group:

- Click Create instance group.
- Select New managed instance group (stateless). For more information, see Stateless or stateful MIGs.
- For Name, enter gl7-ilb-mig-b.
- For Location, select Single zone.
- For Region, select REGION_B.
- For Zone, select ZONE_B.
- For Instance template, select gil7-backendeast1-template.
- Specify the number of instances that you want to create in the group.

For this example, specify the following options under Autoscaling:

- For Autoscaling mode, select Off: do not autoscale.
- For Maximum number of instances, enter 2.

Optionally, in the Autoscaling section of the UI, you can configure the instance group to automatically add or remove instances based on instance CPU usage.

- Click Create.
gcloud
The gcloud CLI instructions in this guide assume that you are using Cloud Shell or another environment with bash installed.
Create a VM instance template with an HTTP server with the gcloud compute instance-templates create command.

```
gcloud compute instance-templates create gil7-backendwest1-template \
    --region=REGION_A \
    --network=NETWORK \
    --subnet=SUBNET_A \
    --tags=allow-ssh,load-balanced-backend \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --metadata=startup-script='#! /bin/bash
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://169.254.169.254/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | \
      tee /var/www/html/index.html
      systemctl restart apache2'
```

```
gcloud compute instance-templates create gil7-backendeast1-template \
    --region=REGION_B \
    --network=NETWORK \
    --subnet=SUBNET_B \
    --tags=allow-ssh,load-balanced-backend \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --metadata=startup-script='#! /bin/bash
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://169.254.169.254/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | \
      tee /var/www/html/index.html
      systemctl restart apache2'
```
Create a managed instance group in each zone with the gcloud compute instance-groups managed create command.

```
gcloud compute instance-groups managed create gl7-ilb-mig-a \
    --zone=ZONE_A \
    --size=2 \
    --template=gil7-backendwest1-template
```

```
gcloud compute instance-groups managed create gl7-ilb-mig-b \
    --zone=ZONE_B \
    --size=2 \
    --template=gil7-backendeast1-template
```
Terraform
To create the instance templates, use the google_compute_instance_template resource.

```
resource "google_compute_instance_template" "instance_template_a" {
  name         = "gil7-backendwest1-template"
  provider     = google-beta
  machine_type = "e2-small"
  region       = "us-west1"
  tags         = ["http-server"]

  network_interface {
    network    = google_compute_network.default.id
    subnetwork = google_compute_subnetwork.subnet_a.id
    access_config {
      # add external ip to fetch packages
    }
  }

  disk {
    source_image = "debian-cloud/debian-11"
    auto_delete  = true
    boot         = true
  }

  # install nginx and serve a simple web page
  metadata = {
    startup-script = <<-EOF1
      #! /bin/bash
      set -euo pipefail

      export DEBIAN_FRONTEND=noninteractive
      apt-get update
      apt-get install -y nginx-light jq

      NAME=$(curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/hostname")
      IP=$(curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip")
      METADATA=$(curl -f -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/attributes/?recursive=True" | jq 'del(.["startup-script"])')

      cat <<EOF > /var/www/html/index.html
      <pre>
      Name: $NAME
      IP: $IP
      Metadata: $METADATA
      </pre>
      EOF
    EOF1
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "google_compute_instance_template" "instance_template_b" {
  name         = "gil7-backendeast1-template"
  provider     = google-beta
  machine_type = "e2-small"
  region       = "us-east1"
  tags         = ["http-server"]

  network_interface {
    network    = google_compute_network.default.id
    subnetwork = google_compute_subnetwork.subnet_b.id
    access_config {
      # add external ip to fetch packages
    }
  }

  disk {
    source_image = "debian-cloud/debian-11"
    auto_delete  = true
    boot         = true
  }

  # install nginx and serve a simple web page
  metadata = {
    startup-script = <<-EOF1
      #! /bin/bash
      set -euo pipefail

      export DEBIAN_FRONTEND=noninteractive
      apt-get update
      apt-get install -y nginx-light jq

      NAME=$(curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/hostname")
      IP=$(curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip")
      METADATA=$(curl -f -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/attributes/?recursive=True" | jq 'del(.["startup-script"])')

      cat <<EOF > /var/www/html/index.html
      <pre>
      Name: $NAME
      IP: $IP
      Metadata: $METADATA
      </pre>
      EOF
    EOF1
  }

  lifecycle {
    create_before_destroy = true
  }
}
```

To create the managed instance groups, use the google_compute_region_instance_group_manager resource.

```
resource "google_compute_region_instance_group_manager" "mig_a" {
  name     = "gl7-ilb-miga"
  provider = google-beta
  region   = "us-west1"
  version {
    instance_template = google_compute_instance_template.instance_template_a.id
    name              = "primary"
  }
  base_instance_name = "vm"
  target_size        = 2
}

resource "google_compute_region_instance_group_manager" "mig_b" {
  name     = "gl7-ilb-migb"
  provider = google-beta
  region   = "us-east1"
  version {
    instance_template = google_compute_instance_template.instance_template_b.id
    name              = "primary"
  }
  base_instance_name = "vm"
  target_size        = 2
}
```

API
Create the first instance template with the instanceTemplates.insert method, replacing PROJECT_ID with your project ID.

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/instanceTemplates

{
  "name": "gil7-backendwest1-template",
  "properties": {
    "machineType": "e2-standard-2",
    "tags": { "items": ["allow-ssh", "load-balanced-backend"] },
    "metadata": {
      "kind": "compute#metadata",
      "items": [
        {
          "key": "startup-script",
          "value": "#! /bin/bash\napt-get update\napt-get install apache2 -y\na2ensite default-ssl\na2enmod ssl\nvm_hostname=\"$(curl -H \"Metadata-Flavor:Google\" \\\nhttp://169.254.169.254/computeMetadata/v1/instance/name)\"\necho \"Page served from: $vm_hostname\" | \\\ntee /var/www/html/index.html\nsystemctl restart apache2"
        }
      ]
    },
    "networkInterfaces": [
      {
        "network": "projects/PROJECT_ID/global/networks/NETWORK",
        "subnetwork": "regions/REGION_A/subnetworks/SUBNET_A",
        "accessConfigs": [{ "type": "ONE_TO_ONE_NAT" }]
      }
    ],
    "disks": [
      {
        "index": 0,
        "boot": true,
        "initializeParams": {
          "sourceImage": "projects/debian-cloud/global/images/family/debian-12"
        },
        "autoDelete": true
      }
    ]
  }
}
```

Create a managed instance group in the first zone with the instanceGroupManagers.insert method, replacing PROJECT_ID with your project ID.

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE_A/instanceGroupManagers

{
  "name": "gl7-ilb-mig-a",
  "zone": "projects/PROJECT_ID/zones/ZONE_A",
  "instanceTemplate": "projects/PROJECT_ID/global/instanceTemplates/gil7-backendwest1-template",
  "baseInstanceName": "gl7-ilb-mig-a",
  "targetSize": 2
}
```

Create the second instance template with the instanceTemplates.insert method, replacing PROJECT_ID with your project ID.

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/instanceTemplates

{
  "name": "gil7-backendeast1-template",
  "properties": {
    "machineType": "e2-standard-2",
    "tags": { "items": ["allow-ssh", "load-balanced-backend"] },
    "metadata": {
      "kind": "compute#metadata",
      "items": [
        {
          "key": "startup-script",
          "value": "#! /bin/bash\napt-get update\napt-get install apache2 -y\na2ensite default-ssl\na2enmod ssl\nvm_hostname=\"$(curl -H \"Metadata-Flavor:Google\" \\\nhttp://169.254.169.254/computeMetadata/v1/instance/name)\"\necho \"Page served from: $vm_hostname\" | \\\ntee /var/www/html/index.html\nsystemctl restart apache2"
        }
      ]
    },
    "networkInterfaces": [
      {
        "network": "projects/PROJECT_ID/global/networks/NETWORK",
        "subnetwork": "regions/REGION_B/subnetworks/SUBNET_B",
        "accessConfigs": [{ "type": "ONE_TO_ONE_NAT" }]
      }
    ],
    "disks": [
      {
        "index": 0,
        "boot": true,
        "initializeParams": {
          "sourceImage": "projects/debian-cloud/global/images/family/debian-12"
        },
        "autoDelete": true
      }
    ]
  }
}
```

Create a managed instance group in the second zone with the instanceGroupManagers.insert method, replacing PROJECT_ID with your project ID.

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE_B/instanceGroupManagers

{
  "name": "gl7-ilb-mig-b",
  "zone": "projects/PROJECT_ID/zones/ZONE_B",
  "instanceTemplate": "projects/PROJECT_ID/global/instanceTemplates/gil7-backendeast1-template",
  "baseInstanceName": "gl7-ilb-mig-b",
  "targetSize": 2
}
```

Configure the load balancer
This example shows you how to create the following cross-region internal Application Load Balancer resources:

- A global HTTP health check.
- A global backend service with the managed instance groups as the backend.
- A URL map. Make sure to refer to a global URL map for the target HTTP(S) proxy. A global URL map routes requests to a global backend service based on rules that you define for the host and path of an incoming URL. A global URL map can be referenced by a global target proxy rule.
- A global SSL certificate (for HTTPS).
- A global target proxy.
- Two global forwarding rules with regional IP addresses. For the forwarding rule's IP address, use the SUBNET_A or SUBNET_B IP address range. If you try to use the proxy-only subnet, forwarding rule creation fails.
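If you want to reserve a forwarding rule's internal IP address ahead of time, a sketch with a hypothetical address name looks like the following; the SHARED_LOADBALANCER_VIP purpose is only needed when you plan to share the address across multiple forwarding rules, as described earlier.

```shell
# Hypothetical address name; 10.1.2.99 is the SUBNET_A address
# used for the first forwarding rule in this example.
gcloud compute addresses create gil7-frontend-ip-a \
    --region=REGION_A \
    --subnet=SUBNET_A \
    --addresses=10.1.2.99 \
    --purpose=SHARED_LOADBALANCER_VIP
```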
Console
Select the load balancer type

In the Google Cloud console, go to the Load balancing page.

- Click Create load balancer.
- For Type of load balancer, select Application Load Balancer (HTTP/HTTPS) and click Next.
- For Public facing or internal, select Internal and click Next.
- For Cross-region or single region deployment, select Best for cross-region workloads and click Next.
- Click Configure.

Basic configuration

- Provide a Name for the load balancer.
- For Network, select NETWORK.
Configure the frontend with two forwarding rules

For HTTP:

- Click Frontend configuration.
- Provide a Name for the forwarding rule.
- In the Subnetwork region list, select REGION_A.

  If you already configured the proxy-only subnet, the Reserve subnet button isn't displayed. You can continue with the next steps.

- In the Subnetwork list, select SUBNET_A.
- In the IP address list, click Create IP address. The Reserve a static internal IP address page opens.
- Provide a Name for the static IP address.
- In the Static IP address list, select Let me choose.
- In the Custom IP address field, enter 10.1.2.99.
- Select Reserve.
- Click Done.
- To add the second forwarding rule, click Add frontend IP and port.
- Provide a Name for the forwarding rule.
- In the Subnetwork region list, select REGION_B.

  Because you have already configured the proxy-only subnet, the Reserve subnet button isn't displayed. You can continue with the next steps.

- In the Subnetwork list, select SUBNET_B.
- In the IP address list, click Create IP address. The Reserve a static internal IP address page opens.
- Provide a Name for the static IP address.
- In the Static IP address list, select Let me choose.
- In the Custom IP address field, enter 10.1.3.99.
- Select Reserve.
- Click Done.
For HTTPS:
To assign an SSL certificate to the target HTTPS proxy of the load balancer, you need to use a Certificate Manager certificate.
- Click Frontend configuration.
- Provide a Name for the forwarding rule.
- In the Protocol field, select HTTPS (includes HTTP/2).
- Ensure that the Port is set to 443.
- In the Subnetwork region list, select REGION_A.

  Reserve a proxy-only subnet

  Because you have already configured the proxy-only subnet, the Reserve subnet button isn't displayed. You can continue with the next steps.
- In the Subnetwork list, select SUBNET_A.
- In the IP address list, click Create IP address. The Reserve a static internal IP address page opens.
  - Provide a Name for the static IP address.
  - In the Static IP address list, select Let me choose.
  - In the Custom IP address field, enter 10.1.2.99.
  - Select Reserve.
- Click Add certificate to select an existing certificate or create a new certificate.

  If you already have an existing Certificate Manager certificate to select, do the following:
  - Click Add Certificate.
  - Click Select an existing certificate and select the certificate from the list of certificates.
  - Click Select.

  After you select the Certificate Manager certificate, it appears in the list of certificates.

  To create a new Certificate Manager certificate, do the following:
  - Click Add Certificate.
  - Click Create a new certificate.
  - To create the certificate, follow the steps starting from step 3 in one of the configuration methods in the Certificate Manager documentation.
- Select an SSL policy from the SSL policy list. If you have not created any SSL policies, a default Google Cloud SSL policy is applied.
- Click Done.
- Provide a Name for the frontend configuration.
- In the Protocol field, select HTTPS (includes HTTP/2).
- Ensure that the Port is set to 443.
- In the Subnetwork region list, select REGION_B.

  Reserve a proxy-only subnet

  Because you have already configured the proxy-only subnet, the Reserve subnet button isn't displayed. You can continue with the next steps.
- In the Subnetwork list, select SUBNET_B.
- In the IP address list, click Create IP address. The Reserve a static internal IP address page opens.
  - Provide a Name for the static IP address.
  - In the Static IP address list, select Let me choose.
  - In the Custom IP address field, enter 10.1.3.99.
  - Select Reserve.
- Click Add certificate and then select an existing certificate or create a new certificate.
- Select an SSL policy from the SSL policy list. If you have not created any SSL policies, a default Google Cloud SSL policy is applied.
- Click Done.
- Click Backend configuration.
- In the Create or select backend services list, click Create a backend service.
- Provide a Name for the backend service.
- For Protocol, select HTTP.
- For Named Port, enter http.
- In the Backend type list, select Instance group.
- In the Health check list, click Create a health check, and then enter the following information:
  - In the Name field, enter global-http-health-check.
  - In the Protocol list, select HTTP.
  - In the Port field, enter 80.
  - Click Create.
- In the New backend section:
  - In the Instance group list, select gl7-ilb-mig-a in REGION_A.
  - Set Port numbers to 80.
  - For the Balancing mode, select Utilization.
  - Click Done.
- To add another backend, click Add backend.
  - In the Instance group list, select gl7-ilb-mig-b in REGION_B.
  - Set Port numbers to 80.
  - Click Done.
- Click Routing rules.
- For Mode, select Simple host and path rule.
- Ensure that there is only one backend service for any unmatched host and any unmatched path.
- Click Review and finalize.
- Review your load balancer configuration settings.
- Click Create.
gcloud
Define the HTTP health check with the gcloud compute health-checks create http command.

    gcloud compute health-checks create http global-http-health-check \
        --use-serving-port \
        --global
Define the backend service with the gcloud compute backend-services create command.

    gcloud compute backend-services create BACKEND_SERVICE_NAME \
        --load-balancing-scheme=INTERNAL_MANAGED \
        --protocol=HTTP \
        --enable-logging \
        --logging-sample-rate=1.0 \
        --health-checks=global-http-health-check \
        --global-health-checks \
        --global
Add backends to the backend service with the gcloud compute backend-services add-backend command.

    gcloud compute backend-services add-backend BACKEND_SERVICE_NAME \
        --balancing-mode=UTILIZATION \
        --instance-group=gl7-ilb-mig-a \
        --instance-group-zone=ZONE_A \
        --global

    gcloud compute backend-services add-backend BACKEND_SERVICE_NAME \
        --balancing-mode=UTILIZATION \
        --instance-group=gl7-ilb-mig-b \
        --instance-group-zone=ZONE_B \
        --global
Create the URL map with the gcloud compute url-maps create command.

    gcloud compute url-maps create gl7-gilb-url-map \
        --default-service=BACKEND_SERVICE_NAME \
        --global
Create the target proxy.
For HTTP:
Create the target proxy with the gcloud compute target-http-proxies create command.

    gcloud compute target-http-proxies create gil7-http-proxy \
        --url-map=gl7-gilb-url-map \
        --global
For HTTPS:
To create a Google-managed certificate, see the following documentation:
- Create a Google-managed certificate issued by your Certificate Authority Service instance.
- Create a Google-managed certificate with DNS authorization.
After you create the Google-managed certificate, attach the certificate directly to the target proxy. Certificate maps are not supported by cross-region internal Application Load Balancers.
To create a self-managed certificate, see the following documentation:
Assign your file paths to variable names.

    export LB_CERT=PATH_TO_PEM_FORMATTED_FILE
    export LB_PRIVATE_KEY=PATH_TO_LB_PRIVATE_KEY_FILE

Create an all-regions SSL certificate using the gcloud certificate-manager certificates create command.

    gcloud certificate-manager certificates create gilb-certificate \
        --private-key-file=$LB_PRIVATE_KEY \
        --certificate-file=$LB_CERT \
        --scope=all-regions
Use the SSL certificate to create a target proxy with the gcloud compute target-https-proxies create command.

    gcloud compute target-https-proxies create gil7-https-proxy \
        --url-map=gl7-gilb-url-map \
        --certificate-manager-certificates=gilb-certificate \
        --global
Create two forwarding rules, one with a VIP (10.1.2.99) in the REGION_A region and another one with a VIP (10.1.3.99) in the REGION_B region. For more information, see Reserve a static internal IPv4 address.

For custom networks, you must reference the subnet in the forwarding rule. Note that this is the VM subnet, not the proxy subnet.
For HTTP:
Use the gcloud compute forwarding-rules create command with the correct flags.

    gcloud compute forwarding-rules create FWRULE_A \
        --load-balancing-scheme=INTERNAL_MANAGED \
        --network=NETWORK \
        --subnet=SUBNET_A \
        --subnet-region=REGION_A \
        --address=10.1.2.99 \
        --ports=80 \
        --target-http-proxy=gil7-http-proxy \
        --global

    gcloud compute forwarding-rules create FWRULE_B \
        --load-balancing-scheme=INTERNAL_MANAGED \
        --network=NETWORK \
        --subnet=SUBNET_B \
        --subnet-region=REGION_B \
        --address=10.1.3.99 \
        --ports=80 \
        --target-http-proxy=gil7-http-proxy \
        --global
For HTTPS:
Use the gcloud compute forwarding-rules create command with the correct flags.

    gcloud compute forwarding-rules create FWRULE_A \
        --load-balancing-scheme=INTERNAL_MANAGED \
        --network=NETWORK \
        --subnet=SUBNET_A \
        --subnet-region=REGION_A \
        --address=10.1.2.99 \
        --ports=443 \
        --target-https-proxy=gil7-https-proxy \
        --global

    gcloud compute forwarding-rules create FWRULE_B \
        --load-balancing-scheme=INTERNAL_MANAGED \
        --network=NETWORK \
        --subnet=SUBNET_B \
        --subnet-region=REGION_B \
        --address=10.1.3.99 \
        --ports=443 \
        --target-https-proxy=gil7-https-proxy \
        --global
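After the forwarding rules exist, you can optionally confirm that each one carries the expected VIP. A quick sanity check, assuming the rule names from this example:

```shell
# Print just the VIP of each forwarding rule; with the example
# configuration these should be 10.1.2.99 and 10.1.3.99.
gcloud compute forwarding-rules describe FWRULE_A --global \
    --format="value(IPAddress)"
gcloud compute forwarding-rules describe FWRULE_B --global \
    --format="value(IPAddress)"
```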
Terraform
To create the health check, use the google_compute_health_check resource.

    resource "google_compute_health_check" "default" {
      provider = google-beta
      name     = "global-http-health-check"
      http_health_check {
        port_specification = "USE_SERVING_PORT"
      }
    }

To create the backend service, use the google_compute_backend_service resource.

    resource "google_compute_backend_service" "default" {
      name                  = "gl7-gilb-backend-service"
      provider              = google-beta
      protocol              = "HTTP"
      load_balancing_scheme = "INTERNAL_MANAGED"
      timeout_sec           = 10
      health_checks         = [google_compute_health_check.default.id]
      backend {
        group           = google_compute_region_instance_group_manager.mig_a.instance_group
        balancing_mode  = "UTILIZATION"
        capacity_scaler = 1.0
      }
      backend {
        group           = google_compute_region_instance_group_manager.mig_b.instance_group
        balancing_mode  = "UTILIZATION"
        capacity_scaler = 1.0
      }
    }

To create the URL map, use the google_compute_url_map resource.

    resource "google_compute_url_map" "default" {
      name            = "gl7-gilb-url-map"
      provider        = google-beta
      default_service = google_compute_backend_service.default.id
    }

To create the target HTTP proxy, use the google_compute_target_http_proxy resource.

    resource "google_compute_target_http_proxy" "default" {
      name     = "gil7target-http-proxy"
      provider = google-beta
      url_map  = google_compute_url_map.default.id
    }

To create the forwarding rules, use the google_compute_global_forwarding_rule resource.

    resource "google_compute_global_forwarding_rule" "fwd_rule_a" {
      provider              = google-beta
      depends_on            = [google_compute_subnetwork.proxy_subnet_a]
      ip_address            = "10.1.2.99"
      ip_protocol           = "TCP"
      load_balancing_scheme = "INTERNAL_MANAGED"
      name                  = "gil7forwarding-rule-a"
      network               = google_compute_network.default.id
      port_range            = "80"
      target                = google_compute_target_http_proxy.default.id
      subnetwork            = google_compute_subnetwork.subnet_a.id
    }

    resource "google_compute_global_forwarding_rule" "fwd_rule_b" {
      provider              = google-beta
      depends_on            = [google_compute_subnetwork.proxy_subnet_b]
      ip_address            = "10.1.3.99"
      ip_protocol           = "TCP"
      load_balancing_scheme = "INTERNAL_MANAGED"
      name                  = "gil7forwarding-rule-b"
      network               = google_compute_network.default.id
      port_range            = "80"
      target                = google_compute_target_http_proxy.default.id
      subnetwork            = google_compute_subnetwork.subnet_b.id
    }

To learn how to apply or remove a Terraform configuration, see Basic Terraform commands.
API
Create the health check by making a POST request to the healthChecks.insert method, replacing PROJECT_ID with your project ID.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/healthChecks
    {
      "name": "global-http-health-check",
      "type": "HTTP",
      "httpHealthCheck": {
        "portSpecification": "USE_SERVING_PORT"
      }
    }

Create the global backend service by making a POST request to the backendServices.insert method, replacing PROJECT_ID with your project ID.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendServices
    {
      "name": "BACKEND_SERVICE_NAME",
      "backends": [
        {
          "group": "projects/PROJECT_ID/zones/ZONE_A/instanceGroups/gl7-ilb-mig-a",
          "balancingMode": "UTILIZATION"
        },
        {
          "group": "projects/PROJECT_ID/zones/ZONE_B/instanceGroups/gl7-ilb-mig-b",
          "balancingMode": "UTILIZATION"
        }
      ],
      "healthChecks": [
        "projects/PROJECT_ID/global/healthChecks/global-http-health-check"
      ],
      "loadBalancingScheme": "INTERNAL_MANAGED"
    }

Create the URL map by making a POST request to the urlMaps.insert method, replacing PROJECT_ID with your project ID.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/urlMaps
    {
      "name": "l7-ilb-map",
      "defaultService": "projects/PROJECT_ID/global/backendServices/BACKEND_SERVICE_NAME"
    }

For HTTP:

Create the target HTTP proxy by making a POST request to the targetHttpProxies.insert method, replacing PROJECT_ID with your project ID.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/targetHttpProxies
    {
      "name": "l7-ilb-proxy",
      "urlMap": "projects/PROJECT_ID/global/urlMaps/l7-ilb-map"
    }

Create the forwarding rules by making POST requests to the globalForwardingRules.insert method, replacing PROJECT_ID with your project ID.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/forwardingRules
    {
      "name": "FWRULE_A",
      "IPAddress": "10.1.2.99",
      "IPProtocol": "TCP",
      "portRange": "80-80",
      "target": "projects/PROJECT_ID/global/targetHttpProxies/l7-ilb-proxy",
      "loadBalancingScheme": "INTERNAL_MANAGED",
      "subnetwork": "projects/PROJECT_ID/regions/REGION_A/subnetworks/SUBNET_A",
      "network": "projects/PROJECT_ID/global/networks/NETWORK",
      "networkTier": "PREMIUM"
    }

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/forwardingRules
    {
      "name": "FWRULE_B",
      "IPAddress": "10.1.3.99",
      "IPProtocol": "TCP",
      "portRange": "80-80",
      "target": "projects/PROJECT_ID/global/targetHttpProxies/l7-ilb-proxy",
      "loadBalancingScheme": "INTERNAL_MANAGED",
      "subnetwork": "projects/PROJECT_ID/regions/REGION_B/subnetworks/SUBNET_B",
      "network": "projects/PROJECT_ID/global/networks/NETWORK",
      "networkTier": "PREMIUM"
    }

For HTTPS:
Read the certificate and private key files, and then create the SSL certificate. The following example shows how to do this with Python.

    from pathlib import Path
    from pprint import pprint
    from typing import Union

    from googleapiclient import discovery


    def create_regional_certificate(
        project_id: str,
        region: str,
        certificate_file: Union[str, Path],
        private_key_file: Union[str, Path],
        certificate_name: str,
        description: str = "Certificate created from a code sample.",
    ) -> dict:
        """
        Create a regional SSL self-signed certificate within your Google Cloud project.

        Args:
            project_id: project ID or project number of the Cloud project you want to use.
            region: name of the region you want to use.
            certificate_file: path to the file with the certificate you want to create in your project.
            private_key_file: path to the private key you used to sign the certificate with.
            certificate_name: name for the certificate once it's created in your project.
            description: description of the certificate.

        Returns:
            Dictionary with information about the new regional SSL self-signed certificate.
        """
        service = discovery.build("compute", "v1")

        # Read the cert into memory
        with open(certificate_file) as f:
            _temp_cert = f.read()

        # Read the private_key into memory
        with open(private_key_file) as f:
            _temp_key = f.read()

        # Now that the certificate and private key are in memory, you can create the
        # certificate resource
        ssl_certificate_body = {
            "name": certificate_name,
            "description": description,
            "certificate": _temp_cert,
            "privateKey": _temp_key,
        }
        request = service.regionSslCertificates().insert(
            project=project_id, region=region, body=ssl_certificate_body
        )
        response = request.execute()
        pprint(response)

        return response

Create the target HTTPS proxy by making a POST request to the targetHttpsProxies.insert method, replacing PROJECT_ID with your project ID.
    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/targetHttpsProxies
    {
      "name": "l7-ilb-proxy",
      "urlMap": "projects/PROJECT_ID/global/urlMaps/l7-ilb-map",
      "sslCertificates": [
        "projects/PROJECT_ID/global/sslCertificates/SSL_CERT_NAME"
      ]
    }

Create the forwarding rules by making POST requests to the globalForwardingRules.insert method, replacing PROJECT_ID with your project ID.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/forwardingRules
    {
      "name": "FWRULE_A",
      "IPAddress": "10.1.2.99",
      "IPProtocol": "TCP",
      "portRange": "443-443",
      "target": "projects/PROJECT_ID/global/targetHttpsProxies/l7-ilb-proxy",
      "loadBalancingScheme": "INTERNAL_MANAGED",
      "subnetwork": "projects/PROJECT_ID/regions/REGION_A/subnetworks/SUBNET_A",
      "network": "projects/PROJECT_ID/global/networks/NETWORK",
      "networkTier": "PREMIUM"
    }

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/forwardingRules
    {
      "name": "FWRULE_B",
      "IPAddress": "10.1.3.99",
      "IPProtocol": "TCP",
      "portRange": "443-443",
      "target": "projects/PROJECT_ID/global/targetHttpsProxies/l7-ilb-proxy",
      "loadBalancingScheme": "INTERNAL_MANAGED",
      "subnetwork": "projects/PROJECT_ID/regions/REGION_B/subnetworks/SUBNET_B",
      "network": "projects/PROJECT_ID/global/networks/NETWORK",
      "networkTier": "PREMIUM"
    }

Test the load balancer
Create a VM instance to test connectivity
Create a client VM in each region:

    gcloud compute instances create l7-ilb-client-a \
        --image-family=debian-12 \
        --image-project=debian-cloud \
        --network=NETWORK \
        --subnet=SUBNET_A \
        --zone=ZONE_A \
        --tags=allow-ssh

    gcloud compute instances create l7-ilb-client-b \
        --image-family=debian-12 \
        --image-project=debian-cloud \
        --network=NETWORK \
        --subnet=SUBNET_B \
        --zone=ZONE_B \
        --tags=allow-ssh

Use SSH to connect to each client instance.

    gcloud compute ssh l7-ilb-client-a --zone=ZONE_A

    gcloud compute ssh l7-ilb-client-b --zone=ZONE_B
Verify that the IP address is serving its hostname
Verify that the client VM can reach both IP addresses. The command returns the name of the backend VM that served the request:
curl 10.1.2.99
curl 10.1.3.99
For HTTPS testing, replace curl with the following command line:

    curl -k 'https://DOMAIN_NAME:443' --connect-to DOMAIN_NAME:443:10.1.2.99:443

    curl -k 'https://DOMAIN_NAME:443' --connect-to DOMAIN_NAME:443:10.1.3.99:443

Replace DOMAIN_NAME with your application domain name, for example, test.example.com.

The -k flag causes curl to skip certificate validation.

Optional: Use the configured DNS record to resolve the IP address closest to the client VM. For example, DNS_NAME can be service.example.com.

    curl DNS_NAME
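The load-balancing tests that follow tally responses per backend with a tr/sort/uniq pipeline. As a local sanity check of how that tally works, here is a minimal sketch using mock responses and hypothetical backend names (vm-a, vm-b):

```shell
# Simulate responses from two hypothetical backend VMs and tally them the
# same way the load-balancing tests do: split on ':', drop blank lines,
# and count occurrences per backend name.
RESULTS=":vm-a:vm-b:vm-a:vm-a:vm-b:vm-a"
echo "$RESULTS" | tr ':' '\n' | grep -Ev "^$" | sort | uniq -c
# Prints one count per backend: "4 vm-a" and "2 vm-b".
```

In the real tests, roughly even counts across backend hostnames indicate that requests are being balanced.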
Run 100 requests and confirm that they are load balanced
For HTTP:
    {
      RESULTS=
      for i in {1..100}
      do
        RESULTS="$RESULTS:$(curl --silent 10.1.2.99)"
      done
      echo ""
      echo " Results of load-balancing to 10.1.2.99: "
      echo "***"
      echo "$RESULTS" | tr ':' '\n' | grep -Ev "^$" | sort | uniq -c
      echo
    }

    {
      RESULTS=
      for i in {1..100}
      do
        RESULTS="$RESULTS:$(curl --silent 10.1.3.99)"
      done
      echo ""
      echo " Results of load-balancing to 10.1.3.99: "
      echo "***"
      echo "$RESULTS" | tr ':' '\n' | grep -Ev "^$" | sort | uniq -c
      echo
    }

For HTTPS:

In the following scripts, replace DOMAIN_NAME with your application domain name, for example, test.example.com.

    {
      RESULTS=
      for i in {1..100}
      do
        RESULTS="$RESULTS:$(curl -k -s 'https://DOMAIN_NAME:443' --connect-to DOMAIN_NAME:443:10.1.2.99:443)"
      done
      echo ""
      echo " Results of load-balancing to 10.1.2.99: "
      echo "***"
      echo "$RESULTS" | tr ':' '\n' | grep -Ev "^$" | sort | uniq -c
      echo
    }

    {
      RESULTS=
      for i in {1..100}
      do
        RESULTS="$RESULTS:$(curl -k -s 'https://DOMAIN_NAME:443' --connect-to DOMAIN_NAME:443:10.1.3.99:443)"
      done
      echo ""
      echo " Results of load-balancing to 10.1.3.99: "
      echo "***"
      echo "$RESULTS" | tr ':' '\n' | grep -Ev "^$" | sort | uniq -c
      echo
    }

Test failover
Verify failover to backends in the REGION_A region when backends in the REGION_B region are unhealthy or unreachable. To simulate failover, remove all backends from REGION_B:

    gcloud compute backend-services remove-backend BACKEND_SERVICE_NAME \
        --instance-group=gl7-ilb-mig-b \
        --instance-group-zone=ZONE_B \
        --global
Connect using SSH to a client VM in REGION_B.

    gcloud compute ssh l7-ilb-client-b \
        --zone=ZONE_B
Send requests to the load balanced IP address in the REGION_B region. The command output shows responses from backend VMs in REGION_A.

In the following script, replace DOMAIN_NAME with your application domain name, for example, test.example.com.

    {
      RESULTS=
      for i in {1..100}
      do
        RESULTS="$RESULTS:$(curl -k -s 'https://DOMAIN_NAME:443' --connect-to DOMAIN_NAME:443:10.1.3.99:443)"
      done
      echo "***"
      echo "*** Results of load-balancing to 10.1.3.99: "
      echo "***"
      echo "$RESULTS" | tr ':' '\n' | grep -Ev "^$" | sort | uniq -c
      echo
    }
Additional configuration options
This section expands on the configuration example to provide alternative andadditional configuration options. All of the tasks are optional. You canperform them in any order.
Enable session affinity
These procedures show you how to update the backend service for the example cross-region internal Application Load Balancer so that the backend service uses generated cookie affinity, header field affinity, or HTTP cookie affinity.

When generated cookie affinity is enabled, the load balancer issues a cookie on the first request. For each subsequent request with the same cookie, the load balancer directs the request to the same backend virtual machine (VM) instance or endpoint. In this example, the cookie is named GCILB.

When header field affinity is enabled, the load balancer routes requests to backend VMs or endpoints in a network endpoint group (NEG) based on the value of the HTTP header named in the --custom-request-header flag. Header field affinity is only valid if the load balancing locality policy is either RING_HASH or MAGLEV and the backend service's consistent hash specifies the name of the HTTP header.

When HTTP cookie affinity is enabled, the load balancer routes requests to backend VMs or endpoints in a NEG, based on an HTTP cookie named in the HTTP_COOKIE flag with the optional --affinity-cookie-ttl flag. If the client doesn't provide the cookie in its HTTP request, the proxy generates the cookie and returns it to the client in a Set-Cookie header. HTTP cookie affinity is only valid if the load balancing locality policy is either RING_HASH or MAGLEV and the backend service's consistent hash specifies the HTTP cookie.
Console
To enable or change session affinity for a backend service:
In the Google Cloud console, go to the Load balancing page.
- Click Backends.
- Click gil7-backend-service (the name of the backend service you created for this example), and then click Edit.
- On the Backend service details page, click Advanced configuration.
- Under Session affinity, select the type of session affinity you want.
- Click Update.
gcloud
Use the following Google Cloud CLI commands to update the backend service to different types of session affinity:
    gcloud compute backend-services update gil7-backend-service \
        --session-affinity=[GENERATED_COOKIE | HEADER_FIELD | HTTP_COOKIE | CLIENT_IP] \
        --global
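The --session-affinity flag alone is enough for GENERATED_COOKIE and CLIENT_IP. For HEADER_FIELD and HTTP_COOKIE, the backend service also needs a locality load balancing policy and consistent-hash settings. One way to sketch this is to export the backend service, edit the YAML, and re-import it; the field names below come from the Compute Engine API, and the header and cookie values are illustrative, not recommendations:

```shell
gcloud compute backend-services export gil7-backend-service \
    --destination=bs.yaml --global
# Add, for example, the following to bs.yaml (illustrative values):
#   localityLbPolicy: RING_HASH
#   sessionAffinity: HTTP_COOKIE
#   consistentHash:
#     httpCookie:
#       name: my-affinity-cookie    # hypothetical cookie name
#       ttl:
#         seconds: 1800
gcloud compute backend-services import gil7-backend-service \
    --source=bs.yaml --global
```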
API
To set session affinity, make a PATCH request to the backendServices/patch method.

    PATCH https://compute.googleapis.com/compute/v1/projects/[PROJECT_ID]/global/backendServices/gil7-backend-service
    {
      "sessionAffinity": ["GENERATED_COOKIE" | "HEADER_FIELD" | "HTTP_COOKIE" | "CLIENT_IP" ]
    }

Restrict which clients can send traffic to the load balancer
You can restrict clients from connecting to an internal Application Load Balancer forwarding rule VIP by configuring egress firewall rules on these clients. Set these firewall rules on specific client VMs based on service accounts or tags.

You can't use firewall rules to restrict inbound traffic to specific internal Application Load Balancer forwarding rule VIPs. Any client on the same VPC network and in the same region as the forwarding rule VIP can generally send traffic to the forwarding rule VIP.

Additionally, all requests to backends come from proxies that use IP addresses in the proxy-only subnet range. It isn't possible to create firewall rules that allow or deny ingress traffic on these backends based on the forwarding rule VIP used by a client.
Here are some examples of how to use egress firewall rules to restrict trafficto the load balancer's forwarding rule VIP.
Console
To identify the client VMs, tag the specific VMs you want to restrict. These tags are used to associate firewall rules with the tagged client VMs. Then, add the tag to the TARGET_TAG field in the following steps.
Use either a single firewall rule or multiple rules to set this up.
Single egress firewall rule
You can configure one firewall egress rule to deny all egresstraffic going from tagged client VMs to a load balancer's VIP.
In the Google Cloud console, go to the Firewall rules page.

Click Create firewall rule to create the rule to deny egress traffic from tagged client VMs to a load balancer's VIP.
- Name: fr-deny-access
- Network: lb-network
- Priority: 100
- Direction of traffic: Egress
- Action on match: Deny
- Targets: Specified target tags
- Target tags: TARGET_TAG
- Destination filter: IP ranges
- Destination IP ranges: 10.1.2.99
- Protocols and ports:
  - Choose Specified protocols and ports.
  - Select the TCP checkbox, and then enter 80 for the port number.

Click Create.
Multiple egress firewall rules
A more scalable approach involves setting two rules: a default, low-priority rule that restricts all clients from accessing the load balancer's VIP, and a second, higher-priority rule that allows a subset of tagged clients to access the load balancer's VIP. Only tagged VMs can access the VIP.
In the Google Cloud console, go to the Firewall rules page.

Click Create firewall rule to create the lower-priority rule to deny access by default:
- Name: fr-deny-all-access-low-priority
- Network: lb-network
- Priority: 200
- Direction of traffic: Egress
- Action on match: Deny
- Targets: Specified target tags
- Target tags: TARGET_TAG
- Destination filter: IP ranges
- Destination IP ranges: 10.1.2.99
- Protocols and ports:
  - Choose Specified protocols and ports.
  - Select the TCP checkbox, and then enter 80 for the port number.

Click Create.
Click Create firewall rule to create the higher-priority rule to allow traffic from certain tagged instances.
- Name: fr-allow-some-access-high-priority
- Network: lb-network
- Priority: 100
- Direction of traffic: Egress
- Action on match: Allow
- Targets: Specified target tags
- Target tags: TARGET_TAG
- Destination filter: IP ranges
- Destination IP ranges: 10.1.2.99
- Protocols and ports:
  - Choose Specified protocols and ports.
  - Select the TCP checkbox, and then enter 80 for the port number.

Click Create.
gcloud
To identify the client VMs, tag the specific VMs you want to restrict. Then add the tag to the TARGET_TAG field in these steps.
Use either a single firewall rule or multiple rules to set this up.
Single egress firewall rule
You can configure one firewall egress rule to deny all egresstraffic going from tagged client VMs to a load balancer's VIP.
    gcloud compute firewall-rules create fr-deny-access \
        --network=lb-network \
        --action=deny \
        --direction=egress \
        --rules=tcp \
        --priority=100 \
        --destination-ranges=10.1.2.99 \
        --target-tags=TARGET_TAG
Multiple egress firewall rules
A more scalable approach involves setting two rules: a default, low-priorityrule that restricts all clients from accessing the load balancer's VIP, and asecond, higher-priority rule that allows a subset of tagged clients to accessthe load balancer's VIP. Only tagged VMs can access the VIP.
Create the lower-priority rule:

    gcloud compute firewall-rules create fr-deny-all-access-low-priority \
        --network=lb-network \
        --action=deny \
        --direction=egress \
        --rules=tcp \
        --priority=200 \
        --destination-ranges=10.1.2.99
Create the higher-priority rule:

    gcloud compute firewall-rules create fr-allow-some-access-high-priority \
        --network=lb-network \
        --action=allow \
        --direction=egress \
        --rules=tcp \
        --priority=100 \
        --destination-ranges=10.1.2.99 \
        --target-tags=TARGET_TAG
To use service accounts instead of tags to control access, use the --target-service-accounts option instead of the --target-tags flag when creating firewall rules.
Scale restricted access to internal Application Load Balancer backends based on subnets
Maintaining separate firewall rules or adding new load-balanced IP addresses to existing rules as described in the previous section becomes inconvenient as the number of forwarding rules increases. One way to prevent this is to allocate forwarding rule IP addresses from a reserved subnet. Then, traffic from tagged instances or service accounts can be allowed or blocked by using the reserved subnet as the destination range for firewall rules. This lets you effectively control access to a group of forwarding rule VIPs without having to maintain per-VIP firewall egress rules.
Here are the high-level steps to set this up, assuming that you will createall the other required load balancer resources separately.
gcloud
Create a regional subnet to use to allocate load-balanced IP addresses for forwarding rules:

    gcloud compute networks subnets create l7-ilb-restricted-subnet \
        --network=lb-network \
        --region=us-west1 \
        --range=10.127.0.0/24

Create a forwarding rule that takes an address from the subnet. The following example uses the address 10.127.0.1 from the subnet created in the previous step.

    gcloud compute forwarding-rules create l7-ilb-forwarding-rule-restricted \
        --load-balancing-scheme=INTERNAL_MANAGED \
        --network=lb-network \
        --subnet=l7-ilb-restricted-subnet \
        --address=10.127.0.1 \
        --ports=80 \
        --global \
        --target-http-proxy=gil7-http-proxy

Create a firewall rule to restrict traffic destined for the range of IP addresses in the forwarding rule subnet (l7-ilb-restricted-subnet):

    gcloud compute firewall-rules create restrict-traffic-to-subnet \
        --network=lb-network \
        --action=deny \
        --direction=egress \
        --rules=tcp:80 \
        --priority=100 \
        --destination-ranges=10.127.0.0/24 \
        --target-tags=TARGET_TAG
Use the same IP address between multiple internal forwarding rules
For multiple internal forwarding rules to share the same internal IP address, you must reserve the IP address and set its --purpose flag to SHARED_LOADBALANCER_VIP.
gcloud
    gcloud compute addresses create SHARED_IP_ADDRESS_NAME \
        --region=REGION \
        --subnet=SUBNET_NAME \
        --purpose=SHARED_LOADBALANCER_VIP
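After the address is reserved, multiple forwarding rules can use it, differentiated by port and target proxy. A minimal sketch, reusing the example's proxies and assuming the reserved address came out as 10.1.2.98 (hypothetical) in SUBNET_NAME:

```shell
# Two forwarding rules that share the reserved VIP: one for HTTP on
# port 80 and one for HTTPS on port 443.
gcloud compute forwarding-rules create shared-vip-http \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=NETWORK \
    --subnet=SUBNET_NAME \
    --subnet-region=REGION \
    --address=10.1.2.98 \
    --ports=80 \
    --target-http-proxy=gil7-http-proxy \
    --global

gcloud compute forwarding-rules create shared-vip-https \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=NETWORK \
    --subnet=SUBNET_NAME \
    --subnet-region=REGION \
    --address=10.1.2.98 \
    --ports=443 \
    --target-https-proxy=gil7-https-proxy \
    --global
```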
Configure DNS routing policies
If your clients are in multiple regions, you might want to make your cross-region internal Application Load Balancer accessible by using VIPs in these regions. You can use DNS routing policies of type GEO to route client traffic to the load balancer VIP in the region closest to the client. This multi-region setup minimizes latency and network transit costs. In addition, it lets you set up a DNS-based, global, load balancing solution that provides resilience against regional outages.
Cloud DNS supports health checking and enables automatic failover whenthe endpoints fail their health checks. During a failover, Cloud DNSautomatically adjusts the traffic split among the remaining healthy endpoints.For more information, seeManage DNS routing policies and healthchecks.
gcloud
To create a DNS entry with a 30-second TTL, use the gcloud dns record-sets create command.

    gcloud dns record-sets create DNS_ENTRY --ttl="30" \
        --type="A" --zone="service-zone" \
        --routing-policy-type="GEO" \
        --routing-policy-data="REGION_A=gil7-forwarding-rule-a@global;REGION_B=gil7-forwarding-rule-b@global" \
        --enable-health-checking

Replace the following:
- DNS_ENTRY: DNS or domain name of the record-set, for example, service.example.com
- REGION_A and REGION_B: the regions where you have configured the load balancer
API
Create the DNS record by making a POST request to the ResourceRecordSets.create method. Replace PROJECT_ID with your project ID.

    POST https://www.googleapis.com/dns/v1/projects/PROJECT_ID/managedZones/SERVICE_ZONE/rrsets
    {
      "name": "DNS_ENTRY",
      "type": "A",
      "ttl": 30,
      "routingPolicy": {
        "geo": {
          "items": [
            {
              "location": "REGION_A",
              "healthCheckedTargets": {
                "internalLoadBalancers": [
                  {
                    "loadBalancerType": "globalL7ilb",
                    "ipAddress": "IP_ADDRESS",
                    "port": "80",
                    "ipProtocol": "tcp",
                    "networkUrl": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
                    "project": "PROJECT_ID"
                  }
                ]
              }
            },
            {
              "location": "REGION_B",
              "healthCheckedTargets": {
                "internalLoadBalancers": [
                  {
                    "loadBalancerType": "globalL7ilb",
                    "ipAddress": "IP_ADDRESS_B",
                    "port": "80",
                    "ipProtocol": "tcp",
                    "networkUrl": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
                    "project": "PROJECT_ID"
                  }
                ]
              }
            }
          ]
        }
      }
    }

Update client HTTP keepalive timeout
The load balancer created in the previous steps has been configured with a default value for the client HTTP keepalive timeout. To update the client HTTP keepalive timeout, use the following instructions.
Console
In the Google Cloud console, go to the Load balancing page.
- Click the name of the load balancer that you want to modify.
- Click Edit.
- Click Frontend configuration.
- Expand Advanced features. For HTTP keepalive timeout, enter a timeout value.
- Click Update.
- To review your changes, click Review and finalize, and then click Update.
gcloud
For an HTTP load balancer, update the target HTTP proxy by using the `gcloud compute target-http-proxies update` command:
```
gcloud compute target-http-proxies update TARGET_HTTP_PROXY_NAME \
    --http-keep-alive-timeout-sec=HTTP_KEEP_ALIVE_TIMEOUT_SEC \
    --global
```
For an HTTPS load balancer, update the target HTTPS proxy by using the `gcloud compute target-https-proxies update` command:
```
gcloud compute target-https-proxies update TARGET_HTTPS_PROXY_NAME \
    --http-keep-alive-timeout-sec=HTTP_KEEP_ALIVE_TIMEOUT_SEC \
    --global
```
Replace the following:
- `TARGET_HTTP_PROXY_NAME`: the name of the target HTTP proxy
- `TARGET_HTTPS_PROXY_NAME`: the name of the target HTTPS proxy
- `HTTP_KEEP_ALIVE_TIMEOUT_SEC`: the HTTP keepalive timeout value, from 5 to 600 seconds
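The documented range for the keepalive timeout is 5 to 600 seconds. If you script these updates, a small pre-flight check (an illustrative helper, not part of the gcloud SDK) can catch out-of-range values before you run the update command:

```python
# Illustrative pre-flight check for the HTTP_KEEP_ALIVE_TIMEOUT_SEC value
# described above: it must be between 5 and 600 seconds.
def validate_keepalive_timeout(seconds):
    if not 5 <= seconds <= 600:
        raise ValueError(
            f"HTTP keepalive timeout must be 5-600 seconds, got {seconds}"
        )
    return seconds

print(validate_keepalive_timeout(300))  # 300 is in range, so it is returned
```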
Enable outlier detection
You can enable outlier detection on global backend services to identify unhealthy serverless NEGs and reduce the number of requests sent to the unhealthy serverless NEGs.
Outlier detection is enabled on the backend service by using one of the following methods:
- The `consecutiveErrors` method (`outlierDetection.consecutiveErrors`), in which a `5xx` series HTTP status code qualifies as an error.
- The `consecutiveGatewayFailure` method (`outlierDetection.consecutiveGatewayFailure`), in which only the `502`, `503`, and `504` HTTP status codes qualify as errors.
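The difference between the two methods is which status codes count as errors. The following sketch encodes that distinction for reference; the function names are descriptive, not API field names:

```python
# Illustrative classification of HTTP status codes under the two
# outlier-detection methods (function names are descriptive, not API fields).

def is_consecutive_error(status):
    # consecutiveErrors: any 5xx-series status code qualifies as an error.
    return 500 <= status <= 599

def is_gateway_failure(status):
    # consecutiveGatewayFailure: only 502, 503, and 504 qualify.
    return status in (502, 503, 504)

# A 500 counts as an error for consecutiveErrors but not for
# consecutiveGatewayFailure; a 503 counts for both methods.
print(is_consecutive_error(500), is_gateway_failure(500))  # True False
print(is_consecutive_error(503), is_gateway_failure(503))  # True True
```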
Use the following steps to enable outlier detection for an existing backend service. Note that even after enabling outlier detection, some requests can be sent to the unhealthy service and return a `5xx` status code to the clients. To further reduce the error rate, you can configure more aggressive values for the outlier detection parameters. For more information, see the `outlierDetection` field.
Console
In the Google Cloud console, go to the Load balancing page.

Click the name of the load balancer whose backend service you want to edit.

On the Load balancer details page, click Edit.

On the Edit cross-region internal Application Load Balancer page, click Backend configuration.

On the Backend configuration page, click Edit for the backend service that you want to modify.

Scroll down and expand the Advanced configurations section.

In the Outlier detection section, select the Enable checkbox.

Click Edit to configure outlier detection.
Verify that the following options are configured with these values:
| Property | Value |
|---|---|
| Consecutive errors | 5 |
| Interval | 1000 |
| Base ejection time | 30000 |
| Max ejection percent | 50 |
| Enforcing consecutive errors | 100 |

In this example, the outlier detection analysis runs every one second. If the number of consecutive HTTP `5xx` status codes received by an Envoy proxy is five or more, the backend endpoint is ejected from the load-balancing pool of that Envoy proxy for 30 seconds. When the enforcing percentage is set to 100%, the backend service enforces the ejection of unhealthy endpoints from the load-balancing pools of those specific Envoy proxies every time the outlier detection analysis runs. If the ejection conditions are met, up to 50% of the backend endpoints from the load-balancing pool can be ejected.

Click Save.
To update the backend service, click Update.

To update the load balancer, on the Edit cross-region internal Application Load Balancer page, click Update.
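To make the parameter semantics above concrete, the following simplified simulation tracks consecutive `5xx` responses and reports when an endpoint would be ejected. This is an approximation for illustration, not Envoy's actual implementation:

```python
# Simplified sketch of the consecutiveErrors ejection rule (an approximation,
# not Envoy's actual implementation): after 5 consecutive 5xx responses, the
# endpoint is ejected for baseEjectionTime (30 seconds in this example).
CONSECUTIVE_ERRORS = 5

def first_ejection_index(status_codes):
    """Return the index of the response that triggers ejection, or None."""
    streak = 0
    for i, status in enumerate(status_codes):
        # A 5xx response extends the streak; any other response resets it.
        streak = streak + 1 if 500 <= status <= 599 else 0
        if streak >= CONSECUTIVE_ERRORS:
            return i
    return None

# Four 5xx responses, one success, then five 5xx responses: the success
# resets the streak, so ejection is triggered at index 9.
codes = [503, 503, 503, 503, 200, 503, 503, 503, 503, 503]
print(first_ejection_index(codes))  # 9
```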
gcloud
Export the backend service into a YAML file.
```
gcloud compute backend-services export BACKEND_SERVICE_NAME \
    --destination=BACKEND_SERVICE_NAME.yaml --global
```
Replace `BACKEND_SERVICE_NAME` with the name of the backend service.

Edit the YAML configuration of the backend service to add the fields for outlier detection, as shown in the `outlierDetection` section of the following YAML configuration.

In this example, the outlier detection analysis runs every one second. If the number of consecutive HTTP `5xx` status codes received by an Envoy proxy is five or more, the backend endpoint is ejected from the load-balancing pool of that Envoy proxy for 30 seconds. When the enforcing percentage is set to 100%, the backend service enforces the ejection of unhealthy endpoints from the load-balancing pools of those specific Envoy proxies every time the outlier detection analysis runs. If the ejection conditions are met, up to 50% of the backend endpoints from the load-balancing pool can be ejected.

```
name: BACKEND_SERVICE_NAME
backends:
- balancingMode: UTILIZATION
  capacityScaler: 1.0
  group: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_A/networkEndpointGroups/SERVERLESS_NEG_NAME
- balancingMode: UTILIZATION
  capacityScaler: 1.0
  group: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_B/networkEndpointGroups/SERVERLESS_NEG_NAME_2
outlierDetection:
  baseEjectionTime:
    nanos: 0
    seconds: 30
  consecutiveErrors: 5
  enforcingConsecutiveErrors: 100
  interval:
    nanos: 0
    seconds: 1
  maxEjectionPercent: 50
port: 80
selfLink: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendServices/BACKEND_SERVICE_NAME
sessionAffinity: NONE
timeoutSec: 30
...
```
Replace the following:
- `BACKEND_SERVICE_NAME`: the name of the backend service
- `PROJECT_ID`: the ID of your project
- `REGION_A` and `REGION_B`: the regions where the load balancer has been configured
- `SERVERLESS_NEG_NAME`: the name of the first serverless NEG
- `SERVERLESS_NEG_NAME_2`: the name of the second serverless NEG
Update the backend service by importing the latest configuration.
```
gcloud compute backend-services import BACKEND_SERVICE_NAME \
    --source=BACKEND_SERVICE_NAME.yaml --global
```
Outlier detection is now enabled on the backend service.
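If you automate this workflow, you can re-export the backend service and spot-check that the imported values match what you intended. The following sketch stands in the parsed YAML with a plain dictionary for illustration; a real script would load the exported file with a YAML parser such as PyYAML:

```python
# Spot-check of the outlierDetection values from this example, using a plain
# dict in place of the parsed YAML export (a real script would parse the
# exported file with a YAML library).
exported = {
    "outlierDetection": {
        "baseEjectionTime": {"seconds": 30, "nanos": 0},
        "consecutiveErrors": 5,
        "enforcingConsecutiveErrors": 100,
        "interval": {"seconds": 1, "nanos": 0},
        "maxEjectionPercent": 50,
    }
}

od = exported["outlierDetection"]
# Each value must match the configuration described in this example.
checks = [
    od["consecutiveErrors"] == 5,
    od["baseEjectionTime"]["seconds"] == 30,
    od["interval"]["seconds"] == 1,
    od["maxEjectionPercent"] == 50,
    od["enforcingConsecutiveErrors"] == 100,
]
print(all(checks))  # True
```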
What's next
- Convert Application Load Balancer to IPv6
- Internal Application Load Balancer overview
- Proxy-only subnets for Envoy-based load balancers
- Manage certificates
- Clean up a load balancing setup
Except as otherwise noted, the content of this page is licensed under theCreative Commons Attribution 4.0 License, and code samples are licensed under theApache 2.0 License. For details, see theGoogle Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-12-15 UTC.