Set up a cross-region internal proxy Network Load Balancer with VM instance group backends
This document provides instructions for configuring a cross-region internal proxy Network Load Balancer for your services that run on Compute Engine VMs.
Before you begin
Before following this guide, familiarize yourself with the internal proxy Network Load Balancer overview and with proxy-only subnets for Envoy-based load balancers.
Permissions
To follow this guide, you must be able to create instances and modify a network in a project. You must be either a project owner or editor, or you must have all of the following Compute Engine IAM roles.
| Task | Required role |
|---|---|
| Create networks, subnets, and load balancer components | Compute Network Admin |
| Add and remove firewall rules | Compute Security Admin |
| Create instances | Compute Instance Admin |
For more information, see the Compute Engine IAM roles documentation.
Setup overview
You can configure the load balancer as shown in the following diagram:
As shown in the diagram, this example creates a cross-region internal proxy Network Load Balancer in a VPC network, with one backend service and two backend managed instance groups (MIGs) in the REGION_A and REGION_B regions.
The diagram shows the following:
A VPC network with the following subnets:

- Subnet SUBNET_A and a proxy-only subnet in REGION_A.
- Subnet SUBNET_B and a proxy-only subnet in REGION_B.

You must create proxy-only subnets in each region of a VPC network where you use cross-region internal proxy Network Load Balancers. The region's proxy-only subnet is shared among all cross-region internal proxy Network Load Balancers in the region. Source addresses of packets sent from the load balancer to your service's backends are allocated from the proxy-only subnet. In this example, the proxy-only subnet for the REGION_A region has a primary IP address range of 10.129.0.0/23, and the proxy-only subnet for the REGION_B region has a primary IP address range of 10.130.0.0/23, which is the recommended subnet size.
A high availability setup with managed instance group backends for Compute Engine VM deployments in the REGION_A and REGION_B regions. If backends in one region are down, traffic fails over to the other region.

A global backend service that monitors the usage and health of backends.
A global target TCP proxy, which receives a request from the user and forwards it to the backend service.
Global forwarding rules, which have the regional internal IP address of your load balancer and can forward each incoming request to the target proxy.
The internal IP address associated with the forwarding rule can come from a subnet in the same network and region as the backends. Note the following conditions:
- The IP address can (but does not need to) come from the same subnet as the backend instance groups.
- The IP address must not come from a reserved proxy-only subnet that has its --purpose flag set to GLOBAL_MANAGED_PROXY.
- If you want to use the same internal IP address with multiple forwarding rules, set the IP address --purpose flag to SHARED_LOADBALANCER_VIP.
Configure the network and subnets
Within the VPC network, configure a subnet in each region where your backends are configured. In addition, configure a proxy-only subnet in each region in which you want to configure the load balancer.
This example uses the following VPC network, regions, and subnets:
- Network. The network is a custom mode VPC network named NETWORK.

- Subnets for backends.

  - A subnet named SUBNET_A in the REGION_A region uses 10.1.2.0/24 for its primary IP range.
  - A subnet named SUBNET_B in the REGION_B region uses 10.1.3.0/24 for its primary IP range.

- Subnets for proxies.

  - A subnet named PROXY_SN_A in the REGION_A region uses 10.129.0.0/23 for its primary IP range.
  - A subnet named PROXY_SN_B in the REGION_B region uses 10.130.0.0/23 for its primary IP range.
Cross-region internal proxy Network Load Balancers can be accessed from any region within the VPC network, so clients from any region can access your load balancer backends.
Note: Subsequent steps in this guide use the network, region, and subnet parameters as outlined here.

Configure the backend subnets
Console
In the Google Cloud console, go to the VPC networks page.

Click Create VPC network.

Provide a Name for the network.

In the Subnets section, set the Subnet creation mode to Custom.

Create a subnet for the load balancer's backends. In the New subnet section, enter the following information:

- Provide a Name for the subnet.
- Select a Region: REGION_A
- Enter an IP address range: 10.1.2.0/24

Click Done.

Click Add subnet.

Create a subnet for the load balancer's backends. In the New subnet section, enter the following information:

- Provide a Name for the subnet.
- Select a Region: REGION_B
- Enter an IP address range: 10.1.3.0/24

Click Done.

Click Create.
gcloud
Create the custom VPC network with the gcloud compute networks create command:

gcloud compute networks create NETWORK \
    --subnet-mode=custom

Create a subnet in the NETWORK network in the REGION_A region with the gcloud compute networks subnets create command:

gcloud compute networks subnets create SUBNET_A \
    --network=NETWORK \
    --range=10.1.2.0/24 \
    --region=REGION_A

Create a subnet in the NETWORK network in the REGION_B region with the gcloud compute networks subnets create command:

gcloud compute networks subnets create SUBNET_B \
    --network=NETWORK \
    --range=10.1.3.0/24 \
    --region=REGION_B
Terraform
To create the VPC network, use the google_compute_network resource.
resource "google_compute_network" "default" { auto_create_subnetworks = false name = "lb-network-crs-reg" provider = google-beta}To create the VPC subnets in thelb-network-crs-reg network,use thegoogle_compute_subnetwork resource.
resource "google_compute_subnetwork" "subnet_a" { provider = google-beta ip_cidr_range = "10.1.2.0/24" name = "lbsubnet-uswest1" network = google_compute_network.default.id region = "us-west1"}resource "google_compute_subnetwork" "subnet_b" { provider = google-beta ip_cidr_range = "10.1.3.0/24" name = "lbsubnet-useast1" network = google_compute_network.default.id region = "us-east1"}API
Make a POST request to the networks.insert method. Replace PROJECT_ID with your project ID.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks

{
  "routingConfig": {
    "routingMode": "regional"
  },
  "name": "NETWORK",
  "autoCreateSubnetworks": false
}

Make a POST request to the subnetworks.insert method. Replace PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_A/subnetworks

{
  "name": "SUBNET_A",
  "network": "projects/PROJECT_ID/global/networks/NETWORK",
  "ipCidrRange": "10.1.2.0/24",
  "region": "projects/PROJECT_ID/regions/REGION_A"
}

Make a POST request to the subnetworks.insert method. Replace PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_B/subnetworks

{
  "name": "SUBNET_B",
  "network": "projects/PROJECT_ID/global/networks/NETWORK",
  "ipCidrRange": "10.1.3.0/24",
  "region": "projects/PROJECT_ID/regions/REGION_B"
}

Configure the proxy-only subnet
A proxy-only subnet provides a set of IP addresses that Google Cloud uses to run Envoy proxies on your behalf. The proxies terminate connections from the client and create new connections to the backends.
This proxy-only subnet is used by all Envoy-based cross-region load balancers in the same region of the VPC network. There can be only one active proxy-only subnet for a given purpose, per region, per network.
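Because only one active proxy-only subnet of this purpose can exist per region and network, it can help to see what already exists before you create one. The following is an optional check, not part of the original setup steps; the filter expression is one suggested form:

# Optional: list existing proxy-only subnets in NETWORK
gcloud compute networks subnets list \
    --network=NETWORK \
    --filter="purpose=GLOBAL_MANAGED_PROXY"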
Important: Don't try to assign addresses from the proxy-only subnet to your load balancer's forwarding rule or backends. You assign the forwarding rule's IP address and the backend instance IP addresses from a different subnet range (or ranges), not this one. Google Cloud reserves this subnet range for Google Cloud-managed proxies.

Console
If you're using the Google Cloud console, you can wait and create the proxy-only subnet later on the Load balancing page.
If you want to create the proxy-only subnet now, use the following steps:
In the Google Cloud console, go to the VPC networks page.

- Click the name of the VPC network.
- On the Subnets tab, click Add subnet.
- Provide a Name for the proxy-only subnet.
- Select a Region: REGION_A
- In the Purpose list, select Cross-region Managed Proxy.
- In the IP address range field, enter 10.129.0.0/23.
- Click Add.
Create the proxy-only subnet in REGION_B:

- On the Subnets tab, click Add subnet.
- Provide a Name for the proxy-only subnet.
- Select a Region: REGION_B
- In the Purpose list, select Cross-region Managed Proxy.
- In the IP address range field, enter 10.130.0.0/23.
- Click Add.
gcloud
Create the proxy-only subnets with the gcloud compute networks subnets create command.
gcloud compute networks subnets create PROXY_SN_A \
    --purpose=GLOBAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=REGION_A \
    --network=NETWORK \
    --range=10.129.0.0/23

gcloud compute networks subnets create PROXY_SN_B \
    --purpose=GLOBAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=REGION_B \
    --network=NETWORK \
    --range=10.130.0.0/23
Terraform
To create the VPC proxy-only subnets in the lb-network-crs-reg network, use the google_compute_subnetwork resource.
resource "google_compute_subnetwork" "proxy_subnet_a" { provider = google-beta ip_cidr_range = "10.129.0.0/23" name = "proxy-only-subnet1" network = google_compute_network.default.id purpose = "GLOBAL_MANAGED_PROXY" region = "us-west1" role = "ACTIVE" lifecycle { ignore_changes = [ipv6_access_type] }}resource "google_compute_subnetwork" "proxy_subnet_b" { provider = google-beta ip_cidr_range = "10.130.0.0/23" name = "proxy-only-subnet2" network = google_compute_network.default.id purpose = "GLOBAL_MANAGED_PROXY" region = "us-east1" role = "ACTIVE" lifecycle { ignore_changes = [ipv6_access_type] }}API
Create the proxy-only subnets with the subnetworks.insert method, replacing PROJECT_ID with your project ID.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_A/subnetworks

{
  "name": "PROXY_SN_A",
  "ipCidrRange": "10.129.0.0/23",
  "network": "projects/PROJECT_ID/global/networks/NETWORK",
  "region": "projects/PROJECT_ID/regions/REGION_A",
  "purpose": "GLOBAL_MANAGED_PROXY",
  "role": "ACTIVE"
}

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_B/subnetworks

{
  "name": "PROXY_SN_B",
  "ipCidrRange": "10.130.0.0/23",
  "network": "projects/PROJECT_ID/global/networks/NETWORK",
  "region": "projects/PROJECT_ID/regions/REGION_B",
  "purpose": "GLOBAL_MANAGED_PROXY",
  "role": "ACTIVE"
}

Configure firewall rules
This example uses the following firewall rules:
- fw-ilb-to-backends. An ingress rule, applicable to the instances being load balanced, that allows incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP address range for this rule; for example, you can specify just the IP address ranges of the systems from which you initiate SSH sessions. This example uses the target tag allow-ssh to identify the VMs that the firewall rule applies to.
- fw-healthcheck. An ingress rule, applicable to the instances being load balanced, that allows all TCP traffic from the Google Cloud health checking systems (in 130.211.0.0/22 and 35.191.0.0/16). This example uses the target tag load-balanced-backend to identify the VMs that the firewall rule applies to.
- fw-backends. An ingress rule, applicable to the instances being load balanced, that allows TCP traffic on ports 80, 443, and 8080 from the internal proxy Network Load Balancer's managed proxies. This example uses the target tag load-balanced-backend to identify the VMs that the firewall rule applies to.
Without these firewall rules, the default deny ingress rule blocks incoming traffic to the backend instances.
The target tags define the backend instances. Without the target tags, the firewall rules apply to all of your backend instances in the VPC network. When you create the backend VMs, make sure to include the specified target tags, as shown in Create a managed instance group.
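If you created backend VMs without these tags, you can attach the tags afterward. The following is a minimal sketch with gcloud, assuming a hypothetical instance named vm-a1 in ZONE_A:

# Hypothetical example: tag an existing VM so these firewall rules apply to it
gcloud compute instances add-tags vm-a1 \
    --zone=ZONE_A \
    --tags=allow-ssh,load-balanced-backend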
Console
In the Google Cloud console, go to the Firewall policies page.
Click Create firewall rule to create the rule to allow incoming SSH connections:

- Name: fw-ilb-to-backends
- Network: NETWORK
- Direction of traffic: Ingress
- Action on match: Allow
- Targets: Specified target tags
- Target tags: allow-ssh
- Source filter: IPv4 ranges
- Source IPv4 ranges: 0.0.0.0/0
- Protocols and ports:
  - Choose Specified protocols and ports.
  - Select the TCP checkbox, and then enter 22 for the port number.

Click Create.
Click Create firewall rule a second time to create the rule to allow Google Cloud health checks:

- Name: fw-healthcheck
- Network: NETWORK
- Direction of traffic: Ingress
- Action on match: Allow
- Targets: Specified target tags
- Target tags: load-balanced-backend
- Source filter: IPv4 ranges
- Source IPv4 ranges: 130.211.0.0/22 and 35.191.0.0/16
- Protocols and ports:
  - Choose Specified protocols and ports.
  - Select the TCP checkbox, and then enter 80 for the port number.

  As a best practice, limit this rule to just the protocols and ports that match those used by your health check. If you use tcp:80 for the protocol and port, Google Cloud can use HTTP on port 80 to contact your VMs, but it cannot use HTTPS on port 443 to contact them.

Click Create.
Click Create firewall rule a third time to create the rule to allow the load balancer's proxy servers to connect to the backends:

- Name: fw-backends
- Network: NETWORK
- Direction of traffic: Ingress
- Action on match: Allow
- Targets: Specified target tags
- Target tags: load-balanced-backend
- Source filter: IPv4 ranges
- Source IPv4 ranges: 10.129.0.0/23 and 10.130.0.0/23
- Protocols and ports:
  - Choose Specified protocols and ports.
  - Select the TCP checkbox, and then enter 80, 443, 8080 for the port numbers.

Click Create.
gcloud
Create the fw-ilb-to-backends firewall rule to allow SSH connectivity to VMs with the network tag allow-ssh. When you omit source-ranges, Google Cloud interprets the rule to mean any source.

gcloud compute firewall-rules create fw-ilb-to-backends \
    --network=NETWORK \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-ssh \
    --rules=tcp:22

Create the fw-healthcheck rule to allow Google Cloud health checks. This example allows all TCP traffic from health check probers; however, you can configure a narrower set of ports to meet your needs.

gcloud compute firewall-rules create fw-healthcheck \
    --network=NETWORK \
    --action=allow \
    --direction=ingress \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=load-balanced-backend \
    --rules=tcp

Create the fw-backends rule to allow the internal proxy Network Load Balancer's proxies to connect to your backends. Set source-ranges to the allocated ranges of your proxy-only subnets, for example, 10.129.0.0/23 and 10.130.0.0/23.

gcloud compute firewall-rules create fw-backends \
    --network=NETWORK \
    --action=allow \
    --direction=ingress \
    --source-ranges=SOURCE_RANGE \
    --target-tags=load-balanced-backend \
    --rules=tcp:80,tcp:443,tcp:8080
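To confirm that all three rules were created in the expected network, you can optionally list them; the filter shown here is one suggested form, since the network field is matched against the network's resource URL:

# Optional check: list the firewall rules created for this example
gcloud compute firewall-rules list \
    --filter="network:NETWORK"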
API
Create the fw-ilb-to-backends firewall rule by making a POST request to the firewalls.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "name": "fw-ilb-to-backends",
  "network": "projects/PROJECT_ID/global/networks/NETWORK",
  "sourceRanges": [ "0.0.0.0/0" ],
  "targetTags": [ "allow-ssh" ],
  "allowed": [
    {
      "IPProtocol": "tcp",
      "ports": [ "22" ]
    }
  ],
  "direction": "INGRESS"
}

Create the fw-healthcheck firewall rule by making a POST request to the firewalls.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "name": "fw-healthcheck",
  "network": "projects/PROJECT_ID/global/networks/NETWORK",
  "sourceRanges": [ "130.211.0.0/22", "35.191.0.0/16" ],
  "targetTags": [ "load-balanced-backend" ],
  "allowed": [
    {
      "IPProtocol": "tcp"
    }
  ],
  "direction": "INGRESS"
}

Create the fw-backends firewall rule to allow TCP traffic from the proxy subnets by making a POST request to the firewalls.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "name": "fw-backends",
  "network": "projects/PROJECT_ID/global/networks/NETWORK",
  "sourceRanges": [ "10.129.0.0/23", "10.130.0.0/23" ],
  "targetTags": [ "load-balanced-backend" ],
  "allowed": [
    {
      "IPProtocol": "tcp",
      "ports": [ "80" ]
    },
    {
      "IPProtocol": "tcp",
      "ports": [ "443" ]
    },
    {
      "IPProtocol": "tcp",
      "ports": [ "8080" ]
    }
  ],
  "direction": "INGRESS"
}

Create a managed instance group
This section shows how to create a template and a managed instance group. The managed instance group provides VM instances running the backend servers of an example cross-region internal proxy Network Load Balancer. For your instance group, you can define an HTTP service and map a port name to the relevant port. The backend service of the load balancer forwards traffic to the named ports. Traffic from clients is load balanced to backend servers. For demonstration purposes, backends serve their own hostnames.
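If you want to map a port name such as http to port 80 on the instance groups (as the console steps in this section do), one way is the set-named-ports command. This is a sketch that assumes the instance groups created later in this section:

# Sketch: map the port name "http" to port 80 on each instance group
gcloud compute instance-groups set-named-ports gl4-ilb-miga \
    --named-ports=http:80 \
    --zone=ZONE_A

gcloud compute instance-groups set-named-ports gl4-ilb-migb \
    --named-ports=http:80 \
    --zone=ZONE_B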
Console
In the Google Cloud console, go to the Instance templates page.

- Click Create instance template.
- For Name, enter gil4-backendeast1-template.
- Ensure that the Boot disk is set to a Debian image, such as Debian GNU/Linux 12 (bookworm). These instructions use commands that are only available on Debian, such as apt-get.
- Click Advanced options.
- Click Networking and configure the following fields:
  - For Network tags, enter allow-ssh and load-balanced-backend.
  - For Network interfaces, select the following:
    - Network: NETWORK
    - Subnet: SUBNET_B
- Click Management. Enter the following script into the Startup script field.

#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://169.254.169.254/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
systemctl restart apache2

- Click Create.
- Click Create instance template.
- For Name, enter gil4-backendwest1-template.
- Ensure that the Boot disk is set to a Debian image, such as Debian GNU/Linux 12 (bookworm). These instructions use commands that are only available on Debian, such as apt-get.
- Click Advanced options.
- Click Networking and configure the following fields:
  - For Network tags, enter allow-ssh and load-balanced-backend.
  - For Network interfaces, select the following:
    - Network: NETWORK
    - Subnet: SUBNET_A
- Click Management. Enter the following script into the Startup script field.

#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://169.254.169.254/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
systemctl restart apache2

- Click Create.
In the Google Cloud console, go to the Instance groups page.

- Click Create instance group.
- Select New managed instance group (stateless). For more information, see Stateless or stateful MIGs.
- For Name, enter gl4-ilb-miga.
- For Location, select Single zone.
- For Region, select REGION_A.
- For Zone, select ZONE_A.
- For Instance template, select gil4-backendwest1-template.
- Specify the number of instances that you want to create in the group.

  For this example, specify the following options under Autoscaling:

  - For Autoscaling mode, select Off:do not autoscale.
  - For Maximum number of instances, enter 2.

  Optionally, in the Autoscaling section of the UI, you can configure the instance group to automatically add or remove instances based on instance CPU usage.

- Click Create.
- Click Create instance group.
- Select New managed instance group (stateless). For more information, see Stateless or stateful MIGs.
- For Name, enter gl4-ilb-migb.
- For Location, select Single zone.
- For Region, select REGION_B.
- For Zone, select ZONE_B.
- For Instance template, select gil4-backendeast1-template.
- Specify the number of instances that you want to create in the group.

  For this example, specify the following options under Autoscaling:

  - For Autoscaling mode, select Off:do not autoscale.
  - For Maximum number of instances, enter 2.

  Optionally, in the Autoscaling section of the UI, you can configure the instance group to automatically add or remove instances based on instance CPU usage.

- Click Create.
gcloud
The gcloud CLI instructions in this guide assume that you are using Cloud Shell or another environment with bash installed.
Create VM instance templates with an HTTP server by using the gcloud compute instance-templates create command.

gcloud compute instance-templates create gil4-backendwest1-template \
    --region=REGION_A \
    --network=NETWORK \
    --subnet=SUBNET_A \
    --tags=allow-ssh,load-balanced-backend \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --metadata=startup-script='#! /bin/bash
    apt-get update
    apt-get install apache2 -y
    a2ensite default-ssl
    a2enmod ssl
    vm_hostname="$(curl -H "Metadata-Flavor:Google" \
    http://169.254.169.254/computeMetadata/v1/instance/name)"
    echo "Page served from: $vm_hostname" | \
    tee /var/www/html/index.html
    systemctl restart apache2'

gcloud compute instance-templates create gil4-backendeast1-template \
    --region=REGION_B \
    --network=NETWORK \
    --subnet=SUBNET_B \
    --tags=allow-ssh,load-balanced-backend \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --metadata=startup-script='#! /bin/bash
    apt-get update
    apt-get install apache2 -y
    a2ensite default-ssl
    a2enmod ssl
    vm_hostname="$(curl -H "Metadata-Flavor:Google" \
    http://169.254.169.254/computeMetadata/v1/instance/name)"
    echo "Page served from: $vm_hostname" | \
    tee /var/www/html/index.html
    systemctl restart apache2'
Create a managed instance group in each zone with the gcloud compute instance-groups managed create command.

gcloud compute instance-groups managed create gl4-ilb-miga \
    --zone=ZONE_A \
    --size=2 \
    --template=gil4-backendwest1-template

gcloud compute instance-groups managed create gl4-ilb-migb \
    --zone=ZONE_B \
    --size=2 \
    --template=gil4-backendeast1-template
API
Create the instance templates with the instanceTemplates.insert method, replacing PROJECT_ID with your project ID.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/instanceTemplates

{
  "name": "gil4-backendwest1-template",
  "properties": {
    "machineType": "e2-standard-2",
    "tags": {
      "items": [ "allow-ssh", "load-balanced-backend" ]
    },
    "metadata": {
      "kind": "compute#metadata",
      "items": [
        {
          "key": "startup-script",
          "value": "#! /bin/bash\napt-get update\napt-get install apache2 -y\na2ensite default-ssl\na2enmod ssl\nvm_hostname=\"$(curl -H \"Metadata-Flavor:Google\" \\\nhttp://169.254.169.254/computeMetadata/v1/instance/name)\"\necho \"Page served from: $vm_hostname\" | \\\ntee /var/www/html/index.html\nsystemctl restart apache2"
        }
      ]
    },
    "networkInterfaces": [
      {
        "network": "projects/PROJECT_ID/global/networks/NETWORK",
        "subnetwork": "regions/REGION_A/subnetworks/SUBNET_A",
        "accessConfigs": [
          {
            "type": "ONE_TO_ONE_NAT"
          }
        ]
      }
    ],
    "disks": [
      {
        "index": 0,
        "boot": true,
        "initializeParams": {
          "sourceImage": "projects/debian-cloud/global/images/family/debian-12"
        },
        "autoDelete": true
      }
    ]
  }
}

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/instanceTemplates

{
  "name": "gil4-backendeast1-template",
  "properties": {
    "machineType": "e2-standard-2",
    "tags": {
      "items": [ "allow-ssh", "load-balanced-backend" ]
    },
    "metadata": {
      "kind": "compute#metadata",
      "items": [
        {
          "key": "startup-script",
          "value": "#! /bin/bash\napt-get update\napt-get install apache2 -y\na2ensite default-ssl\na2enmod ssl\nvm_hostname=\"$(curl -H \"Metadata-Flavor:Google\" \\\nhttp://169.254.169.254/computeMetadata/v1/instance/name)\"\necho \"Page served from: $vm_hostname\" | \\\ntee /var/www/html/index.html\nsystemctl restart apache2"
        }
      ]
    },
    "networkInterfaces": [
      {
        "network": "projects/PROJECT_ID/global/networks/NETWORK",
        "subnetwork": "regions/REGION_B/subnetworks/SUBNET_B",
        "accessConfigs": [
          {
            "type": "ONE_TO_ONE_NAT"
          }
        ]
      }
    ],
    "disks": [
      {
        "index": 0,
        "boot": true,
        "initializeParams": {
          "sourceImage": "projects/debian-cloud/global/images/family/debian-12"
        },
        "autoDelete": true
      }
    ]
  }
}

Create a managed instance group in each zone with the instanceGroupManagers.insert method, replacing PROJECT_ID with your project ID.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE_A/instanceGroupManagers

{
  "name": "gl4-ilb-miga",
  "zone": "projects/PROJECT_ID/zones/ZONE_A",
  "instanceTemplate": "projects/PROJECT_ID/global/instanceTemplates/gil4-backendwest1-template",
  "baseInstanceName": "gl4-ilb-miga",
  "targetSize": 2
}

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE_B/instanceGroupManagers

{
  "name": "gl4-ilb-migb",
  "zone": "projects/PROJECT_ID/zones/ZONE_B",
  "instanceTemplate": "projects/PROJECT_ID/global/instanceTemplates/gil4-backendeast1-template",
  "baseInstanceName": "gl4-ilb-migb",
  "targetSize": 2
}

Configure the load balancer
This example shows you how to create the following cross-region internal proxy Network Load Balancer resources:
- A global TCP health check.
- A global backend service with the same MIGs as backends.
- A global target proxy.
- Two global forwarding rules with regional IP addresses. For the forwarding rule's IP address, use the SUBNET_A or SUBNET_B IP address range. If you try to use the proxy-only subnet, forwarding rule creation fails. An optional sketch for reserving these addresses follows this list.
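If you prefer to reserve the frontend IP addresses before you create the forwarding rules, one optional approach is to reserve them as static internal addresses in the backend subnets. The address names gil4-ip-a and gil4-ip-b are hypothetical:

# Optional sketch: reserve the example frontend addresses ahead of time
gcloud compute addresses create gil4-ip-a \
    --region=REGION_A \
    --subnet=SUBNET_A \
    --addresses=10.1.2.99

gcloud compute addresses create gil4-ip-b \
    --region=REGION_B \
    --subnet=SUBNET_B \
    --addresses=10.1.3.99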
Proxy availability
Sometimes Google Cloud regions don't have enough proxy capacity for a new load balancer. If this happens, the Google Cloud console provides a proxy availability warning message when you are creating your load balancer. To resolve this issue, you can do one of the following:
- Select a different region for your load balancer. This can be a practical option if you have backends in another region.
- Select a VPC network that already has an allocated proxy-only subnet.
- Wait for the capacity issue to be resolved.
Console
Start your configuration
In the Google Cloud console, go to the Load balancing page.

- Click Create load balancer.
- For Type of load balancer, select Network Load Balancer (TCP/UDP/SSL) and click Next.
- For Proxy or passthrough, select Proxy load balancer and click Next.
- For Public facing or internal, select Internal and click Next.
- For Cross-region or single region deployment, select Best for cross-region workloads and click Next.
- Click Configure.
Basic configuration
- Provide a Name for the load balancer.
- For Network, select NETWORK.
Configure the frontend with two forwarding rules
- Click Frontend configuration.
- Provide a Name for the forwarding rule.
- In the Subnetwork region list, select REGION_A.

  Reserve a proxy-only subnet. If you already configured the proxy-only subnet, the Reserve subnet button isn't displayed. You can continue with the next steps.

- In the Subnetwork list, select SUBNET_A.
- In the IP address list, click Create IP address. The Reserve a static internal IP address page opens.
  - Provide a Name for the static IP address.
  - In the Static IP address list, select Let me choose.
  - In the Custom IP address field, enter 10.1.2.99.
  - Select Reserve.
- Click Done.
- To add the second forwarding rule, click Add frontend IP and port.
- Provide a Name for the forwarding rule.
- In the Subnetwork region list, select REGION_B.

  Reserve a proxy-only subnet. Because you have already configured the proxy-only subnet, the Reserve subnet button isn't displayed. You can continue with the next steps.

- In the Subnetwork list, select SUBNET_B.
- In the IP address list, click Create IP address. The Reserve a static internal IP address page opens.
  - Provide a Name for the static IP address.
  - In the Static IP address list, select Let me choose.
  - In the Custom IP address field, enter 10.1.3.99.
  - Select Reserve.
- Click Done.
- Click Backend configuration.
- In the Create or select backend services list, click Create a backend service.
- Provide a Name for the backend service.
- For Protocol, select TCP.
- For Named Port, enter http.
- In the Backend type list, select Instance group.
- In the Health check list, click Create a health check, and then enter the following information:
  - In the Name field, enter global-http-health-check.
  - In the Protocol list, select HTTP.
  - In the Port field, enter 80.
  - Click Create.
- In the New backend section:
  - In the Instance group list, select gl4-ilb-miga in REGION_A.
  - Set Port numbers to 80.
  - For the Balancing mode, select Connection.
  - Click Done.
- To add another backend, click Add backend.
  - In the Instance group list, select gl4-ilb-migb in REGION_B.
  - Set Port numbers to 80.
  - Click Done.
Review the configuration
- Click Review and finalize.
- Review your load balancer configuration settings.
- Click Create.
gcloud
Define the TCP health check with the gcloud compute health-checks create tcp command.

gcloud compute health-checks create tcp global-health-check \
    --use-serving-port \
    --global
Define the backend service with the gcloud compute backend-services create command.

gcloud compute backend-services create gl4-gilb-backend-service \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --protocol=TCP \
    --enable-logging \
    --logging-sample-rate=1.0 \
    --health-checks=global-health-check \
    --global-health-checks \
    --global
Add backends to the backend service with the gcloud compute backend-services add-backend command.

gcloud compute backend-services add-backend gl4-gilb-backend-service \
    --balancing-mode=CONNECTION \
    --max-connections=50 \
    --instance-group=gl4-ilb-miga \
    --instance-group-zone=ZONE_A \
    --global

gcloud compute backend-services add-backend gl4-gilb-backend-service \
    --balancing-mode=CONNECTION \
    --max-connections=50 \
    --instance-group=gl4-ilb-migb \
    --instance-group-zone=ZONE_B \
    --global
Create the target proxy with the gcloud compute target-tcp-proxies create command.

gcloud compute target-tcp-proxies create gilb-tcp-proxy \
    --backend-service=gl4-gilb-backend-service \
    --global
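If you plan to use the PROXY protocol described in Additional configuration options, you can set the header behavior at creation time instead of updating the proxy later. The following is a variant of the preceding command, not an additional step:

# Variant: create the target proxy with PROXY protocol version 1 enabled
gcloud compute target-tcp-proxies create gilb-tcp-proxy \
    --backend-service=gl4-gilb-backend-service \
    --proxy-header=PROXY_V1 \
    --global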
Create two forwarding rules, one with a VIP (10.1.2.99) in REGION_A and another one with a VIP (10.1.3.99) in REGION_B. For more information, see Reserve a static internal IPv4 address.

For custom networks, you must reference the subnet in the forwarding rule. Note that this is the VM subnet, not the proxy subnet.

Use the gcloud compute forwarding-rules create command with the correct flags.

gcloud compute forwarding-rules create gil4forwarding-rule-a \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=NETWORK \
    --subnet=SUBNET_A \
    --subnet-region=REGION_A \
    --address=10.1.2.99 \
    --ports=80 \
    --target-tcp-proxy=gilb-tcp-proxy \
    --global

gcloud compute forwarding-rules create gil4forwarding-rule-b \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=NETWORK \
    --subnet=SUBNET_B \
    --subnet-region=REGION_B \
    --address=10.1.3.99 \
    --ports=80 \
    --target-tcp-proxy=gilb-tcp-proxy \
    --global
API
Create the health check by making a POST request to the healthChecks.insert method, replacing PROJECT_ID with your project ID.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/healthChecks

{
  "name": "global-health-check",
  "type": "TCP",
  "tcpHealthCheck": {
    "portSpecification": "USE_SERVING_PORT"
  }
}

Create the global backend service by making a POST request to the backendServices.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendServices

{
  "name": "gl4-gilb-backend-service",
  "backends": [
    {
      "group": "projects/PROJECT_ID/zones/ZONE_A/instanceGroups/gl4-ilb-miga",
      "balancingMode": "CONNECTION",
      "maxConnections": 50
    },
    {
      "group": "projects/PROJECT_ID/zones/ZONE_B/instanceGroups/gl4-ilb-migb",
      "balancingMode": "CONNECTION",
      "maxConnections": 50
    }
  ],
  "healthChecks": [
    "projects/PROJECT_ID/global/healthChecks/global-health-check"
  ],
  "loadBalancingScheme": "INTERNAL_MANAGED"
}

Create the target TCP proxy by making a POST request to the targetTcpProxies.insert method, replacing PROJECT_ID with your project ID.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/targetTcpProxies

{
  "name": "l4-ilb-proxy",
  "service": "projects/PROJECT_ID/global/backendServices/gl4-gilb-backend-service"
}

Create the forwarding rules by making a POST request to the globalForwardingRules.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/forwardingRules

{
  "name": "gil4forwarding-rule-a",
  "IPAddress": "10.1.2.99",
  "IPProtocol": "TCP",
  "portRange": "80-80",
  "target": "projects/PROJECT_ID/global/targetTcpProxies/l4-ilb-proxy",
  "loadBalancingScheme": "INTERNAL_MANAGED",
  "subnetwork": "projects/PROJECT_ID/regions/REGION_A/subnetworks/SUBNET_A",
  "network": "projects/PROJECT_ID/global/networks/NETWORK",
  "networkTier": "PREMIUM"
}

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/forwardingRules

{
  "name": "gil4forwarding-rule-b",
  "IPAddress": "10.1.3.99",
  "IPProtocol": "TCP",
  "portRange": "80-80",
  "target": "projects/PROJECT_ID/global/targetTcpProxies/l4-ilb-proxy",
  "loadBalancingScheme": "INTERNAL_MANAGED",
  "subnetwork": "projects/PROJECT_ID/regions/REGION_B/subnetworks/SUBNET_B",
  "network": "projects/PROJECT_ID/global/networks/NETWORK",
  "networkTier": "PREMIUM"
}

Test the load balancer
Create a VM instance to test connectivity
Create a client VM in each of the REGION_A and REGION_B regions:

gcloud compute instances create l4-ilb-client-a \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --network=NETWORK \
    --subnet=SUBNET_A \
    --zone=ZONE_A \
    --tags=allow-ssh

gcloud compute instances create l4-ilb-client-b \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --network=NETWORK \
    --subnet=SUBNET_B \
    --zone=ZONE_B \
    --tags=allow-ssh
Use SSH to connect to each client instance.
gcloud compute ssh l4-ilb-client-a --zone=ZONE_A
gcloud compute ssh l4-ilb-client-b --zone=ZONE_B
Verify that the IP address is serving its hostname
Verify that the client VM can reach both IP addresses. The command should succeed and return the name of the backend VM that served the request:
curl 10.1.2.99
curl 10.1.3.99
Test failover
Verify failover to backends in the REGION_A region when backends in the REGION_B region are unhealthy or unreachable. To simulate failover, remove all backends from REGION_B:

gcloud compute backend-services remove-backend gl4-gilb-backend-service \
    --instance-group=gl4-ilb-migb \
    --instance-group-zone=ZONE_B \
    --global
Connect using SSH to a client VM in REGION_B.

gcloud compute ssh l4-ilb-client-b \
    --zone=ZONE_B
Send requests to the load balanced IP address in the REGION_B region. The command output shows responses from backend VMs in REGION_A:

{
RESULTS=
for i in {1..100}
do
  RESULTS="$RESULTS:$(curl 10.1.3.99)"
done
echo "***"
echo "*** Results of load-balancing to 10.1.3.99: "
echo "***"
echo "$RESULTS" | tr ':' '\n' | grep -Ev "^$" | sort | uniq -c
echo
}
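When you finish testing, you can restore the original setup by re-adding the backend that you removed, using the same add-backend command from the load balancer configuration:

gcloud compute backend-services add-backend gl4-gilb-backend-service \
    --balancing-mode=CONNECTION \
    --max-connections=50 \
    --instance-group=gl4-ilb-migb \
    --instance-group-zone=ZONE_B \
    --global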
Additional configuration options
This section expands on the configuration example to provide alternative andadditional configuration options. All of the tasks are optional. You canperform them in any order.
PROXY protocol for retaining client connection information
The internal proxy Network Load Balancer terminates TCP connections from the client and creates new connections to the VM instances. By default, the original client IP and port information is not preserved.
To preserve and send the original connection information to your instances, enable PROXY protocol (version 1). This protocol sends an additional header that contains the source IP address, destination IP address, and port numbers to the instance as a part of the request.
Make sure that the internal proxy Network Load Balancer's backend instances are running HTTP or HTTPS servers that support PROXY protocol headers. If the HTTP or HTTPS servers are not configured to support PROXY protocol headers, the backend instances return empty responses. For example, the PROXY protocol doesn't work with the Apache HTTP Server software. You can use different web server software, such as Nginx.
If you set the PROXY protocol for user traffic, you must also set it for your health checks. If you are checking health and serving content on the same port, set the health check's --proxy-header to match your load balancer setting.
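For example, with the TCP health check created in the gcloud steps on this page, one way to align the health check with a PROXY_V1 frontend is the update command; treat this as a sketch:

# Sketch: make the health check send a PROXY protocol v1 header
gcloud compute health-checks update tcp global-health-check \
    --proxy-header=PROXY_V1 \
    --global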
The PROXY protocol header is typically a single line of user-readable text in the following format:
PROXY TCP4 <client IP> <load balancing IP> <source port> <dest port>\r\n
Following is an example of the PROXY protocol:
PROXY TCP4 192.0.2.1 198.51.100.1 15221 110\r\n
In the preceding example, the client IP is 192.0.2.1, the load balancing IP is 198.51.100.1, the client port is 15221, and the destination port is 110.
In cases where the client IP is not known, the load balancer generates a PROXY protocol header in the following format:
PROXY UNKNOWN\r\n
Update PROXY protocol header for target TCP proxy
The example load balancer setup on this page shows you how to enable the PROXY protocol header while creating the internal proxy Network Load Balancer. Use these steps to change the PROXY protocol header for an existing target TCP proxy.
Console
In the Google Cloud console, go to the Load balancing page.

- Click Edit for your load balancer.
- Click Frontend configuration.
- Change the value of the Proxy protocol field to On.
- Click Update to save your changes.
gcloud
In the following command, edit the --proxy-header field and set it to either NONE or PROXY_V1 depending on your requirement.

gcloud compute target-tcp-proxies update int-tcp-target-proxy \
    --proxy-header=[NONE | PROXY_V1]
Use the same IP address between multiple internal forwarding rules
For multiple internal forwarding rules to share the same internal IP address, you must reserve the IP address and set its --purpose flag to SHARED_LOADBALANCER_VIP.
gcloud
gcloud compute addresses create SHARED_IP_ADDRESS_NAME \
    --region=REGION \
    --subnet=SUBNET_NAME \
    --purpose=SHARED_LOADBALANCER_VIP
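A forwarding rule can then reference the reserved address. The following sketch reuses the flags from the forwarding rule commands earlier on this page; the rule name and the full address path are hypothetical:

# Sketch: create a forwarding rule that uses the shared internal address
gcloud compute forwarding-rules create gil4forwarding-rule-shared \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=NETWORK \
    --subnet=SUBNET_A \
    --subnet-region=REGION_A \
    --address=projects/PROJECT_ID/regions/REGION_A/addresses/SHARED_IP_ADDRESS_NAME \
    --ports=80 \
    --target-tcp-proxy=gilb-tcp-proxy \
    --global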
Enable session affinity
The example configuration creates a backend service without session affinity.
These procedures show you how to update the backend service for the example load balancer so that the backend service uses client IP affinity.
When client IP affinity is enabled, the load balancer directs a particularclient's requests to the same backend VM based on a hash created from theclient's IP address and the load balancer's IP address (the internal IP addressof an internal forwarding rule).
Console
To enable client IP session affinity:
- In the Google Cloud console, go to the Load balancing page.
- Click Backends.
- Click the name of the backend service you created for this example and click Edit.
- On the Backend service details page, click Advanced configuration.
- Under Session affinity, select Client IP from the menu.
- Click Update.
gcloud
Use the following gcloud command to update the BACKEND_SERVICE backend service, specifying client IP session affinity:

gcloud compute backend-services update BACKEND_SERVICE \
    --global \
    --session-affinity=CLIENT_IP
Enable connection draining
You can enable connection draining on backend services to ensure minimal interruption to your users when an instance that is serving traffic is terminated, removed manually, or removed by an autoscaler. To learn more about connection draining, read the Enabling connection draining documentation.
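For example, to give in-flight connections up to 300 seconds to complete before a backend is removed, a sketch using the example backend service from this page:

# Sketch: allow up to 300 seconds of connection draining
gcloud compute backend-services update gl4-gilb-backend-service \
    --global \
    --connection-draining-timeout=300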
What's next
- Convert proxy Network Load Balancer to IPv6
- Internal proxy Network Load Balancer overview
- Proxy-only subnets for Envoy-based load balancers
- Clean up a load balancing setup