Set up a regional internal Application Load Balancer with VM instance group backends

This document provides instructions for configuring a regional internal Application Load Balancer for your services that run on Compute Engine VMs.

To configure load balancing for your services running in Google Kubernetes Engine (GKE) Pods, see Container-native load balancing through standalone NEGs and the Attaching an internal Application Load Balancer to standalone NEGs section.

To configure load balancing to access Google APIs and services using Private Service Connect, see Access regional Google APIs through backends.

The setup for internal Application Load Balancers has two parts:

  • Perform prerequisite tasks, such as ensuring that required accounts have the correct permissions and preparing the Virtual Private Cloud (VPC) network.
  • Set up the load balancer resources.

Before following this guide, familiarize yourself with the following:

Permissions

To follow this guide, you must be able to create instances and modify a network in a project. You must be either a project owner or editor, or you must have all of the following Compute Engine IAM roles.

  • Create networks, subnets, and load balancer components: Compute Network Admin (roles/compute.networkAdmin)
  • Add and remove firewall rules: Compute Security Admin (roles/compute.securityAdmin)
  • Create instances: Compute Instance Admin (roles/compute.instanceAdmin.v1)
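If you need to grant one of these roles, a project owner can do so with the gcloud CLI. A minimal sketch; the member email below is illustrative, and PROJECT_ID stands for your project ID:

```shell
# Grant the Compute Network Admin role (example member; substitute your own).
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:example-user@example.com" \
    --role="roles/compute.networkAdmin"
```

Repeat with roles/compute.securityAdmin and roles/compute.instanceAdmin.v1 as needed.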

For more information, see the following guides:

Setup overview

You can configure an internal Application Load Balancer as described in the following high-level configuration flow. The numbered steps refer to the numbers in the diagram.

Internal Application Load Balancer numbered components (diagram).

As shown in the diagram, this example creates an internal Application Load Balancer in a VPC network in region us-west1, with one backend service and two backend groups.

The diagram shows the following:

  1. A VPC network with two subnets:

    • One subnet is used for backends (instance groups) and the forwarding rule. Its primary IP address range is 10.1.2.0/24.

    • One subnet is a proxy-only subnet in the us-west1 region. You must create one proxy-only subnet in each region of a VPC network where you use internal Application Load Balancers. The region's proxy-only subnet is shared among all internal Application Load Balancers in the region. Source addresses of packets sent from the internal Application Load Balancer to your service's backends are allocated from the proxy-only subnet. In this example, the proxy-only subnet for the region has a primary IP address range of 10.129.0.0/23, which is the recommended subnet size. For more information, see Proxy-only subnets for Envoy-based load balancers.

  2. Two firewall rules:

    • A firewall rule that permits proxy-only subnet traffic flows in your network. This means adding one rule that allows TCP traffic on ports 80, 443, and 8080 from 10.129.0.0/23 (the range of the proxy-only subnet in this example).
    • Another firewall rule for the health check probes.
  3. Backend Compute Engine VM instances.

  4. Managed or unmanaged instance groups for Compute Engine VM deployments.

    In each zone, you can have a combination of backend group types based onthe requirements of your deployment.

    Note: This setup shows you how to load balance requests only to VM instance group backends. To learn how to load balance requests to GKE Pods, see GKE Ingress for Application Load Balancers.
  5. A regional health check that reports the readiness of your backends.

  6. A regional backend service that monitors the usage and health of backends.

  7. A regional URL map that parses the URL of a request and forwards requests to specific backend services based on the host and path of the request URL.

  8. A regional target HTTP or HTTPS proxy that receives a request from the user and forwards it to the URL map. For HTTPS, configure a regional SSL certificate resource. The target proxy uses the SSL certificate to decrypt SSL traffic if you configure HTTPS load balancing. The target proxy can forward traffic to your instances by using HTTP or HTTPS.

  9. A forwarding rule that has the internal IP address of your load balancer, to forward each incoming request to the target proxy.

    The internal IP address associated with the forwarding rule can come from any subnet in the same network and region. Note the following conditions:

    • The IP address can (but does not need to) come from the same subnet as the backend instance groups.
    • The IP address must not come from a reserved proxy-only subnet that has its --purpose flag set to REGIONAL_MANAGED_PROXY.
    • If you want to share the internal IP address with multiple forwarding rules, set the IP address's --purpose flag to SHARED_LOADBALANCER_VIP.

    The example on this page uses a reserved internal IP address for the regional internal Application Load Balancer's forwarding rule, rather than allowing an ephemeral internal IP address to be allocated. As a best practice, we recommend reserving IP addresses for forwarding rules.

Configure the network and subnets

You need a VPC network with two subnets: one for the load balancer's backends and the other for the load balancer's proxies. An internal Application Load Balancer is regional. Traffic within the VPC network is routed to the load balancer if the traffic's source is in a subnet in the same region as the load balancer.

This example uses the following VPC network, region, and subnets:

  • Network. The network is a custom-mode VPC network named lb-network.

  • Subnet for backends. A subnet named backend-subnet in the us-west1 region uses 10.1.2.0/24 for its primary IP range.

  • Subnet for proxies. A subnet named proxy-only-subnet in the us-west1 region uses 10.129.0.0/23 for its primary IP range.

To demonstrate global access, this example also creates a second test client VM in a different region and subnet:

  • Region: europe-west1
  • Subnet: europe-subnet, with primary IP address range 10.3.4.0/24
Note: You can change the name of the network, the region, and the parameters for the subnets; however, subsequent steps in this guide use the network, region, and subnet parameters as outlined here.


Console

  1. In the Google Cloud console, go to the VPC networks page.

    Go to VPC networks

  2. Click Create VPC network.

  3. For Name, enter lb-network.

  4. In the Subnets section, set the Subnet creation mode to Custom.

  5. Create a subnet for the load balancer's backends. In the New subnet section, enter the following information:

    • Name: backend-subnet
    • Region: us-west1
    • IP address range: 10.1.2.0/24
  6. Click Done.

  7. Click Add subnet.

  8. Create a subnet to demonstrate global access. In the New subnet section, enter the following information:

    • Name: europe-subnet
    • Region: europe-west1
    • IP address range: 10.3.4.0/24
  9. Click Done.

  10. Click Create.

gcloud

  1. Create the custom VPC network with the gcloud compute networks create command:

    gcloud compute networks create lb-network --subnet-mode=custom
  2. Create a subnet in the lb-network network in the us-west1 region with the gcloud compute networks subnets create command:

    gcloud compute networks subnets create backend-subnet \
        --network=lb-network \
        --range=10.1.2.0/24 \
        --region=us-west1
  3. Create a subnet in the lb-network network in the europe-west1 region with the gcloud compute networks subnets create command:

    gcloud compute networks subnets create europe-subnet \
        --network=lb-network \
        --range=10.3.4.0/24 \
        --region=europe-west1

API

Make a POST request to the networks.insert method. Replace PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks

{
  "routingConfig": {
    "routingMode": "REGIONAL"
  },
  "name": "lb-network",
  "autoCreateSubnetworks": false
}

Make a POST request to the subnetworks.insert method. Replace PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks

{
  "name": "backend-subnet",
  "network": "projects/PROJECT_ID/global/networks/lb-network",
  "ipCidrRange": "10.1.2.0/24",
  "region": "projects/PROJECT_ID/regions/us-west1"
}

Make a POST request to the subnetworks.insert method. Replace PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/europe-west1/subnetworks

{
  "name": "europe-subnet",
  "network": "projects/PROJECT_ID/global/networks/lb-network",
  "ipCidrRange": "10.3.4.0/24",
  "region": "projects/PROJECT_ID/regions/europe-west1"
}

Configure the proxy-only subnet

This proxy-only subnet is for all regional Envoy-based load balancers in the us-west1 region of the lb-network network.

Important: Don't try to assign addresses from the proxy-only subnet to your load balancer's forwarding rule or backends. You assign the forwarding rule's IP address and the backend instance IP addresses from a different subnet range (or ranges), not this one. Google Cloud reserves this subnet range for Google Cloud-managed proxies.

Console

If you're using the Google Cloud console, you can wait and create the proxy-only subnet later on the Load balancing page.

If you want to create the proxy-only subnet now, use the following steps:

  1. In the Google Cloud console, go to the VPC networks page.

    Go to VPC networks

  2. Click the name of the VPC network: lb-network.

  3. Click Add subnet.

  4. For Name, enter proxy-only-subnet.

  5. For Region, select us-west1.

  6. Set Purpose to Regional Managed Proxy.

  7. For IP address range, enter 10.129.0.0/23.

  8. Click Add.

gcloud

Create the proxy-only subnet with the gcloud compute networks subnets create command.

gcloud compute networks subnets create proxy-only-subnet \
    --purpose=REGIONAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=us-west1 \
    --network=lb-network \
    --range=10.129.0.0/23

API

Create the proxy-only subnet with the subnetworks.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks

{
  "name": "proxy-only-subnet",
  "ipCidrRange": "10.129.0.0/23",
  "network": "projects/PROJECT_ID/global/networks/lb-network",
  "region": "projects/PROJECT_ID/regions/us-west1",
  "purpose": "REGIONAL_MANAGED_PROXY",
  "role": "ACTIVE"
}

Configure firewall rules

This example uses the following firewall rules:

  • fw-allow-ssh. An ingress rule, applicable to the instances being load balanced, that allows incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP range for this rule; for example, you can specify just the IP ranges of the system from which you initiate SSH sessions. This example uses the target tag allow-ssh to identify the VMs that the firewall rule applies to.

  • fw-allow-health-check. An ingress rule, applicable to the instances being load balanced, that allows all TCP traffic from the Google Cloud health checking systems (in 130.211.0.0/22 and 35.191.0.0/16). This example uses the target tag load-balanced-backend to identify the VMs that the firewall rule applies to.

  • fw-allow-proxies. An ingress rule, applicable to the instances being load balanced, that allows TCP traffic on ports 80, 443, and 8080 from the internal Application Load Balancer's managed proxies. This example uses the target tag load-balanced-backend to identify the VMs that the firewall rule applies to.

Without these firewall rules, the default deny ingress rule blocks incoming traffic to the backend instances.

The target tags define the backend instances. Without the target tags, the firewall rules apply to all of your backend instances in the VPC network. When you create the backend VMs, make sure to include the specified target tags, as shown in Create a managed VM instance group backend.

Console

  1. In the Google Cloud console, go to the Firewall policies page.

    Go to Firewall policies

  2. Click Create firewall rule to create the rule to allow incoming SSH connections:

    • Name: fw-allow-ssh
    • Network: lb-network
    • Direction of traffic: Ingress
    • Action on match: Allow
    • Targets: Specified target tags
    • Target tags: allow-ssh
    • Source filter: IPv4 ranges
    • Source IPv4 ranges: 0.0.0.0/0
    • Protocols and ports:
      • Choose Specified protocols and ports.
      • Select the TCP checkbox, and then enter 22 for the port number.
  3. Click Create.

  4. Click Create firewall rule a second time to create the rule to allow Google Cloud health checks:

    • Name: fw-allow-health-check
    • Network: lb-network
    • Direction of traffic: Ingress
    • Action on match: Allow
    • Targets: Specified target tags
    • Target tags: load-balanced-backend
    • Source filter: IPv4 ranges
    • Source IPv4 ranges: 130.211.0.0/22 and 35.191.0.0/16
    • Protocols and ports:
      • Choose Specified protocols and ports.
      • Select the TCP checkbox, and then enter 80 for the port number.
        As a best practice, limit this rule to just the protocols and ports that match those used by your health check. If you use tcp:80 for the protocol and port, Google Cloud can use HTTP on port 80 to contact your VMs, but it cannot use HTTPS on port 443 to contact them.
  5. Click Create.

  6. Click Create firewall rule a third time to create the rule to allow the load balancer's proxy servers to connect to the backends:

    • Name: fw-allow-proxies
    • Network: lb-network
    • Direction of traffic: Ingress
    • Action on match: Allow
    • Targets: Specified target tags
    • Target tags: load-balanced-backend
    • Source filter: IPv4 ranges
    • Source IPv4 ranges: 10.129.0.0/23
    • Protocols and ports:
      • Choose Specified protocols and ports.
      • Select the TCP checkbox, and then enter 80, 443, 8080 for the port numbers.
  7. Click Create.

gcloud

  1. Create the fw-allow-ssh firewall rule to allow SSH connectivity to VMs with the network tag allow-ssh. When you omit source-ranges, Google Cloud interprets the rule to mean any source.

    gcloud compute firewall-rules create fw-allow-ssh \
        --network=lb-network \
        --action=allow \
        --direction=ingress \
        --target-tags=allow-ssh \
        --rules=tcp:22
  2. Create the fw-allow-health-check rule to allow Google Cloud health checks. This example allows all TCP traffic from health check probers; however, you can configure a narrower set of ports to meet your needs.

    gcloud compute firewall-rules create fw-allow-health-check \
        --network=lb-network \
        --action=allow \
        --direction=ingress \
        --source-ranges=130.211.0.0/22,35.191.0.0/16 \
        --target-tags=load-balanced-backend \
        --rules=tcp
  3. Create the fw-allow-proxies rule to allow the internal Application Load Balancer's proxies to connect to your backends. Set source-ranges to the allocated range of your proxy-only subnet (for example, 10.129.0.0/23).

    gcloud compute firewall-rules create fw-allow-proxies \
        --network=lb-network \
        --action=allow \
        --direction=ingress \
        --source-ranges=source-range \
        --target-tags=load-balanced-backend \
        --rules=tcp:80,tcp:443,tcp:8080

API

Create the fw-allow-ssh firewall rule by making a POST request to the firewalls.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "name": "fw-allow-ssh",
  "network": "projects/PROJECT_ID/global/networks/lb-network",
  "sourceRanges": [
    "0.0.0.0/0"
  ],
  "targetTags": [
    "allow-ssh"
  ],
  "allowed": [
    {
      "IPProtocol": "tcp",
      "ports": [
        "22"
      ]
    }
  ],
  "direction": "INGRESS"
}

Create the fw-allow-health-check firewall rule by making a POST request to the firewalls.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "name": "fw-allow-health-check",
  "network": "projects/PROJECT_ID/global/networks/lb-network",
  "sourceRanges": [
    "130.211.0.0/22",
    "35.191.0.0/16"
  ],
  "targetTags": [
    "load-balanced-backend"
  ],
  "allowed": [
    {
      "IPProtocol": "tcp"
    }
  ],
  "direction": "INGRESS"
}

Create the fw-allow-proxies firewall rule to allow TCP traffic from the proxy-only subnet by making a POST request to the firewalls.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "name": "fw-allow-proxies",
  "network": "projects/PROJECT_ID/global/networks/lb-network",
  "sourceRanges": [
    "10.129.0.0/23"
  ],
  "targetTags": [
    "load-balanced-backend"
  ],
  "allowed": [
    {
      "IPProtocol": "tcp",
      "ports": [
        "80"
      ]
    },
    {
      "IPProtocol": "tcp",
      "ports": [
        "443"
      ]
    },
    {
      "IPProtocol": "tcp",
      "ports": [
        "8080"
      ]
    }
  ],
  "direction": "INGRESS"
}

Reserve the load balancer's IP address

By default, one IP address is used for each forwarding rule. You can reserve a shared IP address, which lets you use the same IP address with multiple forwarding rules. However, if you want to publish the load balancer by using Private Service Connect, don't use a shared IP address for the forwarding rule.

For the forwarding rule's IP address, use the backend-subnet. If you try to use the proxy-only subnet, forwarding rule creation fails.

Console

You can reserve a standalone internal IP address using the Google Cloud console.

  1. Go to theVPC networks page.

    Go to VPC networks

  2. Click the name of the VPC network: lb-network.
  3. Click Static internal IP addresses, and then click Reserve static address.
  4. For Name, enter l7-ilb-ip-address.
  5. For the Subnet, select backend-subnet.
  6. If you want to specify which IP address to reserve, under Static IP address, select Let me choose, and then fill in a Custom IP address. Otherwise, the system automatically assigns an IP address in the subnet for you.
  7. If you want to use this IP address with multiple forwarding rules, under Purpose, choose Shared.
  8. Click Reserve to finish the process.

gcloud

  1. Using the gcloud CLI, run the gcloud compute addresses create command:

    gcloud compute addresses create l7-ilb-ip-address \
        --region=us-west1 \
        --subnet=backend-subnet

    If you want to use the same IP address with multiple forwarding rules, specify --purpose=SHARED_LOADBALANCER_VIP.

  2. Use the gcloud compute addresses describe command to view the allocated IP address:

    gcloud compute addresses describe l7-ilb-ip-address \
        --region=us-west1

Create a managed VM instance group backend

This section shows how to create an instance template and a managed instance group. The managed instance group provides VM instances running the backend servers of an example regional internal Application Load Balancer. For your instance group, you can define an HTTP service and map a port name to the relevant port. The backend service of the load balancer forwards traffic to the named ports. Traffic from clients is load balanced to backend servers. For demonstration purposes, backends serve their own hostnames.

Console

  1. Create an instance template. In the Google Cloud console, go to the Instance templates page.

    Go to Instance templates

    1. Click Create instance template.
    2. For Name, enter l7-ilb-backend-template.
    3. Ensure that the Boot disk is set to a Debian image, such as Debian GNU/Linux 12 (bookworm). These instructions use commands that are only available on Debian, such as apt-get.
    4. Click Advanced options.
    5. Click Networking and configure the following fields:
      1. For Network tags, enter allow-ssh and load-balanced-backend.
      2. For Network interfaces, select the following:
        • Network: lb-network
        • Subnet: backend-subnet
    6. Click Management. Enter the following script into the Startup script field.

      #! /bin/bash
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://metadata.google.internal/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | \
      tee /var/www/html/index.html
      systemctl restart apache2
    7. Click Create.

  2. Create a managed instance group. In the Google Cloud console, go to the Instance groups page.

    Go to Instance groups

    1. Click Create instance group.
    2. Select New managed instance group (stateless). For more information, see Stateful managed instance groups.
    3. For Name, enter l7-ilb-backend-example.
    4. For Location, select Single zone.
    5. For Region, select us-west1.
    6. For Zone, select us-west1-a.
    7. For Instance template, select l7-ilb-backend-template.
    8. Specify the number of instances that you want to create in the group.

      For this example, specify the following options under Autoscaling:

      • For Autoscaling mode, select Off: do not autoscale.
      • For Maximum number of instances, enter 2.

      Optionally, in the Autoscaling section of the UI, you can configure the instance group to automatically add or remove instances based on instance CPU usage.

    9. Click Create.

gcloud

The gcloud instructions in this guide assume that you are using Cloud Shell or another environment with bash installed.

  1. Create a VM instance template with an HTTP server by using the gcloud compute instance-templates create command.

     gcloud compute instance-templates create l7-ilb-backend-template \
         --region=us-west1 \
         --network=lb-network \
         --subnet=backend-subnet \
         --tags=allow-ssh,load-balanced-backend \
         --image-family=debian-12 \
         --image-project=debian-cloud \
         --metadata=startup-script='#! /bin/bash
         apt-get update
         apt-get install apache2 -y
         a2ensite default-ssl
         a2enmod ssl
         vm_hostname="$(curl -H "Metadata-Flavor:Google" \
         http://metadata.google.internal/computeMetadata/v1/instance/name)"
         echo "Page served from: $vm_hostname" | \
         tee /var/www/html/index.html
         systemctl restart apache2'
  2. Create a managed instance group in the zone with the gcloud compute instance-groups managed create command.

      gcloud compute instance-groups managed create l7-ilb-backend-example \
          --zone=us-west1-a \
          --size=2 \
          --template=l7-ilb-backend-template

API

Create the instance template with the instanceTemplates.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/instanceTemplates

{
  "name":"l7-ilb-backend-template",
  "properties":{
    "machineType":"e2-standard-2",
    "tags":{
      "items":[
        "allow-ssh",
        "load-balanced-backend"
      ]
    },
    "metadata":{
      "kind":"compute#metadata",
      "items":[
        {
          "key":"startup-script",
          "value":"#! /bin/bash\napt-get update\napt-get install apache2 -y\na2ensite default-ssl\na2enmod ssl\nvm_hostname=\"$(curl -H \"Metadata-Flavor:Google\" \\\nhttp://metadata.google.internal/computeMetadata/v1/instance/name)\"\necho \"Page served from: $vm_hostname\" | \\\ntee /var/www/html/index.html\nsystemctl restart apache2"
        }
      ]
    },
    "networkInterfaces":[
      {
        "network":"projects/PROJECT_ID/global/networks/lb-network",
        "subnetwork":"regions/us-west1/subnetworks/backend-subnet",
        "accessConfigs":[
          {
            "type":"ONE_TO_ONE_NAT"
          }
        ]
      }
    ],
    "disks":[
      {
        "index":0,
        "boot":true,
        "initializeParams":{
          "sourceImage":"projects/debian-cloud/global/images/family/debian-12"
        },
        "autoDelete":true
      }
    ]
  }
}

Create a managed instance group in each zone with the instanceGroupManagers.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/{zone}/instanceGroupManagers

{
  "name": "l7-ilb-backend-example",
  "zone": "projects/PROJECT_ID/zones/us-west1-a",
  "instanceTemplate": "projects/PROJECT_ID/global/instanceTemplates/l7-ilb-backend-template",
  "baseInstanceName": "l7-ilb-backend-example",
  "targetSize": 2
}
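The introduction to this section mentions named ports. The backend service in this example forwards to port 80 directly, but if you want the backend service to reference a port by name instead, you can define the mapping on the instance group. A sketch using the gcloud CLI; the http:80 name-to-port mapping is illustrative:

```shell
# Map the port name "http" to port 80 on the instance group (illustrative mapping).
gcloud compute instance-groups set-named-ports l7-ilb-backend-example \
    --named-ports=http:80 \
    --zone=us-west1-a
```

A backend service can then reference the name with --port-name=http instead of a numeric port.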

Configure the load balancer

This example shows you how to create the following regional internal Application Load Balancer resources:

  • HTTP health check
  • Backend service with a managed instance group as the backend
  • A URL map
    • Make sure to refer to a regional URL map if a region is defined for the target HTTP(S) proxy. A regional URL map routes requests to a regional backend service based on rules that you define for the host and path of an incoming URL. A regional URL map can be referenced by a regional target proxy in the same region only.
  • SSL certificate (for HTTPS)
  • Target proxy
  • Forwarding rule

Proxy availability

Sometimes Google Cloud regions don't have enough proxy capacity for a new load balancer. If this happens, the Google Cloud console provides a proxy availability warning message when you are creating your load balancer. To resolve this issue, you can do one of the following:

  • Select a different region for your load balancer. This can be a practical option if you have backends in another region.
  • Select a VPC network that already has an allocated proxy-only subnet.
  • Wait for the capacity issue to be resolved.

Console

Select the load balancer type

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. Click Create load balancer.
  3. For Type of load balancer, select Application Load Balancer (HTTP/HTTPS) and click Next.
  4. For Public facing or internal, select Internal and click Next.
  5. For Cross-region or single region deployment, select Best for regional workloads and click Next.
  6. Click Configure.

Basic configuration

  1. For the Name of the load balancer, enter l7-ilb-map.
  2. For Region, select us-west1.
  3. For Network, select lb-network.

Reserve a proxy-only subnet

Note: If you've already reserved a proxy-only subnet, as instructed in the preparation setup, the Reserve a Subnet button isn't displayed, so you need to skip this section and continue with the steps to configure the backend service.

Reserve a proxy-only subnet:

  1. Click Reserve a Subnet.
  2. For Name, enter proxy-only-subnet.
  3. For IP address range, enter 10.129.0.0/23.
  4. Click Add.

Configure the backend service

  1. Click Backend configuration.
  2. From the Create or select backend services menu, select Create a backend service.
  3. Set the name of the backend service to l7-ilb-backend-service.
  4. Set Backend type to Instance group.
  5. In the Health check list, click Create a health check, and then enter the following information:
    1. Name: l7-ilb-basic-check
    2. Protocol: HTTP
    3. Port: 80
  6. Click Create.
  7. In the New backend section:
    1. Set Instance group to l7-ilb-backend-example.
    2. Set Port numbers to 80.
    3. Set Balancing mode to Utilization.
    4. Click Done.
  8. Optional: Configure a default backend security policy. The default security policy throttles traffic over a user-configured threshold. For more information about default security policies, see the Rate limiting overview.

    1. To opt out of the Cloud Armor default security policy, select None in the Cloud Armor backend security policy list.
    2. To configure the Cloud Armor default security policy, select Default security policy in the Cloud Armor backend security policy list.
    3. In the Policy name field, accept the automatically generated name or enter a name for your security policy.
    4. In the Request count field, accept the default request count or enter an integer between 1 and 10,000.
    5. In the Interval field, select an interval.
    6. In the Enforce on key field, choose one of the following values: All, IP address, or X-Forwarded-For IP address. For more information about these options, see Identifying clients for rate limiting.
  9. Click Create.

Configure the URL map

  1. Click Host and path rules.

  2. For Mode, select Simple host and path rule.

  3. Ensure that the l7-ilb-backend-service is the only backend service for any unmatched host and any unmatched path.

For information about traffic management, see Set up traffic management for internal Application Load Balancers.

Configure the frontend

For HTTP:

  1. Click Frontend configuration.
  2. Set the name of the forwarding rule to l7-ilb-forwarding-rule.
  3. Set Protocol to HTTP.
  4. Set Subnetwork to backend-subnet.
  5. Set the Port to 80.
  6. From the IP address list, select l7-ilb-ip-address.
  7. Click Done.

For HTTPS:

  1. Click Frontend configuration.
  2. Set the name of the forwarding rule to l7-ilb-forwarding-rule.
  3. Set Protocol to HTTPS (includes HTTP/2).
  4. Set Subnetwork to backend-subnet.
  5. Ensure that the Port is set to 443, to allow HTTPS traffic.
  6. From the IP address list, select l7-ilb-ip-address.
  7. To assign an SSL certificate to the target HTTPS proxy of the load balancer, you can either use a Compute Engine SSL certificate or a Certificate Manager certificate.

    1. To attach a Certificate Manager certificate to the target HTTPS proxy of the load balancer, in the Choose certificate repository section, select Certificates.

      If you already have an existing Certificate Manager certificate to select, do the following:

      1. Click Add Certificate.
      2. Click Select an existing certificate and select the certificate from the list of certificates.
      3. Click Select.

      After you select the new Certificate Manager certificate, it appears in the list of certificates.

      To create a new Certificate Manager certificate, do the following:

      1. Click Add Certificate.
      2. Click Create a new certificate.
      3. To create a new certificate, follow the steps starting from step 3 as outlined in any one of the following configuration methods in the Certificate Manager documentation:

      After you create the new Certificate Manager certificate, it appears in the list of certificates.

    2. To attach a Compute Engine SSL certificate to the target HTTPS proxy of the load balancer, in the Choose certificate repository section, select Classic Certificates.

      1. In the Certificate list, do the following:
        1. If you already have a Compute Engine self-managed SSL certificate resource, select the primary SSL certificate.
        2. Click Create a new certificate.
          1. In the Name field, enter l7-ilb-cert.
          2. In the appropriate fields, upload your PEM-formatted files:
            • Certificate
            • Private key
          3. Click Create.
        3. Optional: To add certificates in addition to the primary SSL certificate:
          1. Click Add certificate.
          2. If you already have a certificate, select it from the Certificates list.
          3. Optional: Click Create a new certificate and follow the instructions as specified in the previous step.
  8. Select an SSL policy from the SSL policy list. Optionally, to create an SSL policy, do the following:

    1. In the SSL policy list, select Create a policy.
    2. Enter a name for the SSL policy.
    3. Select a minimum TLS version. The default value is TLS 1.0.
    4. Select one of the pre-configured Google-managed profiles or select a Custom profile that lets you select SSL features individually. The Enabled features and Disabled features are displayed.
    5. Click Save.

    If you have not created any SSL policies, a default Google Cloud SSL policy is applied.

  9. Click Done.

Review the configuration

  1. Click Review and finalize.
  2. Review your load balancer configuration settings.
  3. Optional: Click Equivalent code to view the REST API request that will be used to create the load balancer.
  4. Click Create.

gcloud

  1. Define the HTTP health check with thegcloud compute health-checkscreate http command.

     gcloud compute health-checks create http l7-ilb-basic-check \     --region=us-west1 \     --use-serving-port
  2. Define the backend service with thegcloud computebackend-services createcommand.

    gcloud compute backend-services create l7-ilb-backend-service \    --load-balancing-scheme=INTERNAL_MANAGED \    --protocol=HTTP \    --health-checks=l7-ilb-basic-check \    --health-checks-region=us-west1 \    --region=us-west1
  3. Add backends to the backend service with the gcloud compute backend-services add-backend command.

    gcloud compute backend-services add-backend l7-ilb-backend-service \
        --balancing-mode=UTILIZATION \
        --instance-group=l7-ilb-backend-example \
        --instance-group-zone=us-west1-a \
        --region=us-west1
  4. Create the URL map with the gcloud compute url-maps create command.

    gcloud compute url-maps create l7-ilb-map \
        --default-service=l7-ilb-backend-service \
        --region=us-west1
  5. Create the target proxy.

    For HTTP:

    For an internal HTTP load balancer, create the target proxy with the gcloud compute target-http-proxies create command.

    gcloud compute target-http-proxies create l7-ilb-proxy \
        --url-map=l7-ilb-map \
        --url-map-region=us-west1 \
        --region=us-west1

    For HTTPS:

    You can create either Compute Engine or Certificate Manager certificates. The following steps create a regional Compute Engine SSL certificate resource.

    After you create certificates, attach the certificate directly to the target proxy.

    Assign your file paths to variable names.

    export LB_CERT=PATH_TO_PEM_FORMATTED_CERTIFICATE_FILE
    export LB_PRIVATE_KEY=PATH_TO_PEM_FORMATTED_PRIVATE_KEY_FILE

    Create a regional SSL certificate using the gcloud compute ssl-certificates create command.

    gcloud compute ssl-certificates create l7-ilb-cert \
        --certificate=$LB_CERT \
        --private-key=$LB_PRIVATE_KEY \
        --region=us-west1

    Use the regional SSL certificate to create a target proxy with the gcloud compute target-https-proxies create command.

    gcloud compute target-https-proxies create l7-ilb-proxy \
        --url-map=l7-ilb-map \
        --region=us-west1 \
        --ssl-certificates=l7-ilb-cert
  6. Create the forwarding rule.

    For custom networks, you must reference the subnet in the forwarding rule. Note that this is the VM subnet, not the proxy subnet.

    For HTTP:

    Use the gcloud compute forwarding-rules create command with the correct flags.

    gcloud compute forwarding-rules create l7-ilb-forwarding-rule \
        --load-balancing-scheme=INTERNAL_MANAGED \
        --network=lb-network \
        --subnet=backend-subnet \
        --address=l7-ilb-ip-address \
        --ports=80 \
        --region=us-west1 \
        --target-http-proxy=l7-ilb-proxy \
        --target-http-proxy-region=us-west1

    For HTTPS:

    Create the forwarding rule with the gcloud compute forwarding-rules create command and the correct flags.

    gcloud compute forwarding-rules create l7-ilb-forwarding-rule \
        --load-balancing-scheme=INTERNAL_MANAGED \
        --network=lb-network \
        --subnet=backend-subnet \
        --address=l7-ilb-ip-address \
        --ports=443 \
        --region=us-west1 \
        --target-https-proxy=l7-ilb-proxy \
        --target-https-proxy-region=us-west1

API

Create the health check by making a POST request to the regionHealthChecks.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/healthChecks
{
  "name": "l7-ilb-basic-check",
  "type": "HTTP",
  "httpHealthCheck": {
    "portSpecification": "USE_SERVING_PORT"
  }
}

Create the regional backend service by making a POST request to the regionBackendServices.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices
{
  "name": "l7-ilb-backend-service",
  "backends": [
    {
      "group": "projects/PROJECT_ID/zones/us-west1-a/instanceGroups/l7-ilb-backend-example",
      "balancingMode": "UTILIZATION"
    }
  ],
  "healthChecks": [
    "projects/PROJECT_ID/regions/us-west1/healthChecks/l7-ilb-basic-check"
  ],
  "loadBalancingScheme": "INTERNAL_MANAGED"
}

Create the URL map by making a POST request to the regionUrlMaps.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/urlMaps
{
  "name": "l7-ilb-map",
  "defaultService": "projects/PROJECT_ID/regions/us-west1/backendServices/l7-ilb-backend-service"
}

For HTTP:

Create the target HTTP proxy by making a POST request to the regionTargetHttpProxies.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/targetHttpProxies
{
  "name": "l7-ilb-proxy",
  "urlMap": "projects/PROJECT_ID/regions/us-west1/urlMaps/l7-ilb-map",
  "region": "us-west1"
}

Create the forwarding rule by making a POST request to the forwardingRules.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/forwardingRules
{
  "name": "l7-ilb-forwarding-rule",
  "IPAddress": "IP_ADDRESS",
  "IPProtocol": "TCP",
  "portRange": "80-80",
  "target": "projects/PROJECT_ID/regions/us-west1/targetHttpProxies/l7-ilb-proxy",
  "loadBalancingScheme": "INTERNAL_MANAGED",
  "subnetwork": "projects/PROJECT_ID/regions/us-west1/subnetworks/backend-subnet",
  "network": "projects/PROJECT_ID/global/networks/lb-network",
  "networkTier": "PREMIUM"
}

For HTTPS:

You can create either Compute Engine or Certificate Manager certificates. The following steps create a regional Compute Engine SSL certificate resource.

After you create certificates, attach the certificate directly to the target proxy.

Read the certificate and private key files, and then create the SSL certificate. The following example shows how to do this with Python.

from pathlib import Path
from pprint import pprint
from typing import Union

from googleapiclient import discovery


def create_regional_certificate(
    project_id: str,
    region: str,
    certificate_file: Union[str, Path],
    private_key_file: Union[str, Path],
    certificate_name: str,
    description: str = "Certificate created from a code sample.",
) -> dict:
    """
    Create a regional SSL self-signed certificate within your Google Cloud project.

    Args:
        project_id: project ID or project number of the Cloud project you want to use.
        region: name of the region you want to use.
        certificate_file: path to the file with the certificate you want to create in your project.
        private_key_file: path to the private key you used to sign the certificate with.
        certificate_name: name for the certificate once it's created in your project.
        description: description of the certificate.

    Returns:
        Dictionary with information about the new regional SSL self-signed certificate.
    """
    service = discovery.build("compute", "v1")

    # Read the cert into memory
    with open(certificate_file) as f:
        _temp_cert = f.read()

    # Read the private_key into memory
    with open(private_key_file) as f:
        _temp_key = f.read()

    # Now that the certificate and private key are in memory, you can create the
    # certificate resource
    ssl_certificate_body = {
        "name": certificate_name,
        "description": description,
        "certificate": _temp_cert,
        "privateKey": _temp_key,
    }
    request = service.regionSslCertificates().insert(
        project=project_id, region=region, body=ssl_certificate_body
    )
    response = request.execute()
    pprint(response)

    return response

Create the target HTTPS proxy by making a POST request to the regionTargetHttpsProxies.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/targetHttpsProxies
{
  "name": "l7-ilb-proxy",
  "urlMap": "projects/PROJECT_ID/regions/us-west1/urlMaps/l7-ilb-map",
  "sslCertificates": [
    "projects/PROJECT_ID/regions/us-west1/sslCertificates/SSL_CERT_NAME"
  ]
}

Create the forwarding rule by making a POST request to the forwardingRules.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/forwardingRules
{
  "name": "l7-ilb-forwarding-rule",
  "IPAddress": "IP_ADDRESS",
  "IPProtocol": "TCP",
  "portRange": "443-443",
  "target": "projects/PROJECT_ID/regions/us-west1/targetHttpsProxies/l7-ilb-proxy",
  "loadBalancingScheme": "INTERNAL_MANAGED",
  "subnetwork": "projects/PROJECT_ID/regions/us-west1/subnetworks/backend-subnet",
  "network": "projects/PROJECT_ID/global/networks/lb-network",
  "networkTier": "PREMIUM"
}

Test the load balancer

To test the load balancer, create a client VM. Then, establish an SSH session with the VM and send traffic from the VM to the load balancer.

Create a VM instance to test connectivity

Console

  1. In the Google Cloud console, go to the VM instances page.

    Go to VM instances

  2. Click Create instance.

  3. Set Name to l7-ilb-client-us-west1-a.

  4. Set Zone to us-west1-a.

  5. Click Advanced options.

  6. Click Networking and configure the following fields:

    1. For Network tags, enter allow-ssh.
    2. For Network interfaces, select the following:
      1. Network: lb-network
      2. Subnet: backend-subnet
  7. Click Create.

gcloud

  gcloud compute instances create l7-ilb-client-us-west1-a \
      --image-family=debian-12 \
      --image-project=debian-cloud \
      --network=lb-network \
      --subnet=backend-subnet \
      --zone=us-west1-a \
      --tags=allow-ssh

Send traffic to the load balancer

Sign in to the instance that you just created and test that HTTP(S) services on the backends are reachable by using the regional internal Application Load Balancer's forwarding rule IP address, and that traffic is being load balanced across the backend instances.

Connect using SSH to each client instance

gcloud compute ssh l7-ilb-client-us-west1-a \
    --zone=us-west1-a

Get the load balancer's IP address

Use the gcloud compute addresses describe command to view the allocated IP address:

gcloud compute addresses describe l7-ilb-ip-address \
    --region=us-west1

Verify that the IP address is serving its hostname

Replace IP_ADDRESS with the load balancer's IP address.

For HTTP testing:

curl IP_ADDRESS

For HTTPS testing:

curl -k -s 'https://DOMAIN_NAME:443' --connect-to DOMAIN_NAME:443:IP_ADDRESS:443

Replace DOMAIN_NAME with your application domain name, for example, test.example.com.

The -k flag causes curl to skip certificate validation.

Run 100 requests and confirm that they are load balanced

Replace IP_ADDRESS with the load balancer's IP address.

For HTTP:

{
  RESULTS=
  for i in {1..100}
  do
      RESULTS="$RESULTS:$(curl --silent IP_ADDRESS)"
  done
  echo "***"
  echo "*** Results of load-balancing: "
  echo "***"
  echo "$RESULTS" | tr ':' '\n' | grep -Ev "^$" | sort | uniq -c
  echo
}

For HTTPS:

Replace DOMAIN_NAME with your application domain name, for example, test.example.com.

{
  RESULTS=
  for i in {1..100}
  do
      RESULTS="$RESULTS:$(curl -k -s 'https://DOMAIN_NAME:443' --connect-to DOMAIN_NAME:443:IP_ADDRESS:443)"
  done
  echo "***"
  echo "*** Results of load-balancing: "
  echo "***"
  echo "$RESULTS" | tr ':' '\n' | grep -Ev "^$" | sort | uniq -c
  echo
}
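The shell loops above tally which backend served each request by sorting and counting the response bodies. If you collect the responses programmatically instead, the same tally can be computed in Python. This is a generic sketch only; the count_backends helper and the sample hostnames are hypothetical, and it assumes each backend's response body contains just its hostname:

```python
from collections import Counter

def count_backends(responses):
    # Tally how many requests each backend served. `responses` is a list of
    # response bodies, one per request; each body is assumed to contain only
    # the serving backend's hostname.
    return Counter(body.strip() for body in responses if body.strip())

# Simulated bodies from six requests spread across two backend VMs:
responses = ["vm-a7", "vm-b2", "vm-a7", "vm-a7", "vm-b2", "vm-a7"]
for backend, hits in count_backends(responses).most_common():
    print(f"{hits:4d} {backend}")
```

A roughly even split across the backends indicates that load balancing is working; a heavy skew can point at failing health checks or session affinity being enabled.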

Additional configuration options

This section expands on the configuration example to provide alternative andadditional configuration options. All of the tasks are optional. You canperform them in any order.

Enable global access

You can enable global access for regional internal Application Load Balancers and regional internal proxy Network Load Balancers to make them accessible to clients in all regions. The backends of your example load balancer must still be located in one region (us-west1).

Regional internal Application Load Balancer with global access (click to enlarge).

You can't modify an existing regional forwarding rule to enable global access. You must create a new forwarding rule for this purpose and delete the previous forwarding rule. Additionally, after a forwarding rule is created with global access enabled, it cannot be modified. To disable global access, you must create a new regional access forwarding rule and delete the previous global access forwarding rule.

To configure global access, make the following configuration changes.

Console

Create a new forwarding rule for the load balancer:

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. In the Name column, click your load balancer.

  3. Click Frontend configuration.

  4. Click Add frontend IP and port.

  5. Enter the name and subnet details for the new forwarding rule.

  6. For Subnetwork, select backend-subnet.

  7. For IP address, you can either select the same IP address as an existing forwarding rule, reserve a new IP address, or use an ephemeral IP address. Sharing the same IP address across multiple forwarding rules is only possible if you set the IP address --purpose flag to SHARED_LOADBALANCER_VIP while creating the IP address.

  8. For Port number, enter 110.

  9. For Global access, select Enable.

  10. Click Done.

  11. Click Update.

gcloud

  1. Create a new forwarding rule for the load balancer with the --allow-global-access flag.

    For HTTP:

    gcloud compute forwarding-rules create l7-ilb-forwarding-rule-global-access \
        --load-balancing-scheme=INTERNAL_MANAGED \
        --network=lb-network \
        --subnet=backend-subnet \
        --address=10.1.2.99 \
        --ports=80 \
        --region=us-west1 \
        --target-http-proxy=l7-ilb-proxy \
        --target-http-proxy-region=us-west1 \
        --allow-global-access

    For HTTPS:

    gcloud compute forwarding-rules create l7-ilb-forwarding-rule-global-access \
        --load-balancing-scheme=INTERNAL_MANAGED \
        --network=lb-network \
        --subnet=backend-subnet \
        --address=10.1.2.99 \
        --ports=443 \
        --region=us-west1 \
        --target-https-proxy=l7-ilb-proxy \
        --target-https-proxy-region=us-west1 \
        --allow-global-access
  2. You can use the gcloud compute forwarding-rules describe command to determine whether a forwarding rule has global access enabled. For example:

     gcloud compute forwarding-rules describe l7-ilb-forwarding-rule-global-access \
         --region=us-west1 \
         --format="get(name,region,allowGlobalAccess)"

    When global access is enabled, the word True appears in the output after the name and region of the forwarding rule.

Create a client VM to test global access

Console

  1. In the Google Cloud console, go to the VM instances page.

    Go to VM instances

  2. Click Create instance.

  3. Set Name to europe-client-vm.

  4. Set Zone to europe-west1-b.

  5. Click Advanced options.

  6. Click Networking and configure the following fields:

    1. For Network tags, enter allow-ssh.
    2. For Network interfaces, select the following:
      • Network: lb-network
      • Subnet: europe-subnet
  7. Click Create.

gcloud

Create a client VM in the europe-west1-b zone.

gcloud compute instances create europe-client-vm \
    --zone=europe-west1-b \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --tags=allow-ssh \
    --subnet=europe-subnet

Connect to the VM client and test connectivity

  1. Use ssh to connect to the client instance.

    gcloud compute ssh europe-client-vm \
        --zone=europe-west1-b
  2. Test connections to the load balancer as you did from the client VM in the us-west1 region.

    curl http://10.1.2.99

Enable session affinity

These procedures show you how to update a backend service for the example regional internal Application Load Balancer or cross-region internal Application Load Balancer so that the backend service uses generated cookie affinity, header field affinity, or HTTP cookie affinity.

When generated cookie affinity is enabled, the load balancer issues a cookie on the first request. For each subsequent request with the same cookie, the load balancer directs the request to the same backend virtual machine (VM) instance or endpoint. In this example, the cookie is named GCILB.

When header field affinity is enabled, the load balancer routes requests to backend VMs or endpoints in a network endpoint group (NEG) based on the value of the HTTP header named in the --custom-request-header flag. Header field affinity is only valid if the load balancing locality policy is either RING_HASH or MAGLEV and the backend service's consistent hash specifies the name of the HTTP header.

When HTTP cookie affinity is enabled, the load balancer routes requests to backend VMs or endpoints in a NEG, based on an HTTP cookie named in the HTTP_COOKIE flag with the optional --affinity-cookie-ttl flag. If the client doesn't provide the cookie in its HTTP request, the proxy generates the cookie and returns it to the client in a Set-Cookie header. HTTP cookie affinity is only valid if the load balancing locality policy is either RING_HASH or MAGLEV and the backend service's consistent hash specifies the HTTP cookie.
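Header field and HTTP cookie affinity require a consistent-hash locality policy (RING_HASH or MAGLEV) because consistent hashing is what maps a given header or cookie value stably to one backend. The following minimal hash ring illustrates that idea only; it is not Google's implementation, and the backend names are made up:

```python
import hashlib
from bisect import bisect

def _hash(value: str) -> int:
    # Stable hash: same input always yields the same ring position.
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class HashRing:
    """Minimal consistent-hash ring (illustration only)."""

    def __init__(self, backends, vnodes=100):
        # Place several virtual nodes per backend on the ring so that load
        # spreads evenly across backends.
        self.ring = sorted(
            (_hash(f"{b}#{i}"), b) for b in backends for i in range(vnodes)
        )
        self.keys = [h for h, _ in self.ring]

    def pick(self, affinity_key: str) -> str:
        # Walk clockwise to the first virtual node at or after the key's hash.
        idx = bisect(self.keys, _hash(affinity_key)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["backend-vm-1", "backend-vm-2", "backend-vm-3"])
# The same cookie value always lands on the same backend:
print(ring.pick("GCILB=abc123") == ring.pick("GCILB=abc123"))  # True
```

Because only keys that hashed near a removed backend are remapped when the backend set changes, most sessions keep their affinity even as backends come and go.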

Console

To enable or change session affinity for a backend service:

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. Click Backends.
  3. Click l7-ilb-backend-service (the name of the backend service you created for this example) and click Edit.
  4. On the Backend service details page, click Advanced configuration.
  5. Under Session affinity, select the type of session affinity you want.
  6. Click Update.

gcloud

Use the following Google Cloud CLI commands to update the backend service to different types of session affinity:

    gcloud compute backend-services update l7-ilb-backend-service \
        --session-affinity=[GENERATED_COOKIE | HEADER_FIELD | HTTP_COOKIE | CLIENT_IP] \
        --region=us-west1

API

To set session affinity, make a PATCH request to the regionBackendServices/patch method.

    PATCH https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/regionBackendServices/l7-ilb-backend-service
    {
      "sessionAffinity": ["GENERATED_COOKIE" | "HEADER_FIELD" | "HTTP_COOKIE" | "CLIENT_IP"]
    }

Restrict which clients can send traffic to the load balancer

Note: This section shows you how to restrict client access to your regional internal Application Load Balancer by using firewall rules. You can also use Google Cloud Armor to restrict client access to your load balancer. For more information, see the Security policy overview in the Cloud Armor documentation.

You can restrict clients from connecting to an internal Application Load Balancer forwarding rule VIP by configuring egress firewall rules on these clients. Set these firewall rules on specific client VMs based on service accounts or tags.

You can't use firewall rules to restrict inbound traffic to specific internal Application Load Balancer forwarding rule VIPs. Any client on the same VPC network and in the same region as the forwarding rule VIP can generally send traffic to the forwarding rule VIP.

Additionally, all requests to backends come from proxies that use IP addresses in the proxy-only subnet range. It isn't possible to create firewall rules that allow or deny ingress traffic on these backends based on the forwarding rule VIP used by a client.

Here are some examples of how to use egress firewall rules to restrict traffic to the load balancer's forwarding rule VIP.

Console

To identify the client VMs, tag the specific VMs you want to restrict. These tags are used to associate firewall rules with the tagged client VMs. Then, add the tag to the TARGET_TAG field in the following steps.

Use either a single firewall rule or multiple rules to set this up.

Single egress firewall rule

You can configure one firewall egress rule to deny all egress traffic going from tagged client VMs to a load balancer's VIP.

  1. In the Google Cloud console, go to the Firewall rules page.

    Go to Firewall rules

  2. Click Create firewall rule to create the rule to deny egress traffic from tagged client VMs to a load balancer's VIP.

    • Name: fr-deny-access
    • Network: lb-network
    • Priority: 100
    • Direction of traffic: Egress
    • Action on match: Deny
    • Targets: Specified target tags
    • Target tags: TARGET_TAG
    • Destination filter: IP ranges
    • Destination IP ranges: 10.1.2.99
    • Protocols and ports:
      • Choose Specified protocols and ports.
      • Select the TCP checkbox, and then enter 80 for the port number.
  3. Click Create.

Multiple egress firewall rules

A more scalable approach involves setting two rules: a default, low-priority rule that restricts all clients from accessing the load balancer's VIP, and a second, higher-priority rule that allows a subset of tagged clients to access the load balancer's VIP. Only tagged VMs can access the VIP.

  1. In the Google Cloud console, go to the Firewall rules page.

    Go to Firewall rules

  2. Click Create firewall rule to create the lower-priority rule to deny access by default:

    • Name: fr-deny-all-access-low-priority
    • Network: lb-network
    • Priority: 200
    • Direction of traffic: Egress
    • Action on match: Deny
    • Targets: Specified target tags
    • Target tags: TARGET_TAG
    • Destination filter: IP ranges
    • Destination IP ranges: 10.1.2.99
    • Protocols and ports:
      • Choose Specified protocols and ports.
      • Select the TCP checkbox, and then enter 80 for the port number.
  3. Click Create.

  4. Click Create firewall rule to create the higher-priority rule to allow traffic from certain tagged instances.

    • Name: fr-allow-some-access-high-priority
    • Network: lb-network
    • Priority: 100
    • Direction of traffic: Egress
    • Action on match: Allow
    • Targets: Specified target tags
    • Target tags: TARGET_TAG
    • Destination filter: IP ranges
    • Destination IP ranges: 10.1.2.99
    • Protocols and ports:
      • Choose Specified protocols and ports.
      • Select the TCP checkbox, and then enter 80 for the port number.
  5. Click Create.

gcloud

To identify the client VMs, tag the specific VMs you want to restrict. Then add the tag to the TARGET_TAG field in these steps.

Use either a single firewall rule or multiple rules to set this up.

Single egress firewall rule

You can configure one firewall egress rule to deny all egress traffic going from tagged client VMs to a load balancer's VIP.

gcloud compute firewall-rules create fr-deny-access \
    --network=lb-network \
    --action=deny \
    --direction=egress \
    --rules=tcp \
    --priority=100 \
    --destination-ranges=10.1.2.99 \
    --target-tags=TARGET_TAG

Multiple egress firewall rules

A more scalable approach involves setting two rules: a default, low-priorityrule that restricts all clients from accessing the load balancer's VIP, and asecond, higher-priority rule that allows a subset of tagged clients to accessthe load balancer's VIP. Only tagged VMs can access the VIP.

  1. Create the lower-priority rule:

    gcloud compute firewall-rules create fr-deny-all-access-low-priority \
        --network=lb-network \
        --action=deny \
        --direction=egress \
        --rules=tcp \
        --priority=200 \
        --destination-ranges=10.1.2.99
  2. Create the higher-priority rule:

    gcloud compute firewall-rules create fr-allow-some-access-high-priority \
        --network=lb-network \
        --action=allow \
        --direction=egress \
        --rules=tcp \
        --priority=100 \
        --destination-ranges=10.1.2.99 \
        --target-tags=TARGET_TAG
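The two-rule pattern works because Google Cloud evaluates firewall rules in priority order (lower numbers first) and applies the first rule that matches. The toy evaluator below illustrates that ordering only; it is not the actual enforcement logic, and the rule fields are simplified (exact-IP destinations, no protocol matching):

```python
def evaluate_egress(rules, vm_tags, dest_ip):
    # Return the action of the highest-priority (lowest number) matching
    # rule. Each rule is a dict with: priority, action, dest (an exact IP
    # here, for simplicity), and target_tags (None means the rule applies
    # to all VMs in the network).
    for rule in sorted(rules, key=lambda r: r["priority"]):
        tags_match = rule["target_tags"] is None or rule["target_tags"] & vm_tags
        if tags_match and rule["dest"] == dest_ip:
            return rule["action"]
    return "allow"  # VPC networks have an implied allow-egress default

rules = [
    {"priority": 200, "action": "deny", "dest": "10.1.2.99", "target_tags": None},
    {"priority": 100, "action": "allow", "dest": "10.1.2.99", "target_tags": {"TARGET_TAG"}},
]
print(evaluate_egress(rules, {"TARGET_TAG"}, "10.1.2.99"))  # allow
print(evaluate_egress(rules, {"other-tag"}, "10.1.2.99"))   # deny
```

Because the tagged allow rule has a lower priority number than the blanket deny, it wins for tagged VMs, while every other VM falls through to the deny rule.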

To use service accounts instead of tags to control access, use the --target-service-accounts option instead of the --target-tags flag when creating firewall rules.

Scale restricted access to internal Application Load Balancer backends based on subnets

Maintaining separate firewall rules or adding new load-balanced IP addresses to existing rules as described in the previous section becomes inconvenient as the number of forwarding rules increases. One way to prevent this is to allocate forwarding rule IP addresses from a reserved subnet. Then, traffic from tagged instances or service accounts can be allowed or blocked by using the reserved subnet as the destination range for firewall rules. This lets you effectively control access to a group of forwarding rule VIPs without having to maintain per-VIP firewall egress rules.

Here are the high-level steps to set this up, assuming that you will create all the other required load balancer resources separately.

gcloud

  1. Create a regional subnet to use to allocate load-balanced IP addresses for forwarding rules:

    gcloud compute networks subnets create l7-ilb-restricted-subnet \
        --network=lb-network \
        --region=us-west1 \
        --range=10.127.0.0/24
  2. Create a forwarding rule that takes an address from the subnet. The following example uses the address 10.127.0.1 from the subnet created in the previous step.

    gcloud compute forwarding-rules create l7-ilb-forwarding-rule-restricted \
        --load-balancing-scheme=INTERNAL_MANAGED \
        --network=lb-network \
        --subnet=l7-ilb-restricted-subnet \
        --address=10.127.0.1 \
        --ports=80 \
        --region=us-west1 \
        --target-http-proxy=l7-ilb-proxy \
        --target-http-proxy-region=us-west1

  3. Create a firewall rule to restrict traffic destined for the IP address range of the forwarding rule subnet (l7-ilb-restricted-subnet):

    gcloud compute firewall-rules create restrict-traffic-to-subnet \
        --network=lb-network \
        --action=deny \
        --direction=egress \
        --rules=tcp:80 \
        --priority=100 \
        --destination-ranges=10.127.0.0/24 \
        --target-tags=TARGET_TAG
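A single deny rule with the reserved subnet as its destination range covers every VIP allocated from that subnet because firewall destination matching is plain CIDR containment. You can verify containment for any candidate VIP with Python's standard ipaddress module:

```python
import ipaddress

# The reserved subnet from which forwarding rule VIPs are allocated:
restricted = ipaddress.ip_network("10.127.0.0/24")

# Any VIP allocated from the subnet falls inside the firewall rule's
# destination range, so one rule covers all of them:
for vip in ["10.127.0.1", "10.127.0.77"]:
    assert ipaddress.ip_address(vip) in restricted

# A VIP from a different subnet is not covered:
assert ipaddress.ip_address("10.1.2.99") not in restricted
print("10.127.0.0/24 covers", restricted.num_addresses, "addresses")
```

Adding a new forwarding rule therefore requires no firewall change, as long as its address comes from the reserved subnet.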

Configure backend subsetting

Preview

This product or feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of theService Specific Terms. Pre-GA products and features are available "as is" and might have limited support. For more information, see thelaunch stage descriptions.

Backend subsetting improves performance and scalability by assigning a subset of backends to each of the proxy instances. When enabled for a backend service, backend subsetting adjusts the number of backends used by each proxy instance as follows:

  • As the number of proxy instances participating in the load balancer increases, the subset size decreases.

  • When the total number of backends in a network exceeds the capacity of a single proxy instance, the subset size is reduced automatically for each service that has backend subsetting enabled.
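As a rough sketch of the idea behind subsetting (this is deterministic subsetting in the style described in the Google SRE book, not the actual CONSISTENT_HASH_SUBSETTING algorithm), each proxy can derive its own backend subset from a shared deterministic shuffle, with no coordination between proxies. The names and the subset size here are illustrative:

```python
import random

def subset_for_proxy(backends, proxy_index, subset_size):
    # Deterministic subsetting: group proxies into "rounds", shuffle the
    # backend list with the round number as the seed, and hand each proxy
    # in the round a contiguous slice. Every proxy computes the same
    # shuffle, so no coordination is needed and load spreads evenly.
    subsets_per_round = len(backends) // subset_size
    round_idx, slot = divmod(proxy_index, subsets_per_round)
    shuffled = list(backends)
    random.Random(round_idx).shuffle(shuffled)
    return shuffled[slot * subset_size:(slot + 1) * subset_size]

backends = [f"backend-vm-{i}" for i in range(12)]
# With a subset size of 4, each of three proxies opens connections to only
# 4 of the 12 backends, and together the three subsets cover all of them:
subsets = [subset_for_proxy(backends, p, 4) for p in range(3)]
print(all(len(s) == 4 for s in subsets))       # True
print(set().union(*subsets) == set(backends))  # True
```

Shrinking the subset size as the proxy count grows keeps the total number of proxy-to-backend connections bounded, which is the scaling behavior the bullets above describe.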

Note: In the REST API, the subsetting.subsetSize setting is available only for Cloud Service Mesh. This setting isn't available for regional internal Application Load Balancers.

This example shows you how to create regional internal Application Load Balancer resources and enable backend subsetting:

  1. Use the example configuration to create a regional backend service l7-ilb-backend-service.
  2. Enable backend subsetting by specifying the --subsetting-policy flag as CONSISTENT_HASH_SUBSETTING. Set the load balancing scheme to INTERNAL_MANAGED.

    gcloud

    Use the following gcloud command to update l7-ilb-backend-service with backend subsetting:

    gcloud beta compute backend-services update l7-ilb-backend-service \
        --region=us-west1 \
        --subsetting-policy=CONSISTENT_HASH_SUBSETTING

    API

    Make a PATCH request to the regionBackendServices/patch method.

    PATCH https://compute.googleapis.com/compute/beta/projects/PROJECT_ID/regions/us-west1/backendServices/l7-ilb-backend-service
    {
      "subsetting": {
        "policy": "CONSISTENT_HASH_SUBSETTING"
      }
    }

You can also refine backend load balancing by setting the localityLbPolicy policy. For more information, see Traffic policies.

Use the same IP address between multiple internal forwarding rules

For multiple internal forwarding rules to share the same internal IP address, you must reserve the IP address and set its --purpose flag to SHARED_LOADBALANCER_VIP.

gcloud

gcloud compute addresses create SHARED_IP_ADDRESS_NAME \
    --region=REGION \
    --subnet=SUBNET_NAME \
    --purpose=SHARED_LOADBALANCER_VIP

If you need to redirect HTTP traffic to HTTPS, you can create two forwarding rules that use a common IP address. For more information, see Set up HTTP-to-HTTPS redirect for internal Application Load Balancers.

Update client HTTP keepalive timeout

The load balancer created in the previous steps has been configured with a default value for the client HTTP keepalive timeout.

To update the client HTTP keepalive timeout, use the following instructions.

Console

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. Click the name of the load balancer that you want to modify.
  3. Click Edit.
  4. Click Frontend configuration.
  5. Expand Advanced features. For HTTP keepalive timeout, enter a timeout value.
  6. Click Update.
  7. To review your changes, click Review and finalize, and then click Update.

gcloud

For an HTTP load balancer, update the target HTTP proxy by using the gcloud compute target-http-proxies update command.

      gcloud compute target-http-proxies update TARGET_HTTP_PROXY_NAME \
          --http-keep-alive-timeout-sec=HTTP_KEEP_ALIVE_TIMEOUT_SEC \
          --region=REGION

For an HTTPS load balancer, update the target HTTPS proxy by using the gcloud compute target-https-proxies update command.

      gcloud compute target-https-proxies update TARGET_HTTPS_PROXY_NAME \
          --http-keep-alive-timeout-sec=HTTP_KEEP_ALIVE_TIMEOUT_SEC \
          --region=REGION

Replace the following:

  • TARGET_HTTP_PROXY_NAME: the name of the target HTTP proxy.
  • TARGET_HTTPS_PROXY_NAME: the name of the target HTTPS proxy.
  • HTTP_KEEP_ALIVE_TIMEOUT_SEC: the HTTP keepalive timeout value, from 5 to 600 seconds.

What's next


Last updated 2025-12-15 UTC.