Configure failover for internal passthrough Network Load Balancers

This guide uses an example to teach you how to configure failover for a Google Cloud internal passthrough Network Load Balancer. Before following this guide, familiarize yourself with failover concepts for internal passthrough Network Load Balancers.

Permissions

To follow this guide, you need to create instances and modify a network in a project. You should be either a project owner or editor, or you should have all of the following Compute Engine IAM roles:

Task | Required role
Create networks, subnets, and load balancer components | Network Admin
Add and remove firewall rules | Security Admin
Create instances | Compute Instance Admin

For more information, see the IAM documentation for Compute Engine.
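
For example, a project owner or editor can grant one of these roles with the gcloud CLI. This command is a sketch, not part of the main setup; EMAIL_ADDRESS is a placeholder for the user receiving the role, and you can substitute the other roles listed above as needed:

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:EMAIL_ADDRESS" \
    --role="roles/compute.networkAdmin"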

Setup

This guide shows you how to configure and test an internal passthrough Network Load Balancer that uses failover. The steps in this section describe how to configure the following:

  1. A sample VPC network with custom subnets
  2. Firewall rules that allow incoming connections to backend VMs
  3. Backend VMs:
    • One primary backend in an unmanaged instance group in zone us-west1-a
    • One failover backend in an unmanaged instance group in zone us-west1-c
  4. One client VM to test connections and observe failover behavior
  5. The following internal passthrough Network Load Balancer components:
    • A health check for the backend service
    • An internal backend service in the us-west1 region to manage connection distribution among the backend VMs
    • An internal forwarding rule and internal IP address for the frontend of the load balancer

The architecture for this example looks like this:

Simple failover example for an internal passthrough Network Load Balancer.

Unmanaged instance groups are used for both the primary and failover backends in this example. For more information, see supported instance groups.

Configuring a network, region, and subnet

This example uses the following VPC network, region, and subnet:

  • Network: The network is a custom mode VPC network named lb-network.

  • Region: The region is us-west1.

  • Subnet: The subnet, lb-subnet, uses the 10.1.2.0/24 IP range.

Note: You can change the name of the network, the region, and the parameters for the subnet; however, subsequent steps in this guide use the network, region, and subnet parameters as outlined above.

To create the example network and subnet, follow these steps.

Console

  1. In the Google Cloud console, go to the VPC networks page.

    Go to VPC networks

  2. Click Create VPC network.

  3. Enter a Name of lb-network.

  4. In the Subnets section:

    • Set the Subnet creation mode to Custom.
    • In the New subnet section, enter the following information:
      • Name: lb-subnet
      • Region: us-west1
      • IP address range: 10.1.2.0/24
      • Click Done.
  5. Click Create.

gcloud

  1. Create the custom VPC network:

    gcloud compute networks create lb-network --subnet-mode=custom
  2. Create a subnet in the lb-network network in the us-west1 region:

    gcloud compute networks subnets create lb-subnet \
        --network=lb-network \
        --range=10.1.2.0/24 \
        --region=us-west1

API

Make a POST request to the networks.insert method.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks

{
  "routingConfig": {
    "routingMode": "REGIONAL"
  },
  "name": "lb-network",
  "autoCreateSubnetworks": false
}

Make a POST request to the subnetworks.insert method. Replace PROJECT_ID with your Google Cloud project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks

{
  "name": "lb-subnet",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "ipCidrRange": "10.1.2.0/24",
  "privateIpGoogleAccess": false
}

Configuring firewall rules

This example uses the following firewall rules:

  • fw-allow-lb-subnet: An ingress rule, applicable to all targets in the VPC network, allowing traffic from sources in the 10.1.2.0/24 range. This rule allows incoming traffic from any source within the lb-subnet to the instances (VMs) being load balanced.

  • fw-allow-ssh: An ingress rule applied to the instances being load balanced, allowing incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP range for this rule; for example, you can specify the IP ranges of the systems from which you plan to initiate SSH sessions. This example uses the target tag allow-ssh to identify the VMs to which the firewall rule applies.

  • fw-allow-health-check: An ingress rule, applicable to the instances being load balanced, that allows traffic from the Google Cloud health checking systems (130.211.0.0/22 and 35.191.0.0/16). This example uses the target tag allow-health-check to identify the instances to which it should apply.

Without these firewall rules, the default deny ingress rule blocks incoming traffic to the backend instances.

Note: You must create a firewall rule to allow health checks from the IP ranges of Google Cloud probe systems. See probe IP ranges for more information.
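
After you create the three rules, you can optionally confirm that they exist in the network. This verification step is a convenience and is not required by the rest of the guide:

gcloud compute firewall-rules list --filter="network:lb-network"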

Console

  1. In the Google Cloud console, go to the Firewall policies page.

    Go to Firewall policies

  2. Click Create firewall rule and enter the following information to create the rule to allow subnet traffic:

    • Name: fw-allow-lb-subnet
    • Network: lb-network
    • Priority: 1000
    • Direction of traffic: ingress
    • Action on match: allow
    • Targets: All instances in the network
    • Source filter: IPv4 ranges
    • Source IPv4 ranges: 10.1.2.0/24
    • Protocols and ports: Allow all
  3. Click Create.

  4. Click Create firewall rule again to create the rule to allow incoming SSH connections:

    • Name: fw-allow-ssh
    • Network: lb-network
    • Priority: 1000
    • Direction of traffic: ingress
    • Action on match: allow
    • Targets: Specified target tags
    • Target tags: allow-ssh
    • Source filter: IPv4 ranges
    • Source IPv4 ranges: 0.0.0.0/0
    • Protocols and ports: Choose Specified protocols and ports, and then type: tcp:22
  5. Click Create.

  6. Click Create firewall rule a third time to create the rule to allow Google Cloud health checks:

    • Name: fw-allow-health-check
    • Network: lb-network
    • Priority: 1000
    • Direction of traffic: ingress
    • Action on match: allow
    • Targets: Specified target tags
    • Target tags: allow-health-check
    • Source filter: IPv4 ranges
    • Source IPv4 ranges: 130.211.0.0/22 and 35.191.0.0/16
    • Protocols and ports: Allow all
  7. Click Create.

gcloud

  1. Create the fw-allow-lb-subnet firewall rule to allow communication from within the subnet:

    gcloud compute firewall-rules create fw-allow-lb-subnet \
        --network=lb-network \
        --action=allow \
        --direction=ingress \
        --source-ranges=10.1.2.0/24 \
        --rules=tcp,udp,icmp
  2. Create the fw-allow-ssh firewall rule to allow SSH connectivity to VMs with the network tag allow-ssh. When you omit source-ranges, Google Cloud interprets the rule to mean any source.

    gcloud compute firewall-rules create fw-allow-ssh \
        --network=lb-network \
        --action=allow \
        --direction=ingress \
        --target-tags=allow-ssh \
        --rules=tcp:22
  3. Create the fw-allow-health-check rule to allow Google Cloud health checks.

    gcloud compute firewall-rules create fw-allow-health-check \
        --network=lb-network \
        --action=allow \
        --direction=ingress \
        --target-tags=allow-health-check \
        --source-ranges=130.211.0.0/22,35.191.0.0/16 \
        --rules=tcp,udp,icmp

API

Create the fw-allow-lb-subnet firewall rule by making a POST request to the firewalls.insert method. Replace PROJECT_ID with your Google Cloud project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "name": "fw-allow-lb-subnet",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "priority": 1000,
  "sourceRanges": [
    "10.1.2.0/24"
  ],
  "allowed": [
    {
      "IPProtocol": "tcp"
    },
    {
      "IPProtocol": "udp"
    },
    {
      "IPProtocol": "icmp"
    }
  ],
  "direction": "INGRESS",
  "logConfig": {
    "enable": false
  },
  "disabled": false
}

Create the fw-allow-ssh firewall rule by making a POST request to the firewalls.insert method. Replace PROJECT_ID with your Google Cloud project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "name": "fw-allow-ssh",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "priority": 1000,
  "sourceRanges": [
    "0.0.0.0/0"
  ],
  "targetTags": [
    "allow-ssh"
  ],
  "allowed": [
    {
      "IPProtocol": "tcp",
      "ports": [
        "22"
      ]
    }
  ],
  "direction": "INGRESS",
  "logConfig": {
    "enable": false
  },
  "disabled": false
}

Create the fw-allow-health-check firewall rule by making a POST request to the firewalls.insert method. Replace PROJECT_ID with your Google Cloud project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "name": "fw-allow-health-check",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "priority": 1000,
  "sourceRanges": [
    "130.211.0.0/22",
    "35.191.0.0/16"
  ],
  "targetTags": [
    "allow-health-check"
  ],
  "allowed": [
    {
      "IPProtocol": "tcp"
    },
    {
      "IPProtocol": "udp"
    },
    {
      "IPProtocol": "icmp"
    }
  ],
  "direction": "INGRESS",
  "logConfig": {
    "enable": false
  },
  "disabled": false
}

Creating backend VMs and instance groups

In this step, you'll create the backend VMs and unmanaged instance groups:

  • The instance group ig-a in us-west1-a is a primary backend with two VMs:
    • vm-a1
    • vm-a2
  • The instance group ig-c in us-west1-c is a failover backend with two VMs:
    • vm-c1
    • vm-c2

The primary and failover backends are placed in separate zones for instructional clarity and to handle failover in case one zone goes down.

Each primary and backup VM is configured to run an Apache web server on TCP ports 80 and 443. Each VM is assigned an internal IP address in the lb-subnet for client access and an ephemeral external (public) IP address for SSH access. For information about removing external IP addresses, see removing external IP addresses from backend VMs.

By default, Apache is configured to bind to any IP address. Internal passthrough Network Load Balancers deliver packets by preserving the destination IP address.

Ensure that server software running on your primary and backup VMs is listening on the IP address of the load balancer's internal forwarding rule. If you configure multiple internal forwarding rules, ensure that your software listens to the internal IP address associated with each one. The destination IP address of a packet delivered to a backend VM by an internal passthrough Network Load Balancer is the internal IP address of the forwarding rule.
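
One way to confirm the listener configuration is to connect to a backend VM and inspect its listening sockets. This is a verification sketch that assumes the ss utility, which ships with Debian 12:

sudo ss -tlnp | grep apache2

Output showing a local address such as 0.0.0.0:80 or *:80 indicates that Apache accepts traffic for any destination IP address, which includes the forwarding rule IP used in this example (10.1.2.99).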

For instructional simplicity, all primary and backup VMs run Debian GNU/Linux 12.

Console

Create backend VMs

  1. In the Google Cloud console, go to the VM instances page.

    Go to VM instances

  2. Repeat the following steps to create four VMs, using the following name and zone combinations.

    • Name: vm-a1, zone: us-west1-a
    • Name: vm-a2, zone: us-west1-a
    • Name: vm-c1, zone: us-west1-c
    • Name: vm-c2, zone: us-west1-c
  3. Click Create instance.

  4. Set the Name as indicated in step 2.

  5. For the Region, choose us-west1, and choose a Zone as indicated in step 2.

  6. In the Boot disk section, ensure that the selected image is Debian GNU/Linux 12 (bookworm). Click Choose to change the image if necessary.

  7. Click Advanced options.

  8. Click Networking and configure the following fields:

    1. For Network tags, enter allow-health-check and allow-ssh.
    2. For Network interfaces, select the following:
      • Network: lb-network
      • Subnet: lb-subnet
  9. Click Management. Enter the following script into the Startup script field. The script contents are identical for all four VMs:

    #! /bin/bash
    apt-get update
    apt-get install apache2 -y
    a2ensite default-ssl
    a2enmod ssl
    vm_hostname="$(curl -H "Metadata-Flavor:Google" \
    http://metadata.google.internal/computeMetadata/v1/instance/name)"
    echo "Page served from: $vm_hostname" | \
    tee /var/www/html/index.html
    systemctl restart apache2
  10. Click Create.

Create instance groups

  1. In the Google Cloud console, go to the Instance groups page.

    Go to Instance groups

  2. Repeat the following steps to create two unmanaged instance groups, each with two VMs in them, using these combinations.

    • Instance group: ig-a, zone: us-west1-a, VMs: vm-a1 and vm-a2
    • Instance group: ig-c, zone: us-west1-c, VMs: vm-c1 and vm-c2
  3. Click Create instance group.

  4. Click New unmanaged instance group.

  5. Set Name as indicated in step 2.

  6. In the Location section, choose us-west1 for the Region, and then choose a Zone as indicated in step 2.

  7. For Network, enter lb-network.

  8. For Subnetwork, enter lb-subnet.

  9. In the VM instances section, add the VMs as indicated in step 2.

  10. Click Create.

gcloud

  1. Create four VMs by running the following command four times, using these four combinations for VM_NAME and ZONE. The script contents are identical for all four VMs.

    • VM_NAME of vm-a1 and ZONE of us-west1-a
    • VM_NAME of vm-a2 and ZONE of us-west1-a
    • VM_NAME of vm-c1 and ZONE of us-west1-c
    • VM_NAME of vm-c2 and ZONE of us-west1-c

    gcloud compute instances create VM_NAME \
        --zone=ZONE \
        --image-family=debian-12 \
        --image-project=debian-cloud \
        --tags=allow-ssh,allow-health-check \
        --subnet=lb-subnet \
        --metadata=startup-script='#! /bin/bash
          apt-get update
          apt-get install apache2 -y
          a2ensite default-ssl
          a2enmod ssl
          vm_hostname="$(curl -H "Metadata-Flavor:Google" \
          http://metadata.google.internal/computeMetadata/v1/instance/name)"
          echo "Page served from: $vm_hostname" | \
          tee /var/www/html/index.html
          systemctl restart apache2'
  2. Create the two unmanaged instance groups, one in each zone:

    gcloud compute instance-groups unmanaged create ig-a \
        --zone=us-west1-a

    gcloud compute instance-groups unmanaged create ig-c \
        --zone=us-west1-c
  3. Add the VMs to the appropriate instance groups:

    gcloud compute instance-groups unmanaged add-instances ig-a \
        --zone=us-west1-a \
        --instances=vm-a1,vm-a2

    gcloud compute instance-groups unmanaged add-instances ig-c \
        --zone=us-west1-c \
        --instances=vm-c1,vm-c2

API

Create four backend VMs by making four POST requests to the instances.insert method.

For the four VMs, use the following VM names and zones:

  • VM_NAME of vm-a1 and ZONE of us-west1-a
  • VM_NAME of vm-a2 and ZONE of us-west1-a
  • VM_NAME of vm-c1 and ZONE of us-west1-c
  • VM_NAME of vm-c2 and ZONE of us-west1-c

Replace the following:

  • PROJECT_ID: your project ID
  • ZONE: the zone of the instance
  • DEBIAN_IMAGE_NAME: the name of the Debian image for the instance. The current DEBIAN_IMAGE_NAME can be obtained by running the following gcloud command:

    gcloud compute images list \
        --filter="family=debian-12"

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances

{
  "name": "VM_NAME",
  "tags": {
    "items": [
      "allow-health-check",
      "allow-ssh"
    ]
  },
  "machineType": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/machineTypes/e2-standard-2",
  "canIpForward": false,
  "networkInterfaces": [
    {
      "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
      "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
      "accessConfigs": [
        {
          "type": "ONE_TO_ONE_NAT",
          "name": "external-nat",
          "networkTier": "PREMIUM"
        }
      ]
    }
  ],
  "disks": [
    {
      "type": "PERSISTENT",
      "boot": true,
      "mode": "READ_WRITE",
      "autoDelete": true,
      "deviceName": "VM_NAME",
      "initializeParams": {
        "sourceImage": "projects/debian-cloud/global/images/DEBIAN_IMAGE_NAME",
        "diskType": "projects/PROJECT_ID/zones/ZONE/diskTypes/pd-standard",
        "diskSizeGb": "10"
      }
    }
  ],
  "metadata": {
    "items": [
      {
        "key": "startup-script",
        "value": "#! /bin/bash\napt-get update\napt-get install apache2 -y\na2ensite default-ssl\na2enmod ssl\nvm_hostname=\"$(curl -H \"Metadata-Flavor:Google\" \\\nhttp://metadata.google.internal/computeMetadata/v1/instance/name)\"\necho \"Page served from: $vm_hostname\" | \\\ntee /var/www/html/index.html\nsystemctl restart apache2"
      }
    ]
  },
  "scheduling": {
    "preemptible": false
  },
  "deletionProtection": false
}

Create two instance groups by making a POST request to the instanceGroups.insert method. Replace PROJECT_ID with your Google Cloud project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instanceGroups

{
  "name": "ig-a",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet"
}

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-c/instanceGroups

{
  "name": "ig-c",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet"
}

Add instances to each instance group by making a POST request to the instanceGroups.addInstances method. Replace PROJECT_ID with your Google Cloud project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instanceGroups/ig-a/addInstances

{
  "instances": [
    {
      "instance": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instances/vm-a1"
    },
    {
      "instance": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instances/vm-a2"
    }
  ]
}

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-c/instanceGroups/ig-c/addInstances

{
  "instances": [
    {
      "instance": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-c/instances/vm-c1"
    },
    {
      "instance": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-c/instances/vm-c2"
    }
  ]
}

Creating a client VM

This example creates a client VM (vm-client) in the same region as the load balancer. The client is used to demonstrate how failover works.

Console

  1. In the Google Cloud console, go to the VM instances page.

    Go to VM instances

  2. Click Create instance.

  3. Set the Name to vm-client.

  4. Set the Zone to us-west1-a.

  5. Click Advanced options.

  6. Click Networking and configure the following fields:

    1. For Network tags, enter allow-ssh.
    2. For Network interfaces, select the following:
      • Network: lb-network
      • Subnet: lb-subnet
  7. Click Create.

gcloud

The client VM can be in any zone in the same region as the load balancer, and it can use any subnet in that region. In this example, the client is in the us-west1-a zone, and it uses the same subnet used by the primary and backup VMs.

gcloud compute instances create vm-client \
    --zone=us-west1-a \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --tags=allow-ssh \
    --subnet=lb-subnet

API

Make a POST request to the instances.insert method.

Replace the following:

  • PROJECT_ID: your project ID
  • DEBIAN_IMAGE_NAME: the name of the Debian image for the instance. The current DEBIAN_IMAGE_NAME can be obtained by running the following gcloud command:

    gcloud compute images list \
        --filter="family=debian-12"

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instances

{
  "name": "vm-client",
  "tags": {
    "items": [
      "allow-ssh"
    ]
  },
  "machineType": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/machineTypes/e2-standard-2",
  "canIpForward": false,
  "networkInterfaces": [
    {
      "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
      "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
      "accessConfigs": [
        {
          "type": "ONE_TO_ONE_NAT",
          "name": "external-nat",
          "networkTier": "PREMIUM"
        }
      ]
    }
  ],
  "disks": [
    {
      "type": "PERSISTENT",
      "boot": true,
      "mode": "READ_WRITE",
      "autoDelete": true,
      "deviceName": "vm-client",
      "initializeParams": {
        "sourceImage": "projects/debian-cloud/global/images/DEBIAN_IMAGE_NAME",
        "diskType": "projects/PROJECT_ID/zones/us-west1-a/diskTypes/pd-standard",
        "diskSizeGb": "10"
      }
    }
  ],
  "scheduling": {
    "preemptible": false
  },
  "deletionProtection": false
}

Configuring load balancer components

These steps configure all of the internal passthrough Network Load Balancer components, starting with the health check and backend service, and then the frontend components:

  • Health check: This example uses an HTTP health check that simply checks for an HTTP 200 (OK) response. For more information, see the health checks section of the internal passthrough Network Load Balancer overview.

  • Backend service: Because the example passes HTTP traffic through the load balancer, the configuration specifies TCP, not UDP. To illustrate failover, this backend service has a failover ratio of 0.75.

  • Forwarding rule: This example creates a single internal forwarding rule.

  • Internal IP address: In this example, we specify an internal IP address, 10.1.2.99, when we create the forwarding rule.
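
As a worked example of the failover ratio: the primary backend in this guide has two VMs, so the ratio of healthy primary VMs to total primary VMs is 1.0 when both are healthy and 0.5 when one fails. Because 0.5 is less than 0.75, the failure of a single primary VM is enough to shift the active pool to the failover backend, as demonstrated later in the testing section.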

Console

Start your configuration

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. Click Create load balancer.
  3. For Type of load balancer, select Network Load Balancer (TCP/UDP/SSL) and click Next.
  4. For Proxy or passthrough, select Passthrough load balancer and click Next.
  5. For Public facing or internal, select Internal and click Next.
  6. Click Configure.

Basic configuration

  1. Set the Name to be-ilb.
  2. Set Region to us-west1.
  3. Set Network to lb-network.
  4. Click Backend configuration and make the following changes:
    1. For Backends, in the New item section, select the ig-a instance group. Ensure that Use this instance group as a failover group for backup is not checked. Click Done.
    2. Click Add backend. In the New item section that appears, select the ig-c instance group. Check Use this instance group as a failover group for backup. Click Done.
    3. For Health check, choose Create another health check, enter the following information, and click Save and continue:
      • Name: hc-http-80
      • Protocol: HTTP
      • Port: 80
      • Proxy protocol: NONE
      • Request path: /

      Note: When you use the Google Cloud console to create your load balancer, the health check is global. If you want to create a regional health check, use gcloud or the API.
    4. Click Advanced configurations. In the Failover policy section, configure the following:
      • Failover ratio: 0.75
      • Check Enable connection draining on failover.
    5. Verify that there is a blue check mark next to Backend configuration before continuing. If not, review this step.
  5. Click Frontend configuration. In the New Frontend IP and port section, make the following changes:
    1. Name: fr-ilb
    2. Subnetwork: lb-subnet
    3. From Internal IP, choose Reserve a static internal IP address, enter the following information, and click Reserve:
      • Name: ip-ilb
      • Static IP address: Let me choose
      • Custom IP address: 10.1.2.99
    4. Ports: Choose Single, and enter 80 for the Port number.
    5. Verify that there is a blue check mark next to Frontend configuration before continuing. If not, review this step.
  6. Click Review and finalize. Double-check your settings.
  7. Click Create.

gcloud

  1. Create a new HTTP health check to test connectivity to the VMs on TCP port 80.

    gcloud compute health-checks create http hc-http-80 \
        --region=us-west1 \
        --port=80
  2. Create the backend service for HTTP traffic:

    gcloud compute backend-services create be-ilb \
        --load-balancing-scheme=internal \
        --protocol=tcp \
        --region=us-west1 \
        --health-checks=hc-http-80 \
        --health-checks-region=us-west1 \
        --failover-ratio 0.75
  3. Add the primary backend to the backend service:

    gcloud compute backend-services add-backend be-ilb \
        --region=us-west1 \
        --instance-group=ig-a \
        --instance-group-zone=us-west1-a
  4. Add the failover backend to the backend service:

    gcloud compute backend-services add-backend be-ilb \
        --region=us-west1 \
        --instance-group=ig-c \
        --instance-group-zone=us-west1-c \
        --failover
  5. Create a forwarding rule for the backend service. When you create the forwarding rule, specify 10.1.2.99 for the internal IP address in the subnet.

    gcloud compute forwarding-rules create fr-ilb \
        --region=us-west1 \
        --load-balancing-scheme=internal \
        --network=lb-network \
        --subnet=lb-subnet \
        --address=10.1.2.99 \
        --ip-protocol=TCP \
        --ports=80 \
        --backend-service=be-ilb \
        --backend-service-region=us-west1

API

Create the health check by making a POST request to the regionHealthChecks.insert method. Replace PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/regionHealthChecks

{
  "name": "hc-http-80",
  "type": "HTTP",
  "httpHealthCheck": {
    "port": 80
  }
}

Create the regional backend service by making a POST request to the regionBackendServices.insert method. Replace PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices

{
  "name": "be-ilb",
  "backends": [
    {
      "group": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instanceGroups/ig-a",
      "balancingMode": "CONNECTION"
    },
    {
      "group": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-c/instanceGroups/ig-c",
      "balancingMode": "CONNECTION",
      "failover": true
    }
  ],
  "failoverPolicy": {
    "failoverRatio": 0.75
  },
  "healthChecks": [
    "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/healthChecks/hc-http-80"
  ],
  "loadBalancingScheme": "INTERNAL",
  "connectionDraining": {
    "drainingTimeoutSec": 0
  }
}

Create the forwarding rule by making a POST request to the forwardingRules.insert method. Replace PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/forwardingRules

{
  "name": "fr-ilb",
  "IPAddress": "10.1.2.99",
  "IPProtocol": "TCP",
  "ports": [
    "80"
  ],
  "loadBalancingScheme": "INTERNAL",
  "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "backendService": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices/be-ilb",
  "networkTier": "PREMIUM"
}

Testing

These tests show how to validate your load balancer configuration and learn about its expected behavior.

Client test procedure

This procedure contacts the load balancer from the client VM. You'll use this procedure to complete the other tests.

  1. Connect to the client VM instance.

    gcloud compute ssh vm-client --zone=us-west1-a
  2. Make a web request to the load balancer using curl to contact its IP address.

    curl http://10.1.2.99
  3. Note the text returned by the curl command. The name of the backend VM generating the response is displayed in that text; for example: Page served from: vm-a1
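
To repeat the request several times in one step, you can wrap the curl command in a small shell loop. This is a convenience sketch; any POSIX shell on the client VM works:

for i in $(seq 1 10); do
  curl --silent http://10.1.2.99
done

Each line of output names the backend VM that served that request, which makes the distribution across backends easy to see.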

Testing initial state

After you've configured the example load balancer, all four of the backend VMs should be healthy:

  • the two primary VMs, vm-a1 and vm-a2
  • the two backup VMs, vm-c1 and vm-c2

Follow the client test procedure. Repeat the second step a few times. The expected behavior is for traffic to be served by the two primary VMs, vm-a1 and vm-a2, because both of them are healthy. You should see each primary VM serve a response approximately half of the time because no session affinity has been configured for this load balancer.
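
You can also confirm backend health directly instead of inferring it from responses. The following command reports the health state of each instance in the be-ilb backend service; at this point, all four instances should report HEALTHY:

gcloud compute backend-services get-health be-ilb \
    --region=us-west1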

Testing failover

This test simulates the failure ofvm-a1 so you can observe failover behavior.

  1. Connect to the vm-a1 VM.

    gcloud compute ssh vm-a1 --zone=us-west1-a
  2. Stop the Apache web server. After ten seconds, Google Cloud considers this VM to be unhealthy. (The hc-http-80 health check that you created in the setup uses the default check interval of five seconds and unhealthy threshold of two consecutive failed probes.)

    sudo apachectl stop
  3. Follow the client test procedure. Repeat the second step a few times. The expected behavior is for traffic to be served by the two backup VMs, vm-c1 and vm-c2. Because only one primary VM, vm-a2, is healthy, the ratio of healthy primary VMs to total primary VMs is 0.5. This number is less than the failover threshold of 0.75, so Google Cloud reconfigured the load balancer's active pool to use the backup VMs. You should see each backup VM serve a response approximately half of the time as long as no session affinity has been configured for this load balancer.

Testing failback

This test simulates failback by restarting the Apache server on vm-a1.

  1. Connect to the vm-a1 VM.

    gcloud compute ssh vm-a1 --zone=us-west1-a
  2. Start the Apache web server and wait 10 seconds.

    sudo apachectl start
  3. Follow the client test procedure. Repeat the second step a few times. The expected behavior is for traffic to be served by the two primary VMs, vm-a1 and vm-a2. With both primary VMs being healthy, the ratio of healthy primary VMs to total primary VMs is 1.0, greater than the failover threshold of 0.75, so Google Cloud configured the active pool to use the primary VMs again.

Adding more backend VMs

This section extends the example configuration by adding more primary and backup VMs to the load balancer. It does so by creating two more backend instance groups to demonstrate that you can distribute primary and backup VMs among multiple zones in the same region:

  • A third instance group, ig-d in us-west1-c, serves as a primary backend with two VMs:
    • vm-d1
    • vm-d2
  • A fourth instance group, ig-b in us-west1-a, serves as a failover backend with two VMs:
    • vm-b1
    • vm-b2

The modified architecture for this example looks like this:

Multi-zone internal passthrough Network Load Balancer failover.

Create additional VMs and instance groups

Follow these steps to create the additional primary and backup VMs and their corresponding unmanaged instance groups.

Console

Create backend VMs

  1. In the Google Cloud console, go to the VM instances page.

    Go to VM instances

  2. Repeat the following steps to create four VMs, using the following name and zone combinations.

    • Name: vm-b1, zone: us-west1-a
    • Name: vm-b2, zone: us-west1-a
    • Name: vm-d1, zone: us-west1-c
    • Name: vm-d2, zone: us-west1-c
  3. Click Create instance.

  4. Set the Name as indicated in step 2.

  5. For the Region, choose us-west1, and choose a Zone as indicated in step 2.

  6. In the Boot disk section, ensure that the selected image is Debian GNU/Linux 12 (bookworm). Click Choose to change the image if necessary.

  7. Click Advanced options and make the following changes:

    • Click Networking and add the following Network tags: allow-ssh and allow-health-check.
    • Click the edit button under Network interfaces, make the following changes, and then click Done:
      • Network: lb-network
      • Subnet: lb-subnet
      • Primary internal IP: Ephemeral (automatic)
      • External IP: Ephemeral
    • Click Management. In the Startup script field, copy and paste the following script contents. The script contents are identical for all four VMs:

      #! /bin/bash
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://metadata.google.internal/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | \
      tee /var/www/html/index.html
      systemctl restart apache2
  8. Click Create.

Create instance groups

  1. In the Google Cloud console, go to the Instance groups page.

    Go to Instance groups

  2. Repeat the following steps to create two unmanaged instance groups, each with two VMs in them, using these combinations.

    • Instance group: ig-b, zone: us-west1-a, VMs: vm-b1 and vm-b2
    • Instance group: ig-d, zone: us-west1-c, VMs: vm-d1 and vm-d2
  3. Click Create instance group.

  4. Click New unmanaged instance group.

  5. Set Name as indicated in step 2.

  6. In the Location section, choose us-west1 for the Region, and then choose a Zone as indicated in step 2.

  7. For Network, enter lb-network.

  8. For Subnetwork, enter lb-subnet.

  9. In the VM instances section, add the VMs as indicated in step 2.

  10. Click Create.

gcloud

  1. Create four VMs by running the following command four times, using these four combinations for VM_NAME and ZONE. The script contents are identical for all four VMs.

    • VM_NAME of vm-b1 and ZONE of us-west1-a
    • VM_NAME of vm-b2 and ZONE of us-west1-a
    • VM_NAME of vm-d1 and ZONE of us-west1-c
    • VM_NAME of vm-d2 and ZONE of us-west1-c

    gcloud compute instances create VM_NAME \
        --zone=ZONE \
        --image-family=debian-12 \
        --image-project=debian-cloud \
        --tags=allow-ssh,allow-health-check \
        --subnet=lb-subnet \
        --metadata=startup-script='#! /bin/bash
          apt-get update
          apt-get install apache2 -y
          a2ensite default-ssl
          a2enmod ssl
          vm_hostname="$(curl -H "Metadata-Flavor:Google" \
          http://metadata.google.internal/computeMetadata/v1/instance/name)"
          echo "Page served from: $vm_hostname" | \
          tee /var/www/html/index.html
          systemctl restart apache2'
  2. Create the two unmanaged instance groups, one in each zone:

    gcloud compute instance-groups unmanaged create ig-b \
        --zone=us-west1-a

    gcloud compute instance-groups unmanaged create ig-d \
        --zone=us-west1-c
  3. Add the VMs to the appropriate instance groups:

    gcloud compute instance-groups unmanaged add-instances ig-b \
        --zone=us-west1-a \
        --instances=vm-b1,vm-b2

    gcloud compute instance-groups unmanaged add-instances ig-d \
        --zone=us-west1-c \
        --instances=vm-d1,vm-d2

API

Create four backend VMs by making four POST requests to the instances.insert method.

For the four VMs, use the following VM names and zones:

  • VM_NAME of vm-b1 and ZONE of us-west1-a
  • VM_NAME of vm-b2 and ZONE of us-west1-a
  • VM_NAME of vm-d1 and ZONE of us-west1-c
  • VM_NAME of vm-d2 and ZONE of us-west1-c

Replace the following:

  • PROJECT_ID: your project ID
  • DEBIAN_IMAGE_NAME: the name of the Debian image for the instance. The current DEBIAN_IMAGE_NAME can be obtained by running the following gcloud command:

    gcloud compute images list \
        --filter="family=debian-12"

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances

{
  "name": "VM_NAME",
  "tags": {
    "items": [
      "allow-health-check",
      "allow-ssh"
    ]
  },
  "machineType": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/machineTypes/e2-standard-2",
  "canIpForward": false,
  "networkInterfaces": [
    {
      "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
      "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
      "accessConfigs": [
        {
          "type": "ONE_TO_ONE_NAT",
          "name": "external-nat",
          "networkTier": "PREMIUM"
        }
      ]
    }
  ],
  "disks": [
    {
      "type": "PERSISTENT",
      "boot": true,
      "mode": "READ_WRITE",
      "autoDelete": true,
      "deviceName": "VM_NAME",
      "initializeParams": {
        "sourceImage": "projects/debian-cloud/global/images/DEBIAN_IMAGE_NAME",
        "diskType": "projects/PROJECT_ID/zones/ZONE/diskTypes/pd-standard",
        "diskSizeGb": "10"
      }
    }
  ],
  "metadata": {
    "items": [
      {
        "key": "startup-script",
        "value": "#! /bin/bash\napt-get update\napt-get install apache2 -y\na2ensite default-ssl\na2enmod ssl\nvm_hostname=\"$(curl -H \"Metadata-Flavor:Google\" \\\nhttp://metadata.google.internal/computeMetadata/v1/instance/name)\"\necho \"Page served from: $vm_hostname\" | \\\ntee /var/www/html/index.html\nsystemctl restart apache2"
      }
    ]
  },
  "scheduling": {
    "preemptible": false
  },
  "deletionProtection": false
}

Create two instance groups by making a POST request to the instanceGroups.insert method. Replace PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instanceGroups

{
  "name": "ig-b",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet"
}

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-c/instanceGroups

{
  "name": "ig-d",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet"
}

Add instances to each instance group by making a POST request to the instanceGroups.addInstances method. Replace PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instanceGroups/ig-b/addInstances

{
  "instances": [
    {
      "instance": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instances/vm-b1"
    },
    {
      "instance": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instances/vm-b2"
    }
  ]
}

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-c/instanceGroups/ig-d/addInstances

{
  "instances": [
    {
      "instance": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-c/instances/vm-d1"
    },
    {
      "instance": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-c/instances/vm-d2"
    }
  ]
}

Adding a primary backend

You can use this procedure as a template for how to add an unmanaged instance group to an existing internal passthrough Network Load Balancer's backend service as a primary backend. For the example configuration, this procedure shows you how to add instance group ig-d as a primary backend to the be-ilb load balancer.

Console

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. In the Load balancers tab, click the name of an existing internal TCP or internal UDP load balancer (in this example, be-ilb).

  3. Click Edit.

  4. In the Backend configuration, click Add backend and select an unmanaged instance group (in this example, ig-d).

  5. Ensure that Use this instance group as a failover group for backup is not checked.

  6. Click Done and then click Update.

gcloud

Use the following gcloud command to add a primary backend to an existing internal passthrough Network Load Balancer's backend service.

gcloud compute backend-services add-backend BACKEND_SERVICE_NAME \
    --instance-group INSTANCE_GROUP_NAME \
    --instance-group-zone INSTANCE_GROUP_ZONE \
    --region REGION

Replace the following:

  • BACKEND_SERVICE_NAME: the name of the load balancer's backend service. For the example, use be-ilb.
  • INSTANCE_GROUP_NAME: the name of the instance group to add as a primary backend. For the example, use ig-d.
  • INSTANCE_GROUP_ZONE: the zone where the instance group is defined. For the example, use us-west1-c.
  • REGION: the region of the load balancer. For the example, use us-west1.
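
With the example values substituted, the command looks like this:

gcloud compute backend-services add-backend be-ilb \
    --instance-group ig-d \
    --instance-group-zone us-west1-c \
    --region us-west1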

API

Add a primary backend to an existing backend service with the regionBackendServices.patch method.

PATCH https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/backendServices/BACKEND_SERVICE_NAME

{
  "backends": [
    {
      "balancingMode": "CONNECTION",
      "failover": false,
      "group": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/INSTANCE_GROUP_ZONE/instanceGroups/INSTANCE_GROUP_NAME"
    }
  ]
}

Replace the following:

  • PROJECT_ID: your project ID
  • REGION: the region of the load balancer. For the example, use us-west1.
  • BACKEND_SERVICE_NAME: the name of the load balancer's backend service. For the example, use be-ilb.
  • INSTANCE_GROUP_NAME: the name of the instance group to add as a primary backend. For the example, use ig-d.
  • INSTANCE_GROUP_ZONE: the zone where the instance group is defined. For the example, use us-west1-c.

Adding a failover backend

You can use this procedure as a template for how to add an unmanaged instance group to an existing internal passthrough Network Load Balancer's backend service as a failover backend. For the example configuration, this procedure shows you how to add instance group ig-b as a failover backend to the be-ilb load balancer.

Console

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. In the Load balancers tab, click the name of an existing load balancer of type TCP/UDP (Internal) (in this example, be-ilb).

  3. Click Edit.

  4. In the Backend configuration, click Add backend and select an unmanaged instance group (in this example, ig-b).

  5. Check Use this instance group as a failover group for backup.

  6. Click Done and then click Update.

gcloud

Use the following gcloud command to add a failover backend to an existing internal passthrough Network Load Balancer's backend service.

gcloud compute backend-services add-backend BACKEND_SERVICE_NAME \
    --instance-group INSTANCE_GROUP_NAME \
    --instance-group-zone INSTANCE_GROUP_ZONE \
    --region REGION \
    --failover

Replace the following:

  • BACKEND_SERVICE_NAME: the name of the load balancer's backend service. For the example, use be-ilb.
  • INSTANCE_GROUP_NAME: the name of the instance group to add as a failover backend. For the example, use ig-b.
  • INSTANCE_GROUP_ZONE: the zone where the instance group is defined. For the example, use us-west1-a.
  • REGION: the region of the load balancer. For the example, use us-west1.
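
With the example values substituted, the command looks like this:

gcloud compute backend-services add-backend be-ilb \
    --instance-group ig-b \
    --instance-group-zone us-west1-a \
    --region us-west1 \
    --failover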

API

Add a failover backend to an existing backend service with the regionBackendServices.patch method.

PATCH https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/backendServices/BACKEND_SERVICE_NAME

{
  "backends": [
    {
      "balancingMode": "CONNECTION",
      "failover": true,
      "group": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/INSTANCE_GROUP_ZONE/instanceGroups/INSTANCE_GROUP_NAME"
    }
  ]
}

Replace the following:

  • PROJECT_ID: your project ID
  • BACKEND_SERVICE_NAME: the name of the load balancer's backend service. For the example, use be-ilb.
  • INSTANCE_GROUP_NAME: the name of the instance group to add as a failover backend. For the example, use ig-b.
  • INSTANCE_GROUP_ZONE: the zone where the instance group is defined. For the example, use us-west1-a.
  • REGION: the region of the load balancer. For the example, use us-west1.

Converting a primary or failover backend

You can convert a primary backend to a failover backend, or vice versa, without having to remove the instance group from the internal passthrough Network Load Balancer's backend service.

Console

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. In the Load balancers tab, click the name of an existing load balancer of type TCP/UDP (Internal).

  3. Click Edit.

  4. In the Backend configuration, click the name of one of the backend instance groups. Then:

    • To make the instance group a failover backend, check Use this instance group as a failover group for backup.
    • To make the instance group a primary backend, uncheck Use this instance group as a failover group for backup.
  5. Click Done and then click Update.

gcloud

Use the following gcloud command to convert an existing primary backend to a failover backend:

gcloud compute backend-services update-backend BACKEND_SERVICE_NAME \
    --instance-group INSTANCE_GROUP_NAME \
    --instance-group-zone INSTANCE_GROUP_ZONE \
    --region REGION \
    --failover

Use the following gcloud command to convert an existing failover backend to a primary backend:

gcloud compute backend-services update-backend BACKEND_SERVICE_NAME \
    --instance-group INSTANCE_GROUP_NAME \
    --instance-group-zone INSTANCE_GROUP_ZONE \
    --region REGION \
    --no-failover

Replace the following:

  • BACKEND_SERVICE_NAME: the name of the load balancer's backend service
  • INSTANCE_GROUP_NAME: the name of the backend instance group to convert
  • INSTANCE_GROUP_ZONE: the zone where the instance group is defined
  • REGION: the region of the load balancer
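
For instance, converting the example configuration's ig-c failover backend into a primary backend would look like the following. This is an illustrative sketch only; it is not part of the main walkthrough, and running it would change the failover behavior demonstrated in the testing section:

gcloud compute backend-services update-backend be-ilb \
    --instance-group ig-c \
    --instance-group-zone us-west1-c \
    --region us-west1 \
    --no-failover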

API

Convert a primary backend to a failover backend, or vice versa, by using the regionBackendServices.patch method.

To convert a primary backend to a failover backend:

PATCH https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/backendServices/BACKEND_SERVICE_NAME

{
  "backends": [
    {
      "failover": true,
      "group": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/INSTANCE_GROUP_ZONE/instanceGroups/INSTANCE_GROUP_NAME"
    }
  ]
}

To convert a failover backend to a primary backend:

PATCH https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/backendServices/BACKEND_SERVICE_NAME

{
  "backends": [
    {
      "failover": false,
      "group": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/INSTANCE_GROUP_ZONE/instanceGroups/INSTANCE_GROUP_NAME"
    }
  ]
}

Replace the following:

  • PROJECT_ID: your project ID
  • BACKEND_SERVICE_NAME: the name of the load balancer's backend service
  • INSTANCE_GROUP_NAME: the name of the backend instance group to convert
  • INSTANCE_GROUP_ZONE: the zone where the instance group is defined
  • REGION: the region of the load balancer

Configuring failover policies

This section describes how to manage a failover policy for an internal passthrough Network Load Balancer's backend service. A failover policy consists of the following:

  • Failover ratio
  • Dropping traffic when all backend VMs are unhealthy
  • Connection draining on failover

For more information about the parameters of a failover policy, see the failover overview for internal passthrough Network Load Balancers.

Defining a failover policy

The following instructions describe how to define the failover policy for an existing internal passthrough Network Load Balancer.

Console

To define a failover policy using the Google Cloud console, you must have at least one failover backend.

  1. In the Google Cloud console, go to theLoad balancing page.

    Go to Load balancing

  2. From the Load balancers tab, click the name of an existing load balancer of type TCP/UDP (Internal).

  3. Click Edit.

  4. Make sure that you have at least one failover backend. At least one of the load balancer's backends must have the Use this instance group as a failover group for backup option selected.

  5. Click Advanced configurations.

    • For Failover policy, set the Failover ratio to a value between 0.0 and 1.0, inclusive.
    • Check the box next to Enable drop traffic if you want to drop traffic when all active VMs and all backup VMs are unhealthy.
    • Check the box next to Enable connection draining on failover if you want existing TCP connections to persist during failover, rather than being terminated quickly.
  6. Click Review and finalize and then click Update.

gcloud

To define a failover policy using the gcloud CLI, update the load balancer's backend service:

gcloud compute backend-services update BACKEND_SERVICE_NAME \
    --region REGION \
    --failover-ratio FAILOVER_RATIO \
    --drop-traffic-if-unhealthy \
    --no-connection-drain-on-failover

Replace the following:

  • BACKEND_SERVICE_NAME: the name of the load balancer's backend service. For the example, use be-ilb.
  • REGION: the region of the load balancer. For the example, use us-west1.
  • FAILOVER_RATIO: the failover ratio. Possible values are between 0.0 and 1.0, inclusive. For the example, use 0.75.
  • --drop-traffic-if-unhealthy instructs the load balancer to drop traffic when all primary VMs and all backup VMs are unhealthy. Change this to --no-drop-traffic-if-unhealthy if you want to distribute traffic among all primary VMs when all backend VMs are unhealthy.
  • --no-connection-drain-on-failover instructs the load balancer to terminate existing TCP connections quickly during failover. Use --connection-drain-on-failover to enable connection draining during failover.
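
With the example values substituted, the command looks like this:

gcloud compute backend-services update be-ilb \
    --region us-west1 \
    --failover-ratio 0.75 \
    --drop-traffic-if-unhealthy \
    --no-connection-drain-on-failover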

API

Use the regionBackendServices.patch method to define the failover policy.

PATCH https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/backendServices/BACKEND_SERVICE_NAME

{
  "failoverPolicy": {
    "failoverRatio": FAILOVER_RATIO,
    "dropTrafficIfUnhealthy": [true|false],
    "disableConnectionDrainOnFailover": [true|false]
  }
}

Replace the following:

  • PROJECT_ID: your project ID
  • REGION: the region of the load balancer
  • BACKEND_SERVICE_NAME: the name of the load balancer's backend service
  • FAILOVER_RATIO: the failover ratio. Possible values are between 0.0 and 1.0, inclusive.
  • Setting dropTrafficIfUnhealthy to true instructs the load balancer to drop traffic when all primary VMs and all backup VMs are unhealthy. Set this to false if you want to distribute traffic among all primary VMs when all backend VMs are unhealthy.
  • Setting disableConnectionDrainOnFailover to true instructs the load balancer to terminate existing TCP connections quickly when doing a failover. Set this to false to enable connection draining during failover.

Viewing a failover policy

The following instructions describe how to view the existing failover policy for an internal passthrough Network Load Balancer.

Console

The Google Cloud console shows the existing failover policy settings when you edit an internal passthrough Network Load Balancer. Refer to defining a failover policy for instructions.

gcloud

To list the failover policy settings using the gcloud CLI, use the following command. Undefined settings in a failover policy use the default failover policy values.

gcloud compute backend-services describe BACKEND_SERVICE_NAME \
    --region REGION \
    --format="get(failoverPolicy)"

Replace the following:

  • BACKEND_SERVICE_NAME: the name of the load balancer's backend service
  • REGION: the region of the load balancer
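
For the example load balancer, the command looks like this:

gcloud compute backend-services describe be-ilb \
    --region us-west1 \
    --format="get(failoverPolicy)"

The output is the failoverPolicy block of the backend service; the exact formatting depends on your gcloud CLI version.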

API

Use the regionBackendServices.get method to view the failover policy.

The response to the API request shows the failover policy. An example is shown below.

GET https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/backendServices/BACKEND_SERVICE_NAME

Replace the following:

  • PROJECT_ID: your project ID
  • REGION: the region of the load balancer
  • BACKEND_SERVICE_NAME: the name of the load balancer's backend service

{
  ...
  "failoverPolicy": {
    "disableConnectionDrainOnFailover": false,
    "dropTrafficIfUnhealthy": false,
    "failoverRatio": 0.75
  },
  ...
}
