Migrate an external passthrough Network Load Balancer from target pool to backend service

This guide provides instructions for migrating an existing external passthrough Network Load Balancer from a target pool backend to a regional backend service.

Moving to a regional backend service allows you to take advantage of features such as non-legacy health checks (for TCP, SSL, HTTP, HTTPS, and HTTP/2), managed instance groups, connection draining, and failover policy.



This guide walks you through migrating the following sample target pool-based external passthrough Network Load Balancer to use a regional backend service instead.

Before: External passthrough Network Load Balancer with a target pool

Your resulting backend service-based external passthrough Network Load Balancer deployment will look like this.

After: External passthrough Network Load Balancer with a regional backend service

This example assumes that you have a traditional target pool-based external passthrough Network Load Balancer with two instances in zone us-central1-a and two instances in zone us-central1-c.

The high-level steps required for such a transition are as follows:

  1. Group your target pool instances into instance groups.

    Backend services only work with managed or unmanaged instance groups. While there is no limit on the number of instances that can be placed into a single target pool, instance groups do have a maximum size. If your target pool has more than this maximum number of instances, you need to split its backends across multiple instance groups.

    If your existing deployment includes a backup target pool, create a separate instance group for those instances. This instance group is configured as a failover group.

  2. Create a regional backend service.

    If your deployment includes a backup target pool, you need to specify a failover ratio while creating the backend service. This should match the failover ratio previously configured for the target pool deployment.

  3. Add instance groups (created previously) to the backend service.

    If your deployment includes a backup target pool, mark the corresponding failover instance group with the --failover flag when adding it to the backend service.

  4. Configure a forwarding rule that points to the new backend service.

    You can choose one of the following options:

    • Update the existing forwarding rule to point to the backend service (recommended).

    • Create a new forwarding rule that points to the backend service. This requires you to create a new IP address for the load balancer's frontend. You then modify your DNS settings to seamlessly transition from the old target pool-based load balancer's IP address to the new IP address.
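Step 1 above notes that instance groups have a maximum size, so a large target pool may need to be split across several groups. A quick way to work out how many groups you need, using ceiling division in bash (both numbers below are hypothetical placeholders; check the current documented instance group limit for the real value):

```shell
# Sketch: number of instance groups needed for a large target pool.
# Both values are hypothetical; substitute your instance count and the
# documented per-group limit.
INSTANCE_COUNT=2500   # hypothetical number of instances in the target pool
MAX_GROUP_SIZE=1000   # hypothetical instance group size limit
# Ceiling division: round up so no instance is left without a group.
GROUPS_NEEDED=$(( (INSTANCE_COUNT + MAX_GROUP_SIZE - 1) / MAX_GROUP_SIZE ))
echo "$GROUPS_NEEDED"   # 3 groups for this example
```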

Before you begin

Install the Google Cloud CLI. For a complete overview of the tool, see the gcloud Tool Guide. You can find commands related to load balancing in the gcloud compute command group.

If you haven't run the Google Cloud CLI previously, first run gcloud init to authenticate.

This guide assumes that you are familiar with bash.

Identify the backends and forwarding rule to migrate

  1. To list all the target pools, run the following command in Cloud Shell:

    gcloud compute target-pools list

    Note the name of the target pool to migrate from. This name is referred to later as TARGET_POOL_NAME.

  2. To list all the VM instances in the target pool TARGET_POOL_NAME, run the following command in Cloud Shell:

    gcloud compute target-pools describe TARGET_POOL_NAME \
        --region=us-central1

    Note the names of the VM instances. These names are referred to later as BACKEND_INSTANCE1, BACKEND_INSTANCE2, BACKEND_INSTANCE3, and BACKEND_INSTANCE4.

  3. To list the forwarding rules in the external passthrough Network Load Balancer, run the following command in Cloud Shell:

    gcloud compute forwarding-rules list \
        --filter="target:(TARGET_POOL_NAME)"

    Note the name of the forwarding rule. This name is referred to later as FORWARDING_RULE.

Creating the zonal unmanaged instance groups

Create a zonal unmanaged instance group for each of the zones in which you have backends. Depending on your setup, you can divide your instances across as many instance groups as needed. For our example, we are only using two instance groups, one for each zone, and placing all the backend VMs in a given zone in the associated instance group.

For this example, we create two instance groups: one in the us-central1-a zone and one in the us-central1-c zone.

Set up the instance groups

Console

  1. In the Google Cloud console, go to the Instance groups page.

    Go to Instance groups

  2. Click Create instance group.
  3. In the left pane, select New unmanaged instance group.
  4. For Name, enter ig-us-1.
  5. For Region, select us-central1.
  6. For Zone, select us-central1-a.
  7. Select Network and Subnetwork depending on where your instances are located. In this example, the existing target pool instances are in the default network and subnetwork.
  8. To add instances to the instance group, in the VM instances section, select the two instances BACKEND_INSTANCE1 and BACKEND_INSTANCE2.
  9. Click Create.
  10. Repeat these steps to create a second instance group with the following specifications:

    • Name: ig-us-2
    • Region: us-central1
    • Zone: us-central1-c

    Add the two instances BACKEND_INSTANCE3 and BACKEND_INSTANCE4 in the us-central1-c zone to this instance group.

  11. If your existing load balancer deployment also has a backup target pool, repeat these steps to create a separate failover instance group for those instances.

gcloud

  1. Create an unmanaged instance group in the us-central1-a zone with the gcloud compute instance-groups unmanaged create command.

    gcloud compute instance-groups unmanaged create ig-us-1 \
        --zone us-central1-a
  2. Create a second unmanaged instance group in the us-central1-c zone.

    gcloud compute instance-groups unmanaged create ig-us-2 \
        --zone us-central1-c
  3. Add instances to the ig-us-1 instance group.

    gcloud compute instance-groups unmanaged add-instances ig-us-1 \
        --instances BACKEND_INSTANCE1,BACKEND_INSTANCE2 \
        --zone us-central1-a
  4. Add instances to the ig-us-2 instance group.

    gcloud compute instance-groups unmanaged add-instances ig-us-2 \
        --instances BACKEND_INSTANCE3,BACKEND_INSTANCE4 \
        --zone us-central1-c
  5. If your existing load balancer deployment also has a backup target pool, repeat these steps to create a separate failover instance group for those instances.

Create a health check

Create a health check to determine the health of the instances in your instance groups. Your existing target pool-based external passthrough Network Load Balancer likely has a legacy HTTP health check associated with it.

You can create a new health check that matches the protocol of the traffic that the load balancer will be distributing. Backend service-based external passthrough Network Load Balancers can use TCP, SSL, HTTP(S), and HTTP/2 health checks.
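Because backend service-based load balancers support more protocols than the legacy HTTP check, you could instead create a TCP health check if the load balancer distributes plain TCP traffic. A sketch, assuming the same region and port as the HTTP example in this guide:

```shell
# Sketch: a regional TCP health check as an alternative to the HTTP one.
# Assumes the same us-central1 region and port 80 used elsewhere in this guide.
gcloud compute health-checks create tcp network-lb-health-check \
    --region us-central1 \
    --port 80
```

Whichever protocol you pick, create only one health check and reference it when you create the backend service.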

Console

  1. In the Google Cloud console, go to the Health checks page.

    Go to Health checks

  2. Click Create health check.
  3. In the Name field, enter network-lb-health-check.
  4. Set Scope to Regional.
  5. For Region, select us-central1.
  6. For Protocol, select HTTP.
  7. For Port, enter 80.
  8. Click Create.

gcloud

  1. For this example, we create a non-legacy HTTP health check to be used with the backend service.

    gcloud compute health-checks create http network-lb-health-check \
        --region us-central1 \
        --port 80

Configure the backend service

Use one of the following sections to create the backend service. If your existing external passthrough Network Load Balancer has a backup target pool, you need to configure a failover ratio while creating the backend service.

You also need to designate the failover instance group with the --failover flag when adding backends to the backend service.
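The failover ratio decides when traffic shifts to the failover group: once the fraction of healthy primary VMs drops below the ratio, traffic is sent to the failover backends. A small sketch of that comparison (the 0.5 ratio matches the example below; the VM counts are illustrative):

```shell
# Sketch: with a failover ratio of 0.5, traffic fails over once fewer than
# half of the primary VMs are healthy. Counts below are illustrative.
HEALTHY=1
TOTAL=4
RATIO=0.5
MODE=$(awk -v h="$HEALTHY" -v t="$TOTAL" -v r="$RATIO" \
    'BEGIN { if (h / t < r) print "failover"; else print "primary" }')
echo "$MODE"   # 1/4 = 0.25 is below 0.5, so traffic goes to the failover group
```

Pick the ratio that matches your old target pool deployment rather than the illustrative values here.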

Deployments without a backup target pool

gcloud

  1. Create a regional backend service in the us-central1 region.

    gcloud compute backend-services create network-lb-backend-service \
        --region us-central1 \
        --health-checks network-lb-health-check \
        --health-checks-region us-central1 \
        --protocol TCP
  2. Add the two instance groups (ig-us-1 and ig-us-2) as backends to the backend service.

    gcloud compute backend-services add-backend network-lb-backend-service \
        --instance-group ig-us-1 \
        --instance-group-zone us-central1-a \
        --region us-central1

    gcloud compute backend-services add-backend network-lb-backend-service \
        --instance-group ig-us-2 \
        --instance-group-zone us-central1-c \
        --region us-central1

Deployments with a backup target pool

gcloud

  1. Create a regional backend service in the us-central1 region. Configure the backend service failover ratio to match the failover ratio previously configured for the target pool.

    gcloud compute backend-services create network-lb-backend-service \
        --region us-central1 \
        --health-checks network-lb-health-check \
        --health-checks-region us-central1 \
        --protocol TCP \
        --failover-ratio 0.5
  2. Add the two instance groups (ig-us-1 and ig-us-2) as backends to the backend service.

    gcloud compute backend-services add-backend network-lb-backend-service \
        --instance-group ig-us-1 \
        --instance-group-zone us-central1-a \
        --region us-central1

    gcloud compute backend-services add-backend network-lb-backend-service \
        --instance-group ig-us-2 \
        --instance-group-zone us-central1-c \
        --region us-central1
  3. If you created a failover instance group, add it to the backend service. Mark this backend with the --failover flag when you add it to the backend service.

    gcloud compute backend-services add-backend network-lb-backend-service \
        --instance-group FAILOVER_INSTANCE_GROUP \
        --instance-group-zone ZONE \
        --region us-central1 \
        --failover

Configure the forwarding rule

You have two options to configure the forwarding rule to direct traffic to the new backend service. You can either update the existing forwarding rule or create a new forwarding rule with a new IP address.

Update the existing forwarding rule (recommended)

Use the set-target command to update the existing forwarding rule to point to the new backend service.

gcloud compute forwarding-rules set-target FORWARDING_RULE \
    --backend-service network-lb-backend-service \
    --region us-central1

Replace FORWARDING_RULE with the name of the existing forwarding rule.

Create a new forwarding rule

If you don't want to update the existing forwarding rule, you can create a new forwarding rule with a new IP address. Because a given IP address can only be associated with a single forwarding rule at a time, you need to manually modify your DNS settings to transition incoming traffic from the old IP address to the new one.

Use the following command to create a new forwarding rule with a new IP address. You can use the --address flag if you want to specify an IP address already reserved in the us-central1 region.

gcloud compute forwarding-rules create network-lb-forwarding-rule \
    --load-balancing-scheme external \
    --region us-central1 \
    --ports 80 \
    --backend-service network-lb-backend-service

Testing the load balancer

Test the load balancer to confirm that the forwarding rule is directing incoming traffic as expected.

Look up the load balancer's external IP address

gcloud

Enter the following command to view the external IP address of the network-lb-forwarding-rule forwarding rule used by the load balancer.

gcloud compute forwarding-rules describe network-lb-forwarding-rule \
    --region us-central1
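If you only need the address for scripting, you can extract the IPAddress field from the describe output. A sketch that parses a saved copy of the output (DESCRIBE_OUTPUT and the 203.0.113.7 address are placeholders; with gcloud available, passing --format="get(IPAddress)" to the describe command prints the address directly):

```shell
# Sketch: pull just the IPAddress field out of `describe` output.
# DESCRIBE_OUTPUT stands in for the real command output; the address is a
# documentation placeholder from the RFC 5737 range.
DESCRIBE_OUTPUT='IPAddress: 203.0.113.7
IPProtocol: TCP
loadBalancingScheme: EXTERNAL'
IP_ADDRESS=$(printf '%s\n' "$DESCRIBE_OUTPUT" | awk '/^IPAddress:/ {print $2}')
echo "$IP_ADDRESS"   # 203.0.113.7
```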

Use the nc command to access the external IP address

In this example, we use the default hashing method for session affinity, so requests from the nc command are distributed randomly to the backend VMs based on the source port assigned by your operating system.

  1. To test connectivity, first install Netcat on Linux by running the following command:

    $ sudo apt install netcat
  2. Repeat the following command a few times until you see all the backend VMs responding:

    $ nc IP_ADDRESS 80
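One way to confirm that every backend is serving is to collect the responses and count distinct responders. A sketch with canned responses standing in for real nc output (the backend names are illustrative):

```shell
# Sketch: count distinct backends seen during a test run. RESPONSES stands
# in for the hostnames returned by repeated requests; names are illustrative.
RESPONSES='backend-instance-1
backend-instance-2
backend-instance-1
backend-instance-2'
# sort -u deduplicates, wc -l counts the unique responders.
DISTINCT=$(printf '%s\n' "$RESPONSES" | sort -u | wc -l)
echo "$DISTINCT"   # 2 distinct backends responded
```

If the count stays below the number of backend VMs after many requests, check the health check status of the missing instances.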

Remove resources associated with the old load balancer

After you confirm that the new external passthrough Network Load Balancer works as expected, you can delete the old target pool resources.

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. Select the old load balancer that was associated with the target pool, and then click Delete.
  3. Select the legacy health checks associated with the old load balancer, and then click Delete load balancer and the selected resources.
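The same cleanup can be done from the CLI. A sketch, assuming the resource names used earlier in this guide (TARGET_POOL_NAME and LEGACY_HEALTH_CHECK_NAME are placeholders you must replace with your own resource names):

```shell
# Sketch: CLI equivalent of the console cleanup. Replace the placeholder
# names with your actual target pool and legacy health check names.
gcloud compute target-pools delete TARGET_POOL_NAME \
    --region us-central1

# Target pools use legacy HTTP health checks, which are a separate
# resource type from the regional health check created in this guide.
gcloud compute http-health-checks delete LEGACY_HEALTH_CHECK_NAME
```

Make sure the forwarding rule no longer targets the pool (the set-target step earlier) before deleting it; keep the new network-lb-health-check, since the backend service still uses it.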



Last updated 2025-12-15 UTC.