Migrate an external passthrough Network Load Balancer from target pool to backend service
This guide provides instructions for migrating an existing external passthrough Network Load Balancer from a target pool backend to a regional backend service.
Moving to a regional backend service lets you take advantage of features such as non-legacy health checks (for TCP, SSL, HTTP, HTTPS, and HTTP/2), managed instance groups, connection draining, and failover policy.
This guide walks you through migrating the following sample target pool-based external passthrough Network Load Balancer to use a regional backend service instead.
Your resulting backend service-based external passthrough Network Load Balancer deployment looks like this.
This example assumes that you have a traditional target pool-based external passthrough Network Load Balancer with two instances in zone us-central1-a and two instances in zone us-central1-c.
The high-level steps required for such a transition are as follows:
Group your target pool instances into instance groups.
Backend services only work with managed or unmanaged instance groups. While there is no limit on the number of instances that can be placed into a single target pool, instance groups do have a maximum size. If your target pool has more than this maximum number of instances, you need to split its backends across multiple instance groups.
If your existing deployment includes a backup target pool, create a separate instance group for those instances. This instance group is configured as a failover group.
Create a regional backend service.
If your deployment includes a backup target pool, you need to specify a failover ratio while creating the backend service. This should match the failover ratio previously configured for the target pool deployment.
Add instance groups (created previously) to the backend service.
If your deployment includes a backup target pool, mark the corresponding failover instance group with a --failover flag when adding it to the backend service.
Configure a forwarding rule that points to the new backend service.
You can choose one of the following options:
Update the existing forwarding rule to point to the backend service (recommended).
Create a new forwarding rule that points to the backend service. This requires you to create a new IP address for the load balancer's frontend. You then modify your DNS settings to seamlessly transition from the old target pool-based load balancer's IP address to the new IP address.
Before you begin
Install the Google Cloud CLI. For a complete overview of the tool, see the gcloud Tool Guide. You can find commands related to load balancing in the gcloud compute command group.
If you haven't run the Google Cloud CLI previously, first run gcloud init to authenticate.
This guide assumes that you are familiar with bash.
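If you haven't set a default project for the CLI yet, a minimal first-time setup might look like the following; PROJECT_ID is a placeholder for your own project ID:

# Authenticate and pick a default project for subsequent gcloud commands.
gcloud init
gcloud config set project PROJECT_ID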
Identify the backends and forwarding rule to migrate
To list all the target pools, run the following command in Cloud Shell:
gcloud compute target-pools list
Note the name of the target pool to migrate from. This name is referred to later as TARGET_POOL_NAME.
To list all the VM instances in the target pool TARGET_POOL_NAME, run the following command in Cloud Shell:
gcloud compute target-pools describe TARGET_POOL_NAME \
    --region=us-central1
Note the names of the VM instances. These names are referred to later as BACKEND_INSTANCE1, BACKEND_INSTANCE2, BACKEND_INSTANCE3, and BACKEND_INSTANCE4.
To list the forwarding rules in the external passthrough Network Load Balancer, run the following command in Cloud Shell:
gcloud compute forwarding-rules list --filter="target: (TARGET_POOL_NAME)"
Note the name of the forwarding rule. This name is referred to later as FORWARDING_RULE.
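As a convenience, you can record the names you noted in shell variables and reference them as $TARGET_POOL_NAME and $FORWARDING_RULE in later commands. The values below are placeholders for illustration only:

# Placeholders; substitute the names that you noted above.
TARGET_POOL_NAME=my-target-pool
FORWARDING_RULE=my-forwarding-rule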
Creating the zonal unmanaged instance groups
Create a zonal unmanaged instance group for each of the zones in which you have backends. Depending on your setup, you can divide your instances across as many instance groups as needed. For our example, we are only using two instance groups, one for each zone, and placing all the backend VMs in a given zone in the associated instance group.
For this example, we create two instance groups: one in the us-central1-a zone and one in the us-central1-c zone.
Set up the instance groups
Console
- In the Google Cloud console, go to the Instance groups page.
- Click Create instance group.
- In the left pane, select New unmanaged instance group.
- For Name, enter ig-us-1.
- For Region, select us-central1.
- For Zone, select us-central1-a.
- Select Network and Subnetwork depending on where your instances are located. In this example, the existing target pool instances are in the default network and subnetwork.
- To add instances to the instance group, in the VM instances section, select the two instances BACKEND_INSTANCE1 and BACKEND_INSTANCE2.
- Click Create.
Repeat these steps to create a second instance group with the following specifications:
- Name: ig-us-2
- Region: us-central1
- Zone: us-central1-c

Add the two instances BACKEND_INSTANCE3 and BACKEND_INSTANCE4 in the us-central1-c zone to this instance group.
If your existing load balancer deployment also has a backup target pool, repeat these steps to create a separate failover instance group for those instances.
gcloud
Create an unmanaged instance group in the us-central1-a zone with the gcloud compute instance-groups unmanaged create command.

gcloud compute instance-groups unmanaged create ig-us-1 \
    --zone us-central1-a

Create a second unmanaged instance group in the us-central1-c zone.

gcloud compute instance-groups unmanaged create ig-us-2 \
    --zone us-central1-c

Add instances to the ig-us-1 instance group.

gcloud compute instance-groups unmanaged add-instances ig-us-1 \
    --instances BACKEND_INSTANCE1,BACKEND_INSTANCE2 \
    --zone us-central1-a

Add instances to the ig-us-2 instance group.

gcloud compute instance-groups unmanaged add-instances ig-us-2 \
    --instances BACKEND_INSTANCE3,BACKEND_INSTANCE4 \
    --zone us-central1-c
If your existing load balancer deployment also has a backup target pool, repeat these steps to create a separate failover instance group for those instances.
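To confirm that each instance group contains the expected VMs before you continue, you can list its members. This is an optional check rather than part of the original procedure:

gcloud compute instance-groups unmanaged list-instances ig-us-1 \
    --zone us-central1-a

gcloud compute instance-groups unmanaged list-instances ig-us-2 \
    --zone us-central1-c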
Create a health check
Create a health check to determine the health of the instances in your instance groups. Your existing target pool-based external passthrough Network Load Balancer likely has a legacy HTTP health check associated with it.
You can create a new health check that matches the protocol of the traffic that the load balancer will be distributing. Backend service-based external passthrough Network Load Balancers can use TCP, SSL, HTTP(S), and HTTP/2 health checks.
Console
- In the Google Cloud console, go to the Health checks page.
- Click Create health check.
- In the Name field, enter network-lb-health-check.
- Set Scope to Regional.
- For Region, select us-central1.
- For Protocol, select HTTP.
- For Port, enter 80.
- Click Create.
gcloud
For this example, we create a non-legacy HTTP health check to be used with the backend service.

gcloud compute health-checks create http network-lb-health-check \
    --region us-central1 \
    --port 80
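To verify that the regional health check was created with the settings you expect, you can describe it; this step is optional:

gcloud compute health-checks describe network-lb-health-check \
    --region us-central1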
Configure the backend service
Use one of the following sections to create the backend service. If your existing external passthrough Network Load Balancer has a backup target pool, you need to configure a failover ratio while creating the backend service.
You also need to designate the failover instance group with the --failover flag when adding backends to the backend service.
Deployments without a backup target pool
gcloud
Create a regional backend service in the us-central1 region.

gcloud compute backend-services create network-lb-backend-service \
    --region us-central1 \
    --health-checks network-lb-health-check \
    --health-checks-region us-central1 \
    --protocol TCP

Add the two instance groups (ig-us-1 and ig-us-2) as backends to the backend service.

gcloud compute backend-services add-backend network-lb-backend-service \
    --instance-group ig-us-1 \
    --instance-group-zone us-central1-a \
    --region us-central1

gcloud compute backend-services add-backend network-lb-backend-service \
    --instance-group ig-us-2 \
    --instance-group-zone us-central1-c \
    --region us-central1
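Once the backends are attached, you can optionally confirm that the instances pass the new health check before moving traffic over:

gcloud compute backend-services get-health network-lb-backend-service \
    --region us-central1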
Deployments with a backup target pool
gcloud
Create a regional backend service in the us-central1 region. Configure the backend service failover ratio to match the failover ratio previously configured for the target pool.

gcloud compute backend-services create network-lb-backend-service \
    --region us-central1 \
    --health-checks network-lb-health-check \
    --health-checks-region us-central1 \
    --protocol TCP \
    --failover-ratio 0.5

Add the two instance groups (ig-us-1 and ig-us-2) as backends to the backend service.

gcloud compute backend-services add-backend network-lb-backend-service \
    --instance-group ig-us-1 \
    --instance-group-zone us-central1-a \
    --region us-central1

gcloud compute backend-services add-backend network-lb-backend-service \
    --instance-group ig-us-2 \
    --instance-group-zone us-central1-c \
    --region us-central1

If you created a failover instance group, add it to the backend service. Mark this backend with the --failover flag when you add it to the backend service.

gcloud compute backend-services add-backend network-lb-backend-service \
    --instance-group FAILOVER_INSTANCE_GROUP \
    --instance-group-zone ZONE \
    --region us-central1 \
    --failover
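To confirm that the failover ratio and the failover backend were recorded as intended, you can optionally inspect the backend service:

gcloud compute backend-services describe network-lb-backend-service \
    --region us-central1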
Configure the forwarding rule
You have two options to configure the forwarding rule to direct traffic to thenew backend service. You can either update the existing forwarding rule orcreate a new forwarding rule with a new IP address.
Update the existing forwarding rule (recommended)
Use the set-target command to update the existing forwarding rule so that it points to the new backend service.

gcloud compute forwarding-rules set-target FORWARDING_RULE \
    --backend-service network-lb-backend-service \
    --region us-central1

Replace FORWARDING_RULE with the name of the existing forwarding rule.
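To confirm that the forwarding rule now targets the backend service, you can optionally describe it; the backendService field in the output holds the URL of the new backend service:

gcloud compute forwarding-rules describe FORWARDING_RULE \
    --region us-central1 \
    --format="get(backendService)"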
Create a new forwarding rule
If you don't want to update the existing forwarding rule, you can create a new forwarding rule with a new IP address. Because a given IP address can only be associated with a single forwarding rule at a time, you need to manually modify your DNS settings to transition incoming traffic from the old IP address to the new one.
Use the following command to create a new forwarding rule with a new IP address. You can use the --address flag if you want to specify an IP address already reserved in the us-central1 region.

gcloud compute forwarding-rules create network-lb-forwarding-rule \
    --load-balancing-scheme external \
    --region us-central1 \
    --ports 80 \
    --backend-service network-lb-backend-service
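If you want the new forwarding rule to use an address that you reserved in advance, one possible sequence is to reserve the address first and then pass it with --address; the address name network-lb-ip is only an example:

# Reserve a regional external IP address, then reference it by name.
gcloud compute addresses create network-lb-ip \
    --region us-central1

gcloud compute forwarding-rules create network-lb-forwarding-rule \
    --load-balancing-scheme external \
    --region us-central1 \
    --ports 80 \
    --address network-lb-ip \
    --backend-service network-lb-backend-service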
Testing the load balancer
Test the load balancer to confirm that the forwarding rule is directingincoming traffic as expected.
Look up the load balancer's external IP address
gcloud
Enter the following command to view the external IP address of the network-lb-forwarding-rule forwarding rule used by the load balancer.

gcloud compute forwarding-rules describe network-lb-forwarding-rule \
    --region us-central1
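If you only need the address itself, for example to use in a script, you can filter the output; the IPAddress field holds the external IP:

gcloud compute forwarding-rules describe network-lb-forwarding-rule \
    --region us-central1 \
    --format="get(IPAddress)"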
Use the nc command to access the external IP address
In this example, we use the default hashing method for session affinity, so requests from the nc command are distributed randomly to the backend VMs based on the source port assigned by your operating system.
To test connectivity, first install Netcat on Linux by running the following command:

$ sudo apt install netcat

Repeat the following command a few times until you see all the backend VMs responding:

$ nc IP_ADDRESS 80
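To exercise several connections at once, a short shell loop works as well. This is a sketch that assumes the backends answer a plain HTTP/1.0 request and that your netcat variant supports the -w timeout flag:

# Send ten requests; each connection uses a new source port and may reach a different backend.
for i in $(seq 1 10); do
  printf 'GET / HTTP/1.0\r\n\r\n' | nc -w 2 IP_ADDRESS 80
done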
Remove resources associated with the old load balancer
After you confirm that the new external passthrough Network Load Balancer works as expected,you can delete the old target pool resources.
- In the Google Cloud console, go to the Load balancing page.
- Select the old load balancer that was associated with the target pool, and then click Delete.
- Select the health checks that you created, and then click Delete load balancer and the selected resources.
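If you prefer the gcloud CLI for this cleanup, you can delete the old target pool and, if your old load balancer used one, its legacy HTTP health check directly; LEGACY_HEALTH_CHECK is a placeholder name:

# Delete the old target pool once no forwarding rule references it.
gcloud compute target-pools delete TARGET_POOL_NAME \
    --region us-central1

gcloud compute http-health-checks delete LEGACY_HEALTH_CHECK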
What's next
- For information about how external passthrough Network Load Balancers work with backend services, see Backend service-based external passthrough Network Load Balancer overview.
- To configure an external passthrough Network Load Balancer with a backend service, see Set up an external passthrough Network Load Balancer with a backend service.
- To configure an external passthrough Network Load Balancer with a target pool, see Set up an external passthrough Network Load Balancer with a target pool.