Set up a regional internal proxy Network Load Balancer with VM instance group backends
The regional internal proxy Network Load Balancer is a proxy-based regional Layer 4 load balancer that lets you run and scale your TCP service traffic behind an internal IP address that is accessible only to clients in the same VPC network or clients connected to your VPC network.
This guide contains instructions for setting up a regional internal proxy Network Load Balancer with a managed instance group (MIG) backend.

Before you start, read the Regional internal proxy Network Load Balancer overview.
Overview
In this example, you use the load balancer to distribute TCP traffic across backend VMs in two zonal managed instance groups in the REGION_A region. For the purposes of the example, the service is a set of Apache servers configured to respond on port 110. Many browsers don't allow port 110, so the testing section uses curl.
In this example, you configure the following deployment:
The regional internal proxy Network Load Balancer is a regional load balancer. All load balancer components (backend instance groups, backend service, target proxy, and forwarding rule) must be in the same region.
Permissions
To follow this guide, you must be able to create instances and modify a network in a project. You must be either a project owner or editor, or you must have all of the following Compute Engine IAM roles:
| Task | Required Role |
|---|---|
| Create networks, subnets, and load balancer components | Network Admin |
| Add and remove firewall rules | Security Admin |
| Create instances | Compute Instance Admin |
For more information, see the following guides:
Configure the network and subnets
You need a VPC network with two subnets: one for the load balancer's backends and the other for the load balancer's proxies. Regional internal proxy Network Load Balancers are regional. Traffic within the VPC network is routed to the load balancer if the traffic's source is in a subnet in the same region as the load balancer.
This example uses the following VPC network, region, and subnets:

- Network. The network is a custom-mode VPC network named lb-network.
- Subnet for backends. A subnet named backend-subnet in the REGION_A region uses 10.1.2.0/24 for its primary IP range.
- Subnet for proxies. A subnet named proxy-only-subnet in the REGION_A region uses 10.129.0.0/23 for its primary IP range.
To demonstrate global access, this example also creates a second test client VM in a different region (REGION_B) and a subnet with primary IP address range 10.3.4.0/24.
Create the network and subnets
Console
In the Google Cloud console, go to the VPC networks page.

Click Create VPC network.

For Name, enter lb-network.

In the Subnets section, set the Subnet creation mode to Custom.

Create a subnet for the load balancer's backends. In the New subnet section, enter the following information:

- Name: backend-subnet
- Region: REGION_A
- IP address range: 10.1.2.0/24

Click Done.

Click Add subnet.

Create a subnet to demonstrate global access. In the New subnet section, enter the following information:

- Name: test-global-access-subnet
- Region: REGION_B
- IP address range: 10.3.4.0/24

Click Done.

Click Create.
gcloud
Create the custom VPC network with the gcloud compute networks create command:

gcloud compute networks create lb-network --subnet-mode=custom

Create a subnet in the lb-network network in the REGION_A region with the gcloud compute networks subnets create command:

gcloud compute networks subnets create backend-subnet \
    --network=lb-network \
    --range=10.1.2.0/24 \
    --region=REGION_A

Replace REGION_A with the name of the target Google Cloud region.

Create a subnet in the lb-network network in the REGION_B region with the gcloud compute networks subnets create command:

gcloud compute networks subnets create test-global-access-subnet \
    --network=lb-network \
    --range=10.3.4.0/24 \
    --region=REGION_B

Replace REGION_B with the name of the Google Cloud region where you want to create the second subnet to test global access.
Create the proxy-only subnet
A proxy-only subnet provides a set of IP addresses that Google uses to run Envoy proxies on your behalf. The proxies terminate connections from the client and create new connections to the backends.

This proxy-only subnet is used by all Envoy-based load balancers in the REGION_A region of the lb-network VPC network.
Console
If you're using the Google Cloud console, you can wait and create the proxy-only subnet later on the Load balancing page.

If you want to create the proxy-only subnet now, use the following steps:
- In the Google Cloud console, go to the VPC networks page.
- Click the name of the lb-network network.
- Click Add subnet.
- For Name, enter proxy-only-subnet.
- For Region, select REGION_A.
- Set Purpose to Regional Managed Proxy.
- For IP address range, enter 10.129.0.0/23.
- Click Add.
gcloud
Create the proxy-only subnet with the gcloud compute networks subnets create command.

gcloud compute networks subnets create proxy-only-subnet \
    --purpose=REGIONAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=REGION_A \
    --network=lb-network \
    --range=10.129.0.0/23
Create firewall rules
This example requires the following firewall rules:
- fw-allow-ssh. An ingress rule, applicable to the instances being load balanced, that allows incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP range for this rule; for example, you can specify just the IP ranges of the system from which you initiate SSH sessions. This example uses the target tag allow-ssh.
- fw-allow-health-check. An ingress rule, applicable to the instances being load balanced, that allows all TCP traffic from the Google Cloud health checking systems (in 130.211.0.0/22 and 35.191.0.0/16). This example uses the target tag allow-health-check.
- fw-allow-proxy-only-subnet. An ingress rule that allows connections from the proxy-only subnet to reach the backends.
Without these firewall rules, the default deny ingress rule blocks incoming traffic to the backend instances.

The target tags define the backend instances. Without the target tags, the firewall rules apply to all of your backend instances in the VPC network. When you create the backend VMs, make sure to include the specified target tags, as shown in Creating a managed instance group.
Console
- In the Google Cloud console, go to the Firewall policies page.
- Click Create firewall rule to create the rule to allow incoming SSH connections:
  - Name: fw-allow-ssh
  - Network: lb-network
  - Direction of traffic: Ingress
  - Action on match: Allow
  - Targets: Specified target tags
  - Target tags: allow-ssh
  - Source filter: IPv4 ranges
  - Source IPv4 ranges: 0.0.0.0/0
  - Protocols and ports: choose Specified protocols and ports, select the TCP checkbox, and then enter 22 for the port number.
- Click Create.
- Click Create firewall rule a second time to create the rule to allow Google Cloud health checks:
  - Name: fw-allow-health-check
  - Network: lb-network
  - Direction of traffic: Ingress
  - Action on match: Allow
  - Targets: Specified target tags
  - Target tags: allow-health-check
  - Source filter: IPv4 ranges
  - Source IPv4 ranges: 130.211.0.0/22 and 35.191.0.0/16
  - Protocols and ports: choose Specified protocols and ports, select the TCP checkbox, and then enter 80 for the port number.

  As a best practice, limit this rule to just the protocols and ports that match those used by your health check. If you use tcp:80 for the protocol and port, Google Cloud can use HTTP on port 80 to contact your VMs, but it cannot use HTTPS on port 443 to contact them.
- Click Create.
- Click Create firewall rule a third time to create the rule to allow the load balancer's proxy servers to connect to the backends:
  - Name: fw-allow-proxy-only-subnet
  - Network: lb-network
  - Direction of traffic: Ingress
  - Action on match: Allow
  - Targets: Specified target tags
  - Target tags: allow-proxy-only-subnet
  - Source filter: IPv4 ranges
  - Source IPv4 ranges: 10.129.0.0/23
  - Protocols and ports: choose Specified protocols and ports, select the TCP checkbox, and then enter 80 for the port number.
- Click Create.
gcloud
Create the fw-allow-ssh firewall rule to allow SSH connectivity to VMs with the network tag allow-ssh. When you omit source-ranges, Google Cloud interprets the rule to mean any source.

gcloud compute firewall-rules create fw-allow-ssh \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-ssh \
    --rules=tcp:22

Create the fw-allow-health-check rule to allow Google Cloud health checks. This example allows all TCP traffic from health check probers; however, you can configure a narrower set of ports to meet your needs.

gcloud compute firewall-rules create fw-allow-health-check \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=allow-health-check \
    --rules=tcp:80

Create the fw-allow-proxy-only-subnet rule to allow the region's Envoy proxies to connect to your backends. Set --source-ranges to the allocated range of your proxy-only subnet, in this example, 10.129.0.0/23.

gcloud compute firewall-rules create fw-allow-proxy-only-subnet \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --source-ranges=10.129.0.0/23 \
    --target-tags=allow-proxy-only-subnet \
    --rules=tcp:80
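As an optional sanity check, you can list the rules that apply to the network and confirm their source ranges and target tags (a sketch; the table columns shown here are chosen for this example):

```shell
# List the firewall rules in the lb-network VPC network to confirm that
# all three rules exist with the expected source ranges and target tags.
gcloud compute firewall-rules list \
    --filter="network:lb-network" \
    --format="table(name,sourceRanges.list(),targetTags.list())"
```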
Reserve the load balancer's IP address
To reserve a static internal IP address for your load balancer, see Reserve a new static internal IPv4 or IPv6 address.
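One way to do this with the gcloud CLI is shown below (a sketch; the address name int-tcp-ip-address matches the name that the forwarding rule uses later in this guide, and the address is allocated from backend-subnet):

```shell
# Reserve a static internal IPv4 address for the load balancer's frontend.
# The name int-tcp-ip-address is referenced later by the forwarding rule.
gcloud compute addresses create int-tcp-ip-address \
    --region=REGION_A \
    --subnet=backend-subnet
```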
Note: Ensure that you use the subnet name that you specified when you created the subnet.

Create a managed instance group
This section shows you how to create two managed instance group (MIG) backends in the REGION_A region for the load balancer. The MIGs provide VM instances running the backend Apache servers for this example regional internal proxy Network Load Balancer. Typically, a regional internal proxy Network Load Balancer isn't used for HTTP traffic, but Apache software is commonly used for testing.
Console
Create an instance template. In the Google Cloud console, go to the Instance templates page.

- Click Create instance template.
- For Name, enter int-tcp-proxy-backend-template.
- Ensure that the Boot disk is set to a Debian image, such as Debian GNU/Linux 12 (bookworm). These instructions use commands that are only available on Debian, such as apt-get.
- Click Advanced options.
- Click Networking and configure the following fields:
  - For Network tags, enter allow-ssh, allow-health-check, and allow-proxy-only-subnet.
  - For Network interfaces, select the following:
    - Network: lb-network
    - Subnet: backend-subnet
- Click Management. Enter the following script into the Startup script field.

  #! /bin/bash
  apt-get update
  apt-get install apache2 -y
  a2ensite default-ssl
  a2enmod ssl
  vm_hostname="$(curl -H "Metadata-Flavor:Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/name)"
  echo "Page served from: $vm_hostname" | \
  tee /var/www/html/index.html
  systemctl restart apache2

- Click Create.
Create a managed instance group. In the Google Cloud console, go to the Instance groups page.

- Click Create instance group.
- Select New managed instance group (stateless). For more information, see Stateless or stateful MIGs.
- For Name, enter mig-a.
- Under Location, select Single zone.
- For Region, select REGION_A.
- For Zone, select ZONE_A1.
- Under Instance template, select int-tcp-proxy-backend-template.
- Specify the number of instances that you want to create in the group. For this example, specify the following options under Autoscaling:
  - For Autoscaling mode, select Off: do not autoscale.
  - For Maximum number of instances, enter 2.
- For Port mapping, click Add port.
  - For Port name, enter tcp80.
  - For Port number, enter 80.
- Click Create.

Repeat the preceding steps to create a second managed instance group with the following settings:

- Name: mig-c
- Zone: ZONE_A2

Keep all other settings the same.
gcloud
The gcloud instructions in this guide assume that you are using Cloud Shell or another environment with bash installed.
Create a VM instance template with an HTTP server with the gcloud compute instance-templates create command.

gcloud compute instance-templates create int-tcp-proxy-backend-template \
    --region=REGION_A \
    --network=lb-network \
    --subnet=backend-subnet \
    --tags=allow-ssh,allow-health-check,allow-proxy-only-subnet \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --metadata=startup-script='#! /bin/bash
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://metadata.google.internal/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | \
      tee /var/www/html/index.html
      systemctl restart apache2'

Create a managed instance group in the ZONE_A1 zone.

gcloud compute instance-groups managed create mig-a \
    --zone=ZONE_A1 \
    --size=2 \
    --template=int-tcp-proxy-backend-template

Replace ZONE_A1 with the name of a zone in the target Google Cloud region.

Create a managed instance group in the ZONE_A2 zone.

gcloud compute instance-groups managed create mig-c \
    --zone=ZONE_A2 \
    --size=2 \
    --template=int-tcp-proxy-backend-template

Replace ZONE_A2 with the name of another zone in the target Google Cloud region.
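Before you configure the load balancer, you can optionally confirm that each MIG has created its instances and reached a stable state (a verification sketch, shown here for mig-a; repeat with mig-c and ZONE_A2):

```shell
# Wait until the MIG has finished creating its instances.
gcloud compute instance-groups managed wait-until --stable mig-a \
    --zone=ZONE_A1

# List the instances in the group along with their current status.
gcloud compute instance-groups managed list-instances mig-a \
    --zone=ZONE_A1
```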
Configure the load balancer
Console
Start your configuration
In the Google Cloud console, go to the Load balancing page.

- Click Create load balancer.
- For Type of load balancer, select Network Load Balancer (TCP/UDP/SSL) and click Next.
- For Proxy or passthrough, select Proxy load balancer and click Next.
- For Public facing or internal, select Internal and click Next.
- For Cross-region or single region deployment, select Best for regional workloads and click Next.
- Click Configure.
Basic configuration
- For Name, enter my-int-tcp-lb.
- For Region, select REGION_A.
- For Network, select lb-network.
Reserve a proxy-only subnet
Note: If you already created the proxy-only subnet, the Reserve subnet button isn't displayed. You can skip this section and continue with the steps in the Backend configuration section.

To reserve a proxy-only subnet:

- Click Reserve subnet.
- For Name, enter proxy-only-subnet.
- For IP address range, enter 10.129.0.0/23.
- Click Add.
Backend configuration
- Click Backend configuration.
- For Backend type, select Instance group.
- For Protocol, select TCP.
- For Named port, enter tcp80.
- In the Health check list, click Create a health check, and then enter the following information:
  - Name: tcp-health-check
  - Protocol: TCP
  - Port: 80
- Click Create.
- Configure the first backend:
  - Under New backend, select instance group mig-a.
  - For Port numbers, enter 80.
  - Retain the remaining default values and click Done.
- Configure the second backend:
  - Click Add backend.
  - Under New backend, select instance group mig-c.
  - For Port numbers, enter 80.
  - Retain the remaining default values and click Done.
- In the Google Cloud console, verify that there is a check mark next to Backend configuration. If not, double-check that you have completed all of the steps.
Frontend configuration
- Click Frontend configuration.
- For Name, enter int-tcp-forwarding-rule.
- For Subnetwork, select backend-subnet.
- For IP address, select the IP address reserved previously: LB_IP_ADDRESS.
- For Port number, enter 110. The forwarding rule only forwards packets with a matching destination port.
- In this example, don't enable the Proxy protocol because it doesn't work with the Apache HTTP Server software. For more information, see Proxy protocol.
- Click Done.
- In the Google Cloud console, verify that there is a check mark next to Frontend configuration. If not, double-check that you have completed all the previous steps.
Review and finalize
- Click Review and finalize.
- Review your load balancer configuration settings.
- Optional: Click Equivalent code to view the REST API request that will be used to create the load balancer.
- Click Create.
gcloud
Create a regional health check.

gcloud compute health-checks create tcp tcp-health-check \
    --region=REGION_A \
    --use-serving-port

Create a backend service.

gcloud compute backend-services create internal-tcp-proxy-bs \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --protocol=TCP \
    --region=REGION_A \
    --health-checks=tcp-health-check \
    --health-checks-region=REGION_A

Add instance groups to your backend service.

gcloud compute backend-services add-backend internal-tcp-proxy-bs \
    --region=REGION_A \
    --instance-group=mig-a \
    --instance-group-zone=ZONE_A1 \
    --balancing-mode=UTILIZATION \
    --max-utilization=0.8

gcloud compute backend-services add-backend internal-tcp-proxy-bs \
    --region=REGION_A \
    --instance-group=mig-c \
    --instance-group-zone=ZONE_A2 \
    --balancing-mode=UTILIZATION \
    --max-utilization=0.8

Create an internal target TCP proxy.

gcloud compute target-tcp-proxies create int-tcp-target-proxy \
    --backend-service=internal-tcp-proxy-bs \
    --proxy-header=NONE \
    --region=REGION_A

If you want to turn on the proxy header, set it to PROXY_V1 instead of NONE. In this example, don't enable the PROXY protocol because it doesn't work with the Apache HTTP Server software. For more information, see Proxy protocol.

Create the forwarding rule. For --ports, specify a single port number from 1-65535. This example uses port 110. The forwarding rule only forwards packets with a matching destination port.

gcloud compute forwarding-rules create int-tcp-forwarding-rule \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=lb-network \
    --subnet=backend-subnet \
    --region=REGION_A \
    --target-tcp-proxy=int-tcp-target-proxy \
    --target-tcp-proxy-region=REGION_A \
    --address=int-tcp-ip-address \
    --ports=110
Test your load balancer
To test the load balancer, create a client VM in the same region as the load balancer. Then send traffic from the client to the load balancer.
Create a client VM
Create a client VM (client-vm) in the same region as the load balancer.
Console
In the Google Cloud console, go to the VM instances page.

Click Create instance.

Set Name to client-vm.

Set Zone to ZONE_A1.

Click Advanced options.

Click Networking and configure the following fields:

- For Network tags, enter allow-ssh.
- For Network interfaces, select the following:
  - Network: lb-network
  - Subnet: backend-subnet

Click Create.
gcloud
The client VM must be in the same VPC network and region as the load balancer. It doesn't need to be in the same subnet or zone. The client uses the same subnet as the backend VMs.

gcloud compute instances create client-vm \
    --zone=ZONE_A1 \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --tags=allow-ssh \
    --subnet=backend-subnet
Send traffic to the load balancer
Note: It might take a few minutes for the load balancer configuration to propagate after you first deploy it.

Now that you have configured your load balancer, you can test sending traffic to the load balancer's IP address.

Use SSH to connect to the client instance.

gcloud compute ssh client-vm \
    --zone=ZONE_A1

Verify that the load balancer is serving backend hostnames as expected.

Use the compute addresses describe command to view the load balancer's IP address:

gcloud compute addresses describe int-tcp-ip-address \
    --region=REGION_A

Make a note of the IP address.

Send traffic to the load balancer. Replace IP_ADDRESS with the IP address of the load balancer.

curl IP_ADDRESS:110
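Because each backend VM writes its own hostname into index.html at startup, repeating the request shows traffic being distributed across the backends. A quick sketch (IP_ADDRESS is a placeholder for the address you noted earlier):

```shell
# Send ten requests; the mix of "Page served from: ..." hostnames in the
# output shows requests being distributed across the backend VMs.
for i in $(seq 1 10); do
  curl --silent IP_ADDRESS:110
done
```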
Additional configuration options
This section expands on the configuration example to provide alternative andadditional configuration options. All of the tasks are optional. You canperform them in any order.
Enable global access
You can enable global access for your load balancer to make it accessible to clients in all regions. The backends of your example load balancer must still be located in one region (REGION_A).

You can't modify an existing regional forwarding rule to enable global access. You must create a new forwarding rule for this purpose. Additionally, after a forwarding rule has been created with global access enabled, it can't be modified. To disable global access, you must create a new regional access forwarding rule and delete the previous global access forwarding rule.
To configure global access, make the following configuration changes.
Console
Create a new forwarding rule for the load balancer:
In the Google Cloud console, go to the Load balancing page.

In the Name column, click your load balancer.

Click Frontend configuration.

Click Add frontend IP and port.

Enter the name and subnet details for the new forwarding rule.

For Subnetwork, select backend-subnet.

For IP address, you can either select the same IP address as an existing forwarding rule, reserve a new IP address, or use an ephemeral IP address. Sharing the same IP address across multiple forwarding rules is only possible if you set the IP address --purpose flag to SHARED_LOADBALANCER_VIP while creating the IP address.

For Port number, enter 110.

For Global access, select Enable.

Click Done.

Click Update.
gcloud
Create a new forwarding rule for the load balancer with the --allow-global-access flag.

gcloud compute forwarding-rules create int-tcp-forwarding-rule-global-access \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=lb-network \
    --subnet=backend-subnet \
    --region=REGION_A \
    --target-tcp-proxy=int-tcp-target-proxy \
    --target-tcp-proxy-region=REGION_A \
    --address=int-tcp-ip-address \
    --ports=110 \
    --allow-global-access

You can use the gcloud compute forwarding-rules describe command to determine whether a forwarding rule has global access enabled. For example:

gcloud compute forwarding-rules describe int-tcp-forwarding-rule-global-access \
    --region=REGION_A \
    --format="get(name,region,allowGlobalAccess)"

When global access is enabled, the word True appears in the output after the name and region of the forwarding rule.
Create a client VM to test global access
Console
In the Google Cloud console, go to the VM instances page.

Click Create instance.

Set Name to test-global-access-vm.

Set Zone to ZONE_B1.

Click Advanced options.

Click Networking and configure the following fields:

- For Network tags, enter allow-ssh.
- For Network interfaces, select the following:
  - Network: lb-network
  - Subnet: test-global-access-subnet

Click Create.
gcloud
Create a client VM in the ZONE_B1 zone.

gcloud compute instances create test-global-access-vm \
    --zone=ZONE_B1 \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --tags=allow-ssh \
    --subnet=test-global-access-subnet

Replace ZONE_B1 with the name of a zone in the REGION_B region.
Connect to the client VM and test connectivity
Use ssh to connect to the client instance:

gcloud compute ssh test-global-access-vm \
    --zone=ZONE_B1

Use the gcloud compute addresses describe command to get the load balancer's IP address:

gcloud compute addresses describe int-tcp-ip-address \
    --region=REGION_A

Make a note of the IP address.

Send traffic to the load balancer; replace IP_ADDRESS with the IP address of the load balancer:

curl IP_ADDRESS:110
PROXY protocol for retaining client connection information
The proxy Network Load Balancer terminates TCP connections from the client and creates new connections to the instances. By default, the original client IP address and port information is not preserved.

To preserve and send the original connection information to your instances, enable PROXY protocol version 1. This protocol sends an additional header that contains the source IP address, destination IP address, and port numbers to the instance as part of the request.

Make sure that the proxy Network Load Balancer's backend instances are running servers that support PROXY protocol headers. If the servers are not configured to support PROXY protocol headers, the backend instances return empty responses.

If you set the PROXY protocol for user traffic, you can also set it for your health checks. If you are checking health and serving content on the same port, set the health check's --proxy-header to match your load balancer setting.

The PROXY protocol header is typically a single line of user-readable text in the following format:
PROXY TCP4 <client IP> <load balancing IP> <source port> <dest port>\r\n
The following example shows a PROXY protocol header:
PROXY TCP4 192.0.2.1 198.51.100.1 15221 110\r\n
In the preceding example, the client IP address is 192.0.2.1, the load balancing IP address is 198.51.100.1, the client port is 15221, and the destination port is 110.
When the client IP address is not known, the load balancer generates a PROXY protocol header in the following format:
PROXY UNKNOWN\r\n
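A backend that speaks the PROXY protocol reads this single header line before the application data. As an illustrative sketch (the parse_proxy_header helper is hypothetical and not part of this setup; production servers handle the header natively), the fields can be recovered with standard shell word splitting:

```shell
# Parse a PROXY protocol v1 header line into its fields.
# parse_proxy_header is a hypothetical helper for illustration only.
parse_proxy_header() {
  # Split the header line on whitespace into positional parameters:
  # $1=PROXY $2=protocol $3=client IP $4=LB IP $5=client port $6=dest port
  set -- $1
  echo "client=$3:$5 via=$4:$6 proto=$2"
}

parse_proxy_header "PROXY TCP4 192.0.2.1 198.51.100.1 15221 110"
# prints: client=192.0.2.1:15221 via=198.51.100.1:110 proto=TCP4
```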
Update PROXY protocol header for target proxy
You cannot update the PROXY protocol header on an existing target proxy. You have to create a new target proxy with the required setting for the PROXY protocol header. Use these steps to create a new frontend with the required settings:
Console
In the Google Cloud console, go to the Load balancing page.

- Click the name of the load balancer that you want to edit.
- Click Edit for your load balancer.
- Click Frontend configuration.
- Delete the old frontend IP and port.
- Click Add frontend IP and port.
- For Name, enter int-tcp-forwarding-rule.
- For Subnetwork, select backend-subnet.
- For IP address, select the IP address reserved previously: LB_IP_ADDRESS.
- For Port number, enter 110. The forwarding rule only forwards packets with a matching destination port.
- Change the value of the Proxy protocol field to On.
- Click Done.
- Click Update to save your changes.
gcloud
In the following command, edit the --proxy-header field and set it to either NONE or PROXY_V1, depending on your requirement.

gcloud compute target-tcp-proxies create TARGET_PROXY_NAME \
    --backend-service=BACKEND_SERVICE \
    --proxy-header=[NONE | PROXY_V1] \
    --region=REGION

Delete the existing forwarding rule.

gcloud compute forwarding-rules delete int-tcp-forwarding-rule \
    --region=REGION

Create a new forwarding rule and associate it with the target proxy.

gcloud compute forwarding-rules create int-tcp-forwarding-rule \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=lb-network \
    --subnet=backend-subnet \
    --region=REGION \
    --target-tcp-proxy=TARGET_PROXY_NAME \
    --target-tcp-proxy-region=REGION \
    --address=LB_IP_ADDRESS \
    --ports=110
Enable session affinity
The example configuration creates a backend service without session affinity.
These procedures show you how to update the backend service for the example regional internal proxy Network Load Balancer so that the backend service uses client IP affinity.

When client IP affinity is enabled, the load balancer directs a particular client's requests to the same backend VM based on a hash created from the client's IP address and the load balancer's IP address (the internal IP address of an internal forwarding rule).
Console
To enable client IP session affinity:
- In the Google Cloud console, go to the Load balancing page.
- Click Backends.
- Click internal-tcp-proxy-bs (the name of the backend service you created for this example) and click Edit.
- On the Backend service details page, click Advanced configuration.
- Under Session affinity, select Client IP from the menu.
- Click Update.
gcloud
Use the following Google Cloud CLI command to update the internal-tcp-proxy-bs backend service, specifying client IP session affinity:

gcloud compute backend-services update internal-tcp-proxy-bs \
    --region=REGION_A \
    --session-affinity=CLIENT_IP
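To confirm that the change took effect, you can read the setting back from the backend service resource (a verification sketch; sessionAffinity is the field name in the backend service resource):

```shell
# Print the session affinity setting of the backend service.
# Prints CLIENT_IP after client IP affinity is enabled.
gcloud compute backend-services describe internal-tcp-proxy-bs \
    --region=REGION_A \
    --format="get(sessionAffinity)"
```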
Enable connection draining
You can enable connection draining on backend services to ensure minimal interruption to your users when an instance that is serving traffic is terminated, removed manually, or removed by an autoscaler. To learn more about connection draining, read the Enabling connection draining documentation.
What's next
- Convert proxy Network Load Balancer to IPv6
- Regional internal proxy Network Load Balancer overview
- Using monitoring
- Clean up the load balancer setup
Last updated 2025-12-15 UTC.