Set up a regional internal proxy Network Load Balancer with VM instance group backends

The regional internal proxy Network Load Balancer is a proxy-based regional Layer 4 load balancer that lets you run and scale your TCP service traffic behind an internal IP address that is accessible only to clients in the same VPC network or clients connected to your VPC network.

This guide contains instructions for setting up a regional internal proxy Network Load Balancer with a managed instance group (MIG) backend.

Before you start, read the Regional internal proxy Network Load Balancer overview.

Overview

In this example, we'll use the load balancer to distribute TCP traffic across backend VMs in two zonal managed instance groups in the REGION_A region. For purposes of the example, the service is a set of Apache servers configured to respond on port 110. Many browsers don't allow port 110, so the testing section uses curl.

In this example, you configure the following deployment:

Regional internal proxy Network Load Balancer example configuration with instance group backends

The regional internal proxy Network Load Balancer is a regional load balancer. All load balancer components (backend instance groups, backend service, target proxy, and forwarding rule) must be in the same region.

Permissions

To follow this guide, you must be able to create instances and modify a network in a project. You must be either a project owner or editor, or you must have all of the following Compute Engine IAM roles:

  • Create networks, subnets, and load balancer components: Network Admin
  • Add and remove firewall rules: Security Admin
  • Create instances: Compute Instance Admin


Configure the network and subnets

You need a VPC network with two subnets: one for the load balancer's backends and the other for the load balancer's proxies. Regional internal proxy Network Load Balancers are regional. Traffic within the VPC network is routed to the load balancer if the traffic's source is in a subnet in the same region as the load balancer.

This example uses the following VPC network, region, and subnets:

  • Network. The network is a custom-mode VPC network named lb-network.

  • Subnet for backends. A subnet named backend-subnet in the REGION_A region uses 10.1.2.0/24 for its primary IP range.

  • Subnet for proxies. A subnet named proxy-only-subnet in the REGION_A region uses 10.129.0.0/23 for its primary IP range.

To demonstrate global access, this example also creates a second test client VM in a different region (REGION_B) and a subnet with primary IP address range 10.3.4.0/24.

Note: You can change the name of the network, the region, and the parameters for the subnets; however, subsequent steps in this guide use the network, region, and subnet parameters as outlined here.

Create the network and subnets

Console

  1. In the Google Cloud console, go to the VPC networks page.

    Go to VPC networks

  2. Click Create VPC network.

  3. For Name, enter lb-network.

  4. In the Subnets section, set the Subnet creation mode to Custom.

  5. Create a subnet for the load balancer's backends. In the New subnet section, enter the following information:

    • Name: backend-subnet
    • Region: REGION_A
    • IP address range: 10.1.2.0/24
  6. Click Done.

  7. Click Add subnet.

  8. Create a subnet to demonstrate global access. In the New subnet section, enter the following information:

    • Name: test-global-access-subnet
    • Region: REGION_B
    • IP address range: 10.3.4.0/24
  9. Click Done.

  10. Click Create.

gcloud

  1. Create the custom VPC network with the gcloud compute networks create command:

    gcloud compute networks create lb-network --subnet-mode=custom
  2. Create a subnet in the lb-network network in the REGION_A region with the gcloud compute networks subnets create command:

    gcloud compute networks subnets create backend-subnet \
        --network=lb-network \
        --range=10.1.2.0/24 \
        --region=REGION_A

    Replace REGION_A with the name of the target Google Cloud region.

  3. Create a subnet in the lb-network network in the REGION_B region with the gcloud compute networks subnets create command:

    gcloud compute networks subnets create test-global-access-subnet \
        --network=lb-network \
        --range=10.3.4.0/24 \
        --region=REGION_B

    Replace REGION_B with the name of the Google Cloud region where you want to create the second subnet to test global access.

Create the proxy-only subnet

A proxy-only subnet provides a set of IP addresses that Google uses to run Envoy proxies on your behalf. The proxies terminate connections from the client and create new connections to the backends.

This proxy-only subnet is used by all Envoy-based load balancers in the REGION_A region of the lb-network VPC network.

Important: Don't try to assign addresses from the proxy-only subnet to your load balancer's forwarding rule or backends. You assign the forwarding rule's IP address and the backend instance IP addresses from a different subnet range (or ranges), not this one. Google Cloud reserves this subnet range for Google Cloud-managed proxies.

Console

If you're using the Google Cloud console, you can wait and create the proxy-only subnet later on the Load balancing page.

If you want to create the proxy-only subnet now, use the following steps:

  1. In the Google Cloud console, go to the VPC networks page.
    Go to VPC networks
  2. Click the name of the network: lb-network.
  3. Click Add subnet.
  4. For Name, enter proxy-only-subnet.
  5. For Region, select REGION_A.
  6. Set Purpose to Regional Managed Proxy.
  7. For IP address range, enter 10.129.0.0/23.
  8. Click Add.

gcloud

Create the proxy-only subnet with the gcloud compute networks subnets create command.

gcloud compute networks subnets create proxy-only-subnet \
    --purpose=REGIONAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=REGION_A \
    --network=lb-network \
    --range=10.129.0.0/23

Create firewall rules

This example requires the following firewall rules:

  • fw-allow-ssh. An ingress rule, applicable to the instances being load balanced, that allows incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP range for this rule; for example, you can specify just the IP ranges of the system from which you initiate SSH sessions. This example uses the target tag allow-ssh.

  • fw-allow-health-check. An ingress rule, applicable to the instances being load balanced, that allows all TCP traffic from the Google Cloud health checking systems (in 130.211.0.0/22 and 35.191.0.0/16). This example uses the target tag allow-health-check.

  • fw-allow-proxy-only-subnet. An ingress rule that allows connections from the proxy-only subnet to reach the backends.

Without these firewall rules, the default deny ingress rule blocks incoming traffic to the backend instances.

The target tags define the backend instances. Without the target tags, the firewall rules apply to all of your backend instances in the VPC network. When you create the backend VMs, make sure to include the specified target tags, as shown in Creating a managed instance group.

Console

  1. In the Google Cloud console, go to the Firewall policies page.
    Go to Firewall policies
  2. Click Create firewall rule to create the rule to allow incoming SSH connections:
    • Name: fw-allow-ssh
    • Network: lb-network
    • Direction of traffic: Ingress
    • Action on match: Allow
    • Targets: Specified target tags
    • Target tags: allow-ssh
    • Source filter: IPv4 ranges
    • Source IPv4 ranges: 0.0.0.0/0
    • Protocols and ports:
      • Choose Specified protocols and ports.
      • Select the TCP checkbox, and then enter 22 for the port number.
  3. Click Create.
  4. Click Create firewall rule a second time to create the rule to allow Google Cloud health checks:
    • Name: fw-allow-health-check
    • Network: lb-network
    • Direction of traffic: Ingress
    • Action on match: Allow
    • Targets: Specified target tags
    • Target tags: allow-health-check
    • Source filter: IPv4 ranges
    • Source IPv4 ranges: 130.211.0.0/22 and 35.191.0.0/16
    • Protocols and ports:
      • Choose Specified protocols and ports.
      • Select the TCP checkbox, and then enter 80 for the port number.
        As a best practice, limit this rule to just the protocols and ports that match those used by your health check. If you use tcp:80 for the protocol and port, Google Cloud can use HTTP on port 80 to contact your VMs, but it cannot use HTTPS on port 443 to contact them.
  5. Click Create.
  6. Click Create firewall rule a third time to create the rule to allow the load balancer's proxy servers to connect to the backends:
    • Name: fw-allow-proxy-only-subnet
    • Network: lb-network
    • Direction of traffic: Ingress
    • Action on match: Allow
    • Targets: Specified target tags
    • Target tags: allow-proxy-only-subnet
    • Source filter: IPv4 ranges
    • Source IPv4 ranges: 10.129.0.0/23
    • Protocols and ports:
      • Choose Specified protocols and ports.
      • Select the TCP checkbox, and then enter 80 for the port number.
  7. Click Create.

gcloud

  1. Create the fw-allow-ssh firewall rule to allow SSH connectivity to VMs with the network tag allow-ssh. When you omit source-ranges, Google Cloud interprets the rule to mean any source.

    gcloud compute firewall-rules create fw-allow-ssh \
        --network=lb-network \
        --action=allow \
        --direction=ingress \
        --target-tags=allow-ssh \
        --rules=tcp:22
  2. Create the fw-allow-health-check rule to allow Google Cloud health checks. This example allows all TCP traffic from health check probers; however, you can configure a narrower set of ports to meet your needs.

    gcloud compute firewall-rules create fw-allow-health-check \
        --network=lb-network \
        --action=allow \
        --direction=ingress \
        --source-ranges=130.211.0.0/22,35.191.0.0/16 \
        --target-tags=allow-health-check \
        --rules=tcp:80
  3. Create the fw-allow-proxy-only-subnet rule to allow the region's Envoy proxies to connect to your backends. Set --source-ranges to the allocated ranges of your proxy-only subnet, in this example, 10.129.0.0/23.

    gcloud compute firewall-rules create fw-allow-proxy-only-subnet \
        --network=lb-network \
        --action=allow \
        --direction=ingress \
        --source-ranges=10.129.0.0/23 \
        --target-tags=allow-proxy-only-subnet \
        --rules=tcp:80

Reserve the load balancer's IP address

To reserve a static internal IP address for your load balancer, see Reserve a new static internal IPv4 or IPv6 address.

Note: Ensure that you use the subnet name that you specified when you created the subnet.
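The gcloud steps later in this guide refer to the reserved address as int-tcp-ip-address (the name passed to the forwarding rule's --address flag). A minimal sketch of the reservation, assuming that name and the backend-subnet created earlier:

```shell
# Sketch: reserve a static internal IPv4 address in the backend subnet.
# The name int-tcp-ip-address matches the --address value used when the
# forwarding rule is created later in this guide.
gcloud compute addresses create int-tcp-ip-address \
    --region=REGION_A \
    --subnet=backend-subnet
```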

Create a managed instance group

This section shows you how to create two managed instance group (MIG) backends in the REGION_A region for the load balancer. The MIGs provide VM instances running the backend Apache servers for this example regional internal proxy Network Load Balancer. Typically, a regional internal proxy Network Load Balancer isn't used for HTTP traffic, but Apache software is commonly used for testing.

Console

  1. Create an instance template. In the Google Cloud console, go to the Instance templates page.

    Go to Instance templates

    1. Click Create instance template.
    2. For Name, enter int-tcp-proxy-backend-template.
    3. Ensure that the Boot disk is set to a Debian image, such as Debian GNU/Linux 12 (bookworm). These instructions use commands that are only available on Debian, such as apt-get.
    4. Click Advanced options.
    5. Click Networking and configure the following fields:
      1. For Network tags, enter allow-ssh, allow-health-check, and allow-proxy-only-subnet.
      2. For Network interfaces, select the following:
        • Network: lb-network
        • Subnet: backend-subnet
    6. Click Management. Enter the following script into the Startup script field.

      #! /bin/bash
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://metadata.google.internal/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | \
      tee /var/www/html/index.html
      systemctl restart apache2
    7. Click Create.

  2. Create a managed instance group. In the Google Cloud console, go to the Instance groups page.

    Go to Instance groups

    1. Click Create instance group.
    2. Select New managed instance group (stateless). For more information, see Stateless or stateful MIGs.
    3. For Name, enter mig-a.
    4. Under Location, select Single zone.
    5. For Region, select REGION_A.
    6. For Zone, select ZONE_A1.
    7. Under Instance template, select int-tcp-proxy-backend-template.
    8. Specify the number of instances that you want to create in the group.

      For this example, specify the following options under Autoscaling:

      • For Autoscaling mode, select Off: do not autoscale.
      • For Maximum number of instances, enter 2.
    9. For Port mapping, click Add port.

      • For Port name, enter tcp80.
      • For Port number, enter 80.
    10. Click Create.

  3. Repeat Step 2 to create a second managed instance group with the following settings:

    1. Name: mig-c
    2. Zone: ZONE_A2
    Keep all other settings the same.

gcloud

The gcloud instructions in this guide assume that you are using Cloud Shell or another environment with bash installed.

  1. Create a VM instance template with an HTTP server by using the gcloud compute instance-templates create command.

    gcloud compute instance-templates create int-tcp-proxy-backend-template \
        --region=REGION_A \
        --network=lb-network \
        --subnet=backend-subnet \
        --tags=allow-ssh,allow-health-check,allow-proxy-only-subnet \
        --image-family=debian-12 \
        --image-project=debian-cloud \
        --metadata=startup-script='#! /bin/bash
          apt-get update
          apt-get install apache2 -y
          a2ensite default-ssl
          a2enmod ssl
          vm_hostname="$(curl -H "Metadata-Flavor:Google" \
          http://metadata.google.internal/computeMetadata/v1/instance/name)"
          echo "Page served from: $vm_hostname" | \
          tee /var/www/html/index.html
          systemctl restart apache2'
  2. Create a managed instance group in the ZONE_A1 zone.

    gcloud compute instance-groups managed create mig-a \
        --zone=ZONE_A1 \
        --size=2 \
        --template=int-tcp-proxy-backend-template

    Replace ZONE_A1 with the name of a zone in the target Google Cloud region.

  3. Create a managed instance group in the ZONE_A2 zone.

    gcloud compute instance-groups managed create mig-c \
        --zone=ZONE_A2 \
        --size=2 \
        --template=int-tcp-proxy-backend-template

    Replace ZONE_A2 with the name of another zone in the target Google Cloud region.
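As an optional check before configuring the load balancer, you can list the instances that each MIG created; a sketch:

```shell
# Optional check: confirm that each MIG created its instances and that
# they reach a RUNNING status before you configure the load balancer.
gcloud compute instance-groups managed list-instances mig-a \
    --zone=ZONE_A1
gcloud compute instance-groups managed list-instances mig-c \
    --zone=ZONE_A2
```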

Configure the load balancer

Console

Start your configuration

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. Click Create load balancer.
  3. For Type of load balancer, select Network Load Balancer (TCP/UDP/SSL) and click Next.
  4. For Proxy or passthrough, select Proxy load balancer and click Next.
  5. For Public facing or internal, select Internal and click Next.
  6. For Cross-region or single region deployment, select Best for regional workloads and click Next.
  7. Click Configure.

Basic configuration

  1. For Name, enter my-int-tcp-lb.
  2. For Region, select REGION_A.
  3. For Network, select lb-network.

Reserve a proxy-only subnet

Note: If you already created the proxy-only subnet, the Reserve subnet button isn't displayed. You can skip this section and continue with the steps in the Backend configuration section.

To reserve a proxy-only subnet:

  1. Click Reserve subnet.
  2. For Name, enter proxy-only-subnet.
  3. For IP address range, enter 10.129.0.0/23.
  4. Click Add.

Backend configuration

  1. Click Backend configuration.
  2. For Backend type, select Instance group.
  3. For Protocol, select TCP.
  4. For Named port, enter tcp80.
  5. In the Health check list, click Create a health check, and then enter the following information:
    • Name: tcp-health-check
    • Protocol: TCP
    • Port: 80
  6. Click Create.
  7. Configure the first backend:
    1. Under New backend, select instance group mig-a.
    2. For Port numbers, enter 80.
    3. Retain the remaining default values and click Done.
  8. Configure the second backend:
    1. Click Add backend.
    2. Under New backend, select instance group mig-c.
    3. For Port numbers, enter 80.
    4. Retain the remaining default values and click Done.
  9. In the Google Cloud console, verify that there is a check mark next to Backend configuration. If not, double-check that you have completed all of the steps.

Frontend configuration

  1. Click Frontend configuration.
  2. For Name, enter int-tcp-forwarding-rule.
  3. For Subnetwork, select backend-subnet.
  4. For IP address, select the IP address reserved previously: LB_IP_ADDRESS.
  5. For Port number, enter 110. The forwarding rule only forwards packets with a matching destination port.
  6. In this example, don't enable the PROXY protocol because it doesn't work with the Apache HTTP Server software. For more information, see Proxy protocol.
  7. Click Done.
  8. In the Google Cloud console, verify that there is a check mark next to Frontend configuration. If not, double-check that you have completed all the previous steps.

Review and finalize

  1. ClickReview and finalize.
  2. Review your load balancer configuration settings.
  3. Optional: Click Equivalent code to view the REST API request that will be used to create the load balancer.
  4. ClickCreate.

gcloud

  1. Create a regional health check.

    gcloud compute health-checks create tcp tcp-health-check \
        --region=REGION_A \
        --use-serving-port
  2. Create a backend service.

    gcloud compute backend-services create internal-tcp-proxy-bs \
        --load-balancing-scheme=INTERNAL_MANAGED \
        --protocol=TCP \
        --region=REGION_A \
        --health-checks=tcp-health-check \
        --health-checks-region=REGION_A
  3. Add the instance groups to your backend service.

    gcloud compute backend-services add-backend internal-tcp-proxy-bs \
        --region=REGION_A \
        --instance-group=mig-a \
        --instance-group-zone=ZONE_A1 \
        --balancing-mode=UTILIZATION \
        --max-utilization=0.8

    gcloud compute backend-services add-backend internal-tcp-proxy-bs \
        --region=REGION_A \
        --instance-group=mig-c \
        --instance-group-zone=ZONE_A2 \
        --balancing-mode=UTILIZATION \
        --max-utilization=0.8
  4. Create an internal target TCP proxy.

    gcloud compute target-tcp-proxies create int-tcp-target-proxy \
        --backend-service=internal-tcp-proxy-bs \
        --proxy-header=NONE \
        --region=REGION_A

    If you want to turn on the proxy header, set it to PROXY_V1 instead of NONE. In this example, don't enable the PROXY protocol because it doesn't work with the Apache HTTP Server software. For more information, see Proxy protocol.

  5. Create the forwarding rule. For --ports, specify a single port number from 1-65535. This example uses port 110. The forwarding rule only forwards packets with a matching destination port.

    gcloud compute forwarding-rules create int-tcp-forwarding-rule \
        --load-balancing-scheme=INTERNAL_MANAGED \
        --network=lb-network \
        --subnet=backend-subnet \
        --region=REGION_A \
        --target-tcp-proxy=int-tcp-target-proxy \
        --target-tcp-proxy-region=REGION_A \
        --address=int-tcp-ip-address \
        --ports=110
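After the frontend is created, an optional sanity check is to describe the forwarding rule and confirm its IP address and port; a sketch:

```shell
# Optional check: confirm that the forwarding rule exists and inspect
# its IP address, port range, and target proxy.
gcloud compute forwarding-rules describe int-tcp-forwarding-rule \
    --region=REGION_A
```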

Test your load balancer

To test the load balancer, create a client VM in the same region as the load balancer. Then send traffic from the client to the load balancer.

Create a client VM

Create a client VM (client-vm) in the same region as the load balancer.

Console

  1. In the Google Cloud console, go to the VM instances page.

    Go to VM instances

  2. Click Create instance.

  3. Set Name to client-vm.

  4. Set Zone to ZONE_A1.

  5. Click Advanced options.

  6. Click Networking and configure the following fields:

    1. For Network tags, enter allow-ssh.
    2. For Network interfaces, select the following:
      • Network: lb-network
      • Subnet: backend-subnet
  7. Click Create.

gcloud

The client VM must be in the same VPC network and region as the load balancer. It doesn't need to be in the same subnet or zone. For convenience, this example's client uses the same subnet as the backend VMs.

gcloud compute instances create client-vm \
    --zone=ZONE_A1 \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --tags=allow-ssh \
    --subnet=backend-subnet

Send traffic to the load balancer

Note: It might take a few minutes for the load balancer configuration to propagate globally after you first deploy it.

Now that you have configured your load balancer, you can test sending traffic to the load balancer's IP address.

  1. Use SSH to connect to the client instance.

    gcloud compute ssh client-vm \
        --zone=ZONE_A1
  2. Verify that the load balancer is serving backend hostnames as expected.

    1. Use the gcloud compute addresses describe command to view the load balancer's IP address:

      gcloud compute addresses describe int-tcp-ip-address \
          --region=REGION_A

      Make a note of the IP address.

    2. Send traffic to the load balancer. Replace IP_ADDRESS with the IP address of the load balancer.

      curl IP_ADDRESS:110
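Because the startup script writes each VM's name into its index page, every successful request returns a line identifying the backend that served it. A sketch of a repeated test (the backend VM names in the illustrative output are placeholders):

```shell
# Send several requests; responses should come from different backend VMs.
# Replace IP_ADDRESS with the load balancer's IP address.
for i in 1 2 3 4; do
  curl -s IP_ADDRESS:110
done
# Illustrative output:
# Page served from: mig-a-abcd
# Page served from: mig-c-wxyz
```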

Additional configuration options

This section expands on the configuration example to provide alternative andadditional configuration options. All of the tasks are optional. You canperform them in any order.

Enable global access

You can enable global access for your load balancer to make it accessible to clients in all regions. The backends of your example load balancer must still be located in one region (REGION_A).

Regional internal proxy Network Load Balancer with global access

You can't modify an existing regional forwarding rule to enable global access. You must create a new forwarding rule for this purpose. Additionally, after a forwarding rule has been created with global access enabled, it cannot be modified. To disable global access, you must create a new forwarding rule without global access and delete the forwarding rule that has global access enabled.

To configure global access, make the following configuration changes.

Console

Create a new forwarding rule for the load balancer:

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. In the Name column, click your load balancer.

  3. Click Frontend configuration.

  4. Click Add frontend IP and port.

  5. Enter the name and subnet details for the new forwarding rule.

  6. For Subnetwork, select backend-subnet.

  7. For IP address, you can either select the same IP address as an existing forwarding rule, reserve a new IP address, or use an ephemeral IP address. Sharing the same IP address across multiple forwarding rules is only possible if you set the IP address --purpose flag to SHARED_LOADBALANCER_VIP while creating the IP address.

  8. For Port number, enter 110.

  9. For Global access, select Enable.

  10. Click Done.

  11. Click Update.

gcloud

  1. Create a new forwarding rule for the load balancer with the --allow-global-access flag.

    gcloud compute forwarding-rules create int-tcp-forwarding-rule-global-access \
        --load-balancing-scheme=INTERNAL_MANAGED \
        --network=lb-network \
        --subnet=backend-subnet \
        --region=REGION_A \
        --target-tcp-proxy=int-tcp-target-proxy \
        --target-tcp-proxy-region=REGION_A \
        --address=int-tcp-ip-address \
        --ports=110 \
        --allow-global-access
  2. You can use the gcloud compute forwarding-rules describe command to determine whether a forwarding rule has global access enabled. For example:

    gcloud compute forwarding-rules describe int-tcp-forwarding-rule-global-access \
        --region=REGION_A \
        --format="get(name,region,allowGlobalAccess)"

    When global access is enabled, the word True appears in the output after the name and region of the forwarding rule.

Create a client VM to test global access

Console

  1. In the Google Cloud console, go to the VM instances page.

    Go to VM instances

  2. Click Create instance.

  3. Set Name to test-global-access-vm.

  4. Set Zone to ZONE_B1.

  5. Click Advanced options.

  6. Click Networking and configure the following fields:

    1. For Network tags, enter allow-ssh.
    2. For Network interfaces, select the following:
      • Network: lb-network
      • Subnet: test-global-access-subnet
  7. Click Create.

gcloud

Create a client VM in the ZONE_B1 zone.

gcloud compute instances create test-global-access-vm \
    --zone=ZONE_B1 \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --tags=allow-ssh \
    --subnet=test-global-access-subnet

Replace ZONE_B1 with the name of a zone in the REGION_B region.

Connect to the client VM and test connectivity

  1. Use SSH to connect to the client instance:

    gcloud compute ssh test-global-access-vm \
        --zone=ZONE_B1
  2. Use the gcloud compute addresses describe command to get the load balancer's IP address:

    gcloud compute addresses describe int-tcp-ip-address \
        --region=REGION_A

    Make a note of the IP address.

  3. Send traffic to the load balancer; replace IP_ADDRESS with the IP address of the load balancer:

    curl IP_ADDRESS:110

PROXY protocol for retaining client connection information

The proxy Network Load Balancer terminates TCP connections from the client and creates new connections to the instances. By default, the original client IP and port information is not preserved.

To preserve and send the original connection information to your instances, enable PROXY protocol version 1. This protocol sends an additional header that contains the source IP address, destination IP address, and port numbers to the instance as a part of the request.

Make sure that the proxy Network Load Balancer's backend instances are running servers that support PROXY protocol headers. If the servers are not configured to support PROXY protocol headers, the backend instances return empty responses.

If you set the PROXY protocol for user traffic, you can also set it for your health checks. If you are checking health and serving content on the same port, set the health check's --proxy-header to match your load balancer setting.
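For example, if you enable the PROXY protocol on the load balancer and health-check the same serving port, a matching regional TCP health check might look like the following sketch (the health check name here is illustrative):

```shell
# Sketch: a TCP health check whose probes also send the PROXY protocol
# v1 header, matching backends that expect that header on the serving port.
gcloud compute health-checks create tcp tcp-health-check-proxy \
    --region=REGION_A \
    --use-serving-port \
    --proxy-header=PROXY_V1
```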

The PROXY protocol header is typically a single line of user-readable text in the following format:

PROXY TCP4 <client IP> <load balancing IP> <source port> <dest port>\r\n

The following example shows a PROXY protocol header:

PROXY TCP4 192.0.2.1 198.51.100.1 15221 110\r\n

In the preceding example, the client IP is 192.0.2.1, the load balancing IP is 198.51.100.1, the client port is 15221, and the destination port is 110.

When the client IP is not known, the load balancer generates a PROXY protocol header in the following format:

PROXY UNKNOWN\r\n
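Because the v1 header is a single whitespace-delimited text line, a backend can split it into fields before handling the payload. A minimal bash sketch, not tied to any particular server software:

```shell
# Parse the space-separated fields of a PROXY protocol v1 header line.
header='PROXY TCP4 192.0.2.1 198.51.100.1 15221 110'
read -r keyword proto client_ip lb_ip client_port dest_port <<< "$header"
echo "client=${client_ip}:${client_port} dest=${lb_ip}:${dest_port}"
```

For the sample header above, this prints `client=192.0.2.1:15221 dest=198.51.100.1:110`.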

Update PROXY protocol header for target proxy

You cannot update the PROXY protocol header on an existing target proxy. You have to create a new target proxy with the required setting for the PROXY protocol header. Use these steps to create a new frontend with the required settings:

Console

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. Click the name of the load balancer that you want to edit.
  3. Click Edit for your load balancer.
  4. Click Frontend configuration.
  5. Delete the old frontend IP and port.
  6. Click Add frontend IP and port.
    1. For Name, enter int-tcp-forwarding-rule.
    2. For Subnetwork, select backend-subnet.
    3. For IP address, select the IP address reserved previously: LB_IP_ADDRESS.
    4. For Port number, enter 110. The forwarding rule only forwards packets with a matching destination port.
    5. Change the value of the Proxy protocol field to On.
    6. Click Done.
  7. Click Update to save your changes.

gcloud

  1. In the following command, edit the --proxy-header flag and set it to either NONE or PROXY_V1, depending on your requirement.

    gcloud compute target-tcp-proxies create TARGET_PROXY_NAME \
        --backend-service=BACKEND_SERVICE \
        --proxy-header=[NONE | PROXY_V1] \
        --region=REGION
  2. Delete the existing forwarding rule.

    gcloud compute forwarding-rules delete int-tcp-forwarding-rule \
        --region=REGION
  3. Create a new forwarding rule and associate it with the target proxy.

    gcloud compute forwarding-rules create int-tcp-forwarding-rule \
        --load-balancing-scheme=INTERNAL_MANAGED \
        --network=lb-network \
        --subnet=backend-subnet \
        --region=REGION \
        --target-tcp-proxy=TARGET_PROXY_NAME \
        --target-tcp-proxy-region=REGION \
        --address=LB_IP_ADDRESS \
        --ports=110

Enable session affinity

The example configuration creates a backend service without session affinity.

These procedures show you how to update the backend service for the example regional internal proxy Network Load Balancer so that the backend service uses client IP affinity.

When client IP affinity is enabled, the load balancer directs a particular client's requests to the same backend VM based on a hash created from the client's IP address and the load balancer's IP address (the internal IP address of an internal forwarding rule).
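The following toy sketch illustrates the idea only; it is not the load balancer's actual hash. Hashing the client IP together with the load balancer IP yields a stable backend index, so the same client keeps landing on the same backend:

```shell
# Toy illustration of client IP affinity (not Google's real hash):
# the same (client IP, LB IP) pair always maps to the same backend index.
client_ip="192.0.2.1"
lb_ip="10.1.2.99"
hash=$(printf '%s%s' "$client_ip" "$lb_ip" | cksum | cut -d' ' -f1)
backend_index=$((hash % 2))
echo "backend index: ${backend_index}"
```

Rerunning the computation with the same addresses always produces the same index, which is the property affinity relies on.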

Console

To enable client IP session affinity:

  1. In the Google Cloud console, go to the Load balancing page.
    Go to Load balancing
  2. Click Backends.
  3. Click internal-tcp-proxy-bs (the name of the backend service that you created for this example) and click Edit.
  4. On the Backend service details page, click Advanced configuration.
  5. Under Session affinity, select Client IP from the menu.
  6. Click Update.

gcloud

Use the following Google Cloud CLI command to update the internal-tcp-proxy-bs backend service, specifying client IP session affinity:

gcloud compute backend-services update internal-tcp-proxy-bs \
    --region=REGION_A \
    --session-affinity=CLIENT_IP

Enable connection draining

You can enable connection draining on backend services to ensure minimal interruption to your users when an instance that is serving traffic is terminated, removed manually, or removed by an autoscaler. To learn more about connection draining, read the Enabling connection draining documentation.

What's next

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2025-12-15 UTC.