Set up a regional internal Application Load Balancer with hybrid connectivity
This page illustrates how to deploy a regional internal Application Load Balancer to load balance traffic to network endpoints that are on-premises or in other public clouds and are reachable by using hybrid connectivity.

If you haven't already done so, review the Hybrid connectivity NEGs overview to understand the network requirements to set up hybrid load balancing.
Setup overview
The example on this page sets up the following deployment:
You must configure hybrid connectivity before setting up a hybrid load balancing deployment. This page does not include the hybrid connectivity setup.

Depending on your choice of hybrid connectivity product (either Cloud VPN or Cloud Interconnect, Dedicated or Partner), use the relevant product documentation.
Permissions
To set up hybrid load balancing, you must have the following permissions:
On Google Cloud
- Permissions to establish hybrid connectivity between Google Cloud and your on-premises environment or other cloud environments. For the list of permissions needed, see the relevant Network Connectivity product documentation.
- Permissions to create a hybrid connectivity NEG and the load balancer. The Compute Load Balancer Admin role (`roles/compute.loadBalancerAdmin`) contains the permissions required to perform the tasks described in this guide.
On your on-premises environment or other non-Google Cloud cloud environment
- Permissions to configure network endpoints that allow services on your on-premises environment or other cloud environments to be reachable from Google Cloud by using an `IP:Port` combination. For more information, contact your environment's network administrator.
- Permissions to create firewall rules on your on-premises environment or other cloud environments to allow Google's health check probes to reach the endpoints.
Additionally, to complete the instructions on this page, you need to create a hybrid connectivity NEG, a load balancer, and zonal NEGs (and their endpoints) to serve as Google Cloud-based backends for the load balancer.

You should be either a project Owner or Editor, or you should have the following Compute Engine IAM roles.
| Task | Required role |
|---|---|
| Create networks, subnets, and load balancer components | Compute Network Admin (roles/compute.networkAdmin) |
| Add and remove firewall rules | Compute Security Admin (roles/compute.securityAdmin) |
| Create instances | Compute Instance Admin (roles/compute.instanceAdmin) |
Establish hybrid connectivity
Your Google Cloud and on-premises environment or other cloud environments must be connected through hybrid connectivity by using either Cloud Interconnect VLAN attachments or Cloud VPN tunnels with Cloud Router, or Router appliance VMs. We recommend that you use a high availability connection.

A Cloud Router enabled with global dynamic routing learns about the specific endpoint through Border Gateway Protocol (BGP) and programs it into your Google Cloud VPC network. Regional dynamic routing is not supported. Static routes are also not supported.

You can use either the same network or a different VPC network within the same project to configure both hybrid networking (Cloud Interconnect, Cloud VPN, or a Router appliance VM) and the load balancer. Note the following:

If you use different VPC networks, the two networks must be connected using VPC Network Peering, or they must be VPC spokes on the same Network Connectivity Center hub.

If you use the same VPC network, ensure that your VPC network's subnet CIDR ranges don't conflict with your remote CIDR ranges. When IP addresses overlap, subnet routes are prioritized over remote connectivity.
For instructions, see the following documentation:
Important: Don't proceed with the instructions on this page until you set up hybrid connectivity between your environments.

Set up your environment that is outside Google Cloud
Perform the following steps to set up your on-premises environment or other cloud environment for hybrid load balancing:
- Configure network endpoints to expose on-premises services to Google Cloud (`IP:Port`).
- Configure firewall rules on your on-premises environment or other cloud environment.
- Configure Cloud Router to advertise certain required routes to your private environment.
Set up network endpoints
After you set up hybrid connectivity, you configure one or more network endpoints within your on-premises environment or other cloud environments that are reachable through Cloud Interconnect, Cloud VPN, or Router appliance by using an `IP:port` combination. This `IP:port` combination is configured as one or more endpoints for the hybrid connectivity NEG that is created in Google Cloud later in this process.

If there are multiple paths to the IP endpoint, routing follows the behavior described in the Cloud Router overview.
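As an informal sketch (not part of the official procedure), an on-premises network endpoint can be any service listening on a routable `IP:port`. For example, a throwaway HTTP server started on the on-premises host; the address `10.1.2.3` and port `9000` here are hypothetical placeholders for the `IP:port` you will later register in the hybrid NEG:

```shell
# Hypothetical on-premises host: serve HTTP on a routable IP:port.
# 10.1.2.3:9000 is a placeholder; substitute the address and port you
# will register as a hybrid NEG endpoint.
python3 -m http.server 9000 --bind 10.1.2.3
```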
Set up firewall rules
The following firewall rules must be created on your on-premises environment or other cloud environment:

- Create an ingress allow firewall rule in on-premises or other cloud environments to allow traffic from the region's proxy-only subnet to reach the endpoints.

Allowing traffic from Google's health check probe ranges isn't required for hybrid NEGs. However, if you're using a combination of hybrid and zonal NEGs in a single backend service, you need to allow traffic from the Google health check probe ranges for the zonal NEGs.
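On a Linux host, such an ingress rule might look like the following iptables sketch. This is an illustrative assumption, not part of the official procedure; your environment's firewall tooling may differ, and PROXY_ONLY_SUBNET_RANGE stands in for the actual proxy-only subnet CIDR:

```shell
# Allow TCP traffic from the load balancer's proxy-only subnet to the
# endpoint port (80 here). PROXY_ONLY_SUBNET_RANGE is a placeholder
# for the proxy-only subnet CIDR range.
iptables -A INPUT -p tcp -s PROXY_ONLY_SUBNET_RANGE --dport 80 -j ACCEPT
```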
Advertise routes
Configure Cloud Router to advertise the following custom IP ranges to your on-premises environment or other cloud environment:

- The range of the region's proxy-only subnet.
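As a hedged sketch, custom advertisement of the proxy-only subnet range can be added to an existing BGP session with `gcloud compute routers update-bgp-peer`. ROUTER_NAME and PEER_NAME are assumed placeholders for your Cloud Router and BGP peer names:

```shell
# Advertise the proxy-only subnet range (in addition to all subnets)
# to the on-premises BGP peer. ROUTER_NAME and PEER_NAME are assumed.
gcloud compute routers update-bgp-peer ROUTER_NAME \
    --peer-name=PEER_NAME \
    --region=REGION \
    --advertisement-mode=CUSTOM \
    --set-advertisement-groups=ALL_SUBNETS \
    --set-advertisement-ranges=PROXY_ONLY_SUBNET_RANGE
```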
Set up Google Cloud environment
For the following steps, make sure you use the same VPC network (called NETWORK in this procedure) that was used to configure hybrid connectivity between the environments.

Additionally, make sure the region used (called REGION in this procedure) is the same as that used to create the Cloud VPN tunnel or Cloud Interconnect VLAN attachment.
Configure the proxy-only subnet
This proxy-only subnet is used for all regional Envoy-based load balancers in the REGION region.
Console
- In the Google Cloud console, go to the VPC networks page.

  Go to VPC networks
- Go to the network that was used to configure hybrid connectivity between the environments.
- Click Add subnet.
- Enter a Name: PROXY_ONLY_SUBNET_NAME.
- Select a Region: REGION.
- Set Purpose to Regional Managed Proxy.
- Enter an IP address range: PROXY_ONLY_SUBNET_RANGE.
- Click Add.
gcloud
Create the proxy-only subnet with the `gcloud compute networks subnets create` command.

```
gcloud compute networks subnets create PROXY_ONLY_SUBNET_NAME \
    --purpose=REGIONAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=REGION \
    --network=NETWORK \
    --range=PROXY_ONLY_SUBNET_RANGE
```
Configure the load balancer subnet
This subnet is used to create the load balancer's zonal NEG backends, the frontend, and the internal IP address.

Console

- In the Google Cloud console, go to the VPC networks page.

  Go to VPC networks
- Go to the network that was used to configure hybrid connectivity between the environments.
- In the Subnets section:
  - Set the Subnet creation mode to Custom.
  - In the New subnet section, enter the following information:
    - Name: LB_SUBNET_NAME
    - Region: REGION
    - IP address range: LB_SUBNET_RANGE
  - Click Done.
- Click Create.
gcloud
Create a subnet in the network that was used to configure hybrid connectivity between the environments.

```
gcloud compute networks subnets create LB_SUBNET_NAME \
    --network=NETWORK \
    --range=LB_SUBNET_RANGE \
    --region=REGION
```
Reserve the load balancer's IP address
By default, one IP address is used for each forwarding rule. You can reserve a shared IP address, which lets you use the same IP address with multiple forwarding rules. However, if you want to publish the load balancer by using Private Service Connect, don't use a shared IP address for the forwarding rule.
Console
You can reserve a standalone internal IP address using the Google Cloud console.

- Go to the VPC networks page.
- Click the network that was used to configure hybrid connectivity between the environments.
- Click Static internal IP addresses and then click Reserve static address.
- Enter a Name: LB_IP_ADDRESS.
- For the Subnet, select LB_SUBNET_NAME.
- If you want to specify which IP address to reserve, under Static IP address, select Let me choose, and then fill in a Custom IP address. Otherwise, the system automatically assigns an IP address in the subnet for you.
- If you want to use this IP address with multiple forwarding rules, under Purpose, choose Shared.
- Click Reserve to finish the process.
gcloud
Using the gcloud CLI, run the `compute addresses create` command:

```
gcloud compute addresses create LB_IP_ADDRESS \
    --region=REGION \
    --subnet=LB_SUBNET_NAME
```

Use the `compute addresses describe` command to view the allocated IP address:

```
gcloud compute addresses describe LB_IP_ADDRESS \
    --region=REGION
```

If you want to use the same IP address with multiple forwarding rules, specify `--purpose=SHARED_LOADBALANCER_VIP`.
Create firewall rules for zonal NEGs
In this example, you create the following firewall rules for the zonal NEG backends on Google Cloud:

- `fw-allow-health-check`: An ingress rule, applicable to the instances being load balanced, that allows traffic from Google Cloud health checking systems (`130.211.0.0/22` and `35.191.0.0/16`). This example uses the target tag `allow-health-check` to identify the backend VMs to which it should apply. Allowing traffic from Google's health check probe ranges isn't required for hybrid NEGs. However, if you're using a combination of hybrid and zonal NEGs in a single backend service, you need to allow traffic from the Google health check probe ranges for the zonal NEGs.
- `fw-allow-ssh`: An ingress rule that allows incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP range for this rule; for example, you can specify just the IP ranges of the systems from which you will initiate SSH sessions. This example uses the target tag `allow-ssh` to identify the VMs to which it should apply.
- `fw-allow-proxy-only-subnet`: An ingress rule that allows connections from the proxy-only subnet to reach the backends.
Console
- In the Google Cloud console, go to the Firewall policies page.

  Go to Firewall policies
- Click Create firewall rule to create the rule to allow traffic from health check probes:
  - Enter a Name of `fw-allow-health-check`.
  - Under Network, select NETWORK.
  - Under Targets, select Specified target tags.
  - Populate the Target tags field with `allow-health-check`.
  - Set Source filter to IPv4 ranges.
  - Set Source IPv4 ranges to `130.211.0.0/22` and `35.191.0.0/16`.
  - Under Protocols and ports, select Specified protocols and ports.
  - Select TCP and then enter `80` for the port number.
  - Click Create.
- Click Create firewall rule again to create the rule to allow incoming SSH connections:
  - Name: `fw-allow-ssh`
  - Network: NETWORK
  - Priority: `1000`
  - Direction of traffic: ingress
  - Action on match: allow
  - Targets: Specified target tags
  - Target tags: `allow-ssh`
  - Source filter: IPv4 ranges
  - Source IPv4 ranges: `0.0.0.0/0`
  - Protocols and ports: Choose Specified protocols and ports.
  - Select TCP and then enter `22` for the port number.
  - Click Create.
- Click Create firewall rule again to create the rule to allow incoming connections from the proxy-only subnet:
  - Name: `fw-allow-proxy-only-subnet`
  - Network: NETWORK
  - Priority: `1000`
  - Direction of traffic: ingress
  - Action on match: allow
  - Targets: Specified target tags
  - Target tags: `allow-proxy-only-subnet`
  - Source filter: IPv4 ranges
  - Source IPv4 ranges: PROXY_ONLY_SUBNET_RANGE
  - Protocols and ports: Choose Specified protocols and ports.
  - Select TCP and then enter `80` for the port number.
  - Click Create.
gcloud
Create the `fw-allow-health-check` rule to allow the Google Cloud health checks to reach the backend instances on TCP port `80`:

```
gcloud compute firewall-rules create fw-allow-health-check \
    --network=NETWORK \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-health-check \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --rules=tcp:80
```
Create the `fw-allow-ssh` firewall rule to allow SSH connectivity to VMs with the network tag `allow-ssh`. When you omit `source-ranges`, Google Cloud interprets the rule to mean any source.

```
gcloud compute firewall-rules create fw-allow-ssh \
    --network=NETWORK \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-ssh \
    --rules=tcp:22
```
Create an ingress allow firewall rule for the proxy-only subnet to allow the load balancer to communicate with backend instances on TCP port `80`:

```
gcloud compute firewall-rules create fw-allow-proxy-only-subnet \
    --network=NETWORK \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-proxy-only-subnet \
    --source-ranges=PROXY_ONLY_SUBNET_RANGE \
    --rules=tcp:80
```
Set up the zonal NEG
For Google Cloud-based backends, we recommend that you configure multiple zonal NEGs in the same region where you configured hybrid connectivity.

For this example, we set up a zonal NEG (with `GCE_VM_IP_PORT` type endpoints) in the REGION region. First create the VMs in the GCP_NEG_ZONE zone. Then create a zonal NEG in the same GCP_NEG_ZONE and add the VMs' network endpoints to the NEG.
Create VMs
Console
- Go to the VM instances page in the Google Cloud console.

  Go to VM instances
- Click Create instance.
- Set the Name to `vm-a1`.
- For the Region, choose REGION, and choose any Zone. This zone is referred to as GCP_NEG_ZONE in this procedure.
- In the Boot disk section, ensure that Debian GNU/Linux 12 (bookworm) is selected for the boot disk options. Click Choose to change the image if necessary.
- Click Advanced options and make the following changes:
  - Click Networking and add the following Network tags: `allow-ssh`, `allow-health-check`, and `allow-proxy-only-subnet`.
  - Click Edit under Network interfaces, make the following changes, and then click Done:
    - Network: NETWORK
    - Subnet: LB_SUBNET_NAME
    - Primary internal IP: Ephemeral (automatic)
    - External IP: Ephemeral
  - Click Management. In the Startup script field, copy and paste the following script contents. The script contents are identical for both VMs:

    ```
    #! /bin/bash
    apt-get update
    apt-get install apache2 -y
    a2ensite default-ssl
    a2enmod ssl
    vm_hostname="$(curl -H "Metadata-Flavor:Google" \
    http://metadata.google.internal/computeMetadata/v1/instance/name)"
    echo "Page served from: $vm_hostname" | \
    tee /var/www/html/index.html
    systemctl restart apache2
    ```
- Click Create.
- Repeat the preceding steps to create a second VM, using the following name and zone combination:
  - Name: `vm-a2`, zone: GCP_NEG_ZONE
gcloud
Create the VMs by running the following command two times, using these combinations for the name of the VM and its zone. The script contents are identical for both VMs.

- VM_NAME of `vm-a1` and any GCP_NEG_ZONE zone of your choice
- VM_NAME of `vm-a2` and the same GCP_NEG_ZONE zone

```
gcloud compute instances create VM_NAME \
    --zone=GCP_NEG_ZONE \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --tags=allow-ssh,allow-health-check,allow-proxy-only-subnet \
    --subnet=LB_SUBNET_NAME \
    --metadata=startup-script='#! /bin/bash
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://metadata.google.internal/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | \
      tee /var/www/html/index.html
      systemctl restart apache2'
```
Create the zonal NEG
Console
To create a zonal network endpoint group:
- Go to the Network Endpoint Groups page in the Google Cloud console.

  Go to the Network Endpoint Groups page
- Click Create network endpoint group.
- Enter a Name for the zonal NEG. It is referred to as GCP_NEG_NAME in this procedure.
- Select the Network endpoint group type: Network endpoint group (Zonal).
- Select the Network: NETWORK
- Select the Subnet: LB_SUBNET_NAME
- Select the Zone: GCP_NEG_ZONE
- Enter the Default port: `80`.
- Click Create.
Add endpoints to the zonal NEG:
- Go to the Network Endpoint Groups page in the Google Cloud console.

  Go to the Network Endpoint Groups page
- Click the Name of the network endpoint group created in the previous step (GCP_NEG_NAME). You see the Network endpoint group details page.
- In the Network endpoints in this group section, click Add network endpoint. You see the Add network endpoint page.
- Select a VM instance to add its internal IP addresses as network endpoints. In the Network interface section, the name, zone, and subnet of the VM is displayed.
- In the IPv4 address field, enter the IPv4 address of the new network endpoint.
- Select the Port type.
  - If you select Default, the endpoint uses the default port `80` for all endpoints in the network endpoint group. This is sufficient for our example because the Apache server is serving requests at port `80`.
  - If you select Custom, enter the Port number for the endpoint to use.
- To add more endpoints, click Add network endpoint and repeat the previous steps.
- After you add all the endpoints, click Create.
gcloud
Create a zonal NEG (with `GCE_VM_IP_PORT` endpoints) using the `gcloud compute network-endpoint-groups create` command:

```
gcloud compute network-endpoint-groups create GCP_NEG_NAME \
    --network-endpoint-type=GCE_VM_IP_PORT \
    --zone=GCP_NEG_ZONE \
    --network=NETWORK \
    --subnet=LB_SUBNET_NAME
```

You can either specify a `--default-port` while creating the NEG, or specify a port number for each endpoint as shown in the next step.

Add endpoints to GCP_NEG_NAME:

```
gcloud compute network-endpoint-groups update GCP_NEG_NAME \
    --zone=GCP_NEG_ZONE \
    --add-endpoint='instance=vm-a1,port=80' \
    --add-endpoint='instance=vm-a2,port=80'
```
Set up the hybrid connectivity NEG
Note: If you're using distributed Envoy health checks with hybrid connectivity NEG backends (supported only for Envoy-based load balancers), make sure that you configure unique network endpoints for all the NEGs attached to the same backend service. Adding the same network endpoint to multiple NEGs results in undefined behavior.

When creating the NEG, use a ZONE that minimizes the geographic distance between Google Cloud and your on-premises or other cloud environment. For example, if you are hosting a service in an on-premises environment in Frankfurt, Germany, you can specify the `europe-west3-a` Google Cloud zone when you create the NEG.

Moreover, if you're using Cloud Interconnect, the ZONE used to create the NEG should be in the same region where the Cloud Interconnect attachment was configured.

For the available regions and zones, see the Compute Engine documentation: Available regions and zones.
Console
To create a hybrid connectivity network endpoint group:
- Go to the Network Endpoint Groups page in the Google Cloud console.

  Go to Network endpoint groups
- Click Create network endpoint group.
- Enter a Name for the hybrid NEG. It is referred to as ON_PREM_NEG_NAME in this procedure.
- Select the Network endpoint group type: Hybrid connectivity network endpoint group (Zonal).
- Select the Network: NETWORK
- Select the Subnet: LB_SUBNET_NAME
- Select the Zone: ON_PREM_NEG_ZONE
- Enter the Default port.
- Click Create.
Add endpoints to the hybrid connectivity NEG:
- Go to the Network Endpoint Groups page in the Google Cloud console.

  Go to the Network Endpoint Groups page
- Click the Name of the network endpoint group created in the previous step (ON_PREM_NEG_NAME). You see the Network endpoint group details page.
- In the Network endpoints in this group section, click Add network endpoint. You see the Add network endpoint page.
- Enter the IP address of the new network endpoint.
- Select the Port type.
  - If you select Default, the endpoint uses the default port for all endpoints in the network endpoint group.
  - If you select Custom, you can enter a different Port number for the endpoint to use.
- To add more endpoints, click Add network endpoint and repeat the previous steps.
- After you add all the non-Google Cloud endpoints, click Create.
gcloud
Create a hybrid connectivity NEG using the `gcloud compute network-endpoint-groups create` command:

```
gcloud compute network-endpoint-groups create ON_PREM_NEG_NAME \
    --network-endpoint-type=NON_GCP_PRIVATE_IP_PORT \
    --zone=ON_PREM_NEG_ZONE \
    --network=NETWORK
```

Add the on-premises backend VM endpoints to ON_PREM_NEG_NAME:

```
gcloud compute network-endpoint-groups update ON_PREM_NEG_NAME \
    --zone=ON_PREM_NEG_ZONE \
    --add-endpoint="ip=ON_PREM_IP_ADDRESS_1,port=PORT_1" \
    --add-endpoint="ip=ON_PREM_IP_ADDRESS_2,port=PORT_2"
```

You can use this command to add the network endpoints you previously configured on-premises or in your cloud environment. Repeat `--add-endpoint` as many times as needed.
You can repeat these steps to create multiple hybrid NEGs if needed.
Configure the load balancer
Console
Note: You cannot use the Google Cloud console to create a load balancer that has mixed zonal and hybrid connectivity NEG backends in a single backend service. Use either gcloud or the REST API instead.

gcloud
- Create a health check for the backends.

  Health check probes for hybrid NEG backends originate from Envoy proxies in the proxy-only subnet, whereas probes for zonal NEG backends originate from [Google's central probe IP ranges](/load-balancing/docs/health-check-concepts#ip-ranges).

  ```
  gcloud compute health-checks create http HTTP_HEALTH_CHECK_NAME \
      --region=REGION \
      --use-serving-port
  ```
- Create a backend service for Google Cloud-based backends. You add both the zonal NEG and the hybrid connectivity NEG as backends to this backend service.

  ```
  gcloud compute backend-services create BACKEND_SERVICE \
      --load-balancing-scheme=INTERNAL_MANAGED \
      --protocol=HTTP \
      --health-checks=HTTP_HEALTH_CHECK_NAME \
      --health-checks-region=REGION \
      --region=REGION
  ```
- Add the zonal NEG as a backend to the backend service. For details about configuring the balancing mode, see the gcloud CLI documentation for the `--max-rate-per-endpoint` parameter.

  ```
  gcloud compute backend-services add-backend BACKEND_SERVICE \
      --region=REGION \
      --balancing-mode=RATE \
      --max-rate-per-endpoint=MAX_REQUEST_RATE_PER_ENDPOINT \
      --network-endpoint-group=GCP_NEG_NAME \
      --network-endpoint-group-zone=GCP_NEG_ZONE
  ```
- Add the hybrid NEG as a backend to the backend service.

  ```
  gcloud compute backend-services add-backend BACKEND_SERVICE \
      --region=REGION \
      --balancing-mode=RATE \
      --max-rate-per-endpoint=MAX_REQUEST_RATE_PER_ENDPOINT \
      --network-endpoint-group=ON_PREM_NEG_NAME \
      --network-endpoint-group-zone=ON_PREM_NEG_ZONE
  ```
- Create a URL map to route incoming requests to the backend service:

  ```
  gcloud compute url-maps create URL_MAP_NAME \
      --default-service BACKEND_SERVICE \
      --region=REGION
  ```
- Optional: Perform this step if you are using HTTPS between the client and load balancer. This is not required for HTTP load balancers.

  You can create either Compute Engine or Certificate Manager certificates. Use any of the following methods to create certificates using Certificate Manager:

  - Regional self-managed certificates. For information about creating and using regional self-managed certificates, see Deploy a regional self-managed certificate. Certificate maps aren't supported.
  - Regional Google-managed certificates. Certificate maps aren't supported.

    The following types of regional Google-managed certificates are supported by Certificate Manager:

    - Regional Google-managed certificates with per-project DNS authorization. For more information, see Deploy a regional Google-managed certificate with DNS authorization.
    - Regional Google-managed (private) certificates with Certificate Authority Service. For more information, see Deploy a regional Google-managed certificate with Certificate Authority Service.

  After you create certificates, attach the certificate directly to the target proxy.

  To create a Compute Engine self-managed SSL certificate resource:

  ```
  gcloud compute ssl-certificates create SSL_CERTIFICATE_NAME \
      --certificate CRT_FILE_PATH \
      --private-key KEY_FILE_PATH
  ```
- Create a target HTTP(S) proxy to route requests to your URL map.

  For an HTTP load balancer, create an HTTP target proxy:

  ```
  gcloud compute target-http-proxies create TARGET_HTTP_PROXY_NAME \
      --url-map=URL_MAP_NAME \
      --url-map-region=REGION \
      --region=REGION
  ```

  For an HTTPS load balancer, create an HTTPS target proxy. The proxy is the portion of the load balancer that holds the SSL certificate for HTTPS load balancing, so you also load your certificate in this step.

  ```
  gcloud compute target-https-proxies create TARGET_HTTPS_PROXY_NAME \
      --ssl-certificates=SSL_CERTIFICATE_NAME \
      --url-map=URL_MAP_NAME \
      --url-map-region=REGION \
      --region=REGION
  ```
- Create a forwarding rule to route incoming requests to the proxy. Don't use the proxy-only subnet to create the forwarding rule.

  For an HTTP load balancer:

  ```
  gcloud compute forwarding-rules create HTTP_FORWARDING_RULE_NAME \
      --load-balancing-scheme=INTERNAL_MANAGED \
      --network=NETWORK \
      --subnet=LB_SUBNET_NAME \
      --address=LB_IP_ADDRESS \
      --ports=80 \
      --region=REGION \
      --target-http-proxy=TARGET_HTTP_PROXY_NAME \
      --target-http-proxy-region=REGION
  ```

  For an HTTPS load balancer:

  ```
  gcloud compute forwarding-rules create HTTPS_FORWARDING_RULE_NAME \
      --load-balancing-scheme=INTERNAL_MANAGED \
      --network=NETWORK \
      --subnet=LB_SUBNET_NAME \
      --address=LB_IP_ADDRESS \
      --ports=443 \
      --region=REGION \
      --target-https-proxy=TARGET_HTTPS_PROXY_NAME \
      --target-https-proxy-region=REGION
  ```
Connect your domain to your load balancer
After the load balancer is created, note the IP address that is associated with the load balancer, for example, `30.90.80.100`. To point your domain to your load balancer, create an `A` record by using your domain registration service. If you added multiple domains to your SSL certificate, you must add an `A` record for each one, all pointing to the load balancer's IP address. For example, to create `A` records for `www.example.com` and `example.com`, use the following:

```
NAME  TYPE  DATA
www   A     30.90.80.100
@     A     30.90.80.100
```
If you use Cloud DNS as your DNS provider, seeAdd, modify, and delete records.
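If you manage the zone in Cloud DNS, an equivalent `A` record can also be created from the CLI. This is a hedged sketch: ZONE_NAME is a placeholder for your managed zone, the domain is the example one from above, and `30.90.80.100` is the sample address:

```shell
# Create an A record in an existing Cloud DNS managed zone.
# ZONE_NAME is a placeholder for your managed zone's name.
gcloud dns record-sets create www.example.com. \
    --zone=ZONE_NAME \
    --type=A \
    --ttl=300 \
    --rrdatas=30.90.80.100
```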
Test the load balancer
To test the load balancer, create a client VM in the same region as the load balancer. Then send traffic from the client to the load balancer.
Create a client VM
This example creates a client VM (`vm-client`) in the same region as the backend NEGs. The client is used to validate the load balancer's configuration and demonstrate expected behavior as described in the testing section.
Console
- Go to the VM instances page in the Google Cloud console.

  Go to VM instances
- Click Create instance.
- Set the Name to `vm-client`.
- Set the Zone to CLIENT_VM_ZONE.
- Click Advanced options and make the following changes:
  - Click Networking and add `allow-ssh` to Network tags.
  - Click the edit button under Network interfaces, make the following changes, and then click Done:
    - Network: NETWORK
    - Subnet: LB_SUBNET_NAME
    - Primary internal IP: Ephemeral (automatic)
    - External IP: Ephemeral
- Click Create.
- ClickCreate.
gcloud
The client VM can be in any zone in the same region as the load balancer, and it can use any subnet in that region. In this example, the client is in the CLIENT_VM_ZONE zone, and it uses the same subnet as the backend VMs.

```
gcloud compute instances create vm-client \
    --zone=CLIENT_VM_ZONE \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --tags=allow-ssh \
    --subnet=LB_SUBNET_NAME
```
Send traffic to the load balancer
Note: It might take a few minutes for the load balancer configuration to propagate globally after you first deploy it.

Now that you have configured your load balancer, you can start sending traffic to the load balancer's IP address.

Use SSH to connect to the client instance.

```
gcloud compute ssh vm-client \
    --zone=CLIENT_VM_ZONE
```

Get the load balancer's IP address. Use the `compute addresses describe` command to view the allocated IP address:

```
gcloud compute addresses describe LB_IP_ADDRESS \
    --region=REGION
```
Verify that the load balancer is serving backend hostnames as expected. Replace IP_ADDRESS with the load balancer's IP address.

For HTTP testing, run:

```
curl IP_ADDRESS
```

For HTTPS testing, run:

```
curl -k -s 'https://DOMAIN_NAME:443' --connect-to DOMAIN_NAME:443:IP_ADDRESS:443
```

Replace DOMAIN_NAME with your application domain name, for example, `test.example.com`.

The `-k` flag causes curl to skip certificate validation.
Testing the non-Google Cloud endpoints depends on the service you have exposed through the hybrid NEG endpoint.
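To see traffic being distributed across the backends, a simple loop can be run from the client VM. This is an informal sketch, not part of the official test procedure; IP_ADDRESS stands in for the load balancer's address:

```shell
# Send 10 requests and count how often each backend hostname answers.
# IP_ADDRESS is a placeholder for the load balancer's IP address.
for i in $(seq 1 10); do
  curl -s IP_ADDRESS
done | sort | uniq -c
```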
Additional configuration options
This section expands on the configuration example to provide alternative andadditional configuration options. All of the tasks are optional. You canperform them in any order.
Update client HTTP keepalive timeout
The load balancer created in the previous steps has been configured with a default value for the client HTTP keepalive timeout. To update the client HTTP keepalive timeout, use the following instructions.
Console
- In the Google Cloud console, go to the Load balancing page.
- Click the name of the load balancer that you want to modify.
- Click Edit.
- Click Frontend configuration.
- Expand Advanced features. For HTTP keepalive timeout, enter a timeout value.
- Click Update.
- To review your changes, click Review and finalize, and then click Update.
gcloud
For an HTTP load balancer, update the target HTTP proxy by using the `gcloud compute target-http-proxies update` command.

```
gcloud compute target-http-proxies update TARGET_HTTP_PROXY_NAME \
    --http-keep-alive-timeout-sec=HTTP_KEEP_ALIVE_TIMEOUT_SEC \
    --region=REGION
```

For an HTTPS load balancer, update the target HTTPS proxy by using the `gcloud compute target-https-proxies update` command.

```
gcloud compute target-https-proxies update TARGET_HTTPS_PROXY_NAME \
    --http-keep-alive-timeout-sec=HTTP_KEEP_ALIVE_TIMEOUT_SEC \
    --region=REGION
```
Replace the following:
- TARGET_HTTP_PROXY_NAME: the name of the target HTTP proxy.
- TARGET_HTTPS_PROXY_NAME: the name of the target HTTPS proxy.
- HTTP_KEEP_ALIVE_TIMEOUT_SEC: the HTTP keepalive timeout value, from 5 to 600 seconds.
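To confirm the change took effect, the proxy can be described afterward. This is a hedged sketch; the `httpKeepAliveTimeoutSec` field name in the `--format` expression is an assumption about the describe output:

```shell
# Describe the proxy and inspect the keepalive setting.
# httpKeepAliveTimeoutSec is the assumed output field name.
gcloud compute target-http-proxies describe TARGET_HTTP_PROXY_NAME \
    --region=REGION \
    --format="value(httpKeepAliveTimeoutSec)"
```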
What's next
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-10-24 UTC.