Set up an internal passthrough Network Load Balancer with VM instance group backends
This guide uses an example to teach the fundamentals of Google Cloud internal passthrough Network Load Balancers. Before following this guide, familiarize yourself with the following:
- Internal passthrough Network Load Balancer concepts
- How internal passthrough Network Load Balancers work
- Firewall rules overview
- Health check concepts
To follow step-by-step guidance for this task directly in the Google Cloud console, click Guide me.
Permissions
To follow this guide, you need to create instances and modify a network in a project. You need to be either a project owner or editor, or you need to have all of the following Compute Engine IAM roles:
| Task | Required role |
|---|---|
| Create networks, subnets, and load balancer components | Compute Network Admin (roles/compute.networkAdmin) |
| Add and remove firewall rules | Compute Security Admin (roles/compute.securityAdmin) |
| Create instances | Compute Instance Admin (roles/compute.instanceAdmin) |
Set up load balancer with IPv4-only subnets and backends
This guide shows you how to configure and test an internal passthrough Network Load Balancer. The steps in this section describe how to configure the following:
- An example that uses a custom mode VPC network named lb-network.
- A single-stack subnet (stack-type set to IPv4), which is required for IPv4 traffic. When you create a single-stack subnet on a custom mode VPC network, you choose an IPv4 subnet range for the subnet.
- Firewall rules that allow incoming connections to backend virtual machine (VM) instances.
- The backend instance group, which is located in the following region and subnet for this example:
  - Region: us-west1
  - Subnet: lb-subnet, with primary IPv4 address range 10.1.2.0/24
- Four backend VMs: two VMs in an unmanaged instance group in zone us-west1-a and two VMs in an unmanaged instance group in zone us-west1-c. To demonstrate global access, this example creates a second test client VM in a different region and subnet:
  - Region: europe-west1
  - Subnet: europe-subnet, with primary IP address range 10.3.4.0/24
- One client VM to test connections.
- The following internal passthrough Network Load Balancer components:
  - A health check for the backend service.
  - An internal backend service in the us-west1 region to manage connection distribution to the two zonal instance groups.
  - An internal forwarding rule and internal IP address for the frontend of the load balancer.
Configure a network, region, and subnet
To create the example network and subnet, follow these steps.
Console
In the Google Cloud console, go to the VPC networks page.

Click Create VPC network.

For Name, enter lb-network.

In the Subnets section, do the following:

- Set the Subnet creation mode to Custom.
- In the New subnet section, enter the following information:
  - Name: lb-subnet
  - Region: us-west1
  - IP stack type: IPv4 (single-stack)
  - IP address range: 10.1.2.0/24
- Click Done.
- Click Add subnet and enter the following information:
  - Name: europe-subnet
  - Region: europe-west1
  - IP stack type: IPv4 (single-stack)
  - IP address range: 10.3.4.0/24
- Click Done.

Click Create.
gcloud
Create the custom VPC network:

```
gcloud compute networks create lb-network --subnet-mode=custom
```

In the lb-network network, create a subnet for backends in the us-west1 region:

```
gcloud compute networks subnets create lb-subnet \
    --network=lb-network \
    --range=10.1.2.0/24 \
    --region=us-west1
```

In the lb-network network, create another subnet for testing global access in the europe-west1 region:

```
gcloud compute networks subnets create europe-subnet \
    --network=lb-network \
    --range=10.3.4.0/24 \
    --region=europe-west1
```
API
Make a POST request to the networks.insert method.

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks

{
  "routingConfig": {
    "routingMode": "REGIONAL"
  },
  "name": "lb-network",
  "autoCreateSubnetworks": false
}
```

Make two POST requests to the subnetworks.insert method.

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks

{
  "name": "lb-subnet",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "ipCidrRange": "10.1.2.0/24",
  "privateIpGoogleAccess": false
}
```

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/europe-west1/subnetworks

{
  "name": "europe-subnet",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "ipCidrRange": "10.3.4.0/24",
  "privateIpGoogleAccess": false
}
```

Configure firewall rules
This example uses the following firewall rules:
- fw-allow-lb-access: An ingress rule, applicable to all targets in the VPC network, allowing traffic from sources in the 10.1.2.0/24 and 10.3.4.0/24 ranges. This rule allows incoming traffic from any client located in either of the two subnets. It later lets you configure and test global access.
- fw-allow-ssh: An ingress rule, applicable to the instances being load balanced, that allows incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP range for this rule; for example, you can specify just the IP ranges of the systems from which you initiate SSH sessions. This example uses the target tag allow-ssh to identify the VMs to which it should apply.
- fw-allow-health-check: An ingress rule, applicable to the instances being load balanced, that allows traffic from the Google Cloud health checking systems (130.211.0.0/22 and 35.191.0.0/16). This example uses the target tag allow-health-check to identify the instances to which it should apply.
Without these firewall rules, the default deny ingress rule blocks incoming traffic to the backend instances.
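As a quick sanity check, you can verify offline whether a given source address falls inside the documented health-check probe ranges. The sketch below uses Python's standard ipaddress module and is purely illustrative; it is not part of the setup steps, and the sample addresses are arbitrary.

```python
import ipaddress

# Google Cloud health-check probe ranges, as listed in this guide.
PROBE_RANGES = [
    ipaddress.ip_network("130.211.0.0/22"),
    ipaddress.ip_network("35.191.0.0/16"),
]

def is_health_check_probe(source_ip: str) -> bool:
    """Return True if source_ip falls inside a known probe range."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in PROBE_RANGES)

print(is_health_check_probe("130.211.3.9"))   # inside 130.211.0.0/22
print(is_health_check_probe("203.0.113.5"))   # documentation range, not a probe
```

A check like this can help you confirm that a firewall rule's source-ranges value actually covers the probe traffic you expect to allow.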
Note: You must create a firewall rule to allow health checks from the IP ranges of Google Cloud probe systems. For more information, see probe IP ranges.

Console
In the Google Cloud console, go to the Firewall policies page.

To allow subnet traffic, click Create firewall rule and enter the following information:

- Name: fw-allow-lb-access
- Network: lb-network
- Priority: 1000
- Direction of traffic: ingress
- Action on match: allow
- Targets: All instances in the network
- Source filter: IPv4 ranges
- Source IPv4 ranges: 10.1.2.0/24 and 10.3.4.0/24
- Protocols and ports: Allow all

Click Create.

To allow incoming SSH connections, click Create firewall rule again and enter the following information:

- Name: fw-allow-ssh
- Network: lb-network
- Priority: 1000
- Direction of traffic: ingress
- Action on match: allow
- Targets: Specified target tags
- Target tags: allow-ssh
- Source filter: IPv4 ranges
- Source IPv4 ranges: 0.0.0.0/0
- Protocols and ports: Select Specified protocols and ports, select the TCP checkbox, and then enter 22 in Ports.

Click Create.

To allow Google Cloud health checks, click Create firewall rule a third time and enter the following information:

- Name: fw-allow-health-check
- Network: lb-network
- Priority: 1000
- Direction of traffic: ingress
- Action on match: allow
- Targets: Specified target tags
- Target tags: allow-health-check
- Source filter: IPv4 ranges
- Source IPv4 ranges: 130.211.0.0/22 and 35.191.0.0/16
- Protocols and ports: Allow all

Click Create.
gcloud
Create the fw-allow-lb-access firewall rule to allow communication from within the subnets:

```
gcloud compute firewall-rules create fw-allow-lb-access \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --source-ranges=10.1.2.0/24,10.3.4.0/24 \
    --rules=tcp,udp,icmp
```

Create the fw-allow-ssh firewall rule to allow SSH connectivity to VMs with the network tag allow-ssh. When you omit source-ranges, Google Cloud interprets the rule to mean any source.

```
gcloud compute firewall-rules create fw-allow-ssh \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-ssh \
    --rules=tcp:22
```

Create the fw-allow-health-check rule to allow Google Cloud health checks.

```
gcloud compute firewall-rules create fw-allow-health-check \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-health-check \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --rules=tcp,udp,icmp
```
API
Create the fw-allow-lb-access firewall rule by making a POST request to the firewalls.insert method.

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "name": "fw-allow-lb-access",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "priority": 1000,
  "sourceRanges": [ "10.1.2.0/24", "10.3.4.0/24" ],
  "allowed": [
    { "IPProtocol": "tcp" },
    { "IPProtocol": "udp" },
    { "IPProtocol": "icmp" }
  ],
  "direction": "INGRESS",
  "logConfig": { "enable": false },
  "disabled": false
}
```

Create the fw-allow-ssh firewall rule by making a POST request to the firewalls.insert method.

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "name": "fw-allow-ssh",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "priority": 1000,
  "sourceRanges": [ "0.0.0.0/0" ],
  "targetTags": [ "allow-ssh" ],
  "allowed": [
    { "IPProtocol": "tcp", "ports": [ "22" ] }
  ],
  "direction": "INGRESS",
  "logConfig": { "enable": false },
  "disabled": false
}
```

Create the fw-allow-health-check firewall rule by making a POST request to the firewalls.insert method.

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "name": "fw-allow-health-check",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "priority": 1000,
  "sourceRanges": [ "130.211.0.0/22", "35.191.0.0/16" ],
  "targetTags": [ "allow-health-check" ],
  "allowed": [
    { "IPProtocol": "tcp" },
    { "IPProtocol": "udp" },
    { "IPProtocol": "icmp" }
  ],
  "direction": "INGRESS",
  "logConfig": { "enable": false },
  "disabled": false
}
```

Create backend VMs and instance groups
This example uses two unmanaged instance groups, each with two backend (server) VMs. To demonstrate the regional nature of internal passthrough Network Load Balancers, the two instance groups are placed in separate zones, us-west1-a and us-west1-c.
- Instance group ig-a contains these two VMs:
  - vm-a1
  - vm-a2
- Instance group ig-c contains these two VMs:
  - vm-c1
  - vm-c2
Traffic to all four of the backend VMs is load balanced.
To support this example and the additional configuration options, each of the four VMs runs an Apache web server that listens on the following TCP ports: 80, 8008, 8080, 8088, 443, and 8443.
Each VM is assigned an internal IP address in the lb-subnet and an ephemeral external (public) IP address. You can remove the external IP addresses later.

External IP addresses for the backend VMs are not required; however, they are useful for this example because they let the backend VMs download Apache from the internet, and they let you connect to the VMs using SSH.
By default, Apache is configured to bind to any IP address. Internal passthrough Network Load Balancers deliver packets with the destination IP preserved: the destination IP address of a packet delivered to a backend VM is the internal IP address of the load balancer's forwarding rule. Ensure that the server software running on your backend VMs is listening on that IP address. If you configure multiple internal forwarding rules, ensure that your software listens on the internal IP address associated with each one.
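To see why binding to the wildcard address works here, consider this minimal local sketch. It uses plain Python sockets on the loopback interface (nothing Google Cloud specific): a server bound to 0.0.0.0 accepts connections addressed to any local IP, and the accepted socket reports the concrete destination address the client used, analogous to a backend VM receiving packets whose destination is the forwarding rule's IP.

```python
import socket

def destination_seen_by_server() -> str:
    """Bind to the wildcard address, connect over loopback, and return
    the destination IP that the accepted connection reports."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("0.0.0.0", 0))      # wildcard bind, ephemeral port
    server.listen(1)
    port = server.getsockname()[1]

    client = socket.create_connection(("127.0.0.1", port))
    conn, _ = server.accept()
    dest = conn.getsockname()[0]     # the address the client targeted
    client.close(); conn.close(); server.close()
    return dest

# Although the server bound to 0.0.0.0, the accepted socket reports the
# concrete destination (127.0.0.1 here), just as a backend behind the
# load balancer sees the forwarding rule's IP as the packet destination.
print(destination_seen_by_server())
```

Software bound to a specific address other than the forwarding rule's IP would never see these packets, which is why the binding requirement above matters.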
For instructional simplicity, these backend VMs run Debian GNU/Linux 12.
Console
Create backend VMs
In the Google Cloud console, go to the VM instances page.

Repeat steps 3 to 8 for each VM, using the following name and zone combinations:

- Name: vm-a1, zone: us-west1-a
- Name: vm-a2, zone: us-west1-a
- Name: vm-c1, zone: us-west1-c
- Name: vm-c2, zone: us-west1-c

Click Create instance.

Set the Name as indicated in step 2.

For Region, select us-west1, and choose a Zone as indicated in step 2.

In the Boot disk section, ensure that Debian GNU/Linux 12 (bookworm) is selected for the boot disk options. If necessary, click Change to change the image.

Click Advanced options.

Click Networking and configure the following fields:

- For Network tags, enter allow-ssh and allow-health-check.
- For Network interfaces, select the following:
  - Network: lb-network
  - Subnet: lb-subnet
  - IP stack type: IPv4 (single-stack)
  - Primary internal IPv4 address: Ephemeral (automatic)
  - External IPv4 address: Ephemeral

Click Management, and then in the Startup script field, enter the following script. The script contents are identical for all four VMs.
```
#! /bin/bash
if [ -f /etc/startup_script_completed ]; then
exit 0
fi
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
file_ports="/etc/apache2/ports.conf"
file_http_site="/etc/apache2/sites-available/000-default.conf"
file_https_site="/etc/apache2/sites-available/default-ssl.conf"
http_listen_prts="Listen 80\nListen 8008\nListen 8080\nListen 8088"
http_vh_prts="*:80 *:8008 *:8080 *:8088"
https_listen_prts="Listen 443\nListen 8443"
https_vh_prts="*:443 *:8443"
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://metadata.google.internal/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
prt_conf="$(cat "$file_ports")"
prt_conf_2="$(echo "$prt_conf" | sed "s|Listen 80|${http_listen_prts}|")"
prt_conf="$(echo "$prt_conf_2" | sed "s|Listen 443|${https_listen_prts}|")"
echo "$prt_conf" | tee "$file_ports"
http_site_conf="$(cat "$file_http_site")"
http_site_conf_2="$(echo "$http_site_conf" | sed "s|*:80|${http_vh_prts}|")"
echo "$http_site_conf_2" | tee "$file_http_site"
https_site_conf="$(cat "$file_https_site")"
https_site_conf_2="$(echo "$https_site_conf" | sed "s|_default_:443|${https_vh_prts}|")"
echo "$https_site_conf_2" | tee "$file_https_site"
systemctl restart apache2
touch /etc/startup_script_completed
```

Click Create.
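The least obvious part of the startup script is the pair of sed edits that expand Apache's default Listen directives so the server answers on all six ports. Their effect can be modeled with plain string substitution; the sketch below is a Python illustration only, and the sample ports.conf contents are an assumption standing in for the real file on Debian.

```python
# Sample contents standing in for /etc/apache2/ports.conf (assumed).
ports_conf = "Listen 80\n\n<IfModule ssl_module>\n\tListen 443\n</IfModule>\n"

http_listen = "Listen 80\nListen 8008\nListen 8080\nListen 8088"
https_listen = "Listen 443\nListen 8443"

# Mirror the script's sed substitutions: the first "Listen 80" and the
# first "Listen 443" are each replaced by an expanded block of directives.
out = ports_conf.replace("Listen 80", http_listen, 1)
out = out.replace("Listen 443", https_listen, 1)

print(out.count("Listen"))  # all six ports are now listed
```

The script applies the same idea to the virtual-host definitions, replacing `*:80` and `_default_:443` with the full lists of host:port pairs.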
Create instance groups
In the Google Cloud console, go to the Instance groups page.

Repeat the following steps to create two unmanaged instance groups, each with two VMs in them, using these combinations:

- Instance group name: ig-a, zone: us-west1-a, VMs: vm-a1 and vm-a2
- Instance group name: ig-c, zone: us-west1-c, VMs: vm-c1 and vm-c2

Click Create instance group.

Click New unmanaged instance group.

Set Name as indicated in step 2.

In the Location section, select us-west1 for Region, and then choose a Zone as indicated in step 2.

For Network, select lb-network.

For Subnetwork, select lb-subnet.

In the VM instances section, add the VMs as indicated in step 2.

Click Create.
gcloud
Create the four VMs by running the following command four times, using these four combinations for VM-NAME and ZONE. The script contents are identical for all four VMs.

- VM-NAME: vm-a1, ZONE: us-west1-a
- VM-NAME: vm-a2, ZONE: us-west1-a
- VM-NAME: vm-c1, ZONE: us-west1-c
- VM-NAME: vm-c2, ZONE: us-west1-c

```
gcloud compute instances create VM-NAME \
    --zone=ZONE \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --tags=allow-ssh,allow-health-check \
    --subnet=lb-subnet \
    --metadata=startup-script='#! /bin/bash
if [ -f /etc/startup_script_completed ]; then
exit 0
fi
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
file_ports="/etc/apache2/ports.conf"
file_http_site="/etc/apache2/sites-available/000-default.conf"
file_https_site="/etc/apache2/sites-available/default-ssl.conf"
http_listen_prts="Listen 80\nListen 8008\nListen 8080\nListen 8088"
http_vh_prts="*:80 *:8008 *:8080 *:8088"
https_listen_prts="Listen 443\nListen 8443"
https_vh_prts="*:443 *:8443"
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://metadata.google.internal/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
prt_conf="$(cat "$file_ports")"
prt_conf_2="$(echo "$prt_conf" | sed "s|Listen 80|${http_listen_prts}|")"
prt_conf="$(echo "$prt_conf_2" | sed "s|Listen 443|${https_listen_prts}|")"
echo "$prt_conf" | tee "$file_ports"
http_site_conf="$(cat "$file_http_site")"
http_site_conf_2="$(echo "$http_site_conf" | sed "s|*:80|${http_vh_prts}|")"
echo "$http_site_conf_2" | tee "$file_http_site"
https_site_conf="$(cat "$file_https_site")"
https_site_conf_2="$(echo "$https_site_conf" | sed "s|_default_:443|${https_vh_prts}|")"
echo "$https_site_conf_2" | tee "$file_https_site"
systemctl restart apache2
touch /etc/startup_script_completed'
```

Create the two unmanaged instance groups, one in each zone:

```
gcloud compute instance-groups unmanaged create ig-a \
    --zone=us-west1-a

gcloud compute instance-groups unmanaged create ig-c \
    --zone=us-west1-c
```

Add the VMs to the appropriate instance groups:

```
gcloud compute instance-groups unmanaged add-instances ig-a \
    --zone=us-west1-a \
    --instances=vm-a1,vm-a2

gcloud compute instance-groups unmanaged add-instances ig-c \
    --zone=us-west1-c \
    --instances=vm-c1,vm-c2
```
API
For the four VMs, use the following VM names and zones:

- VM-NAME: vm-a1, ZONE: us-west1-a
- VM-NAME: vm-a2, ZONE: us-west1-a
- VM-NAME: vm-c1, ZONE: us-west1-c
- VM-NAME: vm-c2, ZONE: us-west1-c

You can get the current DEBIAN_IMAGE_NAME by running the following gcloud command:

```
gcloud compute images list \
    --filter="family=debian-12"
```

Create four backend VMs by making four POST requests to the instances.insert method:

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances

{
  "name": "VM-NAME",
  "tags": {
    "items": [ "allow-health-check", "allow-ssh" ]
  },
  "machineType": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/machineTypes/e2-standard-2",
  "canIpForward": false,
  "networkInterfaces": [
    {
      "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
      "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
      "accessConfigs": [
        {
          "type": "ONE_TO_ONE_NAT",
          "name": "external-nat",
          "networkTier": "PREMIUM"
        }
      ]
    }
  ],
  "disks": [
    {
      "type": "PERSISTENT",
      "boot": true,
      "mode": "READ_WRITE",
      "autoDelete": true,
      "deviceName": "VM-NAME",
      "initializeParams": {
        "sourceImage": "projects/debian-cloud/global/images/DEBIAN_IMAGE_NAME",
        "diskType": "projects/PROJECT_ID/zones/ZONE/diskTypes/pd-standard",
        "diskSizeGb": "10"
      }
    }
  ],
  "metadata": {
    "items": [
      {
        "key": "startup-script",
        "value": "#! /bin/bash\napt-get update\napt-get install apache2 -y\na2ensite default-ssl\na2enmod ssl\nfile_ports=\"/etc/apache2/ports.conf\"\nfile_http_site=\"/etc/apache2/sites-available/000-default.conf\"\nfile_https_site=\"/etc/apache2/sites-available/default-ssl.conf\"\nhttp_listen_prts=\"Listen 80\\nListen 8008\\nListen 8080\\nListen 8088\"\nhttp_vh_prts=\"*:80 *:8008 *:8080 *:8088\"\nhttps_listen_prts=\"Listen 443\\nListen 8443\"\nhttps_vh_prts=\"*:443 *:8443\"\nvm_hostname=\"$(curl -H \"Metadata-Flavor:Google\" \\\nhttp://metadata.google.internal/computeMetadata/v1/instance/name)\"\necho \"Page served from: $vm_hostname\" | \\\ntee /var/www/html/index.html\nprt_conf=\"$(cat \"$file_ports\")\"\nprt_conf_2=\"$(echo \"$prt_conf\" | sed \"s|Listen 80|${http_listen_prts}|\")\"\nprt_conf=\"$(echo \"$prt_conf_2\" | sed \"s|Listen 443|${https_listen_prts}|\")\"\necho \"$prt_conf\" | tee \"$file_ports\"\nhttp_site_conf=\"$(cat \"$file_http_site\")\"\nhttp_site_conf_2=\"$(echo \"$http_site_conf\" | sed \"s|*:80|${http_vh_prts}|\")\"\necho \"$http_site_conf_2\" | tee \"$file_http_site\"\nhttps_site_conf=\"$(cat \"$file_https_site\")\"\nhttps_site_conf_2=\"$(echo \"$https_site_conf\" | sed \"s|_default_:443|${https_vh_prts}|\")\"\necho \"$https_site_conf_2\" | tee \"$file_https_site\"\nsystemctl restart apache2"
      }
    ]
  },
  "scheduling": { "preemptible": false },
  "deletionProtection": false
}
```

Create two instance groups by making a POST request to the instanceGroups.insert method.

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instanceGroups

{
  "name": "ig-a",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet"
}
```

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-c/instanceGroups

{
  "name": "ig-c",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet"
}
```

Add instances to each instance group by making a POST request to the instanceGroups.addInstances method.

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instanceGroups/ig-a/addInstances

{
  "instances": [
    { "instance": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instances/vm-a1" },
    { "instance": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instances/vm-a2" }
  ]
}
```

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-c/instanceGroups/ig-c/addInstances

{
  "instances": [
    { "instance": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-c/instances/vm-c1" },
    { "instance": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-c/instances/vm-c2" }
  ]
}
```

Configure load balancer components
These steps configure all of the internal passthrough Network Load Balancer components, starting with the health check and backend service, and then the frontend components:
- Health check. In this example, you use an HTTP health check that checks for an HTTP 200 OK status code. For more information, see the health check section.
- Backend service. Because you need to pass HTTP traffic through the internal load balancer, you need to use TCP, not UDP.
- Forwarding rule. This example creates a single internal forwarding rule.
- Internal IP address. In this example, you specify an internal IP address, 10.1.2.99, when you create the forwarding rule.
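Before you create the frontend, you can verify the addressing plan offline: the forwarding rule's internal IP must fall inside the backend subnet's range, and the two example subnets must not overlap. The following check with Python's standard ipaddress module is illustrative only and not part of the setup steps.

```python
import ipaddress

lb_subnet = ipaddress.ip_network("10.1.2.0/24")       # us-west1 backend subnet
europe_subnet = ipaddress.ip_network("10.3.4.0/24")   # europe-west1 test subnet
forwarding_rule_ip = ipaddress.ip_address("10.1.2.99")

# The forwarding rule's internal IP must come from lb-subnet's range.
print(forwarding_rule_ip in lb_subnet)

# Subnet primary ranges in one VPC network must not overlap.
print(lb_subnet.overlaps(europe_subnet))
```

If the first check printed False, reserving 10.1.2.99 for the forwarding rule in lb-subnet would fail.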
Console
Start your configuration
In the Google Cloud console, go to the Load balancing page.

- Click Create load balancer.
- For Type of load balancer, select Network Load Balancer (TCP/UDP/SSL) and click Next.
- For Proxy or passthrough, select Passthrough load balancer and click Next.
- For Public facing or internal, select Internal and click Next.
- Click Configure.
Basic configuration
On the Create internal passthrough Network Load Balancer page, enter the following information:

- Load balancer name: be-ilb
- Region: us-west1
- Network: lb-network
Configure the backends
- Click Backend configuration.
- In the Health check list, click Create a health check, and enter the following information:
  - Name: hc-http-80
  - Protocol: HTTP
  - Port: 80
  - Proxy protocol: None
  - Request: /

  Note that when you use the Google Cloud console to create your load balancer, the health check is global. If you want to create a regional health check, use gcloud or the API.
- Click Create.
- To handle only IPv4 traffic, in the New Backend section, for IP stack type, select the IPv4 (single-stack) option.
- In the Instance group list, select the ig-c instance group and click Done.
- Click Add a backend and repeat the step to add the ig-a instance group.
- Verify that there is a blue check mark next to Backend configuration before continuing.
Configure the frontend
- Click Frontend configuration.
- In the New Frontend IP and port section, do the following:
  - For Name, enter fr-ilb.
  - For Subnetwork, select lb-subnet.
  - In the Internal IP purpose section, in the IP address list, select Create IP address, enter the following information, and then click Reserve:
    - Name: ip-ilb
    - IP version: IPv4
    - Static IP address: Let me choose
    - Custom IP address: 10.1.2.99
  - For Ports, select Multiple, and then in Port numbers, enter 80, 8008, 8080, and 8088.
- Verify that there is a blue check mark next to Frontend configuration before continuing.
Review the configuration
- Click Review and finalize.
- Review your load balancer configuration settings.
- Optional: Click Equivalent code to view the REST API request that will be used to create the load balancer.
- Click Create.
gcloud
Create a new regional HTTP health check to test HTTP connectivity to the VMs on port 80:

```
gcloud compute health-checks create http hc-http-80 \
    --region=us-west1 \
    --port=80
```

Create the backend service for HTTP traffic:

```
gcloud compute backend-services create be-ilb \
    --load-balancing-scheme=internal \
    --protocol=tcp \
    --region=us-west1 \
    --health-checks=hc-http-80 \
    --health-checks-region=us-west1
```

Add the two instance groups to the backend service:

```
gcloud compute backend-services add-backend be-ilb \
    --region=us-west1 \
    --instance-group=ig-a \
    --instance-group-zone=us-west1-a

gcloud compute backend-services add-backend be-ilb \
    --region=us-west1 \
    --instance-group=ig-c \
    --instance-group-zone=us-west1-c
```

Create a forwarding rule for the backend service. When you create the forwarding rule, specify 10.1.2.99 for the internal IP address in the subnet.

```
gcloud compute forwarding-rules create fr-ilb \
    --region=us-west1 \
    --load-balancing-scheme=internal \
    --network=lb-network \
    --subnet=lb-subnet \
    --address=10.1.2.99 \
    --ip-protocol=TCP \
    --ports=80,8008,8080,8088 \
    --backend-service=be-ilb \
    --backend-service-region=us-west1
```
API
Create the health check by making a POST request to the regionHealthChecks.insert method.

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/regionHealthChecks

{
  "name": "hc-http-80",
  "type": "HTTP",
  "httpHealthCheck": {
    "port": 80
  }
}
```

Create the regional backend service by making a POST request to the regionBackendServices.insert method.

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices

{
  "name": "be-ilb",
  "backends": [
    {
      "group": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instanceGroups/ig-a",
      "balancingMode": "CONNECTION"
    },
    {
      "group": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-c/instanceGroups/ig-c",
      "balancingMode": "CONNECTION"
    }
  ],
  "healthChecks": [
    "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/healthChecks/hc-http-80"
  ],
  "loadBalancingScheme": "INTERNAL",
  "connectionDraining": { "drainingTimeoutSec": 0 }
}
```

Create the forwarding rule by making a POST request to the forwardingRules.insert method.

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/forwardingRules

{
  "name": "fr-ilb",
  "IPAddress": "10.1.2.99",
  "IPProtocol": "TCP",
  "ports": [ "80", "8008", "8080", "8088" ],
  "loadBalancingScheme": "INTERNAL",
  "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "backendService": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices/be-ilb",
  "networkTier": "PREMIUM"
}
```

Test your load balancer
These tests show how to validate your load balancer configuration and learn about its expected behavior.
Create a client VM
This example creates a client VM (vm-client) in the same region as the backend (server) VMs. The client is used to validate the load balancer's configuration and demonstrate expected behavior as described in the testing section.
Console
In the Google Cloud console, go to the VM instances page.

Click Create instance.

For Name, enter vm-client.

For Region, select us-west1.

For Zone, select us-west1-a.

Click Advanced options.

Click Networking and configure the following fields:

- For Network tags, enter allow-ssh.
- For Network interfaces, select the following:
  - Network: lb-network
  - Subnet: lb-subnet

Click Create.
gcloud
The client VM can be in any zone in the same region as the load balancer, and it can use any subnet in that region. In this example, the client is in the us-west1-a zone, and it uses the same subnet as the backend VMs.

```
gcloud compute instances create vm-client \
    --zone=us-west1-a \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --tags=allow-ssh \
    --subnet=lb-subnet
```
API
Make a POST request to the instances.insert method.

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instances

{
  "name": "vm-client",
  "tags": {
    "items": [ "allow-ssh" ]
  },
  "machineType": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/machineTypes/e2-standard-2",
  "canIpForward": false,
  "networkInterfaces": [
    {
      "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
      "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
      "accessConfigs": [
        {
          "type": "ONE_TO_ONE_NAT",
          "name": "external-nat",
          "networkTier": "PREMIUM"
        }
      ]
    }
  ],
  "disks": [
    {
      "type": "PERSISTENT",
      "boot": true,
      "mode": "READ_WRITE",
      "autoDelete": true,
      "deviceName": "vm-client",
      "initializeParams": {
        "sourceImage": "projects/debian-cloud/global/images/DEBIAN_IMAGE_NAME",
        "diskType": "projects/PROJECT_ID/zones/us-west1-a/diskTypes/pd-standard",
        "diskSizeGb": "10"
      }
    }
  ],
  "scheduling": { "preemptible": false },
  "deletionProtection": false
}
```

Test connection from client VM
This test contacts the load balancer from a separate client VM; that is, not from a backend VM of the load balancer. The expected behavior is for traffic to be distributed among the four backend VMs because no session affinity has been configured.
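With no session affinity configured, the load balancer selects a backend using a hash that includes the connection's 5-tuple, so different client connections (for example, different source ports) can land on different backends, while packets of one connection keep going to the same backend. The sketch below is a toy model of that idea using a hypothetical SHA-256-based hash; it is not Google's actual selection algorithm, and the addresses and VM names come from this example.

```python
import hashlib

BACKENDS = ["vm-a1", "vm-a2", "vm-c1", "vm-c2"]

def pick_backend(src_ip, src_port, dst_ip, dst_port, proto="tcp"):
    """Deterministically map a connection 5-tuple to a backend.
    Toy stand-in for the load balancer's hash-based selection."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return BACKENDS[digest[0] % len(BACKENDS)]

# The same connection always maps to the same backend...
first = pick_backend("10.1.2.10", 50000, "10.1.2.99", 80)
assert first == pick_backend("10.1.2.10", 50000, "10.1.2.99", 80)

# ...but varying the source port spreads connections across backends.
chosen = {pick_backend("10.1.2.10", p, "10.1.2.99", 80)
          for p in range(50000, 50100)}
print(sorted(chosen))
```

This is why repeated curl requests from the client VM, each a new TCP connection with a fresh source port, show responses from different backend VMs.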
Connect to the client VM instance.
```
gcloud compute ssh vm-client --zone=us-west1-a
```
Make a web request to the load balancer using curl to contact its IP address. Repeat the request so that you can see responses come from different backend VMs. The name of the VM generating the response is displayed in the text of the HTML response, by virtue of the contents of /var/www/html/index.html on each backend VM. For example, expected responses look like Page served from: vm-a1 and Page served from: vm-a2.

```
curl http://10.1.2.99
```
The forwarding rule is configured to serve ports 80, 8008, 8080, and 8088. To send traffic to one of the other ports, append a colon (:) and the port number after the IP address, like this:

```
curl http://10.1.2.99:8008
```
If you add a service label to the internal forwarding rule, you can use internal DNS to contact the load balancer using its service name.

```
curl http://web-test.fr-ilb.il4.us-west1.lb.PROJECT_ID.internal
```
Ping the load balancer's IP address
This test demonstrates an expected behavior: you cannot ping the IP address of the load balancer. This is because internal passthrough Network Load Balancers are implemented in virtual network programming; they are not separate devices.
Connect to the client VM instance.
```
gcloud compute ssh vm-client --zone=us-west1-a
```

Attempt to ping the IP address of the load balancer. Notice that you don't get a response and that the ping command times out after 10 seconds in this example.

```
timeout 10 ping 10.1.2.99
```
Send requests from load-balanced VMs
This test demonstrates that when a backend VM sends packets to the IP address of its load balancer's forwarding rule, those requests are routed back to itself. This is the case regardless of the backend VM's health check state.
Internal passthrough Network Load Balancers are implemented by using virtual network programming and VM configuration in the guest OS. On Linux VMs, the Guest environment creates a route for the load balancer's IP address in the operating system's local routing table.
Because this local route is within the VM itself (not a route in the VPC network), packets sent to the load balancer's IP address are not processed by the VPC network. Instead, packets sent to the load balancer's IP address remain within the operating system of the VM.
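The effect of that local route can be modeled as a longest-prefix-match lookup: a host (/32) entry for the load balancer's IP is more specific than the subnet route, so the lookup resolves locally before the packet ever reaches the VPC network. The following sketch uses a hypothetical simplified routing table, not actual kernel routing code.

```python
import ipaddress

# Simplified routing table for a backend VM in this example. The guest
# environment adds a host (/32) route for the load balancer's IP.
ROUTES = [
    (ipaddress.ip_network("10.1.2.99/32"), "local"),        # LB IP, guest env route
    (ipaddress.ip_network("10.1.2.0/24"), "subnet"),        # lb-subnet
    (ipaddress.ip_network("0.0.0.0/0"), "default-gateway"),
]

def lookup(dst: str) -> str:
    """Longest-prefix match: the most specific matching route wins."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in ROUTES if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("10.1.2.99"))  # resolves locally; never leaves the VM
print(lookup("10.1.2.50"))  # ordinary subnet destination
```

Because the /32 route always wins for the load balancer's IP, a backend's request to that IP is answered by the backend itself, which is exactly what the next test demonstrates.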
Connect to a backend VM, such as vm-a1:

Important: You won't be able to connect to a backend VM in this way if you have removed its external IP address. For connection options if your backend VMs don't have external IP addresses, see Choose a connection option for internal-only VMs.

```
gcloud compute ssh vm-a1 --zone=us-west1-a
```
Make a web request to the load balancer (by IP address or service name) using curl. The response comes from the same backend VM that makes the request. Repeated requests are answered in the same way. The expected response when testing from vm-a1 is always Page served from: vm-a1.

```
curl http://10.1.2.99
```
Inspect the local routing table, looking for a destination that matches the IP address of the load balancer itself, 10.1.2.99. This route is a necessary part of an internal passthrough Network Load Balancer, but it also demonstrates why a request from a VM behind the load balancer is always responded to by the same VM.

ip route show table local | grep 10.1.2.99
When a backend VM for an internal passthrough Network Load Balancer sends packets to the load balancer's forwarding rule IP address, the packets are always routed back to the VM that makes the request. This is because an internal passthrough Network Load Balancer is a pass-through load balancer and is implemented by creating a local route for the load balancer's IP address within the VM's guest OS, as indicated in this section. If you have a use case where load-balanced backends need to send TCP traffic to the load balancer's IP address, and you need the traffic to be distributed as if it originated from a non-load-balanced backend, consider using a regional internal proxy Network Load Balancer instead.
For more information, see Internal passthrough Network Load Balancers as next hops.
Set up load balancer with dual-stack subnets and backends
This document shows you how to configure and test an internal passthrough Network Load Balancer that supports both IPv4 and IPv6 traffic. The steps in this section describe how to configure the following:
- A custom mode VPC network named lb-network-dual-stack. IPv6 traffic requires a custom mode subnet.
- A dual-stack subnet (stack-type set to IPV4_IPV6), which is required for IPv6 traffic. When you create a dual-stack subnet on a custom mode VPC network, you choose an IPv6 access type for the subnet. For this example, set the subnet's ipv6-access-type parameter to INTERNAL. This means that new VMs on this subnet can be assigned both internal IPv4 addresses and internal IPv6 addresses. For instructions, see the VPC documentation about adding a dual-stack subnet.
- Firewall rules that allow incoming connections to backend VMs.
- The backend instance group, which is located in the following region and subnet for this example:
  - Region: us-west1
  - Subnet: lb-subnet, with primary IPv4 address range 10.1.2.0/24. Although you choose which IPv4 address range to configure on the subnet, the IPv6 address range is assigned automatically. Google provides a fixed-size (/64) IPv6 CIDR block.
- Four backend dual-stack VMs: two VMs in an unmanaged instance group in zone us-west1-a and two VMs in an unmanaged instance group in zone us-west1-c. To demonstrate global access, this example creates a second test client VM in a different region and subnet:
  - Region: europe-west1
  - Subnet: europe-subnet, with primary IP address range 10.3.4.0/24
- One client VM to test connections.
- The following internal passthrough Network Load Balancer components:
- A health check for the backend service.
- An internal backend service in the us-west1 region to manage connection distribution to the two zonal instance groups.
- Two internal forwarding rules for the frontend of the load balancer.
The following diagram shows the architecture for this example:
Configure a network, region, and subnet
The example internal passthrough Network Load Balancer described on this page is created in a custom mode VPC network named lb-network-dual-stack.
To configure subnets with internal IPv6 ranges, enable a VPC network ULA internal IPv6 range. Internal IPv6 subnet ranges are allocated from this range.
Console
In the Google Cloud console, go to the VPC networks page.

Click Create VPC network.
For Name, enter lb-network-dual-stack.

If you want to configure internal IPv6 address ranges on subnets in this network, complete these steps:

- For Private IPv6 address settings, select Configure a ULA internal IPv6 range for this VPC Network.
- For Allocate internal IPv6 range, select Automatically or Manually. If you select Manually, enter a /48 range from within the fd20::/20 range. If the range is in use, you are prompted to provide a different range.
For Subnet creation mode, select Custom.

In the New subnet section, specify the following configuration parameters for a subnet:
- Name: lb-subnet
- Region: us-west1
- IP stack type: IPv4 and IPv6 (dual-stack)
- IPv4 range: 10.1.2.0/24
- IPv6 access type: Internal
Click Done.

Click Add subnet and enter the following information:
- Name: europe-subnet
- Region: europe-west1
- IP stack type: IPv4 (single-stack)
- IP address range: 10.3.4.0/24
Click Done.

Click Create.
gcloud
To create a new custom mode VPC network, run the gcloud compute networks create command.

To configure internal IPv6 ranges on any subnets in this network, use the --enable-ula-internal-ipv6 flag. This option assigns a /48 ULA prefix from within the fd20::/20 range used by Google Cloud for internal IPv6 subnet ranges. If you want to select the /48 IPv6 range that is assigned, use the --internal-ipv6-range flag to specify a range.

gcloud compute networks create lb-network-dual-stack \
    --subnet-mode=custom \
    --enable-ula-internal-ipv6 \
    --internal-ipv6-range=ULA_IPV6_RANGE \
    --bgp-routing-mode=regional

Replace ULA_IPV6_RANGE with a /48 prefix from within the fd20::/20 range used by Google for internal IPv6 subnet ranges. If you don't use the --internal-ipv6-range flag, Google selects a /48 prefix for the network, such as fd20:bc7:9a1c::/48.

Within the lb-network-dual-stack network, create a subnet for backends in the us-west1 region and another subnet for testing global access in the europe-west1 region.

To create the subnets, run the gcloud compute networks subnets create command.

gcloud compute networks subnets create lb-subnet \
    --network=lb-network-dual-stack \
    --range=10.1.2.0/24 \
    --region=us-west1 \
    --stack-type=IPV4_IPV6 \
    --ipv6-access-type=INTERNAL
gcloud compute networks subnets create europe-subnet \
    --network=lb-network-dual-stack \
    --range=10.3.4.0/24 \
    --region=europe-west1 \
    --stack-type=IPV4_IPV6 \
    --ipv6-access-type=INTERNAL
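As an optional verification step, you can confirm the stack type, IPv6 access type, and the automatically assigned internal IPv6 range of the backend subnet by describing it. This is a sketch; the field names follow the Compute API Subnetwork resource, and the assigned prefix in your project will differ:

```shell
# Show the stack type, IPv6 access type, and assigned internal IPv6 prefix
# for the backend subnet.
gcloud compute networks subnets describe lb-subnet \
    --region=us-west1 \
    --format="get(stackType,ipv6AccessType,internalIpv6Prefix)"
```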
API
Create a new custom mode VPC network.
To configure internal IPv6 ranges on any subnets in this network, set enableUlaInternalIpv6 to true. This option assigns a /48 range from within the fd20::/20 range used by Google for internal IPv6 subnet ranges. If you want to select which /48 IPv6 range is assigned, also use the internalIpv6Range field to specify a range.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks

{
  "autoCreateSubnetworks": false,
  "name": "lb-network-dual-stack",
  "mtu": MTU,
  "enableUlaInternalIpv6": true,
  "internalIpv6Range": "ULA_IPV6_RANGE",
  "routingConfig": {
    "routingMode": "DYNAMIC_ROUTING_MODE"
  }
}

Replace the following:

- PROJECT_ID: the ID of the project where the VPC network is created.
- MTU: the maximum transmission unit of the network. MTU can be either 1460 (default) or 1500. Review the maximum transmission unit overview before setting the MTU to 1500.
- ULA_IPV6_RANGE: a /48 prefix from within the fd20::/20 range used by Google for internal IPv6 subnet ranges. If you don't provide a value for internalIpv6Range, Google selects a /48 prefix for the network.
- DYNAMIC_ROUTING_MODE: either global or regional to control the route advertisement behavior of Cloud Routers in the network. For more information, refer to dynamic routing mode.

For more information, refer to the networks.insert method.
Make two POST requests to the subnetworks.insert method.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/subnetworks

{
  "ipCidrRange": "10.1.2.0/24",
  "network": "lb-network-dual-stack",
  "name": "lb-subnet",
  "stackType": "IPV4_IPV6",
  "ipv6AccessType": "INTERNAL"
}

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/subnetworks

{
  "ipCidrRange": "10.3.4.0/24",
  "network": "lb-network-dual-stack",
  "name": "europe-subnet",
  "stackType": "IPV4_IPV6",
  "ipv6AccessType": "INTERNAL"
}

Configure firewall rules
This example uses the following firewall rules:
- fw-allow-lb-access: An ingress rule, applicable to all targets in the VPC network, that allows traffic from sources in the 10.1.2.0/24 and 10.3.4.0/24 ranges. This rule allows incoming traffic from any client located in either of the two subnets. Later, you can configure and test global access.
- fw-allow-lb-access-ipv6: An ingress rule, applicable to all targets in the VPC network, that allows traffic from sources in the IPv6 range configured in the subnet. This rule allows incoming IPv6 traffic from any client located in either of the two subnets. Later, you can configure and test global access.
- fw-allow-ssh: An ingress rule, applicable to the instances being load balanced, that allows incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP range for this rule; for example, you can specify just the IP ranges of the system from which you initiate SSH sessions. This example uses the target tag allow-ssh to identify the VMs to which it should apply.
- fw-allow-health-check: An ingress rule, applicable to the instances being load balanced, that allows traffic from the Google Cloud health checking systems (130.211.0.0/22 and 35.191.0.0/16). This example uses the target tag allow-health-check to identify the instances to which it should apply.
- fw-allow-health-check-ipv6: An ingress rule, applicable to the instances being load balanced, that allows traffic from the Google Cloud health checking systems (2600:2d00:1:b029::/64). This example uses the target tag allow-health-check-ipv6 to identify the instances to which it should apply.
Without these firewall rules, the default deny ingress rule blocks incoming traffic to the backend instances.
Note: You must create a firewall rule to allow health checks from the IP ranges of Google Cloud probe systems. See probe IP ranges for more information.

Console
In the Google Cloud console, go to the Firewall policies page.

To create the rule to allow subnet traffic, click Create firewall rule and enter the following information:
- Name: fw-allow-lb-access
- Network: lb-network-dual-stack
- Priority: 1000
- Direction of traffic: ingress
- Action on match: allow
- Targets: All instances in the network
- Source filter: IPv4 ranges
- Source IPv4 ranges: 10.1.2.0/24 and 10.3.4.0/24
- Protocols and ports: Allow all
Click Create.

To allow IPv6 subnet traffic, click Create firewall rule again and enter the following information:
- Name: fw-allow-lb-access-ipv6
- Network: lb-network-dual-stack
- Priority: 1000
- Direction of traffic: ingress
- Action on match: allow
- Targets: All instances in the network
- Source filter: IPv6 ranges
- Source IPv6 ranges: IPV6_ADDRESS assigned in the lb-subnet
- Protocols and ports: Allow all
Click Create.

To allow incoming SSH connections, click Create firewall rule again and enter the following information:
- Name: fw-allow-ssh
- Network: lb-network-dual-stack
- Priority: 1000
- Direction of traffic: ingress
- Action on match: allow
- Targets: Specified target tags
- Target tags: allow-ssh
- Source filter: IPv4 ranges
- Source IPv4 ranges: 0.0.0.0/0
- Protocols and ports: Select Specified protocols and ports, select the TCP checkbox, and then enter 22 in Ports.
Click Create.

To allow Google Cloud IPv6 health checks, click Create firewall rule again and enter the following information:
- Name: fw-allow-health-check-ipv6
- Network: lb-network-dual-stack
- Priority: 1000
- Direction of traffic: ingress
- Action on match: allow
- Targets: Specified target tags
- Target tags: allow-health-check-ipv6
- Source filter: IPv6 ranges
- Source IPv6 ranges: 2600:2d00:1:b029::/64
- Protocols and ports: Allow all
Click Create.

To allow Google Cloud health checks, click Create firewall rule again and enter the following information:
- Name: fw-allow-health-check
- Network: lb-network-dual-stack
- Priority: 1000
- Direction of traffic: ingress
- Action on match: allow
- Targets: Specified target tags
- Target tags: allow-health-check
- Source filter: IPv4 ranges
- Source IPv4 ranges: 130.211.0.0/22 and 35.191.0.0/16
- Protocols and ports: Allow all
Click Create.
gcloud
Create the fw-allow-lb-access firewall rule to allow communication with the subnet:

gcloud compute firewall-rules create fw-allow-lb-access \
    --network=lb-network-dual-stack \
    --action=allow \
    --direction=ingress \
    --source-ranges=10.1.2.0/24,10.3.4.0/24 \
    --rules=all
Create the fw-allow-lb-access-ipv6 firewall rule to allow communication with the subnet:

gcloud compute firewall-rules create fw-allow-lb-access-ipv6 \
    --network=lb-network-dual-stack \
    --action=allow \
    --direction=ingress \
    --source-ranges=IPV6_ADDRESS \
    --rules=all

Replace IPV6_ADDRESS with the IPv6 address range assigned in the lb-subnet.

Create the fw-allow-ssh firewall rule to allow SSH connectivity to VMs with the network tag allow-ssh. When you omit source-ranges, Google Cloud interprets the rule to mean any source.

gcloud compute firewall-rules create fw-allow-ssh \
    --network=lb-network-dual-stack \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-ssh \
    --rules=tcp:22
Create the fw-allow-health-check-ipv6 rule to allow Google Cloud IPv6 health checks.

gcloud compute firewall-rules create fw-allow-health-check-ipv6 \
    --network=lb-network-dual-stack \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-health-check-ipv6 \
    --source-ranges=2600:2d00:1:b029::/64 \
    --rules=tcp,udp
Create the fw-allow-health-check rule to allow Google Cloud health checks.

gcloud compute firewall-rules create fw-allow-health-check \
    --network=lb-network-dual-stack \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-health-check \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --rules=tcp,udp,icmp
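After creating the five rules, you can optionally confirm that they are all attached to the network. This is a sketch using standard gcloud list filtering; the exact output columns may vary by gcloud version:

```shell
# List only the firewall rules in this example's VPC network.
gcloud compute firewall-rules list \
    --filter="network=lb-network-dual-stack" \
    --format="table(name,direction,sourceRanges.list(),targetTags.list())"
```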
API
Create the fw-allow-lb-access firewall rule by making a POST request to the firewalls.insert method.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "name": "fw-allow-lb-access",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network-dual-stack",
  "priority": 1000,
  "sourceRanges": [
    "10.1.2.0/24",
    "10.3.4.0/24"
  ],
  "allowed": [
    { "IPProtocol": "tcp" },
    { "IPProtocol": "udp" },
    { "IPProtocol": "icmp" }
  ],
  "direction": "INGRESS",
  "logConfig": {
    "enable": false
  },
  "disabled": false
}

Create the fw-allow-lb-access-ipv6 firewall rule by making a POST request to the firewalls.insert method.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "name": "fw-allow-lb-access-ipv6",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network-dual-stack",
  "priority": 1000,
  "sourceRanges": [
    "IPV6_ADDRESS"
  ],
  "allowed": [
    { "IPProtocol": "tcp" },
    { "IPProtocol": "udp" },
    { "IPProtocol": "icmp" }
  ],
  "direction": "INGRESS",
  "logConfig": {
    "enable": false
  },
  "disabled": false
}

Replace IPV6_ADDRESS with the IPv6 address range assigned in the lb-subnet.

Create the fw-allow-ssh firewall rule by making a POST request to the firewalls.insert method.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "name": "fw-allow-ssh",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network-dual-stack",
  "priority": 1000,
  "sourceRanges": [
    "0.0.0.0/0"
  ],
  "targetTags": [
    "allow-ssh"
  ],
  "allowed": [
    {
      "IPProtocol": "tcp",
      "ports": [ "22" ]
    }
  ],
  "direction": "INGRESS",
  "logConfig": {
    "enable": false
  },
  "disabled": false
}

Create the fw-allow-health-check-ipv6 firewall rule by making a POST request to the firewalls.insert method.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "name": "fw-allow-health-check-ipv6",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network-dual-stack",
  "priority": 1000,
  "sourceRanges": [
    "2600:2d00:1:b029::/64"
  ],
  "targetTags": [
    "allow-health-check-ipv6"
  ],
  "allowed": [
    { "IPProtocol": "tcp" },
    { "IPProtocol": "udp" }
  ],
  "direction": "INGRESS",
  "logConfig": {
    "enable": false
  },
  "disabled": false
}

Create the fw-allow-health-check firewall rule by making a POST request to the firewalls.insert method.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "name": "fw-allow-health-check",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network-dual-stack",
  "priority": 1000,
  "sourceRanges": [
    "130.211.0.0/22",
    "35.191.0.0/16"
  ],
  "targetTags": [
    "allow-health-check"
  ],
  "allowed": [
    { "IPProtocol": "tcp" },
    { "IPProtocol": "udp" },
    { "IPProtocol": "icmp" }
  ],
  "direction": "INGRESS",
  "logConfig": {
    "enable": false
  },
  "disabled": false
}
Create backend VMs and instance groups
This example uses two unmanaged instance groups, each having two backend (server) VMs. To demonstrate the regional nature of internal passthrough Network Load Balancers, the two instance groups are placed in separate zones, us-west1-a and us-west1-c.

- Instance group ig-a contains these two VMs: vm-a1 and vm-a2
- Instance group ig-c contains these two VMs: vm-c1 and vm-c2
Traffic to all four of the backend VMs is load balanced.
To support this example and the additional configuration options, each of the four VMs runs an Apache web server that listens on the following TCP ports: 80, 8008, 8080, 8088, 443, and 8443.

Each VM is assigned an internal IP address in the lb-subnet and an ephemeral external (public) IP address. You can remove the external IP addresses later.

External IP addresses for the backend VMs are not required; however, they are useful for this example because they let the backend VMs download Apache from the internet and let you connect to them using SSH.

By default, Apache is configured to bind to any IP address. Internal passthrough Network Load Balancers deliver packets by preserving the destination IP address.

Ensure that the server software running on your backend VMs is listening on the IP address of the load balancer's internal forwarding rule. If you configure multiple internal forwarding rules, ensure that your software listens to the internal IP address associated with each one. The destination IP address of a packet delivered to a backend VM by an internal passthrough Network Load Balancer is the internal IP address of the forwarding rule.
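To check this binding behavior on a backend VM, you can inspect the listening sockets and make a request using the forwarding rule's destination IP. This is a diagnostic sketch; it assumes you run it on a backend VM after the VMs and the IPv4 forwarding rule at 10.1.2.99 (created later in this example) exist:

```shell
# List TCP listening sockets; Apache bound to 0.0.0.0 / [::] accepts
# packets whose destination IP is the forwarding rule address.
sudo ss -tlnp | grep -E ':(80|8008|8080|8088)\b'

# Request the page with the forwarding rule IP as the destination;
# the load balancer preserves this destination IP on delivery.
curl --silent http://10.1.2.99
```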
If you're using managed instance groups, ensure that the subnetwork stack type matches the stack type of the instance templates used by the managed instance groups. The subnetwork must be dual-stack if the managed instance group is using a dual-stack instance template.
For instructional simplicity, these backend VMs run Debian GNU/Linux 12.
Console
Create backend VMs
In the Google Cloud console, go to the VM instances page.

Repeat steps 3 to 8 for each VM, using the following name and zone combinations.
- Name: vm-a1, zone: us-west1-a
- Name: vm-a2, zone: us-west1-a
- Name: vm-c1, zone: us-west1-c
- Name: vm-c2, zone: us-west1-c
Click Create instance.

Set the Name as indicated in step 2.

For Region, select us-west1, and choose a Zone as indicated in step 2.

In the Boot disk section, ensure that Debian GNU/Linux 12 (bookworm) is selected for the boot disk options. If necessary, click Change to change the image.

Click Advanced options.

Click Networking and configure the following fields:
- For Network tags, enter allow-ssh and allow-health-check-ipv6.
- For Network interfaces, select the following:
  - Network: lb-network-dual-stack
  - Subnet: lb-subnet
  - IP stack type: IPv4 and IPv6 (dual-stack)
  - Primary internal IPv4 address: Ephemeral (automatic)
  - External IPv4 address: Ephemeral
Click Management, and then in the Startup script field, enter the following script. The script contents are identical for all four VMs.
#! /bin/bash
if [ -f /etc/startup_script_completed ]; then
exit 0
fi
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
file_ports="/etc/apache2/ports.conf"
file_http_site="/etc/apache2/sites-available/000-default.conf"
file_https_site="/etc/apache2/sites-available/default-ssl.conf"
http_listen_prts="Listen 80\nListen 8008\nListen 8080\nListen 8088"
http_vh_prts="*:80 *:8008 *:8080 *:8088"
https_listen_prts="Listen 443\nListen 8443"
https_vh_prts="*:443 *:8443"
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://metadata.google.internal/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
prt_conf="$(cat "$file_ports")"
prt_conf_2="$(echo "$prt_conf" | sed "s|Listen 80|${http_listen_prts}|")"
prt_conf="$(echo "$prt_conf_2" | sed "s|Listen 443|${https_listen_prts}|")"
echo "$prt_conf" | tee "$file_ports"
http_site_conf="$(cat "$file_http_site")"
http_site_conf_2="$(echo "$http_site_conf" | sed "s|*:80|${http_vh_prts}|")"
echo "$http_site_conf_2" | tee "$file_http_site"
https_site_conf="$(cat "$file_https_site")"
https_site_conf_2="$(echo "$https_site_conf" | sed "s|_default_:443|${https_vh_prts}|")"
echo "$https_site_conf_2" | tee "$file_https_site"
systemctl restart apache2
touch /etc/startup_script_completed
Click Create.
Create instance groups
In the Google Cloud console, go to the Instance groups page.

Repeat the following steps to create two unmanaged instance groups, each with two VMs in them, using these combinations.
- Instance group name: ig-a, zone: us-west1-a, VMs: vm-a1 and vm-a2
- Instance group name: ig-c, zone: us-west1-c, VMs: vm-c1 and vm-c2
Click Create instance group.

Click New unmanaged instance group.

Set Name as indicated in step 2.

In the Location section, select us-west1 for the Region, and then choose a Zone as indicated in step 2.

For Network, select lb-network-dual-stack.

For Subnetwork, select lb-subnet.

In the VM instances section, add the VMs as indicated in step 2.

Click Create.
gcloud
To create the four VMs, run the gcloud compute instances create command four times, using these four combinations for VM-NAME and ZONE. The script contents are identical for all four VMs.

- VM-NAME: vm-a1, ZONE: us-west1-a
- VM-NAME: vm-a2, ZONE: us-west1-a
- VM-NAME: vm-c1, ZONE: us-west1-c
- VM-NAME: vm-c2, ZONE: us-west1-c

gcloud compute instances create VM-NAME \
    --zone=ZONE \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --tags=allow-ssh,allow-health-check-ipv6 \
    --subnet=lb-subnet \
    --stack-type=IPV4_IPV6 \
    --metadata=startup-script='#! /bin/bash
if [ -f /etc/startup_script_completed ]; then
exit 0
fi
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
file_ports="/etc/apache2/ports.conf"
file_http_site="/etc/apache2/sites-available/000-default.conf"
file_https_site="/etc/apache2/sites-available/default-ssl.conf"
http_listen_prts="Listen 80\nListen 8008\nListen 8080\nListen 8088"
http_vh_prts="*:80 *:8008 *:8080 *:8088"
https_listen_prts="Listen 443\nListen 8443"
https_vh_prts="*:443 *:8443"
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://metadata.google.internal/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
prt_conf="$(cat "$file_ports")"
prt_conf_2="$(echo "$prt_conf" | sed "s|Listen 80|${http_listen_prts}|")"
prt_conf="$(echo "$prt_conf_2" | sed "s|Listen 443|${https_listen_prts}|")"
echo "$prt_conf" | tee "$file_ports"
http_site_conf="$(cat "$file_http_site")"
http_site_conf_2="$(echo "$http_site_conf" | sed "s|*:80|${http_vh_prts}|")"
echo "$http_site_conf_2" | tee "$file_http_site"
https_site_conf="$(cat "$file_https_site")"
https_site_conf_2="$(echo "$https_site_conf" | sed "s|_default_:443|${https_vh_prts}|")"
echo "$https_site_conf_2" | tee "$file_https_site"
systemctl restart apache2
touch /etc/startup_script_completed'
Create the two unmanaged instance groups in each zone:
gcloud compute instance-groups unmanaged create ig-a \
    --zone=us-west1-a

gcloud compute instance-groups unmanaged create ig-c \
    --zone=us-west1-c
Add the VMs to the appropriate instance groups:
gcloud compute instance-groups unmanaged add-instances ig-a \
    --zone=us-west1-a \
    --instances=vm-a1,vm-a2

gcloud compute instance-groups unmanaged add-instances ig-c \
    --zone=us-west1-c \
    --instances=vm-c1,vm-c2
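As an optional check, you can verify that each instance group contains its two VMs by listing the group membership:

```shell
# List the member VMs of each unmanaged instance group.
gcloud compute instance-groups unmanaged list-instances ig-a \
    --zone=us-west1-a

gcloud compute instance-groups unmanaged list-instances ig-c \
    --zone=us-west1-c
```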
API
For the four VMs, use the following VM names and zones:
- VM-NAME: vm-a1, ZONE: us-west1-a
- VM-NAME: vm-a2, ZONE: us-west1-a
- VM-NAME: vm-c1, ZONE: us-west1-c
- VM-NAME: vm-c2, ZONE: us-west1-c
You can get the current DEBIAN_IMAGE_NAME by running the following gcloud command:

gcloud compute images list \
    --filter="family=debian-12"
Create four backend VMs by making four POST requests to the instances.insert method:
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances

{
  "name": "VM-NAME",
  "tags": {
    "items": [
      "allow-health-check-ipv6",
      "allow-ssh"
    ]
  },
  "machineType": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/machineTypes/e2-standard-2",
  "canIpForward": false,
  "networkInterfaces": [
    {
      "stackType": "IPV4_IPV6",
      "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network-dual-stack",
      "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
      "accessConfigs": [
        {
          "type": "ONE_TO_ONE_NAT",
          "name": "external-nat",
          "networkTier": "PREMIUM"
        }
      ]
    }
  ],
  "disks": [
    {
      "type": "PERSISTENT",
      "boot": true,
      "mode": "READ_WRITE",
      "autoDelete": true,
      "deviceName": "VM-NAME",
      "initializeParams": {
        "sourceImage": "projects/debian-cloud/global/images/DEBIAN_IMAGE_NAME",
        "diskType": "projects/PROJECT_ID/zones/ZONE/diskTypes/pd-standard",
        "diskSizeGb": "10"
      }
    }
  ],
  "metadata": {
    "items": [
      {
        "key": "startup-script",
        "value": "#! /bin/bash\napt-get update\napt-get install apache2 -y\na2ensite default-ssl\na2enmod ssl\nfile_ports=\"/etc/apache2/ports.conf\"\nfile_http_site=\"/etc/apache2/sites-available/000-default.conf\"\nfile_https_site=\"/etc/apache2/sites-available/default-ssl.conf\"\nhttp_listen_prts=\"Listen 80\\nListen 8008\\nListen 8080\\nListen 8088\"\nhttp_vh_prts=\"*:80 *:8008 *:8080 *:8088\"\nhttps_listen_prts=\"Listen 443\\nListen 8443\"\nhttps_vh_prts=\"*:443 *:8443\"\nvm_hostname=\"$(curl -H \"Metadata-Flavor:Google\" \\\nhttp://169.254.169.254/computeMetadata/v1/instance/name)\"\necho \"Page served from: $vm_hostname\" | \\\ntee /var/www/html/index.html\nprt_conf=\"$(cat \"$file_ports\")\"\nprt_conf_2=\"$(echo \"$prt_conf\" | sed \"s|Listen 80|${http_listen_prts}|\")\"\nprt_conf=\"$(echo \"$prt_conf_2\" | sed \"s|Listen 443|${https_listen_prts}|\")\"\necho \"$prt_conf\" | tee \"$file_ports\"\nhttp_site_conf=\"$(cat \"$file_http_site\")\"\nhttp_site_conf_2=\"$(echo \"$http_site_conf\" | sed \"s|*:80|${http_vh_prts}|\")\"\necho \"$http_site_conf_2\" | tee \"$file_http_site\"\nhttps_site_conf=\"$(cat \"$file_https_site\")\"\nhttps_site_conf_2=\"$(echo \"$https_site_conf\" | sed \"s|_default_:443|${https_vh_prts}|\")\"\necho \"$https_site_conf_2\" | tee \"$file_https_site\"\nsystemctl restart apache2"
      }
    ]
  },
  "scheduling": {
    "preemptible": false
  },
  "deletionProtection": false
}

Create two instance groups by making a POST request to the instanceGroups.insert method.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instanceGroups

{
  "name": "ig-a",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network-dual-stack",
  "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet"
}

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-c/instanceGroups

{
  "name": "ig-c",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network-dual-stack",
  "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet"
}

Add instances to each instance group by making a POST request to the instanceGroups.addInstances method.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instanceGroups/ig-a/addInstances

{
  "instances": [
    { "instance": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instances/vm-a1" },
    { "instance": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instances/vm-a2" }
  ]
}

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-c/instanceGroups/ig-c/addInstances

{
  "instances": [
    { "instance": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-c/instances/vm-c1" },
    { "instance": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-c/instances/vm-c2" }
  ]
}

Configure load balancer components
These steps configure all of the internal passthrough Network Load Balancer components, starting with the health check and backend service, and then the frontend components:
- Health check. In this example, you use an HTTP health check that checks for an HTTP 200 (OK) status code. For more information, see the health checks section of the internal passthrough Network Load Balancer overview.
- Backend service. Because you need to pass HTTP traffic through the internal load balancer, you need to use TCP, not UDP.
- Forwarding rule. This example creates two internal forwarding rules, one for IPv4 traffic and one for IPv6 traffic.
- Internal IP address. In this example, you specify an internal IP address, 10.1.2.99, when you create the IPv4 forwarding rule. For more information, see Internal IP address. Although you choose which IPv4 address is configured, the IPv6 address is assigned automatically.
Console
Start your configuration
In the Google Cloud console, go to the Load balancing page.

- Click Create load balancer.
- For Type of load balancer, select Network Load Balancer (TCP/UDP/SSL) and click Next.
- For Proxy or passthrough, select Passthrough load balancer and click Next.
- For Public facing or internal, select Internal and click Next.
- Click Configure.
Basic configuration
On the Create internal passthrough Network Load Balancer page, enter the following information:

- Load balancer name: be-ilb
- Region: us-west1
- Network: lb-network-dual-stack
Backend configuration
- Click Backend configuration.
- In the Health check list, click Create a health check, and enter the following information:
  - Name: hc-http-80
  - Scope: Regional
  - Protocol: HTTP
  - Port: 80
  - Proxy protocol: NONE
  - Request: /
- Click Create.
- In the New Backend section, for IP stack type, select the IPv4 and IPv6 (dual-stack) option.
- In Instance group, select the ig-a instance group and click Done.
- Click Add a backend and repeat the step to add the ig-c instance group.
- Verify that a blue check mark appears next to Backend configuration.
Frontend configuration
- Click Frontend configuration. In the New Frontend IP and port section, do the following:
  - For Name, enter fr-ilb-ipv6.
  - To handle IPv6 traffic, do the following:
    - For IP version, select IPv6.
    - For Subnetwork, select lb-subnet. The IPv6 address range in the forwarding rule is always ephemeral.
    - For Ports, select Multiple, and then in the Port number field, enter 80, 8008, 8080, 8088.
    - Click Done.
  - To handle IPv4 traffic, do the following:
    - Click Add frontend IP and port.
    - For Name, enter fr-ilb.
    - For Subnetwork, select lb-subnet.
    - In the Internal IP purpose section, from the IP address list, select Create IP address, enter the following information, and then click Reserve.
      - Name: ip-ilb
      - IP version: IPv4
      - Static IP address: Let me choose
      - Custom IP address: 10.1.2.99
    - For Ports, select Multiple, and then in Port numbers, enter 80, 8008, 8080, and 8088.
    - Click Done.
- Verify that there is a blue check mark next to Frontend configuration before continuing.
Review the configuration
- Click Review and finalize. Check all your settings.
- If the settings are correct, click Create. It takes a few minutes for the internal passthrough Network Load Balancer to be created.
gcloud
Create a new regional HTTP health check to test HTTP connectivity to the VMs on port 80.

gcloud compute health-checks create http hc-http-80 \
    --region=us-west1 \
    --port=80
Create the backend service for HTTP traffic:
gcloud compute backend-services create be-ilb \
    --load-balancing-scheme=internal \
    --protocol=tcp \
    --region=us-west1 \
    --health-checks=hc-http-80 \
    --health-checks-region=us-west1
Add the two instance groups to the backend service:
gcloud compute backend-services add-backend be-ilb \
    --region=us-west1 \
    --instance-group=ig-a \
    --instance-group-zone=us-west1-a

gcloud compute backend-services add-backend be-ilb \
    --region=us-west1 \
    --instance-group=ig-c \
    --instance-group-zone=us-west1-c
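After the backends are added and the health check firewall rules are in place, you can optionally check per-instance health. This is a sketch; it can take a minute or two for the probes to report the instances as healthy:

```shell
# Show the health state of each VM in both zonal instance groups.
gcloud compute backend-services get-health be-ilb \
    --region=us-west1
```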
Create two forwarding rules for the backend service. When you create theIPv4 forwarding rule, specify
10.1.2.99for the internal IP address inthe subnet for IPv4 addresses.gcloud compute forwarding-rules create fr-ilb \ --region=us-west1 \ --load-balancing-scheme=internal \ --subnet=lb-subnet \ --address=10.1.2.99 \ --ip-protocol=TCP \ --ports=80,8008,8080,8088 \ --backend-service=be-ilb \ --backend-service-region=us-west1
gcloud compute forwarding-rules create fr-ilb-ipv6 \ --region=us-west1 \ --load-balancing-scheme=internal \ --subnet=lb-subnet \ --ip-protocol=TCP \ --ports=80,8008,8080,8088 \ --backend-service=be-ilb \ --backend-service-region=us-west1 \ --ip-version=IPV6
api
Create the health check by making a POST request to the regionHealthChecks.insert method.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/regionHealthChecks

{
  "name": "hc-http-80",
  "type": "HTTP",
  "httpHealthCheck": {
    "port": 80
  }
}

Create the regional backend service by making a POST request to the regionBackendServices.insert method.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices

{
  "name": "be-ilb",
  "backends": [
    {
      "group": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instanceGroups/ig-a",
      "balancingMode": "CONNECTION"
    },
    {
      "group": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-c/instanceGroups/ig-c",
      "balancingMode": "CONNECTION"
    }
  ],
  "healthChecks": [
    "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/healthChecks/hc-http-80"
  ],
  "loadBalancingScheme": "INTERNAL",
  "connectionDraining": {
    "drainingTimeoutSec": 0
  }
}

Create the IPv6 forwarding rule by making a POST request to the forwardingRules.insert method.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/forwardingRules

{
  "name": "fr-ilb-ipv6",
  "IPProtocol": "TCP",
  "ports": [
    "80", "8008", "8080", "8088"
  ],
  "loadBalancingScheme": "INTERNAL",
  "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
  "backendService": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices/be-ilb",
  "ipVersion": "IPV6",
  "networkTier": "PREMIUM"
}

Create the IPv4 forwarding rule by making a POST request to the forwardingRules.insert method.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/forwardingRules

{
  "name": "fr-ilb",
  "IPAddress": "10.1.2.99",
  "IPProtocol": "TCP",
  "ports": [
    "80", "8008", "8080", "8088"
  ],
  "loadBalancingScheme": "INTERNAL",
  "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
  "backendService": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices/be-ilb",
  "networkTier": "PREMIUM"
}

Test your load balancer
To test the load balancer, create a client VM in the same region as the load balancer, and then send traffic from the client to the load balancer.
Create a client VM
This example creates a client VM (vm-client) in the same region as the backend (server) VMs. The client is used to validate the load balancer's configuration and demonstrate expected behavior as described in the testing section.
Console
In the Google Cloud console, go to theVM instances page.
ClickCreate instance.
For Name, enter vm-client.
For Region, select us-west1.
For Zone, select us-west1-a.
Click Advanced options.
Click Networking and configure the following fields:
- For Network tags, enter allow-ssh.
- For Network interfaces, select the following:
  - Network: lb-network-dual-stack
  - Subnet: lb-subnet
  - IP stack type: IPv4 and IPv6 (dual-stack)
  - Primary internal IP: Ephemeral (automatic)
  - External IP: Ephemeral
- Click Done.
Click Create.
gcloud
The client VM can be in any zone in the same region as the load balancer, and it can use any subnet in that region. In this example, the client is in the us-west1-a zone, and it uses the same subnet as the backend VMs.
gcloud compute instances create vm-client \
    --zone=us-west1-a \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --stack-type=IPV4_IPV6 \
    --tags=allow-ssh \
    --subnet=lb-subnet
api
Make aPOST request to theinstances.insert method.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instances

{
  "name": "vm-client",
  "tags": {
    "items": [
      "allow-ssh"
    ]
  },
  "machineType": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/machineTypes/e2-standard-2",
  "canIpForward": false,
  "networkInterfaces": [
    {
      "stackType": "IPV4_IPV6",
      "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network-dual-stack",
      "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
      "accessConfigs": [
        {
          "type": "ONE_TO_ONE_NAT",
          "name": "external-nat",
          "networkTier": "PREMIUM"
        }
      ]
    }
  ],
  "disks": [
    {
      "type": "PERSISTENT",
      "boot": true,
      "mode": "READ_WRITE",
      "autoDelete": true,
      "deviceName": "vm-client",
      "initializeParams": {
        "sourceImage": "projects/debian-cloud/global/images/debian-image-name",
        "diskType": "projects/PROJECT_ID/zones/us-west1-a/diskTypes/pd-standard",
        "diskSizeGb": "10"
      }
    }
  ],
  "scheduling": {
    "preemptible": false
  },
  "deletionProtection": false
}

Test the connection
This test contacts the load balancer from a separate client VM; that is, not from a backend VM of the load balancer. The expected behavior is for traffic to be distributed among the four backend VMs.
Connect to the client VM instance.
gcloud compute ssh vm-client --zone=us-west1-a
Describe the IPv6 forwarding rule fr-ilb-ipv6. Note the IPV6_ADDRESS in the description.

gcloud compute forwarding-rules describe fr-ilb-ipv6 --region=us-west1

Describe the IPv4 forwarding rule fr-ilb.

gcloud compute forwarding-rules describe fr-ilb --region=us-west1

From clients with IPv6 connectivity, run the following command:

curl -m 10 -s http://IPV6_ADDRESS:80

For example, if the assigned IPv6 address is [fd20:1db0:b882:802:0:46:0:0/96]:80, the command should look like:

curl -m 10 -s http://[fd20:1db0:b882:802:0:46:0:0]:80

From clients with IPv4 connectivity, run the following command:

curl -m 10 -s http://10.1.2.99:80

Replace the placeholders with valid values:
- IPV6_ADDRESS is the ephemeral IPv6 address in the fr-ilb-ipv6 forwarding rule.
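To observe how connections are distributed across the backends, you can repeat the request and tally the responses. This is a sketch that assumes the backend VMs serve a page containing the serving VM's hostname, as the Apache startup script used for the backends in this guide does:

```shell
# Run from vm-client. Send ten requests to the IPv4 forwarding rule and
# count how often each backend VM answered. Because the load balancer
# hashes connection tuples, a roughly even split is typical but not
# guaranteed for a small number of requests.
for i in $(seq 1 10); do
  curl -m 10 -s http://10.1.2.99:80
done | sort | uniq -c
```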
Set up load balancer with IPv6-only subnets and backends
This document shows you how to configure and test an internal passthrough Network Load Balancer thatsupports only IPv6 traffic. The steps in this section describe how to configurethe following:
- A custom mode VPC network named lb-network-ipv6-only. IPv6 traffic requires a custom mode subnet.
- An internal IPv6-only subnet called lb-subnet-ipv6-only (stack-type set to IPV6_ONLY), which is required for IPv6-only traffic.
- Firewall rules that allow incoming connections to backend VMs.
- The backend instance group, which is located in the following region and subnet for this example:
  - Region: us-west1
  - Subnet: lb-subnet-ipv6-only. The IPv6 address range for the subnet is assigned automatically. Google provides a fixed size (/64) IPv6 CIDR block.
- Four backend IPv6-only VMs: two VMs in an unmanaged instance group in zone us-west1-a and two VMs in an unmanaged instance group in zone us-west1-c.
- An IPv6 TCP server on the backend VMs. The server listens for incoming connections on the specified VIP of the load balancer's forwarding rule and the specified network interface. The server accepts incoming client connections, sends a response, and then closes the connection.
- One client VM to test connections.
- The following internal passthrough Network Load Balancer components:
  - A health check for the backend service
  - An internal backend service in the us-west1 region to manage connection distribution to the two zonal instance groups
  - An IPv6 forwarding rule
You can also set up an internal passthrough Network Load Balancer with internal IPv6-only backends by using a VM instance that serves as a NAT gateway. To learn more about this configuration, see Set up an internal passthrough Network Load Balancer with internal IPv6-only backends.
Configure a network, region, and subnet
The example internal passthrough Network Load Balancer described on this page is created in a custom mode VPC network named lb-network-ipv6-only.

To configure subnets with internal IPv6 ranges, enable a VPC network ULA internal IPv6 range. Internal IPv6 subnet ranges are allocated from this range.

Later, in the Install an IPv6 TCP server on the backend VMs using a startup Bash script section of this document, the internal IPv6 subnet range is used to create a routing rule to route traffic from the VPC subnet through the specified gateway and network interface.
Console
In the Google Cloud console, go to theVPC networks page.
ClickCreate VPC network.
For Name, enter lb-network-ipv6-only.
If you want to configure internal IPv6 address ranges on subnets in this network, complete these steps:
- For Private IPv6 address settings, select Configure a ULA internal IPv6 range for this VPC Network.
- For Allocate internal IPv6 range, select Automatically or Manually. If you select Manually, enter a /48 range from within the fd20::/20 range. If the range is already in use, you are prompted to provide a different range.
For Subnet creation mode, select Custom.
In the New subnet section, specify the following configuration parameters for a subnet:
- Name: lb-subnet-ipv6-only
- Region: us-west1
- IP stack type: IPv6 (single-stack)
- IPv6 access type: Internal
Click Done.
Click Create.
gcloud
To create a new custom mode VPC network, run the gcloud compute networks create command. To configure internal IPv6 ranges on any subnets in this network, use the --enable-ula-internal-ipv6 flag.

gcloud compute networks create lb-network-ipv6-only \
    --subnet-mode=custom \
    --enable-ula-internal-ipv6 \
    --bgp-routing-mode=regional

Configure a subnet with the ipv6-access-type set to INTERNAL. This indicates that the VMs in this subnet can only have internal IPv6 addresses. To create the subnet, run the gcloud compute networks subnets create command.

gcloud compute networks subnets create lb-subnet-ipv6-only \
    --network=lb-network-ipv6-only \
    --region=us-west1 \
    --stack-type=IPV6_ONLY \
    --ipv6-access-type=INTERNAL
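As an optional verification step, you can confirm the internal IPv6 range that Google Cloud assigned to the subnet by describing it; the exact set of output fields can vary by gcloud version:

```shell
# Show the subnet's configuration, including the automatically assigned
# internal IPv6 /64 range allocated from the network's ULA range.
gcloud compute networks subnets describe lb-subnet-ipv6-only \
    --region=us-west1
```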
Configure firewall rules
This example uses the following firewall rules:
- fw-allow-lb-access-ipv6-only: an ingress rule, applicable to all targets in the VPC network, that allows traffic from all IPv6 sources.
- fw-allow-ssh: an ingress rule, applicable to the instances being load balanced, that allows incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP range for this rule; for example, you can specify just the IP ranges of the system from which you initiate SSH sessions. This example uses the target tag allow-ssh to identify the VMs to which it must apply.
- fw-allow-health-check-ipv6-only: an ingress rule, applicable to the instances being load balanced, that allows traffic from the Google Cloud health checking systems (2600:2d00:1:b029::/64). This example uses the target tag allow-health-check-ipv6 to identify the instances to which it must apply.
Without these firewall rules, thedefault denyingress rule blocks incomingtraffic to the backend instances.
Note: You must create a firewall rule to allow health checks from the IP ranges of Google Cloud probe systems. For more information, see Probe IP ranges and firewall rules.

Console
In the Google Cloud console, go to theFirewall policies page.
To allow IPv6 subnet traffic, click Create firewall rule and enter the following information:
- Name: fw-allow-lb-access-ipv6-only
- Network: lb-network-ipv6-only
- Priority: 1000
- Direction of traffic: Ingress
- Action on match: Allow
- Targets: All instances in the network
- Source filter: IPv6 ranges
- Source IPv6 ranges: ::/0
- Protocols and ports: Allow all

Click Create.
To allow incoming SSH connections, click Create firewall rule again and enter the following information:
- Name: fw-allow-ssh
- Network: lb-network-ipv6-only
- Priority: 1000
- Direction of traffic: Ingress
- Action on match: Allow
- Targets: Specified target tags
- Target tags: allow-ssh
- Source filter: IPv4 ranges
- Source IPv4 ranges: 0.0.0.0/0
- Protocols and ports: Select Specified protocols and ports, select the TCP checkbox, and then enter 22 in Ports.

Click Create.
To allow Google Cloud IPv6 health checks, click Create firewall rule again and enter the following information:
- Name: fw-allow-health-check-ipv6-only
- Network: lb-network-ipv6-only
- Priority: 1000
- Direction of traffic: Ingress
- Action on match: Allow
- Targets: Specified target tags
- Target tags: allow-health-check-ipv6
- Source filter: IPv6 ranges
- Source IPv6 ranges: 2600:2d00:1:b029::/64
- Protocols and ports: Allow all

Click Create.
gcloud
Create the fw-allow-lb-access-ipv6-only firewall rule to allow IPv6 traffic to all VM instances in the VPC network.

gcloud compute firewall-rules create fw-allow-lb-access-ipv6-only \
    --network=lb-network-ipv6-only \
    --action=allow \
    --direction=ingress \
    --source-ranges=::/0 \
    --rules=all

Create the fw-allow-ssh firewall rule to allow SSH connectivity to VMs with the network tag allow-ssh. When you omit source-ranges, Google Cloud interprets the rule to mean any source.

gcloud compute firewall-rules create fw-allow-ssh \
    --network=lb-network-ipv6-only \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-ssh \
    --rules=tcp:22

Create the fw-allow-health-check-ipv6-only rule to allow Google Cloud IPv6 health checks.

gcloud compute firewall-rules create fw-allow-health-check-ipv6-only \
    --network=lb-network-ipv6-only \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-health-check-ipv6 \
    --source-ranges=2600:2d00:1:b029::/64 \
    --rules=tcp,udp
Create backend VMs and instance groups
This example uses two unmanaged instance groups, each having two backend VMs. To demonstrate the regional nature of internal passthrough Network Load Balancers, the two instance groups are placed in separate zones, us-west1-a and us-west1-c.
- Instance group ig-a contains these two VMs:
  - vm-a1
  - vm-a2
- Instance group ig-c contains these two VMs:
  - vm-c1
  - vm-c2
Traffic to all four of the backend VMs is load balanced.
Console
Create backend VMs
In the Google Cloud console, go to theVM instances page.
Repeat these steps for each VM, using the following name and zone combinations:
- Name: vm-a1, zone: us-west1-a
- Name: vm-a2, zone: us-west1-a
- Name: vm-c1, zone: us-west1-c
- Name: vm-c2, zone: us-west1-c

Click Create instance.
Set the Name as indicated in step 2.
For Region, select us-west1, and choose a Zone as indicated in step 2.
In the Boot disk section, ensure that Debian GNU/Linux 12 (bookworm) is selected for the boot disk options. If necessary, click Change to change the image.
Click Advanced options.
Click Networking and configure the following fields:
- For Network tags, enter allow-ssh and allow-health-check-ipv6.
- For Network interfaces, select the following:
  - Network: lb-network-ipv6-only
  - Subnet: lb-subnet-ipv6-only
  - IP stack type: IPv6 (single-stack)
  - Primary internal IPv6 address: Ephemeral (Automatic)
- Click Done.
Click Create.
The backend VMs need to run an IPv6 TCP server that listens for incoming connections. You install this server on the backend VMs after you have configured the load balancer because the server script creates a socket that binds to the forwarding rule of the load balancer.
Create instance groups
In the Google Cloud console, go to theInstance groups page.
Repeat the following steps to create two unmanaged instance groups, each with two VMs in them, using these combinations:
- Instance group name: ig-a, zone: us-west1-a, VMs: vm-a1 and vm-a2
- Instance group name: ig-c, zone: us-west1-c, VMs: vm-c1 and vm-c2

Click Create instance group.
Click New unmanaged instance group.
Set Name as indicated in step 2.
In the Location section, select us-west1 for the Region, and then choose a Zone as indicated in step 2.
For Network, select lb-network-ipv6-only.
For Subnetwork, select lb-subnet-ipv6-only.
In the VM instances section, add the VMs as indicated in step 2.
Click Create.
gcloud
To create the four VMs, run the gcloud compute instances create command four times, using these four combinations for VM-NAME and ZONE:
- VM-NAME: vm-a1, ZONE: us-west1-a
- VM-NAME: vm-a2, ZONE: us-west1-a
- VM-NAME: vm-c1, ZONE: us-west1-c
- VM-NAME: vm-c2, ZONE: us-west1-c

gcloud beta compute instances create VM-NAME \
    --zone=ZONE \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --tags=allow-ssh,allow-health-check-ipv6 \
    --subnet=lb-subnet-ipv6-only \
    --stack-type=IPV6_ONLY

The backend VMs need to run an IPv6 TCP server that listens for incoming connections. You install this server on the backend VMs after you have configured the load balancer because the server script creates a socket that binds to the forwarding rule of the load balancer.

Create the two unmanaged instance groups in each zone:

gcloud beta compute instance-groups unmanaged create ig-a \
    --zone=us-west1-a

gcloud beta compute instance-groups unmanaged create ig-c \
    --zone=us-west1-c

Add the VMs to the appropriate instance groups:

gcloud beta compute instance-groups unmanaged add-instances ig-a \
    --zone=us-west1-a \
    --instances=vm-a1,vm-a2

gcloud beta compute instance-groups unmanaged add-instances ig-c \
    --zone=us-west1-c \
    --instances=vm-c1,vm-c2
Configure load balancer components
These steps configure all of the internal passthrough Network Load Balancer components, starting with the health check and backend service, and then the frontend components:
Console
Start your configuration
In the Google Cloud console, go to theLoad balancing page.
- Click Create load balancer.
- For Type of load balancer, select Network Load Balancer (TCP/UDP/SSL) and click Next.
- For Proxy or passthrough, select Passthrough load balancer and click Next.
- For Public facing or internal, select Internal and click Next.
- Click Configure.
Basic configuration
On the Create internal passthrough Network Load Balancer page, enter the following information:
- Load balancer name: ilb-ipv6-only
- Region: us-west1
- Network: lb-network-ipv6-only
Backend configuration
- Click Backend configuration.
- In the Health check list, click Create a health check, and then enter the following information:
  - Name: hc-http-80
  - Scope: Regional
  - Protocol: HTTP
  - Port: 80
  - Proxy protocol: None
  - Request: /
- Click Create.
- In the New Backend section, for IP stack type, select the IPv6 (single-stack) option.
- In Instance group, select the ig-a instance group and click Done.
- Click Add a backend and repeat the step to add the ig-c instance group.
- Verify that a blue check mark appears next to Backend configuration.
Frontend configuration
- Click Frontend configuration. In the New Frontend IP and port section, do the following:
  - For Name, enter fr-ilb-ipv6-only.
  - To handle IPv6 traffic, do the following:
    - For IP version, select IPv6. The IPv6 TCP server that you are going to create in the following section binds to the VIP of the forwarding rule.
    - For Subnetwork, select lb-subnet-ipv6-only. The IPv6 address range in the forwarding rule is always ephemeral.
    - For Ports, select Multiple, and then in the Port number field, enter 80,8008,8080,8088.
    - Click Done.
- Verify that there is a blue check mark next to Frontend configuration before continuing.
Review the configuration
- Click Review and finalize. Check all your settings.
- If the settings are correct, click Create. It takes a few minutes for the internal passthrough Network Load Balancer to be created.
gcloud
Create a new regional HTTP health check to test HTTP connectivity to the VMs on port 80.

gcloud beta compute health-checks create http hc-http-80 \
    --region=us-west1 \
    --port=80

Create the backend service:

gcloud beta compute backend-services create ilb-ipv6-only \
    --load-balancing-scheme=INTERNAL \
    --protocol=tcp \
    --region=us-west1 \
    --health-checks=hc-http-80 \
    --health-checks-region=us-west1

Add the two instance groups to the backend service:

gcloud beta compute backend-services add-backend ilb-ipv6-only \
    --region=us-west1 \
    --instance-group=ig-a \
    --instance-group-zone=us-west1-a

gcloud beta compute backend-services add-backend ilb-ipv6-only \
    --region=us-west1 \
    --instance-group=ig-c \
    --instance-group-zone=us-west1-c

Create the IPv6 forwarding rule with an ephemeral IPv6 address:

gcloud beta compute forwarding-rules create fr-ilb-ipv6-only \
    --region=us-west1 \
    --load-balancing-scheme=INTERNAL \
    --subnet=lb-subnet-ipv6-only \
    --ip-protocol=TCP \
    --ports=80,8008,8080,8088 \
    --backend-service=ilb-ipv6-only \
    --backend-service-region=us-west1 \
    --ip-version=IPV6
The IPv6 TCP server that you are going to create in the following section binds to the VIP of the forwarding rule.
Install an IPv6 TCP server on the backend VMs using a startup Bash script
In this example, the Bash startup script for backend VMs contains the following:
- Networking commands to route all outgoing packets originating from a subnet through a specified gateway and network interface.
- A Python server script (server.py), which is an IPv6 TCP server that listens for incoming connections on the specified VIP and network interface. It accepts incoming client connections, sends a response, and then closes the connection.

You need to add the following details in the Bash script:
- Gateway address
- Subnet range
- VIP of the forwarding rule
- Network interface

You can identify the gateway address and the subnet range by running the ip -6 route show table all command on the backend VM. To learn more about the commands that are used in the startup Bash script, see the Appendix section.
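As an illustration, the routing portion of the startup script takes roughly the following shape. Every value below is a placeholder, not an output from this example; substitute the gateway, subnet range, and interface that ip -6 route show table all reports on your backend VM:

```shell
# Sketch only -- all values here are placeholders. Replace them with the
# gateway, subnet range, and interface from `ip -6 route show table all`.
SUBNET_RANGE="fd20:0:0:1::/64"   # placeholder: internal IPv6 subnet range
GATEWAY="fe80::1"                # placeholder: link-local gateway address
NIC="ens4"                       # placeholder: network interface name

# Route all outgoing packets that originate from the subnet through the
# specified gateway and network interface, using a dedicated routing table.
sudo ip -6 rule add from "$SUBNET_RANGE" table 100
sudo ip -6 route add default via "$GATEWAY" dev "$NIC" table 100
```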
To add a startup script to the VM instance, do the following:
Note: You need to run these steps for each backend VM that you created earlier.

Console
gcloud
Test your load balancer
To test the load balancer, create a client VM in the same region as the load balancer, and then send traffic from the client to the load balancer.
Create a client VM
This example creates a client VM (vm-client) in the same region as the backend (server) VMs. The client is used to validate the load balancer's configuration and demonstrate expected behavior as described in the testing section.
Console
In the Google Cloud console, go to theVM instances page.
ClickCreate instance.
For Name, enter vm-client.
For Region, select us-west1.
For Zone, select us-west1-a.
Click Advanced options.
Click Networking and configure the following fields:
- For Network tags, enter allow-ssh.
- For Network interfaces, select the following:
  - Network: lb-network-ipv6-only
  - Subnet: lb-subnet-ipv6-only
  - IP stack type: IPv6 (single-stack)
- Click Done.
Click Create.
gcloud
The client VM can be in any zone in the same region as the load balancer, and it can use any subnet in that region. In this example, the client is in the us-west1-a zone, and it uses the same subnet as the backend VMs.
gcloud beta compute instances create vm-client \
    --zone=us-west1-a \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --stack-type=IPV6_ONLY \
    --tags=allow-ssh \
    --subnet=lb-subnet-ipv6-only
Test the connection
This test contacts the load balancer from a separate client VM; that is, not from a backend VM of the load balancer. The expected behavior is for traffic to be distributed among the four backend VMs.
Connect to the client VM instance by using SSH.
gcloud compute ssh vm-client --zone=us-west1-a
Describe the IPv6 forwarding rule fr-ilb-ipv6-only. Note the IPV6_ADDRESS in the description.

gcloud beta compute forwarding-rules describe fr-ilb-ipv6-only \
    --region=us-west1

From clients with IPv6 connectivity, run the following command:

curl http://IPV6_ADDRESS:80

For example, if the assigned IPv6 address is [fd20:307:120c:2000:0:1:0:0/96]:80, the command should look like:

curl http://[fd20:307:120c:2000:0:1:0:0]:80
Additional configuration options
This section expands on the configuration example to provide alternative and additional configuration options. All of the tasks are optional. You can perform them in any order.
Enable global access
You can enable global access for your example internal passthrough Network Load Balancer to make it accessible to clients in all regions. The backends of your example load balancer must still be located in one region (us-west1).
To configure global access, make the following configuration changes.
Console
Edit the load balancer's forwarding rule
In the Google Cloud console, go to theLoad balancing page.
In the Name column, click your internal passthrough Network Load Balancer. The example load balancer is named be-ilb.
Click Frontend configuration.
ClickEdit.
UnderGlobal access, selectEnable.
ClickDone.
ClickUpdate.
On the Load balancer details page, verify that the frontend configuration says Regional (REGION) with global access.
gcloud
Update the example load balancer's forwarding rule, fr-ilb, to include the --allow-global-access flag.

gcloud compute forwarding-rules update fr-ilb \
    --region=us-west1 \
    --allow-global-access

You can use the forwarding-rules describe command to determine whether a forwarding rule has global access enabled. For example:

gcloud compute forwarding-rules describe fr-ilb \
    --region=us-west1 \
    --format="get(name,region,allowGlobalAccess)"

The word True appears in the output, after the name and region of the forwarding rule, when global access is enabled.
API
Make aPATCH request to theforwardingRules/patch method.
PATCH https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/forwardingRules/fr-ilb

{
  "allowGlobalAccess": true
}

Create a VM client to test global access
Console
In the Google Cloud console, go to theVM instances page.
ClickCreate instance.
Set the Name to vm-client2.
Set the Region to europe-west1.
Set the Zone to europe-west1-b.
Click Advanced options.
Click Networking and configure the following fields:
- For Network tags, enter allow-ssh.
- For Network interfaces, select the following:
  - Network: lb-network
  - Subnet: europe-subnet
- Click Done.
Click Create.
gcloud
For this global access test, the client VM is in a different region from the load balancer. In this example, the client is in the europe-west1-b zone, and it uses the europe-subnet subnet in the europe-west1 region.
gcloud compute instances create vm-client2 \
    --zone=europe-west1-b \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --tags=allow-ssh \
    --subnet=europe-subnet
API
Make aPOST request to theinstances.insert method.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/europe-west1-b/instances

{
  "name": "vm-client2",
  "tags": {
    "items": [
      "allow-ssh"
    ]
  },
  "machineType": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/europe-west1-b/machineTypes/e2-standard-2",
  "canIpForward": false,
  "networkInterfaces": [
    {
      "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
      "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/europe-west1/subnetworks/europe-subnet",
      "accessConfigs": [
        {
          "type": "ONE_TO_ONE_NAT",
          "name": "external-nat",
          "networkTier": "PREMIUM"
        }
      ]
    }
  ],
  "disks": [
    {
      "type": "PERSISTENT",
      "boot": true,
      "mode": "READ_WRITE",
      "autoDelete": true,
      "deviceName": "vm-client2",
      "initializeParams": {
        "sourceImage": "projects/debian-cloud/global/images/debian-image-name",
        "diskType": "projects/PROJECT_ID/zones/europe-west1-b/diskTypes/pd-standard",
        "diskSizeGb": "10"
      }
    }
  ],
  "scheduling": {
    "preemptible": false
  },
  "deletionProtection": false
}

Connect to the VM client and test connectivity
To test the connectivity, run the following command:
gcloud compute ssh vm-client2 --zone=europe-west1-b
Test connecting to the load balancer on all configured ports, as you did from the vm-client in the us-west1 region. Test HTTP connectivity on the four ports configured on the forwarding rule:

curl http://10.1.2.99
curl http://10.1.2.99:8008
curl http://10.1.2.99:8080
curl http://10.1.2.99:8088
Configure managed instance groups
The example configuration created two unmanaged instance groups. You can instead use managed instance groups, including zonal and regional managed instance groups, as backends for internal passthrough Network Load Balancers.
Managed instance groups require that you create an instance template. This procedure demonstrates how to replace the two zonal unmanaged instance groups from the example with a single regional managed instance group. A regional managed instance group automatically creates VMs in multiple zones of the region, making it simpler to distribute production traffic among zones.

Managed instance groups also support autoscaling and autohealing. If you use autoscaling with internal passthrough Network Load Balancers, you cannot autoscale based on load balancing serving capacity.

This procedure shows you how to modify the backend service for the example internal passthrough Network Load Balancer so that it uses a regional managed instance group.
Console
Instance template
In the Google Cloud console, go to theVM instance templates page.
ClickCreate instance template.
Set the Name to template-vm-ilb.
Choose a machine type.
In the Boot disk section, ensure that Debian GNU/Linux 12 (bookworm) is selected for the boot disk options. If necessary, click Change to change the image.
Click Advanced options.
Click Networking and configure the following fields:
- For Network tags, enter allow-ssh and allow-health-check.
- For Network interfaces, select the following:
  - Network: lb-network
  - Subnet: lb-subnet
- Click Done.
Click Management, and then in the Startup script field, enter the following script:
#! /bin/bash
if [ -f /etc/startup_script_completed ]; then
exit 0
fi
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
file_ports="/etc/apache2/ports.conf"
file_http_site="/etc/apache2/sites-available/000-default.conf"
file_https_site="/etc/apache2/sites-available/default-ssl.conf"
http_listen_prts="Listen 80\nListen 8008\nListen 8080\nListen 8088"
http_vh_prts="*:80 *:8008 *:8080 *:8088"
https_listen_prts="Listen 443\nListen 8443"
https_vh_prts="*:443 *:8443"
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://metadata.google.internal/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
prt_conf="$(cat "$file_ports")"
prt_conf_2="$(echo "$prt_conf" | sed "s|Listen 80|${http_listen_prts}|")"
prt_conf="$(echo "$prt_conf_2" | sed "s|Listen 443|${https_listen_prts}|")"
echo "$prt_conf" | tee "$file_ports"
http_site_conf="$(cat "$file_http_site")"
http_site_conf_2="$(echo "$http_site_conf" | sed "s|*:80|${http_vh_prts}|")"
echo "$http_site_conf_2" | tee "$file_http_site"
https_site_conf="$(cat "$file_https_site")"
https_site_conf_2="$(echo "$https_site_conf" | sed "s|_default_:443|${https_vh_prts}|")"
echo "$https_site_conf_2" | tee "$file_https_site"
systemctl restart apache2
touch /etc/startup_script_completed

Click Create.
Managed instance group
In the Google Cloud console, go to theInstance groups page.
ClickCreate instance group.
Set the Name to ig-ilb.
For Location, choose Multi-zone, and set the Region to us-west1.
Set the Instance template to template-vm-ilb.
Optional: Configure autoscaling. You cannot autoscale the instance group based on HTTP load balancing usage because the instance group is a backend for the internal passthrough Network Load Balancer.
Set the Minimum number of instances to 1 and the Maximum number of instances to 6.
Optional: Configure autohealing. If you configure autohealing, use the same health check used by the backend service for the internal passthrough Network Load Balancer. In this example, use hc-http-80.
Click Create.
gcloud
Create the instance template. Optionally, you can set other parameters, such as machine type, for the image template to use.

gcloud compute instance-templates create template-vm-ilb \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --tags=allow-ssh,allow-health-check \
    --subnet=lb-subnet \
    --region=us-west1 \
    --network=lb-network \
    --metadata=startup-script='#! /bin/bash
if [ -f /etc/startup_script_completed ]; then
exit 0
fi
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
file_ports="/etc/apache2/ports.conf"
file_http_site="/etc/apache2/sites-available/000-default.conf"
file_https_site="/etc/apache2/sites-available/default-ssl.conf"
http_listen_prts="Listen 80\nListen 8008\nListen 8080\nListen 8088"
http_vh_prts="*:80 *:8008 *:8080 *:8088"
https_listen_prts="Listen 443\nListen 8443"
https_vh_prts="*:443 *:8443"
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://metadata.google.internal/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
prt_conf="$(cat "$file_ports")"
prt_conf_2="$(echo "$prt_conf" | sed "s|Listen 80|${http_listen_prts}|")"
prt_conf="$(echo "$prt_conf_2" | sed "s|Listen 443|${https_listen_prts}|")"
echo "$prt_conf" | tee "$file_ports"
http_site_conf="$(cat "$file_http_site")"
http_site_conf_2="$(echo "$http_site_conf" | sed "s|*:80|${http_vh_prts}|")"
echo "$http_site_conf_2" | tee "$file_http_site"
https_site_conf="$(cat "$file_https_site")"
https_site_conf_2="$(echo "$https_site_conf" | sed "s|_default_:443|${https_vh_prts}|")"
echo "$https_site_conf_2" | tee "$file_https_site"
systemctl restart apache2
touch /etc/startup_script_completed'

Create one regional managed instance group using the template:
gcloud compute instance-groups managed create ig-ilb \
    --template=template-vm-ilb \
    --region=us-west1 \
    --size=6
Add the regional managed instance group as a backend to the backend service that you already created:
gcloud compute backend-services add-backend be-ilb \
    --region=us-west1 \
    --instance-group=ig-ilb \
    --instance-group-region=us-west1
Disconnect the two unmanaged (zonal) instance groups from the backend service:
gcloud compute backend-services remove-backend be-ilb \
    --region=us-west1 \
    --instance-group=ig-a \
    --instance-group-zone=us-west1-a

gcloud compute backend-services remove-backend be-ilb \
    --region=us-west1 \
    --instance-group=ig-c \
    --instance-group-zone=us-west1-c
Remove external IP addresses from backend VMs
When you created the backend VMs, each was assigned an ephemeral external IP address so it could download Apache using a startup script. Because the backend VMs are only used by an internal passthrough Network Load Balancer, you can remove their external IP addresses. Removing external IP addresses prevents the backend VMs from accessing the internet directly.
Important: Removing the external IP address from a VM limits how you can connect to it. For details, see Choose a connection option for internal-only VMs. It's useful to leave external IP addresses on backend VMs if you need to demonstrate how requests from a backend VM to the IP address of the load balancer are handled.

Console
In the Google Cloud console, go to the VM instances page.
Repeat the following steps for each backend VM.
Click the name of the backend VM, for example, vm-a1. Click Edit.
In the Network interfaces section, click the network.
From the External IP list, select None, and click Done.
Click Save.
gcloud
To look up the zone for an instance (for example, if you're using a regional managed instance group), run the following command for each instance to determine its zone. Replace [SERVER-VM] with the name of the VM to look up.

gcloud compute instances list --filter="name=[SERVER-VM]"
Repeat the following step for each backend VM. Replace [SERVER-VM] with the name of the VM, and replace [ZONE] with the VM's zone.

gcloud compute instances delete-access-config [SERVER-VM] \
    --zone=[ZONE] \
    --access-config-name=external-nat
API
Make a POST request to the instances.deleteAccessConfig method for each backend VM, replacing vm-a1 with the name of the VM and us-west1-a with the VM's zone.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instances/vm-a1/deleteAccessConfig?accessConfig=external-nat&networkInterface=None
Use a reserved internal IP address
When you create backend VMs and instance groups, each VM instance uses an ephemeral internal IPv4 or IPv6 address.
The following steps show you how to promote an internal IPv4 or IPv6 address to a static internal IPv4 or IPv6 address and then update the VM instance to use the static internal IP address:
- Promote an in-use ephemeral internal IPv4 or IPv6 address to a static address.
- Change or assign an internal IPv6 address to an existing instance.
Alternatively, the following steps show you how to reserve a new static internal IPv4 or IPv6 address and then update the VM instance to use the static internal IP address:
Reserve a new static internal IPv4 or IPv6 address.
Unlike internal IPv4 reservation, internal IPv6 reservation doesn't support reserving a specific IP address from the subnetwork. Instead, a /96 internal IPv6 address range is automatically allocated from the subnet's /64 internal IPv6 address range.

Change or assign an internal IPv6 address to an existing instance.
For more information, see How to reserve a static internal IP address.
Accept traffic on all ports
The load balancer's forwarding rule, not its backend service, determines the port or ports on which the load balancer accepts traffic. For information about the purpose of each component, see Components.
When you created this example load balancer's forwarding rule, you configured ports 80, 8008, 8080, and 8088. The startup script that installs Apache also configures it to accept HTTPS connections on ports 443 and 8443.
To support these six ports, you can configure the forwarding rule to accept traffic on all ports. With this strategy, you can also configure the firewall rule or rules that allow incoming connections to backend VMs so that they permit only certain ports.
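The split described here can be sketched as a small model: the all-ports forwarding rule admits every TCP port, and the VPC firewall decides which ports actually reach the backends. The function and port set below are illustrative only; they mirror this guide's Apache ports, not a real Google Cloud API:

```python
# Illustrative model of an all-ports forwarding rule combined with a
# restrictive firewall: only ports permitted by the firewall rule
# actually reach the backend VMs.

# Hypothetical set of ports opened by the example firewall rule,
# matching the Apache listeners configured in this guide.
FIREWALL_ALLOWED_TCP_PORTS = {80, 8008, 8080, 8088, 443, 8443}

def packet_reaches_backend(dst_port: int, all_ports_rule: bool = True) -> bool:
    """Return True if a TCP packet to dst_port is delivered to a backend."""
    # The forwarding rule with --ports=ALL admits every destination port ...
    if not all_ports_rule:
        return False
    # ... but the VPC firewall still filters before the backend VM sees it.
    return dst_port in FIREWALL_ALLOWED_TCP_PORTS

print(packet_reaches_backend(8443))  # alternate HTTPS port: delivered
print(packet_reaches_backend(9999))  # not opened by the firewall: dropped
```

This is why opening the forwarding rule to all ports does not expose every service on the backends: the firewall remains the narrower filter.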
This procedure shows you how to delete the load balancer's current forwarding rule and create a new one that accepts traffic on all ports.
Note: If you try to edit the existing forwarding rule by adding a new port, you might encounter the following error:

Forwarding rule's port range is conflicting with forwarding rule:
resource_type: FORWARDING_RULE
resource_name: "<EXISTING_FORWARDING_RULE_NAME>"

For more information about when to use this setup, see Internal passthrough Network Load Balancers and forwarding rules with a common IP address.
Console
Delete your forwarding rule and create a new one
In the Google Cloud console, go to the Load balancing page.
Click the be-ilb load balancer and click Edit.
Click Frontend configuration.
Hold the pointer over the 10.1.2.99 forwarding rule and click Delete.
Click Add frontend IP and port.
In the New Frontend IP and port section, enter the following information and click Done:
- Name: fr-ilb
- Subnetwork: lb-subnet
- Internal IP: ip-ilb
- Ports: All
Verify that there is a blue check mark next to Frontend configuration before continuing.
Click Review and finalize and review your load balancer configuration settings.
Click Create.
gcloud
Delete your existing forwarding rule, fr-ilb.

gcloud compute forwarding-rules delete fr-ilb \
    --region=us-west1
Create a replacement forwarding rule, with the same name, whose port configuration uses the keyword ALL. The other parameters for the forwarding rule remain the same.

gcloud compute forwarding-rules create fr-ilb \
    --region=us-west1 \
    --load-balancing-scheme=internal \
    --network=lb-network \
    --subnet=lb-subnet \
    --address=10.1.2.99 \
    --ip-protocol=TCP \
    --ports=ALL \
    --backend-service=be-ilb \
    --backend-service-region=us-west1
API
Delete the forwarding rule by making a DELETE request to the forwardingRules.delete method.
DELETE https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/forwardingRules/fr-ilb
Create the forwarding rule by making a POST request to the forwardingRules.insert method.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/forwardingRules

{
  "name": "fr-ilb",
  "IPAddress": "10.1.2.99",
  "IPProtocol": "TCP",
  "allPorts": true,
  "loadBalancingScheme": "INTERNAL",
  "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "backendService": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices/be-ilb",
  "networkTier": "PREMIUM"
}

Test the traffic on all ports setup
Connect to the client VM instance and test HTTP and HTTPS connections.
Connect to the client VM:
gcloud compute ssh vm-client --zone=us-west1-a
Test HTTP connectivity on all four ports:
curl http://10.1.2.99
curl http://10.1.2.99:8008
curl http://10.1.2.99:8080
curl http://10.1.2.99:8088
Test HTTPS connectivity on ports 443 and 8443. The --insecure flag is required because each Apache server in the example setup uses a self-signed certificate.

curl https://10.1.2.99 --insecure
curl https://10.1.2.99:8443 --insecure
Observe that HTTP requests (on all four ports) and HTTPS requests (on both ports) are distributed among all of the backend VMs.
Accept traffic on multiple ports using two forwarding rules
When you created this example load balancer's forwarding rule, you configured ports 80, 8008, 8080, and 8088. The startup script that installs Apache also configures it to accept HTTPS connections on ports 443 and 8443.
An alternative strategy to configuring a single forwarding rule to accept traffic on all ports is to create multiple forwarding rules, each supporting five or fewer ports.
This procedure shows you how to replace the example load balancer's forwarding rule with two forwarding rules, one handling traffic on ports 80, 8008, 8080, and 8088, and the other handling traffic on ports 443 and 8443.
For more information about when to use this setup, see Internal passthrough Network Load Balancers and forwarding rules with a common IP address.
Console
In the Google Cloud console, go to the Forwarding rules page.
In the Name column, click fr-ilb, and then click Delete.
In the Google Cloud console, go to the Load balancing page.
In the Name column, click be-ilb.
Click Edit.
Click Frontend configuration.
Click Add frontend IP and port.
In the New Frontend IP and port section, do the following:
- For Name, enter fr-ilb-http.
- For Subnetwork, select lb-subnet.
- For Internal IP purpose, select Shared.
- From the IP address list, select Create IP address, enter the following information, and click Reserve:
  - Name: internal-10-1-2-99
  - Static IP address: Let me choose
  - Custom IP address: 10.1.2.99
- For Ports, select Multiple, and then in Port numbers, enter 80, 8008, 8080, and 8088.
- Click Done.
Click Add frontend IP and port.
In the New Frontend IP and port section, do the following:
- For Name, enter fr-ilb-https.
- For Subnetwork, select lb-subnet.
- For Internal IP purpose, select Shared.
- From the IP address list, select internal-10-1-2-99.
- For Ports, select Multiple, and then in Port numbers, enter 443 and 8443.
- Click Done.
Click Review and finalize, and review your load balancer configuration settings.
Click Update.
gcloud
Delete your existing forwarding rule, fr-ilb.

gcloud compute forwarding-rules delete fr-ilb \
    --region=us-west1
Create a static (reserved) internal IP address for 10.1.2.99 and set its --purpose flag to SHARED_LOADBALANCER_VIP. The --purpose flag is required so that two internal forwarding rules can use the same internal IP address.

gcloud compute addresses create internal-10-1-2-99 \
    --region=us-west1 \
    --subnet=lb-subnet \
    --addresses=10.1.2.99 \
    --purpose=SHARED_LOADBALANCER_VIP
Create two replacement forwarding rules with the following parameters:
gcloud compute forwarding-rules create fr-ilb-http \
    --region=us-west1 \
    --load-balancing-scheme=internal \
    --network=lb-network \
    --subnet=lb-subnet \
    --address=10.1.2.99 \
    --ip-protocol=TCP \
    --ports=80,8008,8080,8088 \
    --backend-service=be-ilb \
    --backend-service-region=us-west1

gcloud compute forwarding-rules create fr-ilb-https \
    --region=us-west1 \
    --load-balancing-scheme=internal \
    --network=lb-network \
    --subnet=lb-subnet \
    --address=10.1.2.99 \
    --ip-protocol=TCP \
    --ports=443,8443 \
    --backend-service=be-ilb \
    --backend-service-region=us-west1
API
Delete the forwarding rule by making a DELETE request to the forwardingRules.delete method.
DELETE https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/forwardingRules/fr-ilb
Create a static (reserved) internal IP address for 10.1.2.99 and set its purpose to SHARED_LOADBALANCER_VIP by making a POST request to the addresses.insert method.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/addresses

{
  "name": "internal-10-1-2-99",
  "address": "10.1.2.99",
  "prefixLength": 32,
  "addressType": "INTERNAL",
  "purpose": "SHARED_LOADBALANCER_VIP",
  "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet"
}

Create two forwarding rules by making two POST requests to the forwardingRules.insert method.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/forwardingRules

{
  "name": "fr-ilb-http",
  "IPAddress": "10.1.2.99",
  "IPProtocol": "TCP",
  "ports": [ "80", "8008", "8080", "8088" ],
  "loadBalancingScheme": "INTERNAL",
  "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "backendService": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices/be-ilb",
  "networkTier": "PREMIUM"
}

{
  "name": "fr-ilb-https",
  "IPAddress": "10.1.2.99",
  "IPProtocol": "TCP",
  "ports": [ "443", "8443" ],
  "loadBalancingScheme": "INTERNAL",
  "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "backendService": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices/be-ilb",
  "networkTier": "PREMIUM"
}

Test the traffic on multiple ports setup
Connect to the client VM instance and test HTTP and HTTPS connections.
Connect to the client VM:
gcloud compute ssh vm-client --zone=us-west1-a
Test HTTP connectivity on all four ports:
curl http://10.1.2.99
curl http://10.1.2.99:8008
curl http://10.1.2.99:8080
curl http://10.1.2.99:8088
Test HTTPS connectivity on ports 443 and 8443. The --insecure flag is required because each Apache server in the example setup uses a self-signed certificate.

curl https://10.1.2.99 --insecure
curl https://10.1.2.99:8443 --insecure
Observe that HTTP requests (on all four ports) and HTTPS requests (on both ports) are distributed among all of the backend VMs.
Use session affinity
The example configuration creates a backend service without session affinity.
This procedure shows you how to update the backend service for the example internal passthrough Network Load Balancer so that it uses session affinity based on a hash created from the client's IP address and the IP address of the load balancer's internal forwarding rule.
For supported session affinity types, see Session affinity options.
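Conceptually, CLIENT_IP affinity narrows the default 5-tuple hash (which includes source and destination ports) down to a 2-tuple of client IP address and forwarding-rule IP address, so all connections from one client consistently select the same healthy backend. The following sketch uses a made-up hash-based picker, not the load balancer's actual algorithm:

```python
import hashlib

# Hypothetical backend names and VIP matching this guide's example.
BACKENDS = ["vm-a1", "vm-a2", "vm-c1", "vm-c2"]
LB_VIP = "10.1.2.99"

def pick_backend(client_ip: str, src_port: int, affinity: str) -> str:
    """Toy backend selection; the real load balancer uses its own hash."""
    if affinity == "CLIENT_IP":
        # 2-tuple: client IP and load balancer IP only.
        key = f"{client_ip}|{LB_VIP}"
    else:  # "NONE": the default 5-tuple hash includes the ports too.
        key = f"{client_ip}|{src_port}|{LB_VIP}|80|TCP"
    digest = hashlib.sha256(key.encode()).hexdigest()
    return BACKENDS[int(digest, 16) % len(BACKENDS)]

# With CLIENT_IP affinity, every connection from one client maps to the
# same backend regardless of the client's ephemeral source port.
choices = {pick_backend("10.1.2.10", port, "CLIENT_IP")
           for port in range(40000, 40050)}
print(len(choices))  # 1
```

With the default (no affinity), the same 50 connections would hash to different backends because each new source port changes the key.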
Note: For internal UDP load balancers, setting session affinity is supported in the gcloud CLI and the API. You can't set session affinity for UDP traffic by using the Google Cloud console.

Console
In the Google Cloud console, go to the Load balancing page.
Click be-ilb (the name of the backend service that you created for this example) and click Edit.
On the Edit internal passthrough Network Load Balancer page, click Backend configuration.
From the Session affinity list, select Client IP.
Click Update.
gcloud
Use the following gcloud command to update the be-ilb backend service, specifying client IP session affinity:
gcloud compute backend-services update be-ilb \
    --region=us-west1 \
    --session-affinity CLIENT_IP
API
Make a PATCH request to the regionBackendServices/patch method.
PATCH https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices/be-ilb

{
  "sessionAffinity": "CLIENT_IP"
}

Configure a connection tracking policy
This section shows you how to update the backend service to change the loadbalancer's default connection tracking policy.
A connection tracking policy includes settings such as the tracking mode, the connection persistence behavior on unhealthy backends, and the idle timeout.
Note: You cannot use the Google Cloud console to configure a connection tracking policy. Use the Google Cloud CLI or the REST API instead.

gcloud
Use the following gcloud compute backend-services command to update the connection tracking policy for the backend service:
gcloud compute backend-services update BACKEND_SERVICE \
    --region=REGION \
    --tracking-mode=TRACKING_MODE \
    --connection-persistence-on-unhealthy-backends=CONNECTION_PERSISTENCE_BEHAVIOR \
    --idle-timeout-sec=IDLE_TIMEOUT_VALUE
Replace the placeholders with valid values:
- BACKEND_SERVICE: the backend service that you're updating
- REGION: the region of the backend service that you're updating
- TRACKING_MODE: the connection tracking mode to be used for incoming packets; for the list of supported values, see Tracking mode
- CONNECTION_PERSISTENCE_BEHAVIOR: the connection persistence behavior when backends are unhealthy; for the list of supported values, see Connection persistence on unhealthy backends
- IDLE_TIMEOUT_VALUE: the number of seconds that a connection tracking table entry must be maintained after the load balancer processes the last packet that matched the entry
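The idle timeout can be pictured as the lifetime of a per-connection entry in a tracking table: the entry keeps a connection pinned to its backend until no packet has matched it for the configured number of seconds. The class below is an illustrative model, not the load balancer's actual data structure:

```python
# Toy connection tracking table keyed by 5-tuple, with an idle timeout.

class ConnTrackTable:
    def __init__(self, idle_timeout_sec: int):
        self.idle_timeout_sec = idle_timeout_sec
        self.entries = {}  # 5-tuple -> (backend, time_of_last_packet)

    def on_packet(self, five_tuple, now, pick_backend):
        entry = self.entries.get(five_tuple)
        if entry and now - entry[1] <= self.idle_timeout_sec:
            backend = entry[0]        # still tracked: keep the same backend
        else:
            backend = pick_backend()  # new or expired entry: choose again
        self.entries[five_tuple] = (backend, now)
        return backend

table = ConnTrackTable(idle_timeout_sec=600)
ft = ("10.1.2.10", 44321, "10.1.2.99", 80, "TCP")
first = table.on_packet(ft, now=0, pick_backend=lambda: "vm-a1")
# A packet 500 s later still matches the entry, so the backend is kept
# even if a fresh selection would now choose differently:
second = table.on_packet(ft, now=500, pick_backend=lambda: "vm-c1")
# After more than 600 idle seconds, the entry has expired:
third = table.on_packet(ft, now=1200, pick_backend=lambda: "vm-c1")
print(first, second, third)  # vm-a1 vm-a1 vm-c1
```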
Use zonal affinity
Preview
This product or feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of theService Specific Terms. Pre-GA products and features are available "as is" and might have limited support. For more information, see thelaunch stage descriptions.
This section shows you how to configure an internal passthrough Network Load Balancer to enable zonal affinity on the backend service of the load balancer. By default, zonal affinity is disabled. For more information, see Zonal affinity for internal passthrough Network Load Balancers.
Use the following command to configure the ZONAL_AFFINITY_STAY_WITHIN_ZONE zonal affinity option. For more information about how this option works, see How ZONAL_AFFINITY_STAY_WITHIN_ZONE works.

gcloud beta compute backend-services update be-ilb \
    --zonal-affinity-spillover=ZONAL_AFFINITY_STAY_WITHIN_ZONE \
    --region=us-west1
Use the following command to configure ZONAL_AFFINITY_SPILL_CROSS_ZONE zonal affinity with the default 0.0 spillover ratio. For more information about how this option works, see the How ZONAL_AFFINITY_SPILL_CROSS_ZONE and spillover ratio work and Zero spillover ratio sections.

gcloud beta compute backend-services update be-ilb \
    --zonal-affinity-spillover=ZONAL_AFFINITY_SPILL_CROSS_ZONE \
    --region=us-west1
Use the following command to configure ZONAL_AFFINITY_SPILL_CROSS_ZONE zonal affinity with a 30% spillover ratio. For more information about how this option works, see the How ZONAL_AFFINITY_SPILL_CROSS_ZONE and spillover ratio work and Nonzero spillover ratio sections.

gcloud beta compute backend-services update be-ilb \
    --zonal-affinity-spillover=ZONAL_AFFINITY_SPILL_CROSS_ZONE \
    --zonal-affinity-spillover-ratio=0.3 \
    --region=us-west1
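As a rough mental model (a simplifying assumption for illustration, not the documented algorithm), you can think of the spillover ratio as a healthy-endpoint threshold for the client's zone: while the fraction of healthy in-zone endpoints stays at or above the ratio, traffic stays in the zone; below it, traffic spills across zones. With a ratio of 0.0, traffic spills only when the zone has no healthy endpoints at all:

```python
def stays_in_zone(healthy_in_zone: int, total_in_zone: int,
                  spillover_ratio: float) -> bool:
    """Illustrative ZONAL_AFFINITY_SPILL_CROSS_ZONE decision (simplified).

    Assumption: traffic stays in the client's zone while the healthy
    fraction of in-zone endpoints is at least the spillover ratio.
    """
    if total_in_zone == 0 or healthy_in_zone == 0:
        return False  # nothing healthy in the zone: must go cross-zone
    return (healthy_in_zone / total_in_zone) >= spillover_ratio

print(stays_in_zone(5, 10, 0.3))  # 50% healthy >= 30%: stay in zone
print(stays_in_zone(2, 10, 0.3))  # 20% healthy < 30%: spill cross-zone
print(stays_in_zone(1, 10, 0.0))  # ratio 0.0: stay while anything is healthy
```

See the linked zonal affinity documentation for the precise spillover behavior.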
Create a forwarding rule in another subnet
This procedure creates a second IP address and forwarding rule in a different subnet to demonstrate that you can create multiple forwarding rules for one internal passthrough Network Load Balancer. The region for the forwarding rule must match the region of the backend service.
Subject to firewall rules, clients in any subnet in the region can contact either internal passthrough Network Load Balancer IP address.
Console
Add the second subnet
In the Google Cloud console, go to the VPC networks page.
Click lb-network.
In the Subnets section, do the following:
- Click Add subnet.
- In the New subnet section, enter the following information:
  - Name: second-subnet
  - Region: us-west1
  - IP address range: 10.5.6.0/24
- Click Add.
Add the second forwarding rule
In the Google Cloud console, go to the Load balancing page.
Click the be-ilb load balancer and click Edit.
Click Frontend configuration.
Click Add frontend IP and port.
In the New Frontend IP and port section, set the following fields and click Done:
- Name: fr-ilb-2
- IP version: IPv4
- Subnetwork: second-subnet
- Internal IP: ip-ilb
- Ports: 80 and 443
Verify that there is a blue check mark next to Frontend configuration before continuing.
Click Review and finalize, and review your load balancer configuration settings.
Click Create.
gcloud
Create a second subnet in the lb-network network in the us-west1 region:

gcloud compute networks subnets create second-subnet \
    --network=lb-network \
    --range=10.5.6.0/24 \
    --region=us-west1
Create a second forwarding rule for ports 80 and 443. The other parameters for this rule, including IP address and backend service, are the same as for the primary forwarding rule, fr-ilb.

gcloud compute forwarding-rules create fr-ilb-2 \
    --region=us-west1 \
    --load-balancing-scheme=internal \
    --network=lb-network \
    --subnet=second-subnet \
    --address=10.5.6.99 \
    --ip-protocol=TCP \
    --ports=80,443 \
    --backend-service=be-ilb \
    --backend-service-region=us-west1
API
Make a POST request to the subnetworks.insert method.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks

{
  "name": "second-subnet",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "ipCidrRange": "10.5.6.0/24",
  "privateIpGoogleAccess": false
}

Create the forwarding rule by making a POST request to the forwardingRules.insert method.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/forwardingRules

{
  "name": "fr-ilb-2",
  "IPAddress": "10.5.6.99",
  "IPProtocol": "TCP",
  "ports": [ "80", "443" ],
  "loadBalancingScheme": "INTERNAL",
  "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/second-subnet",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "backendService": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices/be-ilb",
  "networkTier": "PREMIUM"
}

Test the new forwarding rule
Connect to the client VM instance and test HTTP and HTTPS connections to the IP addresses.
Connect to the client VM:
gcloud compute ssh vm-client --zone=us-west1-a
Test HTTP connectivity to the IP addresses:
curl http://10.1.2.99
curl http://10.5.6.99
Test HTTPS connectivity. Use of --insecure is required because the Apache server configuration in the example setup uses self-signed certificates.

curl https://10.1.2.99 --insecure
curl https://10.5.6.99 --insecure
Observe that requests are handled by all of the backend VMs, regardless of the protocol (HTTP or HTTPS) or IP address used.
Use backend subsetting
The example configuration creates a backend service without subsetting.
This procedure shows you how to enable subsetting on the backend service for the example internal passthrough Network Load Balancer so that the deployment can scale to a larger number of backend instances.
Caution: Enabling backend subsetting might be temporarily disruptive and might break existing TCP connections. You should only enable subsetting if you need to support more than 250 backend VMs on a single load balancer.
Note: This feature does not support IPv6 addresses.
For more information about this use case, see backend subsetting.
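The idea behind consistent-hash subsetting is that each client consistently programs only a small, stable subset of the backends instead of all of them, which keeps per-client state bounded as the backend count grows. The following sketch is a toy illustration of that idea; the real subsetting algorithm differs:

```python
import hashlib

def backend_subset(backends, client_key: str, subset_size: int):
    """Toy consistent-hash subsetting.

    Deterministically rank all backends by a hash of (backend, client_key)
    and keep the top subset_size. This only illustrates the concept; it is
    not the load balancer's actual algorithm.
    """
    ranked = sorted(
        backends,
        key=lambda b: hashlib.sha256(f"{b}|{client_key}".encode()).hexdigest(),
    )
    return ranked[:subset_size]

backends = [f"vm-{i}" for i in range(20)]
subset = backend_subset(backends, "client-vm-1", subset_size=4)
print(len(subset))  # 4 — the client only tracks 4 of the 20 backends
# Deterministic: the same client always computes the same subset.
print(subset == backend_subset(backends, "client-vm-1", subset_size=4))  # True
```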
gcloud
Use the following gcloud command to update the be-ilb backend service, specifying the subsetting policy:
gcloud compute backend-services update be-ilb \
    --region=us-west1 \
    --subsetting-policy=CONSISTENT_HASH_SUBSETTING
API
Make a PATCH request to the regionBackendServices/patch method.
PATCH https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices/be-ilb

{
  "subsetting": { "policy": "CONSISTENT_HASH_SUBSETTING" }
}

Create a load balancer for Packet Mirroring
Packet Mirroring lets you copy and collect packet data from specific instances in a VPC. The collected data can help you detect security threats and monitor application performance.
Packet Mirroring requires an internal passthrough Network Load Balancer to balance traffic to an instance group of collector destinations. To create an internal passthrough Network Load Balancer for Packet Mirroring, follow these steps.
Console
Start your configuration
In the Google Cloud console, go to the Load balancing page.
- Click Create load balancer.
- For Type of load balancer, select Network Load Balancer (TCP/UDP/SSL) and click Next.
- For Proxy or passthrough, select Passthrough load balancer and click Next.
- For Public facing or internal, select Internal and click Next.
- Click Configure.
Basic configuration
- For Load balancer name, enter a name.
- For Region, select the region of the VM instances where you want to mirror packets.
- For Network, select the network where you want to mirror packets.
- Click Backend configuration.
- In the New Backend section, for Instance group, select the instance group to forward packets to.
- In the Health check list, click Create a health check, and then enter the following information:
  - In the Name field, enter a name for the health check.
  - In the Protocol list, select HTTP.
  - In the Port field, enter 80.
- Click Create.
- Click Frontend configuration.
- In the New Frontend IP and port section, do the following:
  - For Name, enter a name.
  - For Subnetwork, select a subnetwork in the same region as the instances to mirror.
  - For Ports, select All.
  - Click Advanced configurations and select the Enable this load balancer for packet mirroring checkbox.
  - Click Done.
- Click Create.
gcloud
Create a new regional HTTP health check to test HTTP connectivity to an instance group on port 80:
gcloud compute health-checks create http HEALTH_CHECK_NAME \
    --region=REGION \
    --port=80
Replace the following:
- HEALTH_CHECK_NAME: the name of the health check.
- REGION: the region of the VM instances that you want to mirror packets for.
Create a backend service for HTTP traffic:
gcloud compute backend-services create COLLECTOR_BACKEND_SERVICE \
    --region=REGION \
    --health-checks-region=REGION \
    --health-checks=HEALTH_CHECK_NAME \
    --load-balancing-scheme=internal \
    --protocol=tcp
Replace the following:
- COLLECTOR_BACKEND_SERVICE: the name of the backend service.
- REGION: the region of the VM instances where you want to mirror packets.
- HEALTH_CHECK_NAME: the name of the health check.
Add an instance group to the backend service:
gcloud compute backend-services add-backend COLLECTOR_BACKEND_SERVICE \
    --region=REGION \
    --instance-group=INSTANCE_GROUP \
    --instance-group-zone=ZONE
Replace the following:
- COLLECTOR_BACKEND_SERVICE: the name of the backend service.
- REGION: the region of the instance group.
- INSTANCE_GROUP: the name of the instance group.
- ZONE: the zone of the instance group.
Create a forwarding rule for the backend service:
gcloud compute forwarding-rules create FORWARDING_RULE_NAME \
    --region=REGION \
    --network=NETWORK \
    --subnet=SUBNET \
    --backend-service=COLLECTOR_BACKEND_SERVICE \
    --load-balancing-scheme=internal \
    --ip-protocol=TCP \
    --ports=all \
    --is-mirroring-collector
Replace the following:
- FORWARDING_RULE_NAME: the name of the forwarding rule.
- REGION: the region for the forwarding rule.
- NETWORK: the network for the forwarding rule.
- SUBNET: a subnetwork in the region of the VMs where you want to mirror packets.
- COLLECTOR_BACKEND_SERVICE: the backend service for this load balancer.
Note: You can use the L3_DEFAULT protocol in your forwarding rule to configure packet mirroring.

What's next
- See Internal passthrough Network Load Balancer overview for important fundamentals.
- See Failover concepts for internal passthrough Network Load Balancers for important information about failover.
- See Internal load balancing and DNS names for available DNS name options that your load balancer can use.
- See Configuring failover for internal passthrough Network Load Balancers for configuration steps and an example internal passthrough Network Load Balancer failover configuration.
- See Internal passthrough Network Load Balancer logging and monitoring for information about configuring Logging and Monitoring for internal passthrough Network Load Balancers.
- See Internal passthrough Network Load Balancers and connected networks for information about accessing internal passthrough Network Load Balancers from peer networks connected to your VPC network.
- See Troubleshoot internal passthrough Network Load Balancers for information about how to troubleshoot issues with your internal passthrough Network Load Balancer.
- Clean up the load balancer setup.
Appendix: IPv6 Bash script primer
This section provides a brief description of the different networking commands related to the Bash startup script that is used to install an IPv6 TCP server on the backend VMs. You don't need to run these steps again. They are outlined here only to provide context and aid understanding.
Show all IPv6 routes.
ip -6 route show table all
The output is as follows:
fd20:307:120c:2000::/64 via fe80::57:2ff:fe36:ffbe dev ens4 proto ra metric 100 expires 62sec pref medium
fe80::/64 dev ens4 proto kernel metric 256 pref medium
default via fe80::57:2ff:fe36:ffbe dev ens4 proto ra metric 100 expires 62sec mtu 1460 pref medium
local ::1 dev lo table local proto kernel metric 0 pref medium
local fd20:307:120c:2000:0:b:: dev ens4 table local proto kernel metric 0 pref medium
local fe80::56:24ff:feb1:59b3 dev ens4 table local proto kernel metric 0 pref medium
multicast ff00::/8 dev ens4 table local proto kernel metric 256 pref medium
From the output of the previous step, identify the following:
- Link-local IPv6 address (starting with fe80::/10): in the example output, the link-local address is fe80::57:2ff:fe36:ffbe. This link-local address is used in the default route that is defined in routing table 1. This default route is created in step 2.
- /64 subnet: this subnet is referenced in the source-based policy routing rule in step 3.
Add a custom default route in routing table 1.
The following command sends packets to the gateway using the network interface named ens4.

sudo ip route add default via GATEWAY_ADDRESS dev ens4 table 1
After you run this command, a default route is added to a custom routing table (table 1), pointing to the gateway fe80::57:2ff:fe36:ffbe through the ens4 interface.

If you were to run the ip -6 route show table all command again, the output is as follows:

default via fe80::57:2ff:fe36:ffbe dev ens4 table 1 metric 1024 pref medium
fd20:307:120c:2000::/64 via fe80::57:2ff:fe36:ffbe dev ens4 proto ra metric 100 expires 89sec pref medium
fe80::/64 dev ens4 proto kernel metric 256 pref medium
default via fe80::57:2ff:fe36:ffbe dev ens4 proto ra metric 100 expires 89sec mtu 1460 pref medium
local ::1 dev lo table local proto kernel metric 0 pref medium
local fd20:307:120c:2000:0:1::/96 dev ens4 table local proto 66 metric 1024 pref medium
local fd20:307:120c:2000:0:b:: dev ens4 table local proto kernel metric 0 pref medium
local fe80::56:24ff:feb1:59b3 dev ens4 table local proto kernel metric 0 pref medium
multicast ff00::/8 dev ens4 table local proto kernel metric 256 pref medium
Add a source-based policy routing rule.
The following command adds a rule to route all outgoing packets originatingfrom a specified subnet. If the source address matches the subnet, the systemuses the default route defined in table 1 to forward the traffic.
sudo ip -6 rule add from SUBNET_RANGE table 1
To view the policy routing rules list, run the following command.
ip -6 rule show
The output is as follows:
0:      from all lookup local
32765:  from fd20:307:120c:2000::/64 lookup 1
32766:  from all lookup main
The line with from fd20:... lookup 1 is the rule that you added. It tells the kernel to use routing table 1 for traffic originating from that subnet.

Allow the server to bind to a non-local IPv6 address (the VIP of the load balancer).
sudo sysctl -w net.ipv6.ip_nonlocal_bind=1
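The source-based rule lookup from step 3 can be modeled with Python's ipaddress module: a packet whose source address falls inside the configured subnet is routed with table 1, and everything else falls through to the main table. The subnet constant below reuses the example /64 range from the output in this appendix:

```python
import ipaddress

# Example /64 range from this appendix's route output.
POLICY_SUBNET = ipaddress.ip_network("fd20:307:120c:2000::/64")

def lookup_table(source_addr: str) -> str:
    """Toy model of the 'ip -6 rule' lookup: source-based match first."""
    if ipaddress.ip_address(source_addr) in POLICY_SUBNET:
        return "table 1"   # matches the 'from <subnet> lookup 1' rule
    return "main"          # falls through to the main routing table

print(lookup_table("fd20:307:120c:2000:0:1::5"))  # table 1
print(lookup_table("fd20:999::1"))                # main
```

This mirrors how the kernel evaluates rules in priority order: the source-based rule at priority 32765 wins for in-subnet traffic before the catch-all main-table rule at 32766.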
Except as otherwise noted, the content of this page is licensed under theCreative Commons Attribution 4.0 License, and code samples are licensed under theApache 2.0 License. For details, see theGoogle Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-12-15 UTC.