Set up an internal passthrough Network Load Balancer with zonal NEGs

This guide shows you how to deploy an internal passthrough Network Load Balancer with zonal network endpoint group (NEG) backends. Zonal NEGs are zonal resources that represent collections of either IP addresses or IP address/port combinations for Google Cloud resources within a single subnet. NEGs let you create logical groupings of IP addresses or IP address/port combinations that represent software services instead of entire VMs.

Before following this guide, familiarize yourself with internal passthrough Network Load Balancers and zonal NEGs.

Internal passthrough Network Load Balancers only support zonal NEGs with GCE_VM_IP endpoints.

Permissions

To follow this guide, you need to create instances and modify a network in a project. You should be either a project owner or editor, or you should have all of the following Compute Engine IAM roles:

| Task | Required role |
| --- | --- |
| Create networks, subnets, and load balancer components | Network Admin |
| Add and remove firewall rules | Security Admin |
| Create instances | Compute Instance Admin |
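If you need to grant one of these roles, a project owner can do so with `gcloud projects add-iam-policy-binding`. The project ID and principal below are placeholders, not values from this guide:

```shell
# Grant the Network Admin role (example project ID and user are placeholders).
gcloud projects add-iam-policy-binding example-project-id \
    --member="user:alex@example.com" \
    --role="roles/compute.networkAdmin"

# The other roles in the table correspond to:
#   roles/compute.securityAdmin
#   roles/compute.instanceAdmin.v1
```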


Setup overview

This guide shows you how to configure and test an internal passthrough Network Load Balancer with GCE_VM_IP zonal NEG backends. The steps in this section describe how to configure the following:

  1. A sample VPC network called lb-network with a custom subnet
  2. Firewall rules that allow incoming connections to backend VMs
  3. Four VMs:
    • VMs vm-a1 and vm-a2 in zone us-west1-a
    • VMs vm-c1 and vm-c2 in zone us-west1-c
  4. Two backend zonal NEGs, neg-a in zone us-west1-a and neg-c in zone us-west1-c. Each NEG has the following endpoints:
    • neg-a contains these two endpoints:
      • Internal IP address of VM vm-a1
      • Internal IP address of VM vm-a2
    • neg-c contains these two endpoints:
      • Internal IP address of VM vm-c1
      • Internal IP address of VM vm-c2
  5. One client VM (vm-client) in us-west1-a to test connections
  6. The following internal passthrough Network Load Balancer components:
    • An internal backend service in the us-west1 region to manage connection distribution to the two zonal NEGs
    • An internal forwarding rule and internal IP address for the frontend of the load balancer

The architecture for this example looks like this:

Figure: Internal passthrough Network Load Balancer configuration with zonal NEGs

Configure a network, region, and subnet

The example internal passthrough Network Load Balancer described on this page is created in a custom mode VPC network named lb-network.

This example's backend VMs, zonal NEGs, and load balancer components are located in this region and subnet:

  • Region: us-west1
  • Subnet: lb-subnet, with primary IP address range 10.1.2.0/24

Note: You can change the name of the network, the region, and the parameters for the subnet; however, subsequent steps in this guide use the network, region, and subnet parameters as outlined here.

To create the example network and subnet, follow these steps.

Console

  1. Go to the VPC networks page in the Google Cloud console.
    Go to the VPC network page
  2. Click Create VPC network.
  3. For Name, enter lb-network.
  4. In the Subnets section:
    • Set the Subnet creation mode to Custom.
    • In the New subnet section, enter the following information:
      • Name: lb-subnet
      • Region: us-west1
      • IP address range: 10.1.2.0/24
      • Click Done.
  5. Click Create.

gcloud

  1. Create the custom VPC network:

    gcloud compute networks create lb-network --subnet-mode=custom

  2. Within the lb-network network, create a subnet for backend VMs in the us-west1 region:

    gcloud compute networks subnets create lb-subnet \
        --network=lb-network \
        --range=10.1.2.0/24 \
        --region=us-west1
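To confirm the network and subnet were created as intended, you can describe the subnet and check its range and region. This verification step is a suggestion, not part of the required setup:

```shell
# Verify the subnet's name, primary range, and parent network.
gcloud compute networks subnets describe lb-subnet \
    --region=us-west1 \
    --format="value(name,ipCidrRange,network)"
```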

Configure firewall rules

This example uses the following firewall rules:

  • fw-allow-lb-access: An ingress rule, applicable to all targets in the VPC network, allowing traffic from sources in the 10.1.2.0/24 range. This rule allows incoming traffic from any client located in lb-subnet.

  • fw-allow-ssh: An ingress rule, applicable to the instances being load balanced, that allows incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP range for this rule; for example, you can specify just the IP ranges of the systems from which you will initiate SSH sessions. This example uses the target tag allow-ssh to identify the VMs to which it should apply.

  • fw-allow-health-check: An ingress rule, applicable to the instances being load balanced, that allows traffic from the Google Cloud health checking systems (130.211.0.0/22 and 35.191.0.0/16). This example uses the target tag allow-health-check to identify the VMs to which it should apply.

Without these firewall rules, the default deny ingress rule blocks incoming traffic to the backend instances.

Console

  1. In the Google Cloud console, go to theFirewall policies page.
    Go to Firewall policies
  2. Click Create firewall rule and enter the following information to create the rule to allow subnet traffic:
    • Name: fw-allow-lb-access
    • Network: lb-network
    • Priority: 1000
    • Direction of traffic: ingress
    • Action on match: allow
    • Targets: All instances in the network
    • Source filter: IPv4 ranges
    • Source IPv4 ranges: 10.1.2.0/24
    • Protocols and ports: Allow all
  3. Click Create.
  4. Click Create firewall rule again to create the rule to allow incoming SSH connections:
    • Name: fw-allow-ssh
    • Network: lb-network
    • Priority: 1000
    • Direction of traffic: ingress
    • Action on match: allow
    • Targets: Specified target tags
    • Target tags: allow-ssh
    • Source filter: IPv4 ranges
    • Source IPv4 ranges: 0.0.0.0/0
    • Protocols and ports: Choose Specified protocols and ports, then type: tcp:22
  5. Click Create.
  6. Click Create firewall rule a third time to create the rule to allow Google Cloud health checks:
    • Name: fw-allow-health-check
    • Network: lb-network
    • Priority: 1000
    • Direction of traffic: ingress
    • Action on match: allow
    • Targets: Specified target tags
    • Target tags: allow-health-check
    • Source filter: IPv4 ranges
    • Source IPv4 ranges: 130.211.0.0/22 and 35.191.0.0/16
    • Protocols and ports: Allow all
  7. Click Create.

gcloud

  1. Create the fw-allow-lb-access firewall rule to allow communication from within the subnet:

    gcloud compute firewall-rules create fw-allow-lb-access \
        --network=lb-network \
        --action=allow \
        --direction=ingress \
        --source-ranges=10.1.2.0/24 \
        --rules=tcp,udp,icmp
  2. Create the fw-allow-ssh firewall rule to allow SSH connectivity to VMs with the network tag allow-ssh. When you omit source-ranges, Google Cloud interprets the rule to mean any source.

    gcloud compute firewall-rules create fw-allow-ssh \
        --network=lb-network \
        --action=allow \
        --direction=ingress \
        --target-tags=allow-ssh \
        --rules=tcp:22
  3. Create the fw-allow-health-check rule to allow Google Cloud health checks.

    gcloud compute firewall-rules create fw-allow-health-check \
        --network=lb-network \
        --action=allow \
        --direction=ingress \
        --target-tags=allow-health-check \
        --source-ranges=130.211.0.0/22,35.191.0.0/16 \
        --rules=tcp,udp,icmp
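After creating the rules, you can list them to confirm all three exist on lb-network. This check is an optional suggestion:

```shell
# List the ingress rules on lb-network with their sources and target tags.
gcloud compute firewall-rules list \
    --filter="network:lb-network" \
    --format="table(name,direction,sourceRanges.list(),targetTags.list())"
```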

Create NEG backends

To demonstrate the regional nature of internal passthrough Network Load Balancers, this example uses two zonal NEG backends, neg-a and neg-c, in zones us-west1-a and us-west1-c. Traffic is load balanced across both NEGs, and across the endpoints within each NEG.

Create VMs

To support this example, each of the four VMs runs an Apache web server thatlistens on the following TCP ports: 80, 8008, 8080, 8088, 443, and 8443.

Each VM is assigned an internal IP address in lb-subnet and an ephemeral external (public) IP address. You can remove the external IP addresses later.

External IP addresses are not required for backend VMs; however, they are useful for this example because they permit the VMs to download Apache from the internet, and they let you connect using SSH. By default, Apache is configured to bind to any IP address. Internal passthrough Network Load Balancers deliver packets by preserving the destination IP.

Ensure that server software running on your VMs is listening on the IP address of the load balancer's internal forwarding rule.

For instructional simplicity, these backend VMs run Debian GNU/Linux 12.

Console

Create VMs

  1. Go to the VM instances page in the Google Cloud console.
    Go to the VM instances page
  2. Repeat the following steps to create four VMs, using the following name and zone combinations.
    • Name: vm-a1, zone: us-west1-a
    • Name: vm-a2, zone: us-west1-a
    • Name: vm-c1, zone: us-west1-c
    • Name: vm-c2, zone: us-west1-c
  3. Click Create instance.
  4. Set the Name as indicated in step 2.
  5. For the Region, choose us-west1, and choose a Zone as indicated in step 2.
  6. In the Boot disk section, ensure that Debian GNU/Linux 12 (bookworm) is selected for the boot disk options. Click Choose to change the image if necessary.
  7. Click Advanced options and make the following changes:

    • Click Networking and add the following Network tags: allow-ssh and allow-health-check
    • Click Edit under Network interfaces, make the following changes, and then click Done:
      • Network: lb-network
      • Subnet: lb-subnet
      • Primary internal IP: Ephemeral (automatic)
      • External IP: Ephemeral
    • Click Management. In the Startup script field, copy and paste the following script contents. The script contents are identical for all four VMs:

      #! /bin/bash
      if [ -f /etc/startup_script_completed ]; then
      exit 0
      fi
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      file_ports="/etc/apache2/ports.conf"
      file_http_site="/etc/apache2/sites-available/000-default.conf"
      file_https_site="/etc/apache2/sites-available/default-ssl.conf"
      http_listen_prts="Listen 80\nListen 8008\nListen 8080\nListen 8088"
      http_vh_prts="*:80 *:8008 *:8080 *:8088"
      https_listen_prts="Listen 443\nListen 8443"
      https_vh_prts="*:443 *:8443"
      vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://metadata.google.internal/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | \
      tee /var/www/html/index.html
      prt_conf="$(cat "$file_ports")"
      prt_conf_2="$(echo "$prt_conf" | sed "s|Listen 80|${http_listen_prts}|")"
      prt_conf="$(echo "$prt_conf_2" | sed "s|Listen 443|${https_listen_prts}|")"
      echo "$prt_conf" | tee "$file_ports"
      http_site_conf="$(cat "$file_http_site")"
      http_site_conf_2="$(echo "$http_site_conf" | sed "s|*:80|${http_vh_prts}|")"
      echo "$http_site_conf_2" | tee "$file_http_site"
      https_site_conf="$(cat "$file_https_site")"
      https_site_conf_2="$(echo "$https_site_conf" | sed "s|_default_:443|${https_vh_prts}|")"
      echo "$https_site_conf_2" | tee "$file_https_site"
      systemctl restart apache2
      touch /etc/startup_script_completed
  8. Click Create.

gcloud

Create the four VMs by running the following command four times, using these four combinations for [VM-NAME] and [ZONE]. The script contents are identical for all four VMs.

  • [VM-NAME] of vm-a1 and [ZONE] of us-west1-a
  • [VM-NAME] of vm-a2 and [ZONE] of us-west1-a
  • [VM-NAME] of vm-c1 and [ZONE] of us-west1-c
  • [VM-NAME] of vm-c2 and [ZONE] of us-west1-c

    gcloud compute instances create VM-NAME \
        --zone=ZONE \
        --image-family=debian-12 \
        --image-project=debian-cloud \
        --tags=allow-ssh,allow-health-check \
        --subnet=lb-subnet \
        --metadata=startup-script='#! /bin/bash
    if [ -f /etc/startup_script_completed ]; then
    exit 0
    fi
    apt-get update
    apt-get install apache2 -y
    a2ensite default-ssl
    a2enmod ssl
    file_ports="/etc/apache2/ports.conf"
    file_http_site="/etc/apache2/sites-available/000-default.conf"
    file_https_site="/etc/apache2/sites-available/default-ssl.conf"
    http_listen_prts="Listen 80\nListen 8008\nListen 8080\nListen 8088"
    http_vh_prts="*:80 *:8008 *:8080 *:8088"
    https_listen_prts="Listen 443\nListen 8443"
    https_vh_prts="*:443 *:8443"
    vm_hostname="$(curl -H "Metadata-Flavor:Google" \
    http://metadata.google.internal/computeMetadata/v1/instance/name)"
    echo "Page served from: $vm_hostname" | \
    tee /var/www/html/index.html
    prt_conf="$(cat "$file_ports")"
    prt_conf_2="$(echo "$prt_conf" | sed "s|Listen 80|${http_listen_prts}|")"
    prt_conf="$(echo "$prt_conf_2" | sed "s|Listen 443|${https_listen_prts}|")"
    echo "$prt_conf" | tee "$file_ports"
    http_site_conf="$(cat "$file_http_site")"
    http_site_conf_2="$(echo "$http_site_conf" | sed "s|*:80|${http_vh_prts}|")"
    echo "$http_site_conf_2" | tee "$file_http_site"
    https_site_conf="$(cat "$file_https_site")"
    https_site_conf_2="$(echo "$https_site_conf" | sed "s|_default_:443|${https_vh_prts}|")"
    echo "$https_site_conf_2" | tee "$file_https_site"
    systemctl restart apache2
    touch /etc/startup_script_completed'
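Once all four commands have run, you can confirm that the VMs exist in the expected zones with their internal IP addresses. This verification command is a suggestion, not part of the required setup:

```shell
# List the four backend VMs with their zone and internal IP.
gcloud compute instances list \
    --filter="name~'^vm-(a|c)[12]$'" \
    --format="table(name,zone.basename(),networkInterfaces[0].networkIP)"
```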

Create GCE_VM_IP zonal NEGs

The NEGs (neg-a and neg-c) must be created in the same zones as the VMs created in the previous step.

Console

To create a zonal network endpoint group:

  1. Go to the Network Endpoint Groups page in the Google Cloud console.
    Go to the Network Endpoint Groups page
  2. Click Create network endpoint group.
  3. Enter a Name for the zonal NEG: neg-a.
  4. Select the Network endpoint group type: Network endpoint group (Zonal).
  5. Select the Network: lb-network
  6. Select the Subnet: lb-subnet
  7. Select the Zone: us-west1-a
  8. Click Create.
  9. Repeat these steps to create a second zonal NEG called neg-c, in the us-west1-c zone.

Add endpoints to the zonal NEG:

  1. Go to the Network Endpoint Groups page in the Google Cloud console.
    Go to the Network endpoint groups
  2. Click the Name of the first network endpoint group created in the previous step (neg-a). You see the Network endpoint group details page.
  3. In the Network endpoints in this group section, click Add network endpoint. You see the Add network endpoint page.

    1. Click VM instance and select vm-a1 to add its internal IP address as a network endpoint.
    2. Click Create.
    3. Click Add network endpoint again and, under VM instance, select vm-a2.
    4. Click Create.
  4. Click the Name of the second network endpoint group created in the previous step (neg-c). You see the Network endpoint group details page.

  5. In the Network endpoints in this group section, click Add network endpoint. You see the Add network endpoint page.

    1. Click VM instance and select vm-c1 to add its internal IP address as a network endpoint.
    2. Click Create.
    3. Click Add network endpoint again and, under VM instance, select vm-c2.
    4. Click Create.

gcloud

  1. Create a GCE_VM_IP zonal NEG called neg-a in us-west1-a using the gcloud compute network-endpoint-groups create command:

    gcloud compute network-endpoint-groups create neg-a \
        --network-endpoint-type=gce-vm-ip \
        --zone=us-west1-a \
        --network=lb-network \
        --subnet=lb-subnet
  2. Add endpoints to neg-a:

    gcloud compute network-endpoint-groups update neg-a \
        --zone=us-west1-a \
        --add-endpoint='instance=vm-a1' \
        --add-endpoint='instance=vm-a2'
  3. Create a GCE_VM_IP zonal NEG called neg-c in us-west1-c using the gcloud compute network-endpoint-groups create command:

    gcloud compute network-endpoint-groups create neg-c \
        --network-endpoint-type=gce-vm-ip \
        --zone=us-west1-c \
        --network=lb-network \
        --subnet=lb-subnet
  4. Add endpoints to neg-c:

    gcloud compute network-endpoint-groups update neg-c \
        --zone=us-west1-c \
        --add-endpoint='instance=vm-c1' \
        --add-endpoint='instance=vm-c2'
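Each NEG should now report two GCE_VM_IP endpoints. You can confirm this with the list-network-endpoints command; this check is optional:

```shell
# Each command should list two endpoints, one per VM in that zone.
gcloud compute network-endpoint-groups list-network-endpoints neg-a \
    --zone=us-west1-a

gcloud compute network-endpoint-groups list-network-endpoints neg-c \
    --zone=us-west1-c
```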

Configure load balancer components

These steps configure all of the internal passthrough Network Load Balancer components:

  • Backend service: This example passes HTTP traffic through the load balancer, so the backend service uses TCP rather than UDP.

  • Forwarding rule: This example creates a single internal forwarding rule.

  • Internal IP address: In this example, you specify an internal IP address, 10.1.2.99, when you create the forwarding rule.

Console

Note: You cannot use the Google Cloud console to create or manage an internal passthrough Network Load Balancer with GCE_VM_IP zonal NEGs. Use either gcloud or the REST API.

gcloud

  1. Create a new regional HTTP health check.

    gcloud compute health-checks create http hc-http-80 \
        --region=us-west1 \
        --port=80
  2. Create the backend service:

    gcloud compute backend-services create bs-ilb \
        --load-balancing-scheme=internal \
        --protocol=tcp \
        --region=us-west1 \
        --health-checks=hc-http-80 \
        --health-checks-region=us-west1
  3. Add the two zonal NEGs, neg-a and neg-c, to the backend service:

    gcloud compute backend-services add-backend bs-ilb \
        --region=us-west1 \
        --network-endpoint-group=neg-a \
        --network-endpoint-group-zone=us-west1-a

    gcloud compute backend-services add-backend bs-ilb \
        --region=us-west1 \
        --network-endpoint-group=neg-c \
        --network-endpoint-group-zone=us-west1-c
  4. Create a forwarding rule for the backend service. When you create the forwarding rule, specify 10.1.2.99 for the internal IP address in the subnet.

    gcloud compute forwarding-rules create fr-ilb \
        --region=us-west1 \
        --load-balancing-scheme=internal \
        --network=lb-network \
        --subnet=lb-subnet \
        --address=10.1.2.99 \
        --ip-protocol=TCP \
        --ports=80,8008,8080,8088 \
        --backend-service=bs-ilb \
        --backend-service-region=us-west1
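Before testing, it can be useful to confirm that all four endpoints pass the health check; unhealthy backends do not receive traffic. This verification step is a suggestion:

```shell
# Each of the four GCE_VM_IP endpoints should report healthState: HEALTHY.
gcloud compute backend-services get-health bs-ilb \
    --region=us-west1
```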

Test the load balancer

This test contacts the load balancer from a separate client VM; that is, not from a backend VM of the load balancer. The expected behavior is for traffic to be distributed among the four backend VMs because no session affinity has been configured.

Create a test client VM

This example creates a client VM (vm-client) in the same region as the backend (server) VMs. The client is used to validate the load balancer's configuration and demonstrate expected behavior as described in the testing section.

Console

  1. Go to the VM instances page in the Google Cloud console.
    Go to the VM instances page
  2. Click Create instance.
  3. Set the Name to vm-client.
  4. Set the Zone to us-west1-a.
  5. Click Advanced options and make the following changes:
    • Click Networking and add allow-ssh to Network tags.
    • Click the edit button under Network interfaces, make the following changes, and then click Done:
      • Network: lb-network
      • Subnet: lb-subnet
      • Primary internal IP: Ephemeral (automatic)
      • External IP: Ephemeral
  6. Click Create.

gcloud

The client VM can be in any zone in the same region as the load balancer, and it can use any subnet in that region. In this example, the client is in the us-west1-a zone, and it uses the same subnet as the backend VMs.

gcloud compute instances create vm-client \
    --zone=us-west1-a \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --tags=allow-ssh \
    --subnet=lb-subnet

Send traffic to the load balancer

Perform the following steps to connect to the load balancer.

  1. Connect to the client VM instance.

    gcloud compute ssh vm-client --zone=us-west1-a
  2. Make a web request to the load balancer using curl to contact its IP address. Repeat the request so that you can see that responses come from different backend VMs. The name of the VM generating the response is displayed in the text of the HTML response, by virtue of the contents of /var/www/html/index.html on each backend VM. Expected responses look like Page served from: vm-a1 and Page served from: vm-a2.

    curl http://10.1.2.99

    The forwarding rule is configured to serve ports 80, 8008, 8080, and 8088. To send traffic to those ports, append a colon (:) and the port number after the IP address, like this:

    curl http://10.1.2.99:8008
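Because no session affinity is configured, repeated requests should be spread across the backends. One way to see the distribution at a glance, run from vm-client, is to tally repeated responses (a sketch; the per-backend counts will vary from run to run):

```shell
# Send 20 requests and count how many each backend served.
for i in $(seq 1 20); do
  curl -s http://10.1.2.99
done | sort | uniq -c
```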

What's next

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2026-02-19 UTC.