Set up an internal passthrough Network Load Balancer with VM instance group backends for multiple protocols

This page provides instructions for creating internal passthrough Network Load Balancers to load balance traffic for multiple protocols.

To configure a load balancer for multiple protocols, including TCP and UDP, you create a forwarding rule with the protocol set to L3_DEFAULT. This forwarding rule points to a backend service with the protocol set to UNSPECIFIED.

In this example, we use one internal passthrough Network Load Balancer to distribute traffic across a backend VM in the us-west1 region. The load balancer has a forwarding rule with protocol L3_DEFAULT to handle TCP, UDP, ICMP, ICMPv6, SCTP, ESP, AH, and GRE.

Figure: Internal passthrough Network Load Balancer for multiple protocols. The load balancer distributes IPv4 and IPv6 traffic based on the protocols, with backend services to manage connection distribution to a single zonal instance group.

Before you begin

Permissions

To get the permissions that you need to complete this guide, ask your administrator to grant you the following IAM roles on the project:

For more information about granting roles, see Manage access to projects, folders, and organizations.

You might also be able to get the required permissions through custom roles or other predefined roles.

Note: IAM basic roles might also contain permissions to complete this guide. You shouldn't grant basic roles in a production environment, but you can grant them in a development or test environment.

Set up load balancer for L3_DEFAULT traffic

The steps in this section describe the following configurations:

  • An example that uses a custom mode VPC network named lb-network. You can use an auto mode network if you only want to handle IPv4 traffic. However, IPv6 traffic requires a custom mode subnet.
  • A single-stack subnet (stack-type set to IPv4), which is required for IPv4 traffic. When you create a single-stack subnet on a custom mode VPC network, you choose an IPv4 subnet range for the subnet. For IPv6 traffic, you need a dual-stack subnet (stack-type set to IPV4_IPV6). When you create a dual-stack subnet on a custom mode VPC network, you choose an IPv6 access type for the subnet. For this example, we set the subnet's ipv6-access-type parameter to INTERNAL. This means new VMs on this subnet can be assigned both internal IPv4 addresses and internal IPv6 addresses.
  • Firewall rules that allow incoming connections to backend VMs.
  • The backend instance group and the load balancer components used for this example are located in this region and subnet:
    • Region: us-west1
    • Subnet: lb-subnet, with primary IPv4 address range 10.1.2.0/24. Although you choose which IPv4 address range is configured on the subnet, the IPv6 address range is assigned automatically. Google provides a fixed size (/64) IPv6 CIDR block.
  • A backend VM in a managed instance group in zone us-west1-a.
  • A client VM to test connections to the backends.
  • An internal passthrough Network Load Balancer with the following components:
    • A health check for the backend service.
    • A backend service in the us-west1 region with the protocol set to UNSPECIFIED to manage connection distribution to the zonal instance group.
    • A forwarding rule with the protocol set to L3_DEFAULT and the port set to ALL.
Note: You can change the name of the network, the region, and the parameters for the subnet. However, subsequent steps in this guide use the network, region, and subnet parameters as outlined in the preceding list.

Configure a network, region, and subnet

To configure subnets with internal IPv6 ranges, enable a Virtual Private Cloud (VPC) network ULA internal IPv6 range. Internal IPv6 subnet ranges are allocated from this range. To create the example network and subnet, follow these steps:

Console

To support both IPv4 and IPv6 traffic, use the following steps:

  1. In the Google Cloud console, go to the VPC networks page.

    Go to VPC networks

  2. Click Create VPC network.

  3. For Name, enter lb-network.

  4. If you want to configure internal IPv6 address ranges on subnets in this network, complete these steps:

    1. For VPC network ULA internal IPv6 range, select Enabled.
    2. For Allocate internal IPv6 range, select Automatically or Manually.

      Pro Tip: If you select Manually, enter a /48 range from within the fd20::/20 range. If the range is in use, you are prompted to provide a different range.

  5. For Subnet creation mode, select Custom.

  6. In the New subnet section, specify the following configuration parameters for a subnet:

    1. For Name, enter lb-subnet.
    2. For Region, select us-west1.
    3. To create a dual-stack subnet, for IP stack type, select IPv4 and IPv6 (dual-stack).
    4. For IPv4 range, enter 10.1.2.0/24.
    5. For IPv6 access type, select Internal.
  7. Click Done.

  8. Click Create.

To support IPv4 traffic, use the following steps:

  1. In the Google Cloud console, go to the VPC networks page.

    Go to VPC networks

  2. Click Create VPC network.

  3. For Name, enter lb-network.

  4. In the Subnets section:

    • Set the Subnet creation mode to Custom.
    • In the New subnet section, enter the following information:
      • Name: lb-subnet
      • Region: us-west1
      • IP stack type: IPv4 (single-stack)
      • IP address range: 10.1.2.0/24
    • Click Done.
  5. Click Create.

gcloud

For both IPv4 and IPv6 traffic, use the following commands:

  1. To create a new custom mode VPC network, run the gcloud compute networks create command.

    To configure internal IPv6 ranges on any subnets in this network, use the --enable-ula-internal-ipv6 flag. This option assigns a /48 ULA prefix from within the fd20::/20 range used by Google Cloud for internal IPv6 subnet ranges.

    gcloud compute networks create lb-network \
        --subnet-mode=custom \
        --enable-ula-internal-ipv6
  2. Within the lb-network network, create a subnet for backends in the us-west1 region.

    To create the subnet, run the gcloud compute networks subnets create command:

    gcloud compute networks subnets create lb-subnet \
        --network=lb-network \
        --range=10.1.2.0/24 \
        --region=us-west1 \
        --stack-type=IPV4_IPV6 \
        --ipv6-access-type=INTERNAL

For IPv4 traffic only, use the following commands:

  1. To create the custom VPC network, use the gcloud compute networks create command:

    gcloud compute networks create lb-network --subnet-mode=custom
  2. To create the subnet for backends in the us-west1 region within the lb-network network, use the gcloud compute networks subnets create command.

    gcloud compute networks subnets create lb-subnet \
        --network=lb-network \
        --range=10.1.2.0/24 \
        --region=us-west1

API

For both IPv4 and IPv6 traffic, use the following steps:

  1. Create a new custom mode VPC network. Make a POST request to the networks.insert method.

    To configure internal IPv6 ranges on any subnets in this network, set enableUlaInternalIpv6 to true. This option assigns a /48 range from within the fd20::/20 range used by Google for internal IPv6 subnet ranges.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks

    {
      "autoCreateSubnetworks": false,
      "name": "lb-network",
      "mtu": MTU,
      "enableUlaInternalIpv6": true
    }

    Replace the following:

    • PROJECT_ID: the ID of the project where the VPC network is created.
    • MTU: the maximum transmission unit of the network. MTU can either be 1460 (default) or 1500. Review the maximum transmission unit overview before setting the MTU to 1500.
  2. Make a POST request to the subnetworks.insert method.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/subnetworks

    {
      "ipCidrRange": "10.1.2.0/24",
      "network": "lb-network",
      "name": "lb-subnet",
      "stackType": "IPV4_IPV6",
      "ipv6AccessType": "INTERNAL"
    }

For IPv4 traffic only, use the following steps:

  1. Make a POST request to the networks.insert method. Replace PROJECT_ID with the ID of your Google Cloud project.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks

    {
      "name": "lb-network",
      "autoCreateSubnetworks": false
    }
  2. Make a POST request to the subnetworks.insert method:

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks

    {
      "name": "lb-subnet",
      "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
      "ipCidrRange": "10.1.2.0/24",
      "privateIpGoogleAccess": false
    }

Configure firewall rules

This example uses the following firewall rules:

  • fw-allow-lb-access: An ingress rule, applicable to all targets in the VPC network, that allows traffic from sources in the 10.1.2.0/24 range. This rule allows incoming traffic from any client located in the subnet.

  • fw-allow-lb-access-ipv6: An ingress rule, applicable to all targets in the VPC network, that allows traffic from sources in the IPv6 range configured in the subnet. This rule allows incoming IPv6 traffic from any client located in the subnet.

  • fw-allow-ssh: An ingress rule, applicable to the instances being load balanced, that allows incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP range for this rule. For example, you can specify only the IP ranges of the system from which you are initiating SSH sessions. This example uses the target tag allow-ssh to identify the VMs to which it should apply.

  • fw-allow-health-check: An ingress rule, applicable to the instances being load balanced, that allows traffic from the Google Cloud health checking systems (130.211.0.0/22 and 35.191.0.0/16). This example uses the target tag allow-health-check to identify the instances to which it should apply.

  • fw-allow-health-check-ipv6: An ingress rule, applicable to the instances being load balanced, that allows traffic from the Google Cloud health checking systems (2600:2d00:1:b029::/64). This example uses the target tag allow-health-check-ipv6 to identify the instances to which it should apply.

Without these firewall rules, the default deny ingress rule blocks incoming traffic to the backend instances.

Note: You must create a firewall rule to allow health checks from the IP ranges of Google Cloud probe systems. For more information, see Probe IP ranges and firewall rules.

Console

  1. In the Google Cloud console, go to the Firewall policies page.

    Go to Firewall policies

  2. To allow IPv4 TCP, UDP, and ICMP traffic to reach backend instance group ig-a:

    • Click Create firewall rule.
    • Name: fw-allow-lb-access
    • Network: lb-network
    • Priority: 1000
    • Direction of traffic: Ingress
    • Action on match: Allow
    • Targets: All instances in the network
    • Source filter: IPv4 ranges
    • Source IPv4 ranges: 10.1.2.0/24
    • Protocols and ports: select Specified protocols and ports.
      • Select TCP and enter ALL.
      • Select UDP.
      • Select Other and enter ICMP.
  3. Click Create.

  4. To allow incoming SSH connections:

    • Click Create firewall rule.
    • Name: fw-allow-ssh
    • Network: lb-network
    • Priority: 1000
    • Direction of traffic: Ingress
    • Action on match: Allow
    • Targets: Specified target tags
    • Target tags: allow-ssh
    • Source filter: IPv4 ranges
    • Source IPv4 ranges: 0.0.0.0/0
    • Protocols and ports: choose Specified protocols and ports, and then type tcp:22.
  5. Click Create.

  6. To allow IPv6 TCP, UDP, and ICMP traffic to reach backend instance group ig-a:

    • Click Create firewall rule.
    • Name: fw-allow-lb-access-ipv6
    • Network: lb-network
    • Priority: 1000
    • Direction of traffic: Ingress
    • Action on match: Allow
    • Targets: All instances in the network
    • Source filter: IPv6 ranges
    • Source IPv6 ranges: IPV6_ADDRESS assigned in the lb-subnet
    • Protocols and ports: select Specified protocols and ports.
      • Select TCP and enter 0-65535.
      • Select UDP.
      • Select Other and, for the ICMPv6 protocol, enter 58.
  7. Click Create.

  8. To allow Google Cloud IPv6 health checks:

    • Click Create firewall rule.
    • Name: fw-allow-health-check-ipv6
    • Network: lb-network
    • Priority: 1000
    • Direction of traffic: Ingress
    • Action on match: Allow
    • Targets: Specified target tags
    • Target tags: allow-health-check-ipv6
    • Source filter: IPv6 ranges
    • Source IPv6 ranges: 2600:2d00:1:b029::/64
    • Protocols and ports: Allow all
  9. Click Create.

  10. To allow Google Cloud IPv4 health checks:

    • Click Create firewall rule.
    • Name: fw-allow-health-check
    • Network: lb-network
    • Priority: 1000
    • Direction of traffic: Ingress
    • Action on match: Allow
    • Targets: Specified target tags
    • Target tags: allow-health-check
    • Source filter: IPv4 ranges
    • Source IPv4 ranges: 130.211.0.0/22 and 35.191.0.0/16
    • Protocols and ports: Allow all
  11. Click Create.

gcloud

  1. To allow IPv4 TCP, UDP, and ICMP traffic to reach backend instance group ig-a, create the following rule:

    gcloud compute firewall-rules create fw-allow-lb-access \
        --network=lb-network \
        --action=allow \
        --direction=ingress \
        --source-ranges=10.1.2.0/24 \
        --rules=tcp,udp,icmp
  2. Create the fw-allow-ssh firewall rule to allow SSH connectivity to VMs by using the network tag allow-ssh. When you omit source-ranges, Google Cloud interprets the rule to mean any source.

    gcloud compute firewall-rules create fw-allow-ssh \
        --network=lb-network \
        --action=allow \
        --direction=ingress \
        --target-tags=allow-ssh \
        --rules=tcp:22
  3. To allow IPv6 traffic to reach backend instance group ig-a, create the following rule:

    gcloud compute firewall-rules create fw-allow-lb-access-ipv6 \
        --network=lb-network \
        --action=allow \
        --direction=ingress \
        --source-ranges=IPV6_ADDRESS \
        --rules=all

    Replace IPV6_ADDRESS with the IPv6 address assigned in the lb-subnet.

  4. Create the fw-allow-health-check firewall rule to allow Google Cloud health checks.

    gcloud compute firewall-rules create fw-allow-health-check \
        --network=lb-network \
        --action=allow \
        --direction=ingress \
        --target-tags=allow-health-check \
        --source-ranges=130.211.0.0/22,35.191.0.0/16 \
        --rules=tcp,udp,icmp
  5. Create the fw-allow-health-check-ipv6 rule to allow Google Cloud IPv6 health checks.

    gcloud compute firewall-rules create fw-allow-health-check-ipv6 \
        --network=lb-network \
        --action=allow \
        --direction=ingress \
        --target-tags=allow-health-check-ipv6 \
        --source-ranges=2600:2d00:1:b029::/64 \
        --rules=tcp,udp,icmp

API

  1. To create the fw-allow-lb-access firewall rule, make a POST request to the firewalls.insert method. Replace PROJECT_ID with the ID of your Google Cloud project.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

    {
      "name": "fw-allow-lb-access",
      "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
      "priority": 1000,
      "sourceRanges": [
        "10.1.2.0/24"
      ],
      "allPorts": true,
      "allowed": [
        { "IPProtocol": "tcp" },
        { "IPProtocol": "udp" },
        { "IPProtocol": "icmp" }
      ],
      "direction": "INGRESS",
      "logConfig": {
        "enable": false
      },
      "disabled": false
    }
  2. Create the fw-allow-lb-access-ipv6 firewall rule by making a POST request to the firewalls.insert method.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

    {
      "name": "fw-allow-lb-access-ipv6",
      "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
      "priority": 1000,
      "sourceRanges": [
        "IPV6_ADDRESS"
      ],
      "allPorts": true,
      "allowed": [
        { "IPProtocol": "tcp" },
        { "IPProtocol": "udp" },
        { "IPProtocol": "58" }
      ],
      "direction": "INGRESS",
      "logConfig": {
        "enable": false
      },
      "disabled": false
    }

    Replace IPV6_ADDRESS with the IPv6 address assigned in the lb-subnet.

  3. To create the fw-allow-ssh firewall rule, make a POST request to the firewalls.insert method:

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

    {
      "name": "fw-allow-ssh",
      "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
      "priority": 1000,
      "sourceRanges": [
        "0.0.0.0/0"
      ],
      "targetTags": [
        "allow-ssh"
      ],
      "allowed": [
        {
          "IPProtocol": "tcp",
          "ports": [
            "22"
          ]
        }
      ],
      "direction": "INGRESS",
      "logConfig": {
        "enable": false
      },
      "disabled": false
    }
  4. To create the fw-allow-health-check firewall rule, make a POST request to the firewalls.insert method:

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

    {
      "name": "fw-allow-health-check",
      "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
      "priority": 1000,
      "sourceRanges": [
        "130.211.0.0/22",
        "35.191.0.0/16"
      ],
      "targetTags": [
        "allow-health-check"
      ],
      "allowed": [
        { "IPProtocol": "tcp" },
        { "IPProtocol": "udp" },
        { "IPProtocol": "icmp" }
      ],
      "direction": "INGRESS",
      "logConfig": {
        "enable": false
      },
      "disabled": false
    }
  5. Create the fw-allow-health-check-ipv6 firewall rule by making a POST request to the firewalls.insert method.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

    {
      "name": "fw-allow-health-check-ipv6",
      "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
      "priority": 1000,
      "sourceRanges": [
        "2600:2d00:1:b029::/64"
      ],
      "targetTags": [
        "allow-health-check-ipv6"
      ],
      "allowed": [
        { "IPProtocol": "tcp" },
        { "IPProtocol": "udp" }
      ],
      "direction": "INGRESS",
      "logConfig": {
        "enable": false
      },
      "disabled": false
    }

Create backend VMs and instance groups

For this load balancing scenario, you create a Compute Engine zonal managed instance group and install an Apache web server.

To handle both IPv4 and IPv6 traffic, configure the backend VMs to be dual-stack. Set the VM's stack-type to IPV4_IPV6. The VMs also inherit the ipv6-access-type setting (in this example, INTERNAL) from the subnet. For more details about IPv6 requirements, see the Internal passthrough Network Load Balancer overview: Forwarding rules.

If you want to use existing VMs as backends, update the VMs to be dual-stack by using the gcloud compute instances network-interfaces update command.
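For example, the update might look like the following sketch; the instance name and zone here are illustrative assumptions, and the VM's subnet must already be dual-stack:

```shell
# Sketch: convert an existing VM's default network interface to dual-stack
# so that it can receive IPv6 traffic. The instance name (my-backend-vm)
# and zone are example values; substitute your own.
gcloud compute instances network-interfaces update my-backend-vm \
    --zone=us-west1-a \
    --stack-type=IPV4_IPV6
```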

Instances that participate as backend VMs for internal passthrough Network Load Balancers must be running the appropriate Linux Guest Environment, Windows Guest Environment, or other processes that provide equivalent functionality.

For instructional simplicity, the backend VMs run Debian GNU/Linux 12.

Create the instance group

Console

To support both IPv4 and IPv6 traffic, use the following steps:

  1. Create an instance template. In the Google Cloud console, go to the Instance templates page.

    Go to Instance templates

    1. Click Create instance template.
    2. For the Name, enter vm-a1.
    3. Ensure that the Boot disk is set to a Debian image, such as Debian GNU/Linux 12 (bookworm). These instructions use commands that are only available on Debian, such as apt-get.
    4. Expand the Advanced options section.
    5. Expand the Management section, and then copy the following script into the Startup script field. The startup script also configures the Apache server to listen on port 8080 instead of port 80.

      #! /bin/bash
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://metadata.google.internal/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | \
      tee /var/www/html/index.html
      sed -ire 's/^Listen 80$/Listen 8080/g' /etc/apache2/ports.conf
      systemctl restart apache2
    6. Expand the Networking section, and then specify the following:

      1. For Network tags, add allow-ssh and allow-health-check-ipv6.
      2. For Network interfaces, click the default interface and configure the following fields:
        • Network: lb-network
        • Subnetwork: lb-subnet
        • IP stack type: IPv4 and IPv6 (dual-stack)
    7. Click Create.
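The startup script's sed line is what moves Apache from port 80 to port 8080. If you want to sanity-check the substitution before deploying, you can run it locally against a scratch copy of the config (the file path below is a stand-in, not the real ports.conf):

```shell
# Reproduce the startup script's port rewrite on a scratch file.
printf 'Listen 80\n' > /tmp/ports.conf.demo
sed -i -e 's/^Listen 80$/Listen 8080/' /tmp/ports.conf.demo
grep '^Listen' /tmp/ports.conf.demo   # prints: Listen 8080
```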

To support IPv4 traffic, use the following steps:

  1. Create an instance template. In the Google Cloud console, go to the Instance templates page.

    Go to Instance templates

  2. Click Create instance template.

    1. For the Name, enter vm-a1.
    2. Ensure that the Boot disk is set to a Debian image, such as Debian GNU/Linux 12 (bookworm). These instructions use commands that are only available on Debian, such as apt-get.
    3. Expand the Advanced options section.
    4. Expand the Management section, and then copy the following script into the Startup script field. The startup script also configures the Apache server to listen on port 8080 instead of port 80.

      #! /bin/bash
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://metadata.google.internal/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | \
      tee /var/www/html/index.html
      sed -ire 's/^Listen 80$/Listen 8080/g' /etc/apache2/ports.conf
      systemctl restart apache2
    5. Expand the Networking section, and then specify the following:

      1. For Network tags, add allow-ssh and allow-health-check.
      2. For Network interfaces, click the default interface and configure the following fields:
        • Network: lb-network
        • Subnetwork: lb-subnet
        • IP stack type: IPv4 (single-stack)
    6. Click Create.

  3. Create a managed instance group. In the Google Cloud console, go to the Instance groups page.

    Go to Instance groups

    1. Click Create instance group.
    2. Choose New managed instance group (stateless). For more information, see Stateless or stateful MIGs.
    3. For the Name, enter ig-a.
    4. For Location, select Single zone.
    5. For the Region, select us-west1.
    6. For the Zone, select us-west1-a.
    7. For Instance template, select vm-a1.
    8. Specify the number of instances that you want to create in the group.

      For this example, specify the following options under Autoscaling:

      • For Autoscaling mode, select Off: do not autoscale.
      • For Maximum number of instances, enter 2.
    9. Click Create.

gcloud

The gcloud instructions in this guide assume that you are using Cloud Shell or another environment with bash installed.

  1. Create a VM instance template with an HTTP server by using the gcloud compute instance-templates create command.

    The startup script also configures the Apache server to listen on port 8080 instead of port 80.

    To handle both IPv4 and IPv6 traffic, use the following command.

    gcloud compute instance-templates create vm-a1 \
        --region=us-west1 \
        --network=lb-network \
        --subnet=lb-subnet \
        --ipv6-network-tier=PREMIUM \
        --stack-type=IPV4_IPV6 \
        --tags=allow-ssh \
        --image-family=debian-12 \
        --image-project=debian-cloud \
        --metadata=startup-script='#! /bin/bash
          apt-get update
          apt-get install apache2 -y
          a2ensite default-ssl
          a2enmod ssl
          vm_hostname="$(curl -H "Metadata-Flavor:Google" \
          http://metadata.google.internal/computeMetadata/v1/instance/name)"
          echo "Page served from: $vm_hostname" | \
          tee /var/www/html/index.html
          sed -ire "s/^Listen 80$/Listen 8080/g" /etc/apache2/ports.conf
          systemctl restart apache2'

    Or, if you want to handle IPv4 traffic only, use the following command.

    gcloud compute instance-templates create vm-a1 \
        --region=us-west1 \
        --network=lb-network \
        --subnet=lb-subnet \
        --tags=allow-ssh \
        --image-family=debian-12 \
        --image-project=debian-cloud \
        --metadata=startup-script='#! /bin/bash
          apt-get update
          apt-get install apache2 -y
          a2ensite default-ssl
          a2enmod ssl
          vm_hostname="$(curl -H "Metadata-Flavor:Google" \
          http://metadata.google.internal/computeMetadata/v1/instance/name)"
          echo "Page served from: $vm_hostname" | \
          tee /var/www/html/index.html
          sed -ire "s/^Listen 80$/Listen 8080/g" /etc/apache2/ports.conf
          systemctl restart apache2'
  2. Create a managed instance group in the zone with the gcloud compute instance-groups managed create command.

    gcloud compute instance-groups managed create ig-a \
        --zone us-west1-a \
        --size 2 \
        --template vm-a1

API

To handle both IPv4 and IPv6 traffic, use the following steps:

  1. Create a VM by making a POST request to the instances.insert method:

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances

    {
      "name": "vm-a1",
      "tags": {
        "items": [
          "allow-health-check-ipv6",
          "allow-ssh"
        ]
      },
      "machineType": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/machineTypes/e2-standard-2",
      "canIpForward": false,
      "networkInterfaces": [
        {
          "stackType": "IPV4_IPV6",
          "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
          "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
          "accessConfigs": [
            {
              "type": "ONE_TO_ONE_NAT",
              "name": "external-nat",
              "networkTier": "PREMIUM"
            }
          ]
        }
      ],
      "disks": [
        {
          "type": "PERSISTENT",
          "boot": true,
          "mode": "READ_WRITE",
          "autoDelete": true,
          "deviceName": "vm-a1",
          "initializeParams": {
            "sourceImage": "projects/debian-cloud/global/images/DEBIAN_IMAGE_NAME",
            "diskType": "projects/PROJECT_ID/zones/ZONE/diskTypes/pd-standard",
            "diskSizeGb": "10"
          }
        }
      ],
      "metadata": {
        "items": [
          {
            "key": "startup-script",
            "value": "#! /bin/bash\napt-get update\napt-get install apache2 -y\na2ensite default-ssl\na2enmod ssl\nvm_hostname=\"$(curl -H \"Metadata-Flavor:Google\" http://metadata.google.internal/computeMetadata/v1/instance/name)\"\necho \"Page served from: $vm_hostname\" | tee /var/www/html/index.html\nsed -ire \"s/^Listen 80$/Listen 8080/g\" /etc/apache2/ports.conf\nsystemctl restart apache2"
          }
        ]
      },
      "scheduling": {
        "preemptible": false
      },
      "deletionProtection": false
    }

To handle IPv4 traffic, use the following steps:

  1. Create a VM by making a POST request to the instances.insert method:

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances

    {
      "name": "vm-a1",
      "tags": {
        "items": [
          "allow-health-check",
          "allow-ssh"
        ]
      },
      "machineType": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/machineTypes/e2-standard-2",
      "canIpForward": false,
      "networkInterfaces": [
        {
          "stackType": "IPV4_ONLY",
          "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
          "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
          "accessConfigs": [
            {
              "type": "ONE_TO_ONE_NAT",
              "name": "external-nat",
              "networkTier": "PREMIUM"
            }
          ]
        }
      ],
      "disks": [
        {
          "type": "PERSISTENT",
          "boot": true,
          "mode": "READ_WRITE",
          "autoDelete": true,
          "deviceName": "vm-a1",
          "initializeParams": {
            "sourceImage": "projects/debian-cloud/global/images/DEBIAN_IMAGE_NAME",
            "diskType": "projects/PROJECT_ID/zones/ZONE/diskTypes/pd-standard",
            "diskSizeGb": "10"
          }
        }
      ],
      "metadata": {
        "items": [
          {
            "key": "startup-script",
            "value": "#! /bin/bash\napt-get update\napt-get install apache2 -y\na2ensite default-ssl\na2enmod ssl\nvm_hostname=\"$(curl -H \"Metadata-Flavor:Google\" http://metadata.google.internal/computeMetadata/v1/instance/name)\"\necho \"Page served from: $vm_hostname\" | tee /var/www/html/index.html\nsed -ire \"s/^Listen 80$/Listen 8080/g\" /etc/apache2/ports.conf\nsystemctl restart apache2"
          }
        ]
      },
      "scheduling": {
        "preemptible": false
      },
      "deletionProtection": false
    }
  2. Create an instance group by making a POST request to the instanceGroups.insert method.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instanceGroups

    {
      "name": "ig-a",
      "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
      "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet"
    }
  3. Add instances to the instance group by making a POST request to the instanceGroups.addInstances method.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instanceGroups/ig-a/addInstances

    {
      "instances": [
        {
          "instance": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instances/vm-a1"
        }
      ]
    }

Create a client VM

This example creates a client VM in the same region as the backend (server) VMs. The client is used to validate the load balancer's configuration and demonstrate expected behavior as described in the testing section.

For IPv4 and IPv6 traffic:

Console

  1. In the Google Cloud console, go to the VM instances page.

    Go to VM instances

  2. Click Create instance.

  3. Set the Name to vm-client-ipv6.

  4. Set the Zone to us-west1-a.

  5. Expand the Advanced options section, and then make the following changes:

    • Expand Networking, and then add allow-ssh to Network tags.
    • Under Network interfaces, click Edit, make the following changes, and then click Done:
      • Network: lb-network
      • Subnet: lb-subnet
      • IP stack type: IPv4 and IPv6 (dual-stack)
      • Primary internal IP: Ephemeral (automatic)
      • External IP: Ephemeral
  6. Click Create.

gcloud

The client VM can be in any zone in the same region as the load balancer, and it can use any subnet in that region. In this example, the client is in the us-west1-a zone, and it uses the same subnet as the backend VMs.

gcloud compute instances create vm-client-ipv6 \
    --zone=us-west1-a \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --stack-type=IPV4_IPV6 \
    --tags=allow-ssh \
    --subnet=lb-subnet

API

Make a POST request to the instances.insert method.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instances

{
  "name": "vm-client-ipv6",
  "tags": {
    "items": [
      "allow-ssh"
    ]
  },
  "machineType": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/machineTypes/e2-standard-2",
  "canIpForward": false,
  "networkInterfaces": [
    {
      "stackType": "IPV4_IPV6",
      "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
      "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
      "accessConfigs": [
        {
          "type": "ONE_TO_ONE_NAT",
          "name": "external-nat",
          "networkTier": "PREMIUM"
        }
      ]
    }
  ],
  "disks": [
    {
      "type": "PERSISTENT",
      "boot": true,
      "mode": "READ_WRITE",
      "autoDelete": true,
      "deviceName": "vm-client",
      "initializeParams": {
        "sourceImage": "projects/debian-cloud/global/images/debian-image-name",
        "diskType": "projects/PROJECT_ID/zones/us-west1-a/diskTypes/pd-standard",
        "diskSizeGb": "10"
      }
    }
  ],
  "scheduling": {
    "preemptible": false
  },
  "deletionProtection": false
}

For IPv4 traffic:

Console

  1. In the Google Cloud console, go to the VM instances page.

    Go to VM instances

  2. Click Create instance.

  3. For Name, enter vm-client.

  4. For Zone, enter us-west1-a.

  5. Expand the Advanced options section.

  6. Expand Networking, and then configure the following fields:

    1. For Network tags, enter allow-ssh.
    2. For Network interfaces, select the following:
      • Network: lb-network
      • Subnet: lb-subnet
  7. Click Create.

gcloud

The client VM can be in any zone in the same region as the load balancer, and it can use any subnet in that region. In this example, the client is in the us-west1-a zone, and it uses the same subnet as the backend VMs.

gcloud compute instances create vm-client \
    --zone=us-west1-a \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --tags=allow-ssh \
    --subnet=lb-subnet

API

Make a POST request to the instances.insert method. Replace PROJECT_ID with the ID of your Google Cloud project.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instances

{
  "name": "vm-client",
  "tags": {
    "items": [
      "allow-ssh"
    ]
  },
  "machineType": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/machineTypes/e2-standard-2",
  "canIpForward": false,
  "networkInterfaces": [
    {
      "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
      "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
      "accessConfigs": [
        {
          "type": "ONE_TO_ONE_NAT",
          "name": "external-nat",
          "networkTier": "PREMIUM"
        }
      ]
    }
  ],
  "disks": [
    {
      "type": "PERSISTENT",
      "boot": true,
      "mode": "READ_WRITE",
      "autoDelete": true,
      "deviceName": "vm-client",
      "initializeParams": {
        "sourceImage": "projects/debian-cloud/global/images/debian-image-name",
        "diskType": "projects/PROJECT_ID/zones/us-west1-a/diskTypes/pd-standard",
        "diskSizeGb": "10"
      }
    }
  ],
  "scheduling": {
    "preemptible": false
  },
  "deletionProtection": false
}

Configure load balancer components

Create a load balancer for multiple protocols.

gcloud

  1. Create an HTTP health check for port 80. This health check is used to verify the health of backends in the ig-a instance group.

    gcloud compute health-checks create http hc-http-80 \
        --region=us-west1 \
        --port=80
  2. Create the backend service with the protocol set to UNSPECIFIED:

    gcloud compute backend-services create be-ilb-l3-default \
        --load-balancing-scheme=internal \
        --protocol=UNSPECIFIED \
        --region=us-west1 \
        --health-checks=hc-http-80 \
        --health-checks-region=us-west1
  3. Add the instance group to the backend service:

    gcloud compute backend-services add-backend be-ilb-l3-default \
        --region=us-west1 \
        --instance-group=ig-a \
        --instance-group-zone=us-west1-a
  4. For IPv6 traffic: Create a forwarding rule with the protocol set to L3_DEFAULT to handle all supported IPv6 protocol traffic. Forwarding rules that use the L3_DEFAULT protocol must be configured to use all ports.

    gcloud compute forwarding-rules create fr-ilb-ipv6 \
        --region=us-west1 \
        --load-balancing-scheme=internal \
        --subnet=lb-subnet \
        --ip-protocol=L3_DEFAULT \
        --ports=ALL \
        --backend-service=be-ilb-l3-default \
        --backend-service-region=us-west1 \
        --ip-version=IPV6
  5. For IPv4 traffic: Create a forwarding rule with the protocol set to L3_DEFAULT to handle all supported IPv4 protocol traffic. Forwarding rules that use the L3_DEFAULT protocol must be configured to use all ports. Use 10.1.2.99 as the internal IP address.

    gcloud compute forwarding-rules create fr-ilb-l3-default \
        --region=us-west1 \
        --load-balancing-scheme=internal \
        --network=lb-network \
        --subnet=lb-subnet \
        --address=10.1.2.99 \
        --ip-protocol=L3_DEFAULT \
        --ports=ALL \
        --backend-service=be-ilb-l3-default \
        --backend-service-region=us-west1

API

  1. Create the health check by making a POST request to the regionHealthChecks.insert method. Replace PROJECT_ID with the ID of your Google Cloud project.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/regionHealthChecks

    {
      "name": "hc-http-80",
      "type": "HTTP",
      "httpHealthCheck": {
        "port": 80
      }
    }
  2. Create the regional backend service by making a POST request to the regionBackendServices.insert method:

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices

    {
      "name": "be-ilb-l3-default",
      "backends": [
        {
          "group": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instanceGroups/ig-a",
          "balancingMode": "CONNECTION"
        }
      ],
      "healthChecks": [
        "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/healthChecks/hc-http-80"
      ],
      "loadBalancingScheme": "INTERNAL",
      "protocol": "UNSPECIFIED",
      "connectionDraining": {
        "drainingTimeoutSec": 0
      }
    }
  3. For IPv6 traffic: Create the forwarding rule by making a POST request to the forwardingRules.insert method.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/forwardingRules

    {
      "name": "fr-ilb-ipv6",
      "IPProtocol": "L3_DEFAULT",
      "allPorts": true,
      "loadBalancingScheme": "INTERNAL",
      "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
      "backendService": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices/be-ilb-l3-default",
      "ipVersion": "IPV6",
      "networkTier": "PREMIUM"
    }
  4. For IPv4 traffic: Create the forwarding rule by making a POST request to the forwardingRules.insert method:

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/forwardingRules

    {
      "name": "fr-ilb-l3-default",
      "IPAddress": "10.1.2.99",
      "IPProtocol": "L3_DEFAULT",
      "allPorts": true,
      "loadBalancingScheme": "INTERNAL",
      "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
      "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
      "backendService": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices/be-ilb-l3-default",
      "networkTier": "PREMIUM"
    }
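The gcloud flags and the REST request bodies above describe the same forwarding rule. As an illustrative sketch of that flag-to-field correspondence (the helper function below is hypothetical and does not call the Compute Engine API):

```python
# Hypothetical helper: builds a forwardingRules.insert request body from the
# gcloud flags used above. Illustration only; not an official client library.
def forwarding_rule_body(name, protocol, address=None, ports="ALL",
                         project="PROJECT_ID", region="us-west1"):
    prefix = f"https://www.googleapis.com/compute/v1/projects/{project}"
    body = {
        "name": name,
        "IPProtocol": protocol,                 # --ip-protocol
        "loadBalancingScheme": "INTERNAL",      # --load-balancing-scheme
        "subnetwork": f"{prefix}/regions/{region}/subnetworks/lb-subnet",
        "network": f"{prefix}/global/networks/lb-network",
        "backendService": f"{prefix}/regions/{region}/backendServices/be-ilb-l3-default",
        "networkTier": "PREMIUM",
    }
    if address is not None:
        body["IPAddress"] = address             # --address
    if ports == "ALL":
        body["allPorts"] = True                 # --ports=ALL maps to allPorts
    else:
        body["ports"] = [str(p) for p in ports] # --ports=80,443 maps to a list
    return body

rule = forwarding_rule_body("fr-ilb-l3-default", "L3_DEFAULT", address="10.1.2.99")
```

Note that `--ports=ALL` does not become a literal `"ports"` entry; in the REST body it is expressed as `"allPorts": true`.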

Test your load balancer

The following tests show how to validate your load balancer configuration and learn about its expected behavior.

Test connection from client VM

This test contacts the load balancer from a separate client VM; that is, not from a backend VM of the load balancer.

gcloud: IPv6

  1. Connect to the client VM instance.

    gcloud compute ssh vm-client-ipv6 --zone=us-west1-a
  2. Describe the IPv6 forwarding rule fr-ilb-ipv6. Note the IPV6_ADDRESS in the description.

    gcloud compute forwarding-rules describe fr-ilb-ipv6 --region=us-west1
  3. From clients with IPv6 connectivity, run the following command. Replace IPV6_ADDRESS with the ephemeral IPv6 address in the fr-ilb-ipv6 forwarding rule.

    curl -m 10 -s http://IPV6_ADDRESS:80

    For example, if the assigned IPv6 address is fd20:1db0:b882:802:0:46:0:0/96, the command looks like the following:

    curl -m 10 -s http://[fd20:1db0:b882:802:0:46:0:0]:80
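The square brackets around the IPv6 literal are required so that the colons inside the address aren't interpreted as a port separator, and the /96 prefix length reported by gcloud must be stripped before the address is used in a URL. A small Python sketch of that transformation:

```python
import ipaddress

def ipv6_url(addr_with_prefix: str, port: int = 80) -> str:
    """Build an HTTP URL from an IPv6 address that may carry a prefix
    length (e.g. the IPAddress of an IPv6 forwarding rule)."""
    host = addr_with_prefix.split("/")[0]   # drop the /96 prefix length
    ipaddress.IPv6Address(host)             # raises ValueError if not IPv6
    return f"http://[{host}]:{port}"        # brackets keep colons unambiguous

print(ipv6_url("fd20:1db0:b882:802:0:46:0:0/96"))
# → http://[fd20:1db0:b882:802:0:46:0:0]:80
```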

gcloud: IPv4

  1. Connect to the client VM instance.

    gcloud compute ssh vm-client --zone=us-west1-a
  2. Describe the IPv4 forwarding rule fr-ilb-l3-default.

    gcloud compute forwarding-rules describe fr-ilb-l3-default --region=us-west1
  3. Make a web request to the load balancer by using curl to contact its IP address. Repeat the request so that you can see that responses come from different backend VMs. The name of the VM that generates the response is displayed in the text of the HTML response because of the contents of /var/www/html/index.html on each backend VM. Expected responses look like Page served from: vm-a1.

    curl http://10.1.2.99

    The forwarding rule is configured with --ports=ALL, so it accepts traffic on any port. To send a request to a specific port, append a colon (:) and the port number after the IP address, like this:

    curl http://10.1.2.99:80

Ping the load balancer's IP address

This test demonstrates an expected behavior: you can ping the IP address of the load balancer.

gcloud: IPv6

  1. Connect to the client VM instance.

    gcloud compute ssh vm-client-ipv6 --zone=us-west1-a
  2. Attempt to ping the IPv6 address of the load balancer. Replace IPV6_ADDRESS with the ephemeral IPv6 address in the fr-ilb-ipv6 forwarding rule.

    Notice that you get a response and that the ping command works in this example.

    ping6 IPV6_ADDRESS

    For example, if the assigned IPv6 address is 2001:db8:1:1:1:1:1:1/96, the command is as follows:

    ping6 2001:db8:1:1:1:1:1:1

    The output is similar to the following:

    @vm-client: ping IPV6_ADDRESS
    PING IPV6_ADDRESS (IPV6_ADDRESS) 56(84) bytes of data.
    64 bytes from IPV6_ADDRESS: icmp_seq=1 ttl=64 time=1.58 ms

gcloud: IPv4

  1. Connect to the client VM instance.

    gcloud compute ssh vm-client --zone=us-west1-a
  2. Attempt to ping the IPv4 address of the load balancer. Notice that you get a response and that the ping command works in this example.

    ping 10.1.2.99

    The output is similar to the following:

    @vm-client: ping 10.1.2.99
    PING 10.1.2.99 (10.1.2.99) 56(84) bytes of data.
    64 bytes from 10.1.2.99: icmp_seq=1 ttl=64 time=1.58 ms
    64 bytes from 10.1.2.99: icmp_seq=2 ttl=64 time=0.242 ms
    64 bytes from 10.1.2.99: icmp_seq=3 ttl=64 time=0.295 ms
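If you automate this connectivity check, the round-trip times can be extracted from the ping output with a short parser. A sketch, using the sample output above:

```python
import re

# Sample ping output, matching the example shown in this section.
SAMPLE = """\
PING 10.1.2.99 (10.1.2.99) 56(84) bytes of data.
64 bytes from 10.1.2.99: icmp_seq=1 ttl=64 time=1.58 ms
64 bytes from 10.1.2.99: icmp_seq=2 ttl=64 time=0.242 ms
64 bytes from 10.1.2.99: icmp_seq=3 ttl=64 time=0.295 ms
"""

def rtts_ms(ping_output: str) -> list[float]:
    # One "time=<value> ms" token appears per echo reply line.
    return [float(m) for m in re.findall(r"time=([\d.]+) ms", ping_output)]

print(rtts_ms(SAMPLE))  # → [1.58, 0.242, 0.295]
```

An empty result would indicate that no echo replies were received, that is, that the load balancer's IP address is not reachable from the client.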

Additional configuration options

This section expands on the configuration example to provide alternative andadditional configuration options. All of the tasks are optional. You can performthem in any order.

You can reserve a static internal IP address for your example. This configuration allows multiple internal forwarding rules to use the same IP address with different protocols and different ports. The backends of your example load balancer must still be located in the region us-west1.

The following diagram shows the architecture for this example.

Load balancing traffic based on the protocols, with backend services to    manage connection distribution to a single zonal instance group.
An internal passthrough Network Load Balancer for multiple protocols that uses a static internal IP address (click to enlarge).

You can also consider using the following forwarding rule configurations:

  • Forwarding rules with multiple ports:

    • Protocol TCP with ports 80 and 8080
    • Protocol L3_DEFAULT with ports ALL
  • Forwarding rules with all ports:

    • Protocol TCP with ports ALL
    • Protocol L3_DEFAULT with ports ALL

Reserve static internal IPv4 address

Reserve a static internal IP address for 10.1.2.99 and set its --purpose flag to SHARED_LOADBALANCER_VIP. The --purpose flag is required so that many forwarding rules can use the same internal IP address.

gcloud

Use thegcloud compute addresses create command:

gcloud compute addresses create internal-lb-ipv4 \
    --region us-west1 \
    --subnet lb-subnet \
    --purpose SHARED_LOADBALANCER_VIP \
    --addresses 10.1.2.99

API

Call the addresses.insert method. Replace PROJECT_ID with the ID of your Google Cloud project.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/addresses

The body of the request must include the addressType, which should be INTERNAL, the name of the address, and the subnetwork that the IP address belongs to. You must specify the address as 10.1.2.99.

{
  "addressType": "INTERNAL",
  "name": "internal-lb-ipv4",
  "subnetwork": "regions/us-west1/subnetworks/lb-subnet",
  "purpose": "SHARED_LOADBALANCER_VIP",
  "address": "10.1.2.99"
}

Configure load balancer components

Configure three load balancers with the following components:

  • The first load balancer has a forwarding rule with protocol TCP and port 80. TCP traffic arriving at the internal IP address on port 80 is handled by the TCP forwarding rule.
  • The second load balancer has a forwarding rule with protocol UDP and port 53. UDP traffic arriving at the internal IP address on port 53 is handled by the UDP forwarding rule.
  • The third load balancer has a forwarding rule with protocol L3_DEFAULT and port ALL. All other traffic that does not match the TCP or UDP forwarding rules is handled by the L3_DEFAULT forwarding rule.
  • All three load balancers share the same static internal IP address (internal-lb-ipv4) in their forwarding rules.
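Conceptually, traffic arriving at the shared IP address is handled by the forwarding rule whose protocol and port match, and the L3_DEFAULT rule acts as the catch-all for everything else. The following simplified Python model illustrates that selection behavior (this is an illustration of the documented behavior, not Google's implementation):

```python
# Simplified model of forwarding-rule selection on a shared internal VIP.
# Illustration only; not Google's actual data-plane implementation.
RULES = [
    {"name": "fr-ilb",     "protocol": "TCP", "ports": {80}},
    {"name": "fr-ilb-udp", "protocol": "UDP", "ports": {53}},
    # fr-ilb-l3-default (L3_DEFAULT, all ports) is the catch-all below.
]

def match_rule(protocol, port=None):
    # Prefer a protocol-specific rule whose port set contains the port.
    for rule in RULES:
        if rule["protocol"] == protocol and port in rule["ports"]:
            return rule["name"]
    # Everything else (other ports, other protocols such as ICMP, ESP, GRE)
    # falls through to the L3_DEFAULT forwarding rule.
    return "fr-ilb-l3-default"

print(match_rule("TCP", 80))    # → fr-ilb
print(match_rule("UDP", 53))    # → fr-ilb-udp
print(match_rule("TCP", 8080))  # → fr-ilb-l3-default
print(match_rule("ICMP"))       # → fr-ilb-l3-default
```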

Create the first load balancer

Create the first load balancer for TCP traffic on port 80.

gcloud

  1. Create the backend service for TCP traffic:

    gcloud compute backend-services create be-ilb \
        --load-balancing-scheme=internal \
        --protocol=TCP \
        --region=us-west1 \
        --health-checks=hc-http-80 \
        --health-checks-region=us-west1
  2. Add the instance group to the backend service:

    gcloud compute backend-services add-backend be-ilb \
        --region=us-west1 \
        --instance-group=ig-a \
        --instance-group-zone=us-west1-a
  3. Create a forwarding rule for the backend service. Use the static reserved internal IP address (internal-lb-ipv4) for the internal IP address.

    gcloud compute forwarding-rules create fr-ilb \
        --region=us-west1 \
        --load-balancing-scheme=internal \
        --network=lb-network \
        --subnet=lb-subnet \
        --address=internal-lb-ipv4 \
        --ip-protocol=TCP \
        --ports=80 \
        --backend-service=be-ilb \
        --backend-service-region=us-west1

API

  1. Create the regional backend service by making a POST request to the regionBackendServices.insert method. Replace PROJECT_ID with the ID of your Google Cloud project.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices

    {
      "name": "be-ilb",
      "backends": [
        {
          "group": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instanceGroups/ig-a",
          "balancingMode": "CONNECTION"
        }
      ],
      "healthChecks": [
        "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/healthChecks/hc-http-80"
      ],
      "loadBalancingScheme": "INTERNAL",
      "protocol": "TCP",
      "connectionDraining": {
        "drainingTimeoutSec": 0
      }
    }
  2. Create the forwarding rule by making a POST request to the forwardingRules.insert method:

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/forwardingRules

    {
      "name": "fr-ilb",
      "IPAddress": "internal-lb-ipv4",
      "IPProtocol": "TCP",
      "ports": [
        "80"
      ],
      "loadBalancingScheme": "INTERNAL",
      "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
      "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
      "backendService": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices/be-ilb",
      "networkTier": "PREMIUM"
    }

Create the second load balancer

Create the second load balancer for UDP traffic on port 53.

gcloud

  1. Create the backend service with the protocol set to UDP:

    gcloud compute backend-services create be-ilb-udp \
        --load-balancing-scheme=internal \
        --protocol=UDP \
        --region=us-west1 \
        --health-checks=hc-http-80 \
        --health-checks-region=us-west1
  2. Add the instance group to the backend service:

    gcloud compute backend-services add-backend be-ilb-udp \
        --region=us-west1 \
        --instance-group=ig-a \
        --instance-group-zone=us-west1-a
  3. Create a forwarding rule for the backend service. Use the static reserved internal IP address (internal-lb-ipv4) for the internal IP address.

    gcloud compute forwarding-rules create fr-ilb-udp \
        --region=us-west1 \
        --load-balancing-scheme=internal \
        --network=lb-network \
        --subnet=lb-subnet \
        --address=internal-lb-ipv4 \
        --ip-protocol=UDP \
        --ports=53 \
        --backend-service=be-ilb-udp \
        --backend-service-region=us-west1

API

  1. Create the regional backend service by making a POST request to the regionBackendServices.insert method. Replace PROJECT_ID with the ID of your Google Cloud project.

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices

    {
      "name": "be-ilb-udp",
      "backends": [
        {
          "group": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instanceGroups/ig-a",
          "balancingMode": "CONNECTION"
        }
      ],
      "healthChecks": [
        "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/healthChecks/hc-http-80"
      ],
      "loadBalancingScheme": "INTERNAL",
      "protocol": "UDP",
      "connectionDraining": {
        "drainingTimeoutSec": 0
      }
    }
  2. Create the forwarding rule by making a POST request to the forwardingRules.insert method:

    POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/forwardingRules

    {
      "name": "fr-ilb-udp",
      "IPAddress": "internal-lb-ipv4",
      "IPProtocol": "UDP",
      "ports": [
        "53"
      ],
      "loadBalancingScheme": "INTERNAL",
      "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
      "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
      "backendService": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices/be-ilb-udp",
      "networkTier": "PREMIUM"
    }

Create the third load balancer

Create the forwarding rule of the third load balancer to use the static reserved internal IP address.

gcloud

Create the forwarding rule with the protocol set to L3_DEFAULT to handle all other supported IPv4 protocol traffic. Use the static reserved internal IP address (internal-lb-ipv4) as the internal IP address.

gcloud compute forwarding-rules create fr-ilb-l3-default \
    --region=us-west1 \
    --load-balancing-scheme=internal \
    --network=lb-network \
    --subnet=lb-subnet \
    --address=internal-lb-ipv4 \
    --ip-protocol=L3_DEFAULT \
    --ports=ALL \
    --backend-service=be-ilb-l3-default \
    --backend-service-region=us-west1

API

Create the forwarding rule by making a POST request to the forwardingRules.insert method. Replace PROJECT_ID with the ID of your Google Cloud project.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/forwardingRules

{
  "name": "fr-ilb-l3-default",
  "IPAddress": "internal-lb-ipv4",
  "IPProtocol": "L3_DEFAULT",
  "allPorts": true,
  "loadBalancingScheme": "INTERNAL",
  "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "backendService": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices/be-ilb-l3-default",
  "networkTier": "PREMIUM"
}

Test your load balancer

To test your load balancer, follow the steps in the previous section.

What's next


Last updated 2025-12-15 UTC.