Convert proxy Network Load Balancer to IPv6

This document shows you how to convert proxy Network Load Balancer resources and backends from IPv4 only (single-stack) to IPv4 and IPv6 (dual-stack). The main advantage of using IPv6 is that a much larger pool of IP addresses can be allocated to your load balancer. You can configure the load balancer to terminate inbound IPv6 traffic and send this traffic over an IPv4 or IPv6 connection to your backends, based on your preference. For more information, see IPv6 for Application Load Balancers and proxy Network Load Balancers.

In this document, IPv4 only (single-stack) refers to the resources that use only IPv4 addresses, and IPv4 and IPv6 (dual-stack) refers to the resources that use both IPv4 and IPv6 addresses.

The instructions in this document apply to both TCP and SSL proxy Network Load Balancers.

Before you begin

Note the following conditions before you begin the conversion process:

  • You must be using one of the following types of proxy Network Load Balancers:

    • Global external proxy Network Load Balancer
    • Regional external proxy Network Load Balancer
    • Cross-region internal proxy Network Load Balancer
    • Regional internal proxy Network Load Balancer

    Classic proxy Network Load Balancers don't support dual-stack backends or subnets. For more information about IPv6 support, see IPv6 for Application Load Balancers and proxy Network Load Balancers.

  • Your load balancer is using either VM instance group backends or zonal network endpoint groups (NEGs) with GCE_VM_IP_PORT endpoints. Other backend types don't have dual-stack support.

Additionally, the conversion process differs based on the type of load balancer.

  • For global external proxy Network Load Balancers, you convert the backends to dual-stack and you create an IPv6 forwarding rule that can handle incoming IPv6 traffic.

  • For cross-region internal proxy Network Load Balancers, regional external proxy Network Load Balancers, and regional internal proxy Network Load Balancers, you only convert the backends to dual-stack. IPv6 forwarding rules aren't supported for these load balancers.

For information about how to set up global external proxy Network Load Balancers, see the setup documentation for that load balancer type.

Identify the resources to convert

Note the names of the resources that your load balancer is associated with. You need to provide these names later.

  1. To list all the subnets, use the gcloud compute networks subnets list command:

    gcloud compute networks subnets list

    Note the name of the subnet with the IPv4-only stack type to convert to dual-stack. This name is referred to later as SUBNET. The VPC network is referred to later as NETWORK.

  2. To list all the backend services, use the gcloud compute backend-services list command:

    gcloud compute backend-services list

    Note the name of the backend service to convert to dual-stack. This name is referred to later as BACKEND_SERVICE.

  3. If you already have a load balancer, to view the IP stack type of your backends, use the gcloud compute instances list command:

    gcloud compute instances list \
        --format="table(
          name,
          zone.basename(),
          networkInterfaces[].stackType.notnull().list(),
          networkInterfaces[].ipv6AccessConfigs[0].externalIpv6.notnull().list():label=EXTERNAL_IPV6,
          networkInterfaces[].ipv6Address.notnull().list():label=INTERNAL_IPV6)"
  4. To list all the VM instances and instance templates, use the gcloud compute instances list command and the gcloud compute instance-templates list command:

    gcloud compute instances list
    gcloud compute instance-templates list

    Note the names of the instances and instance templates to convert to dual-stack. These names are referred to later as VM_INSTANCE and INSTANCE_TEMPLATES.

  5. To list all the instance groups, use the gcloud compute instance-groups list command:

    gcloud compute instance-groups list

    Note the name of the instance group to convert to dual-stack. This name is referred to later as INSTANCE_GROUP.

  6. To list all the zonal NEGs, use the gcloud compute network-endpoint-groups list command:

    gcloud compute network-endpoint-groups list

    Note the name of the network endpoint group to convert to dual-stack. This name is referred to later as ZONAL_NEG.

  7. To list all the target SSL proxies, use the gcloud compute target-ssl-proxies list command:

    gcloud compute target-ssl-proxies list

    Note the name of the target proxy associated with your load balancer. This name is referred to later as TARGET_PROXY.

  8. To list all the target TCP proxies, use the gcloud compute target-tcp-proxies list command:

    gcloud compute target-tcp-proxies list

    Note the name of the target proxy associated with your load balancer. This name is referred to later as TARGET_PROXY.

Convert from single-stack to dual-stack backends

This section shows you how to convert your load balancer resources and backends using IPv4 only (single-stack) addresses to IPv4 and IPv6 (dual-stack) addresses.

Update the subnet

Dual-stack subnets are supported on custom mode VPC networks only. Dual-stack subnets are not supported on auto mode VPC networks or legacy networks. Though auto mode networks can be useful for early exploration, custom mode VPC networks are better suited for most production environments. We recommend that you use VPC networks in custom mode.

To update the VPC network to the dual-stack setting, follow these steps (a gcloud sketch of these changes follows the list):

  1. If you are using an auto mode VPC network, you must first convert the auto mode VPC network to custom mode.

    If you are using the default network, you must convert it to a custom mode VPC network.

  2. To enable IPv6, see Change a subnet's stack type to dual stack.

    Make sure that the IPv6 access type of the subnet is set to External.

    Note: Once you've updated a subnet's stack type to dual-stack, you cannot change it back to IPv4-only.
  3. Optional: If you want to configure internal IPv6 address ranges on subnets in this network, complete these steps:

    1. For VPC network ULA internal IPv6 range, select Enabled.
    2. For Allocate internal IPv6 range, select Automatically or Manually.

      If you select Manually, enter a /48 range from within the fd20::/20 range. If the range is in use, you are prompted to provide a different range.
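
The subnet changes in the previous steps can also be made with the gcloud CLI. The following is a minimal sketch, not a replacement for the linked procedures; it assumes a custom mode network and reuses the NETWORK, SUBNET, and REGION names noted earlier.

    # Convert an auto mode VPC network to custom mode (skip if the network is
    # already in custom mode).
    gcloud compute networks update NETWORK \
        --switch-to-custom-subnet-mode

    # Change the subnet's stack type to dual stack with external IPv6 addresses.
    gcloud compute networks subnets update SUBNET \
        --region=REGION \
        --stack-type=IPV4_IPV6 \
        --ipv6-access-type=EXTERNAL

    # Optional: enable a ULA internal IPv6 range on the network. This is needed
    # later only if you configure internal IPv6 addresses, such as a dual-stack
    # proxy-only subnet.
    gcloud compute networks update NETWORK \
        --enable-ula-internal-ipv6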

Update the proxy-only subnet

If you are using an Envoy-based load balancer, we recommend that you change the proxy-only subnet stack type to dual stack. For information about load balancers that support proxy-only subnets, see Supported load balancers.

Note: Before you update the stack type of a proxy-only subnet to dual-stack, you need to assign an internal IPv6 range on the VPC network.

You can't update the stack type of a proxy-only subnet (purpose=REGIONAL_MANAGED_PROXY) in the same way that you would for a regular subnet (with the subnets update command). Instead, you must create a backup proxy-only subnet with a dual-stack stack type and then promote it to the active role. This is because only one proxy-only subnet can be active per region, per VPC network.

After assigning an internal IPv6 range on the VPC network, do the following to change the proxy-only subnet's stack type to dual stack.

Console

  1. Create a backup proxy-only subnet in the same region as the active proxy-only subnet, specifying the IP stack type as dual-stack.

    1. In the Google Cloud console, go to the VPC networks page.
      Go to the VPC networks page
    2. Click the name of the VPC network that you want to add a proxy-only subnet to.
    3. Click Add subnet.
    4. Enter a name.
    5. Select a region.
    6. For Purpose, select Regional Managed Proxy.
    7. For Role, select Backup.
    8. For IP stack type, select IPv4 and IPv6 (dual-stack).
    9. Enter an IP address range.
    10. Click Add.
  2. Create or modify ingress allow firewall rules that apply to your backend VMs or endpoints so that they include the primary IP address range of the backup proxy-only subnet.

  3. Promote the backup proxy-only subnet to the active role. This action also demotes the previously active proxy-only subnet to the backup role:

    1. In the Google Cloud console, go to the VPC networks page.
      Go to the VPC networks page
    2. Click the name of the VPC network that you want to modify.
    3. Under Reserved proxy-only subnets for load balancing, locate the backup subnet created in the previous step.
    4. Click Activate.
    5. Specify an optional Drain timeout.
    6. Click Activate the subnet.
  4. After the connection draining timeout, or after you're confident that connections to your backend VMs or endpoints aren't coming from proxies in the previously active (now backup) proxy-only subnet, you can do the following:

    • Modify ingress allow firewall rules that apply to your backend VMs or endpoints so they don't include the primary IP address range of the previously active (now backup) proxy-only subnet.
    • Delete the previously active (now backup) proxy-only subnet to release the IP addresses that the subnet used for its primary IP address range.

gcloud

The following steps assume you already have an existing active proxy-only subnet.

  1. Create a backup proxy-only subnet in the same region, specifying a stack type of dual-stack (--stack-type=IPV4_IPV6), using the gcloud compute networks subnets create command. This subnet is assigned as a backup with the --role=BACKUP flag.

    gcloud compute networks subnets create BACKUP_PROXY_ONLY_SUBNET_NAME \
        --purpose=REGIONAL_MANAGED_PROXY \
        --role=BACKUP \
        --region=REGION \
        --network=VPC_NETWORK_NAME \
        --range=BACKUP_PROXY_ONLY_SUBNET_RANGE \
        --stack-type=IPV4_IPV6

    Replace the following:

    • BACKUP_PROXY_ONLY_SUBNET_NAME: the name of the newly created backup proxy-only subnet
    • REGION: the region of the newly created backup proxy-only subnet. This should be the same region as the current active proxy-only subnet.
    • VPC_NETWORK_NAME: the name of the VPC network of the newly created backup proxy-only subnet. This should be the same network as the current active proxy-only subnet.
    • BACKUP_PROXY_ONLY_SUBNET_RANGE: the CIDR range of the newly created backup proxy-only subnet
  2. Create or modify ingress allow firewall rules that apply to your backend VMs or endpoints so that they now include the primary IP address range of the backup proxy-only subnet. The firewall rule should already be accepting connections from the active subnet.

    gcloud compute firewall-rules update PROXY_ONLY_SUBNET_FIREWALL_RULE \
        --source-ranges=ACTIVE_PROXY_ONLY_SUBNET_RANGE,BACKUP_PROXY_ONLY_SUBNET_RANGE

    Replace the following:

    • PROXY_ONLY_SUBNET_FIREWALL_RULE: the name of the firewall rule that allows traffic from the proxy-only subnet to reach your backend instances or endpoints
    • ACTIVE_PROXY_ONLY_SUBNET_RANGE: the CIDR range of the current active proxy-only subnet
    • BACKUP_PROXY_ONLY_SUBNET_RANGE: the CIDR range of the backup proxy-only subnet
  3. Update the new subnet, setting it to be the ACTIVE proxy-only subnet in the region, and wait for the old subnet to drain. This also demotes the previously active proxy-only subnet to the backup role.

    To drain an IP address range immediately, set the --drain-timeout to 0s. This promptly ends all connections to proxies that have assigned addresses in the subnet that is being drained.

    gcloud compute networks subnets update BACKUP_PROXY_ONLY_SUBNET_NAME \
        --region=REGION \
        --role=ACTIVE \
        --drain-timeout=CONNECTION_DRAINING_TIMEOUT

    Replace the following:

    • BACKUP_PROXY_ONLY_SUBNET_NAME: the name of the newly created backup proxy-only subnet
    • REGION: the region of the newly created backup proxy-only subnet. This should be the same region as the current active proxy-only subnet.
    • CONNECTION_DRAINING_TIMEOUT: the amount of time, in seconds, that Google Cloud uses to migrate existing connections away from proxies in the previously active proxy-only subnet.
  4. Monitor the status of the drain by using a list or describe command. The status of the subnet is DRAINING while it is being drained.

    gcloud compute networks subnets list

    Wait for draining to complete. When the old proxy-only subnet is drained, the status of the subnet is READY.

  5. Update your proxy-only subnet firewall rule to allow connections only from the new subnet.

    gcloud compute firewall-rules update PROXY_ONLY_SUBNET_FIREWALL_RULE \
        --source-ranges=BACKUP_PROXY_ONLY_SUBNET_RANGE
  6. After you're confident that connections to your backend VMs or endpoints aren't coming from proxies in the previously active (now backup) proxy-only subnet, you can delete the old subnet.

    gcloud compute networks subnets delete ACTIVE_PROXY_ONLY_SUBNET_NAME \
        --region=REGION

Update the VM instances or templates

You can configure IPv6 addresses on a VM instance if the subnet that the VM is connected to has an IPv6 range configured. Only the following backends can support IPv6 addresses:

  • Instance group backends: One or more managed, unmanaged, or a combination of managed and unmanaged instance group backends.
  • Zonal NEGs: One or more GCE_VM_IP_PORT type zonal NEGs.

Update VM instances

You cannot edit VM instances that are part of a managed or an unmanaged instance group. To update the VM instances to dual stack, follow these steps (a gcloud sketch follows the list):

  1. Delete specific instances from a group
  2. Create a dual-stack VM
  3. Create instances with specific names in MIGs
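
As a rough gcloud illustration of those three steps, the following sketch removes an instance from its group, creates a dual-stack replacement, and recreates an instance inside a MIG. MIG_NAME is a hypothetical placeholder, and the flags shown are a minimal subset; copy the remaining settings from your existing VMs.

    # Remove the IPv4-only instance from its managed instance group
    # (MIG_NAME is a hypothetical placeholder for your group).
    gcloud compute instance-groups managed delete-instances MIG_NAME \
        --instances=VM_INSTANCE \
        --zone=GCP_NEG_ZONE

    # For an unmanaged group, create a dual-stack replacement VM and add it back
    # to the group.
    gcloud compute instances create VM_INSTANCE \
        --zone=GCP_NEG_ZONE \
        --subnet=SUBNET \
        --stack-type=IPV4_IPV6

    # For a MIG, recreate the instance from the group's instance template instead
    # (update the template to dual stack first).
    gcloud compute instance-groups managed create-instance MIG_NAME \
        --instance=VM_INSTANCE \
        --zone=GCP_NEG_ZONE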

Update VM instance templates

You can't update an existing instance template. If you need to make changes, you can create another template with similar properties. To update the VM instance templates to dual stack, follow these steps; a gcloud sketch follows the console procedure:

Console

  1. In the Google Cloud console, go to the Instance templates page.

    Go to Instance templates

    1. Click the instance template that you want to copy and update.
    2. Click Create similar.
    3. Expand the Advanced options section.
    4. For Network tags, enter allow-health-check-ipv6.
    5. In the Network interfaces section, click Add a network interface.
    6. In the Network list, select the custom mode VPC network.
    7. In the Subnetwork list, select SUBNET.
    8. For IP stack type, select IPv4 and IPv6 (dual-stack).
    9. Click Create.
  2. Start a basic rolling update on the managed instance group (MIG) associated with the load balancer.
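
If you prefer the CLI, the same flow can be approximated with gcloud: create a similar template that uses the dual-stack subnet, then roll it out to the MIG. This is a minimal sketch; NEW_INSTANCE_TEMPLATE and MIG_NAME are hypothetical placeholders, and any other settings should be copied from your existing template.

    # Create a dual-stack instance template (copy other settings, such as machine
    # type and startup script, from the existing template).
    gcloud compute instance-templates create NEW_INSTANCE_TEMPLATE \
        --network=NETWORK \
        --subnet=SUBNET \
        --region=REGION \
        --stack-type=IPV4_IPV6 \
        --tags=allow-health-check-ipv6 \
        --image-family=debian-12 \
        --image-project=debian-cloud

    # Roll the new template out to the managed instance group.
    gcloud compute instance-groups managed rolling-action start-update MIG_NAME \
        --version=template=NEW_INSTANCE_TEMPLATE \
        --zone=GCP_NEG_ZONE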

Update the zonal NEG

Zonal NEG endpoints cannot be edited. You must delete the IPv4 endpoints and create new dual-stack endpoints with both IPv4 and IPv6 addresses.

To set up a zonal NEG (with GCE_VM_IP_PORT type endpoints) in the REGION_A region, first create the VMs in the GCP_NEG_ZONE zone. Then add the VM network endpoints to the zonal NEG.

Create VMs

Console

  1. In the Google Cloud console, go to the VM instances page.

    Go to VM instances

  2. Click Create instance.

  3. Set the Name to vm-a1.

  4. For the Region, choose REGION_A, and choose any value for the Zone field. This zone is referred to as GCP_NEG_ZONE in this procedure.

  5. In the Boot disk section, ensure that Debian GNU/Linux 12 (bookworm) is selected for the boot disk options. Click Choose to change the image if necessary.

  6. Expand the Advanced options section and make the following changes:

    • Expand the Networking section.
    • In the Network tags field, enter allow-health-check.
    • In the Network interfaces section, make the following changes:
      • Network: NETWORK
      • Subnet: SUBNET
      • IP stack type: IPv4 and IPv6 (dual-stack)
    • Click Done.
    • Click Management. In the Startup script field, copy and paste the following script contents.

      #! /bin/bash
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://metadata.google.internal/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | \
      tee /var/www/html/index.html
      systemctl restart apache2
  7. Click Create.

  8. Repeat the previous steps to create a second VM, using the following name and zone combination:

    • Name: vm-a2, zone: GCP_NEG_ZONE

gcloud

Create the VMs by running the following command two times, using these combinations for the name of the VM and its zone. The script contents are identical for both VMs.

  • VM_NAME of vm-a1 and any GCP_NEG_ZONE zone of your choice.
  • VM_NAME of vm-a2 and the same GCP_NEG_ZONE zone.

    gcloud compute instances create VM_NAME \
        --zone=GCP_NEG_ZONE \
        --stack-type=IPV4_IPV6 \
        --image-family=debian-12 \
        --image-project=debian-cloud \
        --tags=allow-health-check \
        --subnet=SUBNET \
        --metadata=startup-script='#! /bin/bash
          apt-get update
          apt-get install apache2 -y
          a2ensite default-ssl
          a2enmod ssl
          vm_hostname="$(curl -H "Metadata-Flavor:Google" \
          http://metadata.google.internal/computeMetadata/v1/instance/name)"
          echo "Page served from: $vm_hostname" | \
          tee /var/www/html/index.html
          systemctl restart apache2'

Add endpoints to the zonal NEG

Console

To add endpoints to the zonal NEG:

  1. In the Google Cloud console, go to the Network endpoint groups page.

    Go to Network endpoint groups

  2. In the Name list, click the name of the network endpoint group (ZONAL_NEG). You see the Network endpoint group details page.

  3. In the Network endpoints in this group section, select the previously created NEG endpoint. Click Remove endpoint.

  4. In the Network endpoints in this group section, click Add network endpoint.

  5. Select the VM instance.

  6. In the Network interface section, the name, zone, and subnet of the VM are displayed.

  7. In the IPv4 address field, enter the IPv4 address of the new network endpoint.

  8. In the IPv6 address field, enter the IPv6 address of the new network endpoint. Note: A backend service with multiple endpoints must have unique IPv6 addresses. The endpoints can be in different subnets, but the same IPv6 address cannot be used for multiple endpoints.

  9. Select the Port type.

    1. If you select Default, the endpoint uses the default port 80 for all endpoints in the network endpoint group. This is sufficient for our example because the Apache server is serving requests at port 80.
    2. If you select Custom, enter the Port number for the endpoint to use.
  10. To add more endpoints, click Add network endpoint and repeat the previous steps.

  11. After you add all the endpoints, click Create.

gcloud

  1. Add endpoints (GCE_VM_IP_PORT endpoints) to ZONAL_NEG. Note: A backend service with multiple endpoints must have unique IPv6 addresses. The endpoints can be in different subnets, but the same IPv6 address cannot be used for multiple endpoints.

    gcloud compute network-endpoint-groups update ZONAL_NEG \
        --zone=GCP_NEG_ZONE \
        --add-endpoint='instance=vm-a1,ip=IPv4_ADDRESS,ipv6=IPv6_ADDRESS,port=80' \
        --add-endpoint='instance=vm-a2,ip=IPv4_ADDRESS,ipv6=IPv6_ADDRESS,port=80'

Replace the following:

  • IPv4_ADDRESS: the IPv4 address of the network endpoint. The IPv4 address must belong to a VM in Compute Engine (either the primary IP address or part of an aliased IP range). If the IP address is not specified, the primary IP address for the VM instance in the network that the network endpoint group belongs to is used.

  • IPv6_ADDRESS: the IPv6 address of the network endpoint. The IPv6 address must belong to a VM instance in the network that the network endpoint group belongs to (external IPv6 address).
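
The console steps above remove the old IPv4-only endpoints before adding the dual-stack ones. With gcloud, the equivalent is the --remove-endpoint flag. The following sketch assumes the endpoints were originally added with only an IPv4 address on port 80; list the current endpoints first and adjust the values to match.

    # List the endpoints currently in the NEG, then remove the IPv4-only ones.
    gcloud compute network-endpoint-groups list-network-endpoints ZONAL_NEG \
        --zone=GCP_NEG_ZONE

    gcloud compute network-endpoint-groups update ZONAL_NEG \
        --zone=GCP_NEG_ZONE \
        --remove-endpoint='instance=vm-a1,ip=IPv4_ADDRESS,port=80' \
        --remove-endpoint='instance=vm-a2,ip=IPv4_ADDRESS,port=80'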

Create a firewall rule for IPv6 health check probes

You must create a firewall rule to allow health checks from the IP ranges of Google Cloud probe systems. For more information, see Probe IP ranges.

Ensure that the ingress rule is applicable to the instances being load balanced and that it allows traffic from the Google Cloud health checking systems. This example uses the target tag allow-health-check-ipv6 to identify the VM instances to which it applies.

Without this firewall rule, the default deny ingress rule blocks incoming IPv6 traffic to the backend instances.

Console

  1. In the Google Cloud console, go to the Firewall policies page.

    Go to Firewall policies

  2. To allow IPv6 subnet traffic, click Create firewall rule and enter the following information:

    • Name: fw-allow-lb-access-ipv6
    • Network: NETWORK
    • Priority: 1000
    • Direction of traffic: ingress
    • Targets: Specified target tags
    • Target tags: allow-health-check-ipv6
    • Source filter: IPv6 ranges
    • Source IPv6 ranges:

      • For global external Application Load Balancer and global external proxy Network Load Balancer, enter 2600:2d00:1:b029::/64,2600:2d00:1:1::/64

      • For cross-region internal Application Load Balancer, regional external Application Load Balancer, regional internal Application Load Balancer, cross-region internal proxy Network Load Balancer, regional external proxy Network Load Balancer, and regional internal proxy Network Load Balancer, enter 2600:2d00:1:b029::/64

    • Protocols and ports: Allow all

  3. Click Create.

gcloud

  1. Create the fw-allow-lb-access-ipv6 firewall rule to allow communication with the subnet.

    For global external Application Load Balancer and global external proxy Network Load Balancer, use the following command:

    gcloud compute firewall-rules create fw-allow-lb-access-ipv6 \
        --network=NETWORK \
        --action=allow \
        --direction=ingress \
        --target-tags=allow-health-check-ipv6 \
        --source-ranges=2600:2d00:1:b029::/64,2600:2d00:1:1::/64 \
        --rules=all

    For cross-region internal Application Load Balancer, regional external Application Load Balancer, regional internal Application Load Balancer, cross-region internal proxy Network Load Balancer, regional external proxy Network Load Balancer, and regional internal proxy Network Load Balancer, use the following command:

    gcloud compute firewall-rules create fw-allow-lb-access-ipv6 \
        --network=NETWORK \
        --action=allow \
        --direction=ingress \
        --target-tags=allow-health-check-ipv6 \
        --source-ranges=2600:2d00:1:b029::/64 \
        --rules=all

Create a firewall rule for the proxy-only subnet

If you are using a regional external proxy Network Load Balancer or an internal proxy Network Load Balancer, you must update the ingress firewall rule fw-allow-lb-access-ipv6 to allow traffic from the proxy-only subnet to the backends.

To get the IPv6 address range of the proxy-only subnet, run the following command:

gcloud compute networks subnets describe PROXY_ONLY_SUBNET \
    --region=REGION \
    --format="value(internalIpv6Prefix)"

Note the internal IPv6 address range; this range is later referred to as IPV6_PROXY_ONLY_SUBNET_RANGE.

To update the firewall rule fw-allow-lb-access-ipv6 for the proxy-only subnet, do the following:

Console

  1. In the Google Cloud console, go to the Firewall policies page.

    Go to Firewall policies

  2. In the VPC firewall rules panel, click fw-allow-lb-access-ipv6.

    • Source IPv6 ranges: 2600:2d00:1:b029::/64,IPV6_PROXY_ONLY_SUBNET_RANGE
  3. Click Save.

gcloud

  1. Update the fw-allow-lb-access-ipv6 firewall rule to allow communication with the proxy-only subnet:

    gcloud compute firewall-rules update fw-allow-lb-access-ipv6 \
        --source-ranges=2600:2d00:1:b029::/64,IPV6_PROXY_ONLY_SUBNET_RANGE

Update the backend service and create an IPv6 forwarding rule

This section provides instructions to update the backend service with dual-stack backends and create an IPv6 forwarding rule.

Note that the IPv6 forwarding rule can be created only for global external proxy Network Load Balancers. IPv6 forwarding rules aren't supported for cross-region internal proxy Network Load Balancers, regional external proxy Network Load Balancers, and regional internal proxy Network Load Balancers.

Console

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. Click the name of the load balancer.

  3. Click Edit.

Configure the backend service for IPv6

  1. Click Backend configuration.
  2. For Backend type, select Zonal network endpoint group.
  3. In the IP address selection policy list, select Prefer IPv6.
  4. In the Protocol field:
    • For TCP proxy, select TCP.
    • For SSL proxy, select SSL.
  5. For zonal NEGs:
    1. In the Backends section, click Add a backend.
    2. In the New Backend panel, do the following:
      • In the Network endpoint group list, select ZONAL_NEG.
      • In the Maximum connections field, enter 10.
  6. For instance groups: if you have already updated the VM instances or templates to dual stack, no further changes are needed.
  7. Click Done.
  8. In the Health check list, select an HTTP health check.

Configure the IPv6 forwarding rule

IPv6 forwarding rules aren't supported for cross-region internal proxy Network Load Balancers, regional external proxy Network Load Balancers, and regional internal proxy Network Load Balancers.

  1. Click Frontend configuration.
  2. Click Add frontend IP and port.
  3. In the Name field, enter a name for the forwarding rule.
  4. In the Protocol field:
    • For TCP proxy, select TCP.
    • For SSL proxy, select SSL.
  5. Set IP version to IPv6.
  6. For SSL proxy, in the Certificates list, select a certificate.
  7. Click Done.
  8. Click Update.

gcloud

  1. Add the dual-stack zonal NEGs as the backend to the backend service.

    global

    For the global external proxy Network Load Balancer, use the command:

     gcloud compute backend-services add-backend BACKEND_SERVICE \
         --network-endpoint-group=ZONAL_NEG \
         --network-endpoint-group-zone=GCP_NEG_ZONE \
         --balancing-mode=CONNECTION \
         --max-connections-per-endpoint=10 \
         --global

    cross-region

    For the cross-region internal proxy Network Load Balancer, use the command:

     gcloud compute backend-services add-backend BACKEND_SERVICE \
         --network-endpoint-group=ZONAL_NEG \
         --network-endpoint-group-zone=GCP_NEG_ZONE \
         --balancing-mode=CONNECTION \
         --max-connections-per-endpoint=10 \
         --global

    regional

    For the regional external proxy Network Load Balancer and the regional internal proxy Network Load Balancer, use the command:

     gcloud compute backend-services add-backend BACKEND_SERVICE \
         --network-endpoint-group=ZONAL_NEG \
         --network-endpoint-group-zone=GCP_NEG_ZONE \
         --balancing-mode=CONNECTION \
         --max-connections-per-endpoint=10 \
         --region=REGION
  2. Add the dual-stack instance group as the backend to the backend service. Because you have already updated the VM instances or templates to dual stack, no more action is needed.

  3. For global external proxy Network Load Balancers only. To create the IPv6 forwarding rule for your global external proxy Network Load Balancer with a target SSL proxy, use the following command:

    gcloud compute forwarding-rules create FORWARDING_RULE_IPV6 \
        --load-balancing-scheme=EXTERNAL_MANAGED \
        --network-tier=PREMIUM \
        --address=lb-ipv6-1 \
        --global \
        --target-ssl-proxy=TARGET_PROXY \
        --ports=80

    To create the IPv6 forwarding rule for your global external proxy Network Load Balancer with a target TCP proxy, use the following command:

    gcloud compute forwarding-rules create FORWARDING_RULE_IPV6 \
        --load-balancing-scheme=EXTERNAL_MANAGED \
        --network-tier=PREMIUM \
        --ip-version=IPV6 \
        --global \
        --target-tcp-proxy=TARGET_PROXY \
        --ports=80

Configure the IP address selection policy

This step is optional. After you have converted your resources and backends to dual-stack, you can use the IP address selection policy to specify the traffic type that is sent from the backend service to your backends.

Replace IP_ADDRESS_SELECTION_POLICY with any of the following values:

  • Only IPv4: Only send IPv4 traffic to the backends of the backend service, regardless of traffic from the client to the GFE. Only IPv4 health checks are used to check the health of the backends.

  • Prefer IPv6: Prioritize the backend's IPv6 connection over the IPv4 connection (provided there is a healthy backend with IPv6 addresses).

    The health checks periodically monitor the backends' IPv6 and IPv4 connections. The GFE first attempts the IPv6 connection; if the IPv6 connection is broken or slow, the GFE uses happy eyeballs to fall back and connect to IPv4.

    Even if one of the IPv6 or IPv4 connections is unhealthy, the backend is still treated as healthy, and both connections can be tried by the GFE, with happy eyeballs ultimately selecting which one to use.

  • Only IPv6: Only send IPv6 traffic to the backends of the backend service, regardless of traffic from the client to the proxy. Only IPv6 health checks are used to check the health of the backends.

There is no validation to check if the backend traffic type matches the IP address selection policy. For example, if you have IPv4-only backends and select Only IPv6 as the IP address selection policy, the configuration results in unhealthy backends because traffic fails to reach those backends, and the HTTP 503 response code is returned to the clients.

Console

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. Click the name of the load balancer.

  3. Click Edit.

  4. Click Backend configuration.

  5. In the Backend service field, select BACKEND_SERVICE.

  6. The Backend type must be Zonal network endpoint group or Instance group.

  7. In the IP address selection policy list, select IP_ADDRESS_SELECTION_POLICY.

  8. Click Done.

gcloud

Update the IP address selection policy for the backend service:

global

For the global external proxy Network Load Balancer, use the command:

gcloud compute backend-services update BACKEND_SERVICE_IPV6 \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --protocol=SSL | TCP \
    --ip-address-selection-policy=IP_ADDRESS_SELECTION_POLICY \
    --global

cross-region

For the cross-region internal proxy Network Load Balancer, use the command:

gcloud compute backend-services update BACKEND_SERVICE_IPV6 \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --protocol=TCP \
    --ip-address-selection-policy=IP_ADDRESS_SELECTION_POLICY \
    --global

regional

For the regional external proxy Network Load Balancer, use the command:

gcloud compute backend-services update BACKEND_SERVICE_IPV6 \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --protocol=TCP \
    --ip-address-selection-policy=IP_ADDRESS_SELECTION_POLICY \
    --region=REGION

For the regional internal proxy Network Load Balancer, use the command:

gcloud compute backend-services update BACKEND_SERVICE_IPV6 \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --protocol=TCP \
    --ip-address-selection-policy=IP_ADDRESS_SELECTION_POLICY \
    --region=REGION

Test your load balancer

You must validate that all required resources are updated to dual stack. After you update all the resources, traffic automatically flows to the backends. You can check the logs to verify that the conversion is complete.

Test the load balancer to confirm that the conversion is successful and the incoming traffic is reaching the backends as expected.

Look up the load balancer IP addresses

Console

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. Click the name of the load balancer.

  3. In the Frontend section, two load balancer IP addresses are displayed. In this procedure, the IPv4 address is referred to as IP_ADDRESS_IPV4 and the IPv6 address is referred to as IP_ADDRESS_IPV6.

  4. In the Backends section, when the IP address selection policy is Prefer IPv6, two health check statuses are displayed for the backends.
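
You can also list the frontend addresses with gcloud. This is a minimal sketch; the format expression is only an illustration.

    # List forwarding rules with their IP addresses; the load balancer's IPv4 and
    # IPv6 frontends appear as separate rules pointing at the same target proxy.
    gcloud compute forwarding-rules list \
        --format="table(name, IPAddress, target.basename())"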

Send traffic to the load balancer

Note: It might take a few minutes for the load balancer configuration to propagate globally after you first deploy it.

In this example, requests from the curl command are distributed randomly to the backends.

For external load balancers

  1. Repeat the following commands a few times until you see all the backend VMs responding:

    curl -m1 IP_ADDRESS_IPV4:PORT
    curl -m1 [IP_ADDRESS_IPV6]:PORT

    For example, if the IPv6 address is [fd20:1db0:b882:802:0:46:0:0]:80, the command looks similar to this:

    curl -m1 [fd20:1db0:b882:802:0:46:0:0]:80

For internal load balancers

  1. Create a test client VM in the same VPC network and region as the load balancer. It doesn't need to be in the same subnet or zone.

    gcloud compute instances create client-vm \
        --zone=ZONE \
        --image-family=debian-12 \
        --image-project=debian-cloud \
        --tags=allow-ssh \
        --subnet=SUBNET
  2. Use SSH to connect to the client instance.

    gcloud compute ssh client-vm \
        --zone=ZONE
  3. Repeat the following commands a few times until you see all the backend VMs responding:

    curl -m1 IP_ADDRESS_IPV4:PORT
    curl -m1 [IP_ADDRESS_IPV6]:PORT

    For example, if the IPv6 address is [fd20:1db0:b882:802:0:46:0:0]:80, the command looks similar to this:

    curl -m1 [fd20:1db0:b882:802:0:46:0:0]:80

Check the logs

Every log entry captures the destination IPv4 or IPv6 address for the backend. Because dual-stack backends can receive traffic over either protocol, it is important to observe which IP address the backend actually used.

You can check that traffic is going to IPv6, or falling back to IPv4, by viewing the logs.

The logs contain the backend_ip address associated with the backend. By examining the logs and checking whether backend_ip is an IPv4 or IPv6 address, you can confirm which IP address is used.
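
For example, if your load balancer's log entries are in Cloud Logging, you can filter on that field with the gcloud CLI. This is a minimal sketch: the field path jsonPayload.backend_ip is an assumption and can differ by load balancer type, so inspect one entry with --format=json first and adjust the filter to match your log schema.

    # Read recent log entries that report a backend IP and print which address
    # each request was sent to (field path is an assumption; adjust as needed).
    gcloud logging read 'jsonPayload.backend_ip:*' \
        --freshness=1h \
        --limit=20 \
        --format="value(timestamp, jsonPayload.backend_ip)"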
