Set up an internal Application Load Balancer with Shared VPC
This document shows you two sample configurations for setting up an internal Application Load Balancer in a Shared VPC environment:
- The first example creates all the load balancer components and backends in one service project.
- The second example creates the load balancer's frontend components and URL map in one service project, while the load balancer's backend service and backends are created in a different service project. This type of deployment, where the URL map references a backend service in another project, is referred to as cross-project service referencing.
Both examples require the same upfront configuration to grant permissions and set up Shared VPC before you can start creating load balancers.
These are not the only Shared VPC configurations supported by internal Application Load Balancers. For other valid Shared VPC architectures, see Shared VPC architectures.
If you don't want to use a Shared VPC network, see Set up an internal Application Load Balancer.
Note: For cross-region internal Application Load Balancers, you can use all the Shared VPC examples provided in this document.
To configure cross-region internal Application Load Balancers, see Set up a cross-region internal Application Load Balancer with VM instance group backends.
Before you begin
- Read Shared VPC overview.
- Read Internal Application Load Balancer overview, including the Shared VPC architectures section.
Permissions required
Setting up a load balancer on a Shared VPC network requires some initial setup and provisioning by an administrator. After the initial setup, a service project owner can do one of the following:
- Deploy all the load balancer's components and its backends in a service project.
- Deploy the load balancer's backend components (backend service and backends) in service projects that can be referenced by a URL map in another service or host project.
This section summarizes the permissions required to follow this guide to set up a load balancer on a Shared VPC network.
Set up Shared VPC
The following roles are required for the following tasks:
- Perform one-off administrative tasks such as setting up the Shared VPC and enabling a host project.
- Perform administrative tasks that must be repeated every time you want to onboard a new service project. This includes attaching the service project, provisioning and configuring networking resources, and granting access to the service project administrator.
These tasks must be performed in the Shared VPC host project. We recommend that the Shared VPC Admin also be the owner of the Shared VPC host project. This automatically grants the Network Admin and Security Admin roles.
| Task | Required role |
|---|---|
| Set up Shared VPC, enable host project, and grant access to service project administrators | Shared VPC Admin |
| Create subnets in the Shared VPC host project and grant access to service project administrators | Network Admin |
| Add and remove firewall rules | Security Admin |
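For example, an organization administrator might grant the Shared VPC Admin role (roles/compute.xpnAdmin) at the organization level with a command like the following. This is a minimal sketch, not part of the original setup steps; the organization ID and email address are placeholders, and your organization might grant the role at a folder level instead.

gcloud organizations add-iam-policy-binding ORGANIZATION_ID \
    --member="user:shared-vpc-admin@example.com" \
    --role="roles/compute.xpnAdmin"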
After the subnets have been provisioned, the host project owner must grant the Network User role in the host project to anyone (typically service project administrators, developers, or service accounts) who needs to use these resources.
| Task | Required role |
|---|---|
| Use VPC networks and subnets belonging to the host project | Network User |
This role can be granted on the project level or for individual subnets. We recommend that you grant the role on individual subnets. Granting the role on the project provides access to all current and future subnets in the VPC network of the host project.
Deploy load balancer and backends
Service project administrators need the following roles in the service project to create load balancing resources and backends. These permissions are granted automatically to the service project owner or editor.
| Task | Required role |
|---|---|
| Create load balancer components | Network Admin |
| Create instances | Instance Admin |
| Create and modify SSL certificates | Security Admin |
Prerequisites
In this section, you configure the Shared VPC network, subnets, and firewall rules in the host project, and set up Shared VPC.
These steps do not need to be performed every time you want to create a new load balancer. However, you must ensure that you have access to the resources described here before you proceed to creating the load balancer.
Configure the network and subnets in the host project
You need a Shared VPC network with two subnets: one for the load balancer's frontend and backends, and one for the load balancer's proxies. This example uses the following network, region, and subnets:
- Network. The network is named lb-network.
- Subnet for the load balancer's frontend and backends. A subnet named lb-frontend-and-backend-subnet in the us-west1 region uses 10.1.2.0/24 for its primary IP range.
- Subnet for proxies. A subnet named proxy-only-subnet in the us-west1 region uses 10.129.0.0/23 for its primary IP range.
Configure the subnet for the load balancer's frontend and backends
This step does not need to be performed every time you want to create a new load balancer. You only need to ensure that the service project has access to a subnet in the Shared VPC network (in addition to the proxy-only subnet).
All the steps in this section must be performed in the host project.
Console
- In the Google Cloud console, go to the VPC networks page.
- Click Create VPC network.
- For Name, enter lb-network.
- In the Subnets section:
  - Set the Subnet creation mode to Custom.
  - In the New subnet section, enter the following information:
    - Name: lb-frontend-and-backend-subnet
    - Region: us-west1
    - IP address range: 10.1.2.0/24
  - Click Done.
- Click Create.
gcloud
Create a VPC network with the gcloud compute networks create command:

gcloud compute networks create lb-network --subnet-mode=custom

Create a subnet in the lb-network network in the us-west1 region:

gcloud compute networks subnets create lb-frontend-and-backend-subnet \
    --network=lb-network \
    --range=10.1.2.0/24 \
    --region=us-west1
Terraform
Create a VPC network:
# Shared VPC network
resource "google_compute_network" "lb_network" {
  name                    = "lb-network"
  provider                = google-beta
  project                 = "my-host-project-id"
  auto_create_subnetworks = false
}

Create a subnet in the us-west1 region:

# Shared VPC network - backend subnet
resource "google_compute_subnetwork" "lb_frontend_and_backend_subnet" {
  name          = "lb-frontend-and-backend-subnet"
  provider      = google-beta
  project       = "my-host-project-id"
  region        = "us-west1"
  ip_cidr_range = "10.1.2.0/24"
  role          = "ACTIVE"
  network       = google_compute_network.lb_network.id
}
Configure the proxy-only subnet
The proxy-only subnet is used by all regional Envoy-based load balancers in the us-west1 region, in the lb-network VPC network. There can only be one active proxy-only subnet per region, per network.
Do not perform this step if there is already a proxy-only subnet reserved in the us-west1 region in this network.
All the steps in this section must be performed in the host project.
Console
- In the Google Cloud console, go to the VPC networks page.
- Click the name of the Shared VPC network: lb-network.
- Click Add subnet.
- For Name, enter proxy-only-subnet.
- For Region, select us-west1.
- Set Purpose to Regional Managed Proxy.
- For IP address range, enter 10.129.0.0/23.
- Click Add.
gcloud
Create the proxy-only subnet with the gcloud compute networks subnets create command:

gcloud compute networks subnets create proxy-only-subnet \
    --purpose=REGIONAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=us-west1 \
    --network=lb-network \
    --range=10.129.0.0/23
Terraform
Create the proxy-only subnet:
# Shared VPC network - proxy-only subnet
resource "google_compute_subnetwork" "proxy_only_subnet" {
  name          = "proxy-only-subnet"
  provider      = google-beta
  project       = "my-host-project-id"
  region        = "us-west1"
  ip_cidr_range = "10.129.0.0/23"
  role          = "ACTIVE"
  purpose       = "REGIONAL_MANAGED_PROXY"
  network       = google_compute_network.lb_network.id
}

Give service project admins access to the backend subnet
Service project administrators require access to the lb-frontend-and-backend-subnet subnet so that they can provision the load balancer's backends. A Shared VPC Admin must grant access to the backend subnet to service project administrators (or developers who deploy resources and backends that use the subnet). For instructions, see Service Project Admins for some subnets.
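For example, a Shared VPC Admin could grant the Compute Network User role on just this subnet with a command like the following. This is a sketch, not part of the original steps; SERVICE_PROJECT_ADMIN is a placeholder for the principal's email address.

gcloud compute networks subnets add-iam-policy-binding lb-frontend-and-backend-subnet \
    --region=us-west1 \
    --member="user:SERVICE_PROJECT_ADMIN" \
    --role="roles/compute.networkUser" \
    --project=HOST_PROJECT_ID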
Note: Service project administrators do not need to be granted access to the proxy-only subnet. However, without a pre-existing proxy-only subnet in the region, service project administrators cannot create forwarding rules for regional Envoy-based load balancers in that region.

Configure firewall rules in the host project
This example uses the following firewall rules:
- fw-allow-health-check. An ingress rule, applicable to the instances being load balanced, that allows all TCP traffic from the Google Cloud health checking systems in 130.211.0.0/22 and 35.191.0.0/16. This example uses the target tag load-balanced-backend to identify the instances to which it should apply.
- fw-allow-proxies. An ingress rule, applicable to the instances being load balanced, that allows TCP traffic on ports 80, 443, and 8080 from the load balancer's managed proxies. This example uses the target tag load-balanced-backend to identify the instances to which it should apply.
- fw-allow-ssh. An ingress rule, applicable to the instances being load balanced, that allows incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP range for this rule. For example, you can specify just the IP ranges of the system from which you initiate SSH sessions. This example uses the target tag allow-ssh to identify the virtual machines (VMs) to which the firewall rule applies.
All the steps in this section must be performed in the host project.
Console
In the Google Cloud console, go to the Firewall policies page.
- Click Create firewall rule to create the rule to allow Google Cloud health checks:
  - Name: fw-allow-health-check
  - Network: lb-network
  - Direction of traffic: Ingress
  - Action on match: Allow
  - Targets: Specified target tags
  - Target tags: load-balanced-backend
  - Source filter: IPv4 ranges
  - Source IPv4 ranges: 130.211.0.0/22 and 35.191.0.0/16
  - Protocols and ports:
    - Choose Specified protocols and ports.
    - Check TCP and enter 80 for the port number.
    As a best practice, limit this rule to just the protocols and ports that match those used by your health check. If you use tcp:80 for the protocol and port, Google Cloud can use HTTP on port 80 to contact your VMs, but it cannot use HTTPS on port 443 to contact them.
  - Click Create.
- Click Create firewall rule again to create the rule to allow traffic from the load balancer's managed proxies:
  - Name: fw-allow-proxies
  - Network: lb-network
  - Direction of traffic: Ingress
  - Action on match: Allow
  - Targets: Specified target tags
  - Target tags: load-balanced-backend
  - Source filter: IPv4 ranges
  - Source IPv4 ranges: 10.129.0.0/23
  - Protocols and ports:
    - Choose Specified protocols and ports.
    - Check TCP and enter 80, 443, 8080 for the port numbers.
  - Click Create.
- Click Create firewall rule again to create the rule to allow incoming SSH connections:
  - Name: fw-allow-ssh
  - Network: lb-network
  - Direction of traffic: Ingress
  - Action on match: Allow
  - Targets: Specified target tags
  - Target tags: allow-ssh
  - Source filter: IPv4 ranges
  - Source IPv4 ranges: 0.0.0.0/0
  - Protocols and ports:
    - Choose Specified protocols and ports.
    - Check TCP and enter 22 for the port number.
  - Click Create.
gcloud
Create the fw-allow-health-check firewall rule to allow Google Cloud health checks. This example allows all TCP traffic from health check probers. However, you can configure a narrower set of ports to meet your needs.

gcloud compute firewall-rules create fw-allow-health-check \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=load-balanced-backend \
    --rules=tcp

Create the fw-allow-proxies firewall rule to allow traffic from the Envoy proxy-only subnet to reach your backends.

gcloud compute firewall-rules create fw-allow-proxies \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --source-ranges=10.129.0.0/23 \
    --target-tags=load-balanced-backend \
    --rules=tcp:80,tcp:443,tcp:8080

Create the fw-allow-ssh firewall rule to allow SSH connectivity to VMs with the network tag allow-ssh. When you omit source-ranges, Google Cloud interprets the rule to mean any source.

gcloud compute firewall-rules create fw-allow-ssh \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-ssh \
    --rules=tcp:22
Terraform
Create a firewall rule to allow Google Cloud health checks.
resource "google_compute_firewall" "fw_allow_health_check" { name = "fw-allow-health-check" provider = google-beta project = "my-host-project-id" direction = "INGRESS" network = google_compute_network.lb_network.id source_ranges = ["130.211.0.0/22", "35.191.0.0/16"] allow { protocol = "tcp" } target_tags = ["load-balanced-backend"]}Create a firewall rule to allow traffic from the Envoy proxy-only subnetto reach your backends.
resource "google_compute_firewall" "fw_allow_proxies" { name = "fw-allow-proxies" provider = google-beta project = "my-host-project-id" direction = "INGRESS" network = google_compute_network.lb_network.id source_ranges = ["10.129.0.0/23"] allow { protocol = "tcp" ports = ["80", "443", "8080"] } target_tags = ["load-balanced-backend"]}Create a firewall rule to allow SSH connectivity to VMs with the networktag
allow-ssh.resource "google_compute_firewall" "fw_allow_ssh" { name = "fw-allow-ssh" provider = google-beta project = "my-host-project-id" direction = "INGRESS" network = google_compute_network.lb_network.id source_ranges = ["0.0.0.0/0"] allow { protocol = "tcp" ports = ["22"] } target_tags = ["allow-ssh"]}
Set up Shared VPC in the host project
This step entails enabling a Shared VPC host project, sharing subnets of the host project, and attaching service projects to the host project so that the service projects can use the Shared VPC network. To set up Shared VPC in the host project, see Provisioning Shared VPC.
Note: Managed instance groups used with Shared VPC require making the Google APIs service account a Service Project Admin. This is because tasks like automatic instance creation via autoscaling are performed by this type of service account. To define the Google APIs service account as a Service Project Admin for the subnet in the Shared VPC host project, see Google APIs service account as a Service Project Admin.
The rest of these instructions assume that you have already set up Shared VPC. This includes setting up IAM policies for your organization and designating the host and service projects.
Don't proceed until you have set up Shared VPC and enabled the host and service projects.
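For reference, a Shared VPC Admin typically enables the host project and attaches a service project with commands like the following. This is a minimal sketch under the assumption that you use gcloud for these one-off tasks; substitute your own project IDs.

gcloud compute shared-vpc enable HOST_PROJECT_ID

gcloud compute shared-vpc associated-projects add SERVICE_PROJECT_ID \
    --host-project=HOST_PROJECT_ID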
After completing the steps defined in this prerequisites section, you can pursue either of the following setups:
- Configure a load balancer in the service project
- Configure a load balancer with a cross-project backend service
Configure a load balancer in the service project
This example creates an internal Application Load Balancer where all the load balancing components (forwarding rule, target proxy, URL map, and backend service) and backends are created in the service project.
The internal Application Load Balancer's networking resources, such as the proxy-only subnet and the subnet for the backend instances, are created in the host project. The firewall rules for the backend instances are also created in the host project.
This section shows you how to set up the load balancer and backends. These steps should be carried out by the service project administrator (or a developer operating within the service project) and do not require involvement from the host project administrator. The steps in this section are largely similar to the standard steps to set up an internal Application Load Balancer.
The example on this page explicitly sets a reserved internal IP address for the internal Application Load Balancer's forwarding rule, rather than allowing an ephemeral internal IP address to be allocated. As a best practice, we recommend reserving IP addresses for forwarding rules.
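For example, the 10.1.2.99 address used later in this example could be reserved in the service project with a command like the following. This is a sketch, not part of the original steps; the address name is the value you would pass as IP_ADDRESS_NAME when you create the forwarding rule.

gcloud compute addresses create ip-address-shared-vpc \
    --region=us-west1 \
    --subnet=projects/HOST_PROJECT_ID/regions/us-west1/subnetworks/lb-frontend-and-backend-subnet \
    --addresses=10.1.2.99 \
    --project=SERVICE_PROJECT_ID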
Create the managed instance group backend
Note: This section shows you how to set up an internal Application Load Balancer with VM instances located in a service project. Internal Application Load Balancers also support Shared VPC with pods in a GKE cluster using container-native load balancing with network endpoint groups (NEGs).
This section shows how to create a template and a managed instance group. The managed instance group provides VM instances running the backend servers of an example internal Application Load Balancer. Traffic from clients is load balanced to these backend servers. For demonstration purposes, backends serve their own hostnames.
Console
Create an instance template. In the Google Cloud console, go to the Instance templates page.
- Click Create instance template.
- For Name, enter l7-ilb-backend-template.
- Ensure that the Boot disk is set to a Debian image, such as Debian GNU/Linux 12 (bookworm). These instructions use commands that are only available on Debian, such as apt-get. If you need to change the Boot disk, click Change.
  - For Operating System, select Debian.
  - For Version, select one of the available Debian images such as Debian GNU/Linux 12 (bookworm).
  - Click Select.
- Click Advanced options, and then click Networking.
- Enter the following Network tags: allow-ssh, load-balanced-backend.
- In the Network interfaces section, select Networks shared with me (from host project: HOST_PROJECT_ID).
- Select the lb-frontend-and-backend-subnet subnet from the lb-network network.
- Click Management. For Management, insert the following script into the Startup script field.

  #! /bin/bash
  apt-get update
  apt-get install apache2 -y
  a2ensite default-ssl
  a2enmod ssl
  vm_hostname="$(curl -H "Metadata-Flavor:Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/name)"
  echo "Page served from: $vm_hostname" | \
  tee /var/www/html/index.html
  systemctl restart apache2

- Click Create.
Create a managed instance group. In the Google Cloud console, go to the Instance groups page.
- Click Create instance group.
- Choose New managed instance group (stateless). For more information, see Stateless or stateful MIGs.
- For Name, enter l7-ilb-backend-example.
- For Location, select Single zone.
- For Region, select us-west1.
- For Zone, select us-west1-a.
- For Instance template, select l7-ilb-backend-template.
- Specify the number of instances that you want to create in the group.
  For this example, specify the following options for Autoscaling:
  - For Autoscaling mode, select Off: do not autoscale.
  - For Maximum number of instances, enter 2.
  Optionally, in the Autoscaling section of the UI, you can configure the instance group to automatically add or remove instances based on instance CPU usage.
- Click Create.
gcloud
The gcloud instructions in this guide assume that you are using Cloud Shell or another environment with bash installed.

Create a VM instance template with an HTTP server with the gcloud compute instance-templates create command.

gcloud compute instance-templates create l7-ilb-backend-template \
    --region=us-west1 \
    --network=projects/HOST_PROJECT_ID/global/networks/lb-network \
    --subnet=projects/HOST_PROJECT_ID/regions/us-west1/subnetworks/lb-frontend-and-backend-subnet \
    --tags=allow-ssh,load-balanced-backend \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --metadata=startup-script='#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://metadata.google.internal/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
systemctl restart apache2' \
    --project=SERVICE_PROJECT_ID

Create a managed instance group in the zone with the gcloud compute instance-groups managed create command.

gcloud compute instance-groups managed create l7-ilb-backend-example \
    --zone=us-west1-a \
    --size=2 \
    --template=l7-ilb-backend-template \
    --project=SERVICE_PROJECT_ID
Terraform
Create a VM instance template.
# Instance template
resource "google_compute_instance_template" "default" {
  name     = "l7-ilb-backend-template"
  provider = google-beta
  project  = "my-service-project-id"
  region   = "us-west1"
  # For machine type, using small. For more options check https://cloud.google.com/compute/docs/machine-types
  machine_type = "e2-small"
  tags         = ["allow-ssh", "load-balanced-backend"]
  network_interface {
    network    = google_compute_network.lb_network.id
    subnetwork = google_compute_subnetwork.lb_frontend_and_backend_subnet.id
    access_config {
      # add external ip to fetch packages like apache2, ssl
    }
  }
  disk {
    source_image = "debian-cloud/debian-12"
    auto_delete  = true
    boot         = true
  }
  # install apache2 and serve a simple web page
  metadata = {
    startup-script = <<EOF
    #! /bin/bash
    sudo apt-get update
    sudo apt-get install apache2 -y
    sudo a2ensite default-ssl
    sudo a2enmod ssl
    vm_hostname="$(curl -H "Metadata-Flavor:Google" \
    http://metadata.google.internal/computeMetadata/v1/instance/name)"
    sudo echo "Page served from: $vm_hostname" | \
    tee /var/www/html/index.html
    sudo systemctl restart apache2
    EOF
  }
}

Create a managed instance group.

For HTTP:

# MIG
resource "google_compute_instance_group_manager" "default" {
  name               = "l7-ilb-backend-example"
  provider           = google-beta
  project            = "my-service-project-id"
  zone               = "us-west1-a"
  base_instance_name = "vm"
  target_size        = 2
  version {
    instance_template = google_compute_instance_template.default.id
    name              = "primary"
  }
  named_port {
    name = "http"
    port = 80
  }
}

For HTTPS:

# MIG
resource "google_compute_instance_group_manager" "default" {
  name               = "l7-ilb-backend-example"
  provider           = google-beta
  project            = "my-service-project-id"
  zone               = "us-west1-a"
  base_instance_name = "vm"
  target_size        = 2
  version {
    instance_template = google_compute_instance_template.default.id
    name              = "primary"
  }
  named_port {
    name = "https"
    port = 443
  }
}
Configure the load balancer
This section shows you how to create the internal Application Load Balancer resources:
- HTTP health check
- Backend service with a managed instance group as the backend
- A URL map
- SSL certificate (required only for HTTPS)
- Target proxy
- Forwarding rule
Proxy availability
Depending on the number of service projects that are using the same Shared VPC network, you might reach quotas or limits more quickly than in the network deployment model where each Google Cloud project hosts its own network.
For example, sometimes Google Cloud regions don't have enough proxy capacity for a new internal Application Load Balancer. If this happens, the Google Cloud console provides a proxy availability warning message when you are creating your load balancer. To resolve this issue, you can do one of the following:
- Wait for the capacity issue to be resolved.
- Contact your Google Cloud sales team to increase these limits.
Console
Switch context to the service project
- In the Google Cloud console, go to the Dashboard page.
- Click the Select from list at the top of the page. In the Select from window that appears, select the service project where you want to create the load balancer.
Select the load balancer type
In the Google Cloud console, go to the Load balancing page.
- Click Create load balancer.
- For Type of load balancer, select Application Load Balancer (HTTP/HTTPS) and click Next.
- For Public facing or internal, select Internal and click Next.
- For Cross-region or single region deployment, select Best for regional workloads and click Next.
- Click Configure.
Basic configuration
- For the Name of the load balancer, enter l7-ilb-shared-vpc.
- For the Region, select us-west1.
- For the Network, select lb-network (from Project: HOST_PROJECT_ID).
  If you see a Proxy-only subnet required in Shared VPC network warning, confirm that the host project admin has created the proxy-only-subnet in the us-west1 region in the lb-network Shared VPC network. Load balancer creation succeeds even if you do not have permission to view the proxy-only subnet on this page.
- Keep the window open to continue.
Configure the backend
- Click Backend configuration.
- From the Create or select backend services menu, select Create a backend service.
- Set the Name of the backend service to l7-ilb-backend-service.
- Set the Backend type to Instance groups.
- In the Health check list, select Create a health check, and then enter the following information:
  - Name: l7-ilb-basic-check
  - Protocol: HTTP
  - Port: 80
- Click Create.
- In the New backend section:
  - Set the Instance group to l7-ilb-backend-example.
  - Set the Port numbers to 80.
  - Set the Balancing mode to Utilization.
  - Click Done.
- Click Create.
Configure the routing rules
- Click Routing rules. Ensure that the l7-ilb-backend-service is the only backend service for any unmatched host and any unmatched path.
For information about traffic management, see Setting up traffic management.
Configure the frontend
For HTTP:
- Click Frontend configuration.
- Set the Name to l7-ilb-forwarding-rule.
- Set the Protocol to HTTP.
- Set the Subnetwork to lb-frontend-and-backend-subnet. Don't select the proxy-only subnet for the frontend even if it is an option in the list.
- Set the Port to 80.
- Click the IP address menu, and then click Create IP address.
- In the Reserve a static internal IP address panel, provide the following details:
  - For the Name, enter ip-address-shared-vpc.
  - For Static IP address, click Let me choose. For Custom IP address, enter 10.1.2.99.
  - (Optional) If you want to share this IP address with different frontends, set Purpose to Shared.
- Click Done.
For HTTPS:
If you are using HTTPS between the client and the load balancer, you need one or more SSL certificate resources to configure the proxy. For information about how to create SSL certificate resources, see SSL certificates. Google-managed certificates aren't currently supported with internal Application Load Balancers.
- Click Frontend configuration.
- In the Name field, enter l7-ilb-forwarding-rule.
- In the Protocol field, select HTTPS (includes HTTP/2).
- Set the Subnetwork to lb-frontend-and-backend-subnet. Don't select the proxy-only subnet for the frontend even if it is an option in the list.
- Ensure that the Port is set to 443 to allow HTTPS traffic.
- Click the IP address menu, and then click Create IP address.
- In the Reserve a static internal IP address panel, provide the following details:
  - For the Name, enter ip-address-shared-vpc.
  - For Static IP address, click Let me choose. For Custom IP address, enter 10.1.2.99.
  - (Optional) If you want to share this IP address with different frontends, set Purpose to Shared.
- Click the Certificate list.
  - If you already have a self-managed SSL certificate resource that you want to use as the primary SSL certificate, select it from the menu.
  - Otherwise, select Create a new certificate.
    - Fill in a Name of l7-ilb-cert.
    - In the appropriate fields, upload your PEM-formatted files:
      - Public key certificate
      - Certificate chain
      - Private key
    - Click Create.
- To add certificate resources in addition to the primary SSL certificate resource:
  - Click Add certificate.
  - Select a certificate from the Certificates list or click Create a new certificate and follow the previous instructions.
- Click Done.
Review and finalize the configuration
- Click Create.
gcloud
Define the HTTP health check with the gcloud compute health-checks create http command.

gcloud compute health-checks create http l7-ilb-basic-check \
    --region=us-west1 \
    --use-serving-port \
    --project=SERVICE_PROJECT_ID

Define the backend service with the gcloud compute backend-services create command.

gcloud compute backend-services create l7-ilb-backend-service \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --protocol=HTTP \
    --health-checks=l7-ilb-basic-check \
    --health-checks-region=us-west1 \
    --region=us-west1 \
    --project=SERVICE_PROJECT_ID

Add backends to the backend service with the gcloud compute backend-services add-backend command.

gcloud compute backend-services add-backend l7-ilb-backend-service \
    --balancing-mode=UTILIZATION \
    --instance-group=l7-ilb-backend-example \
    --instance-group-zone=us-west1-a \
    --region=us-west1 \
    --project=SERVICE_PROJECT_ID

Create the URL map with the gcloud compute url-maps create command.

gcloud compute url-maps create l7-ilb-map \
    --default-service=l7-ilb-backend-service \
    --region=us-west1 \
    --project=SERVICE_PROJECT_ID

Create the target proxy.

For HTTP:

For an internal HTTP load balancer, create the target proxy with the gcloud compute target-http-proxies create command.

gcloud compute target-http-proxies create l7-ilb-proxy \
    --url-map=l7-ilb-map \
    --url-map-region=us-west1 \
    --region=us-west1 \
    --project=SERVICE_PROJECT_ID

For HTTPS:

For information about how to create SSL certificate resources, see SSL certificates. Google-managed certificates aren't currently supported with internal Application Load Balancers.

Assign your filepaths to variable names.

export LB_CERT=path to PEM-formatted file
export LB_PRIVATE_KEY=path to PEM-formatted file

Create a regional SSL certificate using the gcloud compute ssl-certificates create command.

gcloud compute ssl-certificates create l7-ilb-cert \
    --certificate=$LB_CERT \
    --private-key=$LB_PRIVATE_KEY \
    --region=us-west1

Use the regional SSL certificate to create a target proxy with the gcloud compute target-https-proxies create command.

gcloud compute target-https-proxies create l7-ilb-proxy \
    --url-map=l7-ilb-map \
    --region=us-west1 \
    --ssl-certificates=l7-ilb-cert \
    --project=SERVICE_PROJECT_ID

Create the forwarding rule.

For custom networks, you must reference the subnet in the forwarding rule. For the forwarding rule's IP address, use the lb-frontend-and-backend-subnet. If you try to use the proxy-only subnet, forwarding rule creation fails.

For HTTP:

Use the gcloud compute forwarding-rules create command with the correct flags.

gcloud compute forwarding-rules create l7-ilb-forwarding-rule \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=projects/HOST_PROJECT_ID/global/networks/lb-network \
    --subnet=projects/HOST_PROJECT_ID/regions/us-west1/subnetworks/lb-frontend-and-backend-subnet \
    --address=IP_ADDRESS_NAME \
    --ports=80 \
    --region=us-west1 \
    --target-http-proxy=l7-ilb-proxy \
    --target-http-proxy-region=us-west1 \
    --project=SERVICE_PROJECT_ID

For HTTPS:

Use the gcloud compute forwarding-rules create command with the correct flags.

gcloud compute forwarding-rules create l7-ilb-forwarding-rule \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=projects/HOST_PROJECT_ID/global/networks/lb-network \
    --subnet=projects/HOST_PROJECT_ID/regions/us-west1/subnetworks/lb-frontend-and-backend-subnet \
    --address=IP_ADDRESS_NAME \
    --ports=443 \
    --region=us-west1 \
    --target-https-proxy=l7-ilb-proxy \
    --target-https-proxy-region=us-west1 \
    --project=SERVICE_PROJECT_ID
Terraform
Define the HTTP health check.
For HTTP:
# health check
resource "google_compute_health_check" "default" {
  name               = "l7-ilb-basic-check"
  provider           = google-beta
  project            = "my-service-project-id"
  timeout_sec        = 1
  check_interval_sec = 1
  http_health_check {
    port = "80"
  }
}

For HTTPS:

# health check
resource "google_compute_health_check" "default" {
  name               = "l7-ilb-basic-check"
  provider           = google-beta
  project            = "my-service-project-id"
  timeout_sec        = 1
  check_interval_sec = 1
  https_health_check {
    port = "443"
  }
}

Define the backend service.

# backend service
resource "google_compute_region_backend_service" "default" {
  name                  = "l7-ilb-backend-service"
  provider              = google-beta
  project               = "my-service-project-id"
  region                = "us-west1"
  protocol              = "HTTP"
  load_balancing_scheme = "INTERNAL_MANAGED"
  timeout_sec           = 10
  health_checks         = [google_compute_health_check.default.id]
  backend {
    group           = google_compute_instance_group_manager.default.instance_group
    balancing_mode  = "UTILIZATION"
    capacity_scaler = 1.0
  }
}

Create the URL map.

# URL map
resource "google_compute_region_url_map" "default" {
  name            = "l7-ilb-map"
  provider        = google-beta
  project         = "my-service-project-id"
  region          = "us-west1"
  default_service = google_compute_region_backend_service.default.id
}

Create the target proxy.

For HTTP:

# HTTP target proxy
resource "google_compute_region_target_http_proxy" "default" {
  name     = "l7-ilb-proxy"
  provider = google-beta
  project  = "my-service-project-id"
  region   = "us-west1"
  url_map  = google_compute_region_url_map.default.id
}

For HTTPS:

Create a regional SSL certificate. For information about how to create SSL certificate resources, see SSL certificates. Google-managed certificates aren't currently supported with internal Application Load Balancers.

# Use self-signed SSL certificate
resource "google_compute_region_ssl_certificate" "default" {
  name        = "l7-ilb-cert"
  provider    = google-beta
  project     = "my-service-project-id"
  region      = "us-west1"
  private_key = file("sample-private.key") # path to PEM-formatted file
  certificate = file("sample-server.cert") # path to PEM-formatted file
}

Use the regional SSL certificate to create a target proxy.

# HTTPS target proxy
resource "google_compute_region_target_https_proxy" "default" {
  name             = "l7-ilb-proxy"
  provider         = google-beta
  project          = "my-service-project-id"
  region           = "us-west1"
  url_map          = google_compute_region_url_map.default.id
  ssl_certificates = [google_compute_region_ssl_certificate.default.id]
}

Create the forwarding rule.

For custom networks, you must reference the subnet in the forwarding rule.

For HTTP:

# Forwarding rule
resource "google_compute_forwarding_rule" "default" {
  name                  = "l7-ilb-forwarding-rule"
  provider              = google-beta
  project               = "my-service-project-id"
  region                = "us-west1"
  ip_protocol           = "TCP"
  port_range            = "80"
  load_balancing_scheme = "INTERNAL_MANAGED"
  target                = google_compute_region_target_http_proxy.default.id
  network               = google_compute_network.lb_network.id
  subnetwork            = google_compute_subnetwork.lb_frontend_and_backend_subnet.id
  network_tier          = "PREMIUM"
  depends_on            = [google_compute_subnetwork.lb_frontend_and_backend_subnet]
}

For HTTPS:

# Forwarding rule
resource "google_compute_forwarding_rule" "default" {
  name                  = "l7-ilb-forwarding-rule"
  provider              = google-beta
  project               = "my-service-project-id"
  region                = "us-west1"
  ip_protocol           = "TCP"
  port_range            = "443"
  load_balancing_scheme = "INTERNAL_MANAGED"
  target                = google_compute_region_target_https_proxy.default.id
  network               = google_compute_network.lb_network.id
  subnetwork            = google_compute_subnetwork.lb_frontend_and_backend_subnet.id
  network_tier          = "PREMIUM"
  depends_on            = [google_compute_subnetwork.lb_frontend_and_backend_subnet]
}
Test the load balancer
To test the load balancer, first create a sample client VM. Then establish an SSH session with the VM and send traffic from this VM to the load balancer.
Create a test VM instance
Clients can be located in either the host project or any connected service project. In this example, you test that the load balancer is working by deploying a client VM in a service project. The client must use the same Shared VPC network and be in the same region as the load balancer.
Console
In the Google Cloud console, go to the VM instances page.
- Click Create instance.
- Set the Name to client-vm.
- Set the Zone to us-west1-a.
- Click Advanced options, and then click Networking.
- Enter the following Network tags: allow-ssh, load-balanced-backend.
- In the Network interfaces section, select Networks shared with me (from host project: HOST_PROJECT_ID).
- Select the lb-frontend-and-backend-subnet subnet from the lb-network network.
- Click Create.
gcloud
Create a test VM instance.
gcloud compute instances create client-vm \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --subnet=projects/HOST_PROJECT_ID/regions/us-west1/subnetworks/lb-frontend-and-backend-subnet \
    --zone=us-west1-a \
    --tags=allow-ssh \
    --project=SERVICE_PROJECT_ID
Terraform
Create a test VM instance.
resource "google_compute_instance" "vm_test" { name = "client-vm" provider = google-beta project = "my-service-project-id" zone = "us-west1-a" machine_type = "e2-small" tags = ["allow-ssh"] network_interface { network = google_compute_network.lb_network.id subnetwork = google_compute_subnetwork.lb_frontend_and_backend_subnet.id } boot_disk { initialize_params { image = "debian-cloud/debian-12" } } lifecycle { ignore_changes = [ metadata["ssh-keys"] ] }}Send traffic to the load balancer
Use SSH to connect to the instance that you just created and test that HTTP(S) services on the backends are reachable through the internal Application Load Balancer's forwarding rule IP address, and that traffic is being load balanced across the backend instances.

Connect to the client instance with SSH.

gcloud compute ssh client-vm \
    --zone=us-west1-a
Verify that the IP address is serving its hostname. Replace LB_IP_ADDRESS with the load balancer's IP address.

curl LB_IP_ADDRESS

For HTTPS testing, replace curl with the following:

curl -k -s 'https://LB_IP_ADDRESS:443'

The -k flag causes curl to skip certificate validation.
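To confirm that requests are spread across both backends, you can repeat the request in a short loop and watch the served hostname change. This is a quick sketch, not part of the original steps; replace LB_IP_ADDRESS as before.

# Each response prints "Page served from: VM_NAME"; the name should alternate across backends.
for i in $(seq 1 10); do
  curl -s LB_IP_ADDRESS
done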
Configure a load balancer with a cross-project backend service
The previous example on this page shows you how to set up a Shared VPC deployment where all the load balancer components and its backends are created in the service project.
Internal Application Load Balancers also let you configure Shared VPC deployments where a URL map in one host or service project can reference backend services (and backends) located across multiple service projects in Shared VPC environments. This is referred to as cross-project service referencing.
You can use the steps in this section as a reference to configure any of the supported combinations listed here:
- Forwarding rule, target proxy, and URL map in the host project, and backend service in a service project
- Forwarding rule, target proxy, and URL map in a service project, and backend service in another service project
Cross-project service referencing can be used with instance groups, serverless NEGs, or any other supported backend types. If you're using serverless NEGs, you need to create a VM in the VPC network where you intend to create the load balancer's frontend. For an example, see Create a VM instance in a specific subnet in Set up an internal Application Load Balancer with Cloud Run.
Set up requirements
This example configures a sample load balancer with its frontend and backend in two different service projects.
If you haven't already done so, you must complete all of the prerequisite steps to set up Shared VPC and configure the network, subnets, and firewall rules required for this example. For instructions, see the Prerequisites section earlier on this page.
Create the backends and backend service in service project B
All the steps in this section must be performed in service project B.
Console
Create an instance template. In the Google Cloud console, go to the Instance templates page.
- Click Create instance template.
- Enter a Name for the instance template: cross-ref-backend-template.
- Ensure that the Boot disk is set to a Debian image, such as Debian GNU/Linux 12 (bookworm). These instructions use commands that are only available on Debian, such as apt-get. If you need to change the Boot disk, click Change.
  - For Operating System, select Debian.
  - For Version, select one of the available Debian images such as Debian GNU/Linux 12 (bookworm).
  - Click Select.
- Click Advanced options, and then click Networking.
- Enter the following Network tags: allow-ssh, load-balanced-backend.
- In the Network interfaces section, select Networks shared with me (from host project: HOST_PROJECT_ID).
- Select the lb-frontend-and-backend-subnet subnet from the lb-network network.
- Click Management. For Management, insert the following script into the Startup script field.

  #! /bin/bash
  apt-get update
  apt-get install apache2 -y
  a2ensite default-ssl
  a2enmod ssl
  vm_hostname="$(curl -H "Metadata-Flavor:Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/name)"
  echo "Page served from: $vm_hostname" | \
  tee /var/www/html/index.html
  systemctl restart apache2

- Click Create.
Create a managed instance group. In the Google Cloud console, go to the Instance groups page.
- Click Create instance group.
- Choose New managed instance group (stateless). For more information, see Stateless or stateful MIGs.
- Enter a Name for the instance group: cross-ref-ig-backend.
- For Location, select Single zone.
- For Region, select us-west1.
- For Zone, select us-west1-a.
- For Instance template, select cross-ref-backend-template.
- Specify the number of instances that you want to create in the group.
  For this example, specify the following options for Autoscaling:
  - For Autoscaling mode, select Off: do not autoscale.
  - For Maximum number of instances, enter 2.
  Optionally, in the Autoscaling section of the UI, you can configure the instance group to automatically add or remove instances based on instance CPU usage.
- Click Create.
Create a regional backend service. As part of this step, you also create the health check and add backends to the backend service. In the Google Cloud console, go to the Backends page.
- Click Create regional backend service.
- Enter a Name for the backend service: cross-ref-backend-service.
- For Region, select us-west1.
- For Load balancer type, select Regional internal Application Load Balancer (INTERNAL_MANAGED).
- Set Backend type to Instance groups.
- In the Health check list, click Create a health check, and then enter the following information:
  - Name: cross-ref-http-health-check
  - Protocol: HTTP
  - Port: 80
- Click Create.
- In the Backends section, set Network to lb-network.
- Click Add backend and set the following fields:
  - Set Instance group to cross-ref-ig-backend.
  - Enter the Port numbers: 80.
  - Set Balancing mode to Utilization.
  - Click Done.
- Click Continue.
- Optional: In the Add permissions section, enter the IAM principals (typically an email address) of Load Balancer Admins from other projects so that they can use this backend service for load balancers in their own projects. Without this permission, you cannot use cross-project service referencing.
  If you don't have permission to set access control policies for backend services in this project, you can still create the backend service now, and an authorized user can perform this step later as described in the section Grant permissions to the Load Balancer Admin to use the backend service. That section also describes how to grant access to all the backend services in this project, so that you don't have to grant access every time you create a new backend service.
- Click Create.
gcloud
Create a VM instance template with an HTTP server with the gcloud compute instance-templates create command.

gcloud compute instance-templates create BACKEND_IG_TEMPLATE \
    --region=us-west1 \
    --network=projects/HOST_PROJECT_ID/global/networks/lb-network \
    --subnet=projects/HOST_PROJECT_ID/regions/us-west1/subnetworks/lb-frontend-and-backend-subnet \
    --tags=allow-ssh,load-balanced-backend \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --metadata=startup-script='#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://metadata.google.internal/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
systemctl restart apache2' \
    --project=SERVICE_PROJECT_B_ID

Replace the following:
- BACKEND_IG_TEMPLATE: the name for the instance group template.
- SERVICE_PROJECT_B_ID: the project ID for service project B, where the load balancer's backends and the backend service are being created.
- HOST_PROJECT_ID: the project ID for the Shared VPC host project.

Create a managed instance group in the zone with the gcloud compute instance-groups managed create command.

gcloud compute instance-groups managed create BACKEND_MIG \
    --zone=us-west1-a \
    --size=2 \
    --template=BACKEND_IG_TEMPLATE \
    --project=SERVICE_PROJECT_B_ID

Replace the following:
- BACKEND_MIG: the name for the backend instance group.

Define the HTTP health check with the gcloud compute health-checks create http command.

gcloud compute health-checks create http HTTP_HEALTH_CHECK_NAME \
    --region=us-west1 \
    --use-serving-port \
    --project=SERVICE_PROJECT_B_ID

Replace the following:
- HTTP_HEALTH_CHECK_NAME: the name for the HTTP health check.

Define the backend service with the gcloud compute backend-services create command.

gcloud compute backend-services create BACKEND_SERVICE_NAME \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --protocol=HTTP \
    --health-checks=HTTP_HEALTH_CHECK_NAME \
    --health-checks-region=us-west1 \
    --region=us-west1 \
    --project=SERVICE_PROJECT_B_ID

Replace the following:
- BACKEND_SERVICE_NAME: the name for the backend service created in service project B.

Add backends to the backend service with the gcloud compute backend-services add-backend command.

gcloud compute backend-services add-backend BACKEND_SERVICE_NAME \
    --balancing-mode=UTILIZATION \
    --instance-group=BACKEND_MIG \
    --instance-group-zone=us-west1-a \
    --region=us-west1 \
    --project=SERVICE_PROJECT_B_ID
Terraform
Create an instance template.
# Instance template
resource "google_compute_instance_template" "default" {
  name     = "l7-ilb-backend-template"
  provider = google-beta
  project  = "my-service-project-b-id"
  region   = "us-west1"
  # For machine type, using small. For more options check https://cloud.google.com/compute/docs/machine-types
  machine_type = "e2-small"
  tags         = ["allow-ssh", "load-balanced-backend"]
  network_interface {
    network    = google_compute_network.lb_network.id
    subnetwork = google_compute_subnetwork.lb_frontend_and_backend_subnet.id
    access_config {
      # add external ip to fetch packages like apache2, ssl
    }
  }
  disk {
    source_image = "debian-cloud/debian-12"
    auto_delete  = true
    boot         = true
  }
  # install apache2 and serve a simple web page
  metadata = {
    startup-script = <<EOF
    #! /bin/bash
    sudo apt-get update
    sudo apt-get install apache2 -y
    sudo a2ensite default-ssl
    sudo a2enmod ssl
    vm_hostname="$(curl -H "Metadata-Flavor:Google" \
    http://metadata.google.internal/computeMetadata/v1/instance/name)"
    sudo echo "Page served from: $vm_hostname" | \
    tee /var/www/html/index.html
    sudo systemctl restart apache2
    EOF
  }
}

Create a managed instance group.

For HTTP:

# MIG
resource "google_compute_instance_group_manager" "default" {
  name               = "l7-ilb-backend-example"
  provider           = google-beta
  project            = "my-service-project-b-id"
  zone               = "us-west1-a"
  base_instance_name = "vm"
  target_size        = 2
  version {
    instance_template = google_compute_instance_template.default.id
    name              = "primary"
  }
  named_port {
    name = "http"
    port = 80
  }
}

For HTTPS:

# MIG
resource "google_compute_instance_group_manager" "default" {
  name               = "l7-ilb-backend-example"
  provider           = google-beta
  project            = "my-service-project-b-id"
  zone               = "us-west1-a"
  base_instance_name = "vm"
  target_size        = 2
  version {
    instance_template = google_compute_instance_template.default.id
    name              = "primary"
  }
  named_port {
    name = "https"
    port = 443
  }
}

Create a health check for the backend.

For HTTP:

# health check
resource "google_compute_health_check" "default" {
  name               = "l7-ilb-basic-check"
  provider           = google-beta
  project            = "my-service-project-b-id"
  timeout_sec        = 1
  check_interval_sec = 1
  http_health_check {
    port = "80"
  }
}

For HTTPS:

# health check
resource "google_compute_health_check" "default" {
  name               = "l7-ilb-basic-check"
  provider           = google-beta
  project            = "my-service-project-b-id"
  timeout_sec        = 1
  check_interval_sec = 1
  https_health_check {
    port = "443"
  }
}

Create a regional backend service.

# backend service
resource "google_compute_region_backend_service" "default" {
  name                  = "l7-ilb-backend-service"
  provider              = google-beta
  project               = "my-service-project-b-id"
  region                = "us-west1"
  protocol              = "HTTP"
  load_balancing_scheme = "INTERNAL_MANAGED"
  timeout_sec           = 10
  health_checks         = [google_compute_health_check.default.id]
  backend {
    group           = google_compute_instance_group_manager.default.instance_group
    balancing_mode  = "UTILIZATION"
    capacity_scaler = 1.0
  }
}
Create the load balancer frontend and URL map in service project A
All the steps in this section must be performed in service project A.
Console
Select the load balancer type
In the Google Cloud console, go to the Load balancing page.
- Click Create load balancer.
- For Type of load balancer, select Application Load Balancer (HTTP/HTTPS) and click Next.
- For Public facing or internal, select Internal and click Next.
- For Cross-region or single region deployment, select Best for regional workloads and click Next.
- Click Configure.
Basic configuration
- Enter a Name for the load balancer.
- For the Region, select us-west1.
- For the Network, select lb-network (from Project: HOST_PROJECT_NAME).
  If you see a Proxy-only subnet required in Shared VPC network warning, confirm that the host project admin has created the proxy-only-subnet in the us-west1 region in the lb-network Shared VPC network. Load balancer creation succeeds even if you do not have permission to view the proxy-only subnet on this page.
- Keep the window open to continue.
Configure the backend
- Click Backend configuration.
- Click Cross-project backend services.
- For Project ID, enter the project ID for service project B.
- From the Select backend services list, select the backend services from service project B that you want to use. For this example, you enter cross-ref-backend-service.
- Click OK.
Configure the routing rules
- Click Routing rules. Ensure that the cross-ref-backend-service is the only backend service for any unmatched host and any unmatched path.
For information about traffic management, see Setting up traffic management.
Configure the frontend
For cross-project service referencing to work, the frontend must use the same network (lb-network) from the Shared VPC host project that was used to create the backend service.
For HTTP:
- Click Frontend configuration.
- Enter a Name for the forwarding rule: cross-ref-http-forwarding-rule.
- Set the Protocol to HTTP.
- Set the Subnetwork to lb-frontend-and-backend-subnet. Don't select the proxy-only subnet for the frontend even if it is an option in the list.
- Set the Port to 80.
- Click the IP address menu, and then click Create IP address.
- In the Reserve a static internal IP address panel, provide the following details:
  - For the Name, enter cross-ref-ip-address.
  - For Static IP address, click Let me choose. For Custom IP address, enter 10.1.2.98.
  - (Optional) If you want to share this IP address with different frontends, set Purpose to Shared.
- Click Done.
For HTTPS:
If you are using HTTPS between the client and the load balancer, you need one or more SSL certificate resources to configure the proxy. For information about how to create SSL certificate resources, see SSL certificates. Google-managed certificates aren't currently supported with internal Application Load Balancers.
- Click Frontend configuration.
- Enter a Name for the forwarding rule: cross-ref-https-forwarding-rule.
- In the Protocol field, select HTTPS (includes HTTP/2).
- Set the Subnetwork to lb-frontend-and-backend-subnet. Don't select the proxy-only subnet for the frontend even if it is an option in the list.
- Ensure that the Port is set to 443 to allow HTTPS traffic.
- Click the IP address menu, and then click Create IP address.
- In the Reserve a static internal IP address panel, provide the following details:
  - For the Name, enter cross-ref-ip-address.
  - For Static IP address, click Let me choose. For Custom IP address, enter 10.1.2.98.
  - (Optional) If you want to share this IP address with different frontends, set Purpose to Shared.
- Click the Certificate list.
  - If you already have a self-managed SSL certificate resource you want to use as the primary SSL certificate, select it from the menu.
  - Otherwise, select Create a new certificate.
    - Enter a Name for the SSL certificate.
    - In the appropriate fields, upload your PEM-formatted files:
      - Public key certificate
      - Certificate chain
      - Private key
    - Click Create.
- To add certificate resources in addition to the primary SSL certificate resource:
  - Click Add certificate.
  - Select a certificate from the Certificates list or click Create a new certificate and follow the previous instructions.
- Click Done.
Review and finalize the configuration
- Click Create.
Test the load balancer
After the load balancer is created, test the load balancer by using the steps described in Test the load balancer.
gcloud
Optional: Before creating a load balancer with cross-referencing backend services, find out whether the backend services you want to refer to can be referenced using a URL map:
gcloud compute backend-services list-usable \
    --region=us-west1 \
    --project=SERVICE_PROJECT_B_ID
Create the URL map and set the default service to the backend service created in service project B.

gcloud compute url-maps create URL_MAP_NAME \
    --default-service=projects/SERVICE_PROJECT_B_ID/regions/us-west1/backendServices/BACKEND_SERVICE_NAME \
    --region=us-west1 \
    --project=SERVICE_PROJECT_A_ID

Replace the following:
- URL_MAP_NAME: the name for the URL map.
- BACKEND_SERVICE_NAME: the name for the backend service created in service project B.
- SERVICE_PROJECT_B_ID: the project ID for service project B, where the load balancer's backends and the backend service are created.
- SERVICE_PROJECT_A_ID: the project ID for service project A, where the load balancer's frontend is being created.

URL map creation fails if you don't have the compute.backendServices.use permission for the backend service in service project B.

Create the target proxy.

For HTTP:

gcloud compute target-http-proxies create HTTP_TARGET_PROXY_NAME \
    --url-map=URL_MAP_NAME \
    --url-map-region=us-west1 \
    --region=us-west1 \
    --project=SERVICE_PROJECT_A_ID

Replace the following:
- HTTP_TARGET_PROXY_NAME: the name for the target HTTP proxy.

For HTTPS:

Create a regional SSL certificate using the gcloud compute ssl-certificates create command.

gcloud compute ssl-certificates create SSL_CERTIFICATE_NAME \
    --certificate=PATH_TO_CERTIFICATE \
    --private-key=PATH_TO_PRIVATE_KEY \
    --region=us-west1 \
    --project=SERVICE_PROJECT_A_ID

Replace the following:
- SSL_CERTIFICATE_NAME: the name for the SSL certificate resource.
- PATH_TO_CERTIFICATE: the path to the local SSL certificate file in PEM format.
- PATH_TO_PRIVATE_KEY: the path to the local SSL certificate private key in PEM format.

Use the regional SSL certificate to create a target proxy with the gcloud compute target-https-proxies create command.

gcloud compute target-https-proxies create HTTPS_TARGET_PROXY_NAME \
    --url-map=URL_MAP_NAME \
    --region=us-west1 \
    --ssl-certificates=SSL_CERTIFICATE_NAME \
    --project=SERVICE_PROJECT_A_ID

Replace the following:
- HTTPS_TARGET_PROXY_NAME: the name for the target HTTPS proxy.

Create the forwarding rule. For cross-project service referencing to work, the forwarding rule must use the same network (lb-network) from the Shared VPC host project that was used to create the backend service.

For HTTP:

gcloud compute forwarding-rules create HTTP_FORWARDING_RULE_NAME \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=projects/HOST_PROJECT_ID/global/networks/lb-network \
    --subnet=projects/HOST_PROJECT_ID/regions/us-west1/subnetworks/lb-frontend-and-backend-subnet \
    --address=IP_ADDRESS_CROSS_REF \
    --ports=80 \
    --region=us-west1 \
    --target-http-proxy=HTTP_TARGET_PROXY_NAME \
    --target-http-proxy-region=us-west1 \
    --project=SERVICE_PROJECT_A_ID

Replace the following:
- HTTP_FORWARDING_RULE_NAME: the name for the forwarding rule that is used to handle HTTP traffic.

For HTTPS:

gcloud compute forwarding-rules create HTTPS_FORWARDING_RULE_NAME \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=projects/HOST_PROJECT_ID/global/networks/lb-network \
    --subnet=projects/HOST_PROJECT_ID/regions/us-west1/subnetworks/lb-frontend-and-backend-subnet \
    --address=IP_ADDRESS_CROSS_REF \
    --ports=443 \
    --region=us-west1 \
    --target-https-proxy=HTTPS_TARGET_PROXY_NAME \
    --target-https-proxy-region=us-west1 \
    --project=SERVICE_PROJECT_A_ID

Replace the following:
- HTTPS_FORWARDING_RULE_NAME: the name for the forwarding rule that is used to handle HTTPS traffic.

To test the load balancer, use the steps described in Test the load balancer.
Terraform
Create the URL map.
# URL map
resource "google_compute_region_url_map" "default" {
  name            = "l7-ilb-map"
  provider        = google-beta
  project         = "my-service-project-a-id"
  region          = "us-west1"
  default_service = google_compute_region_backend_service.default.id
}

Create the target proxy.

For HTTP:

# HTTP target proxy
resource "google_compute_region_target_http_proxy" "default" {
  name     = "l7-ilb-proxy"
  provider = google-beta
  project  = "my-service-project-a-id"
  region   = "us-west1"
  url_map  = google_compute_region_url_map.default.id
}

For HTTPS:

Create a regional SSL certificate.

# Use self-signed SSL certificate
resource "google_compute_region_ssl_certificate" "default" {
  name        = "l7-ilb-cert"
  provider    = google-beta
  project     = "my-service-project-a-id"
  region      = "us-west1"
  private_key = file("sample-private.key") # path to PEM-formatted file
  certificate = file("sample-server.cert") # path to PEM-formatted file
}

Use the regional SSL certificate to create a target proxy.

# HTTPS target proxy
resource "google_compute_region_target_https_proxy" "default" {
  name             = "l7-ilb-proxy"
  provider         = google-beta
  project          = "my-service-project-a-id"
  region           = "us-west1"
  url_map          = google_compute_region_url_map.default.id
  ssl_certificates = [google_compute_region_ssl_certificate.default.id]
}

Create the forwarding rule.

For HTTP:

# Forwarding rule
resource "google_compute_forwarding_rule" "default" {
  name                  = "l7-ilb-forwarding-rule"
  provider              = google-beta
  project               = "my-service-project-a-id"
  region                = "us-west1"
  ip_protocol           = "TCP"
  port_range            = "80"
  load_balancing_scheme = "INTERNAL_MANAGED"
  target                = google_compute_region_target_http_proxy.default.id
  network               = google_compute_network.lb_network.id
  subnetwork            = google_compute_subnetwork.lb_frontend_and_backend_subnet.id
  network_tier          = "PREMIUM"
  depends_on            = [google_compute_subnetwork.lb_frontend_and_backend_subnet]
}

For HTTPS:

# Forwarding rule
resource "google_compute_forwarding_rule" "default" {
  name                  = "l7-ilb-forwarding-rule"
  provider              = google-beta
  project               = "my-service-project-a-id"
  region                = "us-west1"
  ip_protocol           = "TCP"
  port_range            = "443"
  load_balancing_scheme = "INTERNAL_MANAGED"
  target                = google_compute_region_target_https_proxy.default.id
  network               = google_compute_network.lb_network.id
  subnetwork            = google_compute_subnetwork.lb_frontend_and_backend_subnet.id
  network_tier          = "PREMIUM"
  depends_on            = [google_compute_subnetwork.lb_frontend_and_backend_subnet]
}

To test the load balancer, use the steps described in Test the load balancer.
Grant permissions to the Load Balancer Admin to use the backend service
If you want load balancers to reference backend services in other service projects, the Load Balancer Admin must have the compute.backendServices.use permission. To grant this permission, you can use the predefined IAM role called Compute Load Balancer Services User (roles/compute.loadBalancerServiceUser). This role must be granted by the Service Project Admin and can be applied at the project level or at the individual backend service level.
This step is not required if you already granted the required permissions at the backend service level while creating the backend service. You can either skip this section or continue reading to learn how to grant access to all the backend services in this project so that you don't have to grant access every time you create a new backend service.
In this example, a Service Project Admin from service project B must run one of the following commands to grant the compute.backendServices.use permission to a Load Balancer Admin from service project A. This can be done either at the project level (for all backend services in the project) or per backend service.
Console
Project-level permissions
Use the following steps to grant permissions to all backend services in your project.
You require the compute.regionBackendServices.setIamPolicy and the resourcemanager.projects.setIamPolicy permissions to complete this step.
- In the Google Cloud console, go to the IAM page.
- Select your project.
- Click Grant access.
- In the New principals field, enter the principal's email address or other identifier.
- In the Select a role list, select Compute Load Balancer Services User.
- Optional: Add a condition to the role.
- Click Save.
Resource-level permissions for individual backend services
Use the following steps to grant permissions to individual backend services in your project.
You require the compute.regionBackendServices.setIamPolicy permission to complete this step.
- In the Google Cloud console, go to the Backends page.
- From the backends list, select the backend service that you want to grant access to and click Permissions.
- Click Add principal.
- In the New principals field, enter the principal's email address or other identifier.
- In the Select a role list, select Compute Load Balancer Services User.
- Click Save.
gcloud
Project-level permissions
Use the following steps to grant permissions to all backend services in your project.
You require the compute.regionBackendServices.setIamPolicy and the resourcemanager.projects.setIamPolicy permissions to complete this step.

gcloud projects add-iam-policy-binding SERVICE_PROJECT_B_ID \
    --member="user:LOAD_BALANCER_ADMIN" \
    --role="roles/compute.loadBalancerServiceUser"
Resource-level permissions for individual backend services
At the backend service level, Service Project Admins can use either of the following commands to grant the Compute Load Balancer Services User role (roles/compute.loadBalancerServiceUser).
You require the compute.regionBackendServices.setIamPolicy permission to complete this step.

gcloud projects add-iam-policy-binding SERVICE_PROJECT_B_ID \
    --member="user:LOAD_BALANCER_ADMIN" \
    --role="roles/compute.loadBalancerServiceUser" \
    --condition='expression=resource.name=="projects/SERVICE_PROJECT_B_ID/regions/us-west1/backend-services/BACKEND_SERVICE_NAME",title=Shared VPC condition'

or

gcloud compute backend-services add-iam-policy-binding BACKEND_SERVICE_NAME \
    --member="user:LOAD_BALANCER_ADMIN" \
    --role="roles/compute.loadBalancerServiceUser" \
    --project=SERVICE_PROJECT_B_ID \
    --region=us-west1

To use these commands, replace LOAD_BALANCER_ADMIN with the user's principal, for example, test-user@gmail.com.
You can also configure IAM permissions so that they only apply to a subset of regional backend services by using conditions and specifying condition attributes.
To see URL maps referencing a particular Shared VPC backend service, follow these steps:
gcloud
To see resources referencing a regional Shared VPC backend service, run the following command:

gcloud compute backend-services describe BACKEND_SERVICE_NAME \
    --region=REGION

Replace the following:
- BACKEND_SERVICE_NAME: the name of the load balancer backend service
- REGION: the region of the load balancer

In the command output, review the usedBy field, which displays the resources referencing the backend service, as shown in the following example:

id: '123456789'
kind: compute#backendService
loadBalancingScheme: INTERNAL_MANAGED
...
usedBy:
- reference: https://www.googleapis.com/compute/v1/projects/my-project/region/us-central1/urlMaps/my-url-map
What's next
- You can restrict how Shared VPC features such as cross-project service referencing are used in your project by using organization policy constraints. For more information, see Organization policy constraints for Cloud Load Balancing.
- To manage the proxy-only subnet resource required by internal Application Load Balancers, see Proxy-only subnet for internal Application Load Balancers.
- To see how to troubleshoot issues with an internal Application Load Balancer, see Troubleshoot internal Application Load Balancers.
- Clean up the load balancer setup.