Set up a regional external Application Load Balancer with Cloud Run
This page shows you how to deploy a regional external Application Load Balancer with a Cloud Run backend. To set this up, you use a serverless NEG backend for the load balancer.
Before you try this procedure, make sure you are familiar with serverless NEGs and external Application Load Balancers.
This document shows you how to configure an Application Load Balancer that proxies requests to a serverless NEG backend.
Serverless NEGs let you use Cloud Run services with your load balancer. After you configure a load balancer with the serverless NEG backend, requests to the load balancer are routed to the Cloud Run backend.
Before you begin
Install Google Cloud SDK
Install the Google Cloud CLI tool. For conceptual and installation information about the gcloud CLI, see gcloud CLI overview.
If you haven't run the gcloud CLI previously, first run gcloud init to initialize your gcloud CLI directory.
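For example, a minimal first-time setup might look like the following; PROJECT_ID and REGION are placeholders that you replace with your own values.

```
# Initialize the gcloud CLI and authenticate (interactive).
gcloud init

# Optionally set a default project and Cloud Run region for later commands.
gcloud config set project PROJECT_ID
gcloud config set run/region REGION
```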
Deploy a Cloud Run service
The instructions on this page assume you already have a Cloud Run service running.
For the example on this page, you can use any of the Cloud Run quickstarts to deploy a Cloud Run service.
The serverless NEG and the load balancer must be in the same region as the Cloud Run service. You can block external requests that are sent directly to the Cloud Run service's default URLs by restricting ingress to internal and cloud load balancing. For example:

```
gcloud run deploy CLOUD_RUN_SERVICE_NAME \
    --platform=managed \
    --allow-unauthenticated \
    --ingress=internal-and-cloud-load-balancing \
    --region=REGION \
    --image=IMAGE_URL
```
Note the name of the service that you create. The rest of this page shows you how to set up a load balancer that routes requests to this service.
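If you want to confirm the service name and its default URL before you continue, you can list and describe the service with gcloud; replace CLOUD_RUN_SERVICE_NAME and REGION with your own values.

```
# List the Cloud Run services deployed in the region.
gcloud run services list --region=REGION

# Print the default URL of the service that the load balancer will route to.
gcloud run services describe CLOUD_RUN_SERVICE_NAME \
    --region=REGION \
    --format="value(status.url)"
```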
Configure permissions
To follow this guide, you need to create a serverless NEG and a load balancer in a project. You must be either a project owner or editor, or you must have the following Compute Engine IAM roles and permissions:
| Task | Required role |
|---|---|
| Create load balancer and networking components | Compute Network Admin (roles/compute.networkAdmin) |
| Create and modify NEGs | Compute Instance Admin (v1) (roles/compute.instanceAdmin.v1) |
| Create and modify SSL certificates | Compute Security Admin (roles/compute.securityAdmin) |
Configure the VPC network and proxy-only subnet
To configure the network, perform the following tasks:
- Create a VPC network.
- Create a proxy-only subnet.
Create the VPC network
Create a custom mode VPC network.
Note: For this setup, you don't need a subnet for the forwarding rule or the serverless NEG.
- Regional external IPv4 addresses always exist outside of VPC networks. For more information, see Forwarding rules and VPC networks.
- A serverless NEG isn't associated with any VPC network.
Console
In the Google Cloud console, go to the VPC networks page.
Click Create VPC network.
For Name, enter lb-network.
Click Create.
gcloud
Create the custom VPC network by using the gcloud compute networks create command:

```
gcloud compute networks create lb-network --subnet-mode=custom
```
Create a proxy-only subnet
Create a proxy-only subnet for all regional Envoy-based load balancers in a specific region of the lb-network network.
Console
In the Google Cloud console, go to the VPC networks page.
Click the name of the network that you want to add the proxy-only subnet to (lb-network).
Click Add subnet.
In the Name field, enter proxy-only-subnet.
Select a Region.
Set Purpose to Regional Managed Proxy.
Enter an IP address range of 10.129.0.0/23.
Click Add.
gcloud
Create the proxy-only subnet by using the gcloud compute networks subnets create command.
This example uses an IP address range of 10.129.0.0/23 for the proxy-only subnet. You can configure any valid subnet range.

```
gcloud compute networks subnets create proxy-only-subnet \
    --purpose=REGIONAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=REGION \
    --network=lb-network \
    --range=10.129.0.0/23
```
Create the load balancer
The load balancer uses a serverless NEG backend to direct requests to a serverless Cloud Run service.
Traffic going from the load balancer to the serverless NEG backends uses special routes defined outside your VPC that aren't subject to firewall rules. Therefore, if your load balancer only has serverless NEG backends, you don't need to create firewall rules to allow traffic from the proxy-only subnet to the serverless backend.
Console
Select the load balancer type
In the Google Cloud console, go to the Load balancing page.
- Click Create load balancer.
- For Type of load balancer, select Application Load Balancer (HTTP/HTTPS) and click Next.
- For Public facing or internal, select Public facing (external) and click Next.
- For Global or single region deployment, select Best for regional workloads and click Next.
- Click Configure.
Basic configuration
- For the name of the load balancer, enter serverless-lb.
- Select the Network as lb-network.
- Keep the window open to continue.
Configure the frontend
- Before you proceed, make sure you have an SSL certificate.
- Click Frontend configuration.
- Enter a Name.
- To configure a regional external Application Load Balancer, fill in the fields as follows.
- For Protocol, select HTTPS.
- For Network service tier, select Standard.
- For IP version, select IPv4.
- For IP address, select Ephemeral.
- For Port, select 443.
- For Choose certificate repository, select Classic Certificates.
The following example shows you how to create Compute Engine SSL certificates:
- Click Create a new certificate.
- In the Name field, enter a name.
- In the appropriate fields, upload your PEM-formatted files:
- Certificate
- Private key
- Click Create.
- Optional: To create an HTTP load balancer, do the following:
- For Protocol, select HTTP.
- For Network service tier, select Standard.
- For IP version, select IPv4.
- For IP address, select Ephemeral.
- For Port, select 80.
- Click Done.
If you want to test this process without setting up an SSL certificate resource, you can set up an HTTP load balancer.
Configure the backend services
- Click Backend configuration.
- In the Create or select backend services menu, hold the pointer over Backend services, and then select Create a backend service.
- In the Create a backend service window, enter a Name.
- For Backend type, select Serverless network endpoint group.
- Leave Protocol unchanged. This parameter is ignored.
- For Backends > New backend, select Create serverless network endpoint group.
- In the Create serverless network endpoint group window, enter a Name.
- For Region, the region of the load balancer is displayed.
- From the Serverless network endpoint group type field, select Cloud Run. Cloud Run is the only supported type.
- Select Select service name.
- From the Service list, select the Cloud Run service that you want to create a load balancer for.
- Click Done.
- Click Create.
Optional: Configure a default backend security policy. The default security policy throttles traffic over a user-configured threshold. For more information about default security policies, see the Rate limiting overview. For a command-line sketch of an equivalent rate-limiting policy, see the example after these steps.
- To opt out of the Cloud Armor default security policy, select None in the Cloud Armor backend security policy list.
- To configure the Cloud Armor default security policy, select Default security policy in the Cloud Armor backend security policy list.
- In the Policy name field, accept the automatically generated name or enter a name for your security policy.
- In the Request count field, accept the default request count or enter an integer between 1 and 10,000.
- In the Interval field, select an interval.
- In the Enforce on key field, choose one of the following values: All, IP address, or X-Forwarded-For IP address. For more information about these options, see Identifying clients for rate limiting.
- In the Create backend service window, click Create.
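If you prefer to script an equivalent rate-limiting policy instead of using the console, the following is a minimal sketch. It assumes that regional Cloud Armor security policies are available for your backend service; the policy name (rate-limit-policy), rule priority, and the 100-requests-per-60-seconds threshold are illustrative values, and you should verify the exact flags with gcloud compute security-policies rules create --help.

```
# Create a regional Cloud Armor security policy (illustrative name).
gcloud compute security-policies create rate-limit-policy \
    --region=REGION

# Add a throttle rule: allow up to 100 requests per 60 seconds per client IP
# and return HTTP 429 for traffic above the threshold.
gcloud compute security-policies rules create 1000 \
    --security-policy=rate-limit-policy \
    --region=REGION \
    --src-ip-ranges="*" \
    --action=throttle \
    --rate-limit-threshold-count=100 \
    --rate-limit-threshold-interval-sec=60 \
    --conform-action=allow \
    --exceed-action=deny-429 \
    --enforce-on-key=IP

# Attach the policy to the backend service.
gcloud compute backend-services update BACKEND_SERVICE_NAME \
    --security-policy=rate-limit-policy \
    --region=REGION
```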
Configure routing rules
Routing rules determine how your traffic is directed. You can direct traffic to a backend service or a Kubernetes service. Any traffic not explicitly matched with a host and path matcher is sent to the default service.
- Click Simple host and path rule.
- Select a backend service from the Backend list.
Review the configuration
- Click Review and finalize.
- Review the values for Backend, Host and Path rules, and Frontend.
- Optional: Click Equivalent Code to view the REST API request that will be used to create the load balancer.
- Click Create. Wait for the load balancer to be created.
- Click the name of the load balancer (serverless-lb).
- Note the IP address of the load balancer for the next task.
Note: You can use the gcloud compute backend-services update command to disable logging for the backend service if needed.
gcloud
- Reserve a static external IP address for the load balancer.
```
gcloud compute addresses create IP_ADDRESS_NAME \
    --region=REGION \
    --network-tier=STANDARD
```
- Create a serverless NEG for your Cloud Run service:
```
gcloud compute network-endpoint-groups create SERVERLESS_NEG_NAME \
    --region=REGION \
    --network-endpoint-type=serverless \
    --cloud-run-service=CLOUD_RUN_SERVICE_NAME
```
- Create a regional backend service. Set the --protocol to HTTP. This parameter is ignored but it is required because --protocol otherwise defaults to TCP.

```
gcloud compute backend-services create BACKEND_SERVICE_NAME \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --protocol=HTTP \
    --region=REGION
```
- Add the serverless NEG as a backend to the backend service:
```
gcloud compute backend-services add-backend BACKEND_SERVICE_NAME \
    --region=REGION \
    --network-endpoint-group=SERVERLESS_NEG_NAME \
    --network-endpoint-group-region=REGION
```
- Create a regional URL map to route incoming requests to the backend service:
This example URL map only targets one backend service representing a single serverless app, so you don't need to set up host rules or path matchers.

```
gcloud compute url-maps create URL_MAP_NAME \
    --default-service=BACKEND_SERVICE_NAME \
    --region=REGION
```
- Optional: Perform this step if you are using HTTPS between the client and the load balancer. This step isn't required for HTTP load balancers.
You can create either Compute Engine or Certificate Manager certificates. Use any of the following methods to create certificates using Certificate Manager:
- Regional self-managed certificates. For information about creating and using regional self-managed certificates, see Deploy a regional self-managed certificate. Certificate maps aren't supported.
- Regional Google-managed certificates. Certificate maps aren't supported.
The following types of regional Google-managed certificates are supported by Certificate Manager:
- Regional Google-managed certificates with per-project DNS authorization. For more information, see Deploy a regional Google-managed certificate with DNS authorization.
- Regional Google-managed (private) certificates with Certificate Authority Service. For more information, see Deploy a regional Google-managed certificate with Certificate Authority Service.
After you create certificates, attach the certificate directly to the target proxy. A sketch of attaching a Certificate Manager certificate to the target proxy follows this procedure.
To create a regional self-managed SSL certificate resource:

```
gcloud compute ssl-certificates create SSL_CERTIFICATE_NAME \
    --certificate=CRT_FILE_PATH \
    --private-key=KEY_FILE_PATH \
    --region=REGION
```
- Create a regional target proxy to route requests to the URL map.
For an HTTP load balancer, create an HTTP target proxy:

```
gcloud compute target-http-proxies create TARGET_HTTP_PROXY_NAME \
    --url-map=URL_MAP_NAME \
    --region=REGION
```

For an HTTPS load balancer, create an HTTPS target proxy. The proxy is the portion of the load balancer that holds the SSL certificate for HTTPS Load Balancing, so you also load your certificate in this step.

```
gcloud compute target-https-proxies create TARGET_HTTPS_PROXY_NAME \
    --ssl-certificates=SSL_CERTIFICATE_NAME \
    --url-map=URL_MAP_NAME \
    --region=REGION
```
- Create a forwarding rule to route incoming requests to the proxy. Use the --address flag to assign the static IP address that you reserved earlier (IP_ADDRESS_NAME) to the forwarding rule.

For an HTTP load balancer:

```
gcloud compute forwarding-rules create HTTP_FORWARDING_RULE_NAME \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --network-tier=STANDARD \
    --network=lb-network \
    --address=IP_ADDRESS_NAME \
    --target-http-proxy=TARGET_HTTP_PROXY_NAME \
    --target-http-proxy-region=REGION \
    --region=REGION \
    --ports=80
```

For an HTTPS load balancer:

```
gcloud compute forwarding-rules create HTTPS_FORWARDING_RULE_NAME \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --network-tier=STANDARD \
    --network=lb-network \
    --address=IP_ADDRESS_NAME \
    --target-https-proxy=TARGET_HTTPS_PROXY_NAME \
    --target-https-proxy-region=REGION \
    --region=REGION \
    --ports=443
```
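If you created a Certificate Manager certificate (for example, a regional Google-managed certificate) instead of a Compute Engine SSL certificate, you attach it when you create the target HTTPS proxy. The following is a sketch, assuming a Certificate Manager certificate named CERTIFICATE_NAME already exists in the same region:

```
# Create the target HTTPS proxy with a regional Certificate Manager certificate
# instead of a Compute Engine SSL certificate.
gcloud compute target-https-proxies create TARGET_HTTPS_PROXY_NAME \
    --certificate-manager-certificates=CERTIFICATE_NAME \
    --url-map=URL_MAP_NAME \
    --region=REGION
```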
Test the load balancer
Now that you have configured your load balancer, you can start sending traffic to the load balancer's IP address.
In the Google Cloud console, go to the Load balancing page.
Click the load balancer you just created.
Note the IP Address of the load balancer.
For an HTTP load balancer, you can test your load balancer using a web browser by going to http://IP_ADDRESS. Replace IP_ADDRESS with the load balancer's IP address. You should be directed to the Cloud Run service homepage.
For an HTTPS load balancer, you can test your load balancer using a web browser by going to https://IP_ADDRESS. Replace IP_ADDRESS with the load balancer's IP address. You are directed to the Cloud Run service homepage.
If you used a self-signed certificate for testing, your browser displays a warning. You must explicitly instruct your browser to accept a self-signed certificate. Click through the warning to see the actual page.
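You can also test from the command line. The following sketch reads the load balancer's IP address from the HTTPS forwarding rule and sends a request with curl; the --insecure flag is only needed if you are testing with a self-signed certificate.

```
# Get the load balancer's IP address from the HTTPS forwarding rule.
IP_ADDRESS=$(gcloud compute forwarding-rules describe HTTPS_FORWARDING_RULE_NAME \
    --region=REGION \
    --format="get(IPAddress)")

# Send a test request. Omit --insecure if you are using a trusted certificate.
curl --insecure "https://${IP_ADDRESS}"
```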
Additional configuration options
This section expands on the configuration example to provide alternative and additional configuration options. All of the tasks are optional. You can perform them in any order.
Using a URL mask
When creating a serverless NEG, instead of selecting a specific Cloud Run service, you can use a URL mask to point to multiple services serving at the same domain. A URL mask is a template of your URL schema. The serverless NEG uses this template to extract the service name from the incoming request's URL and map the request to the appropriate service.
URL masks are particularly useful if your service is mapped to a custom domain rather than to the default address that Google Cloud provides for the deployed service. A URL mask lets you target multiple services and versions with a single rule even when your application is using a custom URL pattern.
If you haven't already done so, make sure you read the Serverless NEGs overview: URL masks.
Construct a URL mask
To construct a URL mask for your load balancer, start with the URL of your service. This example uses a sample serverless app running at https://example.com/login. This is the URL where the app's login service is served.
- Remove the http or https from the URL. You are left with example.com/login.
- Replace the service name with a placeholder for the URL mask.
- Cloud Run: Replace the Cloud Run service name with the placeholder <service>. If the Cloud Run service has a tag associated with it, replace the tag name with the placeholder <tag>. In this example, the URL mask you are left with is example.com/<service>.
- Optional: If the service name can be extracted from the path portion of the URL, the domain can be omitted. The path part of the URL mask is distinguished by the first slash (/) character. If a slash (/) is not present in the URL mask, the mask is understood to represent the host only. Therefore, for this example, the URL mask can be reduced to /<service>.
- Similarly, if <service> can be extracted from the host part of the URL, you can omit the path altogether from the URL mask. You can also omit any host or subdomain components that come before the first placeholder as well as any path components that come after the last placeholder. In such cases, the placeholder captures the required information for the component.
Here are a few more examples that demonstrate these rules:
This table assumes that you have a custom domain called example.com and all your Cloud Run services are being mapped to this domain.
| Service, Tag name | Cloud Run custom domain URL | URL mask |
|---|---|---|
| service: login | https://login-home.example.com/web | <service>-home.example.com |
| service: login | https://example.com/login/web | example.com/<service> or /<service> |
| service: login, tag: test | https://test.login.example.com/web | <tag>.<service>.example.com |
| service: login, tag: test | https://example.com/home/login/test | example.com/home/<service>/<tag> or /home/<service>/<tag> |
| service: login, tag: test | https://test.example.com/home/login/web | <tag>.example.com/home/<service> |
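For example, a serverless NEG that uses the tag-based mask from the third row of this table could be created as follows; this mirrors the gcloud command shown later in this section, with a different mask value.

```
# Create a serverless NEG whose URL mask extracts the tag and service name
# from the host part of the request URL.
gcloud compute network-endpoint-groups create SERVERLESS_NEG_MASK_NAME \
    --region=REGION \
    --network-endpoint-type=serverless \
    --cloud-run-url-mask="<tag>.<service>.example.com"
```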
Creating a serverless NEG with a URL mask
Console
For a new load balancer, you can use the same end-to-end process as described previously in this document. When configuring the backend service, instead of selecting a specific service, enter a URL mask.
If you have an existing load balancer, you can edit the backend configuration and have the serverless NEG point to a URL mask instead of a specific service.
To add a URL mask-based serverless NEG to an existing backend service, do the following:
- In the Google Cloud console, go to the Load balancing page.
Go to Load balancing
- Click the name of the load balancer that has the backend service you want to edit.
- On the Load balancer details page, click Edit.
- On the Edit regional external Application Load Balancer page, click Backend configuration.
- On the Backend configuration page, click Edit for the backend service you want to modify.
- Click Add backend.
- Select Create Serverless network endpoint group.
- For the Name, enter helloworld-serverless-neg.
- Under Region, the region of the load balancer is displayed.
- Under Serverless network endpoint group type, Cloud Run is the only supported network endpoint group type.
- Select Use URL Mask.
- Enter a URL mask. For information about how to create a URL mask, see Constructing a URL mask.
- Click Create.
- In the New backend, click Done.
- Click Update.
gcloud
To create a serverless NEG with a sample URL mask of example.com/<service>:

```
gcloud compute network-endpoint-groups create SERVERLESS_NEG_MASK_NAME \
    --region=REGION \
    --network-endpoint-type=serverless \
    --cloud-run-url-mask="example.com/<service>"
```
Deleting a serverless NEG
A network endpoint group cannot be deleted if it is attached to a backend service. Before you delete a NEG, ensure that it is detached from the backend service.
Console
- To make sure the serverless NEG you want to delete is not in use by any backend service, go to the Backend services tab on the Load balancing components page.
Go to Backend services
- If the serverless NEG is in use, do the following:
- Click the name of the backend service that is using the serverless NEG.
- Click Edit.
- From the list of Backends, click to remove the serverless NEG backend from the backend service.
- Click Save.
- Go to the Network endpoint group page in the Google Cloud console.
Go to Network endpoint group
- Select the checkbox for the serverless NEG you want to delete.
- ClickDelete.
- ClickDelete again to confirm.
gcloud
To remove a serverless NEG from a backend service, you must specify the region where the NEG was created.

```
gcloud compute backend-services remove-backend BACKEND_SERVICE_NAME \
    --network-endpoint-group=SERVERLESS_NEG_NAME \
    --network-endpoint-group-region=REGION \
    --region=REGION
```
To delete the serverless NEG:
```
gcloud compute network-endpoint-groups delete SERVERLESS_NEG_NAME \
    --region=REGION
```
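To verify that the NEG was deleted, you can list the network endpoint groups that remain in the region:

```
# The deleted serverless NEG should no longer appear in this list.
gcloud compute network-endpoint-groups list --regions=REGION
```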
What's next
- Using logging and monitoring
- Troubleshooting serverless NEG issues
- Clean up the load balancer setup
- Using a Terraform module for an external HTTPS load balancer with a Cloud Run backend