Set up a classic Application Load Balancer with Cloud Storage buckets

This document shows you how to create an external Application Load Balancer to route requests for static content to Cloud Storage buckets. After you configure a load balancer with the backend buckets, requests to URL paths that begin with /love-to-fetch are sent to the us-east1 Cloud Storage bucket, and all other requests are sent to the europe-north1 Cloud Storage bucket, regardless of the user's region.

If your backends serve dynamic content over HTTP(S), consider using backend services instead of backend buckets.




Cloud Storage buckets as load balancer backends

An external Application Load Balancer uses a URL map to direct traffic from specified URL paths to your backends.

In the following diagram, the load balancer sends traffic with a path of /love-to-fetch/ to a Cloud Storage bucket in the us-east1 region. All other requests go to a Cloud Storage bucket in the europe-north1 region.

Figure: The load balancer distributes traffic to Cloud Storage buckets based on the request path.

By default, Cloud Storage uses the same cache that Cloud CDN uses. If you enable Cloud CDN on the backend bucket, you can use Cloud CDN controls on your content. Cloud CDN controls include, for example, cache modes, signed URLs, and invalidation. Cloud CDN also lets you cache large content (> 10 MB). If you don't enable Cloud CDN on your backend bucket, you can only use origin Cache-Control headers, as set in the Cloud Storage object metadata, to control caching for smaller content.
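For instance, without Cloud CDN enabled, per-object caching can be tuned only through object metadata. A sketch of setting a Cache-Control header with gcloud follows; the bucket, object path, and max-age value are illustrative:

```shell
# Set a one-hour public cache lifetime on an object (names are placeholders).
gcloud storage objects update gs://BUCKET_2_NAME/love-to-fetch/two-dogs.jpg \
    --cache-control="public, max-age=3600"
```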

Before you begin

Make sure that your setup meets the following prerequisites. If you are using the gcloud storage utility, you can install it by using the instructions in Discover object storage with the gcloud tool.

Set a default project

Console

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Roles required to select or create a project

    • Select a project: Selecting a project doesn't require a specific IAM role—you can select any project that you've been granted a role on.
    • Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.
    Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.

    Go to project selector

  3. Verify that billing is enabled for your Google Cloud project.


gcloud

gcloud config set project PROJECT_ID

Replace PROJECT_ID with the project that you are using for this guide.

Terraform

export GOOGLE_CLOUD_PROJECT=PROJECT_ID

Permissions

To follow this guide, you need to create Cloud Storage buckets and a load balancer in a project. You should be either a project owner or editor, or you should have the following Compute Engine IAM roles:

Task: Create load balancer components
Required role: Network Admin

Task: Create Cloud Storage buckets
Required role: Storage Admin


Set up an SSL certificate resource

For an HTTPS load balancer, create an SSL certificate resource as described in the SSL certificates documentation.

We recommend using a Google-managed certificate.

This example assumes that you already have an SSL certificate resource named www-ssl-cert.

Warning: Don't use a self-signed certificate for production purposes.
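If you don't have the www-ssl-cert resource yet, one way to create a Google-managed certificate with gcloud is sketched below; www.example.com is a placeholder for your own domain:

```shell
# Create a global Google-managed SSL certificate (domain is a placeholder).
gcloud compute ssl-certificates create www-ssl-cert \
    --domains=www.example.com \
    --global
```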

Optional: Use BYOIP addresses

With bring your own IP (BYOIP), you can import your own public addresses to Google Cloud to use the addresses with Google Cloud resources. For example, if you import your own IPv4 addresses, you can assign one to the forwarding rule when you configure your load balancer. When you follow the instructions in this document to create the load balancer, provide the BYOIP address as the IP address.

For more information about using BYOIP, see Bring your own IP addresses.

Prepare your Cloud Storage buckets and content

The process for preparing your Cloud Storage buckets is as follows:

  • Create the buckets.

  • Copy content to the buckets.

  • Provide public access to the buckets.

Create Cloud Storage buckets

In this example, you create two Cloud Storage buckets for the load balancer to access. For production deployments, we recommend that you choose a multi-region bucket, which automatically replicates objects across multiple Google Cloud regions. This can improve the availability of your content and the failure tolerance of your application.

Note the names of the Cloud Storage buckets you create, as they're used later. In this guide, they're referred to as BUCKET_1_NAME and BUCKET_2_NAME.

Console

  1. In the Google Cloud console, go to the Cloud Storage Buckets page.

    Go to Cloud Storage Buckets

  2. Click Create bucket.

  3. In the Name your bucket box, enter a globally unique name that follows the naming guidelines.

  4. Click Choose where to store your data.

  5. Set Location type to Region.

  6. Set Location to europe-north1. This is BUCKET_1_NAME in this guide.

  7. Click Create.

  8. Click Buckets to return to the Cloud Storage Buckets page. Use these instructions to create a second bucket, but set the Location to us-east1. This is BUCKET_2_NAME in this guide.

gcloud

gcloud storage buckets create gs://BUCKET_1_NAME \
    --project=PROJECT_ID \
    --default-storage-class=standard \
    --location=europe-north1 \
    --uniform-bucket-level-access
gcloud storage buckets create gs://BUCKET_2_NAME \
    --project=PROJECT_ID \
    --default-storage-class=standard \
    --location=us-east1 \
    --uniform-bucket-level-access

Replace BUCKET_1_NAME and BUCKET_2_NAME with the names of the buckets that you want to create.

Terraform

To create the buckets, use the google_storage_bucket resource.

# Create Cloud Storage buckets
resource "random_id" "bucket_prefix" {
  byte_length = 8
}

resource "google_storage_bucket" "bucket_1" {
  name                        = "${random_id.bucket_prefix.hex}-bucket-1"
  location                    = "europe-north1"
  uniform_bucket_level_access = true
  storage_class               = "STANDARD"
  // Delete the bucket and its contents on destroy.
  force_destroy = true
}

resource "google_storage_bucket" "bucket_2" {
  name                        = "${random_id.bucket_prefix.hex}-bucket-2"
  location                    = "us-east1"
  uniform_bucket_level_access = true
  storage_class               = "STANDARD"
  // Delete the bucket and its contents on destroy.
  force_destroy = true
}

To learn how to apply or remove a Terraform configuration, seeBasic Terraform commands.

Transfer content to your Cloud Storage buckets

So that you can test the setup later, copy the following images from a public Cloud Storage bucket to your own Cloud Storage buckets.

gcloud

  1. Click Activate Cloud Shell.

  2. Run the following commands in Cloud Shell, replacing the bucket name variables with your Cloud Storage bucket names:

gcloud storage cp gs://gcp-external-http-lb-with-bucket/three-cats.jpg gs://BUCKET_1_NAME/never-fetch/
gcloud storage cp gs://gcp-external-http-lb-with-bucket/two-dogs.jpg gs://BUCKET_2_NAME/love-to-fetch/

Terraform

To copy items into the buckets, you can use the google_storage_bucket_object resource.

resource "google_storage_bucket_object" "cat_image" {
  name         = "never-fetch/three-cats.jpg"
  source       = "images/three-cats.jpg"
  content_type = "image/jpeg"
  bucket       = google_storage_bucket.bucket_1.name
}

resource "google_storage_bucket_object" "dog_image" {
  name         = "love-to-fetch/two-dogs.jpg"
  source       = "images/two-dogs.jpg"
  content_type = "image/jpeg"
  bucket       = google_storage_bucket.bucket_2.name
}

Alternatively, use the null_resource resource.

resource "null_resource" "upload_cat_image" {
  provisioner "local-exec" {
    command = "gcloud storage cp gs://gcp-external-http-lb-with-bucket/three-cats.jpg gs://${google_storage_bucket.bucket_1.name}/never-fetch/"
  }
}

resource "null_resource" "upload_dog_image" {
  provisioner "local-exec" {
    command = "gcloud storage cp gs://gcp-external-http-lb-with-bucket/two-dogs.jpg gs://${google_storage_bucket.bucket_2.name}/love-to-fetch/"
  }
}

In the Google Cloud console, click Refresh on each bucket's details page to verify that the files copied successfully.

Make your Cloud Storage buckets publicly readable

When you make Cloud Storage buckets publicly readable, anyone on the internet can list and view their objects, and view their metadata (excluding ACLs). Don't include sensitive information in your public buckets.

To reduce the likelihood of accidental exposure of sensitive information, don'tstore public objects and sensitive data in the same bucket.
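As a quick audit, you can list a bucket's IAM policy and check which principals (such as allUsers) have access. This read-only check uses a placeholder bucket name:

```shell
# Inspect the bucket's IAM bindings; look for allUsers entries.
gcloud storage buckets get-iam-policy gs://BUCKET_1_NAME
```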

Console

To grant all users access to view objects in your buckets, repeat thefollowing procedure for each bucket:

  1. In the Google Cloud console, go to the Cloud Storage Buckets page.

    Go to Cloud Storage Buckets

  2. Click the bucket name, followed by the Permissions tab.

  3. Click Add.

  4. In the New principals box, enter allUsers.

  5. In the Select a role box, select Cloud Storage > Storage Object Viewer.

  6. Click Save.

  7. Click Allow public access.

gcloud

To grant all users access to view objects in your buckets, run the following commands:

gcloud storage buckets add-iam-policy-binding gs://BUCKET_1_NAME \
    --member=allUsers \
    --role=roles/storage.objectViewer
gcloud storage buckets add-iam-policy-binding gs://BUCKET_2_NAME \
    --member=allUsers \
    --role=roles/storage.objectViewer

Terraform

To grant all users access to view objects in your buckets, use the google_storage_bucket_iam_member resource and specify the allUsers member.

# Make buckets public
resource "google_storage_bucket_iam_member" "bucket_1" {
  bucket = google_storage_bucket.bucket_1.name
  role   = "roles/storage.objectViewer"
  member = "allUsers"
}

resource "google_storage_bucket_iam_member" "bucket_2" {
  bucket = google_storage_bucket.bucket_2.name
  role   = "roles/storage.objectViewer"
  member = "allUsers"
}

Reserve an external IP address

After you've set up your Cloud Storage buckets, you can reserve a global static external IP address that your audience uses to reach your load balancer.

This step is optional but recommended, as a static external IP address provides a single address to point your domain at.

Note: You can skip this step and have Google Cloud associate an ephemeral IP address with your load balancer's forwarding rule. An ephemeral IP address remains constant while the forwarding rule exists. If you need to delete the forwarding rule and re-add it, the forwarding rule might receive a new IP address. If needed, you can make an ephemeral IP address static.
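If a forwarding rule already has an ephemeral external IPv4 address, one way to promote it to a static address is to reserve that same address explicitly; ADDRESS below is a placeholder for the ephemeral IPv4 value:

```shell
# Promote an in-use ephemeral IPv4 address to a static reserved address.
gcloud compute addresses create example-ip \
    --addresses=ADDRESS \
    --global
```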

Console

  1. In the Google Cloud console, go to the External IP addresses page.

    Go to External IP addresses

  2. Click Reserve static address.

  3. In the Name box, enter example-ip.

  4. Set the Network Service Tier to Premium.

  5. Set the IP version to IPv4.

  6. Set the Type to Global.

  7. Click Reserve.

gcloud

gcloud compute addresses create example-ip \
    --network-tier=PREMIUM \
    --ip-version=IPV4 \
    --global

Note the IPv4 address that was reserved:

gcloud compute addresses describe example-ip \
    --format="get(address)" \
    --global

Terraform

To reserve an external IP address, use the google_compute_global_address resource.

# Reserve IP address
resource "google_compute_global_address" "default" {
  name = "example-ip"
}

Create an external Application Load Balancer with backend buckets

These instructions cover creating either an HTTP or HTTPS load balancer. To create an HTTPS load balancer, you must add an SSL certificate resource to the load balancer's frontend. For more information, see the SSL certificates overview.

Console

Select the load balancer type

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. Click Create load balancer.
  3. For Type of load balancer, select Application Load Balancer (HTTP/HTTPS) and click Next.
  4. For Public facing or internal, select Public facing (external) and click Next.
  5. For Global or single region deployment, select Best for global workloads and click Next.
  6. For Load balancer generation, select Classic Application Load Balancer and click Next.
  7. Click Configure.

Basic configuration

  1. In the Name box, enter http-lb.

Configure the backend

  1. Click Backend configuration.

  2. Click the Backend services and backend buckets box, and then click Create a backend bucket.

  3. In the Backend bucket name box, enter cats.

  4. In the Cloud Storage bucket box, click Browse.

  5. Select BUCKET_1_NAME, and then click Select. Creating the cats backend bucket first makes it the default, where all unmatched traffic requests are directed. You can't change a default backend bucket's redirect rules in the load balancer.

  6. Click Create.

  7. Use the same process to create a backend bucket named dogs, and select BUCKET_2_NAME.

  8. Click OK.

Configure routing rules

Routing rules determine how your traffic is directed. To configure routing, you set up host rules and path matchers, which are configuration components of an external Application Load Balancer's URL map. To set up the rules for this example:

  1. Click Host and path rules.
  2. For dogs, enter * in the Hosts field, and /love-to-fetch/* in the Paths field.

Configure the frontend

  1. Click Frontend configuration.

  2. Verify that the following options are configured with these values:

    • Protocol: HTTP
    • Network Service Tier: Premium
    • IP version: IPv4
    • IP address: example-ip
    • Port: 80

    If you want to create an HTTPS load balancer instead of an HTTP load balancer, you must have an SSL certificate (gcloud compute ssl-certificates list), and you must fill in the fields as follows:

    • Protocol: HTTP(S)
    • Network Service Tier: Premium
    • IP version: IPv4
    • IP address: example-ip
    • Port: 443
    • Certificate: Select the www-ssl-cert certificate that you created in the Set up an SSL certificate resource section, or create a new certificate.
    • Optional: Enable HTTP to HTTPS Redirect: Select this checkbox to enable redirects.

    Enabling this checkbox creates an additional partial HTTP load balancer that uses the same IP address as your HTTPS load balancer and redirects HTTP requests to your load balancer's HTTPS frontend.

    This checkbox can only be selected when the HTTPS protocol is selected and a reserved IP address is used.

  3. Click Done.

Review the configuration

  1. Click Review and finalize.

  2. Review the Frontend, Host and path rules, and Backend buckets.

  3. Click Create and wait for the load balancer to be created.

  4. Click the name of the load balancer (http-lb).

  5. Note the IP address of the load balancer for the next task. In this guide, it's referred to as IP_ADDRESS.

gcloud

Configure the backend

gcloud compute backend-buckets create cats \
    --gcs-bucket-name=BUCKET_1_NAME
gcloud compute backend-buckets create dogs \
    --gcs-bucket-name=BUCKET_2_NAME

Configure the URL map

gcloud compute url-maps create http-lb \
    --default-backend-bucket=cats
gcloud compute url-maps add-path-matcher http-lb \
    --path-matcher-name=path-matcher-2 \
    --new-hosts=* \
    --backend-bucket-path-rules="/love-to-fetch/*=dogs" \
    --default-backend-bucket=cats
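As an optional read-only check, describing the URL map shows whether the default backend bucket and the path matcher were recorded as expected:

```shell
# Display the URL map, including hostRules and pathMatchers.
gcloud compute url-maps describe http-lb
```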

Configure the target proxy

gcloud compute target-http-proxies create http-lb-proxy \
    --url-map=http-lb

Configure the forwarding rule

gcloud compute forwarding-rules create http-lb-forwarding-rule \
    --load-balancing-scheme=EXTERNAL \
    --network-tier=PREMIUM \
    --address=example-ip \
    --global \
    --target-http-proxy=http-lb-proxy \
    --ports=80
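For the HTTPS variant described in the console steps, the frontend would use a target HTTPS proxy and port 443 instead. A sketch, assuming the www-ssl-cert resource and the http-lb URL map already exist:

```shell
# Attach the SSL certificate through a target HTTPS proxy.
gcloud compute target-https-proxies create https-lb-proxy \
    --url-map=http-lb \
    --ssl-certificates=www-ssl-cert

# Forward TLS traffic on port 443 to the HTTPS proxy.
gcloud compute forwarding-rules create https-lb-forwarding-rule \
    --load-balancing-scheme=EXTERNAL \
    --network-tier=PREMIUM \
    --address=example-ip \
    --global \
    --target-https-proxy=https-lb-proxy \
    --ports=443
```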

Terraform

To create the load balancer, use the following Terraform resources.

Configure the backend

To create the backend, use the google_compute_backend_bucket resource.

# Create LB backend buckets
resource "google_compute_backend_bucket" "bucket_1" {
  name        = "cats"
  description = "Contains cat image"
  bucket_name = google_storage_bucket.bucket_1.name
}

resource "google_compute_backend_bucket" "bucket_2" {
  name        = "dogs"
  description = "Contains dog image"
  bucket_name = google_storage_bucket.bucket_2.name
}

Configure the URL map

To create the URL map, use the google_compute_url_map resource.

# Create URL map
resource "google_compute_url_map" "default" {
  name            = "http-lb"
  default_service = google_compute_backend_bucket.bucket_1.id

  host_rule {
    hosts        = ["*"]
    path_matcher = "path-matcher-2"
  }

  path_matcher {
    name            = "path-matcher-2"
    default_service = google_compute_backend_bucket.bucket_1.id

    path_rule {
      paths   = ["/love-to-fetch/*"]
      service = google_compute_backend_bucket.bucket_2.id
    }
  }
}

Configure the target proxy

To create the target HTTP proxy, use the google_compute_target_http_proxy resource.

# Create HTTP target proxy
resource "google_compute_target_http_proxy" "default" {
  name    = "http-lb-proxy"
  url_map = google_compute_url_map.default.id
}

Configure the forwarding rule

To create the forwarding rule, use the google_compute_global_forwarding_rule resource.

# Create forwarding rule
resource "google_compute_global_forwarding_rule" "default" {
  name                  = "http-lb-forwarding-rule"
  ip_protocol           = "TCP"
  load_balancing_scheme = "EXTERNAL_MANAGED"
  port_range            = "80"
  target                = google_compute_target_http_proxy.default.id
  ip_address            = google_compute_global_address.default.id
}

Note: To change the mode to classic Application Load Balancer, set the load_balancing_scheme attribute to "EXTERNAL" instead of "EXTERNAL_MANAGED".

To learn how to apply or remove a Terraform configuration, seeBasic Terraform commands.

Send traffic to your load balancer

Several minutes after you have configured your load balancer, you can start sending traffic to the load balancer's IP address.

Console

In a web browser, go to the following addresses to test your load balancer, replacing IP_ADDRESS with the load balancer's IP address:

  • http://IP_ADDRESS/love-to-fetch/two-dogs.jpg

  • http://IP_ADDRESS/never-fetch/three-cats.jpg

If you've set up an HTTP load balancer, make sure your browser doesn'tautomatically redirect to HTTPS.

gcloud

Use the curl command to test the response from the following URLs. Replace IP_ADDRESS with the load balancer's IPv4 address:

curl http://IP_ADDRESS/love-to-fetch/two-dogs.jpg
curl http://IP_ADDRESS/never-fetch/three-cats.jpg
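To verify routing without downloading the images, you can print only the HTTP status codes; a 200 for both URLs indicates that the URL map matched each path to a backend bucket:

```shell
# Print only the HTTP status code for each test URL.
curl -s -o /dev/null -w "%{http_code}\n" http://IP_ADDRESS/love-to-fetch/two-dogs.jpg
curl -s -o /dev/null -w "%{http_code}\n" http://IP_ADDRESS/never-fetch/three-cats.jpg
```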

Query string parameters for Cloud Storage XML API

When certain query string parameters are included in requests sent to a backend bucket through an Application Load Balancer, the client receives an HTTP 404 response with an "Unsupported query parameter" error. This happens because the Cloud Storage XML API doesn't support these parameters when the request comes from an Application Load Balancer.

The following list summarizes how the Cloud Storage XML API responds to various query parameters when requests are routed through an Application Load Balancer. Parameters are grouped by observed behavior to help identify which are supported, ignored, or rejected in this context.

  • Supported parameters: generation, prefix, marker, max-keys. Adding these parameters (with appropriate values) works as described in the Cloud Storage XML API documentation. The API returns a standard HTTP response.

  • Ignored parameters: acl, billing, compose, delimiter, encryption, encryptionConfig, response-content-disposition, response-content-type, tagging, versions, websiteConfig. Adding these parameters has no effect. If the load balancer passes these parameters to Cloud Storage, the Cloud Storage XML API ignores them and responds as though the parameters don't exist.

  • Rejected parameters: cors, lifecycle, location, logging, storageClass, versioning. The Cloud Storage XML API returns an "Unsupported query parameter" error.

Limitations

  • Backend buckets are only supported with global external Application Load Balancers and classic Application Load Balancers. They aren't supported by the regional external Application Load Balancer or any other load balancer type.
  • Backend buckets aren't supported with Identity-Aware Proxy.
  • The classic Application Load Balancer doesn't fully support uploads to Cloud Storage buckets. In particular, all query parameters on the request are dropped when uploading to Cloud Storage.
  • The load balancer doesn't support the use of signed URLs unless Cloud CDN is enabled.

    Note: Signed URLs used in Cloud Storage are different from signed URLs used with Cloud CDN. The global external Application Load Balancer supports only Cloud CDN signed URLs. Cloud Storage signed URLs are not supported through the load balancer, regardless of whether Cloud CDN is enabled.


Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2025-12-15 UTC.