Host a static website
This page describes how to configure a Cloud Storage bucket to host a static website for a domain you own. Static web pages can contain client-side technologies such as HTML, CSS, and JavaScript. They cannot contain dynamic content such as server-side scripts like PHP.
Overview
Because Cloud Storage doesn't support custom domains with HTTPS on its own, this tutorial uses Cloud Storage with an external Application Load Balancer to serve content from a custom domain over HTTPS. For more ways to serve content from a custom domain over HTTPS, see troubleshooting for HTTPS serving. You can also use Cloud Storage to serve custom domain content over HTTP, which doesn't require a load balancer.
For examples and tips on static web pages, including how to host static assets for a dynamic website, see the Static website page.
Caution: This tutorial makes content available to the public internet. We recommend that you don't serve content that contains sensitive or private data from your Cloud Storage bucket.

The instructions in this page describe how to perform the following steps:
Upload and share your site's files.
Set up a load balancer and SSL certificate.
Connect your load balancer to your bucket.
Point your domain to your load balancer using an A record.
Test the website.
Pricing
The instructions in this page use billable components of Google Cloud. See the Monitoring your charges tip for details on what charges may be incurred when hosting a static website.
Limitations
You can host a static website using a bucket whose objects are readable to the public. You cannot host a static website using a bucket that has public access prevention enabled.

To host a static website using Cloud Storage, you can use either of the following methods:

- Create a new bucket whose data can be accessed publicly. During bucket creation, clear the box labeled Enforce public access prevention on this bucket. After creating the bucket, grant the Storage Object Viewer role to the allUsers principal. For more information, see Create a bucket.
- Make the data of an existing bucket public. For more information, see Share your files.
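If you prefer the command line, the following is a minimal gcloud sketch of both steps, assuming a bucket named my-static-assets (substitute your own bucket name):

# Disable public access prevention on the bucket so its data can be shared publicly.
gcloud storage buckets update gs://my-static-assets --no-public-access-prevention

# Grant the Storage Object Viewer role to the allUsers principal.
gcloud storage buckets add-iam-policy-binding gs://my-static-assets \
  --member=allUsers --role=roles/storage.objectViewer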
Before you begin
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.

Roles required to select or create a project
- Select a project: Selecting a project doesn't require a specific IAM role—you can select any project that you've been granted a role on.
- Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.
Verify that billing is enabled for your Google Cloud project.
- Enable the Compute Engine API for your project.
- Have the following Identity and Access Management roles: Storage Admin and Compute Network Admin.
- Have a domain that you own or manage. If you don't have an existing domain, there are many services through which you can register a new domain, such as Cloud Domains. This tutorial uses the domain example.com.
- Have a few website files you want to serve. This tutorial works best if you have at least an index page (index.html) and a 404 page (404.html).
- Have a Cloud Storage bucket for storing the files you want to serve. If you don't currently have a bucket, create a bucket (a command-line sketch for creating one follows this list).
- (Optional) If you want your Cloud Storage bucket to have the same name as your domain, you must verify that you own or manage the domain that you will be using. Make sure you are verifying the top-level domain, such as example.com, and not a subdomain, such as www.example.com. If you purchased your domain through Cloud Domains, verification is automatic.
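If you still need to create a bucket, the following is a minimal gcloud sketch; the bucket name my-static-assets and the US location are examples only:

# Create a bucket with uniform bucket-level access in the US multi-region.
gcloud storage buckets create gs://my-static-assets --location=US --uniform-bucket-level-access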
Upload your site's files
Add the files you want your website to serve to the bucket:
Console
- In the Google Cloud console, go to the Cloud Storage Buckets page.
In the list of buckets, click the name of the bucket that you created.
The Bucket details page opens with the Objects tab selected.

Click the Upload files button.
In the file dialog, browse to the desired file and select it.
After the upload completes, you should see the filename along with file information displayed in the bucket.

To learn how to get detailed error information about failed Cloud Storage operations in the Google Cloud console, see Troubleshooting.
Command line
Use the gcloud storage cp command to copy files to your bucket. For example, to copy the file index.html from its current location Desktop to the bucket my-static-assets:
gcloud storage cp Desktop/index.html gs://my-static-assets
If successful, the response looks like the following example:
Completed files 1/1 | 164.3kiB/164.3kiB
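To upload many files at once instead of copying them one at a time, a hedged sketch using the same CLI follows; the local directory name www-site is hypothetical:

# Synchronize the contents of a local directory with the bucket root.
gcloud storage rsync --recursive www-site gs://my-static-assets

# Or copy the directory itself, which places objects under a www-site/ prefix.
gcloud storage cp --recursive www-site gs://my-static-assets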
Client libraries
For more information, see the Cloud Storage API reference documentation for your language (C++, C#, Go, Java, Node.js, PHP, Python, and Ruby). To authenticate to Cloud Storage, set up Application Default Credentials; for details, see Set up authentication for client libraries. For Java, Node.js, and Python, the samples below show how to upload an individual object, upload multiple objects concurrently, and upload all objects with a common prefix concurrently.

C++
namespace gcs = ::google::cloud::storage;
using ::google::cloud::StatusOr;
[](gcs::Client client, std::string const& file_name,
   std::string const& bucket_name, std::string const& object_name) {
  // Note that the client library automatically computes a hash on the
  // client-side to verify data integrity during transmission.
  StatusOr<gcs::ObjectMetadata> metadata = client.UploadFile(
      file_name, bucket_name, object_name, gcs::IfGenerationMatch(0));
  if (!metadata) throw std::move(metadata).status();

  std::cout << "Uploaded " << file_name << " to object " << metadata->name()
            << " in bucket " << metadata->bucket()
            << "\nFull metadata: " << *metadata << "\n";
}

C#
using Google.Cloud.Storage.V1;
using System;
using System.IO;

public class UploadFileSample
{
    public void UploadFile(
        string bucketName = "your-unique-bucket-name",
        string localPath = "my-local-path/my-file-name",
        string objectName = "my-file-name")
    {
        var storage = StorageClient.Create();
        using var fileStream = File.OpenRead(localPath);
        storage.UploadObject(bucketName, objectName, null, fileStream);
        Console.WriteLine($"Uploaded {objectName}.");
    }
}

Go
import("context""fmt""io""os""time""cloud.google.com/go/storage")// uploadFile uploads an object.funcuploadFile(wio.Writer,bucket,objectstring)error{// bucket := "bucket-name"// object := "object-name"ctx:=context.Background()client,err:=storage.NewClient(ctx)iferr!=nil{returnfmt.Errorf("storage.NewClient: %w",err)}deferclient.Close()// Open local file.f,err:=os.Open("notes.txt")iferr!=nil{returnfmt.Errorf("os.Open: %w",err)}deferf.Close()ctx,cancel:=context.WithTimeout(ctx,time.Second*50)defercancel()o:=client.Bucket(bucket).Object(object)// Optional: set a generation-match precondition to avoid potential race// conditions and data corruptions. The request to upload is aborted if the// object's generation number does not match your precondition.// For an object that does not yet exist, set the DoesNotExist precondition.o=o.If(storage.Conditions{DoesNotExist:true})// If the live object already exists in your bucket, set instead a// generation-match precondition using the live object's generation number.// attrs, err := o.Attrs(ctx)// if err != nil {// return fmt.Errorf("object.Attrs: %w", err)// }// o = o.If(storage.Conditions{GenerationMatch: attrs.Generation})// Upload an object with storage.Writer.wc:=o.NewWriter(ctx)if_,err=io.Copy(wc,f);err!=nil{returnfmt.Errorf("io.Copy: %w",err)}iferr:=wc.Close();err!=nil{returnfmt.Errorf("Writer.Close: %w",err)}fmt.Fprintf(w,"Blob %v uploaded.\n",object)returnnil}Java
import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import java.io.IOException;
import java.nio.file.Paths;

public class UploadObject {
  public static void uploadObject(
      String projectId, String bucketName, String objectName, String filePath) throws IOException {
    // The ID of your GCP project
    // String projectId = "your-project-id";
    // The ID of your GCS bucket
    // String bucketName = "your-unique-bucket-name";
    // The ID of your GCS object
    // String objectName = "your-object-name";
    // The path to your file to upload
    // String filePath = "path/to/your/file"

    Storage storage = StorageOptions.newBuilder().setProjectId(projectId).build().getService();
    BlobId blobId = BlobId.of(bucketName, objectName);
    BlobInfo blobInfo = BlobInfo.newBuilder(blobId).build();

    // Optional: set a generation-match precondition to avoid potential race
    // conditions and data corruptions. The request returns a 412 error if the
    // preconditions are not met.
    Storage.BlobWriteOption precondition;
    if (storage.get(bucketName, objectName) == null) {
      // For a target object that does not yet exist, set the DoesNotExist precondition.
      // This will cause the request to fail if the object is created before the request runs.
      precondition = Storage.BlobWriteOption.doesNotExist();
    } else {
      // If the destination already exists in your bucket, instead set a generation-match
      // precondition. This will cause the request to fail if the existing object's generation
      // changes before the request runs.
      precondition =
          Storage.BlobWriteOption.generationMatch(
              storage.get(bucketName, objectName).getGeneration());
    }
    storage.createFrom(blobInfo, Paths.get(filePath), precondition);

    System.out.println(
        "File " + filePath + " uploaded to bucket " + bucketName + " as " + objectName);
  }
}

import com.google.cloud.storage.transfermanager.ParallelUploadConfig;
import com.google.cloud.storage.transfermanager.TransferManager;
import com.google.cloud.storage.transfermanager.TransferManagerConfig;
import com.google.cloud.storage.transfermanager.UploadResult;
import java.io.IOException;
import java.nio.file.Path;
import java.util.List;

class UploadMany {
  public static void uploadManyFiles(String bucketName, List<Path> files) throws IOException {
    TransferManager transferManager = TransferManagerConfig.newBuilder().build().getService();
    ParallelUploadConfig parallelUploadConfig =
        ParallelUploadConfig.newBuilder().setBucketName(bucketName).build();
    List<UploadResult> results =
        transferManager.uploadFiles(files, parallelUploadConfig).getUploadResults();
    for (UploadResult result : results) {
      System.out.println(
          "Upload for "
              + result.getInput().getName()
              + " completed with status "
              + result.getStatus());
    }
  }
}

import com.google.cloud.storage.transfermanager.ParallelUploadConfig;
import com.google.cloud.storage.transfermanager.TransferManager;
import com.google.cloud.storage.transfermanager.TransferManagerConfig;
import com.google.cloud.storage.transfermanager.UploadResult;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

class UploadDirectory {
  public static void uploadDirectoryContents(String bucketName, Path sourceDirectory)
      throws IOException {
    TransferManager transferManager = TransferManagerConfig.newBuilder().build().getService();
    ParallelUploadConfig parallelUploadConfig =
        ParallelUploadConfig.newBuilder().setBucketName(bucketName).build();

    // Create a list to store the file paths
    List<Path> filePaths = new ArrayList<>();
    // Get all files in the directory
    // try-with-resource to ensure pathStream is closed
    try (Stream<Path> pathStream = Files.walk(sourceDirectory)) {
      pathStream.filter(Files::isRegularFile).forEach(filePaths::add);
    }
    List<UploadResult> results =
        transferManager.uploadFiles(filePaths, parallelUploadConfig).getUploadResults();
    for (UploadResult result : results) {
      System.out.println(
          "Upload for "
              + result.getInput().getName()
              + " completed with status "
              + result.getStatus());
    }
  }
}

Node.js
/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';
// The path to your file to upload
// const filePath = 'path/to/your/file';
// The new ID for your GCS file
// const destFileName = 'your-new-file-name';

// Imports the Google Cloud client library
const {Storage} = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

async function uploadFile() {
  const options = {
    destination: destFileName,
    // Optional:
    // Set a generation-match precondition to avoid potential race conditions
    // and data corruptions. The request to upload is aborted if the object's
    // generation number does not match your precondition. For a destination
    // object that does not yet exist, set the ifGenerationMatch precondition to 0.
    // If the destination object already exists in your bucket, set instead a
    // generation-match precondition using its generation number.
    preconditionOpts: {ifGenerationMatch: generationMatchPrecondition},
  };

  await storage.bucket(bucketName).upload(filePath, options);
  console.log(`${filePath} uploaded to ${bucketName}`);
}

uploadFile().catch(console.error);

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';
// The ID of the first GCS file to upload
// const firstFilePath = 'your-first-file-name';
// The ID of the second GCS file to upload
// const secondFilePath = 'your-second-file-name';

// Imports the Google Cloud client library
const {Storage, TransferManager} = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

// Creates a transfer manager client
const transferManager = new TransferManager(storage.bucket(bucketName));

async function uploadManyFilesWithTransferManager() {
  // Uploads the files
  await transferManager.uploadManyFiles([firstFilePath, secondFilePath]);

  for (const filePath of [firstFilePath, secondFilePath]) {
    console.log(`${filePath} uploaded to ${bucketName}.`);
  }
}

uploadManyFilesWithTransferManager().catch(console.error);

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';
// The local directory to upload
// const directoryName = 'your-directory';

// Imports the Google Cloud client library
const {Storage, TransferManager} = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

// Creates a transfer manager client
const transferManager = new TransferManager(storage.bucket(bucketName));

async function uploadDirectoryWithTransferManager() {
  // Uploads the directory
  await transferManager.uploadManyFiles(directoryName);

  console.log(`${directoryName} uploaded to ${bucketName}.`);
}

uploadDirectoryWithTransferManager().catch(console.error);

PHP
use Google\Cloud\Storage\StorageClient;

/**
 * Upload a file.
 *
 * @param string $bucketName The name of your Cloud Storage bucket.
 *        (e.g. 'my-bucket')
 * @param string $objectName The name of your Cloud Storage object.
 *        (e.g. 'my-object')
 * @param string $source The path to the file to upload.
 *        (e.g. '/path/to/your/file')
 */
function upload_object(string $bucketName, string $objectName, string $source): void
{
    $storage = new StorageClient();
    if (!$file = fopen($source, 'r')) {
        throw new \InvalidArgumentException('Unable to open file for reading');
    }
    $bucket = $storage->bucket($bucketName);
    $object = $bucket->upload($file, [
        'name' => $objectName
    ]);
    printf('Uploaded %s to gs://%s/%s' . PHP_EOL, basename($source), $bucketName, $objectName);
}

Python
from google.cloud import storage


def upload_blob(bucket_name, source_file_name, destination_blob_name):
    """Uploads a file to the bucket."""
    # The ID of your GCS bucket
    # bucket_name = "your-bucket-name"
    # The path to your file to upload
    # source_file_name = "local/path/to/file"
    # The ID of your GCS object
    # destination_blob_name = "storage-object-name"

    storage_client = storage.Client()
    bucket = storage_client.bucket(bucket_name)
    blob = bucket.blob(destination_blob_name)

    # Optional: set a generation-match precondition to avoid potential race conditions
    # and data corruptions. The request to upload is aborted if the object's
    # generation number does not match your precondition. For a destination
    # object that does not yet exist, set the if_generation_match precondition to 0.
    # If the destination object already exists in your bucket, set instead a
    # generation-match precondition using its generation number.
    generation_match_precondition = 0

    blob.upload_from_filename(source_file_name, if_generation_match=generation_match_precondition)

    print(f"File {source_file_name} uploaded to {destination_blob_name}.")


def upload_many_blobs_with_transfer_manager(
    bucket_name, filenames, source_directory="", workers=8
):
    """Upload every file in a list to a bucket, concurrently in a process pool.

    Each blob name is derived from the filename, not including the
    `source_directory` parameter. For complete control of the blob name for each
    file (and other aspects of individual blob metadata), use
    transfer_manager.upload_many() instead.
    """
    # The ID of your GCS bucket
    # bucket_name = "your-bucket-name"
    # A list (or other iterable) of filenames to upload.
    # filenames = ["file_1.txt", "file_2.txt"]
    # The directory on your computer that is the root of all of the files in the
    # list of filenames. This string is prepended (with os.path.join()) to each
    # filename to get the full path to the file. Relative paths and absolute
    # paths are both accepted. This string is not included in the name of the
    # uploaded blob; it is only used to find the source files. An empty string
    # means "the current working directory". Note that this parameter allows
    # directory traversal (e.g. "/", "../") and is not intended for unsanitized
    # end user input.
    # source_directory=""
    # The maximum number of processes to use for the operation. The performance
    # impact of this value depends on the use case, but smaller files usually
    # benefit from a higher number of processes. Each additional process occupies
    # some CPU and memory resources until finished. Threads can be used instead
    # of processes by passing `worker_type=transfer_manager.THREAD`.
    # workers=8

    from google.cloud.storage import Client, transfer_manager

    storage_client = Client()
    bucket = storage_client.bucket(bucket_name)

    results = transfer_manager.upload_many_from_filenames(
        bucket, filenames, source_directory=source_directory, max_workers=workers
    )

    for name, result in zip(filenames, results):
        # The results list is either `None` or an exception for each filename in
        # the input list, in order.
        if isinstance(result, Exception):
            print("Failed to upload {} due to exception: {}".format(name, result))
        else:
            print("Uploaded {} to {}.".format(name, bucket.name))


def upload_directory_with_transfer_manager(bucket_name, source_directory, workers=8):
    """Upload every file in a directory, including all files in subdirectories.

    Each blob name is derived from the filename, not including the `directory`
    parameter itself. For complete control of the blob name for each file (and
    other aspects of individual blob metadata), use
    transfer_manager.upload_many() instead.
    """
    # The ID of your GCS bucket
    # bucket_name = "your-bucket-name"
    # The directory on your computer to upload. Files in the directory and its
    # subdirectories will be uploaded. An empty string means "the current
    # working directory".
    # source_directory=""
    # The maximum number of processes to use for the operation. The performance
    # impact of this value depends on the use case, but smaller files usually
    # benefit from a higher number of processes. Each additional process occupies
    # some CPU and memory resources until finished. Threads can be used instead
    # of processes by passing `worker_type=transfer_manager.THREAD`.
    # workers=8

    from pathlib import Path

    from google.cloud.storage import Client, transfer_manager

    storage_client = Client()
    bucket = storage_client.bucket(bucket_name)

    # Generate a list of paths (in string form) relative to the `directory`.
    # This can be done in a single list comprehension, but is expanded into
    # multiple lines here for clarity.

    # First, recursively get all files in `directory` as Path objects.
    directory_as_path_obj = Path(source_directory)
    paths = directory_as_path_obj.rglob("*")

    # Filter so the list only includes files, not directories themselves.
    file_paths = [path for path in paths if path.is_file()]

    # These paths are relative to the current working directory. Next, make them
    # relative to `directory`.
    relative_paths = [path.relative_to(source_directory) for path in file_paths]

    # Finally, convert them all to strings.
    string_paths = [str(path) for path in relative_paths]

    print("Found {} files.".format(len(string_paths)))

    # Start the upload.
    results = transfer_manager.upload_many_from_filenames(
        bucket, string_paths, source_directory=source_directory, max_workers=workers
    )

    for name, result in zip(string_paths, results):
        # The results list is either `None` or an exception for each filename in
        # the input list, in order.
        if isinstance(result, Exception):
            print("Failed to upload {} due to exception: {}".format(name, result))
        else:
            print("Uploaded {} to {}.".format(name, bucket.name))

Ruby
def upload_file bucket_name:, local_file_path:, file_name: nil
  # The ID of your GCS bucket
  # bucket_name = "your-unique-bucket-name"
  # The path to your file to upload
  # local_file_path = "/local/path/to/file.txt"
  # The ID of your GCS object
  # file_name = "your-file-name"

  require "google/cloud/storage"

  storage = Google::Cloud::Storage.new
  bucket  = storage.bucket bucket_name, skip_lookup: true

  file = bucket.create_file local_file_path, file_name

  puts "Uploaded #{local_file_path} as #{file.name} in bucket #{bucket_name}"
end
Terraform
# Upload a simple index.html page to the bucket
resource "google_storage_bucket_object" "indexpage" {
  name         = "index.html"
  content      = "<html><body>Hello World!</body></html>"
  content_type = "text/html"
  bucket       = google_storage_bucket.static_website.id
}

# Upload a simple 404 / error page to the bucket
resource "google_storage_bucket_object" "errorpage" {
  name         = "404.html"
  content      = "<html><body>404!</body></html>"
  content_type = "text/html"
  bucket       = google_storage_bucket.static_website.id
}

REST APIs
JSON API
Have the gcloud CLI installed and initialized, which lets you generate an access token for the Authorization header.

Use cURL to call the JSON API with a POST Object request. For the file index.html uploaded to a bucket named my-static-assets:

curl -X POST --data-binary @index.html \
  -H "Content-Type: text/html" \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://storage.googleapis.com/upload/storage/v1/b/my-static-assets/o?uploadType=media&name=index.html"
XML API
Have the gcloud CLI installed and initialized, which lets you generate an access token for the Authorization header.

Use cURL to call the XML API with a PUT Object request. For the file index.html uploaded to a bucket named my-static-assets:

curl -X PUT --data-binary @index.html \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: text/html" \
  "https://storage.googleapis.com/my-static-assets/index.html"
Share your files
To make all objects in your bucket readable to anyone on the public internet:
Caution: Before making your bucket publicly accessible, make sure that the files in your bucket don't contain sensitive or private information.

Console

- In the Google Cloud console, go to the Cloud Storage Buckets page.

In the list of buckets, click the name of the bucket that you want to make public.

Select the Permissions tab near the top of the page.

If the Public access pane reads Not public, click the button labeled Remove public access prevention and click Confirm in the dialog that appears.

Click the Grant access button.

The Add principals dialog appears.

In the New principals field, enter allUsers.

In the Select a role drop-down, select the Cloud Storage sub-menu, and click the Storage Object Viewer option.
ClickSave.
Click Allow public access.

Once shared publicly, a link icon appears for each object in the public access column. You can click this icon to get the URL for the object.
To learn how to get detailed error information about failed Cloud Storage operations in the Google Cloud console, see Troubleshooting.
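You can also construct an object's public URL directly. For example, assuming the bucket and index page used throughout this tutorial, the URL has the following form:

https://storage.googleapis.com/my-static-assets/index.html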
Command line
Use the buckets add-iam-policy-binding command:
gcloud storage buckets add-iam-policy-binding gs://my-static-assets --member=allUsers --role=roles/storage.objectViewer
Client libraries
For more information, see the Cloud Storage API reference documentation for your language (C++, C#, Go, Java, Node.js, PHP, Python, and Ruby). To authenticate to Cloud Storage, set up Application Default Credentials; for details, see Set up authentication for client libraries.

C++
namespace gcs = ::google::cloud::storage;
using ::google::cloud::StatusOr;
[](gcs::Client client, std::string const& bucket_name) {
  auto current_policy = client.GetNativeBucketIamPolicy(
      bucket_name, gcs::RequestedPolicyVersion(3));
  if (!current_policy) throw std::move(current_policy).status();

  current_policy->set_version(3);
  current_policy->bindings().emplace_back(
      gcs::NativeIamBinding("roles/storage.objectViewer", {"allUsers"}));

  auto updated = client.SetNativeBucketIamPolicy(bucket_name, *current_policy);
  if (!updated) throw std::move(updated).status();

  std::cout << "Policy successfully updated: " << *updated << "\n";
}

C#
using Google.Apis.Storage.v1.Data;
using Google.Cloud.Storage.V1;
using System;
using System.Collections.Generic;

public class MakeBucketPublicSample
{
    public void MakeBucketPublic(string bucketName = "your-unique-bucket-name")
    {
        var storage = StorageClient.Create();

        Policy policy = storage.GetBucketIamPolicy(bucketName);

        policy.Bindings.Add(new Policy.BindingsData
        {
            Role = "roles/storage.objectViewer",
            Members = new List<string> { "allUsers" }
        });

        storage.SetBucketIamPolicy(bucketName, policy);
        Console.WriteLine(bucketName + " is now public ");
    }
}

Go
import("context""fmt""io""cloud.google.com/go/iam""cloud.google.com/go/iam/apiv1/iampb""cloud.google.com/go/storage")// setBucketPublicIAM makes all objects in a bucket publicly readable.funcsetBucketPublicIAM(wio.Writer,bucketNamestring)error{// bucketName := "bucket-name"ctx:=context.Background()client,err:=storage.NewClient(ctx)iferr!=nil{returnfmt.Errorf("storage.NewClient: %w",err)}deferclient.Close()policy,err:=client.Bucket(bucketName).IAM().V3().Policy(ctx)iferr!=nil{returnfmt.Errorf("Bucket(%q).IAM().V3().Policy: %w",bucketName,err)}role:="roles/storage.objectViewer"policy.Bindings=append(policy.Bindings,&iampb.Binding{Role:role,Members:[]string{iam.AllUsers},})iferr:=client.Bucket(bucketName).IAM().V3().SetPolicy(ctx,policy);err!=nil{returnfmt.Errorf("Bucket(%q).IAM().SetPolicy: %w",bucketName,err)}fmt.Fprintf(w,"Bucket %v is now publicly readable\n",bucketName)returnnil}Java
import com.google.cloud.Identity;
import com.google.cloud.Policy;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import com.google.cloud.storage.StorageRoles;

public class MakeBucketPublic {
  public static void makeBucketPublic(String projectId, String bucketName) {
    // The ID of your GCP project
    // String projectId = "your-project-id";
    // The ID of your GCS bucket
    // String bucketName = "your-unique-bucket-name";

    Storage storage = StorageOptions.newBuilder().setProjectId(projectId).build().getService();
    Policy originalPolicy = storage.getIamPolicy(bucketName);
    storage.setIamPolicy(
        bucketName,
        originalPolicy
            .toBuilder()
            .addIdentity(StorageRoles.objectViewer(), Identity.allUsers()) // All users can view
            .build());

    System.out.println("Bucket " + bucketName + " is now publicly readable");
  }
}

Node.js
/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';

// Imports the Google Cloud client library
const {Storage} = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

async function makeBucketPublic() {
  await storage.bucket(bucketName).makePublic();

  console.log(`Bucket ${bucketName} is now publicly readable`);
}

makeBucketPublic().catch(console.error);

PHP
use Google\Cloud\Storage\StorageClient;

/**
 * Update the specified bucket's IAM configuration to make it publicly accessible.
 *
 * @param string $bucketName The name of your Cloud Storage bucket.
 *        (e.g. 'my-bucket')
 */
function set_bucket_public_iam(string $bucketName): void
{
    $storage = new StorageClient();
    $bucket = $storage->bucket($bucketName);
    $policy = $bucket->iam()->policy(['requestedPolicyVersion' => 3]);
    $policy['version'] = 3;

    $role = 'roles/storage.objectViewer';
    $members = ['allUsers'];

    $policy['bindings'][] = [
        'role' => $role,
        'members' => $members
    ];

    $bucket->iam()->setPolicy($policy);

    printf('Bucket %s is now public', $bucketName);
}

Python
from typing import List

from google.cloud import storage


def set_bucket_public_iam(
    bucket_name: str = "your-bucket-name",
    members: List[str] = ["allUsers"],
):
    """Set a public IAM Policy to bucket"""
    # bucket_name = "your-bucket-name"

    storage_client = storage.Client()
    bucket = storage_client.bucket(bucket_name)

    policy = bucket.get_iam_policy(requested_policy_version=3)
    policy.bindings.append(
        {"role": "roles/storage.objectViewer", "members": members}
    )

    bucket.set_iam_policy(policy)

    print(f"Bucket {bucket.name} is now publicly readable")

Ruby
def set_bucket_public_iam bucket_name:
  # The ID of your GCS bucket
  # bucket_name = "your-unique-bucket-name"

  require "google/cloud/storage"

  storage = Google::Cloud::Storage.new
  bucket = storage.bucket bucket_name

  bucket.policy do |p|
    p.add "roles/storage.objectViewer", "allUsers"
  end

  puts "Bucket #{bucket_name} is now publicly readable"
end
Terraform
# Make bucket public by granting allUsers storage.objectViewer access
resource "google_storage_bucket_iam_member" "public_rule" {
  bucket = google_storage_bucket.static_website.name
  role   = "roles/storage.objectViewer"
  member = "allUsers"
}

REST APIs
JSON API
Have the gcloud CLI installed and initialized, which lets you generate an access token for the Authorization header.

Create a JSON file that contains the following information:

{
  "bindings": [
    {
      "role": "roles/storage.objectViewer",
      "members": ["allUsers"]
    }
  ]
}
Use cURL to call the JSON API with a PUT Bucket request:

curl -X PUT --data-binary @JSON_FILE_NAME \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://storage.googleapis.com/storage/v1/b/BUCKET_NAME/iam"
Where:

- JSON_FILE_NAME is the path for the JSON file that you created in Step 2.
- BUCKET_NAME is the name of the bucket whose objects you want to make public. For example, my-static-assets.
XML API
Making all objects in a bucket publicly readable is not supported by the XML API. Use the Google Cloud console or gcloud storage instead, or set ACLs for each individual object. Note that in order to set ACLs for each individual object, you must switch your bucket's Access control mode to Fine-grained.
roles/storage.objectViewer includes permission to list the objects in the bucket. If you don't want to grant listing publicly, use roles/storage.legacyObjectReader. If you prefer, you can alternatively make portions of your bucket publicly accessible.

Visitors receive an HTTP 403 response code when requesting the URL for a non-public or non-existent file. See the next section for information on how to add an error page that uses an HTTP 404 response code.
Recommended: assign specialty pages
You can assign an index page suffix and a custom error page, which are known as specialty pages. Assigning either is optional, but if you don't assign an index page suffix and upload the corresponding index page, users who access your top-level site are served an XML document tree containing a list of the public objects in your bucket.

For more information on the behavior of specialty pages, see Specialty pages.
Console
- In the Google Cloud console, go to the Cloud Storage Buckets page.
In the list of buckets, find the bucket you created.
Click the Bucket overflow menu associated with the bucket and select Edit website configuration.

In the website configuration dialog, specify the main page and error page.
ClickSave.
To learn how to get detailed error information about failed Cloud Storage operations in the Google Cloud console, see Troubleshooting.
Command line
Use the buckets update command with the --web-main-page-suffix and --web-error-page flags.

In the following sample, the MainPageSuffix is set to index.html and NotFoundPage is set to 404.html:

gcloud storage buckets update gs://my-static-assets --web-main-page-suffix=index.html --web-error-page=404.html
If successful, the command returns:
Updating gs://my-static-assets/... Completed 1
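To confirm the website configuration that was applied, you can describe the bucket; a minimal sketch using the same example bucket:

gcloud storage buckets describe gs://my-static-assets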
You can make further changes with additional buckets update commands and view these settings with the buckets describe command, as shown above.

Client libraries
For more information, see the Cloud Storage API reference documentation for your language (C++, C#, Go, Java, Node.js, PHP, Python, and Ruby). To authenticate to Cloud Storage, set up Application Default Credentials; for details, see Set up authentication for client libraries.

C++
namespace gcs = ::google::cloud::storage;
using ::google::cloud::StatusOr;
[](gcs::Client client, std::string const& bucket_name,
   std::string const& main_page_suffix, std::string const& not_found_page) {
  StatusOr<gcs::BucketMetadata> original = client.GetBucketMetadata(bucket_name);
  if (!original) throw std::move(original).status();

  StatusOr<gcs::BucketMetadata> patched = client.PatchBucket(
      bucket_name,
      gcs::BucketMetadataPatchBuilder().SetWebsite(
          gcs::BucketWebsite{main_page_suffix, not_found_page}),
      gcs::IfMetagenerationMatch(original->metageneration()));
  if (!patched) throw std::move(patched).status();

  if (!patched->has_website()) {
    std::cout << "Static website configuration is not set for bucket "
              << patched->name() << "\n";
    return;
  }

  std::cout << "Static website configuration successfully set for bucket "
            << patched->name()
            << "\nNew main page suffix is: " << patched->website().main_page_suffix
            << "\nNew not found page is: " << patched->website().not_found_page
            << "\n";
}

C#
using Google.Apis.Storage.v1.Data;
using Google.Cloud.Storage.V1;
using System;

public class BucketWebsiteConfigurationSample
{
    public Bucket BucketWebsiteConfiguration(
        string bucketName = "your-bucket-name",
        string mainPageSuffix = "index.html",
        string notFoundPage = "404.html")
    {
        var storage = StorageClient.Create();
        var bucket = storage.GetBucket(bucketName);

        if (bucket.Website == null)
        {
            bucket.Website = new Bucket.WebsiteData();
        }

        bucket.Website.MainPageSuffix = mainPageSuffix;
        bucket.Website.NotFoundPage = notFoundPage;

        bucket = storage.UpdateBucket(bucket);
        Console.WriteLine($"Static website bucket {bucketName} is set up to use {mainPageSuffix} as the index page and {notFoundPage} as the 404 not found page.");
        return bucket;
    }
}

Go
import("context""fmt""io""time""cloud.google.com/go/storage")// setBucketWebsiteInfo sets website configuration on a bucket.funcsetBucketWebsiteInfo(wio.Writer,bucketName,indexPage,notFoundPagestring)error{// bucketName := "www.example.com"// indexPage := "index.html"// notFoundPage := "404.html"ctx:=context.Background()client,err:=storage.NewClient(ctx)iferr!=nil{returnfmt.Errorf("storage.NewClient: %w",err)}deferclient.Close()ctx,cancel:=context.WithTimeout(ctx,time.Second*10)defercancel()bucket:=client.Bucket(bucketName)bucketAttrsToUpdate:=storage.BucketAttrsToUpdate{Website:&storage.BucketWebsite{MainPageSuffix:indexPage,NotFoundPage:notFoundPage,},}if_,err:=bucket.Update(ctx,bucketAttrsToUpdate);err!=nil{returnfmt.Errorf("Bucket(%q).Update: %w",bucketName,err)}fmt.Fprintf(w,"Static website bucket %v is set up to use %v as the index page and %v as the 404 page\n",bucketName,indexPage,notFoundPage)returnnil}Java
import com.google.cloud.storage.Bucket;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

public class SetBucketWebsiteInfo {
  public static void setBucketWebsiteInfo(
      String projectId, String bucketName, String indexPage, String notFoundPage) {
    // The ID of your GCP project
    // String projectId = "your-project-id";
    // The ID of your static website bucket
    // String bucketName = "www.example.com";
    // The index page for a static website bucket
    // String indexPage = "index.html";
    // The 404 page for a static website bucket
    // String notFoundPage = "404.html";

    Storage storage = StorageOptions.newBuilder().setProjectId(projectId).build().getService();
    Bucket bucket = storage.get(bucketName);
    bucket.toBuilder().setIndexPage(indexPage).setNotFoundPage(notFoundPage).build().update();

    System.out.println(
        "Static website bucket "
            + bucketName
            + " is set up to use "
            + indexPage
            + " as the index page and "
            + notFoundPage
            + " as the 404 page");
  }
}

Node.js
/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';
// The name of the main page
// const mainPageSuffix = 'http://example.com';
// The Name of a 404 page
// const notFoundPage = 'http://example.com/404.html';

// Imports the Google Cloud client library
const {Storage} = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

async function addBucketWebsiteConfiguration() {
  await storage.bucket(bucketName).setMetadata({
    website: {
      mainPageSuffix,
      notFoundPage,
    },
  });

  console.log(
    `Static website bucket ${bucketName} is set up to use ${mainPageSuffix} as the index page and ${notFoundPage} as the 404 page`
  );
}

addBucketWebsiteConfiguration().catch(console.error);

PHP
use Google\Cloud\Storage\StorageClient;

/**
 * Update the given bucket's website configuration.
 *
 * @param string $bucketName The name of your Cloud Storage bucket.
 *        (e.g. 'my-bucket')
 * @param string $indexPageObject the name of an object in the bucket to use as
 *        an index page for a static website bucket. (e.g. 'index.html')
 * @param string $notFoundPageObject the name of an object in the bucket to use
 *        as the 404 Not Found page. (e.g. '404.html')
 */
function define_bucket_website_configuration(string $bucketName, string $indexPageObject, string $notFoundPageObject): void
{
    $storage = new StorageClient();
    $bucket = $storage->bucket($bucketName);

    $bucket->update([
        'website' => [
            'mainPageSuffix' => $indexPageObject,
            'notFoundPage' => $notFoundPageObject
        ]
    ]);

    printf(
        'Static website bucket %s is set up to use %s as the index page and %s as the 404 page.',
        $bucketName,
        $indexPageObject,
        $notFoundPageObject
    );
}

Python
from google.cloud import storage


def define_bucket_website_configuration(bucket_name, main_page_suffix, not_found_page):
    """Configure website-related properties of bucket"""
    # bucket_name = "your-bucket-name"
    # main_page_suffix = "index.html"
    # not_found_page = "404.html"

    storage_client = storage.Client()

    bucket = storage_client.get_bucket(bucket_name)
    bucket.configure_website(main_page_suffix, not_found_page)
    bucket.patch()

    print(
        "Static website bucket {} is set up to use {} as the index page and {} as the 404 page".format(
            bucket.name, main_page_suffix, not_found_page
        )
    )
    return bucket

Ruby
def define_bucket_website_configuration bucket_name:, main_page_suffix:, not_found_page:
  # The ID of your static website bucket
  # bucket_name = "www.example.com"
  # The index page for a static website bucket
  # main_page_suffix = "index.html"
  # The 404 page for a static website bucket
  # not_found_page = "404.html"

  require "google/cloud/storage"

  storage = Google::Cloud::Storage.new
  bucket = storage.bucket bucket_name

  bucket.update do |b|
    b.website_main = main_page_suffix
    b.website_404 = not_found_page
  end

  puts "Static website bucket #{bucket_name} is set up to use #{main_page_suffix} as the index page " \
       "and #{not_found_page} as the 404 page"
end
REST APIs
JSON API
Have the gcloud CLI installed and initialized, which lets you generate an access token for the Authorization header.

Create a JSON file that sets the mainPageSuffix and notFoundPage properties in a website object to the desired pages. In the following sample, mainPageSuffix is set to index.html and notFoundPage is set to 404.html:

{
  "website": {
    "mainPageSuffix": "index.html",
    "notFoundPage": "404.html"
  }
}
Use cURL to call the JSON API with a PATCH Bucket request. For the bucket my-static-assets:

curl -X PATCH --data-binary @web-config.json \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://storage.googleapis.com/storage/v1/b/my-static-assets"
XML API
Have the gcloud CLI installed and initialized, which lets you generate an access token for the Authorization header.

Create an XML file that sets the MainPageSuffix and NotFoundPage elements in a WebsiteConfiguration element to the desired pages. In the following sample, MainPageSuffix is set to index.html and NotFoundPage is set to 404.html:

<WebsiteConfiguration>
  <MainPageSuffix>index.html</MainPageSuffix>
  <NotFoundPage>404.html</NotFoundPage>
</WebsiteConfiguration>
Use cURL to call the XML API with a PUT Bucket request and the websiteConfig query string parameter. For my-static-assets:

curl -X PUT --data-binary @web-config.xml \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://storage.googleapis.com/my-static-assets?websiteConfig"
You can make further changes with additional PUT requests and view these settings with a GET request.

Set up your load balancer and SSL certificate
Cloud Storage doesn't support custom domains with HTTPS on its own, so you also need to set up an SSL certificate attached to an HTTPS load balancer to serve your website through HTTPS. This section shows you how to add your bucket to a load balancer's backend and how to add a new Google-managed SSL certificate to the load balancer's frontend.
Select the load balancer type
In the Google Cloud console, go to the Load balancing page.
- Click Create load balancer.
- For Type of load balancer, select Application Load Balancer (HTTP/HTTPS) and click Next.
- Click Configure.
The configuration window for your load balancer appears.
Basic configuration
Before continuing with the configuration, enter a Load balancer name, such as example-lb.
Configure the frontend
This section shows you how to configure the HTTPS protocol and create an SSL certificate. You can also select an existing certificate or upload a self-managed SSL certificate.
- Click Frontend configuration.
- (Optional) Give your frontend configuration a Name.
- For Protocol, select HTTPS (includes HTTP/2).
- For IP version, select IPv4. If you prefer IPv6, see IPv6 termination for additional information.
For the IP address field:

- In the drop-down, click Create IP address.
- In the Reserve a new static IP address dialog, enter a name, such as example-ip, for the Name of the IP address.
- Click Reserve.

For Port, select 443.
In the Certificate field dropdown, select Create a new certificate. The certificate creation form appears in a panel. Configure the following:

- Give your certificate a Name, such as example-ssl.
- For Create mode, select Create Google-managed certificate.
- For Domains, enter your website name, such as www.example.com. If you want to serve your content through additional domains such as the root domain example.com, press Enter to add them on additional lines. Each certificate has a limit of 100 domains.
Click Create.

(Optional) If you want Google Cloud to automatically set up a partial HTTP load balancer for redirecting HTTP traffic, select the checkbox next to Enable HTTP to HTTPS redirect.

Click Done.
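If you prefer to prepare the frontend pieces from the command line, the following is a hedged gcloud sketch of the equivalent steps; the resource names example-ip and example-ssl match the ones used above:

# Reserve a global static external IP address for the load balancer frontend.
gcloud compute addresses create example-ip --ip-version=IPV4 --global

# Create a Google-managed SSL certificate for the site's domains.
gcloud compute ssl-certificates create example-ssl \
  --domains=www.example.com,example.com --global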
Configure the backend
- Click Backend configuration.
- In the Backend services & backend buckets dropdown, click Create a backend bucket.
- Choose a Backend bucket name, such as example-bucket. The name you choose can be different from the name of the bucket you created earlier.
- Click Browse, found in the Cloud Storage bucket field.
- Select the my-static-assets bucket you created earlier, and click Select.
- (Optional) If you want to use Cloud CDN, select the checkbox for Enable Cloud CDN and configure Cloud CDN as desired. Note that Cloud CDN may incur additional costs.
- Click Create.
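The equivalent backend step can also be sketched with gcloud; the backend bucket name example-bucket matches the console steps, and --enable-cdn is optional:

# Create a backend bucket that points at the Cloud Storage bucket, optionally with Cloud CDN.
gcloud compute backend-buckets create example-bucket \
  --gcs-bucket-name=my-static-assets --enable-cdn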
Configure routing rules
Routing rules are the components of an external Application Load Balancer's URL map. For this tutorial, you should skip this portion of the load balancer configuration, because it is automatically set to use the backend you just configured.
Review the configuration
- Click Review and finalize.
- Review the Frontend, Routing rules, and Backend.
- Click Create.
You may need to wait a few minutes for the load balancer to be created.
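For reference, a hedged gcloud sketch of the remaining load balancer pieces that the console creates for you follows; the names example-proxy and example-fr are hypothetical, while example-lb, example-ssl, example-bucket, and example-ip match the earlier steps:

# Create a URL map that sends all requests to the backend bucket.
gcloud compute url-maps create example-lb --default-backend-bucket=example-bucket

# Create an HTTPS target proxy that uses the Google-managed certificate.
gcloud compute target-https-proxies create example-proxy \
  --url-map=example-lb --ssl-certificates=example-ssl

# Create a global forwarding rule on port 443 using the reserved IP address.
gcloud compute forwarding-rules create example-fr \
  --load-balancing-scheme=EXTERNAL_MANAGED --address=example-ip \
  --target-https-proxy=example-proxy --global --ports=443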
Connect your domain to your load balancer
After the load balancer is created, click the name of your load balancer: example-lb. Note the IP address associated with the load balancer: for example, 30.90.80.100. To point your domain to your load balancer, create an A record using your domain registration service. If you added multiple domains to your SSL certificate, you must add an A record for each one, all pointing to the load balancer's IP address. For example, to create A records for www.example.com and example.com:

NAME  TYPE  DATA
www   A     30.90.80.100
@     A     30.90.80.100
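If you manage your domain's zone with Cloud DNS, a hedged sketch of the equivalent records follows; the zone name example-zone is hypothetical, and the IP address is the example value used above:

gcloud dns record-sets create www.example.com. --zone=example-zone \
  --type=A --ttl=300 --rrdatas=30.90.80.100
gcloud dns record-sets create example.com. --zone=example-zone \
  --type=A --ttl=300 --rrdatas=30.90.80.100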
See Troubleshooting domain status for more information about connecting your domain to your load balancer.
Recommended: Monitor the SSL certificate status
It might take up to 60-90 minutes for Google Cloud to provision the certificate and make the site available through the load balancer. To monitor the status of your certificate:
Console
- Go to the Load balancing page in the Google Cloud console.
- Click the name of your load balancer: example-lb.
- Click the name of the SSL certificate associated with the load balancer: example-ssl.
- The Status and Domain status rows show the certificate status. Both must be active in order for the certificate to be valid for your website.
Command line
To check the certificate status, run the following command:
gcloud compute ssl-certificates describe CERTIFICATE_NAME \
  --global \
  --format="get(name,managed.status)"
To check the domain status, run the following command:
gcloud compute ssl-certificates describe CERTIFICATE_NAME \
  --global \
  --format="get(managed.domainStatus)"
See Troubleshooting SSL certificates for more information about certificate status.
Test the website
Once the SSL certificate is active, verify that content is served from the bucket by going to https://www.example.com/test.html, where test.html is an object stored in the bucket that you're using as the backend. If you set the MainPageSuffix property, https://www.example.com goes to index.html.
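A quick way to check from the command line, once DNS has propagated and the certificate is active (a sketch; test.html must exist in your bucket):

# Fetch only the response headers; a 200 status indicates the object is being served.
curl -I https://www.example.com/test.html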
Clean up
After you finish the tutorial, you can clean up the resources that you created so that they stop using quota and incurring charges. The following sections describe how to delete or turn off these resources.
Delete the project
The easiest way to eliminate billing is to delete the project that you created for the tutorial.
To delete the project:
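The console steps for deleting a project aren't reproduced here, but as a hedged alternative you can delete it with gcloud; PROJECT_ID is the ID of the project you created for this tutorial:

gcloud projects delete PROJECT_ID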
Delete the load balancer and bucket
If you don't want to delete the entire project, delete the load balancer and bucket that you created for the tutorial:
- Go to the Load balancing page in the Google Cloud console.
- Select the checkbox next to example-lb.
- Click Delete.
- (Optional) Select the checkbox next to the resources you want to delete along with the load balancer, such as the my-static-assets bucket or the example-ssl SSL certificate.
- Click Delete load balancer or Delete load balancer and the selected resources.
Release a reserved IP address
To delete the reserved IP address you used for the tutorial:
In the Google Cloud console, go to the External IP addresses page.
Select the checkbox next to example-ip.

Click Release static address.

In the confirmation window, click Delete.
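The equivalent release from the command line is a one-line sketch, using the reserved address name from this tutorial:

gcloud compute addresses delete example-ip --global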
What's next
- See examples and tips for using buckets to host a static website.
- Read about troubleshooting for hosting a static website.
- Learn about hosting static assets for a dynamic website.
- Learn about other Google Cloud web serving solutions.
Try it for yourself
If you're new to Google Cloud, create an account to evaluate how Cloud Storage performs in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
Try Cloud Storage free