Troubleshooting

This page describes troubleshooting methods for common errors you may encounter while using Cloud Storage.

See the Google Cloud Service Health Dashboard for information about incidents affecting Google Cloud services such as Cloud Storage.

Logging raw requests

Important: Never share your credentials. When you print out HTTP protocol details, your authentication credentials, such as OAuth 2.0 tokens, are visible in the headers. If you need to post request or response details to a message board or need to supply them for troubleshooting, make sure that you sanitize or revoke any credentials that appear as part of the output.
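If you script this sanitizing step, one minimal approach is to redact credential-bearing headers before printing. This is a sketch, not part of any Cloud Storage tool; the set of header names to mask is illustrative and depends on your tooling:

```python
import re

# Header names to mask; this set is illustrative, not exhaustive.
SENSITIVE_HEADERS = {"authorization", "proxy-authorization"}

def sanitize_headers(headers):
    """Return a copy of a header dict with credential values redacted."""
    clean = {}
    for name, value in headers.items():
        if name.lower() in SENSITIVE_HEADERS:
            clean[name] = "[REDACTED]"
        else:
            # Also catch bearer tokens embedded in other logged values.
            clean[name] = re.sub(r"Bearer \S+", "Bearer [REDACTED]", value)
    return clean

print(sanitize_headers({"Authorization": "Bearer ya29.example-token",
                        "Content-Type": "application/json"}))
```

Run the sanitized dict through your logger instead of the raw headers.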

When using tools such as gcloud or the Cloud Storage client libraries, much of the request and response information is handled by the tool. However, it is sometimes useful to see details to aid in troubleshooting or when posting questions to forums such as Stack Overflow. Use the following instructions to return request and response headers for your tool:

Console

Viewing request and response information depends on the browser you're using to access the Google Cloud console. For the Google Chrome browser:

  1. Click Chrome's main menu button.

  2. Select More Tools.

  3. Click Developer Tools.

  4. In the pane that appears, click the Network tab.

Command line

Use global debugging flags in your request. For example:

gcloud storage ls gs://my-bucket/my-object --log-http --verbosity=debug

Client libraries

C++

  • Set the environment variable CLOUD_STORAGE_ENABLE_TRACING=http to get the full HTTP traffic.

  • Set the environment variable CLOUD_STORAGE_ENABLE_CLOG=yes to get logging of each RPC.

C#

Add a logger via ApplicationContext.RegisterLogger, and set logging options on the HttpClient message handler. For more information, see the C# client library reference documentation.

Go

Set the environment variable GODEBUG=http2debug=1. For more information, see the Go package net/http.

If you want to log the request body as well, use a custom HTTP client.

Java

  1. Create a file named "logging.properties" with the following contents:

    # Properties file which configures the operation of the JDK logging facility.
    # The system will look for this config file to be specified as a system property:
    # -Djava.util.logging.config.file=${project_loc:googleplus-simple-cmdline-sample}/logging.properties

    # Set up the console handler (uncomment "level" to show more fine-grained messages)
    handlers = java.util.logging.ConsoleHandler
    java.util.logging.ConsoleHandler.level = CONFIG

    # Set up logging of HTTP requests and responses (uncomment "level" to show)
    com.google.api.client.http.level = CONFIG

  2. Use logging.properties with Maven:

    mvn -Djava.util.logging.config.file=path/to/logging.properties insert_command

For more information, see Pluggable HTTP Transport.

Node.js

Set the environment variable NODE_DEBUG=https before calling the Node script.

PHP

Provide your own HTTP handler to the client using httpHandler and set up middleware to log the request and response.

Python

Use the logging module. For example:

import logging
import http.client

logging.basicConfig(level=logging.DEBUG)
http.client.HTTPConnection.debuglevel = 5

Ruby

At the top of your .rb file after require "google/cloud/storage", add the following:

Google::Apis.logger.level = Logger::DEBUG

Adding custom headers

Adding custom headers to requests is a common tool for debugging purposes, such as for enabling debug headers or for tracing a request. The following example shows how to set request headers for different Cloud Storage tools:

Command line

Use the --additional-headers flag, which is available for most commands. For example:

gcloud storage objects describe gs://my-bucket/my-object --additional-headers=HEADER_NAME=HEADER_VALUE

Where HEADER_NAME and HEADER_VALUE define the header you are adding to the request.

Note: The --additional-headers flag is not available for all gcloud storage commands, such as commands that work with Identity and Access Management (IAM) policies and bucket notification commands.

Client libraries

C++

namespace gcs = google::cloud::storage;
gcs::Client client = ...;
client.AnyFunction(...args..., gcs::CustomHeader("header-name", "value"));
Note: The C++ client library supports only one custom header at a time.

C#

The following sample adds a custom header to every request made by the client library.

using Google.Cloud.Storage.V1;

var client = StorageClient.Create();
client.Service.HttpClient.DefaultRequestHeaders.Add("custom-header", "custom-value");
var buckets = client.ListBuckets("my-project-id");
foreach (var bucket in buckets)
{
    Console.WriteLine(bucket.Name);
}

Go

You can add custom headers to any API call made by the Storage package by using callctx.SetHeaders on the context which is passed to the method.

package main

import (
    "context"

    "cloud.google.com/go/storage"
    "github.com/googleapis/gax-go/v2/callctx"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // Handle error.
    }
    ctx = callctx.SetHeaders(ctx, "X-Custom-Header", "value")
    // Use client as usual with the context and the additional headers will be sent.
    _, err = client.Bucket("my-bucket").Attrs(ctx)
    if err != nil {
        // Handle error.
    }
}

Java

import com.google.api.gax.rpc.FixedHeaderProvider;
import com.google.api.gax.rpc.HeaderProvider;
import com.google.cloud.WriteChannel;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import java.io.IOException;
import java.nio.ByteBuffer;
import static java.nio.charset.StandardCharsets.UTF_8;

public class Example {
  public void main(String args[]) throws IOException {
    HeaderProvider headerProvider =
        FixedHeaderProvider.create("custom-header", "custom-value");
    Storage storage =
        StorageOptions.getDefaultInstance().toBuilder()
            .setHeaderProvider(headerProvider)
            .build()
            .getService();
    String bucketName = "example-bucket";
    String blobName = "test-custom-header";

    // Use client with custom header
    BlobInfo blob = BlobInfo.newBuilder(bucketName, blobName).build();
    byte[] stringBytes;
    try (WriteChannel writer = storage.writer(blob)) {
      stringBytes = "hello world".getBytes(UTF_8);
      writer.write(ByteBuffer.wrap(stringBytes));
    }
  }
}

Node.js

const storage = new Storage();
storage.interceptors.push({
  request: requestConfig => {
    Object.assign(requestConfig.headers, {
      'X-Custom-Header': 'value',
    });
    return requestConfig;
  },
});

PHP

All method calls which trigger HTTP requests accept an optional $restOptions argument as the last argument. You can provide custom headers on a per-request basis, or on a per-client basis.

use Google\Cloud\Storage\StorageClient;

$client = new StorageClient([
    'restOptions' => [
        'headers' => [
            'x-foo' => 'bat'
        ]
    ]
]);

$bucket = $client->bucket('my-bucket');

$bucket->info([
    'restOptions' => [
        'headers' => [
            'x-foo' => 'bar'
        ]
    ]
]);

Python

from google.cloud import storage

client = storage.Client(extra_headers={"x-custom-header": "value"})

Ruby

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
storage.add_custom_headers({ 'X-Custom-Header' => 'value' })

Accessing buckets with a CORS configuration

If you have set a CORS configuration on your bucket and notice that incoming requests from client browsers are failing, try the following troubleshooting steps:

  1. Review the CORS configuration on the target bucket. If there are multiple CORS configuration entries, make sure that the request values you use for troubleshooting map to values in a single CORS configuration entry.

  2. When testing a CORS request, check that you are not making the request to the storage.cloud.google.com endpoint, which doesn't allow CORS requests. For more information about supported endpoints for CORS, see Cloud Storage CORS support.

  3. Review a request and response using the tool of your choice. In a Chrome browser, you can use the standard developer tools to see this information:

    1. Click the Chrome menu on the browser toolbar.
    2. Select More Tools > Developer Tools.
    3. Click the Network tab.
    4. From your application or command line, send the request.
    5. In the pane displaying the network activity, locate the request.
    6. In the Name column, click the name corresponding to the request.
    7. Click the Headers tab to see the response headers, or the Response tab to see the content of the response.

    If you don't see a request and response, it's possible that your browser has cached an earlier failed preflight request attempt. Clearing your browser's cache should also clear the preflight cache. If it doesn't, set the MaxAgeSec value in your CORS configuration to a lower value than the default value of 1800 (30 minutes), wait for however long the old MaxAgeSec was, then try the request again. This performs a new preflight request, which fetches the new CORS configuration and purges the cache entries. Once you have debugged your problem, raise MaxAgeSec back to a higher value to reduce the preflight traffic to your bucket.

  4. Ensure that the request has an Origin header and that the header value matches at least one of the Origins values in the bucket's CORS configuration. Note that the scheme, host, and port of the values must match exactly. Some examples of acceptable matches are the following:

    • http://origin.example.com matches http://origin.example.com:80 (because 80 is the default HTTP port) but does not match https://origin.example.com, http://origin.example.com:8080, http://origin.example.com:5151, or http://sub.origin.example.com.

    • https://example.com:443 matches https://example.com but not http://example.com or http://example.com:443.

    • http://localhost:8080 only matches exactly http://localhost:8080 and does not match http://localhost:5555 or http://localhost.example.com:8080.

  5. For simple requests, ensure that the HTTP method of the request matches at least one of the Methods values in the bucket's CORS configuration. For preflight requests, ensure that the method specified in Access-Control-Request-Method matches at least one of the Methods values.

  6. For preflight requests, check if the request includes one or more Access-Control-Request-Header headers. If so, ensure that each Access-Control-Request-Header value matches a ResponseHeader value in the bucket's CORS configuration. All headers named in Access-Control-Request-Header must be in the CORS configuration for the preflight request to succeed and include CORS headers in the response.
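The exact-match rule for origins in step 4 can be sketched in Python. This is a simplified illustration (it ignores wildcard "*" origins, which CORS configurations may also contain): scheme, host, and port must all match, with 80 and 443 treated as the defaults for http and https.

```python
from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def normalize_origin(origin):
    """Split an origin into (scheme, host, port), filling in the default port."""
    parts = urlsplit(origin)
    return (parts.scheme, parts.hostname, parts.port or DEFAULT_PORTS.get(parts.scheme))

def origin_matches(request_origin, configured_origin):
    """True only when scheme, host, and port all match exactly."""
    return normalize_origin(request_origin) == normalize_origin(configured_origin)

print(origin_matches("http://origin.example.com", "http://origin.example.com:80"))  # True
print(origin_matches("http://origin.example.com", "https://origin.example.com"))    # False
```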

Error codes

The following are common HTTP status codes you may encounter.

301: Moved Permanently

Issue: I'm setting up a static website, and accessing a directory path returns an empty object and a 301 HTTP response code.

Solution: If your browser downloads a zero-byte object and you get a 301 HTTP response code when accessing a directory, such as http://www.example.com/dir/, your bucket most likely contains an empty object of that name. To check that this is the case and fix the issue:

  1. In the Google Cloud console, go to the Cloud Storage Buckets page.

    Go to Buckets

  2. Click the Activate Cloud Shell button at the top of the Google Cloud console.
  3. Run gcloud storage ls --recursive gs://www.example.com/dir/. If the output includes http://www.example.com/dir/, you have an empty object at that location.
  4. Remove the empty object with the command: gcloud storage rm gs://www.example.com/dir/

You can now access http://www.example.com/dir/ and have it return that directory's index.html file instead of the empty object.

400: Bad Request

Issue: While performing a resumable upload, I received this error and the message Failed to parse Content-Range header.

Solution: The value you used in your Content-Range header is invalid. For example, Content-Range: */* is invalid and instead should be specified as Content-Range: bytes */*. If you receive this error, your current resumable upload is no longer active, and you must start a new resumable upload.
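A quick way to catch a malformed header before sending is to validate its shape. The sketch below covers only the forms used in resumable uploads (bytes */*, bytes */TOTAL, and bytes START-END/TOTAL), not the full HTTP grammar:

```python
import re

# Accepts "bytes */*", "bytes */100", and "bytes 0-1023/2048"-style values.
_CONTENT_RANGE = re.compile(r"^bytes (\*|\d+-\d+)/(\*|\d+)$")

def content_range_is_valid(value):
    """True when value looks like a Content-Range usable in a resumable upload."""
    return _CONTENT_RANGE.fullmatch(value) is not None

print(content_range_is_valid("bytes */*"))  # True
print(content_range_is_valid("*/*"))        # False
```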

400: Storage Intelligence Specific Errors

The following sections describe common errors that you might encounter when you configure or manage Storage Intelligence for a resource.

400: Invalid Bucket Name

Issue: When you configure or manage Storage Intelligence for a resource, you might receive this error and the message The specific bucket is not valid.

Solution: The URL that you used in the request is invalid. The URL must meet the following requirements:

  • locations/global is the only supported location for Storage Intelligence. Using any other location is unsupported.
  • Storage Intelligence is singular in the URL, not plural.

The following example shows a request with a valid URL:

curl -X PATCH \
  -H "Content-Type: application/json" \
  -d '{"edition_config": "STANDARD"}' \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://storage.googleapis.com/v2/projects/my-project/locations/global/storageIntelligence?updateMask=edition_config"

400: Invalid Argument - Empty Update Mask

Issue: When you configure or manage Storage Intelligence for a resource, you might receive this error and the message Empty UPDATE_MASK in the request.

Solution: UPDATE_MASK is the comma-separated list of field names that the request updates. The field names use the FieldMask format and are part of the IntelligenceConfig resource. To update the Storage Intelligence configuration of a resource, use a valid UPDATE_MASK in the request. An empty value is not supported.
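As an illustration of how such a mask is assembled, the sketch below joins field names into an UPDATE_MASK and rejects the empty and unknown-field cases described in these sections. The field names in UPDATABLE_FIELDS are placeholders, not the full IntelligenceConfig schema:

```python
# Placeholder set of updatable field names; consult the IntelligenceConfig
# reference for the real schema.
UPDATABLE_FIELDS = {"edition_config", "filter"}

def build_update_mask(fields):
    """Join field names into an UPDATE_MASK, rejecting empty or unknown input."""
    if not fields:
        raise ValueError("Empty UPDATE_MASK in the request")
    unknown = sorted(set(fields) - UPDATABLE_FIELDS)
    if unknown:
        raise ValueError(f"Invalid UPDATE_MASK paths: {unknown}")
    return ",".join(fields)

print(build_update_mask(["edition_config"]))  # edition_config
```

The returned string is what you would pass as the updateMask query parameter.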

400: Invalid Update Mask Path

Issue: When you configure or manage Storage Intelligence for a resource, you might receive this error and the message Invalid UPDATE_MASK paths.

Solution: If you use an invalid field name in the UPDATE_MASK, you get an error message. UPDATE_MASK is the comma-separated list of field names that the request updates. The field names use the FieldMask format and are part of the IntelligenceConfig resource. To update the Storage Intelligence configuration of a resource, ensure that every field name listed in the UPDATE_MASK is a valid field within the IntelligenceConfig resource.

400: Field Is Not Editable

Issue: When you configure or manage Storage Intelligence for a resource, you might receive this error and the message Invalid UPDATE_MASK: UPDATE_TIME field is not editable.

Solution: UPDATE_MASK is the comma-separated list of field names that the request updates. The field names use the FieldMask format and are part of the IntelligenceConfig resource. If you try to update a field that is not editable, you get an error message. Remove the uneditable field from the UPDATE_MASK and try again.

400: Invalid Value

Issue: When you configure or manage Storage Intelligence for a resource, you might receive this error and the message Invalid value at storage_intelligence.edition_config.

Solution: If you try to use an invalid value for the edition_config field, you get an error message. The allowed values are INHERIT, STANDARD, and DISABLED. Review the value and try again.

400: Non-empty Filter

Issue: When you update the Storage Intelligence configuration for a resource, you might receive this error and the message Non-empty filter cannot be specified for INHERIT or DISABLED edition configuration.

Solution: When you update the Storage Intelligence edition_config to INHERIT or DISABLED, you cannot use any bucket filters in the request. Remove the filters from the request and try again.

400: Empty Location Or Bucket Values In Filter

Issue: When you update the Storage Intelligence configuration for a resource, you might receive this error and the message Empty location or bucket values in filter.

Solution: When you update the Storage Intelligence configuration and use a bucket filter in the request, an error occurs if the value of location or bucket is an empty string. Provide a valid value for location or bucket and try again.
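A client-side pre-check on the request body can catch this before the API does. The filter shape below (list names mapping to lists of strings) is an assumption for illustration, not the exact IntelligenceConfig filter schema:

```python
def check_filter_values(bucket_filter):
    """Raise if any location or bucket list in the filter contains an empty string.

    `bucket_filter` is assumed to map list names (for example, "locations"
    or "buckets") to lists of string values.
    """
    for key, values in bucket_filter.items():
        if any(v == "" for v in values):
            raise ValueError(f"Empty location or bucket values in filter: {key}")
    return True

print(check_filter_values({"locations": ["us-central1"], "buckets": ["my-bucket"]}))  # True
```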

401: Unauthorized

Issue: Requests to a public bucket, made directly or through Cloud CDN, are failing with an HTTP 401: Unauthorized status and an Authentication Required response.

Solution: Check that your client, or any intermediate proxy, is not adding an Authorization header to requests to Cloud Storage. Any request with an Authorization header, even if empty, is validated as if it were an authentication attempt.

403: Account Disabled

Issue: I tried to create a bucket but got a 403 Account Disabled error.

Solution: This error indicates that you have not yet turned on billing for the associated project. For steps for enabling billing, see Enable billing for a project.

If billing is turned on and you continue to receive this error message, you can reach out to support with your project ID and a description of your problem.

403: Forbidden

Issue: I should have permission to access a certain bucket or object, but when I attempt to do so, I get a 403 - Forbidden error with a message that is similar to: example@email.com does not have storage.objects.get access to the Google Cloud Storage object.

Solution: You are missing an IAM permission for the bucket or object that is required to complete the request. If you expect to be able to make the request but cannot, perform the following checks:

  1. Is the grantee referenced in the error message the one you expected? If the error message refers to an unexpected email address or to "Anonymous caller", then your request is not using the credentials you intended. This could be because the tool you are using to make the request was set up with the credentials from another alias or entity, or it could be because the request is being made on your behalf by a service account.

  2. Is the permission referenced in the error message one you thought you needed? If the permission is unexpected, it's likely because the tool you're using requires additional access in order to complete your request. For example, in order to bulk delete objects in a bucket, gcloud must first construct a list of objects in the bucket to delete. This portion of the bulk delete action requires the storage.objects.list permission, which might be surprising, given that the goal is object deletion, which normally requires only the storage.objects.delete permission. If this is the cause of your error message, make sure you're granted IAM roles that have the additional necessary permissions.

  3. Are you granted the IAM role on the intended resource or parent resource? For example, if you're granted the Storage Object Viewer role for a project and you're trying to download an object, make sure the object is in a bucket that's in the project; you might inadvertently have the Storage Object Viewer permission for a different project.

  4. Is your permission to access a certain bucket or object given through a convenience value? The removal of access granted to a convenience value can cause previously enabled principals to lose access to resources.

    For example, say jane@example.com has the Owner (roles/owner) basic role for a project named my-example-project, and the project's IAM policy grants the Storage Object Creator (roles/storage.objectCreator) role to the convenience value projectOwner:my-example-project. This means that jane@example.com has the permissions associated with the Storage Object Creator role for buckets within my-example-project. If this grant gets removed, jane@example.com loses the permissions associated with the Storage Object Creator role.

    In such a scenario, you can regain access to the bucket or object by granting yourself the necessary bucket-level or object-level permissions required to perform the actions you need.

  5. Is there an IAM Deny policy that prevents you from using certain permissions? You can contact your organization administrator to find out whether an IAM Deny policy has been put in place.

403: Permission Denied

Issue: Permission denied error when you configure or manage the Storage Intelligence configuration for a resource.

Solution: If you receive a permission denied error with a message similar to permission storage.intelligenceConfigs.update when you configure and manage Storage Intelligence for a resource, see the permissions section for the operation you want to perform. To resolve this issue, grant the appropriate permissions. You can grant permissions in any of the following ways:

  • Grant IAM permissions on the same resource in the Google Cloud resource hierarchy where you are enabling Storage Intelligence.
  • Ensure that a resource higher in the Google Cloud resource hierarchy passes the permissions to the child resource.

409: Conflict

Issue: I tried to create a bucket but received the following error:

409 Conflict. Sorry, that name is not available. Please try a different one.

Solution: The bucket name you tried to use (e.g. gs://cats or gs://dogs) is already taken. Cloud Storage has a global namespace, so you may not name a bucket with the same name as an existing bucket. Choose a name that is not being used.

412: Custom constraints violated

Issue: My requests are being rejected with a 412 orgpolicy error.

Issue: My requests are being rejected with a 412 Multiple constraints were violated error.

Solution: Check with your security administrator team to see if the bucket to which you're sending requests is being affected by an organization policy that uses a custom constraint. Your bucket might also be affected by different organization policies that conflict with one another. For example, one policy might specify that buckets must have the Standard storage class while another policy specifies that buckets must have the Coldline storage class.

429: Too Many Requests

Issue: My requests are being rejected with a 429 Too Many Requests error.

Solution: You are hitting a limit to the number of requests Cloud Storage allows for a given resource. See the Cloud Storage quotas for a discussion of limits in Cloud Storage.

  • If your workload consists of thousands of requests per second to a bucket, see Request rate and access distribution guidelines for a discussion of best practices, including ramping up your workload gradually and avoiding sequential filenames.

  • If your workload is potentially using 50 Gbps or more of network egress to specific locations, check your bandwidth usage to ensure you're not encountering a bandwidth quota.
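Beyond reducing request rate, the usual remedy for 429 responses is retrying with exponential backoff and jitter. The sketch below is a minimal illustration of the idea, not the client libraries' built-in retry logic:

```python
import random
import time

RETRYABLE_STATUSES = {429, 500, 502, 503, 504}

def call_with_backoff(send_request, max_attempts=5, base_delay=1.0, max_delay=32.0):
    """Call send_request() until it returns a non-retryable HTTP status.

    Waits base_delay * 2**attempt seconds (capped at max_delay, scaled by
    random jitter) between attempts.
    """
    status = None
    for attempt in range(max_attempts):
        status = send_request()
        if status not in RETRYABLE_STATUSES:
            break
        time.sleep(min(max_delay, base_delay * 2 ** attempt) * random.random())
    return status
```

Here send_request is any callable that performs the request and returns its HTTP status code.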

Diagnosing Google Cloud console errors

Issue: When using the Google Cloud console to perform an operation, I get a generic error message. For example, I see an error message when trying to delete a bucket, but I don't see details for why the operation failed.

Solution: Use the Google Cloud console's notifications to see detailed information about the failed operation:

  1. Click the Notifications button in the Google Cloud console header.

    A drop-down displays the most recent operations performed by the Google Cloud console.

  2. Click the item you want to find out more about.

    A page opens up and displays detailed information about the operation.

  3. Click each row to expand the detailed error information.

Issue: When using the Google Cloud console, I don't see a particular column displayed.

Solution: To see a particular column displayed in the Google Cloud console, click the Column display options icon and select the column you want displayed.

Simulated folders and managed folders

Issue: I deleted some objects in my bucket, and now the folder that contained them does not appear in the Google Cloud console.

Solution: While the Google Cloud console displays your bucket's contents as if there were a directory structure, folders don't fundamentally exist in Cloud Storage. As a result, when you remove all objects with a common prefix from a bucket, the folder icon representing that group of objects no longer appears in the Google Cloud console.
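The folder view can be derived entirely from a flat object listing, which is why a folder vanishes once no object shares its prefix. A minimal sketch of that derivation:

```python
def visible_folders(object_names, delimiter="/"):
    """Derive the top-level folder names a flat bucket listing would display."""
    folders = set()
    for name in object_names:
        if delimiter in name:
            # The "folder" is just the prefix before the first delimiter.
            folders.add(name.split(delimiter, 1)[0] + delimiter)
    return sorted(folders)

print(visible_folders(["photos/cat.jpg", "photos/dog.jpg", "readme.txt"]))  # ['photos/']
print(visible_folders(["readme.txt"]))  # [] -- the folder disappears with its objects
```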

Issue: I can't create managed folders.

Solution: To create managed folders, make sure the following requirements are met:

  • You have an IAM role that contains the storage.managedfolders.create permission, such as the Storage Object Admin (roles/storage.objectAdmin) role. For instructions on granting roles, see Use IAM permissions.

  • Uniform bucket-level access is enabled on the bucket in which you want to create managed folders.

  • There are no IAM Conditions on the bucket or the project that use the bucket resource type (storage.googleapis.com/Bucket) or the object resource type (storage.googleapis.com/Object). If any bucket within a project has an IAM Condition that uses either of these resource types, managed folders cannot be created in any of the buckets within that project, even if the condition is later removed.

Issue: I can't disable uniform bucket-level access because there are managed folders in my bucket.

Solution: Uniform bucket-level access cannot be disabled if there are managed folders in the bucket. To disable uniform bucket-level access, you'll need to first delete all managed folders in the bucket.

Static website errors

The following are common issues that you may encounter when setting up a bucket to host a static website.

HTTPS serving

Issue: I want to serve my content over HTTPS without using a load balancer.

Solution: You can serve static content through HTTPS using direct URIs such as https://storage.googleapis.com/my-bucket/my-object. Other options are available for serving your content through a custom domain over SSL.

Inaccessible page

Issue: I get an Access denied error message for a web page served by my website.

Solution: Check that the object is shared publicly. If it is not, see Making Data Public for instructions on how to do this.

If you previously uploaded and shared an object, but then upload a new version of it, you must reshare the object publicly. This is because the public permission is replaced when the new version is uploaded.

Content download

Issue: I am prompted to download my page's content, instead of being able to view it in my browser.

Solution: If you specify a MainPageSuffix as an object that does not have a web content type, site visitors are prompted to download the content instead of being able to see served page content. To resolve this issue, update the Content-Type metadata entry to a suitable value, such as text/html. For instructions, see Editing object metadata.
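When deciding on a suitable value, Python's standard mimetypes module can suggest a content type from the object name; a small helper for that:

```python
import mimetypes

def suggest_content_type(object_name, default="application/octet-stream"):
    """Guess a web content type from an object's file extension."""
    content_type, _ = mimetypes.guess_type(object_name)
    # Fall back to a generic binary type when the extension is unknown.
    return content_type or default

print(suggest_content_type("index.html"))  # text/html
```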

Make data public

Issue: I'm trying to make my data public but I get an organizationpolicy error.

Solution: Some organization policy constraints can prevent you from making your data public. For example, the Domain Restricted Sharing constraint (constraints/iam.allowedPolicyMemberDomains) restricts resource sharing based on the organization's domain. For organization policy failures, contact your administrator to grant you the project or bucket-level permissions to allow resource sharing by editing the organization policy for the organization, folder, or project resource. If you continue to see this error after overriding the organization policy, then you might need to wait a few minutes for the change to take effect.

Issue: I get a permission error when I attempt to make my data public.

Solution: Make sure that you have the storage.buckets.setIamPolicy permission or the storage.objects.setIamPolicy permission. These permissions are granted, for example, in the Storage Admin (roles/storage.admin) role. If you have the storage.buckets.setIamPolicy permission or the storage.objects.setIamPolicy permission and you still get an error, your bucket might be subject to public access prevention, which does not allow access to allUsers or allAuthenticatedUsers. Public access prevention might be set on the bucket directly, or it might be enforced through an organization policy that is set at a higher level.

Latency

The following are common latency issues you might encounter. In addition, the Google Cloud Service Health Dashboard provides information about incidents affecting Google Cloud services such as Cloud Storage.

Upload or download latency

Issue: I'm seeing increased latency when uploading or downloading.

Solution: Consider the following common causes of upload and downloadlatency:

  • CPU or memory constraints: The affected environment's operating system should have tooling to measure local resource consumption, such as CPU usage and memory usage.

  • Disk IO constraints: The performance impact might be caused by local disk IO.

  • Geographical distance: Performance can be impacted by the physical separation of your Cloud Storage bucket and affected environment, particularly in cross-continental cases. Testing with a bucket located in the same region as your affected environment can identify the extent to which geographic separation is contributing to your latency.

    • If applicable, the affected environment's DNS resolver should use the EDNS(0) protocol so that requests from the environment are routed through an appropriate Google Front End.

CLI or client library latency

Issue: I'm seeing increased latency when accessing Cloud Storage with the Google Cloud CLI or one of the client libraries.

Solution: The gcloud CLI and the client libraries automatically retry requests when it's useful to do so, and this behavior can effectively increase latency as seen from the end user. Use the Cloud Monitoring metric storage.googleapis.com/api/request_count to see if Cloud Storage is consistently serving a retryable response code, such as 429 or 5xx.

Proxy servers

Issue: I'm connecting through a proxy server. What do I need to do?

Solution: To access Cloud Storage through a proxy server, you must allow access to these domains:

  • accounts.google.com for creating OAuth2 authentication tokens
  • oauth2.googleapis.com for performing OAuth2 token exchanges
  • *.googleapis.com for storage requests

If your proxy server or security policy doesn't support allowlisting by domain and instead only supports allowlisting by IP network block, we strongly recommend that you configure your proxy server for all Google IP address ranges. You can find the address ranges by querying WHOIS data at ARIN. As a best practice, you should periodically review your proxy settings to ensure they match Google's IP addresses.
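If you implement the domain allowlist yourself, for example in a proxy plugin, simple wildcard matching is enough for the patterns listed above; a sketch using the standard fnmatch module:

```python
from fnmatch import fnmatch

# The domains listed earlier on this page; "*" is a wildcard label.
ALLOWED_DOMAINS = ["accounts.google.com", "oauth2.googleapis.com", "*.googleapis.com"]

def host_allowed(host):
    """True when the hostname matches an allowlisted domain pattern."""
    return any(fnmatch(host, pattern) for pattern in ALLOWED_DOMAINS)

print(host_allowed("storage.googleapis.com"))  # True
print(host_allowed("example.com"))             # False
```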

We don't recommend configuring your proxy with individual IP addresses you obtain from one-time lookups of oauth2.googleapis.com and storage.googleapis.com. Because Google services are exposed using DNS names that map to a large number of IP addresses that can change over time, configuring your proxy based on a one-time lookup may lead to failures to connect to Cloud Storage.

If your requests are being routed through a proxy server, you may need to check with your network administrator to ensure that the Authorization header containing your credentials is not stripped out by the proxy. Without the Authorization header, your requests are rejected and you receive a MissingSecurityHeader error.

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2025-11-03 UTC.