Retry strategy

This page describes how Cloud Storage tools retry failed requests and how to customize the behavior of retries. It also describes considerations for retrying requests.

Overview

There are two factors that determine whether or not a request is safe to retry:

  • The response that you receive from the request.

  • The idempotency of the request.

Response

The response that you receive from your request indicates whether or not it's useful to retry the request. Responses related to transient problems are generally retryable. On the other hand, responses related to permanent errors indicate you need to make changes, such as authorization or configuration changes, before it's useful to try the request again. The following responses indicate transient problems that are useful to retry:

  • HTTP 408, 429, and 5xx response codes.
  • Socket timeouts and TCP disconnects.

For more information, see the status and error codes for JSON and XML.
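The response-based test above can be sketched as a small stand-alone helper. This is not part of any Cloud Storage client library; the function name is hypothetical and the sketch only encodes the retryable codes listed in this section (408, 429, and 5xx):

```python
# Hypothetical helper illustrating the response criterion: HTTP 408, 429,
# and 5xx status codes signal transient problems that are worth retrying.
RETRYABLE_STATUS_CODES = {408, 429}


def is_retryable_status(code: int) -> bool:
    """Return True if the HTTP status code indicates a transient error."""
    return code in RETRYABLE_STATUS_CODES or 500 <= code <= 599
```

Note that socket timeouts and TCP disconnects never produce an HTTP status code at all, so a real retry layer must also treat those connection-level failures as retryable.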

Idempotency

Requests that are idempotent can be executed repeatedly without changing the final state of the targeted resource, resulting in the same end state each time. For example, list operations are always idempotent, because such requests don't modify resources. On the other hand, creating a new Pub/Sub notification is never idempotent, because it creates a new notification ID each time the request succeeds.

The following are examples of conditions that make an operation idempotent:

  • The operation has the same observable effect on the targeted resource even when continually requested.

  • The operation only succeeds once.

  • The operation has no observable effect on the state of the targeted resource.

When you receive a retryable response, you should consider the idempotency of the request, because retrying requests that are not idempotent can lead to race conditions and other conflicts.

Conditional idempotency

A subset of requests are conditionally idempotent, which means they are only idempotent if they include specific optional arguments. Operations that are conditionally safe to retry should only be retried by default if the condition case passes. Cloud Storage accepts preconditions and ETags as condition cases for requests.
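The combined decision rule (response plus idempotency category) can be sketched as follows. This is an illustrative, stand-alone sketch, not any client library's API; the enum and function names are hypothetical:

```python
from enum import Enum


class Idempotency(Enum):
    """The three idempotency categories described in this section."""
    ALWAYS = "always"
    CONDITIONAL = "conditional"
    NEVER = "never"


def safe_to_retry(idempotency: Idempotency, has_precondition: bool = False) -> bool:
    """Decide whether a request with a retryable response should be retried.

    Conditionally idempotent requests are only safe to retry when a condition
    case (e.g. an ifGenerationMatch precondition or an ETag) was supplied.
    """
    if idempotency is Idempotency.ALWAYS:
        return True
    if idempotency is Idempotency.CONDITIONAL:
        return has_precondition
    # Never-idempotent operations (e.g. creating a Pub/Sub notification)
    # are not retried by default.
    return False
```

This mirrors the default behavior of most of the client libraries described below: always-idempotent operations retry, conditionally idempotent ones retry only when the precondition is present, and never-idempotent ones do not retry.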

Idempotency of operations

The following table lists the Cloud Storage operations that fall into each category of idempotency.

Idempotency                 Operations
Always idempotent
  • All get and list requests
  • Insert or delete buckets
  • Test bucket IAM policies and permissions
  • Lock retention policies
  • Delete an HMAC key or Pub/Sub notification
Conditionally idempotent
  • Update/patch requests for buckets with IfMetagenerationMatch¹ or etag¹ as an HTTP precondition
  • Update/patch requests for objects with IfMetagenerationMatch¹ or etag¹ as an HTTP precondition
  • Set a bucket IAM policy with etag¹ as an HTTP precondition or in the resource body
  • Update an HMAC key with etag¹ as an HTTP precondition or in the resource body
  • Insert, copy, compose, or rewrite objects with ifGenerationMatch¹
  • Delete an object with ifGenerationMatch¹ (or with a generation number for object versions)
Never idempotent
  • Create an HMAC key
  • Create a Pub/Sub notification
  • Create, delete, or send patch/update requests for bucket and object ACLs or default object ACLs

¹ This field is available for use in the JSON API. For fields available for use in the client libraries, see the relevant client library documentation.

How Cloud Storage tools implement retry strategies

Console

The Google Cloud console sends requests to Cloud Storage on your behalf and handles any necessary backoff.

Command line

gcloud storage commands retry the errors listed in the Response section without requiring you to take additional action. You might have to take action for other errors, such as the following:

  • Invalid credentials or insufficient permissions.

  • Network unreachable because of a proxy configuration problem.

For retryable errors, the gcloud CLI retries requests using a truncated binary exponential backoff strategy. The default maximum number of retries is 32 for the gcloud CLI.

Client libraries

C++

By default, operations support retries for the following HTTP error codes, as well as any socket errors that indicate the connection was lost or never successfully established:

  • 408 Request Timeout
  • 429 Too Many Requests
  • 500 Internal Server Error
  • 502 Bad Gateway
  • 503 Service Unavailable
  • 504 Gateway Timeout

All exponential backoff and retry settings in the C++ library are configurable. If the algorithms implemented in the library don't support your needs, you can provide custom code to implement your own strategies.

Setting                               Default value
Auto retry                            True
Maximum time retrying a request       15 minutes
Initial wait (backoff) time           1 second
Wait time multiplier per iteration    2
Maximum amount of wait time           5 minutes

By default, the C++ library retries all operations with retryable errors, even those that are never idempotent and can delete or create multiple resources when repeatedly successful. To only retry idempotent operations, use the google::cloud::storage::StrictIdempotencyPolicy class.

C#

The C# client library uses exponential backoff by default.

Go

By default, operations support retries for the following errors:

  • Connection errors:
    • io.ErrUnexpectedEOF: This may occur due to transient network issues.
    • url.Error containing connection refused: This may occur due to transient network issues.
    • url.Error containing connection reset by peer: This means that Google Cloud has reset the connection.
    • net.ErrClosed: This means that Google Cloud has closed the connection.
  • HTTP codes:
    • 408 Request Timeout
    • 429 Too Many Requests
    • 500 Internal Server Error
    • 502 Bad Gateway
    • 503 Service Unavailable
    • 504 Gateway Timeout
  • Errors that implement the Temporary() interface and give a value of err.Temporary() == true
  • Any of the above errors that have been wrapped using Go 1.13 error wrapping

All exponential backoff settings in the Go library are configurable. By default, operations in Go use the following settings for exponential backoff (defaults are taken from gax):

Setting                                   Default value
Auto retry                                True if idempotent
Max number of attempts                    No limit
Initial retry delay                       1 second
Retry delay multiplier                    2.0
Maximum retry delay                       30 seconds
Total timeout (resumable upload chunk)    32 seconds
Total timeout (all other operations)      No limit

In general, retrying continues indefinitely unless the controlling context is canceled, the client is closed, or a non-transient error is received. To stop retries from continuing, use context timeouts or cancellation. The only exception to this behavior is when performing resumable uploads using Writer, where the data is large enough that it requires multiple requests. In this scenario, each chunk times out and stops retrying after 32 seconds by default. You can adjust the default timeout by changing Writer.ChunkRetryDeadline.

There is a subset of Go operations that are conditionally idempotent (conditionally safe to retry). These operations only retry if they meet specific conditions:

  • GenerationMatch or Generation

    • Safe to retry if a GenerationMatch precondition was applied to the call, or if ObjectHandle.Generation was set.
  • MetagenerationMatch

    • Safe to retry if a MetagenerationMatch precondition was applied to the call.
  • Etag

    • Safe to retry if the method inserts an etag into the JSON request body. Only used in HMACKeyHandle.Update when HmacKeyMetadata.Etag has been set.

RetryPolicy is set to RetryPolicy.RetryIdempotent by default. See Customize retries for examples on how to modify the default retry behavior.

Java

By default, operations support retries for the following errors:

  • Connection errors:
    • Connection reset by peer: This means that Google Cloud has reset the connection.
    • Unexpected connection closure: This means Google Cloud has closed the connection.
  • HTTP codes:
    • 408 Request Timeout
    • 429 Too Many Requests
    • 500 Internal Server Error
    • 502 Bad Gateway
    • 503 Service Unavailable
    • 504 Gateway Timeout

All exponential backoff settings in the Java library are configurable. By default, operations through Java use the following settings for exponential backoff:

Setting                   Default value
Auto retry                True if idempotent
Max number of attempts    6
Initial retry delay       1 second
Retry delay multiplier    2.0
Maximum retry delay       32 seconds
Total Timeout             50 seconds
Initial RPC Timeout       50 seconds
RPC Timeout Multiplier    1.0
Max RPC Timeout           50 seconds
Connect Timeout           20 seconds
Read Timeout              20 seconds

For more information about the settings, see the Java reference documentation for RetrySettings.Builder and HttpTransportOptions.Builder.

There is a subset of Java operations that are conditionally idempotent (conditionally safe to retry). These operations only retry if they include specific arguments:

  • ifGenerationMatch or generation

    • Safe to retry if ifGenerationMatch or generation was passed in as an option to the method.
  • ifMetagenerationMatch

    • Safe to retry if ifMetagenerationMatch was passed in as an option.

StorageOptions.setStorageRetryStrategy is set to StorageRetryStrategy#getDefaultStorageRetryStrategy by default. See Customize retries for examples on how to modify the default retry behavior.

Node.js

By default, operations support retries for the following error codes:

  • Connection errors:
    • EAI_AGAIN: This is a DNS lookup error. For more information, see the getaddrinfo documentation.
    • Connection reset by peer: This means that Google Cloud has reset the connection.
    • Unexpected connection closure: This means Google Cloud has closed the connection.
  • HTTP codes:
    • 408 Request Timeout
    • 429 Too Many Requests
    • 500 Internal Server Error
    • 502 Bad Gateway
    • 503 Service Unavailable
    • 504 Gateway Timeout

All exponential backoff settings in the Node.js library are configurable. By default, operations through Node.js use the following settings for exponential backoff:

Setting                               Default value
Auto retry                            True if idempotent
Maximum number of retries             3
Initial wait time                     1 second
Wait time multiplier per iteration    2
Maximum amount of wait time           64 seconds
Default deadline                      600 seconds

There is a subset of Node.js operations that are conditionally idempotent (conditionally safe to retry). These operations only retry if they include specific arguments:

  • ifGenerationMatch or generation

    • Safe to retry if ifGenerationMatch or generation was passed in as an option to the method. Often, methods only accept one of these two parameters.
  • ifMetagenerationMatch

    • Safe to retry if ifMetagenerationMatch was passed in as an option.

retryOptions.idempotencyStrategy is set to IdempotencyStrategy.RetryConditional by default. See Customize retries for examples on how to modify the default retry behavior.

PHP

The PHP client library uses exponential backoff by default.

By default, operations support retries for the following error codes:

  • Connection errors:
    • connection-refused: This may occur due to transient network issues.
    • connection-reset: This means that Google Cloud has reset the connection.
  • HTTP codes:
    • 200: for partial download cases
    • 408 Request Timeout
    • 429 Too Many Requests
    • 500 Internal Server Error
    • 502 Bad Gateway
    • 503 Service Unavailable
    • 504 Gateway Timeout

Some exponential backoff settings in the PHP library are configurable. By default, operations through PHP use the following settings for exponential backoff:

Setting                      Default value
Auto retry                   True if idempotent
Initial retry delay          1 second
Retry delay multiplier       2.0
Maximum retry delay          60 seconds
Request timeout              0 with REST, 60 with gRPC
Default number of retries    3

There is a subset of PHP operations that are conditionally idempotent (conditionally safe to retry). These operations only retry if they include specific arguments:

  • ifGenerationMatch or generation

    • Safe to retry if ifGenerationMatch or generation was passed in as an option to the method. Often, methods only accept one of these two parameters.
  • ifMetagenerationMatch

    • Safe to retry if ifMetagenerationMatch was passed in as an option.

When creating StorageClient, the StorageClient::RETRY_IDEMPOTENT strategy is used by default. See Customize retries for examples on how to modify the default retry behavior.

Python

By default, operations support retries for the following error codes:

  • Connection errors:
    • requests.exceptions.ConnectionError
    • requests.exceptions.ChunkedEncodingError (only for operations that fetch or send payload data to objects, like uploads and downloads)
    • ConnectionError
    • http.client.ResponseNotReady
    • urllib3.exceptions.TimeoutError
  • HTTP codes:
    • 408 Request Timeout
    • 429 Too Many Requests
    • 500 Internal Server Error
    • 502 Bad Gateway
    • 503 Service Unavailable
    • 504 Gateway Timeout

Operations through Python use the following default settings for exponential backoff:

Setting                               Default value (in seconds)
Auto retry                            True if idempotent
Initial wait time                     1
Wait time multiplier per iteration    2
Maximum amount of wait time           60
Default deadline                      120

In addition to Cloud Storage operations that are always idempotent, the Python client library automatically retries Objects: insert, Objects: delete, and Objects: patch by default.

There is a subset of Python operations that are conditionally idempotent (conditionally safe to retry) when they include specific arguments. These operations only retry if a condition case passes:

  • DEFAULT_RETRY_IF_GENERATION_SPECIFIED

    • Safe to retry if generation or if_generation_match was passed in as an argument to the method. Often, methods only accept one of these two parameters.
  • DEFAULT_RETRY_IF_METAGENERATION_SPECIFIED

    • Safe to retry if if_metageneration_match was passed in as an argument to the method.
  • DEFAULT_RETRY_IF_ETAG_IN_JSON

    • Safe to retry if the method inserts an etag into the JSON request body. For HMACKeyMetadata.update() this means the etag must be set on the HMACKeyMetadata object itself. For the set_iam_policy() method on other classes, this means the etag must be set in the "policy" argument passed into the method.

Ruby

By default, operations support retries for the following error codes:

  • Connection errors:
    • SocketError
    • HTTPClient::TimeoutError
    • Errno::ECONNREFUSED
    • HTTPClient::KeepAliveDisconnected
  • HTTP codes:
    • 408 Request Timeout
    • 429 Too Many Requests
    • 5xx Server Error

All exponential backoff settings in the Ruby client library are configurable. By default, operations through the Ruby client library use the following settings for exponential backoff:

Setting                               Default value
Auto retry                            True
Max number of retries                 3
Initial wait time                     1 second
Wait time multiplier per iteration    2
Maximum amount of wait time           60 seconds
Default deadline                      900 seconds

There is a subset of Ruby operations that are conditionally idempotent (conditionally safe to retry) when they include specific arguments:

  • if_generation_match or generation

    • Safe to retry if the generation or if_generation_match parameter is passed in as an argument to the method. Often, methods only accept one of these two parameters.
  • if_metageneration_match

    • Safe to retry if the if_metageneration_match parameter is passed in as an option.

By default, all idempotent operations are retried, and conditionally idempotent operations are retried only if the condition case passes. Non-idempotent operations are not retried. See Customize retries for examples on how to modify the default retry behavior.

REST APIs

When calling the JSON or XML API directly, you should use the exponential backoff algorithm to implement your own retry strategy.

Customizing retries

Console

You cannot customize the behavior of retries using the Google Cloud console.

Command line

For gcloud storage commands, you can control the retry strategy by creating a named configuration and setting some or all of the following properties:

Setting                         Default value (in seconds)
base_retry_delay                1
exponential_sleep_multiplier    2
max_retries                     32
max_retry_delay                 32

You then apply the defined configuration either on a per-command basis by using the --configuration project-wide flag or for all Google Cloud CLI commands by using the gcloud config set command.

Client libraries

C++

To customize the retry behavior, provide values for the following options when you initialize the google::cloud::storage::Client object:

  • google::cloud::storage::RetryPolicyOption: The library provides google::cloud::storage::LimitedErrorCountRetryPolicy and google::cloud::storage::LimitedTimeRetryPolicy classes. You can provide your own class, which must implement the google::cloud::RetryPolicy interface.

  • google::cloud::storage::BackoffPolicyOption: The library provides the google::cloud::storage::ExponentialBackoffPolicy class. You can provide your own class, which must implement the google::cloud::storage::BackoffPolicy interface.

  • google::cloud::storage::IdempotencyPolicyOption: The library provides the google::cloud::storage::StrictIdempotencyPolicy and google::cloud::storage::AlwaysRetryIdempotencyPolicy classes. You can provide your own class, which must implement the google::cloud::storage::IdempotencyPolicy interface.

For more information, see the C++ client library reference documentation.

namespace gcs = ::google::cloud::storage;
// Create the client configuration:
auto options = google::cloud::Options{};
// Retries only idempotent operations.
options.set<gcs::IdempotencyPolicyOption>(
    gcs::StrictIdempotencyPolicy().clone());
// On error, it backs off for a random delay between [1, 3] seconds, then [3,
// 9] seconds, then [9, 27] seconds, etc. The backoff time never grows larger
// than 1 minute.
options.set<gcs::BackoffPolicyOption>(
    gcs::ExponentialBackoffPolicy(
        /*initial_delay=*/std::chrono::seconds(1),
        /*maximum_delay=*/std::chrono::minutes(1),
        /*scaling=*/3.0)
        .clone());
// Retries all operations for up to 5 minutes, including any backoff time.
options.set<gcs::RetryPolicyOption>(
    gcs::LimitedTimeRetryPolicy(std::chrono::minutes(5)).clone());
return gcs::Client(std::move(options));

C#

You cannot customize the default retry strategy used by theC# client library.

Go

When you initialize a storage client, a default retry configuration is set. Unless they're overridden, the options in the config are set to the default values. Users can configure non-default retry behavior for a single library call (using BucketHandle.Retryer and ObjectHandle.Retryer) or for all calls made by a client (using Client.SetRetry). To modify retry behavior, pass in the relevant RetryOptions to one of these methods.

See the following code sample to learn how to customize your retry behavior.

import (
	"context"
	"fmt"
	"io"
	"time"

	"cloud.google.com/go/storage"
	"github.com/googleapis/gax-go/v2"
)

// configureRetries configures a custom retry strategy for a single API call.
func configureRetries(w io.Writer, bucket, object string) error {
	// bucket := "bucket-name"
	// object := "object-name"
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		return fmt.Errorf("storage.NewClient: %w", err)
	}
	defer client.Close()

	// Configure retries for all operations using this ObjectHandle. Retries may
	// also be configured on the BucketHandle or Client types.
	o := client.Bucket(bucket).Object(object).Retryer(
		// Use WithBackoff to control the timing of the exponential backoff.
		storage.WithBackoff(gax.Backoff{
			// Set the initial retry delay to a maximum of 2 seconds. The length of
			// pauses between retries is subject to random jitter.
			Initial: 2 * time.Second,
			// Set the maximum retry delay to 60 seconds.
			Max: 60 * time.Second,
			// Set the backoff multiplier to 3.0.
			Multiplier: 3,
		}),
		// Use WithPolicy to customize retry so that all requests are retried even
		// if they are non-idempotent.
		storage.WithPolicy(storage.RetryAlways),
	)

	// Use context timeouts to set an overall deadline on the call, including all
	// potential retries.
	ctx, cancel := context.WithTimeout(ctx, 500*time.Second)
	defer cancel()

	// Delete an object using the specified retry policy.
	if err := o.Delete(ctx); err != nil {
		return fmt.Errorf("Object(%q).Delete: %w", object, err)
	}
	fmt.Fprintf(w, "Blob %v deleted with a customized retry strategy.\n", object)
	return nil
}

Java

When you initialize Storage, an instance of RetrySettings is initialized as well. Unless they are overridden, the options in the RetrySettings are set to the default values. To modify the default automatic retry behavior, pass the custom StorageRetryStrategy into the StorageOptions used to construct the Storage instance. To modify any of the other scalar parameters, pass a custom RetrySettings into the StorageOptions used to construct the Storage instance.

See the following example to learn how to customize your retry behavior:

import com.google.api.gax.retrying.RetrySettings;
import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import com.google.cloud.storage.StorageRetryStrategy;
import org.threeten.bp.Duration;

public final class ConfigureRetries {
  public static void main(String[] args) {
    String bucketName = "my-bucket";
    String blobName = "blob/to/delete";
    deleteBlob(bucketName, blobName);
  }

  static void deleteBlob(String bucketName, String blobName) {
    // Customize retry behavior
    RetrySettings retrySettings =
        StorageOptions.getDefaultRetrySettings().toBuilder()
            // Set the max number of attempts to 10 (initial attempt plus 9 retries)
            .setMaxAttempts(10)
            // Set the backoff multiplier to 3.0
            .setRetryDelayMultiplier(3.0)
            // Set the max duration of all attempts to 5 minutes
            .setTotalTimeout(Duration.ofMinutes(5))
            .build();

    StorageOptions alwaysRetryStorageOptions =
        StorageOptions.newBuilder()
            // Customize retry so all requests are retried even if they are non-idempotent.
            .setStorageRetryStrategy(StorageRetryStrategy.getUniformStorageRetryStrategy())
            // Provide the previously configured retrySettings
            .setRetrySettings(retrySettings)
            .build();

    // Instantiate a client
    Storage storage = alwaysRetryStorageOptions.getService();

    // Delete the blob
    BlobId blobId = BlobId.of(bucketName, blobName);
    boolean success = storage.delete(blobId);

    System.out.printf(
        "Deletion of Blob %s completed %s.%n", blobId, success ? "successfully" : "unsuccessfully");
  }
}

Node.js

When you initialize Cloud Storage, a retryOptions config file is initialized as well. Unless they're overridden, the options in the config are set to the default values. To modify the default retry behavior, pass the custom retry configuration retryOptions into the storage constructor upon initialization. The Node.js client library can automatically use backoff strategies to retry requests with the autoRetry parameter.

See the following code sample to learn how to customize your retry behavior.

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';

// The ID of your GCS file
// const fileName = 'your-file-name';

// Imports the Google Cloud client library
const {Storage, IdempotencyStrategy} = require('@google-cloud/storage');

// Creates a client
const storage = new Storage({
  retryOptions: {
    // If this is false, requests will not retry and the parameters
    // below will not affect retry behavior.
    autoRetry: true,
    // The multiplier by which to increase the delay time between the
    // completion of failed requests, and the initiation of the subsequent
    // retrying request.
    retryDelayMultiplier: 3,
    // The total time between an initial request getting sent and its timeout.
    // After timeout, an error will be returned regardless of any retry attempts
    // made during this time period.
    totalTimeout: 500,
    // The maximum delay time between requests. When this value is reached,
    // retryDelayMultiplier will no longer be used to increase delay time.
    maxRetryDelay: 60,
    // The maximum number of automatic retries attempted before returning
    // the error.
    maxRetries: 5,
    // Will respect other retry settings and attempt to always retry
    // conditionally idempotent operations, regardless of precondition
    idempotencyStrategy: IdempotencyStrategy.RetryAlways,
  },
});

console.log(
  'Functions are customized to be retried according to the following parameters:'
);
console.log(`Auto Retry: ${storage.retryOptions.autoRetry}`);
console.log(
  `Retry delay multiplier: ${storage.retryOptions.retryDelayMultiplier}`
);
console.log(`Total timeout: ${storage.retryOptions.totalTimeout}`);
console.log(`Maximum retry delay: ${storage.retryOptions.maxRetryDelay}`);
console.log(`Maximum retries: ${storage.retryOptions.maxRetries}`);
console.log(
  `Idempotency strategy: ${storage.retryOptions.idempotencyStrategy}`
);

async function deleteFileWithCustomizedRetrySetting() {
  await storage.bucket(bucketName).file(fileName).delete();
  console.log(`File ${fileName} deleted with a customized retry strategy.`);
}

deleteFileWithCustomizedRetrySetting();

PHP

When you initialize a storage client, a default retry configuration is set. Unless they're overridden, the options in the config are set to the default values. Users can configure non-default retry behavior for a client or a single operation call by passing override options in an array.

See the following code sample to learn how to customize your retry behavior.

use Google\Cloud\Storage\StorageClient;

/**
 * Configures retries with customizations.
 *
 * @param string $bucketName The name of your Cloud Storage bucket.
 *        (e.g. 'my-bucket')
 */
function configure_retries(string $bucketName): void
{
    $storage = new StorageClient([
        // The maximum number of automatic retries attempted before returning
        // the error.
        // Default: 3
        'retries' => 10,
        // Exponential backoff settings
        // Retry strategy to signify that we always want to retry an operation,
        // even if it is non-idempotent.
        // Default: StorageClient::RETRY_IDEMPOTENT
        'retryStrategy' => StorageClient::RETRY_ALWAYS,
        // Executes a delay
        // Defaults to utilizing `usleep`.
        // Function signature should match: `function (int $delay) : void`.
        // This function is mostly used internally, so the tests don't wait
        // the time of the delay to run.
        'restDelayFunction' => function ($delay) {
            usleep($delay);
        },
        // Sets the conditions for determining how long to wait between attempts to retry.
        // Function signature should match: `function (int $attempt) : int`.
        // Allows changing the initial retry delay, retry delay multiplier, and maximum retry delay.
        'restCalcDelayFunction' => fn ($attempt) => ($attempt + 1) * 100,
        // Sets the conditions for whether or not a request should attempt to retry.
        // Function signature should match: `function (\Exception $ex) : bool`.
        'restRetryFunction' => function (\Exception $e) {
            // Custom logic: ex. only retry if the error code is 404.
            return $e->getCode() === 404;
        },
        // Runs after the restRetryFunction. This might be used to consume the
        // exception and $arguments between retries. This returns the new $arguments,
        // thus allowing modification on demand, for example changing the headers
        // between retries.
        'restRetryListener' => function (\Exception $e, $retryAttempt, &$arguments) {
            // logic
        },
    ]);
    $bucket = $storage->bucket($bucketName);

    // The same options can also be passed per operation to override the
    // client-level configuration.
    $operationRetriesOverrides = [
        'retries' => 10,
        'retryStrategy' => StorageClient::RETRY_ALWAYS,
        'restDelayFunction' => function ($delay) {
            usleep($delay);
        },
        'restCalcDelayFunction' => fn ($attempt) => ($attempt + 1) * 100,
        'restRetryFunction' => function (\Exception $e) {
            // Custom logic: ex. only retry if the error code is 404.
            return $e->getCode() === 404;
        },
        'restRetryListener' => function (\Exception $e, $retryAttempt, &$arguments) {
            // logic
        },
    ];
    foreach ($bucket->objects($operationRetriesOverrides) as $object) {
        printf('Object: %s' . PHP_EOL, $object->name());
    }
}

Python

To modify the default retry behavior, create a copy of the google.cloud.storage.retry.DEFAULT_RETRY object by calling it with a with_BEHAVIOR method. The Python client library automatically uses backoff strategies to retry requests if you include the DEFAULT_RETRY parameter.

Note that with_predicate is not supported for operations that fetch or send payload data to objects, like uploads and downloads. It's recommended that you modify attributes one by one. For more information, see the google-api-core Retry reference.

To configure your own conditional retry, create a ConditionalRetryPolicy object and wrap your custom Retry object with DEFAULT_RETRY_IF_GENERATION_SPECIFIED, DEFAULT_RETRY_IF_METAGENERATION_SPECIFIED, or DEFAULT_RETRY_IF_ETAG_IN_JSON.

See the following code sample to learn how to customize your retry behavior.

from google.cloud import storage
from google.cloud.storage.retry import DEFAULT_RETRY


def configure_retries(bucket_name, blob_name):
    """Configures retries with customizations."""
    # The ID of your GCS bucket
    # bucket_name = "your-bucket-name"
    # The ID of your GCS object
    # blob_name = "your-object-name"

    storage_client = storage.Client()
    bucket = storage_client.bucket(bucket_name)
    blob = bucket.blob(blob_name)

    # Customize retry with a timeout of 500 seconds (default=120 seconds).
    modified_retry = DEFAULT_RETRY.with_timeout(500.0)
    # Customize retry with an initial wait time of 1.5 (default=1.0).
    # Customize retry with a wait time multiplier per iteration of 1.2 (default=2.0).
    # Customize retry with a maximum wait time of 45.0 (default=60.0).
    modified_retry = modified_retry.with_delay(initial=1.5, multiplier=1.2, maximum=45.0)

    # blob.delete() uses DEFAULT_RETRY by default.
    # Pass in modified_retry to override the default retry behavior.
    print(
        f"The following library method is customized to be retried according to "
        f"the following configurations: {modified_retry}"
    )
    blob.delete(retry=modified_retry)
    print(f"Blob {blob_name} deleted with a customized retry strategy.")

Ruby

When you initialize the storage client, all retry configurations are set to the values shown in the table above. To modify the default retry behavior, pass retry configurations while initializing the storage client.

To override the number of retries for a particular operation, pass retries in the options parameter of the operation.

def configure_retries bucket_name: nil, file_name: nil
  # The ID of your GCS bucket
  # bucket_name = "your-unique-bucket-name"
  # The ID of your GCS object
  # file_name = "your-file-name"

  require "google/cloud/storage"

  # Creates a client
  storage = Google::Cloud::Storage.new(
    # The maximum number of automatic retries attempted before returning
    # the error.
    #
    # Customize retry configuration with the maximum retry attempt of 5.
    retries: 5,
    # The total time in seconds that requests are allowed to keep being retried.
    # After max_elapsed_time, an error will be returned regardless of any
    # retry attempts made during this time period.
    #
    # Customize retry configuration with maximum elapsed time of 500 seconds.
    max_elapsed_time: 500,
    # The initial interval between the completion of failed requests, and the
    # initiation of the subsequent retrying request.
    #
    # Customize retry configuration with an initial interval of 1.5 seconds.
    base_interval: 1.5,
    # The maximum interval between requests. When this value is reached,
    # multiplier will no longer be used to increase the interval.
    #
    # Customize retry configuration with maximum interval of 45.0 seconds.
    max_interval: 45,
    # The multiplier by which to increase the interval between the completion
    # of failed requests, and the initiation of the subsequent retrying request.
    #
    # Customize retry configuration with an interval multiplier per iteration of 1.2.
    multiplier: 1.2
  )

  # Uses the retry configuration set during the client initialization above with 5 retries
  file = storage.service.get_file bucket_name, file_name

  # Maximum retry attempt can be overridden for each operation using the options parameter.
  storage.service.delete_file bucket_name, file_name, options: { retries: 4 }

  puts "File #{file.name} deleted with a customized retry strategy."
end

REST APIs

Use the exponential backoff algorithm to implement your own retry strategy.

Exponential backoff algorithm

An exponential backoff algorithm retries requests using exponentially increasing waiting times between requests, up to a maximum backoff time. You should generally use exponential backoff with jitter to retry requests that meet both the response and idempotency criteria. For best practices implementing automatic retries with exponential backoff, see Addressing Cascading Failures.
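The delay schedule described above can be sketched in a few lines of Ruby. This helper is hypothetical (not part of the google-cloud-storage gem); its parameter names mirror the client configuration shown earlier, and it uses "full jitter", where the delay is drawn uniformly below an exponentially growing ceiling:

```ruby
# A minimal sketch of exponential backoff with full jitter. The parameter
# names (base_interval, multiplier, max_interval) mirror the client
# configuration above, but this helper itself is hypothetical.
def backoff_delay attempt, base_interval: 1.0, multiplier: 2.0, max_interval: 60.0
  # Exponentially grow the ceiling, capped at max_interval.
  ceiling = [base_interval * (multiplier**attempt), max_interval].min
  # Full jitter: pick a uniformly random delay below the ceiling so that
  # many clients retrying at once do not do so in lockstep.
  rand * ceiling
end

# Delays for the first few attempts stay within their exponential ceilings.
5.times do |attempt|
  puts format("attempt %d: sleep up to %.1fs (got %.2fs)",
              attempt, [2.0**attempt, 60.0].min, backoff_delay(attempt))
end
```

The randomness is the important part: without jitter, every client that failed at the same moment retries at the same moment, recreating the original load spike.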

Retry anti-patterns

We recommend using or customizing the built-in retry mechanisms where applicable; see customizing retries. Whether you use the default retry mechanisms, customize them, or implement your own retry logic, it's crucial to avoid the following common anti-patterns, as they can exacerbate issues rather than resolve them.

Retrying without backoff

Retrying requests immediately or with very short delays can lead to cascading failures, where one failure triggers further failures.

How to avoid this: Implement exponential backoff with jitter. This strategy progressively increases the wait time between retries and adds a random element to prevent retries from overwhelming the service.

Unconditionally retrying non-idempotent operations

Repeatedly executing operations that are not idempotent can lead to unintended side effects, such as accidental overwrites or deletions of data.

How to avoid this: Thoroughly understand the idempotency characteristics of each operation as detailed in the idempotency of operations section. For non-idempotent operations, ensure your retry logic can handle potential duplicates or avoid retrying them altogether. Be cautious with retries that may lead to race conditions.
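One way to encode this guidance is to make the caller declare idempotency before any retry is allowed. The `with_retries` helper below is hypothetical (not part of any Google library); it only replays the block when the caller asserts the operation is idempotent, for example because a precondition such as `if_generation_match` is attached:

```ruby
# Hypothetical helper: retries the block only when the caller declares the
# operation idempotent, e.g. because it carries a precondition.
def with_retries idempotent:, max_attempts: 3
  attempts = 0
  begin
    attempts += 1
    yield
  rescue StandardError
    # Never replay a request whose effect could compound, and stop once
    # the attempt budget is spent; otherwise surface the error.
    raise unless idempotent && attempts < max_attempts
    retry
  end
end

# A conditionally idempotent delete would pass idempotent: true only when
# a precondition is supplied, e.g.:
#   with_retries(idempotent: !generation.nil?) do
#     file.delete generation: generation
#   end
```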

Retrying unretryable errors

Treating all errors as retryable can be problematic. Some errors, such as authorization failures or invalid requests, are persistent; retrying them without addressing the underlying cause won't succeed and may trap your application in an infinite loop.

How to avoid this: Categorize errors into transient (retryable) and permanent (non-retryable). Only retry transient errors like 408, 429, and 5xx HTTP codes, or specific connection issues. For permanent errors, log them and handle the underlying cause appropriately.
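A classifier for this split can be as small as the following sketch, which encodes exactly the status codes this page lists as retryable (the helper name is illustrative, not from any library):

```ruby
# Transient-vs-permanent classification based on the retryable status
# codes listed on this page: 408, 429, and the 5xx range.
RETRYABLE_CODES = [408, 429].freeze

def retryable_status? code
  RETRYABLE_CODES.include?(code) || (500..599).cover?(code)
end

retryable_status?(429) # => true  (rate-limited: back off and retry)
retryable_status?(503) # => true  (transient server error)
retryable_status?(401) # => false (fix credentials instead of retrying)
retryable_status?(404) # => false (the resource is simply absent)
```

Keeping the decision in one place makes it easy to log permanent errors with context instead of silently looping on them.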

Ignoring retry limits

Retrying indefinitely can exhaust resources in your application or continuously send requests to a service that won't recover without intervention.

How to avoid this: Tailor retry limits to the nature of your workload. For latency-sensitive workloads, consider setting a total maximum retry duration to ensure a timely response or failure. For batch workloads, which might tolerate longer retry periods for transient server-side errors, consider setting a higher total retry limit.
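Both kinds of limit can be enforced in one loop. The hypothetical helper below mirrors the `retries` and `max_elapsed_time` settings shown in the client configuration earlier, giving up once either budget is spent (the injectable `sleeper` exists only so the backoff can be skipped in tests):

```ruby
# Hypothetical retry loop bounded by both attempt count and total elapsed
# time, mirroring the retries / max_elapsed_time client settings.
def retry_with_limits max_attempts: 5, max_elapsed_time: 30.0, sleeper: ->(s) { sleep s }
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  attempts = 0
  begin
    attempts += 1
    yield
  rescue StandardError
    elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
    # Give up once either budget is exhausted, surfacing the last error.
    raise if attempts >= max_attempts || elapsed >= max_elapsed_time
    sleeper.call(0.5 * (2**attempts)) # exponential backoff between tries
    retry
  end
end
```

A latency-sensitive caller would shrink `max_elapsed_time`; a batch job would raise `max_attempts` instead.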

Unnecessarily layering retries

Adding custom application-level retry logic on top of the existing retry mechanisms can lead to an excessive number of retry attempts. For example, if your application retries an operation three times, and the underlying client library also retries it three times for each of your application's attempts, you could end up with nine retry attempts. Sending a high volume of retries for errors that cannot be retried might lead to request throttling, limiting the throughput of all workloads. High numbers of retries might also increase request latency without improving the success rate.

How to avoid this: We recommend using and configuring the built-in retry mechanisms. If you must implement application-level retries, like for specific business logic that spans multiple operations, do so with a clear understanding of the underlying retry behavior. Consider disabling or significantly limiting retries in one of the layers to prevent multiplicative effects.
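The multiplicative effect is easy to demonstrate with a toy model (no real client involved): an application loop of three attempts wrapped around a "client" that itself tries three times produces nine requests on the wire.

```ruby
# Toy illustration of layered retries: 3 application attempts around a
# "client" that itself retries 3 times => 9 requests total.
requests = 0

client_call = lambda do
  3.times do
    requests += 1 # each inner attempt is a real request on the wire
  end
  raise "still failing" # the client gives up after its own retries
end

3.times do # application-level retries layered on top
  begin
    client_call.call
  rescue StandardError
    next
  end
end

puts requests # => 9
```

Setting the inner layer's retry count to zero (or the outer layer's to one) collapses this back to a predictable number of attempts.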

What's next

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2026-02-19 UTC.