Data availability and durability
This page explains the concepts of availability and durability within Cloud Storage:
Availability: The ability to access data immediately upon request.
Durability: Long-term protection to ensure data remains intact and uncorrupted.
The following sections cover how Cloud Storage redundantly stores data, the default replication behavior for dual-regions and multi-regions, and advanced features like turbo replication and cross-bucket replication.
Key concepts
The monthly availability of data stored in Cloud Storage depends on the storage class of the data and the location type of the bucket. For more information, see available storage classes.
Cloud Storage is designed for at least 99.999999999% (11 9's) annual durability, regardless of storage class and location type.
To achieve this, Cloud Storage uses erasure coding and stores data pieces redundantly across multiple devices.
Writes to Cloud Storage are only confirmed as successful after data has been redundantly stored.
Checksums are stored and regularly revalidated to proactively verify the integrity of all data at rest, as well as to detect corruption of data in transit. If required, corrections are automatically made using redundant data.
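As an illustration of the checksum technique described above, the sketch below recomputes an object's base64-encoded MD5 digest, the format Cloud Storage exposes in an object's md5Hash metadata field, and compares it to the value recorded at upload time. (Cloud Storage also uses CRC32C checksums; MD5 is used here only because it is available in the Python standard library, and the payload is a stand-in.)

```python
import base64
import hashlib

def md5_b64(data: bytes) -> str:
    """Base64-encoded MD5 digest, the format Cloud Storage reports in an
    object's md5Hash metadata field."""
    return base64.b64encode(hashlib.md5(data).digest()).decode("ascii")

payload = b"hello, durable world"   # stand-in for object data
recorded = md5_b64(payload)         # digest recorded at upload time

# After a later download, recompute and compare; a mismatch would
# indicate corruption in transit:
assert md5_b64(payload) == recorded
```

A client-side check like this complements, but does not replace, the server-side revalidation Cloud Storage performs on data at rest.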
How a bucket's location type affects availability and durability
Regional buckets store data redundantly in at least two availability zones in the region you select. They are designed to tolerate the loss of any one availability zone in the region.
Object writes to a bucket are only confirmed as successful after data has been redundantly stored across at least two different availability zones.
In the unlikely event of an availability zone outage, such as one caused by a natural disaster, regional buckets remain available, with no need to change storage paths.
Dual-region and multi-region buckets store data redundantly in at least two separate geographic places.
For dual-regions, you select the specific regions in which your objects are stored.
For multi-regions, the specific data centers used for storing your data are determined by Cloud Storage as needed, but are located within the geographic boundary of the multi-region and are separated by at least 100 miles. This provides redundancy across regions at a lower storage cost than dual-regions.
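To make the region selection concrete, here is a hedged sketch of the JSON API bucket resource for a configurable dual-region. The bucket name and region pair are hypothetical; the customPlacementConfig field is the one that names the two regions, and the top-level location must contain both of them.

```python
def dual_region_bucket(name: str, region_a: str, region_b: str) -> dict:
    """Build a minimal JSON API bucket resource selecting two regions.
    The top-level "location" must be the multi-region or country code
    containing both regions; "US" here is an assumption for illustration."""
    return {
        "name": name,
        "location": "US",
        "customPlacementConfig": {"dataLocations": [region_a, region_b]},
    }

# Hypothetical bucket spanning two US regions:
bucket = dual_region_bucket("my-dual-bucket", "US-EAST1", "US-WEST1")
```

This is only the resource body; creating the bucket requires an authenticated request to the JSON API, which is not shown.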
Object writes to a bucket are only confirmed as successful after data has been redundantly stored in one initial region, across at least two different availability zones (the same as writes to a regional bucket). Data is then asynchronously replicated using default replication to provide the expected redundancy across regions.
If one of the regions in which an object is stored becomes unavailable after the object is successfully uploaded but prior to it being replicated for georedundancy, Cloud Storage's strong consistency ensures that stale versions of the object won't be served and that subsequent overwrites aren't reverted when the region becomes available again.
As a premium offering, you can optionally enable turbo replication on dual-region buckets to achieve faster, more predictable replication times across regions for newly written data.
In the unlikely event of a region-wide outage, such as one caused by a natural disaster, dual-region and multi-region buckets remain available, with no need to change storage paths.
To achieve redundancy between a region pairing that is not available as a dual-region, consider creating a separate bucket for each region and using Storage Transfer Service event-driven transfers or cross-bucket replication to keep the buckets in sync.
Locally redundant data, such as data in a zonal bucket, provides 99.999999999% (11 9's) annual durability against hardware failures like host, rack, or drive failures. However, because data is not redundant across availability zones, it may become unavailable or permanently lost in the event of an availability zone failure. As a result, locally redundant storage is most suitable for data that can be replaced or reconstructed.
Redundancy across regions
While traditional storage models often rely on an active-passive approach with "primary" and "secondary" geographic locations, Cloud Storage dual-regions and multi-regions provide an active-active architecture based on a single bucket with redundancy across regions. This simplifies the disaster recovery process by eliminating the need for users to replicate data from one bucket to another or manually fail over to a secondary bucket in the case of primary region downtime.
Cloud Storage always understands the current state of a bucket and transparently serves objects from an available region as required. As a result, dual-region and multi-region buckets are designed to have a recovery time objective (RTO) of zero, and temporary regional failures are normally invisible to users; in the case of a regional outage, dual-region and multi-region buckets automatically continue serving all data that has been replicated across regions.
However, redundancy across regions occurs asynchronously, and any data that does not finish replicating across regions prior to a region becoming unavailable is inaccessible until the downed region comes back online. Data could potentially be lost in the very unlikely case of physical destruction of the region.
Default replication in Cloud Storage is designed to provide redundancy across regions for 99.9% of newly written objects within a target of one hour and 100% of newly written objects within a target of 12 hours. Newly written objects include uploads, rewrites, copies, and compositions.
Cloud Storage also offers a cross-bucket replication capability that can be used to replicate data between independent buckets to meet additional data replication needs that aren't met by dual-region or multi-region locations.
Turbo replication
Turbo replication provides faster redundancy across regions for data in your dual-region buckets, which reduces data loss exposure and helps support uninterrupted service following a regional outage. When enabled, turbo replication is designed to replicate 100% of newly written objects to the two regions that constitute a dual-region within the recovery point objective of 15 minutes, regardless of object size.
Note that even with default replication, most objects finish replicating within minutes.
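As a sketch of how the two modes are toggled, the JSON API bucket resource carries an rpo field: "DEFAULT" for default replication and "ASYNC_TURBO" for turbo replication. The helper below only builds the PATCH body; the bucket name and the surrounding authenticated request are assumptions and are not shown.

```python
def rpo_patch_body(turbo: bool) -> dict:
    """PATCH body for storage/v1/b/{bucket} setting the replication mode."""
    return {"rpo": "ASYNC_TURBO" if turbo else "DEFAULT"}

# Enabling turbo replication on a (hypothetical) dual-region bucket means
# sending this body in an authenticated PATCH request:
body = rpo_patch_body(turbo=True)
```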
While redundancy across regions and turbo replication help support business continuity and disaster recovery (BCDR) efforts, administrators should plan and implement a full BCDR architecture that's appropriate for their workload.
For more information, see the Step-by-step guide to designing disaster recovery for applications in Google Cloud.
Limitations
- Turbo replication is only available for buckets in dual-regions.
- Turbo replication cannot be managed through the XML API, including creating a new bucket with turbo replication enabled.
- When turbo replication is enabled on a bucket, it can take up to 10 seconds before it begins to apply to newly written objects.
- Object writes that began prior to enabling turbo replication on a bucket replicate across regions at the default replication rate.
- Object composition that uses any source objects written using default replication in the last 12 hours creates a composite object that also uses default replication.
Cross-bucket replication
In some cases, you might want to maintain a copy of your data in a second bucket. Cross-bucket replication copies new and updated objects asynchronously from a source bucket to a destination bucket.
Cross-bucket replication differs from default replication and turbo replication in that your data exists in two independent buckets, each with its own configuration, such as storage location, encryption, access, and storage class. It is especially suitable for:
- Data sovereignty: Maintain data across geographically distant regions.
- Maintaining separate development and production versions: Create distinct buckets and namespaces, so that development doesn't affect your production workload.
- Sharing data: Replicate data to a bucket owned by a vendor or partner.
- Aggregating data: Combine data from different buckets into a single bucket to run analytics workloads.
- Managing cost, security, and compliance: Maintain your data under different ownerships, storage classes, and retention periods.
Cross-bucket replication uses Storage Transfer Service to replicate objects and Pub/Sub to receive notifications of changes to the source and destination buckets. You can enable cross-bucket replication on new buckets you create and on existing buckets.
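As a rough sketch of what enabling cross-bucket replication produces under the hood, a Storage Transfer Service job with a replication spec ties a source bucket to a destination bucket. The field names below follow the Storage Transfer Service REST API's transferJobs resource as best understood here, and the project and bucket names are hypothetical; consult the API reference before relying on exact fields.

```python
def replication_job(project_id: str, src_bucket: str, dst_bucket: str) -> dict:
    """Minimal transferJobs resource for cross-bucket replication (sketch)."""
    return {
        "projectId": project_id,
        "status": "ENABLED",
        "replicationSpec": {
            "gcsDataSource": {"bucketName": src_bucket},
            "gcsDataSink": {"bucketName": dst_bucket},
        },
    }

# Hypothetical job replicating from one bucket to another:
job = replication_job("my-project", "source-bucket", "replica-bucket")
```

In practice you would create this resource through an authenticated call to the Storage Transfer Service API rather than assembling it by hand.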
For buckets where the object change rate is under 3,000 per second and objects are under 1 GiB, cross-bucket replication commonly takes minutes to tens of minutes, but no specific upper bound is supported. Buckets experiencing higher change rates or containing larger objects can expect higher replication delays.
For instructions on using cross-bucket replication, see Use cross-bucket replication.
Note: When using cross-bucket replication, you might incur charges from data transfer, data storage, data processing, data retrieval, and operations from Cloud Storage, Storage Transfer Service, and Pub/Sub. For details on pricing, see Cloud Storage pricing, Storage Transfer Service pricing, and Pub/Sub pricing.
Limitations
- Custom names are not supported for cross-bucket replication jobs. Create requests that contain a value for the name field return an error.
- Cross-bucket replication is not supported for hierarchical namespace buckets.
- Object deletions in the source bucket are not replicated to the destination bucket.
- Object lifecycle configurations aren't replicated.
- When objects are replicated, timestamp metadata (for example, timeCreated and timeUpdated) is not preserved. See Transfers between Cloud Storage buckets for details on metadata preservation.
- Because cross-bucket replication can be used to replicate data between buckets located in any Google Cloud location, cross-bucket replication performance varies based on the locations selected. Consequently, cross-bucket replication does not offer a recovery point objective (RPO).
- Objects that are already in the bucket when a replication job is created are not automatically replicated. Only new and updated objects are replicated. To replicate existing objects, create a one-time Storage Transfer Service transfer job from your existing bucket to the new bucket. See Create transfers for instructions.
Performance monitoring
Cloud Storage monitors the oldest unreplicated objects in dual-region and multi-region buckets using default replication or turbo replication. If an object remains unreplicated for longer than its recovery point objective (RPO), it's considered to be out of RPO. Each minute in which one or more objects are out of RPO is counted as a "bad" minute.
For example, if one object yielded 20 bad minutes from 9:00-9:20 AM, and another object yielded 10 bad minutes from 9:15-9:25 AM, then two objects for the month are out of RPO. The total number of bad minutes for the month is 25, because from 9:00 AM to 9:25 AM there was at least one object that was missing its RPO.
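The bad-minute arithmetic in the example above is a union of time intervals, which the following sketch reproduces. Minutes are counted from 9:00 AM, and the half-open interval representation is an assumption for illustration.

```python
def bad_minutes(intervals):
    """Count minutes covered by at least one out-of-RPO interval.
    Each interval is a half-open (start_minute, end_minute) pair."""
    covered = set()
    for start, end in intervals:
        covered.update(range(start, end))
    return len(covered)

# One object out of RPO from 9:00-9:20, another from 9:15-9:25;
# the 9:15-9:20 overlap is counted only once:
total = bad_minutes([(0, 20), (15, 25)])  # 25 bad minutes
```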
- For buckets using turbo replication, the RPO for objects is 15 minutes.
- For buckets using default replication, the RPO for objects is 12 hours, though objects are typically replicated in one hour or less.
- Cross-bucket replication doesn't provide an RPO.
Within the Google Cloud console, the Percent of minutes out of RPO graph lets you monitor the percentage of bad minutes during the past 30 days for your bucket when using default replication or turbo replication within dual-region or multi-region buckets. This service level indicator can be used to monitor your bucket's Monthly Replication Time Conformance. Similarly, the Percent of objects out of target graph tracks object replications that did not occur within the RPO. This service level indicator can be used to monitor the bucket's Monthly Replication Volume Conformance. For more information, see Cloud Storage monitoring and Cloud Storage SLA.
What's next
- Enable turbo replication on an existing dual-region bucket.
- Learn more about turbo replication pricing.
- Move data to a different bucket in a new location.
Last updated 2026-02-19 UTC.