Bigtable backups overview

This page provides an overview of Bigtable backups. The content presented here is intended for Bigtable administrators and developers.

Backups let you save a copy of a table's schema and data and then restore from the backup to a new table later. Bigtable offers two types of backups. The type of backup you create depends on your disaster recovery (DR) requirements and the type of storage (HDD or SSD) that your Bigtable cluster uses.

  • Standard backups are optimized for long-term retention. When you restore from a standard backup to an SSD cluster, the restore operation requires additional optimization by Bigtable to bring the table to production-level performance. For more information, see Performance when restoring.
  • Hot backups provide the most efficient restoration to production-level performance and low-latency serving. For more information, see Hot backups.

You can create backups in the following ways:

  • Enable automated backup to let Bigtable create daily backups for you.
  • Create a backup on demand by using the Google Cloud console, the gcloud CLI, or a Bigtable client library (see the example after this list).
  • Create a copy of a backup.
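
For instance, creating an on-demand backup with the gcloud CLI might look like the following sketch. All resource IDs are placeholders, and the instance, cluster, and table are assumed to already exist:

    # Create a backup of my-table on my-cluster that expires in 7 days.
    gcloud bigtable backups create my-backup \
        --instance=my-instance \
        --cluster=my-cluster \
        --table=my-table \
        --retention-period=7d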

Before you read this page, you should be familiar with the Bigtable overview and Manage tables.

Features

  • Fully integrated: Backups are handled entirely by the Bigtable service, with no need to import or export.
  • Incremental: A backup shares physical storage with the source table and other backups of the table.
  • Cost effective: Using Bigtable backups lets you avoid the costs associated with exporting, storing, and importing data using other services.
  • Automatic expiration: Each backup has a user-defined expiration date that can be up to 90 days after the backup is created. You can store a copy of a backup for up to 30 days.
  • Flexible restore options: You can restore from a backup to a table in a different instance from where the backup was created.
  • Automated backup: Enable automated backup to let Bigtable create daily backups.
  • Hot backups: Plan for disaster recovery with production-ready hot backups.

Use cases

Backups are useful for the following use cases:

  • Business continuity
  • Regulatory compliance
  • Testing and development
  • Disaster recovery

Consider the following disaster recovery scenarios:

Goal: Protect against human error. You want to always have a recent backup of your data ready in case of accidental deletion or corruption.
Backup strategy: Determine the backup creation schedule that's right for your business needs, such as daily. Optionally, create periodic copies of the backups and store them in a different project or region for increased isolation and protection (see the sketch after this table). For even more protection, store the backup copies in a project or instance with restricted access permissions.
Restoration strategy: Restore to a new table from the backup or copy, and then re-route requests to the new table.

Goal: Zone unavailability. You need to make sure that in the unlikely event that a Google Cloud zone becomes unavailable, your data is still available.
Backup strategy: Enable automated backup to let Bigtable create a daily backup on every cluster in the instance. Alternatively, create backups on a regular basis, and then periodically create a copy of the most recent backup and store it on one or more clusters in different zones (optionally in a different instance or project).
Restoration strategy: If the zone that contains your serving cluster becomes unavailable, restore from the remote backup copy to a new table, and then re-route requests to the new table.

Goal: Data corruption. Use a backup to recover some of a table's data, such as when part of the source table has become corrupted.
Backup strategy: Enable replication and automated backup to create daily backups in multiple regions, so that if a table becomes corrupted on one cluster, you have one or more backups that don't share storage on the corrupted cluster.
Restoration strategy: Restore from the backup to a new table on another cluster or instance. Then write an application, using a Bigtable client library or Dataflow, that reads from the new table and writes the data back to the source table. When the data has been copied to the original table, delete the new table.

Goal: Fast recovery. Restore to full production performance levels quickly, minimizing downtime.
Backup strategy: Always maintain a recent hot backup of your table.
Restoration strategy: Restore to a new table from the hot backup, and then re-route requests to the new table.
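
As a sketch of the periodic-copy strategy described above, the following gcloud CLI command copies a backup to a cluster in a different project. All resource names are placeholders, and the exact flag spellings are assumptions to verify with gcloud bigtable backups copy --help:

    # Copy the latest backup to a cluster in a separate DR project.
    gcloud bigtable backups copy \
        --source-project=prod-project \
        --source-instance=prod-instance \
        --source-cluster=prod-cluster \
        --source-backup=my-backup \
        --destination-project=dr-project \
        --destination-instance=dr-instance \
        --destination-cluster=dr-cluster \
        --destination-backup=my-backup-copy \
        --retention-period=30d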

Hot backups

A hot backup is a production-ready backup that is optimized for speedy recovery, with lower latency when reading from the new table shortly after restoration. Restoring to production performance from a hot backup is faster than restoring from a standard backup.
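
A minimal sketch of creating a hot backup with the gcloud CLI follows. The --backup-type flag is an assumption based on the two backup types this page describes, so verify the exact flag with gcloud bigtable backups create --help:

    # Assumed flag: --backup-type=HOT selects a hot backup (verify with --help).
    gcloud bigtable backups create my-hot-backup \
        --instance=my-ssd-instance \
        --cluster=my-ssd-cluster \
        --table=my-table \
        --retention-period=7d \
        --backup-type=HOT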

Limitations for hot backups

Hot backups have the following limitations:

  • You can convert a hot backup to a standard backup, but you can't convert a standard backup to a hot backup.
  • You can't create hot backups using automated backup, and you can't create a hot backup on an HDD cluster.
  • A copy of a hot backup is always a standard backup. As such, the backup copy doesn't have the same efficient restoration to production-level performance and low-latency serving as the hot backup.

Working with Bigtable backups

The following actions are available for Bigtable backups. In all cases, the destination project, instance, and cluster must already exist; you can't create these resources as part of a backup operation.

  • Create a standard backup. Destination: any cluster in the same instance as the source table.
  • Create a hot backup. Destination: any cluster in the same instance as the source table. The instance must use SSD storage.
  • Restore from a standard or hot backup to a new table. Destination: any instance, any Bigtable region, any project.
  • Copy a backup. Destination: any instance, any Bigtable region, any project. You can't create a copy of a backup copy, and a copy of a backup is always a standard backup, even if the source is a hot backup.

See Manage backups for step-by-step instructions on these actions as well as operations such as updating and deleting backups.
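
For instance, restoring from a backup to a new table in a different instance might look like the following gcloud CLI sketch, where the fully qualified source and destination names are placeholders:

    # Restore the backup into a new table in another instance.
    gcloud bigtable instances tables restore \
        --source=projects/my-project/instances/source-instance/clusters/source-cluster/backups/my-backup \
        --destination=projects/my-project/instances/dest-instance/tables/restored-table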

Backup storage

A table backup that you create manually or programmatically is stored on a single cluster that you specify. When automated backup is enabled, a backup is stored on each cluster in the instance.

If your cluster exceeds the recommended limits for CPU or storage utilization when you create a backup, your backup creation might be delayed. For more information, see Understand CPU and disk usage.

A backup of a table includes all the data that was in the table at the time the backup was created, on the cluster where the backup is created. A backup is never larger than the size of the source table at the time that the backup is created.

Bigtable backups are incremental. The amount of storage that a backup consumes depends on the size of the table and the extent to which it can share storage of unchanged data with the original table or other backups of the same table. For that reason, a backup's size depends on the amount of data divergence since the prior backup.

You can create up to 150 backups per table per cluster.

You can delete a table that has a backup. To protect your backups, you cannot delete a cluster that contains a backup, and you cannot delete an instance that has one or more backups in any cluster.

A backup still exists after you restore from it to a new table. You can delete it or let it expire when you no longer need it. Backup storage does not count toward the node storage limit for a project.

Data in backups is encrypted.

Retention

You can specify a retention period of up to 90 days for a backup. If you create a copy of a backup, the maximum retention period for the copy is 30 days from the time the copy is created.

You can change the retention period for a backup to keep it for up to 90 days after the backup creation time. For more information, see Modify a backup or backup copy.
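
As an example, extending a backup's retention to the 90-day maximum with the gcloud CLI might look like this sketch (resource names are placeholders):

    # Keep the backup for 90 days after its creation time.
    gcloud bigtable backups update my-backup \
        --instance=my-instance \
        --cluster=my-cluster \
        --retention-period=90d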

For tables with automated backup enabled, the retention period is seven days if you set the policy using the --enable-automated-backup flag. You can set a custom retention period by passing in the --automated-backup-retention-period flag, which accepts a value from 3 days to 90 days. For more information, see Update an automated backup policy.
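
Using the flags named above, here is a hedged sketch of setting an automated backup policy on an existing table; whether the retention flag by itself enables the policy is an assumption to verify with gcloud bigtable instances tables update --help:

    # Set an automated backup policy with 30-day retention.
    gcloud bigtable instances tables update my-table \
        --instance=my-instance \
        --automated-backup-retention-period=30d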

Post-restoration storage

The storage cost for a new table restored from a backup is the same as for any table.

A table restored from a backup might not consume the same amount of storage as the original table, and it might decrease in size after restoration. The size difference depends on how recently compaction has occurred on the source cluster and the destination cluster.

Because compaction occurs on a rolling basis, it's possible that compaction occurs as soon as the table is created. However, compaction can take up to a week to occur.

A new table restored from a backup doesn't inherit the garbage collection policies of the source table. Configure garbage collection policies in the new table before you begin writing new data to the table. For more information, see Configure garbage collection.
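
As a sketch, you could reapply a policy with the cbt CLI's setgcpolicy command; the table and column-family names are placeholders, and cbt is assumed to be configured for your project and instance:

    # Keep cells for at most 7 days in the restored table's column family.
    cbt setgcpolicy restored-table my-family maxage=7d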

Costs

Standard network costs apply when working with backups. You are not charged for backup operations, including creating, copying, or restoring from a backup.

Storage costs

Storage costs are different for standard backups and hot backups.

Standard backup storage costs

To store a standard backup or a copy of a backup, you're charged the standard backup storage rate for the region of the cluster that contains the backup or backup copy.

A standard backup is a complete logical copy of a table. Behind the scenes, Bigtable optimizes standard backup storage utilization. This optimization means that a standard backup is incremental: it shares physical storage with the original table or with other backups of the table whenever possible. Because of Bigtable's built-in storage optimizations, the cost to store a standard backup or a copy of a backup might sometimes be less than the cost of a full physical copy of the table.

In replicated instances where automated backup is enabled, the storage costs might be higher because a backup is created on each cluster daily.

Hot backup storage costs

To store a hot backup, you're charged the hot backup storage rate for the region where the cluster that contains the hot backup is located.

Because a hot backup is stored in a ready state, optimized for quick restoration, you are charged for storage of the entire logical copy of the table, rather than for incremental portions, as you are with a standard backup.

Costs when copying a backup

When you create a copy of a backup in a different region than the source backup, you are charged standard network rates for the cost of copying the data to the destination cluster. You are not charged for network traffic when you create a copy in the same region as the source backup.

Costs when restoring

When you restore a new table from a backup, you are billed for the network cost of replication. If the new table is in an instance that uses replication, you are charged a one-time replication cost for the data to be copied to all clusters in the instance.

If you restore to a different instance than where the backup was created, and the backup's instance and the destination instance don't have at least one cluster in the same region, you are charged a one-time cost for the initial data copy to the destination cluster at the standard network rates.

CMEK

When you create a backup in a cluster that is protected by a customer-managed encryption key (CMEK), the backup is pinned to the primary version of the cluster's CMEK key at the time it is taken. Once the backup is created, its key and key version cannot be modified, even if the KMS key is rotated.

When you restore from a backup, the key version that the backup is pinned to must be enabled for the backup decryption process to succeed. The new table is protected with the latest primary version of the CMEK key for each cluster in the destination instance. If you want to restore from a CMEK-protected backup to a different instance, the destination instance must be CMEK-protected as well but does not need to have the same CMEK configuration as the source instance.

Replication considerations

This section describes additional concepts to understand when backing up and restoring a table in an instance that uses replication.

Replication and backing up

When you take a backup of a table manually in a replicated instance, you choose the cluster where you want to create and store the backup. For tables with automated backup enabled, a daily backup is created on each cluster in the instance.

You don't have to stop writing to the cluster that contains the backup, but you should understand how Bigtable handles replicated writes to the cluster.

A backup is a copy of the table in its state on the cluster where the backup is stored, at the time the backup is created. Table data that has not yet been replicated from another cluster in the instance is not included in the backup.

Each backup has a start and end time. Writes that are sent to the cluster shortly before or during the backup operation might not be included in the backup. Two factors contribute to this uncertainty:

  • A write might be sent to a section of the table that the backup has already copied.
  • A write to another cluster might not have been replicated to the cluster that contains the backup.

In other words, there's a chance that some writes with timestamps before the time of the backup might not be included in the backup.

If this inconsistency is unacceptable for your business requirements, you can use a consistency token with your write requests to ensure that all replicated writes are included in a backup.
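
One way to do this from the command line is the cbt CLI's waitforreplication command, which generates a consistency token and blocks until writes sent before the call have replicated to every cluster; the table name is a placeholder:

    # Returns once earlier writes have replicated to all clusters;
    # create the backup after this command completes.
    cbt waitforreplication my-table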

Backups of replicated tables that are created as part of automated backup are not exact copies of each other, because backup times can vary from cluster to cluster.

Replication and restoring

When you restore a backup to a new table, replication to and from the other clusters in the instance starts immediately after the restore operation has completed on the destination cluster.

Performance

Use the following best practices to ensure that performance remains optimal when you create backups and restore from them.

Performance when backing up

Creating a backup usually takes less than a minute, although it can take up to one hour. Under normal circumstances, backup creation does not affect serving performance.

For optimal performance, don't create a backup of a single table more than once every five minutes. Creating backups more frequently can potentially lead to an observable increase in serving latency.

Performance when restoring

Restoring from a backup to a table in a single-cluster instance takes a few minutes. In replicated instances, restoration takes longer because the data has to be copied to all the clusters. Bigtable always chooses the most efficient route to copy data.

If you restore to a different instance from where the backup was created, the restore operation takes longer than if you restore to the same instance. This is especially true if the destination instance does not have a cluster in the same zone as the cluster where the backup was created.

A bigger table takes longer to restore than a smaller table.

If you have an SSD instance, you might initially experience higher read latency, even after a restore is complete, while the table is optimized. You can check the status at any time during the restore operation to see whether optimization is still in progress.
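
For example, you might check by listing the instance's long-running operations with the gcloud CLI; this is a sketch, and the exact flags are assumptions to verify with gcloud bigtable operations list --help:

    # List long-running operations, including restore and optimization steps.
    gcloud bigtable operations list --instance=my-instance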

If you restore to a different instance from where the backup was created, the destination instance can use HDD or SSD storage. It does not need to use the same storage type as the source instance.

Access control

IAM permissions control access to backup and restore operations. Backup permissions are at the instance level and apply to all backups in the instance.

The account that you use to create a backup of a table must have permission to read the table and create backups in the instance that the table is in (the source instance).

The account that you use to copy a backup must have permission to read the source backup and to create a backup in the destination instance and project.

The account that you use to restore a new table from a backup must have permission to create a table in the instance that you are restoring to.

Each action requires the following IAM permissions:

  • Create a backup: bigtable.tables.readRows, bigtable.backups.create
  • Get a backup: bigtable.backups.get
  • List backups: bigtable.backups.list
  • Delete a backup: bigtable.backups.delete
  • Update a backup: bigtable.backups.update
  • Copy a backup: bigtable.backups.read, bigtable.backups.create
  • Restore from a backup to a new table: bigtable.tables.create, bigtable.backups.restore
  • Get an operation: bigtable.instances.get
  • List operations: bigtable.instances.get
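
These permissions are included in predefined roles such as roles/bigtable.admin. As a sketch, granting that role on an instance with the gcloud CLI (the instance name and member are placeholders):

    # Grant a principal the Bigtable Admin role on one instance.
    gcloud bigtable instances add-iam-policy-binding my-instance \
        --member="user:backup-operator@example.com" \
        --role="roles/bigtable.admin"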

Best practices

Before you create a backup strategy, consider the following best practices. For more information about disaster recovery planning, see the Bigtable section of Architecting disaster recovery for cloud infrastructure outages.

Creating backups

  • Don't back up a table more frequently than once every five minutes.
  • When you back up a table that uses replication, choose the cluster to store the backup after considering the following factors:
    • Cost. One cluster in your instance may be in a lower-cost region than the others.
    • Proximity to your application server. You might want to store the backup as close to your serving application as possible.
  • If you need to ensure that all replicated writes are included in a backup when you back up a table in an instance that uses replication, use a consistency token with your write requests.

Restoring from backups

  • Plan ahead what you will name the new table if you need to restore from a backup. The key point is to be prepared ahead of time so that you don't have to decide when you're dealing with a problem.
  • If you are restoring a table for a reason other than accidental deletion, make sure all reads and writes are going to the new table before you delete the original table.
  • If you plan to restore to a different instance, create the destination instance before you initiate the backup restore operation.
  • To avoid slow table restoration, wait for a restore operation to complete before you initiate another restoration for the same source table in the same zone.
  • Wait at least an hour after creation before you restore from a standard backup. If you need to restore more quickly, use a hot backup instead.

Quotas and limits

Backup and restore requests and backup storage are subject to Bigtable quotas and limits.

Limitations

The following limitations apply to Bigtable backups:

General

  • You can't read directly from a backup.
  • A backup is a version of a table in a single cluster at a specific time. Backups don't represent a consistent state; the same applies to backups of the same table in different clusters.
  • You cannot back up more than one table in a single operation.
  • You cannot export, copy, or move a Bigtable backup to another service, such as Cloud Storage.
  • Bigtable backups contain only Bigtable data and are not integrated with or related to backups for other Google services.
  • You can't create a backup of a view.

Restoring

  • You can't restore from a backup to an existing table.
  • You can only restore to an instance that already exists. Bigtable does not create a new instance when restoring from a backup. If the destination instance specified in a restore request does not exist, the restore operation fails.
  • If you restore from a backup to a table in an SSD cluster and then delete the newly restored table, the table deletion might take a while to complete because Bigtable waits for table optimization to finish.

Copying

  • You can't create a copy of a backup that is within 24 hours of expiring.
  • You can't create a copy of a backup copy.

CMEK

  • A backup that is protected by CMEK must be restored to a new table in an instance that is CMEK-protected.
  • When you create a copy of a backup that is CMEK-protected, the destination cluster must also be CMEK-protected.
