Examples of replication configurations
This page describes some common use cases for Bigtable replication and presents the settings that you can use to support these use cases.
- Isolate batch analytics workloads from other applications
- Create high availability (HA)
- Provide near-real-time backup
- Maintain high availability and regional resilience
- Store data close to your users
This page also explains how to decide what settings to use for other use cases.
Before you read this page, you should be familiar with the overview of Bigtable replication.
Before you add clusters to an instance, you should be aware of the restrictions that apply when you change garbage collection policies on replicated tables.
In most cases, enable autoscaling for your instance's clusters. Autoscaling lets Bigtable automatically add and remove nodes in a cluster based on workload.
If you choose manual node allocation instead, provision enough nodes in every cluster in an instance to ensure that each cluster can handle replication in addition to the load it receives from applications. If a cluster does not have enough nodes, replication delay can increase, the cluster can experience performance issues due to memory buildup, and writes to other clusters in the instance might be rejected.
Examples in this document describe creating an instance, but you can also addclusters to an existing instance.
Isolate batch analytics workloads from other applications
When you use a single cluster to run a batch analytics job that performs numerous large reads alongside an application that performs a mix of reads and writes, the large batch job can slow things down for the application's users. With replication, you can use app profiles with single-cluster routing to route batch analytics jobs and application traffic to different clusters, so that batch jobs don't affect your applications' users.
1. Create an instance with two clusters.
2. Create two app profiles, one called `live-traffic` and another called `batch-analytics`. If your cluster IDs are `cluster-a` and `cluster-b`, the `live-traffic` app profile should route requests to `cluster-a` and the `batch-analytics` app profile should route requests to `cluster-b`. This configuration provides read-your-writes consistency for applications using the same app profile, but not for applications using different app profiles.

   You can enable single-row transactions in the `live-traffic` app profile if necessary. There's no need to enable single-row transactions in the `batch-analytics` app profile, assuming that you will only use this app profile for reads.
3. Use the `live-traffic` app profile to run a live-traffic workload.
4. While the live-traffic workload is running, use the `batch-analytics` app profile to run a read-only batch workload.
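The steps above can be sketched with the `gcloud` CLI. The instance ID, zones, and node counts below are illustrative assumptions, not values from this page:

```shell
# Create an instance with two clusters (IDs, zones, and node counts are illustrative).
gcloud bigtable instances create my-instance \
    --display-name="Batch isolation example" \
    --cluster-config=id=cluster-a,zone=us-central1-a,nodes=3 \
    --cluster-config=id=cluster-b,zone=us-central1-b,nodes=3

# App profile for application traffic, pinned to cluster-a.
# --transactional-writes enables single-row transactions for this profile.
gcloud bigtable app-profiles create live-traffic \
    --instance=my-instance \
    --route-to=cluster-a \
    --transactional-writes

# App profile for read-only batch analytics, pinned to cluster-b.
gcloud bigtable app-profiles create batch-analytics \
    --instance=my-instance \
    --route-to=cluster-b
```

Because each profile uses single-cluster routing, batch reads through `batch-analytics` consume resources only on `cluster-b`, while `cluster-a` continues serving application traffic.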
To isolate two smaller workloads from one larger workload:
Create an instance with three clusters.
These steps assume that your clusters use the IDs `cluster-a`, `cluster-b`, and `cluster-c`.

Create the following app profiles:

- `live-traffic-app-a`: Single-cluster routing from your application to `cluster-a`
- `live-traffic-app-b`: Single-cluster routing from your application to `cluster-b`
- `batch-analytics`: Single-cluster routing from the batch analytics job to `cluster-c`
Use the live-traffic app profiles to run live-traffic workloads.
While the live-traffic workloads are running, use the `batch-analytics` app profile to run a read-only batch workload.
Create high availability (HA)
If an instance has only one cluster, your data's durability and availability are limited to the zone where that cluster is located. Replication can improve both durability and availability by storing separate copies of your data in multiple zones or regions and automatically failing over between clusters if needed.
To configure your instance for a high availability (HA) use case, create a new app profile that uses multi-cluster routing, or update the default app profile to use multi-cluster routing. This configuration provides eventual consistency. You won't be able to enable single-row transactions because single-row transactions can cause data conflicts when you use multi-cluster routing.
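As a sketch, creating a multi-cluster routing app profile with the `gcloud` CLI might look like the following; the profile and instance names are illustrative:

```shell
# Create an app profile that uses multi-cluster routing.
# --route-any lets Bigtable route each request to the nearest
# available cluster and fail over automatically.
gcloud bigtable app-profiles create ha-profile \
    --instance=my-instance \
    --route-any
```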
Important: To ensure high availability during a regional failure, locate your client in or near more than one of the Google Cloud regions where your Bigtable clusters are. This strategy ensures that you are able to send and receive data to and from Bigtable even if a region that your client or application is in or near experiences a failure. This recommendation applies even if Google Cloud does not host your application, because your data enters the Google Cloud network through the Google Cloud region that is closest to your application server.

Configurations to improve availability include the following.
Clusters in three or more different regions (recommended configuration). The recommended configuration for HA is an instance that has N+2 clusters that are each in a different region. For example, if the minimum number of clusters that you need to serve your data is 2, then you need an instance with four clusters to maintain HA. This configuration provides uptime even in the rare case that two regions become unavailable. We recommend that you spread the clusters across multiple continents.
Example configuration:
- `cluster-a` in zone `us-central1-a` in Iowa
- `cluster-b` in zone `europe-west1-d` in Belgium
- `cluster-c` in zone `asia-east1-b` in Taiwan
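Creating an instance with this three-region layout might look like the following `gcloud` sketch; the instance ID and node counts are illustrative assumptions:

```shell
# One cluster per region, spread across three continents.
# --cluster-config is repeatable, once per cluster.
gcloud bigtable instances create my-ha-instance \
    --display-name="Three-region HA example" \
    --cluster-config=id=cluster-a,zone=us-central1-a,nodes=3 \
    --cluster-config=id=cluster-b,zone=europe-west1-d,nodes=3 \
    --cluster-config=id=cluster-c,zone=asia-east1-b,nodes=3
```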
Two clusters in the same region but different zones. This option provides high availability within a single region, the ability to fail over without generating cross-region replication costs, and no increased latency on failover. Your data in a replicated Bigtable instance is available as long as any of the zones it is replicated to are available.
For more information about region-specific considerations, see Geography and regions.
Example configuration:
- `cluster-a` in zone `australia-southeast1-a` in Sydney
- `cluster-b` in zone `australia-southeast1-b` in Sydney

Two clusters in different regions. This multi-region configuration provides high availability like the preceding multi-zone configuration, but your data is available even if you cannot connect to one of the regions.
You are charged for replicating writes between regions.
Example configuration:
- `cluster-a` in zone `asia-northeast1-c` in Tokyo
- `cluster-b` in zone `asia-east2-b` in Hong Kong

Two clusters in region A and a third cluster in region B. This option makes your data available even if you cannot connect to one of the regions, and it provides additional capacity in region A.
You are charged for replicating writes between regions. If you write to region A, you are charged once because you have only one cluster in region B. If you write to region B, you are charged twice because you have two clusters in region A.
Example configuration:
- `cluster-a` in zone `europe-west1-b` in Belgium
- `cluster-b` in zone `europe-west1-d` in Belgium
- `cluster-c` in zone `europe-north1-c` in Finland
Provide near-real-time backup
In some cases—for example, if you can't afford to read stale data—you'll always need to route requests to a single cluster. However, you can still use replication by handling requests with one cluster and keeping another cluster as a near-real-time backup. If the serving cluster becomes unavailable, you can minimize downtime by manually failing over to the backup cluster.
To configure your instance for this use case, create an app profile that uses single-cluster routing or update the default app profile to use single-cluster routing. The cluster that you specified in your app profile handles incoming requests. The other cluster acts as a backup in case you need to fail over. This arrangement is sometimes known as an active-passive configuration, and it provides both strong consistency and read-your-writes consistency. You can enable single-row transactions in the app profile if necessary.
To implement this configuration:

1. Use an app profile with single-cluster routing to run a workload.
2. Use the Google Cloud console to monitor the instance's clusters and confirm that only one cluster is handling incoming requests.

   The other cluster will still use CPU resources to perform replication and other maintenance tasks.
3. Update the app profile so that it points to the second cluster in your instance.

   You receive a warning about losing read-your-writes consistency, which also means that you lose strong consistency.

   If you enabled single-row transactions, you also receive a warning about the potential for data loss. You lose data if you send conflicting writes while the failover is occurring.
4. Continue to monitor your instance. You should see that the second cluster is handling incoming requests.
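The manual failover described above (repointing the app profile at the backup cluster) might look like the following with the `gcloud` CLI; the profile, instance, and cluster names are illustrative:

```shell
# Fail over by switching the app profile's single-cluster route
# from cluster-a to the backup cluster-b.
# --force bypasses the interactive warnings about losing
# read-your-writes consistency (and potential data loss if
# single-row transactions are enabled).
gcloud bigtable app-profiles update my-app-profile \
    --instance=my-instance \
    --route-to=cluster-b \
    --force
```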
Maintain high availability and regional resilience
Let's say you have concentrations of customers in two distinct regions within a continent. You want to serve each concentration of customers with Bigtable clusters as close to the customers as possible. You want your data to be highly available within each region, and you might want a failover option if one or more of your clusters is not available.
For this use case, you can create an instance with two clusters in region A and two clusters in region B. This configuration provides high availability even if you cannot connect to a Google Cloud region. It also provides regional resilience because even if a zone becomes unavailable, the other cluster in that zone's region is still available.
You can choose to use multi-cluster routing or single-cluster routing for thisuse case, depending on your business needs.
To configure your instance for this use case:
Create a Bigtable instance with four clusters: two in region A and two in region B. Clusters in the same region must be in different zones.
Example configuration:
- `cluster-a` in zone `asia-south1-a` in Mumbai
- `cluster-b` in zone `asia-south1-c` in Mumbai
- `cluster-c` in zone `asia-northeast1-a` in Tokyo
- `cluster-d` in zone `asia-northeast1-b` in Tokyo
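A `gcloud` sketch of this four-cluster layout, with an illustrative instance ID and node counts:

```shell
# Two clusters per region, each pair in different zones.
gcloud bigtable instances create my-resilient-instance \
    --display-name="Two regions, two zones each" \
    --cluster-config=id=cluster-a,zone=asia-south1-a,nodes=3 \
    --cluster-config=id=cluster-b,zone=asia-south1-c,nodes=3 \
    --cluster-config=id=cluster-c,zone=asia-northeast1-a,nodes=3 \
    --cluster-config=id=cluster-d,zone=asia-northeast1-b,nodes=3
```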
Place an application server near each region.
You can choose to use multi-cluster routing or single-cluster routing for this use case, depending on your business needs. If you use multi-cluster routing, Bigtable handles failovers automatically. If you use single-cluster routing, you use your own judgment to decide when to fail over to a different cluster.
Single-cluster routing option
You can use single-cluster routing for this use case if you don't want your Bigtable cluster to automatically fail over if a zone or region becomes unavailable. This option is a good choice if you want to manage the costs and latency that might occur if Bigtable starts routing traffic to and from a distant region, or if you prefer to make failover decisions based on your own judgment or business rules.
To implement this configuration, create at least one app profile that uses single-cluster routing for each application that sends requests to the instance. You can route the app profiles to any cluster in the Bigtable instance. For example, if you have three applications running in Mumbai and six in Tokyo, you can configure one app profile for the Mumbai applications to route to `asia-south1-a` and two that route to `asia-south1-c`. For the Tokyo applications, configure three app profiles that route to `asia-northeast1-a` and three that route to `asia-northeast1-b`.
With this configuration, if one or more clusters become unavailable, you can perform a manual failover or choose to let your data be temporarily unavailable in that zone until the zone is available again.
Multi-cluster routing option
If you're implementing this use case and you want Bigtable to automatically fail over to one region if your application cannot reach the other region, use multi-cluster routing.
To implement this configuration, create a new app profile that uses multi-cluster routing for each application, or update the default app profile to use multi-cluster routing.
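For example, updating the default app profile to multi-cluster routing with the `gcloud` CLI might look like this (the instance name is illustrative):

```shell
# Switch the default app profile to multi-cluster routing.
# --force bypasses the confirmation warning for the routing change.
gcloud bigtable app-profiles update default \
    --instance=my-instance \
    --route-any \
    --force
```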
This configuration provides eventual consistency. If a region becomes unavailable, Bigtable requests are automatically sent to the other region. When this happens, you are charged for the network traffic to the other region, and your application might experience higher latency because of the greater distance.
Store data close to your users
If you have users around the globe, you can reduce latency by running your application near your users and putting your data as close to your application as possible. With Bigtable, you can create an instance that has clusters in several Google Cloud regions, and your data is automatically replicated in each region.
For this use case, use app profiles with single-cluster routing. Multi-cluster routing is undesirable for this use case because of the distance between clusters. If a cluster becomes unavailable and its multi-cluster app profile automatically reroutes traffic across a great distance, your application might experience unacceptable latency and incur unexpected, additional network costs.
To configure your instance for this use case:
Create an instance with clusters in three distinct geographic regions, such as the United States, Europe, and Asia.
Place an application server near each region.
Create app profiles similar to the following:
- `clickstream-us`: Single-cluster routing to the cluster in the United States
- `clickstream-eu`: Single-cluster routing to the cluster in Europe
- `clickstream-asia`: Single-cluster routing to the cluster in Asia

In this setup, your application uses the app profile for the closest cluster. Writes to any cluster are automatically replicated to the other two clusters.
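Creating these three profiles with the `gcloud` CLI might look like the following; the instance and cluster IDs are illustrative assumptions:

```shell
# One single-cluster routing profile per geographic region.
gcloud bigtable app-profiles create clickstream-us \
    --instance=my-global-instance --route-to=cluster-us

gcloud bigtable app-profiles create clickstream-eu \
    --instance=my-global-instance --route-to=cluster-eu

gcloud bigtable app-profiles create clickstream-asia \
    --instance=my-global-instance --route-to=cluster-asia
```

Each application server then passes its regional profile ID when it connects, so requests stay on the nearest cluster while replication keeps all three copies in sync.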
Other use cases
If you have a use case that isn't described on this page, use the following questions to help you decide how to configure your app profiles:
Do you need to perform single-row transactions, such as read-modify-write operations (including increments and appends) and check-and-mutate operations (also known as conditional mutations or conditional writes)?
If so, your app profiles must use single-cluster routing to prevent data loss, and you must handle failovers manually.
Important: If you have additional app profiles that point to other clusters, it's best to disable single-row transactions in the additional app profiles. If you enable single-row transactions in multiple app profiles, you must ensure that you don't send conflicting writes to your clusters, which could result in data loss. Learn more about conflicting writes.

Do you want Bigtable to handle failovers automatically?
If so, your app profiles must use multi-cluster routing. If a cluster can't process an incoming request, Bigtable automatically fails over to the other clusters. Learn more about automatic failovers.
To prevent data loss, you can't enable single-row transactions when you use multi-cluster routing. Learn more.
Do you want to maintain a backup or spare cluster in case your primary cluster is not available?
If so, use single-cluster routing in your app profiles, and fail over to the backup cluster manually if necessary.
This configuration also makes it possible to use single-row transactions if necessary.
Do you want to send different kinds of traffic to different clusters?
If so, use single-cluster routing in your app profiles, and direct each type of traffic to its own cluster. Fail over between clusters manually if necessary.
You can enable single-row transactions in your app profiles if necessary.
Important: If you enable single-row transactions in multiple app profiles, you must ensure that you don't send conflicting writes to your clusters, which could result in data loss. Learn more about conflicting writes.
What's next
- Learn more about app profiles.
- Create an app profile or update an existing app profile.
- Find out how failovers work.
Except as otherwise noted, the content of this page is licensed under theCreative Commons Attribution 4.0 License, and code samples are licensed under theApache 2.0 License. For details, see theGoogle Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2026-02-19 UTC.