Examples of replication configurations

This page describes some common use cases for Bigtable replication and presents the settings that you can use to support these use cases.

This page also explains how to decide what settings to use for other use cases.

Before you read this page, you should be familiar with the overview of Bigtable replication.

Before you add clusters to an instance, you should be aware of the restrictions that apply when you change garbage collection policies on replicated tables.

In most cases, enable autoscaling for your instance's clusters. Autoscaling lets Bigtable automatically add nodes to and remove nodes from a cluster based on workload.

If you choose manual node allocation instead, provision enough nodes in every cluster in an instance to ensure that each cluster can handle replication in addition to the load it receives from applications. If a cluster does not have enough nodes, replication delay can increase, the cluster can experience performance issues due to memory buildup, and writes to other clusters in the instance might be rejected.

Examples in this document describe creating an instance, but you can also addclusters to an existing instance.
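
For example, you can create a two-cluster instance with autoscaling by using the gcloud CLI. The following is a minimal sketch; the instance ID, cluster IDs, zones, and autoscaling bounds are placeholders to adapt to your project:

    gcloud bigtable instances create my-instance \
        --display-name="My replicated instance" \
        --cluster-config=id=cluster-a,zone=us-east1-b,autoscaling-min-nodes=1,autoscaling-max-nodes=5,autoscaling-cpu-target=60 \
        --cluster-config=id=cluster-b,zone=us-west1-a,autoscaling-min-nodes=1,autoscaling-max-nodes=5,autoscaling-cpu-target=60

    # To add a cluster to an existing instance instead:
    gcloud bigtable clusters create cluster-c \
        --instance=my-instance \
        --zone=us-central1-b \
        --autoscaling-min-nodes=1 \
        --autoscaling-max-nodes=5 \
        --autoscaling-cpu-target=60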

Isolate batch analytics workloads from other applications

When you use a single cluster to run a batch analytics job that performs numerous large reads alongside an application that performs a mix of reads and writes, the large batch job can slow things down for the application's users. With replication, you can use app profiles with single-cluster routing to route batch analytics jobs and application traffic to different clusters, so that batch jobs don't affect your applications' users.

To isolate two workloads:

  1. Create an instance with two clusters.

  2. Create two app profiles, one called live-traffic and another called batch-analytics. (A gcloud CLI sketch of these profiles follows these steps.)

    If your cluster IDs are cluster-a and cluster-b, the live-traffic app profile should route requests to cluster-a, and the batch-analytics app profile should route requests to cluster-b. This configuration provides read-your-writes consistency for applications using the same app profile, but not for applications using different app profiles.

    You can enable single-row transactions in the live-traffic app profile if necessary. There's no need to enable single-row transactions in the batch-analytics app profile, assuming that you will only use this app profile for reads.

  3. Use the live-traffic app profile to run a live-traffic workload.

  4. While the live-traffic workload is running, use the batch-analytics app profile to run a read-only batch workload.
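
A sketch of step 2 with the gcloud CLI, assuming an instance named my-instance and the cluster IDs above; include --transactional-writes on live-traffic only if you need single-row transactions:

    gcloud bigtable app-profiles create live-traffic \
        --instance=my-instance \
        --route-to=cluster-a \
        --transactional-writes \
        --description="Application reads and writes"

    gcloud bigtable app-profiles create batch-analytics \
        --instance=my-instance \
        --route-to=cluster-b \
        --description="Read-only batch analytics jobs"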

To isolate two smaller workloads from one larger workload:

  1. Create an instance with three clusters.

    These steps assume that your clusters use the IDs cluster-a, cluster-b, and cluster-c.

  2. Create the following app profiles (a gcloud CLI sketch follows these steps):

    • live-traffic-app-a: Single-cluster routing from your application to cluster-a
    • live-traffic-app-b: Single-cluster routing from your application to cluster-b
    • batch-analytics: Single-cluster routing from the batch analytics job to cluster-c
  3. Use the live-traffic app profiles to run live-traffic workloads.

  4. While the live-traffic workloads are running, use the batch-analytics app profile to run a read-only batch workload.
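
For example, assuming an instance named my-instance:

    gcloud bigtable app-profiles create live-traffic-app-a \
        --instance=my-instance --route-to=cluster-a
    gcloud bigtable app-profiles create live-traffic-app-b \
        --instance=my-instance --route-to=cluster-b
    gcloud bigtable app-profiles create batch-analytics \
        --instance=my-instance --route-to=cluster-c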

Create high availability (HA)

If an instance has only one cluster, your data's durability and availability are limited to the zone where that cluster is located. Replication can improve both durability and availability by storing separate copies of your data in multiple zones or regions and automatically failing over between clusters if needed.

To configure your instance for a high availability (HA) use case, create a new app profile that uses multi-cluster routing, or update the default app profile to use multi-cluster routing. This configuration provides eventual consistency. You won't be able to enable single-row transactions, because single-row transactions can cause data conflicts when you use multi-cluster routing.
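
For example, with the gcloud CLI (the ha-traffic profile ID and instance ID are placeholders; --force bypasses the confirmation warning when you change the routing policy of an existing profile):

    # Create a new multi-cluster routing app profile...
    gcloud bigtable app-profiles create ha-traffic \
        --instance=my-instance \
        --route-any

    # ...or switch the default app profile to multi-cluster routing.
    gcloud bigtable app-profiles update default \
        --instance=my-instance \
        --route-any \
        --force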

Important: To ensure high availability during a regional failure, locate your client in or near more than one of the Google Cloud regions where your Bigtable clusters are. This strategy ensures that you can send and receive data to and from Bigtable even if a region that your client or application is in or near experiences a failure. This recommendation applies even if Google Cloud does not host your application, because your data enters the Google Cloud network through the Google Cloud region that is closest to your application server.

Configurations to improve availability include the following.

Provide near-real-time backup

In some cases (for example, if you can't afford to read stale data), you'll always need to route requests to a single cluster. However, you can still use replication by handling requests with one cluster and keeping another cluster as a near-real-time backup. If the serving cluster becomes unavailable, you can minimize downtime by manually failing over to the backup cluster.

To configure your instance for this use case, create an app profile that uses single-cluster routing, or update the default app profile to use single-cluster routing. The cluster that you specify in the app profile handles incoming requests. The other cluster acts as a backup in case you need to fail over. This arrangement is sometimes known as an active-passive configuration, and it provides both strong consistency and read-your-writes consistency. You can enable single-row transactions in the app profile if necessary.

To implement this configuration:

  1. Use an app profile with single-cluster routing to run a workload.

  2. Use the Google Cloud console to monitor the instance's clusters and confirm that only one cluster is handling incoming requests.

    The other cluster will still use CPU resources to perform replication and other maintenance tasks.

  3. Update the app profile so that it points to the second cluster in your instance. (A gcloud CLI sketch of this failover follows these steps.)

    You receive a warning about losing read-your-writes consistency, which also means that you lose strong consistency.

    If you enabled single-row transactions, you also receive a warning about the potential for data loss. You lose data if you send conflicting writes while the failover is occurring.

  4. Continue to monitor your instance. You should see that the second cluster is handling incoming requests.
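
Step 3 might look like the following, assuming an app profile named live-traffic that currently routes to cluster-a; the --force flag acknowledges the warnings described above:

    gcloud bigtable app-profiles update live-traffic \
        --instance=my-instance \
        --route-to=cluster-b \
        --force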

Maintain high availability and regional resilience

Let's say you have concentrations of customers in two distinct regions within a continent. You want to serve each concentration of customers with Bigtable clusters as close to the customers as possible. You want your data to be highly available within each region, and you might want a failover option if one or more of your clusters is not available.

For this use case, you can create an instance with two clusters in region A and two clusters in region B. This configuration provides high availability even if you cannot connect to a Google Cloud region. It also provides regional resilience, because even if a zone becomes unavailable, the other cluster in that zone's region is still available.

To configure your instance for this use case:

  1. Create a Bigtable instance with four clusters: two in region A and two in region B. Clusters in the same region must be in different zones. (A gcloud CLI sketch follows these steps.)

    Example configuration:

    • cluster-a in zone asia-south1-a in Mumbai
    • cluster-b in zone asia-south1-c in Mumbai
    • cluster-c in zone asia-northeast1-a in Tokyo
    • cluster-d in zone asia-northeast1-b in Tokyo
  2. Place an application server near each region.
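
A sketch of step 1 with the gcloud CLI; manual node allocation is shown for brevity, and the autoscaling keys from the earlier sketch work here too:

    gcloud bigtable instances create my-instance \
        --display-name="Mumbai and Tokyo" \
        --cluster-config=id=cluster-a,zone=asia-south1-a,nodes=3 \
        --cluster-config=id=cluster-b,zone=asia-south1-c,nodes=3 \
        --cluster-config=id=cluster-c,zone=asia-northeast1-a,nodes=3 \
        --cluster-config=id=cluster-d,zone=asia-northeast1-b,nodes=3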

You can choose to use multi-cluster routing or single-cluster routing for this use case, depending on your business needs. If you use multi-cluster routing, Bigtable handles failovers automatically. If you use single-cluster routing, you use your own judgment to decide when to fail over to a different cluster.

Single-cluster routing option

You can use single-cluster routing for this use case if you don't want your Bigtable cluster to automatically fail over if a zone or region becomes unavailable. This option is a good choice if you want to manage the costs and latency that might occur if Bigtable starts routing traffic to and from a distant region, or if you prefer to make failover decisions based on your own judgment or business rules.

To implement this configuration, create at least one app profile that uses single-cluster routing for each application that sends requests to the instance. You can route the app profiles to any cluster in the Bigtable instance. For example, if you have three applications running in Mumbai and six in Tokyo, you can configure one app profile for the Mumbai applications to route to the cluster in asia-south1-a and two that route to the cluster in asia-south1-c. For the Tokyo applications, configure three app profiles that route to the cluster in asia-northeast1-a and three that route to the cluster in asia-northeast1-b.
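
Two representative profiles from that example, using the cluster IDs defined earlier (the profile IDs are placeholders):

    # One of the three Mumbai app profiles:
    gcloud bigtable app-profiles create mumbai-app-1 \
        --instance=my-instance --route-to=cluster-a

    # One of the six Tokyo app profiles:
    gcloud bigtable app-profiles create tokyo-app-1 \
        --instance=my-instance --route-to=cluster-c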

Note: Multiple app profiles can route to the same cluster, and an instance can have clusters that no app profiles route requests to.

With this configuration, if one or more clusters become unavailable, you can perform a manual failover or choose to let your data be temporarily unavailable in that zone until the zone is available again.

Multi-cluster routing option

If you're implementing this use case and you want Bigtable to automatically fail over to one region if your application cannot reach the other region, use multi-cluster routing.

To implement this configuration, create a new app profile that uses multi-cluster routing for each application, or update the default app profile to use multi-cluster routing.
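
For example, a per-application multi-cluster routing profile (the profile ID is a placeholder):

    gcloud bigtable app-profiles create mumbai-app-1-ha \
        --instance=my-instance \
        --route-any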

This configuration provides eventual consistency. If a region becomes unavailable, Bigtable requests are automatically sent to the other region. When this happens, you are charged for the network traffic to the other region, and your application might experience higher latency because of the greater distance.

Store data close to your users

If you have users around the globe, you can reduce latency by running your application near your users and putting your data as close to your application as possible. With Bigtable, you can create an instance that has clusters in several Google Cloud regions, and your data is automatically replicated in each region.

For this use case, use app profiles with single-cluster routing. Multi-cluster routing is undesirable for this use case because of the distance between clusters. If a cluster becomes unavailable and its multi-cluster app profile automatically reroutes traffic across a great distance, your application might experience unacceptable latency and incur unexpected, additional network costs.

To configure your instance for this use case:

  1. Create an instance with clusters in three distinct geographic regions, such as the United States, Europe, and Asia.

  2. Place an application server near each region.

  3. Create app profiles similar to the following:

    • clickstream-us: Single-cluster routing to the cluster in the United States
    • clickstream-eu: Single-cluster routing to the cluster in Europe
    • clickstream-asia: Single-cluster routing to the cluster in Asia

In this setup, your application uses the app profile for the closest cluster. Writes to any cluster are automatically replicated to the other two clusters.
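
For example, assuming the clusters use the placeholder IDs cluster-us, cluster-eu, and cluster-asia:

    gcloud bigtable app-profiles create clickstream-us \
        --instance=my-instance --route-to=cluster-us
    gcloud bigtable app-profiles create clickstream-eu \
        --instance=my-instance --route-to=cluster-eu
    gcloud bigtable app-profiles create clickstream-asia \
        --instance=my-instance --route-to=cluster-asia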

Other use cases

If you have a use case that isn't described on this page, consider the same questions that shape the configurations above to help you decide how to configure your app profiles: whether you need single-cluster or multi-cluster routing, what consistency guarantees your applications require, and how you want failovers to be handled.
