Instances, clusters, and nodes

To use Bigtable, you create instances, which contain clusters that your applications can connect to. Each cluster contains nodes, the compute units that manage your data and perform maintenance tasks.

This page provides more information about Bigtable instances, clusters, and nodes.

Before you read this page, you should be familiar with the overview of Bigtable.

Instances

A Bigtable instance is a container for your data. Instances have one or more clusters, located in different zones. Each cluster has at least 1 node.

A table belongs to an instance, not to a cluster or node. If you have an instance with more than one cluster, you are using replication. This means you can't assign a table to an individual cluster or create unique garbage collection policies for each cluster in an instance. You also can't make each cluster store a different set of data in the same table.
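
As a brief illustration, here is a minimal sketch using the Python client for Bigtable (the project, instance, and table IDs are placeholders): the table and its garbage collection policy are created against the instance, never against an individual cluster.

    from google.cloud import bigtable
    from google.cloud.bigtable import column_family

    client = bigtable.Client(project="my-project", admin=True)
    instance = client.instance("my-instance")

    # The table, including its garbage collection policy, belongs to
    # the instance as a whole; there is no per-cluster variant of it.
    table = instance.table("my-table")
    table.create(column_families={"cf1": column_family.MaxVersionsGCRule(2)})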

An instance has a few important properties that you need to know about:

  • The storage type (SSD or HDD)
  • The application profiles, which are primarily for instances that use replication

The following sections describe these properties.

Storage types

When you create an instance, you must choose whether the instance's clusters will store data on solid-state drives (SSD) or hard disk drives (HDD). SSD is often, but not always, the most efficient and cost-effective choice.

The choice between SSD and HDD is permanent, and every cluster in your instance must use the same type of storage, so make sure you pick the right storage type for your use case. See Choosing between SSD and HDD storage for more information to help you decide.
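
For example, here is a hedged sketch of creating an SSD instance with the Python client (the IDs, zone, and labels are placeholders); the storage type is specified when you define the instance's first cluster and applies to every cluster you add later.

    from google.cloud import bigtable
    from google.cloud.bigtable import enums

    client = bigtable.Client(project="my-project", admin=True)
    instance = client.instance(
        "my-instance",
        instance_type=enums.Instance.Type.PRODUCTION,
        labels={"env": "prod"},
    )

    # The storage type is fixed at creation time and must be the same
    # for every cluster in the instance.
    cluster = instance.cluster(
        "my-cluster",
        location_id="us-east1-b",
        serve_nodes=3,
        default_storage_type=enums.StorageType.SSD,
    )

    operation = instance.create(clusters=[cluster])
    operation.result(timeout=300)  # wait for the long-running operation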

If you need to store historical data for reasons such as regulatory requirements, use Bigtable infrequent access storage as part of tiered storage (Preview). This option is available for SSD instances.

Application profiles

After you create an instance, Bigtable uses the instance to store application profiles, or app profiles. For instances that use replication, app profiles control how your applications connect to the instance's clusters.

If your instance doesn't use replication, you can still use app profiles to provide separate identifiers for each of your applications, or each function within an application. You can then view separate charts for each app profile in the Google Cloud console.
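
For instance, the following sketch (Python client, placeholder IDs) creates an app profile with single-cluster routing and then opens a table through it, so that requests from this code path show up under their own identifier in monitoring charts.

    from google.cloud import bigtable
    from google.cloud.bigtable import enums

    client = bigtable.Client(project="my-project", admin=True)
    instance = client.instance("my-instance")

    # An app profile that routes all requests to one specific cluster.
    app_profile = instance.app_profile(
        "batch-ingest",
        routing_policy_type=enums.RoutingPolicyType.SINGLE,
        description="Nightly batch writes",
        cluster_id="my-cluster",
    )
    app_profile.create(ignore_warnings=True)

    # Requests sent through this table handle are attributed to the
    # "batch-ingest" profile.
    table = instance.table("my-table", app_profile_id="batch-ingest")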

To learn more about app profiles, see application profiles. To learn how to set up your instance's app profiles, see Configuring app profiles.

Clusters

A cluster represents the Bigtable service in a specific location. Each cluster belongs to a single Bigtable instance, and an instance can have clusters in up to 8 regions. When your application sends requests to a Bigtable instance, those requests are handled by one of the clusters in the instance.

Each cluster is located in a single zone, and each zone in a region can contain only one cluster. For example, if an instance has a cluster in us-east1-b, you can add a cluster in a different zone in the same region, such as us-east1-c, or in a zone in a separate region, such as europe-west2-a.
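
As an illustrative sketch (Python client, placeholder IDs), adding a cluster in europe-west2-a to an instance that already has one in us-east1-b might look like this; adding a second cluster also turns on replication, as described below.

    from google.cloud import bigtable
    from google.cloud.bigtable import enums

    client = bigtable.Client(project="my-project", admin=True)
    instance = client.instance("my-instance")

    # A second cluster in a zone of a different region. Its storage
    # type must match the rest of the instance.
    new_cluster = instance.cluster(
        "my-cluster-eu",
        location_id="europe-west2-a",
        serve_nodes=3,
        default_storage_type=enums.StorageType.SSD,
    )
    operation = new_cluster.create()
    operation.result(timeout=300)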

The number of clusters that you can create in an instance depends on the number of available zones in the regions that you choose. For example, if you create clusters in 8 regions that have 3 zones each, the maximum number of clusters that the instance can have is 24. For a list of zones and regions where Bigtable is available, see Bigtable locations.

Bigtable instances that have only 1 cluster don't use replication. If you add a second cluster to an instance, Bigtable automatically starts replicating your data by keeping separate copies of the data in each of the clusters' zones and synchronizing updates between the copies. You can choose which cluster your applications connect to, which makes it possible to isolate different types of traffic from one another. You can also let Bigtable balance traffic between clusters. If a cluster becomes unavailable, you can fail over from one cluster to another. To learn more about how replication works, see the replication overview.

In most cases, you should enable autoscaling for a cluster, so that Bigtable adds and removes nodes as needed to handle the cluster's workloads.
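
As a minimal sketch, assuming a recent version of the Python client that accepts autoscaling parameters (min_serve_nodes, max_serve_nodes, and cpu_utilization_percent; the IDs and limits here are placeholders), creating a cluster with autoscaling enabled might look like this:

    from google.cloud import bigtable

    client = bigtable.Client(project="my-project", admin=True)
    instance = client.instance("my-instance")

    # With autoscaling, you set bounds and a CPU target instead of a
    # fixed node count; Bigtable resizes the cluster within the bounds.
    cluster = instance.cluster(
        "my-cluster",
        location_id="us-east1-b",
        min_serve_nodes=1,
        max_serve_nodes=10,
        cpu_utilization_percent=60,
    )
    operation = cluster.create()
    operation.result(timeout=300)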

When you create a cluster, you can enable 2x node scaling, a configuration that sets the cluster to always scale in increments of two nodes. For more information, see Node scaling factor.

Nodes

Each cluster in an instance has 1 or more nodes, which are compute resources that Bigtable uses to manage your data.

Behind the scenes, Bigtable splits all of the data in a table into separate tablets. Tablets are stored on disk, separate from the nodes but in the same zone as the nodes. A tablet is associated with a single node.

Each node is responsible for:

  • Keeping track of specific tablets on disk.
  • Handling incoming reads and writes for its tablets.
  • Performing maintenance tasks on its tablets, such as periodic compactions.

A cluster must have enough nodes to support its current workload and the amount of data it stores. Otherwise, the cluster might not be able to handle incoming requests, and latency could go up. Monitor your clusters' CPU and disk usage, and add nodes to an instance when its metrics exceed the recommendations at Plan your capacity.
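
One way to keep an eye on this is to read the per-cluster CPU load metric through the Cloud Monitoring API. The following sketch (Python, placeholder project ID) queries the standard bigtable.googleapis.com/cluster/cpu_load metric for the last hour and prints each cluster's most recent value:

    import time
    from google.cloud import monitoring_v3

    client = monitoring_v3.MetricServiceClient()
    now = time.time()
    interval = monitoring_v3.TimeInterval(
        {
            "end_time": {"seconds": int(now)},
            "start_time": {"seconds": int(now - 3600)},
        }
    )

    # One time series per cluster; points are returned newest first.
    results = client.list_time_series(
        request={
            "name": "projects/my-project",
            "filter": 'metric.type = "bigtable.googleapis.com/cluster/cpu_load"',
            "interval": interval,
            "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
        }
    )
    for series in results:
        cluster = series.resource.labels["cluster"]
        latest = series.points[0].value.double_value
        print(f"{cluster}: CPU load {latest:.0%}")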

For more details about how Bigtable stores and manages data, see Bigtable architecture.

For high-throughput read jobs, you can use Data Boost for Bigtable for compute instead of your cluster's nodes. Data Boost lets you send large read jobs and queries using serverless compute while your core application continues using cluster nodes for compute. For more information, see Data Boost overview.

For workloads of historical data that you don't need to access often, you can use the infrequent access tier in SSD instances.

Nodes for replicated clusters

When your instance has more than one cluster, failover becomes a consideration when you configure the maximum number of nodes for autoscaling or manually allocate the nodes.

  • If you use multi-cluster routing in any of your app profiles, automatic failover can occur in the event that one or more clusters are unavailable.

  • When you manually fail over from one cluster to another, or when automatic failover occurs, the receiving cluster should ideally have enough capacity to support the load. You can either always allocate enough nodes to support failover, which can be costly, or you can rely on autoscaling to add nodes when traffic fails over, but be aware that there might be a brief impact on performance while the cluster scales up.

  • If all of your app profiles use single-cluster routing, each cluster can have a different number of nodes. Resize each cluster as needed based on the cluster's workload.

    Because Bigtable stores a separate copy of your data with each cluster, each cluster must always have enough nodes to support your disk usage and to replicate writes between clusters.

    You can still fail over manually from one cluster to another if necessary. However, if one cluster has many more nodes than another, and you need to fail over to the cluster with fewer nodes, you might need to add nodes first. There is no guarantee that additional nodes will be available when you need to fail over; the only way to reserve nodes in advance is to add them to your cluster. A minimal sketch of this sequence follows this list.
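
As a hedged sketch of the resize-then-fail-over sequence (Python client; the IDs, profile name, and node counts are placeholders), you would first grow the receiving cluster and then repoint a single-cluster app profile at it:

    from google.cloud import bigtable
    from google.cloud.bigtable import enums

    client = bigtable.Client(project="my-project", admin=True)
    instance = client.instance("my-instance")

    # 1. Scale up the receiving cluster so it can absorb the traffic.
    cluster = instance.cluster("my-cluster-eu")
    cluster.reload()          # fetch the current configuration
    cluster.serve_nodes = 10  # illustrative target size
    cluster.update().result(timeout=300)

    # 2. Repoint the single-cluster app profile at the receiving cluster.
    app_profile = instance.app_profile("my-app-profile")
    app_profile.reload()
    app_profile.routing_policy_type = enums.RoutingPolicyType.SINGLE
    app_profile.cluster_id = "my-cluster-eu"
    app_profile.update(ignore_warnings=True)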
