Modify an instance
After you create a Bigtable instance, you can update the following settings without any downtime:
- Autoscaling settings. You can enable or disable autoscaling for an instance's clusters, or configure the settings for clusters that already have autoscaling enabled.
- The number of nodes in manually scaled clusters. After you add or remove nodes, it typically takes a few minutes under load for Bigtable to optimize the cluster's performance.
- The number of clusters in the instance. After you add a cluster, it takes time for Bigtable to replicate your data to the new cluster. New clusters are replicated from the geographically nearest cluster in the instance. In general, the greater the distance, the longer replication takes.
- The application profiles for the instance, which contain replication settings.
- The labels for the instance, which provide metadata about the instance.
- The display name for the instance.
You can change a cluster ID only by deleting and recreating the cluster.
To change any of the following, you must create a new instance with your preferred settings, export your data from the old instance, import your data into the new instance, and then delete the old instance.
Instance ID
Storage type (SSD or HDD)
Customer-managed encryption key (CMEK) configuration
Before you begin
If you want to use the command-line interfaces for Bigtable, install the Google Cloud CLI and the cbt CLI if you haven't already.
Configure autoscaling
You can enable or disable autoscaling for any existing cluster. You can also change the CPU utilization target, minimum number of nodes, and maximum number of nodes for a cluster. For guidance on choosing your autoscaling settings, see Autoscaling. You cannot use the cbt CLI to configure autoscaling.
Enable autoscaling
Console
Open the list of Bigtable instances in the Google Cloud console.
Click the instance you want to change, then click Edit instance.
Under Configure clusters, click Edit for the cluster that you want to update.
Select Autoscaling.
Enter values for the following:
- Minimum number of nodes
- Maximum number of nodes
- CPU utilization target
- Storage utilization target
Click Save.
gcloud
If you don't know the instance ID, use the bigtable instances list command to view a list of your project's instances:

gcloud bigtable instances list

If you don't know the instance's cluster IDs, use the bigtable clusters list command to view a list of clusters in the instance:

gcloud bigtable clusters list --instances=INSTANCE_ID

Replace INSTANCE_ID with the permanent identifier for the instance.

Use the bigtable clusters update command to enable autoscaling:

gcloud bigtable clusters update CLUSTER_ID \
    --instance=INSTANCE_ID \
    --autoscaling-max-nodes=AUTOSCALING_MAX_NODES \
    --autoscaling-min-nodes=AUTOSCALING_MIN_NODES \
    --autoscaling-cpu-target=AUTOSCALING_CPU_TARGET \
    --autoscaling-storage-target=AUTOSCALING_STORAGE_TARGET

Provide the following:

- CLUSTER_ID: The permanent identifier for the cluster.
- INSTANCE_ID: The permanent identifier for the instance.
- AUTOSCALING_MAX_NODES: The maximum number of nodes.
- AUTOSCALING_MIN_NODES: The minimum number of nodes.
- AUTOSCALING_CPU_TARGET: The CPU utilization target percentage that Bigtable maintains by adding or removing nodes. This value must be from 10 to 80.
- AUTOSCALING_STORAGE_TARGET: The storage utilization target in GiB per node that Bigtable maintains by adding or removing nodes.

In many cases, each cluster in an instance should have the same number of nodes, but there are exceptions. Learn about nodes and replication.
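As a concrete sketch of the command above — the instance ID, cluster ID, and setting values here are hypothetical — the following snippet checks the documented constraints locally, then builds the command. It prints the command instead of running it; remove the final echo and run the string directly to execute it against a real project:

```shell
# Hypothetical example values; replace with your own IDs and settings.
CLUSTER_ID=my-cluster-c1
INSTANCE_ID=my-instance
MIN_NODES=1
MAX_NODES=5
CPU_TARGET=60        # documented range: 10 to 80
STORAGE_TARGET=2560  # GiB per node

# Check the documented constraints before calling gcloud.
if [ "$CPU_TARGET" -lt 10 ] || [ "$CPU_TARGET" -gt 80 ]; then
  echo "CPU target must be from 10 to 80" >&2
  exit 1
fi
if [ "$MIN_NODES" -gt "$MAX_NODES" ]; then
  echo "minimum nodes must not exceed maximum nodes" >&2
  exit 1
fi

# Build the full gcloud invocation as a string and print it.
CMD="gcloud bigtable clusters update $CLUSTER_ID --instance=$INSTANCE_ID --autoscaling-min-nodes=$MIN_NODES --autoscaling-max-nodes=$MAX_NODES --autoscaling-cpu-target=$CPU_TARGET --autoscaling-storage-target=$STORAGE_TARGET"
echo "$CMD"
```

Validating the CPU target and the min/max relationship locally gives a faster failure than waiting for the API to reject the request.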
Disable autoscaling
Console
Open the list of Bigtable instances in the Google Cloud console.
Click the instance you want to change, then click Edit instance.
Under Configure clusters, click Edit for the cluster that you want to update.
Select Manual node allocation.
Enter the number of nodes for the cluster in the Quantity field.
In many cases, each cluster in an instance should have the same number of nodes, but there are exceptions. Learn about nodes and replication.
Click Save.
gcloud
If you don't know the instance ID, use the bigtable instances list command to view a list of your project's instances:

gcloud bigtable instances list

If you don't know the instance's cluster IDs, use the bigtable clusters list command to view a list of clusters in the instance:

gcloud bigtable clusters list --instances=INSTANCE_ID

Replace INSTANCE_ID with the permanent identifier for the instance.

Use the bigtable clusters update command to disable autoscaling and configure a constant number of nodes:

gcloud bigtable clusters update CLUSTER_ID \
    --instance=INSTANCE_ID \
    --num-nodes=NUM_NODES \
    --disable-autoscaling

Provide the following:

- CLUSTER_ID: The permanent identifier for the cluster.
- INSTANCE_ID: The permanent identifier for the instance.
- NUM_NODES: This field is optional. If no value is set, Bigtable automatically allocates nodes based on your data footprint and optimizes for 50% storage utilization. If you want to control the number of nodes in a cluster, set the NUM_NODES value. Ensure that the number of nodes is set to a nonzero value.

In many cases, each cluster in an instance should have the same number of nodes, but there are exceptions. Learn about nodes and replication.
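A concrete sketch of the command above, with hypothetical IDs and a node count; it validates the nonzero-node-count requirement locally and prints the command rather than executing it:

```shell
# Hypothetical example values.
CLUSTER_ID=my-cluster-c1
INSTANCE_ID=my-instance
NUM_NODES=3

# When you set --num-nodes, the value must be nonzero.
if [ "$NUM_NODES" -le 0 ]; then
  echo "number of nodes must be a nonzero value" >&2
  exit 1
fi

CMD="gcloud bigtable clusters update $CLUSTER_ID --instance=$INSTANCE_ID --num-nodes=$NUM_NODES --disable-autoscaling"
echo "$CMD"
```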
Change autoscaling settings
Console
Open the list of Bigtable instances in the Google Cloud console.
Click the instance you want to change, then click Edit instance.
Under Configure clusters, click Edit for the cluster that you want to update.
Enter new values for any of the following that you want to change:
- Minimum number of nodes
- Maximum number of nodes
- CPU utilization target
- Storage utilization target
Click Save.
gcloud
If you don't know the instance ID, use the bigtable instances list command to view a list of your project's instances:

gcloud bigtable instances list

If you don't know the instance's cluster IDs, use the bigtable clusters list command to view a list of clusters in the instance:

gcloud bigtable clusters list --instances=INSTANCE_ID

Replace INSTANCE_ID with the permanent identifier for the instance.

Use the bigtable clusters update command to update the settings for autoscaling:

gcloud bigtable clusters update CLUSTER_ID \
    --instance=INSTANCE_ID \
    --autoscaling-max-nodes=AUTOSCALING_MAX_NODES \
    --autoscaling-min-nodes=AUTOSCALING_MIN_NODES \
    --autoscaling-cpu-target=AUTOSCALING_CPU_TARGET \
    --autoscaling-storage-target=AUTOSCALING_STORAGE_TARGET

Provide the following:

- CLUSTER_ID: The permanent identifier for the cluster.
- INSTANCE_ID: The permanent identifier for the instance.

The command accepts optional autoscaling flags. You can use all of the flags or just the flags for the values that you want to change.

- AUTOSCALING_MAX_NODES: The maximum number of nodes.
- AUTOSCALING_MIN_NODES: The minimum number of nodes.
- AUTOSCALING_CPU_TARGET: The CPU utilization target percentage that Bigtable maintains by adding or removing nodes. This value must be from 10 to 80.
- AUTOSCALING_STORAGE_TARGET: The storage utilization target in GiB per node that Bigtable maintains by adding or removing nodes.

In many cases, each cluster in an instance should have the same number of nodes, but there are exceptions. Learn about nodes and replication.
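Because the autoscaling flags are individually optional, you can pass only the ones you want to change. A sketch with hypothetical values that raises only the maximum node count and CPU target, printing the command instead of running it:

```shell
# Hypothetical example: change two settings, leave the rest untouched.
CLUSTER_ID=my-cluster-c1
INSTANCE_ID=my-instance
NEW_MAX_NODES=10
NEW_CPU_TARGET=70   # documented range: 10 to 80

if [ "$NEW_CPU_TARGET" -lt 10 ] || [ "$NEW_CPU_TARGET" -gt 80 ]; then
  echo "CPU target must be from 10 to 80" >&2
  exit 1
fi

CMD="gcloud bigtable clusters update $CLUSTER_ID --instance=$INSTANCE_ID --autoscaling-max-nodes=$NEW_MAX_NODES --autoscaling-cpu-target=$NEW_CPU_TARGET"
echo "$CMD"
```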
Add or remove nodes manually
In most cases, we recommend that you enable autoscaling. If you choose not to, and your cluster's node scaling mode is manual, you can add or remove nodes, and the number of nodes remains constant until you change it again. To review the default node quotas per zone per Google Cloud project, see Node quotas. If you need to provision more nodes than the default, you can request more.
Caution: To avoid performance issues, don't reduce a cluster's size by more than 10% in a 10-minute period. For details, see Latency increases caused by scaling down too quickly.

To change the number of nodes in a cluster that uses manual scaling:
Console
Open the list of Bigtable instances in the Google Cloud console.
Click the instance you want to change, then click Edit instance.
Under Configure clusters, click Edit for the cluster that you want to update.
In the Manual node allocation section, enter the number of nodes for the cluster in the Quantity field.
In many cases, each cluster in an instance should have the same number of nodes, but there are exceptions. Learn about nodes and replication.
Click Save.
gcloud
If you don't know the instance ID, use the bigtable instances list command to view a list of your project's instances:

gcloud bigtable instances list

If you don't know the instance's cluster IDs, use the bigtable clusters list command to view a list of clusters in the instance:

gcloud bigtable clusters list --instances=INSTANCE_ID

Replace INSTANCE_ID with the permanent identifier for the instance.

Use the bigtable clusters update command to change the number of nodes:

gcloud bigtable clusters update CLUSTER_ID \
    --instance=INSTANCE_ID \
    --num-nodes=NUM_NODES

Provide the following:

- CLUSTER_ID: The permanent identifier for the cluster.
- INSTANCE_ID: The permanent identifier for the instance.
- NUM_NODES: This field is optional. If no value is set, Bigtable automatically allocates nodes based on your data footprint and optimizes for 50% storage utilization. If you want to control the number of nodes in a cluster, set the NUM_NODES value. Ensure that the number of nodes is set to a nonzero value.

In many cases, each cluster in an instance should have the same number of nodes, but there are exceptions. Learn about nodes and replication.
cbt
If you don't know the instance ID, use the listinstances command to view a list of your project's instances:

cbt listinstances

If you don't know the instance's cluster IDs, use the listclusters command to view a list of clusters in the instance:

cbt -instance=INSTANCE_ID listclusters

Replace INSTANCE_ID with the permanent identifier for the instance.

Use the updatecluster command to change the number of nodes:

cbt -instance=INSTANCE_ID updatecluster CLUSTER_ID NUM_NODES

Provide the following:

- INSTANCE_ID: The permanent identifier for the instance.
- CLUSTER_ID: The permanent identifier for the cluster.
- NUM_NODES: This field is optional. If no value is set, Bigtable automatically allocates nodes based on your data footprint and optimizes for 50% storage utilization. If you want to control the number of nodes in a cluster, set the NUM_NODES value. Ensure that the number of nodes is set to a nonzero value.

In many cases, each cluster in an instance should have the same number of nodes, but there are exceptions. Learn about nodes and replication.
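The caution at the top of this section — don't shrink a cluster by more than 10% in any 10-minute period — can be sketched as a schedule computation. This is an illustrative helper with hypothetical node counts, not part of any Bigtable tooling; it only plans the intermediate sizes and doesn't call gcloud or cbt:

```shell
# Hypothetical example: scale down from 100 nodes to 50 nodes, removing at
# most 10% of the current size per step, with one step every 10 minutes.
CURRENT=100
TARGET=50
SCHEDULE=""
while [ "$CURRENT" -gt "$TARGET" ]; do
  # Keep at least 90% of the current nodes (rounding up), so each step
  # removes no more than 10% of the cluster.
  NEXT=$(( (CURRENT * 9 + 9) / 10 ))
  if [ "$NEXT" -lt "$TARGET" ]; then
    NEXT=$TARGET
  fi
  SCHEDULE="$SCHEDULE $NEXT"
  CURRENT=$NEXT
done
echo "Resize steps (one per 10 minutes):$SCHEDULE"
```

For 100 → 50 nodes this yields seven steps (90, 81, 73, 66, 60, 54, 50), or roughly 70 minutes of gradual scale-down.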
Add a cluster
You can add clusters to an existing instance. An instance can have clusters in up to 8 regions where Bigtable is available. Each zone in a region can contain only one cluster. The ideal locations for additional clusters depend on your use case.

If your instance is protected by CMEK, each new cluster must use a CMEK key that is in the same region as the cluster. Before you add a new cluster to a CMEK-protected instance, identify or create a CMEK key in the region where you plan to locate the cluster.

Before you add clusters to a single-cluster instance, read about the restrictions that apply when you change garbage collection policies on replicated tables. Then see examples of replication settings for recommendations.
To add a cluster to an instance:
Console
Open the list of Bigtable instances in the Google Cloud console.
Click the instance you want to change, then click Edit instance.
Under Configure clusters, click Add cluster.
If this button is disabled, the instance already has the maximum number of clusters.
Enter a cluster ID for the new cluster.
The cluster ID is a permanent identifier for the cluster.
Choose the region and zone where the new cluster will run.
Optional: To configure the cluster to always scale in increments of two nodes, select Enable 2x node scaling. For more information, see Node scaling factor.
Choose a node scaling mode for the cluster. In most cases, you should choose autoscaling. For scaling guidance, see Autoscaling.
- For Manual node allocation, enter the number of Bigtable nodes for the new cluster. If you aren't sure how many nodes you need, use the default. You can add more nodes later.
- For Autoscaling, enter values for the following:
- Minimum number of nodes
- Maximum number of nodes
- CPU utilization target
- Storage utilization target
Optional: To protect your instance with CMEK instead of the default Google-managed encryption, complete the following:
- Click Show encryption options.
- Select the radio button next to Customer-managed encryption key (CMEK).
- Select or enter the resource name for the CMEK key that you want to use for the cluster. You cannot add this later.
- If you are prompted to grant permission to the CMEK key's service account, click Grant. Your user account must be granted the Cloud KMS Admin role to complete this task.
- Click Save.
Enter the number of nodes for the cluster.
In many cases, each cluster in an instance should have the same number of nodes, but there are exceptions. Learn about nodes and replication.
If the instance is CMEK-protected, select or enter a customer-managed key. The CMEK key must be in the same region as the cluster.
Click Add.
Repeat these steps for each additional cluster, then click Save. Bigtable creates the cluster and starts replicating your data to the new cluster. You might see CPU utilization increase as replication begins.
Note: You cannot access your data in the new cluster until Bigtable completes the initial copy to the cluster. Click Tables in the left pane to check whether your replicated tables are available in the new cluster.

Review the replication settings in the default app profile to see if they make sense for your replication use case. You might need to update the default app profile or create custom app profiles.
gcloud
If you don't know the instance ID, use the bigtable instances list command to view a list of your project's instances:

gcloud bigtable instances list

If you don't know the instance's cluster IDs, use the bigtable clusters list command to view a list of clusters in the instance:

gcloud bigtable clusters list --instances=INSTANCE_ID

Replace INSTANCE_ID with the permanent identifier for the instance.

Use the bigtable clusters create command to add a cluster:

gcloud bigtable clusters create CLUSTER_ID \
    --async \
    --instance=INSTANCE_ID \
    --zone=ZONE \
    [--num-nodes=NUM_NODES] \
    [--autoscaling-min-nodes=AUTOSCALING_MIN_NODES \
    --autoscaling-max-nodes=AUTOSCALING_MAX_NODES \
    --autoscaling-cpu-target=AUTOSCALING_CPU_TARGET \
    --autoscaling-storage-target=AUTOSCALING_STORAGE_TARGET] \
    [--kms-key=KMS_KEY --kms-keyring=KMS_KEYRING \
    --kms-location=KMS_LOCATION --kms-project=KMS_PROJECT]

Provide the following:

- CLUSTER_ID: The permanent identifier for the cluster.
- INSTANCE_ID: The permanent identifier for the instance.
- ZONE: The zone where the cluster runs. Each zone in a region can contain only one cluster. For example, if an instance has a cluster in us-east1-b, you can add a cluster in a different zone in the same region, such as us-east1-c, or a zone in a separate region, such as europe-west2-a. View the zone list.

The --async flag is not required but is strongly recommended. Without this flag, the command might time out before the operation is complete. Bigtable continues to create the cluster in the background.

The command accepts the following optional flags:

- --kms-key=KMS_KEY: The CMEK key in use by the cluster. You can add CMEK clusters only to instances that are already CMEK-protected.
- --kms-keyring=KMS_KEYRING: The KMS keyring ID for the key.
- --kms-location=KMS_LOCATION: The Google Cloud location for the key.
- --kms-project=KMS_PROJECT: The Google Cloud project ID for the key.
- --storage-type=STORAGE_TYPE: The type of storage to use for the cluster. Each cluster in an instance must use the same storage type. Accepts the values SSD and HDD. The default value is SSD.
- --node-scaling-factor=node-scaling-factor-2x: A flag that enables 2x node scaling. You can enable this feature with both manual scaling and autoscaling. To view a list of Bigtable zones that aren't available for 2x node scaling, see Node scaling factor limitations.

If no value is set for the --num-nodes option, Bigtable allocates nodes to the cluster automatically based on your data footprint and optimizes for 50% storage utilization. This automatic allocation of nodes has a pricing impact. If you want to control the number of nodes in a cluster, set the NUM_NODES value. Ensure that the number of nodes is set to a nonzero value.

In many cases, each cluster in an instance should have the same number of nodes, but there are exceptions. Learn about nodes and replication.

For autoscaling, provide all autoscaling- flags and do not use num-nodes. See Autoscaling for guidance on choosing the values for your autoscaling settings. Replace the following:

- AUTOSCALING_MIN_NODES: The minimum number of nodes for the cluster.
- AUTOSCALING_MAX_NODES: The maximum number of nodes for the cluster.
- AUTOSCALING_CPU_TARGET: The target CPU utilization for the cluster. This value must be from 10 to 80.
- AUTOSCALING_STORAGE_TARGET: The storage utilization target in GiB that Bigtable maintains by adding or removing nodes.

Review the replication settings in the default app profile to see if they make sense for your replication use case. You might need to update the default app profile or create custom app profiles.
cbt
Note: You cannot create CMEK-protected resources or configure autoscaling using the cbt CLI.

If you don't know the instance ID, use the listinstances command to view a list of your project's instances:

cbt listinstances

If you don't know the instance's cluster IDs, use the listclusters command to view a list of clusters in the instance:

cbt -instance=INSTANCE_ID listclusters

Replace INSTANCE_ID with the permanent identifier for the instance.

Use the createcluster command to add a cluster:

cbt -instance=INSTANCE_ID createcluster CLUSTER_ID ZONE NUM_NODES STORAGE_TYPE

Provide the following:

- INSTANCE_ID: The permanent identifier for the instance.
- CLUSTER_ID: The permanent identifier for the cluster.
- ZONE: The zone where the cluster runs. Each zone in a region can contain only one cluster. For example, if an instance has a cluster in us-east1-b, you can add a cluster in a different zone in the same region, such as us-east1-c, or a zone in a separate region, such as europe-west2-a. View the zone list.
- NUM_NODES: This field is optional. If no value is set, Bigtable automatically allocates nodes based on your data footprint and optimizes for 50% storage utilization. If you want to control the number of nodes in a cluster, set the NUM_NODES value. Ensure that the number of nodes is set to a nonzero value.
- STORAGE_TYPE: The type of storage to use for the cluster. Each cluster in an instance must use the same storage type. Accepts the values SSD and HDD.

In many cases, each cluster in an instance should have the same number of nodes, but there are exceptions. Learn about nodes and replication.

Review the replication settings in the default app profile to see if they make sense for your replication use case. You might need to update the default app profile or create custom app profiles.
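Because each zone in a region can contain only one of the instance's clusters, it can help to check a proposed zone against the zones already in use before you create the cluster. A minimal sketch with hypothetical zone values — in practice you would take the existing zones from the listclusters output:

```shell
# Zones already used by the instance's clusters (hypothetical values; in
# practice, read these from `cbt -instance=INSTANCE_ID listclusters` or
# `gcloud bigtable clusters list`).
EXISTING_ZONES="us-east1-b europe-west2-a"
PROPOSED_ZONE="us-east1-c"

OK=yes
for Z in $EXISTING_ZONES; do
  if [ "$Z" = "$PROPOSED_ZONE" ]; then
    OK=no   # that zone already holds one of this instance's clusters
  fi
done
echo "zone $PROPOSED_ZONE available: $OK"
```

Here us-east1-c passes the check: it is in the same region as us-east1-b but is a different zone, which the rule allows.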
Delete a cluster
If an instance has multiple clusters, you can delete all but one of the clusters. Deleting all but one cluster automatically disables replication.
Note: If you delete a cluster, and some writes to that cluster have not been replicated yet, Bigtable finishes the replication process before it removes the deleted cluster's copy of your data. The remaining cluster continues to show incoming writes and CPU utilization until the replication process is complete.

In some cases, Bigtable does not allow you to delete a cluster:

- If one of your application profiles routes all traffic to a single cluster, Bigtable will not allow you to delete that cluster. You must edit or delete the application profile before you can remove the cluster.
- If you add new clusters to an existing instance, you cannot delete clusters in that instance until the initial data copy to the new clusters is complete.
To delete a cluster from an instance:
Console
Open the list of Bigtable instances in the Google Cloud console.
Click the instance you want to change, then click Edit instance.
Under Configure clusters, click Delete cluster for the cluster that you want to delete.
To cancel the delete operation, click Undo, which is available until you click Save. Otherwise, click Save.
gcloud
If you don't know the instance ID, use the bigtable instances list command to view a list of your project's instances:

gcloud bigtable instances list

If you don't know the instance's cluster IDs, use the bigtable clusters list command to view a list of clusters in the instance:

gcloud bigtable clusters list --instances=INSTANCE_ID

Replace INSTANCE_ID with the permanent identifier for the instance.

Use the bigtable clusters delete command to delete the cluster:

gcloud bigtable clusters delete CLUSTER_ID \
    --instance=INSTANCE_ID

Provide the following:

- CLUSTER_ID: The permanent identifier for the cluster.
- INSTANCE_ID: The permanent identifier for the instance.
cbt
If you don't know the instance ID, use the listinstances command to view a list of your project's instances:

cbt listinstances

If you don't know the instance's cluster IDs, use the listclusters command to view a list of clusters in the instance:

cbt -instance=INSTANCE_ID listclusters

Replace INSTANCE_ID with the permanent identifier for the instance.

Use the deletecluster command to delete the cluster:

cbt -instance=INSTANCE_ID deletecluster CLUSTER_ID

Provide the following:

- INSTANCE_ID: The permanent identifier for the instance.
- CLUSTER_ID: The permanent identifier for the cluster.
Move data to a new location
To move the data in a Bigtable instance to a new zone or region, add a new cluster in the location that you want to move to, and then delete the cluster in the location that you want to move from. The deleted cluster remains available until your data has been replicated to the new cluster, so you don't have to worry about any requests failing. Bigtable replicates all data to the new cluster automatically.
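The move described above can be sketched as two gcloud steps. The instance ID, cluster IDs, and zone below are hypothetical, and the commands are printed rather than executed; in practice, wait for the new cluster's initial data copy to finish before deleting the old cluster:

```shell
# Hypothetical example: move data from a us-east1-b cluster to europe-west2-a.
INSTANCE_ID=my-instance
OLD_CLUSTER=my-cluster-us
NEW_CLUSTER=my-cluster-eu
NEW_ZONE=europe-west2-a

# Step 1: add a cluster in the destination zone (replication starts).
STEP1="gcloud bigtable clusters create $NEW_CLUSTER --async --instance=$INSTANCE_ID --zone=$NEW_ZONE"
# Step 2: after the initial data copy completes, delete the source cluster.
STEP2="gcloud bigtable clusters delete $OLD_CLUSTER --instance=$INSTANCE_ID"
echo "$STEP1"
echo "$STEP2"
```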
Manage app profiles
Application profiles, or app profiles, control how your applications connect to an instance that uses replication. Every instance with more than one cluster has its own default app profile. You can also create many different custom app profiles for each instance, using a different app profile for each kind of application that you run.

To learn how to set up an instance's app profiles, see Configuring App Profiles. For examples of settings you can use to implement common use cases, see Examples of replication configurations.
Manage labels
Labels are key-value pairs that you can use to group related instances and storemetadata about an instance.
To learn how to manage labels, see Adding or updating an instance's labels and Removing a label from an instance.
Change an instance's display name
To change an instance's display name, which the Google Cloud console uses toidentify the instance:
Console
Open the list of Bigtable instances in the Google Cloud console.
Click the instance you want to change, then click Edit instance.
Edit the instance name, then clickSave.
gcloud
If you don't know the instance ID, use the bigtable instances list command to view a list of your project's instances:

gcloud bigtable instances list

Use the bigtable instances update command to update the display name:

gcloud bigtable instances update INSTANCE_ID \
    --display-name=DISPLAY_NAME

Provide the following:

- INSTANCE_ID: The permanent identifier for the instance.
- DISPLAY_NAME: A human-readable name that identifies the instance in the Google Cloud console.
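A concrete sketch of the command above with hypothetical values; because display names can contain spaces, the value is quoted. The command is printed rather than executed:

```shell
# Hypothetical example values.
INSTANCE_ID=my-instance
DISPLAY_NAME="Production Instance"

# Quote the display name so spaces survive shell word-splitting.
CMD="gcloud bigtable instances update $INSTANCE_ID --display-name=\"$DISPLAY_NAME\""
echo "$CMD"
```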
cbt
This feature is not available in the cbt CLI.
What's next
- Learn how to add, update, and remove labels for an instance.
- Find out how to create and update an instance's app profiles, which contain settings for replication.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2026-02-19 UTC.