Quotas and limits

This document lists the quotas and system limits that apply to Bigtable.

  • Quotas have default values, but you can typically request adjustments.
  • System limits are fixed values that can't be changed.

Google Cloud uses quotas to help ensure fairness and reduce spikes in resource use and availability. A quota restricts how much of a Google Cloud resource your Google Cloud project can use. Quotas apply to a range of resource types, including hardware, software, and network components. For example, quotas can restrict the number of API calls to a service, the number of load balancers used concurrently by your project, or the number of projects that you can create. Quotas protect the community of Google Cloud users by preventing the overloading of services. Quotas also help you to manage your own Google Cloud resources.

In most cases, when you attempt to consume more of a resource than its quota allows, the Cloud Quotas system blocks access to the resource, and the task that you're trying to perform fails.

Quotas generally apply at the Google Cloud project level. Your use of a resource in one project doesn't affect your available quota in another project. Within a Google Cloud project, quotas are shared across all applications and IP addresses.

For more information, see the Cloud Quotas overview.

To adjust most quotas, use the Google Cloud console. For more information, see Request a quota adjustment.

There are also system limits on Bigtable resources. System limits can't be changed.

Note: When you send multiple operations in a single batch, each operation in the batch counts towards the quota or limit.

Quotas

This section describes default quotas that apply to all of your Bigtable usage.

Admin operation quotas

The following quotas affect the number of Bigtable administrative operations (calls to the admin API) that you can perform within a given time.

In general, you can't request an increase in admin operation quotas, except where indicated. Exceptions are sometimes granted when strong justification is provided. Note, however, that the number of calls your application makes to the admin API shouldn't grow as usage grows. If it does, that is often a sign that your application code is making unnecessary calls to the admin API, and you should change your application instead of requesting an admin operation quota increase.

Daily quotas reset at midnight Pacific Time.

Instances and clusters

  • Instance and cluster read requests: Reading the configuration for an instance or cluster (for example, the instance name or the number of nodes in a cluster), or reading a list of instances.
    Per day per project: 864,000 ops (average of 10 ops/second)
    Per minute per user: 1,000 ops

  • Instance and cluster write requests: Changing the configuration for an instance or cluster (for example, the instance name or the number of nodes in a cluster), or creating a new instance.
    Per day per project: 500 ops
    Per minute per user: 100 ops

Application profiles

  • App profile read requests: Reading the configuration for an app profile.
    Per minute per project: 5,000 ops
    Per minute per user: 1,000 ops

  • App profile write requests: Changing the configuration for an app profile.
    Per minute per project: 500 ops
    Per minute per user: 100 ops

Tables

  • Table admin read requests: Reading the configuration for a table (for example, details about its column families), or reading a list of tables.
    Per day per project: 864,000 ops (average of 10 ops/second)
    Per minute per user: 1,000 ops

  • Table admin write requests: Changing the configuration for a table (for example, the garbage collection settings for a column family).
    Per day per project: 5,000 ops
    Per minute per user: 100 ops

  • DropRowRange method: Deleting a range of rows from a table in a single operation.
    Per day per project: 5,000 ops
    Per minute per user: 100 ops

Backups

  • Backup operations: Creating, updating, and deleting a backup.
    Per day per project: 1,000 ops
    Per minute per user: 10 ops (1)

  • Backup retrieval requests: Getting and listing backups.
    Per day per project: 864,000 ops

  • RestoreTable method: Restoring a backup to a new table.
    Per day per project: 5,000 ops
    Per minute per user: 100 ops

Identity and Access Management

  • Fine-grained ACL get requests: Reading information about the IAM policy for a Bigtable instance, or testing the IAM permissions for an instance.
    Per day per project: 864,000 ops (average of 10 ops/second)
    Per minute per user: 1,000 ops

  • Fine-grained ACL set requests: Changing the IAM policy for a Bigtable instance.
    Per day per project: 864,000 ops (average of 10 ops/second)
    Per minute per user: 1,000 ops

(1) Eligible for a quota limit increase.
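The per-day figures above correspond directly to a sustained per-second rate; a quick sanity check (a minimal sketch, assuming the quota window is a full 24-hour day):

```python
# Daily admin-operation quotas map to a sustained per-second rate.
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 seconds

daily_read_quota = 864_000  # admin read ops per day per project
print(daily_read_quota / SECONDS_PER_DAY)  # 10.0 ops/second, matching the table
```

This is why the table annotates the daily quota with "average of 10 ops/second": a client issuing admin reads faster than that rate will exhaust the daily allowance early.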

Node quotas

A Google Cloud project contains Bigtable instances, which are containers for clusters. A cluster represents the actual Bigtable service running in a single zone. Clusters contain nodes, which are compute resources that enable Bigtable to manage your data.

The default number of nodes that you can provision per zone in each project depends on the region. You can provision up to the default number of HDD nodes and up to the default number of SSD nodes per zone in a project.

The default node quotas are as follows:

Region                         SSD   HDD
asia-east1                     100   100
europe-west1                   200   200
us-central1                    200   200
us-east1                        50    50
us-east4                        50    50
us-west1                       100   100
All other Bigtable locations    30    30

If you enable autoscaling for a cluster, the configured maximum number of nodes counts toward this limit, even if the cluster is not scaled to that number of nodes. If you need to provision more nodes than the default limits, you can request an increase.

Quotas and node availability

Node quota is the maximum number of nodes that you can provision per zone in each project. Quotas don't guarantee that you are always able to add nodes to a cluster. If a zone is out of nodes, you might not be able to add nodes to a cluster in that zone, even if you have remaining quota in your project.

For example, if you attempt to add 10 SSD nodes to a cluster that already has 20 nodes, but the zone is out of nodes, you are not able to add those 10 nodes, even if the node quota for SSD nodes in that region is 30.
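The quota side of that example can be sketched as a simple client-side guard (a hypothetical helper, not part of any Bigtable API; passing the check only means you are within quota, not that the zone has capacity):

```python
def within_node_quota(current_nodes: int, requested_nodes: int, zone_quota: int) -> bool:
    """Return True if adding the requested nodes stays within the per-zone quota.

    Quota is necessary but not sufficient: the zone must also have
    node capacity available at the time of the request.
    """
    return current_nodes + requested_nodes <= zone_quota

print(within_node_quota(20, 10, 30))  # True, yet the request can still fail if the zone is out of nodes
print(within_node_quota(25, 10, 30))  # False: the request would exceed quota
```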

In these situations, we attempt to increase a zone's node resources and then grant your requests after those resources are available, with no guarantee of timing and completion.

Nodes that you have provisioned are always guaranteed to be available.

Data Boost quotas

The following serverless processing unit (SPU) quotas apply per project per region.

Region                         SPUs
asia-east1                     100,000
europe-west1                   200,000
us-central1                    200,000
us-east1                       100,000
us-east4                       100,000
us-west1                       100,000
All other Bigtable locations    30,000

For more information about Data Boost, see the Data Boost overview.

View quota information

To find the number of SSD and HDD nodes that your Google Cloud project already has in each zone, use the Google Cloud console. In the left navigation pane, point to IAM & admin, click Quotas, and then use the Service drop-down to select the Bigtable Admin API service.

The page displays rows showing quotas for each combination of service, node type, and location. Look for the rows that are subtitled SSD nodes per zone or HDD nodes per zone. The Limit column shows the maximum number of nodes allowed for the given node type and location, and the Current usage column shows the number of nodes that currently exist. The difference between those two numbers is the number of nodes you can add without requesting more.

Request a node quota increase

To ensure that there is enough time to process your request, always plan ahead and request additional resources a few days before you might need them. Requests for node quota increases are not guaranteed to be granted. For more information, see Working with quotas.

You must have at least editor-level permissions on the project that contains the instance for which you are requesting a node quota increase.

There is no charge for requesting an increase in node quota. Your costs increase only if you use more resources.

  1. Go to the Quotas page.
  2. On the Quotas page, select the quotas you want to change.
  3. Click the Edit Quotas button at the top of the page.
  4. In the right pane, type your name, email, and phone number, then click Next.
  5. Enter the requested new quota limit, then click Next.
  6. Submit your request.

Limits

This section describes limits that apply to your usage of Bigtable. Limits are built into the service and cannot be changed.

App profiles per instance

The maximum number of application profiles each instance can have is 2,000.

Authorized views

  • Authorized views per Bigtable instance: up to 10,000
  • Column qualifier prefixes per authorized view: 10

Backups

  • Maximum number of standard backups that can be created: 150 per table per cluster
  • Maximum number of hot backups that can be created: 10 per table per cluster
  • Minimum retention period of a backup: 6 hours after initial creation time
  • Maximum retention period of a backup: 90 days after initial creation time
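The retention bounds above can be validated client-side before creating a backup; a minimal sketch (a hypothetical helper using Python's standard datetime module, not part of the Bigtable client library):

```python
from datetime import datetime, timedelta, timezone

MIN_RETENTION = timedelta(hours=6)   # minimum backup retention
MAX_RETENTION = timedelta(days=90)   # maximum backup retention

def expire_time_is_valid(create_time: datetime, expire_time: datetime) -> bool:
    """Check a proposed backup expiry against Bigtable's retention limits."""
    retention = expire_time - create_time
    return MIN_RETENTION <= retention <= MAX_RETENTION

created = datetime(2025, 1, 1, tzinfo=timezone.utc)
print(expire_time_is_valid(created, created + timedelta(days=30)))  # True
print(expire_time_is_valid(created, created + timedelta(hours=1)))  # False: under 6 hours
```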

Data Boost

A cluster can't receive more than 1,000 Data Boost read requests per second.

Data size within tables

Recommended limits

Design your schema to keep the size of your data under these recommended limits.

  • Column families per table: 100
  • A single column qualifier: 16 KB
  • A single value in a table cell: 10 MB
  • All values in a single row: 100 MB

Hard limits

In addition, you must ensure that your data fits within these hard limits:

  • A single row key: 4 KB
  • A single value in a table cell: 100 MB
  • All values in a single row: 256 MB
  • A single mutation: 200 MB

These size limits are measured in binary kilobytes (KB), where 1 KB is 2^10 bytes, and binary megabytes (MB), where 1 MB is 2^20 bytes. These units of measurement are also known as kibibytes (KiB) and mebibytes (MiB).
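In bytes, the hard limits work out as follows; the snippet is an illustrative pre-flight check (a hypothetical helper, not part of the Bigtable client library):

```python
KB = 2 ** 10  # binary kilobyte (kibibyte): 1,024 bytes
MB = 2 ** 20  # binary megabyte (mebibyte): 1,048,576 bytes

MAX_ROW_KEY_BYTES = 4 * KB        # hard limit on a single row key: 4,096 bytes
MAX_CELL_VALUE_BYTES = 100 * MB   # hard limit on a single cell value: 104,857,600 bytes

def row_key_fits(key: bytes) -> bool:
    """Return True if a row key is within the 4 KB hard limit."""
    return len(key) <= MAX_ROW_KEY_BYTES

print(MAX_ROW_KEY_BYTES)          # 4096
print(row_key_fits(b"x" * 5000))  # False: 5,000 bytes exceeds the 4 KB limit
```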

Operation limits

When you send multiple mutations to Bigtable as a single batch, the following limits apply:

  • A batch of conditional mutations, which call CheckAndMutate, can include up to 100,000 true mutations and up to 100,000 false mutations in the batch.

  • In batches of all other types of mutations, you can include no more than 100,000 mutations in the batch.
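To stay under the 100,000-mutation ceiling, a large workload can be split into compliant batches; a minimal sketch (a hypothetical helper; the chunking logic is plain Python, independent of the Bigtable client):

```python
MAX_MUTATIONS_PER_BATCH = 100_000  # limit for non-conditional mutation batches

def chunk_mutations(mutations, batch_size=MAX_MUTATIONS_PER_BATCH):
    """Yield slices of the mutation list, each within the batch limit."""
    for start in range(0, len(mutations), batch_size):
        yield mutations[start:start + batch_size]

# 250,000 pending mutations split into three compliant batches.
batches = list(chunk_mutations(list(range(250_000))))
print([len(batch) for batch in batches])  # [100000, 100000, 50000]
```

Remember that each mutation in each batch still counts individually against your quotas.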

Regions per instance

A Bigtable instance can have clusters in up to 8 regions where Bigtable is available. You can create one cluster in each zone in a region. For a list of available zones, see Bigtable locations.

Row filters

A row filter cannot exceed 20 KB. If you receive an error message because your filter is too large, redesign or shorten your filter.

Storage per node

If a cluster does not have enough nodes, based on its current workload and the amount of data it stores, Bigtable will not have enough CPU resources to manage all of the tablets that are associated with the cluster. Bigtable will also not be able to perform essential maintenance tasks in the background. As a result, the cluster may not be able to handle incoming requests, and latency will go up. See Trade-offs between storage usage and performance for more details.

To prevent these issues, monitor storage utilization for your clusters to make sure they have enough nodes to support the amount of data in the cluster, based on the following limits:

  • SSD clusters:

    • Without tiered storage (Preview): 5 TB per node.
    • With tiered storage: 32 TB per node, including a maximum of 5 TB SSD.

    For more information about tiered storage and the infrequent access tier, see Tiered storage overview.

  • HDD clusters: 16 TB per node

These values are measured in binary terabytes (TB), where 1 TB is 2^40 bytes. This unit of measurement is also known as a tebibyte (TiB).

Important: If any cluster in an instance exceeds the hard limit on the amount of storage per node, writes to all clusters in that instance will fail until you add nodes to each cluster that is over the limit. Also, if you try to remove nodes from a cluster, and the change would cause the cluster to exceed the hard limit on storage, Bigtable will deny the request.

As a best practice, add enough nodes to your cluster so you are only using 70% of these limits, which helps accommodate any sudden spikes in storage usage. For example, if you are storing 50 TB of data in a cluster that uses SSD storage, you should provision at least 15 nodes, which will handle up to 75 TB of data. If you are not adding significant amounts of data to the cluster, you can exceed this recommendation and store up to 100% of the limit.
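The 70% guideline translates into a simple provisioning formula; a sketch (a hypothetical helper) that reproduces the 50 TB SSD example above:

```python
import math

def min_nodes(data_tb: float, per_node_limit_tb: float,
              target_utilization: float = 0.70) -> int:
    """Minimum node count that keeps stored data at or below the target
    share of the per-node storage limit."""
    return math.ceil(data_tb / (per_node_limit_tb * target_utilization))

print(min_nodes(50, 5))   # 15 nodes for 50 TB on SSD without tiered storage (5 TB/node)
print(min_nodes(50, 16))  # 5 nodes for the same data on HDD (16 TB/node)
```

At 15 nodes, the cluster's hard capacity is 15 × 5 TB = 75 TB, so 50 TB of data sits at about 67% utilization, just under the recommended 70%.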

Tables per instance

Bigtable supports a maximum of 1,000 tables in each instance. Materialized views count toward the number of tables.

ID length limits

The following are the minimum and maximum ID lengths (number of characters)supported by Bigtable.

  • App profile: 1-50
  • Authorized view: 1-50
  • Backup: 1-50
  • Cluster: 6-40
  • Column family: 1-64
  • Instance: 6-33
  • Table: 1-50
  • View: 1-128

Logical views per instance

Bigtable supports a maximum of 1,000 logical views in each instance.

Saved queries limits

Preview — Saved queries

This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA features are available "as is" and might have limited support. For more information, see the launch stage descriptions.

  • Maximum number of saved queries per project (including saved queries for other Google Cloud products): 10,000
  • Maximum size for each query: 1 MiB

Schema bundles

Preview — Schema bundles

This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA features are available "as is" and might have limited support. For more information, see the launch stage descriptions.

The following limits apply to schema bundles:

  • Schema bundles per table: You can create a maximum of 10 schema bundles per table.
  • Schema bundle size: The total size of the serialized protocol buffer descriptors within a schema bundle can't exceed 4 MB.

Usage policies

The use of this service must adhere to the Terms of Service as well as Google's Privacy Policy.

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2026-02-19 UTC.