Quotas and limits
This document lists the quotas and system limits that apply to BigQuery.
- Quotas have default values, but you can typically request adjustments.
- System limits are fixed values that can't be changed.
Google Cloud uses quotas to help ensure fairness and reduce spikes in resource use and availability. A quota restricts how much of a Google Cloud resource your Google Cloud project can use. Quotas apply to a range of resource types, including hardware, software, and network components. For example, quotas can restrict the number of API calls to a service, the number of load balancers used concurrently by your project, or the number of projects that you can create. Quotas protect the community of Google Cloud users by preventing the overloading of services. Quotas also help you to manage your own Google Cloud resources.
The Cloud Quotas system does the following:
- Monitors your consumption of Google Cloud products and services
- Restricts your consumption of those resources
- Provides a way to request changes to the quota value and automate quota adjustments
In most cases, when you attempt to consume more of a resource than its quota allows, the system blocks access to the resource, and the task that you're trying to perform fails.
Quotas generally apply at the Google Cloud project level. Your use of a resource in one project doesn't affect your available quota in another project. Within a Google Cloud project, quotas are shared across all applications and IP addresses.
For more information, see the Cloud Quotas overview.
There are also system limits on BigQuery resources. System limits can't be changed.
Some error messages specify quotas or limits that you can increase, while other error messages specify quotas or limits that you can't increase. Reaching a hard limit means that you need to implement temporary or permanent workarounds or best practices for your workload. Doing so is a best practice, even for quotas or limits that can be increased. For details about both types of errors, see Troubleshoot quota and limit errors.
By default, BigQuery quotas and limits apply on a per-project basis. Quotas and limits that apply on a different basis are indicated as such; for example, the maximum number of columns per table, or the maximum number of concurrent API requests per user. Specific policies vary depending on resource availability, user profile, service usage history, and other factors, and are subject to change without notice.
Quota replenishment
Daily quotas are replenished at regular intervals throughout the day, reflecting their intent to guide rate-limiting behaviors. Intermittent refresh is also done to avoid long disruptions when quota is exhausted. More quota is typically made available within minutes rather than globally replenished once daily.
Request a quota increase
To adjust most quotas, use the Google Cloud console. For more information, see Request a quota adjustment.
Cap quota usage
To learn how you can limit usage of a particular resource by creating a quota override, see Create quota override.
Required permissions
To view and update your BigQuery quotas in the Google Cloud console, you need the same permissions as for any Google Cloud quota. For more information, see Google Cloud quota permissions.
Troubleshoot
For information about troubleshooting errors related to quotas and limits, see Troubleshooting BigQuery quota errors.
Jobs
Quotas and limits apply to jobs that BigQuery runs on your behalf, whether they are run by using the Google Cloud console, the bq command-line tool, or programmatically by using the REST API or client libraries.
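As a minimal illustration of the programmatic path, the following sketch submits a query job with the Python client library (`google-cloud-bigquery`). The project name is a placeholder; a job submitted this way counts toward the same job quotas and limits as one run from the console or the bq tool.

```python
# Minimal sketch: submitting a query job with the BigQuery Python client.
# Assumes google-cloud-bigquery is installed and application default
# credentials are configured; "my-project" is a placeholder project ID.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# client.query() creates a query job; the job quotas and limits below apply
# to it exactly as they would to the same query run interactively.
job = client.query("SELECT 1 AS x")
for row in job.result():  # Waits for the job to finish.
    print(row.x)
```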
Query jobs
The following quotas apply to query jobs created automatically by running interactive queries, scheduled queries, and jobs submitted by using the jobs.query and query-type jobs.insert API methods.
For troubleshooting information, see the BigQuery Troubleshooting page.
| Quota | Default | Notes |
|---|---|---|
| Query usage per day | 200 Tebibytes (TiB) | This quota applies only to the on-demand query pricing model. Your project can run up to 200 TiB in queries per day. You can change this limit anytime. See Create custom query quotas to learn more about cost controls; a per-query cost-capping sketch follows this table. View quota in Google Cloud console |
| Query usage per day per user | Unlimited | This quota applies only to the on-demand query pricing model. There is no default limit on how many TiB in queries a user can run per day. You can set the limit anytime. Regardless of the per user limit, the total usage for all users in the project combined can never exceed the query usage per day limit. See Create custom query quotas to learn more about cost controls. View quota in Google Cloud console |
| GoogleSQL federated query cross-region bytes per day | 1 TB | If the BigQuery query processing location and the Cloud SQL instance location are different, then your query is a cross-region query. Your project can run up to 1 TB in cross-region queries per day. See Cloud SQL federated queries. View quota in Google Cloud console |
| Cross-cloud transferred bytes per day | 1 TB | You can transfer up to 1 TB of data per day from an Amazon S3 bucket or from Azure Blob Storage. View quota in Google Cloud console |
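In addition to the project-level and per-user custom quotas above, you can bound on-demand cost for an individual query. The following hedged sketch uses the Python client library to dry-run a query (to estimate the bytes it would scan) and then to run it with `maximum_bytes_billed` set so the job fails instead of exceeding a chosen cap. The project and table names are placeholders.

```python
# Sketch: estimating and capping on-demand bytes for a single query.
# "my-project" and the referenced table are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")
sql = "SELECT name FROM `my-project.my_dataset.my_table` WHERE year = 2024"

# Dry run: the query is not executed, but BigQuery reports how many bytes
# it would process under on-demand pricing.
dry_cfg = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
estimate = client.query(sql, job_config=dry_cfg)
print(f"Would process {estimate.total_bytes_processed} bytes")

# Hard cap: if the query would bill more than ~1 TiB, it fails instead of
# consuming the project's daily query usage quota.
capped_cfg = bigquery.QueryJobConfig(maximum_bytes_billed=1024**4)
rows = client.query(sql, job_config=capped_cfg).result()
```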
The following limits apply to query jobs created automatically by running interactive queries, scheduled queries, and jobs submitted by using the jobs.query and query-type jobs.insert API methods:
| Limit | Default | Notes |
|---|---|---|
| Maximum number of queued interactive queries | 1,000 queries | Your project can queue up to 1,000 interactive queries. Additional interactive queries that exceed this limit return a quota error. To troubleshoot these errors, see Avoid limits for high-volume interactive queries. |
| Maximum number of queued batch queries | 20,000 queries | Your project can queue up to 20,000 batch queries. Additional batch queries that exceed this limit return a quota error. |
| Maximum number of concurrent interactive queries against Bigtable external data sources | 16 queries | Your project can run up to sixteen concurrent queries against a Bigtable external data source. |
| Maximum number of concurrent queries that contain remote functions | 10 queries | You can run up to ten concurrent queries with remote functions per project. |
| Maximum number of concurrent multi-statement queries | 1,000 multi-statement queries | Your project can run up to 1,000 concurrent multi-statement queries. For other quotas and limits related to multi-statement queries, see Multi-statement queries. |
| Maximum number of concurrent legacy SQL queries that contain UDFs | 6 queries | Your project can run up to six concurrent legacy SQL queries with user-defined functions (UDFs). This limit includes both interactive and batch queries. Interactive queries that contain UDFs also count toward the concurrent limit for interactive queries. This limit does not apply to GoogleSQL queries. |
| Daily query size limit | Unlimited | By default, there is no daily query size limit. However, you can set limits on the amount of data users can query by creating custom quotas to control query usage per day or query usage per day per user. |
| Daily destination table update limit | See Maximum number of table operations per day. | Updates to destination tables in a query job count toward the limit on the maximum number of table operations per day for the destination tables. Destination table updates include append and overwrite operations that are performed by queries that you run by using the Google Cloud console, using the bq command-line tool, or calling the jobs.query and query-type jobs.insert API methods. |
| Query/multi-statement query execution-time limit | 6 hours | A query or multi-statement query can execute for up to 6 hours, and then it fails. However, sometimes queries are retried. A query can be tried up to three times, and each attempt can run for up to 6 hours. As a result, it's possible for a query to have a total runtime of more than 6 hours. |
| Maximum number of resources referenced per query | 1,000 resources | A query can reference a total of up to 1,000 unique tables, unique views, unique user-defined functions (UDFs), and unique table functions after full expansion. This limit includes the following: |
| Maximum SQL query character length | 1,024k characters | A SQL query can be up to 1,024k characters long. This limit includes comments and whitespace characters. If your query is longer, you receive the following error: The query is too large. To stay within this limit, consider replacing large arrays or lists with query parameters and breaking a long query into multiple queries in the session. A parameterized-query sketch follows this table. |
| Maximum unresolved legacy SQL query length | 256 KB | An unresolved legacy SQL query can be up to 256 KB long. If your query is longer, you receive the following error: The query is too large. To stay within this limit, consider replacing large arrays or lists with query parameters. |
| Maximum unresolved GoogleSQL query length | 1 MB | An unresolved GoogleSQL query can be up to 1 MB long. If your query is longer, you receive the following error: The query is too large. To stay within this limit, consider replacing large arrays or lists with query parameters. |
| Maximum resolved legacy and GoogleSQL query length | 12 MB | The limit on resolved query length includes the length of all views and wildcard tables referenced by the query. |
| Maximum number of GoogleSQL query parameters | 10,000 parameters | A GoogleSQL query can have up to 10,000 parameters. |
| Maximum request size | 10 MB | The request size can be up to 10 MB, including additional properties like query parameters. |
| Maximum response size | 10 GB compressed | Sizes vary depending on compression ratios for the data. The actual response size might be significantly larger than 10 GB. The maximum response size is unlimited when writing large query results to a destination table. |
| Maximum row size | 100 MB | The maximum row size is approximate, because the limit is based on the internal representation of row data. The maximum row size limit is enforced during certain stages of query job execution. |
| Maximum columns in a table, query result, or view definition | 10,000 columns | A table, query result, or view definition can have up to 10,000 columns. This includes nested and repeated columns. Deleted columns can continue to count towards the total number of columns. If you've deleted columns, then you might receive quota errors until the total resets. |
| Maximum concurrent slots for on-demand pricing | 2,000 slots per project; 20,000 slots per organization | With on-demand pricing, your project can have up to 2,000 concurrent slots. There is also a 20,000 concurrent slots cap at the organization level. BigQuery tries to allocate slots fairly between projects within an organization if their total demand is higher than 20,000 slots. BigQuery slots are shared among all queries in a single project. BigQuery might exceed this limit to accelerate your queries. The capacity is subject to availability. To check how many slots you're using, see Monitoring BigQuery using Cloud Monitoring. |
| Maximum CPU usage per scanned data for on-demand pricing | 256 CPU seconds per MiB scanned | With on-demand pricing, your query can use up to approximately 256 CPU seconds per MiB of scanned data. If your query is too CPU-intensive for the amount of data being processed, the query fails with a billingTierLimitExceeded error. For more information, see Error messages. |
| Multi-statement transaction table mutations | 100 tables | A transaction can mutate data in at most 100 tables. |
| Multi-statement transaction partition modifications | 100,000 partition modifications | A transaction can perform at most 100,000 partition modifications. |
| BigQuery Omni maximum query result size | 20 GiB uncompressed | The maximum result size is 20 GiB logical bytes when querying Microsoft Azure or AWS data. If your query result is larger than 20 GiB, consider exporting the results to Amazon S3 or Blob Storage. For more information, see BigQuery Omni limitations. |
| BigQuery Omni total query result size per day | 1 TB | The total query result size for a project is 1 TB per day. For more information, see BigQuery Omni limitations. |
| BigQuery Omni maximum row size | 10 MiB | The maximum row size is 10 MiB when querying Microsoft Azure or AWS data. For more information, see BigQuery Omni limitations. |
Although scheduled queries use features of the BigQuery Data Transfer Service, scheduled queries are not transfers, and are not subject to load job limits.
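Several of the length limits above suggest replacing large inline arrays or lists with query parameters. The following sketch shows one way to do that with the Python client library, passing an array parameter instead of expanding thousands of literals into the SQL text. The table and parameter names are illustrative only.

```python
# Sketch: passing a large list as a query parameter instead of inlining it,
# which keeps the SQL text well under the query length limits above.
from google.cloud import bigquery

client = bigquery.Client()
ids = list(range(10_000))  # Imagine this list came from elsewhere.

job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ArrayQueryParameter("ids", "INT64", ids),
    ]
)
sql = """
    SELECT id, status
    FROM `my-project.my_dataset.orders`   -- placeholder table
    WHERE id IN UNNEST(@ids)
"""
rows = client.query(sql, job_config=job_config).result()
```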
Extract jobs
The following limits apply to jobs that extract data from BigQuery by using the bq command-line tool, Google Cloud console, or the extract-type jobs.insert API method.
| Limit | Default | Notes |
|---|---|---|
| Maximum number of extracted bytes per day | 50 TiB | You can extract up to 50 TiB (tebibytes) of data per day from a project at no cost using the shared slot pool. You can set up a Cloud Monitoring alert policy that provides notification of the number of bytes extracted. To extract more than 50 TiB (tebibytes) of data per day, do one of the following: |
| Maximum number of extract jobs per day | 100,000 extract jobs | You can run up to 100,000 extract jobs per day in a project. To run more than 100,000 extract jobs per day, do one of the following: |
| Maximum table size extracted to a single file | 1 GB | You can extract up to 1 GB of table data to a single file. To extract more than 1 GB of data, use a wildcard to extract the data into multiple files. When you extract data to multiple files, the size of the files varies. In some cases, the size of the output files is more than 1 GB. |
| Wildcard URIs per extract job | 500 URIs | An extract job can have up to 500 wildcard URIs. |
For more information about viewing your current extract job usage, see View current quota usage. For troubleshooting information, see Export troubleshooting.
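Because a single extracted file is limited to 1 GB, exports of larger tables typically use a wildcard URI so that BigQuery shards the output across multiple files. A hedged sketch with the Python client library follows; the bucket, dataset, and table names are placeholders.

```python
# Sketch: extracting a table to Cloud Storage with a wildcard URI so the
# output can be split across many files (each file stays under 1 GB).
from google.cloud import bigquery

client = bigquery.Client()

extract_job = client.extract_table(
    "my-project.my_dataset.my_table",          # placeholder source table
    "gs://my-bucket/exports/my_table-*.csv",   # wildcard => multiple shards
    job_config=bigquery.ExtractJobConfig(
        destination_format=bigquery.DestinationFormat.CSV,
    ),
)
extract_job.result()  # Waits; this job counts toward the daily extract limits.
```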
Load jobs
The following limits apply when you load data into BigQuery, using the Google Cloud console, the bq command-line tool, or the load-type jobs.insert API method.
| Limit | Default | Notes |
|---|---|---|
| Load jobs per table per day | 1,500 jobs | Load jobs, including failed load jobs, count toward the limit on the number of table operations per day for the destination table. For information about limits on the number of table operations per day for standard tables and partitioned tables, see Tables. |
| Load jobs per day | 100,000 jobs | Your project's load job quota is replenished with a maximum of 100,000 load jobs every 24 hours. Failed load jobs count toward this limit. In some cases, it is possible to run more than 100,000 load jobs in 24 hours if a prior day's quota is not fully used. |
| Maximum columns per table | 10,000 columns | A table can have up to 10,000 columns. This includes nested and repeated columns. |
| Maximum size per load job | 15 TB | The total size for all of your CSV, JSON, Avro, Parquet, and ORC input files can be up to 15 TB. This limit does not apply for jobs with a reservation. |
| Maximum number of source URIs in job configuration | 10,000 URIs | A job configuration can have up to 10,000 source URIs. |
| Maximum number of files per load job | 10,000,000 files | A load job can have up to 10 million total files, including all files matching all wildcard URIs. |
| Maximum number of files in the source Cloud Storage bucket | Approximately 60,000,000 files | A load job can read from a Cloud Storage bucket containing up to approximately 60,000,000 files. |
| Load job execution-time limit | 6 hours | A load job fails if it executes for longer than six hours. |
| Avro: Maximum size for file data blocks | 16 MB | The size limit for Avro file data blocks is 16 MB. |
| CSV: Maximum cell size | 100 MB | CSV cells can be up to 100 MB in size. |
| CSV: Maximum row size | 100 MB | CSV rows can be up to 100 MB in size. |
| CSV: Maximum file size - compressed | 4 GB | The size limit for a compressed CSV file is 4 GB. |
| CSV: Maximum file size - uncompressed | 5 TB | The size limit for an uncompressed CSV file is 5 TB. |
| Newline-delimited JSON (ndJSON): Maximum row size | 100 MB | ndJSON rows can be up to 100 MB in size. |
| ndJSON: Maximum file size - compressed | 4 GB | The size limit for a compressed ndJSON file is 4 GB. |
| ndJSON: Maximum file size - uncompressed | 5 TB | The size limit for an uncompressed ndJSON file is 5 TB. |
If you regularly exceed the load job limits due to frequent updates, consider streaming data into BigQuery instead.
For information on viewing your current load job usage, see View current quota usage.
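As a point of reference for the limits above, the following sketch loads CSV files from Cloud Storage with the Python client library. A single call like this is one load job no matter how many wildcard-matched files it reads, subject to the per-job file and size limits. All names are placeholders.

```python
# Sketch: one load job that reads many files via a wildcard URI.
from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,  # Let BigQuery infer the schema for this sketch.
)
load_job = client.load_table_from_uri(
    "gs://my-bucket/staging/events-*.csv",   # placeholder wildcard URI
    "my-project.my_dataset.events",          # placeholder destination table
    job_config=job_config,
)
load_job.result()
print(f"Loaded {load_job.output_rows} rows")
```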
BigQuery Data Transfer Service load job quota considerations
Load jobs created by BigQuery Data Transfer Service transfers are included in BigQuery's quotas on load jobs. It's important to consider how many transfers you enable in each project to prevent transfers and other load jobs from producing quotaExceeded errors.
You can use the following equation to estimate how many load jobs are required by your transfers:
Number of daily jobs = Number of transfers x Number of tables x Schedule frequency x Refresh window
Where:
- Number of transfers is the number of transfer configurations you enable in your project.
- Number of tables is the number of tables created by each specific transfer type. The number of tables varies by transfer type:
  - Campaign Manager transfers create approximately 25 tables.
  - Google Ads transfers create approximately 60 tables.
  - Google Ad Manager transfers create approximately 40 tables.
  - Google Play transfers create approximately 25 tables.
  - Search Ads 360 transfers create approximately 50 tables.
  - YouTube transfers create approximately 50 tables.
- Schedule frequency describes how often the transfer runs. Transfer run schedules are provided for each transfer type.
- Refresh window is the number of days to include in the data transfer. If you enter 1, there is no daily backfill.
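As a worked example of the formula above (the numbers are illustrative only), suppose you enable two Google Ads transfer configurations that each run daily with a 3-day refresh window:

```python
# Illustrative estimate only; plug in your own transfer settings.
number_of_transfers = 2     # Google Ads transfer configurations
number_of_tables = 60       # approximate tables per Google Ads transfer
schedule_frequency = 1      # runs per day
refresh_window = 3          # days of backfill per run

daily_load_jobs = (number_of_transfers * number_of_tables
                   * schedule_frequency * refresh_window)
print(daily_load_jobs)  # 360 load jobs per day toward the 100,000-job limit
```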
Copy jobs
The following limits apply to BigQuery jobs for copying tables, including jobs that create a copy, clone, or snapshot of a standard table, table clone, or table snapshot. The limits apply to jobs created by using the Google Cloud console, the bq command-line tool, or the jobs.insert method that specifies the copy field in the job configuration. Copy jobs count toward these limits whether they succeed or fail.
| Limit | Default | Notes |
|---|---|---|
| Copy jobs per destination table per day | See Table operations per day. | |
| Copy jobs per day | 100,000 jobs | Your project can run up to 100,000 copy jobs per day. |
| Cross-region copy jobs per destination table per day | 100 jobs | Your project can run up to 100 cross-region copy jobs for a destination table per day. |
| Cross-region copy jobs per day | 2,000 jobs | Your project can run up to 2,000 cross-region copy jobs per day. |
| Number of source tables to copy | 1,200 source tables | You can copy from up to 1,200 source tables per copy job. |
For information on viewing your current copy job usage, see Copy jobs - View current quota usage. For information on troubleshooting copy jobs, see Maximum number of copy jobs per day per project quota errors.
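For reference, a copy job submitted through the Python client library looks like the following sketch; each such call is one copy job against the per-day and per-destination-table limits above. Table names are placeholders.

```python
# Sketch: copying one table to another with a copy job.
from google.cloud import bigquery

client = bigquery.Client()

copy_job = client.copy_table(
    "my-project.my_dataset.source_table",       # placeholder source
    "my-project.my_dataset.destination_table",  # placeholder destination
)
copy_job.result()  # Counts toward copy-job limits whether it succeeds or fails.
```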
The following limits apply to copying datasets:
| Limit | Default | Notes |
|---|---|---|
| Maximum number of tables in the source dataset | 25,000 tables | A source dataset can have up to 25,000 tables. |
| Maximum number of tables that can be copied per run to a destination dataset in the same region | 20,000 tables | Your project can copy a maximum of 20,000 tables per run to a destination dataset within the same region. If a source dataset contains more than 20,000 tables, the BigQuery Data Transfer Service schedules sequential runs, each copying up to 20,000 tables, until all tables are copied. These runs are separated by a default interval of 24 hours, which users can customize down to a minimum of 12 hours. |
| Maximum number of tables that can be copied per run to a destination dataset in a different region | 1,000 tables | Your project can copy a maximum of 1,000 tables per run to a destination dataset in a different region. If a source dataset contains more than 1,000 tables, the BigQuery Data Transfer Service schedules sequential runs, each copying up to 1,000 tables, until all tables are copied. These runs are separated by a default interval of 24 hours, which users can customize down to a minimum of 12 hours. |
Reservations
The following quotas apply to reservations:
| Quota | Default | Notes |
|---|---|---|
| Total number of slots for the EU region | 5,000 slots | The maximum number of BigQuery slots you can purchase in the EU multi-region by using the Google Cloud console. View quotas in Google Cloud console |
| Total number of slots for the US region | 10,000 slots | The maximum number of BigQuery slots you can purchase in the US multi-region by using the Google Cloud console. View quotas in Google Cloud console |
| Total number of slots for the us-east1 region | 4,000 slots | The maximum number of BigQuery slots that you can purchase in the listed region by using the Google Cloud console. View quotas in Google Cloud console |
| Total number of slots for the following regions: | 2,000 slots | The maximum number of BigQuery slots that you can purchase in each of the listed regions by using the Google Cloud console. View quotas in Google Cloud console |
| Total number of slots for the following regions: | 1,000 slots | The maximum number of BigQuery slots you can purchase in each of the listed regions by using the Google Cloud console. View quotas in Google Cloud console |
| Total number of slots for BigQuery Omni regions | 100 slots | The maximum number of BigQuery slots you can purchase in the BigQuery Omni regions by using the Google Cloud console. View quotas in Google Cloud console |
| Total number of slots for all other regions | 500 slots | The maximum number of BigQuery slots you can purchase in each other region by using the Google Cloud console. View quotas in Google Cloud console |
The following limits apply to reservations:
| Limit | Value | Notes |
|---|---|---|
| Number of administration projects for slot reservations | 10 projects per organization | The maximum number of projects within an organization that can contain a reservation or an active commitment for slots for a given location / region. |
| Maximum number of standard edition reservations | 10 reservations per project | The maximum number of standard edition reservations per administration project within an organization for a given location / region. |
| Maximum number of Enterprise or Enterprise Plus edition reservations | 200 reservations per project | The maximum number of Enterprise or Enterprise Plus edition reservations per administration project within an organization for a given location / region. |
| Maximum number of slots in a reservation that is associated with a reservation assignment with a CONTINUOUS job type | 500 slots | When you want to create a reservation assignment that has a CONTINUOUS job type, the associated reservation can't have more than 500 slots. |
Datasets
The following limits apply to BigQuery datasets:
| Limit | Default | Notes |
|---|---|---|
| Maximum number of datasets | Unlimited | There is no limit on the number of datasets that a project can have. |
| Number of tables per dataset | Unlimited | When you use an API call, enumeration performance slows as you approach 50,000 tables in a dataset. The Google Cloud console can display up to 50,000 tables for each dataset. |
| Number of authorized resources in a dataset's access control list | 2,500 resources | A dataset's access control list can have up to 2,500 total authorized resources, including authorized views, authorized datasets, and authorized functions. If you exceed this limit due to a large number of authorized views, consider grouping the views into authorized datasets. As a best practice, group related views into authorized datasets when you design new BigQuery architectures, especially multi-tenant architectures. |
| Number of dataset update operations per dataset per 10 seconds | 5 operations | Your project can make up to five dataset update operations every 10 seconds. The dataset update limit includes all metadata update operations performed by the following: |
| Maximum length of a dataset description | 16,384 characters | When you add a description to a dataset, the text can be at most 16,384 characters. |
Tables
All tables
The following limits apply to all BigQuery tables.
Note: Quotas and limits are associated with table names. Therefore, when you truncate the table, or drop the table and then recreate it, the quota/limit doesn't reset, because the table name hasn't changed.
| Limit | Default | Notes |
|---|---|---|
| Maximum length of a column name | 300 characters | Your column name can be at most 300 characters. |
| Maximum length of a column description | 1,024 characters | When you add a description to a column, the text can be at most 1,024 characters. |
| Maximum depth of nested records | 15 levels | Columns of type RECORD can contain nested RECORD types, also called child records. The maximum nested depth limit is 15 levels. This limit is independent of whether the records are scalar or array-based (repeated). |
| Maximum length of a table description | 16,384 characters | When you add a description to a table, the text can be at most 16,384 characters. |
For troubleshooting information related to table quotas or limits, see the BigQuery Troubleshooting page.
Standard tables
The following limits apply to BigQuery standard (built-in) tables:
| Limit | Default | Notes |
|---|---|---|
| Table modifications per day | 1,500 modifications | Your project can make up to 1,500 table modifications per table per day. A load job, copy job, or query job that appends or overwrites table data counts as one modification to the table. This limit cannot be changed. DML statements are excluded and don't count toward the number of table modifications per day. Streaming data is excluded and doesn't count toward the number of table modifications per day. |
| Maximum rate of table metadata update operations per table | 5 operations per 10 seconds | Your project can make up to five table metadata update operations per 10 seconds per table. This limit applies to all table metadata update operations, performed by the following: DELETE, INSERT, MERGE, TRUNCATE TABLE, or UPDATE statements that write data to a table. Note that while DML statements count toward this limit, they are not subject to it if it is reached; DML operations have dedicated rate limits. If you exceed this limit, you get an error message. To identify the operations that count toward this limit, you can inspect your logs. Refer to Troubleshoot quota errors for guidance on diagnosing and resolving this error. |
| Maximum number of columns per table | 10,000 columns | Each table, query result, or view definition can have up to 10,000 columns. This includes nested and repeated columns. |
External tables
The following limits apply to BigQuery tables with data stored on Cloud Storage in Parquet, ORC, Avro, CSV, or JSON format:
| Limit | Default | Notes |
|---|---|---|
| Maximum number of source URIs per external table | 10,000 URIs | Each external table can have up to 10,000 source URIs. |
| Maximum number of files per external table | 10,000,000 files | An external table can have up to 10 million files, including all files matching all wildcard URIs. |
| Maximum size of stored data on Cloud Storage per external table | 600 TB | An external table can have up to 600 terabytes across all input files. This limit applies to the file sizes as stored on Cloud Storage; this size is not the same as the size used in the query pricing formula. For externally partitioned tables, the limit is applied after partition pruning. |
| Maximum number of files in the source Cloud Storage bucket | Approximately 60,000,000 files | An external table can reference a Cloud Storage bucket containing up to approximately 60,000,000 files. For externally partitioned tables, this limit is applied before partition pruning. |
Partitioned tables
The following limits apply to BigQuery partitioned tables.
Note: These limits don't apply to Hive-partitioned external tables. Partition limits apply to the combined total of all load jobs, copy jobs, and query jobs that append to or overwrite a destination partition.
A single job can affect multiple partitions. For example, query jobs and loadjobs can write to multiple partitions.
BigQuery uses the number of partitions affected by a job when determining how much of the limit the job consumes. Streaming inserts do not affect this limit.
For information about strategies to stay within the limits for partitioned tables, see Troubleshooting quota errors.
| Limit | Default | Notes |
|---|---|---|
| Number of partitions per partitioned table | 10,000 partitions | Each partitioned table can have up to 10,000 partitions. If you exceed this limit, consider using clustering in addition to, or instead of, partitioning. |
| Number of partitions modified by a single job | 4,000 partitions | Each job operation (query or load) can affect up to 4,000 partitions. BigQuery rejects any query or load job that attempts to modify more than 4,000 partitions. |
| Number of partition modifications during ingestion-time per partitioned table per day | 11,000 modifications | Your project can make up to 11,000 partition modifications per day. A partition modification is when you append, update, delete, or truncate data in a partitioned table. A partition modification is counted for each type of data modification that you make. For example, deleting one row would count as one partition modification, just as deleting an entire partition would also count as one modification. If you delete a row from one partition and then insert it into another partition, this would count as two partition modifications. Modifications using DML statements or the streaming API don't count toward the number of partition modifications per day. |
| Number of partition modifications per column-partitioned table per day | 30,000 modifications | Your project can make up to 30,000 partition modifications per day for a column-partitioned table. DML statements do not count toward the number of partition modifications per day. Streaming data does not count toward the number of partition modifications per day. |
| Maximum rate of table metadata update operations per partitioned table | 50 modifications per 10 seconds | Your project can make up to 50 modifications per partitioned table every 10 seconds. This limit applies to all partitioned table metadata update operations, performed by the following: DELETE, INSERT, MERGE, TRUNCATE TABLE, or UPDATE statements that write data to a table. If you exceed this limit, you get an error message. To identify the operations that count toward this limit, you can inspect your logs. |
| Number of possible ranges for range partitioning | 10,000 ranges | A range-partitioned table can have up to 10,000 possible ranges. This limit applies to the partition specification when you create the table. After you create the table, the limit also applies to the actual number of partitions. |
Table clones
The following limits apply to BigQuery table clones:
| Limit | Default | Notes |
|---|---|---|
| Maximum number of clones and snapshots in a chain | 3 table clones or snapshots | Clones and snapshots in combination are limited to a depth of 3. When you clone or snapshot a base table, you can clone or snapshot the result only two more times; attempting to clone or snapshot the result a third time results in an error. For example, you can create clone A of the base table, create snapshot B of clone A, and create clone C of snapshot B. To make additional duplicates of the third-level clone or snapshot, use a copy operation instead. |
| Maximum number of clones and snapshots for a base table | 1,000 table clones or snapshots | You can have no more than 1,000 existing clones and snapshots combined of a given base table. For example, if you have 600 snapshots and 400 clones, you reach the limit. |
Table snapshots
The following limits apply to BigQuery table snapshots:
| Limit | Default | Notes |
|---|---|---|
| Maximum number of concurrent table snapshot jobs | 100 jobs | Your project can run up to 100 concurrent table snapshot jobs. |
| Maximum number of table snapshot jobs per day | 50,000 jobs | Your project can run up to 50,000 table snapshot jobs per day. |
| Maximum number of table snapshot jobs per table per day | 50 jobs | Your project can run up to 50 table snapshot jobs per table per day. |
| Maximum number of metadata updates per table snapshot per 10 seconds. | 5 updates | Your project can update a table snapshot's metadata up to five times every 10 seconds. |
| Maximum number of clones and snapshots in a chain | 3 table clones or snapshots | Clones and snapshots in combination are limited to a depth of 3. When you clone or snapshot a base table, you can clone or snapshot the result only two more times; attempting to clone or snapshot the result a third time results in an error. For example, you can create clone A of the base table, create snapshot B of clone A, and create clone C of snapshot B. To make additional duplicates of the third-level clone or snapshot, use a copy operation instead. |
| Maximum number of clones and snapshots for a base table | 1,000 table clones or snapshots | You can have no more than 1,000 existing clones and snapshots combined of a given base table. For example, if you have 600 snapshots and 400 clones, you reach the limit. |
Views
The following quotas and limits apply to views and materialized views.
Logical views
The following limits apply to BigQuery standard views:
| Limit | Default | Notes |
|---|---|---|
| Maximum number of nested view levels | 16 levels | BigQuery supports up to 16 levels of nested views. Creating views up to this limit is possible, but querying is limited to 15 levels. If the limit is exceeded, BigQuery returns an INVALID_INPUT error. |
| Maximum length of a GoogleSQL query used to define a view | 256 K characters | A single GoogleSQL query that defines a view can be up to 256 K characters long. This limit applies to a single query and does not include the length of the views referenced in the query. |
| Maximum number of authorized views per dataset | See Datasets. | |
| Maximum length of a view description | 16,384 characters | When you add a description to a view, the text can be at most 16,384 characters. |
Materialized views
The following limits apply to BigQuery materialized views:
| Limit | Default | Notes |
|---|---|---|
| Base table references (same project) | 100 materialized views | Each base table can be referenced by up to 100 materialized views from the same project. |
| Base table references (entire organization) | 500 materialized views | Each base table can be referenced by up to 500 materialized views from the entire organization. |
| Maximum number of authorized views per dataset | See Datasets. | |
| Maximum length of a materialized view description | 16,384 characters | When you add a description to a materialized view, the text can be at most 16,384 characters. |
| Materialized view refresh job execution-time limit | 12 hours | A materialized view refresh job can run for up to 12 hours before it fails. |
Search indexes
The following limits apply to BigQuery search indexes:
| Limit | Default | Notes |
|---|---|---|
| Number of CREATE INDEX DDL statements per project per region per day | 500 operations | Your project can issue up to 500 CREATE INDEX DDL operations every day within a region. |
| Number of search index DDL statements per table per day | 20 operations | Your project can issue up to 20 CREATE INDEX or DROP INDEX DDL operations per table per day. |
| Maximum total size of table data per organization allowed for search index creation that does not run in a reservation | 100 TB in multi-regions; 20 TB in all other regions | You can create a search index for a table if the overall size of tables with indexes in your organization is below your region's limit: 100 TB for the US and EU multi-regions, and 20 TB for all other regions. If your index-management jobs run in your own reservation, then this limit doesn't apply. |
| Number of columns indexed with column granularity per table | 63 columns per table | A table can have up to 63 columns with index_granularity set to COLUMN. Columns indexed with COLUMN granularity from setting the default_index_column_granularity option count towards this limit. There is no limit on the number of columns that are indexed with GLOBAL granularity. For more information, see index with column granularity. |
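For context on what these DDL limits count, the following sketch issues a single CREATE SEARCH INDEX statement through the Python client. The project, dataset, table, and index names are placeholders, and each such statement counts toward the per-table and per-project DDL limits above.

```python
# Sketch: one CREATE SEARCH INDEX DDL statement (counts toward the limits above).
from google.cloud import bigquery

client = bigquery.Client()
ddl = """
    CREATE SEARCH INDEX my_index
    ON `my-project.my_dataset.my_table`(ALL COLUMNS)
"""
client.query(ddl).result()
```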
Vector indexes
The following limits apply to BigQuery vector indexes:
| Limit | Default | Notes |
|---|---|---|
| Base table minimum number of rows | 5,000 rows | A table must have at least 5,000 rows to create a vector index. |
| Base table maximum number of rows for index type IVF | 10,000,000,000 rows | A table can have at most 10,000,000,000 rows to create an IVF vector index. |
| Base table maximum number of rows for index type TREE_AH | 200,000,000 rows | A table can have at most 200,000,000 rows to create a TREE_AH vector index. |
| Base table maximum number of rows for partitioned index type TREE_AH | 10,000,000,000 rows in total; 200,000,000 rows for each partition | A table can have at most 10,000,000,000 rows, and each partition can have at most 200,000,000 rows, to create a TREE_AH partitioned vector index. |
| Maximum size of the array in the indexed column | 1,600 elements | The column to index can have at most 1,600 elements in the array. |
| Minimum table size for vector index population | 10 MB | If you create a vector index on a table that is under 10 MB, then the index is not populated. Similarly, if you delete data from a vector-indexed table such that the table size is under 10 MB, then the vector index is temporarily disabled. This happens regardless of whether you use your own reservation for your index-management jobs. Once a vector-indexed table's size again exceeds 10 MB, its index is populated automatically. |
| Number of CREATE VECTOR INDEX DDL statements per project per region per day | 500 operations | For each project, you can issue up to 500 CREATE VECTOR INDEX operations per day for each region. |
| Number of vector index DDL statements per table per day | 10 operations | You can issue up to 10 CREATE VECTOR INDEX or DROP VECTOR INDEX operations per table per day. |
| Maximum total size of table data per organization allowed for vector index creation that does not run in a reservation | 6 TB | You can create a vector index for a table if the total size of tables with indexes in your organization is under 6 TB. If your index-management jobs run in your own reservation, then this limit doesn't apply. |
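Similarly, each vector index DDL statement is one operation against the limits above. The sketch below is illustrative only: the indexed column is assumed to hold embeddings as an ARRAY&lt;FLOAT64&gt;, the names are placeholders, and the OPTIONS shown are a minimal assumed configuration (check the CREATE VECTOR INDEX DDL reference for the full set).

```python
# Sketch: one CREATE VECTOR INDEX DDL statement; the options shown are a
# minimal, assumed configuration -- consult the DDL reference for details.
from google.cloud import bigquery

client = bigquery.Client()
ddl = """
    CREATE VECTOR INDEX my_vector_index
    ON `my-project.my_dataset.embeddings_table`(embedding)
    OPTIONS (index_type = 'IVF', distance_type = 'COSINE')
"""
client.query(ddl).result()
```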
Routines
The following quotas and limits apply to routines.
User-defined functions
The following limits apply to both temporary and persistent user-defined functions (UDFs) in GoogleSQL queries.
Note: UDFs and the tables they reference count toward the limit on the number of resources referenced in a query.
| Limit | Default | Notes |
|---|---|---|
| Maximum output per row | 5 MB | The maximum amount of data that your JavaScript UDF can output when processing a single row is approximately 5 MB. |
| Maximum concurrent legacy SQL queries with JavaScript UDFs | 6 queries | Your project can have up to six concurrent legacy SQL queries that contain UDFs in JavaScript. This limit includes both interactive and batch queries. This limit does not apply to GoogleSQL queries. |
| Maximum JavaScript UDF resources per query | 50 resources | A query job can have up to 50 JavaScript UDF resources, such as inline code blobs or external files. |
| Maximum size of inline code blob | 32 KB | An inline code blob in a UDF can be up to 32 KB in size. |
| Maximum size of each external code resource | 1 MB | The maximum size of each JavaScript code resource is one MB. |
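To make the JavaScript UDF limits above concrete, the following sketch creates a persistent JavaScript UDF whose body is an inline code blob (subject to the 32 KB inline limit); larger code would instead be referenced as external library files. Dataset and function names are placeholders.

```python
# Sketch: a persistent JavaScript UDF with a small inline code blob.
from google.cloud import bigquery

client = bigquery.Client()
ddl = """
    CREATE OR REPLACE FUNCTION `my-project.my_dataset.multiply`(x FLOAT64, y FLOAT64)
    RETURNS FLOAT64
    LANGUAGE js
    AS r'''
      return x * y;   // inline code blob: limited to 32 KB
    '''
"""
client.query(ddl).result()
```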
The following limits apply to persistent UDFs:
| Limit | Default | Notes |
|---|---|---|
| Maximum length of a UDF name | 256 characters | A UDF name can be up to 256 characters long. |
| Maximum number of arguments | 256 arguments | A UDF can have up to 256 arguments. |
| Maximum length of an argument name | 128 characters | A UDF argument name can be up to 128 characters long. |
| Maximum depth of a UDF reference chain | 16 references | A UDF reference chain can be up to 16 references deep. |
| Maximum depth of a STRUCT type argument or output | 15 levels | A STRUCT type UDF argument or output can be up to 15 levels deep. |
| Maximum number of fields in STRUCT type arguments or output per UDF | 1,024 fields | A UDF can have up to 1,024 fields in STRUCT type arguments and output. |
| Maximum number of JavaScript libraries in a CREATE FUNCTION statement | 50 libraries | A CREATE FUNCTION statement can have up to 50 JavaScript libraries. |
| Maximum length of included JavaScript library paths | 5,000 characters | The path for a JavaScript library included in a UDF can be up to 5,000 characters long. |
| Maximum update rate per UDF per 10 seconds | 5 updates | Your project can update a UDF up to five times every 10 seconds. |
| Maximum number of authorized UDFs per dataset | See Datasets. | |
Remote functions
The following limits apply to remote functions in BigQuery.
For troubleshooting information, see Maximum number of concurrent queries that contain remote functions.
| Limit | Default | Notes |
|---|---|---|
| Maximum number of concurrent queries that contain remote functions | 10 queries | You can run up to ten concurrent queries with remote functions per project. |
| Maximum input size | 5 MB | The maximum total size of all input arguments from a single row is 5 MB. |
| HTTP response size limit (Cloud Run functions 1st gen) | 10 MB | HTTP response body from your Cloud Run function 1st gen is up to 10 MB. Exceeding this value causes query failures. |
| HTTP response size limit (Cloud Run functions 2nd gen or Cloud Run) | 15 MB | HTTP response body from your Cloud Run function 2nd gen or Cloud Run is up to 15 MB. Exceeding this value causes query failures. |
| Max HTTP invocation time limit (Cloud Run functions 1st gen) | 9 minutes | You can set your own time limit for your Cloud Run function 1st gen for an individual HTTP invocation, but the max time limit is 9 minutes. Exceeding the time limit set for your Cloud Run function 1st gen can cause HTTP invocation failures and query failure. |
| HTTP invocation time limit (Cloud Run functions 2nd gen or Cloud Run) | 20 minutes | The time limit for an individual HTTP invocation to your Cloud Run function 2nd gen or Cloud Run. Exceeding this value can cause HTTP invocation failures and query failure. |
| Maximum number of HTTP invocation retry attempts | 20 | The maximum number of retry attempts for an individual HTTP invocation to your Cloud Run function 1st gen, 2nd gen, or Cloud Run. Exceeding this value can cause HTTP invocation failures and query failure. |
Table functions
The following limits apply to BigQuery table functions:
| Limit | Default | Notes |
|---|---|---|
| Maximum length of a table function name | 256 characters | The name of a table function can be up to 256 characters in length. |
| Maximum length of an argument name | 128 characters | The name of a table function argument can be up to 128 characters in length. |
| Maximum number of arguments | 256 arguments | A table function can have up to 256 arguments. |
| Maximum depth of a table function reference chain | 16 references | A table function reference chain can be up to 16 references deep. |
| Maximum depth of argument or output of type STRUCT | 15 levels | A STRUCT argument for a table function can be up to 15 levels deep. Similarly, a STRUCT record in a table function's output can be up to 15 levels deep. |
| Maximum number of fields in argument or return table of type STRUCT per table function | 1,024 fields | A STRUCT argument for a table function can have up to 1,024 fields. Similarly, a STRUCT record in a table function's output can have up to 1,024 fields. |
| Maximum number of columns in return table | 1,024 columns | A table returned by a table function can have up to 1,024 columns. |
| Maximum length of return table column names | 128 characters | Column names in returned tables can be up to 128 characters long. |
| Maximum number of updates per table function per 10 seconds | 5 updates | Your project can update a table function up to five times every 10 seconds. |
Stored procedures for Apache Spark
The following limits apply for BigQuery stored procedures for Apache Spark:
| Limit | Default | Notes |
|---|---|---|
| Maximum number of concurrent stored procedure queries | 50 | You can run up to 50 concurrent stored procedure queries for each project. |
| Maximum number of in-use CPUs | 12,000 | You can use up to 12,000 CPUs for each project. Queries that have already been processed don't consume this limit. You can use up to 2,400 CPUs for each location for each project, except in the following locations: In these locations, you can use up to 500 CPUs for each location for each project. If you run concurrent queries in a multi-region location and a single region location that is in the same geographic area, then your queries might consume the same concurrent CPU quota. |
| Maximum total size of in-use standard persistent disks | 204.8 TB | You can use up to 204.8 TB standard persistent disks for each location for each project. Queries that have already been processed don't consume this limit. If you run concurrent queries in a multi-region location and a single region location that is in the same geographic area, then your queries might consume the same standard persistent disk quota. |
Notebooks
All Dataform quotas and limits and Colab Enterprise quotas and limits apply to notebooks in BigQuery. The following limits also apply:
| Limit | Default | Notes |
|---|---|---|
| Maximum notebook size | 20 MB | A notebook's size is the total of its content, metadata, and encoding overhead. You can view the size of notebook content by expanding the notebook header, clicking View, and then clicking Notebook info. |
| Maximum number of requests per second to Dataform | 100 | Notebooks are created and managed through Dataform. Any action that creates or modifies a notebook counts against this quota. This quota is shared with saved queries. For example, if you make 50 changes to notebooks and 50 changes to saved queries within 1 second, you reach the quota. |
Saved queries
All Dataform quotas and limits apply to saved queries. The following limits also apply:
| Limit | Default | Notes |
|---|---|---|
| Maximum saved query size | 10 MB | |
| Maximum number of requests per second to Dataform | 100 | Saved queries are created and managed through Dataform. Any action that creates or modifies a saved query counts against this quota. This quota is shared with notebooks. For example, if you make 50 changes to notebooks and 50 changes to saved queries within 1 second, you reach the quota. |
Data manipulation language
The following limits apply for BigQuery data manipulation language (DML) statements:
| Limit | Default | Notes |
|---|---|---|
| DML statements per day | Unlimited | The number of DML statements your project can run per day is unlimited. DML statements do not count toward the number of table modifications per day or the number of partitioned table modifications per day for partitioned tables. DML statements have the following limitations to be aware of. |
| Concurrent INSERT DML statements per table per day | 1,500 statements | The first 1,500 INSERT statements run immediately after they are submitted. After this limit is reached, the concurrency of INSERT statements that write to a table is limited to 10. Additional INSERT statements are added to a PENDING queue. Up to 100 INSERT statements can be queued against a table at any given time. When an INSERT statement completes, the next INSERT statement is removed from the queue and run. If you must run DML INSERT statements more frequently, consider streaming data to your table using the Storage Write API. A batching sketch follows this table. |
| Concurrent mutating DML statements per table | 2 statements | BigQuery runs up to two concurrent mutating DML statements (UPDATE,DELETE, andMERGE) for each table. Additional mutating DML statements for a table are queued. |
| Queued mutating DML statements per table | 20 statements | A table can have up to 20 mutating DML statements in the queue waiting to run. If you submit additional mutating DML statements for the table, then those statements fail. |
| Maximum time in queue for DML statement | 7 hours | An interactive priority DML statement can wait in the queue for up to seven hours. If the statement has not run after seven hours, it fails. |
| Maximum rate of DML statements for each table | 25 statements every 10 seconds | Your project can run up to 25 DML statements every 10 seconds for each table. Both INSERT and mutating DML statements contribute to this limit. |
For more information about mutating DML statements, see INSERT DML concurrency and UPDATE, DELETE, MERGE DML concurrency.
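One common way to stay under the per-table DML rate limit and the concurrent INSERT queue is to batch many rows into a single INSERT statement rather than issuing one statement per row. The sketch below is illustrative only (placeholder table and values); for sustained high-frequency ingestion, the table above recommends the Storage Write API instead.

```python
# Sketch: batching rows into one INSERT DML statement so that many rows
# consume a single statement against the per-table DML rate limits.
from google.cloud import bigquery

client = bigquery.Client()

rows = [(1, "alpha"), (2, "beta"), (3, "gamma")]  # placeholder data
# For untrusted input, prefer query parameters over string formatting.
values = ", ".join(f"({i}, '{name}')" for i, name in rows)

sql = f"""
    INSERT INTO `my-project.my_dataset.items` (id, name)
    VALUES {values}
"""
client.query(sql).result()
```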
Multi-statement queries
The following limits apply to multi-statement queries in BigQuery.
| Limit | Default | Notes |
|---|---|---|
| Maximum number of concurrent multi-statement queries | 1,000 multi-statement queries | Your project can run up to 1,000 concurrent multi-statement queries. |
| Cumulative time limit | 24 hours | The cumulative time limit for a multi-statement query is 24 hours. |
| Statement time limit | 6 hours | The time limit for an individual statement within a multi-statement query is 6 hours. |
Recursive CTEs in queries
The following limits apply to recursive common table expressions (CTEs) in BigQuery.
| Limit | Default | Notes |
|---|---|---|
| Iteration limit | 500 iterations | The recursive CTE can execute this number of iterations. If this limit is exceeded, an error is produced. To work around iteration limits, see Troubleshoot iteration limit errors. |
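As a minimal illustration of what counts as an iteration, the recursive query in the following sketch runs its recursive term roughly ten times, far below the 500-iteration limit. The query is self-contained and uses no tables.

```python
# Sketch: a recursive CTE whose recursive term runs about 10 iterations,
# well under the 500-iteration limit.
from google.cloud import bigquery

client = bigquery.Client()
sql = """
    WITH RECURSIVE seq AS (
      SELECT 1 AS n
      UNION ALL
      SELECT n + 1 FROM seq WHERE n < 10
    )
    SELECT n FROM seq ORDER BY n
"""
for row in client.query(sql).result():
    print(row.n)
```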
Row-level security
The following limits apply for BigQuery row-level access policies:
| Limit | Default | Notes |
|---|---|---|
| Maximum number of row-access policies per table | 400 policies | A table can have up to 400 row-access policies. |
| Maximum number of row-access policies per query | 6,000 policies | A query can access up to a total of 6,000 row-access policies. |
| Maximum number of CREATE / DROP DDL statements per policy per 10 seconds | 5 statements | Your project can make up to five CREATE or DROP statements per row-access policy resource every 10 seconds. |
| DROP ALL ROW ACCESS POLICIES statements per table per 10 seconds | 5 statements | Your project can make up to five DROP ALL ROW ACCESS POLICIES statements per table every 10 seconds. |
Data policies
The following limits apply for column-level dynamic data masking:
| Limit | Default | Notes |
|---|---|---|
| Maximum number of data policies per policy tag | 8 policies per policy tag | Up to eight data policies per policy tag. One of these policies can be used for column-level access controls. Duplicate masking expressions are not supported. |
BigQuery ML
The following limits apply to BigQuery ML.
Query jobs
All query job quotas and limits apply to GoogleSQL query jobs that use BigQuery ML statements and functions.
CREATE MODEL statements
The following limits apply to CREATE MODEL jobs:
| Limit | Default | Notes |
|---|---|---|
| CREATE MODEL statement queries per 48 hours for each project | 20,000 statement queries | Some models are trained by using Vertex AI services, which have their own resource and quota management. |
| Execution-time limit | 24 hours or 48 hours | The CREATE MODEL job timeout defaults to 24 hours, with the exception of time series, AutoML, and hyperparameter tuning jobs, which time out at 48 hours. |
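Each statement like the one in the following sketch counts as one CREATE MODEL statement toward the 20,000-per-48-hours limit and is subject to the execution-time limit above. The model type, dataset, training table, and column names are placeholders.

```python
# Sketch: one CREATE MODEL statement (placeholder dataset, table, and columns).
from google.cloud import bigquery

client = bigquery.Client()
ddl = """
    CREATE OR REPLACE MODEL `my-project.my_dataset.churn_model`
    OPTIONS (model_type = 'logistic_reg', input_label_cols = ['churned']) AS
    SELECT tenure_months, monthly_spend, churned
    FROM `my-project.my_dataset.training_data`
"""
client.query(ddl).result()  # Runs the training job; this can take a while.
```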
Generative AI functions
The following limits apply to functions that use Vertex AI large language models (LLMs).
Requests per minute limits
The following limits apply to Vertex AI models that use a requests per minute limit:
| Function | Model | Region | Requests per minute | Rows per job | Number of concurrently running jobs |
|---|---|---|---|---|---|
| AI.GENERATE_TEXT, ML.GENERATE_TEXT, AI.GENERATE_TABLE, AI.GENERATE, AI.GENERATE_BOOL, AI.GENERATE_DOUBLE, AI.GENERATE_INT | gemini-2.0-flash-lite-001 | US and EU multi-regions; single regions as documented for gemini-2.0-flash-lite-001 in Google model endpoint locations | No set quota. Quota determined by dynamic shared quota (DSQ)¹ and Provisioned Throughput² | N/A for Provisioned Throughput; 10,500,000 for DSQ, for a call with an average of 500 input tokens and 50 output tokens | 5 |
| | gemini-2.0-flash-001 | US and EU multi-regions; single regions as documented for gemini-2.0-flash-001 in Google model endpoint locations | No set quota. Quota determined by dynamic shared quota (DSQ)¹ and Provisioned Throughput² | N/A for Provisioned Throughput; 10,200,000 for DSQ, for a call with an average of 500 input tokens and 50 output tokens | 5 |
| | gemini-2.5-flash | US and EU multi-regions; single regions as documented for gemini-2.5-flash in Google model endpoint locations | No set quota. Quota determined by dynamic shared quota (DSQ)¹ and Provisioned Throughput² | N/A for Provisioned Throughput; 9,300,000 for DSQ, for a call with an average of 500 input tokens and 50 output tokens | 5 |
| | gemini-2.5-pro | US and EU multi-regions; single regions as documented for gemini-2.5-pro in Google model endpoint locations | No set quota. Quota determined by dynamic shared quota (DSQ)¹ and Provisioned Throughput² | N/A for Provisioned Throughput; 7,600,000 for DSQ, for a call with an average of 500 input tokens and 50 output tokens | 5 |
| AI.IF, AI.SCORE, AI.CLASSIFY | Various gemini-2.5-* models | US and EU multi-regions; any single region supported for one of the gemini-2.5-* models in Google model endpoint locations | No set quota. Quota determined by dynamic shared quota (DSQ)¹ | 10,000,000 for a call with an average of 500 tokens in each input row and 50 output tokens | 5 |
| AI.GENERATE_TEXT, ML.GENERATE_TEXT | Anthropic Claude | See Quotas by model and region | See Quotas by model and region | The requests per minute value * 60 * 6 | 5 |
| | Llama | See Llama model region availability and quotas | See Llama model region availability and quotas | The requests per minute value * 60 * 6 | 5 |
| | Mistral AI | See Mistral AI model region availability and quotas | See Mistral AI model region availability and quotas | The requests per minute value * 60 * 6 | 5 |
| AI.GENERATE_EMBEDDING, AI.EMBED, AI.SIMILARITY, AI.SEARCH, VECTOR_SEARCH, ML.GENERATE_EMBEDDING | text-embedding, text-multilingual-embedding | All regions that support remote models | 1,500³,⁴ | 80,000,000 for a call with an average of 50 tokens in each input row; 14,000,000 for a call with an average of 600 tokens in each input row | 5 |
| | multimodalembedding | Supported European single regions | 120³ | 14,000 | 5 |
| | multimodalembedding | Regions other than supported European single regions | 600³ | 25,000 | 5 |
¹ When you use DSQ, there are no predefined quota limits on your usage. Instead, DSQ provides access to a large shared pool of resources, which are dynamically allocated based on real-time availability of resources and the customer demand for the given model. When more customers are active, each customer gets less throughput. Similarly, when fewer customers are active, each customer might get higher throughput.
² Provisioned Throughput is a fixed-cost, fixed-term subscription available in several term lengths. Provisioned Throughput lets you reserve throughput for supported generative AI models on Vertex AI.
³ To increase the quota, request a QPM quota adjustment in Vertex AI. Allow 30 minutes for the increased quota value to propagate.
⁴ You can increase the quota for Vertex AI text-embedding and text-multilingual-embedding models to 10,000 RPM without manual approval. This results in increased throughput of 500,000,000 rows per job or more, based on a call with an average of 50 tokens in each input row.
For more information about quota for Vertex AI LLMs, see Generative AI on Vertex AI quota limits.
Tokens per minute limits
The following limits apply to Vertex AI models that use a tokens per minute limit:
| Function | Tokens per minute | Rows per job | Number of concurrently running jobs |
|---|---|---|---|
| AI.GENERATE_EMBEDDING or ML.GENERATE_EMBEDDING when using a remote model over a gemini-embedding-001 model | 10,000,000 | 12,000,000, for a call with an average of 300 tokens per row | 5 |
Cloud AI service functions
The following limits apply to functions that use Cloud AI services:
| Function | Requests per minute | Rows per job | Number of concurrently running jobs |
|---|---|---|---|
| ML.PROCESS_DOCUMENT with documents averaging fifty pages | 600 | 100,000 (based on an average of 50 pages in each input document) | 5 |
| ML.TRANSCRIBE | 200 | 10,000 (based on an average length of 1 minute for each input audio file) | 5 |
| ML.ANNOTATE_IMAGE | 1,800 | 648,000 | 5 |
| ML.TRANSLATE | 6,000 | 2,160,000 | 5 |
| ML.UNDERSTAND_TEXT | 600 | 21,600 | 5 |
For more information about quota for Cloud AI service APIs, see the following documents:
- Cloud Translation API quota and limits
- Vision API quota and limits
- Natural Language API quota and limits
- Document AI quota and limits
- Speech-to-Text quota and limits
Function quota definitions
The following list describes the quotas that apply to generative AI and Cloud AI service functions:
- Functions that call a Vertex AI model use one Vertex AI quota, which is queries per minute (QPM). In this context, the queries are request calls from the function to the Vertex AI model's API. The QPM quota applies to a base model and all versions, identifiers, and tuned versions of that model. For more information on the Vertex AI model quotas, see Generative AI on Vertex AI quota limits.
- Functions that call a Cloud AI service use the target service's request quotas. Check the given Cloud AI service's quota reference for details.
BigQuery ML uses the following quotas:
Requests per minute. This quota is the limit on the number of request calls per minute that functions can make to the Vertex AI model's or Cloud AI service's API. This limit applies to each project.
Calls to Vertex AI Gemini models have no predefined quota limits on your usage, because Gemini models use dynamic shared quota (DSQ). DSQ provides access to a large shared pool of resources, which are dynamically allocated based on the real-time availability of resources and the customer demand for the given model.
Tokens per minute. This quota is the limit on the number of tokens per minute that functions can send to the Vertex AI model's API. This limit applies to each project.
For functions that call a Vertex AI foundation model, the number of tokens per minute varies depending on the Vertex AI model endpoint, version, and region, as well as your project's reputation. This quota is conceptually the same as the tokens per minute (TPM) quota used by Vertex AI.
Rows per job. This quota is the limit on the number of rows allowed for each query job. This quota represents the highest theoretical number of rows that the system can handle within a 6-hour period. The actual number of processed rows depends on many factors, including the size of the input request to the model, the size of output responses from the model, and the availability of dynamic shared quota (a small sketch of the underlying arithmetic follows these definitions). The following examples show some common scenarios:
- For the gemini-2.0-flash-lite-001 endpoint, the number of rows processable by the AI.GENERATE_TEXT or ML.GENERATE_TEXT function depends on input and output token counts. The service can process approximately 7.6 million rows for calls that have an average input token count of 2,000 and a maximum output token count of 50. This number decreases to about 1 million rows if the average input token count is 10,000 and the maximum output token count is 3,000.
- Similarly, the gemini-2.0-flash-001 endpoint can process 4.4 million rows for calls that have an average input token count of 2,000 and a maximum output token count of 50, but only about 1 million rows for calls with 10,000 input and 3,000 output tokens.
- The ML.PROCESS_DOCUMENT function can process more rows per job for short documents as opposed to long documents.
- The ML.TRANSCRIBE function can process more rows per job for short audio clips as opposed to long audio clips.
Number of concurrently running jobs. This quota is the limit per project on the number of SQL queries that can run at the same time for the given function.
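The rows per job ceiling follows directly from the requests per minute quota and the six-hour job limit described above. The following is a minimal sketch of that arithmetic, assuming one model request per input row; as the questions below explain, the practical limit is lower because a query typically can't use the whole quota.

```python
def theoretical_rows_per_job(requests_per_minute: int) -> int:
    """Upper bound on rows a single query job could send to the model.

    Assumes one request per input row and the full six-hour job window
    (requests per minute * 60 minutes * 6 hours). The observed limit is
    lower because a query typically can't consume the whole quota.
    """
    minutes_per_hour = 60
    max_job_hours = 6
    return requests_per_minute * minutes_per_hour * max_job_hours


# Example: a 1,000 QPM Vertex AI quota caps one job at roughly 360,000 rows.
print(theoretical_rows_per_job(1_000))  # 360000
```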
The following examples show how to interpret quota limitations in typical situations:
I have a quota of 1,000 QPM in Vertex AI, so a query with 100,000 rows should take around 100 minutes. Why is the job running longer?
Job runtimes can vary even for the same input data. In Vertex AI, remote procedure calls (RPCs) have different priorities in order to avoid quota drainage. When there isn't enough quota, RPCs with lower priorities wait and possibly fail if it takes too long to process them.
How should I interpret the rows per job quota?
In BigQuery, a query can execute for up to six hours. The maximum supported number of rows is a function of this timeline and your Vertex AI QPM quota, which ensures that BigQuery can complete query processing within six hours. Because a query typically can't use the whole quota, this is a lower number than your QPM quota multiplied by 360.
What happens if I run a batch inference job on a table with more rows than the rows per job quota, for example 10,000,000 rows?
BigQuery only processes the number of rows specified by the rows per job quota. You are only charged for the successful API calls for that number of rows, instead of the full 10,000,000 rows in your table. For the rest of the rows, BigQuery responds to the request with the error "A retryable error occurred: the maximum size quota per query has reached", which is returned in the status column of the result. You can use this set of SQL scripts or this Dataform package to iterate through inference calls until all rows are successfully processed (a minimal sketch of this iteration follows these examples).
I have many more rows to process than the rows per job quota. Will splitting my rows across multiple queries and running them simultaneously help?
No, because these queries consume the same BigQuery ML requests per minute quota and Vertex AI QPM quota. Even if there are multiple queries that all stay within the rows per job quota and the number of concurrently running jobs quota, the cumulative processing still exhausts the requests per minute quota.
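The iterate-until-processed approach mentioned above can also be scripted directly. The following is a minimal sketch, not the referenced SQL scripts or Dataform package; the dataset, table, model, and column names (including ml_generate_text_status as the status column) are illustrative assumptions, and the results table is assumed to exist with a schema matching the function output.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical names; replace with your own dataset, model, and tables.
SOURCE = "my_dataset.prompts"             # has more rows than the rows per job quota
RESULTS = "my_dataset.generated_results"  # pre-created; accumulates successful rows

# Each pass processes up to the rows per job quota and keeps only rows whose
# status column is empty, that is, rows whose model call succeeded.
INFERENCE_SQL = f"""
INSERT INTO `{RESULTS}`
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `my_dataset.my_remote_model`,
  (
    SELECT id, prompt
    FROM `{SOURCE}`
    WHERE id NOT IN (SELECT id FROM `{RESULTS}`)  -- only rows not yet processed
  )
)
WHERE ml_generate_text_status = ''  -- assumed name of the status column
"""

REMAINING_SQL = f"""
SELECT COUNT(*) AS remaining
FROM `{SOURCE}`
WHERE id NOT IN (SELECT id FROM `{RESULTS}`)
"""

for _ in range(20):  # bounded number of passes as a safety stop
    client.query(INFERENCE_SQL).result()
    remaining = next(iter(client.query(REMAINING_SQL).result())).remaining
    if remaining == 0:
        break
```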
BI Engine
The following limits apply to BigQuery BI Engine.
| Limit | Default | Notes |
|---|---|---|
| Maximum reservation size per project per location (BigQuery BI Engine) | 250 GiB | 250 GiB is the default maximum reservation size per project per location. You can request an increase of the maximum reservation capacity for your projects. Reservation increases are available in most regions and might take 3 or more business days, depending on the size of the increase requested. Contact your Google Cloud representative or Cloud Customer Care for urgent requests. |
| Maximum number of rows per query | 7 billion | Maximum number of rows per query. |
BigQuery sharing (formerly Analytics Hub)
The following limits apply to BigQuery sharing (formerly Analytics Hub):
| Limit | Default | Notes |
|---|---|---|
| Maximum number of data exchanges per project | 500 exchanges | You can create up to 500 data exchanges in a project. |
| Maximum number of listings per data exchange | 1,000 listings | You can create up to 1,000 listings in a data exchange. |
| Maximum number of linked datasets per shared dataset | 1,000 linked datasets | All BigQuery sharing subscribers, combined, can have a maximum of 1,000 linked datasets per shared dataset. |
Dataplex Universal Catalog automatic discovery
The following limits apply to Dataplex Universal Catalog automatic discovery:
| Limit | Default | Notes |
|---|---|---|
| Maximum BigQuery, BigLake, or external tables per Cloud Storage bucket that a discovery scan supports | 1,000 BigQuery tables per bucket | You can create up to 1,000 BigQuery tables per Cloud Storage bucket. |
API quotas and limits
These quotas and limits apply to BigQuery API requests.
BigQuery API
The following quotas apply to BigQuery API (core) requests:
| Quota | Default | Notes |
|---|---|---|
| Requests per day | Unlimited | Your project can make an unlimited number of BigQuery API requests per day. View quota in Google Cloud console |
| Maximum tabledata.list bytes per minute | 7.5 GB in multi-regions; 3.7 GB in all other regions | Your project can return a maximum of 7.5 GB of table row data per minute via tabledata.list in the us and eu multi-regions, and 3.7 GB of table row data per minute in all other regions. This quota applies to the project that contains the table being read. Other APIs, including jobs.getQueryResults and fetching results from jobs.query and jobs.insert, can also consume this quota. For troubleshooting information, see the Troubleshooting page. View quota in Google Cloud console. The BigQuery Storage Read API can sustain significantly higher throughput than tabledata.list. |
The following limits apply to BigQuery API (core) requests:
| Limit | Default | Notes |
|---|---|---|
| Maximum number of API requests per second per user per method | 100 requests | A user can make up to 100 API requests per second to an API method. If a user makes more than 100 requests per second to a method, then throttling can occur. This limit does not apply to streaming inserts. For troubleshooting information, see the Troubleshooting page. |
| Maximum number of concurrent API requests per user | 300 requests | If a user makes more than 300 concurrent requests, throttling can occur. This limit does not apply to streaming inserts. |
| Maximum request header size | 16 KiB | Your BigQuery API request can be up to 16 KiB, including the request URL and all headers. This limit does not apply to the request body, such as in a POST request. |
| Maximum jobs.get requests per second | 1,000 requests | Your project can make up to 1,000 jobs.get requests per second. |
| Maximum jobs.query response size | 20 MB | By default, there is no maximum row count for the number of rows of data returned by jobs.query per page of results. However, you are limited to the 20-MB maximum response size. You can alter the number of rows to return by using the maxResults parameter. |
| Maximum jobs.getQueryResults row size | 20 MB | The maximum row size is approximate because the limit is based on the internal representation of row data. The limit is enforced during transcoding. |
| Maximum projects.list requests per second | 10 requests | A user can make up to 10 projects.list requests per second. |
| Maximum number of tabledata.list requests per second | 1,000 requests | Your project can make up to 1,000 tabledata.list requests per second. |
| Maximum rows per tabledata.list response | 100,000 rows | A tabledata.list call can return up to 100,000 table rows. For more information, see Paging through results using the API. A paging example follows this table. |
| Maximum tabledata.list row size | 100 MB | The maximum row size is approximate because the limit is based on the internal representation of row data. The limit is enforced during transcoding. |
| Maximum tables.insert requests per second | 10 requests | A user can make up to 10 tables.insert requests per second. The tables.insert method creates a new, empty table in a dataset. |
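Several of the limits above, such as the jobs.query response size and the tabledata.list response limits, are easiest to respect by paging through results rather than fetching them in one response. The following is a minimal sketch with the google-cloud-bigquery Python client; the query and page size are illustrative, and for sustained high-throughput reads the Storage Read API noted above is the better fit.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Illustrative query; LIMIT keeps the example small.
job = client.query(
    "SELECT name, number "
    "FROM `bigquery-public-data.usa_names.usa_1910_2013` "
    "LIMIT 100000"
)

# result(page_size=...) fetches rows page by page, so each underlying
# getQueryResults / tabledata.list response stays well under the size limits.
rows = job.result(page_size=10_000)

total = 0
for page in rows.pages:      # one API response per page
    total += page.num_items  # count rows without holding them all in memory

print(f"Fetched {total} rows in pages of up to 10,000 rows")
```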
BigQuery Connection API
The following quotas apply to BigQuery Connection API requests:
| Quota | Default | Notes |
|---|---|---|
| Read requests per minute | 1,000 requests per minute | Your project can make up to 1,000 requests per minute to BigQuery Connection API methods that read connection data. View quota in Google Cloud console |
| Write requests per minute | 100 requests per minute | Your project can make up to 100 requests per minute to BigQuery Connection API methods that create or update connections. View quota in Google Cloud console |
| BigQuery Omni connections created per minute | 10 connections created per minute | Your project can create up to 10 BigQuery Omni connections total across both AWS and Azure per minute. |
| BigQuery Omni connection uses | 500 connection uses per minute | Your project can use a BigQuery Omni connection up to 500 times per minute. This applies to operations which use your connection to access your AWS account, such as querying a table. |
BigQuery Migration API
The following limits apply to the BigQuery Migration API:
| Limit | Default | Notes |
|---|---|---|
| Individual file size for batch SQL translation | 10 MB | Each individual source and metadata file can be up to 10 MB. This limit does not apply to the metadata zip file produced by the dwh-migration-dumper command-line extraction tool. |
| Total size of source files for batch SQL translation | 1 GB | The total size of all input files uploaded to Cloud Storage can be up to 1 GB. This includes all source files, and all metadata files if you choose to include them. |
| Input string size for interactive SQL translation | 1 MB | The string that you enter for interactive SQL translation must not exceed 1 MB. When running interactive translations using the Translation API, this limit applies to the total size of all string inputs. |
| Maximum configuration file size for interactive SQL translation | 50 MB | Individual metadata files (compressed) and YAML config files in Cloud Storage must not exceed 50 MB. If the file size exceeds 50 MB, the interactive translator skips that configuration file during translation and produces an error message. One method to reduce the metadata file size is to use the --database or --schema flags to filter on databases when you generate the metadata. |
| Maximum number of Gemini suggestions per hour | 1,000 (can accumulate up to 10,000 if not used) | If necessary, you can request a quota increase by contacting Cloud Customer Care. |
The following quotas apply to the BigQuery Migration API. The following default values apply in most cases. The defaults for your project might be different:
| Quota | Default | Notes |
|---|---|---|
| EDW Migration Service List Requests per minute; EDW Migration Service List Requests per minute per user | 12,000 requests; 2,500 requests | Your project can make up to 12,000 Migration API List requests per minute. Each user can make up to 2,500 Migration API List requests per minute. View quotas in Google Cloud console |
| EDW Migration Service Get Requests per minute; EDW Migration Service Get Requests per minute per user | 25,000 requests; 2,500 requests | Your project can make up to 25,000 Migration API Get requests per minute. Each user can make up to 2,500 Migration API Get requests per minute. View quotas in Google Cloud console |
| EDW Migration Service Other Requests per minute; EDW Migration Service Other Requests per minute per user | 25 requests; 5 requests | Your project can make up to 25 other Migration API requests per minute. Each user can make up to 5 other Migration API requests per minute. View quotas in Google Cloud console |
| Interactive SQL translation requests per minute; Interactive SQL translation requests per minute per user | 200 requests; 50 requests | Your project can make up to 200 SQL translation service requests per minute. Each user can make up to 50 SQL translation service requests per minute. View quotas in Google Cloud console |
BigQuery Reservation API
The following quotas apply to BigQuery Reservation API requests:
| Quota | Default | Notes |
|---|---|---|
| Requests per minute per region | 100 requests | Your project can make a total of up to 100 calls to BigQuery Reservation API methods per minute per region. View quotas in Google Cloud console |
| Number of SearchAllAssignments calls per minute per region | 100 requests | Your project can make up to 100 calls to the SearchAllAssignments method per minute per region. View quotas in Google Cloud console |
| Requests for SearchAllAssignments per minute per region per user | 10 requests | Each user can make up to 10 calls to the SearchAllAssignments method per minute per region. View quotas in Google Cloud console (In the Google Cloud console search results, search for per user.) |
BigQuery Data Policy API
The following limits apply to the Data Policy API (preview):
| Limit | Default | Notes |
|---|---|---|
| Maximum number of dataPolicies.list calls | 400 requests per minute per project; 600 requests per minute per organization | |
| Maximum number of dataPolicies.testIamPermissions calls | 400 requests per minute per project; 600 requests per minute per organization | |
| Maximum number of read requests | 1,200 requests per minute per project; 1,800 requests per minute per organization | This includes calls to dataPolicies.get and dataPolicies.getIamPolicy. |
| Maximum number of write requests | 600 requests per minute per project; 900 requests per minute per organization | This includes calls to the data policy write methods. |
IAM API
The following quotas apply when you use Identity and Access Management features in BigQuery to retrieve and set IAM policies, and to test IAM permissions. Data control language (DCL) statements count towards SetIAMPolicy quota; a short example follows the table below.
| Quota | Default | Notes |
|---|---|---|
| IamPolicy requests per minute per user | 1,500 requests per minute per user | Each user can make up to 1,500 requests per minute per project. View quota in Google Cloud console |
| IamPolicy requests per minute per project | 3,000 requests per minute per project | Your project can make up to 3,000 requests per minute. View quota in Google Cloud console |
| Single-region SetIAMPolicy requests per minute per project | 1,000 requests per minute per project | Your single-region project can make up to 1,000 requests per minute. View quota in Google Cloud console |
| Multi-region SetIAMPolicy requests per minute per project | 2,000 requests per minute per project | Your multi-region project can make up to 2,000 requests per minute. View quota in Google Cloud console |
| Omni-region SetIAMPolicy requests per minute per project | 200 requests per minute per project | Your Omni-region project can make up to 200 requests per minute. View quota in Google Cloud console |
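Because DCL statements count towards the SetIAMPolicy quota, a scripted series of GRANT or REVOKE statements draws down the same per-minute limits shown above. The following is a minimal sketch in Python; the table, role, and principal are hypothetical placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical table and principal; adjust to your own resources. Running
# this DCL statement consumes SetIAMPolicy quota, the same as calling the
# IAM API directly.
dcl = """
GRANT `roles/bigquery.dataViewer`
ON TABLE `my-project.my_dataset.my_table`
TO "user:analyst@example.com"
"""
client.query(dcl).result()  # DCL statements run as ordinary query jobs
```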
Storage Read API
The following quotas apply to BigQuery Storage Read API requests:
| Quota | Default | Notes |
|---|---|---|
| Read data plane requests per minute per user | 25,000 requests | Each user can make up to 25,000 ReadRows calls per minute per project. View quota in Google Cloud console |
| Read control plane requests per minute per user | 5,000 requests | Each user can make up to 5,000 Storage Read API metadata operation calls per minute per project. The metadata calls include the CreateReadSession and SplitReadStream methods. View quota in Google Cloud console |
The following limits apply to BigQuery Storage Read API requests:
| Limit | Default | Notes |
|---|---|---|
| Maximum row/filter length | 1 MB | When you use the Storage Read API CreateReadSession call, you are limited to a maximum length of 1 MB for each row or filter. |
| Maximum serialized data size | 128 MB | When you use the Storage Read API ReadRows call, the serialized representation of the data in an individual ReadRowsResponse message cannot be larger than 128 MB. |
| Maximum concurrent connections | 2,000 in multi-regions; 400 in regions | You can open a maximum of 2,000 concurrent ReadRows connections per project in the us and eu multi-regions, and 400 concurrent ReadRows connections in other regions. In some cases, you might be limited to fewer concurrent connections than this limit. |
| Maximum per-stream memory usage | 1.5 GB | The maximum per-stream memory is approximate because the limit is based on the internal representation of the row data. Streams using more than 1.5 GB of memory for a single row might fail. For more information, see Troubleshoot resources exceeded issues. |
Storage Write API
The following quotas apply to Storage Write API requests. These quotas can be applied at the folder level, in which case they are aggregated and shared across all child projects. To enable this configuration, contact Cloud Customer Care.
Note: Projects that have opted in to folder-level quota enforcement can only check folder-level quota usage and limits on the folder's Google Cloud console quotas page; project-level quota usage and limits aren't displayed. In this case, the project-level monitoring metrics are still a good source for project-level usage.
Note: Due to performance optimization, BigQuery might report greater concurrent connections quota usage than the actual quota usage. The deviation can be up to 1% of the total quota or 100 connections, whichever is smaller, multiplied by a factor of 1-4. That means the reported usage can deviate by at most 400 connections in multi-regions with a 10,000 default quota, and 40 connections in small regions with a 1,000 default quota. Quota enforcement is always based on the actual usage, not the reported value.
If you plan to request a quota adjustment, include the quota error message in your request to expedite processing. BigQuery may reduce your provisioned quota if it remains significantly underutilized for over a year.
| Quota | Default | Notes |
|---|---|---|
| Concurrent connections | 1,000 in a region; 10,000 in a multi-region | The concurrent connections quota is based on the client project that initiates the Storage Write API request, not the project containing the BigQuery dataset resource. The initiating project is the project associated with the API key or the service account. Your project can operate on 1,000 concurrent connections in a region, or 10,000 concurrent connections in the us and eu multi-regions. When you use the default stream in Java or Go, we recommend using Storage Write API multiplexing to write to multiple destination tables with shared connections, in order to reduce the number of overall connections that are needed. If you are using the Beam connector with at-least-once semantics, you can set UseStorageApiConnectionPool to TRUE. You can view usage quota and limits metrics for your projects in Cloud Monitoring; select the concurrent connections limit name based on your region. |
| Throughput | 3 GB per second throughput in multi-regions; 300 MB per second in regions | You can stream up to 3 GBps in the us and eu multi-regions, and 300 MBps in other regions, per project. View quota in Google Cloud console. You can view usage quota and limits metrics for your projects in Cloud Monitoring; select the throughput limit name based on your region. |
| CreateWriteStream requests | 10,000 streams every hour, per project per region | You can call CreateWriteStream up to 10,000 times per hour per project per region. Consider using the default stream if you don't need exactly-once semantics. This quota is per hour, but the metric shown in the Google Cloud console is per minute. |
| Pending stream bytes | 10 TB in multi-regions; 1 TB in regions | For every commit that you trigger, you can commit up to 10 TB in the us and eu multi-regions, and 1 TB in other regions. There is no quota reporting for this quota. |
The following limits apply to Storage Write API requests:
| Limit | Default | Notes |
|---|---|---|
| Batch commits | 10,000 streams per table | You can commit up to 10,000 streams in eachBatchCommitWriteStream call. |
| AppendRows request size | 10 MB | The maximum request size is 10 MB. |
Streaming inserts
The following quotas and limits apply when you stream data into BigQuery by using the legacy streaming API. For information about strategies to stay within these limits, see Troubleshooting quota errors. If you exceed these quotas, you get quotaExceeded errors.
| Limit | Default | Notes |
|---|---|---|
| Maximum bytes per second per project in the us and eu multi-regions | 1 GB per second | Your project can stream up to 1 GB per second. This quota is cumulative within a given multi-region. In other words, the sum of bytes per second streamed to all tables for a given project within a multi-region is limited to 1 GB. Exceeding this limit causes quotaExceeded errors. If necessary, you can request a quota increase by contacting Cloud Customer Care. Request any increase as early as possible, at minimum two weeks before you need it. Quota increases take time to become available, especially in the case of a significant increase. |
| Maximum bytes per second per project in all other locations | 300 MB per second | Your project can stream up to 300 MB per second in all locations except the us and eu multi-regions. Exceeding this limit causes quotaExceeded errors. If necessary, you can request a quota increase by contacting Cloud Customer Care. Request any increase as early as possible, at minimum two weeks before you need it. Quota increases take time to become available, especially in the case of a significant increase. |
| Maximum row size | 10 MB | Exceeding this value causes invalid errors. |
| HTTP request size limit | 10 MB | Exceeding this value causes invalid errors. Internally, the request is translated from HTTP JSON into an internal data structure. The translated data structure has its own enforced size limit. It's hard to predict the size of the resulting internal data structure, but if you keep your HTTP requests to 10 MB or less, the chance of hitting the internal limit is low. |
| Maximum rows per request | 50,000 rows | A maximum of 500 rows is recommended. Batching can increase performance and throughput to a point, but at the cost of per-request latency. Too few rows per request and the overhead of each request can make ingestion inefficient. Too many rows per request and the throughput can drop. Experiment with representative data (schema and data sizes) to determine the ideal batch size for your data. |
| insertId field length | 128 characters | Exceeding this value causes invalid errors. |
For additional streaming quota, see Request a quota increase.
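One common way to stay within the row-count and request-size limits above is to batch rows client-side before calling the legacy streaming API. The following is a minimal sketch with the google-cloud-bigquery Python client; the table ID, row shape, and 500-row batch size (the recommended starting point from the table above) are illustrative assumptions.

```python
from google.cloud import bigquery

client = bigquery.Client()
TABLE_ID = "my-project.my_dataset.my_table"  # hypothetical destination table


def stream_in_batches(rows, batch_size=500):
    """Stream rows through the legacy streaming API in batches of ~500 rows.

    Tune batch_size for your row sizes so that each HTTP request also stays
    under the 10 MB request size limit.
    """
    for start in range(0, len(rows), batch_size):
        batch = rows[start:start + batch_size]
        errors = client.insert_rows_json(TABLE_ID, batch)
        if errors:
            # Each entry describes failed rows in this batch; decide whether
            # to retry or log them.
            print(f"Batch starting at row {start} returned errors: {errors}")


# Illustrative payload: 2,000 small JSON rows streamed as four requests.
stream_in_batches([{"name": f"user_{i}", "value": i} for i in range(2000)])
```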
Bandwidth
The following quotas apply to the replication bandwidth:
| Quota | Default | Notes |
|---|---|---|
| Maximum initial backfill replication bandwidth for each region that has cross-region data egress from the primary replica to secondary replicas. | 10 physical GiBps per region per organization | |
| Maximum ongoing replication bandwidth for each region that has cross-region data egress from the primary replica to secondary replicas. | 5 physical GiBps per region per organization | |
| Maximum turbo replication bandwidth for each region that has cross-region data egress from the primary replica to secondary replicas. | 5 physical GiBps per region per organization | Turbo replication bandwidth quota doesn't apply to the initial backfill operation. |
When a project's replication bandwidth exceeds a certain quota, replication from affected projects might stop with the rateLimitExceeded error that includes details of the exceeded quota.