Estimate and control costs
This page describes best practices for estimating and controlling costs in BigQuery.
The primary costs in BigQuery are compute, used for query processing, and storage, for data that is stored in BigQuery. BigQuery offers two types of pricing models for query processing: on-demand and capacity-based pricing. Each model offers different best practices for cost control. For data stored in BigQuery, costs depend on the storage billing model configured for each dataset.
Understand compute pricing for BigQuery
There are subtle differences in compute pricing for BigQuery that affect capacity planning and cost control.
Pricing models
For on-demand compute in BigQuery, you incur charges per TiB of data that your queries process.
Alternatively, for capacity compute in BigQuery, you incur charges for the compute resources (slots) that are used to process the query. To use this model, you configure reservations for slots.
Reservations have the following features:
- They are allocated in pools of slots, and they let you manage capacity and isolate workloads in ways that make sense for your organization.
- They must reside in one administration project and are subject to quotas and limits.
The capacity pricing model offers several editions, which all offer a pay-as-you-go option that's charged in slot hours. Enterprise and Enterprise Plus editions also provide optional one- or three-year slot commitments that can save money over the pay-as-you-go rate.
You can also set autoscaling reservations using the pay-as-you-go option. For more information, see the following:
- To compare pricing models, see Choosing a model.
- For pricing details, see On-demand compute pricing and Capacity compute pricing.
Restrict costs for each pricing model
When you use the on-demand pricing model, the only way to restrict costs is to configure project-level or user-level daily quotas. Note, however, that these quotas enforce a hard cap that prevents users from running queries beyond the quota limit. To set quotas, see Create custom query quotas.
When you use the capacity pricing model with slot reservations, you specify the maximum number of slots that are available to a reservation. You can also purchase slot commitments that provide discounted prices for a committed period of time.
You can use editions fully on demand by setting the baseline of the reservation to 0 and the maximum to a setting that meets your workload needs. BigQuery automatically scales up to the number of slots needed for your workload, never exceeding the maximum that you set. For more information, see Workload management using reservations.
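For illustration, such a reservation can be created with SQL DDL. The following is a minimal sketch, assuming the current reservation DDL option names and using a hypothetical administration project, region, and reservation name; check the DDL reference for your edition before running it.

-- Run in the administration project; all names here are hypothetical.
CREATE RESERVATION `admin-project.region-us.etl_reservation`
OPTIONS (
  edition = 'ENTERPRISE',
  slot_capacity = 0,          -- baseline of 0: pay only for slots that autoscale up
  autoscale_max_slots = 100   -- hard ceiling; BigQuery never scales beyond this
);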
Control query costs
To control the costs of individual queries, we recommend that you first follow best practices for optimizing query computation and optimizing storage.
The following sections outline additional best practices that you can useto further control your query costs.
Create custom query quotas
Best practice: Use custom daily query quotas to limit the amount of data processed per day.
You can manage costs by setting a custom quota that specifies a limit on the amount of data processed per day, per project or per user. Users can't run queries after the quota is reached.
To set a custom quota, you need specific roles or permissions. For quotas that you can set, see Quotas and limits.
For more information, see Restrict costs for each pricing model.
Check the estimated cost before running a query
Best practice: Before running queries, preview them to estimate costs.
When using the on-demand pricing model, queries are billed according to the number of bytes read. To estimate costs before running a query:
- Use the query validator in the Google Cloud console.
- Perform a dry run for queries.
Use the query validator
When you enter a query in the Google Cloud console, the query validator verifies the query syntax and provides an estimate of the number of bytes read. You can use this estimate to calculate query cost in the pricing calculator.
If your query is not valid, then the query validator displays an error message. For example:
Not found: Table myProject:myDataset.myTable was not found in location US

If your query is valid, then the query validator provides an estimate of the number of bytes required to process the query. For example:
This query will process 623.1 KiB when run.
Perform a dry run
To perform a dry run, do the following:
Console
Go to the BigQuery page.
Enter your query in the query editor.
If the query is valid, then a check mark automatically appears along with the amount of data that the query will process. If the query is invalid, then an exclamation point appears along with an error message.
bq
Enter a query like the following using the --dry_run flag.
bq query \
--use_legacy_sql=false \
--dry_run \
'SELECT COUNTRY, AIRPORT, IATA FROM `project_id`.dataset.airports LIMIT 1000'
For a valid query, the command produces the following response:
Query successfully validated. Assuming the tables are not modified,
running this query will process 10918 bytes of data.
API
To perform a dry run by using the API, submit a query job with dryRun set to true in the JobConfiguration type.
Go
Before trying this sample, follow the Go setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Go API reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
import("context""fmt""io""cloud.google.com/go/bigquery")// queryDryRun demonstrates issuing a dry run query to validate query structure and// provide an estimate of the bytes scanned.funcqueryDryRun(wio.Writer,projectIDstring)error{// projectID := "my-project-id"ctx:=context.Background()client,err:=bigquery.NewClient(ctx,projectID)iferr!=nil{returnfmt.Errorf("bigquery.NewClient: %v",err)}deferclient.Close()q:=client.Query(`SELECTname,COUNT(*) as name_countFROM `+"`bigquery-public-data.usa_names.usa_1910_2013`"+`WHERE state = 'WA'GROUP BY name`)q.DryRun=true// Location must match that of the dataset(s) referenced in the query.q.Location="US"job,err:=q.Run(ctx)iferr!=nil{returnerr}// Dry run is not asynchronous, so get the latest status and statistics.status:=job.LastStatus()iferr:=status.Err();err!=nil{returnerr}fmt.Fprintf(w,"This query will process %d bytes\n",status.Statistics.TotalBytesProcessed)returnnil}Java
Before trying this sample, follow the Java setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Java API reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.JobStatistics;
import com.google.cloud.bigquery.QueryJobConfiguration;

// Sample to run dry query on the table
public class QueryDryRun {

  public static void runQueryDryRun() {
    String query =
        "SELECT name, COUNT(*) as name_count "
            + "FROM `bigquery-public-data.usa_names.usa_1910_2013` "
            + "WHERE state = 'WA' "
            + "GROUP BY name";
    queryDryRun(query);
  }

  public static void queryDryRun(String query) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

      QueryJobConfiguration queryConfig =
          QueryJobConfiguration.newBuilder(query).setDryRun(true).setUseQueryCache(false).build();

      Job job = bigquery.create(JobInfo.of(queryConfig));
      JobStatistics.QueryStatistics statistics = job.getStatistics();

      System.out.println(
          "Query dry run performed successfully." + statistics.getTotalBytesProcessed());
    } catch (BigQueryException e) {
      System.out.println("Query not performed \n" + e.toString());
    }
  }
}

Node.js
Before trying this sample, follow the Node.js setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Node.js API reference documentation.

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

// Import the Google Cloud client library
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function queryDryRun() {
  // Runs a dry query of the U.S. given names dataset for the state of Texas.
  const query = `SELECT name
    FROM \`bigquery-public-data.usa_names.usa_1910_2013\`
    WHERE state = 'TX'
    LIMIT 100`;

  // For all options, see https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/query
  const options = {
    query: query,
    // Location must match that of the dataset(s) referenced in the query.
    location: 'US',
    dryRun: true,
  };

  // Run the query as a job
  const [job] = await bigquery.createQueryJob(options);

  // Print the status and statistics
  console.log('Status:');
  console.log(job.metadata.status);
  console.log('\nJob Statistics:');
  console.log(job.metadata.statistics);
}
PHP
Before trying this sample, follow the PHP setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery PHP API reference documentation.

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

use Google\Cloud\BigQuery\BigQueryClient;

/** Uncomment and populate these variables in your code */
// $projectId = 'The Google project ID';
// $query = 'SELECT id, view_count FROM `bigquery-public-data.stackoverflow.posts_questions`';

// Construct a BigQuery client object.
$bigQuery = new BigQueryClient([
    'projectId' => $projectId,
]);

// Set job configs
$jobConfig = $bigQuery->query($query);
$jobConfig->useQueryCache(false);
$jobConfig->dryRun(true);

// Extract query results
$queryJob = $bigQuery->startJob($jobConfig);
$info = $queryJob->info();
printf('This query will process %s bytes' . PHP_EOL, $info['statistics']['totalBytesProcessed']);
Python
Set the QueryJobConfig.dry_run property to True. Client.query() always returns a completed QueryJob when provided a dry run query configuration.
Before trying this sample, follow the Python setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Python API reference documentation.
To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.
from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)

# Start the query, passing in the extra configuration.
query_job = client.query(
    (
        "SELECT name, COUNT(*) as name_count "
        "FROM `bigquery-public-data.usa_names.usa_1910_2013` "
        "WHERE state = 'WA' "
        "GROUP BY name"
    ),
    job_config=job_config,
)  # Make an API request.

# A dry run query completes immediately.
print("This query will process {} bytes.".format(query_job.total_bytes_processed))

Estimate query costs
When using the on-demand pricing model, you can estimate the cost of running a query by calculating the number of bytes processed.
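As a worked example, assuming a hypothetical on-demand rate of $6.25 per TiB, a query that processes 2 TiB of data would cost about 2 × $6.25 = $12.50, before any free tier allowance or discounts are applied. Check the pricing page for the actual rate in your region.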
On-demand query size calculation
To calculate the number of bytes processed by the various types of queries, see the following sections:
Note: The selected dataset storage billing model does not affect the on-demand query cost calculation. BigQuery always uses logical (uncompressed) bytes to calculate on-demand query costs.

Note: If you are querying external table data stored in ORC or Parquet, the number of bytes charged is limited to the columns that BigQuery reads. Because the data types from an external data source are converted to BigQuery data types by the query, the number of bytes read is computed based on the size of BigQuery data types.

Avoid running queries to explore table data
Best practice: Don't run queries to explore or preview table data.
If you are experimenting with or exploring your data, you can use table preview options to view data at no charge and without affecting quotas.
BigQuery supports the following data preview options:
- In the Google Cloud console, on the table details page, click the Preview tab to sample the data.
- In the bq command-line tool, use the bq head command and specify the number of rows to preview.
- In the API, use tabledata.list to retrieve table data from a specified set of rows.
- Avoid using LIMIT in non-clustered tables. For non-clustered tables, a LIMIT clause won't reduce compute costs.
Restrict the number of bytes billed per query
Best practice: Use the maximum bytes billed setting to limit query costs when using the on-demand pricing model.
You can limit the number of bytes billed for a query using the maximum bytes billed setting. When you set maximum bytes billed, the number of bytes that the query reads is estimated before the query execution. If the number of estimated bytes is beyond the limit, then the query fails without incurring a charge.
For clustered tables, the estimate of the number of bytes billed for a query is an upper bound, and can be higher than the actual number of bytes billed after running the query. So in some cases, if you set the maximum bytes billed, a query on a clustered table can fail, even though the actual bytes billed wouldn't exceed the maximum bytes billed setting.
If a query fails because of the maximum bytes billed setting, an error similar to the following is returned:
Error: Query exceeded limit for bytes billed: 1000000. 10485760 or higher required.
To set the maximum bytes billed:
Console
- In the Query editor, click More > Query settings > Advanced options.
- In the Maximum bytes billed field, enter an integer.
- Click Save.
bq
Use the bq query command with the --maximum_bytes_billed flag.
bq query --maximum_bytes_billed=1000000 \
--use_legacy_sql=false \
'SELECT word FROM `bigquery-public-data`.samples.shakespeare'
API
Set the maximumBytesBilled property in JobConfigurationQuery or QueryRequest.
Avoid using LIMIT in non-clustered tables
Best practice: For non-clustered tables, don't use a LIMIT clause as a method of cost control.
For non-clustered tables, applying a LIMIT clause to a query doesn't affect the amount of data that is read. You are billed for reading all bytes in the entire table as indicated by the query, even though the query returns only a subset. With a clustered table, a LIMIT clause can reduce the number of bytes scanned, because scanning stops when enough blocks are scanned to get the result. You are billed for only the bytes that are scanned.
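To illustrate the difference, the following sketch uses hypothetical table and column names. On the clustered copy, a query that filters on the clustering column and uses LIMIT can stop scanning early; on the non-clustered original it cannot.

-- Hypothetical tables: cluster a copy of the data on a commonly filtered column.
CREATE TABLE mydataset.events_clustered
CLUSTER BY customer_id AS
SELECT * FROM mydataset.events;

-- On the clustered table, scanning can stop once enough blocks are read,
-- so LIMIT can reduce bytes billed. On mydataset.events it would not.
SELECT *
FROM mydataset.events_clustered
WHERE customer_id = 'C123'
LIMIT 10;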
Materialize query results in stages
Best practice: If possible, materialize your query results in stages.
If you create a large, multi-stage query, each time you run it, BigQuery reads all the data that is required by the query. You are billed for all the data that is read each time the query is run.
Instead, break your query into stages where each stage materializes the query results by writing them to a destination table. Querying the smaller destination table reduces the amount of data that is read and lowers costs. The cost of storing the materialized results is much less than the cost of processing large amounts of data.
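As a sketch with hypothetical table names, the first statement materializes an expensive aggregation once, and later queries read the much smaller result table instead of rescanning the raw data.

-- Stage 1: materialize the expensive transformation into a destination table.
CREATE OR REPLACE TABLE mydataset.daily_events_summary AS
SELECT event_date, customer_id, COUNT(*) AS event_count
FROM mydataset.raw_events
GROUP BY event_date, customer_id;

-- Stage 2: downstream queries read the smaller summary table, not raw_events.
SELECT customer_id, SUM(event_count) AS monthly_events
FROM mydataset.daily_events_summary
WHERE event_date BETWEEN '2025-01-01' AND '2025-01-31'
GROUP BY customer_id;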
Control workload costs
This section describes best practices for controlling costs within a workload.A workload is a set of related queries. For example, a workload can be a datatransformation pipeline that runs daily, a set of dashboards run by a group ofbusiness analysts, or several ad-hoc queries run by a set of data scientists.
Use the Google Cloud pricing calculator
Best practice: Use the Google Cloud pricing calculator to create an overall monthly cost estimate for BigQuery based on projected usage. You can then compare this estimate to your actual costs to identify areas for optimization.
On-demand
To estimate costs in the Google Cloud pricing calculator when using the on-demand pricing model, follow these steps:
- Open the Google Cloud pricing calculator.
- Click Add to estimate.
- Select BigQuery.
- Select "On-demand" for Service type.
- Choose the location where your queries will run.
- For Amount of data queried, enter the estimated bytes read from your dry run or the query validator.
- Enter your estimates of storage usage for Active storage, Long-term storage, Streaming inserts, and Streaming reads. You only need to estimate either physical storage or logical storage, depending on the dataset storage billing model.
- The estimate appears in the Cost details panel. For more information about the estimated cost, click Open detailed view. You can also download and share the cost estimate.
For more information, see On-demand pricing.
Editions
To estimate costs in the Google Cloud pricing calculator when using the capacity-based pricing model with BigQuery editions, follow these steps:
- Open the Google Cloud pricing calculator.
- Click Add to estimate.
- Select BigQuery.
- Select "Editions" for Service type.
- Choose the location where the slots are used.
- Choose your Edition.
- Choose the Maximum slots, Baseline slots, optional Commitment, and Estimated utilization of autoscaling.
- Choose the location where the data is stored.
- Enter your estimates of storage usage for Active storage, Long-term storage, Streaming inserts, and Streaming reads. You only need to estimate either physical storage or logical storage, depending on the dataset storage billing model.
- The estimate appears in the Cost details panel. For more information about the estimated cost, click Open detailed view. You can also download and share the cost estimate.
For more information, see Capacity-based pricing.
Use reservations and commitments
Best practice: Use BigQuery reservations and commitments to control costs.
For more information, see Restrict costs for each pricing model.
Use the slot estimator
Best practice: Use the slot estimator to estimate the number of slots required for your workloads.
The BigQuery slot estimator helps you to manage slot capacity based on historical performance metrics.
In addition, customers using the on-demand pricing model can view sizing recommendations for commitments and autoscaling reservations with similar performance when moving to capacity-based pricing.
Cancel unnecessary long-running jobs
To free capacity, check on long-running jobs to make sure that they should continue running. If not, cancel them.
View costs using a dashboard
Best practice: Create a dashboard to analyze your Cloud Billing data so you can monitor and make adjustments to your BigQuery usage.
You can export your billing data to BigQuery and visualize it in a tool such as Looker Studio. For a tutorial about creating a billing dashboard, see Visualize Google Cloud billing using BigQuery and Looker Studio.
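Once the export is flowing, you can also query it directly. The following is a minimal sketch that groups BigQuery charges by SKU; the table name is hypothetical, because the export table name includes your own billing account ID.

-- Hypothetical export table name; substitute your own billing export table.
SELECT
  sku.description AS sku,
  SUM(cost) AS total_cost
FROM `my-project.billing_dataset.gcp_billing_export_v1_XXXXXX`
WHERE service.description = 'BigQuery'
  AND usage_start_time >= TIMESTAMP('2025-01-01')
GROUP BY sku
ORDER BY total_cost DESC;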
Use billing budgets and alerts
Best practice: Use Cloud Billing budgets to monitor your BigQuery charges in one place.
Cloud Billing budgets let you track your actual costs against your planned costs. After you've set a budget amount, you set budget alert threshold rules that are used to trigger email notifications. Budget alert emails help you stay informed about how your BigQuery spend is tracking against your budget.
Control storage costs
Use these best practices for optimizing the cost of BigQuery storage. You can also optimize storage for query performance.
Use long-term storage
Best practice: Use long-term storage pricing to reduce the cost of older data.
When you load data into BigQuery storage, the data is subject to BigQuery storage pricing. For older data, you can automatically take advantage of BigQuery long-term storage pricing.
If you have a table that is not modified for 90 consecutive days, the price of storage for that table automatically drops by 50 percent. If you have a partitioned table, each partition is considered separately for eligibility for long-term pricing, subject to the same rules as non-partitioned tables.
Configure the storage billing model
Best practice: Optimize the storage billing model based on your usage patterns.
BigQuery supports storage billing using logical (uncompressed) or physical (compressed) bytes, or a combination of both. The storage billing model configured for each dataset determines your storage pricing, but it does not impact query performance.
You can use the INFORMATION_SCHEMA views to determine the storage billing model that works best based on your usage patterns.
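For example, the following sketch compares logical and physical bytes per dataset using the TABLE_STORAGE view, and then switches one dataset's billing model; the dataset name is hypothetical, and the right choice depends on the relative prices in your region.

-- Compare compressed (physical) and uncompressed (logical) footprints per dataset.
SELECT
  table_schema AS dataset,
  SUM(total_logical_bytes) / POW(1024, 3) AS logical_gib,
  SUM(total_physical_bytes) / POW(1024, 3) AS physical_gib
FROM `region-us`.INFORMATION_SCHEMA.TABLE_STORAGE
GROUP BY dataset
ORDER BY logical_gib DESC;

-- If the data compresses well, physical billing may be cheaper even at its
-- higher per-GiB rate. Switching is a dataset-level option:
ALTER SCHEMA mydataset SET OPTIONS (storage_billing_model = 'PHYSICAL');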
Avoid overwriting tables
Best practice: When you are using the physical storage billing model, avoid repeatedly overwriting tables.
When you overwrite a table, for example by using the --replace parameter in batch load jobs or the TRUNCATE TABLE SQL statement, the replaced data is kept for the duration of the time travel and fail-safe windows. If you overwrite a table frequently, you incur additional storage charges.
Instead, you can incrementally load data into a table by using the WRITE_APPEND parameter in load jobs, the MERGE SQL statement, or the Storage Write API.
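A minimal MERGE sketch with hypothetical table and key names: rows from a staging table are upserted into the target instead of truncating and reloading the whole table.

MERGE mydataset.inventory AS target
USING mydataset.inventory_staging AS source
ON target.product_id = source.product_id
WHEN MATCHED THEN
  UPDATE SET quantity = source.quantity
WHEN NOT MATCHED THEN
  INSERT (product_id, quantity)
  VALUES (source.product_id, source.quantity);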
Reduce the time travel window
Best practice: Based on your requirements, you can lower the time travel window.
Reducing the time travel window from the default value of seven days reduces the retention period for data deleted from or changed in a table. You are billed for time travel storage only when using the physical (compressed) storage billing model.
The time travel window is set at the dataset level. You can also set the default time travel window for new datasets using configuration settings.
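For example, the following statement lowers the window to 48 hours for one dataset (the dataset name is hypothetical; the option is expressed in hours, in multiples of 24):

ALTER SCHEMA mydataset SET OPTIONS (max_time_travel_hours = 48);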
Use table expiration for destination tables
Best practice: If you are writing large query results to a destination table, use the default table expiration time to remove the data when it's no longer needed.
Keeping large result sets in BigQuery storage has a cost. If you don't need permanent access to the results, use the default table expiration to automatically delete the data for you.
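For example, with a hypothetical scratch dataset, the following statement deletes each newly created table 7 days after its creation:

ALTER SCHEMA scratch_dataset SET OPTIONS (default_table_expiration_days = 7);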
Archive data to Cloud Storage
Best practice: Consider archiving data in Cloud Storage.
You can move data from BigQuery to Cloud Storage based on the business need for archival. As a best practice, consider long-term storage pricing and the physical storage billing model before exporting data out of BigQuery.
Troubleshooting BigQuery cost discrepancies and unexpected charges
Follow these steps to troubleshoot unexpected BigQuery charges or cost discrepancies:
1. To understand where your BigQuery charges are coming from in the Cloud Billing report, group charges by SKU so that it's easier to observe the usage and charges for the corresponding BigQuery services.
2. Study the pricing for the corresponding SKUs on the SKU documentation page or the Pricing page in the Cloud Billing UI to understand which feature each SKU maps to, for example, BigQuery Storage Read API, long-term storage, on-demand pricing, or Standard edition.
3. After identifying the corresponding SKUs, use the INFORMATION_SCHEMA views to identify the specific resources associated with these charges. For example:
   - If you are charged for on-demand analysis, look into the INFORMATION_SCHEMA.JOBS view examples to determine the jobs driving costs and the users who launched them, as in the sketch after this list.
   - If you are charged for reservation or commitment SKUs, look into the corresponding INFORMATION_SCHEMA.RESERVATIONS and INFORMATION_SCHEMA.CAPACITY_COMMITMENTS views to identify the reservations and commitments that are being charged.
   - If the charges come from storage SKUs, look at the INFORMATION_SCHEMA.TABLE_STORAGE view examples to understand which datasets and tables are driving the most costs.
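The following sketch attributes on-demand bytes billed to users over a time range; the region qualifier and date range are placeholders, and SCRIPT parent jobs are excluded to avoid double counting.

SELECT
  user_email,
  SUM(total_bytes_billed) / POW(1024, 4) AS tib_billed
FROM `region-us`.INFORMATION_SCHEMA.JOBS
WHERE creation_time >= TIMESTAMP('2025-01-01')
  AND creation_time < TIMESTAMP('2025-02-01')
  AND job_type = 'QUERY'
  AND statement_type != 'SCRIPT'
GROUP BY user_email
ORDER BY tib_billed DESC;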
Important troubleshooting considerations:
Take into account that a Daily time period in the Cloud Billing report starts at midnight US and Canadian Pacific Time (UTC-8) and observes daylight saving time shifts in the United States. Adjust your calculations and data aggregations to match the same timeframes.
Filter by project if there are multiple projects attached to the billing account and you want to review charges coming from a specific project.
Make sure to select the correct region when performing investigations.
Your project exceeded quota for free query bytes scanned
BigQuery returns this error when you run a query in the free usage tier and the account reaches the monthly query limit. For more information about query pricing, see Free usage tier.
Error message
Your project exceeded quota for free query bytes scanned
Resolution
To continue using BigQuery, you need to upgrade the account to a paid Cloud Billing account.
Unexpected charges related to queries, reservations, and commitments
Troubleshooting unexpected charges related to job execution depends on the origin of these charges:
- If you see an increase in on-demand analysis costs, this can be related to an increase in the number of jobs that were launched or a change in the amount of data that needs to be processed by those jobs. Investigate this using the INFORMATION_SCHEMA.JOBS view.
- If there is an increase in charges for committed slots, investigate by querying INFORMATION_SCHEMA.CAPACITY_COMMITMENT_CHANGES to see whether new commitments have been purchased or modified, as in the sketch after this list.
- For increases in charges originating from reservation usage, look into changes to reservations that are recorded in INFORMATION_SCHEMA.RESERVATION_CHANGES. To match autoscaling reservation usage with billing data, follow the autoscaling example.
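As a sketch, assuming the column names shown here (verify the CAPACITY_COMMITMENT_CHANGES schema for your region before use), you could list recent commitment changes like this:

SELECT
  change_timestamp,
  action,
  commitment_plan,
  slot_count,
  user_email
FROM `region-us`.INFORMATION_SCHEMA.CAPACITY_COMMITMENT_CHANGES
WHERE change_timestamp >= TIMESTAMP('2025-01-01')
ORDER BY change_timestamp DESC;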
Slot-hours billed are larger than the slot-hours calculated from the INFORMATION_SCHEMA.JOBS view
When you use an autoscaling reservation, billing is calculated according to the number of scaled slots, not the number of slots used. BigQuery autoscales in multiples of 50 slots, which leads to billing for the nearest multiple even if less than the autoscaled amount is actually used. The autoscaler also has a one-minute minimum period before scaling down, which means that at least one minute is charged even if the query used the slots for less time, for example, for only 10 seconds out of the minute. The correct way to estimate charges for an autoscaling reservation is documented on the Slots Autoscaling page. For more information, see the best practices for using autoscaling efficiently.
A similar scenario applies to non-autoscaling reservations: billing is calculated according to the number of slots provisioned, not the number of slots used. To estimate charges for a non-autoscaling reservation, you can query the RESERVATIONS_TIMELINE view directly.
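A rough sketch, assuming the RESERVATIONS_TIMELINE view emits one row per minute per reservation with a slot_capacity column; verify the view's granularity and schema for your region before relying on the numbers.

SELECT
  reservation_name,
  SUM(slot_capacity) / 60 AS estimated_slot_hours  -- assumes one row per minute
FROM `region-us`.INFORMATION_SCHEMA.RESERVATIONS_TIMELINE
WHERE period_start >= TIMESTAMP('2025-01-01')
  AND period_start < TIMESTAMP('2025-02-01')
GROUP BY reservation_name;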
Billing is less than the total bytes billed calculated through INFORMATION_SCHEMA.JOBS for a project running on-demand queries
There can be multiple reasons for the actual billing to be less than the calculated bytes processed:
- Each project is provided with 1 TiB of free tier querying per month at no extra charge.
- SCRIPT type jobs were not excluded from the calculation, which could lead to some values being counted twice.
- Different types of savings applied to your Cloud Billing account, such as negotiated discounts, promotional credits, and others. Check the Savings section of the Cloud Billing report. The free tier 1 TiB of querying per month is also included there.
Billing is larger than the bytes processed calculated through INFORMATION_SCHEMA.JOBS for a project running on-demand queries
If the billing amount is larger than the value you calculated by querying the INFORMATION_SCHEMA.JOBS view, one of the following conditions might be the cause:
Queries over row-level security tables

- Queries over tables with row-level security don't produce a value for total_bytes_billed in the INFORMATION_SCHEMA.JOBS view. Therefore, billing calculated using total_bytes_billed from the INFORMATION_SCHEMA.JOBS view will be less than the billed value. See the Row Level Security best practices page for more details about why this information is not visible.

Performing ML operations in BigQuery

- BigQuery ML pricing for on-demand queries depends on the type of model being created. Some of these model operations are charged at a higher rate than non-ML queries. Therefore, if you just add up all of the total_bytes_billed for the project and apply the standard on-demand per-TiB rate, the result won't be a correct pricing aggregation; you need to account for the per-TiB pricing difference.

Incorrect pricing amounts

- Confirm that the correct per-TiB pricing values are used in the calculations, and make sure to choose the correct region, because prices are location-dependent. See the Pricing documentation.

The general advice is to follow the recommended way of calculating on-demand job usage for billing in our public documentation.
Billed for BigQuery Reservations API usage even though the API is disabled and no reservations or commitments are used
Inspect the SKU to better understand what services are charged. If the SKU billed is the BigQuery Governance SKU, these are charges coming from Dataplex Universal Catalog. Some Dataplex Universal Catalog functionalities trigger job execution using BigQuery. These charges are now processed under the corresponding BigQuery Reservations API SKU. See the Dataplex Universal Catalog Pricing documentation for more details.
Project is assigned to a reservation, but still seeing BigQuery Analysis on-demand costs
Read through the Troubleshooting issues with reservations section to identify where the Analysis charges might be coming from.
Unexpected charges for pay-as-you-go (PAYG) slots for the BigQuery Standard Edition
In the Cloud Billing report, apply a filter with the label goog-bq-feature-type and the value BQ_STUDIO_NOTEBOOK. The usage you see is metered as pay-as-you-go slots under the BigQuery Standard Edition; these are charges for using BigQuery Studio notebooks. Read more about BigQuery Studio notebook pricing.
BigQuery Reservations API charges appearing after the Reservation API is disabled
Disabling the BigQuery Reservation API won't stop commitment charges. To stop commitment charges, you need to delete the commitment. Set the renewal plan to NONE, and the commitment is automatically deleted when it expires.
Unexpected storage charges
The following scenarios can lead to storage charge increases:
- Increases in the amount of data stored in your tables. Use the INFORMATION_SCHEMA.TABLE_STORAGE_USAGE_TIMELINE view to monitor the change in bytes for your tables, or rank tables by size as in the sketch after this list.
- Changing dataset billing models.
- Increasing the time-travel window for physical billing model datasets.
- Modification of tables that have data in long-term storage, causing them to become active storage.
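For a quick look at which tables dominate storage, the following sketch ranks tables by physical bytes using the TABLE_STORAGE view; the region qualifier is a placeholder.

SELECT
  table_schema,
  table_name,
  total_logical_bytes / POW(1024, 3) AS logical_gib,
  total_physical_bytes / POW(1024, 3) AS physical_gib
FROM `region-us`.INFORMATION_SCHEMA.TABLE_STORAGE
ORDER BY total_physical_bytes DESC
LIMIT 20;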
Deletion of tables or datasets resulted in higher BigQuery storage costs
The BigQuery time travel feature retains deleted data for the duration of the configured time-travel window and an additional 7 days for fail-safe recovery. During this retention window, the deleted data in physical storage billing model datasets contributes to the active physical storage cost, even though the tables no longer appear in INFORMATION_SCHEMA.TABLE_STORAGE or in the console. If the table data was in long-term storage, deletion causes this data to be moved to active physical storage. This causes the corresponding cost to rise, because active physical bytes are charged approximately two times more than long-term physical bytes, according to the BigQuery storage pricing page. The recommended approach to minimize costs caused by data deletion for physical storage billing model datasets is to reduce the time-travel window to 2 days.
Storage costs reduced with no modifications to the data
In BigQuery, you pay for active and long-term storage. Active storage charges include any table or table partition that has been modified in the last 90 days, whereas long-term storage charges include tables and partitions that haven't been modified for 90 consecutive days. An overall storage cost reduction can be observed when data transitions to long-term storage, which is around 50% cheaper than active storage. Read about storage pricing for more details.
INFORMATION_SCHEMA storage calculations don't match billing
- Use the INFORMATION_SCHEMA.TABLE_STORAGE_USAGE_TIMELINE view instead of INFORMATION_SCHEMA.TABLE_STORAGE. TABLE_STORAGE_USAGE_TIMELINE provides more accurate and granular data for correctly calculating storage costs.
- The queries run on INFORMATION_SCHEMA views don't include taxes, adjustments, and rounding errors. Take these into account when comparing the data. Read more about reports in Cloud Billing on this page.
- Data presented in the INFORMATION_SCHEMA views is in UTC, whereas billing report data is reported in US and Canadian Pacific Time (UTC-8).
What's next
- Learn about BigQuery pricing.
- Learn how to optimize queries.
- Learn how to optimize storage.
To learn about billing, alerts, and visualizing data, see the following topics: