JOBS view
The INFORMATION_SCHEMA.JOBS view contains near real-time metadata about all BigQuery jobs in the current project.
INFORMATION_SCHEMA.JOBS and INFORMATION_SCHEMA.JOBS_BY_PROJECT are synonymous and can be used interchangeably.

Required role
To get the permission that you need to query the INFORMATION_SCHEMA.JOBS view, ask your administrator to grant you the BigQuery Resource Viewer (roles/bigquery.resourceViewer) IAM role on your project. For more information about granting roles, see Manage access to projects, folders, and organizations.
This predefined role contains the bigquery.jobs.listAll permission, which is required to query the INFORMATION_SCHEMA.JOBS view.
You might also be able to get this permission with custom roles or other predefined roles.
For more information about BigQuery permissions, see Access control with IAM.
Schema
The underlying data is partitioned by the creation_time column and clustered by project_id and user_email. The query_info column contains additional information about your query jobs.
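Because of this layout, filtering on creation_time first (and then on a clustering column) reduces the amount of data scanned. The following is a minimal sketch of that pattern; the one-day window, `region-us` qualifier, and email address are illustrative placeholders, not values from this reference:

-- Sketch: filter the partitioning column first so partitions can be pruned,
-- then filter a clustering column. All filter values below are placeholders.
SELECT
  job_id,
  job_type,
  total_slot_ms
FROM
  `region-us`.INFORMATION_SCHEMA.JOBS
WHERE
  creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
  AND user_email = 'analyst@example.com';  -- hypothetical user

The INFORMATION_SCHEMA.JOBS view has the following schema: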
| Column name | Data type | Value |
|---|---|---|
| bi_engine_statistics | RECORD | If the project is configured to use the BI Engine, then this field contains BiEngineStatistics. Otherwise NULL. |
| cache_hit | BOOLEAN | Whether the query results of this job were from a cache. If you have a multi-query statement job, cache_hit for your parent query is NULL. |
| creation_time | TIMESTAMP | (Partitioning column) Creation time of this job. Partitioning is based on the UTC time of this timestamp. |
| destination_table | RECORD | Destination table for results, if any. |
| dml_statistics | RECORD | If the job is a query with a DML statement, the value is a record with the following fields: inserted_row_count (the number of rows that were inserted), deleted_row_count (the number of rows that were deleted), and updated_row_count (the number of rows that were updated). For all other jobs, NULL. This column is present in the INFORMATION_SCHEMA.JOBS_BY_USER and INFORMATION_SCHEMA.JOBS_BY_PROJECT views. |
| end_time | TIMESTAMP | The end time of this job, in milliseconds since the epoch. This field represents the time when the job enters the DONE state. |
| error_result | RECORD | Details of any errors as ErrorProto objects. |
| job_creation_reason.code | STRING | Specifies the high-level reason why a job was created. Possible values are: REQUESTED (job creation was requested), LONG_RUNNING (the query request ran beyond a system-defined timeout, so a job was created to track it), LARGE_RESULTS (the query results don't fit in the inline response, so a job was created), and OTHER (the system determined that the query needed to be executed as a job). |
| job_id | STRING | The ID of the job if a job was created. Otherwise, the query ID of a query using optional job creation mode. For example, bquxjob_1234. |
| job_stages | RECORD | Query stages of the job. Note: This column's values are empty for queries that read from tables with row-level access policies. For more information, see best practices for row-level security in BigQuery. |
| job_type | STRING | The type of the job. Can be QUERY, LOAD, EXTRACT, COPY, or NULL. A NULL value indicates a background job. |
| labels | RECORD | Array of labels applied to the job as key-value pairs. |
| parent_job_id | STRING | ID of the parent job, if any. |
| priority | STRING | The priority of this job. Valid values include INTERACTIVE and BATCH. |
| project_id | STRING | (Clustering column) The ID of the project. |
| project_number | INTEGER | The number of the project. |
| query | STRING | SQL query text. |
| referenced_tables | RECORD | Array of STRUCT values that contain the following STRING fields for each table referenced by the query: project_id, dataset_id, and table_id. Only populated for query jobs that are not cache hits. |
| reservation_id | STRING | Name of the primary reservation assigned to this job, in the format RESERVATION_ADMIN_PROJECT:RESERVATION_LOCATION.RESERVATION_NAME. In this output: RESERVATION_ADMIN_PROJECT is the name of the Google Cloud project that administers the reservation; RESERVATION_LOCATION is the location of the reservation; RESERVATION_NAME is the name of the reservation. |
| edition | STRING | The edition associated with the reservation assigned to this job. For more information about editions, see Introduction to BigQuery editions. |
| session_info | RECORD | Details about the session in which this job ran, if any. |
| start_time | TIMESTAMP | The start time of this job, in milliseconds since the epoch. This field represents the time when the job transitions from the PENDING state to either RUNNING or DONE. |
| state | STRING | Running state of the job. Valid states include PENDING, RUNNING, and DONE. |
| statement_type | STRING | The type of query statement. For example, DELETE, INSERT, SCRIPT, SELECT, or UPDATE. See QueryStatementType for a list of valid values. |
| timeline | RECORD | Query timeline of the job. Contains snapshots of query execution. |
| total_bytes_billed | INTEGER | If the project is configured to use on-demand pricing, then this field contains the total bytes billed for the job. If the project is configured to use flat-rate pricing, then you are not billed for bytes and this field is informational only. Note: This column's values are empty for queries that read from tables with row-level access policies. For more information, see best practices for row-level security in BigQuery. |
| total_bytes_processed | INTEGER | Total bytes processed by the job. Note: This column's values are empty for queries that read from tables with row-level access policies. For more information, see best practices for row-level security in BigQuery. |
| total_modified_partitions | INTEGER | The total number of partitions the job modified. This field is populated for LOAD and QUERY jobs. |
| total_slot_ms | INTEGER | Slot milliseconds for the job over its entire duration in the RUNNING state, including retries. |
| total_services_sku_slot_ms | INTEGER | Total slot milliseconds for the job that runs on external services and is billed on the services SKU. This field is only populated for jobs that have external service costs, and is the total of the usage for costs whose billing method is "SERVICES_SKU". |
| transaction_id | STRING | ID of the transaction in which this job ran, if any. |
| user_email | STRING | (Clustering column) Email address or service account of the user who ran the job. |
| principal_subject | STRING | A string representation of the identity of the principal that ran the job. |
| query_info.resource_warning | STRING | The warning message that appears if the resource usage during query processing is above the internal threshold of the system. A successful query job can have the resource_warning field populated. With resource_warning, you get additional data points to optimize your queries and to set up monitoring for performance trends of an equivalent set of queries by using query_hashes. |
| query_info.query_hashes.normalized_literals | STRING | Contains the hash value of the query. normalized_literals is a hexadecimal STRING hash that ignores comments, parameter values, UDFs, and literals. The hash value will differ when underlying views change, or if the query implicitly references columns, such as SELECT *, and the table schema changes. This field appears for successful GoogleSQL queries that are not cache hits. |
| query_info.performance_insights | RECORD | Performance insights for the job. |
| query_info.optimization_details | STRUCT | The history-based optimizations for the job. Only the JOBS_BY_PROJECT view has this column. |
| transferred_bytes | INTEGER | Total bytes transferred for cross-cloud queries, such as BigQuery Omni cross-cloud transfer jobs. |
| materialized_view_statistics | RECORD | Statistics of materialized views considered in a query job. (Preview) |
| metadata_cache_statistics | RECORD | Statistics for metadata column index usage for tables referenced in a query job. |
| search_statistics | RECORD | Statistics for a search query. |
| query_dialect | STRING | The query dialect used for the job (available since May 2025). Valid values include: GOOGLE_SQL (GoogleSQL was explicitly requested), LEGACY_SQL (legacy SQL was explicitly requested), DEFAULT_GOOGLE_SQL (GoogleSQL was used because no dialect was specified in the request), and DEFAULT_LEGACY_SQL (legacy SQL was used because no dialect was specified in the request). This field is only populated for query jobs. The default selection of query dialect can be controlled by the configuration settings. |
| continuous | BOOLEAN | Whether the job is a continuous query. |
| continuous_query_info.output_watermark | TIMESTAMP | Represents the point up to which the continuous query has successfully processed data. |
| vector_search_statistics | RECORD | Statistics for a vector search query. |
When you query INFORMATION_SCHEMA.JOBS to find a summary cost of query jobs, exclude the SCRIPT statement type; otherwise some values might be counted twice. The SCRIPT row includes summary values for all child jobs that were executed as part of this job.
For stability, we recommend that you explicitly list columns in your information schema queries instead of using a wildcard (SELECT *). Explicitly listing columns prevents queries from breaking if the underlying schema changes.
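As a minimal sketch that follows both recommendations, the following query names its columns explicitly and excludes SCRIPT rows when summing usage; the seven-day window and `region-us` qualifier are illustrative assumptions:

-- Sketch: explicit column list plus SCRIPT exclusion to avoid double counting.
SELECT
  user_email,
  SUM(total_slot_ms) AS total_slot_ms,
  SUM(total_bytes_billed) AS total_bytes_billed
FROM
  `region-us`.INFORMATION_SCHEMA.JOBS
WHERE
  creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
  AND job_type = 'QUERY'
  AND statement_type != 'SCRIPT'
GROUP BY user_email;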
Multi-statement query jobs
A multi-statement query job is a query job that uses the procedural language. Multi-statement query jobs often define variables with DECLARE or have control flow statements such as IF or WHILE. When you query INFORMATION_SCHEMA.JOBS, you might need to recognize the difference between a multi-statement query job and other jobs. A multi-statement query job has the following traits (see the sketch after this list):
- statement_type = SCRIPT
- reservation_id = NULL
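For example, the following sketch uses these traits to list recent multi-statement query jobs; the one-day window and `region-us` qualifier are illustrative assumptions:

-- Sketch: multi-statement (script) jobs created in the last day.
SELECT
  job_id,
  creation_time,
  user_email
FROM
  `region-us`.INFORMATION_SCHEMA.JOBS
WHERE
  creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
  AND statement_type = 'SCRIPT';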
Child jobs
Each of a multi-statement query job's child jobs has a parent_job_id pointing to the multi-statement query job itself. The parent job includes summary values for all child jobs that were executed as part of it.
If you query INFORMATION_SCHEMA.JOBS to find a summary cost of query jobs, then you should exclude the SCRIPT statement type. Otherwise, some values such as total_slot_ms might be counted twice.
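As a sketch, you can list the child jobs of a single multi-statement job by filtering on parent_job_id; the parent job ID, one-day window, and `region-us` qualifier are placeholders:

-- Sketch: child jobs of one multi-statement (parent) job.
SELECT
  job_id,
  statement_type,
  total_slot_ms
FROM
  `region-us`.INFORMATION_SCHEMA.JOBS
WHERE
  creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
  AND parent_job_id = 'PARENT_JOB_ID';  -- placeholder parent job ID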
Data retention
This view displays running jobs along with job history for the past 180 days. If a project migrates to an organization (either from having no organization or from a different one), job information predating the migration date isn't accessible through the INFORMATION_SCHEMA.JOBS view, as the view only retains data starting from the migration date.
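To see how far back job history is available in your project, a minimal sketch such as the following returns the oldest job creation time still visible in the view; note that it intentionally scans the full retained history, and the `region-us` qualifier is an illustrative assumption:

-- Sketch: oldest job still visible (scans all retained partitions).
SELECT
  MIN(creation_time) AS earliest_visible_job
FROM
  `region-us`.INFORMATION_SCHEMA.JOBS;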
Scope and syntax
Queries against this view must include a region qualifier. The following table explains the region scope for this view:
| View name | Resource scope | Region scope |
|---|---|---|
| [PROJECT_ID.]`region-REGION`.INFORMATION_SCHEMA.JOBS[_BY_PROJECT] | Project level | REGION |
- Optional: PROJECT_ID: the ID of your Google Cloud project. If not specified, the default project is used.
- REGION: any dataset region name. For example, `region-us`.

Note: You must use a region qualifier to query INFORMATION_SCHEMA views. The location of the query execution must match the region of the INFORMATION_SCHEMA view.
Examples
To run the query against a project other than your default project, add the project ID in the following format:
`PROJECT_ID`.`region-REGION_NAME`.INFORMATION_SCHEMA.JOBS
- PROJECT_ID: the ID of the project.
- REGION_NAME: the region for your project.

For example, `myproject`.`region-us-central1`.INFORMATION_SCHEMA.JOBS.
Compare on-demand job usage to billing data
For projects using on-demand pricing, you can use the INFORMATION_SCHEMA.JOBS view to review compute charges over a given period.
For projects using capacity-based (slots) pricing, you can use the INFORMATION_SCHEMA.RESERVATIONS_TIMELINE view to review compute charges over a given period.
The following query produces daily estimated aggregates of your billed TiB and the resulting charges. The limitations section explains when these estimates may not match your bill.
For this example only, the following additional variables must be set. Replace them with your own values before running the query.

- START_DATE: the earliest date to aggregate over (inclusive).
- END_DATE: the latest date to aggregate over (inclusive).
- PRICE_PER_TIB: the on-demand price per TiB used for bill estimates.
CREATE TEMP FUNCTION isBillable(error_result ANY TYPE)
AS (
  -- You aren't charged for queries that return an error.
  error_result IS NULL
  -- However, canceling a running query might incur charges.
  OR error_result.reason = 'stopped'
);

-- BigQuery hides the number of bytes billed on all queries against tables with
-- row-level security.
CREATE TEMP FUNCTION isMaybeUsingRowLevelSecurity(
  job_type STRING, tib_billed FLOAT64, error_result ANY TYPE)
AS (
  job_type = 'QUERY'
  AND tib_billed IS NULL
  AND isBillable(error_result)
);

WITH
  query_params AS (
    SELECT
      date 'START_DATE' AS start_date,  -- inclusive
      date 'END_DATE' AS end_date,  -- inclusive
  ),
  usage_with_multiplier AS (
    SELECT
      job_type,
      error_result,
      creation_time,
      -- Jobs are billed by end_time in PST8PDT timezone, regardless of where
      -- the job ran.
      EXTRACT(date FROM end_time AT TIME ZONE 'PST8PDT') billing_date,
      total_bytes_billed / 1024 / 1024 / 1024 / 1024 total_tib_billed,
      CASE statement_type
        WHEN 'SCRIPT' THEN 0
        WHEN 'CREATE_MODEL' THEN 50 * PRICE_PER_TIB
        ELSE PRICE_PER_TIB
      END AS multiplier,
    FROM `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS
    WHERE statement_type <> 'SCRIPT'
  )
SELECT
  billing_date,
  sum(total_tib_billed * multiplier) estimated_charge,
  sum(total_tib_billed) estimated_usage_in_tib,
  countif(isMaybeUsingRowLevelSecurity(job_type, total_tib_billed, error_result))
    AS jobs_using_row_level_security,
FROM usage_with_multiplier, query_params
WHERE 1 = 1
  -- Filter by creation_time for partition pruning.
  AND date(creation_time) BETWEEN date_sub(start_date, INTERVAL 2 day) AND date_add(end_date, INTERVAL 1 day)
  AND billing_date BETWEEN start_date AND end_date
  AND isBillable(error_result)
GROUP BY billing_date
ORDER BY billing_date;
Limitations
- BigQuery hides some statistics for queries over tables with row-level security. The provided query counts the number of jobs impacted as jobs_using_row_level_security, but does not have access to the billable usage.
- BigQuery ML pricing for on-demand queries depends on the type of model being created. INFORMATION_SCHEMA.JOBS does not track which type of model was created, so the provided query assumes all CREATE_MODEL statements were creating the higher billed model types.
- Apache Spark procedures use a similar pricing model, but charges are reported as the BigQuery Enterprise edition pay-as-you-go SKU. INFORMATION_SCHEMA.JOBS tracks this usage as total_bytes_billed, but cannot determine which SKU the usage represents.
Calculate average slot utilization
The following example calculates average slot utilization for all queries over the past 7 days for a given project. Note that this calculation is most accurate for projects that have consistent slot usage throughout the week. If your project does not have consistent slot usage, this number might be lower than expected.
To run the query:
SELECT
  SUM(total_slot_ms) / (1000 * 60 * 60 * 24 * 7) AS avg_slots
FROM
  `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS
WHERE
  -- Filter by the partition column first to limit the amount of data scanned.
  -- Eight days allows for jobs created before the 7 day end_time filter.
  creation_time BETWEEN TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 8 DAY) AND CURRENT_TIMESTAMP()
  AND job_type = 'QUERY'
  AND statement_type != 'SCRIPT'
  AND end_time BETWEEN TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY) AND CURRENT_TIMESTAMP();
Note: INFORMATION_SCHEMA view names are case-sensitive.

The result is similar to the following:

+------------+
| avg_slots  |
+------------+
| 3879.1534  |
+------------+
You can check usage for a particular reservation with WHERE reservation_id = "…". This can be helpful to determine percentage use of a reservation over a period of time. For script jobs, the parent job also reports the total slot usage from its child jobs. To avoid double counting, use WHERE statement_type != "SCRIPT" to exclude the parent job.
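For example, a variant of the preceding query scoped to a single reservation might look like the following sketch; the reservation ID and `region-us` qualifier are placeholders:

-- Sketch: average slot usage for one reservation over the past 7 days.
SELECT
  SUM(total_slot_ms) / (1000 * 60 * 60 * 24 * 7) AS avg_slots
FROM
  `region-us`.INFORMATION_SCHEMA.JOBS
WHERE
  creation_time BETWEEN TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 8 DAY) AND CURRENT_TIMESTAMP()
  AND end_time BETWEEN TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY) AND CURRENT_TIMESTAMP()
  AND job_type = 'QUERY'
  AND statement_type != 'SCRIPT'
  AND reservation_id = 'RESERVATION_ID';  -- placeholder reservation name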
If instead you would like to check the average slot utilization for individual jobs, use total_slot_ms / TIMESTAMP_DIFF(end_time, start_time, MILLISECOND).
Count recent active queries by query priority
The following example displays the number of queries, grouped by priority (interactive or batch), that were started within the last 7 hours:

SELECT
  priority,
  COUNT(*) active_jobs
FROM
  `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS
WHERE
  creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 hour)
  AND job_type = 'QUERY'
GROUP BY priority;
The result is similar to the following:
+-------------+-------------+
| priority    | active_jobs |
+-------------+-------------+
| INTERACTIVE |           2 |
| BATCH       |           3 |
+-------------+-------------+

The priority field indicates whether a query is INTERACTIVE or BATCH.
View load job history
The following example lists all users or service accounts that submitted a batch load job for a given project. Because no time boundary is specified, this query scans all available history.

SELECT
  user_email AS user,
  COUNT(*) num_jobs
FROM
  `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS
WHERE
  job_type = 'LOAD'
GROUP BY user_email;
Note: INFORMATION_SCHEMA view names are case-sensitive.

The result is similar to the following:

+--------------+
| user         |
+--------------+
| abc@xyz.com  |
| xyz@xyz.com  |
| bob@xyz.com  |
+--------------+
Get the number of load jobs to determine the daily job quota used
The following example returns the number of jobs by day, dataset, and table sothat you can determine how much of the daily job quota is used.
SELECT
  DATE(creation_time) as day,
  destination_table.project_id as project_id,
  destination_table.dataset_id as dataset_id,
  destination_table.table_id as table_id,
  COUNT(job_id) AS load_job_count
FROM
  `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS
WHERE
  creation_time BETWEEN TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 8 DAY) AND CURRENT_TIMESTAMP()
  AND job_type = "LOAD"
GROUP BY day, project_id, dataset_id, table_id
ORDER BY day DESC;

Note: INFORMATION_SCHEMA view names are case-sensitive.

The result is similar to the following:

+------------+------------+------------+----------+----------------+
| day        | project_id | dataset_id | table_id | load_job_count |
+------------+------------+------------+----------+----------------+
| 2020-10-10 | my_project | dataset1   | orders   | 58             |
| 2020-10-10 | my_project | dataset1   | product  | 20             |
| 2020-10-10 | my_project | dataset1   | sales    | 11             |
+------------+------------+------------+----------+----------------+
Get the last few failed jobs
The following example shows the last three failed jobs:
SELECT
  job_id,
  creation_time,
  user_email,
  error_result
FROM
  `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE
  error_result IS NOT NULL
ORDER BY creation_time DESC
LIMIT 3;
The results should look similar to the following:
+-----------+-------------------------+-----------------+--------------------------------------+
| job_id    | creation_time           | user_email      | error_result                         |
+-----------+-------------------------+-----------------+--------------------------------------+
| bquxjob_1 | 2020-10-10 00:00:00 UTC | abc@example.com | Column 'col1' has mismatched type... |
| bquxjob_2 | 2020-10-11 00:00:00 UTC | xyz@example.com | Column 'col1' has mismatched type... |
| bquxjob_3 | 2020-10-11 00:00:00 UTC | bob@example.com | Column 'col1' has mismatched type... |
+-----------+-------------------------+-----------------+--------------------------------------+
Query the list of long running jobs
The following example shows the list of long-running jobs that are in the RUNNING or PENDING state for more than 30 minutes:

SELECT
  job_id,
  job_type,
  state,
  creation_time,
  start_time,
  user_email
FROM
  `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS
WHERE
  state != "DONE"
  AND creation_time <= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 MINUTE)
ORDER BY creation_time ASC;
The result is similar to the following:
+-----------+----------+---------+--------------------------------+--------------------------------+-----------------+
| job_id    | job_type | state   | creation_time                  | start_time                     | user_email      |
+-----------+----------+---------+--------------------------------+--------------------------------+-----------------+
| bquxjob_1 | QUERY    | RUNNING | 2023-05-03 05:07:22.818000 UTC | 2023-05-03 05:07:22.905000 UTC | abc@example.com |
| bquxjob_2 | QUERY    | PENDING | 2023-05-01 02:05:47.925000 UTC | 2023-05-01 02:05:47.998000 UTC | xyz@example.com |
| bquxjob_3 | QUERY    | PENDING | 2023-05-01 02:05:47.925000 UTC | 2023-05-01 02:05:47.998000 UTC | abc@example.com |
+-----------+----------+---------+--------------------------------+--------------------------------+-----------------+
Queries using optional job creation mode
The following example shows a list of queries that were executed in optional job creation mode for which BigQuery did not create jobs.

SELECT
  job_id,
FROM
  `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS
WHERE
  TIMESTAMP_TRUNC(creation_time, DAY) = '2024-06-12'
  AND job_creation_reason.code IS NULL
LIMIT 3;
The results should look like the following:
+-----------+
| job_id    |
+-----------+
| bquxjob_1 |
| bquxjob_2 |
| bquxjob_3 |
+-----------+
The following example shows information about a query that was executed in optional job creation mode for which BigQuery did not create a job.

SELECT
  job_id,
  statement_type,
  priority,
  cache_hit,
  job_creation_reason.code AS job_creation_reason_code,
  total_bytes_billed,
  total_bytes_processed,
  total_slot_ms,
  state,
  error_result.message AS error_result_message,
FROM
  `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS
WHERE
  TIMESTAMP_TRUNC(creation_time, DAY) = '2024-06-12'
  AND job_id = '2Lm09bHxDEsoVK8zwzWJomLHU_Ud%1910479b151'  -- queryId

Note: The job_id field contains the queryId of the query when a job was not created for this query.
The results should look like the following:
+-----------+----------------+-------------+-----------+--------------------------+--------------------+-----------------------+---------------+-------+----------------------+
| job_id    | statement_type | priority    | cache_hit | job_creation_reason_code | total_bytes_billed | total_bytes_processed | total_slot_ms | state | error_result_message |
+-----------+----------------+-------------+-----------+--------------------------+--------------------+-----------------------+---------------+-------+----------------------+
| bquxjob_1 | SELECT         | INTERACTIVE | false     | null                     | 161480704          | 161164718             | 3106          | DONE  | null                 |
+-----------+----------------+-------------+-----------+--------------------------+--------------------+-----------------------+---------------+-------+----------------------+
The following example shows a list of queries that were executed in optional job creation mode for which BigQuery did create jobs.

SELECT
  job_id,
  job_creation_reason.code AS job_creation_reason_code
FROM
  `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS
WHERE
  TIMESTAMP_TRUNC(creation_time, DAY) = '2024-06-12'
  AND job_creation_reason.code IS NOT NULL
  AND job_creation_reason.code != 'REQUESTED'
LIMIT 3
The results should look like the following:
+-----------+--------------------------+
| job_id    | job_creation_reason_code |
+-----------+--------------------------+
| bquxjob_1 | LARGE_RESULTS            |
| bquxjob_2 | LARGE_RESULTS            |
| bquxjob_3 | LARGE_RESULTS            |
+-----------+--------------------------+
Bytes processed per user identity
The following example shows the total bytes billed for query jobs per user:
SELECT
  user_email,
  SUM(total_bytes_billed) AS bytes_billed
FROM
  `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS
WHERE
  job_type = 'QUERY'
  AND statement_type != 'SCRIPT'
GROUP BY user_email;

Note: See the caveat for the total_bytes_billed column in the schema documentation for the JOBS views.
The results should look like the following:
+---------------------+--------------+
| user_email          | bytes_billed |
+---------------------+--------------+
| bob@example.com     | 2847932416   |
| alice@example.com   | 1184890880   |
| charles@example.com | 10485760     |
+---------------------+--------------+
Aggregate Connected Sheets usage by user at the project level
If you don't have organization-level permissions or only need to monitor a specific project, run the following query to identify the top Connected Sheets users within a project over the last 30 days. The query aggregates the total number of queries, total bytes billed, and total slot milliseconds for each user. This information is useful for understanding adoption and for identifying top consumers of resources.

SELECT
  user_email,
  COUNT(*) AS total_queries,
  SUM(total_bytes_billed) AS total_bytes_billed,
  SUM(total_slot_ms) AS total_slot_ms
FROM
  -- This view queries the project you are currently running the query in.
  `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE
  -- Filter for jobs created in the last 30 days
  creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
  -- Filter for jobs originating from Connected Sheets
  AND job_id LIKE 'sheets_dataconnector%'
  -- Filter for completed jobs
  AND state = 'DONE'
  AND (statement_type IS NULL OR statement_type <> 'SCRIPT')
GROUP BY user_email
ORDER BY total_bytes_billed DESC
LIMIT 10;

Replace REGION_NAME with the region for your project. For example, region-us.
Note: You must use a region qualifier to query INFORMATION_SCHEMA views. The location of the query execution must match the region of the INFORMATION_SCHEMA view.

The result looks similar to the following:

+---------------------+---------------+--------------------+---------------+
| user_email          | total_queries | total_bytes_billed | total_slot_ms |
+---------------------+---------------+--------------------+---------------+
| alice@example.com   | 152           | 12000000000        | 3500000       |
| bob@example.com     | 45            | 8500000000         | 2100000       |
| charles@example.com | 210           | 1100000000         | 1800000       |
+---------------------+---------------+--------------------+---------------+
Find job logs of Connected Sheets queries at the project level
If you don't have organization-level permissions or only need to monitor a specific project, run the following query to see a detailed log of all Connected Sheets queries for the current project:

SELECT
  job_id,
  creation_time,
  user_email,
  total_bytes_billed,
  total_slot_ms,
  query
FROM
  -- This view queries the project you are currently running the query in.
  `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE
  creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
  AND job_id LIKE 'sheets_dataconnector%'
  AND state = 'DONE'
  AND (statement_type IS NULL OR statement_type <> 'SCRIPT')
ORDER BY creation_time DESC;

Replace REGION_NAME with the region for your project. For example, region-us.
Note: You must use a region qualifier to query INFORMATION_SCHEMA views. The location of the query execution must match the region of the INFORMATION_SCHEMA view.

The result looks similar to the following:

+--------------------------------+--------------------------------+-----------------+--------------------+---------------+--------------------------------+
| job_id                         | creation_time                  | user_email      | total_bytes_billed | total_slot_ms | query                          |
+--------------------------------+--------------------------------+-----------------+--------------------+---------------+--------------------------------+
| sheets_dataconnector_bquxjob_1 | 2025-11-06 00:26:53.077000 UTC | abc@example.com | 12000000000        | 3500000       | SELECT ... FROM dataset.table1 |
| sheets_dataconnector_bquxjob_2 | 2025-11-06 00:24:04.294000 UTC | xyz@example.com | 8500000000         | 2100000       | SELECT ... FROM dataset.table2 |
| sheets_dataconnector_bquxjob_3 | 2025-11-03 23:17:25.975000 UTC | bob@example.com | 1100000000         | 1800000       | SELECT ... FROM dataset.table3 |
+--------------------------------+--------------------------------+-----------------+--------------------+---------------+--------------------------------+
Hourly breakdown of bytes processed
The following example shows total bytes billed for query jobs, in hourly intervals:

SELECT
  TIMESTAMP_TRUNC(end_time, HOUR) AS time_window,
  SUM(total_bytes_billed) AS bytes_billed
FROM
  `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS
WHERE
  job_type = 'QUERY'
  AND statement_type != 'SCRIPT'
GROUP BY time_window
ORDER BY time_window DESC;
The result is similar to the following:
+-------------------------+--------------+
| time_window             | bytes_billed |
+-------------------------+--------------+
| 2022-05-17 20:00:00 UTC | 1967128576   |
| 2022-05-10 21:00:00 UTC | 0            |
| 2022-04-15 17:00:00 UTC | 41943040     |
+-------------------------+--------------+
Query jobs per table
The following example shows how many times each table queried in my_project was referenced by a query job:

SELECT
  t.project_id,
  t.dataset_id,
  t.table_id,
  COUNT(*) AS num_references
FROM
  `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS, UNNEST(referenced_tables) AS t
GROUP BY t.project_id, t.dataset_id, t.table_id
ORDER BY num_references DESC;
The result is similar to the following:
+------------+------------+----------+----------------+
| project_id | dataset_id | table_id | num_references |
+------------+------------+----------+----------------+
| my_project | dataset1   | orders   | 58             |
| my_project | dataset1   | products | 40             |
| other_proj | dataset1   | accounts | 12             |
+------------+------------+----------+----------------+
Legacy SQL query jobs count per project
The query_dialect field in the INFORMATION_SCHEMA views has been available since May 2025. The following example shows how many legacy SQL query jobs are executed by projects.

SELECT
  project_id,
  -- Implicitly defaulted to LegacySQL since the query dialect was not specified
  -- in the request.
  COUNTIF(query_dialect = 'DEFAULT_LEGACY_SQL') AS default_legacysql_query_jobs,
  -- Explicitly requested LegacySQL.
  COUNTIF(query_dialect = 'LEGACY_SQL') AS legacysql_query_jobs,
FROM
  `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS
WHERE
  query_dialect = 'DEFAULT_LEGACY_SQL'
  OR query_dialect = 'LEGACY_SQL'
GROUP BY project_id
ORDER BY default_legacysql_query_jobs DESC, legacysql_query_jobs DESC;
Number of partitions modified by query and load jobs per table
The following example shows the number of partitions modified by queries with DML statements and load jobs, per table. Note that this query doesn't show the total_modified_partitions for copy jobs.

SELECT
  destination_table.table_id,
  SUM(total_modified_partitions) AS total_modified_partitions
FROM
  `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS
WHERE
  DATE(creation_time, "America/Los_Angeles") = CURRENT_DATE()
GROUP BY table_id
ORDER BY total_modified_partitions DESC
Average number of slots per millisecond used by a job
The following example shows how to calculate the average number of slots used by a job throughout the execution. This can be helpful when troubleshooting slow queries and comparing a slow execution of a query to a faster execution of the same query. Comparing this value with the total reservation size and the average number of concurrent jobs executed within the project or reservation can help you to understand whether multiple queries were competing for slots at the same time during the execution.
A higher average number of slots means more resources allocated to the job, which generally results in a faster execution.
SELECT
  ROUND(SAFE_DIVIDE(total_slot_ms, TIMESTAMP_DIFF(end_time, start_time, MILLISECOND)), 1) as avg_slots_per_ms
FROM
  `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE
  job_id = 'JOB_ID'

Replace JOB_ID with the job_id you are investigating.

Note: INFORMATION_SCHEMA view names are case-sensitive.

The result will be similar to the following:

+------------------+
| avg_slots_per_ms |
+------------------+
| 17.0             |
+------------------+
Most expensive queries by project
The following example lists the most expensive queries in my_project by slot usage time:

SELECT
  job_id,
  query,
  user_email,
  total_slot_ms
FROM
  `my_project`.`region-REGION_NAME`.INFORMATION_SCHEMA.JOBS
WHERE
  EXTRACT(DATE FROM creation_time) = current_date()
ORDER BY total_slot_ms DESC
LIMIT 3

You can also list the most expensive queries by data processed with the following example:

SELECT
  job_id,
  query,
  user_email,
  total_bytes_processed
FROM
  `my_project`.`region-REGION_NAME`.INFORMATION_SCHEMA.JOBS
WHERE
  EXTRACT(DATE FROM creation_time) = current_date()
ORDER BY total_bytes_processed DESC
LIMIT 3
The result for either example is similar to the following:
+-----------+--------------------------------+---------------------+---------------+
| job_id    | query                          | user_email          | total_slot_ms |
+-----------+--------------------------------+---------------------+---------------+
| bquxjob_1 | SELECT ... FROM dataset.table1 | bob@example.com     | 80,000        |
| bquxjob_2 | SELECT ... FROM dataset.table2 | alice@example.com   | 78,000        |
| bquxjob_3 | SELECT ... FROM dataset.table3 | charles@example.com | 75,000        |
+-----------+--------------------------------+---------------------+---------------+
Get details about a resource warning
If you get a Resources exceeded error message, you can inquire about the queries in a time window:

SELECT
  query,
  query_info.resource_warning
FROM
  `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS
WHERE
  creation_time BETWEEN TIMESTAMP("2022-12-01") AND TIMESTAMP("2022-12-08")
  AND query_info.resource_warning IS NOT NULL
LIMIT 3;
Monitor resource warnings grouped by date
If you get a Resources exceeded error message, you can monitor the total number of resource warnings grouped by date to know if there are any changes to workload:

WITH resource_warnings AS (
  SELECT
    EXTRACT(DATE FROM creation_time) AS creation_date
  FROM
    `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS
  WHERE
    creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 14 DAY)
    AND query_info.resource_warning IS NOT NULL
)
SELECT
  creation_date,
  COUNT(1) AS warning_counts
FROM
  resource_warnings
GROUP BY creation_date
ORDER BY creation_date DESC;
Estimate slot usage and cost for queries
The following example computes the average slots and max slots for each job by using estimated_runnable_units.
The reservation_id is NULL if you don't have any reservations.

SELECT
  project_id,
  job_id,
  reservation_id,
  EXTRACT(DATE FROM creation_time) AS creation_date,
  TIMESTAMP_DIFF(end_time, start_time, SECOND) AS job_duration_seconds,
  job_type,
  user_email,
  total_bytes_billed,
  -- Average slot utilization per job is calculated by dividing total_slot_ms by the millisecond duration of the job
  SAFE_DIVIDE(job.total_slot_ms, (TIMESTAMP_DIFF(job.end_time, job.start_time, MILLISECOND))) AS job_avg_slots,
  query,
  -- Determine the max number of slots used at ANY stage in the query.
  -- The average slots might be 55. But a single stage might spike to 2000 slots.
  -- This is important to know when estimating number of slots to purchase.
  MAX(SAFE_DIVIDE(unnest_job_stages.slot_ms, unnest_job_stages.end_ms - unnest_job_stages.start_ms)) AS jobstage_max_slots,
  -- Check if there's a job that requests more units of works (slots). If so you need more slots.
  -- estimated_runnable_units = Units of work that can be scheduled immediately.
  -- Providing additional slots for these units of work accelerates the query,
  -- if no other query in the reservation needs additional slots.
  MAX(unnest_timeline.estimated_runnable_units) AS estimated_runnable_units
FROM `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS AS job
  CROSS JOIN UNNEST(job_stages) as unnest_job_stages
  CROSS JOIN UNNEST(timeline) AS unnest_timeline
WHERE
  DATE(creation_time) BETWEEN DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY) AND CURRENT_DATE()
  AND project_id = 'my_project'
  AND (statement_type != 'SCRIPT' OR statement_type IS NULL)
GROUP BY 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
ORDER BY job_id;
The result is similar to the following:

+------------+----------+----------------+---------------+----------------------+----------+-----------------+--------------------+---------------+--------------------------------+--------------------+--------------------------+
| project_id | job_id   | reservation_id | creation_date | job_duration_seconds | job_type | user_email      | total_bytes_billed | job_avg_slots | query                          | jobstage_max_slots | estimated_runnable_units |
+------------+----------+----------------+---------------+----------------------+----------+-----------------+--------------------+---------------+--------------------------------+--------------------+--------------------------+
| project1   | bquxjob1 | reservation1   | 2020-10-10    | 160                  | LOAD     | abc@example.com | 161480704          | 2890          | SELECT ... FROM dataset.table1 | 2779.1534          | 1000                     |
| project1   | bquxjob2 | reservation2   | 2020-12-10    | 120                  | LOAD     | abc@example.com | 161480704          | 2890          | SELECT ... FROM dataset.table1 | 2779.1534          | 1000                     |
| project1   | bquxjob3 | reservation1   | 2020-12-10    | 120                  | LOAD     | abc@example.com | 161480704          | 2890          | SELECT ... FROM dataset.table1 | 1279.1534          | 998                      |
+------------+----------+----------------+---------------+----------------------+----------+-----------------+--------------------+---------------+--------------------------------+--------------------+--------------------------+
View performance insights for queries
The following example returns all query jobs that have performance insights from your project in the last 30 days, along with a URL that links to the query execution graph in the Google Cloud console.

SELECT
  `bigquery-public-data`.persistent_udfs.job_url(project_id || ':us.' || job_id) AS job_url,
  query_info.performance_insights
FROM
  `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE
  DATE(creation_time) >= CURRENT_DATE - 30  -- scan 30 days of query history
  AND job_type = 'QUERY'
  AND state = 'DONE'
  AND error_result IS NULL
  AND statement_type != 'SCRIPT'
  AND EXISTS (  -- Only include queries which had performance insights
    SELECT 1
    FROM UNNEST(query_info.performance_insights.stage_performance_standalone_insights)
    WHERE slot_contention OR insufficient_shuffle_quota
      OR bi_engine_reasons IS NOT NULL
      OR high_cardinality_joins IS NOT NULL
      OR partition_skew IS NOT NULL
    UNION ALL
    SELECT 1
    FROM UNNEST(query_info.performance_insights.stage_performance_change_insights)
    WHERE input_data_change.records_read_diff_percentage IS NOT NULL
  );
View metadata refresh jobs
The following example lists the metadata refresh jobs in the last six hours:

SELECT
  *
FROM
  `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE
  job_id LIKE '%metadata_cache_refresh%'
  AND creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 6 HOUR)
ORDER BY start_time desc
LIMIT 10;
ReplaceREGION_NAME with your region.
Analyze performance over time for identical queries
The following example returns the three slowest jobs over the past 7 days that have run the same query:

DECLARE querytext STRING DEFAULT (
  SELECT query
  FROM `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS
  WHERE job_id = 'JOB_ID'
  LIMIT 1
);

SELECT
  start_time,
  end_time,
  project_id,
  job_id,
  TIMESTAMP_DIFF(end_time, start_time, SECOND) AS run_secs,
  total_bytes_processed / POW(1024, 3) AS total_gigabytes_processed,
  query
FROM
  `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS
WHERE
  query = querytext
  AND total_bytes_processed > 0
  AND creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
ORDER BY 5 DESC
LIMIT 3;
Replace JOB_ID with any job_id that ran the query you are analyzing.
View jobs with slot contention insights
To view jobs with their slot contention insights, run the following query:
SELECT
  job_id,
  creation_time,
  query_info.performance_insights,
FROM
  `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS j,
  UNNEST(query_info.performance_insights.stage_performance_standalone_insights) i
WHERE
  (j.statement_type != "SCRIPT" OR j.statement_type IS NULL)
  AND i IS NOT NULL
  AND i.slot_contention

The output shows different performance insights about jobs, including slot contention:

+-----------+-------------------------+------------------------------------------------+-----------------------------------------------------------------------------+
| job_id    | creation_time           | performance_insights.avg_previous_execution_ms | performance_insights.stage_performance_standalone_insights.slot_contention  |
+-----------+-------------------------+------------------------------------------------+-----------------------------------------------------------------------------+
| bquxjob_1 | 2025-08-08 00:00:00 UTC | null                                           | true                                                                        |
| bquxjob_2 | 2025-08-08 00:00:00 UTC | 42689                                          | true                                                                        |
| bquxjob_3 | 2025-08-08 00:00:00 UTC | 42896                                          | true                                                                        |
+-----------+-------------------------+------------------------------------------------+-----------------------------------------------------------------------------+
View average concurrent jobs running alongside a particular job in the same project
The following example demonstrates how to calculate the average number of jobs running at the same time as a specific query job in the same project.
This calculation helps determine if an increased number of concurrent jobs within the same project caused slot contention problems. Gather this data when troubleshooting slow queries or comparing slow and fast query runs.
If there are far more concurrent queries running than expected, check if more jobs were started, queried data changed, or both.
WITH
  job_metadata AS (
    SELECT creation_time, end_time, job_type
    FROM `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
    WHERE job_id = 'JOB_ID'
    -- If you know the date the job was created, add the following line to speed up the query by providing the date in UTC:
    -- AND DATE(creation_time) = 'YYYY-MM-DD'
  ),
  intervals AS (
    SELECT
      TIMESTAMP_ADD(creation_time, INTERVAL (seconds_offset) SECOND) AS ts,
      job_type
    FROM
      job_metadata,
      UNNEST(GENERATE_ARRAY(0, IF(TIMESTAMP_DIFF(end_time, creation_time, SECOND) > 0, TIMESTAMP_DIFF(end_time, creation_time, SECOND), 1))) as seconds_offset
  ),
  concurrent_jobs AS (
    SELECT
      int.ts,
      COUNT(*) as concurrent_jobs_count
    FROM
      intervals int
      JOIN `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS_BY_PROJECT j
        ON int.ts BETWEEN j.creation_time and j.end_time
    WHERE
      job_id != 'JOB_ID'
      AND j.job_type = int.job_type
    GROUP BY int.ts
  )
SELECT
  ROUND(AVG(concurrent_jobs_count), 1) as average_concurrent_jobs
FROM concurrent_jobs

Optionally, replace SECOND with MILLISECOND in the query body for millisecond sampling. Replace the following:

- JOB_ID: the job ID of the query that you are analyzing
- REGION_NAME: the region for your project
The result is similar to the following:
+--------------------------+
| average_concurrent_jobs  |
+--------------------------+
| 2.8                      |
+--------------------------+
Get bytes processed by extract jobs
The following example computes the total_bytes_processed value for EXTRACT job types. For information about quotas for extract jobs, see Quota policy for extract jobs. The total bytes processed can be used to monitor the aggregate usage and verify that extract jobs stay below the 50 TB per-day limit:

SELECT
  DATE(creation_time) as day,
  project_id as source_project_id,
  SUM(total_bytes_processed) AS total_bytes_processed
FROM
  `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS
WHERE
  creation_time BETWEEN TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 8 DAY) AND CURRENT_TIMESTAMP()
  AND job_type = "EXTRACT"
GROUP BY day, source_project_id
ORDER BY day DESC;
Get usage of copy jobs
For information about copy jobs, see Copy a table. The following example provides the usage of copy jobs:

SELECT
  DATE(creation_time) as day,
  project_id as source_project_id,
  CONCAT(destination_table.project_id, ":", destination_table.dataset_id, ".", destination_table.table_id) as destination_table,
  COUNT(job_id) AS copy_job_count
FROM
  `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS
WHERE
  creation_time BETWEEN TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 8 DAY) AND CURRENT_TIMESTAMP()
  AND job_type = "COPY"
GROUP BY day, source_project_id, destination_table
ORDER BY day DESC;
Get usage of BigLake tables for Apache Iceberg in BigQuery storage optimization
The following example provides the usage of BigLake tables for Apache Iceberg in BigQuery storage optimization.

SELECT
  job_id,
  reservation_id,
  edition,
  total_slot_ms,
  total_bytes_processed,
  state
FROM
  `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS
WHERE
  creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 6 HOUR)
  AND user_email = "bigquery-adminbot@system.gserviceaccount.com"
  AND query LIKE "CALL BQ.OPTIMIZE_STORAGE(%)";
Get usage of BigLake Iceberg table in BigQuery export table metadata
The following example provides the usage of the Iceberg EXPORT TABLE METADATA FROM statement.

SELECT
  job_id,
  user_email,
  start_time,
  end_time,
  TIMESTAMP_DIFF(end_time, start_time, SECOND) AS duration_seconds,
  total_bytes_processed,
  reservation_id,
  CASE
    WHEN reservation_id IS NULL THEN 'PAYG (On-demand)'
    WHEN reservation_id != '' THEN 'Reservation'
    ELSE 'Unknown'
  END AS compute_type,
  query
FROM
  `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS
WHERE
  job_type = 'QUERY'
  AND end_time IS NOT NULL
  -- Filter for queries containing the specified pattern (case-insensitive)
  AND REGEXP_CONTAINS(LOWER(query), r"export table metadata from")
ORDER BY start_time DESC
LIMIT 3;
Match slot usage behavior from administrative resource charts
To explore slot usage behavior similar to the information in administrative resource charts, query the INFORMATION_SCHEMA.JOBS_TIMELINE view.
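For example, a minimal sketch along the following lines aggregates slot usage per minute from that view; the one-hour window and `region-us` qualifier are illustrative assumptions, and the column names (period_start, period_slot_ms, job_creation_time) are as documented for the JOBS_TIMELINE view:

-- Sketch: approximate slots in use, per minute, over the last hour.
SELECT
  TIMESTAMP_TRUNC(period_start, MINUTE) AS minute,
  SUM(period_slot_ms) / (1000 * 60) AS avg_slots
FROM
  `region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE
WHERE
  job_creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)
GROUP BY minute
ORDER BY minute;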