WRITE_API_TIMELINE_BY_FOLDER view

The INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER view contains per-minute aggregated BigQuery Storage Write API ingestion statistics for the parent folder of the current project, including its subfolders.

You can query the INFORMATION_SCHEMA Write API views to retrieve historical and real-time information about data ingestion into BigQuery that uses the BigQuery Storage Write API. See BigQuery Storage Write API for more information.

Required permission

To query the INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER view, you need the bigquery.tables.list Identity and Access Management (IAM) permission for the parent folder of the project.

Each of the following predefined IAM roles includes the preceding permission:

  • roles/bigquery.admin
  • roles/bigquery.user
  • roles/bigquery.dataViewer
  • roles/bigquery.dataEditor
  • roles/bigquery.dataOwner
  • roles/bigquery.metadataViewer
  • roles/bigquery.resourceAdmin
Caution: The required `bigquery.tables.list` permission is not included in the basic roles Owner or Editor.

For more information about BigQuery permissions, see Access control with IAM.

Schema

When you query the INFORMATION_SCHEMA BigQuery Storage Write API views, the query results contain historical and real-time information about data ingestion into BigQuery using the BigQuery Storage Write API. Each row in the following views represents statistics for ingestion into a specific table, aggregated over a one-minute interval starting at start_timestamp. Statistics are grouped by stream type and error code, so there is one row for each stream type and each encountered error code during the one-minute interval for each timestamp and table combination. Successful requests have the error code set to OK. If no data was ingested into a table during a certain time period, then no rows are present for the corresponding timestamps for that table.

The INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER view has the following schema:
Column name | Data type | Value
----------- | --------- | -----
start_timestamp | TIMESTAMP | (Partitioning column) Start timestamp of the one-minute interval for the aggregated statistics.
folder_numbers | REPEATED INTEGER | Number IDs of the folders that contain the project, starting with the folder that immediately contains the project, followed by the folder that contains the child folder, and so forth. For example, if folder_numbers is [1, 2, 3], then folder 1 immediately contains the project, folder 2 contains folder 1, and folder 3 contains folder 2. This column is only populated in WRITE_API_TIMELINE_BY_FOLDER.
project_id | STRING | (Clustering column) ID of the project.
project_number | INTEGER | Number of the project.
dataset_id | STRING | (Clustering column) ID of the dataset.
table_id | STRING | (Clustering column) ID of the table.
stream_type | STRING | The stream type used for data ingestion with the BigQuery Storage Write API. One of "DEFAULT", "COMMITTED", "BUFFERED", or "PENDING".
error_code | STRING | Error code returned for the requests specified by this row. "OK" for successful requests.
total_requests | INTEGER | Total number of requests within the one-minute interval.
total_rows | INTEGER | Total number of rows from all requests within the one-minute interval.
total_input_bytes | INTEGER | Total number of bytes from all rows within the one-minute interval.
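As a sketch of how the folder_numbers column can be used, the following query restricts results to projects whose immediate parent is one specific folder. The folder ID 111111111111 is a placeholder; substitute your own folder's number ID.

```sql
SELECT
  project_id,
  SUM(total_rows) AS num_rows
FROM
  `region-us`.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER
WHERE
  -- folder_numbers[0] is the folder that immediately contains the project.
  -- 111111111111 is a hypothetical folder number ID.
  folder_numbers[SAFE_OFFSET(0)] = 111111111111
GROUP BY
  project_id;
```

SAFE_OFFSET returns NULL instead of raising an error if the array is empty, so rows without folder information are simply filtered out.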

For stability, we recommend that you explicitly list columns in your INFORMATION_SCHEMA queries instead of using a wildcard (SELECT *). Explicitly listing columns prevents queries from breaking if the underlying schema changes.
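A minimal sketch of a query that follows this recommendation, naming only the columns it needs:

```sql
SELECT
  start_timestamp,
  table_id,
  total_requests
FROM
  `region-us`.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER;
```

If a new column is later added to the view, this query continues to return the same result shape, whereas a SELECT * query would silently change.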

Data retention

This view contains the BigQuery Storage Write API ingestion history of the past 180 days.

Scope and syntax

Queries against this view must include a region qualifier. If you don't specify a regional qualifier, metadata is retrieved from all regions. The following table explains the region scope for this view:

View name | Resource scope | Region scope
--------- | -------------- | ------------
[PROJECT_ID.]`region-REGION`.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER | Folder that contains the specified project | REGION
Replace the following:
  • Optional: PROJECT_ID: the ID of your Google Cloud project. If not specified, the default project is used.
  • REGION: any dataset region name. For example, `region-us`.

    Note: You must use a region qualifier to query INFORMATION_SCHEMA views. The location of the query execution must match the region of the INFORMATION_SCHEMA view.

Example

  • To query data in the US multi-region, use `region-us`.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER
  • To query data in the EU multi-region, use `region-eu`.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER
  • To query data in the asia-northeast1 region, use `region-asia-northeast1`.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER

For a list of available regions, see Dataset locations.
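Putting the syntax together, a minimal query with an explicit project qualifier might look like the following sketch. Here myproject is a placeholder project ID; drop the project qualifier to query the default project.

```sql
SELECT
  start_timestamp,
  error_code,
  total_requests
FROM
  -- myproject is a hypothetical project ID; the query runs against the
  -- folder that contains it, in the US multi-region.
  myproject.`region-us`.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER
LIMIT 10;
```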

Examples

Example 1: Recent BigQuery Storage Write API ingestion failures

The following example calculates the per-minute breakdown of total failed requests for all tables in the project's folder in the last 30 minutes, split by stream type and error code:

SELECT
  start_timestamp,
  stream_type,
  error_code,
  SUM(total_requests) AS num_failed_requests
FROM
  `region-us`.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER
WHERE
  error_code != 'OK'
  AND start_timestamp > TIMESTAMP_SUB(CURRENT_TIMESTAMP, INTERVAL 30 MINUTE)
GROUP BY
  start_timestamp, stream_type, error_code
ORDER BY
  start_timestamp DESC;
Note: INFORMATION_SCHEMA view names are case-sensitive.

The result is similar to the following:

+---------------------+-------------+------------------+---------------------+
|   start_timestamp   | stream_type |    error_code    | num_failed_requests |
+---------------------+-------------+------------------+---------------------+
| 2023-02-24 00:25:00 | PENDING     | NOT_FOUND        |                   5 |
| 2023-02-24 00:25:00 | DEFAULT     | INVALID_ARGUMENT |                   1 |
| 2023-02-24 00:25:00 | DEFAULT     | DEADLINE_EXCEEDED|                   4 |
| 2023-02-24 00:24:00 | PENDING     | INTERNAL         |                   3 |
| 2023-02-24 00:24:00 | DEFAULT     | INVALID_ARGUMENT |                   1 |
| 2023-02-24 00:24:00 | DEFAULT     | DEADLINE_EXCEEDED|                   2 |
+---------------------+-------------+------------------+---------------------+
Example 2: Per-minute breakdown for all requests with error codes

The following example calculates a per-minute breakdown of successful and failed append requests in the project's folder, split into error code categories. This query could be used to populate a dashboard.

SELECT
  start_timestamp,
  SUM(total_requests) AS total_requests,
  SUM(total_rows) AS total_rows,
  SUM(total_input_bytes) AS total_input_bytes,
  SUM(IF(error_code IN ('INVALID_ARGUMENT', 'NOT_FOUND', 'CANCELLED', 'RESOURCE_EXHAUSTED',
                        'ALREADY_EXISTS', 'PERMISSION_DENIED', 'UNAUTHENTICATED',
                        'FAILED_PRECONDITION', 'OUT_OF_RANGE'),
         total_requests, 0)) AS user_error,
  SUM(IF(error_code IN ('DEADLINE_EXCEEDED', 'ABORTED', 'INTERNAL', 'UNAVAILABLE',
                        'DATA_LOSS', 'UNKNOWN'),
         total_requests, 0)) AS server_error,
  SUM(IF(error_code = 'OK', 0, total_requests)) AS total_error
FROM
  `region-us`.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER
GROUP BY
  start_timestamp
ORDER BY
  start_timestamp DESC;

The result is similar to the following:

+---------------------+----------------+------------+-------------------+------------+--------------+-------------+
|   start_timestamp   | total_requests | total_rows | total_input_bytes | user_error | server_error | total_error |
+---------------------+----------------+------------+-------------------+------------+--------------+-------------+
| 2020-04-15 22:00:00 |         441854 |     441854 |       23784853118 |          0 |           17 |          17 |
| 2020-04-15 21:59:00 |         355627 |     355627 |       26101982742 |          8 |            0 |          13 |
| 2020-04-15 21:58:00 |         354603 |     354603 |       26160565341 |          0 |            0 |           0 |
| 2020-04-15 21:57:00 |         298823 |     298823 |       23877821442 |          2 |            0 |           2 |
+---------------------+----------------+------------+-------------------+------------+--------------+-------------+
Example 3: Tables with the most incoming traffic

The following example returns the BigQuery Storage Write API ingestion statistics for the 10 tables in the project's folder with the most incoming traffic:

SELECT
  project_id,
  dataset_id,
  table_id,
  SUM(total_rows) AS num_rows,
  SUM(total_input_bytes) AS num_bytes,
  SUM(total_requests) AS num_requests
FROM
  `region-us`.INFORMATION_SCHEMA.WRITE_API_TIMELINE_BY_FOLDER
GROUP BY
  project_id, dataset_id, table_id
ORDER BY
  num_bytes DESC
LIMIT 10;

The result is similar to the following:

+----------------------+------------+-------------------------------+------------+----------------+--------------+
|      project_id      | dataset_id |           table_id            |  num_rows  |   num_bytes    | num_requests |
+----------------------+------------+-------------------------------+------------+----------------+--------------+
| my-project1          | dataset1   | table1                        | 8016725532 | 73787301876979 |   8016725532 |
| my-project2          | dataset1   | table2                        |   26319580 | 34199853725409 |     26319580 |
| my-project1          | dataset2   | table1                        |   38355294 | 22879180658120 |     38355294 |
| my-project3          | dataset1   | table3                        |  270126906 | 17594235226765 |    270126906 |
| my-project2          | dataset2   | table2                        |   95511309 | 17376036299631 |     95511309 |
| my-project2          | dataset2   | table3                        |   46500443 | 12834920497777 |     46500443 |
| my-project3          | dataset2   | table4                        |   25846270 |  7487917957360 |     25846270 |
| my-project4          | dataset1   | table4                        |   18318404 |  5665113765882 |     18318404 |
| my-project4          | dataset1   | table5                        |   42829431 |  5343969665771 |     42829431 |
| my-project4          | dataset1   | table6                        |    8771021 |  5119004622353 |      8771021 |
+----------------------+------------+-------------------------------+------------+----------------+--------------+

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2025-12-15 UTC.