Monitor continuous queries
You can monitor BigQuery continuous queries by using the following BigQuery tools:
Due to the long-running nature of a BigQuery continuous query, metrics that are usually generated upon the completion of a SQL query might be absent or inaccurate.
Use INFORMATION_SCHEMA views
You can use a number of the INFORMATION_SCHEMA views to monitor continuous queries and continuous query reservations.
View job details
You can use the JOBS view to get continuous query job metadata.
The following query returns the metadata for all active continuous queries. The metadata includes the output watermark timestamp, which represents the point up to which the continuous query has successfully processed data.
In the Google Cloud console, go to the BigQuery page.
In the query editor, run the following query:
SELECT
  start_time,
  job_id,
  user_email,
  query,
  state,
  reservation_id,
  continuous_query_info.output_watermark
FROM
  `PROJECT_ID.region-REGION.INFORMATION_SCHEMA.JOBS`
WHERE
  creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 day)
  AND continuous IS TRUE
  AND state = "RUNNING"
ORDER BY
  start_time DESC
Replace the following:
- PROJECT_ID: the ID of the project.
- REGION: any dataset region name. For example, region-us.
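If you monitor many projects, it can help to generate this query programmatically. The following sketch builds the same JOBS query for a given project and region; the project ID and region used here are placeholders, and the commented-out client call assumes the google-cloud-bigquery library:

```python
# Sketch: build the INFORMATION_SCHEMA.JOBS monitoring query shown above
# for a given project and region. "my-project" and "us" are placeholders.
def build_active_continuous_jobs_query(project_id: str, region: str) -> str:
    """Return SQL that lists currently running continuous queries."""
    return f"""SELECT
  start_time,
  job_id,
  user_email,
  state,
  continuous_query_info.output_watermark
FROM `{project_id}.region-{region}.INFORMATION_SCHEMA.JOBS`
WHERE
  creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 day)
  AND continuous IS TRUE
  AND state = "RUNNING"
ORDER BY start_time DESC""".strip()

sql = build_active_continuous_jobs_query("my-project", "us")

# To actually run it, you could use the google-cloud-bigquery client:
#   from google.cloud import bigquery
#   client = bigquery.Client(project="my-project")
#   for row in client.query(sql).result():
#       print(row.job_id, row.output_watermark)
print(sql.splitlines()[0])  # SELECT
```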
View reservation assignment details
You can use the ASSIGNMENTS and RESERVATIONS views to get continuous query reservation assignment details.
Return reservation assignment details for continuous queries:
In the Google Cloud console, go to the BigQuery page.
In the query editor, run the following query:
SELECT
  reservation.reservation_name,
  reservation.slot_capacity
FROM
  `ADMIN_PROJECT_ID.region-LOCATION.INFORMATION_SCHEMA.ASSIGNMENTS` AS assignment
INNER JOIN
  `ADMIN_PROJECT_ID.region-LOCATION.INFORMATION_SCHEMA.RESERVATIONS` AS reservation
  ON (assignment.reservation_name = reservation.reservation_name)
WHERE
  assignment.assignee_id = 'PROJECT_ID'
  AND job_type = 'CONTINUOUS';
Replace the following:
- ADMIN_PROJECT_ID: the ID of the administration project that owns the reservation.
- LOCATION: the location of the reservation.
- PROJECT_ID: the ID of the project that is assigned to the reservation. Only information about continuous queries running in this project is returned.
View slot consumption information
You can use the ASSIGNMENTS, RESERVATIONS, and JOBS_TIMELINE views to get continuous query slot consumption information.
Return slot consumption information for continuous queries:
In the Google Cloud console, go to the BigQuery page.
In the query editor, run the following query:
SELECT
  jobs.period_start,
  reservation.reservation_name,
  reservation.slot_capacity,
  SUM(jobs.period_slot_ms) / 1000 AS consumed_total_slots
FROM
  `ADMIN_PROJECT_ID.region-LOCATION.INFORMATION_SCHEMA.ASSIGNMENTS` AS assignment
INNER JOIN
  `ADMIN_PROJECT_ID.region-LOCATION.INFORMATION_SCHEMA.RESERVATIONS` AS reservation
  ON (assignment.reservation_name = reservation.reservation_name)
INNER JOIN
  `PROJECT_ID.region-LOCATION.INFORMATION_SCHEMA.JOBS_TIMELINE` AS jobs
  ON (UPPER(CONCAT('ADMIN_PROJECT_ID:LOCATION.', assignment.reservation_name)) = UPPER(jobs.reservation_id))
WHERE
  assignment.assignee_id = 'PROJECT_ID'
  AND assignment.job_type = 'CONTINUOUS'
  AND jobs.period_start BETWEEN TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY) AND CURRENT_TIMESTAMP()
GROUP BY 1, 2, 3
ORDER BY jobs.period_start DESC;
Replace the following:
- ADMIN_PROJECT_ID: the ID of the administration project that owns the reservation.
- LOCATION: the location of the reservation.
- PROJECT_ID: the ID of the project that is assigned to the reservation. Only information about continuous queries running in this project is returned.
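The SUM(jobs.period_slot_ms) / 1000 expression in the query converts slot-milliseconds into average slot usage, assuming each JOBS_TIMELINE row covers a one-second reporting period. A minimal sketch of that unit conversion, with illustrative names and numbers:

```python
# Sketch: convert JOBS_TIMELINE slot-milliseconds into average slot usage.
# Assumes each timeline row covers a 1-second period, so slot-ms / 1000
# yields the average number of slots in use during that period.
def avg_slots(period_slot_ms: int, period_seconds: float = 1.0) -> float:
    """Average slots consumed over a reporting period."""
    return (period_slot_ms / 1000.0) / period_seconds

# A continuous query that recorded 2500 slot-ms in a 1-second period
# was using 2.5 slots on average during that second.
print(avg_slots(2500))  # 2.5
```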
You can also monitor continuous query reservations using other tools such as Metrics Explorer and administrative resource charts. For more information, see Monitor BigQuery reservations.
Use the query execution graph
You can use the query execution graph to get performance insights and general statistics for a continuous query. For more information, see View query performance insights.
View job history
You can view continuous query job details in your personal job history or the project's job history. For more information, see View job details.
Be aware that the historical list of jobs is sorted by the job start time, so continuous queries that have been running for a while might not be close to the start of the list.
Use the administrative jobs explorer
In the administrative jobs explorer, filter your jobs to show continuous queries by setting the Job category filter to Continuous query.
Use Cloud Monitoring
You can view metrics specific to BigQuery continuous queries by using Cloud Monitoring. For more information, see Create dashboards, charts, and alerts, and read about the metrics available for visualization.
Alert on failed queries
Instead of routinely checking whether your continuous queries have failed, it can be helpful to create an alert to notify you of failure. One way to do this is to create a custom Cloud Logging log-based metric with a filter for your jobs, and a Cloud Monitoring alerting policy based on that metric:
- When you create a continuous query, use a custom job ID prefix. Multiple continuous queries can share the same prefix. For example, you might use the prefix prod- to indicate a production query.
In the Google Cloud console, go to the Log-based Metrics page.
Click Create metric. The Create logs metric panel appears.
For Metric type, select Counter.
In the Details section, give your metric a name. For example, CUSTOM_JOB_ID_PREFIX-metric.
In the Filter selection section, enter the following into the Build filter editor:
resource.type = "bigquery_project"
protoPayload.resourceName : "projects/PROJECT_ID/jobs/CUSTOM_JOB_ID_PREFIX"
severity = ERROR
Replace the following:
- PROJECT_ID: the name of your project.
- CUSTOM_JOB_ID_PREFIX: the name of the custom job ID prefix that you set for your continuous query.
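If you provision log-based metrics from scripts rather than the console, you could assemble the same filter programmatically. A minimal sketch, where the function name and its arguments are illustrative:

```python
# Sketch: assemble the Cloud Logging filter used by the log-based metric.
# The project ID and job ID prefix passed in are placeholders you supply.
def build_error_filter(project_id: str, job_id_prefix: str) -> str:
    """Return the inclusion filter matching failed continuous query jobs."""
    return "\n".join([
        'resource.type = "bigquery_project"',
        f'protoPayload.resourceName : "projects/{project_id}/jobs/{job_id_prefix}"',
        "severity = ERROR",
    ])

print(build_error_filter("my-project", "prod-"))
```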
Click Create metric.
In the navigation menu, click Log-based metrics. The metric you just created appears in the list of user-defined metrics.
In your metric's row, click More actions, and then click Create alert from metric.
Click Next. You don't need to change the default settings on the Policy configuration mode page.
Click Next. You don't need to change the default settings on the Configure alert trigger page.
Select your notification channels and enter a name for the alert policy.
Click Create policy.
You can test your alert by running a continuous query with the custom job ID prefix that you selected and then cancelling it. It might take a few minutes for the alert to reach your notification channel.
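For such a test, the jobs.insert request body for a continuous query with a custom job ID can be sketched as follows. This mirrors the payload shape used by the retry function later on this page; the prefix and SQL here are placeholders:

```python
import uuid

# Sketch: build a jobs.insert request body for a continuous query whose
# job ID starts with a custom prefix. The prefix and SQL are placeholders;
# the payload shape follows the retry sample later on this page.
def continuous_job_payload(project_id: str, job_id_prefix: str, sql: str) -> dict:
    job_id = f"{job_id_prefix}{uuid.uuid4().hex[:8]}"
    return {
        "configuration": {
            "query": {
                "query": sql,
                "useLegacySql": False,
                "continuous": True,  # marks this job as a continuous query
            },
        },
        "jobReference": {"projectId": project_id, "jobId": job_id},
    }

payload = continuous_job_payload("my-project", "prod-", "SELECT 1")
print(payload["jobReference"]["jobId"].startswith("prod-"))  # True
```

You would then POST this payload to the BigQuery jobs endpoint (as the retry sample does) and cancel the job once the alert fires.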
Retry failed queries
Retrying a failed continuous query might help avoid situations where a continuous pipeline is down for an extended period of time or requires human intervention to restart. Important things to consider when you retry a failed continuous query include the following:
- Whether it is tolerable to reprocess some amount of data that the previous query processed before it failed.
- How to limit retries or use exponential backoff.
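To illustrate the second point, capped exponential backoff with jitter might look like the following sketch; the names and limits here are illustrative, not part of any BigQuery API:

```python
import random

# Sketch: capped exponential backoff with full jitter for retry scheduling.
# max_retries, base, and cap are illustrative values, not API parameters.
def backoff_delays(max_retries: int = 5, base: float = 2.0, cap: float = 300.0):
    """Yield the wait (in seconds) before each retry attempt."""
    for attempt in range(max_retries):
        delay = min(cap, base * (2 ** attempt))  # 2, 4, 8, 16, 32, ... up to cap
        # Full jitter spreads retries out so many failures don't retry in lockstep.
        yield random.uniform(0, delay)

delays = list(backoff_delays())
print(len(delays))  # 5
```

Giving up after max_retries attempts keeps a repeatedly failing query from restarting forever, which is when human intervention is genuinely needed.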
One possible approach to automating query retry is the following:
Create a Cloud Logging sink based on an inclusion filter matching the following criteria to route logs to a Pub/Sub topic:
resource.type = "bigquery_project"
protoPayload.resourceName : "projects/PROJECT_ID/jobs/CUSTOM_JOB_ID_PREFIX"
severity = ERROR
Replace the following:
- PROJECT_ID: the name of your project.
- CUSTOM_JOB_ID_PREFIX: the name of the custom job ID prefix that you set for your continuous query.
Create a Cloud Run function that is triggered in response to Pub/Sub receiving logs that match your filter.
The Cloud Run function could accept the data payload from the Pub/Sub message and attempt to start a new continuous query using the same SQL syntax as the failed query, but beginning just after the point where the previous job stopped.
For example, you can use a function similar to the following:
Python
Before trying this sample, follow the Python setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Python API reference documentation. To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import base64
import json
import logging
import re
import uuid

import google.auth
import google.auth.transport.requests
import requests


def retry_continuous_query(event, context):
    logging.info("Cloud Function started.")

    if "data" not in event:
        logging.info("No data in Pub/Sub message.")
        return

    try:
        # Decode and parse the Pub/Sub message data
        log_entry = json.loads(base64.b64decode(event["data"]).decode("utf-8"))

        # Extract the SQL query and other necessary data
        proto_payload = log_entry.get("protoPayload", {})
        metadata = proto_payload.get("metadata", {})
        job_change = metadata.get("jobChange", {})
        job = job_change.get("job", {})
        job_config = job.get("jobConfig", {})
        query_config = job_config.get("queryConfig", {})
        sql_query = query_config.get("query")
        job_stats = job.get("jobStats", {})
        end_timestamp = job_stats.get("endTime")
        failed_job_id = job.get("jobName")

        # Check if required fields are missing
        if not all([sql_query, failed_job_id, end_timestamp]):
            logging.error("Required fields missing from log entry.")
            return

        logging.info(f"Retrying failed job: {failed_job_id}")

        # Adjust the timestamp in the SQL query
        timestamp_match = re.search(
            r"\s*TIMESTAMP\(('.*?')\)(\s*\+ INTERVAL 1 MICROSECOND)?", sql_query
        )
        if timestamp_match:
            original_timestamp = timestamp_match.group(1)
            new_timestamp = f"'{end_timestamp}'"
            sql_query = sql_query.replace(original_timestamp, new_timestamp)
        elif "CURRENT_TIMESTAMP() - INTERVAL 10 MINUTE" in sql_query:
            new_timestamp = f"TIMESTAMP('{end_timestamp}') + INTERVAL 1 MICROSECOND"
            sql_query = sql_query.replace(
                "CURRENT_TIMESTAMP() - INTERVAL 10 MINUTE", new_timestamp
            )

        # Get access token
        credentials, project = google.auth.default(
            scopes=["https://www.googleapis.com/auth/cloud-platform"]
        )
        request = google.auth.transport.requests.Request()
        credentials.refresh(request)
        access_token = credentials.token

        # API endpoint
        url = f"https://bigquery.googleapis.com/bigquery/v2/projects/{project}/jobs"

        # Request headers
        headers = {
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        }

        # Generate a random UUID
        random_suffix = str(uuid.uuid4())[:8]  # Take the first 8 characters of the UUID

        # Combine the prefix and random suffix
        job_id = f"CUSTOM_JOB_ID_PREFIX{random_suffix}"

        # Request payload
        data = {
            "configuration": {
                "query": {
                    "query": sql_query,
                    "useLegacySql": False,
                    "continuous": True,
                    "connectionProperties": [
                        {"key": "service_account", "value": "SERVICE_ACCOUNT"}
                    ],
                    # ... other query parameters ...
                },
                "labels": {"bqux_job_id_prefix": "CUSTOM_JOB_ID_PREFIX"},
            },
            "jobReference": {
                "projectId": project,
                "jobId": job_id,  # Use the generated job ID here
            },
        }

        # Make the API request
        response = requests.post(url, headers=headers, json=data)

        # Handle the response
        if response.status_code == 200:
            logging.info("Query job successfully created.")
        else:
            logging.error(f"Error creating query job: {response.text}")

    except Exception as e:
        logging.error(f"Error processing log entry or retrying query: {e}", exc_info=True)

    logging.info("Cloud Function finished.")
What's next
- Learn how to create and run continuous queries.
Last updated 2025-12-15 UTC.