Sample SQL queries
This document contains sample queries that you can use to query your log and trace data.
SQL language support
Queries used in the Log Analytics page support GoogleSQL functions, with some exceptions.
The following SQL commands aren't supported for SQL queries issued by using the Log Analytics page:
- DDL and DML commands
- JavaScript user-defined functions
- BigQuery ML functions
- SQL variables
The following are supported only when you query a linked dataset by using the BigQuery Studio and Looker Studio pages, or by using the bq command-line tool:
- JavaScript user-defined functions
- BigQuery ML functions
- SQL variables
Best practices
To set the time range of your query, we recommend that you use the time-range selector. For example, to view the data for the past week, select Last 7 days from the time-range selector. You can also use the time-range selector to specify a start and end time, specify a time to view around, and change time zones.
Log data
If you include a timestamp field in the WHERE clause, then the time-range selector setting isn't used. The following example illustrates how to filter by timestamp:

```sql
-- Matches log entries whose timestamp is within the most recent 1 hour.
WHERE timestamp > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)
```

Trace data
If you include a start_time field in the WHERE clause, then the time-range selector setting isn't used. The following example illustrates how to filter by start time:

```sql
-- Matches trace spans whose start_time is within the most recent 1 hour.
WHERE start_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)
```

For more information about how to filter by time, see Time functions and Timestamp functions.
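If you build query strings in a client, you can compute the same one-hour lower bound before interpolating it into the SQL. A minimal Python sketch (the helper name is illustrative, not part of any SDK):

```python
from datetime import datetime, timedelta, timezone

def one_hour_cutoff(now=None):
    """Return the timestamp one hour before now, mirroring
    TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)."""
    now = now or datetime.now(timezone.utc)
    return now - timedelta(hours=1)

cutoff = one_hour_cutoff()
# Format as an ISO-style string suitable for a SQL TIMESTAMP literal.
print(cutoff.strftime("%Y-%m-%d %H:%M:%S+00"))
```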
Before you begin
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
  Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.
  Roles required to select or create a project:
  - Select a project: Selecting a project doesn't require a specific IAM role; you can select any project that you've been granted a role on.
  - Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.
- Verify that billing is enabled for your Google Cloud project.
- Enable the Observability API. To enable APIs, you need the Service Usage Admin IAM role (roles/serviceusage.serviceUsageAdmin), which contains the serviceusage.services.enable permission. Learn how to grant roles.
- To get the permissions that you need to load the Log Analytics page, and to write, run, and save private queries on your trace data, ask your administrator to grant you the following IAM roles:
  - Observability View Accessor (roles/observability.viewAccessor) on the observability views that you want to query. This role supports IAM conditions, which let you restrict the grant to a specific view. If you don't attach a condition to the role grant, then the principal can access all observability views. Observability views are in Public Preview.
  - Observability Analytics User (roles/observability.analyticsUser) on your project. This role contains the permissions required to save and run private queries, and to run shared queries.
  - Logs Viewer (roles/logging.viewer) on your project.
  - Logs View Accessor (roles/logging.viewAccessor) on the project that stores the log views that you want to query.

  For more information about granting roles, see Manage access to projects, folders, and organizations. You might also be able to get the required permissions through custom roles or other predefined roles.
How to use the queries on this page
In the Google Cloud console, go to the Log Analytics page:

If you use the search bar to find this page, then select the result whose subheading is Logging.

In the Query pane, click SQL, and then copy and paste a query into the SQL query pane.
Note: The delimiters for the path are period characters (.). If a path element, like the Google Cloud project, contains a period character, then ensure that every path field is wrapped by single backquotes (`).

Log data

Before you copy a query, in the FROM clause, replace the following fields:

- PROJECT_ID: The identifier of the project.
- LOCATION: The location of the log view or the analytics view.
- BUCKET_ID: The name or ID of the log bucket.
- LOG_VIEW_ID: The identifier of the log view, which is limited to 100 characters and can include only letters, digits, underscores, and hyphens.

The following shows the format of the FROM clause for a log view:

```sql
FROM `PROJECT_ID.LOCATION.BUCKET_ID.LOG_VIEW_ID`
```

The log samples on this page query a log view. To query an analytics view, use the following path format: `analytics_view.PROJECT_ID.LOCATION.ANALYTICS_VIEW_ID`. In the previous expression, PROJECT_ID is the ID of your project, and LOCATION and ANALYTICS_VIEW_ID are the location and name of your analytics view.

Trace data

Before you copy a query, in the FROM clause, replace PROJECT_ID with the ID of your Google Cloud project:

```sql
FROM `PROJECT_ID.us._Trace.Spans._AllSpans`
```

The fields us._Trace.Spans._AllSpans identify the location, the observability bucket, the dataset, and the view.

To use the queries shown in this document on the BigQuery Studio page, or to use the bq command-line tool, edit the FROM clause and enter the path to the linked dataset.
Log data
For example, to query the _AllLogs view on the linked dataset named mydataset that is in the project myproject, the path is `myproject.mydataset._AllLogs`.
Trace data
For example, to query the _AllSpans view on the linked dataset named my_linked_dataset that is in the project myproject, the path is `myproject.my_linked_dataset._AllSpans`.
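As a sketch of the path rules described above, the following Python helper (hypothetical, not part of any Google Cloud SDK) assembles a FROM-clause path and wraps it in backquotes, which is required whenever a path element, such as a domain-scoped project ID, contains a period:

```python
def from_clause_path(project_id, location, bucket_id, view_id):
    """Build the table path for a Log Analytics FROM clause.

    The path delimiters are periods, so the whole path is wrapped in
    backquotes in case an element (for example, a domain-scoped project
    ID like "example.com:my-project") contains a period.
    """
    path = ".".join([project_id, location, bucket_id, view_id])
    return f"`{path}`"

print(from_clause_path("PROJECT_ID", "LOCATION", "BUCKET_ID", "LOG_VIEW_ID"))
```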
Common use cases
This section lists several common use cases that might help you create your custom queries.
Show log entries in the default log bucket
To query the _Default bucket, run the following query:
```sql
SELECT
  timestamp, severity, resource.type, log_name, text_payload, proto_payload, json_payload
FROM `PROJECT_ID.LOCATION._Default._AllLogs`
-- Limit to 1000 entries
LIMIT 1000
```

Extract field value by regular expression
To extract a value from a string by using a regular expression, use the function REGEXP_EXTRACT:
```sql
SELECT
  -- Display the timestamp, and the part of the name that begins with test.
  timestamp,
  REGEXP_EXTRACT(JSON_VALUE(json_payload.jobName), r".*(test.*)$") AS name
FROM `PROJECT_ID.LOCATION.BUCKET_ID.LOG_VIEW_ID`
WHERE
  -- Get the value of jobName, which is a subfield in a JSON structure.
  JSON_VALUE(json_payload.jobName) IS NOT NULL
ORDER BY timestamp DESC
LIMIT 20
```

For more information, see the REGEXP_EXTRACT documentation.
For substring matches, like the previous query, use of the CONTAINS_SUBSTR function results in a more efficient query.
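REGEXP_EXTRACT returns the text matched by the first capturing group, or NULL when there is no match. A rough Python equivalent of the pattern used in the previous query, shown only to illustrate the capture-group semantics (the job names are illustrative):

```python
import re

def extract_name(job_name):
    """Mimic REGEXP_EXTRACT(job_name, r".*(test.*)$"): return the
    first capturing group, or None when the pattern doesn't match."""
    match = re.search(r".*(test.*)$", job_name)
    return match.group(1) if match else None

print(extract_name("projects/my-project/jobs/test1"))  # matches: test1
print(extract_name("projects/my-project/jobs/prod1"))  # no match: None
```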
Show all trace data
To query the _AllSpans view, run the following query:
```sql
-- Display all data.
SELECT *
FROM `PROJECT_ID.us._Trace.Spans._AllSpans`
-- Limit to 10 entries.
LIMIT 10
```

Show common span information
To show common span information, like the start time and duration, run the following query:
```sql
SELECT
  start_time,
  -- Set the value of service name based on the first non-null value in the list.
  COALESCE(
    JSON_VALUE(resource.attributes, '$."service.name"'),
    JSON_VALUE(attributes, '$."service.name"'),
    JSON_VALUE(attributes, '$."g.co/gae/app/module"')
  ) AS service_name,
  name AS span_name,
  duration_nano,
  status.code AS status,
  trace_id,
  span_id
FROM `PROJECT_ID.us._Trace.Spans._AllSpans`
LIMIT 10
```

To learn more, see Conditional expressions.
Show 50th and 99th percentiles of the span latency
To show the 50th and 99th percentiles of the latency for each RPC service, run the following query:
```sql
SELECT
  -- Compute 50th and 99th percentiles for each service.
  STRING(attributes['rpc.service']) || '/' || STRING(attributes['rpc.method']) AS rpc_service_method,
  APPROX_QUANTILES(duration_nano, 100)[OFFSET(50)] AS duration_nano_p50,
  APPROX_QUANTILES(duration_nano, 100)[OFFSET(99)] AS duration_nano_p99
FROM `PROJECT_ID.us._Trace.Spans._AllSpans`
WHERE
  -- Matches spans whose kind field has a value of 2 (SPAN_KIND_SERVER).
  kind = 2
GROUP BY rpc_service_method
```

For more information about the enumeration, see the OpenTelemetry: SpanKind documentation.
To view the results graphically, you might create a chart with the dimension set to rpc_service_method. You might add two measures, one for the average of the duration_nano_p50 value, and the other for the average of the duration_nano_p99 field.
Filter log entries
To apply a filter to your query, add a WHERE clause. The syntax that you use in this clause depends on the data type of the field. This section provides several examples for different data types.
Filter log entries by payload type
Log entries can have one of three payload types. To filter log entries by the payload type, use one of the following clauses:
Text payloads

```sql
-- Matches log entries that have a text payload.
WHERE text_payload IS NOT NULL
```

JSON payloads

```sql
-- Matches log entries that have a JSON payload.
WHERE json_payload IS NOT NULL
```

Proto payloads

```sql
-- Matches log entries that have a proto payload.
-- Because proto_payload has a data type of RECORD, this statement tests
-- whether a mandatory subfield exists.
WHERE proto_payload.type IS NOT NULL
```
In the query results, both the json_payload and proto_payload fields are rendered in JSON, which you can navigate through.
Filter log data by timestamp
To filter log entries by their timestamp, we recommend that you use the time-range selector. However, you can also specify the timestamp in the WHERE clause:
```sql
-- Matches log entries whose timestamp is within the most recent hour.
WHERE timestamp > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)
```

For more information about how to filter by time, see Time functions and Timestamp functions.

Note: If you include a timestamp field in the WHERE clause, then the time-range selector setting isn't used.

Filter by resource
To filter your log and trace data by resource, add a resource.type statement to the WHERE clause:
```sql
-- Matches log entries whose resource type is gce_instance.
WHERE resource.type = "gce_instance"
```

Filter by severity
To filter your log data by a severity, add a severity statement to the WHERE clause:
```sql
-- Matches log entries whose severity is INFO or ERROR.
WHERE severity IS NOT NULL AND severity IN ('INFO', 'ERROR')
```

You can also filter your log entries by the severity_number, which is an integer. For example, the following clause matches all log entries whose severity level is at least NOTICE:

```sql
-- Matches log entries whose severity level is at least NOTICE.
WHERE severity_number IS NOT NULL AND severity_number > 200
```

For information about the enumerated values, see LogSeverity.
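The severity_number comparison works because LogSeverity values are spaced in steps of 100 (DEBUG=100, INFO=200, NOTICE=300, WARNING=400, and so on), so severity_number > 200 matches NOTICE and above. A small Python sketch of that mapping:

```python
# LogSeverity enumeration values (google.logging.type.LogSeverity).
LOG_SEVERITY = {
    "DEFAULT": 0, "DEBUG": 100, "INFO": 200, "NOTICE": 300,
    "WARNING": 400, "ERROR": 500, "CRITICAL": 600,
    "ALERT": 700, "EMERGENCY": 800,
}

def at_least_notice(severity_name):
    """Return True when the severity is NOTICE or higher,
    mirroring the clause severity_number > 200."""
    return LOG_SEVERITY[severity_name] > 200

print([s for s in LOG_SEVERITY if at_least_notice(s)])
# → ['NOTICE', 'WARNING', 'ERROR', 'CRITICAL', 'ALERT', 'EMERGENCY']
```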
Filter by log name
To filter your log data by a log name, add a log_name or log_id statement to the WHERE clause:
Log name specifies the resource path:

```sql
-- Matches log entries that have the following log name.
WHERE log_name = "projects/cloud-logs-test-project/logs/cloudaudit.googleapis.com%2Factivity"
```

Log ID omits the resource path:

```sql
-- Matches log entries that have the following log ID.
WHERE log_id = "cloudaudit.googleapis.com/data_access"
```
Filter log entries by resource label
Resource labels are stored as a JSON structure. To filter by the value of a field within a JSON structure, use the function JSON_VALUE:
```sql
SELECT
  timestamp,
  JSON_VALUE(resource.labels.zone) AS zone,
  json_payload,
  resource,
  labels
FROM `PROJECT_ID.LOCATION.BUCKET_ID.LOG_VIEW_ID`
WHERE
  -- Matches log entries whose resource type is gce_instance and whose zone is
  -- us-central1-f. Because resource has data type JSON, you must use JSON_VALUE
  -- to get the value for subfields, like zone.
  resource.type = "gce_instance" AND
  JSON_VALUE(resource.labels.zone) = "us-central1-f"
ORDER BY timestamp ASC
```

The previous query relies on the format of resource labels, as they are stored in a log entry. The following is an example of the resource field:

```
{
  type: "gce_instance"
  labels: {
    instance_id: "1234512345123451"
    project_id: "my-project"
    zone: "us-central1-f"
  }
}
```

For information about all functions that can retrieve and transform JSON data, see JSON functions.
Filter by HTTP request
To only query log entries that have an HTTP request field, use the following clause:
```sql
-- Matches log entries that have a HTTP request_method field.
-- Don't compare http_request to NULL. This field has a data type of RECORD.
WHERE http_request.request_method IS NOT NULL
```

You can also use the IN statement:

```sql
-- Matches log entries whose HTTP request_method is GET or POST.
WHERE http_request.request_method IN ('GET', 'POST')
```

Filter by HTTP status
To only query log entries that have an HTTP status, use the following clause:
```sql
-- Matches log entries that have an http_request.status field.
WHERE http_request.status IS NOT NULL
```

Filter by a field within a JSON data type
To only query log entries when the subfield of a field with a JSON data type has a specific value, extract the value by using the function JSON_VALUE:
```sql
-- Compare the value of the status field to NULL.
WHERE JSON_VALUE(json_payload.status) IS NOT NULL
```

The previous clause is subtly different than the following clause:

```sql
-- Compare the status field to NULL.
WHERE json_payload.status IS NOT NULL
```

The first clause tests whether the value of the status field is NULL. The second clause tests whether the status field exists. Suppose a log view contains two log entries. For one log entry, the json_payload field has the following form:

```
{
  status: {
    measureTime: "1661517845"
  }
}
```

For the other log entry, the json_payload field has a different structure:

```
{
  @type: "type.googleapis.com/google.cloud.scheduler.logging.AttemptFinished"
  jobName: "projects/my-project/locations/us-central1/jobs/test1"
  relativeUrl: "/food=cake"
  status: "NOT_FOUND"
  targetType: "APP_ENGINE_HTTP"
}
```

The clause WHERE json_payload.status IS NOT NULL matches both log entries. However, the clause WHERE JSON_VALUE(json_payload.status) IS NOT NULL only matches the second log entry.
Filter trace entries
To apply a filter to your query, add a WHERE clause. The syntax that you use in this clause depends on the data type of the field. This section provides several examples for different data types.
Filter by string data types
The field name is stored as a String.
To analyze only those spans where name is specified, use the following clause:

```sql
-- Matches spans that have a name field.
WHERE name IS NOT NULL
```

To analyze only those spans where name has the value "POST", use the following clause:

```sql
-- Matches spans whose name is POST.
WHERE STRPOS(name, "POST") > 0
```

To analyze only those spans where name contains the value "POST", use the LIKE operator along with wildcards:

```sql
-- Matches spans whose name contains POST.
WHERE name LIKE "%POST%"
```
Filter by integer data types
The field kind is an integer, which can take values between zero and five:
To analyze only those spans where kind is specified, use the following clause:

```sql
-- Matches spans that have a field named kind.
WHERE kind IS NOT NULL
```

To analyze spans whose kind value is either one or two, use the following clause:

```sql
-- Matches spans whose kind value is 1 or 2.
WHERE kind IN (1, 2)
```
Filter by RECORD data types
Some fields in the trace schema have a data type of RECORD. These fields can either store one or more data structures, or they store repeated entries of the same data structure.
Filter by status or status code
The status field is an example of a field whose data type is RECORD. This field stores one data structure, with members labeled code and message.
To only analyze spans when the status.code field has a value of 1, add the following clause:

```sql
-- Matches spans that have a status.code field that has a value of 1.
WHERE status.code = 1
```

The status.code field is stored as an integer.

To analyze spans where the status field isn't empty, add the following clause:

```sql
-- Matches spans that have a status field. When the status field exists, it
-- must contain a subfield named code.
-- Don't compare status to NULL, because this field has a data type of RECORD.
WHERE status.code IS NOT NULL
```
Filter by events or links
The events and links fields are stored with a data type of RECORD, but these are repeated fields.
To match spans that have at least one event, use the following clause:

```sql
-- Matches spans that have at least one event. Don't compare events to NULL.
-- The events field has a data type of RECORD and contains repeated fields.
WHERE ARRAY_LENGTH(events) > 0
```

To match spans that have an event whose name field has the value of message, use the following clause:

```sql
WHERE
  -- EXISTS is true when any event in the array has a name field with the
  -- value of message.
  EXISTS (SELECT 1 FROM UNNEST(events) AS ev WHERE ev.name = 'message')
```
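The EXISTS ... UNNEST pattern asks whether any element of a repeated field satisfies a predicate. The same two tests over an in-memory list of hypothetical event records, as a Python sketch:

```python
# Hypothetical spans, each with a repeated events field.
spans = [
    {"span_id": "a1", "events": [{"name": "message"}, {"name": "retry"}]},
    {"span_id": "b2", "events": [{"name": "timeout"}]},
    {"span_id": "c3", "events": []},
]

# ARRAY_LENGTH(events) > 0: spans with at least one event.
with_events = [s["span_id"] for s in spans if len(s["events"]) > 0]

# EXISTS(... WHERE ev.name = 'message'): any() mirrors the subquery.
with_message = [
    s["span_id"] for s in spans
    if any(ev["name"] == "message" for ev in s["events"])
]

print(with_events)   # spans a1 and b2
print(with_message)  # span a1 only
```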
Filter by JSON data types
The attributes field is of type JSON. Each individual attribute is a key-value pair.
To analyze only those spans where attributes is specified, use the following clause:

```sql
-- Matches spans where at least one attribute is specified.
WHERE attributes IS NOT NULL
```

To analyze only those spans where the attribute key named component has a value of "proxy", use the following clause:

```sql
-- Matches spans that have an attribute named component with a value of proxy.
WHERE attributes IS NOT NULL AND JSON_VALUE(attributes, '$.component') = 'proxy'
```

You can also use a LIKE statement along with wildcards to perform a contains test:

```sql
-- Matches spans that have an attribute named component whose value contains proxy.
WHERE attributes IS NOT NULL AND JSON_VALUE(attributes, '$.component') LIKE '%proxy%'
```
Group and aggregate log entries
This section builds upon the previous samples and illustrates how you can group and aggregate log entries. If you don't specify a grouping but do specify an aggregation, one result is printed because SQL treats all rows that satisfy the WHERE clause as one group.
Every SELECT expression must be included in the group fields or be aggregated.
Group log entries by timestamp
To group data by timestamp, use the function TIMESTAMP_TRUNC, which truncates a timestamp to a specified granularity like HOUR:
```sql
SELECT
  -- Truncate the timestamp by hour.
  TIMESTAMP_TRUNC(timestamp, HOUR) AS hour,
  JSON_VALUE(json_payload.status) AS status,
  -- Count the number of log entries in each group.
  COUNT(*) AS count
FROM `PROJECT_ID.LOCATION.BUCKET_ID.LOG_VIEW_ID`
WHERE
  -- Matches log entries that have a status field whose value isn't NULL.
  json_payload IS NOT NULL AND
  JSON_VALUE(json_payload.status) IS NOT NULL
GROUP BY
  -- Group by hour and status.
  hour, status
ORDER BY hour ASC
```

For more information, see the TIMESTAMP_TRUNC documentation and Datetime functions.
Group log entries by resource
The following query shows how to group log entries by the resource type, and then count the number of log entries in each group:
```sql
SELECT
  -- Count the number of log entries for each resource type.
  resource.type,
  COUNT(*) AS count
FROM `PROJECT_ID.LOCATION.BUCKET_ID.LOG_VIEW_ID`
GROUP BY resource.type
LIMIT 100
```

Group log entries by severity
The following query shows how to group log entries by the severity, and then count the number of log entries in each group:
```sql
SELECT
  -- Count the number of log entries for each severity.
  severity,
  COUNT(*) AS count
FROM `PROJECT_ID.LOCATION.BUCKET_ID.LOG_VIEW_ID`
WHERE severity IS NOT NULL
GROUP BY severity
ORDER BY severity
LIMIT 100
```

Group log entries by their log_id
The following query shows how to group log entries by the log ID, and then count the number of log entries in each group:
```sql
SELECT
  -- Count the number of log entries for each log ID.
  log_id,
  COUNT(*) AS count
FROM `PROJECT_ID.LOCATION.BUCKET_ID.LOG_VIEW_ID`
GROUP BY log_id
ORDER BY count DESC
LIMIT 100
```

Compute average latency of HTTP requests per URL
The following query illustrates how to group log entries by the HTTP request URL and location, and then compute the average latency of the requests in each group:
```sql
SELECT
  -- Compute the average latency for each group. Because the labels field has a
  -- data type of JSON, use JSON_VALUE to get the value of checker_location.
  JSON_VALUE(labels.checker_location) AS location,
  AVG(http_request.latency.seconds) AS secs,
  http_request.request_url
FROM `PROJECT_ID.LOCATION.BUCKET_ID.LOG_VIEW_ID`
WHERE
  -- Matches log entries when the request_method field is GET.
  http_request IS NOT NULL AND
  http_request.request_method IN ('GET')
GROUP BY
  -- Group by request URL and location.
  http_request.request_url, location
ORDER BY location
LIMIT 100
```

Compute average bytes sent for a subnetwork test
The following query shows how to group log entries by the location specified in the resource labels, and then compute the average number of bytes sent in each group:
```sql
SELECT
  -- Compute the average number of bytes sent per location. Because labels has
  -- a data type of JSON, use JSON_VALUE to get the value of the location field.
  -- bytes_sent is a string. Must cast to a FLOAT64 before computing average.
  JSON_VALUE(resource.labels.location) AS location,
  AVG(CAST(JSON_VALUE(json_payload.bytes_sent) AS FLOAT64)) AS bytes
FROM `PROJECT_ID.LOCATION.BUCKET_ID.LOG_VIEW_ID`
WHERE
  resource.type = "gce_subnetwork" AND
  json_payload IS NOT NULL
GROUP BY
  -- Group by location.
  location
LIMIT 100
```

For more information, see JSON functions and Conversion functions.
Count the log entries with a field that matches a pattern
To return the substring that matches a regular expression, use the function REGEXP_EXTRACT:
```sql
SELECT
  -- Extract the value that begins with test.
  -- Count the number of log entries for each name.
  REGEXP_EXTRACT(JSON_VALUE(json_payload.jobName), r".*(test.*)$") AS name,
  COUNT(*) AS count
FROM `PROJECT_ID.LOCATION.BUCKET_ID.LOG_VIEW_ID`
WHERE json_payload.jobName IS NOT NULL
GROUP BY name
ORDER BY count
LIMIT 20
```

For additional examples, see the REGEXP_EXTRACT documentation.
Group and aggregate trace data
This section illustrates how you can group and aggregate spans. If you don't specify a grouping but do specify an aggregation, one result is printed because SQL treats all entries that satisfy the WHERE clause as one group.
Every SELECT expression must be included in the group fields or be aggregated.
Group spans by start time
To group data by start time, use the function TIMESTAMP_TRUNC, which truncates a timestamp to a specified granularity like HOUR:
```sql
SELECT
  -- Truncate the start time to the hour. Count the number of spans per group.
  TIMESTAMP_TRUNC(start_time, HOUR) AS hour,
  status.code AS code,
  COUNT(*) AS count
FROM `PROJECT_ID.us._Trace.Spans._AllSpans`
WHERE
  -- Matches spans whose start time is within the previous 12 hours.
  start_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 12 HOUR)
GROUP BY
  -- Group by hour and status code.
  hour, code
ORDER BY hour DESC
```

For more information, see the TIMESTAMP_TRUNC documentation and Datetime functions.

Note: If you include a start_time field in the WHERE clause, then the time-range selector setting isn't used.

Count spans by status code
To display the count of spans with a specific status code, run the following query:
```sql
SELECT
  -- Count the number of spans for each status code.
  status.code,
  COUNT(*) AS count
FROM `PROJECT_ID.us._Trace.Spans._AllSpans`
WHERE status.code IS NOT NULL
GROUP BY status.code
```

If you replace status.code with kind, then the previous query reports the number of spans for each value of the kind enumeration. Similarly, if you replace status.code with name, then the query results list the number of entries for each span name.
Compute the average duration of all spans
To display the average duration, after grouping span data by span name, run the following query:
```sql
SELECT
  -- Group by name, and then compute the average duration for each group.
  name,
  AVG(duration_nano) AS nanosecs
FROM `PROJECT_ID.us._Trace.Spans._AllSpans`
GROUP BY name
ORDER BY nanosecs DESC
```

Compute average duration and percentiles per service name
The following query computes the span count and various statistics for each service:
```sql
SELECT
  -- Set the service name by the first non-null value.
  COALESCE(
    JSON_VALUE(resource.attributes, '$."service.name"'),
    JSON_VALUE(attributes, '$."service.name"'),
    JSON_VALUE(attributes, '$."g.co/gae/app/module"')
  ) AS service_name,
  -- Count the number of spans for each service name. Also compute statistics.
  COUNT(*) AS span_count,
  AVG(duration_nano) AS avg_duration_nano,
  MIN(duration_nano) AS min_duration_nano,
  MAX(duration_nano) AS max_duration_nano,
  -- Calculate percentiles for duration.
  APPROX_QUANTILES(duration_nano, 100)[OFFSET(50)] AS p50_duration_nano,
  APPROX_QUANTILES(duration_nano, 100)[OFFSET(95)] AS p95_duration_nano,
  APPROX_QUANTILES(duration_nano, 100)[OFFSET(99)] AS p99_duration_nano,
  -- Count the number of unique trace IDs. Also, collect up to 5 unique
  -- span names and status codes.
  COUNT(DISTINCT trace_id) AS distinct_trace_count,
  ARRAY_AGG(DISTINCT name IGNORE NULLS LIMIT 5) AS sample_span_names,
  ARRAY_AGG(DISTINCT status.code IGNORE NULLS LIMIT 5) AS sample_status_codes
FROM `PROJECT_ID.us._Trace.Spans._AllSpans`
GROUP BY service_name
ORDER BY span_count DESC
```

Cross-column search
This section describes two approaches that you can use to search multiple columns of the view that you are querying:
- Token-based searches: You specify the search location and a search query, and then use the SEARCH function. Because the SEARCH function has specific rules on how the data is searched, we recommend that you read the SEARCH documentation.
- Substring-based searches: You provide the search location and a string literal, and then use the function CONTAINS_SUBSTR. The system performs a case-insensitive test to determine whether the string literal exists in an expression. The CONTAINS_SUBSTR function returns TRUE when the string literal exists and FALSE otherwise. The search value must be a STRING literal, but not the literal NULL.
Token-based search on a log view
The following query retains only those rows that have a field that exactly matches "35.193.12.15":
```sql
SELECT
  timestamp, log_id, proto_payload, severity, resource.type, resource, labels
FROM `PROJECT_ID.LOCATION.BUCKET_ID.LOG_VIEW_ID` AS t
WHERE
  -- Search data access audit logs for the IP address that matches 35.193.12.15.
  -- The use of backticks prevents the string from being tokenized.
  proto_payload IS NOT NULL AND
  log_id = "cloudaudit.googleapis.com/data_access" AND
  SEARCH(t, "`35.193.12.15`")
ORDER BY timestamp ASC
LIMIT 20
```

When backticks are omitted in the query string, the query string is split based on rules defined in the SEARCH documentation. For example, when the following statement is run, the query string is split into four tokens: "35", "193", "12", and "15":

```sql
SEARCH(t, "35.193.12.15")
```

The previous SEARCH statement matches a row when a single field matches all four tokens. The order of the tokens doesn't matter.
You can include multiple SEARCH statements in a query. For example, in the previous query, you could replace the filter on the log ID with a statement like the following:
SEARCH(t,"`cloudaudit.googleapis.com/data_access`")The previous statement searches every field of the log entries in the log viewwhile the original statement searches only thelog_id field of thelog entries.
To perform multiple searches on multiple fields, separate the individual strings with a space. For example, the following statement matches rows where a field contains "Hello World", "happy", and "days":
SEARCH(t,"`Hello World` happy days")Lastly, you can search specific fields instead of searching anentire table. For example, the following statement only searchesthe columns namedtext_payload andjson_payload:
SEARCH((text_payload,json_payload),"`35.222.132.245`")For information about how the parameters of theSEARCH function are processed,see the BigQuery reference pageSearch functions.
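SEARCH tokenizes both the data and the unquoted query string, and then requires every query token to be present. The following Python sketch is a deliberate simplification of the real analyzer (which has many more rules than splitting on punctuation), shown only to illustrate why backticks change the matching behavior:

```python
import re

def tokens(text):
    """Crude stand-in for the SEARCH analyzer: split on
    non-alphanumeric characters and lowercase the result."""
    return {tok.lower() for tok in re.split(r"[^A-Za-z0-9]+", text) if tok}

def search(field_value, query):
    """Backticked queries match as one exact string; unquoted queries
    match when every query token appears in the field's tokens."""
    if query.startswith("`") and query.endswith("`"):
        return query.strip("`") in field_value
    return tokens(query) <= tokens(field_value)

# Unquoted: "35.193.12.15" splits into tokens 35, 193, 12, and 15.
print(search("ip=35.193.12.15 port=80", "35.193.12.15"))  # all tokens present
print(search("15.12.193.35", "35.193.12.15"))             # token order ignored
print(search("15.12.193.35", "`35.193.12.15`"))           # exact match required
```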
Substring search on a log view
For example, the following query fetches all Data Access audit log entries with a specific IP address whose timestamps are in a specific time range. Lastly, the query sorts the results and then displays the 20 oldest results:
```sql
SELECT
  timestamp, log_id, proto_payload, severity, resource.type, resource, labels
FROM `PROJECT_ID.LOCATION.BUCKET_ID.LOG_VIEW_ID` AS t
WHERE
  -- Search data access audit logs for the IP address that matches 35.193.12.15.
  -- CONTAINS_SUBSTR performs a contains-test.
  proto_payload IS NOT NULL AND
  log_id = "cloudaudit.googleapis.com/data_access" AND
  CONTAINS_SUBSTR(t, "35.193.12.15")
ORDER BY timestamp ASC
LIMIT 20
```

Query multiple views
Query statements scan one or more tables or expressions and return the computed result rows. For example, you can use query statements to merge the results of SELECT statements on different tables or datasets in a variety of ways, and then select the columns from the combined data.
When you query multiple views, those views must reside in the same location. For example, if two views are located in the us-east1 location, then one query can query both views. You can also query two views that are located in the us multi-region. However, if a view's location is global, then that view can reside in any physical location. Therefore, joins between two views that have the location of global might fail.
Join two log views by the trace ID
To combine information from two tables, use one of the join operators:
```sql
SELECT
  -- Do an inner join on two tables by using the span ID and trace ID.
  -- Don't join only by span ID, as this field isn't globally unique.
  -- From the first view, show the timestamp, severity, and JSON payload.
  -- From the second view, show the JSON payload.
  a.timestamp, a.severity, a.json_payload, b.json_payload, a.span_id, a.trace
FROM `PROJECT_ID.LOCATION.BUCKET_ID.LOG_VIEW_1` a
JOIN `PROJECT_ID.LOCATION.BUCKET_ID.LOG_VIEW_2` b
ON
  a.span_id = b.span_id AND
  a.trace = b.trace
LIMIT 100
```

Query two log views with a union statement
To combine the results of two or more SELECT statements and discard duplicate rows, use the UNION operator. To retain duplicate rows, use the UNION ALL operator:
```sql
SELECT
  timestamp, log_name, severity, json_payload, resource, labels
-- Create a union of two log views.
FROM (
  SELECT * FROM `PROJECT_ID.LOCATION.BUCKET_ID.LOG_VIEW_1`
  UNION ALL
  SELECT * FROM `PROJECT_ID.LOCATION.BUCKET_ID.LOG_VIEW_2`
)
-- Sort the union by timestamp.
ORDER BY timestamp ASC
LIMIT 100
```

Join trace and log data by using the trace ID
The following query joins log and trace data by using the span and trace IDs:
```sql
SELECT
  T.trace_id,
  T.span_id,
  T.name,
  T.start_time,
  T.duration_nano,
  L.log_name,
  L.severity,
  L.json_payload
FROM `PROJECT_ID.us._Trace.Spans._AllSpans` AS T
JOIN `PROJECT_ID.us._Default._AllLogs` AS L
ON
  -- Join log and trace data by both the span ID and trace ID.
  -- Don't join only on span ID, this field isn't globally unique.
  T.span_id = L.span_id
  -- A regular expression is required because the storage format of the trace ID
  -- differs between a log view and a trace view.
  AND T.trace_id = REGEXP_EXTRACT(L.trace, r'/([^/]+)$')
WHERE T.duration_nano > 1000000
LIMIT 10
```

The response of the query lists the trace and span ID, which let you query for them individually to gather more information. Additionally, the results list the severity of the log entry and the JSON payload.
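The join needs REGEXP_EXTRACT because a log view stores the trace as a resource path, while a trace view stores only the bare trace ID. A Python sketch of the same extraction (the project and trace ID values are illustrative):

```python
import re

def extract_trace_id(log_trace):
    """Mimic REGEXP_EXTRACT(L.trace, r'/([^/]+)$'): return the final
    path segment, which is the bare trace ID."""
    match = re.search(r"/([^/]+)$", log_trace)
    return match.group(1) if match else None

# A log entry stores the trace as a resource path ...
log_trace = "projects/my-project/traces/0679686673a"
# ... while a trace view stores just the ID, so extract it to join.
print(extract_trace_id(log_trace))
```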
Remove duplicate log entries
Log Analytics doesn't remove duplicate log entries before a query is run. This behavior is different than when you query log entries by using the Logs Explorer, which removes duplicate entries by comparing the log names, timestamps, and insert ID fields.
You can use row-level validation to remove duplicate log entries.
Note: Queries that perform row-level validation are slower than those that don't. Don't deduplicate log entries unless the query is sensitive to duplicate log entries or the query result must not contain duplicate entries. For more information, see Troubleshoot: There are duplicate log entries in my Log Analytics results.
What's next
For SQL reference documentation or other examples, see the following documents:
Last updated 2026-02-19 UTC.