Query and analyze telemetry with Log Analytics

This document describes how to query and analyze your log and trace data by using Log Analytics, which provides a SQL-based query interface. SQL lets you perform aggregate analysis, which can help you generate insights and identify trends. To view your query results, use the tabular form, or visualize the data with charts. You can also save these tables and charts to your custom dashboards.
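For example, a query like the following counts log entries by severity for the past hour. This is a minimal sketch: the PROJECT_ID.LOCATION.BUCKET_ID.LOG_VIEW_ID path is a placeholder for one of your own log views, matching the placeholders used later in this document:

SELECT
  severity,
  COUNT(*) AS entry_count
FROM
  `PROJECT_ID.LOCATION.BUCKET_ID.LOG_VIEW_ID`
WHERE
  -- Restrict the query to log entries from the past hour.
  timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)
GROUP BY severity
ORDER BY entry_count DESC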

About linked BigQuery datasets

You don't need a linked BigQuery dataset to query your log data, your trace data, or both data types when you use the Log Analytics page.

You do need a linked BigQuery dataset when you want to do any of the following:

  • Join log or trace data with other BigQuery datasets. For a sketch, see the example after this list.
  • Query your log or trace data from another service like the BigQuery Studio page or Looker Studio.
  • Improve the performance of the queries that you run from the Log Analytics page by running them on your BigQuery reserved slots.
  • Create an alerting policy that monitors the result of a SQL query. This capability is supported only when log data is queried. For more information, see Monitor your SQL query results with an alerting policy.
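As a sketch of the first case, the following query joins a log view with a hypothetical BigQuery table named service_inventory. The inventory table, its columns, and the resource label path are illustrative assumptions, and this kind of join requires a linked dataset:

SELECT
  -- The label path is an assumption about how your resources are labeled.
  JSON_VALUE(logs.resource.labels.service_name) AS service_name,
  inventory.owner_team,  -- hypothetical column in your own BigQuery table
  COUNT(*) AS entry_count
FROM
  `PROJECT_ID.LOCATION.BUCKET_ID.LOG_VIEW_ID` AS logs
JOIN
  `PROJECT_ID.DATASET_ID.service_inventory` AS inventory  -- hypothetical table
  ON JSON_VALUE(logs.resource.labels.service_name) = inventory.service_name
GROUP BY service_name, inventory.owner_team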

This document doesn't describe how to create a linked dataset, which requires a data-type specific process. To learn how to create a linked dataset, see Query log data by using a linked dataset or Query trace data by using a linked dataset.

Before you begin

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Roles required to select or create a project

    • Select a project: Selecting a project doesn't require a specific IAM role—you can select any project that you've been granted a role on.
    • Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.
    Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.

    Go to project selector

  3. Verify that billing is enabled for your Google Cloud project.

  4. Enable the Observability API.

    Roles required to enable APIs

    To enable APIs, you need the Service Usage Admin IAM role (roles/serviceusage.serviceUsageAdmin), which contains the serviceusage.services.enable permission. Learn how to grant roles.

    Enable the API

  5. To get the permissions that you need to load the Log Analytics page, and to write, run, and save private queries on your trace data, ask your administrator to grant you the following IAM roles:

    • Observability View Accessor (roles/observability.viewAccessor) on the observability views that you want to query. This role supports IAM conditions, which let you restrict the grant to a specific view. If you don't attach a condition to the role grant, then the principal can access all observability views. Observability views are in Public Preview.
    • Observability Analytics User (roles/observability.analyticsUser) on your project. This role contains the permissions required to save and run private queries, and to run shared queries.
    • Logs Viewer (roles/logging.viewer) on your project.
    • Logs View Accessor (roles/logging.viewAccessor) on the project that stores the log views that you want to query.

    For more information about granting roles, see Manage access to projects, folders, and organizations.

    You might also be able to get the required permissions through custom roles or other predefined roles.

Query log and trace data

This section describes the approaches that you can use to query your log and trace data:

  • Load a system-defined query, edit this query, and then run the query.
  • Enter and run a custom query. For example, you might paste in a query you have or write one. Custom queries can include joins, nested queries, and other complex SQL statements. For examples, see Sample SQL queries.
  • Build a query by making menu selections and then run that query. Log Analytics converts your selections into a SQL query, which you can both view and edit.

Load, edit, and run the system-defined query

  1. In the Google Cloud console, go to the Log Analytics page:

    Go to Log Analytics

    If you use the search bar to find this page, then select the result whose subheading is Logging.

    Note: If a window on the Log Analytics page displays a "Get started with Log Analytics" message, then on the window, click Close.
  2. In the Views menu, select a view.

    To find the view to query, use the Filter bar or scroll through the list.

  3. Do one of the following:

    • To load a system-defined query that relies on the Query Builder, which lets you define the query with menu selections, make sure that the Query pane displays Query Builder. If a SQL editor is shown, then click Builder.

    • To load a system-defined query that extracts JSON values, make sure that the Query pane displays the SQL editor. If this pane displays Query Builder, then click SQL.

  4. In the Schema pane, select Query, and then click Overwrite.

    The Query pane displays a system-defined query. If you selected the Query Builder mode but want to view the SQL query, click SQL.

  5. Optional: Modify the query.

  6. To run the query, go to the toolbar and select Run Query.

    Log Analytics presents the query results in a table. You can also create a chart, and you can save the table or chart to a custom dashboard. For more information, see Chart SQL query results.

    If the toolbar displays Run in BigQuery, then you need to switch Log Analytics to use the default query engine. To make this change, in the toolbar of the Query pane, click Settings, and then select Analytics (default).

Enter and run a custom query

To enter and run a SQL query, do the following:

  1. In the Google Cloud console, go to the Log Analytics page:

    Go to Log Analytics

    If you use the search bar to find this page, then select the result whose subheading is Logging.

    Note: If a window on the Log Analytics page displays a "Get started with Log Analytics" message, then on the window, click Close.
  2. In the Query pane, click SQL, and then enter your query.

  3. To run the query, go to the toolbar and select Run Query.

    Log Analytics presents the query results in a table. You can also create a chart, and you can save the table or chart to a custom dashboard. For more information, see Chart SQL query results.

    If the toolbar displays Run in BigQuery, then you need to switch Log Analytics to use the default query engine. To make this change, in the toolbar of the Query pane, click Settings, and then select Analytics (default).

Build, edit, and run a query

The Query Builder interface lets you build a query by making selections from menus. Log Analytics converts your selections into a SQL query, which you can view and edit. For example, you might start by using the Query Builder interface and then switch to the SQL editor to refine your query.

Log Analytics can always convert your menu selections from the Query Builder interface into a SQL query. However, not all SQL queries can be represented by the Query Builder interface. For example, queries with joins can't be represented by this interface.

Note: There might be cases where fields that you want to query aren't listed by the Query Builder interface. When the data type of a column is JSON, Log Analytics automatically infers the fields of the column. If you want your query to use a JSON field that wasn't automatically inferred, then switch to the SQL editor and write a query.
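For example, the following sketch uses JSON_VALUE to extract a field from the json_payload column; the json_payload.message path is an assumption about your log format:

SELECT
  timestamp,
  severity,
  -- JSON_VALUE extracts the field as a STRING.
  JSON_VALUE(json_payload.message) AS message
FROM
  `PROJECT_ID.LOCATION.BUCKET_ID.LOG_VIEW_ID`
WHERE
  json_payload.message IS NOT NULL
LIMIT 50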

To build a query, do the following:

  1. In the Google Cloud console, go to the Log Analytics page:

    Go to Log Analytics

    If you use the search bar to find this page, then select the result whose subheading is Logging.

    Note: If a window on the Log Analytics page displays a "Get started with Log Analytics" message, then on the window, click Close.
  2. If the Query pane displays a SQL editor, then select Builder, which opens the Query Builder pane.

  3. Use the Source menu to select the view that you want to query. Your selections are mapped to the FROM clause in the SQL query.

  4. Optional: To restrict or format the result table, use the other menus, such as Columns and Filters.

  5. To run the query, go to the toolbar and select Run Query.

    Log Analytics presents the query results in a table. You can also create a chart, and you can save the table or chart to a custom dashboard. For more information, see Chart SQL query results.

    If the toolbar displays Run in BigQuery, then you need to switch Log Analytics to use the default query engine. To make this change, in the toolbar of the Query pane, click Settings, and then select Analytics (default).

Example: Group and aggregate data by using the Query Builder

When you select a column in the Query Builder, each field includes a menu where you can add grouping and aggregation. Grouping lets you organize your data into groups based on the value of one or more columns, and aggregation lets you perform calculations on these groups to return a single value.

Each field that you select in the Columns element has an attached menu with the following options:

  • None: Don't group or aggregate by this field.
  • Aggregate: Group by all fields listed in the Columns element except those that have an Aggregate selection. For those fields, compute the value by performing an operation on all entries in each grouping, such as computing the average of a field or counting the number of entries in each group.
  • Group By: Group entries by all fields listed in the Columns element.

The following illustrates how you might construct a query that groups entries and then performs some type of aggregation.

Log data

This example describes how to use the Query Builder to group log entries by severity and timestamp, and then compute the average of the http_request.response_size field for each group.

To build a query that groups and aggregates your data, make the following selections from the Query Builder menus:

  1. In the Columns menu, select the timestamp, severity, and http_request.response_size fields.

    1. To group your data, click the timestamp field to open the settings dialog. In this dialog, select the Group by option, and set the Truncation Granularity to HOUR. Grouping is then automatically applied to all other fields to prevent syntax errors. If there are invalid fields where grouping can't be applied, then you see an error message. Remove the invalid fields from the menu to resolve this error.

    2. To perform aggregation on the http_request.response_size field, click the field to open the settings dialog. In this dialog, select Aggregate. In the Aggregation menu, click Average.

      Note: Selecting the Aggregate option also automatically applies grouping to your columns.
  2. In the Filters menu, add http_request.response_size and set the comparator to IS NOT NULL. This filter matches log entries that contain a response_size value.

    Your Query Builder menus look similar to the following:

    Aggregate and group by using the Query Builder menus.

  3. To run the query, go to the toolbar and select Run Query.

    The results of this query are similar to the following:

    +-----+-----------------------------+----------+---------------+
    | Row | hour_timestamp              | severity | response_size |
    |     | TIMESTAMP                   | STRING   | INTEGER       |
    +-----+-----------------------------+----------+---------------+
    | 1   | 2025-10-06 16:00:00.000 UTC | NOTICE   | 3082          |
    | 2   | 2025-10-06 17:00:00.000 UTC | WARNING  | 338           |
    | 3   | 2025-10-06 16:00:00.000 UTC | INFO     | 149           |
    +-----+-----------------------------+----------+---------------+

The corresponding SQL query for the previous example is as follows:

SELECT
  -- Truncate the timestamp by hour.
  TIMESTAMP_TRUNC(timestamp, HOUR) AS hour_timestamp,
  severity,
  -- Compute average response_size.
  AVG(http_request.response_size) AS average_http_request_response_size
FROM
  `PROJECT_ID.LOCATION.BUCKET_ID.LOG_VIEW_ID`
WHERE
  -- Matches log entries that have a response_size.
  http_request.response_size IS NOT NULL
GROUP BY
  -- Group log entries by timestamp and severity.
  TIMESTAMP_TRUNC(timestamp, HOUR),
  severity
LIMIT 1000

Trace data

Preview

This product or feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA products and features are available "as is" and might have limited support. For more information, see the launch stage descriptions.

This example describes how to use the Query Builder to group spans by start time, span name, and span kind. Then, for each group, the query computes the average duration in nanoseconds.

To construct this query, do the following:

  1. In the Columns menu, select the start_time, name, kind, and duration_nano fields.
  2. To truncate the start time to the hour, expand the menu on the start_time column and select Group By. Make sure that the granularity menu is set to Hour.
  3. Click Apply.

    When you select Group By for any column, the system groups entries by all columns. In this example, the entries are grouped by the truncated value of the start_time, the span name, the span kind, and the value of the duration.

    However, the objective for this example is to group entries by the truncated time, the span name, and the span kind, and then for each group, to compute the average duration. In the next step, you modify the grouping and add an aggregation.

  4. Expand the menu on the duration_nano field, select Aggregate, and then set the Aggregation field to Average.

    When you run the query, each row corresponds to a group, which consists of a truncated time, a span name, and a span kind. The final entry in each row is the average duration for all entries in that group.

    The results of this query are similar to the following:

    +-----+-----------------------------+----------------+---------+-----------------------+
    | Row | hour_timestamp              | span_name      | kind    | average_duration_nano |
    |     | TIMESTAMP                   | STRING         | INTEGER | FLOAT                 |
    +-----+-----------------------------+----------------+---------+-----------------------+
    | 1   | 2025-10-09 13:00:00.000 EDT | http.receive   | 3       | 122138.22813990474    |
    | 2   | 2025-10-09 13:00:00.000 EDT | query.request  | 1       | 6740819304.390297     |
    | 3   | 2025-10-09 13:00:00.000 EDT | client.handler | 2       | 6739339098.409376     |
    +-----+-----------------------------+----------------+---------+-----------------------+
  5. Your query can include multiple aggregations. For example, to add a column that counts the number of entries in each group, do the following:

    1. In the Columns element, click Add column.
    2. Select All (*).
    3. In the dialog, select Aggregate, select Count for the Aggregation, and then select Apply.

    With this change, the grouping remains the same: entries are grouped by the truncated start time, span name, and span kind. However, for each group, the query computes both the average duration and the number of entries.

The corresponding SQL query for the previous example is as follows:

WITH scope_query AS (
  SELECT * FROM `PROJECT_ID.global._Trace._AllSpans`
)
SELECT
  -- Report the truncated start time, span name, span kind, average duration,
  -- and number of entries for each group.
  TIMESTAMP_TRUNC(start_time, HOUR) AS hour_start_time,
  name AS span_name,
  kind,
  AVG(duration_nano) AS average_duration_nano,
  COUNT(*) AS count_all
FROM scope_query
GROUP BY
  TIMESTAMP_TRUNC(start_time, HOUR),
  name,
  kind
LIMIT 100

Display the schema

The schema defines how the data is stored, which includes the fields and their data types. This information is important because the schema determines the fields that you query and whether you need to cast fields to different data types. For example, to write a query that computes the average latency of HTTP requests, you need to know how to access the latency field and whether it is stored as an integer like 100 or as a string like "100". If the latency data is stored as a string, then the query must cast the value to a numeric value before computing an average.
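For example, suppose a hypothetical json_payload.latency_ms field stores latency as a string. A sketch of a query that casts the value before averaging might look like the following; the field path is an assumption, not part of the standard schema:

SELECT
  -- SAFE_CAST returns NULL instead of raising an error when a value can't be cast.
  AVG(SAFE_CAST(JSON_VALUE(json_payload.latency_ms) AS FLOAT64)) AS average_latency_ms
FROM
  `PROJECT_ID.LOCATION.BUCKET_ID.LOG_VIEW_ID`
WHERE
  json_payload.latency_ms IS NOT NULL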

To identify the schema, do the following:

  1. In the Google Cloud console, go to the Log Analytics page:

    Go to Log Analytics

    If you use the search bar to find this page, then select the result whose subheading is Logging.

  2. In the Views menu, select a view.

    The Schema pane is updated. Log Analytics automatically infers the fields of a column when the data type is JSON. To view how often these inferred fields appear in your data, click Options and select View info and description.

    Log data

    For log views, the schema is fixed and corresponds to the LogEntry type. For analytics views, you can modify the SQL query to change the schema.

    Trace data

    Preview

    This product or feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA products and features are available "as is" and might have limited support. For more information, see the launch stage descriptions.

    To learn about the schema, see Storage schema for trace data.

    If you don't see a view named _Trace.Spans._AllSpans, then your Google Cloud project doesn't contain an observability bucket named _Trace. For information about how to resolve this failure, see Trace storage initialization fails.

Restrictions

If you want to query multiple views, then those views must reside in the same location. For example, if you store two views in the us-east1 location, then one query can query both views. You can also query two views stored in the us multi-region. However, if a view's location is global, then that view can reside in any physical location. Therefore, joins between two views that have the location of global might fail.
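For example, the following sketch queries two log views that are both stored in the us-east1 location; the bucket and view names are placeholders:

-- Both views reside in us-east1, so one query can read from both.
SELECT timestamp, severity
FROM `PROJECT_ID.us-east1.BUCKET_A.LOG_VIEW_A`
UNION ALL
SELECT timestamp, severity
FROM `PROJECT_ID.us-east1.BUCKET_B.LOG_VIEW_B`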

For a list of restrictions that apply to log data, see Log Analytics: Restrictions.
