Trace data exports overview

Caution: As of February 18, 2026, the export of span data to BigQuery by using Cloud Trace sinks is deprecated. Sinks used to export spans to BigQuery will be removed on or after February 18, 2027. For information about how to view your span data in BigQuery, see Migrate to Log Analytics.

Beta

This product or feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA products and features are available "as is" and might have limited support. For more information, see the launch stage descriptions.

This page provides a conceptual overview of exporting trace data by using Cloud Trace. You might want to export trace data for the following reasons:

  • To store trace data for a period longer than the default retention period of 30 days.
  • To let you use BigQuery tools to analyze your trace data. For example, by using BigQuery, you can identify span counts and quantiles. For information about the query used to generate the following table, see HipsterShop query; a sketch of a similar query appears after this list.

    [Image: table showing the response to the previous query.]
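A query along these lines computes per-span-name counts and latency quantiles. This is a minimal sketch, not the HipsterShop query itself: the dataset, table, and column names (`name`, `start_time`, `end_time`) are assumptions, so substitute the schema that your sink actually writes.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical table and column names -- replace them with the dataset your
# sink exports to and the column names from its actual schema.
query = """
SELECT
  name,
  COUNT(*) AS span_count,
  APPROX_QUANTILES(
    TIMESTAMP_DIFF(end_time, start_time, MILLISECOND), 100)[OFFSET(99)]
    AS p99_latency_ms
FROM `my-project.trace_export.spans`
GROUP BY name
ORDER BY span_count DESC
"""

# Print the span count and approximate 99th-percentile latency per span name.
for row in client.query(query).result():
    print(row.name, row.span_count, row.p99_latency_ms)
```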

Note: Only trace spans ingested through the Cloud Trace API are eligible for export. Trace spans emitted by Google Cloud services or ingested through the Telemetry API are unsupported.

How exports work

Exporting involves creating a sink for a Google Cloud project. A sink defines a BigQuery dataset as the destination.

You can create a sink by using the Cloud Trace API or by using the Google Cloud CLI.
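For illustration, the following sketch creates a sink through the Cloud Trace API's REST surface. It assumes the v2beta1 traceSinks endpoint and the field names shown here; the project ID, project number, dataset, and sink name are placeholders.

```python
import google.auth
from google.auth.transport.requests import AuthorizedSession

# Placeholder identifiers -- replace with your own values.
PROJECT_ID = "my-project"
PROJECT_NUMBER = "123456789012"   # assumption: destination uses project number
DATASET_ID = "trace_export"

credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"])
session = AuthorizedSession(credentials)

sink = {
    "name": f"projects/{PROJECT_ID}/traceSinks/my-sink",
    "outputConfig": {
        "destination": (
            f"bigquery.googleapis.com/projects/{PROJECT_NUMBER}"
            f"/datasets/{DATASET_ID}"
        )
    },
}

# Create the sink; the response includes the writer identity that must be
# granted write access to the dataset (see Access control on this page).
response = session.post(
    f"https://cloudtrace.googleapis.com/v2beta1/projects/{PROJECT_ID}/traceSinks",
    json=sink,
)
response.raise_for_status()
print(response.json())
```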

Sink properties and terminology

Sinks are defined for a Google Cloud project and have the following properties:

  • Name: An identifier for the sink.
  • Destination: The BigQuery dataset to which the trace spans are sent.
  • Writer identity: A service account that Cloud Trace uses to write trace spans to the destination. The writer identity is created with the sink and must be granted permission to write to the destination.

How sinks work

Every time a trace span arrives in a project, Trace exports a copy of the span.

Traces that Trace received before the sink was created cannot be exported.

Access control

To create or modify a sink, you must have one of the following Identity and Access Management roles:

  • Trace Admin
  • Trace User
  • Project Owner
  • Project Editor

For more information, see Access control.
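As an illustration, granting one of these roles programmatically might look like the following sketch, which uses the Resource Manager API to add an IAM binding. The project ID and member are placeholders; roles/cloudtrace.admin and roles/cloudtrace.user are the role IDs for Trace Admin and Trace User.

```python
from googleapiclient import discovery

PROJECT_ID = "my-project"            # placeholder project ID
MEMBER = "user:alice@example.com"    # placeholder principal

# Read-modify-write the project's IAM policy to add a Trace Admin binding.
crm = discovery.build("cloudresourcemanager", "v1")
policy = crm.projects().getIamPolicy(resource=PROJECT_ID, body={}).execute()
policy.setdefault("bindings", []).append(
    {"role": "roles/cloudtrace.admin", "members": [MEMBER]}
)
crm.projects().setIamPolicy(
    resource=PROJECT_ID, body={"policy": policy}
).execute()
```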

To export traces to a destination, the sink's writer service account must be permitted to write to the destination. For more information about writer identities, see Sink properties on this page.
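For example, when the destination is a BigQuery dataset, you can grant the writer identity write access by adding it to the dataset's access entries. A minimal sketch using the BigQuery Python client follows; the dataset name and the writer identity value are placeholders (the real writer identity is returned when you create the sink).

```python
from google.cloud import bigquery

client = bigquery.Client()

# Placeholder dataset and writer identity -- use the dataset your sink
# targets and the service account returned by the sink-creation call.
dataset = client.get_dataset("my-project.trace_export")
writer_identity = "export-sink@example.iam.gserviceaccount.com"

# Append a WRITER access entry for the sink's service account.
entries = list(dataset.access_entries)
entries.append(
    bigquery.AccessEntry(
        role="WRITER",
        entity_type="userByEmail",
        entity_id=writer_identity,
    )
)
dataset.access_entries = entries
client.update_dataset(dataset, ["access_entries"])
```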

Quotas and limits

Cloud Trace uses the BigQuery streaming API to send trace spans to the destination, and it batches its API calls. Cloud Trace doesn't implement a retry or throttling mechanism, so trace spans might not be exported successfully if the amount of data exceeds the destination quotas.

For details on BigQuery quotas and limits, see Quotas and limits.

Pricing

Exporting traces doesn't incur Cloud Trace charges. However, you might incur BigQuery charges. See BigQuery pricing for more information.

Estimating your costs

BigQuery charges for data ingestion and storage. To estimate your monthly BigQuery costs, do the following:

  1. Estimate the total number of trace spans that are ingested in a month.

    For information about how to view usage, see View usage by billing account.

  2. Estimate the streaming requirements based on the number of trace spans ingested.

    Each span is written to a table row. Each row in BigQuery requires at least 1,024 bytes. Therefore, a lower bound on your BigQuery streaming requirements is to assign 1,024 bytes to each span. For example, if your Google Cloud project ingested 200 spans, then those spans require at least 204,800 bytes for the streaming insert. The sketch after these steps illustrates the arithmetic.

  3. Use the Pricing calculator to estimate your BigQuery costs due to storage, streaming inserts, and queries.
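Putting steps 1 and 2 together, the lower-bound arithmetic is straightforward. Here is a minimal sketch with an assumed span volume and a placeholder price; check current BigQuery pricing for real rates.

```python
# Lower bound on BigQuery streaming-insert volume, using the
# 1,024-byte-per-row minimum described above.
spans_per_month = 50_000_000   # assumed monthly span ingestion
bytes_per_span = 1024          # BigQuery's minimum bytes per row
price_per_gib = 0.01           # placeholder rate; see BigQuery pricing

total_gib = spans_per_month * bytes_per_span / 2**30
print(f"Streaming volume lower bound: {total_gib:.1f} GiB")
print(f"Estimated streaming-insert cost: ${total_gib * price_per_gib:.2f}")
```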

Viewing and managing your BigQuery usage

You can use Metrics Explorer to view your BigQuery usage. You can also create an alerting policy that notifies you if your BigQuery usage exceeds predefined limits. The following tables contain the settings to create such an alerting policy. You can also use the settings in the New condition table when creating a chart or when using Metrics Explorer.
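If you prefer to read the same usage data programmatically, the Cloud Monitoring API exposes the BigQuery metrics. The following sketch uses the google-cloud-monitoring Python client with a placeholder project ID.

```python
import time
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
project_name = "projects/my-project"  # placeholder project ID

# Read the last hour of uploaded-byte counts for each dataset.
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": now}, "start_time": {"seconds": now - 3600}}
)

for series in client.list_time_series(
    request={
        "name": project_name,
        "filter": 'metric.type = "bigquery.googleapis.com/storage/uploaded_bytes"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
):
    dataset_id = series.resource.labels["dataset_id"]
    for point in series.points:
        print(dataset_id, point.value.int64_value)
```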

To create an alerting policy that triggers when the ingested BigQuery metrics exceed a user-defined level, use the following settings. A programmatic sketch that mirrors these settings follows the tables.


To create an alerting policy, do the following:

  1. In the Google Cloud console, go to the Alerting page:

    Go to Alerting

    If you use the search bar to find this page, then select the result whose subheading is Monitoring.

  2. If you haven't created your notification channels and if you want to be notified, then click Edit Notification Channels and add your notification channels. Return to the Alerting page after you add your channels.
  3. From the Alerting page, select Create policy.
  4. To select the resource, metric, and filters, expand the Select a metric menu and then use the values in the New condition table:
    1. Optional: To limit the menu to relevant entries, enter the resource or metric name in the filter bar.
    2. Select a Resource type. For example, select VM instance.
    3. Select a Metric category. For example, select instance.
    4. Select a Metric. For example, select CPU Utilization.
    5. Select Apply.
  5. Click Next and then configure the alerting policy trigger. To complete these fields, use the values in the Configure alert trigger table.
  6. Click Next.
  7. Optional: To add notifications to your alerting policy, click Notification channels. In the dialog, select one or more notification channels from the menu, and then click OK.

    To be notified when incidents are opened and closed, check Notify on incident closure. By default, notifications are sent only when incidents are opened.

  8. Optional: Update the Incident autoclose duration. This field determines when Monitoring closes incidents in the absence of metric data.
  9. Optional: Click Documentation, and then add any information that you want included in a notification message.
  10. Click Alert name and enter a name for the alerting policy.
  11. Click Create Policy.
New condition

  • Resource and Metric: In the Resources menu, select BigQuery Dataset. In the Metric categories menu, select Storage. Select a metric from the Metrics menu. Metrics specific to usage include Stored bytes, Uploaded bytes, and Uploaded bytes billed. For a full list of available metrics, see BigQuery metrics.
  • Filter: project_id: Your Google Cloud project ID. dataset_id: Your dataset ID.
  • Across time series > Time series group by: dataset_id
  • Across time series > Time series aggregation: sum
  • Rolling window: 1 m
  • Rolling window function: mean
Configure alert trigger

  • Condition type: Threshold
  • Alert trigger: Any time series violates
  • Threshold position: Above threshold
  • Threshold value: You determine the acceptable value.
  • Retest window: 1 minute
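The console steps and tables above can also be expressed through the Cloud Monitoring API. The following sketch creates a roughly equivalent policy with the google-cloud-monitoring Python client; the project ID and threshold value are assumptions, and notification channels are omitted for brevity.

```python
from google.cloud import monitoring_v3

client = monitoring_v3.AlertPolicyServiceClient()

# Mirrors the tables above: mean over a 1-minute rolling window, summed
# across time series grouped by dataset_id, compared against a threshold.
policy = monitoring_v3.AlertPolicy(
    display_name="BigQuery uploaded bytes",
    combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.OR,
    conditions=[
        monitoring_v3.AlertPolicy.Condition(
            display_name="Uploaded bytes above threshold",
            condition_threshold=monitoring_v3.AlertPolicy.Condition.MetricThreshold(
                filter=(
                    'metric.type = "bigquery.googleapis.com/storage/uploaded_bytes" '
                    'AND resource.type = "bigquery_dataset"'
                ),
                comparison=monitoring_v3.ComparisonType.COMPARISON_GT,
                threshold_value=10 * 2**30,  # 10 GiB: an arbitrary example value
                duration={"seconds": 60},    # retest window: 1 minute
                aggregations=[
                    monitoring_v3.Aggregation(
                        alignment_period={"seconds": 60},  # rolling window: 1 m
                        per_series_aligner=monitoring_v3.Aggregation.Aligner.ALIGN_MEAN,
                        cross_series_reducer=monitoring_v3.Aggregation.Reducer.REDUCE_SUM,
                        group_by_fields=["resource.label.dataset_id"],
                    )
                ],
            ),
        )
    ],
)
client.create_alert_policy(
    name="projects/my-project",  # placeholder project ID
    alert_policy=policy,
)
```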

What's next

To configure a sink, see Exporting traces.
