Trace data exports overview
Caution: As of February 18, 2026, the export of span data to BigQuery by using Cloud Trace sinks is deprecated. Sinks used to export spans to BigQuery will be removed on or after February 18, 2027. For information about how to view your span data in BigQuery, see Migrate to Log Analytics.
Beta
This product or feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA products and features are available "as is" and might have limited support. For more information, see the launch stage descriptions.
This page provides a conceptual overview of exporting trace data using Cloud Trace. You might want to export trace data for the following reasons:
- To store trace data for a period longer than the default retention period of 30 days.
- To let you use BigQuery tools to analyze your trace data. For example, using BigQuery, you can identify span counts and quantiles. For information about an example query that computes these statistics, see HipsterShop query.
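As a minimal sketch of such an analysis, assuming the google-cloud-bigquery client library and placeholder table and column names (my-project.trace_export.spans, display_name, start_time, end_time) that might not match your export schema:

```python
# Hypothetical sketch: summarize exported spans with BigQuery.
# The table and column names below are placeholders and may differ
# from the schema of your exported span data.
from google.cloud import bigquery

client = bigquery.Client()

query = """
SELECT
  display_name,
  COUNT(*) AS span_count,
  APPROX_QUANTILES(
    TIMESTAMP_DIFF(end_time, start_time, MILLISECOND), 100
  )[OFFSET(99)] AS p99_latency_ms
FROM `my-project.trace_export.spans`   -- placeholder table name
GROUP BY display_name
ORDER BY span_count DESC
LIMIT 10
"""

# Print the busiest span names with an approximate p99 latency.
for row in client.query(query).result():
    print(row.display_name, row.span_count, row.p99_latency_ms)
```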

How exports work
Exporting involves creating a sink for a Google Cloud project. A sink defines a BigQuery dataset as the destination.
You can create a sink by using the Cloud Trace API or by using the Google Cloud CLI.
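As a sketch of the API path, assuming the v2beta1 traceSinks surface of the Cloud Trace API and the google-api-python-client discovery client (the project numbers, sink ID, and dataset are placeholders), creating a sink might look like the following; verify the field names against the current API reference:

```python
# Hypothetical sketch: create a Cloud Trace sink through the Cloud Trace
# API (v2beta1 traceSinks). Identifiers are placeholders.
from googleapiclient import discovery

client = discovery.build("cloudtrace", "v2beta1")

sink_body = {
    # Sink name: projects/PROJECT_NUMBER/traceSinks/SINK_ID
    "name": "projects/123456789012/traceSinks/my-sink",
    "outputConfig": {
        # BigQuery destination in the same organization.
        "destination": (
            "bigquery.googleapis.com/projects/987654321098/datasets/trace_export"
        ),
    },
}

response = (
    client.projects()
    .traceSinks()
    .create(parent="projects/123456789012", body=sink_body)
    .execute()
)

# The response includes the generated writer identity, which you later
# grant write access on the destination dataset.
print(response.get("writerIdentity"))
```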
Sink properties and terminology
Sinks are defined for a Google Cloud project and have the following properties:
- Name: A name for the sink. For example, a name might be:

  "projects/PROJECT_NUMBER/traceSinks/my-sink"

  where PROJECT_NUMBER is the sink's Google Cloud project number and my-sink is the sink identifier.

- Parent: The resource in which you create the sink. The parent must be a Google Cloud project:

  "projects/PROJECT_ID"

  The PROJECT_ID can either be a Google Cloud project identifier or number.

- Destination: A single place to send trace spans. Trace supports exporting traces to BigQuery. The destination can be the sink's Google Cloud project or any other Google Cloud project that is in the same organization. For example, a valid destination is:

  bigquery.googleapis.com/projects/DESTINATION_PROJECT_NUMBER/datasets/DATASET_ID

  where DESTINATION_PROJECT_NUMBER is the Google Cloud project number of the destination, and DATASET_ID is the BigQuery dataset identifier.

- Writer Identity: A service account name. The export destination's owner must give this service account permissions to write to the export destination. When exporting traces, Trace adopts this identity for authorization. For increased security, new sinks get a unique service account:

  export-PROJECT_NUMBER-GENERATED_VALUE@gcp-sa-cloud-trace.iam.gserviceaccount.com

  where PROJECT_NUMBER is your Google Cloud project number, in hexadecimal, and GENERATED_VALUE is a randomly generated value.

  You don't create, own, or manage the service account that is identified by the writer identity of a sink. When you create a sink, Trace creates the service account that the sink requires. This service account isn't included in the list of service accounts for your project until it has at least one Identity and Access Management binding. You add this binding when you configure a sink destination.

  For information about using the writer identity, see destination permissions.
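As one hedged example of adding that binding, the following sketch uses the google-cloud-bigquery client to append the writer identity to the destination dataset's access entries; the project ID, dataset ID, and service account address are placeholders:

```python
# Hypothetical sketch: grant a sink's writer identity write access to the
# destination BigQuery dataset.
from google.cloud import bigquery

client = bigquery.Client(project="destination-project")  # placeholder project
dataset = client.get_dataset("trace_export")             # placeholder dataset ID

writer_identity = (
    "export-00000000000000000000-1234@gcp-sa-cloud-trace.iam.gserviceaccount.com"
)

# Add the writer identity as a WRITER on the dataset, keeping existing entries.
entries = list(dataset.access_entries)
entries.append(
    bigquery.AccessEntry(
        role="WRITER",
        entity_type="userByEmail",
        entity_id=writer_identity,
    )
)
dataset.access_entries = entries
client.update_dataset(dataset, ["access_entries"])
```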
How sinks work
Every time a trace span arrives in a project, Trace exports a copy of the span.
Traces that Trace received before the sink was created cannot be exported.
Access control
To create or modify a sink, you must have one of the following Identity and Access Management roles:
- Trace Admin
- Trace User
- Project Owner
- Project Editor
For more information, see Access control.
To export traces to a destination, the sink's writer service account must be permitted to write to the destination. For more information about writer identities, see Sink properties on this page.
Quotas and limits
Cloud Trace uses the BigQuery streaming API to send trace spans to the destination, and it batches API calls. Cloud Trace doesn't implement a retry or throttling mechanism, so trace spans might not be exported successfully if the amount of data exceeds the destination quotas.
For details on BigQuery quotas and limits, see Quotas and limits.
Pricing
Exporting traces doesn't incur Cloud Trace charges. However, you might incur BigQuery charges. See BigQuery pricing for more information.
Estimating your costs
BigQuery charges for data ingestion and storage. To estimate your monthly BigQuery costs, do the following:
- Estimate the total number of trace spans that are ingested in a month. For information about how to view usage, see View usage by billing account.
- Estimate the streaming requirements based on the number of trace spans ingested. Each span is written to a table row, and each row in BigQuery requires at least 1024 bytes. Therefore, a lower bound on your BigQuery streaming requirements is to assign 1024 bytes to each span. For example, if your Google Cloud project ingested 200 spans, then those spans require at least 204,800 bytes for the streaming insert. A worked version of this estimate appears in the sketch after this list.
- Use the Pricing calculator to estimate your BigQuery costs due to storage, streaming inserts, and queries.
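The streaming lower bound is simple arithmetic; the following sketch, with a placeholder monthly span count, shows the calculation:

```python
# Minimal sketch of the lower-bound estimate described above.
# The span count is a placeholder; substitute your own monthly ingestion.
MIN_BYTES_PER_ROW = 1024          # each streamed row counts as at least 1024 bytes
spans_per_month = 200_000_000     # hypothetical monthly span count

streamed_bytes = spans_per_month * MIN_BYTES_PER_ROW
streamed_gib = streamed_bytes / 2**30

print(f"Lower bound on streaming inserts: {streamed_gib:.1f} GiB per month")
```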
Viewing and managing your BigQuery usage
You can use Metrics Explorer to view your BigQuery usage. You can also create an alerting policy that notifies you if your BigQuery usage exceeds predefined limits. The following tables contain the settings for such an alerting policy; you can also use these settings when creating a chart or when using Metrics Explorer.
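You can also read the same usage programmatically. As a sketch, assuming the google-cloud-monitoring client library and a placeholder project ID, the following queries the BigQuery uploaded-bytes metric that Metrics Explorer displays:

```python
# Hypothetical sketch: read BigQuery uploaded-byte usage with the
# Cloud Monitoring API. The project ID is a placeholder.
import time

from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
project_name = "projects/my-project"  # placeholder

now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {
        "start_time": {"seconds": now - 3600},  # last hour
        "end_time": {"seconds": now},
    }
)

results = client.list_time_series(
    request={
        "name": project_name,
        "filter": (
            'metric.type = "bigquery.googleapis.com/storage/uploaded_bytes" '
            'AND resource.type = "bigquery_dataset"'
        ),
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

# Sum the uploaded bytes per dataset over the interval.
for series in results:
    dataset_id = series.resource.labels.get("dataset_id", "")
    total = sum(point.value.int64_value for point in series.points)
    print(dataset_id, total)
```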
To create an alerting policy that triggers when the ingested BigQuery metrics exceed a user-defined level, use the following settings. A programmatic sketch of the same policy appears after the tables.
To create an alerting policy, do the following:
- In the Google Cloud console, go to the Alerting page. If you use the search bar to find this page, then select the result whose subheading is Monitoring.
- If you haven't created your notification channels and if you want to be notified, then click Edit Notification Channels and add your notification channels. Return to the Alerting page after you add your channels.
- From the Alerting page, select Create policy.
- To select the resource, metric, and filters, expand the Select a metric menu and then use the values in the New condition table:
  - Optional: To limit the menu to relevant entries, enter the resource or metric name in the filter bar.
  - Select a Resource type. For example, select VM instance.
  - Select a Metric category. For example, select instance.
  - Select a Metric. For example, select CPU Utilization.
  - Select Apply.
- Click Next and then configure the alerting policy trigger. To complete these fields, use the values in the Configure alert trigger table.
- Click Next.
- Optional: To add notifications to your alerting policy, click Notification channels. In the dialog, select one or more notification channels from the menu, and then click OK. To be notified when incidents are opened and closed, check Notify on incident closure. By default, notifications are sent only when incidents are opened.
- Optional: Update the Incident autoclose duration. This field determines when Monitoring closes incidents in the absence of metric data.
- Optional: Click Documentation, and then add any information that you want included in a notification message.
- Click Alert name and enter a name for the alerting policy.
- Click Create Policy.
| New condition Field | Value |
|---|---|
| Resource and Metric | In the Resources menu, select BigQuery Dataset. In the Metric categories menu, select Storage. Select a metric from the Metrics menu. Metrics specific to usage include Stored bytes, Uploaded bytes, and Uploaded bytes billed. For a full list of available metrics, see BigQuery metrics. |
| Filter | project_id: Your Google Cloud project ID. dataset_id: Your dataset ID. |
| Across time series: Time series group by | dataset_id |
| Across time series: Time series aggregation | sum |
| Rolling window | 1 minute |
| Rolling window function | mean |
| Configure alert trigger Field | Value |
|---|---|
| Condition type | Threshold |
| Alert trigger | Any time series violates |
| Threshold position | Above threshold |
| Threshold value | You determine the acceptable value. |
| Retest window | 1 minute |
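If you prefer to manage the policy as code, the following is a minimal sketch of an equivalent alerting policy that uses the google-cloud-monitoring client. The project ID and threshold value are placeholders; the filter, aggregation, and windows mirror the table settings above:

```python
# Hypothetical sketch: create the alerting policy described by the tables
# above with the Cloud Monitoring API instead of the console.
from google.cloud import monitoring_v3

client = monitoring_v3.AlertPolicyServiceClient()
project_name = "projects/my-project"  # placeholder

condition = monitoring_v3.AlertPolicy.Condition(
    display_name="BigQuery uploaded bytes above threshold",
    condition_threshold=monitoring_v3.AlertPolicy.Condition.MetricThreshold(
        filter=(
            'metric.type = "bigquery.googleapis.com/storage/uploaded_bytes" '
            'AND resource.type = "bigquery_dataset"'
        ),
        aggregations=[
            monitoring_v3.Aggregation(
                alignment_period={"seconds": 60},  # 1 minute rolling window
                per_series_aligner=monitoring_v3.Aggregation.Aligner.ALIGN_MEAN,
                cross_series_reducer=monitoring_v3.Aggregation.Reducer.REDUCE_SUM,
                group_by_fields=["resource.label.dataset_id"],
            )
        ],
        comparison=monitoring_v3.ComparisonType.COMPARISON_GT,  # above threshold
        threshold_value=10 * 2**30,                             # placeholder: 10 GiB
        duration={"seconds": 60},                               # 1 minute retest window
    ),
)

policy = monitoring_v3.AlertPolicy(
    display_name="BigQuery usage from Cloud Trace export",
    combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.OR,
    conditions=[condition],
)

created = client.create_alert_policy(name=project_name, alert_policy=policy)
print(created.name)
```

You can attach notification channels to the created policy the same way the console steps describe, by listing the channel resource names in the policy's notification_channels field.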
What's next
To configure a sink, see Exporting traces.