Overview of the Google-Built OpenTelemetry Collector

This set of documents describes the Google-Built OpenTelemetry Collector and explains how to deploy the Collector to collect OpenTelemetry Protocol (OTLP) traces, metrics, and logs from instrumented applications and export that data to Google Cloud Observability and other backends. These instructions can also be used to configure the open-source upstream OpenTelemetry Collector to export data to Google Cloud Observability.

The Google-Built OpenTelemetry Collector is Google's open-source build of the OpenTelemetry Collector, built from upstream components using a secure supply chain on Google infrastructure. OpenTelemetry, which is part of the Cloud Native Computing Foundation, provides open source APIs, libraries, and SDKs to collect distributed traces, metrics, and logs for application monitoring.

The Google-Built OpenTelemetry Collector lets you send correlated OTLP traces, metrics, and logs to Google Cloud Observability and other backends from applications instrumented by using OpenTelemetry SDKs. The Collector also captures metadata for Google Cloud resources, so you can correlate application performance data with infrastructure telemetry data. Using the Google-built Collector with Google Cloud Observability provides insights to improve the performance of your applications and infrastructure. For more information about the Collector, see Description of the Google-Built OpenTelemetry Collector.

Starting with version 0.134.0, the Google-Built OpenTelemetry Collector is built to the Supply-chain Levels for Software Artifacts (SLSA) level 3 standard. The Collector code and its dependencies are continually scanned for vulnerabilities. For more information, see Security features.

Use the Google-Built OpenTelemetry Collector

You can use the Google-built Collector to collect telemetry data from your applications running on Kubernetes (including Google Kubernetes Engine (GKE)), Container-Optimized OS, standalone containers, or directly in a host environment (including Compute Engine). The documents in this section describe how to configure and deploy the Google-built Collector in the following environments:

If you don't have an application ready to use the Collector, then you can deploy the OpenTelemetry demo with the Google-built Collector. For more information, see Try the OpenTelemetry demo.

For information about using OpenTelemetry instrumentation to generate traces, metrics, and logs from your applications, see the following documents:

Description of the Google-Built OpenTelemetry Collector

The Google-Built OpenTelemetry Collector uses upstream OpenTelemetry components and tooling, but it is built on, and retrieved from, Google build-test-release infrastructure (Artifact Registry). The Google-built Collector is compatible with an OpenTelemetry Collector build from the upstream repository. It is also hosted as a Docker image for flexible deployment on any container-based system, including Kubernetes and GKE.

The Google-built Collector provides a Google-curated package with the components most users will need for a rich observability experience on Google Cloud. You don't need to select components and manually build your own Collector. By using the Google-built Collector, you can:

  • Collect metadata for Google Cloud resources so you can correlate application performance data with infrastructure telemetry data.
  • Route telemetry data to Google Cloud Observability or the backend of your choice by using exporters, including backends that natively support OpenTelemetry (see the configuration sketch after this list).
  • Simplify onboarding with recommended configurations and best-practice self-monitoring, including health checks and batch processing.
  • Use the hosted Docker image for flexible deployment on any container-based system, including Kubernetes and GKE.
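
As an illustration, the following minimal configuration sketch receives OTLP data and routes it to Google Cloud Observability. It assumes that the googlecloud and googlemanagedprometheus exporters are available, as they are in the Google-built Collector; the endpoints and pipeline layout are illustrative, so adapt them to your environment.

    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            endpoint: 0.0.0.0:4318

    processors:
      batch: {}

    exporters:
      # Traces and logs go to Cloud Trace and Cloud Logging.
      googlecloud: {}
      # Metrics go to Google Cloud Managed Service for Prometheus.
      googlemanagedprometheus: {}

    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [googlecloud]
        metrics:
          receivers: [otlp]
          processors: [batch]
          exporters: [googlemanagedprometheus]
        logs:
          receivers: [otlp]
          processors: [batch]
          exporters: [googlecloud]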

Best practices

OpenTelemetry maintains a list of best practices for configuring the OpenTelemetry Collector and for scaling the Collector. This section makes some additional recommendations.

Use the health-check extension

The health-check extension exposes an HTTP endpoint that you can probe to check the status of the OpenTelemetry Collector. Using this extension provides the following benefits:

  • Early problem detection: Health checks facilitate proactive monitoring of the Collector's status, enabling the detection of potential issues before these issues negatively impact telemetry data. This preventative measure helps ensure the reliability of the observability pipeline.
  • Improved troubleshooting: When problems occur, health checks offer valuable insights into the Collector's current state. This information simplifies the diagnosis and resolution process, reducing downtime and streamlining troubleshooting efforts.
  • Enhanced reliability: Continuous monitoring of the Collector's health ensures consistent operation and prevents unexpected failures. This proactive measure enhances the overall reliability of the observability system and minimizes the risk of data loss or gaps in telemetry data.

On Kubernetes and GKE, the health-check extension is compatible with Kubernetes liveness and readiness probes. For information about setting up these probes, see Kubernetes best practices: Setting up health checks with readiness and liveness probes.

On Cloud Run, a single health-check extension can serve as the endpoint for both startup and liveness probes in your Cloud Run service configuration.
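
For example, the following sketch enables the health_check extension in the Collector configuration and points Kubernetes liveness and readiness probes at the same endpoint. The listen address, port, and path shown here are common defaults rather than required values, so adjust them to match your deployment.

    # Collector configuration fragment: expose the health-check HTTP endpoint.
    extensions:
      health_check:
        endpoint: 0.0.0.0:13133

    service:
      extensions: [health_check]

    # Kubernetes container spec fragment: probe the same endpoint.
    # These keys belong in your Pod template, not in the Collector configuration.
    livenessProbe:
      httpGet:
        path: /
        port: 13133
    readinessProbe:
      httpGet:
        path: /
        port: 13133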

Use the batch processor

The batch processor collects traces, metrics, or logs and bundles them into batches for transmission. Using the batch processor provides the following benefits:

  • Minimizes outgoing connections: By grouping data transmissions into batches, the OpenTelemetry Collector significantly reduces the number of outgoing connections. This consolidated approach lowers quota usage and has the potential to lower overall network costs.
  • Improved data compression: Batching enables more efficient data compression, reducing the overall size of the data transmitted.
  • Flexibility in batching strategy: Support for both size-based and time-based batching provides flexibility to optimize for different scenarios. Size-based batching ensures that batches reach a certain size before being sent, while time-based batching sends batches after a specific time interval has elapsed. This flexibility lets you fine-tune the batching strategy to align with the specific characteristics of your data and the particular requirements of your application, as shown in the sketch after this list.
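
The following configuration sketch combines both triggers: a batch is sent when either the size threshold or the timeout is reached first. The specific values, and the otlp receiver and googlecloud exporter named in the pipeline, are illustrative assumptions to tune for your workload.

    processors:
      batch:
        # Time-based batching: send whatever has accumulated after this interval.
        timeout: 5s
        # Size-based batching: send as soon as this many spans, data points,
        # or log records are queued.
        send_batch_size: 8192
        # Upper bound applied when splitting very large incoming batches.
        send_batch_max_size: 16384

    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [googlecloud]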

Use the googlesecretmanager provider

The googlesecretmanager provider lets you store sensitive information needed by configuration files in Secret Manager, a service designed specifically for securely storing, accessing, and managing sensitive data. Using the googlesecretmanager provider offers the following benefits:

  • Enhanced security: Your configuration files don't contain sensitive information like passwords.
  • Reduced risk of exposure: The Collector fetches secrets from Secret Manager during initialization, which prevents plaintext secrets from accidentally being recorded in logs.

For information about using this provider, see Manage secrets in Collector configuration.
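
As a sketch, the following exporter fragment reads an API key from Secret Manager at startup instead of hard-coding it in the configuration file. The otlphttp endpoint, header name, and secret resource path are hypothetical placeholders; see Manage secrets in Collector configuration for the exact reference syntax.

    exporters:
      otlphttp:
        endpoint: https://otlp.example.com:4318
        headers:
          # Hypothetical secret reference: replace the project, secret name,
          # and version with your own Secret Manager resource path.
          api-key: ${googlesecretmanager:projects/my-project/secrets/otlp-api-key/versions/latest}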

Release notes

The Google-Built OpenTelemetry Collector is versioned in sync with the upstream OpenTelemetry Collector. The current version is v0.143.0; the corresponding Docker image, stored in Artifact Registry, is us-docker.pkg.dev/cloud-ops-agents-artifacts/google-cloud-opentelemetry-collector/otelcol-google:0.143.0. For each new version, the changes that are most relevant to Google Cloud users are included on this page.
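
You can reference this image directly in a container-based deployment. The following Kubernetes Deployment fragment is a minimal sketch: the resource names, labels, and configuration-file path are illustrative assumptions, and the volume that mounts the configuration is omitted for brevity.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: otel-collector   # illustrative name
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: otel-collector
      template:
        metadata:
          labels:
            app: otel-collector
        spec:
          containers:
            - name: otel-collector
              image: us-docker.pkg.dev/cloud-ops-agents-artifacts/google-cloud-opentelemetry-collector/otelcol-google:0.143.0
              # Assumes the Collector configuration is mounted at this path
              # from a ConfigMap (volumes and volumeMounts not shown).
              args: ["--config=/etc/otelcol-google/config.yaml"]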

Security features

Starting with version 0.134.0, the Google-Built OpenTelemetry Collector offers Docker images that are built to the Supply-chain Levels for Software Artifacts (SLSA) level 3 standard and offer the following security features:

  • Secure build: The CI/CD pipeline for the Google-Built OpenTelemetry Collector generates attestations for each container-image release. These attestations include a verification summary attestation (VSA), which is attached as an asset to each release on GitHub.

  • Vulnerability scanning and patching: The Google-Built OpenTelemetry Collector is continually scanned for vulnerabilities, and we strive to follow FedRAMP SLOs for vulnerability remediation. We prioritize user security by regularly rebuilding the Collector with updated package dependencies. Our CI/CD pipeline is engineered to quickly integrate critical security fixes and dependency updates. This commitment minimizes exposure to known vulnerabilities, providing you with a continuously secure and up-to-date product.

  • Verified and secure releases: We enforce Google-internal code-integrity policies to ensure that only artifacts meeting our strict security criteria are released to production. This process means you always receive software that has successfully passed extensive internal security checks.

Supportability

For all Google-Built OpenTelemetry Collector client-side issues, including feature requests, bug reports, and general questions, open an issue in the appropriate GitHub repository. These repositories are monitored by Google, and issues are triaged and addressed on a best-effort basis.

For issues related to the Google-Built OpenTelemetry Collector's use of Google Cloud Observability services and APIs, like server errors or quotas, contact Cloud Customer Care.

Pricing

There is no charge for deploying and using the Google-Built OpenTelemetry Collector.

When you send telemetry data to Google Cloud, you are billed by ingestion volume. For information about costs associated with the ingestion of traces, logs, and Google Cloud Managed Service for Prometheus metrics, see Google Cloud Observability pricing.

Try the OpenTelemetry demo

This section describes how to deploy and run the OpenTelemetry demo for Google Cloud with the Google-Built OpenTelemetry Collector.

This section is optional. If you are ready to integrate the Google-built Collector into your own deployments, see the following documents:

Before you begin

The OpenTelemetry demo requires a Kubernetes cluster that has Workload Identity Federation configured. For information about setting up Workload Identity Federation for the OpenTelemetry demo, see Workload Identity prerequisites.

Update the demo to use the Google-built Collector

By default, the OpenTelemetry demo uses the upstream OpenTelemetry Collector. To use the Google-Built OpenTelemetry Collector instead, do the following:

  1. Clone the OpenTelemetry demo repository:

    git clone https://github.com/GoogleCloudPlatform/opentelemetry-demo.git
  2. Go to the kubernetes directory:

    cd kubernetes
  3. Edit the file opentelemetry-demo.yaml to replace the line for the Collector image to use. The line looks like the following, although the version might be different:

    image: "otel/opentelemetry-collector-contrib:0.108.0"

    Replace the value of the image: field with us-docker.pkg.dev/cloud-ops-agents-artifacts/google-cloud-opentelemetry-collector/otelcol-google:0.143.0, so that the line looks like the following, and then save the file:

    image: "us-docker.pkg.dev/cloud-ops-agents-artifacts/google-cloud-opentelemetry-collector/otelcol-google:0.143.0"

Deploy the demo

Deploy the demo by applying the updated opentelemetry-demo.yaml file:

kubectl apply --namespace otel-demo -f opentelemetry-demo.yaml

Connect to the demo

After you have applied the updated configuration, you can forward the demo frontend to a local port. For example, to connect to the demo at localhost:8080, issue the following command:

kubectl port-forward -n otel-demo svc/opentelemetry-demo-frontendproxy 8080:8080

You can then use your browser to connect to the demo at localhost:8080.

View telemetry

The OpenTelemetry demo sends metrics, traces, and logs to Google Cloud by using the Google-Built OpenTelemetry Collector. For information about the specific telemetry sent by the demo, see Seeing telemetry in the documentation for the demo.

View your metrics

The Google-Built OpenTelemetry Collector collects Prometheus metrics that you can view by using the Metrics Explorer. The metrics collected depend on the instrumentation of the app, although the Google-built Collector also writes some self-metrics.

To view the metrics collected by the Google-Built OpenTelemetry Collector, do the following:
  1. In the Google Cloud console, go to the Metrics explorer page:

    Go to Metrics explorer

    If you use the search bar to find this page, then select the result whose subheading is Monitoring.

  2. In the toolbar of the Google Cloud console, select your Google Cloud project. For App Hub configurations, select the App Hub host project or the app-enabled folder's management project.
  3. In the Metric element, expand the Select a metric menu, enter Prometheus Target in the filter bar, and then use the submenus to select a specific resource type and metric:
    1. In the Active resources menu, select Prometheus Target.
    2. To select a metric, use the Active metric categories and Active metrics menus. Metrics collected by the Google-Built OpenTelemetry Collector have the prefix prometheus.googleapis.com.
    3. Click Apply.
  4. To add filters, which remove time series from the query results, use the Filter element.

  5. Configure how the data is viewed.

    When the measurements for a metric are cumulative, Metrics Explorer automatically normalizes the measured data by the alignment period, which results in the chart displaying a rate. For more information, see Kinds, types, and conversions.

    When integer or double values are measured, such as with counter metrics, Metrics Explorer automatically sums all time series. To change this behavior, set the first menu of the Aggregation entry to None.

    For more information about configuring a chart, see Select metrics when using Metrics Explorer.

View your traces

To view your trace data, do the following:

  1. In the Google Cloud console, go to the Trace explorer page:

    Go to Trace explorer

    You can also find this page by using the search bar.

  2. In the toolbar of the Google Cloud console, select your Google Cloud project. For App Hub configurations, select the App Hub host project or management project.
  3. In the table section of the page, select a row.
  4. In the Gantt chart on the Trace details panel, select a span.

    A panel opens that displays information about the traced request. These details include the method, status code, number of bytes, and the user agent of the caller.

  5. To view the logs associated with this trace, select the Logs & Events tab.

    The tab shows individual logs. To view the details of the log entry, expand the log entry. You can also click View Logs and view the log by using the Logs Explorer.

For more information about using the Cloud Trace explorer, see Find and explore traces.

View your logs

From the Logs Explorer, you can inspect your logs, and you can also view associated traces, when they exist.

  1. In the Google Cloud console, go to the Logs Explorer page:

    Go to Logs Explorer

    If you use the search bar to find this page, then select the result whose subheading is Logging.

  2. Locate a log entry from your instrumented app. To view the details, expand the log entry.

  3. Click Traces on a log entry with a trace message, and then select View trace details.

    A Trace details panel opens and displays the selected trace.

For more information about using the Logs Explorer, see View logs by using the Logs Explorer.
