Get started with the OpenTelemetry Collector
This document describes how to set up the OpenTelemetry Collector to scrape standard Prometheus metrics and report those metrics to Google Cloud Managed Service for Prometheus. The OpenTelemetry Collector is an agent that you can deploy yourself and configure to export to Managed Service for Prometheus. The setup is similar to running Managed Service for Prometheus with self-deployed collection.
You might choose the OpenTelemetry Collector over self-deployed collection for the following reasons:
- The OpenTelemetry Collector lets you route your telemetry data to multiple backends by configuring different exporters in your pipeline.
- The Collector also supports signals from metrics, logs, and traces, so by using it you can handle all three signal types in one agent.
- OpenTelemetry's vendor-agnostic data format (the OpenTelemetry Protocol, or OTLP) supports a strong ecosystem of libraries and pluggable Collector components. This allows for a range of customizability options for receiving, processing, and exporting your data.
The trade-off for these benefits is that running an OpenTelemetry Collector requires a self-managed deployment and maintenance approach. Which approach you choose will depend on your specific needs, but in this document we offer recommended guidelines for configuring the OpenTelemetry Collector using Managed Service for Prometheus as a backend.
Note: Google Cloud technical support provides limited assistance for collection with the OpenTelemetry Collector.

Before you begin
This section describes the configuration needed for the tasks described in this document.
Set up projects and tools
To use Google Cloud Managed Service for Prometheus, you need the following resources:
A Google Cloud project with the Cloud Monitoring API enabled.
If you don't have a Google Cloud project, then do the following:
In the Google Cloud console, go to New Project:
In the Project Name field, enter a name for your project and then click Create.
Go to Billing:
Select the project you just created if it isn't already selected at the top of the page.
You are prompted to choose an existing payments profile or to create a new one.
The Monitoring API is enabled by default for new projects.
If you already have a Google Cloud project, then ensure that the Monitoring API is enabled:
Go to APIs & services:
Select your project.
Click Enable APIs and Services.
Search for "Monitoring".
In the search results, click through to "Cloud Monitoring API".
If "API enabled" is not displayed, then click the Enable button.
A Kubernetes cluster. If you do not have a Kubernetes cluster, then follow the instructions in the Quickstart for GKE.
You also need the following command-line tools:
- gcloud
- kubectl
The gcloud and kubectl tools are part of the Google Cloud CLI. For information about installing them, see Managing Google Cloud CLI components. To see the gcloud CLI components you have installed, run the following command:
gcloud components list
Configure your environment
To avoid repeatedly entering your project ID or cluster name, perform the following configuration:
Configure the command-line tools as follows:
Configure the gcloud CLI to refer to the ID of your Google Cloud project:
gcloud config set project PROJECT_ID
If running on GKE, use the gcloud CLI to set your cluster:
gcloud container clusters get-credentials CLUSTER_NAME --location LOCATION --project PROJECT_ID
Otherwise, use the kubectl CLI to set your cluster:

kubectl config set-cluster CLUSTER_NAME
For more information about these tools, see the following:
Set up a namespace
Create the NAMESPACE_NAME Kubernetes namespace for resources you create as part of the example application. We recommend using the namespace name gmp-test when using this documentation to configure an example Prometheus setup.
Create the namespace by running the following:
kubectl create ns NAMESPACE_NAME
Verify service account credentials
If your Kubernetes cluster has Workload Identity Federation for GKE enabled, then you can skip this section.
When running on GKE, Managed Service for Prometheus automatically retrieves credentials from the environment based on the Compute Engine default service account. The default service account has the necessary permissions. If you don't use Workload Identity Federation for GKE and you have previously removed either the monitoring.metricWriter or the monitoring.viewer role grant from the default node service account, then you have to re-add those missing roles before continuing.
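For example, you might re-add a missing role grant with a command like the following (a sketch; the default node service account is typically the Compute Engine default, PROJECT_NUMBER-compute@developer.gserviceaccount.com, and you would repeat the command with roles/monitoring.viewer if that grant is also missing):

gcloud projects add-iam-policy-binding PROJECT_ID \
  --member=serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com \
  --role=roles/monitoring.metricWriter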
Configure a service account for Workload Identity Federation for GKE
If your Kubernetes cluster doesn't have Workload Identity Federation for GKE enabled, then you can skip this section.
Managed Service for Prometheus captures metric data by using the Cloud Monitoring API. If your cluster is using Workload Identity Federation for GKE, you must grant your Kubernetes service account permission to use the Monitoring API. This section describes the following:
- Creating a dedicated Google Cloud service account, gmp-test-sa.
- Binding the Google Cloud service account to the default Kubernetes service account in a test namespace, NAMESPACE_NAME.
- Granting the necessary permission to the Google Cloud service account.
Create and bind the service account
This step appears in several places in the Managed Service for Prometheus documentation. If you have already performed this step as part of a prior task, then you don't need to repeat it. Skip ahead to Authorize the service account.
First, create a service account if you haven't yet done so:
gcloud config set project PROJECT_ID \
&& gcloud iam service-accounts create gmp-test-sa
Then use the following command sequence to bind the gmp-test-sa service account to the default Kubernetes service account in the NAMESPACE_NAME namespace:
gcloud config set project PROJECT_ID \
&& gcloud iam service-accounts add-iam-policy-binding \
  --role roles/iam.workloadIdentityUser \
  --condition=None \
  --member "serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE_NAME/default]" \
  gmp-test-sa@PROJECT_ID.iam.gserviceaccount.com \
&& kubectl annotate serviceaccount \
  --namespace NAMESPACE_NAME \
  default \
  iam.gke.io/gcp-service-account=gmp-test-sa@PROJECT_ID.iam.gserviceaccount.com
If you are using a different GKE namespace or service account, adjust the commands appropriately.
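To confirm that the annotation was applied, you can, for example, inspect the Kubernetes service account and check for the iam.gke.io/gcp-service-account annotation:

kubectl -n NAMESPACE_NAME get serviceaccount default -o yaml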
Authorize the service account
Groups of related permissions are collected into roles, and you grant the roles to a principal, in this example, the Google Cloud service account. For more information about Monitoring roles, see Access control.
The following command grants the Google Cloud service account, gmp-test-sa, the Monitoring API roles it needs to write metric data.
If you have already granted the Google Cloud service account a specific role as part of a prior task, then you don't need to do it again.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member=serviceAccount:gmp-test-sa@PROJECT_ID.iam.gserviceaccount.com \
  --role=roles/monitoring.metricWriter \
  --condition=None \
&& gcloud projects add-iam-policy-binding PROJECT_ID \
  --member=serviceAccount:gmp-test-sa@PROJECT_ID.iam.gserviceaccount.com \
  --role=roles/iam.serviceAccountTokenCreator \
  --condition=None
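If you want to double-check which roles the service account holds, one way is to filter the project's IAM policy; a sketch using standard gcloud flags:

# List the roles granted to gmp-test-sa in the project.
gcloud projects get-iam-policy PROJECT_ID \
  --flatten="bindings[].members" \
  --filter="bindings.members:gmp-test-sa@PROJECT_ID.iam.gserviceaccount.com" \
  --format="table(bindings.role)"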
Debug your Workload Identity Federation for GKE configuration
If you are having trouble getting Workload Identity Federation for GKE to work, see the documentation for verifying your Workload Identity Federation for GKE setup and the Workload Identity Federation for GKE troubleshooting guide.
As typos and partial copy-pastes are the most common sources of errors when configuring Workload Identity Federation for GKE, we strongly recommend using the editable variables and clickable copy-paste icons embedded in the code samples in these instructions.
Workload Identity Federation for GKE in production environments
The example described in this document binds the Google Cloud service account to the default Kubernetes service account and gives the Google Cloud service account all necessary permissions to use the Monitoring API.
In a production environment, you might want to use a finer-grained approach, with a service account for each component, each with minimal permissions. For more information on configuring service accounts for workload-identity management, see Using Workload Identity Federation for GKE.
Set up the OpenTelemetry Collector
This section guides you through setting up and using the OpenTelemetry Collector to scrape metrics from an example application and send the data to Google Cloud Managed Service for Prometheus. For detailed configuration information, see the following sections:
The OpenTelemetry Collector is analogous to the Managed Service for Prometheus agent binary. The OpenTelemetry community regularly publishes releases including source code, binaries, and container images.
You can either deploy these artifacts on VMs or Kubernetes clusters using the best-practice defaults, or you can use the collector builder to build your own collector consisting of only the components you need. To build a collector for use with Managed Service for Prometheus, you need the following components:
- The Managed Service for Prometheus exporter, which writes your metrics to Managed Service for Prometheus.
- A receiver to scrape your metrics. This document assumes that you are using the OpenTelemetry Prometheus receiver, but the Managed Service for Prometheus exporter is compatible with any OpenTelemetry metrics receiver.
- Processors to batch and mark up your metrics to include important resource identifiers, depending on your environment.
These components are enabled by using a configuration file that is passed to the Collector with the --config flag.
The following sections discuss how to configure each of these components in more detail. This document describes how to run the collector on GKE and elsewhere.
Configure and deploy the Collector
Whether you are running your collection on Google Cloud or in another environment, you can still configure the OpenTelemetry Collector to export to Managed Service for Prometheus. The biggest difference is in how you configure the Collector. In non-Google Cloud environments, additional formatting of the metric data might be needed to make it compatible with Managed Service for Prometheus. On Google Cloud, however, much of this formatting can be automatically detected by the Collector.
Run the OpenTelemetry Collector on GKE
You can copy the following config into a file called config.yaml to set up the OpenTelemetry Collector on GKE:
receivers:
  prometheus:
    config:
      scrape_configs:
      - job_name: 'SCRAPE_JOB_NAME'
        kubernetes_sd_configs:
        - role: pod
        relabel_configs:
        - source_labels: [__meta_kubernetes_pod_label_app_kubernetes_io_name]
          action: keep
          regex: prom-example
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
          action: replace
          target_label: __metrics_path__
          regex: (.+)
        - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
          action: replace
          regex: (.+):(?:\d+);(\d+)
          replacement: $$1:$$2
          target_label: __address__
        - action: labelmap
          regex: __meta_kubernetes_pod_label_(.+)
processors:
  resourcedetection:
    detectors: [gcp]
    timeout: 10s
  transform:
    # "location", "cluster", "namespace", "job", "instance", and "project_id" are reserved, and
    # metrics containing these labels will be rejected. Prefix them with exported_ to prevent this.
    metric_statements:
    - context: datapoint
      statements:
      - set(attributes["exported_location"], attributes["location"])
      - delete_key(attributes, "location")
      - set(attributes["exported_cluster"], attributes["cluster"])
      - delete_key(attributes, "cluster")
      - set(attributes["exported_namespace"], attributes["namespace"])
      - delete_key(attributes, "namespace")
      - set(attributes["exported_job"], attributes["job"])
      - delete_key(attributes, "job")
      - set(attributes["exported_instance"], attributes["instance"])
      - delete_key(attributes, "instance")
      - set(attributes["exported_project_id"], attributes["project_id"])
      - delete_key(attributes, "project_id")
  batch:
    # batch metrics before sending to reduce API usage
    send_batch_max_size: 200
    send_batch_size: 200
    timeout: 5s
  memory_limiter:
    # drop metrics if memory usage gets too high
    check_interval: 1s
    limit_percentage: 65
    spike_limit_percentage: 20
# Note that the googlemanagedprometheus exporter block is intentionally blank
exporters:
  googlemanagedprometheus:
service:
  pipelines:
    metrics:
      receivers: [prometheus]
      processors: [batch, memory_limiter, resourcedetection, transform]
      exporters: [googlemanagedprometheus]
The preceding config uses the Prometheus receiver and the Managed Service for Prometheus exporter to scrape the metrics endpoints on Kubernetes Pods and export those metrics to Managed Service for Prometheus. The pipeline processors format and batch the data.
For more details on what each part of this config does, along with configurations for different platforms, see the following detailed sections on scraping metrics and adding processors.
When using an existing Prometheus configuration with the OpenTelemetry Collector's prometheus receiver, replace any single dollar sign characters, $, with double dollar sign characters, $$, to avoid triggering environment variable substitution. For more information, see Scrape Prometheus metrics.
You can modify this config based on your environment, provider, and the metrics you want to scrape, but the example config is a recommended starting point for running on GKE.
Run the OpenTelemetry Collector outside Google Cloud
Running the OpenTelemetry Collector outside Google Cloud, such as on-premises or on other cloud providers, is similar to running the Collector on GKE. However, the metrics you scrape are less likely to automatically include the data that best formats them for Managed Service for Prometheus. Therefore, you must take extra care to configure the collector to format the metrics so they are compatible with Managed Service for Prometheus.
You can copy the following config into a file called config.yaml to set up the OpenTelemetry Collector for deployment on a non-GKE Kubernetes cluster:
receivers:
  prometheus:
    config:
      scrape_configs:
      - job_name: 'SCRAPE_JOB_NAME'
        kubernetes_sd_configs:
        - role: pod
        relabel_configs:
        - source_labels: [__meta_kubernetes_pod_label_app_kubernetes_io_name]
          action: keep
          regex: prom-example
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
          action: replace
          target_label: __metrics_path__
          regex: (.+)
        - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
          action: replace
          regex: (.+):(?:\d+);(\d+)
          replacement: $$1:$$2
          target_label: __address__
        - action: labelmap
          regex: __meta_kubernetes_pod_label_(.+)
processors:
  resource:
    attributes:
    - key: "cluster"
      value: "CLUSTER_NAME"
      action: upsert
    - key: "namespace"
      value: "NAMESPACE_NAME"
      action: upsert
    - key: "location"
      value: "REGION"
      action: upsert
  transform:
    # "location", "cluster", "namespace", "job", "instance", and "project_id" are reserved, and
    # metrics containing these labels will be rejected. Prefix them with exported_ to prevent this.
    metric_statements:
    - context: datapoint
      statements:
      - set(attributes["exported_location"], attributes["location"])
      - delete_key(attributes, "location")
      - set(attributes["exported_cluster"], attributes["cluster"])
      - delete_key(attributes, "cluster")
      - set(attributes["exported_namespace"], attributes["namespace"])
      - delete_key(attributes, "namespace")
      - set(attributes["exported_job"], attributes["job"])
      - delete_key(attributes, "job")
      - set(attributes["exported_instance"], attributes["instance"])
      - delete_key(attributes, "instance")
      - set(attributes["exported_project_id"], attributes["project_id"])
      - delete_key(attributes, "project_id")
  batch:
    # batch metrics before sending to reduce API usage
    send_batch_max_size: 200
    send_batch_size: 200
    timeout: 5s
  memory_limiter:
    # drop metrics if memory usage gets too high
    check_interval: 1s
    limit_percentage: 65
    spike_limit_percentage: 20
exporters:
  googlemanagedprometheus:
    project: "PROJECT_ID"
service:
  pipelines:
    metrics:
      receivers: [prometheus]
      processors: [batch, memory_limiter, resource, transform]
      exporters: [googlemanagedprometheus]
This config does the following:
- Sets up a Kubernetes service discovery scrape config for Prometheus. For more information, see Scrape Prometheus metrics.
- Manually sets the cluster, namespace, and location resource attributes. For more information about resource attributes, including resource detection for Amazon EKS and Azure AKS, see Detect resource attributes.
- Sets the project option in the googlemanagedprometheus exporter. For more information about the exporter, see Configure the googlemanagedprometheus exporter.
When using an existing Prometheus configuration with the OpenTelemetry Collector's prometheus receiver, replace any single dollar sign characters, $, with double dollar sign characters, $$, to avoid triggering environment variable substitution. For more information, see Scrape Prometheus metrics.
For information about best practices for configuring the Collector on other clouds, see Amazon EKS or Azure AKS.
Deploy the example application
The example application emits the example_requests_total counter metric and the example_random_numbers histogram metric (among others) on its metrics port. The manifest for this example defines three replicas.
To deploy the example application, run the following command:
kubectl -n NAMESPACE_NAME apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/prometheus-engine/v0.17.2/examples/example-app.yaml
Create your collector config as a ConfigMap
After you have created your config and placed it in a file called config.yaml, use that file to create a Kubernetes ConfigMap. When the collector is deployed, it mounts the ConfigMap and loads the file.
To create a ConfigMap named otel-config with your config, use the following command:
kubectl -n NAMESPACE_NAME create configmap otel-config --from-file config.yaml
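To confirm that the ConfigMap contains your config before deploying, you can, for example, run:

kubectl -n NAMESPACE_NAME get configmap otel-config -o yaml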
Deploy the collector
Create a file called collector-deployment.yaml with the following content:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: NAMESPACE_NAME:prometheus-test
rules:
- apiGroups: [""]
  resources:
  - pods
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: NAMESPACE_NAME:prometheus-test
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: NAMESPACE_NAME:prometheus-test
subjects:
- kind: ServiceAccount
  namespace: NAMESPACE_NAME
  name: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otel-collector
spec:
  replicas: 1
  selector:
    matchLabels:
      app: otel-collector
  template:
    metadata:
      labels:
        app: otel-collector
    spec:
      containers:
      - name: otel-collector
        image: otel/opentelemetry-collector-contrib:0.140.0
        args:
        - --config
        - /etc/otel/config.yaml
        - --feature-gates=exporter.googlemanagedprometheus.intToDouble
        volumeMounts:
        - mountPath: /etc/otel/
          name: otel-config
      volumes:
      - name: otel-config
        configMap:
          name: otel-config
Create the Collector deployment in your Kubernetes cluster by running the following command:
kubectl -n NAMESPACE_NAME create -f collector-deployment.yaml
After the pod starts, it scrapes the sample application and reports metrics to Managed Service for Prometheus.
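To verify, you might check the Pod status and watch the collector logs for scrape or export errors; a quick sketch, assuming the otel-collector Deployment from the preceding manifest:

# Confirm the collector Pod is running.
kubectl -n NAMESPACE_NAME get pods -l app=otel-collector

# Watch the collector logs for scrape or export errors.
kubectl -n NAMESPACE_NAME logs deployment/otel-collector --follow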
For information about ways to query your data, see Query using Cloud Monitoring or Query using Grafana.
Provide credentials explicitly
When running on GKE, the OpenTelemetry Collector automatically retrieves credentials from the environment based on the node's service account. In non-GKE Kubernetes clusters, credentials must be explicitly provided to the OpenTelemetry Collector by using flags or the GOOGLE_APPLICATION_CREDENTIALS environment variable.
Set the context to your target project:
gcloud config set project PROJECT_ID
Create a service account:
gcloud iam service-accounts create gmp-test-sa
This step creates the service account that you might have already created in the Workload Identity Federation for GKE instructions.
Grant the required permissions to the service account:
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member=serviceAccount:gmp-test-sa@PROJECT_ID.iam.gserviceaccount.com \
  --role=roles/monitoring.metricWriter
Create and download a key for the service account:
gcloud iam service-accounts keys create gmp-test-sa-key.json \
  --iam-account=gmp-test-sa@PROJECT_ID.iam.gserviceaccount.com
Add the key file as a secret to your non-GKE cluster:
kubectl -n NAMESPACE_NAME create secret generic gmp-test-sa \
  --from-file=key.json=gmp-test-sa-key.json
Open the OpenTelemetry Deployment resource for editing:
kubectl -n NAMESPACE_NAME edit deployment otel-collector
Add the env variable, volume mount, and volume shown in the following example to the resource:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: NAMESPACE_NAME
  name: otel-collector
spec:
  template:
    spec:
      containers:
      - name: otel-collector
        env:
        - name: "GOOGLE_APPLICATION_CREDENTIALS"
          value: "/gmp/key.json"
...
        volumeMounts:
        - name: gmp-sa
          mountPath: /gmp
          readOnly: true
...
      volumes:
      - name: gmp-sa
        secret:
          secretName: gmp-test-sa
...
Save the file and close the editor. After the change is applied, the pods are re-created and start authenticating to the metric backend with the given service account.
Scrape Prometheus metrics
This section and the subsequent section provide additional customization information for using the OpenTelemetry Collector. This information might be helpful in certain situations, but none of it is necessary to run the example described in Set up the OpenTelemetry Collector.
If your applications are already exposing Prometheus endpoints, the OpenTelemetry Collector can scrape those endpoints using the same scrape config format you would use with any standard Prometheus config. To do this, enable the Prometheus receiver in your collector config.
A Prometheus receiver config for Kubernetes pods might look like the following:
receivers:
  prometheus:
    config:
      scrape_configs:
      - job_name: 'kubernetes-pods'
        kubernetes_sd_configs:
        - role: pod
        relabel_configs:
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
          action: keep
          regex: true
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
          action: replace
          target_label: __metrics_path__
          regex: (.+)
        - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
          action: replace
          regex: (.+):(?:\d+);(\d+)
          replacement: $$1:$$2
          target_label: __address__
        - action: labelmap
          regex: __meta_kubernetes_pod_label_(.+)
service:
  pipelines:
    metrics:
      receivers: [prometheus]
This is a service discovery-based scrape config that you can modify as needed to scrape your applications.
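For example, a Pod opts in to the preceding scrape config through annotations: the keep rule matches the prometheus.io/scrape annotation, while the path and port annotations are optional overrides. A minimal sketch of matching Pod metadata (the Pod name, image, and port values are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: my-app                       # hypothetical Pod name
  annotations:
    prometheus.io/scrape: "true"     # matched by the keep rule
    prometheus.io/path: "/metrics"   # optional; rewrites __metrics_path__
    prometheus.io/port: "8080"       # optional; rewrites the port in __address__
spec:
  containers:
  - name: my-app
    image: example.com/my-app:latest # hypothetical image
    ports:
    - containerPort: 8080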
When using an existing Prometheus configuration with the OpenTelemetry Collector's prometheus receiver, replace any single dollar sign characters, $, with double dollar sign characters, $$, to avoid triggering environment variable substitution. This is especially important to do for the replacement value within your relabel_configs section. For example, if you have the following relabel_config section:
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
  action: replace
  regex: (.+):(?:\d+);(\d+)
  replacement: $1:$2
  target_label: __address__
Then rewrite it to be:
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
  action: replace
  regex: (.+):(?:\d+);(\d+)
  replacement: $$1:$$2
  target_label: __address__
For more information, see the OpenTelemetry documentation.
Next, we strongly recommend that you use processors to format your metrics; in many cases, they are required to format your metrics properly.
Add processors
OpenTelemetry processors modify telemetry data before it is exported. You can use the following processors to make sure that your metrics are written in a format compatible with Managed Service for Prometheus.
Detect resource attributes
The Managed Service for Prometheus exporter for OpenTelemetry uses the prometheus_target monitored resource to uniquely identify time series data points. The exporter parses the required monitored-resource fields from resource attributes on the metric data points. The fields and the attributes from which the values are scraped are:
- project_id: auto-detected by Application Default Credentials, gcp.project.id, or project in the exporter config (see Configure the googlemanagedprometheus exporter)
- location: location, cloud.availability_zone, cloud.region
- cluster: cluster, k8s.cluster.name
- namespace: namespace, k8s.namespace.name
- job: service.name + service.namespace
- instance: service.instance.id
Failure to set these labels to unique values can result in "duplicate timeseries" errors when exporting to Managed Service for Prometheus. In many cases, values can be automatically detected for these labels, but in some cases, you might have to map them yourself. The rest of this section describes these scenarios.
Note: The terms labels and attributes, when referring to metric data points, represent essentially the same concept in Prometheus and OpenTelemetry, respectively. In this context, a Prometheus metric with the label foo is converted into an OpenTelemetry data point with an attribute foo. The specific labels or attributes previously listed are converted into resource attributes, which are another OpenTelemetry concept for identifying data points specific to the source of the data. These resource attributes are then mapped to the monitored-resource fields listed.

The Prometheus receiver automatically sets the service.name attribute based on the job_name in the scrape config, and the service.instance.id attribute based on the scrape target's instance. The receiver also sets k8s.namespace.name when using role: pod in the scrape config.
When possible, populate the other attributes automatically by using the resource detection processor. However, depending on your environment, some attributes might not be automatically detectable. In this case, you can use other processors to either manually insert these values or parse them from metric labels. The following sections illustrate configurations for detecting resources on various platforms.
GKE
When running OpenTelemetry on GKE, you need to enable the resource-detection processor to fill out the resource labels. Be sure that your metrics don't already contain any of the reserved resource labels. If this is unavoidable, see Avoid resource attribute collisions by renaming attributes.
processors:
  resourcedetection:
    detectors: [gcp]
    timeout: 10s
This section can be copied directly into your config file, replacing the processors section if it already exists.
Amazon EKS
The EKS resource detector does not automatically fill in the cluster or namespace attributes. You can provide these values manually by using the resource processor, as shown in the following example:
processors:
  resourcedetection:
    detectors: [eks]
    timeout: 10s
  resource:
    attributes:
    - key: "cluster"
      value: "my-eks-cluster"
      action: upsert
    - key: "namespace"
      value: "my-app"
      action: upsert
You can also convert these values from metric labels by using the groupbyattrs processor; see Move metric labels to resource labels.
Azure AKS
The AKS resource detector does not automatically fill in the cluster or namespace attributes. You can provide these values manually by using the resource processor, as shown in the following example:
processors:
  resourcedetection:
    detectors: [aks]
    timeout: 10s
  resource:
    attributes:
    - key: "cluster"
      value: "my-aks-cluster"
      action: upsert
    - key: "namespace"
      value: "my-app"
      action: upsert
You can also convert these values from metric labels by using the groupbyattrs processor; see Move metric labels to resource labels.
On-premises and non-cloud environments
In on-premises or non-cloud environments, you probably can't detect any of the necessary resource attributes automatically. In this case, you can emit these labels in your metrics and move them to resource attributes (see Move metric labels to resource labels), or manually set all of the resource attributes as shown in the following example:
processors:
  resource:
    attributes:
    - key: "cluster"
      value: "my-on-prem-cluster"
      action: upsert
    - key: "namespace"
      value: "my-app"
      action: upsert
    - key: "location"
      value: "us-east-1"
      action: upsert
Create your collector config as a ConfigMap describes how to use the config. That section assumes you have put your config in a file called config.yaml.
The project_id resource attribute can still be automatically set when running the Collector with Application Default Credentials. If your Collector does not have access to Application Default Credentials, see Setting project_id.
Alternatively, you can manually set the resource attributes you need in an environment variable, OTEL_RESOURCE_ATTRIBUTES, with a comma-separated list of key-value pairs, for example:
export OTEL_RESOURCE_ATTRIBUTES="cluster=my-cluster,namespace=my-app,location=us-east-1"
Then use the env resource detector processor to set the resource attributes:
processors:
  resourcedetection:
    detectors: [env]
Avoid resource attribute collisions by renaming attributes
If your metrics already contain labels that collide with the required resource attributes (such as location, cluster, or namespace), rename them to avoid the collision. The Prometheus convention is to add the prefix exported_ to the label name. To add this prefix, use the transform processor.
The following processors config renames any potentially colliding labels and deletes the conflicting keys from the metric:
processors:
  transform:
    # "location", "cluster", "namespace", "job", "instance", and "project_id" are reserved, and
    # metrics containing these labels will be rejected. Prefix them with exported_ to prevent this.
    metric_statements:
    - context: datapoint
      statements:
      - set(attributes["exported_location"], attributes["location"])
      - delete_key(attributes, "location")
      - set(attributes["exported_cluster"], attributes["cluster"])
      - delete_key(attributes, "cluster")
      - set(attributes["exported_namespace"], attributes["namespace"])
      - delete_key(attributes, "namespace")
      - set(attributes["exported_job"], attributes["job"])
      - delete_key(attributes, "job")
      - set(attributes["exported_instance"], attributes["instance"])
      - delete_key(attributes, "instance")
      - set(attributes["exported_project_id"], attributes["project_id"])
      - delete_key(attributes, "project_id")
Move metric labels to resource labels
In some cases, your metrics might be intentionally reporting labels such as namespace because your exporter is monitoring multiple namespaces, for example, when running the kube-state-metrics exporter.
In this scenario, these labels can be moved to resource attributes by using the groupbyattrs processor:
processors:
  groupbyattrs:
    keys:
    - namespace
    - cluster
    - location
In the previous example, given a metric with the labels namespace, cluster, or location, those labels will be converted to the matching resource attributes.
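As a hypothetical illustration of the effect:

# Before groupbyattrs: a scraped series carries these as metric labels:
#   example_requests_total{namespace="my-app", cluster="my-cluster", location="us-east-1"}
# After groupbyattrs: namespace, cluster, and location are removed from the
# metric labels and set as resource attributes, so the exporter can map them
# to the prometheus_target monitored-resource fields instead of rejecting them.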
Limit API requests and memory usage
Two other processors, the batch processor and the memory limiter processor, let you limit the resource consumption of your collector.
Batch processing
Batching requests lets you define how many data points to send in a single request. Note that Cloud Monitoring has a limit of 200 time series per request. Enable the batch processor by using the following settings:
processors:
  batch:
    # batch metrics before sending to reduce API usage
    send_batch_max_size: 200
    send_batch_size: 200
    timeout: 5s
Memory limiting
We recommend enabling the memory-limiter processor to prevent your collector from crashing at times of high throughput. Enable the processor by using the following settings:
processors:
  memory_limiter:
    # drop metrics if memory usage gets too high
    check_interval: 1s
    limit_percentage: 65
    spike_limit_percentage: 20
Configure the googlemanagedprometheus exporter
By default, using the googlemanagedprometheus exporter on GKE requires no additional configuration. For many use cases, you only need to enable it with an empty block in the exporters section:
exporters:
  googlemanagedprometheus:
However, the exporter does provide some optional configuration settings, which the following sections describe.
Setting project_id
To associate your time series with a Google Cloud project, the prometheus_target monitored resource must have project_id set.
When running OpenTelemetry on Google Cloud, the Managed Service for Prometheus exporter defaults to setting this value based on the Application Default Credentials it finds. If no credentials are available, or you want to override the default project, you have two options:
- Set project in the exporter config
- Add a gcp.project.id resource attribute to your metrics.
We strongly recommend using the default (unset) value for project_id rather than explicitly setting it, when possible.
Note: If you set project_id, the Collector's service account must have the roles/monitoring.metricWriter Identity and Access Management role for the destination project.

Set project in the exporter config
The following config excerpt sends metrics to Managed Service for Prometheus in the Google Cloud project MY_PROJECT:
receivers:
  prometheus:
    config:
    ...
processors:
  resourcedetection:
    detectors: [gcp]
    timeout: 10s
exporters:
  googlemanagedprometheus:
    project: MY_PROJECT
service:
  pipelines:
    metrics:
      receivers: [prometheus]
      processors: [resourcedetection]
      exporters: [googlemanagedprometheus]
The only change from previous examples is the new line project: MY_PROJECT. This setting is useful if you know that every metric coming through this Collector should be sent to MY_PROJECT.
Set the gcp.project.id resource attribute
You can set project association on a per-metric basis by adding a gcp.project.id resource attribute to your metrics. Set the value of the attribute to the name of the project the metric should be associated with.
For example, if your metric already has a label project, this label can be moved to a resource attribute and renamed to gcp.project.id by using processors in the Collector config, as shown in the following example:
receivers:
  prometheus:
    config:
    ...
processors:
  resourcedetection:
    detectors: [gcp]
    timeout: 10s
  groupbyattrs:
    keys:
    - project
  resource:
    attributes:
    - key: "gcp.project.id"
      from_attribute: "project"
      action: upsert
exporters:
  googlemanagedprometheus:
service:
  pipelines:
    metrics:
      receivers: [prometheus]
      processors: [resourcedetection, groupbyattrs, resource]
      exporters: [googlemanagedprometheus]
Setting client options
The googlemanagedprometheus exporter uses gRPC clients for Managed Service for Prometheus. Therefore, optional settings are available for configuring the gRPC client:
- compression: Enables gzip compression for gRPC requests, which is useful for minimizing data transfer fees when sending data from other clouds to Managed Service for Prometheus (valid values: gzip).
- user_agent: Overrides the user-agent string sent on requests to Cloud Monitoring; only applies to metrics. Defaults to the build and version number of your OpenTelemetry Collector, for example, opentelemetry-collector-contrib 0.140.0.
- endpoint: Sets the endpoint to which metric data is going to be sent.
- use_insecure: If true, uses gRPC as the communication transport. Has an effect only when the endpoint value is not "".
- grpc_pool_size: Sets the size of the connection pool in the gRPC client.
- prefix: Configures the prefix of metrics sent to Managed Service for Prometheus. Defaults to prometheus.googleapis.com. Don't change this prefix; doing so causes metrics to not be queryable with PromQL in the Cloud Monitoring UI.
In most cases, you don't need to change these values from theirdefaults. However, you can change them to accommodate specialcircumstances.
All of these settings are set under a metric block in the googlemanagedprometheus exporter section, as shown in the following example:
receivers:
  prometheus:
    config:
    ...
processors:
  resourcedetection:
    detectors: [gcp]
    timeout: 10s
exporters:
  googlemanagedprometheus:
    metric:
      compression: gzip
      user_agent: opentelemetry-collector-contrib 0.140.0
      endpoint: ""
      use_insecure: false
      grpc_pool_size: 1
      prefix: prometheus.googleapis.com
service:
  pipelines:
    metrics:
      receivers: [prometheus]
      processors: [resourcedetection]
      exporters: [googlemanagedprometheus]
What's next
- Use PromQL in Cloud Monitoring to query Prometheus metrics.
- Use Grafana to query Prometheus metrics.
- Set up the OpenTelemetry Collector as a sidecar agent in Cloud Run.
The Cloud Monitoring Metrics Management page provides information that can help you control the amount you spend on billable metrics without affecting observability. The Metrics Management page reports the following information:
- Ingestion volumes for both byte- and sample-based billing, across metric domains and for individual metrics.
- Data about labels and cardinality of metrics.
- Number of reads for each metric.
- Use of metrics in alerting policies and custom dashboards.
- Rate of metric-write errors.
You can also use the Metrics Management page to exclude unneeded metrics, eliminating the cost of ingesting them. For more information about the Metrics Management page, see View and manage metric usage.