Ingestion and querying with managed and self-deployed collection

This document describes how you might set up an environment that mixes self-deployed collectors with managed collectors, across different Google Cloud projects and clusters.

We strongly recommend using managed collection for all Kubernetes environments; doing so practically eliminates the overhead of running Prometheus collectors within your cluster. You can run managed and self-deployed collectors within the same cluster. We recommend using a consistent approach to monitoring, but you might choose to mix deployment methods for some specific use cases, such as hosting a push gateway, as illustrated in this document.

The following diagram illustrates a configuration that uses two Google Cloud projects, three clusters, and mixes managed and self-deployed collection. If you use only managed or self-deployed collection, then the diagram is still applicable; just ignore the collection style you aren't using:

You can set up Managed Service for Prometheus with a mix of managed and self-deployed collection.

To set up and use a configuration like the one in the diagram, note the following:

  • You must install any necessary exporters in your clusters. Google Cloud Managed Service for Prometheus does not install any exporters on your behalf.

  • Project 1 has a cluster running managed collection, which runs as a node agent. Collectors are configured with PodMonitoring resources to scrape targets within a namespace and with ClusterPodMonitoring resources to scrape targets across a cluster. PodMonitorings must be applied in every namespace in which you want to collect metrics. ClusterPodMonitorings are applied once per cluster. (For an illustrative sketch of these resources, see the examples after this list.)

    All data collected in Project 1 is saved in Monarch under Project 1. This data is stored by default in the Google Cloud region from which it was emitted.

  • Project 2 has a cluster running self-deployed collection using prometheus-operator and running as a standalone service. This cluster is configured to use prometheus-operator PodMonitors or ServiceMonitors to scrape exporters on pods or VMs. (An illustrative PodMonitor sketch follows this list.)

    Project 2 also hosts a push gateway sidecar to gather metrics from ephemeral workloads.

    All data collected in Project 2 is saved in Monarch under Project 2. This data is stored by default in the Google Cloud region from which it was emitted.

  • Project 1 also has a cluster running Grafana and the data source syncer. In this example, these components are hosted in a standalone cluster, but they can be hosted in any single cluster.

    The data source syncer is configured to use scoping_project_A, and its underlying service account has Monitoring Viewer permissions for scoping_project_A.

    When a user issues queries from Grafana, Monarch expands scoping_project_A into its constituent monitored projects and returns results for both Project 1 and Project 2 across all Google Cloud regions. All metrics retain their original project_id and location (Google Cloud region) labels for grouping and filtering purposes.
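The following is a minimal sketch of the kind of PodMonitoring and ClusterPodMonitoring resources used for managed collection in Project 1. The namespace my-namespace, the app: example-app and app: example-daemon selectors, the port name metrics, and the 30-second interval are illustrative placeholders, not values taken from this document:

```yaml
# Scrapes matching pods in a single namespace; apply one of these
# in every namespace you want to collect from.
apiVersion: monitoring.googleapis.com/v1
kind: PodMonitoring
metadata:
  name: example-app-monitoring
  namespace: my-namespace        # placeholder namespace
spec:
  selector:
    matchLabels:
      app: example-app           # placeholder pod label
  endpoints:
  - port: metrics                # placeholder port name on the pod
    interval: 30s
---
# Scrapes matching pods across the whole cluster; applied once per cluster.
apiVersion: monitoring.googleapis.com/v1
kind: ClusterPodMonitoring
metadata:
  name: example-daemon-monitoring
spec:
  selector:
    matchLabels:
      app: example-daemon        # placeholder pod label
  endpoints:
  - port: metrics
    interval: 30s
```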
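For the self-deployed cluster in Project 2, a prometheus-operator PodMonitor (or ServiceMonitor) plays the analogous role. The sketch below assumes pods labeled app: example-exporter exposing a port named metrics; those names are placeholders. When the target is a push gateway, honorLabels is commonly set so that the job and instance labels pushed by ephemeral workloads are preserved rather than overwritten by the scrape:

```yaml
# prometheus-operator resource consumed by a self-deployed Prometheus.
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: example-podmonitor
  namespace: my-namespace          # placeholder namespace
spec:
  selector:
    matchLabels:
      app: example-exporter        # placeholder pod label
  podMetricsEndpoints:
  - port: metrics                  # placeholder port name
    interval: 30s
    # honorLabels: true            # typically enabled when scraping a push gateway
```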

If your cluster is not running inside Google Cloud, you must manually configure the project_id and location labels. For information about setting these values, see Run Managed Service for Prometheus outside of Google Cloud.
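As a rough illustration for self-deployed collection, these labels can be supplied as external labels in the Prometheus configuration; the project ID, region, and cluster name below are placeholders, and the linked page remains the authoritative reference for both managed and self-deployed collectors:

```yaml
# Fragment of a self-deployed Prometheus configuration (prometheus.yml).
# Substitute your own values; these are placeholders.
global:
  external_labels:
    project_id: example-project    # Google Cloud project that stores the metrics
    location: us-central1          # a supported Google Cloud region
    cluster: example-cluster       # any name identifying this cluster
```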

Do not federate when using Managed Service for Prometheus. To reduce cardinality and cost by "rolling up" data before sending it to Monarch, use local aggregation instead. For more information, see Configure local aggregation.
