Google Cloud Managed Service for Prometheus
Google Cloud Managed Service for Prometheus is Google Cloud's fully managed, multi-cloud, cross-project solution for Prometheus and OpenTelemetry metrics. It lets you globally monitor and alert on your workloads, using Prometheus and OpenTelemetry, without having to manually manage and operate Prometheus at scale.
Managed Service for Prometheus collects metrics from Prometheus exporters and lets you query the data globally using PromQL, meaning that you can keep using any existing Grafana dashboards, PromQL-based alerts, and workflows. It is hybrid- and multi-cloud compatible, can monitor Kubernetes, VMs, and serverless workloads on Cloud Run, retains data for 24 months, and maintains portability by staying compatible with upstream Prometheus. You can also supplement your Prometheus monitoring by querying over 6,500 free metrics in Cloud Monitoring, including free GKE system metrics, using PromQL.
This document gives an overview of the managed service, and further documents describe how to set up and run the service. To receive regular updates about new features and releases, submit the optional sign-up form.
Hear how The Home Depot uses Managed Service for Prometheus to get unified observability across 2,200 stores running on-prem Kubernetes clusters.
System overview
Google Cloud Managed Service for Prometheus gives you the familiarity of Prometheus backed by the global, multi-cloud, and cross-project infrastructure of Cloud Monitoring.

Managed Service for Prometheus is built on top of Monarch, the same globally scalable datastore used for Google's own monitoring. Because Managed Service for Prometheus uses the same backend and APIs as Cloud Monitoring, both Cloud Monitoring metrics and metrics ingested by Managed Service for Prometheus are queryable by using PromQL in Cloud Monitoring, Grafana, or any other tool that can read the Prometheus API.
In a standard Prometheus deployment, data collection, query evaluation, rule and alert evaluation, and data storage are all handled within a single Prometheus server. Managed Service for Prometheus splits responsibilities for these functions into multiple components:
- Data collection is handled by managed collectors, self-deployed collectors, the OpenTelemetry Collector, or the Ops Agent, which scrape local exporters and forward the collected data to Monarch. These collectors can be used for Kubernetes, serverless, and traditional VM workloads and can run anywhere, including other clouds and on-prem deployments.
- Query evaluation is handled by Monarch, which executes queries and unions results across all Google Cloud regions and across up to 3,500 Google Cloud projects.
- Rule and alert evaluation is handled either by writing PromQL alerts in Cloud Monitoring, which fully execute in the cloud, or by using locally run and locally configured rule-evaluator components, which execute rules and alerts against the global Monarch data store and forward any fired alerts to Prometheus Alertmanager.
- Data storage is handled by Monarch, which stores all Prometheus data for 24 months at no additional cost.
Grafana connects to the global Monarch data store instead of connecting to individual Prometheus servers. If you have Managed Service for Prometheus collectors configured in all your deployments, then a single Grafana instance gives you a unified view of all your metrics across all your clouds.
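The following sketch shows what that single-data-source setup can look like as a Grafana provisioning file. The project ID is a placeholder, and the data source still needs credentials for the Monitoring API, which the setup described in Query using Grafana (for example, Google's data source syncer or an authentication proxy) provides; this is not a complete configuration on its own.

```yaml
# Grafana data source provisioning sketch for Managed Service for Prometheus.
# MY_PROJECT_ID is a placeholder for the project that scopes your metrics.
apiVersion: 1
datasources:
  - name: Managed Service for Prometheus
    type: prometheus
    access: proxy
    # Prometheus-compatible query endpoint served by the Monitoring API.
    url: https://monitoring.googleapis.com/v1/projects/MY_PROJECT_ID/location/global/prometheus
    isDefault: true
```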
Data collection
You can use Managed Service for Prometheus in one of four modes: with managed data collection, with self-deployed data collection, with the OpenTelemetry Collector, or with the Ops Agent.
Managed Service for Prometheus offers an operator for managed data collection in Kubernetes environments. We recommend that you use managed collection; using it eliminates the complexity of deploying, scaling, sharding, configuring, and maintaining Prometheus servers. Managed collection is supported for both GKE and non-GKE Kubernetes environments.
With self-deployed data collection, you manage your Prometheus installation as you always have. The only difference from upstream Prometheus is that you run the Managed Service for Prometheus drop-in replacement binary instead of the upstream Prometheus binary.
The OpenTelemetry Collector can be used to scrape Prometheus exporters and send data to Managed Service for Prometheus. OpenTelemetry supports a single-agent strategy for all signals, where one collector can be used for metrics (including Prometheus metrics), logs, and traces in any environment.
You can configure the Ops Agent on any Compute Engine instance to scrape and send Prometheus metrics to the global datastore. Using an agent greatly simplifies VM discovery and eliminates the need to install, deploy, or configure Prometheus in VM environments.
If you have a Cloud Run service that writes Prometheus metrics or OTLP metrics, then you can use a sidecar and Managed Service for Prometheus to send the metrics to Cloud Monitoring.
- To collect Prometheus metrics from Cloud Run, use the Prometheus sidecar (a configuration sketch follows this list).
- To collect OTLP metrics from Cloud Run, use the OpenTelemetry sidecar.
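The following sketch shows the general shape of a Cloud Run service that runs the Prometheus sidecar alongside the application container. The image paths, service name, and port are illustrative assumptions; use the sidecar image and configuration given in the Prometheus sidecar documentation.

```yaml
# Cloud Run service sketch with a metrics sidecar (illustrative values only).
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service
spec:
  template:
    spec:
      containers:
        # Application container; exposes Prometheus metrics on a local port.
        - name: app
          image: us-docker.pkg.dev/MY_PROJECT_ID/my-repo/my-app:latest
          ports:
            - containerPort: 8080
        # Sidecar that scrapes the app and writes to Managed Service for Prometheus.
        # SIDECAR_IMAGE is a placeholder for the image from the sidecar documentation.
        - name: collector
          image: SIDECAR_IMAGE
```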
You can run managed, self-deployed, and OpenTelemetry collectors in on-prem deployments and on any cloud. Collectors running outside of Google Cloud send data to Monarch for long-term storage and global querying.
When choosing between collection options, consider the following:
Managed collection:
- Google's recommended approach for all Kubernetes environments.
- Deployed by using the GKE UI, the gcloud CLI, the kubectl CLI, or Terraform.
- Operation of Prometheus (generating scrape configurations, scaling ingestion, scoping rules to the right data, and so forth) is fully handled by the Kubernetes operator.
- Scraping and rules are configured by using lightweight custom resources (CRs); see the sketch after this list.
- Good for those who want a more hands-off, fully managed experience.
- Intuitive migration from prometheus-operator configs.
- Supports most current Prometheus use cases.
- Full assistance from Google Cloud technical support.
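As a minimal sketch of those custom resources, the following PodMonitoring resource tells the managed collectors which pods to scrape; the name, namespace, selector labels, port name, and interval are placeholders to adapt to your workload.

```yaml
# PodMonitoring resource for managed collection (values are placeholders).
apiVersion: monitoring.googleapis.com/v1
kind: PodMonitoring
metadata:
  name: example-app
  namespace: default
spec:
  # Scrape pods in this namespace whose labels match the selector.
  selector:
    matchLabels:
      app: example-app
  endpoints:
    - port: metrics   # named container port that serves /metrics
      interval: 30s
```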
Self-deployed collection:
- A drop-in replacement for the upstream Prometheus binary.
- You can use your preferred deployment mechanism, like prometheus-operator or manual deployment.
- Scraping is configured by using your preferred methods, like annotations or prometheus-operator; the configuration format is unchanged from upstream Prometheus, as the sketch after this list shows.
- Scaling and functional sharding is done manually.
- Good for quick integration into more complex existing setups. You can reuse your existing configs and run upstream Prometheus and Managed Service for Prometheus side by side.
- Rules and alerts typically run within individual Prometheus servers, which might be preferable for edge deployments as local rule evaluation does not incur any network traffic.
- Might support long-tail use cases that aren't yet supported by managed collection, such as local aggregations to reduce cardinality.
- Limited assistance from Google Cloud technical support.
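Because the self-deployed binary is a drop-in replacement, an existing Prometheus configuration file keeps working unchanged. The following minimal scrape configuration is a sketch with a placeholder job name and target; it is the same format you would use with upstream Prometheus.

```yaml
# prometheus.yml sketch; the format is identical to upstream Prometheus
# (job name and target are placeholders).
global:
  scrape_interval: 30s
scrape_configs:
  - job_name: example-app
    static_configs:
      - targets: ['localhost:8080']
```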
The OpenTelemetry Collector:
- A single collector that can collect metrics (including Prometheus metrics) from any environment and send them to any compatible backend. Can also be used to collect logs and traces and send them to any compatible backend, including Cloud Logging and Cloud Trace.
- Deployed in any compute or Kubernetes environment either manually or by using Terraform. Can be used to send metrics from stateless environments such as Cloud Run.
- Scraping is configured using Prometheus-like configs in the collector's Prometheus receiver (see the sketch after this list).
- Supports push-based metric collection patterns.
- Metadata is injected from any cloud using resource detector processors.
- Rules and alerts can be executed using a Cloud Monitoring alerting policy or the stand-alone rule evaluator.
- Best supports cross-signal workflows and features such as exemplars.
- Limited assistance from Google Cloud technical support.
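The following collector configuration is a sketch of that pipeline, assuming a collector build (such as the contrib distribution) that includes the prometheus receiver, the resourcedetection processor, and the googlemanagedprometheus exporter; the scrape target is a placeholder.

```yaml
# OpenTelemetry Collector configuration sketch for Managed Service for Prometheus.
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: example-app
          static_configs:
            - targets: ['localhost:8080']   # placeholder target
processors:
  # Detect and attach Google Cloud resource attributes (project, location, and so on).
  resourcedetection:
    detectors: [gcp]
exporters:
  googlemanagedprometheus: {}
service:
  pipelines:
    metrics:
      receivers: [prometheus]
      processors: [resourcedetection]
      exporters: [googlemanagedprometheus]
```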
The Ops Agent:
- The easiest way to collect and send Prometheus metric data originating from Compute Engine environments, including both Linux and Windows distros.
- Deployed by using the gcloud CLI, the Compute Engine UI,or Terraform.
- Scraping is configured using Prometheus-like configs in the Agent's Prometheus receiver, powered by OpenTelemetry (see the sketch after this list).
- Rules and alerts can be executed using Cloud Monitoring or the stand-alone rule evaluator.
- Comes bundled with optional Logging agents and process metrics.
- Full assistance from Google Cloud technical support.
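The following sketch shows the general shape of an Ops Agent user configuration (typically /etc/google-cloud-ops-agent/config.yaml) that adds a Prometheus receiver; the receiver name, job name, and target are placeholders, so check the Ops Agent documentation for the exact schema on your agent version.

```yaml
# Ops Agent configuration sketch with a Prometheus receiver (placeholder values).
metrics:
  receivers:
    prometheus:
      type: prometheus
      config:
        scrape_configs:
          - job_name: example-app
            static_configs:
              - targets: ['localhost:8080']
  service:
    pipelines:
      prometheus_pipeline:
        receivers: [prometheus]
```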
To get started, see Get started with managed collection, Get started with self-deployed collection, Get started with the OpenTelemetry Collector, or Get started with the Ops Agent.
If you use the managed service outside of Google Kubernetes Engine or Google Cloud, some additional configuration might be necessary; see Run managed collection outside of Google Kubernetes Engine, Run self-deployed collection outside of Google Cloud, or Add OpenTelemetry processors.
Query evaluation
Managed Service for Prometheus supports any query UI that can call the Prometheus query API, including Grafana and the Cloud Monitoring UI. Existing Grafana dashboards continue to work when switching from local Prometheus to Managed Service for Prometheus, and you can continue using PromQL found in popular open-source repositories and on community forums.
You can use PromQL to query over 6,500 free metrics in Cloud Monitoring, even without sending data to Managed Service for Prometheus. You can also use PromQL to query free Kubernetes metrics, custom metrics, and log-based metrics.
For information on how to configure Grafana to query Managed Service for Prometheus data, see Query using Grafana.
For information on how to query Cloud Monitoring metrics using PromQL, see PromQL in Cloud Monitoring.
Rule and alert evaluation
Managed Service for Prometheus provides both a fully cloud-based alerting pipeline and a stand-alone rule evaluator, both of which evaluate rules against all Monarch data accessible in a metrics scope. Evaluating rules against a multi-project metrics scope eliminates the need to co-locate all data of interest on a single Prometheus server or within a single Google Cloud project, and it lets you set IAM permissions on groups of projects.
Because all rule evaluation options accept the standard Prometheus rule_files format, you can easily migrate to Managed Service for Prometheus by copy-pasting existing rules or by copy-pasting rules found in popular open-source repositories. If you use self-deployed collectors, you can continue to evaluate recording rules locally in your collectors. The results of recording and alerting rules are stored in Monarch, just like directly collected metric data. You can also migrate your Prometheus alerting rules to PromQL-based alerting policies in Cloud Monitoring.
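As an illustration of that portability, the rules file below uses the standard rule_files format with one recording rule and one alerting rule; the metric names and thresholds are placeholders. The same group could be pasted into a managed-collection Rules resource, evaluated by a self-deployed server or the stand-alone rule evaluator, or its expressions reused in a PromQL-based Cloud Monitoring alerting policy.

```yaml
# Standard Prometheus rules file (metric names and thresholds are illustrative).
groups:
  - name: example-rules
    interval: 30s
    rules:
      # Recording rule: per-job request rate over 5 minutes.
      - record: job:http_requests:rate5m
        expr: sum by (job) (rate(http_requests_total[5m]))
      # Alerting rule: fire when the 5xx ratio stays above 5% for 10 minutes.
      - alert: HighErrorRate
        expr: |
          sum by (job) (rate(http_requests_total{code=~"5.."}[5m]))
            / sum by (job) (rate(http_requests_total[5m])) > 0.05
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Error rate above 5% for {{ $labels.job }}"
```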
For alert evaluation with Cloud Monitoring, see PromQL alerts in Cloud Monitoring.
For rule evaluation with managed collection, see Managed rule evaluation and alerting.
For rule evaluation with self-deployed collection, the OpenTelemetry Collector, and the Ops Agent, see Self-deployed rule evaluation and alerting.
For information on reducing cardinality using recording rules on self-deployed collectors, see Cost controls and attribution.
Data storage
All Managed Service for Prometheus data is stored for 24 months at no additional cost.
Managed Service for Prometheus supports a minimum scrape interval of 5 seconds. Data is stored at full granularity for 1 week, then is downsampled to 1-minute points for the next 5 weeks, then is downsampled to 10-minute points and stored for the remainder of the retention period.
Managed Service for Prometheus has no limit on the number of active time series or total time series.
For more information, see Quotas and limits within the Cloud Monitoring documentation.
Billing and quotas
Managed Service for Prometheus is a Google Cloud product, and billing and usage quotas apply.
Billing
Billing for the service is based primarily on the number of metric samples ingested into storage. There is also a nominal charge for read API calls. Managed Service for Prometheus does not charge for storage or retention of metric data.
- For current pricing, see the Cloud Monitoring sections of the Google Cloud Observability pricing page.
- To estimate your bill based on your expected number of time series or your expected samples per second, see the Cloud Operations tab within the Google Cloud Pricing Calculator.
- For tips on how to lower your bill or determine the sources of high costs, see Cost controls and attribution.
- For information about the rationale for the pricing model, see Optimize costs for Google Cloud Managed Service for Prometheus.
- For pricing examples, see Metric data charged by samples ingested.
Quotas
Managed Service for Prometheus shares ingest and read quotas with Cloud Monitoring. The default ingest quota is 500 QPS per project with up to 200 samples in a single call, equivalent to 100,000 samples per second. The default read quota is 100 QPS per metrics scope.
You can increase these quotas to support your metric and query volumes. For information about managing quotas and requesting quota increases, see Working with quotas.
Terms of Service and compliance
Managed Service for Prometheus is part of Cloud Monitoring and therefore inherits certain agreements and certifications from Cloud Monitoring, including (but not limited to):
- The Google Cloud terms of service
- The Operations Service Level Agreement (SLA)
- US DISA and FedRAMP compliance levels
- VPC Service Controls (VPC-SC) support
What's next
- Get started with managed collection.
- Get started with self-deployed collection.
- Get started with the OpenTelemetry Collector.
- Get started with the Ops Agent.
- Use PromQL in Cloud Monitoring to query Prometheus metrics.
- Use Grafana to query Prometheus metrics.
- Query Cloud Monitoring metrics using PromQL.
- Read up on best practices and view architecture diagrams.