Monitor Pub/Sub in Cloud Monitoring
You can use the Google Cloud console or the Cloud Monitoring API to monitor Pub/Sub.
Key Point: Learn how to view or create a monitoring dashboard and how to view a metric in Cloud Monitoring. Also, learn about the various metrics you can use to monitor your topics and subscriptions.
This document shows you how to monitor your Pub/Sub usage in the Google Cloud console using Monitoring.
If you want to view metrics from other Google Cloud resources in addition to Pub/Sub metrics, use Monitoring.
Otherwise, you can use the monitoring dashboards provided within Pub/Sub. See Monitor topics and Monitor subscriptions.
For best practices about using metrics in your autoscaling, see Best practices for using Pub/Sub metrics as a scaling signal.
Before you begin
Before you use Monitoring, ensure that you've prepared the following:
A Cloud Billing account
A Pub/Sub project with billing enabled
One way to ensure that you've obtained both is to complete the Quickstart using the Cloud console.
View an existing dashboard
A dashboard lets you view and analyze data from different sources in the same context. Google Cloud provides both predefined and custom dashboards. For example, you can view a predefined Pub/Sub dashboard or create a custom dashboard that displays metric data, alerting policies, and log entries related to Pub/Sub.
To monitor your Pub/Sub project by using Cloud Monitoring, perform the following steps:
In the Google Cloud console, go to the Monitoring page.
Select the name of your project if it is not already selected at the top of the page.
Click Dashboards from the navigation menu.
In the Dashboards overview page, create a new dashboard or select the existing Pub/Sub dashboard.
To search for the existing Pub/Sub dashboard, in the filter for All Dashboards, select the Name property and enter Pub/Sub.
For more information on how to create, edit, and manage a custom dashboard, see Manage custom dashboards.
View a single Pub/Sub metric
To view a single Pub/Sub metric by using the Google Cloud console, perform the following steps:
In the Google Cloud console, go to the Monitoring page.
In the navigation pane, select Metrics explorer.
In the Configuration section, click Select a metric.
In the filter, enter Pub/Sub. In Active resources, select Pub/Sub Subscription or Pub/Sub Topic.
Drill down to a specific metric and click Apply.
The page for a specific metric opens.
You can learn more about the monitoring dashboard by reading the Cloud Monitoring documentation.
View Pub/Sub metrics and resource types
To see what metrics Pub/Sub reports to Cloud Monitoring, see the Pub/Sub metrics list in the Cloud Monitoring documentation.
To see the details for the pubsub_topic, pubsub_subscription, or pubsub_snapshot monitored resource types, see Monitored resource types in the Cloud Monitoring documentation.
Access the PromQL editor
Metrics Explorer is an interface within Cloud Monitoring designed for exploring and visualizing your metrics data. Within Metrics Explorer, you can use Prometheus Query Language (PromQL) to query and analyze your Pub/Sub metrics.
To access the code editor and query Cloud Monitoring metrics with PromQL in Metrics Explorer, see Use the code editor for PromQL.
For example, you can input a PromQL query to monitor the count of messages sent to a specific subscription over a rolling 1-hour period:
```promql
sum(
  increase(
    {
      "__name__"="pubsub.googleapis.com/subscription/sent_message_count",
      "monitored_resource"="pubsub_subscription",
      "project_id"="your-project-id",
      "subscription_id"="your-subscription-id"
    }[1h]
  )
)
```
Monitor quota usage
For a given project, you can use the IAM & Admin Quotas dashboard to view current quotas and usage.
You can view your historical quota usage by using the following metrics:
- serviceruntime.googleapis.com/quota/rate/net_usage
- serviceruntime.googleapis.com/quota/limit
These metrics use the consumer_quota monitored resource type. For more quota-related metrics, see the Metrics list.
For example, the following PromQL query creates a chart with the fraction of publisher quota being used in each region:
```promql
sum by (quota_metric, location) (
  rate(
    {
      "__name__"="serviceruntime.googleapis.com/quota/rate/net_usage",
      "monitored_resource"="consumer_quota",
      "service"="pubsub.googleapis.com",
      "quota_metric"="pubsub.googleapis.com/regionalpublisher"
    }[${__interval}]
  )
)
/
(
  max by (quota_metric, location) (
    max_over_time(
      {
        "__name__"="serviceruntime.googleapis.com/quota/limit",
        "monitored_resource"="consumer_quota",
        "service"="pubsub.googleapis.com",
        "quota_metric"="pubsub.googleapis.com/regionalpublisher"
      }[${__interval}]
    )
  ) / 60
)
```
If you anticipate your usage exceeding the default quota limits, create alerting policies for all the relevant quotas. These alerts fire when your usage reaches some fraction of the limit. For example, the following PromQL query triggers an alerting policy when any Pub/Sub quota exceeds 80% usage:
```promql
sum by (quota_metric, location) (
  increase(
    {
      "__name__"="serviceruntime.googleapis.com/quota/rate/net_usage",
      "monitored_resource"="consumer_quota",
      "service"="pubsub.googleapis.com"
    }[1m]
  )
)
/
max by (quota_metric, location) (
  max_over_time(
    {
      "__name__"="serviceruntime.googleapis.com/quota/limit",
      "monitored_resource"="consumer_quota",
      "service"="pubsub.googleapis.com"
    }[1m]
  )
)
> 0.8
```
For more customized monitoring and alerting on quota metrics, see Using quota metrics.
See Quotas and limits for more information about quotas.
Maintain a healthy subscription
To maintain a healthy subscription, you can monitor several subscription properties using Pub/Sub-provided metrics. For example, you can monitor the volume of unacknowledged messages, the expiration of message acknowledgment deadlines, and so on. You can also check whether your subscription is healthy enough to achieve a low message delivery latency.
Refer to the next sections to get more details about the specific metrics.
Monitor message backlog
To ensure that your subscribers are keeping up with the flow of messages, create a dashboard. The dashboard can show the following backlog metrics, aggregated by resource, for all your subscriptions:
- Unacknowledged messages (subscription/num_unacked_messages_by_region) to see the number of unacknowledged messages.
- Oldest unacknowledged message age (subscription/oldest_unacked_message_age_by_region) to see the age of the oldest unacknowledged message in the backlog of the subscription.
- Delivery latency health score (subscription/delivery_latency_health_score) to check the overall subscription health in relation to delivery latency. For more information about this metric, see the relevant section of this document.
Create alerting policies that trigger when these values are outside of the acceptable range in the context of your system. For instance, the absolute number of unacknowledged messages is not necessarily meaningful. A backlog of a million messages might be acceptable for a million message-per-second subscription, but unacceptable for a one message-per-second subscription.
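As a sketch of such an alerting condition, the following PromQL query fires when the oldest unacknowledged message in a subscription grows older than a chosen threshold. The $SUBSCRIPTION placeholder and the 600-second threshold are illustrative assumptions; tune both to your own system:

```promql
# Alert when the oldest unacked message exceeds 600 seconds.
# The threshold is illustrative; pick one appropriate for your workload.
max by (subscription_id) (
  max_over_time(
    {
      "__name__"="pubsub.googleapis.com/subscription/oldest_unacked_message_age_by_region",
      "monitored_resource"="pubsub_subscription",
      "subscription_id"="$SUBSCRIPTION"
    }[${__interval}]
  )
)
> 600
```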
Note: Backlog metrics might have gaps in values for up to several minutes.
Common backlog issues
| Symptoms | Problem | Solutions |
|---|---|---|
| Both the oldest_unacked_message_age_by_region and num_unacked_messages_by_region are growing in tandem. | Subscribers are not keeping up with the message volume. | |
| A steady, small backlog size combined with a steadily growing oldest_unacked_message_age_by_region suggests that a few messages cannot be processed. | Stuck messages | |
| The oldest_unacked_message_age_by_region exceeds the subscription's message retention duration. | Permanent data loss | |
Monitor delivery latency health
In Pub/Sub, delivery latency is the time it takes for a published message to be delivered to a subscriber. If your message backlog is increasing, you can use the Delivery latency health score (subscription/delivery_latency_health_score) to check which factors are contributing to an increased latency.
This metric measures the health of a single subscription over a rolling 10-minute window. The metric provides insight into the following criteria, which are necessary for a subscription to achieve consistent low latency:
- Negligible seek requests.
- Negligible negatively acknowledged (nacked) messages.
- Negligible expired message acknowledgment deadlines.
- Consistent acknowledgment latency of less than 30 seconds.
- Consistent low utilization, meaning that the subscription consistently has adequate capacity to process new messages.
The Delivery latency health score metric reports a score of either 0 or 1 for each of the specified criteria. A score of 1 denotes a healthy state and a score of 0 denotes an unhealthy state.
Seek requests: If the subscription had any seek requests in the last 10 minutes, the score is set to 0. Seeking a subscription might cause old messages to be replayed long after they were first published, giving them an increased delivery latency.
Negatively acknowledged (nacked) messages: If the subscription had any negative acknowledgment (nack) requests in the last 10 minutes, the score is set to 0. A negative acknowledgment causes a message to be redelivered with an increased delivery latency.
Expired acknowledgment deadlines: If the subscription had any expired acknowledgment deadlines in the last 10 minutes, the score is set to 0. Messages whose acknowledgment deadline expired are redelivered with an increased delivery latency.
Acknowledgment latencies: If the 99.9th percentile of all acknowledgment latencies over the past 10 minutes was ever greater than 30 seconds, the score is set to 0. A high acknowledgment latency is a sign that a subscriber client is taking an abnormally long time to process a message, which could indicate a bug or resource constraints on the subscriber client side.
Low utilization: Utilization is calculated differently for each subscription type.
StreamingPull: If you do not have enough streams open, the score is set to 0. Open more streams to ensure you have adequate capacity for new messages.
Push: If you have too many messages outstanding to your push endpoint, the score is set to 0. Add more capacity to your push endpoint so you have capacity for new messages.
Pull: If you do not have enough outstanding pull requests, the score is set to 0. Open more concurrent pull requests to ensure you're ready to receive new messages.
To view the metric, in Metrics explorer, select the Delivery latency health score metric for the Pub/Sub subscription resource type. Add a filter to select just one subscription at a time. Select the Stacked area chart and point to a specific time to check the criteria scores for the subscription at that point in time.
The following is a screenshot of the metric plotted for a one-hour period using a stacked area chart. The combined health score goes up to 5 at 4:15 AM, with a score of 1 for each criterion. Later, the combined score decreases to 4 at 4:20 AM, when the utilization score drops down to 0.
[Screenshot: delivery latency health score plotted as a stacked area chart]
PromQL provides an expressive, text-based interface to Cloud Monitoring time-series data. The following PromQL query creates a chart to measure the delivery latency health score for a subscription.
```promql
sum_over_time(
  {
    "__name__"="pubsub.googleapis.com/subscription/delivery_latency_health_score",
    "monitored_resource"="pubsub_subscription",
    "subscription_id"="$SUBSCRIPTION"
  }[${__interval}]
)
```
Monitor acknowledgment deadline expiration
To reduce message delivery latency, Pub/Sub allows subscriber clients a limited amount of time to acknowledge (ack) a given message. This time period is known as the ack deadline. If your subscribers take too long to acknowledge messages, the messages are redelivered, and the subscribers see duplicate messages. This redelivery can happen for various reasons:
- Your subscribers are under-provisioned (you need more threads or machines).
- Each message takes longer to process than the message acknowledgment deadline. Cloud Client Libraries generally extend the deadline for individual messages up to a configurable maximum. However, a maximum extension deadline is also in effect for the libraries.
- Some messages consistently crash the client.
You can measure the rate at which subscribers miss the ack deadline. The specific metric depends on the subscription type:
- Pull and StreamingPull: subscription/expired_ack_deadlines_count
- Push: subscription/push_request_count filtered by response_code != "success"
Excessive ack deadline expiration rates can result in costly inefficiencies in your system. You pay for every redelivery and for attempting to process each message repeatedly. Conversely, a small expiration rate (for example, 0.1–1%) might be healthy.
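One way to track the expiration rate as a fraction, sketched here for a pull or StreamingPull subscription ($SUBSCRIPTION is a placeholder to replace), is to divide the rate of expired deadlines by the rate of acknowledged messages:

```promql
# Fraction of ack deadlines that expire, relative to acked messages.
rate(
  {
    "__name__"="pubsub.googleapis.com/subscription/expired_ack_deadlines_count",
    "monitored_resource"="pubsub_subscription",
    "subscription_id"="$SUBSCRIPTION"
  }[${__interval}]
)
/
rate(
  {
    "__name__"="pubsub.googleapis.com/subscription/ack_message_count",
    "monitored_resource"="pubsub_subscription",
    "subscription_id"="$SUBSCRIPTION"
  }[${__interval}]
)
```

A value consistently above a few percent suggests that subscribers are missing deadlines often enough to warrant investigation.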
Monitor message throughput
Pull and StreamingPull subscribers might receive batches of messages in each pull response; push subscriptions receive a single message in each push request. You can monitor the batch message throughput being processed by your subscribers with these metrics:
- Pull: subscription/pull_request_count (note that this metric may also include pull requests that returned with no messages)
- StreamingPull: subscription/streaming_pull_response_count
You can monitor the individual or unbatched message throughput being processed by your subscribers with the subscription/sent_message_count metric filtered by the delivery_type label.
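For example, the following query (a sketch; $SUBSCRIPTION is a placeholder) breaks the per-message send rate down by the delivery_type label, so you can compare delivery mechanisms side by side:

```promql
# Per-message send rate, split by delivery type.
sum by (delivery_type) (
  rate(
    {
      "__name__"="pubsub.googleapis.com/subscription/sent_message_count",
      "monitored_resource"="pubsub_subscription",
      "subscription_id"="$SUBSCRIPTION"
    }[${__interval}]
  )
)
```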
The following PromQL query gives you a time-series chart showing the total number of messages sent to a specific Pub/Sub subscription over a rolling 10-minute period. Replace the placeholder values for $PROJECT_NAME and $SUBSCRIPTION_NAME with your actual project and subscription identifiers.
```promql
sum(
  increase(
    {
      "__name__"="pubsub.googleapis.com/subscription/sent_message_count",
      "monitored_resource"="pubsub_subscription",
      "project_id"="$PROJECT_NAME",
      "subscription_id"="$SUBSCRIPTION_NAME"
    }[10m]
  )
)
```
Monitor push subscriptions
For push subscriptions, monitor these metrics:
subscription/push_request_count
Group the metric by response_code and subscription_id. Since Pub/Sub push subscriptions use response codes as implicit message acknowledgments, it's important to monitor push request response codes. Because push subscriptions exponentially back off when they encounter timeouts or errors, your backlog can grow quickly based on how your endpoint responds.
Consider setting an alert for high error rates, since these rates lead to slow delivery and a growing backlog. You can create a metric filtered by response class. However, push request counts are likely to be more useful as a tool for investigating growing backlog size and age.
subscription/num_outstanding_messages
Pub/Sub generally limits the number of outstanding messages. Aim for fewer than 1,000 outstanding messages in most situations. After the throughput achieves a rate on the order of 10,000 messages per second, the service adjusts the limit for the number of outstanding messages in increments of 1,000. No specific guarantees are made beyond the maximum value, so 1,000 outstanding messages is a good guide.
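To watch how close a push subscription gets to this limit, you can chart the peak of the metric directly. The following query is a sketch; $SUBSCRIPTION is a placeholder to replace:

```promql
# Peak outstanding messages for one push subscription.
max by (subscription_id) (
  max_over_time(
    {
      "__name__"="pubsub.googleapis.com/subscription/num_outstanding_messages",
      "monitored_resource"="pubsub_subscription",
      "subscription_id"="$SUBSCRIPTION"
    }[${__interval}]
  )
)
```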
subscription/push_request_latencies
This metric helps you understand the response latency distribution of the push endpoint. Because of the limit on the number of outstanding messages, endpoint latency affects subscription throughput. If it takes 100 milliseconds to process each message, your throughput limit is likely to be 10 messages per second.
To access higher outstanding message limits, push subscribers must acknowledge more than 99% of the messages that they receive.
You can calculate the fraction of messages that subscribers acknowledge by using PromQL. The following PromQL query creates a chart with the fraction of messages that subscribers acknowledge on a subscription:
```promql
rate(
  {
    "__name__"="pubsub.googleapis.com/subscription/push_request_count",
    "monitored_resource"="pubsub_subscription",
    "subscription_id"="$SUBSCRIPTION",
    "response_class"="ack"
  }[${__interval}]
)
/
rate(
  {
    "__name__"="pubsub.googleapis.com/subscription/push_request_count",
    "monitored_resource"="pubsub_subscription",
    "subscription_id"="$SUBSCRIPTION"
  }[${__interval}]
)
```
Monitor subscriptions with filters
If you configure a filter on a subscription, Pub/Sub automatically acknowledges messages that don't match the filter. You can monitor this auto-acknowledgment.
The backlog metrics only include messages that match the filter.
To monitor the rate of auto-acked messages that don't match the filter, use the subscription/ack_message_count metric with the delivery_type label set to filter.
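For example, this query (a sketch; replace the $SUBSCRIPTION placeholder) charts the rate at which the filter auto-acknowledges non-matching messages:

```promql
# Rate of messages auto-acked because they didn't match the filter.
rate(
  {
    "__name__"="pubsub.googleapis.com/subscription/ack_message_count",
    "monitored_resource"="pubsub_subscription",
    "subscription_id"="$SUBSCRIPTION",
    "delivery_type"="filter"
  }[${__interval}]
)
```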
To monitor the throughput and cost of auto-acked messages that don't match the filter, use the subscription/byte_cost metric with the operation_type label set to filter_drop. For more information about the fees for these messages, see the Pub/Sub pricing page.
Monitor subscriptions with SMTs
If your subscription contains an SMT that filters out messages, the backlog metrics include the filtered-out messages until the SMT actually runs on them. This means that the backlog might appear larger, and the oldest unacked message age might appear older, than what will be delivered to your subscriber. It is especially important to keep this in mind if you are using these metrics to autoscale subscribers.
Monitor forwarded undeliverable messages
To monitor undeliverable messages that Pub/Sub forwards to a dead-letter topic, use the subscription/dead_letter_message_count metric. This metric shows the number of undeliverable messages that Pub/Sub forwards from a subscription.
To verify that Pub/Sub is forwarding undeliverable messages, you can compare the subscription/dead_letter_message_count metric with the topic/send_request_count metric. Do the comparison for the dead-letter topic to which Pub/Sub forwards these messages.
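As a starting point, the following query (a sketch; $SUBSCRIPTION is a placeholder) charts the rate at which a subscription forwards undeliverable messages:

```promql
# Rate of undeliverable messages forwarded to the dead-letter topic.
sum by (subscription_id) (
  rate(
    {
      "__name__"="pubsub.googleapis.com/subscription/dead_letter_message_count",
      "monitored_resource"="pubsub_subscription",
      "subscription_id"="$SUBSCRIPTION"
    }[${__interval}]
  )
)
```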
You can also attach a subscription to the dead-letter topic and then monitor the forwarded undeliverable messages on this subscription using the following metrics:
- subscription/num_unacked_messages_by_region: the number of forwarded messages that have accumulated in the subscription
- subscription/oldest_unacked_message_age_by_region: the age of the oldest forwarded message in the subscription
Maintain a healthy publisher
The primary goal of a publisher is to persist message data quickly. Monitor this performance using topic/send_request_count, grouped by response_code. This metric gives you an indication of whether Pub/Sub is healthy and accepting requests.
A background rate of retryable errors (lower than 1%) is not a cause for concern, since most Cloud Client Libraries retry message failures. Investigate error rates that are greater than 1%. Because non-retryable codes are handled by your application (rather than by the client library), you should examine response codes. If your publisher application does not have a good way of signaling an unhealthy state, consider setting an alert on the topic/send_request_count metric.
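For instance, the following query (a sketch; $TOPIC is a placeholder) charts publish requests broken down by response code, which makes elevated error rates visible at a glance:

```promql
# Publish request rate, grouped by response code.
sum by (response_code) (
  rate(
    {
      "__name__"="pubsub.googleapis.com/topic/send_request_count",
      "monitored_resource"="pubsub_topic",
      "topic_id"="$TOPIC"
    }[${__interval}]
  )
)
```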
It's equally important to track failed publish requests in your publish client. While client libraries generally retry failed requests, they do not guarantee publication. Refer to Publishing messages for ways to detect permanent publish failures when using Cloud Client Libraries. At a minimum, your publisher application must log permanent publish errors. If you log those errors to Cloud Logging, you can set up a logs-based metric with an alerting policy.
Monitor message throughput
Publishers might send messages in batches. You can monitor the message throughput sent by your publishers with these metrics:
- topic/send_request_count: the volume of batch messages being sent by publishers.
- A count of topic/message_sizes: the volume of individual (unbatched) messages being sent by publishers.
To obtain a precise count of published messages, use the following PromQL query. This query retrieves the count of individual messages published to a specific Pub/Sub topic within defined time intervals. Replace the placeholder values for $PROJECT_NAME and $TOPIC_ID with your actual project and topic identifiers.
```promql
sum by (topic_id) (
  increase(
    {
      "__name__"="pubsub.googleapis.com/topic/message_sizes_count",
      "monitored_resource"="pubsub_topic",
      "project_id"="$PROJECT_NAME",
      "topic_id"="$TOPIC_ID"
    }[${__interval}]
  )
)
```
For better visualization, especially for daily metrics, consider the following:
View your data over a longer period to provide more context for daily trends.
Use bar charts to represent daily message counts.
What's next
To create an alert for a specific metric, see Managing metric-based alerting policies.
To learn more about using PromQL to build monitoring charts, see Use the code editor for PromQL.
To learn more about API resources for the Monitoring API, such as metrics, monitored resources, monitored-resource groups, and alerting policies, see API Resources.
Last updated 2026-02-19 UTC.