vLLM

This document describes how to configure your Google Kubernetes Engine deployment so that you can use Google Cloud Managed Service for Prometheus to collect metrics from vLLM. This document shows you how to do the following:

  • Enable automatic application monitoring for vLLM, or set up vLLM manually to report metrics.
  • Access a predefined dashboard in Cloud Monitoring to view the metrics.

These instructions apply only if you are using managed collection with Managed Service for Prometheus. If you are using self-deployed collection, then see the vLLM documentation for installation information.

These instructions are provided as an example and are expected to work in most Kubernetes environments. If you are having trouble installing an application or exporter due to restrictive security or organizational policies, then we recommend you consult open-source documentation for support.

For information about vLLM, see vLLM. For information about setting up vLLM on Google Kubernetes Engine, see the GKE guide for vLLM.

Prerequisites

To collect metrics from vLLM by using Managed Service for Prometheus and managed collection, your deployment must meet the following requirements:

  • Your cluster must be running Google Kubernetes Engine version 1.28.15-gke.2475000 or later.
  • You must be running Managed Service for Prometheus with managed collection enabled. For more information, see Get started with managed collection.
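As a hedged sketch (not part of the official procedure), you can check the cluster version requirement from a shell by comparing version strings with `sort -V`. The gcloud command for fetching your cluster's version is shown as a comment; the cluster name, location, and the example current-version value are hypothetical:

```shell
# Fetch your cluster's control-plane version with gcloud (names are examples):
#   gcloud container clusters describe my-cluster --location us-central1-a \
#     --format='value(currentMasterVersion)'
MIN="1.28.15-gke.2475000"
CURRENT="1.30.5-gke.1014001"   # hypothetical example; substitute your cluster's version

# sort -V orders version strings numerically; if MIN sorts first (or the two
# are equal), the cluster meets the requirement.
if [ "$(printf '%s\n%s\n' "$MIN" "$CURRENT" | sort -V | head -n1)" = "$MIN" ]; then
  echo "cluster version meets the minimum"
else
  echo "cluster upgrade required"
fi
```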

vLLM exposes Prometheus-format metrics automatically; you do not have to install a separate exporter. To verify that vLLM is emitting metrics on the expected endpoints, do the following:

  1. Set up port forwarding by using the following command:
    kubectl -n NAMESPACE_NAME port-forward POD_NAME 8000
  2. Access the endpoint localhost:8000/metrics by using the browser or the curl utility in another terminal session.
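The /metrics endpoint serves the Prometheus exposition format. The following sketch filters that output for vLLM metric samples; the metric names shown are ones vLLM commonly reports, but the label values and sample values are illustrative, not output from a real server:

```shell
# Illustrative sample of the exposition format served at /metrics.
cat <<'EOF' > /tmp/vllm_metrics_sample.txt
# HELP vllm:num_requests_running Number of requests currently running on GPU.
# TYPE vllm:num_requests_running gauge
vllm:num_requests_running{model_name="google/gemma-7b"} 2.0
# HELP vllm:generation_tokens_total Number of generation tokens processed.
# TYPE vllm:generation_tokens_total counter
vllm:generation_tokens_total{model_name="google/gemma-7b"} 12345.0
EOF

# Against a live pod, pipe `curl -s localhost:8000/metrics` into the same
# filter. Keep only metric samples, dropping HELP/TYPE comment lines:
grep -E '^vllm:' /tmp/vllm_metrics_sample.txt
```

If the same filter prints nothing against a live pod, confirm that the pod serves metrics on port 8000 and that the port-forward session is still active.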

Use automatic application monitoring

vLLM supports the use of automatic application monitoring. When you use automatic application monitoring, Google Kubernetes Engine does the following:

  • Detects deployed instances of vLLM workloads.
  • Deploys a PodMonitoring resource for each detected workload instance.
  • Installs Cloud Monitoring dashboards for the vLLM metrics.

To use automatic application monitoring, you must enable the feature on your GKE cluster. You can use the Google Cloud console, the Google Cloud CLI (version 492.0.0 or later), or the GKE API. For more information, see Enable automatic application monitoring.

Define a PodMonitoring resource

For target discovery, the Managed Service for Prometheus Operator requires a PodMonitoring resource that corresponds to vLLM in the same namespace.

You can use the following PodMonitoring configuration:

# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: monitoring.googleapis.com/v1
kind: PodMonitoring
metadata:
  name: vllm
  labels:
    app.kubernetes.io/name: vllm
    app.kubernetes.io/part-of: google-cloud-managed-prometheus
spec:
  endpoints:
  - port: 8000
    scheme: http
    interval: 30s
    path: /metrics
  selector:
    matchLabels:
      app: vllm-gemma-server
Ensure that the values of the port and matchLabels fields match those of the vLLM pods you want to monitor.

To apply configuration changes from a local file, run the following command:

kubectl apply -n NAMESPACE_NAME -f FILE_NAME
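As an end-to-end sketch, you can write the PodMonitoring manifest from this page to a local file and then apply it. The namespace vllm and the temporary filename are hypothetical, and the kubectl apply step is shown as a comment because it requires access to your cluster:

```shell
# Write the PodMonitoring manifest to a local file (filename is an example).
cat <<'EOF' > /tmp/vllm-podmonitoring.yaml
apiVersion: monitoring.googleapis.com/v1
kind: PodMonitoring
metadata:
  name: vllm
  labels:
    app.kubernetes.io/name: vllm
    app.kubernetes.io/part-of: google-cloud-managed-prometheus
spec:
  endpoints:
  - port: 8000
    scheme: http
    interval: 30s
    path: /metrics
  selector:
    matchLabels:
      app: vllm-gemma-server
EOF

# Apply against your cluster (requires kubectl configured for the cluster;
# "vllm" is an example namespace):
#   kubectl apply -n vllm -f /tmp/vllm-podmonitoring.yaml

# Local sanity check that the resource kind landed in the file:
grep -c 'PodMonitoring' /tmp/vllm-podmonitoring.yaml   # prints 1
```

Remember to adjust the selector's matchLabels value if your vLLM pods use a label other than app: vllm-gemma-server.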

You can also use Terraform to manage your configurations.

Verify the configuration

You can use Metrics Explorer to verify that you correctly configured vLLM. It might take one or two minutes for Cloud Monitoring to ingest your metrics.

To verify that the metrics are ingested, do the following:

  1. In the Google Cloud console, go to the Metrics explorer page:

    Go to Metrics explorer

    If you use the search bar to find this page, then select the result whose subheading is Monitoring.

  2. In the toolbar of the query-builder pane, select the button whose name is either MQL or PromQL.
  3. Verify that PromQL is selected in the Language toggle. The language toggle is in the same toolbar that lets you format your query.
  4. Enter and run the following query:
    up{job="vllm", cluster="CLUSTER_NAME", namespace="NAMESPACE_NAME"}

View dashboards

The Cloud Monitoring integration includes the vLLM Prometheus Overview dashboard. Dashboards are automatically installed when you configure the integration. You can also view static previews of dashboards without installing the integration.

To view an installed dashboard, do the following:

  1. In the Google Cloud console, go to the Dashboards page:

    Go to Dashboards

    If you use the search bar to find this page, then select the result whose subheading is Monitoring.

  2. Select the Dashboard List tab.
  3. Choose the Integrations category.
  4. Click the name of the dashboard, for example, vLLM Prometheus Overview.

To view a static preview of the dashboard, do the following:

  1. In the Google Cloud console, go to the Integrations page:

    Go to Integrations

    If you use the search bar to find this page, then select the result whose subheading is Monitoring.

  2. Click the Kubernetes Engine deployment-platform filter.
  3. Locate the vLLM integration and click View Details.
  4. Select the Dashboards tab.

Troubleshooting

For information about troubleshooting metric ingestion problems, see Problems with collection from exporters in Troubleshooting ingestion-side problems.

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2026-02-18 UTC.