Prometheus-based Kubernetes Resource Recommendations


Kubernetes Resource Recommendations Based on Historical Data

Get recommendations based on your existing data in Prometheus/Coralogix/Thanos/Mimir and more!

Installation · How KRR works · Free KRR UI
Usage · Report Bug · Request Feature · Support


About The Project

Robusta KRR (Kubernetes Resource Recommender) is a CLI tool for optimizing resource allocation in Kubernetes clusters. It gathers pod usage data from Prometheus and recommends requests and limits for CPU and memory. This reduces costs and improves performance.

Auto-Apply Mode

New: Put right-sizing on auto-pilot by applying recommendations automatically. Request beta access.

Data Integrations

Used to send data to KRR

View instructions for: Prometheus, Thanos, Victoria Metrics, Google Managed Prometheus, Amazon Managed Prometheus, Azure Managed Prometheus, Coralogix, Grafana Cloud and Grafana Mimir

Reporting Integrations

Used to receive information from KRR

View instructions for: Seeing recommendations in a UI, Sending recommendations to Slack, Setting up KRR as a k9s plugin, Azure Blob Storage Export with Teams Notification

Features

  • No Agent Required: Run a CLI tool on your local machine for immediate results. (Or run in-cluster for weekly Slack reports.)
  • Prometheus Integration: Get recommendations based on the data you already have.
  • Explainability: Understand how recommendations were calculated with explanation graphs.
  • Extensible Strategies: Easily create and use your own strategies for calculating resource recommendations.
  • Free SaaS Platform: See why KRR recommends what it does by using the free Robusta SaaS platform.
  • Future Support: Upcoming versions will support custom resources (e.g. GPUs) and custom metrics.

How Much Can I Expect to Save with KRR?

According to a recent Sysdig study, on average, Kubernetes clusters have:

  • 69% unused CPU
  • 18% unused memory

By right-sizing your containers with KRR to reclaim that unused capacity, you can save up to 69% on cloud costs.
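As a rough back-of-the-envelope illustration of how unused requests translate into savings (the numbers below are hypothetical, not figures from the study):

```python
# Illustrative only: how unused CPU requests translate into potential savings.
requested_cpu_cores = 100   # hypothetical total CPU requested across the cluster
unused_fraction = 0.69      # 69% of requested CPU is never used

# Right-sizing lets requests shrink toward actual usage:
right_sized_cores = requested_cpu_cores * (1 - unused_fraction)
saved_cores = requested_cpu_cores - right_sized_cores

print(f"Requests could shrink from {requested_cpu_cores} to {right_sized_cores:.0f} cores")
print(f"Potential saving: {saved_cores:.0f} cores ({unused_fraction:.0%})")
```

Actual savings depend on how much of your cloud bill is driven by CPU requests.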

Read more about how KRR works.

Difference with Kubernetes VPA

| Feature 🛠️ | Robusta KRR 🚀 | Kubernetes VPA 🌐 |
|---|---|---|
| Resource Recommendations 💡 | ✅ CPU/Memory requests and limits | ✅ CPU/Memory requests and limits |
| Installation Location 🌍 | ✅ Not required to be installed inside the cluster; can be used on your own device, connected to a cluster | ❌ Must be installed inside the cluster |
| Workload Configuration 🔧 | ✅ No need to configure a VPA object for each workload | ❌ Requires VPA object configuration for each workload |
| Immediate Results ⚡ | ✅ Gets results immediately (given Prometheus is running) | ❌ Requires time to gather data and provide recommendations |
| Reporting 📊 | ✅ JSON, CSV, Markdown, Web UI, and more | ❌ Not supported |
| Extensibility 🔧 | ✅ Add your own strategies with a few lines of Python | ⚠️ Limited extensibility |
| Explainability 📖 | ✅ See graphs explaining the recommendations | ❌ Not supported |
| Custom Metrics 📏 | 🔄 Support in future versions | ❌ Not supported |
| Custom Resources 🎛️ | 🔄 Support in future versions (e.g., GPU) | ❌ Not supported |
| Autoscaling 🔀 | 🔄 Support in future versions | ✅ Automatic application of recommendations |
| Default History 🕒 | 14 days | 8 days |
| Supports HPA 🔥 | ✅ Enable using the --allow-hpa flag | ❌ Not supported |

Installation

Requirements

KRR requires Prometheus 2.26+, kube-state-metrics & cAdvisor.

Which metrics does KRR need?

No setup is required if you use kube-prometheus-stack or Robusta's Embedded Prometheus.

If you have a different setup, make sure the following metrics exist:

  • container_cpu_usage_seconds_total
  • container_memory_working_set_bytes
  • kube_replicaset_owner
  • kube_pod_owner
  • kube_pod_status_phase

Note: If any of the last three metrics is absent, KRR will still work, but it will only consider currently-running pods when calculating recommendations. Historic pods that no longer exist in the cluster will not be taken into consideration.
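To see why the ownership metrics matter, here is a minimal sketch (not KRR's actual code; pod names and label values are hypothetical) of how kube_pod_owner and kube_replicaset_owner let historic pods be attributed to the workload that owned them:

```python
# Sketch: attributing pods (including deleted ones) to their controlling workload.
pod_owner = {  # from kube_pod_owner: pod -> (owner_kind, owner_name)
    "api-7d9f-abc12": ("ReplicaSet", "api-7d9f"),
    "api-5b2c-def34": ("ReplicaSet", "api-5b2c"),  # pod from an old, deleted ReplicaSet
    "backup-29041": ("Job", "backup-29041"),
}
replicaset_owner = {  # from kube_replicaset_owner: replicaset -> deployment
    "api-7d9f": "api",
    "api-5b2c": "api",
}

def workload_for(pod: str) -> str:
    kind, owner = pod_owner[pod]
    if kind == "ReplicaSet":
        # Resolve one level further up, to the Deployment
        return replicaset_owner.get(owner, owner)
    return owner

# Both the current and the historic pod resolve to the same Deployment "api",
# so their usage history can be combined into one recommendation.
print(workload_for("api-7d9f-abc12"))
print(workload_for("api-5b2c-def34"))
```

Without these metrics, a pod from a deleted ReplicaSet cannot be tied back to its Deployment, which is why only currently-running pods would be considered.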

Installation Methods

Brew (Mac/Linux)
  1. Add our tap:
brew tap robusta-dev/homebrew-krr
  2. Install KRR:
brew install krr
  3. Check that the installation was successful:
krr --help
  4. Run KRR (first launch might take a little longer):
krr simple
Windows

You can install using brew (see above) on WSL2, or install from source (see below).

Docker image, binaries, and airgapped installation (offline environments)

You can download pre-built binaries from Releases or use the prebuilt Docker container. For example, the container for version 1.8.3 is:

us-central1-docker.pkg.dev/genuine-flight-317411/devel/krr:v1.8.3

We do not recommend installing KRR from source in airgapped environments due to the difficulty of installing Python dependencies offline. Use one of the above methods instead, and contact us (via Slack, GitHub issues, or email) if you need assistance.

In-Cluster

Apart from running KRR as a CLI tool, you can also run it inside your cluster. We suggest installing KRR via the Robusta Platform, which gives you a free UI with features like:

  • View the application usage history graphs on which recommendations are based
  • Get application, namespace and cluster level recommendations
  • Get YAML configuration to apply the suggested recommendations, and more

If you don't need to view results in a UI, you can also run KRR in-cluster as a Kubernetes Job:

kubectl apply -f https://raw.githubusercontent.com/robusta-dev/krr/refs/heads/main/docs/krr-in-cluster/krr-in-cluster-job.yaml
From Source
  1. Make sure you have Python 3.9 (or greater) installed
  2. Clone the repo:
git clone https://github.com/robusta-dev/krr
  3. Navigate to the project root directory (cd ./krr)
  4. Install requirements:
pip install -r requirements.txt
  5. Run the tool:
python krr.py --help

Note that a source installation is run as a Python script, while a brew installation provides the krr command. All examples above use krr ...; replace it with python krr.py ... if you installed from source.

Additional Options

Environment-Specific Instructions

Setup KRR for...

(back to top)

Trusting a custom Certificate Authority (CA) certificate:

If your Prometheus URL uses a certificate from a custom CA, base64-encode the certificate and store it in an environment variable named CERTIFICATE in order to trust it.
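For instance, one way to produce that base64 value (the PEM content below is a hypothetical placeholder, not a real certificate):

```python
# Sketch: base64-encode a custom CA certificate so its value can be placed in
# the CERTIFICATE environment variable.
import base64

def encode_ca_cert(pem_bytes: bytes) -> str:
    """Return the base64 encoding of a PEM certificate as ASCII text."""
    return base64.b64encode(pem_bytes).decode("ascii")

pem = b"-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n"
encoded = encode_ca_cert(pem)
print(encoded[:20], "...")
# Then, in your shell: export CERTIFICATE=<encoded value>
```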

Free KRR UI on Robusta SaaS

We highly recommend using the free Robusta SaaS platform. You can:

  • Understand individual app recommendations with app usage history

  • Sort and filter recommendations by namespace, priority, and more

  • Give devs a YAML snippet to fix the problems KRR finds

  • Analyze impact using KRR scan history

Usage

Basic usage
krr simple
Tweak the recommendation algorithm (strategy)

Most helpful flags:

  • --cpu-min sets the minimum recommended CPU value in millicores
  • --mem-min sets the minimum recommended memory value in MB
  • --history_duration sets the duration of the Prometheus history data to use (in hours)

More specific information on strategy settings can be found by running:

krr simple --help
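Conceptually, flags like --cpu-min and --mem-min act as floors on the computed recommendation. A minimal sketch (not KRR's actual implementation; the default floor values here are made up for illustration):

```python
# Sketch: applying minimum floors (like --cpu-min in millicores and
# --mem-min in MB) to a raw recommendation.
def apply_minimums(cpu_millicores: float, mem_mb: float,
                   cpu_min: float = 10.0, mem_min: float = 100.0) -> tuple:
    """Raise any recommendation below the configured floor up to that floor."""
    return max(cpu_millicores, cpu_min), max(mem_mb, mem_min)

print(apply_minimums(3.2, 512))  # tiny CPU usage is raised to the CPU floor
print(apply_minimums(250, 48))   # tiny memory usage is raised to the memory floor
```

Floors like these prevent near-idle workloads from receiving requests too small to schedule reliably.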
Giving an Explicit Prometheus URL

If your Prometheus is not auto-connecting, you can use kubectl port-forward to forward it manually.

For example, if you have a Prometheus Pod called kube-prometheus-st-prometheus-0, run this command to port-forward it:

kubectl port-forward pod/kube-prometheus-st-prometheus-0 9090

Then open another terminal and run KRR in it, giving an explicit Prometheus URL:

krr simple -p http://127.0.0.1:9090
Run on specific namespaces

List as many namespaces as you want with -n (in this case, default and ingress-nginx):

krr simple -n default -n ingress-nginx

The -n flag also supports regex matches like -n kube-.*. To use regexes, you must have permission to list namespaces in the target cluster.

krr simple -n default -n 'ingress-.*'
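Conceptually, regex support works by matching each pattern against the namespaces listed in the cluster, which is why list permission is needed. A minimal sketch (not KRR's actual code; namespace names are hypothetical):

```python
# Sketch: resolving namespace patterns like 'ingress-.*' against the
# namespaces listed in the cluster.
import re

def select_namespaces(patterns, all_namespaces):
    """Return the namespaces whose full name matches any given pattern."""
    regexes = [re.compile(f"^{p}$") for p in patterns]
    return [ns for ns in all_namespaces if any(r.match(ns) for r in regexes)]

cluster_namespaces = ["default", "ingress-nginx", "ingress-internal", "kube-system"]
print(select_namespaces(["default", "ingress-.*"], cluster_namespaces))
```

Anchoring the pattern with ^ and $ ensures a literal name like default matches only itself.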

See example ServiceAccount and RBAC permissions.

Run on workloads filtered by label

Use a label selector:

python krr.py simple --selector 'app.kubernetes.io/instance in (robusta, ingress-nginx)'
Group jobs by specific labels

Group jobs that have specific labels into GroupedJob objects for consolidated resource recommendations. This is useful for batch jobs, data processing pipelines, or any workload where you want to analyze resource usage across multiple related jobs.

krr simple --job-grouping-labels app,team

This will:

  • Group jobs that have either the app or team label (or both)
  • Create GroupedJob objects with names like app=frontend, team=backend, etc.
  • Provide resource recommendations for the entire group instead of individual jobs
  • Exclude jobs with the specified labels from the regular Job listing

You can specify multiple labels separated by commas:

krr simple --job-grouping-labels app,team,environment

Each job will be grouped by each label it has, so a job with app=api,team=backend will appear in both the app=api and team=backend groups.
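The grouping described above can be sketched as follows (not KRR's actual code; job names and labels are hypothetical):

```python
# Sketch: grouping jobs by each requested label, as with
# --job-grouping-labels app,team. A job lands in one group per matching label.
from collections import defaultdict

def group_jobs(jobs, grouping_labels):
    groups = defaultdict(list)
    for name, labels in jobs.items():
        for key in grouping_labels:
            if key in labels:
                groups[f"{key}={labels[key]}"].append(name)
    return dict(groups)

jobs = {
    "etl-1": {"app": "api", "team": "backend"},
    "etl-2": {"app": "api"},
    "adhoc": {"owner": "alice"},  # no grouping label: stays a regular Job
}
groups = group_jobs(jobs, ["app", "team"])
print(groups)  # etl-1 appears in both the app=api and team=backend groups
```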

Limiting how many jobs are included per group

Use --job-grouping-limit <N> to cap how many jobs are included per group (useful when there are many historical jobs).

krr simple --job-grouping-labels app,team --job-grouping-limit 3
  • Each label group will include at most N jobs (e.g., the first 3 returned by the API).
  • Other matching jobs beyond the limit are ignored for that group.
  • If not specified, the default limit is 500 jobs per group.
Override the kubectl context

By default, KRR runs in the current kubectl context. To run it against one or more other contexts:

krr simple -c my-cluster-1 -c my-cluster-2
Output formats for reporting (JSON, YAML, CSV, and more)

Currently KRR ships with a few formatters to represent the scan data:

  • table - a pretty CLI table used by default, powered by the Rich library
  • json
  • yaml
  • pprint - data representation from Python's pprint library
  • csv - export data to a CSV file in the current directory
  • csv-raw - CSV with the raw data used for calculation
  • html

To run a strategy with a selected formatter, add the -f flag. Usually this should be combined with --fileoutput <filename> to write clean output to a file without logs:

krr simple -f json --fileoutput krr-report.json

If you prefer, you can also use --logtostderr to get clean formatted output in one file and error logs in another:

krr simple --logtostderr -f json > result.json 2> logs-and-errors.log
Centralized Prometheus (multi-cluster)

See below on filtering output from a centralized Prometheus so it matches only one cluster.

Prometheus Authentication

KRR supports all known authentication schemes for Prometheus, VictoriaMetrics, Coralogix, and other Prometheus-compatible metric stores.

Refer to krr simple --help, and look at the flags --prometheus-url, --prometheus-auth-header, --prometheus-headers, --prometheus-ssl-enabled, --coralogix-token, and the various --eks-* flags.

If you need help, contact us on Slack, email, or by opening a GitHub issue.

Debug mode

If you want to see additional debug logs:
krr simple -v

(back to top)

How KRR works

Metrics Gathering

Robusta KRR uses the following Prometheus queries to gather usage data:

  • CPU Usage:

    sum(irate(container_cpu_usage_seconds_total{namespace="{object.namespace}", pod="{pod}", container="{object.container}"}[{step}]))
  • Memory Usage:

    sum(container_memory_working_set_bytes{job="kubelet", metrics_path="/metrics/cadvisor", image!="", namespace="{object.namespace}", pod="{pod}", container="{object.container}"})
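At query time, the placeholders ({object.namespace}, {pod}, {object.container}, {step}) are filled in per workload. A minimal sketch of building the CPU query for a hypothetical workload (not KRR's actual templating code):

```python
# Sketch: filling the CPU-usage query template for one container.
def cpu_query(namespace: str, pod: str, container: str, step: str = "5m") -> str:
    """Build the PromQL CPU-usage query for a single pod/container."""
    return (
        f'sum(irate(container_cpu_usage_seconds_total{{'
        f'namespace="{namespace}", pod="{pod}", container="{container}"'
        f'}}[{step}]))'
    )

q = cpu_query("default", "api-7d9f-abc12", "api")
print(q)
```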

Need to customize the metrics? Tell us and we'll add support.

Get a free breakdown of KRR recommendations in theRobusta SaaS.

Algorithm

By default, we use a simple strategy to calculate resource recommendations. It works as follows (the exact numbers can be customized via CLI arguments):

  • For CPU, we set a request at the 95th percentile with no limit. Meaning, in 95% of the cases, your CPU request will be sufficient. For the remaining 5%, we set no limit. This means your pod can burst and use any CPU available on the node - e.g. CPU that other pods requested but aren’t using right now.

  • For memory, we take the maximum value over the past week and add a 15% buffer.
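The strategy's math can be sketched as follows (a simplified illustration, not the robusta_krr implementation; sample values are hypothetical):

```python
# Sketch of the "simple" strategy: CPU request at the 95th percentile with no
# limit, memory at the observed maximum plus a 15% buffer.
import math

def percentile(samples, pct):
    """Nearest-rank percentile of a list of samples."""
    ordered = sorted(samples)
    idx = max(math.ceil(pct * len(ordered) / 100) - 1, 0)
    return ordered[idx]

def simple_recommendation(cpu_samples, memory_samples, cpu_pct=95, mem_buffer=0.15):
    return {
        "cpu_request": percentile(cpu_samples, cpu_pct),   # 95th percentile
        "cpu_limit": None,                                 # no limit: allow bursting
        "memory": max(memory_samples) * (1 + mem_buffer),  # max + 15% buffer
    }

rec = simple_recommendation(
    cpu_samples=[0.1] * 95 + [0.9] * 5,  # cores; mostly idle, occasional spikes
    memory_samples=[200, 350, 400],      # MiB
)
print(rec)
```

Note how the occasional CPU spikes above the 95th percentile do not inflate the request, while the memory recommendation always covers the observed peak.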

Prometheus connection

Read about how KRR tries to find the default Prometheus to connect to here.

(back to top)

Data Source Integrations

Prometheus, Victoria Metrics and Thanos auto-discovery

By default, KRR will try to auto-discover running Prometheus, Victoria Metrics, and Thanos instances. For discovering Prometheus it scans services for these labels:

"app=kube-prometheus-stack-prometheus"
"app=prometheus,component=server"
"app=prometheus-server"
"app=prometheus-operator-prometheus"
"app=rancher-monitoring-prometheus"
"app=prometheus-prometheus"

For Thanos, it scans for these labels:

"app.kubernetes.io/component=query,app.kubernetes.io/name=thanos"
"app.kubernetes.io/name=thanos-query"
"app=thanos-query"
"app=thanos-querier"

And for Victoria Metrics, it scans for the following labels:

"app.kubernetes.io/name=vmsingle"
"app.kubernetes.io/name=victoria-metrics-single"
"app.kubernetes.io/name=vmselect"
"app=vmselect"

If none of those labels match a service, you will get an error and will have to pass the working URL explicitly (using the -p flag).
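Conceptually, auto-discovery checks each service's labels against the known selectors. A minimal sketch with hypothetical services (only a subset of the Prometheus selectors above is shown; this is not KRR's discovery code):

```python
# Sketch: matching cluster services against known label selectors.
SELECTORS = [
    {"app": "kube-prometheus-stack-prometheus"},
    {"app": "prometheus", "component": "server"},
    {"app": "prometheus-server"},
]

def discover(services):
    """Return the first service whose labels satisfy any known selector."""
    for name, labels in services.items():
        for selector in SELECTORS:
            if all(labels.get(k) == v for k, v in selector.items()):
                return name
    return None

services = {
    "grafana": {"app": "grafana"},
    "prom": {"app": "prometheus", "component": "server"},
}
print(discover(services))
```

A multi-key selector like app=prometheus,component=server requires every key/value pair to be present on the service.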

(back to top)

Scanning with a Centralized Prometheus

If your Prometheus monitors multiple clusters, you must provide the label that identifies your cluster in Prometheus.

For example, if your cluster has the Prometheus labelcluster: "my-cluster-name", then run this command:

krr.py simple --prometheus-label cluster -l my-cluster-name

You may also need the-p flag to explicitly give Prometheus' URL.

Azure Managed Prometheus

For Azure managed Prometheus you need to generate an access token, which can be done by running the following command:

# If you are not logged in to Azure, uncomment the following line
# az login
AZURE_BEARER=$(az account get-access-token --resource=https://prometheus.monitor.azure.com --query accessToken --output tsv); echo $AZURE_BEARER

Then run the following command with PROMETHEUS_URL substituted for your Azure Managed Prometheus URL:

python krr.py simple --namespace default -p PROMETHEUS_URL --prometheus-auth-header "Bearer $AZURE_BEARER"

See here about configuring labels for centralized Prometheus.

(back to top)

Google Managed Prometheus (GMP)

Please find the detailed GMP usage instructions here.

(back to top)

Amazon Managed Prometheus

For Amazon Managed Prometheus, add your Prometheus link and the --eks-managed-prom flag; KRR will automatically use your AWS credentials.

python krr.py simple -p "https://aps-workspaces.REGION.amazonaws.com/workspaces/..." --eks-managed-prom

Additional optional parameters are:

--eks-profile-name PROFILE_NAME_HERE    # to specify the profile to use from your config
--eks-access-key ACCESS_KEY             # to specify your access key
--eks-secret-key SECRET_KEY             # to specify your secret key
--eks-service-name SERVICE_NAME         # to use a specific service name in the signature
--eks-managed-prom-region REGION_NAME   # to specify the region the Prometheus is in

See here about configuring labels for centralized Prometheus.

(back to top)

Coralogix Managed Prometheus

For Coralogix managed Prometheus, specify your Prometheus link and add the --coralogix_token flag with your Logs Query Key:

python krr.py simple -p "https://prom-api.coralogix..." --coralogix_token

See here about configuring labels for centralized Prometheus.

(back to top)

Grafana Cloud Managed Prometheus

For Grafana Cloud managed Prometheus, you need to specify the Prometheus link, Prometheus user, and an access token of your Grafana Cloud stack. The Prometheus link and user for the stack can be found on the Grafana Cloud Portal. An access token with a metrics:read scope can also be created using Access Policies on the same portal.

Next, run the following command after setting the PROM_URL, PROM_USER, and PROM_TOKEN variables to your Grafana Cloud stack's Prometheus link, Prometheus user, and access token.

python krr.py simple -p $PROM_URL --prometheus-auth-header "Bearer ${PROM_USER}:${PROM_TOKEN}" --prometheus-ssl-enabled

See here about configuring labels for centralized Prometheus.

(back to top)

Grafana Mimir auto-discovery

By default, KRR will try to auto-discover the running Grafana Mimir.

For discovering Mimir, it scans services for this label:

"app.kubernetes.io/name=mimir,app.kubernetes.io/component=query-frontend"

(back to top)

Integrations

Free UI for KRR recommendations

We highly recommend using thefree Robusta SaaS platform. You can:

  • Understand individual app recommendations with app usage history

  • Sort and filter recommendations by namespace, priority, and more

  • Give devs a YAML snippet to fix the problems KRR finds

  • Analyze impact using KRR scan history

Slack Notification

Put cost savings on autopilot. Get notified in Slack about recommendations above X%. Send a weekly global report, or one report per team.

Slack Screen Shot

Prerequisites

  • A Slack workspace

Setup

  1. Install Robusta with Helm in your cluster and configure Slack
  2. Create your KRR slack playbook by adding the following togenerated_values.yaml:
# Runs a weekly krr scan on the namespace devs-namespace and sends it to the configured Slack channel
customPlaybooks:
- triggers:
  - on_schedule:
      fixed_delay_repeat:
        repeat: -1 # number of times to run or -1 to run forever
        seconds_delay: 604800 # 1 week
  actions:
  - krr_scan:
      args: "--namespace devs-namespace" ## KRR args here
  sinks:
  - "main_slack_sink" # the Slack sink you want to send the report to
  3. Do a Helm upgrade to apply the new values: helm upgrade robusta robusta/robusta --values=generated_values.yaml --set clusterName=<YOUR_CLUSTER_NAME>

(back to top)

k9s Plugin

Install our k9s Plugin to get recommendations directly in deployments/daemonsets/statefulsets views.

Plugin: resource recommender

Installation instructions: k9s docs

Azure Blob Storage Export with Microsoft Teams Notifications

Export KRR reports directly to Azure Blob Storage and get notified in Microsoft Teams when reports are generated.

Teams Notification Screenshot

Prerequisites

  • An Azure Storage Account with a container for storing reports
  • A Microsoft Teams channel with an incoming webhook configured
  • Azure SAS URL with write permissions to your storage container

Setup

  1. Create Azure Storage Container: Set up a container in your Azure Storage Account (e.g., fileuploads)

  2. Generate SAS URL: Create a SAS URL for your container with write permissions:

    # Example SAS URL format (replace with your actual values)
    https://yourstorageaccount.blob.core.windows.net/fileuploads?sv=2024-11-04&ss=bf&srt=o&sp=wactfx&se=2026-07-21T21:12:48Z&st=2025-07-21T12:57:48Z&spr=https&sig=...
  3. Configure Teams Webhook: Set up an incoming webhook in your Microsoft Teams channel (located in the Workflows tab)

  4. Run KRR with Azure Integration:

    krr simple -f html \
      --azurebloboutput "https://yourstorageaccount.blob.core.windows.net/fileuploads?sv=..." \
      --teams-webhook "https://your-teams-webhook-url" \
      --azure-subscription-id "your-subscription-id" \
      --azure-resource-group "your-resource-group"

Features

  • Automatic File Upload: Reports are automatically uploaded to Azure Blob Storage with timestamped filenames
  • Teams Notifications: Rich adaptive cards are sent to Teams when reports are generated
  • Direct Links: Teams notifications include direct links to view files in Azure Portal
  • Multiple Formats: Supports all KRR output formats (JSON, CSV, HTML, YAML, etc.)
  • Secure: Uses SAS URLs for secure, time-limited access to your storage

Command Options

| Flag | Description |
|---|---|
| --azurebloboutput | Azure Blob Storage SAS URL base path (include the container name; the filename is auto-appended) |
| --teams-webhook | Microsoft Teams webhook URL for notifications |
| --azure-subscription-id | Azure Subscription ID (for Azure Portal links in Teams) |
| --azure-resource-group | Azure Resource Group name (for Azure Portal links in Teams) |

Example Usage

# Basic Azure Blob export
krr simple -f json --azurebloboutput "https://mystorageaccount.blob.core.windows.net/reports?sv=..."

# With Teams notifications
krr simple -f html \
  --azurebloboutput "https://mystorageaccount.blob.core.windows.net/reports?sv=..." \
  --teams-webhook "https://outlook.office.com/webhook/..." \
  --azure-subscription-id "12345678-1234-1234-1234-123456789012" \
  --azure-resource-group "my-resource-group"

Teams Notification Features

The Teams adaptive card includes:

  • 📊 Report generation announcement
  • Namespace and format details
  • Generation timestamp
  • Storage account and container information
  • Direct "View in Azure Storage" button linking to Azure Portal

(back to top)

Creating a Custom Strategy/Formatter

Look in the examples directory for examples of how to create a custom strategy/formatter.

(back to top)

Testing

We use pytest to run tests.

  1. Install the project manually (see above)
  2. Navigate to the project root directory
  3. Install poetry
  4. Install dev dependencies:
poetry install --group dev
  5. Install robusta_krr as an editable dependency:
pip install -e .
  6. Run the tests:
poetry run pytest

(back to top)

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

(back to top)

License

Distributed under the MIT License. See LICENSE.txt for more information.

(back to top)

Support

If you have any questions, feel free to contact support@robusta.dev or message us on robustacommunity.slack.com.

(back to top)
