kubernetes-sigs/metrics-server

Metrics Server is a scalable, efficient source of container resource metrics for Kubernetes built-in autoscaling pipelines.

Metrics Server collects resource metrics from Kubelets and exposes them in the Kubernetes apiserver through the Metrics API for use by the Horizontal Pod Autoscaler and Vertical Pod Autoscaler. The Metrics API can also be accessed by kubectl top, making it easier to debug autoscaling pipelines.

Caution

Metrics Server is meant only for autoscaling purposes. For example, don't use it to forward metrics to monitoring solutions, or as a source of monitoring solution metrics. In such cases, please collect metrics from the Kubelet's /metrics/resource endpoint directly.
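For monitoring use cases, the same data can be read from the Kubelet directly. A minimal sketch, assuming kubectl access to the cluster; "node-1" is a placeholder node name:

```shell
# Read the Kubelet's resource metrics endpoint through the apiserver proxy.
# "node-1" is hypothetical; list your nodes with `kubectl get nodes` first.
kubectl get --raw "/api/v1/nodes/node-1/proxy/metrics/resource" | head -n 20
```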

Metrics Server offers:

  • A single deployment that works on most clusters (see Requirements)
  • Fast autoscaling, collecting metrics every 15 seconds.
  • Resource efficiency, using 1 milli-core of CPU and 2 MB of memory for each node in a cluster.
  • Scalable support up to 5,000 node clusters.

Use cases

You can use Metrics Server for:

  • CPU/Memory based horizontal autoscaling (see Horizontal Pod Autoscaler)
  • Automatically adjusting/suggesting resources needed by containers (see Vertical Pod Autoscaler)

Don't use Metrics Server when you need:

  • Non-Kubernetes clusters
  • An accurate source of resource usage metrics
  • Horizontal autoscaling based on other resources than CPU/Memory

For unsupported use cases, check out full monitoring solutions like Prometheus.

Requirements

Metrics Server has specific requirements for cluster and network configuration. These requirements aren't the default for all cluster distributions. Please ensure that your cluster distribution supports them before using Metrics Server:

  • The kube-apiserver must enable an aggregation layer.
  • Nodes must have Webhook authentication and authorization enabled.
  • Kubelet certificates need to be signed by the cluster Certificate Authority (or disable certificate validation by passing --kubelet-insecure-tls to Metrics Server).
  • The container runtime must implement the container metrics RPCs (or have cAdvisor support).
  • The network should support the following communication:
    • Control plane to Metrics Server. The control plane node needs to reach Metrics Server's pod IP and port 10250 (or node IP and custom port if hostNetwork is enabled). Read more about control plane to node communication.
    • Metrics Server to Kubelet on all nodes. Metrics Server needs to reach the node address and Kubelet port. Addresses and ports are configured in the Kubelet and published as part of the Node object: addresses in the .status.addresses field and the port in the .status.daemonEndpoints.kubeletEndpoint.port field (default 10250). Metrics Server will pick the first node address based on the list provided by the kubelet-preferred-address-types command line flag (default InternalIP,ExternalIP,Hostname in manifests).
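A quick way to sanity-check some of these requirements on a running cluster (a sketch, assuming kubectl is configured for the cluster):

```shell
# Check that the aggregation layer knows about the Metrics API
# (AVAILABLE will be False until Metrics Server is installed and healthy).
kubectl get apiservices v1beta1.metrics.k8s.io

# Inspect a Node object for the addresses and Kubelet port Metrics Server will use.
kubectl get nodes -o jsonpath='{.items[0].status.addresses}{"\n"}{.items[0].status.daemonEndpoints.kubeletEndpoint.port}{"\n"}'
```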

Installation

Metrics Server can be installed either directly from a YAML manifest or via the official Helm chart. To install the latest Metrics Server release from the components.yaml manifest, run the following command.

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

Installation instructions for previous releases can be found in Metrics Server releases.
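After applying the manifest, a short check that the deployment rolled out and metrics are being served (a sketch, assuming the default kube-system installation):

```shell
# Wait for the Metrics Server deployment to become available.
kubectl -n kube-system rollout status deployment/metrics-server

# Once the APIService reports Available, resource metrics can be queried.
kubectl top nodes
```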

Compatibility Matrix

Metrics Server   Metrics API group/version   Supported Kubernetes version
0.8.x            metrics.k8s.io/v1beta1      1.31+
0.7.x            metrics.k8s.io/v1beta1      1.27+
0.6.x            metrics.k8s.io/v1beta1      1.25+
0.5.x            metrics.k8s.io/v1beta1*     1.8+
0.4.x            metrics.k8s.io/v1beta1*     1.8+
0.3.x            metrics.k8s.io/v1beta1      1.8-1.21

*Kubernetes versions lower than v1.16 require passing the --authorization-always-allow-paths=/livez,/readyz command line flag

High Availability

Metrics Server can be installed in high availability mode directly from a YAML manifest or via the official Helm chart by setting the replicas value greater than 1. To install the latest Metrics Server release in high availability mode from the high-availability.yaml manifest, run the following command.

On Kubernetes v1.21+:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/high-availability-1.21+.yaml

On Kubernetes v1.19-1.21:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/high-availability.yaml

Note

This configuration requires having a cluster with at least 2 nodes on which Metrics Server can be scheduled.

Also, to maximize the efficiency of this highly available configuration, it is recommended to add the --enable-aggregator-routing=true CLI flag to the kube-apiserver so that requests sent to Metrics Server are load balanced between the 2 instances.
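To confirm the setup, you can check that both replicas are running and scheduled on different nodes (a sketch, assuming the default kube-system namespace and the k8s-app=metrics-server label used by the official manifests):

```shell
# Both Metrics Server pods should be Running, each on a different node.
kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide
```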

Helm Chart

The Helm chart is maintained as an additional component within this repo and released into a chart repository backed on the gh-pages branch. A new version of the chart is released for each Metrics Server release, and can also be released independently if there is a need. The chart on the master branch shouldn't be referenced directly, as it might contain modifications made since it was last released; to view the chart code, use the chart release tag.
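Installing via the chart repository looks like the following (a sketch; the repository is the one published from the gh-pages branch, and the replicas override is optional):

```shell
# Add the Metrics Server chart repository and install the chart.
helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
helm repo update
helm upgrade --install metrics-server metrics-server/metrics-server \
  --namespace kube-system \
  --set replicas=2   # optional: high availability mode, as described above
```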

Security context

Metrics Server requires the CAP_NET_BIND_SERVICE capability in order to bind to a privileged port as non-root. If you are running Metrics Server in an environment that uses Pod Security Standards or other mechanisms to restrict pod capabilities, ensure that Metrics Server is allowed to use this capability. This applies even if you use the --secure-port flag to bind Metrics Server to a non-privileged port.

Scaling

Starting from v0.5.0 Metrics Server comes with default resource requests that should guarantee good performance for most cluster configurations up to 100 nodes:

  • 100m core of CPU
  • 200MiB of memory

Metrics Server resource usage depends on multiple independent dimensions, creating a Scalability Envelope. The default Metrics Server configuration should work in clusters that don't exceed any of the thresholds listed below:

Quantity                 Namespace threshold   Cluster threshold
#Nodes                   n/a                   100
#Pods per node           70                    70
#Deployments with HPAs   100                   100

Resources can be adjusted proportionally based on the number of nodes in the cluster. For clusters of more than 100 nodes, additionally allocate:

  • 1m core per node
  • 2MiB memory per node

You can use the same approach to lower resource requests, but there is a boundary where this may impact other scalability dimensions like maximum number of pods per node.
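The scaling rule above can be turned into a quick calculation. A sketch in plain shell arithmetic (the 500-node cluster size is an example assumption):

```shell
# Suggested Metrics Server resource requests for an N-node cluster:
# baseline 100m CPU / 200MiB memory covers the first 100 nodes,
# plus 1m CPU and 2MiB memory for every node beyond that.
nodes=500
extra=$(( nodes > 100 ? nodes - 100 : 0 ))
cpu_m=$(( 100 + extra ))
mem_mib=$(( 200 + 2 * extra ))
echo "cpu=${cpu_m}m memory=${mem_mib}Mi"
```

For a 500-node cluster this yields requests of 500m CPU and 1000Mi memory.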

Configuration

Depending on your cluster setup, you may also need to change flags passed to the Metrics Server container. Most useful flags:

  • --kubelet-preferred-address-types - The priority of node address types used when determining an address for connecting to a particular node (default [Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP])
  • --kubelet-insecure-tls - Do not verify the CA of serving certificates presented by Kubelets. For testing purposes only.
  • --requestheader-client-ca-file - Specify a root certificate bundle for verifying client certificates on incoming requests.
  • --node-selector - Scrape metrics only from nodes that match the specified label selector.
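Flags are passed as container args on the metrics-server Deployment. A sketch of appending one with a JSON patch (the flag value here is an example):

```shell
# Append --kubelet-preferred-address-types=InternalIP to the container args.
kubectl -n kube-system patch deployment metrics-server --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-preferred-address-types=InternalIP"}]'
```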

You can get a full list of Metrics Server configuration flags by running:

docker run --rm registry.k8s.io/metrics-server/metrics-server:v0.8.0 --help

Design

Metrics Server is a component in the core metrics pipeline described in the Kubernetes monitoring architecture.

For more information, see:

This diagram shows how metrics-server handles a kubectl top pods request:

```mermaid
sequenceDiagram
    participant User
    participant APIServer
    participant MS as Metrics-server
    User->>APIServer: GET /apis/metrics.k8s.io/v1beta1/pods
    APIServer->>MS: GET /apis/metrics.k8s.io/v1beta1/pods
    MS->>MS: use Pod Informer to get a list of pods
    MS->>MS: lookup each pod's memory and cpu from its in-memory cache
    MS->>APIServer: metrics.PodMetricsList
    APIServer->>User: Response
```

```mermaid
sequenceDiagram
    participant MS as Metrics-server
    participant KL as Kubelet
    MS->>MS: use Node informer to get a list of nodes and their IPs periodically
    MS->>KL: GET /metrics/resource
    KL->>MS: returns memory and cpu data for each pod
    MS->>MS: update its in-memory cache to store memory and cpu data for each pod
```
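The request path in the first diagram can be exercised directly against the aggregated API (a sketch, assuming Metrics Server is installed and healthy):

```shell
# What `kubectl top pods` sends under the hood: a GET against the Metrics API.
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods" | head -c 400
```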

Have a question?

Before posting an issue, first check out Frequently Asked Questions and Known Issues.

Community, discussion, contribution, and support

Learn how to engage with the Kubernetes community on the community page.

You can reach the maintainers of this project at:

This project is maintained by SIG Instrumentation.

Code of conduct

Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.
