# prometheus/client_ruby

Prometheus instrumentation library for Ruby applications.

A suite of instrumentation metric primitives for Ruby that can be exposed through an HTTP interface. Intended to be used together with a Prometheus server.

## Usage

### Installation

For a global installation, run `gem install prometheus-client`.

If you're using Bundler, add `gem "prometheus-client"` to your `Gemfile`. Make sure to run `bundle install` afterwards.

### Overview

```ruby
require 'prometheus/client'

# returns a default registry
prometheus = Prometheus::Client.registry

# create a new counter metric
http_requests = Prometheus::Client::Counter.new(:http_requests, docstring: 'A counter of HTTP requests made')
# register the metric
prometheus.register(http_requests)

# equivalent helper function
http_requests = prometheus.counter(:http_requests, docstring: 'A counter of HTTP requests made')

# start using the counter
http_requests.increment
```

### Rack middleware

There are two Rack middlewares available: one to expose a metrics HTTP endpoint to be scraped by a Prometheus server (`Exporter`), and one to trace all HTTP requests (`Collector`).

It's highly recommended to enable gzip compression for the metrics endpoint, for example by including the `Rack::Deflater` middleware.

```ruby
# config.ru

require 'rack'
require 'prometheus/middleware/collector'
require 'prometheus/middleware/exporter'

use Rack::Deflater
use Prometheus::Middleware::Collector
use Prometheus::Middleware::Exporter

run ->(_) { [200, { 'content-type' => 'text/html' }, ['OK']] }
```

Start the server and have a look at the metrics endpoint: http://localhost:5123/metrics.

For further instructions and other scripts to get started, have a look at the integrated example application.

### Pushgateway

The Ruby client can also be used to push its collected metrics to a Pushgateway. This comes in handy with batch jobs, or in other scenarios where it's not possible or feasible to let a Prometheus server scrape a Ruby process. TLS and HTTP basic authentication are supported.

```ruby
require 'prometheus/client'
require 'prometheus/client/push'

registry = Prometheus::Client.registry
# ... register some metrics, set/increment/observe/etc. their values

# push the registry state to the default gateway
Prometheus::Client::Push.new(job: 'my-batch-job').add(registry)

# optional: specify a grouping key that uniquely identifies a job instance, and a gateway.
#
# Note: the labels you use in the grouping key must not conflict with labels set on the
# metrics being pushed. If they do, an error will be raised.
Prometheus::Client::Push.new(
  job: 'my-batch-job',
  gateway: 'https://example.domain:1234',
  grouping_key: { instance: 'some-instance', extra_key: 'foobar' }
).add(registry)

# If you want to replace any previously pushed metrics for a given grouping key,
# use the #replace method.
#
# Unlike #add, this will completely replace the metrics under the specified grouping key
# (i.e. anything currently present in the pushgateway for the specified grouping key, but
# not present in the registry for that grouping key, will be removed).
#
# See https://github.com/prometheus/pushgateway#put-method for a full explanation.
Prometheus::Client::Push.new(job: 'my-batch-job').replace(registry)

# If you want to delete all previously pushed metrics for a given grouping key,
# use the #delete method.
Prometheus::Client::Push.new(job: 'my-batch-job').delete
```

#### Basic authentication

By design, `Prometheus::Client::Push` doesn't read credentials for HTTP basic authentication when they are passed in via the gateway URL using the `http://user:password@example.com:9091` syntax, and will in fact raise an error if they're supplied that way.

The reason for this is that when using that syntax, the username and password have to follow the usual rules for URL encoding of characters per RFC 3986.

Rather than place the burden of correctly performing that encoding on users of this gem,we decided to have a separate method for supplying HTTP basic authentication credentials,with no requirement to URL encode the characters in them.

Instead of passing credentials like this:

```ruby
push = Prometheus::Client::Push.new(job: "my-job", gateway: "http://user:password@localhost:9091")
```

please pass them like this:

```ruby
push = Prometheus::Client::Push.new(job: "my-job", gateway: "http://localhost:9091")
push.basic_auth("user", "password")
```

## Metrics

The following metric types are currently supported.

### Counter

Counter is a metric that exposes merely a sum or tally of things.

```ruby
counter = Prometheus::Client::Counter.new(:service_requests_total, docstring: '...', labels: [:service])

# increment the counter for a given label set
counter.increment(labels: { service: 'foo' })

# increment by a given value
counter.increment(by: 5, labels: { service: 'bar' })

# get current value for a given label set
counter.get(labels: { service: 'bar' })
# => 5
```

### Gauge

Gauge is a metric that exposes merely an instantaneous value or some snapshot thereof.

```ruby
gauge = Prometheus::Client::Gauge.new(:room_temperature_celsius, docstring: '...', labels: [:room])

# set a value
gauge.set(21.534, labels: { room: 'kitchen' })

# retrieve the current value for a given label set
gauge.get(labels: { room: 'kitchen' })
# => 21.534

# increment the value (default is 1)
gauge.increment(labels: { room: 'kitchen' })
# => 22.534

# decrement the value by a given value
gauge.decrement(by: 5, labels: { room: 'kitchen' })
# => 17.534
```

### Histogram

A histogram samples observations (usually things like request durations or response sizes) and counts them in configurable buckets. It also provides a sum of all observed values.

```ruby
require 'benchmark'

histogram = Prometheus::Client::Histogram.new(:service_latency_seconds, docstring: '...', labels: [:service])

# record a value
histogram.observe(Benchmark.realtime { service.call(arg) }, labels: { service: 'users' })

# retrieve the current bucket values
histogram.get(labels: { service: 'users' })
# => { 0.005 => 3, 0.01 => 15, 0.025 => 18, ..., 2.5 => 42, 5 => 42, 10 => 42 }
```

Histograms provide default buckets of `[0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10]`.

You can specify your own buckets, either explicitly, or using the `Histogram.linear_buckets` or `Histogram.exponential_buckets` methods to define regularly spaced buckets.

### Summary

Summary, similar to histograms, is an accumulator for samples. It captures Numeric data and provides an efficient percentile calculation mechanism.

For now, only `sum` and `total` (count of observations) are supported, no actual quantiles.

```ruby
require 'benchmark'

summary = Prometheus::Client::Summary.new(:service_latency_seconds, docstring: '...', labels: [:service])

# record a value
summary.observe(Benchmark.realtime { service.call }, labels: { service: 'database' })

# retrieve the current sum and total values
summary_value = summary.get(labels: { service: 'database' })
summary_value['sum']   # => 123.45
summary_value['count'] # => 100
```

## Labels

All metrics can have labels, allowing grouping of related time series.

Labels are an extremely powerful feature, but one that must be used with care. Refer to the best practices on naming and labels.

Most importantly, avoid labels that can have a large number of possible values (high cardinality). For example, an HTTP status code is a good label; a user ID is not.

Labels are specified optionally when updating metrics, as a hash of `label_name => value`. Refer to the Prometheus documentation as to what's a valid `label_name`.

In order for a metric to accept labels, their names must be specified when first initializing the metric. Then, when the metric is updated, all the specified labels must be present.

Example:

```ruby
https_requests_total = Counter.new(:http_requests_total, docstring: '...', labels: [:service, :status_code])

# increment the counter for a given label set
https_requests_total.increment(labels: { service: "my_service", status_code: response.status_code })
```

### Pre-set Label Values

You can also "pre-set" some of these label values, if they'll always be the same, so you don't need to specify them every time:

```ruby
https_requests_total = Counter.new(
  :http_requests_total,
  docstring: '...',
  labels: [:service, :status_code],
  preset_labels: { service: "my_service" }
)

# increment the counter for a given label set
https_requests_total.increment(labels: { status_code: response.status_code })
```

### `with_labels`

Similar to pre-setting labels, you can get a new instance of an existing metric object, with a subset (or full set) of labels set, so that you can increment / observe the metric without having to specify the labels for every call.

Moreover, if all the labels the metric can take have been pre-set, validation of the labels is done on the call to `with_labels`, and then skipped for each observation, which can lead to performance improvements. If you are incrementing a counter in a fast loop, you definitely want to be doing this.

Examples:

Pre-setting labels for ease of use:

```ruby
# in the metric definition:
records_processed_total = registry.counter(
  :records_processed_total,
  docstring: '...',
  labels: [:service, :component],
  preset_labels: { service: "my_service" }
)

# in one-off calls, you'd specify the missing labels (component in this case)
records_processed_total.increment(labels: { component: 'a_component' })

# you can also have a "view" on this metric for a specific component where this label is
# pre-set:
class MyComponent
  def metric
    @metric ||= records_processed_total.with_labels(component: "my_component")
  end

  def process
    records.each do |record|
      # process the record
      metric.increment
    end
  end
end
```

### `init_label_set`

The time series of a metric are not initialized until something happens. For counters, for example, this means that the time series do not exist until the counter is incremented for the first time.

To get around this problem, the client provides the `init_label_set` method, which can be used to initialize the time series of a metric for a given label set.

### Reserved labels

The following labels are reserved by the client library, and attempting to use them in a metric definition will result in a `Prometheus::Client::LabelSetValidator::ReservedLabelError` being raised:

- `:job`
- `:instance`
- `:pid`

## Data Stores

The data for all the metrics (the internal counters associated with each labelset) is stored in a global Data Store object, rather than in the metric objects themselves. (This "storage" is ephemeral, generally in-memory; it's not "long-term storage".)

The main reason to do this is that different applications may have different requirements for their metrics storage. Applications running in pre-fork servers (like Unicorn, for example) require a shared store between all the processes, to be able to report coherent numbers. At the same time, other applications may not have this requirement but be very sensitive to performance, and would prefer instead a simpler, faster store.

By having a standardized and simple interface that metrics use to access this store, we abstract away the details of storing the data from the specific needs of each metric. This allows us to simply swap the stores around based on the needs of different applications, with no changes to the rest of the client.

The client provides 3 built-in stores, but if none of these is ideal for your requirements, you can easily make your own store and use that instead. More on this below.

### Configuring which store to use

By default, the client uses the `Synchronized` store, which is a simple, thread-safe store for single-process scenarios.

If you need to use a different store, set it in the Client Config:

```ruby
Prometheus::Client.config.data_store = Prometheus::Client::DataStores::DataStore.new(store_specific_params)
```

NOTE: You *must* make sure to set the `data_store` before initializing any metrics. If using Rails, you probably want to set up your Data Store in `config/application.rb` or `config/environments/*`, both of which run before `config/initializers/*`.

Also note that `config.data_store` is set to an *instance* of a `DataStore`, not to the class. This is so that the stores can receive parameters. Most of the built-in stores don't require any, but `DirectFileStore` does, for example.

When instantiating metrics, there is an optional `store_settings` attribute. This is used to set up store-specific settings for each metric. For most stores, this is not used, but for multi-process stores, it is used to specify how to aggregate the values of each metric across multiple processes. For the most part, this is used for Gauges, to specify whether you want to report the `SUM`, `MAX`, `MIN`, or `MOST_RECENT` value observed across all processes. For almost all other cases, you'd leave the default (`SUM`). More on this in the Aggregation section below.

Custom stores may also accept extra parameters besides `:aggregation`. See the documentation of each store for more details.

### Built-in stores

There are 3 built-in stores, with different trade-offs:

- `Synchronized`: Default store. Thread safe, but not suitable for multi-process scenarios (e.g. pre-fork servers, like Unicorn). Stores data in Hashes, with all accesses protected by Mutexes.
- `SingleThreaded`: Fastest store, but only suitable for single-threaded scenarios. This store does not make any effort to synchronize access to its internal hashes, so it's absolutely not thread safe.
- `DirectFileStore`: Stores data in binary files, one file per process and per metric. This is generally the recommended store to use with pre-fork servers and other "multi-process" scenarios. There are some important caveats to using this store, so please read the section below.

#### `DirectFileStore` caveats and things to keep in mind

Each metric gets a file for each process, and manages its contents by storing keys and binary floats next to them, and updating the offsets of those Floats directly. When exporting metrics, it will find all the files that apply to each metric, read them, and aggregate them.

**Aggregation of metrics**: Since there will be several files per metric (one per process), these need to be aggregated to present a coherent view to Prometheus. Depending on your use case, you may need to control how this works. When using this store, each metric allows you to specify an `:aggregation` setting, defining how to aggregate the multiple possible values we can get for each labelset. By default, Counters, Histograms and Summaries are `SUM`med, and Gauges report all their values (one for each process), tagged with a `pid` label. You can also select `SUM`, `MAX`, `MIN`, or `MOST_RECENT` for your gauges, depending on your use case.

Please note that the `MOST_RECENT` aggregation only works for gauges, and it does not allow the use of `increment`/`decrement`; you can only use `set`.

**Memory usage**: When scraped by Prometheus, this store will read all these files, get all the values and aggregate them. We have noticed this can have a noticeable effect on memory usage for your app. We recommend you test this in a realistic usage scenario to make sure you won't hit any memory limits your app may have.

**Resetting your metrics on each run**: You should also make sure that the directory where you store your metric files (specified when initializing the `DirectFileStore`) is emptied when your app starts. Otherwise, each app run will continue exporting the metrics from the previous run.

If you have this issue, one way to do this is to run code similar to this as part of your initialization:

```ruby
Dir["#{app_path}/tmp/prometheus/*.bin"].each do |file_path|
  File.unlink(file_path)
end
```

If you are running in pre-fork servers (such as Unicorn, or Puma with multiple processes), make sure you do this *before* the server forks. Otherwise, each child process may delete files created by other processes on *this* run, instead of deleting old files.

**Declare metrics before fork**: As well as deleting files before your process forks, you should make sure to declare your metrics before forking too. Because the metric registry is held in memory, any metrics declared after forking will only be present in child processes where the code declaring them ran, and as a result may not be consistently exported when scraped (i.e. they will only appear when a child process that declared them is scraped).

If you're absolutely sure that every child process will run the metric declaration code, then you won't run into this issue, but the simplest approach is to declare the metrics before forking.
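Putting these steps together, a pre-fork setup might look like this (a sketch only, assuming Puma-style workers and a `tmp/prometheus` directory; adapt the paths, the metric, and where this code runs to your own stack):

```ruby
# config/puma.rb or similar pre-fork initialization (sketch)
require 'prometheus/client'
require 'prometheus/client/data_stores/direct_file_store'

# 1. choose the store before declaring any metrics
Prometheus::Client.config.data_store =
  Prometheus::Client::DataStores::DirectFileStore.new(dir: 'tmp/prometheus')

# 2. clear files left over from the previous run, before any worker forks
Dir['tmp/prometheus/*.bin'].each { |f| File.unlink(f) }

# 3. declare metrics before forking, so every worker shares the same registry
REQUESTS_TOTAL = Prometheus::Client.registry.counter(
  :http_requests_total,
  docstring: 'Requests processed'
)
```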

**Large numbers of files**: Because there is an individual file per metric and per process (which is done to optimize for observation performance), you may end up with a large number of files. We don't currently have a solution for this problem, but we're working on it.

**Performance**: Even though this store saves data on disk, it's still much faster than one might expect, because the files are never actually `fsync`ed, so the store never blocks while waiting for disk. The kernel's page cache is incredibly efficient in this regard. If in doubt, check the benchmark scripts described in the documentation for creating your own stores, and run them in your particular runtime environment to make sure this provides adequate performance.

### Building your own store, and stores other than the built-in ones

If none of these stores is suitable for your requirements, you can easily make your own.

The interface and requirements of stores are specified in detail in the `README.md` in the `client/data_stores` directory, which thoroughly documents how to make your own store.

There are also links there to non-built-in stores created by others that may be useful,either as they are, or as a starting point for making your own.

### Aggregation settings for multi-process stores

If you are in a multi-process environment (such as pre-fork servers like Unicorn), each process will probably keep its own counters, which need to be aggregated when receiving a Prometheus scrape, to report coherent total numbers.

For Counters, Histograms and quantile-less Summaries, this is simply a matter of summing the values of each process.

For Gauges, however, this may not be the right thing to do, depending on what they're measuring. You might want to take the maximum or minimum value observed in any process, rather than the sum of all of them. By default, we export each process's individual value, with a `pid` label identifying each one.

If these defaults don't work for your use case, you should use the `store_settings` parameter when registering the metric, to specify an `:aggregation` setting.

```ruby
free_disk_space = registry.gauge(
  :free_disk_space_bytes,
  docstring: "Free disk space, in bytes",
  store_settings: { aggregation: :max }
)
```

NOTE: This will only work if the store you're using supports the `:aggregation` setting. Of the built-in stores, only `DirectFileStore` does.

Also note that the `:aggregation` setting works for all metric types, not just for gauges. It would be unusual to use it for anything other than gauges, but if your use case requires it, the store will respect your aggregation wishes.

## Tests

Install necessary development gems with `bundle install` and run the tests:

```shell
rake
```
