# HAProxy Exporter for Prometheus
This is a simple server that scrapes HAProxy stats and exports them via HTTP for Prometheus consumption.
## HAProxy now has official Prometheus support

In all supported versions of HAProxy, the official source includes a Prometheus exporter module that can be built into your binary with a single flag during build time and offers a native Prometheus endpoint. For more information, see the Alternatives section below.
Please transition to using the built-in support as soon as possible.
## Getting Started

To run it:

    ./haproxy_exporter [flags]

Help on flags:

    ./haproxy_exporter --help
For more information check the source code documentation. All of the core developers are accessible via the Prometheus Developers mailing list.
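Once the exporter is running, Prometheus needs a scrape job that points at it. A minimal sketch of such a job, assuming the exporter listens on its default port 9101 on the same host (the job name is arbitrary):

    # prometheus.yml (fragment)
    scrape_configs:
      - job_name: haproxy                   # arbitrary job name
        static_configs:
          - targets: ['localhost:9101']     # default exporter listen address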
## Usage

Specify custom URLs for the HAProxy stats port using the `--haproxy.scrape-uri`
flag. For example, if you have set `stats uri /baz`:

    haproxy_exporter --haproxy.scrape-uri="http://localhost:5000/baz?stats;csv"
Or to scrape a remote host:

    haproxy_exporter --haproxy.scrape-uri="http://haproxy.example.com/haproxy?stats;csv"

Note that the `;csv` is mandatory (and needs to be quoted).
If your stats port is protected by basic auth, add the credentials to the
scrape URL:

    haproxy_exporter --haproxy.scrape-uri="http://user:pass@haproxy.example.com/haproxy?stats;csv"

Alternatively, provide the password through a file, so that it does not appear
in the process table or in the output of the `/debug/pprof/cmdline` profiling
service:

    echo '--haproxy.scrape-uri=http://user:pass@haproxy.example.com/haproxy?stats;csv' > args
    haproxy_exporter @args
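Since the args file now holds the credentials in the clear, it is worth restricting its permissions; a suggestion, not something the exporter enforces:

    chmod 600 args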
You can also scrape HTTPS URLs. Certificate validation is enabled by default, but
you can disable it using the `--no-haproxy.ssl-verify` flag:

    haproxy_exporter --no-haproxy.ssl-verify --haproxy.scrape-uri="https://haproxy.example.com/haproxy?stats;csv"
If scraping a remote HAProxy must be done via an HTTP proxy, you can enable
reading of the standard `$http_proxy`/`$https_proxy`/`$no_proxy` environment
variables by using the `--http.proxy-from-env` flag (these variables will be
ignored otherwise):

    export HTTP_PROXY="http://proxy:3128"
    haproxy_exporter --http.proxy-from-env --haproxy.scrape-uri="http://haproxy.example.com/haproxy?stats;csv"
### Unix sockets

As an alternative to localhost HTTP, a stats socket can be used. Enable the
stats socket in HAProxy, for example:

    stats socket /run/haproxy/admin.sock mode 660 level admin

The scrape URL then uses the `unix:` scheme:

    haproxy_exporter --haproxy.scrape-uri=unix:/run/haproxy/admin.sock
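To check that the socket works before pointing the exporter at it, you can query it directly, for example with socat (assuming socat is installed; `show stat` returns the same CSV the exporter parses):

    echo "show stat" | socat stdio /run/haproxy/admin.sock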
### Docker

To run the haproxy exporter as a Docker container, run:

    docker run -p 9101:9101 quay.io/prometheus/haproxy-exporter:latest --haproxy.scrape-uri="http://user:pass@haproxy.example.com/haproxy?stats;csv"
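To verify that the exporter is up, fetch its metrics endpoint (port 9101 is the exporter's default, mapped above); the `haproxy_up` metric reports whether the last scrape of HAProxy succeeded:

    curl -s http://localhost:9101/metrics | grep '^haproxy_up'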
## Building and testing

    make build
    make test
## TLS and basic authentication

The HAProxy Exporter supports TLS and basic authentication.

To use TLS and/or basic authentication, you need to pass a configuration file
using the `--web.config.file` parameter. The format of the file is described
in the exporter-toolkit repository.
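A minimal sketch of such a file, following the exporter-toolkit format (the file names and the user name here are assumptions; the password must be a bcrypt hash, elided below):

    # web-config.yml (hypothetical example)
    tls_server_config:
      cert_file: server.crt             # path to the server certificate
      key_file: server.key              # path to the private key
    basic_auth_users:
      admin: <bcrypt-hash-of-password>  # e.g. generated with: htpasswd -nBC 10 "" | tr -d ':\n'

Then start the exporter with `--web.config.file=web-config.yml` in addition to the usual flags.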
## License

Apache License 2.0, see LICENSE.
## Alternatives: HAProxy's built-in Prometheus endpoint

As of 2.0.0, HAProxy includes a Prometheus exporter module that can be built
into the binary at build time. For HAProxy 2.4 and higher, pass the `USE_PROMEX`
flag to `make`:

    make TARGET=linux-glibc USE_PROMEX=1
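To confirm that a given binary was built with the module, check the build information; when the module is present, `prometheus-exporter` is listed among the available services (exact output varies between versions):

    haproxy -vv | grep -A2 'Available services'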
Pre-built versions, including the Docker image, typically have this enabled already.
Once built, you can enable and configure the Prometheus endpoint from your
`haproxy.cfg` file as a typical frontend:

    frontend stats
        bind *:8404
        http-request use-service prometheus-exporter if { path /metrics }
        stats enable
        stats uri /stats
        stats refresh 10s
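Prometheus can then scrape this endpoint directly, with no separate exporter process; a sketch of the scrape job, assuming the frontend above (the job name is arbitrary):

    # prometheus.yml (fragment)
    scrape_configs:
      - job_name: haproxy-native                    # arbitrary job name
        static_configs:
          - targets: ['haproxy.example.com:8404']   # the frontend defined above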
For more information, see the official HAProxy blog post on the subject.