This page describes how monitoring works as deployed and managed by the WMCS team, for both Cloud VPS and Toolforge.
We have our own instance in the wikiprod Prometheus setup. As of writing (Oct 2023), it's only in eqiad, but that might change. It's configured via the profile::prometheus::cloud Puppet profile.
To query it, use https://thanos.wikimedia.org or https://prometheus-eqiad.wikimedia.org/cloud/. To craft dashboards, use the Grafana instance at https://grafana.wikimedia.org.
The Cloud VPS project "metricsinfra" provides the base infrastructure and services for multi-tenant instance monitoring on Cloud VPS. Technical documentation for the setup is at Nova Resource:Metricsinfra/Documentation.
The metricsinfra Prometheus server scrapes base instance-level metrics from all Puppetized Cloud VPS instances.
Metricsinfra Prometheus CAN be used for:
Metricsinfra Prometheus MUST NOT be used for:
The monitoring configuration is mostly kept in a Trove database. There is no user-friendly management interface yet, but for now you can ssh to metricsinfra-controller-2.metricsinfra.eqiad1.wikimedia.cloud and use sudo -i mariadb to edit the database by hand.
Scrape targets are defined in the scrapes table:
MariaDB [prometheusconfig]> select * from scrapes s join projects p on s.project_id = p.id;
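Adding a new scrape target is just another row in that table. The following is a minimal sketch assuming a hypothetical column layout of (id, project_id, name, port, metrics_path) and reusing project_id 12 from the alert example below; check the real schema with DESCRIBE before inserting:
MariaDB [prometheusconfig]> DESCRIBE scrapes;
MariaDB [prometheusconfig]> INSERT INTO scrapes VALUES (NULL, 12, 'node-exporter-extra', 9100, '/metrics'); -- hypothetical column layout and values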
Project-specific rules are defined in the alerts table. Global rules that apply to all Cloud VPS projects are defined in the global_alerts table. You can add a new alert with a query like the following one:
MariaDB [prometheusconfig]> INSERT INTO alerts VALUES (NULL, 12, 'ToolsDBReplicationLagIsTooHigh', 'mysql_slave_status_seconds_behind_master{project="tools"} > 3600', '1m', 'warning', '{"summary": "ToolsDB replication on {{ $labels.instance }} is lagging behind the primary, the current lag is {{ $value }}"}');
The new alert should appear at https://prometheus.wmcloud.org/alerts after a few minutes.
Note that these alerts cannot query metrics that are not stored in the metricsinfra Prometheus instance; most notably, this excludes metrics from various Toolforge components. Other Prometheus instances may, however, have separate mechanisms for configuring alert rules.
The metricsinfra project has an Alertmanager instance that will send out alerts via IRC, email or VictorOps. In addition to the metricsinfra Prometheus instance, other Prometheus instances in WMCS-managed projects can use this instance to send out alerts.
By default, project viewers and members can use prometheus-alerts.wmcloud.org to create and edit silences for the projects they are in. (Toolforge is an exception to this general rule: access to creating and editing silences for the tools project is restricted to maintainers of the "admin" tool.) In addition, members of the "admin" and "metricsinfra" projects can manage silences for any project.
Alternatively, to silence existing or expected (downtime) notifications, you can use the `amtool` command on any metricsinfra Alertmanager server (currently, for example, metricsinfra-alertmanager-1.metricsinfra.eqiad1.wikimedia.cloud). For example, to silence all Toolsbeta alerts you could use:
metricsinfra-alertmanager-1:~$ amtool silence add project=toolsbeta -c "per T123456" -d 30d
3e68bf51-63f6-4406-a009-e6765acf5d8e
To change this default behavior, set the acl_group column in the projects table of the prometheusconfig database.
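A minimal sketch of restricting silence management for a single project follows; the projects table matches the join in the query above, but the name column and the group value shown here are assumptions to verify against the live schema:
MariaDB [prometheusconfig]> UPDATE projects SET acl_group = 'toolsbeta-admins' WHERE name = 'toolsbeta'; -- hypothetical column name and group value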
Change the profile::wmcs::metricsinfra::alertmanager::project_proxy::trusted_hosts Hiera key (managed via Horizon on the metricsinfra project) to include the per-project Prometheus servers to allow. Right now this is host-level authentication only; no secrets are involved, unfortunately.
Then, in the Prometheus server config, use something like this:
alerting:
  alertmanagers:
    - openstack_sd_configs:
        - role: instance
          region: eqiad1-r
          identity_endpoint: https://openstack.eqiad1.wikimediacloud.org:25000/v3
          username: novaobserver
          password: $NOVAOBSERVER_PASSWORD
          domain_name: default
          project_name: metricsinfra
          all_tenants: false
          refresh_interval: 5m
          port: 8643
      relabel_configs:
        - source_labels:
            - __meta_openstack_instance_name
          action: keep
          regex: metricsinfra-alertmanager-\d+
        - source_labels:
            - __meta_openstack_instance_name
          target_label: instance
        - source_labels:
            - __meta_openstack_instance_status
          action: keep
          regex: ACTIVE
  alert_relabel_configs:
    - target_label: source
      replacement: prometheus
      action: replace
    - target_label: project
      replacement: $YOUR_OPENSTACK_PROJECT_NAME
      action: replace
The Metricsinfra Grafana instance is used to draw dashboards from Prometheus data. Like the metricsinfra Alertmanager instance, it can be used with per-project Prometheus servers in addition to the metricsinfra Prometheus server.
Data sources are managed via modules/profile/files/wmcs/metricsinfra/grafana/datasources.yaml in the Puppet repository.
In addition to the Metricsinfra setup, Toolforge has its own Prometheus server for Kubernetes metrics. It's queryable via https://prometheus.svc.toolforge.org/tools/ and uses the metricsinfra Grafana and Alertmanager instances. Alerts are configured via https://gitlab.wikimedia.org/repos/cloud/toolforge/alerts. The toolsbeta equivalent is queryable via https://prometheus.svc.beta.toolforge.org/tools/.
If you want to get an overview of what's going on in the Cloud VPS infra, open these links:
| Datacenter | What | Mechanism | Comments | Link |
|---|---|---|---|---|
| eqiad | NFS servers | icinga | labstore1xxx servers | [1] |
| eqiad | NFS Server Statistics | grafana | labstore and cloudstore NFS operations, connections and various details | [2] |
| eqiad | Cloud VPS main services | icinga | service servers, non virts | [3] |
| codfw | Cloud VPS labtest servers | icinga | all physical servers | [4] |
| eqiad | Toolforge basic alerts | grafana | some interesting metrics from Toolforge | [5] |
| eqiad | ToolsDB (Toolforge R/W MariaDB) | grafana | Database metrics for ToolsDB servers | [6] |
| eqiad | Toolforge grid status | custom tool | jobs running on Toolforge's grid | [7] |
| any | cloud servers | icinga | all physical servers with the cloudXXXX naming scheme | [8] |
| eqiad | Cloud VPS eqiad1 capacity | grafana | capacity planning | [9] |
| eqiad | labstore1004/labstore1005 | grafana | load & general metrics | [10] |
| eqiad | Cloud VPS eqiad1 | grafana | load & general metrics | [11] |
| eqiad | Cloud VPS eqiad1 | grafana | internal openstack metrics | [12] |
| eqiad | Cloud VPS eqiad1 | grafana | hypervisor metrics from openstack | [13] |
| eqiad | Cloud VPS memcache | grafana | cloudservices servers | [14] |
| eqiad | openstack database backend (per host) | grafana | mariadb/galera on cloudcontrols | [15] |
| eqiad | openstack database backend (aggregated) | grafana | mariadb/galera on cloudcontrols | [16] |
| eqiad | Toolforge | grafana | Arturo's metrics | [17] |
| eqiad | Cloud HW eqiad | icinga | Icinga group for WMCS in eqiad | [18] |
| eqiad | Toolforge, new kubernetes cluster | prometheus/grafana | Generic dashboard for the new Kubernetes cluster | [19] |
| eqiad | Toolforge, new kubernetes cluster, namespaces | prometheus/grafana | Per-namespace dashboard for the new Kubernetes cluster | [20] |
| eqiad | Toolforge, new kubernetes cluster, ingress | prometheus/grafana | dashboard about the ingress for the new kubernetes cluster | [21] |
| eqiad | Toolforge | prometheus/grafana | dashboard showing a table with basic information about all VMs in the tools project | [22] |
| eqiad | Toolforge email server | prometheus/grafana | dashboard showing data about Toolforge exim email server | [23] |