CircleCI-Public/circleci-server-monitoring-reference
A reference for tools, configurations, and documentation used to monitor CircleCI server.
🚧 Under Development
This repository is currently under active development and is not yet a supported resource. Please refer to it at your own discretion until further notice.
A reference Helm chart for setting up a monitoring stack for CircleCI server
Repository | Name | Version
---|---|---
https://grafana.github.io/helm-charts | grafanaoperator (grafana-operator) | v5.18.0
https://prometheus-community.github.io/helm-charts | prometheusOperator (prometheus-operator-crds) | 19.0.0
To set up monitoring for a CircleCI server instance, you need to configure Telegraf to set up a Prometheus client and expose a metrics endpoint. Add the following configuration to the CircleCI server Helm chart values:
```yaml
telegraf:
  config:
    outputs:
      - file:
          files: ["stdout"]
      - prometheus_client:
          listen: ":9273"
```
First, add the CircleCI Server Monitoring Stack Helm repository:
```shell
$ helm repo add server-monitoring-stack https://packagecloud.io/circleci/server-monitoring-stack/helm
$ helm repo update
```
Before installing the full chart, you must first install the dependency subcharts and operators.
Install the Prometheus Custom Resource Definitions (CRDs) and the Grafana operator chart. This assumes you are installing it in the same namespace as your CircleCI server installation:
```shell
$ helm install server-monitoring-stack server-monitoring-stack/server-monitoring-stack \
    --set global.enabled=false \
    --set prometheusOperator.installCRDs=true \
    --version 0.1.0-alpha.8 \
    -n <your-server-namespace>
```
NOTE: It's possible to install the monitoring stack in a different namespace than the CircleCI server installation. If you do so, set the `prometheus.serviceMonitor.selectorNamespaces` value to the target namespace.
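For example, a minimal values snippet for a split-namespace setup might look like the following (the namespace name `circleci-server` is only an illustration):

```yaml
prometheus:
  serviceMonitor:
    # Namespaces in which Prometheus looks for the CircleCI server Telegraf ServiceMonitor
    selectorNamespaces:
      - circleci-server   # example; replace with the namespace of your server installation
```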
If you plan to enable distributed tracing with Tempo (`tempo.enabled=true`), you must manually install the Tempo Operator. There is currently no official Helm chart available for the Tempo Operator or its CRDs, so manual installation is required. The Tempo Operator also requires cert-manager to be installed in your cluster. Additionally, this reference chart requires the `grafanaOperator` feature gate to be enabled for proper integration with Grafana.

For more detailed installation instructions, refer to the official Tempo Operator documentation.
Prerequisites:
- cert-manager must be installed in your cluster
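If cert-manager is not already present, it can be installed from its release manifest; the version below is only an example, so check the cert-manager releases page for a current one:

```shell
# Example cert-manager installation (version shown is illustrative)
$ kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.17.1/cert-manager.yaml
```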
Example installation steps:
- Install the Tempo Operator:
  ```shell
  $ kubectl apply -f https://github.com/grafana/tempo-operator/releases/download/v0.17.0/tempo-operator.yaml
  ```
- Enable the `grafanaOperator` feature gate (required for integration with Grafana):
  ```shell
  # Flip the grafanaOperator feature gate in the operator config, preserving indentation
  $ kubectl get cm tempo-operator-manager-config -n tempo-operator-system -o yaml | \
      sed 's/^\( *\)grafanaOperator: false$/\1grafanaOperator: true/' | \
      kubectl apply -f -
  ```
- Restart the operator deployment to apply the configuration:
  ```shell
  $ kubectl rollout restart deployment/tempo-operator-controller -n tempo-operator-system
  $ kubectl wait --for=condition=available --timeout=120s deployment/tempo-operator-controller -n tempo-operator-system
  ```
Next, install the Helm chart using the following command:
```shell
$ helm upgrade --install server-monitoring-stack server-monitoring-stack/server-monitoring-stack \
    --reset-values \
    --version 0.1.0-alpha.8 \
    -n <your-server-namespace>
```
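As a quick sanity check after the upgrade (standard Helm and kubectl commands, not specific to this chart), confirm that the release deployed and its pods are running:

```shell
$ helm status server-monitoring-stack -n <your-server-namespace>
$ kubectl get pods -n <your-server-namespace>
```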
To verify that Prometheus is working correctly and targeting Telegraf, use the following command to port-forward Prometheus:
```shell
$ kubectl port-forward svc/server-monitoring-prometheus 9090:9090 -n <your-namespace-here>
```
Then visit http://localhost:9090/targets in your browser. Verify that Telegraf appears as a target and that its state is "up".
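If you prefer the command line, the same target information is available from the Prometheus HTTP API; this sketch assumes `jq` is installed locally:

```shell
# List scrape targets and their health via the Prometheus API
$ curl -s http://localhost:9090/api/v1/targets | \
    jq '.data.activeTargets[] | {job: .labels.job, health: .health}'
```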
To verify that Grafana is working correctly and connected to Prometheus, use the following command to port-forward Grafana:
```shell
$ kubectl port-forward svc/server-monitoring-grafana-service 3000:3000 -n <your-namespace-here>
```
Then visit http://localhost:3000 in your browser. Once logged in with the default credentials, navigate to http://localhost:3000/dashboards and verify that the default dashboards are present and populating with data.
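As a command-line alternative, Grafana's health endpoint should report the database as `ok` once the instance is ready:

```shell
$ curl -s http://localhost:3000/api/health
```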
After ensuring both Prometheus and Grafana are operational, consider these enhancements:
Secure Grafana by configuring credentials:
```yaml
grafana:
  credentials:
    # Directly set these for quick setups
    adminUser: "admin"
    adminPassword: "<your-secure-password-here>"
    # For production, use a Kubernetes secret to manage credentials securely
    existingSecretName: "<your-secret-here>"
```
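To pre-create such a secret, something like the following may work; the key names `admin-user` and `admin-password` are assumptions for illustration, so verify them against the chart's templates before relying on this:

```shell
# Hypothetical secret layout; confirm the expected keys in the chart templates
$ kubectl create secret generic <your-secret-here> \
    --from-literal=admin-user=admin \
    --from-literal=admin-password='<your-secure-password-here>' \
    -n <your-server-namespace>
```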
For external access, modify the service or ingress values. For example:
```yaml
grafana:
  service:
    type: LoadBalancer
```
Persist data by enabling storage for Prometheus and Grafana:
```yaml
prometheus:
  persistence:
    enabled: true
    storageClass: <your-custom-storage-class>
grafana:
  persistence:
    enabled: true
    storageClass: <your-custom-storage-class>
```
NOTE: Use a custom storage class with a 'Retain' policy to allow for data retention even after uninstalling the chart.
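As an illustration, a StorageClass with a Retain reclaim policy could look like this; the provisioner shown is the AWS EBS CSI driver and is only an example, so substitute whichever provisioner your cluster uses:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <your-custom-storage-class>   # referenced by the storageClass values above
provisioner: ebs.csi.aws.com          # example provisioner; use the one available in your cluster
reclaimPolicy: Retain                 # keep the underlying volume after the PVC is deleted
volumeBindingMode: WaitForFirstConsumer
```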
When Tempo is enabled, it's recommended to use object storage instead of in-memory storage for trace persistence. Compatible storage backends for Tempo and CircleCI server include S3, GCS, and MinIO.
Configure object storage using the `tempo.storage` values detailed in the values section below.
NOTE: For production deployments, object storage provides better durability and scalability compared to in-memory storage, which loses traces on pod restarts.
For detailed configuration options, consult the official Tempo documentation.
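As a rough sketch, pointing Tempo at an object storage backend rather than memory might look like the following; only the `tempo.storage.traces` keys documented in the values table below are shown, and the `s3` value plus any bucket or credential configuration should be taken from the Tempo Operator's storage documentation rather than from this example:

```yaml
tempo:
  storage:
    traces:
      backend: s3            # assumed value; "memory" is the chart default
      size: 20Gi             # for cloud backends this sizes the WAL volume
      storageClassName: ""   # optional custom storage class for the volume
```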
The default dashboards are located in the `dashboards` directory of the reference chart. To add new dashboards or modify existing ones, follow these steps.
Dashboards are provisioned directly from CRDs, which means any manual edits will be lost upon a refresh. As such, the workflow outlined below is recommended for making changes:
- Create a Copy:
  - Select Edit in the upper right corner.
  - Choose Save dashboard -> Save as copy.
  - After saving, navigate to the copy.
- Make Edits:
  - Modify the copy as needed and exit edit mode.
- Export as JSON:
  - Select Export in the upper right corner and then Export as JSON.
  - Ensure that Export the dashboard to use in another instance is toggled on.
- Update the JSON File:
  - Download the file and replace the `./dashboards/server-slis.json` file with the updated copy.
  - Run the following command to automatically validate the JSON and apply necessary updates:

    ```shell
    ./do validate-dashboards
    ```
- Commit and Open a PR:
  - Review and commit the changes.
  - Open a pull request for the On-Prem team to review.
Key | Type | Default | Description |
---|---|---|---|
global.enabled | bool | true | |
global.fullnameOverride | string | "server-monitoring" | Override the full name for resources |
global.imagePullSecrets | list | [] | List of image pull secrets to be used across the deployment |
global.nameOverride | string | "" | Override the release name |
grafana.credentials.adminPassword | string | "admin" | Grafana admin password. Change from default for production environments. |
grafana.credentials.adminUser | string | "admin" | Grafana admin username. |
grafana.credentials.existingSecretName | string | "" | Name of an existing secret for Grafana credentials. Leave empty to create a new secret. |
grafana.customConfig | string | "" | Add any custom Grafana configurations you require here. This should be a YAML-formatted string of additional settings for Grafana. |
grafana.dashboards.jsonDirectory | string | "dashboards" | The directory containing JSON files for Grafana dashboards. |
grafana.datasource.jsonData.timeInterval | string | "5s" | The time interval for Grafana to poll Prometheus. Specifies the frequency of data requests. |
grafana.enabled | string | "-" | |
grafana.image.repository | string | "grafana/grafana" | Image repository for Grafana. |
grafana.image.tag | string | "12.0.0-security-01" | Tag for the Grafana image. |
grafana.ingress.className | string | "" | Specifies the class of the Ingress controller. Required if the Kubernetes cluster includes multiple Ingress controllers. |
grafana.ingress.enabled | bool | false | Enable to create an Ingress resource for Grafana. Disabled by default. |
grafana.ingress.host | string | "" | Hostname to use for the Ingress. Must be set if Ingress is enabled. |
grafana.ingress.tls.enabled | bool | false | Enable TLS for Ingress. Requires a TLS secret to be specified. |
grafana.ingress.tls.secretName | string | "" | Name of the TLS secret used for securing the Ingress. Must be provided if TLS is enabled. |
grafana.persistence.accessModes | list | ["ReadWriteOnce"] | Access modes for the persistent volume. |
grafana.persistence.enabled | bool | false | Enable persistent storage for Grafana. |
grafana.persistence.size | string | "10Gi" | Size of the persistent volume claim. |
grafana.persistence.storageClass | string | "" | Storage class for persistent volume provisioner. You can create a custom storage class with a "retain" policy to ensure the persistent volume remains even after the chart is uninstalled. |
grafana.replicas | int | 1 | Number of Grafana replicas to deploy. |
grafana.service.annotations | object | {} | Metadata annotations for the service. |
grafana.service.port | int | 3000 | Port on which the Grafana service will be exposed. |
grafana.service.type | string | "ClusterIP" | Specifies the type of service for Grafana. Options include ClusterIP, NodePort, or LoadBalancer. Use NodePort or LoadBalancer to expose Grafana externally. Ensure that grafana.credentials are set for security purposes. |
grafanaoperator | object | {"fullnameOverride":"server-monitoring-grafana-operator","image":{"repository":"quay.io/grafana-operator/grafana-operator","tag":"v5.18.0"}} | Full values for the Grafana Operator chart can be obtained at: https://github.com/grafana/grafana-operator/blob/master/deploy/helm/grafana-operator/values.yaml |
grafanaoperator.fullnameOverride | string | "server-monitoring-grafana-operator" | Overrides the fully qualified app name. |
grafanaoperator.image.repository | string | "quay.io/grafana-operator/grafana-operator" | Image repository for the Grafana Operator. |
grafanaoperator.image.tag | string | "v5.18.0" | Tag for the Grafana Operator image. |
prometheus.enabled | string | "-" | |
prometheus.image.repository | string | "quay.io/prometheus/prometheus" | Image repository for Prometheus. |
prometheus.image.tag | string | "v3.2.1" | Tag for the Prometheus image. |
prometheus.persistence.accessModes | list | ["ReadWriteOnce"] | Access modes for the persistent volume. |
prometheus.persistence.enabled | bool | false | Enable persistent storage for Prometheus. |
prometheus.persistence.size | string | "10Gi" | Size of the persistent volume claim. |
prometheus.persistence.storageClass | string | "" | Storage class for persistent volume provisioner. You can create a custom storage class with a "retain" policy to ensure the persistent volume remains even after the chart is uninstalled. |
prometheus.replicas | int | 2 | Number of Prometheus replicas to deploy. |
prometheus.serviceMonitor.endpoints[0].metricRelabelings[0].action | string | "labeldrop" | |
prometheus.serviceMonitor.endpoints[0].metricRelabelings[0].regex | string | "instance" | |
prometheus.serviceMonitor.endpoints[0].port | string | "prometheus-client" | Port name for the Prometheus client service. |
prometheus.serviceMonitor.endpoints[0].relabelings[0].action | string | "labeldrop" | |
prometheus.serviceMonitor.endpoints[0].relabelings[0].regex | string | `"(container | endpoint |
prometheus.serviceMonitor.selectorLabels | object | {"app.kubernetes.io/instance":"circleci-server","app.kubernetes.io/name":"telegraf"} | Labels to select ServiceMonitors for scraping metrics. By default, it's configured to scrape the existing Telegraf deployment in CircleCI server. |
prometheus.serviceMonitor.selectorNamespaces | list | [] | Namespaces to look for ServiceMonitor objects. Set this if the CircleCI server monitoring stack is deployed in a different namespace than the actual CircleCI server installation. |
prometheusOperator.crds.annotations."helm.sh/resource-policy" | string | "keep" | |
prometheusOperator.enabled | string | "-" | |
prometheusOperator.image.repository | string | "quay.io/prometheus-operator/prometheus-operator" | Image repository for Prometheus Operator. |
prometheusOperator.image.tag | string | "v0.81.0" | Tag for the Prometheus Operator image. |
prometheusOperator.installCRDs | bool | false | |
prometheusOperator.prometheusConfigReloader.image.repository | string | "quay.io/prometheus-operator/prometheus-config-reloader" | Image repository for Prometheus Config Reloader. |
prometheusOperator.prometheusConfigReloader.image.tag | string | "v0.81.0" | Tag for the Prometheus Config Reloader image. |
prometheusOperator.replicas | int | 1 | Number of Prometheus Operator replicas to deploy. |
tempo.customConfig | object | {} | Add any custom Tempo configurations you require here. This should be a YAML object of additional settings for Tempo. |
tempo.enabled | string | "-" | Enable Tempo distributed tracing. Requires manual installation of the Tempo Operator. Set to true to enable, false to disable, "-" to use the global default. |
tempo.podSecurityContext | object | {"fsGroup":10001,"runAsGroup":10001,"runAsNonRoot":true,"runAsUser":10001} | Pod security context for Tempo containers |
tempo.podSecurityContext.fsGroup | int | 10001 | Filesystem group ID for volume ownership and permissions |
tempo.podSecurityContext.runAsGroup | int | 10001 | Group ID to run the container processes |
tempo.podSecurityContext.runAsNonRoot | bool | true | Run containers as non-root user |
tempo.podSecurityContext.runAsUser | int | 10001 | User ID to run the container processes |
tempo.resources | object | {"limits":{"cpu":"1000m","memory":"2Gi"},"requests":{"cpu":"500m","memory":"1Gi"}} | Resource requirements for Tempo pods. Adjust based on your trace volume and cluster capacity. |
tempo.resources.limits.cpu | string | "1000m" | Maximum CPU Tempo pods can use |
tempo.resources.limits.memory | string | "2Gi" | Maximum memory Tempo pods can use |
tempo.resources.requests.cpu | string | "500m" | Minimum CPU guaranteed to Tempo pods |
tempo.resources.requests.memory | string | "1Gi" | Minimum memory guaranteed to Tempo pods |
tempo.storage | object | {"traces":{"backend":"memory","size":"20Gi","storageClassName":""}} | Storage configuration for trace data |
tempo.storage.traces.backend | string | "memory" | Storage backend for traces. Default: in-memory storage (traces lost on pod restart), suitable for development/testing environments only. |
tempo.storage.traces.size | string | "20Gi" | Storage volume size. For memory/pv backends: the actual volume size. For cloud backends: the size of the WAL (Write-Ahead Log) volume. Increase for higher trace volumes or longer retention. |
tempo.storage.traces.storageClassName | string | "" | Storage class for persistent volume provisioner. Applies to both persistent volume and object storage backends. |
Releases are managed by the CI/CD pipeline on the main branch, with an approval job gate called `approve-deploy-chart`. Before releasing, increment the Helm chart version in `Chart.yaml` and regenerate the documentation using `./do helm-docs`. Once approved, the release will be available in the package repository.
This monitoring reference is not part of CircleCI’s Server product. CircleCI provides it as a monitoring tooling and configuration repository for CircleCI Server User(s) that may be referred to when the User(s) plan and deploy their own monitoring implementations.
CircleCI strives to ensure that the monitoring tooling and configurations in this reference are functional and up to date. While CircleCI may provide reference to, answer questions regarding, and/or review contributions to the monitoring tooling and configurations, CircleCI does not make any judgment or recommendation as to the suitability for any customer installation of them with CircleCI Server, nor provide support for their installation and/or management in any customer’s system.
This monitoring reference and the monitoring tooling and configurations are provided on an ‘as-is’ and ‘as available’ basis without any warranties of any kind. CircleCI disclaims all warranties, express or implied, including, but not limited to, all implied warranties of merchantability, title, fitness for a particular purpose, and noninfringement.