added section for hpa#229

Merged

kostis-codefresh merged 2 commits into master from hpa-documentation on Feb 17, 2021
284 changes: 282 additions & 2 deletions in `_docs/administration/codefresh-on-prem.md`

@@ -722,7 +722,287 @@

```
consul:
  enabled: false
```

## App Cluster Autoscaling

Autoscaling in Kubernetes is implemented as an interaction between the Cluster Autoscaler and the Horizontal Pod Autoscaler:

{: .table .table-bordered .table-hover}
| | Scaling Target | Trigger | Controller | How it Works |
| ----------- | ------------- | ------- | --------- | --------- |
| [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) | Nodes | **Up:** Pending pods <br/> **Down:** Node resource allocation is low | On GKE the autoscaler can be turned on/off and configured with min/max per node group; it can also be installed separately | Listens for pending pods for scale-up and for low node allocations for scale-down. Must have permissions to call the cloud API. Considers pod affinity, PDBs, storage, and special annotations |
| [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) | Replicas of Deployments or StatefulSets | Metric value thresholds defined in the HPA object | Part of the Kubernetes controller manager | The controller gets metrics from "metrics.k8s.io/v1beta1", "custom.metrics.k8s.io/v1beta1", and "external.metrics.k8s.io/v1beta1". This requires [metrics-server](https://github.com/kubernetes-sigs/metrics-server) and custom metrics adapters ([prometheus-adapter](https://github.com/kubernetes-sigs/prometheus-adapter), [stackdriver-adapter](https://github.com/GoogleCloudPlatform/k8s-stackdriver/tree/master/custom-metrics-stackdriver-adapter)) to serve this API (see note (1) below). The controller adjusts Deployment or StatefulSet replicas according to the definitions in the HorizontalPodAutoscaler object. <br/> There are v1 and beta API versions of HorizontalPodAutoscaler: <br/> [v1](https://github.com/kubernetes/api/blob/master/autoscaling/v1/types.go) - supports resource metrics (cpu, memory) - `kubectl get hpa` <br/> [v2beta2](https://github.com/kubernetes/api/blob/master/autoscaling/v2beta2/types.go) and [v2beta1](https://github.com/kubernetes/api/blob/master/autoscaling/v2beta1/types.go) - support both resource and custom metrics - `kubectl get hpa.v2beta2.autoscaling` <br/> **The metric value should decrease on adding new pods.** <br/> *Wrong metric example:* request rate <br/> *Right metric example:* average request rate per pod |
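The "right vs. wrong metric" remark in the table follows from the standard HPA scaling rule, `desiredReplicas = ceil(currentReplicas * currentMetricValue / targetValue)`. A minimal sketch (plain Python with illustrative numbers, not Codefresh code) shows why a per-pod averaged metric converges while a raw cluster-wide total does not:

```python
import math

def desired_replicas(current_replicas: int, current_value: float, target: float) -> int:
    # Standard HPA rule: scale proportionally to how far the metric is from its target.
    return math.ceil(current_replicas * current_value / target)

# Per-pod metric (e.g. average request rate per pod): adding pods lowers the value.
total_rps = 60.0
replicas = 2
per_pod = total_rps / replicas                      # 30 req/s per pod, target is 10
replicas = desired_replicas(replicas, per_pod, 10)  # scales to 6 replicas
per_pod = total_rps / replicas                      # now 10 req/s per pod: stable

# Raw total metric (e.g. cluster-wide request rate): adding pods does NOT lower it,
# so the HPA keeps scaling up until maxReplicas -- the "wrong metric" case above.
```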

Note (1)
```
kubectl get apiservices | awk 'NR==1 || $1 ~ "metrics"'
NAME SERVICE AVAILABLE AGE
v1beta1.custom.metrics.k8s.io monitoring/prom-adapter-prometheus-adapter True 60d
v1beta1.metrics.k8s.io kube-system/metrics-server True 84d
```


**Implementation in Codefresh**

* Default “Enable Autoscaling” settings for GKE
* Using [prometheus-adapter](https://github.com/kubernetes-sigs/prometheus-adapter) with custom metrics

We define HPAs for the cfapi and pipeline-manager services.

**CFapi HPA object**

It's based on three metrics (the HPA controller scales up if any one of the target values is reached):

```
kubectl get hpa.v2beta1.autoscaling cf-cfapi -oyaml
```

{% highlight yaml %}
{% raw %}
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  annotations:
    meta.helm.sh/release-name: cf
    meta.helm.sh/release-namespace: default
  labels:
    app.kubernetes.io/managed-by: Helm
  name: cf-cfapi
  namespace: default
spec:
  maxReplicas: 16
  metrics:
  - type: Object
    object:
      metricName: requests_per_pod
      target:
        apiVersion: v1
        kind: Service
        name: cf-cfapi
      targetValue: "10"
  - type: Object
    object:
      metricName: cpu_usage_avg
      target:
        apiVersion: apps/v1
        kind: Deployment
        name: cf-cfapi-base
      targetValue: "1"
  - type: Object
    object:
      metricName: memory_working_set_bytes_avg
      target:
        apiVersion: apps/v1
        kind: Deployment
        name: cf-cfapi-base
      targetValue: 3G
  minReplicas: 2
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: cf-cfapi-base
{% endraw %}
{% endhighlight %}

* `requests_per_pod` is based on the `rate(nginx_ingress_controller_requests)` metric ingested from the nginx-ingress-controller
* `cpu_usage_avg` is based on the cadvisor (from kubelet) rate `rate(container_cpu_user_seconds_total)`
* `memory_working_set_bytes_avg` is based on cadvisor's `container_memory_working_set_bytes`

**pipeline-manager HPA**

It is based on `cpu_usage_avg`:

{% highlight yaml %}
{% raw %}
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  annotations:
    meta.helm.sh/release-name: cf
    meta.helm.sh/release-namespace: default
  labels:
    app.kubernetes.io/managed-by: Helm
  name: cf-pipeline-manager
spec:
  maxReplicas: 8
  metrics:
  - type: Object
    object:
      metricName: cpu_usage_avg
      target:
        apiVersion: apps/v1
        kind: Deployment
        name: cf-pipeline-manager-base
      targetValue: 400m
  minReplicas: 2
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: cf-pipeline-manager-base
{% endraw %}
{% endhighlight %}
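The target values in these manifests mix Kubernetes quantity notations: `400m` is 0.4 CPU cores (milli-units) and `3G` is 3&times;10&sup9; bytes. A small helper (plain Python, not part of Codefresh; a simplified subset of the Kubernetes quantity grammar) makes the conversions explicit:

```python
# Simplified parser for the Kubernetes quantity suffixes used on this page.
# Handles milli-units ("m"), decimal SI ("k", "M", "G"), and binary ("Ki", "Mi", "Gi").
SUFFIXES = {
    "m": 1e-3,
    "k": 1e3, "M": 1e6, "G": 1e9,
    "Ki": 2**10, "Mi": 2**20, "Gi": 2**30,
}

def parse_quantity(q: str) -> float:
    # Try two-character suffixes ("Gi") before one-character ones ("G").
    for suffix in sorted(SUFFIXES, key=len, reverse=True):
        if q.endswith(suffix):
            return float(q[: -len(suffix)]) * SUFFIXES[suffix]
    return float(q)

print(parse_quantity("400m"))    # 0.4 cores
print(parse_quantity("3G"))      # 3000000000.0 bytes
print(parse_quantity("4096Mi"))  # 4294967296.0 bytes
```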

**prometheus-adapter configuration**

Reference: [https://github.com/DirectXMan12/k8s-prometheus-adapter/blob/master/docs/config.md](https://github.com/DirectXMan12/k8s-prometheus-adapter/blob/master/docs/config.md)

{% highlight yaml %}
{% raw %}
rules:
- metricsQuery: |
    kube_service_info{<<.LabelMatchers>>} * on() group_right(service)
    (sum(rate(nginx_ingress_controller_requests{<<.LabelMatchers>>}[2m]))
    / on() kube_deployment_spec_replicas{deployment='<<index .LabelValuesByName "service">>-base',namespace='<<index .LabelValuesByName "namespace">>'})
  name:
    as: requests_per_pod
    matches: ^(.*)$
  resources:
    overrides:
      namespace:
        resource: namespace
      service:
        resource: service
  seriesQuery: kube_service_info{service=~".*cfapi.*"}
- metricsQuery: |
    kube_deployment_labels{<<.LabelMatchers>>} * on(label_app) group_right(deployment)
    (label_replace(
    avg by (container) (rate(container_cpu_user_seconds_total{container=~"cf-(tasker-kubernetes|cfapi.*|pipeline-manager.*)", job="kubelet", namespace='<<index .LabelValuesByName "namespace">>'}[15m]))
    , "label_app", "$1", "container", "(.*)"))
  name:
    as: cpu_usage_avg
    matches: ^(.*)$
  resources:
    overrides:
      deployment:
        group: apps
        resource: deployment
      namespace:
        resource: namespace
  seriesQuery: kube_deployment_labels{label_app=~"cf-(tasker-kubernetes|cfapi.*|pipeline-manager.*)"}
- metricsQuery: |
    kube_deployment_labels{<<.LabelMatchers>>} * on(label_app) group_right(deployment)
    (label_replace(
    avg by (container) (avg_over_time (container_memory_working_set_bytes{container=~"cf-.*", job="kubelet", namespace='<<index .LabelValuesByName "namespace">>'}[15m]))
    , "label_app", "$1", "container", "(.*)"))
  name:
    as: memory_working_set_bytes_avg
    matches: ^(.*)$
  resources:
    overrides:
      deployment:
        group: apps
        resource: deployment
      namespace:
        resource: namespace
  seriesQuery: kube_deployment_labels{label_app=~"cf-.*"}
- metricsQuery: |
    kube_deployment_labels{<<.LabelMatchers>>} * on(label_app) group_right(deployment)
    label_replace(label_replace(avg_over_time(newrelic_apdex_score[15m]), "label_app", "cf-$1", "exported_app", '(cf-api.*|pipeline-manager|tasker-kuberentes)\\[kubernetes\\]'), "label_app", "$1cfapi$3", "label_app", '(cf-)(cf-api)(.*)')
  name:
    as: newrelic_apdex
    matches: ^(.*)$
  resources:
    overrides:
      deployment:
        group: apps
        resource: deployment
      namespace:
        resource: namespace
  seriesQuery: kube_deployment_labels{label_app=~"cf-(tasker-kubernetes|cfapi.*|pipeline-manager)"}
{% endraw %}
{% endhighlight %}

**How to define HPA in Codefresh installer (kcfi) config**

Most of the Codefresh microservice subcharts contain a `templates/hpa.yaml`:

{% highlight yaml %}
{% raw %}
{{- if .Values.HorizontalPodAutoscaler }}
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: {{ template "cfapi.fullname" . }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ template "cfapi.fullname" . }}-{{ .version | default "base" }}
  minReplicas: {{ coalesce .Values.HorizontalPodAutoscaler.minReplicas .Values.replicaCount 1 }}
  maxReplicas: {{ coalesce .Values.HorizontalPodAutoscaler.maxReplicas .Values.replicaCount 2 }}
  metrics:
{{- if .Values.HorizontalPodAutoscaler.metrics }}
{{ toYaml .Values.HorizontalPodAutoscaler.metrics | indent 4 }}
{{- else }}
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 60
{{- end }}
{{- end }}
{% endraw %}
{% endhighlight %}

To configure the HPA for cfapi, add `HorizontalPodAutoscaler` values to `config.yaml` (this assumes that the prometheus-adapter is already configured for the `requests_per_pod`, `cpu_usage_avg`, and `memory_working_set_bytes_avg` metrics). For example:

{% highlight yaml %}
{% raw %}
cfapi:
  replicaCount: 4
  resources:
    requests:
      memory: "4096Mi"
      cpu: "1100m"
    limits:
      memory: "4096Mi"
      cpu: "2200m"
  HorizontalPodAutoscaler:
    minReplicas: 2
    maxReplicas: 16
    metrics:
    - type: Object
      object:
        metricName: requests_per_pod
        target:
          apiVersion: "v1"
          kind: Service
          name: cf-cfapi
        targetValue: 10
    - type: Object
      object:
        metricName: cpu_usage_avg
        target:
          apiVersion: "apps/v1"
          kind: Deployment
          name: cf-cfapi-base
        targetValue: 1
    - type: Object
      object:
        metricName: memory_working_set_bytes_avg
        target:
          apiVersion: "apps/v1"
          kind: Deployment
          name: cf-cfapi-base
        targetValue: 3G
{% endraw %}
{% endhighlight %}

**Querying metrics (for debugging)**

CPU Metric API Call

```
kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/codefresh/pods/cf-cfapi-base-****-/ | jq
```

Custom Metrics Call

```
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1/namespaces/codefresh/services/cf-cfapi/requests_per_pod | jq
```
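The custom-metrics call returns a `MetricValueList` object. A short sketch (plain Python; the JSON below is a hypothetical response shaped like the custom.metrics.k8s.io/v1beta1 API, not captured from a real cluster) of pulling the metric value out of such a payload:

```python
import json

# Hypothetical response in the shape of custom.metrics.k8s.io/v1beta1 MetricValueList;
# a real payload comes from the `kubectl get --raw ... | jq` call above.
payload = """
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "items": [
    {
      "describedObject": {"kind": "Service", "namespace": "codefresh", "name": "cf-cfapi"},
      "metricName": "requests_per_pod",
      "value": "7"
    }
  ]
}
"""

doc = json.loads(payload)
for item in doc["items"]:
    # "value" is a Kubernetes quantity string; here it is a plain integer.
    print(item["metricName"], "=", item["value"])  # requests_per_pod = 7
```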

## Upgrade the Codefresh Platform

To upgrade Codefresh to a newer version

@@ -741,7 +1021,7 @@ Notice that only `kcfi` should be used for Codefresh upgrades. If you still have

#### Mongo

All services using MongoDB depend on the `mongo` pod being up and running. If the `mongo` pod is down, the following dependencies will not work:

- `runtime-environment-manager`
- `pipeline-manager`
2 changes: 2 additions & 0 deletions in `_docs/whats-new/whats-new.md`

@@ -18,9 +18,11 @@ toc: true

### February 2021


- Concurrency behavior for pending builds - [documentation]({{site.baseurl}}/docs/codefresh-yaml/steps/approval/#define-concurrency-limits)
- Jira integration - [documentation]({{site.baseurl}}/docs/yaml-examples/examples/sending-the-notification-to-jira/)
- SLA details - [documentation]({{site.baseurl}}/docs/terms-and-privacy-policy/sla/)
- Autoscaling recommendations for Codefresh on-prem - [documentation]({{site.baseurl}}/docs/administration/codefresh-on-prem/#app-cluster-autoscaling)
- Hide inaccessible clusters in the Kubernetes dashboard - [documentation]({{site.baseurl}}/docs/deploy-to-kubernetes/manage-kubernetes/#accessing-the-kubernetes-dashboard)
- Define access for non-admins in Helm repositories and shared config - [documentation]({{site.baseurl}}/docs/configure-ci-cd-pipeline/shared-configuration/#level-of-access)
- Okta auto-sync of teams - [documentation]({{site.baseurl}}/docs/administration/single-sign-on/sso-okta/#syncing-of-teams-after-initial-sso-setup)