`_docs/administration/codefresh-runner.md` (+36 −35)
@@ -170,7 +170,7 @@ Installing the Codefresh Runner with Helm requires you to first create a `genera

1. Optional. If the Kubernetes cluster with the Codefresh Runner is behind a proxy, continue with [Complete Codefresh Runner installation](#complete-codefresh-runner-installation).

<!--- what is this -->
- For reference, have a look at the repository with the chart: [https://github.com/codefresh-io/venona/tree/release-1.0/.deploy/cf-runtime](https://github.com/codefresh-io/venona/tree/release-1.0/.deploy/cf-runtime).
+ For reference, have a look at the repository with the chart: [https://github.com/codefresh-io/venona/tree/release-1.0/.deploy/cf-runtime](https://github.com/codefresh-io/venona/tree/release-1.0/.deploy/cf-runtime){:target="\_blank"}.

```shell
@@ -235,7 +235,7 @@ After installation, configure the Kubernetes cluster with the Codefresh Runner t

### AWS backend volume configuration

- For Codefresh Runners on [EKS](https://aws.amazon.com/eks/){:target=\_blank"} or any other custom cluster in Amazon, such as kops for example, configure the Runner to work with EBS volumes to support [caching]({{site.baseurl}}/docs/configure-ci-cd-pipeline/pipeline-caching/) during pipeline execution.
+ For Codefresh Runners on [EKS](https://aws.amazon.com/eks/){:target="\_blank"} or any other custom cluster in Amazon, such as kops for example, configure the Runner to work with EBS volumes to support [caching]({{site.baseurl}}/docs/configure-ci-cd-pipeline/pipeline-caching/) during pipeline execution.

> The configuration assumes that you have installed the Runner with the default options: `codefresh runner init`
@@ -246,11 +246,11 @@ The `dind-volume-provisioner` deployment should have permissions to create/attac

There are three options for this:

1. Run `dind-volume-provisioner` pod on the node/node-group with IAM role
- 1. Mount K8s secret in [AWS credential format](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html):
+ 1. Mount K8s secret in [AWS credential format](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html){:target="\_blank"}:
   To `~/.aws/credentials`
   OR
   By passing the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` as environment variables to the `dind-volume-provisioner` pod
- 1. Use [AWS identity for Service Account](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) IAM role assigned to `volume-provisioner-runner` service account
+ 1. Use [AWS identity for Service Account](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html){:target="\_blank"} IAM role assigned to `volume-provisioner-runner` service account
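
For illustration, the second option above can be wired up with plain `kubectl`. A minimal sketch, assuming the Runner lives in the `codefresh` namespace and using a hypothetical secret name (`aws-credentials`); neither name comes from the docs:

```shell
# Sketch only: create a Secret in AWS credential-file format for the
# dind-volume-provisioner pod to mount (secret name and namespace are
# placeholders).
kubectl create secret generic aws-credentials \
  --namespace codefresh \
  --from-file=credentials="$HOME/.aws/credentials"

# Alternative: provide the same keys as environment variables instead,
# to be referenced from the dind-volume-provisioner pod spec.
kubectl create secret generic aws-credentials \
  --namespace codefresh \
  --from-literal=AWS_ACCESS_KEY_ID="<key-id>" \
  --from-literal=AWS_SECRET_ACCESS_KEY="<secret-key>"
```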
Configure the Codefresh Runner to use local SSDs for your pipeline volumes:

- [How-to: Configuring an existing Runtime Environment with Local SSDs (GKE only)](https://support.codefresh.io/hc/en-us/articles/360016652920-How-to-Configuring-an-existing-Runtime-Environment-with-Local-SSDs-GKE-only-)
+ [How-to: Configuring an existing Runtime Environment with Local SSDs (GKE only)](https://support.codefresh.io/hc/en-us/articles/360016652920-How-to-Configuring-an-existing-Runtime-Environment-with-Local-SSDs-GKE-only-){:target="\_blank"}
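
Before following that how-to, it can help to confirm which nodes actually expose local SSDs. A sketch, assuming the label GKE is expected to set on local-SSD node pools (treat the label as an assumption):

```shell
# Sketch: list the nodes that advertise local SSDs (label assumed).
kubectl get nodes -l cloud.google.com/gke-local-ssd=true
```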
<br />
@@ -461,12 +463,12 @@ There are three options to provide cloud credentials:

1. Run `dind-volume-provisioner-runner` pod on a node with an IAM role which can create/delete/get GCE disks
1. Create Google Service Account with `ComputeEngine.StorageAdmin` role, download its key in JSON format, and pass it to `codefresh runner init` with `--set-file=Storage.GooogleServiceAccount=/path/to/google-service-account.json`
- 1. Use [Google Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity) to assign IAM role to `volume-provisioner-runner` service account
+ 1. Use [Google Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity){:target="\_blank"} to assign IAM role to `volume-provisioner-runner` service account
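
The second option above can be scripted with `gcloud`. A sketch, assuming `$PROJECT` is set and using a hypothetical service-account name; only the `--set-file` flag is quoted from the list above:

```shell
# Sketch: create a service account that can manage GCE disks, export its
# key in JSON format, and hand the key to the Runner installer.
gcloud iam service-accounts create cf-volume-provisioner --project "$PROJECT"

gcloud projects add-iam-policy-binding "$PROJECT" \
  --member "serviceAccount:cf-volume-provisioner@${PROJECT}.iam.gserviceaccount.com" \
  --role "roles/compute.storageAdmin"

gcloud iam service-accounts keys create google-service-account.json \
  --iam-account "cf-volume-provisioner@${PROJECT}.iam.gserviceaccount.com"

codefresh runner init \
  --set-file=Storage.GooogleServiceAccount=./google-service-account.json
```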
Notice that builds run in a single Availability Zone (AZ), so you must specify Availability Zone parameters.

**Configuration**
- [How-to: Configuring an existing Runtime Environment with GCE disks](https://support.codefresh.io/hc/en-us/articles/360016652900-How-to-Configuring-an-existing-Runtime-Environment-with-GCE-disks)
+ [How-to: Configuring an existing Runtime Environment with GCE disks](https://support.codefresh.io/hc/en-us/articles/360016652900-How-to-Configuring-an-existing-Runtime-Environment-with-GCE-disks){:target="\_blank"}

<br />
@@ -531,7 +533,7 @@ The Codefresh Runner does not currently support [Regional Persistent Disks](http

You can configure your Codefresh Runner to use an internal registry as a mirror for any container images that are specified in your pipelines.

- 1. Set up an internal registry as described in [https://docs.docker.com/registry/recipes/mirror/](https://docs.docker.com/registry/recipes/mirror/).
+ 1. Set up an internal registry as described in [https://docs.docker.com/registry/recipes/mirror/](https://docs.docker.com/registry/recipes/mirror/){:target="\_blank"}.
1. Locate the `codefresh-dind-config` config map in the namespace that houses the Runner.
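
In practice, the second step leads to editing the Docker daemon configuration that this config map carries. A hedged sketch of what that looks like (the namespace and mirror URL are placeholders):

```shell
# Sketch: open the config map holding the dind daemon.json and add a
# registry-mirrors entry pointing at the internal registry.
kubectl edit configmap codefresh-dind-config -n codefresh
# Inside the embedded daemon.json, add something like:
#   "registry-mirrors": ["https://registry-mirror.example.internal"]
```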
@@ -682,22 +684,22 @@ Node size and count depends entirely on how many pipelines you want to be “ready

For the storage options needed by the `dind` pod, we suggest:

* [Local Volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local){:target="\_blank"} `/var/lib/codefresh/dind-volumes` on the K8S nodes filesystem (**default**)
- * [EBS](https://aws.amazon.com/ebs/){:target="\_blank"} in the case of AWS. See also the [notes](#installing-on-aws) about getting caching working.
- * [Local SSD](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/local-ssd){:target="\_blank"} or [GCE Disks](https://cloud.google.com/compute/docs/disks#pdspecs){:target="\_blank"} in the case of GCP. See [notes](#installing-on-google-kubernetes-engine) about configuration.
+ * [EBS](https://aws.amazon.com/ebs/){:target="\_blank"} in the case of AWS. See also the [notes](#aws-backend-volume-configuration) about getting caching working.
+ * [Local SSD](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/local-ssd){:target="\_blank"} or [GCE Disks](https://cloud.google.com/compute/docs/disks#pdspecs){:target="\_blank"} in the case of GCP. See [notes](#gke-google-kubernetes-engine-backend-volume-configuration) about configuration.

**Networking Requirements**

* `dind`: Pod creates an internal network in the cluster to run all the pipeline steps; needs outgoing/egress access to Docker Hub and `quay.io`.
- * `runner`: Pod needs outgoing/egress access to `g.codefresh.io`; needs network access to [app-proxy]({{site.baseurl}}/docs/administration/codefresh-runner/#optional-installation-of-the-app-proxy) if installed.
+ * `runner`: Pod needs outgoing/egress access to `g.codefresh.io`; needs network access to [app-proxy](#app-proxy-installation) if installed.
* `engine`: Pod needs outgoing/egress access to `g.codefresh.io`, `*.firebaseio.com` and `quay.io`; needs network access to `dind` pod

All CNI providers/plugins are compatible with the runner components.

## Monitoring disk space in Codefresh Runner

Codefresh pipelines require disk space for:
- * [Pipeline Shared Volume]({{site.baseurl}}/docs/example-catalog/ci-examples/shared-volumes-between-builds/) (`/codefresh/volume`, implemented as [docker volume](https://docs.docker.com/storage/volumes/){:target="\_blank"})
+ * [Pipeline Shared Volume]({{site.baseurl}}/docs/yaml-examples/examples/shared-volumes-between-builds/) (`/codefresh/volume`, implemented as [docker volume](https://docs.docker.com/storage/volumes/){:target="\_blank"})
* Docker containers, both running and stopped
* Docker images and cached layers
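
To see how much of that space a live build is consuming, one option is to check usage inside the `dind` pod directly. A sketch (the namespace and pod name are placeholders):

```shell
# Sketch: report disk usage of the docker root inside a running dind pod.
kubectl exec -n codefresh <dind-pod-name> -- df -h /var/lib/docker
```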
@@ -754,7 +756,7 @@ dockerDaemonScheduler:

**Where it runs:** On Runtime Cluster as CronJob
(`kubectl get cronjobs -n codefresh -l app=dind-volume-cleanup`). Installed in case the Runner uses non-local volumes (`Storage.Backend != local`)

- **Triggered by:** CronJob every 10min (configurable), part of [runtime-cluster-monitor](https://github.com/codefresh-io/runtime-cluster-monitor/blob/master/charts/cf-monitoring/templates/dind-volume-cleanup.yaml) and runner deployment
+ **Triggered by:** CronJob every 10min (configurable), part of [runtime-cluster-monitor](https://github.com/codefresh-io/runtime-cluster-monitor/blob/master/charts/cf-monitoring/templates/dind-volume-cleanup.yaml){:target="\_blank"} and runner deployment

**Configuration:**
@@ -820,7 +822,7 @@ Override environment variables for `dind-lv-monitor` daemonset if necessary:

4. Volume Provisioner listens for PVC events (create) and, based on the StorageClass definition, creates a PV object with the corresponding underlying volume backend (ebs/gcedisk/local).
5. During the build, each step (clone/build/push/freestyle/composition) is represented as a docker container inside the dind (docker-in-docker) pod. The Shared Volume (`/codefresh/volume`) is represented as a docker volume and mounted to every step (docker containers). The PV mount point inside the dind pod is `/var/lib/docker`.
6. The Engine pod controls the dind pod. It deserializes the pipeline yaml to docker API calls, and terminates dind after the build has finished or per user request (sigterm).
- 7. `dind-lv-monitor` DaemonSet OR `dind-volume-cleanup` CronJob are part of [Runtime Cleaner]({{site.baseurl}}/docs/administration/codefresh-runner/#runtime-cleaners), `app-proxy` Deployment and Ingress are described in the [next section]({{site.baseurl}}/docs/administration/codefresh-runner/#app-proxy-installation), `monitor` Deployment is for [Kubernetes Dashboard]({{site.baseurl}}/docs/deploy-to-kubernetes/manage-kubernetes/).
+ 7. `dind-lv-monitor` DaemonSet OR `dind-volume-cleanup` CronJob are part of [runtime cleaners](#types-of-runtime-cleaners), `app-proxy` Deployment and Ingress are described in the [App-Proxy installation](#app-proxy-installation), `monitor` Deployment is for [Kubernetes Dashboard]({{site.baseurl}}/docs/deploy-to-kubernetes/manage-kubernetes/).
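
The components this flow names (runner, engine, dind, volume-provisioner, the cleaners, and optionally app-proxy and monitor) can all be listed from the runtime namespace. A sketch, assuming the default `codefresh` namespace:

```shell
# Sketch: enumerate the Runner components described in the steps above.
kubectl get deployments,daemonsets,cronjobs,pods -n codefresh
```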
## Customized Codefresh Runner installations
@@ -834,7 +836,7 @@ The App-Proxy is an **optional** component of the Runner, used mainly when the G

App-Proxy requires a Kubernetes cluster:

1. With the Codefresh Runner installed <!--- is this correct? -->
- 1. With an active [ingress controller](https://kubernetes.io/docs/concepts/services-networking/ingress/){:target="\_blank"}
+ 1. With an active [ingress controller](https://kubernetes.io/docs/concepts/services-networking/ingress/){:target="\_blank"}.

The ingress controller must allow incoming connections from the VPC/VPN where users are browsing the Codefresh UI.
The ingress connection **must** have a hostname assigned for this route, and **must** be configured to perform SSL termination.
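
For orientation, enabling App-Proxy at install time looks roughly like the sketch below. The `--app-proxy` flags are assumed from the Codefresh CLI (verify against `codefresh runner init --help`), and the hostname is a placeholder that must resolve to the ingress controller:

```shell
# Sketch: install the Runner with App-Proxy enabled (flag names assumed;
# the hostname must point at the cluster's ingress controller and be
# configured for SSL termination).
codefresh runner init \
  --app-proxy \
  --app-proxy-host=runner-proxy.example.com
```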
@@ -1238,29 +1240,27 @@ Once the cluster is up and running, install the [cluster autoscaler](https://doc

Because we used IAM AddonPolicies `"autoScaler: true"` in the `cluster.yaml` file, everything is done automatically, and there is no need to create a separate IAM policy or add Auto Scaling group tags.

- * Replace `<YOUR CLUSTER NAME>` with the name of the cluster `cluster.yaml`
- * Add the following options:
-   `--balance-similar-node-groups`
-   `--skip-nodes-with-system-pods=false`
+ * Do the following as in the example below:
+   * Replace `<YOUR CLUSTER NAME>` with the name of the cluster in `cluster.yaml`
+   * Add the following options:
+     `--balance-similar-node-groups`
+     `--skip-nodes-with-system-pods=false`

```
spec:
@@ -1276,8 +1276,8 @@ spec:
- --balance-similar-node-groups
- --skip-nodes-with-system-pods=false
```

- {:start="5"}
- 1. Set the autoscaler version:
+ * Set the autoscaler version:

If the version of the EKS cluster is 1.15, the corresponding autoscaler version according to [https://github.com/kubernetes/autoscaler/releases](https://github.com/kubernetes/autoscaler/releases){:target="\_blank"} is 1.15.6.
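
Pinning the version usually amounts to pointing the autoscaler Deployment at the matching image tag. A sketch (the registry path is an assumption; check the releases page linked above):

```shell
# Sketch: set the cluster-autoscaler image to the release matching the
# EKS cluster version (registry path may vary).
kubectl -n kube-system set image deployment/cluster-autoscaler \
  cluster-autoscaler=k8s.gcr.io/autoscaling/cluster-autoscaler:v1.15.6
```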
* Specify the zone in which to create your volumes, for example: `--set-value=Storage.AvailabilityZone=us-west-2a`.
* (Optional) To assign the volume-provisioner to a specific node, for example, a specific node group with an IAM role that can create EBS volumes, `--set-value Storage.VolumeProvisioner.NodeSelector=node-type=addons`.
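
Both flags slot straight into the installer invocation; for example (the values are taken from the two bullets above):

```shell
# Sketch: pass the availability zone and the volume-provisioner node
# selector to the installer in one invocation.
codefresh runner init \
  --set-value=Storage.AvailabilityZone=us-west-2a \
  --set-value Storage.VolumeProvisioner.NodeSelector=node-type=addons
```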
@@ -1325,10 +1326,10 @@ codefresh runner init \

{:start="3"}
1. When the Wizard completes the installation, modify the runtime environment of `my-aws-runner` to specify the necessary toleration, nodeSelector and disk size:

* Run:

```shell
codefresh get re --limit=100 my-aws-runner/cf -o yaml > my-runtime.yml
```

* Modify the file `my-runtime.yml` as shown below:

```yaml
@@ -1615,12 +1616,12 @@ az role assignment create --assignee $NODE_SERVICE_PRINCIPAL --scope /subscripti

{:start="2"}
1. Install Codefresh Runner using one of these options: