
Commit 5eedd36: Update codefresh-runner.md
1 parent: a738935

1 file changed: +36 −35

_docs/administration/codefresh-runner.md (36 additions & 35 deletions)
@@ -170,7 +170,7 @@ Installing the Codefresh Runner with Helm requires you to first create a `genera
 1. Optional. If the Kubernetes cluster with the Codefresh Runner is behind a proxy, continue with [Complete Codefresh Runner installation](#complete-codefresh-runner-installation).

 <!--- what is this -->
-For reference, have a look at the repository with the chart: [https://github.com/codefresh-io/venona/tree/release-1.0/.deploy/cf-runtime](https://github.com/codefresh-io/venona/tree/release-1.0/.deploy/cf-runtime).
+For reference, have a look at the repository with the chart: [https://github.com/codefresh-io/venona/tree/release-1.0/.deploy/cf-runtime](https://github.com/codefresh-io/venona/tree/release-1.0/.deploy/cf-runtime){:target="\_blank"}.

 ```shell
@@ -235,7 +235,7 @@ After installation, configure the Kubernetes cluster with the Codefresh Runner t

 ### AWS backend volume configuration

-For Codefresh Runners on [EKS](https://aws.amazon.com/eks/){:target=\_blank"} or any other custom cluster in Amazon, such as kops for example, configure the Runner to work with EBS volumes to support [caching]({{site.baseurl}}/docs/configure-ci-cd-pipeline/pipeline-caching/) during pipeline execution.
+For Codefresh Runners on [EKS](https://aws.amazon.com/eks/){:target="\_blank"} or any other custom cluster in Amazon, such as kops for example, configure the Runner to work with EBS volumes to support [caching]({{site.baseurl}}/docs/configure-ci-cd-pipeline/pipeline-caching/) during pipeline execution.

 > The configuration assumes that you have installed the Runner with the default options: `codefresh runner init`

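Note: this section relies on the `Storage.*` options used elsewhere on this page. As a hedged sketch (not the documented command for this section), an EBS-backed install would combine them roughly like this; `Storage.Backend=ebs` follows the `Storage.Backend=azuredisk` pattern shown later, and the zone value is a placeholder:

```shell
# Hedged sketch: install the Runner with an EBS volume backend.
# The backend and Availability Zone values are placeholders/assumptions.
codefresh runner init \
  --set-value=Storage.Backend=ebs \
  --set-value=Storage.AvailabilityZone=us-west-2a
```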
@@ -246,11 +246,11 @@ The `dind-volume-provisioner` deployment should have permissions to create/attac

 There are three options for this:
 1. Run `dind-volume-provisioner` pod on the node/node-group with IAM role
-1. Mount K8s secret in [AWS credential format](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html):
+1. Mount K8s secret in [AWS credential format](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html){:target="\_blank"}:
    To ~/.aws/credentials
    OR
    By passing the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` as environment variables to the `dind-volume-provisioner` pod
-1. Use [AWS identity for Service Account](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) IAM role assigned to `volume-provisioner-runner` service account
+1. Use [AWS identity for Service Account](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html){:target="\_blank"} IAM role assigned to `volume-provisioner-runner` service account


 **Minimal policy for `dind-volume-provisioner`**
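Note: as a rough illustration of option 2 above (mounting a K8s secret in AWS credential format), such a secret could be created from a local credential file; the secret and key names here are hypothetical, not the chart's expected names:

```shell
# Hypothetical sketch: package ~/.aws/credentials as a Kubernetes secret
# so it can be mounted into the dind-volume-provisioner pod.
kubectl -n codefresh create secret generic aws-credentials \
  --from-file=credentials=$HOME/.aws/credentials
```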
@@ -345,18 +345,20 @@ kubectl apply -f dind-ebs.yaml
   ```shell
   codefresh get runtime-environments
   ```
+
 * Select the runtime you just added, and get its YAML representation:
   ```shell
   codefresh get runtime-environments my-eks-cluster/codefresh -o yaml > runtime.yaml
   ```

 **Step 4:** Modify the YAML:
 * In `dockerDaemonScheduler.cluster`, add `nodeSelector: topology.kubernetes.io/zone:<your_az_here>`.
-  > Make sure you define the same AZ you selected for Runtime Configuration.
+  > Make sure you define the same AZ you selected for Runtime Configuration.
 * Modify `pvcs.dind` to use the Storage Class you created above (`dind-ebs`).

 Here is an example of the `runtime.yaml` including the required updates:

+
 ```yaml
 version: 1
 metadata:
@@ -448,7 +450,7 @@ GKE volume configuration includes:

 Configure the Codefresh Runner to use local SSDs for your pipeline volumes:

-[How-to: Configuring an existing Runtime Environment with Local SSDs (GKE only)](https://support.codefresh.io/hc/en-us/articles/360016652920-How-to-Configuring-an-existing-Runtime-Environment-with-Local-SSDs-GKE-only-)
+[How-to: Configuring an existing Runtime Environment with Local SSDs (GKE only)](https://support.codefresh.io/hc/en-us/articles/360016652920-How-to-Configuring-an-existing-Runtime-Environment-with-Local-SSDs-GKE-only-){:target="\_blank"}

 <br />

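Note: the how-to article linked above covers the runtime-environment wiring; for context, provisioning the local SSDs themselves is typically a node-pool concern. A hedged sketch, with pool and cluster names as placeholders:

```shell
# Hedged sketch: create a GKE node pool whose nodes each carry one local SSD.
gcloud container node-pools create runner-ssd-pool \
  --cluster my-gke-cluster \
  --local-ssd-count 1
```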
@@ -461,12 +463,12 @@ There are three options to provide cloud credentials:

 1. Run `dind-volume-provisioner-runner` pod on a node with an IAM role which can create/delete/get GCE disks
 1. Create Google Service Account with `ComputeEngine.StorageAdmin` role, download its key in JSON format, and pass it to `codefresh runner init` with `--set-file=Storage.GooogleServiceAccount=/path/to/google-service-account.json`
-1. Use [Google Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity) to assign IAM role to `volume-provisioner-runner` service account
+1. Use [Google Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity){:target="\_blank"} to assign IAM role to `volume-provisioner-runner` service account

 Notice that builds run in a single Availability Zone (AZ), so you must specify Availability Zone parameters.

 **Configuration**
-[How-to: Configuring an existing Runtime Environment with GCE disks](https://support.codefresh.io/hc/en-us/articles/360016652900-How-to-Configuring-an-existing-Runtime-Environment-with-GCE-disks)
+[How-to: Configuring an existing Runtime Environment with GCE disks](https://support.codefresh.io/hc/en-us/articles/360016652900-How-to-Configuring-an-existing-Runtime-Environment-with-GCE-disks){:target="\_blank"}

 <br />

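Note: as a sketch of option 3 above, Workload Identity binding usually comes down to annotating the Kubernetes service account with the Google service account it should impersonate; the project and GSA names below are placeholders, and the GSA must also grant `roles/iam.workloadIdentityUser` to this KSA:

```shell
# Hedged sketch: bind the volume-provisioner-runner service account to a
# GCP service account via Workload Identity (names are placeholders).
kubectl -n codefresh annotate serviceaccount volume-provisioner-runner \
  iam.gke.io/gcp-service-account=dind-provisioner@my-project.iam.gserviceaccount.com
```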
@@ -531,7 +533,7 @@ The Codefresh Runner does not currently support [Regional Persistent Disks](http

 You can configure your Codefresh Runner to use an internal registry as a mirror for any container images that are specified in your pipelines.

-1. Set up an internal registry as described in [https://docs.docker.com/registry/recipes/mirror/](https://docs.docker.com/registry/recipes/mirror/).
+1. Set up an internal registry as described in [https://docs.docker.com/registry/recipes/mirror/](https://docs.docker.com/registry/recipes/mirror/){:target="\_blank"}.
 1. Locate the `codefresh-dind-config` config map in the namespace that houses the Runner.
    ```shell
    kubectl -n codefresh edit configmap codefresh-dind-config
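    # Hedged sketch (an assumption, not the commit's documented edit): the
    # mirror is normally declared in the daemon.json key of this config map,
    # roughly as follows; the URL is a placeholder for your internal registry:
    #
    #   daemon.json: |
    #     {
    #       "registry-mirrors": ["https://my-registry.local:5000"]
    #     }
    #
    # "registry-mirrors" is the standard Docker daemon option for pull-through
    # mirrors; new dind pods pick it up on start.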
@@ -682,22 +684,22 @@ Node size and count depends entirely on how many pipelines you want to be “rea
 For the storage options needed by the `dind` pod, we suggest:

 * [Local Volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local){:target="\_blank"} `/var/lib/codefresh/dind-volumes` on the K8S nodes filesystem (**default**)
-* [EBS](https://aws.amazon.com/ebs/){:target="\_blank"} in the case of AWS. See also the [notes](#installing-on-aws) about getting caching working.
-* [Local SSD](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/local-ssd){:target="\_blank"} or [GCE Disks](https://cloud.google.com/compute/docs/disks#pdspecs){:target="\_blank"} in the case of GCP. See [notes](#installing-on-google-kubernetes-engine) about configuration.
+* [EBS](https://aws.amazon.com/ebs/){:target="\_blank"} in the case of AWS. See also the [notes](#aws-backend-volume-configuration) about getting caching working.
+* [Local SSD](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/local-ssd){:target="\_blank"} or [GCE Disks](https://cloud.google.com/compute/docs/disks#pdspecs){:target="\_blank"} in the case of GCP. See [notes](#gke-google-kubernetes-engine-backend-volume-configuration) about configuration.


 **Networking Requirements**

 * `dind`: Pod creates an internal network in the cluster to run all the pipeline steps; needs outgoing/egress access to Docker Hub and `quay.io`.
-* `runner`: Pod needs outgoing/egress access to `g.codefresh.io`; needs network access to [app-proxy]({{site.baseurl}}/docs/administration/codefresh-runner/#optional-installation-of-the-app-proxy) if installed.
+* `runner`: Pod needs outgoing/egress access to `g.codefresh.io`; needs network access to [app-proxy](#app-proxy-installation) if installed.
 * `engine`: Pod needs outgoing/egress access to `g.codefresh.io`, `*.firebaseio.com` and `quay.io`; needs network access to `dind` pod

 All CNI providers/plugins are compatible with the runner components.

 ## Monitoring disk space in Codefresh Runner

 Codefresh pipelines require disk space for:
-* [Pipeline Shared Volume]({{site.baseurl}}/docs/example-catalog/ci-examples/shared-volumes-between-builds/) (`/codefresh/volume`, implemented as [docker volume](https://docs.docker.com/storage/volumes/){:target="\_blank"})
+* [Pipeline Shared Volume]({{site.baseurl}}/docs/yaml-examples/examples/shared-volumes-between-builds/) (`/codefresh/volume`, implemented as [docker volume](https://docs.docker.com/storage/volumes/){:target="\_blank"})
 * Docker containers, both running and stopped
 * Docker images and cached layers

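Note: to see how much of that space a build is using, one hedged option is to query the dind pod of a running build directly; the pod name is a placeholder, and `/var/lib/docker` is the PV mount point described later on this page:

```shell
# Hedged sketch: check free space on the dind volume from inside the pod.
kubectl -n codefresh exec dind-<build-id> -- df -h /var/lib/docker
```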
@@ -754,7 +756,7 @@ dockerDaemonScheduler:
 **Where it runs:** On Runtime Cluster as CronJob
 (`kubectl get cronjobs -n codefresh -l app=dind-volume-cleanup`). Installed in case the Runner uses non-local volumes (`Storage.Backend != local`)

-**Triggered by:** CronJob every 10min (configurable), part of [runtime-cluster-monitor](https://github.com/codefresh-io/runtime-cluster-monitor/blob/master/charts/cf-monitoring/templates/dind-volume-cleanup.yaml) and runner deployment
+**Triggered by:** CronJob every 10min (configurable), part of [runtime-cluster-monitor](https://github.com/codefresh-io/runtime-cluster-monitor/blob/master/charts/cf-monitoring/templates/dind-volume-cleanup.yaml){:target="\_blank"} and runner deployment

 **Configuration:**

@@ -820,7 +822,7 @@ Override environment variables for `dind-lv-monitor` daemonset if necessary:
 4. Volume Provisioner listens for PVC events (create) and, based on the StorageClass definition, creates a PV object with the corresponding underlying volume backend (ebs/gcedisk/local).
 5. During the build, each step (clone/build/push/freestyle/composition) is represented as a docker container inside the dind (docker-in-docker) pod. The Shared Volume (`/codefresh/volume`) is represented as a docker volume and mounted to every step (docker containers). The PV mount point inside the dind pod is `/var/lib/docker`.
 6. The Engine pod controls the dind pod. It deserializes the pipeline yaml to docker API calls, and terminates dind after the build has finished or per user request (sigterm).
-7. `dind-lv-monitor` DaemonSet OR `dind-volume-cleanup` CronJob are part of [Runtime Cleaner]({{site.baseurl}}/docs/administration/codefresh-runner/#runtime-cleaners), `app-proxy` Deployment and Ingress are described in the [next section]({{site.baseurl}}/docs/administration/codefresh-runner/#app-proxy-installation), `monitor` Deployment is for [Kubernetes Dashboard]({{site.baseurl}}/docs/deploy-to-kubernetes/manage-kubernetes/).
+7. `dind-lv-monitor` DaemonSet OR `dind-volume-cleanup` CronJob are part of [runtime cleaners](#types-of-runtime-cleaners), `app-proxy` Deployment and Ingress are described in the [App-Proxy installation](#app-proxy-installation), `monitor` Deployment is for [Kubernetes Dashboard]({{site.baseurl}}/docs/deploy-to-kubernetes/manage-kubernetes/).

 ## Customized Codefresh Runner installations

@@ -834,7 +836,7 @@ The App-Proxy is an **optional** component of the Runner, used mainly when the G
 App-Proxy requires a Kubernetes cluster:

 1. With the Codefresh Runner installed <!--- is this correct? -->
-1. With an active [ingress controller](https://kubernetes.io/docs/concepts/services-networking/ingress/){:target="\_blank"}
+1. With an active [ingress controller](https://kubernetes.io/docs/concepts/services-networking/ingress/){:target="\_blank"}.
 The ingress controller must allow incoming connections from the VPC/VPN where users are browsing the Codefresh UI.
 The ingress connection **must** have a hostname assigned for this route, and **must** be configured to perform SSL termination.

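Note: to make the hostname and SSL-termination requirements concrete, a minimal Ingress might look like the following sketch; the host, TLS secret, service name, and port are assumptions, not the chart's actual resource names:

```yaml
# Hedged sketch of an ingress satisfying the requirements above.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-proxy
  namespace: codefresh
spec:
  tls:
    - hosts:
        - runner.mycompany.com
      secretName: runner-tls          # SSL termination happens at the ingress
  rules:
    - host: runner.mycompany.com      # hostname assigned for this route
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-proxy
                port:
                  number: 80
```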
@@ -1238,29 +1240,27 @@ Once the cluster is up and running, install the [cluster autoscaler](https://doc

 Because we used IAM AddonPolicies `"autoScaler: true"` in the `cluster.yaml` file, everything is done automatically, and there is no need to create a separate IAM policy or add Auto Scaling group tags.

-1. Deploy the cluster autoscaler:
+* Deploy the cluster autoscaler:

 ```shell
 kubectl apply -f https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml
 ```
-{:start="2"}
-1. Add the `cluster-autoscaler.kubernetes.io/safe-to-evict` annotation:
-
+* Add the `cluster-autoscaler.kubernetes.io/safe-to-evict` annotation:
 ```shell
 kubectl -n kube-system annotate deployment.apps/cluster-autoscaler cluster-autoscaler.kubernetes.io/safe-to-evict="false"
 ```
-{:start="3"}
-1. Edit the `cluster-autoscaler` container command:
+
+* Edit the `cluster-autoscaler` container command:

 ```shell
 kubectl -n kube-system edit deployment.apps/cluster-autoscaler
 ```
-{:start="4"}
-1. Do the following as in the example below:
-* Replace `<YOUR CLUSTER NAME>` with the name of the cluster `cluster.yaml`
-* Add the following options:
-`--balance-similar-node-groups`
-`--skip-nodes-with-system-pods=false`
+
+* Do the following as in the example below:
+  * Replace `<YOUR CLUSTER NAME>` with the name of the cluster from `cluster.yaml`
+  * Add the following options:
+    `--balance-similar-node-groups`
+    `--skip-nodes-with-system-pods=false`

 ```
 spec:
@@ -1276,8 +1276,8 @@ spec:
 - --balance-similar-node-groups
 - --skip-nodes-with-system-pods=false
 ```
-{:start="5"}
-1. Set the autoscaler version:
+
+* Set the autoscaler version:
 If the EKS cluster version is 1.15, the corresponding autoscaler version according to [https://github.com/kubernetes/autoscaler/releases](https://github.com/kubernetes/autoscaler/releases){:target="\_blank"} is 1.15.6.

 ```shell
@@ -1301,6 +1301,7 @@ https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#h
 $ kubectl config current-context
 my-aws-runner
 ```
+
 1. Install the Runner with additional options:
    * Specify the zone in which to create your volumes, for example: `--set-value=Storage.AvailabilityZone=us-west-2a`.
    * (Optional) To assign the volume-provisioner to a specific node, for example, a specific node group with an IAM role that can create EBS volumes, `--set-value Storage.VolumeProvisioner.NodeSelector=node-type=addons`.
@@ -1325,10 +1326,10 @@ codefresh runner init \
 {:start="3"}
 1. When the Wizard completes the installation, modify the runtime environment of `my-aws-runner` to specify the necessary toleration, nodeSelector and disk size:
    * Run:
-
    ```shell
    codefresh get re --limit=100 my-aws-runner/cf -o yaml > my-runtime.yml
    ```
+
    * Modify the file `my-runtime.yml` as shown below:

    ```yaml
@@ -1615,12 +1616,12 @@ az role assignment create --assignee $NODE_SERVICE_PRINCIPAL --scope /subscripti

 {:start="2"}
 1. Install Codefresh Runner using one of these options:
-   * CLI Wizard:
+   **CLI Wizard:**
   ```
   codefresh runner init --set-value Storage.Backend=azuredisk --set Storage.VolumeProvisioner.MountAzureJson=true
   ```
-   * [values-example.yaml](https://github.com/codefresh-io/venona/blob/release-1.0/venonactl/example/values-example.yaml){:target="\_blank"}:
-
+
+   **[values-example.yaml](https://github.com/codefresh-io/venona/blob/release-1.0/venonactl/example/values-example.yaml){:target="\_blank"}:**
   ```
   Storage:
   Backend: azuredisk

@@ -1630,7 +1631,7 @@ Storage:
 ```shell
 codefresh runner init --values values-example.yaml
 ```
-   * Helm chart [values.yaml](https://github.com/codefresh-io/venona/blob/release-1.0/charts/cf-runtime/values.yaml){:target="\_blank"}:
+   **Helm chart [values.yaml](https://github.com/codefresh-io/venona/blob/release-1.0/charts/cf-runtime/values.yaml){:target="\_blank"}:**

 ```
 storage:
