Commit 3fd0e82 ("upd runner docs"), 1 parent: 315bf75

1 file changed: `_docs/administration/codefresh-runner.md` (+137 −122 lines)
@@ -415,55 +415,106 @@ You can fine tune the installation of the runner to better match your environmen

### Installing on AWS

If you have installed the Codefresh runner on [EKS](https://aws.amazon.com/eks/) or any other custom cluster in Amazon (e.g. with kops), you need to configure it to work with EBS volumes in order to gain [caching]({{site.baseurl}}/docs/configure-ci-cd-pipeline/pipeline-caching/).

> This section assumes you have already installed the Runner with the default options: `codefresh runner init`

**Prerequisites**

The `dind-volume-provisioner` deployment must have permissions to create/attach/detach/delete/get EBS volumes.

There are three options:

* run the `dind-volume-provisioner` pod on a node (node group) that has an IAM role
* mount a Kubernetes secret in [AWS credentials format](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html) to `~/.aws/credentials` (or pass the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables) to the `dind-volume-provisioner` pod
* use [IAM Roles for Service Accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) with the IAM role assigned to the `volume-provisioner-runner` service account
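For the second option, a minimal sketch of such a secret (the secret name is illustrative, the namespace is assumed to be the runner namespace, and the `credentials` key follows the AWS credentials file format linked above):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: aws-credentials     # illustrative name
  namespace: codefresh      # the runner namespace
type: Opaque
stringData:
  # standard AWS credentials file format
  credentials: |
    [default]
    aws_access_key_id = <AWS_ACCESS_KEY_ID>
    aws_secret_access_key = <AWS_SECRET_ACCESS_KEY>
```

The secret then has to be mounted into the `dind-volume-provisioner` pod at `~/.aws/credentials`.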
Minimal policy for `dind-volume-provisioner`:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:AttachVolume",
        "ec2:CreateSnapshot",
        "ec2:CreateTags",
        "ec2:CreateVolume",
        "ec2:DeleteSnapshot",
        "ec2:DeleteTags",
        "ec2:DeleteVolume",
        "ec2:DescribeInstances",
        "ec2:DescribeSnapshots",
        "ec2:DescribeTags",
        "ec2:DescribeVolumes",
        "ec2:DetachVolume"
      ],
      "Resource": "*"
    }
  ]
}
```
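For the third option, one way to create the role and bind it to the service account is via `eksctl`. A sketch only: the policy file name, cluster name, namespace, and account id are placeholders, and this assumes the IAM OIDC provider is already associated with the cluster:

```shell
# Save the policy above as dind-volume-provisioner-policy.json, then:
aws iam create-policy \
  --policy-name dind-volume-provisioner \
  --policy-document file://dind-volume-provisioner-policy.json

# Bind it to the volume-provisioner-runner service account via IRSA
eksctl create iamserviceaccount \
  --cluster my-eks-cluster \
  --namespace codefresh \
  --name volume-provisioner-runner \
  --attach-policy-arn arn:aws:iam::<ACCOUNT_ID>:policy/dind-volume-provisioner \
  --approve \
  --override-existing-serviceaccounts
```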

Create a Storage Class for EBS volumes:

> Choose **one** of the Availability Zones you want to be used for your pipeline builds. Multi-AZ configuration is not supported.

**Storage Class (gp2)**

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: dind-ebs
### Specify the name of the provisioner
provisioner: codefresh.io/dind-volume-provisioner-runner-<-NAMESPACE-> # <---- replace <-NAMESPACE-> with the runner namespace
volumeBindingMode: Immediate
parameters:
  # ebs or ebs-csi
  volumeBackend: ebs
  # a valid Availability Zone
  AvailabilityZone: us-central1-a # <---- change this placeholder to your AZ
  # gp2, gp3 or io1
  VolumeType: gp2
  # in the case of io1 you can also set iops
  # iops: 1000
  # ext4 or xfs (defaults to xfs; make sure the XFS tools are present on the nodes)
  fsType: xfs
```
**Storage Class (gp3)**

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: dind-ebs
### Specify the name of the provisioner
provisioner: codefresh.io/dind-volume-provisioner-runner-<-NAMESPACE-> # <---- replace <-NAMESPACE-> with the runner namespace
volumeBindingMode: Immediate
parameters:
  # ebs or ebs-csi
  volumeBackend: ebs
  # a valid Availability Zone
  AvailabilityZone: us-central1-a # <---- change this placeholder to your AZ
  # gp2, gp3 or io1
  VolumeType: gp3
  # ext4 or xfs (defaults to xfs; make sure the XFS tools are present on the nodes)
  fsType: xfs
  # I/O operations per second. Only effective when the gp3 volume type is specified.
  # Default: 3000. Max: 16,000.
  iops: "5000"
  # Throughput in MiB/s. Only effective when the gp3 volume type is specified.
  # Default: 125. Max: 1000.
  throughput: "500"
```

Apply the storage class manifest:

```shell
kubectl apply -f dind-ebs.yaml
```
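Optionally, you can sanity-check the provisioner before wiring the runtime to the new class by creating a throwaway claim (illustrative, not part of the official setup; the claim name is an assumption). If it stays `Pending`, check the provisioner pod logs and its IAM permissions:

```yaml
# Illustrative smoke test: apply it, wait for it to become Bound, then delete it
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dind-ebs-smoke-test
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: dind-ebs
  resources:
    requests:
      storage: 1Gi
```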

Change your [runtime environment]({{site.baseurl}}/docs/administration/codefresh-runner/#full-runtime-environment-specification) configuration:

The same AZ you selected before should be used in the nodeSelector inside the runtime configuration:

@@ -476,60 +527,59 @@ codefresh get runtime-environments
Choose the runtime you have just added and get its yaml representation:

```shell
codefresh get runtime-environments my-eks-cluster/codefresh -o yaml > runtime.yaml
```

Under the `dockerDaemonScheduler.cluster` block, add the nodeSelector `topology.kubernetes.io/zone: <your_az_here>`. It should be at the same level as `clusterProvider` and `namespace`. Also modify the `pvcs.dind` block to use the Storage Class you created above (`dind-ebs`).

`runtime.yaml` example:

```yaml
version: 1
metadata:
  ...
runtimeScheduler:
  cluster:
    clusterProvider:
      accountId: 5f048d85eb107d52b16c53ea
      selector: my-eks-cluster
    namespace: codefresh
    serviceAccount: codefresh-engine
  annotations: {}
dockerDaemonScheduler:
  cluster:
    clusterProvider:
      accountId: 5f048d85eb107d52b16c53ea
      selector: my-eks-cluster
    namespace: codefresh
    nodeSelector:
      topology.kubernetes.io/zone: us-central1-a # <---- change this placeholder to your AZ
    serviceAccount: codefresh-engine
  annotations: {}
  userAccess: true
  defaultDindResources:
    requests: ''
  pvcs:
    dind:
      volumeSize: 30Gi
      storageClassName: dind-ebs
      reuseVolumeSelector: 'codefresh-app,io.codefresh.accountName'
extends:
  - system/default/hybrid/k8s_low_limits
description: '...'
accountId: 5f048d85eb107d52b16c53ea
```

Update your runtime environment with the [patch command](https://codefresh-io.github.io/cli/operate-on-resources/patch/):

```shell
codefresh patch runtime-environment my-eks-cluster/codefresh -f runtime.yaml
```
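To confirm the patch took effect, you can re-fetch the runtime and inspect the `pvcs` block (a sketch, assuming the runtime name used above; it should show `storageClassName: dind-ebs`):

```shell
codefresh get runtime-environments my-eks-cluster/codefresh -o yaml | grep -A 3 'dind:'
```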

If necessary, delete all existing PV and PVC objects left over from the default local provisioner:

```shell
kubectl delete pvc -l codefresh-app=dind -n <your_runner_ns>
# PersistentVolumes are cluster-scoped, so no namespace flag is needed
kubectl delete pv -l codefresh-app=dind
```

### Installing to EKS with Autoscaling
@@ -754,7 +804,8 @@ Install the runner passing additional options:
```shell
codefresh runner init \
--name my-aws-runner \
--kube-node-selector=topology.kubernetes.io/zone=us-west-2a \
--build-node-selector=topology.kubernetes.io/zone=us-west-2a \
--kube-namespace cf --kube-context-name my-aws-runner \
--set-value Storage.VolumeProvisioner.NodeSelector=node-type=addons \
--set-value=Storage.Backend=ebs \
...
```
@@ -770,7 +821,8 @@ If you already have a key - add its ARN via `--set-value=Storage.KmsKeyId=<key i

```shell
codefresh runner init \
--name my-aws-runner \
--kube-node-selector=topology.kubernetes.io/zone=us-west-2a \
--build-node-selector=topology.kubernetes.io/zone=us-west-2a \
--kube-namespace cf --kube-context-name my-aws-runner \
--set-value Storage.VolumeProvisioner.NodeSelector=node-type=addons \
--set-value=Storage.Backend=ebs \
...
--set-value=Storage.KmsKeyId=<key id>
```

For an explanation of all the other options, run `codefresh runner init --help` (see also the [global parameter table](#customizing-the-wizard-installation)).

At this point the quick start wizard will start the installation.

@@ -789,43 +841,6 @@ Once that is done we need to modify the runtime environment of `my-aws-runner` t
```shell
codefresh get re --limit=100 my-aws-runner/cf -o yaml > my-runtime.yml
```

Modify the file my-runtime.yml as shown below:

```yaml
...
description: 'Runtime environment configure to cluster: my-aws-runner and namespace: cf'
accountId: 5cb563d0506083262ba1f327
```

Apply the changes:

```shell
codefresh patch re my-aws-runner/cf -f my-runtime.yml
```
@@ -891,14 +906,14 @@ That's all. Now you can go to UI and try to run a pipeline on RE my-aws-runner/c

### Injecting AWS ARN roles into the cluster

**Step 1** - Make sure the OIDC provider is connected to the cluster

See:

* [https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html)
* [https://aws.amazon.com/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/](https://aws.amazon.com/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/)

**Step 2** - Create the IAM role and policy as explained in [https://docs.aws.amazon.com/eks/latest/userguide/create-service-account-iam-policy-and-role.html](https://docs.aws.amazon.com/eks/latest/userguide/create-service-account-iam-policy-and-role.html)

Here, in addition to the policy explained, you need a Trust Relationship established between this role and the OIDC entity.
@@ -922,7 +937,7 @@ Here, in addition to the policy explained, you need a Trust Relationship establi
}
```

**Step 3** - Annotate the `codefresh-engine` Kubernetes Service Account in the namespace where the Codefresh Runner is installed with the proper IAM role:

```shell
kubectl annotate -n ${CODEFRESH_NAMESPACE} sa codefresh-engine eks.amazonaws.com/role-arn=${ROLE_ARN}
```
@@ -944,7 +959,7 @@ Tokens: codefresh-engine-token-msj8d
Events: <none>
```

**Step 4** - Using the AWS assumed role identity

After annotating the Service Account, run a pipeline to test the AWS resource access:
@@ -970,9 +985,9 @@ RunAwsCli:

If you want to deploy the Codefresh runner on a Kubernetes cluster that doesn't have direct access to `g.codefresh.io`, and has to go through a proxy server to access `g.codefresh.io`, you will need to follow these additional steps:

**Step 1** - Follow the installation instructions of the previous section

**Step 2** - Run `kubectl edit deployment runner -n codefresh-runtime` and add the proxy variables like this:

```yaml
spec:
  ...
          value: localhost,127.0.0.1,<local_ip_of_machine>
```

**Step 3** - Add the following variables to your runtime.yaml, both under the `runtimeScheduler:` and under `dockerDaemonScheduler:` blocks, inside the `envVars:` section:

```yaml
HTTP_PROXY: http://<ip of proxy server>:port
...
No_proxy: localhost, 127.0.0.1, <local_ip_of_machine>
NO_PROXY: localhost, 127.0.0.1, <local_ip_of_machine>
```
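Since Step 3 edits runtime.yaml rather than a live Kubernetes object, remember to push the change back with the patch command used earlier (a sketch; substitute your own runtime name):

```shell
codefresh patch runtime-environment <cluster>/<namespace> -f runtime.yaml
```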

**Step 4** - Add `.firebaseio.com` to the allowed sites of the proxy server

**Step 5** - Exec into the `dind` pod and run `ifconfig`

If the MTU value for `docker0` is higher than the MTU value of `eth0` (sometimes the `docker0` MTU is 1500 while the `eth0` MTU is 1440), you need to change this: the `docker0` MTU should be lower than the `eth0` MTU.
10121027
To fix this, edit the configmap in the codefresh-runtime namespace:
10131028
10141029
```shell
1015-
kubectl edit cm codefresh-dind-config -ncodefresh-runtime
1030+
kubectl edit cm codefresh-dind-config -n codefresh-runtime
10161031
```
10171032
10181033
And add this after one of the commas:
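The setting in question is docker's `mtu` key. Purely as an illustration (the 1440 value here is an assumption; use a value at or below your `eth0` MTU):

```json
"mtu": 1440,
```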
