You can fine tune the installation of the runner to better match your environment.
### Installing on AWS
If you've installed the Codefresh runner on [EKS](https://aws.amazon.com/eks/) or any other custom cluster in Amazon (e.g. with kops), you need to configure it to work with EBS volumes in order to gain [caching]({{site.baseurl}}/docs/configure-ci-cd-pipeline/pipeline-caching/).
> This section assumes you have already installed the Runner with the default options: `codefresh runner init`
**Prerequisites**
The `dind-volume-provisioner` deployment must have permissions to create, attach, detach, delete, and get EBS volumes.
There are three options:
* running the `dind-volume-provisioner` pod on a node (node group) with an IAM role
* a Kubernetes secret in [AWS credentials file format](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html) mounted to `~/.aws/credentials` (or `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables passed) in the `dind-volume-provisioner` pod
* using [IAM Roles for Service Accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) with an IAM role assigned to the `volume-provisioner-runner` service account
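For the third option, the association between the role and the `volume-provisioner-runner` service account is just an annotation. A sketch of applying it (the namespace, account id, and role name are placeholders you must replace with your own values):

```shell
# Associate an IAM role with the volume-provisioner service account (IRSA).
# <runner-namespace>, <account-id> and <role-name> are placeholders --
# substitute your own values before running.
kubectl annotate -n <runner-namespace> sa volume-provisioner-runner \
  eks.amazonaws.com/role-arn=arn:aws:iam::<account-id>:role/<role-name>
```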
Minimal policy for `dind-volume-provisioner`:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:AttachVolume",
        "ec2:CreateSnapshot",
        "ec2:CreateTags",
        "ec2:CreateVolume",
        "ec2:DeleteSnapshot",
        "ec2:DeleteTags",
        "ec2:DeleteVolume",
        "ec2:DescribeInstances",
        "ec2:DescribeSnapshots",
        "ec2:DescribeTags",
        "ec2:DescribeVolumes",
        "ec2:DetachVolume"
      ],
      "Resource": "*"
    }
  ]
}
```
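If you manage IAM from the command line, the policy can be created and attached with the AWS CLI. A sketch, assuming the JSON above was saved as `dind-volume-provisioner-policy.json` (the file, policy, and role names here are illustrative, not prescribed by Codefresh):

```shell
# Create the IAM policy from the JSON document above.
aws iam create-policy \
  --policy-name dind-volume-provisioner \
  --policy-document file://dind-volume-provisioner-policy.json

# Attach it to the instance role of the node group that runs the
# provisioner (replace <node-instance-role> and <account-id>).
aws iam attach-role-policy \
  --role-name <node-instance-role> \
  --policy-arn arn:aws:iam::<account-id>:policy/dind-volume-provisioner
```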
Create a Storage Class for the EBS volumes:
> Choose **one** of the Availability Zones to be used for your pipeline builds. Multi-AZ configuration is not supported.
**Storage Class (gp2)**
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: dind-ebs
### Specify name of provisioner
provisioner: codefresh.io/dind-volume-provisioner-runner-<-NAMESPACE-> # <---- rename <-NAMESPACE-> with the runner namespace
volumeBindingMode: Immediate
parameters:
  # ebs or ebs-csi
  volumeBackend: ebs
  # Valid zone
  AvailabilityZone: us-west-2c # <---- change it to your AZ
  # gp2, gp3 or io1
  VolumeType: gp2
  # in case of io1 you can set iops
  # iops: 1000
  # ext4 or xfs (defaults to xfs; ensure that xfs tools are installed)
  fsType: xfs
```
**Storage Class (gp3)**
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: dind-ebs
### Specify name of provisioner
provisioner: codefresh.io/dind-volume-provisioner-runner-<-NAMESPACE-> # <---- rename <-NAMESPACE-> with the runner namespace
volumeBindingMode: Immediate
parameters:
  # ebs or ebs-csi
  volumeBackend: ebs
  # Valid zone
  AvailabilityZone: us-west-2c # <---- change it to your AZ
  # gp2, gp3 or io1
  VolumeType: gp3
  # ext4 or xfs (defaults to xfs; ensure that xfs tools are installed)
  fsType: xfs
  # I/O operations per second. Only effective when the gp3 volume type is specified.
  # Default value - 3000.
  # Max - 16,000.
  iops: "5000"
  # Throughput in MiB/s. Only effective when the gp3 volume type is specified.
  # Default value - 125.
  # Max - 1000.
  throughput: "500"
```
Apply the storage class manifest:
```shell
kubectl apply -f dind-ebs.yaml
```
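Before applying, the `<-NAMESPACE->` placeholder in the provisioner name must match the namespace the runner was installed into. A minimal sketch of the substitution, assuming a runner namespace named `codefresh` (use your own):

```shell
# Substitute the runner namespace into the provisioner name.
# "codefresh" is only an example namespace -- use your own.
# With a saved manifest you would run the same sed over dind-ebs.yaml.
echo "provisioner: codefresh.io/dind-volume-provisioner-runner-<-NAMESPACE->" \
  | sed "s/<-NAMESPACE->/codefresh/"
```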
The same AZ you selected before should be used in the `nodeSelector` inside the Runtime Configuration:
To list all available runtimes, execute:
```shell
codefresh get runtime-environments
```

Choose the runtime you have just added and get its yaml representation:
```shell
codefresh get runtime-environments my-eks-cluster/codefresh -o yaml > runtime.yaml
```
Under the `dockerDaemonScheduler.cluster` block add the nodeSelector `topology.kubernetes.io/zone: <your_az_here>`. It should be at the same level as `clusterProvider` and `namespace`. Also, the `pvcs.dind` block should be modified to use the Storage Class you created above (`dind-ebs`).
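The relevant part of `runtime.yaml` would then look roughly like this (a sketch; the account id, namespace, and zone values are illustrative, only the `nodeSelector` placement and the `dind-ebs` Storage Class come from the steps above):

```yaml
dockerDaemonScheduler:
  cluster:
    clusterProvider:
      accountId: <your-account-id>
    namespace: codefresh
    nodeSelector:
      topology.kubernetes.io/zone: us-west-2c
  pvcs:
    dind:
      storageClassName: dind-ebs
```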
**Step 2** - Create an IAM role and policy as explained in [https://docs.aws.amazon.com/eks/latest/userguide/create-service-account-iam-policy-and-role.html](https://docs.aws.amazon.com/eks/latest/userguide/create-service-account-iam-policy-and-role.html)
Here, in addition to the policy explained, you need a Trust Relationship established between this role and the OIDC entity.
**Step 3** - Annotate the `codefresh-engine` Kubernetes Service Account in the namespace where the Codefresh Runner is installed with the proper IAM role.
```shell
kubectl annotate -n ${CODEFRESH_NAMESPACE} sa codefresh-engine eks.amazonaws.com/role-arn=${ROLE_ARN}
```

After annotating the Service Account, run a pipeline to test the AWS resource access:
If you want to deploy the Codefresh runner on a Kubernetes cluster that doesn't have direct access to `g.codefresh.io`, and has to go through a proxy server to access `g.codefresh.io`, you will need to follow these additional steps:
**Step 1** - Follow the installation instructions of the previous section
**Step 2** - Run `kubectl edit deployment runner -n codefresh-runtime` and add the proxy variables like this:
```yaml
spec:
  # ...
value: localhost,127.0.0.1,<local_ip_of_machine>
```
**Step 3** - Add the following variables to your runtime.yaml, both under the `runtimeScheduler:` and under `dockerDaemonScheduler:` blocks, inside the `envVars:` section
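Assuming the standard proxy variables (the exact values your proxy requires may differ; the addresses below are placeholders), each block would gain something like:

```yaml
runtimeScheduler:
  envVars:
    HTTP_PROXY: "http://<proxy-address>:<port>"
    HTTPS_PROXY: "http://<proxy-address>:<port>"
    NO_PROXY: "localhost,127.0.0.1,<local_ip_of_machine>"
dockerDaemonScheduler:
  envVars:
    HTTP_PROXY: "http://<proxy-address>:<port>"
    HTTPS_PROXY: "http://<proxy-address>:<port>"
    NO_PROXY: "localhost,127.0.0.1,<local_ip_of_machine>"
```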
**Step 4** - Add `.firebaseio.com` to the allowed-sites of the proxy server
**Step 5** - Exec into the `dind` pod and run `ifconfig`
1009
1024
1010
1025
If the MTU value for `docker0` is higher than the MTU value of `eth0` (sometimes the `docker0` MTU is 1500, while the `eth0` MTU is 1440), you need to change it: the `docker0` MTU should be lower than the `eth0` MTU.
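With the example figures from the text, the comparison can be sketched as follows (1500 and 1440 are the sample values mentioned above; on a real `dind` pod you would read the actual MTUs from `ifconfig`):

```shell
# Example MTU values from the text; on the dind pod read them with ifconfig.
docker0_mtu=1500
eth0_mtu=1440
if [ "$docker0_mtu" -gt "$eth0_mtu" ]; then
  echo "docker0 MTU ($docker0_mtu) must be lowered below eth0 MTU ($eth0_mtu)"
fi
```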
To fix this, edit the configmap in the codefresh-runtime namespace:
```shell
kubectl edit cm codefresh-dind-config -n codefresh-runtime
```