Commit ad47e50

Update codefresh-runner.md

1 parent: 4db838f
1 file changed (+14, -24 lines)

_docs/administration/codefresh-runner.md

Lines changed: 14 additions & 24 deletions
@@ -190,7 +190,7 @@ Make sure you have [installed the Codefresh Runner](#codefresh-runner-installati
 
 **How to**
 1. Run `kubectl edit deployment runner -n codefresh-runtime` and add the proxy variables:
-```yaml
+```
 spec:
   containers:
   - env:
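The hunk above cuts off right after the opening of the container's `env` list. As a hedged sketch (not part of this commit), the proxy variables added there conventionally look like the following; the hostnames and the `NO_PROXY` list are placeholders:

```yaml
# Sketch only: conventional proxy variables appended to the runner
# container's env list. Values are placeholders, not from this commit.
spec:
  containers:
  - env:
    - name: HTTP_PROXY
      value: "http://<proxy-host>:<port>"
    - name: HTTPS_PROXY
      value: "http://<proxy-host>:<port>"
    - name: NO_PROXY
      value: "localhost,127.0.0.1,<cluster-internal-domains>"
```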
@@ -281,6 +281,7 @@ There are three options for this:
 
 #### Configuration
 
+{:start="1"}
 1. Create Storage Class for EBS volumes:
 >Choose **one** of the Availability Zones (AZs) to be used for your pipeline builds. Multi AZ configuration is not supported.
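The step above asks for a single-AZ Storage Class named `dind-ebs`, but the hunk does not include the manifest. A hedged sketch of what such a class could look like; the provisioner name and parameter keys depend on your Runner installation and are placeholders here:

```yaml
# Sketch only: a StorageClass named dind-ebs pinned to one AZ.
# The provisioner and parameter names are placeholders; take the
# dind volume provisioner name from your Runner installation.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: dind-ebs
provisioner: <your-dind-volume-provisioner>
parameters:
  volumeBackend: ebs                 # illustrative key name
  availabilityZone: <your_az_here>   # the single AZ you chose
```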
@@ -349,9 +350,9 @@ codefresh get runtime-environments my-eks-cluster/codefresh -o yaml > runtime.yaml
 ```
 {:start="4"}
 1. Modify the YAML:
-  * In `dockerDaemonScheduler.cluster`, add `nodeSelector: topology.kubernetes.io/zone:<your_az_here>`.
-  > Make sure you define the same AZ you selected for Runtime Configuration.
-  * Modify `pvcs.dind` to use the Storage Class you created above (`dind-ebs`).
+   * In `dockerDaemonScheduler.cluster`, add `nodeSelector: topology.kubernetes.io/zone:<your_az_here>`.
+   > Make sure you define the same AZ you selected for Runtime Configuration.
+   * Modify `pvcs.dind` to use the Storage Class you created above (`dind-ebs`).
 
 Here is an example of the `runtime.yaml` including the required updates:
 ```yaml
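The example `runtime.yaml` is truncated in this hunk. The two edits the bullets describe can be sketched together as a fragment (field paths follow the bullet text; everything else is a placeholder, not from this commit):

```yaml
# Sketch of the two edits described above; not a complete runtime.yaml.
dockerDaemonScheduler:
  cluster:
    nodeSelector:
      topology.kubernetes.io/zone: <your_az_here>  # same AZ as the Runtime Configuration
pvcs:
  dind:
    storageClassName: dind-ebs  # the Storage Class created earlier
```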
@@ -992,7 +993,6 @@ codefresh runner init [options] \
 ```
 
 **With the values `values-example.yaml` file:**
-
 ```yaml
 ...
 ### Storage parameter example for GCE disks
@@ -1213,7 +1213,6 @@ nodeGroups:
 autoScaler: true
 availabilityZones: ["us-west-2a", "us-west-2b", "us-west-2c"]
 ```
-{:start="2"}
 1. Execute:
 ```shell
 eksctl create cluster -f my-eks-cluster.yaml
@@ -1234,7 +1233,7 @@ To leverage [Bottlerocket-based nodes](https://aws.amazon.com/bottlerocket/){:target="\_blank"}
 
 #### Step 2: Install autoscaler on EKS cluster
 
-Once the cluster is up and running, install the [cluster autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html){:target="\_blank"}:
+Once the cluster is up and running, install the [cluster autoscaler](https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html){:target="\_blank"}.
 
 Because we used IAM AddonPolicies `"autoScaler: true"` in the `cluster.yaml` file, everything is done automatically, and there is no need to create a separate IAM policy or add Auto Scaling group tags.
@@ -1315,6 +1314,7 @@ codefresh runner init \
 --set-value=Storage.KmsKeyId=<key id>
 ```
 For descriptions of the other options, run `codefresh runner init --help` ([global parameter table](#customizing-the-wizard-installation)).
+
 {:start="3"}
 1. When the Wizard completes the installation, modify the runtime environment of `my-aws-runner` to specify the necessary toleration, nodeSelector and disk size:
 * Run:
@@ -1587,6 +1587,7 @@ kubectl edit deploy runner -n codefresh
 **How to**
 
 1. If you use AKS with managed [identities for node group](https://docs.microsoft.com/en-us/azure/aks/use-managed-identity), you can run the script below to assign the `CodefreshDindVolumeProvisioner` role to the AKS node identity:
+
 ```
 export ROLE_DEFINITIN_FILE=dind-volume-provisioner-role.json
 export SUBSCRIPTION_ID=$(az account show --query "id" | xargs echo)
@@ -1599,14 +1600,15 @@ export NODE_SERVICE_PRINCIPAL=$(az aks show -g $RESOURCE_GROUP -n $AKS_NAME --qu
 az role definition create --role-definition @${ROLE_DEFINITIN_FILE}
 az role assignment create --assignee $NODE_SERVICE_PRINCIPAL --scope /subscriptions/$SUBSCRIPTION_ID/resourceGroups/$NODES_RESOURCE_GROUP --role CodefreshDindVolumeProvisioner
 ```
+
 {:start="2"}
 1. Install Codefresh Runner using one of these options:
   * CLI Wizard:
 ```
 codefresh runner init --set-value Storage.Backend=azuredisk --set Storage.VolumeProvisioner.MountAzureJson=true
 ```
-  * [values-example.yaml](https://github.com/codefresh-io/venona/blob/release-1.0/venonactl/example/values-example.yaml){:target="\_blank"}:
-```yaml
+  * [values-example.yaml](https://github.com/codefresh-io/venona/blob/release-1.0/venonactl/example/values-example.yaml){:target="\_blank"}:
+```
 Storage:
   Backend: azuredisk
   VolumeProvisioner:
@@ -1616,7 +1618,7 @@ Storage:
 codefresh runner init --values values-example.yaml
 ```
   * Helm chart [values.yaml](https://github.com/codefresh-io/venona/blob/release-1.0/charts/cf-runtime/values.yaml){:target="\_blank"}:
-```yaml
+```
 storage:
   backend: azuredisk
   azuredisk:
@@ -1625,7 +1627,6 @@ storage:
   volumeProvisioner:
     mountAzureJson: true
 ```
-
 ```
 helm install cf-runtime cf-runtime/cf-runtime -f ./generated_values.yaml -f values.yaml --create-namespace --namespace codefresh
 ```
@@ -2048,10 +2049,10 @@ With the Codefresh Runner, you can run native ARM64v8 builds.
 
 The following scenario is an example of how to set up an ARM Runner on an existing EKS cluster:
 
-**Step 1: Preparing nodes**
+**Step 1: Preparing nodes**
 
-1. Create new ARM nodegroup:
 
+1. Create new ARM nodegroup:
 ```shell
 eksctl utils update-coredns --cluster <cluster-name>
 eksctl utils update-kube-proxy --cluster <cluster-name> --approve
@@ -2067,15 +2068,11 @@ eksctl create nodegroup \
 --nodes-max <4> \
 --managed
 ```
-
 1. Check nodes status:
-
 ```shell
 kubectl get nodes -l kubernetes.io/arch=arm64
 ```
-
 1. Also, it's recommended to label and taint the required ARM nodes:
-
 ```shell
 kubectl taint nodes <node> arch=aarch64:NoSchedule
 kubectl label nodes <node> arch=arm
@@ -2120,38 +2117,31 @@ Runtime:
 ```
 
 1. Install the Runner:
-
 ```shell
 codefresh runner init --values values-arm.yaml --exec-demo-pipeline false --skip-cluster-integration true
 ```
 
 **Step 3 - Post-installation fixes**
 
 1. Change `engine` image version in Runtime Environment specification:
-
 ```shell
 # get the latest engine ARM64 tag
 curl -X GET "https://quay.io/api/v1/repository/codefresh/engine/tag/?limit=100" --silent | jq -r '.tags[].name' | grep "^1.*arm64$"
 1.136.1-arm64
 ```
-
 ```shell
 # get runtime spec
 codefresh get re $RUNTIME_NAME -o yaml > runtime.yaml
 ```
-
 1. Under `runtimeScheduler.image` change image tag:
-
 ```yaml
 runtimeScheduler:
   image: 'quay.io/codefresh/engine:1.136.1-arm64'
 ```
-
 ```shell
 # patch runtime spec
 codefresh patch re -f runtime.yaml
 ```
-
 1. For `local` storage patch `dind-lv-monitor-runner` DaemonSet and add `nodeSelector`:
 
 ```shell

0 commit comments
