updated runner installation on GKE #281

Merged
jesse-codefresh merged 3 commits into master from runner-installation
Jun 10, 2021
74 changes: 66 additions & 8 deletions _docs/administration/codefresh-runner.md
@@ -1309,38 +1309,96 @@ kubectl create clusterrolebinding NAME --clusterrole cluster-admin --user <YOUR_
```

### Docker cache support for GKE

##### Local SSD
If you want to use *LocalSSD* in GKE:

*Prerequisite:* [GKE cluster with local SSD](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/local-ssd)
*Prerequisites:* [GKE cluster with local SSD](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/local-ssd)
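
The linked guide covers creating the node pool; as a rough sketch (the pool name, cluster placeholder, and zone below are assumptions, not values from this document), a pool with local SSDs could be added with:

```
gcloud container node-pools create runner-local-ssd \
  --cluster <YOUR_CLUSTER> \
  --zone us-central1-a \
  --local-ssd-count 1
```

GKE labels such nodes with `cloud.google.com/gke-local-ssd=true`, which is the selector used below.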

*Install runner using GKE Local SSD:*
Install Runner using GKE Local SSD:
```
codefresh runner init [options] --set-value=Storage.LocalVolumeParentDir=/mnt/disks/ssd0/codefresh-volumes \
--build-node-selector=cloud.google.com/gke-local-ssd=true
```
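
You can verify that the expected nodes carry that label (a quick check, assuming `kubectl` access to the cluster):

```
kubectl get nodes -l cloud.google.com/gke-local-ssd=true
```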

`values-example.yaml`
{% highlight yaml %}
{% raw %}
...
### Storage parameters example for gke-local-ssd
Storage:
Backend: local
LocalVolumeParentDir: /mnt/disks/ssd0/codefresh-volumes
NodeSelector: cloud.google.com/gke-local-ssd=true
...
Runtime:
NodeSelector: # dind and engine pods node-selector (--build-node-selector)
cloud.google.com/gke-local-ssd: "true"
...
{% endraw %}
{% endhighlight %}
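
If you prefer keeping these settings in a file instead of `--set-value` flags, the same values can be passed at install time; this assumes the CLI's values-file option and reuses the example file name above:

```
codefresh runner init --values values-example.yaml
```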

To configure an existing Runner with Local SSDs, follow this article:

[How-to: Configuring an existing Runtime Environment with Local SSDs (GKE only)](https://support.codefresh.io/hc/en-us/articles/360016652920-How-to-Configuring-an-existing-Runtime-Environment-with-Local-SSDs-GKE-only-)

##### GCE Disks
If you want to use *GCE Disks*:

*Prerequisite:* volume provisioner (dind-volume-provisioner) should have permissions to create/delete/get of Google disks
*Prerequisites:* volume provisioner (dind-volume-provisioner) should have permissions to create/delete/get GCE disks

There are 3 options to provide cloud credentials on GCE:

* run `dind-volume-provisioner-runner` on node with iam role which is allowed to create/delete/get of Google disks
* create Google Service Account with `ComputeEngine.StorageAdmin`, download its key and pass it to venona installed with `--set-file=Storage.GoogleServiceAccount=/path/to/google-service-account.json`
* use [Google Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity) to assign iam role to `volume-provisioner-venona` service account
* run `dind-volume-provisioner-runner` pod on a node with an IAM role which is allowed to create/delete/get GCE disks
* create a Google Service Account with the `ComputeEngine.StorageAdmin` role, download its key in JSON format and pass it to `codefresh runner init` with `--set-file=Storage.GoogleServiceAccount=/path/to/google-service-account.json` (see the sketch after this list)
* use [Google Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity) to assign an IAM role to the `volume-provisioner-runner` service account
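
A minimal sketch of the service-account option (the second bullet), assuming `gcloud` access to the project; the account name and `<YOUR_PROJECT_ID>` are placeholders, not values from this document:

```
# Create the service account and grant it Compute Storage Admin.
gcloud iam service-accounts create codefresh-volume-provisioner \
  --display-name "Codefresh volume provisioner"
gcloud projects add-iam-policy-binding <YOUR_PROJECT_ID> \
  --member "serviceAccount:codefresh-volume-provisioner@<YOUR_PROJECT_ID>.iam.gserviceaccount.com" \
  --role "roles/compute.storageAdmin"

# Download its key in JSON format for --set-file=Storage.GoogleServiceAccount=...
gcloud iam service-accounts keys create google-service-account.json \
  --iam-account codefresh-volume-provisioner@<YOUR_PROJECT_ID>.iam.gserviceaccount.com
```

For the Workload Identity option, the same Google Service Account is instead bound to the `volume-provisioner-runner` Kubernetes service account as described in the linked guide, and no key file is needed.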

Note that builds will run in a single availability zone, so you must specify the AvailabilityZone parameters.

Install Runner using GKE Disks:
Install Runner using GCE Disks:

```
codefresh runner init [options] --set-value=Storage.Backend=gcedisk \
--set-value=Storage.AvailabilityZone=us-central1-a \
[--kube-node-selector=failure-domain.beta.kubernetes.io/zone=us-central1-a \]
--build-node-selector=failure-domain.beta.kubernetes.io/zone=us-central1-a \
[--set-file=Storage.GoogleServiceAccount=/path/to/google-service-account.json]
```

`values-example.yaml`
{% highlight yaml %}
{% raw %}
...
### Storage parameter example for GCE disks
Storage:
Backend: gcedisk
AvailabilityZone: us-central1-c
GoogleServiceAccount: > #serviceAccount.json content
{
"type": "service_account",
"project_id": "...",
"private_key_id": "...",
"private_key": "...",
"client_email": "...",
"client_id": "...",
"auth_uri": "...",
"token_uri": "...",
"auth_provider_x509_cert_url": "...",
"client_x509_cert_url": "..."
}
NodeSelector: failure-domain.beta.kubernetes.io/zone=us-central1-c
...
Runtime:
NodeSelector: # dind and engine pods node-selector (--build-node-selector)
failure-domain.beta.kubernetes.io/zone: us-central1-c
...
{% endraw %}
{% endhighlight %}

To configure an existing Runner with GCE Disks, follow this article:

[How-to: Configuring an existing Runtime Environment with GCE disks](https://support.codefresh.io/hc/en-us/articles/360016652900-How-to-Configuring-an-existing-Runtime-Environment-with-GCE-disks)


#### Using multiple Availability Zones

Currently, to support effective caching with GCE disks, the builds/pods need to be scheduled in a single AZ (this is more related to a GCP limitation than a Codefresh runner issue).
