---
title: "Shared runtime configuration repo"
description: ""
group: runtime
toc: true
---

A Codefresh account with a hosted or a hybrid runtime can store configuration settings in a Git repository. This repository can be shared with other runtimes in the same account, avoiding the need to create and maintain configurations for each runtime.

* Hosted runtimes
  As part of the setup for a hosted runtime, you must select the Git organization in which to create the runtime installation repo. Codefresh then creates the shared configuration repository.

* Hybrid runtimes
  When you install the first hybrid runtime for an account, you can define the shared configuration repo through the `--shared-config-repo` flag, as in the sketch below. If the flag is omitted, and the account does not yet have a shared configuration repo, it is created in the runtime installation repo, in the `shared-config` root.
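
A minimal sketch of passing the flag when installing a hybrid runtime with the Codefresh CLI. The runtime name and repository URL are illustrative placeholders, and the exact command syntax may vary across CLI versions:

```
cf runtime install my-hybrid-runtime \
  --shared-config-repo https://github.com/<your-org>/codefresh-shared-config.git
```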

> Currently, Codefresh supports a single shared configuration repo per account.

### Shared runtime configuration repo structure
Below is a representation of the structure of the shared configuration repo for runtimes.
See a [sample repo](https://github.dev/noam-codefresh/shared-gs){:target="\_blank"}.

```
.
├── resources <───────────────────┐
│   ├── all-runtimes-all-clusters │
│   │   ├── cm-all.yaml           │
│   │   └── subfolder             │
│   │       └── manifest2.yaml    │
│   ├── control-planes            │
│   │   └── manifest3.yaml        │
│   ├── runtimes                  │
│   │   ├── runtime1              │
│   │   │   └── manifest4.yaml    │
│   │   └── runtime2              │
│   │       └── manifest5.yaml    │
│   └── manifest6.yaml            │
└── runtimes                      │
    ├── production                │ # referenced by <install_repo_1>/apps/runtime1/config_dir.json
    │   ├── in-cluster.yaml      ─┤ # manage `include` field to decide which dirs/files to sync to cluster
    │   └── remote-cluster.yaml  ─┤ # manage `include` field to decide which dirs/files to sync to cluster
    └── staging                   │ # referenced by <install_repo_2>/apps/runtime2/config_dir.json
        └── in-cluster.yaml      ─┘ # manage `include` field to decide which dirs/files to sync to cluster
```

#### `resources` directory

The `resources` directory holds the resources shared across the runtimes in the account and the clusters they manage:

* `all-runtimes-all-clusters`: Every resource manifest in this directory is applied to all the runtimes in the account, and to all the clusters managed by those runtimes (see the example manifest below).
* `control-planes`: Optional. Valid for hosted runtimes only. When defined, every resource manifest in this directory is applied to each hosted runtime’s `in-cluster`.
* `runtimes/<runtime_name>`: Optional. Runtime-specific subdirectory. Every resource manifest in a runtime-specific subdirectory is applied only to that runtime. In the example above, `manifest4.yaml` is applied only to `runtime1`.
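
As an illustration, a manifest in `all-runtimes-all-clusters` can be any plain Kubernetes object. The file below is a hypothetical example of what `cm-all.yaml` might contain, not content taken from an actual repo:

```yaml
# Hypothetical example of a manifest stored at resources/all-runtimes-all-clusters/cm-all.yaml.
# Because it sits in all-runtimes-all-clusters, it is synced to every runtime and every managed cluster.
apiVersion: v1
kind: ConfigMap
metadata:
  name: shared-settings
data:
  LOG_LEVEL: info
  REGION: us-east-1
```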
#### `runtimes` directory
Includes a subdirectory for each runtime in the account. Each runtime subdirectory always includes `in-cluster.yaml`, and optionally, application manifests for the other clusters managed by that runtime.

**Example application manifest for `in-cluster.yaml`**

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  labels:
    codefresh.io/entity: internal-config
    codefresh.io/internal: 'true'
  name: in-cluster
spec:
  project: default
  source:
    repoURL: <account's-isc-repository>
    path: resources # or shared-config/resources
    directory:
      include: '{all-runtimes-all-clusters/*.yaml,all-runtimes-all-clusters/**/*.yaml,runtimes/<runtime_name>/*.yaml,runtimes/<runtime_name>/**/*.yaml,control-planes/*.yaml,control-planes/**/*.yaml}'
      recurse: true
  destination:
    namespace: <runtime_name>
    server: https://kubernetes.default.svc
  syncPolicy:
    automated:
      allowEmpty: true
      prune: true
      selfHeal: true
    syncOptions:
      - allowEmpty=true
```

### Git Source application per runtime
In addition to the application manifests for the runtimes in the shared configuration repository, every runtime has a Git Source application that references `runtimes/<runtime_name>` in the shared configuration repo.

This Git Source application creates an application manifest named `<cluster-name>` for every cluster managed by the runtime. The `include` field in the `<cluster-name>` application manifest determines which subdirectories in the `resources` directory are synced to the target cluster.
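
For instance, a manifest for a managed cluster other than the runtime cluster (such as `remote-cluster.yaml` in the structure above) would look much like the `in-cluster` example, with the `include` field and destination adjusted for that cluster. The sketch below is illustrative; the server URL and namespace are placeholders, and the exact content Codefresh generates may differ:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  labels:
    codefresh.io/entity: internal-config
    codefresh.io/internal: 'true'
  name: remote-cluster
spec:
  project: default
  source:
    repoURL: <account's-isc-repository>
    path: resources
    directory:
      # Sync only the account-wide and runtime-specific resources to this cluster;
      # control-planes is omitted because it applies only to a hosted runtime's in-cluster.
      include: '{all-runtimes-all-clusters/*.yaml,all-runtimes-all-clusters/**/*.yaml,runtimes/<runtime_name>/*.yaml,runtimes/<runtime_name>/**/*.yaml}'
      recurse: true
  destination:
    namespace: <runtime_name>
    server: <remote-cluster-server-url>
  syncPolicy:
    automated:
      allowEmpty: true
      prune: true
      selfHeal: true
```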

### Adding resources
When you create a new resource in the Codefresh UI, such as a new integration, you define the runtimes and clusters to which the resource applies. The app-proxy saves the resource in the correct location in the shared configuration repo, and updates the relevant Argo CD Applications to include it.
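
As an illustration only (the file name is hypothetical), a resource scoped to all runtimes versus a single runtime would land in different subdirectories of `resources`:

```
resources/all-runtimes-all-clusters/my-integration.yaml   # applied to all runtimes and all managed clusters
resources/runtimes/runtime1/my-integration.yaml           # applied only to runtime1
```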

### Upgrading hybrid runtimes
Older hybrid runtimes that do not have the shared configuration repository must be upgraded to the latest version.
You have two options to define the shared configuration repository during the upgrade:
* Upgrade the runtime, and let the app-proxy create the shared configuration repo automatically.
* Manually define the shared configuration repository by adding the `--shared-config-repo` flag to the runtime upgrade command, as in the sketch below.
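
A minimal sketch of the second option; the runtime name and repository URL are placeholders, and the exact command syntax may vary across CLI versions:

```
cf runtime upgrade my-hybrid-runtime \
  --shared-config-repo https://github.com/<your-org>/codefresh-shared-config.git
```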

> If the shared configuration repo has not been created for the account, Codefresh creates it in the runtime installation repo, in the `shared-config` root.

If the hybrid runtime being upgraded has managed clusters, then once the shared configuration repo is created for the account, either automatically or manually during the upgrade, all the clusters are migrated to that repo when the app-proxy initializes. An Argo CD Application manifest is committed to the repo for each cluster managed by the runtime.