# codefresh-onprem-helm
Helm chart for deploying Codefresh On-Premises to Kubernetes.
- Prerequisites
- Get Repo Info
- Install Chart
- Chart Configuration
- Installing on OpenShift
- Firebase Configuration
- Additional configuration
- Configuring OIDC Provider
- Maintaining MongoDB indexes
- Upgrading
- Rollback
- Troubleshooting
- Values
Since version 2.1.7, the chart is pushed only to the OCI registry at
`oci://quay.io/codefresh/codefresh`

Versions prior to 2.1.7 are still available in ChartMuseum at
`http://chartmuseum.codefresh.io/codefresh`
- Kubernetes >= 1.28 && <= 1.32 (supported versions are those the installation has been verified against; it may work on older k8s versions as well)
- Helm 3.8.0+
- PV provisioner support in the underlying infrastructure (with resizing available)
- Minimum 4 vCPUs and 8Gi memory available in the cluster (for production usage, the recommended minimum cluster capacity is at least 12 vCPUs and 36Gi memory)
- GCR Service Account JSON `sa.json` (provided by Codefresh; contact support@codefresh.io)
- Firebase Realtime Database URL with legacy token. See Firebase Configuration
- Valid TLS certificates for Ingress
- When external PostgreSQL is used, the `pg_cron` and `pg_partman` extensions must be enabled for analytics to work (see AWS RDS example). For Azure Postgres DB, the `pg_cron` extension must be version 1.4 or higher.
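As a sketch of the last prerequisite (assuming `psql` access and a placeholder connection string in `$POSTGRES_URI` for a sufficiently privileged user; on managed services such as RDS or Azure, the extensions must also be allow-listed in the instance parameter group):

```shell
# Hypothetical: enable the extensions required by analytics on an external PostgreSQL
psql "$POSTGRES_URI" -c 'CREATE EXTENSION IF NOT EXISTS pg_cron;'
psql "$POSTGRES_URI" -c 'CREATE EXTENSION IF NOT EXISTS pg_partman;'
```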
```shell
helm show all oci://quay.io/codefresh/codefresh
```

Important: only Helm 3.8.0+ is supported.
Edit the default `values.yaml` or create an empty `cf-values.yaml`
- Pass `sa.json` (as a single line) to `.Values.imageCredentials.password`

```yaml
# -- Credentials for Image Pull Secret object
imageCredentials:
  registry: us-docker.pkg.dev
  username: _json_key
  password: '{ "type": "service_account", "project_id": "codefresh-enterprise", "private_key_id": ... }'
```
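A minimal sketch of producing that single-line value (the file content below is a placeholder, not a real service account):

```shell
# Placeholder sa.json content for illustration only
printf '{\n  "type": "service_account",\n  "project_id": "codefresh-enterprise"\n}\n' > sa.json
# Strip newlines so the JSON fits on one line in values.yaml
tr -d '\n' < sa.json
```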
- Specify `.Values.global.appUrl`, `.Values.global.firebaseUrl`, `.Values.global.firebaseSecret`, `.Values.global.env.MONGOOSE_AUTO_INDEX`, `.Values.global.env.MONGO_AUTOMATIC_INDEX_CREATION`

```yaml
global:
  # -- Application root url. Will be used in Ingress as hostname
  appUrl: onprem.mydomain.com
  # -- Firebase URL for logs streaming.
  firebaseUrl: <>
  # -- Firebase URL for logs streaming from existing secret
  firebaseUrlSecretKeyRef: {}
  # E.g.
  # firebaseUrlSecretKeyRef:
  #   name: my-secret
  #   key: firebase-url
  # -- Firebase Secret.
  firebaseSecret: <>
  # -- Firebase Secret from existing secret
  firebaseSecretSecretKeyRef: {}
  # E.g.
  # firebaseSecretSecretKeyRef:
  #   name: my-secret
  #   key: firebase-secret
  # -- Enable index creation in MongoDB
  # This is required for first-time installations!
  # Before usage in Production, you must set it to `false` or remove it!
  env:
    MONGOOSE_AUTO_INDEX: "true"
    MONGO_AUTOMATIC_INDEX_CREATION: "true"
```
- Specify `.Values.ingress.tls.cert` and `.Values.ingress.tls.key` OR `.Values.ingress.tls.existingSecret`

```yaml
ingress:
  # -- Enable the Ingress
  enabled: true
  # -- Set the ingressClass that is used for the ingress.
  # Default `nginx-codefresh` is created from `ingress-nginx` controller subchart
  # If you specify a different ingress class, disable `ingress-nginx` subchart (see below)
  ingressClassName: nginx-codefresh
  tls:
    # -- Enable TLS
    enabled: true
    # -- Default secret name to be created with provided `cert` and `key` below
    secretName: "star.codefresh.io"
    # -- Certificate (base64 encoded)
    cert: ""
    # -- Private key (base64 encoded)
    key: ""
    # -- Existing `kubernetes.io/tls` type secret with TLS certificates (keys: `tls.crt`, `tls.key`)
    existingSecret: ""

ingress-nginx:
  # -- Enable ingress-nginx controller
  enabled: true
```
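The `cert` and `key` values are expected base64-encoded on a single line. A sketch using a placeholder PEM file (GNU coreutils `base64 -w0`; on macOS, pipe through `tr -d '\n'` instead):

```shell
# Placeholder certificate data for illustration; use your real PEM files
printf 'placeholder-pem-data' > tls.crt
# Encode without line wrapping, suitable for .Values.ingress.tls.cert
base64 -w0 < tls.crt
```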
- Or specify your own `.Values.ingress.ingressClassName` (and disable the built-in ingress-nginx subchart)

```yaml
ingress:
  # -- Enable the Ingress
  enabled: true
  # -- Set the ingressClass that is used for the ingress.
  ingressClassName: nginx

ingress-nginx:
  # -- Disable ingress-nginx controller
  enabled: false
```
- Install the chart

```shell
helm upgrade --install cf oci://quay.io/codefresh/codefresh \
  -f cf-values.yaml \
  --namespace codefresh \
  --create-namespace \
  --debug \
  --wait \
  --timeout 15m
```
Once your Codefresh On-Prem instance is installed, configured, and confirmed to be ready for production use, the following variables must be set to `false` or removed:

```yaml
global:
  env:
    MONGOOSE_AUTO_INDEX: "false"
    MONGO_AUTOMATIC_INDEX_CREATION: "false"
```
See Customizing the Chart Before Installing. To see all configurable options with detailed comments, visit the chart's `values.yaml`, or run:

```shell
helm show values oci://quay.io/codefresh/codefresh
```

The following table lists the persistent services created as part of the on-premises installation:
| Database | Purpose | Required version |
|---|---|---|
| MongoDB | Stores all account data (account settings, users, projects, pipelines, builds, etc.) | 7.x |
| PostgreSQL | Stores data about events for the account (pipeline updates, deletes, etc.). The audit log uses the data from this database. | 16.x or 17.x |
| Redis | Used for caching, and as a key-value store for the cron trigger manager. | 7.0.x |
| RabbitMQ | Used for message queueing. | 3.13.x or 4.0.x |
Running on netfs (nfs, cifs) is not recommended.

The Docker daemon (`cf-builder` stateful set) can be run on block storage only.

All of these services can be externalized. See the next sections.
The chart contains the required dependencies for the corresponding services. However, you might need to use external services like MongoDB Atlas Database or Amazon RDS for PostgreSQL. In order to use them, adjust the values accordingly:
⚠️ Important! If you use MongoDB Atlas, you must create a user with `Write` permissions before installing Codefresh.

Then, provide the user credentials in the chart values at:

- `.Values.global.mongodbUser` / `mongodbUserSecretKeyRef`
- `.Values.global.mongodbPassword` / `mongodbPasswordSecretKeyRef`
- `.Values.seed.mongoSeedJob.mongodbRootUser` / `mongodbRootUserSecretKeyRef`
- `.Values.seed.mongoSeedJob.mongodbRootPassword` / `mongodbRootPasswordSecretKeyRef`
Ref: Create Users in Atlas
`values.yaml` for external MongoDB:
```yaml
seed:
  mongoSeedJob:
    # -- Enable mongo seed job. Seeds the required data (default idp/user/account), creates cfuser and required databases.
    enabled: true
    # -- Root user in plain text (required ONLY for seed job!).
    mongodbRootUser: "root"
    # -- Root user from existing secret
    mongodbRootUserSecretKeyRef: {}
    # E.g.
    # mongodbRootUserSecretKeyRef:
    #   name: my-secret
    #   key: mongodb-root-user
    # -- Root password in plain text (required ONLY for seed job!).
    mongodbRootPassword: "password"
    # -- Root password from existing secret
    mongodbRootPasswordSecretKeyRef: {}
    # E.g.
    # mongodbRootPasswordSecretKeyRef:
    #   name: my-secret
    #   key: mongodb-root-password

global:
  # -- LEGACY (but still supported) - Use `.global.mongodbProtocol` + `.global.mongodbUser/mongodbUserSecretKeyRef` + `.global.mongodbPassword/mongodbPasswordSecretKeyRef` + `.global.mongodbHost/mongodbHostSecretKeyRef` + `.global.mongodbOptions` instead
  # Default MongoDB URI. Will be used by ALL services to communicate with MongoDB.
  # Ref: https://www.mongodb.com/docs/manual/reference/connection-string/
  # Note! `defaultauthdb` is omitted on purpose (i.e. mongodb://.../[defaultauthdb])
  mongoURI: ""
  # E.g.
  # mongoURI: "mongodb://cfuser:mTiXcU2wafr9@cf-mongodb:27017/"
  # -- Set mongodb protocol (`mongodb` / `mongodb+srv`)
  mongodbProtocol: mongodb
  # -- Set mongodb user in plain text
  mongodbUser: "cfuser"
  # -- Set mongodb user from existing secret
  mongodbUserSecretKeyRef: {}
  # E.g.
  # mongodbUserSecretKeyRef:
  #   name: my-secret
  #   key: mongodb-user
  # -- Set mongodb password in plain text
  mongodbPassword: "password"
  # -- Set mongodb password from existing secret
  mongodbPasswordSecretKeyRef: {}
  # E.g.
  # mongodbPasswordSecretKeyRef:
  #   name: my-secret
  #   key: mongodb-password
  # -- Set mongodb host in plain text
  mongodbHost: "my-mongodb.prod.svc.cluster.local:27017"
  # -- Set mongodb host from existing secret
  mongodbHostSecretKeyRef: {}
  # E.g.
  # mongodbHostSecretKeyRef:
  #   name: my-secret
  #   key: mongodb-host
  # -- Set mongodb connection string options
  # Ref: https://www.mongodb.com/docs/manual/reference/connection-string/#connection-string-options
  mongodbOptions: "retryWrites=true"

mongodb:
  # -- Disable mongodb subchart installation
  enabled: false
```
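For reference, the discrete values above compose into the same connection string the legacy `mongoURI` would carry. A sketch using the example values from this README:

```shell
protocol="mongodb"
user="cfuser"
password="password"
host="my-mongodb.prod.svc.cluster.local:27017"
options="retryWrites=true"
# defaultauthdb is omitted on purpose, hence the bare trailing "/"
uri="${protocol}://${user}:${password}@${host}/?${options}"
echo "$uri"
```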
In order to use MTLS (Mutual TLS) for MongoDB, you need to:

- Create a K8s secret that contains the certificate (certificate file and private key). The secret should have a single `ca.pem` key.

```shell
cat cert.crt > ca.pem
cat cert.key >> ca.pem
kubectl create secret generic my-mongodb-tls --from-file=ca.pem
```
- Add `.Values.global.volumes` and `.Values.global.volumeMounts` to mount the secret into all the services.

```yaml
global:
  volumes:
    mongodb-tls:
      enabled: true
      type: secret
      nameOverride: my-mongodb-tls
      optional: true

  volumeMounts:
    mongodb-tls:
      path:
        - mountPath: /etc/ssl/mongodb/ca.pem
          subPath: ca.pem

  env:
    MONGODB_SSL_ENABLED: true
    MTLS_CERT_PATH: /etc/ssl/mongodb/ca.pem
    RUNTIME_MTLS_CERT_PATH: /etc/ssl/mongodb/ca.pem
    RUNTIME_MONGO_TLS: "true"
    # Set these env vars to 'false' if self-signed certificate is used to avoid x509 errors
    RUNTIME_MONGO_TLS_VALIDATE: "false"
    MONGO_MTLS_VALIDATE: "false"
```
```yaml
seed:
  postgresSeedJob:
    # -- Enable postgres seed job. Creates required user and databases.
    enabled: true
    # -- (optional) "postgres" admin user in plain text (required ONLY for seed job!)
    # Must be a privileged user allowed to create databases and grant roles.
    # If omitted, username and password from `.Values.global.postgresUser/postgresPassword` will be used.
    postgresUser: "postgres"
    # -- (optional) "postgres" admin user from existing secret
    postgresUserSecretKeyRef: {}
    # E.g.
    # postgresUserSecretKeyRef:
    #   name: my-secret
    #   key: postgres-user
    # -- (optional) Password for "postgres" admin user (required ONLY for seed job!)
    postgresPassword: "password"
    # -- (optional) Password for "postgres" admin user from existing secret
    postgresPasswordSecretKeyRef: {}
    # E.g.
    # postgresPasswordSecretKeyRef:
    #   name: my-secret
    #   key: postgres-password

global:
  # -- Set postgres user in plain text
  postgresUser: cf_user
  # -- Set postgres user from existing secret
  postgresUserSecretKeyRef: {}
  # E.g.
  # postgresUserSecretKeyRef:
  #   name: my-secret
  #   key: postgres-user
  # -- Set postgres password in plain text
  postgresPassword: password
  # -- Set postgres password from existing secret
  postgresPasswordSecretKeyRef: {}
  # E.g.
  # postgresPasswordSecretKeyRef:
  #   name: my-secret
  #   key: postgres-password
  # -- Set postgres service address in plain text.
  postgresHostname: "my-postgres.domain.us-east-1.rds.amazonaws.com"
  # -- Set postgres service from existing secret
  postgresHostnameSecretKeyRef: {}
  # E.g.
  # postgresHostnameSecretKeyRef:
  #   name: my-secret
  #   key: postgres-hostname
  # -- Set postgres port number
  postgresPort: 5432

postgresql:
  # -- Disable postgresql subchart installation
  enabled: false
```
Provide the following env vars to enforce SSL connection to PostgreSQL:

```yaml
global:
  env:
    # More info in the official docs: https://www.postgresql.org/docs/current/libpq-envars.html
    PGSSLMODE: "require"

helm-repo-manager:
  env:
    POSTGRES_DISABLE_SSL: "false"
```
⚠️ Important!
We do not support custom CA configuration for PostgreSQL, including self-signed certificates. This may cause incompatibility with some providers' default configurations.
In particular, Amazon RDS for PostgreSQL version 15 and later requires SSL encryption by default (ref).
We recommend disabling SSL on the provider side in such cases, or using the following steps to mount custom CA certificates: Mounting private CA certs
```yaml
global:
  # -- Set redis password in plain text
  redisPassword: password
  # -- Set redis service port
  redisPort: 6379
  # -- Set redis password from existing secret
  redisPasswordSecretKeyRef: {}
  # E.g.
  # redisPasswordSecretKeyRef:
  #   name: my-secret
  #   key: redis-password
  # -- Set redis hostname in plain text. Takes precedence over `global.redisService`!
  redisUrl: "my-redis.namespace.svc.cluster.local"
  # -- Set redis hostname from existing secret.
  redisUrlSecretKeyRef: {}
  # E.g.
  # redisUrlSecretKeyRef:
  #   name: my-secret
  #   key: redis-url

redis:
  # -- Disable redis subchart installation
  enabled: false
```
If ElastiCache is used, set `REDIS_TLS` to `true` in `.Values.global.env`:

⚠️ ElastiCache with Cluster mode is not supported!

```yaml
global:
  env:
    REDIS_TLS: true
```
In order to use MTLS (Mutual TLS) for Redis, you need to:

- Create a K8s secret that contains the certificates (CA, certificate, and private key).

```shell
# Bundle the server certificate and the CA certificate into one file
cat tls.crt ca.crt > bundle.crt
kubectl create secret tls my-redis-tls --cert=bundle.crt --key=tls.key --dry-run=client -o yaml | kubectl apply -f -
```
- Add `.Values.global.volumes` and `.Values.global.volumeMounts` to mount the secret into all the services.

```yaml
global:
  volumes:
    redis-tls:
      enabled: true
      type: secret
      # Existing secret with TLS certificates (keys: `ca.crt`, `tls.crt`, `tls.key`)
      nameOverride: my-redis-tls
      optional: true

  volumeMounts:
    redis-tls:
      path:
        - mountPath: /etc/ssl/redis

  env:
    REDIS_TLS: true
    REDIS_CA_PATH: /etc/ssl/redis/ca.crt
    REDIS_CLIENT_CERT_PATH: /etc/ssl/redis/tls.crt
    REDIS_CLIENT_KEY_PATH: /etc/ssl/redis/tls.key
    # Set these env vars like this if a self-signed certificate is used, to avoid x509 errors
    REDIS_REJECT_UNAUTHORIZED: false
    REDIS_TLS_SKIP_VERIFY: true
```
```yaml
global:
  # -- Set rabbitmq protocol (`amqp/amqps`)
  rabbitmqProtocol: amqp
  # -- Set rabbitmq username in plain text
  rabbitmqUsername: user
  # -- Set rabbitmq username from existing secret
  rabbitmqUsernameSecretKeyRef: {}
  # E.g.
  # rabbitmqUsernameSecretKeyRef:
  #   name: my-secret
  #   key: rabbitmq-username
  # -- Set rabbitmq password in plain text
  rabbitmqPassword: password
  # -- Set rabbitmq password from existing secret
  rabbitmqPasswordSecretKeyRef: {}
  # E.g.
  # rabbitmqPasswordSecretKeyRef:
  #   name: my-secret
  #   key: rabbitmq-password
  # -- Set rabbitmq service address in plain text. Takes precedence over `global.rabbitService`!
  rabbitmqHostname: "my-rabbitmq.namespace.svc.cluster.local:5672"
  # -- Set rabbitmq service address from existing secret.
  rabbitmqHostnameSecretKeyRef: {}
  # E.g.
  # rabbitmqHostnameSecretKeyRef:
  #   name: my-secret
  #   key: rabbitmq-hostname

rabbitmq:
  # -- Disable rabbitmq subchart installation
  enabled: false
```
The chart deploys ingress-nginx and exposes the controller behind a Service of `Type=LoadBalancer`.

All installation options for ingress-nginx are described in its Configuration reference.

Relevant examples for Codefresh are below:

Certificate provided from ACM:
```yaml
ingress-nginx:
  controller:
    service:
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
        service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
        service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
        service.beta.kubernetes.io/aws-load-balancer-ssl-cert: < CERTIFICATE ARN >
      targetPorts:
        http: http
        https: http

# -- Ingress
ingress:
  tls:
    # -- Disable TLS
    enabled: false
```
Certificate provided as a base64 string or as an existing k8s secret:
```yaml
ingress-nginx:
  controller:
    service:
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-type: nlb
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
        service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
        service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'

# -- Ingress
ingress:
  tls:
    # -- Enable TLS
    enabled: true
    # -- Default secret name to be created with provided `cert` and `key` below
    secretName: "star.codefresh.io"
    # -- Certificate (base64 encoded)
    cert: "LS0tLS1CRUdJTiBDRVJ...."
    # -- Private key (base64 encoded)
    key: "LS0tLS1CRUdJTiBSU0E..."
    # -- Existing `kubernetes.io/tls` type secret with TLS certificates (keys: `tls.crt`, `tls.key`)
    existingSecret: ""
```
Application Load Balancer should be deployed to the cluster
```yaml
ingress-nginx:
  # -- Disable ingress-nginx subchart installation
  enabled: false

ingress:
  # -- ALB controller ingress class
  ingressClassName: alb
  annotations:
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    alb.ingress.kubernetes.io/backend-protocol: HTTP
    alb.ingress.kubernetes.io/certificate-arn: <ARN>
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/success-codes: 200,404
    alb.ingress.kubernetes.io/target-type: ip
  services:
    # For ALB, the /* asterisk is required in the path
    internal-gateway:
      - /*
```
If you install/upgrade Codefresh in an air-gapped environment without access to public registries (i.e. quay.io/docker.io) or the Codefresh Enterprise registry at gcr.io, you will have to mirror the images to your organization's container registry.

- Obtain the image list for the specific release
- Push the images to your private docker registry
- Specify the image registry in values:
```yaml
global:
  imageRegistry: myregistry.domain.com
```
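The push step above can be sketched for a single image as follows (the private registry name is an example; repeat for every image in the release image list):

```shell
docker pull quay.io/codefresh/engine:1.147.8
docker tag quay.io/codefresh/engine:1.147.8 myregistry.domain.com/codefresh/engine:1.147.8
docker push myregistry.domain.com/codefresh/engine:1.147.8
```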
There are 3 types of images. With the values above, images in the rendered manifests will be converted as follows:

Non-Codefresh images like:

```
bitnami/mongodb:4.2
registry.k8s.io/ingress-nginx/controller:v1.4.0
postgres:13
```

converted to:

```
myregistry.domain.com/bitnami/mongodb:4.2
myregistry.domain.com/ingress-nginx/controller:v1.4.0
myregistry.domain.com/postgres:13
```

Codefresh public images like:

```
quay.io/codefresh/dind:20.10.13-1.25.2
quay.io/codefresh/engine:1.147.8
quay.io/codefresh/cf-docker-builder:1.1.14
```

converted to:

```
myregistry.domain.com/codefresh/dind:20.10.13-1.25.2
myregistry.domain.com/codefresh/engine:1.147.8
myregistry.domain.com/codefresh/cf-docker-builder:1.1.14
```

Codefresh private images like:

```
gcr.io/codefresh-enterprise/codefresh/cf-api:21.153.6
gcr.io/codefresh-enterprise/codefresh/cf-ui:14.69.38
gcr.io/codefresh-enterprise/codefresh/pipeline-manager:3.121.7
```

converted to:

```
myregistry.domain.com/codefresh/cf-api:21.153.6
myregistry.domain.com/codefresh/cf-ui:14.69.38
myregistry.domain.com/codefresh/pipeline-manager:3.121.7
```
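The three conversion rules can be sketched as a small hypothetical helper (not part of the chart; it only mimics what the templates do by stripping the known source registry prefix and prepending the private one):

```shell
convert_image() {
  # Strip known source registries, then prepend the private registry
  # configured in global.imageRegistry (example value used here).
  local image="$1" private="myregistry.domain.com"
  image="${image#quay.io/}"
  image="${image#gcr.io/codefresh-enterprise/}"
  image="${image#registry.k8s.io/}"
  printf '%s/%s\n' "$private" "$image"
}

convert_image "quay.io/codefresh/engine:1.147.8"
convert_image "gcr.io/codefresh-enterprise/codefresh/cf-api:21.153.6"
convert_image "postgres:13"
```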
Use the example below to override the repository for all templates:

```yaml
global:
  imagePullSecrets:
    - cf-registry

ingress-nginx:
  controller:
    image:
      registry: myregistry.domain.com
      image: codefresh/controller

mongodb:
  image:
    repository: codefresh/mongodb

postgresql:
  image:
    repository: codefresh/postgresql

consul:
  image:
    repository: codefresh/consul

redis:
  image:
    repository: codefresh/redis

rabbitmq:
  image:
    repository: codefresh/rabbitmq

nats:
  image:
    repository: codefresh/nats

builder:
  container:
    image:
      repository: codefresh/docker

runner:
  container:
    image:
      repository: codefresh/docker

internal-gateway:
  container:
    image:
      repository: codefresh/nginx-unprivileged

helm-repo-manager:
  chartmuseum:
    image:
      repository: myregistry.domain.com/codefresh/chartmuseum

cf-platform-analytics-platform:
  redis:
    image:
      repository: codefresh/redis
```
The chart installs cf-api as a single deployment. However, at larger scale, we recommend splitting cf-api into multiple roles (one deployment per role) as follows:

```yaml
global:
  # -- Change internal cfapi service address
  cfapiService: cfapi-internal
  # -- Change endpoints cfapi service address
  cfapiEndpointsService: cfapi-endpoints

cfapi: &cf-api
  # -- Disable default cfapi deployment
  enabled: false
  # -- (optional) Enable the autoscaler
  # The value will be merged into each cfapi role. So you can specify it once.
  hpa:
    enabled: true

# Enable cf-api roles
cfapi-auth:
  <<: *cf-api
  enabled: true
cfapi-internal:
  <<: *cf-api
  enabled: true
cfapi-ws:
  <<: *cf-api
  enabled: true
cfapi-admin:
  <<: *cf-api
  enabled: true
cfapi-endpoints:
  <<: *cf-api
  enabled: true
cfapi-terminators:
  <<: *cf-api
  enabled: true
cfapi-sso-group-synchronizer:
  <<: *cf-api
  enabled: true
cfapi-buildmanager:
  <<: *cf-api
  enabled: true
cfapi-cacheevictmanager:
  <<: *cf-api
  enabled: true
cfapi-eventsmanagersubscriptions:
  <<: *cf-api
  enabled: true
cfapi-kubernetesresourcemonitor:
  <<: *cf-api
  enabled: true
cfapi-environments:
  <<: *cf-api
  enabled: true
cfapi-gitops-resource-receiver:
  <<: *cf-api
  enabled: true
cfapi-downloadlogmanager:
  <<: *cf-api
  enabled: true
cfapi-teams:
  <<: *cf-api
  enabled: true
cfapi-kubernetes-endpoints:
  <<: *cf-api
  enabled: true
cfapi-test-reporting:
  <<: *cf-api
  enabled: true
```
The chart installs the non-HA version of Codefresh by default. If you want to run Codefresh in HA mode, use the example values below.

Note! `cronus` is not supported in HA mode; otherwise, builds with CRON triggers will be duplicated.

`values.yaml`

```yaml
cfapi:
  hpa:
    enabled: true
    # These are the defaults for all Codefresh subcharts
    # minReplicas: 2
    # maxReplicas: 10
    # targetCPUUtilizationPercentage: 70

argo-platform:
  abac:
    hpa:
      enabled: true
  analytics-reporter:
    hpa:
      enabled: true
  api-events:
    hpa:
      enabled: true
  api-graphql:
    hpa:
      enabled: true
  audit:
    hpa:
      enabled: true
  cron-executor:
    hpa:
      enabled: true
  event-handler:
    hpa:
      enabled: true
  ui:
    hpa:
      enabled: true

cfui:
  hpa:
    enabled: true

internal-gateway:
  hpa:
    enabled: true

cf-broadcaster:
  hpa:
    enabled: true

cf-platform-analytics-platform:
  hpa:
    enabled: true

charts-manager:
  hpa:
    enabled: true

cluster-providers:
  hpa:
    enabled: true

context-manager:
  hpa:
    enabled: true

gitops-dashboard-manager:
  hpa:
    enabled: true

helm-repo-manager:
  hpa:
    enabled: true

hermes:
  hpa:
    enabled: true

k8s-monitor:
  hpa:
    enabled: true

kube-integration:
  hpa:
    enabled: true

nomios:
  hpa:
    enabled: true

pipeline-manager:
  hpa:
    enabled: true

runtime-environment-manager:
  hpa:
    enabled: true

tasker-kubernetes:
  hpa:
    enabled: true
```
For infra services (MongoDB, PostgreSQL, RabbitMQ, Redis, Consul, Nats, Ingress-NGINX) from the built-in Bitnami charts, you can use the following example:

Note! Use topologySpreadConstraints for better resiliency.

`values.yaml`

```yaml
global:
  postgresService: postgresql-ha-pgpool
  mongodbHost: cf-mongodb-0,cf-mongodb-1,cf-mongodb-2  # Replace `cf` with your Helm Release name
  mongodbOptions: replicaSet=rs0&retryWrites=true
  redisUrl: cf-redis-ha-haproxy

builder:
  controller:
    replicas: 3

consul:
  replicaCount: 3

cfsign:
  controller:
    replicas: 3
  persistence:
    certs-data:
      enabled: false
  volumes:
    certs-data:
      type: emptyDir
  initContainers:
    volume-permissions:
      enabled: false

ingress-nginx:
  controller:
    autoscaling:
      enabled: true

mongodb:
  architecture: replicaset
  replicaCount: 3
  externalAccess:
    enabled: true
    service:
      type: ClusterIP

nats:
  replicaCount: 3

postgresql:
  enabled: false

postgresql-ha:
  enabled: true
  volumePermissions:
    enabled: true

rabbitmq:
  replicaCount: 3

redis:
  enabled: false

redis-ha:
  enabled: true
```
```yaml
global:
  env:
    NODE_EXTRA_CA_CERTS: /etc/ssl/custom/ca.crt

  volumes:
    custom-ca:
      enabled: true
      type: secret
      existingName: my-custom-ca-cert  # existing K8s secret object with the CA cert
      optional: true

  volumeMounts:
    custom-ca:
      path:
        - mountPath: /etc/ssl/custom/ca.crt
          subPath: ca.crt
```
To deploy Codefresh On-Prem on OpenShift, use the following values example:

```yaml
ingress:
  ingressClassName: openshift-default

global:
  dnsService: dns-default
  dnsNamespace: openshift-dns
  clusterDomain: cluster.local

# Requires privileged SCC.
builder:
  enabled: false

cfapi:
  podSecurityContext:
    enabled: false

cf-platform-analytics-platform:
  redis:
    master:
      podSecurityContext:
        enabled: false
      containerSecurityContext:
        enabled: false

cfsign:
  podSecurityContext:
    enabled: false
  initContainers:
    volume-permissions:
      enabled: false

cfui:
  podSecurityContext:
    enabled: false

internal-gateway:
  podSecurityContext:
    enabled: false

helm-repo-manager:
  chartmuseum:
    securityContext:
      enabled: false

consul:
  podSecurityContext:
    enabled: false
  containerSecurityContext:
    enabled: false

cronus:
  podSecurityContext:
    enabled: false

ingress-nginx:
  enabled: false

mongodb:
  podSecurityContext:
    enabled: false
  containerSecurityContext:
    enabled: false

postgresql:
  primary:
    podSecurityContext:
      enabled: false
    containerSecurityContext:
      enabled: false

redis:
  master:
    podSecurityContext:
      enabled: false
    containerSecurityContext:
      enabled: false

rabbitmq:
  podSecurityContext:
    enabled: false
  containerSecurityContext:
    enabled: false

# Requires privileged SCC.
runner:
  enabled: false
```
As outlined in the prerequisites, it's required to set up a Firebase database for build logs streaming:

- Create a Database.
- Create a Legacy token for authentication.
- Set the following rules for the database:

```json
{
  "rules": {
    "build-logs": {
      "$jobId": {
        ".read": "!root.child('production/build-logs/'+$jobId).exists() || (auth != null && auth.admin == true) || (auth == null && data.child('visibility').exists() && data.child('visibility').val() == 'public') || ( auth != null && data.child('accountId').exists() && auth.accountId == data.child('accountId').val() )",
        ".write": "auth != null && data.child('accountId').exists() && auth.accountId == data.child('accountId').val()"
      }
    },
    "environment-logs": {
      "$environmentId": {
        ".read": "!root.child('production/environment-logs/'+$environmentId).exists() || ( auth != null && data.child('accountId').exists() && auth.accountId == data.child('accountId').val() )",
        ".write": "auth != null && data.child('accountId').exists() && auth.accountId == data.child('accountId').val()"
      }
    }
  }
}
```

However, if you're in an air-gapped environment, you can omit this prerequisite and use the built-in logging system (i.e. the OfflineLogging feature flag). See feature management.
With this method, Codefresh by default deletes builds older than six months.

The retention mechanism removes data from the following collections: workflowproccesses, workflowrequests, workflowrevisions.

```yaml
cfapi:
  env:
    # Determines if automatic build deletion through the Cron job is enabled.
    RETENTION_POLICY_IS_ENABLED: true
    # The maximum number of builds to delete by a single Cron job. To avoid database issues, especially when there are large numbers of old builds, we recommend deleting them in small chunks. You can gradually increase the number after verifying that performance is not affected.
    RETENTION_POLICY_BUILDS_TO_DELETE: 50
    # The number of days for which to retain builds. Builds older than the defined retention period are deleted.
    RETENTION_POLICY_DAYS: 180
```
Configuration for Codefresh On-Prem >= 2.x

The previous configuration example (i.e. `RETENTION_POLICY_IS_ENABLED=true`) is also supported in Codefresh On-Prem >= 2.x.

For existing environments, for the retention mechanism to work, you must first drop the created index in the workflowprocesses collection. This requires a maintenance window that depends on the number of builds.

```yaml
cfapi:
  env:
    # Determines if automatic build deletion is enabled.
    TTL_RETENTION_POLICY_IS_ENABLED: true
    # The number of days for which to retain builds; can be between 30 (minimum) and 365 (maximum). Builds older than the defined retention period are deleted.
    TTL_RETENTION_POLICY_IN_DAYS: 180
```
```yaml
pipeline-manager:
  env:
    # Determines project's pipelines limit (default: 500)
    PROJECT_PIPELINES_LIMIT: 500
```
```yaml
cfapi:
  env:
    # Generate a unique session cookie (cf-uuid) on each login
    DISABLE_CONCURRENT_SESSIONS: true
    # Customize cookie domain
    CF_UUID_COOKIE_DOMAIN: .mydomain.com
```
Note! The ingress host for gitops-runtime and the ingress host for the control plane must share the same root domain (i.e. onprem.mydomain.com and runtime.mydomain.com).
```yaml
cfapi:
  env:
    # Set value to the `X-Frame-Options` response header. Control the restrictions of embedding Codefresh page into the iframes.
    # Possible values: sameorigin (default) / deny
    FRAME_OPTIONS: sameorigin

cfui:
  env:
    FRAME_OPTIONS: sameorigin
```
Read more about the header at https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options.
`CONTENT_SECURITY_POLICY` is the string describing the content policies. Use semicolons to separate policies. `CONTENT_SECURITY_POLICY_REPORT_TO` is a comma-separated list of JSON objects. Each object must have a name and an array of endpoints that receive the incoming CSP reports.
For detailed information, see the Content Security Policy article on MDN.
```yaml
cfui:
  env:
    CONTENT_SECURITY_POLICY: "<YOUR SECURITY POLICIES>"
    CONTENT_SECURITY_POLICY_REPORT_ONLY: "default-src 'self'; font-src 'self' https://fonts.gstatic.com; script-src 'self' https://unpkg.com https://js.stripe.com; style-src 'self' https://fonts.googleapis.com; 'unsafe-eval' 'unsafe-inline'"
    CONTENT_SECURITY_POLICY_REPORT_TO: "<LIST OF ENDPOINTS AS JSON OBJECTS>"
```
For detailed information, see Securing your webhooks and Webhooks.

```yaml
cfapi:
  env:
    USE_SHA256_GITHUB_SIGNATURE: "true"
```

In Codefresh On-Prem 2.6.x, all Codefresh-owned microservices include image digests in the default subchart values.
For example, the default values for cfapi might look like this:

```yaml
container:
  image:
    registry: us-docker.pkg.dev/codefresh-enterprise/gcr.io
    repository: codefresh/cf-api
    tag: 21.268.1
    digest: "sha256:bae42f8efc18facc2bf93690fce4ab03ef9607cec4443fada48292d1be12f5f8"
    pullPolicy: IfNotPresent
```
This results in the following image reference in the pod spec:

```yaml
spec:
  containers:
    - name: cfapi
      image: us-docker.pkg.dev/codefresh-enterprise/gcr.io/codefresh/cf-api:21.268.1@sha256:bae42f8efc18facc2bf93690fce4ab03ef9607cec4443fada48292d1be12f5f8
```
Note! When the `digest` is provided, the `tag` is ignored! You can omit the digest and use the tag only, as in the following `values.yaml` example:

```yaml
cfapi:
  container:
    image:
      tag: 21.268.1
      # -- Set empty tag for digest
      digest: ""
```
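A sketch of how the final pod-spec reference is assembled from these fields (values taken from the example above; with a digest, the image is pinned to `tag@digest`, while an empty digest falls back to the tag alone):

```shell
registry="us-docker.pkg.dev/codefresh-enterprise/gcr.io"
repository="codefresh/cf-api"
tag="21.268.1"
digest="sha256:bae42f8efc18facc2bf93690fce4ab03ef9607cec4443fada48292d1be12f5f8"
# Digest takes precedence; the tag is kept in the reference for readability
if [ -n "$digest" ]; then
  image="${registry}/${repository}:${tag}@${digest}"
else
  image="${registry}/${repository}:${tag}"
fi
echo "$image"
```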
OpenID Connect (OIDC) allows Codefresh Builds to access resources in your cloud provider (such as AWS, Azure, GCP), without needing to store cloud credentials as long-lived pipeline secret variables.
- DNS name for the OIDC Provider
- Valid TLS certificates for Ingress
- K8s secret containing JWKS (JSON Web Key Sets). Can be generated at mkjwk.org
- K8s secret containing the Client ID (public identifier for the app) and Client Secret (application password; a cryptographically strong random string)
NOTE! In production usage, use External Secrets Operator or HashiCorp Vault to create secrets. The following example uses `kubectl` for brevity.
For JWKS, use the Public and Private Keypair Set (if generated at mkjwk.org), for example:

`cf-oidc-provider-jwks.json`:

```json
{
  "keys": [
    {
      "p": "...",
      "kty": "RSA",
      "q": "...",
      "d": "...",
      "e": "AQAB",
      "use": "sig",
      "qi": "...",
      "dp": "...",
      "alg": "RS256",
      "dq": "...",
      "n": "..."
    }
  ]
}
```

```shell
# Create the secret containing the JWKS.
# The secret KEY is `cf-oidc-provider-jwks.json`. It is then referenced in the
# `OIDC_JWKS_PRIVATE_KEYS_PATH` environment variable in `cf-oidc-provider`.
# The secret NAME is referenced in `.volumes.jwks-file.nameOverride`
# (the volumeMount is configured in the chart already).
kubectl create secret generic cf-oidc-provider-jwks \
  --from-file=cf-oidc-provider-jwks.json \
  -n $NAMESPACE

# Create the secret containing the Client ID and Client Secret.
# The secret NAME is `cf-oidc-provider-client-secret`.
# It is then referenced in the `OIDC_CF_PLATFORM_CLIENT_ID` and `OIDC_CF_PLATFORM_CLIENT_SECRET`
# environment variables in `cf-oidc-provider`,
# and in `OIDC_PROVIDER_CLIENT_ID` and `OIDC_PROVIDER_CLIENT_SECRET` in `cfapi`.
kubectl create secret generic cf-oidc-provider-client-secret \
  --from-literal=client-id=codefresh \
  --from-literal=client-secret='verysecureclientsecret' \
  -n $NAMESPACE
```
`values.yaml`

```yaml
global:
  # -- Set OIDC Provider URL
  oidcProviderService: "oidc.mydomain.com"
  # -- Default OIDC Provider service client ID in plain text.
  # Optional! If specified here, no need to specify CLIENT_ID/CLIENT_SECRET env vars in cfapi and cf-oidc-provider below.
  oidcProviderClientId: null
  # -- Default OIDC Provider service client secret in plain text.
  # Optional! If specified here, no need to specify CLIENT_ID/CLIENT_SECRET env vars in cfapi and cf-oidc-provider below.
  oidcProviderClientSecret: null

cfapi:
  # -- Set additional variables for cfapi
  # Reference a secret containing Client ID and Client Secret
  env:
    OIDC_PROVIDER_CLIENT_ID:
      valueFrom:
        secretKeyRef:
          name: cf-oidc-provider-client-secret
          key: client-id
    OIDC_PROVIDER_CLIENT_SECRET:
      valueFrom:
        secretKeyRef:
          name: cf-oidc-provider-client-secret
          key: client-secret

cf-oidc-provider:
  # -- Enable OIDC Provider
  enabled: true
  container:
    env:
      OIDC_JWKS_PRIVATE_KEYS_PATH: /secrets/jwks/cf-oidc-provider-jwks.json
      # -- Reference a secret containing Client ID and Client Secret
      OIDC_CF_PLATFORM_CLIENT_ID:
        valueFrom:
          secretKeyRef:
            name: cf-oidc-provider-client-secret
            key: client-id
      OIDC_CF_PLATFORM_CLIENT_SECRET:
        valueFrom:
          secretKeyRef:
            name: cf-oidc-provider-client-secret
            key: client-secret
  volumes:
    jwks-file:
      enabled: true
      type: secret
      # -- Secret name containing JWKS
      nameOverride: "cf-oidc-provider-jwks"
      optional: false
  ingress:
    main:
      # -- Enable ingress for OIDC Provider
      enabled: true
      annotations: {}
      # -- Set ingress class name
      ingressClassName: ""
      hosts:
        # -- Set OIDC Provider URL
        - host: "oidc.mydomain.com"
          paths:
            - path: /
            # For ALB (Application Load Balancer) /* asterisk is required in path
            # e.g.
            # - path: /*
      tls: []
```
Deploy the Helm chart with the new `values.yaml`.

Use https://oidc.mydomain.com/.well-known/openid-configuration to verify the OIDC Provider configuration.
To add the Codefresh OIDC provider to IAM, see the AWS documentation.

- For the provider URL: use the `.Values.global.oidcProviderService` value with the `https://` prefix (i.e. https://oidc.mydomain.com)
- For the Audience: use the `.Values.global.appUrl` value with the `https://` prefix (i.e. https://onprem.mydomain.com)
To configure the role and trust in IAM, see the AWS documentation.

Edit the trust policy to add the `sub` field to the validation conditions. For example, use `StringLike` to allow only builds from a specific pipeline to assume a role in AWS.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/oidc.mydomain.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.mydomain.com:aud": "https://onprem.mydomain.com"
        },
        "StringLike": {
          "oidc.mydomain.com:sub": "account:64884faac2751b77ca7ab324:pipeline:64f7232ab698cfcb95d93cef:*"
        }
      }
    }
  ]
}
```

To see all the claims supported by the Codefresh OIDC provider, see the `claims_supported` entries at https://oidc.mydomain.com/.well-known/openid-configuration:
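`StringLike` performs glob-style matching on the `sub` claim, so the trailing `*` admits any suffix after the pipeline ID. A minimal illustration (the `sub` value here is a hypothetical example):

```shell
# Hypothetical sub claim issued for a build of the allowed pipeline
sub="account:64884faac2751b77ca7ab324:pipeline:64f7232ab698cfcb95d93cef:workflow:abc123"
pattern="account:64884faac2751b77ca7ab324:pipeline:64f7232ab698cfcb95d93cef:*"
case "$sub" in
  $pattern) result="allow" ;;
  *)        result="deny"  ;;
esac
echo "$result"
```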
"claims_supported": ["sub","account_id","account_name","pipeline_id","pipeline_name","workflow_id","initiator","scm_user_name","scm_repo_url","scm_ref","scm_pull_request_target_branch","sid","auth_time","iss"]
Use the `obtain-oidc-id-token` and `aws-sts-assume-role-with-web-identity` steps to exchange the OIDC ID token (JWT) for a cloud access token.
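For illustration, a minimal pipeline sketch chaining the two steps (the role ARN and session name are placeholders, and the argument names should be verified against each step's marketplace page):

```yaml
version: "1.0"
steps:
  obtain_id_token:
    title: Obtain OIDC ID token
    type: obtain-oidc-id-token
  assume_role:
    title: Exchange the ID token for AWS credentials
    type: aws-sts-assume-role-with-web-identity
    arguments:
      ROLE_ARN: arn:aws:iam::<ACCOUNT_ID>:role/my-codefresh-role  # placeholder
      ROLE_SESSION_NAME: codefresh-build                          # placeholder
```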
Sometimes, in new releases of Codefresh On-Prem, index requirements change. When this happens, it's mentioned in theUpgrading section for the specific release.
ℹ️ If you're upgrading from version `X` to version `Y`, and index requirements were updated in any of the intermediate versions, you only need to align your indexes with the index requirements of version `Y`. To do that, follow the Index alignment instructions.
The required index definitions for each release can be found at the following resources:
- 2.6: https://github.com/codefresh-io/codefresh-onprem-helm/tree/release-2.6/indexes
- 2.7: https://github.com/codefresh-io/codefresh-onprem-helm/tree/release-2.7/indexes
- 2.8: https://github.com/codefresh-io/codefresh-onprem-helm/tree/release-2.8/indexes
The indexes are stored in JSON files with keys and options specified.
The directory structure is:
```
indexes
├── <DB_NAME>                   # MongoDB database name
│   ├── <COLLECTION_NAME>.json  # MongoDB indexes for the specified collection
```
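For illustration only (the exact file schema may differ between releases), an entry in such a file pairs the index keys with its options; a hypothetical `indexes/codefresh/annotations.json` might look like:

```json
[
  {
    "keys": { "accountId": 1, "entityType": 1, "entityId": 1 },
    "options": { "name": "accountId_1_entityType_1_entityId_1" }
  }
]
```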
Overview of the index alignment process:
- Identify the differences between the indexes in your MongoDB instance and the required index definitions.
- Create any missing indexes one by one. (It's important not to create them in bulk.)
- Perform the upgrade of Codefresh On-Prem installation.
- Then remove any unnecessary indexes.
⚠️ Note! Any changes to indexes should be performed during a defined maintenance window or during periods of lowest traffic to MongoDB. Building indexes during time periods where the target collection is under heavy write load can result in reduced write performance and longer index builds. (Source: MongoDB official documentation)
Even minor changes to indexes (e.g., index removal) can cause brief but noticeable performance degradation (Source: MongoDB official documentation)
For self-hosted MongoDB, follow the instructions below:
- Connect to the MongoDB server using the mongosh shell. Open your terminal or command prompt and run the following command, replacing `<connection_string>` with the appropriate MongoDB connection string for your server:
mongosh"<connection_string>"- Retrieve the list of indexes for a specific collection:
```javascript
db.getSiblingDB('<db_name>').getCollection('<collection_name>').getIndexes()
```
- Compare your indexes with the required indexes for the target release, and adjust them by creating any missing indexes or removing any unnecessary ones
Index creation
⚠Note! Always create indexes sequentially, one by one. Don't create them in bulk.
- To create an index, use the `createIndex()` method:
```javascript
db.getSiblingDB('<db_name>').getCollection('<collection_name>').createIndex(<keys_object>, <options_object>)
```
After executing thecreateIndex() command, you should see a result indicating that the index was created successfully.
Index removal
- To remove an index, use the `dropIndex()` method with `<index_name>`:
```javascript
db.getSiblingDB('<db_name>').getCollection('<collection_name>').dropIndex('<index_name>')
```
If you're hosting MongoDB onAtlas, use the followingManage Indexes guide to View, Create or Remove indexes.
⚠️ Important! In Atlas, for production environments, it is recommended to use rolling index builds by enabling the "Build index via rolling process" checkbox. (MongoDB official documentation)
This major chart version change (v1.4.X -> v2.0.0) contains incompatible breaking changes that require manual actions.
Before applying the upgrade, read through this section!
Codefresh 2.0 chart includes additional dependent microservices (charts):

- `argo-platform`: Main Codefresh GitOps module.
- `internal-gateway`: NGINX that proxies requests to the correct components (api-graphql, api-events, ui).
- `argo-hub-platform`: Service for Argo Workflow templates.
- `platform-analytics` and `etl-starter`: Services for the Pipelines dashboard.
These services require additional databases in MongoDB (`audit`/`read-models`/`platform-analytics-postgres`) and in PostgreSQL (`analytics` and `analytics_pre_aggregations`). The Helm chart is configured to re-run seed jobs to create the necessary databases and users during the upgrade.
```yaml
seed:
  # -- Enable all seed jobs
  enabled: true
```
Starting from version 2.0.0, two new MongoDB indexes have been added that are vital for optimizing database queries and enhancing overall system performance. It is crucial to create these indexes before performing the upgrade to avoid any potential performance degradation.
- `account_1_annotations.key_1_annotations.value_1` (db: `codefresh`; collection: `workflowprocesses`)

  ```json
  { "account": 1, "annotations.key": 1, "annotations.value": 1 }
  ```

- `accountId_1_entityType_1_entityId_1` (db: `codefresh`; collection: `annotations`)

  ```json
  { "accountId": 1, "entityType": 1, "entityId": 1 }
  ```

To prevent potential performance degradation during the upgrade, schedule a maintenance window during a period of low activity or minimal user impact, and create the indexes above before initiating the upgrade process. By proactively creating these indexes, you avoid the application automatically creating them during the upgrade and ensure a smooth transition with optimized performance.
Index Creation
If you're hosting MongoDB on Atlas, use the "Create, View, Drop, and Hide Indexes" guide to create the indexes mentioned above. It's important to create them in a rolling fashion (i.e. with the "Build index via rolling process" checkbox enabled) in a production environment.
For self-hosted MongoDB, see the following instructions:
- Connect to the MongoDB server using themongosh shell. Open your terminal or command prompt and run the following command, replacing <connection_string> with the appropriate MongoDB connection string for your server:
mongosh "<connection_string>"- Once connected, switch to the
codefreshdatabase where the index will be located using theusecommand.
use codefresh- To create the indexes, use the createIndex() method. The createIndex() method should be executed on the db object.
```javascript
db.workflowprocesses.createIndex({ account: 1, 'annotations.key': 1, 'annotations.value': 1 }, { name: 'account_1_annotations.key_1_annotations.value_1', sparse: true, background: true })
db.annotations.createIndex({ accountId: 1, entityType: 1, entityId: 1 }, { name: 'accountId_1_entityType_1_entityId_1', background: true })
```

After executing the `createIndex()` command, you should see a result indicating that the index was created successfully.
⚠️ Kcfi Deprecation
This major release deprecates the kcfi installer. The recommended way to install Codefresh On-Prem is Helm. As a result, the kcfi `config.yaml` is not directly compatible with a Helm-based installation. You can still reuse the same `config.yaml` for the Helm chart, but you need to remove (or update) the following sections.
`.Values.metadata` is deprecated. Remove it from `config.yaml`
1.4.xconfig.yaml
```yaml
metadata:
  kind: codefresh
  installer:
    type: helm
    helm:
      chart: codefresh
      repoUrl: http://chartmuseum.codefresh.io/codefresh
      version: 1.4.x
```
`.Values.kubernetes` is deprecated. Remove it from `config.yaml`
1.4.xconfig.yaml
```yaml
kubernetes:
  namespace: codefresh
  context: context-name
```
`.Values.tls` (`.Values.webTLS`) is moved under `.Values.ingress.tls`. Remove `.Values.tls` from `config.yaml` afterwards. See the full `values.yaml`.
1.4.xconfig.yaml
```yaml
tls:
  selfSigned: false
  cert: certs/certificate.crt
  key: certs/private.key
```
2.0.0config.yaml
```yaml
# -- Ingress
ingress:
  # -- Enable the Ingress
  enabled: true
  # -- Set the ingressClass that is used for the ingress.
  ingressClassName: nginx-codefresh
  tls:
    # -- Enable TLS
    enabled: true
    # -- Default secret name to be created with provided `cert` and `key` below
    secretName: "star.codefresh.io"
    # -- Certificate (base64 encoded)
    cert: "LS0tLS1CRUdJTiBDRVJ...."
    # -- Private key (base64 encoded)
    key: "LS0tLS1CRUdJTiBSU0E..."
    # -- Existing `kubernetes.io/tls` type secret with TLS certificates (keys: `tls.crt`, `tls.key`)
    existingSecret: ""
```
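The `cert` and `key` values are base64 encoded. A small sketch for producing them from PEM files (the file paths are illustrative):

```shell
# Base64-encode the TLS certificate and key as single lines for values.yaml.
# Paths are illustrative; point them at your actual PEM files.
CERT_B64=$(base64 < certs/certificate.crt | tr -d '\n')
KEY_B64=$(base64 < certs/private.key | tr -d '\n')
printf 'ingress:\n  tls:\n    cert: "%s"\n    key: "%s"\n' "$CERT_B64" "$KEY_B64"
```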
- `.Values.images` is deprecated. Remove `.Values.images` from `config.yaml`
- `.Values.images.codefreshRegistrySa` is changed to `.Values.imageCredentials`
- `.Values.privateRegistry.address` is changed to `.Values.global.imageRegistry` (no trailing slash `/` at the end)
1.4.xconfig.yaml
```yaml
images:
  codefreshRegistrySa: sa.json
  usePrivateRegistry: true
  privateRegistry:
    address: myprivateregistry.domain
    username: username
    password: password
```
2.0.0config.yaml
```yaml
# -- Credentials for Image Pull Secret object
imageCredentials: {}
# Pass sa.json (as a single line). Obtain GCR Service Account JSON (sa.json) at support@codefresh.io
# E.g.:
# imageCredentials:
#   registry: gcr.io
#   username: _json_key
#   password: '{ "type": "service_account", "project_id": "codefresh-enterprise", "private_key_id": ... }'
```
2.0.0config.yaml
```yaml
global:
  # -- Global Docker image registry
  imageRegistry: "myprivateregistry.domain"
```
`.Values.dbinfra` is deprecated. Remove it from `config.yaml`
1.4.xconfig.yaml
```yaml
dbinfra:
  enabled: false
```
`.Values.firebaseUrl` and `.Values.firebaseSecret` are moved under `.Values.global`
1.4.xconfig.yaml
```yaml
firebaseUrl: <url>
firebaseSecret: <secret>
newrelicLicenseKey: <key>
```
2.0.0config.yaml
```yaml
global:
  # -- Firebase URL for logs streaming.
  firebaseUrl: ""
  # -- Firebase Secret.
  firebaseSecret: ""
  # -- New Relic Key
  newrelicLicenseKey: ""
```
`.Values.global.certsJobs` and `.Values.global.seedJobs` are deprecated. Use `.Values.seed.mongoSeedJob` and `.Values.seed.postgresSeedJob`. See the full `values.yaml`.
1.4.xconfig.yaml
```yaml
global:
  certsJobs: true
  seedJobs: true
```
2.0.0config.yaml
```yaml
seed:
  # -- Enable all seed jobs
  enabled: true
  # -- Mongo Seed Job. Required at first install. Seeds the required data (default idp/user/account), creates cfuser and required databases.
  # @default -- See below
  mongoSeedJob:
    enabled: true
  # -- Postgres Seed Job. Required at first install. Creates required user and databases.
  # @default -- See below
  postgresSeedJob:
    enabled: true
```
⚠️ Migration toLibrary Charts
All Codefresh subchart templates (i.e. `cfapi`, `cfui`, `pipeline-manager`, `context-manager`, etc.) have been migrated to use Helm library charts. That allows unifying the values structure across all Codefresh-owned charts. However, there are some immutable fields in the old charts which cannot be upgraded during a regular `helm upgrade` and require additional manual actions.
Run the following commands before applying the upgrade.
- Delete the `cf-runner` and `cf-builder` stateful sets:
```shell
kubectl delete sts cf-runner --namespace $NAMESPACE
kubectl delete sts cf-builder --namespace $NAMESPACE
```
- Delete all jobs
```shell
kubectl delete job --namespace $NAMESPACE -l release=cf
```

- In `values.yaml`/`config.yaml`, remove the `.Values.nomios.ingress` section if you have it:
```yaml
nomios:
  # Remove ingress section
  ingress:
    ...
```
Due to the deprecation of the legacy ChartMuseum subchart in favor of the upstream chartmuseum chart, you need to remove the old deployment before the upgrade, due to an immutable `matchLabels` field change in the deployment spec.
```shell
kubectl delete deploy cf-chartmuseum --namespace $NAMESPACE
```

- If you have `.persistence.enabled=true` defined and NOT `.persistence.existingClaim`, like:
```yaml
helm-repo-manager:
  chartmuseum:
    persistence:
      enabled: true
```
then you have to back up the content of the old PVC (mounted as `/storage` in the old deployment) before the upgrade!
```shell
POD_NAME=$(kubectl get pod -l app=chartmuseum -n $NAMESPACE --no-headers -o custom-columns=":metadata.name")
kubectl cp -n $NAMESPACE $POD_NAME:/storage $(pwd)/storage
```
After the upgrade, restore the content into new deployment:
```shell
POD_NAME=$(kubectl get pod -l app.kubernetes.io/name=chartmuseum -n $NAMESPACE --no-headers -o custom-columns=":metadata.name")
kubectl cp -n $NAMESPACE $(pwd)/storage $POD_NAME:/storage
```
- If you have `.persistence.existingClaim` defined, you can keep it as is:
```yaml
helm-repo-manager:
  chartmuseum:
    existingClaim: my-claim-name
```
- If you have `.Values.global.imageRegistry` specified, it won't be applied to the new chartmuseum subchart. Add the image registry explicitly for the subchart as follows:
```yaml
global:
  imageRegistry: myregistry.domain.com

helm-repo-manager:
  chartmuseum:
    image:
      repository: myregistry.domain.com/codefresh/chartmuseum
```
The values structure for argo-platform images has been changed: a `registry` key was added to align with the rest of the services.
values for <= v2.0.16
```yaml
argo-platform:
  api-graphql:
    image:
      repository: gcr.io/codefresh-enterprise/codefresh-io/argo-platform-api-graphql
  abac:
    image:
      repository: gcr.io/codefresh-enterprise/codefresh-io/argo-platform-abac
  analytics-reporter:
    image:
      repository: gcr.io/codefresh-enterprise/codefresh-io/argo-platform-analytics-reporter
  api-events:
    image:
      repository: gcr.io/codefresh-enterprise/codefresh-io/argo-platform-api-events
  audit:
    image:
      repository: gcr.io/codefresh-enterprise/codefresh-io/argo-platform-audit
  cron-executor:
    image:
      repository: gcr.io/codefresh-enterprise/codefresh-io/argo-platform-cron-executor
  event-handler:
    image:
      repository: gcr.io/codefresh-enterprise/codefresh-io/argo-platform-event-handler
  ui:
    image:
      repository: gcr.io/codefresh-enterprise/codefresh-io/argo-platform-ui
```
values for >= v2.0.17
```yaml
argo-platform:
  api-graphql:
    image:
      registry: gcr.io/codefresh-enterprise
      repository: codefresh-io/argo-platform-api-graphql
  abac:
    image:
      registry: gcr.io/codefresh-enterprise
      repository: codefresh-io/argo-platform-abac
  analytics-reporter:
    image:
      registry: gcr.io/codefresh-enterprise
      repository: codefresh-io/argo-platform-analytics-reporter
  api-events:
    image:
      registry: gcr.io/codefresh-enterprise
      repository: codefresh-io/argo-platform-api-events
  audit:
    image:
      registry: gcr.io/codefresh-enterprise
      repository: codefresh-io/argo-platform-audit
  cron-executor:
    image:
      registry: gcr.io/codefresh-enterprise
      repository: codefresh-io/argo-platform-cron-executor
  event-handler:
    image:
      registry: gcr.io/codefresh-enterprise
      repository: codefresh-io/argo-platform-event-handler
  ui:
    image:
      registry: gcr.io/codefresh-enterprise
      repository: codefresh-io/argo-platform-ui
```
- Changed default ingress paths. All paths point to `internal-gateway` now. Remove any overrides at `.Values.ingress.services`! (updated example for ALB)
- Deprecated `global.mongoURI`. Supported for backward compatibility!
- Added `global.mongodbProtocol` / `global.mongodbUser` / `global.mongodbPassword` / `global.mongodbHost` / `global.mongodbOptions`
- Added `global.mongodbUserSecretKeyRef` / `global.mongodbPasswordSecretKeyRef` / `global.mongodbHostSecretKeyRef`
- Added `seed.mongoSeedJob.mongodbRootUserSecretKeyRef` / `seed.mongoSeedJob.mongodbRootPasswordSecretKeyRef`
- Added `seed.postgresSeedJob.postgresUserSecretKeyRef` / `seed.postgresSeedJob.postgresPasswordSecretKeyRef`
- Added `global.firebaseUrlSecretKeyRef` / `global.firebaseSecretSecretKeyRef`
- Added `global.postgresUserSecretKeyRef` / `global.postgresPasswordSecretKeyRef` / `global.postgresHostnameSecretKeyRef`
- Added `global.rabbitmqUsernameSecretKeyRef` / `global.rabbitmqPasswordSecretKeyRef` / `global.rabbitmqHostnameSecretKeyRef`
- Added `global.redisPasswordSecretKeyRef` / `global.redisUrlSecretKeyRef`
- Removed `global.runtimeMongoURI` (defaults to `global.mongoURI` or `global.mongodbHost`/`global.mongodbHostSecretKeyRef`/etc. values)
- Removed `global.runtimeMongoDb` (defaults to `global.mongodbDatabase`)
- Removed `global.runtimeRedisHost` (defaults to `global.redisUrl`/`global.redisUrlSecretKeyRef` or `global.redisService`)
- Removed `global.runtimeRedisPort` (defaults to `global.redisPort`)
- Removed `global.runtimeRedisPassword` (defaults to `global.redisPassword`/`global.redisPasswordSecretKeyRef`)
- Removed `global.runtimeRedisDb` (defaults to the values below)
```yaml
cfapi:
  env:
    RUNTIME_REDIS_DB: 0
cf-broadcaster:
  env:
    REDIS_DB: 0
```
Since version 2.1.7, the chart is pushed only to the OCI registry at
oci://quay.io/codefresh/codefresh
Versions prior to 2.1.7 are still available in ChartMuseum at
http://chartmuseum.codefresh.io/codefresh
Codefresh On-Prem 2.2.x uses MongoDB 5.x (4.x is still supported). If you run external MongoDB, it is highly recommended to upgrade it to 5.x after upgrading Codefresh On-Prem to 2.2.x.
If you run external Redis, this is not applicable to you.
Codefresh On-Prem 2.2.x adds (does not replace!) an optional Redis-HA setup (a master/replica configuration with Sentinel sidecars for failover management) as an alternative to the single Redis instance. To enable it, see the following values:
```yaml
global:
  redisUrl: cf-redis-ha-haproxy  # Replace `cf` with your Helm release name

# -- Disable standalone Redis instance
redis:
  enabled: false

# -- Enable Redis HA
redis-ha:
  enabled: true
```
Image registry changed from GCR (gcr.io) to GAR (us-docker.pkg.dev)

Update `.Values.imageCredentials.registry` to `us-docker.pkg.dev` if it's explicitly set to `gcr.io` in your values file.
Default `.Values.imageCredentials` for On-Prem v2.2.x and below
```yaml
imageCredentials:
  registry: gcr.io
  username: _json_key
  password: <YOUR_SERVICE_ACCOUNT_JSON_HERE>
```
Default `.Values.imageCredentials` for On-Prem v2.3.x and above
```yaml
imageCredentials:
  registry: us-docker.pkg.dev
  username: _json_key
  password: <YOUR_SERVICE_ACCOUNT_JSON_HERE>
```
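The registry change can also be scripted; a minimal sketch, assuming your values file is named `cf-values.yaml` and `registry: gcr.io` sits on its own line:

```shell
# Replace imageCredentials.registry gcr.io with us-docker.pkg.dev in place (keeps a .bak copy).
sed -i.bak 's|^\([[:space:]]*registry:[[:space:]]*\)gcr\.io[[:space:]]*$|\1us-docker.pkg.dev|' cf-values.yaml
```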
Use `helm history` to determine which revision worked, then use `helm rollback` to perform a rollback.

When rolling back from 2.x, prune these resources due to immutable field changes:
```shell
kubectl delete sts cf-runner --namespace $NAMESPACE
kubectl delete sts cf-builder --namespace $NAMESPACE
kubectl delete deploy cf-chartmuseum --namespace $NAMESPACE
kubectl delete job --namespace $NAMESPACE -l release=$RELEASE_NAME
```
```shell
helm rollback $RELEASE_NAME $RELEASE_NUMBER \
  --namespace $NAMESPACE \
  --debug \
  --wait
```
A new `cfapi-auth` role is introduced in 2.4.x.

If you run On-Prem with a multi-role cfapi configuration, make sure to enable the `cfapi-auth` role:
```yaml
cfapi-auth:
  <<: *cf-api
  enabled: true
```
Since 2.4.x, `SYSTEM_TYPE` is changed to `PROJECT_ONE` by default.

If you want to preserve the original `CLASSIC` value, update the cfapi environment variables:
```yaml
cfapi:
  container:
    env:
      DEFAULT_SYSTEM_TYPE: CLASSIC
```
⚠️ WARNING! MongoDB indexes changed! Please follow the Maintaining MongoDB indexes guide to meet the index requirements BEFORE the upgrade process.
⚠️ WARNING! MongoDB indexes changed! Please follow the Maintaining MongoDB indexes guide to meet the index requirements BEFORE the upgrade process.
- Added option to provide global `tolerations`/`nodeSelector`/`affinity` for all Codefresh subcharts

Note! These global settings are not applied to Bitnami subcharts (e.g. `mongodb`, `redis`, `rabbitmq`, `postgresql`, etc.)
```yaml
global:
  tolerations:
    - key: "key"
      operator: "Equal"
      value: "value"
      effect: "NoSchedule"
  nodeSelector:
    key: "value"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: "key"
                operator: "In"
                values:
                  - "value"
```
⚠️ WARNING! MongoDB indexes changed! Please follow the Maintaining MongoDB indexes guide to meet the index requirements BEFORE the upgrade process.
Default MongoDB image is changed from 6.x to 7.x.
If you run external MongoDB (e.g. Atlas), it is required to upgrade it to 7.x after upgrading Codefresh On-Prem to 2.8.x.
- Before the upgrade, for backward compatibility (in case you need to roll back to 6.x), set `featureCompatibilityVersion` to `6.0` in your values file:
```yaml
mongodb:
  migration:
    enabled: true
    featureCompatibilityVersion: "6.0"
```
Perform Codefresh On-Prem upgrade to 2.8.x. Make sure all systems are up and running.
After the upgrade, if all systems are stable, set `featureCompatibilityVersion` to `7.0` in your values file and re-deploy the chart:
```yaml
mongodb:
  migration:
    enabled: true
    featureCompatibilityVersion: "7.0"
```
```yaml
mongodb:
  migration:
    enabled: false
```
Default PostgreSQL image is changed from 13.x to 17.x
If you run external PostgreSQL, follow theofficial instructions to upgrade to 17.x.
⚠️ Important!
The default SSL configuration may change on your provider's side when you upgrade.
Please read the following section before the upgrade:Using SSL with a PostgreSQL
For the built-in `bitnami/postgresql` subchart, a direct upgrade is not supported due to incompatible breaking changes in the database files. You will see the following error in the logs:
```console
postgresql 17:36:28.41 INFO  ==> ** Starting PostgreSQL **
2025-05-21 17:36:28.432 GMT [1] FATAL:  database files are incompatible with server
2025-05-21 17:36:28.432 GMT [1] DETAIL:  The data directory was initialized by PostgreSQL version 13, which is not compatible with this version 17.2.
```

You need to back up your data, delete the old PostgreSQL StatefulSet with its PVCs, and restore the data into a new PostgreSQL StatefulSet.
Before the upgrade, back up your data to a separate PVC:

- Create a PVC with the same or larger size as your current PostgreSQL PVC:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgresql-dump
spec:
  storageClassName: <STORAGE_CLASS>
  resources:
    requests:
      storage: <PVC_SIZE>
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
```
- Create a job to dump the data from the old PostgreSQL StatefulSet into the new PVC:
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: postgresql-dump
spec:
  ttlSecondsAfterFinished: 300
  template:
    spec:
      containers:
        - name: postgresql-dump
          image: quay.io/codefresh/postgresql:17
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "1Gi"
              cpu: "1"
          env:
            - name: PGUSER
              value: "<POSTGRES_USER>"
            - name: PGPASSWORD
              value: "<POSTGRES_PASSWORD>"
            - name: PGHOST
              value: "<POSTGRES_HOST>"
            - name: PGPORT
              value: "<POSTGRES_PORT>"
          command:
            - "/bin/bash"
            - "-c"
            - |
              pg_dumpall --verbose > /opt/postgresql-dump/dump.sql
          volumeMounts:
            - name: postgresql-dump
              mountPath: /opt/postgresql-dump
      securityContext:
        runAsUser: 0
        fsGroup: 0
      volumes:
        - name: postgresql-dump
          persistentVolumeClaim:
            claimName: postgresql-dump
      restartPolicy: Never
```
- Delete old PostgreSQL StatefulSet and PVC
```shell
STS_NAME=$(kubectl get sts -n $NAMESPACE -l app.kubernetes.io/instance=$RELEASE_NAME -l app.kubernetes.io/name=postgresql -o jsonpath='{.items[0].metadata.name}')
PVC_NAME=$(kubectl get pvc -n $NAMESPACE -l app.kubernetes.io/instance=$RELEASE_NAME -l app.kubernetes.io/name=postgresql -o jsonpath='{.items[0].metadata.name}')
kubectl delete sts $STS_NAME -n $NAMESPACE
kubectl delete pvc $PVC_NAME -n $NAMESPACE
```
- Perform the upgrade to 2.8.x with the PostgreSQL seed job enabled to re-create users and databases:
```yaml
seed:
  postgresSeedJob:
    enabled: true
```
- Create a job to restore the data from the new PVC into the new PostgreSQL StatefulSet:
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: postgresql-restore
spec:
  ttlSecondsAfterFinished: 300
  template:
    spec:
      containers:
        - name: postgresql-restore
          image: quay.io/codefresh/postgresql:17
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "1Gi"
              cpu: "1"
          env:
            - name: PGUSER
              value: "<POSTGRES_USER>"
            - name: PGPASSWORD
              value: "<POSTGRES_PASSWORD>"
            - name: PGHOST
              value: "<POSTGRES_HOST>"
            - name: PGPORT
              value: "<POSTGRES_PORT>"
          command:
            - "/bin/bash"
            - "-c"
            - |
              psql -f /opt/postgresql-dump/dump.sql
          volumeMounts:
            - name: postgresql-dump
              mountPath: /opt/postgresql-dump
      securityContext:
        runAsUser: 0
        fsGroup: 0
      volumes:
        - name: postgresql-dump
          persistentVolumeClaim:
            claimName: postgresql-dump
      restartPolicy: Never
```
Default RabbitMQ image is changed from 3.x to 4.0
If you run external RabbitMQ, follow theofficial instructions to upgrade to 4.0
For the built-in RabbitMQ (`bitnami/rabbitmq` subchart), a pre-upgrade hook was added to enable all stable feature flags.
- Added option to provide `.Values.global.tolerations`/`.Values.global.nodeSelector`/`.Values.global.affinity` for all Codefresh subcharts
- Changed default location for public images from `quay.io/codefresh` to `us-docker.pkg.dev/codefresh-inc/public-gcr-io/codefresh`
- `.Values.hooks` was split into `.Values.hooks.mongodb` and `.Values.hooks.consul`
Builds are stuck in pending with `Error: Failed to validate connection to Docker daemon; caused by Error: certificate has expired`
Reason: Runtime certificates have expired.
To check if runtime internal CA expired:
```shell
kubectl -n $NAMESPACE get secret/cf-codefresh-certs-client -o jsonpath="{.data['ca\.pem']}" | base64 -d | openssl x509 -enddate -noout
```

Resolution: Replace the internal CA and re-issue dind certificates for the runtime.
- Delete the k8s secret with the expired certificate:

```shell
kubectl -n $NAMESPACE delete secret cf-codefresh-certs-client
```

- Set `.Values.global.gencerts.enabled=true` (`.Values.global.certsJob=true` for On-Prem < 2.x versions):
```yaml
# -- Job to generate internal runtime secrets.
# @default -- See below
gencerts:
  enabled: true
```
- Upgrade the Codefresh On-Prem Helm release. It will recreate the `cf-codefresh-certs-client` secret:
```shell
helm upgrade --install cf codefresh/codefresh \
  -f cf-values.yaml \
  --namespace codefresh \
  --create-namespace \
  --debug \
  --wait \
  --timeout 15m
```
- Restart the `cfapi` and `cfsign` deployments:
```shell
kubectl -n $NAMESPACE rollout restart deployment/cf-cfapi
kubectl -n $NAMESPACE rollout restart deployment/cf-cfsign
```
Case A: Codefresh Runner installed with HELM chart (charts/cf-runtime)
Re-apply the `cf-runtime` Helm chart. The post-upgrade `gencerts-dind` Helm hook will regenerate the dind certificates using the new CA.
Case B: Codefresh Runner installed with legacy CLI (codefresh runner init)
Delete the `codefresh-certs-server` k8s secret and run `./configure-dind-certs.sh` in your runtime namespace.
```shell
kubectl -n $NAMESPACE delete secret codefresh-certs-server
./configure-dind-certs.sh -n $RUNTIME_NAMESPACE https://$CODEFRESH_HOST $CODEFRESH_API_TOKEN
```
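To catch an expiring CA before builds start failing, you can also test the certificate against a future point in time instead of only printing its end date; a sketch using openssl's `-checkend` flag (same secret name as in the steps above):

```shell
# -checkend exits 0 if the runtime CA is still valid 30 days (2592000 s) from now.
kubectl -n $NAMESPACE get secret/cf-codefresh-certs-client -o jsonpath="{.data['ca\.pem']}" \
  | base64 -d \
  | openssl x509 -noout -checkend 2592000 \
  && echo "CA valid for at least 30 more days" \
  || echo "CA expires within 30 days - plan a rotation"
```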
Consul Error: Refusing to rejoin cluster because the server has been offline for more than the configured server_rejoin_age_max
After a platform upgrade, Consul fails with the error `refusing to rejoin cluster because the server has been offline for more than the configured server_rejoin_age_max - consider wiping your data dir`. This is a known issue with `hashicorp/consul` behaviour. Wipe out or delete the Consul PV holding the config data and restart the Consul StatefulSet.
| Key | Type | Default | Description |
|---|---|---|---|
| argo-hub-platform | object | See below | argo-hub-platform |
| argo-platform | object | See below | argo-platform |
| argo-platform.abac | object | See below | abac |
| argo-platform.analytics-reporter | object | See below | analytics-reporter |
| argo-platform.anchors | object | See below | Anchors |
| argo-platform.api-events | object | See below | api-events |
| argo-platform.api-graphql | object | See below | api-graphql. All other services under `.Values.argo-platform` follow the same values structure. |
| argo-platform.api-graphql.affinity | object | {} | Set pod's affinity |
| argo-platform.api-graphql.env | object | See below | Env vars |
| argo-platform.api-graphql.hpa | object | {"enabled":false} | HPA |
| argo-platform.api-graphql.hpa.enabled | bool | false | Enable autoscaler |
| argo-platform.api-graphql.image | object | {"registry":"us-docker.pkg.dev/codefresh-enterprise/gcr.io","repository":"codefresh/argo-platform-api-graphql"} | Image |
| argo-platform.api-graphql.image.registry | string | "us-docker.pkg.dev/codefresh-enterprise/gcr.io" | Registry |
| argo-platform.api-graphql.image.repository | string | "codefresh/argo-platform-api-graphql" | Repository |
| argo-platform.api-graphql.kind | string | "Deployment" | Controller kind. Currently, onlyDeployment is supported |
| argo-platform.api-graphql.pdb | object | {"enabled":false} | PDB |
| argo-platform.api-graphql.pdb.enabled | bool | false | Enable pod disruption budget |
| argo-platform.api-graphql.podAnnotations | object | `{"checksum/secret":"{{ include (print $.Template.BasePath "/api-graphql/secret.yaml") . | sha256sum }}"}` | Set pod's annotations |
| argo-platform.api-graphql.resources | object | See below | Resource limits and requests |
| argo-platform.api-graphql.secrets | object | See below | Secrets |
| argo-platform.api-graphql.tolerations | list | [] | Set pod's tolerations |
| argo-platform.argocd-hooks | object | See below | argocd-hooks Don't enable! Not used in onprem! |
| argo-platform.audit | object | See below | audit |
| argo-platform.broadcaster | object | See below | broadcaster |
| argo-platform.cron-executor | object | See below | cron-executor |
| argo-platform.event-handler | object | See below | event-handler |
| argo-platform.promotion-orchestrator | object | See below | promotion-orchestrator |
| argo-platform.runtime-manager | object | See below | runtime-manager Don't enable! Not used in onprem! |
| argo-platform.runtime-monitor | object | See below | runtime-monitor Don't enable! Not used in onprem! |
| argo-platform.ui | object | See below | ui |
| argo-platform.useExternalSecret | bool | false | Use regular k8s secret object. Keepfalse! |
| builder | object | {"affinity":{},"container":{"image":{"registry":"docker.io","repository":"library/docker","tag":"28.3-dind"}},"enabled":true,"imagePullSecrets":[],"initContainers":{"register":{"image":{"registry":"us-docker.pkg.dev/codefresh-inc/public-gcr-io","repository":"codefresh/curl","tag":"8.14.1"}}},"nodeSelector":{},"podSecurityContext":{},"resources":{},"tolerations":[]} | builder |
| cf-broadcaster | object | See below | broadcaster |
| cf-oidc-provider | object | See below | cf-oidc-provider |
| cf-platform-analytics-etlstarter | object | See below | etl-starter |
| cf-platform-analytics-etlstarter.redis.enabled | bool | false | Disable redis subchart |
| cf-platform-analytics-etlstarter.system-etl-postgres | object | {"container":{"env":{"BLUE_GREEN_ENABLED":true}},"controller":{"cronjob":{"ttlSecondsAfterFinished":300}},"enabled":true} | Only postgres ETL should be running in onprem |
| cf-platform-analytics-platform | object | See below | platform-analytics |
| cfapi | object | {"affinity":{},"container":{"env":{"AUDIT_AUTO_CREATE_DB":true,"DEFAULT_SYSTEM_TYPE":"PROJECT_ONE","GITHUB_API_PATH_PREFIX":"/api/v3","LOGGER_LEVEL":"debug","OIDC_PROVIDER_PORT":"{{ .Values.global.oidcProviderPort }}","OIDC_PROVIDER_PROTOCOL":"{{ .Values.global.oidcProviderProtocol }}","OIDC_PROVIDER_TOKEN_ENDPOINT":"{{ .Values.global.oidcProviderTokenEndpoint }}","OIDC_PROVIDER_URI":"{{ .Values.global.oidcProviderService }}","ON_PREMISE":true,"RUNTIME_MONGO_DB":"codefresh","RUNTIME_REDIS_DB":0},"image":{"registry":"us-docker.pkg.dev/codefresh-enterprise/gcr.io","repository":"codefresh/cf-api"}},"controller":{"replicas":2},"enabled":true,"hpa":{"enabled":false,"maxReplicas":10,"minReplicas":2,"targetCPUUtilizationPercentage":70},"imagePullSecrets":[],"nodeSelector":{},"pdb":{"enabled":false,"minAvailable":"50%"},"podSecurityContext":{},"resources":{"limits":{},"requests":{"cpu":"200m","memory":"256Mi"}},"secrets":{"secret":{"enabled":true,"stringData":{"OIDC_PROVIDER_CLIENT_ID":"{{ .Values.global.oidcProviderClientId }}","OIDC_PROVIDER_CLIENT_SECRET":"{{ .Values.global.oidcProviderClientSecret }}"},"type":"Opaque"}},"tolerations":[]} | cf-api |
| cfapi-internal.<<.affinity | object | {} | Affinity configuration |
| cfapi-internal.<<.container | object | {"env":{"AUDIT_AUTO_CREATE_DB":true,"DEFAULT_SYSTEM_TYPE":"PROJECT_ONE","GITHUB_API_PATH_PREFIX":"/api/v3","LOGGER_LEVEL":"debug","OIDC_PROVIDER_PORT":"{{ .Values.global.oidcProviderPort }}","OIDC_PROVIDER_PROTOCOL":"{{ .Values.global.oidcProviderProtocol }}","OIDC_PROVIDER_TOKEN_ENDPOINT":"{{ .Values.global.oidcProviderTokenEndpoint }}","OIDC_PROVIDER_URI":"{{ .Values.global.oidcProviderService }}","ON_PREMISE":true,"RUNTIME_MONGO_DB":"codefresh","RUNTIME_REDIS_DB":0},"image":{"registry":"us-docker.pkg.dev/codefresh-enterprise/gcr.io","repository":"codefresh/cf-api"}} | Container configuration |
| cfapi-internal.<<.container.env | object | See below | Env vars |
| cfapi-internal.<<.container.image | object | {"registry":"us-docker.pkg.dev/codefresh-enterprise/gcr.io","repository":"codefresh/cf-api"} | Image |
| cfapi-internal.<<.container.image.registry | string | "us-docker.pkg.dev/codefresh-enterprise/gcr.io" | Registry prefix |
| cfapi-internal.<<.container.image.repository | string | "codefresh/cf-api" | Repository |
| cfapi-internal.<<.controller | object | {"replicas":2} | Controller configuration |
| cfapi-internal.<<.controller.replicas | int | 2 | Replicas number |
| cfapi-internal.<<.enabled | bool | true | Enable cf-api |
| cfapi-internal.<<.hpa | object | {"enabled":false,"maxReplicas":10,"minReplicas":2,"targetCPUUtilizationPercentage":70} | Autoscaler configuration |
| cfapi-internal.<<.hpa.enabled | bool | false | Enable HPA |
| cfapi-internal.<<.hpa.maxReplicas | int | 10 | Maximum number of replicas |
| cfapi-internal.<<.hpa.minReplicas | int | 2 | Minimum number of replicas |
| cfapi-internal.<<.hpa.targetCPUUtilizationPercentage | int | 70 | Average CPU utilization percentage |
| cfapi-internal.<<.imagePullSecrets | list | [] | Image pull secrets |
| cfapi-internal.<<.nodeSelector | object | {} | Node selector configuration |
| cfapi-internal.<<.pdb | object | {"enabled":false,"minAvailable":"50%"} | Pod disruption budget configuration |
| cfapi-internal.<<.pdb.enabled | bool | false | Enable PDB |
| cfapi-internal.<<.pdb.minAvailable | string | "50%" | Minimum number of replicas in percentage |
| cfapi-internal.<<.podSecurityContext | object | {} | Pod security context configuration |
| cfapi-internal.<<.resources | object | {"limits":{},"requests":{"cpu":"200m","memory":"256Mi"}} | Resource requests and limits |
| cfapi-internal.<<.secrets | object | {"secret":{"enabled":true,"stringData":{"OIDC_PROVIDER_CLIENT_ID":"{{ .Values.global.oidcProviderClientId }}","OIDC_PROVIDER_CLIENT_SECRET":"{{ .Values.global.oidcProviderClientSecret }}"},"type":"Opaque"}} | Secrets configuration |
| cfapi-internal.<<.tolerations | list | [] | Tolerations configuration |
| cfapi-internal.enabled | bool | false | |
| cfapi.affinity | object | {} | Affinity configuration |
| cfapi.container | object | {"env":{"AUDIT_AUTO_CREATE_DB":true,"DEFAULT_SYSTEM_TYPE":"PROJECT_ONE","GITHUB_API_PATH_PREFIX":"/api/v3","LOGGER_LEVEL":"debug","OIDC_PROVIDER_PORT":"{{ .Values.global.oidcProviderPort }}","OIDC_PROVIDER_PROTOCOL":"{{ .Values.global.oidcProviderProtocol }}","OIDC_PROVIDER_TOKEN_ENDPOINT":"{{ .Values.global.oidcProviderTokenEndpoint }}","OIDC_PROVIDER_URI":"{{ .Values.global.oidcProviderService }}","ON_PREMISE":true,"RUNTIME_MONGO_DB":"codefresh","RUNTIME_REDIS_DB":0},"image":{"registry":"us-docker.pkg.dev/codefresh-enterprise/gcr.io","repository":"codefresh/cf-api"}} | Container configuration |
| cfapi.container.env | object | See below | Env vars |
| cfapi.container.image | object | {"registry":"us-docker.pkg.dev/codefresh-enterprise/gcr.io","repository":"codefresh/cf-api"} | Image |
| cfapi.container.image.registry | string | "us-docker.pkg.dev/codefresh-enterprise/gcr.io" | Registry prefix |
| cfapi.container.image.repository | string | "codefresh/cf-api" | Repository |
| cfapi.controller | object | {"replicas":2} | Controller configuration |
| cfapi.controller.replicas | int | 2 | Replicas number |
| cfapi.enabled | bool | true | Enable cf-api |
| cfapi.hpa | object | {"enabled":false,"maxReplicas":10,"minReplicas":2,"targetCPUUtilizationPercentage":70} | Autoscaler configuration |
| cfapi.hpa.enabled | bool | false | Enable HPA |
| cfapi.hpa.maxReplicas | int | 10 | Maximum number of replicas |
| cfapi.hpa.minReplicas | int | 2 | Minimum number of replicas |
| cfapi.hpa.targetCPUUtilizationPercentage | int | 70 | Average CPU utilization percentage |
| cfapi.imagePullSecrets | list | [] | Image pull secrets |
| cfapi.nodeSelector | object | {} | Node selector configuration |
| cfapi.pdb | object | {"enabled":false,"minAvailable":"50%"} | Pod disruption budget configuration |
| cfapi.pdb.enabled | bool | false | Enable PDB |
| cfapi.pdb.minAvailable | string | "50%" | Minimum number of replicas in percentage |
| cfapi.podSecurityContext | object | {} | Pod security context configuration |
| cfapi.resources | object | {"limits":{},"requests":{"cpu":"200m","memory":"256Mi"}} | Resource requests and limits |
| cfapi.secrets | object | {"secret":{"enabled":true,"stringData":{"OIDC_PROVIDER_CLIENT_ID":"{{ .Values.global.oidcProviderClientId }}","OIDC_PROVIDER_CLIENT_SECRET":"{{ .Values.global.oidcProviderClientSecret }}"},"type":"Opaque"}} | Secrets configuration |
| cfapi.tolerations | list | [] | Tolerations configuration |
| cfsign | object | See below | tls-sign |
| cfui | object | See below | cf-ui |
| charts-manager | object | See below | charts-manager |
| ci.enabled | bool | false | |
| cluster-providers | object | See below | cluster-providers |
| consul | object | See below | consul Ref: https://github.com/bitnami/charts/blob/main/bitnami/consul/values.yaml |
| context-manager | object | See below | context-manager |
| cronus | object | See below | cronus |
| developmentChart | bool | false | |
| dockerconfigjson | object | {} | DEPRECATED - Use .imageCredentials instead. dockerconfig (for kcfi tool backward compatibility) for Image Pull Secret. Obtain GCR Service Account JSON (sa.json) at support@codefresh.io. `GCR_SA_KEY_B64=$(cat sa.json |` |
| gencerts | object | See below | Job to generate internal runtime secrets. Required at first install. |
| gitops-dashboard-manager | object | See below | gitops-dashboard-manager |
| global | object | See below | Global parameters |
| global.affinity | object | {} | Global affinity constraints. Apply affinity to all Codefresh subcharts. Will not be applied on Bitnami subcharts. |
| global.appProtocol | string | "https" | Application protocol. |
| global.appUrl | string | "onprem.codefresh.local" | Application root url. Will be used in Ingress objects as hostname |
| global.auditPostgresSchemaName | string | "public" | Set postgres schema name for audit database in plain text. |
| global.broadcasterPort | int | 80 | Default broadcaster service port. |
| global.broadcasterService | string | "cf-broadcaster" | Default broadcaster service name. |
| global.builderService | string | "builder" | Default builder service name. |
| global.cfapiEndpointsService | string | "cfapi" | Default API endpoints service name |
| global.cfapiInternalPort | int | 3000 | Default API service port. |
| global.cfapiService | string | "cfapi" | Default API service name. |
| global.cfk8smonitorService | string | "k8s-monitor" | Default k8s-monitor service name. |
| global.chartsManagerPort | int | 9000 | Default chart-manager service port. |
| global.chartsManagerService | string | "charts-manager" | Default charts-manager service name. |
| global.clusterProvidersPort | int | 9000 | Default cluster-providers service port. |
| global.clusterProvidersService | string | "cluster-providers" | Default cluster-providers service name. |
| global.codefresh | string | "codefresh" | LEGACY - Keep as is! Used for subcharts to access external secrets and configmaps. |
| global.consulHttpPort | int | 8500 | Default Consul service port. |
| global.consulService | string | "consul-headless" | Default Consul service name. |
| global.contextManagerPort | int | 9000 | Default context-manager service port. |
| global.contextManagerService | string | "context-manager" | Default context-manager service name. |
| global.disablePostgresForEventbus | string | "true" | Disables saving eventbus events into Postgres. When set to "false", all events (workflows, jobs, users, etc.) from the eventbus are saved to Postgres, and the following services require a Postgres connection: charts-manager, cluster-providers, context-manager, cfapi, cf-platform-analytics, gitops-dashboard-manager, pipeline-manager, kube-integration, tasker-kubernetes, runtime-environment-manager. |
| global.dnsService | string | "kube-dns" | Definitions for internal-gateway nginx resolver |
| global.env | object | {} | Global Env vars |
| global.firebaseSecret | string | "" | Firebase Secret in plain text |
| global.firebaseSecretSecretKeyRef | object | {} | Firebase Secret from existing secret |
| global.firebaseUrl | string | "https://codefresh-on-prem.firebaseio.com/on-prem" | Firebase URL for logs streaming in plain text |
| global.firebaseUrlSecretKeyRef | object | {} | Firebase URL for logs streaming from existing secret |
| global.gitopsDashboardManagerDatabase | string | "pipeline-manager" | Default gitops-dashboard-manager db collection. |
| global.gitopsDashboardManagerPort | int | 9000 | Default gitops-dashboard-manager service port. |
| global.gitopsDashboardManagerService | string | "gitops-dashboard-manager" | Default gitops-dashboard-manager service name. |
| global.helmRepoManagerService | string | "helm-repo-manager" | Default helm-repo-manager service name. |
| global.hermesService | string | "hermes" | Default hermes service name. |
| global.imagePullSecrets | list | ["codefresh-registry"] | Global Docker registry secret names as array |
| global.imageRegistry | string | "" | Global Docker image registry |
| global.kubeIntegrationPort | int | 9000 | Default kube-integration service port. |
| global.kubeIntegrationService | string | "kube-integration" | Default kube-integration service name. |
| global.mongoURI | string | "" | LEGACY (but still supported) - Use .global.mongodbProtocol + .global.mongodbUser/mongodbUserSecretKeyRef + .global.mongodbPassword/mongodbPasswordSecretKeyRef + .global.mongodbHost/mongodbHostSecretKeyRef + .global.mongodbOptions instead. Default MongoDB URI. Will be used by ALL services to communicate with MongoDB. Ref: https://www.mongodb.com/docs/manual/reference/connection-string/ Note! defaultauthdb is omitted on purpose (i.e. mongodb://.../[defaultauthdb]). |
| global.mongodbDatabase | string | "codefresh" | Default MongoDB database name. Don't change! |
| global.mongodbHost | string | "cf-mongodb" | Set mongodb host in plain text |
| global.mongodbHostSecretKeyRef | object | {} | Set mongodb host from existing secret |
| global.mongodbOptions | string | "retryWrites=true" | Set mongodb connection string options. Ref: https://www.mongodb.com/docs/manual/reference/connection-string/#connection-string-options |
| global.mongodbPassword | string | "mTiXcU2wafr9" | Set mongodb password in plain text |
| global.mongodbPasswordSecretKeyRef | object | {} | Set mongodb password from existing secret |
| global.mongodbProtocol | string | "mongodb" | Set mongodb protocol (mongodb / mongodb+srv) |
| global.mongodbRootUser | string | "" | DEPRECATED - Use .Values.seed.mongoSeedJob instead. |
| global.mongodbUser | string | "cfuser" | Set mongodb user in plain text |
| global.mongodbUserSecretKeyRef | object | {} | Set mongodb user from existing secret |
| global.natsPort | int | 4222 | Default nats service port. |
| global.natsService | string | "nats" | Default nats service name. |
| global.newrelicLicenseKey | string | "" | New Relic Key |
| global.nodeSelector | object | {} | Global nodeSelector constraints. Apply nodeSelector to all Codefresh subcharts. Will not be applied on Bitnami subcharts. |
| global.oidcProviderClientId | string | nil | Default OIDC Provider service client ID in plain text. |
| global.oidcProviderClientSecret | string | nil | Default OIDC Provider service client secret in plain text. |
| global.oidcProviderPort | int | 443 | Default OIDC Provider service port. |
| global.oidcProviderProtocol | string | "https" | Default OIDC Provider service protocol. |
| global.oidcProviderService | string | "" | Default OIDC Provider service name (Provider URL). |
| global.oidcProviderTokenEndpoint | string | "/token" | Default OIDC Provider service token endpoint. |
| global.pipelineManagerPort | int | 9000 | Default pipeline-manager service port. |
| global.pipelineManagerService | string | "pipeline-manager" | Default pipeline-manager service name. |
| global.platformAnalyticsPort | int | 80 | Default platform-analytics service port. |
| global.platformAnalyticsService | string | "platform-analytics" | Default platform-analytics service name. |
| global.postgresDatabase | string | "codefresh" | Set postgres database name |
| global.postgresHostname | string | "" | Set postgres service address in plain text. Takes precedence over global.postgresService! |
| global.postgresHostnameSecretKeyRef | object | {} | Set postgres service from existing secret |
| global.postgresPassword | string | "eC9arYka4ZbH" | Set postgres password in plain text |
| global.postgresPasswordSecretKeyRef | object | {} | Set postgres password from existing secret |
| global.postgresPort | int | 5432 | Set postgres port number |
| global.postgresService | string | "postgresql" | Default internal postgresql service address from bitnami/postgresql subchart |
| global.postgresUser | string | "postgres" | Set postgres user in plain text |
| global.postgresUserSecretKeyRef | object | {} | Set postgres user from existing secret |
| global.rabbitService | string | "rabbitmq:5672" | Default internal rabbitmq service address from bitnami/rabbitmq subchart. |
| global.rabbitmqHostname | string | "" | Set rabbitmq service address in plain text. Takes precedence over global.rabbitService! |
| global.rabbitmqHostnameSecretKeyRef | object | {} | Set rabbitmq service address from existing secret. |
| global.rabbitmqPassword | string | "cVz9ZdJKYm7u" | Set rabbitmq password in plain text |
| global.rabbitmqPasswordSecretKeyRef | object | {} | Set rabbitmq password from existing secret |
| global.rabbitmqProtocol | string | "amqp" | Set rabbitmq protocol (amqp/amqps) |
| global.rabbitmqUsername | string | "user" | Set rabbitmq username in plain text |
| global.rabbitmqUsernameSecretKeyRef | object | {} | Set rabbitmq username from existing secret |
| global.redisPassword | string | "hoC9szf7NtrU" | Set redis password in plain text |
| global.redisPasswordSecretKeyRef | object | {} | Set redis password from existing secret |
| global.redisPort | int | 6379 | Set redis service port |
| global.redisService | string | "redis-master" | Default internal redis service address from bitnami/redis subchart |
| global.redisUrl | string | "" | Set redis hostname in plain text. Takes precedence over global.redisService! |
| global.redisUrlSecretKeyRef | object | {} | Set redis hostname from existing secret. |
| global.runnerService | string | "runner" | Default runner service name. |
| global.runtimeEnvironmentManagerPort | int | 80 | Default runtime-environment-manager service port. |
| global.runtimeEnvironmentManagerService | string | "runtime-environment-manager" | Default runtime-environment-manager service name. |
| global.security | object | {"allowInsecureImages":true} | Bitnami subchart security settings |
| global.storageClass | string | "" | Global StorageClass for Persistent Volume(s) |
| global.tlsSignPort | int | 4999 | Default tls-sign service port. |
| global.tlsSignService | string | "cfsign" | Default tls-sign service name. |
| global.tolerations | list | [] | Global tolerations constraints. Apply tolerations to all Codefresh subcharts. Will not be applied on Bitnami subcharts. |
| helm-repo-manager | object | See below | helm-repo-manager |
| hermes | object | See below | hermes |
| hooks | object | See below | Pre/post-upgrade Job hooks. |
| hooks.consul | object | {"affinity":{},"enabled":true,"image":{"registry":"us-docker.pkg.dev/codefresh-inc/public-gcr-io","repository":"codefresh/kubectl","tag":"1.33.3"},"nodeSelector":{},"podSecurityContext":{},"resources":{},"tolerations":[]} | Recreates consul-headless service due to duplicated ports in the Service during the upgrade. |
| hooks.mongodb | object | {"affinity":{},"enabled":true,"image":{"registry":"us-docker.pkg.dev/codefresh-inc/public-gcr-io","repository":"codefresh/mongosh","tag":"2.5.0"},"nodeSelector":{},"podSecurityContext":{},"resources":{},"tolerations":[]} | Updates images in system/default runtime. |
| hooks.rabbitmq | object | {"affinity":{},"enabled":true,"image":{"registry":"us-docker.pkg.dev/codefresh-inc/public-gcr-io","repository":"codefresh/rabbitmqadmin","tag":"2.8.0"},"nodeSelector":{},"podSecurityContext":{},"resources":{},"tolerations":[]} | Enable stable feature flags in RabbitMQ. |
| imageCredentials | object | {} | Credentials for Image Pull Secret object |
| ingress | object | {"annotations":{"nginx.ingress.kubernetes.io/service-upstream":"true","nginx.ingress.kubernetes.io/ssl-redirect":"false","nginx.org/redirect-to-https":"false"},"enabled":true,"ingressClassName":"nginx-codefresh","labels":{},"nameOverride":"","services":{"internal-gateway":["/"]},"tls":{"cert":"","enabled":false,"existingSecret":"","key":"","secretName":"star.codefresh.io"}} | Ingress |
| ingress-nginx | object | See below | ingress-nginx Ref: https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml |
| ingress.annotations | object | See below | Set annotations for ingress. |
| ingress.enabled | bool | true | Enable the Ingress |
| ingress.ingressClassName | string | "nginx-codefresh" | Set the ingressClass that is used for the ingress. Default nginx-codefresh is created from the ingress-nginx controller subchart |
| ingress.labels | object | {} | Set labels for ingress |
| ingress.nameOverride | string | "" | Override Ingress resource name |
| ingress.services | object | See below | Default services and corresponding paths |
| ingress.tls.cert | string | "" | Certificate (base64 encoded) |
| ingress.tls.enabled | bool | false | Enable TLS |
| ingress.tls.existingSecret | string | "" | Existing kubernetes.io/tls type secret with TLS certificates (keys: tls.crt, tls.key) |
| ingress.tls.key | string | "" | Private key (base64 encoded) |
| ingress.tls.secretName | string | "star.codefresh.io" | Default secret name to be created with the provided cert and key |
| internal-gateway | object | See below | internal-gateway |
| k8s-monitor | object | See below | k8s-monitor |
| kube-integration | object | See below | kube-integration |
| mailer.enabled | bool | false | |
| mongodb | object | See below | mongodb Ref: https://github.com/bitnami/charts/blob/main/bitnami/mongodb/values.yaml |
| nats | object | See below | nats Ref: https://github.com/bitnami/charts/blob/main/bitnami/nats/values.yaml |
| nomios | object | See below | nomios |
| payments.enabled | bool | false | |
| pipeline-manager | object | See below | pipeline-manager |
| postgresql | object | See below | postgresql Ref: https://github.com/bitnami/charts/blob/main/bitnami/postgresql/values.yaml |
| postgresql-ha | object | See below | postgresql-ha Ref: https://github.com/bitnami/charts/blob/main/bitnami/postgresql-ha/values.yaml |
| postgresqlCleanJob | object | See below | Maintenance postgresql clean job. Removes a certain number of the last records in the event store table. |
| rabbitmq | object | See below | rabbitmq Ref: https://github.com/bitnami/charts/blob/main/bitnami/rabbitmq/values.yaml |
| redis | object | See below | redis Ref: https://github.com/bitnami/charts/blob/main/bitnami/redis/values.yaml |
| redis-ha | object | {"auth":true,"enabled":false,"haproxy":{"enabled":true,"resources":{"requests":{"cpu":"100m","memory":"128Mi"}}},"persistentVolume":{"enabled":true,"size":"10Gi"},"redis":{"resources":{"requests":{"cpu":"100m","memory":"128Mi"}}},"redisPassword":"hoC9szf7NtrU"} | redis-ha Ref: https://github.com/DandyDeveloper/charts/blob/master/charts/redis-ha/values.yaml |
| runner | object | See below | runner |
| runtime-environment-manager | object | See below | runtime-environment-manager |
| runtimeImages | object | See below | runtimeImages |
| salesforce-reporter.enabled | bool | false | |
| seed | object | See below | Seed jobs |
| seed-e2e | object | {"affinity":{},"backoffLimit":10,"enabled":false,"image":{"registry":"docker.io","repository":"mongo","tag":"latest"},"nodeSelector":{},"podSecurityContext":{},"resources":{},"tolerations":[],"ttlSecondsAfterFinished":300} | CI-only seed job (used for e2e tests) |
| seed.enabled | bool | true | Enable all seed jobs |
| seed.mongoSeedJob | object | See below | Mongo Seed Job. Required at first install. Seeds the required data (default idp/user/account), creates cfuser and required databases. |
| seed.mongoSeedJob.env | object | {} | Extra env variables for seed job. |
| seed.mongoSeedJob.mongodbRootOptions | string | "" | Extra options for connection string (e.g.authSource=admin). |
| seed.mongoSeedJob.mongodbRootPassword | string | "XT9nmM8dZD" | Root password in plain text (required ONLY for seed job!). |
| seed.mongoSeedJob.mongodbRootPasswordSecretKeyRef | object | {} | Root password from existing secret |
| seed.mongoSeedJob.mongodbRootUser | string | "root" | Root user in plain text (required ONLY for seed job!). |
| seed.mongoSeedJob.mongodbRootUserSecretKeyRef | object | {} | Root user from existing secret |
| seed.postgresSeedJob | object | See below | Postgres Seed Job. Required at first install. Creates required user and databases. |
| seed.postgresSeedJob.postgresPassword | string | "" | Password for "postgres" admin user (required ONLY for seed job!) |
| seed.postgresSeedJob.postgresPasswordSecretKeyRef | object | {} | Password for "postgres" admin user from existing secret |
| seed.postgresSeedJob.postgresUser | string | "" | "postgres" admin user in plain text (required ONLY for seed job!). Must be a privileged user allowed to create databases and grant roles. If omitted, username and password from .Values.global.postgresUser/postgresPassword will be used. |
| seed.postgresSeedJob.postgresUserSecretKeyRef | object | {} | "postgres" admin user from existing secret |
| segment-reporter.enabled | bool | false | |
| tasker-kubernetes | object | {"affinity":{},"container":{"image":{"registry":"us-docker.pkg.dev/codefresh-enterprise/gcr.io","repository":"codefresh/tasker-kubernetes"}},"enabled":true,"hpa":{"enabled":false},"imagePullSecrets":[],"nodeSelector":{},"pdb":{"enabled":false},"podSecurityContext":{},"resources":{"limits":{},"requests":{"cpu":"100m","memory":"128Mi"}},"tolerations":[]} | tasker-kubernetes |
| webTLS | object | {"cert":"","enabled":false,"key":"","secretName":"star.codefresh.io"} | DEPRECATED - Use .Values.ingress.tls instead. TLS secret for Ingress |
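Most of the `*SecretKeyRef` values above share the same shape as a Kubernetes `valueFrom.secretKeyRef` (a `name`/`key` pair pointing at an existing Secret in the release namespace). As a sketch only (the secret name `cf-db-creds` and its keys are hypothetical, as are the hostnames), pointing the chart at an external MongoDB and PostgreSQL while disabling the bundled Bitnami subcharts might look like:

```yaml
# cf-values.yaml -- illustrative sketch; secret name, keys, and hosts are placeholders
global:
  mongodbProtocol: mongodb+srv
  mongodbHost: my-cluster.mongodb.example.com   # or mongodbHostSecretKeyRef
  mongodbUserSecretKeyRef:
    name: cf-db-creds        # pre-created Secret in the release namespace
    key: mongodb-user
  mongodbPasswordSecretKeyRef:
    name: cf-db-creds
    key: mongodb-password

  postgresHostname: postgres.example.com        # takes precedence over global.postgresService
  postgresUser: cf_user
  postgresPasswordSecretKeyRef:
    name: cf-db-creds
    key: postgres-password

# Disable the bundled subcharts when external databases are used
mongodb:
  enabled: false
postgresql:
  enabled: false
```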
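The `ingress.tls` values accept either inline base64-encoded material (stored in `ingress.tls.secretName`) or a pre-created `kubernetes.io/tls` secret via `existingSecret`. A minimal sketch, assuming a placeholder hostname and secret name:

```yaml
global:
  appUrl: onprem.codefresh.example.com   # placeholder; used as the Ingress hostname
ingress:
  enabled: true
  tls:
    enabled: true
    existingSecret: my-codefresh-tls     # kubernetes.io/tls secret with tls.crt / tls.key
    # Alternatively, provide the material inline (base64 encoded);
    # it is stored in the secret named by ingress.tls.secretName:
    # cert: <base64-encoded certificate>
    # key: <base64-encoded private key>
```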
About
Codefresh platform Helm chart for on-premises installation