Alfresco/alfresco-process-infrastructure-deployment
Helm chart to install the Alfresco Activiti Enterprise (AAE) infrastructure to model and deploy your process applications:
- Alfresco Identity Service
- Modeling Service
- Modeling App
- Deployment Service
- Admin App
- Transformation (Tika) Service
Once installed, you can deploy new AAE applications:
- via the Admin App, using the Deployment Service
- manually, by customising the alfresco-process-application helm chart
For all the available values, see the chart `README.md`.
Set up a Kubernetes cluster following your preferred procedure.
Install the latest version of helm.
An ingress-nginx controller should be installed and bound to an external DNS address, for example:
```shell
helm upgrade -i ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  -n ingress-nginx --create-namespace
```

For any helm command, verify the output with the `--dry-run` option first, then execute it without.
To install from the development chart repo, use `alfresco-incubator` rather than `alfresco` as the `CHART_REPO` variable.
Check deployment progress with `kubectl get pods -w -A` until all containers are running. If anything is stuck, check events with `kubectl get events -w -A`.
```shell
export DESIRED_NAMESPACE=${DESIRED_NAMESPACE:-aae}
kubectl create ns $DESIRED_NAMESPACE
```
Configure access to pull images from quay.io in the installation namespace:
```shell
kubectl create secret \
  -n $DESIRED_NAMESPACE \
  docker-registry quay-registry-secret \
  --docker-server=quay.io \
  --docker-username=$QUAY_USERNAME \
  --docker-password=$QUAY_PASSWORD
```
where:
- QUAY_USERNAME is your username on Quay
- QUAY_PASSWORD is your password on Quay
```shell
export RELEASE_NAME=aae
export CHART_NAME=alfresco-process-infrastructure
export HELM_OPTS="-n $DESIRED_NAMESPACE"
```
A custom extra values file with settings for `localhost` is provided:
```shell
export DOMAIN=host.docker.internal
HELM_OPTS+=" -f values-localhost.yaml"
```
Make sure your local cluster has at least 16 GB of memory and 8 CPUs. Startup might take as long as 10 minutes; use `kubectl get pods -A -w` to check the status.
NB: if not already present in your `/etc/hosts` file, please add a DNS mapping from `host.docker.internal` to `127.0.0.1`.
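A minimal sketch of the mapping line to add (the `echo` here only prints it; appending to `/etc/hosts` requires root privileges):

```shell
# The /etc/hosts entry needed for local resolution. Append it manually, or with:
#   echo "127.0.0.1 host.docker.internal" | sudo tee -a /etc/hosts
HOSTS_ENTRY="127.0.0.1 host.docker.internal"
echo "$HOSTS_ENTRY"
```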
If the hostname `host.docker.internal` is not resolved correctly by some deployments, patch them after calling helm via:
```shell
kubectl patch deployment -n $DESIRED_NAMESPACE ${RELEASE_NAME}-alfresco-modeling-service \
  -p "$(cat deployment-localhost-patch.yaml)"
```
```shell
export CLUSTER=aaedev
export DOMAIN=$CLUSTER.envalfresco.com
HELM_OPTS+=" --set global.gateway.domain=$DOMAIN"
```
To disable alfresco-deployment-service in the infrastructure:
```shell
HELM_OPTS+=" --set alfresco-deployment-service.enabled=false"
```
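`HELM_OPTS` is just a plain string of extra helm flags that accumulates across the steps of this guide. A self-contained sketch of how the flags combine (the namespace value is an example):

```shell
# bash string append, as used by the commands in this guide
DESIRED_NAMESPACE=aae
HELM_OPTS="-n $DESIRED_NAMESPACE"
HELM_OPTS+=" --set alfresco-deployment-service.enabled=false"
echo "$HELM_OPTS"   # -> -n aae --set alfresco-deployment-service.enabled=false
```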
A StorageClass that works across multiple availability zones must be available to store the project release files for each application:
- for EKS, always use EFS
- for AKS, use AFS only if Multi-AZ is configured
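The `STORAGE_CLASS_NAME` variable used below must point at such a StorageClass. A hedged sketch, with a hypothetical EFS-backed class name as placeholder:

```shell
# Hypothetical name of a pre-created, multi-AZ capable StorageClass (e.g. EFS on EKS);
# replace with a class actually present in your cluster (see: kubectl get storageclass).
STORAGE_CLASS_NAME=${STORAGE_CLASS_NAME:-efs-sc}
echo "$STORAGE_CLASS_NAME"
```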
Add the helm values to use it:
```shell
HELM_OPTS+=" \
  --set alfresco-deployment-service.projectReleaseVolume.storageClass=${STORAGE_CLASS_NAME} \
  --set alfresco-deployment-service.projectReleaseVolume.permission=ReadWriteMany"
```
NB: to configure the email connector, all of the variables below must be set. When they are set, the Deployment Service uses them as the defaults for any application it deploys. Once these variables are configured at chart deployment time via Helm, they cannot be overridden from the Admin App. If you want to configure the email connector variables from the Admin App instead, do not configure the email connector during the helm deployment.
Add the helm properties to configure the email connector:
```shell
HELM_OPTS+=" \
  --set alfresco-deployment-service.applications.connectors.emailConnector.username=${email_connector_username} \
  --set alfresco-deployment-service.applications.connectors.emailConnector.password=${email_connector_password} \
  --set alfresco-deployment-service.applications.connectors.emailConnector.host=${email_connector_host} \
  --set alfresco-deployment-service.applications.connectors.emailConnector.port=${email_connector_port}"
```
To verify the k8s yaml output:
```shell
HELM_OPTS+=" --debug --dry-run"
```

If the output looks good, launch again without `--dry-run`.
Install from the stable repo using a released chart version:
```shell
helm upgrade -i --wait \
  --repo https://kubernetes-charts.alfresco.com/stable \
  $HELM_OPTS $RELEASE_NAME $CHART_NAME
```
or from the incubator repo for a development chart version:
```shell
helm upgrade -i --wait \
  --repo https://kubernetes-charts.alfresco.com/incubator \
  $HELM_OPTS $RELEASE_NAME $CHART_NAME
```
or from the current repository directory:
```shell
helm repo update
helm dependency update helm/$CHART_NAME
helm upgrade -i --wait \
  $HELM_OPTS $RELEASE_NAME helm/$CHART_NAME
```
Open a browser and log in to IDS:
```shell
open $SSO_URL
```

To read back the realm from the secret, use:
```shell
kubectl get secret \
  -n $DESIRED_NAMESPACE \
  realm-secret -o jsonpath="{['data']['alfresco-realm\.json']}" | base64 -D > alfresco-realm.json
```
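Note that `base64 -D` is the macOS spelling of the decode flag; GNU coreutils on Linux uses `base64 -d`. A quick self-contained check of the decode round-trip:

```shell
# Round-trip: encode then decode (use `base64 -D` on macOS, `base64 -d` on Linux).
ENCODED=$(printf 'alfresco' | base64)
DECODED=$(printf '%s' "$ENCODED" | base64 -d)
echo "$DECODED"   # -> alfresco
```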
In an air-gapped environment where the Kubernetes cluster has no direct access to external image repositories, use a tool like helm-image-mirror to tag and push images to your internal registry and modify the helm charts with the new image locations.
Modify the file `values-external-postgresql.yaml`, providing values for your external database for each service, then run:
```shell
HELM_OPTS+=" -f values-external-postgresql.yaml"
```

Running on GH Actions
For Dependabot PRs to be validated by CI, the label "CI" should be added to the PR.
Requires the following secrets to be set:
| Name | Description |
|---|---|
| BOT_GITHUB_TOKEN | Token to launch other builds on GH |
| BOT_GITHUB_USERNAME | Username to issue propagation PRs |
| RANCHER2_URL | Rancher URL to perform helm tests |
| RANCHER2_ACCESS_KEY | Rancher access key |
| RANCHER2_SECRET_KEY | Rancher secret key |
| TEAMS_NOTIFICATION_AUTOMATE_BACKEND_WORKFLOW_WEBHOOK | Webhook to notify Teams on failure |