Run Bloomreach Experience Manager on Kubernetes 101
Kenan Salic
2019-11-13

Welcome to a series of blog articles on how to run Experience Manager on Kubernetes, covering both the basics and more advanced features. We will start with the very basics: getting Experience Manager up and running.
This blog is intended for DevOps Engineers, Application Developers and System Administrators who want to learn how to deploy Experience Manager on their Kubernetes cluster.
For this blog, we will use a single-node cluster with Minikube. Disclaimer: this is not an official or supported manual on how to run Experience Manager in production, but it will give you insight into how you can configure Kubernetes to run Experience Manager for any intended purpose.
While creating this blog we used the following versions:
- Docker version: 19.03.2
- Minikube version: v1.0.0
- Kubectl client version: v1.14.6
- Kubectl server version: v1.14.0
- Experience Manager version: v13.4.0
- HELM version: v2.14.3
We will first start by explaining the intended architecture. If you are new to Kubernetes and the components used in Kubernetes, here is a blog article explaining the components at a high level.
A traditional Experience Manager architecture is a 3-tier architecture and looks as follows:
We will recreate this in Kubernetes. In our Kubernetes setup we will make use of MySQL as the database and NGINX as the reverse proxy/web server and load balancer. For the CMS and Site application we are going to build a Docker image using our docker run and develop documentation.
The end result will be the following diagram:
First start Minikube with the following command:
minikube start --memory 8192 --cpus 2
Database
In the next section we will get a MySQL database up and running. For this it is useful to use a Kubernetes package manager such as Helm, to make sure we start a MySQL database instance following best practices.
If you are not familiar with Helm, here is a 6-minute introduction to Helm.
Once installed, execute the following commands to install a MySQL database on your local Kubernetes cluster:
helm init
helm install --name my-mysql --set mysqlUser=cms,mysqlDatabase=brxm,image=mysql,imageTag=5.7.19 stable/mysql

Check the pods by executing the following command:
kubectl get pods

NAME READY STATUS RESTARTS AGE
my-mysql-57f877d975-b7k96 1/1 Running 0 174m
The MySQL pod is up and running!
During install of the Helm MySQL Chart we used the following properties and values:
- name: my-mysql
- mysqlUser: cms
- mysqlDatabase: brxm
- image: mysql
- imageTag: 5.7.19
It's best to store these properties and values directly in a ConfigMap. We will need them later on.
my-mysql-variables.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-mysql-variables
data:
  mysqlDatabase: brxm
  mysqlUser: cms
  name: my-mysql
kubectl create -f my-mysql-variables.yaml

Aha! - The Password
After installation of the MySQL Helm Chart, a Secret object named my-mysql, with generated passwords is created.
kubectl get secret my-mysql -o yaml

apiVersion: v1
data:
  mysql-password: UnRsOEFYSWUwMQ==
  mysql-root-password: ZW5OemlCVDJWaw==
kind: Secret
metadata:
...
The mysql-password is a base64 encoded password for the mysql cms user as defined before.
We can decode the mysql-password with the following command:
echo -n 'UnRsOEFYSWUwMQ==' | base64 --decode
Rtl8AXIe01
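To see the round trip in both directions, here is a small sketch using the example values from above (your generated password will differ). Encoding the plain-text password reproduces exactly the value stored in the Secret, since a Kubernetes Secret holds base64-encoded data:

```shell
# Encoding the plain-text password yields the value stored in the Secret...
printf '%s' 'Rtl8AXIe01' | base64
# ...and decoding that value returns the plain-text password again
printf '%s' 'UnRsOEFYSWUwMQ==' | base64 --decode
```

Note the use of printf rather than echo: this avoids a trailing newline sneaking into the encoded value.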
A quick test against the MySQL pod to ensure this works:
kubectl exec -it my-mysql-57f877d975-b7k96 bash
mysql -u cms -p
Enter password: Rtl8AXIe01
Welcome to the MySQL monitor...
We will use the mysql-password secret reference later on, when we configure the deployment of the BrXM image to connect to the database.
Web server and load balancer
Next is the reverse proxy/web server and the load balancer with NGINX.
Again we can use HELM to help us on the way. Execute the following command:
helm install --name my-nginx-ingress stable/nginx-ingress

Check the pods by executing the following command:
kubectl get pods

NAME READY STATUS RESTARTS AGE
my-mysql-57f877d975-b7k96 1/1 Running 0 3h23m
my-nginx-ingress-controller-7c4b9b4484-k4vxc 1/1 Running 0 3h29m
my-nginx-ingress-default-backend-6fbd886cf4-4knt9 1/1 Running 0 3h29m
Great success! We have my-nginx-ingress-controller as the load balancer and reverse proxy, and the my-nginx-ingress-default-backend as the default backend server running.
Experience Manager
Next we will create an Experience Manager Docker image. Follow the documentation to build your own image using your local project. One important extra step: you will need to install the "Bloomreach Cloud feature" through Essentials (which is available for Enterprise projects).
Make sure to tag and push this image to your local Docker registry or a registry on Docker Hub. For demo purposes we have already built an image for you! It's available on Docker Hub: https://hub.docker.com/r/bloomreach/xm-kubernetes-training.
The image name is bloomreach/xm-kubernetes-training; we will use it in the next section of this blog article.
The following YAML files are required for deploying the Experience Manager Docker image.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: brxm-cms-site
  labels:
    app: brxm-cms-site
spec:
  replicas: 2
  selector:
    matchLabels:
      app: brxm-cms-site
  template:
    metadata:
      labels:
        app: brxm-cms-site
    spec:
      initContainers:
        - name: wait-for-mysql
          image: mysql:5.7.19
          command: ['sh', '-c', '/usr/bin/mysql -h$MYSQL_DB_HOST -P$MYSQL_DB_PORT -u$MYSQL_DB_USER -p$MYSQL_DB_PASSWORD $MYSQL_DB_NAME -e ""']
          env:
            - name: MYSQL_DB_HOST
              # value: my-mysql
              valueFrom:
                configMapKeyRef:
                  key: name
                  name: my-mysql-variables
            - name: MYSQL_DB_PORT
              value: "3306"
            - name: MYSQL_DB_USER
              # value: cms
              valueFrom:
                configMapKeyRef:
                  key: mysqlUser
                  name: my-mysql-variables
            - name: MYSQL_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: mysql-password
                  name: my-mysql
            - name: MYSQL_DB_NAME
              # value: brxm
              valueFrom:
                configMapKeyRef:
                  key: mysqlDatabase
                  name: my-mysql-variables
      containers:
        - name: brxm-cms-site
          image: bloomreach/xm-kubernetes-training
          ports:
            - containerPort: 8080
          env:
            - name: profile
              value: "mysql"
            - name: MYSQL_DB_HOST
              valueFrom:
                configMapKeyRef:
                  key: name
                  name: my-mysql-variables
            - name: MYSQL_DB_PORT
              value: "3306"
            - name: MYSQL_DB_USER
              valueFrom:
                configMapKeyRef:
                  key: mysqlUser
                  name: my-mysql-variables
            - name: MYSQL_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: mysql-password
                  name: my-mysql
            - name: MYSQL_DB_NAME
              valueFrom:
                configMapKeyRef:
                  key: mysqlDatabase
                  name: my-mysql-variables
            - name: MYSQL_DB_DRIVER
              value: "com.mysql.jdbc.Driver"
            - name: REPO_BOOTSTRAP
              value: "true"
            - name: REPO_AUTOEXPORT_ALLOWED
              value: "true"
            - name: REPO_CONFIG
              value: "file:/usr/local/tomcat/conf/repository-mysql.xml"
            - name: REPO_WORKSPACE_BUNDLE_CACHE
              value: "256"
            - name: REPO_VERSIONING_BUNDLE_CACHE
              value: "64"
          livenessProbe:
            httpGet:
              path: /site/ping/
              port: 8080
            initialDelaySeconds: 45
            periodSeconds: 10
            failureThreshold: 20
          readinessProbe:
            httpGet:
              path: /site/ping/
              port: 8080
            initialDelaySeconds: 45
            periodSeconds: 10
            failureThreshold: 20
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: brxm-cms-site
  labels:
    app: brxm-cms-site
spec:
  selector:
    app: brxm-cms-site
  ports:
    - name: http
      protocol: TCP
      port: 8080
      targetPort: 8080
Configure the Ingress with host example.com, so that we will be able to access the CMS on example.com/cms and the site on example.com/site.
ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/proxy-body-size: 10m
    nginx.ingress.kubernetes.io/proxy-read-timeout: "180"
    nginx.ingress.kubernetes.io/session-cookie-hash: sha1
    nginx.ingress.kubernetes.io/session-cookie-name: SERVERID
    nginx.ingress.kubernetes.io/session-cookie-path: "/"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/upstream-fail-timeout: 1s
    nginx.ingress.kubernetes.io/upstream-max-fails: "250"
  name: brxm-cms-site
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /cms
            backend:
              serviceName: brxm-cms-site
              servicePort: 8080
          - path: /site
            backend:
              serviceName: brxm-cms-site
              servicePort: 8080

Aha! - Redirect / to /cms/ on the CMS entry point
nginx.ingress.kubernetes.io/configuration-snippet: |
  location ~* /verify/([A-Za-z0-9]*) {
    return 302 /(modal:verify/$1);
  }

You can create all of these objects at once by running the following command, if you have them all in the same folder:
kubectl create -f .

Check the pods:
kubectl get pods

NAME READY STATUS RESTARTS AGE
brxm-cms-site-796f67f549-2qr4x 1/1 Running 0 3h32m
brxm-cms-site-796f67f549-t9vn4 1/1 Running 0 3h32m
my-mysql-57f877d975-b7k96 1/1 Running 0 4h6m
my-nginx-ingress-controller-7c4b9b4484-k4vxc 1/1 Running 0 4h12m
my-nginx-ingress-default-backend-6fbd886cf4-4knt9 1/1 Running 0 4h12m
Yes! We have 2 pods of BrXM running, because we specified 2 replicas in the deployment.yaml.
Aha! - Scaling up
Easily scale up from 2 to 3 replicas with the following command:
kubectl scale --replicas=3 deployment/brxm-cms-site

Check the pods:
kubectl get pods

You should now be able to see 3 brxm-cms-site pods.
Please note:
The new Experience Manager replica will run a pod with the Experience Manager CMS and Site container. This will automatically add a new entry to the REPOSITORY_LOCAL_REVISIONS table in MySQL!
mysql> SELECT * FROM REPOSITORY_LOCAL_REVISIONS;
+--------------------------------+-------------+
| JOURNAL_ID                     | REVISION_ID |
+--------------------------------+-------------+
| brxm-cms-site-796f67f549-t9vn4 |         328 |
| brxm-cms-site-796f67f549-2qr4x |         328 |
| brxm-cms-site-796f67f549-zw4kv |         328 |
+--------------------------------+-------------+
Notice that the pod names are used for the JOURNAL_ID.
In this tutorial we do not copy an existing search index to the newly created pod. That is a highly requested feature, since the index is otherwise generated during startup, which can take quite some time with a large repository; copying an existing index would speed up application startup considerably. We will most likely cover this in a future blog article.
Another Aha! - Scaling down
Scale back to 2 replicas with the following command:
kubectl scale --replicas=2 deployment/brxm-cms-site

Please note that scaling back down will not remove the terminated pod's entry from the REPOSITORY_LOCAL_REVISIONS table (as seen above). This cleanup will need to be executed or implemented separately. We will most likely cover this in a future blog article.
Result
To view the CMS and site in your browser you need a couple more steps:
First, get the IP address of Minikube with the following command:
minikube status

This will give you the IP address of the VM running Minikube:
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.107
Open your hosts file and add the following entry, based on the host defined in ingress.yaml (example.com) and the IP address of Minikube (192.168.99.107).
E.g.
192.168.99.107 example.com
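As a small convenience, the entry can be built and sanity-checked in the shell before appending it to your hosts file. This is just a sketch: the IP is the example one from above, and MINIKUBE_IP and INGRESS_HOST are variable names introduced here for illustration.

```shell
# Example values from this blog; substitute your own Minikube IP
MINIKUBE_IP="192.168.99.107"
INGRESS_HOST="example.com"
# Only print the entry if the IP looks like a dotted IPv4 address
echo "$MINIKUBE_IP" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$' && echo "$MINIKUBE_IP $INGRESS_HOST"
# To apply it: echo "$MINIKUBE_IP $INGRESS_HOST" | sudo tee -a /etc/hosts
```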
Next, figure out the node port of the load balancer service:
kubectl get service | grep LoadBalancer

my-nginx-ingress-controller LoadBalancer 10.111.2.114 <pending> 80:32010/TCP,443:32196/TCP
The NodePort for HTTP is configured at 32010.
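If you would rather extract that port programmatically than read it off, one option is to parse the PORT(S) column. A sketch, using the sample value from the output above; in a live cluster you would pipe the kubectl output in instead:

```shell
# Sample PORT(S) value from the kubectl output above
ports='80:32010/TCP,443:32196/TCP'
# Take the NodePort mapped to service port 80 (the HTTP entry)
echo "$ports" | sed -E 's|^80:([0-9]+)/.*|\1|'
```

Alternatively, kubectl itself can do this with a JSONPath query, e.g. kubectl get service my-nginx-ingress-controller -o jsonpath='{.spec.ports[0].nodePort}' (assuming the HTTP port is listed first).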
Access: http://example.com:32010/cms
Do not forget to configure the hst:host and hst:platform configuration for example.com, or else you will not be able to use the Channel Manager or see the site working.
Voila!
Final words
This blog article has been a collaboration between Bloomreach's Cloud and Professional Services teams. We've taken the best practices from our on-demand offering and converted them into this comprehensible, step-by-step blog article.
Also worth looking into is one of our community implementations of Kubernetes with Experience Manager: https://github.com/bcanvural/hippo-minikube/