Introduction
Hey 👋, in this post, we shall see how to create a Helm chart for HarperDB based on the boilerplate chart generated by the Helm CLI, lint and dry-run it, push it to Artifact Hub, and then reuse it to install a Helm release on a Kubernetes cluster. You can get the chart used in this post from this link.
You may check this post if you are looking to install HarperDB with a custom, minimal Helm chart.
Search
As of this writing, there is no chart available on Artifact Hub for HarperDB; the screenshot below should validate that.
So our goal is to push the HarperDB chart to Artifact Hub, so that the search returns an entry.
We could also search from the Helm CLI to see if a chart for HarperDB exists. For that, you first have to install Helm on your system.
On Mac, it can be installed as follows.
$ brew install helm
Now, we can search for the chart.
$ helm search hub harperdb
No results found
This result matches the search we did on the website.
Chart
OK, we can create our chart. Let's first generate a boilerplate chart named harperdb.
$ helm create harperdb
Creating harperdb
A chart is created for us; it's nothing but a directory with a specific layout.
$ ls -R harperdb
Chart.yaml  charts  templates  values.yaml

harperdb/charts:

harperdb/templates:
NOTES.txt     deployment.yaml  ingress.yaml  serviceaccount.yaml
_helpers.tpl  hpa.yaml         service.yaml  tests

harperdb/templates/tests:
test-connection.yaml
Let's make a few changes.
Change the appVersion; I am going to use the latest version of HarperDB found in the tags section on Docker Hub.
$ cat harperdb/Chart.yaml | grep appVersion
appVersion: "1.16.0"
$ sed -i 's/appVersion: "1.16.0"/appVersion: "4.0.4"/g' harperdb/Chart.yaml
$ cat harperdb/Chart.yaml | grep appVersion
appVersion: "4.0.4"
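A quick note on sed: the -i syntax used throughout this post is GNU sed (Linux). On macOS, the default BSD sed requires an explicit backup-suffix argument after -i (an empty string means no backup). A small sketch, using a throwaway file named Chart.demo.yaml for illustration:

```shell
# GNU sed (Linux): in-place edit with no backup file
printf 'appVersion: "1.16.0"\n' > /tmp/Chart.demo.yaml
sed -i 's/appVersion: "1.16.0"/appVersion: "4.0.4"/' /tmp/Chart.demo.yaml

# BSD sed (macOS) needs an empty suffix instead:
#   sed -i '' 's/appVersion: "1.16.0"/appVersion: "4.0.4"/' Chart.yaml

grep appVersion /tmp/Chart.demo.yaml
# → appVersion: "4.0.4"
```

If you see an error like "invalid command code" on macOS, this suffix difference is almost always the reason.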
We don't need any subcharts for now, so we can remove that directory.
$ rm -r harperdb/charts
We can also remove the tests directory.
$ rm -r harperdb/templates/tests
Values
We need to make a few modifications in the values file.
Image
Let's set the image.
$ cat harperdb/templates/deployment.yaml | grep image:
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
The tag can come from the appVersion; we just need to set the image repository in the values. By default, it is nginx.
$ grep repository: harperdb/values.yaml
  repository: nginx
Edit values by replacing nginx with harperdb/harperdb.
$ sed -i 's#repository: nginx#repository: harperdb/harperdb#g' harperdb/values.yaml
$ grep repository: harperdb/values.yaml
  repository: harperdb/harperdb
Service
HarperDB uses port 9925 for its REST API; we will expose only this port here, though there are other ports, such as 9926 and 9932, for custom functions, clustering, etc.
In our chart, the service port is set in .Values.service.port, and the same value is used as the container port too; we can stick with that for simplicity.
$ grep -ir service.port harperdb
harperdb/templates/NOTES.txt:  echo http://$SERVICE_IP:{{ .Values.service.port }}
harperdb/templates/ingress.yaml:{{- $svcPort := .Values.service.port -}}
harperdb/templates/service.yaml:    - port: {{ .Values.service.port }}
harperdb/templates/deployment.yaml:              containerPort: {{ .Values.service.port }}
Let's change the service port in the values.
$ grep port: harperdb/values.yaml
  port: 80
$ sed -i 's/port: 80/port: 9925/g' harperdb/values.yaml
$ grep port: harperdb/values.yaml
  port: 9925
Also set the service type to LoadBalancer.
$ sed -i 's/type: ClusterIP/type: LoadBalancer/g' harperdb/values.yaml
$ cat harperdb/values.yaml
--TRUNCATED--
service:
  type: LoadBalancer
  port: 9925
--TRUNCATED--
Security context
Modify the pod security context; you may check this post to learn why we use 1000 as the fsGroup.
$ grep -i -A 2 podSecurityContext harperdb/values.yaml
podSecurityContext:
  fsGroup: 1000
Resources
Similarly, set the CPU and memory requirements in the values.
$ grep -A 6 resources harperdb/values.yaml
resources:
  limits:
    cpu: 500m
    memory: 1Gi
  requests:
    cpu: 100m
    memory: 128Mi
Secret
The chart we created doesn't have a Secret manifest, so let's create one. This manifest follows conventions similar to the service account manifest.
$ cat <<EOF > harperdb/templates/secret.yaml
{{- if .Values.secret.create }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "harperdb.secretName" . }}
  labels:
    {{- include "harperdb.labels" . | nindent 4 }}
stringData:
  {{- toYaml .Values.secret.entries | nindent 2 }}
{{- end }}
EOF
We can set the appropriate values for the secret manifest.
$ cat <<EOF >> harperdb/values.yaml
secret:
  entries:
    HDB_ADMIN_USERNAME: admin
    HDB_ADMIN_PASSWORD: password12345
  create: true
  name: harperdb
EOF
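For reference, with these values the secret template renders roughly as follows. This is an illustrative sketch: the exact labels come from the harperdb.labels helper and depend on your release metadata, and toYaml emits the entries in alphabetical key order.

```yaml
# illustrative render -- actual labels come from the harperdb.labels helper
apiVersion: v1
kind: Secret
metadata:
  name: harperdb
  labels:
    app.kubernetes.io/name: harperdb
    app.kubernetes.io/instance: my-harperdb   # depends on the release name
stringData:
  HDB_ADMIN_PASSWORD: password12345
  HDB_ADMIN_USERNAME: admin
```

Since we use stringData rather than data, Kubernetes takes care of the base64 encoding for us.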
We can then modify the helpers file.
$ cat <<EOF >> harperdb/templates/_helpers.tpl
{{/*
Create the name of the secret to use
*/}}
{{- define "harperdb.secretName" -}}
{{- default "default" .Values.secret.name }}
{{- end }}
EOF
PVC
Likewise, there is no PVC template in the chart, so let's add one.
$ cat <<EOF > harperdb/templates/pvc.yaml
{{- if .Values.pvc.create }}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ include "harperdb.pvcName" . }}
  labels:
    {{- include "harperdb.labels" . | nindent 4 }}
spec:
  accessModes:
    - {{ .Values.pvc.accessMode }}
  resources:
    requests:
      storage: {{ .Values.pvc.storage }}
{{- end }}
EOF
Set the appropriate values for the PVC.
$ cat <<EOF >> harperdb/values.yaml
pvc:
  accessMode: ReadWriteOnce
  create: true
  mountPath: /opt/harperdb/hdb
  name: harperdb
  storage: 5Gi
EOF
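As with the secret, the PVC template would render along these lines with the values above (illustrative; labels elided):

```yaml
# illustrative render of harperdb/templates/pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: harperdb
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```

Note that mountPath is not part of the PVC itself; it is consumed by the deployment template when mounting the volume into the container.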
We can then modify the helpers file.
$ cat <<EOF >> harperdb/templates/_helpers.tpl
{{/*
Create the name of the pvc to use
*/}}
{{- define "harperdb.pvcName" -}}
{{- default "default" .Values.pvc.name }}
{{- end }}
EOF
Deployment
We are going to make a few changes to the deployment manifest so that it looks like the below.
$ cat <<EOF > harperdb/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "harperdb.fullname" . }}
  labels:
    {{- include "harperdb.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "harperdb.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "harperdb.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "harperdb.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      {{- if .Values.pvc.create }}
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: {{ include "harperdb.pvcName" . }}
      {{- end }}
      containers:
        - name: {{ .Chart.Name }}
          {{- if .Values.pvc.create }}
          volumeMounts:
            - name: data
              mountPath: {{ .Values.pvc.mountPath }}
          {{- end }}
          {{- if .Values.secret.create }}
          envFrom:
            - secretRef:
                name: {{ include "harperdb.secretName" . }}
          {{- end }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: {{ .Values.service.port }}
              protocol: TCP
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
EOF
In the template above, we inject the secret as environment variables via the container's envFrom section, and we wire up storage by defining the volume at the pod level and mounting it in the container.
Lint
Our chart is nearly ready. Let's lint it to check that everything is in order.
$ helm lint harperdb
==> Linting harperdb
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, 0 chart(s) failed
Seems good.
We can now try generating the Kubernetes manifests; this won't deploy anything yet. You can try helm template harperdb or helm template harperdb --debug; the --debug flag helps with debugging issues.
Kubeconfig
Make sure you have a running Kubernetes cluster. I have an EKS cluster, and I will be using the AWS CLI to update the kubeconfig.
$ aws eks update-kubeconfig --name k8s-cluster-01 --region us-east-1
There are two nodes in my cluster.
$ kubectl get nodes
NAME                             STATUS   ROLES    AGE   VERSION
ip-192-168-22-158.ec2.internal   Ready    <none>   24d   v1.23.13-eks-fb459a0
ip-192-168-38-226.ec2.internal   Ready    <none>   24d   v1.23.13-eks-fb459a0
Create a namespace with kubectl.
$ kubectl create ns harperdb
Dry run
As the cluster is ready, we can do a dry-run installation with Helm.
$ helm install harperdb harperdb -n harperdb --dry-run --debug
If there are no errors, we can proceed to the packaging.
Package
Our chart seems good, so we can package it.
$ helm package harperdb
This should create a compressed file.
$ ls | grep tgz
harperdb-0.1.0.tgz
Here 0.1.0 refers to the chart version.
$ cat harperdb/Chart.yaml | grep version:
version: 0.1.0
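The two versions in Chart.yaml serve different purposes: version is the chart's own version, which you bump whenever the chart changes, while appVersion tracks the version of the application the chart deploys (and is used as the default image tag by our deployment template). An excerpt, using the values set earlier in this post:

```yaml
# Chart.yaml (excerpt)
apiVersion: v2        # Helm 3 chart API version
name: harperdb
version: 0.1.0        # chart version -- bump on chart changes
appVersion: "4.0.4"   # default image tag used by the deployment template
```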
Repo
We need a repo where we can keep this package. I am using this repo for that purpose; it is also set up with GitHub Pages, and the website is accessible at this URL. So you may create a GitHub repo with Pages set up.
Alright, I am cloning my repo.
git clone git@github.com:networkandcode/networkandcode.github.io.git
Create a directory there for helm packages.
$ cd networkandcode.github.io/
$ mkdir helm-packages
We can move the package we created earlier into this directory.
$ mv ~/harperdb-0.1.0.tgz helm-packages/
$ ls helm-packages/
harperdb-0.1.0.tgz
We need to now create an index file.
$ helm repo index helm-packages/
$ ls helm-packages/
harperdb-0.1.0.tgz  index.yaml
The index file is populated automatically with these details.
$ cat helm-packages/index.yaml
apiVersion: v1
entries:
  harperdb:
  - apiVersion: v2
    appVersion: 4.0.1
    created: "2023-02-02T05:58:37.022518464Z"
    description: A Helm chart for Kubernetes
    digest: 1282e5919f2d6889f1e3dd849f27f2992d8288087502e1872ec736240dfd6ebf
    name: harperdb
    type: application
    urls:
    - harperdb-0.1.0.tgz
    version: 0.1.0
generated: "2023-02-02T05:58:37.020383374Z"
You can also add the Artifact Hub repository metadata file to claim ownership, though it's optional.
OK, we can now push the changes to GitHub; note that I am pushing directly to the master branch.
$ git add --all
$ git commit -m 'add helm package for harperdb'
$ git push
Add repository
Our repository is ready with the package; we now need to add it to Artifact Hub. Log in to Artifact Hub, go to the control panel, and click Add repository.
The repository is added, but it takes some time to process. You need to wait until there is a green tick in the Last processed section.
Search again
Once the repo is processed, we can repeat the search we did at the start of this post.
Well, it's worth knowing that ChatGPT's knowledge is cut off in 2021.
Now let's do it the CLI way.
$ helm search hub harperdb --max-col-width 1000
URL                                                           CHART VERSION   APP VERSION   DESCRIPTION
https://artifacthub.io/packages/helm/networkandcode/harperdb  0.1.0           4.0.1         A Helm chart for Kubernetes
Wow our chart is showing up…
Install
We can open the URL shown above and see the installation instructions.
Let's run those commands; I am going to use -n to install it in a separate namespace.
$ helm repo add networkandcode https://networkandcode.github.io/helm-packages
"networkandcode" has been added to your repositories
$ helm install my-harperdb networkandcode/harperdb --version 0.1.0 -n harperdb
Validate
Alright, the release is installed; it's time to validate. First, let's check the Helm release status.
$ helm list -n harperdb
NAME          NAMESPACE   REVISION   UPDATED                                   STATUS     CHART            APP VERSION
my-harperdb   harperdb    1          2023-02-02 07:14:59.586892384 +0000 UTC   deployed   harperdb-0.1.0   4.0.1
Check the Kubernetes workloads.
$ kubectl get all -n harperdb
NAME                               READY   STATUS    RESTARTS   AGE
pod/my-harperdb-7b66d4f7c5-xtpvw   1/1     Running   0          2m7s

NAME                  TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)          AGE
service/my-harperdb   LoadBalancer   10.100.127.117   a6e762ccc1e2d482a8528a7760544761-2140283724.us-east-1.elb.amazonaws.com   9925:30478/TCP   2m9s

NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-harperdb   1/1     1            1           2m8s

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/my-harperdb-7b66d4f7c5   1         1         1       2m9s
API
We can test schema creation with an API call.
$ HDB_API_ENDPOINT_HOSTNAME=$(kubectl get svc my-harperdb -n harperdb -o jsonpath={.status.loadBalancer.ingress[0].hostname})
$ curl --location --request POST http://${HDB_API_ENDPOINT_HOSTNAME}:9925 \
    --header 'Content-Type: application/json' \
    --header 'Authorization: Basic YWRtaW46cGFzc3dvcmQxMjM0NQ==' \
    --data-raw '{ "operation": "create_schema", "schema": "my-schema"}'
{"message":"schema 'my-schema' successfully created"}
Note that I parsed the hostname field because EKS exposes the external IP of the load balancer as a hostname. The API call was successful, so we were able to accomplish the goal!
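Incidentally, the Basic auth token in the request above is just the base64 encoding of the admin credentials we put in the secret values, in the usual username:password form. You can derive it, or decode it back, with the base64 utility:

```shell
# encode "username:password" for the Authorization: Basic header
echo -n 'admin:password12345' | base64
# → YWRtaW46cGFzc3dvcmQxMjM0NQ==

# decode it back to verify
echo -n 'YWRtaW46cGFzc3dvcmQxMjM0NQ==' | base64 --decode
# → admin:password12345
```

The -n flag matters: without it, echo appends a newline and the encoded token would not match.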
Clean up
I am going to clean up the Helm and Kubernetes objects.
$ helm uninstall my-harperdb -n harperdb
release "my-harperdb" uninstalled
$ kubectl delete ns harperdb
namespace "harperdb" deleted
Summary
So we have seen some constructs of Helm and understood how to make a chart for HarperDB, push it to Artifact Hub, and subsequently use it to install a HarperDB release. Note that you can customise the chart with more options, such as adding a README, enabling tests, claiming ownership of the chart, and adding more HarperDB-specific variables for clustering, custom functions, etc.
Thank you for reading!!!