In the last part I introduced the exercises and talked about the complications I had with building a Docker image with Chrome inside on an arm64 platform. The NodeJS app and its Dockerfile were also presented. This part will be more about Kubernetes: setting up the cluster and deploying the NodeJS app.
Setting up a Kubernetes Cluster
After creating the Dockerfile, it was time to set up a Kubernetes cluster. Which tool I use for this was up to me. In my DevOps with Kubernetes course we use k3d, which runs K3s in Docker. During the course I had not experienced any problems with this solution, so I was confident in using k3d to solve the assignments.
Installing k3d on macOS is easy via Homebrew with brew install k3d. The cluster can then be created with k3d cluster create --port '8082:30080@agent[0]' -p 8081:80@loadbalancer --agents 2. The 8081:80@loadbalancer mapping makes our apps accessible via localhost:8081.
Deploying the app
With a running Kubernetes cluster, deployment was now on the agenda. I decided from the beginning to logically separate the app from the rest of the cluster and created a separate namespace. This is possible with kubectl create namespace exercise-ns in Kubernetes (a declarative alternative is sketched below).
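If you prefer to keep everything declarative, the namespace can also be written as a manifest and applied together with the other files. A minimal sketch (the file name is just a suggestion):
# manifests/namespace.yaml - declarative alternative to kubectl create namespace
apiVersion: v1
kind: Namespace
metadata:
  name: exercise-ns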
The basic structure for a deployment can be found in the Kubernetes documentation (Deployments | Kubernetes). So I created a deployment.yaml in my manifests folder with the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: exercise-app
  namespace: exercise-ns
  labels:
    app: exercise-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: exercise-app
  template:
    metadata:
      labels:
        app: exercise-app
    spec:
      containers:
        - name: exercise-app
          image: niklasmtj/exercise-app:v1
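One optional tweak that is not in the manifest above: the containers block can also declare the port the NodeJS app listens on, which documents the value the service's targetPort (defined next) has to match. A small sketch of that section:
containers:
  - name: exercise-app
    image: niklasmtj/exercise-app:v1
    ports:
      - containerPort: 3000  # the port the app listens on inside the container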
To make the deployment discoverable I also created a service. The Kubernetes documentation for services (Service | Kubernetes) again provides the basic structure. The service definition can be found under manifests/service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: exercise-svc
  namespace: exercise-ns
spec:
  type: ClusterIP
  selector:
    app: exercise-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
It is important to note here that the targetPort corresponds to the port exposed by the Docker container.
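As a side note, the 8082:30080@agent[0] mapping from the cluster create command above is presumably meant for exactly this kind of service: a NodePort service on port 30080 would be reachable at localhost:8082 without going through the load balancer. A hedged sketch of that variant (the service name is made up, and I did not use it here):
# NodePort variant, reachable at localhost:8082 via the 8082:30080@agent[0] mapping
apiVersion: v1
kind: Service
metadata:
  name: exercise-svc-nodeport
  namespace: exercise-ns
spec:
  type: NodePort
  selector:
    app: exercise-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
      nodePort: 30080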
Ingress
After preparing the deployment, the next task was to make it accessible. This is where the connection via an Ingress came into play. By default, K3s uses the Traefik Ingress Controller for Ingress routing, which I also used. The ingress configuration is quite simple. In an ingress.yaml file I used the following configuration:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: exercise-ingress
  namespace: exercise-ns
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: exercise-svc
              servicePort: 80
This way the / path is forwarded directly to the exercise-svc service we defined in service.yaml on port 80.
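A note for anyone following along on a newer cluster: the extensions/v1beta1 Ingress API is deprecated and scheduled for removal, so on clusters running Kubernetes 1.19 or later the same ingress can also be written against networking.k8s.io/v1. Roughly:
# networking.k8s.io/v1 equivalent of the ingress above (Kubernetes 1.19+)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: exercise-ingress
  namespace: exercise-ns
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: exercise-svc
                port:
                  number: 80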
IP-Whitelisting
Another task I had to solve with an Ingress was to limit the IP range that is accepted at all. Unfortunately, given my prior knowledge, I was not able to implement this at the time. Nevertheless, I read up on how I would implement it and documented it in my README. During the task I took a closer look at the NGINX Ingress Controller, because I found the most results about it during my research. There it seems to work with a simple annotation in the ingress.yaml:
metadata:
  annotations:
    ingress.kubernetes.io/whitelist-source-range: "192.168.0.0/16"
This makes sure that only IPs from the 192.168.0.0/16 net are able to connect to the apps, so every IP from 192.168.0.1 to 192.168.255.254 is eligible to connect. NGINX will drop every request not coming from this IP range.
My attempts with Traefik as the Ingress controller, which k3s uses by default, also failed. This is an area I need to look at in more detail in the near future to understand exactly how Ingress works and at what level you can block IPs. Additionally, I need to look at whether it makes a difference that my k3s cluster is running inside Docker and how that affects incoming IPs.
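From what I read, the Traefik route depends heavily on whether the bundled Traefik is v1 or v2. With Traefik v2 it would apparently go through its Middleware CRD; an untested sketch of what I believe that would look like:
# untested sketch: IP whitelisting with the Traefik v2 Middleware CRD
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: ip-whitelist
  namespace: exercise-ns
spec:
  ipWhiteList:
    sourceRange:
      - "192.168.0.0/16"
The middleware would then be attached to the ingress via the traefik.ingress.kubernetes.io/router.middlewares annotation in the form <namespace>-<name>@kubernetescrd (here exercise-ns-ip-whitelist@kubernetescrd), but again, I have not verified this on my k3d setup.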
Thank you for reading,
Niklas
The code from this post can also be found on GitHub: niklasmtj/kubernetes-exercise.
Additionally, I created an arm64 as well as an amd64 Docker image for niklasmtj/exercise-app:v1. So the example app should be usable on other devices as well.
The series:
- Part 1
- Part 2
- Part 3 - coming April 29
Top comments (3)
Thanks for sharing. I didn't know about devopswithkubernetes, seems interesting!
I have a small question which I'd be very thankful if you could share your thoughts on. What would be a good way to set up a dev environment for an application that uses microservices?
Assuming there will be like 5 containers (1 for front-end and the rest for back-end), my inexperienced approach would be to clone each service locally and properly configure docker-compose. However, I feel like this is not the wisest approach.
Thank you!
Hey Andrei,
to be honest I would also configure myself a docker-compose setup. For a quick setup this should be straightforward. You can easily expose the container ports to your local machine. You can also use something like k3d like I did, but for development this can be a little bit overkill since you also have to manage the Kubernetes resources. So yeah, going with a good old docker-compose setup is a good start :)
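If it helps, a minimal sketch of what such a compose file could look like (service names, build paths and ports are made up, just to show the shape):
# hypothetical docker-compose.yml for local development
version: "3.8"
services:
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    depends_on:
      - api
  api:
    build: ./api
    ports:
      - "4000:4000"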
Thanks for your input!