
Kubernetes PODS & SERVICES Discovery
Introduction✍️
In this blog, we are going to talk about how your Kubernetes workloads can be exposed to the outside world with the help of Services and DNS, so that external users can also access your application.
How to expose Kubernetes workloads to the outside world using Services?🌏
Before heading into these topics, you should already be familiar with the following workloads and concepts.
Pre-requisites👇
PODS📦
DNS🧑💻
Services⚙️
Services are available in three main types:
ClusterIP
NodePort
LoadBalancer
ClusterIP is the default service type in k8s. With this type, you can access your application only from within the cluster. It also provides the benefits of load balancing and discovery, which we will learn about in detail in the sections below.
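As a minimal sketch, assuming you already have a deployment (here hypothetically named webapp) listening on port 8080, a ClusterIP service can be created straight from the command line:

# Creates a ClusterIP service (the default type) in front of the deployment
kubectl expose deployment webapp --name=webapp-clusterip --port=80 --target-port=8080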
If you want to check all your created services, use the get command:

kubectl get services

Also, you can write it with the alias:

kubectl get svc
Trust me, this saves a lot of time.🥺
Also, I have already covered YAML file creation in detail.
NodePort
NodePort lets you access your application from within the cluster, as well as from outside it by anyone who can reach the worker nodes you set up during cluster creation.
To create a service of NodePort type, write the /root/service-definition-1.yaml file as follows:
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
  namespace: default
spec:
  ports:
    - nodePort: 30080
      port: 8080
      targetPort: 8080
  selector:
    name: simple-webapp
  type: NodePort
Run the following command to create the webapp-service service:
kubectl apply -f /root/service-definition-1.yaml
Create service-definition-1.yaml -> add the service template using vi/vim -> edit the details in the file -> use the apply command to create it.
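Once the service is running, the application should be reachable on the nodePort of any worker node you can reach. A quick check might look like this (the node IP is a placeholder you need to fill in):

# Find the node IPs
kubectl get nodes -o wide

# Hit the application on the nodePort defined above (30080)
curl http://<node-ip>:30080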
I am not going into detail about the types of services here, because I have covered them in my previous blog.🫣
We will only be focusing on:
Exposing Kubernetes workloads to the outside world👶
LoadBalancer
LoadBalancer is the type of service that exposes your application to the outside world.
This only works on cloud providers. The cloud provider creates a load balancer (for example, an Elastic Load Balancer on AWS) with a public IP address through which you can access your application.
The Cloud Controller Manager, which runs on your master node, requests the public IP address from the cloud provider (e.g. AWS) and attaches it to the service, so that anyone can access your application using that IP address.
To create a service, we obviously need the pods running first, which means we need a deployment-definition file.
So here it is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: my/webapp:latest
          ports:
            - containerPort: 80
Now it is time to create a service-definition-2.yaml file with the LoadBalancer type:
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  selector:
    app: webapp
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
First, run:
kubectl apply -f /root/service-definition-2.yaml
Now the service is created. You can access it over the internet using the external IP address and port number provided by the service.
Check the IP address and ports with the get command:
kubectl get svc <name>
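For example, for the webapp-service created above, you could wait for the cloud provider to hand out the external address and then test it with curl (the EXTERNAL-IP is a placeholder until provisioning finishes):

# Watch until EXTERNAL-IP changes from <pending> to a real address
kubectl get svc webapp-service --watch

# Access the application on the service port (80 in this example)
curl http://<EXTERNAL-IP>:80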
How to discover Services and Pods within a Kubernetes cluster using DNS and other mechanisms?🧐
We now know how a service works and what its purpose is. But have you thought about how it actually does this? How does it forward the users' requests? How does it return everything the users ask for?
Let's see the solution:
A Service acts as a load balancer with the help of a component known as kube-proxy.
So instead of making users deal with pod IP addresses, it gives them a stable service name through which they can access the application.
kube-proxy forwards the requests coming from the users to the pods behind the service.
So without Services, your application is not usable even if everything else (pods, deployments, etc.) is ready. When a pod goes down and there is no service in front of it, you cannot keep serving your application to the users.
But one question remains: how does the service handle the IP addresses, when every time a pod goes down it comes back up with a different address? How is this managed?
Say three pods are deployed in the cluster and each one has its own IP address. Whenever a pod goes down, it comes back up with a new IP. How does the service keep up with these new IPs all the time while still handling all the users' requests?
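You can watch this happen yourself; as a rough sketch (the pod name is whatever your deployment generated):

# The IP column shows each pod's address
kubectl get pods -o wide

# Delete one pod; the ReplicaSet recreates it, usually with a different IP
kubectl delete pod <pod-name>
kubectl get pods -o wide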
This is handled by Service Discovery.
Service Discovery🕵️
Service discovery relies on a concept called Labels & Selectors.
Instead of working with IP addresses and keeping track of them, the service uses labels & selectors.
Labels📋
It is just a key-value pair.
You can name it following your own conventions.
It is used to organize and select objects in the cluster.
You can label Pods, Deployments, and Services.
Example: key=app and value=myapp:

metadata:
  labels:
    app: myapp
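A quick way to see labels in action on the command line (the pod name here is hypothetical):

# Show every pod together with its labels
kubectl get pods --show-labels

# Add or change a label on an existing pod
kubectl label pod myapp-pod app=myapp --overwrite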
Selectors✅
It must be defined with the same key-value pair as the labels it should match.
It is used to select objects based on their labels.
It can be used to select a single object or a group of objects that match specific labels.
Example: create a Service that selects all Pods with the label app=myapp:

spec:
  selector:
    app: myapp
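The same label-based selection works on the command line, which is a handy way to check what a selector would match (the second label is just an illustration):

# List only the pods carrying the label app=myapp
kubectl get pods -l app=myapp

# Multiple labels can be combined
kubectl get pods -l app=myapp,tier=frontend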
We define labels in the pod template of our YAML file, so that whenever a pod goes down and comes back up with a new IP, it still carries the same label we defined, because a new pod is always created from that template.
The ReplicaSet deploys the pods using these labels.
Services keep track of the pods behind a deployment by those labels.
Labels are just names attached to pods, nothing more.
So this is the service discovery mechanism that uses labels & selectors.
So, in your deployment definition YAML file, you define a label in the pod template's metadata section, and it can be any name you choose for your deployed application.
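To see that a service really tracks pods through labels rather than fixed IPs, you can inspect its endpoints; assuming the webapp-service from earlier:

# Each address listed is the IP of a pod matched by the service's selector;
# delete a pod and the list updates to the replacement pod's new IP
kubectl get endpoints webapp-service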
Pod Discovery Using DNS🌐
The CoreDNS server is deployed as a POD in the kube-system namespace of the k8s cluster.
It is exposed through a service so that it is reachable by other components within the cluster, and that service is named kube-dns by default.
kubectl get service -n kube-system
It uses a file called Corefile, which is located at /etc/coredns inside the CoreDNS pod:
cat /etc/coredns/Corefile
In this file, you have the configuration of all the plugins.
One of the plugins that makes CoreDNS work with k8s is the kubernetes plugin.
The top-level domain name of the cluster is set here as cluster.local.
The DNS configuration on PODS is done automatically by k8s when the pod is created.
The config file of the kubelet gives you the IP of the DNS server and the cluster domain:
cat /var/lib/kubelet/config.yaml
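The fields to look for in that file are clusterDNS and clusterDomain (the path below assumes the default kubeadm location):

# clusterDNS holds the kube-dns service IP, clusterDomain is usually cluster.local
grep -A 2 clusterDNS /var/lib/kubelet/config.yaml
grep clusterDomain /var/lib/kubelet/config.yaml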
You can access your service by just:
name-service
or
name-service.default
or
name-service.default.svc
or
name-service.default.svc.cluster.local
<service-name>.<namespace>.svc.cluster.local
Example: service-name=mywebapp and namespace=default (the default namespace, which is created automatically). Then:
mywebapp.default.svc.cluster.local
You can also manually look up the service using nslookup or the host command:

host <name-service>
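For instance, you could run the lookup from inside a temporary pod (the image and service name are just examples):

# Start a throwaway busybox pod and resolve the service's FQDN from inside the cluster
kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never -- nslookup mywebapp.default.svc.cluster.local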
Pod Discovery Using Environment Variables🔗
Every container running inside the k8s cluster gets a set of environment variables automatically set by Kubernetes with information about the Services in its namespace.
These variables can be used to discover the IP addresses, port numbers and hostnames of services and running pods.
Example:
The <SERVICE_NAME>_SERVICE_HOST and <SERVICE_NAME>_SERVICE_PORT environment variables are automatically set in containers for every Service that exists when the Pod is started (the prefix is the service name in upper case, with dashes replaced by underscores).
The HOSTNAME environment variable is set to the hostname of the Pod, and a variable such as MY_POD_IP can be set to the Pod's IP address via the downward API, as we will do below.
Let's create a simple service YAML file of type ClusterIP to use in the next steps:
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  labels:
    app: myapp
spec:
  type: ClusterIP
  selector:
    app: myapp
  ports:
    - name: http
      port: 80
      targetPort: 8080
Now, let's create a pod YAML file that runs image=nginx and sets two env variables, MYAPP_SERVICE_HOST and MYAPP_POD_IP:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
    - name: myapp-container
      image: nginx
      env:
        - name: MYAPP_SERVICE_HOST
          value: "myapp-service.default.svc.cluster.local"
        - name: MYAPP_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
MYAPP_SERVICE_HOST is set to myapp-service.default.svc.cluster.local, and MYAPP_POD_IP is set using a fieldRef that retrieves the Pod's IP address from the Pod's status.
Together, these files define a web application that can be accessed using the virtual IP address assigned to the Service.
When a client sends a request to the Service's IP address on port 80, the request is forwarded to one of the Pods with the app: myapp label, and the response is sent back to the client.
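Once both objects are applied, you can verify the variables from inside the container (the file names are assumptions based on the examples above):

# Apply the service and pod definitions
kubectl apply -f myapp-service.yaml -f myapp-pod.yaml

# Print the environment variables inside the running container
kubectl exec myapp-pod -- env | grep -i myapp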
This is how you can discover pods using environment variables.
Thank you!🖤