I tried configuring ingress on my Kubernetes cluster. I followed the documentation to install the ingress controller and ran the following commands:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml

After that, default-http-backend and nginx-ingress-controller were running:
ingress-nginx   default-http-backend-846b65fb5f-6kwvp      1/1   Running   0   23h   192.168.2.28   node1
ingress-nginx   nginx-ingress-controller-d658896cd-6m76j   1/1   Running   0   6m    192.168.2.31   node1

I tried testing ingress and deployed the following service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: my-echo
        image: gcr.io/google_containers/echoserver:1.8
---
apiVersion: v1
kind: Service
metadata:
  name: echoserver-svc
spec:
  selector:
    app: echo
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080

And the following ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: happy-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: happy.k8s.io
    http:
      paths:
      - path: /echoserver
        backend:
          serviceName: echoserver-svc
          servicePort: 8080

When I ran the command 'kubectl get ing' I received:
NAME            HOSTS          ADDRESS   PORTS   AGE
happy-ingress   happy.k8s.io             80      14m

The ADDRESS is not resolved, and I can't figure out what the problem is because all the pods are running. Can you give me a hint as to what the issue could be?
Thanks
- Any useful info in the logs of the nginx-ingress-controller pod? – Vishal Biyani, Jul 25, 2018 at 7:35
- Hello, I found the issue. I was expecting the service to be exposed on port 80, but it is exposed on 30927. Can I configure this to be exposed on port 80? – Dorin, Jul 25, 2018 at 17:32
- https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/ – Crou, Jul 26, 2018 at 10:20
19 Answers
You have to enable the ingress addon with the following command before creating ingress rules. You can also enable it before executing any other command.
$ minikube addons enable ingress
ingress was successfully enabled

Wait until the pods are up and running. You can check by executing the following command and waiting for similar output:
kubectl get pods -n kube-system | grep nginx-ingress-controller
nginx-ingress-controller-5984b97644-jjng2   1/1   Running   2   1h
For the Deployment you have to specify the containerPort, and for the Service you have to specify the http protocol.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: echoserver-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-echo
  template:
    metadata:
      labels:
        app: my-echo
    spec:
      containers:
      - name: my-echo
        image: gcr.io/kubernetes-e2e-test-images/echoserver:2.1
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: echoserver-svc
spec:
  selector:
    app: my-echo
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
    name: http

For the ingress rule, change the servicePort from 8080 to 80, the default HTTP port.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: happy-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: happy.k8s.io
    http:
      paths:
      - path: /echoserver
        backend:
          serviceName: echoserver-svc
          servicePort: 80

Now apply those files to create your pods, service and ingress rule. It will take a few moments for your ingress rule to get an ADDRESS.
Now you can visit your service using the minikube IP address, but not by host name yet. For that you have to add the host and its IP address to the /etc/hosts file. So open /etc/hosts in your favorite editor and add the line below, where <minikube_ip> is the actual IP of your minikube:
<minikube_ip> happy.k8s.io

Now you can access your service using the host name. Verify with the following command:
curl http://happy.k8s.io/echoserver
As the official documentation says:
Because NodePort Services do not get a LoadBalancerIP assigned by definition, the NGINX Ingress controller does not update the status of Ingress objects it manages
You have deployed the NGINX Ingress controller as described in the installation guide, so it is normal that your ADDRESS is empty!
Instead, the external client must append the NodePort allocated to the ingress-nginx Service to HTTP requests.
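For example, a quick way to test this (assuming the controller's Service is named ingress-nginx and lives in the ingress-nginx namespace, as the bare-metal service-nodeport.yaml from the question creates) is to look up the allocated NodePort and send a request with the host from your ingress rule:

# Find the NodePort allocated for port 80 of the ingress-nginx Service
kubectl get svc -n ingress-nginx ingress-nginx

# Suppose it shows 80:30927/TCP (as in the asker's comment); then hit any
# node on that port, passing the host so nginx can match the ingress rule
curl -H "Host: happy.k8s.io" http://<node-ip>:30927/echoserver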
PS: this question is not about minikube.
2 Comments
- http://node.mydomain:nodeport

Ingress problems are specific to your implementation, so let me just speak to a bare-metal, LAN implementation with no load balancer.
The key validation point of an ingress resource is that it gets an address (after all, what am I supposed to hit if the ingress doesn't have an address associated with it?). So if you run
kubectl get ingress some-ingress

over and over (give it 30 seconds or so) and it never shows an address - what now?
On a bare-metal LAN there are a few trouble spots. First, ensure that your Ingress resource can find your Ingress controller - so (setting aside the controller spec for now) your resource needs to be able to find your controller with something like:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-api-ingress
  annotations:
    kubernetes.io/ingress.class: nginx

Where your controller has entries like:
Labels: app.kubernetes.io/component=controller
        app.kubernetes.io/instance=ingress-nginx
        app.kubernetes.io/managed-by=Helm
        app.kubernetes.io/name=ingress-nginx
        app.kubernetes.io/version=1.0.4
        helm.sh/chart=ingress-nginx-4.0.6

But now let's go back a step - your controller - is it set up appropriately for your (in my case) bare-metal LAN implementation? (This is all made super-easy by the cloud providers, so I'm providing this answer for my friends in the private cloud community.)
There the issue is that you need the essential hostNetwork setting to be true in your Ingress controller deployment. So for that, instead of using the basic YAML (https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.4/deploy/static/provider/baremetal/deploy.yaml) as-is - wget it, and modify it so that the Deployment's spec.template.spec has hostNetwork: true - something like (scroll down to the end):
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.6
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.4
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          image: k8s.gcr.io/ingress-nginx/controller:v1.0.4@sha256:545cff00370f28363dad31e3b59a94ba377854d3a11f18988f5f9e56841ef9ef
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --election-id=ingress-controller-leader
            - --controller-class=k8s.io/ingress-nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      hostNetwork: true
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission

Deploy that, then set up your ingress resource, as indicated above.
Your key tests are:
- Does my ingress have a dedicated node in my bare-metal implementation?
- If I hit port 80 on the ingress node, do I get the same thing as hitting the NodePort? (Let's say the NodePort is 31207 - do I get the same thing hitting port 80 on the ingress node as hitting port 31207 on any node? See the curl sketch below.)
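A minimal sketch of that second check, using the hypothetical NodePort 31207 from above; the Host header and IPs are placeholders for whatever your ingress rule and nodes actually use:

# Port 80 directly on the node running the controller (hostNetwork: true)
curl -H "Host: <your-ingress-host>" http://<ingress-node-ip>/

# The NodePort on any node in the cluster - should return the same response
curl -H "Host: <your-ingress-host>" http://<any-node-ip>:31207/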
Final note: Ingress has changed a lot over the past few years, and tutorials often provide examples that don't pass validation - please feel free to comment on this answer if it has become out of date.
3 Comments
- The kubernetes.io/ingress.class: nginx annotation solved my problem.
- kubernetes.io/ingress.class: nginx is the recipe!

I don't know if this will help, but I had the same problem. I didn't install any ingress controller or anything like some people said on GitHub; I just created an Ingress to be able to point my subdomain to a different service, and at first my Ingress had no IP address (it was empty).
NAME      HOSTS                            ADDRESS   PORTS   AGE
ingress   delivereo.tk,back.delivereo.tk             80      39s

I think it was because my two services for the front-end app and the back-end API had the type LoadBalancer. I changed it to NodePort because they don't need an external IP now that the ingress controller manages which URL goes where.
And when I changed the type of my services to NodePort, after 2 or 3 minutes the IP address of the Ingress appeared. When I pointed my Cloudflare DNS to the new Ingress IP, I tested my subdomain and it worked!
BEFORE
apiVersion: v1
kind: Service
metadata:
  name: delivereotk
spec:
  ports:
  - port: 80
  selector:
    app: delivereotk
  type: LoadBalancer

AFTER
apiVersion: v1
kind: Service
metadata:
  name: delivereotk
spec:
  ports:
  - port: 80
  selector:
    app: delivereotk
  type: NodePort

Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  rules:
  - host: delivereo.tk
    http:
      paths:
      - backend:
          serviceName: delivereotk
          servicePort: 80
  - host: back.delivereo.tk
    http:
      paths:
      - backend:
          serviceName: backdelivereotk
          servicePort: 80
Your hostname happy.k8s.io should resolve to an actual IP address of the nginx-ingress-controller, which points to the front of your load balancer.
You can check which IP the cluster is working under:
bash-3.2$ kubectl cluster-info
Kubernetes master is running at https://192.168.1.100:8443
KubeDNS is running at https://192.168.1.100:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

Test the ingress controller for your cluster using curl:
bash-3.2$ curl http://192.168.1.100:8080
default backend - 404

In the end, you should just add the domain entry to /etc/hosts:
192.168.1.100 happy.k8s.io
Ran into the exact same issue, and lost some time trying to understand.
After tailing my nginx-controller logs, the error was quite obvious:

W0612 21:00:52.496281 7 controller.go:240] ignoring ingress my-ing-my-ns-ing in my-ns based on annotation : ingress does not contain a valid IngressClass
I0612 21:00:52.496311 7 main.go:100] "successfully validated configuration, accepting" ingress="my-ns/my-ing"
I0612 21:00:52.518778 7 store.go:423] "Ignoring ingress because of error while validating ingress class" ingress="my-ns/my-ing" error="ingress does not contain a valid IngressClass"

And indeed, after describing my ingress resource, my annotations weren't there!
So after adding (among others) the following annotation to my ingress resource: kubernetes.io/ingress.class: nginx
It's all good :) My Ingress resource is recognized by the nginx ingress-controller and taken care of.
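For reference, a minimal sketch of where that annotation sits, reusing the my-ns/my-ing names from the log lines above (the backend service name is just a placeholder):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ing
  namespace: my-ns
  annotations:
    kubernetes.io/ingress.class: nginx   # the piece that was missing
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-svc        # placeholder backend service
            port:
              number: 80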
PS: if you're using Helm subcharts (for example for a generic microservice Helm chart), don't forget to run helm dependency update before you redeploy ^^ - could have saved some time with this one.
I encountered the same problem on GKE (Google Kubernetes Engine). It turned out there was a problem with the ingress settings: I used incorrect names for the desired services. You should check for errors by executing this command:
kubectl describe ingress <your-ingress-name>
I faced a similar issue and realized I needed to:

Enable the NGINX Ingress controller by running the following command:
minikube addons enable ingress

Verify that the NGINX Ingress controller is running:
kubectl get pods -n kube-system
Expect the nginx-ingress-controller pod to be in Running status.
For me this issue was happening because the ingress controller was not running in my cluster, so I had to install it manually by running the command below.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.48.1/deploy/static/provider/baremetal/deploy.yaml

Once this is done, check if the ingress controller pod is up; it will be present in the ingress-nginx namespace.
kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-xv8nh        0/1     Completed   0          2m2s
ingress-nginx-admission-patch-6gn6s         0/1     Completed   0          2m1s
ingress-nginx-controller-5cb8d9c6dd-lml8w   1/1     Running     0          2m3s

Once this was done, the ingress address appeared. It takes around 2-3 minutes.
Reference: Install Ingress Controller
What you are looking for is a way to test your ingress resource.
You can do this by:
- Look for the IP address of the ingress controller pod(s) and use port 80/443 along with this IP.
- Look for the service which exposes the ingress controller deployment.
- If there is no service, you can create one and expose the ingress controller deployment yourself (see the sketch after this list).
- If you want a hostname, then you will need to manage the DNS entries.
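A minimal sketch of that third point, assuming the controller Deployment is called ingress-nginx-controller and lives in the ingress-nginx namespace (adjust both to match your install); the service name here is hypothetical:

# Expose the controller Deployment through a NodePort Service
kubectl expose deployment ingress-nginx-controller \
  --namespace ingress-nginx \
  --name ingress-nginx-test \
  --type NodePort \
  --port 80 --target-port 80

# Find the allocated NodePort, then curl http://<any-node-ip>:<nodeport>/
kubectl get svc ingress-nginx-test -n ingress-nginx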
In my case the problem was simply a missing secret in the relevant namespace.
In my case, for API version 1.20+, the only supported pathType value is ImplementationSpecific.
For details, check this link: https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer
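For illustration, a path entry with that pathType in a networking.k8s.io/v1 Ingress spec would look something like this fragment (reusing the echoserver service from the question as the backend):

spec:
  rules:
  - http:
      paths:
      - path: /echoserver
        pathType: ImplementationSpecific   # instead of Prefix or Exact
        backend:
          service:
            name: echoserver-svc
            port:
              number: 80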
If TLS is part of your ingress, then a missing secret can be the cause. To find the secret name, first describe your ingress:
kubectl describe ing <ing-name> -n <namespace>

Or find 'secretName' in your YAML file:
tls:
- hosts:
  - example.com
  secretName: secret-name

And make sure that the secret is available when you list your secrets:
kubectl get secrets -n <namespace>
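If it turns out to be missing, one way to (re)create it - assuming you already have the certificate and key files on disk; the paths here are placeholders - is:

kubectl create secret tls secret-name \
  --cert=path/to/tls.crt \
  --key=path/to/tls.key \
  -n <namespace>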
First run:

minikube addons enable ingress

Then run:

minikube tunnel
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml
kubectl get pods -n ingress-nginx
I had a similar case. The ingress we had written is just an ingress rule; to make the address available we need the ingress controller pods running too.

Simply check whether the controller pods are running or not:
kubectl get pods

You should have the controller pods running. If not, you can install the controller using Helm.
If you are using Helm 2, simply use the following command:
helm install stable/nginx-ingress --name my-nginx

Follow these docs for different ways to install it: https://kubernetes.github.io/ingress-nginx/deploy/
When doing helm install you might get the following issue if you don't have tiller installed.
Error: could not find tiller

To fix it, install tiller as follows:
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller

To verify tiller is working:
kubectl get pods --namespace kube-system
I had the same problem, and in my case the cluster was configured with an nginx Ingress controller, whereas my Ingress.yaml (see below) was referencing nsx.
apiVersion: ...
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: nsx   <- should have been nginx
    ...
...

My learning: always check with the Kubernetes admin for the configuration, instead of spending hours (~8 hrs in my case) trying to debug xD
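As a quick sanity check, on Kubernetes 1.19+ you can list which ingress classes actually exist in the cluster before wiring your Ingress to one:

kubectl get ingressclass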
In a cluster you need to have a load balancer like MetalLB installed.

If you follow all the steps described in this post, you can create an ingress object with an address from the MetalLB load balancer: https://itnext.io/kubernetes-loadbalancer-service-for-on-premises-6b7f75187be8
Please find below what an ingress object looks like after following all the steps above:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{},"creationTimestamp":"2024-01-19T08:58:11Z","generation":1,"name":"hello-world-ingress","namespace":"default","resourceVersion":"8249","uid":"e3797acf-779f-44dd-ab28-4f9a977ff0b8"},"spec":{"ingressClassName":"nginx","rules":[{"host":"hello-world.exposed","http":{"paths":[{"backend":{"service":{"name":"web","port":{"number":3000}}},"path":"/","pathType":"Prefix"}]}}]},"status":{"loadBalancer":{}}}
  creationTimestamp: "2024-01-19T21:21:50Z"
  generation: 1
  name: hello-world-ingress
  namespace: default
  resourceVersion: "96582"
  uid: e20d676b-46a6-407d-9835-dabce6848e3c
spec:
  ingressClassName: nginx
  rules:
  - host: hello-world.exposed
    http:
      paths:
      - backend:
          service:
            name: web
            port:
              number: 3000
        path: /
        pathType: Prefix
status:
  loadBalancer:
    ingress:
    - ip: 192.168.74.96

As a result:
$ kubectl get ingress
NAME                  CLASS   HOSTS                 ADDRESS         PORTS   AGE
hello-world-ingress   nginx   hello-world.exposed   192.168.74.96   80      65m
As @DylanLai pointed out, it's fine that your ADDRESS is empty.
I'm not sure why people talk about the LoadBalancer or NodePort service here. Your Service is of type ClusterIP (because you didn't state anything else).
I had the same situation and for me the configuration was fine like that. You just need to verify one thing: your DNS settings should point the domain happy.k8s.io to an IP address of a node that is running the nginx-ingress-controller.
Check which nodes the ingress-controller pods are running on:
kubectl get pods --namespace=nginx-ingress -o wide
Point your domain to one of them (see kubectl get nodes -o wide).
Voilà, your domain gets load balanced automatically to the pods of your deployment. To verify that, access the domain from different browser windows, because it will stick to one pod once it has picked it.