Bare-metal considerations

In traditional cloud environments, where network load balancers are available on-demand, a single Kubernetes manifest suffices to provide a single point of contact to the Ingress-Nginx Controller for external clients and, indirectly, to any application running inside the cluster. Bare-metal environments lack this convenience, requiring a slightly different setup to offer the same kind of access to external consumers.

Cloud environment / Bare-metal environment

The rest of this document describes a few recommended approaches to deploying the Ingress-Nginx Controller inside a Kubernetes cluster running on bare-metal.

A pure software solution: MetalLB

MetalLB provides a network load-balancer implementation for Kubernetes clusters that do not run on a supported cloud provider, effectively allowing the usage of LoadBalancer Services within any cluster.

This section demonstrates how to use the Layer 2 configuration mode of MetalLB together with the NGINX Ingress controller in a Kubernetes cluster that has publicly accessible nodes. In this mode, one node attracts all the traffic for the ingress-nginx Service IP. See Traffic policies for more details.

MetalLB in L2 mode

Note

The description of other supported configuration modes is out of scope for this document.

Warning

MetalLB is currently in beta. Read about the Project maturity and make sure you inform yourself by reading the official documentation thoroughly.

MetalLB can be deployed either with a simple Kubernetes manifest or with Helm. The rest of this example assumes MetalLB was deployed following the Installation instructions, and that the Ingress-Nginx Controller was installed using the steps described in the quickstart section of the installation guide.
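For reference, a Helm-based MetalLB installation could look like the following sketch; the chart repository URL matches the official MetalLB project, while the release name and namespace are arbitrary choices here:

$ helm repo add metallb https://metallb.github.io/metallb
$ helm install metallb metallb/metallb \
    --namespace metallb-system --create-namespace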

MetalLB requires a pool of IP addresses in order to be able to take ownership of the ingress-nginx Service. This pool can be defined through IPAddressPool objects in the same namespace as the MetalLB controller. This pool of IPs must be dedicated to MetalLB's use; you can't reuse the Kubernetes node IPs or IPs handed out by a DHCP server.

Example

Given the following 3-node Kubernetes cluster (the external IP is added as an example; in most bare-metal environments this value is <none>):

$ kubectl get node
NAME     STATUS   ROLES    EXTERNAL-IP
host-1   Ready    master   203.0.113.1
host-2   Ready    node     203.0.113.2
host-3   Ready    node     203.0.113.3

After creating the following objects, MetalLB takes ownership of one of the IP addresses in the pool and updates the loadBalancer IP field of the ingress-nginx Service accordingly.

---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
  - 203.0.113.10-203.0.113.15
  autoAssign: true
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
  - default
$ kubectl -n ingress-nginx get svc
NAME                   TYPE          CLUSTER-IP     EXTERNAL-IP    PORT(S)
default-http-backend   ClusterIP     10.0.64.249    <none>         80/TCP
ingress-nginx          LoadBalancer  10.0.220.217   203.0.113.10   80:30100/TCP,443:30101/TCP

As soon as MetalLB sets the external IP address of the ingress-nginx LoadBalancer Service, the corresponding entries are created in the iptables NAT table and the node with the selected IP address starts responding to HTTP requests on the ports configured in the LoadBalancer Service:

$ curl -D- http://203.0.113.10 -H 'Host: myapp.example.com'
HTTP/1.1 200 OK
Server: nginx/1.15.2

Tip

In order to preserve the source IP address in HTTP requests sent to NGINX, it is necessary to use the Local traffic policy. Traffic policies are described in more detail in Traffic policies as well as in the next section.

Over a NodePort Service

Due to its simplicity, this is the setup a user will deploy by default when following the steps described in the installation guide.

Info

A Service of type NodePort exposes, via the kube-proxy component, the same unprivileged port (default: 30000-32767) on every Kubernetes node, masters included. For more information, see Services.

In this configuration, the NGINX container remains isolated from the host network. As a result, it can safely bind to any port, including the standard HTTP ports 80 and 443. However, due to the container namespace isolation, a client located outside the cluster network (e.g. on the public internet) is not able to access Ingress hosts directly on ports 80 and 443. Instead, the external client must append the NodePort allocated to the ingress-nginx Service to HTTP requests.

NodePort request flow

You can customize the exposed node port numbers by setting the controller.service.nodePorts.* Helm values, but they still have to be in the 30000-32767 range.
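For instance, a Helm values file along the lines of the following sketch would pin both node ports; the specific numbers are arbitrary picks from the allowed range:

controller:
  service:
    nodePorts:
      http: 30080   # NodePort for HTTP, must stay within 30000-32767
      https: 30443  # NodePort for HTTPS, must stay within 30000-32767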

Example

Given the NodePort 30100 allocated to the ingress-nginx Service

$ kubectl -n ingress-nginx get svc
NAME                   TYPE        CLUSTER-IP     PORT(S)
default-http-backend   ClusterIP   10.0.64.249    80/TCP
ingress-nginx          NodePort    10.0.220.217   80:30100/TCP,443:30101/TCP

and a Kubernetes node with the public IP address 203.0.113.2 (the external IP is added as an example; in most bare-metal environments this value is <none>)

$ kubectl get node
NAME     STATUS   ROLES    EXTERNAL-IP
host-1   Ready    master   203.0.113.1
host-2   Ready    node     203.0.113.2
host-3   Ready    node     203.0.113.3

a client would reach an Ingress with host: myapp.example.com at http://myapp.example.com:30100, where the myapp.example.com subdomain resolves to the 203.0.113.2 IP address.

Impact on the host system

While it may sound tempting to reconfigure the NodePort range using the --service-node-port-range API server flag to include unprivileged ports and be able to expose ports 80 and 443, doing so may result in unexpected issues including (but not limited to) the use of ports otherwise reserved to system daemons and the necessity to grant kube-proxy privileges it may otherwise not require.

This practice is therefore discouraged. See the other approaches proposed on this page for alternatives.

This approach has a few other limitations one ought to be aware of:

Source IP address

Services of type NodePort perform source address translation by default. This means that, from the perspective of NGINX, the source IP of an HTTP request is always the IP address of the Kubernetes node that received the request.

The recommended way to preserve the source IP in a NodePort setup is to set the value of the externalTrafficPolicy field of the ingress-nginx Service spec to Local (example).
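As a minimal sketch, assuming the ingress-nginx Service from the examples in this document, the field can be set with a one-line patch:

$ kubectl -n ingress-nginx patch svc ingress-nginx \
    -p '{"spec":{"externalTrafficPolicy":"Local"}}'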

Warning

This setting effectively drops packets sent to Kubernetes nodes which are not running any instance of the NGINX Ingress controller. Consider assigning NGINX Pods to specific nodes in order to control on which nodes the Ingress-Nginx Controller should or should not be scheduled, as sketched below.
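One way to do this is a nodeSelector in the Pod template; the ingress-ready label in this sketch is hypothetical and would have to be applied to the chosen nodes beforehand:

template:
  spec:
    nodeSelector:
      ingress-ready: "true"  # hypothetical label identifying ingress nodes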

Example

In a Kubernetes cluster composed of 3 nodes (the external IP is added as an example; in most bare-metal environments this value is <none>)

$ kubectl get node
NAME     STATUS   ROLES    EXTERNAL-IP
host-1   Ready    master   203.0.113.1
host-2   Ready    node     203.0.113.2
host-3   Ready    node     203.0.113.3

with an ingress-nginx-controller Deployment composed of 2 replicas

$ kubectl -n ingress-nginx get pod -o wide
NAME                                        READY   STATUS    IP           NODE
default-http-backend-7c5bc89cc9-p86md       1/1     Running   172.17.1.1   host-2
ingress-nginx-controller-cf9ff8c96-8vvf8    1/1     Running   172.17.0.3   host-3
ingress-nginx-controller-cf9ff8c96-pxsds    1/1     Running   172.17.1.4   host-2

Requests sent to host-2 and host-3 would be forwarded to NGINX and the original client's IP would be preserved, while requests to host-1 would get dropped because there is no NGINX replica running on that node.

Other ways to preserve the source IP in a NodePort setup are described here: Source IP address.

Ingress status

Because NodePort Services do not get a LoadBalancerIP assigned by definition, the Ingress-Nginx Controller does not update the status of Ingress objects it manages.

$ kubectl get ingress
NAME           HOSTS               ADDRESS   PORTS
test-ingress   myapp.example.com             80

Despite the fact there is no load balancer providing a public IP address to the Ingress-Nginx Controller, it is possible to force the status update of all managed Ingress objects by setting the externalIPs field of the ingress-nginx Service.

Warning

There is more to setting externalIPs than just enabling the Ingress-Nginx Controller to update the status of Ingress objects. Please read about this option in the Services page of the official Kubernetes documentation as well as the section about External IPs in this document for more information.

Example

Given the following 3-node Kubernetes cluster (the external IP is added as an example; in most bare-metal environments this value is <none>)

$ kubectl get node
NAME     STATUS   ROLES    EXTERNAL-IP
host-1   Ready    master   203.0.113.1
host-2   Ready    node     203.0.113.2
host-3   Ready    node     203.0.113.3

one could edit the ingress-nginx Service and add the following field to the object spec

spec:
  externalIPs:
  - 203.0.113.1
  - 203.0.113.2
  - 203.0.113.3

which would in turn be reflected on Ingress objects as follows:

$ kubectl get ingress -o wide
NAME           HOSTS               ADDRESS                               PORTS
test-ingress   myapp.example.com   203.0.113.1,203.0.113.2,203.0.113.3   80

Redirects

As NGINX is not aware of the port translation operated by the NodePort Service, backend applications are responsible for generating redirect URLs that take into account the URL used by external clients, including the NodePort.

Example

Redirects generated by NGINX, for instance HTTP to HTTPS or domain to www.domain, are generated without the NodePort:

$ curl -D- http://myapp.example.com:30100
HTTP/1.1 308 Permanent Redirect
Server: nginx/1.15.2
Location: https://myapp.example.com/  #-> missing NodePort in HTTPS redirect

Via the host network

In a setup where there is no external load balancer available but using NodePorts is not an option, one can configure ingress-nginx Pods to use the network of the host they run on instead of a dedicated network namespace. The benefit of this approach is that the Ingress-Nginx Controller can bind ports 80 and 443 directly to Kubernetes nodes' network interfaces, without the extra network translation imposed by NodePort Services.

Note

This approach does not leverage any Service object to expose the Ingress-Nginx Controller. If the ingress-nginx Service exists in the target cluster, it is recommended to delete it.

This can be achieved by enabling the hostNetwork option in the Pods' spec.

template:
  spec:
    hostNetwork: true

Security considerations

Enabling this option exposes every system daemon to the Ingress-Nginx Controller on any network interface, including the host's loopback. Please evaluate the impact this may have on the security of your system carefully.

Example

In this ingress-nginx-controller Deployment composed of 2 replicas, the NGINX Pods inherit the IP address of their host instead of an internal Pod IP:

$ kubectl -n ingress-nginx get pod -o wide
NAME                                        READY   STATUS    IP            NODE
default-http-backend-7c5bc89cc9-p86md       1/1     Running   172.17.1.1    host-2
ingress-nginx-controller-5b4cf5fc6-7lg6c    1/1     Running   203.0.113.3   host-3
ingress-nginx-controller-5b4cf5fc6-lzrls    1/1     Running   203.0.113.2   host-2

One major limitation of this deployment approach is that only a single Ingress-Nginx Controller Pod may be scheduled on each cluster node, because binding the same port multiple times on the same network interface is technically impossible. Pods that are unschedulable for this reason fail with the following event:

$ kubectl -n ingress-nginx describe pod <unschedulable-ingress-nginx-controller-pod>
...
Events:
  Type     Reason            From               Message
  ----     ------            ----               -------
  Warning  FailedScheduling  default-scheduler  0/3 nodes are available: 3 node(s) didn't have free ports for the requested pod ports.

One way to ensure only schedulable Pods are created is to deploy the Ingress-Nginx Controller as a DaemonSet instead of a traditional Deployment.

Info

A DaemonSet schedules exactly one type of Pod per cluster node, masters included, unless a node is configured to repel those Pods. For more information, see DaemonSet.

Because most properties of DaemonSet objects are identical to Deployment objects, this documentation page leaves the configuration of the corresponding manifest at the user's discretion.
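Purely for orientation, a skeletal manifest might look like the sketch below; the image tag is only an example and the container arguments are reduced to a bare minimum, so real values should be copied from your existing Deployment rather than from here:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
    spec:
      serviceAccountName: ingress-nginx
      hostNetwork: true                    # bind directly to the node's interfaces
      dnsPolicy: ClusterFirstWithHostNet   # keep in-cluster DNS resolution (see below)
      containers:
      - name: controller
        image: registry.k8s.io/ingress-nginx/controller:v1.9.4  # example tag
        args:
        - /nginx-ingress-controller
        ports:
        - containerPort: 80
        - containerPort: 443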

DaemonSet with hostNetwork flow

As with NodePorts, this approach has a few quirks it is important to be aware of.

DNS resolution

Pods configured with hostNetwork: true do not use the internal DNS resolver (i.e. kube-dns or CoreDNS), unless their dnsPolicy spec field is set to ClusterFirstWithHostNet. Consider using this setting if NGINX is expected to resolve internal names for any reason.
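In the Pod template, the combination mirrors the snippet shown earlier:

template:
  spec:
    hostNetwork: true
    dnsPolicy: ClusterFirstWithHostNet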

Ingress status

Because there is no Service exposing the Ingress-Nginx Controller in a configuration using the host network, the default --publish-service flag used in standard cloud setups does not apply and the status of all Ingress objects remains blank.

$ kubectl get ingress
NAME           HOSTS               ADDRESS   PORTS
test-ingress   myapp.example.com             80

Instead, and because bare-metal nodes usually don't have an ExternalIP, one has to enable the --report-node-internal-ip-address flag, which sets the status of all Ingress objects to the internal IP address of all nodes running the Ingress-Nginx Controller.

Example

Given an ingress-nginx-controller DaemonSet composed of 2 replicas

$ kubectl -n ingress-nginx get pod -o wide
NAME                                        READY   STATUS    IP            NODE
default-http-backend-7c5bc89cc9-p86md       1/1     Running   172.17.1.1    host-2
ingress-nginx-controller-5b4cf5fc6-7lg6c    1/1     Running   203.0.113.3   host-3
ingress-nginx-controller-5b4cf5fc6-lzrls    1/1     Running   203.0.113.2   host-2

the controller sets the status of all Ingress objects it manages to the following value:

$ kubectl get ingress -o wide
NAME           HOSTS               ADDRESS                   PORTS
test-ingress   myapp.example.com   203.0.113.2,203.0.113.3   80

Note

Alternatively, it is possible to override the address written to Ingress objects using the --publish-status-address flag. See Command line arguments.
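As a sketch, the flag is appended to the controller container's arguments; the address below reuses this document's example IP and is purely illustrative:

args:
- /nginx-ingress-controller
- --publish-status-address=203.0.113.10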

Using a self-provisioned edge

Similarly to cloud environments, this deployment approach requires an edge network component providing a public entrypoint to the Kubernetes cluster. This edge component can be either hardware (e.g. vendor appliance) or software (e.g. HAProxy) and is usually managed outside of the Kubernetes landscape by operations teams.

Such a deployment builds upon the NodePort Service described above in Over a NodePort Service, with one significant difference: external clients do not access cluster nodes directly; only the edge component does. This is particularly suitable for private Kubernetes clusters where none of the nodes has a public IP address.

On the edge side, the only prerequisite is to dedicate a public IP address that forwards all HTTP traffic to Kubernetes nodes and/or masters. Incoming traffic on TCP ports 80 and 443 is forwarded to the corresponding HTTP and HTTPS NodePort on the target nodes as shown in the diagram below:

User edge
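As an illustration, a minimal HAProxy configuration implementing this forwarding could look like the sketch below. It assumes the NodePorts 30100 and 30101 and the example node addresses used throughout this document; in a private cluster the server lines would point at the nodes' private IPs instead:

# TCP passthrough from the edge to the HTTP/HTTPS NodePorts on every node
defaults
    mode tcp
    timeout connect 5s
    timeout client 60s
    timeout server 60s

frontend http
    bind :80
    default_backend nodes-http

frontend https
    bind :443
    default_backend nodes-https

backend nodes-http
    server host-1 203.0.113.1:30100 check
    server host-2 203.0.113.2:30100 check
    server host-3 203.0.113.3:30100 check

backend nodes-https
    server host-1 203.0.113.1:30101 check
    server host-2 203.0.113.2:30101 check
    server host-3 203.0.113.3:30101 check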

External IPs

Source IP address

This method does not preserve the source IP of HTTP requests in any manner; it is therefore not recommended despite its apparent simplicity.

The externalIPs Service option was previously mentioned in the NodePort section.

As per the Services page of the official Kubernetes documentation, the externalIPs option causes kube-proxy to route traffic sent to arbitrary IP addresses and on the Service ports to the endpoints of that Service. These IP addresses must belong to the target node.

Example

Given the following 3-node Kubernetes cluster (the external IP is added as an example; in most bare-metal environments this value is <none>)

$ kubectl get node
NAME     STATUS   ROLES    EXTERNAL-IP
host-1   Ready    master   203.0.113.1
host-2   Ready    node     203.0.113.2
host-3   Ready    node     203.0.113.3

and the following ingress-nginx NodePort Service

$ kubectl -n ingress-nginx get svc
NAME            TYPE       CLUSTER-IP     PORT(S)
ingress-nginx   NodePort   10.0.220.217   80:30100/TCP,443:30101/TCP

One could set the following external IPs in the Service spec, and NGINX would become available on both the NodePort and the Service port:

spec:
  externalIPs:
  - 203.0.113.2
  - 203.0.113.3

$ curl -D- http://myapp.example.com:30100
HTTP/1.1 200 OK
Server: nginx/1.15.2

$ curl -D- http://myapp.example.com
HTTP/1.1 200 OK
Server: nginx/1.15.2

We assume the myapp.example.com subdomain above resolves to both 203.0.113.2 and 203.0.113.3 IP addresses.

