penggu/ansible-k8s: Install and configure a highly available Kubernetes cluster with Ansible
# Install and configure a highly available Kubernetes cluster

This Ansible role installs and configures a highly available Kubernetes cluster. The repo automates the installation process of Kubernetes using kubeadm.

This repo is only an example of how to use Ansible automation to install and configure a Kubernetes cluster. For a production environment, use Kubespray.

## Requirements

Install ansible, ipaddr and netaddr:

```shell
pip install -r requirements.txt
```

Download the role from GitHub:

```shell
ansible-galaxy install git+https://github.com/penggu/ansible-k8s.git
```
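Alternatively, the role can be pulled in through a requirements file, which is easier to keep under version control. A minimal sketch (the `name` value is an assumption; pick whatever role name your playbooks reference):

```yaml
# requirements.yml -- hypothetical equivalent of the one-off install command above
- src: https://github.com/penggu/ansible-k8s.git
  scm: git
  name: ansible-k8s
```

Then install with `ansible-galaxy install -r requirements.yml`.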

## Role Variables

This role accepts the following variables:

| Var | Required | Default | Desc |
|---|---|---|---|
| `kubernetes_subnet` | yes | 192.168.25.0/24 | Subnet where Kubernetes will be deployed. If the VM or bare-metal server has more than one interface, Ansible will select the interface used by Kubernetes based on the interface subnet |
| `disable_firewall` | no | no | If set to yes, Ansible will disable the firewall |
| `kubernetes_version` | no | 1.25.0 | Kubernetes version to install |
| `kubernetes_cri` | no | containerd | Kubernetes CRI to install |
| `kubernetes_cni` | no | flannel | Kubernetes CNI to install |
| `kubernetes_dns_domain` | no | cluster.local | Kubernetes default DNS domain |
| `kubernetes_pod_subnet` | no | 10.244.0.0/16 | Kubernetes pod subnet |
| `kubernetes_service_subnet` | no | 10.96.0.0/12 | Kubernetes service subnet |
| `kubernetes_api_port` | no | 6443 | kube-api listen port |
| `setup_vip` | no | no | Set up a Kubernetes VIP address using kube-vip |
| `kubernetes_vip_ip` | no | 192.168.25.225 | Required if `setup_vip` is set to yes. VIP address for the control plane |
| `kubevip_version` | no | v0.4.3 | kube-vip container version |
| `install_longhorn` | no | no | Install Longhorn, cloud-native distributed block storage for Kubernetes |
| `longhorn_version` | no | v1.3.1 | Longhorn release |
| `install_nginx_ingress` | no | no | Install the Nginx ingress controller |
| `nginx_ingress_controller_version` | no | controller-v1.3.0 | Nginx ingress controller version |
| `nginx_ingress_controller_http_nodeport` | no | 30080 | NodePort used by the Nginx ingress controller for incoming HTTP traffic |
| `nginx_ingress_controller_https_nodeport` | no | 30443 | NodePort used by the Nginx ingress controller for incoming HTTPS traffic |
| `enable_nginx_ingress_proxy_protocol` | no | no | Enable the Nginx ingress controller proxy-protocol mode |
| `enable_nginx_real_ip` | no | no | Enable the Nginx ingress controller real-ip module |
| `nginx_ingress_real_ip_cidr` | no | 0.0.0.0/0 | Required if `enable_nginx_real_ip` is set to yes. Trusted subnet to use with the real-ip module |
| `nginx_ingress_proxy_body_size` | no | 20m | Nginx ingress controller max proxy body size |
| `sans_base` | no | [list of values, see defaults/main.yml] | List of IP addresses or FQDNs used to sign the kube-api certificate |
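As a worked example, the variables above could be set in a group vars file rather than on the command line. A hypothetical `group_vars/all.yml` (every value here is illustrative, taken from the defaults table):

```yaml
# group_vars/all.yml -- hypothetical overrides of the role defaults
kubernetes_subnet: 192.168.25.0/24
kubernetes_version: "1.25.0"
disable_firewall: yes
setup_vip: yes
kubernetes_vip_ip: 192.168.25.225
install_longhorn: yes
install_nginx_ingress: yes
nginx_ingress_controller_http_nodeport: 30080
nginx_ingress_controller_https_nodeport: 30443
```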

## Extra Variables

This role accepts an extra variable, `kubernetes_init_host`. It is used when the cluster is bootstrapped for the first time. Its value must be the hostname of one of the master nodes: when Ansible runs on the matched host, Kubernetes will be initialized there.

## Cluster resources deployed

With this role, the Nginx ingress controller and Longhorn will be installed.

### Nginx ingress controller

Nginx ingress controller is used as the ingress controller.

The installation follows the bare-metal installation, so the ingress controller is exposed via a NodePort Service. You can customize the ports exposed by the NodePort Service; use the Role Variables to change these values.

### Longhorn

Longhorn is a lightweight, reliable, and powerful distributed block storage system for Kubernetes.

Longhorn implements distributed block storage using containers and microservices. Longhorn creates a dedicated storage controller for each block device volume and synchronously replicates the volume across multiple replicas stored on multiple nodes. The storage controller and replicas are themselves orchestrated using Kubernetes.
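Once the role has installed Longhorn, workloads consume it through a PersistentVolumeClaim. A minimal sketch, assuming the installed storage class is named `longhorn` (verify with `kubectl get storageclass` on your cluster; the PVC name is a placeholder):

```yaml
# demo-pvc.yml -- hypothetical claim backed by Longhorn
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: longhorn
  resources:
    requests:
      storage: 1Gi
```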

## Vagrant

To test this role you can use Vagrant and VirtualBox to bring up an example infrastructure. Once you have downloaded this repo, use Vagrant to start the virtual machines:

```shell
vagrant up
```

In the Vagrantfile you can inject your public SSH key directly into the authorized_keys of the vagrant user: change the CHANGE_ME placeholder in the Vagrantfile. You can also adjust the number of VMs deployed by changing the NNODES variable (default: 6).
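The relevant portion of such a Vagrantfile might look like the following sketch. The box name, loop structure, and provisioning step are assumptions for illustration; the repo's actual Vagrantfile may differ:

```ruby
# Vagrantfile (sketch) -- NNODES and the CHANGE_ME placeholder as described above
NNODES = 6                                  # number of VMs to create
PUBLIC_KEY = "CHANGE_ME"                    # replace with your public SSH key

Vagrant.configure("2") do |config|
  (0...NNODES).each do |i|
    config.vm.define "k8s-ubuntu-#{i}" do |node|
      node.vm.box = "ubuntu/jammy64"        # assumed base box
      # Append the key to the vagrant user's authorized_keys
      node.vm.provision "shell", inline: <<-SHELL
        echo '#{PUBLIC_KEY}' >> /home/vagrant/.ssh/authorized_keys
      SHELL
    end
  end
end
```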

## Using this role

To use this role, follow the example in the examples/ dir. Adjust the hosts.ini file with your hosts and run the playbook:

```shell
lorenzo@mint-virtual:~$ ansible-playbook -i hosts-ubuntu.ini site.yml -e kubernetes_init_host=k8s-ubuntu-0

PLAY [kubemaster] ************************************************************

TASK [Gathering Facts] *******************************************************
ok: [k8s-ubuntu-2]
ok: [k8s-ubuntu-1]
ok: [k8s-ubuntu-0]

TASK [ansible-role-kubernetes : include_tasks] *******************************
included: /home/lorenzo/workspaces-local/ansible-role-kubernetes/tasks/setup_repo_Debian.yml for k8s-ubuntu-0, k8s-ubuntu-1, k8s-ubuntu-2 => (item=/home/lorenzo/workspaces-local/ansible-role-kubernetes/tasks/setup_repo_Debian.yml)

TASK [ansible-role-kubernetes : Install required system packages] ************
ok: [k8s-ubuntu-2]
ok: [k8s-ubuntu-1]
ok: [k8s-ubuntu-0]

TASK [ansible-role-kubernetes : Add Google GPG apt Key] **********************
ok: [k8s-ubuntu-0]
ok: [k8s-ubuntu-1]
ok: [k8s-ubuntu-2]

TASK [ansible-role-kubernetes : Add K8s Repository] **************************
ok: [k8s-ubuntu-1]
ok: [k8s-ubuntu-2]
ok: [k8s-ubuntu-0]

TASK [ansible-role-kubernetes : Add Docker GPG apt Key] **********************
ok: [k8s-ubuntu-1]
ok: [k8s-ubuntu-0]
ok: [k8s-ubuntu-2]

TASK [ansible-role-kubernetes : shell] ***************************************
changed: [k8s-ubuntu-1]
changed: [k8s-ubuntu-2]
changed: [k8s-ubuntu-0]

TASK [ansible-role-kubernetes : Add Docker Repository] ***********************
ok: [k8s-ubuntu-0]
ok: [k8s-ubuntu-1]
ok: [k8s-ubuntu-2]

TASK [ansible-role-kubernetes : setup] ***************************************
ok: [k8s-ubuntu-1]
ok: [k8s-ubuntu-0]
ok: [k8s-ubuntu-2]

TASK [ansible-role-kubernetes : include_tasks] *******************************
included: /home/lorenzo/workspaces-local/ansible-role-kubernetes/tasks/preflight.yml for k8s-ubuntu-0, k8s-ubuntu-1, k8s-ubuntu-2

TASK [ansible-role-kubernetes : disable ufw] *********************************
ok: [k8s-ubuntu-2]
ok: [k8s-ubuntu-0]
ok: [k8s-ubuntu-1]

TASK [ansible-role-kubernetes : Install iptables-legacy] *********************
skipping: [k8s-ubuntu-0]
skipping: [k8s-ubuntu-1]
skipping: [k8s-ubuntu-2]

TASK [ansible-role-kubernetes : Remove zram-generator-defaults] **************
skipping: [k8s-ubuntu-0]
skipping: [k8s-ubuntu-1]
skipping: [k8s-ubuntu-2]

TASK [ansible-role-kubernetes : disable firewalld] ***************************
skipping: [k8s-ubuntu-0]
skipping: [k8s-ubuntu-1]
skipping: [k8s-ubuntu-2]

TASK [ansible-role-kubernetes : Put SELinux in permissive mode, logging actions that would be blocked.] ***
skipping: [k8s-ubuntu-0]
skipping: [k8s-ubuntu-1]
skipping: [k8s-ubuntu-2]

TASK [ansible-role-kubernetes : Disable SELinux] *****************************
skipping: [k8s-ubuntu-0]
skipping: [k8s-ubuntu-1]
skipping: [k8s-ubuntu-2]

TASK [ansible-role-kubernetes : Install openssl] *****************************
ok: [k8s-ubuntu-2]
ok: [k8s-ubuntu-1]
ok: [k8s-ubuntu-0]

TASK [ansible-role-kubernetes : load overlay kernel module] ******************
changed: [k8s-ubuntu-1]
changed: [k8s-ubuntu-0]
changed: [k8s-ubuntu-2]

TASK [ansible-role-kubernetes : load br_netfilter kernel module] *************
changed: [k8s-ubuntu-1]
changed: [k8s-ubuntu-0]
changed: [k8s-ubuntu-2]

[...]

TASK [ansible-role-kubernetes : Add KUBELET_ROOT_DIR env var] ****************
skipping: [k8s-ubuntu-3]

TASK [ansible-role-kubernetes : Add KUBELET_ROOT_DIR env var, set value] *****
skipping: [k8s-ubuntu-3]

TASK [ansible-role-kubernetes : Install longhorn] ****************************
skipping: [k8s-ubuntu-3]

TASK [ansible-role-kubernetes : Install longhorn storageclass] ***************
skipping: [k8s-ubuntu-3]

TASK [ansible-role-kubernetes : include_tasks] *******************************
included: /home/lorenzo/workspaces-local/ansible-role-kubernetes/tasks/install_nginx_ingress.yml for k8s-ubuntu-3, k8s-ubuntu-4, k8s-ubuntu-5

TASK [ansible-role-kubernetes : Check if ingress-nginx is installed] *********
changed: [k8s-ubuntu-3 -> k8s-ubuntu-0(192.168.25.110)]

TASK [ansible-role-kubernetes : Install ingress-nginx] ***********************
skipping: [k8s-ubuntu-3]

TASK [ansible-role-kubernetes : render nginx_ingress_config.yml] *************
skipping: [k8s-ubuntu-3]

TASK [ansible-role-kubernetes : Apply nginx ingress config] ******************
skipping: [k8s-ubuntu-3]

PLAY RECAP *******************************************************************
k8s-ubuntu-0               : ok=78   changed=24   unreachable=0    failed=0    skipped=25   rescued=0    ignored=3
k8s-ubuntu-1               : ok=52   changed=12   unreachable=0    failed=0    skipped=30   rescued=0    ignored=1
k8s-ubuntu-2               : ok=52   changed=12   unreachable=0    failed=0    skipped=30   rescued=0    ignored=1
k8s-ubuntu-3               : ok=58   changed=30   unreachable=0    failed=0    skipped=35   rescued=0    ignored=1
k8s-ubuntu-4               : ok=52   changed=28   unreachable=0    failed=0    skipped=27   rescued=0    ignored=1
k8s-ubuntu-5               : ok=52   changed=28   unreachable=0    failed=0    skipped=27   rescued=0    ignored=1
```
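The hosts-ubuntu.ini inventory used in the run above is not reproduced in this README. Based on the play output, a sketch might look like the following (the group names are assumptions; check the examples/ dir for the real layout):

```ini
; hosts-ubuntu.ini (sketch) -- three masters, three workers
[kubemaster]
k8s-ubuntu-0
k8s-ubuntu-1
k8s-ubuntu-2

[kubeworker]
k8s-ubuntu-3
k8s-ubuntu-4
k8s-ubuntu-5
```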

Now that we have a Kubernetes cluster deployed in highly available mode, we can check the status of the cluster:

```shell
root@k8s-ubuntu-0:~# kubectl get nodes
NAME           STATUS   ROLES           AGE    VERSION
k8s-ubuntu-0   Ready    control-plane   139m   v1.24.3
k8s-ubuntu-1   Ready    control-plane   136m   v1.24.3
k8s-ubuntu-2   Ready    control-plane   136m   v1.24.3
k8s-ubuntu-3   Ready    <none>          117m   v1.24.3
k8s-ubuntu-4   Ready    <none>          117m   v1.24.3
k8s-ubuntu-5   Ready    <none>          117m   v1.24.3
```

Check the pods status:

```shell
root@k8s-ubuntu-0:~# kubectl get pods --all-namespaces
NAMESPACE         NAME                                           READY   STATUS      RESTARTS       AGE
ingress-nginx     ingress-nginx-admission-create-tsc8p           0/1     Completed   0              135m
ingress-nginx     ingress-nginx-admission-patch-48tpn            0/1     Completed   0              135m
ingress-nginx     ingress-nginx-controller-6dc865cd86-kfq88      1/1     Running     0              135m
kube-flannel      kube-flannel-ds-fm4s6                          1/1     Running     0              117m
kube-flannel      kube-flannel-ds-hhvxx                          1/1     Running     0              117m
kube-flannel      kube-flannel-ds-ngdtc                          1/1     Running     0              117m
kube-flannel      kube-flannel-ds-q5ncb                          1/1     Running     0              136m
kube-flannel      kube-flannel-ds-vq4kk                          1/1     Running     0              139m
kube-flannel      kube-flannel-ds-zshpf                          1/1     Running     0              137m
kube-system       coredns-6d4b75cb6d-8dh9h                       1/1     Running     0              139m
kube-system       coredns-6d4b75cb6d-xq98k                       1/1     Running     0              139m
kube-system       etcd-k8s-ubuntu-0                              1/1     Running     0              139m
kube-system       etcd-k8s-ubuntu-1                              1/1     Running     0              136m
kube-system       etcd-k8s-ubuntu-2                              1/1     Running     0              136m
kube-system       kube-apiserver-k8s-ubuntu-0                    1/1     Running     0              139m
kube-system       kube-apiserver-k8s-ubuntu-1                    1/1     Running     0              135m
kube-system       kube-apiserver-k8s-ubuntu-2                    1/1     Running     0              136m
kube-system       kube-controller-manager-k8s-ubuntu-0           1/1     Running     0              139m
kube-system       kube-controller-manager-k8s-ubuntu-1           1/1     Running     0              136m
kube-system       kube-controller-manager-k8s-ubuntu-2           1/1     Running     0              135m
kube-system       kube-proxy-59jqx                               1/1     Running     0              136m
kube-system       kube-proxy-8mjwr                               1/1     Running     0              139m
kube-system       kube-proxy-8nhbw                               1/1     Running     0              117m
kube-system       kube-proxy-j2rrx                               1/1     Running     0              117m
kube-system       kube-proxy-qwd5r                               1/1     Running     0              117m
kube-system       kube-proxy-vcs7g                               1/1     Running     0              137m
kube-system       kube-scheduler-k8s-ubuntu-0                    1/1     Running     0              139m
kube-system       kube-scheduler-k8s-ubuntu-1                    1/1     Running     0              136m
kube-system       kube-scheduler-k8s-ubuntu-2                    1/1     Running     0              135m
kube-system       kube-vip-k8s-ubuntu-0                          1/1     Running     1 (136m ago)   139m
kube-system       kube-vip-k8s-ubuntu-1                          1/1     Running     0              136m
kube-system       kube-vip-k8s-ubuntu-2                          1/1     Running     0              136m
longhorn-system   csi-attacher-dcb85d774-jrggr                   1/1     Running     0              114m
longhorn-system   csi-attacher-dcb85d774-slhqt                   1/1     Running     0              114m
longhorn-system   csi-attacher-dcb85d774-xcbxn                   1/1     Running     0              114m
longhorn-system   csi-provisioner-5d8dd96b57-74x6h               1/1     Running     0              114m
longhorn-system   csi-provisioner-5d8dd96b57-kdzdf               1/1     Running     0              114m
longhorn-system   csi-provisioner-5d8dd96b57-xmpjf               1/1     Running     0              114m
longhorn-system   csi-resizer-7c5bb5fd65-4262v                   1/1     Running     0              114m
longhorn-system   csi-resizer-7c5bb5fd65-mfjgv                   1/1     Running     0              114m
longhorn-system   csi-resizer-7c5bb5fd65-qw944                   1/1     Running     0              114m
longhorn-system   csi-snapshotter-5586bc7c79-bs2xn               1/1     Running     0              114m
longhorn-system   csi-snapshotter-5586bc7c79-d927b               1/1     Running     0              114m
longhorn-system   csi-snapshotter-5586bc7c79-v99t6               1/1     Running     0              114m
longhorn-system   engine-image-ei-766a591b-hrs6g                 1/1     Running     0              114m
longhorn-system   engine-image-ei-766a591b-n9fsn                 1/1     Running     0              114m
longhorn-system   engine-image-ei-766a591b-vxhbb                 1/1     Running     0              114m
longhorn-system   instance-manager-e-3dba6914                    1/1     Running     0              114m
longhorn-system   instance-manager-e-7bd8b1ff                    1/1     Running     0              114m
longhorn-system   instance-manager-e-aca0fdc4                    1/1     Running     0              114m
longhorn-system   instance-manager-r-244c040c                    1/1     Running     0              114m
longhorn-system   instance-manager-r-39bd81b1                    1/1     Running     0              114m
longhorn-system   instance-manager-r-3b7f12b1                    1/1     Running     0              114m
longhorn-system   longhorn-admission-webhook-858d86b96b-j5rcv    1/1     Running     0              135m
longhorn-system   longhorn-admission-webhook-858d86b96b-lphkq    1/1     Running     0              135m
longhorn-system   longhorn-conversion-webhook-576b5c45c7-4p55x   1/1     Running     0              135m
longhorn-system   longhorn-conversion-webhook-576b5c45c7-lq686   1/1     Running     0              135m
longhorn-system   longhorn-csi-plugin-f7zmn                      2/2     Running     0              114m
longhorn-system   longhorn-csi-plugin-hs58p                      2/2     Running     0              114m
longhorn-system   longhorn-csi-plugin-wfpfs                      2/2     Running     0              114m
longhorn-system   longhorn-driver-deployer-96cf98c98-7hzft       1/1     Running     0              135m
longhorn-system   longhorn-manager-92xws                         1/1     Running     0              116m
longhorn-system   longhorn-manager-b6knm                         1/1     Running     0              116m
longhorn-system   longhorn-manager-tg2zc                         1/1     Running     0              116m
longhorn-system   longhorn-ui-86b56b95c8-ctbvf                   1/1     Running     0              135m
```

We can see Longhorn, the Nginx ingress controller, and all the kube-system pods.

We can also inspect the service of the nginx ingress controller:

```shell
root@k8s-ubuntu-0:~# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.111.203.177   <none>        80:30080/TCP,443:30443/TCP   136m
ingress-nginx-controller-admission   ClusterIP   10.105.11.11     <none>        443/TCP                      136m
```

We can see the Nginx ingress controller's listening ports: in this case the HTTP port is 30080 and the HTTPS port is 30443. From an external machine we can test the ingress controller:

```shell
lorenzo@mint-virtual:~$ curl -v http://192.168.25.110:30080
*   Trying 192.168.25.110:30080...
* TCP_NODELAY set
* Connected to 192.168.25.110 (192.168.25.110) port 30080 (#0)
> GET / HTTP/1.1
> Host: 192.168.25.110:30080
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 404 Not Found
< Date: Wed, 17 Aug 2022 12:26:17 GMT
< Content-Type: text/html
< Content-Length: 146
< Connection: keep-alive
<
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
* Connection #0 to host 192.168.25.110 left intact
```
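The 404 comes from the controller's default backend, which confirms the controller is reachable; no Ingress resources exist yet. Once a workload and Service are deployed, a hypothetical Ingress like the following (name, host, and backend Service are placeholders) would route traffic arriving on those NodePorts:

```yaml
# demo-ingress.yml -- illustrative Ingress routed by the installed controller
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
  namespace: default
spec:
  ingressClassName: nginx
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-svc
                port:
                  number: 80
```

After applying it, `curl -H 'Host: demo.example.com' http://192.168.25.110:30080` should reach the backend instead of returning 404.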
