
Quickly deploying a kubernetes cluster (1.13.1, HA) with kubeadm

2018/12/14 · docker, Cloud

The current version of kubeadm natively supports deploying clusters in HA mode, which makes it very easy to build an HA kubernetes cluster. This deployment is based on Ubuntu 16.04 and uses the latest Docker release 18.06.1; the guide applies to kubernetes 1.13.x, and 1.13.1 is used here.

1 Environment preparation

Eight machines were prepared for the installation and testing work:

IP            Name      Role              OS
172.16.2.1    Master01  Controller, etcd  Ubuntu16.04
172.16.2.2    Master02  Controller, etcd  Ubuntu16.04
172.16.2.3    Master03  Controller, etcd  Ubuntu16.04
172.16.2.11   Node01    Compute           Ubuntu16.04
172.16.2.12   Node02    Compute           Ubuntu16.04
172.16.2.13   Node03    Compute           Ubuntu16.04
172.16.2.251  Dns01     DNS               Ubuntu16.04
172.16.2.252  Dns02     DNS               Ubuntu16.04

2 Install Docker

apt update && apt install -y apt-transport-https software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt update
apt install docker-ce=18.06.1~ce~3-0~ubuntu
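If you want to keep apt from later upgrading Docker past the version validated for this kubernetes release, the package can be pinned and the installation verified. This is an optional extra, not part of the original steps:

# Optional: pin docker-ce so a routine apt upgrade does not move past 18.06.1.
apt-mark hold docker-ce

# Verify the daemon is up and running the expected version.
systemctl is-active docker
docker version --format 'client={{.Client.Version}} server={{.Server.Version}}'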

3 Install the etcd cluster

docker-compose is used for the installation here; of course, if that feels like too much trouble, you can simply use docker run directly.

docker-compose.yml for etcd on the Master01 node:

etcd:
  image: quay.io/coreos/etcd:v3.2.25
  command: etcd --name etcd-srv1 --data-dir=/var/etcd/calico-data --listen-client-urls http://0.0.0.0:2379 --advertise-client-urls http://172.16.2.1:2379,http://172.16.2.1:2380 --initial-advertise-peer-urls http://172.16.2.1:2380 --listen-peer-urls http://0.0.0.0:2380 -initial-cluster-token etcd-cluster -initial-cluster "etcd-srv1=http://172.16.2.1:2380,etcd-srv2=http://172.16.2.2:2380,etcd-srv3=http://172.16.2.3:2380" -initial-cluster-state new
  net: "bridge"
  ports:
    - "2379:2379"
    - "2380:2380"
  restart: always
  stdin_open: true
  tty: true
  volumes:
    - /store/etcd:/var/etcd

docker-compose.yml for etcd on the Master02 node:

etcd:
  image: quay.io/coreos/etcd:v3.2.25
  command: etcd --name etcd-srv2 --data-dir=/var/etcd/calico-data --listen-client-urls http://0.0.0.0:2379 --advertise-client-urls http://172.16.2.2:2379,http://172.16.2.2:2380 --initial-advertise-peer-urls http://172.16.2.2:2380 --listen-peer-urls http://0.0.0.0:2380 -initial-cluster-token etcd-cluster -initial-cluster "etcd-srv1=http://172.16.2.1:2380,etcd-srv2=http://172.16.2.2:2380,etcd-srv3=http://172.16.2.3:2380" -initial-cluster-state new
  net: "bridge"
  ports:
    - "2379:2379"
    - "2380:2380"
  restart: always
  stdin_open: true
  tty: true
  volumes:
    - /store/etcd:/var/etcd

docker-compose.yml for etcd on the Master03 node:

etcd:
  image: quay.io/coreos/etcd:v3.2.25
  command: etcd --name etcd-srv3 --data-dir=/var/etcd/calico-data --listen-client-urls http://0.0.0.0:2379 --advertise-client-urls http://172.16.2.3:2379,http://172.16.2.3:2380 --initial-advertise-peer-urls http://172.16.2.3:2380 --listen-peer-urls http://0.0.0.0:2380 -initial-cluster-token etcd-cluster -initial-cluster "etcd-srv1=http://172.16.2.1:2380,etcd-srv2=http://172.16.2.2:2380,etcd-srv3=http://172.16.2.3:2380" -initial-cluster-state new
  net: "bridge"
  ports:
    - "2379:2379"
    - "2380:2380"
  restart: always
  stdin_open: true
  tty: true
  volumes:
    - /store/etcd:/var/etcd

After creating the docker-compose.yml file on each node, deploy with docker-compose up -d.

For docker-compose usage, see the docker-compose installation documentation.
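Before moving on it is worth confirming that the three members actually formed a cluster. A minimal check via etcd's HTTP API, run from any host that can reach the client ports:

# Each endpoint should answer {"health": "true"} once the cluster has formed.
for ep in 172.16.2.1 172.16.2.2 172.16.2.3; do
  echo -n "$ep: "; curl -s http://$ep:2379/health; echo
done

# The member list should show etcd-srv1, etcd-srv2 and etcd-srv3.
curl -s http://172.16.2.1:2379/v2/members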

4 Install the Kubernetes tools

Install from the Aliyun mirror:

curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl ipvsadm ipset
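It can also help to pin these tools so apt does not upgrade them ahead of the cluster, and to confirm what was installed; again optional and not part of the original steps:

# Optional: hold kubelet/kubeadm/kubectl at the installed version.
apt-mark hold kubelet kubeadm kubectl

# Confirm the installed versions.
kubeadm version -o short
kubectl version --client --short
kubelet --version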

5 Enable the ipvs kernel modules

This setup uses ipvs as the forwarding mechanism for kube-proxy, which is much more efficient than iptables, so enable the ipvs modules:

modprobe ip_vs && modprobe ip_vs_rr && modprobe ip_vs_wrr && modprobe ip_vs_sh

The ipvs modules have to be reloaded after a reboot. To avoid that hassle, configure them to load at boot time (this needs to be done on every node):

root@master01:~# vi /etc/modules
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
ip_vs_rr
ip_vs_wrr
ip_vs_sh
ip_vs
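To double-check that the modules are loaded, and later that kube-proxy really programmed IPVS rules, something like the following can be used:

# All four ip_vs modules should appear here.
lsmod | grep ip_vs

# Once kube-proxy is running in ipvs mode, the virtual servers become visible:
ipvsadm -ln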

6 Load balancing for the kube-apiserver

Configure a load balancer in front of kube-apiserver; either DNS round-robin resolution or a Haproxy (Nginx) reverse proxy can be used.

This article uses DNS round-robin resolution as a simple form of load balancing, with DNS deployed on the Dns01 and Dns02 nodes.

1. Edit /etc/hosts and add the domain entries:

172.16.2.1 api.me
172.16.2.2 api.me
172.16.2.3 api.me

2. Deploy the dnsmasq service with docker-compose:

version:"3"services:dnsmasq:image:cloudnil/dnsmasq:2.76command:-q --log-facility=- --all-serversnetwork_mode:"host"cap_add:-NET_ADMINrestart:alwaysstdin_open:truetty:true

3. On every other node (all Masters and Nodes, i.e. everything except the dnsmasq hosts), configure DNS:

cat <<EOF >/etc/resolvconf/resolv.conf.d/basenameserver 172.16.2.251nameserver 172.16.2.252EOF

Remember to restart the resolvconf service:

/etc/init.d/resolvconf restart
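A quick way to verify the setup is to resolve api.me against each dnsmasq server and through the local resolver; each lookup should return the three master IPs:

nslookup api.me 172.16.2.251
nslookup api.me 172.16.2.252

# What the node's own resolver returns:
getent hosts api.me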

7 Install the master node

The kubeadm configuration file kubeadm-config.yaml:

apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
etcd:
  external:
    endpoints:
    - http://172.16.2.1:2379
    - http://172.16.2.2:2379
    - http://172.16.2.3:2379
networking:
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.68.0.0/16
kubernetesVersion: v1.13.1
controlPlaneEndpoint: api.me:6443
apiServer:
  certSANs:
  - api.me
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved:
  cpu: "0.25"
  memory: 128Mi
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
ipvs:
  minSyncPeriod: 1s
  # rr - round robin, wrr - weighted round robin, sh - source hashing
  scheduler: rr
  syncPeriod: 10s
mode: ipvs

Note: gcr.io is blocked from inside China, so the images cannot be pulled directly; thanks to Alibaba Cloud for providing the mirror repository registry.cn-hangzhou.aliyuncs.com/google_containers. Downloading the images takes a while; they can also be pulled in advance with: kubeadm config images pull --config kubeadm-config.yaml

Related images:

registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.13.1
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.13.1
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.13.1
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.13.1
registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.6

Initialization command on master01:

kubeadm init --config kubeadm-config.yaml

If the images have been pulled in advance, the installation takes about 30 seconds, with output like this:

[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
  [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate authority generation
[certs] External etcd mode: Skipping etcd/peer certificate authority generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate authority generation
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local api.me api.me] and IPs [10.96.0.1 172.16.2.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.509043 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master01" as an annotation
[mark-control-plane] Marking the node master01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: uhh19f.larqd6hbrknuqxg7
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join api.me:6443 --token uhj19f.laeqd6hbrkniqxg7 --discovery-token-ca-cert-hash sha256:aa570bd8126fa83b605540854f53c27840nc0ba7560ab6a6644ba75629194bea

PS: the token is generated with the command kubeadm token generate. If anything goes wrong during initialization, run kubeadm reset and retry. A token is valid for 24 hours; after that, create a new one with kubeadm token create. The value of discovery-token-ca-cert-hash can be generated with:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

Load the admin kubeconfig

Option 1:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Option 2:

export KUBECONFIG=/etc/kubernetes/admin.conf
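With the kubeconfig loaded, a few quick checks confirm that the control plane answers through the load-balanced endpoint (the node will stay NotReady until the pod network is installed in the next section):

kubectl cluster-info
kubectl get componentstatuses
kubectl get nodes -o wide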

8 Install the Calico network

There are many options for the network component; choose calico, weave or flannel according to your needs. Calico performs best; flannel's vxlan backend is also decent, but its default UDP backend performs poorly; weave's performance is comparatively poor, acceptable for a test environment but not recommended for production. Calico installation can also follow the official deployment guide: see the docs.

calico-rbac.yml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-kube-controllers
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-node
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-kube-controllers
rules:
  - apiGroups:
    - ""
    resources:
      - pods
      - nodes
      - namespaces
      - serviceaccounts
    verbs:
      - watch
      - list
  - apiGroups:
    - networking.k8s.io
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-kube-controllers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-kube-controllers
subjects:
- kind: ServiceAccount
  name: calico-kube-controllers
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-node
rules:
  - apiGroups: [""]
    resources:
      - pods
      - nodes
      - namespaces
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - endpoints
      - services
    verbs:
      - watch
      - list
  - apiGroups: [""]
    resources:
      - nodes/status
    verbs:
      - patch
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
- kind: ServiceAccount
  name: calico-node
  namespace: kube-system

calico.yml:

# Calico Version v3.4.0
# https://docs.projectcalico.org/v3.4/releases#v3.4.0
# This manifest includes the following component versions:
#   quay.io/calico/node:v3.4.0
#   quay.io/calico/cni:v3.4.0
#   quay.io/calico/kube-controllers:v3.4.0

# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Configure this with the location of your etcd cluster.
  etcd_endpoints: "http://172.16.2.1:2379,http://172.16.2.2:2379,http://172.16.2.3:2379"

  # If you're using TLS enabled etcd uncomment the following.
  # You must also populate the Secret below with these files.
  etcd_ca: ""   # "/calico-secrets/etcd-ca"
  etcd_cert: "" # "/calico-secrets/etcd-cert"
  etcd_key: ""  # "/calico-secrets/etcd-key"

  # Configure the Calico backend to use.
  calico_backend: "bird"

  # Configure the MTU to use
  veth_mtu: "1440"

  # The CNI network configuration to install on each node.  The special
  # values in this config will be automatically populated.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.0",
      "plugins": [
        {
          "type": "calico",
          "log_level": "info",
          "etcd_endpoints": "__ETCD_ENDPOINTS__",
          "etcd_key_file": "__ETCD_KEY_FILE__",
          "etcd_cert_file": "__ETCD_CERT_FILE__",
          "etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__",
          "mtu": __CNI_MTU__,
          "ipam": {
              "type": "calico-ipam"
          },
          "policy": {
              "type": "k8s"
          },
          "kubernetes": {
              "kubeconfig": "__KUBECONFIG_FILEPATH__"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        }
      ]
    }

---

# The following contains k8s Secrets for use with a TLS enabled etcd cluster.
# For information on populating Secrets, see http://kubernetes.io/docs/user-guide/secrets/
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: calico-etcd-secrets
  namespace: kube-system
data:
  # Populate the following with etcd TLS configuration if desired, but leave blank if
  # not using TLS for etcd.
  # The keys below should be uncommented and the values populated with the base64
  # encoded contents of each file that would be associated with the TLS data.
  # Example command for encoding a file contents: cat <file> | base64 -w 0
  # etcd-key: null
  # etcd-cert: null
  # etcd-ca: null

---

# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        # This, along with the CriticalAddonsOnly toleration below,
        # marks the pod as a critical add-on, ensuring it gets
        # priority scheduling and that its resources are reserved
        # if it ever gets evicted.
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
      hostNetwork: true
      tolerations:
        # Make sure calico-node gets scheduled on all nodes.
        - effect: NoSchedule
          operator: Exists
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      serviceAccountName: calico-node
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
      initContainers:
        # This container installs the Calico CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: quay.io/calico/cni:v3.4.0
          command: ["/install-cni.sh"]
          env:
            # Name of the CNI config file to create.
            - name: CNI_CONF_NAME
              value: "10-calico.conflist"
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # CNI MTU Config variable
            - name: CNI_MTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Prevents the container from sleeping forever.
            - name: SLEEP
              value: "false"
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
            - mountPath: /calico-secrets
              name: etcd-certs
      containers:
        # Runs calico/node container on each Kubernetes node.  This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: quay.io/calico/node:v3.4.0
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Set noderef for node controller.
            - name: CALICO_K8S_NODE_REF
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"
            # Set MTU for tunnel device used if ipip is enabled
            - name: FELIX_IPINIPMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within `--cluster-cidr`.
            - name: CALICO_IPV4POOL_CIDR
              value: "10.68.0.0/16"
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            - name: FELIX_HEALTHENABLED
              value: "true"
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          livenessProbe:
            httpGet:
              path: /liveness
              port: 9099
              host: localhost
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
              - /bin/calico-node
              - -bird-ready
              - -felix-ready
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /run/xtables.lock
              name: xtables-lock
              readOnly: false
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /var/lib/calico
              name: var-lib-calico
              readOnly: false
            - mountPath: /calico-secrets
              name: etcd-certs
      volumes:
        # Used by calico/node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        - name: var-lib-calico
          hostPath:
            path: /var/lib/calico
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
        # Mount in the etcd TLS secrets with mode 400.
        # See https://kubernetes.io/docs/concepts/configuration/secret/
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets
            defaultMode: 0400

---

# This manifest deploys the Calico Kubernetes controllers.
# See https://github.com/projectcalico/kube-controllers
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: calico-kube-controllers
  namespace: kube-system
  labels:
    k8s-app: calico-kube-controllers
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
  # The controllers can only have a single active instance.
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-kube-controllers
      namespace: kube-system
      labels:
        k8s-app: calico-kube-controllers
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
      # The controllers must run in the host network namespace so that
      # it isn't governed by policy that would prevent it from working.
      hostNetwork: true
      tolerations:
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      serviceAccountName: calico-kube-controllers
      containers:
        - name: calico-kube-controllers
          image: quay.io/calico/kube-controllers:v3.4.0
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Choose which controllers to run.
            - name: ENABLED_CONTROLLERS
              value: policy,namespace,serviceaccount,workloadendpoint,node
          volumeMounts:
            # Mount in the etcd TLS secrets.
            - mountPath: /calico-secrets
              name: etcd-certs
          readinessProbe:
            exec:
              command:
              - /usr/bin/check-status
              - -r
      volumes:
        # Mount in the etcd TLS secrets with mode 400.
        # See https://kubernetes.io/docs/concepts/configuration/secret/
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets
            defaultMode: 0400

Run:

kubectl apply -f calico-rbac.yml
kubectl apply -f calico.yml

Check that the node and its components are running:

root@master01:~# kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
master01   Ready    master   13m   v1.13.1
root@master01:~# kubectl get po -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-697d964cc4-p8jcn   1/1     Running   0          32s
calico-node-wg9l4                          1/1     Running   0          32s
coredns-89cc84847-l48q8                    1/1     Running   0          13m
coredns-89cc84847-mf5nr                    1/1     Running   0          13m
kube-apiserver-master01                    1/1     Running   0          12m
kube-controller-manager-master01           1/1     Running   0          12m
kube-proxy-9l287                           1/1     Running   0          13m
kube-scheduler-master01                    1/1     Running   0          12m

9 Install the Master02 and Master03 nodes

Copy the following files under /etc/kubernetes/pki to the corresponding directories on Master02 and Master03. .ssh/cloudnil.pem is a key used for convenient access between nodes; it can be generated with ssh-keygen (not covered in detail here, there are plenty of articles online), or you can simply use username/password authentication instead.

USER=root
CONTROL_PLANE_IPS="172.16.2.2 172.16.2.3"
for host in ${CONTROL_PLANE_IPS}; do
    scp -i .ssh/cloudnil.pem /etc/kubernetes/pki/ca.crt "${USER}"@$host:/etc/kubernetes/pki
    scp -i .ssh/cloudnil.pem /etc/kubernetes/pki/ca.key "${USER}"@$host:/etc/kubernetes/pki
    scp -i .ssh/cloudnil.pem /etc/kubernetes/pki/sa.key "${USER}"@$host:/etc/kubernetes/pki
    scp -i .ssh/cloudnil.pem /etc/kubernetes/pki/sa.pub "${USER}"@$host:/etc/kubernetes/pki
    scp -i .ssh/cloudnil.pem /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:/etc/kubernetes/pki
    scp -i .ssh/cloudnil.pem /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:/etc/kubernetes/pki
    scp -i .ssh/cloudnil.pem /etc/kubernetes/admin.conf "${USER}"@$host:/etc/kubernetes/admin.conf
done

Run the join command on master02 and master03:

kubeadm join api.me:6443 --token uhj19f.laeqd6hbrkniqxg7 --discovery-token-ca-cert-hash sha256:aa570bd8126fa83b605540854f53c27840nc0ba7560ab6a6644ba75629194bea --experimental-control-plane

Note: the only difference between joining the cluster as a Master and as a Node is the final flag: --experimental-control-plane

Check the status of the nodes and components again:

root@master01:~# kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
master01   Ready    master   25m     v1.13.1
master02   Ready    master   8m6s    v1.13.1
master03   Ready    master   7m33s   v1.13.1
root@master01:~# kubectl get po -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-697d964cc4-p8jcn   1/1     Running   0          12m
calico-node-lvnl8                          1/1     Running   0          7m35s
calico-node-p8h5z                          1/1     Running   0          8m9s
calico-node-wg9l4                          1/1     Running   0          12m
coredns-89cc84847-l48q8                    1/1     Running   0          25m
coredns-89cc84847-mf5nr                    1/1     Running   0          25m
kube-apiserver-master01                    1/1     Running   0          24m
kube-apiserver-master02                    1/1     Running   0          8m9s
kube-apiserver-master03                    1/1     Running   0          7m35s
kube-controller-manager-master01           1/1     Running   0          24m
kube-controller-manager-master02           1/1     Running   0          8m9s
kube-controller-manager-master03           1/1     Running   0          7m35s
kube-proxy-9l287                           1/1     Running   0          25m
kube-proxy-jmsfb                           1/1     Running   0          8m9s
kube-proxy-wzh62                           1/1     Running   0          7m35s
kube-scheduler-master01                    1/1     Running   0          24m
kube-scheduler-master02                    1/1     Running   0          8m9s
kube-scheduler-master03                    1/1     Running   0          7m35s

10 Install the worker (Node) nodes

With the master nodes in place, the worker nodes are straightforward; run the following on each Node:

kubeadm join api.me:6443 --token uhj19f.laeqd6hbrkniqxg7 --discovery-token-ca-cert-hash sha256:aa570bd8126fa83b605540854f53c27840nc0ba7560ab6a6644ba75629194bea
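Back on master01 the workers should register within a minute or so. Optionally they can be labelled so the ROLES column shows "node" instead of "<none>"; the label below is just a common convention, not something kubeadm sets automatically:

# Wait for the new nodes to appear and report Ready.
kubectl get nodes -o wide

# Optional cosmetic role label; repeat for node02 and node03.
kubectl label node node01 node-role.kubernetes.io/node=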

11 Deploy the DNS cluster

Delete the original single-instance coredns Deployment:

kubectl delete deploy coredns -n kube-system

Deploy a multi-replica coredns cluster; a reference coredns.yml follows (applied in the sketch after the manifest):

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: kube-dns
  name: coredns
  namespace: kube-system
spec:
  # The number of replicas can be adjusted to the cluster size.
  replicas: 3
  selector:
    matchLabels:
      k8s-app: kube-dns
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: k8s-app
                  operator: In
                  values:
                  - kube-dns
              topologyKey: kubernetes.io/hostname
      containers:
      - args:
        - -conf
        - /etc/coredns/Corefile
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.6
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/coredns
          name: config-volume
          readOnly: true
      dnsPolicy: Default
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: coredns
      serviceAccountName: coredns
      terminationGracePeriodSeconds: 30
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: Corefile
            path: Corefile
          name: coredns
        name: config-volume
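The manifest above only replaces the Deployment; apply it, check that the pod anti-affinity spreads the replicas across nodes, and test resolution from a throwaway pod (assuming the file was saved as coredns.yml):

kubectl apply -f coredns.yml

# The replicas should land on different nodes.
kubectl -n kube-system get po -l k8s-app=kube-dns -o wide

# In-cluster resolution sanity check.
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default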

12 Deploy Metrics-Server

Since kubernetes v1.11, collecting monitoring data through heapster is no longer supported; the new collection component is metrics-server, which is much lighter than heapster. It does not persist data, but it works well for querying real-time metrics.

Get the deployment manifests: click here

Download all the yaml files into a metrics-server directory and change metrics-server-deployment.yaml to the following:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: cloudnil/metrics-server-amd64:v0.3.1
        imagePullPolicy: Always
        command:
        - /metrics-server
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp

Run the deployment command:

kubectl apply -f metrics-server/

Check the monitoring data:

root@master01:~# kubectl top nodes
NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master01   153m         8%     1748Mi          46%
master02   108m         6%     1250Mi          33%
master03   91m          5%     1499Mi          40%
node01     256m         7%     1047Mi          13%
node02     196m         5%     976Mi           10%
node03     206m         5%     907Mi           12%

13 Deploy the Dashboard

Download kubernetes-dashboard.yaml:

curl -O https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

Modify the configuration and add an Ingress; once nginx-ingress is set up later, the dashboard can then be reached directly through a bound domain name.

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
# Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
# Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
# Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
        # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kube-system
spec:
  rules:
  - host: dashboard.cloudnil.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 443

For a quick test, forward a local port on Master01:

# Forward the Pod port directly to the local host
kubectl port-forward pod/kubernetes-dashboard-fc78cd558-thdrv --address 0.0.0.0 12345:9090
# Forward the Service port directly to the local host
kubectl port-forward svc/kubernetes-dashboard --address 0.0.0.0 12345:80

Access URL: http://172.16.2.1:12345

14 Exposing services to the public network

There are three ways to expose a kubernetes Service to the outside world:

  • LoadBalancer Service
  • NodePort Service
  • Ingress

A LoadBalancer Service is where kubernetes integrates deeply with a cloud platform: when a service is exposed this way, a load balancer is requested from the underlying cloud platform to expose it externally. Cloud support is fairly complete by now, for example GCE and DigitalOcean abroad, Alibaba Cloud in China, and private clouds such as OpenStack. Because it is tightly coupled to the cloud platform, it can only be used on those platforms.

A NodePort Service, as the name suggests, exposes a port on every node of the cluster and maps that port to a specific service. Although each node has plenty of ports (0~65535), for reasons of security and manageability (things get messy once there are many services, plus port conflicts) it is not used that much in practice.

Ingress exposes services externally through an open-source reverse proxy / load balancer such as nginx. Ingress can be understood as the piece that configures domain-based forwarding, similar to an upstream block in nginx. It is used together with an ingress controller: the controller watches for changes to pods and services and dynamically writes the forwarding rules from the Ingress objects into components such as nginx, apache or haproxy to implement reverse proxying and load balancing, as sketched below.
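As an illustration only (my-app and my-app.example.com are placeholders, not part of this deployment), an Ingress rule that forwards a host name to an existing Service looks like this; the dashboard-ingress defined earlier follows the same pattern:

# Hypothetical example: route my-app.example.com to a Service named my-app on port 80.
kubectl apply -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  namespace: default
spec:
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app
          servicePort: 80
EOF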

15 Deploy Nginx-ingress-controller

Nginx-ingress-controller is an official kubernetes docker image that bundles the Ingress controller together with Nginx.

In this deployment, Nginx-ingress is placed on master01, master02 and master03 and listens on port 80 of the hosts:

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      hostNetwork: true
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - master01
                - master02
                - master03
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app.kubernetes.io/name
                operator: In
                values:
                - ingress-nginx
            topologyKey: "kubernetes.io/hostname"
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.21.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            # - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources:
            limits:
              cpu: 1
              memory: 1024Mi
            requests:
              cpu: 0.25
              memory: 512Mi

After deploying Nginx-ingress-controller, point the DNS record for dashboard.cloudnil.com at the public IPs of master01, master02 and master03, and the dashboard can then be accessed at dashboard.cloudnil.com.
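Before changing public DNS, the routing can be checked from any machine that reaches the masters by sending the expected Host header; any response served by nginx (even an error page) shows the controller is listening on port 80 and has picked up the Ingress rule:

curl -v -H "Host: dashboard.cloudnil.com" http://172.16.2.1/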

Copyright notice: reposting is allowed; please credit the original source: http://cloudnil.com/2018/12/24/Deploy-kubernetes(1.13.1)-HA-with-kubeadm/.
