# vagrant-kubernetes-cluster
One-command installation of a Kubernetes cluster with Vagrant. Also installs Metrics Server, Kuboard, Kubernetes Dashboard, KubePi, and cluster monitoring via prometheus-operator.
Host environment:

- Vagrant: 2.2.18
- VirtualBox: 6.1.26

The VM network adapters are configured as shown in the repository screenshot: adapter 1 is NAT and adapter 2 is host-only.
Versions installed in the CentOS 7 environment:

- CentOS: 7
- containerd: 1.4.11
- Kubernetes: v1.22.2

Versions installed in the Ubuntu environment:

- Ubuntu: 20.04.2 LTS
- containerd: 1.5.5
- Kubernetes: v1.22.0
```
vagrant up
Bringing machine 'kmaster' up with 'virtualbox' provider...
Bringing machine 'kworker1' up with 'virtualbox' provider...
Bringing machine 'kworker2' up with 'virtualbox' provider...
==> kmaster: Importing base box 'generic/ubuntu2004'...
==> kmaster: Matching MAC address for NAT networking...
==> kmaster: Setting the name of the VM: kmaster
==> kmaster: Clearing any previously set network interfaces...
==> kmaster: Preparing network interfaces based on configuration...
    kmaster: Adapter 1: nat
    kmaster: Adapter 2: hostonly
==> kmaster: Forwarding ports...
    kmaster: 22 (guest) => 2222 (host) (adapter 1)
==> kmaster: Running 'pre-boot' VM customizations...
==> kmaster: Booting VM...
==> kmaster: Waiting for machine to boot. This may take a few minutes...
    kmaster: SSH address: 127.0.0.1:2222
    kmaster: SSH username: vagrant
    kmaster: SSH auth method: private key
    kmaster:
    kmaster: Vagrant insecure key detected. Vagrant will automatically replace
    kmaster: this with a newly generated keypair for better security.
    kmaster:
    kmaster: Inserting generated public key within guest...
    kmaster: Removing insecure key from the guest if it's present...
    kmaster: Key inserted! Disconnecting and reconnecting using new SSH key...
==> kmaster: Machine booted and ready!
==> kmaster: Checking for guest additions in VM...
==> kmaster: Setting hostname...
==> kmaster: Configuring and enabling network interfaces...
==> kmaster: Mounting shared folders...
    kmaster: /vagrant => D:/Vagrant/kubernetes-cluster
==> kmaster: Running provisioner: shell...
    kmaster: Running: C:/Users/swfeng/AppData/Local/Temp/vagrant-shell20211012-49908-1qfj4jz.sh
    kmaster: [TASK 0] Setting TimeZone
    kmaster: [TASK 1] Setting DNS
    kmaster: [TASK 2] Setting Ubuntu System Mirrors
    kmaster: [TASK 3] Disable and turn off SWAP
    kmaster: [TASK 4] Stop and Disable firewall
    kmaster: [TASK 5] Enable and Load Kernel modules
    kmaster: [TASK 6] Add Kernel settings
    kmaster: [TASK 7] Install containerd runtime
    kmaster: [TASK 8] Add apt repo for kubernetes
    kmaster: Warning: apt-key output should not be parsed (stdout is not a terminal)
    kmaster: OK
    kmaster: [TASK 9] Install Kubernetes components (kubeadm, kubelet and kubectl)
    kmaster: [TASK 10] Enable ssh password authentication
    kmaster: [TASK 11] Set root password
    kmaster: [TASK 12] Update /etc/hosts file
==> kmaster: Running provisioner: shell...
    kmaster: Running: C:/Users/swfeng/AppData/Local/Temp/vagrant-shell20211012-49908-11nj6h4.sh
    kmaster: [TASK 1] Pull required containers
    kmaster: [TASK 2] Initialize Kubernetes Cluster
    kmaster: [TASK 3] Deploy Calico network
    kmaster: [TASK 4] Generate and save cluster join command to /joincluster.sh
==> kworker1: Importing base box 'generic/ubuntu2004'...
==> kworker1: Matching MAC address for NAT networking...
==> kworker1: Setting the name of the VM: kworker1
==> kworker1: Fixed port collision for 22 => 2222. Now on port 2200.
==> kworker1: Clearing any previously set network interfaces...
==> kworker1: Preparing network interfaces based on configuration...
    kworker1: Adapter 1: nat
    kworker1: Adapter 2: hostonly
==> kworker1: Forwarding ports...
    kworker1: 22 (guest) => 2200 (host) (adapter 1)
==> kworker1: Running 'pre-boot' VM customizations...
==> kworker1: Booting VM...
==> kworker1: Waiting for machine to boot. This may take a few minutes...
    kworker1: SSH address: 127.0.0.1:2200
    kworker1: SSH username: vagrant
    kworker1: SSH auth method: private key
    kworker1:
    kworker1: Vagrant insecure key detected. Vagrant will automatically replace
    kworker1: this with a newly generated keypair for better security.
    kworker1:
    kworker1: Inserting generated public key within guest...
    kworker1: Removing insecure key from the guest if it's present...
    kworker1: Key inserted! Disconnecting and reconnecting using new SSH key...
==> kworker1: Machine booted and ready!
==> kworker1: Checking for guest additions in VM...
==> kworker1: Setting hostname...
==> kworker1: Configuring and enabling network interfaces...
==> kworker1: Mounting shared folders...
    kworker1: /vagrant => D:/Vagrant/kubernetes-cluster
==> kworker1: Running provisioner: shell...
    kworker1: Running: C:/Users/swfeng/AppData/Local/Temp/vagrant-shell20211012-49908-6qmkd4.sh
    kworker1: [TASK 0] Setting TimeZone
    kworker1: [TASK 1] Setting DNS
    kworker1: [TASK 2] Setting Ubuntu System Mirrors
    kworker1: [TASK 3] Disable and turn off SWAP
    kworker1: [TASK 4] Stop and Disable firewall
    kworker1: [TASK 5] Enable and Load Kernel modules
    kworker1: [TASK 6] Add Kernel settings
    kworker1: [TASK 7] Install containerd runtime
    kworker1: [TASK 8] Add apt repo for kubernetes
    kworker1: Warning: apt-key output should not be parsed (stdout is not a terminal)
    kworker1: OK
    kworker1: [TASK 9] Install Kubernetes components (kubeadm, kubelet and kubectl)
    kworker1: [TASK 10] Enable ssh password authentication
    kworker1: [TASK 11] Set root password
    kworker1: [TASK 12] Update /etc/hosts file
==> kworker1: Running provisioner: shell...
    kworker1: Running: C:/Users/swfeng/AppData/Local/Temp/vagrant-shell20211012-49908-vmdbxa.sh
    kworker1: [TASK 1] Join node to Kubernetes Cluster
==> kworker2: Importing base box 'generic/ubuntu2004'...
==> kworker2: Matching MAC address for NAT networking...
==> kworker2: Setting the name of the VM: kworker2
==> kworker2: Fixed port collision for 22 => 2222. Now on port 2201.
==> kworker2: Clearing any previously set network interfaces...
==> kworker2: Preparing network interfaces based on configuration...
    kworker2: Adapter 1: nat
    kworker2: Adapter 2: hostonly
==> kworker2: Forwarding ports...
    kworker2: 22 (guest) => 2201 (host) (adapter 1)
==> kworker2: Running 'pre-boot' VM customizations...
==> kworker2: Booting VM...
==> kworker2: Waiting for machine to boot. This may take a few minutes...
    kworker2: SSH address: 127.0.0.1:2201
    kworker2: SSH username: vagrant
    kworker2: SSH auth method: private key
    kworker2:
    kworker2: Vagrant insecure key detected. Vagrant will automatically replace
    kworker2: this with a newly generated keypair for better security.
    kworker2:
    kworker2: Inserting generated public key within guest...
    kworker2: Removing insecure key from the guest if it's present...
    kworker2: Key inserted! Disconnecting and reconnecting using new SSH key...
==> kworker2: Machine booted and ready!
==> kworker2: Checking for guest additions in VM...
==> kworker2: Setting hostname...
==> kworker2: Configuring and enabling network interfaces...
==> kworker2: Mounting shared folders...
    kworker2: /vagrant => D:/Vagrant/kubernetes-cluster
==> kworker2: Running provisioner: shell...
    kworker2: Running: C:/Users/swfeng/AppData/Local/Temp/vagrant-shell20211012-49908-1s6ys4c.sh
    kworker2: [TASK 0] Setting TimeZone
    kworker2: [TASK 1] Setting DNS
    kworker2: [TASK 2] Setting Ubuntu System Mirrors
    kworker2: [TASK 3] Disable and turn off SWAP
    kworker2: [TASK 4] Stop and Disable firewall
    kworker2: [TASK 5] Enable and Load Kernel modules
    kworker2: [TASK 6] Add Kernel settings
    kworker2: [TASK 7] Install containerd runtime
    kworker2: [TASK 8] Add apt repo for kubernetes
    kworker2: Warning: apt-key output should not be parsed (stdout is not a terminal)
    kworker2: OK
    kworker2: [TASK 9] Install Kubernetes components (kubeadm, kubelet and kubectl)
    kworker2: [TASK 10] Enable ssh password authentication
    kworker2: [TASK 11] Set root password
    kworker2: [TASK 12] Update /etc/hosts file
==> kworker2: Running provisioner: shell...
    kworker2: Running: C:/Users/swfeng/AppData/Local/Temp/vagrant-shell20211012-49908-1qxwo1n.sh
    kworker2: [TASK 1] Join node to Kubernetes Cluster
```
After installation, the three machines have the following IPs:
| Machine | IP |
|---|---|
| kmaster | 192.168.56.100 |
| kworker1 | 192.168.56.101 |
| kworker2 | 192.168.56.102 |
The root password is `kubeadmin`.
```
root@kmaster:~# mkdir -p $HOME/.kube
root@kmaster:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@kmaster:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
Cluster status:
```
root@kmaster:~# kubectl cluster-info
Kubernetes control plane is running at https://kmaster.k8s.com:6443
CoreDNS is running at https://kmaster.k8s.com:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
```
```
root@kmaster:~# kubectl get node,po,svc -A -owide
Every 2.0s: kubectl get node,po,svc -A -owide          kmaster: Tue Oct 12 13:53:57 2021

NAME            STATUS   ROLES                  AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
node/kmaster    Ready    control-plane,master   20m     v1.22.0   192.168.56.100   <none>        Ubuntu 20.04.2 LTS   5.4.0-77-generic   containerd://1.5.5
node/kworker1   Ready    <none>                 9m40s   v1.22.0   192.168.56.101   <none>        Ubuntu 20.04.2 LTS   5.4.0-77-generic   containerd://1.5.5
node/kworker2   Ready    <none>                 7m35s   v1.22.0   192.168.56.102   <none>        Ubuntu 20.04.2 LTS   5.4.0-77-generic   containerd://1.5.5

NAMESPACE     NAME                                           READY   STATUS    RESTARTS   AGE     IP               NODE       NOMINATED NODE   READINESS GATES
kube-system   pod/calico-kube-controllers-7659fb8886-dwvc4   1/1     Running   0          20m     192.168.189.2    kmaster    <none>           <none>
kube-system   pod/calico-node-2w8x5                          1/1     Running   0          20m     192.168.56.100   kmaster    <none>           <none>
kube-system   pod/calico-node-vqjsc                          1/1     Running   0          7m35s   192.168.56.102   kworker2   <none>           <none>
kube-system   pod/calico-node-zj98h                          1/1     Running   0          9m40s   192.168.56.101   kworker1   <none>           <none>
kube-system   pod/coredns-7568f67dbd-4jssz                   1/1     Running   0          20m     192.168.189.3    kmaster    <none>           <none>
kube-system   pod/coredns-7568f67dbd-vn8ph                   1/1     Running   0          20m     192.168.189.1    kmaster    <none>           <none>
kube-system   pod/etcd-kmaster                               1/1     Running   0          20m     192.168.56.100   kmaster    <none>           <none>
kube-system   pod/kube-apiserver-kmaster                     1/1     Running   0          20m     192.168.56.100   kmaster    <none>           <none>
kube-system   pod/kube-controller-manager-kmaster            1/1     Running   0          20m     192.168.56.100   kmaster    <none>           <none>
kube-system   pod/kube-proxy-2sqmm                           1/1     Running   0          7m35s   192.168.56.102   kworker2   <none>           <none>
kube-system   pod/kube-proxy-8z758                           1/1     Running   0          20m     192.168.56.100   kmaster    <none>           <none>
kube-system   pod/kube-proxy-brgl8                           1/1     Running   0          9m40s   192.168.56.101   kworker1   <none>           <none>
kube-system   pod/kube-scheduler-kmaster                     1/1     Running   0          20m     192.168.56.100   kmaster    <none>           <none>

NAMESPACE     NAME                   TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE   SELECTOR
default       service/kubernetes     ClusterIP   10.96.0.1    <none>        443/TCP                  20m   <none>
kube-system   service/kube-dns       ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   20m   k8s-app=kube-dns
```
```
root@kmaster:/vagrant/metrics# kubectl apply -f metrics.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
```
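Once the metrics-server pod is Running, the resource metrics API can be spot-checked (a quick sanity check, assuming the first scrape cycle has completed; these commands require a live cluster):

```shell
# Node-level CPU and memory usage served by metrics-server
kubectl top nodes

# Pod-level usage across all namespaces
kubectl top pods -A
```

If these commands return usage tables instead of an error, metrics-server is wired into the aggregation layer correctly.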
```
root@kmaster:~# kubectl apply -f https://addons.kuboard.cn/kuboard/kuboard-v3.yaml
namespace/kuboard created
configmap/kuboard-v3-config created
serviceaccount/kuboard-boostrap created
clusterrolebinding.rbac.authorization.k8s.io/kuboard-boostrap-crb created
daemonset.apps/kuboard-etcd created
deployment.apps/kuboard-v3 created
service/kuboard-v3 created
```
Access Kuboard at http://192.168.56.100:30080

Username: `admin`
Password: `Kuboard123`
```
root@kmaster:/vagrant/kubernetes-dashboard# kubectl apply -f kubernetes-dashboard.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
Warning: spec.template.metadata.annotations[seccomp.security.alpha.kubernetes.io/pod]: deprecated since v1.19; use the "seccompProfile" field instead
deployment.apps/dashboard-metrics-scraper created
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

# After running the command below, manually change type: ClusterIP to type: NodePort
root@kmaster:~# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

# Check the services and note the exposed NodePort
root@kmaster:~# kubectl get svc -A | grep kubernetes-dashboard
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.111.109.182   <none>   8000/TCP        2m53s
kubernetes-dashboard   kubernetes-dashboard        NodePort    10.97.250.165    <none>   443:31825/TCP   2m53s

# Get the access token
root@kmaster:~# kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
eyJhbGciOiJSUzI1NiIsImtpZCI6Ik9BODl1TGtTRjUzWUl4dnJKUHdpYnB1V0RIZGpxNkxoT2VMWEEzNW1yVk0ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXdtN3hqIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIzNzAzOGNhZC1jYjE2LTQ3ZjAtYTIxZS1hODNlNjhjYjA4ZGMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.iPxLZnueJz9y2ngFTtgEuZ36Ae0QLK2oFXEBXinYcsM5712_sw3iyYODB9Eyu9AzscMDin-jL4ssctl6dQt-3PD6vdrLjSWAlDNK_PXXYlnFCTehrcFjZNGWv3yM7e5dfUOqmrl0ROwYEKFtF93sQAYPtXHZUqDnQOQ15VE-NVd7RyCgHHNtCiV_UeDrRg7M0YBvPtL24w35MaaKyeLIs_YWZpNgjV3zNfdl86Lo3SEoU0_nVAqwZzBroUxrE6ekBDGisWvQ6NtrEZLRTgk2izPCUiT3XOj4bENwf3Ba1bCKGvIzmWx41KIVdNamN_c1YOiY1HL__1ryKwMad4JR-w
```
Access the Kubernetes Dashboard at https://192.168.56.100:31825
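The secret-based token lookup above relies on auto-created ServiceAccount token secrets, which is fine on the Kubernetes 1.22 cluster built here. On clusters running v1.24 or later those secrets are no longer created automatically, and a token would instead be requested directly (shown only as a forward-compatibility note):

```shell
# On Kubernetes >= 1.24: request a short-lived bearer token for the admin-user ServiceAccount
kubectl -n kubernetes-dashboard create token admin-user
```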
```
Every 2.0s: kubectl get node,po,svc -A -owide          kmaster: Tue Oct 12 14:08:09 2021

NAME            STATUS   ROLES                  AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
node/kmaster    Ready    control-plane,master   35m   v1.22.0   192.168.56.100   <none>        Ubuntu 20.04.2 LTS   5.4.0-77-generic   containerd://1.5.5
node/kworker1   Ready    <none>                 23m   v1.22.0   192.168.56.101   <none>        Ubuntu 20.04.2 LTS   5.4.0-77-generic   containerd://1.5.5
node/kworker2   Ready    <none>                 21m   v1.22.0   192.168.56.102   <none>        Ubuntu 20.04.2 LTS   5.4.0-77-generic   containerd://1.5.5

NAMESPACE              NAME                                             READY   STATUS    RESTARTS        AGE     IP               NODE       NOMINATED NODE   READINESS GATES
kube-system            pod/calico-kube-controllers-7659fb8886-dwvc4     1/1     Running   0               34m     192.168.189.2    kmaster    <none>           <none>
kube-system            pod/calico-node-2w8x5                            1/1     Running   0               34m     192.168.56.100   kmaster    <none>           <none>
kube-system            pod/calico-node-vqjsc                            1/1     Running   0               21m     192.168.56.102   kworker2   <none>           <none>
kube-system            pod/calico-node-zj98h                            1/1     Running   0               23m     192.168.56.101   kworker1   <none>           <none>
kube-system            pod/coredns-7568f67dbd-4jssz                     1/1     Running   0               34m     192.168.189.3    kmaster    <none>           <none>
kube-system            pod/coredns-7568f67dbd-vn8ph                     1/1     Running   0               34m     192.168.189.1    kmaster    <none>           <none>
kube-system            pod/etcd-kmaster                                 1/1     Running   0               34m     192.168.56.100   kmaster    <none>           <none>
kube-system            pod/kube-apiserver-kmaster                       1/1     Running   0               35m     192.168.56.100   kmaster    <none>           <none>
kube-system            pod/kube-controller-manager-kmaster              1/1     Running   0               34m     192.168.56.100   kmaster    <none>           <none>
kube-system            pod/kube-proxy-2sqmm                             1/1     Running   0               21m     192.168.56.102   kworker2   <none>           <none>
kube-system            pod/kube-proxy-8z758                             1/1     Running   0               34m     192.168.56.100   kmaster    <none>           <none>
kube-system            pod/kube-proxy-brgl8                             1/1     Running   0               23m     192.168.56.101   kworker1   <none>           <none>
kube-system            pod/kube-scheduler-kmaster                       1/1     Running   0               35m     192.168.56.100   kmaster    <none>           <none>
kube-system            pod/metrics-server-9577d976b-xzrgt               1/1     Running   0               9m27s   192.168.41.129   kworker1   <none>           <none>
kubernetes-dashboard   pod/dashboard-metrics-scraper-856586f554-kdgtw   1/1     Running   0               6m57s   192.168.41.130   kworker1   <none>           <none>
kubernetes-dashboard   pod/kubernetes-dashboard-67484c44f6-lbp5l        1/1     Running   0               6m57s   192.168.77.129   kworker2   <none>           <none>
kuboard                pod/kuboard-agent-2-767f88b647-pr7br             1/1     Running   1 (5m57s ago)   6m26s   192.168.189.5    kmaster    <none>           <none>
kuboard                pod/kuboard-agent-656c95877f-g968n               1/1     Running   1 (5m37s ago)   6m26s   192.168.189.6    kmaster    <none>           <none>
kuboard                pod/kuboard-etcd-th9nq                           1/1     Running   0               8m39s   192.168.56.100   kmaster    <none>           <none>
kuboard                pod/kuboard-questdb-68d5bfb5b-2tnwf              1/1     Running   0               6m26s   192.168.189.7    kmaster    <none>           <none>
kuboard                pod/kuboard-v3-5fc46b5557-44hlj                  1/1     Running   0               8m39s   192.168.189.4    kmaster    <none>           <none>
```
KubePi installation docs: https://kubeoperator.io/docs/kubepi/install/
```
kubectl apply -f https://raw.githubusercontent.com/KubeOperator/KubePi/master/docs/deploy/kubectl/kubepi.yaml
```
Get the access address:
```
# Get the node IP
export NODE_IP=$(kubectl get nodes -o jsonpath="{.items[0].status.addresses[0].address}")
# Get the NodePort
export NODE_PORT=$(kubectl -n kube-system get services kubepi -o jsonpath="{.spec.ports[0].nodePort}")
# Print the address
echo http://$NODE_IP:$NODE_PORT
```
Log in:

- Address: http://$NODE_IP:$NODE_PORT
- Username: `admin`
- Password: `kubepi`

To import the cluster, get a token:
```
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
```
The setups below require the VM configuration to be increased to at least 4 CPU cores and 8 GB of RAM.
Install the NFS file system
```
# On every machine
yum install -y nfs-utils

# On kmaster (192.168.56.100)
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports

# Create the shared directory and start the NFS service
mkdir -p /nfs/data

# On the master
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server

# Apply the exports configuration
exportfs -r
# Verify the configuration took effect
exportfs
```
```
showmount -e 192.168.56.100
mkdir -p /nfs/data
mount -t nfs 192.168.56.100:/nfs/data /nfs/data
```
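After mounting, a quick read/write round trip confirms the share works end to end (the file name `nfs-test.txt` is made up for this check; these commands assume the mount above succeeded):

```shell
# On the NFS server (kmaster): write a marker file into the export
echo "hello from kmaster" > /nfs/data/nfs-test.txt

# On a node that mounted the share: the same file should be readable
cat /nfs/data/nfs-test.txt

# Clean up the marker file
rm /nfs/data/nfs-test.txt
```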
Configure a default storage class for dynamic provisioning
```yaml
## Create a storage class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"  ## Whether to archive the PV's contents when the PV is deleted
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: docker.io/v5cn/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #   limits:
          #     cpu: 10m
          #   requests:
          #     cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.56.100  ## Address of your NFS server
            - name: NFS_PATH
              value: /nfs/data  ## Directory shared by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.56.100
            path: /nfs/data
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
```
```
kubectl get sc
```
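To confirm dynamic provisioning actually works, a throwaway PVC can be created against the default class (the name `nfs-test-pvc` is made up for this check; no `storageClassName` is needed because `nfs-storage` is marked as the default):

```shell
# Create a small test PVC; the default storage class should provision a PV for it
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
EOF

# The PVC should reach the Bound state once the provisioner creates a PV
kubectl get pvc nfs-test-pvc

# Clean up; with archiveOnDelete: "true" the PV's data is archived on the NFS share
kubectl delete pvc nfs-test-pvc
```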
KubeSphere does not yet support Kubernetes 1.22; this section will be added later...
```
kubectl cluster-info
```
```
git clone https://github.com/prometheus-operator/kube-prometheus.git
cd kube-prometheus
```

Because many images referenced in the upstream manifests cannot be pulled, apply the modified configuration files instead (the kube-prometheus directory in this repository).
```
kubectl apply -f manifests/setup
kubectl get ns monitoring
kubectl get pods -n monitoring
kubectl apply -f manifests/
kubectl get pods,svc -n monitoring
```
Prometheus:

```
kubectl --namespace monitoring patch svc prometheus-k8s -p '{"spec": {"type": "NodePort"}}'
```

Alertmanager:

```
kubectl --namespace monitoring patch svc alertmanager-main -p '{"spec": {"type": "NodePort"}}'
```

Grafana:

```
kubectl --namespace monitoring patch svc grafana -p '{"spec": {"type": "NodePort"}}'
```

```
$ kubectl -n monitoring get svc | grep NodePort
alertmanager-main   NodePort   10.96.212.116   <none>   9093:30496/TCP,8080:30519/TCP   7m53s
grafana             NodePort   10.96.216.187   <none>   3000:31045/TCP                  7m50s
prometheus-k8s      NodePort   10.96.180.95    <none>   9090:30253/TCP,8080:30023/TCP   7m44s
```
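With the services switched to NodePort, the dashboard URLs can be assembled from any node IP plus the assigned port (the assigned ports differ per cluster, so a small loop that reads them back is more reliable than hard-coding; `NODE_IP` here assumes access via the kmaster host-only address):

```shell
# Assumption: the cluster is reached via kmaster's host-only IP
NODE_IP=192.168.56.100

for svc in grafana prometheus-k8s alertmanager-main; do
  # Extract the first NodePort assigned to each service
  port=$(kubectl -n monitoring get svc "$svc" -o jsonpath='{.spec.ports[0].nodePort}')
  echo "$svc => http://$NODE_IP:$port"
done
```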
Access the Grafana dashboard:

Username: `admin`
Password: `admin`

Access the Prometheus dashboard.

Access the Alertmanager dashboard.
```
kubectl delete --ignore-not-found=true -f manifests/ -f manifests/setup
```
Reference: https://computingforgeeks.com/setup-prometheus-and-grafana-on-kubernetes










