Summary
In a Kubernetes cluster, a single machine can run both the Kubernetes Control Plane (Master) Node and a Worker Node, so that one machine takes on both roles. This article describes how to configure a Kubernetes Control Plane (Master) Node so that it also functions as a Worker Node.
Problem
Running the join command:

kubeadm join 192.168.238.100:4300 --token si5oek.mbrw418p8mr357qt --discovery-token-ca-cert-hash sha256:0e23eb637e09afc4c6dbb1f891409b314d5731e46fe33d84793ba2d58da006d6

returns an error similar to the following. The same symptom is described in the Kubernetes issue "k8s-ha how to join worker node to master node, when master and worker node are in one machine #2219": "deploy k8s-ha, when join worker node to master, which master and worker node are in one machine, return this error:"
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR DirAvailable--etc-kubernetes-manifests]: /etc/kubernetes/manifests is not empty
[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
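These preflight failures all point to the same thing: the machine already runs a kubelet and already holds the control-plane configuration, certificates, and static Pod manifests, i.e. it is already a member of the cluster (as the node list in the Solution section confirms). Purely as an illustration, assuming the standard kubeadm paths, each reported error can be verified directly on the node:

ls /etc/kubernetes/manifests/          # static Pod manifests already present
ls -l /etc/kubernetes/kubelet.conf     # kubelet already configured for this cluster
ss -lntp | grep 10250                  # kubelet already listening on port 10250
ls -l /etc/kubernetes/pki/ca.crt       # cluster CA certificate already in place

So the machine does not need to be joined again; what it needs is to be allowed to run workloads, which is what the solution below does.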
The versions of kubectl and kubeadm are as follows:

[root@Master ~]# kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.27.2
Kustomize Version: v5.0.1
Server Version: v1.27.7
[root@Master ~]#
[root@Master ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.3", GitCommit:"25b4e43193bcda6c7328a6d147b1fb73a33f1598", GitTreeState:"clean", BuildDate:"2023-06-14T09:52:26Z", GoVersion:"go1.20.5", Compiler:"gc", Platform:"linux/amd64"}
[root@Master ~]#
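As the deprecation notice above says, the --short output will become the default in later kubectl releases, so the forward-compatible way to print the same client and server versions is simply:

kubectl version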
Solution
By default, the Kubernetes Control Plane (Master) Node is set up so that pods cannot be scheduled on it, because Control Plane nodes carry the following NoSchedule taint:

[root@Master ~]# kubectl get nodes --selector='node-role.kubernetes.io/control-plane'
NAME STATUS ROLES AGE VERSION
master Ready control-plane 20h v1.27.3
node1 Ready control-plane 19h v1.27.3
node2 Ready control-plane 19h v1.27.3
[root@Master ~]#
[root@Master ~]# kubectl describe node master | grep Taint
Taints: node-role.kubernetes.io/control-plane:NoSchedule
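To see the taints on all Control Plane nodes at once instead of describing each node separately, a custom-columns query can be used (this is just an inspection convenience, not part of the fix):

kubectl get nodes --selector='node-role.kubernetes.io/control-plane' -o custom-columns='NAME:.metadata.name,TAINT-KEYS:.spec.taints[*].key,TAINT-EFFECTS:.spec.taints[*].effect'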
The NoSchedule effect means that no new pods will be scheduled onto the node unless they have a matching toleration; pods already running on the node are not evicted.
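This definition also points to an alternative that does not touch the node at all: give a specific workload a matching toleration, so that only that workload may be scheduled onto the Control Plane nodes while the taint stays in place. A minimal sketch, where the Pod name and image are placeholders:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo            # placeholder Pod name
spec:
  containers:
  - name: demo
    image: nginx                   # placeholder image
  tolerations:
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Exists"
    effect: "NoSchedule"
EOF

The steps below instead remove the taint entirely, which is the simpler route for a test cluster.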
Removing this NoSchedule taint solves the problem. Proceed as follows (using the Master node as an example; do the same on the other Control Plane nodes):

[root@Master ~]# kubectl taint node master node-role.kubernetes.io/control-plane:NoSchedule-
node/master untainted
Note the trailing "-" (hyphen) at the end of the command above; it indicates that the taint is being removed.
Check and confirm that the taint has been removed:

[root@Master ~]# kubectl describe node node2 | grep Taint
Taints: <none>
[root@Master ~]#
You can use the following script to remove the taint from all three nodes at once:

for node in $(kubectl get nodes --selector='node-role.kubernetes.io/control-plane' | awk 'NR>1 {print $1}' ) ; do kubectl taint node $node node-role.kubernetes.io/control-plane- ; done

Note: the above makes the Kubernetes Control Plane (Master) Node take on the Worker Node role for testing purposes only. It is generally not recommended, because the Control Plane (Master) Node is a critical component that manages the entire cluster, including scheduling cluster tasks and workloads and monitoring the state of nodes and containers. Letting the Control Plane (Master) Node also act as a Worker Node has negative effects, such as consuming its resources, increasing latency, and reducing stability. Finally, there is also a security risk.
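If you later want to restore the default behaviour, the taint can be re-added with the same kind of loop; the command is the same as the removal except that it ends with :NoSchedule and has no trailing "-":

for node in $(kubectl get nodes --selector='node-role.kubernetes.io/control-plane' | awk 'NR>1 {print $1}' ) ; do kubectl taint node $node node-role.kubernetes.io/control-plane:NoSchedule ; done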
References
stackoverflow: Node had taints that the pod didn’t tolerate error when deploying to Kubernetes cluster
stackoverflow: Should I run “join” or “taint” after “kubeadm init”?
stackoverflow: Master tainted – no pods can be deployed
51CTO: Step-by-step instructions for kubectl taint nodes --all node-role.kubernetes.io/master-
Huawei Cloud: Managing Node Taints
Scheduling workloads on control plane nodes in kubernetes – a bad idea?