Issue:
I'd like to re-use a node that was removed from the cluster.
(Yes, I know the usual mindset says to just delete it and create a new one. However, I'm not in control of the nodes.)
Solution:
Note:
This assumes the worker has already been drained with something similar to:
$ kubectl drain <oldworkername> --delete-local-data --force --ignore-daemonsets
$ kubectl delete node <oldworkername>
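Before resetting anything, it can help to confirm on a master that the node really is gone (a quick sanity check, assuming kubectl is configured there):

```shell
# Run on a master node. The drained worker should no longer be listed;
# if it still appears, the delete above did not go through.
kubectl get nodes -o wide
```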
Perform the following:
Part A - Reset the name of the worker:
$ sudo vi /etc/hostname
$ sudo vi /etc/hosts
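On systemd-based distributions, the hostname can also be reset in one step with hostnamectl, which rewrites /etc/hostname for you (the <newworkername> placeholder is mine); /etc/hosts still needs editing by hand:

```shell
# Set the static hostname; this updates /etc/hostname automatically.
sudo hostnamectl set-hostname <newworkername>
# /etc/hosts must still be updated manually to match the new name.
```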
Part B - Reset the config:
$ sudo kubeadm reset
Clear old folder data:
$ sudo systemctl stop kubelet
$ sudo systemctl stop docker
$ sudo systemctl disable docker
$ sudo rm -rf /etc/kubernetes/
$ sudo rm -rf /var/lib/kubelet/*
Clear the iptables rules (first command) or the IPVS tables (second command), whichever applies:
$ sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
or
$ sudo ipvsadm -C
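To confirm the flush worked, the remaining rule sets can be listed; after a successful flush the built-in chains should be empty (a sketch, assuming the iptables path above was taken):

```shell
# Each built-in chain should show no rules after the flush.
sudo iptables -L -n
sudo iptables -t nat -L -n
```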
If the node was a secondary master, also clear:
$ sudo rm -rf ~/.kube/
$ sudo rm -rf /var/lib/etcd/
Shut down the applicable network components:
$ sudo ifconfig docker0 down
(for flannel, e.g.: sudo ifconfig flannel.1 down; sudo ip link delete flannel.1)
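Which interfaces exist depends on the network plugin in use, so listing links first avoids guessing (docker0, flannel.1, and cni0 are common examples):

```shell
# Show all interfaces; look for plugin-created ones such as
# docker0, flannel.1, or cni0 before deciding what to take down.
ip link show
```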
Uninstall:
$ sudo apt-get remove kubeadm kubectl kubelet
$ sudo apt-get autoremove
Reinstall the desired version:
$ sudo apt-get install kubeadm=<version> kubectl=<version> kubelet=<version>
$ sudo systemctl enable docker
$ sudo reboot
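To keep apt from silently upgrading the freshly pinned versions later, the packages can be held (an optional addition of mine, not part of the original steps):

```shell
# Prevent routine upgrades from bumping the Kubernetes packages.
sudo apt-mark hold kubeadm kubectl kubelet
```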
Verify docker and kubelet are running; if not:
$ sudo systemctl enable docker
$ sudo systemctl enable kubelet
$ sudo systemctl start docker
$ sudo systemctl start kubelet
Notes: Check for errors as each service starts.
In my case, both were already running without errors.
Re-join:
$ sudo kubeadm join ...
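If the original join command is no longer at hand, a fresh one can be generated on a master:

```shell
# Run on a master; prints a complete 'kubeadm join ...' command
# with a fresh token and the discovery CA cert hash.
kubeadm token create --print-join-command
```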