Kubernetes Configuration Backup and etcd Backup

Mindwatering Incorporated

Author: Tripp W Black

Created: 01/02/2020 at 10:13 PM

 

Category:
Linux
Kubernetes

Commands to Backup Kubernetes:

Backup Certificates
$ sudo cp -r /etc/kubernetes/pki /nas/bkups/

Backup/create etcd snapshot
$ sudo docker run --rm -v /nas/bkups:/nas/bkups \
--network host \
-v /etc/kubernetes/pki/etcd:/etc/kubernetes/pki/etcd \
--env ETCDCTL_API=3 \
k8s.gcr.io/etcd-amd64:3.2.18 \
etcdctl --endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
--key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
snapshot save /nas/bkups/etcd-snapshot-latest.db
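Once the snapshot is written, it is worth verifying it before moving on. A minimal sketch, assuming etcdctl (API v3) is installed on the master node; the check is guarded so it degrades gracefully on a host without etcdctl:

```shell
# Print hash, revision, total keys, and size of the snapshot file.
SNAP=/nas/bkups/etcd-snapshot-latest.db
if command -v etcdctl >/dev/null 2>&1 && [ -f "$SNAP" ]; then
    ETCDCTL_API=3 etcdctl snapshot status "$SNAP" --write-out=table
    SNAP_CHECK=ran
else
    echo "etcdctl or $SNAP not present; run this check on the master node"
    SNAP_CHECK=skipped
fi
```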

Backup kubeadm Config
$ sudo cp /etc/kubeadm/kubeadm-config.yaml /nas/bkups/
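If that file was never kept after installation, the same initialization settings also live in the kubeadm-config ConfigMap in kube-system, which can be exported as a fallback. A sketch, assuming a working kubectl and admin kubeconfig on the master; the guard makes it safe to run elsewhere (the output filename is an assumption, adjust to taste):

```shell
# Export the kubeadm-config ConfigMap as a fallback backup of the init settings.
if command -v kubectl >/dev/null 2>&1; then
    kubectl get configmap kubeadm-config -n kube-system -o yaml \
        > /nas/bkups/kubeadm-config-cm.yaml
    CM_BKUP=ran
else
    echo "kubectl not found; run this on the master node"
    CM_BKUP=skipped
fi
```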


Commands to Restore K8s:

Restore certificates
$ sudo cp -r /nas/bkups/pki /etc/kubernetes/

Restore etcd backup
$ sudo mkdir -p /var/lib/etcd
$ sudo docker run --rm \
-v /nas/bkups:/nas/bkups \
-v /var/lib/etcd:/var/lib/etcd \
--env ETCDCTL_API=3 \
k8s.gcr.io/etcd-amd64:3.2.18 \
/bin/sh -c "etcdctl snapshot restore '/nas/bkups/etcd-snapshot-latest.db' ; mv /default.etcd/member/ /var/lib/etcd/"

Restore kubeadm-config
$ sudo mkdir -p /etc/kubeadm
$ sudo cp /nas/bkups/kubeadm-config.yaml /etc/kubeadm/

Initialize the master with backup
$ sudo kubeadm init --ignore-preflight-errors=DirAvailable--var-lib-etcd \
--config /etc/kubeadm/kubeadm-config.yaml







Command to Backup Just the etcd Database, Based on the Official Docs:

The following command creates a snapshot backup on the local master node, written to the file /nas/bkups/etcdbackupfile.db:
$ ETCDCTL_API=3 etcdctl \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
--key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
snapshot save /nas/bkups/etcdbackupfile.db

Note:
Snapshot files that were copied straight out of the etcd data directory (rather than written by etcdctl snapshot save) do not carry an embedded integrity hash, so the hash check fails on restore. To restore such a file, you must add the flag: --skip-hash-check.
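For example, restoring such a hash-less copy might look like the sketch below; the file and data-dir paths are assumed from the examples on this page, the target data directory must not already exist, and the command is guarded so it stays safe to paste on a non-master host:

```shell
# --skip-hash-check accepts a snapshot file without an embedded integrity hash.
if command -v etcdctl >/dev/null 2>&1; then
    ETCDCTL_API=3 etcdctl snapshot restore /nas/bkups/etcdbackupfile.db \
        --skip-hash-check \
        --data-dir /var/lib/etcd-restore
    HASHLESS_RESTORE=attempted
else
    echo "etcdctl not found; run the restore on the master node"
    HASHLESS_RESTORE=skipped
fi
```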

Hint:
To get the --endpoints, cacert, cert, and key values, copy and paste them from /etc/kubernetes/manifests/kube-apiserver.yaml.
e.g.
- --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
- --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
- --etcd-servers=https://127.0.0.1:2379
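One way to pull just those flags out of the manifest is a simple grep. The sketch below inlines a sample manifest fragment so it can be run anywhere; on a real master, point grep at /etc/kubernetes/manifests/kube-apiserver.yaml instead:

```shell
# Sample fragment standing in for the real manifest (values from the e.g. above).
cat > /tmp/kube-apiserver-sample.yaml <<'EOF'
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
EOF
# The -- stops grep treating the pattern as an option; prints all four etcd flags.
grep -- '--etcd-' /tmp/kube-apiserver-sample.yaml
```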


etcd Database Restore:
The following restores a cluster with 3 control-plane (master) nodes. However, I have never gotten this to work per the GitHub documentation without doing what others do: restoring into a new etcd data directory and updating the etcd pod YAML file. Those additional steps are below:
m1 = master1
m2 = master2
m3 = master3

$ ETCDCTL_API=3 etcdctl snapshot restore snapshot.db \
--name m1 \
--initial-cluster m1=http://master1:2380,m2=http://master2:2380,m3=http://master3:2380 \
--initial-cluster-token etcd-cluster-1 \
--data-dir /var/lib/etcd-restore \
--initial-advertise-peer-urls http://master1:2380
$ ETCDCTL_API=3 etcdctl snapshot restore snapshot.db \
--name m2 \
--initial-cluster m1=http://master1:2380,m2=http://master2:2380,m3=http://master3:2380 \
--initial-cluster-token etcd-cluster-1 \
--data-dir /var/lib/etcd-restore \
--initial-advertise-peer-urls http://master2:2380
$ ETCDCTL_API=3 etcdctl snapshot restore snapshot.db \
--name m3 \
--initial-cluster m1=http://master1:2380,m2=http://master2:2380,m3=http://master3:2380 \
--initial-cluster-token etcd-cluster-1 \
--data-dir /var/lib/etcd-restore \
--initial-advertise-peer-urls http://master3:2380

Note: Ensure DNS is resolving for each of the master1, master2, and master3 nodes to each other.
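The three restore invocations above differ only in --name and the peer URL, so they can be generated from one template. This sketch only builds and echoes the commands (a dry run), so it is safe to execute anywhere before pasting each one on its node:

```shell
# Generate the per-node etcdctl restore commands from a single template.
CLUSTER='m1=http://master1:2380,m2=http://master2:2380,m3=http://master3:2380'
CMDS=$(for i in 1 2 3; do
    printf 'ETCDCTL_API=3 etcdctl snapshot restore snapshot.db --name m%s --initial-cluster %s --initial-cluster-token etcd-cluster-1 --data-dir /var/lib/etcd-restore --initial-advertise-peer-urls http://master%s:2380\n' "$i" "$CLUSTER" "$i"
done)
echo "$CMDS"
```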

Start up etcd:
$ etcd \
--name m1 \
--listen-client-urls http://master1:2379 \
--advertise-client-urls http://master1:2379 \
--listen-peer-urls http://master1:2380 &
$ etcd \
--name m2 \
--listen-client-urls http://master2:2379 \
--advertise-client-urls http://master2:2379 \
--listen-peer-urls http://master2:2380 &
$ etcd \
--name m3 \
--listen-client-urls http://master3:2379 \
--advertise-client-urls http://master3:2379 \
--listen-peer-urls http://master3:2380 &

We have to update the YAML file to include the --initial-cluster-token, and update the data path from /var/lib/etcd to /var/lib/etcd-restore.
# vi /etc/kubernetes/manifests/etcd.yaml
...
Update the --data-dir line to:
--data-dir=/var/lib/etcd-restore
In the same section (list) of parameters, add the --initial-cluster-token:
--initial-cluster-token=etcd-cluster-1
...
For the volumeMounts, mountPath entry, update like below:
volumeMounts:
- mountPath: /var/lib/etcd-restore
...
For the hostPath, path entry, update like below:
- hostPath:
path: /var/lib/etcd-restore
...
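Taken together, the edited fragments of etcd.yaml look roughly like this. A sketch of only the touched sections, not a complete manifest; the indentation, the etcd-data volume name, and the hostPath type are assumptions based on a stock kubeadm-generated manifest:

```yaml
# /etc/kubernetes/manifests/etcd.yaml (relevant fragments only)
spec:
  containers:
  - command:
    - etcd
    - --data-dir=/var/lib/etcd-restore
    - --initial-cluster-token=etcd-cluster-1
    # ...other existing etcd flags unchanged...
    volumeMounts:
    - mountPath: /var/lib/etcd-restore
      name: etcd-data
  volumes:
  - hostPath:
      path: /var/lib/etcd-restore
      type: DirectoryOrCreate
    name: etcd-data
```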

Save the file, and the kubelet service will automatically restart the etcd pod. Within ten or so seconds, you can do a kubectl get nodes and see the nodes again.
If it doesn't work, you'll get the generic "No resources found" for whatever you "get". Check for path issues/typos, and review the systemctl status kubelet.service messages.

