Chapter 5. Control plane backup and restore
5.1. Backing up etcd
etcd is the key-value store for OpenShift Container Platform, which persists the state of all resource objects.
Back up your cluster’s etcd data regularly and store it in a secure location, ideally outside the OpenShift Container Platform environment. Do not take an etcd backup before the first certificate rotation completes, which occurs 24 hours after installation; otherwise, the backup will contain expired certificates. It is also recommended to take etcd backups during non-peak usage hours because the etcd snapshot has a high I/O cost.
Be sure to take an etcd backup after you update your cluster. This is important because when you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OpenShift Container Platform 4.y.z cluster must use an etcd backup that was taken from 4.y.z.
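Because a restore must use a backup from the same z-stream release, it can help to record the current cluster version alongside each backup. For example, you might capture it with a standard jsonpath query (a sketch; adjust the output handling to suit your backup tooling):
$ oc get clusterversion version -o jsonpath='{.status.desired.version}{"\n"}'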
Back up your cluster’s etcd data by performing a single invocation of the backup script on a control plane host. Do not take a backup for each control plane host.
After you have an etcd backup, you can restore to a previous cluster state.
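Because the backup files are written to the control plane host's local file system, you might then copy them to external storage, assuming SSH access to the node as the core user (the destination path here is a placeholder):
$ scp -r core@<control_plane_node>:/home/core/assets/backup <secure_backup_location>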
5.1.1. Backing up etcd data
Follow these steps to back up etcd data by creating an etcd snapshot and backing up the resources for the static pods. This backup can be saved and used at a later time if you need to restore etcd.
Only save a backup from a single control plane host. Do not take a backup from each control plane host in the cluster.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have checked whether the cluster-wide proxy is enabled.

Tip: You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml. The proxy is enabled if the httpProxy, httpsProxy, and noProxy fields have values set.
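As a quicker check, you might print only those fields with a jsonpath query (empty lines mean the proxy is not configured):
$ oc get proxy cluster -o jsonpath='{.spec.httpProxy}{"\n"}{.spec.httpsProxy}{"\n"}{.spec.noProxy}{"\n"}'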
Procedure
Start a debug session as root for a control plane node:
$ oc debug --as-root node/<node_name>
Change your root directory to /host in the debug shell:

sh-4.4# chroot /host
If the cluster-wide proxy is enabled, export the NO_PROXY, HTTP_PROXY, and HTTPS_PROXY environment variables by running the following commands:

$ export HTTP_PROXY=http://<your_proxy.example.com>:8080
$ export HTTPS_PROXY=https://<your_proxy.example.com>:8080
$ export NO_PROXY=<example.com>
Run the cluster-backup.sh script in the debug shell and pass in the location to save the backup to.

Tip: The cluster-backup.sh script is maintained as a component of the etcd Cluster Operator and is a wrapper around the etcdctl snapshot save command.

sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup
Example script output
found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-6
found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7
found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6
found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-3
ede95fe6b88b87ba86a03c15e669fb4aa5bf0991c180d3c6895ce72eaade54a1
etcdctl version: 3.4.14
API version: 3.4
{"level":"info","ts":1624647639.0188997,"caller":"snapshot/v3_snapshot.go:119","msg":"created temporary db file","path":"/home/core/assets/backup/snapshot_2021-06-25_190035.db.part"}
{"level":"info","ts":"2021-06-25T19:00:39.030Z","caller":"clientv3/maintenance.go:200","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":1624647639.0301006,"caller":"snapshot/v3_snapshot.go:127","msg":"fetching snapshot","endpoint":"https://10.0.0.5:2379"}
{"level":"info","ts":"2021-06-25T19:00:40.215Z","caller":"clientv3/maintenance.go:208","msg":"completed snapshot read; closing"}
{"level":"info","ts":1624647640.6032252,"caller":"snapshot/v3_snapshot.go:142","msg":"fetched snapshot","endpoint":"https://10.0.0.5:2379","size":"114 MB","took":1.584090459}
{"level":"info","ts":1624647640.6047094,"caller":"snapshot/v3_snapshot.go:152","msg":"saved","path":"/home/core/assets/backup/snapshot_2021-06-25_190035.db"}
Snapshot saved at /home/core/assets/backup/snapshot_2021-06-25_190035.db
{"hash":3866667823,"revision":31407,"totalKey":12828,"totalSize":114446336}
snapshot db and kube resources are successfully saved to /home/core/assets/backup
In this example, two files are created in the /home/core/assets/backup/ directory on the control plane host:

- snapshot_<datetimestamp>.db: This file is the etcd snapshot. The cluster-backup.sh script confirms its validity.
- static_kuberesources_<datetimestamp>.tar.gz: This file contains the resources for the static pods. If etcd encryption is enabled, it also contains the encryption keys for the etcd snapshot.

Note: If etcd encryption is enabled, it is recommended to store this second file separately from the etcd snapshot for security reasons. However, this file is required to restore from the etcd snapshot.

Keep in mind that etcd encryption only encrypts values, not keys. This means that resource types, namespaces, and object names are unencrypted.
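If you are unsure whether etcd encryption is enabled, you might check the API server configuration; an empty result or identity indicates that encryption is not enabled:
$ oc get apiserver cluster -o jsonpath='{.spec.encryption.type}{"\n"}'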
5.1.2. Additional resources
5.1.3. Creating automated etcd backups
The automated backup feature for etcd supports both recurring and single backups. Recurring backups create a cron job that starts a single backup each time the job triggers.
Automating etcd backups is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Follow these steps to enable automated backups for etcd.
Enabling the TechPreviewNoUpgrade feature set on your cluster prevents minor version updates. The TechPreviewNoUpgrade feature set cannot be disabled. Do not enable this feature set on production clusters.
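Before you enable the feature set, you might review the current FeatureGate configuration to confirm that no feature set is already selected:
$ oc get featuregate cluster -o yaml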
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have access to the OpenShift CLI (oc).
Procedure
Create a FeatureGate custom resource (CR) file named enable-tech-preview-no-upgrade.yaml with the following contents:

apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  name: cluster
spec:
  featureSet: TechPreviewNoUpgrade
Apply the CR and enable automated backups:
$ oc apply -f enable-tech-preview-no-upgrade.yaml
It takes time to enable the related APIs. Verify the creation of the custom resource definition (CRD) by running the following command:
$ oc get crd | grep backup
Example output
backups.config.openshift.io         2023-10-25T13:32:43Z
etcdbackups.operator.openshift.io   2023-10-25T13:32:04Z
5.1.3.1. Creating a single etcd backup
Follow these steps to create a single etcd backup by creating and applying a custom resource (CR).
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have access to the OpenShift CLI (oc).
Procedure
If dynamically-provisioned storage is available, complete the following steps to create a single automated etcd backup:
Create a persistent volume claim (PVC) named etcd-backup-pvc.yaml with contents such as the following example:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: etcd-backup-pvc
  namespace: openshift-etcd
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi 1
  volumeMode: Filesystem
- 1
- The amount of storage available to the PVC. Adjust this value for your requirements.
Apply the PVC by running the following command:
$ oc apply -f etcd-backup-pvc.yaml
Verify the creation of the PVC by running the following command:
$ oc get pvc
Example output
NAME              STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
etcd-backup-pvc   Bound                                                      51s
Note: Dynamic PVCs stay in the Pending state until they are mounted.

Create a CR file named etcd-single-backup.yaml with contents such as the following example:

apiVersion: operator.openshift.io/v1alpha1
kind: EtcdBackup
metadata:
  name: etcd-single-backup
  namespace: openshift-etcd
spec:
  pvcName: etcd-backup-pvc 1
- 1
- The name of the PVC to save the backup to. Adjust this value according to your environment.
Apply the CR to start a single backup:
$ oc apply -f etcd-single-backup.yaml
If dynamically-provisioned storage is not available, complete the following steps to create a single automated etcd backup:
Create a StorageClass CR file named etcd-backup-local-storage.yaml with the following contents:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: etcd-backup-local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
Apply the StorageClass CR by running the following command:

$ oc apply -f etcd-backup-local-storage.yaml
Create a PV named etcd-backup-pv-fs.yaml with contents such as the following example:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: etcd-backup-pv-fs
spec:
  capacity:
    storage: 100Gi 1
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: etcd-backup-local-storage
  local:
    path: /mnt
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - <example_master_node> 2

- 1
- The amount of storage available to the PV. Adjust this value for your requirements.
- 2
- The name of the control plane node to attach this PV to. Adjust this value for your environment.
Verify the creation of the PV by running the following command:
$ oc get pv
Example output
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS                REASON   AGE
etcd-backup-pv-fs   100Gi      RWO            Retain           Available           etcd-backup-local-storage            10s
Create a PVC named etcd-backup-pvc.yaml with contents such as the following example:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: etcd-backup-pvc
  namespace: openshift-etcd
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Gi 1
- 1
- The amount of storage available to the PVC. Adjust this value for your requirements.
Apply the PVC by running the following command:
$ oc apply -f etcd-backup-pvc.yaml
Create a CR file named etcd-single-backup.yaml with contents such as the following example:

apiVersion: operator.openshift.io/v1alpha1
kind: EtcdBackup
metadata:
  name: etcd-single-backup
  namespace: openshift-etcd
spec:
  pvcName: etcd-backup-pvc 1
- 1
- The name of the persistent volume claim (PVC) to save the backup to. Adjust this value according to your environment.
Apply the CR to start a single backup:
$ oc apply -f etcd-single-backup.yaml
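After the CR is applied, you might confirm that the backup ran by checking the EtcdBackup resource and the job it creates in the openshift-etcd namespace (the exact objects created can vary by release):
$ oc get etcdbackups.operator.openshift.io -n openshift-etcd
$ oc get jobs -n openshift-etcd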
5.1.3.2. Creating recurring etcd backups
Follow these steps to create automated recurring backups of etcd.
Use dynamically-provisioned storage to keep the created etcd backup data in a safe, external location if possible. If dynamically-provisioned storage is not available, consider storing the backup data on an NFS share to make backup recovery more accessible.
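For example, if an NFS export is available, a PersistentVolume similar to the following sketch could back the backup PVC; the storage class name, server, and path are placeholders, and the storageClassName must match the one you set in the PVC:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: etcd-backup-nfs-pv
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: etcd-backup-nfs
  nfs:
    server: nfs.example.com
    path: /exports/etcd-backups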
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have access to the OpenShift CLI (oc).
Procedure
If dynamically-provisioned storage is available, complete the following steps to create automated recurring backups:
Create a persistent volume claim (PVC) named etcd-backup-pvc.yaml with contents such as the following example:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: etcd-backup-pvc
  namespace: openshift-etcd
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi 1
  volumeMode: Filesystem
  storageClassName: etcd-backup-local-storage
- 1
- The amount of storage available to the PVC. Adjust this value for your requirements.
Note: Each of the following providers requires changes to the accessModes and storageClassName keys:

Provider                                                    accessModes value   storageClassName value
AWS with the versioned-installer-efc_operator-ci profile    - ReadWriteMany     efs-sc
Google Cloud Platform                                       - ReadWriteMany     filestore-csi
Microsoft Azure                                             - ReadWriteMany     azurefile-csi
Apply the PVC by running the following command:
$ oc apply -f etcd-backup-pvc.yaml
Verify the creation of the PVC by running the following command:
$ oc get pvc
Example output
NAME              STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
etcd-backup-pvc   Bound                                                      51s
Note: Dynamic PVCs stay in the Pending state until they are mounted.
If dynamically-provisioned storage is unavailable, create a local storage PVC by completing the following steps:
Warning: If you delete or otherwise lose access to the node that contains the stored backup data, you can lose data.
Create a StorageClass CR file named etcd-backup-local-storage.yaml with the following contents:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: etcd-backup-local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
Apply the StorageClass CR by running the following command:

$ oc apply -f etcd-backup-local-storage.yaml
Create a PV named etcd-backup-pv-fs.yaml from the applied StorageClass with contents such as the following example:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: etcd-backup-pv-fs
spec:
  capacity:
    storage: 100Gi 1
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Delete
  storageClassName: etcd-backup-local-storage
  local:
    path: /mnt/
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - <example_master_node> 2

- 1
- The amount of storage available to the PV. Adjust this value for your requirements.
- 2
- The name of the control plane node to attach this PV to. Adjust this value for your environment.
Tip: Run the following command to list the available nodes:
$ oc get nodes
Verify the creation of the PV by running the following command:
$ oc get pv
Example output
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS                REASON   AGE
etcd-backup-pv-fs   100Gi      RWX            Delete           Available           etcd-backup-local-storage            10s
Create a PVC named etcd-backup-pvc.yaml with contents such as the following example:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: etcd-backup-pvc
spec:
  accessModes:
  - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Gi 1
  storageClassName: etcd-backup-local-storage
- 1
- The amount of storage available to the PVC. Adjust this value for your requirements.
Apply the PVC by running the following command:
$ oc apply -f etcd-backup-pvc.yaml
Create a custom resource (CR) file named etcd-recurring-backup.yaml. The contents of the CR define the schedule and retention type of the automated backups.

For the default retention type of RetentionNumber with 15 retained backups, use contents such as the following example:

apiVersion: config.openshift.io/v1alpha1
kind: Backup
metadata:
  name: etcd-recurring-backup
spec:
  etcd:
    schedule: "20 4 * * *" 1
    timeZone: "UTC"
    pvcName: etcd-backup-pvc
- 1
- The CronTab schedule for recurring backups. Adjust this value for your needs.
To use retention based on the maximum number of backups, add the following key-value pairs to the etcd key:

spec:
  etcd:
    retentionPolicy:
      retentionType: RetentionNumber 1
      retentionNumber:
        maxNumberOfBackups: 5 2

- 1
- The retention type. If not specified, it defaults to RetentionNumber.
- 2
- The maximum number of backups to retain. If not specified, it defaults to 15 backups.
Warning: A known issue causes the number of retained backups to be one greater than the configured value.
For retention based on the file size of backups, use the following:
spec:
  etcd:
    retentionPolicy:
      retentionType: RetentionSize
      retentionSize:
        maxSizeOfBackupsGb: 20 1
- 1
- The maximum file size of the retained backups in gigabytes. Adjust this value for your needs. Defaults to 10 GB if unspecified.
Warning: A known issue causes the maximum size of retained backups to be up to 10 GB greater than the configured value.
Create the cron job defined by the CR by running the following command:
$ oc create -f etcd-recurring-backup.yaml
To find the created cron job, run the following command:
$ oc get cronjob -n openshift-etcd
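To exercise the schedule without waiting for the next trigger, you might start a one-off run from the cron job; <cronjob_name> is the name reported by the previous command:
$ oc create job etcd-backup-test --from=cronjob/<cronjob_name> -n openshift-etcd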
5.2. Replacing an unhealthy etcd member
This document describes the process to replace a single unhealthy etcd member.
This process depends on whether the etcd member is unhealthy because the machine is not running or the node is not ready, or whether it is unhealthy because the etcd pod is crashlooping.
If you have lost the majority of your control plane hosts, follow the disaster recovery procedure to restore to a previous cluster state instead of this procedure.
If the control plane certificates are not valid on the member being replaced, then you must follow the procedure to recover from expired control plane certificates instead of this procedure.
If a control plane node is lost and a new one is created, the etcd cluster Operator handles generating the new TLS certificates and adding the node as an etcd member.
5.2.1. Prerequisites
- Take an etcd backup prior to replacing an unhealthy etcd member.
5.2.2. Identifying an unhealthy etcd member
You can identify if your cluster has an unhealthy etcd member.
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
Procedure
Check the status of the EtcdMembersAvailable status condition using the following command:

$ oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="EtcdMembersAvailable")]}{.message}{"\n"}'
Review the output:
2 of 3 members are available, ip-10-0-131-183.ec2.internal is unhealthy
This example output shows that the ip-10-0-131-183.ec2.internal etcd member is unhealthy.
5.2.3. Determining the state of the unhealthy etcd member
The steps to replace an unhealthy etcd member depend on which of the following states your etcd member is in:
- The machine is not running or the node is not ready
- The etcd pod is crashlooping
This procedure determines which state your etcd member is in. This enables you to know which procedure to follow to replace the unhealthy etcd member.
If you are aware that the machine is not running or the node is not ready, but you expect it to return to a healthy state soon, then you do not need to perform a procedure to replace the etcd member. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have identified an unhealthy etcd member.
Procedure
Determine if the machine is not running:
$ oc get machines -A -ojsonpath='{range .items[*]}{@.status.nodeRef.name}{"\t"}{@.status.providerStatus.instanceState}{"\n"}' | grep -v running
Example output
ip-10-0-131-183.ec2.internal stopped 1
- 1
- This output lists the node and the status of the node’s machine. If the status is anything other than running, then the machine is not running.
If the machine is not running, then follow the Replacing an unhealthy etcd member whose machine is not running or whose node is not ready procedure.
Determine if the node is not ready.
If either of the following scenarios is true, then the node is not ready.
If the machine is running, then check whether the node is unreachable:
$ oc get nodes -o jsonpath='{range .items[*]}{"\n"}{.metadata.name}{"\t"}{range .spec.taints[*]}{.key}{" "}' | grep unreachable
Example output
ip-10-0-131-183.ec2.internal node-role.kubernetes.io/master node.kubernetes.io/unreachable node.kubernetes.io/unreachable 1
- 1
- If the node is listed with an unreachable taint, then the node is not ready.
If the node is still reachable, then check whether the node is listed as NotReady:

$ oc get nodes -l node-role.kubernetes.io/master | grep "NotReady"
Example output
ip-10-0-131-183.ec2.internal NotReady master 122m v1.30.3 1
- 1
- If the node is listed as NotReady, then the node is not ready.
If the node is not ready, then follow the Replacing an unhealthy etcd member whose machine is not running or whose node is not ready procedure.
Determine if the etcd pod is crashlooping.
If the machine is running and the node is ready, then check whether the etcd pod is crashlooping.
Verify that all control plane nodes are listed as Ready:

$ oc get nodes -l node-role.kubernetes.io/master
Example output
NAME                           STATUS   ROLES    AGE     VERSION
ip-10-0-131-183.ec2.internal   Ready    master   6h13m   v1.30.3
ip-10-0-164-97.ec2.internal    Ready    master   6h13m   v1.30.3
ip-10-0-154-204.ec2.internal   Ready    master   6h13m   v1.30.3
Check whether the status of an etcd pod is either Error or CrashLoopBackOff:

$ oc -n openshift-etcd get pods -l k8s-app=etcd
Example output
etcd-ip-10-0-131-183.ec2.internal   2/3   Error     7   6h9m 1
etcd-ip-10-0-164-97.ec2.internal    3/3   Running   0   6h6m
etcd-ip-10-0-154-204.ec2.internal   3/3   Running   0   6h6m
- 1
- Because the status of this pod is Error, the etcd pod is crashlooping.
If the etcd pod is crashlooping, then follow the Replacing an unhealthy etcd member whose etcd pod is crashlooping procedure.
5.2.4. Replacing the unhealthy etcd member
Depending on the state of your unhealthy etcd member, use one of the following procedures:
5.2.4.1. Replacing an unhealthy etcd member whose machine is not running or whose node is not ready
This procedure details the steps to replace an etcd member that is unhealthy either because the machine is not running or because the node is not ready.
If your cluster uses a control plane machine set, see "Recovering a degraded etcd Operator" in "Troubleshooting the control plane machine set" for a simpler etcd recovery procedure.
Prerequisites
- You have identified the unhealthy etcd member.
- You have verified that either the machine is not running or the node is not ready.

Important: You must wait if the other control plane nodes are powered off. The control plane nodes must remain powered off until the replacement of an unhealthy etcd member is complete.

- You have access to the cluster as a user with the cluster-admin role.
- You have taken an etcd backup.

Important: It is important to take an etcd backup before performing this procedure so that your cluster can be restored if you encounter any issues.
Procedure
Remove the unhealthy member.
Choose a pod that is not on the affected node:
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc -n openshift-etcd get pods -l k8s-app=etcd
Example output
etcd-ip-10-0-131-183.ec2.internal   3/3   Running   0   123m
etcd-ip-10-0-164-97.ec2.internal    3/3   Running   0   123m
etcd-ip-10-0-154-204.ec2.internal   3/3   Running   0   124m
Connect to the running etcd container, passing in the name of a pod that is not on the affected node:
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal
View the member list:
sh-4.2# etcdctl member list -w table
Example output
+------------------+---------+------------------------------+---------------------------+---------------------------+
| ID               | STATUS  | NAME                         | PEER ADDRS                | CLIENT ADDRS              |
+------------------+---------+------------------------------+---------------------------+---------------------------+
| 6fc1e7c9db35841d | started | ip-10-0-131-183.ec2.internal | https://10.0.131.183:2380 | https://10.0.131.183:2379 |
| 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal  | https://10.0.164.97:2380  | https://10.0.164.97:2379  |
| ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 |
+------------------+---------+------------------------------+---------------------------+---------------------------+
Take note of the ID and the name of the unhealthy etcd member, because these values are needed later in the procedure. The etcdctl endpoint health command will list the removed member until the replacement procedure is finished and a new member is added.

Remove the unhealthy etcd member by providing the ID to the etcdctl member remove command:

sh-4.2# etcdctl member remove 6fc1e7c9db35841d
Example output
Member 6fc1e7c9db35841d removed from cluster ead669ce1fbfb346
View the member list again and verify that the member was removed:
sh-4.2# etcdctl member list -w table
Example output
+------------------+---------+------------------------------+---------------------------+---------------------------+
| ID               | STATUS  | NAME                         | PEER ADDRS                | CLIENT ADDRS              |
+------------------+---------+------------------------------+---------------------------+---------------------------+
| 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal  | https://10.0.164.97:2380  | https://10.0.164.97:2379  |
| ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 |
+------------------+---------+------------------------------+---------------------------+---------------------------+
You can now exit the node shell.
Turn off the quorum guard by entering the following command:
$ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}'
This command ensures that you can successfully re-create secrets and roll out the static pods.
Important: After you turn off the quorum guard, the cluster might be unreachable for a short time while the remaining etcd instances reboot to reflect the configuration change.

Note: etcd cannot tolerate any additional member failure when running with two members. Restarting either remaining member breaks the quorum and causes downtime in your cluster. The quorum guard protects etcd from restarts due to configuration changes that could cause downtime, so it must be disabled to complete this procedure.
Delete the affected node by running the following command:
$ oc delete node <node_name>
Example command
$ oc delete node ip-10-0-131-183.ec2.internal
Remove the old secrets for the unhealthy etcd member that was removed.
List the secrets for the unhealthy etcd member that was removed.
$ oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal 1
- 1
- Pass in the name of the unhealthy etcd member that you took note of earlier in this procedure.
There is a peer, serving, and metrics secret as shown in the following output:
Example output
etcd-peer-ip-10-0-131-183.ec2.internal              kubernetes.io/tls   2   47m
etcd-serving-ip-10-0-131-183.ec2.internal           kubernetes.io/tls   2   47m
etcd-serving-metrics-ip-10-0-131-183.ec2.internal   kubernetes.io/tls   2   47m
Delete the secrets for the unhealthy etcd member that was removed.
Delete the peer secret:
$ oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal
Delete the serving secret:
$ oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal
Delete the metrics secret:
$ oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal
Delete and re-create the control plane machine. After this machine is re-created, a new revision is forced and etcd scales up automatically.
If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps. Otherwise, you must create the new control plane machine by using the same method that was used to originally create it.
Obtain the machine for the unhealthy member.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc get machines -n openshift-machine-api -o wide
Example output
NAME                                        PHASE     TYPE        REGION      ZONE         AGE     NODE                           PROVIDERID                              STATE
clustername-8qw5l-master-0                  Running   m4.xlarge   us-east-1   us-east-1a   3h37m   ip-10-0-131-183.ec2.internal   aws:///us-east-1a/i-0ec2782f8287dfb7e   stopped 1
clustername-8qw5l-master-1                  Running   m4.xlarge   us-east-1   us-east-1b   3h37m   ip-10-0-154-204.ec2.internal   aws:///us-east-1b/i-096c349b700a19631   running
clustername-8qw5l-master-2                  Running   m4.xlarge   us-east-1   us-east-1c   3h37m   ip-10-0-164-97.ec2.internal    aws:///us-east-1c/i-02626f1dba9ed5bba   running
clustername-8qw5l-worker-us-east-1a-wbtgd   Running   m4.large    us-east-1   us-east-1a   3h28m   ip-10-0-129-226.ec2.internal   aws:///us-east-1a/i-010ef6279b4662ced   running
clustername-8qw5l-worker-us-east-1b-lrdxb   Running   m4.large    us-east-1   us-east-1b   3h28m   ip-10-0-144-248.ec2.internal   aws:///us-east-1b/i-0cb45ac45a166173b   running
clustername-8qw5l-worker-us-east-1c-pkg26   Running   m4.large    us-east-1   us-east-1c   3h28m   ip-10-0-170-181.ec2.internal   aws:///us-east-1c/i-06861c00007751b0a   running
- 1
- This is the control plane machine for the unhealthy node, ip-10-0-131-183.ec2.internal.
Delete the machine of the unhealthy member:
$ oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1
- 1
- Specify the name of the control plane machine for the unhealthy node.
A new machine is automatically provisioned after deleting the machine of the unhealthy member.
Verify that a new machine has been created:
$ oc get machines -n openshift-machine-api -o wide
Example output
NAME                                        PHASE          TYPE        REGION      ZONE         AGE     NODE                           PROVIDERID                              STATE
clustername-8qw5l-master-1                  Running        m4.xlarge   us-east-1   us-east-1b   3h37m   ip-10-0-154-204.ec2.internal   aws:///us-east-1b/i-096c349b700a19631   running
clustername-8qw5l-master-2                  Running        m4.xlarge   us-east-1   us-east-1c   3h37m   ip-10-0-164-97.ec2.internal    aws:///us-east-1c/i-02626f1dba9ed5bba   running
clustername-8qw5l-master-3                  Provisioning   m4.xlarge   us-east-1   us-east-1a   85s     ip-10-0-133-53.ec2.internal    aws:///us-east-1a/i-015b0888fe17bc2c8   running 1
clustername-8qw5l-worker-us-east-1a-wbtgd   Running        m4.large    us-east-1   us-east-1a   3h28m   ip-10-0-129-226.ec2.internal   aws:///us-east-1a/i-010ef6279b4662ced   running
clustername-8qw5l-worker-us-east-1b-lrdxb   Running        m4.large    us-east-1   us-east-1b   3h28m   ip-10-0-144-248.ec2.internal   aws:///us-east-1b/i-0cb45ac45a166173b   running
clustername-8qw5l-worker-us-east-1c-pkg26   Running        m4.large    us-east-1   us-east-1c   3h28m   ip-10-0-170-181.ec2.internal   aws:///us-east-1c/i-06861c00007751b0a   running
- 1
- The new machine, clustername-8qw5l-master-3, is being created and is ready once the phase changes from Provisioning to Running.
It might take a few minutes for the new machine to be created. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state.
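If you want to follow the progress, you might watch the machine and node objects until the new control plane node reports Ready:
$ oc get machines -n openshift-machine-api -w
$ oc get nodes -l node-role.kubernetes.io/master -w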
Turn the quorum guard back on by entering the following command:
$ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}'
You can verify that the unsupportedConfigOverrides section is removed from the object by entering this command:

$ oc get etcd/cluster -oyaml
If you are using single-node OpenShift, restart the node. Otherwise, you might encounter the following error in the etcd cluster Operator:
Example output
EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets "etcd-peer-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-metrics-sno-0": the object has been modified; please apply your changes to the latest version and try again]
Verification
Verify that all etcd pods are running properly.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc -n openshift-etcd get pods -l k8s-app=etcd
Example output
etcd-ip-10-0-133-53.ec2.internal    3/3   Running   0   7m49s
etcd-ip-10-0-164-97.ec2.internal    3/3   Running   0   123m
etcd-ip-10-0-154-204.ec2.internal   3/3   Running   0   124m
If the output from the previous command lists only two pods, you can manually force an etcd redeployment. In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge 1
- 1
- The forceRedeploymentReason value must be unique, which is why a timestamp is appended.
Verify that there are exactly three etcd members.
Connect to the running etcd container, passing in the name of a pod that was not on the affected node:
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal
View the member list:
sh-4.2# etcdctl member list -w table
Example output
+------------------+---------+------------------------------+---------------------------+---------------------------+
| ID               | STATUS  | NAME                         | PEER ADDRS                | CLIENT ADDRS              |
+------------------+---------+------------------------------+---------------------------+---------------------------+
| 5eb0d6b8ca24730c | started | ip-10-0-133-53.ec2.internal  | https://10.0.133.53:2380  | https://10.0.133.53:2379  |
| 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal  | https://10.0.164.97:2380  | https://10.0.164.97:2379  |
| ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 |
+------------------+---------+------------------------------+---------------------------+---------------------------+
If the output from the previous command lists more than three etcd members, you must carefully remove the unwanted member.
Warning: Be sure to remove the correct etcd member; removing a good etcd member might lead to quorum loss.
Additional resources
5.2.4.2. Replacing an unhealthy etcd member whose etcd pod is crashlooping
This procedure details the steps to replace an etcd member that is unhealthy because the etcd pod is crashlooping.
Prerequisites
- You have identified the unhealthy etcd member.
- You have verified that the etcd pod is crashlooping.
- You have access to the cluster as a user with the cluster-admin role.
- You have taken an etcd backup.

Important: It is important to take an etcd backup before performing this procedure so that your cluster can be restored if you encounter any issues.
Procedure
Stop the crashlooping etcd pod.
Debug the node that is crashlooping.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc debug node/ip-10-0-131-183.ec2.internal 1
- 1
- Replace this with the name of the unhealthy node.
Change your root directory to /host:

sh-4.2# chroot /host
Move the existing etcd pod file out of the kubelet manifest directory:
sh-4.2# mkdir /var/lib/etcd-backup
sh-4.2# mv /etc/kubernetes/manifests/etcd-pod.yaml /var/lib/etcd-backup/
Move the etcd data directory to a different location:
sh-4.2# mv /var/lib/etcd/ /tmp
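Before you leave the debug shell, you can optionally confirm that no etcd containers are still running on this node (container names vary by release; the operator container is excluded from the match):
sh-4.2# crictl ps | grep etcd | grep -v operator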
You can now exit the node shell.
Remove the unhealthy member.
Choose a pod that is not on the affected node.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc -n openshift-etcd get pods -l k8s-app=etcd
Example output
etcd-ip-10-0-131-183.ec2.internal   2/3   Error     7   6h9m
etcd-ip-10-0-164-97.ec2.internal    3/3   Running   0   6h6m
etcd-ip-10-0-154-204.ec2.internal   3/3   Running   0   6h6m
Connect to the running etcd container, passing in the name of a pod that is not on the affected node.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal
View the member list:
sh-4.2# etcdctl member list -w table
Example output
+------------------+---------+------------------------------+---------------------------+---------------------------+
| ID               | STATUS  | NAME                         | PEER ADDRS                | CLIENT ADDRS              |
+------------------+---------+------------------------------+---------------------------+---------------------------+
| 62bcf33650a7170a | started | ip-10-0-131-183.ec2.internal | https://10.0.131.183:2380 | https://10.0.131.183:2379 |
| b78e2856655bc2eb | started | ip-10-0-164-97.ec2.internal  | https://10.0.164.97:2380  | https://10.0.164.97:2379  |
| d022e10b498760d5 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 |
+------------------+---------+------------------------------+---------------------------+---------------------------+
Take note of the ID and the name of the unhealthy etcd member, because these values are needed later in the procedure.
Remove the unhealthy etcd member by providing the ID to the etcdctl member remove command:

sh-4.2# etcdctl member remove 62bcf33650a7170a
Example output
Member 62bcf33650a7170a removed from cluster ead669ce1fbfb346
View the member list again and verify that the member was removed:
sh-4.2# etcdctl member list -w table
Example output
+------------------+---------+------------------------------+---------------------------+---------------------------+
| ID               | STATUS  | NAME                         | PEER ADDRS                | CLIENT ADDRS              |
+------------------+---------+------------------------------+---------------------------+---------------------------+
| b78e2856655bc2eb | started | ip-10-0-164-97.ec2.internal  | https://10.0.164.97:2380  | https://10.0.164.97:2379  |
| d022e10b498760d5 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 |
+------------------+---------+------------------------------+---------------------------+---------------------------+
You can now exit the node shell.
Turn off the quorum guard by entering the following command:
$ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}'
This command ensures that you can successfully re-create secrets and roll out the static pods.
Remove the old secrets for the unhealthy etcd member that was removed.
List the secrets for the unhealthy etcd member that was removed.
$ oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal 1
- 1
- Pass in the name of the unhealthy etcd member that you took note of earlier in this procedure.
There is a peer, serving, and metrics secret as shown in the following output:
Example output
etcd-peer-ip-10-0-131-183.ec2.internal              kubernetes.io/tls   2   47m
etcd-serving-ip-10-0-131-183.ec2.internal           kubernetes.io/tls   2   47m
etcd-serving-metrics-ip-10-0-131-183.ec2.internal   kubernetes.io/tls   2   47m
Delete the secrets for the unhealthy etcd member that was removed.
Delete the peer secret:
$ oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal
Delete the serving secret:
$ oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal
Delete the metrics secret:
$ oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal
Force etcd redeployment.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "single-master-recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge 1
- 1
- The forceRedeploymentReason value must be unique, which is why a timestamp is appended.
When the etcd cluster Operator performs a redeployment, it ensures that all control plane nodes have a functioning etcd pod.
Turn the quorum guard back on by entering the following command:
$ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}'
You can verify that the unsupportedConfigOverrides section is removed from the object by entering this command:

$ oc get etcd/cluster -oyaml
If you are using single-node OpenShift, restart the node. Otherwise, you might encounter the following error in the etcd cluster Operator:
Example output
EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets "etcd-peer-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-metrics-sno-0": the object has been modified; please apply your changes to the latest version and try again]
Verification
Verify that the new member is available and healthy.
Connect to the running etcd container again.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal
Verify that all members are healthy:
sh-4.2# etcdctl endpoint health
Example output
https://10.0.131.183:2379 is healthy: successfully committed proposal: took = 16.671434ms
https://10.0.154.204:2379 is healthy: successfully committed proposal: took = 16.698331ms
https://10.0.164.97:2379 is healthy: successfully committed proposal: took = 16.621645ms
5.2.4.3. Replacing an unhealthy bare metal etcd member whose machine is not running or whose node is not ready
This procedure details the steps to replace a bare metal etcd member that is unhealthy either because the machine is not running or because the node is not ready.
If you are running installer-provisioned infrastructure or you used the Machine API to create your machines, follow these steps. Otherwise you must create the new control plane node using the same method that was used to originally create it.
Prerequisites
- You have identified the unhealthy bare metal etcd member.
- You have verified that either the machine is not running or the node is not ready.
- You have access to the cluster as a user with the cluster-admin role.
- You have taken an etcd backup.

Important: You must take an etcd backup before performing this procedure so that your cluster can be restored if you encounter any issues.
Procedure
Verify and remove the unhealthy member.
Choose a pod that is not on the affected node:
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc -n openshift-etcd get pods -l k8s-app=etcd -o wide
Example output
etcd-openshift-control-plane-0   5/5   Running   11   3h56m   192.168.10.9    openshift-control-plane-0   <none>   <none>
etcd-openshift-control-plane-1   5/5   Running   0    3h54m   192.168.10.10   openshift-control-plane-1   <none>   <none>
etcd-openshift-control-plane-2   5/5   Running   0    3h58m   192.168.10.11   openshift-control-plane-2   <none>   <none>
Connect to the running etcd container, passing in the name of a pod that is not on the affected node:
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc rsh -n openshift-etcd etcd-openshift-control-plane-0
View the member list:
sh-4.2# etcdctl member list -w table
Example output
+------------------+---------+---------------------------+-----------------------------+-----------------------------+------------+
| ID               | STATUS  | NAME                      | PEER ADDRS                  | CLIENT ADDRS                | IS LEARNER |
+------------------+---------+---------------------------+-----------------------------+-----------------------------+------------+
| 7a8197040a5126c8 | started | openshift-control-plane-2 | https://192.168.10.11:2380/ | https://192.168.10.11:2379/ | false      |
| 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380/ | https://192.168.10.10:2379/ | false      |
| cc3830a72fc357f9 | started | openshift-control-plane-0 | https://192.168.10.9:2380/  | https://192.168.10.9:2379/  | false      |
+------------------+---------+---------------------------+-----------------------------+-----------------------------+------------+
Take note of the ID and the name of the unhealthy etcd member, because these values are required later in the procedure. The etcdctl endpoint health command will list the removed member until the replacement procedure is completed and the new member is added.

Remove the unhealthy etcd member by providing the ID to the etcdctl member remove command:

Warning: Be sure to remove the correct etcd member; removing a good etcd member might lead to quorum loss.
sh-4.2# etcdctl member remove 7a8197040a5126c8
Example output
Member 7a8197040a5126c8 removed from cluster b23536c33f2cdd1b
View the member list again and verify that the member was removed:
sh-4.2# etcdctl member list -w table
Example output
+------------------+---------+---------------------------+-----------------------------+-----------------------------+------------+
| ID               | STATUS  | NAME                      | PEER ADDRS                  | CLIENT ADDRS                | IS LEARNER |
+------------------+---------+---------------------------+-----------------------------+-----------------------------+------------+
| cc3830a72fc357f9 | started | openshift-control-plane-2 | https://192.168.10.11:2380/ | https://192.168.10.11:2379/ | false      |
| 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380/ | https://192.168.10.10:2379/ | false      |
+------------------+---------+---------------------------+-----------------------------+-----------------------------+------------+
You can now exit the node shell.
Important: After you remove the member, the cluster might be unreachable for a short time while the remaining etcd instances reboot.
Turn off the quorum guard by entering the following command:
$ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}'
This command ensures that you can successfully re-create secrets and roll out the static pods.
Remove the old secrets for the unhealthy etcd member that was removed by running the following commands.
List the secrets for the unhealthy etcd member that was removed.
$ oc get secrets -n openshift-etcd | grep openshift-control-plane-2
Pass in the name of the unhealthy etcd member that you took note of earlier in this procedure.
There is a peer, serving, and metrics secret as shown in the following output:
etcd-peer-openshift-control-plane-2              kubernetes.io/tls   2   134m
etcd-serving-metrics-openshift-control-plane-2   kubernetes.io/tls   2   134m
etcd-serving-openshift-control-plane-2           kubernetes.io/tls   2   134m
Delete the secrets for the unhealthy etcd member that was removed.
Delete the peer secret:

$ oc delete secret etcd-peer-openshift-control-plane-2 -n openshift-etcd

Example output
secret "etcd-peer-openshift-control-plane-2" deleted

Delete the metrics secret:

$ oc delete secret etcd-serving-metrics-openshift-control-plane-2 -n openshift-etcd

Example output
secret "etcd-serving-metrics-openshift-control-plane-2" deleted

Delete the serving secret:

$ oc delete secret etcd-serving-openshift-control-plane-2 -n openshift-etcd

Example output
secret "etcd-serving-openshift-control-plane-2" deleted
Obtain the machine for the unhealthy member.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc get machines -n openshift-machine-api -o wide
Example output
NAME                             PHASE     TYPE   REGION   ZONE   AGE     NODE                        PROVIDERID                                                                                              STATE
examplecluster-control-plane-0   Running                          3h11m   openshift-control-plane-0   baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e   externally provisioned 1
examplecluster-control-plane-1   Running                          3h11m   openshift-control-plane-1   baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1   externally provisioned
examplecluster-control-plane-2   Running                          3h11m   openshift-control-plane-2   baremetalhost:///openshift-machine-api/openshift-control-plane-2/3354bdac-61d8-410f-be5b-6a395b056135   externally provisioned
examplecluster-compute-0         Running                          165m    openshift-compute-0         baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f         provisioned
examplecluster-compute-1         Running                          165m    openshift-compute-1         baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9         provisioned
- 1
- This is the control plane machine for the unhealthy node, examplecluster-control-plane-2.
Ensure that the Bare Metal Operator is available by running the following command:
$ oc get clusteroperator baremetal
Example output
NAME        VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
baremetal   4.17.0    True        False         False      3d15h
Remove the old BareMetalHost object by running the following command:

$ oc delete bmh openshift-control-plane-2 -n openshift-machine-api
Example output
baremetalhost.metal3.io "openshift-control-plane-2" deleted
Delete the machine of the unhealthy member by running the following command:
$ oc delete machine -n openshift-machine-api examplecluster-control-plane-2
After you remove the BareMetalHost and Machine objects, the Machine controller automatically deletes the Node object.

If deletion of the machine is delayed for any reason, or the command is obstructed and delayed, you can force deletion by removing the machine object finalizer field.

Important: Do not interrupt machine deletion by pressing Ctrl+c. You must allow the command to proceed to completion. Open a new terminal window to edit and delete the finalizer fields.

A new machine is automatically provisioned after deleting the machine of the unhealthy member.
Edit the machine configuration by running the following command:
$ oc edit machine -n openshift-machine-api examplecluster-control-plane-2
Delete the following fields in the Machine custom resource, and then save the updated file:

finalizers:
- machine.machine.openshift.io
Example output
machine.machine.openshift.io/examplecluster-control-plane-2 edited
Verify that the machine was deleted by running the following command:
$ oc get machines -n openshift-machine-api -o wide
Example output
NAME                             PHASE     TYPE   REGION   ZONE   AGE     NODE                        PROVIDERID                                                                                              STATE
examplecluster-control-plane-0   Running                          3h11m   openshift-control-plane-0   baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e   externally provisioned
examplecluster-control-plane-1   Running                          3h11m   openshift-control-plane-1   baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1   externally provisioned
examplecluster-compute-0         Running                          165m    openshift-compute-0         baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f         provisioned
examplecluster-compute-1         Running                          165m    openshift-compute-1         baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9         provisioned
Verify that the node has been deleted by running the following command:
$ oc get nodes

Example output
NAME                        STATUS   ROLES    AGE     VERSION
openshift-control-plane-0   Ready    master   3h24m   v1.30.3
openshift-control-plane-1   Ready    master   3h24m   v1.30.3
openshift-compute-0         Ready    worker   176m    v1.30.3
openshift-compute-1         Ready    worker   176m    v1.30.3
Create the new BareMetalHost object and the secret to store the BMC credentials:

$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: openshift-control-plane-2-bmc-secret
  namespace: openshift-machine-api
data:
  password: <password>
  username: <username>
type: Opaque
---
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: openshift-control-plane-2
  namespace: openshift-machine-api
spec:
  automatedCleaningMode: disabled
  bmc:
    address: redfish://10.46.61.18:443/redfish/v1/Systems/1
    credentialsName: openshift-control-plane-2-bmc-secret
    disableCertificateVerification: true
  bootMACAddress: 48:df:37:b0:8a:a0
  bootMode: UEFI
  externallyProvisioned: false
  online: true
  rootDeviceHints:
    deviceName: /dev/disk/by-id/scsi-<serial_number>
  userData:
    name: master-user-data-managed
    namespace: openshift-machine-api
EOF
Note: The username and password can be found from the other bare metal host’s secrets. The protocol to use in bmc:address can be taken from other bmh objects.

Important: If you reuse the BareMetalHost object definition from an existing control plane host, do not leave the externallyProvisioned field set to true.

Existing control plane BareMetalHost objects may have the externallyProvisioned flag set to true if they were provisioned by the OpenShift Container Platform installation program.

After the inspection is complete, the BareMetalHost object is created and available to be provisioned.

Verify the creation process using available BareMetalHost objects:

$ oc get bmh -n openshift-machine-api

Example output
NAME                        STATE                    CONSUMER                         ONLINE   ERROR   AGE
openshift-control-plane-0   externally provisioned   examplecluster-control-plane-0   true             4h48m
openshift-control-plane-1   externally provisioned   examplecluster-control-plane-1   true             4h48m
openshift-control-plane-2   available                examplecluster-control-plane-3   true             47m
openshift-compute-0         provisioned              examplecluster-compute-0         true             4h48m
openshift-compute-1         provisioned              examplecluster-compute-1         true             4h48m
Verify that a new machine has been created:
$ oc get machines -n openshift-machine-api -o wide
Example output
NAME                             PHASE     TYPE   REGION   ZONE   AGE     NODE                        PROVIDERID                                                                                              STATE
examplecluster-control-plane-0   Running                          3h11m   openshift-control-plane-0   baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e   externally provisioned 1
examplecluster-control-plane-1   Running                          3h11m   openshift-control-plane-1   baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1   externally provisioned
examplecluster-control-plane-2   Running                          3h11m   openshift-control-plane-2   baremetalhost:///openshift-machine-api/openshift-control-plane-2/3354bdac-61d8-410f-be5b-6a395b056135   externally provisioned
examplecluster-compute-0         Running                          165m    openshift-compute-0         baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f         provisioned
examplecluster-compute-1         Running                          165m    openshift-compute-1         baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9         provisioned
- 1
- The new control plane machine for the unhealthy node is being created and is ready after the phase changes from Provisioning to Running.
It should take a few minutes for the new machine to be created. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state.
Verify that the bare metal host becomes provisioned and that no errors are reported by running the following command:

$ oc get bmh -n openshift-machine-api

Example output
NAME                        STATE                    CONSUMER                         ONLINE   ERROR   AGE
openshift-control-plane-0   externally provisioned   examplecluster-control-plane-0   true             4h48m
openshift-control-plane-1   externally provisioned   examplecluster-control-plane-1   true             4h48m
openshift-control-plane-2   provisioned              examplecluster-control-plane-3   true             47m
openshift-compute-0         provisioned              examplecluster-compute-0         true             4h48m
openshift-compute-1         provisioned              examplecluster-compute-1         true             4h48m
Verify that the new node is added and in a ready state by running this command:
$ oc get nodes
Example output
NAME                        STATUS   ROLES    AGE     VERSION
openshift-control-plane-0   Ready    master   4h26m   v1.30.3
openshift-control-plane-1   Ready    master   4h26m   v1.30.3
openshift-control-plane-2   Ready    master   12m     v1.30.3
openshift-compute-0         Ready    worker   3h58m   v1.30.3
openshift-compute-1         Ready    worker   3h58m   v1.30.3
Turn the quorum guard back on by entering the following command:
$ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}'
You can verify that the unsupportedConfigOverrides section is removed from the object by entering this command:

$ oc get etcd/cluster -oyaml
If you are using single-node OpenShift, restart the node. Otherwise, you might encounter the following error in the etcd cluster Operator:
Example output
EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets "etcd-peer-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-metrics-sno-0": the object has been modified; please apply your changes to the latest version and try again]
Verification
Verify that all etcd pods are running properly.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc -n openshift-etcd get pods -l k8s-app=etcd
Example output
etcd-openshift-control-plane-0   5/5   Running   0   105m
etcd-openshift-control-plane-1   5/5   Running   0   107m
etcd-openshift-control-plane-2   5/5   Running   0   103m
If the output from the previous command lists only two pods, you can manually force an etcd redeployment. In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge 1
- 1
- The forceRedeploymentReason value must be unique, which is why a timestamp is appended.
To verify there are exactly three etcd members, connect to the running etcd container, passing in the name of a pod that was not on the affected node. In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc rsh -n openshift-etcd etcd-openshift-control-plane-0
View the member list:
sh-4.2# etcdctl member list -w table
Example output
+------------------+---------+---------------------------+----------------------------+----------------------------+------------+
| ID               | STATUS  | NAME                      | PEER ADDRS                 | CLIENT ADDRS               | IS LEARNER |
+------------------+---------+---------------------------+----------------------------+----------------------------+------------+
| 7a8197040a5126c8 | started | openshift-control-plane-2 | https://192.168.10.11:2380 | https://192.168.10.11:2379 | false      |
| 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380 | https://192.168.10.10:2379 | false      |
| cc3830a72fc357f9 | started | openshift-control-plane-0 | https://192.168.10.9:2380  | https://192.168.10.9:2379  | false      |
+------------------+---------+---------------------------+----------------------------+----------------------------+------------+
Note: If the output from the previous command lists more than three etcd members, you must carefully remove the unwanted member.
Verify that all etcd members are healthy by running the following command:
# etcdctl endpoint health --cluster
Example output
https://192.168.10.10:2379 is healthy: successfully committed proposal: took = 8.973065ms
https://192.168.10.9:2379 is healthy: successfully committed proposal: took = 11.559829ms
https://192.168.10.11:2379 is healthy: successfully committed proposal: took = 11.665203ms
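Optionally, you can also review per-member database size, leader, and revision information; a supplementary check that uses a standard etcdctl subcommand:
# etcdctl endpoint status --cluster -w table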
Validate that all nodes are at the latest revision by running the following command:
$ oc get etcd -o=jsonpath='{range.items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'
AllNodesAtLatestRevision
5.2.5. Additional resources
5.3. Disaster recovery
5.3.1. About disaster recovery
The disaster recovery documentation provides information for administrators on how to recover from several disaster situations that might occur with their OpenShift Container Platform cluster. As an administrator, you might need to follow one or more of the following procedures to return your cluster to a working state.
Disaster recovery requires you to have at least one healthy control plane host.
- Restoring to a previous cluster state
This solution handles situations where you want to restore your cluster to a previous state, for example, if an administrator deletes something critical. This also includes situations where you have lost the majority of your control plane hosts, leading to etcd quorum loss and the cluster going offline. As long as you have taken an etcd backup, you can follow this procedure to restore your cluster to a previous state.
If applicable, you might also need to recover from expired control plane certificates.
WarningRestoring to a previous cluster state is a destructive and destabilizing action to take on a running cluster. This procedure should only be used as a last resort.
Prior to performing a restore, see About restoring cluster state for more information on the impact to the cluster.
NoteIf you have a majority of your control plane hosts still available and have an etcd quorum, then follow the procedure to replace a single unhealthy etcd member.
- Recovering from expired control plane certificates
- This solution handles situations where your control plane certificates have expired. For example, if you shut down your cluster before the first certificate rotation, which occurs 24 hours after installation, your certificates will not be rotated and will expire. You can follow this procedure to recover from expired control plane certificates.
5.3.2. Restoring to a previous cluster state
To restore the cluster to a previous state, you must have previously backed up etcd data by creating a snapshot. You will use this snapshot to restore the cluster state.
5.3.2.1. About restoring cluster state
You can use an etcd backup to restore your cluster to a previous state. This can be used to recover from the following situations:
- The cluster has lost the majority of control plane hosts (quorum loss).
- An administrator has deleted something critical and must restore to recover the cluster.
Restoring to a previous cluster state is a destructive and destabilizing action to take on a running cluster. This should only be used as a last resort.
If you are able to retrieve data using the Kubernetes API server, then etcd is available and you should not restore using an etcd backup.
Restoring etcd effectively takes a cluster back in time and all clients will experience a conflicting, parallel history. This can impact the behavior of watching components like kubelets, Kubernetes controller managers, persistent volume controllers, and OpenShift operators, including the network operator.
It can cause Operator churn when the content in etcd does not match the actual content on disk, causing Operators for the Kubernetes API server, Kubernetes controller manager, Kubernetes scheduler, and etcd to get stuck when files on disk conflict with content in etcd. This can require manual actions to resolve the issues.
In extreme cases, the cluster can lose track of persistent volumes, delete critical workloads that no longer exist, reimage machines, and rewrite CA bundles with expired certificates.
5.3.2.2. Restoring to a previous cluster state
You can use a saved etcd backup to restore a previous cluster state or restore a cluster that has lost the majority of control plane hosts.
If your cluster uses a control plane machine set, see "Troubleshooting the control plane machine set" for a simpler etcd recovery procedure.
When you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OpenShift Container Platform 4.7.2 cluster must use an etcd backup that was taken from 4.7.2.
Prerequisites
-
Access to the cluster as a user with the cluster-admin role through a certificate-based kubeconfig file, like the one that was used during installation.
- A healthy control plane host to use as the recovery host.
- SSH access to control plane hosts.
-
A backup directory containing both the etcd snapshot and the resources for the static pods, which were from the same backup. The file names in the directory must be in the following formats: snapshot_<datetimestamp>.db and static_kuberesources_<datetimestamp>.tar.gz.
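Before you start the restore, you can confirm that both files are present in the backup directory; a minimal check, using a hypothetical directory path and the timestamp format described above:
$ ls -1 <backup_directory>
snapshot_<datetimestamp>.db
static_kuberesources_<datetimestamp>.tar.gz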
For non-recovery control plane nodes, it is not required to establish SSH connectivity or to stop the static pods. You can delete and recreate other non-recovery control plane machines, one by one.
Procedure
- Select a control plane host to use as the recovery host. This is the host that you will run the restore operation on.
Establish SSH connectivity to each of the control plane nodes, including the recovery host.
kube-apiserver becomes inaccessible after the restore process starts, so you cannot access the control plane nodes. For this reason, it is recommended to establish SSH connectivity to each control plane host in a separate terminal.
ImportantIf you do not complete this step, you will not be able to access the control plane hosts to complete the restore procedure, and you will be unable to recover your cluster from this state.
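For example, from a workstation that has network access to the cluster, you might open one terminal per control plane host; the key path and host name here are hypothetical placeholders:
$ ssh -i <ssh_key_path> core@<control_plane_host>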
Copy the etcd backup directory to the recovery control plane host.
This procedure assumes that you copied the backup directory containing the etcd snapshot and the resources for the static pods to the /home/core/ directory of your recovery control plane host.
Stop the static pods on any other control plane nodes.
NoteYou do not need to stop the static pods on the recovery host.
- Access a control plane host that is not the recovery host.
Move the existing etcd pod file out of the kubelet manifest directory by running:
$ sudo mv -v /etc/kubernetes/manifests/etcd-pod.yaml /tmp
Verify that the etcd pods are stopped by using:
$ sudo crictl ps | grep etcd | egrep -v "operator|etcd-guard"
If the output of this command is not empty, wait a few minutes and check again.
Move the existing kube-apiserver file out of the kubelet manifest directory by running:
$ sudo mv -v /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp
Verify that the kube-apiserver containers are stopped by running:
$ sudo crictl ps | grep kube-apiserver | egrep -v "operator|guard"
If the output of this command is not empty, wait a few minutes and check again.
Move the existing kube-controller-manager file out of the kubelet manifest directory by using:
$ sudo mv -v /etc/kubernetes/manifests/kube-controller-manager-pod.yaml /tmp
Verify that the kube-controller-manager containers are stopped by running:
$ sudo crictl ps | grep kube-controller-manager | egrep -v "operator|guard"
If the output of this command is not empty, wait a few minutes and check again.
Move the existing kube-scheduler file out of the kubelet manifest directory by using:
$ sudo mv -v /etc/kubernetes/manifests/kube-scheduler-pod.yaml /tmp
Verify that the kube-scheduler containers are stopped by using:
$ sudo crictl ps | grep kube-scheduler | egrep -v "operator|guard"
If the output of this command is not empty, wait a few minutes and check again.
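If you prefer to move all four manifests in one pass, the following shell sketch is equivalent to the individual mv commands above; you must still run the crictl checks afterward to confirm that the containers have stopped:
$ for manifest in etcd-pod kube-apiserver-pod kube-controller-manager-pod kube-scheduler-pod; do
    sudo mv -v /etc/kubernetes/manifests/${manifest}.yaml /tmp
  done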
Move the etcd data directory to a different location with the following example:
$ sudo mv -v /var/lib/etcd/ /tmp
If the /etc/kubernetes/manifests/keepalived.yaml file exists and the node is deleted, follow these steps:
Move the /etc/kubernetes/manifests/keepalived.yaml file out of the kubelet manifest directory:
$ sudo mv -v /etc/kubernetes/manifests/keepalived.yaml /tmp
Verify that any containers managed by the keepalived daemon are stopped:
$ sudo crictl ps --name keepalived
The output of this command should be empty. If it is not empty, wait a few minutes and check again.
Check if the control plane has any Virtual IPs (VIPs) assigned to it:
$ ip -o address | egrep '<api_vip>|<ingress_vip>'
For each reported VIP, run the following command to remove it:
$ sudo ip address del <reported_vip> dev <reported_vip_device>
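For example, with hypothetical values for the VIP and interface name:
$ sudo ip address del 192.168.10.5/32 dev ens3   # hypothetical VIP and device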
- Repeat this step on each of the other control plane hosts that is not the recovery host.
- Access the recovery control plane host.
If the keepalived daemon is in use, verify that the recovery control plane node owns the VIP:
$ ip -o address | grep <api_vip>
The address of the VIP is highlighted in the output if it exists. This command returns an empty string if the VIP is not set or configured incorrectly.
If the cluster-wide proxy is enabled, be sure that you have exported the NO_PROXY, HTTP_PROXY, and HTTPS_PROXY environment variables.
TipYou can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml. The proxy is enabled if the httpProxy, httpsProxy, and noProxy fields have values set.
Run the restore script on the recovery control plane host and pass in the path to the etcd backup directory:
$ sudo -E /usr/local/bin/cluster-restore.sh /home/core/assets/backup
Example script output
...stopping kube-scheduler-pod.yaml ...stopping kube-controller-manager-pod.yaml ...stopping etcd-pod.yaml ...stopping kube-apiserver-pod.yaml Waiting for container etcd to stop .complete Waiting for container etcdctl to stop .............................complete Waiting for container etcd-metrics to stop complete Waiting for container kube-controller-manager to stop complete Waiting for container kube-apiserver to stop ..........................................................................................complete Waiting for container kube-scheduler to stop complete Moving etcd data-dir /var/lib/etcd/member to /var/lib/etcd-backup starting restore-etcd static pod starting kube-apiserver-pod.yaml static-pod-resources/kube-apiserver-pod-7/kube-apiserver-pod.yaml starting kube-controller-manager-pod.yaml static-pod-resources/kube-controller-manager-pod-7/kube-controller-manager-pod.yaml starting kube-scheduler-pod.yaml static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml
The cluster-restore.sh script must show that the etcd, kube-apiserver, kube-controller-manager, and kube-scheduler pods are stopped and then started at the end of the restore process.
NoteThe restore process can cause nodes to enter the NotReady state if the node certificates were updated after the last etcd backup.
Check the nodes to ensure they are in the Ready state.
Run the following command:
$ oc get nodes -w
Sample output
NAME                STATUS   ROLES          AGE     VERSION
host-172-25-75-28   Ready    master         3d20h   v1.30.3
host-172-25-75-38   Ready    infra,worker   3d20h   v1.30.3
host-172-25-75-40   Ready    master         3d20h   v1.30.3
host-172-25-75-65   Ready    master         3d20h   v1.30.3
host-172-25-75-74   Ready    infra,worker   3d20h   v1.30.3
host-172-25-75-79   Ready    worker         3d20h   v1.30.3
host-172-25-75-86   Ready    worker         3d20h   v1.30.3
host-172-25-75-98   Ready    infra,worker   3d20h   v1.30.3
It can take several minutes for all nodes to report their state.
If any nodes are in the NotReady state, log in to the nodes and remove all of the PEM files from the /var/lib/kubelet/pki directory on each node. You can SSH into the nodes or use the terminal window in the web console.
$ ssh -i <ssh-key-path> core@<master-hostname>
Sample pki directory
sh-4.4# pwd
/var/lib/kubelet/pki
sh-4.4# ls
kubelet-client-2022-04-28-11-24-09.pem  kubelet-server-2022-04-28-11-24-15.pem
kubelet-client-current.pem              kubelet-server-current.pem
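A minimal sketch of clearing the directory on an affected node; after you restart the kubelet in the next step, it requests new certificates through CSRs:
$ sudo rm -f /var/lib/kubelet/pki/*.pem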
Restart the kubelet service on all control plane hosts.
From the recovery host, run:
$ sudo systemctl restart kubelet.service
- Repeat this step on all other control plane hosts.
Approve the pending Certificate Signing Requests (CSRs):
NoteClusters with no worker nodes, such as single-node clusters or clusters consisting of three schedulable control plane nodes, will not have any pending CSRs to approve. You can skip all the commands listed in this step.
Get the list of current CSRs by running:
$ oc get csr
Example output
NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2s94x 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 1 csr-4bd6t 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 2 csr-4hl85 13m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 3 csr-zhhhp 3m8s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 4 ...
Review the details of a CSR to verify that it is valid by running:
$ oc describe csr <csr_name> 1
- 1
<csr_name>
is the name of a CSR from the list of current CSRs.
Approve each valid node-bootstrapper CSR by running:
$ oc adm certificate approve <csr_name>
For user-provisioned installations, approve each valid kubelet serving CSR by running:
$ oc adm certificate approve <csr_name>
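If many CSRs are pending, you can approve them in bulk after you have reviewed them; a sketch that approves every CSR that does not yet have a status:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve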
Verify that the single member control plane has started successfully.
From the recovery host, verify that the etcd container is running by using:
$ sudo crictl ps | grep etcd | egrep -v "operator|etcd-guard"
Example output
3ad41b7908e32 36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009 About a minute ago Running etcd 0 7c05f8af362f0
From the recovery host, verify that the etcd pod is running by using:
$ oc -n openshift-etcd get pods -l k8s-app=etcd
Example output
NAME                                READY   STATUS    RESTARTS   AGE
etcd-ip-10-0-143-125.ec2.internal   1/1     Running   1          2m47s
If the status is Pending, or the output lists more than one running etcd pod, wait a few minutes and check again.
If you are using the OVNKubernetes network plugin, you must restart the ovnkube-control-plane pods.
Delete all of the ovnkube-control-plane pods by running:
$ oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-control-plane
Verify that all of the ovnkube-control-plane pods were redeployed by using:
$ oc -n openshift-ovn-kubernetes get pod -l app=ovnkube-control-plane
If you are using the OVN-Kubernetes network plugin, restart the Open Virtual Network (OVN) Kubernetes pods on all the nodes one by one. Use the following steps to restart OVN-Kubernetes pods on each node:
ImportantRestart OVN-Kubernetes pods in the following order:
- The recovery control plane host
- The other control plane hosts (if available)
- The other nodes
NoteValidating and mutating admission webhooks can reject pods. If you add any additional webhooks with the failurePolicy set to Fail, then they can reject pods and the restoration process can fail. You can avoid this by saving and deleting webhooks while restoring the cluster state. After the cluster state is restored successfully, you can enable the webhooks again.
Alternatively, you can temporarily set the failurePolicy to Ignore while restoring the cluster state. After the cluster state is restored successfully, you can set the failurePolicy to Fail.
Remove the northbound database (nbdb) and southbound database (sbdb). Access the recovery host and the remaining control plane nodes by using Secure Shell (SSH) and run:
$ sudo rm -f /var/lib/ovn-ic/etc/*.db
Restart the Open vSwitch services. Access the node by using Secure Shell (SSH) and run the following command:
$ sudo systemctl restart ovs-vswitchd ovsdb-server
Delete the ovnkube-node pod on the node by running the following command, replacing <node> with the name of the node that you are restarting:
$ oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-node --field-selector=spec.nodeName==<node>
Verify that the ovnkube-node pod is running again with:
$ oc -n openshift-ovn-kubernetes get pod -l app=ovnkube-node --field-selector=spec.nodeName==<node>
NoteIt might take several minutes for the pods to restart.
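After you restart the pods on every node, you can confirm that one ovnkube-node pod is running per node; a quick check that adds node placement to the listing:
$ oc -n openshift-ovn-kubernetes get pods -l app=ovnkube-node -o wide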
Delete and re-create other non-recovery control plane machines, one by one. After the machines are re-created, a new revision is forced and etcd automatically scales up.
If you use a user-provisioned bare metal installation, you can re-create a control plane machine by using the same method that you used to originally create it. For more information, see "Installing a user-provisioned cluster on bare metal".
WarningDo not delete and re-create the machine for the recovery host.
If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps:
WarningDo not delete and re-create the machine for the recovery host.
For bare metal installations on installer-provisioned infrastructure, control plane machines are not re-created. For more information, see "Replacing a bare-metal control plane node".
Obtain the machine for one of the lost control plane hosts.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc get machines -n openshift-machine-api -o wide
Example output:
NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running
- 1
- This is the control plane machine for the lost control plane host,
ip-10-0-131-183.ec2.internal
.
Delete the machine of the lost control plane host by running:
$ oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1
- 1
- Specify the name of the control plane machine for the lost control plane host.
A new machine is automatically provisioned after deleting the machine of the lost control plane host.
Verify that a new machine has been created by running:
$ oc get machines -n openshift-machine-api -o wide
Example output:
NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-173-171.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running
- 1
- The new machine, clustername-8qw5l-master-3, is being created and is ready after the phase changes from Provisioning to Running.
It might take a few minutes for the new machine to be created. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state.
- Repeat these steps for each lost control plane host that is not the recovery host.
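Optionally, you can wait for each replacement node to report Ready before continuing; a sketch with a hypothetical node name and timeout:
$ oc wait --for=condition=Ready node/<new_node_name> --timeout=15m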
Turn off the quorum guard by entering:
$ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}'
This command ensures that you can successfully re-create secrets and roll out the static pods.
In a separate terminal window on the recovery host, export the recovery kubeconfig file by running:
$ export KUBECONFIG=/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost-recovery.kubeconfig
Force etcd redeployment.
In the same terminal window where you exported the recovery kubeconfig file, run:
$ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge 1
- 1
- The
forceRedeploymentReason
value must be unique, which is why a timestamp is appended.
The etcd redeployment starts.
When the etcd cluster Operator performs a redeployment, the existing nodes are started with new pods similar to the initial bootstrap scale-up.
Turn the quorum guard back on by entering:
$ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}'
You can verify that the unsupportedConfigOverrides section is removed from the object by running:
$ oc get etcd/cluster -oyaml
Verify all nodes are updated to the latest revision.
In a terminal that has access to the cluster as a cluster-admin user, run:
$ oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'
Review the NodeInstallerProgressing status condition for etcd to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:
AllNodesAtLatestRevision
3 nodes are at revision 7 1
- 1
- In this example, the latest revision number is
7
.
If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.
After etcd is redeployed, force new rollouts for the control plane. kube-apiserver will reinstall itself on the other nodes because the kubelet is connected to API servers through an internal load balancer.
will reinstall itself on the other nodes because the kubelet is connected to API servers using an internal load balancer.In a terminal that has access to the cluster as a
cluster-admin
user, run:Force a new rollout for
kube-apiserver
:$ oc patch kubeapiserver cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
Verify all nodes are updated to the latest revision.
$ oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'
Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:
AllNodesAtLatestRevision
3 nodes are at revision 7 1
- 1
- In this example, the latest revision number is
7
.
If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.
Force a new rollout for the Kubernetes controller manager by running the following command:
$ oc patch kubecontrollermanager cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
Verify all nodes are updated to the latest revision by running:
$ oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'
Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:
AllNodesAtLatestRevision
3 nodes are at revision 7 1
- 1
- In this example, the latest revision number is
7
.
If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.
Force a new rollout for the kube-scheduler by running:
$ oc patch kubescheduler cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
Verify all nodes are updated to the latest revision by using:
$ oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'
Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:
AllNodesAtLatestRevision
3 nodes are at revision 7 1
- 1
- In this example, the latest revision number is
7
.
If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.
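Because the same revision check is repeated for each control plane component, you can also run all three checks in one pass; a shell sketch that reuses the commands above:
$ for resource in kubeapiserver kubecontrollermanager kubescheduler; do
    echo "${resource}:"
    oc get ${resource} -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'
  done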
Monitor the platform Operators by running:
$ oc adm wait-for-stable-cluster
This process can take up to 15 minutes.
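You can also watch the individual cluster Operators converge; each Operator should eventually report Available=True, Progressing=False, and Degraded=False:
$ oc get clusteroperators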
Verify that all control plane hosts have started and joined the cluster.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc -n openshift-etcd get pods -l k8s-app=etcd
Example output
etcd-ip-10-0-143-125.ec2.internal   2/2   Running   0   9h
etcd-ip-10-0-154-194.ec2.internal   2/2   Running   0   9h
etcd-ip-10-0-173-171.ec2.internal   2/2   Running   0   9h
To ensure that all workloads return to normal operation following a recovery procedure, restart all control plane nodes.
After you complete the previous procedural steps, you might need to wait a few minutes for all services to return to their restored state. For example, authentication by using oc login might not work immediately until the OAuth server pods are restarted.
Consider using the system:admin kubeconfig file for immediate authentication. This method bases its authentication on SSL/TLS client certificates rather than OAuth tokens. You can authenticate with this file by issuing the following command:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig
Issue the following command to display your authenticated user name:
$ oc whoami
5.3.2.3. Additional resources
5.3.2.4. Issues and workarounds for restoring a persistent storage state
If your OpenShift Container Platform cluster uses persistent storage of any form, some of the cluster's state is typically stored outside etcd. It might be an Elasticsearch cluster running in a pod or a database running in a StatefulSet object. When you restore from an etcd backup, the status of the workloads in OpenShift Container Platform is also restored. However, if the etcd snapshot is old, the status might be invalid or outdated.
The contents of persistent volumes (PVs) are never part of the etcd snapshot. When you restore an OpenShift Container Platform cluster from an etcd snapshot, non-critical workloads might gain access to critical data, or vice-versa.
The following are some example scenarios that produce an out-of-date status:
- A MySQL database is running in a pod backed by a PV object. Restoring OpenShift Container Platform from an etcd snapshot does not bring back the volume on the storage provider, and does not produce a running MySQL pod, despite the pod repeatedly attempting to start. You must manually restore this pod by restoring the volume on the storage provider, and then editing the PV to point to the new volume.
- Pod P1 is using volume A, which is attached to node X. If the etcd snapshot is taken while another pod uses the same volume on node Y, then when the etcd restore is performed, pod P1 might not be able to start correctly due to the volume still being attached to node Y. OpenShift Container Platform is not aware of the attachment, and does not automatically detach it. When this occurs, the volume must be manually detached from node Y so that the volume can attach on node X, and then pod P1 can start.
- Cloud provider or storage provider credentials were updated after the etcd snapshot was taken. This causes any CSI drivers or Operators that depend on those credentials to not work. You might have to manually update the credentials required by those drivers or Operators.
A device is removed or renamed from OpenShift Container Platform nodes after the etcd snapshot is taken. The Local Storage Operator creates symlinks for each PV that it manages from the /dev/disk/by-id or /dev directories. This situation might cause the local PVs to refer to devices that no longer exist.
To fix this problem, an administrator must complete the following steps, as shown in the sketch after this list:
- Manually remove the PVs with invalid devices.
- Remove symlinks from respective nodes.
-
Delete LocalVolume or LocalVolumeSet objects (see Storage → Configuring persistent storage → Persistent storage using local volumes → Deleting the Local Storage Operator Resources).
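A hedged sketch of this cleanup, assuming a hypothetical PV name, node name, and the default /mnt/local-storage symlink directory used by the Local Storage Operator; adjust every name for your environment:
$ oc delete pv <pv_name>
$ oc debug node/<node_name>
sh-4.4# chroot /host
sh-4.4# rm /mnt/local-storage/<storage_class_name>/<device_symlink>   # hypothetical symlink path
$ oc delete localvolume -n openshift-local-storage <local_volume_name>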
5.3.3. Recovering from expired control plane certificates
5.3.3.1. Recovering from expired control plane certificates
The cluster can automatically recover from expired control plane certificates.
However, you must manually approve the pending node-bootstrapper
certificate signing requests (CSRs) to recover kubelet certificates. For user-provisioned installations, you might also need to approve pending kubelet serving CSRs.
Use the following steps to approve the pending CSRs:
Procedure
Get the list of current CSRs:
$ oc get csr
Example output
NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2s94x 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 1 csr-4bd6t 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending csr-4hl85 13m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 2 csr-zhhhp 3m8s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ...
Review the details of a CSR to verify that it is valid:
$ oc describe csr <csr_name> 1
- 1
<csr_name>
is the name of a CSR from the list of current CSRs.
Approve each valid node-bootstrapper CSR:
$ oc adm certificate approve <csr_name>
For user-provisioned installations, approve each valid kubelet serving CSR:
$ oc adm certificate approve <csr_name>