Backup and restore
Backing up and restoring your OpenShift Container Platform cluster
Abstract
Chapter 1. Backing up etcd
etcd is the key-value store for OpenShift Container Platform, which persists the state of all resource objects.
Back up your cluster’s etcd data regularly and store it in a secure location, ideally outside the OpenShift Container Platform environment. Do not take an etcd backup before the first certificate rotation completes, which occurs 24 hours after installation; otherwise the backup will contain expired certificates. It is also recommended to take etcd backups during non-peak usage hours, because the backup is a blocking action.
Be sure to take an etcd backup after you upgrade your cluster. This is important because when you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OpenShift Container Platform 4.4.2 cluster must use an etcd backup that was taken from 4.4.2.
Back up your cluster’s etcd data by performing a single invocation of the backup script on a master host. Do not take a backup for each master host.
After you have an etcd backup, you can restore to a previous cluster state.
You can perform the etcd data backup process on any master host that has a running etcd instance.
1.1. Backing up etcd data
Follow these steps to back up etcd data by creating an etcd snapshot and backing up the resources for the static pods. This backup can be saved and used at a later time if you need to restore etcd.
Only save a backup from a single master host. Do not take a backup from each master host in the cluster.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have checked whether the cluster-wide proxy is enabled.

Tip: You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml. The proxy is enabled if the httpProxy, httpsProxy, and noProxy fields have values set.
Procedure
Start a debug session for a master node:

$ oc debug node/<node_name>

Change your root directory to the host:

sh-4.2# chroot /host
If the cluster-wide proxy is enabled, be sure that you have exported the NO_PROXY, HTTP_PROXY, and HTTPS_PROXY environment variables.

Run the cluster-backup.sh script and pass in the location to save the backup to. An example invocation is shown below.
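A minimal example, assuming the backup script is installed at its usual location on the node, /usr/local/bin/cluster-backup.sh, and using the backup directory referenced in the example that follows:

sh-4.2# /usr/local/bin/cluster-backup.sh /home/core/assets/backup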
In this example, two files are created in the /home/core/assets/backup/ directory on the master host:

- snapshot_<datetimestamp>.db: This file is the etcd snapshot.
- static_kuberesources_<datetimestamp>.tar.gz: This file contains the resources for the static pods. If etcd encryption is enabled, it also contains the encryption keys for the etcd snapshot.

Note: If etcd encryption is enabled, it is recommended to store this second file separately from the etcd snapshot for security reasons. However, this file is required in order to restore from the etcd snapshot.

Keep in mind that etcd encryption only encrypts values, not keys. This means that resource types, namespaces, and object names are unencrypted.
Chapter 2. Replacing an unhealthy etcd member
This document describes the process to replace a single unhealthy etcd member.
This process depends on whether the etcd member is unhealthy because the machine is not running or the node is not ready, or whether it is unhealthy because the etcd pod is crashlooping.
If you have lost the majority of your master hosts, leading to etcd quorum loss, then you must follow the disaster recovery procedure to restore to a previous cluster state instead of this procedure.
If the control plane certificates are not valid on the member being replaced, then you must follow the procedure to recover from expired control plane certificates instead of this procedure.
If a master node is lost and a new one is created, the etcd cluster Operator handles generating the new TLS certificates and adding the node as an etcd member.
2.1. Prerequisites
- Take an etcd backup prior to replacing an unhealthy etcd member.
2.2. Identifying an unhealthy etcd member
You can identify if your cluster has an unhealthy etcd member.
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
Procedure
Check the status of the EtcdMembersAvailable status condition using the following command:

$ oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="EtcdMembersAvailable")]}{.message}{"\n"}'

Review the output:

2 of 3 members are available, ip-10-0-131-183.ec2.internal is unhealthy

This example output shows that the ip-10-0-131-183.ec2.internal etcd member is unhealthy.
2.3. Determining the state of the unhealthy etcd member
The steps to replace an unhealthy etcd member depend on which of the following states your etcd member is in:
- The machine is not running or the node is not ready
- The etcd pod is crashlooping
This procedure determines which state your etcd member is in. This enables you to know which procedure to follow to replace the unhealthy etcd member.
If you are aware that the machine is not running or the node is not ready, but you expect it to return to a healthy state soon, then you do not need to perform a procedure to replace the etcd member. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have identified an unhealthy etcd member.
Procedure
Determine if the machine is not running:

$ oc get machines -A -ojsonpath='{range .items[*]}{@.status.nodeRef.name}{"\t"}{@.status.providerStatus.instanceState}{"\n"}' | grep -v running

Example output:

ip-10-0-131-183.ec2.internal  stopped

This output lists the node and the status of the node’s machine. If the status is anything other than running, then the machine is not running.
If the machine is not running, then follow the Replacing an unhealthy etcd member whose machine is not running or whose node is not ready procedure.
Determine if the node is not ready.

If either of the following scenarios is true, then the node is not ready.

If the machine is running, then check whether the node is unreachable:

$ oc get nodes -o jsonpath='{range .items[*]}{"\n"}{.metadata.name}{"\t"}{range .spec.taints[*]}{.key}{" "}' | grep unreachable

Example output:

ip-10-0-131-183.ec2.internal   node-role.kubernetes.io/master node.kubernetes.io/unreachable node.kubernetes.io/unreachable

If the node is listed with an unreachable taint, then the node is not ready.
If the node is still reachable, then check whether the node is listed as NotReady:

$ oc get nodes -l node-role.kubernetes.io/master | grep "NotReady"

Example output:

ip-10-0-131-183.ec2.internal   NotReady   master   122m   v1.17.1

If the node is listed as NotReady, then the node is not ready.
If the node is not ready, then follow the Replacing an unhealthy etcd member whose machine is not running or whose node is not ready procedure.
Determine if the etcd pod is crashlooping.

If the machine is running and the node is ready, then check whether the etcd pod is crashlooping.

Verify that all master nodes are listed as Ready:

$ oc get nodes -l node-role.kubernetes.io/master

Example output:

NAME                           STATUS   ROLES    AGE     VERSION
ip-10-0-131-183.ec2.internal   Ready    master   6h13m   v1.17.1
ip-10-0-164-97.ec2.internal    Ready    master   6h13m   v1.17.1
ip-10-0-154-204.ec2.internal   Ready    master   6h13m   v1.17.1

Check whether the status of an etcd pod is either Error or CrashloopBackoff:

$ oc get pods -n openshift-etcd | grep etcd

Example output:

etcd-ip-10-0-131-183.ec2.internal   2/3   Error     7   6h9m
etcd-ip-10-0-164-97.ec2.internal    3/3   Running   0   6h6m
etcd-ip-10-0-154-204.ec2.internal   3/3   Running   0   6h6m

Because the status of this pod is Error, the etcd pod is crashlooping.
If the etcd pod is crashlooping, then follow the Replacing an unhealthy etcd member whose etcd pod is crashlooping procedure.
2.4. Replacing the unhealthy etcd member
Depending on the state of your unhealthy etcd member, use one of the following procedures:

- Replacing an unhealthy etcd member whose machine is not running or whose node is not ready
- Replacing an unhealthy etcd member whose etcd pod is crashlooping

2.4.1. Replacing an unhealthy etcd member whose machine is not running or whose node is not ready

This procedure details the steps to replace an etcd member that is unhealthy either because the machine is not running or because the node is not ready.
Prerequisites
- You have identified the unhealthy etcd member.
- You have verified that either the machine is not running or the node is not ready.
- You have access to the cluster as a user with the cluster-admin role.
- You have taken an etcd backup.

Important: It is important to take an etcd backup before performing this procedure so that your cluster can be restored if you encounter any issues.
Procedure
Remove the unhealthy member.
Choose a pod that is not on the affected node:
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc get pods -n openshift-etcd | grep etcd

Example output:

etcd-ip-10-0-131-183.ec2.internal   3/3   Running   0   123m
etcd-ip-10-0-164-97.ec2.internal    3/3   Running   0   123m
etcd-ip-10-0-154-204.ec2.internal   3/3   Running   0   124m
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Connect to the running etcd container, passing in the name of a pod that is not on the affected node:
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal

View the member list:
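A minimal example, assuming etcdctl is available inside the etcd container (the output includes each member’s ID, status, name, and peer and client URLs):

sh-4.2# etcdctl member list -w table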
Take note of the ID and the name of the unhealthy etcd member, because these values are needed later in the procedure.
Remove the unhealthy etcd member by providing the ID to the etcdctl member remove command:

sh-4.2# etcdctl member remove 6fc1e7c9db35841d

Example output:

Member 6fc1e7c9db35841d removed from cluster baa565c8919b060e
Copy to Clipboard Copied! Toggle word wrap Toggle overflow View the member list again and verify that the member was removed:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow You can now exit the node shell.
Remove the old secrets for the unhealthy etcd member that was removed.
List the secrets for the unhealthy etcd member that was removed, passing in the name of the unhealthy etcd member that you took note of earlier in this procedure:

$ oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal

There is a peer, serving, and metrics secret as shown in the following output:

etcd-peer-ip-10-0-131-183.ec2.internal              kubernetes.io/tls   2   47m
etcd-serving-ip-10-0-131-183.ec2.internal           kubernetes.io/tls   2   47m
etcd-serving-metrics-ip-10-0-131-183.ec2.internal   kubernetes.io/tls   2   47m

Delete the secrets for the unhealthy etcd member that was removed.

Delete the peer secret:

$ oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal

Delete the serving secret:

$ oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal

Delete the metrics secret:

$ oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal
Delete and recreate the master machine. After this machine is recreated, a new revision is forced and etcd scales up automatically.
If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps. Otherwise, you must create the new master using the same method that was used to originally create it.
Obtain the machine for the unhealthy member.

In a terminal that has access to the cluster as a cluster-admin user, list the master machines and identify the machine for the unhealthy node, ip-10-0-131-183.ec2.internal. A sample command is shown below.
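A hedged example (the Machine API objects live in the openshift-machine-api namespace, and the wide output typically includes the backing node name, so you can match the machine to the unhealthy node):

$ oc get machines -n openshift-machine-api -o wide | grep ip-10-0-131-183.ec2.internal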
Save the machine configuration to a file on your file system, specifying the name of the master machine for the unhealthy node:

$ oc get machine clustername-8qw5l-master-0 \
    -n openshift-machine-api \
    -o yaml \
    > new-master-machine.yaml
Edit the new-master-machine.yaml file that was created in the previous step:

Remove the entire status section.

Remove the providerID field:

providerID: aws:///us-east-1a/i-0fdb85790d76d0c3f

Change the name field to a new name.

It is recommended to keep the same base name as the old machine and change the ending number to the next available number. In this example, clustername-8qw5l-master-0 is changed to clustername-8qw5l-master-3.

Update the selfLink field to use the new machine name from the previous step. An illustration of the renamed fields is shown below.
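A hedged illustration of the two renamed fields (the selfLink path assumes the machine.openshift.io/v1beta1 Machine API group and the openshift-machine-api namespace):

metadata:
  ...
  name: clustername-8qw5l-master-3
  selfLink: /apis/machine.openshift.io/v1beta1/namespaces/openshift-machine-api/machines/clustername-8qw5l-master-3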
Delete the machine of the unhealthy member, specifying the name of the master machine for the unhealthy node:

$ oc delete machine -n openshift-machine-api clustername-8qw5l-master-0
Verify that the machine was deleted:
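For example, list the machines again and confirm that the deleted machine no longer appears (the namespace is the same one used above):

$ oc get machines -n openshift-machine-api -o wide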
Create the new machine using the new-master-machine.yaml file:

$ oc apply -f new-master-machine.yaml
Verify that the new machine has been created. The new machine, clustername-8qw5l-master-3, is being created and is ready once the phase changes from Provisioning to Running. A sample check is shown below.
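For example, watch the new machine’s phase (illustrative; the phase column is part of the default machine listing):

$ oc get machines -n openshift-machine-api | grep clustername-8qw5l-master-3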
It might take a few minutes for the new machine to be created. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state.
Verify that all etcd pods are running properly:
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc get pods -n openshift-etcd | grep etcd

Example output:

etcd-ip-10-0-133-53.ec2.internal    3/3   Running   0   7m49s
etcd-ip-10-0-164-97.ec2.internal    3/3   Running   0   123m
etcd-ip-10-0-154-204.ec2.internal   3/3   Running   0   124m

If the output from the previous command only lists two pods, you can manually force an etcd redeployment. In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge

The forceRedeploymentReason value must be unique, which is why a timestamp is appended.
2.4.2. Replacing an unhealthy etcd member whose etcd pod is crashlooping
This procedure details the steps to replace an etcd member that is unhealthy because the etcd pod is crashlooping.
Prerequisites
- You have identified the unhealthy etcd member.
- You have verified that the etcd pod is crashlooping.
- You have access to the cluster as a user with the cluster-admin role.
- You have taken an etcd backup.

Important: It is important to take an etcd backup before performing this procedure so that your cluster can be restored if you encounter any issues.
Procedure
Stop the crashlooping etcd pod.
Debug the node that is crashlooping.
In a terminal that has access to the cluster as a cluster-admin user, run the following command, replacing the node name with the name of the unhealthy node:

$ oc debug node/ip-10-0-131-183.ec2.internal

Change your root directory to the host:

sh-4.2# chroot /host
Move the existing etcd pod file out of the kubelet manifest directory:

sh-4.2# mkdir /var/lib/etcd-backup
sh-4.2# mv /etc/kubernetes/manifests/etcd-pod.yaml /var/lib/etcd-backup/

Move the etcd data directory to a different location:

sh-4.2# mv /var/lib/etcd/ /tmp

You can now exit the node shell.
Remove the unhealthy member.
Choose a pod that is not on the affected node.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc get pods -n openshift-etcd | grep etcd

Example output:

etcd-ip-10-0-131-183.ec2.internal   2/3   Error     7   6h9m
etcd-ip-10-0-164-97.ec2.internal    3/3   Running   0   6h6m
etcd-ip-10-0-154-204.ec2.internal   3/3   Running   0   6h6m
Connect to the running etcd container, passing in the name of a pod that is not on the affected node.

In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal

View the member list:
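As in the previous procedure, a minimal example assuming etcdctl is available inside the etcd container:

sh-4.2# etcdctl member list -w table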
Take note of the ID and the name of the unhealthy etcd member, because these values are needed later in the procedure.
Remove the unhealthy etcd member by providing the ID to the etcdctl member remove command:

sh-4.2# etcdctl member remove 62bcf33650a7170a

Example output:

Member 62bcf33650a7170a removed from cluster ead669ce1fbfb346
Copy to Clipboard Copied! Toggle word wrap Toggle overflow View the member list again and verify that the member was removed:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow You can now exit the node shell.
Remove the old secrets for the unhealthy etcd member that was removed.
List the secrets for the unhealthy etcd member that was removed, passing in the name of the unhealthy etcd member that you took note of earlier in this procedure:

$ oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal

There is a peer, serving, and metrics secret as shown in the following output:

etcd-peer-ip-10-0-131-183.ec2.internal              kubernetes.io/tls   2   47m
etcd-serving-ip-10-0-131-183.ec2.internal           kubernetes.io/tls   2   47m
etcd-serving-metrics-ip-10-0-131-183.ec2.internal   kubernetes.io/tls   2   47m

Delete the secrets for the unhealthy etcd member that was removed.

Delete the peer secret:

$ oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal

Delete the serving secret:

$ oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal

Delete the metrics secret:

$ oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal
Force etcd redeployment.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "single-master-recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge

The forceRedeploymentReason value must be unique, which is why a timestamp is appended.
When the etcd cluster Operator performs a redeployment, it ensures that all master nodes have a functioning etcd pod.
Verify that the new member is available and healthy.
Connect to the running etcd container again.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal

Verify that all members are healthy:

sh-4.2# etcdctl endpoint health --cluster

Example output:

https://10.0.131.183:2379 is healthy: successfully committed proposal: took = 16.671434ms
https://10.0.154.204:2379 is healthy: successfully committed proposal: took = 16.698331ms
https://10.0.164.97:2379 is healthy: successfully committed proposal: took = 16.621645ms
Chapter 3. Disaster recovery
3.1. About disaster recovery
The disaster recovery documentation provides information for administrators on how to recover from several disaster situations that might occur with their OpenShift Container Platform cluster. As an administrator, you might need to follow one or more of the following procedures in order to return your cluster to a working state.
- Restoring to a previous cluster state
This solution handles situations where you want to restore your cluster to a previous state, for example, if an administrator deletes something critical. This also includes situations where you have lost the majority of your master hosts, leading to etcd quorum loss and the cluster going offline. As long as you have taken an etcd backup, you can follow this procedure to restore your cluster to a previous state.
If applicable, you might also need to recover from expired control plane certificates.
Note: If you have a majority of your masters still available and have an etcd quorum, then follow the procedure to replace a single unhealthy etcd member.
- Recovering from expired control plane certificates
- This solution handles situations where your control plane certificates have expired. For example, if you shut down your cluster before the first certificate rotation, which occurs 24 hours after installation, your certificates will not be rotated and will expire. You can follow this procedure to recover from expired control plane certificates.
3.2. Recovering from lost master hosts
As of OpenShift Container Platform 4.4, follow the procedure to restore to a previous cluster state in order to recover from lost master hosts.
If you have a majority of your masters still available and have an etcd quorum, then follow the procedure to replace a single unhealthy etcd member.
3.3. Restoring to a previous cluster state
To restore the cluster to a previous state, you must have previously backed up etcd data by creating a snapshot. You will use this snapshot to restore the cluster state.
3.3.1. Restoring to a previous cluster state
You can use a saved etcd backup to restore back to a previous cluster state. You use the etcd backup to restore a single master host. Then the etcd cluster Operator handles scaling to the remaining master hosts.
When you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OpenShift Container Platform 4.4.2 cluster must use an etcd backup that was taken from 4.4.2.
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
- SSH access to master hosts.
- A backup directory containing both the etcd snapshot and the resources for the static pods, which were from the same backup. The file names in the directory must be in the following formats: snapshot_<datetimestamp>.db and static_kuberesources_<datetimestamp>.tar.gz.
Procedure
- Select a master host to use as the recovery host. This is the host that you will run the restore operation on.
Establish SSH connectivity to each of the master nodes, including the recovery host.
The Kubernetes API server will become inaccessible once the restore process has started, so you cannot access the master nodes. For this reason, it is recommended to establish SSH connectivity to each master host in a separate terminal.
Important: If you do not complete this step, you will not be able to access the master hosts to complete the restore procedure, and you will be unable to recover your cluster from this state.
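For example (illustrative; substitute your SSH key path and the master host’s address, and note that RHCOS nodes use the core user):

$ ssh -i <ssh_key_path> core@<master_host>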
Copy the etcd backup directory to the recovery master host.
This procedure assumes that you copied the backup directory containing the etcd snapshot and the resources for the static pods to the /home/core/ directory of your recovery master host.
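For example, from the machine that holds the backup (illustrative; substitute your SSH key path and the recovery host’s address):

$ scp -i <ssh_key_path> -r backup core@<recovery_host>:/home/core/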
Stop the static pods on all other master nodes.

Note: It is not required to manually stop the pods on the recovery host. The recovery script will stop the pods on the recovery host.
- Access a master host that is not the recovery host.
Move the existing etcd pod file out of the kubelet manifest directory:

[core@ip-10-0-154-194 ~]$ sudo mv /etc/kubernetes/manifests/etcd-pod.yaml /tmp

Verify that the etcd pods are stopped:

[core@ip-10-0-154-194 ~]$ sudo crictl ps | grep etcd | grep -v operator

The output of this command should be empty. If it is not empty, wait a few minutes and check again.

Move the existing Kubernetes API server pod file out of the kubelet manifest directory:

[core@ip-10-0-154-194 ~]$ sudo mv /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp

Verify that the Kubernetes API server pods are stopped:

[core@ip-10-0-154-194 ~]$ sudo crictl ps | grep kube-apiserver | grep -v operator

The output of this command should be empty. If it is not empty, wait a few minutes and check again.

Move the etcd data directory to a different location:

[core@ip-10-0-154-194 ~]$ sudo mv /var/lib/etcd/ /tmp

- Repeat this step on each of the other master hosts that is not the recovery host.
- Access the recovery master host.
If the cluster-wide proxy is enabled, be sure that you have exported the NO_PROXY, HTTP_PROXY, and HTTPS_PROXY environment variables.

Tip: You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml. The proxy is enabled if the httpProxy, httpsProxy, and noProxy fields have values set.

Run the restore script on the recovery master host and pass in the path to the etcd backup directory:
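For example, assuming the restore script is installed at its usual location, /usr/local/bin/cluster-restore.sh, and that the backup directory was copied to /home/core/backup (-E preserves any exported proxy variables for sudo):

[core@ip-10-0-143-125 ~]$ sudo -E /usr/local/bin/cluster-restore.sh /home/core/backup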
Restart the kubelet service on all master hosts.
From the recovery host, run the following command:
[core@ip-10-0-143-125 ~]$ sudo systemctl restart kubelet.service

- Repeat this step on all other master hosts.
Verify that the single member control plane has started successfully.
From the recovery host, verify that the etcd container is running.
[core@ip-10-0-143-125 ~]$ sudo crictl ps | grep etcd | grep -v operator

Example output:

3ad41b7908e32   36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009   About a minute ago   Running   etcd   0   7c05f8af362f0

From the recovery host, verify that the etcd pod is running:

[core@ip-10-0-143-125 ~]$ oc get pods -n openshift-etcd | grep etcd

Example output:

NAME                                READY   STATUS    RESTARTS   AGE
etcd-ip-10-0-143-125.ec2.internal   1/1     Running   1          2m47s

Note: If you attempt to run oc login prior to running this command and receive the following error, wait a few moments for the authentication controllers to start and try again:

Unable to connect to the server: EOF

If the status is Pending, or the output lists more than one running etcd pod, wait a few minutes and check again.
Force etcd redeployment.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge

The forceRedeploymentReason value must be unique, which is why a timestamp is appended.
When the etcd cluster Operator performs a redeployment, the existing nodes are started with new pods similar to the initial bootstrap scale up.
Verify all nodes are updated to the latest revision.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'

Review the NodeInstallerProgressing status condition for etcd to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:

AllNodesAtLatestRevision
3 nodes are at revision 3

If the output shows a message such as 2 nodes are at revision 3; 1 nodes are at revision 4, this means that the update is still in progress. Wait a few minutes and try again.

After etcd is redeployed, force new rollouts for the control plane. The Kubernetes API server will reinstall itself on the other nodes because the kubelet is connected to API servers using an internal load balancer.
In a terminal that has access to the cluster as a cluster-admin user, run the following commands.

Update the kubeapiserver:

$ oc patch kubeapiserver cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge

Verify all nodes are updated to the latest revision:

$ oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'

Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:

AllNodesAtLatestRevision
3 nodes are at revision 3

Update the kubecontrollermanager:

$ oc patch kubecontrollermanager cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge

Verify all nodes are updated to the latest revision:

$ oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'

Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:

AllNodesAtLatestRevision
3 nodes are at revision 3

Update the kubescheduler:

$ oc patch kubescheduler cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge

Verify all nodes are updated to the latest revision:

$ oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'

Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:

AllNodesAtLatestRevision
3 nodes are at revision 3
Verify that all master hosts have started and joined the cluster.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc get pods -n openshift-etcd | grep etcd

Example output:

etcd-ip-10-0-143-125.ec2.internal   2/2   Running   0   9h
etcd-ip-10-0-154-194.ec2.internal   2/2   Running   0   9h
etcd-ip-10-0-173-171.ec2.internal   2/2   Running   0   9h
Note that it might take several minutes after completing this procedure for all services to be restored. For example, authentication by using oc login might not immediately work until the OAuth server pods are restarted.
3.4. Recovering from expired control plane certificates
3.4.1. Recovering from expired control plane certificates
As of OpenShift Container Platform 4.4.8, the cluster can automatically recover from expired control plane certificates. You no longer need to perform the manual steps that were required in previous versions.
The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates.

Use the following steps to approve the pending node-bootstrapper CSRs.
Procedure
Get the list of current CSRs:

$ oc get csr

Review the details of a CSR to verify that it is valid:

$ oc describe csr <csr_name>

<csr_name> is the name of a CSR from the list of current CSRs.

Approve each valid node-bootstrapper CSR:

$ oc adm certificate approve <csr_name>
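If there are many pending CSRs, a hedged one-liner to approve them in bulk (this assumes every CSR in the list is one you intend to approve, so review the list first):

$ oc get csr -o name | xargs oc adm certificate approve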
Legal Notice
Copyright © 2025 Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.