Backup and restore
Backing up and restoring your OpenShift Container Platform cluster
Abstract
This document provides instructions for backing up your cluster's data and for recovering from various disaster scenarios.
Chapter 1. Backing up etcd
etcd is the key-value store for OpenShift Container Platform, which persists the state of all resource objects.
Back up your cluster's etcd data regularly and store it in a secure location, ideally outside the OpenShift Container Platform environment. Do not take an etcd backup before the first certificate rotation completes, which occurs 24 hours after installation; otherwise, the backup will contain expired certificates. It is also recommended to take etcd backups during non-peak usage hours, because the backup is a blocking action.
Be sure to take an etcd backup after you upgrade your cluster. This is important because when you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OpenShift Container Platform 4.5.2 cluster must use an etcd backup that was taken from 4.5.2.
Back up your cluster’s etcd data by performing a single invocation of the backup script on a master host. Do not take a backup for each master host.
After you have an etcd backup, you can restore to a previous cluster state.
You can perform the etcd data backup process on any master host that has a running etcd instance.
1.1. Backing up etcd data
Follow these steps to back up etcd data by creating an etcd snapshot and backing up the resources for the static pods. This backup can be saved and used at a later time if you need to restore etcd.
Only save a backup from a single master host. Do not take a backup from each master host in the cluster.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have checked whether the cluster-wide proxy is enabled.
Tip: You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml. The proxy is enabled if the httpProxy, httpsProxy, and noProxy fields have values set.
Procedure
Start a debug session for a master node:
$ oc debug node/<node_name>
Change your root directory to the host:
sh-4.2# chroot /host
- If the cluster-wide proxy is enabled, be sure that you have exported the NO_PROXY, HTTP_PROXY, and HTTPS_PROXY environment variables.
Run the cluster-backup.sh script and pass in the location to save the backup to.
Tip: The cluster-backup.sh script is maintained as a component of the etcd Cluster Operator and is a wrapper around the etcdctl snapshot save command.
sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup
In this example, two files are created in the /home/core/assets/backup/ directory on the master host:
- snapshot_<datetimestamp>.db: This file is the etcd snapshot.
- static_kuberesources_<datetimestamp>.tar.gz: This file contains the resources for the static pods. If etcd encryption is enabled, it also contains the encryption keys for the etcd snapshot.
Note: If etcd encryption is enabled, it is recommended to store this second file separately from the etcd snapshot for security reasons. However, this file is required in order to restore from the etcd snapshot.
Keep in mind that etcd encryption only encrypts values, not keys. This means that resource types, namespaces, and object names are unencrypted.
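Because the backup should be stored outside the OpenShift Container Platform environment, you can copy the backup directory off the master host once the script completes. The following is a minimal sketch, not part of the documented procedure, assuming SSH access to the node as the core user; <master_node> and the destination directory are placeholders:
$ scp -r core@<master_node>:/home/core/assets/backup ./etcd-backup/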
Chapter 2. Replacing an unhealthy etcd member
This document describes the process to replace a single unhealthy etcd member.
This process depends on whether the etcd member is unhealthy because the machine is not running or the node is not ready, or whether it is unhealthy because the etcd pod is crashlooping.
If you have lost the majority of your master hosts, leading to etcd quorum loss, then you must follow the disaster recovery procedure to restore to a previous cluster state instead of this procedure.
If the control plane certificates are not valid on the member being replaced, then you must follow the procedure to recover from expired control plane certificates instead of this procedure.
If a master node is lost and a new one is created, the etcd cluster Operator handles generating the new TLS certificates and adding the node as an etcd member.
2.1. Prerequisites
- Take an etcd backup prior to replacing an unhealthy etcd member.
2.2. Identifying an unhealthy etcd member
You can identify if your cluster has an unhealthy etcd member.
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
Procedure
Check the status of the EtcdMembersAvailable status condition using the following command:
$ oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="EtcdMembersAvailable")]}{.message}{"\n"}'
Review the output:
2 of 3 members are available, ip-10-0-131-183.ec2.internal is unhealthy
This example output shows that the ip-10-0-131-183.ec2.internal etcd member is unhealthy.
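As an optional supplementary check, not part of the documented procedure, you can also review the etcd cluster Operator status; a degraded condition there often accompanies an unhealthy member:
$ oc get clusteroperator etcd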
2.3. Determining the state of the unhealthy etcd member
The steps to replace an unhealthy etcd member depend on which of the following states your etcd member is in:
- The machine is not running or the node is not ready
- The etcd pod is crashlooping
This procedure determines which state your etcd member is in. This enables you to know which procedure to follow to replace the unhealthy etcd member.
If you are aware that the machine is not running or the node is not ready, but you expect it to return to a healthy state soon, then you do not need to perform a procedure to replace the etcd member. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have identified an unhealthy etcd member.
Procedure
Determine if the machine is not running:
$ oc get machines -A -ojsonpath='{range .items[*]}{@.status.nodeRef.name}{"\t"}{@.status.providerStatus.instanceState}{"\n"}' | grep -v running
Example output
ip-10-0-131-183.ec2.internal  stopped
This output lists the node and the status of the node's machine. If the status is anything other than running, then the machine is not running.
If the machine is not running, then follow the Replacing an unhealthy etcd member whose machine is not running or whose node is not ready procedure.
Determine if the node is not ready.
If either of the following scenarios are true, then the node is not ready.
If the machine is running, then check whether the node is unreachable:
$ oc get nodes -o jsonpath='{range .items[*]}{"\n"}{.metadata.name}{"\t"}{range .spec.taints[*]}{.key}{" "}' | grep unreachable
Example output
ip-10-0-131-183.ec2.internal  node-role.kubernetes.io/master  node.kubernetes.io/unreachable  node.kubernetes.io/unreachable
If the node is listed with an unreachable taint, then the node is not ready.
If the node is still reachable, then check whether the node is listed as NotReady:
$ oc get nodes -l node-role.kubernetes.io/master | grep "NotReady"
Example output
ip-10-0-131-183.ec2.internal  NotReady  master  122m  v1.18.3
If the node is listed as NotReady, then the node is not ready.
If the node is not ready, then follow the Replacing an unhealthy etcd member whose machine is not running or whose node is not ready procedure.
Determine if the etcd pod is crashlooping.
If the machine is running and the node is ready, then check whether the etcd pod is crashlooping.
Verify that all master nodes are listed as Ready:
$ oc get nodes -l node-role.kubernetes.io/master
Example output
NAME                           STATUS   ROLES    AGE     VERSION
ip-10-0-131-183.ec2.internal   Ready    master   6h13m   v1.18.3
ip-10-0-164-97.ec2.internal    Ready    master   6h13m   v1.18.3
ip-10-0-154-204.ec2.internal   Ready    master   6h13m   v1.18.3
Check whether the status of an etcd pod is either Error or CrashLoopBackOff:
$ oc get pods -n openshift-etcd | grep etcd
Example output
etcd-ip-10-0-131-183.ec2.internal                2/3     Error     7          6h9m
etcd-ip-10-0-164-97.ec2.internal                 3/3     Running   0          6h6m
etcd-ip-10-0-154-204.ec2.internal                3/3     Running   0          6h6m
Because the status of this pod is Error, the etcd pod is crashlooping.
If the etcd pod is crashlooping, then follow the Replacing an unhealthy etcd member whose etcd pod is crashlooping procedure.
2.4. Replacing the unhealthy etcd member
Depending on the state of your unhealthy etcd member, use one of the following procedures:
2.4.1. Replacing an unhealthy etcd member whose machine is not running or whose node is not ready
This procedure details the steps to replace an etcd member that is unhealthy either because the machine is not running or because the node is not ready.
Prerequisites
- You have identified the unhealthy etcd member.
- You have verified that either the machine is not running or the node is not ready.
- You have access to the cluster as a user with the cluster-admin role.
- You have taken an etcd backup.
Important: It is important to take an etcd backup before performing this procedure so that your cluster can be restored if you encounter any issues.
Procedure
Remove the unhealthy member.
Choose a pod that is not on the affected node:
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc get pods -n openshift-etcd | grep etcd
Example output
etcd-ip-10-0-131-183.ec2.internal                3/3     Running     0          123m
etcd-ip-10-0-164-97.ec2.internal                 3/3     Running     0          123m
etcd-ip-10-0-154-204.ec2.internal                3/3     Running     0          124m
Connect to the running etcd container, passing in the name of a pod that is not on the affected node:
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal
View the member list:
sh-4.2# etcdctl member list -w table
Take note of the ID and the name of the unhealthy etcd member, because these values are needed later in the procedure.
Remove the unhealthy etcd member by providing the ID to the etcdctl member remove command:
sh-4.2# etcdctl member remove 6fc1e7c9db35841d
Example output
Member 6fc1e7c9db35841d removed from cluster baa565c8919b060e
View the member list again and verify that the member was removed:
sh-4.2# etcdctl member list -w table
You can now exit the node shell.
Remove the old secrets for the unhealthy etcd member that was removed.
List the secrets for the unhealthy etcd member that was removed.
$ oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal
Pass in the name of the unhealthy etcd member that you took note of earlier in this procedure.
There is a peer, serving, and metrics secret as shown in the following output:
Example output
etcd-peer-ip-10-0-131-183.ec2.internal              kubernetes.io/tls   2      47m
etcd-serving-ip-10-0-131-183.ec2.internal           kubernetes.io/tls   2      47m
etcd-serving-metrics-ip-10-0-131-183.ec2.internal   kubernetes.io/tls   2      47m
Delete the secrets for the unhealthy etcd member that was removed.
Delete the peer secret:
$ oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal
Delete the serving secret:
$ oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal
Delete the metrics secret:
$ oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal
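If you prefer a single step, the three deletions can be combined into one loop. This is a minimal sketch rather than the documented commands; replace the node name with the name of your unhealthy etcd member:
$ for secret in etcd-peer etcd-serving etcd-serving-metrics; do oc delete secret -n openshift-etcd ${secret}-ip-10-0-131-183.ec2.internal; done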
Delete and recreate the master machine. After this machine is recreated, a new revision is forced and etcd scales up automatically.
If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps. Otherwise, you must create the new master using the same method that was used to originally create it.
Obtain the machine for the unhealthy member.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc get machines -n openshift-machine-api -o wide
In the output, identify the master machine for the unhealthy node, ip-10-0-131-183.ec2.internal.
Save the machine configuration to a file on your file system:
$ oc get machine clustername-8qw5l-master-0 \
    -n openshift-machine-api \
    -o yaml \
    > new-master-machine.yaml
Specify the name of the master machine for the unhealthy node.
Edit the new-master-machine.yaml file that was created in the previous step to assign a new name and remove unnecessary fields.
Remove the entire status section.
Change the metadata.name field to a new name.
It is recommended to keep the same base name as the old machine and change the ending number to the next available number. In this example, clustername-8qw5l-master-0 is changed to clustername-8qw5l-master-3.
Update the metadata.selfLink field to use the new machine name from the previous step.
Remove the spec.providerID field:
providerID: aws:///us-east-1a/i-0fdb85790d76d0c3f
Remove the metadata.annotations and metadata.generation fields:
annotations:
  machine.openshift.io/instance-state: running
...
generation: 2
Remove the metadata.resourceVersion and metadata.uid fields:
resourceVersion: "13291"
uid: a282eb70-40a2-4e89-8009-d05dd420d31a
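As an optional shortcut, not part of the documented procedure, these edits can be scripted if yq v4 is available on your workstation. The metadata.selfLink update remains a manual edit, and the new machine name below is the example value from this procedure; review the resulting file before applying it:
$ yq eval -i 'del(.status) | del(.spec.providerID) | del(.metadata.annotations) | del(.metadata.generation) | del(.metadata.resourceVersion) | del(.metadata.uid)' new-master-machine.yaml
$ yq eval -i '.metadata.name = "clustername-8qw5l-master-3"' new-master-machine.yaml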
Delete the machine of the unhealthy member:
$ oc delete machine -n openshift-machine-api clustername-8qw5l-master-0
Specify the name of the master machine for the unhealthy node.
Verify that the machine was deleted:
$ oc get machines -n openshift-machine-api -o wide
Create the new machine using the new-master-machine.yaml file:
$ oc apply -f new-master-machine.yaml
Verify that the new machine has been created:
$ oc get machines -n openshift-machine-api -o wide
The new machine, clustername-8qw5l-master-3, is being created and is ready once the phase changes from Provisioning to Running.
It might take a few minutes for the new machine to be created. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state.
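To follow progress without rerunning the previous command, you can optionally watch the machine list; the -w flag streams updates until you interrupt it:
$ oc get machines -n openshift-machine-api -w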
Verification
Verify that all etcd pods are running properly.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc get pods -n openshift-etcd | grep etcd
Example output
etcd-ip-10-0-133-53.ec2.internal                 3/3     Running     0          7m49s
etcd-ip-10-0-164-97.ec2.internal                 3/3     Running     0          123m
etcd-ip-10-0-154-204.ec2.internal                3/3     Running     0          124m
If the output from the previous command only lists two pods, you can manually force an etcd redeployment. In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
The forceRedeploymentReason value must be unique, which is why a timestamp is appended.
Verify that there are exactly three etcd members.
Connect to the running etcd container, passing in the name of a pod that was not on the affected node:
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal
View the member list:
sh-4.2# etcdctl member list -w table
If the output from the previous command lists more than three etcd members, you must carefully remove the unwanted member.
Warning: Be sure to remove the correct etcd member; removing a good etcd member might lead to quorum loss.
2.4.2. Replacing an unhealthy etcd member whose etcd pod is crashlooping
This procedure details the steps to replace an etcd member that is unhealthy because the etcd pod is crashlooping.
Prerequisites
- You have identified the unhealthy etcd member.
- You have verified that the etcd pod is crashlooping.
- You have access to the cluster as a user with the cluster-admin role.
- You have taken an etcd backup.
Important: It is important to take an etcd backup before performing this procedure so that your cluster can be restored if you encounter any issues.
Procedure
Stop the crashlooping etcd pod.
Debug the node that is crashlooping.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc debug node/ip-10-0-131-183.ec2.internal
Replace this with the name of the unhealthy node.
Change your root directory to the host:
sh-4.2# chroot /host
Move the existing etcd pod file out of the kubelet manifest directory:
sh-4.2# mkdir /var/lib/etcd-backup
sh-4.2# mv /etc/kubernetes/manifests/etcd-pod.yaml /var/lib/etcd-backup/
Move the etcd data directory to a different location:
sh-4.2# mv /var/lib/etcd/ /tmp
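Optionally, you can confirm that the kubelet has stopped the etcd containers on this node. This is the same check that is used later in the restore procedure, and its output should eventually be empty:
sh-4.2# crictl ps | grep etcd | grep -v operator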
You can now exit the node shell.
Remove the unhealthy member.
Choose a pod that is not on the affected node.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc get pods -n openshift-etcd | grep etcd
Example output
etcd-ip-10-0-131-183.ec2.internal                2/3     Error       7          6h9m
etcd-ip-10-0-164-97.ec2.internal                 3/3     Running     0          6h6m
etcd-ip-10-0-154-204.ec2.internal                3/3     Running     0          6h6m
Connect to the running etcd container, passing in the name of a pod that is not on the affected node.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal
View the member list:
sh-4.2# etcdctl member list -w table
Take note of the ID and the name of the unhealthy etcd member, because these values are needed later in the procedure.
Remove the unhealthy etcd member by providing the ID to the etcdctl member remove command:
sh-4.2# etcdctl member remove 62bcf33650a7170a
Example output
Member 62bcf33650a7170a removed from cluster ead669ce1fbfb346
View the member list again and verify that the member was removed:
sh-4.2# etcdctl member list -w table
You can now exit the node shell.
Remove the old secrets for the unhealthy etcd member that was removed.
List the secrets for the unhealthy etcd member that was removed.
$ oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal
Pass in the name of the unhealthy etcd member that you took note of earlier in this procedure.
There is a peer, serving, and metrics secret as shown in the following output:
Example output
etcd-peer-ip-10-0-131-183.ec2.internal              kubernetes.io/tls   2      47m
etcd-serving-ip-10-0-131-183.ec2.internal           kubernetes.io/tls   2      47m
etcd-serving-metrics-ip-10-0-131-183.ec2.internal   kubernetes.io/tls   2      47m
Delete the secrets for the unhealthy etcd member that was removed.
Delete the peer secret:
$ oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal
Delete the serving secret:
$ oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal
Delete the metrics secret:
$ oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal
Force etcd redeployment.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "single-master-recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
The forceRedeploymentReason value must be unique, which is why a timestamp is appended.
When the etcd cluster Operator performs a redeployment, it ensures that all master nodes have a functioning etcd pod.
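You can track the rollout with the same NodeInstallerProgressing check that is used in the restore procedure later in this document; all nodes should eventually report the latest revision:
$ oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'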
Verification
Verify that the new member is available and healthy.
Connect to the running etcd container again.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal
Verify that all members are healthy:
sh-4.2# etcdctl endpoint health --cluster
Example output
https://10.0.131.183:2379 is healthy: successfully committed proposal: took = 16.671434ms
https://10.0.154.204:2379 is healthy: successfully committed proposal: took = 16.698331ms
https://10.0.164.97:2379 is healthy: successfully committed proposal: took = 16.621645ms
Chapter 3. Shutting down the cluster gracefully
This document describes the process to gracefully shut down your cluster. You might need to temporarily shut down your cluster for maintenance reasons, or to save on resource costs.
3.1. Prerequisites
- Take an etcd backup prior to shutting down the cluster.
3.2. Shutting down the cluster
You can shut down your cluster in a graceful manner so that it can be restarted at a later date.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have taken an etcd backup.
Important: It is important to take an etcd backup before performing this procedure so that your cluster can be restored if you encounter any issues when restarting the cluster.
Procedure
Shut down all of the nodes in the cluster. You can do this from your cloud provider's web console, or you can use the following commands:
Obtain the list of nodes:
$ nodes=$(oc get nodes -o jsonpath='{.items[*].metadata.name}')
Shut down all of the nodes:
$ for node in ${nodes[@]}
  do
    echo "==== Shut down $node ===="
    ssh core@$node sudo shutdown -h 1
  done
Shutting down the nodes using one of these methods allows pods to terminate gracefully, which reduces the chance for data corruption.
Note: It is not necessary to drain master nodes of the standard pods that ship with OpenShift Container Platform prior to shutdown.
Cluster administrators are responsible for ensuring a clean restart of their own workloads after the cluster is restarted. If you drained master nodes prior to shutdown because of custom workloads, you must mark the master nodes as schedulable before the cluster will be functional again after restart.
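If you need to mark drained master nodes as schedulable again, a minimal sketch using oc adm uncordon, where <master_node> is a placeholder for each drained master node:
$ oc adm uncordon <master_node>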
- Shut off any cluster dependencies that are no longer needed, such as external storage or an LDAP server. Be sure to consult your vendor’s documentation before doing so.
Additional resources
- See Restarting the cluster gracefully for how to restart the cluster after it has been shut down.
Chapter 4. Restarting the cluster gracefully
This document describes the process to restart your cluster after a graceful shutdown.
Even though the cluster is expected to be functional after the restart, the cluster might not recover due to unexpected conditions, for example:
- etcd data corruption during shutdown
- Node failure due to hardware
- Network connectivity issues
If your cluster fails to recover, follow the steps to restore to a previous cluster state.
4.1. Prerequisites
- You have gracefully shut down your cluster.
4.2. Restarting the cluster
You can restart your cluster after it has been shut down gracefully.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- This procedure assumes that you gracefully shut down the cluster.
Procedure
- Power on any cluster dependencies, such as external storage or an LDAP server.
Start all cluster machines.
Use the appropriate method for your cloud environment to start the machines, for example, from your cloud provider’s web console.
Wait approximately 10 minutes before continuing to check the status of master nodes.
Verify that all master nodes are ready.
$ oc get nodes -l node-role.kubernetes.io/master
The master nodes are ready if the status is Ready, as shown in the following output:
NAME                           STATUS   ROLES    AGE   VERSION
ip-10-0-168-251.ec2.internal   Ready    master   75m   v1.18.3
ip-10-0-170-223.ec2.internal   Ready    master   75m   v1.18.3
ip-10-0-211-16.ec2.internal    Ready    master   75m   v1.18.3
If the master nodes are not ready, then check whether there are any pending certificate signing requests (CSRs) that must be approved.
Get the list of current CSRs:
$ oc get csr
Review the details of a CSR to verify that it is valid:
$ oc describe csr <csr_name>
<csr_name> is the name of a CSR from the list of current CSRs.
Approve each valid CSR:
$ oc adm certificate approve <csr_name>
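If many CSRs are pending, they can be approved in bulk. This convenience command is not part of the documented procedure and approves every pending CSR, so review the list first:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve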
After the master nodes are ready, verify that all worker nodes are ready.
$ oc get nodes -l node-role.kubernetes.io/worker
The worker nodes are ready if the status is Ready, as shown in the following output:
NAME                           STATUS   ROLES    AGE   VERSION
ip-10-0-179-95.ec2.internal    Ready    worker   64m   v1.18.3
ip-10-0-182-134.ec2.internal   Ready    worker   64m   v1.18.3
ip-10-0-250-100.ec2.internal   Ready    worker   64m   v1.18.3
If the worker nodes are not ready, then check whether there are any pending certificate signing requests (CSRs) that must be approved.
Get the list of current CSRs:
$ oc get csr
Review the details of a CSR to verify that it is valid:
$ oc describe csr <csr_name>
<csr_name> is the name of a CSR from the list of current CSRs.
Approve each valid CSR:
$ oc adm certificate approve <csr_name>
Verify that the cluster started properly.
Check that there are no degraded cluster Operators.
$ oc get clusteroperators
Check that there are no cluster Operators with the DEGRADED condition set to True.
Check that all nodes are in the Ready state:
$ oc get nodes
Check that the status for all nodes is Ready.
If the cluster did not start properly, you might need to restore your cluster using an etcd backup.
Additional resources
- See Restoring to a previous cluster state for how to use an etcd backup to restore if your cluster failed to recover after restarting.
Chapter 5. Disaster recovery
5.1. About disaster recovery
The disaster recovery documentation provides information for administrators on how to recover from several disaster situations that might occur with their OpenShift Container Platform cluster. As an administrator, you might need to follow one or more of the following procedures in order to return your cluster to a working state.
- Restoring to a previous cluster state
This solution handles situations where you want to restore your cluster to a previous state, for example, if an administrator deletes something critical. This also includes situations where you have lost the majority of your master hosts, leading to etcd quorum loss and the cluster going offline. As long as you have taken an etcd backup, you can follow this procedure to restore your cluster to a previous state.
If applicable, you might also need to recover from expired control plane certificates.
Note: If you have a majority of your masters still available and have an etcd quorum, then follow the procedure to replace a single unhealthy etcd member.
- Recovering from expired control plane certificates
- This solution handles situations where your control plane certificates have expired. For example, if you shut down your cluster before the first certificate rotation, which occurs 24 hours after installation, your certificates will not be rotated and will expire. You can follow this procedure to recover from expired control plane certificates.
5.2. Recovering from lost master hosts
As of OpenShift Container Platform 4.4, follow the procedure to restore to a previous cluster state in order to recover from lost master hosts.
If you have a majority of your masters still available and have an etcd quorum, then follow the procedure to replace a single unhealthy etcd member.
5.3. Restoring to a previous cluster state
To restore the cluster to a previous state, you must have previously backed up etcd data by creating a snapshot. You will use this snapshot to restore the cluster state.
5.3.1. Restoring to a previous cluster state
You can use a saved etcd backup to restore back to a previous cluster state. You use the etcd backup to restore a single control plane host. Then the etcd cluster Operator handles scaling to the remaining master hosts.
When you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OpenShift Container Platform 4.5.2 cluster must use an etcd backup that was taken from 4.5.2.
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
- SSH access to master hosts.
- A backup directory containing both the etcd snapshot and the resources for the static pods, which were from the same backup. The file names in the directory must be in the following formats: snapshot_<datetimestamp>.db and static_kuberesources_<datetimestamp>.tar.gz.
Procedure
- Select a control plane host to use as the recovery host. This is the host that you will run the restore operation on.
Establish SSH connectivity to each of the control plane nodes, including the recovery host.
The Kubernetes API server becomes inaccessible after the restore process starts, so you cannot access the control plane nodes. For this reason, it is recommended to establish SSH connectivity to each control plane host in a separate terminal.
Important: If you do not complete this step, you will not be able to access the master hosts to complete the restore procedure, and you will be unable to recover your cluster from this state.
Copy the etcd backup directory to the recovery control plane host.
This procedure assumes that you copied the backup directory containing the etcd snapshot and the resources for the static pods to the /home/core/ directory of your recovery control plane host.
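A minimal sketch of copying the backup directory to the recovery host, assuming the directory is named backup on your workstation and that SSH access as the core user is available; <recovery_host> is a placeholder:
$ scp -r ./backup core@<recovery_host>:/home/core/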
Stop the static pods on all other control plane nodes.
Note: It is not required to manually stop the pods on the recovery host. The recovery script will stop the pods on the recovery host.
- Access a control plane host that is not the recovery host.
Move the existing etcd pod file out of the kubelet manifest directory:
[core@ip-10-0-154-194 ~]$ sudo mv /etc/kubernetes/manifests/etcd-pod.yaml /tmp
Verify that the etcd pods are stopped.
[core@ip-10-0-154-194 ~]$ sudo crictl ps | grep etcd | grep -v operator
The output of this command should be empty. If it is not empty, wait a few minutes and check again.
Move the existing Kubernetes API server pod file out of the kubelet manifest directory:
[core@ip-10-0-154-194 ~]$ sudo mv /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp
Verify that the Kubernetes API server pods are stopped.
[core@ip-10-0-154-194 ~]$ sudo crictl ps | grep kube-apiserver | grep -v operator
The output of this command should be empty. If it is not empty, wait a few minutes and check again.
Move the etcd data directory to a different location:
[core@ip-10-0-154-194 ~]$ sudo mv /var/lib/etcd/ /tmp
- Repeat this step on each of the other master hosts that is not the recovery host.
- Access the recovery control plane host.
If the cluster-wide proxy is enabled, be sure that you have exported the NO_PROXY, HTTP_PROXY, and HTTPS_PROXY environment variables.
Tip: You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml. The proxy is enabled if the httpProxy, httpsProxy, and noProxy fields have values set.
Run the restore script on the recovery control plane host and pass in the path to the etcd backup directory:
[core@ip-10-0-143-125 ~]$ sudo -E /usr/local/bin/cluster-restore.sh /home/core/backup
Restart the kubelet service on all master hosts.
From the recovery host, run the following command:
[core@ip-10-0-143-125 ~]$ sudo systemctl restart kubelet.service
- Repeat this step on all other master hosts.
Verify that the single member control plane has started successfully.
From the recovery host, verify that the etcd container is running.
[core@ip-10-0-143-125 ~]$ sudo crictl ps | grep etcd | grep -v operator
Example output
3ad41b7908e32  36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009  About a minute ago  Running  etcd  0  7c05f8af362f0
Copy to Clipboard Copied! Toggle word wrap Toggle overflow From the recovery host, verify that the etcd pod is running.
[core@ip-10-0-143-125 ~]$ oc get pods -n openshift-etcd | grep etcd
Note: If you attempt to run oc login prior to running this command and receive the following error, wait a few moments for the authentication controllers to start and try again.
Unable to connect to the server: EOF
Example output
NAME                                READY   STATUS    RESTARTS   AGE
etcd-ip-10-0-143-125.ec2.internal   1/1     Running   1          2m47s
If the status is Pending, or the output lists more than one running etcd pod, wait a few minutes and check again.
Force etcd redeployment.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
The forceRedeploymentReason value must be unique, which is why a timestamp is appended.
When the etcd cluster Operator performs a redeployment, the existing nodes are started with new pods similar to the initial bootstrap scale up.
Verify all nodes are updated to the latest revision.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'
Review the NodeInstallerProgressing status condition for etcd to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:
AllNodesAtLatestRevision
3 nodes are at revision 7
In this example, the latest revision number is 7.
If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.
After etcd is redeployed, force new rollouts for the control plane. The Kubernetes API server will reinstall itself on the other nodes because the kubelet is connected to API servers using an internal load balancer.
In a terminal that has access to the cluster as a cluster-admin user, run the following commands.
Update the kubeapiserver:
$ oc patch kubeapiserver cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
Verify all nodes are updated to the latest revision.
$ oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'
Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:
AllNodesAtLatestRevision
3 nodes are at revision 7
In this example, the latest revision number is 7.
If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.
Update the kubecontrollermanager:
$ oc patch kubecontrollermanager cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
Verify all nodes are updated to the latest revision.
$ oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'
Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:
AllNodesAtLatestRevision
3 nodes are at revision 7
In this example, the latest revision number is 7.
If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.
Update the kubescheduler:
$ oc patch kubescheduler cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
Verify all nodes are updated to the latest revision.
$ oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'
Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:
AllNodesAtLatestRevision
3 nodes are at revision 7
In this example, the latest revision number is 7.
If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.
Verify that all master hosts have started and joined the cluster.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc get pods -n openshift-etcd | grep etcd
Example output
etcd-ip-10-0-143-125.ec2.internal                2/2     Running     0          9h
etcd-ip-10-0-154-194.ec2.internal                2/2     Running     0          9h
etcd-ip-10-0-173-171.ec2.internal                2/2     Running     0          9h
Note that it might take several minutes after completing this procedure for all services to be restored. For example, authentication by using oc login
might not immediately work until the OAuth server pods are restarted.
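As an optional supplementary check, not part of the documented procedure, you can watch the OAuth server pods come back in the authentication namespace:
$ oc get pods -n openshift-authentication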
5.4. Recovering from expired control plane certificates
As of OpenShift Container Platform 4.4.8, the cluster can automatically recover from expired control plane certificates. You no longer need to perform the manual steps that were required in previous versions.
The exception is that you must manually approve the pending node-bootstrapper
certificate signing requests (CSRs) to recover kubelet certificates.
Use the following steps to approve the pending node-bootstrapper
CSRs.
Procedure
Get the list of current CSRs:
$ oc get csr
Review the details of a CSR to verify that it is valid:
$ oc describe csr <csr_name>
<csr_name> is the name of a CSR from the list of current CSRs.
Approve each valid node-bootstrapper CSR:
$ oc adm certificate approve <csr_name>
Legal Notice
Copyright © 2025 Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.