Chapter 2. Replacing an unhealthy etcd member
This document describes the process to replace a single unhealthy etcd member.
This process depends on whether the etcd member is unhealthy because the machine is not running or the node is not ready, or whether it is unhealthy because the etcd pod is crashlooping.
If you have lost the majority of your master hosts, leading to etcd quorum loss, then you must follow the disaster recovery procedure to restore to a previous cluster state instead of this procedure.
If the control plane certificates are not valid on the member being replaced, then you must follow the procedure to recover from expired control plane certificates instead of this procedure.
If a master node is lost and a new one is created, the etcd cluster Operator handles generating the new TLS certificates and adding the node as an etcd member.
2.1. Prerequisites
- Take an etcd backup prior to replacing an unhealthy etcd member.
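For reference, a backup is typically taken by running the cluster backup script on a healthy master node. This is a minimal sketch; the script name and paths can vary between releases, so follow the backup documentation for your version:
$ oc debug node/<master_node>
sh-4.2# chroot /host
sh-4.2# /usr/local/bin/cluster-backup.sh /home/core/assets/backup
The script saves an etcd snapshot and the static pod resources to the specified directory.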
2.2. Identifying an unhealthy etcd member
You can identify whether your cluster has an unhealthy etcd member.
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
Procedure
Check the status of the EtcdMembersAvailable status condition by running the following command:
$ oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="EtcdMembersAvailable")]}{.message}{"\n"}'
Review the output:
2 of 3 members are available, ip-10-0-131-183.ec2.internal is unhealthy
This example output shows that the ip-10-0-131-183.ec2.internal etcd member is unhealthy.
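As an additional check, you can query member health directly from a healthy etcd pod, using the same etcdctl command that appears in the verification steps later in this document (the pod name below is a placeholder):
$ oc rsh -n openshift-etcd <healthy_etcd_pod>
sh-4.2# etcdctl endpoint health --cluster
An unhealthy member typically reports an error or times out instead of returning a successfully committed proposal message.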
2.3. Determining the state of the unhealthy etcd member
The steps to replace an unhealthy etcd member depend on which of the following states your etcd member is in:
- The machine is not running or the node is not ready
- The etcd pod is crashlooping
This procedure determines which state your etcd member is in, so that you know which procedure to follow to replace the unhealthy etcd member.
If you are aware that the machine is not running or the node is not ready, but you expect it to return to a healthy state soon, then you do not need to perform a procedure to replace the etcd member. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state.
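For example, you can confirm that the member has rejoined by periodically re-running the condition check from the previous section:
$ oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="EtcdMembersAvailable")]}{.message}{"\n"}'
When all members have rejoined, the message reports that 3 of 3 members are available.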
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have identified an unhealthy etcd member.
Procedure
Determine if the machine is not running:
$ oc get machines -A -ojsonpath='{range .items[*]}{@.status.nodeRef.name}{"\t"}{@.status.providerStatus.instanceState}{"\n"}' | grep -v running
Example output
ip-10-0-131-183.ec2.internal  stopped
This output lists the node and the status of the node's machine. If the status is anything other than running, then the machine is not running.
If the machine is not running, then follow the Replacing an unhealthy etcd member whose machine is not running or whose node is not ready procedure.
Determine if the node is not ready.
If either of the following scenarios is true, then the node is not ready.
If the machine is running, then check whether the node is unreachable:
$ oc get nodes -o jsonpath='{range .items[*]}{"\n"}{.metadata.name}{"\t"}{range .spec.taints[*]}{.key}{" "}' | grep unreachable
Example output
ip-10-0-131-183.ec2.internal   node-role.kubernetes.io/master   node.kubernetes.io/unreachable   node.kubernetes.io/unreachable
If the node is listed with an unreachable taint, then the node is not ready.
If the node is still reachable, then check whether the node is listed as NotReady:
$ oc get nodes -l node-role.kubernetes.io/master | grep "NotReady"
Example output
ip-10-0-131-183.ec2.internal   NotReady   master   122m   v1.18.3
If the node is listed as NotReady, then the node is not ready.
If the node is not ready, then follow the Replacing an unhealthy etcd member whose machine is not running or whose node is not ready procedure.
Determine if the etcd pod is crashlooping.
If the machine is running and the node is ready, then check whether the etcd pod is crashlooping.
Verify that all master nodes are listed as Ready:
$ oc get nodes -l node-role.kubernetes.io/master
Example output
NAME                           STATUS   ROLES    AGE     VERSION
ip-10-0-131-183.ec2.internal   Ready    master   6h13m   v1.18.3
ip-10-0-164-97.ec2.internal    Ready    master   6h13m   v1.18.3
ip-10-0-154-204.ec2.internal   Ready    master   6h13m   v1.18.3
Check whether the status of an etcd pod is either Error or CrashloopBackoff:
$ oc get pods -n openshift-etcd | grep etcd
Example output
etcd-ip-10-0-131-183.ec2.internal                2/3     Error      7          6h9m
etcd-ip-10-0-164-97.ec2.internal                 3/3     Running    0          6h6m
etcd-ip-10-0-154-204.ec2.internal                3/3     Running    0          6h6m
Since the status of this pod is Error, the etcd pod is crashlooping.
If the etcd pod is crashlooping, then follow the Replacing an unhealthy etcd member whose etcd pod is crashlooping procedure.
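To understand why the pod is crashlooping, it can help to inspect the logs of the failing etcd container first. This is a minimal sketch that assumes the container is named etcd, which is the usual container name in the openshift-etcd static pods:
$ oc logs -n openshift-etcd etcd-ip-10-0-131-183.ec2.internal -c etcd --previous
The --previous flag shows the logs from the last failed container instance.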
2.4. Replacing the unhealthy etcd member
Depending on the state of your unhealthy etcd member, use one of the following procedures:
2.4.1. Replacing an unhealthy etcd member whose machine is not running or whose node is not ready
This procedure details the steps to replace an etcd member that is unhealthy either because the machine is not running or because the node is not ready.
Prerequisites
- You have identified the unhealthy etcd member.
- You have verified that either the machine is not running or the node is not ready.
- You have access to the cluster as a user with the cluster-admin role.
- You have taken an etcd backup.
Important: It is important to take an etcd backup before performing this procedure so that your cluster can be restored if you encounter any issues.
Procedure
Remove the unhealthy member.
Choose a pod that is not on the affected node:
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc get pods -n openshift-etcd | grep etcd
Example output
etcd-ip-10-0-131-183.ec2.internal                3/3     Running     0          123m
etcd-ip-10-0-164-97.ec2.internal                 3/3     Running     0          123m
etcd-ip-10-0-154-204.ec2.internal                3/3     Running     0          124m
Connect to the running etcd container, passing in the name of a pod that is not on the affected node:
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal
View the member list:
sh-4.2# etcdctl member list -w table
Example output
Take note of the ID and the name of the unhealthy etcd member, because these values are needed later in the procedure.
Remove the unhealthy etcd member by providing the ID to the etcdctl member remove command:
sh-4.2# etcdctl member remove 6fc1e7c9db35841d
Example output
Member 6fc1e7c9db35841d removed from cluster baa565c8919b060e
View the member list again and verify that the member was removed:
sh-4.2# etcdctl member list -w table
Example output
You can now exit the node shell.
Remove the old secrets for the unhealthy etcd member that was removed.
List the secrets for the unhealthy etcd member that was removed.
$ oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal
Pass in the name of the unhealthy etcd member that you took note of earlier in this procedure.
There is a peer, serving, and metrics secret as shown in the following output:
Example output
etcd-peer-ip-10-0-131-183.ec2.internal              kubernetes.io/tls   2   47m
etcd-serving-ip-10-0-131-183.ec2.internal           kubernetes.io/tls   2   47m
etcd-serving-metrics-ip-10-0-131-183.ec2.internal   kubernetes.io/tls   2   47m
Delete the secrets for the unhealthy etcd member that was removed.
Delete the peer secret:
$ oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal
Delete the serving secret:
$ oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal
Delete the metrics secret:
$ oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal
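Equivalently, you can delete all three secrets with a single command. This is only a convenience and uses the same secret names listed above:
$ oc delete secret -n openshift-etcd \
    etcd-peer-ip-10-0-131-183.ec2.internal \
    etcd-serving-ip-10-0-131-183.ec2.internal \
    etcd-serving-metrics-ip-10-0-131-183.ec2.internal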
Delete and recreate the master machine. After this machine is recreated, a new revision is forced and etcd scales up automatically.
If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps. Otherwise, you must create the new master using the same method that was used to originally create it.
Obtain the machine for the unhealthy member.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc get machines -n openshift-machine-api -o wide
Example output
In the output, identify the master machine for the unhealthy node, ip-10-0-131-183.ec2.internal.
Save the machine configuration to a file on your file system:
$ oc get machine clustername-8qw5l-master-0 \
    -n openshift-machine-api \
    -o yaml \
    > new-master-machine.yaml
Specify the name of the master machine for the unhealthy node.
Edit the new-master-machine.yaml file that was created in the previous step to assign a new name and remove unnecessary fields. An abbreviated sketch of the edited file is shown after these steps.
Remove the entire status section.
Change the metadata.name field to a new name. It is recommended to keep the same base name as the old machine and change the ending number to the next available number. In this example, clustername-8qw5l-master-0 is changed to clustername-8qw5l-master-3.
Update the metadata.selfLink field to use the new machine name from the previous step.
Remove the spec.providerID field:
providerID: aws:///us-east-1a/i-0fdb85790d76d0c3f
Remove the metadata.annotations and metadata.generation fields:
annotations:
  machine.openshift.io/instance-state: running
...
generation: 2
Remove the metadata.resourceVersion and metadata.uid fields:
resourceVersion: "13291"
uid: a282eb70-40a2-4e89-8009-d05dd420d31a
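For reference, the following is an abbreviated sketch of what the edited file might look like. The labels shown and the providerSpec contents are illustrative; keep whatever your original machine definition contains for those sections and change only the fields described above:
apiVersion: machine.openshift.io/v1beta1
kind: Machine
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: clustername-8qw5l
    machine.openshift.io/cluster-api-machine-role: master
    machine.openshift.io/cluster-api-machine-type: master
  name: clustername-8qw5l-master-3
  namespace: openshift-machine-api
spec:
  providerSpec:
    value:
      # Platform-specific provider configuration, copied unchanged
      # from the original machine definition.
      ...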
Delete the machine of the unhealthy member:
$ oc delete machine -n openshift-machine-api clustername-8qw5l-master-0
Specify the name of the master machine for the unhealthy node.
Verify that the machine was deleted:
$ oc get machines -n openshift-machine-api -o wide
Example output
Create the new machine using the new-master-machine.yaml file:
$ oc apply -f new-master-machine.yaml
Verify that the new machine has been created:
$ oc get machines -n openshift-machine-api -o wide
Example output
The new machine, clustername-8qw5l-master-3, is being created and is ready once the phase changes from Provisioning to Running.
It might take a few minutes for the new machine to be created. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state.
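For example, you can watch the phase change by adding the watch flag to the same command used above:
$ oc get machines -n openshift-machine-api -o wide -w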
Verification
Verify that all etcd pods are running properly.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc get pods -n openshift-etcd | grep etcd
Example output
etcd-ip-10-0-133-53.ec2.internal                 3/3     Running     0          7m49s
etcd-ip-10-0-164-97.ec2.internal                 3/3     Running     0          123m
etcd-ip-10-0-154-204.ec2.internal                3/3     Running     0          124m
If the output from the previous command only lists two pods, you can manually force an etcd redeployment. In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
The forceRedeploymentReason value must be unique, which is why a timestamp is appended.
Verify that there are exactly three etcd members.
Connect to the running etcd container, passing in the name of a pod that was not on the affected node:
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal
View the member list:
sh-4.2# etcdctl member list -w table
Example output
If the output from the previous command lists more than three etcd members, you must carefully remove the unwanted member.
Warning: Be sure to remove the correct etcd member; removing a good etcd member might lead to quorum loss.
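If you do need to remove an extra member, reuse the etcdctl member remove command shown earlier from within a healthy etcd pod, substituting the ID of the unwanted member (shown here as a placeholder):
sh-4.2# etcdctl member remove <id_of_unwanted_member>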
2.4.2. Replacing an unhealthy etcd member whose etcd pod is crashlooping
This procedure details the steps to replace an etcd member that is unhealthy because the etcd pod is crashlooping.
Prerequisites
- You have identified the unhealthy etcd member.
- You have verified that the etcd pod is crashlooping.
- You have access to the cluster as a user with the cluster-admin role.
- You have taken an etcd backup.
Important: It is important to take an etcd backup before performing this procedure so that your cluster can be restored if you encounter any issues.
Procedure
Stop the crashlooping etcd pod.
Debug the node that is crashlooping.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc debug node/ip-10-0-131-183.ec2.internal
Replace this with the name of the unhealthy node.
Change your root directory to the host:
sh-4.2# chroot /host
Move the existing etcd pod file out of the kubelet manifest directory:
sh-4.2# mkdir /var/lib/etcd-backup
sh-4.2# mv /etc/kubernetes/manifests/etcd-pod.yaml /var/lib/etcd-backup/
Move the etcd data directory to a different location:
sh-4.2# mv /var/lib/etcd/ /tmp
You can now exit the node shell, optionally after the quick check shown below.
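As an optional check while you are still in the node shell, you can confirm that the etcd member's containers on this node have stopped. This is a minimal sketch that assumes crictl is available on the host; unrelated containers whose names merely contain etcd might still be listed:
sh-4.2# crictl ps | grep etcd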
Remove the unhealthy member.
Choose a pod that is not on the affected node.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc get pods -n openshift-etcd | grep etcd
Example output
etcd-ip-10-0-131-183.ec2.internal                2/3     Error      7          6h9m
etcd-ip-10-0-164-97.ec2.internal                 3/3     Running    0          6h6m
etcd-ip-10-0-154-204.ec2.internal                3/3     Running    0          6h6m
Connect to the running etcd container, passing in the name of a pod that is not on the affected node.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal
View the member list:
sh-4.2# etcdctl member list -w table
Example output
Take note of the ID and the name of the unhealthy etcd member, because these values are needed later in the procedure.
Remove the unhealthy etcd member by providing the ID to the etcdctl member remove command:
sh-4.2# etcdctl member remove 62bcf33650a7170a
Example output
Member 62bcf33650a7170a removed from cluster ead669ce1fbfb346
View the member list again and verify that the member was removed:
sh-4.2# etcdctl member list -w table
Example output
You can now exit the node shell.
Remove the old secrets for the unhealthy etcd member that was removed.
List the secrets for the unhealthy etcd member that was removed.
$ oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal
Pass in the name of the unhealthy etcd member that you took note of earlier in this procedure.
There is a peer, serving, and metrics secret as shown in the following output:
Example output
etcd-peer-ip-10-0-131-183.ec2.internal              kubernetes.io/tls   2   47m
etcd-serving-ip-10-0-131-183.ec2.internal           kubernetes.io/tls   2   47m
etcd-serving-metrics-ip-10-0-131-183.ec2.internal   kubernetes.io/tls   2   47m
Delete the secrets for the unhealthy etcd member that was removed.
Delete the peer secret:
$ oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal
Delete the serving secret:
$ oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal
Delete the metrics secret:
$ oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal
Force etcd redeployment.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "single-master-recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
The forceRedeploymentReason value must be unique, which is why a timestamp is appended.
When the etcd cluster Operator performs a redeployment, it ensures that all master nodes have a functioning etcd pod.
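For example, you can confirm this by listing the etcd pods and verifying that each master node reports a Running pod with all of its containers ready:
$ oc get pods -n openshift-etcd | grep etcd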
Verification
Verify that the new member is available and healthy.
Connect to the running etcd container again.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal
Verify that all members are healthy:
sh-4.2# etcdctl endpoint health --cluster
Example output
https://10.0.131.183:2379 is healthy: successfully committed proposal: took = 16.671434ms
https://10.0.154.204:2379 is healthy: successfully committed proposal: took = 16.698331ms
https://10.0.164.97:2379 is healthy: successfully committed proposal: took = 16.621645ms