Replacing the unhealthy etcd member
Depending on the state of your unhealthy etcd member, use one of the following procedures:

Replacing an unhealthy etcd member whose machine is not running or whose node is not ready

This procedure details the steps to replace an etcd member that is unhealthy either because the machine is not running or because the node is not ready.
Prerequisites
- You have identified the unhealthy etcd member.
- You have verified that either the machine is not running or the node is not ready.
- You have access to the cluster as a user with the cluster-admin role.
- You have taken an etcd backup.
Important: It is important to take an etcd backup before performing this procedure so that your cluster can be restored if you encounter any issues.
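If you have not yet taken the backup, the general OpenShift 4 approach is to run the backup script from a debug shell on a healthy master node. The following is a minimal sketch, assuming a release where the script is /usr/local/bin/cluster-backup.sh; older releases ship a differently named script, so confirm the exact command against the backup documentation for your version:

$ oc debug node/<healthy_master_node>
sh-4.2# chroot /host
sh-4.2# /usr/local/bin/cluster-backup.sh /home/core/assets/backup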
Procedure
Remove the unhealthy member.
Choose a pod that is not on the affected node:
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc get pods -n openshift-etcd | grep etcd

Example output

etcd-ip-10-0-131-183.ec2.internal                3/3     Running     0          123m
etcd-ip-10-0-164-97.ec2.internal                 3/3     Running     0          123m
etcd-ip-10-0-154-204.ec2.internal                3/3     Running     0          124m

Connect to the running etcd container, passing in the name of a pod that is not on the affected node:
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal

View the member list:
sh-4.2# etcdctl member list -w table

Example output

+------------------+---------+------------------------------+---------------------------+---------------------------+
|        ID        | STATUS  |             NAME             |        PEER ADDRS         |       CLIENT ADDRS        |
+------------------+---------+------------------------------+---------------------------+---------------------------+
| 6fc1e7c9db35841d | started | ip-10-0-131-183.ec2.internal | https://10.0.131.183:2380 | https://10.0.131.183:2379 |
| 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal  | https://10.0.164.97:2380  | https://10.0.164.97:2379  |
| ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 |
+------------------+---------+------------------------------+---------------------------+---------------------------+

Take note of the ID and the name of the unhealthy etcd member, because these values are needed later in the procedure.
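If you prefer to capture the member ID in a shell variable instead of copying it by hand, the default etcdctl member list output is comma-separated with the ID in the first field. This is a hypothetical convenience, assuming the node name from the example above:

sh-4.2# UNHEALTHY_ID=$(etcdctl member list | awk -F', ' '/ip-10-0-131-183.ec2.internal/ {print $1}')
sh-4.2# echo "${UNHEALTHY_ID}"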
Remove the unhealthy etcd member by providing the ID to the etcdctl member remove command:

sh-4.2# etcdctl member remove 6fc1e7c9db35841d

Example output

Member 6fc1e7c9db35841d removed from cluster baa565c8919b060e

View the member list again and verify that the member was removed:
sh-4.2# etcdctl member list -w table

Example output

+------------------+---------+------------------------------+---------------------------+---------------------------+
|        ID        | STATUS  |             NAME             |        PEER ADDRS         |       CLIENT ADDRS        |
+------------------+---------+------------------------------+---------------------------+---------------------------+
| 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal  | https://10.0.164.97:2380  | https://10.0.164.97:2379  |
| ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 |
+------------------+---------+------------------------------+---------------------------+---------------------------+

You can now exit the node shell.
Remove the old secrets for the unhealthy etcd member that was removed.
List the secrets for the unhealthy etcd member that was removed.
$ oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal

Pass in the name of the unhealthy etcd member that you took note of earlier in this procedure.
There is a peer, serving, and metrics secret as shown in the following output:

Example output

etcd-peer-ip-10-0-131-183.ec2.internal              kubernetes.io/tls   2   47m
etcd-serving-ip-10-0-131-183.ec2.internal           kubernetes.io/tls   2   47m
etcd-serving-metrics-ip-10-0-131-183.ec2.internal   kubernetes.io/tls   2   47m

Delete the secrets for the unhealthy etcd member that was removed.
Delete the peer secret:

$ oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal

Delete the serving secret:

$ oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal

Delete the metrics secret:

$ oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal
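Equivalently, the three deletions can be collapsed into one small shell loop. This is an optional sketch, assuming the same node name as in the previous steps:

$ for prefix in etcd-peer etcd-serving etcd-serving-metrics; do
    oc delete secret -n openshift-etcd "${prefix}-ip-10-0-131-183.ec2.internal"
  done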
Delete and recreate the master machine. After this machine is recreated, a new revision is forced and etcd scales up automatically.
If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps. Otherwise, you must create the new master using the same method that was used to originally create it.
Obtain the machine for the unhealthy member.

In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc get machines -n openshift-machine-api -o wide

In the output, identify the master machine for the unhealthy node, ip-10-0-131-183.ec2.internal. In this example, that machine is clustername-8qw5l-master-0.
Save the machine configuration to a file on your file system:

$ oc get machine clustername-8qw5l-master-0 \
    -n openshift-machine-api \
    -o yaml \
    > new-master-machine.yaml

Specify the name of the master machine for the unhealthy node.
Edit the new-master-machine.yaml file that was created in the previous step to assign a new name and remove unnecessary fields, as shown in the consolidated sketch after this list.

Remove the entire status section.

Change the metadata.name field to a new name. It is recommended to keep the same base name as the old machine and change the ending number to the next available number. In this example, clustername-8qw5l-master-0 is changed to clustername-8qw5l-master-3.

Update the metadata.selfLink field to use the new machine name from the previous step.

Remove the spec.providerID field:

providerID: aws:///us-east-1a/i-0fdb85790d76d0c3f

Remove the metadata.annotations and metadata.generation fields:

annotations:
  machine.openshift.io/instance-state: running
...
generation: 2

Remove the metadata.resourceVersion and metadata.uid fields:

resourceVersion: "13291"
uid: a282eb70-40a2-4e89-8009-d05dd420d31a
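For orientation, the following is a minimal sketch of how the metadata of the edited new-master-machine.yaml might look after these changes. Only the machine names shown earlier in this procedure come from the example; the label values and the elided providerSpec are illustrative placeholders, and your file will reflect your own cluster:

apiVersion: machine.openshift.io/v1beta1
kind: Machine
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: clustername-8qw5l   # placeholder values
    machine.openshift.io/cluster-api-machine-role: master
    machine.openshift.io/cluster-api-machine-type: master
  name: clustername-8qw5l-master-3
  namespace: openshift-machine-api
  selfLink: /apis/machine.openshift.io/v1beta1/namespaces/openshift-machine-api/machines/clustername-8qw5l-master-3
spec:
  providerSpec:
    # unchanged from the saved machine configuration
    ...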
Delete the machine of the unhealthy member:

$ oc delete machine -n openshift-machine-api clustername-8qw5l-master-0

Specify the name of the master machine for the unhealthy node.
Verify that the machine was deleted:

$ oc get machines -n openshift-machine-api -o wide

The machine for the unhealthy node no longer appears in the output.

Create the new machine using the new-master-machine.yaml file:

$ oc apply -f new-master-machine.yaml

Verify that the new machine has been created:
$ oc get machines -n openshift-machine-api -o wide

The new machine, clustername-8qw5l-master-3, is being created and is ready once the phase changes from Provisioning to Running.
It might take a few minutes for the new machine to be created. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state.
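If you want to follow the progress rather than re-running the get command, you can watch the machine list and check the etcd cluster Operator; an optional sketch:

$ oc get machines -n openshift-machine-api -w
$ oc get clusteroperator etcd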
Verification
Verify that all etcd pods are running properly.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc get pods -n openshift-etcd | grep etcd

Example output

etcd-ip-10-0-133-53.ec2.internal                 3/3     Running     0          7m49s
etcd-ip-10-0-164-97.ec2.internal                 3/3     Running     0          123m
etcd-ip-10-0-154-204.ec2.internal                3/3     Running     0          124m

If the output from the previous command lists only two pods, you can manually force an etcd redeployment. In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge

The forceRedeploymentReason value must be unique, which is why a timestamp is appended.
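To confirm that the forced redeployment has rolled out to all nodes, you can inspect the etcd operator's NodeInstallerProgressing condition; an optional sketch adapted from the related disaster-recovery documentation:

$ oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}{end}'

When the rollout is complete, the condition reports AllNodesAtLatestRevision.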
Verify that there are exactly three etcd members.
Connect to the running etcd container, passing in the name of a pod that was not on the affected node:
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal

View the member list:
sh-4.2# etcdctl member list -w table

Example output

+------------------+---------+------------------------------+---------------------------+---------------------------+
|        ID        | STATUS  |             NAME             |        PEER ADDRS         |       CLIENT ADDRS        |
+------------------+---------+------------------------------+---------------------------+---------------------------+
| 5eb0d6b8ca24730c | started | ip-10-0-133-53.ec2.internal  | https://10.0.133.53:2380  | https://10.0.133.53:2379  |
| 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal  | https://10.0.164.97:2380  | https://10.0.164.97:2379  |
| ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 |
+------------------+---------+------------------------------+---------------------------+---------------------------+

If the output from the previous command lists more than three etcd members, you must carefully remove the unwanted member.
Warning: Be sure to remove the correct etcd member; removing a good etcd member might lead to quorum loss.
Replacing an unhealthy etcd member whose etcd pod is crashlooping

This procedure details the steps to replace an etcd member that is unhealthy because the etcd pod is crashlooping.
Prerequisites
- You have identified the unhealthy etcd member.
- You have verified that the etcd pod is crashlooping.
- You have access to the cluster as a user with the cluster-admin role.
- You have taken an etcd backup.
Important: It is important to take an etcd backup before performing this procedure so that your cluster can be restored if you encounter any issues.
Procedure
Stop the crashlooping etcd pod.
Debug the node that is crashlooping.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc debug node/ip-10-0-131-183.ec2.internal

Replace this with the name of the unhealthy node.
Change your root directory to the host:

sh-4.2# chroot /host

Move the existing etcd pod file out of the kubelet manifest directory:

sh-4.2# mkdir /var/lib/etcd-backup

sh-4.2# mv /etc/kubernetes/manifests/etcd-pod.yaml /var/lib/etcd-backup/
Move the etcd data directory to a different location:

sh-4.2# mv /var/lib/etcd/ /tmp

You can now exit the node shell.
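Before moving on, it is worth confirming that the crashlooping etcd pod is actually gone, since the kubelet removes the static pod once its manifest leaves the directory; an optional check:

$ oc get pods -n openshift-etcd | grep ip-10-0-131-183.ec2.internal

The etcd pod for the affected node should no longer be listed as running.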
Remove the unhealthy member.
Choose a pod that is not on the affected node.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc get pods -n openshift-etcd | grep etcd

Example output

etcd-ip-10-0-131-183.ec2.internal                2/3     Error       7          6h9m
etcd-ip-10-0-164-97.ec2.internal                 3/3     Running     0          6h6m
etcd-ip-10-0-154-204.ec2.internal                3/3     Running     0          6h6m

Connect to the running etcd container, passing in the name of a pod that is not on the affected node.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal

View the member list:
sh-4.2# etcdctl member list -w table

Example output

+------------------+---------+------------------------------+---------------------------+---------------------------+
|        ID        | STATUS  |             NAME             |        PEER ADDRS         |       CLIENT ADDRS        |
+------------------+---------+------------------------------+---------------------------+---------------------------+
| 62bcf33650a7170a | started | ip-10-0-131-183.ec2.internal | https://10.0.131.183:2380 | https://10.0.131.183:2379 |
| b78e2856655bc2eb | started | ip-10-0-164-97.ec2.internal  | https://10.0.164.97:2380  | https://10.0.164.97:2379  |
| d022e10b498760d5 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 |
+------------------+---------+------------------------------+---------------------------+---------------------------+

Take note of the ID and the name of the unhealthy etcd member, because these values are needed later in the procedure.
Remove the unhealthy etcd member by providing the ID to the etcdctl member remove command:

sh-4.2# etcdctl member remove 62bcf33650a7170a

Example output

Member 62bcf33650a7170a removed from cluster ead669ce1fbfb346

View the member list again and verify that the member was removed:
sh-4.2# etcdctl member list -w table

Example output

+------------------+---------+------------------------------+---------------------------+---------------------------+
|        ID        | STATUS  |             NAME             |        PEER ADDRS         |       CLIENT ADDRS        |
+------------------+---------+------------------------------+---------------------------+---------------------------+
| b78e2856655bc2eb | started | ip-10-0-164-97.ec2.internal  | https://10.0.164.97:2380  | https://10.0.164.97:2379  |
| d022e10b498760d5 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 |
+------------------+---------+------------------------------+---------------------------+---------------------------+

You can now exit the node shell.
Remove the old secrets for the unhealthy etcd member that was removed.
List the secrets for the unhealthy etcd member that was removed.
$ oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal

Pass in the name of the unhealthy etcd member that you took note of earlier in this procedure.
There is a peer, serving, and metrics secret as shown in the following output:

Example output

etcd-peer-ip-10-0-131-183.ec2.internal              kubernetes.io/tls   2   47m
etcd-serving-ip-10-0-131-183.ec2.internal           kubernetes.io/tls   2   47m
etcd-serving-metrics-ip-10-0-131-183.ec2.internal   kubernetes.io/tls   2   47m

Delete the secrets for the unhealthy etcd member that was removed.
Delete the peer secret:

$ oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal

Delete the serving secret:

$ oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal

Delete the metrics secret:

$ oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal
Force etcd redeployment.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "single-master-recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge

The forceRedeploymentReason value must be unique, which is why a timestamp is appended.
When the etcd cluster Operator performs a redeployment, it ensures that all master nodes have a functioning etcd pod.
Verification
Verify that the new member is available and healthy.
Connect to the running etcd container again.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal

Verify that all members are healthy:
sh-4.2# etcdctl endpoint health --cluster

Example output

https://10.0.131.183:2379 is healthy: successfully committed proposal: took = 16.671434ms
https://10.0.154.204:2379 is healthy: successfully committed proposal: took = 16.698331ms
https://10.0.164.97:2379 is healthy: successfully committed proposal: took = 16.621645ms
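As an optional complement to the health check, etcdctl can also summarize per-endpoint status, including which member currently holds leadership:

sh-4.2# etcdctl endpoint status --cluster -w table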