Chapter 14. Replacing storage nodes
You can choose one of the following procedures to replace storage nodes:
14.1. Replacing operational nodes on Red Hat OpenStack Platform installer-provisioned infrastructure
Procedure
- Log in to the OpenShift Web Console, and click Compute → Nodes.
- Identify the node that you need to replace. Take a note of its Machine Name.
- Mark the node as unschedulable:

  $ oc adm cordon <node_name>

  <node_name>
  Specify the name of the node that you need to replace.

- Drain the node:

  $ oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets

  Important: This activity might take at least 5 - 10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when you label the new node and it is functional.
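You can confirm that the cordon took effect before draining: a cordoned node reports SchedulingDisabled in its STATUS column. The sketch below runs that check against hypothetical sample output (the node names are illustrative); on a live cluster you would pipe the real output of oc get nodes instead.

```shell
# Hypothetical sample of 'oc get nodes' output after cordoning one node.
sample_nodes='NAME         STATUS                     ROLES    AGE   VERSION
ocs-node-1   Ready                      worker   42d   v1.25.4
ocs-node-2   Ready,SchedulingDisabled   worker   42d   v1.25.4
ocs-node-3   Ready                      worker   42d   v1.25.4'

# A cordoned node shows Ready,SchedulingDisabled in the STATUS column.
cordoned=$(printf '%s\n' "$sample_nodes" | grep SchedulingDisabled | awk '{print $1}')
echo "$cordoned"
```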
- Click Compute → Machines. Search for the required machine.
- Beside the required machine, click Action menu (⋮) → Delete Machine.
- Click Delete to confirm that the machine is deleted. A new machine is automatically created.
- Wait for the new machine to start and transition into Running state.

  Important: This activity might take at least 5 - 10 minutes or more.
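One way to watch for the new machine from the command line is to list machines and look at the PHASE column (machines typically live in the openshift-machine-api namespace). The sketch below checks which machines have not yet reached Running, using hypothetical sample output; the machine names are illustrative.

```shell
# Hypothetical sample of 'oc get machines -n openshift-machine-api' output.
sample_machines='NAME                 PHASE          TYPE        AGE
cluster-worker-abc   Running        m1.xlarge   42d
cluster-worker-new   Provisioning   m1.xlarge   2m'

# Skip the header row, then keep rows whose PHASE is not yet Running.
pending=$(printf '%s\n' "$sample_machines" | tail -n +2 | grep -v ' Running ' | awk '{print $1}')
echo "$pending"
```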
- Click Compute → Nodes. Confirm that the new node is in Ready state.
- Apply the OpenShift Data Foundation label to the new node using any one of the following:

  From the user interface
  - For the new node, click Action Menu (⋮) → Edit Labels.
  - Add cluster.ocs.openshift.io/openshift-storage, and click Save.

  From the command-line interface
  - Apply the OpenShift Data Foundation label to the new node:

    $ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""

    <new_node_name>
    Specify the name of the new node.
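To confirm that the label was applied, you can reuse the same grep/cut filter that appears in the verification steps. The sketch below runs it against hypothetical sample output from oc get nodes --show-labels (node names and label sets are illustrative).

```shell
# Hypothetical sample of 'oc get nodes --show-labels' output (labels truncated).
sample='NAME         STATUS   ROLES    AGE   VERSION   LABELS
new-node-1   Ready    worker   5m    v1.25.4   cluster.ocs.openshift.io/openshift-storage=,kubernetes.io/hostname=new-node-1
other-node   Ready    worker   42d   v1.25.4   kubernetes.io/hostname=other-node'

# Keep only nodes carrying the storage label; cut returns the NAME column.
labeled=$(printf '%s\n' "$sample" | grep 'cluster.ocs.openshift.io/openshift-storage=' | cut -d' ' -f1)
echo "$labeled"
```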
 
Verification steps
- Verify that the new node is present in the output:

  $ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1

- Click Workloads → Pods. Confirm that at least the following pods on the new node are in Running state:
  - csi-cephfsplugin-*
  - csi-rbdplugin-*
- Verify that all the other required OpenShift Data Foundation pods are in Running state.
- Verify that the new Object Storage Device (OSD) pods are running on the replacement node:

  $ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd

- Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
  For each of the new nodes identified in the previous step, do the following:
  - Create a debug pod and open a chroot environment for the one or more selected hosts:

    $ oc debug node/<node_name>
    $ chroot /host

  - Display the list of available block devices:

    $ lsblk

  - Check for the crypt keyword beside the one or more ocs-deviceset names.
- If the verification steps fail, contact Red Hat Support.
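The encryption check above amounts to looking for a TYPE of crypt beside the ocs-deviceset device in the lsblk listing. The sketch below runs that filter against hypothetical sample output (device and deviceset names are illustrative).

```shell
# Hypothetical 'lsblk' output from a node whose OSD device is encrypted.
sample='NAME                                         MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
vda                                          252:0    0  120G  0 disk
vdb                                          252:16   0  512G  0 disk
|-ocs-deviceset-0-data-0-x7v2k-block-dmcrypt 253:0    0  512G  0 crypt'

# An encrypted OSD shows TYPE crypt beside its ocs-deviceset name.
encrypted=$(printf '%s\n' "$sample" | grep ' crypt' | grep -c ocs-deviceset)
echo "$encrypted"
```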
 
14.2. Replacing failed nodes on Red Hat OpenStack Platform installer-provisioned infrastructure
Procedure
- Log in to the OpenShift Web Console, and click Compute → Nodes.
- Identify the faulty node, and click on its Machine Name.
- Click Actions → Edit Annotations, and click Add More.
- Add machine.openshift.io/exclude-node-draining, and click Save.
- Click Actions → Delete Machine, and click Delete.
- A new machine is automatically created. Wait for the new machine to start.

  Important: This activity might take at least 5 - 10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when you label the new node and it is functional.
- Click Compute → Nodes. Confirm that the new node is in Ready state.
- Apply the OpenShift Data Foundation label to the new node using any one of the following:

  From the user interface
  - For the new node, click Action Menu (⋮) → Edit Labels.
  - Add cluster.ocs.openshift.io/openshift-storage, and click Save.

  From the command-line interface
  - Apply the OpenShift Data Foundation label to the new node:

    $ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""

    <new_node_name>
    Specify the name of the new node.
 
- Optional: If the failed Red Hat OpenStack Platform instance is not removed automatically, terminate the instance from Red Hat OpenStack Platform console.
 
Verification steps
- Verify that the new node is present in the output:

  $ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1

- Click Workloads → Pods. Confirm that at least the following pods on the new node are in Running state:
  - csi-cephfsplugin-*
  - csi-rbdplugin-*
- Verify that all the other required OpenShift Data Foundation pods are in Running state.
- Verify that the new Object Storage Device (OSD) pods are running on the replacement node:

  $ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd

- Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
  For each of the new nodes identified in the previous step, do the following:
  - Create a debug pod and open a chroot environment for the one or more selected hosts:

    $ oc debug node/<node_name>
    $ chroot /host

  - Display the list of available block devices:

    $ lsblk

  - Check for the crypt keyword beside the one or more ocs-deviceset names.
- If the verification steps fail, contact Red Hat Support.
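The OSD verification above filters the pod listing twice: first by the replacement node's name, then by the osd substring. The sketch below runs that same pipeline against hypothetical sample output from oc get pods -o wide -n openshift-storage (pod and node names are illustrative).

```shell
# Hypothetical sample of 'oc get pods -o wide -n openshift-storage' output.
sample='NAME                               READY   STATUS    NODE
rook-ceph-osd-2-6f5c9d7b8d-abcde   2/2     Running   new-node-1
csi-rbdplugin-xyz12                3/3     Running   new-node-1
rook-ceph-osd-0-7d4b5c6f9a-fghij   2/2     Running   other-node'

# Keep pods scheduled on the replacement node, then keep only OSD pods.
osd_pods=$(printf '%s\n' "$sample" | egrep -i new-node-1 | egrep osd)
echo "$osd_pods"
```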