OpenShift Container Storage is now OpenShift Data Foundation starting with version 4.9.
Replacing nodes
How to prepare replacement nodes and replace failed nodes
Abstract
Preface
For OpenShift Container Storage, node replacement can be performed proactively for an operational node and reactively for a failed node for the following deployments:
For Amazon Web Services (AWS)
- User-provisioned infrastructure
- Installer-provisioned infrastructure
For VMware
- User-provisioned infrastructure
For Microsoft Azure
- Installer-provisioned infrastructure
For local storage devices
- Bare metal
- Amazon EC2 I3
- VMware
- IBM Power Systems
- IBM Z or LinuxONE
- For replacing your storage nodes in external mode, see Red Hat Ceph Storage documentation.
Chapter 1. OpenShift Container Storage deployed on AWS
1.1. Replacing an operational AWS node on user-provisioned infrastructure
Perform this procedure to replace an operational node on AWS user-provisioned infrastructure.
Prerequisites
- Red Hat recommends that replacement nodes are configured with similar infrastructure and resources to the node being replaced.
- You must be logged into the OpenShift Container Platform (RHOCP) cluster.
Procedure
- Identify the node that needs to be replaced.
Mark the node as unschedulable using the following command:
$ oc adm cordon <node_name>
Drain the node using the following command:
$ oc adm drain <node_name> --force --delete-local-data --ignore-daemonsets
Important: This activity may take at least 5-10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when the new node is labeled and functional.
Delete the node using the following command:
$ oc delete nodes <node_name>
- Create a new AWS machine instance with the required infrastructure. See Platform requirements.
- Create a new OpenShift Container Platform node using the new AWS machine instance.
Check for certificate signing requests (CSRs) related to OpenShift Container Platform that are in Pending state:
$ oc get csr
Approve all required OpenShift Container Platform CSRs for the new node:
$ oc adm certificate approve <Certificate_Name>
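If several CSRs are pending (a newly added node normally generates both a client and a serving request), they can be approved in one pass. The following one-liner is a sketch, not part of the original procedure; it approves every CSR that does not yet have a status:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve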
- Click Compute → Nodes, confirm if the new node is in Ready state.
Apply the OpenShift Container Storage label to the new node.
- From the web user interface
- For the new node, click Action Menu (⋮) → Edit Labels
- Add cluster.ocs.openshift.io/openshift-storage and click Save.
- From the command line interface
Execute the following command to apply the OpenShift Container Storage label to the new node:
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
Verification steps
Execute the following command and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
Click Workloads → Pods, confirm that at least the following pods on the new node are in Running state:
- csi-cephfsplugin-*
- csi-rbdplugin-*
- Verify that all other required OpenShift Container Storage pods are in Running state.
Verify that new OSD pods are running on the replacement node.
$ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd
(Optional) If data encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
For each of the new nodes identified in the previous step, do the following:
Create a debug pod and open a chroot environment for the selected host(s).
$ oc debug node/<node_name>
$ chroot /host
Run "lsblk" and check for the "crypt" keyword beside the ocs-deviceset name(s).
$ lsblk
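The debug and chroot steps can also be collapsed into one non-interactive command. A sketch only, assuming the host filesystem is mounted at /host as in the interactive example above:
$ oc debug node/<node_name> -- chroot /host lsblk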
- If verification steps fail, contact Red Hat Support.
1.2. Replacing an operational AWS node on installer-provisioned infrastructure
Use this procedure to replace an operational node on AWS installer-provisioned infrastructure (IPI).
Procedure
- Log in to OpenShift Web Console and click Compute → Nodes.
- Identify the node that needs to be replaced. Take a note of its Machine Name.
Mark the node as unschedulable using the following command:
$ oc adm cordon <node_name>
Drain the node using the following command:
$ oc adm drain <node_name> --force --delete-local-data --ignore-daemonsets
Important: This activity may take at least 5-10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when the new node is labeled and functional.
- Click Compute → Machines. Search for the required machine.
- Beside the required machine, click the Action menu (⋮) → Delete Machine.
- Click Delete to confirm the machine deletion. A new machine is automatically created.
Wait for the new machine to start and transition into Running state.
Important: This activity may take at least 5-10 minutes or more.
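To follow the replacement machine from the terminal instead of the console, one option is to watch the Machine objects. This is a sketch, not part of the original procedure, and assumes the default openshift-machine-api namespace:
$ oc get machines -n openshift-machine-api -w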
- Click Compute → Nodes, confirm if the new node is in Ready state.
Apply the OpenShift Container Storage label to the new node using any one of the following:
- From User interface
- For the new node, click Action Menu (⋮) → Edit Labels
- Add cluster.ocs.openshift.io/openshift-storage and click Save.
- From Command line interface
Execute the following command to apply the OpenShift Container Storage label to the new node:
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
Verification steps
Execute the following command and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
Click Workloads → Pods, confirm that at least the following pods on the new node are in Running state:
- csi-cephfsplugin-*
- csi-rbdplugin-*
- Verify that all other required OpenShift Container Storage pods are in Running state.
Verify that new OSD pods are running on the replacement node.
$ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd
(Optional) If data encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
For each of the new nodes identified in the previous step, do the following:
Create a debug pod and open a chroot environment for the selected host(s).
$ oc debug node/<node_name>
$ chroot /host
Run "lsblk" and check for the "crypt" keyword beside the ocs-deviceset name(s).
$ lsblk
- If verification steps fail, contact Red Hat Support.
1.3. Replacing a failed AWS node on user-provisioned infrastructure
Perform this procedure to replace a failed node which is not operational on AWS user-provisioned infrastructure (UPI) for OpenShift Container Storage.
Prerequisites
- Red Hat recommends that replacement nodes are configured with similar infrastructure and resources to the node being replaced.
- You must be logged into the OpenShift Container Platform (RHOCP) cluster.
Procedure
- Identify the AWS machine instance of the node that needs to be replaced.
- Log in to AWS and terminate the identified AWS machine instance.
- Create a new AWS machine instance with the required infrastructure. See platform requirements.
- Create a new OpenShift Container Platform node using the new AWS machine instance.
Check for certificate signing requests (CSRs) related to OpenShift Container Platform that are in Pending state:
$ oc get csr
Approve all required OpenShift Container Platform CSRs for the new node:
$ oc adm certificate approve <Certificate_Name>
- Click Compute → Nodes, confirm if the new node is in Ready state.
Apply the OpenShift Container Storage label to the new node using any one of the following:
- From User interface
- For the new node, click Action Menu (⋮) → Edit Labels
- Add cluster.ocs.openshift.io/openshift-storage and click Save.
- From Command line interface
Execute the following command to apply the OpenShift Container Storage label to the new node:
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
Verification steps
Execute the following command and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
Click Workloads → Pods, confirm that at least the following pods on the new node are in Running state:
- csi-cephfsplugin-*
- csi-rbdplugin-*
- Verify that all other required OpenShift Container Storage pods are in Running state.
Verify that new OSD pods are running on the replacement node.
$ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd
(Optional) If data encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
For each of the new nodes identified in the previous step, do the following:
Create a debug pod and open a chroot environment for the selected host(s).
$ oc debug node/<node_name>
$ chroot /host
Run "lsblk" and check for the "crypt" keyword beside the ocs-deviceset name(s).
$ lsblk
- If verification steps fail, contact Red Hat Support.
1.4. Replacing a failed AWS node on installer-provisioned infrastructure
Perform this procedure to replace a failed node which is not operational on AWS installer-provisioned infrastructure (IPI) for OpenShift Container Storage.
Procedure
- Log in to OpenShift Web Console and click Compute → Nodes.
- Identify the faulty node and click on its Machine Name.
- Click Actions → Edit Annotations, and click Add More.
- Add machine.openshift.io/exclude-node-draining and click Save.
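The same annotation can also be applied from the command line instead of the web console. A sketch only — the openshift-machine-api namespace and the empty value are assumptions, not taken from the original procedure:
$ oc annotate machine <machine_name> machine.openshift.io/exclude-node-draining="" -n openshift-machine-api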
- Click Actions → Delete Machine, and click Delete.
A new machine is automatically created. Wait for the new machine to start.
Important: This activity may take at least 5-10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when the new node is labeled and functional.
- Click Compute → Nodes, confirm if the new node is in Ready state.
Apply the OpenShift Container Storage label to the new node using any one of the following:
- From User interface
- For the new node, click Action Menu (⋮) → Edit Labels
- Add cluster.ocs.openshift.io/openshift-storage and click Save.
- From Command line interface
Execute the following command to apply the OpenShift Container Storage label to the new node:
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
- [Optional]: If the failed AWS instance is not removed automatically, terminate the instance from AWS console.
Verification steps
Execute the following command and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
Click Workloads → Pods, confirm that at least the following pods on the new node are in Running state:
- csi-cephfsplugin-*
- csi-rbdplugin-*
- Verify that all other required OpenShift Container Storage pods are in Running state.
Verify that new OSD pods are running on the replacement node.
$ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd
(Optional) If data encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
For each of the new nodes identified in the previous step, do the following:
Create a debug pod and open a chroot environment for the selected host(s).
$ oc debug node/<node_name>
$ chroot /host
Run "lsblk" and check for the "crypt" keyword beside the ocs-deviceset name(s).
$ lsblk
- If verification steps fail, contact Red Hat Support.
Chapter 2. OpenShift Container Storage deployed on VMware
2.1. Replacing an operational VMware node on user-provisioned infrastructure
Perform this procedure to replace an operational node on VMware user-provisioned infrastructure (UPI).
Prerequisites
- Red Hat recommends that replacement nodes are configured with similar infrastructure, resources, and disks to the node being replaced.
- You must be logged into the OpenShift Container Platform (RHOCP) cluster.
Procedure
- Identify the node and its VM that needs to be replaced.
Mark the node as unschedulable using the following command:
$ oc adm cordon <node_name>
Drain the node using the following command:
$ oc adm drain <node_name> --force --delete-local-data --ignore-daemonsets
Important: This activity may take at least 5-10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when the new node is labeled and functional.
Delete the node using the following command:
$ oc delete nodes <node_name>
Log in to vSphere and terminate the identified VM.
Important: VM should be deleted only from the inventory and not from the disk.
- Create a new VM on vSphere with the required infrastructure. See Platform requirements.
- Create a new OpenShift Container Platform worker node using the new VM.
Check for certificate signing requests (CSRs) related to OpenShift Container Platform that are in Pending state:
$ oc get csr
Approve all required OpenShift Container Platform CSRs for the new node:
$ oc adm certificate approve <Certificate_Name>
- Click Compute → Nodes, confirm if the new node is in Ready state.
Apply the OpenShift Container Storage label to the new node using any one of the following:
- From User interface
- For the new node, click Action Menu (⋮) → Edit Labels
- Add cluster.ocs.openshift.io/openshift-storage and click Save.
- From Command line interface
Execute the following command to apply the OpenShift Container Storage label to the new node:
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
Verification steps
Execute the following command and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
Click Workloads → Pods, confirm that at least the following pods on the new node are in Running state:
- csi-cephfsplugin-*
- csi-rbdplugin-*
- Verify that all other required OpenShift Container Storage pods are in Running state.
Verify that new OSD pods are running on the replacement node.
$ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd
(Optional) If data encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
For each of the new nodes identified in the previous step, do the following:
Create a debug pod and open a chroot environment for the selected host(s).
$ oc debug node/<node_name>
$ chroot /host
Run "lsblk" and check for the "crypt" keyword beside the ocs-deviceset name(s).
$ lsblk
- If verification steps fail, contact Red Hat Support.
2.2. Replacing a failed VMware node on user-provisioned infrastructure
Perform this procedure to replace a failed node on VMware user-provisioned infrastructure (UPI).
Prerequisites
- Red Hat recommends that replacement nodes are configured with similar infrastructure, resources, and disks to the node being replaced.
- You must be logged into the OpenShift Container Platform (RHOCP) cluster.
Procedure
- Identify the node and its VM that needs to be replaced.
Delete the node using the following command:
$ oc delete nodes <node_name>
Log in to vSphere and terminate the identified VM.
Important: VM should be deleted only from the inventory and not from the disk.
- Create a new VM on vSphere with the required infrastructure. See Platform requirements.
- Create a new OpenShift Container Platform worker node using the new VM.
Check for certificate signing requests (CSRs) related to OpenShift Container Platform that are in Pending state:
$ oc get csr
Approve all required OpenShift Container Platform CSRs for the new node:
$ oc adm certificate approve <Certificate_Name>
- Click Compute → Nodes, confirm if the new node is in Ready state.
Apply the OpenShift Container Storage label to the new node using any one of the following:
- From User interface
- For the new node, click Action Menu (⋮) → Edit Labels
- Add cluster.ocs.openshift.io/openshift-storage and click Save.
- From Command line interface
Execute the following command to apply the OpenShift Container Storage label to the new node:
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
Verification steps
Execute the following command and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
Click Workloads → Pods, confirm that at least the following pods on the new node are in Running state:
- csi-cephfsplugin-*
- csi-rbdplugin-*
- Verify that all other required OpenShift Container Storage pods are in Running state.
Verify that new OSD pods are running on the replacement node.
$ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd
(Optional) If data encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
For each of the new nodes identified in the previous step, do the following:
Create a debug pod and open a chroot environment for the selected host(s).
$ oc debug node/<node_name>
$ chroot /host
Run "lsblk" and check for the "crypt" keyword beside the ocs-deviceset name(s).
$ lsblk
- If verification steps fail, contact Red Hat Support.
Chapter 3. OpenShift Container Storage deployed on Microsoft Azure
3.1. Replacing operational nodes on Azure installer-provisioned infrastructure
Use this procedure to replace an operational node on Azure installer-provisioned infrastructure (IPI).
Procedure
- Log in to OpenShift Web Console and click Compute → Nodes.
- Identify the node that needs to be replaced. Take a note of its Machine Name.
Mark the node as unschedulable using the following command:
$ oc adm cordon <node_name>
Drain the node using the following command:
$ oc adm drain <node_name> --force --delete-local-data --ignore-daemonsets
Important: This activity may take at least 5-10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when the new node is labeled and functional.
- Click Compute → Machines. Search for the required machine.
- Beside the required machine, click the Action menu (⋮) → Delete Machine.
- Click Delete to confirm the machine deletion. A new machine is automatically created.
Wait for the new machine to start and transition into Running state.
Important: This activity may take at least 5-10 minutes or more.
- Click Compute → Nodes, confirm if the new node is in Ready state.
Apply the OpenShift Container Storage label to the new node using any one of the following:
- From User interface
- For the new node, click Action Menu (⋮) → Edit Labels
- Add cluster.ocs.openshift.io/openshift-storage and click Save.
- From Command line interface
Execute the following command to apply the OpenShift Container Storage label to the new node:
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
Verification steps
Execute the following command and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
Click Workloads → Pods, confirm that at least the following pods on the new node are in Running state:
- csi-cephfsplugin-*
- csi-rbdplugin-*
- Verify that all other required OpenShift Container Storage pods are in Running state.
Verify that new OSD pods are running on the replacement node.
$ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd
(Optional) If data encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
For each of the new nodes identified in the previous step, do the following:
Create a debug pod and open a chroot environment for the selected host(s).
$ oc debug node/<node_name>
$ chroot /host
Run "lsblk" and check for the "crypt" keyword beside the ocs-deviceset name(s).
$ lsblk
- If verification steps fail, contact Red Hat Support.
3.2. Replacing failed nodes on Azure installer-provisioned infrastructure
Perform this procedure to replace a failed node which is not operational on Azure installer-provisioned infrastructure (IPI) for OpenShift Container Storage.
Procedure
- Log in to OpenShift Web Console and click Compute → Nodes.
- Identify the faulty node and click on its Machine Name.
- Click Actions → Edit Annotations, and click Add More.
- Add machine.openshift.io/exclude-node-draining and click Save.
- Click Actions → Delete Machine, and click Delete.
A new machine is automatically created. Wait for the new machine to start.
Important: This activity may take at least 5-10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when the new node is labeled and functional.
- Click Compute → Nodes, confirm if the new node is in Ready state.
Apply the OpenShift Container Storage label to the new node using any one of the following:
- From User interface
- For the new node, click Action Menu (⋮) → Edit Labels
- Add cluster.ocs.openshift.io/openshift-storage and click Save.
- From Command line interface
Execute the following command to apply the OpenShift Container Storage label to the new node:
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
- [Optional]: If the failed Azure instance is not removed automatically, terminate the instance from Azure console.
Verification steps
Execute the following command and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
Click Workloads → Pods, confirm that at least the following pods on the new node are in Running state:
- csi-cephfsplugin-*
- csi-rbdplugin-*
- Verify that all other required OpenShift Container Storage pods are in Running state.
Verify that new OSD pods are running on the replacement node.
$ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd
(Optional) If data encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
For each of the new nodes identified in the previous step, do the following:
Create a debug pod and open a chroot environment for the selected host(s).
$ oc debug node/<node_name>
$ chroot /host
Run "lsblk" and check for the "crypt" keyword beside the ocs-deviceset name(s).
$ lsblk
- If verification steps fail, contact Red Hat Support.
Chapter 4. OpenShift Container Storage deployed using local storage devices
4.1. Replacing storage nodes on bare metal infrastructure
- To replace an operational node, see Section 4.1.1, “Replacing an operational node on bare metal user-provisioned infrastructure”
- To replace a failed node, see Section 4.1.2, “Replacing a failed node on bare metal user-provisioned infrastructure”
4.1.1. Replacing an operational node on bare metal user-provisioned infrastructure
Prerequisites
- Red Hat recommends that replacement nodes are configured with similar infrastructure, resources, and disks to the node being replaced.
- You must be logged into the OpenShift Container Platform (RHOCP) cluster.
- If you upgraded to OpenShift Container Storage 4.6 from a previous version instead of performing a fresh installation, ensure that you have completed Post-update configuration changes.
Procedure
Identify the node and get labels on the node to be replaced.
$ oc get nodes --show-labels | grep <node_name>
Identify the mon (if any) and OSDs that are running in the node to be replaced.
$ oc get pods -n openshift-storage -o wide | grep -i <node_name>
Scale down the deployments of the pods identified in the previous step.
For example:
$ oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage
$ oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage
$ oc scale deployment --selector=app=rook-ceph-crashcollector,node_name=<node_name> --replicas=0 -n openshift-storage
Mark the node as unschedulable.
$ oc adm cordon <node_name>
Drain the node.
$ oc adm drain <node_name> --force --delete-local-data --ignore-daemonsets
Delete the node.
$ oc delete node <node_name>
- Get a new bare metal machine with required infrastructure. See Installing a cluster on bare metal.
- Create a new OpenShift Container Platform node using the new bare metal machine.
Check for certificate signing requests (CSRs) related to OpenShift Container Platform that are in Pending state:
$ oc get csr
Approve all required OpenShift Container Platform CSRs for the new node:
$ oc adm certificate approve <Certificate_Name>
- Click Compute → Nodes in OpenShift Web Console, confirm if the new node is in Ready state.
Apply the OpenShift Container Storage label to the new node using any one of the following:
- From User interface
- For the new node, click Action Menu (⋮) → Edit Labels.
- Add cluster.ocs.openshift.io/openshift-storage and click Save.
- From Command line interface
Execute the following command to apply the OpenShift Container Storage label to the new node:
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
Add a new worker node to localVolumeDiscovery and localVolumeSet.
Update the localVolumeDiscovery definition to include the new node and remove the failed node. Remember to save before exiting the editor.
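The YAML listing from the original document is not reproduced here. A hypothetical sketch of what the nodeSelector section of the localVolumeDiscovery definition might look like after the edit, using the hostnames referenced in the next sentence (the resource name, the other hostnames, and the oc edit invocation are placeholders); the localVolumeSet edit later in this step follows the same pattern:
# oc edit -n local-storage-project localvolumediscovery <discovery_name>
  spec:
    nodeSelector:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          #- server3.example.com   # failed node, removed or commented out
          - server1.example.com
          - server2.example.com
          - newnode.example.com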
In the above example, server3.example.com was removed and newnode.example.com is the new node.
Determine which localVolumeSet to edit.
Replace local-storage-project in the following commands with the name of your local storage project. The default project name is openshift-local-storage in OpenShift Container Storage 4.6 and later. Previous versions use local-storage by default.
# oc get -n local-storage-project localvolumeset
NAME         AGE
localblock   25h
Update the localVolumeSet definition to include the new node and remove the failed node. Remember to save before exiting the editor.
In the above example, server3.example.com was removed and newnode.example.com is the new node.
Verify that the new localblock PV is available.
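The expected output is not preserved in this copy of the document. One way to check, a sketch based on the command used later in this procedure, is to list the localblock PVs along with the host each one belongs to and confirm that an Available PV exists for the new node:
$ oc get pv -L kubernetes.io/hostname | grep localblock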
Change to the openshift-storage project.
$ oc project openshift-storage
Remove the failed OSD from the cluster.
$ oc process -n openshift-storage ocs-osd-removal \
    -p FAILED_OSD_IDS=failed-osd-id1,failed-osd-id2 | oc create -f -
Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal pod.
A status of Completed confirms that the OSD removal job succeeded.
# oc get pod -l job-name=ocs-osd-removal-failed-osd-id -n openshift-storage
Note: If ocs-osd-removal fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example:
# oc logs -l job-name=ocs-osd-removal-failed-osd_id -n openshift-storage --tail=-1
Delete the PV associated with the failed node.
Identify the PV associated with the PVC.
# oc get pv -L kubernetes.io/hostname | grep localblock | grep Released
local-pv-d6bf175b  1490Gi  RWO  Delete  Released  openshift-storage/ocs-deviceset-0-data-0-6c5pw  localblock  2d22h  compute-1
Delete the PV.
# oc delete pv <persistent-volume>
For example:
# oc delete pv local-pv-d6bf175b
persistentvolume "local-pv-d6bf175b" deleted
Delete the crashcollector pod deployment.
$ oc delete deployment --selector=app=rook-ceph-crashcollector,node_name=failed-node-name -n openshift-storage
Delete the ocs-osd-removal job.
# oc delete job ocs-osd-removal-${osd_id_to_remove}
Example output:
job.batch "ocs-osd-removal-0" deleted
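The ${osd_id_to_remove} shell variable is not defined earlier in this procedure; it stands for the OSD ID that was passed as FAILED_OSD_IDS. A minimal sketch, assuming OSD 0 was the failed OSD:
$ osd_id_to_remove=0
$ oc delete job ocs-osd-removal-${osd_id_to_remove}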
Verification steps
Execute the following command and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
Click Workloads → Pods, confirm that at least the following pods on the new node are in Running state:
- csi-cephfsplugin-*
- csi-rbdplugin-*
Verify that all other required OpenShift Container Storage pods are in Running state.
Ensure that the new incremental mon is created and is in the Running state.
$ oc get pod -n openshift-storage | grep mon
Example output:
rook-ceph-mon-c-64556f7659-c2ngc   1/1   Running   0   6h14m
rook-ceph-mon-d-7c8b74dc4d-tt6hd   1/1   Running   0   4h24m
rook-ceph-mon-e-57fb8c657-wg5f2    1/1   Running   0   162m
OSD and mon pods might take several minutes to get to the Running state.
Verify that new OSD pods are running on the replacement node.
$ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd
(Optional) If data encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
For each of the new nodes identified in the previous step, do the following:
Create a debug pod and open a chroot environment for the selected host(s).
$ oc debug node/<node_name>
$ chroot /host
Run "lsblk" and check for the "crypt" keyword beside the ocs-deviceset name(s).
$ lsblk
- If verification steps fail, contact Red Hat Support.
4.1.2. Replacing a failed node on bare metal user-provisioned infrastructure
Prerequisites
- Red Hat recommends that replacement nodes are configured with similar infrastructure, resources, and disks to the node being replaced.
- You must be logged into the OpenShift Container Platform (RHOCP) cluster.
- If you upgraded to OpenShift Container Storage 4.6 from a previous version instead of performing a fresh installation, ensure that you have completed Post-update configuration changes.
Procedure
Identify the node and get labels on the node to be replaced.
$ oc get nodes --show-labels | grep <node_name>
Identify the mon (if any) and OSDs that are running in the node to be replaced.
$ oc get pods -n openshift-storage -o wide | grep -i <node_name>
Scale down the deployments of the pods identified in the previous step.
For example:
$ oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage
$ oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage
$ oc scale deployment --selector=app=rook-ceph-crashcollector,node_name=<node_name> --replicas=0 -n openshift-storage
Mark the node as unschedulable.
$ oc adm cordon <node_name>
Remove the pods which are in Terminating state.
$ oc get pods -A -o wide | grep -i <node_name> | awk '{if ($4 == "Terminating") system ("oc -n " $1 " delete pods " $2 " --grace-period=0 " " --force ")}'
Drain the node.
$ oc adm drain <node_name> --force --delete-local-data --ignore-daemonsets
Delete the node.
$ oc delete node <node_name>
- Get a new bare metal machine with required infrastructure. See Installing a cluster on bare metal.
- Create a new OpenShift Container Platform node using the new bare metal machine.
Check for certificate signing requests (CSRs) related to OpenShift Container Platform that are in Pending state:
$ oc get csr
Approve all required OpenShift Container Platform CSRs for the new node:
$ oc adm certificate approve <Certificate_Name>
- Click Compute → Nodes in OpenShift Web Console, confirm if the new node is in Ready state.
Apply the OpenShift Container Storage label to the new node using any one of the following:
- From User interface
- For the new node, click Action Menu (⋮) → Edit Labels.
- Add cluster.ocs.openshift.io/openshift-storage and click Save.
- From Command line interface
Execute the following command to apply the OpenShift Container Storage label to the new node:
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
Add a new worker node to localVolumeDiscovery and localVolumeSet.
Update the localVolumeDiscovery definition to include the new node and remove the failed node. Remember to save before exiting the editor.
In the above example, server3.example.com was removed and newnode.example.com is the new node.
Determine which localVolumeSet to edit.
Replace local-storage-project in the following commands with the name of your local storage project. The default project name is openshift-local-storage in OpenShift Container Storage 4.6 and later. Previous versions use local-storage by default.
# oc get -n local-storage-project localvolumeset
NAME         AGE
localblock   25h
Update the localVolumeSet definition to include the new node and remove the failed node. Remember to save before exiting the editor.
In the above example, server3.example.com was removed and newnode.example.com is the new node.
Verify that the new localblock PV is available.
Change to the openshift-storage project.
$ oc project openshift-storage
Remove the failed OSD from the cluster.
$ oc process -n openshift-storage ocs-osd-removal \
    -p FAILED_OSD_IDS=failed-osd-id1,failed-osd-id2 | oc create -f -
Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal pod.
A status of Completed confirms that the OSD removal job succeeded.
# oc get pod -l job-name=ocs-osd-removal-failed-osd-id -n openshift-storage
Note: If ocs-osd-removal fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example:
# oc logs -l job-name=ocs-osd-removal-failed-osd_id -n openshift-storage --tail=-1
Delete the PV associated with the failed node.
Identify the PV associated with the PVC.
# oc get pv -L kubernetes.io/hostname | grep localblock | grep Released
local-pv-d6bf175b  1490Gi  RWO  Delete  Released  openshift-storage/ocs-deviceset-0-data-0-6c5pw  localblock  2d22h  compute-1
Delete the PV.
# oc delete pv <persistent-volume>
For example:
# oc delete pv local-pv-d6bf175b
persistentvolume "local-pv-d6bf175b" deleted
Delete the crashcollector pod deployment.
$ oc delete deployment --selector=app=rook-ceph-crashcollector,node_name=failed-node-name -n openshift-storage
Delete the ocs-osd-removal job.
# oc delete job ocs-osd-removal-${osd_id_to_remove}
Example output:
job.batch "ocs-osd-removal-0" deleted
Verification steps
Execute the following command and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
Click Workloads → Pods, confirm that at least the following pods on the new node are in Running state:
- csi-cephfsplugin-*
- csi-rbdplugin-*
Verify that all other required OpenShift Container Storage pods are in Running state.
Ensure that the new incremental mon is created and is in the Running state.
$ oc get pod -n openshift-storage | grep mon
Example output:
rook-ceph-mon-c-64556f7659-c2ngc   1/1   Running   0   6h14m
rook-ceph-mon-d-7c8b74dc4d-tt6hd   1/1   Running   0   4h24m
rook-ceph-mon-e-57fb8c657-wg5f2    1/1   Running   0   162m
OSD and mon pods might take several minutes to get to the Running state.
Verify that new OSD pods are running on the replacement node.
$ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd
(Optional) If data encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
For each of the new nodes identified in the previous step, do the following:
Create a debug pod and open a chroot environment for the selected host(s).
$ oc debug node/<node_name>
$ chroot /host
Run "lsblk" and check for the "crypt" keyword beside the ocs-deviceset name(s).
$ lsblk
- If verification steps fail, contact Red Hat Support.
4.2. Replacing storage nodes on IBM Z or LinuxONE infrastructure
You can choose one of the following procedures to replace storage nodes:
4.2.1. Replacing operational nodes on IBM Z or LinuxONE infrastructure
Use this procedure to replace an operational node on IBM Z or LinuxONE infrastructure.
Procedure
- Log in to OpenShift Web Console.
- Click Compute → Nodes.
- Identify the node that needs to be replaced. Take a note of its Machine Name.
Mark the node as unschedulable using the following command:
$ oc adm cordon <node_name>
Drain the node using the following command:
$ oc adm drain <node_name> --force --delete-local-data --ignore-daemonsets
Important: This activity may take at least 5-10 minutes. Ceph errors generated during this period are temporary and are automatically resolved when the new node is labeled and functional.
- Click Compute → Machines. Search for the required machine.
- Beside the required machine, click the Action menu (⋮) → Delete Machine.
- Click Delete to confirm the machine deletion. A new machine is automatically created.
Wait for the new machine to start and transition into Running state.
Important: This activity may take at least 5-10 minutes.
- Click Compute → Nodes, confirm if the new node is in Ready state.
Apply the OpenShift Container Storage label to the new node using any one of the following:
- From User interface
- For the new node, click Action Menu (⋮) → Edit Labels
- Add cluster.ocs.openshift.io/openshift-storage and click Save.
- From command line interface
Execute the following command to apply the OpenShift Container Storage label to the new node:
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
Verification steps
Execute the following command and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
Click Workloads → Pods, confirm that at least the following pods on the new node are in Running state:
- csi-cephfsplugin-*
- csi-rbdplugin-*
- Verify that all other required OpenShift Container Storage pods are in Running state.
Verify that new OSD pods are running on the replacement node.
$ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd
(Optional) If data encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
For each of the new nodes identified in the previous step, do the following:
Create a debug pod and open a chroot environment for the selected host(s).
$ oc debug node/<node_name>
$ chroot /host
Run "lsblk" and check for the "crypt" keyword beside the ocs-deviceset name(s).
$ lsblk
- If verification steps fail, contact Red Hat Support.
4.2.2. Replacing failed nodes on IBM Z or LinuxONE infrastructure
Perform this procedure to replace a failed node which is not operational on IBM Z or LinuxONE infrastructure for OpenShift Container Storage.
Procedure
- Log in to OpenShift Web Console and click Compute → Nodes.
- Identify the faulty node and click on its Machine Name.
- Click Actions → Edit Annotations, and click Add More.
- Add machine.openshift.io/exclude-node-draining and click Save.
- Click Actions → Delete Machine, and click Delete.
A new machine is automatically created. Wait for the new machine to start.
Important: This activity may take at least 5-10 minutes. Ceph errors generated during this period are temporary and are automatically resolved when the new node is labeled and functional.
- Click Compute → Nodes, confirm if the new node is in Ready state.
Apply the OpenShift Container Storage label to the new node using any one of the following:
- From the web user interface
- For the new node, click Action Menu (⋮) → Edit Labels
- Add cluster.ocs.openshift.io/openshift-storage and click Save.
- From the command line interface
Execute the following command to apply the OpenShift Container Storage label to the new node:
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
Verification steps
Execute the following command and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
Click Workloads → Pods, confirm that at least the following pods on the new node are in Running state:
- csi-cephfsplugin-*
- csi-rbdplugin-*
- Verify that all other required OpenShift Container Storage pods are in Running state.
Verify that new OSD pods are running on the replacement node.
$ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd
(Optional) If data encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
For each of the new nodes identified in the previous step, do the following:
Create a debug pod and open a chroot environment for the selected host(s).
$ oc debug node/<node_name>
$ chroot /host
Run "lsblk" and check for the "crypt" keyword beside the ocs-deviceset name(s).
$ lsblk
- If verification steps fail, contact Red Hat Support.
4.3. Replacing storage nodes on Amazon EC2 infrastructure
To replace an operational Amazon EC2 node on user-provisioned and installer-provisioned infrastructures, see:
To replace a failed Amazon EC2 node on user-provisioned and installer-provisioned infrastructures, see:
4.3.1. Replacing an operational Amazon EC2 node on user-provisioned infrastructure
Perform this procedure to replace an operational node on Amazon EC2 I3 user-provisioned infrastructure (UPI).
Replacing storage nodes in Amazon EC2 I3 infrastructure is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
Prerequisites
- Red Hat recommends that replacement nodes are configured with similar infrastructure and resources to the node being replaced.
- You must be logged into the OpenShift Container Platform (RHOCP) cluster.
Procedure
Identify the node and get labels on the node to be replaced.
$ oc get nodes --show-labels | grep <node_name>
Identify the mon (if any) and OSDs that are running in the node to be replaced.
$ oc get pods -n openshift-storage -o wide | grep -i <node_name>
Scale down the deployments of the pods identified in the previous step.
For example:
$ oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage
$ oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage
$ oc scale deployment --selector=app=rook-ceph-crashcollector,node_name=<node_name> --replicas=0 -n openshift-storage
Mark the node as unschedulable.
$ oc adm cordon <node_name>
Drain the node.
$ oc adm drain <node_name> --force --delete-local-data --ignore-daemonsets
Delete the node.
$ oc delete node <node_name>
- Create a new Amazon EC2 I3 machine instance with the required infrastructure. See Supported Infrastructure and Platforms.
- Create a new OpenShift Container Platform node using the new Amazon EC2 I3 machine instance.
Check for certificate signing requests (CSRs) related to OpenShift Container Platform that are in Pending state:

$ oc get csr

Approve all required OpenShift Container Platform CSRs for the new node:

$ oc adm certificate approve <Certificate_Name>
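If several CSRs are pending, they can optionally be approved in one pass. This is a convenience sketch, not part of the original procedure; note that it approves every pending CSR in the cluster, not only those for the new node:

$ oc get csr -o name | xargs oc adm certificate approve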
- Click Compute → Nodes in the OpenShift web console. Confirm if the new node is in Ready state.
Apply the OpenShift Container Storage label to the new node using any one of the following:
- From User interface
- For the new node, click Action Menu (⋮) → Edit Labels.
- Add cluster.ocs.openshift.io/openshift-storage and click Save.
- From Command line interface
- Execute the following command to apply the OpenShift Container Storage label to the new node:

$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
Add the local storage devices available in the new worker node to the OpenShift Container Storage StorageCluster.
Add the new disk entries to LocalVolume CR.
Edit the LocalVolume CR. You can either remove or comment out the failed device /dev/disk/by-id/{id} and add the new /dev/disk/by-id/{id}.

$ oc get -n local-storage localvolume

Example output:

NAME          AGE
local-block   25h

$ oc edit -n local-storage localvolume local-block

Make sure to save the changes after editing the CR.
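The edited CR itself was not preserved in this excerpt. As a sketch only (the commented-out failed device path is a placeholder, and other fields such as any nodeSelector are omitted), the storageClassDevices section of the edited LocalVolume CR might look like this:

apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: local-block
  namespace: local-storage
spec:
  storageClassDevices:
    - storageClassName: localblock
      volumeMode: Block
      devicePaths:
        # - /dev/disk/by-id/{failed-device-id}   # failed device, removed or commented out
        - /dev/disk/by-id/nvme-Amazon_EC2_NVMe_Instance_Storage_AWS6F45C01D7E84FE3E9   # new device
        - /dev/disk/by-id/nvme-Amazon_EC2_NVMe_Instance_Storage_AWS636BC945B4ECB9AE4   # new device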
In this CR, the following two new devices have been added using their by-id paths:

- nvme-Amazon_EC2_NVMe_Instance_Storage_AWS6F45C01D7E84FE3E9
- nvme-Amazon_EC2_NVMe_Instance_Storage_AWS636BC945B4ECB9AE4
Display the PVs with the localblock storage class.

$ oc get pv | grep localblock
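The example output was not preserved in this excerpt. Illustratively (the first PV name and the ages are placeholders; the capacity and the bound PV match the example later in this procedure), the listing should show a new Available localblock PV for the added device alongside the existing ones:

local-pv-3646185e   2328Gi   RWO   Delete   Available                                               localblock   9s
local-pv-8176b2bf   2328Gi   RWO   Delete   Bound       openshift-storage/ocs-deviceset-0-0-nvs68   localblock   4h49m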
Delete the storage resources associated with the failed node.
Identify the DeviceSet associated with the OSD to be replaced.

$ osd_id_to_remove=0
$ oc get -n openshift-storage -o yaml deployment rook-ceph-osd-${osd_id_to_remove} | grep ceph.rook.io/pvc

where, osd_id_to_remove is the integer in the pod name immediately after the rook-ceph-osd prefix. In this example, the deployment name is rook-ceph-osd-0.

Example output:

ceph.rook.io/pvc: ocs-deviceset-0-0-nvs68
ceph.rook.io/pvc: ocs-deviceset-0-0-nvs68

Identify the PV associated with the PVC.

$ oc get -n openshift-storage pvc ocs-deviceset-<x>-<y>-<pvc-suffix>

where, x, y, and pvc-suffix are the values in the DeviceSet identified in an earlier step.

Example output:

NAME                      STATUS   VOLUME              CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ocs-deviceset-0-0-nvs68   Bound    local-pv-8176b2bf   2328Gi     RWO            localblock     4h49m

In this example, the associated PV is local-pv-8176b2bf.

Change to the openshift-storage project.

$ oc project openshift-storage

Remove the failed OSD from the cluster.

$ oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=${osd_id_to_remove} | oc create -f -

Verify that the OSD is removed successfully by checking the status of the ocs-osd-removal pod. A status of Completed confirms that the OSD removal job succeeded.

# oc get pod -l job-name=ocs-osd-removal-${osd_id_to_remove} -n openshift-storage

Note: If ocs-osd-removal fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example:

# oc logs -l job-name=ocs-osd-removal-${osd_id_to_remove} -n openshift-storage --tail=-1

Delete the PV which was identified in earlier steps. In this example, the PV name is local-pv-8176b2bf.

$ oc delete pv local-pv-8176b2bf

Example output:

persistentvolume "local-pv-8176b2bf" deleted

Delete the crashcollector pod deployment identified in an earlier step.

$ oc delete deployment --selector=app=rook-ceph-crashcollector,node_name=<old_node_name> -n openshift-storage

Delete the ocs-osd-removal job(s).

$ oc delete job ocs-osd-removal-${osd_id_to_remove}

Example output:

job.batch "ocs-osd-removal-0" deleted
Verification steps
Execute the following command and verify that the new node is present in the output:
oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1Copy to Clipboard Copied! Toggle word wrap Toggle overflow Click Workloads → Pods, confirm that at least the following pods on the new node are in Running state:
-
csi-cephfsplugin-* -
csi-rbdplugin-*
-
Verify that all other required OpenShift Container Storage pods are in Running state.
Also, ensure that the new incremental mon is created and is in the Running state.
oc get pod -n openshift-storage | grep mon
$ oc get pod -n openshift-storage | grep monCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output:
rook-ceph-mon-a-64556f7659-c2ngc   1/1   Running   0   5h1m
rook-ceph-mon-b-7c8b74dc4d-tt6hd   1/1   Running   0   5h1m
rook-ceph-mon-d-57fb8c657-wg5f2    1/1   Running   0   27m

OSDs and mons might take several minutes to get to the Running state.
Verify that new OSD pods are running on the replacement node.
oc get pods -o wide -n openshift-storage| egrep -i new-node-name | egrep osd
$ oc get pods -o wide -n openshift-storage| egrep -i new-node-name | egrep osdCopy to Clipboard Copied! Toggle word wrap Toggle overflow (Optional) If data encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
For each of the new nodes identified in previous step, do the following:
Create a debug pod and open a chroot environment for the selected host(s).
oc debug node/<node name> chroot /host
$ oc debug node/<node name> $ chroot /hostCopy to Clipboard Copied! Toggle word wrap Toggle overflow Run “lsblk” and check for the “crypt” keyword beside the
ocs-devicesetname(s)lsblk
$ lsblkCopy to Clipboard Copied! Toggle word wrap Toggle overflow
- If verification steps fail, contact Red Hat Support.
4.3.2. Replacing an operational Amazon EC2 node on installer-provisioned infrastructure
Use this procedure to replace an operational node on Amazon EC2 I3 installer-provisioned infrastructure (IPI).
Replacing storage nodes in Amazon EC2 I3 infrastructure is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
Prerequisites
- Red Hat recommends that replacement nodes are configured with similar infrastructure and resources to the node being replaced.
- You must be logged into the OpenShift Container Platform (RHOCP) cluster.
Procedure
- Log in to OpenShift Web Console and click Compute → Nodes.
- Identify the node that needs to be replaced. Take a note of its Machine Name.
Get labels on the node to be replaced.
oc get nodes --show-labels | grep <node_name>
$ oc get nodes --show-labels | grep <node_name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Identify the mon (if any) and OSDs that are running in the node to be replaced.
oc get pods -n openshift-storage -o wide | grep -i <node_name>
$ oc get pods -n openshift-storage -o wide | grep -i <node_name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Scale down the deployments of the pods identified in the previous step.
For example:
oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage oc scale deployment --selector=app=rook-ceph-crashcollector,node_name=<node_name> --replicas=0 -n openshift-storage
$ oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage $ oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage $ oc scale deployment --selector=app=rook-ceph-crashcollector,node_name=<node_name> --replicas=0 -n openshift-storageCopy to Clipboard Copied! Toggle word wrap Toggle overflow Mark the nodes as unschedulable.
oc adm cordon <node_name>
$ oc adm cordon <node_name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Drain the node.
oc adm drain <node_name> --force --delete-local-data --ignore-daemonsets
$ oc adm drain <node_name> --force --delete-local-data --ignore-daemonsetsCopy to Clipboard Copied! Toggle word wrap Toggle overflow - Click Compute → Machines. Search for the required machine.
- Beside the required machine, click the Action menu (⋮) → Delete Machine.
- Click Delete to confirm the machine deletion. A new machine is automatically created.
Wait for the new machine to start and transition into Running state.
ImportantThis activity may take at least 5-10 minutes or more.
- Click Compute → Nodes in the OpenShift web console. Confirm if the new node is in Ready state.
Apply the OpenShift Container Storage label to the new node using any one of the following:
- From User interface
- For the new node, click Action Menu (⋮) → Edit Labels.
-
Add
cluster.ocs.openshift.io/openshift-storageand click Save.
- From Command line interface
- Execute the following command to apply the OpenShift Container Storage label to the new node:
oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Add the local storage devices available in the new worker node to the OpenShift Container Storage StorageCluster.
Add the new disk entries to LocalVolume CR.
Edit
LocalVolumeCR. You can either remove or comment out the failed device/dev/disk/by-id/{id}and add the new/dev/disk/by-id/{id}.oc get -n local-storage localvolume
$ oc get -n local-storage localvolumeCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output:
NAME AGE local-block 25h
NAME AGE local-block 25hCopy to Clipboard Copied! Toggle word wrap Toggle overflow oc edit -n local-storage localvolume local-block
$ oc edit -n local-storage localvolume local-blockCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Make sure to save the changes after editing the CR.
You can see that in this CR the below two new devices using by-id have been added.
-
nvme-Amazon_EC2_NVMe_Instance_Storage_AWS6F45C01D7E84FE3E9 -
nvme-Amazon_EC2_NVMe_Instance_Storage_AWS636BC945B4ECB9AE4
-
Display PVs with
localblock.oc get pv | grep localblock
$ oc get pv | grep localblockCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Delete the storage resources associated with the failed node.
Identify the DeviceSet associated with the OSD to be replaced.
osd_id_to_remove=0 oc get -n openshift-storage -o yaml deployment rook-ceph-osd-${osd_id_to_remove} | grep ceph.rook.io/pvc$ osd_id_to_remove=0 $ oc get -n openshift-storage -o yaml deployment rook-ceph-osd-${osd_id_to_remove} | grep ceph.rook.io/pvcCopy to Clipboard Copied! Toggle word wrap Toggle overflow where,
osd_id_to_removeis the integer in the pod name immediately after therook-ceph-osdprefix. In this example, the deployment name isrook-ceph-osd-0.Example output:
ceph.rook.io/pvc: ocs-deviceset-0-0-nvs68 ceph.rook.io/pvc: ocs-deviceset-0-0-nvs68
ceph.rook.io/pvc: ocs-deviceset-0-0-nvs68 ceph.rook.io/pvc: ocs-deviceset-0-0-nvs68Copy to Clipboard Copied! Toggle word wrap Toggle overflow Identify the PV associated with the PVC.
oc get -n openshift-storage pvc ocs-deviceset-<x>-<y>-<pvc-suffix>
$ oc get -n openshift-storage pvc ocs-deviceset-<x>-<y>-<pvc-suffix>Copy to Clipboard Copied! Toggle word wrap Toggle overflow where,
x,y, andpvc-suffixare the values in the DeviceSet identified in an earlier step.Example output:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE ocs-deviceset-0-0-nvs68 Bound local-pv-8176b2bf 2328Gi RWO localblock 4h49m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE ocs-deviceset-0-0-nvs68 Bound local-pv-8176b2bf 2328Gi RWO localblock 4h49mCopy to Clipboard Copied! Toggle word wrap Toggle overflow In this example, the associated PV is
local-pv-8176b2bf.Change to the
openshift-storageproject.oc project openshift-storage
$ oc project openshift-storageCopy to Clipboard Copied! Toggle word wrap Toggle overflow Remove the failed OSD from the cluster.
oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=${osd_id_to_remove} | oc create -f -$ oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=${osd_id_to_remove} | oc create -f -Copy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that the OSD is removed successfully by checking the status of the
ocs-osd-removalpod. A status ofCompletedconfirms that the OSD removal job succeeded.oc get pod -l job-name=ocs-osd-removal-${osd_id_to_remove} -n openshift-storage# oc get pod -l job-name=ocs-osd-removal-${osd_id_to_remove} -n openshift-storageCopy to Clipboard Copied! Toggle word wrap Toggle overflow NoteIf ocs-osd-removal fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example:
oc logs -l job-name=ocs-osd-removal-${osd_id_to_remove} -n openshift-storage --tail=-1# oc logs -l job-name=ocs-osd-removal-${osd_id_to_remove} -n openshift-storage --tail=-1Copy to Clipboard Copied! Toggle word wrap Toggle overflow Delete the PV which was identified in earlier steps. In this example, the PV name is
local-pv-8176b2bf.oc delete pv local-pv-8176b2bf
$ oc delete pv local-pv-8176b2bfCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output:
persistentvolume "local-pv-8176b2bf" deleted
persistentvolume "local-pv-8176b2bf" deletedCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Delete
crashcollectorpod deployment identified in an earlier step.oc delete deployment --selector=app=rook-ceph-crashcollector,node_name=<old_node_name> -n openshift-storage
$ oc delete deployment --selector=app=rook-ceph-crashcollector,node_name=<old_node_name> -n openshift-storageCopy to Clipboard Copied! Toggle word wrap Toggle overflow Delete the
rook-ceph-operator.oc delete -n openshift-storage pod rook-ceph-operator-6f74fb5bff-2d982
$ oc delete -n openshift-storage pod rook-ceph-operator-6f74fb5bff-2d982Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output:
pod "rook-ceph-operator-6f74fb5bff-2d982" deleted
pod "rook-ceph-operator-6f74fb5bff-2d982" deletedCopy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that the
rook-ceph-operatorpod is restarted.oc get -n openshift-storage pod -l app=rook-ceph-operator
$ oc get -n openshift-storage pod -l app=rook-ceph-operatorCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output:
NAME READY STATUS RESTARTS AGE rook-ceph-operator-6f74fb5bff-7mvrq 1/1 Running 0 66s
NAME READY STATUS RESTARTS AGE rook-ceph-operator-6f74fb5bff-7mvrq 1/1 Running 0 66sCopy to Clipboard Copied! Toggle word wrap Toggle overflow Creation of the new OSD may take several minutes after the operator starts.
Delete the
ocs-osd-removaljob(s).oc delete job ocs-osd-removal-${osd_id_to_remove}$ oc delete job ocs-osd-removal-${osd_id_to_remove}Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output:
job.batch "ocs-osd-removal-0" deleted
job.batch "ocs-osd-removal-0" deletedCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification steps
Execute the following command and verify that the new node is present in the output:
oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1Copy to Clipboard Copied! Toggle word wrap Toggle overflow Click Workloads → Pods, confirm that at least the following pods on the new node are in Running state:
-
csi-cephfsplugin-* -
csi-rbdplugin-*
-
Verify that all other required OpenShift Container Storage pods are in Running state.
Also, ensure that the new incremental mon is created and is in the Running state.
oc get pod -n openshift-storage | grep mon
$ oc get pod -n openshift-storage | grep monCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output:
rook-ceph-mon-a-64556f7659-c2ngc   1/1   Running   0   5h1m
rook-ceph-mon-b-7c8b74dc4d-tt6hd   1/1   Running   0   5h1m
rook-ceph-mon-d-57fb8c657-wg5f2    1/1   Running   0   27m

OSDs and mons might take several minutes to get to the Running state.
Verify that new OSD pods are running on the replacement node.
oc get pods -o wide -n openshift-storage| egrep -i new-node-name | egrep osd
$ oc get pods -o wide -n openshift-storage| egrep -i new-node-name | egrep osdCopy to Clipboard Copied! Toggle word wrap Toggle overflow (Optional) If data encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
For each of the new nodes identified in previous step, do the following:
Create a debug pod and open a chroot environment for the selected host(s).
oc debug node/<node name> chroot /host
$ oc debug node/<node name> $ chroot /hostCopy to Clipboard Copied! Toggle word wrap Toggle overflow Run “lsblk” and check for the “crypt” keyword beside the
ocs-devicesetname(s)lsblk
$ lsblkCopy to Clipboard Copied! Toggle word wrap Toggle overflow
- If verification steps fail, contact Red Hat Support.
4.3.3. Replacing a failed Amazon EC2 node on user-provisioned infrastructure
Because Amazon EC2 I3 instances use ephemeral storage for OpenShift Container Storage, powering off an instance might cause data loss. Use this procedure to recover from such an instance power-off on Amazon EC2 infrastructure.
Replacing storage nodes in Amazon EC2 I3 infrastructure is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
Prerequisites
- Red Hat recommends that replacement nodes are configured with similar infrastructure and resources to the node being replaced.
- You must be logged into the OpenShift Container Platform (RHOCP) cluster.
Procedure
Identify the node and get labels on the node to be replaced.
oc get nodes --show-labels | grep <node_name>
$ oc get nodes --show-labels | grep <node_name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Identify the mon (if any) and OSDs that are running in the node to be replaced.
oc get pods -n openshift-storage -o wide | grep -i <node_name>
$ oc get pods -n openshift-storage -o wide | grep -i <node_name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Scale down the deployments of the pods identified in the previous step.
For example:
oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage oc scale deployment --selector=app=rook-ceph-crashcollector,node_name=<node_name> --replicas=0 -n openshift-storage
$ oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage $ oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage $ oc scale deployment --selector=app=rook-ceph-crashcollector,node_name=<node_name> --replicas=0 -n openshift-storageCopy to Clipboard Copied! Toggle word wrap Toggle overflow Mark the nodes as unschedulable.
oc adm cordon <node_name>
$ oc adm cordon <node_name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Remove the pods which are in Terminating state.
oc get pods -A -o wide | grep -i <node_name> | awk '{if ($4 == "Terminating") system ("oc -n " $1 " delete pods " $2 " --grace-period=0 " " --force ")}'$ oc get pods -A -o wide | grep -i <node_name> | awk '{if ($4 == "Terminating") system ("oc -n " $1 " delete pods " $2 " --grace-period=0 " " --force ")}'Copy to Clipboard Copied! Toggle word wrap Toggle overflow Drain the node.
oc adm drain <node_name> --force --delete-local-data --ignore-daemonsets
$ oc adm drain <node_name> --force --delete-local-data --ignore-daemonsetsCopy to Clipboard Copied! Toggle word wrap Toggle overflow Delete the node.
oc delete node <node_name>
$ oc delete node <node_name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Create a new Amazon EC2 I3 machine instance with the required infrastructure. See Supported Infrastructure and Platforms.
- Create a new OpenShift Container Platform node using the new Amazon EC2 I3 machine instance.
Check for certificate signing requests (CSRs) related to OpenShift Container Platform that are in Pending state:
oc get csr
$ oc get csrCopy to Clipboard Copied! Toggle word wrap Toggle overflow Approve all required OpenShift Container Platform CSRs for the new node:
oc adm certificate approve <Certificate_Name>
$ oc adm certificate approve <Certificate_Name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Click Compute → Nodes in the OpenShift web console. Confirm if the new node is in Ready state.
Apply the OpenShift Container Storage label to the new node using any one of the following:
- From User interface
- For the new node, click Action Menu (⋮) → Edit Labels.
-
Add
cluster.ocs.openshift.io/openshift-storageand click Save.
- From Command line interface
- Execute the following command to apply the OpenShift Container Storage label to the new node:
oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Add the local storage devices available in the new worker node to the OpenShift Container Storage StorageCluster.
Add the new disk entries to LocalVolume CR.
Edit
LocalVolumeCR. You can either remove or comment out the failed device/dev/disk/by-id/{id}and add the new/dev/disk/by-id/{id}.oc get -n local-storage localvolume
$ oc get -n local-storage localvolumeCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output:
NAME AGE local-block 25h
NAME AGE local-block 25hCopy to Clipboard Copied! Toggle word wrap Toggle overflow oc edit -n local-storage localvolume local-block
$ oc edit -n local-storage localvolume local-blockCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Make sure to save the changes after editing the CR.
You can see that in this CR the below two new devices using by-id have been added.
-
nvme-Amazon_EC2_NVMe_Instance_Storage_AWS6F45C01D7E84FE3E9 -
nvme-Amazon_EC2_NVMe_Instance_Storage_AWS636BC945B4ECB9AE4
-
Display PVs with
localblock.oc get pv | grep localblock
$ oc get pv | grep localblockCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Delete the storage resources associated with the failed node.
Identify the DeviceSet associated with the OSD to be replaced.
osd_id_to_remove=0 oc get -n openshift-storage -o yaml deployment rook-ceph-osd-${osd_id_to_remove} | grep ceph.rook.io/pvc$ osd_id_to_remove=0 $ oc get -n openshift-storage -o yaml deployment rook-ceph-osd-${osd_id_to_remove} | grep ceph.rook.io/pvcCopy to Clipboard Copied! Toggle word wrap Toggle overflow where,
osd_id_to_removeis the integer in the pod name immediately after therook-ceph-osdprefix. In this example, the deployment name isrook-ceph-osd-0.Example output:
ceph.rook.io/pvc: ocs-deviceset-0-0-nvs68 ceph.rook.io/pvc: ocs-deviceset-0-0-nvs68
ceph.rook.io/pvc: ocs-deviceset-0-0-nvs68 ceph.rook.io/pvc: ocs-deviceset-0-0-nvs68Copy to Clipboard Copied! Toggle word wrap Toggle overflow Identify the PV associated with the PVC.
oc get -n openshift-storage pvc ocs-deviceset-<x>-<y>-<pvc-suffix>
$ oc get -n openshift-storage pvc ocs-deviceset-<x>-<y>-<pvc-suffix>Copy to Clipboard Copied! Toggle word wrap Toggle overflow where,
x,y, andpvc-suffixare the values in the DeviceSet identified in an earlier step.Example output:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE ocs-deviceset-0-0-nvs68 Bound local-pv-8176b2bf 2328Gi RWO localblock 4h49m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE ocs-deviceset-0-0-nvs68 Bound local-pv-8176b2bf 2328Gi RWO localblock 4h49mCopy to Clipboard Copied! Toggle word wrap Toggle overflow In this example, the associated PV is
local-pv-8176b2bf.Change into the
openshift-storageproject.oc project openshift-storage
$ oc project openshift-storageCopy to Clipboard Copied! Toggle word wrap Toggle overflow Remove the failed OSD from the cluster.
$ oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=${osd_id_to_remove} | oc create -f -

Verify that the OSD is removed successfully by checking the status of the
ocs-osd-removalpod. A status ofCompletedconfirms that the OSD removal job succeeded.oc get pod -l job-name=ocs-osd-removal-${osd_id_to_remove} -n openshift-storage# oc get pod -l job-name=ocs-osd-removal-${osd_id_to_remove} -n openshift-storageCopy to Clipboard Copied! Toggle word wrap Toggle overflow NoteIf ocs-osd-removal fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example:
oc logs -l job-name=ocs-osd-removal-${osd_id_to_remove} -n openshift-storage --tail=-1# oc logs -l job-name=ocs-osd-removal-${osd_id_to_remove} -n openshift-storage --tail=-1Copy to Clipboard Copied! Toggle word wrap Toggle overflow Delete the PV which was identified in earlier steps. In this example, the PV name is
local-pv-8176b2bf.oc delete pv local-pv-8176b2bf
$ oc delete pv local-pv-8176b2bfCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output:
persistentvolume "local-pv-8176b2bf" deleted
persistentvolume "local-pv-8176b2bf" deletedCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Delete
crashcollectorpod deployment identified in an earlier step.oc delete deployment --selector=app=rook-ceph-crashcollector,node_name=<old_node_name> -n openshift-storage
$ oc delete deployment --selector=app=rook-ceph-crashcollector,node_name=<old_node_name> -n openshift-storageCopy to Clipboard Copied! Toggle word wrap Toggle overflow Delete the
ocs-osd-removaljob(s).oc delete job ocs-osd-removal-${osd_id_to_remove}$ oc delete job ocs-osd-removal-${osd_id_to_remove}Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output:
job.batch "ocs-osd-removal-0" deleted
job.batch "ocs-osd-removal-0" deletedCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification steps
Execute the following command and verify that the new node is present in the output:
oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1Copy to Clipboard Copied! Toggle word wrap Toggle overflow Click Workloads → Pods, confirm that at least the following pods on the new node are in Running state:
-
csi-cephfsplugin-* -
csi-rbdplugin-*
-
Verify that all other required OpenShift Container Storage pods are in Running state.
Also, ensure that the new incremental mon is created and is in the Running state.
oc get pod -n openshift-storage | grep mon
$ oc get pod -n openshift-storage | grep monCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output:
rook-ceph-mon-a-64556f7659-c2ngc   1/1   Running   0   5h1m
rook-ceph-mon-b-7c8b74dc4d-tt6hd   1/1   Running   0   5h1m
rook-ceph-mon-d-57fb8c657-wg5f2    1/1   Running   0   27m

OSDs and mons might take several minutes to get to the Running state.
Verify that new OSD pods are running on the replacement node.
oc get pods -o wide -n openshift-storage| egrep -i new-node-name | egrep osd
$ oc get pods -o wide -n openshift-storage| egrep -i new-node-name | egrep osdCopy to Clipboard Copied! Toggle word wrap Toggle overflow (Optional) If data encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
For each of the new nodes identified in previous step, do the following:
Create a debug pod and open a chroot environment for the selected host(s).
oc debug node/<node name> chroot /host
$ oc debug node/<node name> $ chroot /hostCopy to Clipboard Copied! Toggle word wrap Toggle overflow Run “lsblk” and check for the “crypt” keyword beside the
ocs-devicesetname(s)lsblk
$ lsblkCopy to Clipboard Copied! Toggle word wrap Toggle overflow
- If verification steps fail, contact Red Hat Support.
4.3.4. Replacing a failed Amazon EC2 node on installer-provisioned infrastructure
Because Amazon EC2 I3 instances use ephemeral storage for OpenShift Container Storage, powering off an instance might cause data loss. Use this procedure to recover from such an instance power-off on Amazon EC2 infrastructure.
Replacing storage nodes in Amazon EC2 I3 infrastructure is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
Prerequisites
- Red Hat recommends that replacement nodes are configured with similar infrastructure and resources to the node being replaced.
- You must be logged into the OpenShift Container Platform (RHOCP) cluster.
Procedure
- Log in to OpenShift Web Console and click Compute → Nodes.
- Identify the node that needs to be replaced. Take a note of its Machine Name.
Get the labels on the node to be replaced.
oc get nodes --show-labels | grep <node_name>
$ oc get nodes --show-labels | grep <node_name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Identify the mon (if any) and OSDs that are running in the node to be replaced.
oc get pods -n openshift-storage -o wide | grep -i <node_name>
$ oc get pods -n openshift-storage -o wide | grep -i <node_name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Scale down the deployments of the pods identified in the previous step.
For example:
oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage oc scale deployment --selector=app=rook-ceph-crashcollector,node_name=<node_name> --replicas=0 -n openshift-storage
$ oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage $ oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage $ oc scale deployment --selector=app=rook-ceph-crashcollector,node_name=<node_name> --replicas=0 -n openshift-storageCopy to Clipboard Copied! Toggle word wrap Toggle overflow Mark the node as unschedulable.
oc adm cordon <node_name>
$ oc adm cordon <node_name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Remove the pods which are in Terminating state.
oc get pods -A -o wide | grep -i <node_name> | awk '{if ($4 == "Terminating") system ("oc -n " $1 " delete pods " $2 " --grace-period=0 " " --force ")}'$ oc get pods -A -o wide | grep -i <node_name> | awk '{if ($4 == "Terminating") system ("oc -n " $1 " delete pods " $2 " --grace-period=0 " " --force ")}'Copy to Clipboard Copied! Toggle word wrap Toggle overflow Drain the node.
oc adm drain <node_name> --force --delete-local-data --ignore-daemonsets
$ oc adm drain <node_name> --force --delete-local-data --ignore-daemonsetsCopy to Clipboard Copied! Toggle word wrap Toggle overflow - Click Compute → Machines. Search for the required machine.
- Beside the required machine, click the Action menu (⋮) → Delete Machine.
- Click Delete to confirm the machine deletion. A new machine is automatically created.
Wait for the new machine to start and transition into Running state.
ImportantThis activity may take at least 5-10 minutes or more.
- Click Compute → Nodes in the OpenShift web console. Confirm if the new node is in Ready state.
Apply the OpenShift Container Storage label to the new node using any one of the following:
- From User interface
- For the new node, click Action Menu (⋮) → Edit Labels.
-
Add
cluster.ocs.openshift.io/openshift-storageand click Save.
- From Command line interface
- Execute the following command to apply the OpenShift Container Storage label to the new node:
oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Add the local storage devices available in the new worker node to the OpenShift Container Storage StorageCluster.
Add the new disk entries to LocalVolume CR.
Edit
LocalVolumeCR. You can either remove or comment out the failed device/dev/disk/by-id/{id}and add the new/dev/disk/by-id/{id}.oc get -n local-storage localvolume
$ oc get -n local-storage localvolumeCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output:
NAME AGE local-block 25h
NAME AGE local-block 25hCopy to Clipboard Copied! Toggle word wrap Toggle overflow oc edit -n local-storage localvolume local-block
$ oc edit -n local-storage localvolume local-blockCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Make sure to save the changes after editing the CR.
You can see that in this CR the below two new devices using by-id have been added.
-
nvme-Amazon_EC2_NVMe_Instance_Storage_AWS6F45C01D7E84FE3E9 -
nvme-Amazon_EC2_NVMe_Instance_Storage_AWS636BC945B4ECB9AE4
-
Display PVs with
localblock.oc get pv | grep localblock
$ oc get pv | grep localblockCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Delete the storage resources associated with the failed node.
Identify the DeviceSet associated with the OSD to be replaced.
osd_id_to_remove=0 oc get -n openshift-storage -o yaml deployment rook-ceph-osd-${osd_id_to_remove} | grep ceph.rook.io/pvc$ osd_id_to_remove=0 $ oc get -n openshift-storage -o yaml deployment rook-ceph-osd-${osd_id_to_remove} | grep ceph.rook.io/pvcCopy to Clipboard Copied! Toggle word wrap Toggle overflow where,
osd_id_to_removeis the integer in the pod name immediately after therook-ceph-osdprefix. In this example, the deployment name isrook-ceph-osd-0.Example output:
ceph.rook.io/pvc: ocs-deviceset-0-0-nvs68 ceph.rook.io/pvc: ocs-deviceset-0-0-nvs68
ceph.rook.io/pvc: ocs-deviceset-0-0-nvs68 ceph.rook.io/pvc: ocs-deviceset-0-0-nvs68Copy to Clipboard Copied! Toggle word wrap Toggle overflow Identify the PV associated with the PVC.
oc get -n openshift-storage pvc ocs-deviceset-<x>-<y>-<pvc-suffix>
$ oc get -n openshift-storage pvc ocs-deviceset-<x>-<y>-<pvc-suffix>Copy to Clipboard Copied! Toggle word wrap Toggle overflow where,
x,y, andpvc-suffixare the values in the DeviceSet identified in an earlier step.Example output:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE ocs-deviceset-0-0-nvs68 Bound local-pv-8176b2bf 2328Gi RWO localblock 4h49m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE ocs-deviceset-0-0-nvs68 Bound local-pv-8176b2bf 2328Gi RWO localblock 4h49mCopy to Clipboard Copied! Toggle word wrap Toggle overflow In this example, the associated PV is
local-pv-8176b2bf.Change into the
openshift-storageproject.oc project openshift-storage
$ oc project openshift-storageCopy to Clipboard Copied! Toggle word wrap Toggle overflow Remove the failed OSD from the cluster.
$ oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=${osd_id_to_remove} | oc create -f -

Verify that the OSD is removed successfully by checking the status of the
ocs-osd-removalpod. A status ofCompletedconfirms that the OSD removal job succeeded.oc get pod -l job-name=ocs-osd-removal-${osd_id_to_remove} -n openshift-storage# oc get pod -l job-name=ocs-osd-removal-${osd_id_to_remove} -n openshift-storageCopy to Clipboard Copied! Toggle word wrap Toggle overflow NoteIf ocs-osd-removal fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example:
oc logs -l job-name=ocs-osd-removal-${osd_id_to_remove} -n openshift-storage --tail=-1# oc logs -l job-name=ocs-osd-removal-${osd_id_to_remove} -n openshift-storage --tail=-1Copy to Clipboard Copied! Toggle word wrap Toggle overflow Delete the PV which was identified in earlier steps. In this example, the PV name is
local-pv-8176b2bf.oc delete pv local-pv-8176b2bf
$ oc delete pv local-pv-8176b2bfCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output:
persistentvolume "local-pv-8176b2bf" deleted
persistentvolume "local-pv-8176b2bf" deletedCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Delete
crashcollectorpod deployment identified in an earlier step.oc delete deployment --selector=app=rook-ceph-crashcollector,node_name=<old_node_name> -n openshift-storage
$ oc delete deployment --selector=app=rook-ceph-crashcollector,node_name=<old_node_name> -n openshift-storageCopy to Clipboard Copied! Toggle word wrap Toggle overflow Delete the
ocs-osd-removaljob(s).oc delete job ocs-osd-removal-${osd_id_to_remove}$ oc delete job ocs-osd-removal-${osd_id_to_remove}Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output:
job.batch "ocs-osd-removal-0" deleted
job.batch "ocs-osd-removal-0" deletedCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification steps
Execute the following command and verify that the new node is present in the output:
oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1Copy to Clipboard Copied! Toggle word wrap Toggle overflow Click Workloads → Pods, confirm that at least the following pods on the new node are in Running state:
-
csi-cephfsplugin-* -
csi-rbdplugin-*
-
Verify that all other required OpenShift Container Storage pods are in Running state.
Also, ensure that the new incremental mon is created and is in the Running state.
oc get pod -n openshift-storage | grep mon
$ oc get pod -n openshift-storage | grep monCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output:
rook-ceph-mon-a-64556f7659-c2ngc   1/1   Running   0   5h1m
rook-ceph-mon-b-7c8b74dc4d-tt6hd   1/1   Running   0   5h1m
rook-ceph-mon-d-57fb8c657-wg5f2    1/1   Running   0   27m

OSDs and mons might take several minutes to get to the Running state.
Verify that new OSD pods are running on the replacement node.
oc get pods -o wide -n openshift-storage| egrep -i new-node-name | egrep osd
$ oc get pods -o wide -n openshift-storage| egrep -i new-node-name | egrep osdCopy to Clipboard Copied! Toggle word wrap Toggle overflow (Optional) If data encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
For each of the new nodes identified in previous step, do the following:
Create a debug pod and open a chroot environment for the selected host(s).
oc debug node/<node name> chroot /host
$ oc debug node/<node name> $ chroot /hostCopy to Clipboard Copied! Toggle word wrap Toggle overflow Run “lsblk” and check for the “crypt” keyword beside the
ocs-devicesetname(s)lsblk
$ lsblkCopy to Clipboard Copied! Toggle word wrap Toggle overflow
- If verification steps fail, contact Red Hat Support.
4.4. Replacing storage nodes on VMware infrastructure
- To replace an operational node, see Section 4.4.1, “Replacing an operational node on VMware user-provisioned infrastructure”
- To replace a failed node, see Section 4.4.2, “Replacing a failed node on VMware user-provisioned infrastructure”
4.4.1. Replacing an operational node on VMware user-provisioned infrastructure
Prerequisites
- Red Hat recommends that replacement nodes are configured with similar infrastructure, resources, and disks to the node being replaced.
- You must be logged into the OpenShift Container Platform (RHOCP) cluster.
- If you upgraded to OpenShift Container Storage 4.6 from a previous version instead of performing a fresh installation, ensure that you have completed Post-update configuration changes.
Procedure
Identify the node and get labels on the node to be replaced.
oc get nodes --show-labels | grep <node_name>
$ oc get nodes --show-labels | grep <node_name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Identify the
mon(if any) and OSDs that are running in the node to be replaced.oc get pods -n openshift-storage -o wide | grep -i <node_name>
$ oc get pods -n openshift-storage -o wide | grep -i <node_name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Scale down the deployments of the pods identified in the previous step.
For example:
oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage oc scale deployment --selector=app=rook-ceph-crashcollector,node_name=<node_name> --replicas=0 -n openshift-storage
$ oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage $ oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage $ oc scale deployment --selector=app=rook-ceph-crashcollector,node_name=<node_name> --replicas=0 -n openshift-storageCopy to Clipboard Copied! Toggle word wrap Toggle overflow Mark the node as unschedulable.
oc adm cordon <node_name>
$ oc adm cordon <node_name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Drain the node.
oc adm drain <node_name> --force --delete-local-data --ignore-daemonsets
$ oc adm drain <node_name> --force --delete-local-data --ignore-daemonsetsCopy to Clipboard Copied! Toggle word wrap Toggle overflow Delete the node.
oc delete node <node_name>
$ oc delete node <node_name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Log in to vSphere and terminate the identified VM.
- Create a new VM on VMware with the required infrastructure. See Supported Infrastructure and Platforms.
- Create a new OpenShift Container Platform worker node using the new VM.
Check for certificate signing requests (CSRs) related to OpenShift Container Platform that are in Pending state:
oc get csr
$ oc get csrCopy to Clipboard Copied! Toggle word wrap Toggle overflow Approve all required OpenShift Container Platform CSRs for the new node:
oc adm certificate approve <Certificate_Name>
$ oc adm certificate approve <Certificate_Name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Click Compute → Nodes in OpenShift Web Console, confirm if the new node is in Ready state.
Apply the OpenShift Container Storage label to the new node using any one of the following:
- From User interface
- For the new node, click Action Menu (⋮) → Edit Labels.
-
Add
cluster.ocs.openshift.io/openshift-storageand click Save.
- From Command line interface
Execute the following command to apply the OpenShift Container Storage label to the new node:
oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Add a new worker node to localVolumeDiscovery and localVolumeSet.

Update the localVolumeDiscovery definition to include the new node and remove the failed node. Remember to save before exiting the editor.
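The edited definition itself was not preserved in this excerpt. As a sketch only (the CR name, namespace, and the other hostnames are placeholders; only server3.example.com and newnode.example.com come from this procedure), the updated nodeSelector would list the surviving nodes and the replacement node:

apiVersion: local.storage.openshift.io/v1alpha1
kind: LocalVolumeDiscovery
metadata:
  name: auto-discover-devices
  namespace: openshift-local-storage
spec:
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
              - server1.example.com      # placeholder for an existing node
              - server2.example.com      # placeholder for an existing node
              - newnode.example.com      # replacement node
              # server3.example.com is removed (failed node)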
In the above example, server3.example.com was removed and newnode.example.com is the new node.

Determine which localVolumeSet to edit. Replace local-storage-project in the following commands with the name of your local storage project. The default project name is openshift-local-storage in OpenShift Container Storage 4.6 and later. Previous versions use local-storage by default.

# oc get -n local-storage-project localvolumeset

Example output:

NAME         AGE
localblock   25h

Update the localVolumeSet definition to include the new node and remove the failed node. Remember to save before exiting the editor.
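Again, the edited definition was not preserved here. As a sketch only (the namespace is shown as the default, the other hostnames are placeholders, and additional spec fields are omitted), the updated localVolumeSet would look similar to the following:

apiVersion: local.storage.openshift.io/v1alpha1
kind: LocalVolumeSet
metadata:
  name: localblock
  namespace: openshift-local-storage
spec:
  storageClassName: localblock
  volumeMode: Block
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
              - server1.example.com      # placeholder for an existing node
              - server2.example.com      # placeholder for an existing node
              - newnode.example.com      # replacement node
              # server3.example.com is removed (failed node)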
In the above example, server3.example.com was removed and newnode.example.com is the new node.

Verify that the new localblock PV is available.

$ oc get pv | grep localblock
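The example output was not preserved in this excerpt. Illustratively (the PV name and age are placeholders; the capacity matches the example later in this procedure), a new Available localblock PV should appear for the replacement node:

local-pv-551d950   1490Gi   RWO   Delete   Available   localblock   26s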
openshift-storageproject.oc project openshift-storage
$ oc project openshift-storageCopy to Clipboard Copied! Toggle word wrap Toggle overflow Remove the failed OSD from the cluster.
oc process -n openshift-storage ocs-osd-removal \ -p FAILED_OSD_IDS=failed-osd-id1,failed-osd-id2 | oc create -f -
$ oc process -n openshift-storage ocs-osd-removal \ -p FAILED_OSD_IDS=failed-osd-id1,failed-osd-id2 | oc create -f -Copy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that the OSD was removed successfully by checking the status of the
ocs-osd-removalpod.A status of
Completedconfirms that the OSD removal job succeeded.oc get pod -l job-name=ocs-osd-removal-failed-osd-id -n openshift-storage
# oc get pod -l job-name=ocs-osd-removal-failed-osd-id -n openshift-storageCopy to Clipboard Copied! Toggle word wrap Toggle overflow NoteIf
ocs-osd-removalfails and the pod is not in the expectedCompletedstate, check the pod logs for further debugging. For example:oc logs -l job-name=ocs-osd-removal-failed-osd_id -n openshift-storage --tail=-1
# oc logs -l job-name=ocs-osd-removal-failed-osd_id -n openshift-storage --tail=-1Copy to Clipboard Copied! Toggle word wrap Toggle overflow Delete the PV associated with the failed node.
Identify the PV associated with the PVC.
oc get pv -L kubernetes.io/hostname | grep localblock | grep Released local-pv-d6bf175b 1490Gi RWO Delete Released openshift-storage/ocs-deviceset-0-data-0-6c5pw localblock 2d22h compute-1
# oc get pv -L kubernetes.io/hostname | grep localblock | grep Released local-pv-d6bf175b 1490Gi RWO Delete Released openshift-storage/ocs-deviceset-0-data-0-6c5pw localblock 2d22h compute-1Copy to Clipboard Copied! Toggle word wrap Toggle overflow Delete the PV.
oc delete pv <persistent-volume>
# oc delete pv <persistent-volume>Copy to Clipboard Copied! Toggle word wrap Toggle overflow For example:
# oc delete pv local-pv-d6bf175b

persistentvolume "local-pv-d6bf175b" deleted
Delete the
crashcollectorpod deployment.oc delete deployment --selector=app=rook-ceph-crashcollector,node_name=failed-node-name -n openshift-storage
$ oc delete deployment --selector=app=rook-ceph-crashcollector,node_name=failed-node-name -n openshift-storageCopy to Clipboard Copied! Toggle word wrap Toggle overflow Delete the
ocs-osd-removaljob.oc delete job ocs-osd-removal-${osd_id_to_remove}# oc delete job ocs-osd-removal-${osd_id_to_remove}Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output:
job.batch "ocs-osd-removal-0" deleted
job.batch "ocs-osd-removal-0" deletedCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification steps
Execute the following command and verify that the new node is present in the output:
oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1Copy to Clipboard Copied! Toggle word wrap Toggle overflow Click Workloads → Pods, confirm that at least the following pods on the new node are in
Runningstate:-
csi-cephfsplugin-* -
csi-rbdplugin-*
-
Verify that all other required OpenShift Container Storage pods are in Running state.
Ensure that the new incremental
monis created and is in the Running state.oc get pod -n openshift-storage | grep mon
$ oc get pod -n openshift-storage | grep monCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output:
rook-ceph-mon-c-64556f7659-c2ngc   1/1   Running   0   6h14m
rook-ceph-mon-d-7c8b74dc4d-tt6hd   1/1   Running   0   4h24m
rook-ceph-mon-e-57fb8c657-wg5f2    1/1   Running   0   162m

OSD and Mon pods might take several minutes to get to the Running state.

Verify that new OSD pods are running on the replacement node.
oc get pods -o wide -n openshift-storage| egrep -i new-node-name | egrep osd
$ oc get pods -o wide -n openshift-storage| egrep -i new-node-name | egrep osdCopy to Clipboard Copied! Toggle word wrap Toggle overflow (Optional) If data encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
For each of the new nodes identified in previous step, do the following:
Create a debug pod and open a chroot environment for the selected host(s).
oc debug node/<node name> chroot /host
$ oc debug node/<node name> $ chroot /hostCopy to Clipboard Copied! Toggle word wrap Toggle overflow Run “lsblk” and check for the “crypt” keyword beside the
ocs-devicesetname(s)lsblk
$ lsblkCopy to Clipboard Copied! Toggle word wrap Toggle overflow
- If verification steps fail, contact Red Hat Support.
4.4.2. Replacing a failed node on VMware user-provisioned infrastructure
Prerequisites
- Red Hat recommends that replacement nodes are configured with similar infrastructure, resources, and disks to the node being replaced.
- You must be logged into the OpenShift Container Platform (RHOCP) cluster.
- If you upgraded to OpenShift Container Storage 4.6 from a previous version instead of performing a fresh installation, ensure that you have completed Post-update configuration changes.
Procedure
Identify the node and get labels on the node to be replaced.
oc get nodes --show-labels | grep <node_name>
$ oc get nodes --show-labels | grep <node_name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Identify the
mon(if any) and OSDs that are running in the node to be replaced.oc get pods -n openshift-storage -o wide | grep -i <node_name>
$ oc get pods -n openshift-storage -o wide | grep -i <node_name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Scale down the deployments of the pods identified in the previous step.
For example:
oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage oc scale deployment --selector=app=rook-ceph-crashcollector,node_name=<node_name> --replicas=0 -n openshift-storage
$ oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage $ oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage $ oc scale deployment --selector=app=rook-ceph-crashcollector,node_name=<node_name> --replicas=0 -n openshift-storageCopy to Clipboard Copied! Toggle word wrap Toggle overflow Mark the node as unschedulable.
oc adm cordon <node_name>
$ oc adm cordon <node_name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Remove the pods which are in Terminating state.
oc get pods -A -o wide | grep -i <node_name> | awk '{if ($4 == "Terminating") system ("oc -n " $1 " delete pods " $2 " --grace-period=0 " " --force ")}'$ oc get pods -A -o wide | grep -i <node_name> | awk '{if ($4 == "Terminating") system ("oc -n " $1 " delete pods " $2 " --grace-period=0 " " --force ")}'Copy to Clipboard Copied! Toggle word wrap Toggle overflow Drain the node.
oc adm drain <node_name> --force --delete-local-data --ignore-daemonsets
$ oc adm drain <node_name> --force --delete-local-data --ignore-daemonsetsCopy to Clipboard Copied! Toggle word wrap Toggle overflow Delete the node.
oc delete node <node_name>
$ oc delete node <node_name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Log in to vSphere and terminate the identified VM.
- Create a new VM on VMware with the required infrastructure. See Supported Infrastructure and Platforms.
- Create a new OpenShift Container Platform worker node using the new VM.
Check for certificate signing requests (CSRs) related to OpenShift Container Platform that are in Pending state:
$ oc get csr
Approve all required OpenShift Container Platform CSRs for the new node:
$ oc adm certificate approve <Certificate_Name>
- Click Compute → Nodes in OpenShift Web Console, and confirm that the new node is in Ready state.
Apply the OpenShift Container Storage label to the new node using any one of the following:
- From the user interface
- For the new node, click Action Menu (⋮) → Edit Labels.
- Add cluster.ocs.openshift.io/openshift-storage and click Save.
- From the command line interface
Execute the following command to apply the OpenShift Container Storage label to the new node:
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
Add a new worker node to localVolumeDiscovery and localVolumeSet.
Update the localVolumeDiscovery definition to include the new node and remove the failed node, as sketched below. Remember to save before exiting the editor.
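A rough sketch of the edit, assuming the default localVolumeDiscovery resource name auto-discover-devices and illustrative host names; only the nodeSelector values change:
# oc edit -n local-storage-project localvolumediscovery auto-discover-devices
[...]
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
              - server1.example.com
              - server2.example.com
              - newnode.example.com
[...]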
In the above example, server3.example.com was removed and newnode.example.com is the new node.
Determine which localVolumeSet to edit.
Replace local-storage-project in the following commands with the name of your local storage project. The default project name is openshift-local-storage in OpenShift Container Storage 4.6 and later. Previous versions use local-storage by default.
# oc get -n local-storage-project localvolumeset
NAME         AGE
localblock   25h
Update the localVolumeSet definition to include the new node and remove the failed node, as sketched below. Remember to save before exiting the editor.
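As with localVolumeDiscovery, only the nodeSelector values change; a sketch of the relevant portion, using the localblock resource listed above and illustrative host names:
# oc edit -n local-storage-project localvolumeset localblock
[...]
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
              - server1.example.com
              - server2.example.com
              - newnode.example.com
[...]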
In the above example, server3.example.com was removed and newnode.example.com is the new node.
Verify that the new localblock PV is available.
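One way to check is to list the local PVs with their host label and look for an Available PV on the new node; the PV name, capacity, and age shown here are illustrative:
$ oc get pv -L kubernetes.io/hostname | grep localblock | grep Available
local-pv-551d950   1490Gi   RWO   Delete   Available   localblock   26s   newnode.example.com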
Change to the openshift-storage project.
$ oc project openshift-storage
Remove the failed OSD from the cluster.
$ oc process -n openshift-storage ocs-osd-removal \
  -p FAILED_OSD_IDS=failed-osd-id1,failed-osd-id2 | oc create -f -
Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal pod.
A status of Completed confirms that the OSD removal job succeeded.
# oc get pod -l job-name=ocs-osd-removal-failed-osd-id -n openshift-storage
Note
If ocs-osd-removal fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example:
# oc logs -l job-name=ocs-osd-removal-failed-osd-id -n openshift-storage --tail=-1
Delete the PV associated with the failed node.
Identify the PV associated with the PVC.
# oc get pv -L kubernetes.io/hostname | grep localblock | grep Released
local-pv-d6bf175b   1490Gi   RWO   Delete   Released   openshift-storage/ocs-deviceset-0-data-0-6c5pw   localblock   2d22h   compute-1
Delete the PV.
# oc delete pv <persistent-volume>
For example:
# oc delete pv local-pv-d6bf175b
persistentvolume "local-pv-d6bf175b" deleted
Delete the crashcollector pod deployment.
$ oc delete deployment --selector=app=rook-ceph-crashcollector,node_name=failed-node-name -n openshift-storage
Delete the ocs-osd-removal job.
# oc delete job ocs-osd-removal-${osd_id_to_remove}
Example output:
job.batch "ocs-osd-removal-0" deleted
Verification steps
Execute the following command and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
Click Workloads → Pods, confirm that at least the following pods on the new node are in Running state:
- csi-cephfsplugin-*
- csi-rbdplugin-*
- Verify that all other required OpenShift Container Storage pods are in Running state.
Ensure that the new incremental mon is created and is in the Running state.
$ oc get pod -n openshift-storage | grep mon
Example output:
rook-ceph-mon-c-64556f7659-c2ngc   1/1   Running   0   6h14m
rook-ceph-mon-d-7c8b74dc4d-tt6hd   1/1   Running   0   4h24m
rook-ceph-mon-e-57fb8c657-wg5f2    1/1   Running   0   162m
OSD and Mon might take several minutes to get to the Running state.
Verify that new OSD pods are running on the replacement node.
$ oc get pods -o wide -n openshift-storage | egrep -i new-node-name | egrep osd
(Optional) If data encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
For each of the new nodes identified in the previous step, do the following:
Create a debug pod and open a chroot environment for the selected host(s).
$ oc debug node/<node_name>
$ chroot /host
Run "lsblk" and check for the "crypt" keyword beside the ocs-deviceset name(s).
$ lsblk
- If verification steps fail, contact Red Hat Support.
4.5. Replacing storage nodes on IBM Power Systems infrastructure
For OpenShift Container Storage, node replacement can be performed proactively for an operational node and reactively for a failed node for the IBM Power Systems related deployments.
4.5.1. Replacing an operational or failed storage node on IBM Power Systems
Prerequisites
- Red Hat recommends that replacement nodes are configured with similar infrastructure and resources to the node being replaced.
- You must be logged into the OpenShift Container Platform (RHOCP) cluster.
Procedure
Check the labels on the failed node and make note of the rack label.
$ oc get nodes --show-labels | grep failed-node-name
Identify the mon (if any) and object storage device (OSD) pods that are running in the failed node.
$ oc get pods -n openshift-storage -o wide | grep -i failed-node-name
Scale down the deployments of the pods identified in the previous step.
For example:
$ oc scale deployment rook-ceph-mon-a --replicas=0 -n openshift-storage
$ oc scale deployment rook-ceph-osd-1 --replicas=0 -n openshift-storage
$ oc scale deployment --selector=app=rook-ceph-crashcollector,node_name=failed-node-name --replicas=0 -n openshift-storage
Mark the failed node so that it cannot be scheduled for work.
$ oc adm cordon failed-node-name
Drain the failed node of existing work.
$ oc adm drain failed-node-name --force --delete-local-data --ignore-daemonsets
Note
If the failed node is not connected to the network, remove the pods running on it by using the commands:
$ oc get pods -A -o wide | grep -i failed-node-name | awk '{if ($4 == "Terminating") system ("oc -n " $1 " delete pods " $2 " --grace-period=0 " " --force ")}'
$ oc adm drain failed-node-name --force --delete-local-data --ignore-daemonsets
Delete the failed node.
$ oc delete node failed-node-name
- Get a new IBM Power machine with the required infrastructure. See Installing a cluster on IBM Power Systems.
- Create a new OpenShift Container Platform node using the new IBM Power Systems machine.
Check for certificate signing requests (CSRs) related to OpenShift Container Platform that are in Pending state:
$ oc get csr
Approve all required OpenShift Container Platform CSRs for the new node:
$ oc adm certificate approve certificate-name
- Click Compute → Nodes in OpenShift Web Console, and confirm that the new node is in Ready state.
Apply the OpenShift Container Storage label to the new node using your preferred interface:
From the OpenShift web console
- For the new node, click Action Menu (⋮) → Edit Labels.
- Add cluster.ocs.openshift.io/openshift-storage and click Save.
From the command line interface
Execute the following command to apply the OpenShift Container Storage label to the new node:
$ oc label node new-node-name cluster.ocs.openshift.io/openshift-storage=""
Add the newly added worker node to localVolumeSet.
Determine which localVolumeSet to edit.
Replace local-storage-project in the following commands with the name of your local storage project. The default project name is openshift-local-storage in OpenShift Container Storage 4.6 and later. Previous versions use local-storage by default.
# oc get -n local-storage-project localvolumeset
NAME         AGE
localblock   25h
Update the localVolumeSet definition to include the new node and remove the failed node, as sketched below. Remember to save before exiting the editor.
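The edit mirrors the localVolumeSet sketch shown earlier: under spec.nodeSelector, remove the failed host from the values list and add the new one (resource name and host names are illustrative):
# oc edit -n local-storage-project localvolumeset localblock
[...]
            values:
              - worker-0
              - worker-1
              - newnode.example.com
[...]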
Verify that the new localblock PV is available.
Change to the openshift-storage project.
$ oc project openshift-storage
Remove the failed OSD from the cluster.
Identify the PVC because you will need to delete the PV associated with that specific PVC later.
# osd_id_to_remove=1
# oc get -n openshift-storage -o yaml deployment rook-ceph-osd-${osd_id_to_remove} | grep ceph.rook.io/pvc
where, osd_id_to_remove is the integer in the pod name immediately after the rook-ceph-osd prefix. In this example, the deployment name is rook-ceph-osd-1.
Example output:
ceph.rook.io/pvc: ocs-deviceset-localblock-0-data-0-g2mmc
ceph.rook.io/pvc: ocs-deviceset-localblock-0-data-0-g2mmc
In this example, the PVC name is ocs-deviceset-localblock-0-data-0-g2mmc.
Remove the failed OSD from the cluster.
# oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=${osd_id_to_remove},${osd_id_to_remove2} | oc create -f -
Verify that the OSD is removed successfully by checking the status of the ocs-osd-removal pod.
A status of Completed confirms that the OSD removal job succeeded.
# oc get pod -l job-name=ocs-osd-removal-${osd_id_to_remove} -n openshift-storage
Note
If ocs-osd-removal fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example:
# oc logs -l job-name=ocs-osd-removal-${osd_id_to_remove} -n openshift-storage --tail=-1
Delete the PV associated with the failed node.
Identify the PV associated with the PVC.
# oc get -n openshift-storage pvc ocs-deviceset-<x>-<y>-<pvc-suffix>
where, x, y, and pvc-suffix are the values in the DeviceSet identified in the previous step.
For example:
# oc get -n openshift-storage pvc ocs-deviceset-localblock-0-data-0-g2mmc
NAME                                      STATUS   VOLUME              CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ocs-deviceset-localblock-0-data-0-g2mmc   Bound    local-pv-5c9b8982   500Gi      RWO            localblock     24h
In this example, the associated PV is local-pv-5c9b8982.
Delete the PV.
# oc delete pv <persistent-volume>
For example:
# oc delete pv local-pv-5c9b8982
persistentvolume "local-pv-5c9b8982" deleted
Delete the crashcollector pod deployment.
$ oc delete deployment --selector=app=rook-ceph-crashcollector,node_name=failed-node-name -n openshift-storage
Deploy the new OSD by restarting the rook-ceph-operator to force operator reconciliation.
# oc get -n openshift-storage pod -l app=rook-ceph-operator
Example output:
NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-operator-77758ddc74-dlwn2   1/1     Running   0          1d20h
Delete the rook-ceph-operator.
# oc delete -n openshift-storage pod rook-ceph-operator-77758ddc74-dlwn2
Example output:
pod "rook-ceph-operator-77758ddc74-dlwn2" deleted
Verify that the rook-ceph-operator pod is restarted.
# oc get -n openshift-storage pod -l app=rook-ceph-operator
Example output:
NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-operator-77758ddc74-wqf25   1/1     Running   0          66s
Creation of the new OSD and mon might take several minutes after the operator restarts.
Delete the ocs-osd-removal job.
# oc delete job ocs-osd-removal-${osd_id_to_remove}
For example:
# oc delete job ocs-osd-removal-1
job.batch "ocs-osd-removal-1" deleted
Verification steps
Execute the following command and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
Click Workloads → Pods, confirm that at least the following pods on the new node are in Running state:
- csi-cephfsplugin-*
- csi-rbdplugin-*
Verify that all other required OpenShift Container Storage pods are in Running state.
Make sure that the new incremental mon is created and is in the Running state.
$ oc get pod -n openshift-storage | grep mon
Example output:
rook-ceph-mon-b-74f6dc9dd6-4llzq   1/1   Running   0   6h14m
rook-ceph-mon-c-74948755c-h7wtx    1/1   Running   0   4h24m
rook-ceph-mon-d-598f69869b-4bv49   1/1   Running   0   162m
OSD and Mon might take several minutes to get to the Running state.
- If verification steps fail, contact Red Hat Support.