OpenShift Container Storage is now OpenShift Data Foundation starting with version 4.9.
Chapter 2. OpenShift Data Foundation deployed using local storage devices
2.1. Replacing storage nodes on bare metal infrastructure
- To replace an operational node, see Section 2.1.1, “Replacing an operational node on bare metal user-provisioned infrastructure”
- To replace a failed node, see Section 2.1.2, “Replacing a failed node on bare metal user-provisioned infrastructure”
2.1.1. Replacing an operational node on bare metal user-provisioned infrastructure
Prerequisites
- Red Hat recommends that replacement nodes are configured with similar infrastructure, resources, and disks to the node being replaced.
- You must be logged into the OpenShift Container Platform (RHOCP) cluster.
- If you upgraded to OpenShift Data Foundation version 4.8 from a previous version, and have not already created the LocalVolumeDiscovery and LocalVolumeSet objects, do so now by following the procedure described in Post-update configuration changes for clusters backed by local storage.
Procedure
Identify the node and get the labels on the node to be replaced.
$ oc get nodes --show-labels | grep <node_name>
Identify the mon (if any) and OSDs that are running on the node to be replaced.
$ oc get pods -n openshift-storage -o wide | grep -i <node_name>
Scale down the deployments of the pods identified in the previous step.
For example:
$ oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage
$ oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage
$ oc scale deployment --selector=app=rook-ceph-crashcollector,node_name=<node_name> --replicas=0 -n openshift-storage
Mark the node as unschedulable.
$ oc adm cordon <node_name>
Drain the node.
$ oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets
Delete the node.
$ oc delete node <node_name>
Get a new bare metal machine with the required infrastructure. See Installing a cluster on bare metal.
Important: For information about how to replace a master node when you have installed OpenShift Data Foundation on a three-node OpenShift compact bare-metal cluster, see the Backup and Restore guide in the OpenShift Container Platform documentation.
- Create a new OpenShift Container Platform node using the new bare metal machine.
Check for certificate signing requests (CSRs) related to OpenShift Container Platform that are in Pending state:
$ oc get csr
Approve all required OpenShift Container Platform CSRs for the new node:
$ oc adm certificate approve <Certificate_Name>
- Click Compute → Nodes in the OpenShift Web Console and confirm that the new node is in Ready state.
Apply the OpenShift Data Foundation label to the new node using any one of the following:
- From User interface
  - For the new node, click Action Menu (⋮) → Edit Labels.
  - Add cluster.ocs.openshift.io/openshift-storage and click Save.
- From Command line interface
Execute the following command to apply the OpenShift Data Foundation label to the new node:
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
Identify the namespace where the OpenShift local storage operator is installed and assign it to the local_storage_project variable:
$ local_storage_project=$(oc get csv --all-namespaces | awk '{print $1}' | grep local)
For example:
$ echo $local_storage_project
openshift-local-storage
Add the new worker node to localVolumeDiscovery and localVolumeSet.
Update the localVolumeDiscovery definition to include the new node and remove the failed node. Remember to save before exiting the editor.
In the above example, server3.example.com was removed and newnode.example.com is the new node.
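After this edit, a localVolumeDiscovery definition might look like the following sketch. The object name and field values are illustrative assumptions in the shape of the local.storage.openshift.io API, not copied from the original example:

```yaml
apiVersion: local.storage.openshift.io/v1alpha1
kind: LocalVolumeDiscovery
metadata:
  name: auto-discover-devices     # name is an assumption
  namespace: openshift-local-storage
spec:
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
              - server1.example.com
              - server2.example.com
              - newnode.example.com   # new node added; server3.example.com removed
```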
Determine which localVolumeSet to edit.
$ oc get -n $local_storage_project localvolumeset
NAME         AGE
localblock   25h
Update the localVolumeSet definition to include the new node and remove the failed node. Remember to save before exiting the editor.
In the above example, server3.example.com was removed and newnode.example.com is the new node.
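Likewise, the edited localVolumeSet might look like the following sketch (again illustrative; the storage class name matches the localblock example above, but the selector values are assumptions):

```yaml
apiVersion: local.storage.openshift.io/v1alpha1
kind: LocalVolumeSet
metadata:
  name: localblock
  namespace: openshift-local-storage
spec:
  storageClassName: localblock
  volumeMode: Block
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
              - server1.example.com
              - server2.example.com
              - newnode.example.com   # new node added; server3.example.com removed
```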
Verify that the new localblock PV is available.
$ oc get pv | grep localblock | grep Available
local-pv-551d950   512Gi   RWO   Delete   Available   localblock   26s
Change to the openshift-storage project.
$ oc project openshift-storage
Remove the failed OSD from the cluster. You can specify multiple failed OSDs if required:
$ oc process -n openshift-storage ocs-osd-removal \
    -p FAILED_OSD_IDS=<failed_osd_id> FORCE_OSD_REMOVAL=false | oc create -n openshift-storage -f -
<failed_osd_id> is the integer in the pod name immediately after the rook-ceph-osd prefix. You can add comma-separated OSD IDs in the command to remove more than one OSD, for example, FAILED_OSD_IDS=0,1,2.
The FORCE_OSD_REMOVAL value must be changed to true in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed.
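The <failed_osd_id> can be read directly off the OSD pod name. A small shell sketch of that extraction, using a hypothetical pod name:

```shell
# Hypothetical OSD pod name, as would be printed by:
#   oc get pods -n openshift-storage | grep rook-ceph-osd
pod="rook-ceph-osd-0-6d77d6b7c6-xyz12"

# Strip the "rook-ceph-osd-" prefix, then keep everything up to the next "-":
osd_id="${pod#rook-ceph-osd-}"
osd_id="${osd_id%%-*}"

echo "$osd_id"   # prints: 0
```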
Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded.
$ oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage
Ensure that the OSD removal is completed.
$ oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'
Example output:
2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0
Important: If the ocs-osd-removal-job fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example:
$ oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1
Delete the ocs-osd-removal-job.
$ oc delete -n openshift-storage job ocs-osd-removal-job
Example output:
job.batch "ocs-osd-removal-job" deleted
Verification steps
Execute the following command and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
Click Workloads → Pods and confirm that at least the following pods on the new node are in Running state:
- csi-cephfsplugin-*
- csi-rbdplugin-*
- Verify that all other required OpenShift Data Foundation pods are in Running state.
Ensure that the new incremental mon is created and is in the Running state.
$ oc get pod -n openshift-storage | grep mon
OSD and mon pods might take several minutes to get to the Running state.
Verify that new OSD pods are running on the replacement node.
$ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd
Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
For each of the new nodes identified in the previous step, do the following:
Create a debug pod and open a chroot environment for the selected hosts.
$ oc debug node/<node_name>
$ chroot /host
Run lsblk and check for the crypt keyword beside the ocs-deviceset names.
$ lsblk
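On an encrypted cluster, the device backing each OSD shows TYPE crypt in the lsblk output. A sketch of the check over hypothetical lsblk output (device and deviceset names below are assumptions, not real output):

```shell
# Hypothetical lsblk output fragment for an encrypted OSD device.
# Real output comes from running `lsblk` inside the debug pod chroot.
sample='sdb                                                 8:16 0 512G 0 disk
└─ocs-deviceset-localblock-0-data-0-block-dmcrypt 253:0 0 512G 0 crypt'

# The "crypt" TYPE beside the ocs-deviceset name indicates a dm-crypt device:
if echo "$sample" | grep -q 'ocs-deviceset.*crypt'; then
    echo "OSD device is encrypted"
fi
```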
- If verification steps fail, contact Red Hat Support.
2.1.2. Replacing a failed node on bare metal user-provisioned infrastructure
Prerequisites
- Red Hat recommends that replacement nodes are configured with similar infrastructure, resources, and disks to the node being replaced.
- You must be logged into the OpenShift Container Platform (RHOCP) cluster.
- If you upgraded to OpenShift Data Foundation version 4.8 from a previous version, and have not already created the LocalVolumeDiscovery and LocalVolumeSet objects, do so now by following the procedure described in Post-update configuration changes for clusters backed by local storage.
Procedure
Identify the node and get the labels on the node to be replaced.
$ oc get nodes --show-labels | grep <node_name>
Identify the mon (if any) and OSDs that are running on the node to be replaced.
$ oc get pods -n openshift-storage -o wide | grep -i <node_name>
Scale down the deployments of the pods identified in the previous step.
For example:
$ oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage
$ oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage
$ oc scale deployment --selector=app=rook-ceph-crashcollector,node_name=<node_name> --replicas=0 -n openshift-storage
Mark the node as unschedulable.
$ oc adm cordon <node_name>
Remove the pods which are in Terminating state.
$ oc get pods -A -o wide | grep -i <node_name> | awk '{if ($4 == "Terminating") system ("oc -n " $1 " delete pods " $2 " --grace-period=0 " " --force ")}'
Drain the node.
$ oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets
Delete the node.
$ oc delete node <node_name>
Get a new bare metal machine with the required infrastructure. See Installing a cluster on bare metal.
Important: For information about how to replace a master node when you have installed OpenShift Data Foundation on a three-node OpenShift compact bare-metal cluster, see the Backup and Restore guide in the OpenShift Container Platform documentation.
- Create a new OpenShift Container Platform node using the new bare metal machine.
Check for certificate signing requests (CSRs) related to OpenShift Container Platform that are in Pending state:
$ oc get csr
Approve all required OpenShift Container Platform CSRs for the new node:
$ oc adm certificate approve <Certificate_Name>
- Click Compute → Nodes in the OpenShift Web Console and confirm that the new node is in Ready state.
Apply the OpenShift Data Foundation label to the new node using any one of the following:
- From User interface
  - For the new node, click Action Menu (⋮) → Edit Labels.
  - Add cluster.ocs.openshift.io/openshift-storage and click Save.
- From Command line interface
Execute the following command to apply the OpenShift Data Foundation label to the new node:
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
Identify the namespace where the OpenShift local storage operator is installed and assign it to the local_storage_project variable:
$ local_storage_project=$(oc get csv --all-namespaces | awk '{print $1}' | grep local)
For example:
$ echo $local_storage_project
openshift-local-storage
Add the new worker node to localVolumeDiscovery and localVolumeSet.
Update the localVolumeDiscovery definition to include the new node and remove the failed node. Remember to save before exiting the editor.
In the above example, server3.example.com was removed and newnode.example.com is the new node.
Determine which localVolumeSet to edit.
$ oc get -n $local_storage_project localvolumeset
NAME         AGE
localblock   25h
Update the localVolumeSet definition to include the new node and remove the failed node. Remember to save before exiting the editor.
In the above example, server3.example.com was removed and newnode.example.com is the new node.
Verify that the new localblock PV is available.
$ oc get pv | grep localblock | grep Available
local-pv-551d950   512Gi   RWO   Delete   Available   localblock   26s
Change to the openshift-storage project.
$ oc project openshift-storage
Remove the failed OSD from the cluster. You can specify multiple failed OSDs if required:
$ oc process -n openshift-storage ocs-osd-removal \
    -p FAILED_OSD_IDS=<failed_osd_id> FORCE_OSD_REMOVAL=false | oc create -n openshift-storage -f -
<failed_osd_id> is the integer in the pod name immediately after the rook-ceph-osd prefix. You can add comma-separated OSD IDs in the command to remove more than one OSD, for example, FAILED_OSD_IDS=0,1,2.
The FORCE_OSD_REMOVAL value must be changed to true in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed.
Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded.
$ oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage
Ensure that the OSD removal is completed.
$ oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'
Example output:
2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0
Important: If the ocs-osd-removal-job fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example:
$ oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1
Delete the ocs-osd-removal-job.
$ oc delete -n openshift-storage job ocs-osd-removal-job
Example output:
job.batch "ocs-osd-removal-job" deleted
Verification steps
Execute the following command and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
Click Workloads → Pods and confirm that at least the following pods on the new node are in Running state:
- csi-cephfsplugin-*
- csi-rbdplugin-*
- Verify that all other required OpenShift Data Foundation pods are in Running state.
Ensure that the new incremental mon is created and is in the Running state.
$ oc get pod -n openshift-storage | grep mon
OSD and mon pods might take several minutes to get to the Running state.
Verify that new OSD pods are running on the replacement node.
$ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd
Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
For each of the new nodes identified in the previous step, do the following:
Create a debug pod and open a chroot environment for the selected hosts.
$ oc debug node/<node_name>
$ chroot /host
Run lsblk and check for the crypt keyword beside the ocs-deviceset names.
$ lsblk
- If verification steps fail, contact Red Hat Support.
2.2. Replacing storage nodes on IBM Z or LinuxONE infrastructure
You can choose one of the following procedures to replace storage nodes:
2.2.1. Replacing operational nodes on IBM Z or LinuxONE infrastructure
Use this procedure to replace an operational node on IBM Z or LinuxONE infrastructure.
Procedure
- Log in to OpenShift Web Console.
- Click Compute → Nodes.
- Identify the node that needs to be replaced. Take note of its Machine Name.
Mark the node as unschedulable using the following command:
$ oc adm cordon <node_name>
Drain the node using the following command:
$ oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets
Important: This activity might take at least 5-10 minutes. Ceph errors generated during this period are temporary and are automatically resolved when the new node is labeled and functional.
- Click Compute → Machines. Search for the required machine.
- Beside the required machine, click the Action menu (⋮) → Delete Machine.
Delete Machine. - Click Delete to confirm the machine deletion. A new machine is automatically created.
Wait for the new machine to start and transition into Running state.
Important: This activity might take at least 5-10 minutes.
- Click Compute → Nodes and confirm that the new node is in Ready state.
Apply the OpenShift Data Foundation label to the new node using any one of the following:
- From User interface
  - For the new node, click Action Menu (⋮) → Edit Labels.
  - Add cluster.ocs.openshift.io/openshift-storage and click Save.
- From command line interface
Execute the following command to apply the OpenShift Data Foundation label to the new node:
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
Verification steps
Execute the following command and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
Click Workloads → Pods and confirm that at least the following pods on the new node are in Running state:
- csi-cephfsplugin-*
- csi-rbdplugin-*
- Verify that all other required OpenShift Data Foundation pods are in Running state.
Verify that new OSD pods are running on the replacement node.
$ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd
Optional: If data encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
For each of the new nodes identified in the previous step, do the following:
Create a debug pod and open a chroot environment for the selected hosts.
$ oc debug node/<node_name>
$ chroot /host
Run lsblk and check for the crypt keyword beside the ocs-deviceset names.
$ lsblk
- If verification steps fail, contact Red Hat Support.
2.2.2. Replacing failed nodes on IBM Z or LinuxONE infrastructure
Perform this procedure to replace a failed node which is not operational on IBM Z or LinuxONE infrastructure for OpenShift Data Foundation.
Procedure
- Log in to OpenShift Web Console and click Compute → Nodes.
- Identify the faulty node and click on its Machine Name.
- Click Actions → Edit Annotations, and click Add More.
- Add machine.openshift.io/exclude-node-draining and click Save.
- Click Actions → Delete Machine, and click Delete.
- A new machine is automatically created. Wait for the new machine to start.
Important: This activity might take at least 5-10 minutes. Ceph errors generated during this period are temporary and are automatically resolved when the new node is labeled and functional.
- Click Compute → Nodes and confirm that the new node is in Ready state.
Apply the OpenShift Data Foundation label to the new node using any one of the following:
- From the web user interface
  - For the new node, click Action Menu (⋮) → Edit Labels.
  - Add cluster.ocs.openshift.io/openshift-storage and click Save.
- From the command line interface
Execute the following command to apply the OpenShift Data Foundation label to the new node:
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
Execute the following command and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
Click Workloads → Pods and confirm that at least the following pods on the new node are in Running state:
- csi-cephfsplugin-*
- csi-rbdplugin-*
- Verify that all other required OpenShift Data Foundation pods are in Running state.
Verify that new OSD pods are running on the replacement node.
$ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd
Optional: If data encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
For each of the new nodes identified in the previous step, do the following:
Create a debug pod and open a chroot environment for the selected hosts.
$ oc debug node/<node_name>
$ chroot /host
Run lsblk and check for the crypt keyword beside the ocs-deviceset names.
$ lsblk
- If verification steps fail, contact Red Hat Support.
2.3. Replacing storage nodes on IBM Power infrastructure
For OpenShift Data Foundation, node replacement can be performed proactively for an operational node and reactively for a failed node in IBM Power deployments.
2.3.1. Replacing an operational or failed storage node on IBM Power
Prerequisites
- Red Hat recommends that replacement nodes are configured with similar infrastructure and resources to the node being replaced.
- You must be logged into the OpenShift Container Platform (RHOCP) cluster.
- If you upgraded to OpenShift Data Foundation 4.9 from a previous version and have not already created the LocalVolumeDiscovery object, do so now by following the procedure described in Post-update configuration changes for clusters backed by local storage.
Procedure
Identify the node and get the labels on the node to be replaced.
$ oc get nodes --show-labels | grep <node_name>
Identify the mon (if any) and object storage device (OSD) pods that are running on the node to be replaced.
$ oc get pods -n openshift-storage -o wide | grep -i <node_name>
Scale down the deployments of the pods identified in the previous step.
For example:
$ oc scale deployment rook-ceph-mon-a --replicas=0 -n openshift-storage
$ oc scale deployment rook-ceph-osd-1 --replicas=0 -n openshift-storage
$ oc scale deployment --selector=app=rook-ceph-crashcollector,node_name=<node_name> --replicas=0 -n openshift-storage
Mark the node as unschedulable.
$ oc adm cordon <node_name>
Remove the pods which are in Terminating state.
$ oc get pods -A -o wide | grep -i <node_name> | awk '{if ($4 == "Terminating") system ("oc -n " $1 " delete pods " $2 " --grace-period=0 " " --force ")}'
Drain the node.
$ oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets
Delete the node.
$ oc delete node <node_name>
- Get a new IBM Power machine with the required infrastructure. See Installing a cluster on IBM Power.
- Create a new OpenShift Container Platform node using the new IBM Power machine.
Check for certificate signing requests (CSRs) related to OpenShift Container Platform that are in Pending state:
$ oc get csr
Approve all required OpenShift Container Platform CSRs for the new node:
$ oc adm certificate approve <Certificate_Name>
- Click Compute → Nodes in the OpenShift Web Console and confirm that the new node is in Ready state.
Apply the OpenShift Data Foundation label to the new node using any one of the following:
- From User interface
  - For the new node, click Action Menu (⋮) → Edit Labels.
  - Add cluster.ocs.openshift.io/openshift-storage and click Save.
- From Command line interface
- Execute the following command to apply the OpenShift Data Foundation label to the new node:
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=''
Identify the namespace where the OpenShift local storage operator is installed and assign it to the local_storage_project variable:
$ local_storage_project=$(oc get csv --all-namespaces | awk '{print $1}' | grep local)
For example:
$ local_storage_project=$(oc get csv --all-namespaces | awk '{print $1}' | grep local)
$ echo $local_storage_project
openshift-local-storage
Add a new worker node to localVolumeDiscovery.
Update the localVolumeDiscovery definition to include the new node and remove the failed node. Remember to save before exiting the editor.
In the above example, worker-0 was removed and worker-3 is the new node.
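The YAML for this edit is not reproduced above. As a rough illustration, the node list in a LocalVolumeDiscovery definition lives under spec.nodeSelector, and the edit replaces the failed hostname with the new one. Field names follow the Local Storage Operator API, but treat this as a sketch rather than a copy-paste definition:

```yaml
apiVersion: local.storage.openshift.io/v1alpha1
kind: LocalVolumeDiscovery
metadata:
  name: auto-discover-devices          # illustrative name
  namespace: openshift-local-storage
spec:
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
              - worker-1
              - worker-2
              - worker-3               # replaces the failed worker-0
```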
Add a newly added worker node to localVolume.
Determine which
localVolumeto edit.oc get -n $local_storage_project localvolume
# oc get -n $local_storage_project localvolume NAME AGE localblock 25hCopy to Clipboard Copied! Toggle word wrap Toggle overflow Update the
localVolumedefinition to include the new node and remove the failed node.Copy to Clipboard Copied! Toggle word wrap Toggle overflow Remember to save before exiting the editor.
In the above example, worker-0 was removed and worker-3 is the new node.
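For the localVolume edit, the hostname values likewise sit under spec.nodeSelector, next to the storageClassDevices stanza, which stays unchanged. Again a hedged sketch based on the Local Storage Operator API; the device path and names are placeholders:

```yaml
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: localblock
  namespace: openshift-local-storage
spec:
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
              - worker-1
              - worker-2
              - worker-3                       # replaces the failed worker-0
  storageClassDevices:
    - storageClassName: localblock
      volumeMode: Block
      devicePaths:
        - /dev/disk/by-id/example-disk-id     # placeholder device path
```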
Verify that the new localblock PV is available.
Change to the openshift-storage project.
$ oc project openshift-storage
Remove the failed OSD from the cluster. You can specify multiple failed OSDs if required.
Identify the PVC, because afterwards you need to delete the PV associated with that specific PVC.
$ osd_id_to_remove=1
$ oc get -n openshift-storage -o yaml deployment rook-ceph-osd-${osd_id_to_remove} | grep ceph.rook.io/pvc
where, osd_id_to_remove is the integer in the pod name immediately after the rook-ceph-osd prefix. In this example, the deployment name is rook-ceph-osd-1.
Example output:
ceph.rook.io/pvc: ocs-deviceset-localblock-0-data-0-g2mmc
ceph.rook.io/pvc: ocs-deviceset-localblock-0-data-0-g2mmc
In this example, the PVC name is ocs-deviceset-localblock-0-data-0-g2mmc.
Remove the failed OSD from the cluster.
$ oc process -n openshift-storage ocs-osd-removal \
    -p FAILED_OSD_IDS=<failed_osd_id> FORCE_OSD_REMOVAL=false | oc create -n openshift-storage -f -
The FORCE_OSD_REMOVAL value must be changed to true in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed.
Warning: This step results in the OSD being completely removed from the cluster. Ensure that the correct value of osd_id_to_remove is provided.
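When more than one OSD has failed, FAILED_OSD_IDS takes a comma-separated list of IDs (for example, FAILED_OSD_IDS=0,1,2). One way to assemble that value from a space-separated list of IDs, shown as an offline sketch with illustrative values:

```shell
# IDs of the OSDs to remove (illustrative values).
ids="0 1 2"

# Join the IDs with commas to build the template parameter value.
failed_osd_ids=$(echo $ids | tr ' ' ',')
echo "$failed_osd_ids"

# Against a live cluster, this value would then be passed through, e.g.:
#   oc process -n openshift-storage ocs-osd-removal \
#       -p FAILED_OSD_IDS="$failed_osd_ids" FORCE_OSD_REMOVAL=false | oc create -n openshift-storage -f -
```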
Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded.
# oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage
Ensure that the OSD removal is completed.
$ oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'
Example output:
2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0
Important: If the ocs-osd-removal-job fails and the pod is not in the expected Completed state, check the pod logs for further debugging.
For example:
# oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1
Delete the PV associated with the failed node.
Identify the PV associated with the PVC.
The PVC name must be identical to the name that is obtained while removing the failed OSD from the cluster.
# oc get pv -L kubernetes.io/hostname | grep localblock | grep Released
local-pv-5c9b8982   500Gi   RWO   Delete   Released   openshift-storage/ocs-deviceset-localblock-0-data-0-g2mmc   localblock   24h   worker-0
If there is a PV in Released state, delete it.
# oc delete pv <persistent-volume>
For example:
# oc delete pv local-pv-5c9b8982
persistentvolume "local-pv-5c9b8982" deleted
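The Released-PV lookup above can be exercised offline against captured `oc get pv` output. The sample line below mirrors the example output, and the awk step pulls out just the PV name to feed the delete command:

```shell
# Captured 'oc get pv -L kubernetes.io/hostname' output, already narrowed to
# localblock/Released rows (illustrative data matching the example above).
pv_line='local-pv-5c9b8982   500Gi   RWO   Delete   Released   openshift-storage/ocs-deviceset-localblock-0-data-0-g2mmc   localblock   24h   worker-0'

# The PV name is the first column.
pv_name=$(printf '%s\n' "$pv_line" | awk '{print $1}')
echo "$pv_name"

# Against a live cluster, the equivalent end-to-end pipeline would be roughly:
#   oc get pv -L kubernetes.io/hostname | grep localblock | grep Released | awk '{print $1}' | xargs oc delete pv
```

Double-check the CLAIM column before deleting; only PVs left over from the failed node should be removed.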
Identify the crashcollector pod deployment.
$ oc get deployment --selector=app=rook-ceph-crashcollector,node_name=<failed_node_name> -n openshift-storage
If there is an existing crashcollector pod deployment, delete it.
$ oc delete deployment --selector=app=rook-ceph-crashcollector,node_name=<failed_node_name> -n openshift-storage
Delete the ocs-osd-removal-job.
# oc delete -n openshift-storage job ocs-osd-removal-job
Example output:
job.batch "ocs-osd-removal-job" deleted
Verification steps
Execute the following command and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
Click Workloads → Pods, and confirm that at least the following pods on the new node are in Running state:
- csi-cephfsplugin-*
- csi-rbdplugin-*
Verify that all other required OpenShift Data Foundation pods are in Running state.
Ensure that the new incremental mon is created and is in the Running state.
$ oc get pod -n openshift-storage | grep mon
Example output:
rook-ceph-mon-b-74f6dc9dd6-4llzq   1/1   Running   0   6h14m
rook-ceph-mon-c-74948755c-h7wtx    1/1   Running   0   4h24m
rook-ceph-mon-d-598f69869b-4bv49   1/1   Running   0   162m
OSD and Mon might take several minutes to get to the Running state.
Verify that new OSD pods are running on the replacement node.
$ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd
Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
For each of the new nodes identified in the previous step, do the following:
Create a debug pod and open a chroot environment for the selected host(s).
$ oc debug node/<node_name>
$ chroot /host
Run lsblk and check for the crypt keyword beside the ocs-deviceset name(s).
$ lsblk
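The lsblk check can also be scripted: on an encrypted OSD device, lsblk shows a child device of TYPE crypt whose name contains the ocs-deviceset string. A sketch over captured lsblk output (the sample data is illustrative):

```shell
# Captured 'lsblk -o NAME,TYPE' output from an encrypted OSD node (illustrative).
lsblk_output='NAME                                                      TYPE
sdb                                                       disk
`-ocs-deviceset-localblock-0-data-0-g2mmc-block-dmcrypt   crypt'

# Count device-mapper entries of TYPE crypt that belong to an ocs-deviceset.
crypt_count=$(printf '%s\n' "$lsblk_output" | awk '/ocs-deviceset/ && $2 == "crypt"' | wc -l)
echo "$crypt_count"

# On the node itself (inside the chroot), the same check would be roughly:
#   lsblk -o NAME,TYPE | grep ocs-deviceset | grep -c crypt
```

A count of zero on a cluster with encryption enabled would indicate an unencrypted OSD device and is worth investigating.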
- If verification steps fail, contact Red Hat Support.
2.4. Replacing storage nodes on VMware infrastructure
To replace an operational node, see:
To replace a failed node, see:
2.4.1. Replacing an operational node on VMware user-provisioned infrastructure
Prerequisites
- Red Hat recommends that replacement nodes are configured with similar infrastructure, resources, and disks to the node being replaced.
- You must be logged into the OpenShift Container Platform (RHOCP) cluster.
-
If you upgraded to OpenShift Data Foundation version 4.8 from a previous version, and have not already created the
LocalVolumeDiscovery and LocalVolumeSet objects, do so now by following the procedure described in Post-update configuration changes for clusters backed by local storage.
Procedure
Identify the node and get the labels on the node to be replaced.
$ oc get nodes --show-labels | grep <node_name>
Identify the mon (if any) and OSDs that are running in the node to be replaced.
$ oc get pods -n openshift-storage -o wide | grep -i <node_name>
Scale down the deployments of the pods identified in the previous step.
For example:
$ oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage
$ oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage
$ oc scale deployment --selector=app=rook-ceph-crashcollector,node_name=<node_name> --replicas=0 -n openshift-storage
Mark the node as unschedulable.
$ oc adm cordon <node_name>
Drain the node.
$ oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets
Delete the node.
$ oc delete node <node_name>
- Log in to vSphere and terminate the identified VM.
- Create a new VM on VMware with the required infrastructure. See Supported Infrastructure and Platforms.
- Create a new OpenShift Container Platform worker node using the new VM.
Check for certificate signing requests (CSRs) related to OpenShift Container Platform that are in Pending state:
$ oc get csr
Approve all required OpenShift Container Platform CSRs for the new node:
$ oc adm certificate approve <Certificate_Name>
- Click Compute → Nodes in the OpenShift Web Console and confirm that the new node is in Ready state.
Apply the OpenShift Data Foundation label to the new node using any one of the following:
- From User interface
  - For the new node, click Action Menu (⋮) → Edit Labels.
  - Add cluster.ocs.openshift.io/openshift-storage and click Save.
- From Command line interface
Execute the following command to apply the OpenShift Data Foundation label to the new node:
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
Identify the namespace where the OpenShift local storage operator is installed and assign it to the local_storage_project variable:
$ local_storage_project=$(oc get csv --all-namespaces | awk '{print $1}' | grep local)
For example:
$ local_storage_project=$(oc get csv --all-namespaces | awk '{print $1}' | grep local)
$ echo $local_storage_project
openshift-local-storage
Add a new worker node to localVolumeDiscovery and localVolumeSet.
Update the localVolumeDiscovery definition to include the new node and remove the failed node. Remember to save before exiting the editor.
In the above example, server3.example.com was removed and newnode.example.com is the new node.
Determine which
localVolumeSet to edit.
# oc get -n $local_storage_project localvolumeset
NAME         AGE
localblock   25h
Update the localVolumeSet definition to include the new node and remove the failed node. Remember to save before exiting the editor.
In the above example, server3.example.com was removed and newnode.example.com is the new node.
Verify that the new localblock PV is available.
$ oc get pv | grep localblock | grep Available
local-pv-551d950   512Gi   RWO   Delete   Available   localblock   26s
Change to the openshift-storage project.
$ oc project openshift-storage
Remove the failed OSD from the cluster. You can specify multiple failed OSDs if required:
$ oc process -n openshift-storage ocs-osd-removal \
    -p FAILED_OSD_IDS=<failed_osd_id> FORCE_OSD_REMOVAL=false | oc create -n openshift-storage -f -
<failed_osd_id> is the integer in the pod name immediately after the rook-ceph-osd prefix. You can add comma-separated OSD IDs in the command to remove more than one OSD, for example, FAILED_OSD_IDS=0,1,2.
The FORCE_OSD_REMOVAL value must be changed to true in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed.
Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded.
# oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage
Ensure that the OSD removal is completed.
$ oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'
Example output:
2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0
Important: If the ocs-osd-removal-job fails and the pod is not in the expected Completed state, check the pod logs for further debugging.
For example:
# oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1
Delete the
ocs-osd-removal-job.
# oc delete -n openshift-storage job ocs-osd-removal-job
Example output:
job.batch "ocs-osd-removal-job" deleted
Verification steps
Execute the following command and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
Click Workloads → Pods, and confirm that at least the following pods on the new node are in Running state:
- csi-cephfsplugin-*
- csi-rbdplugin-*
Verify that all other required OpenShift Data Foundation pods are in Running state.
Ensure that the new incremental mon is created and is in the Running state.
$ oc get pod -n openshift-storage | grep mon
OSD and Mon might take several minutes to get to the Running state.
Verify that new OSD pods are running on the replacement node.
$ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd
Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
For each of the new nodes identified in the previous step, do the following:
Create a debug pod and open a chroot environment for the selected host(s).
$ oc debug node/<node_name>
$ chroot /host
Run lsblk and check for the crypt keyword beside the ocs-deviceset name(s).
$ lsblk
- If verification steps fail, contact Red Hat Support.
2.4.2. Replacing an operational node on VMware installer-provisioned infrastructure
Prerequisites
- Red Hat recommends that replacement nodes are configured with similar infrastructure, resources, and disks to the node being replaced.
- You must be logged into the OpenShift Container Platform (RHOCP) cluster.
-
If you upgraded to OpenShift Data Foundation version 4.8 from a previous version, and have not already created the
LocalVolumeDiscovery and LocalVolumeSet objects, do so now by following the procedure described in Post-update configuration changes for clusters backed by local storage.
Procedure
- Log in to OpenShift Web Console and click Compute → Nodes.
- Identify the node that needs to be replaced. Take a note of its Machine Name.
Get labels on the node to be replaced.
$ oc get nodes --show-labels | grep <node_name>
Identify the mon (if any) and OSDs that are running in the node to be replaced.
$ oc get pods -n openshift-storage -o wide | grep -i <node_name>
Scale down the deployments of the pods identified in the previous step.
For example:
$ oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage
$ oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage
$ oc scale deployment --selector=app=rook-ceph-crashcollector,node_name=<node_name> --replicas=0 -n openshift-storage
Mark the node as unschedulable.
$ oc adm cordon <node_name>
Drain the node.
$ oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets
- Click Compute → Machines. Search for the required machine.
- Beside the required machine, click the Action menu (⋮) → Delete Machine.
- Click Delete to confirm the machine deletion. A new machine is automatically created.
Wait for the new machine to start and transition into Running state.
Important: This activity may take at least 5-10 minutes or more.
- Click Compute → Nodes in the OpenShift Web Console and confirm that the new node is in Ready state.
- Physically add a new device to the node.
Apply the OpenShift Data Foundation label to the new node using any one of the following:
- From User interface
  - For the new node, click Action Menu (⋮) → Edit Labels.
  - Add cluster.ocs.openshift.io/openshift-storage and click Save.
- From Command line interface
Execute the following command to apply the OpenShift Data Foundation label to the new node:
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
Identify the namespace where the OpenShift local storage operator is installed and assign it to the local_storage_project variable:
$ local_storage_project=$(oc get csv --all-namespaces | awk '{print $1}' | grep local)
For example:
$ local_storage_project=$(oc get csv --all-namespaces | awk '{print $1}' | grep local)
$ echo $local_storage_project
openshift-local-storage
Add a new worker node to localVolumeDiscovery and localVolumeSet.
Update the localVolumeDiscovery definition to include the new node and remove the failed node. Remember to save before exiting the editor.
In the above example, server3.example.com was removed and newnode.example.com is the new node.
Determine which
localVolumeSet to edit.
# oc get -n $local_storage_project localvolumeset
NAME         AGE
localblock   25h
Update the localVolumeSet definition to include the new node and remove the failed node. Remember to save before exiting the editor.
In the above example, server3.example.com was removed and newnode.example.com is the new node.
Verify that the new localblock PV is available.
$ oc get pv | grep localblock | grep Available
local-pv-551d950   512Gi   RWO   Delete   Available   localblock   26s
Change to the openshift-storage project.
$ oc project openshift-storage
Remove the failed OSD from the cluster. You can specify multiple failed OSDs if required:
$ oc process -n openshift-storage ocs-osd-removal \
    -p FAILED_OSD_IDS=<failed_osd_id> FORCE_OSD_REMOVAL=false | oc create -n openshift-storage -f -
<failed_osd_id> is the integer in the pod name immediately after the rook-ceph-osd prefix. You can add comma-separated OSD IDs in the command to remove more than one OSD, for example, FAILED_OSD_IDS=0,1,2.
The FORCE_OSD_REMOVAL value must be changed to true in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed.
Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded.
# oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage
Ensure that the OSD removal is completed.
$ oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'
Example output:
2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0
Important: If the ocs-osd-removal-job fails and the pod is not in the expected Completed state, check the pod logs for further debugging.
For example:
# oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1
Identify the PV associated with the PVC.
# oc get pv -L kubernetes.io/hostname | grep localblock | grep Released
local-pv-d6bf175b   1490Gi   RWO   Delete   Released   openshift-storage/ocs-deviceset-0-data-0-6c5pw   localblock   2d22h   compute-1
If there is a PV in Released state, delete it.
# oc delete pv <persistent-volume>
For example:
# oc delete pv local-pv-d6bf175b
persistentvolume "local-pv-d6bf175b" deleted
Identify the
crashcollector pod deployment.
$ oc get deployment --selector=app=rook-ceph-crashcollector,node_name=<failed_node_name> -n openshift-storage
If there is an existing crashcollector pod deployment, delete it.
$ oc delete deployment --selector=app=rook-ceph-crashcollector,node_name=<failed_node_name> -n openshift-storage
Delete the ocs-osd-removal-job.
# oc delete -n openshift-storage job ocs-osd-removal-job
Example output:
job.batch "ocs-osd-removal-job" deleted
Verification steps
Execute the following command and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
Click Workloads → Pods, and confirm that at least the following pods on the new node are in Running state:
- csi-cephfsplugin-*
- csi-rbdplugin-*
Verify that all other required OpenShift Data Foundation pods are in Running state.
Ensure that the new incremental mon is created and is in the Running state.
$ oc get pod -n openshift-storage | grep mon
OSD and Mon might take several minutes to get to the Running state.
Verify that new OSD pods are running on the replacement node.
$ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd
Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
For each of the new nodes identified in the previous step, do the following:
Create a debug pod and open a chroot environment for the selected host(s).
$ oc debug node/<node_name>
$ chroot /host
Run lsblk and check for the crypt keyword beside the ocs-deviceset name(s).
$ lsblk
- If verification steps fail, contact Red Hat Support.
2.4.3. Replacing a failed node on VMware user-provisioned infrastructure
Prerequisites
- Red Hat recommends that replacement nodes are configured with similar infrastructure, resources, and disks to the node being replaced.
- You must be logged into the OpenShift Container Platform (RHOCP) cluster.
-
If you upgraded to OpenShift Data Foundation version 4.8 from a previous version, and have not already created the
LocalVolumeDiscovery and LocalVolumeSet objects, do so now by following the procedure described in Post-update configuration changes for clusters backed by local storage.
Procedure
Identify the NODE and get labels on the node to be replaced.
oc get nodes --show-labels | grep <node_name>
$ oc get nodes --show-labels | grep <node_name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Identify the
mon(if any) and OSDs that are running in the node to be replaced.oc get pods -n openshift-storage -o wide | grep -i <node_name>
$ oc get pods -n openshift-storage -o wide | grep -i <node_name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Scale down the deployments of the pods identified in the previous step.
For example:
oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage oc scale deployment --selector=app=rook-ceph-crashcollector,node_name=<node_name> --replicas=0 -n openshift-storage
$ oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage $ oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage $ oc scale deployment --selector=app=rook-ceph-crashcollector,node_name=<node_name> --replicas=0 -n openshift-storageCopy to Clipboard Copied! Toggle word wrap Toggle overflow Mark the node as unschedulable.
oc adm cordon <node_name>
$ oc adm cordon <node_name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Remove the pods which are in Terminating state.
oc get pods -A -o wide | grep -i <node_name> | awk '{if ($4 == "Terminating") system ("oc -n " $1 " delete pods " $2 " --grace-period=0 " " --force ")}'$ oc get pods -A -o wide | grep -i <node_name> | awk '{if ($4 == "Terminating") system ("oc -n " $1 " delete pods " $2 " --grace-period=0 " " --force ")}'Copy to Clipboard Copied! Toggle word wrap Toggle overflow Drain the node.
$ oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets

Delete the node.
$ oc delete node <node_name>

- Log in to vSphere and terminate the identified VM.
- Create a new VM on VMware with the required infrastructure. See Supported Infrastructure and Platforms.
- Create a new OpenShift Container Platform worker node using the new VM.
Check for certificate signing requests (CSRs) related to OpenShift Container Platform that are in Pending state:
$ oc get csr

Approve all required OpenShift Container Platform CSRs for the new node:
$ oc adm certificate approve <Certificate_Name>
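Approving CSRs one name at a time can be tedious when several are pending. The filtering half of that chore can be sketched offline; the sample `oc get csr` output and CSR names below are illustrative, and only the final commented line would touch a real cluster.

```shell
#!/bin/sh
# Canned `oc get csr` output (illustrative); on a live cluster you would
# pipe the real `oc get csr` output instead of this variable.
csr_list='NAME        AGE   SIGNERNAME                      REQUESTOR                         CONDITION
csr-2s6xp   3m    kubernetes.io/kubelet-serving   system:node:server1.example.com   Approved,Issued
csr-9wvgq   1m    kubernetes.io/kubelet-serving   system:node:newnode.example.com   Pending'

# Skip the header row and print the names of CSRs whose CONDITION is Pending.
echo "$csr_list" | awk 'NR > 1 && $NF == "Pending" { print $1 }'

# On a live cluster, the pending names could then be approved in one pass:
#   oc get csr | awk 'NR > 1 && $NF == "Pending" { print $1 }' | xargs -r oc adm certificate approve
```

This only filters on the CONDITION column; review the list before approving anything in bulk.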
- Click Compute → Nodes in the OpenShift Web Console, and confirm that the new node is in Ready state.

Apply the OpenShift Data Foundation label to the new node using any one of the following:
- From User interface
  - For the new node, click Action Menu (⋮) → Edit Labels.
  - Add cluster.ocs.openshift.io/openshift-storage and click Save.
- From Command line interface
Execute the following command to apply the OpenShift Data Foundation label to the new node:
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
Identify the namespace where the OpenShift local storage operator is installed and assign it to the local_storage_project variable:

$ local_storage_project=$(oc get csv --all-namespaces | awk '{print $1}' | grep local)

For example:

$ local_storage_project=$(oc get csv --all-namespaces | awk '{print $1}' | grep local)
$ echo $local_storage_project
openshift-local-storage

Add a new worker node to localVolumeDiscovery and localVolumeSet.

Update the localVolumeDiscovery definition to include the new node and remove the failed node. Remember to save before exiting the editor.
In the above example, server3.example.com was removed and newnode.example.com is the new node.

Determine which localVolumeSet to edit.

# oc get -n $local_storage_project localvolumeset
NAME         AGE
localblock   25h

Update the localVolumeSet definition to include the new node and remove the failed node. Remember to save before exiting the editor.

In the above example, server3.example.com was removed and newnode.example.com is the new node.
Verify that the new localblock PV is available.

$ oc get pv | grep localblock | grep Available
local-pv-551d950   512Gi   RWO   Delete   Available   localblock   26s

Change to the openshift-storage project.

$ oc project openshift-storage

Remove the failed OSD from the cluster. You can specify multiple failed OSDs if required:
$ oc process -n openshift-storage ocs-osd-removal \
-p FAILED_OSD_IDS=<failed_osd_id> FORCE_OSD_REMOVAL=false | oc create -n openshift-storage -f -

<failed_osd_id> is the integer in the pod name immediately after the rook-ceph-osd prefix. You can add comma-separated OSD IDs in the command to remove more than one OSD, for example, FAILED_OSD_IDS=0,1,2.

The FORCE_OSD_REMOVAL value must be changed to true in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed.
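Since the OSD ID is just the integer embedded in the pod name, it can be pulled out mechanically. A minimal sketch, using a hypothetical pod name of the shape printed by `oc get pods`:

```shell
#!/bin/sh
# Hypothetical OSD pod name (the hash suffixes are illustrative).
pod="rook-ceph-osd-0-6d77d6c7c6-98zzk"

# The OSD ID is the integer immediately after the rook-ceph-osd prefix.
osd_id=$(echo "$pod" | sed -E 's/^rook-ceph-osd-([0-9]+)-.*/\1/')
echo "$osd_id"
```

The extracted value is what you would pass as FAILED_OSD_IDS to the ocs-osd-removal template.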
Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded.

# oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage

Ensure that the OSD removal is completed.
$ oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'

Example output:
2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0

Important: If the ocs-osd-removal-job fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example:
# oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1

Delete the ocs-osd-removal-job.

# oc delete -n openshift-storage job ocs-osd-removal-job

Example output:
job.batch "ocs-osd-removal-job" deleted
Verification steps
Execute the following command and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1

Click Workloads → Pods, and confirm that at least the following pods on the new node are in Running state:
csi-cephfsplugin-* -
csi-rbdplugin-*
Verify that all other required OpenShift Data Foundation pods are in Running state.
Ensure that the new incremental mon is created and is in the Running state.

$ oc get pod -n openshift-storage | grep mon

Example output:
rook-ceph-mon-a-cd575c89b-b6k66    2/2   Running   0   38m
rook-ceph-mon-b-6776bc469b-tzzt8   2/2   Running   0   38m
rook-ceph-mon-d-5ff5d488b5-7v8xh   2/2   Running   0   4m8s

OSD and Mon might take several minutes to get to the Running state.

Verify that new OSD pods are running on the replacement node.
$ oc get pods -o wide -n openshift-storage | egrep -i new-node-name | egrep osd

Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
For each of the new nodes identified in previous step, do the following:
Create a debug pod and open a chroot environment for the selected host(s).
$ oc debug node/<node name>
$ chroot /host

Run "lsblk" and check for the "crypt" keyword beside the ocs-deviceset name(s).

$ lsblk
- If verification steps fail, contact Red Hat Support.
2.4.4. Replacing a failed node on VMware installer-provisioned infrastructure
Prerequisites
- Red Hat recommends that replacement nodes are configured with similar infrastructure, resources, and disks to the node being replaced.
- You must be logged into the OpenShift Container Platform (RHOCP) cluster.
-
If you upgraded to OpenShift Data Foundation version 4.8 from a previous version, and have not already created the
LocalVolumeDiscovery and LocalVolumeSet objects, do so now by following the procedure described in Post-update configuration changes for clusters backed by local storage.
Procedure
- Log in to OpenShift Web Console and click Compute → Nodes.
- Identify the node that needs to be replaced. Take a note of its Machine Name.
Get labels on the node to be replaced.
$ oc get nodes --show-labels | grep <node_name>

Identify the mon (if any) and OSDs that are running in the node to be replaced.

$ oc get pods -n openshift-storage -o wide | grep -i <node_name>

Scale down the deployments of the pods identified in the previous step.
For example:
$ oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage
$ oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage
$ oc scale deployment --selector=app=rook-ceph-crashcollector,node_name=<node_name> --replicas=0 -n openshift-storage

Mark the node as unschedulable.
$ oc adm cordon <node_name>

Remove the pods which are in Terminating state.
$ oc get pods -A -o wide | grep -i <node_name> | awk '{if ($4 == "Terminating") system ("oc -n " $1 " delete pods " $2 " --grace-period=0 " " --force ")}'

Drain the node.
$ oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets

- Click Compute → Machines. Search for the required machine.
- Beside the required machine, click the Action menu (⋮) → Delete Machine.
- Click Delete to confirm the machine deletion. A new machine is automatically created.
Wait for the new machine to start and transition into Running state.
Important: This activity may take 5-10 minutes or more.
- Click Compute → Nodes in the OpenShift Web Console, and confirm that the new node is in Ready state.
- Physically add a new device to the node.
Apply the OpenShift Data Foundation label to the new node using any one of the following:
- From User interface
  - For the new node, click Action Menu (⋮) → Edit Labels.
  - Add cluster.ocs.openshift.io/openshift-storage and click Save.
- From Command line interface
Execute the following command to apply the OpenShift Data Foundation label to the new node:
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
Identify the namespace where the OpenShift local storage operator is installed and assign it to the local_storage_project variable:

$ local_storage_project=$(oc get csv --all-namespaces | awk '{print $1}' | grep local)

For example:

$ local_storage_project=$(oc get csv --all-namespaces | awk '{print $1}' | grep local)
$ echo $local_storage_project
openshift-local-storage

Add a new worker node to localVolumeDiscovery and localVolumeSet.

Update the localVolumeDiscovery definition to include the new node and remove the failed node. Remember to save before exiting the editor.

In the above example, server3.example.com was removed and newnode.example.com is the new node.

Determine which localVolumeSet to edit.

# oc get -n $local_storage_project localvolumeset
NAME         AGE
localblock   25h

Update the localVolumeSet definition to include the new node and remove the failed node. Remember to save before exiting the editor.

In the above example, server3.example.com was removed and newnode.example.com is the new node.
Verify that the new localblock PV is available.

$ oc get pv | grep localblock | grep Available
local-pv-551d950   512Gi   RWO   Delete   Available   localblock   26s

Change to the openshift-storage project.

$ oc project openshift-storage

Remove the failed OSD from the cluster. You can specify multiple failed OSDs if required:
$ oc process -n openshift-storage ocs-osd-removal \
-p FAILED_OSD_IDS=<failed_osd_id> FORCE_OSD_REMOVAL=false | oc create -n openshift-storage -f -

<failed_osd_id> is the integer in the pod name immediately after the rook-ceph-osd prefix. You can add comma-separated OSD IDs in the command to remove more than one OSD, for example, FAILED_OSD_IDS=0,1,2.

The FORCE_OSD_REMOVAL value must be changed to true in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed.
Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded.

# oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage

Ensure that the OSD removal is completed.
$ oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'

Example output:
2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0

Important: If the ocs-osd-removal-job fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example:
# oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1

Identify the PV associated with the PVC.
# oc get pv -L kubernetes.io/hostname | grep localblock | grep Released
local-pv-d6bf175b   1490Gi   RWO   Delete   Released   openshift-storage/ocs-deviceset-0-data-0-6c5pw   localblock   2d22h   compute-1

If there is a PV in Released state, delete it.

# oc delete pv <persistent-volume>

For example:

# oc delete pv local-pv-d6bf175b
persistentvolume "local-pv-d6bf175b" deleted

Identify the crashcollector pod deployment.

$ oc get deployment --selector=app=rook-ceph-crashcollector,node_name=failed-node-name -n openshift-storage

If there is an existing crashcollector pod deployment, delete it.

$ oc delete deployment --selector=app=rook-ceph-crashcollector,node_name=failed-node-name -n openshift-storage

Delete the ocs-osd-removal-job.

# oc delete -n openshift-storage job ocs-osd-removal-job

Example output:

job.batch "ocs-osd-removal-job" deleted
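The Released-PV lookup used a few steps above is just a grep pipeline, so it can be exercised offline. A minimal sketch, with canned `oc get pv` output (the PV names, sizes, and host column are illustrative):

```shell
#!/bin/sh
# Canned `oc get pv -L kubernetes.io/hostname` output (illustrative);
# on a live cluster, pipe the real command output instead.
pv_list='local-pv-d6bf175b   1490Gi   RWO   Delete   Released    openshift-storage/ocs-deviceset-0-data-0-6c5pw   localblock   2d22h   compute-1
local-pv-551d950    512Gi    RWO   Delete   Available                                                    localblock   26s     compute-2'

# Keep localblock PVs in Released state and print just the PV name,
# which is the argument that `oc delete pv <persistent-volume>` expects.
echo "$pv_list" | grep localblock | grep Released | awk '{ print $1 }'
```

Only PVs bound to the failed node should come back Released; double-check the hostname column before deleting.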
Verification steps
Execute the following command and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1

Click Workloads → Pods, and confirm that at least the following pods on the new node are in Running state:
csi-cephfsplugin-* -
csi-rbdplugin-*
Verify that all other required OpenShift Data Foundation pods are in Running state.
Ensure that the new incremental mon is created and is in the Running state.

$ oc get pod -n openshift-storage | grep mon

Example output:
rook-ceph-mon-a-cd575c89b-b6k66    2/2   Running   0   38m
rook-ceph-mon-b-6776bc469b-tzzt8   2/2   Running   0   38m
rook-ceph-mon-d-5ff5d488b5-7v8xh   2/2   Running   0   4m8s

OSD and Mon might take several minutes to get to the Running state.

Verify that new OSD pods are running on the replacement node.
$ oc get pods -o wide -n openshift-storage | egrep -i new-node-name | egrep osd

Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
For each of the new nodes identified in previous step, do the following:
Create a debug pod and open a chroot environment for the selected host(s).
$ oc debug node/<node name>
$ chroot /host

Run "lsblk" and check for the "crypt" keyword beside the ocs-deviceset name(s).

$ lsblk
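As an illustration of what to look for in the lsblk output, here is a canned two-line sample (the device and deviceset names are illustrative, not from a real node); an encrypted OSD device reports "crypt" in the TYPE column beside its ocs-deviceset name:

```shell
#!/bin/sh
# Canned `lsblk` output (illustrative); run `lsblk` on the node itself.
lsblk_out='sdb                                        8:16    0   512G   0   disk
ocs-deviceset-localblock-0-data-0-blkdev   253:0   0   512G   0   crypt'

# Count the device-mapper entries whose TYPE is "crypt"; a non-zero count
# means the OSD device is sitting on an encrypted layer.
echo "$lsblk_out" | grep -c crypt
```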
- If verification steps fail, contact Red Hat Support.
2.5. Replacing storage nodes on Red Hat Virtualization infrastructure
- To replace an operational node, see Section 2.5.1, “Replacing an operational node on Red Hat Virtualization installer-provisioned infrastructure”
- To replace a failed node, see Section 2.5.2, “Replacing a failed node on Red Hat Virtualization installer-provisioned infrastructure”
2.5.1. Replacing an operational node on Red Hat Virtualization installer-provisioned infrastructure
Use this procedure to replace an operational node on Red Hat Virtualization installer-provisioned infrastructure (IPI).
Prerequisites
- Red Hat recommends that replacement nodes are configured with similar infrastructure, resources and disks to the node being replaced.
- You must be logged into the OpenShift Container Platform (RHOCP) cluster.
-
If you upgraded to OpenShift Data Foundation version 4.8 from a previous version, and have not already created the
LocalVolumeDiscovery and LocalVolumeSet objects, do so now by following the procedure described in Post-update configuration changes for clusters backed by local storage.
Procedure
- Log in to OpenShift Web Console and click Compute → Nodes.
- Identify the node that needs to be replaced. Take a note of its Machine Name.
Get labels on the node to be replaced.
$ oc get nodes --show-labels | grep <node_name>

Identify the mon (if any) and OSDs that are running in the node to be replaced.

$ oc get pods -n openshift-storage -o wide | grep -i <node_name>

Scale down the deployments of the pods identified in the previous step.
For example:
$ oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage
$ oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage
$ oc scale deployment --selector=app=rook-ceph-crashcollector,node_name=<node_name> --replicas=0 -n openshift-storage

Mark the node as unschedulable.
$ oc adm cordon <node_name>

Drain the node.
$ oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets

- Click Compute → Machines. Search for the required machine.
- Beside the required machine, click the Action menu (⋮) → Delete Machine.
- Click Delete to confirm the machine deletion. A new machine is automatically created. Wait for the new machine to start and transition into Running state.
Important: This activity may take 5-10 minutes or more.
- Click Compute → Nodes in the OpenShift Web Console, and confirm that the new node is in Ready state.
- Physically add the new device(s) to the node.
Apply the OpenShift Data Foundation label to the new node using any one of the following:
- From User interface
  - For the new node, click Action Menu (⋮) → Edit Labels.
  - Add cluster.ocs.openshift.io/openshift-storage and click Save.
- From Command line interface
- Execute the following command to apply the OpenShift Data Foundation label to the new node:
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
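The verification steps later in this procedure recover the labeled node names with a `grep ... | cut -d' ' -f1` pipeline over `oc get nodes --show-labels`. A minimal offline sketch of that pipeline, with a canned single-node line (the node name and label list are illustrative):

```shell
#!/bin/sh
# Canned `oc get nodes --show-labels` line (illustrative); on a live
# cluster, pipe the real command output instead.
nodes='newnode.example.com   Ready   worker   5m   v1.21.1   beta.kubernetes.io/arch=amd64,cluster.ocs.openshift.io/openshift-storage=,kubernetes.io/hostname=newnode.example.com'

# Keep only lines carrying the storage label, then cut the first
# space-separated field, which is the node name.
echo "$nodes" | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
```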
Identify the namespace where the OpenShift local storage operator is installed and assign it to the local_storage_project variable:

$ local_storage_project=$(oc get csv --all-namespaces | awk '{print $1}' | grep local)

For example:

$ local_storage_project=$(oc get csv --all-namespaces | awk '{print $1}' | grep local)
$ echo $local_storage_project
openshift-local-storage

Add a new worker node to localVolumeDiscovery and localVolumeSet.

Update the localVolumeDiscovery definition to include the new node and remove the failed node. Remember to save before exiting the editor.

In the above example, server3.example.com was removed and newnode.example.com is the new node.

Determine which localVolumeSet to edit.

# oc get -n $local_storage_project localvolumeset
NAME         AGE
localblock   25h

Update the localVolumeSet definition to include the new node and remove the failed node. Remember to save before exiting the editor.

In the above example, server3.example.com was removed and newnode.example.com is the new node.
Verify that the new localblock PV is available.

$ oc get pv | grep localblock | grep Available
local-pv-551d950   512Gi   RWO   Delete   Available   localblock   26s

Change to the openshift-storage project.

$ oc project openshift-storage

Remove the failed OSD from the cluster. You can specify multiple failed OSDs if required.
$ oc process -n openshift-storage ocs-osd-removal \
-p FAILED_OSD_IDS=<failed_osd_id> FORCE_OSD_REMOVAL=false | oc create -n openshift-storage -f -

<failed_osd_id> is the integer in the pod name immediately after the rook-ceph-osd prefix. You can add comma-separated OSD IDs in the command to remove more than one OSD, for example, FAILED_OSD_IDS=0,1,2.

The FORCE_OSD_REMOVAL value must be changed to true in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed.
Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded.

# oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage

Ensure that the OSD removal is completed.
$ oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'

Example output:
2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0

Important: If the ocs-osd-removal-job fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example:
# oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1

Identify the PV associated with the PVC.
# oc get pv -L kubernetes.io/hostname | grep localblock | grep Released
local-pv-d6bf175b   512Gi   RWO   Delete   Released   openshift-storage/ocs-deviceset-0-data-0-6c5pw   localblock   2d22h   server3.example.com

If there is a PV in Released state, delete it.

# oc delete pv <persistent-volume>

For example:
# oc delete pv local-pv-d6bf175b
persistentvolume "local-pv-d6bf175b" deleted

Identify the crashcollector pod deployment.

$ oc get deployment --selector=app=rook-ceph-crashcollector,node_name=failed-node-name -n openshift-storage

If there is an existing crashcollector pod, delete it.

$ oc delete deployment --selector=app=rook-ceph-crashcollector,node_name=failed-node-name -n openshift-storage

Delete the ocs-osd-removal job.

# oc delete -n openshift-storage job ocs-osd-removal-job

Example output:

job.batch "ocs-osd-removal-job" deleted
Verification steps
Execute the following command and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1

Click Workloads → Pods, and confirm that at least the following pods on the new node are in Running state:
csi-cephfsplugin-* -
csi-rbdplugin-*
Verify that all other required OpenShift Data Foundation pods are in Running state.
Ensure that the new incremental mon is created and is in the Running state.

$ oc get pod -n openshift-storage | grep mon

Example output:
rook-ceph-mon-a-cd575c89b-b6k66    2/2   Running   0   38m
rook-ceph-mon-b-6776bc469b-tzzt8   2/2   Running   0   38m
rook-ceph-mon-d-5ff5d488b5-7v8xh   2/2   Running   0   4m8s

OSD and Mon might take several minutes to get to the Running state.

Verify that new OSD pods are running on the replacement node.
$ oc get pods -o wide -n openshift-storage | egrep -i new-node-name | egrep osd

Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
For each of the new nodes identified in previous step, do the following:
Create a debug pod and open a chroot environment for the selected host(s).
$ oc debug node/<node name>
$ chroot /host

Run "lsblk" and check for the "crypt" keyword beside the ocs-deviceset name(s).

$ lsblk
- If verification steps fail, contact Red Hat Support.
2.5.2. Replacing a failed node on Red Hat Virtualization installer-provisioned infrastructure
Perform this procedure to replace a failed node which is not operational on Red Hat Virtualization installer-provisioned infrastructure (IPI) for OpenShift Data Foundation.
Prerequisites
- Red Hat recommends that replacement nodes are configured with similar infrastructure, resources and disks to the node being replaced.
- You must be logged into the OpenShift Container Platform (RHOCP) cluster.
-
If you upgraded to OpenShift Data Foundation version 4.8 from a previous version, and have not already created the
LocalVolumeDiscovery and LocalVolumeSet objects, do so now by following the procedure described in Post-update configuration changes for clusters backed by local storage.
Procedure
- Log in to the OpenShift Web Console and click Compute → Nodes.
- Identify the node that needs to be replaced. Take a note of its Machine Name.
Get the labels on the node to be replaced.
$ oc get nodes --show-labels | grep <node_name>

Identify the mon (if any) and OSDs that are running on the node to be replaced.
$ oc get pods -n openshift-storage -o wide | grep -i <node_name>

Scale down the deployments of the pods identified in the previous step.
For example:
$ oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage
$ oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage
$ oc scale deployment --selector=app=rook-ceph-crashcollector,node_name=<node_name> --replicas=0 -n openshift-storage

Mark the node as unschedulable.
$ oc adm cordon <node_name>

Remove the pods which are in the Terminating state.

$ oc get pods -A -o wide | grep -i <node_name> | awk '{if ($4 == "Terminating") system ("oc -n " $1 " delete pods " $2 " --grace-period=0 " " --force ")}'

Drain the node.

$ oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets
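The awk one-liner in the pod-removal step above builds a forced-delete command for every pod whose status column reads Terminating. A dry-run sketch of the same selection logic, using hard-coded sample rows in place of live `oc get pods -A -o wide` output and printing the commands instead of executing them:

```shell
# Sample `oc get pods -A -o wide` rows (namespace, name, ready, status, ...),
# hard-coded for illustration; the pod names are made up.
sample='openshift-storage   rook-ceph-osd-0-abc   0/1   Terminating   0   5m
openshift-storage   rook-ceph-mon-c-xyz   1/1   Running       0   5m'

# $1 is the namespace, $2 the pod name, $4 the status. Only Terminating
# pods produce a delete command; here it is printed rather than run.
printf '%s\n' "$sample" | awk '{
  if ($4 == "Terminating")
    print "oc -n " $1 " delete pods " $2 " --grace-period=0 --force"
}'
```

Only the Terminating pod yields a command; the Running mon pod is left alone.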
- Click Compute → Machines. Search for the required machine.
- Beside the required machine, click the Action menu (⋮) → Delete Machine. Click Delete to confirm the machine deletion. A new machine is automatically created. Wait for the new machine to start and transition into the Running state.
Important
This activity may take at least 5-10 minutes or more.

- Click Compute → Nodes in the OpenShift web console. Confirm that the new node is in the Ready state.
- Physically add the new device(s) to the node.
Apply the OpenShift Data Foundation label to the new node using any one of the following:
- From User interface
  - For the new node, click Action Menu (⋮) → Edit Labels.
  - Add cluster.ocs.openshift.io/openshift-storage and click Save.
- From Command line interface
- Execute the following command to apply the OpenShift Data Foundation label to the new node:
$ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
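Whether the label took effect can be checked with the same grep/cut filter that the verification steps use later. A minimal sketch against hard-coded sample `oc get nodes --show-labels` rows (the node names are made up):

```shell
# Sample `oc get nodes --show-labels` rows, hard-coded for illustration.
# Only the first node carries the OpenShift Data Foundation storage label.
sample='newnode.example.com   Ready   worker   5m   v1.21.1   cluster.ocs.openshift.io/openshift-storage=,kubernetes.io/hostname=newnode.example.com
other.example.com     Ready   worker   2d   v1.21.1   kubernetes.io/hostname=other.example.com'

# Keep only labelled nodes; cut on the first space yields the node name.
printf '%s\n' "$sample" | grep 'cluster.ocs.openshift.io/openshift-storage=' | cut -d' ' -f1
```

On a live cluster, replace the sample with the real `oc get nodes --show-labels` output.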
Identify the namespace where the OpenShift local storage operator is installed and assign it to the local_storage_project variable:

$ local_storage_project=$(oc get csv --all-namespaces | awk '{print $1}' | grep local)

For example:

$ local_storage_project=$(oc get csv --all-namespaces | awk '{print $1}' | grep local)
$ echo $local_storage_project
openshift-local-storage

Add a new worker node to localVolumeDiscovery and localVolumeSet.

Update the localVolumeDiscovery definition to include the new node and remove the failed node. Remember to save before exiting the editor.
In the above example, server3.example.com was removed and newnode.example.com is the new node.

Determine which localVolumeSet to edit.

# oc get -n $local_storage_project localvolumeset
NAME         AGE
localblock   25h

Update the localVolumeSet definition to include the new node and remove the failed node. Remember to save before exiting the editor.

In the above example, server3.example.com was removed and newnode.example.com is the new node.
Verify that the new localblock PV is available.

$ oc get pv | grep localblock | grep Available
local-pv-551d950   512Gi   RWO   Delete   Available   localblock   26s

Change to the openshift-storage project.

$ oc project openshift-storage

Remove the failed OSD from the cluster. You can specify multiple failed OSDs if required.
$ oc process -n openshift-storage ocs-osd-removal \
    -p FAILED_OSD_IDS=<failed_osd_id> FORCE_OSD_REMOVAL=false | oc create -n openshift-storage -f -

<failed_osd_id> is the integer in the pod name immediately after the rook-ceph-osd prefix. You can add comma-separated OSD IDs in the command to remove more than one OSD, for example, FAILED_OSD_IDS=0,1,2.

The FORCE_OSD_REMOVAL value must be changed to true in clusters that only have three OSDs, or in clusters with insufficient space to restore all three replicas of the data after the OSD is removed.
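As noted above, the failed OSD ID is the integer immediately after the rook-ceph-osd prefix in the pod name. A sketch of extracting it with sed; the pod name below is a made-up example:

```shell
# Hypothetical pod name; on a live cluster this would come from
# `oc get pods -n openshift-storage | grep rook-ceph-osd`.
pod="rook-ceph-osd-2-6f5c8d4b9-x7k2p"

# Capture the first run of digits after the rook-ceph-osd- prefix.
osd_id=$(printf '%s\n' "$pod" | sed -n 's/^rook-ceph-osd-\([0-9][0-9]*\).*/\1/p')
echo "$osd_id"
```

The extracted value is what you would pass as FAILED_OSD_IDS to the ocs-osd-removal template.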
Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod.

A status of Completed confirms that the OSD removal job succeeded.

# oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage

Ensure that the OSD removal is completed.
$ oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'

Example output:
2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0

Important
If the ocs-osd-removal-job fails and the pod is not in the expected Completed state, check the pod logs for further debugging.

For example:
# oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1

Identify the PV associated with the PVC.
# oc get pv -L kubernetes.io/hostname | grep localblock | grep Released
local-pv-d6bf175b   512Gi   RWO   Delete   Released   openshift-storage/ocs-deviceset-0-data-0-6c5pw   localblock   2d22h   server3.example.com

If there is a PV in Released state, delete it.
# oc delete pv <persistent-volume>

For example:
# oc delete pv local-pv-d6bf175b
persistentvolume "local-pv-d6bf175b" deleted

Identify the crashcollector pod deployment.

$ oc get deployment --selector=app=rook-ceph-crashcollector,node_name=<failed_node_name> -n openshift-storage

If there is an existing crashcollector pod deployment, delete it.
$ oc delete deployment --selector=app=rook-ceph-crashcollector,node_name=<failed_node_name> -n openshift-storage

Delete the ocs-osd-removal job.

# oc delete -n openshift-storage job ocs-osd-removal-job

Example output:
job.batch "ocs-osd-removal-job" deleted
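The Released-PV check in the steps above filters `oc get pv` output down to localblock PVs left in Released state. A sketch of that filter against hard-coded sample rows (the PV names are taken from the examples above; the layout is illustrative):

```shell
# Sample `oc get pv` rows, hard-coded for illustration: one Released
# localblock PV (to be deleted) and one Available localblock PV.
sample='local-pv-d6bf175b   512Gi   RWO   Delete   Released    openshift-storage/ocs-deviceset-0-data-0-6c5pw   localblock   2d22h
local-pv-551d950    512Gi   RWO   Delete   Available   -                                                localblock   26s'

# Keep localblock PVs in Released state and print only the PV name,
# i.e. the argument you would pass to `oc delete pv`.
printf '%s\n' "$sample" | grep localblock | grep Released | awk '{print $1}'
```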
Verification steps
Execute the following command and verify that the new node is present in the output:
$ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1

Click Workloads → Pods, and confirm that at least the following pods on the new node are in the Running state:

- csi-cephfsplugin-*
- csi-rbdplugin-*

Verify that all other required OpenShift Data Foundation pods are in the Running state.
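The pod check above can be approximated from the command line as well. A sketch that counts Running csi-cephfsplugin-*/csi-rbdplugin-* pods in hard-coded sample `oc get pods` rows (the pod names are made up):

```shell
# Sample `oc get pods -n openshift-storage` rows, hard-coded for
# illustration; on a live cluster you would pipe real output in.
sample='csi-cephfsplugin-x2x6b   3/3   Running   0   5m
csi-rbdplugin-q9d2c      3/3   Running   0   5m
rook-ceph-operator-abc   1/1   Running   0   2d'

# Count pods that match either CSI plugin prefix and are Running.
# A count below 2 would mean one of the expected plugin pods is missing.
printf '%s\n' "$sample" | awk '$3 == "Running" && ($1 ~ /^csi-cephfsplugin-/ || $1 ~ /^csi-rbdplugin-/) { n++ } END { print n + 0 }'
```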
Ensure that the new incremental mon is created and is in the Running state.

$ oc get pod -n openshift-storage | grep mon

Example output:

rook-ceph-mon-a-cd575c89b-b6k66    2/2   Running   0   38m
rook-ceph-mon-b-6776bc469b-tzzt8   2/2   Running   0   38m
rook-ceph-mon-d-5ff5d488b5-7v8xh   2/2   Running   0   4m8s

OSD and Mon might take several minutes to get to the Running state.

Verify that the new OSD pods are running on the replacement node.
$ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd

Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
For each new node identified in the previous step, do the following:

Create a debug pod and open a chroot environment for the selected host.

$ oc debug node/<node_name>
$ chroot /host

Run lsblk and check for the crypt keyword beside the ocs-deviceset name(s).

$ lsblk
- If verification steps fail, contact Red Hat Support.