Chapter 13. Replacing storage nodes
- To replace an operational node, see Section 13.1, “Replacing an operational node on Red Hat Virtualization installer-provisioned infrastructure”
- To replace a failed node, see Section 13.2, “Replacing a failed node on Red Hat Virtualization installer-provisioned infrastructure”
13.1. Replacing an operational node on Red Hat Virtualization installer-provisioned infrastructure
Use this procedure to replace an operational node on Red Hat Virtualization installer-provisioned infrastructure (IPI).
Prerequisites
- Red Hat recommends that replacement nodes are configured with similar infrastructure and resources to the node being replaced.
- You must be logged in to the Red Hat OpenShift Container Platform (RHOCP) cluster.
Procedure
- Log in to the OpenShift Web Console and click Compute → Nodes.
- Identify the node that needs to be replaced. Take note of its Machine Name.
- Get the labels on the node to be replaced.
  $ oc get nodes --show-labels | grep <node_name>
- Identify the mon (if any) and OSDs that are running on the node to be replaced.
  $ oc get pods -n openshift-storage -o wide | grep -i <node_name>
- Scale down the deployments of the pods identified in the previous step.
  For example:
  $ oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage
  $ oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage
  $ oc scale deployment --selector=app=rook-ceph-crashcollector,node_name=<node_name> --replicas=0 -n openshift-storage
- Mark the node as unschedulable.
  $ oc adm cordon <node_name>
- Drain the node.
  $ oc adm drain <node_name> --force --delete-local-data --ignore-daemonsets
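  Before moving on, you can optionally confirm that the node is cordoned; this quick check is not part of the original procedure:
  $ oc get node <node_name>
  The STATUS column should report Ready,SchedulingDisabled after the cordon, and the drain should leave only DaemonSet-managed pods on the node.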
- Click Compute → Machines and search for the required machine.
- Beside the required machine, click the Action menu (⋮) → Delete Machine.
- Click Delete to confirm the machine deletion. A new machine is automatically created.
- Wait for the new machine to start and transition into the Running state.
  Important: This activity can take 5-10 minutes or more.
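  If you prefer to follow the replacement from the command line, the machine phase can be watched through the standard machine API objects; this assumes the default openshift-machine-api namespace used on installer-provisioned clusters:
  $ oc get machines -n openshift-machine-api -w
  Stop watching (Ctrl+C) once the new machine reports the Running phase.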
- Click Compute → Nodes in the OpenShift web console. Confirm that the new node is in the Ready state.
- Apply the OpenShift Container Storage label to the new node using any one of the following:
  - From the user interface:
    - For the new node, click Action Menu (⋮) → Edit Labels.
    - Add cluster.ocs.openshift.io/openshift-storage and click Save.
  - From the command line interface:
    - Execute the following command to apply the OpenShift Container Storage label to the new node:
      $ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
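  Whichever method you used, you can optionally confirm that the label is present:
  $ oc get node <new_node_name> --show-labels | grep cluster.ocs.openshift.io/openshift-storage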
- Add the local storage devices available on these worker nodes to the OpenShift Container Storage StorageCluster.
  - Determine which localVolumeSet to edit. Replace local-storage-project in the following commands with the name of your local storage project. The default project name is openshift-local-storage in OpenShift Container Storage 4.6 and later. Previous versions use local-storage by default.
    # oc get -n local-storage-project localvolumeset
    NAME         AGE
    localblock   25h
  - Add the new node to the localVolumeSet definition.
    # oc edit -n local-storage-project localvolumeset localblock
    [...]
        nodeSelector:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                  - server1.example.com
                  - server2.example.com
                  # - server3.example.com
                  - newnode.example.com
    [...]
    Remember to save before exiting the editor.
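    If you prefer a non-interactive change, the same edit can be expressed as a JSON patch; this is a sketch that assumes the new hostname belongs in the first match expression shown above:
    # oc patch -n local-storage-project localvolumeset localblock --type json \
        -p '[{"op": "add", "path": "/spec/nodeSelector/nodeSelectorTerms/0/matchExpressions/0/values/-", "value": "newnode.example.com"}]'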
- Verify that the new localblock PV is available.
  $ oc get pv | grep localblock
  NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                       STORAGECLASS   AGE
  local-pv-3e8964d3   931Gi      RWO            Delete           Bound       openshift-storage/ocs-deviceset-2-0-79j94   localblock     25h
  local-pv-414755e0   931Gi      RWO            Delete           Bound       openshift-storage/ocs-deviceset-1-0-959rp   localblock     25h
  local-pv-b481410    931Gi      RWO            Delete           Available                                               localblock     3m24s
  local-pv-d9c5cbd6   931Gi      RWO            Delete           Bound       openshift-storage/ocs-deviceset-0-0-nvs68   localblock     25h
- Change to the openshift-storage project.
  $ oc project openshift-storage
- Remove the failed OSD from the cluster. The failed OSD IDs are the integer suffixes of the OSD deployments you scaled down earlier (for example, rook-ceph-osd-0 corresponds to OSD ID 0).
  $ oc process -n openshift-storage ocs-osd-removal \
      -p FAILED_OSD_IDS=<failed-osd-id1>,<failed-osd-id2> | oc create -f -
- Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal pod. A status of Completed confirms that the OSD removal job succeeded.
  # oc get pod -l job-name=ocs-osd-removal-<failed-osd-id> -n openshift-storage
  Note: If ocs-osd-removal fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example:
  # oc logs -l job-name=ocs-osd-removal-<failed-osd-id> -n openshift-storage --tail=-1
- Delete the PV associated with the failed node.
  - Identify the PV associated with the PVC.
    # oc get -n openshift-storage pvc <claim_name>
    For example:
    # oc get -n openshift-storage pvc ocs-deviceset-0-0-nvs68
    NAME                      STATUS     VOLUME              CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    ocs-deviceset-0-0-nvs68   Released   local-pv-d9c5cbd6   931Gi      RWO            localblock     24h
  - Delete the PV.
    # oc delete pv <persistent-volume>
    For example:
    # oc delete pv local-pv-d9c5cbd6
    persistentvolume "local-pv-d9c5cbd6" deleted
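    Optionally confirm that the PV is gone before continuing; the command should return a NotFound error:
    # oc get pv local-pv-d9c5cbd6
    Error from server (NotFound): persistentvolumes "local-pv-d9c5cbd6" not found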
- Delete the crashcollector pod deployment.
  $ oc delete deployment --selector=app=rook-ceph-crashcollector,node_name=<failed_node_name> -n openshift-storage
- Deploy the new OSD by restarting the rook-ceph-operator to force operator reconciliation.
  - Identify the name of the rook-ceph-operator pod.
    # oc get -n openshift-storage pod -l app=rook-ceph-operator
    Example output:
    NAME                                  READY   STATUS    RESTARTS   AGE
    rook-ceph-operator-6f74fb5bff-2d982   1/1     Running   0          1d20h
  - Delete the rook-ceph-operator pod.
    # oc delete -n openshift-storage pod rook-ceph-operator-6f74fb5bff-2d982
    Example output:
    pod "rook-ceph-operator-6f74fb5bff-2d982" deleted
  - Verify that the rook-ceph-operator pod is restarted.
    # oc get -n openshift-storage pod -l app=rook-ceph-operator
    Example output:
    NAME                                  READY   STATUS    RESTARTS   AGE
    rook-ceph-operator-6f74fb5bff-7mvrq   1/1     Running   0          66s
    Creation of the new OSD and mon might take several minutes after the operator restarts.
- Delete the ocs-osd-removal job.
  # oc delete job ocs-osd-removal-${osd_id_to_remove}
  Example output:
  job.batch "ocs-osd-removal-0" deleted
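  As an optional check, verify that no removal job is left behind; the command should return no output:
  # oc get jobs -n openshift-storage | grep ocs-osd-removal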
Verification steps
- Execute the following command and verify that the new node is present in the output:
  $ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
- Click Workloads → Pods, and confirm that at least the following pods on the new node are in the Running state:
  - csi-cephfsplugin-*
  - csi-rbdplugin-*
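  The same check can be run from the command line instead of the console; this is an optional equivalent of the Workloads → Pods check above:
  $ oc get pods -n openshift-storage -o wide | grep <new_node_name> | egrep 'csi-cephfsplugin|csi-rbdplugin'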
- Verify that all other required OpenShift Container Storage pods are in the Running state.
- Ensure that the new incremental mon is created and is in the Running state.
  $ oc get pod -n openshift-storage | grep mon
  Example output:
  rook-ceph-mon-c-64556f7659-c2ngc   1/1   Running   0   6h14m
  rook-ceph-mon-d-7c8b74dc4d-tt6hd   1/1   Running   0   4h24m
  rook-ceph-mon-e-57fb8c657-wg5f2    1/1   Running   0   162m
  The OSD and mon pods might take several minutes to reach the Running state.
- Verify that new OSD pods are running on the replacement node.
  $ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd
- (Optional) If data encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
  For each of the new nodes identified in the previous step, do the following:
  - Create a debug pod and open a chroot environment for the selected host(s).
    $ oc debug node/<node_name>
    $ chroot /host
  - Run “lsblk” and check for the “crypt” keyword beside the ocs-deviceset name(s).
    $ lsblk
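    For reference, an encrypted OSD device appears in the lsblk output as a child device of type crypt; the device names below are hypothetical:
    $ lsblk
    NAME                                    MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
    sdb                                       8:16   0  931G  0 disk
    └─ocs-deviceset-example-block-dmcrypt   253:2    0  931G  0 crypt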
- If verification steps fail, contact Red Hat Support.
13.2. Replacing a failed node on Red Hat Virtualization installer-provisioned infrastructure
Ephemeral storage on Red Hat Virtualization for OpenShift Container Storage might cause data loss when an instance is powered off. Use this procedure to recover from such an instance power off on the Red Hat Virtualization platform.
Prerequisites
- Red Hat recommends that replacement nodes are configured with similar infrastructure and resources to the node being replaced.
- You must be logged in to the Red Hat OpenShift Container Platform (RHOCP) cluster.
Procedure
- Log in to the OpenShift Web Console and click Compute → Nodes.
- Identify the node that needs to be replaced. Take note of its Machine Name.
- Get the labels on the node to be replaced.
  $ oc get nodes --show-labels | grep <node_name>
- Identify the mon (if any) and OSDs that are running on the node to be replaced.
  $ oc get pods -n openshift-storage -o wide | grep -i <node_name>
- Scale down the deployments of the pods identified in the previous step.
  For example:
  $ oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage
  $ oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage
  $ oc scale deployment --selector=app=rook-ceph-crashcollector,node_name=<node_name> --replicas=0 -n openshift-storage
- Mark the node as unschedulable.
  $ oc adm cordon <node_name>
- Remove the pods which are in the Terminating state.
  $ oc get pods -A -o wide | grep -i <node_name> | awk '{if ($4 == "Terminating") system ("oc -n " $1 " delete pods " $2 " --grace-period=0 " " --force ")}'
- Drain the node.
  $ oc adm drain <node_name> --force --delete-local-data --ignore-daemonsets
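  As an optional check before continuing, confirm that the drain completed and no pods remain stuck in the Terminating state on the failed node:
  $ oc get pods -A -o wide | grep -i <node_name>
  Any remaining entries should belong to DaemonSets, which the drain intentionally ignores.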
- Click Compute → Machines and search for the required machine.
- Beside the required machine, click the Action menu (⋮) → Delete Machine.
- Click Delete to confirm the machine deletion. A new machine is automatically created.
- Wait for the new machine to start and transition into the Running state.
  Important: This activity can take 5-10 minutes or more.
- Click Compute → Nodes in the OpenShift web console. Confirm that the new node is in the Ready state.
- Apply the OpenShift Container Storage label to the new node using any one of the following:
  - From the user interface:
    - For the new node, click Action Menu (⋮) → Edit Labels.
    - Add cluster.ocs.openshift.io/openshift-storage and click Save.
  - From the command line interface:
    - Execute the following command to apply the OpenShift Container Storage label to the new node:
      $ oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
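  Optionally list all nodes carrying the storage label to confirm that the replacement node is included:
  $ oc get nodes -l cluster.ocs.openshift.io/openshift-storage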
- Add the local storage devices available in the new worker node to the OpenShift Container Storage StorageCluster.
  - Determine which localVolumeSet to edit. Replace local-storage-project in the following commands with the name of your local storage project. The default project name is openshift-local-storage in OpenShift Container Storage 4.6 and later. Previous versions use local-storage by default.
    # oc get -n local-storage-project localvolumeset
    NAME         AGE
    localblock   25h
  - Add the new node to the localVolumeSet definition.
    # oc edit -n local-storage-project localvolumeset localblock
    [...]
        nodeSelector:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                  - server1.example.com
                  - server2.example.com
                  # - server3.example.com
                  - newnode.example.com
    [...]
    Remember to save before exiting the editor.
- Verify that the new localblock PV is available.
  $ oc get pv | grep localblock
  NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                       STORAGECLASS   AGE
  local-pv-3e8964d3   931Gi      RWO            Delete           Bound       openshift-storage/ocs-deviceset-2-0-79j94   localblock     25h
  local-pv-414755e0   931Gi      RWO            Delete           Bound       openshift-storage/ocs-deviceset-1-0-959rp   localblock     25h
  local-pv-b481410    931Gi      RWO            Delete           Available                                               localblock     3m24s
  local-pv-d9c5cbd6   931Gi      RWO            Delete           Bound       openshift-storage/ocs-deviceset-0-0-nvs68   localblock     25h
- Change to the openshift-storage project.
  $ oc project openshift-storage
- Remove the failed OSD from the cluster. The failed OSD IDs are the integer suffixes of the OSD deployments you scaled down earlier (for example, rook-ceph-osd-0 corresponds to OSD ID 0).
  $ oc process -n openshift-storage ocs-osd-removal \
      -p FAILED_OSD_IDS=<failed-osd-id1>,<failed-osd-id2> | oc create -f -
- Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal pod. A status of Completed confirms that the OSD removal job succeeded.
  # oc get pod -l job-name=ocs-osd-removal-<failed-osd-id> -n openshift-storage
  Note: If ocs-osd-removal fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example:
  # oc logs -l job-name=ocs-osd-removal-<failed-osd-id> -n openshift-storage --tail=-1
- Delete the PV associated with the failed node.
  - Identify the PV associated with the PVC.
    # oc get -n openshift-storage pvc <claim_name>
    For example:
    # oc get -n openshift-storage pvc ocs-deviceset-0-0-nvs68
    NAME                      STATUS     VOLUME              CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    ocs-deviceset-0-0-nvs68   Released   local-pv-d9c5cbd6   931Gi      RWO            localblock     24h
  - Delete the PV.
    # oc delete pv <persistent-volume>
    For example:
    # oc delete pv local-pv-d9c5cbd6
    persistentvolume "local-pv-d9c5cbd6" deleted
- Delete the crashcollector pod deployment.
  $ oc delete deployment --selector=app=rook-ceph-crashcollector,node_name=<failed_node_name> -n openshift-storage
- Deploy the new OSD by restarting the rook-ceph-operator to force operator reconciliation.
  - Identify the name of the rook-ceph-operator pod.
    # oc get -n openshift-storage pod -l app=rook-ceph-operator
    Example output:
    NAME                                  READY   STATUS    RESTARTS   AGE
    rook-ceph-operator-6f74fb5bff-2d982   1/1     Running   0          1d20h
  - Delete the rook-ceph-operator pod.
    # oc delete -n openshift-storage pod rook-ceph-operator-6f74fb5bff-2d982
    Example output:
    pod "rook-ceph-operator-6f74fb5bff-2d982" deleted
  - Verify that the rook-ceph-operator pod is restarted.
    # oc get -n openshift-storage pod -l app=rook-ceph-operator
    Example output:
    NAME                                  READY   STATUS    RESTARTS   AGE
    rook-ceph-operator-6f74fb5bff-7mvrq   1/1     Running   0          66s
    Creation of the new OSD and mon might take several minutes after the operator restarts.
- Delete the ocs-osd-removal job.
  # oc delete job ocs-osd-removal-${osd_id_to_remove}
  Example output:
  job.batch "ocs-osd-removal-0" deleted
Verification steps
- Execute the following command and verify that the new node is present in the output:
  $ oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
- Click Workloads → Pods, and confirm that at least the following pods on the new node are in the Running state:
  - csi-cephfsplugin-*
  - csi-rbdplugin-*
- Verify that all other required OpenShift Container Storage pods are in the Running state.
- Ensure that the new incremental mon is created and is in the Running state.
  $ oc get pod -n openshift-storage | grep mon
  Example output:
  rook-ceph-mon-c-64556f7659-c2ngc   1/1   Running   0   6h14m
  rook-ceph-mon-d-7c8b74dc4d-tt6hd   1/1   Running   0   4h24m
  rook-ceph-mon-e-57fb8c657-wg5f2    1/1   Running   0   162m
  The OSD and mon pods might take several minutes to reach the Running state.
- Verify that new OSD pods are running on the replacement node.
  $ oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd
- (Optional) If data encryption is enabled on the cluster, verify that the new OSD devices are encrypted.
  For each of the new nodes identified in the previous step, do the following:
  - Create a debug pod and open a chroot environment for the selected host(s).
    $ oc debug node/<node_name>
    $ chroot /host
  - Run “lsblk” and check for the “crypt” keyword beside the ocs-deviceset name(s).
    $ lsblk
- If verification steps fail, contact Red Hat Support.