
Chapter 8. Removing failed or unwanted Ceph Object Storage devices


Failed or unwanted Ceph OSDs (Object Storage Devices) affect the performance of the storage infrastructure. To improve the reliability and resilience of the storage cluster, remove any failed or unwanted Ceph OSDs.

If you have any failed or unwanted Ceph OSDs to remove:

  1. Verify the Ceph health status.

    For more information, see Verifying Ceph cluster is healthy.

  2. Based on how the OSDs were provisioned, remove the failed or unwanted Ceph OSDs.

    See Removing failed or unwanted Ceph OSDs in dynamically provisioned Red Hat OpenShift Data Foundation or Removing failed or unwanted Ceph OSDs provisioned using local storage devices.

If you are using local disks, you can reuse these disks after removing the old OSDs.
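
To identify which OSDs have failed before you begin, you can list the OSD pods in the openshift-storage namespace. This is a minimal sketch; the app=rook-ceph-osd label is the one that Rook applies to OSD pods by default.

    # oc get pods -n openshift-storage -l app=rook-ceph-osd

OSD pods in the Error or CrashLoopBackOff state are candidates for removal; the integer after the rook-ceph-osd- prefix in the pod name is the OSD ID that the following procedures refer to.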

8.1. Verifying Ceph cluster is healthy

Storage health is visible on the Block and File and Object dashboards.

Procedure

  1. In the OpenShift Web Console, click Storage → Data Foundation.
  2. In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop-up that appears.
  3. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick.
  4. In the Details card, verify that the cluster information is displayed.
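
If you prefer the command line, you can also check the cluster health from the Ceph toolbox pod. This is a sketch only; it assumes the rook-ceph-tools pod is enabled in the openshift-storage namespace, which may require a separate step in your deployment.

    # oc rsh -n openshift-storage $(oc get pods -n openshift-storage -l app=rook-ceph-tools -o name)

    # ceph status

    # ceph osd tree

The cluster is healthy when ceph status reports HEALTH_OK, and any failed OSDs are listed as down in the ceph osd tree output.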

8.2. Removing failed or unwanted Ceph OSDs in dynamically provisioned Red Hat OpenShift Data Foundation

Follow the steps in the procedure to remove the failed or unwanted Ceph Object Storage Devices (OSDs) in dynamically provisioned Red Hat OpenShift Data Foundation.

Important

Scaling down of clusters is supported only with the help of the Red Hat support team.

Warning
  • Removing an OSD when the Ceph component is not in a healthy state can result in data loss.
  • Removing two or more OSDs at the same time results in data loss.

Prerequisites

Procedure

  1. Scale down the OSD deployment.

    # oc scale deployment rook-ceph-osd-<osd-id> --replicas=0
  2. Get the osd-prepare pod for the Ceph OSD to be removed.

    # oc get deployment rook-ceph-osd-<osd-id> -oyaml | grep ceph.rook.io/pvc
  3. Delete the osd-prepare pod.

    # oc delete -n openshift-storage pod rook-ceph-osd-prepare-<pvc-from-above-command>-<pod-suffix>
  4. Remove the failed OSD from the cluster.

    # failed_osd_id=<osd-id>

    # oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=${failed_osd_id} | oc create -f -

    where FAILED_OSD_IDS is the integer in the pod name immediately after the rook-ceph-osd prefix. For a worked example, see the sketch at the end of this section.

  5. Verify that the OSD is removed successfully by checking the logs.

    # oc logs -n openshift-storage ocs-osd-removal-${failed_osd_id}-<pod-suffix>
  6. Optional: If the ocs-osd-removal-job pod in OpenShift Container Platform returns the error cephosd:osd.0 is NOT ok to destroy, see Troubleshooting the error cephosd:osd.0 is NOT ok to destroy while removing failed or unwanted Ceph OSDs.
  7. Delete the OSD deployment.

    # oc delete deployment rook-ceph-osd-<osd-id>

Verification step

  • To check if the OSD is deleted successfully, run:

    # oc get pod -n openshift-storage ocs-osd-removal-${failed_osd_id}-<pod-suffix>

    This command must return the status as Completed.
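
The following worked example ties the preceding steps together. It is a sketch only: it assumes the failed OSD pod is named rook-ceph-osd-0-6d77d6c7c6-xxxxx (a hypothetical name), so the OSD ID is 0; the pod suffixes in your cluster will differ, and the osd-prepare pod cleanup from steps 2 and 3 is omitted for brevity.

    # failed_osd_id=0

    # oc scale deployment rook-ceph-osd-${failed_osd_id} --replicas=0

    # oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=${failed_osd_id} | oc create -f -

    # oc get pods -n openshift-storage | grep ocs-osd-removal

    # oc delete deployment rook-ceph-osd-${failed_osd_id}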

8.3. Removing failed or unwanted Ceph OSDs provisioned using local storage devices

You can remove failed or unwanted Ceph Object Storage Devices (OSDs) provisioned using local storage devices by following the steps in this procedure.

Important

Scaling down of clusters is supported only with the help of the Red Hat support team.

Warning
  • Removing an OSD when the Ceph component is not in a healthy state can result in data loss.
  • Removing two or more OSDs at the same time results in data loss.

Prerequisites

Procedure

  1. Forcibly mark the OSD down by scaling the replicas on the OSD deployment to 0. You can skip this step if the OSD is already down due to failure.

    # oc scale deployment rook-ceph-osd-<osd-id> --replicas=0
  2. Remove the failed OSD from the cluster.

    # failed_osd_id=<osd_id>

    # oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=${failed_osd_id} | oc create -f -

    where FAILED_OSD_IDS is the integer in the pod name immediately after the rook-ceph-osd prefix.

  3. Verify that the OSD is removed successfully by checking the logs.

    # oc logs -n openshift-storage ocs-osd-removal-${failed_osd_id}-<pod-suffix>
  4. Optional: If the ocs-osd-removal-job pod in OpenShift Container Platform returns the error cephosd:osd.0 is NOT ok to destroy, see Troubleshooting the error cephosd:osd.0 is NOT ok to destroy while removing failed or unwanted Ceph OSDs.
  5. Delete persistent volume claim (PVC) resources associated with the failed OSD.

    1. Get the PVC associated with the failed OSD.

      # oc get -n openshift-storage -o yaml deployment rook-ceph-osd-<osd-id> | grep ceph.rook.io/pvc
    2. Get the persistent volume (PV) associated with the PVC.

      # oc get -n openshift-storage pvc <pvc-name>
    3. Get the failed device name.

      # oc get pv <pv-name-from-above-command> -oyaml | grep path
    4. Get the prepare-pod associated with the failed OSD.

      # oc describe -n openshift-storage pvc ocs-deviceset-0-0-nvs68 | grep Mounted
    5. Delete the osd-prepare pod before removing the associated PVC.

      # oc delete -n openshift-storage pod <osd-prepare-pod-from-above-command>
    6. Delete the PVC associated with the failed OSD.

      # oc delete -n openshift-storage pvc <pvc-name-from-step-a>
  6. Remove the failed device entry from the LocalVolume custom resource (CR).

    1. Log in to the node with the failed device.

      # oc debug node/<node_with_failed_osd>
    2. Record the /dev/disk/by-id/<id> for the failed device name.

      # ls -alh /mnt/local-storage/localblock/
  7. Optional: If the Local Storage Operator is used for provisioning the OSD, log in to the machine that hosted the failed OSD and remove the device symlink.

    # oc debug node/<node_with_failed_osd>
    1. Get the OSD symlink for the failed device name.

      # ls -alh /mnt/local-storage/localblock
    2. Remove the symlink.

      # rm /mnt/local-storage/localblock/<failed-device-name>
  8. Delete the PV associated with the OSD.

    # oc delete pv <pv-name>

Verification step

  • To check if the OSD is deleted successfully, run:

    # oc get pod -n openshift-storage ocs-osd-removal-${failed_osd_id}-<pod-suffix>

    This command must return the status as Completed.
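
Optional: before reusing the local disk for a new OSD, you can confirm that the old resources are gone. This is a sketch only; the PV name, node name, and device name are the values you recorded in the earlier steps.

    # oc get pv <pv-name>

    # oc debug node/<node_with_failed_osd>

    # ls -alh /mnt/local-storage/localblock/

The oc get pv command should report that the PV is not found, and the symlink for the failed device should no longer appear under /mnt/local-storage/localblock/.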

8.4. Troubleshooting the error cephosd:osd.0 is NOT ok to destroy while removing failed or unwanted Ceph OSDs

If the ocs-osd-removal-job pod in OpenShift Container Platform returns the error cephosd:osd.0 is NOT ok to destroy, run the Object Storage Device (OSD) removal job with the FORCE_OSD_REMOVAL option to move the OSD to a destroyed state.

# oc process -n openshift-storage ocs-osd-removal -p FORCE_OSD_REMOVAL=true -p FAILED_OSD_IDS=${failed_osd_id} | oc create -f -
Note

Use the FORCE_OSD_REMOVAL option only if all the PGs are in an active state. If they are not, you must either wait for the PGs to complete backfilling or investigate further to ensure that they become active.
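
Before you run the removal job with FORCE_OSD_REMOVAL=true, you can check the PG states from the Ceph toolbox pod. This is a sketch only; it assumes the rook-ceph-tools pod is enabled in the openshift-storage namespace.

    # oc rsh -n openshift-storage $(oc get pods -n openshift-storage -l app=rook-ceph-tools -o name)

    # ceph pg stat

All PGs should report an active state (for example, active+clean) before you force the removal.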
