Chapter 5. Troubleshooting OSDs


This chapter contains information on how to fix the most common errors related to Ceph OSDs.

5.1. The Most Common Error Messages Related to OSDs

The following tables list the most common error messages that are returned by the ceph health detail command, or included in the Ceph logs. The tables provide links to corresponding sections that explain the errors and point to specific procedures to fix the problems.

Table 5.1. Error Messages Related to OSDs

  Error message                      See
  HEALTH_ERR full osds               Section 5.1.1, “Full OSDs”
  HEALTH_WARN nearfull osds          Section 5.1.2, “Nearfull OSDs”
  HEALTH_WARN osds are down          Section 5.1.3, “One or More OSDs Are Down”
                                     Section 5.1.4, “Flapping OSDs”
  HEALTH_WARN requests are blocked   Section 5.1.5, “Slow Requests, and Requests are Blocked”
  HEALTH_WARN slow requests          Section 5.1.5, “Slow Requests, and Requests are Blocked”

Table 5.2. Common Error Messages in Ceph Logs Related to OSDs

  Error message                               Log file           See
  heartbeat_check: no reply from osd.X        Main cluster log   Section 5.1.4, “Flapping OSDs”
  wrongly marked me down                      Main cluster log   Section 5.1.4, “Flapping OSDs”
  osds have slow requests                     Main cluster log   Section 5.1.5, “Slow Requests, and Requests are Blocked”
  FAILED assert(!m_filestore_fail_eio)        OSD log            Section 5.1.3, “One or More OSDs Are Down”
  FAILED assert(0 == "hit suicide timeout")   OSD log            Section 5.1.3, “One or More OSDs Are Down”

5.1.1. Full OSDs

The ceph health detail command returns an error message similar to the following one:

HEALTH_ERR 1 full osds
osd.3 is full at 95%
What This Means

Ceph prevents clients from performing I/O operations on full OSD nodes to avoid losing data. It returns the HEALTH_ERR full osds message when the cluster reaches the capacity set by the mon_osd_full_ratio parameter. By default, this parameter is set to 0.95, which means 95% of the cluster capacity.
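
To confirm the ratio in effect on a running cluster, you can query a daemon through its administration socket. A minimal check, run on a Monitor host; replace <id> with your Monitor ID, and note that the output shown is illustrative:

# ceph daemon mon.<id> config get mon_osd_full_ratio
{
    "mon_osd_full_ratio": "0.950000"
}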

To Troubleshoot This Problem

Determine what percentage of raw storage (%RAW USED) is used:

# ceph df
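
The output resembles the following; the values shown here are illustrative:

GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED
    1722G     124G        1598G         92.79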

If %RAW USED is above 70-75%, you can:

  • Delete unnecessary data. This is a short-term solution to avoid production downtime. See Section 5.6, “Deleting Data from a Full Cluster” for details.
  • Scale the cluster by adding a new OSD node. This is a long-term solution recommended by Red Hat.


5.1.2. Nearfull OSDs

The ceph health detail command returns an error message similar to the following one:

HEALTH_WARN 1 nearfull osds
osd.2 is near full at 85%
What This Means

Ceph returns the nearfull osds message when the cluster reaches the capacity set by the mon_osd_nearfull_ratio parameter. By default, this parameter is set to 0.85, which means 85% of the cluster capacity.

Ceph distributes data based on the CRUSH hierarchy in the best possible way but it cannot guarantee equal distribution. The main causes of the uneven data distribution and the nearfull osds messages are:

  • The OSDs are not balanced among the OSD nodes in the cluster. That is, some OSD nodes host significantly more OSDs than others, or the weight of some OSDs in the CRUSH map is not adequate for their capacity.
  • The Placement Group (PG) count is inappropriate for the number of OSDs, the use case, the target number of PGs per OSD, and the OSD utilization.
  • The cluster uses inappropriate CRUSH tunables.
  • The back-end storage for OSDs is almost full.
To Troubleshoot This Problem
  1. Verify that the PG count is sufficient and increase it if needed. See Section 7.5, “Increasing the PG Count” for details.
  2. Verify that you use CRUSH tunables optimal to the cluster version and adjust them if not. For details, see the CRUSH Tunables section in the Storage Strategies guide for Red Hat Ceph Storage 3 and the How can I test the impact CRUSH map tunable modifications will have on my PG distribution across OSDs in Red Hat Ceph Storage? solution on the Red Hat Customer Portal.
  3. Change the weight of OSDs by utilization; a command sketch follows this procedure. See also the Set an OSD’s Weight by Utilization section in the Storage Strategies guide for Red Hat Ceph Storage 3.
  4. Determine how much space is left on the disks used by OSDs.

    1. To view how much space OSDs use in general:

      # ceph osd df
    2. To view how much space OSDs use on a particular node, use the following command from the node containing nearfull OSDs:

      $ df
    3. If needed, add a new OSD node. See the Adding and Removing OSD Nodes chapter in the Administration Guide for Red Hat Ceph Storage 3.
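
For step 3, Ceph can recalculate the weights for you. A minimal sketch, assuming a Red Hat Ceph Storage 3 (Luminous-based) cluster; the threshold of 110 reweights OSDs that are above 110% of the average utilization:

# ceph osd test-reweight-by-utilization 110
# ceph osd reweight-by-utilization 110

The first command is a dry run that reports which OSDs would be reweighted; the second applies the change.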

5.1.3. One or More OSDs Are Down

The ceph health command returns an error similar to the following one:

HEALTH_WARN 1/3 in osds are down
What This Means

One of the ceph-osd processes is unavailable due to a possible service failure or problems with communication with other OSDs. As a consequence, the surviving ceph-osd daemons report this failure to the Monitors.

If the ceph-osd daemon is not running, the underlying OSD drive or file system is either corrupted, or some other error, such as a missing keyring, is preventing the daemon from starting.

In most cases, when the ceph-osd daemon is running but is still marked as down, networking issues are the cause.

To Troubleshoot This Problem
  1. Determine which OSD is down:

    # ceph health detail
    HEALTH_WARN 1/3 in osds are down
    osd.0 is down since epoch 23, last address 192.168.106.220:6800/11080
  2. Try to restart the ceph-osd daemon:

    # systemctl restart ceph-osd@<OSD-number>

    Replace <OSD-number> with the ID of the OSD that is down, for example:

    # systemctl restart ceph-osd@0
    1. If you are not able to start ceph-osd, follow the steps in The ceph-osd daemon cannot start.
    2. If you are able to start the ceph-osd daemon but it is marked as down, follow the steps in The ceph-osd daemon is running but still marked as down.
The ceph-osd daemon cannot start
  1. If you have a node containing a number of OSDs (generally, more than twelve), verify that the default maximum number of threads (PID count) is sufficient. See Section 5.5, “Increasing the PID count” for details.
  2. Verify that the OSD data and journal partitions are mounted properly:

    # ceph-disk list
    ...
    /dev/vdb :
     /dev/vdb1 ceph data, prepared
     /dev/vdb2 ceph journal
    /dev/vdc :
     /dev/vdc1 ceph data, active, cluster ceph, osd.1, journal /dev/vdc2
     /dev/vdc2 ceph journal, for /dev/vdc1
    /dev/sdd :
     /dev/sdd1 ceph data, unprepared
     /dev/sdd2 ceph journal

    A partition is mounted if ceph-disk marks it as active. If a partition is prepared, mount it. See Section 5.3, “Mounting the OSD Data Partition” for details. If a partition is unprepared, you must prepare it before mounting. See the Preparing the OSD Data and Journal Drives section in the Administration Guide for Red Hat Ceph Storage 3.

  3. If you got the ERROR: missing keyring, cannot use cephx for authentication error message, the OSD is missing its keyring. See the Keyring Management section in the Administration Guide for Red Hat Ceph Storage 3.
  4. If you got the ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-1 error message, the ceph-osd daemon cannot read the underlying file system. See the following steps for instructions on how to troubleshoot and fix this error.

    Note

    If this error message is returned during boot time of the OSD host, open a support ticket as this might indicate a known issue tracked in the Red Hat Bugzilla 1439210. See Chapter 9, Contacting Red Hat Support Service for details.

  5. Check the corresponding log file to determine the cause of the failure. By default, Ceph stores log files in the /var/log/ceph/ directory.

    1. An EIO error message similar to the following one indicates a failure of the underlying disk:

      FAILED assert(!m_filestore_fail_eio || r != -5)

      To fix this problem replace the underlying OSD disk. See Section 5.4, “Replacing an OSD Drive” for details.

    2. If the log includes any other FAILED assert errors, such as the following one, open a support ticket. See Chapter 9, Contacting Red Hat Support Service for details.

      FAILED assert(0 == "hit suicide timeout")
  6. Check the dmesg output for the errors with the underlying file system or disk:

    $ dmesg
    1. An error -5 message similar to the following one indicates corruption of the underlying XFS file system. For details on how to fix this problem, see the What is the meaning of "xfs_log_force: error -5 returned"? solution on the Red Hat Customer Portal.

      xfs_log_force: error -5 returned
    2. If the dmesg output includes any SCSI error error messages, see the SCSI Error Codes Solution Finder solution on the Red Hat Customer Portal to determine the best way to fix the problem.
    3. Alternatively, if you are unable to fix the underlying file system, replace the OSD drive. See Section 5.4, “Replacing an OSD Drive” for details.
  7. If the OSD failed with a segmentation fault, such as the following one, gather the required information and open a support ticket. See Chapter 9, Contacting Red Hat Support Service for details.

    Caught signal (Segmentation fault)
The ceph-osd daemon is running but still marked as down
  1. Check the corresponding log file to determine the cause of the failure. By default, Ceph stores log files in the /var/log/ceph/ directory.

    1. If the log includes error messages similar to the following ones, see Section 5.1.4, “Flapping OSDs”.

      wrongly marked me down
      heartbeat_check: no reply from osd.2 since back
    2. If you see any other errors, open a support ticket. See Chapter 9, Contacting Red Hat Support Service for details.

5.1.4. Flapping OSDs

The ceph -w | grep osds command shows OSDs repeatedly as down and then up again within a short period of time:

# ceph -w | grep osds
2017-04-05 06:27:20.810535 mon.0 [INF] osdmap e609: 9 osds: 8 up, 9 in
2017-04-05 06:27:24.120611 mon.0 [INF] osdmap e611: 9 osds: 7 up, 9 in
2017-04-05 06:27:25.975622 mon.0 [INF] HEALTH_WARN; 118 pgs stale; 2/9 in osds are down
2017-04-05 06:27:27.489790 mon.0 [INF] osdmap e614: 9 osds: 6 up, 9 in
2017-04-05 06:27:36.540000 mon.0 [INF] osdmap e616: 9 osds: 7 up, 9 in
2017-04-05 06:27:39.681913 mon.0 [INF] osdmap e618: 9 osds: 8 up, 9 in
2017-04-05 06:27:43.269401 mon.0 [INF] osdmap e620: 9 osds: 9 up, 9 in
2017-04-05 06:27:54.884426 mon.0 [INF] osdmap e622: 9 osds: 8 up, 9 in
2017-04-05 06:27:57.398706 mon.0 [INF] osdmap e624: 9 osds: 7 up, 9 in
2017-04-05 06:27:59.669841 mon.0 [INF] osdmap e625: 9 osds: 6 up, 9 in
2017-04-05 06:28:07.043677 mon.0 [INF] osdmap e628: 9 osds: 7 up, 9 in
2017-04-05 06:28:10.512331 mon.0 [INF] osdmap e630: 9 osds: 8 up, 9 in
2017-04-05 06:28:12.670923 mon.0 [INF] osdmap e631: 9 osds: 9 up, 9 in

In addition, the Ceph log contains error messages similar to the following ones:

2016-07-25 03:44:06.510583 osd.50 127.0.0.1:6801/149046 18992 : cluster [WRN] map e600547 wrongly marked me down
2016-07-25 19:00:08.906864 7fa2a0033700 -1 osd.254 609110 heartbeat_check: no reply from osd.2 since back 2016-07-25 19:00:07.444113 front 2016-07-25 18:59:48.311935 (cutoff 2016-07-25 18:59:48.906862)
What This Means

The main causes of flapping OSDs are:

  • Certain cluster operations, such as scrubbing or recovery, take an abnormal amount of time, for example, when you perform these operations on objects with a large index or on large placement groups. Usually, after these operations finish, the flapping OSDs problem is solved.
  • Problems with the underlying physical hardware. In this case, the ceph health detail command also returns the slow requests error message. For details, see Section 5.1.5, “Slow Requests, and Requests are Blocked”.
  • Problems with the network.

OSDs do not cope well with situations where the cluster (back-end) network fails or develops significant latency while the public (front-end) network operates optimally.

OSDs use the cluster network for sending heartbeat packets to each other to indicate that they are up and in. If the cluster network does not work properly, OSDs are unable to send and receive the heartbeat packets. As a consequence, they report each other as being down to the Monitors, while marking themselves as up.

The following parameters in the Ceph configuration file influence this behavior:

  Parameter                    Description                                                   Default value
  osd_heartbeat_grace          How long OSDs wait for heartbeat packets to return before     20 seconds
                               reporting an OSD as down to the Monitors.
  mon_osd_min_down_reporters   How many OSDs must report another OSD as down before the      2
                               Monitors mark the OSD as down.

This table shows that, in the default configuration, the Ceph Monitors mark an OSD as down only after at least two other OSDs report it as down within the grace period. In some cases, if one single host encounters network issues, the entire cluster can experience flapping OSDs, because the OSDs that reside on that host will report other OSDs in the cluster as down.
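
You can verify the values in effect on a particular daemon through the administration socket. A minimal check, run on the OSD node; the OSD ID and output are illustrative:

# ceph daemon osd.0 config get osd_heartbeat_grace
{
    "osd_heartbeat_grace": "20"
}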

Note

The flapping OSDs scenario does not include the situation when the OSD processes are started and then immediately killed.

To Troubleshoot This Problem
  1. Check the output of the ceph health detail command again. If it includes the slow requests error message, see Section 5.1.5, “Slow Requests, and Requests are Blocked” for details on how to troubleshoot this issue.

    # ceph health detail
    HEALTH_WARN 30 requests are blocked > 32 sec; 3 osds have slow requests
    30 ops are blocked > 268435 sec
    1 ops are blocked > 268435 sec on osd.11
    1 ops are blocked > 268435 sec on osd.18
    28 ops are blocked > 268435 sec on osd.39
    3 osds have slow requests
  2. Determine which OSDs are marked as down and on what nodes they reside:

    # ceph osd tree | grep down
  3. On the nodes containing the flapping OSDs, troubleshoot and fix any networking problems. For details, see Chapter 3, Troubleshooting Networking Issues.
  4. Alternatively, you can temporarily force Monitors to stop marking the OSDs as down and up by setting the noup and nodown flags:

    # ceph osd set noup
    # ceph osd set nodown
    Important

    Using the noup and nodown flags does not fix the root cause of the problem but only prevents the OSDs from flapping. When you have fixed the underlying issue, unset the flags with the ceph osd unset noup and ceph osd unset nodown commands. If you are unable to troubleshoot and fix the error yourself, open a support ticket. See Chapter 9, Contacting Red Hat Support Service for details.

  5. Additionally, flapping OSDs can be fixed by setting osd_heartbeat_min_size = 100 in the Ceph configuration file and then restarting the OSDs. Smaller heartbeat packets can pass through a network path whose MTU is misconfigured, so this resolves flapping caused by MTU misconfiguration. A configuration sketch follows this procedure.
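
A minimal configuration sketch for this workaround; add the setting to the [osd] section of the Ceph configuration file on the affected nodes:

[osd]
osd_heartbeat_min_size = 100

Then restart the OSDs on each affected node, for example with:

# systemctl restart ceph-osd.target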

5.1.5. Slow Requests, and Requests are Blocked

The ceph-osd daemon is slow to respond to a request and the ceph health detail command returns an error message similar to the following one:

HEALTH_WARN 30 requests are blocked > 32 sec; 3 osds have slow requests
30 ops are blocked > 268435 sec
1 ops are blocked > 268435 sec on osd.11
1 ops are blocked > 268435 sec on osd.18
28 ops are blocked > 268435 sec on osd.39
3 osds have slow requests

In addition, the Ceph logs include an error message similar to the following ones:

2015-08-24 13:18:10.024659 osd.1 127.0.0.1:6812/3032 9 : cluster [WRN] 6 slow requests, 6 included below; oldest blocked for > 61.758455 secs
2016-07-25 03:44:06.510583 osd.50 [WRN] slow request 30.005692 seconds old, received at {date-time}: osd_op(client.4240.0:8 benchmark_data_ceph-1_39426_object7 [write 0~4194304] 0.69848840) v4 currently waiting for subops from [610]
What This Means

An OSD with slow requests is one that is not able to service the I/O operations per second (IOPS) in its queue within the time defined by the osd_op_complaint_time parameter. By default, this parameter is set to 30 seconds.

The main causes of OSDs having slow requests are:

  • Problems with the underlying hardware, such as disk drives, hosts, racks, or network switches
  • Problems with the network. These problems are usually connected with flapping OSDs. See Section 5.1.4, “Flapping OSDs” for details.
  • System load

The following table shows the types of slow requests. Use the dump_historic_ops administration socket command to determine the type of a slow request. For details about the administration socket, see the Using the Administration Socket section in the Administration Guide for Red Hat Ceph Storage 3.
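
A minimal sketch of such a query, run on the node that hosts the OSD; the OSD ID is illustrative:

# ceph daemon osd.11 dump_historic_ops

The JSON output lists recent operations together with their duration and the flag points they reached.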

  Slow request type             Description
  waiting for rw locks          The OSD is waiting to acquire a lock on a placement group for the operation.
  waiting for subops            The OSD is waiting for replica OSDs to apply the operation to the journal.
  no flag points reached        The OSD did not reach any major operation milestone.
  waiting for degraded object   The OSDs have not replicated an object the specified number of times yet.

To Troubleshoot This Problem
  1. Determine if the OSDs with slow or blocked requests share a common piece of hardware, for example a disk drive, host, rack, or network switch.
  2. If the OSDs share a disk:

    1. Use the smartmontools utility to check the health of the disk, or check the logs to determine any errors on the disk; a command sketch follows this procedure.

      Note

      The smartmontools utility is included in the smartmontools package.

    2. Use the iostat utility to get the I/O wait report (%iowait) on the OSD disk to determine if the disk is under heavy load.

      Note

      The iostat utility is included in the sysstat package.

  3. If the OSDs share a host:

    1. Check the RAM and CPU utilization.
    2. Use the netstat utility to see the network statistics on the Network Interface Controllers (NICs) and troubleshoot any networking issues. See also Chapter 3, Troubleshooting Networking Issues for further information.
  4. If the OSDs share a rack, check the network switch for the rack. For example, if you use jumbo frames, verify that every NIC and switch port in the path has jumbo frames enabled.
  5. If you are unable to determine a common piece of hardware shared by OSDs with slow requests, or to troubleshoot and fix hardware and networking problems, open a support ticket. See Chapter 9, Contacting Red Hat Support Service for details.
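
The following sketch collects the checks from steps 2 and 3; the device and interface names are illustrative:

# smartctl -a /dev/sdd
# iostat -x 1 5
# netstat -i

smartctl -a prints the SMART health summary for the shared disk, iostat -x 1 5 prints five extended I/O reports one second apart (watch the %iowait and %util columns), and netstat -i shows per-interface packet and error counters.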

5.2. Stopping and Starting Rebalancing

When an OSD fails or you stop it, the CRUSH algorithm automatically starts the rebalancing process to redistribute data across the remaining OSDs.

Rebalancing can take time and resources; therefore, consider stopping rebalancing while you troubleshoot or maintain OSDs. To do so, set the noout flag before stopping the OSD:

# ceph osd set noout

When you finish troubleshooting or maintenance, unset the noout flag to start rebalancing:

# ceph osd unset noout
Note

Placement groups within the stopped OSDs become degraded during troubleshooting and maintenance.
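
A complete maintenance sequence might look like the following sketch; the OSD ID is illustrative:

# ceph osd set noout
# systemctl stop ceph-osd@0
# ...perform troubleshooting or maintenance...
# systemctl start ceph-osd@0
# ceph osd unset noout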


5.3. Mounting the OSD Data Partition

If the OSD data partition is not mounted correctly, the ceph-osd daemon cannot start. If you discover that the partition is not mounted as expected, follow the steps in this section to mount it.

Procedure: Mounting the OSD Data Partition

  1. Mount the partition:

    # mount -o noatime <partition> /var/lib/ceph/osd/<cluster-name>-<osd-number>

    Replace <partition> with the path to the partition on the OSD drive dedicated to OSD data. Specify the cluster name and the OSD number, for example:

    # mount -o noatime /dev/sdd1 /var/lib/ceph/osd/ceph-0
  2. Try to start the failed ceph-osd daemon:

    # systemctl start ceph-osd@<OSD-number>

    Replace the <OSD-number> with the ID of the OSD, for example:

    # systemctl start ceph-osd@0
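
To verify that the partition is mounted where the daemon expects it, a quick check; the paths and mount options shown are illustrative:

# mount | grep /var/lib/ceph/osd/ceph-0
/dev/sdd1 on /var/lib/ceph/osd/ceph-0 type xfs (rw,noatime,attr2,inode64,noquota)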


5.4. Replacing an OSD Drive

Ceph is designed for fault tolerance, which means that it can operate in a degraded state without losing data. Consequently, Ceph can operate even if a data storage drive fails. In the context of a failed drive, the degraded state means that the extra copies of the data stored on other OSDs will backfill automatically to other OSDs in the cluster. However, if this occurs, replace the failed OSD drive and recreate the OSD manually.

When a drive fails, Ceph reports the OSD as down:

HEALTH_WARN 1/3 in osds are down
osd.0 is down since epoch 23, last address 192.168.106.220:6800/11080
Note

Ceph can also mark an OSD as down as a consequence of networking or permissions problems. See Section 5.1.3, “One or More OSDs Are Down” for details.

Modern servers typically deploy with hot-swappable drives, so you can pull a failed drive and replace it with a new one without bringing down the node. The whole procedure includes these steps:

  1. Remove the OSD from the Ceph cluster. For details, see the Removing an OSD from the Ceph Cluster procedure.
  2. Replace the drive. For details, see the Replacing the Physical Drive section.
  3. Add the OSD to the cluster. For details, see the Adding an OSD to the Ceph Cluster procedure.

Before You Start

  1. Determine which OSD is down:

    # ceph osd tree | grep -i down
    ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY
     0 0.00999         osd.0     down  1.00000          1.00000
  2. Ensure that the OSD process is stopped. Use the following command from the OSD node:

    # systemctl status ceph-osd@<OSD-number>

    Replace <OSD-number> with the ID of the OSD marked as down, for example:

    # systemctl status ceph-osd@0
    ...
       Active: inactive (dead)

    If the ceph-osd daemon is running, see Section 5.1.3, “One or More OSDs Are Down” for more details about troubleshooting OSDs that are marked as down but whose corresponding ceph-osd daemon is running.

Procedure: Removing an OSD from the Ceph Cluster

  1. Mark the OSD as out:

    # ceph osd out osd.<OSD-number>

    Replace <OSD-number> with the ID of the OSD that is marked as down, for example:

    # ceph osd out osd.0
    marked out osd.0.
    Note

    If the OSD is down, Ceph marks it as out automatically after 600 seconds if it does not receive any heartbeat packets from the OSD. When this happens, other OSDs with copies of the failed OSD data begin backfilling to ensure that the required number of copies exists within the cluster. While the cluster is backfilling, it is in a degraded state.

  2. Ensure that the failed OSD is backfilling. The output includes information similar to the following:

    # ceph -w | grep backfill
    2017-06-02 04:48:03.403872 mon.0 [INF] pgmap v10293282: 431 pgs: 1 active+undersized+degraded+remapped+backfilling, 28 active+undersized+degraded, 49 active+undersized+degraded+remapped+wait_backfill, 59 stale+active+clean, 294 active+clean; 72347 MB data, 101302 MB used, 1624 GB / 1722 GB avail; 227 kB/s rd, 1358 B/s wr, 12 op/s; 10626/35917 objects degraded (29.585%); 6757/35917 objects misplaced (18.813%); 63500 kB/s, 15 objects/s recovering
    2017-06-02 04:48:04.414397 mon.0 [INF] pgmap v10293283: 431 pgs: 2 active+undersized+degraded+remapped+backfilling, 75 active+undersized+degraded+remapped+wait_backfill, 59 stale+active+clean, 295 active+clean; 72347 MB data, 101398 MB used, 1623 GB / 1722 GB avail; 969 kB/s rd, 6778 B/s wr, 32 op/s; 10626/35917 objects degraded (29.585%); 10580/35917 objects misplaced (29.457%); 125 MB/s, 31 objects/s recovering
    2017-06-02 04:48:00.380063 osd.1 [INF] 0.6f starting backfill to osd.0 from (0'0,0'0] MAX to 2521'166639
    2017-06-02 04:48:00.380139 osd.1 [INF] 0.48 starting backfill to osd.0 from (0'0,0'0] MAX to 2513'43079
    2017-06-02 04:48:00.380260 osd.1 [INF] 0.d starting backfill to osd.0 from (0'0,0'0] MAX to 2513'136847
    2017-06-02 04:48:00.380849 osd.1 [INF] 0.71 starting backfill to osd.0 from (0'0,0'0] MAX to 2331'28496
    2017-06-02 04:48:00.381027 osd.1 [INF] 0.51 starting backfill to osd.0 from (0'0,0'0] MAX to 2513'87544
  3. Remove the OSD from the CRUSH map:

    # ceph osd crush remove osd.<OSD-number>

    Replace <OSD-number> with the ID of the OSD that is marked as down, for example:

    # ceph osd crush remove osd.0
    removed item id 0 name 'osd.0' from crush map
  4. Remove authentication keys related to the OSD:

    # ceph auth del osd.<OSD-number>

    Replace <OSD-number> with the ID of the OSD that is marked as down, for example:

    # ceph auth del osd.0
    updated
  5. Remove the OSD from the Ceph Storage Cluster:

    # ceph osd rm osd.<OSD-number>

    Replace <OSD-number> with the ID of the OSD that is marked as down, for example:

    # ceph osd rm osd.0
    removed osd.0

    If you have removed the OSD successfully, it is not present in the output of the following command:

    # ceph osd tree
  6. Unmount the failed drive:

    # umount /var/lib/ceph/osd/<cluster-name>-<OSD-number>

    Specify the name of the cluster and the ID of the OSD, for example:

    # umount /var/lib/ceph/osd/ceph-0/

    If you have unmounted the drive successfully, it is not present in the output of the following command:

    # df -h
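
On Luminous-based releases such as Red Hat Ceph Storage 3, steps 3 to 5 can be combined into a single command, assuming your release includes ceph osd purge; it removes the OSD from the CRUSH map, deletes its authentication key, and removes it from the cluster in one step:

# ceph osd purge osd.0 --yes-i-really-mean-it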

Procedure: Replacing the Physical Drive

  1. See the documentation for the hardware node for details on replacing the physical drive.

    1. If the drive is hot-swappable, replace the failed drive with a new one.
    2. If the drive is not hot-swappable and the node contains multiple OSDs, you might have to shut down the whole node and replace the physical drive. Consider preventing the cluster from backfilling. See Section 5.2, “Stopping and Starting Rebalancing” for details.
  2. When the drive appears under the /dev/ directory, make a note of the drive path.
  3. If you want to add the OSD manually, find the OSD drive and format the disk.

Procedure: Adding an OSD to the Ceph Cluster

  1. Add the OSD again.

    1. If you used Ansible to deploy the cluster, run the ceph-ansible playbook again from the Ceph administration server:

      # cd /usr/share/ceph-ansible
      # ansible-playbook site.yml
    2. If you added the OSD manually, see the Adding an OSD with the Command-line Interface section in the Administration Guide for Red Hat Ceph Storage 3.
  2. Ensure that the CRUSH hierarchy is accurate:

    # ceph osd tree
  3. If you are not satisfied with the location of the OSD in the CRUSH hierarchy, move the OSD to a desired location:

    # ceph osd crush move <bucket-to-move> <bucket-type>=<parent-bucket>

    For example, to move the bucket located at ssd:row1 to the root bucket:

    # ceph osd crush move ssd:row1 root=ssd:root


5.5. Increasing the PID count

If you have a node containing more than 12 Ceph OSDs, the default maximum number of threads (PID count) can be insufficient, especially during recovery. As a consequence, some ceph-osd daemons can terminate and fail to start again. If this happens, increase the maximum possible number of threads allowed.

To temporarily increase the number:

# sysctl -w kernel.pid_max=4194303

To permanently increase the number, update the /etc/sysctl.conf file as follows:

kernel.pid_max = 4194303
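
After editing /etc/sysctl.conf, load the new setting without rebooting:

# sysctl -p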

5.6. Deleting Data from a Full Cluster

Ceph automatically prevents any I/O operations on OSDs that have reached the capacity specified by the mon_osd_full_ratio parameter and returns the full osds error message.

This procedure shows how to delete unnecessary data to fix this error.

Note

The mon_osd_full_ratio parameter sets the value of the full_ratio parameter when the cluster is created. You cannot change the value of mon_osd_full_ratio afterwards. To temporarily increase the full_ratio value, use the ceph osd set-full-ratio command instead.

Procedure: Deleting Data from a Full Cluster

  1. Determine the current value of full_ratio; by default, it is set to 0.95:

    # ceph osd dump | grep -i full
    full_ratio 0.95
  2. Temporarily increase the value by setting set-full-ratio to 0.97:

    # ceph osd set-full-ratio 0.97
    Important

    Red Hat strongly recommends not setting set-full-ratio to a value higher than 0.97. Setting this parameter to a higher value makes the recovery process harder. As a consequence, you might not be able to recover full OSDs at all.

  3. Verify that you successfully set the parameter to 0.97:

    # ceph osd dump | grep -i full
    full_ratio 0.97
  4. Monitor the cluster state:

    # ceph -w

    As soon as the cluster changes its state from full to nearfull, delete any unnecessary data; a sketch of one way to do so follows this procedure.

  5. Set the value of full_ratio back to 0.95:

    # ceph osd set-full-ratio 0.95
  6. Verify that you successfully set the parameter to 0.95:

    # ceph osd dump | grep -i full
    full_ratio 0.95
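
One way to delete unnecessary data in step 4 is to remove individual objects with the rados utility. A minimal sketch; the pool name is illustrative, and the object name matches the benchmark object shown earlier in this chapter:

# rados -p testbench ls | head
# rados -p testbench rm benchmark_data_ceph-1_39426_object7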

