Chapter 10. Troubleshooting scrub and deep-scrub issues


Learn to troubleshoot scrub and deep-scrub issues.

10.1. Addressing scrub slowness while upgrading to Red Hat Ceph Storage 9

Learn to troubleshoot the scrub slowness issue that is seen after upgrading to Red Hat Ceph Storage 9.

Scrub slowness is caused by the automated OSD benchmark setting a very low value for osd_mclock_max_capacity_iops_hdd. This impacts scrub operations because the IOPS capacity of an OSD plays a significant role in determining the bandwidth that scrub operations receive. Compounding the problem, scrubs receive only a fraction of the total IOPS capacity, based on the QoS allocation defined by the mClock profile.

As a result, the Ceph cluster reports expected scrub completion times measured in days or weeks.
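To confirm whether an OSD is affected, you can inspect the IOPS capacity currently in effect and the active mClock profile. A minimal check, assuming osd.X is the OSD under investigation:

Example:

$ # IOPS capacity currently in effect for an HDD-backed OSD
$ ceph config show osd.X osd_mclock_max_capacity_iops_hdd
$ # Active mClock profile that determines the QoS share scrubs receive
$ ceph config show osd.X osd_mclock_profile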

Prerequisites

  1. A running Red Hat Ceph Storage cluster in a healthy state.
  2. Root-level access to the node.

Procedure

  1. The fix detects low measured IOPS reported by the OSD bench during OSD boot-up and falls back to the default IOPS setting defined for osd_mclock_max_capacity_iops_[hdd|ssd]. The fallback is triggered if the reported IOPS falls below the threshold determined by osd_mclock_iops_capacity_low_threshold_[hdd|ssd]. A cluster warning is also logged.

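    To view the fallback threshold and check whether the warning was raised, you can query the option and scan recent cluster log entries. This is a sketch; the exact warning text varies by release:

    Example:

    $ # Default low-IOPS threshold for HDD-backed OSDs
    $ ceph config get osd osd_mclock_iops_capacity_low_threshold_hdd
    $ # Scan recent cluster log entries for the IOPS fallback warning
    $ ceph log last 200 | grep -i iops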

  2. Optional: Perform the following steps before upgrading from Red Hat Ceph Storage 8 to 9 (before upgrade):

    1. For clusters already affected by the issue, remove the IOPS capacity setting on the OSD(s) before upgrading to the release with the fix by running the following command:

      Example:

      $ ceph config rm osd.X osd_mclock_max_capacity_iops_[hdd|ssd]

    2. Set the osd_mclock_force_run_benchmark_on_init option for the affected OSD to true before the upgrade:

      Example:

      $ ceph config set osd.X osd_mclock_force_run_benchmark_on_init true

      After upgrading to the release with this fix, the IOPS capacity reflects the default setting or the new one reported by the OSD bench.
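      To confirm, you can display the value in effect once the upgraded OSD boots. A minimal check, assuming osd.X is the affected HDD-backed OSD:

      Example:

      $ # Value in effect: the default or the newly benchmarked capacity
      $ ceph config show osd.X osd_mclock_max_capacity_iops_hdd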

  3. Optional: Perform the following steps if you have already upgraded from Red Hat Ceph Storage 8 to 9 (after upgrade):

    1. If you were unable to perform the above steps before the upgrade, you can re-run the OSD bench after upgrading by removing the osd_mclock_max_capacity_iops_[hdd|ssd] setting:

      Example:

      $ ceph config rm osd.X osd_mclock_max_capacity_iops_[hdd|ssd]

    2. Set osd_mclock_force_run_benchmark_on_init to true.

      Example:

      $ ceph config set osd.X osd_mclock_force_run_benchmark_on_init true

    3. Restart the OSD.

      After the OSD restarts, the IOPS capacity reflects the default setting or the new setting reported by the OSD bench.
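      In a cephadm-managed cluster, one way to restart the OSD daemon is through the orchestrator. A sketch, assuming osd.X is the affected OSD:

      Example:

      $ # Restart the OSD so that the forced benchmark runs on init
      $ ceph orch daemon restart osd.X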
