Chapter 8. CRUSH Weights


The CRUSH algorithm assigns a weight value per device with the objective of approximating a uniform probability distribution for I/O requests. As a best practice, we recommend creating pools with devices of the same type and size, and assigning them the same relative weight. Since this is not always practical, you may incorporate devices of different sizes and use relative weights so that Ceph distributes more data to larger drives and less data to smaller drives.
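Following the common convention of 1.0 of weight per 1 TB of capacity, relative weights for drives of mixed sizes can be sketched as follows. The OSD names and capacities here are hypothetical:

```python
# Sketch: derive relative CRUSH weights from raw drive capacity,
# using the common convention of 1.0 of weight per 1 TB.
TB = 1.0

drives = {          # hypothetical OSDs and their raw capacities in TB
    "osd.0": 1 * TB,
    "osd.1": 2 * TB,
    "osd.2": 4 * TB,
}

# Larger drives get proportionally larger weights, so CRUSH
# directs proportionally more data to them.
weights = {osd: round(size / TB, 2) for osd, size in drives.items()}
print(weights)
```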

To adjust an OSD’s crush weight in the CRUSH map of a running cluster, execute the following:

ceph osd crush reweight {name} {weight}

Where:

name

Description
The full name of the OSD.
Type
String
Required
Yes
Example
osd.0

weight

Description
The CRUSH weight for the OSD.
Type
Double
Required
Yes
Example
2.0
Note

You can also set the weight when adding or moving an OSD with osd crush add or osd crush set.

CRUSH buckets reflect the sum of the weights of the buckets or devices they contain. For example, a rack containing two hosts with two OSDs each might have a weight of 4.0 and each host a weight of 2.0, which is the sum of the weights of its OSDs, where each OSD has a weight of 1.0. Generally, we recommend using a weight of 1.0 per 1 TB of data.
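The bucket sums in the example above can be sketched as:

```python
# Sketch: a bucket's weight is the sum of its children's weights.
osd_weight = 1.0               # one OSD storing ~1 TB of data

host_weight = 2 * osd_weight   # each host contains two OSDs -> 2.0
rack_weight = 2 * host_weight  # the rack contains two hosts  -> 4.0

print(host_weight, rack_weight)
```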

Note

Introducing devices of different size and performance characteristics in the same pool can lead to variance in data distribution and performance.

CRUSH weight is a persistent setting that affects how CRUSH assigns data to OSDs. Ceph also has temporary reweight settings for when the cluster gets out of balance. For example, whereas a Ceph Block Device shards a block device image into a series of smaller objects and stripes them across the cluster, using librados to store data without normalizing object sizes can lead to an imbalanced cluster. For instance, storing 100 1 MB objects and 10 4 MB objects will leave some OSDs with more data than others.

You can temporarily increase or decrease the weight of particular OSDs by executing:

ceph osd reweight {id} {weight}

Where:

  • id is the OSD number.
  • weight is a value in the range of 0.0 to 1.0.
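Conceptually, this temporary weight acts as an override that scales the persistent CRUSH weight during placement. The following is a minimal sketch of that idea, not Ceph's actual implementation:

```python
# Sketch (conceptual, not Ceph's code): the temporary override weight
# (0.0-1.0) effectively scales the persistent CRUSH weight.
def effective_weight(crush_weight: float, override: float = 1.0) -> float:
    if not 0.0 <= override <= 1.0:
        raise ValueError("override weight must be in the range 0.0 to 1.0")
    return crush_weight * override

# An OSD with CRUSH weight 2.0, temporarily reweighted to 0.8,
# receives data roughly as if it had weight 1.6.
print(effective_weight(2.0, 0.8))
```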

You can also temporarily reweight OSDs by utilization.

ceph osd reweight-by-utilization {threshold}

Where:

  • threshold is a percentage of utilization, such that OSDs facing higher loads receive a lower weight. The default value is 120, reflecting 120%. Any value of 100 or greater is a valid threshold.
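The selection logic can be sketched conceptually as follows: OSDs whose utilization exceeds the threshold percentage of the average utilization have their override weight reduced proportionally. This is an illustrative sketch, not Ceph's actual implementation, and the per-OSD utilization figures are hypothetical:

```python
# Sketch of reweight-by-utilization logic (conceptual, not Ceph's code):
# OSDs whose utilization exceeds threshold% of the average utilization
# get their override weight scaled down proportionally.
def reweight_by_utilization(utilization, threshold=120):
    avg = sum(utilization.values()) / len(utilization)
    cutoff = avg * threshold / 100.0
    new_weights = {}
    for osd, util in utilization.items():
        if util > cutoff:
            new_weights[osd] = round(cutoff / util, 2)  # lower the weight
        else:
            new_weights[osd] = 1.0                      # keep full weight
    return new_weights

# Hypothetical utilization percentages per OSD; only the overloaded
# osd.2 (90% vs. a cutoff of 80%) gets a reduced weight.
print(reweight_by_utilization({"osd.0": 50, "osd.1": 60, "osd.2": 90}))
```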
Note

Restarting the cluster will wipe out osd reweight and osd reweight-by-utilization, but osd crush reweight settings are persistent.
