2.10. Example of CRUSH Storage Strategies
Suppose that most pools should default to OSDs backed by large hard drives, but that some pools should map to OSDs backed by solid-state drives (SSDs). CRUSH handles such scenarios easily with device classes.
The process is simple: assign a class to each device. For example:
# ceph osd crush set-device-class <class> <osdId> [<osdId>]
# ceph osd crush set-device-class hdd osd.0 osd.1 osd.4 osd.5
# ceph osd crush set-device-class ssd osd.2 osd.3 osd.6 osd.7
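Note that BlueStore OSDs normally detect and report a device class on startup, so an OSD may already carry a class that must be removed with ceph osd crush rm-device-class before set-device-class will succeed. The resulting assignments can be checked as follows:
# ceph osd crush class ls
# ceph osd crush class ls-osd ssd
# ceph osd crush tree --show-shadow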
Next, create rules that use those devices:
# ceph osd crush rule create-replicated <rule-name> <root> <failure-domain-type> <device-class>
# ceph osd crush rule create-replicated cold default host hdd
# ceph osd crush rule create-replicated hot default host ssd
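The new rules can be listed, and their compiled steps inspected, to verify that each one targets the intended device class:
# ceph osd crush rule ls
# ceph osd crush rule dump cold
# ceph osd crush rule dump hot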
Finally, set the pools to use the rules:
# ceph osd pool set <poolname> crush_rule <rule-name>
# ceph osd pool set cold crush_rule cold
# ceph osd pool set hot crush_rule hot
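This step assumes that pools named cold and hot already exist (created beforehand, for example, with ceph osd pool create cold 128). The rule assignment can then be confirmed per pool:
# ceph osd pool get cold crush_rule
# ceph osd pool get hot crush_rule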
There is no need to edit the CRUSH map manually, because a single hierarchy can serve multiple device classes. The commands above are roughly equivalent to the following decompiled CRUSH map:
device 0 osd.0 class hdd
device 1 osd.1 class hdd
device 2 osd.2 class ssd
device 3 osd.3 class ssd
device 4 osd.4 class hdd
device 5 osd.5 class hdd
device 6 osd.6 class ssd
device 7 osd.7 class ssd
host ceph-osd-server-1 {
    id -1
    alg straw2
    hash 0
    item osd.0 weight 1.00
    item osd.1 weight 1.00
    item osd.2 weight 1.00
    item osd.3 weight 1.00
}
host ceph-osd-server-2 {
    id -2
    alg straw2
    hash 0
    item osd.4 weight 1.00
    item osd.5 weight 1.00
    item osd.6 weight 1.00
    item osd.7 weight 1.00
}
root default {
    id -3
    alg straw2
    hash 0
    item ceph-osd-server-1 weight 4.00
    item ceph-osd-server-2 weight 4.00
}
rule cold {
    ruleset 0
    type replicated
    min_size 2
    max_size 11
    step take default class hdd
    step chooseleaf firstn 0 type host
    step emit
}
rule hot {
    ruleset 1
    type replicated
    min_size 2
    max_size 11
    step take default class ssd
    step chooseleaf firstn 0 type host
    step emit
}
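To check that the live cluster map matches this listing, extract the compiled CRUSH map with ceph osd getcrushmap and decompile it with crushtool (the file names here are arbitrary):
# ceph osd getcrushmap -o crush.bin
# crushtool -d crush.bin -o crush.txt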