2.10. CRUSH Storage Strategy Examples
If you want most pools to default to OSDs backed by large hard drives, but want some pools mapped to OSDs backed by fast solid-state drives (SSDs), CRUSH can handle these scenarios easily.
Use device classes. The process is simple: add a class to each device.
Syntax
ceph osd crush set-device-class CLASS OSD_ID [OSD_ID]
Example
[ceph:root@host01 /]# ceph osd crush set-device-class hdd osd.0 osd.1 osd.4 osd.5
[ceph:root@host01 /]# ceph osd crush set-device-class ssd osd.2 osd.3 osd.6 osd.7
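As an optional check (not part of the original example), you can confirm the classes were applied. Note that OSDs usually receive a class automatically based on the detected device type; if a class is already set, remove it with ceph osd crush rm-device-class before assigning a new one.
[ceph:root@host01 /]# ceph osd crush class ls
[ceph:root@host01 /]# ceph osd crush tree --show-shadow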
Then create CRUSH rules that use the devices.
Syntax
ceph osd crush rule create-replicated RULENAME ROOT FAILURE_DOMAIN_TYPE DEVICE_CLASS
Example
[ceph:root@host01 /]# ceph osd crush rule create-replicated cold default host hdd
[ceph:root@host01 /]# ceph osd crush rule create-replicated hot default host ssd
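To confirm the rules were created (an optional check), you can list them and dump their definitions:
[ceph:root@host01 /]# ceph osd crush rule ls
[ceph:root@host01 /]# ceph osd crush rule dump cold
[ceph:root@host01 /]# ceph osd crush rule dump hot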
Finally, set the pools to use the rules.
Syntax
ceph osd pool set POOL_NAME crush_rule RULENAME
Example
[ceph:root@host01 /]# ceph osd pool set cold crush_rule cold
[ceph:root@host01 /]# ceph osd pool set hot crush_rule hot
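You can verify which rule each pool now uses. This assumes the cold and hot pools already exist:
[ceph:root@host01 /]# ceph osd pool get cold crush_rule
[ceph:root@host01 /]# ceph osd pool get hot crush_rule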
There is no need to manually edit the CRUSH map, because one hierarchy can serve multiple classes of devices. The decompiled map looks like this:
device 0 osd.0 class hdd
device 1 osd.1 class hdd
device 2 osd.2 class ssd
device 3 osd.3 class ssd
device 4 osd.4 class hdd
device 5 osd.5 class hdd
device 6 osd.6 class ssd
device 7 osd.7 class ssd
host ceph-osd-server-1 {
id -1
alg straw2
hash 0
item osd.0 weight 1.00
item osd.1 weight 1.00
item osd.2 weight 1.00
item osd.3 weight 1.00
}
host ceph-osd-server-2 {
id -2
alg straw2
hash 0
item osd.4 weight 1.00
item osd.5 weight 1.00
item osd.6 weight 1.00
item osd.7 weight 1.00
}
root default {
id -3
alg straw2
hash 0
item ceph-osd-server-1 weight 4.00
item ceph-osd-server-2 weight 4.00
}
rule cold {
ruleset 0
type replicated
min_size 2
max_size 11
step take default class hdd
step chooseleaf firstn 0 type host
step emit
}
rule hot {
ruleset 1
type replicated
min_size 2
max_size 11
step take default class ssd
step chooseleaf firstn 0 type host
step emit
}
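If you want to inspect the generated map yourself, you can extract and decompile it with the standard tools; the file names below are arbitrary placeholders:
[ceph:root@host01 /]# ceph osd getcrushmap -o crushmap.bin
[ceph:root@host01 /]# crushtool -d crushmap.bin -o crushmap.txt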