7.9. Expanding the BlueFS DB device
You can expand the storage of the BlueStore File System (BlueFS) data, that is, the RocksDB data of ceph-volume created OSDs, with the ceph-bluestore-tool utility.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- The Ceph OSD is prepared by the ceph-volume utility.
- Volume groups and logical volumes are created (a minimal sketch follows this list).
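For reference, the following is a minimal sketch of how the volume group and logical volume used in this procedure could have been created. The device name /dev/vdh and the 40 GiB initial size are assumptions inferred from the example output later in this procedure, not a required layout.

[root@host01 ~]# vgcreate db-test /dev/vdh
[root@host01 ~]# lvcreate -L 40G -n db1 db-test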
Run these steps on the host where the OSD is deployed.
Procedure
Optional: From within the cephadm shell, list the devices in the Red Hat Ceph Storage cluster.
Example
[ceph: root@host01 /]# ceph-volume lvm list

====== osd.3 =======

  [db]          /dev/db-test/db1

      block device              /dev/test/lv1
      block uuid                N5zoix-FePe-uExe-UngY-D9YG-BMs0-1tTDyB
      cephx lockbox secret
      cluster fsid              1a6112da-ed05-11ee-bacd-525400565cda
      cluster name              ceph
      crush device class
      db device                 /dev/db-test/db1
      db uuid                   1TUaDY-3mEt-fReP-cyB2-JyZ1-oUPa-hKPfo6
      encrypted                 0
      osd fsid                  94ff742c-7bfd-4fb5-8dc4-843d10ac6731
      osd id                    3
      osdspec affinity          None
      type                      db
      vdo                       0
      devices                   /dev/vdh

  [block]       /dev/test/lv1

      block device              /dev/test/lv1
      block uuid                N5zoix-FePe-uExe-UngY-D9YG-BMs0-1tTDyB
      cephx lockbox secret
      cluster fsid              1a6112da-ed05-11ee-bacd-525400565cda
      cluster name              ceph
      crush device class
      db device                 /dev/db-test/db1
      db uuid                   1TUaDY-3mEt-fReP-cyB2-JyZ1-oUPa-hKPfo6
      encrypted                 0
      osd fsid                  94ff742c-7bfd-4fb5-8dc4-843d10ac6731
      osd id                    3
      osdspec affinity          None
      type                      block
      vdo                       0
      devices                   /dev/vdg
Get the volume group information:
Example
[root@host01 ~]# vgs
  VG      #PV #LV #SN Attr   VSize    VFree
  db-test   1   1   0 wz--n- <200.00g <160.00g
  test      1   1   0 wz--n- <200.00g <170.00g
Stop the Ceph OSD service:
Example
[root@host01 ~]# systemctl stop ceph-1a6112da-ed05-11ee-bacd-525400565cda@osd.3.service
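If you are unsure of the exact unit name, you can list the OSD units first. A minimal sketch, assuming the ceph-CLUSTER_FSID@osd.OSD_ID.service naming that cephadm uses for OSD units:

[root@host01 ~]# systemctl list-units 'ceph-*@osd.*'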
Resize the logical volume, shrinking or expanding it as needed:
Example
[root@host01 ~]# lvresize -l 100%FREE /dev/db-test/db1
  Size of logical volume db-test/db1 changed from 40.00 GiB (10240 extents) to <160.00 GiB (40959 extents).
  Logical volume db-test/db1 successfully resized.
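Optionally, confirm the new logical volume size before proceeding, using the standard LVM lvs command:

[root@host01 ~]# lvs db-test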
Start the cephadm shell:
Syntax
cephadm shell -m /var/lib/ceph/CLUSTER_FSID/osd.OSD_ID:/var/lib/ceph/osd/ceph-OSD_ID:z
Example
[root@host01 ~]# cephadm shell -m /var/lib/ceph/1a6112da-ed05-11ee-bacd-525400565cda/osd.3:/var/lib/ceph/osd/ceph-3:z
The ceph-bluestore-tool needs to access the BlueStore data from within the cephadm shell container, so it must be bind-mounted. Use the -m option to make the BlueStore data available.
Check the size of the RocksDB before the expansion:
Syntax
ceph-bluestore-tool show-label --path OSD_DIRECTORY_PATH
Example
[ceph: root@host01 /]# ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-3/
inferring bluefs devices from bluestore path
{
    "/var/lib/ceph/osd/ceph-3/block": {
        "osd_uuid": "94ff742c-7bfd-4fb5-8dc4-843d10ac6731",
        "size": 32212254720,
        "btime": "2024-04-03T08:34:12.742848+0000",
        "description": "main",
        "bfm_blocks": "7864320",
        "bfm_blocks_per_key": "128",
        "bfm_bytes_per_block": "4096",
        "bfm_size": "32212254720",
        "bluefs": "1",
        "ceph_fsid": "1a6112da-ed05-11ee-bacd-525400565cda",
        "ceph_version_when_created": "ceph version 19.0.0-2493-gd82c9aa1 (d82c9aa17f09785fe698d262f9601d87bb79f962) squid (dev)",
        "created_at": "2024-04-03T08:34:15.637253Z",
        "elastic_shared_blobs": "1",
        "kv_backend": "rocksdb",
        "magic": "ceph osd volume v026",
        "mkfs_done": "yes",
        "osd_key": "AQCEFA1m9xuwABAAwKEHkASVbgB1GVt5jYC2Sg==",
        "osdspec_affinity": "None",
        "ready": "ready",
        "require_osd_release": "19",
        "whoami": "3"
    },
    "/var/lib/ceph/osd/ceph-3/block.db": {
        "osd_uuid": "94ff742c-7bfd-4fb5-8dc4-843d10ac6731",
        "size": 40794497536,
        "btime": "2024-04-03T08:34:12.748816+0000",
        "description": "bluefs db"
    }
}
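Only the size fields matter for this procedure. A quick way to filter them, assuming grep is available in the shell container:

[ceph: root@host01 /]# ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-3/ | grep '"size"'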
Expand the BlueStore device:
Syntax
ceph-bluestore-tool bluefs-bdev-expand --path OSD_DIRECTORY_PATH
Example
[ceph: root@host01 /]# ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-3/
inferring bluefs devices from bluestore path
1 : device size 0x27ffbfe000 : using 0x2300000(35 MiB)
2 : device size 0x780000000 : using 0x52000(328 KiB)
Expanding DB/WAL...
1 : expanding to 0x171794497536
1 : size label updated to 171794497536
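As an additional check before or after the expansion, ceph-bluestore-tool also provides a bluefs-bdev-sizes command that prints the device sizes as BlueFS sees them:

[ceph: root@host01 /]# ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-3/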
Verify that the block.db is expanded:
Syntax
ceph-bluestore-tool show-label --path OSD_DIRECTORY_PATH
Example
[ceph: root@host01 /]# ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-3/
inferring bluefs devices from bluestore path
{
    "/var/lib/ceph/osd/ceph-3/block": {
        "osd_uuid": "94ff742c-7bfd-4fb5-8dc4-843d10ac6731",
        "size": 32212254720,
        "btime": "2024-04-03T08:34:12.742848+0000",
        "description": "main",
        "bfm_blocks": "7864320",
        "bfm_blocks_per_key": "128",
        "bfm_bytes_per_block": "4096",
        "bfm_size": "32212254720",
        "bluefs": "1",
        "ceph_fsid": "1a6112da-ed05-11ee-bacd-525400565cda",
        "ceph_version_when_created": "ceph version 19.0.0-2493-gd82c9aa1 (d82c9aa17f09785fe698d262f9601d87bb79f962) squid (dev)",
        "created_at": "2024-04-03T08:34:15.637253Z",
        "elastic_shared_blobs": "1",
        "kv_backend": "rocksdb",
        "magic": "ceph osd volume v026",
        "mkfs_done": "yes",
        "osd_key": "AQCEFA1m9xuwABAAwKEHkASVbgB1GVt5jYC2Sg==",
        "osdspec_affinity": "None",
        "ready": "ready",
        "require_osd_release": "19",
        "whoami": "3"
    },
    "/var/lib/ceph/osd/ceph-3/block.db": {
        "osd_uuid": "94ff742c-7bfd-4fb5-8dc4-843d10ac6731",
        "size": 171794497536,
        "btime": "2024-04-03T08:34:12.748816+0000",
        "description": "bluefs db"
    }
}
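As a sanity check, the new block.db size label of 171794497536 bytes works out to 171794497536 / 1024³ ≈ 160 GiB (just under), which matches the <160.00 GiB that lvresize reported for the resized logical volume.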
Exit the shell and restart the OSD:
Example
[root@host01 ~]# systemctl start ceph-1a6112da-ed05-11ee-bacd-525400565cda@osd.3.service
osd.3  host01  running (15s)  0s ago  13m  46.9M  4096M  19.0.0-2493-gd82c9aa1  3714003597ec  02150b3b6877
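The status row printed above follows the ceph orch ps output format. To confirm that the daemon is back up, you can also query the orchestrator directly and filter for the OSD, assuming grep is available:

[root@host01 ~]# cephadm shell -- ceph orch ps | grep osd.3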