
7.9. Expanding the BlueFS DB device


You can expand the storage of the BlueStore File System (BlueFS) data, that is, the RocksDB data of OSDs created with ceph-volume, by using the ceph-bluestore-tool utility.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Ceph OSDs prepared by the ceph-volume utility.
  • Volume groups and logical volumes created.
Note

Run these steps on the host where the OSD is deployed.

Procedure

  1. Optional: From outside the cephadm shell, list the devices in the Red Hat Ceph Storage cluster.

    Example

    [ceph: root@host01 /]# ceph-volume lvm list
    
    ====== osd.3 =======
    
      [db]          /dev/db-test/db1
    
          block device              /dev/test/lv1
          block uuid                N5zoix-FePe-uExe-UngY-D9YG-BMs0-1tTDyB
          cephx lockbox secret
          cluster fsid              1a6112da-ed05-11ee-bacd-525400565cda
          cluster name              ceph
          crush device class
          db device                 /dev/db-test/db1
          db uuid                   1TUaDY-3mEt-fReP-cyB2-JyZ1-oUPa-hKPfo6
          encrypted                 0
          osd fsid                  94ff742c-7bfd-4fb5-8dc4-843d10ac6731
          osd id                    3
          osdspec affinity          None
          type                      db
          vdo                       0
          devices                   /dev/vdh
    
      [block]       /dev/test/lv1
    
          block device              /dev/test/lv1
          block uuid                N5zoix-FePe-uExe-UngY-D9YG-BMs0-1tTDyB
          cephx lockbox secret
          cluster fsid              1a6112da-ed05-11ee-bacd-525400565cda
          cluster name              ceph
          crush device class
          db device                 /dev/db-test/db1
          db uuid                   1TUaDY-3mEt-fReP-cyB2-JyZ1-oUPa-hKPfo6
          encrypted                 0
          osd fsid                  94ff742c-7bfd-4fb5-8dc4-843d10ac6731
          osd id                    3
          osdspec affinity          None
          type                      block
          vdo                       0
          devices                   /dev/vdg
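
    You can cross-check the same volumes from LVM's side with lvs; the volume group names below are the ones from the listing above, and the exact output depends on your LVM version:

    [root@host01 ~]# lvs db-test test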

  2. Get the volume group information:

    Example

    [root@host01 ~]# vgs
    
    VG                                        #PV #LV #SN Attr   VSize    VFree
    db-test                                     1   1   0 wz--n- <200.00g <160.00g
    test                                        1   1   0 wz--n- <200.00g <170.00g
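
    The VFree column shows the unallocated space in each volume group; here db-test has just under 160 GiB free, which is the space the DB logical volume can grow into in the resize step below.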

  3. Stop the Ceph OSD service:

    Example

    [root@host01 ~]# systemctl stop ceph-1a6112da-ed05-11ee-bacd-525400565cda@osd.3.service
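
    If you are not sure of the exact unit name on the host, you can list the OSD units first; this assumes the cephadm naming scheme ceph-FSID@osd.OSD_ID.service:

    [root@host01 ~]# systemctl list-units 'ceph-*@osd.*'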

  4. Resize the logical volume, shrinking or expanding it as needed:

    Example

    [root@host01 ~]# lvresize -l 100%FREE /dev/db-test/db1
    Size of logical volume db-test/db1 changed from 40.00 GiB (10240 extents) to <160.00 GiB (40959 extents).
    Logical volume db-test/db1 successfully resized.
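
    Note that -l 100%FREE sizes the volume to match the currently free space in the volume group. To grow the volume by all remaining free space on top of its current size, the relative form can be used instead (shown here as an illustrative alternative, not part of the original procedure):

    [root@host01 ~]# lvresize -l +100%FREE /dev/db-test/db1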

  5. Start the cephadm shell:

    Syntax

    cephadm shell -m /var/lib/ceph/CLUSTER_FSID/osd.OSD_ID:/var/lib/ceph/osd/ceph-OSD_ID:z

    Example

    [root@host01 ~]# cephadm shell -m /var/lib/ceph/1a6112da-ed05-11ee-bacd-525400565cda/osd.3:/var/lib/ceph/osd/ceph-3:z

    The ceph-bluestore-tool must access the BlueStore data from within the cephadm shell container, so the data directory must be bind mounted. Use the -m option to make the BlueStore data available.
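
    Once inside the shell, you can confirm that the bind mount worked by listing the OSD directory; the block and block.db symbolic links of the OSD should be visible:

    [ceph: root@host01 /]# ls -l /var/lib/ceph/osd/ceph-3/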

  6. Check the size of the RocksDB before the expansion:

    Syntax

    ceph-bluestore-tool show-label --path OSD_DIRECTORY_PATH

    Example

    [ceph: root@host01 /]# ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-3/
    inferring bluefs devices from bluestore path
    {
        "/var/lib/ceph/osd/ceph-3/block": {
            "osd_uuid": "94ff742c-7bfd-4fb5-8dc4-843d10ac6731",
            "size": 32212254720,
            "btime": "2024-04-03T08:34:12.742848+0000",
            "description": "main",
            "bfm_blocks": "7864320",
            "bfm_blocks_per_key": "128",
            "bfm_bytes_per_block": "4096",
            "bfm_size": "32212254720",
            "bluefs": "1",
            "ceph_fsid": "1a6112da-ed05-11ee-bacd-525400565cda",
            "ceph_version_when_created": "ceph version 19.0.0-2493-gd82c9aa1 (d82c9aa17f09785fe698d262f9601d87bb79f962) squid (dev)",
            "created_at": "2024-04-03T08:34:15.637253Z",
            "elastic_shared_blobs": "1",
            "kv_backend": "rocksdb",
            "magic": "ceph osd volume v026",
            "mkfs_done": "yes",
            "osd_key": "AQCEFA1m9xuwABAAwKEHkASVbgB1GVt5jYC2Sg==",
            "osdspec_affinity": "None",
            "ready": "ready",
            "require_osd_release": "19",
            "whoami": "3"
        },
        "/var/lib/ceph/osd/ceph-3/block.db": {
            "osd_uuid": "94ff742c-7bfd-4fb5-8dc4-843d10ac6731",
            "size": 40794497536,
            "btime": "2024-04-03T08:34:12.748816+0000",
            "description": "bluefs db"
        }
    }
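
    The entry to watch is /var/lib/ceph/osd/ceph-3/block.db: its size field reports 40794497536 bytes, roughly 38 GiB, which is the pre-expansion size that grows in the next step.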

  7. Expand the BlueStore device:

    Syntax

    ceph-bluestore-tool bluefs-bdev-expand --path OSD_DIRECTORY_PATH

    Example

    [ceph: root@host01 /]# ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-3/
    inferring bluefs devices from bluestore path
    1 : device size 0x27ffbfe000 : using 0x2300000(35 MiB)
    2 : device size 0x780000000 : using 0x52000(328 KiB)
    Expanding DB/WAL...
    1 : expanding  to 0x27ffc00000
    1 : size label updated to 171794497536
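
    In this output, device 1 is the DB device and device 2 is the main block device: 0x780000000 equals 32212254720 bytes, matching the size field of block from the previous step, and the DB size label is updated to 171794497536 bytes, just under 160 GiB.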

  8. Verify that block.db is expanded:

    Syntax

    ceph-bluestore-tool show-label --path OSD_DIRECTORY_PATH

    Example

    [ceph: root@host01 /]# ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-3/
    inferring bluefs devices from bluestore path
    {
        "/var/lib/ceph/osd/ceph-3/block": {
            "osd_uuid": "94ff742c-7bfd-4fb5-8dc4-843d10ac6731",
            "size": 32212254720,
            "btime": "2024-04-03T08:34:12.742848+0000",
            "description": "main",
            "bfm_blocks": "7864320",
            "bfm_blocks_per_key": "128",
            "bfm_bytes_per_block": "4096",
            "bfm_size": "32212254720",
            "bluefs": "1",
            "ceph_fsid": "1a6112da-ed05-11ee-bacd-525400565cda",
            "ceph_version_when_created": "ceph version 19.0.0-2493-gd82c9aa1 (d82c9aa17f09785fe698d262f9601d87bb79f962) squid (dev)",
            "created_at": "2024-04-03T08:34:15.637253Z",
            "elastic_shared_blobs": "1",
            "kv_backend": "rocksdb",
            "magic": "ceph osd volume v026",
            "mkfs_done": "yes",
            "osd_key": "AQCEFA1m9xuwABAAwKEHkASVbgB1GVt5jYC2Sg==",
            "osdspec_affinity": "None",
            "ready": "ready",
            "require_osd_release": "19",
            "whoami": "3"
        },
        "/var/lib/ceph/osd/ceph-3/block.db": {
            "osd_uuid": "94ff742c-7bfd-4fb5-8dc4-843d10ac6731",
            "size": 171794497536,
            "btime": "2024-04-03T08:34:12.748816+0000",
            "description": "bluefs db"
        }
    }
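
    The size field of block.db has grown from 40794497536 bytes to 171794497536 bytes, so BlueFS now uses the full resized logical volume.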

  9. Exit the shell and restart the OSD:

    Example

    [root@host01 ~]# systemctl start ceph-1a6112da-ed05-11ee-bacd-525400565cda@osd.3.service

    The restarted OSD shows as running in the ceph orch ps output:

    osd.3              host01               running (15s)     0s ago  13m    46.9M    4096M  19.0.0-2493-gd82c9aa1  3714003597ec  02150b3b6877
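
    As a final check, you can confirm that the OSD came back up and the cluster is healthy; cephadm shell -- runs a single command inside the shell container:

    [root@host01 ~]# cephadm shell -- ceph -s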
