4.2. Read affinity in stretched clusters
Read affinity reduces cross-zone traffic by keeping data access within the data center where the request originates.
For stretched clusters deployed in a multi-zone environment, the read affinity topology implementation provides a mechanism to help keep traffic within the data center it originated from. The Ceph Object Gateway can read data from the OSDs closest to the client, based on the OSD locations defined in the CRUSH map and the topology labels on the nodes.
For example, a stretched cluster contains a Ceph Object Gateway with primary and replicated OSDs spread across two data centers, A and B. If a GET operation is performed on an object in data center A, the READ operation is performed on the data of the OSDs closest to the client in data center A.
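The data center level of that topology lives in the CRUSH map. In an already deployed stretched cluster these buckets exist as a prerequisite; the following is only a minimal sketch of how such a hierarchy can be expressed, assuming the bucket name DC1 and the host names used in the examples later in this section:

[ceph: root@host01 /]# ceph osd crush add-bucket DC1 datacenter
[ceph: root@host01 /]# ceph osd crush move DC1 root=default
[ceph: root@host01 /]# ceph osd crush move ceph-ci-fbv67y-ammmck-node2 datacenter=DC1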
4.2.1. Performing localized reads
You can perform localized reads on a replicated pool in a stretched cluster. When a localized read request is issued on a replicated pool, Ceph selects the local OSD closest to the client, based on the client location specified in crush_location.
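For the localized policy to pick a nearby OSD, the client needs a location of its own. A minimal sketch, assuming the gateway client name used in this procedure and a location string of datacenter=DC1 (the exact string depends on your CRUSH hierarchy), is to set the crush_location option for that client:

[ceph: root@host01 /]# ceph config set client.rgw.rgw.1 crush_location "datacenter=DC1"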
Prerequisites

- A stretched cluster with two data centers, with a Ceph Object Gateway configured on both.
- A user created with buckets backed by primary and replicated OSDs.

Procedure
To perform a localized read, set rados_replica_read_policy to 'localize' for the Ceph Object Gateway daemon with the ceph config set command.

[ceph: root@host01 /]# ceph config set client.rgw.rgw.1 rados_replica_read_policy localize
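To generate a read that shows up in the verification steps below, issue a GET against the Ceph Object Gateway with any S3 client. A minimal sketch using the AWS CLI; the endpoint, bucket name, and object key are illustrative and assume the user and bucket from the prerequisites, with that user's S3 credentials already configured for the CLI:

$ aws s3api get-object --endpoint-url http://ceph-ci-fbv67y-ammmck-node4:80 --bucket testbucket --key f1 /tmp/f1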
Verification: Complete the following steps to verify localized reads from the OSD set.
Run the ceph osd tree command to view the OSDs and the data centers. Example:

[ceph: root@host01 /]# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME                                STATUS  REWEIGHT  PRI-AFF
-1         0.58557  root default
-3         0.29279      datacenter DC1
-2         0.09760          host ceph-ci-fbv67y-ammmck-node2
 2    hdd  0.02440              osd.2                            up   1.00000  1.00000
11    hdd  0.02440              osd.11                           up   1.00000  1.00000
17    hdd  0.02440              osd.17                           up   1.00000  1.00000
22    hdd  0.02440              osd.22                           up   1.00000  1.00000
-4         0.09760          host ceph-ci-fbv67y-ammmck-node3
 0    hdd  0.02440              osd.0                            up   1.00000  1.00000
 6    hdd  0.02440              osd.6                            up   1.00000  1.00000
12    hdd  0.02440              osd.12                           up   1.00000  1.00000
18    hdd  0.02440              osd.18                           up   1.00000  1.00000
-5         0.09760          host ceph-ci-fbv67y-ammmck-node4
 5    hdd  0.02440              osd.5                            up   1.00000  1.00000
10    hdd  0.02440              osd.10                           up   1.00000  1.00000
16    hdd  0.02440              osd.16                           up   1.00000  1.00000
23    hdd  0.02440              osd.23                           up   1.00000  1.00000
-7         0.29279      datacenter DC2
-6         0.09760          host ceph-ci-fbv67y-ammmck-node5
 3    hdd  0.02440              osd.3                            up   1.00000  1.00000
 8    hdd  0.02440              osd.8                            up   1.00000  1.00000
14    hdd  0.02440              osd.14                           up   1.00000  1.00000
20    hdd  0.02440              osd.20                           up   1.00000  1.00000
-8         0.09760          host ceph-ci-fbv67y-ammmck-node6
 4    hdd  0.02440              osd.4                            up   1.00000  1.00000
 9    hdd  0.02440              osd.9                            up   1.00000  1.00000
15    hdd  0.02440              osd.15                           up   1.00000  1.00000
21    hdd  0.02440              osd.21                           up   1.00000  1.00000
-9         0.09760          host ceph-ci-fbv67y-ammmck-node7
 1    hdd  0.02440              osd.1                            up   1.00000  1.00000
 7    hdd  0.02440              osd.7                            up   1.00000  1.00000
13    hdd  0.02440              osd.13                           up   1.00000  1.00000
19    hdd  0.02440              osd.19                           up   1.00000  1.00000
Run the ceph orch command to identify the Ceph Object Gateway daemons in the data centers. Example:

[ceph: root@host01 /]# ceph orch ps | grep rg
rgw.rgw.1.ceph-ci-fbv67y-ammmck-node4.dmsmex  ceph-ci-fbv67y-ammmck-node4  *:80  running (4h)  10m ago  22h  93.3M  -  19.1.0-55.el9cp  0ee0a0ad94c7  34f27723ccd2
rgw.rgw.1.ceph-ci-fbv67y-ammmck-node7.pocecp  ceph-ci-fbv67y-ammmck-node7  *:80  running (4h)  10m ago  22h  96.4M  -  19.1.0-55.el9cp  0ee0a0ad94c7  40e4f2a6d4c4
Run the vim command on the Ceph Object Gateway log to verify that the localized read has taken place. Example:

[ceph: root@host01 /]# vim /var/log/ceph/<fsid>/<ceph-client-rgw>.log

2024-08-26T08:07:45.471+0000 7fc623e63640 1 ====== starting new request req=0x7fc5b93694a0 =====
2024-08-26T08:07:45.471+0000 7fc623e63640 1 -- 10.0.67.142:0/279982082 --> [v2:10.0.66.23:6816/73244434,v1:10.0.66.23:6817/73244434] -- osd_op(unknown.0.0:9081 11.55 11:ab26b168:::3acf4091-c54c-43b5-a495-c505fe545d25.27842.1_f1:head [getxattrs,stat] snapc 0=[] ondisk+read+localize_reads+known_if_redirected+supports_pool_eio e3533) -- 0x55f781bd2000 con 0x55f77f0e8c00
You can see in the log that the localized read has taken place: the osd_op entry includes the localize_reads flag.
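Instead of paging through the log in an editor, you can also filter for the flag directly. A minimal sketch, using the same log path placeholders as above:

[ceph: root@host01 /]# grep 'localize_reads' /var/log/ceph/<fsid>/<ceph-client-rgw>.log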
Important: To view the debug logs, you must first enable debug_ms 1 in the configuration by running the ceph config set command.

[ceph: root@host01 /]# ceph config set client.rgw.rgw.1.ceph-ci-gune2w-mysx73-node4.dgvrmx debug_ms 1/1
[ceph: root@host01 /]# ceph config set client.rgw.rgw.1.ceph-ci-gune2w-mysx73-node7.rfkqqq debug_ms 1/1
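Messenger debug logging is verbose, so you will likely want to restore the default level once verification is complete. A minimal sketch, assuming the same daemon names as in the commands above:

[ceph: root@host01 /]# ceph config rm client.rgw.rgw.1.ceph-ci-gune2w-mysx73-node4.dgvrmx debug_ms
[ceph: root@host01 /]# ceph config rm client.rgw.rgw.1.ceph-ci-gune2w-mysx73-node7.rfkqqq debug_ms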
4.2.2. Performing balanced reads
You can perform balanced reads on a pool to spread reads evenly across the OSDs in the data centers. When a balanced READ is issued on a pool, the read operations are distributed evenly across all the OSDs spread across the data centers.
Prerequisites

- A stretched cluster with two data centers, with a Ceph Object Gateway configured on both.
- A user created with buckets backed by primary and replicated OSDs.

Procedure
To perform a balanced read, set rados_replica_read_policy to 'balance' for the Ceph Object Gateway daemon with the ceph config set command.

[ceph: root@host01 /]# ceph config set client.rgw.rgw.1 rados_replica_read_policy balance
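Before moving on to verification, you can confirm that the option is in effect. A minimal sketch using ceph config get with the same client name; the expected output is the value you just set:

[ceph: root@host01 /]# ceph config get client.rgw.rgw.1 rados_replica_read_policy
balance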
Verification: Complete the following steps to verify balanced reads from the OSD set.
Run the ceph osd tree command to view the OSDs and the data centers. Example:

[ceph: root@host01 /]# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME                                STATUS  REWEIGHT  PRI-AFF
-1         0.58557  root default
-3         0.29279      datacenter DC1
-2         0.09760          host ceph-ci-fbv67y-ammmck-node2
 2    hdd  0.02440              osd.2                            up   1.00000  1.00000
11    hdd  0.02440              osd.11                           up   1.00000  1.00000
17    hdd  0.02440              osd.17                           up   1.00000  1.00000
22    hdd  0.02440              osd.22                           up   1.00000  1.00000
-4         0.09760          host ceph-ci-fbv67y-ammmck-node3
 0    hdd  0.02440              osd.0                            up   1.00000  1.00000
 6    hdd  0.02440              osd.6                            up   1.00000  1.00000
12    hdd  0.02440              osd.12                           up   1.00000  1.00000
18    hdd  0.02440              osd.18                           up   1.00000  1.00000
-5         0.09760          host ceph-ci-fbv67y-ammmck-node4
 5    hdd  0.02440              osd.5                            up   1.00000  1.00000
10    hdd  0.02440              osd.10                           up   1.00000  1.00000
16    hdd  0.02440              osd.16                           up   1.00000  1.00000
23    hdd  0.02440              osd.23                           up   1.00000  1.00000
-7         0.29279      datacenter DC2
-6         0.09760          host ceph-ci-fbv67y-ammmck-node5
 3    hdd  0.02440              osd.3                            up   1.00000  1.00000
 8    hdd  0.02440              osd.8                            up   1.00000  1.00000
14    hdd  0.02440              osd.14                           up   1.00000  1.00000
20    hdd  0.02440              osd.20                           up   1.00000  1.00000
-8         0.09760          host ceph-ci-fbv67y-ammmck-node6
 4    hdd  0.02440              osd.4                            up   1.00000  1.00000
 9    hdd  0.02440              osd.9                            up   1.00000  1.00000
15    hdd  0.02440              osd.15                           up   1.00000  1.00000
21    hdd  0.02440              osd.21                           up   1.00000  1.00000
-9         0.09760          host ceph-ci-fbv67y-ammmck-node7
 1    hdd  0.02440              osd.1                            up   1.00000  1.00000
 7    hdd  0.02440              osd.7                            up   1.00000  1.00000
13    hdd  0.02440              osd.13                           up   1.00000  1.00000
19    hdd  0.02440              osd.19                           up   1.00000  1.00000
Run the ceph orch command to identify the Ceph Object Gateway daemons in the data centers. Example:

[ceph: root@host01 /]# ceph orch ps | grep rg
rgw.rgw.1.ceph-ci-fbv67y-ammmck-node4.dmsmex  ceph-ci-fbv67y-ammmck-node4  *:80  running (4h)  10m ago  22h  93.3M  -  19.1.0-55.el9cp  0ee0a0ad94c7  34f27723ccd2
rgw.rgw.1.ceph-ci-fbv67y-ammmck-node7.pocecp  ceph-ci-fbv67y-ammmck-node7  *:80  running (4h)  10m ago  22h  96.4M  -  19.1.0-55.el9cp  0ee0a0ad94c7  40e4f2a6d4c4
Run the vim command on the Ceph Object Gateway log to verify that the balanced read has taken place. Example:

[ceph: root@host01 /]# vim /var/log/ceph/<fsid>/<ceph-client-rgw>.log

2024-08-27T09:32:25.510+0000 7f2a7a284640 1 ====== starting new request req=0x7f2a31fcf4a0 =====
2024-08-27T09:32:25.510+0000 7f2a7a284640 1 -- 10.0.67.142:0/3116867178 --> [v2:10.0.64.146:6816/2838383288,v1:10.0.64.146:6817/2838383288] -- osd_op(unknown.0.0:268731 11.55 11:ab26b168:::3acf4091-c54c-43b5-a495-c505fe545d25.27842.1_f1:head [getxattrs,stat] snapc 0=[] ondisk+read+balance_reads+known_if_redirected+supports_pool_eio e3554) -- 0x55cd1b88dc00 con 0x55cd18dd6000
You can see in the log that the balanced read has taken place: the osd_op entry includes the balance_reads flag.
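To see the distribution across more than one request, you can tally balanced reads per target OSD address from the same log. A minimal shell sketch, assuming the log path placeholders and the osd_op message format shown in the example above:

[ceph: root@host01 /]# grep 'balance_reads' /var/log/ceph/<fsid>/<ceph-client-rgw>.log | grep -oE '\[v2:[0-9.]+:[0-9]+' | sort | uniq -c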
Important: To view the debug logs, you must first enable debug_ms 1 in the configuration by running the ceph config set command.

[ceph: root@host01 /]# ceph config set client.rgw.rgw.1.ceph-ci-gune2w-mysx73-node4.dgvrmx debug_ms 1/1
[ceph: root@host01 /]# ceph config set client.rgw.rgw.1.ceph-ci-gune2w-mysx73-node7.rfkqqq debug_ms 1/1
4.2.3. Performing default reads
You can perform default reads on a pool to retrieve data from the primary data center. When a default READ is issued on a pool, the IO operation is retrieved directly from the primary OSD in the data center.
Prerequisites

- A stretched cluster with two data centers, with a Ceph Object Gateway configured on both.
- A user created with buckets backed by primary and replicated OSDs.

Procedure
To perform a default read, set rados_replica_read_policy to 'default' for the Ceph Object Gateway daemon with the ceph config set command. Example:

[ceph: root@host01 /]# ceph config set client.rgw.rgw.1 rados_replica_read_policy default
When a GET operation is performed, the IO operation is retrieved from the primary OSD in the data center.
Verification: Complete the following steps to verify default reads from the OSD set.
Run the ceph osd tree command to view the OSDs and the data centers. Example:

[ceph: root@host01 /]# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME                                STATUS  REWEIGHT  PRI-AFF
-1         0.58557  root default
-3         0.29279      datacenter DC1
-2         0.09760          host ceph-ci-fbv67y-ammmck-node2
 2    hdd  0.02440              osd.2                            up   1.00000  1.00000
11    hdd  0.02440              osd.11                           up   1.00000  1.00000
17    hdd  0.02440              osd.17                           up   1.00000  1.00000
22    hdd  0.02440              osd.22                           up   1.00000  1.00000
-4         0.09760          host ceph-ci-fbv67y-ammmck-node3
 0    hdd  0.02440              osd.0                            up   1.00000  1.00000
 6    hdd  0.02440              osd.6                            up   1.00000  1.00000
12    hdd  0.02440              osd.12                           up   1.00000  1.00000
18    hdd  0.02440              osd.18                           up   1.00000  1.00000
-5         0.09760          host ceph-ci-fbv67y-ammmck-node4
 5    hdd  0.02440              osd.5                            up   1.00000  1.00000
10    hdd  0.02440              osd.10                           up   1.00000  1.00000
16    hdd  0.02440              osd.16                           up   1.00000  1.00000
23    hdd  0.02440              osd.23                           up   1.00000  1.00000
-7         0.29279      datacenter DC2
-6         0.09760          host ceph-ci-fbv67y-ammmck-node5
 3    hdd  0.02440              osd.3                            up   1.00000  1.00000
 8    hdd  0.02440              osd.8                            up   1.00000  1.00000
14    hdd  0.02440              osd.14                           up   1.00000  1.00000
20    hdd  0.02440              osd.20                           up   1.00000  1.00000
-8         0.09760          host ceph-ci-fbv67y-ammmck-node6
 4    hdd  0.02440              osd.4                            up   1.00000  1.00000
 9    hdd  0.02440              osd.9                            up   1.00000  1.00000
15    hdd  0.02440              osd.15                           up   1.00000  1.00000
21    hdd  0.02440              osd.21                           up   1.00000  1.00000
-9         0.09760          host ceph-ci-fbv67y-ammmck-node7
 1    hdd  0.02440              osd.1                            up   1.00000  1.00000
 7    hdd  0.02440              osd.7                            up   1.00000  1.00000
13    hdd  0.02440              osd.13                           up   1.00000  1.00000
19    hdd  0.02440              osd.19                           up   1.00000  1.00000
Run the ceph orch command to identify the Ceph Object Gateway daemons in the data centers. Example:

[ceph: root@host01 /]# ceph orch ps | grep rg
rgw.rgw.1.ceph-ci-fbv67y-ammmck-node4.dmsmex  ceph-ci-fbv67y-ammmck-node4  *:80  running (4h)  10m ago  22h  93.3M  -  19.1.0-55.el9cp  0ee0a0ad94c7  34f27723ccd2
rgw.rgw.1.ceph-ci-fbv67y-ammmck-node7.pocecp  ceph-ci-fbv67y-ammmck-node7  *:80  running (4h)  10m ago  22h  96.4M  -  19.1.0-55.el9cp  0ee0a0ad94c7  40e4f2a6d4c4
Run the vim command on the Ceph Object Gateway log to verify that the default read has taken place. Example:

[ceph: root@host01 /]# vim /var/log/ceph/<fsid>/<ceph-client-rgw>.log

2024-08-28T10:26:05.155+0000 7fe6b03dd640 1 ====== starting new request req=0x7fe6879674a0 =====
2024-08-28T10:26:05.156+0000 7fe6b03dd640 1 -- 10.0.64.251:0/2235882725 --> [v2:10.0.65.171:6800/4255735352,v1:10.0.65.171:6801/4255735352] -- osd_op(unknown.0.0:1123 11.6d 11:b69767fc:::699c2d80-5683-43c5-bdcd-e8912107c176.24827.3_f1:head [getxattrs,stat] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e4513) -- 0x5639da653800 con 0x5639d804d800
You can see in the log that the default read has taken place: the osd_op entry carries neither the localize_reads nor the balance_reads flag.
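For a quick check across many requests, you can confirm that read operations no longer carry either replica-read flag. A minimal sketch, using the same log path placeholders as above; the expected count is 0:

[ceph: root@host01 /]# grep 'ondisk+read' /var/log/ceph/<fsid>/<ceph-client-rgw>.log | grep -c -e 'localize_reads' -e 'balance_reads'
0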
Important: To view the debug logs, you must first enable debug_ms 1 in the configuration by running the ceph config set command.

[ceph: root@host01 /]# ceph config set client.rgw.rgw.1.ceph-ci-gune2w-mysx73-node4.dgvrmx debug_ms 1/1
[ceph: root@host01 /]# ceph config set client.rgw.rgw.1.ceph-ci-gune2w-mysx73-node7.rfkqqq debug_ms 1/1