10.4. Troubleshooting low-level object operations
As a storage administrator, you can use the ceph-objectstore-tool utility to perform low-level object operations. The ceph-objectstore-tool utility supports the following low-level object operations:
- Manipulate the content of an object
- Remove an object
- List the object map (OMAP)
- Manipulate the OMAP header
- Manipulate the OMAP key
- List the object's attributes
- Manipulate the object's attribute key
Manipulating objects can cause unrecoverable data loss. Contact Red Hat support before using the ceph-objectstore-tool utility.
10.4.1. Prerequisites
- Root-level access to the Ceph OSD node.
10.4.2. Manipulating the object's content
With the ceph-objectstore-tool utility, you can get or set bytes on an object.
Setting bytes on an object can cause unrecoverable data loss. To prevent data loss, make a backup copy of the object.
Prerequisites
- Root-level access to the Ceph OSD node.
- The ceph-osd daemon is stopped.
Procedure
Verify that the appropriate OSD is down:
Syntax
systemctl status ceph-osd@OSD_NUMBER
Example
[root@osd ~]# systemctl status ceph-osd@1
For containerized deployments, perform the following steps to access the bluestore tool:
Set the noout flag on the cluster.
Example
[root@mon ~]# ceph osd set noout
- Log in to the node hosting the OSD container.
Back up the /etc/systemd/system/ceph-osd@.service unit file to the /root directory.
Example
[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.backup
Move the /run/ceph-osd@OSD_ID.service-cid file to /root.
Example
[root@osd ~]# mv /run/ceph-osd@0.service-cid /root
Edit the /etc/systemd/system/ceph-osd@.service unit file and add the -it --entrypoint /bin/bash option to the podman command.
Example
# Please do not change this file directly since it is managed by Ansible and will be overwritten
[Unit]
Description=Ceph OSD
After=network.target

[Service]
EnvironmentFile=-/etc/environment
ExecStartPre=-/usr/bin/rm -f /%t/%n-pid /%t/%n-cid
ExecStartPre=-/usr/bin/podman rm -f ceph-osd-%i
ExecStart=/usr/bin/podman run -it --entrypoint /bin/bash \
  -d --conmon-pidfile /%t/%n-pid --cidfile /%t/%n-cid \
  --rm \
  --net=host \
  --privileged=true \
  --pid=host \
  --ipc=host \
  --cpus=2 \
  -v /dev:/dev \
  -v /etc/localtime:/etc/localtime:ro \
  -v /var/lib/ceph:/var/lib/ceph:z \
  -v /etc/ceph:/etc/ceph:z \
  -v /var/run/ceph:/var/run/ceph:z \
  -v /var/run/udev/:/var/run/udev/ \
  -v /var/log/ceph:/var/log/ceph:z \
  -e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0 \
  -e CLUSTER=ceph \
  -v /run/lvm/:/run/lvm/ \
  -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE \
  -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-4-rhel8:latest \
  -e OSD_ID=%i \
  -e DEBUG=stayalive \
  --name=ceph-osd-%i \
  registry.redhat.io/rhceph/rhceph-4-rhel8:latest
ExecStop=-/usr/bin/sh -c "/usr/bin/podman rm -f `cat /%t/%n-cid`"
KillMode=none
Restart=always
RestartSec=10s
TimeoutStartSec=120
TimeoutStopSec=15
Type=forking
PIDFile=/%t/%n-pid

[Install]
WantedBy=multi-user.target
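Instead of opening the unit file in an editor, the same change can be scripted. A minimal sketch, run against a throwaway copy of a cut-down unit file rather than the real one (the /tmp path and the file contents here are illustrative):

```shell
# Create a disposable stand-in containing only a minimal ExecStart line.
cat > /tmp/ceph-osd@.service.demo <<'EOF'
ExecStart=/usr/bin/podman run \
    -d --name=ceph-osd-%i
EOF
# Insert the interactive-entrypoint flags directly after "podman run".
sed -i 's|podman run|podman run -it --entrypoint /bin/bash|' /tmp/ceph-osd@.service.demo
grep 'ExecStart' /tmp/ceph-osd@.service.demo
```

Applied to the real unit file, the same sed expression would make the edit shown in the example above in one step.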
Reload the systemd manager configuration.
Example
[root@osd ~]# systemctl daemon-reload
Restart the OSD service associated with the OSD_ID.
Syntax
systemctl restart ceph-osd@OSD_ID.service
Replace OSD_ID with the ID of the OSD.
Example
[root@osd ~]# systemctl restart ceph-osd@0.service
Log in to the container associated with the OSD_ID.
Syntax
podman exec -it ceph-osd-OSD_ID /bin/bash
Example
[root@osd ~]# podman exec -it ceph-osd-0 /bin/bash
Get the osd fsid and activate the OSD to mount the OSD's logical volume (LV).
Syntax
ceph-volume lvm list |grep -A15 "osd\.OSD_ID"|grep "osd fsid"
ceph-volume lvm activate --bluestore OSD_ID OSD_FSID
Example
[root@osd ~]# ceph-volume lvm list |grep -A15 "osd\.0"|grep "osd fsid"
        osd fsid                  087eee15-6561-40a3-8fe4-9583ba64a4ff
[root@osd ~]# ceph-volume lvm activate --bluestore 0 087eee15-6561-40a3-8fe4-9583ba64a4ff
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Running command: /usr/bin/ln -snf /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--41c69f8f--30e2--4685--9c5c--c605898c5537-osd--data--d073e8b3--0b89--4271--af5b--83045fd000dc
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff
 stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff.service → /usr/lib/systemd/system/ceph-volume@.service.
Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
 stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /usr/lib/systemd/system/ceph-osd@.service.
Running command: /usr/bin/systemctl start ceph-osd@0
 stderr: Running in chroot, ignoring request: start
--> ceph-volume lvm activate successful for osd ID: 0
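The grep pipeline above isolates the fsid from the `ceph-volume lvm list` output. A sketch of that extraction against a fabricated two-line sample of the output (no cluster needed; SAMPLE is illustrative):

```shell
# SAMPLE imitates the relevant lines of `ceph-volume lvm list` output (fabricated).
SAMPLE='====== osd.0 =======
  osd fsid                  087eee15-6561-40a3-8fe4-9583ba64a4ff'
# Same filters as the documented pipeline, then keep only the value field.
FSID=$(printf '%s\n' "$SAMPLE" | grep -A15 "osd\.0" | grep "osd fsid" | awk '{print $3}')
echo "$FSID"
```

The extracted value is what you pass as OSD_FSID to `ceph-volume lvm activate`.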
- Find the object by listing the objects of the OSD or placement group (PG).
Before setting the bytes on an object, make a backup and a working copy of the object. Run the command twice, redirecting once to a backup file and once to a working-copy file:
Syntax
ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID \
   OBJECT \
   get-bytes > OBJECT_FILE_NAME
Example
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c \
   '{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}' \
   get-bytes > zone_info.default.backup
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c \
   '{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}' \
   get-bytes > zone_info.default.working-copy
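The JSON object identifier passed above comes from listing the objects on the OSD; if memory serves, `ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op list` prints one such identifier per object (treat the exact flag and output shape as an assumption, not part of this guide). A sketch that filters a fabricated line of that output for the oid:

```shell
# LISTED imitates one line of `--op list` output: a [pgid, object-JSON] pair (fabricated).
LISTED='["0.1c",{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}]'
# Pull out the oid field to locate the object of interest.
printf '%s\n' "$LISTED" | grep -o '"oid":"[^"]*"'
```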
- Edit the working copy object file and modify the object contents accordingly.
Set the bytes of the object:
Syntax
ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID \
   OBJECT \
   set-bytes < OBJECT_FILE_NAME
Example
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c \
   '{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}' \
   set-bytes < zone_info.default.working-copy
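The full get-bytes / edit / set-bytes flow can be sketched with a plain file standing in for the object's bytes (file names are illustrative; no Ceph involved):

```shell
# A plain file stands in for the object store.
printf 'original-bytes' > object.store
# "get-bytes": dump the bytes twice, once as a backup, once as a working copy.
cat object.store > zone_info.default.backup
cat object.store > zone_info.default.working-copy
# Edit only the working copy; the backup stays pristine for recovery.
printf 'edited-bytes' > zone_info.default.working-copy
# "set-bytes": write the edited bytes back.
cat zone_info.default.working-copy > object.store
cat object.store
```

If the edit turns out to be wrong, the untouched backup file is what you would feed back with set-bytes to recover.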
For containerized deployments, perform the following steps to revert the changes:
After exiting the container, copy the /etc/systemd/system/ceph-osd@.service unit file back from the /root directory.
Example
[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.modified
[root@osd ~]# cp /root/ceph-osd@.service.backup /etc/systemd/system/ceph-osd@.service
Reload the systemd manager configuration.
Example
[root@osd ~]# systemctl daemon-reload
Move the /run/ceph-osd@OSD_ID.service-cid file to /tmp.
Example
[root@osd ~]# mv /run/ceph-osd@0.service-cid /tmp
Restart the OSD service associated with the OSD_ID.
Syntax
systemctl restart ceph-osd@OSD_ID.service
Example
[root@osd ~]# systemctl restart ceph-osd@0.service
Additional Resources
- For more information about stopping an OSD, see the Starting, Stopping, and Restarting the Ceph Daemons section in the Red Hat Ceph Storage Administration Guide.
10.4.3. Removing an object
Use the ceph-objectstore-tool utility to remove an object. By removing an object, its contents and references are removed from the placement group (PG).
You cannot recreate an object once it is removed.
Prerequisites
- Root-level access to the Ceph OSD node.
- The ceph-osd daemon is stopped.
Procedure
Verify that the appropriate OSD is down:
Syntax
systemctl status ceph-osd@OSD_NUMBER
Example
[root@osd ~]# systemctl status ceph-osd@1
For containerized deployments, perform the following steps to access the bluestore tool:
Set the noout flag on the cluster.
Example
[root@mon ~]# ceph osd set noout
- Log in to the node hosting the OSD container.
Back up the /etc/systemd/system/ceph-osd@.service unit file to the /root directory.
Example
[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.backup
Move the /run/ceph-osd@OSD_ID.service-cid file to /root.
Example
[root@osd ~]# mv /run/ceph-osd@0.service-cid /root
Edit the /etc/systemd/system/ceph-osd@.service unit file and add the -it --entrypoint /bin/bash option to the podman command.
Example
# Please do not change this file directly since it is managed by Ansible and will be overwritten
[Unit]
Description=Ceph OSD
After=network.target

[Service]
EnvironmentFile=-/etc/environment
ExecStartPre=-/usr/bin/rm -f /%t/%n-pid /%t/%n-cid
ExecStartPre=-/usr/bin/podman rm -f ceph-osd-%i
ExecStart=/usr/bin/podman run -it --entrypoint /bin/bash \
  -d --conmon-pidfile /%t/%n-pid --cidfile /%t/%n-cid \
  --rm \
  --net=host \
  --privileged=true \
  --pid=host \
  --ipc=host \
  --cpus=2 \
  -v /dev:/dev \
  -v /etc/localtime:/etc/localtime:ro \
  -v /var/lib/ceph:/var/lib/ceph:z \
  -v /etc/ceph:/etc/ceph:z \
  -v /var/run/ceph:/var/run/ceph:z \
  -v /var/run/udev/:/var/run/udev/ \
  -v /var/log/ceph:/var/log/ceph:z \
  -e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0 \
  -e CLUSTER=ceph \
  -v /run/lvm/:/run/lvm/ \
  -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE \
  -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-4-rhel8:latest \
  -e OSD_ID=%i \
  -e DEBUG=stayalive \
  --name=ceph-osd-%i \
  registry.redhat.io/rhceph/rhceph-4-rhel8:latest
ExecStop=-/usr/bin/sh -c "/usr/bin/podman rm -f `cat /%t/%n-cid`"
KillMode=none
Restart=always
RestartSec=10s
TimeoutStartSec=120
TimeoutStopSec=15
Type=forking
PIDFile=/%t/%n-pid

[Install]
WantedBy=multi-user.target
Reload the systemd manager configuration.
Example
[root@osd ~]# systemctl daemon-reload
Restart the OSD service associated with the OSD_ID.
Syntax
systemctl restart ceph-osd@OSD_ID.service
Replace OSD_ID with the ID of the OSD.
Example
[root@osd ~]# systemctl restart ceph-osd@0.service
Log in to the container associated with the OSD_ID.
Syntax
podman exec -it ceph-osd-OSD_ID /bin/bash
Example
[root@osd ~]# podman exec -it ceph-osd-0 /bin/bash
Get the osd fsid and activate the OSD to mount the OSD's logical volume (LV).
Syntax
ceph-volume lvm list |grep -A15 "osd\.OSD_ID"|grep "osd fsid"
ceph-volume lvm activate --bluestore OSD_ID OSD_FSID
Example
[root@osd ~]# ceph-volume lvm list |grep -A15 "osd\.0"|grep "osd fsid"
        osd fsid                  087eee15-6561-40a3-8fe4-9583ba64a4ff
[root@osd ~]# ceph-volume lvm activate --bluestore 0 087eee15-6561-40a3-8fe4-9583ba64a4ff
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Running command: /usr/bin/ln -snf /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--41c69f8f--30e2--4685--9c5c--c605898c5537-osd--data--d073e8b3--0b89--4271--af5b--83045fd000dc
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff
 stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff.service → /usr/lib/systemd/system/ceph-volume@.service.
Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
 stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /usr/lib/systemd/system/ceph-osd@.service.
Running command: /usr/bin/systemctl start ceph-osd@0
 stderr: Running in chroot, ignoring request: start
--> ceph-volume lvm activate successful for osd ID: 0
Remove the object:
Syntax
ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID \
   OBJECT \
   remove
Example
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c \
   '{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}' \
   remove
For containerized deployments, perform the following steps to revert the changes:
After exiting the container, copy the /etc/systemd/system/ceph-osd@.service unit file back from the /root directory.
Example
[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.modified
[root@osd ~]# cp /root/ceph-osd@.service.backup /etc/systemd/system/ceph-osd@.service
Reload the systemd manager configuration.
Example
[root@osd ~]# systemctl daemon-reload
Move the /run/ceph-osd@OSD_ID.service-cid file to /tmp.
Example
[root@osd ~]# mv /run/ceph-osd@0.service-cid /tmp
Restart the OSD service associated with the OSD_ID.
Syntax
systemctl restart ceph-osd@OSD_ID.service
Example
[root@osd ~]# systemctl restart ceph-osd@0.service
Additional Resources
- For more information about stopping an OSD, see the Starting, Stopping, and Restarting the Ceph Daemons section in the Red Hat Ceph Storage Administration Guide.
10.4.4. Listing the object map
Use the ceph-objectstore-tool utility to list the contents of the object map (OMAP). The output provides you with a list of keys.
Prerequisites
- Root-level access to the Ceph OSD node.
- The ceph-osd daemon is stopped.
Procedure
Verify that the appropriate OSD is down:
Syntax
systemctl status ceph-osd@OSD_NUMBER
Example
[root@osd ~]# systemctl status ceph-osd@1
For containerized deployments, perform the following steps to access the bluestore tool:
Set the noout flag on the cluster.
Example
[root@mon ~]# ceph osd set noout
- Log in to the node hosting the OSD container.
Back up the /etc/systemd/system/ceph-osd@.service unit file to the /root directory.
Example
[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.backup
Move the /run/ceph-osd@OSD_ID.service-cid file to /root.
Example
[root@osd ~]# mv /run/ceph-osd@0.service-cid /root
Edit the /etc/systemd/system/ceph-osd@.service unit file and add the -it --entrypoint /bin/bash option to the podman command.
Example
# Please do not change this file directly since it is managed by Ansible and will be overwritten
[Unit]
Description=Ceph OSD
After=network.target

[Service]
EnvironmentFile=-/etc/environment
ExecStartPre=-/usr/bin/rm -f /%t/%n-pid /%t/%n-cid
ExecStartPre=-/usr/bin/podman rm -f ceph-osd-%i
ExecStart=/usr/bin/podman run -it --entrypoint /bin/bash \
  -d --conmon-pidfile /%t/%n-pid --cidfile /%t/%n-cid \
  --rm \
  --net=host \
  --privileged=true \
  --pid=host \
  --ipc=host \
  --cpus=2 \
  -v /dev:/dev \
  -v /etc/localtime:/etc/localtime:ro \
  -v /var/lib/ceph:/var/lib/ceph:z \
  -v /etc/ceph:/etc/ceph:z \
  -v /var/run/ceph:/var/run/ceph:z \
  -v /var/run/udev/:/var/run/udev/ \
  -v /var/log/ceph:/var/log/ceph:z \
  -e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0 \
  -e CLUSTER=ceph \
  -v /run/lvm/:/run/lvm/ \
  -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE \
  -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-4-rhel8:latest \
  -e OSD_ID=%i \
  -e DEBUG=stayalive \
  --name=ceph-osd-%i \
  registry.redhat.io/rhceph/rhceph-4-rhel8:latest
ExecStop=-/usr/bin/sh -c "/usr/bin/podman rm -f `cat /%t/%n-cid`"
KillMode=none
Restart=always
RestartSec=10s
TimeoutStartSec=120
TimeoutStopSec=15
Type=forking
PIDFile=/%t/%n-pid

[Install]
WantedBy=multi-user.target
Reload the systemd manager configuration.
Example
[root@osd ~]# systemctl daemon-reload
Restart the OSD service associated with the OSD_ID.
Syntax
systemctl restart ceph-osd@OSD_ID.service
Replace OSD_ID with the ID of the OSD.
Example
[root@osd ~]# systemctl restart ceph-osd@0.service
Log in to the container associated with the OSD_ID.
Syntax
podman exec -it ceph-osd-OSD_ID /bin/bash
Example
[root@osd ~]# podman exec -it ceph-osd-0 /bin/bash
Get the osd fsid and activate the OSD to mount the OSD's logical volume (LV).
Syntax
ceph-volume lvm list |grep -A15 "osd\.OSD_ID"|grep "osd fsid"
ceph-volume lvm activate --bluestore OSD_ID OSD_FSID
Example
[root@osd ~]# ceph-volume lvm list |grep -A15 "osd\.0"|grep "osd fsid"
        osd fsid                  087eee15-6561-40a3-8fe4-9583ba64a4ff
[root@osd ~]# ceph-volume lvm activate --bluestore 0 087eee15-6561-40a3-8fe4-9583ba64a4ff
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Running command: /usr/bin/ln -snf /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--41c69f8f--30e2--4685--9c5c--c605898c5537-osd--data--d073e8b3--0b89--4271--af5b--83045fd000dc
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff
 stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff.service → /usr/lib/systemd/system/ceph-volume@.service.
Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
 stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /usr/lib/systemd/system/ceph-osd@.service.
Running command: /usr/bin/systemctl start ceph-osd@0
 stderr: Running in chroot, ignoring request: start
--> ceph-volume lvm activate successful for osd ID: 0
List the object map:
Syntax
ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID \
   OBJECT \
   list-omap
Example
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c \
   '{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}' \
   list-omap
For containerized deployments, perform the following steps to revert the changes:
After exiting the container, copy the /etc/systemd/system/ceph-osd@.service unit file back from the /root directory.
Example
[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.modified
[root@osd ~]# cp /root/ceph-osd@.service.backup /etc/systemd/system/ceph-osd@.service
Reload the systemd manager configuration.
Example
[root@osd ~]# systemctl daemon-reload
Move the /run/ceph-osd@OSD_ID.service-cid file to /tmp.
Example
[root@osd ~]# mv /run/ceph-osd@0.service-cid /tmp
Restart the OSD service associated with the OSD_ID.
Syntax
systemctl restart ceph-osd@OSD_ID.service
Example
[root@osd ~]# systemctl restart ceph-osd@0.service
Additional Resources
- For more information about stopping an OSD, see the Starting, Stopping, and Restarting the Ceph Daemons section in the Red Hat Ceph Storage Administration Guide.
10.4.5. Manipulating the object map header
The ceph-objectstore-tool utility outputs the object map (OMAP) header with the values associated with the object's keys.
Prerequisites
- Root-level access to the Ceph OSD node.
- The ceph-osd daemon is stopped.
Procedure
For containerized deployments, perform the following steps to access the bluestore tool:
Set the noout flag on the cluster.
Example
[root@mon ~]# ceph osd set noout
- Log in to the node hosting the OSD container.
Back up the /etc/systemd/system/ceph-osd@.service unit file to the /root directory.
Example
[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.backup
Move the /run/ceph-osd@OSD_ID.service-cid file to /root.
Example
[root@osd ~]# mv /run/ceph-osd@0.service-cid /root
Edit the /etc/systemd/system/ceph-osd@.service unit file and add the -it --entrypoint /bin/bash option to the podman command.
Example
# Please do not change this file directly since it is managed by Ansible and will be overwritten
[Unit]
Description=Ceph OSD
After=network.target

[Service]
EnvironmentFile=-/etc/environment
ExecStartPre=-/usr/bin/rm -f /%t/%n-pid /%t/%n-cid
ExecStartPre=-/usr/bin/podman rm -f ceph-osd-%i
ExecStart=/usr/bin/podman run -it --entrypoint /bin/bash \
  -d --conmon-pidfile /%t/%n-pid --cidfile /%t/%n-cid \
  --rm \
  --net=host \
  --privileged=true \
  --pid=host \
  --ipc=host \
  --cpus=2 \
  -v /dev:/dev \
  -v /etc/localtime:/etc/localtime:ro \
  -v /var/lib/ceph:/var/lib/ceph:z \
  -v /etc/ceph:/etc/ceph:z \
  -v /var/run/ceph:/var/run/ceph:z \
  -v /var/run/udev/:/var/run/udev/ \
  -v /var/log/ceph:/var/log/ceph:z \
  -e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0 \
  -e CLUSTER=ceph \
  -v /run/lvm/:/run/lvm/ \
  -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE \
  -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-4-rhel8:latest \
  -e OSD_ID=%i \
  -e DEBUG=stayalive \
  --name=ceph-osd-%i \
  registry.redhat.io/rhceph/rhceph-4-rhel8:latest
ExecStop=-/usr/bin/sh -c "/usr/bin/podman rm -f `cat /%t/%n-cid`"
KillMode=none
Restart=always
RestartSec=10s
TimeoutStartSec=120
TimeoutStopSec=15
Type=forking
PIDFile=/%t/%n-pid

[Install]
WantedBy=multi-user.target
Reload the systemd manager configuration.
Example
[root@osd ~]# systemctl daemon-reload
Restart the OSD service associated with the OSD_ID.
Syntax
systemctl restart ceph-osd@OSD_ID.service
Replace OSD_ID with the ID of the OSD.
Example
[root@osd ~]# systemctl restart ceph-osd@0.service
Log in to the container associated with the OSD_ID.
Syntax
podman exec -it ceph-osd-OSD_ID /bin/bash
Example
[root@osd ~]# podman exec -it ceph-osd-0 /bin/bash
Get the osd fsid and activate the OSD to mount the OSD's logical volume (LV).
Syntax
ceph-volume lvm list |grep -A15 "osd\.OSD_ID"|grep "osd fsid"
ceph-volume lvm activate --bluestore OSD_ID OSD_FSID
Example
[root@osd ~]# ceph-volume lvm list |grep -A15 "osd\.0"|grep "osd fsid"
        osd fsid                  087eee15-6561-40a3-8fe4-9583ba64a4ff
[root@osd ~]# ceph-volume lvm activate --bluestore 0 087eee15-6561-40a3-8fe4-9583ba64a4ff
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Running command: /usr/bin/ln -snf /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--41c69f8f--30e2--4685--9c5c--c605898c5537-osd--data--d073e8b3--0b89--4271--af5b--83045fd000dc
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff
 stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff.service → /usr/lib/systemd/system/ceph-volume@.service.
Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
 stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /usr/lib/systemd/system/ceph-osd@.service.
Running command: /usr/bin/systemctl start ceph-osd@0
 stderr: Running in chroot, ignoring request: start
--> ceph-volume lvm activate successful for osd ID: 0
Verify that the appropriate OSD is down:
Syntax
systemctl status ceph-osd@OSD_NUMBER
Example
[root@osd ~]# systemctl status ceph-osd@1
Get the object map header:
Syntax
ceph-objectstore-tool --data-path PATH_TO_OSD \
   --pgid PG_ID OBJECT \
   get-omaphdr > OBJECT_MAP_FILE_NAME
Example
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
   --pgid 0.1c '{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}' \
   get-omaphdr > zone_info.default.omaphdr.txt
Set the object map header:
Syntax
ceph-objectstore-tool --data-path PATH_TO_OSD \
   --pgid PG_ID OBJECT \
   set-omaphdr < OBJECT_MAP_FILE_NAME
Example
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
   --pgid 0.1c '{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}' \
   set-omaphdr < zone_info.default.omaphdr.txt
For containerized deployments, perform the following steps to revert the changes:
After exiting the container, copy the /etc/systemd/system/ceph-osd@.service unit file back from the /root directory.
Example
[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.modified
[root@osd ~]# cp /root/ceph-osd@.service.backup /etc/systemd/system/ceph-osd@.service
Reload the systemd manager configuration.
Example
[root@osd ~]# systemctl daemon-reload
Move the /run/ceph-osd@OSD_ID.service-cid file to /tmp.
Example
[root@osd ~]# mv /run/ceph-osd@0.service-cid /tmp
Restart the OSD service associated with the OSD_ID.
Syntax
systemctl restart ceph-osd@OSD_ID.service
Example
[root@osd ~]# systemctl restart ceph-osd@0.service
Additional Resources
- For more information about stopping an OSD, see the Starting, Stopping, and Restarting the Ceph Daemons section in the Red Hat Ceph Storage Administration Guide.
10.4.6. Manipulating the object map key
Use the ceph-objectstore-tool utility to change the object map (OMAP) key. You need to provide the data path, the placement group identifier (PG ID), the object, and the key in the OMAP.
Prerequisites
- Root-level access to the Ceph OSD node.
- The ceph-osd daemon is stopped.
Procedure
For containerized deployments, perform the following steps to access the bluestore tool:
Set the noout flag on the cluster.
Example
[root@mon ~]# ceph osd set noout
- Log in to the node hosting the OSD container.
Back up the /etc/systemd/system/ceph-osd@.service unit file to the /root directory.
Example
[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.backup
Move the /run/ceph-osd@OSD_ID.service-cid file to /root.
Example
[root@osd ~]# mv /run/ceph-osd@0.service-cid /root
Edit the /etc/systemd/system/ceph-osd@.service unit file and add the -it --entrypoint /bin/bash option to the podman command.
Example
# Please do not change this file directly since it is managed by Ansible and will be overwritten
[Unit]
Description=Ceph OSD
After=network.target

[Service]
EnvironmentFile=-/etc/environment
ExecStartPre=-/usr/bin/rm -f /%t/%n-pid /%t/%n-cid
ExecStartPre=-/usr/bin/podman rm -f ceph-osd-%i
ExecStart=/usr/bin/podman run -it --entrypoint /bin/bash \
  -d --conmon-pidfile /%t/%n-pid --cidfile /%t/%n-cid \
  --rm \
  --net=host \
  --privileged=true \
  --pid=host \
  --ipc=host \
  --cpus=2 \
  -v /dev:/dev \
  -v /etc/localtime:/etc/localtime:ro \
  -v /var/lib/ceph:/var/lib/ceph:z \
  -v /etc/ceph:/etc/ceph:z \
  -v /var/run/ceph:/var/run/ceph:z \
  -v /var/run/udev/:/var/run/udev/ \
  -v /var/log/ceph:/var/log/ceph:z \
  -e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0 \
  -e CLUSTER=ceph \
  -v /run/lvm/:/run/lvm/ \
  -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE \
  -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-4-rhel8:latest \
  -e OSD_ID=%i \
  -e DEBUG=stayalive \
  --name=ceph-osd-%i \
  registry.redhat.io/rhceph/rhceph-4-rhel8:latest
ExecStop=-/usr/bin/sh -c "/usr/bin/podman rm -f `cat /%t/%n-cid`"
KillMode=none
Restart=always
RestartSec=10s
TimeoutStartSec=120
TimeoutStopSec=15
Type=forking
PIDFile=/%t/%n-pid

[Install]
WantedBy=multi-user.target
Reload the systemd manager configuration.
Example
[root@osd ~]# systemctl daemon-reload
Restart the OSD service associated with the OSD_ID.
Syntax
systemctl restart ceph-osd@OSD_ID.service
Replace OSD_ID with the ID of the OSD.
Example
[root@osd ~]# systemctl restart ceph-osd@0.service
Log in to the container associated with the OSD_ID.
Syntax
podman exec -it ceph-osd-OSD_ID /bin/bash
Example
[root@osd ~]# podman exec -it ceph-osd-0 /bin/bash
Get the osd fsid and activate the OSD to mount the OSD's logical volume (LV).
Syntax
ceph-volume lvm list |grep -A15 "osd\.OSD_ID"|grep "osd fsid"
ceph-volume lvm activate --bluestore OSD_ID OSD_FSID
Example
[root@osd ~]# ceph-volume lvm list |grep -A15 "osd\.0"|grep "osd fsid"
        osd fsid                  087eee15-6561-40a3-8fe4-9583ba64a4ff
[root@osd ~]# ceph-volume lvm activate --bluestore 0 087eee15-6561-40a3-8fe4-9583ba64a4ff
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Running command: /usr/bin/ln -snf /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--41c69f8f--30e2--4685--9c5c--c605898c5537-osd--data--d073e8b3--0b89--4271--af5b--83045fd000dc
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff
 stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff.service → /usr/lib/systemd/system/ceph-volume@.service.
Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
 stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /usr/lib/systemd/system/ceph-osd@.service.
Running command: /usr/bin/systemctl start ceph-osd@0
 stderr: Running in chroot, ignoring request: start
--> ceph-volume lvm activate successful for osd ID: 0
Get the object map key:
Syntax
ceph-objectstore-tool --data-path PATH_TO_OSD \
   --pgid PG_ID OBJECT \
   get-omap KEY > OBJECT_MAP_FILE_NAME
Example
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
   --pgid 0.1c '{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}' \
   get-omap "" > zone_info.default.omap.txt
Set the object map key:
Syntax
ceph-objectstore-tool --data-path PATH_TO_OSD \
   --pgid PG_ID OBJECT \
   set-omap KEY < OBJECT_MAP_FILE_NAME
Example
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
   --pgid 0.1c '{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}' \
   set-omap "" < zone_info.default.omap.txt
Remove the object map key:
Syntax
ceph-objectstore-tool --data-path PATH_TO_OSD \
   --pgid PG_ID OBJECT \
   rm-omap KEY
Example
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
   --pgid 0.1c '{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}' \
   rm-omap ""
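The get-omap/set-omap pair above round-trips a key's value through a file on disk. The same redirection pattern, sketched with a plain file standing in for one OMAP value (file names illustrative; no Ceph involved):

```shell
# A plain file stands in for the stored OMAP value.
printf 'value-v1' > omap.store
# "get-omap KEY > FILE": dump the current value.
cat omap.store > zone_info.default.omap.txt
# Edit the dumped value.
printf 'value-v2' > zone_info.default.omap.txt
# "set-omap KEY < FILE": write the new value back.
cat zone_info.default.omap.txt > omap.store
cat omap.store
```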
For containerized deployments, perform the following steps to revert the changes:
After exiting the container, copy the /etc/systemd/system/ceph-osd@.service unit file back from the /root directory.
Example
[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.modified
[root@osd ~]# cp /root/ceph-osd@.service.backup /etc/systemd/system/ceph-osd@.service
Reload the systemd manager configuration.
Example
[root@osd ~]# systemctl daemon-reload
Move the /run/ceph-osd@OSD_ID.service-cid file to /tmp.
Example
[root@osd ~]# mv /run/ceph-osd@0.service-cid /tmp
Restart the OSD service associated with the OSD_ID.
Syntax
systemctl restart ceph-osd@OSD_ID.service
Example
[root@osd ~]# systemctl restart ceph-osd@0.service
Additional Resources
- For more information about stopping an OSD, see the Starting, Stopping, and Restarting the Ceph Daemons section in the Red Hat Ceph Storage Administration Guide.
10.4.7. Listing the object's attributes
Use the ceph-objectstore-tool utility to list an object's attributes. The output provides you with the object's keys and values.
Prerequisites
- Root-level access to the Ceph OSD node.
- The ceph-osd daemon is stopped.
Procedure
Verify that the appropriate OSD is down:
Syntax
systemctl status ceph-osd@OSD_NUMBER
Example
[root@osd ~]# systemctl status ceph-osd@1
For containerized deployments, perform the following steps to access the bluestore tool:
Set the noout flag on the cluster.
Example
[root@mon ~]# ceph osd set noout
- Log in to the node hosting the OSD container.
Back up the /etc/systemd/system/ceph-osd@.service unit file to the /root directory.
Example
[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.backup
Move the /run/ceph-osd@OSD_ID.service-cid file to /root.
Example
[root@osd ~]# mv /run/ceph-osd@0.service-cid /root
Edit the /etc/systemd/system/ceph-osd@.service unit file and add the -it --entrypoint /bin/bash option to the podman command.
Example
# Please do not change this file directly since it is managed by Ansible and will be overwritten
[Unit]
Description=Ceph OSD
After=network.target

[Service]
EnvironmentFile=-/etc/environment
ExecStartPre=-/usr/bin/rm -f /%t/%n-pid /%t/%n-cid
ExecStartPre=-/usr/bin/podman rm -f ceph-osd-%i
ExecStart=/usr/bin/podman run -it --entrypoint /bin/bash \
  -d --conmon-pidfile /%t/%n-pid --cidfile /%t/%n-cid \
  --rm \
  --net=host \
  --privileged=true \
  --pid=host \
  --ipc=host \
  --cpus=2 \
  -v /dev:/dev \
  -v /etc/localtime:/etc/localtime:ro \
  -v /var/lib/ceph:/var/lib/ceph:z \
  -v /etc/ceph:/etc/ceph:z \
  -v /var/run/ceph:/var/run/ceph:z \
  -v /var/run/udev/:/var/run/udev/ \
  -v /var/log/ceph:/var/log/ceph:z \
  -e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0 \
  -e CLUSTER=ceph \
  -v /run/lvm/:/run/lvm/ \
  -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE \
  -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-4-rhel8:latest \
  -e OSD_ID=%i \
  -e DEBUG=stayalive \
  --name=ceph-osd-%i \
  registry.redhat.io/rhceph/rhceph-4-rhel8:latest
ExecStop=-/usr/bin/sh -c "/usr/bin/podman rm -f `cat /%t/%n-cid`"
KillMode=none
Restart=always
RestartSec=10s
TimeoutStartSec=120
TimeoutStopSec=15
Type=forking
PIDFile=/%t/%n-pid

[Install]
WantedBy=multi-user.target
Reload the systemd manager configuration.
Example
[root@osd ~]# systemctl daemon-reload
Restart the OSD service associated with the OSD_ID.
Syntax
systemctl restart ceph-osd@OSD_ID.service
Replace OSD_ID with the ID of the OSD.
Example
[root@osd ~]# systemctl restart ceph-osd@0.service
Log in to the container associated with the OSD_ID.
Syntax
podman exec -it ceph-osd-OSD_ID /bin/bash
Example
[root@osd ~]# podman exec -it ceph-osd-0 /bin/bash
Get the osd fsid and activate the OSD to mount the OSD's logical volume (LV).
Syntax
ceph-volume lvm list |grep -A15 "osd\.OSD_ID"|grep "osd fsid"
ceph-volume lvm activate --bluestore OSD_ID OSD_FSID
Example
[root@osd ~]# ceph-volume lvm list |grep -A15 "osd\.0"|grep "osd fsid"
        osd fsid                  087eee15-6561-40a3-8fe4-9583ba64a4ff
[root@osd ~]# ceph-volume lvm activate --bluestore 0 087eee15-6561-40a3-8fe4-9583ba64a4ff
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Running command: /usr/bin/ln -snf /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--41c69f8f--30e2--4685--9c5c--c605898c5537-osd--data--d073e8b3--0b89--4271--af5b--83045fd000dc
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff
 stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff.service → /usr/lib/systemd/system/ceph-volume@.service.
Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
 stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /usr/lib/systemd/system/ceph-osd@.service.
Running command: /usr/bin/systemctl start ceph-osd@0
 stderr: Running in chroot, ignoring request: start
--> ceph-volume lvm activate successful for osd ID: 0
List the object's attributes:
Syntax
ceph-objectstore-tool --data-path PATH_TO_OSD \
   --pgid PG_ID OBJECT \
   list-attrs
Example
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
   --pgid 0.1c '{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}' \
   list-attrs
For containerized deployments, perform the following steps to revert the changes:
After exiting the container, copy the /etc/systemd/system/ceph-osd@.service unit file back from the /root directory.
Example
[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.modified
[root@osd ~]# cp /root/ceph-osd@.service.backup /etc/systemd/system/ceph-osd@.service
Reload the systemd manager configuration.
Example
[root@osd ~]# systemctl daemon-reload
Move the /run/ceph-osd@OSD_ID.service-cid file to /tmp.
Example
[root@osd ~]# mv /run/ceph-osd@0.service-cid /tmp
Restart the OSD service associated with the OSD_ID.
Syntax
systemctl restart ceph-osd@OSD_ID.service
Example
[root@osd ~]# systemctl restart ceph-osd@0.service
Additional Resources
- For more information about stopping an OSD, see the Starting, Stopping, and Restarting the Ceph Daemons section in the Red Hat Ceph Storage Administration Guide.
10.4.8. Manipulating the object attribute key

Use the ceph-objectstore-tool utility to change the attributes of an object. To manipulate the object's attributes you need the data and journal paths, the placement group identifier (PG ID), the object, and the key in the object's attribute.
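Those inputs always appear in the same positions on the command line. As a minimal sketch, the hypothetical helper below (build_attr_cmd is an invented name, not a product command) composes such an invocation so the argument order is easy to see:

```shell
# Hypothetical helper (illustration only): compose a ceph-objectstore-tool
# attribute invocation from its inputs. Arguments are placed in the order the
# procedures in this section use: data path, PG ID, object JSON, operation, key.
build_attr_cmd() {
  local data_path=$1 pgid=$2 object=$3 op=$4 key=$5
  printf 'ceph-objectstore-tool --data-path %s --pgid %s %s %s %s' \
      "$data_path" "$pgid" "$object" "$op" "$key"
}
```

For example, `build_attr_cmd /var/lib/ceph/osd/ceph-0 0.1c "$OBJECT_JSON" get-attrs oid` prints the same shape of command shown in the procedure that follows.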
Prerequisites

- Root-level access to the Ceph OSD node.
- Stop the ceph-osd daemon.
Procedure

Verify that the appropriate OSD is down:

[root@osd ~]# systemctl status ceph-osd@$OSD_NUMBER

Example

[root@osd ~]# systemctl status ceph-osd@1
For containerized deployments, to access the bluestore tool, follow the steps below:

Set the noout flag on the cluster.

Example

[root@mon ~]# ceph osd set noout
- Log in to the node hosting the OSD container.
Back up the /etc/systemd/system/ceph-osd@.service unit file to the /root directory.

Example

[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.backup
Move the /run/ceph-osd@OSD_ID.service-cid file to /root.

Example

[root@osd ~]# mv /run/ceph-osd@0.service-cid /root
Edit the /etc/systemd/system/ceph-osd@.service unit file and add the -it --entrypoint /bin/bash option to the podman command.

Example
# Please do not change this file directly since it is managed by Ansible and will be overwritten
[Unit]
Description=Ceph OSD
After=network.target

[Service]
EnvironmentFile=-/etc/environment
ExecStartPre=-/usr/bin/rm -f /%t/%n-pid /%t/%n-cid
ExecStartPre=-/usr/bin/podman rm -f ceph-osd-%i
ExecStart=/usr/bin/podman run -it --entrypoint /bin/bash \
  -d --conmon-pidfile /%t/%n-pid --cidfile /%t/%n-cid \
  --rm \
  --net=host \
  --privileged=true \
  --pid=host \
  --ipc=host \
  --cpus=2 \
  -v /dev:/dev \
  -v /etc/localtime:/etc/localtime:ro \
  -v /var/lib/ceph:/var/lib/ceph:z \
  -v /etc/ceph:/etc/ceph:z \
  -v /var/run/ceph:/var/run/ceph:z \
  -v /var/run/udev/:/var/run/udev/ \
  -v /var/log/ceph:/var/log/ceph:z \
  -e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0 \
  -e CLUSTER=ceph \
  -v /run/lvm/:/run/lvm/ \
  -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE \
  -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-4-rhel8:latest \
  -e OSD_ID=%i \
  -e DEBUG=stayalive \
  --name=ceph-osd-%i \
  \
  registry.redhat.io/rhceph/rhceph-4-rhel8:latest
ExecStop=-/usr/bin/sh -c "/usr/bin/podman rm -f `cat /%t/%n-cid`"
KillMode=none
Restart=always
RestartSec=10s
TimeoutStartSec=120
TimeoutStopSec=15
Type=forking
PIDFile=/%t/%n-pid

[Install]
WantedBy=multi-user.target
Reload the systemd manager configuration.

Example

[root@osd ~]# systemctl daemon-reload
Restart the OSD service associated with the OSD_ID.

Syntax

systemctl restart ceph-osd@OSD_ID.service

Replace OSD_ID with the ID of the OSD.

Example

[root@osd ~]# systemctl restart ceph-osd@0.service
Log in to the container associated with the OSD_ID.

Syntax

podman exec -it ceph-osd-OSD_ID /bin/bash

Example

[root@osd ~]# podman exec -it ceph-osd-0 /bin/bash
Get the osd fsid and activate the OSD to mount the OSD's logical volume (LV).

Syntax

ceph-volume lvm list |grep -A15 "osd\.OSD_ID"|grep "osd fsid"
ceph-volume lvm activate --bluestore OSD_ID OSD_FSID

Example
[root@osd ~]# ceph-volume lvm list |grep -A15 "osd\.0"|grep "osd fsid"
              osd fsid                  087eee15-6561-40a3-8fe4-9583ba64a4ff
[root@osd ~]# ceph-volume lvm activate --bluestore 0 087eee15-6561-40a3-8fe4-9583ba64a4ff
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Running command: /usr/bin/ln -snf /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--41c69f8f--30e2--4685--9c5c--c605898c5537-osd--data--d073e8b3--0b89--4271--af5b--83045fd000dc
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff
 stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff.service → /usr/lib/systemd/system/ceph-volume@.service.
Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
 stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /usr/lib/systemd/system/ceph-osd@.service.
Running command: /usr/bin/systemctl start ceph-osd@0
 stderr: Running in chroot, ignoring request: start
--> ceph-volume lvm activate successful for osd ID: 0
Get the object's attributes:

Syntax

ceph-objectstore-tool --data-path PATH_TO_OSD \
    --pgid PG_ID OBJECT \
    get-attrs KEY > OBJECT_ATTRS_FILE_NAME

Example

[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
    --pgid 0.1c '{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}' \
    get-attrs "oid" > zone_info.default.attr.txt
Set the object's attributes:

Syntax

ceph-objectstore-tool --data-path PATH_TO_OSD \
    --pgid PG_ID OBJECT \
    set-attrs KEY < OBJECT_ATTRS_FILE_NAME

Example

[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
    --pgid 0.1c '{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}' \
    set-attrs "oid" < zone_info.default.attr.txt
Remove the object's attributes:

Syntax

ceph-objectstore-tool --data-path PATH_TO_OSD \
    --pgid PG_ID OBJECT \
    rm-attrs KEY

Example

[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
    --pgid 0.1c '{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}' \
    rm-attrs "oid"
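Because rm-attrs is destructive, a reasonable precaution is to capture the attribute with get-attrs before removing it, so that it can later be restored with set-attrs. The sketch below is a hypothetical helper, not a product command: safe_rm_attr is an invented name, and the OBJECTSTORE_TOOL variable exists only so the sketch can be exercised without a live OSD.

```shell
# Hypothetical helper (illustration only, assumes the OSD is stopped and its
# data path is mounted): save an attribute to a backup file, then remove it.
# If the read fails, nothing is removed.
OBJECTSTORE_TOOL=${OBJECTSTORE_TOOL:-ceph-objectstore-tool}

safe_rm_attr() {
  local data_path=$1 pgid=$2 object=$3 key=$4 backup=$5
  # Back up the current value first; abort before rm-attrs if this fails.
  $OBJECTSTORE_TOOL --data-path "$data_path" --pgid "$pgid" "$object" \
      get-attrs "$key" > "$backup" || return 1
  $OBJECTSTORE_TOOL --data-path "$data_path" --pgid "$pgid" "$object" \
      rm-attrs "$key"
}
```

The saved file can then be fed back with set-attrs, as in the procedure above, if the removal turns out to be a mistake.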
For containerized deployments, to revert the changes, follow the steps below:

After exiting the container, copy the /etc/systemd/system/ceph-osd@.service unit file back from the /root directory.

Example

[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.modified
[root@osd ~]# cp /root/ceph-osd@.service.backup /etc/systemd/system/ceph-osd@.service
Reload the systemd manager configuration.

Example

[root@osd ~]# systemctl daemon-reload
Move the /run/ceph-osd@OSD_ID.service-cid file to /tmp.

Example

[root@osd ~]# mv /run/ceph-osd@0.service-cid /tmp
Restart the OSD service associated with the OSD_ID.

Syntax

systemctl restart ceph-osd@OSD_ID.service

Example

[root@osd ~]# systemctl restart ceph-osd@0.service
Additional Resources

- For more information about stopping the OSDs, see the Starting, Stopping, and Restarting the Ceph Daemons section in the Red Hat Ceph Storage Administration Guide.