Chapter 10. Troubleshooting Ceph objects
As a storage administrator, you can use the ceph-objectstore-tool
utility to perform high-level or low-level object operations. The ceph-objectstore-tool
utility can help you troubleshoot problems related to objects within a particular OSD or placement group.
You can also start OSD containers in rescue/maintenance mode to repair OSDs without installing Ceph packages on the OSD node.
Manipulating objects can cause unrecoverable data loss. Contact Red Hat support before using the ceph-objectstore-tool
utility.
10.1. Prerequisites
- Verify there are no network-related issues.
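A quick way to rule out obvious network or cluster-wide problems before manipulating objects is to review the cluster status from a Monitor node. The commands below are a minimal sketch of such a check, not an exhaustive network diagnosis:
[root@mon ~]# ceph health detail
[root@mon ~]# ceph osd tree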
10.2. Troubleshooting Ceph objects in a containerized environment
The OSD container can be started in rescue/maintenance mode to repair OSDs in Red Hat Ceph Storage 4 without installing Ceph packages on the OSD node.
You can use the ceph-bluestore-tool utility to run a consistency check with the fsck command, or to run a consistency check and repair any errors with the repair command.
This procedure is specific to containerized deployments only. Skip this section for bare-metal deployments.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to the Ceph OSD node.
- Stopping the ceph-osd daemon.
Procedure
Set noout flag on cluster.
Example
[root@mon ~]# ceph osd set noout
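To confirm the flag took effect, you can review the cluster flags; the flags line printed by the following command should now include noout:
[root@mon ~]# ceph osd dump | grep flags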
- Log in to the node hosting the OSD container.
Back up the /etc/systemd/system/ceph-osd@.service unit file to the /root directory.
Example
[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.backup
Move the /run/ceph-osd@OSD_ID.service-cid file to /root.
Example
[root@osd ~]# mv /run/ceph-osd@0.service-cid /root
Edit the /etc/systemd/system/ceph-osd@.service unit file and add the -it --entrypoint /bin/bash option to the podman command.
Example
# Please do not change this file directly since it is managed by Ansible and will be overwritten
[Unit]
Description=Ceph OSD
After=network.target

[Service]
EnvironmentFile=-/etc/environment
ExecStartPre=-/usr/bin/rm -f /%t/%n-pid /%t/%n-cid
ExecStartPre=-/usr/bin/podman rm -f ceph-osd-%i
ExecStart=/usr/bin/podman run -it --entrypoint /bin/bash \
  -d --conmon-pidfile /%t/%n-pid --cidfile /%t/%n-cid \
  --rm \
  --net=host \
  --privileged=true \
  --pid=host \
  --ipc=host \
  --cpus=2 \
  -v /dev:/dev \
  -v /etc/localtime:/etc/localtime:ro \
  -v /var/lib/ceph:/var/lib/ceph:z \
  -v /etc/ceph:/etc/ceph:z \
  -v /var/run/ceph:/var/run/ceph:z \
  -v /var/run/udev/:/var/run/udev/ \
  -v /var/log/ceph:/var/log/ceph:z \
  -e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0 \
  -e CLUSTER=ceph \
  -v /run/lvm/:/run/lvm/ \
  -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE \
  -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-4-rhel8:latest \
  -e OSD_ID=%i \
  -e DEBUG=stayalive \
  --name=ceph-osd-%i \
  \
  registry.redhat.io/rhceph/rhceph-4-rhel8:latest
ExecStop=-/usr/bin/sh -c "/usr/bin/podman rm -f `cat /%t/%n-cid`"
KillMode=none
Restart=always
RestartSec=10s
TimeoutStartSec=120
TimeoutStopSec=15
Type=forking
PIDFile=/%t/%n-pid

[Install]
WantedBy=multi-user.target
Reload systemd manager configuration.
Example
[root@osd ~]# systemctl daemon-reload
Restart the OSD service associated with the OSD_ID.
Syntax
systemctl restart ceph-osd@OSD_ID.service
Replace OSD_ID with the ID of the OSD.
Example
[root@osd ~]# systemctl restart ceph-osd@0.service
Log in to the container associated with the OSD_ID.
Syntax
podman exec -it ceph-osd-OSD_ID /bin/bash
Example
[root@osd ~]# podman exec -it ceph-osd-0 /bin/bash
Get osd fsid and activate the OSD to mount OSD’s logical volume (LV).
Syntax
ceph-volume lvm list |grep -A15 "osd\.OSD_ID"|grep "osd fsid"
ceph-volume lvm activate --bluestore OSD_ID OSD_FSID
Example
[root@osd ~]# ceph-volume lvm list |grep -A15 "osd\.0"|grep "osd fsid"
  osd fsid                  087eee15-6561-40a3-8fe4-9583ba64a4ff
[root@osd ~]# ceph-volume lvm activate --bluestore 0 087eee15-6561-40a3-8fe4-9583ba64a4ff
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Running command: /usr/bin/ln -snf /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--41c69f8f--30e2--4685--9c5c--c605898c5537-osd--data--d073e8b3--0b89--4271--af5b--83045fd000dc
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff
 stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff.service → /usr/lib/systemd/system/ceph-volume@.service.
Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
 stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /usr/lib/systemd/system/ceph-osd@.service.
Running command: /usr/bin/systemctl start ceph-osd@0
 stderr: Running in chroot, ignoring request: start
--> ceph-volume lvm activate successful for osd ID: 0
Run the fsck and repair commands.
Syntax
ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-OSD_ID
ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-OSD_ID
Example
[root@osd ~]# ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-0
fsck success
[root@osd ~]# ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-0
repair success
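By default, fsck checks metadata consistency only. A deeper check that also reads object data and verifies checksums is available with the --deep option, at the cost of a much longer run time; an illustrative invocation:
[root@osd ~]# ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-0 --deep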
After exiting the container, copy the /etc/systemd/system/ceph-osd@.service unit file from the /root directory.
Example
[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.modified
[root@osd ~]# cp /root/ceph-osd@.service.backup /etc/systemd/system/ceph-osd@.service
Reload systemd manager configuration.
Example
[root@osd ~]# systemctl daemon-reload
Move the /run/ceph-osd@OSD_ID.service-cid file to /tmp.
Example
[root@osd ~]# mv /run/ceph-osd@0.service-cid /tmp
Restart the OSD service associated with the OSD_ID.
Syntax
systemctl restart ceph-osd@OSD_ID.service
Example
[root@osd ~]# systemctl restart ceph-osd@0.service
Additional Resources
- For more information on stopping an OSD, see the Starting, Stopping, and Restarting the Ceph Daemons by Instance section in the Red Hat Ceph Storage Administration Guide.
10.3. Troubleshooting high-level object operations
As a storage administrator, you can use the ceph-objectstore-tool
utility to perform high-level object operations. The ceph-objectstore-tool
utility supports the following high-level object operations:
- List objects
- List lost objects
- Fix lost objects
Manipulating objects can cause unrecoverable data loss. Contact Red Hat support before using the ceph-objectstore-tool
utility.
10.3.1. Prerequisites
- Root-level access to the Ceph OSD nodes.
10.3.2. Listing objects
The OSD can contain zero to many placement groups, and zero to many objects within a placement group (PG). The ceph-objectstore-tool
utility allows you to list objects stored within an OSD.
Prerequisites
- Root-level access to the Ceph OSD node.
- Stopping the ceph-osd daemon.
Procedure
Verify the appropriate OSD is down:
[root@osd ~]# systemctl status ceph-osd@OSD_NUMBER
Example
[root@osd ~]# systemctl status ceph-osd@1
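If the daemon is stopped, systemctl reports the unit as inactive; the output resembles the following abridged, illustrative listing:
● ceph-osd@1.service - Ceph OSD
   Loaded: loaded (/etc/systemd/system/ceph-osd@.service; enabled)
   Active: inactive (dead)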
For containerized deployments, to access the bluestore tool, follow the steps below:
Set noout flag on cluster.
Example
[root@mon ~]# ceph osd set noout
- Log in to the node hosting the OSD container.
Back up the /etc/systemd/system/ceph-osd@.service unit file to the /root directory.
Example
[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.backup
Move the /run/ceph-osd@OSD_ID.service-cid file to /root.
Example
[root@osd ~]# mv /run/ceph-osd@0.service-cid /root
Edit the /etc/systemd/system/ceph-osd@.service unit file and add the -it --entrypoint /bin/bash option to the podman command.
Example
# Please do not change this file directly since it is managed by Ansible and will be overwritten
[Unit]
Description=Ceph OSD
After=network.target

[Service]
EnvironmentFile=-/etc/environment
ExecStartPre=-/usr/bin/rm -f /%t/%n-pid /%t/%n-cid
ExecStartPre=-/usr/bin/podman rm -f ceph-osd-%i
ExecStart=/usr/bin/podman run -it --entrypoint /bin/bash \
  -d --conmon-pidfile /%t/%n-pid --cidfile /%t/%n-cid \
  --rm \
  --net=host \
  --privileged=true \
  --pid=host \
  --ipc=host \
  --cpus=2 \
  -v /dev:/dev \
  -v /etc/localtime:/etc/localtime:ro \
  -v /var/lib/ceph:/var/lib/ceph:z \
  -v /etc/ceph:/etc/ceph:z \
  -v /var/run/ceph:/var/run/ceph:z \
  -v /var/run/udev/:/var/run/udev/ \
  -v /var/log/ceph:/var/log/ceph:z \
  -e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0 \
  -e CLUSTER=ceph \
  -v /run/lvm/:/run/lvm/ \
  -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE \
  -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-4-rhel8:latest \
  -e OSD_ID=%i \
  -e DEBUG=stayalive \
  --name=ceph-osd-%i \
  \
  registry.redhat.io/rhceph/rhceph-4-rhel8:latest
ExecStop=-/usr/bin/sh -c "/usr/bin/podman rm -f `cat /%t/%n-cid`"
KillMode=none
Restart=always
RestartSec=10s
TimeoutStartSec=120
TimeoutStopSec=15
Type=forking
PIDFile=/%t/%n-pid

[Install]
WantedBy=multi-user.target
Reload systemd manager configuration.
Example
[root@osd ~]# systemctl daemon-reload
Restart the OSD service associated with the OSD_ID.
Syntax
systemctl restart ceph-osd@OSD_ID.service
Replace OSD_ID with the ID of the OSD.
Example
[root@osd ~]# systemctl restart ceph-osd@0.service
Log in to the container associated with the OSD_ID.
Syntax
podman exec -it ceph-osd-OSD_ID /bin/bash
Example
[root@osd ~]# podman exec -it ceph-osd-0 /bin/bash
Get osd fsid and activate the OSD to mount OSD’s logical volume (LV).
Syntax
ceph-volume lvm list |grep -A15 "osd\.OSD_ID"|grep "osd fsid"
ceph-volume lvm activate --bluestore OSD_ID OSD_FSID
Example
[root@osd ~]# ceph-volume lvm list |grep -A15 "osd\.0"|grep "osd fsid"
  osd fsid                  087eee15-6561-40a3-8fe4-9583ba64a4ff
[root@osd ~]# ceph-volume lvm activate --bluestore 0 087eee15-6561-40a3-8fe4-9583ba64a4ff
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Running command: /usr/bin/ln -snf /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--41c69f8f--30e2--4685--9c5c--c605898c5537-osd--data--d073e8b3--0b89--4271--af5b--83045fd000dc
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff
 stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff.service → /usr/lib/systemd/system/ceph-volume@.service.
Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
 stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /usr/lib/systemd/system/ceph-osd@.service.
Running command: /usr/bin/systemctl start ceph-osd@0
 stderr: Running in chroot, ignoring request: start
--> ceph-volume lvm activate successful for osd ID: 0
Identify all the objects within an OSD, regardless of their placement group:
[root@osd ~]# ceph-objectstore-tool --data-path PATH_TO_OSD --op list
Example
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op list
Identify all the objects within a placement group:
[root@osd ~]# ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID --op list
Example
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c --op list
Identify the PG an object belongs to:
[root@osd ~]# ceph-objectstore-tool --data-path PATH_TO_OSD --op list OBJECT_ID
Example
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op list default.region
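Each line of the listing is a JSON pair of the placement group identifier and the object descriptor; the descriptor is the value to pass as OBJECT in the low-level operations later in this chapter. An illustrative line, using the same object as later examples, might look like this:
["0.1c",{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}]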
For containerized deployments, to revert the changes, follow the steps below:
After exiting the container, copy the /etc/systemd/system/ceph-osd@.service unit file from the /root directory.
Example
[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.modified
[root@osd ~]# cp /root/ceph-osd@.service.backup /etc/systemd/system/ceph-osd@.service
Reload systemd manager configuration.
Example
[root@osd ~]# systemctl daemon-reload
Move the /run/ceph-osd@OSD_ID.service-cid file to /tmp.
Example
[root@osd ~]# mv /run/ceph-osd@0.service-cid /tmp
Restart the OSD service associated with the OSD_ID.
Syntax
systemctl restart ceph-osd@OSD_ID.service
Example
[root@osd ~]# systemctl restart ceph-osd@0.service
Additional Resources
- For more information on stopping an OSD, see the Starting, Stopping, and Restarting the Ceph Daemons by Instance section in the Red Hat Ceph Storage Administration Guide.
10.3.3. Listing lost objects
An OSD can mark objects as lost or unfound. You can use the ceph-objectstore-tool
to list the lost and unfound objects stored within an OSD.
Prerequisites
- Root-level access to the Ceph OSD node.
- Stopping the ceph-osd daemon.
Procedure
Verify the appropriate OSD is down:
[root@osd ~]# systemctl status ceph-osd@OSD_NUMBER
Example
[root@osd ~]# systemctl status ceph-osd@1
For containerized deployments, to access the bluestore tool, follow the steps below:
Set noout flag on cluster.
Example
[root@mon ~]# ceph osd set noout
- Log in to the node hosting the OSD container.
Back up the /etc/systemd/system/ceph-osd@.service unit file to the /root directory.
Example
[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.backup
Move the /run/ceph-osd@OSD_ID.service-cid file to /root.
Example
[root@osd ~]# mv /run/ceph-osd@0.service-cid /root
Edit the /etc/systemd/system/ceph-osd@.service unit file and add the -it --entrypoint /bin/bash option to the podman command.
Example
# Please do not change this file directly since it is managed by Ansible and will be overwritten
[Unit]
Description=Ceph OSD
After=network.target

[Service]
EnvironmentFile=-/etc/environment
ExecStartPre=-/usr/bin/rm -f /%t/%n-pid /%t/%n-cid
ExecStartPre=-/usr/bin/podman rm -f ceph-osd-%i
ExecStart=/usr/bin/podman run -it --entrypoint /bin/bash \
  -d --conmon-pidfile /%t/%n-pid --cidfile /%t/%n-cid \
  --rm \
  --net=host \
  --privileged=true \
  --pid=host \
  --ipc=host \
  --cpus=2 \
  -v /dev:/dev \
  -v /etc/localtime:/etc/localtime:ro \
  -v /var/lib/ceph:/var/lib/ceph:z \
  -v /etc/ceph:/etc/ceph:z \
  -v /var/run/ceph:/var/run/ceph:z \
  -v /var/run/udev/:/var/run/udev/ \
  -v /var/log/ceph:/var/log/ceph:z \
  -e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0 \
  -e CLUSTER=ceph \
  -v /run/lvm/:/run/lvm/ \
  -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE \
  -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-4-rhel8:latest \
  -e OSD_ID=%i \
  -e DEBUG=stayalive \
  --name=ceph-osd-%i \
  \
  registry.redhat.io/rhceph/rhceph-4-rhel8:latest
ExecStop=-/usr/bin/sh -c "/usr/bin/podman rm -f `cat /%t/%n-cid`"
KillMode=none
Restart=always
RestartSec=10s
TimeoutStartSec=120
TimeoutStopSec=15
Type=forking
PIDFile=/%t/%n-pid

[Install]
WantedBy=multi-user.target
Reload systemd manager configuration.
Example
[root@osd ~]# systemctl daemon-reload
Restart the OSD service associated with the OSD_ID.
Syntax
systemctl restart ceph-osd@OSD_ID.service
Replace OSD_ID with the ID of the OSD.
Example
[root@osd ~]# systemctl restart ceph-osd@0.service
Log in to the container associated with the OSD_ID.
Syntax
podman exec -it ceph-osd-OSD_ID /bin/bash
Example
[root@osd ~]# podman exec -it ceph-osd-0 /bin/bash
Get osd fsid and activate the OSD to mount OSD’s logical volume (LV).
Syntax
ceph-volume lvm list |grep -A15 "osd\.OSD_ID"|grep "osd fsid"
ceph-volume lvm activate --bluestore OSD_ID OSD_FSID
Example
[root@osd ~]# ceph-volume lvm list |grep -A15 "osd\.0"|grep "osd fsid"
  osd fsid                  087eee15-6561-40a3-8fe4-9583ba64a4ff
[root@osd ~]# ceph-volume lvm activate --bluestore 0 087eee15-6561-40a3-8fe4-9583ba64a4ff
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Running command: /usr/bin/ln -snf /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--41c69f8f--30e2--4685--9c5c--c605898c5537-osd--data--d073e8b3--0b89--4271--af5b--83045fd000dc
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff
 stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff.service → /usr/lib/systemd/system/ceph-volume@.service.
Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
 stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /usr/lib/systemd/system/ceph-osd@.service.
Running command: /usr/bin/systemctl start ceph-osd@0
 stderr: Running in chroot, ignoring request: start
--> ceph-volume lvm activate successful for osd ID: 0
Use the ceph-objectstore-tool utility to list lost and unfound objects. Select the appropriate circumstance:
To list all the lost objects:
[root@osd ~]# ceph-objectstore-tool --data-path PATH_TO_OSD --op list-lost
Example
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op list-lost
To list all the lost objects within a placement group:
[root@osd ~]# ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID --op list-lost
Example
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c --op list-lost
To list a lost object by its identifier:
[root@osd ~]# ceph-objectstore-tool --data-path PATH_TO_OSD --op list-lost OBJECT_ID
Example
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op list-lost default.region
For containerized deployments, to revert the changes, follow the steps below:
After exiting the container, copy the /etc/systemd/system/ceph-osd@.service unit file from the /root directory.
Example
[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.modified
[root@osd ~]# cp /root/ceph-osd@.service.backup /etc/systemd/system/ceph-osd@.service
Reload systemd manager configuration.
Example
[root@osd ~]# systemctl daemon-reload
Move the /run/ceph-osd@OSD_ID.service-cid file to /tmp.
Example
[root@osd ~]# mv /run/ceph-osd@0.service-cid /tmp
Restart the OSD service associated with the OSD_ID.
Syntax
systemctl restart ceph-osd@OSD_ID.service
Example
[root@osd ~]# systemctl restart ceph-osd@0.service
Additional Resources
- For more information on stopping an OSD, see the Starting, Stopping, and Restarting the Ceph Daemons by Instance section in the Red Hat Ceph Storage Administration Guide.
10.3.4. Fixing lost objects
You can use the ceph-objectstore-tool
utility to list and fix lost and unfound objects stored within a Ceph OSD. This procedure applies only to legacy objects.
Prerequisites
- Root-level access to the Ceph OSD node.
- Stopping the ceph-osd daemon.
Procedure
Verify the appropriate OSD is down:
Syntax
systemctl status ceph-osd@OSD_NUMBER
Example
[root@osd ~]# systemctl status ceph-osd@1
For containerized deployments, to access the bluestore tool, follow the steps below:
Set noout flag on cluster.
Example
[root@mon ~]# ceph osd set noout
- Log in to the node hosting the OSD container.
Back up the /etc/systemd/system/ceph-osd@.service unit file to the /root directory.
Example
[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.backup
Move the /run/ceph-osd@OSD_ID.service-cid file to /root.
Example
[root@osd ~]# mv /run/ceph-osd@0.service-cid /root
Edit the /etc/systemd/system/ceph-osd@.service unit file and add the -it --entrypoint /bin/bash option to the podman command.
Example
# Please do not change this file directly since it is managed by Ansible and will be overwritten
[Unit]
Description=Ceph OSD
After=network.target

[Service]
EnvironmentFile=-/etc/environment
ExecStartPre=-/usr/bin/rm -f /%t/%n-pid /%t/%n-cid
ExecStartPre=-/usr/bin/podman rm -f ceph-osd-%i
ExecStart=/usr/bin/podman run -it --entrypoint /bin/bash \
  -d --conmon-pidfile /%t/%n-pid --cidfile /%t/%n-cid \
  --rm \
  --net=host \
  --privileged=true \
  --pid=host \
  --ipc=host \
  --cpus=2 \
  -v /dev:/dev \
  -v /etc/localtime:/etc/localtime:ro \
  -v /var/lib/ceph:/var/lib/ceph:z \
  -v /etc/ceph:/etc/ceph:z \
  -v /var/run/ceph:/var/run/ceph:z \
  -v /var/run/udev/:/var/run/udev/ \
  -v /var/log/ceph:/var/log/ceph:z \
  -e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0 \
  -e CLUSTER=ceph \
  -v /run/lvm/:/run/lvm/ \
  -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE \
  -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-4-rhel8:latest \
  -e OSD_ID=%i \
  -e DEBUG=stayalive \
  --name=ceph-osd-%i \
  \
  registry.redhat.io/rhceph/rhceph-4-rhel8:latest
ExecStop=-/usr/bin/sh -c "/usr/bin/podman rm -f `cat /%t/%n-cid`"
KillMode=none
Restart=always
RestartSec=10s
TimeoutStartSec=120
TimeoutStopSec=15
Type=forking
PIDFile=/%t/%n-pid

[Install]
WantedBy=multi-user.target
Reload systemd manager configuration.
Example
[root@osd ~]# systemctl daemon-reload
Restart the OSD service associated with the OSD_ID.
Syntax
systemctl restart ceph-osd@OSD_ID.service
Replace OSD_ID with the ID of the OSD.
Example
[root@osd ~]# systemctl restart ceph-osd@0.service
Log in to the container associated with the OSD_ID.
Syntax
podman exec -it ceph-osd-OSD_ID /bin/bash
Example
[root@osd ~]# podman exec -it ceph-osd-0 /bin/bash
Get osd fsid and activate the OSD to mount OSD’s logical volume (LV).
Syntax
ceph-volume lvm list |grep -A15 "osd\.OSD_ID"|grep "osd fsid"
ceph-volume lvm activate --bluestore OSD_ID OSD_FSID
Example
[root@osd ~]# ceph-volume lvm list |grep -A15 "osd\.0"|grep "osd fsid"
  osd fsid                  087eee15-6561-40a3-8fe4-9583ba64a4ff
[root@osd ~]# ceph-volume lvm activate --bluestore 0 087eee15-6561-40a3-8fe4-9583ba64a4ff
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Running command: /usr/bin/ln -snf /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--41c69f8f--30e2--4685--9c5c--c605898c5537-osd--data--d073e8b3--0b89--4271--af5b--83045fd000dc
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff
 stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff.service → /usr/lib/systemd/system/ceph-volume@.service.
Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
 stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /usr/lib/systemd/system/ceph-osd@.service.
Running command: /usr/bin/systemctl start ceph-osd@0
 stderr: Running in chroot, ignoring request: start
--> ceph-volume lvm activate successful for osd ID: 0
To list all the lost legacy objects:
Syntax
ceph-objectstore-tool --data-path PATH_TO_OSD --op fix-lost --dry-run
Example
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op fix-lost --dry-run
Use the ceph-objectstore-tool utility to fix lost and unfound objects as a ceph user. Select the appropriate circumstance:
To fix all lost objects:
Syntax
su - ceph -c 'ceph-objectstore-tool --data-path PATH_TO_OSD --op fix-lost'
Example
[root@osd ~]# su - ceph -c 'ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op fix-lost'
To fix all the lost objects within a placement group:
Syntax
su - ceph -c 'ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID --op fix-lost'
Example
[root@osd ~]# su - ceph -c 'ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c --op fix-lost'
To fix a lost object by its identifier:
Syntax
su - ceph -c 'ceph-objectstore-tool --data-path PATH_TO_OSD --op fix-lost OBJECT_ID'
Example
[root@osd ~]# su - ceph -c 'ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op fix-lost default.region'
For containerized deployments, to revert the changes, follow the steps below:
After exiting the container, copy the /etc/systemd/system/ceph-osd@.service unit file from the /root directory.
Example
[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.modified
[root@osd ~]# cp /root/ceph-osd@.service.backup /etc/systemd/system/ceph-osd@.service
Reload systemd manager configuration.
Example
[root@osd ~]# systemctl daemon-reload
Move the /run/ceph-osd@OSD_ID.service-cid file to /tmp.
Example
[root@osd ~]# mv /run/ceph-osd@0.service-cid /tmp
Restart the OSD service associated with the OSD_ID.
Syntax
systemctl restart ceph-osd@OSD_ID.service
Example
[root@osd ~]# systemctl restart ceph-osd@0.service
Additional Resources
- For more information on stopping an OSD, see the Starting, Stopping, and Restarting the Ceph Daemons by Instance section in the Red Hat Ceph Storage Administration Guide.
10.4. Troubleshooting low-level object operations
As a storage administrator, you can use the ceph-objectstore-tool
utility to perform low-level object operations. The ceph-objectstore-tool
utility supports the following low-level object operations:
- Manipulate the object’s content
- Remove an object
- List the object map (OMAP)
- Manipulate the OMAP header
- Manipulate the OMAP key
- List the object’s attributes
- Manipulate the object’s attribute key
Manipulating objects can cause unrecoverable data loss. Contact Red Hat support before using the ceph-objectstore-tool
utility.
10.4.1. Prerequisites
- Root-level access to the Ceph OSD nodes.
10.4.2. Manipulating the object’s content
With the ceph-objectstore-tool
utility, you can get or set bytes on an object.
Setting the bytes on an object can cause unrecoverable data loss. To prevent data loss, make a backup copy of the object.
Prerequisites
- Root-level access to the Ceph OSD node.
- Stopping the ceph-osd daemon.
Procedure
Verify the appropriate OSD is down:
[root@osd ~]# systemctl status ceph-osd@OSD_NUMBER
Example
[root@osd ~]# systemctl status ceph-osd@1
For containerized deployments, to access the bluestore tool, follow the steps below:
Set noout flag on cluster.
Example
[root@mon ~]# ceph osd set noout
- Log in to the node hosting the OSD container.
Back up the /etc/systemd/system/ceph-osd@.service unit file to the /root directory.
Example
[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.backup
Move the /run/ceph-osd@OSD_ID.service-cid file to /root.
Example
[root@osd ~]# mv /run/ceph-osd@0.service-cid /root
Edit the /etc/systemd/system/ceph-osd@.service unit file and add the -it --entrypoint /bin/bash option to the podman command.
Example
# Please do not change this file directly since it is managed by Ansible and will be overwritten
[Unit]
Description=Ceph OSD
After=network.target

[Service]
EnvironmentFile=-/etc/environment
ExecStartPre=-/usr/bin/rm -f /%t/%n-pid /%t/%n-cid
ExecStartPre=-/usr/bin/podman rm -f ceph-osd-%i
ExecStart=/usr/bin/podman run -it --entrypoint /bin/bash \
  -d --conmon-pidfile /%t/%n-pid --cidfile /%t/%n-cid \
  --rm \
  --net=host \
  --privileged=true \
  --pid=host \
  --ipc=host \
  --cpus=2 \
  -v /dev:/dev \
  -v /etc/localtime:/etc/localtime:ro \
  -v /var/lib/ceph:/var/lib/ceph:z \
  -v /etc/ceph:/etc/ceph:z \
  -v /var/run/ceph:/var/run/ceph:z \
  -v /var/run/udev/:/var/run/udev/ \
  -v /var/log/ceph:/var/log/ceph:z \
  -e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0 \
  -e CLUSTER=ceph \
  -v /run/lvm/:/run/lvm/ \
  -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE \
  -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-4-rhel8:latest \
  -e OSD_ID=%i \
  -e DEBUG=stayalive \
  --name=ceph-osd-%i \
  \
  registry.redhat.io/rhceph/rhceph-4-rhel8:latest
ExecStop=-/usr/bin/sh -c "/usr/bin/podman rm -f `cat /%t/%n-cid`"
KillMode=none
Restart=always
RestartSec=10s
TimeoutStartSec=120
TimeoutStopSec=15
Type=forking
PIDFile=/%t/%n-pid

[Install]
WantedBy=multi-user.target
Reload systemd manager configuration.
Example
[root@osd ~]# systemctl daemon-reload
Restart the OSD service associated with the OSD_ID.
Syntax
systemctl restart ceph-osd@OSD_ID.service
Replace OSD_ID with the ID of the OSD.
Example
[root@osd ~]# systemctl restart ceph-osd@0.service
Log in to the container associated with the OSD_ID.
Syntax
podman exec -it ceph-osd-OSD_ID /bin/bash
Example
[root@osd ~]# podman exec -it ceph-osd-0 /bin/bash
Get osd fsid and activate the OSD to mount OSD’s logical volume (LV).
Syntax
ceph-volume lvm list |grep -A15 "osd\.OSD_ID"|grep "osd fsid"
ceph-volume lvm activate --bluestore OSD_ID OSD_FSID
Example
[root@osd ~]# ceph-volume lvm list |grep -A15 "osd\.0"|grep "osd fsid"
  osd fsid                  087eee15-6561-40a3-8fe4-9583ba64a4ff
[root@osd ~]# ceph-volume lvm activate --bluestore 0 087eee15-6561-40a3-8fe4-9583ba64a4ff
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Running command: /usr/bin/ln -snf /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--41c69f8f--30e2--4685--9c5c--c605898c5537-osd--data--d073e8b3--0b89--4271--af5b--83045fd000dc
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff
 stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff.service → /usr/lib/systemd/system/ceph-volume@.service.
Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
 stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /usr/lib/systemd/system/ceph-osd@.service.
Running command: /usr/bin/systemctl start ceph-osd@0
 stderr: Running in chroot, ignoring request: start
--> ceph-volume lvm activate successful for osd ID: 0
- Find the object by listing the objects of the OSD or placement group (PG).
Before setting the bytes on an object, make a backup and a working copy of the object:
[root@osd ~]# ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID \
   OBJECT \
   get-bytes > OBJECT_FILE_NAME
[root@osd ~]# ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID \
   OBJECT \
   get-bytes > OBJECT_FILE_NAME
Example
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c \
   '{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}' \
   get-bytes > zone_info.default.backup
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c \
   '{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}' \
   get-bytes > zone_info.default.working-copy
- Edit the working copy object file and modify the object contents accordingly.
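The object file is raw binary data, so before editing it you might first inspect it with a hex viewer; a minimal sketch, assuming the working copy created above:
[root@osd ~]# hexdump -C zone_info.default.working-copy | head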
Set the bytes of the object:
[root@osd ~]# ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID \
   OBJECT \
   set-bytes < OBJECT_FILE_NAME
Example
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c \
   '{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}' \
   set-bytes < zone_info.default.working-copy
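To verify that the new contents were written, you can read the bytes back into a scratch file (zone_info.default.verify here is an arbitrary name) and compare it with the working copy, for example:
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c \
   '{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}' \
   get-bytes > zone_info.default.verify
[root@osd ~]# cmp zone_info.default.working-copy zone_info.default.verify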
For containerized deployments, to revert the changes, follow the steps below:
After exiting the container, copy the /etc/systemd/system/ceph-osd@.service unit file from the /root directory.
Example
[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.modified
[root@osd ~]# cp /root/ceph-osd@.service.backup /etc/systemd/system/ceph-osd@.service
Reload systemd manager configuration.
Example
[root@osd ~]# systemctl daemon-reload
Move the /run/ceph-osd@OSD_ID.service-cid file to /tmp.
Example
[root@osd ~]# mv /run/ceph-osd@0.service-cid /tmp
Restart the OSD service associated with the OSD_ID.
Syntax
systemctl restart ceph-osd@OSD_ID.service
Example
[root@osd ~]# systemctl restart ceph-osd@0.service
Additional Resources
- For more information on stopping an OSD, see the Starting, Stopping, and Restarting the Ceph Daemons by Instance section in the Red Hat Ceph Storage Administration Guide.
10.4.3. Removing an object
Use the ceph-objectstore-tool
utility to remove an object. By removing an object, its contents and references are removed from the placement group (PG).
You cannot recreate an object once it is removed.
Prerequisites
- Root-level access to the Ceph OSD node.
- Stopping the ceph-osd daemon.
Procedure
Verify the appropriate OSD is down:
[root@osd ~]# systemctl status ceph-osd@OSD_NUMBER
Example
[root@osd ~]# systemctl status ceph-osd@1
For containerized deployments, to access the bluestore tool, follow the steps below:
Set noout flag on cluster.
Example
[root@mon ~]# ceph osd set noout
- Log in to the node hosting the OSD container.
Back up the /etc/systemd/system/ceph-osd@.service unit file to the /root directory.
Example
[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.backup
Move the /run/ceph-osd@OSD_ID.service-cid file to /root.
Example
[root@osd ~]# mv /run/ceph-osd@0.service-cid /root
Edit the /etc/systemd/system/ceph-osd@.service unit file and add the -it --entrypoint /bin/bash option to the podman command.
Example
# Please do not change this file directly since it is managed by Ansible and will be overwritten
[Unit]
Description=Ceph OSD
After=network.target

[Service]
EnvironmentFile=-/etc/environment
ExecStartPre=-/usr/bin/rm -f /%t/%n-pid /%t/%n-cid
ExecStartPre=-/usr/bin/podman rm -f ceph-osd-%i
ExecStart=/usr/bin/podman run -it --entrypoint /bin/bash \
  -d --conmon-pidfile /%t/%n-pid --cidfile /%t/%n-cid \
  --rm \
  --net=host \
  --privileged=true \
  --pid=host \
  --ipc=host \
  --cpus=2 \
  -v /dev:/dev \
  -v /etc/localtime:/etc/localtime:ro \
  -v /var/lib/ceph:/var/lib/ceph:z \
  -v /etc/ceph:/etc/ceph:z \
  -v /var/run/ceph:/var/run/ceph:z \
  -v /var/run/udev/:/var/run/udev/ \
  -v /var/log/ceph:/var/log/ceph:z \
  -e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0 \
  -e CLUSTER=ceph \
  -v /run/lvm/:/run/lvm/ \
  -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE \
  -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-4-rhel8:latest \
  -e OSD_ID=%i \
  -e DEBUG=stayalive \
  --name=ceph-osd-%i \
  \
  registry.redhat.io/rhceph/rhceph-4-rhel8:latest
ExecStop=-/usr/bin/sh -c "/usr/bin/podman rm -f `cat /%t/%n-cid`"
KillMode=none
Restart=always
RestartSec=10s
TimeoutStartSec=120
TimeoutStopSec=15
Type=forking
PIDFile=/%t/%n-pid

[Install]
WantedBy=multi-user.target
Reload systemd manager configuration.
Example
[root@osd ~]# systemctl daemon-reload
Restart the OSD service associated with the OSD_ID.
Syntax
systemctl restart ceph-osd@OSD_ID.service
Replace OSD_ID with the ID of the OSD.
Example
[root@osd ~]# systemctl restart ceph-osd@0.service
Log in to the container associated with the OSD_ID.
Syntax
podman exec -it ceph-osd-OSD_ID /bin/bash
Example
[root@osd ~]# podman exec -it ceph-osd-0 /bin/bash
Get osd fsid and activate the OSD to mount OSD’s logical volume (LV).
Syntax
ceph-volume lvm list |grep -A15 "osd\.OSD_ID"|grep "osd fsid"
ceph-volume lvm activate --bluestore OSD_ID OSD_FSID
Example
[root@osd ~]# ceph-volume lvm list |grep -A15 "osd\.0"|grep "osd fsid"
  osd fsid                  087eee15-6561-40a3-8fe4-9583ba64a4ff
[root@osd ~]# ceph-volume lvm activate --bluestore 0 087eee15-6561-40a3-8fe4-9583ba64a4ff
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Running command: /usr/bin/ln -snf /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--41c69f8f--30e2--4685--9c5c--c605898c5537-osd--data--d073e8b3--0b89--4271--af5b--83045fd000dc
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff
 stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff.service → /usr/lib/systemd/system/ceph-volume@.service.
Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
 stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /usr/lib/systemd/system/ceph-osd@.service.
Running command: /usr/bin/systemctl start ceph-osd@0
 stderr: Running in chroot, ignoring request: start
--> ceph-volume lvm activate successful for osd ID: 0
Remove an object:
Syntax
ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID \
   OBJECT \
   remove
Example
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c \
   '{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}' \
   remove
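To confirm the removal, you can list the objects in the placement group again; the removed object should no longer appear in the output, for example:
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c --op list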
For containerized deployments, to revert the changes, follow the steps below:
After exiting the container, copy the /etc/systemd/system/ceph-osd@.service unit file from the /root directory.
Example
[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.modified
[root@osd ~]# cp /root/ceph-osd@.service.backup /etc/systemd/system/ceph-osd@.service
Reload systemd manager configuration.
Example
[root@osd ~]# systemctl daemon-reload
Move the /run/ceph-osd@OSD_ID.service-cid file to /tmp.
Example
[root@osd ~]# mv /run/ceph-osd@0.service-cid /tmp
Restart the OSD service associated with the OSD_ID.
Syntax
systemctl restart ceph-osd@OSD_ID.service
Example
[root@osd ~]# systemctl restart ceph-osd@0.service
Additional Resources
- For more information on stopping an OSD, see the Starting, Stopping, and Restarting the Ceph Daemons by Instance section in the Red Hat Ceph Storage Administration Guide.
10.4.4. Listing the object map
Use the ceph-objectstore-tool utility to list the contents of the object map (OMAP). The output provides you with a list of keys.
Prerequisites
- Root-level access to the Ceph OSD node.
- Stopping the ceph-osd daemon.
Procedure
Verify the appropriate OSD is down:
[root@osd ~]# systemctl status ceph-osd@OSD_NUMBER
Example
[root@osd ~]# systemctl status ceph-osd@1
For containerized deployments, to access the bluestore tool, follow the steps below:
Set noout flag on cluster.
Example
[root@mon ~]# ceph osd set noout
- Log in to the node hosting the OSD container.
Back up the /etc/systemd/system/ceph-osd@.service unit file to the /root directory.
Example
[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.backup
Move the /run/ceph-osd@OSD_ID.service-cid file to /root.
Example
[root@osd ~]# mv /run/ceph-osd@0.service-cid /root
Edit the /etc/systemd/system/ceph-osd@.service unit file and add the -it --entrypoint /bin/bash option to the podman command.
Example
# Please do not change this file directly since it is managed by Ansible and will be overwritten
[Unit]
Description=Ceph OSD
After=network.target

[Service]
EnvironmentFile=-/etc/environment
ExecStartPre=-/usr/bin/rm -f /%t/%n-pid /%t/%n-cid
ExecStartPre=-/usr/bin/podman rm -f ceph-osd-%i
ExecStart=/usr/bin/podman run -it --entrypoint /bin/bash \
  -d --conmon-pidfile /%t/%n-pid --cidfile /%t/%n-cid \
  --rm \
  --net=host \
  --privileged=true \
  --pid=host \
  --ipc=host \
  --cpus=2 \
  -v /dev:/dev \
  -v /etc/localtime:/etc/localtime:ro \
  -v /var/lib/ceph:/var/lib/ceph:z \
  -v /etc/ceph:/etc/ceph:z \
  -v /var/run/ceph:/var/run/ceph:z \
  -v /var/run/udev/:/var/run/udev/ \
  -v /var/log/ceph:/var/log/ceph:z \
  -e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0 \
  -e CLUSTER=ceph \
  -v /run/lvm/:/run/lvm/ \
  -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE \
  -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-4-rhel8:latest \
  -e OSD_ID=%i \
  -e DEBUG=stayalive \
  --name=ceph-osd-%i \
  \
  registry.redhat.io/rhceph/rhceph-4-rhel8:latest
ExecStop=-/usr/bin/sh -c "/usr/bin/podman rm -f `cat /%t/%n-cid`"
KillMode=none
Restart=always
RestartSec=10s
TimeoutStartSec=120
TimeoutStopSec=15
Type=forking
PIDFile=/%t/%n-pid

[Install]
WantedBy=multi-user.target
Reload systemd manager configuration.
Example
[root@osd ~]# systemctl daemon-reload
Restart the OSD service associated with the OSD_ID.
Syntax
systemctl restart ceph-osd@OSD_ID.service
Replace OSD_ID with the ID of the OSD.
Example
[root@osd ~]# systemctl restart ceph-osd@0.service
Log in to the container associated with the OSD_ID.
Syntax
podman exec -it ceph-osd-OSD_ID /bin/bash
Example
[root@osd ~]# podman exec -it ceph-osd-0 /bin/bash
Get osd fsid and activate the OSD to mount OSD’s logical volume (LV).
Syntax
ceph-volume lvm list |grep -A15 "osd\.OSD_ID"|grep "osd fsid"
ceph-volume lvm activate --bluestore OSD_ID OSD_FSID
Example
[root@osd ~]# ceph-volume lvm list |grep -A15 "osd\.0"|grep "osd fsid"
  osd fsid                  087eee15-6561-40a3-8fe4-9583ba64a4ff
[root@osd ~]# ceph-volume lvm activate --bluestore 0 087eee15-6561-40a3-8fe4-9583ba64a4ff
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Running command: /usr/bin/ln -snf /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--41c69f8f--30e2--4685--9c5c--c605898c5537-osd--data--d073e8b3--0b89--4271--af5b--83045fd000dc
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff
 stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff.service → /usr/lib/systemd/system/ceph-volume@.service.
Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
 stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /usr/lib/systemd/system/ceph-osd@.service.
Running command: /usr/bin/systemctl start ceph-osd@0
 stderr: Running in chroot, ignoring request: start
--> ceph-volume lvm activate successful for osd ID: 0
List the object map:
[root@osd ~]# ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID \
   OBJECT \
   list-omap
Example
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c \
   '{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}' \
   list-omap
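Each line of output is a single OMAP key name. If you intend to fetch the value of each key afterwards with get-omap, it can be convenient to capture the listing in a file; the file name here is arbitrary:
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c \
   '{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}' \
   list-omap > zone_info.default.omap_keys.txt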
For containerized deployments, to revert the changes, follow the steps below:
After exiting the container, copy the /etc/systemd/system/ceph-osd@.service unit file from the /root directory.
Example
[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.modified
[root@osd ~]# cp /root/ceph-osd@.service.backup /etc/systemd/system/ceph-osd@.service
Reload systemd manager configuration.
Example
[root@osd ~]# systemctl daemon-reload
Move the /run/ceph-osd@OSD_ID.service-cid file to /tmp.
Example
[root@osd ~]# mv /run/ceph-osd@0.service-cid /tmp
Restart the OSD service associated with the OSD_ID.
Syntax
systemctl restart ceph-osd@OSD_ID.service
Example
[root@osd ~]# systemctl restart ceph-osd@0.service
Additional Resources
- For more information on stopping an OSD, see the Starting, Stopping, and Restarting the Ceph Daemons by Instance section in the Red Hat Ceph Storage Administration Guide.
10.4.5. Manipulating the object map header
The ceph-objectstore-tool
utility will output the object map (OMAP) header with the values associated with the object’s keys.
Prerequisites
- Root-level access to the Ceph OSD node.
- Stopping the ceph-osd daemon.
Procedure
For containerized deployments, to access the bluestore tool, follow the steps below:
Set noout flag on cluster.
Example
[root@mon ~]# ceph osd set noout
- Log in to the node hosting the OSD container.
Back up the /etc/systemd/system/ceph-osd@.service unit file to the /root directory.
Example
[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.backup
Move the /run/ceph-osd@OSD_ID.service-cid file to /root.
Example
[root@osd ~]# mv /run/ceph-osd@0.service-cid /root
Edit the /etc/systemd/system/ceph-osd@.service unit file and add the -it --entrypoint /bin/bash option to the podman command.
Example
# Please do not change this file directly since it is managed by Ansible and will be overwritten
[Unit]
Description=Ceph OSD
After=network.target

[Service]
EnvironmentFile=-/etc/environment
ExecStartPre=-/usr/bin/rm -f /%t/%n-pid /%t/%n-cid
ExecStartPre=-/usr/bin/podman rm -f ceph-osd-%i
ExecStart=/usr/bin/podman run -it --entrypoint /bin/bash \
  -d --conmon-pidfile /%t/%n-pid --cidfile /%t/%n-cid \
  --rm \
  --net=host \
  --privileged=true \
  --pid=host \
  --ipc=host \
  --cpus=2 \
  -v /dev:/dev \
  -v /etc/localtime:/etc/localtime:ro \
  -v /var/lib/ceph:/var/lib/ceph:z \
  -v /etc/ceph:/etc/ceph:z \
  -v /var/run/ceph:/var/run/ceph:z \
  -v /var/run/udev/:/var/run/udev/ \
  -v /var/log/ceph:/var/log/ceph:z \
  -e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0 \
  -e CLUSTER=ceph \
  -v /run/lvm/:/run/lvm/ \
  -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE \
  -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-4-rhel8:latest \
  -e OSD_ID=%i \
  -e DEBUG=stayalive \
  --name=ceph-osd-%i \
  \
  registry.redhat.io/rhceph/rhceph-4-rhel8:latest
ExecStop=-/usr/bin/sh -c "/usr/bin/podman rm -f `cat /%t/%n-cid`"
KillMode=none
Restart=always
RestartSec=10s
TimeoutStartSec=120
TimeoutStopSec=15
Type=forking
PIDFile=/%t/%n-pid

[Install]
WantedBy=multi-user.target
Reload systemd manager configuration.
Example
[root@osd ~]# systemctl daemon-reload
Restart the OSD service associated with the OSD_ID.
Syntax
systemctl restart ceph-osd@OSD_ID.service
Replace OSD_ID with the ID of the OSD.
Example
[root@osd ~]# systemctl restart ceph-osd@0.service
Log in to the container associated with the OSD_ID.
Syntax
podman exec -it ceph-osd-OSD_ID /bin/bash
Example
[root@osd ~]# podman exec -it ceph-osd-0 /bin/bash
Get osd fsid and activate the OSD to mount OSD’s logical volume (LV).
Syntax
ceph-volume lvm list |grep -A15 "osd\.OSD_ID"|grep "osd fsid"
ceph-volume lvm activate --bluestore OSD_ID OSD_FSID
Example
[root@osd ~]# ceph-volume lvm list |grep -A15 "osd\.0"|grep "osd fsid"
  osd fsid                  087eee15-6561-40a3-8fe4-9583ba64a4ff
[root@osd ~]# ceph-volume lvm activate --bluestore 0 087eee15-6561-40a3-8fe4-9583ba64a4ff
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Running command: /usr/bin/ln -snf /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--41c69f8f--30e2--4685--9c5c--c605898c5537-osd--data--d073e8b3--0b89--4271--af5b--83045fd000dc
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff
 stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff.service → /usr/lib/systemd/system/ceph-volume@.service.
Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
 stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /usr/lib/systemd/system/ceph-osd@.service.
Running command: /usr/bin/systemctl start ceph-osd@0
 stderr: Running in chroot, ignoring request: start
--> ceph-volume lvm activate successful for osd ID: 0
Verify the appropriate OSD is down:
Syntax
systemctl status ceph-osd@OSD_NUMBER
Example
[root@osd ~]# systemctl status ceph-osd@1
Get the object map header:
Syntax
ceph-objectstore-tool --data-path PATH_TO_OSD \
   --pgid PG_ID OBJECT \
   get-omaphdr > OBJECT_MAP_FILE_NAME
Example
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
   --pgid 0.1c '{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}' \
   get-omaphdr > zone_info.default.omaphdr.txt
Set the object map header:
Syntax
ceph-objectstore-tool --data-path PATH_TO_OSD \
   --pgid PG_ID OBJECT \
   set-omaphdr < OBJECT_MAP_FILE_NAME
Example
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
   --pgid 0.1c '{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}' \
   set-omaphdr < zone_info.default.omaphdr.txt
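To verify that the header was written, you can fetch it again into a scratch file (an arbitrary name) and compare it with the file you set, for example:
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
   --pgid 0.1c '{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}' \
   get-omaphdr > zone_info.default.omaphdr.verify
[root@osd ~]# cmp zone_info.default.omaphdr.txt zone_info.default.omaphdr.verify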
For containerized deployments, to revert the changes, follow the steps below:
After exiting the container, copy the /etc/systemd/system/ceph-osd@.service unit file from the /root directory.
Example
[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.modified
[root@osd ~]# cp /root/ceph-osd@.service.backup /etc/systemd/system/ceph-osd@.service
Reload systemd manager configuration.
Example
[root@osd ~]# systemctl daemon-reload
Move the /run/ceph-osd@OSD_ID.service-cid file to /tmp.
Example
[root@osd ~]# mv /run/ceph-osd@0.service-cid /tmp
Restart the OSD service associated with the OSD_ID.
Syntax
systemctl restart ceph-osd@OSD_ID.service
Example
[root@osd ~]# systemctl restart ceph-osd@0.service
Additional Resources
- For more information on stopping an OSD, see the Starting, Stopping, and Restarting the Ceph Daemons by Instance section in the Red Hat Ceph Storage Administration Guide.
10.4.6. Manipulating the object map key
Use the ceph-objectstore-tool
utility to change the object map (OMAP) key. You need to provide the data path, the placement group identifier (PG ID), the object, and the key in the OMAP.
Prerequisites
- Root-level access to the Ceph OSD node.
- Stopping the ceph-osd daemon.
Procedure
For containerized deployments, to access the bluestore tool, follow the steps below:
Set noout flag on cluster.
Example
[root@mon ~]# ceph osd set noout
- Log in to the node hosting the OSD container.
Back up the /etc/systemd/system/ceph-osd@.service unit file to the /root directory.
Example
[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.backup
Move the /run/ceph-osd@OSD_ID.service-cid file to /root.
Example
[root@osd ~]# mv /run/ceph-osd@0.service-cid /root
Edit the /etc/systemd/system/ceph-osd@.service unit file and add the -it --entrypoint /bin/bash option to the podman command.
Example
# Please do not change this file directly since it is managed by Ansible and will be overwritten
[Unit]
Description=Ceph OSD
After=network.target

[Service]
EnvironmentFile=-/etc/environment
ExecStartPre=-/usr/bin/rm -f /%t/%n-pid /%t/%n-cid
ExecStartPre=-/usr/bin/podman rm -f ceph-osd-%i
ExecStart=/usr/bin/podman run -it --entrypoint /bin/bash \
  -d --conmon-pidfile /%t/%n-pid --cidfile /%t/%n-cid \
  --rm \
  --net=host \
  --privileged=true \
  --pid=host \
  --ipc=host \
  --cpus=2 \
  -v /dev:/dev \
  -v /etc/localtime:/etc/localtime:ro \
  -v /var/lib/ceph:/var/lib/ceph:z \
  -v /etc/ceph:/etc/ceph:z \
  -v /var/run/ceph:/var/run/ceph:z \
  -v /var/run/udev/:/var/run/udev/ \
  -v /var/log/ceph:/var/log/ceph:z \
  -e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0 \
  -e CLUSTER=ceph \
  -v /run/lvm/:/run/lvm/ \
  -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE \
  -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-4-rhel8:latest \
  -e OSD_ID=%i \
  -e DEBUG=stayalive \
  --name=ceph-osd-%i \
  \
  registry.redhat.io/rhceph/rhceph-4-rhel8:latest
ExecStop=-/usr/bin/sh -c "/usr/bin/podman rm -f `cat /%t/%n-cid`"
KillMode=none
Restart=always
RestartSec=10s
TimeoutStartSec=120
TimeoutStopSec=15
Type=forking
PIDFile=/%t/%n-pid

[Install]
WantedBy=multi-user.target
Reload systemd manager configuration.
Example
[root@osd ~]# systemctl daemon-reload
Restart the OSD service associated with the OSD_ID.
Syntax
systemctl restart ceph-osd@OSD_ID.service
Replace OSD_ID with the ID of the OSD.
Example
[root@osd ~]# systemctl restart ceph-osd@0.service
Log in to the container associated with the OSD_ID.
Syntax
podman exec -it ceph-osd-OSD_ID /bin/bash
Example
[root@osd ~]# podman exec -it ceph-osd-0 /bin/bash
Get osd fsid and activate the OSD to mount OSD’s logical volume (LV).
Syntax
ceph-volume lvm list |grep -A15 "osd\.OSD_ID"|grep "osd fsid"
ceph-volume lvm activate --bluestore OSD_ID OSD_FSID
Example
[root@osd ~]# ceph-volume lvm list |grep -A15 "osd\.0"|grep "osd fsid"
  osd fsid                  087eee15-6561-40a3-8fe4-9583ba64a4ff
[root@osd ~]# ceph-volume lvm activate --bluestore 0 087eee15-6561-40a3-8fe4-9583ba64a4ff
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Running command: /usr/bin/ln -snf /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--41c69f8f--30e2--4685--9c5c--c605898c5537-osd--data--d073e8b3--0b89--4271--af5b--83045fd000dc
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff
 stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff.service → /usr/lib/systemd/system/ceph-volume@.service.
Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
 stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /usr/lib/systemd/system/ceph-osd@.service.
Running command: /usr/bin/systemctl start ceph-osd@0
 stderr: Running in chroot, ignoring request: start
--> ceph-volume lvm activate successful for osd ID: 0
Get the object map key:
Syntax
ceph-objectstore-tool --data-path PATH_TO_OSD \
   --pgid PG_ID OBJECT \
   get-omap KEY > OBJECT_MAP_FILE_NAME
Example
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
   --pgid 0.1c '{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}' \
   get-omap "" > zone_info.default.omap.txt
Set the object map key:
Syntax
ceph-objectstore-tool --data-path PATH_TO_OSD \
   --pgid PG_ID OBJECT \
   set-omap KEY < OBJECT_MAP_FILE_NAME
Example
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
   --pgid 0.1c '{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}' \
   set-omap "" < zone_info.default.omap.txt
Remove the object map key:
Syntax
ceph-objectstore-tool --data-path PATH_TO_OSD \
   --pgid PG_ID OBJECT \
   rm-omap KEY
Example
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
   --pgid 0.1c '{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}' \
   rm-omap ""
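To confirm the key was removed, list the OMAP keys again; the removed key should be absent from the output, for example:
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
   --pgid 0.1c '{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}' \
   list-omap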
For containerized deployments, to revert the changes, follow the steps below:
After exiting the container, copy the /etc/systemd/system/ceph-osd@.service unit file from the /root directory.
Example
[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.modified
[root@osd ~]# cp /root/ceph-osd@.service.backup /etc/systemd/system/ceph-osd@.service
Reload the systemd manager configuration.

Example
[root@osd ~]# systemctl daemon-reload
Move the /run/ceph-osd@OSD_ID.service-cid file to /tmp.

Example
[root@osd ~]# mv /run/ceph-osd@0.service-cid /tmp
Restart the OSD service associated with the OSD_ID:

Syntax
systemctl restart ceph-osd@OSD_ID.service
Example
[root@osd ~]# systemctl restart ceph-osd@0.service
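For convenience, the revert steps above can be collected into a single shell sketch. This assumes OSD ID 0 and the backup file created earlier in this procedure; adjust both as needed:

#!/bin/bash
# Save the modified unit file, then restore the original from the backup.
cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.modified
cp /root/ceph-osd@.service.backup /etc/systemd/system/ceph-osd@.service
systemctl daemon-reload
# Clear the stale container ID file and restart the OSD normally.
mv /run/ceph-osd@0.service-cid /tmp
systemctl restart ceph-osd@0.service
# If you set the noout flag earlier, remember to unset it from a Monitor node:
#   ceph osd unset noout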
Additional Resources
- For more information on stopping an OSD, see the Starting, Stopping, and Restarting the Ceph Daemons by Instance section in the Red Hat Ceph Storage Administration Guide.
10.4.7. Listing the object’s attributes
Use the ceph-objectstore-tool utility to list an object's attributes. The output provides you with the object's keys and values.
Prerequisites
- Root-level access to the Ceph OSD node.
- Stopping the ceph-osd daemon.
Procedure
Verify the appropriate OSD is down:
Syntax

systemctl status ceph-osd@OSD_NUMBER
Example
[root@osd ~]# systemctl status ceph-osd@1
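Optionally, you can also confirm the OSD state from a Monitor node; down OSDs are flagged in the ceph osd tree output:

[root@mon ~]# ceph osd tree | grep -w down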
For containerized deployments, follow the steps below to access the BlueStore tool:
Set the noout flag on the cluster.

Example
[root@mon ~]# ceph osd set noout
- Log in to the node hosting the OSD container.
Back up the /etc/systemd/system/ceph-osd@.service unit file to the /root directory.

Example
[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.backup
Move the /run/ceph-osd@OSD_ID.service-cid file to /root.

Example
[root@osd ~]# mv /run/ceph-osd@0.service-cid /root
Edit the /etc/systemd/system/ceph-osd@.service unit file and add the -it --entrypoint /bin/bash option to the podman command.

Example
# Please do not change this file directly since it is managed by Ansible and will be overwritten
[Unit]
Description=Ceph OSD
After=network.target

[Service]
EnvironmentFile=-/etc/environment
ExecStartPre=-/usr/bin/rm -f /%t/%n-pid /%t/%n-cid
ExecStartPre=-/usr/bin/podman rm -f ceph-osd-%i
ExecStart=/usr/bin/podman run -it --entrypoint /bin/bash \
  -d --conmon-pidfile /%t/%n-pid --cidfile /%t/%n-cid \
  --rm \
  --net=host \
  --privileged=true \
  --pid=host \
  --ipc=host \
  --cpus=2 \
  -v /dev:/dev \
  -v /etc/localtime:/etc/localtime:ro \
  -v /var/lib/ceph:/var/lib/ceph:z \
  -v /etc/ceph:/etc/ceph:z \
  -v /var/run/ceph:/var/run/ceph:z \
  -v /var/run/udev/:/var/run/udev/ \
  -v /var/log/ceph:/var/log/ceph:z \
  -e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0 \
  -e CLUSTER=ceph \
  -v /run/lvm/:/run/lvm/ \
  -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE \
  -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-4-rhel8:latest \
  -e OSD_ID=%i \
  -e DEBUG=stayalive \
  --name=ceph-osd-%i \
  registry.redhat.io/rhceph/rhceph-4-rhel8:latest
ExecStop=-/usr/bin/sh -c "/usr/bin/podman rm -f `cat /%t/%n-cid`"
KillMode=none
Restart=always
RestartSec=10s
TimeoutStartSec=120
TimeoutStopSec=15
Type=forking
PIDFile=/%t/%n-pid

[Install]
WantedBy=multi-user.target
Reload the systemd manager configuration.

Example
[root@osd ~]# systemctl daemon-reload
Restart the OSD service associated with the OSD_ID:

Syntax
systemctl restart ceph-osd@OSD_ID.service
Replace OSD_ID with the ID of the OSD.

Example
[root@osd ~]# systemctl restart ceph-osd@0.service
Log in to the container associated with the OSD_ID:

Syntax
podman exec -it ceph-osd-OSD_ID /bin/bash
Example
[root@osd ~]# podman exec -it ceph-osd-0 /bin/bash
Get the osd fsid and activate the OSD to mount the OSD's logical volume (LV):

Syntax
ceph-volume lvm list | grep -A15 "osd\.OSD_ID" | grep "osd fsid"
ceph-volume lvm activate --bluestore OSD_ID OSD_FSID
Example
[root@osd ~]# ceph-volume lvm list | grep -A15 "osd\.0" | grep "osd fsid"
      osd fsid                  087eee15-6561-40a3-8fe4-9583ba64a4ff
[root@osd ~]# ceph-volume lvm activate --bluestore 0 087eee15-6561-40a3-8fe4-9583ba64a4ff
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Running command: /usr/bin/ln -snf /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--41c69f8f--30e2--4685--9c5c--c605898c5537-osd--data--d073e8b3--0b89--4271--af5b--83045fd000dc
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff
 stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff.service → /usr/lib/systemd/system/ceph-volume@.service.
Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
 stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /usr/lib/systemd/system/ceph-osd@.service.
Running command: /usr/bin/systemctl start ceph-osd@0
 stderr: Running in chroot, ignoring request: start
--> ceph-volume lvm activate successful for osd ID: 0
List the object’s attributes:
Syntax

ceph-objectstore-tool --data-path PATH_TO_OSD \
   --pgid PG_ID OBJECT \
   list-attrs
Example
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
   --pgid 0.1c '{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}' \
   list-attrs
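Because list-attrs prints the attribute keys, you can combine it with the tool's get-attr operation to dump every attribute of the object. The following loop is only a sketch: the attr.KEY.bin file names are illustrative, and it assumes the keys contain no whitespace.

OBJ='{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}'
for key in $(ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c "$OBJ" list-attrs); do
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c "$OBJ" get-attr "$key" > "attr.$key.bin"
done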
For containerized deployments, revert the changes by following the steps below:
After exiting the container, save the modified unit file and restore the original /etc/systemd/system/ceph-osd@.service unit file from the /root directory.

Example
[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.modified
[root@osd ~]# cp /root/ceph-osd@.service.backup /etc/systemd/system/ceph-osd@.service
Reload the systemd manager configuration.

Example
[root@osd ~]# systemctl daemon-reload
Move the /run/ceph-osd@OSD_ID.service-cid file to /tmp.

Example
[root@osd ~]# mv /run/ceph-osd@0.service-cid /tmp
Restart the OSD service associated with the OSD_ID:

Syntax
systemctl restart ceph-osd@OSD_ID.service
Example
[root@osd ~]# systemctl restart ceph-osd@0.service
Additional Resources
- For more information on stopping an OSD, see the Starting, Stopping, and Restarting the Ceph Daemons by Instance section in the Red Hat Ceph Storage Administration Guide.
10.4.8. Manipulating the object attribute key
Use the ceph-objectstore-tool utility to change an object's attributes. To manipulate an object's attributes, you need the data and journal paths, the placement group identifier (PG ID), the object, and the key in the object's attribute.
Prerequisites
- Root-level access to the Ceph OSD node.
- Stopping the ceph-osd daemon.
Procedure
Verify the appropriate OSD is down:
Syntax

systemctl status ceph-osd@OSD_NUMBER
Example
[root@osd ~]# systemctl status ceph-osd@1
For containerized deployments, follow the steps below to access the BlueStore tool:
Set the noout flag on the cluster.

Example
[root@mon ~]# ceph osd set noout
- Log in to the node hosting the OSD container.
Back up the /etc/systemd/system/ceph-osd@.service unit file to the /root directory.

Example
[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.backup
Move the /run/ceph-osd@OSD_ID.service-cid file to /root.

Example
[root@osd ~]# mv /run/ceph-osd@0.service-cid /root
Edit the /etc/systemd/system/ceph-osd@.service unit file and add the -it --entrypoint /bin/bash option to the podman command.

Example
# Please do not change this file directly since it is managed by Ansible and will be overwritten
[Unit]
Description=Ceph OSD
After=network.target

[Service]
EnvironmentFile=-/etc/environment
ExecStartPre=-/usr/bin/rm -f /%t/%n-pid /%t/%n-cid
ExecStartPre=-/usr/bin/podman rm -f ceph-osd-%i
ExecStart=/usr/bin/podman run -it --entrypoint /bin/bash \
  -d --conmon-pidfile /%t/%n-pid --cidfile /%t/%n-cid \
  --rm \
  --net=host \
  --privileged=true \
  --pid=host \
  --ipc=host \
  --cpus=2 \
  -v /dev:/dev \
  -v /etc/localtime:/etc/localtime:ro \
  -v /var/lib/ceph:/var/lib/ceph:z \
  -v /etc/ceph:/etc/ceph:z \
  -v /var/run/ceph:/var/run/ceph:z \
  -v /var/run/udev/:/var/run/udev/ \
  -v /var/log/ceph:/var/log/ceph:z \
  -e OSD_BLUESTORE=1 -e OSD_FILESTORE=0 -e OSD_DMCRYPT=0 \
  -e CLUSTER=ceph \
  -v /run/lvm/:/run/lvm/ \
  -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE \
  -e CONTAINER_IMAGE=registry.redhat.io/rhceph/rhceph-4-rhel8:latest \
  -e OSD_ID=%i \
  -e DEBUG=stayalive \
  --name=ceph-osd-%i \
  registry.redhat.io/rhceph/rhceph-4-rhel8:latest
ExecStop=-/usr/bin/sh -c "/usr/bin/podman rm -f `cat /%t/%n-cid`"
KillMode=none
Restart=always
RestartSec=10s
TimeoutStartSec=120
TimeoutStopSec=15
Type=forking
PIDFile=/%t/%n-pid

[Install]
WantedBy=multi-user.target
Reload the systemd manager configuration.

Example
[root@osd ~]# systemctl daemon-reload
Restart the OSD service associated with the OSD_ID:

Syntax
systemctl restart ceph-osd@OSD_ID.service
Replace OSD_ID with the ID of the OSD.

Example
[root@osd ~]# systemctl restart ceph-osd@0.service
Log in to the container associated with the OSD_ID:

Syntax
podman exec -it ceph-osd-OSD_ID /bin/bash
Example
[root@osd ~]# podman exec -it ceph-osd-0 /bin/bash
Get the osd fsid and activate the OSD to mount the OSD's logical volume (LV):

Syntax
ceph-volume lvm list | grep -A15 "osd\.OSD_ID" | grep "osd fsid"
ceph-volume lvm activate --bluestore OSD_ID OSD_FSID
Example
[root@osd ~]# ceph-volume lvm list | grep -A15 "osd\.0" | grep "osd fsid"
      osd fsid                  087eee15-6561-40a3-8fe4-9583ba64a4ff
[root@osd ~]# ceph-volume lvm activate --bluestore 0 087eee15-6561-40a3-8fe4-9583ba64a4ff
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Running command: /usr/bin/ln -snf /dev/ceph-41c69f8f-30e2-4685-9c5c-c605898c5537/osd-data-d073e8b3-0b89-4271-af5b-83045fd000dc /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--41c69f8f--30e2--4685--9c5c--c605898c5537-osd--data--d073e8b3--0b89--4271--af5b--83045fd000dc
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff
 stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-087eee15-6561-40a3-8fe4-9583ba64a4ff.service → /usr/lib/systemd/system/ceph-volume@.service.
Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
 stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /usr/lib/systemd/system/ceph-osd@.service.
Running command: /usr/bin/systemctl start ceph-osd@0
 stderr: Running in chroot, ignoring request: start
--> ceph-volume lvm activate successful for osd ID: 0
Get the object’s attributes:
Syntax
ceph-objectstore-tool --data-path PATH_TO_OSD \
   --pgid PG_ID OBJECT \
   get-attr KEY > OBJECT_ATTRS_FILE_NAME
Example
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
   --pgid 0.1c '{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}' \
   get-attr "oid" > zone_info.default.attr.txt
Set an object’s attributes:
Syntax
ceph-objectstore-tool --data-path PATH_TO_OSD \
   --pgid PG_ID OBJECT \
   set-attr KEY < OBJECT_ATTRS_FILE_NAME
Example
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
   --pgid 0.1c '{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}' \
   set-attr "oid" < zone_info.default.attr.txt
Remove an object’s attributes:
Syntax
ceph-objectstore-tool --data-path PATH_TO_OSD \
   --pgid PG_ID OBJECT \
   rm-attr KEY
Example
[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
   --pgid 0.1c '{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}' \
   rm-attr "oid"
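After removing an attribute, you can confirm it is gone by listing the object's attributes again (see Section 10.4.7):

[root@osd ~]# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
   --pgid 0.1c '{"oid":"zone_info.default","key":"","snapid":-2,"hash":235010478,"max":0,"pool":11,"namespace":""}' \
   list-attrs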
For containerized deployments, revert the changes by following the steps below:
After exiting the container, save the modified unit file and restore the original /etc/systemd/system/ceph-osd@.service unit file from the /root directory.

Example
[root@osd ~]# cp /etc/systemd/system/ceph-osd@.service /root/ceph-osd@.service.modified
[root@osd ~]# cp /root/ceph-osd@.service.backup /etc/systemd/system/ceph-osd@.service
Reload the systemd manager configuration.

Example
[root@osd ~]# systemctl daemon-reload
Move the /run/ceph-osd@OSD_ID.service-cid file to /tmp.

Example
[root@osd ~]# mv /run/ceph-osd@0.service-cid /tmp
Restart the OSD service associated with the OSD_ID:

Syntax
systemctl restart ceph-osd@OSD_ID.service
Example
[root@osd ~]# systemctl restart ceph-osd@0.service
Additional Resources
- For more information on stopping an OSD, see the Starting, Stopping, and Restarting the Ceph Daemons by Instance section in the Red Hat Ceph Storage Administration Guide.
10.5. Additional Resources
- For Red Hat Ceph Storage support, see the Red Hat Customer Portal.