Chapter 9. Rebooting the environment
You might need to reboot the environment, for example, to modify the physical servers or to recover from a power outage. In this situation, it is important to ensure that your Ceph Storage nodes boot correctly.
Make sure to boot the nodes in the following order:
- Boot all Ceph Monitor nodes first - This ensures the Ceph Monitor service is active in your high availability cluster. By default, the Ceph Monitor service is installed on the Controller node. If the Ceph Monitor is separate from the Controller in a custom role, make sure this custom Ceph Monitor role is active.
- Boot all Ceph Storage nodes - This ensures the Ceph OSD cluster can connect to the active Ceph Monitor cluster on the Controller nodes.
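Before you boot the Ceph Storage nodes, you can confirm that the monitors have formed quorum. A minimal check, assuming the containerized monitor name ceph-mon-controller-0 that is used in the procedures below:
# Reports the monitors in quorum; all Ceph Monitor nodes should be listed.
$ sudo podman exec -it ceph-mon-controller-0 ceph mon stat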
9.1. Rebooting a Ceph Storage (OSD) cluster
Complete the following steps to reboot a cluster of Ceph Storage (OSD) nodes.
Procedure
Log in to a Ceph MON or Controller node and disable Ceph Storage cluster rebalancing temporarily:
$ sudo podman exec -it ceph-mon-controller-0 ceph osd set noout
$ sudo podman exec -it ceph-mon-controller-0 ceph osd set norebalance
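Before you reboot the first node, you can confirm that both flags are set by inspecting the OSD map. A quick check, assuming the same container name:
# The flags line of the OSD map should now include noout and norebalance.
$ sudo podman exec -it ceph-mon-controller-0 ceph osd dump | grep flags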
- Select the first Ceph Storage node to reboot and log in to the node.
Reboot the node:
$ sudo reboot
- Wait until the node boots.
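One way to wait is to poll the node over SSH. A minimal sketch, where ceph-storage-0 is a hypothetical hostname; substitute the name of the node you rebooted:
# Poll until the rebooted node accepts SSH connections again.
# ceph-storage-0 is a hypothetical hostname.
$ until ssh ceph-storage-0 true 2>/dev/null; do sleep 10; done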
Log in to the node and check the cluster status:
$ sudo podman exec -it ceph-mon-controller-0 ceph status
Check that the pgmap reports all pgs as normal (active+clean).
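For a one-line summary of placement group states, you can also use the ceph pg stat subcommand, assuming the same container name:
# Summarizes placement group states; all pgs should report active+clean.
$ sudo podman exec -it ceph-mon-controller-0 ceph pg stat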
- Log out of the node, reboot the next node, and check its status. Repeat this process until you have rebooted all Ceph Storage nodes.
When complete, log in to a Ceph MON or Controller node and enable cluster rebalancing again:
$ sudo podman exec -it ceph-mon-controller-0 ceph osd unset noout
$ sudo podman exec -it ceph-mon-controller-0 ceph osd unset norebalance
Perform a final status check to verify that the cluster reports HEALTH_OK:
$ sudo podman exec -it ceph-mon-controller-0 ceph status
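If you want only the health value, the ceph health subcommand prints the cluster health status on its own, again assuming the same container name:
# Prints only the cluster health; the expected result is HEALTH_OK.
$ sudo podman exec -it ceph-mon-controller-0 ceph health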
If all of the overcloud nodes boot at the same time, the Ceph OSD services might not start correctly on the Ceph Storage nodes. In this situation, reboot the Ceph Storage OSDs so that they can connect to the Ceph Monitor service.
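As a sketch of restarting the OSD services without a full node reboot, assuming a containerized deployment where each OSD runs under a systemd unit named ceph-osd@<ID>; verify the unit names on your nodes before relying on this:
# On each Ceph Storage node, list the OSD units and restart any that failed.
# Assumes units named ceph-osd@<ID>; adjust to match your deployment.
$ sudo systemctl list-units --all 'ceph-osd@*'
$ sudo systemctl restart ceph-osd@<ID>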
Verify a HEALTH_OK status of the Ceph Storage node cluster with the following command:
$ sudo ceph status
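If the cluster does not return to HEALTH_OK, the ceph osd tree subcommand lists each OSD with its up or down status, which can help you identify OSDs that did not start:
# Lists all OSDs with their status; look for OSDs that are down.
$ sudo ceph osd tree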