Chapter 10. Rebooting the Environment
There might be situations when you need to reboot the environment, for example, to modify the physical servers or to recover from a power outage. In these situations, it is important to make sure that your Ceph Storage nodes boot correctly.
Make sure to boot the nodes in the following order:
- Boot all Ceph Monitor nodes first - This ensures the Ceph Monitor service is active in your high availability cluster. By default, the Ceph Monitor service is installed on the Controller node. If the Ceph Monitor is separate from the Controller in a custom role, make sure this custom Ceph Monitor role is active.
- Boot all Ceph Storage nodes - This ensures the Ceph OSD cluster can connect to the active Ceph Monitor cluster on the Controller nodes.
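As an optional check, after the Ceph Monitor nodes have booted you can confirm that the monitors have formed a quorum before you boot the Ceph Storage nodes. Run the following commands from any Ceph MON or Controller node; both report the monitors that are currently in quorum:
$ sudo ceph mon stat
$ sudo ceph quorum_status --format json-pretty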
10.1. Rebooting a Ceph Storage (OSD) cluster
Complete the following steps to reboot a cluster of Ceph Storage (OSD) nodes.
Procedure
- Log into a Ceph MON or Controller node and disable Ceph Storage cluster rebalancing temporarily:
$ sudo ceph osd set noout
$ sudo ceph osd set norebalance
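Optionally, confirm that both flags are set before you reboot the first node. The flags appear in the OSD map, and the cluster typically reports a HEALTH_WARN status while they are set, which is expected:
$ sudo ceph osd dump | grep flags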
- Select the first Ceph Storage node to reboot and log into the node.
- Reboot the node:
$ sudo reboot
- Wait until the node boots.
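If you do not have console access to watch the node boot, one option is to poll SSH until the node responds again. This is a minimal sketch; the node name ceph-0 and the heat-admin user are placeholders for your environment:
$ until ssh -o ConnectTimeout=5 -o BatchMode=yes heat-admin@ceph-0 true 2>/dev/null; do sleep 10; done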
- Log into the node and check the cluster status:
$ sudo ceph -s
Check that the pgmap reports all pgs as normal (active+clean).
- Log out of the node, reboot the next node, and check its status. Repeat this process until you have rebooted all Ceph Storage nodes.
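When you check the status between reboots, you can also poll until no placement groups remain in a transitional state instead of re-running the command by hand. This is a rough heuristic based on the ceph pg stat summary, not a replacement for reviewing the full ceph -s output:
$ while sudo ceph pg stat | grep -qE 'degraded|peering|recovering|remapped|stale'; do sleep 10; done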
- When complete, log into a Ceph MON or Controller node and enable cluster rebalancing again:
$ sudo ceph osd unset noout
$ sudo ceph osd unset norebalance
- Perform a final status check to verify that the cluster reports HEALTH_OK:
$ sudo ceph status
If all Overcloud nodes boot at the same time, the Ceph OSD services might not start correctly on the Ceph Storage nodes. In this situation, reboot the Ceph Storage OSDs so that they can connect to the Ceph Monitor service.
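Rebooting the OSDs usually means restarting their systemd services on each Ceph Storage node. The exact unit names depend on your release and deployment method, so list them first; the ceph-osd.target shown here is the conventional target for systemd-managed OSD daemons and might differ in your environment:
$ sudo systemctl list-units 'ceph-osd@*'
$ sudo systemctl restart ceph-osd.target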
Verify that the Ceph Storage cluster reports a HEALTH_OK status with the following command:
$ sudo ceph status