Chapter 10. Rebooting Nodes
Some situations require a reboot of nodes in the undercloud and overcloud. The following procedures show how to reboot different node types. Be aware of the following notes:
- If you are rebooting all nodes in one role, reboot each node individually so that the role's services remain available during the reboot.
- If rebooting all nodes in your OpenStack Platform environment, use the following list to guide the reboot order:
Recommended Node Reboot Order
- Reboot the director
- Reboot Controller nodes
- Reboot standalone Ceph MON nodes
- Reboot Ceph Storage nodes
- Reboot Compute nodes
- Reboot Object Storage nodes
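The ordering above can be sketched as a simple driver loop. This is illustrative only: the role names and the reboot_role stub are placeholders, not commands from this guide.

```shell
# Illustrative driver for the recommended reboot order. reboot_role is a
# stub that only prints what it would do; replace it with the per-role
# procedures in the sections below.
reboot_role() {
    echo "rebooting role: $1"
}

# The order matters: the director first, storage roles before Compute.
for role in director controller ceph-mon ceph-storage compute object-storage; do
    reboot_role "$role"
done
```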
10.1. Rebooting the Director
To reboot the director node, follow this process:
Reboot the node:

$ sudo reboot

- Wait until the node boots.

When the node boots, check the status of all services:

$ sudo systemctl list-units "openstack*" "neutron*" "openvswitch*"

Note: It might take approximately 10 minutes for the openstack-nova-compute service to become active after a reboot.

Verify the existence of your Overcloud and its nodes:

$ source ~/stackrc
$ openstack server list
$ openstack baremetal node list
$ openstack stack list
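Because openstack-nova-compute can take around ten minutes to come up, polling the service beats checking it once. A minimal sketch, assuming a hypothetical CHECK_CMD variable as a stand-in for the systemctl call:

```shell
# Poll until the service reports active, up to a timeout. CHECK_CMD is a
# hypothetical override so the loop can be exercised without systemd; the
# default checks openstack-nova-compute, which the note above says can take
# about 10 minutes to become active.
CHECK_CMD=${CHECK_CMD:-"systemctl is-active --quiet openstack-nova-compute"}

wait_for_active() {
    timeout=${1:-600}    # seconds; generous to cover the ~10 minute startup
    interval=10
    elapsed=0
    while [ "$elapsed" -lt "$timeout" ]; do
        if $CHECK_CMD >/dev/null 2>&1; then
            echo "service active after ${elapsed}s"
            return 0
        fi
        sleep "$interval"
        elapsed=$((elapsed + interval))
    done
    echo "timed out after ${timeout}s" >&2
    return 1
}
```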
10.2. Rebooting Controller Nodes
To reboot the Controller nodes, follow this process:
Select a node to reboot. Log into it and stop the cluster before rebooting:

$ sudo pcs cluster stop

Reboot the node:

$ sudo reboot

The remaining Controller nodes in the cluster retain the high availability services during the reboot.

- Wait until the node boots.

Re-enable the cluster for the node:

$ sudo pcs cluster start

Log into the node and check the cluster status:

$ sudo pcs status

The node rejoins the cluster.

Note: If any services fail after the reboot, run sudo pcs resource cleanup, which cleans the errors and sets the state of each resource to Started. If any errors persist, contact Red Hat and request guidance and assistance.

Check that all systemd services on the Controller node are active:

$ sudo systemctl list-units "openstack*" "neutron*" "openvswitch*"

- Log out of the node, select the next Controller node to reboot, and repeat this procedure until you have rebooted all Controller nodes.
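The note about sudo pcs resource cleanup can be folded into a small check that only runs the cleanup when pcs reports failures. A hedged sketch; PCS_STATUS_CMD and CLEANUP_CMD are hypothetical overrides, not part of the pcs CLI:

```shell
# Run a resource cleanup only if pcs reports failed resources.
# PCS_STATUS_CMD/CLEANUP_CMD are hypothetical overrides so the logic can be
# traced without a live Pacemaker cluster.
PCS_STATUS_CMD=${PCS_STATUS_CMD:-"sudo pcs status"}
CLEANUP_CMD=${CLEANUP_CMD:-"sudo pcs resource cleanup"}

cleanup_if_failed() {
    if $PCS_STATUS_CMD 2>/dev/null | grep -qi "failed"; then
        echo "failed resources found; running cleanup"
        $CLEANUP_CMD
    else
        echo "cluster healthy"
    fi
}
```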
10.3. Rebooting standalone Ceph MON nodes
To reboot the Ceph MON nodes, follow this process:
- Log into a Ceph MON node.
Reboot the node:
$ sudo reboot

- Wait until the node boots and rejoins the MON cluster.
Repeat these steps for each MON node in the cluster.
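The per-node loop above can be scripted. This is only a sketch with placeholder hostnames; the destructive reboot and the quorum check are left commented out so the loop is safe to trace.

```shell
# Iterate over standalone Ceph MON nodes one at a time. Hostnames are
# placeholders; the reboot and quorum-rejoin wait are commented out.
MON_NODES=${MON_NODES:-"ceph-mon-0 ceph-mon-1 ceph-mon-2"}

reboot_mons() {
    for node in $MON_NODES; do
        echo "rebooting $node"
        # ssh "$node" sudo reboot
        # Wait for the node to rejoin the quorum before moving on:
        # until ssh "$node" sudo ceph quorum_status | grep -q "\"$node\""; do
        #     sleep 10
        # done
    done
}
```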
10.4. Rebooting Ceph Storage Nodes
To reboot the Ceph Storage nodes, follow this process:
Log into a Ceph MON or Controller node and disable Ceph Storage cluster rebalancing temporarily:

$ sudo ceph osd set noout
$ sudo ceph osd set norebalance

- Select the first Ceph Storage node to reboot and log into it.

Reboot the node:

$ sudo reboot

- Wait until the node boots.

Log into the node and check the cluster status:

$ sudo ceph -s

Check that the pgmap reports all pgs as normal (active+clean).

- Log out of the node, reboot the next node, and check its status. Repeat this process until you have rebooted all Ceph Storage nodes.

When complete, log into a Ceph MON or Controller node and enable cluster rebalancing again:

$ sudo ceph osd unset noout
$ sudo ceph osd unset norebalance

Perform a final status check to verify the cluster reports HEALTH_OK:

$ sudo ceph status
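Setting the flags and guaranteeing they are unset again, even if the reboot loop is interrupted, fits naturally in a wrapper with an exit trap. A sketch under the assumption of a CEPH override variable (hypothetical, for tracing the flow):

```shell
# Pause rebalancing, run the given command (e.g. the per-node reboot loop),
# and clear the flags again on exit, even on interruption. CEPH is a
# hypothetical override so the flow can be traced without a cluster.
CEPH=${CEPH:-"sudo ceph"}

with_rebalance_paused() {
    $CEPH osd set noout
    $CEPH osd set norebalance
    trap '$CEPH osd unset noout; $CEPH osd unset norebalance' EXIT
    "$@"
}
```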
10.5. Rebooting Compute Nodes
Reboot each Compute node individually and ensure zero downtime of instances in your OpenStack Platform environment. This involves the following workflow:
- Select a Compute node to reboot
- Migrate its instances to another Compute node
- Reboot the empty Compute node
List all Compute nodes and their UUIDs:

$ nova list | grep "compute"
Select a Compute node to reboot and first migrate its instances using the following process:

From the undercloud, select a Compute node to reboot and disable it:

$ source ~/overcloudrc
$ openstack compute service list
$ openstack compute service set [hostname] nova-compute --disable

List all instances on the Compute node:

$ openstack server list --host [hostname] --all-projects

Migrate each instance from the disabled host. Use one of the following commands:

Migrate the instance to a specific host of your choice:

$ openstack server migrate [instance-id] --live [target-host] --wait

Let nova-scheduler automatically select the target host:

$ nova live-migration [instance-id]

Note: The nova command might cause some deprecation warnings, which are safe to ignore.

- Wait until migration completes.

Confirm the instance has migrated from the Compute node:

$ openstack server list --host [hostname] --all-projects

- Repeat this step until you have migrated all instances from the Compute node.
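The list-and-migrate cycle above can be wrapped in a drain loop. A hedged sketch: LIST_CMD and MIGRATE_CMD are hypothetical overrides, and the defaults assume the commands shown above with nova-scheduler picking the target.

```shell
# Live-migrate every instance off a Compute node, one at a time.
# LIST_CMD/MIGRATE_CMD are hypothetical overrides; the defaults use the
# commands shown in the procedure above.
drain_host() {
    host=$1
    list=${LIST_CMD:-"openstack server list --host $host --all-projects -f value -c ID"}
    migrate=${MIGRATE_CMD:-"nova live-migration"}
    for id in $($list); do
        echo "migrating $id off $host"
        $migrate "$id"
    done
}
```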
For full instructions on configuring and migrating instances, see Chapter 8, Migrating Virtual Machines Between Compute Nodes.
Reboot the Compute node using the following process:

Log into the Compute node and reboot it:

$ sudo reboot

- Wait until the node boots.

Enable the Compute node again:

$ source ~/overcloudrc
$ openstack compute service set [hostname] nova-compute --enable

- Select the next node to reboot.
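Putting the disable, reboot, and enable steps together, a dry-run driver can print the whole sequence before you run it for real. The RUN=echo convention, the hostname, and the ssh step are this sketch's own assumptions, not part of the documented procedure.

```shell
# Dry-run driver for the Compute reboot workflow. With the default RUN=echo
# each step is printed instead of executed; set RUN= (empty) to run for
# real. The ssh step is a placeholder for "log into the node and reboot it".
RUN=${RUN:-echo}

reboot_compute() {
    host=$1
    $RUN openstack compute service set "$host" nova-compute --disable
    # ... migrate instances off $host here, as described in the procedure above ...
    $RUN ssh "$host" sudo reboot
    $RUN openstack compute service set "$host" nova-compute --enable
}
```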
10.6. Rebooting Object Storage Nodes
To reboot the Object Storage nodes, follow this process:
Select an Object Storage node to reboot. Log into it and reboot it:

$ sudo reboot

- Wait until the node boots.

Log into the node and check the status of the Object Storage services:

$ sudo systemctl list-units "openstack-swift*"

- Log out of the node and repeat this process on the next Object Storage node.
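The status check can be tightened to flag any openstack-swift unit that is not active. A sketch; LIST_CMD is a hypothetical override standing in for the systemctl call shown above.

```shell
# Print any openstack-swift unit whose ACTIVE column is not "active".
# LIST_CMD is a hypothetical override so the filter can be tested without
# systemd; the default mirrors the command shown above.
LIST_CMD=${LIST_CMD:-"sudo systemctl list-units openstack-swift* --no-legend"}

swift_units_not_active() {
    # systemctl --no-legend columns: UNIT LOAD ACTIVE SUB DESCRIPTION
    $LIST_CMD | awk '$3 != "active" {print $1}'
}
```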