Chapter 6. Backing up and restoring the undercloud and control plane nodes with collocated Ceph monitors
If an error occurs during an update or upgrade, you can use ReaR backups to restore either the undercloud or overcloud control plane nodes, or both, to their previous state.
Prerequisites
- Install and configure ReaR. For more information, see Install and configure ReaR.
- Prepare the backup node. For more information, see Prepare the backup node.
- Execute the backup procedure. For more information, see Execute the backup procedure.
Procedure
- On the backup node, export the NFS directory to host the Ceph backups. Replace `<IP_ADDRESS/24>` with the IP address and subnet mask of the network:

  ```
  [root@backup ~]# cat >> /etc/exports << EOF
  /ceph_backups <IP_ADDRESS/24>(rw,sync,no_root_squash,no_subtree_check)
  EOF
  ```
- On the undercloud node, source the undercloud credentials and run the following script:

  ```
  # source stackrc
  ```

  ```
  #! /bin/bash
  for i in `openstack server list -c Name -c Networks -f value | grep controller | awk -F'=' '{print $2}' | awk -F' ' '{print $1}'`; do ssh -q heat-admin@$i 'sudo systemctl stop ceph-mon@$(hostname -s) ceph-mgr@$(hostname -s)'; done
  ```

  To verify that the `ceph-mgr@controller.service` container has stopped, enter the following command:

  ```
  [heat-admin@overcloud-controller-x ~]# sudo podman ps | grep ceph
  ```
- On the undercloud node, source the undercloud credentials and run the following script. Replace `<BACKUP_NODE_IP_ADDRESS>` with the IP address of the backup node:

  ```
  # source stackrc
  ```
- On the node that you want to restore, complete the following tasks:
  - Power off the node before you proceed.
  - Restore the node with the ReaR backup file that you created during the backup process. The file is located in the `/ceph_backups` directory of the backup node.
  - From the `Relax-and-Recover` boot menu, select `Recover <CONTROL_PLANE_NODE>`, where `<CONTROL_PLANE_NODE>` is the name of the control plane node.
- At the prompt, enter the following command:

  ```
  RESCUE <CONTROL_PLANE_NODE>:~ # rear recover
  ```

  When the image restoration process completes, the console displays the following message:

  ```
  Finished recovering your system
  Exiting rear recover
  Running exit tasks
  ```
- For the node that you want to restore, copy the Ceph backup from the `/ceph_backups` directory into the `/var/lib/ceph` directory:
  - Identify the system mount points. The `/dev/vda2` file system is mounted on `/mnt/local`.
- Create a temporary directory and mount the NFS share from the backup node on it:

  ```
  RESCUE <CONTROL_PLANE_NODE>:~ # mkdir /tmp/restore
  RESCUE <CONTROL_PLANE_NODE>:~ # mount -v -t nfs -o rw,noatime <BACKUP_NODE_IP_ADDRESS>:/ceph_backups /tmp/restore/
  ```
- On the control plane node, remove the contents of the existing `/var/lib/ceph` directory:

  ```
  RESCUE <CONTROL_PLANE_NODE>:~ # rm -rf /mnt/local/var/lib/ceph/*
  ```
- Restore the previous Ceph maps. Replace `<CONTROL_PLANE_NODE>` with the name of your control plane node:

  ```
  RESCUE <CONTROL_PLANE_NODE>:~ # tar -xvC /mnt/local/ -f /tmp/restore/<CONTROL_PLANE_NODE>/<CONTROL_PLANE_NODE>.tar.gz --xattrs --xattrs-include='*.*' var/lib/ceph
  ```
- Verify that the files are restored.
- Power off the node:

  ```
  RESCUE <CONTROL_PLANE_NODE>:~ # poweroff
  ```
- Power on the node. The node resumes its previous state.