Backing up and restoring the undercloud and control plane nodes
Creating and restoring backups of the undercloud and the overcloud control plane nodes
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Providing feedback on Red Hat documentation
We appreciate your input on our documentation. Tell us how we can make it better.
Providing documentation feedback in Jira
Use the Create Issue form to provide feedback on the documentation. The Jira issue will be created in the Red Hat OpenStack Platform Jira project, where you can track the progress of your feedback.
- Ensure that you are logged in to Jira. If you do not have a Jira account, create an account to submit feedback.
- Click the following link to open the Create Issue page: Create Issue
- Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form.
- Click Create.
Chapter 1. Backing up the undercloud node
To back up the undercloud node, you configure the backup node, install the Relax-and-Recover tool on the undercloud node, and create the backup image. You can create backups as a part of your regular environment maintenance.
In addition, you must back up the undercloud node before performing updates or upgrades. You can use the backups to restore the undercloud node to its previous state if an error occurs during an update or upgrade.
1.1. Supported backup formats and protocols
The undercloud and control plane backup and restore process uses the open-source tool Relax-and-Recover (ReaR) to create and restore bootable backup images. ReaR is written in Bash and supports multiple image formats and multiple transport protocols.
The following list shows the backup formats and protocols that Red Hat OpenStack Platform supports when you use ReaR to back up and restore the undercloud and control plane.
- Bootable media formats
  - ISO
- File transport protocols
  - SFTP
  - NFS
1.2. Configuring the backup storage location
Before you create a backup of the undercloud node, configure the backup storage location in the bar-vars.yaml environment file. This file stores the key-value parameters that you want to pass to the backup execution.
Procedure
In the bar-vars.yaml file, configure the backup storage location. Follow the appropriate steps for your NFS server or SFTP server.

If you use an NFS server, add the following parameters to the bar-vars.yaml file:

tripleo_backup_and_restore_server: <ip_address>
tripleo_backup_and_restore_shared_storage_folder: <backup_server_dir_path>
tripleo_backup_and_restore_output_url: "nfs://{{ tripleo_backup_and_restore_server }}{{ tripleo_backup_and_restore_shared_storage_folder }}"
tripleo_backup_and_restore_backup_url: "nfs://{{ tripleo_backup_and_restore_server }}{{ tripleo_backup_and_restore_shared_storage_folder }}"
Replace <ip_address> and <backup_server_dir_path> with the values that apply to your environment. The default value of the tripleo_backup_and_restore_server parameter is 192.168.24.1.

If you use an SFTP server, add the tripleo_backup_and_restore_output_url parameter and set the values of the URL and credentials of the SFTP server:

tripleo_backup_and_restore_output_url: sftp://<user>:<password>@<backup_node>/
tripleo_backup_and_restore_backup_url: iso:///backup/
Replace <user>, <password>, and <backup_node> with the backup node URL and credentials.
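For reference, a complete bar-vars.yaml file for the NFS case might look like the following sketch. The IP address and export path shown here are illustrative values, not defaults you can rely on; substitute the values for your environment:

tripleo_backup_and_restore_server: 192.168.24.50
tripleo_backup_and_restore_shared_storage_folder: /ctl_plane_backups
tripleo_backup_and_restore_output_url: "nfs://{{ tripleo_backup_and_restore_server }}{{ tripleo_backup_and_restore_shared_storage_folder }}"
tripleo_backup_and_restore_backup_url: "nfs://{{ tripleo_backup_and_restore_server }}{{ tripleo_backup_and_restore_shared_storage_folder }}"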
1.3. Installing and configuring an NFS server on the backup node
You can install and configure a new NFS server to store the backup file. To install and configure an NFS server on the backup node, create an inventory file, create an SSH key, and run the openstack undercloud backup command with the NFS server options.
- If you previously installed and configured an NFS or SFTP server, you do not need to complete this procedure. You enter the server information when you set up ReaR on the node that you want to back up.
- By default, the Relax-and-Recover (ReaR) IP address parameter for the NFS server is 192.168.24.1. You must add the tripleo_backup_and_restore_server parameter to set the IP address value that matches your environment.
Procedure
On the undercloud node, source the undercloud credentials:
[stack@undercloud ~]$ source stackrc
(undercloud) [stack@undercloud ~]$
On the undercloud node, create an inventory file for the backup node:
(undercloud) [stack@undercloud ~]$ cat <<'EOF' > ~/nfs-inventory.yaml
[BackupNode]
<backup_node> ansible_host=<ip_address> ansible_user=<user>
EOF
Replace <ip_address> and <user> with the values that apply to your environment.

Copy the public SSH key from the undercloud node to the backup node:
(undercloud) [stack@undercloud ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub <backup_node>
Replace <backup_node> with the hostname or IP address of the backup node.

Configure the NFS server on the backup node:
(undercloud) [stack@undercloud ~]$ openstack undercloud backup --setup-nfs --extra-vars /home/stack/bar-vars.yaml --inventory /home/stack/nfs-inventory.yaml
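For illustration, a completed nfs-inventory.yaml file for a hypothetical backup node named backupnode01 that is reachable at 192.168.24.50 as the stack user would look like this:

[BackupNode]
backupnode01 ansible_host=192.168.24.50 ansible_user=stack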
1.4. Installing ReaR on the undercloud node
Before you create a backup of the undercloud node, install and configure Relax and Recover (ReaR) on the undercloud.
Prerequisites
- You have an NFS or SFTP server installed and configured on the backup node. For more information about creating a new NFS server, see Section 1.3, “Installing and configuring an NFS server on the backup node”.
Procedure
On the undercloud node, source the undercloud credentials:
[stack@undercloud ~]$ source stackrc
If you use a custom stack name, add the --stack <stack_name> option to the tripleo-ansible-inventory command.

If you have not done so before, create an inventory file and use the tripleo-ansible-inventory command to generate a static inventory file that contains hosts and variables for all the overcloud nodes:

(undercloud) [stack@undercloud ~]$ tripleo-ansible-inventory \
--ansible_ssh_user heat-admin \
--static-yaml-inventory /home/stack/tripleo-inventory.yaml
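Optionally, before you continue, you can confirm that the generated inventory file is valid and that the nodes in it are reachable. This check is not part of the documented procedure, but a standard Ansible ping is a safe way to validate the file:

(undercloud) [stack@undercloud ~]$ ansible -i /home/stack/tripleo-inventory.yaml all -m ping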
Install ReaR on the undercloud node:
(undercloud) [stack@undercloud ~]$ openstack undercloud backup --setup-rear --extra-vars /home/stack/bar-vars.yaml --inventory /home/stack/tripleo-inventory.yaml
If your system uses the UEFI boot loader, perform the following steps on the undercloud node:
Install the following tools:
$ sudo dnf install dosfstools efibootmgr
Enable UEFI backup in the ReaR configuration file located in /etc/rear/local.conf by replacing the USING_UEFI_BOOTLOADER parameter value 0 with the value 1.
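For example, assuming that the USING_UEFI_BOOTLOADER parameter is already present in /etc/rear/local.conf with the value 0, you can change it with a single sed command and then confirm the result:

$ sudo sed -i 's/^USING_UEFI_BOOTLOADER=0/USING_UEFI_BOOTLOADER=1/' /etc/rear/local.conf
$ grep USING_UEFI_BOOTLOADER /etc/rear/local.conf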
1.5. Creating a standalone database backup of the undercloud nodes
If you are upgrading your Red Hat OpenStack Platform environment from 13 to 16.2, you must create a standalone database backup after you perform the undercloud upgrade and before you perform the Leapp upgrade process on the undercloud nodes.
You can optionally include standalone undercloud database backups in your routine backup schedule to provide additional data security. A full backup of an undercloud node includes a database backup of the undercloud node. But if a full undercloud restoration fails, you might lose access to the database portion of the full undercloud backup. In this case, you can recover the database from a standalone undercloud database backup.
Procedure
Create a database backup of the undercloud nodes:
openstack undercloud backup --db-only
The database backup file is stored in /home/stack with the name openstack-backup-mysql-<timestamp>.sql.
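To confirm that the backup file was created, list it by its timestamped name; the exact file name varies with the time of the backup:

[stack@undercloud ~]$ ls -l /home/stack/openstack-backup-mysql-*.sql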
1.6. Configuring Open vSwitch (OVS) interfaces for backup
If you use an Open vSwitch (OVS) bridge in your environment, you must manually configure the OVS interfaces before you create a backup of the undercloud or control plane nodes. The restoration process uses this information to restore the network interfaces.
Procedure
In the /etc/rear/local.conf file, add the NETWORKING_PREPARATION_COMMANDS parameter in the following format:

NETWORKING_PREPARATION_COMMANDS=('<command_1>' '<command_2>' ...)

Replace <command_1> and <command_2> with commands that configure the network interface names or IP addresses. For example, you can add the ip link add br-ctlplane type bridge command to create the control plane bridge, or the ip link set eth0 up command to bring up the eth0 interface. You can add more commands to the parameter based on your network configuration.
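As an illustration, a configuration that recreates the control plane bridge, assigns it an address, and brings it up might look like the following. The bridge name, IP address, and commands are assumptions that must match your own network configuration:

NETWORKING_PREPARATION_COMMANDS=('ip link add br-ctlplane type bridge' 'ip addr add 192.168.24.1/24 dev br-ctlplane' 'ip link set br-ctlplane up')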
1.7. Creating a backup of the undercloud node
To create a backup of the undercloud node, use the openstack undercloud backup command. You can then use the backup to restore the undercloud node to its previous state in case the node becomes corrupted or inaccessible. The backup of the undercloud node includes the backup of the database that runs on the undercloud node.
If you are upgrading your Red Hat OpenStack Platform environment from 13 to 16.2, you must create a separate database backup after you perform the undercloud upgrade and before you perform the Leapp upgrade process on the undercloud nodes. For more information, see Section 1.5, “Creating a standalone database backup of the undercloud nodes”.
Prerequisites
- You have an NFS or SFTP server installed and configured on the backup node. For more information about creating a new NFS server, see Section 1.3, “Installing and configuring an NFS server on the backup node”.
- You have installed ReaR on the undercloud node. For more information, see Section 1.4, “Installing ReaR on the undercloud node”.
- If you use an OVS bridge for your network interfaces, you have configured the OVS interfaces. For more information, see Section 1.6, “Configuring Open vSwitch (OVS) interfaces for backup”.
Procedure
Log in to the undercloud as the stack user.

Retrieve the MySQL root password:

[stack@undercloud ~]$ PASSWORD=$(sudo /bin/hiera -c /etc/puppet/hiera.yaml mysql::server::root_password)
Create a database backup of the undercloud node:
[stack@undercloud ~]$ sudo podman exec mysql bash -c "mysqldump -uroot -p$PASSWORD --opt --all-databases" | sudo tee /root/undercloud-all-databases.sql
Source the undercloud credentials:
[stack@undercloud ~]$ source stackrc
If you have not done so before, create an inventory file and use the tripleo-ansible-inventory command to generate a static inventory file that contains hosts and variables for all the overcloud nodes:

(undercloud) [stack@undercloud ~]$ tripleo-ansible-inventory \
--ansible_ssh_user heat-admin \
--static-yaml-inventory /home/stack/tripleo-inventory.yaml
Create a backup of the undercloud node:
(undercloud) [stack@undercloud ~]$ openstack undercloud backup --inventory /home/stack/tripleo-inventory.yaml
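When the command completes, the backup image is written to the storage location that you configured in the bar-vars.yaml file. For example, with an NFS server that exports the hypothetical /ctl_plane_backups directory used earlier, you can list the contents from the backup node to confirm that the image was created; the exact directory layout depends on your ReaR configuration:

[root@backup ~]# ls -R /ctl_plane_backups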
1.8. Scheduling undercloud node backups with cron
You can schedule backups of the undercloud node with ReaR by using the Ansible backup-and-restore role. You can view the logs in the /var/log/rear-cron directory.
Prerequisites
- You have an NFS or SFTP server installed and configured on the backup node. For more information about creating a new NFS server, see Section 1.3, “Installing and configuring an NFS server on the backup node”.
- You have installed ReaR on the undercloud node. For more information, see Section 1.4, “Installing ReaR on the undercloud node”.
- You have sufficient available disk space at your backup location to store the backup.
Procedure
To schedule a backup of your undercloud node, run the following command. The default schedule is Sundays at midnight:
openstack undercloud backup --cron
Optional: Customize the scheduled backup according to your deployment:
To change the default backup schedule, pass a different cron schedule in the tripleo_backup_and_restore_cron parameter:

openstack undercloud backup --cron --extra-vars '{"tripleo_backup_and_restore_cron": "0 0 * * 0"}'
To define additional parameters that are added to the backup command when cron runs the scheduled backup, pass the tripleo_backup_and_restore_cron_extra parameter to the backup command, as shown in the following example:

openstack undercloud backup --cron --extra-vars '{"tripleo_backup_and_restore_cron_extra":"--extra-vars bar-vars.yaml --inventory /home/stack/tripleo-inventory.yaml"}'
To change the default user that executes the backup, pass the tripleo_backup_and_restore_cron_user parameter to the backup command, as shown in the following example:

openstack undercloud backup --cron --extra-vars '{"tripleo_backup_and_restore_cron_user": "root"}'
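To confirm that the job was scheduled, list the cron entries for the user that runs the backup, which is typically the stack user unless you changed it with tripleo_backup_and_restore_cron_user:

[stack@undercloud ~]$ crontab -l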
Chapter 2. Backing up the control plane nodes
To back up the control plane nodes, you configure the backup node, install the Relax-and-Recover tool on the control plane nodes, and create the backup image. You can create backups as a part of your regular environment maintenance.
In addition, you must back up the control plane nodes before performing updates or upgrades. You can use the backups to restore the control plane nodes to their previous state if an error occurs during an update or upgrade.
2.1. Supported backup formats and protocols
The undercloud and control plane backup and restore process uses the open-source tool Relax-and-Recover (ReaR) to create and restore bootable backup images. ReaR is written in Bash and supports multiple image formats and multiple transport protocols.
The following list shows the backup formats and protocols that Red Hat OpenStack Platform supports when you use ReaR to back up and restore the undercloud and control plane.
- Bootable media formats
  - ISO
- File transport protocols
  - SFTP
  - NFS
2.2. Installing and configuring an NFS server on the backup node
You can install and configure a new NFS server to store the backup file. To install and configure an NFS server on the backup node, create an inventory file, create an SSH key, and run the openstack undercloud backup command with the NFS server options.
- If you previously installed and configured an NFS or SFTP server, you do not need to complete this procedure. You enter the server information when you set up ReaR on the node that you want to back up.
- By default, the Relax-and-Recover (ReaR) IP address parameter for the NFS server is 192.168.24.1. You must add the tripleo_backup_and_restore_server parameter to set the IP address value that matches your environment.
Procedure
On the undercloud node, source the undercloud credentials:
[stack@undercloud ~]$ source stackrc
(undercloud) [stack@undercloud ~]$
On the undercloud node, create an inventory file for the backup node:
(undercloud) [stack@undercloud ~]$ cat <<'EOF' > ~/nfs-inventory.yaml
[BackupNode]
<backup_node> ansible_host=<ip_address> ansible_user=<user>
EOF
Replace <ip_address> and <user> with the values that apply to your environment.

Copy the public SSH key from the undercloud node to the backup node:
(undercloud) [stack@undercloud ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub <backup_node>
Replace <backup_node> with the hostname or IP address of the backup node.

Configure the NFS server on the backup node:
(undercloud) [stack@undercloud ~]$ openstack undercloud backup --setup-nfs --extra-vars /home/stack/bar-vars.yaml --inventory /home/stack/nfs-inventory.yaml
2.3. Installing ReaR on the control plane nodes
Before you create a backup of the overcloud control plane, install and configure Relax and Recover (ReaR) on each of the control plane nodes.
Due to a known issue, the ReaR backup of overcloud nodes continues even if a Controller node is down. Ensure that all your Controller nodes are running before you run the ReaR backup. A fix is planned for a later Red Hat OpenStack Platform (RHOSP) release. For more information, see BZ#2077335 - Back up of the overcloud ctlplane keeps going even if one controller is unreachable.
Prerequisites
- You have an NFS or SFTP server installed and configured on the backup node. For more information about creating a new NFS server, see Section 2.2, “Installing and configuring an NFS server on the backup node”.
Procedure
On the undercloud node, source the undercloud credentials:
[stack@undercloud ~]$ source stackrc
If you have not done so before, create an inventory file and use the tripleo-ansible-inventory command to generate a static inventory file that contains hosts and variables for all the overcloud nodes:

(undercloud) [stack@undercloud ~]$ tripleo-ansible-inventory \
--ansible_ssh_user heat-admin \
--static-yaml-inventory /home/stack/tripleo-inventory.yaml
In the bar-vars.yaml file, configure the backup storage location. Follow the appropriate steps for your NFS server or SFTP server.

If you use an NFS server, add the following parameters to the bar-vars.yaml file:

tripleo_backup_and_restore_server: <ip_address>
tripleo_backup_and_restore_shared_storage_folder: <backup_server_dir_path>
tripleo_backup_and_restore_output_url: "nfs://{{ tripleo_backup_and_restore_server }}{{ tripleo_backup_and_restore_shared_storage_folder }}"
tripleo_backup_and_restore_backup_url: "nfs://{{ tripleo_backup_and_restore_server }}{{ tripleo_backup_and_restore_shared_storage_folder }}"
Replace <ip_address> and <backup_server_dir_path> with the values that apply to your environment. The default value of the tripleo_backup_and_restore_server parameter is 192.168.24.1.

If you use an SFTP server, add the tripleo_backup_and_restore_output_url parameter and set the values of the URL and credentials of the SFTP server:

tripleo_backup_and_restore_output_url: sftp://<user>:<password>@<backup_node>/
tripleo_backup_and_restore_backup_url: iso:///backup/
Replace <user>, <password>, and <backup_node> with the backup node URL and credentials.
Install ReaR on the control plane nodes:
(undercloud) [stack@undercloud ~]$ openstack overcloud backup --setup-rear --extra-vars /home/stack/bar-vars.yaml --inventory /home/stack/tripleo-inventory.yaml
If your system uses the UEFI boot loader, perform the following steps on the control plane nodes:
Install the following tools:
$ sudo dnf install dosfstools efibootmgr
Enable UEFI backup in the ReaR configuration file located in /etc/rear/local.conf by replacing the USING_UEFI_BOOTLOADER parameter value 0 with the value 1.
2.4. Configuring Open vSwitch (OVS) interfaces for backup
If you use an Open vSwitch (OVS) bridge in your environment, you must manually configure the OVS interfaces before you create a backup of the undercloud or control plane nodes. The restoration process uses this information to restore the network interfaces.
Procedure
In the /etc/rear/local.conf file, add the NETWORKING_PREPARATION_COMMANDS parameter in the following format:

NETWORKING_PREPARATION_COMMANDS=('<command_1>' '<command_2>' ...)

Replace <command_1> and <command_2> with commands that configure the network interface names or IP addresses. For example, you can add the ip link add br-ctlplane type bridge command to create the control plane bridge, or the ip link set eth0 up command to bring up the eth0 interface. You can add more commands to the parameter based on your network configuration.
2.5. Creating a backup of the control plane nodes
To create a backup of the control plane nodes, use the openstack overcloud backup command. You can then use the backup to restore the control plane nodes to their previous state in case the nodes become corrupted or inaccessible. The backup of the control plane nodes includes the backup of the database that runs on the control plane nodes.
Prerequisites
- You have an NFS or SFTP server installed and configured on the backup node. For more information about creating a new NFS server, see Section 2.2, “Installing and configuring an NFS server on the backup node”.
- You have installed ReaR on the control plane nodes. For more information, see Section 2.3, “Installing ReaR on the control plane nodes”.
- If you use an OVS bridge for your network interfaces, you have configured the OVS interfaces. For more information, see Section 2.4, “Configuring Open vSwitch (OVS) interfaces for backup”.
Procedure
Locate the config-drive partition on each control plane node:

[stack@undercloud ~]$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda    253:0    0   55G  0 disk
├─vda1 253:1    0    1M  0 part  1
├─vda2 253:2    0  100M  0 part /boot/efi
└─vda3 253:3    0 54.9G  0 part /

1 - The config-drive partition is the 1M partition that is not mounted.
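Because the config-drive partition contains an ISO 9660 file system, you can also locate it with blkid instead of reading the lsblk output manually. This cross-check is an optional convenience, not part of the documented procedure:

[root@controller-x ~]# blkid -t TYPE=iso9660 -o device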
On each control plane node, back up the config-drive partition of each node as the root user:

[root@controller-x ~]# dd if=<config_drive_partition> of=/mnt/config-drive
<config_drive_partition>
with the name of theconfig-drive
partition that you located in step 1.On the undercloud node, source the undercloud credentials:
[stack@undercloud ~]$ source stackrc
If you have not done so before, use the tripleo-ansible-inventory command to generate a static inventory file that contains hosts and variables for all the overcloud nodes:

(undercloud) [stack@undercloud ~]$ tripleo-ansible-inventory \
--ansible_ssh_user heat-admin \
--static-yaml-inventory /home/stack/tripleo-inventory.yaml
Create a backup of the control plane nodes:
(undercloud) [stack@undercloud ~]$ openstack overcloud backup --inventory /home/stack/tripleo-inventory.yaml
The backup process runs sequentially on each control plane node without disrupting the service to your environment.
2.6. Scheduling control plane node backups with cron
You can schedule backups of the control plane nodes with ReaR by using the Ansible backup-and-restore role. You can view the logs in the /var/log/rear-cron directory.
Prerequisites
- You have an NFS or SFTP server installed and configured on the backup node. For more information about creating a new NFS server, see Section 2.2, “Installing and configuring an NFS server on the backup node”.
- You have installed ReaR on the control plane nodes. For more information, see Section 2.3, “Installing ReaR on the control plane nodes”.
- You have sufficient available disk space at your backup location to store the backup.
Procedure
To schedule a backup of your control plane nodes, run the following command. The default schedule is Sundays at midnight:
openstack overcloud backup --cron
Optional: Customize the scheduled backup according to your deployment:
To change the default backup schedule, pass a different cron schedule in the tripleo_backup_and_restore_cron parameter:

openstack overcloud backup --cron --extra-vars '{"tripleo_backup_and_restore_cron": "0 0 * * 0"}'
To define additional parameters that are added to the backup command when cron runs the scheduled backup, pass the tripleo_backup_and_restore_cron_extra parameter to the backup command, as shown in the following example:

openstack overcloud backup --cron --extra-vars '{"tripleo_backup_and_restore_cron_extra":"--extra-vars bar-vars.yaml --inventory /home/stack/tripleo-inventory.yaml"}'
To change the default user that executes the backup, pass the tripleo_backup_and_restore_cron_user parameter to the backup command, as shown in the following example:

openstack overcloud backup --cron --extra-vars '{"tripleo_backup_and_restore_cron_user": "root"}'
Chapter 3. Restoring the undercloud and control plane nodes
If your undercloud or control plane nodes become corrupted or if an error occurs during an update or upgrade, you can restore the undercloud or overcloud control plane nodes from a backup to their previous state. If the restore process fails to automatically restore the Galera cluster or nodes with colocated Ceph monitors, you can restore these components manually.
3.1. Preparing a control plane with colocated Ceph monitors for the restore process
Before you restore a control plane node with colocated Ceph monitors, prepare your environment by creating a script that mounts the Ceph monitor backup file to the node file system and another script that ReaR uses to locate the backup file.
If you cannot back up the /var/lib/ceph directory, you must contact the Red Hat Technical Support team to rebuild the ceph-mon index.
Prerequisites
- You have created a backup of the undercloud node. For more information, see Section 1.7, “Creating a backup of the undercloud node”.
- You have created a backup of the control plane nodes. For more information, see Section 2.5, “Creating a backup of the control plane nodes”.
- You have access to the backup node.
- If you use an OVS bridge for your network interfaces, you have access to the network configuration information that you set in the NETWORKING_PREPARATION_COMMANDS parameter. For more information, see Section 1.6, “Configuring Open vSwitch (OVS) interfaces for backup”.
Procedure
On each node that you want to restore, create the script /usr/share/rear/setup/default/011_backup_ceph.sh and add the following content:

mount -t <file_type> <device_disk> /mnt/local
cd /mnt/local
[ -d "var/lib/ceph" ] && tar cvfz /tmp/ceph.tar.gz var/lib/ceph --xattrs --xattrs-include='.' --acls
cd /
umount <device_disk>
Replace <file_type> and <device_disk> with the type and location of the backup file. Normally, the file type is xfs and the location is /dev/vda2.

On the same node, create the script /usr/share/rear/wrapup/default/501_restore_ceph.sh and add the following content:

if [ -f "/tmp/ceph.tar.gz" ]; then
  rm -rf /mnt/local/var/lib/ceph/*
  tar xvC /mnt/local -f /tmp/ceph.tar.gz var/lib/ceph --xattrs --xattrs-include='.'
fi
3.2. Restoring the undercloud node
You can restore the undercloud node to its previous state by using the backup ISO image that you created with ReaR. You can find the backup ISO images on the backup node. Burn the bootable ISO image to a DVD or download it to the undercloud node through Integrated Lights-Out (iLO) remote access.
Prerequisites
- You have created a backup of the undercloud node. For more information, see Section 1.7, “Creating a backup of the undercloud node”.
- You have access to the backup node.
- If you use an OVS bridge for your network interfaces, you have access to the network configuration information that you set in the NETWORKING_PREPARATION_COMMANDS parameter. For more information, see Section 1.6, “Configuring Open vSwitch (OVS) interfaces for backup”.
Procedure
- Power off the undercloud node. Ensure that the undercloud node is powered off completely before you proceed.
- Boot the undercloud node with the backup ISO image.
When the Relax-and-Recover boot menu displays, select Recover <undercloud_node>. Replace <undercloud_node> with the name of your undercloud node.

Note: If your system uses UEFI, select the Relax-and-Recover (no Secure Boot) option.

Log in as the root user and restore the node. The following message displays:
Welcome to Relax-and-Recover. Run "rear recover" to restore your system!
RESCUE <undercloud_node>:~ # rear recover
When the undercloud node restoration process completes, the console displays the following message:
Finished recovering your system
Exiting rear recover
Running exit tasks
Power off the node:
RESCUE <undercloud_node>:~ # poweroff
On boot up, the node resumes its previous state.
3.3. Restoring the control plane nodes
If an error occurs during an update or upgrade, you can restore the control plane nodes to their previous state by using the backup ISO image that you created with ReaR.
To restore the control plane, you must restore all control plane nodes to ensure state consistency.
You can find the backup ISO images on the backup node. Burn the bootable ISO image to a DVD or download it to the undercloud node through Integrated Lights-Out (iLO) remote access.
Red Hat supports backups of Red Hat OpenStack Platform with native SDNs, such as Open vSwitch (OVS) and the default Open Virtual Network (OVN). For information about third-party SDNs, refer to the third-party SDN documentation.
Prerequisites
- You have created a backup of the control plane nodes. For more information, see Section 2.5, “Creating a backup of the control plane nodes”.
- You have access to the backup node.
- If you use an OVS bridge for your network interfaces, you have access to the network configuration information that you set in the NETWORKING_PREPARATION_COMMANDS parameter. For more information, see Section 2.4, “Configuring Open vSwitch (OVS) interfaces for backup”.
Procedure
- Power off each control plane node. Ensure that the control plane nodes are powered off completely before you proceed.
- Boot each control plane node with the corresponding backup ISO image.
When the Relax-and-Recover boot menu displays, on each control plane node, select Recover <control_plane_node>. Replace <control_plane_node> with the name of the corresponding control plane node.

Note: If your system uses UEFI, select the Relax-and-Recover (no Secure Boot) option.

On each control plane node, log in as the root user and restore the node. The following message displays:
Welcome to Relax-and-Recover. Run "rear recover" to restore your system!
RESCUE <control_plane_node>:~ # rear recover
When the control plane node restoration process completes, the console displays the following message:
Finished recovering your system
Exiting rear recover
Running exit tasks
When the command line console is available, restore the config-drive partition of each control plane node:

# once completed, restore the config-drive partition (which is ISO9660)
RESCUE <control_plane_node>:~ $ dd if=/mnt/local/mnt/config-drive of=<config_drive_partition>
Power off the node:
RESCUE <control_plane_node>:~ # poweroff
- Set the boot sequence to the normal boot device. On boot up, the node resumes its previous state.
To ensure that the services are running correctly, check the status of pacemaker. Log in to a Controller node as the root user and enter the following command:

# pcs status
- To view the status of the overcloud, use the OpenStack Integration Test Suite (tempest). For more information, see Validating your OpenStack cloud with the Integration Test Suite (tempest).
Troubleshooting
- Clear resource alarms that are displayed by pcs status by running the following command:

# pcs resource cleanup
- Clear STONITH fencing action errors that are displayed by pcs status by running the following commands:

# pcs resource cleanup
# pcs stonith history cleanup
3.4. Restoring the Galera cluster manually
If the Galera cluster does not restore as part of the restoration procedure, you must restore Galera manually.
In this procedure, you must perform some steps on one Controller node. Ensure that you perform these steps on the same Controller node as you go through the procedure.
Procedure
On Controller-0, retrieve the Galera cluster virtual IP:

$ sudo hiera -c /etc/puppet/hiera.yaml mysql_vip
Disable the database connections through the virtual IP on all Controller nodes:
$ sudo iptables -I INPUT -p tcp --destination-port 3306 -d $MYSQL_VIP -j DROP
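Optionally, verify that the rule is active on each Controller node before you continue; iptables -C returns success only if the rule exists. This check is a convenience, not a documented step:

$ sudo iptables -C INPUT -p tcp --destination-port 3306 -d $MYSQL_VIP -j DROP && echo "rule present"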
On Controller-0, retrieve the MySQL root password:

$ sudo hiera -c /etc/puppet/hiera.yaml mysql::server::root_password
On Controller-0, set the Galera resource to unmanaged mode:

$ sudo pcs resource unmanage galera-bundle
Stop the MySQL containers on all Controller nodes:
$ sudo podman container stop $(sudo podman container ls --all --format "{{.Names}}" --filter=name=galera-bundle)
Move the current /var/lib/mysql directory on all Controller nodes:
$ sudo mv /var/lib/mysql /var/lib/mysql-save
Create the new directory /var/lib/mysql on all Controller nodes:

$ sudo mkdir /var/lib/mysql
$ sudo chown 42434:42434 /var/lib/mysql
$ sudo chcon -t container_file_t /var/lib/mysql
$ sudo chmod 0755 /var/lib/mysql
$ sudo chcon -r object_r /var/lib/mysql
$ sudo chcon -u system_u /var/lib/mysql
Start the MySQL containers on all Controller nodes:
$ sudo podman container start $(sudo podman container ls --all --format "{{ .Names }}" --filter=name=galera-bundle)
Create the MySQL database on all Controller nodes:
$ sudo podman exec -i $(sudo podman container ls --all --format "{{ .Names }}" \
    --filter=name=galera-bundle) bash -c "mysql_install_db --datadir=/var/lib/mysql --user=mysql --log_error=/var/log/mysql/mysql_init.log"
Start the database on all Controller nodes:
$ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
    --filter=name=galera-bundle) bash -c "mysqld_safe --skip-networking --wsrep-on=OFF --log-error=/var/log/mysql/mysql_safe.log" &
Move the .my.cnf Galera configuration file on all Controller nodes:

$ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
    --filter=name=galera-bundle) bash -c "mv /root/.my.cnf /root/.my.cnf.bck"
Reset the Galera root password on all Controller nodes:
$ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
    --filter=name=galera-bundle) bash -c "mysql -uroot -e'use mysql;update user set password=PASSWORD(\"$ROOTPASSWORD\")where User=\"root\";flush privileges;'"
Restore the .my.cnf Galera configuration file inside the Galera container on all Controller nodes:

$ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
    --filter=name=galera-bundle) bash -c "mv /root/.my.cnf.bck /root/.my.cnf"
On Controller-0, copy the backup database files to /var/lib/mysql:

$ sudo cp $BACKUP_FILE /var/lib/mysql
$ sudo cp $BACKUP_GRANT_FILE /var/lib/mysql

Note: The path to these files is /home/heat-admin/.
On Controller-0, restore the MySQL database:

$ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
    --filter=name=galera-bundle) bash -c "mysql -u root -p$ROOT_PASSWORD < \"/var/lib/mysql/$BACKUP_FILE\" "
$ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
    --filter=name=galera-bundle) bash -c "mysql -u root -p$ROOT_PASSWORD < \"/var/lib/mysql/$BACKUP_GRANT_FILE\" "
Shut down the databases on all Controller nodes:
$ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
    --filter=name=galera-bundle) bash -c "mysqladmin shutdown"
On Controller-0, start the bootstrap node:

$ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" --filter=name=galera-bundle) \
    /usr/bin/mysqld_safe --pid-file=/var/run/mysql/mysqld.pid --socket=/var/lib/mysql/mysql.sock --datadir=/var/lib/mysql \
    --log-error=/var/log/mysql/mysql_cluster.log --user=mysql --open-files-limit=16384 \
    --wsrep-cluster-address=gcomm:// &
Verification: On Controller-0, check the status of the cluster:

$ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
    --filter=name=galera-bundle) bash -c "clustercheck"

Ensure that the following message is displayed: "Galera cluster node is synced". Otherwise, you must recreate the node.
On Controller-0, retrieve the cluster address from the configuration:

$ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
    --filter=name=galera-bundle) bash -c "grep wsrep_cluster_address /etc/my.cnf.d/galera.cnf" | awk '{print $3}'
On each of the remaining Controller nodes, start the database and validate the cluster:
Start the database:
$ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
    --filter=name=galera-bundle) /usr/bin/mysqld_safe --pid-file=/var/run/mysql/mysqld.pid --socket=/var/lib/mysql/mysql.sock \
    --datadir=/var/lib/mysql --log-error=/var/log/mysql/mysql_cluster.log --user=mysql --open-files-limit=16384 \
    --wsrep-cluster-address=$CLUSTER_ADDRESS &
Check the status of the MySQL cluster:

$ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" \
    --filter=name=galera-bundle) bash -c "clustercheck"

Ensure that the following message is displayed: "Galera cluster node is synced". Otherwise, you must recreate the node.
Stop the MySQL container on all Controller nodes:
$ sudo podman exec $(sudo podman container ls --all --format "{{ .Names }}" --filter=name=galera-bundle) \
    /usr/bin/mysqladmin -u root shutdown
On all Controller nodes, remove the following firewall rule to allow database connections through the virtual IP address:
$ sudo iptables -D INPUT -p tcp --destination-port 3306 -d $MYSQL_VIP -j DROP
Restart the MySQL container on all Controller nodes:
$ sudo podman container restart $(sudo podman container ls --all --format "{{ .Names }}" --filter=name=galera-bundle)
Restart the clustercheck container on all Controller nodes:

$ sudo podman container restart $(sudo podman container ls --all --format "{{ .Names }}" --filter=name=clustercheck)
On Controller-0, set the Galera resource to managed mode:

$ sudo pcs resource manage galera-bundle
Verification
To ensure that services are running correctly, check the status of pacemaker:
$ sudo pcs status
- To view the status of the overcloud, use the OpenStack Integration Test Suite (tempest). For more information, see Validating your OpenStack cloud with the Integration Test Suite (tempest).
If you suspect an issue with a particular node, check the state of the cluster with clustercheck:

$ sudo podman exec clustercheck /usr/bin/clustercheck
3.5. Restoring the undercloud node database manually
If the undercloud database does not restore as part of the undercloud restore process, you can restore the database manually. You can only restore the database if you previously created a standalone database backup.
Prerequisites
- You have created a standalone backup of the undercloud database. For more information, see Section 1.5, “Creating a standalone database backup of the undercloud nodes”.
Procedure
Log in to the director undercloud node as the root user.

Stop all tripleo services:
[root@director ~]# systemctl stop tripleo_*
Ensure that no containers are running on the server by entering the following command:
[root@director ~]# podman ps
If any containers are running, enter the following command to stop the containers:
[root@director ~]# podman stop <container_name>
Create a backup of the current /var/lib/mysql directory and then delete the directory:

[root@director ~]# cp -a /var/lib/mysql /var/lib/mysql_bck
[root@director ~]# rm -rf /var/lib/mysql
Recreate the database directory and set the SELinux attributes for the new directory:
[root@director ~]# mkdir /var/lib/mysql
[root@director ~]# chown 42434:42434 /var/lib/mysql
[root@director ~]# chmod 0755 /var/lib/mysql
[root@director ~]# chcon -t container_file_t /var/lib/mysql
[root@director ~]# chcon -r object_r /var/lib/mysql
[root@director ~]# chcon -u system_u /var/lib/mysql
Create a local tag for the mariadb image. Replace <image_id> and <undercloud.ctlplane.example.com> with the values applicable in your environment:

[root@director ~]# podman images | grep mariadb
<undercloud.ctlplane.example.com>:8787/rh-osbs/rhosp16-openstack-mariadb   16.2_20210322.1   <image_id>   3 weeks ago   718 MB

[root@director ~]# podman tag <image_id> mariadb

[root@director ~]# podman images | grep maria
localhost/mariadb                                                           latest            <image_id>   3 weeks ago   718 MB
<undercloud.ctlplane.example.com>:8787/rh-osbs/rhosp16-openstack-mariadb   16.2_20210322.1   <image_id>   3 weeks ago   718 MB
Initialize the /var/lib/mysql directory with the container:

[root@director ~]# podman run --net=host -v /var/lib/mysql:/var/lib/mysql localhost/mariadb mysql_install_db --datadir=/var/lib/mysql --user=mysql
Copy the database backup file that you want to import to the database:
[root@director ~]# cp /root/undercloud-all-databases.sql /var/lib/mysql
Start the database service to import the data:
[root@director ~]# podman run --net=host -dt -v /var/lib/mysql:/var/lib/mysql localhost/mariadb /usr/libexec/mysqld
Import the data and configure the max_allowed_packet parameter. Log in to the container and configure it:

[root@director ~]# podman exec -it <container_id> /bin/bash
()[mysql@5a4e429c6f40 /]$ mysql -u root -e "set global max_allowed_packet = 1073741824;"
()[mysql@5a4e429c6f40 /]$ mysql -u root < /var/lib/mysql/undercloud-all-databases.sql
()[mysql@5a4e429c6f40 /]$ mysql -u root -e 'flush privileges'
()[mysql@5a4e429c6f40 /]$ exit
exit
Stop the container:
[root@director ~]# podman stop <container_id>
Check that no containers are running:
[root@director ~]# podman ps
CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
[root@director ~]#
Restart all tripleo services:
[root@director ~]# systemctl start multi-user.target
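To confirm that the tripleo services started again, you can list their units; this verification is a suggestion rather than part of the documented procedure:

[root@director ~]# systemctl list-units 'tripleo_*'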