4.3. Backing up the control plane
To back up the control plane, you must first stop the pacemaker cluster and all containers that run on the control plane nodes. To ensure state consistency, do not operate the stack during the backup. After you complete the backup procedure, start the pacemaker cluster and the containers.
As a precaution, back up the database so that you can restore it if an error occurs after you restart the pacemaker cluster and the containers.
Back up the control plane nodes simultaneously.
Prerequisites
- You have created and exported the backup directory. For more information, see Creating and exporting the backup directory.
- You have installed and configured ReaR on the undercloud node. For more information, see Installing and configuring ReaR.
Procedure
Locate the database password:
[heat-admin@overcloud-controller-x ~]# PASSWORD=$(/bin/hiera -c /etc/puppet/hiera.yaml mysql::server::root_password)
Back up the databases:
[heat-admin@overcloud-controller-x ~]# podman exec galera-bundle-podman-X bash -c "mysql -uroot -p$PASSWORD -s -N -e \"SELECT CONCAT('\\\"SHOW GRANTS FOR ''',user,'''@''',host,''';\\\"') FROM mysql.user where (length(user) > 0 and user NOT LIKE 'root')\" | xargs -n1 mysql -uroot -p$PASSWORD -s -N -e | sed 's/$/;/' " > openstack-backup-mysql-grants.sql
[heat-admin@overcloud-controller-x ~]# podman exec galera-bundle-podman-X bash -c "mysql -uroot -p$PASSWORD -s -N -e \"select distinct table_schema from information_schema.tables where engine='innodb' and table_schema != 'mysql';\" | xargs mysqldump -uroot -p$PASSWORD --single-transaction --databases" > openstack-backup-mysql.sql
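Before you stop the cluster, it is worth confirming that both dump files were actually written. The following helper is a sketch, not part of the director or ReaR tooling; the file names match the dumps created in the previous step:

```shell
# Hypothetical helper (not part of the product tooling): report whether
# each expected dump file exists and is non-empty.
check_dumps() {
    rc=0
    for f in "$@"; do
        if [ -s "$f" ]; then
            echo "OK: $f"
        else
            echo "MISSING OR EMPTY: $f"
            rc=1
        fi
    done
    return $rc
}

# Example on the controller where you ran the dumps:
# check_dumps openstack-backup-mysql-grants.sql openstack-backup-mysql.sql
```

If the helper reports a missing or empty file, rerun the corresponding dump command before you proceed.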
On one of the control plane nodes, stop the pacemaker cluster:
Important: Do not operate the stack. When you stop the pacemaker cluster and the containers, control plane services to Compute nodes are temporarily interrupted. Network connectivity, Ceph, and the NFS data plane service are also disrupted. You cannot create instances, migrate instances, authenticate requests, or monitor the health of the cluster until the pacemaker cluster and the containers return to service after the final step of this procedure.
[heat-admin@overcloud-controller-x ~]# pcs cluster stop --all
On each control plane node, stop the containers.
Stop the tripleo containers:
[heat-admin@overcloud-controller-x ~]# systemctl stop tripleo_*
Stop the ceph-mon@controller.service container:
[heat-admin@overcloud-controller-x ~]# sudo systemctl stop ceph-mon@$(hostname -s)
Stop the ceph-mgr@controller.service container:
[heat-admin@overcloud-controller-x ~]# sudo systemctl stop ceph-mgr@$(hostname -s)
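To confirm that the tripleo services are down before you start the backup, you can count the units that are still running. This is a sketch that parses `systemctl list-units` output; it assumes the unit names begin with `tripleo_`, as shown in the stop command above:

```shell
# Hypothetical check: count the lines of "systemctl list-units" output
# that still name a tripleo_* unit. Zero means all are stopped.
count_running_tripleo() {
    grep -c 'tripleo_[^ ]*\.service' || true
}

# Example on each control plane node:
# systemctl list-units 'tripleo_*' --state=running --no-legend | count_running_tripleo
```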
To back up the control plane, run the following command as root in the command line interface of each control plane node:
[heat-admin@overcloud-controller-x ~]# rear -d -v mkbackup
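Because the nodes are backed up simultaneously, you can drive the `rear` runs from a host with SSH access to all controllers. The helper below is a generic sketch (the controller names and the `backup_node` function are placeholders for your environment, not part of ReaR):

```shell
# Hypothetical helper: run a command once per node name, in parallel,
# and return non-zero if any invocation fails.
run_parallel() {
    cmd=$1; shift
    pids=""
    for node in "$@"; do
        $cmd "$node" &
        pids="$pids $!"
    done
    rc=0
    for pid in $pids; do
        wait "$pid" || rc=1
    done
    return $rc
}

# Example from a host with SSH access to the controllers:
# backup_node() { ssh "heat-admin@$1" 'sudo rear -d -v mkbackup'; }
# run_parallel backup_node overcloud-controller-0 overcloud-controller-1 overcloud-controller-2
```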
You can find the backup ISO file that you create with ReaR on the backup node in the /ctl_plane_backups directory.
Note: When you execute the backup command, you might see warning messages regarding the tar command and sockets that are ignored during the tar process, similar to the following:
WARNING: tar ended with return code 1 and below output:
---snip---
tar: /var/spool/postfix/public/qmgr: socket ignored
...
...
This message indicates that files were modified during the archiving process, and the backup might therefore be inconsistent. Relax-and-Recover continues to operate; however, it is important that you verify the backup to ensure that you can use it to recover your system.
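As a quick verification on the backup node, you can list the generated images and their sizes. This is a sketch; it assumes the /ctl_plane_backups path described above and a GNU userland:

```shell
# Hypothetical check on the backup node: print each ISO image under the
# given directory together with its size in bytes (GNU "du -b").
list_backup_isos() {
    find "$1" -type f -name '*.iso' -exec du -b {} +
}

# Example:
# list_backup_isos /ctl_plane_backups
```

An image that is missing or much smaller than its peers is a sign that the corresponding `rear mkbackup` run did not complete.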
After the backup procedure generates an ISO image for each of the control plane nodes, restart the pacemaker cluster and the containers:
On one of the control plane nodes, enter the following command:
[heat-admin@overcloud-controller-x ~]# pcs cluster start --all
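The cluster takes some time to regain quorum after `pcs cluster start --all`. If you script the restart, a small polling helper avoids racing ahead; this is a generic sketch, and the `pcs status` grep pattern in the example is an assumption about its output format:

```shell
# Hypothetical polling helper: retry a command once per second until it
# succeeds or the given number of attempts is exhausted.
wait_for() {
    tries=$1; shift
    i=0
    while [ "$i" -lt "$tries" ]; do
        if "$@"; then
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    return 1
}

# Example: wait up to 5 minutes for pacemaker to report quorum:
# wait_for 300 sh -c "pcs status | grep -q 'partition with quorum'"
```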
On each control plane node, start the containers.
Start the ceph-mon@controller.service container:
[heat-admin@overcloud-controller-x ~]# sudo systemctl start ceph-mon@$(hostname -s)
Start the ceph-mgr@controller.service container:
[heat-admin@overcloud-controller-x ~]# sudo systemctl start ceph-mgr@$(hostname -s)