Chapter 4. Executing the backup procedure
Before you perform a fast forward upgrade, back up the undercloud and the overcloud control plane nodes so that you can restore them to their previous state if an error occurs.
Before you back up the undercloud and overcloud, ensure that you do not perform any operations on the overcloud from the undercloud.
4.1. Performing prerequisite tasks before backing up the undercloud
Do not perform an undercloud backup while you deploy the undercloud or while you make changes to an existing undercloud. To prevent data corruption, confirm that there are no stack failures or ongoing tasks, and that all OpenStack services except mariadb are stopped, before you back up the undercloud node.
			
Procedure
- List failures for all available stacks:

  ```
  (undercloud) [stack@undercloud-0 ~]$ source stackrc && for i in `openstack stack list -c 'Stack Name' -f value`; do openstack stack failures list $i; done
  ```

- Verify that there are no ongoing tasks in the cloud:

  ```
  (undercloud) [stack@undercloud-0 ~]$ openstack stack list --nested | grep -v "_COMPLETE"
  ```

  If the command returns no results, there are no ongoing tasks.

- Stop all OpenStack services in the cloud:

  ```
  # systemctl stop tripleo_*
  ```

- Start the `tripleo_mysql` service:

  ```
  # systemctl start tripleo_mysql
  ```

- Verify that the `tripleo_mysql` service is running:

  ```
  # systemctl status tripleo_mysql
  ```
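The checks and service operations above can be combined into a single pre-backup gate script. This is a hedged sketch, not part of the product tooling: the `ongoing_tasks` helper and the `RUN_PRECHECKS` guard variable are illustrative names, and the script assumes it runs on the undercloud as the stack user with `stackrc` in the home directory.

```shell
#!/bin/bash
# Sketch of a pre-backup gate for the undercloud, combining the checks above.
# Assumptions: run as the stack user on the undercloud; stackrc is in ~/.
set -euo pipefail

# Mirror the documented check: completed stacks are filtered out, so any
# remaining output line indicates an ongoing task.
ongoing_tasks() {
    grep -v "_COMPLETE" || true
}

main() {
    source ~/stackrc

    # Report failures for every available stack.
    for stack in $(openstack stack list -c 'Stack Name' -f value); do
        openstack stack failures list "$stack"
    done

    # Abort if anything is still in progress.
    if [ -n "$(openstack stack list --nested | ongoing_tasks)" ]; then
        echo "Ongoing stack operations detected; not safe to back up." >&2
        exit 1
    fi

    # Stop all OpenStack services, then bring mariadb back up for the dump.
    # The pattern is quoted so systemctl, not the shell, expands it.
    sudo systemctl stop 'tripleo_*'
    sudo systemctl start tripleo_mysql
    sudo systemctl is-active --quiet tripleo_mysql
}

# Guarded so the helper can be exercised without touching a live cloud.
if [ "${RUN_PRECHECKS:-0}" = "1" ]; then
    main "$@"
fi
```

The guard variable keeps the destructive steps from running by accident; set `RUN_PRECHECKS=1` only on the undercloud node itself.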
4.2. Backing up the undercloud
To back up the undercloud node, you must log in as the root user on the undercloud node. As a precaution, you must back up the database to ensure that you can restore it.
Prerequisites
- You have created and exported the backup directory. For more information, see Creating and exporting the backup directory.
- You have performed prerequisite tasks before backing up the undercloud. For more information, see Performing prerequisite tasks before backing up the undercloud.
- You have installed and configured ReaR on the undercloud node. For more information, see Install and Configure ReaR.
Procedure
- Locate the database password:

  ```
  [root@undercloud stack]# PASSWORD=$(/bin/hiera -c /etc/puppet/hiera.yaml mysql::server::root_password)
  ```

- Back up the databases:

  ```
  # podman exec mysql bash -c "mysql -uroot -p$PASSWORD -s -N -e \"SELECT CONCAT('\\\"SHOW GRANTS FOR ''',user,'''@''',host,''';\\\"') FROM mysql.user where (length(user) > 0 and user NOT LIKE 'root')\" | xargs -n1 mysql -uroot -p$PASSWORD -s -N -e | sed 's/$/;/' " > openstack-backup-mysql-grants.sql
  # podman exec mysql bash -c "mysql -uroot -p$PASSWORD -s -N -e \"select distinct table_schema from information_schema.tables where engine='innodb' and table_schema != 'mysql';\" | xargs mysqldump -uroot -p$PASSWORD --single-transaction --databases" > openstack-backup-mysql.sql
  ```

- Stop the `mariadb` database service:

  ```
  [root@undercloud stack]# systemctl stop tripleo_mysql
  ```

- Create the backup:

  ```
  [root@undercloud stack]# rear -d -v mkbackup
  ```

  You can find the backup ISO file that you create with ReaR on the backup node in the `/ctl_plane_backups` directory.
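Before relying on the ReaR image, it can help to sanity-check the SQL dump produced above. The following is a sketch under the assumption that `mysqldump` appended its usual "Dump completed" trailer comment; the `dump_looks_complete` helper and the `RUN_CHECK` guard variable are illustrative names, not product tooling.

```shell
#!/bin/bash
# Sketch: sanity-check the database dump before creating the ReaR backup.
# Assumption: openstack-backup-mysql.sql was produced by the mysqldump step
# above, with comments enabled so the completion trailer is present.
set -euo pipefail

dump_looks_complete() {
    local dump_file="$1"
    # The dump must be non-empty and end with mysqldump's completion trailer.
    [ -s "$dump_file" ] && tail -n 5 "$dump_file" | grep -q "Dump completed"
}

# Guarded so the check only runs where the dump file actually exists.
if [ "${RUN_CHECK:-0}" = "1" ]; then
    if dump_looks_complete openstack-backup-mysql.sql; then
        echo "Database dump looks complete."
    else
        echo "Database dump is missing or truncated; re-run the dump." >&2
        exit 1
    fi
fi
```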
4.3. Backing up the control plane
To back up the control plane, you must first stop the pacemaker cluster and all containers that run on the control plane nodes. To ensure state consistency, do not perform any operations on the stack during the backup. After you complete the backup procedure, restart the pacemaker cluster and the containers.
As a precaution, you must back up the database to ensure that you can restore the database after you restart the pacemaker cluster and containers.
Back up the control plane nodes simultaneously.
Prerequisites
- You have created and exported the backup directory. For more information, see Creating and exporting the backup directory.
- You have installed and configured ReaR on the undercloud node. For more information, see Install and Configure ReaR.
Procedure
- Locate the database password:

  ```
  [heat-admin@overcloud-controller-x ~]# PASSWORD=$(/bin/hiera -c /etc/puppet/hiera.yaml mysql::server::root_password)
  ```

- Back up the databases:

  ```
  [heat-admin@overcloud-controller-x ~]# podman exec galera-bundle-podman-X bash -c "mysql -uroot -p$PASSWORD -s -N -e \"SELECT CONCAT('\\\"SHOW GRANTS FOR ''',user,'''@''',host,''';\\\"') FROM mysql.user where (length(user) > 0 and user NOT LIKE 'root')\" | xargs -n1 mysql -uroot -p$PASSWORD -s -N -e | sed 's/$/;/' " > openstack-backup-mysql-grants.sql
  [heat-admin@overcloud-controller-x ~]# podman exec galera-bundle-podman-X bash -c "mysql -uroot -p$PASSWORD -s -N -e \"select distinct table_schema from information_schema.tables where engine='innodb' and table_schema != 'mysql';\" | xargs mysqldump -uroot -p$PASSWORD --single-transaction --databases" > openstack-backup-mysql.sql
  ```
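In the commands above, `X` is the Galera bundle instance number on the node. A small lookup can resolve the actual container name instead of hard-coding it. This is a sketch: the `galera_container` helper and the `RUN_LOOKUP` guard variable are assumed names, and it relies on the pacemaker-managed Galera container carrying the `galera-bundle-podman-` prefix used above.

```shell
#!/bin/bash
# Sketch: resolve the Galera container name instead of hard-coding the X in
# galera-bundle-podman-X. Assumption: the pacemaker-managed Galera container
# is named with the galera-bundle-podman- prefix, as in the commands above.
set -euo pipefail

galera_container() {
    # Reads container names (one per line) on stdin and prints the first
    # Galera bundle match, e.g. fed from: podman ps --format '{{.Names}}'
    grep -m1 '^galera-bundle-podman-' || return 1
}

# Guarded so the lookup only runs on a node where podman is available.
if [ "${RUN_LOOKUP:-0}" = "1" ]; then
    name=$(sudo podman ps --format '{{.Names}}' | galera_container)
    echo "Using Galera container: $name"
fi
```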
- On one of the control plane nodes, stop the pacemaker cluster:

  Important: Do not operate the stack. Stopping the pacemaker cluster and the containers results in a temporary interruption of control plane services to Compute nodes, and disrupts network connectivity, Ceph, and the NFS data plane service. You cannot create instances, migrate instances, authenticate requests, or monitor the health of the cluster until the pacemaker cluster and the containers return to service after the final step of this procedure.

  ```
  [heat-admin@overcloud-controller-x ~]# pcs cluster stop --all
  ```

- On each control plane node, stop the containers:

  - Stop the containers:

    ```
    [heat-admin@overcloud-controller-x ~]# systemctl stop tripleo_*
    ```

  - Stop the `ceph-mon@controller.service` container:

    ```
    [heat-admin@overcloud-controller-x ~]# sudo systemctl stop ceph-mon@$(hostname -s)
    ```

  - Stop the `ceph-mgr@controller.service` container:

    ```
    [heat-admin@overcloud-controller-x ~]# sudo systemctl stop ceph-mgr@$(hostname -s)
    ```
- To back up the control plane, run the following command as `root` in the command line interface of each control plane node:

  ```
  [heat-admin@overcloud-controller-x ~]# rear -d -v mkbackup
  ```

  You can find the backup ISO file that you create with ReaR on the backup node in the `/ctl_plane_backups` directory.

  Note: When you execute the backup command, you might see warning messages about sockets that the `tar` process skips during the backup. You can safely ignore these warnings.
- When the backup procedure generates ISO images for each of the control plane nodes, restart the pacemaker cluster and the containers:

  - On one of the control plane nodes, enter the following command:

    ```
    [heat-admin@overcloud-controller-x ~]# pcs cluster start --all
    ```

  - On each control plane node, start the `ceph-mon@controller.service` container:

    ```
    [heat-admin@overcloud-controller-x ~]# systemctl start ceph-mon@$(hostname -s)
    ```

  - On each control plane node, start the `ceph-mgr@controller.service` container:

    ```
    [heat-admin@overcloud-controller-x ~]# systemctl start ceph-mgr@$(hostname -s)
    ```