Chapter 4. Executing the backup procedure
Before you perform a fast forward upgrade, back up the undercloud and the overcloud control plane nodes so that you can restore them to their previous state if an error occurs.
Before you back up the undercloud and overcloud, ensure that you do not perform any operations on the overcloud from the undercloud.
4.1. Performing prerequisite tasks before backing up the undercloud
Do not perform an undercloud backup when you deploy the undercloud or when you make changes to an existing undercloud.
To prevent data corruption, confirm that there are no stack failures or ongoing tasks, and that all OpenStack services except mariadb are stopped before you back up the undercloud node.
Procedure
Confirm that there are no failures on the stack. Replace <STACKNAME> with the name of the stack. Run the command for every stack that is deployed and available:

   (undercloud) [stack@undercloud-0 ~]$ openstack stack failures list <STACKNAME>

Verify that there are no ongoing tasks on the undercloud:

   (undercloud) [stack@undercloud-0 ~]$ openstack stack list --nested | grep -v "_COMPLETE"

   If the command returns no results, there are no ongoing tasks.

Stop all OpenStack services on the undercloud:

   # systemctl stop openstack-*
   # systemctl stop neutron-*
   # systemctl stop ironic*
   # systemctl stop haproxy
   # systemctl stop httpd

Verify that mariadb is running:

   # sudo systemctl status mariadb
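The ongoing-task check above relies on a simple filter: any stack whose status does not end in _COMPLETE is still in progress or has failed. The following is a minimal sketch of that filtering logic, using canned sample output in place of a live `openstack stack list --nested` run (the stack names and statuses are illustrative, not from a real deployment):

```shell
# Illustrative only: sample_list stands in for real
# "openstack stack list --nested" output from a live undercloud.
sample_list='| overcloud            | CREATE_COMPLETE    |
| overcloud-Compute    | UPDATE_IN_PROGRESS |
| overcloud-Controller | CREATE_COMPLETE    |'

# grep -v "_COMPLETE" drops every finished stack; anything that remains
# is an ongoing or failed operation, so the backup must wait.
printf '%s\n' "$sample_list" | grep -v "_COMPLETE"
```

Here the filter prints only the UPDATE_IN_PROGRESS row; on a quiescent undercloud it prints nothing, which is the condition this procedure requires before you proceed.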
4.2. Backing up the undercloud
To back up the undercloud node, you must log in as the root user on the undercloud node. As a precaution, you must back up the database to ensure that you can restore it.
Prerequisites
- You have created and exported the backup directory. For more information, see Creating and exporting the backup directory.
- You have performed prerequisite tasks before backing up the undercloud. For more information, see Performing prerequisite tasks before backing up the undercloud.
- You have installed and configured ReaR on each control plane node. For more information, see Installing and configuring Relax and Recover (ReaR).
Procedure
Locate the database password:

   [root@undercloud stack]# PASSWORD=$(sudo /bin/hiera -c /etc/puppet/hiera.yaml mysql::server::root_password)

Back up the databases:

   [root@undercloud stack]# mysql -uroot -p$PASSWORD -s -N -e "select distinct table_schema from information_schema.tables where engine='innodb' and table_schema != 'mysql';" | xargs mysqldump -uroot -p$PASSWORD --single-transaction --databases > openstack-backup-mysql.sql
   [root@undercloud stack]# mysql -uroot -p$PASSWORD -s -N -e "SELECT CONCAT('\"SHOW GRANTS FOR ''',user,'''@''',host,''';\"') FROM mysql.user where (length(user) > 0 and user NOT LIKE 'root')" | xargs -n1 mysql -uroot -p$PASSWORD -s -N -e | sed 's/$/;/' > openstack-backup-mysql-grants.sql

Stop the mariadb database service:

   [root@undercloud stack]# systemctl stop mariadb

Create the backup:

   [root@undercloud stack]# rear -d -v mkbackup

   You can find the backup ISO file that you create with ReaR on the backup node under the /ctl_plane_backups directory.

Restart the undercloud:

   - Log in to the undercloud as the stack user.
   - Reboot the undercloud:

     [stack@undercloud]$ sudo reboot
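The grants dump above is a two-stage pipeline: the first mysql query prints one quoted SHOW GRANTS statement per non-root account, xargs -n1 runs each statement, and sed appends the trailing semicolon that SHOW GRANTS output omits. The following is a sketch of that final transformation, using canned grant lines in place of live mysql output (the nova and glance account names are illustrative):

```shell
# Canned stand-ins for what "SHOW GRANTS FOR 'user'@'host'" would print
# on a live server; nova and glance are hypothetical account names.
printf '%s\n' \
  "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%'" \
  "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%'" |
  sed 's/$/;/' > openstack-backup-mysql-grants.sql  # append the missing semicolons

cat openstack-backup-mysql-grants.sql
```

The resulting file contains one complete, executable GRANT statement per line, which is what makes it usable as a restore script.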
4.3. Backing up the control plane
To back up the control plane, you must first stop the pacemaker cluster and all containers operating on the control plane nodes. To ensure state consistency, do not operate the stack. After you complete the backup procedure, start the pacemaker cluster and the containers.
As a precaution, you must back up the database to ensure that you can restore the database after you restart the pacemaker cluster and containers.
Back up the control plane nodes simultaneously.
Prerequisites
- You have created and exported the backup directory. For more information, see Creating and exporting the backup directory.
- You have installed and configured ReaR on each control plane node. For more information, see Installing and configuring Relax and Recover (ReaR).
Procedure
Locate the database password:

   [heat-admin@overcloud-controller-x ~]# PASSWORD=$(sudo /bin/hiera -c /etc/puppet/hiera.yaml mysql::server::root_password)

Back up the databases:

   [heat-admin@overcloud-controller-x ~]# mysql -uroot -p$PASSWORD -s -N -e "select distinct table_schema from information_schema.tables where engine='innodb' and table_schema != 'mysql';" | xargs mysqldump -uroot -p$PASSWORD --single-transaction --databases > openstack-backup-mysql.sql
   [heat-admin@overcloud-controller-x ~]# mysql -uroot -p$PASSWORD -s -N -e "SELECT CONCAT('\"SHOW GRANTS FOR ''',user,'''@''',host,''';\"') FROM mysql.user where (length(user) > 0 and user NOT LIKE 'root')" | xargs -n1 mysql -uroot -p$PASSWORD -s -N -e | sed 's/$/;/' > openstack-backup-mysql-grants.sql

   Note: Backing up the databases is a precautionary measure. This step ensures that you can manually restore the Galera cluster if it does not restore automatically as part of the restoration procedure. For more information about restoring the Galera cluster, see Troubleshooting the Galera cluster.
On one of the control plane nodes, stop the pacemaker cluster:

   Important: Do not operate the stack. Stopping the pacemaker cluster and the containers temporarily interrupts control plane services to Compute nodes and disrupts network connectivity, Ceph, and the NFS data plane service. You cannot create instances, migrate instances, authenticate requests, or monitor the health of the cluster until the pacemaker cluster and the containers return to service following the final step of this procedure.

   [heat-admin@overcloud-controller-x ~]# sudo pcs cluster stop --all

On each control plane node, stop the containers:
   Stop the containers:

      [heat-admin@overcloud-controller-x ~]# sudo docker stop $(sudo docker ps -a -q)

   Stop the ceph-mon@controller.service container:

      [heat-admin@overcloud-controller-x ~]# sudo systemctl stop ceph-mon@$(hostname -s)

   Stop the ceph-mgr@controller.service container:

      [heat-admin@overcloud-controller-x ~]# sudo systemctl stop ceph-mgr@$(hostname -s)
Optional: If you use ganesha-nfs, disable the file server on one controller:

   [heat-admin@overcloud-controller-x ~]# sudo pcs resource disable ceph-nfs

Optional: If you use the ceph services ceph-mds and ceph-rgw, stop these services:

   [heat-admin@overcloud-controller-x ~]# sudo systemctl stop ceph-mds@$(hostname -s)
   [heat-admin@overcloud-controller-x ~]# sudo systemctl stop ceph-rgw@$(hostname -s)

To back up the control plane, run the control plane backup on each control plane node:
   [heat-admin@overcloud-controller-x ~]# sudo rear -d -v mkbackup

   You can find the backup ISO file that you create with ReaR on the backup node under the /ctl_plane_backups directory.

   Note: When you execute the backup command, you might see warning messages from the tar command about sockets that are ignored during the archiving process. You can safely disregard these warnings.

When the backup procedure generates ISO images for each of the control plane nodes, restart the pacemaker cluster. On one of the control plane nodes, enter the following command:

   [heat-admin@overcloud-controller-x ~]# sudo pcs cluster start --all

On each control plane node, start the containers:
   Start the containers:

      [heat-admin@overcloud-controller-x ~]# sudo systemctl restart docker

   Start the ceph-mon@controller.service container:

      [heat-admin@overcloud-controller-x ~]# sudo systemctl start ceph-mon@$(hostname -s)

   Start the ceph-mgr@controller.service container:

      [heat-admin@overcloud-controller-x ~]# sudo systemctl start ceph-mgr@$(hostname -s)
Optional: If you use ceph-mds and ceph-rgw, start these services:

   [heat-admin@overcloud-controller-x ~]# sudo systemctl start ceph-rgw@$(hostname -s)
   [heat-admin@overcloud-controller-x ~]# sudo systemctl start ceph-mds@$(hostname -s)

Optional: If you use ganesha-nfs, enable the file server on one controller:

   [heat-admin@overcloud-controller-x ~]# sudo pcs resource enable ceph-nfs
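The database backup command in this section (and in the undercloud procedure) uses the same splicing pattern: a query lists every InnoDB schema except mysql, and xargs appends those names to a single mysqldump invocation after --databases. The following sketch shows that splicing step in isolation, with canned schema names in place of the live query and echo standing in for mysqldump (nova, glance, and keystone are illustrative names):

```shell
# nova, glance, keystone are illustrative schema names, not live query output.
# xargs collects all input lines into one argument list, so every schema ends
# up in a single mysqldump command line after --databases.
printf '%s\n' nova glance keystone |
  xargs echo mysqldump -uroot --single-transaction --databases
# prints: mysqldump -uroot --single-transaction --databases nova glance keystone
```

Dumping all schemas in one invocation with --single-transaction is what gives the backup a consistent snapshot across databases.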