Back Up and Restore the Director Undercloud
Chapter 1. Back Up the Undercloud
This guide describes how to back up the undercloud used in the Red Hat OpenStack Platform director. The undercloud is usually a single physical node (although high availability options exist using a two-node pacemaker cluster that runs director in a VM) that is used to deploy and manage your OpenStack environment.
1.1. Backup Considerations
Formulate a robust backup and recovery policy to minimize data loss and system downtime. When determining your backup strategy, answer the following questions:
- How quickly do you need to recover from data loss? If you cannot tolerate any data loss, include high availability in your deployment strategy in addition to using backups. Consider how long it will take to obtain the physical backup media (including from an offsite location, if used), and how many tape drives are available for restore operations.
- How many backups should you keep? Consider the legal and regulatory requirements that affect how long you are expected to store data.
- Should your backups be kept offsite? Storing your backup media offsite helps mitigate the risk of catastrophe befalling your physical location.
- How often should backups be tested? A robust backup strategy includes regular restoration tests of backed-up data. This helps validate that the correct data is still being backed up, and that no corruption is introduced during the backup or restoration processes. Run these drills as if they were being performed under actual disaster recovery conditions.
- What will be backed up? The following sections describe database and file-system backups for components, as well as information on recovering backups.
1.2. High Availability of the Undercloud node
You are free to consider your preferred high availability (HA) options for the Undercloud node; Red Hat does not prescribe any particular requirements for this. For example, you might consider running your Undercloud node as a highly available virtual machine within Red Hat Enterprise Virtualization (RHEV). You might also consider using physical nodes with Pacemaker providing HA for the required services.
When approaching high availability for your Undercloud node, you should consult the documentation and good practices of the solution you decide works best for your environment.
1.3. Backing up a containerized undercloud
A full undercloud backup includes the following databases and files:
- All MariaDB databases on the undercloud node
- MariaDB configuration file on the undercloud so that you can accurately restore databases
- Configuration data: /etc
- Log data: /var/log
- Image data: /var/lib/glance
- Certificate generation data if using SSL: /var/lib/certmonger
- Container image data: /var/lib/containers and /var/lib/image-serve
- All swift data: /srv/node
- All data in the stack user home directory: /home/stack
Confirm that you have sufficient disk space available on the undercloud before you perform the backup process. Expect the archive file to be at least 3.5 GB.
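You can script this confirmation as a quick pre-flight check. The following is a minimal sketch; the 4 GB threshold and the target directory are assumptions, so substitute the filesystem you actually write the archive to:

```shell
# Pre-flight check: warn if the filesystem that will hold the archive
# has less free space than the archive is expected to need.
# The threshold and target directory below are assumptions.
required_kb=$((4 * 1024 * 1024))              # ~4 GB, in 1K blocks
target_dir=${BACKUP_DIR:-/tmp}                # use your real backup path here
avail_kb=$(df -Pk "$target_dir" | awk 'NR==2 {print $4}')
if [ "$avail_kb" -lt "$required_kb" ]; then
    echo "WARNING: only ${avail_kb}K free in $target_dir" >&2
else
    echo "Sufficient space in $target_dir: ${avail_kb}K free"
fi
```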
Procedure
- Log in to the undercloud as the root user.
- Retrieve the database root password:

  [root@director ~]# /bin/hiera -c /etc/puppet/hiera.yaml mysql::server::root_password

- Perform the backup:

  [root@director ~]# podman exec mysql bash -c "mysqldump -uroot -pPASSWORD --opt --all-databases" > /root/undercloud-all-databases.sql

- Copy the root configuration file for the database:

  [root@director ~]# cp /var/lib/config-data/puppet-generated/mysql/root/.my.cnf ~/.

- Archive the database backup and the configuration files:

  [root@director ~]# cd /backup
  [root@director backup]# tar --xattrs --xattrs-include='*.*' --ignore-failed-read -cf \
      undercloud-backup-`date +%F`.tar \
      /root/undercloud-all-databases.sql \
      /etc \
      /var/log \
      /var/lib/glance \
      /var/lib/certmonger \
      /var/lib/containers \
      /var/lib/image-serve \
      /var/lib/config-data \
      /srv/node \
      /root \
      /home/stack

  - The --ignore-failed-read option skips any directory that does not apply to your undercloud.
  - The --xattrs option includes extended attributes, which are required to store metadata for Object Storage (swift).

  This creates a file named undercloud-backup-<date>.tar, where <date> is the system date. Copy this tar file to a secure location.
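When copying the tar file to a secure location, it is worth recording a checksum so the off-site copy can be verified after transfer. The following is a minimal sketch; a placeholder file stands in for undercloud-backup-<date>.tar so the sketch is self-contained:

```shell
# Record a SHA-256 checksum of the backup archive, then verify it.
# A placeholder file stands in for undercloud-backup-<date>.tar here;
# rerun "sha256sum -c" on the destination host after copying to
# confirm the transfer was not corrupted.
set -e
archive=$(mktemp /tmp/undercloud-backup-XXXXXX.tar)
echo "placeholder archive contents" > "$archive"
sha256sum "$archive" > "$archive.sha256"
result=$(sha256sum -c "$archive.sha256")   # "<file>: OK" when intact
echo "$result"
rm -f "$archive" "$archive.sha256"
```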
1.4. Validate the Completed Backup
You can validate the completed backup by running the restore process and verifying the result. See the next section for further details on restoring from backup.
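Short of a full restore drill, you can at least confirm that the archive is readable and contains an expected top-level path. The following is a minimal, self-contained sketch; in practice, point it at your real undercloud-backup-<date>.tar:

```shell
# Sanity-check a tar archive: a successful listing means it is not
# truncated or corrupt, and grep spot-checks an expected member.
# A stand-in archive is built here so the sketch runs anywhere.
set -e
workdir=$(mktemp -d)
mkdir -p "$workdir/etc"
echo "director" > "$workdir/etc/hostname"
archive="$workdir/undercloud-backup-demo.tar"
tar -cf "$archive" -C "$workdir" etc
listing=$(tar -tf "$archive")              # fails if the archive is unreadable
echo "$listing" | grep -q '^etc/' && echo "archive OK"
rm -rf "$workdir"
```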
Part I. Restore the Undercloud
This section describes how to restore the undercloud used in the Red Hat OpenStack Platform Director.
This process contains steps to restore the data from the OpenStack Platform director backup to a fresh undercloud installation. As a result, the restored undercloud uses the latest packages.
Chapter 2. Restoring a containerized undercloud
The following restore procedure assumes your undercloud node has failed and is in an unrecoverable state. This procedure involves restoring the database and critical filesystems on a fresh installation. It assumes the following:
- You have re-installed the latest version of Red Hat Enterprise Linux 8.
- The hardware layout is the same.
- The hostname and undercloud settings of the machine are the same.
- The backup archive has been copied to the root directory.
Procedure
- Log in to your undercloud as the root user.
- Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted:

  [root@director ~]# subscription-manager register

- Attach the Red Hat OpenStack Platform entitlement:

  [root@director ~]# subscription-manager attach --pool=Valid-Pool-Number-123456

- Disable all default repositories, and then enable the required Red Hat Enterprise Linux repositories:

  [root@director ~]# subscription-manager repos --disable=*
  [root@director ~]# subscription-manager repos --enable=rhel-8-for-x86_64-baseos-eus-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms --enable=rhel-8-for-x86_64-highavailability-eus-rpms --enable=ansible-2.8-for-rhel-8-x86_64-rpms --enable=openstack-16-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms

- Update your system to ensure you have the latest base system packages, and then reboot:

  [root@director ~]# dnf update -y
  [root@director ~]# reboot

- Ensure the time on your undercloud is synchronized. For example:

  [root@director ~]# dnf install -y chrony
  [root@director ~]# systemctl start chronyd
  [root@director ~]# systemctl enable chronyd

- Copy the undercloud backup archive to the undercloud's root directory. The following steps use undercloud-backup-$TIMESTAMP.tar as the filename, where $TIMESTAMP is a Bash variable for the timestamp on the archive.
- Install the database server and client tools:

  [root@director ~]# dnf install -y mariadb mariadb-server

- Start the database:

  [root@director ~]# systemctl start mariadb

- Increase the allowed packet size to accommodate the size of the database backup:

  [root@director ~]# mysql -uroot -e "set global max_allowed_packet = 1073741824;"

- Extract the database and database configuration from the archive:

  [root@director ~]# tar -xvC / -f undercloud-backup-$TIMESTAMP.tar var/lib/config-data/mysql/etc/my.cnf.d/galera.cnf
  [root@director ~]# tar -xvC / -f undercloud-backup-$TIMESTAMP.tar root/undercloud-all-databases.sql

- Restore the database backup:

  [root@director ~]# mysql -u root < /root/undercloud-all-databases.sql

- Extract a temporary version of the root configuration file:

  [root@director ~]# tar -xvf undercloud-backup-$TIMESTAMP.tar root/.my.cnf

- Get the old root database password:

  [root@director ~]# OLDPASSWORD=$(sudo cat root/.my.cnf | grep -m1 password | cut -d'=' -f2 | tr -d "'")

- Reset the root database password:

  [root@director ~]# mysqladmin -u root password "$OLDPASSWORD"

- Move the root configuration file from the temporary location:

  [root@director ~]# mv root/.my.cnf .
  [root@director ~]# rmdir root

- Get a list of old user permissions:

  [root@director ~]# mysql -e 'select host, user, password from mysql.user;'

- Remove the old user permissions for each host listed. For example:

  [root@director ~]# HOST="192.0.2.1"
  [root@director ~]# USERS=$(mysql -Nse "select user from mysql.user WHERE user != \"root\" and host = \"$HOST\";" | uniq | xargs)
  [root@director ~]# for USER in $USERS ; do mysql -e "drop user \"$USER\"@\"$HOST\"" || true ; done
  [root@director ~]# mysql -e 'flush privileges'

  Perform this for all users accessing through the host IP and any host ("%"). The IP address in the HOST parameter is the undercloud's IP address on the control plane.

- Stop the database:

  [root@director ~]# systemctl stop mariadb

- Create the stack user:

  [root@director ~]# useradd stack

- Set a password for the user:

  [root@director ~]# passwd stack

- Disable password requirements when using sudo:

  [root@director ~]# echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack
  [root@director ~]# chmod 0440 /etc/sudoers.d/stack

- Restore the stack user home directory:

  [root@director ~]# tar -xvC / -f undercloud-backup-$TIMESTAMP.tar home/stack

- Install the python3-policycoreutils package:

  [root@director ~]# dnf -y install python3-policycoreutils

- Restore the glance data:

  [root@director ~]# tar --xattrs -xvC / -f undercloud-backup-$TIMESTAMP.tar var/lib/glance

- Restore the swift data:

  [root@director ~]# tar --xattrs -xvC / -f undercloud-backup-$TIMESTAMP.tar srv/node

- If using SSL in the undercloud, refresh the CA certificates:

  [root@director ~]# tar -xvC / -f undercloud-backup-$TIMESTAMP.tar etc/pki/instack-certs/undercloud.pem
  [root@director ~]# tar -xvC / -f undercloud-backup-$TIMESTAMP.tar etc/pki/ca-trust/source/anchors/*
  [root@director ~]# restorecon -R /etc/pki
  [root@director ~]# semanage fcontext -a -t etc_t "/etc/pki/instack-certs(/.*)?"
  [root@director ~]# restorecon -R /etc/pki/instack-certs
  [root@director ~]# update-ca-trust extract

- Switch to the stack user:

  [root@director ~]# su - stack
  [stack@director ~]$

- Install the python3-tripleoclient package:

  [stack@director ~]$ sudo dnf install -y python3-tripleoclient ceph-ansible

- Run the undercloud installation command. Ensure that you run it in the stack user's home directory:

  [stack@director ~]$ openstack undercloud install
When the install completes, the undercloud automatically restores its connection to the overcloud. The nodes continue to poll OpenStack Orchestration (heat) for pending tasks.
Chapter 3. Restoring images for overcloud nodes
The director requires the latest disk images for provisioning new overcloud nodes. Follow this procedure to restore these images.
Procedure
- Source the stackrc file to enable the director's command line tools:

  [stack@director ~]$ source ~/stackrc

- Install the rhosp-director-images and rhosp-director-images-ipa packages:

  (undercloud) [stack@director ~]$ sudo dnf install rhosp-director-images rhosp-director-images-ipa

- Extract the image archives to the images directory in the stack user's home (/home/stack/images):

  (undercloud) [stack@director ~]$ cd ~/images
  (undercloud) [stack@director images]$ for i in /usr/share/rhosp-director-images/overcloud-full-latest-16.0.0.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-16.0.0.tar; do tar -xvf $i; done

- Import these images into the director:

  (undercloud) [stack@director images]$ cd ~/images
  (undercloud) [stack@director images]$ openstack overcloud image upload --image-path /home/stack/images/

- Configure nodes in your environment to use the new images:

  (undercloud) [stack@director images]$ for NODE in $(openstack baremetal node list -c UUID -f value) ; do openstack overcloud node configure $NODE ; done
Chapter 4. Validate the Completed Restore
Use the following commands to perform a health check of your newly restored environment:
4.1. Check Identity Service (Keystone) Operation
This step validates Identity Service operations by querying for a list of users.
# source stackrc
# openstack user list
The output of this command should include a list of users created in your environment. This demonstrates that keystone is running and successfully authenticating user requests. For example:
# openstack user list
+----------------------------------+------------+---------+----------------------+
| id | name | enabled | email |
+----------------------------------+------------+---------+----------------------+
| 9e47bb53bb40453094e32eccce996828 | admin | True | root@localhost |
| 9fe2466f88cc4fa0ba69e59b47898829 | ceilometer | True | ceilometer@localhost |
| 7a40d944e55d422fa4e85daf47e47c42 | cinder | True | cinder@localhost |
| 3d2ed97538064f258f67c98d1912132e | demo | True | |
| 756e73a5115d4e9a947d8aadc6f5ac22 | glance | True | glance@localhost |
| f0d1fcee8f9b4da39556b78b72fdafb1 | neutron | True | neutron@localhost |
| e9025f3faeee4d6bb7a057523576ea19 | nova | True | nova@localhost |
| 65c60b1278a0498980b2dc46c7dcf4b7 | swift | True | swift@localhost |
+----------------------------------+------------+---------+----------------------+