Chapter 5. Executing the restore procedure


If an error occurs during an update or upgrade, you can restore either the undercloud or overcloud control plane nodes or both so that they assume their previous state. If the Galera cluster does not restore automatically as part of the restoration procedure, you must restore the cluster manually.

You can also restore the undercloud or overcloud control plane nodes with colocated Ceph monitors.

Note

When you boot from an ISO file, ensure that the NFS server is reachable by the undercloud and overcloud.
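
A quick way to check reachability is to query the backup node's export list from the node that you plan to restore, for example from the undercloud. This is a minimal sketch that assumes the nfs-utils client tools are installed and that <BACKUP_NODE_IP_ADDRESS> is the IP address of the backup node:

    $ ping -c 3 <BACKUP_NODE_IP_ADDRESS>
    $ showmount -e <BACKUP_NODE_IP_ADDRESS>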

Use the following general steps:

  1. Burn the bootable ISO image to a DVD or load it through iLO remote access.
  2. Boot the node that requires restoration from the recovery medium.
  3. Select Recover <HOSTNAME>. Replace <HOSTNAME> with the name of the node to restore.
  4. Log in as user root.
  5. Recover the backup.

5.1. Restoring the undercloud

If an error occurs during a fast-forward upgrade, you can restore the undercloud node to its previously saved state by using the ISO image that you created in Section 4.2, “Backing up the undercloud”. The backup procedure stores the ISO images on the backup node in the folders that you created in Section 2.2, “Creating and exporting the backup directory”.

Procedure

  1. Shut down the undercloud node. Ensure that the undercloud node is shut down completely before you proceed.
  2. Restore the undercloud node by booting it with the ISO image that you created during the backup process. The ISO image is located under the /ctl_plane_backups directory of the backup node.
  3. When the Relax-and-Recover boot menu appears, select Recover <UNDERCLOUD_NODE>, where <UNDERCLOUD_NODE> is the name of the undercloud node.
  4. Log in as user root.

    The following message displays:

    Welcome to Relax-and-Recover. Run "rear recover" to restore your system!
    RESCUE <UNDERCLOUD_NODE>:~ # rear recover

    The image restore progresses quickly. When it is complete, the console echoes the following message:

    Finished recovering your system
    Exiting rear recover
    Running exit tasks
  5. When the command line interface is available, the image restore is complete. Power off the node.

    RESCUE <UNDERCLOUD_NODE>:~ #  poweroff

    On boot up, the node resumes its previous state.

5.2. Restoring the control plane

If an error occurs during a fast-forward upgrade, you can use the ISO images that you created in Section 4.3, “Backing up the control plane” to restore the control plane nodes to their previously saved state. To restore the control plane, you must restore all control plane nodes to the previous state to ensure state consistency.

Note

Red Hat supports backups of Red Hat OpenStack Platform with native SDNs, such as Open vSwitch (OVS) and the default Open Virtual Network (OVN). For information about third-party SDNs, refer to the third-party SDN documentation.

Procedure

  1. Shut down each control plane node. Ensure that the control plane nodes are shut down completely before you proceed.
  2. Restore the control plane nodes by booting them with the ISO image that you created during the backup process. The ISO images are located under the /ctl_plane_backups directory of the backup node.
  3. When the Relax-and-Recover boot menu appears, select Recover <CONTROL_PLANE_NODE>. Replace <CONTROL_PLANE_NODE> with the name of the control plane node.

    The following message displays:

    Welcome to Relax-and-Recover. Run "rear recover" to restore your system!
    RESCUE <CONTROL_PLANE_NODE>:~ # rear recover

    The image restore progresses quickly. When the restore completes, the console echoes the following message:

    Finished recovering your system
    Exiting rear recover
    Running exit tasks

    When the command line interface is available, the image restore is complete. Power off the node.

    RESCUE <CONTROL_PLANE_NODE>:~ #  poweroff

    Set the boot sequence to the normal boot device. On boot up, the node resumes its previous state.

  4. To ensure that the services are running correctly, check the status of Pacemaker. Log in to a Controller node as the root user and run the following command:

    # pcs status
  5. To view the status of the overcloud, use Tempest. For more information about Tempest, see Chapter 4 of the OpenStack Integration Test Suite Guide.
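
     The following is a minimal sketch of a quick smoke-test run, assuming that Tempest is already installed and configured against the overcloud as described in the OpenStack Integration Test Suite Guide:

     $ tempest run --smoke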

5.3. Troubleshooting the Galera cluster

If the Galera cluster does not restore as part of the restoration procedure, you must restore Galera manually.

Note

In this procedure, you must perform some steps on one Controller node. Ensure that you perform these steps on the same Controller node as you go through the procedure.

Procedure

  1. On Controller-0, retrieve the Galera cluster virtual IP:

    $ sudo hiera -c /etc/puppet/hiera.yaml mysql_vip
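
     Later steps in this procedure reference $MYSQL_VIP. This sketch assumes that you store the value of the lookup in a shell variable, for example:

     $ MYSQL_VIP=$(sudo hiera -c /etc/puppet/hiera.yaml mysql_vip)
     $ echo $MYSQL_VIP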
  2. Disable the database connections through the virtual IP on all Controller nodes:

    $ sudo iptables -I INPUT  -p tcp --destination-port 3306 -d $MYSQL_VIP  -j DROP
  3. On Controller-0, retrieve the MySQL root password:

    $ sudo hiera -c /etc/puppet/hiera.yaml mysql::server::root_password
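
     Later steps in this procedure reference $ROOT_PASSWORD. This sketch assumes that you store the retrieved password in a shell variable, for example:

     $ ROOT_PASSWORD=$(sudo hiera -c /etc/puppet/hiera.yaml mysql::server::root_password)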
  4. On Controller-0, set the Galera resource to unmanaged mode:

    $ sudo pcs resource unmanage galera-bundle
  5. Stop the MySQL containers on all Controller nodes:

     $ sudo docker container stop $(sudo docker container ls --all --format "{{ .Names }}" --filter=name=galera-bundle)
  6. Move the current MySQL data directory aside on all Controller nodes:

    $ sudo mv /var/lib/mysql /var/lib/mysql-save
  7. Create the new directory /var/lib/mysql on all Controller nodes:

    $ sudo mkdir /var/lib/mysql
    $ sudo chown 42434:42434 /var/lib/mysql
    $ sudo chcon -t container_file_t /var/lib/mysql
    $ sudo chmod 0755 /var/lib/mysql
    $ sudo chcon -r object_r /var/lib/mysql
    $ sudo chcon -u system_u /var/lib/mysql
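
     To confirm that the ownership, permissions, and SELinux context were applied, you can list the directory and check that it is owned by UID and GID 42434 and labelled with the container_file_t type:

     $ ls -ldZ /var/lib/mysql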
  8. Start the MySQL containers on all Controller nodes:

    $ sudo docker container start $(sudo docker container ls --all --format "{{ .Names }}" --filter=name=galera-bundle)
  9. Create the MySQL database on all Controller nodes:

    $ sudo docker exec -i $(sudo docker container ls --all --format "{{ .Names }}" \
          --filter=name=galera-bundle) bash -c "mysql_install_db --datadir=/var/lib/mysql --user=mysql"
  10. Start the database on all Controller nodes:

    $ sudo docker exec $(sudo docker container ls --all --format "{{ .Names }}" \
          --filter=name=galera-bundle) bash -c "mysqld_safe --skip-networking --wsrep-on=OFF" &
  11. Move the .my.cnf Galera configuration file aside on all Controller nodes:

    $ sudo docker exec $(sudo docker container ls --all --format "{{ .Names }}" \
          --filter=name=galera-bundle) bash -c "mv /root/.my.cnf /root/.my.cnf.bck"
  12. Reset the Galera root password on all Controller nodes:

     $ sudo docker exec $(sudo docker container ls --all --format "{{ .Names }}"  \
           --filter=name=galera-bundle) bash -c "mysql -uroot -e'use mysql;update user set password=PASSWORD(\"$ROOT_PASSWORD\")where User=\"root\";flush privileges;'"
  13. Restore the .my.cnf Galera configuration file inside the Galera container on all Controller nodes:

    $ sudo docker exec $(sudo docker container ls --all --format "{{ .Names }}"   \
          --filter=name=galera-bundle) bash -c "mv /root/.my.cnf.bck /root/.my.cnf"
  14. On Controller-0, copy the backup database files to /var/lib/mysql:

    $ sudo cp $BACKUP_FILE /var/lib/mysql
    $ sudo cp $BACKUP_GRANT_FILE /var/lib/mysql
    Note

    The path to these files is /home/heat-admin/.
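
     The cp commands in this step and the restore commands in the next step assume that $BACKUP_FILE and $BACKUP_GRANT_FILE contain only the file names of the database and grants dumps (not full paths) and that you run the cp commands from /home/heat-admin/. The following names are hypothetical examples; substitute the names of the dumps that you created during the backup procedure:

     $ cd /home/heat-admin
     $ BACKUP_FILE=openstack-backup-mysql.sql
     $ BACKUP_GRANT_FILE=openstack-backup-mysql-grants.sql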

  15. On Controller-0, restore the MySQL database:

     $ sudo docker exec $(sudo docker container ls --all --format "{{ .Names }}"    \
     --filter=name=galera-bundle) bash -c "mysql -u root -p$ROOT_PASSWORD < \"/var/lib/mysql/$BACKUP_FILE\""
     
     $ sudo docker exec $(sudo docker container ls --all --format "{{ .Names }}"    \
     --filter=name=galera-bundle) bash -c "mysql -u root -p$ROOT_PASSWORD < \"/var/lib/mysql/$BACKUP_GRANT_FILE\""
  16. Shut down the databases on all Controller nodes:

    $ sudo docker exec $(sudo docker container ls --all --format "{{ .Names }}"    \
          --filter=name=galera-bundle) bash -c "mysqladmin shutdown"
  17. On Controller-0, start the bootstrap node:

    $ sudo docker exec $(sudo docker container ls --all --format "{{ .Names }}"  --filter=name=galera-bundle) \
            /usr/bin/mysqld_safe --pid-file=/var/run/mysql/mysqld.pid --socket=/var/lib/mysql/mysql.sock --datadir=/var/lib/mysql \
            --log-error=/var/log/mysql_cluster.log  --user=mysql --open-files-limit=16384 \
            --wsrep-cluster-address=gcomm:// &
  18. Verification: On Controller-0, check the status of the cluster:

    $ sudo docker exec $(sudo docker container ls --all --format "{{ .Names }}" \
             --filter=name=galera-bundle) bash -c "clustercheck"

     Ensure that the output contains the message “Galera cluster node is synced”. If the node is not synced, you must recreate it.

  19. On Controller-0, retrieve the cluster address from the configuration:

    $ sudo docker exec $(sudo docker container ls --all --format "{{ .Names }}" \
    --filter=name=galera-bundle) bash -c "grep wsrep_cluster_address /etc/my.cnf.d/galera.cnf" | awk '{print $3}'
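
     The next step references $CLUSTER_ADDRESS, so this sketch assumes that you capture the output of the command above in a shell variable:

     $ CLUSTER_ADDRESS=$(sudo docker exec $(sudo docker container ls --all --format "{{ .Names }}" \
     --filter=name=galera-bundle) bash -c "grep wsrep_cluster_address /etc/my.cnf.d/galera.cnf" | awk '{print $3}')
     $ echo $CLUSTER_ADDRESS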
  20. On each of the remaining Controller nodes, start the database and validate the cluster:

    1. Start the database:

      $ sudo docker exec $(sudo docker container ls --all --format "{{ .Names }}" \
            --filter=name=galera-bundle) /usr/bin/mysqld_safe --pid-file=/var/run/mysql/mysqld.pid --socket=/var/lib/mysql/mysql.sock \
            --datadir=/var/lib/mysql --log-error=/var/log/mysql_cluster.log  --user=mysql --open-files-limit=16384 \
            --wsrep-cluster-address=$CLUSTER_ADDRESS &
    2. Check the status of the MySQL cluster:

      $ sudo docker exec $(sudo docker container ls --all --format "{{ .Names }}" \
               --filter=name=galera-bundle) bash -c "clustercheck"

       Ensure that the output contains the message “Galera cluster node is synced”. If the node is not synced, you must recreate it.

  21. Stop the database on all Controller nodes:

    $ sudo docker exec $(sudo docker container ls --all --format "{{ .Names }}" --filter=name=galera-bundle) \
            /usr/bin/mysqladmin -u root shutdown
  22. On all Controller nodes, remove the following firewall rule to allow database connections through the virtual IP address:

    $ sudo iptables -D  INPUT  -p tcp --destination-port 3306 -d $MYSQL_VIP  -j DROP
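
     To confirm that the DROP rule is no longer present, you can list the INPUT chain and verify that traffic to port 3306 on the virtual IP is no longer blocked:

     $ sudo iptables -L INPUT -n | grep 3306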
  23. Restart the MySQL container on all Controller nodes:

    $ sudo docker container restart $(sudo docker container ls --all --format  "{{ .Names }}" --filter=name=galera-bundle)
  24. Restart the clustercheck container on all Controller nodes:

    $ sudo docker container restart $(sudo docker container ls --all --format  "{{ .Names }}" --filter=name=clustercheck)
  25. On Controller-0, set the Galera resource to managed mode:

    $ sudo pcs resource manage galera-bundle

5.4. Restoring the undercloud and control plane nodes with colocated Ceph monitors

If an error occurs during an update or upgrade, you can use ReaR backups to restore either the undercloud or overcloud control plane nodes, or both, to their previous state.

Prerequisites

Procedure

  1. On the backup node, export the NFS directory to host the Ceph backups. Replace <IP_ADDRESS/24> with the IP address and subnet mask of the network:

    [root@backup ~]# cat >> /etc/exports << EOF
    /ceph_backups <IP_ADDRESS/24>(rw,sync,no_root_squash,no_subtree_check)
    EOF
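
     Depending on how the NFS server was configured during the backup procedure, you might need to re-export the shares for the new entry to take effect. A minimal sketch, assuming a running NFS server on the backup node:

     [root@backup ~]# exportfs -avr
     [root@backup ~]# showmount -e localhost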
  2. On the undercloud node, source the undercloud credentials and run the following script:

    # source stackrc
    #! /bin/bash
    for i in `openstack server list -c Name -c Networks -f value | grep controller | awk -F'=' '{print $2}' | awk -F' ' '{print $1}'`; do ssh -q heat-admin@$i 'sudo systemctl stop ceph-mon@$(hostname -s) ceph-mgr@$(hostname -s)'; done

    To verify that the ceph-mgr@controller.service container has stopped, enter the following command:

     [heat-admin@overcloud-controller-x ~]$ sudo docker ps | grep ceph
  3. On the undercloud node, source the undercloud credentials and run the following scripts:

    # source stackrc
    #! /bin/bash
    for i in `openstack server list -c Name -c Networks -f value | grep controller | awk -F'=' '{print $2}' | awk -F' ' '{print $1}'`; do ssh -q heat-admin@$i 'sudo mkdir /ceph_backups'; done
    
    #! /bin/bash
    for i in `openstack server list -c Name -c Networks -f value | grep controller | awk -F'=' '{print $2}' | awk -F' ' '{print $1}'`; do ssh -q heat-admin@$i 'sudo mount -t nfs  <BACKUP_NODE_IP_ADDRESS>:/ceph_backups /ceph_backups'; done
    
    #! /bin/bash
    for i in `openstack server list -c Name -c Networks -f value | grep controller | awk -F'=' '{print $2}' | awk -F' ' '{print $1}'`; do ssh -q heat-admin@$i 'sudo mkdir /ceph_backups/$(hostname -s)'; done
    
    #! /bin/bash
    for i in `openstack server list -c Name -c Networks -f value | grep controller | awk -F'=' '{print $2}' | awk -F' ' '{print $1}'`; do ssh -q heat-admin@$i 'sudo tar -zcv --xattrs-include=*.* --xattrs  --xattrs-include=security.capability --xattrs-include=security.selinux --acls -f /ceph_backups/$(hostname -s)/$(hostname -s).tar.gz  /var/lib/ceph'; done
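
     When the last script completes, you can confirm on the backup node that one archive exists for each Controller node:

     [root@backup ~]# ls -lh /ceph_backups/*/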
  4. On the node that you want to restore, complete the following tasks:

    1. Power off the node before you proceed.
    2. Restore the node with the ReaR backup file that you created during the backup process. The file is located in the /ceph_backups directory of the backup node.
    3. From the Relax-and-Recover boot menu, select Recover <CONTROL_PLANE_NODE>, where <CONTROL_PLANE_NODE> is the name of the control plane node.
    4. At the prompt, enter the following command:

       RESCUE <CONTROL_PLANE_NODE>:~ # rear recover

      When the image restoration process completes, the console displays the following message:

    Finished recovering your system
    Exiting rear recover
    Running exit tasks
  5. For the node that you want to restore, copy the Ceph backup from the /ceph_backups directory into the /var/lib/ceph directory:

    1. Identify the system mount points:

      RESCUE <CONTROL_PLANE_NODE>:~# df -h
      Filesystem      Size  Used Avail Use% Mounted on
      devtmpfs         16G     0   16G   0% /dev
      tmpfs            16G     0   16G   0% /dev/shm
      tmpfs            16G  8.4M   16G   1% /run
      tmpfs            16G     0   16G   0% /sys/fs/cgroup
      /dev/vda2        30G   13G   18G  41% /mnt/local

      The /dev/vda2 file system is mounted on /mnt/local.

    2. Create a temporary directory and mount the NFS share:

      RESCUE <CONTROL_PLANE_NODE>:~ # mkdir /tmp/restore
      RESCUE <CONTROL_PLANE_NODE>:~ # mount -v -t nfs -o rw,noatime <BACKUP_NODE_IP_ADDRESS>:/ceph_backups /tmp/restore/
    3. On the control plane node, remove the existing contents of the /var/lib/ceph directory on the restored file system:

      RESCUE <CONTROL_PLANE_NODE>:~ # rm -rf /mnt/local/var/lib/ceph/*
    4. Restore the previous Ceph maps. Replace <CONTROL_PLANE_NODE> with the name of your control plane node:

      RESCUE <CONTROL_PLANE_NODE>:~ # tar -xvC /mnt/local/ -f /tmp/restore/<CONTROL_PLANE_NODE>/<CONTROL_PLANE_NODE>.tar.gz --xattrs --xattrs-include='*.*' var/lib/ceph
    5. Verify that the files are restored:

      RESCUE <CONTROL_PLANE_NODE>:~ # ls -l
      total 0
      drwxr-xr-x 2 root 107 26 Jun 18 18:52 bootstrap-mds
      drwxr-xr-x 2 root 107 26 Jun 18 18:52 bootstrap-osd
      drwxr-xr-x 2 root 107 26 Jun 18 18:52 bootstrap-rbd
      drwxr-xr-x 2 root 107 26 Jun 18 18:52 bootstrap-rgw
      drwxr-xr-x 3 root 107 31 Jun 18 18:52 mds
      drwxr-xr-x 3 root 107 31 Jun 18 18:52 mgr
      drwxr-xr-x 3 root 107 31 Jun 18 18:52 mon
      drwxr-xr-x 2 root 107  6 Jun 18 18:52 osd
      drwxr-xr-x 3 root 107 35 Jun 18 18:52 radosgw
      drwxr-xr-x 2 root 107  6 Jun 18 18:52 tmp
  6. Power off the node:

    RESCUE <CONTROL_PLANE_NODE>:~ # poweroff
  7. Power on the node. The node resumes its previous state.