Chapter 3. Reusing bricks and restoring configuration from backups

3.1. Host replacement prerequisites

  • Determine which node to use as the Ansible controller node (the node from which all Ansible playbooks are executed). Red Hat recommends using a healthy node in the same cluster as the failed node as the Ansible controller node.
  • If possible, locate a recent backup or create a new backup of the important files (disk configuration or inventory files). See Backing up important files for details.
  • Stop brick processes and unmount file systems on the failed host to avoid file system inconsistency issues. You can confirm that the processes and mounts are gone by using the verification sketch after this list.

    # pkill glusterfsd
    # umount /gluster_bricks/{engine,vmstore,data}
  • Check which operating system is running on your hyperconverged hosts by running the following command:

    $ nodectl info
  • Reinstall the same operating system on the failed hyperconverged host.
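
Before reinstalling, you can confirm on the failed host that no brick processes remain and that the brick file systems are unmounted. The following is a minimal sketch that assumes the /gluster_bricks mount points shown above:

    # pgrep -l glusterfsd || echo "no brick processes running"
    # grep /gluster_bricks /proc/mounts || echo "no bricks mounted"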

3.2. Preparing the cluster for host replacement

  1. Verify host state in the Administrator Portal.

    1. Log in to the Red Hat Virtualization Administrator Portal.

      The host is listed as NonResponsive in the Administrator Portal. Virtual machines that previously ran on this host are in the Unknown state.

    2. Click Compute → Hosts and click the Action menu (⋮).
    3. Click Confirm host has been rebooted and confirm the operation.
    4. Verify that the virtual machines are now listed with a state of Down.
  2. Update the SSH fingerprint for the failed node.

    1. Log in to the Ansible controller node as the root user.
    2. Remove the existing SSH fingerprint for the failed node.

      # sed -i '/failed-host-frontend.example.com/d' /root/.ssh/known_hosts
      # sed -i '/failed-host-backend.example.com/d' /root/.ssh/known_hosts
    3. Copy the public key from the Ansible controller node to the freshly installed node.

      # ssh-copy-id root@new-host-backend.example.com
      # ssh-copy-id root@new-host-frontend.example.com
    4. Verify that you can log in to all hosts in the cluster, including the Ansible controller node, using key-based SSH authentication without a password. Test access using all network addresses. The following example assumes that the Ansible controller node is host1.

      # ssh root@host1-backend.example.com
      # ssh root@host1-frontend.example.com
      # ssh root@host2-backend.example.com
      # ssh root@host2-frontend.example.com
      # ssh root@new-host-backend.example.com
      # ssh root@new-host-frontend.example.com

      Use ssh-copy-id to copy the public key to any host that you cannot log in to without a password using this method. A scripted check of all connections is sketched after this procedure.

      # ssh-copy-id root@host-frontend.example.com
      # ssh-copy-id root@host-backend.example.com
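
To check every connection in one pass, you can loop over the frontend and backend FQDNs. This is a minimal sketch that uses the example host names above; BatchMode=yes makes ssh fail instead of prompting for a password:

    # for host in host1 host2 new-host; do
        for net in frontend backend; do
          ssh -o BatchMode=yes root@${host}-${net}.example.com true \
            && echo "${host}-${net}: OK" \
            || echo "${host}-${net}: key-based login failed"
        done
      done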

3.3. Restoring disk configuration from backups

Prerequisites

  • This procedure assumes that you have already performed the backup process in Chapter 2, Backing up important files, and that you know the location of your backup files and the address of the backup host.

Procedure

  1. If the new host does not have multipath configuration, blacklist the devices.

    1. Create an inventory file for the new host that defines the devices to blacklist.

      hc_nodes:
        hosts:
          new-host-backend-fqdn.example.com:
            blacklist_mpath_devices:
              - sda
              - sdb
              - sdc
              - sdd
    2. Run the gluster_deployment.yml playbook on this inventory file using the blacklistdevices tag.

      # ansible-playbook -i blacklist-inventory.yml /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/tasks/gluster_deployment.yml --tags=blacklistdevices
  2. Copy backed up configuration details to the new host.

    # mkdir /rhhi-backup
    # scp backup-host.example.com:/backups/rhvh-node-host1-backend.example.com-backup.tar.gz /rhhi-backup
    # tar -xvf /rhhi-backup/rhvh-node-host1-backend.example.com-backup.tar.gz -C /rhhi-backup
  3. Create an inventory file for host restoration.

    1. Change into the hc-ansible-deployment directory and back up the default archive_config_inventory.yml file.

      # cd /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment
      # cp archive_config_inventory.yml archive_config_inventory.yml.bk
    2. Edit the archive_config_inventory.yml file with details of the host that you want to restore.

      hosts: The backend FQDN of the host that you want to restore (this host).
      backup_dir: The directory in which to store extracted backup files.
      nbde_setup: If you use Network-Bound Disk Encryption, set this to true. Otherwise, set this to false.
      upgrade: Set this to false.

      For example:

      all:
        hosts:
          host1-backend.example.com:
        vars:
          backup_dir: /rhhi-backup
          nbde_setup: true
          upgrade: false
  4. Execute the archive_config.yml playbook.

    Run the archive_config.yml playbook using your updated inventory file with the restorefiles tag.

    # ansible-playbook -i archive_config_inventory.yml archive_config.yml --tags=restorefiles
  5. (Optional) Configure Network-Bound Disk Encryption (NBDE) on the root disk.

    1. Create an inventory file for the new host that defines devices to encrypt.

      hc_nodes:
        hosts:
          new-node-frontend-fqdn.example.com:
            blacklist_mpath_devices:
              - sda
              - sdb
              - sdc
            rootpassphrase: stronGpa55
            rootdevice: /dev/sda2
            networkinterface: eth1
        vars:
          ip_version: IPv4
          ip_config_method: dhcp
          gluster_infra_tangservers:
            - url: http://tang-server.example.com:80

      See Understanding the luks_tang_inventory.yml file for more information about these parameters.

    2. Run the luks_tang_setup.yml playbook using your inventory file and the bindtang tag.

      # ansible-playbook -i inventory.yml /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/tasks/luks_tang_setup.yml --tags=bindtang
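
After the playbooks complete, you can sanity-check the restored configuration. The following is a minimal sketch that assumes the example backup directory and root device used above; the last command applies only if you performed the optional NBDE step and requires the clevis-luks tools:

    # ls /rhhi-backup
    # lsblk -o NAME,FSTYPE,MOUNTPOINT
    # clevis luks list -d /dev/sda2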

3.4. Creating the node_replace_inventory.yml file

Define your cluster hosts by creating a node_replace_inventory.yml file.

Procedure

  1. Back up the node_replace_inventory.yml file.

    # cd /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment
    # cp node_replace_inventory.yml node_replace_inventory.yml.bk
  2. Edit the node_replace_inventory.yml file to define your cluster.

    See Appendix C, Understanding the node_replace_inventory.yml file for more information about this inventory file and its parameters.
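
As an illustration only, the following sketch shows the general shape of such an inventory, using the example host names from this chapter. The group name and the gluster_maintenance_* variable names are assumptions based on the gluster maintenance role, not confirmed here; the definitions in Appendix C take precedence.

    cluster_nodes:                  # group name is an assumption; see Appendix C
      hosts:
        host1-backend.example.com:
      vars:
        gluster_maintenance_old_node: failed-host-backend.example.com   # failed host
        gluster_maintenance_new_node: new-host-backend.example.com      # replacement host
        gluster_maintenance_cluster_node: host1-backend.example.com     # healthy peer
        gluster_maintenance_cluster_node_2: host2-backend.example.com   # second healthy peer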

3.5. Executing the replace_node.yml playbook file

The replace_node.yml playbook reconfigures a Red Hat Hyperconverged Infrastructure for Virtualization cluster to use a new node after an existing cluster node has failed.

Procedure

  1. Execute the playbook.

    # cd /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/
    # ansible-playbook -i node_replace_inventory.yml tasks/replace_node.yml --tags=restorepeer
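
When the playbook finishes, you can confirm from any other hyperconverged host that the replacement host has rejoined the trusted storage pool and that its bricks are online, for example:

    # gluster peer status
    # gluster volume status engine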

3.6. Finalizing host replacement

After you have replaced a failed host with a new host, follow these steps to ensure that the cluster is connected to the new host and properly activated.

Procedure

  1. Activate the host.

    1. Log in to the Red Hat Virtualization Administrator Portal.
    2. Click Compute → Hosts and observe that the replacement host is listed with a state of Maintenance.
    3. Select the host and click Management → Activate.
    4. Wait for the host to reach the Up state.
  2. Attach the gluster network to the host.

    1. Click Compute → Hosts and select the host.
    2. Click Network Interfaces → Setup Host Networks.
    3. Drag and drop the newly created network to the correct interface.
    4. Ensure that the Verify connectivity between Host and Engine checkbox is checked.
    5. Ensure that the Save network configuration checkbox is checked.
    6. Click OK to save.
    7. Verify the health of the network.

      Click the Network Interfaces tab and check the state of the host’s network.

      If the network interface enters an "Out of sync" state or does not have an IP Address, click Management → Refresh Capabilities.

3.7. Verifying healing in progress

After replacing a failed host with a new host, verify that your storage is healing as expected.

Procedure

  • Verify that healing is in progress.

    Run the following command on any hyperconverged host:

    # for vol in `gluster volume list`; do gluster volume heal $vol info summary; done

    The output shows a summary of healing activity on each brick in each volume, for example:

    Brick brick1
    Status: Connected
    Total Number of entries: 3
    Number of entries in heal pending: 2
    Number of entries in split-brain: 1
    Number of entries possibly healing: 0

    Depending on brick size, volumes can take a long time to heal. You can still run and migrate virtual machines using this node while the underlying storage heals.
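
To track progress over time, you can rerun the same summary periodically until every volume reports zero pending entries, for example:

    # watch -n 60 'for vol in `gluster volume list`; do gluster volume heal $vol info summary; done'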
