Chapter 4. Reusing bricks and reconstructing existing brick configuration


4.1. Host replacement prerequisites

  • Determine which node to use as the Ansible controller node (the node from which all Ansible playbooks are executed). Red Hat recommends using a healthy node in the same cluster as the failed node as the Ansible controller node.
  • If the failed host used Network-Bound Disk Encryption, ensure that you know the passphrase used for the existing disks.
  • Take note of the disks that comprise the gluster volumes hosted by the server you are replacing.
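
    For example, you can list the bricks for each volume from any healthy node (an optional check; brick paths depend on your deployment):

    # gluster volume info | grep -i brick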
  • If possible, locate a recent backup or create a new backup of the important files (disk configuration or inventory files). See Backing up important files for details.
  • Stop brick processes and unmount file systems on the failed host, to avoid file system inconsistency issues.

    # pkill glusterfsd
    # umount /gluster_bricks/{engine,vmstore,data}
  • Check which operating system is running on your hyperconverged hosts by running the following command:

    $ nodectl info
  • Reinstall the same operating system on the failed hyperconverged host.

4.2. Preparing the cluster for host replacement

  1. Verify host state in the Administrator Portal.

    1. Log in to the Red Hat Virtualization Administrator Portal.

      The host is listed as NonResponsive in the Administrator Portal. Virtual machines that previously ran on this host are in the Unknown state.

    2. Click Compute → Hosts and click the Action menu (⋮).
    3. Click Confirm host has been rebooted and confirm the operation.
    4. Verify that the virtual machines are now listed with a state of Down.
  2. Update the SSH fingerprint for the failed node.

    1. Log in to the Ansible controller node as the root user.
    2. Remove the existing SSH fingerprint for the failed node.

      # sed -i '/failed-host-frontend.example.com/d' /root/.ssh/known_hosts
      # sed -i '/failed-host-backend.example.com/d' /root/.ssh/known_hosts
    3. Copy the public key from the Ansible controller node to the freshly installed node.

      # ssh-copy-id root@new-host-backend.example.com
      # ssh-copy-id root@new-host-frontend.example.com
    4. Verify that you can log in to all hosts in the cluster, including the Ansible controller node, using key-based SSH authentication without a password. Test access using all network addresses. The following example assumes that the Ansible controller node is host1.

      # ssh root@host1-backend.example.com
      # ssh root@host1-frontend.example.com
      # ssh root@host2-backend.example.com
      # ssh root@host2-frontend.example.com
      # ssh root@new-host-backend.example.com
      # ssh root@new-host-frontend.example.com

      Use ssh-copy-id to copy the public key to any host you cannot log into without a password using this method.

      # ssh-copy-id root@host-frontend.example.com
      # ssh-copy-id root@host-backend.example.com

4.3. Recreating disk configuration without backups

If you do not have backup configuration files available for your cluster, use the following sections to recreate the configuration so that you can continue to use the existing bricks and their data.

4.3.1. Recreating encryption configuration

If the failed host used encryption, but you do not have backup encryption configuration available, you need to recreate your encryption configuration when you replace the failed host. Follow these steps to create encryption configuration files on the replacement host that match the other hosts in your existing cluster.

Procedure

  1. Set new keys and key files.

    1. Store the passphrase for the LUKS encrypted disk in a temporary file in the /root directory.

      # echo passphrase > /root/key

      If each disk has a separate passphrase, save them separately.

      # echo passphraseA > /root/sda_key
      # echo passphraseB > /root/sdb_key
      # echo passphraseC > /root/sdc_key
      # echo passphraseD > /root/sdd_key
    2. Generate new key files.

      1. Generate a random key file for each disk.

        # for disk in sda sdb sdc sdd; do dd if=/dev/urandom of=/etc/${disk}_keyfile bs=1024 count=8192; done
      2. Set appropriate permissions on the new keyfiles.

        # chmod 400 /etc/*_keyfile
    3. Set the new key for each disk.

      # cryptsetup luksAddKey /dev/sda /etc/sda_keyfile --key-file /root/sda_key
      # cryptsetup luksAddKey /dev/sdb /etc/sdb_keyfile --key-file /root/sdb_key
      # cryptsetup luksAddKey /dev/sdc /etc/sdc_keyfile --key-file /root/sdc_key
      # cryptsetup luksAddKey /dev/sdd /etc/sdd_keyfile --key-file /root/sdd_key
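
      Optionally, confirm that the new key was added by reviewing the key slots on each device. This is a quick sanity check; the output format differs between LUKS1 and LUKS2.

      # cryptsetup luksDump /dev/sda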
  2. Verify each device can be opened with its key file.

    1. Determine the LUKS UUID for each device.

      # cryptsetup luksUUID /dev/sdX
    2. Open each device using its key file and UUID.

      # cryptsetup luksOpen UUID=sdX-UUID luks_sdX -d /etc/sdX_keyfile

      For example:

      # cryptsetup luksOpen UUID=a28a19c7-6028-44df-b0b8-e5245944710c luks_sda -d /etc/sda_keyfile
  3. Configure automatic decryption at boot time.

    Add a line for each device to the /etc/crypttab file using the following format.

    # echo luks_sdX UUID=sdX-UUID /etc/sdX_keyfile >> /etc/crypttab

    For example:

    # echo luks_sda UUID=a28a19c7-6028-44df-b0b8-e5245944710c /etc/sda_keyfile >> /etc/crypttab
  4. Set up Network-Bound Disk Encryption on the root disk.

    1. Change into the hc-ansible-deployment directory:

      # cd /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment
    2. Create the inventory file.

      1. Make a copy of the luks_tang_inventory.yml file for future reference.

        # cp luks_tang_inventory.yml luks_tang_inventory.yml.backup
      2. Define your configuration in the luks_tang_inventory.yml file.

        Use the example luks_tang_inventory.yml file to define the details of disk encryption on each host. A complete outline of this file is available in Understanding the luks_tang_inventory.yml file.
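
        The following is a minimal sketch of such an inventory, based on the sample file shipped in the hc-ansible-deployment directory. All host names, device names, passphrases, and the Tang server URL are placeholders, and parameter names can vary between versions, so verify them against Understanding the luks_tang_inventory.yml file.

        hc_nodes:
          hosts:
            new-host-backend.example.com:
              blacklist_mpath_devices:
                - sdb
              gluster_infra_luks_devices:
                - devicename: /dev/sdb
                  passphrase: example-passphrase
              rootpassphrase: example-passphrase
              rootdevice: /dev/sda2
              networkinterface: eth1
          vars:
            ip_version: IPv4
            ip_config_method: dhcp
            gluster_infra_tangservers:
              - url: http://tang-server.example.com:80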

    3. Encrypt the luks_tang_inventory.yml file and specify a password using ansible-vault.

      The required variables in luks_tang_inventory.yml include password values, so it is important to encrypt the file to protect the password values.

      # ansible-vault encrypt luks_tang_inventory.yml

      Enter and confirm a new vault password when prompted.

    4. Execute the luks_tang_setup.yml playbook with the bindtang tag.

      # ansible-playbook -i luks_tang_inventory.yml tasks/luks_tang_setup.yml --tags=bindtang --ask-vault-pass

      Enter the vault password for this file when prompted to start disk encryption configuration.

4.3.2. Recreating deduplication and compression (VDO) configuration

If the failed host used deduplication and compression (VDO), but you do not have backup configuration information available, you need to recreate the deduplication and compression configuration when you replace the failed host. Follow these steps to create deduplication and compression configuration files on the replacement host that match the other hosts in your existing cluster.

Procedure

  1. Copy the /etc/vdoconf.yml file from a healthy node to the replacement node.

    # scp /etc/vdoconf.yml root@new-node.example.com:/etc/
  2. Edit the indicated values in the /etc/vdoconf.yml file to provide the correct values for your replacement node.

    Important

    Be careful when editing this file. Editing this file by hand is supported only when reconstructing deduplication and compression configuration without a backup file.

    vdo_sd*
      Change this parameter to match the name of your VDO device.

    device
      Specify the VDO device using its by-id path. For normal volumes, this is something like /dev/disk/by-id/scsi-xxx. For encrypted volumes, this is something like /dev/disk/by-id/dm-uuid-CRYPT-LUKS2-xxxxx.

    For example:

    # cat /etc/vdoconf.yml
    
    config: !Configuration
      vdos:
        vdo_sdc: !VDOService
          ...
          device: /dev/disk/by-id/scsi-360030480197f830125618adb17bac04c
          ...
          logicalSize: 180T
          ...
          physicalSize: 18625G
          ...
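
    To find the by-id path for a device, you can list the symlinks under /dev/disk/by-id. This is an optional lookup; the device name sdc is illustrative.

    # ls -l /dev/disk/by-id/ | grep sdc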
  3. Restart the VDO service.

    # systemctl restart vdo.service

4.3.3. Restoring disk mount configuration

If you do not have backup disk mount configuration, you need to recreate your configuration when you replace a host. Follow these steps to reconstruct disk mount configuration.

Procedure

  1. Scan existing physical volumes, volume groups, and logical volumes.

    # pvscan
    # vgscan
    # lvscan
  2. Determine the UUID of each gluster brick.

    # blkid lv_name
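
    For example, assuming the default RHHI-V volume group and logical volume naming (the device path and output shown are illustrative):

    # blkid /dev/mapper/gluster_vg_sdb-gluster_lv_engine
    /dev/mapper/gluster_vg_sdb-gluster_lv_engine: UUID="64dfd1b1-4333-4ef6-8835-1053c6904d93" TYPE="xfs"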
  3. Add a line to the /etc/fstab file for each gluster brick, using the UUID.

    # echo "UUID=64dfd1b1-4333-4ef6-8835-1053c6904d93 /gluster_bricks/engine xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0 0 0" >> /etc/fstab

    Volumes that use deduplication and compression need additional mount options, as shown:

    # echo "UUID=64dfd1b1-4333-4ef6-8835-1053c6904d93 /gluster_bricks/vmstore xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0" >> /etc/fstab
  4. Create mount directories based on information from volumes.

    # mkdir -p /gluster_bricks/{engine,vmstore,data}
  5. Mount all bricks.

    # mount -a
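
    Optionally, confirm that all bricks are mounted as expected:

    # mount | grep gluster_bricks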
  6. Set the required SELinux labels on all brick mount points.

    # semanage fcontext -a -t glusterd_brick_t /gluster_bricks/engine
    # semanage fcontext -a -t glusterd_brick_t /gluster_bricks/vmstore
    # semanage fcontext -a -t glusterd_brick_t /gluster_bricks/data
    # restorecon -Rv /gluster_bricks/engine
    # restorecon -Rv /gluster_bricks/vmstore
    # restorecon -Rv /gluster_bricks/data
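
    Optionally, verify the new labels on the mount points:

    # ls -dZ /gluster_bricks/*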

4.4. Creating the node_prep_inventory.yml file

Define the replacement node in the node_prep_inventory.yml file.

Procedure

  1. Familiarize yourself with your Gluster configuration.

    The configuration that you define in your inventory file must match the existing Gluster volume configuration. Use gluster volume info to check where your bricks should be mounted for each Gluster volume, for example:

    # gluster volume info engine | grep -i brick
    Number of Bricks: 1 x 3 = 3
    Bricks:
    Brick1: host1.example.com:/gluster_bricks/engine/engine
    Brick2: host2.example.com:/gluster_bricks/engine/engine
    Brick3: host3.example.com:/gluster_bricks/engine/engine
  2. Back up the node_prep_inventory.yml file.

    # cd /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment
    # cp node_prep_inventory.yml node_prep_inventory.yml.bk
  3. Edit the node_prep_inventory.yml file to define your node preparation.

    See Appendix B, Understanding the node_prep_inventory.yml file for more information about this inventory file and its parameters.

4.5. Creating the node_replace_inventory.yml file

Define your cluster hosts by creating a node_replace_inventory.yml file.

Procedure

  1. Back up the node_replace_inventory.yml file.

    # cd /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment
    # cp node_replace_inventory.yml node_replace_inventory.yml.bk
  2. Edit the node_replace_inventory.yml file to define your cluster.

    See Appendix C, Understanding the node_replace_inventory.yml file for more information about this inventory file and its parameters.

4.6. Executing the replace_node.yml playbook file

The replace_node.yml playbook reconfigures a Red Hat Hyperconverged Infrastructure for Virtualization cluster to use a new node after an existing cluster node has failed.

Procedure

  1. Execute the playbook.

    # cd /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/
    # ansible-playbook -i node_prep_inventory.yml -i node_replace_inventory.yml tasks/replace_node.yml
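
    After the playbook completes, you can optionally confirm that the replacement node has joined the trusted storage pool by running the following command on a healthy host:

    # gluster peer status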

4.7. Finalizing host replacement

After you have replaced a failed host with a new host, follow these steps to ensure that the cluster is connected to the new host and properly activated.

Procedure

  1. Activate the host.

    1. Log in to the Red Hat Virtualization Administrator Portal.
    2. Click Compute → Hosts and observe that the replacement host is listed with a state of Maintenance.
    3. Select the host and click Management → Activate.
    4. Wait for the host to reach the Up state.
  2. Attach the gluster network to the host.

    1. Click Compute → Hosts and select the host.
    2. Click Network Interfaces → Setup Host Networks.
    3. Drag and drop the newly created network to the correct interface.
    4. Ensure that the Verify connectivity between Host and Engine checkbox is checked.
    5. Ensure that the Save network configuration checkbox is checked.
    6. Click OK to save.
    7. Verify the health of the network.

      Click the Network Interfaces tab and check the state of the host’s network.

      If the network interface enters an "Out of sync" state or does not have an IP Address, click Management → Refresh Capabilities.

4.8. Verifying healing in progress

After replacing a failed host with a new host, verify that your storage is healing as expected.

Procedure

  • Verify that healing is in progress.

    Run the following command on any hyperconverged host:

    # for vol in `gluster volume list`; do gluster volume heal $vol info summary; done

    The output shows a summary of healing activity on each brick in each volume, for example:

    Brick brick1
    Status: Connected
    Total Number of entries: 3
    Number of entries in heal pending: 2
    Number of entries in split-brain: 1
    Number of entries possibly healing: 0

    Depending on brick size, volumes can take a long time to heal. You can still run and migrate virtual machines using this node while the underlying storage heals.
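
    To monitor progress over time, you can optionally re-run the summary periodically, for example with watch (the volume name and interval shown are illustrative):

    # watch -n 60 gluster volume heal engine info summary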
