Chapter 2. Upgrading a standalone Manager local database environment


2.1. Upgrading from Red Hat Virtualization 4.3 to 4.4

Upgrading your environment from 4.3 to 4.4 involves the following steps:

Upgrade Considerations

  • When planning to upgrade, see Red Hat Virtualization 4.4 upgrade considerations and known issues.
  • When upgrading from Open Virtual Network (OVN) and Open vSwitch (OvS) 2.11 to OVN 2021 and OvS 2.15, the process is transparent to the user as long as the following conditions are met:

    • The Manager is upgraded first.
    • The ovirt-provider-ovn security groups must be disabled before the host upgrade for all OVN networks that are expected to work between hosts running OVN/OvS version 2.11.
    • The hosts are upgraded to match OVN version 2021 or higher and OvS version 2.15. You must complete this step in the Administration Portal, so you can properly reconfigure OVN and refresh the certificates.
    • The host is rebooted after an upgrade.
Note

To verify whether the provider and OVN were configured successfully on the host, check the OVN configured flag on the General tab for the host. If OVN Configured is set to No, click Management → Refresh Capabilities. This setting is also available in the REST API. If refreshing the capabilities fails, you can configure OVN by reinstalling the host from Manager 4.4 or higher.
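
For example, a capabilities refresh can also be triggered through the REST API. The following call is a minimal sketch; the FQDN, credentials, CA certificate path, and host ID are placeholders for your environment:

    # curl --cacert /etc/pki/ovirt-engine/ca.pem \
        --user admin@internal:password \
        --request POST \
        --header "Content-Type: application/xml" \
        --data "<action/>" \
        https://manager.example.com/ovirt-engine/api/hosts/<host_id>/refresh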

2.1.1. Prerequisites

  • Plan for any necessary virtual machine downtime. After you update the clusters' compatibility versions during the upgrade, a new hardware configuration is automatically applied to each virtual machine once it reboots. You must reboot any running or suspended virtual machines as soon as possible to apply the configuration changes.
  • Ensure your environment meets the requirements for Red Hat Virtualization 4.4. For a complete list of prerequisites, see the Planning and Prerequisites Guide.
  • When upgrading Red Hat Virtualization Manager, it is recommended that you use one of the existing hosts. If you decide to use a new host, you must assign a unique name to the new host and then add it to the existing cluster before you begin the upgrade procedure.

2.1.2. Analyzing the Environment

It is recommended to run the Log Collection Analysis tool and the Image Discrepancies tool prior to performing updates and for troubleshooting. These tools analyze your environment for known issues that might prevent you from performing an update, and provide recommendations to resolve them.

2.1.3. Log Collection Analysis tool

Run the Log Collection Analysis tool prior to performing updates and for troubleshooting. The tool analyzes your environment for known issues that might prevent you from performing an update, and provides recommendations to resolve them. The tool gathers detailed information about your system and presents it as an HTML file.

Prerequisites

  • Ensure the Manager has the correct repositories enabled. For the list of required repositories, see Enabling the Red Hat Virtualization Manager Repositories for Red Hat Virtualization 4.3.

    Updates to the Red Hat Virtualization Manager are released through the Content Delivery Network.

Procedure

  1. Install the Log Collection Analysis tool on the Manager machine:

    # yum install rhv-log-collector-analyzer
  2. Run the tool:

    # rhv-log-collector-analyzer --live

    A detailed report is displayed.

    By default, the report is saved to a file called analyzer_report.html.

    To save the file to a specific location, use the --html flag and specify the location:

    # rhv-log-collector-analyzer --live --html=/directory/filename.html
  3. You can use the ELinks text mode web browser to read the analyzer reports within the terminal. To install the ELinks browser:

    # yum install -y elinks
  4. Launch ELinks and open analyzer_report.html.

    # elinks /home/user1/analyzer_report.html

    To navigate the report, use the following commands in ELinks:

    • Insert to scroll up
    • Delete to scroll down
    • PageUp to page up
    • PageDown to page down
    • Left Bracket to scroll left
    • Right Bracket to scroll right

2.1.3.1. Monitoring snapshot health with the image discrepancies tool

The RHV Image Discrepancies tool analyzes image data in the Storage Domain and RHV Database. It alerts you if it finds discrepancies in volumes and volume attributes, but does not fix those discrepancies. Use this tool in a variety of scenarios, such as:

  • Before upgrading versions, to avoid carrying over broken volumes or chains to the new version.
  • Following a failed storage operation, to detect volumes or attributes in a bad state.
  • After restoring the RHV database or storage from backup.
  • Periodically, to detect potential problems before they worsen.
  • To analyze snapshot-related or live storage migration-related issues, and to verify system health after fixing these types of problems.

Prerequisites

  • Required Versions: this tool was introduced in RHV version 4.3.8 with rhv-log-collector-analyzer-0.2.15-0.el7ev.
  • Because data collection runs simultaneously at different places and is not atomic, stop all activity in the environment that can modify the storage domains. That is, do not create or remove snapshots, and do not edit, move, create, or remove disks. Otherwise, false detection of inconsistencies may occur. Virtual machines can remain running normally during the process.
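
    To confirm that the installed package meets this version requirement, you can query it on the Manager machine, for example:

    # rpm -q rhv-log-collector-analyzer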

Procedure

  1. To run the tool, enter the following command on the RHV Manager:

    # rhv-image-discrepancies
  2. If the tool finds discrepancies, rerun it to confirm the results, especially if there is a chance some operations were performed while the tool was running.
Note

This tool includes any Export and ISO storage domains and may report discrepancies for them. If so, these can be ignored, as these storage domains do not have entries for images in the RHV database.

Understanding the results

The tool reports the following:

  • If there are volumes that appear on the storage but are not in the database, or appear in the database but are not on the storage.
  • If some volume attributes differ between the storage and the database.

Sample output:

 Checking storage domain c277ad93-0973-43d9-a0ca-22199bc8e801
    Looking for missing images...
    No missing images found
    Checking discrepancies between SD/DB attributes...
    image ef325650-4b39-43cf-9e00-62b9f7659020 has a different attribute capacity on storage(2696984576) and on DB(2696986624)
    image 852613ce-79ee-4adc-a56a-ea650dcb4cfa has a different attribute capacity on storage(5424252928) and on DB(5424254976)

 Checking storage domain c64637b4-f0e8-408c-b8af-6a52946113e2
    Looking for missing images...
    No missing images found
    Checking discrepancies between SD/DB attributes...
    No discrepancies found

You can now update the Manager to the latest version of 4.3.

2.1.4. Updating the Red Hat Virtualization Manager

Prerequisites

  • Ensure the Manager has the correct repositories enabled. For the list of required repositories, see Enabling the Red Hat Virtualization Manager Repositories for Red Hat Virtualization 4.3.

    Updates to the Red Hat Virtualization Manager are released through the Content Delivery Network.

Procedure

  1. On the Manager machine, check if updated packages are available:

    # engine-upgrade-check
  2. Update the setup packages:

    # yum update ovirt\*setup\* rh\*vm-setup-plugins
  3. Update the Red Hat Virtualization Manager with the engine-setup script. The engine-setup script prompts you with some configuration questions, then stops the ovirt-engine service, downloads and installs the updated packages, backs up and updates the database, performs post-installation configuration, and starts the ovirt-engine service.

    # engine-setup

    When the script completes successfully, the following message appears:

    Execution of setup completed successfully
    Note

    The engine-setup script is also used during the Red Hat Virtualization Manager installation process, and it stores the configuration values supplied. During an update, the stored values are displayed when previewing the configuration, and might not be up to date if engine-config was used to update configuration after installation. For example, if engine-config was used to update SANWipeAfterDelete to true after installation, engine-setup will output "Default SAN wipe after delete: False" in the configuration preview. However, the updated values will not be overwritten by engine-setup.
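
    For example, to check the value currently in effect for a given key, you can query engine-config directly (shown here as a sketch with the key from the note above):

    # engine-config -g SANWipeAfterDelete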

    Important

    The update process might take some time. Do not stop the process before it completes.

  4. Update the base operating system and any optional packages installed on the Manager:

    # yum update --nobest
    Important

    If you encounter a required Ansible package conflict during the update, see Cannot perform yum update on my RHV manager (ansible conflict).

    Important

    If any kernel packages were updated, reboot the machine to complete the update.
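
    One way to check whether a reboot is needed is to compare the running kernel with the newest installed kernel; this is a sketch, not part of the official procedure:

    # uname -r
    # rpm -q --last kernel | head -n 1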

You can now upgrade the Manager to 4.4.

2.1.5. Upgrading the Red Hat Virtualization Manager from 4.3 to 4.4

Red Hat Virtualization Manager 4.4 is only supported on Red Hat Enterprise Linux versions 8.2 to 8.6. You need to do a clean installation of Red Hat Enterprise Linux 8.6 and Red Hat Virtualization Manager 4.4, even if you are using the same physical machine that you use to run RHV Manager 4.3.

The upgrade process requires restoring Red Hat Virtualization Manager 4.3 backup files onto the Red Hat Virtualization Manager 4.4 machine.

Prerequisites

  • All data centers and clusters in the environment must have the cluster compatibility level set to version 4.2 or 4.3.
  • All virtual machines in the environment must have the cluster compatibility level set to version 4.3.
  • If you use an external CA to sign HTTPS certificates, follow the steps in Replacing the Red Hat Virtualization Manager CA Certificate in the Administration Guide. The backup and restore include the 3rd-party certificate, so you should be able to log in to the Administration portal after the upgrade. Ensure the CA certificate is added to system-wide trust stores of all clients to ensure the foreign menu of virt-viewer works. See BZ#1313379 for more information.
Note

Connected hosts and virtual machines can continue to work while the Manager is being upgraded.
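
One way to verify the current cluster compatibility levels before you begin is to query the clusters through the REST API and inspect the version element of each cluster. This is a minimal sketch; the FQDN, credentials, and CA certificate path are placeholders for your environment:

    # curl --silent --cacert /etc/pki/ovirt-engine/ca.pem \
        --user admin@internal:password \
        https://manager.example.com/ovirt-engine/api/clusters \
        | grep -A 2 '<version>'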

Procedure

  1. Log in to the Manager machine.
  2. Back up the Red Hat Virtualization Manager 4.3 environment.

    # engine-backup --scope=all --mode=backup --file=backup.bck --log=backuplog.log
  3. Copy the backup file to a storage device outside of the RHV environment.
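
    For example, you could copy the backup to a remote backup host with scp; the user, host name, and destination path are placeholders:

    # scp backup.bck backupuser@backup-storage.example.com:/backups/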
  4. Install Red Hat Enterprise Linux 8.6. See Performing a standard RHEL installation for more information.
  5. Complete the steps to install Red Hat Virtualization Manager 4.4, including running the command yum install rhvm, but do not run engine-setup. See one of the Installing Red Hat Virtualization guides for more information.
  6. Copy the backup file to the Red Hat Virtualization Manager 4.4 machine and restore it.

    # engine-backup --mode=restore --file=backup.bck --provision-all-databases
    Note

    If the backup contained grants for extra database users, this command creates the extra users with random passwords. You must change these passwords manually if the extra users require access to the restored system. See https://access.redhat.com/articles/2686731.

  7. Ensure the Manager has the correct repositories enabled. For the list of required repositories, see Enabling the Red Hat Virtualization Manager Repositories for Red Hat Virtualization 4.4.

    Updates to the Red Hat Virtualization Manager are released through the Content Delivery Network.

  8. Install optional extension packages if they were installed on the Red Hat Virtualization Manager 4.3 machine.

    # yum install ovirt-engine-extension-aaa-ldap ovirt-engine-extension-aaa-misc
    Note

    The ovirt-engine-extension-aaa-ldap is deprecated. For new installations, use Red Hat Single Sign On. For more information, see Installing and Configuring Red Hat Single Sign-On in the Administration Guide.

    Note

    The configuration for these package extensions must be manually reapplied because they are not migrated as part of the backup and restore process.

  9. Configure the Manager by running the engine-setup command:

    # engine-setup
  10. Decommission the Red Hat Virtualization Manager 4.3 machine if a different machine is used for Red Hat Virtualization Manager 4.4. Two different Managers must not manage the same hosts or storage.

The Red Hat Virtualization Manager 4.4 is now installed, with the cluster compatibility version set to 4.2 or 4.3, whichever was the preexisting cluster compatibility version. Now you need to upgrade the hosts in your environment to RHV 4.4, after which you can change the cluster compatibility version to 4.4.

You can now update the hosts.

2.1.6. Migrating hosts and virtual machines from RHV 4.3 to 4.4

You can migrate hosts and virtual machines from Red Hat Virtualization 4.3 to 4.4 such that you minimize the downtime of virtual machines in your environment.

This process requires migrating all virtual machines off one host at a time, so that the host is available to upgrade to RHV 4.4. After the upgrade, you can reattach the host to the Manager.

Warning

When installing or reinstalling the host’s operating system, Red Hat strongly recommends that you first detach any existing non-OS storage that is attached to the host to avoid accidental initialization of these disks, and with that, potential data loss.

Note

CPU-passthrough virtual machines might not migrate properly from RHV 4.3 to RHV 4.4.

RHV 4.3 and RHV 4.4 are based on RHEL 7 and RHEL 8, respectively, which have different kernel versions with different CPU flags and microcodes. This can cause problems in migrating CPU-passthrough virtual machines.

Prerequisites

  • Hosts for RHV 4.4 require Red Hat Enterprise Linux versions 8.2 to 8.6. A clean installation of Red Hat Enterprise Linux 8.6 or Red Hat Virtualization Host 4.4 is required, even if you are using the same physical machine that you used to run hosts for RHV 4.3.
  • Red Hat Virtualization Manager 4.4 is installed and running.
  • The compatibility level of the data center and cluster to which the hosts belong is set to 4.2 or 4.3. All data centers and clusters in the environment must have the cluster compatibility level set to version 4.2 or 4.3 before you start the procedure.

Procedure

  1. Pick a host to upgrade and migrate that host’s virtual machines to another host in the same cluster. You can use Live Migration to minimize virtual machine downtime. For more information, see Migrating Virtual Machines Between Hosts in the Virtual Machine Management Guide.
  2. Put the host into maintenance mode and remove the host from the Manager. For more information, see Removing a Host in the Administration Guide.
  3. Install Red Hat Enterprise Linux 8.6, or RHVH 4.4. For more information, see Installing Hosts for Red Hat Virtualization in one of the Installing Red Hat Virtualization guides.
  4. Install the appropriate packages to enable the host for RHV 4.4. For more information, see Installing Hosts for Red Hat Virtualization in one of the Installing Red Hat Virtualization guides.
  5. Add this host to the Manager, assigning it to the same cluster. You can now migrate virtual machines onto this host. For more information, see Adding Standard Hosts to the Manager in one of the Installing Red Hat Virtualization guides.

Repeat these steps to migrate virtual machines and upgrade hosts for the rest of the hosts in the same cluster, one by one, until all are running Red Hat Virtualization 4.4.

2.1.7. Upgrading RHVH while preserving local storage

Environments with local storage cannot migrate virtual machines to a host in another cluster because the local storage is not shared with other storage domains. To upgrade RHVH 4.3 hosts that have a local storage domain, reinstall the host while preserving the local storage, create a new local storage domain in the 4.4 environment, and import the previous local storage into the new domain.

Prerequisites

  • Red Hat Virtualization Manager 4.4 is installed and running.
  • The compatibility level of the data center and cluster to which the host belongs is set to 4.2 or 4.3.

Procedure

  1. Ensure that the RHVH 4.3 host's local storage domain is in maintenance mode before starting this process. Complete these steps:

    1. Open the Data Centers tab.
    2. Click the Storage tab in the Details pane and select the storage domain in the results list.
    3. Click Maintenance.
  2. Reinstall the Red Hat Virtualization Host, as described in Installing Red Hat Virtualization Host in the Installation Guide.

    Important

    When selecting the device on which to install RHVH from the Installation Destination screen, do not select the device(s) storing the virtual machines. Only select the device where the operating system should be installed.

    If you are using Kickstart to install the host, ensure that you preserve the devices containing the virtual machines by adding the following to the Kickstart file, replacing `device` with the relevant device.

    clearpart --all --drives=device

    For more information on using Kickstart, see Kickstart references in Red Hat Enterprise Linux 8 Performing an advanced RHEL installation.

  3. On the reinstalled host, create a directory, for example /data, in which to recover the previous environment.

    # mkdir /data
  4. Mount the previous local storage in the new directory. In our example, /dev/sdX1 is the local storage:

    # mount /dev/sdX1 /data
  5. Set the following permissions for the new directory.

    # chown -R 36:36 /data
    # chmod -R 0755 /data
  6. Red Hat recommends that you also automatically mount the local storage via /etc/fstab in case the server requires a reboot:

    # blkid | grep -i sdX1
    /dev/sdX1: UUID="a81a6879-3764-48d0-8b21-2898c318ef7c" TYPE="ext4"
    # vi /etc/fstab
    UUID="a81a6879-3764-48d0-8b21-2898c318ef7c" /data    ext4    defaults     0       0
  7. In the Administration Portal, create a data center and select Local in the Storage Type drop-down menu.
  8. Configure a cluster on the new data center. See Creating a New Cluster in the Administration Guide for more information.
  9. Add the host to the Manager. See Adding Standard Hosts to the Red Hat Virtualization Manager in one of the Installing Red Hat Virtualization guides for more information.
  10. On the host, create a new directory that will be used to create the initial local storage domain. For example:

    # mkdir -p /localfs
    # chown 36:36 /localfs
    # chmod -R 0755 /localfs
  11. In the Administration Portal, open the Storage tab and click New Domain to create a new local storage domain.
  12. Set the name to localfs and set the path to /localfs.
  13. Once the local storage is active, click Import Domain and set the domain’s details. For example, define Data as the name, Local on Host as the storage type and /data as the path.
  14. Click OK to confirm the message that appears informing you that storage domains are already attached to the data center.
  15. Activate the new storage domain:

    1. Open the Data Centers tab.
    2. Click the Storage tab in the details pane and select the new data storage domain in the results list.
    3. Click Activate.
  16. Once the new storage domain is active, import the virtual machines and their disks:

    1. In the Storage tab, select data.
    2. Select the VM Import tab in the details pane, select the virtual machines and click Import. See Importing Virtual Machines from a Data Domain in the Virtual Machine Management Guide for more details.
  17. Once you have ensured that all virtual machines have been successfully imported and are functioning properly, you can move localfs to maintenance mode.
  18. Click the Storage tab and select localfs from the results list.

    1. Click the Data Center tab in the details pane.
    2. Click Maintenance, then click OK to move the storage domain to maintenance mode.
    3. Click Detach. The Detach Storage confirmation window opens.
    4. Click OK.

You have now upgraded the host to version 4.4, created a new local storage domain, and imported the 4.3 storage domain and its virtual machines.

2.1.8. Upgrading RHVH while preserving Gluster storage

In environments that use Gluster as storage, you can back up the Gluster configuration and restore it after the RHVH upgrade. Try to keep workloads on all virtual machines using Gluster storage as light as possible to shorten the time required to upgrade. If there are highly write-intensive workloads, expect more time to restore.

Note

GlusterFS Storage is deprecated, and will no longer be supported in future releases.

Prerequisites

  • If there are geo-replication schedules on the storage domains, remove those schedules to avoid upgrade conflicts.
  • No geo-replication syncs are currently running.
  • Additional disk space of 100 GB is required on three hosts to create a new volume for the new RHVH 4.4 Manager deployment.
  • All data centers and clusters in the environment must have a cluster compatibility level of 4.3 before you start the procedure.

Restriction

  • Network-Bound Disk Encryption (NBDE) is supported only with new deployments with Red Hat Virtualization 4.4. This feature cannot be enabled during the upgrade.

Procedure

  1. Create a new Gluster volume for RHVH 4.4 Manager deployment.

    1. Create a new brick on each host for the new RHVH 4.4 self-hosted engine virtual machine (VM).
    2. If you have a spare disk in the setup, follow the document Create Volume from the web console.
    3. If there is enough space for a new 100 GB Manager brick in the existing volume group (VG), it can be used as a new Manager logical volume (LV).

      Run the following commands on all the hosts, unless specified otherwise explicitly:

    4. Check the free size of the Volume Group (VG).

      # vgdisplay <VG_NAME> | grep -i free
    5. Create one more Logical Volume in this VG.

      # lvcreate -n gluster_lv_newengine -L 100G <EXISTING_VG>
    6. Format the new Logical Volume (LV) as XFS.

      # mkfs.xfs  <LV_NAME>
    7. Create the mount point for the new brick.

      # mkdir /gluster_bricks/newengine
    8. Create an entry corresponding to the newly created filesystem in /etc/fstab and mount the filesystem.
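
      For example, using the UUID reported by blkid (the logical volume path and UUID below are placeholders):

      # blkid /dev/<EXISTING_VG>/gluster_lv_newengine
      # vi /etc/fstab
      UUID="<UUID>" /gluster_bricks/newengine xfs defaults 0 0
      # mount /gluster_bricks/newengine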
    9. Set the SELinux Labels on the brick mount points.

      # semanage fcontext -a -t glusterd_brick_t /gluster_bricks/newengine
       restorecon -Rv /gluster_bricks/newengine
    10. Create a new gluster volume by executing the gluster command on one of the hosts in the cluster:

      # gluster volume create newengine replica 3 host1:/gluster_bricks/newengine/newengine host2:/gluster_bricks/newengine/newengine host3:/gluster_bricks/newengine/newengine
    11. Set the required volume options on the newly created volume. Run the following commands on one of the hosts in the cluster:

      # gluster volume set newengine group virt
       gluster volume set newengine network.ping-timeout 30
       gluster volume set newengine cluster.granular-entry-heal enable
       gluster volume set newengine network.remote-dio off
       gluster volume set newengine performance.strict-o-direct on
       gluster volume set newengine storage.owner-uid 36
       gluster volume set newengine storage.owner-gid 36
    12. Start the newly created Gluster volume. Run the following command on one of the hosts in the cluster.

      # gluster volume start newengine
  2. Back up the Gluster configuration on all RHVH 4.3 nodes using the backup playbook.

    1. The backup playbook is available with the latest version of RHVH 4.3. If this playbook is not available, create a playbook and inventory file:

      /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/archive_config.yml

      Example:

       all:
        hosts:
          host1:
          host2:
          host3:
        vars:
          backup_dir: /archive
          nbde_setup: false
          upgrade: true
    2. Edit the backup inventory file with correct details.

        Common variables
        backup_dir ->  Absolute path to the directory that contains the extracted contents of the backup archive
        nbde_setup -> Set to false, as the Red Hat Virtualization 4.3 setup does not support NBDE
        upgrade -> Default value is true. This value has no effect during a backup
    3. Switch to the directory and execute the playbook.

      ansible-playbook -i archive_config_inventory.yml archive_config.yml --tags backupfiles
    4. The backup configuration tar file is generated under /root with the name RHVH-<HOSTNAME>-backup.tar.gz. On all the hosts, copy the backup configuration tar file to the backup host.
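
      For example, from each host (the backup host name and destination path are placeholders):

      # scp /root/RHVH-<HOSTNAME>-backup.tar.gz root@backuphost.example.com:/backups/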
  3. Using the Manager Administration Portal, migrate the VMs running on the first host to other hosts in the cluster.
  4. Back up the Manager configuration.

    # engine-backup --mode=backup --scope=all --file=<backup-file.tar.gz> --log=<logfile>
    Note

    Before creating a backup, do the following:

    • Enable Global Maintenance for the self-hosted engine (SHE).
    • Log in to the Manager VM using SSH and stop the ovirt-engine service.
    • Copy the backup file from the self-hosted engine VM to the remote host.
    • Shut down the Manager.
  5. Check for any pending self-heal tasks on all the replica 3 volumes. Wait for the heal to be completed.
  6. Run the following command on one of the hosts:

    # gluster volume heal <volume> info summary
  7. Stop the glusterfs brick process and unmount all the bricks on the first host to maintain file system consistency. Run the following on the first host:

    # pkill glusterfsd; pkill glusterfs
    # systemctl stop glusterd
    # umount /gluster_bricks/*
  8. Reinstall the host with the RHVH 4.4 ISO, formatting only the OS disk.

    Important

    Make sure that the installation does not format the other disks, as bricks are created on top of those disks.

  9. Once the node is up after the RHVH 4.4 installation reboot, subscribe to the RHVH 4.4 repositories as outlined in the Installation Guide, or install the downloaded RHVH 4.4 appliance.

    # yum install <appliance>
  10. Disable the devices used for Gluster bricks.

    1. Create the new SSH private and public key pairs.
    2. Establish SSH public key authentication (passwordless SSH) to the same host, using the frontend and backend network FQDNs.
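
      For example (a sketch; the FQDNs below are placeholders for this host's frontend and backend names):

      # ssh-keygen -t rsa
      # ssh-copy-id root@host1-frontend-FQDN.example.com
      # ssh-copy-id root@host1-backend-FQDN.example.com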
    3. Create the inventory file:

      /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/blacklist_inventory.yml

      Example:

       hc_nodes:
        hosts:
          host1-backend-FQDN.example.com:
            blacklist_mpath_devices:
               - sda
               - sdb
    4. Run the playbook

      ansible-playbook -i blacklist_inventory.yml /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/tasks/gluster_deployment.yml --tags blacklistdevices*
  11. Using scp, copy the Manager backup and host configuration tar files from the backup host to the newly installed host, and untar the content.
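
    For example, the two archives could be copied with scp before extraction (the backup host name, paths, and file names are placeholders):

    # scp root@backuphost.example.com:/backups/<manager-backup-file.tar.gz> /root/
    # scp root@backuphost.example.com:/backups/RHVH-<HOSTNAME>-backup.tar.gz /root/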
  12. Restore the Gluster configuration files.

    1. Extract the contents of the Gluster configuration files

       # mkdir /archive
       # tar -xvf /root/ovirt-host-host1.example.com.tar.gz -C /archive/
    2. Edit the inventory file to perform restoration of the configuration files. The inventory file is available at /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/archive_config_inventory.yml

      Example playbook content:

       all:
         hosts:
           host1.example.com:
         vars:
           backup_dir: /archive
           nbde_setup: false
           upgrade: true
      Important
      Use only one host under the hosts section of the restoration playbook.
    3. Execute the playbook to restore configuration files

      ansible-playbook -i archive_config_inventory.yml archive_config.yml --tags restorefiles
  13. Perform the Manager deployment with the option --restore-from-file pointing to the backed-up archive from the Manager. This Manager deployment can be done interactively using the hosted-engine --deploy command, provided the storage corresponds to the newly created Manager volume. The same can also be done using ovirt-ansible-hosted-engine-setup in an automated procedure. The following procedure is an automated method for deploying a HostedEngine VM using the backup:

    1. Create a playbook for HostedEngine deployment in the newly installed host:

      /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/he.yml

      - name: Deploy oVirt hosted engine
        hosts: localhost
        roles:
          - role: ovirt.hosted_engine_setup
    2. Update the HostedEngine related information using the template file:

      /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/he_gluster_vars.json

      Example:

      # cat /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/he_gluster_vars.json
      
      {
        "he_appliance_password": "<password>",
        "he_admin_password": "<password>",
        "he_domain_type": "glusterfs",
        "he_fqdn": "<hostedengine.example.com>",
        "he_vm_mac_addr": "<00:18:15:20:59:01>",
        "he_default_gateway": "<19.70.12.254>",
        "he_mgmt_network": "ovirtmgmt",
        "he_storage_domain_name": "HostedEngine",
        "he_storage_domain_path": "</newengine>",
        "he_storage_domain_addr": "<host1.example.com>",
        "he_mount_options": "backup-volfile-servers=<host2.example.com>:<host3.example.com>",
        "he_bridge_if": "<eth0>",
        "he_enable_hc_gluster_service": true,
        "he_mem_size_MB": "16384",
        "he_cluster": "Default",
        "he_restore_from_file": "/root/engine-backup.tar.gz",
        "he_vcpus": 4
      }
      Important
      • In the above he_gluster_vars.json, there are two important values: “he_restore_from_file” and “he_storage_domain_path”. The first option, “he_restore_from_file”, should point to the absolute file name of the Manager backup archive copied to the local machine. The second option, “he_storage_domain_path”, should refer to the newly created Gluster volume.
      • Also note that the Manager VM running the previous version of RHV is down and will be discarded. The MAC address and FQDN of the older Manager VM can be reused for the new Manager as well.
    3. For static Manager network configuration, add more options as listed below:

        “he_vm_ip_addr”:  “<engine VM ip address>”
        “he_vm_ip_prefix”:  “<engine VM ip prefix>”
        “he_dns_addr”:  “<engine VM DNS server>”
        “he_default_gateway”:  “<engine VM default gateway>”
      Important

      If there is no specific DNS available, include two more options: “he_vm_etc_hosts”: true and “he_network_test”: “ping”

    4. Run the playbook to deploy the HostedEngine.

      # cd /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment
      # ansible-playbook he.yml --extra-vars "@he_gluster_vars.json"
    5. Wait for the self-hosted engine deployment to complete.

      Important

      If there are any failures during the self-hosted engine deployment, find the problem by looking at the log messages under /var/log/ovirt-hosted-engine-setup, and fix it. Clean up the failed self-hosted engine deployment using the ovirt-hosted-engine-cleanup command and rerun the deployment.

  14. Log in to the RHVH 4.4 Administration Portal on the newly installed Red Hat Virtualization Manager. Make sure all the hosts are in the Up state, and wait for the self-heal on the Gluster volumes to be completed.
  15. Upgrade the next host.

    1. Move the next host (ideally, the next one in order) to Maintenance mode from the Administration Portal. Stop the Gluster service while moving this host to Maintenance mode.
    2. From the command line of the host, unmount the Gluster bricks:

      # umount /gluster_bricks/*
    3. Reinstall this host with RHVH 4.4.

      Important

      Make sure that the installation does not format the other disks, as bricks are created on top of those disks.

    4. If multipath configuration is not available on the newly installed host, disable the Gluster devices. The inventory file is already created in the first host as part of the step Disable the devices used for Gluster bricks.

      1. Set up SSH public key authentication from the first host to the newly installed host.
      2. Update the inventory with the new host name.
      3. Execute the playbook.
    5. Copy the Gluster configuration tar files from the backup host to the newly installed host and untar the content.
    6. Restore the Gluster configuration on the newly installed host by executing the playbook as described in the step Restore the Gluster configuration files on this host.

      Important

      Edit the playbook on the newly installed host and execute it as described in the step Perform the Manager deployment with the option --restore-from-file…. Do not change the hostname, and execute the playbook on the same host.

    7. Reinstall the host in the RHVH Administration Portal. Copy the authorized key from the first deployed host in RHVH 4.4:

      # scp root@host1.example.com:/root/.ssh/authorized_keys /root/.ssh/
      1. In the Administration Portal, the host is in Maintenance mode. Go to Compute → Hosts, select the host, and click Installation → Reinstall.
      2. In the New Host dialog box, click the HostedEngine tab and select the deploy self-hosted engine deployment action.
      3. Wait for the host to reach Up status.
    8. Make sure that there are no errors in the volumes related to GFID mismatch. If there are any errors, resolve them.

      grep -i "gfid mismatch" /var/log/glusterfs/*
  16. Repeat the step Upgrade the next host for all the RHVH hosts in the cluster.
  17. (optional) If a separate Gluster logical network exists in the cluster, attach the Gluster logical network to the required interface on each host.
  18. Remove the old Manager storage domain. Identify the old Manager storage domain by the name hosted_storage with no gold star next to it, listed under Storage → Domains.

    1. Go to Storage → Domains, select hosted_storage, open the Data Center tab, and select Maintenance.
    2. Wait for the storage domain to move into Maintenance mode.
    3. Once the storage domain moves into Maintenance mode, click Detach; the storage domain moves to Unattached.
    4. Select the unattached storage domain, click Remove, and confirm OK.
  19. Stop and remove the old Manager volume.

    1. Go to Storage → Volumes, and select the old Manager volume. Click Stop, and confirm OK.
    2. Select the same volume, click Remove, and confirm OK.
  20. Update the cluster compatibility version.

    1. Go to Compute → Clusters, select the Default cluster, and click Edit. Update the Compatibility Version to 4.4 and click OK.

      Important

      A warning appears stating that changing the compatibility version requires the virtual machines on the cluster to be restarted. Click OK to confirm.

  21. New Gluster volume options are available with RHVH 4.4; apply those volume options on all the volumes. Execute the following on one of the nodes in the cluster:

    # for vol in $(gluster volume list); do gluster volume set $vol group virt; done
  22. Remove the archives and the extracted contents of the backup configuration files on all nodes.

Creating an additional Gluster volume using the Web Console

  1. Log in to the Manager web console.
  2. Go to Virtualization → Hosted Engine and click Manage Gluster.
  3. Click Create Volume. In the Create Volume window, do the following:

    1. In the Hosts tab, select three different ovirt-ng-nodes with unused disks and click Next.
    2. In the Volumes tab, specify the details of the volume you want to create and click Next.
    3. In the Bricks tab, specify the details of the disks to be used to create the volume and click Next.
    4. In the Review tab, check the generated configuration file for any incorrect information. When you are satisfied, click Deploy.

You can now update the cluster compatibility version.

2.1.9. Changing the Cluster Compatibility Version

Red Hat Virtualization clusters have a compatibility version. The cluster compatibility version indicates the features of Red Hat Virtualization supported by all of the hosts in the cluster. The cluster compatibility is set according to the version of the least capable host operating system in the cluster.

Prerequisites

  • To change the cluster compatibility level, you must first update all the hosts in your cluster to a level that supports your desired compatibility level. Check if there is an icon next to the host indicating an update is available.

Limitations

  • Virtio NICs are enumerated as a different device after upgrading the cluster compatibility level to 4.6. Therefore, the NICs might need to be reconfigured. Red Hat recommends that you test the virtual machines before you upgrade the cluster by setting the cluster compatibility level to 4.6 on the virtual machine and verifying the network connection.

    If the network connection for the virtual machine fails, configure the virtual machine with a custom emulated machine that matches the current emulated machine, for example pc-q35-rhel8.3.0 for 4.5 compatibility version, before upgrading the cluster.

Procedure

  1. In the Administration Portal, click Compute → Clusters.
  2. Select the cluster to change and click Edit.
  3. On the General tab, change the Compatibility Version to the desired value.
  4. Click OK. The Change Cluster Compatibility Version confirmation dialog opens.
  5. Click OK to confirm.
Important

An error message might warn that some virtual machines and templates are incorrectly configured. To fix this error, edit each virtual machine manually. The Edit Virtual Machine window provides additional validations and warnings that show what to correct. Sometimes the issue is automatically corrected and the virtual machine’s configuration just needs to be saved again. After editing each virtual machine, you will be able to change the cluster compatibility version.

You can now update the cluster compatibility version for virtual machines in the cluster.

2.1.10. Changing Virtual Machine Cluster Compatibility

After updating a cluster’s compatibility version, you must update the cluster compatibility version of all running or suspended virtual machines by rebooting them from the Administration Portal, using the REST API, or from within the guest operating system. Virtual machines that require a reboot are marked with the pending changes icon.

Although you can wait to reboot the virtual machines at a convenient time, rebooting immediately is highly recommended so that the virtual machines use the latest configuration. Any virtual machine that has not been rebooted runs with the previous configuration, and subsequent configuration changes made to the virtual machine might overwrite its pending cluster compatibility changes.

Procedure

  1. In the Administration Portal, click Compute → Virtual Machines.
  2. Check which virtual machines require a reboot. In the Vms: search bar, enter the following query:

    next_run_config_exists=True

    The search results show all virtual machines with pending changes.

  3. Select each virtual machine and click Restart. Alternatively, if necessary you can reboot a virtual machine from within the virtual machine itself.
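
    As noted above, a reboot can also be triggered through the REST API. This is a minimal sketch; the FQDN, credentials, CA certificate path, and virtual machine ID are placeholders:

    # curl --cacert /etc/pki/ovirt-engine/ca.pem \
        --user admin@internal:password \
        --request POST \
        --header "Content-Type: application/xml" \
        --data "<action/>" \
        https://manager.example.com/ovirt-engine/api/vms/<vm_id>/reboot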

When the virtual machine starts, the new compatibility version is automatically applied.

Note

You cannot change the cluster compatibility version of a virtual machine snapshot that is in preview. You must first commit or undo the preview.

You can now update the data center compatibility version.

2.1.11. Changing the Data Center Compatibility Version

Red Hat Virtualization data centers have a compatibility version. The compatibility version indicates the version of Red Hat Virtualization with which the data center is intended to be compatible. All clusters in the data center must support the desired compatibility level.

Prerequisites

  • To change the data center compatibility level, you must first update the compatibility version of all clusters and virtual machines in the data center.

Procedure

  1. In the Administration Portal, click Compute → Data Centers.
  2. Select the data center to change and click Edit.
  3. Change the Compatibility Version to the desired value.
  4. Click OK. The Change Data Center Compatibility Version confirmation dialog opens.
  5. Click OK to confirm.