1.2. Upgrading a Self-Hosted Engine from Red Hat Virtualization 4.2 to 4.3
Upgrading a self-hosted engine environment from version 4.2 to 4.3 involves the following steps:
- Make sure you meet the prerequisites, including enabling the correct repositories
- Use the Log Collection Analysis tool and Image Discrepancies tool to check for issues that might prevent a successful upgrade
- Place the environment in global maintenance mode
- Update the 4.2 Manager to the latest version of 4.2
- Upgrade the Manager from 4.2 to 4.3
- Disable global maintenance mode
- Upgrade the self-hosted engine nodes, and any standard hosts
- Update the compatibility version of the clusters
- Reboot any running or suspended virtual machines to update their configuration
- Update the compatibility version of the data centers
- If you previously upgraded to 4.2 without replacing SHA-1 certificates with SHA-256 certificates, you must replace the certificates now.
1.2.1. Prerequisites
- Plan for any necessary virtual machine downtime. After you update the clusters' compatibility versions during the upgrade, a new hardware configuration is automatically applied to each virtual machine once it reboots. You must reboot any running or suspended virtual machines as soon as possible to apply the configuration changes.
- Ensure your environment meets the requirements for Red Hat Virtualization 4.3. For a complete list of prerequisites, see the Planning and Prerequisites Guide.
- When upgrading Red Hat Virtualization Manager, it is recommended that you use one of the existing hosts. If you decide to use a new host, you must assign a unique name to the new host and then add it to the existing cluster before you begin the upgrade procedure.
1.2.2. Analyzing the Environment
It is recommended to run the Log Collection Analysis tool and the Image Discrepancies tool prior to performing updates and for troubleshooting. These tools analyze your environment for known issues that might prevent you from performing an update, and provide recommendations to resolve them.
1.2.3. Log Collection Analysis tool
Run the Log Collection Analysis tool prior to performing updates and for troubleshooting. The tool analyzes your environment for known issues that might prevent you from performing an update, and provides recommendations to resolve them. The tool gathers detailed information about your system and presents it as an HTML file.
Prerequisites
Ensure the Manager has the correct repositories enabled. For the list of required repositories, see Enabling the Red Hat Virtualization Manager Repositories for Red Hat Virtualization 4.2.
Updates to the Red Hat Virtualization Manager are released through the Content Delivery Network.
Procedure
Install the Log Collection Analysis tool on the Manager machine:
# yum install rhv-log-collector-analyzer
Run the tool:
# rhv-log-collector-analyzer --live
A detailed report is displayed.
By default, the report is saved to a file called analyzer_report.html. To save the file to a specific location, use the --html flag and specify the location:
# rhv-log-collector-analyzer --live --html=/directory/filename.html
You can use the ELinks text mode web browser to read the analyzer reports within the terminal. To install the ELinks browser:
# yum install -y elinks
Launch ELinks and open analyzer_report.html:
# elinks /home/user1/analyzer_report.html
To navigate the report, use the following commands in ELinks:
- Insert to scroll up
- Delete to scroll down
- PageUp to page up
- PageDown to page down
- Left Bracket to scroll left
- Right Bracket to scroll right
1.2.3.1. Monitoring snapshot health with the image discrepancies tool
The RHV Image Discrepancies tool analyzes image data in the Storage Domain and RHV Database. It alerts you if it finds discrepancies in volumes and volume attributes, but does not fix those discrepancies. Use this tool in a variety of scenarios, such as:
- Before upgrading versions, to avoid carrying over broken volumes or chains to the new version.
- Following a failed storage operation, to detect volumes or attributes in a bad state.
- After restoring the RHV database or storage from backup.
- Periodically, to detect potential problems before they worsen.
- To analyze snapshot- or live storage migration-related issues, and to verify system health after fixing these types of problems.
Prerequisites
- Required Versions: this tool was introduced in RHV version 4.3.8 with rhv-log-collector-analyzer-0.2.15-0.el7ev.
- Because data collection runs simultaneously at different places and is not atomic, stop all activity in the environment that can modify the storage domains. That is, do not create or remove snapshots, or edit, move, create, or remove disks. Otherwise, false detection of inconsistencies may occur. Virtual machines can remain running normally during the process.
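To confirm that the installed package meets this version requirement, you can query it with rpm; a minimal check on the Manager machine:
# rpm -q rhv-log-collector-analyzer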
Procedure
To run the tool, enter the following command on the RHV Manager:
# rhv-image-discrepancies
- If the tool finds discrepancies, rerun it to confirm the results, especially if there is a chance some operations were performed while the tool was running.
This tool includes any Export and ISO storage domains and may report discrepancies for them. If so, these can be ignored, as these storage domains do not have entries for images in the RHV database.
Understanding the results
The tool reports the following:
- If there are volumes that appear on the storage but are not in the database, or appear in the database but are not on the storage.
- If some volume attributes differ between the storage and the database.
Sample output:
Checking storage domain c277ad93-0973-43d9-a0ca-22199bc8e801
    Looking for missing images...
    No missing images found
    Checking discrepancies between SD/DB attributes...
    image ef325650-4b39-43cf-9e00-62b9f7659020 has a different attribute capacity on storage(2696984576) and on DB(2696986624)
    image 852613ce-79ee-4adc-a56a-ea650dcb4cfa has a different attribute capacity on storage(5424252928) and on DB(5424254976)

Checking storage domain c64637b4-f0e8-408c-b8af-6a52946113e2
    Looking for missing images...
    No missing images found
    Checking discrepancies between SD/DB attributes...
    No discrepancies found
1.2.4. Enabling global maintenance mode
You must place the self-hosted engine environment in global maintenance mode before performing any setup or upgrade tasks on the Manager virtual machine.
Procedure
Log in to one of the self-hosted engine nodes and enable global maintenance mode:
# hosted-engine --set-maintenance --mode=global
Confirm that the environment is in global maintenance mode before proceeding:
# hosted-engine --vm-status
You should see a message indicating that the cluster is in global maintenance mode.
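The exact output varies by version, but it includes a notice similar to the following (shown here as an illustrative sample, not verbatim output):
!! Cluster is in GLOBAL MAINTENANCE mode. If you want to change hosted engine maintenance change it with --set-maintenance !!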
1.2.5. Updating the Red Hat Virtualization Manager
Prerequisites
Ensure the Manager has the correct repositories enabled. For the list of required repositories, see Enabling the Red Hat Virtualization Manager Repositories for Red Hat Virtualization 4.2.
Updates to the Red Hat Virtualization Manager are released through the Content Delivery Network.
Procedure
On the Manager machine, check if updated packages are available:
# engine-upgrade-check
Update the setup packages:
# yum update ovirt\*setup\* rh\*vm-setup-plugins
Update the Red Hat Virtualization Manager with the engine-setup script. The engine-setup script prompts you with some configuration questions, then stops the ovirt-engine service, downloads and installs the updated packages, backs up and updates the database, performs post-installation configuration, and starts the ovirt-engine service.
# engine-setup
When the script completes successfully, the following message appears:
Execution of setup completed successfully
Note: The engine-setup script is also used during the Red Hat Virtualization Manager installation process, and it stores the configuration values supplied. During an update, the stored values are displayed when previewing the configuration, and might not be up to date if engine-config was used to update configuration after installation. For example, if engine-config was used to update SANWipeAfterDelete to true after installation, engine-setup will output "Default SAN wipe after delete: False" in the configuration preview. However, the updated values will not be overwritten by engine-setup.
Important: The update process might take some time. Do not stop the process before it completes.
Update the base operating system and any optional packages installed on the Manager:
# yum update
Important: If you encounter a required Ansible package conflict during the update, see Cannot perform yum update on my RHV manager (ansible conflict).
Important: If any kernel packages were updated, reboot the machine to complete the update.
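One way to check whether the updated packages require a reboot, assuming the yum-utils package is installed, is the needs-restarting utility; it exits non-zero when a reboot is required:
# needs-restarting -r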
1.2.6. Upgrading the Red Hat Virtualization Manager from 4.2 to 4.3
You need to be logged into the machine that you are upgrading.
If the upgrade fails, the engine-setup command attempts to restore your Red Hat Virtualization Manager installation to its previous state. For this reason, do not remove the previous version’s repositories until after the upgrade is complete. If the upgrade fails, the engine-setup script explains how to restore your installation.
Procedure
Enable the Red Hat Virtualization 4.3 repositories:
# subscription-manager repos \
    --enable=rhel-7-server-rhv-4.3-manager-rpms \
    --enable=jb-eap-7.2-for-rhel-7-server-rpms
All other repositories remain the same across Red Hat Virtualization releases.
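To verify which repositories are enabled before and after this change, you can list them; for example:
# subscription-manager repos --list-enabled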
Update the setup packages:
# yum update ovirt\*setup\* rh\*vm-setup-plugins
Run engine-setup and follow the prompts to upgrade the Red Hat Virtualization Manager:
# engine-setup
When the script completes successfully, the following message appears:
Execution of setup completed successfully
Disable the Red Hat Virtualization 4.2 repositories to ensure the system does not use any 4.2 packages:
# subscription-manager repos \
    --disable=rhel-7-server-rhv-4.2-manager-rpms \
    --disable=jb-eap-7-for-rhel-7-server-rpms
Update the base operating system:
# yum update
Important: If you encounter a required Ansible package conflict during the update, see Cannot perform yum update on my RHV manager (ansible conflict).
Important: If any kernel packages were updated, reboot the machine to complete the upgrade.
The Manager is now upgraded to version 4.3.
1.2.7. Disabling global maintenance mode
Procedure
- Log in to the Manager virtual machine and shut it down.
Log in to one of the self-hosted engine nodes and disable global maintenance mode:
# hosted-engine --set-maintenance --mode=none
When you exit global maintenance mode, ovirt-ha-agent starts the Manager virtual machine, and then the Manager automatically starts. It can take up to ten minutes for the Manager to start.
Confirm that the environment is running:
# hosted-engine --vm-status
The listed information includes Engine Status. The value for Engine status should be:
{"health": "good", "vm": "up", "detail": "Up"}
Note: When the virtual machine is still booting and the Manager hasn’t started yet, the Engine status is:
{"reason": "bad vm status", "health": "bad", "vm": "up", "detail": "Powering up"}
If this happens, wait a few minutes and try again.
You can now update the self-hosted engine nodes, and then any standard hosts. The procedure is the same for both host types.
1.2.8. Updating All Hosts in a Cluster
You can update all hosts in a cluster instead of updating hosts individually. This is particularly useful during upgrades to new versions of Red Hat Virtualization. See oVirt Cluster Upgrade for more information about the Ansible role used to automate the updates.
Update one cluster at a time.
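If you prefer to drive the cluster upgrade from the command line, the oVirt Cluster Upgrade Ansible role mentioned above can be run from a small playbook. The following is a minimal sketch only; the role name and variable names are assumptions based on the role's public documentation and may differ between versions, so verify them against the role before use:
# ansible-galaxy install oVirt.cluster-upgrade
# cat > upgrade-cluster.yml << 'EOF'
---
- hosts: localhost
  roles:
    - oVirt.cluster-upgrade
  vars:
    engine_url: https://manager.example.com/ovirt-engine/api  # assumed Manager API URL
    engine_user: admin@internal
    engine_password: "{{ vault_engine_password }}"            # keep the password out of the playbook
    cluster_name: Default                                     # name of the cluster to upgrade
EOF
# ansible-playbook upgrade-cluster.yml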
Limitations
- On RHVH, the update only preserves modified content in the /etc and /var directories. Modified data in other paths is overwritten during an update.
- If the cluster has migration enabled, virtual machines are automatically migrated to another host in the cluster.
- In a self-hosted engine environment, the Manager virtual machine can only migrate between self-hosted engine nodes in the same cluster. It cannot migrate to standard hosts.
- The cluster must have sufficient memory reserved for its hosts to perform maintenance. Otherwise, virtual machine migrations will hang and fail. You can reduce the memory usage of host updates by shutting down some or all virtual machines before updating hosts.
- You cannot migrate a pinned virtual machine (such as a virtual machine using a vGPU) to another host. Pinned virtual machines are shut down during the update, unless you choose to skip that host instead.
Procedure
- In the Administration Portal, click Compute → Clusters and select the cluster. The Upgrade status column shows if an upgrade is available for any hosts in the cluster.
- Click Upgrade.
- Select the hosts to update, then click Next.
Configure the options:
- Stop Pinned VMs shuts down any virtual machines that are pinned to hosts in the cluster, and is selected by default. You can clear this check box to skip updating those hosts so that the pinned virtual machines stay running, such as when a pinned virtual machine is running important services or processes and you do not want it to shut down at an unknown time during the update.
- Upgrade Timeout (Minutes) sets the time to wait for an individual host to be updated before the cluster upgrade fails with a timeout. The default is 60. You can increase it for large clusters where 60 minutes might not be enough, or reduce it for small clusters where the hosts update quickly.
- Check Upgrade checks each host for available updates before running the upgrade process. It is not selected by default, but you can select it if you need to ensure that recent updates are included, such as when you have configured the Manager to check for host updates less frequently than the default (a sketch for adjusting that interval follows this procedure).
- Reboot After Upgrade reboots each host after it is updated, and is selected by default. You can clear this check box to speed up the process if you are sure that there are no pending updates that require a host reboot.
- Use Maintenance Policy sets the cluster’s scheduling policy to cluster_maintenance during the update. It is selected by default, so activity is limited and virtual machines cannot start unless they are highly available. You can clear this check box if you have a custom scheduling policy that you want to keep using during the update, but this could have unknown consequences. Ensure your custom policy is compatible with cluster upgrade activity before disabling this option.
- Click Next.
- Review the summary of the hosts and virtual machines that are affected.
- Click Upgrade.
- A cluster upgrade status screen displays with a progress bar showing the percentage of completion, and a list of steps in the upgrade process that have completed. You can click Go to Event Log to open the log entries for the upgrade. Closing this screen does not interrupt the upgrade process.
You can track the progress of host updates:
- In the Compute → Clusters view, the Upgrade Status column displays a progress bar that shows the percentage of completion.
- In the Compute → Hosts view.
- In the Events section of the Notification Drawer.
You can track the progress of individual virtual machine migrations in the Status column of the Compute → Virtual Machines view.
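If you select Check Upgrade because the Manager polls hosts for updates less often than you need, you can inspect or adjust the polling interval with engine-config. The key name below is an assumption based on the Manager's configuration key list; verify it with engine-config --list on your version before changing it:
# engine-config -g HostPackagesUpdateTimeInHours
# engine-config -s HostPackagesUpdateTimeInHours=24
# systemctl restart ovirt-engine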
1.2.9. Changing the Cluster Compatibility Version
Red Hat Virtualization clusters have a compatibility version. The cluster compatibility version indicates the features of Red Hat Virtualization supported by all of the hosts in the cluster. The cluster compatibility is set according to the version of the least capable host operating system in the cluster.
Prerequisites
- To change the cluster compatibility level, you must first update all the hosts in your cluster to a level that supports your desired compatibility level. Check if there is an icon next to the host indicating an update is available.
Limitations
Virtio NICs are enumerated as a different device after upgrading the cluster compatibility level to 4.6. Therefore, the NICs might need to be reconfigured. Red Hat recommends that you test the virtual machines before you upgrade the cluster by setting the cluster compatibility level to 4.6 on the virtual machine and verifying the network connection.
If the network connection for the virtual machine fails, configure the virtual machine with a custom emulated machine that matches the current emulated machine, for example pc-q35-rhel8.3.0 for 4.5 compatibility version, before upgrading the cluster.
Procedure
- In the Administration Portal, click Compute → Clusters.
- Select the cluster to change and click Edit.
- On the General tab, change the Compatibility Version to the desired value.
- Click OK. The Change Cluster Compatibility Version confirmation dialog opens.
- Click OK to confirm.
An error message might warn that some virtual machines and templates are incorrectly configured. To fix this error, edit each virtual machine manually. The Edit Virtual Machine window provides additional validations and warnings that show what to correct. Sometimes the issue is automatically corrected and the virtual machine’s configuration just needs to be saved again. After editing each virtual machine, you will be able to change the cluster compatibility version.
1.2.10. Changing Virtual Machine Cluster Compatibility
After updating a cluster’s compatibility version, you must update the cluster compatibility version of all running or suspended virtual machines by rebooting them from the Administration Portal, using the REST API, or from within the guest operating system. Virtual machines that require a reboot are marked with the pending changes icon.
The Manager virtual machine does not need to be rebooted.
Although you can wait to reboot the virtual machines at a convenient time, rebooting immediately is highly recommended so that the virtual machines use the latest configuration. Any virtual machine that has not been rebooted runs with the previous configuration, and subsequent configuration changes made to the virtual machine might overwrite its pending cluster compatibility changes.
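Rebooting through the REST API can be scripted. A minimal sketch using curl, with placeholder credentials, FQDN, and virtual machine ID that you must replace with your own values:
# curl -X POST \
    -u admin@internal:password \
    --cacert /etc/pki/ovirt-engine/ca.pem \
    -H "Content-Type: application/xml" \
    -H "Accept: application/xml" \
    -d "<action/>" \
    https://manager.example.com/ovirt-engine/api/vms/VM_ID/reboot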
Procedure
- In the Administration Portal, click Compute → Virtual Machines. Check which virtual machines require a reboot. In the Vms: search bar, enter the following query:
next_run_config_exists=True
The search results show all virtual machines with pending changes.
- Select each virtual machine and click Restart. Alternatively, if necessary, you can reboot a virtual machine from within the virtual machine itself.
When the virtual machine starts, the new compatibility version is automatically applied.
You cannot change the cluster compatibility version of a virtual machine snapshot that is in preview. You must first commit or undo the preview.
1.2.11. Changing the Data Center Compatibility Version
Red Hat Virtualization data centers have a compatibility version. The compatibility version indicates the version of Red Hat Virtualization with which the data center is intended to be compatible. All clusters in the data center must support the desired compatibility level.
Prerequisites
- To change the data center compatibility level, you must first update the compatibility version of all clusters and virtual machines in the data center.
Procedure
- In the Administration Portal, click Compute → Data Centers.
- Select the data center to change and click Edit.
- Change the Compatibility Version to the desired value.
- Click OK. The Change Data Center Compatibility Version confirmation dialog opens.
- Click OK to confirm.
If you previously upgraded to 4.2 without replacing SHA-1 certificates with SHA-256 certificates, you must do so now.
1.2.12. Replacing SHA-1 Certificates with SHA-256 Certificates
Red Hat Virtualization 4.3 uses SHA-256 signatures, which provide a more secure way to sign SSL certificates than SHA-1. Newly installed systems do not require any special steps to enable Red Hat Virtualization’s public key infrastructure (PKI) to use SHA-256 signatures.
Do NOT let certificates expire. If they expire, the environment becomes non-responsive and recovery is an error-prone and time-consuming process. For information on renewing certificates, see Renewing certificates before they expire in the Administration Guide.
Preventing Warning Messages from Appearing in the Browser
- Log in to the Manager machine as the root user.
Check whether /etc/pki/ovirt-engine/openssl.conf includes the line default_md = sha256:
# cat /etc/pki/ovirt-engine/openssl.conf
If it still includes default_md = sha1, back up the existing configuration and change the default to sha256:
# cp -p /etc/pki/ovirt-engine/openssl.conf /etc/pki/ovirt-engine/openssl.conf."$(date +"%Y%m%d%H%M%S")"
# sed -i 's/^default_md = sha1/default_md = sha256/' /etc/pki/ovirt-engine/openssl.conf
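You can confirm that the change took effect; for example:
# grep '^default_md' /etc/pki/ovirt-engine/openssl.conf
default_md = sha256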
Define the certificate that should be re-signed:
# names="apache"
Log in to one of the self-hosted engine nodes and enable global maintenance:
# hosted-engine --set-maintenance --mode=global
On the Manager, save a backup of the /etc/ovirt-engine/engine.conf.d and /etc/pki/ovirt-engine directories, and re-sign the certificates:
# . /etc/ovirt-engine/engine.conf.d/10-setup-protocols.conf
# for name in $names; do
    subject="$(
        openssl \
            x509 \
            -in /etc/pki/ovirt-engine/certs/"${name}".cer \
            -noout \
            -subject \
            -nameopt compat \
        | sed \
            's;subject=\(.*\);\1;' \
    )"
    /usr/share/ovirt-engine/bin/pki-enroll-pkcs12.sh \
        --name="${name}" \
        --password=mypass \ <1>
        --subject="${subject}" \
        --san=DNS:"${ENGINE_FQDN}" \
        --keep-key
done
- Do not change the password value.
Restart the httpd service:
# systemctl restart httpd
Log in to one of the self-hosted engine nodes and disable global maintenance:
# hosted-engine --set-maintenance --mode=none
- Connect to the Administration Portal to confirm that the warning no longer appears.
- If you previously imported a CA or https certificate into the browser, find the certificate(s), remove them from the browser, and reimport the new CA certificate. Install the certificate authority according to the instructions provided by your browser. To get the certificate authority’s certificate, navigate to http://your-manager-fqdn/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA, replacing your-manager-fqdn with the fully qualified domain name (FQDN).
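For example, to download the CA certificate from the command line (substituting your Manager's FQDN):
# curl -o rhv-ca.pem 'http://your-manager-fqdn/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'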
Replacing All Signed Certificates with SHA-256
- Log in to the Manager machine as the root user.
Check whether /etc/pki/ovirt-engine/openssl.conf includes the line default_md = sha256:
# cat /etc/pki/ovirt-engine/openssl.conf
If it still includes default_md = sha1, back up the existing configuration and change the default to sha256:
# cp -p /etc/pki/ovirt-engine/openssl.conf /etc/pki/ovirt-engine/openssl.conf."$(date +"%Y%m%d%H%M%S")"
# sed -i 's/^default_md = sha1/default_md = sha256/' /etc/pki/ovirt-engine/openssl.conf
Re-sign the CA certificate by backing it up and creating a new certificate in ca.pem.new:
# cp -p /etc/pki/ovirt-engine/private/ca.pem /etc/pki/ovirt-engine/private/ca.pem."$(date +"%Y%m%d%H%M%S")"
# openssl x509 -signkey /etc/pki/ovirt-engine/private/ca.pem -in /etc/pki/ovirt-engine/ca.pem -out /etc/pki/ovirt-engine/ca.pem.new -days 3650 -sha256
Replace the existing certificate with the new certificate:
# mv /etc/pki/ovirt-engine/ca.pem.new /etc/pki/ovirt-engine/ca.pem
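You can verify that the replacement certificate is signed with SHA-256; for example:
# openssl x509 -in /etc/pki/ovirt-engine/ca.pem -noout -text | grep 'Signature Algorithm'
The output should show sha256WithRSAEncryption.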
Define the certificates that should be re-signed:
# names="engine apache websocket-proxy jboss imageio-proxy"
If you replaced the Red Hat Virtualization Manager SSL Certificate after the upgrade, run the following instead:
# names="engine websocket-proxy jboss imageio-proxy"
For more details see Replacing the Red Hat Virtualization Manager CA Certificate in the Administration Guide.
Log in to one of the self-hosted engine nodes and enable global maintenance:
# hosted-engine --set-maintenance --mode=global
On the Manager, save a backup of the /etc/ovirt-engine/engine.conf.d and /etc/pki/ovirt-engine directories, and re-sign the certificates:
# . /etc/ovirt-engine/engine.conf.d/10-setup-protocols.conf
# for name in $names; do
    subject="$(
        openssl \
            x509 \
            -in /etc/pki/ovirt-engine/certs/"${name}".cer \
            -noout \
            -subject \
            -nameopt compat \
        | sed \
            's;subject=\(.*\);\1;' \
    )"
    /usr/share/ovirt-engine/bin/pki-enroll-pkcs12.sh \
        --name="${name}" \
        --password=mypass \ <1>
        --subject="${subject}" \
        --san=DNS:"${ENGINE_FQDN}" \
        --keep-key
done
- Do not change the password value.
Restart the following services:
# systemctl restart httpd
# systemctl restart ovirt-engine
# systemctl restart ovirt-websocket-proxy
# systemctl restart ovirt-imageio
Log in to one of the self-hosted engine nodes and disable global maintenance:
# hosted-engine --set-maintenance --mode=none
- Connect to the Administration Portal to confirm that the warning no longer appears.
- If you previously imported a CA or https certificate into the browser, find the certificate(s), remove them from the browser, and reimport the new CA certificate. Install the certificate authority according to the instructions provided by your browser. To get the certificate authority’s certificate, navigate to http://your-manager-fqdn/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA, replacing your-manager-fqdn with the fully qualified domain name (FQDN).
- Enroll the certificates on the hosts. Repeat the following procedure for each host.
- In the Administration Portal, click Compute → Hosts.
- Select the host and click Management → Maintenance and OK.
- Once the host is in maintenance mode, click Installation → Enroll Certificate.
- Click Management → Activate.