Release Notes
Release notes for Red Hat Virtualization 4.4
Abstract
Chapter 1. Introduction
The Release Notes provide a high-level description of improvements and additions that have been implemented in Red Hat Virtualization 4.4.
Red Hat Virtualization is an enterprise-grade server and desktop virtualization platform built on Red Hat Enterprise Linux. See the Product Guide for more information.
Chapter 2. Subscriptions
To install the Red Hat Virtualization Manager and hosts, your systems must be registered with the Content Delivery Network using Red Hat Subscription Management. This section outlines the subscriptions and repositories required to set up a Red Hat Virtualization environment.
2.1. Required Subscriptions and Repositories
The packages provided in the following repositories are required to install and configure a functioning Red Hat Virtualization environment. When one of these repositories is required to install a package, the steps required to enable the repository are provided in the appropriate location in the documentation.
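When a repository must be enabled, the general pattern is the same: register the system, attach the relevant subscription pool, and enable the repository by its label. The following sketch illustrates that flow; the pool ID and repository label are placeholders rather than actual values, so substitute the values from your own subscription.

```shell
# Illustrative only: register with Red Hat Subscription Management and
# enable a repository. <pool_id> and <repository_label> are placeholders.
subscription-manager register --username=<username> --password=<password>
subscription-manager attach --pool=<pool_id>
subscription-manager repos --enable=<repository_label>

# Verify that the repository is now active:
subscription-manager repos --list-enabled
```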
Subscription Pool | Repository Name | Repository Label | Details |
---|---|---|---|
| | | Provides the Red Hat Enterprise Linux 8 Server. |
| | | Provides the Red Hat Enterprise Linux 8 Server. |
| | | Provides the Red Hat Virtualization Manager. |
| | | Provides the supported release of Red Hat JBoss Enterprise Application Platform on which the Manager runs. |
| | | Provides Open vSwitch (OVS) packages. |
Subscription Pool | Repository Name | Repository Label | Details |
---|---|---|---|
| | | Provides the |
Subscription Pool | Repository Name | Repository Label | Details |
---|---|---|---|
| | | Provides the Red Hat Enterprise Linux 8 Server. |
| | | Provides the Red Hat Enterprise Linux 8 Server. |
| | | Provides the QEMU and KVM packages required for using Red Hat Enterprise Linux 8 servers as virtualization hosts. |
| | | Provides packages for advanced virtualization. |
| | | Provides Open vSwitch (OVS) packages. |
2.2. Optional Subscriptions and Repositories
The packages provided in the following repositories are not required to install and configure a functioning Red Hat Virtualization environment. However, they are required to install packages that provide supporting functionality on virtual machines and client systems such as virtual machine resource monitoring. When one of these repositories is required to install a package, the steps required to enable the repository are provided in the appropriate location in the documentation.
Subscription Pool | Repository Name | Repository Label | Details |
---|---|---|---|
| | | Provides the |
| | | Provides the |
| | | Provides the |
| | | Provides |
| | | Provides |
Chapter 3. Managing certificates
Red Hat Virtualization Manager uses certificates to enable encrypted communications. These RHV certificates have a standard 398-day lifetime and must be renewed once per year.
Do not let certificates expire. If they expire, the environment becomes non-responsive and recovery is an error-prone and time-consuming process.
Starting in RHV 4.4, you will receive warning notifications 120 days before expiration and error notifications 30 days before expiration. Do not ignore these notifications. For information on renewing certificates, see Renewing certificates before they expire in the Administration Guide.
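In addition to the built-in notifications, certificate expiration can be checked manually with openssl. The sketch below is illustrative: the RHV certificate path is an assumption (the Manager CA certificate is commonly under /etc/pki/ovirt-engine/), so to keep the example self-contained it generates a throwaway self-signed certificate and reports the days remaining before it expires.

```shell
# Sketch: report how many days remain before a certificate expires.
# A throwaway 30-day self-signed certificate is generated here; on a real
# Manager you would point CERT at your actual certificate file instead.
CERT=/tmp/example-cert.pem
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example" \
    -days 30 -keyout /tmp/example-key.pem -out "$CERT" 2>/dev/null

end_date=$(openssl x509 -noout -enddate -in "$CERT" | cut -d= -f2)
end_epoch=$(date -d "$end_date" +%s)
now_epoch=$(date +%s)
days_left=$(( (end_epoch - now_epoch) / 86400 ))
echo "Certificate expires in ${days_left} days"
```

If the result drops below your warning threshold (for example, 30 days), renew the certificate before it expires.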
Chapter 4. RHV for IBM Power
This release supports Red Hat Enterprise Linux 8 hosts on IBM POWER8 little endian hardware and Red Hat Enterprise Linux 7 and 8 virtual machines on emulated IBM POWER8 hardware. From Red Hat Virtualization 4.2.6 onward, Red Hat Enterprise Linux hosts are supported on IBM POWER9 little endian hardware and Red Hat Enterprise Linux 8 virtual machines on emulated IBM POWER9 hardware.
Previous releases of RHV for IBM Power required Red Hat Enterprise Linux hosts on POWER8 hardware to be installed from an ISO image. These hosts cannot be updated for use with this release. You must reinstall Red Hat Enterprise Linux 8 hosts using the repositories outlined below.
The packages provided in the following repositories are required to install and configure aspects of a Red Hat Virtualization environment on POWER8 hardware.
Component | Subscription Pool | Repository Name | Repository Label | Details |
---|---|---|---|---|
Red Hat Virtualization Manager | | | | Provides the Red Hat Virtualization Manager for use with IBM POWER8 hosts. The Manager itself must be installed on x86_64 architecture. |
Red Hat Enterprise Linux 8 hosts, little endian | | | | Provides the QEMU and KVM packages required for using Red Hat Enterprise Linux 8 servers on IBM Power (little endian) hardware as virtualization hosts. Provides additional packages required for using Red Hat Enterprise Linux 8 servers on IBM Power (little endian) hardware as virtualization hosts. |
Red Hat Enterprise Linux 8 virtual machines, big endian | | | | Provides the |
Red Hat Enterprise Linux 8 virtual machines, little endian | | | | Provides the |
Component | Subscription Pool | Repository Name | Repository Label | Details |
---|---|---|---|---|
Red Hat Virtualization Manager | | | | Provides the Red Hat Virtualization Manager for use with IBM POWER9 hosts. The Manager itself must be installed on x86_64 architecture. |
Red Hat Enterprise Linux 8 hosts, little endian | | | | Provides the QEMU and KVM packages required for using Red Hat Enterprise Linux 8 servers on IBM Power (little endian) hardware as virtualization hosts. |
Red Hat Enterprise Linux 8 hosts, little endian | | | | Provides additional packages required for using Red Hat Enterprise Linux 8 servers on IBM Power (little endian) hardware as virtualization hosts. |
Red Hat Enterprise Linux 8 virtual machines, big endian | | | | Provides the |
Red Hat Enterprise Linux 8 virtual machines, little endian | | | | Provides the |
If the virtual machine fails to boot on IBM POWER9, it might be because of the risk level setting on your firmware. To resolve this issue, see the Troubleshooting scenarios in Starting a virtual machine.
Unsupported Features for IBM POWER
The following Red Hat Virtualization features are not supported:
- SPICE display
- SmartCard
- Sound device
- Guest SSO
- Integration with OpenStack Networking (Neutron), OpenStack Image (Glance), and OpenStack Volume (Cinder)
- Self-hosted engine
- Red Hat Virtualization Host (RHVH)
- Disk Block Alignment
For a full list of bugs that affect the RHV for IBM Power release, see Red Hat Private BZ#1444027.
Chapter 5. Technology Preview, Deprecated, and Removed Features
5.1. Technology Preview Features
Technology Preview features are not supported with Red Hat production service-level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Red Hat Technology Preview Features Support Scope.
The following table describes features available as Technology Previews in Red Hat Virtualization.
Technology Preview Feature | Details |
---|---|
IPv6 | Static IPv6 assignment is fully supported in Red Hat Virtualization 4.3 and 4.4, but Dynamic IPv6 assignment is available as a Technology Preview. Note All hosts in the cluster must use IPv4 or IPv6 for RHV networks, not simultaneous IPv4 and IPv6, because dual stack is not supported. For details about IPv6 support, see IPv6 Networking Support in the Administration Guide. |
NoVNC console option | Option for opening a virtual machine console in the browser using HTML5. |
Websocket proxy | Allows users to connect to virtual machines through a noVNC console. |
VDSM hook for nested virtualization | Allows a virtual machine to serve as a host. For details, see Enabling nested virtualization for all virtual machines in the Administration Guide. |
Import Debian and Ubuntu virtual machines from VMware and RHEL 5 Xen | Allows Known Issues: |
NVDIMM host devices | Support for attaching an emulated NVDIMM to virtual machines that are backed by NVDIMM on the host machine. For details, see NVDIMM Host Devices. |
Open vSwitch (OVS) cluster type support | Adds Open vSwitch networking capabilities. |
Shared and local storage in the same data center | Allows the creation of single-brick Gluster volumes to enable local storage to be used as a storage domain in shared data centers. |
Cinderlib integration | Leverages CinderLib library to use Cinder-supported storage drivers in Red Hat Virtualization without a Full Cinder-OpenStack deployment. Adds support for Ceph storage along with Fibre Channel and iSCSI storage. The Cinder volume has multipath support on the Red Hat Virtualization Host. |
SSO with OpenID Connect | Adds support for external OpenID Connect authentication using Keycloak in both the user interface and with the REST API. |
oVirt Engine Backup | Adds support to back up and restore Red Hat Virtualization Manager with the Ansible |
Failover vNIC profile | Allows users to migrate a virtual machine connected via SR-IOV with minimal downtime by using a failover network that is activated during migration. |
Dedicated CPU pinning policy | Guest vCPUs will be exclusively pinned to a set of host pCPUs (similar to static CPU pinning). The set of pCPUs will be chosen to match the required guest CPU topology. If the host has an SMT architecture, thread siblings are preferred. |
5.2. Deprecated Features
This chapter provides an overview of features that have been deprecated in all minor releases of Red Hat Virtualization.
Deprecated features continue to be supported for a minimum of two minor release cycles before being fully removed. For the most recent list of deprecated features within a particular release, refer to the latest version of release documentation.
Although support for deprecated features is typically removed after a few release cycles, some tasks may still require use of a deprecated feature. These exceptions are noted in the description of the deprecated feature.
The following table describes deprecated features to be removed in a future version of Red Hat Virtualization.
Deprecated Feature | Details |
---|---|
OpenStack Glance | Support for OpenStack Glance is now deprecated. This functionality will be removed in a future release. |
Remote engine database | A remote engine database is now deprecated, whether implemented during deployment or by migrating after deployment. This functionality will be removed from the deployment script in a future release. |
Cisco Virtual Machine Fabric Extender (VM-FEX) | Support for the Cisco Virtual Machine Fabric Extender (VM-FEX) is now deprecated. This functionality will be removed in a future release. |
Export Domains | Use a data domain. Migrate data domains between data centers and Importing Virtual Machines from a Data Domain into the new data center. In Red Hat Virtualization 4.4, some tasks may still require the export domain. |
ISO domains | Use a data domain. Upload images to data domains. In Red Hat Virtualization 4.4, some tasks may still require the ISO domain. |
ovirt-guest-agent | The ovirt-guest-agent project is no longer supported. Use qemu-guest-agent version 2.12.0 or later. |
moVirt | Mobile Android app for Red Hat Virtualization. |
OpenStack Networking (Neutron) | Support for Red Hat OpenStack Networking (Neutron) as an external network provider is now deprecated, and was removed in Red Hat Virtualization 4.4.5. |
OpenStack block storage (Cinder) | Support for Red Hat OpenStack block storage (Cinder) is now deprecated, and will be removed in a future release. |
instance types | Support for instance types that can be used to define the hardware configuration of a virtual machine is now deprecated. This functionality will be removed in a future release. |
websocket proxy deployment on a remote host | Support for third party websocket proxy deployment is now deprecated, and will be removed in a future release. |
SSO for virtual machines | Since the ovirt-guest-agent package was deprecated, Single Sign-On (SSO) is deprecated for virtual machines running Red Hat Enterprise Linux version 7 or earlier. SSO is not supported for virtual machines running Red Hat Enterprise Linux 8 or later, or for Windows operating systems. |
GlusterFS Storage | GlusterFS Storage is deprecated, and will no longer be supported in future releases. |
ovirt-engine extension-aaa-ldap and ovirt-engine extension-aaa-jdbc | The engine extensions ovirt-engine extension-aaa-ldap and ovirt-engine extension-aaa-jdbc have been deprecated. For new installations, use Red Hat Single Sign On for authentication. For more information, see Installing and Configuring Red Hat Single Sign-On in the Administration Guide. |
5.3. Removed Features
The following table describes features that have been removed in this version of Red Hat Virtualization.
Removed Feature | Details |
---|---|
Metrics Store | Metrics Store support has been removed in Red Hat Virtualization 4.4. Administrators can use the Data Warehouse with Grafana dashboards (deployed by default with Red Hat Virtualization 4.4) to view metrics and inventory reports. See Grafana.com for information on Grafana. Administrators can also send metrics and logs to a standalone Elasticsearch instance. See Deprecation of RHV Metrics Store and Alternative Solutions. |
Version 3 REST API | Version 3 of the REST API is no longer supported. Use the version 4 REST API. |
Version 3 SDKs | Version 3 of the SDKs for Java, Python, and Ruby are no longer supported. Use the version 4 SDK for Java, Python, or Ruby. |
RHEVM Shell | Red Hat Virtualization’s specialized command line interface is no longer supported. Use the version 4 SDK for Java, Python, or Ruby, or the version 4 REST API. |
Iptables | Use the firewalld service instead. Note: iptables is only supported on Red Hat Enterprise Linux 7 hosts, in clusters with compatibility version 4.2 or 4.3. You can only add Red Hat Enterprise Linux 8 hosts to clusters with firewall type firewalld. |
Conroe, Penryn, Opteron G1, Opteron G2, and Opteron G3 CPU types | Use newer CPU types. |
Use newer fixes. | |
3.6, 4.0 and 4.1 cluster compatibility versions | Use a newer cluster compatibility version. Upgrade the compatibility version of existing clusters. |
cockpit-machines-ovirt | The |
ovirt-guest-tools | ovirt-guest-tools has been replaced with a new WiX-based installer, included in Virtio-Win. You can download the ISO file containing the Windows guest drivers, agents, and installers from the latest virtio-win downloads. |
OpenStack Neutron deployment |
The Red Hat Virtualization 4.4.0 release removes OpenStack Neutron deployment, including the automatic deployment of the Neutron agents through the
- To deploy
- The
- To manage the deployment of |
screen | With this update to RHEL 8-based hosts, the |
Application Provisioning Tool service (APT) | With this release, the virtio-win installer replaces the APT service. |
ovirt-engine-api-explorer | The |
DPDK (Data Plane Development Kit) | Experimental support for DPDK has been removed in Red Hat Virtualization 4.4.4. |
VDSM hooks | Starting with Red Hat Virtualization 4.4.7, VDSM hooks are not installed by default. You can manually install VDSM hooks as needed. |
Foreman integration | Provisioning hosts using Foreman, which is initiated from Red Hat Virtualization Manager, is removed in Red Hat Virtualization 4.4.7. Removing this neither affects the ability to manage Red Hat Virtualization virtual machines from Satellite nor the ability for Red Hat Virtualization to work with errata from Satellite for hosts and virtual machines. |
Cockpit installation for Self-hosted engine | Using Cockpit to install the self-hosted engine is no longer supported. Use the command line installation. |
oVirt Scheduler Proxy | The ovirt-scheduler-proxy package is removed in Red Hat Virtualization 4.4 SP1. |
Ruby software development kit (SDK) | The Ruby SDK is no longer supported. |
systemtap | The |
Red Hat Virtualization Manager (RHVM) appliance | With this release, the Red Hat Virtualization Manager (RHVM) appliance is being retired. Following this release, you can update the RHVM by running the dnf update command followed by engine-setup after connecting to the Content Delivery Network. |
DISA STIG for Red Hat Virtualization Host (RHVH) | The DISA STIG security profile is no longer supported for RHVH. Use the RHEL host with a DISA STIG profile instead. |
5.4. Data Center and Cluster Compatibility Levels
Red Hat Virtualization data centers and clusters have a compatibility version.
The data center compatibility version indicates the version of Red Hat Virtualization that the data center is intended to be compatible with. All clusters in the data center must support the desired compatibility level.
The cluster compatibility version indicates the features of Red Hat Virtualization supported by all of the hosts in the cluster. The cluster compatibility is set according to the version of the least capable host operating system in the cluster.
The table below provides a compatibility matrix of RHV versions and the required data center and cluster compatibility levels.
Compatibility Level | RHV Version | Description |
---|---|---|
4.7 | 4.4 | Compatibility Level 4.7 was introduced in RHV 4.4 to support new features introduced by RHEL 8.6 hypervisors. |
4.6 | 4.4.6 | Compatibility Level 4.6 was introduced in RHV 4.4.6 to support new features introduced by RHEL 8.4 hypervisors with Advanced Virtualization 8.4 packages. |
4.5 | 4.4.3 | Compatibility Level 4.5 was introduced in RHV 4.4.3 to support new features introduced by RHEL 8.3 hypervisors with Advanced Virtualization 8.3 packages. |
Limitations
Virtio NICs are enumerated as a different device after upgrading the cluster compatibility level to 4.6. Therefore, the NICs might need to be reconfigured. Red Hat recommends that you test the virtual machines before you upgrade the cluster by setting custom compatibility version 4.6 on each virtual machine and verifying the network connection.
If the network connection for the virtual machine fails, configure the virtual machine with a custom emulated machine that matches the current emulated machine, for example pc-q35-rhel8.3.0 for 4.5 compatibility version, before upgrading the cluster.
Chapter 6. Release Information
These release notes highlight technology preview items, recommended practices, known issues, and deprecated functionality to be taken into consideration when deploying this release of Red Hat Virtualization.
Notes for updates released during the support lifecycle of this Red Hat Virtualization release will appear in the advisory text associated with each update or in the Red Hat Virtualization Technical Notes. This document is available on the Red Hat documentation page.
6.1. Red Hat Virtualization 4.4 SP 1 Batch Update 3 (ovirt-4.5.3)
6.1.1. Bug Fix
These bugs were fixed in this release of Red Hat Virtualization:
- BZ#1705338
- Previously, stale data sometimes remained in the "unregistered_ovf_of_entities" database table. As a result, when a floating Storage Domain containing a VM and its disks was imported from a source RHV environment into a destination RHV environment and then imported back into the source environment, the VM was listed under the "VM Import" tab but could not be imported, because all of its disks were now located on another Storage Domain (in the destination environment). In addition, after the first OVF update, the OVF of the VM reappeared on the floating Storage Domain as a "ghost" OVF.
In this release, after the floating Storage Domain is re-attached in the source environment, the VM does not appear under the "VM Import" tab, no "ghost" OVF is re-created after the OVF update, and the database table is populated correctly during Storage Domain attachment. This ensures that the "unregistered_ovf_of_entities" table contains up-to-date data with no stale entries.
- BZ#1968433
- Previously, attempts to start highly available virtual machines during failover or failback flows sometimes failed with an error "Cannot run VM. VM X is being imported", resulting in the virtual machines staying down. In this release, virtual machines are no longer started by the disaster-recovery scripts while being imported.
- BZ#1974535
- Previously, highly available VMs with a VM lease running on a primary site may not have started on a secondary site during active-passive failover because none of the hosts were set as ready to support VM leases. In this release, when a highly available VM with a VM lease fails to start because hosts were filtered out due to not being ready to support VM leases, it keeps trying to start periodically. If it takes time for the engine to discover that the storage domain that contains the VM lease is ready, the attempts to start the VM will continue until the status of the storage domain changes.
- BZ#1983567
- Previously, stale data in some database tables could result in missing disks after importing a VM (when a Storage Domain was imported from a source RHV environment into a destination environment and the VM was imported as well). The fixes for BZ#1910858 and BZ#1705338 addressed similar issues; because this bug is difficult to reproduce, it is likely resolved by those two fixes. In this release, the VM is imported with all of its attached disks.
- BZ#2094576
- Previously, small qcow2 volumes on block storage were allocated 2.5 GiB (one chunk), regardless of the requested capacity. As a result, space was wasted on volumes allocated beyond their capacity. In this release, volumes with a capacity smaller than one chunk use their capacity as the initial size (rounded up to the next extent). For example, capacities smaller than one extent (128 MiB) result in 128 MiB being allocated as the initial size.
- BZ#2123141
- With this release, image transfers cannot move from a final state (finished successfully or finished with failure) back to a non-final state. Previously, such transitions could lead to hanging image transfers that blocked moving hosts to Maintenance.
- BZ#2125290
- Previously, an LVM devices file was not created if no LVM devices were found during VDSM configuration. As a result, LVM commands operated on VGs belonging to RHV storage domains. In this release, the vdsm-tool creates a devices file even when no LVM devices are found, and Storage Domain VGs are not visible to LVM commands.
- BZ#2125658
- Previously, static IPv6 interface configuration in the ifcfg file during Self-Hosted Engine setup did not include the IPV6_AUTOCONF=no setting. As a result, in NetworkManager the configuration of the property ipv6.method remained 'auto' instead of 'manual' on the interface and the interface connection was intermittent causing a loss of connectivity with the Manager. In this release, during Self-Hosted Engine deployment, the interfaces are also configured with IPV6_AUTOCONF=no, and the connection is truly static and unaffected by dynamic changes in the network.
- BZ#2137532
- Previously, the Memory Overcommitment Manager (MoM) sometimes experienced an error on startup, resulting in the MoM not working and reporting error messages with tracebacks in the logs. In this release, the MoM works properly.
6.1.2. Enhancements
This release of Red Hat Virtualization features the following enhancements:
- BZ#1886211
- In this release, during a restore operation, the snapshot is locked. In addition, a notification is now displayed following a successful snapshot restore.
6.1.3. Release Notes
This section outlines important details about the release, including recommended practices and notable changes to Red Hat Virtualization. You must take this information into account to ensure the best possible outcomes for your deployment.
- BZ#2130700
- Incremental backup or Changed Block Tracking (CBT) is now generally available.
- BZ#2132386
- RHV 4.4 SP1 is only supported on RHEL 8.6 EUS. When performing RHV Manager or hypervisor installation, the RHEL version must be updated to RHEL 8.6 and the subscription channels must be updated to RHEL 8.6 EUS (when they are available).
6.1.4. Known Issues
These known issues exist in Red Hat Virtualization at this time:
- BZ#1952078
- When migrating virtual machines from hosts that have not been upgraded to hosts that have been upgraded, and migration encryption is enabled, the migration might fail due to a missing migration client certificate. Workaround: Place the migration origin host (that has not been upgraded) in Maintenance mode before proceeding with migration.
6.2. Red Hat Virtualization 4.4 SP 1 Batch Update 2 (ovirt-4.5.2)
6.2.1. Bug Fix
These bugs were fixed in this release of Red Hat Virtualization:
- BZ#1853924
- Previously, when attempting to add a disk using an ovirt-engine SDK script while the disk already existed, the operation failed and an exception was thrown. With this release, the Add Disk functionality checks for duplicate disks and fails gracefully with a readable error message when the disk to be inserted already exists.
- BZ#1955388
- Previously, the Manager was able to start a virtual machine with a Resize and Pin NUMA policy on a host whose physical sockets did not correspond to the number of NUMA nodes. As a result, the wrong pinning was assigned to the policy. With this release, the Manager does not allow the virtual machine to be scheduled on such a host, making the pinning correct based on the algorithm.
- BZ#2081676
- Previously, when two mutually exclusive sos report options were used in the ovirt-log-collector, the log size limit was ignored. In this release, the limit on the size of the log per plugin works as expected.
- BZ#2097558
- Previously, running engine-setup did not always renew OVN certificates when they were close to expiration or expired. With this release, OVN certificates are always renewed by engine-setup when needed.
- BZ#2097725
- Previously, the Manager issued warnings about approaching certificate expiration before engine-setup could update the certificates. In this release the expiration warning and certificate update periods are aligned, and certificates are updated as soon as the warnings about their upcoming expiration occur.
- BZ#2101481
- The handling of core dumps during upgrade from previous Red Hat Virtualization versions to RHV 4.4 SP1 batch 1 has been fixed.
- BZ#2104115
- Previously, when importing a virtual machine with manual CPU pinning (pinned to a dedicated host), the manual pinning string was cleared, but the CPU pinning policy was not set to NONE. As a result, importing failed. In this release, the CPU pinning policy is set to NONE if the CPU pinning string is cleared, and importing succeeds.
- BZ#2105781
- The hosted-engine-ha binaries have been moved from /usr/share to /usr/libexec. As a result, the hosted-engine --clean-metadata command fails. With this release, you must use the new path for the command to succeed: /usr/libexec/ovirt-hosted-engine-ha/ovirt-ha-agent
- BZ#2109923
- Previously, it was not possible to import templates from the Administration Portal. With this release, importing templates from the Administration Portal is now possible.
6.2.2. Enhancements
This release of Red Hat Virtualization features the following enhancements:
- BZ#1793207
A new warning has been added to the vdsm-tool to protect users from using the unsupported user_friendly_names multipath configuration. The following is an example of the output:
```
$ vdsm-tool is-configured --module multipath
WARNING: Invalid configuration: 'user_friendly_names' is enabled in multipath configuration:

  section1 {
      key1 value1
      user_friendly_names yes
      key2 value2
  }

  section2 {
      user_friendly_names yes
  }

This configuration is not supported and may lead to storage domain corruption.
```
- BZ#2097536
- In this release, the rhv-log-collector-analyzer now provides a detailed output for each problematic image, including disk names, associated virtual machine, the host running the virtual machine, snapshots, and the current Storage Pool Manager. This makes it easier to identify problematic virtual machines and collect SOS reports for related systems. The detailed view is now the default, and the compact option can be set by using the --compact switch in the command line.
- BZ#2097560
- Expiration of ovirt-provider-ovn certificate is now checked regularly along with other RHV certificates (engine CA, engine, or hypervisors) and if ovirt-provider-ovn is going to expire or has expired, the warning or alert is raised to the audit log. To renew the ovirt-provider-ovn certificate, run engine-setup. If your ovirt-provider-ovn certificate expires on a previous RHV version, you must upgrade to RHV 4.4 SP1 batch 2 or newer, and the ovirt-provider-ovn certificate will be renewed automatically as part of engine-setup.
- BZ#2104939
- With this release, OVA export or import works on hosts with a non-standard SSH port.
- BZ#2107250
- With this release, the process to check certificate validity is now compatible with both RHEL 8 and RHEL 7 based hypervisors.
6.2.3. Rebase: Bug Fixes and Enhancements
These items are rebases of bug fixes and enhancements included in this release of Red Hat Virtualization:
- BZ#2092478
- UnboundID LDAP SDK has been rebased on upstream version 6.0.4. See https://github.com/pingidentity/ldapsdk/releases for changes since version 4.0.14.
6.2.4. Rebase: Bug Fixes Only
These items are rebases of bug fixes included in this release of Red Hat Virtualization:
- BZ#2104831
- Rebase package(s) to version: 4.4.7. Highlights, important fixes, or notable enhancements: fixed BZ#2081676
6.2.5. Release Notes
This section outlines important details about the release, including recommended practices and notable changes to Red Hat Virtualization. You must take this information into account to ensure the best possible outcomes for your deployment.
- BZ#2049286
- With this release, only virtual machines pinned to hosts selected for upgrade are stopped during cluster upgrade. VMs pinned to hosts that are not selected for upgrade are not stopped.
- BZ#2108985
- RHV 4.4 SP1 and later is only supported on RHEL 8.6, so you cannot use RHEL 8.7 or later, and must stay with RHEL 8.6 EUS.
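As a sketch of how an administrator might keep a machine on RHEL 8.6 EUS, the minor release can be pinned with subscription-manager. The repository labels shown below are common x86_64 EUS labels and should be treated as assumptions to verify against your own subscription.

```shell
# Pin the minor release so dnf does not pull in RHEL 8.7+ content.
subscription-manager release --set=8.6

# Enable the EUS repositories (labels are illustrative; confirm with
# `subscription-manager repos --list` on your own system).
subscription-manager repos \
    --enable=rhel-8-for-x86_64-baseos-eus-rpms \
    --enable=rhel-8-for-x86_64-appstream-eus-rpms

# Clear cached metadata from the previously configured release.
dnf clean all
```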
- BZ#2113068
- With this release, permissions for the /var/log/ovn directory are updated correctly during the upgrade of OVS/OVN 2.11 to OVS 2.15/OVN 2021.
6.2.6. Deprecated Functionality
The items in this section are either no longer supported, or will no longer be supported in a future release.
- BZ#2111600
- ovirt-engine-extension-aaa-jdbc and ovirt-engine-extension-aaa-ldap are deprecated in RHV 4.4 SP1. They remain in the RHV product, but for any new request, you should use integration with Red Hat Single Sign-On as described in https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/administration_guide/index#Configuring_Red_Hat_SSO
6.3. Red Hat Virtualization 4.4 SP 1 Batch Update 1 (ovirt-4.5.1)
6.3.1. Bug Fix
These bugs were fixed in this release of Red Hat Virtualization:
- BZ#1930643
- A wait_after_lease option has been added to the ovirt_vm Ansible module to provide a delay so that the VM lease creation is completed before the next action starts.
- BZ#1958032
- Previously, live storage migration could fail if the destination volume filled up before it was extended. In the current release, the initial size of the destination volume is larger and the extension is no longer required.
- BZ#1994144
- The email address for notifications is updated correctly on the "Manage Events" screen.
- BZ#2001574
- Previously, when closing the "Move/Copy disk" dialog in the Administration Portal, some of the acquired resources were not released, causing browser slowness and high memory usage in environments with many disks. In this release, the memory leak has been fixed.
- BZ#2001923
- Previously, when a failed VM snapshot was removed from the Manager database while the volume remained on the storage, subsequent operations failed because there was a discrepancy between the storage and the database. Now, the VM snapshot is retained if the volume is not removed from the storage.
- BZ#2006625
- Previously, memory allocated by hugepages was included in the host memory usage calculation, resulting in high memory usage in the Administration Portal, even with no running VMs, and false VDS_HIGH_MEM_USE warnings in the logs. In this release, hugepages are not included in the memory usage. VDS_HIGH_MEM_USE warnings are logged only when normal (not hugepages) memory usage is above a defined threshold. Memory usage in the Administration Portal is calculated from the normal and hugepages used memory, not from allocated memory.
- BZ#2030293
- A VM no longer remains in a permanent locked state if the Manager is rebooted while exporting the VM as OVA.
- BZ#2048545
- LVM command error messages have been improved so that it is easier to trace and debug errors.
- BZ#2055905
- The default migration timeout period has been increased to enable VMs with many direct LUN disks, which require more preparation time on the destination host, to be migrated.
The migration_listener_prepare_disk_timeout and max_migration_listener_timeout VDSM options have been added so that the default migration timeout period can be extended if necessary.
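As an illustrative sketch only: the new options might be raised with a VDSM configuration fragment like the following. The `[vars]` section, the file placement, and the values shown are assumptions, not confirmed syntax; check the vdsm.conf shipped on your host and restart vdsmd after any change. The fragment is written to a scratch path here rather than a real VDSM configuration directory.

```shell
# Illustrative only (assumed section and values): a VDSM config fragment
# raising the migration listener timeouts added by BZ#2055905.
# A real host would carry this under /etc/vdsm/, followed by:
#   systemctl restart vdsmd
frag="${TMPDIR:-/tmp}/99-migration-timeouts.conf"
cat > "$frag" <<'EOF'
[vars]
migration_listener_prepare_disk_timeout = 30
max_migration_listener_timeout = 600
EOF
cat "$frag"
```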
- BZ#2068270
- Previously, when downloading snapshots, the disk_id was not set, which caused resumption of the transfer operation to fail because locking requires the disk_id to be set. In this release, the disk_id is always set so that the transfer operation recovers after restart.
- BZ#2070045
- The host no longer enters a non-responsive state if the OVF store update operation times out because of network errors.
- BZ#2072626
- The ovirt-engine-notifier correctly increments the SNMP EngineBoots value after restarts, which enables the ovirt-engine-notifier to work with the SNMPv3 authPriv security level.
- BZ#2077008
- The QEMU guest agent now reports the correct guest CPU count.
- BZ#2081241
- Previously, VMs with one or more VFIO devices, the Q35 chipset, and a maximum number of vCPUs >= 256 might fail to start because of a QEMU memory allocation error. This error has been fixed.
- BZ#2081359
- Infiniband interfaces are now reported by VDSM.
- BZ#2081493
- The size of preallocated volumes is unchanged after a cold merge.
- BZ#2090331
- The ovirt_vm Ansible module displays an error message if a non-existent snapshot is used to clone a VM.
- BZ#2099650
- A bug that caused the upgrade process to fail if the vdc_options table contained records with a NULL default value has been fixed.
- BZ#2105296
- Virtual machines with VNC created by earlier Manager versions sometimes failed to migrate to newer hosts because the VNC password was too long. This issue has been fixed.
6.3.2. Enhancements
This release of Red Hat Virtualization features the following enhancements:
- BZ#1663217
- The hostname and/or FQDN of the VM or VDSM host can change after a virtual machine (VM) is created. Previously, this change could prevent the VM from fetching errata from Red Hat Satellite/Foreman. With this enhancement, errata can be fetched even if the VM hostname or FQDN changes.
- BZ#1782077
- An "isolated threads" CPU pinning policy has been added. This policy pins a physical core exclusively to a virtual CPU, enabling a complete physical core to be used as the virtual core of a single virtual machine.
- BZ#1881280
- The hosted-engine --deploy --restore-from-file prompts now include guidance to clarify the options and to ensure correct input.
- BZ#1937408
The following key-value pairs have been added to the KVM dictionary in the ovirt_template module for importing a template from OVA:
- URL, for example, qemu:///system
- storage_domain for converted disks
- host from which the template is imported
- clone to regenerate imported template’s identifiers
- BZ#1976607
- VGA has replaced QXL as the default video device for virtual machines. You can switch from QXL to VGA using the API by removing the graphic and video devices from the VM (creating a headless VM) and then adding a VNC graphic device.
- BZ#1996098
- The copy_paste_enabled and file_transfer_enabled options have been added to the ovirt_vm Ansible module.
- BZ#1999167
- Spice console remote-viewer now allows the Change CD command to work with data domains if no ISO domains exist. If there are multiple data domains, remote-viewer selects the first data domain on the list of available domains.
- BZ#2081559
- The rhv-log-collector-analyzer discrepancy tool now detects preallocated QCOW2 images that have been reduced.
- BZ#2092885
- The Welcome page of the Administration Portal now displays both the upstream and downstream version names.
6.3.3. Rebase: Bug Fixes Only
These items are rebases of bug fixes included in this release of Red Hat Virtualization:
- BZ#2093795
- Rebase package(s) to version 4.4.6. This fixes an issue which prevented the collection of PostgreSQL data and the documentation of the --log-size option.
6.3.4. Known Issues
These known issues exist in Red Hat Virtualization at this time:
- BZ#1703153
There is a workaround for creating a RHV Manager hostname that is longer than 95 characters.
- Create a short FQDN, up to 63 characters, for the engine-setup tool.
- Create a custom certificate and put the short FQDN and a long FQDN (final hostname) into the certificate’s Subject Alternate Name field.
- Configure the Manager to use the custom certificate.
- Create an /etc/ovirt-engine/engine.conf.d/99-alternate-engine-fqdns.conf file with the following content: SSO_ALTERNATE_ENGINE_FQDNS="long FQDN"
- Restart the ovirt-engine service.
If you cannot access the Manager and are using a very long FQDN:
1. Check for the following error message in /var/log/httpd/error_log: ajp_msg_check_header() incoming message is too big NNNN, max is MMMM
2. Add the following line to /etc/httpd/conf.d/z-ovirt-engine-proxy.conf: ProxyIOBufferSize PPPP, where PPPP is greater than NNNN in the error message. Then restart Apache.
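The configuration step of the workaround can be sketched in shell. The FQDN value below is a hypothetical placeholder, and the file is written to a scratch directory for illustration; on a real Manager the target is /etc/ovirt-engine/engine.conf.d/99-alternate-engine-fqdns.conf, followed by a restart of the ovirt-engine service.

```shell
# Sketch of the alternate-FQDN workaround (BZ#1703153).
# The FQDN is a placeholder; a scratch directory stands in for
# /etc/ovirt-engine/engine.conf.d/. On a real Manager, finish with:
#   systemctl restart ovirt-engine
conf_dir="${TMPDIR:-/tmp}/engine.conf.d"
mkdir -p "$conf_dir"
cat > "$conf_dir/99-alternate-engine-fqdns.conf" <<'EOF'
SSO_ALTERNATE_ENGINE_FQDNS="manager.long-hostname.example.com"
EOF
cat "$conf_dir/99-alternate-engine-fqdns.conf"
```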
6.4. Red Hat Virtualization 4.4 SP1 General Availability (ovirt-4.5.0)
6.4.1. Bug Fix
These bugs were fixed in this release of Red Hat Virtualization:
- BZ#1648985
- A user with user role permissions cannot take control of a VM from a superuser, close the superuser’s console connection, and assign the VM to a user with user role permissions.
- BZ#1687845
- Notifications for hosts rely on the server time, instead of comparing the job’s "end time" to the local browser time, to resolve the issue of multiple "Finish activating host" notifications.
- BZ#1768969
- During a self-hosted engine deployment, the TPGT value (target portal group tag) is used for the iSCSI login to resolve the issue of duplicate iSCSI sessions being created.
- BZ#1810032
- The default value of a vNIC network filter is documented in the REST API documentation.
- BZ#1834542
- The engine-setup process uses the yum proxy configuration to check for packages and RPMs.
- BZ#1932149
- The hosted-engine --deploy command checks the compatibility level of the cluster or data center and creates a storage domain in the appropriate format.
- BZ#1944290
- If a user tries to log in to the VM Portal or the Administration Portal with an expired password, a link directs the user to the "Change password" page.
- BZ#1959186, BZ#1991240
- When a user provisions VMs from templates in the VM Portal, the Manager selects a quota that the user has access to, so that the user is not restricted to the quota specified by the template.
- BZ#1971622
- The warning icons on the Virtual Machines tab of the host’s details view are displayed correctly.
- BZ#1971863
- The engine-setup process ignores DNS queries with the deprecated type ANY.
- BZ#1974741
- Previously, a bug in the finalization mechanism left the disk locked in the database. In this release, the finalization mechanism works correctly and the disk remains unlocked in all scenarios.
- BZ#1979441
- Previously, a warning appeared if the CPU of a high performance VM was different from the cluster CPU. In this release, the warning is not displayed when CPU passthrough is configured.
- BZ#1986726
- When a VM is imported as an OVA, the selected allocation policy is followed.
- BZ#1988496
- The vmconsole-proxy-helper certificate is renewed with the Manager certificate during the engine-setup process.
- BZ#2000031
- A non-responsive SPM host reboots once instead of multiple times.
- BZ#2003996
- Previously, a regular snapshot could not be deleted if a "next run" snapshot existed, because the "next run" snapshot type was missing. In this release, the issue is resolved by not reporting "next run" snapshots to clients.
- BZ#2006745
- Previously, when a template disk was copied to or from a Managed Block Storage domain, its storage domain ID was incorrect, the same image was saved repeatedly in the images and base disks database tables, and its ManagedBlockStorageDisk disk type was cast to DiskImage. In this release, copying a template disk to or from a Managed Block Storage domain works as expected.
- BZ#2007384
- The data type of the disk writeRate and readRate parameter values has been changed from integer to long to support higher values.
- BZ#2010067
- When a preallocated disk is downloaded, its image is saved as sparse instead of fully allocated.
- BZ#2010203
- The Log Collection Analysis tool handles line breaks correctly, resolving the issue of incorrect formatting in the "Virtual Machine(s) with unmanaged devices" table of the HTML report.
- BZ#2010478
- A VM behaves correctly, according to its resume policy, if the storage state changes during VM migration.
- BZ#2011309
- Previously, a self-hosted engine deployment failed when an OpenSCAP security profile was applied, because the SSH key permissions were changed to 0640, which is insecure. In this release, the permissions remain 0600 and the deployment succeeds.
- BZ#2013928
- Special characters in the Log Collection Analysis tool database are escaped, resolving the issue of incorrect formatting in the "vdc_options" table of the HTML report.
- BZ#2016173
- The LVM filter created by the vdsm-tool filters correctly for a multipath device instead of including SCSI devices.
- BZ#2024202
- Translation strings in the Administration Portal dialogs are correctly displayed in all languages.
- BZ#2028481
- SCSI reservation works for hot-plugged disks.
- BZ#2040361
- When multiple disks with VirtIO-SCSI interfaces are hot-plugged to a virtual machine configured for multiple IO threads, each disk is assigned a unique PCI address, resolving the issue of duplicate PCI addresses.
- BZ#2040402
- Commands that use the obsolete "log_days" option of the Log Collector tool have been removed.
- BZ#2041544
- When you select a host in the upload dialog, the host list no longer jumps back to the first host.
- BZ#2048546
- The Log Collector tool has been modified to use the sos report command in order to avoid warning messages caused by the sosreport command, which will be deprecated in the future.
- BZ#2050108
- The ovirt-ha-broker service runs successfully on a host with a DISA STIG profile.
- BZ#2052557
- When stateless VMs or VMs that were started in run-once mode are shut down, vGPU devices are properly released.
- BZ#2064380
- The maximum VNC console password length has been changed from 12 to 8 characters, in compliance with libvirt 8 requirements.
- BZ#2066811
- Self-hosted engine deployment succeeds on a host with a DISA STIG profile, which does not allow non-root users to run Ansible playbooks, when the postgres user is replaced by engine_psql.sh.
- BZ#2075852
- The correct version of the nodejs package is installed.
6.4.2. Enhancements
This release of Red Hat Virtualization features the following enhancements:
- BZ#977379
- You can edit and manage iSCSI storage domain connections in the Administration Portal. For example, you can edit a logical domain to point to a different physical storage, which is useful if the underlying LUNs are replicated for backup purposes or if the physical storage address has changed.
- BZ#1616158
- The self-hosted engine installation checks that the IP address of the Manager is in the same subnet as the host running the self-hosted engine agent.
- BZ#1624015
- You can set a console type globally for all VMs with engine-config.
- BZ#1667517
- A logged-in user can set the default console type, full screen mode, smart card enablement, Ctrl+Alt+Del key mapping, and the SSH key in the VM Portal.
- BZ#1745141
- The SnowRidge Accelerator Interface Architecture (AIA) can be enabled by modifying the extra_cpu_flags custom property of a virtual machine (movdiri, movdir64b).
- BZ#1781241
- The ability to connect automatically to a VM in the VM Portal has been restored as a configurable option.
- BZ#1849169
- The VCPU_TO_PHYSICAL_CPU_RATIO parameter has been added to the Evenly Distributed scheduling policy to prevent over-utilization of physical CPUs on a host. The value of the parameter reflects the ratio between virtual and physical CPUs.
- BZ#1878930
- You can configure a threshold for the minimum number of available MAC addresses in a pool with engine-config.
- BZ#1922977
- Shared disks are included in the 'OVF_STORE' configuration, which enables VMs to share disks after a storage domain is moved to a new data center and the VMs are imported.
- BZ#1925878
- A link to the Administration Portal has been added to all Grafana dashboards.
- BZ#1926625
- You can enable HTTP Strict Transport Security after installing the Manager by following the instructions in How to enable HTTP Strict Transport Security (HSTS) on Apache HTTPD.
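The linked article is the authoritative procedure; as a minimal sketch, enabling HSTS on Apache HTTPD amounts to a mod_headers directive like the one below. The file name and max-age value are illustrative assumptions, and the fragment is written to a scratch path here rather than /etc/httpd/conf.d/.

```shell
# Illustrative HSTS fragment for Apache HTTPD (requires mod_headers).
# A real Manager would place this in /etc/httpd/conf.d/ and reload httpd;
# the max-age value (one year) is an example, not a mandated setting.
frag="${TMPDIR:-/tmp}/hsts-example.conf"
cat > "$frag" <<'EOF'
Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
EOF
cat "$frag"
```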
- BZ#1944834
- You can set a delay interval for shutting down your VM console session in the Administration Portal to avoid accidental disconnection.
- BZ#1964208
- You can create and download a VM screenshot in the Administration Portal.
- BZ#1975720
- You can create parallel migration connections. See Parallel migration connections for details.
- BZ#1979797
- A warning message is displayed if you try to remove a storage domain that contains a volume leased by a VM in a different storage domain.
- BZ#1987121
- You can specify vGPU driver parameters as a string, for example, enable_uvm=1, for all the vGPUs of a VM by using the vGPU editing dialog. The vGPU editing dialog has been moved from Host devices to VM devices.
- BZ#1990462
- RSyslog can authenticate to Elasticsearch with a user name and password.
- BZ#1991482
- A link to the Monitoring Portal has been added to the Administration Portal dashboard.
- BZ#1995455
- You can use any number of CPU sockets, up to the number of maximum vCPUs, on cluster versions 4.6 and earlier, if the guest OS is compatible.
- BZ#1998255
- You can search and filter vNIC profiles by attributes.
- BZ#1998866
- Windows 11 is supported as a guest operating system.
- BZ#1999698
- The Apache HTTPD SSLProtocol configuration is managed by crypto-policies instead of being set by engine-setup.
- BZ#2012830
- You can now use the Logical Volume Management (LVM) devices file for managing storage devices instead of LVM filter, which can be complicated to set up and difficult to manage. Starting with RHEL 8.6, this will be the default for storage device management.
- BZ#2002283
- You can set the number of PCI Express ports for VMs with engine-config.
- BZ#2020620
- You can deploy a self-hosted engine on a host with a DISA STIG profile.
- BZ#2021217
- Windows 2022 is supported as a guest operating system.
- BZ#2021545
- DataCenter/Cluster compatibility level 4.7 is available for hosts with RHEL 8.6 or later.
- BZ#2023786
- If a VM is set with the custom property sap_agent=true, hosts that do not have the vdsm-hook-vhostmd package installed are filtered out by the scheduler when the VM is started.
- BZ#2029830
- You can select either the DISA STIG or the PCI-DSS security profile for the self-hosted engine VM during installation.
- BZ#2030596
- The Manager can run on a host with a PCI-DSS security profile.
- BZ#2033185
- Cluster level 4.7 supports the e1000e VM NIC type. Because the e1000 driver is deprecated by RHEL 8.0, users should switch to e1000e if possible.
- BZ#2037121
- The RHV Image Discrepancy tool displays data center and storage domain names in its output.
- BZ#2040474
- The Administration Portal provides better error messages and status and progress indicators during cluster upgrade.
- BZ#2049782
- You can set user-specific preferences in the Administration Portal.
- BZ#2054756
- A link to the Migration Toolkit for Virtualization documentation has been added to the login screen of the Administration Portal.
- BZ#2058177
- The nvme-cli package, used by RHEL 8 to manage storage devices, has been added to RHVH.
- BZ#2066042
- The ansible-core package, required by cockpit-ovirt, has been added to RHVH.
- BZ#2070963
- The rng-tools, rsyslog-gnutls, and usbguard packages have been added to rhvm-appliance to comply with DISA-STIG profile requirements.
- BZ#2070980
- The OVA package manifest has been added to the rhvm-appliance RPM.
- You can restore a backup of an earlier RHV 4 version to a datacenter/cluster with the current version.
6.4.3. Release Notes
This section outlines important details about the release, including recommended practices and notable changes to Red Hat Virtualization. You must take this information into account to ensure the best possible outcomes for your deployment.
- BZ#1782056
- IPSec for Open Virtual Network is available for hosts with ovirt-provider-ovn, ovn-2021 or later, and openvswitch2.15 or later.
- BZ#1940824
- You can upgrade OVN and OVS 2.11 to OVN 2021 and OVS 2.15.
- BZ#2004852
- You can enable VirtIO-SCSI and multiple queues, depending on the number of available vCPUs, when creating a VM with an Ansible playbook.
- BZ#2015796
- The current release can be deployed on a host with the RHEL 8.6 DISA STIG OpenSCAP profile.
- BZ#2023250
- The host installation and upgrade flows have been updated to enable the virt:rhel module during a new installation of a RHEL 8.6 host or an upgrade from RHEL 8.5 or earlier.
- BZ#2030226
- RHVH can be deployed on a machine with the PCI-DSS security profile.
- BZ#2052686
- The current release requires ansible-core 2.12.0 or later.
- BZ#2055136
- The virt DNF module version is set to the RHEL version of the host during the upgrade procedure.
- When an internal certificate is due to expire, the Manager creates a warning event 120 days in advance and an alert event 30 days in advance in the audit log. Custom certificates for HTTPS access to the Manager are not checked.
6.4.4. Deprecated Functionality
The items in this section are either no longer supported, or will no longer be supported in a future release.
- BZ#2016359
- The GlusterFS storage type is deprecated because Red Hat Gluster Storage reaches end of life in 2024.
6.4.5. Removed Functionality
6.5. Red Hat Virtualization 4.4 Batch Update 9 (ovirt-4.4.10)
6.5.1. Bug Fix
These bugs were fixed in this release of Red Hat Virtualization:
- BZ#1898049
- Previously, the default disk configuration did not propagate disk errors to the client, which caused the virtual machine to stop. As a result, the Windows high availability cluster validator failed and one of the virtual machines in the cluster was paused. In this release, disk errors are propagated to the client using the "engine-config -s PropagateDiskErrors=true" setting. The Windows high availability cluster validator works and all tests, including iSCSI reservations, have passed.
- BZ#1978655
- Previously, ELK integration failed because of missing configuration parameters when using certificates. In this release, the missing parameters have been added and updated to the correct names according to the logging role. ELK integration now works with or without certificates.
- BZ#2014882
- Previously, the VM memory/CPU overcommit panels in the Ovirt Executive Dashboard/Cluster Dashboard displayed the average memory for a single hypervisor and the average memory of all virtual machines in a cluster. In this release, the dashboard panels show the correct overcommit rates between all the hosts in the cluster and all the virtual machines in the cluster.
- BZ#2022660
- Previously, when unassigning a storage connection from a storage domain for a LUN associated with another storage server connection, all storage connections for that LUN were removed and the LUN was also removed. In this release, only the specified storage connection is removed. The LUN is removed only if it has no storage connections.
- BZ#2025872
- Previously, certain CPU topologies caused virtual machines with PCI host devices to fail. In this release, the issue has been fixed.
- BZ#2026625
- Previously, the timeout setting on the broker socket in the 'ovirt-hosted-engine-ha' library was ignored because the timeout was set after the connection was opened. This caused the VDSM threads to be blocked if the broker did not respond. Now the timeout setting is established before the connection is opened, resolving this issue.
- BZ#2032919
- Previously, RHEL 7 hosts could not be added to the Red Hat Virtualization Manager in clusters with level 4.3 or 4.2. In this release, RHEL 7 hosts can be added to the Red Hat Virtualization Manager successfully in clusters with level 4.3 or 4.2. For additional details, see BZ#2019807.
6.5.2. Enhancements
This release of Red Hat Virtualization features the following enhancements:
- BZ#1897114
- In this release, monitoring of the host capabilities refresh functionality has been improved to help debug very rare production issues that sometimes caused the Red Hat Virtualization Manager to lose connectivity with the Red Hat Virtualization hosts.
- BZ#2012135
- Previously, multiple stale LUNs had to be removed individually, after removing a storage domain, by calling the 'ovirt_remove_stale_lun' Ansible role multiple times. In the current release, multiple LUN WWIDs for stale links can be included in the 'ovirt_remove_stale_lun' role, which only needs to be called once.
- BZ#2023224
- Previously, when running the 'ovirt_remove_stale_lun' Ansible role, removal of the multipath device map could fail because of a conflict with a VGS scan. In the current release, the multipath removal step in the 'ovirt_remove_stale_lun' role is retried six times to allow the removal to succeed.
6.5.3. Release Notes
This section outlines important details about the release, including recommended practices and notable changes to Red Hat Virtualization. You must take this information into account to ensure the best possible outcomes for your deployment.
- BZ#2007286
- Previously, a non-responsive host was first soft-fenced by the Engine, but if soft fencing did not fix the connectivity issue, the Engine did not initiate a hard fence and the host was left in a non-responsive state. In this release, soft fencing has been fixed so that if it does not make the host responsive again, the non-responsive host treatment process continues correctly with the additional steps.
6.5.4. Deprecated Functionality
The items in this section are either no longer supported, or will no longer be supported in a future release.
- BZ#2017068
- The 'manageiq' Ansible role has been deprecated in 'ovirt-ansible-collection-1.6.6' and will be removed in 'ovirt-ansible-collection-2.0.0'.
- BZ#2056934
- The Red Hat Virtualization Manager (RHVM) appliance is being retired. The last supported build of the RHVM appliance will be shipped with the release of Red Hat Virtualization 4.4 SP1. Following the Red Hat Virtualization 4.4 SP1 release, you can update the RHVM by running the dnf update command followed by engine-setup after connecting to the Content Delivery Network.
6.5.5. Removed Functionality
- BZ#2045913
- The log manager extension ovirt-engine-extension-logger-log4j has been removed in this release. It has been replaced by the JBoss EAP SyslogHandler log manager.
6.6. Red Hat Virtualization 4.4 Batch Update 8 (ovirt-4.4.9)
6.6.1. Bug Fix
These bugs were fixed in this release of Red Hat Virtualization:
- BZ#1940991
- Previously, when hot unplugging memory in rapid succession using the REST API, the same DIMMs could be hot unplugged multiple times instead of different DIMMs being used for each hot unplug action. This resulted in failed hot unplugs and could lead to invalid assumptions about the amount of RAM in the virtual machine. In this release, this issue is fixed, and DIMMs that are hot unplugged are no longer used in follow-up hot unplug actions.
- BZ#1947709
- Previously, upgrading from Red Hat Virtualization 4.3 failed when using an isolated network during IPv6 deployment. In this release, a forward network is used instead of an isolated network during an IPv6 deployment. As a result, upgrade from Red Hat Virtualization 4.3 using IPv6 now succeeds.
- BZ#1977276
- Previously, in some cases, adding a new disk for an upload succeeded, but the system handled the operation as a failure. As a result, the upload failed silently without uploading any data, leaving an empty new disk. In this release, the success of the add disk operation is detected correctly, and uploading completes successfully.
- BZ#1978672
- Previously, virtual machines failed to restore after hibernation on block-based storage. In the current release, the data is written as raw data, allowing the virtual machine restore to succeed.
- BZ#1979730
- Previously, when upgrading a cluster from cluster level 4.5 to 4.6, the emulated machine changed to a newer one. This caused problems on some Windows virtual machines, such as loss of static IP configuration or secondary disks going offline. In this release, the Webadmin shows a confirmation dialog during a cluster upgrade from cluster level 4.5 or lower to cluster level 4.6 or higher if any virtual machines could be affected.
- BZ#1980230
- In Red Hat Enterprise Linux 8.5, the socat package has been updated, introducing a change in its command-line syntax. In the current release, the hosted-engine command has been updated to adapt to this change.
- BZ#1989324
- Previously, the UploadStreamCommand updated the OVF_STORE Actual disk size in the database incorrectly during an OVF update. As a result, rhv-image-discrepancies received the wrong disk size. In this release, OVF and Self-Hosted Engine metadata are skipped by rhv-image-discrepancies, and the tool does not produce irrelevant warnings.
- BZ#2000364
- Previously, on Manager startup, system threads could be used to retrieve the virtual machine configuration from stateless snapshots, causing the Manager to fail to start. In this release, the Manager retrieves the virtual machine configuration from stateless snapshots using only application threads, not system threads. As a result, the Manager can start when stateless snapshots with cloud-init network properties are defined.
- BZ#2000720
- Previously, VDSM reported partial results to the engine which resulted in a failure to import the storage domain using new LUNs. This happened because VDSM would not wait for the creation of new multipath devices after the discovery of new LUNs. The current release fixes this issue and VDSM waits for multipathd reports to be ready and the storage domain is now detected.
- BZ#2014017
- Previously, the status of the download VM disks operation was changed to one of the final statuses (FINISHED_SUCCESS / FINISHED_FAILURE) before the disk locks were actually released. As a result, in some scenarios, the operation following this one failed with a "Disk is locked" error. In this release, the locks are released immediately before the command status changes to one of the final phases. As a result, a subsequent operation that uses the same disk succeeds.
6.6.2. Enhancements
This release of Red Hat Virtualization features the following enhancements:
- BZ#1352501
- In this release, you can use a virtual TPM to inject a LUKS encryption key into the guest operating system.
- BZ#1845909
- In the current release, the sanlock_io timeout is configurable. Before configuring sanlock_io timeout, it is recommended that you contact Red Hat support. Please refer to https://access.redhat.com/solutions/6338821. Red Hat is not responsible for testing different timeout values other than the defaults. Red Hat support will only provide guidance on how to change those values consistently across the RHV setup.
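As an assumption-laden sketch only: the timeout would be adjusted through VDSM configuration, for example with a fragment like the one below. The [sanlock] section name, the option name, and the value are not confirmed here; per the note above, contact Red Hat support and the linked solution before changing these values. The fragment is written to a scratch path for illustration.

```shell
# Illustrative only (assumed section/option names): a VDSM config
# fragment setting the sanlock I/O timeout discussed in BZ#1845909.
# Do not apply to a real host without guidance from Red Hat support.
frag="${TMPDIR:-/tmp}/99-sanlock-io-timeout.conf"
cat > "$frag" <<'EOF'
[sanlock]
io_timeout = 20
EOF
cat "$frag"
```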
- BZ#1949046
- SPICE has been deprecated, and will be removed from the RHEL 9 subscription channel. This release provides SPICE packages for RHEL 9 clients so that Red Hat Virtualization can support SPICE with RHEL 9 clients and guests.
- BZ#1957830
- In this release, the VM Portal now allows creating preallocated or thin provisioned disk images on various types of storage domains.
- BZ#1983021
- Red Hat Virtualization Host now includes the packages needed for using Managed Block Devices via cinderlib.
- BZ#1984886
- Previously, manual installation of the rsyslog-openssl package was required to set up remote encrypted logging. In the current release, the rsyslog-openssl package is installed by default on both oVirt Node and RHV-H.
- BZ#1992690
- Previously, the Inventory dashboard showed CPU overcommit rates for each data center. In this release, CPU overcommit rates are available in the Inventory dashboard for each cluster as well.
- BZ#2001551
- In this release, rhv-image-discrepancies allows more granular checks. Two new options have been added to the rhv-image-discrepancies command line to restrict the run to specific data centers or storage domains. If both are specified, the run is restricted to the intersection of both.
  - -p, --pool-uuid: limit the run to data centers; can be specified multiple times
  - -s, --storage-uuid: limit the run to storage domains; can be specified multiple times
  For example: # rhv-image-discrepancies -p=5bbe9966-ea58-475f-863f -s=977ba581-23e5-460a-b1de
- BZ#2007550
- In this release, the data type for the virtual machines disk write/read rate was changed from integer to long.
- BZ#2009659
- Previously, users were required to manually install Cinderlib and Ceph dependencies, such as python3-cinderlib or python3-os-brick, on each host. In the current release, these dependencies are automatically installed and provided within RHV-H. Note that for standard RHEL hosts, this feature requires the proper subscription to be enabled.
6.6.3. Rebase: Bug Fixes and Enhancements
These items are rebases of bug fixes and enhancements included in this release of Red Hat Virtualization:
- BZ#1975175
- Red Hat Virtualization Host now includes packages from RHGS-3.5.z on RHEL-8 Batch #5.
- BZ#1998104
- Red Hat Virtualization Host now includes openvswitch related packages from Fast Data Path 21.G release.
- BZ#2002945
- The ovirt-hosted-engine-ha package has been rebased to version: 2.4.9. This update fixes the issue of incorrect CPU load scores causing the engine virtual machine to shut down.
6.6.4. Release Notes
This section outlines important details about the release, including recommended practices and notable changes to Red Hat Virtualization. You must take this information into account to ensure the best possible outcomes for your deployment.
- BZ#1904085
- A playbook executed by Ansible Engine 2.9.25 inside a virtual machine running on Red Hat Virtualization 4.4.9 correctly detects that this is a virtual machine running on Red Hat Virtualization by using Ansible facts.
- BZ#1939262
- Previously, an issue with lldpad required a workaround on RHEL 7. The RHEL 8.5 release provides an update of lldpad to version 1.0.1-16, which resolves the issue.
- BZ#1963748
- Red Hat Virtualization 4.4.9 now requires EAP 7.4.2, which also requires a repository change. Before upgrading to RHV 4.4.9 with EAP 7.4.2 or later, make sure that EAP is upgraded to 7.3.9 or later when upgrading from RHV 4.4.8 or earlier.
- BZ#2003671
- Red Hat Virtualization now supports Ansible-2.9.27 for internal usage.
- BZ#2004444
- During host installation or host upgrade, the Manager checks if cinderlib and Ceph packages are available. If not, it tries to enable the required channels specified in the documentation. If there is a problem during channel enablement, an error is raised in the audit_log, and customers need to enable the channel manually and retry the installation or upgrade.
- BZ#2004469
- Previously, it was not possible to upgrade RHVH to version 4.4.8 when custom VDSM hooks were installed on RHVH, because VDSM hooks depend on an exact VDSM version. The current release allows users to maintain the VDSM dependency manually. In other words, if you upgrade VDSM from version X.Y.Z to version A.B.C, you must upgrade all VDSM hooks to the same A.B.C version.
- BZ#2004913
- The Red Hat OpenStack Platform (RHOSP) cinderlib repository has been upgraded from RHOSP 16.1 to 16.2.
6.7. Red Hat Virtualization 4.4 Batch Update 7 (ovirt-4.4.8)
BZ#1947709 was included in the advisory (RHBA-2021:3464) in error and remains a known issue. The fix is scheduled for a future release.
6.7.1. Bug Fix
These bugs were fixed in this release of Red Hat Virtualization:
- BZ#1770027
- Previously, if the connection to the PostgreSQL database failed during a restart or another issue, the virtual machine monitoring thread would fail with an unrecoverable error and would not run again until ovirt-engine was restarted. The current release fixes this issue, allowing the monitoring thread to recover once the errors are resolved.
- BZ#1948177
- An update in libvirt has changed the way block threshold events are submitted. As a result, the VDSM was confused by the libvirt event, and tried to look up a drive, logging a warning about a missing drive. In this release, the VDSM has been adapted to handle the new libvirt behavior, and does not log warnings about missing drives.
- BZ#1950767
- Previously, sending rapid multiple requests to update affinity groups simultaneously would cause conflicts resulting in a failure. The conflict would occur because the affinity group was being removed and recreated during the update process. The current release fixes this issue by allowing each update on an affinity group to be initiated with a specific operation. Therefore, the affinity group is no longer removed and recreated during the update.
- BZ#1959436
- Previously, when a virtual machine was powered off on the source host of a live migration and the migration finished successfully at the same time, the two events interfered with each other, and sometimes prevented migration cleanup resulting in additional migrations from the host being blocked. In this release, additional migrations are not blocked.
- BZ#1982296
- Previously, it was possible to set the maximum number of vCPUs too high for virtual machines with the i440fx BIOS type and certain CPU topologies. This prevented those virtual machines from starting. The current release fixes this issue, and the maximum number of vCPUs for such virtual machines is now set within a valid range.
- BZ#1984209
- Previously, when a snapshot creation failed and was re-executed later, the second attempt would fail because it reused data from the previous attempt. In this release, that data is used only when needed, in recovery mode.
- BZ#1993017
- Previously, when guaranteed memory (minimum available memory) was not specified in a request to add a virtual machine via the REST API, ovirt-engine set the guaranteed memory equal to the memory, without considering the memory overcommit set on the cluster, effectively disabling memory overcommit for the virtual machine. In this release, when guaranteed memory is not specified, the calculation takes into account both the specified memory and the cluster’s memory overcommit.
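As an illustrative sketch (not the engine's actual code), the fixed default can be thought of as dividing the requested memory by the cluster's overcommit percentage:

```python
def default_guaranteed_memory_mb(memory_mb: int, overcommit_percent: int) -> int:
    """Hypothetical sketch of the fixed default: when guaranteed memory is
    not specified, derive it from the requested memory and the cluster's
    memory overcommit instead of setting guaranteed = memory."""
    return memory_mb * 100 // overcommit_percent
```

For example, on a cluster with 200% overcommit, a 4096 MB virtual machine would default to 2048 MB guaranteed memory under this sketch.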
- BZ#1999754
- Previously, the original disks of a virtual machine would become locked after taking a live snapshot of a virtual machine with disks and then making a copy of them. The current release fixes this issue in ovirt-engine.
6.7.2. Enhancements
This release of Red Hat Virtualization features the following enhancements:
- BZ#1691696
- Multipath events were introduced in Red Hat Virtualization version 4.2, but there was no way to configure email notifications for these events. The current release allows you to configure email notifications for multipath events using either the user interface or the REST API.
- BZ#1939286
- Previously, you could only monitor broken affinity groups using the Administration Portal. In the current release, you can now monitor broken affinity groups using both the REST API and the Administration Portal.
- BZ#1941507
- Previously, frequently run operations generated log files that were never rotated and used too much disk space. The current release fixes this issue by implementing log rotation with the logrotate feature: logs are rotated monthly or daily, with only one archive file being retained. Host deployment, enrollment certificate, host upgrade, OVA, brick setup, and db-manual logs are rotated monthly. Check-for-update logs are rotated daily. Compressed check-for-update logs are removed 24 hours after creation, compressed brick setup logs are removed 30 days after creation, and all other compressed logs are removed 30 days after the last metadata change.
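As a sketch, a monthly rotate-one policy of the kind described above looks like the following in logrotate syntax (the path and file pattern are illustrative, not the exact configuration the engine ships):

```
/var/log/ovirt-engine/host-deploy/*.log {
    monthly
    rotate 1
    compress
    missingok
    notifempty
}
```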
- BZ#1949046
- SPICE has been deprecated, and will be removed from the RHEL 9 subscription channel. This release provides SPICE packages for RHEL 9 clients so that Red Hat Virtualization can support SPICE with RHEL 9 clients and guests.
- BZ#1991171
- Since Red Hat Virtualization 4.4.7, the engine-backup refuses to restore to a version older than the one used for backup. This causes 'hosted-engine --restore-from-file' to fail if the latest appliance is older than the latest Manager. In this release, such a scenario does not fail, but prompts the user to connect via SSH to the Manager virtual machine and fix the restore issue.
6.7.3. Release Notes
This section outlines important details about the release, including recommended practices and notable changes to Red Hat Virtualization. You must take this information into account to ensure the best possible outcomes for your deployment.
- BZ#1983039
- Red Hat Virtualization 4.4.8 is tested and supported with Ansible-2.9.23.
6.7.4. Removed Functionality
- BZ#1989823
- OTOPI Java bindings have been removed as they are no longer used within the product (see oVirt bug BZ#1983047).
6.8. Red Hat Virtualization 4.4 Batch Update 6 (ovirt-4.4.7)
6.8.1. Bug Fix
These bugs were fixed in this release of Red Hat Virtualization:
- BZ#1662657
One of the steps during hosted-engine deployment is "Activate Storage Domain". Normally, this step returns the amount of available space in the domain. Under certain conditions, this information is missing.
In previous releases, if the available space was missing, deployment failed. With this release, deployment will provide an error message and allow you to provide the details needed for creating a storage domain.
This issue appears to affect deployments using '--restore-from-file' when the existing setup has problematic block storage (iSCSI or Fibre Channel). If this happens, it is recommended that you connect to the Administration Portal and clear all storage-related issues before continuing.
- BZ#1947902
- Previously, using an Ansible playbook to fetch virtual machine disk information was slow and incomplete, while the REST API fetched the information faster and more completely. In this release, the Ansible playbook fetches the information completely and quickly.
- BZ#1952345
- Previously, when two threads in a VDSM attempted to release a storage lease at the same time, sanlock would incorrectly close the socket to VDSM and release the leases owned by VDSM. In this release, VDSM serializes calls to sanlock_release() so that if multiple threads attempt to release a lease at the same time, the calls run sequentially.
- BZ#1958145
- Previously, rhsmcertd was not enabled by default on the Red Hat Virtualization Host. As a result, the systems did not regularly report to RHSM while the subscription-manager reported no obvious issues and repositories were properly enabled. In this release, rhsmcertd is enabled by default in RHVH, and as a result, RHSM now receives reports regularly.
6.8.2. Enhancements
This release of Red Hat Virtualization features the following enhancements:
- BZ#1848986
- With this release, out of sync indications have been added whenever a configuration change affecting a vNIC may be pending and the vNIC has not been updated yet. An update to the MTU or VLAN tag of a network attached to the vNIC via its profile, or an update to VM QoS, network filter, or custom properties of a vNIC profile now trigger an out of sync indication for the vNIC until it is updated. The Administration Portal displays a warning icon with tooltip text on the vNIC in the Network Interfaces tab of a Virtual Machine and on the Virtual Machine in the Virtual Machines list page. An event is reported to the Events tab as well. The REST API reports via the ‘next_run_configuration_exists’ attribute on the Virtual Machine and via the ‘is_synced’ attribute on the vNIC.
- BZ#1883793
- Red Hat Virtualization Host now includes an updated scap-security-guide-rhv package, which allows you to apply a PCI DSS security profile to the system during installation.
- BZ#1947450
- The ovirt-host package no longer pulls in vdsm-hooks automatically. To use vdsm hooks, you must install the appropriate hook for the specific functionality required.
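For example, a single hook can be installed on its own; the package name below is one example of the vdsm-hook-* naming pattern, so check the available packages for the hook you need:

```shell
# Install a single VDSM hook rather than the full hook set
# (vdsm-hook-ethtool-options is one example of the vdsm-hook-* packages)
dnf install vdsm-hook-ethtool-options
```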
- BZ#1976095
- The redhat-release-virtualization-host package no longer requires vdsm-hooks. In this release, the installation of vdsm-hooks is not mandatory for the Red Hat Virtualization Host.
6.8.3. Rebase: Bug Fixes and Enhancements
These items are rebases of bug fixes and enhancements included in this release of Red Hat Virtualization:
- BZ#1957241
- The RHVM Appliance has been rebased on top of the RHEL 8.4.0 Batch #1 update. Please refer to the RHEL 8.4 release notes.
- BZ#1957242
- In this release, the Red Hat Virtualization Host has been rebased on top of the RHEL 8.4.0 Batch #1 update. For more information, see the RHEL release notes.
6.8.4. Release Notes
This section outlines important details about the release, including recommended practices and notable changes to Red Hat Virtualization. You must take this information into account to ensure the best possible outcomes for your deployment.
- BZ#1804774
Adding a message banner to the web administration welcome page is straightforward using custom branding that contains only a preamble section. An example of preamble branding is given in BZ#1783329.
In an engine upgrade, the custom preamble brand remains in place and will work without issue.
After an engine backup and subsequent restore, the custom preamble branding must be manually restored or reinstalled and then verified.
- BZ#1901011
Foreman integration, which allowed you to provision bare metal hosts from the Administration Portal using Foreman and then add them to the Manager, was deprecated in oVirt 4.4.6 / RHV 4.4.6 and removed completely in oVirt 4.4.7 / RHV 4.4.7.
Similar functionality to provision bare metal hosts can be achieved using Foreman directly and adding an already provisioned host using the Administration Portal or the REST API.
- BZ#1966145
- The ovirt-engine in RHV 4.4.7 requires an Ansible 2.9.z version later than Ansible 2.9.20. In addition, in RHV 4.4.7 the limitation to a specific Ansible version has been removed, because the correct Ansible version is now shipped in the RHV subscription channels.
- BZ#1966150
- The ovirt-hosted-engine-setup in RHV 4.4.7 requires Ansible 2.9.21 or higher. Also in RHV 4.4.7, the limitation to a specific Ansible version has been removed, because the correct Ansible version is shipped through RHV channels.
- BZ#1969763
- In this release, the new package ovirt-openvswitch provides all the requirements for OVN/OVS for oVirt, and replaces the existing rhv-openvswitch package.
6.8.5. Known Issues
These known issues exist in Red Hat Virtualization at this time:
- BZ#1981471
There is a known issue in the VM Portal in Red Hat Virtualization 4.4.7 wherein changing the size of an existing disk or changing the Bootable parameter results in the disk becoming inactive. This behavior is a regression from ovirt-web-ui 1.6.9-1.
Avoid editing any existing disks in the VM Portal. If it is necessary to edit a disk, use the Administration Portal.
6.8.6. Deprecated Functionality
The items in this section are either no longer supported, or will no longer be supported in a future release.
- BZ#1896359
- The column name threads_per_core in the Red Hat Virtualization Manager Dashboard is deprecated and will be removed in a future release. In version 4.4.7.2, the column name threads_per_core is changed to number_of_threads. In the Data Warehouse, the old name is retained as an additional alias, so two columns provide the same data: number_of_threads and threads_per_core. The threads_per_core alias will be removed in a future version.
- BZ#1961520
- Using Cockpit to install the self-hosted engine is deprecated. Support for this installation method will be removed in a later release.
6.8.7. Removed Functionality
- BZ#1947944
Previously, VDSM hooks were installed by default, as a dependency, when installing a RHEL host or a RHV-H host. Starting with Red Hat Virtualization 4.4.7, VDSM hooks are not installed by default. You can manually install VDSM hooks as needed. Additional resources:
- Bug 1947450 "ovirt-host shouldn’t have hard dependency on vdsm hooks"
- "Installing a VDSM hook" in the RHV Administration Guide
6.9. Red Hat Virtualization 4.4 Batch Update 5 (ovirt-4.4.6)
6.9.1. Bug Fix
These bugs were fixed in this release of Red Hat Virtualization:
- BZ#1932284
- Previously, the engine-config value LiveSnapshotPerformFreezeInEngine was set by default to false and was supposed to be used in cluster compatibility levels below 4.4, but the value was applied globally to all cluster versions. With this release, each cluster level has its own value, defaulting to false for 4.4 and above. This reduces unnecessary overhead from timeouts of the file system freeze command.
- BZ#1936972
- Previously, old RPM files were not properly removed during package removal (uninstall) or upgrade. As a result, removed packages were reinstalled, or, during an upgrade, the system tried to install two or more different versions at once, causing the upgrade to fail. In this release, the dnf plugin has been fixed, and RPM packages are now properly removed. The new version also auto-heals a broken system by removing RPM packages that are not supposed to be in the persisted-rpms directory.
- BZ#1940484
- Previously, libvirtd could crash resulting in non-responsive hosts. The current release fixes this issue.
6.9.2. Enhancements
This release of Red Hat Virtualization features the following enhancements:
- BZ#911394
- This release adds the queue attribute to the virtio-scsi driver in the virtual machine configuration. This improvement enables multi-queue performance with the virtio-scsi driver.
- BZ#1683987
- With this release, source-load-balancing has been added as a new sub-option for xmit_hash_policy. It can be configured for bond modes balance-xor (2), 802.3ad (4) and balance-tlb (5), by specifying xmit_hash_policy=vlan+srcmac.
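For example, in a bonding options string this could look like the following sketch (set it wherever your host's bond options are defined):

```
mode=balance-xor xmit_hash_policy=vlan+srcmac
```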
- BZ#1745023
- This enhancement adds support for the AVX-512 Vector Neural Network Instructions (AVX512_VNNI) feature for Cascadelake and Icelake CPUs. AVX512_VNNI is part of the AVX-512 extensions.
- BZ#1845877
- This release adds the gathering of oVirt/RHV related certificates to allow easier debugging of issues for faster customer help and issue resolution. Information from certificates is now included as part of the sosreport. Note that no corresponding private key information is gathered, due to security considerations.
- BZ#1906074
- With this release, support has been added for copying disks between regular Storage Domains and Managed Block Storage Domains. It is now possible to migrate disks between Managed Block Storage Domains and regular Storage Domains.
- BZ#1944723
- With this release, running virtual machines with up to 16 TB of RAM is supported on x86_64 architectures.
6.9.3. Technology Preview
The items listed in this section are provided as Technology Previews. For further information on the scope of Technology Preview status, and the associated support implications, refer to Technology Preview Features Support Scope.
- BZ#1688177
- Failover vNIC profiles. This feature allows users to migrate a virtual machine connected via SR-IOV with minimal downtime by using a failover network that is activated during migration.
6.9.4. Release Notes
This section outlines important details about the release, including recommended practices and notable changes to Red Hat Virtualization. You must take this information into account to ensure the best possible outcomes for your deployment.
6.9.5. Deprecated Functionality
The items in this section are either no longer supported, or will no longer be supported in a future release.
- BZ#1869251
- Support for SSO for virtual machines on RHEL 7 or earlier and Windows guest operating systems is now deprecated. SSO will not be provided for virtual machines running RHEL 8 or later as guest operating systems.
- BZ#1948962
- Support for the Cisco Virtual Machine Fabric Extender (VM-FEX) is now deprecated. This functionality will be removed in a future release.
6.10. Red Hat Virtualization 4.4 Batch Update 4 (ovirt-4.4.5)
6.10.1. Bug Fix
These bugs were fixed in this release of Red Hat Virtualization:
- BZ#1145658
- This release allows the proper removal of a storage domain containing memory dumps, either by moving the memory dumps to another storage domain or deleting the memory dumps from the snapshot.
- BZ#1815589
- Previously, following a successful migration on the Self-hosted Engine, the HA agent on the source host immediately moved to the state EngineDown and shortly thereafter tried to start the engine locally if the destination host did not update the shared storage quickly enough to mark the Manager virtual machine as up. As a result, starting the virtual machine failed due to a shared lock held by the destination host, which also generated false alarms and notifications. In this release, the HA agent first moves to the state EngineMaybeAway, giving the destination host more time to update the shared storage with the new state. As a result, no false alarms or notifications are generated. Note: in scenarios where the virtual machine needs to be started on the source host, this fix slightly increases the time it takes the Manager virtual machine to start on the source host.
- BZ#1860492
- Previously, if the Seal option was used when creating a template for Linux virtual machines, the original host name was not removed from the template. In this release, the host name is set to localhost or the new virtual machine host name.
- BZ#1895217
- Previously, after a host that virtual machines were pinned to was removed, the Manager failed to start. As a result, the setup of the self-hosted engine failed. In this release, when a host is removed, virtual machines no longer remain pinned to that host, and the Manager can start successfully.
- BZ#1905108
- Previously, plugging several virtual disks into a running virtual machine over a short time interval could cause some of the disks to fail to plug, with the error message: "Domain already contains a disk with that address". In this release, this is avoided by ensuring that a disk being plugged into a running virtual machine is not assigned an address that was already assigned to another disk previously plugged into the virtual machine.
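The fix can be sketched as choosing the first address unit not already held by a previously plugged disk (illustrative only; the real address allocation in VDSM is more involved):

```python
def first_free_unit(assigned_units: set[int]) -> int:
    """Return the lowest unit number not already assigned to a plugged
    disk, so a newly plugged disk never reuses an existing address
    (hypothetical sketch, not the VDSM code)."""
    unit = 0
    while unit in assigned_units:
        unit += 1
    return unit
```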
- BZ#1916032
- Previously, if a host in the Self-hosted Engine had an ID number higher than 64, other hosts did not recognize that host, and the host did not appear in 'hosted-engine --vm-status'. In this release, the Self-hosted Engine allows host ID numbers of up to 2000.
- BZ#1916519
- Previously, the host’s used memory calculation did not take SReclaimable memory into consideration, while the free memory calculation did. As a result, there were discrepancies in the host statistics. In this release, SReclaimable memory is part of the used memory calculation.
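As a rough sketch of the corrected statistic, with field names mirroring /proc/meminfo (the exact VDSM formula may differ):

```python
def used_memory_kib(mem_total: int, mem_free: int, buffers: int,
                    cached: int, s_reclaimable: int) -> int:
    """Treat SReclaimable slab memory as reclaimable, like page cache,
    when computing the host's used memory (illustrative sketch)."""
    return mem_total - mem_free - buffers - cached - s_reclaimable
```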
- BZ#1921119
- Previously, a cluster page indicated an out-of-sync cluster when in fact all networks were in sync. This was due to a logical error in the code when a host QoS was assigned to two networks on same host. In this release, the cluster page does not show out-of-sync for this setup.
- BZ#1931786
- Previously, the Red Hat Virtualization Manager missed the SkuToAVLevel configuration for 4.5 clusters. In this release, the SkuToAVLevel configuration is available for these clusters and allows Windows updates to update Red Hat related drivers for the guest host.
- BZ#1940672
- Previously, when Red Hat Virtualization Manager 4.4.3+ upgraded a host in a cluster that is set with a Skylake/Cascadelake CPU type and compatibility level 4.4 (or lower), the host could become non-operational. In this release, the Red Hat Virtualization Manager blocks the upgrade of a host when the cluster is set with a secured Skylake/Cascadelake CPU type (Secure Intel Skylake Client Family, Secure Intel Skylake Server Family, or Secure Intel Cascadelake Server Family) where the upgrade is likely to make the host non-operational. If the cluster is set with an insecure Skylake/Cascadelake CPU type (Intel Skylake Client Family, Intel Skylake Server Family, or Intel Cascadelake Server Family), the user is notified with a recommendation to change the cluster to a secure Skylake/Cascadelake CPU type, but is allowed to proceed with the host upgrade. To make the upgraded host operational, the user must enable TSX at the operating system level.
6.10.2. Enhancements
This release of Red Hat Virtualization features the following enhancements:
To refresh a LUN’s disk size:
1. In the Administration Portal, go to Compute > Virtual Machines and select a virtual machine.
2. In the Disks tab, click Refresh LUN.
For connected virtual machines that are not running, update the disk on the virtual machines once they are running.
- BZ#1431792
- This feature allows adding emulated TPM (Trusted Platform Module) devices to Virtual Machines. TPM devices are useful in cryptographic operations (generating cryptographic keys, random numbers, hashes, etc.) or for storing data that can be used to verify software configurations securely. QEMU and libvirt implement support for emulated TPM 2.0 devices, which is what Red Hat Virtualization uses to add TPM devices to Virtual Machines.
Once an emulated TPM device is added to the Virtual Machine, it can be used as a normal TPM 2.0 device in the guest OS.
- BZ#1688186
- Previously, the CPU and NUMA pinning were done manually or automatically only by using the REST API when adding a new virtual machine.
With this update, you can update the CPU and NUMA pinning using the Administration portal and when updating a virtual machine.
- BZ#1755156
- In this release, it is now possible to enter a path to the OVA archive for local appliance installation using the cockpit-ovirt UI.
- BZ#1836661
- Previously, the logical names for disks without a mounted filesystem were not displayed in the Red Hat Virtualization Manager. In this release, logical names for such disks are properly reported, provided the QEMU Guest Agent version in the virtual machine is 5.2 or higher.
- BZ#1837221
- Previously, the Manager was able to connect to hypervisors only using RSA public keys for SSH connection. With this update, the Manager can also use EcDSA and EdDSA public keys for SSH.
Previously, RHV used only the fingerprint of an SSH public key to verify the host. Now that RHV can use EcDSA and EdDSA public keys for SSH, the whole public SSH key must be stored in the RHV database. As a result, using the fingerprint of an SSH public key is deprecated.
When adding a new host to the Manager, the Manager will always use the strongest public key that the host offers, unless an administrator provides another specific public key to use.
For existing hosts, the Manager stores the entire RSA public key in its database on the next SSH connection, for example, when an administrator moves the host to maintenance mode and enrolls a certificate or reinstalls the host. To use a different public key for the host, the administrator can provide a custom public key using the REST API or by fetching the strongest public key in the Edit host dialog in the Administration Portal.
- BZ#1884233
- The authz name is now used as the user domain on the RHVM (Red Hat Virtualization Manager) home page. It replaces the profile name. Additionally, several log statements related to the authorization/authentication flow have been made consistent by presenting both the user authz name and the profile name where applicable. In this release, <username>@<authz name> is displayed on the home page once the user is successfully logged in to the RHVM. In addition, the log statements now contain the authz name and the profile name as well as the username.
- BZ#1899583
- With this update, live updating of vNIC filter parameters is possible. When adding, deleting, or editing the filter parameters of a virtual machine’s vNIC in the Manager, the changes are applied immediately on the device on the virtual machine.
- BZ#1910302
- Previously, the storage pool manager (SPM) failed to switch to another host if the SPM had uncleared tasks. With this enhancement, a new UI menu has been added to enable cleaning up finished tasks.
- BZ#1922200
- Previously, records in the event_notification_hist table were erased only during regular cleanup of the audit_log table. By default, audit_log table records that are older than 30 days are removed.
With this update, records in the event_notification_hist table are kept for 7 days. You can override this limit by creating a custom configuration file /etc/ovirt-engine/notifier/notifier.conf.d/history.conf with the following content:
DAYS_TO_KEEP_HISTORY=<number_of_days>
Where <number_of_days> is the number of days to keep records in the event_notification_hist table. After adding this file for the first time or after changing this value, you must restart the ovirt-engine-notifier service:
# systemctl restart ovirt-engine-notifier
- BZ#1927851
- The timezone AUS Eastern Standard Time has been added to cover daylight saving time in Canberra, Melbourne and Sydney.
6.10.3. Technology Preview
The items listed in this section are provided as Technology Previews. For further information on the scope of Technology Preview status, and the associated support implications, refer to Technology Preview Features Support Scope.
- BZ#1919805
- With this update, support for the Bochs display video card emulator has been added for UEFI guest machines. This feature is now the default for a guest UEFI server that uses cluster-level 4.6 or above, where BOCHS is the default value of Video Type.
6.10.4. Release Notes
This section outlines important details about the release, including recommended practices and notable changes to Red Hat Virtualization. You must take this information into account to ensure the best possible outcomes for your deployment.
- BZ#1917409
- Red Hat Virtualization (RHV) 4.4.5+ includes Ansible within its own channels. Therefore, the ansible-2.9-for-rhel-8-x86_64-rpms channel does not need to be enabled on either the RHV Manager or RHEL-H hosts. Customers upgrading from RHV releases 4.4.0 through 4.4.4 or 4.3.z, should remove that channel from their RHV Manager and RHEL-H hosts.
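Removing the channel amounts to a single subscription-manager call on the Manager and each RHEL-H host, for example (the repository label is the one named above):

```shell
# Disable the now-unneeded Ansible 2.9 channel before or after upgrading
subscription-manager repos --disable=ansible-2.9-for-rhel-8-x86_64-rpms
```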
- BZ#1921104
- Ansible-2.9.17 is required for proper setup and functioning of Red Hat Virtualization Manager 4.4.5.
- BZ#1921108
- ovirt-hosted-engine-setup now requires Ansible-2.9.17.
6.10.5. Known Issues
These known issues exist in Red Hat Virtualization at this time:
- BZ#1923169
- Limiting package subscriptions to the Ansible 2.9 channel is not required for Red Hat Virtualization 4.4.5 installation. Workaround: Remove the Ansible 2.9 channel subscription on Red Hat Virtualization Manager and Red Hat Virtualization hosts when upgrading from Red Hat Virtualization version 4.4.4 or lower.
6.11. Red Hat Virtualization 4.4 Batch Update 3 (ovirt-4.4.4)
6.11.1. Bug Fix
These bugs were fixed in this release of Red Hat Virtualization:
- BZ#1694711
- Previously, the UI NUMA panel showed an incorrect NUMA node for a corresponding socket. In this release, the NUMA nodes are ordered by the database, and the socket matches the NUMA node.
- BZ#1792905
- Previously, users could invoke the 'sparsify' operation on thin-provisioned (qcow) disks with a single volume. While the freed space was reclaimed by the storage device, the image size did not change, and users could see this as a failure to sparsify the image. In this release, sparsifying a thin-provisioned disk with a single volume is blocked.
- BZ#1797553
- Previously, when the export VM as OVA command was executed, other operations on the engine were blocked. This made the engine execute operations serially while expected to be parallel. In this release, engine tasks are executed in parallel, unblocked by the export VM as OVA command.
- BZ#1834876
- Previously, ovirt-vmconsole caused SELinux denials logged by sshd. While it generally didn’t affect ovirt-vmconsole functionality, it could raise false alerts. In this release, there are no ovirt-vmconsole SELinux denials issued.
- BZ#1868967
- Previously, the Red Hat Virtualization Host (RHV-H) repository (rhvh-4-for-rhel-8-x86_64-rpms) did not include the libsmbclient package, which is a dependency for the sssd-ad package. Consequently, the sssd-ad package failed to install.
With this update, the libsmbclient is now in the RHV-H repository, and sssd-ad now installs on RHV-H.
- BZ#1871792
- Previously, when importing a Virtual Machine using virt-v2v, the import failed if the ovirt-engine service restarted. In this release, the import continues as long as there is an async command running, allowing the import to complete successfully.
- BZ#1886750
- Previously, when removing a host, neither the virtual machine’s host device nor the host dependency list were removed. As a result, this sometimes caused error messages when running the virtual machine on another host, and leaving behind incorrect entries in the database. In this release, the virtual machine host device and entry in the virtual machine’s dependency list for the removed host are no longer included in the database, and the associated error messages no longer occur.
- BZ#1888142
- Previously, stateless virtual machines, including pool virtual machines, issued a warning about not using the latest template version even when the virtual machine was not set to use the latest version. In this release, the template version is changed only for virtual machines that are set to use the latest version of a template, and the warning is therefore omitted from the log.
- BZ#1889987
- Previously, when the export VM as OVA command was executed, other operations on the engine were blocked. This made the engine execute operations serially while expected to be parallel. In this release, engine tasks are executed in parallel, unblocked by the export VM as OVA command.
- BZ#1897422
- Previously, virtual machines that were imported from OVA files were not set with small or large icons. In this release, small/large icons are set according to the operating system the Virtual Machine is configured with during import from OVA files. Consequently, virtual machines that are imported from OVA files are set with small and large icons.
- BZ#1899768
- Previously, live-merge failed on snapshots of virtual machines that are set with bios-type = CLUSTER-DEFAULT. In this release, live-merge works on snapshots of virtual machines that are set with bios-type = CLUSTER-DEFAULT.
6.11.2. Enhancements
This release of Red Hat Virtualization features the following enhancements:
- BZ#1710446
- With this enhancement, the Europe/Helsinki timezone can now be set in virtual machines.
- BZ#1729897
- Previously, the NUMA tune mode was set according to the Virtual Machine, using the same setting for every virtual NUMA node of the Virtual Machine. In this release, it is possible to set the NUMA tune mode for each virtual NUMA node.
- BZ#1881250
-
Before this update, when restoring a self-hosted engine you needed to enter the same FQDN that you used in the backup. With this update, when you run
hosted-engine --deploy --restore-from-file=backup_file
the deploy script fetches the FQDN from the backup file, and you do not need to enter it. - BZ#1893385
- In previous versions, when using 'hosted-engine --restore-from-file' to restore or upgrade, if the backup included extra required networks in the cluster and the user did not reply 'Yes' to the question about pausing the execution, deployment failed. In this release, regardless of the answer to that question, if the host is found to be in the "Non Operational" state, deployment pauses, outputs relevant information to the user, and waits until a lock file is removed. This allows the user to connect to the web admin UI, manually handle the situation, activate the host, and then remove the lock file to continue the deployment. This release also allows supplying a custom hook to fix such issues automatically.
- BZ#1897399
- The vdsm-hook related packages have been updated in the RHV-H x86_64 repository.
6.11.3. Release Notes
This section outlines important details about the release, including recommended practices and notable changes to Red Hat Virtualization. You must take this information into account to ensure the best possible outcomes for your deployment.
6.11.4. Known Issues
These known issues exist in Red Hat Virtualization at this time:
- BZ#1846256
- Grafana now allows Single Sign-On (SSO) using oVirt engine users, but does not allow automatic creation of them. A future version (see bugs 1835163 and 1807323) will allow automatic creation of admin users. For now, users must be created manually; after that, they can log in using SSO.
6.11.5. Deprecated Functionality
The items in this section are either no longer supported, or will no longer be supported in a future release.
- BZ#1898545
- Support for Red Hat OpenStack block storage (Cinder) is now deprecated, and will be removed in a future release.
- BZ#1899867
- Support for Red Hat OpenStack Networking (Neutron) as an external network provider is now deprecated, and will be removed in Red Hat Virtualization 4.4.5.
- BZ#1901073
- Support for third party websocket proxy deployment is now deprecated, and will be removed in a future release.
- BZ#1901211
- Support for instance types that can be used to define the hardware configuration of a virtual machine is now deprecated. This functionality will be removed in a future release.
6.11.6. Removed Functionality
- BZ#1899865
- Experimental support for DPDK has been removed in Red Hat Virtualization 4.4.4.
6.12. Red Hat Virtualization 4.4 Batch Update 2 (ovirt-4.4.3)
6.12.1. Bug Fix
These bugs were fixed in this release of Red Hat Virtualization:
- BZ#1702016
Previously, the Manager allowed adding or migrating hosts configured as self-hosted engine hosts to a data center or cluster other than the one in which the self-hosted engine VM is running, even though all self-hosted engine hosts should be in the same data center and cluster. The hosts' IDs were identical to what they were when initially deployed, causing a Sanlock error. Consequently, the agent failed to start.
With this update, an error is raised when adding a new self-hosted engine host or migrating an existing one to a data center or cluster other than the one in which the self-hosted engine is running.
To add or migrate a self-hosted engine host to a data center or cluster other than the one in which the self-hosted engine is running, you need to disable the host from being a self-hosted engine host by reinstalling it. Follow these steps in the Administration Portal:
- Move the host to Maintenance mode.
-
Invoke Reinstall with the Hosted Engine UNDEPLOY option selected. If using the REST API, use the
undeploy_hosted_engine
parameter. - Edit the host and select the target data center and cluster.
- Activate the host.
For details, see the Administration Guide or REST API Guide.
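If you perform the reinstall step through the REST API, the request might look like the following sketch (the host ID 123 is a placeholder; verify the exact form against the REST API Guide):

```
POST /ovirt-engine/api/hosts/123/install HTTP/1.1
Content-Type: application/xml

<action>
  <undeploy_hosted_engine>true</undeploy_hosted_engine>
</action>
```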
- BZ#1760170
- Previously, the MAC Pool search functionality failed to find unused addresses, and as a result, creating a vNIC failed. In this release, the MAC pool search is able to locate an unused address in the pool, so all unused addresses can be assigned from a pool.
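The search that this fix restores can be pictured with a small illustrative sketch (not the RHV Manager implementation; the names and range values are invented for the example):

```python
# Illustrative sketch (not RHV Manager code) of the kind of search a MAC
# pool performs: scan a range for the first address not already in use.

def mac_to_int(mac):
    return int(mac.replace(":", ""), 16)

def int_to_mac(value):
    raw = format(value, "012x")
    return ":".join(raw[i:i + 2] for i in range(0, 12, 2))

def find_unused_mac(range_start, range_end, used):
    """Return the first MAC in [range_start, range_end] not in `used`, or None."""
    used_ints = {mac_to_int(m) for m in used}
    for candidate in range(mac_to_int(range_start), mac_to_int(range_end) + 1):
        if candidate not in used_ints:
            return int_to_mac(candidate)
    return None  # pool exhausted

print(find_unused_mac("56:6f:00:00:00:00", "56:6f:00:00:00:0f",
                      ["56:6f:00:00:00:00", "56:6f:00:00:00:01"]))
# → 56:6f:00:00:00:02
```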
- BZ#1808320
- Previously, users with specific Data Center or Cluster permissions could not edit the cluster they have access to. In this release, users with specific Data Center or Cluster permissions can edit the cluster they have access to if they don’t change the MAC pool associated with the cluster or attempt to add a new MAC pool.
- BZ#1821425
- Previously, when deploying Self-Hosted Engine, the Appliance size was not estimated correctly, and as a result, not enough space was allotted, and unpacking the Appliance failed. In this release, the Appliance size estimation and unpacking space allotment are correct, and deployment succeeds.
- BZ#1835550
- Previously, when the RHV Manager requested a listing of available ports from the ovirt-provider-ovn, the implementation was not optimized for scaling scenarios. As a result, in scenarios with many active OVN vNICs on virtual machines, starting a virtual machine using OVN vNICs was slow and sometimes timed out. In this release, the implementation of listing ports has been optimized for scaling, and starting a virtual machine with OVN vNICs is quicker, even with many active OVN vNICs.
- BZ#1855305
- Previously, hot-plugging a disk to a Virtual Machine sometimes failed if the disk was assigned an address that was already assigned to a host-passthrough disk device. In this release, conflicts are avoided by preventing an address that is assigned to a host-passthrough disk device from being assigned to a disk that is hot-plugged to the Virtual Machine.
- BZ#1859314
- Previously, unicode strings were not handled properly by the rhv-log-collector-analyzer after porting to python3. In this release, unicode strings are now handled properly.
- BZ#1866862
- Previously, Virtual Machines deployed on AMD EPYC hosts without NUMA enabled sometimes failed to start, with an unsupported configuration error reported. In this release, Virtual Machines start successfully on AMD EPYC hosts.
- BZ#1866981
- Previously, unicode strings were not handled properly by the ovirt-engine-db-query after porting to Python3. In this release, unicode strings are now handled properly.
- BZ#1871694
- Previously, changing a cluster’s bios type to UEFI or UEFI+SecureBoot changed the Self-Hosted Engine Virtual Machine that runs within the cluster as well. As a result, the Self-Hosted Engine Virtual Machine failed to reboot upon restart. In this release, the Self-Hosted Engine Virtual Machine is configured with a custom bios type, and does not change if the cluster’s bios type changes.
- BZ#1871819
- Previously, when changes were made in the logical network, the ovn-controller on the host sometimes exceeded the timeout interval during recalculation, and calculation was triggered repeatedly. As a result, OVN networking failed. In this release, recalculation by the ovn-controller is only triggered once per change, and OVN networking is maintained.
- BZ#1877632
- Previously, when the VDSM was restarted during a Virtual Machine migration on the migration destination host, the VM status wasn’t identified correctly. In this release, the VDSM identifies the migration destination status correctly.
- BZ#1878005
- Previously, when a RHV-H 4.4 host was being prepared as a conversion host for Infrastructure Migration (IMS) using CloudForms 5, installing the v2v-conversion-host-wrapper failed due to a dependency on the missing libcgroup-tools package. The current release fixes this issue. It ships the missing package in the rhvh-4-for-rhel-8-x86_64-rpms repository.
6.12.2. Enhancements
This release of Red Hat Virtualization features the following enhancements:
- BZ#1613514
- This enhancement adds the ‘nowait’ option to the domain stats to help avoid instances of non-responsiveness in the VDSM. As a result, libvirt now receives the ‘nowait’ option to avoid non-responsiveness.
- BZ#1657294
- With this enhancement, the user can change the HostedEngine VM name after deployment.
- BZ#1745024
- With this enhancement, the Intel Icelake Server Family is now supported in 4.4 and 4.5 compatibility levels.
- BZ#1752751
This enhancement enables customization of the columns displayed in the Virtual Machines table of the Administration Portal.
- Two new columns have been added to the Virtual Machines table - (number of) ‘vCPUs’, and ‘Memory (MB)’. These columns are not displayed by default.
- A new pop-up menu has been added to the Virtual Machines table that allows you to reset the table column settings, and to add or remove columns from the display.
- The selected column display settings (column visibility and order) are now persistent on the server by default, and are migrated (uploaded) to the server. This functionality can be disabled in the User > Options popup, by de-selecting the 'Persist grid settings' option.
- BZ#1797717
- With this enhancement, you can now perform a free text search in the Administration Portal that includes internally defined keywords.
- BZ#1812316
- With this enhancement, when scheduling a Virtual Machine with pinned NUMA nodes, memory requirements are calculated correctly by taking into account the available memory as well as hugepages allocated on NUMA nodes.
- BZ#1828347
- Previously, you used Windows Guest Tools to install the required drivers for virtual machines running Microsoft Windows. Now, RHV version 4.4 uses VirtIO-Win to provide these drivers. For clusters with a compatibility level of 4.4 and later, the engine sign of the guest-agent depends on the available VirtIO-Win. The auto-attaching of a driver ISO is dropped in favor of Microsoft Windows updates. However, the initial installation needs to be done manually.
- BZ#1845397
- With this enhancement, the migration transfer speed in the VDSM log is now displayed as Mbps (Megabits per second).
- BZ#1854888
- This enhancement adds error handling for OVA import and export operations, providing successful detection and reporting to the Red Hat Virtualization Manager if the qemu-img process fails to complete.
- BZ#1862968
-
This enhancement introduces a new option for automatically setting the CPU and NUMA pinning of a Virtual Machine by introducing a new configuration parameter, auto_pinning_policy. This option can be set to
existing
, using the current topology of the Virtual Machine’s CPU, or it can be set to
adjust
, using the dedicated host CPU topology and changing it according to the Virtual Machine. - BZ#1879280
- Default Data Center and Default Cluster, which are created during Red Hat Virtualization installation, are created with 4.5 compatibility level by default in Red Hat Virtualization 4.4.3. Please be aware that compatibility level 4.5 requires RHEL 8.3 with Advanced Virtualization 8.3.
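The auto_pinning_policy parameter described above (BZ#1862968) could be supplied through the REST API along the following lines (a sketch only; the VM ID 123 is a placeholder, and the exact request form should be checked against the REST API Guide):

```
PUT /ovirt-engine/api/vms/123?auto_pinning_policy=adjust HTTP/1.1
Content-Type: application/xml

<vm/>
```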
6.12.3. Technology Preview
The items listed in this section are provided as Technology Previews. For further information on the scope of Technology Preview status, and the associated support implications, refer to Technology Preview Features Support Scope.
- BZ#1361718
- This enhancement provides support for attaching an emulated NVDIMM to virtual machines that are backed by NVDIMM on the host machine. For details, see the Virtual Machine Management Guide.
6.12.4. Release Notes
This section outlines important details about the release, including recommended practices and notable changes to Red Hat Virtualization. You must take this information into account to ensure the best possible outcomes for your deployment.
6.12.5. Known Issues
These known issues exist in Red Hat Virtualization at this time:
- BZ#1886487
- RHV-H 4.4.3 is based on RHEL 8.3, which uses a new version of Anaconda (BZ#1691319). This new combination introduces a regression that breaks the features that BZ#1777886 "[RFE] Support minimal storage layout for RHVH" added to RHV-H 4.4 GA. This regression affects only new installations of RHV-H 4.4.3. To work around this issue, first install the RHV-H 4.4 GA ISO and then upgrade the host to RHV-H 4.4.3.
6.12.6. Removed Functionality
- BZ#1884146
- The ovirt-engine-api-explorer package has been deprecated and removed in Red Hat Virtualization Manager 4.4.3. Customers should use the official REST API Guide instead, which provides the same information as ovirt-engine-api-explorer. See the Red Hat Virtualization REST API Guide.
6.13. Red Hat Virtualization 4.4 Batch Update 1 (ovirt-4.4.2)
6.13.1. Bug Fix
These bugs were fixed in this release of Red Hat Virtualization:
BZ#1663135 Previously, virtual machine (VM) imports from sparse storage assumed the target also used sparse storage. However, block storage does not support sparse allocation. The current release fixes this issue: Imports to block storage for COW image file formats preserve sparse allocation types and work as expected.
BZ#1740058 Before this update, when you ran a VM that was previously powered off, the VDSM log contained many uninformative warnings. This update resolves the issue and these warnings no longer appear in the VDSM log.
BZ#1793290 Previously, the partition number was not removed from the disk path, so the disk mapping pointed to an arbitrary partition on the disk, instead of the disk itself. The current release fixes this issue: Disk mapping contains only disk paths.
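The kind of normalization this fix performs can be sketched as follows (illustrative only, not the actual RHV code; NVMe-style device names would need a different rule):

```python
import re

# Illustrative sketch (not the actual RHV code) of the fix: strip a trailing
# partition number so the disk mapping refers to the whole disk, not a partition.
def disk_path(path):
    # /dev/sda1 -> /dev/sda ; /dev/vdb12 -> /dev/vdb ; /dev/sda is unchanged.
    return re.sub(r"^(/dev/[a-z]+)\d+$", r"\1", path)

print(disk_path("/dev/sda1"))  # → /dev/sda
print(disk_path("/dev/vdb"))   # → /dev/vdb
```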
BZ#1843234 Before this update, when using Firefox 74.0.1 and greater with Autofill enabled, the Administration Portal password was used to autofill the Sysprep Administrator password field in the Initial Run tab of the Run Virtual Machine(s) dialog. Validation of the dialog failed because the password did not match the Verify admin password field, which was not autofilled.
This issue has been resolved, and the browser no longer uses Autofill for the Sysprep admin password field.
BZ#1855761 Firefox 68 ESR does not support several standard units in the <svg> tag. (For more information, see 1287054.) Consequently, before this update, aggregated status card icons appeared larger than intended.
This update uses supported units to size icons, and as a result, icons appear correctly in FireFox 68 ESR and later.
BZ#1866956 Before this update, when the Blank template was set with HA enabled, a backup of the RHVM virtual machine saved this setting. This setting prevented deployment of the RHVM virtual machine during the restore operation. Consequently, upgrading to Red Hat Virtualization 4.4 failed.
This update disables the HA setting on the RHVM virtual machine during self-hosted engine deployment, and as a result, the upgrade to 4.4 succeeds.
BZ#1867038 Previously, restoring from backup or upgrading from RHV 4.3 to RHV 4.4 failed while restoring SSO configuration requiring the gssapi module. In this release, the mod_auth_gssapi package is included in the RHV Manager appliance, and upgrading or restoring from backup succeeds even when SSO configuration is included.
BZ#1869209 Before this update, adding hosts with newer Intel CPUs to IBRS family clusters could fail, and the spec_ctrl flag was not detected.
This update resolves the issue and you can now add hosts with modern Intel CPUs to the IBRS family clusters and the spec_ctrl flag is detected.
BZ#1869307 Previously, vim-enhanced package installation failed on Red Hat Virtualization Host 4.4. In this release, vim-enhanced package installation succeeds.
BZ#1870122 Previously, when upgrading a self-hosted engine from RHV 4.3 to RHV 4.4, Grafana was installed by default during the engine-setup process, and if the remote database option was selected for Data Warehouse setup, the upgrade failed. In this release, Grafana deployment is disabled by default in self-hosted engine installations, and the upgrade process succeeds.
BZ#1871235 Before this update, a virtual machine that was set with a High Performance profile using the REST API could not start if it had any USB devices, because the High Performance profile disabled the USB controller. Additionally, hosts in clusters with compatibility level 4.3 did not report the TSC frequency.
This update resolves these issues. TSC is no longer present for 4.3 clusters and the VM won’t have USB devices when there is no USB controller, allowing VMs to run normally.
BZ#1875851 Firefox 68 ESR does not support several standard units in the <svg> tag. (For more information, see 1287054.) Consequently, before this update, aggregated status card icons appeared larger than intended.
This update uses supported units to size icons, and as a result, icons appear correctly in FireFox 68 ESR and later.
6.13.2. Enhancements
This release of Red Hat Virtualization features the following enhancements:
BZ#1749803 This enhancement enables you to set the same target domain for multiple disks.
Previously, when moving or copying multiple disks, you needed to set the target domain for each disk separately. Now, if a common target domain exists, you can set it as the target domain for all disks.
If there is no common storage domain, that is, if not all the disks are being moved or copied to the same storage domain, the common target domain is set to 'Mixed'.
BZ#1819260 The following search filter properties for Storage Domains have been enhanced: - 'size' changed to 'free_size' - 'total_size' added to the search engine options - 'used' changed to 'used_size'
For example, you can now use these search properties in the Storage Domains tab.
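A hypothetical search using the renamed properties might look like this (the values are illustrative):

```
Storage: free_size > 6 GB and used_size < 100 GB
```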
6.13.3. Known Issues
These known issues exist in Red Hat Virtualization at this time:
BZ#1674497 Previously, hot-unplugging memory on RHEL 8 guests generated an error because the memory DIMM was in use. This prevented the removal of that memory from the VM. To work around this issue, add the movable_node option by setting the virtual machine’s kernel command-line parameters, as described here.
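One possible way to add the parameter, assuming a RHEL 8 guest whose boot entries are managed by grubby (an illustrative sketch, not a command from the original note):

```
# Inside the guest: append movable_node to the kernel command line
# of all installed kernels, then reboot the guest.
$ grubby --update-kernel=ALL --args="movable_node"
```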
BZ#1837864 When upgrading from Red Hat Virtualization 4.4 GA (RHV 4.4.1) to RHV 4.4.2, the host enters emergency mode and cannot be restarted. Workaround: see the solution in https://access.redhat.com/solutions/5428651
BZ#1850378 When you upgrade Red Hat Virtualization from 4.3 to 4.4 with a storage domain that is locally mounted on / (root), the upgrade fails. Specifically, on the host it appears that the upgrade is successful, but the host’s status in the Administration Portal is NonOperational.
Local storage should always be defined on a file system that is separate from / (root). Use a separate logical volume or disk, to prevent possible loss of data during upgrades.
If you are using / (root) as the locally mounted storage domain, migrate your data to a separate logical volume or disk prior to upgrading.
6.14. Red Hat Virtualization 4.4 General Availability (ovirt-4.4.1)
6.14.1. Bug Fix
These bugs were fixed in this release of Red Hat Virtualization:
BZ#1061569
Previously, if you requested multiple concurrent network changes on a host, some requests were not handled due to a 'reject on busy' service policy. The current release fixes this issue with a new service policy: If resources are not available on the server to handle a request, the host queues the request for a configurable period. If server resources become available within this period, the server handles the request. Otherwise, it rejects the request. There is no guarantee for the order in which queued requests are handled.
BZ#1437559
Previously, when a virtual machine was starting, the Manager sent the domain XML with a NUMA configuration CPU list containing only the current CPU IDs. As a result, libvirt/QEMU issued a warning that the NUMA configuration CPU list was incomplete and should contain IDs for all of the virtual CPUs. In this release, the warning no longer appears in the log.
BZ#1501798
Previously, using ovirt-engine-rename did not handle the OVN provider correctly. This caused bad IP address and hostname configurations, which prevented adding new hosts and caused other related issues. The current release fixes this issue. Now, ovirt-engine-rename handles ovirt-provider-ovn correctly, resolving the previous issues.
BZ#1569593
When deploying the self-hosted engine on a host, the Broker and Agent services are brought down momentarily. Previously, if the VDSM service attempted to send a get_stats message before the services were restarted, the communication failed and the VDSM logged an error message. In this release, such events result in a warning and are not flagged or logged as errors.
BZ#1569926
Previously, commands trying to access an unresponsive NFS storage domain remained blocked for 20-30 minutes, which had significant impacts. This was caused by the non-optimal values of the NFS storage timeout and retry parameters. The current release fixes this issue: It changes these parameter values so commands to a non-responsive NFS storage domain fail within one minute.
BZ#1573600
Previously, importing a virtual machine (VM) from a snapshot that included the memory disk failed if you imported it to a storage domain that is different from the storage domain where the snapshot was created. This happened because the memory disk depended on the storage domain remaining unchanged. The current release fixes this issue. Registration of the VM with its memory disks succeeds. If the memory disk is not in the RHV Manager database, the VM creates a new one.
BZ#1583328
Previously, a custom scheduler policy was used without the HostDevice filter. Consequently, the virtual machine was scheduled on an unsupported host, causing a null pointer exception.
With this update, some filter policy units are now mandatory, including HostDevice. These filter policy units are always active, cannot be disabled, and are no longer visible in the UI or API.
These filters are mandatory:
- Compatibility-Version
- CPU-Level
- CpuPinning
- HostDevice
- PinToHost
- VM leases ready
BZ#1585986
Previously, if you lowered the cluster compatibility version, the change did not propagate to the self-hosted engine virtual machine. As a result, the self-hosted engine virtual machine was not compatible with the new cluster version; you could not start or migrate it to another host in the cluster. The current release fixes this issue: The lower cluster compatibility version propagates to the self-hosted engine virtual machine; you can start and migrate it.
BZ#1590911
Previously, if two or more templates had the same name, selecting any of these templates displayed the same details from only one of the templates. This happened because the Administration Portal identified the selected template using a non-unique template name. The current release fixes this issue by using the template ID, which is unique, instead.
BZ#1596178
Previously, the VM Portal was inconsistent in how it displayed pool cards. After a user took all of the virtual machines from them, the VM Portal removed automatic pool cards but continued displaying manual pool cards. The current release fixes this issue: VM Portal always displays a pool card, and the card has a new label that shows how many virtual machines the user can take from the pool.
BZ#1598266
When a system had many FC LUNs with many paths per LUN, and a high I/O load, scanning of FC devices became slow, causing timeouts in monitoring VM disk size, and making VMs non-responsive. In this release, FC scans have been optimized for speed, and VMs are much less likely to become non-responsive.
BZ#1612152
Previously, Virtual Data Optimizer (VDO) statistics were not available for VDO volumes with an error, so VDO monitoring from VDSM caused a traceback. This update fixes the issue by correctly handling the different outputs from the VDO statistics tool.
BZ#1634742
Previously, if you decided to redeploy RHV Manager as a hosted engine, running the ovirt-hosted-engine-cleanup command did not clean up the /etc/libvirt/qemu.conf file correctly. Then, the hosted engine redeployment failed to restart the libvirtd service because libvirtd-tls.socket remained active. The current release fixes this issue. You can run the cleanup tool and redeploy the Manager as a hosted engine.
BZ#1639360
Previously, mixing the Logical Volume Manager (LVM) activation and deactivation commands with other commands caused possible undefined LVM behavior and warnings in the logs. The current release fixes this issue. It runs the LVM activation and deactivation commands separately from other commands. This produces well-defined LVM behavior and clear errors in case of failure.
BZ#1650417
Previously, if a host failed and the RHV Manager tried to start the high-availability virtual machine (HA VM) before the NFS lease expired, OFD locking caused the HA VM to fail with the error "Failed to get "write" lock. Is another process using the image?". If the HA VM failed three times in a row, the Manager could not start it again, breaking the HA functionality. The current release fixes this issue. The Manager continues trying to start the VM even after three failures (the frequency of the attempts decreases over time). Eventually, once the lock expires, the VM is started.
BZ#1650505
Previously, after increasing the cluster compatibility version of a cluster with virtual machines that had outstanding configuration changes, those changes were reverted. The current release fixes this issue. It applies both the outstanding configuration changes and the new cluster compatibility version to the virtual machines.
BZ#1654555
Previously, the / filesystem automatically grew to fit the whole disk, and the user could not increase the size of /var or /var/log. This happened because, if a customer specified a disk larger than 49 GB while installing the Hosted Engine, the whole logical volume was allocated to the root (/) filesystem. In contrast, for the RHVM machine, the critical filesystems are /var and /var/log.
The current release fixes this issue. Now, the RHV Manager appliance is based on the logical volume manager (LVM). At setup time, its PV and VG are automatically extended, but the logical volumes (LVs) are not. As a result, after installation is complete, you can extend all of the LVs in the Manager VM using the free space in the VG.
BZ#1656621
Previously, an imported VM always had 'Cloud-Init/Sysprep' turned on. The Manager created a VmInit even when one did not exist in the OVF file of the OVA. The current release fixes this issue: The imported VM only has 'Cloud-Init/Sysprep' turned on if the OVA had it enabled. Otherwise, it is disabled.
BZ#1658101
In this release, when updating a Virtual Machine using a REST API, not specifying the console value now means that the console state should not be changed. As a result, the console keeps its previous state.
BZ#1659161
Previously, changing the template version of a VM pool created from a delete-protected VM made the VM pool non-editable and unusable. The current release fixes this issue: It prevents changing the template version of a VM pool whose VMs are delete-protected, and the attempt fails with an error message.
BZ#1659574
Previously, after upgrading RHV 4.1 to a later version, high-availability virtual machines (HA VMs) failed validation and did not run. To run the VMs, the user had to reset the lease Storage Domain ID. The current release fixes this issue: It removes the validation and regenerates the lease information data when the lease Storage Domain ID is set. After upgrading RHV 4.1, HA VMs with lease Storage Domain IDs run.
BZ#1660071
Previously, when migrating a paused virtual machine, the Red Hat Virtualization Manager did not always recognize that the migration completed. With this update, the Manager immediately recognizes when migration is complete.
BZ#1664479
When you use the engine ("Master") to set the high-availability host running the engine virtual machine (VM) to maintenance mode, the ovirt-ha-agent migrates the engine VM to another host. Previously, in specific cases, such as when these VMs had an old compatibility version, this type of migration failed. The current release fixes this problem.
BZ#1670102
Previously, to get the Cinder Library (cinderlib), you had to install the OpenStack repository. The current release fixes this issue by providing a separate repository for cinderlib.
To enable the repository, enter:
$ dnf config-manager --set-enabled rhel-8-openstack-cinderlib-rpms
To install cinderlib, enter:
$ sudo dnf install python3-cinderlib
BZ#1676582
Previously, the user interface used the wrong unit of measure for the VM memory size in the VM settings of Hosted Engine deployment via cockpit: It showed MB instead of MiB. The current release fixes this issue: It uses MiB as the unit of measure.
BZ#1678007
Before this update, you could import a virtual machine from a cluster with a compatibility version lower than the target cluster, and the virtual machine’s cluster version would not automatically update to the new cluster’s compatibility version, causing the virtual machine’s configuration to be invalid. Consequently, you could not run the virtual machine without manually changing its configuration. With this update, the virtual machine’s cluster version automatically updates to the new cluster’s compatibility version. You can import virtual machines from cluster compatibility version 3.6 or newer.
BZ#1678262
Previously, when you created a virtual machine from a template, the BIOS type defined in the template did not take effect on the new virtual machine. Consequently, the BIOS type on the new virtual machine was incorrect. With this update, this bug is fixed, so the BIOS type on the new virtual machine is correct.
BZ#1679471
Previously, the console client resources page showed truncated titles for some locales. The current release fixes this issue. It re-arranges the console client resources page layout as part of migrating from Patternfly 3 to Patternfly 4 and fixes the truncated titles.
BZ#1680368
Previously, the slot parameter was parsed as a string, causing disk rollback to fail during the creation of a virtual machine from a template when using an Ansible script. Note that there was no such failure when using the Administration Portal to create a virtual machine from a template. With this update, the slot parameter is parsed as an int, so disk rollback and virtual machine creation succeed.
BZ#1684266
When a large disk is converted as part of a VM export to OVA, the conversion takes a long time. Previously, the SSH channel used by the export script timed out and closed due to the long period of inactivity, leaving an orphan volume. The current release fixes this issue: Now, the export script adds some traffic to the SSH channel during disk conversion to prevent the SSH channel from being closed.
BZ#1684537
Previously, a virtual machine could crash with the message "qemu-kvm: Failed to lock byte 100" during a live migration with storage problems. The current release fixes this issue in the underlying platform so the issue no longer happens.
BZ#1685034
after_get_caps is a VDSM hook that periodically checks for a database connection. This hook requires ovs-vswitchd to be running in order to execute properly. Previously, the hook ran even when ovs-vswitchd was disabled, causing an error to be logged to /var/log/messages and eventually flooding it. Now, when the hook starts, it checks whether the OVS service is available and exits if it is not, so the log is no longer flooded with these error messages.
BZ#1686575
Previously, the self-hosted engine high availability host’s management network was configured during deployment. The VDSM took over the Network Manager and configured the selected network interface during initial deployment, while the Network Manager remained disabled. During restore, there was no option to attach additional (non-default) networks, and the restore process failed because the high-availability host had no connectivity to networks previously configured by the user that were listed in the backup file.
In this release, the user can pause the restore process, manually add the required networks, and resume the restore process to completion.
BZ#1688052
Previously, the gluster fencing policy check failed due to a non-iterable object and threw an exception. The code also contained a minor typo. The current release fixes these issues.
BZ#1688159
Previously, when a virtual machine migration entered post-copy mode and remained in that mode for a long time, the migration sometimes failed and the migrated virtual machine was powered off. In this release, post-copy migrations are maintained to completion.
BZ#1692592
Previously, items with number ten and higher on the BIOS boot menu were not assigned sequential indexes. This made it difficult to select those items. The current release fixes this issue. Now, items ten and higher are assigned letter indexes. Users can select those items by entering the corresponding letter.
BZ#1693628
Previously, the state of the user session was not saved correctly in the Engine database, causing many unnecessary database updates to be performed. The current release fixes this issue: Now, the user session state is saved correctly on the first update.
BZ#1693813
Previously, if you updated the Data Center (DC) level, and the DC had a VM with a lower custom compatibility level than the DC’s level, the VM could not resume due to a "not supported custom compatibility version." The current release fixes this issue: It validates the DC before upgrading the DC level. If the validation finds VMs with old custom compatibility levels, it does not upgrade the DC level: Instead, it displays "Cannot update Data Center compatibility version. Please resume/power off the following VMs before updating the Data Center."
BZ#1696313
Before this update, some architecture-specific dependencies of VDSM were moved to safelease in order to keep VDSM architecture-agnostic. With this update, those dependencies have been returned to VDSM and removed from safelease.
BZ#1698102
Previously, engine-setup did not provide enough information about configuring ovirt-provider-ovn. The current release fixes this issue by providing more information in the engine-setup prompt and documentation that helps users understand their choice and follow up actions.
BZ#1700623
Previously, moving a disk resulted in the wrong SIZE/CAP key in the volume metadata. This happened because creating a volume that had a parent overwrote the size of the newly-created volume with the parent size. As a result, the volume metadata contained the wrong volume size value. The current release fixes this issue, so the volume metadata contains the correct value.
BZ#1703112
In some scenarios, the PCI address of a hotplugged SR-IOV vNIC was overwritten by an empty value, and as a result, the NIC name in the virtual machine was changed following a reboot. In this release, the vNIC PCI address is stored in the database and the NIC name persists following a virtual machine reboot.
BZ#1703428
Previously, when importing a KVM virtual machine into Red Hat Virtualization, "Hardware Clock Time Offset" was not set. As a result, the Manager machine did not recognize the guest agent installed in the virtual machine. In this release, the Manager machine recognizes the guest agent on a virtual machine imported from KVM, and the "Hardware Clock Time Offset" is not null.
BZ#1707225
Before this update, there was no way to back up and restore the Cinderlib database. With this update, the engine-backup command includes the Cinderlib database.
For example, to back up the engine, including the Cinderlib database:
# engine-backup --scope=all --mode=backup --file=cinderlib_from_old_engine --log=log_cinderlib_from_old_engine
To restore this same database:
# engine-backup --mode=restore --file=/root/cinderlib_from_old_engine --log=/root/log_cinderlib_from_old_engine --provision-all-databases --restore-permissions
BZ#1711902
In a Red Hat Virtualization (RHV) environment with VDSM version 4.3 and Manager version 4.1, the DiskTypes are parsed as int values. However, in an RHV environment with Manager version > 4.1, the DiskTypes are parsed as strings. That compatibility mismatch produced an error: "VDSM error: Invalid parameter: 'DiskType=2'". The current release fixes this issue by changing the string value back to an int, so the operation succeeds with no error.
BZ#1713724
Previously, converting a storage domain to the V5 format failed when, following an unsuccessful delete volume operation, partly-deleted volumes with cleared metadata remained in the storage domain. The current release fixes this issue. Converting a storage domain succeeds even when partly-deleted volumes with cleared metadata remain in the storage domain.
BZ#1714528
Previously, some HTML elements in Cluster Upgrade dialog had missing or duplicated IDs, which impaired automated UI testing. The current release fixes this issue. It provides missing IDs and removes duplicates to improve automated UI testing.
BZ#1715393
Previously, if you changed a virtual machine’s BIOS Type chipset from one of the Q35 options to Cluster default or vice versa while USB Policy or USB Support was Enabled, the change did not update the USB controller to the correct setting. The current release fixes this issue. The same actions update the USB controller correctly.
BZ#1717390
Previously, if you hot-unplugged a virtual machine interface shortly after booting the virtual machine, the unplugging action failed with an error. This happened because VM monitoring did not report the alias of the interface soon enough, and VDSM could not identify the vNIC to unplug. The current release fixes this issue: If the alias is missing during hot unplug, the Engine generates one on the fly.
BZ#1718141
Previously, the python3-ovirt-engine-sdk4 package did not include the all_content attribute of the HostNicService and HostNicsService. As a result, this attribute was effectively unavailable to python3-ovirt-engine-sdk4 users. The current release fixes this issue by adding the all_content parameter to the python3-ovirt-engine-sdk4.
BZ#1719990
Previously, when creating a virtual machine with the French language selected, the Administration Portal did not accept the memory size using the French abbreviation Mo instead of MB. After setting the value with the Mo suffix, the value was reset to zero. With this update, the value is parsed correctly and the value remains as entered.
BZ#1720747
Previously, if ovirt-ha-broker restarted while the RHV Manager (engine) was querying the status of the self-hosted engine cluster, the query could get stuck. If that happened, the most straightforward workaround was to restart the RHV Manager.
This happened because the RHV Manager periodically checked the status of the self-hosted engine cluster by querying the VDSM daemon on the cluster host. With each query, VDSM checked the status of the ovirt-ha-broker daemon over a Unix Domain Socket. The communication between VDSM and ovirt-ha-broker wasn’t enforcing a timeout. If ovirt-ha-broker was restarting, such as trying to recover from a storage issue, the VDSM request could get lost, causing VDSM and the RHV Manager to wait indefinitely.
The current release fixes this issue. It enforces a timeout in the communication channel between the VDSM and ovirt-ha-broker. If ovirt-ha-broker cannot reply to VDSM within a certain timeout, VDSM reports a self-hosted engine error to the RHV Manager.
BZ#1720795
Previously, the Manager searched for guest tools only on ISO domains, not data domains. The current release fixes this issue: Now, if the Manager detects a new tool on data domains or ISO domains, it displays a mark for the Windows VM.
BZ#1721804
Before this update libvirt did not support launching virtual machines with names ending with a period, even though the Manager did. This prevented launching virtual machines with names ending with a period. With this update, the Administration Portal and the REST API now prevent ending the name of a virtual machine with a period, resolving the issue.
BZ#1722854
Previously, while VDSM was starting, the definition of the network filter vdsm-no-mac-spoofing was removed and recreated to ensure the filter was up to date. This occasionally resulted in a timeout during the start of VDSM. The current release fixes this issue. Instead of being removed and recreated, the vdsm-no-mac-spoofing filter is updated during the start of VDSM. This update takes less than a second, regardless of the number of vNICs using this filter.
BZ#1723668
Previously, during virtual machine shut down, the VDSM command Get Host Statistics occasionally failed with an Internal JSON-RPC error: {'reason': '[Errno 19] vnet<x> is not present in the system'}. This failure happened because the shutdown could make an interface disappear while statistics were being gathered. The current release fixes this issue. It prevents such failures from being reported.
BZ#1724002
Previously, cloud-init could not be used on hosts with FIPS enabled. With this update, cloud-init can be used on hosts with FIPS enabled.
BZ#1724959
Previously, the About dialog in the VM Portal provided a link to GitHub for reporting issues. However, RHV customers should use the Customer Portal to report issues. The current release fixes this issue. Now, the About dialog provides a link to the Red Hat Customer Portal.
BZ#1728472
Previously, the RHV Manager reported network out of sync because the Linux kernel applied the default gateway IPv6 router advertisements, and the IPv6 routing table was not configured by RHV. The current release fixes this issue. The IPv6 routing table is configured by RHV. NetworkManager manages the default gateway IPv6 router advertisements.
BZ#1729511
During installation or upgrade to Red Hat Virtualization 4.3, engine-setup failed if the PKI Organization Name in the CA certificate included non-ASCII characters. In this release, the engine-setup process completes successfully.
BZ#1729811
Previously, the guest_cur_user_name of the vm_dynamic database table was limited to 255 characters, not enough for more than approximately 100 user names. As a result, when too many users logged in, updating the table failed with an error. The current release fixes this issue by changing the field type from VARCHAR(255) to TEXT.
BZ#1730264
Previously, enabling port mirroring on networks whose user-visible name was longer than 15 characters failed. This happened because port mirroring tried to use this long user-visible network name, which was not a valid network name. The current release fixes this issue. Now, instead of the user-visible name, port mirroring uses the VDSM network name. Therefore, you can enable port mirroring for networks whose user-visible name is longer than 15 characters.
BZ#1731212
Previously, the RHV landing page did not support scrolling. With lower screen resolutions, some users could not use the log in menu option for the Administration Portal or VM Portal. The current release fixes this issue by migrating the landing and login pages to PatternFly 4, which displays horizontal and vertical scroll bars when needed. Users can access the entire screen regardless of their screen resolution or zoom setting.
BZ#1731590
Before this update, previewing a snapshot of a virtual machine, where the snapshot of one or more of the machine’s disks did not exist or had no image with active set to "true", caused a null pointer exception to appear in the logs, and the virtual machine remained locked. With this update, before a snapshot preview occurs, a database query checks for any damaged images in the set of virtual machine images. If the query finds a damaged image, the preview operation is blocked. After you fix the damaged image, the preview operation should work.
BZ#1733227
Previously, an issue with the Next button on External Provider Imports prevented users from importing virtual machines (VMs) from external providers such as VMware. The current release fixes this issue and users can import virtual machines from external providers.
BZ#1733843
Previously, exporting a virtual machine (VM) to an Open Virtual Appliance (OVA) file archive failed if the VM was running on the host performing the export operation. The export process created a virtual machine snapshot, and while the image was in use, the RHV Manager could not tear it down. The current release fixes this issue. If the VM is running, the RHV Manager skips tearing down the image, and exporting the OVA of a running VM succeeds.
BZ#1737234
Previously, if you sent the RHV Manager an API command to attach a non-existing ISO to a VM, it attached an empty CD or left an existing one intact. The current release fixes this issue. Now, the Manager checks if the specified ISO exists, and throws an error if it doesn’t.
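For reference, a hedged sketch of the API call this note describes — changing a VM’s current CD over the REST API. The Manager host name, VM and CD-ROM IDs, credentials, and ISO file id are placeholders, not values from this note:

```shell
# With the fix, the Manager returns an error instead of attaching an empty CD
# when the named ISO does not exist. All identifiers below are placeholders.
curl -k -u admin@internal:PASSWORD -X PUT \
     -H 'Content-Type: application/xml' \
     -d '<cdrom><file id="rhel-guest.iso"/></cdrom>' \
     'https://MANAGER-FQDN/ovirt-engine/api/vms/VM-ID/cdroms/CDROM-ID?current=true'
```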
BZ#1739377
Previously, creating a snapshot did not correctly save the Cloud-Init/Sysprep settings for the guest OS. If you tried to clone a virtual machine from the snapshot, it did not have valid values to initialize the guest OS. The current release fixes this issue. Now, creating a snapshot correctly saves the Cloud-Init/Sysprep configuration for the guest OS.
BZ#1741792
Previously, using LUKS disk encryption alone was a problem: the RHV Manager could reboot a node using Power Management commands, but the node would then wait for the user to enter a passphrase to decrypt and unlock the disk, so the reboot never completed. This release fixes the issue by adding clevis RPMs to the Red Hat Virtualization Host (RHVH) image. As a result, the Manager can automatically unlock and decrypt an RHVH node using TPM or NBDE.
BZ#1743269
Previously, upgrading RHV from version 4.2 to 4.3 made the 10-setup-ovirt-provider-ovn.conf file world-readable. The current release fixes this issue, so the file has no unnecessary permissions.
BZ#1743296
Before this update, selecting templates or virtual machines did not display the proper details when templates or virtual machines with the same name were saved in different Data Centers, because the machine’s name, instead of its GUID, was used to fetch the machine’s details. With this update, the query uses the virtual machine’s GUID, and the correct details are displayed.
BZ#1745384
Previously, trying to update the IPv6 gateway in the Setup Networks dialog removed it from the network attachment. The current release fixes this issue: You can update the IPv6 gateway if the related network has the default route role.
BZ#1746699
Before this update, copying disks created by virt-v2v failed with an Invalid Parameter Exception: Invalid parameter: 'DiskType=1'. With this release, copying disks succeeds.
BZ#1746700
The ovirt-host-deploy package uses otopi. Previously, otopi could not handle non-ASCII text in /root/.ssh/authorized_keys and failed with an error: 'ascii' codec can’t decode byte 0xc3 in position 25: ordinal not in range(128). The new release fixes this issue by adding support for Unicode characters to otopi.
BZ#1749347
Previously, systemd units from failed conversions were not removed from the host. These could cause collisions and prevent subsequent conversions from starting because the service name was already "in use." The current release fixes this issue. If the conversion fails, the units are explicitly removed so they cannot interfere with subsequent conversions.
BZ#1749630
Previously, the Administration Portal showed very high memory usage for a host with no virtual machines running because it was not counting slab reclaimable memory. As a result, virtual machines could not be migrated to that host. The current release fixes that issue. The free host memory is evaluated correctly.
BZ#1750212
Previously, when you tried to delete the snapshot of a virtual machine with a LUN disk, RHV parsed its image ID incorrectly and used "mapper" as its value. This issue produced a null pointer error (NPE) and made the deletion fail. The current release fixes this issue, so the image ID parses correctly and the deletion succeeds.
BZ#1750482
Previously, when you used the VM Portal to configure a virtual machine (VM) to use Windows OS, it failed with the error, "Invalid time zone for given OS type." This happened because the VM’s timezone for Windows OS was not set properly. The current release fixes this issue. If the time zone in the VM template or VM is not compatible with the VM OS, it uses the default time zone. For Windows, this default is "GMT Standard Time". For other OSs, it is "Etc/GMT". Now, you can use the VM Portal to configure a VM to use Windows OS.
BZ#1751215
Previously, after upgrading RHV from version 4.1 to 4.3, the Graphical Console for the self-hosted engine virtual machine was locked because the default display in version 4.1 is VGA. The current release fixes this issue. While upgrading to version 4.3, it changes the default display to VNC. As a result, the Graphical Console for the hosted-engine virtual machine is changeable.
BZ#1754363
With this release, the number of DNS configuration SQL queries that the Red Hat Virtualization Manager runs is significantly reduced, which improves the Manager’s ability to scale.
BZ#1756244
Previously, in an IPv4-only host with a .local FQDN, the deployment looped, searching for an available IPv6 prefix until it failed. This happened because the hosted-engine setup picked a link-local IP address for the host. The hosted-engine setup could not ensure that the Engine and the host were on the same subnet when one of them used a link-local address, and the Engine must not use a link-local address if it is to be reachable through a routed network. The current release fixes this issue: Even if the hostname resolves to a link-local IP address, the hosted-engine setup ignores link-local IP addresses and tries to use another IP address as the address for the host. As a result, the hosted engine can deploy on hosts even if the hostname resolves to a link-local address.
BZ#1759388
Previously, ExecStopPost was present in the VDSM service file. This meant that, after stopping VDSM, some of its child processes could continue and, in some cases, lead to data corruption. The current release fixes this issue. It removes ExecStopPost from the VDSM service. As a result, terminating VDSM also stops its child processes.
BZ#1763084
Previously, some migrations failed because of invalid host certificates whose Common Name (CN) contained an IP address, and because using the CN for hostname matching is obsolete. The current release fixes this issue by filling-in the Subject Alternative Name (SAN) during host installation, host upgrade, and certificate enrolment. Periodic certificate validation includes the SAN field and raises an error if it is not filled.
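As a quick check, you can confirm that a host certificate carries the SAN extension that periodic validation now requires. The helper below is illustrative, and the VDSM certificate path in the usage comment is an assumption, not taken from this note:

```shell
# Print a certificate's subjectAltName extension; the command fails
# if the certificate has no SAN (requires OpenSSL 1.1.1 or later).
check_san() {
    openssl x509 -in "$1" -noout -ext subjectAltName
}
# Usage against a host certificate (assumed path):
# check_san /etc/pki/vdsm/certs/vdsmcert.pem
```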
BZ#1764943
Previously, while creating virtual machine snapshots, if the VDSM command to freeze a virtual machine’s file systems exceeded the snapshot command’s 3-minute timeout period, creating snapshots failed, causing virtual machines and disks to lock.
The current release adds two key-value pairs to the engine configuration. You can configure these using the engine-config tool:
- Setting LiveSnapshotPerformFreezeInEngine to true enables the Manager to freeze VMs' file systems before it creates a snapshot of them.
- Setting LiveSnapshotAllowInconsistent to true enables the Manager to continue creating snapshots if it fails to freeze VMs' file systems.
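For reference, both keys can be set with the engine-config tool. The key names are taken from this note; restarting ovirt-engine afterward is the standard way to apply engine-config changes:

```shell
# Freeze guest file systems in the Manager before taking a live snapshot:
engine-config -s LiveSnapshotPerformFreezeInEngine=true
# Allow snapshot creation to continue if freezing fails:
engine-config -s LiveSnapshotAllowInconsistent=true
# engine-config changes take effect after a Manager restart:
systemctl restart ovirt-engine
```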
BZ#1769339
Previously, extending a floating QCOW disk did not work because the user interface and REST API ignored the getNewSize parameter. The current release fixes this issue and validates the settings so you can extend a floating QCOW disk.
BZ#1769463
Previously, in a large environment, the REST API’s response to a request for the cluster list was slow. This slowness was caused by processing a large amount of surplus data from the engine database about out-of-sync hosts on the cluster, which was ultimately not included in the response. The current release fixes this issue. The query excludes the surplus data, and the API responds quickly.
BZ#1770237
Previously, the virtual machine (VM) instance type edit and create dialog displayed a vNIC profile editor. This item gave users the impression they could associate a vNIC profile with an instance type, which is not valid. The current release fixes this issue by removing the vNIC profile editor from the instance edit and create dialog.
BZ#1770889
Previously, VDSM did not send the Host.getStats message: It did not convert the description field of the Host.getStats message to utf-8, which caused the JSON layer to fail. The current release fixes this issue. It converts the description field to utf-8 so that VDSM can send the Host.getStats message.
BZ#1775248
Previously, issues with aliases for USB, channel, and PCI devices generated WARN and ERROR messages in engine.log when you started virtual machines.
RHV Manager omitted the GUID from the alias of the USB controller device. This information is required later to correlate the alias with the database instance of the USB device. As a result, duplicate devices were being created. Separately, channel and PCI devices whose aliases did not contain GUIDs also threw exceptions and caused warnings.
The current release fixes these issues. It removes the code that prevented the USB controller device from sending the correct alias when launching the VM, and the GUID is added to the USB controller devices' aliases within the domain XML. It also filters channel and PCI controllers out of the GUID conversion code to avoid printing exception warnings for these devices.
BZ#1777954
Previously, for the list of virtual machine templates in the Administration Portal, a paging bug hid every other page, and the templates on those pages, from view. The current release fixes this issue and displays every page of templates correctly.
BZ#1781095
Before this update, the engine-cleanup command enabled you to do a partial cleanup by prompting you to select which components to remove, even though partial cleanup is not supported. This resulted in a broken system. With this update, the prompt no longer appears and only full cleanup is possible.
BZ#1783180
Previously, a problem with AMD EPYC CPUs that were missing the virt-ssbd CPU flag prevented Hosted Engine installation. The current release fixes this issue.
BZ#1783337
Previously, the rename tool did not renew the websocketproxy certificates and did not update the value of WebSocketProxy in the engine configuration. This caused issues such as the VNC browser console not being able to connect to the server. The current release fixes this issue. Now, ovirt-engine-rename handles the websocket proxy correctly: It regenerates the certificate, restarts the service, and updates the value of WebSocketProxy.
BZ#1783815
Previously, if a virtual machine (VM) was forcibly shut down by SIGTERM, in some cases the VDSM did not handle the libvirt shutdown event that contained information about why the VM was shut down and evaluated it as if the guest had initiated a clean shutdown. The current release fixes this issue: VDSM handles the shutdown event, and the Manager restarts the high-availability VMs as expected.
BZ#1784049
Previously, if you ran a virtual machine (VM) with an old operating system such as RHEL 6 and the BIOS Type was a Q35 Chipset, it caused a kernel panic. The current release fixes this issue. If a VM has an old operating system and the BIOS Type is a Q35 Chipset, it uses the VirtIO-transitional model for some devices, which enables the VM to run normally.
BZ#1784398
Previously, because of a UI regression bug in the Administration Portal, you could not add system permissions to a user. For example, clicking Add System Permissions, selecting a Role to assign, and clicking OK did not work. The current release fixes this issue so that you can add system permissions to a user.
BZ#1785364
Previously, when restoring a backup, engine-setup did not restart ovn-northd, so the SSL/TLS configuration was outdated. With this update, engine-setup restarts ovn-northd, which reloads the restored SSL/TLS configuration.
BZ#1785615
Previously, trying to mount an ISO domain (File → Change CD) within the Console generated a "Failed to perform 'Change CD' operation" error due to the deprecation of REST API v3. The current release fixes this issue: It upgrades Remote Viewer to use REST API v4 so mounting an ISO domain within the console works.
BZ#1788424
Previously, if you disabled the virtio-scsi drive and imported the virtual machine that had a direct LUN attached, the import validation failed with a "Cannot import VM. VirtIO-SCSI is disabled for the VM" error. This happened because the validation tried to verify that the virtio-scsi drive was still attached to the VM. The current release fixes this issue. If the Disk Interface Type is not virtio-scsi, the validation does not search for the virtio-scsi drive. Disk Interface Type uses an alternative driver, and the validation passes.
BZ#1788783
Previously, when migrating a virtual machine, information about the running guest agent was not always passed to the destination host. In these cases, the migrated virtual machine on the destination host did not receive an after_migration life cycle event notification. This update fixes this issue. The after_migration notification works as expected now.
BZ#1793481
Before this update, you could enable a raw format disk for incremental backup from the Administration Portal or using the REST API, but because incremental backup does not support raw format disks, the backup failed.
With this update, you can only enable incremental backup for QCOW2 format disks, preventing inclusion of raw format disks.
BZ#1795886
Before this update, validation succeeded for an incremental backup operation that included raw format disks, even though incremental backup does not support raw format disks. With this update, validation succeeds for a full backup operation for a virtual machine with a raw format disk, but validation fails for an incremental backup operation for a virtual machine with a raw format disk.
BZ#1796811
The apache-sshd library is no longer bundled in the rhvm-dependencies package. The apache-sshd library is now packaged in its own RPM package.
BZ#1798175
Previously, due to a regression, importing from KVM failed and threw exceptions because of a missing readinto function on the StreamAdapter. The current release fixes this issue so that KVM importing works.
BZ#1798425
Previously, importing virtual machines failed when the source version variable was null. With this update, validation of the source compatibility version is removed, enabling the import to succeed even when the source version variable is null.
BZ#1801205
Previously, VM Pools set to HA could not be run. VM Pools are stateless; nonetheless, a user could set a VM in a pool as supporting HA, after which the VM could not be launched. The current release fixes this issue by disabling the HA checkbox, so the user can no longer set a VM Pool to support HA.
BZ#1806276
Previously, the ovirt-provider-ovn network provider was non-functional on RHV 4.3.9 Hosted-Engine. This happened because, with FDP 20.A (bug 1791388), the OVS/OVN service no longer had the permissions to read the private SSL/TLS key file. The current release fixes this issue: It updates the private SSL/TLS key file permissions. OVS/OVN reads the key file and works as expected.
BZ#1807937
Previously, if running a virtual machine with its Run Once configuration failed, the RHV Manager would try to run the virtual machine with its standard configuration on a different host. The current release fixes this issue. Now, if Run Once fails, the RHV Manager tries to run the virtual machine with its Run Once configuration on a different host.
BZ#1808788
Previously, trying to run a VM failed with an unsupported configuration error if its configuration did not specify a NUMA node. This happened because the domain XML was missing its NUMA node section, and VMs require at least one NUMA node to run. The current release fixes this issue: If the user has not specified any NUMA nodes, a NUMA node section is generated for the VM. As a result, a VM where NUMA nodes were not specified launches regardless of how many offline CPUs are available.
BZ#1809875
Before this update, a problem in the per Data-Center loop collecting images information caused incomplete data for analysis for all but the last Data-Center collected. With this update, the information is properly collected for all Data-Centers, resolving the issue.
BZ#1810893
Previously, using the Administration Portal to import a storage domain omitted custom mount options for NFS storage servers. The current release fixes this issue by including the custom mount options.
BZ#1812875
Previously, when the Administration Portal was configured to use French language, the user could not create virtual machines. This was caused by French translations that were missing from the user interface. The current release fixes this issue. It provides the missing translations. Users can configure and create virtual machines while the Administration Portal is configured to use the French language.
BZ#1813028
Previously, if you exported a virtual machine (VM) as an Open Virtual Appliance (OVA) file from a host that was missing a loop device, and imported the OVA elsewhere, the resulting VM had an empty disk (no OS) and could not run. This was caused by a timing and permissions issue related to the missing loop device. The current release fixes the timing and permission issues. As a result, the VM to OVA export includes the guest OS. Now, when you create a VM from the OVA, the VM can run.
BZ#1816327
Previously, if you tried to start an already-running virtual machine (VM) on the same host, VDSM failed this operation too late and the VM on the host became hidden from the RHV Manager. The current release fixes the issue: VDSM immediately rejects attempts to start a running VM on the same host.
BZ#1816777
Previously, when initiating the console from the VM Portal to noVNC, the console did not work due to a missing 'path' parameter. In this release, the 'path' parameter is not mandatory, and the noVNC console can initiate even when 'path' is not provided.
BZ#1819299
Previously when loading a memory snapshot, the RHV Manager did not load existing device IDs. Instead, it created new IDs for each device. The Manager was unable to correlate the IDs with the devices and treated them as though they were unplugged. The current release fixes this issue. Now, the Manager consumes the device IDs and correlates them with the devices.
BZ#1819960
Previously, if you used the update template script example of the ovirt-engine-sdk to import a virtual machine or template from an OVF configuration, it failed with a null-pointer exception (NPE). This happened because the script example did not supply the Storage Pool Id and Source Storage Domain Id. The current release fixes this issue. Now, the script gets the correct ID values from the image, so importing a template succeeds.
BZ#1820140
Previously, with RHV Manager running as a self-hosted engine, the user could hotplug memory on the self-hosted engine virtual machine and exceed the physical memory of the host. In that case, restarting the virtual machine failed due to insufficient memory. The current release fixes this issue. It prevents the user from setting the self-hosted engine virtual machine’s memory to exceed the active host’s physical memory. You can only save configurations where the self-hosted engine virtual machine’s memory is less than the active host’s physical memory.
BZ#1821164
While the RHV Manager is creating a virtual machine (VM) snapshot, it can time out and fail while trying to freeze the file system. If this happens, more than one VM can write data to the same logical volume and corrupt the data on it. In the current release, you can prevent this condition by configuring the Manager to freeze the VM’s guest filesystems before creating a snapshot. To enable this behavior, run the engine-config tool and set the LiveSnapshotPerformFreezeInEngine key-value pair to true.
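As a sketch, the setting can be applied with the engine-config tool on the Manager machine; the key name comes from this note, and restarting ovirt-engine afterward is the usual step for engine-config changes:

```shell
# Freeze guest filesystems in the Manager before creating live snapshots.
engine-config -s LiveSnapshotPerformFreezeInEngine=true

# Confirm the current value of the key.
engine-config -g LiveSnapshotPerformFreezeInEngine

# Restart the Manager service so the new value takes effect.
systemctl restart ovirt-engine
```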
BZ#1822479
Previously, when redeploying the RHV Manager as a hosted engine after cleanup, the libvirtd service failed to start. This happened because the libvirtd-tls.socket service was active. The current release fixes this issue. Now, when you run the ovirt-hosted-engine-cleanup tool, it stops the libvirtd-tls.socket service, and the libvirtd service starts when you redeploy the RHV Manager as a hosted engine.
BZ#1826248
Previously, the 'Host console SSO' feature did not work with Python 3, which is the default Python on RHEL 8. The code was initially written for Python 2 and was not properly modified for Python 3. The current release fixes this issue: the 'Host console SSO' feature now works with Python 3.
BZ#1830730
Previously, if the DNS query test timed out, it did not produce a log message. The current release fixes this issue: If a DNS query times out, it produces a "DNS query failed" message in the broker.log.
BZ#1832905
In previous versions, engine-backup --mode=verify passed even if pg_restore emitted errors. The current release fixes this issue. The engine-backup --mode=verify command now correctly fails if pg_restore emits errors.
BZ#1834523
Previously, adding or removing a smart card to a running virtual machine did not work. The current release fixes this issue. When you add or remove a smart card, it saves this change to the virtual machine’s next run configuration. In the Administration Portal, the virtual machine indicates that a next run configuration exists, and lists "Smartcard" as a changed field. When you restart the virtual machine, it applies the new configuration to the virtual machine.
BZ#1834873
Previously, retrieving host capabilities failed for specific non-NUMA CPU topologies. The current release fixes this issue and correctly reports the host capabilities for those topologies.
BZ#1835096
Previously, if creating a live snapshot failed because of a storage error, the RHV Manager would incorrectly report that it had been successful. The current release fixes this issue. Now, if creating a snapshot fails, the Manager correctly shows that it failed.
BZ#1836609
Previously, the slot parameter was parsed as a string, causing disk rollback to fail during the creation of a virtual machine from a template when using an Ansible script. Note that there was no such failure when using the Administration Portal to create a virtual machine from a template. With this update, the slot parameter is parsed as an int, so disk rollback and virtual machine creation succeed.
BZ#1837266
Previously, if you backed up RHV Manager running as a self-hosted engine in RHV version 4.3, restoring it in RHV version 4.4 failed with particular CPU configurations. The current release fixes this issue. Now, restoring the RHV Manager with those CPU configurations succeeds.
BZ#1838439
Previously, in the beta version of RHV 4.4, after adding a host to a cluster with compatibility version 4.2, editing the cluster reset its BIOS Type from the previously auto-detected value to Cluster default. As a result, virtual machines could not run because no chipset exists for the Cluster default BIOS Type. The current release fixes this issue. It preserves the original value of BIOS Type and prevents it from being modified when you edit the cluster. As a result, you can create and run virtual machines normally after editing cluster properties.
BZ#1838493
Previously, creating a live snapshot with memory while LiveSnapshotPerformFreezeInEngine was set to True resulted in a virtual machine file system that was frozen when previewing or committing the snapshot with memory restore. In this release, the virtual machine runs successfully after creating a preview snapshot from a memory snapshot.
BZ#1839967
Previously, running ovirt-engine-rename generated errors and failed because Python 3 renamed urlparse to urllib.parse. The current release fixes this issue. Now, ovirt-engine-rename uses urllib.parse and runs successfully.
BZ#1842260
Previously, if you were trying to send metrics and logs to an Elasticsearch instance that was not on OCP, you could not set usehttps to false while also not using Elasticsearch certificates (use_omelasticsearch_cert: false). As a result, you could not send data to Elasticsearch without HTTPS. The current release fixes this issue. Now, you can set the usehttps variable as expected and send data to Elasticsearch without HTTPS.
BZ#1843089
Before this release, local storage pools were created but were not deleted during Self-Hosted Engine deployment, causing storage pool leftovers to remain. In this release, the cleanup is performed properly following Self-Hosted Engine deployment, and there are no storage pool leftovers.
BZ#1845473
Previously, exporting a virtual machine or template to an OVA file incorrectly set its format in the OVF metadata file to "RAW". This issue caused problems when using the OVA file. The current release fixes this issue. Exporting to OVA now sets the format in the OVF metadata file to "COW", which represents the disk’s actual format, qcow2.
BZ#1847513
When you change the cluster compatibility version, it can also update the compatibility version of the virtual machines. If the update fails, it rolls back the changes. Previously, chipsets and emulated machines were not part of the cluster update. The current release fixes this issue. Now, chipsets and emulated machines are also updated when you update the cluster compatibility version.
BZ#1849275
Previously, if the block path was unavailable for a storage block device on a host, the RHV Manager could not process host devices from that host. The current release fixes this issue. The Manager can process host devices even though a block path is missing.
BZ#1850117
Previously, the `hosted-engine --set-shared-config storage` command failed to update the hosted engine storage. With this update, the command works.
BZ#1850220
Old virtual machines that have not been restarted since user aliases were introduced in RHV version 4.2 use old device aliases created by libvirt. The current release adds support for those old device aliases and links them to the new user-aliases to prevent correlation issues and devices being unplugged.
6.14.2. Enhancements
This release of Red Hat Virtualization features the following enhancements:
BZ#854932
The REST API in the current release adds the following updatable disk properties for floating disks:
- For Image disks: provisioned_size, alias, description, wipe_after_delete, shareable, backup, and disk_profile.
- For LUN disks: alias, description and shareable.
- For Cinder and Managed Block disks: provisioned_size, alias, and description.
See Services.
BZ#1080097
In this release, it is now possible to edit the properties of a Floating Disk in the Storage > Disks tab of the Administration Portal. For example, the user can edit the Description, Alias, and Size of the disk.
BZ#1107803
With this enhancement, oVirt uses NetworkManager and NetworkManager Stateful Configuration (nmstate) to configure host networking. The previous implementation used network-scripts, which are deprecated in CentOS 8. This usage of NetworkManager helps to share code with software components. As a result, oVirt integrates better with RHEL-based software. Now, for example, the Cockpit web interface can see the host networking configuration, and oVirt can read the network configuration created by the Anaconda installer.
BZ#1179273
The VDSM ssl_protocol, ssl_excludes, and ssl_ciphers config options have been removed. For details, see: Consistent security by crypto policies in Red Hat Enterprise Linux 8.
To fine-tune your crypto settings, change or create your crypto policy. For example, for your hosts to communicate with legacy systems that still use insecure TLSv1 or TLSv1.1, change your crypto policy to LEGACY with:
# update-crypto-policies --set LEGACY
BZ#1306586
The floppy device has been replaced by a CDROM device for sysprep installation of Compatibility Versions 4.4 and later.
BZ#1325468
After a high-availability virtual machine (HA VM) crashes, the RHV Manager tries to restart it indefinitely: at first with a short delay between restarts, then, after a specified number of failed retries, with a longer delay.
Also, the Manager starts crashed HA VMs in order of priority, delaying lower-priority VMs until higher-priority VMs are 'Up.'
The current release adds new configuration options:
- RetryToRunAutoStartVmShortIntervalInSeconds, the short delay, in seconds. The default value is 30.
- RetryToRunAutoStartVmLongIntervalInSeconds, the long delay, in seconds. The default value is 1800, which equals 30 minutes.
- NumOfTriesToRunFailedAutoStartVmInShortIntervals, the number of restart tries with short delays before switching to long delays. The default value is 10 tries.
- MaxTimeAutoStartBlockedOnPriority, the maximum time, in minutes, before starting a lower-priority VM. The default value is 10 minutes.
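For example, assuming the engine-config tool on the Manager machine, the retry behavior might be tuned as follows; the values shown are illustrative, not recommendations:

```shell
# Retry a crashed HA VM every 20 seconds for the first 5 attempts...
engine-config -s RetryToRunAutoStartVmShortIntervalInSeconds=20
engine-config -s NumOfTriesToRunFailedAutoStartVmInShortIntervals=5

# ...then fall back to retrying every 10 minutes (600 seconds).
engine-config -s RetryToRunAutoStartVmLongIntervalInSeconds=600

# Restart the Manager so the changes take effect.
systemctl restart ovirt-engine
```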
BZ#1358501
Network operations that span multiple hosts may take a long time. This enhancement shows you when these operations finish: It records start and end events in the Events Tab of the Administration Portal and engine.log. If you use the Administration Portal to trigger the network operation, the portal also displays a pop-up notification when the operation is complete.
BZ#1388599
In the default virtual machine template, the current release changes the default setting for "VM Type" to "server." Previously, it was "desktop."
BZ#1403677
With this update, you can connect to a Gluster storage network over IPv6, without the need for IPv4.
BZ#1427717
The current release adds the ability for you to select affinity groups while creating or editing a virtual machine (VM) or host. Previously, you could only add a VM or host by editing an affinity group.
BZ#1450351
With this update, you can set the reason for shutting down or powering off a virtual machine when using a REST API request to execute the shutdown or power-off.
BZ#1455465
In this release, the default "Optimized for" optimization type for bundled templates is now set to "Server".
BZ#1475774
Previously, when creating/managing an iSCSI storage domain, there was no indication that the operation may take a long time. In this release, the following message has been added: “Loading… A large number of LUNs may slow down the operation.”
BZ#1477049
With this update, unmanaged networks are viewable by the user on the host NICs page at a glance. Each NIC indicates whether one of its networks is unmanaged by oVirt engine. Previously, to view this indication, the user had to open the setup dialog, which was cumbersome.
BZ#1482465
With this update, when viewing clusters, you can sort by the Cluster CPU Type and Compatibility Version columns.
BZ#1512838
The current release adds a new capability: In the "Edit Template" window, you can use the "Sealed" checkbox to indicate whether a template is sealed. The Compute > Templates window has a new "Sealed" column, which displays this information.
BZ#1523289
With this update, you can check the list of hosts that are not configured for metrics, that is, those hosts on which the Collectd and Rsyslog/Fluentd services are not running.
First, run the playbook 'manage-ovirt-metrics-services.yml' by entering:
# /usr/share/ovirt-engine-metrics/configure_ovirt_machines_for_metrics.sh --playbook=manage-ovirt-metrics-services.yml
Then, check the file /etc/ovirt-engine-metrics/hosts_not_configured_for_metrics.
BZ#1546838
The current release displays a new warning when you use 'localhost' as an FQDN: "[WARNING] Using the name 'localhost' is not recommended, and may cause problems later on."
BZ#1547937
This release adds a progress bar for the disk synchronization stage of Live Storage Migration.
BZ#1564280
This enhancement adds support for OVMF with SecureBoot, which enables UEFI support for Virtual Machines.
BZ#1572155
The current release adds the VM’s current state and uptime to the Compute > Virtual Machine: General tab.
BZ#1574443
Previously, it was problematic to put a host into maintenance mode while it was alternating between the Connecting and Activating states. In this release, after you restart a host using its power management configuration, the host is put into maintenance mode regardless of its initial state before the restart.
BZ#1581417
All new clusters with x86 architecture and compatibility version 4.4 or higher now set the BIOS Type to the Q35 Chipset by default, instead of the i440FX chipset.
BZ#1593800
When creating a new MAC address pool, its ranges must not overlap with each other or with any ranges in existing MAC address pools.
BZ#1595536
When a host is running in FIPS mode, VNC must use SASL authorization instead of regular passwords because of a weak algorithm inherent to the VNC protocol. The current release facilitates using SASL by providing an Ansible role, ovirt-host-setup-vnc-sasl, which you can run manually on FIPS-enabled hosts. This role does the following:
- Creates an empty SASL password database.
- Prepares the SASL config file for qemu.
- Changes the libvirt config file for qemu.
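A minimal way to apply the role could look like the following sketch; the inventory file and playbook name are assumptions, and only the role name ovirt-host-setup-vnc-sasl comes from this note:

```shell
# vnc-sasl.yml is a hypothetical playbook that applies the role:
#
#   - hosts: fips_hosts
#     roles:
#       - ovirt-host-setup-vnc-sasl
#
# Run it against the FIPS-enabled hosts listed in a hypothetical inventory.
ansible-playbook -i fips-hosts.ini vnc-sasl.yml
```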
BZ#1600059
Previously, when High Availability was selected for a new virtual machine, the user had to select a lease Storage Domain manually. In this release, if the user does not select one, a bootable Storage Domain is automatically set as the lease Storage Domain for new High Availability virtual machines.
BZ#1602816
Previously, if you tried to deploy hosted-engine over a teaming device, it would try to proceed and then fail with an error. The current release fixes this issue. It filters out teaming devices. If only teaming devices are available, it rejects the deployment with a clear error message that describes the issue.
BZ#1603591
With this enhancement, while using cockpit or engine-setup to deploy RHV Manager as a Self-Hosted Engine, the options for specifying the NFS version include two additional versions, 4.0 and 4.2.
BZ#1622700
Previously, multipath repeatedly logged irrelevant errors for local devices. In this release, local devices are blacklisted and irrelevant errors are no longer logged.
BZ#1622946
With this update, the API reports extents information for sparse disks: which extents contain data, which read as zeros, and which are unallocated (holes). This enhancement enables clients to use the imageio REST API to optimize image transfers and minimize storage requirements by skipping zero and unallocated extents.
BZ#1640192
Before this update, you could enable FIPS on a host. But because the engine was not aware of FIPS, it did not use the appropriate options with qemu when starting virtual machines, so the virtual machines were not fully operable.
With this update, you can enable FIPS for a host in the Administration Portal, and the engine uses qemu with FIPS-compatible arguments.
To enable FIPS for a host, in the Edit Host window, select the Kernel tab and check the FIPS mode checkbox.
BZ#1640908
Previously, if there were hundreds of Fibre Channel LUNs, the Administration Portal dialog box for adding or managing storage domains took too long to render and might become unresponsive. This enhancement improves performance: it displays a portion of the LUNs in a table and provides right and left arrows that users can click to see the next or previous set of LUNs. As a result, the window renders normally and remains responsive regardless of how many LUNs are present.
BZ#1641694
With this update, you can start the self-hosted engine virtual machine in a paused state. To do so, enter the following command:
# hosted-engine --vm-start-paused
To un-pause the virtual machine, enter the following command:
# hosted-engine --vm-start
BZ#1643886
This update adds support for Hyper-V enlightenment for Windows virtual machines on hosts running RHEL 8.2 with the cluster compatibility level set to 4.4. Specifically, Windows virtual machines now support the following Hyper-V functionality:
- reset
- vpindex
- runtime
- frequencies
- reenlightenment
- tlbflush
BZ#1647440
The current release adds a new feature: On the VM list page, the tooltip for the VM type icon shows a list of the fields you have changed between the current and the next run of the virtual machine.
BZ#1651406
The current release enables you to migrate a group of virtual machines (VMs) that are in positive enforcing affinity with each other.
- You can use the new checkbox in the Migrate VM dialog to migrate this type of affinity group.
- You can use the following REST API to migrate this type of affinity group: http://ovirt.github.io/ovirt-engine-api-model/4.4/#services/vm/methods/migrate/parameters/migrate_vms_in_affinity_closure.
- Putting a host into maintenance also migrates this type of affinity group.
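For example, the REST API call might be issued as follows; the Manager FQDN, credentials, and VM ID are placeholders, while the migrate_vms_in_affinity_closure parameter name comes from the API model referenced in this note:

```shell
# Migrate a VM together with the VMs in its positive enforcing affinity group.
curl -k -u admin@internal:PASSWORD \
  -H "Content-Type: application/xml" \
  -d '<action><migrate_vms_in_affinity_closure>true</migrate_vms_in_affinity_closure></action>' \
  "https://engine.example.com/ovirt-engine/api/vms/VM_ID/migrate"
```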
BZ#1652565
In this release, it is now possible to edit the properties of a Floating Disk in the Storage > Disks tab of Administration Portal. For example, the user can edit the Description, Alias, and Size of the disk.
BZ#1666913
With this enhancement, if a network name contains spaces or is longer than 15 characters, the Administration Portal notifies you that the RHV Manager will rename the network using the host network’s UUID as a basis for the new name.
BZ#1671876
Suppose a host has a pair of bonded NICs using Active-Backup (Mode 1). Previously, the user had to click Refresh Capabilities to get the current status of this bonded pair. In the current release, if the active NIC changes, the state of the bond is refreshed automatically in the Administration Portal and REST API. You do not need to click Refresh Capabilities.
BZ#1674420
This update adds support for the following virtual CPU models:
- Intel Cascade Lake Server
- Intel Ivy Bridge
BZ#1679110
This enhancement moves the pop-up ("toast") notifications from the upper right corner to the lower right corner, so they no longer cover the action buttons. Now, the notifications rise from the bottom right corner to within 400 px of the top.
BZ#1679730
This update adds an audit log warning on an out-of-range IPv4 gateway static configuration for a host NIC. The validity of the gateway is assessed compared to the configured IP address and netmask. This gives users better feedback and helps them notice incorrect configurations.
BZ#1683108
This release adds a new 'status' column to the affinity group table that shows whether all of an affinity group’s rules are satisfied (status = ok) or not (status = broken). The "Enforcing" option does not affect this status.
BZ#1687345
Previously, RHV Manager created live virtual machine snapshots synchronously. If creating the snapshot exceeded the timeout period (default 180 seconds), the operation failed. These failures tended to happen with virtual machines that had large memory loads or clusters that had slow storage speeds.
With this enhancement, the live snapshot operation is asynchronous and runs until it is complete, regardless of how long it takes.
BZ#1688796
With this update, a new configuration variable, AAA_JAAS_ENABLE_DEBUG, has been added to enable Kerberos/GSSAPI debugging on AAA. The default value is false.
To enable debugging, create a new configuration file named /etc/ovirt-engine/engine.conf.d/99-kerberos-debug.conf with the following content:
AAA_JAAS_ENABLE_DEBUG=true
BZ#1691704
Red Hat Virtualization Manager virtual machines now support ignition configuration, and this feature can be used via the UI or API by any guest OS that supports it, for example, RHCOS or FCOS.
BZ#1692709
With this update, each host’s boot partition is explicitly stated in the kernel boot parameters. For example: boot=/dev/sda1 or boot=UUID=<id>.
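To verify this on a host, you could inspect the default kernel entry; grubby is standard RHEL tooling rather than anything specific to this note, and the exact boot= value is host-specific:

```shell
# Show the kernel arguments of the default boot entry and
# filter out the explicit boot= parameter.
grubby --info=DEFAULT | grep -o 'boot=[^ "]*'
```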
BZ#1696245
Previously, while cloning a virtual machine, you could only edit the name of the virtual machine in the Clone Virtual Machine window. With this enhancement, you can fully customize any of the virtual machine settings in the Clone Virtual Machine window. This means, for example, that you can clone a virtual machine into a different storage domain.
BZ#1700021
Previously, if a Certificate Authority ca.pem file was not present, the engine-setup tool automatically regenerated all PKI files, requiring you to reinstall or re-enroll certificates for all hosts.
Now, if ca.pem is not present but other PKI files are, engine-setup prompts you to restore ca.pem from a backup without regenerating all PKI files. If a backup is present and you select this option, you no longer need to reinstall or re-enroll certificates for all hosts.
BZ#1700036
This enhancement adds support for DMTF Redfish to RHV. To use this functionality, you use the Administration Portal to edit a host’s properties. On the host’s Power Management tab, you click + to add a new power management device. In the Edit fence agent window, you set Type to redfish and fill in additional details such as login information and the IP/FQDN of the agent.
BZ#1700338
This enhancement enables you to use the RHV Manager’s REST API to manage subscriptions and receive notifications based on specific events. In previous versions, you could do this only in the Administration Portal.
BZ#1710491
With this enhancement, an EVENT_ID is logged when a virtual machine’s guest operating system reboots. External systems such as Cloudforms and Manage IQ rely on the EVENT_ID log messages to keep track of the virtual machine’s state.
BZ#1712890
With this update, when you upgrade RHV, engine-setup notifies you if virtual machines in the environment have snapshots whose cluster levels are incompatible with the RHV version you are upgrading to. It is safe to let it proceed, but it is not safe to use these snapshots after the upgrade. For example, it is not safe to preview these snapshots.
There is an exception to the above: engine-setup does not notify you if the virtual machine is running the Manager as a self-hosted engine. For hosted-engine, it provides an automatic "Yes" and upgrades the virtual machine without prompting or notifying you. It is unsafe to use snapshots of the hosted-engine virtual machine after the upgrade.
BZ#1716590
With this enhancement, on the "System" tab of the "New Virtual Machine" and "Edit Virtual Machine" windows, the "Serial Number Policy" displays the value of the "Cluster default" setting. If you are adding or editing a VM and are deciding whether to override the cluster-level serial number policy, seeing that information here is convenient. Previously, to see the cluster’s default serial number policy, you had to close the VM window and navigate to the Cluster window.
BZ#1718818
This enhancement enables you to attach a SCSI host device, scsi_hostdev, to a virtual machine and specify the optimal driver for the type of SCSI device:
- scsi_generic: (Default) Enables the guest operating system to access OS-supported SCSI host devices attached to the host. Use this driver for SCSI media changers that require raw access, such as tape or CD changers.
- scsi_block: Similar to scsi_generic but with better speed and reliability. Use for SCSI disk devices. If trim or discard for the underlying device is desired, and it’s a hard disk, use this driver.
- scsi_hd: Provides performance with lowered overhead. Supports large numbers of devices. Uses the standard SCSI device naming scheme. Can be used with aio-native. Use this driver for high-performance SSDs.
- virtio_blk_pci: Provides the highest performance without the SCSI overhead. Supports identifying devices by their serial numbers.
BZ#1726494
The qemu-guest-agent for OpenSUSE guests has been updated to the qemu-guest-agent-3.1.0-lp151.6.1 build.
BZ#1726907
With this update, you can select Red Hat CoreOS (RHCOS) as the operating system for a virtual machine. When you do so, the initialization type is set to ignition. RHCOS uses Ignition to initialize the virtual machine, differentiating it from RHEL.
BZ#1731395
Previously, with every security update, a new CPU type was created in the vdc_options table under the key ServerCPUList in the database for all affected architectures. For example, the Intel Skylake Client Family included the following CPU types:
- Intel Skylake Client Family
- Intel Skylake Client IBRS Family
- Intel Skylake Client IBRS SSBD Family
- Intel Skylake Client IBRS SSBD MDS Family
With this update, only two CPU Types are now supported for any CPU microarchitecture that has security updates, keeping the CPU list manageable. For example:
- Intel Skylake Client Family
- Secure Intel Skylake Client Family
The default CPU type will not change. The Secure CPU type will contain the latest updates.
BZ#1732738
The ovirt-engine software stack has been modernized to build and run with java-11-openjdk. Java 11 OpenJDK is the new LTS version from Red Hat.
BZ#1733031
To transfer virtual machines between data centers, you use data storage domains, because export domains have been deprecated. However, moving a data storage domain to a data center that has a higher compatibility level (DC level) can upgrade its storage format version, for example, from V3 to V5. This higher format version can prevent you from reattaching the data storage domain to the original data center and transferring additional virtual machines.
In the current release, if you encounter this situation, the Administration Portal asks you to confirm that you want to update the storage domain format, for example, from 'V3' to 'V5'. It also warns that you will not be able to attach it back to an older data center with a lower DC level.
To work around this issue, you can create a destination data center that has the same compatibility level as the source data center. When you finish transferring the virtual machines, you can increase the DC level.
BZ#1733932
With this update, you can remove an unregistered entity, such as a virtual machine, a template, or a disk, without importing it into the environment.
BZ#1734727
The current release updates the ovirt-engine-extension-logger-log4j package from OpenJDK version 8 to version 11 so it aligns with the oVirt engine.
BZ#1739557
With this update, you can enable encryption for live migration of virtual machines between hosts in the same cluster. This provides more protection to data transferred between hosts. You can enable or disable encryption in the Administration Portal, in the Edit Cluster dialog box, under Migration Policy > Additional Properties. Encryption is disabled by default.
BZ#1740644
The current release adds a configuration option, VdsmUseNmstate, which you can use to enable nmstate on every new host with cluster compatibility level >= 4.4.
BZ#1740978
When a VM from the older compatibility version is imported, its configuration has to be updated to be compatible with the current cluster compatibility version. This enhancement adds a warning to the audit log that lists the updated parameters.
BZ#1745019
The current release adds support for running virtual machines on hosts that have an Intel Snow Ridge CPU. There are two ways to enable this capability:
- Enable a virtual machine’s Pass-Through Host CPU setting and configure it to Start Running On Specific Host(s) that have a Snow Ridge CPU.
- Set cpuflags in the virtual machine’s custom properties to +gfni,+cldemote.
BZ#1748097
In this release, it is now possible to edit the properties of a Floating Virtual Disk in the Storage > Disks tab of the Administration Portal. For example, the user can edit the Description, Alias, and Size of the disk. You can also update floating virtual disk properties using the REST API update put command described in the Red Hat Virtualization REST API Guide.
BZ#1749284
Before this update, the live snapshot operation was synchronous: if VDSM required more than 180 seconds to create a snapshot, the operation failed, preventing snapshots of some virtual machines, such as those with large memory loads or slow storage.
With this update, the live snapshot operation is asynchronous, so the operation runs until it ends successfully, regardless of how long it takes.
BZ#1751268
The current release adds a new Insights section to the RHV welcome or landing page. This section contains two links:
- "Insights Guide" links to the "Deploying Insights in Red Hat Virtualization Manager" topic in the Administration Guide.
- "Insights Dashboard" links to the Red Hat Insights Dashboard on the Customer Portal.
BZ#1752995
With this update, the default action in the VM Portal’s dashboard for a running virtual machine is to open a console. Before this update, the default action was "Suspend".
Specifically, the default operation for a running VM is set to "SPICE Console" if the virtual machine supports SPICE, or "VNC Console" if the virtual machine only supports VNC.
For a virtual machine running in headless mode, the default action is still "Suspend".
BZ#1757320
This update provides packages required to run oVirt Node and oVirt CentOS Linux hosts based on CentOS Linux 8.
BZ#1758289
When you remove a host from the RHV Manager, it can create duplicate entries for a host-unreachable event in the RHV Manager database. Later, if you add the host back to the RHV Manager, these entries can cause networking issues. With this enhancement, if this type of event happens, the RHV Manager prints a message to the events tab and log. The message notifies users of the issue and explains how to avoid networking issues if they add the host back to RHV Manager.
BZ#1763812
The current release moves the button to Remove a virtual machine to the "more" menu (three dots in the upper-right area). This was done to improve usability: Too many users pressed the Remove button, mistakenly believing it would remove a selected item in the details view, such as a snapshot. They did not realize it would delete the virtual machine. The new location should help users avoid this kind of mistake.
BZ#1764788
In this release, Ansible Runner is installed by default and allows running Ansible playbooks directly in the Red Hat Virtualization Manager.
BZ#1767319
In this release, modifying a MAC address pool, or modifying its ranges so that they overlap with existing MAC address pool ranges, is strictly forbidden.
BZ#1768844
With this enhancement, when you add a host to a cluster, it has the advanced virtualization channel enabled, so the host uses the latest supported libvirt and qemu packages.
BZ#1768937
With this enhancement, the Administration Portal enables you to copy a host network configuration from one host to another by clicking a button. Copying network configurations this way is faster and easier than configuring each host separately.
BZ#1771977
In RHV 4.4, NetworkManager manages the interfaces and static routes. As a result, you can make more robust modifications to static routes using NetworkManager Stateful Configuration (nmstate).
BZ#1777877
This release adds Grafana as a user interface and visualization tool for monitoring the Data Warehouse. You can install and configure Grafana during engine-setup. Grafana includes pre-built dashboards that present data from the ovirt_engine_history PostgreSQL data warehouse database.
BZ#1779580
The current release updates the Documentation section of the RHV welcome or landing page. This makes it easier to access the current documentation and facilitates access to translated documentation in the future.
- The links now point to the online documentation on the Red Hat customer portal.
- The "Introduction to the Administration Portal" guide and "REST API v3 Guide" are now obsolete and have been removed.
- The rhvm-doc package is obsolete and has been removed.
BZ#1780943
Previously, a live snapshot of a virtual machine could take an infinite amount of time, locking the virtual machine. With this release, you can set a limit on the amount of time an asynchronous live snapshot can take using the command engine-config -s LiveSnapshotTimeoutInMinutes=<time>, where <time> is a value in minutes. After the set time passes, the snapshot aborts, releasing the lock and enabling you to use the virtual machine. The default value of <time> is 30.
BZ#1796809
The apache-sshd library is no longer bundled in the rhvm-dependencies package. The apache-sshd library is now packaged in its own RPM package.
BZ#1798127
apache-commons-collections4 has been packaged for Red Hat Virtualization Manager consumption. The package is an extension of the Java Collections Framework.
BZ#1798403
Previously, the Windows guest tools were delivered as virtual floppy disk (.vfd) files.
With this release, the virtual floppy disk is removed and the Windows guest tools are included as a virtual CD-ROM. To install the Windows guest tools, check the Attach Windows guest tools CD check box when installing a Windows virtual machine.
BZ#1806339
The current release changes the Huge Pages label to Free Huge Pages so it is easier to understand what the values represent.
BZ#1813831
This enhancement enables you to remove incremental backup root checkpoints.
Backing up a virtual machine (VM) creates a checkpoint in libvirt and the RHV Manager’s database. In large scale environments, these backups can produce a high number of checkpoints. When you restart virtual machines, the Manager redefines their checkpoints on the host; if there are many checkpoints, this operation can degrade performance. The checkpoints' XML descriptions also consume a lot of storage.
This enhancement provides the following operations:
- View all the VM checkpoints using the new checkpoints service under the VM service: GET path-to-engine/api/vms/vm-uuid/checkpoints
- View a specific checkpoint: GET path-to-engine/api/vms/vm-uuid/checkpoints/checkpoint-uuid
- Remove the oldest (root) checkpoint from the chain: DELETE path-to-engine/api/vms/vm-uuid/checkpoints/checkpoint-uuid
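A minimal sketch of the endpoint layout described above, assuming the documented URL pattern. The base URL and UUIDs are placeholders; issuing the actual HTTP requests (for example with a REST client) is omitted for brevity:

```python
from typing import Optional

# Placeholder Manager base URL; substitute your engine's FQDN.
BASE = "https://engine.example.com/ovirt-engine/api"

def checkpoints_url(vm_id: str, checkpoint_id: Optional[str] = None) -> str:
    """Build the checkpoints endpoint for a VM, or for one specific checkpoint."""
    url = f"{BASE}/vms/{vm_id}/checkpoints"
    if checkpoint_id is not None:
        url = f"{url}/{checkpoint_id}"
    return url

# GET    checkpoints_url("vm-uuid")                 -> list all checkpoints
# GET    checkpoints_url("vm-uuid", "cp-uuid")      -> view one checkpoint
# DELETE checkpoints_url("vm-uuid", "root-cp-uuid") -> remove the root checkpoint
```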
BZ#1821487
Previously, network tests timed out after 2 seconds. The current release increases the timeout period from 2 seconds to 5 seconds. This reduces unnecessary timeouts when the network tests require more than 2 seconds to pass.
BZ#1821930
With this enhancement, RHEL 7-based hosts have SPICE encryption enabled during host deployment. Only TLSv1.2 and newer protocols are enabled. Available ciphers are limited as described in BZ#1563271.
RHEL 8-based hosts do not have SPICE encryption enabled. Instead, they rely on defined RHEL crypto policies (similar to VDSM BZ1179273).
BZ#1824117
The usbutils and net-tools packages have been added to the RHV-H optional channel. This eases the installation of "iDRAC Service Module" on Dell PowerEdge systems.
BZ#1831031
This enhancement increases the maximum memory limit for virtual machines to 6 TB. This also applies to virtual machines with cluster level 4.3 in RHV 4.4.
BZ#1841083
With this update, the maximum memory size for 64-bit virtual machines based on x86_64 or ppc64/ppc64le architectures is now 6 TB. This limit also applies to virtual machines based on x86_64 architecture in 4.2 and 4.3 Cluster Levels.
BZ#1845017
Starting with this release, the Grafana dashboard for the Data Warehouse is installed by default to enable easy monitoring of Red Hat Virtualization metrics and logs. The Data Warehouse is installed by default at Basic scale resource use. To obtain the full benefits of Grafana, it is recommended to update the Data Warehouse scale to Full (to be able to view a larger data collection interval of up to 5 years). Full scaling may require migrating the Data Warehouse to a separate virtual machine. For Data Warehouse scaling instructions, see Changing the Data Warehouse Sampling Scale. For instructions on migrating to or installing on a separate machine, see Migrating the Data Warehouse to a Separate Machine and Installing and Configuring Data Warehouse on a Separate Machine.
BZ#1848381
The current release adds a panel to the beginning of each Grafana dashboard describing the reports it displays and their purposes.
6.14.3. Rebase: Bug Fixes and Enhancements
These items are rebases of bug fixes and enhancements included in this release of Red Hat Virtualization:
BZ#1700867
The makeself package has been rebased to version 2.4.0. Highlights, important fixes, or notable enhancements:
- v2.3.0: Support for archive encryption via GPG or OpenSSL. Added LZO and LZ4 compression support. Options to set the packaging date and stop the umask from being overridden. Optionally ignore check for available disk space when extracting. New option to check for root permissions before extracting.
- v2.3.1: Various compatibility updates. Added unit tests for Travis CI in the GitHub repo. New --tar-extra, --untar-extra, --gpg-extra, --gpg-asymmetric-encrypt-sign options.
- v2.4.0: Added optional support for SHA256 archive integrity checksums.
BZ#1701530
The ovirt-cockpit-sso package has been rebased to version 0.1.2. With this update, the ovirt-cockpit-sso package supports RHEL 8.
BZ#1713700
The spice-qxl-wddm-dod package has been rebased to version 0.19.
Highlights, important fixes, or notable enhancements:
- Add 800x800 resolution
- Improve performance vs spice server 14.0 and earlier
- Fix black screen on driver uninstall on OVMF platforms
- Fix black screen on return from S3
BZ#1796815
The Object-Oriented SNMP API for Java Managers and Agents (snmp4j) library has been packaged for RHV-M consumption. The library was previously provided by the rhvm-dependencies package and is now provided as a standalone package.
BZ#1797316
Upgrade package(s) to version: rhv-4.4.0-23
Highlights and important bug fixes: Enhancements to VM snapshots caused a regression due to inconsistencies between the VDSM and RHV Manager versions. This upgrade fixes the issue by synchronizing the RHV Manager version to match the VDSM version.
BZ#1798114
Rebase of the apache-commons-digester package to version 2.1. This update is a minor release with new features. See the Apache release notes for more information.
BZ#1798117
Rebase of the apache-commons-configuration package to version 1.10. This update includes minor bug fixes and enhancements. See the Apache release notes for more information.
BZ#1799171
With this rebase, the ws-commons-utils package has been updated to version 1.0.2, which provides the following changes:
- Updated the non-static "newDecoder" method in the Base64 class to be static.
- Fixed the completely broken CharSetXMLWriter.
BZ#1807047
The m2crypto package has been built for use with the current version of RHV Manager. This package enables you to call OpenSSL functions from Python scripts.
BZ#1818745
With this release, Red Hat Virtualization is ported to Python 3. It no longer depends on Python 2.
6.14.4. Rebase: Enhancements Only
These items are rebases of enhancements included in this release of Red Hat Virtualization:
BZ#1698009
The openstack-java-sdk package has been rebased to version: 3.2.8. Highlights and notable enhancements: Refactored the package to use newer versions of these dependent libraries:
- Upgraded jackson to com.fasterxml version 2.9.x
- Upgraded commons-httpclient to org.apache.httpcomponents version 4.5
BZ#1720686
With this rebase, the ovirt-scheduler-proxy packages have been updated to version 0.1.9, introducing support for RHEL 8 and refactoring the code for Python 3 and Java 11 support.
6.14.5. Release Notes
This section outlines important details about the release, including recommended practices and notable changes to Red Hat Virtualization. You must take this information into account to ensure the best possible outcomes for your deployment.
BZ#1745302
oVirt 4.4 replaces the ovirt-guest-tools with a new WiX-based installer, included in Virtio-Win. You can download the ISO file containing the Windows guest drivers, agents, and installers from https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/latest-virtio/
BZ#1838159
With this release, you can add hosts to RHV Manager that do not provide standard rsa-sha-1 SSH public keys but only provide rsa-sha256/rsa-sha-512 SSH public keys instead, such as CentOS 8 hosts with FIPS hardening enabled.
BZ#1844389
On non-production systems, you can use CentOS Stream as an alternative to CentOS Linux.
6.14.6. Known Issues
These known issues exist in Red Hat Virtualization at this time:
BZ#1809116
There is currently a known issue: Open vSwitch (OVS) does not work with nmstate-managed hosts. Therefore, OVS clusters cannot contain RHEL 8 hosts. Workaround: In clusters that use OVS, do not upgrade hosts to RHEL 8.
BZ#1810550
The current release contains a known issue: When the RHV Manager tries to change the mode of an existing bond to mode 5 (balance-tlb) or mode 6 (balance-alb), the host fails to apply this change, and the Manager reports a user-visible error. To work around this issue, remove the bond and create a new one with the desired mode. A fix is in progress and, if successful, is targeted for RHEL 8.2.1.
BZ#1813694
Known issue: If you configure a virtual machine’s BIOS Type and Emulation Machine Type with mismatched settings, the virtual machine fails when you restart it. Workaround: To avoid problems, configure the BIOS Type and Emulation Machine Type with the proper settings for your hardware. The current release helps you avoid this issue: Adding a Host to a new cluster with auto-detect sets the BIOS Type accordingly.
BZ#1829656
Known issue: Unsubscribed RHVH hosts do not get package updates when you perform a 'Check for upgrade' operation. Instead, you get a 'no updates found' message. This happens because RHVH hosts that are not registered to Red Hat Subscription Management (RHSM) do not have repos enabled. Workaround: To get updates, register the RHVH host with Red Hat Subscription Management (RHSM).
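As a sketch, registering the host could look like the following (user name, pool ID, and repository label are placeholders; the exact repository label depends on your entitlements):

```shell
# Register the RHVH host with Red Hat Subscription Management
subscription-manager register --username=<rhsm_user> --password=<rhsm_password>

# Attach a subscription that provides the RHVH repositories
subscription-manager attach --pool=<pool_id>

# Enable the RHVH repository so 'Check for upgrade' can find updates
subscription-manager repos --enable=rhvh-4-for-rhel-8-x86_64-rpms
```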
BZ#1836181
The current release contains a known issue: If a VM has a bond mode 1 (active-backup) over an SR-IOV vNIC and VirtIO vNIC, the bond might stop working after the VM migrates to a host with SR-IOV on a NIC that uses an i40e driver, such as the Intel X710.
BZ#1852422
Registration fails for user accounts that belong to multiple organizations
Currently, when you attempt to register a system with a user account that belongs to multiple organizations, the registration process fails with the error message You must specify an organization for new units.
To work around this problem, you can either:
- Use a different user account that does not belong to multiple organizations.
- Use the Activation Key authentication method available in the Connect to Red Hat feature for GUI and Kickstart installations.
- Skip the registration step in Connect to Red Hat and use the Subscription Manager to register your system post-installation.
BZ#1859284
If you create VLANs on virtual functions of SR-IOV NICs, and the VLAN interface names are longer than ten characters, the VLANs fail. This happens because the naming convention for VLAN interfaces, parent_device.VLAN_ID, tends to produce names that exceed the 10-character limit. The workaround for this issue is to create udev rules as described in BZ#1854851.
BZ#1860923
In RHEL 8.2, the ignoredisk --drives option in Kickstart files is not recognized correctly by Anaconda. Consequently, when installing or reinstalling the host’s operating system, it is strongly recommended that you either detach any existing non-OS storage that is attached to the host, or use ignoredisk --only-use to avoid accidental initialization of these disks, and with that, potential data loss.
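A minimal Kickstart fragment illustrating the recommended option (sda is an example device name; substitute the actual OS disk):

```shell
# Limit Anaconda to the OS disk only; other attached disks are not touched
ignoredisk --only-use=sda
```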
BZ#1863045
When you upgrade Red Hat Virtualization with a storage domain that is locally mounted on / (root), the data might be lost.
Use a separate logical volume or disk to prevent possible loss of data during upgrades. If you are using / (root) as the locally mounted storage domain, migrate your data to a separate logical volume or disk prior to upgrading.
6.14.7. Removed Functionality
BZ#1399714
Version 3 of the Python SDK has been deprecated since version 4.0 of oVirt. The current release removes it completely, along with version 3 of the REST API.
BZ#1399717
Version 3 of the Java SDK has been deprecated since version 4.0 of oVirt. The current release removes it completely, along with version 3 of the REST API.
BZ#1638675
The current release removes OpenStack Neutron deployment, including the automatic deployment of the neutron agents through the Network Provider tab in the New Host window and the AgentConfiguration in the REST-API. Use the following components instead:
- To deploy OpenStack hosts, use the OpenStack Platform Director/TripleO.
- The Open vSwitch interface mappings are already managed automatically by VDSM in Clusters with switch type OVS.
- To manage the deployment of ovirt-provider-ovn-driver on a cluster, update the cluster’s "Default Network Provider" attribute.
BZ#1658061
RHV 4.3 was shipping drivers for Windows XP and Windows Server 2k3. Both of these operating systems are obsolete and unsupported. The current release removes these drivers.
BZ#1698016
Previously, the cockpit-machines-ovirt package was deprecated in Red Hat Virtualization version 4.3 (reference bug #1698014). The current release removes the cockpit-machines-ovirt from the ovirt-host dependencies and RHV-H image.
BZ#1703840
The vdsm-hook-macspoof has been dropped from the VDSM code. If you still require the ifacemacspoof hook, you can find and fix the vnic profiles using a script similar to the one provided in the commit message.
BZ#1712255
Support for datacenter and cluster levels earlier than version 4.2 has been removed.
BZ#1725775
Previously, the screen package was deprecated in RHEL 7.6. With this update to RHEL 8-based hosts, the screen package is removed. The current release installs the tmux package on RHEL 8-based hosts instead of screen.
BZ#1728667
The current release removes heat-cfntools, which is not used in rhvm-appliance and RHV. Updates to heat-cfntools are available only through OSP.
BZ#1746354
With this release, the Application Provisioning Tool service (APT) is removed.
The APT service could cause a Windows virtual machine to reboot without notice, causing possible data loss. With this release, the virtio-win installer replaces the APT service.
BZ#1753889
In RHV version 4.4, oVirt Engine REST API v3 has been removed. Update your custom scripts to use REST API v4.
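A quick way to exercise the v4 API when verifying an updated script (host name and credentials are placeholders):

```shell
# List virtual machines through REST API v4
curl -s -k -u 'admin@internal:<password>' \
  -H 'Accept: application/json' \
  'https://engine.example.com/ovirt-engine/api/vms'
```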
BZ#1753894
The oVirt Engine SDK 3 Java bindings are no longer shipped with the oVirt 4.4 release.
BZ#1753896
The oVirt Python SDK version 3 has been removed from the project. You need to upgrade your scripts to use Python SDK version 4.
BZ#1795684
Hystrix monitoring integration has been removed from ovirt-engine due to limited adoption and the difficulty of maintaining it.
BZ#1796817
The Object-Oriented SNMP API for Java Managers and Agents (snmp4j) library is no longer bundled with the rhvm-dependencies package. It is now provided as a standalone rpm package (Bug #1796815).
BZ#1818554
The current version of RHV removes libvirt packages that provided non-socket activation. Now it contains only libvirt versions that provide socket activation. Socket activation provides better resource handling: There is no dedicated active daemon; libvirt is activated for certain tasks and then exits.
BZ#1827177
Metrics Store support has been removed in Red Hat Virtualization 4.4. Administrators can use the Data Warehouse with Grafana dashboards (deployed by default with Red Hat Virtualization 4.4) to view metrics and inventory reports. See the Grafana documentation for information on Grafana. Administrators can also send metrics and logs to a standalone Elasticsearch instance.
BZ#1846596
In previous versions, the katello-agent package was automatically installed on all hosts as a dependency of the ovirt-host package. The current release, RHV 4.4, removes this dependency to reflect the removal of the katello-agent from Satellite 6.7. Instead, you can now use katello-host-tools, which enables users to install the correct agent for their version of Satellite.
Appendix A. Legal notice
Copyright © 2022 Red Hat, Inc.
Licensed under the Creative Commons Attribution-ShareAlike 4.0 International License. Derived from documentation for the oVirt Project. If you distribute this document or an adaptation of it, you must provide the URL for the original version.
Modified versions must remove all Red Hat trademarks.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.