Chapter 5. OpenShift Virtualization release notes
5.1. Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
5.2. About Red Hat OpenShift Virtualization
Red Hat OpenShift Virtualization enables you to bring traditional virtual machines (VMs) into OpenShift Container Platform where they run alongside containers, and are managed as native Kubernetes objects.
You can use OpenShift Virtualization with either the OVN-Kubernetes or the OpenShift SDN default Container Network Interface (CNI) network provider.
Learn more about what you can do with OpenShift Virtualization.
Learn more about OpenShift Virtualization architecture and deployments.
Prepare your cluster for OpenShift Virtualization.
5.2.1. OpenShift Virtualization supported cluster version
OpenShift Virtualization 4.12 is supported for use on OpenShift Container Platform 4.12 clusters. To use the latest z-stream release of OpenShift Virtualization, you must first upgrade to the latest version of OpenShift Container Platform.
5.2.2. Supported guest operating systems
To view the supported guest operating systems for OpenShift Virtualization, refer to Certified Guest Operating Systems in Red Hat OpenStack Platform, Red Hat Virtualization and OpenShift Virtualization.
5.3. New and changed features
OpenShift Virtualization is certified in Microsoft’s Windows Server Virtualization Validation Program (SVVP) to run Windows Server workloads.
The SVVP Certification applies to:
- Red Hat Enterprise Linux CoreOS workers. In the Microsoft SVVP Catalog, they are named Red Hat OpenShift Container Platform 4 on RHEL CoreOS 8.
- Intel and AMD CPUs.
- OpenShift Virtualization no longer uses its previous logo. For versions 4.9 and later, OpenShift Virtualization is represented by a new logo.
- You can create a VM memory dump for forensic analysis by using the virtctl memory-dump command (see the example after this list).
- You can export and download a volume from a virtual machine (VM), a VM snapshot, or a persistent volume claim (PVC) to recreate it on a different cluster or in a different namespace on the same cluster by using the virtctl vmexport command or by creating a VirtualMachineExport custom resource (see the example after this list). You can also export the memory dump for forensic analysis.
- You can learn about the functions and organization of the OpenShift Virtualization web console by referring to the web console overview documentation.
- You can use the virtctl ssh command to forward SSH traffic to a virtual machine by using your local SSH client or by copying the SSH command from the OpenShift Container Platform web console (see the example after this list).
- Standalone data volumes, and data volumes created when using a dataVolumeTemplate to prepare a disk for a VM, are no longer stored in the system. The data volumes are now automatically garbage collected and deleted after the PVC is created.
- OpenShift Virtualization now provides live migration metrics that you can access by using the OpenShift Container Platform monitoring dashboard.
- The OpenShift Virtualization Operator now reads the cluster-wide TLS security profile from the APIServer custom resource and propagates it to the OpenShift Virtualization components, including virtualization, storage, networking, and infrastructure (see the example after this list).
- OpenShift Virtualization has runbooks to help you troubleshoot issues that trigger alerts. The alerts are displayed on the Virtualization → Overview page of the web console. Each runbook defines an alert and provides steps to diagnose and resolve the issue. This feature was previously introduced as a Technology Preview and is now generally available.
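For example, a memory dump of a running VM can be written to a new PVC with the following command; the VM and claim names are placeholders:
$ virtctl memory-dump get <vm_name> --claim-name=<pvc_name> --create-claim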
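For example, a VM volume can be exported and then downloaded with virtctl. This is a sketch: the export name and output file are placeholders, and additional flags (such as a volume selector) might be required for multi-volume VMs, so check virtctl vmexport --help for the exact options:
$ virtctl vmexport create <export_name> --vm=<vm_name>
$ virtctl vmexport download <export_name> --output=<disk_image>.img.gz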
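For example, you can open an SSH session to a VM with a local key file; the key file, user name, and VM name are placeholders:
$ virtctl ssh -i <ssh_key_file> <user_name>@<vm_name>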
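For example, you can inspect the cluster-wide TLS security profile that the Operator propagates by reading the APIServer custom resource, which is named cluster:
$ oc get apiserver cluster -o jsonpath='{.spec.tlsSecurityProfile}{"\n"}'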
5.3.1. Quick starts
- Quick start tours are available for several OpenShift Virtualization features. To view the tours, click the Help icon (?) in the menu bar of the OpenShift Virtualization console and then select Quick Starts. You can filter the available tours by entering the virtualization keyword in the Filter field.
5.3.2. Networking
- You can now specify the namespace in which the OpenShift Container Platform cluster checkup runs.
- You can now configure a load balancing service by using the MetalLB Operator in layer 2 mode (see the example below).
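For example, with MetalLB configured in layer 2 mode and an address pool in place, you can put a VM behind a load balancer service by using virtctl. This is a sketch: the VM name, service name, and the choice of port 22 are placeholders:
$ virtctl expose vm <vm_name> --name=<service_name> --type=LoadBalancer --port=22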
5.3.3. Web console
The Virtualization → Overview page has the following usability enhancements:
- A Download virtctl link is available.
- Resource information is customized for administrative and non-administrative users. For example, non-administrative users see only their VMs.
- The Overview tab displays the number of VMs, and vCPU, memory, and storage usage with charts that show the last 7 days' trend.
- The Alerts card on the Overview tab displays the alerts grouped by severity.
- The Top Consumers tab displays the top consumers of CPU, memory, and storage usage over a configurable time period.
- The Migrations tab displays the progress of VM migrations.
- The Settings tab displays cluster-wide settings, including live migration limits, live migration network, and templates project.
- You can create and manage live migration policies in a single location on the Virtualization → MigrationPolicies page.
- The Metrics tab on the VirtualMachine details page displays memory, CPU, storage, network, and migration metrics of a VM, over a configurable period of time.
- When you customize a template to create a VM, you can set the YAML switch to ON on each VM configuration tab to view the live changes in the YAML configuration file alongside the form.
- The Migrations tab on the Virtualization → Overview page displays the progress of virtual machine instance migrations over a configurable time period.
- You can now define a dedicated network for live migration to minimize disruption to tenant workloads. To select a network, navigate to Virtualization → Overview → Settings → Live migration.
5.3.4. Deprecated features
Deprecated features are included in the current release and supported. However, they will be removed in a future release and are not recommended for new deployments.
5.3.5. Removed features
Removed features are not supported in the current release.
- Support for the legacy HPP custom resource, and the associated storage class, has been removed for all new deployments. In OpenShift Virtualization 4.12, the HPP Operator uses the Kubernetes Container Storage Interface (CSI) driver to configure local storage. A legacy HPP custom resource is supported only if it had been installed on a previous version of OpenShift Virtualization.
- OpenShift Virtualization 4.11 removed support for nmstate, including the following objects:
- NodeNetworkState
- NodeNetworkConfigurationPolicy
- NodeNetworkConfigurationEnactment
To preserve and support your existing nmstate configuration, install the Kubernetes NMState Operator before updating to OpenShift Virtualization 4.11. For Extended Update Support (EUS) versions of 4.12, install the Kubernetes NMState Operator after updating to 4.12. You can install the Operator from the OperatorHub in the OpenShift Container Platform web console, or by using the OpenShift CLI (oc).
- The Node Maintenance Operator (NMO) is no longer shipped with OpenShift Virtualization. You can install the NMO from the OperatorHub in the OpenShift Container Platform web console, or by using the OpenShift CLI (oc).
You must perform one of the following tasks before updating to OpenShift Virtualization 4.11 from OpenShift Virtualization 4.10.2 and later 4.10 releases. For Extended Update Support (EUS) versions, you must perform the following tasks before updating to OpenShift Virtualization 4.12 from 4.10.2 and later 4.10 releases:
- Move all nodes out of maintenance mode (see the example after this list).
- Install the standalone NMO and replace the nodemaintenances.nodemaintenance.kubevirt.io custom resource (CR) with a nodemaintenances.nodemaintenance.medik8s.io CR.
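For example, after installing the standalone NMO, you can check for legacy maintenance CRs and take a node out of maintenance mode by deleting its CR. This is a sketch; the CR name is a placeholder:
$ oc get nodemaintenances.nodemaintenance.kubevirt.io
$ oc delete nodemaintenances.nodemaintenance.kubevirt.io <node_maintenance_cr_name>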
5.4. Technology Preview features
Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Note the following scope of support on the Red Hat Customer Portal for these features:
Technology Preview Features Support Scope
- You can now run OpenShift Container Platform cluster checkups to measure network latency between VMs.
- The Tekton Tasks Operator (TTO) now integrates OpenShift Virtualization with Red Hat OpenShift Pipelines. TTO includes cluster tasks and example pipelines that allow you to:
- Create and manage virtual machines (VMs), persistent volume claims (PVCs), and data volumes.
- Run commands in VMs.
- Manipulate disk images with libguestfs tools.
- Install Windows 10 into a new data volume from a Windows installation image (ISO file).
- Customize a basic Windows 10 installation and then create a new image and template.
- You can now use the guest agent ping probe to determine if the QEMU guest agent is running on a virtual machine (see the example after this list).
- You can now use Microsoft Windows 11 as a guest operating system. However, OpenShift Virtualization 4.12 does not support USB disks, which are required for a critical function of BitLocker recovery. To protect recovery keys, use other methods described in the BitLocker recovery guide.
- You can create live migration policies with specific parameters, such as bandwidth usage, maximum number of parallel migrations, and timeout, and apply the policies to groups of virtual machines by using virtual machine and namespace labels (see the example after this list).
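For example, a guest agent ping readiness probe can be added to an existing VM with a merge patch. This is a minimal sketch: the probe timing values are illustrative and the VM name is a placeholder:
$ oc patch vm <vm_name> --type=merge \
    -p '{"spec":{"template":{"spec":{"readinessProbe":{"guestAgentPing":{},"initialDelaySeconds":120,"periodSeconds":10}}}}}'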
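For example, a live migration policy that limits bandwidth and targets labeled namespaces and VMs might look like the following sketch. The field layout follows the migrations.kubevirt.io/v1alpha1 API; the policy name, label keys, and values are placeholders:
$ oc apply -f - <<EOF
apiVersion: migrations.kubevirt.io/v1alpha1
kind: MigrationPolicy
metadata:
  name: <policy_name>
spec:
  allowAutoConverge: true
  bandwidthPerMigration: 217Ki
  completionTimeoutPerGiB: 23
  selectors:
    namespaceSelector:
      <namespace_label_key>: "true"
    virtualMachineInstanceSelector:
      <vm_label_key>: "true"
EOF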
5.5. Bug fixes
- You can now configure the HyperConverged CR to enable mediated devices before drivers are installed without losing the new device configuration after driver installation. (BZ#2046298)
- The OVN-Kubernetes cluster network provider no longer crashes from peak RAM and CPU usage if you create a large number of NodePort services. (OCPBUGS-1940)
- Cloning more than 100 VMs at once no longer intermittently fails if you use Red Hat Ceph Storage or Red Hat OpenShift Data Foundation storage. (BZ#1989527)
5.6. Known issues
- You cannot run OpenShift Virtualization on a single-stack IPv6 cluster. (BZ#2193267)
- In a heterogeneous cluster with different compute nodes, virtual machines that have HyperV Reenlightenment enabled cannot be scheduled on nodes that do not support timestamp-counter scaling (TSC) or do not have the appropriate TSC frequency. (BZ#2151169)
When you use two pods with different SELinux contexts, VMs with the ocs-storagecluster-cephfs storage class fail to migrate and the VM status changes to Paused. This is because both pods try to access the shared ReadWriteMany CephFS volume at the same time. (BZ#2092271)
- As a workaround, use the ocs-storagecluster-ceph-rbd storage class to live migrate VMs on a cluster that uses Red Hat Ceph Storage.
The TopoLVM provisioner name string has changed in OpenShift Virtualization 4.12. As a result, the automatic import of operating system images might fail with the following error message (BZ#2158521):
DataVolume.storage spec is missing accessMode and volumeMode, cannot get access mode from StorageProfile.
As a workaround:
- Update the claimPropertySets array of the storage profile:
$ oc patch storageprofile <storage_profile> --type=merge \
    -p '{"spec": {"claimPropertySets": [{"accessModes": ["ReadWriteOnce"], "volumeMode": "Block"}, {"accessModes": ["ReadWriteOnce"], "volumeMode": "Filesystem"}]}}'
- Delete the affected data volumes in the openshift-virtualization-os-images namespace. They are recreated with the access mode and volume mode from the updated storage profile.
When restoring a VM snapshot for storage whose binding mode is WaitForFirstConsumer, the restored PVCs remain in the Pending state and the restore operation does not progress.
- As a workaround, start the restored VM, stop it, and then start it again, as shown in the example below. The VM will be scheduled, the PVCs will be in the Bound state, and the restore operation will complete. (BZ#2149654)
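For example, you can cycle the restored VM by using virtctl; the VM name is a placeholder:
$ virtctl start <vm_name>
$ virtctl stop <vm_name>
$ virtctl start <vm_name>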
- VMs created from common templates on a Single Node OpenShift (SNO) cluster display a VMCannotBeEvicted alert because the template’s default eviction strategy is LiveMigrate. You can ignore this alert or remove the alert by updating the VM’s eviction strategy. (BZ#2092412)
- Uninstalling OpenShift Virtualization does not remove the feature.node.kubevirt.io node labels created by OpenShift Virtualization. You must remove the labels manually (see the example commands below). (CNV-22036)
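For example, you can list the leftover labels on a node and remove each one by appending a hyphen to the label key. This is a sketch that assumes the jq tool is available; the node name and label key are placeholders:
$ oc get node <node_name> -o json | jq -r '.metadata.labels | keys[]' | grep '^feature\.node\.kubevirt\.io'
$ oc label node <node_name> <label_key>-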
Some persistent volume claim (PVC) annotations created by the Containerized Data Importer (CDI) can cause the virtual machine snapshot restore operation to hang indefinitely. (BZ#2070366)
As a workaround, you can remove the annotations manually:
- Obtain the VirtualMachineSnapshotContent custom resource (CR) name from the status.virtualMachineSnapshotContentName value in the VirtualMachineSnapshot CR (see the example commands after this procedure).
- Edit the VirtualMachineSnapshotContent CR and remove all lines that contain k8s.io/cloneRequest.
- If you did not specify a value for spec.dataVolumeTemplates in the VirtualMachine object, delete any DataVolume and PersistentVolumeClaim objects in this namespace where both of the following conditions are true:
  - The object’s name begins with restore-.
  - The object is not referenced by virtual machines.
  This step is optional if you specified a value for spec.dataVolumeTemplates.
- Repeat the restore operation with the updated VirtualMachineSnapshot CR.
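For example, the first two steps of this procedure can be performed with oc; the snapshot, content, and namespace names are placeholders:
$ oc get virtualmachinesnapshot <snapshot_name> -n <namespace> -o jsonpath='{.status.virtualMachineSnapshotContentName}{"\n"}'
$ oc edit virtualmachinesnapshotcontent <snapshot_content_name> -n <namespace>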
- Windows 11 virtual machines do not boot on clusters running in FIPS mode. Windows 11 requires a TPM (trusted platform module) device by default. However, the swtpm (software TPM emulator) package is incompatible with FIPS. (BZ#2089301)
If your OpenShift Container Platform cluster uses OVN-Kubernetes as the default Container Network Interface (CNI) provider, you cannot attach a Linux bridge or bonding device to a host’s default interface because of a change in the host network topology of OVN-Kubernetes. (BZ#1885605)
- As a workaround, you can use a secondary network interface connected to your host, or switch to the OpenShift SDN default CNI provider.
In some instances, multiple virtual machines can mount the same PVC in read-write mode, which might result in data corruption. (BZ#1992753)
- As a workaround, avoid using a single PVC in read-write mode with multiple VMs.
The Pod Disruption Budget (PDB) prevents pod disruptions for migratable virtual machine images. If the PDB detects pod disruption, then openshift-monitoring sends a PodDisruptionBudgetAtLimit alert every 60 minutes for virtual machine images that use the LiveMigrate eviction strategy. (BZ#2026733)
- As a workaround, silence alerts.
OpenShift Virtualization links a service account token in use by a pod to that specific pod. OpenShift Virtualization implements a service account volume by creating a disk image that contains a token. If you migrate a VM, then the service account volume becomes invalid. (BZ#2037611)
- As a workaround, use user accounts rather than service accounts because user account tokens are not bound to a specific pod.
If you clone more than 100 VMs using the csi-clone cloning strategy, then the Ceph CSI might not purge the clones. Manually deleting the clones can also fail. (BZ#2055595)
- As a workaround, you can restart the ceph-mgr to purge the VM clones, as shown in the example below.
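For example, on a cluster where Red Hat OpenShift Data Foundation runs Ceph in the openshift-storage namespace (an assumption; adjust the label selector and namespace for your deployment), deleting the manager pod causes it to be recreated, which restarts ceph-mgr:
$ oc delete pod -l app=rook-ceph-mgr -n openshift-storage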
VMs that use Logical Volume Management (LVM) with block storage devices require additional configuration to avoid conflicts with Red Hat Enterprise Linux CoreOS (RHCOS) hosts.
- As a workaround, you can create a VM, provision an LVM, and restart the VM. This creates an empty system.lvmdevices file. (OCPBUGS-5223)