Chapter 2. OpenShift Virtualization release notes
2.1. About Red Hat OpenShift Virtualization
Red Hat OpenShift Virtualization enables you to bring traditional virtual machines (VMs) into OpenShift Container Platform, where they run alongside containers and are managed as native Kubernetes objects.
You can use OpenShift Virtualization with either the OVN-Kubernetes or the OpenShift SDN default Container Network Interface (CNI) network provider.
Learn more about what you can do with OpenShift Virtualization.
2.1.1. OpenShift Virtualization supported cluster version
OpenShift Virtualization 2.5 is supported for use on OpenShift Container Platform 4.6 clusters.
2.1.2. Supported guest operating systems
OpenShift Virtualization guests can use the following operating systems:
- Red Hat Enterprise Linux 6, 7, and 8.
- Microsoft Windows Server 2012 R2, 2016, and 2019.
- Microsoft Windows 10.
Other operating system templates shipped with OpenShift Virtualization are not supported.
2.2. New and changed features
OpenShift Virtualization is certified in Microsoft’s Windows Server Virtualization Validation Program (SVVP) to run Windows Server workloads.
The SVVP Certification applies to:
- Red Hat Enterprise Linux CoreOS 8 workers. In the Microsoft SVVP Catalog, they are named Red Hat OpenShift Container Platform 4 on RHEL CoreOS 8.
- Intel and AMD CPUs.
- OpenShift Virtualization 2.5 adds three new virtctl commands to manage QEMU guest agent data (see the example after this list):
- virtctl fslist <vmi_name> returns a full list of file systems available on the guest machine.
- virtctl guestosinfo <vmi_name> returns guest agent information about the operating system.
- virtctl userlist <vmi_name> returns a full list of logged-in users on the guest machine.
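For example, if the QEMU guest agent is installed and running in the guest, you can query a running VMI by name; the name fedora-vm below is a placeholder:
$ virtctl fslist fedora-vm
$ virtctl guestosinfo fedora-vm
$ virtctl userlist fedora-vm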
- You can now download the virtctl client from the Command Line Tools page in the web console.
- You can now import a virtual machine with a Single Root I/O Virtualization (SR-IOV) network interface from Red Hat Virtualization.
2.2.1. Networking
- The supported bond modes with nmstate now include mode=2 balance-xor and mode=4 802.3ad (see the example below).
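For illustration, the following is a minimal NodeNetworkConfigurationPolicy sketch that requests an 802.3ad bond. The policy name, the bond name bond1, and the member NICs ens3 and ens4 are hypothetical, and the nmstate API version can vary by release:
apiVersion: nmstate.io/v1alpha1           # nmstate API version varies by release
kind: NodeNetworkConfigurationPolicy
metadata:
  name: bond1-802-3ad-policy              # hypothetical policy name
spec:
  desiredState:
    interfaces:
      - name: bond1                       # hypothetical bond interface name
        type: bond
        state: up
        link-aggregation:
          mode: 802.3ad                   # newly supported; mode=2 balance-xor also works
          slaves:                         # hypothetical member NICs
            - ens3
            - ens4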
2.2.2. Storage
- The Containerized Data Importer (CDI) can now import container disk storage volumes from the container image registry at a faster speed and allocate storage capacity more efficiently. CDI can pull a container disk image from the registry in about the same amount of time as it would take to import from an HTTP endpoint. You can import the disk into a persistent volume claim (PVC) equal in size to the disk image to use the underlying storage more efficiently.
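As an illustration, here is a minimal DataVolume sketch that pulls a container disk from a registry into a PVC sized close to the disk image; the name, image URL, and storage size are hypothetical:
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: registry-import-dv                # hypothetical name
spec:
  source:
    registry:
      url: "docker://registry.example.com/disks/rhel8:latest"   # hypothetical container disk image
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi                     # sized close to the disk image for efficient allocation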
- It is now easier to diagnose and troubleshoot issues when preparing virtual machine (VM) disks that are managed by DataVolumes:
- For asynchronous image upload, if the virtual size of the disk image is larger than the size of the target DataVolume, an error message is returned before the connection is closed.
- You can use the oc describe dv command to monitor changes in the PersistentVolumeClaim (PVC) Bound conditions or transfer failures. If the value of the Status: Phase field is Succeeded, then the DataVolume is ready to be used.
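For example, with a hypothetical DataVolume named example-dv:
$ oc describe dv example-dv
$ oc get dv example-dv -o jsonpath='{.status.phase}'   # prints Succeeded when the DataVolume is ready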
- You can create, restore, and delete virtual machine (VM) snapshots in the CLI for VMs that are powered off (offline), as shown in the sketch after this list. OpenShift Virtualization supports offline VM snapshots on:
- Red Hat OpenShift Container Storage
- Any other storage provider with the Container Storage Interface (CSI) driver that supports the Kubernetes Volume Snapshot API
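A minimal sketch of an offline snapshot request; the snapshot name and VM name are hypothetical, and the snapshot API version shown here (v1alpha1) may differ by release:
apiVersion: snapshot.kubevirt.io/v1alpha1  # API version may differ by release
kind: VirtualMachineSnapshot
metadata:
  name: my-vm-snapshot                     # hypothetical snapshot name
spec:
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: my-vm                            # the VM must be powered off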
- You can now clone virtual disks efficiently and quickly by using smart-cloning. Smart-cloning occurs automatically when you create a DataVolume with a PersistentVolumeClaim (PVC) source. Your storage provider must support the CSI Snapshots API to use smart-cloning.
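A sketch of such a clone request, assuming the storage provider supports CSI snapshots; all names and the size are hypothetical:
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: cloned-dv                          # hypothetical target name
spec:
  source:
    pvc:                                   # a PVC source triggers smart-cloning
      namespace: source-namespace          # hypothetical source namespace
      name: source-pvc                     # hypothetical source PVC
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi                      # at least the size of the source PVC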
2.2.3. Web console
- If the virtual machine is running, changes made to the following fields and tabs in the web console do not take effect until you restart the virtual machine:
- Boot Order and Flavor in the Details tab
- The Network Interfaces tab
- The Disks tab
- The Environment tab
The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.
- You can now open a virtual machine console in a separate window.
- You can now create default OS images and automatically upload them using the OpenShift Container Platform web console. A default OS image is a bootable disk containing an operating system and all of the operating system’s configuration settings, such as drivers. You use a default OS image to create bootable virtual machines with specific configurations.
- You can now upload a virtual machine image file to a new persistent volume claim by using the web console.
- When the QEMU guest agent runs on the virtual machine, you can use the web console to view information about the virtual machine, users, file systems, and secondary networks.
2.3. Notable technical changes
- When you install or upgrade OpenShift Virtualization, you select an update channel. A new channel named stable is now available. Select the stable channel to ensure that you install or upgrade to the version of OpenShift Virtualization that is compatible with your OpenShift Container Platform version.
- You can now import VMs with block-based storage into OpenShift Virtualization.
- The HyperConverged Operator (HCO), Containerized Data Importer (CDI), Hostpath Provisioner (HPP), and VM import custom resources have moved to API version v1beta1. The respective API versions for these components are now as follows; a verification example appears after this list:
- hco.kubevirt.io/v1beta1
- cdi.kubevirt.io/v1beta1
- hostpathprovisioner.kubevirt.io/v1beta1
- v2v.kubevirt.io/v1beta1
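To verify which API versions your cluster serves for one of these groups, you can run, for example:
$ oc api-resources --api-group=cdi.kubevirt.io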
- The default cloud-init user password is now auto-generated for virtual machines that are created from templates.
- When using host-assisted cloning, you can now clone virtual machine disks at a faster speed because of a more efficient compression algorithm.
- When a node fails in user-provisioned installations of OpenShift Container Platform on bare metal deployments, the virtual machine does not automatically restart on another node. Automatic restart is supported only for installer-provisioned installations that have machine health checks enabled. Learn more about configuring your cluster for OpenShift Virtualization.
2.4. Known issues
- If your OpenShift Container Platform cluster uses OVN-Kubernetes as the default Container Network Interface (CNI) provider, you cannot attach a Linux bridge or bond to the default interface of a host because of a change in the host network topology of OVN-Kubernetes. As a workaround, you can use a secondary network interface connected to your host, or switch to the OpenShift SDN default CNI provider. (BZ#1887456)
- If you add a VMware Virtual Disk Development Kit (VDDK) image to the openshift-cnv/v2v-vmware config map by using the web console, a Managed resource error message displays. You can safely ignore this error. Save the config map by clicking Save. (BZ#1884538)
- When nodes are evicted, for example, when they are placed in maintenance mode during an OpenShift Container Platform cluster upgrade, virtual machines are migrated twice instead of just once. (BZ#1888790)
- Following an upgrade, there might be more than one template per operating system workload. When creating a Microsoft Windows virtual machine from a cloned PVC by using the default operating system (OS) images feature, the OS must have the correct workload value defined. Selecting an incorrect workload value prevents you from using a default OS image, even though the (Source available) label displays in the web console. The default OS image is attached to the newer template, but the wizard might use the old template, which is not configured to support default OS images. Windows 10 systems only support a workload value of Desktop, while Windows 2012, Windows 2016, and Windows 2019 only support a workload value of Server. (BZ#1907183)
- If you enable a MAC address pool for a namespace by applying the KubeMacPool label and using the io attribute for virtual machines in that namespace, the io attribute configuration is not retained for the VMs. As a workaround, do not use the io attribute for VMs. Alternatively, you can disable KubeMacPool for the namespace, as shown in the example below. (BZ#1869527)
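A sketch of the second workaround; <namespace> is a placeholder, and you should verify the KubeMacPool label key for your release:
$ oc label namespace <namespace> mutatevirtualmachines.kubemacpool.io=ignore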
- If you upgrade to OpenShift Virtualization 2.5, both older and newer versions of common templates are available for each combination of operating system, workload, and flavor. When you create a virtual machine by using a common template, you must use the newer version of the template. Disregard the older version to avoid issues. (BZ#1859235)
- Running virtual machines that cannot be live migrated might block an OpenShift Container Platform cluster upgrade. This includes virtual machines that use hostpath-provisioner storage or SR-IOV network interfaces. (BZ#1858777)
As a workaround, you can reconfigure the virtual machines so that they can be powered off during a cluster upgrade. In the spec section of the virtual machine configuration file (a sketch follows this list):
- Remove the evictionStrategy: LiveMigrate field. See Configuring virtual machine eviction strategy for more information on how to configure eviction strategy.
- Set the runStrategy field to Always.
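A minimal excerpt of the reconfigured virtual machine spec, assuming the kubevirt.io/v1alpha3 API version of this era and a hypothetical VM named example-vm; unrelated fields are omitted:
apiVersion: kubevirt.io/v1alpha3           # API version may differ by release
kind: VirtualMachine
metadata:
  name: example-vm                         # hypothetical name
spec:
  runStrategy: Always                      # replaces the running field
  template:
    spec:
      # evictionStrategy: LiveMigrate was removed from this level
      domain:
        devices: {}                        # remaining domain configuration unchanged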
- For unknown reasons, memory consumption for the containerDisk volume type might gradually increase until it exceeds the memory limit. To resolve this issue, restart the VM. (BZ#1855067)
- Sometimes, when attempting to edit the subscription channel of the OpenShift Virtualization Operator in the web console, clicking the Channel button of the Subscription Overview results in a JavaScript error. (BZ#1796410)
As a workaround, trigger the upgrade process to OpenShift Virtualization 2.5 from the CLI by running the following oc patch command:
$ export TARGET_NAMESPACE=openshift-cnv CNV_CHANNEL=2.5 && oc patch -n "${TARGET_NAMESPACE}" $(oc get subscription -n ${TARGET_NAMESPACE} --no-headers -o name) --type='json' -p='[{"op": "replace", "path": "/spec/channel", "value":"'${CNV_CHANNEL}'"}, {"op": "replace", "path": "/spec/installPlanApproval", "value":"Automatic"}]'
This command points your subscription to upgrade channel 2.5 and enables automatic updates.
- Live migration fails when nodes have different CPU models. Even in cases where nodes have the same physical CPU model, differences introduced by microcode updates have the same effect. This is because the default settings trigger host CPU passthrough behavior, which is incompatible with live migration. (BZ#1760028)
As a workaround, set the default CPU model in the kubevirt-config ConfigMap, as shown in the following example:
Note: You must make this change before starting the virtual machines that support live migration.
Open the kubevirt-config ConfigMap for editing by running the following command:
$ oc edit configmap kubevirt-config -n openshift-cnv
Edit the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevirt-config
data:
  default-cpu-model: "<cpu-model>" 1
1 Replace <cpu-model> with the actual CPU model value. You can determine this value by running oc describe node <node> for all nodes and looking at the cpu-model-<name> labels. Select the CPU model that is present on all of your nodes.
- OpenShift Virtualization cannot reliably identify node drains that are triggered by running either oc adm drain or kubectl drain. Do not run these commands on the nodes of any clusters where OpenShift Virtualization is deployed. The nodes might not drain if there are virtual machines running on top of them. The current solution is to put nodes into maintenance, as shown in the sketch below.
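A sketch of the maintenance approach using the NodeMaintenance custom resource; the resource name and node name are hypothetical, and the API version may differ by release:
apiVersion: nodemaintenance.kubevirt.io/v1beta1
kind: NodeMaintenance
metadata:
  name: node02-maintenance                 # hypothetical name
spec:
  nodeName: node02                         # hypothetical node
  reason: Drain for cluster upgrade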
- If the OpenShift Virtualization storage PV is not suitable for importing a RHV VM, the progress bar remains at 10% and the import does not complete. The VM Import Controller Pod log displays the following error message: Failed to bind volumes: provisioning failed for PVC. (BZ#1857784)
- If you enter the wrong credentials for the RHV Manager while importing a RHV VM, the Manager might lock the admin user account because the vm-import-operator tries repeatedly to connect to the RHV API. (BZ#1887140)
To unlock the account, log in to the Manager and enter the following command:
$ ovirt-aaa-jdbc-tool user unlock admin
- If you are logged in to the OpenShift Container Platform cluster as a user with basic-user privileges, retrieving guest agent information by running virtctl guestosinfo <vmi_name> fails. As a workaround, you can fetch a subset of the guest agent data by running the oc describe vmi command. (BZ#2000464)