Chapter 3. OpenShift Virtualization release notes
3.1. About Red Hat OpenShift Virtualization
Red Hat OpenShift Virtualization enables you to bring traditional virtual machines (VMs) into OpenShift Container Platform where they run alongside containers, and are managed as native Kubernetes objects.
You can use OpenShift Virtualization with either the OVN-Kubernetes or the OpenShift SDN default Container Network Interface (CNI) network provider.
Learn more about what you can do with OpenShift Virtualization.
3.1.1. OpenShift Virtualization supported cluster version
OpenShift Virtualization 4.10 is supported for use on OpenShift Container Platform 4.10 clusters. To use the latest z-stream release of OpenShift Virtualization, you must first upgrade to the latest version of OpenShift Container Platform.
3.1.2. Supported guest operating systems
To view the supported guest operating systems for OpenShift Virtualization, refer to Certified Guest Operating Systems in Red Hat OpenStack Platform, Red Hat Virtualization and OpenShift Virtualization.
3.2. Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
3.3. New and changed features
OpenShift Virtualization is certified in Microsoft’s Windows Server Virtualization Validation Program (SVVP) to run Windows Server workloads.
The SVVP Certification applies to:
- Red Hat Enterprise Linux CoreOS workers. In the Microsoft SVVP Catalog, they are named Red Hat OpenShift Container Platform 4 on RHEL CoreOS 8.
- Intel and AMD CPUs.
- OpenShift Virtualization is now integrated with OpenShift Service Mesh. You can connect virtual machines to a service mesh to monitor, visualize, and control traffic between pods that run virtual machine workloads on the default pod network with IPv4. A minimal example of connecting a VM to the mesh is sketched after this list.
- OpenShift Virtualization now provides a unified API for the automatic import and update of pre-defined boot sources.
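The following is a minimal sketch of a VirtualMachine connected to the mesh; the VM name, app label, container disk image, and memory request are illustrative assumptions, not values from this release.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-in-mesh                        # illustrative name
spec:
  running: true
  template:
    metadata:
      labels:
        app: vm-in-mesh                   # assumed app label used for mesh telemetry
      annotations:
        sidecar.istio.io/inject: "true"   # request sidecar injection into the virt-launcher pod
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}              # the mesh requires the default pod network with IPv4
        resources:
          requests:
            memory: 1Gi
      networks:
        - name: default
          pod: {}
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/example/fedora-container-disk:latest   # hypothetical image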
3.3.1. Quick starts
- Quick start tours are available for several OpenShift Virtualization features. To view the tours, click the Help icon ? in the menu bar on the header of the OpenShift Virtualization console and then select Quick Starts. You can filter the available tours by entering the virtual machine keyword in the Filter field.
3.3.2. Installation
- OpenShift Virtualization workloads, such as virt-launcher pods, now automatically update if they support live migration. You can configure workload update strategies or opt out of future automatic updates by editing the HyperConverged custom resource, as sketched in the example after this list.
- You can now use OpenShift Virtualization with single node clusters, also known as Single Node OpenShift (SNO).
Note: Single node clusters are not configured for high-availability operation, which results in significant changes to OpenShift Virtualization behavior.
- Resource requests and priority classes are now defined for all OpenShift Virtualization control plane components.
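The following is a minimal sketch of the relevant HyperConverged stanza; the update methods and batch settings shown are illustrative assumptions, not defaults documented in this release.
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  workloadUpdateStrategy:
    workloadUpdateMethods:          # remove all methods to opt out of automatic workload updates
      - LiveMigrate                 # automatically live migrate workloads that support it
    batchEvictionSize: 10           # assumed number of workloads updated per batch
    batchEvictionInterval: "1m0s"   # assumed interval between batches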
3.3.3. Networking
- You can now configure multiple nmstate-enabled nodes concurrently by using a single NodeNetworkConfigurationPolicy manifest; see the sketch after this list.
- Live migration is now supported by default for virtual machines that are attached to an SR-IOV network interface.
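The following is a minimal sketch of such a policy, assuming a hypothetical bridge br1 on interface eth1 and a worker-node selector; all names are illustrative.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-worker-policy              # illustrative name
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: "" # every matching node is configured concurrently
  desiredState:
    interfaces:
      - name: br1                      # hypothetical bridge
        type: linux-bridge
        state: up
        bridge:
          options:
            stp:
              enabled: false
          port:
            - name: eth1               # hypothetical NIC attached to the bridge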
3.3.4. Storage
- Online snapshots are supported for virtual machines that have hot-plugged virtual disks. However, hot-plugged disks that are not in the virtual machine specification are not included in the snapshot.
- You can use the Kubernetes Container Storage Interface (CSI) driver with the hostpath provisioner (HPP) to configure local storage for your virtual machines. Using the CSI driver minimizes disruption to your existing OpenShift Container Platform nodes and clusters when configuring local storage.
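For example, the following is a minimal sketch of a storage class that uses the HPP CSI driver; the class name and the storagePool parameter value are assumptions that must match a storage pool defined in your hostpath provisioner configuration.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath-csi                            # illustrative name
provisioner: kubevirt.io.hostpath-provisioner   # the HPP CSI driver
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer         # defer binding until the consuming pod is scheduled
parameters:
  storagePool: local                            # assumed pool name from the HostPathProvisioner CR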
3.3.5. Web console
- The OpenShift Virtualization dashboard provides resource consumption data for virtual machines and associated pods. The visualization metrics displayed in the OpenShift Virtualization dashboard are based on Prometheus Query Language (PromQL) queries.
3.4. Deprecated and removed features
3.4.1. Deprecated features
Deprecated features are included in the current release and supported. However, they will be removed in a future release and are not recommended for new deployments.
- In a future release, support for the legacy HPP custom resource, and the associated storage class, will be deprecated. Beginning in OpenShift Virtualization 4.10, the HPP Operator uses the Kubernetes Container Storage Interface (CSI) driver to configure local storage. The Operator continues to support the existing (legacy) format of the HPP custom resource and the associated storage class. If you use the HPP Operator, plan to create a storage class for the CSI driver as part of your migration strategy.
3.4.2. Removed features
Removed features are not supported in the current release.
- The VM Import Operator has been removed from OpenShift Virtualization with this release. It is replaced by the Migration Toolkit for Virtualization.
- This release removes the template for CentOS Linux 8, which reached End of Life (EOL) on December 31, 2021. However, OpenShift Container Platform now includes templates for CentOS Stream 8 and CentOS Stream 9.
Note: All CentOS distributions are community-supported.
3.5. Technology Preview features
Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. The Red Hat Customer Portal provides the Technology Preview Features Support Scope for these features:
- You can now use the Red Hat Enterprise Linux 9 Beta template to create virtual machines.
- You can now deploy OpenShift Virtualization on AWS bare metal nodes.
- OpenShift Virtualization critical alerts now have corresponding descriptions of problems that require immediate attention, reasons for why each alert occurs, a troubleshooting process to diagnose the source of the problem, and steps for resolving each alert.
- A cluster administrator can now back up namespaces that contain VMs by using the OpenShift API for Data Protection with the OpenShift Virtualization plug-in.
- Administrators can now declaratively create and expose mediated devices such as virtual graphics processing units (vGPUs) by editing the HyperConverged CR. Virtual machine owners can then assign these devices to VMs.
- You can transfer the static IP configuration of the NIC attached to the bridge by applying a single NodeNetworkConfigurationPolicy manifest to the cluster.
- You can now install OpenShift Virtualization on IBM Cloud Bare Metal Servers. Bare metal servers offered by other cloud providers are not supported.
3.6. Bug fixes
- If you initiate a cloning operation before the clone source becomes available, the cloning operation now completes successfully without using a workaround. (BZ#1855182)
- Editing a virtual machine fails if the VM references a deleted template that was provided by OpenShift Virtualization before version 4.8. In OpenShift Virtualization 4.8 and later, deleted OpenShift Virtualization-provided templates are automatically recreated by the OpenShift Virtualization Operator. (BZ#1929165)
- You can now successfully use the Send Keys and Disconnect buttons when using a virtual machine with a VNC console. (BZ#1964789)
- When you create a virtual machine, its unique fully qualified domain name (FQDN) now contains the cluster domain name. (BZ#1998300)
- If you hot-plug a virtual disk and then force delete the virt-launcher pod, you no longer lose data. (BZ#2007397)
- OpenShift Virtualization now issues an HPPSharingPoolPathWithOS alert if you try to install the hostpath provisioner (HPP) on a path that shares the filesystem with other critical components. To use the HPP to provide storage for virtual machine disks, configure it with dedicated storage that is separate from the node's root filesystem. Otherwise, the node might run out of storage and become non-functional. (BZ#2038985)
- If you provision a virtual machine disk, OpenShift Virtualization now allocates a persistent volume claim (PVC) that is just large enough to accommodate the requested disk size, rather than issuing a KubePersistentVolumeFillingUp alert for each VM disk PVC. You can monitor disk usage from within the virtual machine itself. (BZ#2039489)
- You can now create a virtual machine snapshot for VMs with hot-plugged disks. (BZ#2042908)
- You can now successfully import a VM image when using a cluster-wide proxy configuration. (BZ#2046271)
3.7. Known issues
- You cannot run OpenShift Virtualization on a single-stack IPv6 cluster. (BZ#2193267)
When you use two pods with different SELinux contexts, VMs with the ocs-storagecluster-cephfs storage class fail to migrate and the VM status changes to Paused. This is because both pods try to access the shared ReadWriteMany CephFS volume at the same time. (BZ#2092271)
- As a workaround, use the ocs-storagecluster-ceph-rbd storage class to live migrate VMs on a cluster that uses Red Hat Ceph Storage.
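For reference, the following is a minimal sketch of a DataVolume that requests this storage class for a VM disk; the name, blank source, and size are illustrative assumptions.
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: rbd-vm-disk                             # illustrative name
spec:
  source:
    blank: {}                                   # empty disk; an http or registry source also works
  pvc:
    accessModes:
      - ReadWriteMany                           # RWX is required for live migration
    volumeMode: Block                           # Ceph RBD provides RWX with block mode
    storageClassName: ocs-storagecluster-ceph-rbd
    resources:
      requests:
        storage: 30Gi                           # illustrative size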
Updating to OpenShift Virtualization 4.10.5 causes some virtual machines (VMs) to get stuck in a live migration loop. This occurs if the spec.volumes.containerDisk.path field in the VM manifest is set to a relative path.
- As a workaround, delete and recreate the VM manifest, setting the value of the spec.volumes.containerDisk.path field to an absolute path. You can then update OpenShift Virtualization.
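For illustration, the relevant excerpt of a VM manifest with an absolute path; the volume name and container disk image are hypothetical.
# Excerpt from a VirtualMachine manifest; the volume name and image are hypothetical
volumes:
  - name: containerdisk
    containerDisk:
      image: quay.io/example/custom-disk:latest
      path: /custom/disk.img                    # absolute path, not "custom/disk.img"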
If a single node contains more than 50 images, pod scheduling might be imbalanced across nodes. This is because the list of images on a node is shortened to 50 by default. (BZ#1984442)
- As a workaround, you can disable the image limit by editing the KubeletConfig object and setting the value of nodeStatusMaxImages to -1.
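The following is a minimal sketch of such a KubeletConfig object; the object name and the machine config pool selector label are assumptions and must match a label on your target MachineConfigPool.
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-images                  # illustrative name
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: set-max-images    # assumed label applied to the target MachineConfigPool
  kubeletConfig:
    nodeStatusMaxImages: -1             # removes the default 50-image cap in the node status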
If you deploy the hostpath provisioner on a cluster where any node has a fully qualified domain name (FQDN) that exceeds 42 characters, the provisioner fails to bind PVCs. (BZ#2057157)
Example error message
E0222 17:52:54.088950 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: unable to parse requirement: values[0][csi.storage.k8s.io/managed-by]: Invalid value: "external-provisioner-<node_FQDN>": must be no more than 63 characters 1
1 - Though the error message refers to a maximum of 63 characters, this includes the external-provisioner- string that is prefixed to the node's FQDN.
As a workaround, disable the storageCapacity option in the hostpath provisioner CSI driver by running the following command:
$ oc patch csidriver kubevirt.io.hostpath-provisioner --type merge --patch '{"spec": {"storageCapacity": false}}'
If your OpenShift Container Platform cluster uses OVN-Kubernetes as the default Container Network Interface (CNI) provider, you cannot attach a Linux bridge or bonding device to a host’s default interface because of a change in the host network topology of OVN-Kubernetes. (BZ#1885605)
- As a workaround, you can use a secondary network interface connected to your host, or switch to the OpenShift SDN default CNI provider.
Running virtual machines that cannot be live migrated might block an OpenShift Container Platform cluster upgrade. This includes virtual machines that use hostpath provisioner storage or SR-IOV network interfaces.
As a workaround, you can reconfigure the virtual machines so that they can be powered off during a cluster upgrade. In the spec section of the virtual machine configuration file, modify the evictionStrategy and runStrategy fields:
- Remove the evictionStrategy: LiveMigrate field. See Configuring virtual machine eviction strategy for more information on how to configure eviction strategy.
- Set the runStrategy field to Always.
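The following is a minimal sketch of the resulting VirtualMachine spec with assumed names; only the fields relevant to this workaround are shown.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm                    # illustrative name
spec:
  runStrategy: Always                 # set runStrategy to Always
  template:
    spec:
      # evictionStrategy: LiveMigrate   <- remove this field so the VM can be powered off
      domain:
        devices: {}                   # placeholder; keep your existing domain configuration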
Set the default CPU model by running the following command:
Note: You must make this change before starting the virtual machines that support live migration.
$ oc annotate --overwrite -n openshift-cnv hyperconverged kubevirt-hyperconverged kubevirt.kubevirt.io/jsonpatch='[ { "op": "add", "path": "/spec/configuration/cpuModel", "value": "<cpu_model>" 1 } ]'
1 - Replace <cpu_model> with the actual CPU model value. You can determine this value by running oc describe node <node> for all nodes and looking at the cpu-model-<name> labels. Select the CPU model that is present on all of your nodes.
If you use Red Hat Ceph Storage or Red Hat OpenShift Data Foundation Storage, cloning more than 100 VMs at once might fail. (BZ#1989527)
As a workaround, you can perform a host-assisted copy by setting spec.cloneStrategy: copy in the storage profile manifest. For example:
apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
  name: <provisioner_class>
  # ...
spec:
  claimPropertySets:
    - accessModes:
        - ReadWriteOnce
      volumeMode: Filesystem
  cloneStrategy: copy 1
status:
  provisioner: <provisioner>
  storageClass: <provisioner_class>
1 - The default cloning method, set as copy.
In some instances, multiple virtual machines can mount the same PVC in read-write mode, which might result in data corruption. (BZ#1992753)
- As a workaround, avoid using a single PVC in read-write mode with multiple VMs.
The Pod Disruption Budget (PDB) prevents pod disruptions for migratable virtual machine images. If the PDB detects pod disruption, then openshift-monitoring sends a PodDisruptionBudgetAtLimit alert every 60 minutes for virtual machine images that use the LiveMigrate eviction strategy. (BZ#2026733)
- As a workaround, silence these alerts. See Silencing alerts.
On a large cluster, the OpenShift Virtualization MAC pool manager might take too much time to boot and OpenShift Virtualization might not become ready. (BZ#2035344)
As a workaround, if you do not require MAC pooling functionality, then disable this sub-component by running the following command:
$ oc annotate --overwrite -n openshift-cnv hco kubevirt-hyperconverged 'networkaddonsconfigs.kubevirt.io/jsonpatch=[ { "op": "replace", "path": "/spec/kubeMacPool", "value": null } ]'
OpenShift Virtualization links a service account token in use by a pod to that specific pod. OpenShift Virtualization implements a service account volume by creating a disk image that contains a token. If you migrate a VM, then the service account volume becomes invalid. (BZ#2037611)
- As a workaround, use user accounts rather than service accounts because user account tokens are not bound to a specific pod.
- If a VM crashes or hangs during shutdown, new shutdown requests do not stop the VM. (BZ#2040766)
If you configure the HyperConverged custom resource (CR) to enable mediated devices before drivers are installed, enablement of mediated devices does not occur. This issue can be triggered by updates. For example, if virt-handler is updated before the daemon set that installs the NVIDIA drivers, then nodes cannot provide virtual machine GPUs. (BZ#2046298)
As a workaround:
- Remove mediatedDevicesConfiguration and permittedHostDevices from the HyperConverged CR.
- Update both mediatedDevicesConfiguration and permittedHostDevices stanzas with the configuration you want to use.
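For reference, the following is a minimal sketch of these stanzas in the HyperConverged CR; the mediated device type, device name selector, and resource name are assumptions that depend on the installed GPU hardware and drivers.
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  mediatedDevicesConfiguration:
    mediatedDevicesTypes:                    # mediated device types to create on capable nodes
      - nvidia-231                           # assumed vGPU type; depends on the GPU and driver
  permittedHostDevices:
    mediatedDevices:
      - mdevNameSelector: GRID T4-2Q         # assumed device name exposed by the driver
        resourceName: nvidia.com/GRID_T4-2Q  # assumed resource name that VM owners reference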
- YAML examples in the VM wizard are hardcoded and do not always contain the latest upstream changes. (BZ#2055492)
If you clone more than 100 VMs using the csi-clone cloning strategy, then the Ceph CSI might not purge the clones. Manually deleting the clones can also fail. (BZ#2055595)
- As a workaround, you can restart the ceph-mgr to purge the VM clones.
A non-privileged user cannot use the Add Network Interface button on the VM Network Interfaces tab. (BZ#2056420)
- As a workaround, non-privileged users can add additional network interfaces while creating the VM by using the VM wizard.
A non-privileged user cannot add disks to a VM due to RBAC rules. (BZ#2056421)
- As a workaround, manually add the RBAC rule to allow specific users to add disks.
The web console does not display virtual machine templates that are deployed to a custom namespace. Only templates deployed to the default namespace display in the web console. (BZ#2054650)
- As a workaround, avoid deploying templates to a custom namespace.
On a Single Node OpenShift (SNO) cluster, updating the cluster fails if a VMI has the spec.evictionStrategy field set to LiveMigrate. For live migration to succeed, the cluster must have more than one worker node. (BZ#2073880)
There are two workaround options:
- Remove the spec.evictionStrategy field from the VM declaration.
- Manually stop the VM before you update OpenShift Container Platform.