Chapter 2. Container-native virtualization release notes
2.1. Container-native virtualization release notes
2.1.1. About container-native virtualization 2.2
2.1.1.1. What you can do with container-native virtualization
Container-native virtualization is an add-on to OpenShift Container Platform that allows you to run and manage virtual machine workloads alongside container workloads.
Container-native virtualization adds new objects into your OpenShift Container Platform cluster via Kubernetes custom resources to enable virtualization tasks. These tasks include:
- Creating and managing Linux and Windows virtual machines
- Connecting to virtual machines through a variety of consoles and CLI tools
- Importing and cloning existing virtual machines
- Managing network interface controllers and storage disks attached to virtual machines
- Live migrating virtual machines between nodes
An enhanced web console provides a graphical portal to manage these virtualized resources alongside the OpenShift Container Platform cluster containers and infrastructure.
2.1.1.2. Container-native virtualization support
Container-native virtualization is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
2.1.2. New and changed features
Managing virtual machines is simpler and more efficient due to improvements in design and workflow. You can now:
- Run the virtual machine wizard with less navigation. The wizard now uses a comprehensive in-page style and includes a review page for confirming configuration details before submission.
- Import a single VMware virtual machine with less navigation.
- Edit virtual machine templates as well as virtual machine configurations.
- Monitor the health of virtual machine-backed services as you would for Pod-based services.
- Enable persistent local storage for virtual machine images.
- Add, edit, and view virtual CD-ROM devices attached to a virtual machine.
- Add and view network attachment definitions with a graphical editor.
2.1.3. Resolved issues
- Previously, when you added a disk to a virtual machine via the Disks tab in the web console, the added disk had a Filesystem volumeMode, regardless of the volumeMode set in the kubevirt-storage-class-default ConfigMap. This issue has been fixed. (BZ#1753688)
- Previously, when navigating to the Virtual Machines Console tab, sometimes no content was displayed. This issue has been fixed. (BZ#1753606)
- Previously, attempting to list all instances of the container-native virtualization operator from a browser resulted in a 404 (page not found) error. This issue has been fixed. (BZ#1757526)
- Previously, if a virtual machine used guaranteed CPUs, it was not scheduled because the label cpumanager=true was not automatically set on nodes. This issue has been fixed. (BZ#1718944)
2.1.4. Known issues
- If you have container-native virtualization 2.1.0 deployed, you must first upgrade container-native virtualization to 2.2.0 before upgrading OpenShift Container Platform. Upgrading OpenShift Container Platform before upgrading container-native virtualization might trigger virtual machine deletion. (BZ#1785661)
- The masquerade binding method for virtual machines cannot be used in clusters with RHEL 7 compute nodes. (BZ#1741626)
- After migration, a virtual machine is assigned a new IP address. However, the commands oc get vmi and oc describe vmi still generate output containing the obsolete IP address. (BZ#1686208)
As a workaround, view the correct IP address by running the following command:
$ oc get pod -o wide
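If many Pods are running in the namespace, you can narrow the output to the virt-launcher Pod of a specific virtual machine. The label selector below is a sketch that assumes the kubevirt.io/domain label that virt-launcher Pods typically carry; verify the labels on your Pods before relying on it:
$ oc get pod -l kubevirt.io/domain=<vm-name> -o wide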
- Some resources are improperly retained when removing container-native virtualization. You must manually remove these resources in order to reinstall container-native virtualization. (BZ#1712429)
- Users without administrator privileges cannot add a network interface to a project in an L2 network using the virtual machine wizard. This issue is caused by missing permissions that allow users to load network attachment definitions. (BZ#1743985)
As a workaround, provide the user with permissions to load the network attachment definitions.
Define ClusterRole and ClusterRoleBinding objects in a YAML configuration file, using the following examples:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cni-resources
rules:
- apiGroups: ["k8s.cni.cncf.io"]
  resources: ["*"]
  verbs: ["*"]

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: <role-binding-name>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cni-resources
subjects:
- kind: User
  name: <user to grant the role to>
  namespace: <namespace of the user>
As a cluster-admin user, run the following command to create the ClusterRole and ClusterRoleBinding objects you defined:
$ oc create -f <filename>.yaml
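To confirm that the permissions took effect, you can check the user's access from the CLI. This is a minimal sketch that assumes the user name you granted in the ClusterRoleBinding:
$ oc auth can-i list network-attachment-definitions.k8s.cni.cncf.io --as=<user> -n <namespace>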
- Live migration fails when nodes have different CPU models. Even in cases where nodes have the same physical CPU model, differences introduced by microcode updates have the same effect. This is because the default settings trigger host CPU passthrough behavior, which is incompatible with live migration. (BZ#1760028)
As a workaround, set the default CPU model in the kubevirt-config ConfigMap, as shown in the following example.
Note: You must make this change before starting the virtual machines that support live migration.
Open the kubevirt-config ConfigMap for editing by running the following command:
$ oc edit configmap kubevirt-config -n openshift-cnv
Edit the ConfigMap:
kind: ConfigMap
metadata:
  name: kubevirt-config
data:
  default-cpu-model: "<cpu-model>" 1
1 Replace <cpu-model> with the actual CPU model value. You can determine this value by running oc describe node <node> for all nodes and looking at the cpu-model-<name> labels. Select the CPU model that is present on all of your nodes.
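For example, one way to compare the CPU model labels across all nodes is to filter the node descriptions from the CLI. This is a minimal sketch that assumes the cpu-model-<name> label naming described above:
$ for node in $(oc get nodes -o name); do echo "== ${node}"; oc describe ${node} | grep cpu-model; done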
- When running virtctl image-upload to upload large VM disk images in qcow2 format, an end-of-file (EOF) error may be reported after the data is transmitted, even though the upload is either progressing normally or completed. (BZ#1789093)
Run the following command to check the status of an upload on a given PVC:
$ oc describe pvc <pvc-name> | grep cdi.kubevirt.io/storage.pod.phase
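For reference, a typical upload that can trigger this error looks similar to the following. This is a sketch only; flag names vary between virtctl versions, so check virtctl image-upload --help on your client first:
$ virtctl image-upload --pvc-name=<pvc-name> --pvc-size=<pvc-size> --image-path=</path/to/image.qcow2>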
- When attempting to create and launch a virtual machine using a Haswell CPU, the virtual machine can fail to launch due to incorrectly labeled nodes. This is a change in behavior from previous versions of container-native virtualization, where virtual machines could be successfully launched on Haswell hosts. (BZ#1781497)
As a workaround, select a different CPU model, if possible.
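For example, one way to select a different model is to set it explicitly in the virtual machine configuration. The following is a minimal fragment of a VirtualMachine spec, not a complete definition, and the model shown is only an illustrative value; choose a model that every node in your cluster supports:
spec:
  template:
    spec:
      domain:
        cpu:
          model: Broadwell-noTSX  # illustrative value only; pick a model available on all nodes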
- If the hostpath provisioner uses a directory that shares space with your operating system, you can potentially exhaust the space on that partition, causing the node to be non-functional. Instead, create a separate partition and point the hostpath provisioner to that partition so it will not interfere with your operating system. (BZ#1793132)
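For example, the hostpath provisioner can be pointed at a directory on the dedicated partition through its custom resource. The following is a minimal sketch; the apiVersion and the /var/hpvolumes path are assumptions, so adjust them to match your installed hostpath provisioner and your partition layout:
apiVersion: hostpathprovisioner.kubevirt.io/v1alpha1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  pathConfig:
    path: /var/hpvolumes            # directory on the separate partition
    useNamingPrefix: "false"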
- The container-native virtualization upgrade process occasionally fails due to an interruption from the Operator Lifecycle Manager (OLM). This issue is caused by the limitations associated with using a declarative API to track the state of container-native virtualization Operators. Enabling automatic updates during installation decreases the risk of encountering this issue. (BZ#1759612)
- Container-native virtualization cannot reliably identify node drains that are triggered by running either oc adm drain or kubectl drain. Do not run these commands on the nodes of any clusters where container-native virtualization is deployed. The nodes might not drain if there are virtual machines running on top of them. The current solution is to put nodes into maintenance. (BZ#1707427)
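For reference, node maintenance can be requested by creating a NodeMaintenance custom resource instead of draining the node directly. The following is a minimal sketch; the apiVersion is an assumption and can differ between releases, so verify it with oc api-resources | grep -i nodemaintenance before creating the object:
apiVersion: nodemaintenance.kubevirt.io/v1beta1
kind: NodeMaintenance
metadata:
  name: node-maintenance-example
spec:
  nodeName: <node-name>
  reason: "Maintenance for upgrade"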
- If you navigate to the Subscription tab on the Operators → Installed Operators page and click the current upgrade channel to edit it, there might be no visible results, and no error is displayed. (BZ#1796410)
As a workaround, trigger the upgrade process to container-native virtualization 2.2 from the CLI by running the following oc patch command:
$ export TARGET_NAMESPACE=openshift-cnv CNV_CHANNEL=2.2 && oc patch -n "${TARGET_NAMESPACE}" $(oc get subscription -n ${TARGET_NAMESPACE} --no-headers -o name) --type='json' -p='[{"op": "replace", "path": "/spec/channel", "value":"'${CNV_CHANNEL}'"}, {"op": "replace", "path": "/spec/installPlanApproval", "value":"Automatic"}]'
This command points your subscription to upgrade channel 2.2 and enables automatic updates.
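To confirm the change, you can inspect the Subscription and watch the new ClusterServiceVersion roll out, for example:
$ oc get subscription -n "${TARGET_NAMESPACE}" -o yaml | grep -E 'channel|currentCSV'
$ oc get csv -n "${TARGET_NAMESPACE}"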