Virtualization

OpenShift Container Platform 4.14

OpenShift Virtualization installation, usage, and release notes

Red Hat OpenShift Documentation Team

Abstract

This document provides information about how to use OpenShift Virtualization in OpenShift Container Platform.

Chapter 1. About

1.1. About OpenShift Virtualization

Learn about OpenShift Virtualization’s capabilities and support scope.

1.1.1. What you can do with OpenShift Virtualization

OpenShift Virtualization is an add-on to OpenShift Container Platform that allows you to run and manage virtual machine workloads alongside container workloads.

OpenShift Virtualization adds new objects into your OpenShift Container Platform cluster by using Kubernetes custom resources to enable virtualization tasks. These tasks include:

  • Creating and managing Linux and Windows virtual machines (VMs)
  • Running pod and VM workloads alongside each other in a cluster
  • Connecting to virtual machines through a variety of consoles and CLI tools
  • Importing and cloning existing virtual machines
  • Managing network interface controllers and storage disks attached to virtual machines
  • Live migrating virtual machines between nodes

An enhanced web console provides a graphical portal to manage these virtualized resources alongside the OpenShift Container Platform cluster containers and infrastructure.

OpenShift Virtualization is designed and tested to work well with Red Hat OpenShift Data Foundation features.

Important

When you deploy OpenShift Virtualization with OpenShift Data Foundation, you must create a dedicated storage class for Windows virtual machine disks. See Optimizing ODF PersistentVolumes for Windows VMs for details.

You can use OpenShift Virtualization with OVN-Kubernetes, OpenShift SDN, or one of the other certified network plugins listed in Certified OpenShift CNI Plug-ins.

You can check your OpenShift Virtualization cluster for compliance issues by installing the Compliance Operator and running a scan with the ocp4-moderate and ocp4-moderate-node profiles. The Compliance Operator uses OpenSCAP, a NIST-certified tool, to scan and enforce security policies.
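
For example, after the Compliance Operator is installed, you can start a scan with both profiles by creating a ScanSettingBinding. The following is a minimal sketch that assumes the default openshift-compliance namespace and the default scan settings; the binding name is illustrative:

$ oc apply -f - <<EOF
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: moderate-compliance    # illustrative name
  namespace: openshift-compliance
profiles:
  - apiGroup: compliance.openshift.io/v1alpha1
    kind: Profile
    name: ocp4-moderate
  - apiGroup: compliance.openshift.io/v1alpha1
    kind: Profile
    name: ocp4-moderate-node
settingsRef:
  apiGroup: compliance.openshift.io/v1alpha1
  kind: ScanSetting
  name: default
EOF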

1.1.1.1. OpenShift Virtualization supported cluster version

OpenShift Virtualization 4.14 is supported for use on OpenShift Container Platform 4.14 clusters. To use the latest z-stream release of OpenShift Virtualization, you must first upgrade to the latest version of OpenShift Container Platform.

1.1.2. About volume and access modes for virtual machine disks

If you use the storage API with known storage providers, the volume and access modes are selected automatically. However, if you use a storage class that does not have a storage profile, you must configure the volume and access mode.

For best results, use the ReadWriteMany (RWX) access mode and the Block volume mode. This is important for the following reasons:

  • ReadWriteMany (RWX) access mode is required for live migration.
  • The Block volume mode performs significantly better than the Filesystem volume mode. This is because the Filesystem volume mode uses more storage layers, including a file system layer and a disk image file. These layers are not necessary for VM disk storage.

    For example, if you use Red Hat OpenShift Data Foundation, Ceph RBD volumes are preferable to CephFS volumes.
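
If the storage class has no useful defaults, you can set these modes on its storage profile instead of on each disk. The following is a minimal sketch that assumes a storage class, and therefore a StorageProfile resource of the same name, whose provisioner supports RWX block volumes:

$ oc patch storageprofile <storage_class_name> --type merge \
  -p '{"spec": {"claimPropertySets": [{"accessModes": ["ReadWriteMany"], "volumeMode": "Block"}]}}'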

Important

You cannot live migrate virtual machines with the following configurations:

  • Storage volume with ReadWriteOnce (RWO) access mode
  • Passthrough features such as GPUs

Do not set the evictionStrategy field to LiveMigrate for these virtual machines.
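
For example, you can explicitly opt such a virtual machine out of live migration by patching its eviction strategy; a minimal sketch, with the VM name as a placeholder:

$ oc patch vm <vm_name> --type merge \
  -p '{"spec": {"template": {"spec": {"evictionStrategy": "None"}}}}'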

1.1.3. Single-node OpenShift differences

You can install OpenShift Virtualization on single-node OpenShift.

However, you should be aware that Single-node OpenShift does not support the following features:

  • High availability
  • Pod disruption
  • Live migration
  • Virtual machines or templates that have an eviction strategy configured

1.1.4. Additional resources

1.2. Security policies

Learn about OpenShift Virtualization security and authorization.

Key points

  • OpenShift Virtualization adheres to the restricted Kubernetes pod security standards profile, which aims to enforce the current best practices for pod security.
  • Virtual machine (VM) workloads run as unprivileged pods.
  • Security context constraints (SCCs) are defined for the kubevirt-controller service account.
  • TLS certificates for OpenShift Virtualization components are renewed and rotated automatically.

1.2.1. About workload security

By default, virtual machine (VM) workloads do not run with root privileges in OpenShift Virtualization, and there are no supported OpenShift Virtualization features that require root privileges.

For each VM, a virt-launcher pod runs an instance of libvirt in session mode to manage the VM process. In session mode, the libvirt daemon runs as a non-root user account and only permits connections from clients that are running under the same user identifier (UID). Therefore, VMs run as unprivileged pods, adhering to the security principle of least privilege.

1.2.2. TLS certificates

TLS certificates for OpenShift Virtualization components are renewed and rotated automatically. You are not required to refresh them manually.

Automatic renewal schedules

TLS certificates are automatically deleted and replaced according to the following schedule:

  • KubeVirt certificates are renewed daily.
  • Containerized Data Importer controller (CDI) certificates are renewed every 15 days.
  • MAC pool certificates are renewed every year.

Automatic TLS certificate rotation does not disrupt any operations. For example, the following operations continue to function without any disruption:

  • Migrations
  • Image uploads
  • VNC and console connections

1.2.3. Authorization

OpenShift Virtualization uses role-based access control (RBAC) to define permissions for human users and service accounts. The permissions defined for service accounts control the actions that OpenShift Virtualization components can perform.

You can also use RBAC roles to manage user access to virtualization features. For example, an administrator can create an RBAC role that provides the permissions required to launch a virtual machine. The administrator can then restrict access by binding the role to specific users.
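
A minimal sketch of such a role and binding, assuming that starting a VM is exposed through the subresources.kubevirt.io API group and using illustrative names:

$ oc apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: vm-starter           # illustrative role name
  namespace: <namespace>
rules:
  - apiGroups: ["kubevirt.io"]
    resources: ["virtualmachines"]
    verbs: ["get", "list"]
  - apiGroups: ["subresources.kubevirt.io"]
    resources: ["virtualmachines/start"]
    verbs: ["update"]
EOF
$ oc create rolebinding vm-starter-binding --role=vm-starter --user=<user_name> -n <namespace>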

1.2.3.1. Default cluster roles for OpenShift Virtualization

By using cluster role aggregation, OpenShift Virtualization extends the default OpenShift Container Platform cluster roles to include permissions for accessing virtualization objects.

Table 1.1. OpenShift Virtualization cluster roles
Default cluster role | OpenShift Virtualization cluster role | Description

view

kubevirt.io:view

A user that can view all OpenShift Virtualization resources in the cluster but cannot create, delete, modify, or access them. For example, the user can see that a virtual machine (VM) is running but cannot shut it down or gain access to its console.

edit

kubevirt.io:edit

A user that can modify all OpenShift Virtualization resources in the cluster. For example, the user can create VMs, access VM consoles, and delete VMs.

admin

kubevirt.io:admin

A user that has full permissions to all OpenShift Virtualization resources, including the ability to delete collections of resources. The user can also view and modify the OpenShift Virtualization runtime configuration, which is located in the HyperConverged custom resource in the openshift-cnv namespace.
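
For example, to grant a user the kubevirt.io:edit permissions in a single namespace rather than across the cluster, you can bind the cluster role with a namespaced role binding; the binding name is illustrative:

$ oc create rolebinding kubevirt-edit --clusterrole=kubevirt.io:edit --user=<user_name> -n <namespace>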

1.2.3.2. RBAC roles for storage features in OpenShift Virtualization

The following permissions are granted to the Containerized Data Importer (CDI), including the cdi-operator and cdi-controller service accounts.

1.2.3.2.1. Cluster-wide RBAC roles
Table 1.2. Aggregated cluster roles for the cdi.kubevirt.io API group
CDI cluster role | Resources | Verbs

cdi.kubevirt.io:admin

datavolumes, uploadtokenrequests

* (all)

datavolumes/source

create

cdi.kubevirt.io:edit

datavolumes, uploadtokenrequests

*

datavolumes/source

create

cdi.kubevirt.io:view

cdiconfigs, dataimportcrons, datasources, datavolumes, objecttransfers, storageprofiles, volumeimportsources, volumeuploadsources, volumeclonesources

get, list, watch

datavolumes/source

create

cdi.kubevirt.io:config-reader

cdiconfigs, storageprofiles

get, list, watch

Table 1.3. Cluster-wide roles for the cdi-operator service account
API group | Resources | Verbs

rbac.authorization.k8s.io

clusterrolebindings, clusterroles

get, list, watch, create, update, delete

security.openshift.io

securitycontextconstraints

get, list, watch, update, create

apiextensions.k8s.io

customresourcedefinitions, customresourcedefinitions/status

get, list, watch, create, update, delete

cdi.kubevirt.io

*

*

upload.cdi.kubevirt.io

*

*

admissionregistration.k8s.io

validatingwebhookconfigurations, mutatingwebhookconfigurations

create, list, watch

admissionregistration.k8s.io

validatingwebhookconfigurations

Allow list: cdi-api-dataimportcron-validate, cdi-api-populator-validate, cdi-api-datavolume-validate, cdi-api-validate, objecttransfer-api-validate

get, update, delete

admissionregistration.k8s.io

mutatingwebhookconfigurations

Allow list: cdi-api-datavolume-mutate

get, update, delete

apiregistration.k8s.io

apiservices

get, list, watch, create, update, delete

Table 1.4. Cluster-wide roles for the cdi-controller service account
API group | Resources | Verbs

"" (core)

events

create, patch

"" (core)

persistentvolumeclaims

get, list, watch, create, update, delete, deletecollection, patch

"" (core)

persistentvolumes

get, list, watch, update

"" (core)

persistentvolumeclaims/finalizers, pods/finalizers

update

"" (core)

pods, services

get, list, watch, create, delete

"" (core)

configmaps

get, create

storage.k8s.io

storageclasses, csidrivers

get, list, watch

config.openshift.io

proxies

get, list, watch

cdi.kubevirt.io

*

*

snapshot.storage.k8s.io

volumesnapshots, volumesnapshotclasses, volumesnapshotcontents

get, list, watch, create, delete

snapshot.storage.k8s.io

volumesnapshots

update, deletecollection

apiextensions.k8s.io

customresourcedefinitions

get, list, watch

scheduling.k8s.io

priorityclasses

get, list, watch

image.openshift.io

imagestreams

get, list, watch

"" (core)

secrets

create

kubevirt.io

virtualmachines/finalizers

update

1.2.3.2.2. Namespaced RBAC roles
Table 1.5. Namespaced roles for the cdi-operator service account
API group | Resources | Verbs

rbac.authorization.k8s.io

rolebindings, roles

get, list, watch, create, update, delete

"" (core)

serviceaccounts, configmaps, events, secrets, services

get, list, watch, create, update, patch, delete

apps

deployments, deployments/finalizers

get, list, watch, create, update, delete

route.openshift.io

routes, routes/custom-host

get, list, watch, create, update

config.openshift.io

proxies

get, list, watch

monitoring.coreos.com

servicemonitors, prometheusrules

get, list, watch, create, delete, update, patch

coordination.k8s.io

leases

get, create, update

Table 1.6. Namespaced roles for the cdi-controller service account
API group | Resources | Verbs

"" (core)

configmaps

get, list, watch, create, update, delete

"" (core)

secrets

get, list, watch

batch

cronjobs

get, list, watch, create, update, delete

batch

jobs

create, delete, list, watch

coordination.k8s.io

leases

get, create, update

networking.k8s.io

ingresses

get, list, watch

route.openshift.io

routes

get, list, watch

1.2.3.3. Additional SCCs and permissions for the kubevirt-controller service account

Security context constraints (SCCs) control permissions for pods. These permissions include actions that a pod, a collection of containers, can perform and what resources it can access. You can use SCCs to define a set of conditions that a pod must run with to be accepted into the system.

The virt-controller is a cluster controller that creates the virt-launcher pods for virtual machines in the cluster. These pods are granted permissions by the kubevirt-controller service account.

The kubevirt-controller service account is granted additional SCCs and Linux capabilities so that it can create virt-launcher pods with the appropriate permissions. These extended permissions allow virtual machines to use OpenShift Virtualization features that are beyond the scope of typical pods.

The kubevirt-controller service account is granted the following SCCs:

  • scc.AllowHostDirVolumePlugin = true
    This allows virtual machines to use the hostpath volume plugin.
  • scc.AllowPrivilegedContainer = false
    This ensures the virt-launcher pod is not run as a privileged container.
  • scc.AllowedCapabilities = []corev1.Capability{"SYS_NICE", "NET_BIND_SERVICE"}

    • SYS_NICE allows setting the CPU affinity.
    • NET_BIND_SERVICE allows DHCP and Slirp operations.

Viewing the SCC and RBAC definitions for the kubevirt-controller

You can view the SecurityContextConstraints definition for the kubevirt-controller by using the oc tool:

$ oc get scc kubevirt-controller -o yaml

You can view the RBAC definition for the kubevirt-controller clusterrole by using the oc tool:

$ oc get clusterrole kubevirt-controller -o yaml

1.2.4. Additional resources

1.3. OpenShift Virtualization Architecture

The Operator Lifecycle Manager (OLM) deploys operator pods for each component of OpenShift Virtualization:

  • Compute: virt-operator
  • Storage: cdi-operator
  • Network: cluster-network-addons-operator
  • Scaling: ssp-operator
  • Templating: tekton-tasks-operator

OLM also deploys the hyperconverged-cluster-operator pod, which is responsible for the deployment, configuration, and life cycle of the other components, and the helper pods hco-webhook and hyperconverged-cluster-cli-download.

After all operator pods are successfully deployed, you should create the HyperConverged custom resource (CR). The configurations set in the HyperConverged CR serve as the single source of truth and the entrypoint for OpenShift Virtualization, and guide the behavior of the CRs.

The HyperConverged CR creates corresponding CRs for the operators of all other components within its reconciliation loop. Each operator then creates resources such as daemon sets, config maps, and additional components for the OpenShift Virtualization control plane. For example, when the HyperConverged Operator (HCO) creates the KubeVirt CR, the OpenShift Virtualization Operator reconciles it and creates additional resources such as virt-controller, virt-handler, and virt-api.
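
A minimal HyperConverged CR is usually enough to start this chain; the following sketch uses the fixed name and namespace that the Operator expects and accepts all defaults:

$ oc apply -f - <<EOF
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec: {}
EOF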

The OLM deploys the Hostpath Provisioner (HPP) Operator, but it is not functional until you create a hostpath-provisioner CR.
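
The following is a sketch of such a CR, assuming the CSI-based HPP with a single storage pool backed by a local directory; the pool name and path are placeholders:

$ oc apply -f - <<EOF
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  storagePools:
    - name: local            # illustrative pool name
      path: /var/hpvolumes   # illustrative backing directory on each node
  workload:
    nodeSelector:
      kubernetes.io/os: linux
EOF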

Deployments

1.3.1. About the HyperConverged Operator (HCO)

The HCO, hco-operator, provides a single entry point for deploying and managing OpenShift Virtualization and several helper operators with opinionated defaults. It also creates custom resources (CRs) for those operators.

hco-operator components
Table 1.7. HyperConverged Operator components
Component | Description

deployment/hco-webhook

Validates the HyperConverged custom resource contents.

deployment/hyperconverged-cluster-cli-download

Provides the virtctl tool binaries to the cluster so that you can download them directly from the cluster.

KubeVirt/kubevirt-kubevirt-hyperconverged

Contains all operators, CRs, and objects needed by OpenShift Virtualization.

SSP/ssp-kubevirt-hyperconverged

A Scheduling, Scale, and Performance (SSP) CR. This is automatically created by the HCO.

CDI/cdi-kubevirt-hyperconverged

A Containerized Data Importer (CDI) CR. This is automatically created by the HCO.

NetworkAddonsConfig/cluster

A CR that instructs and is managed by the cluster-network-addons-operator.

1.3.2. About the Containerized Data Importer (CDI) Operator

The CDI Operator, cdi-operator, manages CDI and its related resources. CDI imports a virtual machine (VM) image into a persistent volume claim (PVC) by using a data volume.
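
For example, an import from an HTTP source can be expressed as a DataVolume similar to the following sketch; the image URL and requested size are placeholders:

$ oc apply -f - <<EOF
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: example-import-dv    # illustrative name
spec:
  source:
    http:
      url: "https://example.com/images/rhel9.qcow2"   # placeholder image URL
  storage:
    resources:
      requests:
        storage: 30Gi        # placeholder size
EOF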

cdi-operator components
Table 1.8. CDI Operator components
Component | Description

deployment/cdi-apiserver

Manages the authorization to upload VM disks into PVCs by issuing secure upload tokens.

deployment/cdi-uploadproxy

Directs external disk upload traffic to the appropriate upload server pod so that it can be written to the correct PVC. Requires a valid upload token.

pod/cdi-importer

Helper pod that imports a virtual machine image into a PVC when creating a data volume.

1.3.3. About the Cluster Network Addons Operator

The Cluster Network Addons Operator, cluster-network-addons-operator, deploys networking components on a cluster and manages the related resources for extended network functionality.

cluster-network-addons-operator components
Table 1.9. Cluster Network Addons Operator components
Component | Description

deployment/kubemacpool-cert-manager

Manages TLS certificates of Kubemacpool’s webhooks.

deployment/kubemacpool-mac-controller-manager

Provides a MAC address pooling service for virtual machine (VM) network interface cards (NICs).

daemonset/bridge-marker

Marks network bridges available on nodes as node resources.

daemonset/kube-cni-linux-bridge-plugin

Installs Container Network Interface (CNI) plugins on cluster nodes, enabling the attachment of VMs to Linux bridges through network attachment definitions.

1.3.4. About the Hostpath Provisioner (HPP) Operator

The HPP Operator, hostpath-provisioner-operator, deploys and manages the multi-node HPP and related resources.

hpp-operator components
Table 1.10. HPP Operator components
Component | Description

deployment/hpp-pool-hpp-csi-pvc-block-<worker_node_name>

Provides a worker for each node where the HPP is designated to run. The pods mount the specified backing storage on the node.

daemonset/hostpath-provisioner-csi

Implements the Container Storage Interface (CSI) driver interface of the HPP.

daemonset/hostpath-provisioner

Implements the legacy driver interface of the HPP.

1.3.5. About the Scheduling, Scale, and Performance (SSP) Operator

The SSP Operator, ssp-operator, deploys the common templates, the related default boot sources, the pipeline tasks, and the template validator.

ssp-operator components
Table 1.11. SSP Operator components
Component | Description

deployment/create-vm-from-template

Creates a VM from a template.

deployment/copy-template

Copies a VM template.

deployment/modify-vm-template

Creates or removes a VM template.

deployment/modify-data-object

Creates or removes data volumes or data sources.

deployment/cleanup-vm

Runs a script or a command on a VM, then stops or deletes the VM afterward.

deployment/disk-virt-customize

Runs a customize script on a target persistent volume claim (PVC) using virt-customize.

deployment/disk-virt-sysprep

Runs a sysprep script on a target PVC by using virt-sysprep.

deployment/wait-for-vmi-status

Waits for a specific virtual machine instance (VMI) status, then fails or succeeds according to that status.

deployment/create-vm-from-manifest

Creates a VM from a manifest.

1.3.6. About the OpenShift Virtualization Operator

The OpenShift Virtualization Operator, virt-operator, deploys, upgrades, and manages OpenShift Virtualization without disrupting current virtual machine (VM) workloads.

virt-operator components
Table 1.12. virt-operator components
Component | Description

deployment/virt-api

HTTP API server that serves as the entry point for all virtualization-related flows.

deployment/virt-controller

Observes the creation of a new VM instance object and creates a corresponding pod. When the pod is scheduled on a node, virt-controller updates the VM with the node name.

daemonset/virt-handler

Monitors any changes to a VM and instructs virt-launcher to perform the required operations. This component is node-specific.

pod/virt-launcher

Contains the VM that was created by the user as implemented by libvirt and qemu.

Chapter 2. Release notes

2.1. OpenShift Virtualization release notes

2.1.1. Making open source more inclusive

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

2.1.2. Providing documentation feedback

To report an error or to improve our documentation, log in to your Red Hat Jira account and submit a Jira issue.

2.1.3. About Red Hat OpenShift Virtualization

Red Hat OpenShift Virtualization enables you to bring traditional virtual machines (VMs) into OpenShift Container Platform where they run alongside containers, and are managed as native Kubernetes objects.

OpenShift Virtualization is represented by the OpenShift Virtualization icon.

You can use OpenShift Virtualization with either the OVN-Kubernetes or the OpenShift SDN default Container Network Interface (CNI) network provider.

Learn more about what you can do with OpenShift Virtualization.

Learn more about OpenShift Virtualization architecture and deployments.

Prepare your cluster for OpenShift Virtualization.

2.1.3.1. OpenShift Virtualization supported cluster version

OpenShift Virtualization 4.14 is supported for use on OpenShift Container Platform 4.14 clusters. To use the latest z-stream release of OpenShift Virtualization, you must first upgrade to the latest version of OpenShift Container Platform.

2.1.3.2. Supported guest operating systems

To view the supported guest operating systems for OpenShift Virtualization, see Certified Guest Operating Systems in Red Hat OpenStack Platform, Red Hat Virtualization, OpenShift Virtualization and Red Hat Enterprise Linux with KVM.

2.1.4. New and changed features

  • OpenShift Virtualization is certified in Microsoft’s Windows Server Virtualization Validation Program (SVVP) to run Windows Server workloads.

    The SVVP Certification applies to:

    • Red Hat Enterprise Linux CoreOS workers. In the Microsoft SVVP Catalog, they are named Red Hat OpenShift Container Platform 4 on RHEL CoreOS 9.
    • Intel and AMD CPUs.
  • Using the NVIDIA GPU Operator to provision worker nodes for GPU-enabled VMs was previously Technology Preview and is now generally available. For more information, see Configuring the NVIDIA GPU Operator.
  • You can add a static authorized SSH key to a project by using the web console. The key is then added to all VMs that you create in the project.
  • OpenShift Virtualization now supports persisting the virtual Trusted Platform Module (vTPM) device state by using persistent volume claims (PVCs) for VMs. You must specify the storage class to be used by the PVC by setting the vmStateStorageClass attribute in the HyperConverged custom resource (CR), as shown in the example after this list.
  • The access mode and volume mode fields in storage profiles are populated automatically with their optimal values for the following additional Container Storage Interface (CSI) provisioners:

    • Dell PowerFlex
    • Dell PowerMax
    • Dell PowerScale
    • Dell Unity
    • Dell PowerStore
    • Hitachi Virtual Storage Platform
    • IBM Fusion Hyper-Converged Infrastructure
    • IBM Fusion HCI with Fusion Data Foundation or Fusion Global Data Platform
    • IBM Fusion Software-Defined Storage
    • IBM FlashSystems
    • Hewlett Packard Enterprise 3PAR
    • Hewlett Packard Enterprise Nimble
    • Hewlett Packard Enterprise Alletra
    • Hewlett Packard Enterprise Primera
  • Garbage collection for data volumes is disabled by default.
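
For the vTPM state change noted in this list, the storage class is set directly in the HyperConverged CR; a minimal sketch, with the storage class name as a placeholder:

$ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \
  --type merge -p '{"spec": {"vmStateStorageClass": "<storage_class_name>"}}'
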
2.1.4.1. Quick starts
  • Quick start tours are available for several OpenShift Virtualization features. To view the tours, click the Help icon ? in the menu bar on the header of the OpenShift Virtualization console and then select Quick Starts. You can filter the available tours by entering the virtualization keyword in the Filter field.
2.1.4.2. Networking
2.1.4.3. Web console
  • Cluster administrators can now enable automatic subscription for Red Hat Enterprise Linux (RHEL) virtual machines in the OpenShift Virtualization web console.
  • You can now force stop an unresponsive VM from the action menu. To force stop a VM, select Stop and then Force stop from the action menu.
  • The DataSources and the Bootable volumes pages have been merged into the Bootable volumes page so that you can manage these similar resources in a single location.
  • Cluster administrators can enable or disable Technology Preview features on the Settings tab of the Virtualization → Overview page.

2.1.5. Deprecated and removed features

2.1.5.1. Deprecated features

Deprecated features are included in the current release and supported. However, they will be removed in a future release and are not recommended for new deployments.

  • The tekton-tasks-operator is deprecated and Tekton tasks and example pipelines are now deployed by the ssp-operator.
  • The copy-template, modify-vm-template, and create-vm-from-template tasks are deprecated.
  • Support for Windows Server 2012 R2 templates is deprecated.
2.1.5.2. Removed features

Removed features are not supported in the current release.

  • Support for the legacy HPP custom resource, and the associated storage class, has been removed for all new deployments. In OpenShift Virtualization 4.14, the HPP Operator uses the Kubernetes Container Storage Interface (CSI) driver to configure local storage. A legacy HPP custom resource is supported only if it had been installed on a previous version of OpenShift Virtualization.
  • Installing the virtctl client as an RPM is no longer supported for Red Hat Enterprise Linux (RHEL) 7 and RHEL 9.

2.1.6. Technology Preview features

Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Note the following scope of support on the Red Hat Customer Portal for these features:

Technology Preview Features Support Scope

  • You can now install and edit customized instance types and preferences to create a VM from a volume or PersistentVolumeClaim (PVC).
  • You can hot plug a bridge network interface to a running virtual machine (VM). Hot plugging and hot unplugging is supported only for VMs created with OpenShift Virtualization 4.14 or later.

2.1.7. Bug fixes

  • The mediated devices configuration API in the HyperConverged custom resource (CR) has been updated to improve consistency. The field that was previously named mediatedDevicesTypes is now named mediatedDeviceTypes to align with the naming convention used for the nodeMediatedDeviceTypes field. (BZ#2054863)
  • Virtual machines created from common templates on a Single Node OpenShift (SNO) cluster no longer display a VMCannotBeEvicted alert when the cluster-level eviction strategy is None for SNO. (BZ#2092412)
  • Windows 11 virtual machines now boot on clusters running in FIPS mode. (BZ#2089301)
  • When you use two pods with different SELinux contexts, VMs with the ocs-storagecluster-cephfs storage class no longer fail to migrate. (BZ#2092271)
  • If you stop a node on a cluster and then use the Node Health Check Operator to bring the node back up, connectivity to Multus is retained. (OCPBUGS-8398)
  • When restoring a VM snapshot for storage whose binding mode is WaitForFirstConsumer, the restored PVCs no longer remain in the Pending state and the restore operation proceeds. (BZ#2149654)

2.1.8. Known issues

Monitoring
  • The Pod Disruption Budget (PDB) prevents pod disruptions for migratable virtual machine instances. If the PDB detects pod disruption, then openshift-monitoring sends a PodDisruptionBudgetAtLimit alert every 60 minutes for virtual machine instances that use the LiveMigrate eviction strategy. (BZ#2026733)

Networking
  • If your OpenShift Container Platform cluster uses OVN-Kubernetes as the default Container Network Interface (CNI) provider, you cannot attach a Linux bridge or bonding device to a host’s default interface because of a change in the host network topology of OVN-Kubernetes. (BZ#1885605)

    • As a workaround, you can use a secondary network interface connected to your host, or switch to the OpenShift SDN default CNI provider.
  • You cannot SSH into a VM when using the networkType: OVNKubernetes option in your install-config.yaml file. (BZ#2165895)
  • You cannot run OpenShift Virtualization on a single-stack IPv6 cluster. (BZ#2193267)
Nodes
  • Uninstalling OpenShift Virtualization does not remove the feature.node.kubevirt.io node labels created by OpenShift Virtualization. You must remove the labels manually. (CNV-22036)
  • In a heterogeneous cluster with different compute nodes, virtual machines that have HyperV reenlightenment enabled cannot be scheduled on nodes that do not support timestamp-counter (TSC) scaling or do not have the appropriate TSC frequency. (BZ#2151169)
Storage
  • In some instances, multiple virtual machines can mount the same PVC in read-write mode, which might result in data corruption. (BZ#1992753)

    • As a workaround, avoid using a single PVC in read-write mode with multiple VMs.
  • If you clone more than 100 VMs using the csi-clone cloning strategy, then the Ceph CSI might not purge the clones. Manually deleting the clones might also fail. (BZ#2055595)

    • As a workaround, you can restart the ceph-mgr to purge the VM clones.
  • If you use Portworx as your storage solution on AWS and create a VM disk image, the created image might be smaller than expected due to the filesystem overhead being accounted for twice. (BZ#2237287)

    • As a workaround, you can manually expand the Persistent Volume Claim (PVC) to increase the available space after the initial provisioning process completes.
  • If you simultaneously clone more than 1000 VMs using the provided DataSources in the openshift-virtualization-os-images namespace, it is possible that not all of the VMs will move to a running state. (BZ#2216038)

    • As a workaround, deploy VMs in smaller batches.
  • Live migration cannot be enabled for a virtual machine instance (VMI) after a hotplug volume has been added and removed. (BZ#2247593)
Virtualization
  • OpenShift Virtualization links a service account token in use by a pod to that specific pod. OpenShift Virtualization implements a service account volume by creating a disk image that contains a token. If you migrate a VM, then the service account volume becomes invalid. (BZ#2037611)

    • As a workaround, use user accounts rather than service accounts because user account tokens are not bound to a specific pod.
  • With the release of the RHSA-2023:3722 advisory, the TLS Extended Master Secret (EMS) extension (RFC 7627) is mandatory for TLS 1.2 connections on FIPS-enabled RHEL 9 systems. This is in accordance with FIPS-140-3 requirements. TLS 1.3 is not affected. (BZ#2157951)

    Legacy OpenSSL clients that do not support EMS or TLS 1.3 now cannot connect to FIPS servers running on RHEL 9. Similarly, RHEL 9 clients in FIPS mode cannot connect to servers that only support TLS 1.2 without EMS. This in practice means that these clients cannot connect to servers on RHEL 6, RHEL 7 and non-RHEL legacy operating systems. This is because the legacy 1.0.x versions of OpenSSL do not support EMS or TLS 1.3. For more information, see TLS Extension "Extended Master Secret" enforced with Red Hat Enterprise Linux 9.2.

    As a workaround, upgrade legacy OpenSSL clients to a version that supports TLS 1.3 and configure OpenShift Virtualization to use TLS 1.3, with the Modern TLS security profile type, for FIPS mode.

Web console
  • If you upgrade OpenShift Container Platform 4.13 to 4.14 without upgrading OpenShift Virtualization, the Virtualization pages of the web console crash. (OCPBUGS-22853)

    You must upgrade the OpenShift Virtualization Operator to 4.14 manually or set your subscription approval strategy to "Automatic."

Chapter 3. Getting started

3.1. Getting started with OpenShift Virtualization

You can explore the features and functionalities of OpenShift Virtualization by installing and configuring a basic environment.

Note

Cluster configuration procedures require cluster-admin privileges.

3.1.1. Planning and installing OpenShift Virtualization

Plan and install OpenShift Virtualization on an OpenShift Container Platform cluster:

Planning and installation resources

3.1.2. Creating and managing virtual machines

Create a virtual machine (VM):

Important

This is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

  • Create a VM from a custom image.

    You can create a VM by importing a custom image from a container registry or a web page, by uploading an image from your local machine, or by cloning a persistent volume claim (PVC).

Connect a VM to a secondary network:

Connect to a VM:

Manage a VM:

3.1.3. Next steps

3.2. Using the virtctl and libguestfs CLI tools

You can manage OpenShift Virtualization resources by using the virtctl command line tool.

You can access and modify virtual machine (VM) disk images by using the libguestfs command line tool. You deploy libguestfs by using the virtctl libguestfs command.

3.2.1. Installing virtctl

To install virtctl on Red Hat Enterprise Linux (RHEL) 9, Linux, Windows, and macOS operating systems, you download and install the virtctl binary file.

To install virtctl on RHEL 8, you enable the OpenShift Virtualization repository and then install the kubevirt-virtctl package.

3.2.1.1. Installing the virtctl binary on RHEL 9, Linux, Windows, or macOS

You can download the virtctl binary for your operating system from the OpenShift Container Platform web console and then install it.

Procedure

  1. Navigate to the Virtualization → Overview page in the web console.
  2. Click the Download virtctl link to download the virtctl binary for your operating system.
  3. Install virtctl:

    • For RHEL 9 and other Linux operating systems:

      1. Decompress the archive file:

        $ tar -xvf <virtctl-version-distribution.arch>.tar.gz
      2. Run the following command to make the virtctl binary executable:

        $ chmod +x <path/virtctl-file-name>
      3. Move the virtctl binary to a directory in your PATH environment variable.

        You can check your path by running the following command:

        $ echo $PATH
      4. Set the KUBECONFIG environment variable:

        $ export KUBECONFIG=/home/<user>/clusters/current/auth/kubeconfig
    • For Windows:

      1. Decompress the archive file.
      2. Navigate to the extracted folder hierarchy and double-click the virtctl executable file to install the client.
      3. Move the virtctl binary to a directory in your PATH environment variable.

        You can check your path by running the following command:

        C:\> path
    • For macOS:

      1. Decompress the archive file.
      2. Move the virtctl binary to a directory in your PATH environment variable.

        You can check your path by running the following command:

        $ echo $PATH
3.2.1.2. Installing the virtctl RPM on RHEL 8

You can install the virtctl RPM package on Red Hat Enterprise Linux (RHEL) 8 by enabling the OpenShift Virtualization repository and installing the kubevirt-virtctl package.

Prerequisites

  • Each host in your cluster must be registered with Red Hat Subscription Manager (RHSM) and have an active OpenShift Container Platform subscription.

Procedure

  1. Enable the OpenShift Virtualization repository by using the subscription-manager CLI tool to run the following command:

    # subscription-manager repos --enable cnv-4.14-for-rhel-8-x86_64-rpms
  2. Install the kubevirt-virtctl package by running the following command:

    # yum install kubevirt-virtctl

3.2.2. virtctl commands

The virtctl client is a command-line utility for managing OpenShift Virtualization resources.

Note

The virtual machine (VM) commands also apply to virtual machine instances (VMIs) unless otherwise specified.

3.2.2.1. virtctl information commands

You use virtctl information commands to view information about the virtctl client.

Table 3.1. Information commands
Command | Description

virtctl version

View the virtctl client and server versions.

virtctl help

View a list of virtctl commands.

virtctl <command> -h|--help

View a list of options for a specific command.

virtctl options

View a list of global command options for any virtctl command.

3.2.2.2. VM information commands

You can use virtctl to view information about virtual machines (VMs) and virtual machine instances (VMIs).

Table 3.2. VM information commands
Command | Description

virtctl fslist <vm_name>

View the file systems available on a guest machine.

virtctl guestosinfo <vm_name>

View information about the operating systems on a guest machine.

virtctl userlist <vm_name>

View the logged-in users on a guest machine.

3.2.2.3. VM management commands

You use virtctl virtual machine (VM) management commands to manage and migrate virtual machines (VMs) and virtual machine instances (VMIs).

Table 3.3. VM management commands
Command | Description

virtctl create vm --name <vm_name>

Create a VirtualMachine manifest.

virtctl start <vm_name>

Start a VM.

virtctl start --paused <vm_name>

Start a VM in a paused state. This option enables you to interrupt the boot process from the VNC console.

virtctl stop <vm_name>

Stop a VM.

virtctl stop <vm_name> --grace-period 0 --force

Force stop a VM. This option might cause data inconsistency or data loss.

virtctl pause vm <vm_name>

Pause a VM. The machine state is kept in memory.

virtctl unpause vm <vm_name>

Unpause a VM.

virtctl migrate <vm_name>

Migrate a VM.

virtctl migrate-cancel <vm_name>

Cancel a VM migration.

virtctl restart <vm_name>

Restart a VM.

virtctl create instancetype --cpu <cpu_value> --memory <memory_value> --name <instancetype_name>

Create an InstanceType manifest for a ClusterInstanceType, or a namespaced InstanceType, to streamline the creation of your InstanceType specifications.

virtctl create preference --name <preference_name>

Create a Preference manifest for a ClusterPreference, or a namespaced Preference, to streamline the creation of your Preference specifications.
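
For example, the manifests that these create commands print can be piped straight to the cluster; the name and sizing below are illustrative:

$ virtctl create instancetype --cpu 2 --memory 4Gi --name medium-example | oc apply -f -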

3.2.2.4. VM connection commands

You use virtctl connection commands to expose ports and connect to virtual machines (VMs) and virtual machine instances (VMIs).

Table 3.4. VM connection commands
Command | Description

virtctl console <vm_name>

Connect to the serial console of a VM.

virtctl expose vm <vm_name> --name <service_name> --type <ClusterIP|NodePort|LoadBalancer> --port <port>

Create a service that forwards a designated port of a VM and expose the service on the specified port of the node.

Example: virtctl expose vm rhel9_vm --name rhel9-ssh --type NodePort --port 22

virtctl scp -i <ssh_key> <file_name> <user_name>@<vm_name>

Copy a file from your machine to a VM. This command uses the private key of an SSH key pair. The VM must be configured with the public key.

virtctl scp -i <ssh_key> <user_name>@<vm_name>:<file_name> .

Copy a file from a VM to your machine. This command uses the private key of an SSH key pair. The VM must be configured with the public key.

virtctl ssh -i <ssh_key> <user_name>@<vm_name>

Open an SSH connection with a VM. This command uses the private key of an SSH key pair. The VM must be configured with the public key.

virtctl vnc <vm_name>

Connect to the VNC console of a VM.

You must have virt-viewer installed.

virtctl vnc --proxy-only=true <vm_name>

Display the port number and connect manually to a VM by using any viewer through the VNC connection.

virtctl vnc --port=<port-number> <vm_name>

Specify a port number to run the proxy on the specified port, if that port is available.

If a port number is not specified, the proxy runs on a random port.

3.2.2.5. VM export commands

Use virtctl vmexport commands to create, download, or delete a volume exported from a VM, VM snapshot, or persistent volume claim (PVC). Certain manifests also contain a header secret, which grants access to the endpoint to import a disk image in a format that OpenShift Virtualization can use.

Table 3.5. VM export commands
Command | Description

virtctl vmexport create <vmexport_name> --vm|snapshot|pvc=<object_name>

Create a VirtualMachineExport custom resource (CR) to export a volume from a VM, VM snapshot, or PVC.

  • --vm: Exports the PVCs of a VM.
  • --snapshot: Exports the PVCs contained in a VirtualMachineSnapshot CR.
  • --pvc: Exports a PVC.
  • Optional: --ttl=1h specifies the time to live. The default duration is 2 hours.

virtctl vmexport delete <vmexport_name>

Delete a VirtualMachineExport CR manually.

virtctl vmexport download <vmexport_name> --output=<output_file> --volume=<volume_name>

Download the volume defined in a VirtualMachineExport CR.

  • --output specifies the file format. Example: disk.img.gz.
  • --volume specifies the volume to download. This flag is optional if only one volume is available.

Optional:

  • --keep-vme retains the VirtualMachineExport CR after download. The default behavior is to delete the VirtualMachineExport CR after download.
  • --insecure enables an insecure HTTP connection.

virtctl vmexport download <vmexport_name> --<vm|snapshot|pvc>=<object_name> --output=<output_file> --volume=<volume_name>

Create a VirtualMachineExport CR and then download the volume defined in the CR.

virtctl vmexport download export --manifest

Retrieve the manifest for an existing export. The manifest does not include the header secret.

virtctl vmexport download export --manifest --vm=example

Create a VM export for a VM example, and retrieve the manifest. The manifest does not include the header secret.

virtctl vmexport download export --manifest --snapshot=example

Create a VM export for a VM snapshot example, and retrieve the manifest. The manifest does not include the header secret.

virtctl vmexport download export --manifest --include-secret

Retrieve the manifest for an existing export. The manifest includes the header secret.

virtctl vmexport download export --manifest --manifest-output-format=json

Retrieve the manifest for an existing export in json format. The manifest does not include the header secret.

virtctl vmexport download export --manifest --include-secret --output=manifest.yaml

Retrieve the manifest for an existing export. The manifest includes the header secret and writes it to the file specified.

3.2.2.6. VM memory dump commands

You can use the virtctl memory-dump command to output a VM memory dump on a PVC. You can specify an existing PVC or use the --create-claim flag to create a new PVC.

Prerequisites

  • The PVC volume mode must be FileSystem.
  • The PVC must be large enough to contain the memory dump.

    The formula for calculating the PVC size is (VMMemorySize + 100Mi) * FileSystemOverhead, where 100Mi is the memory dump overhead.

  • You must enable the hot plug feature gate in the HyperConverged custom resource by running the following command:

    $ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json \
      -p '[{"op": "add", "path": "/spec/featureGates", "value": "HotplugVolumes"}]'

Downloading the memory dump

You must use the virtctl vmexport download command to download the memory dump:

$ virtctl vmexport download <vmexport_name> --vm|pvc=<object_name> \
  --volume=<volume_name> --output=<output_file>
Table 3.6. VM memory dump commands
Command | Description

virtctl memory-dump get <vm_name> --claim-name=<pvc_name>

Save the memory dump of a VM on a PVC. The memory dump status is displayed in the status section of the VirtualMachine resource.

Optional:

  • --create-claim creates a new PVC with the appropriate size. This flag has the following options:

    • --storage-class=<storage_class>: Specify a storage class for the PVC.
    • --access-mode=<access_mode>: Specify ReadWriteOnce or ReadWriteMany.

virtctl memory-dump get <vm_name>

Rerun the virtctl memory-dump command with the same PVC.

This command overwrites the previous memory dump.

virtctl memory-dump remove <vm_name>

Remove a memory dump.

You must remove a memory dump manually if you want to change the target PVC.

This command removes the association between the VM and the PVC, so that the memory dump is not displayed in the status section of the VirtualMachine resource. The PVC is not affected.

3.2.2.7. Hot plug and hot unplug commands

You use virtctl to add or remove resources from running virtual machines (VMs) and virtual machine instances (VMIs).

Table 3.7. Hot plug and hot unplug commands
Command | Description

virtctl addvolume <vm_name> --volume-name=<datavolume_or_PVC> [--persist] [--serial=<label>]

Hot plug a data volume or persistent volume claim (PVC).

Optional:

  • --persist mounts the virtual disk permanently on a VM. This flag does not apply to VMIs.
  • --serial=<label> adds a label to the VM. If you do not specify a label, the default label is the data volume or PVC name.

virtctl removevolume <vm_name> --volume-name=<virtual_disk>

Hot unplug a virtual disk.

virtctl addinterface <vm_name> --network-attachment-definition-name <net_attach_def_name> --name <interface_name>

Hot plug a Linux bridge network interface.

virtctl removeinterface <vm_name> --name <interface_name>

Hot unplug a Linux bridge network interface.

3.2.2.8. Image upload commands

You use the virtctl image-upload commands to upload a VM image to a data volume.

Table 3.8. Image upload commands
Command | Description

virtctl image-upload dv <datavolume_name> --image-path=</path/to/image> --no-create

Upload a VM image to a data volume that already exists.

virtctl image-upload dv <datavolume_name> --size=<datavolume_size> --image-path=</path/to/image>

Upload a VM image to a new data volume of a specified requested size.

3.2.3. Deploying libguestfs by using virtctl

You can use the virtctl guestfs command to deploy an interactive container with libguestfs-tools and a persistent volume claim (PVC) attached to it.

Procedure

  • To deploy a container with libguestfs-tools, mount the PVC, and attach a shell to it, run the following command:

    $ virtctl guestfs -n <namespace> <pvc_name>

    The PVC name is a required argument. If you do not include it, an error message appears.
3.2.3.1. Libguestfs and virtctl guestfs commands

Libguestfs tools help you access and modify virtual machine (VM) disk images. You can use libguestfs tools to view and edit files in a guest, clone and build virtual machines, and format and resize disks.

You can also use the virtctl guestfs command and its sub-commands to modify, inspect, and debug VM disks on a PVC. To see a complete list of possible sub-commands, enter virt- on the command line and press the Tab key. For example:

Command | Description

virt-edit -a /dev/vda /etc/motd

Edit a file interactively in your terminal.

virt-customize -a /dev/vda --ssh-inject root:string:<public key example>

Inject an ssh key into the guest and create a login.

virt-df -a /dev/vda -h

See how much disk space is used by a VM.

virt-customize -a /dev/vda --run-command 'rpm -qa > /rpm-list'

See the full list of all RPMs installed on a guest by creating an output file containing the full list.

virt-cat -a /dev/vda /rpm-list

Display the output file list of all RPMs created using the virt-customize -a /dev/vda --run-command 'rpm -qa > /rpm-list' command in your terminal.

virt-sysprep -a /dev/vda

Seal a virtual machine disk image to be used as a template.

By default, virtctl guestfs creates a session with everything needed to manage a VM disk. However, the command also supports several flag options if you want to customize the behavior:

Flag option | Description

-h or --help

Provides help for guestfs.

-n <namespace> option with a <pvc_name> argument

Uses a PVC from a specific namespace.

If you do not use the -n <namespace> option, your current project is used. To change projects, use oc project <namespace>.

If you do not include a <pvc_name> argument, an error message appears.

--image string

Specifies the libguestfs-tools container image.

You can configure the container to use a custom image by using the --image option.

--kvm

Indicates that kvm is used by the libguestfs-tools container.

By default, virtctl guestfs sets up kvm for the interactive container, which greatly speeds up the libguestfs-tools execution because it uses QEMU.

If a cluster does not have any kvm supporting nodes, you must disable kvm by setting the option --kvm=false.

If not set, the libguestfs-tools pod remains pending because it cannot be scheduled on any node.

--pull-policy string

Sets the pull policy for the libguestfs image.

You can also overwrite the image’s pull policy by setting the pull-policy option.

The command also checks if a PVC is in use by another pod, in which case an error message appears. However, after the libguestfs-tools process starts, the setup cannot prevent a new pod from using the same PVC. You must verify that there are no active virtctl guestfs pods before starting the VM that accesses the same PVC.

Note

The virtctl guestfs command accepts only a single PVC attached to the interactive pod.

3.3. Web console overview

The Virtualization section of the OpenShift Container Platform web console contains the following pages for managing and monitoring your OpenShift Virtualization environment.

Table 3.9. Virtualization pages
Page | Description

Overview page

Manage and monitor the OpenShift Virtualization environment.

Catalog page

Create virtual machines from a catalog of templates.

VirtualMachines page

Create and manage virtual machines.

Templates page

Create and manage templates.

InstanceTypes page

Create and manage virtual machine instance types.

Preferences page

Create and manage virtual machine preferences.

Bootable volumes page

Create and manage DataSources for bootable volumes.

MigrationPolicies page

Create and manage migration policies for workloads.

Table 3.10. Key
Icon | Description

icon pencil

Edit icon

icon link

Link icon

3.3.1. Overview page

The Overview page displays resources, metrics, migration progress, and cluster-level settings.

Example 3.1. Overview page

Element | Description

Download virtctl icon link

Download the virtctl command line tool to manage resources.

Overview tab

Resources, usage, alerts, and status.

Top consumers tab

Top consumers of CPU, memory, and storage resources.

Migrations tab

Status of live migrations.

Settings tab

The Settings tab contains the Cluster tab and the User tab.

Settings → Cluster tab

OpenShift Virtualization version, update status, live migration, templates project, preview features, and load balancer service settings.

Settings → User tab

Authorized SSH keys, user permissions, and welcome information settings.

3.3.1.1. Overview tab

The Overview tab displays resources, usage, alerts, and status.

Example 3.2. Overview tab

Element | Description

Getting started resources card

  • Quick Starts tile: Learn how to create, import, and run virtual machines with step-by-step instructions and tasks.
  • Feature highlights tile: Read the latest information about key virtualization features.
  • Related operators tile: Install Operators such as the Kubernetes NMState Operator or the OpenShift Data Foundation Operator.

Memory tile

Memory usage, with a chart showing the last 7 days' trend.

Storage tile

Storage usage, with a chart showing the last 7 days' trend.

VirtualMachines tile

Number of virtual machines, with a chart showing the last 7 days' trend.

vCPU usage tile

vCPU usage, with a chart showing the last 7 days' trend.

VirtualMachine statuses tile

Number of virtual machines, grouped by status.

Alerts tile

OpenShift Virtualization alerts, grouped by severity.

VirtualMachines per resource chart

Number of virtual machines created from templates and instance types.

3.3.1.2. Top consumers tab

The Top consumers tab displays the top consumers of CPU, memory, and storage.

Example 3.3. Top consumers tab

Element | Description

View virtualization dashboard icon link

Link to Observe → Dashboards, which displays the top consumers for OpenShift Virtualization.

Time period list

Select a time period to filter the results.

Top consumers list

Select the number of top consumers to filter the results.

CPU chart

Virtual machines with the highest CPU usage.

Memory chart

Virtual machines with the highest memory usage.

Memory swap traffic chart

Virtual machines with the highest memory swap traffic.

vCPU wait chart

Virtual machines with the highest vCPU wait periods.

Storage throughput chart

Virtual machines with the highest storage throughput usage.

Storage IOPS chart

Virtual machines with the highest storage input/output operations per second usage.

3.3.1.3. Migrations tab

The Migrations tab displays the status of virtual machine migrations.

Example 3.4. Migrations tab

Element | Description

Time period list

Select a time period to filter virtual machine migrations.

VirtualMachineInstanceMigrations information table

List of virtual machine migrations.

3.3.1.4. Settings tab

The Settings tab displays cluster-wide settings.

Example 3.5. Tabs on the Settings tab

Tab | Description

Cluster tab

OpenShift Virtualization version and update status, live migration, templates project, preview features, and load balancer service settings.

User tab

Authorized SSH key management, user permissions, and welcome information settings.

3.3.1.4.1. Cluster tab

The Cluster tab displays the OpenShift Virtualization version and update status. You configure preview features, live migration, and other settings on the Cluster tab.

Example 3.6. Cluster tab

Element | Description

Installed version

OpenShift Virtualization version.

Update status

OpenShift Virtualization update status.

Channel

OpenShift Virtualization update channel.

Preview features section

Expand this section to manage preview features.

Preview features are disabled by default and must not be enabled in production environments.

Live Migration section

Expand this section to configure live migration settings.

Live Migration → Max. migrations per cluster field

Select the maximum number of live migrations per cluster.

Live Migration → Max. migrations per node field

Select the maximum number of live migrations per node.

Live Migration → Live migration network list

Select a dedicated secondary network for live migration.

Automatic subscription of new RHEL VirtualMachines section

Expand this section to enable automatic subscription for Red Hat Enterprise Linux (RHEL) virtual machines.

To enable this feature, you need cluster administrator permissions, an organization ID, and an activation key.

LoadBalancer section

Expand this section to enable the creation of load balancer services for SSH access to virtual machines.

The cluster must have a load balancer configured.

Template project section

Expand this section to select a project for Red Hat templates. The default project is openshift.

To store Red Hat templates in multiple projects, clone the template and then select a project for the cloned template.

3.3.1.4.2. User tab

You view user permissions and manage authorized SSH keys and welcome information on the User tab.

Example 3.7. User tab

Element | Description

Manage SSH keys section

Expand this section to add authorized SSH keys to a project.

The keys are added automatically to all virtual machines that you subsequently create in the selected project.

Permissions section

Expand this section to view cluster-wide user permissions.

Welcome information section

Expand this section to show or hide the Welcome information dialog.

3.3.2. Catalog page

You create a virtual machine from a template or instance type on the Catalog page.

Example 3.8. Catalog page

Element | Description

Template catalog tab

Displays a catalog of templates for creating a virtual machine.

InstanceTypes tab

Displays bootable volumes and instance types for creating a virtual machine.

3.3.2.1. Template catalog tab

You select a template on the Template catalog tab to create a virtual machine.

Example 3.9. Template catalog tab

Element | Description

Template project list

Select the project in which Red Hat templates are located.

By default, Red Hat templates are stored in the openshift project. You can edit the template project on the Overview page → Settings tab → Cluster tab.

All items | Default templates

Click All items to display all available templates.

Boot source available checkbox

Select the checkbox to display templates with an available boot source.

Operating system checkboxes

Select checkboxes to display templates with selected operating systems.

Workload checkboxes

Select checkboxes to display templates with selected workloads.

Search field

Search templates by keyword.

Template tiles

Click a template tile to view template details and to create a virtual machine.

3.3.2.2. InstanceTypes tab

You create a virtual machine from an instance type on the InstanceTypes tab.

Important

Creating a virtual machine from an instance type is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Element | Description

Volumes project field

Project in which bootable volumes are stored. The default is openshift-virtualization-os-images.

Add volume button

Click to upload a new volume or to use an existing persistent volume claim.

Filter field

Filter boot sources by operating system or resource.

Search field

Search boot sources by name.

Manage columns icon

Select up to 9 columns to display in the table.

Volume table

Select a bootable volume for your virtual machine.

Red Hat provided tab

Select an instance type provided by Red Hat.

User provided tab

Select an instance type that you created on the InstanceType page.

VirtualMachine details pane

Displays the virtual machine settings.

Name field

Optional: Enter the virtual machine name.

SSH key name

Click the edit icon to add a public SSH key.

Start this VirtualMachine after creation checkbox

Clear this checkbox to prevent the virtual machine from starting automatically.

Create VirtualMachine button

Creates a virtual machine.

YAML & CLI button

Displays the YAML configuration file and the virtctl create command to create the virtual machine from the command line.
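
For reference, the command that the YAML & CLI option displays typically resembles the following sketch. The instance type and preference names (u1.medium, rhel.9) are examples, and the exact flags might differ between versions; check virtctl create vm --help on your cluster.

$ virtctl create vm --name example-vm \
    --instancetype u1.medium \
    --preference rhel.9 \
  | oc create -f -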

3.3.3. VirtualMachines page

You create and manage virtual machines on the VirtualMachines page.

Example 3.10. VirtualMachines page

Element | Description

Create button

Create a virtual machine from a template, volume, or YAML configuration file.

Filter field

Filter virtual machines by status, template, operating system, or node.

Search field

Search for virtual machines by name or by label.

Manage columns icon

Select up to 9 columns to display in the table. The Namespace column is only displayed when All Projects is selected from the Projects list.

Virtual machines table

List of virtual machines.

Click the actions menu kebab beside a virtual machine to select Stop, Restart, Pause, Clone, Migrate, Copy SSH command, Edit labels, Edit annotations, or Delete. If you select Stop, Force stop replaces Stop in the action menu. Use Force stop to initiate an immediate shutdown if the operating system becomes unresponsive.

Click a virtual machine to navigate to the VirtualMachine details page.

3.3.3.1. VirtualMachine details page

You configure a virtual machine on the VirtualMachine details page.

Example 3.11. VirtualMachine details page

Element | Description

Actions menu

Click the Actions menu to select Stop, Restart, Pause, Clone, Migrate, Copy SSH command, Edit labels, Edit annotations, or Delete. If you select Stop, Force stop replaces Stop in the action menu. Use Force stop to initiate an immediate shutdown if the operating system becomes unresponsive.

Overview tab

Resource usage, alerts, disks, and devices.

Details tab

Virtual machine details and configurations.

Metrics tab

Memory, CPU, storage, network, and migration metrics.

YAML tab

Virtual machine YAML configuration file.

Configuration tab

Contains the Disks, Network interfaces, Scheduling, Environment, and Scripts tabs.

Configuration → Disks tab

Disks.

Configuration → Network interfaces tab

Network interfaces.

Configuration → Scheduling tab

Scheduling a virtual machine to run on specific nodes.

Configuration → Environment tab

Config map, secret, and service account management.

Configuration → Scripts tab

Cloud-init settings, authorized SSH key and dynamic key injection for Linux virtual machines, Sysprep settings for Windows virtual machines.

Events tab

Virtual machine event stream.

Console tab

Console session management.

Snapshots tab

Snapshot management.

Diagnostics tab

Status conditions and volume snapshot status.

3.3.3.1.1. Overview tab

The Overview tab displays resource usage, alerts, and configuration information.

Example 3.12. Overview tab

Element | Description

Details tile

General virtual machine information.

Utilization tile

CPU, Memory, Storage, and Network transfer charts. By default, Network transfer displays the sum of all networks. To view the breakdown for a specific network, click Breakdown by network.

Hardware devices tile

GPU and host devices.

Alerts tile

OpenShift Virtualization alerts, grouped by severity.

Snapshots tile

Take snapshot icon link and snapshots table.

Network interfaces tile

Network interfaces table.

Disks tile

Disks table.

3.3.3.1.2. Details tab

You view information about the virtual machine and edit labels, annotations, and other metadata on the Details tab.

Example 3.13. Details tab

Element | Description

YAML switch

Set to ON to view your live changes in the YAML configuration file.

Name

Virtual machine name.

Namespace

Virtual machine namespace or project.

Labels

Click the edit icon to edit the labels.

Annotations

Click the edit icon to edit the annotations.

Description

Click the edit icon to enter a description.

Operating system

Operating system name.

CPU|Memory

Click the edit icon to edit the CPU|Memory request. Restart the virtual machine to apply the change.

The number of CPUs is calculated by using the following formula: sockets * threads * cores.

Machine type

Machine type.

Boot mode

Click the edit icon to edit the boot mode. Restart the virtual machine to apply the change.

Start in pause mode

Click the edit icon to enable this setting. Restart the virtual machine to apply the change.

Template

Name of the template used to create the virtual machine.

Created at

Virtual machine creation date.

Owner

Virtual machine owner.

Status

Virtual machine status.

Pod

virt-launcher pod name.

VirtualMachineInstance

Virtual machine instance name.

Boot order

Click the edit icon to select a boot source. Restart the virtual machine to apply the change.

IP address

IP address of the virtual machine.

Hostname

Hostname of the virtual machine. Restart the virtual machine to apply the change.

Time zone

Time zone of the virtual machine.

Node

Node on which the virtual machine is running.

Workload profile

Click the edit icon to edit the workload profile.

SSH access

The following SSH settings apply only to Linux virtual machines.

SSH using virtctl

Click the copy icon to copy the virtctl ssh command to the clipboard. This feature is disabled if the virtual machine does not have an authorized SSH key. A usage example is shown after this table.

SSH service type

Select SSH over LoadBalancer.

After you create a service, the SSH command is displayed. Click the copy icon to copy the command to the clipboard.

GPU devices

Click the edit icon to add a GPU device. Restart the virtual machine to apply the change.

Host devices

Click the edit icon to add a host device. Restart the virtual machine to apply the change.

Headless mode

Click the edit icon to set headless mode to ON and to disable VNC console. Restart the virtual machine to apply the change.

Services

Displays a list of services if QEMU guest agent is installed.

Active users

Displays a list of active users if QEMU guest agent is installed.
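
The copied virtctl ssh command follows the pattern shown below. The user, virtual machine, and project names are placeholders; check virtctl ssh --help for the options available in your version.

$ virtctl ssh cloud-user@example-vm -n example-project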

3.3.3.1.3. Metrics tab

The Metrics tab displays memory, CPU, storage, network, and migration usage charts.

Example 3.14. Metrics tab

Element | Description

Time range list

Select a time range to filter the results.

Virtualization dashboard icon link

Link to the Workloads tab of the current project.

Utilization

Memory and CPU charts.

Storage

Storage total read/write and Storage IOPS total read/write charts.

Network

Network in, Network out, Network bandwidth, and Network interface charts. Select All networks or a specific network from the Network interface list.

Migration

Migration and KV data transfer rate charts.

3.3.3.1.4. YAML tab

You configure the virtual machine by editing the YAML file on the YAML tab.

Example 3.15. YAML tab

Element | Description

Save button

Save changes to the YAML file.

Reload button

Discard your changes and reload the YAML file.

Cancel button

Exit the YAML tab.

Download button

Download the YAML file to your local machine.

3.3.3.1.5. Configuration tab

You configure scheduling, network interfaces, disks, and other options on the Configuration tab.

Example 3.16. Tabs on the Configuration tab

Element | Description

YAML switch

Set to ON to view your live changes in the YAML configuration file.

Disks tab

Disks.

Network interfaces tab

Network interfaces.

Scheduling tab

Scheduling and resource requirements.

Environment tab

Config maps, secrets, and service accounts.

Scripts tab

Cloud-init settings, authorized SSH key for Linux virtual machines, Sysprep answer file for Windows virtual machines.

3.3.3.1.5.1. Disks tab

You manage disks on the Disks tab.

Example 3.17. Disks tab

Setting | Description

Add disk button

Add a disk to the virtual machine.

Filter field

Filter by disk type.

Search field

Search for a disk by name.

Mount Windows drivers disk checkbox

Select to mount a virtio-win container disk as a CD-ROM to install VirtIO drivers.

Disks table

List of virtual machine disks.

Click the actions menu kebab beside a disk to select Edit or Detach.

File systems table

List of virtual machine file systems.

3.3.3.1.5.2. Network interfaces tab

You manage network interfaces on the Network interfaces tab.

Example 3.18. Network interfaces tab

Setting | Description

Add network interface button

Add a network interface to the virtual machine.

Filter field

Filter by interface type.

Search field

Search for a network interface by name or by label.

Network interface table

List of network interfaces.

Click the actions menu kebab beside a network interface to select Edit or Delete.

3.3.3.1.5.3. Scheduling tab

You configure virtual machines to run on specific nodes on the Scheduling tab.

Restart the virtual machine to apply changes.

Example 3.19. Scheduling tab

Setting | Description

Node selector

Click the edit icon to add a label to specify qualifying nodes.

Tolerations

Click the edit icon to add a toleration to specify qualifying nodes.

Affinity rules

Click the edit icon to add an affinity rule.

Descheduler switch

Enable or disable the descheduler. The descheduler evicts a running pod so that the pod can be rescheduled onto a more suitable node.

This field is disabled if the virtual machine cannot be live migrated.

Dedicated resources

Click the edit icon to select Schedule this workload with dedicated resources (guaranteed policy).

Eviction strategy

Click the edit icon to select LiveMigrate as the virtual machine eviction strategy.
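
The node selector and eviction strategy map to fields under spec.template.spec in the VirtualMachine manifest. As a rough sketch with placeholder names, you can also set them from the command line and then restart the virtual machine to apply the change:

$ oc patch vm example-vm --type merge -p \
  '{"spec":{"template":{"spec":{"nodeSelector":{"example.io/example-key":"example-value"},"evictionStrategy":"LiveMigrate"}}}}'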

3.3.3.1.5.4. Environment tab

You manage config maps, secrets, and service accounts on the Environment tab.

Example 3.20. Environment tab

Element | Description

Add Config Map, Secret or Service Account icon link

Click the link and select a config map, secret, or service account from the resource list.

3.3.3.1.5.5. Scripts tab

You manage cloud-init settings, add SSH keys, or configure Sysprep for Windows virtual machines on the Scripts tab. A minimal cloud-init example follows the table.

Restart the virtual machine to apply changes.

Example 3.21. Scripts tab

Element | Description

Cloud-init

Click the edit icon to edit the cloud-init settings.

Authorized SSH key

Click the edit icon to add a public SSH key to a Linux virtual machine.

The key is added as a cloud-init data source at first boot.

Dynamic SSH key injection switch

Set Dynamic SSH key injection to on to enable dynamic public SSH key injection. Then, you can add or revoke the key at runtime.

Dynamic SSH key injection is only supported by Red Hat Enterprise Linux (RHEL) 9. If you manually disable this setting, the virtual machine inherits the SSH key settings of the image from which it was created.

Sysprep

Click the edit icon to upload an Autounattend.xml or Unattend.xml answer file to automate Windows virtual machine setup.
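
The cloud-init settings are stored as a cloud-init volume in the VirtualMachine spec. The following fragment is a minimal sketch of the cloudInitNoCloud userData format; the user name, password placeholder, and key are examples only, and the volume must be matched by a disk entry under spec.template.spec.domain.devices.disks.

volumes:
  - name: cloudinitdisk
    cloudInitNoCloud:
      userData: |
        #cloud-config
        user: cloud-user
        password: <password>
        chpasswd: { expire: False }
        ssh_authorized_keys:
          - ssh-ed25519 AAAA...example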

3.3.3.1.6. Events tab

The Events tab displays a list of virtual machine events.

3.3.3.1.7. Console tab

You can open a console session to the virtual machine on the Console tab.

Example 3.22. Console tab

Element | Description

Guest login credentials section

Expand Guest login credentials to view the credentials created with cloud-init. Click the copy icon to copy the credentials to the clipboard.

Console list

Select VNC console or Serial console.

The Desktop viewer option is displayed for Windows virtual machines. You must install an RDP client on a machine on the same network.

Send key list

Select a key-stroke combination to send to the console.

Disconnect button

Disconnect the console connection.

You must manually disconnect the console connection if you open a new console session. Otherwise, the first console session continues to run in the background.

Paste button

Paste a string from your clipboard to the VNC console.

3.3.3.1.8. Snapshots tab

You create snapshots and restore virtual machines from snapshots on the Snapshots tab. A minimal snapshot manifest is shown after the table.

Example 3.23. Snapshots tab

Element | Description

Take snapshot button

Create a snapshot.

Filter field

Filter snapshots by status.

Search field

Search for snapshots by name or by label.

Snapshot table

List of snapshots.

Click the snapshot name to edit the labels or annotations.

Click the actions menu kebab beside a snapshot to select Restore or Delete.
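
Taking a snapshot creates a VirtualMachineSnapshot resource similar to the following minimal sketch. The API version has changed across releases, so verify the snapshot.kubevirt.io version and fields against the CRDs on your cluster before applying.

apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineSnapshot
metadata:
  name: example-vm-snapshot
spec:
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: example-vm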

3.3.3.1.9. Diagnostics tab

You view the status conditions and volume snapshot status on the Diagnostics tab.

Example 3.24. Diagnostics tab

Element | Description

Status conditions table

Display a list of conditions that are reported for the virtual machine.

Filter field

Filter status conditions by category and condition.

Search field

Search status conditions by reason.

Manage columns icon

Select up to 9 columns to display in the table.

Volume snapshot status table

List of volumes, their snapshot enablement status, and reason.

3.3.4. Templates page

You create, edit, and clone virtual machine templates on the VirtualMachine Templates page.

Note

You cannot edit a Red Hat template. However, you can clone a Red Hat template and edit it to create a custom template.

Example 3.25. VirtualMachine Templates page

Element | Description

Create Template button

Create a template by editing a YAML configuration file.

Filter field

Filter templates by type, boot source, template provider, or operating system.

Search field

Search for templates by name or by label.

Manage columns icon

Select up to 9 columns to display in the table. The Namespace column is only displayed when All Projects is selected from the Projects list.

Virtual machine templates table

List of virtual machine templates.

Click the actions menu kebab beside a template to select Edit, Clone, Edit boot source, Edit boot source reference, Edit labels, Edit annotations, or Delete. You cannot edit a Red Hat provided template. You can clone the Red Hat template and then edit the custom template.

3.3.4.1. Template details page

You view template settings and edit custom templates on the Template details page.

Example 3.26. Template details page

Element | Description

YAML switch

Set to ON to view your live changes in the YAML configuration file.

Actions menu

Click the Actions menu to select Edit, Clone, Edit boot source, Edit boot source reference, Edit labels, Edit annotations, or Delete.

Details tab

Template settings and configurations.

YAML tab

YAML configuration file.

Scheduling tab

Scheduling configurations.

Network interfaces tab

Network interface management.

Disks tab

Disk management.

Scripts tab

Cloud-init, SSH key, and Sysprep management.

Parameters tab

Name and cloud user password management.

3.3.4.1.1. Details tab

You configure a custom template on the Details tab.

Example 3.27. Details tab

Element | Description

Name

Template name.

Namespace

Template namespace.

Labels

Click the edit icon to edit the labels.

Annotations

Click the edit icon to edit the annotations.

Display name

Click the edit icon to edit the display name.

Description

Click the edit icon to enter a description.

Operating system

Operating system name.

CPU|Memory

Click the edit icon to edit the CPU|Memory request.

The number of CPUs is calculated by using the following formula: sockets * threads * cores.

Machine type

Template machine type.

Boot mode

Click the edit icon to edit the boot mode.

Base template

Name of the base template used to create this template.

Created at

Template creation date.

Owner

Template owner.

Boot order

Template boot order.

Boot source

Boot source availability.

Provider

Template provider.

Support

Template support level.

GPU devices

Click the edit icon to add a GPU device.

Host devices

Click the edit icon to add a host device.

Headless mode

Click the edit icon to set headless mode to ON and to disable VNC console.

3.3.4.1.2. YAML tab

You configure a custom template by editing the YAML file on the YAML tab.

Example 3.28. YAML tab

Element | Description

Save button

Save changes to the YAML file.

Reload button

Discard your changes and reload the YAML file.

Cancel button

Exit the YAML tab.

Download button

Download the YAML file to your local machine.

3.3.4.1.3. Scheduling tab

You configure scheduling on the Scheduling tab.

Example 3.29. Scheduling tab

Setting | Description

Node selector

Click the edit icon to add a label to specify qualifying nodes.

Tolerations

Click the edit icon to add a toleration to specify qualifying nodes.

Affinity rules

Click the edit icon to add an affinity rule.

Descheduler switch

Enable or disable the descheduler. The descheduler evicts a running pod so that the pod can be rescheduled onto a more suitable node.

Dedicated resources

Click the edit icon to select Schedule this workload with dedicated resources (guaranteed policy).

Eviction strategy

Click the edit icon to select LiveMigrate as the virtual machine eviction strategy.

3.3.4.1.4. Network interfaces tab

You manage network interfaces on the Network interfaces tab.

Example 3.30. Network interfaces tab

Setting | Description

Add network interface button

Add a network interface to the template.

Filter field

Filter by interface type.

Search field

Search for a network interface by name or by label.

Network interface table

List of network interfaces.

Click the actions menu kebab beside a network interface to select Edit or Delete.

3.3.4.1.5. Disks tab

You manage disks on the Disks tab.

Example 3.31. Disks tab

Setting | Description

Add disk button

Add a disk to the template.

Filter field

Filter by disk type.

Search field

Search for a disk by name.

Disks table

List of template disks.

Click the actions menu kebab beside a disk to select Edit or Detach.

3.3.4.1.6. Scripts tab

You manage the cloud-init settings, SSH keys, and Sysprep answer files on the Scripts tab.

Example 3.32. Scripts tab

Element | Description

Cloud-init

Click the edit icon to edit the cloud-init settings.

Authorized SSH key

Click the edit icon to create a new secret or to attach an existing secret to a Linux virtual machine.

Sysprep

Click the edit icon to upload an Autounattend.xml or Unattend.xml answer file to automate Windows virtual machine setup.

3.3.4.1.7. Parameters tab

You edit selected template settings on the Parameters tab.

Example 3.33. Parameters tab

Element | Description

NAME

Set the name parameters for a virtual machine created from this template.

CLOUD_USER_PASSWORD

Set the cloud user password parameters for a virtual machine created from this template.
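
Because these are standard template parameters, you can also create a virtual machine from the command line by processing the template. The template name below (rhel9-server-small) is an example; substitute a template that exists in your cluster.

$ oc process -n openshift rhel9-server-small \
    -p NAME=example-vm \
    -p CLOUD_USER_PASSWORD='<password>' \
  | oc create -f -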

3.3.5. InstanceTypes page

You view and manage virtual machine instance types on the InstanceTypes page.

Example 3.34. VirtualMachineClusterInstancetypes page

Element | Description

Create button

Create an instance type by editing a YAML configuration file.

Search field

Search for an instance type by name or by label.

Manage columns icon

Select up to 9 columns to display in the table. The Namespace column is only displayed when All Projects is selected from the Projects list.

Instance types table

List of instance types.

Click the actions menu kebab beside an instance type to select Clone or Delete.

Click an instance type to view the VirtualMachineClusterInstancetypes details page.

3.3.5.1. VirtualMachineClusterInstancetypes details page

You configure an instance type on the VirtualMachineClusterInstancetypes details page.

Example 3.35. VirtualMachineClusterInstancetypes details page

Element | Description

Details tab

Configure an instance type by editing a form.

YAML tab

Configure an instance type by editing a YAML configuration file.

Actions menu

Select Edit labels, Edit annotations, Edit VirtualMachineClusterInstancetype, or Delete VirtualMachineClusterInstancetype.

3.3.5.1.1. Details tab

You configure an instance type by editing a form on the Details tab.

Example 3.36. Details tab

Element | Description

Name

VirtualMachineClusterInstancetype name.

Labels

Click the edit icon to edit the labels.

Annotations

Click the edit icon to edit the annotations.

Created at

Instance type creation date.

Owner

Instance type owner.

3.3.5.1.2. YAML tab

You configure an instance type by editing the YAML file on the YAML tab.

Example 3.37. YAML tab

Element | Description

Save button

Save changes to the YAML file.

Reload button

Discard your changes and reload the YAML file.

Cancel button

Exit the YAML tab.

Download button

Download the YAML file to your local machine.

3.3.6. Preferences page

You view and manage virtual machine preferences on the Preferences page.

Example 3.38. VirtualMachineClusterPreferences page

Element | Description

Create button

Create a preference by editing a YAML configuration file.

Search field

Search for a preference by name or by label.

Manage columns icon

Select up to 9 columns to display in the table. The Namespace column is only displayed when All Projects is selected from the Projects list.

Preferences table

List of preferences.

Click the actions menu kebab beside a preference to select Clone or Delete.

Click a preference to view the VirtualMachineClusterPreference details page.

3.3.6.1. VirtualMachineClusterPreference details page

You configure a preference on the VirtualMachineClusterPreference details page.

Example 3.39. VirtualMachineClusterPreference details page

Element | Description

Details tab

Configure a preference by editing a form.

YAML tab

Configure a preference by editing a YAML configuration file.

Actions menu

Select Edit labels, Edit annotations, Edit VirtualMachineClusterPreference, or Delete VirtualMachineClusterPreference.

3.3.6.1.1. Details tab

You configure a preference by editing a form on the Details tab.

Example 3.40. Details tab

Element | Description

Name

VirtualMachineClusterPreference name.

Labels

Click the edit icon to edit the labels.

Annotations

Click the edit icon to edit the annotations.

Created at

Preference creation date.

Owner

Preference owner.

3.3.6.1.2. YAML tab

You configure a preference by editing the YAML file on the YAML tab.

Example 3.41. YAML tab

Element | Description

Save button

Save changes to the YAML file.

Reload button

Discard your changes and reload the YAML file.

Cancel button

Exit the YAML tab.

Download button

Download the YAML file to your local machine.

3.3.7. Bootable volumes page

You view and manage available bootable volumes on the Bootable volumes page.

Example 3.42. Bootable volumes page

Element | Description

Add volume button

Add a bootable volume by completing a form or by editing a YAML configuration file.

Filter field

Filter bootable volumes by operating system and resource type.

Search field

Search for bootable volumes by name or by label.

Manage columns icon

Select up to 9 columns to display in the table. The Namespace column is only displayed when All Projects is selected from the Projects list.

Bootable volumes table

List of bootable volumes.

Click the actions menu kebab beside a bootable volume to select Edit, Remove from list, or Delete.

Click a bootable volume to view the PersistentVolumeClaim details page.

3.3.7.1. PersistentVolumeClaim details page

You configure the persistent volume claim (PVC) of a bootable volume on the PersistentVolumeClaim details page.

Example 3.43. PersistentVolumeClaim details page

Element | Description

Details tab

Configure the PVC by editing a form.

YAML tab

Configure the PVC by editing a YAML configuration file.

Events tab

The Events tab displays a list of PVC events.

VolumeSnapshots tab

The VolumeSnapshots tab displays a list of volume snapshots.

Actions menu

Select Expand PVC, Create snapshot, Clone PVC, Edit labels, Edit annotations, Edit PersistentVolumeClaim, or Delete PersistentVolumeClaim.

3.3.7.1.1. Details tab

You configure the persistent volume claim (PVC) of the bootable volume by editing a form on the Details tab. An example manifest follows the table.

Example 3.44. Details tab

Element | Description

Name

PVC name.

Namespace

PVC namespace.

Labels

Click the edit icon to edit the labels.

Annotations

Click the edit icon to edit the annotations.

Created at

PVC creation date.

Owner

PVC owner.

Status

Status of the PVC, for example, Bound.

Requested capacity

Requested capacity of the PVC.

Capacity

Capacity of the PVC.

Used

Used space of the PVC.

Access modes

PVC access modes.

Volume mode

PVC volume mode.

StorageClasses

PVC storage class.

PersistentVolumes

Persistent volume associated with the PVC.

Conditions table

Displays the status of the PVC.
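
For reference, a bootable volume PVC that uses the recommended access and volume modes looks similar to the following minimal sketch; the name, size, and storage class are examples.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-boot-volume
  namespace: openshift-virtualization-os-images
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Block
  storageClassName: <storage_class_name>
  resources:
    requests:
      storage: 30Gi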

3.3.7.1.2. YAML tab

You configure the persistent volume claim of the bootable volume by editing the YAML file on the YAML tab.

Example 3.45. YAML tab

Element | Description

Save button

Save changes to the YAML file.

Reload button

Discard your changes and reload the YAML file.

Cancel button

Exit the YAML tab.

Download button

Download the YAML file to your local machine.

3.3.8. MigrationPolicies page

You manage migration policies for workloads on the MigrationPolicies page.

Example 3.46. MigrationPolicies page

Element | Description

Create MigrationPolicy

Create a migration policy by entering configurations and labels in a form or by editing a YAML file.

Search field

Search for a migration policy by name or by label.

Manage columns icon

Select up to 9 columns to display in the table. The Namespace column is only displayed when All Projects is selected from the Projects list.

MigrationPolicies table

List of migration policies.

Click the actions menu kebab beside a migration policy to select Edit or Delete.

Click a migration policy to view the MigrationPolicy details page.

3.3.8.1. MigrationPolicy details page

You configure a migration policy on the MigrationPolicy details page.

Example 3.47. MigrationPolicy details page

Element | Description

Details tab

Configure a migration policy by editing a form.

YAML tab

Configure a migration policy by editing a YAML configuration file.

Actions menu

Select Edit or Delete.

3.3.8.1.1. Details tab

You configure a migration policy on the Details tab. An example manifest follows the table.

Example 3.48. Details tab

Element | Description

Name

Migration policy name.

Description

Migration policy description.

Configurations

Click the edit icon to update the migration policy configurations.

Bandwidth per migration

Bandwidth request per migration. For unlimited bandwidth, set the value to 0.

Auto converge

When auto converge is enabled, the performance and availability of the virtual machines might be reduced to ensure that migration is successful.

Post-copy

Post-copy policy.

Completion timeout

Completion timeout value in seconds.

Project labels

Click Edit to edit the project labels.

VirtualMachine labels

Click Edit to edit the virtual machine labels.
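
The settings in this table correspond to fields in the MigrationPolicy custom resource. The following manifest is a minimal sketch; the field names are assumed from the migrations.kubevirt.io API and the selector labels are placeholders, so verify them against the CRD on your cluster.

apiVersion: migrations.kubevirt.io/v1alpha1
kind: MigrationPolicy
metadata:
  name: example-policy
spec:
  allowAutoConverge: true
  bandwidthPerMigration: 64Mi
  completionTimeoutPerGiB: 800
  allowPostCopy: false
  selectors:
    namespaceSelector:
      app-type: example            # placeholder project label
    virtualMachineInstanceSelector:
      workload-type: example       # placeholder virtual machine label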

3.3.8.1.2. YAML tab

You configure the migration policy by editing the YAML file on the YAML tab.

Example 3.49. YAML tab

Element | Description

Save button

Save changes to the YAML file.

Reload button

Discard your changes and reload the YAML file.

Cancel button

Exit the YAML tab.

Download button

Download the YAML file to your local machine.

Chapter 4. Installing

4.1. Preparing your cluster for OpenShift Virtualization

Review this section before you install OpenShift Virtualization to ensure that your cluster meets the requirements.

Important
Installation method considerations
You can use any installation method, including user-provisioned, installer-provisioned, or assisted installer, to deploy OpenShift Container Platform. However, the installation method and the cluster topology might affect OpenShift Virtualization functionality, such as snapshots or live migration.
Red Hat OpenShift Data Foundation
If you deploy OpenShift Virtualization with Red Hat OpenShift Data Foundation, you must create a dedicated storage class for Windows virtual machine disks. See Optimizing ODF PersistentVolumes for Windows VMs for details.
IPv6
You cannot run OpenShift Virtualization on a single-stack IPv6 cluster.

FIPS mode

If you install your cluster in FIPS mode, no additional setup is required for OpenShift Virtualization.

4.1.1. Supported platforms

You can use the following platforms with OpenShift Virtualization:

  • IBM Cloud® Bare Metal Servers. See Deploy OpenShift Virtualization on IBM Cloud® Bare Metal nodes.

    Important

    Installing OpenShift Virtualization on IBM Cloud® Bare Metal Servers is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

    For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Bare metal instances or servers offered by other cloud providers are not supported.

4.1.1.1. OpenShift Virtualization on AWS bare metal

You can run OpenShift Virtualization on an Amazon Web Services (AWS) bare-metal OpenShift Container Platform cluster.

Note

OpenShift Virtualization is also supported on Red Hat OpenShift Service on AWS (ROSA) Classic clusters, which have the same configuration requirements as AWS bare-metal clusters.

Before you set up your cluster, review the following summary of supported features and limitations:

Installing
  • You can install the cluster by using installer-provisioned infrastructure, ensuring that you specify bare-metal instance types for the worker nodes by editing the install-config.yaml file. For example, you can use the c5n.metal type value for a machine based on x86_64 architecture.

    For more information, see the OpenShift Container Platform documentation about installing on AWS.

Accessing virtual machines (VMs)
  • There is no change to how you access VMs by using the virtctl CLI tool or the OpenShift Container Platform web console.
  • You can expose VMs by using a NodePort or LoadBalancer service.

    • The load balancer approach is preferable because OpenShift Container Platform automatically creates the load balancer in AWS and manages its lifecycle. A security group is also created for the load balancer, and you can use annotations to attach existing security groups. When you remove the service, OpenShift Container Platform removes the load balancer and its associated resources.
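
      For example, a LoadBalancer service for SSH access can be created with virtctl. This is a sketch only; confirm the available flags with virtctl expose --help.

      $ virtctl expose vm example-vm --name example-vm-ssh --type LoadBalancer --port 22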
Networking
  • You cannot use Single Root I/O Virtualization (SR-IOV) or bridge Container Network Interface (CNI) networks, including virtual LAN (VLAN). If your application requires a flat layer 2 network or control over the IP pool, consider using OVN-Kubernetes secondary overlay networks.
Storage
  • You can use any storage solution that is certified by the storage vendor to work with the underlying platform.

    Important

    AWS bare-metal and ROSA clusters might have different supported storage solutions. Ensure that you confirm support with your storage vendor.

  • Using Amazon Elastic File System (EFS) or Amazon Elastic Block Store (EBS) with OpenShift Virtualization might cause performance and functionality limitations. Consider using CSI storage, which supports ReadWriteMany (RWX), cloning, and snapshots to enable live migration, fast VM creation, and VM snapshots capabilities.
Hosted control planes (HCPs)
  • HCPs for OpenShift Virtualization are not currently supported on AWS infrastructure.

4.1.2. Hardware and operating system requirements

Review the following hardware and operating system requirements for OpenShift Virtualization.

4.1.2.1. CPU requirements
  • Supported by Red Hat Enterprise Linux (RHEL) 9.

    See Red Hat Ecosystem Catalog for supported CPUs.

    Note

    If your worker nodes have different CPUs, live migration failures might occur because different CPUs have different capabilities. You can mitigate this issue by ensuring that your worker nodes have CPUs with the appropriate capacity and by configuring node affinity rules for your virtual machines.

    See Configuring a required node affinity rule for details.

  • Support for AMD and Intel 64-bit architectures (x86-64-v2).
  • Support for Intel 64 or AMD64 CPU extensions.
  • Intel VT or AMD-V hardware virtualization extensions enabled.
  • NX (no execute) flag enabled.
4.1.2.2. Operating system requirements
  • Red Hat Enterprise Linux CoreOS (RHCOS) installed on worker nodes.

    See About RHCOS for details.

    Note

    RHEL worker nodes are not supported.

4.1.2.3. Storage requirements
  • Supported by OpenShift Container Platform. See Optimizing storage.
  • You must create a default OpenShift Virtualization or OpenShift Container Platform storage class. A default storage class addresses the unique storage needs of VM workloads and offers optimized performance, reliability, and user experience. If both OpenShift Virtualization and OpenShift Container Platform default storage classes exist, the OpenShift Virtualization class takes precedence when creating VM disks.
Note

You must specify a default storage class for the cluster. See Managing the default storage class. If the default storage class provisioner supports the ReadWriteMany (RWX) access mode, use the RWX mode for the associated persistent volumes for optimal performance.

If the storage provisioner supports snapshots, there must be a VolumeSnapshotClass object associated with the default storage class.
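
A default storage class is marked by an annotation on the StorageClass object. The following command is a sketch that sets the standard Kubernetes default annotation on a placeholder storage class name:

$ oc patch storageclass <storage_class_name> \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'

To designate a default storage class specifically for virtualization workloads, the storageclass.kubevirt.io/is-default-virt-class annotation can be set in the same way; this annotation name is an assumption to verify against your cluster.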

4.1.2.3.1. About volume and access modes for virtual machine disks

If you use the storage API with known storage providers, the volume and access modes are selected automatically. However, if you use a storage class that does not have a storage profile, you must configure the volume and access mode.

For best results, use the ReadWriteMany (RWX) access mode and the Block volume mode. This is important for the following reasons:

  • ReadWriteMany (RWX) access mode is required for live migration.
  • The Block volume mode performs significantly better than the Filesystem volume mode. This is because the Filesystem volume mode uses more storage layers, including a file system layer and a disk image file. These layers are not necessary for VM disk storage.

    For example, if you use Red Hat OpenShift Data Foundation, Ceph RBD volumes are preferable to CephFS volumes.

Important

You cannot live migrate virtual machines with the following configurations:

  • Storage volume with ReadWriteOnce (RWO) access mode
  • Passthrough features such as GPUs

Do not set the evictionStrategy field to LiveMigrate for these virtual machines.

4.1.3. Live migration requirements

  • Shared storage with ReadWriteMany (RWX) access mode.
  • Sufficient RAM and network bandwidth.

    Note

    You must ensure that there is enough memory request capacity in the cluster to support node drains that result in live migrations. You can determine the approximate required spare memory by using the following calculation:

    Product of (Maximum number of nodes that can drain in parallel) and (Highest total VM memory request allocations across nodes)

    The default number of migrations that can run in parallel in the cluster is 5.

  • If the virtual machine uses a host model CPU, the nodes must support the virtual machine’s host model CPU.
  • A dedicated Multus network for live migration is highly recommended. A dedicated network minimizes the effects of network saturation on tenant workloads during migration.

4.1.4. Physical resource overhead requirements

OpenShift Virtualization is an add-on to OpenShift Container Platform and imposes additional overhead that you must account for when planning a cluster. Each cluster machine must accommodate the following overhead requirements in addition to the OpenShift Container Platform requirements. Oversubscribing the physical resources in a cluster can affect performance.

Important

The numbers noted in this documentation are based on Red Hat’s test methodology and setup. These numbers can vary based on your own individual setup and environments.

Memory overhead

Calculate the memory overhead values for OpenShift Virtualization by using the equations below.

Cluster memory overhead

Memory overhead per infrastructure node ≈ 150 MiB

Memory overhead per worker node ≈ 360 MiB

Additionally, OpenShift Virtualization environment resources require a total of 2179 MiB of RAM that is spread across all infrastructure nodes.

Virtual machine memory overhead

Memory overhead per virtual machine ≈ (1.002 × requested memory)
              + 218 MiB                                   1
              + 8 MiB × (number of vCPUs)                 2
              + 16 MiB × (number of graphics devices)     3
              + (additional memory overhead)              4

1
Required for the processes that run in the virt-launcher pod.
2
Number of virtual CPUs requested by the virtual machine.
3
Number of virtual graphics cards requested by the virtual machine.
4
Additional memory overhead:
  • If your environment includes a Single Root I/O Virtualization (SR-IOV) network device or a Graphics Processing Unit (GPU), allocate 1 GiB additional memory overhead for each device.
  • If Secure Encrypted Virtualization (SEV) is enabled, add 256 MiB.
  • If Trusted Platform Module (TPM) is enabled, add 53 MiB.
CPU overhead

Calculate the cluster processor overhead requirements for OpenShift Virtualization by using the equation below. The CPU overhead per virtual machine depends on your individual setup.

Cluster CPU overhead

CPU overhead for infrastructure nodes ≈ 4 cores

OpenShift Virtualization increases the overall utilization of cluster level services such as logging, routing, and monitoring. To account for this workload, ensure that nodes that host infrastructure components have capacity allocated for 4 additional cores (4000 millicores) distributed across those nodes.

CPU overhead for worker nodes ≈ 2 cores + CPU overhead per virtual machine

Each worker node that hosts virtual machines must have capacity for 2 additional cores (2000 millicores) for OpenShift Virtualization management workloads in addition to the CPUs required for virtual machine workloads.

Virtual machine CPU overhead

If dedicated CPUs are requested, there is a 1:1 impact on the cluster CPU overhead requirement. Otherwise, there are no specific rules about how many CPUs a virtual machine requires.

Storage overhead

Use the guidelines below to estimate storage overhead requirements for your OpenShift Virtualization environment.

Cluster storage overhead

Aggregated storage overhead per node ≈ 10 GiB

10 GiB is the estimated on-disk storage impact for each node in the cluster when you install OpenShift Virtualization.

Virtual machine storage overhead

Storage overhead per virtual machine depends on specific requests for resource allocation within the virtual machine. The request could be for ephemeral storage on the node or storage resources hosted elsewhere in the cluster. OpenShift Virtualization does not currently allocate any additional ephemeral storage for the running container itself.

Example

As a cluster administrator, if you plan to host 10 virtual machines in the cluster, each with 1 GiB of RAM and 2 vCPUs, the memory impact across the cluster is 11.68 GiB. The estimated on-disk storage impact for each node in the cluster is 10 GiB and the CPU impact for worker nodes that host virtual machine workloads is a minimum of 2 cores.

4.1.5. Single-node OpenShift differences

You can install OpenShift Virtualization on single-node OpenShift.

However, be aware that single-node OpenShift does not support the following features:

  • High availability
  • Pod disruption
  • Live migration
  • Virtual machines or templates that have an eviction strategy configured

4.1.6. Object maximums

You must consider the following tested object maximums when planning your cluster:

4.1.7. Cluster high-availability options

You can configure one of the following high-availability (HA) options for your cluster:

  • Automatic high availability for installer-provisioned infrastructure (IPI) is available by deploying machine health checks.

    Note

    In OpenShift Container Platform clusters installed using installer-provisioned infrastructure and with a properly configured MachineHealthCheck resource, if a node fails the machine health check and becomes unavailable to the cluster, it is recycled. What happens next with VMs that ran on the failed node depends on a series of conditions. See Run strategies for more detailed information about the potential outcomes and how run strategies affect those outcomes.

  • Automatic high availability for both IPI and non-IPI is available by using the Node Health Check Operator on the OpenShift Container Platform cluster to deploy the NodeHealthCheck controller. The controller identifies unhealthy nodes and uses a remediation provider, such as the Self Node Remediation Operator or Fence Agents Remediation Operator, to remediate the unhealthy nodes. For more information on remediation, fencing, and maintaining nodes, see the Workload Availability for Red Hat OpenShift documentation.
  • High availability for any platform is available by using either a monitoring system or a qualified human to monitor node availability. When a node is lost, shut it down and run oc delete node <lost_node>.

    Note

    Without an external monitoring system or a qualified human monitoring node health, virtual machines lose high availability.

4.2. Installing OpenShift Virtualization

Install OpenShift Virtualization to add virtualization functionality to your OpenShift Container Platform cluster.

Important

If you install OpenShift Virtualization in a restricted environment with no internet connectivity, you must configure Operator Lifecycle Manager (OLM) for restricted networks.

If you have limited internet connectivity, you can configure proxy support in OLM to access the OperatorHub.

4.2.1. Installing the OpenShift Virtualization Operator

Install the OpenShift Virtualization Operator by using the OpenShift Container Platform web console or the command line.

4.2.1.1. Installing the OpenShift Virtualization Operator by using the web console

You can deploy the OpenShift Virtualization Operator by using the OpenShift Container Platform web console.

Prerequisites

  • Install OpenShift Container Platform 4.14 on your cluster.
  • Log in to the OpenShift Container Platform web console as a user with cluster-admin permissions.

Procedure

  1. From the Administrator perspective, click Operators → OperatorHub.
  2. In the Filter by keyword field, type Virtualization.
  3. Select the OpenShift Virtualization Operator tile with the Red Hat source label.
  4. Read the information about the Operator and click Install.
  5. On the Install Operator page:

    1. Select stable from the list of available Update Channel options. This ensures that you install the version of OpenShift Virtualization that is compatible with your OpenShift Container Platform version.
    2. For Installed Namespace, ensure that the Operator recommended namespace option is selected. This installs the Operator in the mandatory openshift-cnv namespace, which is automatically created if it does not exist.

      Warning

      Attempting to install the OpenShift Virtualization Operator in a namespace other than openshift-cnv causes the installation to fail.

    3. For Approval Strategy, it is highly recommended that you select Automatic, which is the default value, so that OpenShift Virtualization automatically updates when a new version is available in the stable update channel.

      While it is possible to select the Manual approval strategy, this is inadvisable because of the high risk that it presents to the supportability and functionality of your cluster. Only select Manual if you fully understand these risks and cannot use Automatic.

      Warning

      Because OpenShift Virtualization is only supported when used with the corresponding OpenShift Container Platform version, missing OpenShift Virtualization updates can cause your cluster to become unsupported.

  6. Click Install to make the Operator available to the openshift-cnv namespace.
  7. When the Operator installs successfully, click Create HyperConverged.
  8. Optional: Configure Infra and Workloads node placement options for OpenShift Virtualization components.
  9. Click Create to launch OpenShift Virtualization.

Verification

  • Navigate to the Workloads → Pods page and monitor the OpenShift Virtualization pods until they are all Running. After all the pods display the Running state, you can use OpenShift Virtualization.
4.2.1.2. Installing the OpenShift Virtualization Operator by using the command line

Subscribe to the OpenShift Virtualization catalog and install the OpenShift Virtualization Operator by applying manifests to your cluster.

4.2.1.2.1. Subscribing to the OpenShift Virtualization catalog by using the CLI

Before you install OpenShift Virtualization, you must subscribe to the OpenShift Virtualization catalog. Subscribing gives the openshift-cnv namespace access to the OpenShift Virtualization Operators.

To subscribe, configure Namespace, OperatorGroup, and Subscription objects by applying a single manifest to your cluster.

Prerequisites

  • Install OpenShift Container Platform 4.14 on your cluster.
  • Install the OpenShift CLI (oc).
  • Log in as a user with cluster-admin privileges.

Procedure

  1. Create a YAML file that contains the following manifest:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-cnv
    ---
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: kubevirt-hyperconverged-group
      namespace: openshift-cnv
    spec:
      targetNamespaces:
        - openshift-cnv
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: hco-operatorhub
      namespace: openshift-cnv
    spec:
      source: redhat-operators
      sourceNamespace: openshift-marketplace
      name: kubevirt-hyperconverged
      startingCSV: kubevirt-hyperconverged-operator.v4.14.8
      channel: "stable" 1
    1
    Using the stable channel ensures that you install the version of OpenShift Virtualization that is compatible with your OpenShift Container Platform version.
  2. Create the required Namespace, OperatorGroup, and Subscription objects for OpenShift Virtualization by running the following command:

    $ oc apply -f <file_name>.yaml
Note

You can configure certificate rotation parameters in the YAML file.

4.2.1.2.2. Deploying the OpenShift Virtualization Operator by using the CLI

You can deploy the OpenShift Virtualization Operator by using the oc CLI.

Prerequisites

  • Subscribe to the OpenShift Virtualization catalog in the openshift-cnv namespace.
  • Log in as a user with cluster-admin privileges.

Procedure

  1. Create a YAML file that contains the following manifest:

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
  2. Deploy the OpenShift Virtualization Operator by running the following command:

    $ oc apply -f <file_name>.yaml

Verification

  • Ensure that OpenShift Virtualization deployed successfully by watching the PHASE of the cluster service version (CSV) in the openshift-cnv namespace. Run the following command:

    $ watch oc get csv -n openshift-cnv

    The following output displays if deployment was successful:

    Example output

    NAME                                      DISPLAY                    VERSION   REPLACES   PHASE
    kubevirt-hyperconverged-operator.v4.14.8   OpenShift Virtualization   4.14.8                Succeeded

4.2.2. Next steps

  • The hostpath provisioner is a local storage provisioner designed for OpenShift Virtualization. If you want to configure local storage for virtual machines, you must enable the hostpath provisioner first.

4.3. Uninstalling OpenShift Virtualization

You uninstall OpenShift Virtualization by using the web console or the command line interface (CLI) to delete the OpenShift Virtualization workloads, the Operator, and its resources.

4.3.1. Uninstalling OpenShift Virtualization by using the web console

You uninstall OpenShift Virtualization by using the web console to perform the following tasks:

Important

You must first delete all virtual machines and virtual machine instances.

You cannot uninstall OpenShift Virtualization while its workloads remain on the cluster.

4.3.1.1. Deleting the HyperConverged custom resource

To uninstall OpenShift Virtualization, you first delete the HyperConverged custom resource (CR).

Prerequisites

  • You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.

Procedure

  1. Navigate to the Operators → Installed Operators page.
  2. Select the OpenShift Virtualization Operator.
  3. Click the OpenShift Virtualization Deployment tab.
  4. Click the Options menu kebab beside kubevirt-hyperconverged and select Delete HyperConverged.
  5. Click Delete in the confirmation window.
4.3.1.2. Deleting Operators from a cluster using the web console

Cluster administrators can delete installed Operators from a selected namespace by using the web console.

Prerequisites

  • You have access to an OpenShift Container Platform cluster web console using an account with cluster-admin permissions.

Procedure

  1. Navigate to the Operators → Installed Operators page.
  2. Scroll or enter a keyword into the Filter by name field to find the Operator that you want to remove. Then, click on it.
  3. On the right side of the Operator Details page, select Uninstall Operator from the Actions list.

    An Uninstall Operator? dialog box is displayed.

  4. Select Uninstall to remove the Operator, Operator deployments, and pods. Following this action, the Operator stops running and no longer receives updates.

    Note

    This action does not remove resources managed by the Operator, including custom resource definitions (CRDs) and custom resources (CRs). Dashboards and navigation items enabled by the web console and off-cluster resources that continue to run might need manual clean up. To remove these after uninstalling the Operator, you might need to manually delete the Operator CRDs.

4.3.1.3. Deleting a namespace using the web console

You can delete a namespace by using the OpenShift Container Platform web console.

Prerequisites

  • You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.

Procedure

  1. Navigate to Administration → Namespaces.
  2. Locate the namespace that you want to delete in the list of namespaces.
  3. On the far right side of the namespace listing, select Delete Namespace from the Options menu kebab.
  4. When the Delete Namespace pane opens, enter the name of the namespace that you want to delete in the field.
  5. Click Delete.
4.3.1.4. Deleting OpenShift Virtualization custom resource definitions

You can delete the OpenShift Virtualization custom resource definitions (CRDs) by using the web console.

Prerequisites

  • You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.

Procedure

  1. Navigate to Administration → CustomResourceDefinitions.
  2. Select the Label filter and enter operators.coreos.com/kubevirt-hyperconverged.openshift-cnv in the Search field to display the OpenShift Virtualization CRDs.
  3. Click the Options menu kebab beside each CRD and select Delete CustomResourceDefinition.

4.3.2. Uninstalling OpenShift Virtualization by using the CLI

You can uninstall OpenShift Virtualization by using the OpenShift CLI (oc).

Prerequisites

  • You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
  • You have installed the OpenShift CLI (oc).
  • You have deleted all virtual machines and virtual machine instances. You cannot uninstall OpenShift Virtualization while its workloads remain on the cluster.

Procedure

  1. Delete the HyperConverged custom resource:

    $ oc delete HyperConverged kubevirt-hyperconverged -n openshift-cnv
  2. Delete the OpenShift Virtualization Operator subscription:

    $ oc delete subscription kubevirt-hyperconverged -n openshift-cnv
  3. Delete the OpenShift Virtualization ClusterServiceVersion resource:

    $ oc delete csv -n openshift-cnv -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv
  4. Delete the OpenShift Virtualization namespace:

    $ oc delete namespace openshift-cnv
  5. List the OpenShift Virtualization custom resource definitions (CRDs) by running the oc delete crd command with the dry-run option:

    $ oc delete crd --dry-run=client -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv

    Example output

    customresourcedefinition.apiextensions.k8s.io "cdis.cdi.kubevirt.io" deleted (dry run)
    customresourcedefinition.apiextensions.k8s.io "hostpathprovisioners.hostpathprovisioner.kubevirt.io" deleted (dry run)
    customresourcedefinition.apiextensions.k8s.io "hyperconvergeds.hco.kubevirt.io" deleted (dry run)
    customresourcedefinition.apiextensions.k8s.io "kubevirts.kubevirt.io" deleted (dry run)
    customresourcedefinition.apiextensions.k8s.io "networkaddonsconfigs.networkaddonsoperator.network.kubevirt.io" deleted (dry run)
    customresourcedefinition.apiextensions.k8s.io "ssps.ssp.kubevirt.io" deleted (dry run)
    customresourcedefinition.apiextensions.k8s.io "tektontasks.tektontasks.kubevirt.io" deleted (dry run)

  6. Delete the CRDs by running the oc delete crd command without the dry-run option:

    $ oc delete crd -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv

Chapter 5. Postinstallation configuration

5.1. Postinstallation configuration

The following procedures are typically performed after OpenShift Virtualization is installed. You can configure the components that are relevant for your environment:

5.2. Specifying nodes for OpenShift Virtualization components

The default scheduling for virtual machines (VMs) on bare metal nodes is appropriate. Optionally, you can specify the nodes where you want to deploy OpenShift Virtualization Operators, workloads, and controllers by configuring node placement rules.

Note

You can configure node placement rules for some components after installing OpenShift Virtualization. However, no virtual machines can be present on the cluster when you configure node placement rules for workloads.

5.2.1. About node placement rules for OpenShift Virtualization components

You can use node placement rules for the following tasks:

  • Deploy virtual machines only on nodes intended for virtualization workloads.
  • Deploy Operators only on infrastructure nodes.
  • Maintain separation between workloads.

Depending on the object, you can use one or more of the following rule types:

nodeSelector
Allows pods to be scheduled on nodes that are labeled with the key-value pair or pairs that you specify in this field. The node must have labels that exactly match all listed pairs.
affinity
Enables you to use more expressive syntax to set rules that match nodes with pods. Affinity also allows for more nuance in how the rules are applied. For example, you can specify that a rule is a preference, not a requirement. If a rule is a preference, pods are still scheduled when the rule is not satisfied.
tolerations
Allows pods to be scheduled on nodes that have matching taints. If a taint is applied to a node, that node only accepts pods that tolerate the taint.
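
Node placement rules depend on the labels and taints that already exist on your nodes. As a reference, you might label and taint a node with standard commands like the following. The key names are illustrative and match the examples later in this section:

$ oc label node <node_name> example.io/example-workloads-key=example-workloads-value

$ oc adm taint node <node_name> key=virtualization:NoSchedule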

5.2.2. Applying node placement rules

You can apply node placement rules by editing a Subscription, HyperConverged, or HostPathProvisioner object using the command line.

Prerequisites

  • The oc CLI tool is installed.
  • You are logged in with cluster administrator permissions.

Procedure

  1. Edit the object in your default editor by running the following command:

    $ oc edit <resource_type> <resource_name> -n openshift-cnv
  2. Save the file to apply the changes.
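
For example, to apply placement rules to the HyperConverged object, you might open it with the following command, which assumes the default resource name and namespace:

$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv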

5.2.3. Node placement rule examples

You can specify node placement rules for an OpenShift Virtualization component by editing a Subscription, HyperConverged, or HostPathProvisioner object.

5.2.3.1. Subscription object node placement rule examples

To specify the nodes where OLM deploys the OpenShift Virtualization Operators, edit the Subscription object during OpenShift Virtualization installation.

Currently, you cannot configure node placement rules for the Subscription object by using the web console.

The Subscription object does not support the affinity node placement rule.

Example Subscription object with nodeSelector rule

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  startingCSV: kubevirt-hyperconverged-operator.v4.14.8
  channel: "stable"
  config:
    nodeSelector:
      example.io/example-infra-key: example-infra-value 1

1
OLM deploys the OpenShift Virtualization Operators on nodes labeled example.io/example-infra-key = example-infra-value.

Example Subscription object with tolerations rule

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source:  redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  startingCSV: kubevirt-hyperconverged-operator.v4.14.8
  channel: "stable"
  config:
    tolerations:
    - key: "key"
      operator: "Equal"
      value: "virtualization" 1
      effect: "NoSchedule"

1
OLM deploys the OpenShift Virtualization Operators on nodes that have the key = virtualization:NoSchedule taint. Only pods with a matching toleration are scheduled on these nodes.
5.2.3.2. HyperConverged object node placement rule example

To specify the nodes where OpenShift Virtualization deploys its components, you can edit the nodePlacement object in the HyperConverged custom resource (CR) file that you create during OpenShift Virtualization installation.

Example HyperConverged object with nodeSelector rule

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  infra:
    nodePlacement:
      nodeSelector:
        example.io/example-infra-key: example-infra-value 1
  workloads:
    nodePlacement:
      nodeSelector:
        example.io/example-workloads-key: example-workloads-value 2

1
Infrastructure resources are placed on nodes labeled example.io/example-infra-key = example-infra-value.
2
Workloads are placed on nodes labeled example.io/example-workloads-key = example-workloads-value.

Example HyperConverged object with affinity rule

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  infra:
    nodePlacement:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: example.io/example-infra-key
                operator: In
                values:
                - example-infra-value 1
  workloads:
    nodePlacement:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: example.io/example-workloads-key 2
                operator: In
                values:
                - example-workloads-value
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: example.io/num-cpus
                operator: Gt
                values:
                - 8 3

1
Infrastructure resources are placed on nodes labeled example.io/example-infra-key = example-infra-value.
2
Workloads are placed on nodes labeled example.io/example-workloads-key = example-workloads-value.
3
Nodes that have more than eight CPUs are preferred for workloads, but if they are not available, pods are still scheduled.

Example HyperConverged object with tolerations rule

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  workloads:
    nodePlacement:
      tolerations: 1
      - key: "key"
        operator: "Equal"
        value: "virtualization"
        effect: "NoSchedule"

1
Nodes reserved for OpenShift Virtualization components have the key = virtualization:NoSchedule taint applied. Only pods with a matching toleration are scheduled on the reserved nodes.
5.2.3.3. HostPathProvisioner object node placement rule example

You can edit the HostPathProvisioner object directly or by using the web console.

Warning

You must schedule the hostpath provisioner and the OpenShift Virtualization components on the same nodes. Otherwise, virtualization pods that use the hostpath provisioner cannot run, and you cannot run virtual machines.

After you deploy a virtual machine (VM) with the hostpath provisioner (HPP) storage class, you can remove the HPP pod from the same node by using the node selector. However, before you delete that VM, you must first revert the node selector change, at least for that specific node, and wait for the HPP pod to run again.

You can configure node placement rules by specifying nodeSelector, affinity, or tolerations for the spec.workload field of the HostPathProvisioner object that you create when you install the hostpath provisioner.

Example HostPathProvisioner object with nodeSelector rule

apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  pathConfig:
    path: "</path/to/backing/directory>"
    useNamingPrefix: false
  workload:
    nodeSelector:
      example.io/example-workloads-key: example-workloads-value 1

1
Workloads are placed on nodes labeled example.io/example-workloads-key = example-workloads-value.

5.2.4. Additional resources

5.3. Postinstallation network configuration

By default, OpenShift Virtualization is installed with a single, internal pod network.

After you install OpenShift Virtualization, you can install networking Operators and configure additional networks.

5.3.1. Installing networking Operators

You must install the Kubernetes NMState Operator to configure a Linux bridge network for live migration or external access to virtual machines (VMs). For installation instructions, see Installing the Kubernetes NMState Operator by using the web console.

You can install the SR-IOV Operator to manage SR-IOV network devices and network attachments. For installation instructions, see Installing the SR-IOV Network Operator.

You can add the MetalLB Operator to manage the lifecycle for an instance of MetalLB on your cluster. For installation instructions, see Installing the MetalLB Operator from the OperatorHub using the web console.

5.3.2. Configuring a Linux bridge network

After you install the Kubernetes NMState Operator, you can configure a Linux bridge network for live migration or external access to virtual machines (VMs).

5.3.2.1. Creating a Linux bridge NNCP

You can create a NodeNetworkConfigurationPolicy (NNCP) manifest for a Linux bridge network.

Prerequisites

  • You have installed the Kubernetes NMState Operator.

Procedure

  • Create the NodeNetworkConfigurationPolicy manifest. This example includes sample values that you must replace with your own information.

    apiVersion: nmstate.io/v1
    kind: NodeNetworkConfigurationPolicy
    metadata:
      name: br1-eth1-policy 1
    spec:
      desiredState:
        interfaces:
          - name: br1 2
            description: Linux bridge with eth1 as a port 3
            type: linux-bridge 4
            state: up 5
            ipv4:
              enabled: false 6
            bridge:
              options:
                stp:
                  enabled: false 7
              port:
                - name: eth1 8
    1
    Name of the policy.
    2
    Name of the interface.
    3
    Optional: Human-readable description of the interface.
    4
    The type of interface. This example creates a bridge.
    5
    The requested state for the interface after creation.
    6
    Disables IPv4 in this example.
    7
    Disables STP in this example.
    8
    The node NIC to which the bridge is attached.
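
After you create the manifest, you apply it and can then watch the policy status. The following commands are a minimal sketch; the file name is an example:

$ oc apply -f br1-eth1-policy.yaml

$ oc get nncp br1-eth1-policy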
5.3.2.2. Creating a Linux bridge NAD by using the web console

You can create a network attachment definition (NAD) to provide layer-2 networking to pods and virtual machines by using the OpenShift Container Platform web console.

A Linux bridge network attachment definition is the most efficient method for connecting a virtual machine to a VLAN.

Warning

Configuring IP address management (IPAM) in a network attachment definition for virtual machines is not supported.

Procedure

  1. In the web console, click Networking → NetworkAttachmentDefinitions.
  2. Click Create Network Attachment Definition.

    Note

    The network attachment definition must be in the same namespace as the pod or virtual machine.

  3. Enter a unique Name and optional Description.
  4. Select CNV Linux bridge from the Network Type list.
  5. Enter the name of the bridge in the Bridge Name field.
  6. Optional: If the resource has VLAN IDs configured, enter the ID numbers in the VLAN Tag Number field.
  7. Optional: Select MAC Spoof Check to enable MAC spoof filtering. This feature provides security against a MAC spoofing attack by allowing only a single MAC address to exit the pod.
  8. Click Create.
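
The web console steps above correspond to a NetworkAttachmentDefinition manifest that uses the cnv-bridge CNI plugin. The following manifest is an illustrative sketch, not the exact output of the console; the NAD name, bridge name, and VLAN ID are example values:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: bridge-network
  annotations:
    k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br1
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "bridge-network",
    "type": "cnv-bridge",
    "bridge": "br1",
    "vlan": 100,
    "macspoofchk": true
  }'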

5.3.3. Configuring a network for live migration

After you have configured a Linux bridge network, you can configure a dedicated network for live migration. A dedicated network minimizes the effects of network saturation on tenant workloads during live migration.

5.3.3.1. Configuring a dedicated secondary network for live migration

To configure a dedicated secondary network for live migration, you must first create a bridge network attachment definition (NAD) by using the CLI. Then, you add the name of the NetworkAttachmentDefinition object to the HyperConverged custom resource (CR).

Prerequisites

  • You installed the OpenShift CLI (oc).
  • You logged in to the cluster as a user with the cluster-admin role.
  • Each node has at least two Network Interface Cards (NICs).
  • The NICs for live migration are connected to the same VLAN.

Procedure

  1. Create a NetworkAttachmentDefinition manifest according to the following example:

    Example configuration file

    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: my-secondary-network 1
      namespace: openshift-cnv 2
    spec:
      config: '{
        "cniVersion": "0.3.1",
        "name": "migration-bridge",
        "type": "macvlan",
        "master": "eth1", 3
        "mode": "bridge",
        "ipam": {
          "type": "whereabouts", 4
          "range": "10.200.5.0/24" 5
        }
      }'

    1
    Specify the name of the NetworkAttachmentDefinition object.
    2
    Specify the namespace where the NetworkAttachmentDefinition object is created. In this example, the openshift-cnv namespace.
    3
    Specify the name of the NIC to be used for live migration.
    4
    Specify the name of the CNI plugin that provides the network for the NAD.
    5
    Specify an IP address range for the secondary network. This range must not overlap the IP addresses of the main network.
  2. Open the HyperConverged CR in your default editor by running the following command:

    $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  3. Add the name of the NetworkAttachmentDefinition object to the spec.liveMigrationConfig stanza of the HyperConverged CR:

    Example HyperConverged manifest

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
    spec:
      liveMigrationConfig:
        completionTimeoutPerGiB: 800
        network: <network> 1
        parallelMigrationsPerCluster: 5
        parallelOutboundMigrationsPerNode: 2
        progressTimeout: 150
    # ...

    1
    Specify the name of the Multus NetworkAttachmentDefinition object to be used for live migrations.
  4. Save your changes and exit the editor. The virt-handler pods restart and connect to the secondary network.

Verification

  • When the node that the virtual machine runs on is placed into maintenance mode, the VM automatically migrates to another node in the cluster. You can verify that the migration occurred over the secondary network and not the default pod network by checking the target IP address in the virtual machine instance (VMI) metadata.

    $ oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'
5.3.3.2. Selecting a dedicated network by using the web console

You can select a dedicated network for live migration by using the OpenShift Container Platform web console.

Prerequisites

  • You configured a Multus network for live migration.

Procedure

  1. Navigate to Virtualization > Overview in the OpenShift Container Platform web console.
  2. Click the Settings tab and then click Live migration.
  3. Select the network from the Live migration network list.

5.3.4. Configuring an SR-IOV network

After you install the SR-IOV Operator, you can configure an SR-IOV network.

5.3.4.1. Configuring SR-IOV network devices

The SR-IOV Network Operator adds the SriovNetworkNodePolicy.sriovnetwork.openshift.io CustomResourceDefinition to OpenShift Container Platform. You can configure an SR-IOV network device by creating a SriovNetworkNodePolicy custom resource (CR).

Note

When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes and, in some cases, reboot them.

It might take several minutes for a configuration change to apply.

Prerequisites

  • You installed the OpenShift CLI (oc).
  • You have access to the cluster as a user with the cluster-admin role.
  • You have installed the SR-IOV Network Operator.
  • You have enough available nodes in your cluster to handle the evicted workload from drained nodes.
  • You have not selected any control plane nodes for SR-IOV network device configuration.

Procedure

  1. Create an SriovNetworkNodePolicy object, and then save the YAML in the <name>-sriov-node-network.yaml file. Replace <name> with the name for this configuration.

    apiVersion: sriovnetwork.openshift.io/v1
    kind: SriovNetworkNodePolicy
    metadata:
      name: <name> 1
      namespace: openshift-sriov-network-operator 2
    spec:
      resourceName: <sriov_resource_name> 3
      nodeSelector:
        feature.node.kubernetes.io/network-sriov.capable: "true" 4
      priority: <priority> 5
      mtu: <mtu> 6
      numVfs: <num> 7
      nicSelector: 8
        vendor: "<vendor_code>" 9
        deviceID: "<device_id>" 10
        pfNames: ["<pf_name>", ...] 11
        rootDevices: ["<pci_bus_id>", "..."] 12
      deviceType: vfio-pci 13
      isRdma: false 14
    1
    Specify a name for the CR object.
    2
    Specify the namespace where the SR-IOV Operator is installed.
    3
    Specify the resource name of the SR-IOV device plugin. You can create multiple SriovNetworkNodePolicy objects for a resource name.
    4
    Specify the node selector to select which nodes are configured. Only SR-IOV network devices on selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed only on selected nodes.
    5
    Optional: Specify an integer value between 0 and 99. A smaller number gets higher priority, so a priority of 10 is higher than a priority of 99. The default value is 99.
    6
    Optional: Specify a value for the maximum transmission unit (MTU) of the virtual function. The maximum MTU value can vary for different NIC models.
    7
    Specify the number of the virtual functions (VF) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than 127.
    8
    The nicSelector mapping selects the Ethernet device for the Operator to configure. You do not need to specify values for all the parameters. It is recommended to identify the Ethernet adapter with enough precision to minimize the possibility of selecting an Ethernet device unintentionally. If you specify rootDevices, you must also specify a value for vendor, deviceID, or pfNames. If you specify both pfNames and rootDevices at the same time, ensure that they point to an identical device.
    9
    Optional: Specify the vendor hex code of the SR-IOV network device. The only allowed values are 8086 and 15b3.
    10
    Optional: Specify the device hex code of the SR-IOV network device. The only allowed values are 158b, 1015, and 1017.
    11
    Optional: The parameter accepts an array of one or more physical function (PF) names for the Ethernet device.
    12
    The parameter accepts an array of one or more PCI bus addresses for the physical function of the Ethernet device. Provide the address in the following format: 0000:02:00.1.
    13
    The vfio-pci driver type is required for virtual functions in OpenShift Virtualization.
    14
    Optional: Specify whether to enable remote direct memory access (RDMA) mode. For a Mellanox card, set isRdma to false. The default value is false.
    Note

    If the isRdma flag is set to true, you can continue to use the RDMA-enabled VF as a normal network device. A device can be used in either mode.

  2. Optional: Label the SR-IOV capable cluster nodes with SriovNetworkNodePolicy.Spec.NodeSelector if they are not already labeled. For more information about labeling nodes, see "Understanding how to update labels on nodes".
  3. Create the SriovNetworkNodePolicy object:

    $ oc create -f <name>-sriov-node-network.yaml

    where <name> specifies the name for this configuration.

    After applying the configuration update, all the pods in the openshift-sriov-network-operator namespace transition to the Running status.

  4. To verify that the SR-IOV network device is configured, enter the following command. Replace <node_name> with the name of a node with the SR-IOV network device that you just configured.

    $ oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'
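
    When the policy has been applied successfully, the sync status typically reports the following value:

    Example output

    Succeeded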

5.3.5. Enabling load balancer service creation by using the web console

You can enable the creation of load balancer services for a virtual machine (VM) by using the OpenShift Container Platform web console.

Prerequisites

  • You have configured a load balancer for the cluster.
  • You are logged in as a user with the cluster-admin role.

Procedure

  1. Navigate to Virtualization → Overview.
  2. On the Settings tab, click Cluster.
  3. Expand LoadBalancer service and select Enable the creation of LoadBalancer services for SSH connections to VirtualMachines.

5.4. Postinstallation storage configuration

The following storage configuration tasks are mandatory:

  • You must configure a default storage class for your cluster. Otherwise, the cluster cannot receive automated boot source updates.
  • You must configure storage profiles if your storage provider is not recognized by CDI. A storage profile provides recommended storage settings based on the associated storage class.

Optional: You can configure local storage by using the hostpath provisioner (HPP).

See the storage configuration overview for more options, including configuring the Containerized Data Importer (CDI), data volumes, and automatic boot source updates.

5.4.1. Configuring local storage by using the HPP

When you install the OpenShift Virtualization Operator, the Hostpath Provisioner (HPP) Operator is automatically installed. The HPP Operator creates the HPP provisioner.

The HPP is a local storage provisioner designed for OpenShift Virtualization. To use the HPP, you must create an HPP custom resource (CR).

Important

HPP storage pools must not be in the same partition as the operating system. Otherwise, the storage pools might fill the operating system partition. If the operating system partition is full, performance can be affected or the node can become unstable or unusable.

5.4.1.1. Creating a storage class for the CSI driver with the storagePools stanza

To use the hostpath provisioner (HPP), you must create an associated storage class for the Container Storage Interface (CSI) driver.

When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class. You cannot update a StorageClass object’s parameters after you create it.

Note

Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While a disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned.

To solve this problem, use the Kubernetes pod scheduler to bind the persistent volume claim (PVC) to a PV on the correct node. By setting the volumeBindingMode parameter of the StorageClass to WaitForFirstConsumer, binding and provisioning of the PV are delayed until a pod that uses the PVC is created.

Procedure

  1. Create a storageclass_csi.yaml file to define the storage class:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: hostpath-csi
    provisioner: kubevirt.io.hostpath-provisioner
    reclaimPolicy: Delete 1
    volumeBindingMode: WaitForFirstConsumer 2
    parameters:
      storagePool: my-storage-pool 3
    1
    The two possible reclaimPolicy values are Delete and Retain. If you do not specify a value, the default value is Delete.
    2
    The volumeBindingMode parameter determines when dynamic provisioning and volume binding occur. Specify WaitForFirstConsumer to delay the binding and provisioning of a persistent volume (PV) until after a pod that uses the persistent volume claim (PVC) is created. This ensures that the PV meets the pod’s scheduling requirements.
    3
    Specify the name of the storage pool defined in the HPP CR.
  2. Save the file and exit.
  3. Create the StorageClass object by running the following command:

    $ oc create -f storageclass_csi.yaml
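
The storagePool parameter in the storage class must match a storage pool that is defined in the HostPathProvisioner CR. For reference, a minimal HPP CR that defines the my-storage-pool storage pool might look like the following sketch; the backing path and node selector are example values:

apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  storagePools:
    - name: my-storage-pool
      path: /var/myvolumes
  workload:
    nodeSelector:
      kubernetes.io/os: linux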

Chapter 6. Updating

6.1. Updating OpenShift Virtualization

Learn how Operator Lifecycle Manager (OLM) delivers z-stream and minor version updates for OpenShift Virtualization.

6.1.1. OpenShift Virtualization on RHEL 9

OpenShift Virtualization 4.14 is based on Red Hat Enterprise Linux (RHEL) 9. You can update to OpenShift Virtualization 4.14 from a version that was based on RHEL 8 by following the standard OpenShift Virtualization update procedure. No additional steps are required.

As in previous versions, you can perform the update without disrupting running workloads. OpenShift Virtualization 4.14 supports live migration from RHEL 8 nodes to RHEL 9 nodes.

6.1.1.1. RHEL 9 machine type

All VM templates that are included with OpenShift Virtualization now use the RHEL 9 machine type by default: machineType: pc-q35-rhel9.<y>.0, where <y> is a single digit corresponding to the latest minor version of RHEL 9. For example, the value pc-q35-rhel9.2.0 is used for RHEL 9.2.

Updating OpenShift Virtualization does not change the machineType value of any existing VMs. These VMs continue to function as they did before the update. You can optionally change a VM’s machine type so that it can benefit from RHEL 9 improvements.

Important

Before you change a VM’s machineType value, you must shut down the VM.
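
For example, you might inspect and then update a VM's machine type with commands like the following. The jsonpath expression follows the KubeVirt VirtualMachine schema, and the target machine type value is an example:

$ oc get vm <vm_name> -o jsonpath='{.spec.template.spec.domain.machine.type}'

$ oc patch vm <vm_name> --type merge -p '{"spec":{"template":{"spec":{"domain":{"machine":{"type":"pc-q35-rhel9.2.0"}}}}}}'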

6.1.2. About updating OpenShift Virtualization

  • Operator Lifecycle Manager (OLM) manages the lifecycle of the OpenShift Virtualization Operator. The Marketplace Operator, which is deployed during OpenShift Container Platform installation, makes external Operators available to your cluster.
  • OLM provides z-stream and minor version updates for OpenShift Virtualization. Minor version updates become available when you update OpenShift Container Platform to the next minor version. You cannot update OpenShift Virtualization to the next minor version without first updating OpenShift Container Platform.
  • OpenShift Virtualization subscriptions use a single update channel that is named stable. The stable channel ensures that your OpenShift Virtualization and OpenShift Container Platform versions are compatible.
  • If your subscription’s approval strategy is set to Automatic, the update process starts as soon as a new version of the Operator is available in the stable channel. It is highly recommended to use the Automatic approval strategy to maintain a supportable environment. Each minor version of OpenShift Virtualization is only supported if you run the corresponding OpenShift Container Platform version. For example, you must run OpenShift Virtualization 4.14 on OpenShift Container Platform 4.14.

    • Though it is possible to select the Manual approval strategy, this is not recommended because it risks the supportability and functionality of your cluster. With the Manual approval strategy, you must manually approve every pending update. If OpenShift Container Platform and OpenShift Virtualization updates are out of sync, your cluster becomes unsupported.
  • The amount of time an update takes to complete depends on your network connection. Most automatic updates complete within fifteen minutes.
  • Updating OpenShift Virtualization does not interrupt network connections.
  • Data volumes and their associated persistent volume claims are preserved during update.
Important

If you have virtual machines running that use hostpath provisioner storage, they cannot be live migrated and might block an OpenShift Container Platform cluster update.

As a workaround, you can reconfigure the virtual machines so that they can be powered off automatically during a cluster update. Remove the evictionStrategy: LiveMigrate field and set the runStrategy field to Always.
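
In the VirtualMachine manifest, this workaround corresponds to setting spec.runStrategy and removing spec.template.spec.evictionStrategy, as in the following sketch; only the relevant fields are shown:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: <vm_name>
spec:
  runStrategy: Always
  template:
    spec:
      # The evictionStrategy: LiveMigrate field is removed from this level.
# ...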

6.1.2.1. About workload updates

When you update OpenShift Virtualization, virtual machine workloads, including libvirt, virt-launcher, and qemu, update automatically if they support live migration.

Note

Each virtual machine has a virt-launcher pod that runs the virtual machine instance (VMI). The virt-launcher pod runs an instance of libvirt, which is used to manage the virtual machine (VM) process.

You can configure how workloads are updated by editing the spec.workloadUpdateStrategy stanza of the HyperConverged custom resource (CR). There are two available workload update methods: LiveMigrate and Evict.

Because the Evict method shuts down VMI pods, only the LiveMigrate update strategy is enabled by default.

When LiveMigrate is the only update strategy enabled:

  • VMIs that support live migration are migrated during the update process. The VM guest moves into a new pod with the updated components enabled.
  • VMIs that do not support live migration are not disrupted or updated.

    • If a VMI has the LiveMigrate eviction strategy but does not support live migration, it is not updated.

If you enable both LiveMigrate and Evict:

  • VMIs that support live migration use the LiveMigrate update strategy.
  • VMIs that do not support live migration use the Evict update strategy. If a VMI is controlled by a VirtualMachine object that has runStrategy: Always set, a new VMI is created in a new pod with updated components.
Migration attempts and timeouts

When updating workloads, live migration fails if a pod is in the Pending state for the following periods:

5 minutes
If the pod is pending because it is Unschedulable.
15 minutes
If the pod is stuck in the Pending state for any reason.

When a VMI fails to migrate, the virt-controller tries to migrate it again. It repeats this process until all migratable VMIs are running on new virt-launcher pods. If a VMI is improperly configured, however, these attempts can repeat indefinitely.

Note

Each attempt corresponds to a migration object. Only the five most recent attempts are held in a buffer. This prevents migration objects from accumulating on the system while retaining information for debugging.

6.1.2.2. About Control Plane Only updates

Every even-numbered minor version of OpenShift Container Platform, including 4.10 and 4.12, is an Extended Update Support (EUS) version. However, because Kubernetes design mandates serial minor version updates, you cannot directly update from one EUS version to the next.

After you update from the source EUS version to the next odd-numbered minor version, you must sequentially update OpenShift Virtualization to all z-stream releases of that minor version that are on your update path. When you have upgraded to the latest applicable z-stream version, you can then update OpenShift Container Platform to the target EUS minor version.

When the OpenShift Container Platform update succeeds, the corresponding update for OpenShift Virtualization becomes available. You can now update OpenShift Virtualization to the target EUS version.

6.1.2.2.1. Preparing to update

Before beginning a Control Plane Only update, you must:

  • Pause worker nodes' machine config pools before you start a Control Plane Only update so that the workers are not rebooted twice.
  • Disable automatic workload updates before you begin the update process. This is to prevent OpenShift Virtualization from migrating or evicting your virtual machines (VMs) until you update to your target EUS version.
Note

By default, OpenShift Virtualization automatically updates workloads, such as the virt-launcher pod, when you update the OpenShift Virtualization Operator. You can configure this behavior in the spec.workloadUpdateStrategy stanza of the HyperConverged custom resource.

Learn more about Performing a Control Plane Only update.

6.1.3. Preventing workload updates during a Control Plane Only update

When you update from one Extended Update Support (EUS) version to the next, you must manually disable automatic workload updates to prevent OpenShift Virtualization from migrating or evicting workloads during the update process.

Prerequisites

  • You are running an EUS version of OpenShift Container Platform and want to update to the next EUS version. You have not yet updated to the odd-numbered version in between.
  • You read "Preparing to perform a Control Plane Only update" and learned the caveats and requirements that pertain to your OpenShift Container Platform cluster.
  • You paused the worker nodes' machine config pools as directed by the OpenShift Container Platform documentation.
  • It is recommended that you use the default Automatic approval strategy. If you use the Manual approval strategy, you must approve all pending updates in the web console. For more details, refer to the "Manually approving a pending Operator update" section.

Procedure

  1. Run the following command and record the workloadUpdateMethods configuration:

    $ oc get kv kubevirt-kubevirt-hyperconverged \
      -n openshift-cnv -o jsonpath='{.spec.workloadUpdateStrategy.workloadUpdateMethods}'
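
    With the default configuration, in which only the LiveMigrate method is enabled, the command returns output similar to the following; your recorded value might differ:

    Example output

    ["LiveMigrate"]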
  2. Turn off all workload update methods by running the following command:

    $ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \
      --type json -p '[{"op":"replace","path":"/spec/workloadUpdateStrategy/workloadUpdateMethods", "value":[]}]'

    Example output

    hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched

  3. Ensure that the HyperConverged Operator is Upgradeable before you continue. Enter the following command and monitor the output:

    $ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o json | jq ".status.conditions"

    Example 6.1. Example output

    [
      {
        "lastTransitionTime": "2022-12-09T16:29:11Z",
        "message": "Reconcile completed successfully",
        "observedGeneration": 3,
        "reason": "ReconcileCompleted",
        "status": "True",
        "type": "ReconcileComplete"
      },
      {
        "lastTransitionTime": "2022-12-09T20:30:10Z",
        "message": "Reconcile completed successfully",
        "observedGeneration": 3,
        "reason": "ReconcileCompleted",
        "status": "True",
        "type": "Available"
      },
      {
        "lastTransitionTime": "2022-12-09T20:30:10Z",
        "message": "Reconcile completed successfully",
        "observedGeneration": 3,
        "reason": "ReconcileCompleted",
        "status": "False",
        "type": "Progressing"
      },
      {
        "lastTransitionTime": "2022-12-09T16:39:11Z",
        "message": "Reconcile completed successfully",
        "observedGeneration": 3,
        "reason": "ReconcileCompleted",
        "status": "False",
        "type": "Degraded"
      },
      {
        "lastTransitionTime": "2022-12-09T20:30:10Z",
        "message": "Reconcile completed successfully",
        "observedGeneration": 3,
        "reason": "ReconcileCompleted",
        "status": "True",
        "type": "Upgradeable" 1
      }
    ]
    1
    The OpenShift Virtualization Operator has the Upgradeable status.
  4. Manually update your cluster from the source EUS version to the next minor version of OpenShift Container Platform:

    $ oc adm upgrade

    Verification

    • Check the current version by running the following command:

      $ oc get clusterversion
      Note

      Updating OpenShift Container Platform to the next version is a prerequisite for updating OpenShift Virtualization. For more details, refer to the "Updating clusters" section of the OpenShift Container Platform documentation.

  5. Update OpenShift Virtualization.

    • With the default Automatic approval strategy, OpenShift Virtualization automatically updates to the corresponding version after you update OpenShift Container Platform.
    • If you use the Manual approval strategy, approve the pending updates by using the web console.
  6. Monitor the OpenShift Virtualization update by running the following command:

    $ oc get csv -n openshift-cnv
  7. Update OpenShift Virtualization to every z-stream version that is available for the non-EUS minor version, monitoring each update by running the command shown in the previous step.
  8. Confirm that OpenShift Virtualization successfully updated to the latest z-stream release of the non-EUS version by running the following command:

    $ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o json | jq ".status.versions"

    Example output

    [
      {
        "name": "operator",
        "version": "4.14.8"
      }
    ]

  9. Wait until the HyperConverged Operator has the Upgradeable status before you perform the next update. Enter the following command and monitor the output:

    $ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o json | jq ".status.conditions"
  10. Update OpenShift Container Platform to the target EUS version.
  11. Confirm that the update succeeded by checking the cluster version:

    $ oc get clusterversion
  12. Update OpenShift Virtualization to the target EUS version.

    • With the default Automatic approval strategy, OpenShift Virtualization automatically updates to the corresponding version after you update OpenShift Container Platform.
    • If you use the Manual approval strategy, approve the pending updates by using the web console.
  13. Monitor the OpenShift Virtualization update by running the following command:

    $ oc get csv -n openshift-cnv

    The update completes when the VERSION field matches the target EUS version and the PHASE field reads Succeeded.

  14. Restore the workloadUpdateMethods configuration that you recorded from step 1 with the following command:

    $ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p \
      "[{\"op\":\"add\",\"path\":\"/spec/workloadUpdateStrategy/workloadUpdateMethods\", \"value\":{WorkloadUpdateMethodConfig}}]"

    Example output

    hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched

    Verification

    • Check the status of VM migration by running the following command:

      $ oc get vmim -A

Next steps

  • You can now unpause the worker nodes' machine config pools.

6.1.4. Configuring workload update methods

You can configure workload update methods by editing the HyperConverged custom resource (CR).

Prerequisites

  • To use live migration as an update method, you must first enable live migration in the cluster.

    Note

    If a VirtualMachineInstance CR contains evictionStrategy: LiveMigrate and the virtual machine instance (VMI) does not support live migration, the VMI will not update.

Procedure

  1. To open the HyperConverged CR in your default editor, run the following command:

    $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. Edit the workloadUpdateStrategy stanza of the HyperConverged CR. For example:

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
    spec:
      workloadUpdateStrategy:
        workloadUpdateMethods: 1
        - LiveMigrate 2
        - Evict 3
        batchEvictionSize: 10 4
        batchEvictionInterval: "1m0s" 5
    # ...
    1
    The methods that can be used to perform automated workload updates. The available values are LiveMigrate and Evict. If you enable both options as shown in this example, updates use LiveMigrate for VMIs that support live migration and Evict for any VMIs that do not support live migration. To disable automatic workload updates, you can either remove the workloadUpdateStrategy stanza or set workloadUpdateMethods: [] to leave the array empty.
    2
    The least disruptive update method. VMIs that support live migration are updated by migrating the virtual machine (VM) guest into a new pod with the updated components enabled. If LiveMigrate is the only workload update method listed, VMIs that do not support live migration are not disrupted or updated.
    3
    A disruptive method that shuts down VMI pods during upgrade. Evict is the only update method available if live migration is not enabled in the cluster. If a VMI is controlled by a VirtualMachine object that has runStrategy: Always configured, a new VMI is created in a new pod with updated components.
    4
    The number of VMIs that can be forced to be updated at a time by using the Evict method. This does not apply to the LiveMigrate method.
    5
    The interval to wait before evicting the next batch of workloads. This does not apply to the LiveMigrate method.
    Note

    You can configure live migration limits and timeouts by editing the spec.liveMigrationConfig stanza of the HyperConverged CR.

  3. To apply your changes, save and exit the editor.

6.1.5. Approving pending Operator updates

6.1.5.1. Manually approving a pending Operator update

If an installed Operator has the approval strategy in its subscription set to Manual, when new updates are released in its current update channel, the update must be manually approved before installation can begin.

Prerequisites

  • An Operator previously installed using Operator Lifecycle Manager (OLM).

Procedure

  1. In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators → Installed Operators.
  2. Operators that have a pending update display a status with Upgrade available. Click the name of the Operator you want to update.
  3. Click the Subscription tab. Any updates requiring approval are displayed next to Upgrade status. For example, it might display 1 requires approval.
  4. Click 1 requires approval, then click Preview Install Plan.
  5. Review the resources that are listed as available for update. When satisfied, click Approve.
  6. Navigate back to the Operators → Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date.

6.1.6. Monitoring update status

6.1.6.1. Monitoring OpenShift Virtualization upgrade status

To monitor the status of an OpenShift Virtualization Operator upgrade, watch the cluster service version (CSV) PHASE. You can also monitor the CSV conditions in the web console or by running the command provided here.

Note

The PHASE and conditions values are approximations that are based on available information.

Prerequisites

  • Log in to the cluster as a user with the cluster-admin role.
  • Install the OpenShift CLI (oc).

Procedure

  1. Run the following command:

    $ oc get csv -n openshift-cnv
  2. Review the output, checking the PHASE field. For example:

    Example output

    VERSION  REPLACES                                        PHASE
    4.9.0    kubevirt-hyperconverged-operator.v4.8.2         Installing
    4.9.0    kubevirt-hyperconverged-operator.v4.9.0         Replacing

  3. Optional: Monitor the aggregated status of all OpenShift Virtualization component conditions by running the following command:

    $ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv \
      -o=jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.message}{"\n"}{end}'

    A successful upgrade results in the following output:

    Example output

    ReconcileComplete  True  Reconcile completed successfully
    Available          True  Reconcile completed successfully
    Progressing        False Reconcile completed successfully
    Degraded           False Reconcile completed successfully
    Upgradeable        True  Reconcile completed successfully

6.1.6.2. Viewing outdated OpenShift Virtualization workloads

You can view a list of outdated workloads by using the CLI.

Note

If there are outdated virtualization pods in your cluster, the OutdatedVirtualMachineInstanceWorkloads alert fires.

Procedure

  • To view a list of outdated virtual machine instances (VMIs), run the following command:

    $ oc get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces
Note

Configure workload updates to ensure that VMIs update automatically.

6.1.7. Additional resources

Chapter 7. Virtual machines

7.1. Creating VMs from Red Hat images

7.1.1. Creating virtual machines from Red Hat images overview

Red Hat images are golden images. They are published as container disks in a secure registry. The Containerized Data Importer (CDI) polls and imports the container disks into your cluster and stores them in the openshift-virtualization-os-images project as snapshots or persistent volume claims (PVCs).

Red Hat images are automatically updated. You can disable and re-enable automatic updates for these images. See Managing Red Hat boot source updates.

Cluster administrators can enable automatic subscription for Red Hat Enterprise Linux (RHEL) virtual machines in the OpenShift Virtualization web console.

You can create virtual machines (VMs) from operating system images provided by Red Hat by using one of the following methods:

Important

Do not create VMs in the default openshift-* namespaces. Instead, create a new namespace or use an existing namespace without the openshift prefix.

7.1.1.1. About golden images

A golden image is a preconfigured snapshot of a virtual machine (VM) that you can use as a resource to deploy new VMs. For example, you can use golden images to provision the same system environment consistently and deploy systems more quickly and efficiently.

7.1.1.1.1. How do golden images work?

Golden images are created by installing and configuring an operating system and software applications on a reference machine or virtual machine. This includes setting up the system, installing required drivers, applying patches and updates, and configuring specific options and preferences.

After the golden image is created, it is saved as a template or image file that can be replicated and deployed across multiple clusters. The golden image can be updated by its maintainer periodically to incorporate necessary software updates and patches, ensuring that the image remains up to date and secure, and newly created VMs are based on this updated image.

7.1.1.1.2. Red Hat implementation of golden images

Red Hat publishes golden images as container disks in the registry for versions of Red Hat Enterprise Linux (RHEL). Container disks are virtual machine images that are stored as a container image in a container image registry. Any published image will automatically be made available in connected clusters after the installation of OpenShift Virtualization. After the images are available in a cluster, they are ready to use to create VMs.

7.1.1.2. About VM boot sources

Virtual machines (VMs) consist of a VM definition and one or more disks that are backed by data volumes. VM templates enable you to create VMs using predefined specifications.

Every template requires a boot source, which is a fully configured disk image including configured drivers. Each template contains a VM definition with a pointer to the boot source. Each boot source has a predefined name and namespace. For some operating systems, a boot source is automatically provided. If it is not provided, then an administrator must prepare a custom boot source.

Provided boot sources are updated automatically to the latest version of the operating system. For auto-updated boot sources, persistent volume claims (PVCs) and volume snapshots are created with the cluster’s default storage class. If you select a different default storage class after configuration, you must delete the existing boot sources in the cluster namespace that are configured with the previous default storage class.

7.1.2. Creating virtual machines from templates

You can create virtual machines (VMs) from Red Hat templates by using the OpenShift Container Platform web console.

7.1.2.1. About VM templates
Boot sources

You can expedite VM creation by using templates that have an available boot source. Templates with a boot source are labeled Available boot source if they do not have a custom label.

Templates without a boot source are labeled Boot source required. See Creating virtual machines from custom images.

Customization

You can customize the disk source and VM parameters before you start the VM:

Note

If you copy a VM template with all its labels and annotations, your version of the template is marked as deprecated when a new version of the Scheduling, Scale, and Performance (SSP) Operator is deployed. You can remove this designation. See Customizing a VM template by using the web console.

Single-node OpenShift
Due to differences in storage behavior, some templates are incompatible with single-node OpenShift. To ensure compatibility, do not set the evictionStrategy field for templates or VMs that use data volumes or storage profiles.
7.1.2.2. Creating a VM from a template

You can create a virtual machine (VM) from a template with an available boot source by using the OpenShift Container Platform web console.

Optional: You can customize template or VM parameters, such as data sources, cloud-init, or SSH keys, before you start the VM.

Procedure

  1. Navigate to Virtualization → Catalog in the web console.
  2. Click Boot source available to filter templates with boot sources.

    The catalog displays the default templates. Click All Items to view all available templates for your filters.

  3. Click a template tile to view its details.
  4. Click Quick create VirtualMachine to create a VM from the template.

    Optional: Customize the template or VM parameters:

    1. Click Customize VirtualMachine.
    2. Expand Storage or Optional parameters to edit data source settings.
    3. Click Customize VirtualMachine parameters.

      The Customize and create VirtualMachine pane displays the Overview, YAML, Scheduling, Environment, Network interfaces, Disks, Scripts, and Metadata tabs.

    4. Edit the parameters that must be set before the VM boots, such as cloud-init or a static SSH key.
    5. Click Create VirtualMachine.

      The VirtualMachine details page displays the provisioning status.

7.1.2.2.1. Storage volume types
Table 7.1. Storage volume types
Type | Description

ephemeral

A local copy-on-write (COW) image that uses a network volume as a read-only backing store. The backing volume must be a PersistentVolumeClaim. The ephemeral image is created when the virtual machine starts and stores all writes locally. The ephemeral image is discarded when the virtual machine is stopped, restarted, or deleted. The backing volume (PVC) is not mutated in any way.

persistentVolumeClaim

Attaches an available PV to a virtual machine. Attaching a PV allows for the virtual machine data to persist between sessions.

Importing an existing virtual machine disk into a PVC by using CDI and attaching the PVC to a virtual machine instance is the recommended method for importing existing virtual machines into OpenShift Container Platform. There are some requirements for the disk to be used within a PVC.

dataVolume

Data volumes build on the persistentVolumeClaim disk type by managing the process of preparing the virtual machine disk via an import, clone, or upload operation. VMs that use this volume type are guaranteed not to start until the volume is ready.

Specify type: dataVolume or type: "". If you specify any other value for type, such as persistentVolumeClaim, a warning is displayed, and the virtual machine does not start.

cloudInitNoCloud

Attaches a disk that contains the referenced cloud-init NoCloud data source, providing user data and metadata to the virtual machine. A cloud-init installation is required inside the virtual machine disk.

containerDisk

References an image, such as a virtual machine disk, that is stored in the container image registry. The image is pulled from the registry and attached to the virtual machine as a disk when the virtual machine is launched.

A containerDisk volume is not limited to a single virtual machine and is useful for creating large numbers of virtual machine clones that do not require persistent storage.

Only RAW and QCOW2 formats are supported disk types for the container image registry. QCOW2 is recommended for reduced image size.

Note

A containerDisk volume is ephemeral. It is discarded when the virtual machine is stopped, restarted, or deleted. A containerDisk volume is useful for read-only file systems such as CD-ROMs or for disposable virtual machines.

emptyDisk

Creates an additional sparse QCOW2 disk that is tied to the lifecycle of the virtual machine. The data survives guest-initiated reboots in the virtual machine but is discarded when the virtual machine stops or is restarted from the web console. The empty disk is used to store application dependencies and data that otherwise exceed the limited temporary file system of an ephemeral disk.

The disk capacity size must also be provided.

7.1.2.2.2. Storage fields
Field | Description

Blank (creates PVC)

Create an empty disk.

Import via URL (creates PVC)

Import content via URL (HTTP or HTTPS endpoint).

Use an existing PVC

Use a PVC that is already available in the cluster.

Clone existing PVC (creates PVC)

Select an existing PVC available in the cluster and clone it.

Import via Registry (creates PVC)

Import content via container registry.

Container (ephemeral)

Upload content from a container located in a registry accessible from the cluster. The container disk should be used only for read-only filesystems such as CD-ROMs or temporary virtual machines.

Name

Name of the disk. The name can contain lowercase letters (a-z), numbers (0-9), hyphens (-), and periods (.), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, or special characters.

Size

Size of the disk in GiB.

Type

Type of disk. Example: Disk or CD-ROM

Interface

Type of disk device. Supported interfaces are virtIO, SATA, and SCSI.

Storage Class

The storage class that is used to create the disk.

Advanced storage settings

The following advanced storage settings are optional and available for Blank, Import via URL, and Clone existing PVC disks.

If you do not specify these parameters, the system uses the default storage profile values.

Parameter | Option | Parameter description

Volume Mode

Filesystem

Stores the virtual disk on a file system-based volume.

Block

Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it.

Access Mode

ReadWriteOnce (RWO)

Volume can be mounted as read-write by a single node.

ReadWriteMany (RWX)

Volume can be mounted as read-write by many nodes at one time.

Note

This mode is required for live migration.

7.1.2.2.3. Customizing a VM template by using the web console

You can customize an existing virtual machine (VM) template by modifying the VM or template parameters, such as data sources, cloud-init, or SSH keys, before you start the VM. If you customize a template by copying it and including all of its labels and annotations, the customized template is marked as deprecated when a new version of the Scheduling, Scale, and Performance (SSP) Operator is deployed.

You can remove the deprecated designation from the customized template.

Procedure

  1. Navigate to Virtualization → Templates in the web console.
  2. From the list of VM templates, click the template marked as deprecated.
  3. Click the pencil icon beside Labels.
  4. Remove the following two labels:

    • template.kubevirt.io/type: "base"
    • template.kubevirt.io/version: "version"
  5. Click Save.
  6. Click the pencil icon beside the number of existing Annotations.
  7. Remove the following annotation:

    • template.kubevirt.io/deprecated
  8. Click Save.

7.1.3. Creating virtual machines from instance types

You can create virtual machines (VMs) from instance types by using the OpenShift Container Platform web console.

Important

Creating a VM from an instance type is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

7.1.3.1. Creating a VM from an instance type

You can create a virtual machine (VM) from an instance type by using the OpenShift Container Platform web console.

Procedure

  1. In the web console, navigate to Virtualization → Catalog and click the InstanceTypes tab.
  2. Select a bootable volume.

    Note

    The volume table only lists volumes in the openshift-virtualization-os-images namespace that have the instancetype.kubevirt.io/default-preference label.

  3. Click an instance type tile and select the configuration appropriate for your workload.
  4. If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key in the VirtualMachine details section.
  5. Select one of the following options:

    • Use existing: Select a secret from the secrets list.
    • Add new:

      1. Browse to the public SSH key file or paste the file in the key field.
      2. Enter the secret name.
      3. Optional: Select Automatically apply this key to any new VirtualMachine you create in this project.
      4. Click Save.
  6. Optional: Click View YAML & CLI to view the YAML file. Click CLI to view the CLI commands. You can also download or copy either the YAML file contents or the CLI commands.
  7. Click Create VirtualMachine.

After the VM is created, you can monitor the status on the VirtualMachine details page.
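
As noted in step 2, the catalog shows only labeled volumes from the openshift-virtualization-os-images namespace. The following command is a sketch for checking which objects carry that label; depending on how your boot sources were created, the label might appear on DataSource objects, on PVCs, or on both.

$ oc get datasource,pvc -n openshift-virtualization-os-images \
    -l instancetype.kubevirt.io/default-preference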

7.1.4. Creating virtual machines from the command line

You can create virtual machines (VMs) from the command line by editing or creating a VirtualMachine manifest.

7.1.4.1. Creating a VM from a VirtualMachine manifest

You can create a virtual machine (VM) from a VirtualMachine manifest.

Procedure

  1. Edit the VirtualMachine manifest for your VM. The following example configures a Red Hat Enterprise Linux (RHEL) VM:

    Example 7.1. Example manifest for a RHEL VM

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      labels:
        app: <vm_name> 1
      name: <vm_name>
    spec:
      dataVolumeTemplates:
      - apiVersion: cdi.kubevirt.io/v1beta1
        kind: DataVolume
        metadata:
          name: <vm_name>
        spec:
          sourceRef:
            kind: DataSource
            name: rhel9 2
            namespace: openshift-virtualization-os-images
          storage:
            resources:
              requests:
                storage: 30Gi
      running: false
      template:
        metadata:
          labels:
            kubevirt.io/domain: <vm_name>
        spec:
          domain:
            cpu:
              cores: 1
              sockets: 2
              threads: 1
            devices:
              disks:
              - disk:
                  bus: virtio
                name: rootdisk
              - disk:
                  bus: virtio
                name: cloudinitdisk
              interfaces:
              - masquerade: {}
                name: default
              rng: {}
            features:
              smm:
                enabled: true
            firmware:
              bootloader:
                efi: {}
            resources:
              requests:
                memory: 8Gi
          evictionStrategy: LiveMigrate
          networks:
          - name: default
            pod: {}
          volumes:
          - dataVolume:
              name: <vm_name>
            name: rootdisk
          - cloudInitNoCloud:
              userData: |-
                #cloud-config
                user: cloud-user
                password: '<password>' 3
                chpasswd: { expire: False }
            name: cloudinitdisk
    1
    Specify the name of the virtual machine.
    2
    Specify the name that is set in the spec.dataImportCronTemplates.spec.managedDataSource field of the HyperConverged CR.
    3
    Specify the password for cloud-user.
  2. Create a virtual machine by using the manifest file:

    $ oc create -f <vm_manifest_file>.yaml
  3. Optional: Start the virtual machine:

    $ virtctl start <vm_name> -n <namespace>

7.2. Creating VMs from custom images

7.2.1. Creating virtual machines from custom images overview

You can create virtual machines (VMs) from custom operating system images by using one of the methods described in the following sections.

The Containerized Data Importer (CDI) imports the image into a PVC by using a data volume. You add the PVC to the VM by using the OpenShift Container Platform web console or command line.
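
After the import completes, you attach the resulting PVC to the VM like any other disk. The following fragment is a minimal sketch of the relevant part of a VirtualMachine spec; the PVC name is a placeholder.

# Fragment of a VirtualMachine spec, not a complete manifest
spec:
  template:
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: rootdisk
      volumes:
      - name: rootdisk
        persistentVolumeClaim:
          claimName: <pvc_name>   # PVC created by the CDI import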

Important

You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat.

You must also install VirtIO drivers on Windows VMs.

The QEMU guest agent is included with Red Hat images.
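
For Linux guests that support cloud-init, one way to install the agent at first boot is through the cloudInitNoCloud user data, as in the following sketch. The package and service names assume a Fedora or RHEL-compatible guest; adjust them for other distributions.

#cloud-config
packages:
  - qemu-guest-agent
runcmd:
  - systemctl enable --now qemu-guest-agent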

7.2.2. Creating VMs by using container disks

You can create virtual machines (VMs) by using container disks built from operating system images.

You can enable auto updates for your container disks. See Managing automatic boot source updates for details.

Important

If the container disks are large, the I/O traffic might increase and cause worker nodes to be unavailable. You can perform the following tasks to resolve this issue:

You create a VM from a container disk by performing the following steps:

  1. Build an operating system image into a container disk and upload it to your container registry.
  2. If your container registry does not have TLS, configure your environment to disable TLS for your registry.
  3. Create a VM with the container disk as the disk source by using the web console or the command line.
Important

You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat.

7.2.2.1. Building and uploading a container disk

You can build a virtual machine (VM) image into a container disk and upload it to a registry.

The size of a container disk is limited by the maximum layer size of the registry where the container disk is hosted.

Note

For Red Hat Quay, you can change the maximum layer size by editing the YAML configuration file that is created when Red Hat Quay is first deployed.

Prerequisites

  • You must have podman installed.
  • You must have a QCOW2 or RAW image file.

Procedure

  1. Create a Dockerfile to build the VM image into a container image. The VM image must be owned by QEMU, which has a UID of 107, and placed in the /disk/ directory inside the container. Permissions for the /disk/ directory must then be set to 0440.

    The following example uses the Red Hat Universal Base Image (UBI) to handle these configuration changes in the first stage, and uses the minimal scratch image in the second stage to store the result:

    $ cat > Dockerfile << EOF
    FROM registry.access.redhat.com/ubi8/ubi:latest AS builder
    ADD --chown=107:107 <vm_image>.qcow2 /disk/ 1
    RUN chmod 0440 /disk/*
    
    FROM scratch
    COPY --from=builder /disk/* /disk/
    EOF
    1
    Where <vm_image> is the image in either QCOW2 or RAW format. If you use a remote image, replace <vm_image>.qcow2 with the complete URL.
  2. Build and tag the container:

    $ podman build -t <registry>/<container_disk_name>:latest .
  3. Push the container image to the registry:

    $ podman push <registry>/<container_disk_name>:latest
7.2.2.2. Disabling TLS for a container registry

You can disable TLS (transport layer security) for one or more container registries by editing the insecureRegistries field of the HyperConverged custom resource.

Procedure

  1. Open the HyperConverged CR in your default editor by running the following command:

    $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. Add a list of insecure registries to the spec.storageImport.insecureRegistries field.

    Example HyperConverged custom resource

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      storageImport:
        insecureRegistries: 1
          - "private-registry-example-1:5000"
          - "private-registry-example-2:5000"

    1
    Replace the examples in this list with valid registry hostnames.
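
Alternatively, you can make the same change without opening an editor by patching the CR. The following command is a sketch; note that a merge patch replaces the entire insecureRegistries list, so include every registry that you want to keep.

$ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \
    --type merge \
    -p '{"spec":{"storageImport":{"insecureRegistries":["private-registry-example-1:5000","private-registry-example-2:5000"]}}}'
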
7.2.2.3. Creating a VM from a container disk by using the web console

You can create a virtual machine (VM) by importing a container disk from a container registry by using the OpenShift Container Platform web console.

Procedure

  1. Navigate to Virtualization → Catalog in the web console.
  2. Click a template tile without an available boot source.
  3. Click Customize VirtualMachine.
  4. On the Customize template parameters page, expand Storage and select Registry (creates PVC) from the Disk source list.
  5. Enter the container image URL, for example, the container disk that you pushed to your registry: <registry>/<container_disk_name>:latest
  6. Set the disk size.
  7. Click Next.
  8. Click Create VirtualMachine.
7.2.2.4. Creating a VM from a container disk by using the command line

You can create a virtual machine (VM) from a container disk by using the command line.

When the virtual machine (VM) is created, the data volume with the container disk is imported into persistent storage.

Prerequisites

  • You must have access credentials for the container registry that contains the container disk.

Procedure

  1. If the container registry requires authentication, create a Secret manifest, specifying the credentials, and save it as a data-source-secret.yaml file:

    apiVersion: v1
    kind: Secret
    metadata:
      name: data-source-secret
      labels:
        app: containerized-data-importer
    type: Opaque
    data:
      accessKeyId: "" 1
      secretKey:   "" 2
    1
    Specify the Base64-encoded key ID or user name.
    2
    Specify the Base64-encoded secret key or password.
  2. Apply the Secret manifest by running the following command:

    $ oc apply -f data-source-secret.yaml
  3. If the VM must communicate with servers that use self-signed certificates or certificates that are not signed by the system CA bundle, create a config map in the same namespace as the VM:

    $ oc create configmap tls-certs 1
      --from-file=</path/to/file/ca.pem> 2
    1
    Specify the config map name.
    2
    Specify the path to the CA certificate.
  4. Edit the VirtualMachine manifest and save it as a vm-fedora-datavolume.yaml file:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      creationTimestamp: null
      labels:
        kubevirt.io/vm: vm-fedora-datavolume
      name: vm-fedora-datavolume 1
    spec:
      dataVolumeTemplates:
      - metadata:
          creationTimestamp: null
          name: fedora-dv 2
        spec:
          storage:
            resources:
              requests:
                storage: 10Gi 3
            storageClassName: <storage_class> 4
          source:
            registry:
              url: "docker://kubevirt/fedora-cloud-container-disk-demo:latest" 5
              secretRef: data-source-secret 6
              certConfigMap: tls-certs 7
        status: {}
      running: true
      template:
        metadata:
          creationTimestamp: null
          labels:
            kubevirt.io/vm: vm-fedora-datavolume
        spec:
          domain:
            devices:
              disks:
              - disk:
                  bus: virtio
                name: datavolumedisk1
            machine:
              type: ""
            resources:
              requests:
                memory: 1.5Gi
          terminationGracePeriodSeconds: 180
          volumes:
          - dataVolume:
              name: fedora-dv
            name: datavolumedisk1
    status: {}
    1
    Specify the name of the VM.
    2
    Specify the name of the data volume.
    3
    Specify the size of the storage requested for the data volume.
    4
    Optional: If you do not specify a storage class, the default storage class is used.
    5
    Specify the URL of the container registry.
    6
    Optional: Specify the secret name if you created a secret for the container registry access credentials.
    7
    Optional: Specify a CA certificate config map.
  5. Create the VM by running the following command:

    $ oc create -f vm-fedora-datavolume.yaml

    The oc create command creates the data volume and the VM. The CDI controller creates an underlying PVC with the correct annotation and the import process begins. When the import is complete, the data volume status changes to Succeeded. You can start the VM.

    Data volume provisioning happens in the background, so there is no need to monitor the process.

Verification

  1. The importer pod downloads the container disk from the specified URL and stores it on the provisioned persistent volume. View the status of the importer pod by running the following command:

    $ oc get pods
  2. Monitor the data volume until its status is Succeeded by running the following command:

    $ oc describe dv fedora-dv 1
    1
    Specify the data volume name that you defined in the VirtualMachine manifest.
  3. Verify that provisioning is complete and that the VM has started by accessing its serial console:

    $ virtctl console vm-fedora-datavolume

7.2.3. Creating VMs by importing images from web pages

You can create virtual machines (VMs) by importing operating system images from web pages.

Important

You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat.

7.2.3.1. Creating a VM from an image on a web page by using the web console

You can create a virtual machine (VM) by importing an image from a web page by using the OpenShift Container Platform web console.

Prerequisites

  • You must have access to the web page that contains the image.

Procedure

  1. Navigate to Virtualization → Catalog in the web console.
  2. Click a template tile without an available boot source.
  3. Click Customize VirtualMachine.
  4. On the Customize template parameters page, expand Storage and select URL (creates PVC) from the Disk source list.
  5. Enter the image URL. Example: https://mirror.arizona.edu/fedora/linux/releases/38/Cloud/x86_64/images/Fedora-Cloud-Base-38-1.6.x86_64.qcow2. For Red Hat Enterprise Linux images, you can obtain a download URL from https://access.redhat.com/downloads/content/69/ver=/rhel---7/7.9/x86_64/product-software.
  6. Set the disk size.
  7. Click Next.
  8. Click Create VirtualMachine.
7.2.3.2. Creating a VM from an image on a web page by using the command line

You can create a virtual machine (VM) from an image on a web page by using the command line.

When the virtual machine (VM) is created, the data volume with the image is imported into persistent storage.

Prerequisites

  • You must have access credentials for the web page that contains the image.

Procedure

  1. If the web page requires authentication, create a Secret manifest, specifying the credentials, and save it as a data-source-secret.yaml file:

    apiVersion: v1
    kind: Secret
    metadata:
      name: data-source-secret
      labels:
        app: containerized-data-importer
    type: Opaque
    data:
      accessKeyId: "" 1
      secretKey:   "" 2
    1
    Specify the Base64-encoded key ID or user name.
    2
    Specify the Base64-encoded secret key or password.
  2. Apply the Secret manifest by running the following command:

    $ oc apply -f data-source-secret.yaml
  3. If the VM must communicate with servers that use self-signed certificates or certificates that are not signed by the system CA bundle, create a config map in the same namespace as the VM:

    $ oc create configmap tls-certs 1
      --from-file=</path/to/file/ca.pem> 2
    1
    Specify the config map name.
    2
    Specify the path to the CA certificate.
  4. Edit the VirtualMachine manifest and save it as a vm-fedora-datavolume.yaml file:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      creationTimestamp: null
      labels:
        kubevirt.io/vm: vm-fedora-datavolume
      name: vm-fedora-datavolume 1
    spec:
      dataVolumeTemplates:
      - metadata:
          creationTimestamp: null
          name: fedora-dv 2
        spec:
          storage:
            resources:
              requests:
                storage: 10Gi 3
            storageClassName: <storage_class> 4
          source:
            http:
              url: "https://mirror.arizona.edu/fedora/linux/releases/35/Cloud/x86_64/images/Fedora-Cloud-Base-35-1.2.x86_64.qcow2" 5
              secretRef: data-source-secret 6
              certConfigMap: tls-certs 7
        status: {}
      running: true
      template:
        metadata:
          creationTimestamp: null
          labels:
            kubevirt.io/vm: vm-fedora-datavolume
        spec:
          domain:
            devices:
              disks:
              - disk:
                  bus: virtio
                name: datavolumedisk1
            machine:
              type: ""
            resources:
              requests:
                memory: 1.5Gi
          terminationGracePeriodSeconds: 180
          volumes:
          - dataVolume:
              name: fedora-dv
            name: datavolumedisk1
    status: {}
    1
    Specify the name of the VM.
    2
    Specify the name of the data volume.
    3
    Specify the size of the storage requested for the data volume.
    4
    Optional: If you do not specify a storage class, the default storage class is used.
    5
    Specify the URL of the web page that hosts the image.
    6
    Optional: Specify the secret name if you created a secret for the web page access credentials.
    7
    Optional: Specify a CA certificate config map.
  5. Create the VM by running the following command:

    $ oc create -f vm-fedora-datavolume.yaml

    The oc create command creates the data volume and the VM. The CDI controller creates an underlying PVC with the correct annotation and the import process begins. When the import is complete, the data volume status changes to Succeeded. You can start the VM.

    Data volume provisioning happens in the background, so there is no need to monitor the process.

Verification

  1. The importer pod downloads the image from the specified URL and stores it on the provisioned persistent volume. View the status of the importer pod by running the following command:

    $ oc get pods
  2. Monitor the data volume until its status is Succeeded by running the following command:

    $ oc describe dv fedora-dv 1
    1
    Specify the data volume name that you defined in the VirtualMachine manifest.
  3. Verify that provisioning is complete and that the VM has started by accessing its serial console:

    $ virtctl console vm-fedora-datavolume

7.2.4. Creating VMs by uploading images

You can create virtual machines (VMs) by uploading operating system images from your local machine.

You can create a Windows VM by uploading a Windows image to a PVC. Then you clone the PVC when you create the VM.

Important

You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat.

You must also install VirtIO drivers on Windows VMs.
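
If you prefer the command line, the virtctl image-upload command can create the data volume and upload the image in one step, as in the following sketch. The data volume name, size, and image path are placeholders; add --insecure only if the upload proxy endpoint uses a certificate that your client does not trust.

$ virtctl image-upload dv <datavolume_name> \
    --size=<disk_size> \
    --image-path=</path/to/image.qcow2> \
    --insecure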

7.2.4.1. Creating a VM from an uploaded image by using the web console