
Virtualization

OpenShift Container Platform 4.10

OpenShift Virtualization installation, usage, and release notes

Red Hat OpenShift Documentation Team

Abstract

This document provides information about how to use OpenShift Virtualization in OpenShift Container Platform.

Chapter 1. About OpenShift Virtualization

Learn about OpenShift Virtualization’s capabilities and support scope.

1.1. What you can do with OpenShift Virtualization

OpenShift Virtualization is an add-on to OpenShift Container Platform that allows you to run and manage virtual machine workloads alongside container workloads.

OpenShift Virtualization adds new objects into your OpenShift Container Platform cluster by using Kubernetes custom resources to enable virtualization tasks. These tasks include:

  • Creating and managing Linux and Windows virtual machines
  • Connecting to virtual machines through a variety of consoles and CLI tools
  • Importing and cloning existing virtual machines
  • Managing network interface controllers and storage disks attached to virtual machines
  • Live migrating virtual machines between nodes

An enhanced web console provides a graphical portal to manage these virtualized resources alongside the OpenShift Container Platform cluster containers and infrastructure.

OpenShift Virtualization is designed and tested to work well with Red Hat OpenShift Data Foundation features.

Important

When you deploy OpenShift Virtualization with OpenShift Data Foundation, you must create a dedicated storage class for Windows virtual machine disks. See Optimizing ODF PersistentVolumes for Windows VMs for details.

You can use OpenShift Virtualization with the OVN-Kubernetes, OpenShift SDN, or one of the other certified default Container Network Interface (CNI) network providers listed in Certified OpenShift CNI Plugins.

1.1.1. OpenShift Virtualization supported cluster version

OpenShift Virtualization 4.10 is supported for use on OpenShift Container Platform 4.10 clusters. To use the latest z-stream release of OpenShift Virtualization, you must first upgrade to the latest version of OpenShift Container Platform.

Chapter 2. Getting started with OpenShift Virtualization

You can install and configure a basic OpenShift Virtualization environment to explore its features and functionality.

Note

Cluster configuration procedures require cluster-admin privileges.

2.1. Before you begin

2.2. Getting started

  • Create a virtual machine
  • Connect to a virtual machine
  • Manage a virtual machine:
    • Stop, start, pause, and restart a virtual machine from the web console.
    • Manage a virtual machine, expose a port, and connect to the serial console of a virtual machine from the command line with virtctl.
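For reference, the following virtctl commands sketch these tasks. The names in angle brackets are placeholders and the port values are examples only:

    $ virtctl start <vm_name>        # start a stopped virtual machine
    $ virtctl stop <vm_name>         # stop a running virtual machine
    $ virtctl pause vm <vm_name>     # pause a running virtual machine
    $ virtctl restart <vm_name>      # restart a virtual machine
    $ virtctl expose vm <vm_name> --name=<service_name> --port=27017 --target-port=22   # expose a VM port as a service
    $ virtctl console <vm_name>      # connect to the serial console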

2.3. Next steps

Connect VMs to secondary networks
Monitor your OpenShift Virtualization environment
Automate your deployments

2.4. Additional resources

Chapter 3. OpenShift Virtualization release notes

3.1. About Red Hat OpenShift Virtualization

Red Hat OpenShift Virtualization enables you to bring traditional virtual machines (VMs) into OpenShift Container Platform where they run alongside containers, and are managed as native Kubernetes objects.

OpenShift Virtualization is represented by the OpenShift Virtualization icon.

You can use OpenShift Virtualization with either the OVN-Kubernetes or the OpenShift SDN default Container Network Interface (CNI) network provider.

Learn more about what you can do with OpenShift Virtualization.

3.1.1. OpenShift Virtualization supported cluster version

OpenShift Virtualization 4.10 is supported for use on OpenShift Container Platform 4.10 clusters. To use the latest z-stream release of OpenShift Virtualization, you must first upgrade to the latest version of OpenShift Container Platform.

3.1.2. Supported guest operating systems

To view the supported guest operating systems for OpenShift Virtualization, refer to Certified Guest Operating Systems in Red Hat OpenStack Platform, Red Hat Virtualization and OpenShift Virtualization.

3.2. Making open source more inclusive

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

3.3. New and changed features

  • OpenShift Virtualization is certified in Microsoft’s Windows Server Virtualization Validation Program (SVVP) to run Windows Server workloads.

    The SVVP Certification applies to:

    • Red Hat Enterprise Linux CoreOS workers. In the Microsoft SVVP Catalog, they are named Red Hat OpenShift Container Platform 4 on RHEL CoreOS 8.
    • Intel and AMD CPUs.
  • OpenShift Virtualization is now integrated with OpenShift Service Mesh. You can connect virtual machines to a service mesh to monitor, visualize, and control traffic between pods that run virtual machine workloads on the default pod network with IPv4.

3.3.1. Quick starts

  • Quick start tours are available for several OpenShift Virtualization features. To view the tours, click the Help icon (?) in the menu bar of the OpenShift Virtualization console header and then select Quick Starts. You can filter the available tours by entering the virtual machine keyword in the Filter field.

3.3.2. Installation

  • OpenShift Virtualization workloads, such as virt-launcher pods, now automatically update if they support live migration. You can configure workload update strategies or opt out of future automatic updates by editing the HyperConverged custom resource.
  • You can now use OpenShift Virtualization with single node clusters, also known as Single Node OpenShift (SNO).

    Note

    Single node clusters are not configured for high-availability operation, which results in significant changes to OpenShift Virtualization behavior.

  • Resource requests and priority classes are now defined for all OpenShift Virtualization control plane components.

3.3.3. Networking

  • Live migration is now supported by default for virtual machines that are attached to an SR-IOV network interface.

3.3.4. Storage

  • Online snapshots are supported for virtual machines that have hot-plugged virtual disks. However, hot-plugged disks that are not in the virtual machine specification are not included in the snapshot.
  • You can use the Kubernetes Container Storage Interface (CSI) driver with the hostpath provisioner (HPP) to configure local storage for your virtual machines. Using the CSI driver minimizes disruption to your existing OpenShift Container Platform nodes and clusters when configuring local storage.

3.3.5. Web console

  • The OpenShift Virtualization dashboard provides resource consumption data for virtual machines and associated pods. The visualization metrics displayed in the OpenShift Virtualization dashboard are based on Prometheus Query Language (PromQL) queries.

3.4. Deprecated and removed features

3.4.1. Deprecated features

Deprecated features are included in the current release and supported. However, they will be removed in a future release and are not recommended for new deployments.

  • In a future release, support for the legacy HPP custom resource, and the associated storage class, will be deprecated. Beginning in OpenShift Virtualization 4.10, the HPP Operator uses the Kubernetes Container Storage Interface (CSI) driver to configure local storage. The Operator continues to support the existing (legacy) format of the HPP custom resource and the associated storage class. If you use the HPP Operator, plan to create a storage class for the CSI driver as part of your migration strategy.
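    As a starting point, a storage class for the CSI driver might look like the following minimal sketch. The provisioner name matches the kubevirt.io.hostpath-provisioner CSI driver referenced elsewhere in this document; the storagePool parameter is an assumption and must name a storage pool that you define in your hostpath provisioner configuration:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: hostpath-csi                              # example name
    provisioner: kubevirt.io.hostpath-provisioner     # CSI driver for the hostpath provisioner
    reclaimPolicy: Delete
    volumeBindingMode: WaitForFirstConsumer           # delay binding until a consuming pod is scheduled
    parameters:
      storagePool: <storage_pool_name>                # assumed parameter; must match a pool in your HPP configuration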

3.4.2. Removed features

Removed features are not supported in the current release.

  • The VM Import Operator has been removed from OpenShift Virtualization with this release. It is replaced by the Migration Toolkit for Virtualization.
  • This release removes the template for CentOS Linux 8, which reached End of Life (EOL) on December 31, 2021. However, OpenShift Container Platform now includes templates for CentOS Stream 8 and CentOS Stream 9.

    Note

    All CentOS distributions are community-supported.

3.5. Technology Preview features

Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. The Red Hat Customer Portal provides the Technology Preview Features Support Scope for these features:

3.6. Bug fixes

  • If you initiate a cloning operation before the clone source becomes available, the cloning operation now completes successfully without using a workaround. (BZ#1855182)
  • Editing a virtual machine fails if the VM references a deleted template that was provided by OpenShift Virtualization before version 4.8. In OpenShift Virtualization 4.8 and later, deleted OpenShift Virtualization-provided templates are automatically recreated by the OpenShift Virtualization Operator. (BZ#1929165)
  • You can now successfully use the Send Keys and Disconnect buttons when using a virtual machine with a VNC console. (BZ#1964789)
  • When you create a virtual machine, its unique fully qualified domain name (FQDN) now contains the cluster domain name. (BZ#1998300)
  • If you hot-plug a virtual disk and then force delete the virt-launcher pod, you no longer lose data. (BZ#2007397)
  • OpenShift Virtualization now issues a HPPSharingPoolPathWithOS alert if you try to install the hostpath provisioner (HPP) on a path that shares the filesystem with other critical components.

    To use the HPP to provide storage for virtual machine disks, configure it with dedicated storage that is separate from the node’s root filesystem. Otherwise, the node might run out of storage and become non-functional. (BZ#2038985)

  • If you provision a virtual machine disk, OpenShift Virtualization now allocates a persistent volume claim (PVC) that is just large enough to accommodate the requested disk size, rather than issuing a KubePersistentVolumeFillingUp alert for each VM disk PVC. You can monitor disk usage from within the virtual machine itself. (BZ#2039489)
  • You can now create a virtual machine snapshot for VMs with hot-plugged disks. (BZ#2042908)
  • You can now successfully import a VM image when using a cluster-wide proxy configuration. (BZ#2046271)

3.7. Known issues

  • You cannot run OpenShift Virtualization on a single-stack IPv6 cluster. (BZ#2193267)
  • When you use two pods with different SELinux contexts, VMs with the ocs-storagecluster-cephfs storage class fail to migrate and the VM status changes to Paused. This is because both pods try to access the shared ReadWriteMany CephFS volume at the same time. (BZ#2092271)

    • As a workaround, use the ocs-storagecluster-ceph-rbd storage class to live migrate VMs on a cluster that uses Red Hat Ceph Storage.
  • Updating to OpenShift Virtualization 4.10.5 causes some virtual machines (VMs) to get stuck in a live migration loop. This occurs if the spec.volumes.containerDisk.path field in the VM manifest is set to a relative path.

    • As a workaround, delete and recreate the VM manifest, setting the value of the spec.volumes.containerDisk.path field to an absolute path. You can then update OpenShift Virtualization.
  • If a single node contains more than 50 images, pod scheduling might be imbalanced across nodes. This is because the list of images on a node is shortened to 50 by default. (BZ#1984442)

  • If you deploy the hostpath provisioner on a cluster where any node has a fully qualified domain name (FQDN) that exceeds 42 characters, the provisioner fails to bind PVCs. (BZ#2057157)

    Example error message

    E0222 17:52:54.088950       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: unable to parse requirement: values[0][csi.storage.k8s.io/managed-by]: Invalid value: "external-provisioner-<node_FQDN>": must be no more than 63 characters 1

    1
    Though the error message refers to a maximum of 63 characters, this includes the external-provisioner- string that is prefixed to the node’s FQDN.
    • As a workaround, disable the storageCapacity option in the hostpath provisioner CSI driver by running the following command:

      $ oc patch csidriver kubevirt.io.hostpath-provisioner --type merge --patch '{"spec": {"storageCapacity": false}}'
  • If your OpenShift Container Platform cluster uses OVN-Kubernetes as the default Container Network Interface (CNI) provider, you cannot attach a Linux bridge or bonding device to a host’s default interface because of a change in the host network topology of OVN-Kubernetes. (BZ#1885605)

    • As a workaround, you can use a secondary network interface connected to your host, or switch to the OpenShift SDN default CNI provider.
  • Running virtual machines that cannot be live migrated might block an OpenShift Container Platform cluster upgrade. This includes virtual machines that use hostpath provisioner storage or SR-IOV network interfaces.

    • As a workaround, you can reconfigure the virtual machines so that they can be powered off during a cluster upgrade. In the spec section of the virtual machine configuration file:

      1. Modify the evictionStrategy and runStrategy fields.

        1. Remove the evictionStrategy: LiveMigrate field. See Configuring virtual machine eviction strategy for more information on how to configure eviction strategy.
        2. Set the runStrategy field to Always.
      2. Set the default CPU model by running the following command:

        Note

        You must make this change before starting the virtual machines that support live migration.

        $ oc annotate --overwrite -n openshift-cnv hyperconverged kubevirt-hyperconverged kubevirt.kubevirt.io/jsonpatch='[
          {
              "op": "add",
              "path": "/spec/configuration/cpuModel",
              "value": "<cpu_model>" 1
          }
        ]'
        1
        Replace <cpu_model> with the actual CPU model value. You can determine this value by running oc describe node <node> for all nodes and looking at the cpu-model-<name> labels. Select the CPU model that is present on all of your nodes.
  • If you use Red Hat Ceph Storage or Red Hat OpenShift Data Foundation Storage, cloning more than 100 VMs at once might fail. (BZ#1989527)

    • As a workaround, you can perform a host-assisted copy by setting spec.cloneStrategy: copy in the storage profile manifest. For example:

      apiVersion: cdi.kubevirt.io/v1beta1
      kind: StorageProfile
      metadata:
        name: <provisioner_class>
      #   ...
      spec:
        claimPropertySets:
        - accessModes:
          - ReadWriteOnce
          volumeMode: Filesystem
        cloneStrategy: copy 1
      status:
        provisioner: <provisioner>
        storageClass: <provisioner_class>
      1
      Sets the default cloning method to copy.
  • In some instances, multiple virtual machines can mount the same PVC in read-write mode, which might result in data corruption. (BZ#1992753)

    • As a workaround, avoid using a single PVC in read-write mode with multiple VMs.
  • The Pod Disruption Budget (PDB) prevents pod disruptions for migratable virtual machine instances. If the PDB detects pod disruption, then openshift-monitoring sends a PodDisruptionBudgetAtLimit alert every 60 minutes for virtual machine instances that use the LiveMigrate eviction strategy. (BZ#2026733)

  • On a large cluster, the OpenShift Virtualization MAC pool manager might take too much time to boot and OpenShift Virtualization might not become ready. (BZ#2035344)

    • As a workaround, if you do not require MAC pooling functionality, then disable this sub-component by running the following command:

      $ oc annotate --overwrite -n openshift-cnv hco kubevirt-hyperconverged 'networkaddonsconfigs.kubevirt.io/jsonpatch=[
        {
          "op": "replace",
          "path": "/spec/kubeMacPool",
          "value": null
        }
      ]'
  • OpenShift Virtualization links a service account token in use by a pod to that specific pod. OpenShift Virtualization implements a service account volume by creating a disk image that contains a token. If you migrate a VM, then the service account volume becomes invalid. (BZ#2037611)

    • As a workaround, use user accounts rather than service accounts because user account tokens are not bound to a specific pod.
  • If a VM crashes or hangs during shutdown, new shutdown requests do not stop the VM. (BZ#2040766)
  • If you configure the HyperConverged custom resource (CR) to enable mediated devices before the drivers are installed, the mediated devices are not enabled. This issue can be triggered by updates. For example, if virt-handler is updated before the daemon set that installs the NVIDIA drivers, then nodes cannot provide virtual machine GPUs. (BZ#2046298)

    • As a workaround:

      1. Remove mediatedDevicesConfiguration and permittedHostDevices from the HyperConverged CR.
      2. Update both mediatedDevicesConfiguration and permittedHostDevices stanzas with the configuration you want to use.
  • YAML examples in the VM wizard are hardcoded and do not always contain the latest upstream changes. (BZ#2055492)
  • If you clone more than 100 VMs using the csi-clone cloning strategy, then the Ceph CSI might not purge the clones. Manually deleting the clones can also fail. (BZ#2055595)

    • As a workaround, you can restart the ceph-mgr to purge the VM clones.
  • A non-privileged user cannot use the Add Network Interface button on the VM Network Interfaces tab. (BZ#2056420)

    • As a workaround, non-privileged users can add additional network interfaces while creating the VM by using the VM wizard.
  • A non-privileged user cannot add disks to a VM due to RBAC rules. (BZ#2056421)

    • As a workaround, manually add the RBAC rule to allow specific users to add disks.
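      The exact rule depends on your cluster's RBAC configuration. As a purely hypothetical sketch, a namespaced role that lets a user create the DataVolumes that back added disks might look like this:

      apiVersion: rbac.authorization.k8s.io/v1
      kind: Role
      metadata:
        name: datavolume-editor            # hypothetical name
        namespace: <vm_namespace>
      rules:
      - apiGroups: ["cdi.kubevirt.io"]
        resources: ["datavolumes"]
        verbs: ["create", "get", "list", "delete"]
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: datavolume-editor-binding    # hypothetical name
        namespace: <vm_namespace>
      subjects:
      - kind: User
        name: <user_name>
        apiGroup: rbac.authorization.k8s.io
      roleRef:
        kind: Role
        name: datavolume-editor
        apiGroup: rbac.authorization.k8s.io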
  • The web console does not display virtual machine templates that are deployed to a custom namespace. Only templates deployed to the default namespace display in the web console. (BZ#2054650)

    • As a workaround, avoid deploying templates to a custom namespace.
  • On a Single Node OpenShift (SNO) cluster, updating the cluster fails if a VMI has the spec.evictionStrategy field set to LiveMigrate. For live migration to succeed, the cluster must have more than one worker node. (BZ#2073880)

    • There are two workaround options:

      • Remove the spec.evictionStrategy field from the VM declaration.
      • Manually stop the VM before you update OpenShift Container Platform.

Chapter 4. Installing

4.1. Preparing your cluster for OpenShift Virtualization

Review this section before you install OpenShift Virtualization to ensure that your cluster meets the requirements.

Important

You can use any installation method, including user-provisioned, installer-provisioned, or assisted installer, to deploy OpenShift Container Platform. However, the installation method and the cluster topology might affect OpenShift Virtualization functionality, such as snapshots or live migration.

Single-node OpenShift differences

You can install OpenShift Virtualization on a single-node cluster. See About single-node OpenShift for more information. Single-node OpenShift does not support high availability, which results in the following differences:

FIPS mode

If you install your cluster in FIPS mode, no additional setup is required for OpenShift Virtualization.

IPv6

You cannot run OpenShift Virtualization on a single-stack IPv6 cluster. (BZ#2193267)

4.1.1. Hardware and operating system requirements

Review the following hardware and operating system requirements for OpenShift Virtualization.

Supported platforms

Important

Installing OpenShift Virtualization on AWS bare metal instances or on IBM Cloud Bare Metal Servers is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

  • Bare metal instances or servers offered by other cloud providers are not supported.

CPU requirements

  • Supported by Red Hat Enterprise Linux (RHEL) 8
  • Support for Intel 64 or AMD64 CPU extensions
  • Intel VT or AMD-V hardware virtualization extensions enabled
  • NX (no execute) flag enabled

Storage requirements

  • Supported by OpenShift Container Platform
Warning

If you deploy OpenShift Virtualization with Red Hat OpenShift Data Foundation, you must create a dedicated storage class for Windows virtual machine disks. See Optimizing ODF PersistentVolumes for Windows VMs for details.

Operating system requirements

  • Red Hat Enterprise Linux CoreOS (RHCOS) installed on worker nodes

    Note

    RHEL worker nodes are not supported.

  • If your cluster uses worker nodes with different CPUs, live migration failures can occur because different CPUs have different capabilities. To avoid such failures, use CPUs with appropriate capacity for each node and set node affinity on your virtual machines to ensure successful migration. See Configuring a required node affinity rule for more information.
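    For illustration only, a required node affinity rule on a virtual machine might pin it to nodes that carry a label identifying compatible CPUs. The label key and value below follow the example.io convention used elsewhere in this document and are not real labels:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: example-vm
    spec:
      template:
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: example.io/example-cpu-key      # hypothetical label applied to nodes with compatible CPUs
                    operator: In
                    values:
                    - example-cpu-value
          domain:
            devices: {}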

Additional resources

4.1.2. Physical resource overhead requirements

OpenShift Virtualization is an add-on to OpenShift Container Platform and imposes additional overhead that you must account for when planning a cluster. Each cluster machine must accommodate the following overhead requirements in addition to the OpenShift Container Platform requirements. Oversubscribing the physical resources in a cluster can affect performance.

Important

The numbers noted in this documentation are based on Red Hat’s test methodology and setup. These numbers can vary based on your own individual setup and environments.

4.1.2.1. Memory overhead

Calculate the memory overhead values for OpenShift Virtualization by using the equations below.

Cluster memory overhead

Memory overhead per infrastructure node ≈ 150 MiB

Memory overhead per worker node ≈ 360 MiB

Additionally, OpenShift Virtualization environment resources require a total of 2179 MiB of RAM that is spread across all infrastructure nodes.

Virtual machine memory overhead

Memory overhead per virtual machine ≈ (1.002 * requested memory) + 146 MiB \
                + 8 MiB * (number of vCPUs) 1 \
                + 16 MiB * (number of graphics devices) 2

1
Number of virtual CPUs requested by the virtual machine
2
Number of virtual graphics cards requested by the virtual machine

If your environment includes a Single Root I/O Virtualization (SR-IOV) network device or a Graphics Processing Unit (GPU), allocate 1 GiB additional memory overhead for each device.

4.1.2.2. CPU overhead

Calculate the cluster processor overhead requirements for OpenShift Virtualization by using the equation below. The CPU overhead per virtual machine depends on your individual setup.

Cluster CPU overhead

CPU overhead for infrastructure nodes ≈ 4 cores

OpenShift Virtualization increases the overall utilization of cluster level services such as logging, routing, and monitoring. To account for this workload, ensure that nodes that host infrastructure components have capacity allocated for 4 additional cores (4000 millicores) distributed across those nodes.

CPU overhead for worker nodes ≈ 2 cores + CPU overhead per virtual machine

Each worker node that hosts virtual machines must have capacity for 2 additional cores (2000 millicores) for OpenShift Virtualization management workloads in addition to the CPUs required for virtual machine workloads.

Virtual machine CPU overhead

If dedicated CPUs are requested, there is a 1:1 impact on the cluster CPU overhead requirement. Otherwise, there are no specific rules about how many CPUs a virtual machine requires.

4.1.2.3. Storage overhead

Use the guidelines below to estimate storage overhead requirements for your OpenShift Virtualization environment.

Cluster storage overhead

Aggregated storage overhead per node ≈ 10 GiB

10 GiB is the estimated on-disk storage impact for each node in the cluster when you install OpenShift Virtualization.

Virtual machine storage overhead

Storage overhead per virtual machine depends on specific requests for resource allocation within the virtual machine. The request could be for ephemeral storage on the node or storage resources hosted elsewhere in the cluster. OpenShift Virtualization does not currently allocate any additional ephemeral storage for the running container itself.

4.1.2.4. Example

As a cluster administrator, if you plan to host 10 virtual machines in the cluster, each with 1 GiB of RAM and 2 vCPUs, the memory impact across the cluster is 11.68 GiB. The estimated on-disk storage impact for each node in the cluster is 10 GiB and the CPU impact for worker nodes that host virtual machine workloads is a minimum of 2 cores.

4.1.3. Object maximums

You must consider the following tested object maximums when planning your cluster:

4.1.4. Restricted network environments

If you install OpenShift Virtualization in a restricted environment with no internet connectivity, you must configure Operator Lifecycle Manager for restricted networks.

If you have limited internet connectivity, you can configure proxy support in Operator Lifecycle Manager to access the Red Hat-provided OperatorHub.

4.1.5. Live migration

Live migration has the following requirements:

  • Shared storage with ReadWriteMany (RWX) access mode.
  • Sufficient RAM and network bandwidth.
  • If the virtual machine uses a host model CPU, the nodes must support the virtual machine’s host model CPU.
Note

You must ensure that there is enough memory request capacity in the cluster to support node drains that result in live migrations. You can determine the approximate required spare memory by using the following calculation:

Required spare memory ≈ (maximum number of nodes that can drain in parallel) × (highest total VM memory request allocations across nodes)

For example, if two nodes can drain in parallel and the node with the largest virtual machine footprint has 64 GiB of total VM memory requests, the cluster needs approximately 128 GiB of spare memory request capacity.

The default number of migrations that can run in parallel in the cluster is 5.

4.1.6. Snapshots and cloning

See OpenShift Virtualization storage features for snapshot and cloning requirements.

4.1.7. Cluster high-availability options

You can configure one of the following high-availability (HA) options for your cluster:

  • Automatic high availability for installer-provisioned infrastructure (IPI) is available by deploying machine health checks.

    Note

    In OpenShift Container Platform clusters installed using installer-provisioned infrastructure and with MachineHealthCheck properly configured, if a node fails the MachineHealthCheck and becomes unavailable to the cluster, it is recycled. What happens next with VMs that ran on the failed node depends on a series of conditions. See About RunStrategies for virtual machines for more detailed information about the potential outcomes and how RunStrategies affect those outcomes.

  • Automatic high availability for both IPI and non-IPI is available by using the Node Health Check Operator on the OpenShift Container Platform cluster to deploy the NodeHealthCheck controller. The controller identifies unhealthy nodes and uses the Self Node Remediation Operator to remediate the unhealthy nodes.

    Important

    Node Health Check Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

    For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

  • High availability for any platform is available by using either a monitoring system or a qualified human to monitor node availability. When a node is lost, shut it down and run oc delete node <lost_node>.

    Note

    Without an external monitoring system or a qualified human monitoring node health, virtual machines lose high availability.

4.2. Specifying nodes for OpenShift Virtualization components

Specify the nodes where you want to deploy OpenShift Virtualization Operators, workloads, and controllers by configuring node placement rules.

Note

You can configure node placement for some components after installing OpenShift Virtualization, but there must not be virtual machines present if you want to configure node placement for workloads.

4.2.1. About node placement for virtualization components

You might want to customize where OpenShift Virtualization deploys its components to ensure that:

  • Virtual machines only deploy on nodes that are intended for virtualization workloads.
  • Operators only deploy on infrastructure nodes.
  • Certain nodes are unaffected by OpenShift Virtualization. For example, you have workloads unrelated to virtualization running on your cluster, and you want those workloads to be isolated from OpenShift Virtualization.
4.2.1.1. How to apply node placement rules to virtualization components

You can specify node placement rules for a component by editing the corresponding object directly or by using the web console.

  • For the OpenShift Virtualization Operators that Operator Lifecycle Manager (OLM) deploys, edit the OLM Subscription object directly. Currently, you cannot configure node placement rules for the Subscription object by using the web console.
  • For components that the OpenShift Virtualization Operators deploy, edit the HyperConverged object directly or configure it by using the web console during OpenShift Virtualization installation.
  • For the hostpath provisioner, edit the HostPathProvisioner object directly or configure it by using the web console.

    Warning

    You must schedule the hostpath provisioner and the virtualization components on the same nodes. Otherwise, virtualization pods that use the hostpath provisioner cannot run.

Depending on the object, you can use one or more of the following rule types:

nodeSelector
Allows pods to be scheduled on nodes that are labeled with the key-value pair or pairs that you specify in this field. The node must have labels that exactly match all listed pairs.
affinity
Enables you to use more expressive syntax to set rules that match nodes with pods. Affinity also allows for more nuance in how the rules are applied. For example, you can specify that a rule is a preference, rather than a hard requirement, so that pods are still scheduled if the rule is not satisfied.
tolerations
Allows pods to be scheduled on nodes that have matching taints. If a taint is applied to a node, that node only accepts pods that tolerate the taint.
4.2.1.2. Node placement in the OLM Subscription object

To specify the nodes where OLM deploys the OpenShift Virtualization Operators, edit the Subscription object during OpenShift Virtualization installation. You can include node placement rules in the spec.config field, as shown in the following example:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  startingCSV: kubevirt-hyperconverged-operator.v4.10.10
  channel: "stable"
  config: 1
1
The config field supports nodeSelector and tolerations, but it does not support affinity.
4.2.1.3. Node placement in the HyperConverged object

To specify the nodes where OpenShift Virtualization deploys its components, you can include the nodePlacement object in the HyperConverged Cluster custom resource (CR) file that you create during OpenShift Virtualization installation. You can include nodePlacement under the spec.infra and spec.workloads fields, as shown in the following example:

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  infra:
    nodePlacement: 1
    ...
  workloads:
    nodePlacement:
    ...
1
The nodePlacement fields support nodeSelector, affinity, and tolerations fields.
4.2.1.4. Node placement in the HostPathProvisioner object

You can configure node placement rules in the spec.workload field of the HostPathProvisioner object that you create when you install the hostpath provisioner.

apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  pathConfig:
    path: "</path/to/backing/directory>"
    useNamingPrefix: false
  workload: 1
1
The workload field supports nodeSelector, affinity, and tolerations fields.
4.2.1.5. Additional resources

4.2.2. Example manifests

The following example YAML files use nodePlacement, affinity, and tolerations objects to customize node placement for OpenShift Virtualization components.

4.2.2.1. Operator Lifecycle Manager Subscription object
4.2.2.1.1. Example: Node placement with nodeSelector in the OLM Subscription object

In this example, nodeSelector is configured so that OLM places the OpenShift Virtualization Operators on nodes that are labeled with example.io/example-infra-key = example-infra-value.

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  startingCSV: kubevirt-hyperconverged-operator.v4.10.10
  channel: "stable"
  config:
    nodeSelector:
      example.io/example-infra-key: example-infra-value
4.2.2.1.2. Example: Node placement with tolerations in the OLM Subscription object

In this example, nodes that are reserved for OLM to deploy OpenShift Virtualization Operators are labeled with the key=virtualization:NoSchedule taint. Only pods with the matching tolerations are scheduled to these nodes.

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  startingCSV: kubevirt-hyperconverged-operator.v4.10.10
  channel: "stable"
  config:
    tolerations:
    - key: "key"
      operator: "Equal"
      value: "virtualization"
      effect: "NoSchedule"
4.2.2.2. HyperConverged object
4.2.2.2.1. Example: Node placement with nodeSelector in the HyperConverged Cluster CR

In this example, nodeSelector is configured so that infrastructure resources are placed on nodes that are labeled with example.io/example-infra-key = example-infra-value and workloads are placed on nodes labeled with example.io/example-workloads-key = example-workloads-value.

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  infra:
    nodePlacement:
      nodeSelector:
        example.io/example-infra-key: example-infra-value
  workloads:
    nodePlacement:
      nodeSelector:
        example.io/example-workloads-key: example-workloads-value
4.2.2.2.2. Example: Node placement with affinity in the HyperConverged Cluster CR

In this example, affinity is configured so that infrastructure resources are placed on nodes that are labeled with example.io/example-infra-key = example-value and workloads are placed on nodes labeled with example.io/example-workloads-key = example-workloads-value. Nodes that have more than eight CPUs are preferred for workloads, but if they are not available, pods are still scheduled.

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  infra:
    nodePlacement:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: example.io/example-infra-key
                operator: In
                values:
                - example-infra-value
  workloads:
    nodePlacement:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: example.io/example-workloads-key
                operator: In
                values:
                - example-workloads-value
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: example.io/num-cpus
                operator: Gt
                values:
                - 8
4.2.2.2.3. Example: Node placement with tolerations in the HyperConverged Cluster CR

In this example, nodes that are reserved for OpenShift Virtualization components are labeled with the key=virtualization:NoSchedule taint. Only pods with the matching tolerations are scheduled to these nodes.

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  workloads:
    nodePlacement:
      tolerations:
      - key: "key"
        operator: "Equal"
        value: "virtualization"
        effect: "NoSchedule"
4.2.2.3. HostPathProvisioner object
4.2.2.3.1. Example: Node placement with nodeSelector in the HostPathProvisioner object

In this example, nodeSelector is configured so that workloads are placed on nodes labeled with example.io/example-workloads-key = example-workloads-value.

apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  pathConfig:
    path: "</path/to/backing/directory>"
    useNamingPrefix: false
  workload:
    nodeSelector:
      example.io/example-workloads-key: example-workloads-value

4.3. Installing OpenShift Virtualization using the web console

Install OpenShift Virtualization to add virtualization functionality to your OpenShift Container Platform cluster.

You can use the OpenShift Container Platform 4.10 web console to subscribe to and deploy the OpenShift Virtualization Operators.

4.3.1. Installing the OpenShift Virtualization Operator

You can install the OpenShift Virtualization Operator from the OpenShift Container Platform web console.

Prerequisites

  • Install OpenShift Container Platform 4.10 on your cluster.
  • Log in to the OpenShift Container Platform web console as a user with cluster-admin permissions.

Procedure

  1. From the Administrator perspective, click Operators → OperatorHub.
  2. In the Filter by keyword field, type OpenShift Virtualization.
  3. Select the OpenShift Virtualization tile.
  4. Read the information about the Operator and click Install.
  5. On the Install Operator page:

    1. Select stable from the list of available Update Channel options. This ensures that you install the version of OpenShift Virtualization that is compatible with your OpenShift Container Platform version.
    2. For Installed Namespace, ensure that the Operator recommended namespace option is selected. This installs the Operator in the mandatory openshift-cnv namespace, which is automatically created if it does not exist.

      Warning

      Attempting to install the OpenShift Virtualization Operator in a namespace other than openshift-cnv causes the installation to fail.

    3. For Approval Strategy, it is highly recommended that you select Automatic, which is the default value, so that OpenShift Virtualization automatically updates when a new version is available in the stable update channel.

      While it is possible to select the Manual approval strategy, this is inadvisable because of the high risk that it presents to the supportability and functionality of your cluster. Only select Manual if you fully understand these risks and cannot use Automatic.

      Warning

      Because OpenShift Virtualization is only supported when used with the corresponding OpenShift Container Platform version, missing OpenShift Virtualization updates can cause your cluster to become unsupported.

  6. Click Install to make the Operator available to the openshift-cnv namespace.
  7. When the Operator installs successfully, click Create HyperConverged.
  8. Optional: Configure Infra and Workloads node placement options for OpenShift Virtualization components.
  9. Click Create to launch OpenShift Virtualization.

Verification

  • Navigate to the Workloads → Pods page and monitor the OpenShift Virtualization pods until they are all Running. After all the pods display the Running state, you can use OpenShift Virtualization.

4.3.2. Next steps

You might want to additionally configure the following components:

  • The hostpath provisioner is a local storage provisioner designed for OpenShift Virtualization. If you want to configure local storage for virtual machines, you must enable the hostpath provisioner first.

4.4. Installing OpenShift Virtualization using the CLI

Install OpenShift Virtualization to add virtualization functionality to your OpenShift Container Platform cluster. You can subscribe to and deploy the OpenShift Virtualization Operators by using the command line to apply manifests to your cluster.

Note

To specify the nodes where you want OpenShift Virtualization to install its components, configure node placement rules.

4.4.1. Prerequisites

  • Install OpenShift Container Platform 4.10 on your cluster.
  • Install the OpenShift CLI (oc).
  • Log in as a user with cluster-admin privileges.

4.4.2. Subscribing to the OpenShift Virtualization catalog by using the CLI

Before you install OpenShift Virtualization, you must subscribe to the OpenShift Virtualization catalog. Subscribing gives the openshift-cnv namespace access to the OpenShift Virtualization Operators.

To subscribe, configure Namespace, OperatorGroup, and Subscription objects by applying a single manifest to your cluster.

Procedure

  1. Create a YAML file that contains the following manifest:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-cnv
    ---
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: kubevirt-hyperconverged-group
      namespace: openshift-cnv
    spec:
      targetNamespaces:
        - openshift-cnv
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: hco-operatorhub
      namespace: openshift-cnv
    spec:
      source: redhat-operators
      sourceNamespace: openshift-marketplace
      name: kubevirt-hyperconverged
      startingCSV: kubevirt-hyperconverged-operator.v4.10.10
      channel: "stable" 1
    1
    Using the stable channel ensures that you install the version of OpenShift Virtualization that is compatible with your OpenShift Container Platform version.
  2. Create the required Namespace, OperatorGroup, and Subscription objects for OpenShift Virtualization by running the following command:

    $ oc apply -f <file name>.yaml
Note

You can configure certificate rotation parameters in the YAML file.
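For example, certificate rotation is tuned in the HyperConverged custom resource. The following sketch assumes the certConfig stanza of the HyperConverged API; the duration values are placeholders only:

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  certConfig:
    ca:
      duration: 48h0m0s      # validity of the CA certificate
      renewBefore: 24h0m0s   # renew this long before expiry
    server:
      duration: 24h0m0s
      renewBefore: 12h0m0s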

4.4.3. Deploying the OpenShift Virtualization Operator by using the CLI

You can deploy the OpenShift Virtualization Operator by using the oc CLI.

Prerequisites

  • An active subscription to the OpenShift Virtualization catalog in the openshift-cnv namespace.

Procedure

  1. Create a YAML file that contains the following manifest:

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
  2. Deploy the OpenShift Virtualization Operator by running the following command:

    $ oc apply -f <file_name>.yaml

Verification

  • Ensure that OpenShift Virtualization deployed successfully by watching the PHASE of the cluster service version (CSV) in the openshift-cnv namespace. Run the following command:

    $ watch oc get csv -n openshift-cnv

    The following output displays if deployment was successful:

    Example output

    NAME                                      DISPLAY                    VERSION   REPLACES   PHASE
    kubevirt-hyperconverged-operator.v4.10.10   OpenShift Virtualization   4.10.10                Succeeded

4.4.4. Next steps

You might want to additionally configure the following components:

  • The hostpath provisioner is a local storage provisioner designed for OpenShift Virtualization. If you want to configure local storage for virtual machines, you must enable the hostpath provisioner first.

4.5. Enabling the virtctl client

The virtctl client is a command-line utility for managing OpenShift Virtualization resources. It is available for Linux, macOS, and Windows distributions.

4.5.1. Downloading and installing the virtctl client

4.5.1.1. Downloading the virtctl client

Download the virtctl client by using the link provided in the ConsoleCLIDownload custom resource (CR).

Procedure

  1. View the ConsoleCLIDownload object by running the following command:

    $ oc get ConsoleCLIDownload virtctl-clidownloads-kubevirt-hyperconverged -o yaml
  2. Download the virtctl client by using the link listed for your distribution.
4.5.1.2. Installing the virtctl client

Extract and install the virtctl client after downloading from the appropriate location for your operating system.

Prerequisites

  • You must have downloaded the virtctl client.

Procedure

  • For Linux:

    1. Extract the tarball. The following CLI command extracts it into the same directory as the tarball:

      $ tar -xvf <virtctl-version-distribution.arch>.tar.gz
    2. Navigate the extracted folder hierarchy and run the following command to make the virtctl binary executable:

      $ chmod +x <virtctl-file-name>
    3. Move the virtctl binary to a directory in your PATH environment variable.
    4. To check your path, run the following command:

      $ echo $PATH
  • For Windows users:

    1. Unpack and unzip the archive.
    2. Navigate the extracted folder hierarchy and double-click the virtctl executable file to install the client.
    3. Move the virtctl binary to a directory in your PATH environment variable.
    4. To check your path, run the following command:

      C:\> path
  • For macOS users:

    1. Unpack and unzip the archive.
    2. Move the virtctl binary to a directory in your PATH environment variable.
    3. To check your path, run the following command:

      $ echo $PATH

4.5.2. Additional setup options

4.5.2.1. Installing the virtctl client using the yum utility

Install the virtctl client from the kubevirt-virtctl package.

Procedure

  • Install the kubevirt-virtctl package:

    # yum install kubevirt-virtctl
4.5.2.2. Enabling OpenShift Virtualization repositories

Red Hat offers OpenShift Virtualization repositories for both Red Hat Enterprise Linux 8 and Red Hat Enterprise Linux 7:

  • Red Hat Enterprise Linux 8 repository: cnv-4.10-for-rhel-8-x86_64-rpms
  • Red Hat Enterprise Linux 7 repository: rhel-7-server-cnv-4.10-rpms

The process for enabling the repository in subscription-manager is the same in both platforms.

Procedure

  • Enable the appropriate OpenShift Virtualization repository for your system by running the following command:

    # subscription-manager repos --enable <repository>
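    For example, to enable the Red Hat Enterprise Linux 8 repository:

    # subscription-manager repos --enable cnv-4.10-for-rhel-8-x86_64-rpms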

4.5.3. Additional resources

4.6. Uninstalling OpenShift Virtualization using the web console

You can uninstall OpenShift Virtualization by using the OpenShift Container Platform web console.

4.6.1. Prerequisites

4.6.2. Deleting the OpenShift Virtualization Operator Deployment custom resource

To uninstall OpenShift Virtualization, you must first delete the OpenShift Virtualization Operator Deployment custom resource.

Prerequisites

  • Create the OpenShift Virtualization Operator Deployment custom resource.

Procedure

  1. From the OpenShift Container Platform web console, select openshift-cnv from the Projects list.
  2. Navigate to the Operators → Installed Operators page.
  3. Click OpenShift Virtualization.
  4. Click the OpenShift Virtualization Operator Deployment tab.
  5. Click the Options menu in the row containing the kubevirt-hyperconverged custom resource. In the expanded menu, click Delete HyperConverged Cluster.
  6. Click Delete in the confirmation window.
  7. Navigate to the Workloads → Pods page to verify that only the Operator pods are running.
  8. Open a terminal window and clean up the remaining resources by running the following command:

    $ oc delete apiservices v1alpha3.subresources.kubevirt.io -n openshift-cnv

4.6.3. Deleting the OpenShift Virtualization catalog subscription

To finish uninstalling OpenShift Virtualization, delete the OpenShift Virtualization catalog subscription.

Prerequisites

  • An active subscription to the OpenShift Virtualization catalog

Procedure

  1. Navigate to the Operators → OperatorHub page.
  2. Search for OpenShift Virtualization and then select it.
  3. Click Uninstall.
Note

You can now delete the openshift-cnv namespace.

4.6.4. Deleting a namespace using the web console

You can delete a namespace by using the OpenShift Container Platform web console.

Note

If you do not have permissions to delete the namespace, the Delete Namespace option is not available.

Procedure

  1. Navigate to Administration → Namespaces.
  2. Locate the namespace that you want to delete in the list of namespaces.
  3. On the far right side of the namespace listing, select Delete Namespace from the Options menu.
  4. When the Delete Namespace pane opens, enter the name of the namespace that you want to delete in the field.
  5. Click Delete.

4.7. Uninstalling OpenShift Virtualization using the CLI

You can uninstall OpenShift Virtualization by using the OpenShift Container Platform CLI.

4.7.1. Prerequisites

4.7.2. Deleting OpenShift Virtualization

You can delete OpenShift Virtualization by using the CLI.

Prerequisites

  • Install the OpenShift CLI (oc).
  • Access to an OpenShift Virtualization cluster using an account with cluster-admin permissions.
Note

When you delete the subscription of the OpenShift Virtualization operator in the OLM by using the CLI, the ClusterServiceVersion (CSV) object is not deleted from the cluster. To completely uninstall OpenShift Virtualization, you must explicitly delete the CSV.

Procedure

  1. Delete the HyperConverged custom resource:

    $ oc delete HyperConverged kubevirt-hyperconverged -n openshift-cnv
  2. Delete the subscription of the OpenShift Virtualization operator in the Operator Lifecycle Manager (OLM):

    $ oc delete subscription kubevirt-hyperconverged -n openshift-cnv
  3. Set the cluster service version (CSV) name for OpenShift Virtualization as an environment variable:

    $ CSV_NAME=$(oc get csv -n openshift-cnv -o=jsonpath="{.items[0].metadata.name}")
  4. Delete the CSV from the OpenShift Virtualization cluster by specifying the CSV name from the previous step:

    $ oc delete csv ${CSV_NAME} -n openshift-cnv

    OpenShift Virtualization is uninstalled when a confirmation message indicates that the CSV was deleted successfully:

    Example output

    clusterserviceversion.operators.coreos.com "kubevirt-hyperconverged-operator.v4.10.10" deleted

Chapter 5. Updating OpenShift Virtualization

Learn how Operator Lifecycle Manager (OLM) delivers z-stream and minor version updates for OpenShift Virtualization.

5.1. About updating OpenShift Virtualization

  • Operator Lifecycle Manager (OLM) manages the lifecycle of the OpenShift Virtualization Operator. The Marketplace Operator, which is deployed during OpenShift Container Platform installation, makes external Operators available to your cluster.
  • OLM provides z-stream and minor version updates for OpenShift Virtualization. Minor version updates become available when you update OpenShift Container Platform to the next minor version. You cannot update OpenShift Virtualization to the next minor version without first updating OpenShift Container Platform.
  • OpenShift Virtualization subscriptions use a single update channel that is named stable. The stable channel ensures that your OpenShift Virtualization and OpenShift Container Platform versions are compatible.
  • If your subscription’s approval strategy is set to Automatic, the update process starts as soon as a new version of the Operator is available in the stable channel. It is highly recommended to use the Automatic approval strategy to maintain a supportable environment. Each minor version of OpenShift Virtualization is only supported if you run the corresponding OpenShift Container Platform version. For example, you must run OpenShift Virtualization 4.10 on OpenShift Container Platform 4.10.

    • Though it is possible to select the Manual approval strategy, this is not recommended because it risks the supportability and functionality of your cluster. With the Manual approval strategy, you must manually approve every pending update. If OpenShift Container Platform and OpenShift Virtualization updates are out of sync, your cluster becomes unsupported.
  • The amount of time an update takes to complete depends on your network connection. Most automatic updates complete within fifteen minutes.
  • Updating OpenShift Virtualization does not interrupt network connections.
  • Data volumes and their associated persistent volume claims are preserved during update.
Important

If you have virtual machines running that use hostpath provisioner storage, they cannot be live migrated and might block an OpenShift Container Platform cluster update.

As a workaround, you can reconfigure the virtual machines so that they can be powered off automatically during a cluster update. Remove the evictionStrategy: LiveMigrate field and set the runStrategy field to Always.
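As an illustrative sketch only (field placement assumed from the VirtualMachine API), the reconfigured virtual machine might look like this:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  runStrategy: Always            # the VM is restarted automatically after it is powered off for the update
  template:
    spec:
      # evictionStrategy: LiveMigrate   <-- remove this field
      domain:
        devices: {}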

5.2. Configuring automatic workload updates

5.2.1. About workload updates

When you update OpenShift Virtualization, virtual machine workloads, including libvirt, virt-launcher, and qemu, update automatically if they support live migration.

Note

Each virtual machine has a virt-launcher pod that runs the virtual machine instance (VMI). The virt-launcher pod runs an instance of libvirt, which is used to manage the virtual machine (VM) process.

You can configure how workloads are updated by editing the spec.workloadUpdateStrategy stanza of the HyperConverged custom resource (CR). There are two available workload update methods: LiveMigrate and Evict.

Because the Evict method shuts down VMI pods, only the LiveMigrate update strategy is enabled by default.

When LiveMigrate is the only update strategy enabled:

  • VMIs that support live migration are migrated during the update process. The VM guest moves into a new pod with the updated components enabled.
  • VMIs that do not support live migration are not disrupted or updated.

    • If a VMI has the LiveMigrate eviction strategy but does not support live migration, it is not updated.

If you enable both LiveMigrate and Evict:

  • VMIs that support live migration use the LiveMigrate update strategy.
  • VMIs that do not support live migration use the Evict update strategy. If a VMI is controlled by a VirtualMachine object that has a runStrategy value of Always, a new VMI is created in a new pod with updated components.
Migration attempts and timeouts

When updating workloads, live migration fails if a pod is in the Pending state for the following periods:

5 minutes
If the pod is pending because it is Unschedulable.
15 minutes
If the pod is stuck in the pending state for any reason.

When a VMI fails to migrate, the virt-controller tries to migrate it again. It repeats this process until all migratable VMIs are running on new virt-launcher pods. If a VMI is improperly configured, however, these attempts can repeat indefinitely.

Note

Each attempt corresponds to a migration object. Only the five most recent attempts are held in a buffer. This prevents migration objects from accumulating on the system while retaining information for debugging.
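For example, you can list the migration objects in a namespace with a command such as the following (resource name as defined by the KubeVirt API):

$ oc get virtualmachineinstancemigrations -n <namespace>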

5.2.2. Configuring workload update methods

You can configure workload update methods by editing the HyperConverged custom resource (CR).

Prerequisites

  • To use live migration as an update method, you must first enable live migration in the cluster.

    Note

    If a VirtualMachineInstance CR contains evictionStrategy: LiveMigrate and the virtual machine instance (VMI) does not support live migration, the VMI will not update.

Procedure

  1. To open the HyperConverged CR in your default editor, run the following command:

    $ oc edit hco -n openshift-cnv kubevirt-hyperconverged
  2. Edit the workloadUpdateStrategy stanza of the HyperConverged CR. For example:

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
    spec:
      workloadUpdateStrategy:
        workloadUpdateMethods: 1
        - LiveMigrate 2
        - Evict 3
        batchEvictionSize: 10 4
        batchEvictionInterval: "1m0s" 5
    ...
    1
    The methods that can be used to perform automated workload updates. The available values are LiveMigrate and Evict. If you enable both options as shown in this example, updates use LiveMigrate for VMIs that support live migration and Evict for any VMIs that do not support live migration. To disable automatic workload updates, you can either remove the workloadUpdateStrategy stanza or set workloadUpdateMethods: [] to leave the array empty.
    2
    The least disruptive update method. VMIs that support live migration are updated by migrating the virtual machine (VM) guest into a new pod with the updated components enabled. If LiveMigrate is the only workload update method listed, VMIs that do not support live migration are not disrupted or updated.
    3
    A disruptive method that shuts down VMI pods during upgrade. Evict is the only update method available if live migration is not enabled in the cluster. If a VMI is controlled by a VirtualMachine object that has runStrategy: Always configured, a new VMI is created in a new pod with updated components.
    4
    The number of VMIs that can be forced to be updated at a time by using the Evict method. This does not apply to the LiveMigrate method.
    5
    The interval to wait before evicting the next batch of workloads. This does not apply to the LiveMigrate method.
    Note

    You can configure live migration limits and timeouts by editing the spec.liveMigrationConfig stanza of the HyperConverged CR.

  3. To apply your changes, save and exit the editor.
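
For reference, the following is a minimal sketch of the spec.liveMigrationConfig stanza mentioned in the note above; the values shown are illustrative and might differ from the defaults in your cluster:

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  liveMigrationConfig:
    bandwidthPerMigration: 64Mi
    completionTimeoutPerGiB: 800
    parallelMigrationsPerCluster: 5
    parallelOutboundMigrationsPerNode: 2
    progressTimeout: 150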

5.3. Approving pending Operator updates

5.3.1. Manually approving a pending Operator update

If an installed Operator has the approval strategy in its subscription set to Manual, when new updates are released in its current update channel, the update must be manually approved before installation can begin.

Prerequisites

  • An Operator previously installed using Operator Lifecycle Manager (OLM).

Procedure

  1. In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators → Installed Operators.
  2. Operators that have a pending update display a status with Upgrade available. Click the name of the Operator you want to update.
  3. Click the Subscription tab. Any updates requiring approval are displayed next to Upgrade Status. For example, it might display 1 requires approval.
  4. Click 1 requires approval, then click Preview Install Plan.
  5. Review the resources that are listed as available for update. When satisfied, click Approve.
  6. Navigate back to the Operators → Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date.

5.4. Monitoring update status

5.4.1. Monitoring OpenShift Virtualization upgrade status

To monitor the status of an OpenShift Virtualization Operator upgrade, watch the cluster service version (CSV) PHASE. You can also monitor the CSV conditions in the web console or by running the command provided here.

Note

The PHASE and conditions values are approximations that are based on available information.

Prerequisites

  • Log in to the cluster as a user with the cluster-admin role.
  • Install the OpenShift CLI (oc).

Procedure

  1. Run the following command:

    $ oc get csv -n openshift-cnv
  2. Review the output, checking the PHASE field. For example:

    Example output

    VERSION  REPLACES                                        PHASE
    4.9.0    kubevirt-hyperconverged-operator.v4.8.2         Installing
    4.9.0    kubevirt-hyperconverged-operator.v4.9.0         Replacing

  3. Optional: Monitor the aggregated status of all OpenShift Virtualization component conditions by running the following command:

    $ oc get hco -n openshift-cnv kubevirt-hyperconverged \
    -o=jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.message}{"\n"}{end}'

    A successful upgrade results in the following output:

    Example output

    ReconcileComplete  True  Reconcile completed successfully
    Available          True  Reconcile completed successfully
    Progressing        False Reconcile completed successfully
    Degraded           False Reconcile completed successfully
    Upgradeable        True  Reconcile completed successfully

5.4.2. Viewing outdated OpenShift Virtualization workloads

You can view a list of outdated workloads by using the CLI.

Note

If there are outdated virtualization pods in your cluster, the OutdatedVirtualMachineInstanceWorkloads alert fires.

Procedure

  • To view a list of outdated virtual machine instances (VMIs), run the following command:

    $ oc get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces
Note

Configure workload updates to ensure that VMIs update automatically.

5.5. Additional resources

Chapter 6. Additional security privileges granted for kubevirt-controller and virt-launcher

The kubevirt-controller and virt-launcher pods are granted some SELinux policies and Security Context Constraints privileges that are in addition to those granted to typical pods. These privileges enable virtual machines to use OpenShift Virtualization features.

6.1. Extended SELinux policies for virt-launcher pods

The container_t SELinux policy for virt-launcher pods is extended to enable essential functions of OpenShift Virtualization.

  • The following policy is required for network multi-queue, which enables network performance to scale as the number of available vCPUs increases:

    • allow process self (tun_socket (relabelfrom relabelto attach_queue))
  • The following policy allows virt-launcher to read files under the /proc directory, including /proc/cpuinfo and /proc/uptime:

    • allow process proc_type (file (getattr open read))
  • The following policy allows libvirtd to relay network-related debug messages.

    • allow process self (netlink_audit_socket (nlmsg_relay))

      Note

      Without this policy, any attempt to relay network debug messages is blocked. This might fill the node’s audit logs with SELinux denials.

  • The following policies allow libvirtd to access hugetlbfs, which is required to support huge pages:

    • allow process hugetlbfs_t (dir (add_name create write remove_name rmdir setattr))
    • allow process hugetlbfs_t (file (create unlink))
  • The following policies allow virtiofs to mount filesystems and access NFS:

    • allow process nfs_t (dir (mounton))
    • allow process proc_t (dir (mounton))
    • allow process proc_t (filesystem (mount unmount))

6.2. Additional OpenShift Container Platform security context constraints and Linux capabilities for the kubevirt-controller service account

Security context constraints (SCCs) control permissions for pods. These permissions include actions that a pod, a collection of containers, can perform and what resources it can access. You can use SCCs to define a set of conditions that a pod must run with to be accepted into the system.

The kubevirt-controller is a cluster controller that creates the virt-launcher pods for virtual machines in the cluster. These virt-launcher pods are granted permissions by the kubevirt-controller service account.

6.2.1. Additional SCCs granted to the kubevirt-controller service account

The kubevirt-controller service account is granted additional SCCs and Linux capabilities so that it can create virt-launcher pods with the appropriate permissions. These extended permissions allow virtual machines to take advantage of OpenShift Virtualization features that are beyond the scope of typical pods.

The kubevirt-controller service account is granted the following SCCs:

  • scc.AllowHostDirVolumePlugin = true
    This allows virtual machines to use the hostpath volume plugin.
  • scc.AllowPrivilegedContainer = false
    This ensures the virt-launcher pod is not run as a privileged container.
  • scc.AllowedCapabilities = []corev1.Capability{"NET_ADMIN", "NET_RAW", "SYS_NICE"}
    This provides the following additional Linux capabilities: NET_ADMIN, NET_RAW, and SYS_NICE.
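
To confirm these settings on a running cluster, you can query the corresponding SecurityContextConstraints fields directly. The following is a quick, hedged check that uses the standard field names allowHostDirVolumePlugin, allowPrivilegedContainer, and allowedCapabilities:

$ oc get scc kubevirt-controller -o jsonpath='{.allowHostDirVolumePlugin}{"\n"}{.allowPrivilegedContainer}{"\n"}{.allowedCapabilities}{"\n"}'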

6.2.2. Viewing the SCC and RBAC definitions for the kubevirt-controller

You can view the SecurityContextConstraints definition for the kubevirt-controller by using the oc tool:

$ oc get scc kubevirt-controller -o yaml

You can view the RBAC definition for the kubevirt-controller clusterrole by using the oc tool:

$ oc get clusterrole kubevirt-controller -o yaml

6.3. Additional resources

Chapter 7. Using the CLI tools

The two primary CLI tools used for managing resources in the cluster are:

  • The OpenShift Virtualization virtctl client
  • The OpenShift Container Platform oc client

7.1. Prerequisites

7.2. OpenShift Container Platform client commands

The OpenShift Container Platform oc client is a command-line utility for managing OpenShift Container Platform resources, including the VirtualMachine (vm) and VirtualMachineInstance (vmi) object types.

Note

You can use the -n <namespace> flag to specify a different project.

Table 7.1. oc commands
Command | Description

oc login -u <user_name>

Log in to the OpenShift Container Platform cluster as <user_name>.

oc get <object_type>

Display a list of objects for the specified object type in the current project.

oc describe <object_type> <resource_name>

Display details of the specific resource in the current project.

oc create -f <object_config>

Create a resource in the current project from a file name or from stdin.

oc edit <object_type> <resource_name>

Edit a resource in the current project.

oc delete <object_type> <resource_name>

Delete a resource in the current project.

For more comprehensive information on oc client commands, see the OpenShift Container Platform CLI tools documentation.
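
For example, a typical sequence for inspecting virtual machine objects with the oc client might look like the following; the namespace and virtual machine name are placeholders, and the last command shows the running instance, if one exists:

$ oc get vm -n <namespace>
$ oc describe vm <vm_name> -n <namespace>
$ oc get vmi <vm_name> -n <namespace>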

7.3. Virtctl client commands

The virtctl client is a command-line utility for managing OpenShift Virtualization resources.

To view a list of virtctl commands, run the following command:

$ virtctl help

To view a list of options that you can use with a specific command, run it with the -h or --help flag. For example:

$ virtctl image-upload -h

To view a list of global command options that you can use with any virtctl command, run the following command:

$ virtctl options

The following table contains the virtctl commands used throughout the OpenShift Virtualization documentation.

Table 7.2. virtctl client commands
Command | Description

virtctl start <vm_name>

Start a virtual machine.

virtctl start --paused <vm_name>

Start a virtual machine in a paused state. This option enables you to interrupt the boot process from the VNC console.

virtctl stop <vm_name>

Stop a virtual machine.

virtctl stop <vm_name> --grace-period 0 --force

Force stop a virtual machine. This option might cause data inconsistency or data loss.

virtctl pause vm|vmi <object_name>

Pause a virtual machine or virtual machine instance. The machine state is kept in memory.

virtctl unpause vm|vmi <object_name>

Unpause a virtual machine or virtual machine instance.

virtctl migrate <vm_name>

Migrate a virtual machine.

virtctl restart <vm_name>

Restart a virtual machine.

virtctl expose <vm_name>

Create a service that forwards a designated port of a virtual machine or virtual machine instance and expose the service on the specified port of the node.

virtctl console <vmi_name>

Connect to a serial console of a virtual machine instance.

virtctl vnc --kubeconfig=$KUBECONFIG <vmi_name>

Open a VNC (Virtual Network Client) connection to a virtual machine instance. Access the graphical console of a virtual machine instance through a VNC which requires a remote viewer on your local machine.

virtctl vnc --kubeconfig=$KUBECONFIG --proxy-only=true <vmi-name>

Display the port number and connect manually to the virtual machine instance by using any viewer through the VNC connection.

virtctl vnc --kubeconfig=$KUBECONFIG --port=<port-number> <vmi-name>

Specify a port number to run the proxy on the specified port, if that port is available. If a port number is not specified, the proxy runs on a random port.

virtctl image-upload dv <datavolume_name> --image-path=</path/to/image> --no-create

Upload a virtual machine image to a data volume that already exists.

virtctl image-upload dv <datavolume_name> --size=<datavolume_size> --image-path=</path/to/image>

Upload a virtual machine image to a new data volume.

virtctl version

Display the client and server version information.

virtctl fslist <vmi_name>

Return a full list of file systems available on the guest machine.

virtctl guestosinfo <vmi_name>

Return guest agent information about the operating system.

virtctl userlist <vmi_name>

Return a full list of logged-in users on the guest machine.
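
For example, the following hedged sequence starts a virtual machine, exposes its SSH port through a NodePort service, and opens its serial console; the virtual machine, instance, and service names are placeholders:

$ virtctl start <vm_name>
$ virtctl expose vm <vm_name> --name=<service_name> --port=22 --type=NodePort
$ virtctl console <vmi_name>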

7.4. Creating a container using virtctl guestfs

You can use the virtctl guestfs command to deploy an interactive container with libguestfs-tools and a persistent volume claim (PVC) attached to it.

Procedure

  • To deploy a container with libguestfs-tools, mount the PVC, and attach a shell to it, run the following command:

    $ virtctl guestfs -n <namespace> <pvc_name> 1
    1
    The PVC name is a required argument. If you do not include it, an error message appears.

7.5. Libguestfs tools and virtctl guestfs

Libguestfs tools help you access and modify virtual machine (VM) disk images. You can use libguestfs tools to view and edit files in a guest, clone and build virtual machines, and format and resize disks.

You can also use the virtctl guestfs command and its sub-commands to modify, inspect, and debug VM disks on a PVC. To see a complete list of possible sub-commands, enter virt- on the command line and press the Tab key. For example:

Command | Description

virt-edit -a /dev/vda /etc/motd

Edit a file interactively in your terminal.

virt-customize -a /dev/vda --ssh-inject root:string:<public key example>

Inject an ssh key into the guest and create a login.

virt-df -a /dev/vda -h

See how much disk space is used by a VM.

virt-customize -a /dev/vda --run-command 'rpm -qa > /rpm-list'

See the full list of all RPMs installed on a guest by creating an output file containing the full list.

virt-cat -a /dev/vda /rpm-list

Display the output file list of all RPMs created using the virt-customize -a /dev/vda --run-command 'rpm -qa > /rpm-list' command in your terminal.

virt-sysprep -a /dev/vda

Seal a virtual machine disk image to be used as a template.

By default, virtctl guestfs creates a session with everything needed to manage a VM disk. However, the command also supports several flag options if you want to customize the behavior:

Flag Option | Description

-h or --help

Provides help for guestfs.

-n <namespace> option with a <pvc_name> argument

To use a PVC from a specific namespace.

If you do not use the -n <namespace> option, your current project is used. To change projects, use oc project <namespace>.

If you do not include a <pvc_name> argument, an error message appears.

--image string

Specifies the libguestfs-tools container image.

You can configure the container to use a custom image by using the --image option.

--kvm

Indicates that kvm is used by the libguestfs-tools container.

By default, virtctl guestfs sets up kvm for the interactive container, which greatly speeds up the libguestfs-tools execution because it uses QEMU.

If a cluster does not have any kvm supporting nodes, you must disable kvm by setting the option --kvm=false.

If not set, the libguestfs-tools pod remains pending because it cannot be scheduled on any node.

--pull-policy string

Specifies the pull policy for the libguestfs image.

You can also overwrite the image’s pull policy by setting the pull-policy option.

The command also checks if a PVC is in use by another pod, in which case an error message appears. However, after the libguestfs-tools process starts, the setup cannot prevent a new pod from using the same PVC. You must verify that there are no active virtctl guestfs pods before starting the VM that accesses the same PVC.
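
For example, the following hedged invocation combines these flags; the namespace, PVC name, and image reference are placeholders:

$ virtctl guestfs <pvc_name> -n <namespace> --image=<registry>/<libguestfs_tools_image> --kvm=false --pull-policy=IfNotPresent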

Note

The virtctl guestfs command accepts only a single PVC attached to the interactive pod.

7.6. Additional resources

Chapter 8. Virtual machines

8.1. Creating virtual machines

Use one of these procedures to create a virtual machine:

  • Quick Start guided tour
  • Running the wizard
  • Pasting a pre-configured YAML file with the virtual machine wizard
  • Using the CLI
Warning

Do not create virtual machines in openshift-* namespaces. Instead, create a new namespace or use an existing namespace without the openshift prefix.

When you create virtual machines from the web console, select a virtual machine template that is configured with a boot source. Virtual machine templates with a boot source are labeled as Available boot source or they display a customized label text. Using templates with an available boot source expedites the process of creating virtual machines.

Templates without a boot source are labeled as Boot source required. You can use these templates if you complete the steps for adding a boot source to the virtual machine.

Important

Due to differences in storage behavior, some virtual machine templates are incompatible with single-node OpenShift. To ensure compatibility, do not set the evictionStrategy field for any templates or virtual machines that use data volumes or storage profiles.

8.1.1. Using a Quick Start to create a virtual machine

The web console provides Quick Starts with instructional guided tours for creating virtual machines. You can access the Quick Starts catalog by selecting the Help menu in the Administrator perspective. When you click on a Quick Start tile and begin the tour, the system guides you through the process.

Tasks in a Quick Start begin with selecting a Red Hat template. Then, you can add a boot source and import the operating system image. Finally, you can save the custom template and use it to create a virtual machine.

Prerequisites

  • Access to the website where you can download the URL link for the operating system image.

Procedure

  1. In the web console, select Quick Starts from the Help menu.
  2. Click on a tile in the Quick Starts catalog. For example: Creating a Red Hat Enterprise Linux virtual machine.
  3. Follow the instructions in the guided tour and complete the tasks for importing an operating system image and creating a virtual machine. The Virtualization → VirtualMachines page displays the virtual machine.

8.1.2. Running the virtual machine wizard to create a virtual machine

The web console features a wizard that guides you through the process of selecting a virtual machine template and creating a virtual machine. Red Hat virtual machine templates are preconfigured with an operating system image, default settings for the operating system, flavor (CPU and memory), and workload type (server). When templates are configured with a boot source, they are labeled with a customized label text or the default label text Available boot source. These templates are then ready to be used for creating virtual machines.

You can select a template from the list of preconfigured templates, review the settings, and create a virtual machine in the Create virtual machine from template wizard. If you choose to customize your virtual machine, the wizard guides you through the General, Networking, Storage, Advanced, and Review steps. All required fields displayed by the wizard are marked by a *.

You can create network interface controllers (NICs) and storage disks later and attach them to virtual machines.

Procedure

  1. Click Workloads → Virtualization from the side menu.
  2. From the Virtual Machines tab or the Templates tab, click Create and select Virtual Machine with Wizard.
  3. Select a template that is configured with a boot source.
  4. Click Next to go to the Review and create step.
  5. Clear the Start this virtual machine after creation checkbox if you do not want to start the virtual machine now.
  6. Click Create virtual machine and exit the wizard or continue with the wizard to customize the virtual machine.
  7. Click Customize virtual machine to go to the General step.

    1. Optional: Edit the Name field to specify a custom name for the virtual machine.
    2. Optional: In the Description field, add a description.
  8. Click Next to go to the Networking step. A nic0 NIC is attached by default.

    1. Optional: Click Add Network Interface to create additional NICs.
    2. Optional: You can remove any or all NICs by clicking the Options menu kebab and selecting Delete. A virtual machine does not need a NIC attached to be created. You can create NICs after the virtual machine has been created.
  9. Click Next to go to the Storage step.

    1. Optional: Click Add Disk to create additional disks. These disks can be removed by clicking the Options menu kebab and selecting Delete.
    2. Optional: Click the Options menu kebab to edit the disk and save your changes.
  10. Click Next to go to the Advanced step and choose one of the following options:

    1. If you selected a Linux template to create the VM, review the details for Cloud-init and configure SSH access.

      Note

      Statically inject an SSH key by using the custom script in cloud-init or in the wizard. This allows you to securely manage virtual machines remotely, and to manage and transfer information. This step is strongly recommended to secure your VM.

    2. If you selected a Windows template to create the VM, use the SysPrep section to upload answer files in XML format for automated Windows setup.
  11. Click Next to go to the Review step and review the settings for the virtual machine.
  12. Click Create Virtual Machine.
  13. Click See virtual machine details to view the Overview for this virtual machine.

    The virtual machine is listed in the Virtual Machines tab.

Refer to the virtual machine wizard fields section when running the web console wizard.

8.1.2.1. Virtual machine wizard fields
Name | Parameter | Description

Name

 

The name can contain lowercase letters (a-z), numbers (0-9), and hyphens (-), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, periods (.), or special characters.

Description

 

Optional description field.

Operating System

 

The operating system that is selected for the virtual machine in the template. You cannot edit this field when creating a virtual machine from a template.

Boot Source

URL (creates PVC)

Import content from an image available from an HTTP or HTTPS endpoint. Example: Obtaining a URL link from the web page with the operating system image.

Clone (creates PVC)

Select an existing persistent volume claim available on the cluster and clone it.

Registry (creates PVC)

Provision virtual machine from a bootable operating system container located in a registry accessible from the cluster. Example: kubevirt/cirros-registry-disk-demo.

PXE (network boot - adds network interface)

Boot an operating system from a server on the network. Requires a PXE bootable network attachment definition.

Persistent Volume Claim project

 

Project name that you want to use for cloning the PVC.

Persistent Volume Claim name

 

PVC name that should apply to this virtual machine template if you are cloning an existing PVC.

Mount this as a CD-ROM boot source

 

A CD-ROM requires an additional disk for installing the operating system. Select the checkbox to add a disk and customize it later.

Flavor

Tiny, Small, Medium, Large, Custom

Presets the amount of CPU and memory in a virtual machine template with predefined values that are allocated to the virtual machine, depending on the operating system associated with that template.

If you choose a default template, you can override the cpus and memsize values in the template using custom values to create a custom template. Alternatively, you can create a custom template by modifying the cpus and memsize values in the General tab on the Template details page.

Workload Type

Note

If you choose the incorrect Workload Type, there could be performance or resource utilization issues (such as a slow UI).

Desktop

A virtual machine configuration for use on a desktop. Ideal for consumption on a small scale. Recommended for use with the web console. Use this template class or the Server template class to prioritize VM density over guaranteed VM performance.

Server

Balances performance and is compatible with a wide range of server workloads. Use this template class or the Desktop template class to prioritize VM density over guaranteed VM performance.

High-Performance (requires CPU Manager)

A virtual machine configuration that is optimized for high-performance workloads. Use this template class to prioritize guaranteed VM performance over VM density.

Start this virtual machine after creation.

 

This checkbox is selected by default and the virtual machine starts running after creation. Clear the checkbox if you do not want the virtual machine to start when it is created.

Enable the CPU Manager to use the high-performance workload profile.

8.1.2.1.1. Networking fields
Name | Description

Name

Name for the network interface controller.

Model

Indicates the model of the network interface controller. Supported values are e1000e and virtio.

Network

List of available network attachment definitions.

Type

List of available binding methods. Select the binding method suitable for the network interface:

  • Default pod network: masquerade
  • Linux bridge network: bridge
  • SR-IOV network: SR-IOV

MAC Address

MAC address for the network interface controller. If a MAC address is not specified, one is assigned automatically.

8.1.2.2. Storage fields
Name | Selection | Description

Source

Blank (creates PVC)

Create an empty disk.

Import via URL (creates PVC)

Import content via URL (HTTP or HTTPS endpoint).

Use an existing PVC

Use a PVC that is already available in the cluster.

Clone existing PVC (creates PVC)

Select an existing PVC available in the cluster and clone it.

Import via Registry (creates PVC)

Import content via container registry.

Container (ephemeral)

Upload content from a container located in a registry accessible from the cluster. The container disk should be used only for read-only filesystems such as CD-ROMs or temporary virtual machines.

Name

 

Name of the disk. The name can contain lowercase letters (a-z), numbers (0-9), hyphens (-), and periods (.), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, or special characters.

Size

 

Size of the disk in GiB.

Type

 

Type of disk. Example: Disk or CD-ROM

Interface

 

Type of disk device. Supported interfaces are virtIO, SATA, and SCSI.

Storage Class

 

The storage class that is used to create the disk.

Advanced storage settings

The following advanced storage settings are optional and available for Blank, Import via URL, and Clone existing PVC disks. Before OpenShift Virtualization 4.11, if you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map. In OpenShift Virtualization 4.11 and later, the system uses the default values from the storage profile.

Note

Use storage profiles to ensure consistent advanced storage settings when provisioning storage for OpenShift Virtualization.

To manually specify Volume Mode and Access Mode, you must clear the Apply optimized StorageProfile settings checkbox, which is selected by default.

Name | Mode description | Parameter | Parameter description

Volume Mode

Defines whether the persistent volume uses a formatted file system or raw block state. Default is Filesystem.

Filesystem

Stores the virtual disk on a file system-based volume.

Block

Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it.

Access Mode

Access mode of the persistent volume.

ReadWriteOnce (RWO)

Volume can be mounted as read-write by a single node.

ReadWriteMany (RWX)

Volume can be mounted as read-write by many nodes at one time.

Note

This is required for some features, such as live migration of virtual machines between nodes.

ReadOnlyMany (ROX)

Volume can be mounted as read only by many nodes.

8.1.2.3. Cloud-init fields
Name | Description

Hostname

Sets a specific hostname for the virtual machine.

Authorized SSH Keys

The user’s public key that is copied to ~/.ssh/authorized_keys on the virtual machine.

Custom script

Replaces other options with a field in which you paste a custom cloud-init script.

To configure storage class defaults, use storage profiles. For more information, see Customizing the storage profile.

8.1.2.4. Pasting in a pre-configured YAML file to create a virtual machine

Create a virtual machine by writing or pasting a YAML configuration file. A valid example virtual machine configuration is provided by default whenever you open the YAML edit screen.

If your YAML configuration is invalid when you click Create, an error message indicates the parameter in which the error occurs. Only one error is shown at a time.

Note

Navigating away from the YAML screen while editing cancels any changes to the configuration you have made.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Click Create and select With YAML.
  3. Write or paste your virtual machine configuration in the editable window.

    1. Alternatively, use the example virtual machine provided by default in the YAML screen.
  4. Optional: Click Download to download the YAML configuration file in its present state.
  5. Click Create to create the virtual machine.

The virtual machine is listed on the VirtualMachines page.

8.1.3. Using the CLI to create a virtual machine

You can create a virtual machine from a VirtualMachine manifest.

Procedure

  1. Edit the VirtualMachine manifest for your VM. For example, the following manifest configures a Red Hat Enterprise Linux (RHEL) VM:

    Example 8.1. Example manifest for a RHEL VM

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      labels:
        app: <vm_name> 1
      name: <vm_name>
    spec:
      dataVolumeTemplates:
      - apiVersion: cdi.kubevirt.io/v1beta1
        kind: DataVolume
        metadata:
          name: <vm_name>
        spec:
          sourceRef:
            kind: DataSource
            name: rhel9
            namespace: openshift-virtualization-os-images
          storage:
            resources:
              requests:
                storage: 30Gi
      running: false
      template:
        metadata:
          labels:
            kubevirt.io/domain: <vm_name>
        spec:
          domain:
            cpu:
              cores: 1
              sockets: 2
              threads: 1
            devices:
              disks:
              - disk:
                  bus: virtio
                name: rootdisk
              - disk:
                  bus: virtio
                name: cloudinitdisk
              interfaces:
              - masquerade: {}
                name: default
              rng: {}
            features:
              smm:
                enabled: true
            firmware:
              bootloader:
                efi: {}
            resources:
              requests:
                memory: 8Gi
          evictionStrategy: LiveMigrate
          networks:
          - name: default
            pod: {}
          volumes:
          - dataVolume:
              name: <vm_name>
            name: rootdisk
          - cloudInitNoCloud:
              userData: |-
                #cloud-config
                user: cloud-user
                password: '<password>' 2
                chpasswd: { expire: False }
            name: cloudinitdisk
    1
    Specify the name of the virtual machine.
    2
    Specify the password for cloud-user.
  2. Create a virtual machine by using the manifest file:

    $ oc create -f <vm_manifest_file>.yaml
  3. Optional: Start the virtual machine:

    $ virtctl start <vm_name>

8.1.4. Virtual machine storage volume types

Storage volume type | Description

ephemeral

A local copy-on-write (COW) image that uses a network volume as a read-only backing store. The backing volume must be a PersistentVolumeClaim. The ephemeral image is created when the virtual machine starts and stores all writes locally. The ephemeral image is discarded when the virtual machine is stopped, restarted, or deleted. The backing volume (PVC) is not mutated in any way.

persistentVolumeClaim

Attaches an available PV to a virtual machine. Attaching a PV allows for the virtual machine data to persist between sessions.

Importing an existing virtual machine disk into a PVC by using CDI and attaching the PVC to a virtual machine instance is the recommended method for importing existing virtual machines into OpenShift Container Platform. There are some requirements for the disk to be used within a PVC.

dataVolume

Data volumes build on the persistentVolumeClaim disk type by managing the process of preparing the virtual machine disk via an import, clone, or upload operation. VMs that use this volume type are guaranteed not to start until the volume is ready.

Specify type: dataVolume or type: "". If you specify any other value for type, such as persistentVolumeClaim, a warning is displayed, and the virtual machine does not start.

cloudInitNoCloud

Attaches a disk that contains the referenced cloud-init NoCloud data source, providing user data and metadata to the virtual machine. A cloud-init installation is required inside the virtual machine disk.

containerDisk

References an image, such as a virtual machine disk, that is stored in the container image registry. The image is pulled from the registry and attached to the virtual machine as a disk when the virtual machine is launched.

A containerDisk volume is not limited to a single virtual machine and is useful for creating large numbers of virtual machine clones that do not require persistent storage.

Only RAW and QCOW2 formats are supported disk types for the container image registry. QCOW2 is recommended for reduced image size.

Note

A containerDisk volume is ephemeral. It is discarded when the virtual machine is stopped, restarted, or deleted. A containerDisk volume is useful for read-only file systems such as CD-ROMs or for disposable virtual machines.

emptyDisk

Creates an additional sparse QCOW2 disk that is tied to the life-cycle of the virtual machine interface. The data survives guest-initiated reboots in the virtual machine but is discarded when the virtual machine stops or is restarted from the web console. The empty disk is used to store application dependencies and data that otherwise exceeds the limited temporary file system of an ephemeral disk.

The disk capacity size must also be provided.
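
As an illustration of how some of these volume types appear in a VirtualMachine manifest, the following sketch defines a containerDisk root volume and an emptyDisk scratch volume; the image reference and capacity are placeholders, and each volume must also be referenced by a matching disk entry under spec.template.spec.domain.devices.disks:

volumes:
- name: rootdisk
  containerDisk:
    image: <registry>/<container_disk_image>
- name: scratchdisk
  emptyDisk:
    capacity: 2Gi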

8.1.5. About RunStrategies for virtual machines

A RunStrategy for virtual machines determines a virtual machine instance’s (VMI) behavior, depending on a series of conditions. The spec.runStrategy setting exists in the virtual machine configuration process as an alternative to the spec.running setting. The spec.runStrategy setting allows greater flexibility for how VMIs are created and managed, in contrast to the spec.running setting with only true or false responses. However, the two settings are mutually exclusive. Only either spec.running or spec.runStrategy can be used. An error occurs if both are used.

There are four defined RunStrategies.

Always
A VMI is always present when a virtual machine is created. A new VMI is created if the original stops for any reason, which is the same behavior as spec.running: true.
RerunOnFailure
A VMI is re-created if the previous instance fails due to an error. The instance is not re-created if the virtual machine stops successfully, such as when it shuts down.
Manual
The start, stop, and restart virtctl client commands can be used to control the VMI’s state and existence.
Halted
No VMI is present when a virtual machine is created, which is the same behavior as spec.running: false.

Different combinations of the start, stop and restart virtctl commands affect which RunStrategy is used.

The following table follows a VM’s transition from different states. The first column shows the VM’s initial RunStrategy. Each additional column shows a virtctl command and the new RunStrategy after that command is run.

Initial RunStrategy | start  | stop   | restart
Always              | -      | Halted | Always
RerunOnFailure      | -      | Halted | RerunOnFailure
Manual              | Manual | Manual | Manual
Halted              | Always | -      | -

Note

In OpenShift Virtualization clusters installed using installer-provisioned infrastructure, when a node fails the MachineHealthCheck and becomes unavailable to the cluster, VMs with a RunStrategy of Always or RerunOnFailure are rescheduled on a new node.

apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  runStrategy: Always 1
  template:
...
1
The current runStrategy setting for the virtual machine.

8.1.6. Additional resources

8.2. Editing virtual machines

You can update a virtual machine configuration using either the YAML editor in the web console or the OpenShift CLI on the command line. You can also update a subset of the parameters in the Virtual Machine Details screen.

8.2.1. Editing a virtual machine in the web console

Edit select values of a virtual machine in the web console by clicking the pencil icon next to the relevant field. Other values can be edited using the CLI.

Labels and annotations are editable for both preconfigured Red Hat templates and your custom virtual machine templates. All other values are editable only for custom virtual machine templates that users have created using the Red Hat templates or the Create Virtual Machine Template wizard.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Optional: Use the Filter drop-down menu to sort the list of virtual machines by attributes such as status, template, node, or operating system (OS).
  3. Select a virtual machine to open the VirtualMachine details page.
  4. Click the pencil icon to make a field editable.
  5. Make the relevant changes and click Save.
Note

If the virtual machine is running, changes to Boot Order or Flavor will not take effect until you restart the virtual machine.

You can view pending changes by clicking View Pending Changes on the right side of the relevant field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.

8.2.1.1. Virtual machine fields

The following table lists the virtual machine fields that you can edit in the OpenShift Container Platform web console:

Table 8.1. Virtual machine fields
Tab | Fields or functionality

Details

  • Labels
  • Annotations
  • Description
  • CPU/Memory
  • Boot mode
  • Boot order
  • GPU devices
  • Host devices
  • SSH access

YAML

  • View, edit, or download the custom resource.

Scheduling

  • Node selector
  • Tolerations
  • Affinity rules
  • Dedicated resources
  • Eviction strategy
  • Descheduler setting

Network Interfaces

  • Add, edit, or delete a network interface.

Disks

  • Add, edit, or delete a disk.

Scripts

  • cloud-init settings

Snapshots

  • Add, restore, or delete a virtual machine snapshot.

8.2.2. Editing a virtual machine YAML configuration using the web console

You can edit the YAML configuration of a virtual machine in the web console. Some parameters cannot be modified. If you click Save with an invalid configuration, an error message indicates the parameter that cannot be changed.

Note

Navigating away from the YAML screen while editing cancels any changes to the configuration you have made.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Select a virtual machine.
  3. Click the YAML tab to display the editable configuration.
  4. Optional: You can click Download to download the YAML file locally in its current state.
  5. Edit the file and click Save.

A confirmation message shows that the modification has been successful and includes the updated version number for the object.

8.2.3. Editing a virtual machine YAML configuration using the CLI

Use this procedure to edit a virtual machine YAML configuration using the CLI.

Prerequisites

  • You configured a virtual machine with a YAML object configuration file.
  • You installed the oc CLI.

Procedure

  1. Run the following command to update the virtual machine configuration:

    $ oc edit <object_type> <object_ID>
  2. Open the object configuration.
  3. Edit the YAML.
  4. If you edit a running virtual machine, you need to do one of the following:

    • Restart the virtual machine.
    • Run the following command for the new configuration to take effect:

      $ oc apply -f <object_config>

8.2.4. Adding a virtual disk to a virtual machine

Use this procedure to add a virtual disk to a virtual machine.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details screen.
  3. Click the Disks tab and then click Add disk.
  4. In the Add disk window, specify the Source, Name, Size, Type, Interface, and Storage Class.

    1. Optional: You can enable preallocation if you use a blank disk source and require maximum write performance when creating data volumes. To do so, select the Enable preallocation checkbox.
    2. Optional: You can clear Apply optimized StorageProfile settings to change the Volume Mode and Access Mode for the virtual disk. If you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map.
  5. Click Add.
Note

If the virtual machine is running, the new disk is in the pending restart state and will not be attached until you restart the virtual machine.

The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.

To configure storage class defaults, use storage profiles. For more information, see Customizing the storage profile.

8.2.4.1. Editing CD-ROMs for VirtualMachines

Use the following procedure to edit CD-ROMs for virtual machines.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details screen.
  3. Click the Disks tab.
  4. Click the Options menu kebab for the CD-ROM that you want to edit and select Edit.
  5. In the Edit CD-ROM window, edit the fields: Source, Persistent Volume Claim, Name, Type, and Interface.
  6. Click Save.
8.2.4.2. Storage fields
Name | Selection | Description

Source

Blank (creates PVC)

Create an empty disk.

Import via URL (creates PVC)

Import content via URL (HTTP or HTTPS endpoint).

Use an existing PVC

Use a PVC that is already available in the cluster.

Clone existing PVC (creates PVC)

Select an existing PVC available in the cluster and clone it.

Import via Registry (creates PVC)

Import content via container registry.

Container (ephemeral)

Upload content from a container located in a registry accessible from the cluster. The container disk should be used only for read-only filesystems such as CD-ROMs or temporary virtual machines.

Name

 

Name of the disk. The name can contain lowercase letters (a-z), numbers (0-9), hyphens (-), and periods (.), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, or special characters.

Size

 

Size of the disk in GiB.

Type

 

Type of disk. Example: Disk or CD-ROM

Interface

 

Type of disk device. Supported interfaces are virtIO, SATA, and SCSI.

Storage Class

 

The storage class that is used to create the disk.

Advanced storage settings

The following advanced storage settings are optional and available for Blank, Import via URL, and Clone existing PVC disks. Before OpenShift Virtualization 4.11, if you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map. In OpenShift Virtualization 4.11 and later, the system uses the default values from the storage profile.

Note

Use storage profiles to ensure consistent advanced storage settings when provisioning storage for OpenShift Virtualization.

To manually specify Volume Mode and Access Mode, you must clear the Apply optimized StorageProfile settings checkbox, which is selected by default.

Name | Mode description | Parameter | Parameter description

Volume Mode

Defines whether the persistent volume uses a formatted file system or raw block state. Default is Filesystem.

Filesystem

Stores the virtual disk on a file system-based volume.

Block

Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it.

Access Mode

Access mode of the persistent volume.

ReadWriteOnce (RWO)

Volume can be mounted as read-write by a single node.

ReadWriteMany (RWX)

Volume can be mounted as read-write by many nodes at one time.

Note

This is required for some features, such as live migration of virtual machines between nodes.

ReadOnlyMany (ROX)

Volume can be mounted as read only by many nodes.

8.2.5. Adding a network interface to a virtual machine

Use this procedure to add a network interface to a virtual machine.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details screen.
  3. Click the Network Interfaces tab.
  4. Click Add Network Interface.
  5. In the Add Network Interface window, specify the Name, Model, Network, Type, and MAC Address of the network interface.
  6. Click Add.
Note

If the virtual machine is running, the new network interface is in the pending restart state and changes will not take effect until you restart the virtual machine.

The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.

8.2.5.1. Networking fields
Name | Description

Name

Name for the network interface controller.

Model

Indicates the model of the network interface controller. Supported values are e1000e and virtio.

Network

List of available network attachment definitions.

Type

List of available binding methods. Select the binding method suitable for the network interface:

  • Default pod network: masquerade
  • Linux bridge network: bridge
  • SR-IOV network: SR-IOV

MAC Address

MAC address for the network interface controller. If a MAC address is not specified, one is assigned automatically.

8.2.6. Additional resources

8.3. Editing boot order

You can update the values for a boot order list by using the web console or the CLI.

With Boot Order in the Virtual Machine Overview page, you can:

  • Select a disk or network interface controller (NIC) and add it to the boot order list.
  • Edit the order of the disks or NICs in the boot order list.
  • Remove a disk or NIC from the boot order list, and return it back to the inventory of bootable sources.

8.3.1. Adding items to a boot order list in the web console

Add items to a boot order list by using the web console.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. Click the Details tab.
  4. Click the pencil icon that is located on the right side of Boot Order. If a YAML configuration does not exist, or if this is the first time that you are creating a boot order list, the following message displays: No resource selected. VM will attempt to boot from disks by order of appearance in YAML file.
  5. Click Add Source and select a bootable disk or network interface controller (NIC) for the virtual machine.
  6. Add any additional disks or NICs to the boot order list.
  7. Click Save.
Note

If the virtual machine is running, changes to Boot Order will not take effect until you restart the virtual machine.

You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.

8.3.2. Editing a boot order list in the web console

Edit the boot order list in the web console.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. Click the Details tab.
  4. Click the pencil icon that is located on the right side of Boot Order.
  5. Choose the appropriate method to move the item in the boot order list:

    • If you do not use a screen reader, hover over the arrow icon next to the item that you want to move, drag the item up or down, and drop it in a location of your choice.
    • If you use a screen reader, press the Up Arrow key or Down Arrow key to move the item in the boot order list. Then, press the Tab key to drop the item in a location of your choice.
  6. Click Save.
Note

If the virtual machine is running, changes to the boot order list will not take effect until you restart the virtual machine.

You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.

8.3.3. Editing a boot order list in the YAML configuration file

Edit the boot order list in a YAML configuration file by using the CLI.

Procedure

  1. Open the YAML configuration file for the virtual machine by running the following command:

    $ oc edit vm example
  2. Edit the YAML file and modify the values for the boot order associated with a disk or network interface controller (NIC). For example:

    disks:
      - bootOrder: 1 1
        disk:
          bus: virtio
        name: containerdisk
      - disk:
          bus: virtio
        name: cloudinitdisk
      - cdrom:
          bus: virtio
        name: cd-drive-1
    interfaces:
      - bootOrder: 2 2
        macAddress: '02:96:c4:00:00'
        masquerade: {}
        name: default
    1
    The boot order value specified for the disk.
    2
    The boot order value specified for the network interface controller.
  3. Save the YAML file.
  4. Click reload the content to apply the updated boot order values from the YAML file to the boot order list in the web console.

8.3.4. Removing items from a boot order list in the web console

Remove items from a boot order list by using the web console.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. Click the Details tab.
  4. Click the pencil icon that is located on the right side of Boot Order.
  5. Click the Remove icon next to the item. The item is removed from the boot order list and saved in the list of available boot sources. If you remove all items from the boot order list, the following message displays: No resource selected. VM will attempt to boot from disks by order of appearance in YAML file.
Note

If the virtual machine is running, changes to Boot Order will not take effect until you restart the virtual machine.

You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.

8.4. Deleting virtual machines

You can delete a virtual machine from the web console or by using the oc command line interface.

8.4.1. Deleting a virtual machine using the web console

Deleting a virtual machine permanently removes it from the cluster.

Note

When you delete a virtual machine, the data volume it uses is automatically deleted.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization → VirtualMachines from the side menu.
  2. Click the Options menu kebab of the virtual machine that you want to delete and select Delete.

    • Alternatively, click the virtual machine name to open the VirtualMachine details page and click Actions → Delete.
  3. In the confirmation pop-up window, click Delete to permanently delete the virtual machine.

8.4.2. Deleting a virtual machine by using the CLI

You can delete a virtual machine by using the oc command line interface (CLI). The oc client enables you to perform actions on multiple virtual machines.

Note

When you delete a virtual machine, the data volume it uses is automatically deleted.

Prerequisites

  • Identify the name of the virtual machine that you want to delete.

Procedure

  • Delete the virtual machine by running the following command:

    $ oc delete vm <vm_name>
    Note

    This command only deletes objects that exist in the current project. Specify the -n <project_name> option if the object you want to delete is in a different project or namespace.
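
For example, to act on several virtual machines at once, you can pass multiple names or a label selector to the oc client; the virtual machine names, label, and project name are placeholders:

$ oc delete vm <vm_name_1> <vm_name_2> -n <project_name>
$ oc delete vm -l <label_key>=<label_value> -n <project_name>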

8.5. Managing virtual machine instances

If you have standalone virtual machine instances (VMIs) that were created independently outside of the OpenShift Virtualization environment, you can manage them by using the web console or by using oc or virtctl commands from the command-line interface (CLI).

The virtctl command provides more virtualization options than the oc command. For example, you can use virtctl to pause a VM or expose a port.

8.5.1. About virtual machine instances

A virtual machine instance (VMI) is a representation of a running virtual machine (VM). When a VMI is owned by a VM or by another object, you manage it through its owner in the web console or by using the oc command-line interface (CLI).

A standalone VMI is created and started independently with a script, through automation, or by using other methods in the CLI. In your environment, you might have standalone VMIs that were developed and started outside of the OpenShift Virtualization environment. You can continue to manage those standalone VMIs by using the CLI. You can also use the web console for specific tasks associated with standalone VMIs:

  • List standalone VMIs and their details.
  • Edit labels and annotations for a standalone VMI.
  • Delete a standalone VMI.

When you delete a VM, the associated VMI is automatically deleted. You delete a standalone VMI directly because it is not owned by VMs or other objects.

Note

Before you uninstall OpenShift Virtualization, list and view the standalone VMIs by using the CLI or the web console. Then, delete any outstanding VMIs.

8.5.2. Listing all virtual machine instances using the CLI

You can list all virtual machine instances (VMIs) in your cluster, including standalone VMIs and those owned by virtual machines, by using the oc command-line interface (CLI).

Procedure

  • List all VMIs by running the following command:

    $ oc get vmis -A

8.5.3. Listing standalone virtual machine instances using the web console

Using the web console, you can list and view standalone virtual machine instances (VMIs) in your cluster that are not owned by virtual machines (VMs).

Note

VMIs that are owned by VMs or other objects are not displayed in the web console. The web console displays only standalone VMIs. If you want to list all VMIs in your cluster, you must use the CLI.

Procedure

  • Click Virtualization → VirtualMachines from the side menu.

    You can identify a standalone VMI by a dark colored badge next to its name.

8.5.4. Editing a standalone virtual machine instance using the web console

You can edit the annotations and labels of a standalone virtual machine instance (VMI) using the web console. Other fields are not editable.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization → VirtualMachines from the side menu.
  2. Select a standalone VMI to open the VirtualMachineInstance details page.
  3. On the Details tab, click the pencil icon beside Annotations or Labels.
  4. Make the relevant changes and click Save.

8.5.5. Deleting a standalone virtual machine instance using the CLI

You can delete a standalone virtual machine instance (VMI) by using the oc command-line interface (CLI).

Prerequisites

  • Identify the name of the VMI that you want to delete.

Procedure

  • Delete the VMI by running the following command:

    $ oc delete vmi <vmi_name>

8.5.6. Deleting a standalone virtual machine instance using the web console

Delete a standalone virtual machine instance (VMI) from the web console.

Procedure

  1. In the OpenShift Container Platform web console, click Virtualization → VirtualMachines from the side menu.
  2. Click Actions → Delete VirtualMachineInstance.
  3. In the confirmation pop-up window, click Delete to permanently delete the standalone VMI.

8.6. Controlling virtual machine states

You can stop, start, restart, and unpause virtual machines from the web console.

You can use virtctl to manage virtual machine states and perform other actions from the CLI. For example, you can use virtctl to force stop a VM or expose a port.
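
For example, the following virtctl commands start, force stop, pause, and unpause a virtual machine; the VM name is a placeholder:

    $ virtctl start <vm_name>
    $ virtctl stop <vm_name> --grace-period=0 --force
    $ virtctl pause vm <vm_name>
    $ virtctl unpause vm <vm_name>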

8.6.1. Starting a virtual machine

You can start a virtual machine from the web console.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Find the row that contains the virtual machine that you want to start.
  3. Navigate to the appropriate menu for your use case:

    • To stay on this page, where you can perform actions on multiple virtual machines:

      1. Click the Options menu (kebab icon) located at the far right end of the row.
    • To view comprehensive information about the selected virtual machine before you start it:

      1. Access the VirtualMachine details page by clicking the name of the virtual machine.
      2. Click Actions.
  4. Select Start.
  5. In the confirmation window, click Start to start the virtual machine.
Note

When you start a virtual machine that is provisioned from a URL source for the first time, the virtual machine has a status of Importing while OpenShift Virtualization imports the container from the URL endpoint. Depending on the size of the image, this process might take several minutes.

8.6.2. Restarting a virtual machine

You can restart a running virtual machine from the web console.

Important

To avoid errors, do not restart a virtual machine while it has a status of Importing.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Find the row that contains the virtual machine that you want to restart.
  3. Navigate to the appropriate menu for your use case:

    • To stay on this page, where you can perform actions on multiple virtual machines:

      1. Click the Options menu (kebab icon) located at the far right end of the row.
    • To view comprehensive information about the selected virtual machine before you restart it:

      1. Access the VirtualMachine details page by clicking the name of the virtual machine.
      2. Click Actions → Restart.
  4. In the confirmation window, click Restart to restart the virtual machine.

8.6.3. Stopping a virtual machine

You can stop a virtual machine from the web console.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Find the row that contains the virtual machine that you want to stop.
  3. Navigate to the appropriate menu for your use case:

    • To stay on this page, where you can perform actions on multiple virtual machines:

      1. Click the Options menu (kebab icon) located at the far right end of the row.
    • To view comprehensive information about the selected virtual machine before you stop it:

      1. Access the VirtualMachine details page by clicking the name of the virtual machine.
      2. Click Actions → Stop.
  4. In the confirmation window, click Stop to stop the virtual machine.

8.6.4. Unpausing a virtual machine

You can unpause a paused virtual machine from the web console.

Prerequisites

  • At least one of your virtual machines must have a status of Paused.

    Note

    You can pause virtual machines by using the virtctl client.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Find the row that contains the virtual machine that you want to unpause.
  3. Navigate to the appropriate menu for your use case:

    • To stay on this page, where you can perform actions on multiple virtual machines:

      1. In the Status column, click Paused.
    • To view comprehensive information about the selected virtual machine before you unpause it:

      1. Access the VirtualMachine details page by clicking the name of the virtual machine.
      2. Click the pencil icon that is located on the right side of Status.
  4. In the confirmation window, click Unpause to unpause the virtual machine.

8.7. Accessing virtual machine consoles

OpenShift Virtualization provides different virtual machine consoles that you can use to accomplish different product tasks. You can access these consoles through the OpenShift Container Platform web console and by using CLI commands.

8.7.1. Accessing virtual machine consoles in the OpenShift Container Platform web console

You can connect to virtual machines by using the serial console or the VNC console in the OpenShift Container Platform web console.

You can connect to Windows virtual machines by using the desktop viewer console, which uses RDP (remote desktop protocol), in the OpenShift Container Platform web console.

8.7.1.1. Connecting to the serial console

Connect to the serial console of a running virtual machine from the Console tab on the VirtualMachine details page of the web console.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization → VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. Click the Console tab. The VNC console opens by default.
  4. Click Disconnect to ensure that only one console session is open at a time. Otherwise, the VNC console session remains active in the background.
  5. Click the VNC Console drop-down list and select Serial Console.
  6. Click Disconnect to end the console session.
  7. Optional: Open the serial console in a separate window by clicking Open Console in New Window.
8.7.1.2. Connecting to the VNC console

Connect to the VNC console of a running virtual machine from the Console tab on the VirtualMachine details page of the web console.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization → VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. Click the Console tab. The VNC console opens by default.
  4. Optional: Open the VNC console in a separate window by clicking Open Console in New Window.
  5. Optional: Send key combinations to the virtual machine by clicking Send Key.
  6. Click outside the console window and then click Disconnect to end the session.
8.7.1.3. Connecting to a Windows virtual machine with RDP

The desktop viewer console, which utilizes the Remote Desktop Protocol (RDP), provides a better console experience for connecting to Windows virtual machines.

To connect to a Windows virtual machine with RDP, download the console.rdp file for the virtual machine from the Console tab on the VirtualMachine details page of the web console and supply it to your preferred RDP client.

Prerequisites

  • A running Windows virtual machine with the QEMU guest agent installed. The qemu-guest-agent is included in the VirtIO drivers.
  • A layer-2 NIC attached to the virtual machine.
  • An RDP client installed on a machine on the same network as the Windows virtual machine.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization → VirtualMachines from the side menu.
  2. Click a Windows virtual machine to open the VirtualMachine details page.
  3. Click the Console tab.
  4. In the Console list, select Desktop Viewer.
  5. In the Network Interface list, select the layer-2 NIC.
  6. Click Launch Remote Desktop to download the console.rdp file.
  7. Open an RDP client and reference the console.rdp file. For example, using remmina:

    $ remmina --connect /path/to/console.rdp
  8. Enter the Administrator user name and password to connect to the Windows virtual machine.

8.7.2. Accessing virtual machine consoles by using CLI commands

8.7.2.1. Accessing a virtual machine instance via SSH

You can use SSH to access a virtual machine (VM) after you expose port 22 on it.

The virtctl expose command forwards a virtual machine instance (VMI) port to a node port and creates a service that enables access. The following example creates the fedora-vm-ssh service that forwards traffic from a specific port of cluster nodes to port 22 of the <fedora-vm> virtual machine.

Prerequisites

  • You must be in the same project as the VMI.
  • The VMI you want to access must be connected to the default pod network by using the masquerade binding method.
  • The VMI you want to access must be running.
  • Install the OpenShift CLI (oc).

Procedure

  1. Run the following command to create the fedora-vm-ssh service:

    $ virtctl expose vm <fedora-vm> --port=22 --name=fedora-vm-ssh --type=NodePort 1
    1
    <fedora-vm> is the name of the VM that you run the fedora-vm-ssh service on.
  2. Check the service to find out which port the service acquired:

    $ oc get svc

    Example output

    NAME            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)           AGE
    fedora-vm-ssh   NodePort   127.0.0.1      <none>        22:32551/TCP   6s

    In this example, the service acquired the 32551 port.

  3. Log in to the VMI via SSH. Use the IP address of any of the cluster nodes and the port that you found in the previous step:

    $ ssh username@<node_IP_address> -p 32551
8.7.2.2. Accessing a virtual machine via SSH with YAML configurations

You can enable an SSH connection to a virtual machine (VM) without the need to run the virtctl expose command. When the YAML file for the VM and the YAML file for the service are configured and applied, the service forwards the SSH traffic to the VM.

The following examples show the configurations for the VM’s YAML file and the service YAML file.

Prerequisites

  • Install the OpenShift CLI (oc).
  • Create a namespace for the VM’s YAML file by using the oc create namespace command and specifying a name for the namespace.

Procedure

  1. In the YAML file for the VM, add the label and a value for exposing the service for SSH connections. Enable the masquerade feature for the interface:

    Example VirtualMachine definition

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      namespace: ssh-ns 1
      name: vm-ssh
    spec:
      running: false
      template:
        metadata:
          labels:
            kubevirt.io/vm: vm-ssh
            special: vm-ssh 2
        spec:
          domain:
            devices:
              disks:
              - disk:
                  bus: virtio
                name: containerdisk
              - disk:
                  bus: virtio
                name: cloudinitdisk
              interfaces:
              - masquerade: {} 3
                name: testmasquerade 4
              rng: {}
            machine:
              type: ""
            resources:
              requests:
                memory: 1024M
          networks:
          - name: testmasquerade
            pod: {}
          volumes:
          - name: containerdisk
            containerDisk:
              image: kubevirt/fedora-cloud-container-disk-demo
          - name: cloudinitdisk
            cloudInitNoCloud:
              userData: |
                #cloud-config
                user: fedora
                password: fedora
                chpasswd: {expire: False}
    # ...

    1
    Name of the namespace created by the oc create namespace command.
    2
    Label used by the service to identify the virtual machine instances that are enabled for SSH traffic connections. The label can be any key:value pair that is added as a label to this YAML file and as a selector in the service YAML file.
    3
    The interface type is masquerade.
    4
    The name of this interface is testmasquerade.
  2. Create the VM:

    $ oc create -f <path_for_the_VM_YAML_file>
  3. Start the VM:

    $ virtctl start vm-ssh
  4. In the YAML file for the service, specify the service name, port number, and the target port.

    Example Service definition

    apiVersion: v1
    kind: Service
    metadata:
      name: svc-ssh 1
      namespace: ssh-ns 2
    spec:
      ports:
      - targetPort: 22 3
        protocol: TCP
        port: 27017
      selector:
        special: vm-ssh 4
      type: NodePort
    # ...

    1
    Name of the SSH service.
    2
    Name of the namespace created by the oc create namespace command.
    3
    The target port number for the SSH connection.
    4
    The selector name and value must match the label specified in the YAML file for the VM.
  5. Create the service:

    $ oc create -f <path_for_the_service_YAML_file>
  6. Verify that the VM is running:

    $ oc get vmi

    Example output

    NAME    AGE     PHASE       IP              NODENAME
    vm-ssh 6s       Running     10.244.196.152  node01

  7. Check the service to find out which port the service acquired:

    $ oc get svc

    Example output

    NAME            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)           AGE
    svc-ssh     NodePort       10.106.236.208 <none>        27017:30093/TCP   22s

    In this example, the service acquired the port number 30093.

  8. Run the following command to obtain the IP address for the node:

    $ oc get node <node_name> -o wide

    Example output

    NAME    STATUS   ROLES   AGE    VERSION  INTERNAL-IP      EXTERNAL-IP
    node01  Ready    worker  6d22h  v1.23.0  192.168.55.101   <none>

  9. Log in to the VM via SSH by specifying the IP address of the node where the VM is running and the port number. Use the port number displayed by the oc get svc command and the IP address of the node displayed by the oc get node command. The following example shows the ssh command with the username, node’s IP address, and the port number:

    $ ssh fedora@192.168.55.101 -p 30093
8.7.2.3. Accessing the serial console of a virtual machine instance

The virtctl console command opens a serial console to the specified virtual machine instance.

Prerequisites

  • The virt-viewer package must be installed.
  • The virtual machine instance you want to access must be running.

Procedure

  • Connect to the serial console with virtctl:

    $ virtctl console <VMI>
8.7.2.4. Accessing the graphical console of a virtual machine instance with VNC

The virtctl client utility can use the remote-viewer function to open a graphical console to a running virtual machine instance. This capability is included in the virt-viewer package.

Prerequisites

  • The virt-viewer package must be installed.
  • The virtual machine instance you want to access must be running.
Note

If you use virtctl via SSH on a remote machine, you must forward the X session to your machine.
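
For example, assuming you run virtctl on a remote Linux machine, you might connect to that machine with X forwarding enabled before opening the VNC console; the user and host names are placeholders:

    $ ssh -X <user>@<remote_machine>
    $ virtctl vnc <VMI>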

Procedure

  1. Connect to the graphical interface with the virtctl utility:

    $ virtctl vnc <VMI>
  2. If the command fails, try using the -v flag to collect troubleshooting information:

    $ virtctl vnc <VMI> -v 4
8.7.2.5. Connecting to a Windows virtual machine with an RDP console

The Remote Desktop Protocol (RDP) provides a better console experience for connecting to Windows virtual machines.

To connect to a Windows virtual machine with RDP, supply the IP address of the attached layer 2 NIC to your RDP client.

Prerequisites

  • A running Windows virtual machine with the QEMU guest agent installed. The qemu-guest-agent is included in the VirtIO drivers.
  • A layer 2 NIC attached to the virtual machine.
  • An RDP client installed on a machine on the same network as the Windows virtual machine.

Procedure

  1. Log in to the OpenShift Virtualization cluster through the oc CLI tool as a user with an access token.

    $ oc login -u <user> https://<cluster.example.com>:8443
  2. Use oc describe vmi to display the configuration of the running Windows virtual machine.

    $ oc describe vmi <windows-vmi-name>

    Example output

    ...
    spec:
      networks:
      - name: default
        pod: {}
      - multus:
          networkName: cnv-bridge
        name: bridge-net
    ...
    status:
      interfaces:
      - interfaceName: eth0
        ipAddress: 198.51.100.0/24
        ipAddresses:
          198.51.100.0/24
        mac: a0:36:9f:0f:b1:70
        name: default
      - interfaceName: eth1
        ipAddress: 192.0.2.0/24
        ipAddresses:
          192.0.2.0/24
          2001:db8::/32
        mac: 00:17:a4:77:77:25
        name: bridge-net
    ...

  3. Identify and copy the IP address of the layer 2 network interface. This is 192.0.2.0 in the above example, or 2001:db8:: if you prefer IPv6.
  4. Open an RDP client and use the IP address copied in the previous step for the connection.
  5. Enter the Administrator user name and password to connect to the Windows virtual machine.

8.8. Automating Windows installation with sysprep

You can use Microsoft DVD images and sysprep to automate the installation, setup, and software provisioning of Windows virtual machines.

8.8.1. Using a Windows DVD to create a VM disk image

Microsoft does not provide disk images for download, but you can create a disk image using a Windows DVD. This disk image can then be used to create virtual machines.

Procedure

  1. In the OpenShift Virtualization web console, click Storage → PersistentVolumeClaims → Create PersistentVolumeClaim → With Data upload form.
  2. Select the intended project.
  3. Set the Persistent Volume Claim Name.
  4. Upload the VM disk image from the Windows DVD. The image is now available as a boot source to create a new Windows VM.

8.8.2. Using a disk image to install Windows

You can use a disk image to install Windows on your virtual machine.

Prerequisites

  • You must create a disk image using a Windows DVD.
  • You must create an autounattend.xml answer file. See the Microsoft documentation for details.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization → Catalog from the side menu.
  2. Select a Windows template and click Customize VirtualMachine.
  3. Select Upload (Upload a new file to a PVC) from the Disk source list and browse to the DVD image.
  4. Click Review and create VirtualMachine.
  5. Clear Clone available operating system source to this Virtual Machine.
  6. Clear Start this VirtualMachine after creation.
  7. On the Sysprep section of the Scripts tab, click Edit.
  8. Browse to the autounattend.xml answer file and click Save.
  9. Click Create VirtualMachine.
  10. On the YAML tab, replace running: false with runStrategy: RerunOnFailure and click Save. A sketch of the resulting change follows this procedure.

The VM will start with the sysprep disk containing the autounattend.xml answer file.
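
A minimal sketch of the relevant part of the VirtualMachine manifest after step 10 might look like the following; all other fields are omitted and the VM name is a placeholder:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: <windows_vm_name>
    spec:
      runStrategy: RerunOnFailure
    # ...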

8.8.3. Generalizing a Windows VM using sysprep

Generalizing an image removes all system-specific configuration data so that you can use the image to deploy new virtual machines (VMs).

Before generalizing the VM, you must ensure the sysprep tool cannot detect an answer file after the unattended Windows installation.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization → VirtualMachines.
  2. Select a Windows VM to open the VirtualMachine details page.
  3. Click the Disks tab.
  4. Click the Options menu (kebab icon) for the sysprep disk and select Detach.
  5. Click Detach.
  6. Rename C:\Windows\Panther\unattend.xml to avoid detection by the sysprep tool.
  7. Start the sysprep program by running the following command:

    %WINDIR%\System32\Sysprep\sysprep.exe /generalize /shutdown /oobe /mode:vm
  8. After the sysprep tool completes, the Windows VM shuts down. The disk image of the VM is now available to use as an installation image for Windows VMs.

You can now specialize the VM.

8.8.4. Specializing a Windows virtual machine

Specializing a virtual machine (VM) configures the computer-specific information from a generalized Windows image onto the VM.

Prerequisites

  • You must have a generalized Windows disk image.
  • You must create an unattend.xml answer file. See the Microsoft documentation for details.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization → Catalog.
  2. Select a Windows template and click Customize VirtualMachine.
  3. Select PVC (clone PVC) from the Disk source list.
  4. Specify the Persistent Volume Claim project and Persistent Volume Claim name of the generalized Windows image.
  5. Click Review and create VirtualMachine.
  6. Click the Scripts tab.
  7. In the Sysprep section, click Edit, browse to the unattend.xml answer file, and click Save.
  8. Click Create VirtualMachine.

During the initial boot, Windows uses the unattend.xml answer file to specialize the VM. The VM is now ready to use.

8.8.5. Additional resources

8.9. Triggering virtual machine failover by resolving a failed node

If a node fails and machine health checks are not deployed on your cluster, virtual machines (VMs) with RunStrategy: Always configured are not automatically relocated to healthy nodes. To trigger VM failover, you must manually delete the Node object.

Note

If you installed your cluster by using installer-provisioned infrastructure and you properly configured machine health checks:

  • Failed nodes are automatically recycled.
  • Virtual machines with RunStrategy set to Always or RerunOnFailure are automatically scheduled on healthy nodes.

8.9.1. Prerequisites

  • A node where a virtual machine was running has the NotReady condition.
  • The virtual machine that was running on the failed node has RunStrategy set to Always.
  • You have installed the OpenShift CLI (oc).

8.9.2. Deleting nodes from a bare metal cluster

When you delete a node using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node are not deleted. Any bare pods not backed by a replication controller become inaccessible to OpenShift Container Platform. Pods backed by replication controllers are rescheduled to other available nodes. You must delete local manifest pods.

Procedure

Delete a node from an OpenShift Container Platform cluster running on bare metal by completing the following steps:

  1. Mark the node as unschedulable:

    $ oc adm cordon <node_name>
  2. Drain all pods on the node:

    $ oc adm drain <node_name> --force=true

    This step might fail if the node is offline or unresponsive. Even if the node does not respond, it might still be running a workload that writes to shared storage. To avoid data corruption, power down the physical hardware before you proceed.

  3. Delete the node from the cluster:

    $ oc delete node <node_name>

    Although the node object is now deleted from the cluster, it can still rejoin the cluster after reboot or if the kubelet service is restarted. To permanently delete the node and all its data, you must decommission the node.

  4. If you powered down the physical hardware, turn it back on so that the node can rejoin the cluster.

8.9.3. Verifying virtual machine failover

After all resources are terminated on the unhealthy node, a new virtual machine instance (VMI) is automatically created on a healthy node for each relocated VM. To confirm that the VMI was created, view all VMIs by using the oc CLI.

8.9.3.1. Listing all virtual machine instances using the CLI

You can list all virtual machine instances (VMIs) in your cluster, including standalone VMIs and those owned by virtual machines, by using the oc command-line interface (CLI).

Procedure

  • List all VMIs by running the following command:

    $ oc get vmis -A

8.10. Installing the QEMU guest agent on virtual machines

The QEMU guest agent is a daemon that runs on the virtual machine and passes information to the host about the virtual machine, users, file systems, and secondary networks.

8.10.1. Installing QEMU guest agent on a Linux virtual machine

The qemu-guest-agent is widely available and is included by default in Red Hat virtual machines. Install the agent and start the service.

To check if your virtual machine (VM) has the QEMU guest agent installed and running, verify that AgentConnected is listed in the VM spec.

Note

To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent.

The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM’s file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI.

Procedure

  1. Access the virtual machine command line through one of the consoles or by SSH.
  2. Install the QEMU guest agent on the virtual machine:

    $ yum install -y qemu-guest-agent
  3. Ensure the service is persistent and start it:

    $ systemctl enable --now qemu-guest-agent
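
As an optional check, you can confirm that the agent is connected by inspecting the conditions of the virtual machine instance; the VMI name is a placeholder and the exact output depends on your cluster:

    $ oc get vmi <vmi_name> -o jsonpath='{.status.conditions}'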

8.10.2. Installing QEMU guest agent on a Windows virtual machine

For Windows virtual machines, the QEMU guest agent is included in the VirtIO drivers. Install the drivers on an existing or a new Windows installation.

To check if your virtual machine (VM) has the QEMU guest agent installed and running, verify that AgentConnected is listed in the VM spec.

Note

To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent.

The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM’s file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI.

8.10.2.1. Installing VirtIO drivers on an existing Windows virtual machine

Install the VirtIO drivers from the attached SATA CD drive to an existing Windows virtual machine.

Note

This procedure uses a generic approach to adding drivers to Windows. The process might differ slightly between versions of Windows. See the installation documentation for your version of Windows for specific installation steps.

Procedure

  1. Start the virtual machine and connect to a graphical console.
  2. Log in to a Windows user session.
  3. Open Device Manager and expand Other devices to list any Unknown device.

    1. Open the Device Properties to identify the unknown device. Right-click the device and select Properties.
    2. Click the Details tab and select Hardware Ids in the Property list.
    3. Compare the Value for the Hardware Ids with the supported VirtIO drivers.
  4. Right-click the device and select Update Driver Software.
  5. Click Browse my computer for driver software and browse to the attached SATA CD drive, where the VirtIO drivers are located. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture.
  6. Click Next to install the driver.
  7. Repeat this process for all the necessary VirtIO drivers.
  8. After the driver installs, click Close to close the window.
  9. Reboot the virtual machine to complete the driver installation.
8.10.2.2. Installing VirtIO drivers during Windows installation

Install the VirtIO drivers from the attached SATA CD driver during Windows installation.

Note

This procedure uses a generic approach to the Windows installation and the installation method might differ between versions of Windows. See the documentation for the version of Windows that you are installing.

Procedure

  1. Start the virtual machine and connect to a graphical console.
  2. Begin the Windows installation process.
  3. Select the Advanced installation.
  4. The storage destination will not be recognized until the driver is loaded. Click Load driver.
  5. The drivers are attached as a SATA CD drive. Click OK and browse the CD drive for the storage driver to load. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture.
  6. Repeat the previous two steps for all required drivers.
  7. Complete the Windows installation.

8.11. Viewing the QEMU guest agent information for virtual machines

When the QEMU guest agent runs on the virtual machine, you can use the web console to view information about the virtual machine, users, file systems, and secondary networks.

8.11.1. Prerequisites

8.11.2. About the QEMU guest agent information in the web console

When the QEMU guest agent is installed, the Overview and Details tabs on the VirtualMachine details page display information about the hostname, operating system, time zone, and logged in users.

The VirtualMachine details page shows information about the guest operating system installed on the virtual machine. The Details tab displays a table with information for logged in users. The Disks tab displays a table with information for file systems.

Note

If the QEMU guest agent is not installed, the Overview and the Details tabs display information about the operating system that was specified when the virtual machine was created.

8.11.3. Viewing the QEMU guest agent information in the web console

You can use the web console to view information for virtual machines that is passed by the QEMU guest agent to the host.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Select a virtual machine name to open the VirtualMachine details page.
  3. Click the Details tab to view active users.
  4. Click the Disks tab to view information about the file systems.

8.12. Managing config maps, secrets, and service accounts in virtual machines

You can use secrets, config maps, and service accounts to pass configuration data to virtual machines. For example, you can:

  • Give a virtual machine access to a service that requires credentials by adding a secret to the virtual machine.
  • Store non-confidential configuration data in a config map so that a pod or another object can consume the data.
  • Allow a component to access the API server by associating a service account with that component.
Note

OpenShift Virtualization exposes secrets, config maps, and service accounts as virtual machine disks so that you can use them across platforms without additional overhead.

8.12.1. Adding a secret, config map, or service account to a virtual machine

You add a secret, config map, or service account to a virtual machine by using the OpenShift Container Platform web console.

These resources are added to the virtual machine as disks. You then mount the secret, config map, or service account as you would mount any other disk.

If the virtual machine is running, changes do not take effect until you restart the virtual machine. The newly added resources are marked as pending changes on both the Environment and Disks tabs in the Pending Changes banner at the top of the page.

Prerequisites

  • The secret, config map, or service account that you want to add must exist in the same namespace as the target virtual machine.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. In the Environment tab, click Add Config Map, Secret or Service Account.
  4. Click Select a resource and select a resource from the list. A six-character serial number is automatically generated for the selected resource.
  5. Optional: Click Reload to revert the environment to its last saved state.
  6. Click Save.

Verification

  1. On the VirtualMachine details page, click the Disks tab and verify that the secret, config map, or service account is included in the list of disks.
  2. Restart the virtual machine by clicking Actions → Restart.

You can now mount the secret, config map, or service account as you would mount any other disk.
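
For example, inside a Linux guest, the added resource typically appears as an additional disk device that you can mount; the device name and mount point below are placeholders and depend on the VM's disk configuration:

    $ sudo mkdir /mnt/app-config
    $ sudo mount /dev/vdb /mnt/app-config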

8.12.2. Removing a secret, config map, or service account from a virtual machine

Remove a secret, config map, or service account from a virtual machine by using the OpenShift Container Platform web console.

Prerequisites

  • You must have at least one secret, config map, or service account that is attached to a virtual machine.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. Click the Environment tab.
  4. Find the item that you want to delete in the list, and click Remove on the right side of the item.
  5. Click Save.
Note

You can reset the form to the last saved state by clicking Reload.

Verification

  1. On the VirtualMachine details page, click the Disks tab.
  2. Check to ensure that the secret, config map, or service account that you removed is no longer included in the list of disks.

8.12.3. Additional resources

8.13. Installing VirtIO driver on an existing Windows virtual machine

8.13.1. About VirtIO drivers

VirtIO drivers are paravirtualized device drivers required for Microsoft Windows virtual machines to run in OpenShift Virtualization. The supported drivers are available in the container-native-virtualization/virtio-win container disk of the Red Hat Ecosystem Catalog.

The container-native-virtualization/virtio-win container disk must be attached to the virtual machine as a SATA CD drive to enable driver installation. You can install VirtIO drivers during Windows installation on the virtual machine or add them to an existing Windows installation.

After the drivers are installed, the container-native-virtualization/virtio-win container disk can be removed from the virtual machine.

See also: Installing Virtio drivers on a new Windows virtual machine.

8.13.2. Supported VirtIO drivers for Microsoft Windows virtual machines

Table 8.2. Supported drivers
Driver name: viostor
Hardware ID: VEN_1AF4&DEV_1001, VEN_1AF4&DEV_1042
Description: The block driver. Sometimes displays as an SCSI Controller in the Other devices group.

Driver name: viorng
Hardware ID: VEN_1AF4&DEV_1005, VEN_1AF4&DEV_1044
Description: The entropy source driver. Sometimes displays as a PCI Device in the Other devices group.

Driver name: NetKVM
Hardware ID: VEN_1AF4&DEV_1000, VEN_1AF4&DEV_1041
Description: The network driver. Sometimes displays as an Ethernet Controller in the Other devices group. Available only if a VirtIO NIC is configured.

8.13.3. Adding VirtIO drivers container disk to a virtual machine

OpenShift Virtualization distributes VirtIO drivers for Microsoft Windows as a container disk, which is available from the Red Hat Ecosystem Catalog. To install these drivers to a Windows virtual machine, attach the container-native-virtualization/virtio-win container disk to the virtual machine as a SATA CD drive in the virtual machine configuration file.

Prerequisites

  • Download the container-native-virtualization/virtio-win container disk from the Red Hat Ecosystem Catalog. This is not mandatory, because the container disk will be downloaded from the Red Hat registry if it is not already present in the cluster, but it can reduce installation time.

Procedure

  1. Add the container-native-virtualization/virtio-win container disk as a cdrom disk in the Windows virtual machine configuration file. The container disk will be downloaded from the registry if it is not already present in the cluster.

    spec:
      domain:
        devices:
          disks:
            - name: virtiocontainerdisk
              bootOrder: 2 1
              cdrom:
                bus: sata
    volumes:
      - containerDisk:
          image: container-native-virtualization/virtio-win
        name: virtiocontainerdisk
    1
    OpenShift Virtualization boots virtual machine disks in the order defined in the VirtualMachine configuration file. You can either define other disks for the virtual machine before the container-native-virtualization/virtio-win container disk or use the optional bootOrder parameter to ensure the virtual machine boots from the correct disk. If you specify the bootOrder for a disk, it must be specified for all disks in the configuration.
  2. The disk is available once the virtual machine has started:

    • If you add the container disk to a running virtual machine, use oc apply -f <vm.yaml> in the CLI or reboot the virtual machine for the changes to take effect.
    • If the virtual machine is not running, use virtctl start <vm>.

After the virtual machine has started, the VirtIO drivers can be installed from the attached SATA CD drive.

8.13.4. Installing VirtIO drivers on an existing Windows virtual machine

Install the VirtIO drivers from the attached SATA CD drive to an existing Windows virtual machine.

Note

This procedure uses a generic approach to adding drivers to Windows. The process might differ slightly between versions of Windows. See the installation documentation for your version of Windows for specific installation steps.

Procedure

  1. Start the virtual machine and connect to a graphical console.
  2. Log in to a Windows user session.
  3. Open Device Manager and expand Other devices to list any Unknown device.

    1. Open the Device Properties to identify the unknown device. Right-click the device and select Properties.
    2. Click the Details tab and select Hardware Ids in the Property list.
    3. Compare the Value for the Hardware Ids with the supported VirtIO drivers.
  4. Right-click the device and select Update Driver Software.
  5. Click Browse my computer for driver software and browse to the attached SATA CD drive, where the VirtIO drivers are located. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture.
  6. Click Next to install the driver.
  7. Repeat this process for all the necessary VirtIO drivers.
  8. After the driver installs, click Close to close the window.
  9. Reboot the virtual machine to complete the driver installation.

8.13.5. Removing the VirtIO container disk from a virtual machine

After installing all required VirtIO drivers to the virtual machine, the container-native-virtualization/virtio-win container disk no longer needs to be attached to the virtual machine. Remove the container-native-virtualization/virtio-win container disk from the virtual machine configuration file.

Procedure

  1. Edit the configuration file and remove the disk and the volume.

    $ oc edit vm <vm-name>
    spec:
      domain:
        devices:
          disks:
            - name: virtiocontainerdisk
              bootOrder: 2
              cdrom:
                bus: sata
    volumes:
      - containerDisk:
          image: container-native-virtualization/virtio-win
        name: virtiocontainerdisk
  2. Reboot the virtual machine for the changes to take effect.

8.14. Installing VirtIO driver on a new Windows virtual machine

8.14.1. Prerequisites

8.14.2. About VirtIO drivers

VirtIO drivers are paravirtualized device drivers required for Microsoft Windows virtual machines to run in OpenShift Virtualization. The supported drivers are available in the container-native-virtualization/virtio-win container disk of the Red Hat Ecosystem Catalog.

The container-native-virtualization/virtio-win container disk must be attached to the virtual machine as a SATA CD drive to enable driver installation. You can install VirtIO drivers during Windows installation on the virtual machine or add them to an existing Windows installation.

After the drivers are installed, the container-native-virtualization/virtio-win container disk can be removed from the virtual machine.

See also: Installing VirtIO driver on an existing Windows virtual machine.

8.14.3. Supported VirtIO drivers for Microsoft Windows virtual machines

Table 8.3. Supported drivers
Driver name: viostor
Hardware ID: VEN_1AF4&DEV_1001, VEN_1AF4&DEV_1042
Description: The block driver. Sometimes displays as an SCSI Controller in the Other devices group.

Driver name: viorng
Hardware ID: VEN_1AF4&DEV_1005, VEN_1AF4&DEV_1044
Description: The entropy source driver. Sometimes displays as a PCI Device in the Other devices group.

Driver name: NetKVM
Hardware ID: VEN_1AF4&DEV_1000, VEN_1AF4&DEV_1041
Description: The network driver. Sometimes displays as an Ethernet Controller in the Other devices group. Available only if a VirtIO NIC is configured.

8.14.4. Adding VirtIO drivers container disk to a virtual machine

OpenShift Virtualization distributes VirtIO drivers for Microsoft Windows as a container disk, which is available from the Red Hat Ecosystem Catalog. To install these drivers to a Windows virtual machine, attach the container-native-virtualization/virtio-win container disk to the virtual machine as a SATA CD drive in the virtual machine configuration file.

Prerequisites

  • Download the container-native-virtualization/virtio-win container disk from the Red Hat Ecosystem Catalog. This is not mandatory, because the container disk will be downloaded from the Red Hat registry if it is not already present in the cluster, but it can reduce installation time.

Procedure

  1. Add the container-native-virtualization/virtio-win container disk as a cdrom disk in the Windows virtual machine configuration file. The container disk will be downloaded from the registry if it is not already present in the cluster.

    spec:
      domain:
        devices:
          disks:
            - name: virtiocontainerdisk
              bootOrder: 2 1
              cdrom:
                bus: sata
    volumes:
      - containerDisk:
          image: container-native-virtualization/virtio-win
        name: virtiocontainerdisk
    1
    OpenShift Virtualization boots virtual machine disks in the order defined in the VirtualMachine configuration file. You can either define other disks for the virtual machine before the container-native-virtualization/virtio-win container disk or use the optional bootOrder parameter to ensure the virtual machine boots from the correct disk. If you specify the bootOrder for a disk, it must be specified for all disks in the configuration.
  2. The disk is available once the virtual machine has started:

    • If you add the container disk to a running virtual machine, use oc apply -f <vm.yaml> in the CLI or reboot the virtual machine for the changes to take effect.
    • If the virtual machine is not running, use virtctl start <vm>.

After the virtual machine has started, the VirtIO drivers can be installed from the attached SATA CD drive.

8.14.5. Installing VirtIO drivers during Windows installation

Install the VirtIO drivers from the attached SATA CD driver during Windows installation.

Note

This procedure uses a generic approach to the Windows installation and the installation method might differ between versions of Windows. See the documentation for the version of Windows that you are installing.

Procedure

  1. Start the virtual machine and connect to a graphical console.
  2. Begin the Windows installation process.
  3. Select the Advanced installation.
  4. The storage destination will not be recognized until the driver is loaded. Click Load driver.
  5. The drivers are attached as a SATA CD drive. Click OK and browse the CD drive for the storage driver to load. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture.
  6. Repeat the previous two steps for all required drivers.
  7. Complete the Windows installation.

8.14.6. Removing the VirtIO container disk from a virtual machine

After installing all required VirtIO drivers to the virtual machine, the container-native-virtualization/virtio-win container disk no longer needs to be attached to the virtual machine. Remove the container-native-virtualization/virtio-win container disk from the virtual machine configuration file.

Procedure

  1. Edit the configuration file and remove the disk and the volume.

    $ oc edit vm <vm-name>
    spec:
      domain:
        devices:
          disks:
            - name: virtiocontainerdisk
              bootOrder: 2
              cdrom:
                bus: sata
    volumes:
      - containerDisk:
          image: container-native-virtualization/virtio-win
        name: virtiocontainerdisk
  2. Reboot the virtual machine for the changes to take effect.

8.15. Advanced virtual machine management

8.15.1. Working with resource quotas for virtual machines

Create and manage resource quotas for virtual machines.

8.15.1.1. Setting resource quota limits for virtual machines

Resource quotas that only use requests automatically work with virtual machines (VMs). If your resource quota uses limits, you must manually set resource limits on VMs. Resource limits must be at least 100 MiB larger than resource requests.
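
For reference, the following is a minimal sketch of a ResourceQuota that uses limits; the namespace and quota values are illustrative only:

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: vm-quota
      namespace: example-namespace
    spec:
      hard:
        requests.memory: 4Gi
        limits.memory: 8Gi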

Procedure

  1. Set limits for a VM by editing the VirtualMachine manifest. For example:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: with-limits
    spec:
      running: false
      template:
        spec:
          domain:
    # ...
            resources:
              requests:
                memory: 128Mi
              limits:
                memory: 256Mi  1
    1
    This configuration is supported because the limits.memory value is at least 100Mi larger than the requests.memory value.
  2. Save the VirtualMachine manifest.
8.15.1.2. Additional resources

8.15.2. Specifying nodes for virtual machines

You can place virtual machines (VMs) on specific nodes by using node placement rules.

8.15.2.1. About node placement for virtual machines

To ensure that virtual machines (VMs) run on appropriate nodes, you can configure node placement rules. You might want to do this if:

  • You have several VMs. To ensure fault tolerance, you want them to run on different nodes.
  • You have two chatty VMs. To avoid redundant inter-node routing, you want the VMs to run on the same node.
  • Your VMs require specific hardware features that are not present on all available nodes.
  • You have a pod that adds capabilities to a node, and you want to place a VM on that node so that it can use those capabilities.
Note

Virtual machine placement relies on any existing node placement rules for workloads. If workloads are excluded from specific nodes on the component level, virtual machines cannot be placed on those nodes.

You can use the following rule types in the spec field of a VirtualMachine manifest:

nodeSelector
Allows virtual machines to be scheduled on nodes that are labeled with the key-value pair or pairs that you specify in this field. The node must have labels that exactly match all listed pairs.
affinity

Enables you to use more expressive syntax to set rules that match nodes with virtual machines. For example, you can specify that a rule is a preference, rather than a hard requirement, so that virtual machines are still scheduled if the rule is not satisfied. Pod affinity, pod anti-affinity, and node affinity are supported for virtual machine placement. Pod affinity works for virtual machines because the VirtualMachine workload type is based on the Pod object.

Note

Affinity rules only apply during scheduling. OpenShift Container Platform does not reschedule running workloads if the constraints are no longer met.

tolerations
Allows virtual machines to be scheduled on nodes that have matching taints. If a taint is applied to a node, that node only accepts virtual machines that tolerate the taint.
8.15.2.2. Node placement examples

The following example YAML file snippets use the nodeSelector, affinity, and tolerations fields to customize node placement for virtual machines.

8.15.2.2.1. Example: VM node placement with nodeSelector

In this example, the virtual machine requires a node that has metadata containing both example-key-1 = example-value-1 and example-key-2 = example-value-2 labels.

Warning

If there are no nodes that fit this description, the virtual machine is not scheduled.

Example VM manifest

metadata:
  name: example-vm-node-selector
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  template:
    spec:
      nodeSelector:
        example-key-1: example-value-1
        example-key-2: example-value-2
...

8.15.2.2.2. Example: VM node placement with pod affinity and pod anti-affinity

In this example, the VM must be scheduled on a node that has a running pod with the label example-key-1 = example-value-1. If there is no such pod running on any node, the VM is not scheduled.

If possible, the VM is not scheduled on a node that has any pod with the label example-key-2 = example-value-2. However, if all candidate nodes have a pod with this label, the scheduler ignores this constraint.

Example VM manifest

metadata:
  name: example-vm-pod-affinity
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution: 1
      - labelSelector:
          matchExpressions:
          - key: example-key-1
            operator: In
            values:
            - example-value-1
        topologyKey: kubernetes.io/hostname
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution: 2
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: example-key-2
              operator: In
              values:
              - example-value-2
          topologyKey: kubernetes.io/hostname
...

1
If you use the requiredDuringSchedulingIgnoredDuringExecution rule type, the VM is not scheduled if the constraint is not met.
2
If you use the preferredDuringSchedulingIgnoredDuringExecution rule type, the VM is still scheduled if the constraint is not met, as long as all required constraints are met.
8.15.2.2.3. Example: VM node placement with node affinity

In this example, the VM must be scheduled on a node that has the label example.io/example-key = example-value-1 or the label example.io/example-key = example-value-2. The constraint is met if only one of the labels is present on the node. If neither label is present, the VM is not scheduled.

If possible, the scheduler avoids nodes that have the label example-node-label-key = example-node-label-value. However, if all candidate nodes have this label, the scheduler ignores this constraint.

Example VM manifest

metadata:
  name: example-vm-node-affinity
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution: 1
        nodeSelectorTerms:
        - matchExpressions:
          - key: example.io/example-key
            operator: In
            values:
            - example-value-1
            - example-value-2
      preferredDuringSchedulingIgnoredDuringExecution: 2
      - weight: 1
        preference:
          matchExpressions:
          - key: example-node-label-key
            operator: In
            values:
            - example-node-label-value
...

1
If you use the requiredDuringSchedulingIgnoredDuringExecution rule type, the VM is not scheduled if the constraint is not met.
2
If you use the preferredDuringSchedulingIgnoredDuringExecution rule type, the VM is still scheduled if the constraint is not met, as long as all required constraints are met.
8.15.2.2.4. Example: VM node placement with tolerations

In this example, nodes that are reserved for virtual machines already have the key=virtualization:NoSchedule taint applied. Because this virtual machine has matching tolerations, it can schedule onto the tainted nodes.

Note

A virtual machine that tolerates a taint is not required to schedule onto a node with that taint.

Example VM manifest

metadata:
  name: example-vm-tolerations
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "virtualization"
    effect: "NoSchedule"
...

8.15.2.3. Additional resources

8.15.3. Configuring certificate rotation

Configure certificate rotation parameters to replace existing certificates.

8.15.3.1. Configuring certificate rotation

You can configure certificate rotation parameters during OpenShift Virtualization installation in the web console, or after installation by editing the HyperConverged custom resource (CR).

Procedure

  1. Open the HyperConverged CR by running the following command:

    $ oc edit hco -n openshift-cnv kubevirt-hyperconverged
  2. Edit the spec.certConfig fields as shown in the following example. To avoid overloading the system, ensure that all values are greater than or equal to 10 minutes. Express all values as strings that comply with the golang ParseDuration format.

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
     name: kubevirt-hyperconverged
     namespace: openshift-cnv
    spec:
      certConfig:
        ca:
          duration: 48h0m0s
          renewBefore: 24h0m0s 1
        server:
          duration: 24h0m0s  2
          renewBefore: 12h0m0s  3
    1
    The value of ca.renewBefore must be less than or equal to the value of ca.duration.
    2
    The value of server.duration must be less than or equal to the value of ca.duration.
    3
    The value of server.renewBefore must be less than or equal to the value of server.duration.
  3. Apply the YAML file to your cluster.
8.15.3.2. Troubleshooting certificate rotation parameters

Deleting one or more certConfig values causes them to revert to the default values, unless the default values conflict with one of the following conditions:

  • The value of ca.renewBefore must be less than or equal to the value of ca.duration.
  • The value of server.duration must be less than or equal to the value of ca.duration.
  • The value of server.renewBefore must be less than or equal to the value of server.duration.

If the default values conflict with these conditions, you will receive an error.

For example, if you remove the server.duration value from the following configuration, the default value of 24h0m0s is greater than the value of ca.duration, which conflicts with the specified conditions.

Example

certConfig:
   ca:
     duration: 4h0m0s
     renewBefore: 1h0m0s
   server:
     duration: 4h0m0s
     renewBefore: 4h0m0s

This results in the following error message:

error: hyperconvergeds.hco.kubevirt.io "kubevirt-hyperconverged" could not be patched: admission webhook "validate-hco.kubevirt.io" denied the request: spec.certConfig: ca.duration is smaller than server.duration

The error message only mentions the first conflict. Review all certConfig values before you proceed.

8.15.4. Automating management tasks

You can automate OpenShift Virtualization management tasks by using Red Hat Ansible Automation Platform. Learn the basics by using an Ansible Playbook to create a new virtual machine.

8.15.4.1. About Red Hat Ansible Automation

Ansible is an automation tool used to configure systems, deploy software, and perform rolling updates. Ansible includes support for OpenShift Virtualization, and Ansible modules enable you to automate cluster management tasks such as template, persistent volume claim, and virtual machine operations.

Ansible provides a way to automate OpenShift Virtualization management, which you can also accomplish by using the oc CLI tool or APIs. Ansible is unique because it allows you to integrate KubeVirt modules with other Ansible modules.

8.15.4.2. Automating virtual machine creation

You can use the kubevirt_vm Ansible Playbook to create virtual machines in your OpenShift Container Platform cluster using Red Hat Ansible Automation Platform.

Prerequisites

Procedure

  1. Edit an Ansible Playbook YAML file so that it includes the kubevirt_vm task:

      kubevirt_vm:
        namespace:
        name:
        cpu_cores:
        memory:
        disks:
          - name:
            volume:
              containerDisk:
                image:
            disk:
              bus:
    Note

    This snippet only includes the kubevirt_vm portion of the playbook.

  2. Edit the values to reflect the virtual machine you want to create, including the namespace, the number of cpu_cores, the memory, and the disks. For example:

      kubevirt_vm:
        namespace: default
        name: vm1
        cpu_cores: 1
        memory: 64Mi
        disks:
          - name: containerdisk
            volume:
              containerDisk:
                image: kubevirt/cirros-container-disk-demo:latest
            disk:
              bus: virtio
  3. If you want the virtual machine to boot immediately after creation, add state: running to the YAML file. For example:

      kubevirt_vm:
        namespace: default
        name: vm1
        state: running 1
        cpu_cores: 1
    1
    Changing this value to state: absent deletes the virtual machine, if it already exists.
  4. Run the ansible-playbook command, using your playbook’s file name as the only argument:

    $ ansible-playbook create-vm.yaml
  5. Review the output to determine if the play was successful:

    Example output

    (...)
    TASK [Create my first VM] ************************************************************************
    changed: [localhost]
    
    PLAY RECAP ********************************************************************************************************
    localhost                  : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

  6. If you did not include state: running in your playbook file and you want to boot the VM now, edit the file so that it includes state: running and run the playbook again:

    $ ansible-playbook create-vm.yaml

To verify that the virtual machine was created, try to access the VM console.
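For example, assuming the vm1 virtual machine from the example playbook is running in the default namespace, you might open its serial console with virtctl:

$ virtctl console vm1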

8.15.4.3. Example: Ansible Playbook for creating virtual machines

You can use the kubevirt_vm Ansible Playbook to automate virtual machine creation.

The following YAML file is an example of the kubevirt_vm playbook. It includes sample values that you must replace with your own information if you run the playbook.

---
- name: Ansible Playbook 1
  hosts: localhost
  connection: local
  tasks:
    - name: Create my first VM
      kubevirt_vm:
        namespace: default
        name: vm1
        cpu_cores: 1
        memory: 64Mi
        disks:
          - name: containerdisk
            volume:
              containerDisk:
                image: kubevirt/cirros-container-disk-demo:latest
            disk:
              bus: virtio

8.15.5. Using UEFI mode for virtual machines

You can boot a virtual machine (VM) in Unified Extensible Firmware Interface (UEFI) mode.

8.15.5.1. About UEFI mode for virtual machines

Unified Extensible Firmware Interface (UEFI), like legacy BIOS, initializes hardware components and operating system image files when a computer starts. UEFI supports more modern features and customization options than BIOS, enabling faster boot times.

UEFI stores all the information about initialization and startup in a file with a .efi extension, which resides on a special partition called the EFI System Partition (ESP). The ESP also contains the boot loader programs for the operating system that is installed on the computer.

8.15.5.2. Booting virtual machines in UEFI mode

You can configure a virtual machine to boot in UEFI mode by editing the VirtualMachine manifest.

Prerequisites

  • Install the OpenShift CLI (oc).

Procedure

  1. Edit or create a VirtualMachine manifest file. Use the spec.firmware.bootloader stanza to configure UEFI mode:

    Booting in UEFI mode with secure boot active

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      labels:
        special: vm-secureboot
      name: vm-secureboot
    spec:
      template:
        metadata:
          labels:
            special: vm-secureboot
        spec:
          domain:
            devices:
              disks:
              - disk:
                  bus: virtio
                name: containerdisk
            features:
              acpi: {}
              smm:
                enabled: true 1
            firmware:
              bootloader:
                efi:
                  secureBoot: true 2
    ...

    1
    OpenShift Virtualization requires System Management Mode (SMM) to be enabled for Secure Boot to work in UEFI mode.
    2
    OpenShift Virtualization supports a VM with or without Secure Boot when using UEFI mode. If Secure Boot is enabled, then UEFI mode is required. However, UEFI mode can be enabled without using Secure Boot, as shown in the sketch after this procedure.
  2. Apply the manifest to your cluster by running the following command:

    $ oc create -f <file_name>.yaml
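For comparison, the following is a minimal sketch of the same stanza for a VM that boots in UEFI mode without Secure Boot. The VM name is hypothetical, and the smm feature can typically be omitted because Secure Boot is disabled:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-uefi-no-secureboot   # hypothetical name
spec:
  template:
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
        firmware:
          bootloader:
            efi:
              secureBoot: false   # UEFI mode without Secure Boot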

8.15.6. Configuring PXE booting for virtual machines

PXE booting, or network booting, is available in OpenShift Virtualization. Network booting allows a computer to boot and load an operating system or other program without requiring a locally attached storage device. For example, you can use it to choose your desired OS image from a PXE server when deploying a new host.

8.15.6.1. Prerequisites
  • A Linux bridge must be connected.
  • The PXE server must be connected to the same VLAN as the bridge.
8.15.6.2. PXE booting with a specified MAC address

As an administrator, you can boot a client over the network by first creating a NetworkAttachmentDefinition object for your PXE network. Then, reference the network attachment definition in your virtual machine instance configuration file before you start the virtual machine instance. You can also specify a MAC address in the virtual machine instance configuration file, if required by the PXE server.

Prerequisites

  • A Linux bridge must be connected.
  • The PXE server must be connected to the same VLAN as the bridge.

Procedure

  1. Configure a PXE network on the cluster:

    1. Create the network attachment definition file for PXE network pxe-net-conf:

      apiVersion: "k8s.cni.cncf.io/v1"
      kind: NetworkAttachmentDefinition
      metadata:
        name: pxe-net-conf
      spec:
        config: '{
          "cniVersion": "0.3.1",
          "name": "pxe-net-conf",
          "plugins": [
            {
              "type": "cnv-bridge",
              "bridge": "br1",
              "vlan": 1 1
            },
            {
              "type": "cnv-tuning" 2
            }
          ]
        }'
      1
      Optional: The VLAN tag.
      2
      The cnv-tuning plugin provides support for custom MAC addresses.
      Note

      The virtual machine instance will be attached to the bridge br1 through an access port with the requested VLAN.

  2. Create the network attachment definition by using the file you created in the previous step:

    $ oc create -f pxe-net-conf.yaml
  3. Edit the virtual machine instance configuration file to include the details of the interface and network.

    1. Specify the network and MAC address, if required by the PXE server. If the MAC address is not specified, a value is assigned automatically.

      Ensure that bootOrder is set to 1 so that the interface boots first. In this example, the interface is connected to a network called <pxe-net>:

      interfaces:
      - masquerade: {}
        name: default
      - bridge: {}
        name: pxe-net
        macAddress: de:00:00:00:00:de
        bootOrder: 1
      Note

      Boot order is global for interfaces and disks.

    2. Assign a boot device number to the disk to ensure proper booting after operating system provisioning.

      Set the disk bootOrder value to 2:

      devices:
        disks:
        - disk:
            bus: virtio
          name: containerdisk
          bootOrder: 2
    3. Specify that the network is connected to the previously created network attachment definition. In this scenario, <pxe-net> is connected to the network attachment definition called <pxe-net-conf>:

      networks:
      - name: default
        pod: {}
      - name: pxe-net
        multus:
          networkName: pxe-net-conf
  4. Create the virtual machine instance:

    $ oc create -f vmi-pxe-boot.yaml

Example output

  virtualmachineinstance.kubevirt.io "vmi-pxe-boot" created

  1. Wait for the virtual machine instance to run:

    $ oc get vmi vmi-pxe-boot -o yaml | grep -i phase
      phase: Running
  2. View the virtual machine instance using VNC:

    $ virtctl vnc vmi-pxe-boot
  3. Watch the boot screen to verify that the PXE boot is successful.
  4. Log in to the virtual machine instance:

    $ virtctl console vmi-pxe-boot
  5. Verify the interfaces and MAC address on the virtual machine and that the interface connected to the bridge has the specified MAC address. In this case, we used eth1 for the PXE boot, without an IP address. The other interface, eth0, got an IP address from OpenShift Container Platform.

    $ ip addr

Example output

...
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
   link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff

8.15.6.3. OpenShift Virtualization networking glossary

OpenShift Virtualization provides advanced networking functionality by using custom resources and plugins.

The following terms are used throughout OpenShift Virtualization documentation:

Container Network Interface (CNI)
a Cloud Native Computing Foundation project, focused on container network connectivity. OpenShift Virtualization uses CNI plugins to build upon the basic Kubernetes networking functionality.
Multus
a "meta" CNI plugin that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs.
Custom resource definition (CRD)
a Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource.
Network attachment definition (NAD)
a CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtual machine instances to one or more networks.
Node network configuration policy (NNCP)
a description of the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a NodeNetworkConfigurationPolicy manifest to the cluster.
Preboot eXecution Environment (PXE)
an interface that enables an administrator to boot a client machine from a server over the network. Network booting allows you to remotely load operating systems and other software onto the client.

8.15.7. Using huge pages with virtual machines

You can use huge pages as backing memory for virtual machines in your cluster.

8.15.7.1. Prerequisites
8.15.7.2. What huge pages do

Memory is managed in blocks known as pages. On most systems, a page is 4Ki. 1Mi of memory is equal to 256 pages; 1Gi of memory is 262,144 pages, and so on. CPUs have a built-in memory management unit that manages a list of these pages in hardware. The Translation Lookaside Buffer (TLB) is a small hardware cache of virtual-to-physical page mappings. If the virtual address passed in a hardware instruction can be found in the TLB, the mapping can be determined quickly. If not, a TLB miss occurs, and the system falls back to slower, software-based address translation, resulting in performance issues. Since the size of the TLB is fixed, the only way to reduce the chance of a TLB miss is to increase the page size.

A huge page is a memory page that is larger than 4Ki. On x86_64 architectures, there are two common huge page sizes: 2Mi and 1Gi. Sizes vary on other architectures. To use huge pages, code must be written so that applications are aware of them. Transparent Huge Pages (THP) attempt to automate the management of huge pages without application knowledge, but they have limitations. In particular, they are limited to 2Mi page sizes. THP can lead to performance degradation on nodes with high memory utilization or fragmentation because its defragmentation efforts can lock memory pages. For this reason, some applications are designed to use (or recommend using) pre-allocated huge pages instead of THP.
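To illustrate the effect on the TLB, consider a virtual machine that uses 4Gi of memory: backing it with standard 4Ki pages requires 1,048,576 page mappings, while 2Mi huge pages reduce that to 2,048 mappings and 1Gi huge pages to only 4, so far fewer TLB entries are needed to cover the same memory.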

In OpenShift Virtualization, virtual machines can be configured to consume pre-allocated huge pages.

8.15.7.3. Configuring huge pages for virtual machines

You can configure virtual machines to use pre-allocated huge pages by including the memory.hugepages.pageSize and resources.requests.memory parameters in your virtual machine configuration.

The memory request must be divisible by the page size. For example, you cannot request 500Mi memory with a page size of 1Gi.

Note

The memory layouts of the host and the guest OS are unrelated. Huge pages requested in the virtual machine manifest apply to QEMU. Huge pages inside the guest can only be configured based on the amount of available memory of the virtual machine instance.

If you edit a running virtual machine, the virtual machine must be rebooted for the changes to take effect.

Prerequisites

  • Nodes must have pre-allocated huge pages configured.

Procedure

  1. In your virtual machine configuration, add the resources.requests.memory and memory.hugepages.pageSize parameters to the spec.domain. The following configuration snippet is for a virtual machine that requests a total of 4Gi memory with a page size of 1Gi:

    kind: VirtualMachine
    ...
    spec:
      domain:
        resources:
          requests:
            memory: "4Gi" 1
        memory:
          hugepages:
            pageSize: "1Gi" 2
    ...
    1
    The total amount of memory requested for the virtual machine. This value must be divisible by the page size.
    2
    The size of each huge page. Valid values for x86_64 architecture are 1Gi and 2Mi. The page size must be smaller than the requested memory.
  2. Apply the virtual machine configuration:

    $ oc apply -f <virtual_machine>.yaml

8.15.8. Enabling dedicated resources for virtual machines

To improve performance, you can dedicate node resources, such as CPU, to a virtual machine.

8.15.8.1. About dedicated resources

When you enable dedicated resources for your virtual machine, your virtual machine’s workload is scheduled on CPUs that will not be used by other processes. By using dedicated resources, you can improve the performance of the virtual machine and the accuracy of latency predictions.

8.15.8.2. Prerequisites
  • The CPU Manager must be configured on the node. Verify that the node has the cpumanager = true label before scheduling virtual machine workloads.
  • The virtual machine must be powered off.
8.15.8.3. Enabling dedicated resources for a virtual machine

You enable dedicated resources for a virtual machine on the Scheduling tab of the VirtualMachine details page. Virtual machines that were created from a Red Hat template can be configured with dedicated resources. If you prefer to edit the manifest directly, see the YAML sketch after this procedure.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization → VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. On the Scheduling tab, click the pencil icon beside Dedicated Resources.
  4. Select Schedule this workload with dedicated resources (guaranteed policy).
  5. Click Save.
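If you manage VMs as YAML, the console setting corresponds to CPU pinning in the VM manifest. The following is a minimal sketch that assumes the KubeVirt dedicatedCpuPlacement field; the VM name and sizes are hypothetical:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-dedicated              # hypothetical name
spec:
  template:
    spec:
      domain:
        cpu:
          cores: 2
          dedicatedCpuPlacement: true   # schedule on dedicated CPUs (guaranteed policy)
        resources:
          requests:
            memory: 2Gi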

8.15.9. Scheduling virtual machines

You can schedule a virtual machine (VM) on a node by ensuring that the VM’s CPU model and policy attribute are matched for compatibility with the CPU models and policy attributes supported by the node.

8.15.9.1. Policy attributes

You can schedule a virtual machine (VM) by specifying a policy attribute and a CPU feature that is matched for compatibility when the VM is scheduled on a node. A policy attribute specified for a VM determines how that VM is scheduled on a node.

The following policy attributes are available:

force

The VM is forced to be scheduled on a node. This is true even if the host CPU does not support the VM’s CPU.

require

Default policy that applies to a VM if the VM is not configured with a specific CPU model and feature specification. If a node is not configured to support CPU node discovery with this default policy attribute or any one of the other policy attributes, VMs are not scheduled on that node. Either the host CPU must support the VM’s CPU or the hypervisor must be able to emulate the supported CPU model.

optional

The VM is added to a node if that VM is supported by the host’s physical machine CPU.

disable

The VM cannot be scheduled with CPU node discovery.

forbid

The VM is not scheduled even if the feature is supported by the host CPU and CPU node discovery is enabled.

8.15.9.2. Setting a policy attribute and CPU feature

You can set a policy attribute and CPU feature for each virtual machine (VM) to ensure that it is scheduled on a node according to policy and feature. The CPU feature that you set is verified to ensure that it is supported by the host CPU or emulated by the hypervisor.

Procedure

  • Edit the domain spec of your VM configuration file. The following example sets the CPU feature and the require policy for a virtual machine (VM):

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: myvm
    spec:
      template:
        spec:
          domain:
            cpu:
              features:
                - name: apic 1
                  policy: require 2
    1
    Name of the CPU feature for the VM.
    2
    Policy attribute for the VM.
8.15.9.3. Scheduling virtual machines with the supported CPU model

You can configure a CPU model for a virtual machine (VM) to schedule it on a node where its CPU model is supported.

Procedure

  • Edit the domain spec of your virtual machine configuration file. The following example shows a specific CPU model defined for a VM:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: myvm
    spec:
      template:
        spec:
          domain:
            cpu:
              model: Conroe 1
    1
    CPU model for the VM.
8.15.9.4. Scheduling virtual machines with the host model

When the CPU model for a virtual machine (VM) is set to host-model, the VM inherits the CPU model of the node where it is scheduled.

Procedure

  • Edit the domain spec of your VM configuration file. The following example shows host-model being specified for the virtual machine:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: myvm
    spec:
      template:
        spec:
          domain:
            cpu:
              model: host-model 1
    1
    The VM that inherits the CPU model of the node where it is scheduled.

8.15.10. Configuring PCI passthrough

The Peripheral Component Interconnect (PCI) passthrough feature enables you to access and manage hardware devices from a virtual machine. When PCI passthrough is configured, the PCI devices function as if they were physically attached to the guest operating system.

Cluster administrators can expose and manage host devices that are permitted to be used in the cluster by using the oc command-line interface (CLI).

8.15.10.1. About preparing a host device for PCI passthrough

To prepare a host device for PCI passthrough by using the CLI, create a MachineConfig object and add kernel arguments to enable the Input-Output Memory Management Unit (IOMMU). Bind the PCI device to the Virtual Function I/O (VFIO) driver and then expose it in the cluster by editing the permittedHostDevices field of the HyperConverged custom resource (CR). The permittedHostDevices list is empty when you first install the OpenShift Virtualization Operator.

To remove a PCI host device from the cluster by using the CLI, delete the PCI device information from the HyperConverged CR.

8.15.10.1.1. Adding kernel arguments to enable the IOMMU driver

To enable the IOMMU (Input-Output Memory Management Unit) driver in the kernel, create the MachineConfig object and add the kernel arguments.

Prerequisites

  • Administrative privilege to a working OpenShift Container Platform cluster.
  • Intel or AMD CPU hardware.
  • Intel Virtualization Technology for Directed I/O extensions or AMD IOMMU in the BIOS (Basic Input/Output System) is enabled.

Procedure

  1. Create a MachineConfig object that identifies the kernel argument. The following example shows a kernel argument for an Intel CPU.

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker 1
      name: 100-worker-iommu 2
    spec:
      config:
        ignition:
          version: 3.2.0
      kernelArguments:
          - intel_iommu=on 3
    ...
    1
    Applies the new kernel argument only to worker nodes.
    2
    The name indicates the ranking of this kernel argument (100) among the machine configs and its purpose. If you have an AMD CPU, specify the kernel argument as amd_iommu=on.
    3
    Identifies the kernel argument as intel_iommu for an Intel CPU.
  2. Create the new MachineConfig object:

    $ oc create -f 100-worker-kernel-arg-iommu.yaml

Verification

  • Verify that the new MachineConfig object was added.

    $ oc get MachineConfig
8.15.10.1.2. Binding PCI devices to the VFIO driver

To bind PCI devices to the VFIO (Virtual Function I/O) driver, obtain the values for vendor-ID and device-ID from each device and create a list with the values. Add this list to the MachineConfig object. The MachineConfig Operator generates the /etc/modprobe.d/vfio.conf on the nodes with the PCI devices, and binds the PCI devices to the VFIO driver.

Prerequisites

  • You added kernel arguments to enable IOMMU for the CPU.

Procedure

  1. Run the lspci command to obtain the vendor-ID and the device-ID for the PCI device.

    $ lspci -nnv | grep -i nvidia

    Example output

    02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)

  2. Create a Butane config file, 100-worker-vfiopci.bu, binding the PCI device to the VFIO driver.

    Note

    See "Creating machine configs with Butane" for information about Butane.

    Example

    variant: openshift
    version: 4.10.0
    metadata:
      name: 100-worker-vfiopci
      labels:
        machineconfiguration.openshift.io/role: worker 1
    storage:
      files:
      - path: /etc/modprobe.d/vfio.conf
        mode: 0644
        overwrite: true
        contents:
          inline: |
            options vfio-pci ids=10de:1eb8 2
      - path: /etc/modules-load.d/vfio-pci.conf 3
        mode: 0644
        overwrite: true
        contents:
          inline: vfio-pci

    1
    Applies the new kernel argument only to worker nodes.
    2
    Specify the previously determined vendor-ID value (10de) and the device-ID value (1eb8) to bind a single device to the VFIO driver. You can add a list of multiple devices with their vendor and device information.
    3
    The file that loads the vfio-pci kernel module on the worker nodes.
  3. Use Butane to generate a MachineConfig object file, 100-worker-vfiopci.yaml, containing the configuration to be delivered to the worker nodes:

    $ butane 100-worker-vfiopci.bu -o 100-worker-vfiopci.yaml
  4. Apply the MachineConfig object to the worker nodes:

    $ oc apply -f 100-worker-vfiopci.yaml
  5. Verify that the MachineConfig object was added.

    $ oc get MachineConfig

    Example output

    NAME                             GENERATEDBYCONTROLLER                      IGNITIONVERSION  AGE
    00-master                        d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0            25h
    00-worker                        d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0            25h
    01-master-container-runtime      d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0            25h
    01-master-kubelet                d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0            25h
    01-worker-container-runtime      d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0            25h
    01-worker-kubelet                d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0            25h
    100-worker-iommu                                                            3.2.0            30s
    100-worker-vfiopci-configuration                                            3.2.0            30s

Verification

  • Verify that the VFIO driver is loaded.

    $ lspci -nnk -d 10de:

    The output confirms that the VFIO driver is being used.

    Example output

    04:00.0 3D controller [0302]: NVIDIA Corporation GP102GL [Tesla P40] [10de:1eb8] (rev a1)
            Subsystem: NVIDIA Corporation Device [10de:1eb8]
            Kernel driver in use: vfio-pci
            Kernel modules: nouveau

8.15.10.1.3. Exposing PCI host devices in the cluster using the CLI

To expose PCI host devices in the cluster, add details about the PCI devices to the spec.permittedHostDevices.pciHostDevices array of the HyperConverged custom resource (CR).

Procedure

  1. Edit the HyperConverged CR in your default editor by running the following command:

    $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. Add the PCI device information to the spec.permittedHostDevices.pciHostDevices array. For example:

    Example configuration file

    apiVersion: hco.kubevirt.io/v1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      permittedHostDevices: 1
        pciHostDevices: 2
        - pciDeviceSelector: "10DE:1DB6" 3
          resourceName: "nvidia.com/GV100GL_Tesla_V100" 4
        - pciDeviceSelector: "10DE:1EB8"
          resourceName: "nvidia.com/TU104GL_Tesla_T4"
        - pciDeviceSelector: "8086:6F54"
          resourceName: "intel.com/qat"
          externalResourceProvider: true 5
    ...

    1
    The host devices that are permitted to be used in the cluster.
    2
    The list of PCI devices available on the node.
    3
    The vendor-ID and the device-ID required to identify the PCI device.
    4
    The name of a PCI host device.
    5
    Optional: Setting this field to true indicates that the resource is provided by an external device plugin. OpenShift Virtualization allows the usage of this device in the cluster but leaves the allocation and monitoring to an external device plugin.
    Note

    The above example snippet shows two PCI host devices that are named nvidia.com/GV100GL_Tesla_V100 and nvidia.com/TU104GL_Tesla_T4 added to the list of permitted host devices in the HyperConverged CR. These devices have been tested and verified to work with OpenShift Virtualization.

  3. Save your changes and exit the editor.

Verification

  • Verify that the PCI host devices were added to the node by running the following command. The example output shows that there is one device each associated with the nvidia.com/GV100GL_Tesla_V100, nvidia.com/TU104GL_Tesla_T4, and intel.com/qat resource names.

    $ oc describe node <node_name>

    Example output

    Capacity:
      cpu:                            64
      devices.kubevirt.io/kvm:        110
      devices.kubevirt.io/tun:        110
      devices.kubevirt.io/vhost-net:  110
      ephemeral-storage:              915128Mi
      hugepages-1Gi:                  0
      hugepages-2Mi:                  0
      memory:                         131395264Ki
      nvidia.com/GV100GL_Tesla_V100   1
      nvidia.com/TU104GL_Tesla_T4     1
      intel.com/qat:                  1
      pods:                           250
    Allocatable:
      cpu:                            63500m
      devices.kubevirt.io/kvm:        110
      devices.kubevirt.io/tun:        110
      devices.kubevirt.io/vhost-net:  110
      ephemeral-storage:              863623130526
      hugepages-1Gi:                  0
      hugepages-2Mi:                  0
      memory:                         130244288Ki
      nvidia.com/GV100GL_Tesla_V100   1
      nvidia.com/TU104GL_Tesla_T4     1
      intel.com/qat:                  1
      pods:                           250

8.15.10.1.4. Removing PCI host devices from the cluster using the CLI

To remove a PCI host device from the cluster, delete the information for that device from the HyperConverged custom resource (CR).

Procedure

  1. Edit the HyperConverged CR in your default editor by running the following command:

    $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. Remove the PCI device information from the spec.permittedHostDevices.pciHostDevices array by deleting the pciDeviceSelector, resourceName and externalResourceProvider (if applicable) fields for the appropriate device. In this example, the intel.com/qat resource has been deleted.

    Example configuration file

    apiVersion: hco.kubevirt.io/v1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      permittedHostDevices:
        pciHostDevices:
        - pciDeviceSelector: "10DE:1DB6"
          resourceName: "nvidia.com/GV100GL_Tesla_V100"
        - pciDeviceSelector: "10DE:1EB8"
          resourceName: "nvidia.com/TU104GL_Tesla_T4"
    ...

  3. Save your changes and exit the editor.

Verification

  • Verify that the PCI host device was removed from the node by running the following command. The example output shows that there are zero devices associated with the intel.com/qat resource name.

    $ oc describe node <node_name>

    Example output

    Capacity:
      cpu:                            64
      devices.kubevirt.io/kvm:        110
      devices.kubevirt.io/tun:        110
      devices.kubevirt.io/vhost-net:  110
      ephemeral-storage:              915128Mi
      hugepages-1Gi:                  0
      hugepages-2Mi:                  0
      memory:                         131395264Ki
      nvidia.com/GV100GL_Tesla_V100   1
      nvidia.com/TU104GL_Tesla_T4     1
      intel.com/qat:                  0
      pods:                           250
    Allocatable:
      cpu:                            63500m
      devices.kubevirt.io/kvm:        110
      devices.kubevirt.io/tun:        110
      devices.kubevirt.io/vhost-net:  110
      ephemeral-storage:              863623130526
      hugepages-1Gi:                  0
      hugepages-2Mi:                  0
      memory:                         130244288Ki
      nvidia.com/GV100GL_Tesla_V100   1
      nvidia.com/TU104GL_Tesla_T4     1
      intel.com/qat:                  0
      pods:                           250

8.15.10.2. Configuring virtual machines for PCI passthrough

After the PCI devices have been added to the cluster, you can assign them to virtual machines. The PCI devices are now available as if they are physically connected to the virtual machines.

8.15.10.2.1. Assigning a PCI device to a virtual machine

When a PCI device is available in a cluster, you can assign it to a virtual machine and enable PCI passthrough.

Procedure

  • Assign the PCI device to a virtual machine as a host device.

    Example

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    spec:
      domain:
        devices:
          hostDevices:
          - deviceName: nvidia.com/TU104GL_Tesla_T4 1
            name: hostdevices1

    1
    The name of the PCI device that is permitted on the cluster as a host device. The virtual machine can access this host device.

Verification

  • Use the following command to verify that the host device is available from the virtual machine.

    $ lspci -nnk | grep NVIDIA

    Example output

     02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)

8.15.10.3. Additional resources

8.15.11. Configuring vGPU passthrough

Your virtual machines can access virtual GPU (vGPU) hardware. Assigning a vGPU to your virtual machine allows you to do the following:

  • Access a fraction of the underlying hardware’s GPU to achieve high performance benefits in your virtual machine.
  • Streamline resource-intensive I/O operations.
Important

vGPU passthrough can only be assigned to devices that are connected to clusters running in a bare metal environment.

8.15.11.1. Assigning vGPU passthrough devices to a virtual machine

Use the OpenShift Container Platform web console to assign vGPU passthrough devices to your virtual machine.

Prerequisites

  • The virtual machine must be stopped.

Procedure

  1. In the OpenShift Container Platform web console, click Virtualization → VirtualMachines from the side menu.
  2. Select the virtual machine to which you want to assign the device.
  3. On the Details tab, click GPU devices.

    If you add a vGPU device as a host device, you cannot access the device with the VNC console.

  4. Click Add GPU device, enter the Name and select the device from the Device name list.
  5. Click Save.
  6. Click the YAML tab to verify that the new devices have been added to your cluster configuration in the hostDevices section.
Note

You can add hardware devices to virtual machines created from customized templates or a YAML file. You cannot add devices to pre-supplied boot source templates for specific operating systems, such as Windows 10 or RHEL 7.

To display resources that are connected to your cluster, click Compute → Hardware Devices from the side menu.

8.15.11.2. Additional resources

8.15.12. Configuring mediated devices

OpenShift Virtualization automatically creates mediated devices, such as virtual GPUs (vGPUs), if you provide a list of devices in the HyperConverged custom resource (CR).

Important

Declarative configuration of mediated devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

8.15.12.1. About using the NVIDIA GPU Operator

The NVIDIA GPU Operator manages NVIDIA GPU resources in an OpenShift Container Platform cluster and automates tasks related to bootstrapping GPU nodes. Because the GPU is a special resource in the cluster, you must install some components before deploying application workloads onto the GPU. These components include the NVIDIA drivers that enable Compute Unified Device Architecture (CUDA), the Kubernetes device plugin, the container runtime, and other components such as automatic node labelling and monitoring.

Note

The NVIDIA GPU Operator is supported only by NVIDIA. For more information about obtaining support from NVIDIA, see Obtaining Support from NVIDIA.

There are two ways to enable GPUs with OpenShift Virtualization on OpenShift Container Platform: the OpenShift Container Platform-native way described here, and by using the NVIDIA GPU Operator.

The NVIDIA GPU Operator is a Kubernetes Operator that enables OpenShift Virtualization to expose GPUs to virtualized workloads running on OpenShift Container Platform. It allows users to easily provision and manage GPU-enabled virtual machines, enabling them to run complex artificial intelligence/machine learning (AI/ML) workloads on the same platform as their other workloads. It also provides an easy way to scale the GPU capacity of the infrastructure, allowing for rapid growth of GPU-based workloads.

For more information about using the NVIDIA GPU Operator to provision worker nodes for running GPU-accelerated VMs, see NVIDIA GPU Operator with OpenShift Virtualization.

8.15.12.2. About using virtual GPUs with OpenShift Virtualization

Some graphics processing unit (GPU) cards support the creation of virtual GPUs (vGPUs). OpenShift Virtualization can automatically create vGPUs and other mediated devices if an administrator provides configuration details in the HyperConverged custom resource (CR). This automation is especially useful for large clusters.

Note

Refer to your hardware vendor’s documentation for functionality and support details.

Mediated device
A physical device that is divided into one or more virtual devices. A vGPU is a type of mediated device (mdev); the performance of the physical GPU is divided among the virtual devices. You can assign mediated devices to one or more virtual machines (VMs), but the number of guests must be compatible with your GPU. Some GPUs do not support multiple guests.
8.15.12.2.1. Prerequisites
  • If your hardware vendor provides drivers, you installed them on the nodes where you want to create mediated devices.

8.15.12.2.2. Configuration overview

When configuring mediated devices, an administrator must complete the following tasks:

  • Create the mediated devices.
  • Expose the mediated devices to the cluster.

The HyperConverged CR includes APIs that accomplish both tasks.

Creating mediated devices

...
spec:
  mediatedDevicesConfiguration:
    mediatedDevicesTypes: 1
    - <device_type>
    nodeMediatedDeviceTypes: 2
    - mediatedDevicesTypes: 3
      - <device_type>
      nodeSelector: 4
        <node_selector_key>: <node_selector_value>
...

1
Required: Configures global settings for the cluster.
2
Optional: Overrides the global configuration for a specific node or group of nodes. Must be used with the global mediatedDevicesTypes configuration.
3
Required if you use nodeMediatedDeviceTypes. Overrides the global mediatedDevicesTypes configuration for the specified nodes.
4
Required if you use nodeMediatedDeviceTypes. Must include a key:value pair.

Exposing mediated devices to the cluster

...
  permittedHostDevices:
    mediatedDevices:
    - mdevNameSelector: GRID T4-2Q 1
      resourceName: nvidia.com/GRID_T4-2Q 2
...

1
Exposes the mediated devices that map to this value on the host.
Note

You can see the mediated device types that your device supports by viewing the contents of /sys/bus/pci/devices/<domain>:<bus>:<slot>.<function>/mdev_supported_types/<type>/name, substituting the correct values for your system.

For example, the name file for the nvidia-231 type contains the selector string GRID T4-2Q. Using GRID T4-2Q as the mdevNameSelector value allows nodes to use the nvidia-231 type.

2
The resourceName should match that allocated on the node. Find the resourceName by using the following command:
$ oc get $NODE -o json \
  | jq '.status.allocatable
    | with_entries(select(.key | startswith("nvidia.com/")))
    | with_entries(select(.value != "0"))'
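To confirm the selector string from the note above on an actual node, you can read the name file directly. This is a sketch that assumes a hypothetical GPU at PCI address 0000:3b:00.0 that supports the nvidia-231 type:

$ cat /sys/bus/pci/devices/0000:3b:00.0/mdev_supported_types/nvidia-231/name
GRID T4-2Q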
8.15.12.2.3. How vGPUs are assigned to nodes

For each physical device, OpenShift Virtualization configures the following values:

  • A single mdev type.
  • The maximum number of instances of the selected mdev type.

The cluster architecture affects how devices are created and assigned to nodes.

Large cluster with multiple cards per node

On nodes with multiple cards that can support similar vGPU types, the relevant device types are created in a round-robin manner. For example:

...
mediatedDevicesConfiguration:
  mediatedDevicesTypes:
  - nvidia-222
  - nvidia-228
  - nvidia-105
  - nvidia-108
...

In this scenario, each node has two cards, both of which support the following vGPU types:

nvidia-105
...
nvidia-108
nvidia-217
nvidia-299
...

On each node, OpenShift Virtualization creates the following vGPUs:

  • 16 vGPUs of type nvidia-105 on the first card.
  • 2 vGPUs of type nvidia-108 on the second card.
One node has a single card that supports more than one requested vGPU type

OpenShift Virtualization uses the supported type that comes first on the mediatedDevicesTypes list.

For example, the card on a node supports nvidia-223 and nvidia-224. The following mediatedDevicesTypes list is configured:

...
mediatedDevicesConfiguration:
  mediatedDevicesTypes:
  - nvidia-22
  - nvidia-223
  - nvidia-224
...

In this example, OpenShift Virtualization uses the nvidia-223 type.

8.15.12.2.4. About changing and removing mediated devices

You can update the cluster’s mediated device configuration by:

  • Editing the HyperConverged CR and changing the contents of the mediatedDevicesTypes stanza.
  • Changing the node labels that match the nodeMediatedDeviceTypes node selector.
  • Removing the device information from the spec.mediatedDevicesConfiguration and spec.permittedHostDevices stanzas of the HyperConverged CR.

    Note

    If you remove the device information from the spec.permittedHostDevices stanza without also removing it from the spec.mediatedDevicesConfiguration stanza, you cannot create a new mediated device type on the same node. To properly remove mediated devices, remove the device information from both stanzas.

Depending on the specific changes, these actions cause OpenShift Virtualization to reconfigure mediated devices or remove them from the cluster nodes.

8.15.12.2.5. Preparing hosts for mediated devices

You must enable the Input-Output Memory Management Unit (IOMMU) driver before you can configure mediated devices.

8.15.12.2.5.1. Adding kernel arguments to enable the IOMMU driver

To enable the IOMMU (Input-Output Memory Management Unit) driver in the kernel, create the MachineConfig object and add the kernel arguments.

Prerequisites

  • Administrative privilege to a working OpenShift Container Platform cluster.
  • Intel or AMD CPU hardware.
  • Intel Virtualization Technology for Directed I/O extensions or AMD IOMMU in the BIOS (Basic Input/Output System) is enabled.

Procedure

  1. Create a MachineConfig object that identifies the kernel argument. The following example shows a kernel argument for an Intel CPU.

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker 1
      name: 100-worker-iommu 2
    spec:
      config:
        ignition:
          version: 3.2.0
      kernelArguments:
          - intel_iommu=on 3
    ...
    1
    Applies the new kernel argument only to worker nodes.
    2
    The name indicates the ranking of this kernel argument (100) among the machine configs and its purpose. If you have an AMD CPU, specify the kernel argument as amd_iommu=on.
    3
    Identifies the kernel argument as intel_iommu for an Intel CPU.
  2. Create the new MachineConfig object:

    $ oc create -f 100-worker-kernel-arg-iommu.yaml

Verification

  • Verify that the new MachineConfig object was added.

    $ oc get MachineConfig
8.15.12.2.6. Adding and removing mediated devices

You can add or remove mediated devices.

8.15.12.2.6.1. Creating and exposing mediated devices

You can expose and create mediated devices such as virtual GPUs (vGPUs) by editing the HyperConverged custom resource (CR).

Prerequisites

  • You enabled the IOMMU (Input-Output Memory Management Unit) driver.

Procedure

  1. Edit the HyperConverged CR in your default editor by running the following command:

    $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. Add the mediated device information to the HyperConverged CR spec, ensuring that you include the mediatedDevicesConfiguration and permittedHostDevices stanzas. For example:

    Example configuration file

    apiVersion: hco.kubevirt.io/v1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      mediatedDevicesConfiguration: 1
        mediatedDevicesTypes: 2
        - nvidia-231
        nodeMediatedDeviceTypes: 3
        - mediatedDevicesTypes: 4
          - nvidia-233
          nodeSelector:
            kubernetes.io/hostname: node-11.redhat.com
      permittedHostDevices: 5
        mediatedDevices:
        - mdevNameSelector: GRID T4-2Q
          resourceName: nvidia.com/GRID_T4-2Q
        - mdevNameSelector: GRID T4-8Q
          resourceName: nvidia.com/GRID_T4-8Q
    ...

    1
    Creates mediated devices.
    2
    Required: Global mediatedDevicesTypes configuration.
    3
    Optional: Overrides the global configuration for specific nodes.
    4
    Required if you use nodeMediatedDeviceTypes.
    5
    Exposes mediated devices to the cluster.

  3. Save your changes and exit the editor.

Verification

  • You can verify that a device was added to a specific node by running the following command:

    $ oc describe node <node_name>
8.15.12.2.6.2. Removing mediated devices from the cluster using the CLI

To remove a mediated device from the cluster, delete the information for that device from the HyperConverged custom resource (CR).

Procedure

  1. Edit the HyperConverged CR in your default editor by running the following command:

    $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. Remove the device information from the spec.mediatedDevicesConfiguration and spec.permittedHostDevices stanzas of the HyperConverged CR. Removing both entries ensures that you can later create a new mediated device type on the same node. For example:

    Example configuration file

    apiVersion: hco.kubevirt.io/v1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      mediatedDevicesConfiguration:
        mediatedDevicesTypes: 1
          - nvidia-231
      permittedHostDevices:
        mediatedDevices: 2
        - mdevNameSelector: GRID T4-2Q
          resourceName: nvidia.com/GRID_T4-2Q

    1
    To remove the nvidia-231 device type, delete it from the mediatedDevicesTypes array.
    2
    To remove the GRID T4-2Q device, delete the mdevNameSelector field and its corresponding resourceName field.
  3. Save your changes and exit the editor.
8.15.12.3. Using mediated devices

A vGPU is a type of mediated device; the performance of the physical GPU is divided among the virtual devices. You can assign mediated devices to one or more virtual machines.

8.15.12.3.1. Assigning a mediated device to a virtual machine

Assign mediated devices such as virtual GPUs (vGPUs) to virtual machines.

Prerequisites

  • The mediated device is configured in the HyperConverged custom resource.

Procedure

  • Assign the mediated device to a virtual machine (VM) by editing the spec.domain.devices.gpus stanza of the VirtualMachine manifest:

    Example virtual machine manifest

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    spec:
      domain:
        devices:
          gpus:
          - deviceName: nvidia.com/TU104GL_Tesla_T4 1
            name: gpu1 2
          - deviceName: nvidia.com/GRID_T4-1Q
            name: gpu2

    1
    The resource name associated with the mediated device.
    2
    A name to identify the device on the VM.

Verification

  • To verify that the device is available from the virtual machine, run the following command, substituting <device_name> with the deviceName value from the VirtualMachine manifest:

    $ lspci -nnk | grep <device_name>
8.15.12.4. Additional resources

8.15.13. Configuring a watchdog

Expose a watchdog by configuring the virtual machine (VM) for a watchdog device, installing the watchdog, and starting the watchdog service.

8.15.13.1. Prerequisites
  • The virtual machine must have kernel support for an i6300esb watchdog device. Red Hat Enterprise Linux (RHEL) images support i6300esb.
8.15.13.2. Defining a watchdog device

Define how the watchdog proceeds when the operating system (OS) no longer responds.

Table 8.4. Available actions

poweroff

The virtual machine (VM) powers down immediately. If spec.running is set to true, or spec.runStrategy is not set to manual, then the VM reboots.

reset

The VM reboots in place and the guest OS cannot react. Use of this option is discouraged because the length of time required for the guest OS to reboot can cause liveness probes to time out. If cluster-level protections notice that a liveness probe has failed and forcibly reschedule the VM, the time it takes the VM to reboot can be extended.

shutdown

The VM gracefully powers down by stopping all services.

Procedure

  1. Create a YAML file with the following contents:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      labels:
        kubevirt.io/vm: vm2-rhel84-watchdog
      name: <vm-name>
    spec:
      running: false
      template:
        metadata:
          labels:
            kubevirt.io/vm: vm2-rhel84-watchdog
        spec:
          domain:
            devices:
              watchdog:
                name: <watchdog>
                i6300esb:
                  action: "poweroff" 1
    ...
    1
    Specify the watchdog action (poweroff, reset, or shutdown).

    The example above configures the i6300esb watchdog device on a RHEL8 VM with the poweroff action and exposes the device as /dev/watchdog.

    This device can now be used by the watchdog binary.

  2. Apply the YAML file to your cluster by running the following command:

    $ oc apply -f <file_name>.yaml
Important

This procedure is provided for testing watchdog functionality only and must not be run on production machines.

  1. Run the following command to verify that the VM is connected to the watchdog device:

    $ lspci | grep watchdog -i
  2. Run one of the following commands to confirm the watchdog is active:

    • Trigger a kernel panic:

      # echo c > /proc/sysrq-trigger
    • Terminate the watchdog service:

      # pkill -9 watchdog
8.15.13.3. Installing a watchdog device

Install the watchdog package on your virtual machine and start the watchdog service.

Procedure

  1. As a root user, install the watchdog package and dependencies:

    # yum install watchdog
  2. Uncomment the following line in the /etc/watchdog.conf file, and save the changes:

    #watchdog-device = /dev/watchdog
  3. Enable the watchdog service to start on boot:

    # systemctl enable --now watchdog.service
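To confirm that the service started successfully, a quick check such as the following can be used (a sketch, not part of the documented procedure):

# systemctl is-active watchdog.service
active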
8.15.13.4. Additional resources

8.15.14. Automatic importing and updating of pre-defined boot sources

You can use boot sources that are system-defined and included with OpenShift Virtualization or user-defined, which you create. System-defined boot source imports and updates are controlled by the product feature gate. You can enable, disable, or re-enable updates using the feature gate. User-defined boot sources are not controlled by the product feature gate and must be individually managed to opt in or opt out of automatic imports and updates.

Important

You must set a default storage class for automatic import and update of boot sources.

8.15.14.1. Enabling automatic boot source updates

If you have pre-defined boot sources from OpenShift Virtualization 4.9, then you must manually opt them in to the automatic boot source updates. All pre-defined boot sources from OpenShift Virtualization 4.10 and later are automatically updated by default.

Procedure

  • Use the following command to apply the dataImportCron label to the data source:

    $ oc label --overwrite DataSource rhel8 -n openshift-virtualization-os-images cdi.kubevirt.io/dataImportCron=true
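To confirm that the label was applied, you might list the labels on the data source (a sketch):

$ oc get DataSource rhel8 -n openshift-virtualization-os-images --show-labels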
8.15.14.2. Disabling automatic boot source updates

You can reduce the number of logs on disconnected environments or reduce resource usage by disabling the automatic imports and updates of pre-defined boot sources. Set the spec.featureGates.enableCommonBootImageImport field in the HyperConverged custom resource (CR) to false.

Note

Custom boot sources are not affected by this setting.

Procedure

  • Use the following command to disable automatic updates:

    $ oc patch hco kubevirt-hyperconverged -n openshift-cnv --type json -p '[{"op": "replace", "path": "/spec/featureGates/enableCommonBootImageImport", "value": false}]'
8.15.14.3. Re-enabling automatic boot source updates

If you have previously disabled automatic boot source updates, you must manually re-enable the feature. Set the spec.featureGates.enableCommonBootImageImport field in the HyperConverged custom resource (CR) to true.

Procedure

  • Use the following command to re-enable automatic updates:

    $ oc patch hco kubevirt-hyperconverged -n openshift-cnv --type json -p '[{"op": "replace", "path": "/spec/featureGates/enableCommonBootImageImport", "value": true}]'
8.15.14.4. Enabling automatic updates on custom boot sources

OpenShift Virtualization automatically updates pre-defined boot sources by default, but does not automatically update custom boot sources. You must manually enable automatic imports and updates on any custom boot sources by editing the HyperConverged custom resource (CR).

Procedure

  1. Use the following command to open the HyperConverged CR for editing:

    $ oc edit -n openshift-cnv HyperConverged
  2. Edit the HyperConverged CR, specifying the appropriate template and boot source in the dataImportCronTemplates section. For example:

    Example in CentOS 7

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
    spec:
      dataImportCronTemplates:
      - metadata:
          name: centos7-image-cron
          annotations:
            cdi.kubevirt.io/storage.bind.immediate.requested: "true" 1
        spec:
          schedule: "0 */12 * * *" 2
          template:
            spec:
              source:
                registry: 3
                  url: docker://quay.io/containerdisks/centos:7-2009
              storage:
                resources:
                  requests:
                    storage: 10Gi
          managedDataSource: centos7 4
          retentionPolicy: "None" 5

    1
    This annotation is required for storage classes with volumeBindingMode set to WaitForFirstConsumer.
    2
    Schedule for the job specified in cron format.
    3
    Use to create a data volume from a registry source. Use the default pod pullMethod and not node pullMethod, which is based on the node docker cache. The node docker cache is useful when a registry image is available via Container.Image, but the CDI importer is not authorized to access it.
    4
    For the custom image to be detected as an available boot source, the name of the image’s managedDataSource must match the name of the template’s DataSource, which is found under spec.dataVolumeTemplates.spec.sourceRef.name in the VM template YAML file.
    5
    Use All to retain data volumes and data sources when the cron job is deleted. Use None to delete data volumes and data sources when the cron job is deleted.

8.15.15. Enabling descheduler evictions on virtual machines

You can use the descheduler to evict pods so that the pods can be rescheduled onto more appropriate nodes. If the pod is a virtual machine, the pod eviction causes the virtual machine to be live migrated to another node.

Important

Descheduler eviction for virtual machines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

8.15.15.1. Descheduler profiles

Use the Technology Preview DevPreviewLongLifecycle profile to enable the descheduler on a virtual machine. This is the only descheduler profile currently available for OpenShift Virtualization. To ensure proper scheduling, create VMs with CPU and memory requests for the expected load.

DevPreviewLongLifecycle

This profile balances resource usage between nodes and enables the following strategies:

  • RemovePodsHavingTooManyRestarts: removes pods whose containers have been restarted too many times and pods where the sum of restarts over all containers (including Init Containers) is more than 100. Restarting the VM guest operating system does not increase this count.
  • LowNodeUtilization: evicts pods from overutilized nodes when there are any underutilized nodes. The destination node for the evicted pod will be determined by the scheduler.

    • A node is considered underutilized if its usage is below 20% for all thresholds (CPU, memory, and number of pods).
    • A node is considered overutilized if its usage is above 50% for any of the thresholds (CPU, memory, and number of pods).
8.15.15.2. Installing the descheduler

The descheduler is not available by default. To enable the descheduler, you must install the Kube Descheduler Operator from OperatorHub and enable one or more descheduler profiles.

Prerequisites

  • Cluster administrator privileges.
  • Access to the OpenShift Container Platform web console.

Procedure

  1. Log in to the OpenShift Container Platform web console.
  2. Create the required namespace for the Kube Descheduler Operator.

    1. Navigate to Administration → Namespaces and click Create Namespace.
    2. Enter openshift-kube-descheduler-operator in the Name field, enter openshift.io/cluster-monitoring=true in the Labels field to enable descheduler metrics, and click Create.
  3. Install the Kube Descheduler Operator.

    1. Navigate to Operators → OperatorHub.
    2. Type Kube Descheduler Operator into the filter box.
    3. Select the Kube Descheduler Operator and click Install.
    4. On the Install Operator page, select A specific namespace on the cluster. Select openshift-kube-descheduler-operator from the drop-down menu.
    5. Adjust the values for the Update Channel and Approval Strategy to the desired values.
    6. Click Install.
  4. Create a descheduler instance.

    1. From the Operators → Installed Operators page, click the Kube Descheduler Operator.
    2. Select the Kube Descheduler tab and click Create KubeDescheduler.
    3. Edit the settings as necessary.

      1. Expand the Profiles section and select DevPreviewLongLifecycle. The AffinityAndTaints profile is enabled by default.

        Important

        The only profile currently available for OpenShift Virtualization is DevPreviewLongLifecycle.

You can also configure the profiles and settings for the descheduler later using the OpenShift CLI (oc).

8.15.15.3. Enabling descheduler evictions on a virtual machine (VM)

After the descheduler is installed, you can enable descheduler evictions on your VM by adding an annotation to the VirtualMachine custom resource (CR).

Prerequisites

  • Install the descheduler in the OpenShift Container Platform web console or OpenShift CLI (oc).
  • Ensure that the VM is not running.

Procedure

  1. Before starting the VM, add the descheduler.alpha.kubernetes.io/evict annotation to the VirtualMachine CR:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    spec:
      template:
        metadata:
          annotations:
            descheduler.alpha.kubernetes.io/evict: "true"
  2. If you did not already set the DevPreviewLongLifecycle profile in the web console during installation, specify DevPreviewLongLifecycle in the spec.profiles section of the KubeDescheduler object:

    apiVersion: operator.openshift.io/v1
    kind: KubeDescheduler
    metadata:
      name: cluster
      namespace: openshift-kube-descheduler-operator
    spec:
      deschedulingIntervalSeconds: 3600
      profiles:
      - DevPreviewLongLifecycle

The descheduler is now enabled on the VM.
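
Optionally, you can confirm that the descheduler is running and that the DevPreviewLongLifecycle profile is set by inspecting the operator namespace and the KubeDescheduler object (a sketch that uses the resource names from the installation steps above):

    $ oc get pods -n openshift-kube-descheduler-operator
    $ oc get kubedescheduler cluster -n openshift-kube-descheduler-operator -o yaml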

8.15.15.4. Additional resources

8.16. Importing virtual machines

8.16.1. TLS certificates for data volume imports

8.16.1.1. Adding TLS certificates for authenticating data volume imports

TLS certificates for registry or HTTPS endpoints must be added to a config map to import data from these sources. This config map must be present in the namespace of the destination data volume.

Create the config map by referencing the relative file path for the TLS certificate.

Procedure

  1. Ensure you are in the correct namespace. The config map can only be referenced by data volumes if it is in the same namespace.

    $ oc get ns
  2. Create the config map:

    $ oc create configmap <configmap-name> --from-file=</path/to/file/ca.pem>
8.16.1.2. Example: Config map created from a TLS certificate

The following example shows a config map created from the ca.pem TLS certificate.

apiVersion: v1
kind: ConfigMap
metadata:
  name: tls-certs
data:
  ca.pem: |
    -----BEGIN CERTIFICATE-----
    ... <base64 encoded cert> ...
    -----END CERTIFICATE-----
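
A data volume then consumes this config map by name through the certConfigMap field of its HTTP source. The following minimal sketch assumes the tls-certs config map above; the data volume name, endpoint URL, and requested size are illustrative:

    apiVersion: cdi.kubevirt.io/v1beta1
    kind: DataVolume
    metadata:
      name: import-with-custom-ca                          # illustrative name
    spec:
      source:
        http:
          url: "https://<server-with-custom-ca>/disk.qcow2" # illustrative endpoint
          certConfigMap: tls-certs                          # config map created above
      storage:
        resources:
          requests:
            storage: 10Gi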

8.16.2. Importing virtual machine images with data volumes

Use the Containerized Data Importer (CDI) to import a virtual machine image into a persistent volume claim (PVC) by using a data volume. You can attach a data volume to a virtual machine for persistent storage.

The virtual machine image can be hosted at an HTTP or HTTPS endpoint, or built into a container disk and stored in a container registry.

Important

When you import a disk image into a PVC, the disk image is expanded to use the full storage capacity that is requested in the PVC. To use this space, the disk partitions and file system(s) in the virtual machine might need to be expanded.

The resizing procedure varies based on the operating system installed on the virtual machine. See the operating system documentation for details.
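
As one illustration only, on a Linux guest where the root file system is ext4 on the first partition of the virtio disk, the expansion might look like the following (device names, partition numbers, and tools are assumptions; follow the guest operating system documentation):

    $ sudo growpart /dev/vda 1    # grow partition 1 to fill the disk (requires cloud-utils-growpart)
    $ sudo resize2fs /dev/vda1    # grow the ext4 file system to fill the partition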

8.16.2.1. Prerequisites
8.16.2.2. CDI supported operations matrix

This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.

KubeVirt (QCOW2)

  • HTTP: ✓ QCOW2, ✓ GZ*, ✓ XZ*
  • HTTPS: ✓ QCOW2**, ✓ GZ*, ✓ XZ*
  • HTTP basic auth: ✓ QCOW2, ✓ GZ*, ✓ XZ*
  • Registry: ✓ QCOW2*, □ GZ, □ XZ
  • Upload: ✓ QCOW2*, ✓ GZ*, ✓ XZ*

KubeVirt (RAW)

  • HTTP: ✓ RAW, ✓ GZ, ✓ XZ
  • HTTPS: ✓ RAW, ✓ GZ, ✓ XZ
  • HTTP basic auth: ✓ RAW, ✓ GZ, ✓ XZ
  • Registry: ✓ RAW*, □ GZ, □ XZ
  • Upload: ✓ RAW*, ✓ GZ*, ✓ XZ*

✓ Supported operation

□ Unsupported operation

* Requires scratch space

** Requires scratch space if a custom certificate authority is required

Note

CDI now uses the OpenShift Container Platform cluster-wide proxy configuration.

8.16.2.3. About data volumes

DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared.

8.16.2.4. Importing a virtual machine image into storage by using a data volume

You can import a virtual machine image into storage by using a data volume.

The virtual machine image can be hosted at an HTTP or HTTPS endpoint or the image can be built into a container disk and stored in a container registry.

You specify the data source for the image in a VirtualMachine configuration file. When the virtual machine is created, the data volume with the virtual machine image is imported into storage.

Prerequisites

  • To import a virtual machine image, you must have the following:

    • A virtual machine disk image in RAW, ISO, or QCOW2 format, optionally compressed by using xz or gz.
    • An HTTP or HTTPS endpoint where the image is hosted, along with any authentication credentials needed to access the data source.
  • To import a container disk, you must have a virtual machine image built into a container disk and stored in a container registry, along with any authentication credentials needed to access the data source.
  • If the virtual machine must communicate with servers that use self-signed certificates or certificates not signed by the system CA bundle, you must create a config map in the same namespace as the data volume.

Procedure

  1. If your data source requires authentication, create a Secret manifest, specifying the data source credentials, and save it as endpoint-secret.yaml:

    apiVersion: v1
    kind: Secret
    metadata:
      name: endpoint-secret 1
      labels:
        app: containerized-data-importer
    type: Opaque
    data:
      accessKeyId: "" 2
      secretKey:   "" 3
    1
    Specify the name of the Secret.
    2
    Specify the Base64-encoded key ID or user name.
    3
    Specify the Base64-encoded secret key or password.
  2. Apply the Secret manifest:

    $ oc apply -f endpoint-secret.yaml
  3. Edit the VirtualMachine manifest, specifying the data source for the virtual machine image you want to import, and save it as vm-fedora-datavolume.yaml:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      creationTimestamp: null
      labels:
        kubevirt.io/vm: vm-fedora-datavolume
      name: vm-fedora-datavolume 1
    spec:
      dataVolumeTemplates:
      - metadata:
          creationTimestamp: null
          name: fedora-dv 2
        spec:
          storage:
            resources:
              requests:
                storage: 10Gi
            storageClassName: local
          source:
            http: 3
              url: "https://mirror.arizona.edu/fedora/linux/releases/35/Cloud/x86_64/images/Fedora-Cloud-Base-35-1.2.x86_64.qcow2" 4
              secretRef: endpoint-secret 5
              certConfigMap: "" 6
        status: {}
      running: true
      template:
        metadata:
          creationTimestamp: null
          labels:
            kubevirt.io/vm: vm-fedora-datavolume
        spec:
          domain:
            devices:
              disks:
              - disk:
                  bus: virtio
                name: datavolumedisk1
            machine:
              type: ""
            resources:
              requests:
                memory: 1.5Gi
          terminationGracePeriodSeconds: 180
          volumes:
          - dataVolume:
              name: fedora-dv
            name: datavolumedisk1
    status: {}
    1
    Specify the name of the virtual machine.
    2
    Specify the name of the data volume.
    3
    Specify http for an HTTP or HTTPS endpoint. Specify registry for a container disk image imported from a registry.
    4
    Specify the URL or registry endpoint of the virtual machine image you want to import. This example references a virtual machine image at an HTTPS endpoint. An example of a container registry endpoint is url: "docker://kubevirt/fedora-cloud-container-disk-demo:latest".
    5
    Specify the Secret name if you created a Secret for the data source.
    6
    Optional: Specify a CA certificate config map.
  4. Create the virtual machine:

    $ oc create -f vm-fedora-datavolume.yaml
    Note

    The oc create command creates the data volume and the virtual machine. The CDI controller creates an underlying PVC with the correct annotation and the import process begins. When the import is complete, the data volume status changes to Succeeded. You can start the virtual machine.

    Data volume provisioning happens in the background, so there is no need to monitor the process.
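
    Optionally, instead of polling the data volume status, you can block until the import finishes by waiting on the data volume's Ready condition (a sketch, assuming your CDI version exposes a Ready condition; the fedora-dv name comes from this example and the timeout is arbitrary):

    $ oc wait dv/fedora-dv --for=condition=Ready --timeout=10m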

Verification

  1. The importer pod downloads the virtual machine image or container disk from the specified URL and stores it on the provisioned PV. View the status of the importer pod by running the following command:

    $ oc get pods
  2. Monitor the data volume until its status is Succeeded by running the following command:

    $ oc describe dv fedora-dv 1
    1
    Specify the data volume name that you defined in the VirtualMachine manifest.
  3. Verify that provisioning is complete and that the virtual machine has started by accessing its serial console:

    $ virtctl console vm-fedora-datavolume
8.16.2.5. Additional resources

8.16.3. Importing virtual machine images into block storage with data volumes

You can import an existing virtual machine image into your OpenShift Container Platform cluster. OpenShift Virtualization uses data volumes to automate the import of data and the creation of an underlying persistent volume claim (PVC).

Important

When you import a disk image into a PVC, the disk image is expanded to use the full storage capacity that is requested in the PVC. To use this space, the disk partitions and file system(s) in the virtual machine might need to be expanded.

The resizing procedure varies based on the operating system that is installed on the virtual machine. See the operating system documentation for details.

8.16.3.1. Prerequisites
8.16.3.2. About data volumes

DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared.

8.16.3.3. About block persistent volumes

A block persistent volume (PV) is a PV that is backed by a raw block device. These volumes do not have a file system and can provide performance benefits for virtual machines by reducing overhead.

Raw block volumes are provisioned by specifying volumeMode: Block in the PV and persistent volume claim (PVC) specification.
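
For reference, a minimal sketch of a PVC that requests a raw block volume follows; the name, size, and storage class are illustrative:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: <block-pvc>
    spec:
      accessModes:
        - ReadWriteOnce
      volumeMode: Block          # request a raw block device instead of a file system
      storageClassName: local    # illustrative; match the class of your block PVs
      resources:
        requests:
          storage: 2Gi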

8.16.3.4. Creating a local block persistent volume

Create a local block persistent volume (PV) on a node by populating a file and mounting it as a loop device. You can then reference this loop device in a PV manifest as a Block volume and use it as a block device for a virtual machine image.

Procedure

  1. Log in as root to the node on which to create the local PV. This procedure uses node01 for its examples.
  2. Create a file and populate it with null characters so that it can be used as a block device. The following example creates a file loop10 with a size of 2 GB (20 blocks of 100 MB):

    $ dd if=/dev/zero of=<loop10> bs=100M count=20
  3. Mount the loop10 file as a loop device.

    $ losetup </dev/loop10> <loop10> 1 2
    1
    File path where the loop device is mounted.
    2
    The file created in the previous step to be mounted as the loop device.
  4. Create a PersistentVolume manifest that references the mounted loop device.

    kind: PersistentVolume
    apiVersion: v1
    metadata:
      name: <local-block-pv10>
      annotations:
    spec:
      local:
        path: </dev/loop10> 1
      capacity:
        storage: <2Gi>
      volumeMode: Block 2
      storageClassName: local 3
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Delete
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - <node01> 4
    1
    The path of the loop device on the node.
    2
    Specifies it is a block PV.
    3
    Optional: Set a storage class for the PV. If you omit it, the cluster default is used.
    4
    The node on which the block device was mounted.
  5. Create the block PV.

    # oc create -f <local-block-pv10.yaml> 1
    1
    The file name of the persistent volume created in the previous step.
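
To optionally confirm that the block PV was created and is available before you reference it from a data volume, check its status (a sketch that uses the placeholder name above):

    $ oc get pv <local-block-pv10>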
8.16.3.5. Importing a virtual machine image into block storage by using a data volume

You can import a virtual machine image into block storage by using a data volume. You reference the data volume in a VirtualMachine manifest before you create a virtual machine.

Prerequisites

  • A virtual machine disk image in RAW, ISO, or QCOW2 format, optionally compressed by using xz or gz.
  • An HTTP or HTTPS endpoint where the image is hosted, along with any authentication credentials needed to access the data source.

Procedure

  1. If your data source requires authentication, create a Secret manifest, specifying the data source credentials, and save it as endpoint-secret.yaml:

    apiVersion: v1
    kind: Secret
    metadata:
      name: endpoint-secret 1
      labels:
        app: containerized-data-importer
    type: Opaque
    data:
      accessKeyId: "" 2
      secretKey:   "" 3
    1
    Specify the name of the Secret.
    2
    Specify the Base64-encoded key ID or user name.
    3
    Specify the Base64-encoded secret key or password.
  2. Apply the Secret manifest:

    $ oc apply -f endpoint-secret.yaml
  3. Create a DataVolume manifest, specifying the data source for the virtual machine image and Block for storage.volumeMode.

    apiVersion: cdi.kubevirt.io/v1beta1
    kind: DataVolume
    metadata:
      name: import-pv-datavolume 1
    spec:
      storageClassName: local 2
      source:
        http:
          url: "https://mirror.arizona.edu/fedora/linux/releases/35/Cloud/x86_64/images/Fedora-Cloud-Base-35-1.2.x86_64.qcow2" 3
          secretRef: endpoint-secret 4
      storage:
        volumeMode: Block 5
        resources:
          requests:
            storage: 10Gi
    1
    Specify the name of the data volume.
    2
    Optional: Set the storage class or omit it to accept the cluster default.
    3
    Specify the HTTP or HTTPS URL of the image to import.
    4
    Specify the Secret name if you created a Secret for the data source.
    5
    The volume mode and access mode are detected automatically for known storage provisioners. Otherwise, specify Block.
  4. Create the data volume to import the virtual machine image:

    $ oc create -f import-pv-datavolume.yaml

You can reference this data volume in a VirtualMachine manifest before you create a virtual machine.
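
For example, the disk and volume entries of a VirtualMachine manifest that consume this data volume might look like the following fragment (the disk name is illustrative; the data volume name matches the example above):

    spec:
      template:
        spec:
          domain:
            devices:
              disks:
              - disk:
                  bus: virtio
                name: datavolumedisk1            # illustrative disk name
          volumes:
          - dataVolume:
              name: import-pv-datavolume         # data volume created in this procedure
            name: datavolumedisk1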

8.16.3.6. CDI supported operations matrix

This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.

KubeVirt (QCOW2)

  • HTTP: ✓ QCOW2, ✓ GZ*, ✓ XZ*
  • HTTPS: ✓ QCOW2**, ✓ GZ*, ✓ XZ*
  • HTTP basic auth: ✓ QCOW2, ✓ GZ*, ✓ XZ*
  • Registry: ✓ QCOW2*, □ GZ, □ XZ
  • Upload: ✓ QCOW2*, ✓ GZ*, ✓ XZ*

KubeVirt (RAW)

  • HTTP: ✓ RAW, ✓ GZ, ✓ XZ
  • HTTPS: ✓ RAW, ✓ GZ, ✓ XZ
  • HTTP basic auth: ✓ RAW, ✓ GZ, ✓ XZ
  • Registry: ✓ RAW*, □ GZ, □ XZ
  • Upload: ✓ RAW*, ✓ GZ*, ✓ XZ*

✓ Supported operation

□ Unsupported operation

* Requires scratch space

** Requires scratch space if a custom certificate authority is required

Note

CDI now uses the OpenShift Container Platform cluster-wide proxy configuration.

8.16.3.7. Additional resources

8.17. Cloning virtual machines

8.17.1. Enabling user permissions to clone data volumes across namespaces

The isolating nature of namespaces means that users cannot by default clone resources between namespaces.

To enable a user to clone a virtual machine to another namespace, a user with the cluster-admin role must create a new cluster role. Bind this cluster role to a user to enable them to clone virtual machines to the destination namespace.

8.17.1.1. Prerequisites
  • Only a user with the cluster-admin role can create cluster roles.
8.17.1.2. About data volumes

DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared.

8.17.1.3. Creating RBAC resources for cloning data volumes

Create a new cluster role that enables permissions for all actions for the datavolumes resource.

Procedure

  1. Create a ClusterRole manifest:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: <datavolume-cloner> 1
    rules:
    - apiGroups: ["cdi.kubevirt.io"]
      resources: ["datavolumes/source"]
      verbs: ["*"]
    1
    Unique name for the cluster role.
  2. Create the cluster role in the cluster:

    $ oc create -f <datavolume-cloner.yaml> 1
    1
    The file name of the ClusterRole manifest created in the previous step.
  3. Create a RoleBinding manifest that applies to both the source and destination namespaces and references the cluster role created in the previous step.

    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: <allow-clone-to-user> 1
      namespace: <Source namespace> 2
    subjects:
    - kind: ServiceAccount
      name: default
      namespace: <Destination namespace> 3
    roleRef:
      kind: ClusterRole
      name: datavolume-cloner 4
      apiGroup: rbac.authorization.k8s.io
    1
    Unique name for the role binding.
    2
    The namespace for the source data volume.
    3
    The namespace to which the data volume is cloned.
    4
    The name of the cluster role created in the previous step.
  4. Create the role binding in the cluster:

    $ oc create -f <datavolume-cloner.yaml> 1
    1
    The file name of the RoleBinding manifest created in the previous step.
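
To optionally verify that the binding exists in the source namespace and grants the expected service account, you can describe it (a sketch that uses the placeholder names above):

    $ oc describe rolebinding <allow-clone-to-user> -n <source-namespace>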

8.17.2. Cloning a virtual machine disk into a new data volume

You can clone the persistent volume claim (PVC) of a virtual machine disk into a new data volume by referencing the source PVC in your data volume configuration file.

Warning

Cloning operations between different volume modes are supported, such as cloning from a persistent volume (PV) with volumeMode: Block to a PV with volumeMode: Filesystem.

However, you can only clone between different volume modes if both data volumes have the contentType: kubevirt.

Tip

When you enable preallocation globally, or for a single data volume, the Containerized Data Importer (CDI) preallocates disk space during cloning. Preallocation enhances write performance. For more information, see Using preallocation for data volumes.
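
For example, preallocation can be requested for a single data volume by setting the preallocation field in its spec. The following minimal sketch reuses the cloning source fields described in this section; the names and size are illustrative:

    apiVersion: cdi.kubevirt.io/v1beta1
    kind: DataVolume
    metadata:
      name: <preallocated-clone>         # illustrative name
    spec:
      preallocation: true                # CDI preallocates disk space for this data volume
      source:
        pvc:
          namespace: "<source-namespace>"
          name: "<source-pvc>"
      storage:
        resources:
          requests:
            storage: 2Gi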

8.17.2.1. Prerequisites
8.17.2.2. About data volumes

DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared.

8.17.2.3. Cloning the persistent volume claim of a virtual machine disk into a new data volume

You can clone a persistent volume claim (PVC) of an existing virtual machine disk into a new data volume. The new data volume can then be used for a new virtual machine.

Note

When a data volume is created independently of a virtual machine, the lifecycle of the data volume is independent of the virtual machine. If the virtual machine is deleted, neither the data volume nor its associated PVC is deleted.

Prerequisites

  • Determine the PVC of an existing virtual machine disk to use. You must power down the virtual machine that is associated with the PVC before you can clone it.
  • Install the OpenShift CLI (oc).

Procedure

  1. Examine the virtual machine disk you want to clone to identify the name and namespace of the associated PVC.
  2. Create a YAML file for a data volume that specifies the name of the new data volume, the name and namespace of the source PVC, and the size of the new data volume.

    For example:

    apiVersion: cdi.kubevirt.io/v1beta1
    kind: DataVolume
    metadata:
      name: <cloner-datavolume> 1
    spec:
      source:
        pvc:
          namespace: "<source-namespace>" 2
          name: "<my-favorite-vm-disk>" 3
      pvc:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: <2Gi> 4
    1
    The name of the new data volume.
    2
    The namespace where the source PVC exists.
    3
    The name of the source PVC.
    4