
Chapter 7. Virtual machines


7.1. Creating VMs from Red Hat images

7.1.1. Creating virtual machines from Red Hat images overview

Red Hat images are golden images. They are published as container disks in a secure registry. The Containerized Data Importer (CDI) polls and imports the container disks into your cluster and stores them in the openshift-virtualization-os-images project as snapshots or persistent volume claims (PVCs).

Red Hat images are automatically updated. You can disable and re-enable automatic updates for these images. See Managing Red Hat boot source updates.

Cluster administrators can enable automatic subscription for Red Hat Enterprise Linux (RHEL) virtual machines in the OpenShift Virtualization web console.

You can create virtual machines (VMs) from operating system images provided by Red Hat by using instance types, templates, or the command line, as described in the following sections.

Important

Do not create VMs in the default openshift-* namespaces. Instead, create a new namespace or use an existing namespace without the openshift prefix.

7.1.1.1. About golden images

A golden image is a preconfigured snapshot of a virtual machine (VM) that you can use as a resource to deploy new VMs. For example, you can use golden images to provision the same system environment consistently and deploy systems more quickly and efficiently.

7.1.1.1.1. How do golden images work?

Golden images are created by installing and configuring an operating system and software applications on a reference machine or virtual machine. This includes setting up the system, installing required drivers, applying patches and updates, and configuring specific options and preferences.

After the golden image is created, it is saved as a template or image file that can be replicated and deployed across multiple clusters. The golden image can be updated by its maintainer periodically to incorporate necessary software updates and patches, ensuring that the image remains up to date and secure, and newly created VMs are based on this updated image.

7.1.1.1.2. Red Hat implementation of golden images

Red Hat publishes golden images as container disks in the registry for versions of Red Hat Enterprise Linux (RHEL). Container disks are virtual machine images that are stored as container images in a container image registry. Any published image is automatically made available in connected clusters after OpenShift Virtualization is installed. After the images are available in a cluster, they are ready to use to create VMs.

7.1.1.2. About VM boot sources

Virtual machines (VMs) consist of a VM definition and one or more disks that are backed by data volumes. VM templates enable you to create VMs using predefined specifications.

Every template requires a boot source, which is a fully configured disk image including configured drivers. Each template contains a VM definition with a pointer to the boot source. Each boot source has a predefined name and namespace. For some operating systems, a boot source is automatically provided. If it is not provided, then an administrator must prepare a custom boot source.

Provided boot sources are updated automatically to the latest version of the operating system. For auto-updated boot sources, persistent volume claims (PVCs) and volume snapshots are created with the cluster’s default storage class. If you select a different default storage class after configuration, you must delete the existing boot sources in the cluster namespace that are configured with the previous default storage class.
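For example, to check which boot sources and their backing claims currently exist in the cluster namespace, you can list the data sources and PVCs (a quick sketch; if your storage uses the volume snapshot format, list volume snapshots as well):

$ oc get datasource,pvc -n openshift-virtualization-os-images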

7.1.2. Creating virtual machines from instance types

You can simplify virtual machine (VM) creation by using instance types, whether you use the OpenShift Container Platform web console or the CLI to create VMs.

7.1.2.1. About instance types

An instance type is a reusable object where you can define resources and characteristics to apply to new VMs. You can define custom instance types or use the variety of instance types that are included when you install OpenShift Virtualization.

To create a new instance type, you must first create a manifest, either manually or by using the virtctl CLI tool. You then create the instance type object by applying the manifest to your cluster.

OpenShift Virtualization provides two CRDs for configuring instance types:

  • A namespaced object: VirtualMachineInstancetype
  • A cluster-wide object: VirtualMachineClusterInstancetype

These objects use the same VirtualMachineInstancetypeSpec.
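For example, a minimal cluster-wide instance type differs from its namespaced counterpart only in its kind; it is not bound to a namespace. This is a sketch with an illustrative name:

apiVersion: instancetype.kubevirt.io/v1beta1
kind: VirtualMachineClusterInstancetype
metadata:
  name: example-clusterinstancetype
spec:
  cpu:
    guest: 1
  memory:
    guest: 128Mi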

7.1.2.1.1. Required attributes

When you configure an instance type, you must define the cpu and memory attributes. Other attributes are optional.

Note

When you create a VM from an instance type, you cannot override any parameters defined in the instance type.

Because instance types require defined CPU and memory attributes, OpenShift Virtualization always rejects additional requests for these resources when creating a VM from an instance type.

You can manually create an instance type manifest. For example:

Example YAML file with required fields

apiVersion: instancetype.kubevirt.io/v1beta1
kind: VirtualMachineInstancetype
metadata:
  name: example-instancetype
spec:
  cpu:
    guest: 1 1
  memory:
    guest: 128Mi 2

1
Required. Specifies the number of vCPUs to allocate to the guest.
2
Required. Specifies an amount of memory to allocate to the guest.

You can create an instance type manifest by using the virtctl CLI utility. For example:

Example virtctl command with required fields

$ virtctl create instancetype --cpu 2 --memory 256Mi

where:

--cpu <value>
Specifies the number of vCPUs to allocate to the guest. Required.
--memory <value>
Specifies an amount of memory to allocate to the guest. Required.
Tip

You can immediately create the object from the new manifest by running the following command:

$ virtctl create instancetype --cpu 2 --memory 256Mi | oc apply -f -
7.1.2.1.2. Optional attributes

In addition to the required cpu and memory attributes, you can include the following optional attributes in the VirtualMachineInstancetypeSpec. An example manifest follows the list.

annotations
List annotations to apply to the VM.
gpus
List vGPUs for passthrough.
hostDevices
List host devices for passthrough.
ioThreadsPolicy
Define an IO threads policy for managing dedicated disk access.
launchSecurity
Configure Secure Encrypted Virtualization (SEV).
nodeSelector
Specify node selectors to control the nodes where this VM is scheduled.
schedulerName
Define a custom scheduler to use for this VM instead of the default scheduler.
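The following sketch sets a few of these optional attributes alongside the required ones; the name, annotation key, and values are illustrative only:

apiVersion: instancetype.kubevirt.io/v1beta1
kind: VirtualMachineInstancetype
metadata:
  name: example-instancetype-tuned
spec:
  cpu:
    guest: 2
  memory:
    guest: 2Gi
  annotations:
    example.com/team: virtualization # applied to VMs created from this instance type
  nodeSelector:
    node-role.kubernetes.io/worker: "" # schedule VMs only on worker nodes
  ioThreadsPolicy: auto # manage dedicated disk access with an IO threads policy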

7.1.2.2. Pre-defined instance types

OpenShift Virtualization includes a set of pre-defined instance types called common-instancetypes. Some are specialized for specific workloads and others are workload-agnostic.

These instance type resources are named according to their series, version, and size. The size value follows the . delimiter and ranges from nano to 8xlarge.

Table 7.1. common-instancetypes series comparison

Use case | Series | Characteristics | vCPU to memory ratio | Example resource
Universal | U | Burstable CPU performance | 1:4 | u1.medium (1 vCPU, 4Gi memory)
Overcommitted | O | Overcommitted memory; burstable CPU performance | 1:4 | o1.small (1 vCPU, 2Gi memory)
Compute-exclusive | CX | Hugepages; dedicated CPU; isolated emulator threads; vNUMA | 1:2 | cx1.2xlarge (8 vCPUs, 16Gi memory)
NVIDIA GPU | GN | For VMs that use GPUs provided by the NVIDIA GPU Operator; predefined GPUs; burstable CPU performance | 1:4 | gn1.8xlarge (32 vCPUs, 128Gi memory)
Memory-intensive | M | Hugepages; burstable CPU performance | 1:8 | m1.large (2 vCPUs, 16Gi memory)
Network-intensive | N | Hugepages; dedicated CPU; isolated emulator threads; requires nodes capable of running DPDK workloads | 1:2 | n1.medium (4 vCPUs, 4Gi memory)

7.1.2.3. Creating manifests by using the virtctl tool

You can use the virtctl CLI utility to simplify creating manifests for VMs, VM instance types, and VM preferences. For more information, see VM manifest creation commands.

If you have a VirtualMachine manifest, you can create a VM from the command line.
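For example, assuming the u1.medium instance type and the rhel.9 preference referenced elsewhere in this chapter, you can generate a VM manifest and create the object in one step. This is a sketch; the available flags depend on your virtctl version:

$ virtctl create vm --name rhel-9-example --instancetype u1.medium --preference rhel.9 | oc create -f -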

7.1.2.4. Creating a VM from an instance type by using the web console

You can create a virtual machine (VM) from an instance type by using the OpenShift Container Platform web console. You can also use the web console to create a VM by copying an existing snapshot or to clone a VM.

You can create a VM from a list of available bootable volumes. You can add Linux- or Windows-based volumes to the list.

Procedure

  1. In the web console, navigate to Virtualization Catalog.

    The InstanceTypes tab opens by default.

  2. Select either of the following options:

    • Select a suitable bootable volume from the list. If the list is truncated, click the Show all button to display the entire list.

      Note

      The bootable volume table lists only those volumes in the openshift-virtualization-os-images namespace that have the instancetype.kubevirt.io/default-preference label.

      • Optional: Click the star icon to designate a bootable volume as a favorite. Starred bootable volumes appear first in the volume list.
    • Click Add volume to upload a new volume or to use an existing persistent volume claim (PVC), a volume snapshot, or a containerDisk volume. Click Save.

      Logos of operating systems that are not available in the cluster are shown at the bottom of the list. You can add a volume for the required operating system by clicking the Add volume link.

      In addition, there is a link to the Create a Windows boot source quick start. The same link appears in a popover if you hover the pointer over the question mark icon next to the Select volume to boot from line.

      Immediately after you install the environment or when the environment is disconnected, the list of volumes to boot from is empty. In that case, three operating system logos are displayed: Windows, RHEL, and Linux. You can add a new volume that meets your requirements by clicking the Add volume button.

  3. Click an instance type tile and select the resource size appropriate for your workload.
  4. Optional: Choose the virtual machine details, including the VM’s name, that apply to the volume you are booting from:

    • For a Linux-based volume, follow these steps to configure SSH:

      1. If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key in the VirtualMachine details section.
      2. Select one of the following options:

        • Use existing: Select a secret from the secrets list.
        • Add new: Follow these steps:

          1. Browse to the public SSH key file or paste the file in the key field.
          2. Enter the secret name.
          3. Optional: Select Automatically apply this key to any new VirtualMachine you create in this project.
      3. Click Save.
    • For a Windows volume, follow either of these sets of steps to configure sysprep options:

      • If you have not already added sysprep options for the Windows volume, follow these steps:

        1. Click the edit icon beside Sysprep in the VirtualMachine details section.
        2. Add the Autounattend.xml answer file.
        3. Add the Unattend.xml answer file.
        4. Click Save.
      • If you want to use existing sysprep options for the Windows volume, follow these steps:

        1. Click Attach existing sysprep.
        2. Enter the name of the existing sysprep Unattend.xml answer file.
        3. Click Save.
  5. Optional: If you are creating a Windows VM, you can mount a Windows driver disk:

    1. Click the Customize VirtualMachine button.
    2. On the VirtualMachine details page, click Storage.
    3. Select the Mount Windows drivers disk checkbox.
  6. Optional: Click View YAML & CLI to view the YAML file. Click CLI to view the CLI commands. You can also download or copy either the YAML file contents or the CLI commands.
  7. Click Create VirtualMachine.

After the VM is created, you can monitor the status on the VirtualMachine details page.
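If you prefer to watch the status from the command line instead, you can run a command similar to the following, where the VM name and namespace are placeholders:

$ oc get vm <vm_name> -n <namespace> -w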

7.1.3. Creating virtual machines from templates

You can create virtual machines (VMs) from Red Hat templates by using the OpenShift Container Platform web console.

7.1.3.1. About VM templates

Boot sources

You can expedite VM creation by using templates that have an available boot source. Templates with a boot source are labeled Available boot source if they do not have a custom label.

Templates without a boot source are labeled Boot source required. See Creating virtual machines from custom images.

Customization
You can customize the disk source and VM parameters before you start the VM.

See storage volume types and storage fields for details about disk source settings.

Note

If you copy a VM template with all its labels and annotations, your version of the template is marked as deprecated when a new version of the Scheduling, Scale, and Performance (SSP) Operator is deployed. You can remove this designation. See Customizing a VM template by using the web console.

Single-node OpenShift
Due to differences in storage behavior, some templates are incompatible with single-node OpenShift. To ensure compatibility, do not set the evictionStrategy field for templates or VMs that use data volumes or storage profiles.

7.1.3.2. Creating a VM from a template

You can create a virtual machine (VM) from a template with an available boot source by using the OpenShift Container Platform web console.

Optional: You can customize template or VM parameters, such as data sources, cloud-init, or SSH keys, before you start the VM.

Procedure

  1. Navigate to Virtualization Catalog in the web console.
  2. Click Boot source available to filter templates with boot sources.

    The catalog displays the default templates. Click All Items to view all available templates for your filters.

  3. Click a template tile to view its details.
  4. Optional: If you are using a Windows template, you can mount a Windows driver disk by selecting the Mount Windows drivers disk checkbox.
  5. If you do not need to customize the template or VM parameters, click Quick create VirtualMachine to create a VM from the template.

    If you need to customize the template or VM parameters, do the following:

    1. Click Customize VirtualMachine.
    2. Expand Storage or Optional parameters to edit data source settings.
    3. Click Customize VirtualMachine parameters.

      The Customize and create VirtualMachine pane displays the Overview, YAML, Scheduling, Environment, Network interfaces, Disks, Scripts, and Metadata tabs.

    4. Edit the parameters that must be set before the VM boots, such as cloud-init or a static SSH key.
    5. Click Create VirtualMachine.

      The VirtualMachine details page displays the provisioning status.

7.1.3.2.1. Storage volume types
Table 7.2. Storage volume types
Type | Description

ephemeral

A local copy-on-write (COW) image that uses a network volume as a read-only backing store. The backing volume must be a PersistentVolumeClaim. The ephemeral image is created when the virtual machine starts and stores all writes locally. The ephemeral image is discarded when the virtual machine is stopped, restarted, or deleted. The backing volume (PVC) is not mutated in any way.

persistentVolumeClaim

Attaches an available PV to a virtual machine. Attaching a PV allows for the virtual machine data to persist between sessions.

Importing an existing virtual machine disk into a PVC by using CDI and attaching the PVC to a virtual machine instance is the recommended method for importing existing virtual machines into OpenShift Container Platform. There are some requirements for the disk to be used within a PVC.

dataVolume

Data volumes build on the persistentVolumeClaim disk type by managing the process of preparing the virtual machine disk via an import, clone, or upload operation. VMs that use this volume type are guaranteed not to start until the volume is ready.

Specify type: dataVolume or type: "". If you specify any other value for type, such as persistentVolumeClaim, a warning is displayed, and the virtual machine does not start.

cloudInitNoCloud

Attaches a disk that contains the referenced cloud-init NoCloud data source, providing user data and metadata to the virtual machine. A cloud-init installation is required inside the virtual machine disk.

containerDisk

References an image, such as a virtual machine disk, that is stored in the container image registry. The image is pulled from the registry and attached to the virtual machine as a disk when the virtual machine is launched.

A containerDisk volume is not limited to a single virtual machine and is useful for creating large numbers of virtual machine clones that do not require persistent storage.

Only RAW and QCOW2 formats are supported disk types for the container image registry. QCOW2 is recommended for reduced image size.

Note

A containerDisk volume is ephemeral. It is discarded when the virtual machine is stopped, restarted, or deleted. A containerDisk volume is useful for read-only file systems such as CD-ROMs or for disposable virtual machines.

emptyDisk

Creates an additional sparse QCOW2 disk that is tied to the lifecycle of the virtual machine. The data survives guest-initiated reboots in the virtual machine but is discarded when the virtual machine stops or is restarted from the web console. The empty disk is used to store application dependencies and data that otherwise exceeds the limited temporary file system of an ephemeral disk.

The disk capacity size must also be provided.
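Several of these volume types often appear together in a single VM definition. The following sketch of a spec.template.spec.volumes stanza is illustrative only; the disk names, registry path, and cloud-init content are placeholders:

volumes:
  - name: rootdisk
    dataVolume:
      name: example-root-dv
  - name: cloudinitdisk
    cloudInitNoCloud:
      userData: |
        #cloud-config
        password: <password>
        chpasswd: { expire: False }
  - name: installmedia
    containerDisk:
      image: <registry>/<container_disk_name>:latest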

7.1.3.2.2. Storage fields
Field | Description

Blank (creates PVC)

Create an empty disk.

Import via URL (creates PVC)

Import content via URL (HTTP or HTTPS endpoint).

Use an existing PVC

Use a PVC that is already available in the cluster.

Clone existing PVC (creates PVC)

Select an existing PVC available in the cluster and clone it.

Import via Registry (creates PVC)

Import content via container registry.

Container (ephemeral)

Upload content from a container located in a registry accessible from the cluster. The container disk should be used only for read-only filesystems such as CD-ROMs or temporary virtual machines.

Name

Name of the disk. The name can contain lowercase letters (a-z), numbers (0-9), hyphens (-), and periods (.), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, or special characters.

Size

Size of the disk in GiB.

Type

Type of disk. Example: Disk or CD-ROM

Interface

Type of disk device. Supported interfaces are virtIO, SATA, and SCSI.

Storage Class

The storage class that is used to create the disk.

Advanced storage settings

The following advanced storage settings are optional and available for Blank, Import via URL, and Clone existing PVC disks.

If you do not specify these parameters, the system uses the default storage profile values. An example of how these settings map to a data volume follows the table.

Parameter | Option | Description

Volume Mode

Filesystem

Stores the virtual disk on a file system-based volume.

Block

Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it.

Access Mode

ReadWriteOnce (RWO)

Volume can be mounted as read-write by a single node.

ReadWriteMany (RWX)

Volume can be mounted as read-write by many nodes at one time.

Note

This mode is required for live migration.
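As a reference, these settings correspond to fields in the storage stanza of a data volume. The following sketch overrides the storage profile defaults with explicit values; the data volume name and requested size are illustrative:

dataVolumeTemplates:
  - metadata:
      name: example-dv
    spec:
      sourceRef:
        kind: DataSource
        name: rhel9
        namespace: openshift-virtualization-os-images
      storage:
        accessModes:
          - ReadWriteMany # required for live migration
        volumeMode: Block # only if the underlying storage supports it
        resources:
          requests:
            storage: 30Gi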

7.1.3.2.3. Customizing a VM template by using the web console

You can customize an existing virtual machine (VM) template by modifying the VM or template parameters, such as data sources, cloud-init, or SSH keys, before you start the VM. If you customize a template by copying it and including all of its labels and annotations, the customized template is marked as deprecated when a new version of the Scheduling, Scale, and Performance (SSP) Operator is deployed.

You can remove the deprecated designation from the customized template.

Procedure

  1. Navigate to Virtualization → Templates in the web console.
  2. From the list of VM templates, click the template marked as deprecated.
  3. Click Edit next to the pencil icon beside Labels.
  4. Remove the following two labels:

    • template.kubevirt.io/type: "base"
    • template.kubevirt.io/version: "version"
  5. Click Save.
  6. Click the pencil icon beside the number of existing Annotations.
  7. Remove the following annotation:

    • template.kubevirt.io/deprecated
  8. Click Save.

7.1.4. Creating virtual machines from the command line

You can create virtual machines (VMs) from the command line by editing or creating a VirtualMachine manifest. You can simplify VM configuration by using an instance type in your VM manifest.


7.1.4.1. Creating manifests by using the virtctl tool

You can use the virtctl CLI utility to simplify creating manifests for VMs, VM instance types, and VM preferences. For more information, see VM manifest creation commands.

7.1.4.2. Creating a VM from a VirtualMachine manifest

You can create a virtual machine (VM) from a VirtualMachine manifest.

Procedure

  1. Edit the VirtualMachine manifest for your VM. The following example configures a Red Hat Enterprise Linux (RHEL) VM:

    Note

    This example manifest does not configure VM authentication.

    Example manifest for a RHEL VM

     apiVersion: kubevirt.io/v1
     kind: VirtualMachine
     metadata:
      name: rhel-9-minimal
     spec:
      dataVolumeTemplates:
        - metadata:
            name: rhel-9-minimal-volume
          spec:
            sourceRef:
              kind: DataSource
              name: rhel9 1
              namespace: openshift-virtualization-os-images 2
            storage: {}
      instancetype:
        name: u1.medium 3
      preference:
        name: rhel.9 4
      running: true
      template:
        spec:
          domain:
            devices: {}
          volumes:
            - dataVolume:
                name: rhel-9-minimal-volume
              name: rootdisk

    1
    The rhel9 golden image is used to install RHEL 9 as the guest operating system.
    2
    Golden images are stored in the openshift-virtualization-os-images namespace.
    3
    The u1.medium instance type requests 1 vCPU and 4Gi memory for the VM. These resource values cannot be overridden within the VM.
    4
    The rhel.9 preference specifies additional attributes that support the RHEL 9 guest operating system.
  2. Create a virtual machine by using the manifest file:

    $ oc create -f <vm_manifest_file>.yaml
  3. Optional: Start the virtual machine:

    $ virtctl start <vm_name> -n <namespace>

7.2. Creating VMs from custom images

7.2.1. Creating virtual machines from custom images overview

You can create virtual machines (VMs) from custom operating system images by using container disks, by importing images from web pages, by uploading local images, or by cloning persistent volume claims (PVCs), as described in the following sections.

The Containerized Data Importer (CDI) imports the image into a PVC by using a data volume. You add the PVC to the VM by using the OpenShift Container Platform web console or command line.

Important

You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat.

You must also install VirtIO drivers on Windows VMs.

The QEMU guest agent is included with Red Hat images.

7.2.2. Creating VMs by using container disks

You can create virtual machines (VMs) by using container disks built from operating system images.

You can enable auto updates for your container disks. See Managing automatic boot source updates for details.

Important

If the container disks are large, the I/O traffic might increase and cause worker nodes to be unavailable. You can perform the following tasks to resolve this issue:

You create a VM from a container disk by performing the following steps:

  1. Build an operating system image into a container disk and upload it to your container registry.
  2. If your container registry does not have TLS, configure your environment to disable TLS for your registry.
  3. Create a VM with the container disk as the disk source by using the web console or the command line.
Important

You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat.

7.2.2.1. Building and uploading a container disk

You can build a virtual machine (VM) image into a container disk and upload it to a registry.

The size of a container disk is limited by the maximum layer size of the registry where the container disk is hosted.

Note

For Red Hat Quay, you can change the maximum layer size by editing the YAML configuration file that is created when Red Hat Quay is first deployed.

Prerequisites

  • You must have podman installed.
  • You must have a QCOW2 or RAW image file.

Procedure

  1. Create a Dockerfile to build the VM image into a container image. The VM image must be owned by QEMU, which has a UID of 107, and placed in the /disk/ directory inside the container. Permissions for the /disk/ directory must then be set to 0440.

    The following example uses the Red Hat Universal Base Image (UBI) to handle these configuration changes in the first stage, and uses the minimal scratch image in the second stage to store the result:

    $ cat > Dockerfile << EOF
    FROM registry.access.redhat.com/ubi8/ubi:latest AS builder
    ADD --chown=107:107 <vm_image>.qcow2 /disk/ 1
    RUN chmod 0440 /disk/*
    
    FROM scratch
    COPY --from=builder /disk/* /disk/
    EOF
    1
    Where <vm_image> is the image in either QCOW2 or RAW format. If you use a remote image, replace <vm_image>.qcow2 with the complete URL.
  2. Build and tag the container:

    $ podman build -t <registry>/<container_disk_name>:latest .
  3. Push the container image to the registry:

    $ podman push <registry>/<container_disk_name>:latest
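After the push succeeds, the container disk can be referenced from a VM definition as a containerDisk volume. The following fragment is a sketch; the registry path is the one you pushed to and the disk name is illustrative:

volumes:
  - name: rootdisk
    containerDisk:
      image: <registry>/<container_disk_name>:latest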

7.2.2.2. Disabling TLS for a container registry

You can disable TLS (transport layer security) for one or more container registries by editing the insecureRegistries field of the HyperConverged custom resource.

Procedure

  1. Open the HyperConverged CR in your default editor by running the following command:

    $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. Add a list of insecure registries to the spec.storageImport.insecureRegistries field.

    Example HyperConverged custom resource

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      storageImport:
        insecureRegistries: 1
          - "private-registry-example-1:5000"
          - "private-registry-example-2:5000"

    1
    Replace the examples in this list with valid registry hostnames.

7.2.2.3. Creating a VM from a container disk by using the web console

You can create a virtual machine (VM) by importing a container disk from a container registry by using the OpenShift Container Platform web console.

Procedure

  1. Navigate to Virtualization Catalog in the web console.
  2. Click a template tile without an available boot source.
  3. Click Customize VirtualMachine.
  4. On the Customize template parameters page, expand Storage and select Registry (creates PVC) from the Disk source list.
  5. Enter the container image URL. Example: https://mirror.arizona.edu/fedora/linux/releases/38/Cloud/x86_64/images/Fedora-Cloud-Base-38-1.6.x86_64.qcow2
  6. Set the disk size.
  7. Click Next.
  8. Click Create VirtualMachine.

7.2.2.4. Creating a VM from a container disk by using the command line

You can create a virtual machine (VM) from a container disk by using the command line.

When the virtual machine (VM) is created, the data volume with the container disk is imported into persistent storage.

Prerequisites

  • You must have access credentials for the container registry that contains the container disk.

Procedure

  1. Edit the VirtualMachine manifest and save it as a vm-rhel-datavolume.yaml file:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      creationTimestamp: null
      name: vm-rhel-datavolume 1
      labels:
        kubevirt.io/vm: vm-rhel-datavolume
    spec:
      dataVolumeTemplates:
      - metadata:
          creationTimestamp: null
          name: rhel-dv 2
        spec:
          sourceRef:
            kind: DataSource
            name: rhel9
            namespace: openshift-virtualization-os-images
          storage:
            resources:
              requests:
                storage: 10Gi 3
      instancetype:
        name: u1.small 4
      preference:
        inferFromVolume: datavolumedisk1
      runStrategy: Always
      template:
        metadata:
          creationTimestamp: null
          labels:
            kubevirt.io/vm: vm-rhel-datavolume
        spec:
          domain:
            devices: {}
            resources: {}
          terminationGracePeriodSeconds: 180
          volumes:
          - dataVolume:
              name: rhel-dv
            name: datavolumedisk1
    status: {}
    1
    Specify the name of the VM.
    2
    Specify the name of the data volume.
    3
    Specify the size of the storage requested for the data volume.
    4
    Optional: Specify the instance type to use to control resource sizing of the VM.
  2. Create the VM by running the following command:

    $ oc create -f vm-rhel-datavolume.yaml

    The oc create command creates the data volume and the VM. The CDI controller creates an underlying PVC with the correct annotation and the import process begins. When the import is complete, the data volume status changes to Succeeded. You can start the VM.

    Data volume provisioning happens in the background, so there is no need to monitor the process.

Verification

  1. The importer pod downloads the container disk from the specified URL and stores it on the provisioned persistent volume. View the status of the importer pod by running the following command:

    $ oc get pods
  2. Monitor the data volume until its status is Succeeded by running the following command:

    $ oc describe dv rhel-dv 1
    1
    Specify the data volume name that you defined in the VirtualMachine manifest.
  3. Verify that provisioning is complete and that the VM has started by accessing its serial console:

    $ virtctl console vm-rhel-datavolume

7.2.3. Creating VMs by importing images from web pages

You can create virtual machines (VMs) by importing operating system images from web pages.

Important

You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat.

7.2.3.1. Creating a VM from an image on a web page by using the web console

You can create a virtual machine (VM) by importing an image from a web page by using the OpenShift Container Platform web console.

Prerequisites

  • You must have access to the web page that contains the image.

Procedure

  1. Navigate to Virtualization Catalog in the web console.
  2. Click a template tile without an available boot source.
  3. Click Customize VirtualMachine.
  4. On the Customize template parameters page, expand Storage and select URL (creates PVC) from the Disk source list.
  5. Enter the image URL. Example: https://mirror.arizona.edu/fedora/linux/releases/38/Cloud/x86_64/images/Fedora-Cloud-Base-38-1.6.x86_64.qcow2. For RHEL images, copy the download URL of the required image from https://access.redhat.com/downloads/content/69/ver=/rhel---7/7.9/x86_64/product-software.
  6. Set the disk size.
  7. Click Next.
  8. Click Create VirtualMachine.

7.2.3.2. Creating a VM from an image on a web page by using the command line

You can create a virtual machine (VM) from an image on a web page by using the command line.

When the virtual machine (VM) is created, the data volume with the image is imported into persistent storage.

Prerequisites

  • You must have access credentials for the web page that contains the image.

Procedure

  1. Edit the VirtualMachine manifest and save it as a vm-rhel-datavolume.yaml file:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      creationTimestamp: null
      name: vm-rhel-datavolume 1
      labels:
        kubevirt.io/vm: vm-rhel-datavolume
    spec:
      dataVolumeTemplates:
      - metadata:
          creationTimestamp: null
          name: rhel-dv 2
        spec:
          sourceRef:
            kind: DataSource
            name: rhel9
            namespace: openshift-virtualization-os-images
          storage:
            resources:
              requests:
                storage: 10Gi 3
      instancetype:
        name: u1.small 4
      preference:
        inferFromVolume: datavolumedisk1
      runStrategy: Always
      template:
        metadata:
          creationTimestamp: null
          labels:
            kubevirt.io/vm: vm-rhel-datavolume
        spec:
          domain:
            devices: {}
            resources: {}
          terminationGracePeriodSeconds: 180
          volumes:
          - dataVolume:
              name: rhel-dv
            name: datavolumedisk1
    status: {}
    1
    Specify the name of the VM.
    2
    Specify the name of the data volume.
    3
    Specify the size of the storage requested for the data volume.
    4
    Optional: Specify the instance type to use to control resource sizing of the VM.
  2. Create the VM by running the following command:

    $ oc create -f vm-rhel-datavolume.yaml

    The oc create command creates the data volume and the VM. The CDI controller creates an underlying PVC with the correct annotation and the import process begins. When the import is complete, the data volume status changes to Succeeded. You can start the VM.

    Data volume provisioning happens in the background, so there is no need to monitor the process.

Verification

  1. The importer pod downloads the image from the specified URL and stores it on the provisioned persistent volume. View the status of the importer pod by running the following command:

    $ oc get pods
  2. Monitor the data volume until its status is Succeeded by running the following command:

    $ oc describe dv rhel-dv 1
    1
    Specify the data volume name that you defined in the VirtualMachine manifest.
  3. Verify that provisioning is complete and that the VM has started by accessing its serial console:

    $ virtctl console vm-rhel-datavolume

7.2.4. Creating VMs by uploading images

You can create virtual machines (VMs) by uploading operating system images from your local machine.

You can create a Windows VM by uploading a Windows image to a PVC. Then you clone the PVC when you create the VM.

Important

You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat.

You must also install VirtIO drivers on Windows VMs.

7.2.4.1. Creating a VM from an uploaded image by using the web console

You can create a virtual machine (VM) from an uploaded operating system image by using the OpenShift Container Platform web console.

Prerequisites

  • You must have an IMG, ISO, or QCOW2 image file.

Procedure

  1. Navigate to Virtualization Catalog in the web console.
  2. Click a template tile without an available boot source.
  3. Click Customize VirtualMachine.
  4. On the Customize template parameters page, expand Storage and select Upload (Upload a new file to a PVC) from the Disk source list.
  5. Browse to the image on your local machine and set the disk size.
  6. Click Customize VirtualMachine.
  7. Click Create VirtualMachine.
7.2.4.1.1. Generalizing a VM image

You can generalize a Red Hat Enterprise Linux (RHEL) image to remove all system-specific configuration data before you use the image to create a golden image, a preconfigured snapshot of a virtual machine (VM). You can use a golden image to deploy new VMs.

You can generalize a RHEL VM by using the virtctl, guestfs, and virt-sysprep tools.

Prerequisites

  • You have a RHEL virtual machine (VM) to use as a base VM.
  • You have installed the OpenShift CLI (oc).
  • You have installed the virtctl tool.

Procedure

  1. Stop the RHEL VM if it is running, by entering the following command:

    $ virtctl stop <my_vm_name>
  2. Optional: Clone the virtual machine to avoid losing the data from your original VM. You can then generalize the cloned VM.
  3. Retrieve the dataVolume that stores the root filesystem for the VM by running the following command:

    $ oc get vm <my_vm_name> -o jsonpath="{.spec.template.spec.volumes}{'\n'}"

    Example output

    [{"dataVolume":{"name":"<my_vm_volume>"},"name":"rootdisk"},{"cloudInitNoCloud":{...}]

  4. Retrieve the persistent volume claim (PVC) that matches the listed dataVolume by running the following command:

    $ oc get pvc

    Example output

    NAME            STATUS   VOLUME  CAPACITY   ACCESS MODES  STORAGECLASS     AGE
    <my_vm_volume> Bound  …

    Note

    If your cluster configuration does not enable you to clone a VM, to avoid losing the data from your original VM, you can clone the VM PVC to a data volume instead. You can then use the cloned PVC to create a golden image.

    If you are creating a golden image by cloning a PVC, continue with the next steps, using the cloned PVC.

  5. Deploy a new interactive container with libguestfs-tools and attach the PVC to it by running the following command:

    $ virtctl guestfs <my_vm_volume> --uid 107

    This command opens a shell for you to run the next command.

  6. Remove all configurations specific to your system by running the following command:

    $ virt-sysprep -a disk.img
  7. In the OpenShift Container Platform console, click Virtualization Catalog.
  8. Click Add volume.
  9. In the Add volume window:

    1. From the Source type list, select Use existing Volume.
    2. From the Volume project list, select your project.
    3. From the Volume name list, select the correct PVC.
    4. In the Volume name field, enter a name for the new golden image.
    5. From the Preference list, select the RHEL version you are using.
    6. From the Default Instance Type list, select the instance type with the correct CPU and memory requirements for the version of RHEL you selected previously.
    7. Click Save.

The new volume appears in the Select volume to boot from list. This is your new golden image. You can use this volume to create new VMs.


7.2.4.2. Creating a Windows VM

You can create a Windows virtual machine (VM) by uploading a Windows image to a persistent volume claim (PVC) and then cloning the PVC when you create a VM by using the OpenShift Container Platform web console.

Prerequisites

Procedure

  1. Upload the Windows image as a new PVC:

    1. Navigate to Storage → PersistentVolumeClaims in the web console.
    2. Click Create PersistentVolumeClaim → With Data upload form.
    3. Browse to the Windows image and select it.
    4. Enter the PVC name, select the storage class and size and then click Upload.

      The Windows image is uploaded to a PVC.

  2. Configure a new VM by cloning the uploaded PVC:

    1. Navigate to Virtualization Catalog.
    2. Select a Windows template tile and click Customize VirtualMachine.
    3. Select Clone (clone PVC) from the Disk source list.
    4. Select the PVC project, the Windows image PVC, and the disk size.
  3. Apply the answer file to the VM:

    1. Click Customize VirtualMachine parameters.
    2. On the Sysprep section of the Scripts tab, click Edit.
    3. Browse to the autounattend.xml answer file and click Save.
  4. Set the run strategy of the VM:

    1. Clear Start this VirtualMachine after creation so that the VM does not start immediately.
    2. Click Create VirtualMachine.
    3. On the YAML tab, replace running: false with runStrategy: RerunOnFailure and click Save.
  5. Click the options menu kebab and select Start.

    The VM boots from the sysprep disk containing the autounattend.xml answer file.

7.2.4.2.1. Generalizing a Windows VM image

You can generalize a Windows operating system image to remove all system-specific configuration data before you use the image to create a new virtual machine (VM).

Before generalizing the VM, you must ensure the sysprep tool cannot detect an answer file after the unattended Windows installation.

Prerequisites

  • A running Windows VM with the QEMU guest agent installed.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization → VirtualMachines.
  2. Select a Windows VM to open the VirtualMachine details page.
  3. Click Configuration → Disks.
  4. Click the Options menu kebab beside the sysprep disk and select Detach.
  5. Click Detach.
  6. Rename C:\Windows\Panther\unattend.xml to avoid detection by the sysprep tool.
  7. Start the sysprep program by running the following command:

    %WINDIR%\System32\Sysprep\sysprep.exe /generalize /shutdown /oobe /mode:vm
  8. After the sysprep tool completes, the Windows VM shuts down. The disk image of the VM is now available to use as an installation image for Windows VMs.

You can now specialize the VM.

7.2.4.2.2. Specializing a Windows VM image

Specializing a Windows virtual machine (VM) configures the computer-specific information from a generalized Windows image onto the VM.

Prerequisites

  • You must have a generalized Windows disk image.
  • You must create an unattend.xml answer file. See the Microsoft documentation for details.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization Catalog.
  2. Select a Windows template and click Customize VirtualMachine.
  3. Select PVC (clone PVC) from the Disk source list.
  4. Select the PVC project and PVC name of the generalized Windows image.
  5. Click Customize VirtualMachine parameters.
  6. Click the Scripts tab.
  7. In the Sysprep section, click Edit, browse to the unattend.xml answer file, and click Save.
  8. Click Create VirtualMachine.

During the initial boot, Windows uses the unattend.xml answer file to specialize the VM. The VM is now ready to use.

7.2.4.3. Creating a VM from an uploaded image by using the command line

You can upload an operating system image by using the virtctl command line tool. You can use an existing data volume or create a new data volume for the image.

Prerequisites

  • You must have an ISO, IMG, or QCOW2 operating system image file.
  • For best performance, compress the image file by using the virt-sparsify tool or the xz or gzip utilities.
  • You must have virtctl installed.
  • The client machine must be configured to trust the OpenShift Container Platform router’s certificate.

Procedure

  1. Upload the image by running the virtctl image-upload command:

    $ virtctl image-upload dv <datavolume_name> \ 1
      --size=<datavolume_size> \ 2
      --image-path=</path/to/image> 3
    1
    The name of the data volume.
    2
    The size of the data volume. For example: --size=500Mi, --size=1G
    3
    The file path of the image.
    Note
    • If you do not want to create a new data volume, omit the --size parameter and include the --no-create flag.
    • When uploading a disk image to a PVC, the PVC size must be larger than the size of the uncompressed virtual disk.
    • To allow insecure server connections when using HTTPS, use the --insecure parameter. When you use the --insecure flag, the authenticity of the upload endpoint is not verified.
  2. Optional. To verify that a data volume was created, view all data volumes by running the following command:

    $ oc get dvs

7.2.5. Installing the QEMU guest agent and VirtIO drivers

The QEMU guest agent is a daemon that runs on the virtual machine (VM) and passes information to the host about the VM, users, file systems, and secondary networks.

You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat.

7.2.5.1. Installing the QEMU guest agent

7.2.5.1.1. Installing the QEMU guest agent on a Linux VM

The qemu-guest-agent package is widely available and is included by default in Red Hat Enterprise Linux (RHEL) virtual machines (VMs). Install the agent and start the service.

Note

To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent.

The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI.

Procedure

  1. Log in to the VM by using a console or SSH.
  2. Install the QEMU guest agent by running the following command:

    $ yum install -y qemu-guest-agent
  3. Ensure the service is persistent and start it:

    $ systemctl enable --now qemu-guest-agent

Verification

  • Run the following command to verify that AgentConnected is listed in the VM status:

    $ oc get vm <vm_name>
7.2.5.1.2. Installing the QEMU guest agent on a Windows VM

For Windows virtual machines (VMs), the QEMU guest agent is included in the VirtIO drivers. You can install the drivers during a Windows installation or on an existing Windows VM.

Note

To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent.

The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI.

Procedure

  1. In the Windows guest operating system, use the File Explorer to navigate to the guest-agent directory in the virtio-win CD drive.
  2. Run the qemu-ga-x86_64.msi installer.

Verification

  1. Obtain a list of network services by running the following command:

    $ net start
  2. Verify that the output contains the QEMU Guest Agent.

7.2.5.2. Installing VirtIO drivers on Windows VMs

VirtIO drivers are paravirtualized device drivers required for Microsoft Windows virtual machines (VMs) to run in OpenShift Virtualization. The drivers are shipped with the rest of the images and do not require a separate download.

The container-native-virtualization/virtio-win container disk must be attached to the VM as a SATA CD drive to enable driver installation. You can install VirtIO drivers during a Windows installation or add them to an existing Windows installation.

After the drivers are installed, the container-native-virtualization/virtio-win container disk can be removed from the VM.

Table 7.3. Supported drivers

Driver name | Hardware ID | Description
viostor | VEN_1AF4&DEV_1001, VEN_1AF4&DEV_1042 | The block driver. Sometimes labeled as a SCSI Controller in the Other devices group.
viorng | VEN_1AF4&DEV_1005, VEN_1AF4&DEV_1044 | The entropy source driver. Sometimes labeled as a PCI Device in the Other devices group.
NetKVM | VEN_1AF4&DEV_1000, VEN_1AF4&DEV_1041 | The network driver. Sometimes labeled as an Ethernet Controller in the Other devices group. Available only if a VirtIO NIC is configured.

7.2.5.2.1. Attaching VirtIO container disk to Windows VMs during installation

You must attach the VirtIO container disk to the Windows VM to install the necessary Windows drivers. This can be done during creation of the VM.

Procedure

  1. When creating a Windows VM from a template, click Customize VirtualMachine.
  2. Select Mount Windows drivers disk.
  3. Click Customize VirtualMachine parameters.
  4. Click Create VirtualMachine.

After the VM is created, the virtio-win SATA CD disk is attached to the VM.

7.2.5.2.2. Attaching VirtIO container disk to an existing Windows VM

You must attach the VirtIO container disk to the Windows VM to install the necessary Windows drivers. This can be done to an existing VM.

Procedure

  1. Navigate to the existing Windows VM, and click Actions → Stop.
  2. Go to VM Details → Configuration → Disks and click Add disk.
  3. Add windows-driver-disk from container source, set the Type to CD-ROM, and then set the Interface to SATA.
  4. Click Save.
  5. Start the VM, and connect to a graphical console.
7.2.5.2.3. Installing VirtIO drivers during Windows installation

You can install the VirtIO drivers while installing Windows on a virtual machine (VM).

Note

This procedure uses a generic approach to the Windows installation and the installation method might differ between versions of Windows. See the documentation for the version of Windows that you are installing.

Prerequisites

  • A storage device containing the virtio drivers must be attached to the VM.

Procedure

  1. In the Windows operating system, use the File Explorer to navigate to the virtio-win CD drive.
  2. Double-click the drive to run the appropriate installer for your VM.

    For a 64-bit vCPU, select the virtio-win-gt-x64 installer. 32-bit vCPUs are no longer supported.

  3. Optional: During the Custom Setup step of the installer, select the device drivers you want to install. The recommended driver set is selected by default.
  4. After the installation is complete, select Finish.
  5. Reboot the VM.

Verification

  1. Open the system disk on the PC. This is typically C:.
  2. Navigate to Program Files\Virtio-Win.

If the Virtio-Win directory is present and contains a sub-directory for each driver, the installation was successful.

7.2.5.2.4. Installing VirtIO drivers from a SATA CD drive on an existing Windows VM

You can install the VirtIO drivers from a SATA CD drive on an existing Windows virtual machine (VM).

Note

This procedure uses a generic approach to adding drivers to Windows. See the installation documentation for your version of Windows for specific installation steps.

Prerequisites

  • A storage device containing the virtio drivers must be attached to the VM as a SATA CD drive.

Procedure

  1. Start the VM and connect to a graphical console.
  2. Log in to a Windows user session.
  3. Open Device Manager and expand Other devices to list any Unknown device.

    1. Open the Device Properties to identify the unknown device.
    2. Right-click the device and select Properties.
    3. Click the Details tab and select Hardware Ids in the Property list.
    4. Compare the Value for the Hardware Ids with the supported VirtIO drivers.
  4. Right-click the device and select Update Driver Software.
  5. Click Browse my computer for driver software and browse to the attached SATA CD drive, where the VirtIO drivers are located. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture.
  6. Click Next to install the driver.
  7. Repeat this process for all the necessary VirtIO drivers.
  8. After the driver installs, click Close to close the window.
  9. Reboot the VM to complete the driver installation.
7.2.5.2.5. Installing VirtIO drivers from a container disk added as a SATA CD drive

You can install VirtIO drivers from a container disk that you add to a Windows virtual machine (VM) as a SATA CD drive.

Tip

Downloading the container-native-virtualization/virtio-win container disk from the Red Hat Ecosystem Catalog is not mandatory, because the container disk is downloaded from the Red Hat registry if it is not already present in the cluster. However, downloading reduces the installation time.

Prerequisites

  • You must have access to the Red Hat registry or to the downloaded container-native-virtualization/virtio-win container disk in a restricted environment.

Procedure

  1. Add the container-native-virtualization/virtio-win container disk as a CD drive by editing the VirtualMachine manifest:

    # ...
    spec:
      domain:
        devices:
          disks:
            - name: virtiocontainerdisk
              bootOrder: 2 1
              cdrom:
                bus: sata
      volumes:
        - containerDisk:
            image: container-native-virtualization/virtio-win
          name: virtiocontainerdisk
    1
    OpenShift Virtualization boots the VM disks in the order defined in the VirtualMachine manifest. You can either define other VM disks that boot before the container-native-virtualization/virtio-win container disk or use the optional bootOrder parameter to ensure the VM boots from the correct disk. If you configure the boot order for a disk, you must configure the boot order for the other disks.
  2. Apply the changes:

    • If the VM is not running, run the following command:

      $ virtctl start <vm> -n <namespace>
    • If the VM is running, reboot the VM or run the following command:

      $ oc apply -f <vm.yaml>
  3. After the VM has started, install the VirtIO drivers from the SATA CD drive.

7.2.5.3. Updating VirtIO drivers

7.2.5.3.1. Updating VirtIO drivers on a Windows VM

Update the virtio drivers on a Windows virtual machine (VM) by using the Windows Update service.

Prerequisites

  • The cluster must be connected to the internet. Disconnected clusters cannot reach the Windows Update service.

Procedure

  1. In the Windows Guest operating system, click the Windows key and select Settings.
  2. Navigate to Windows Update → Advanced Options → Optional Updates.
  3. Install all updates from Red Hat, Inc.
  4. Reboot the VM.

Verification

  1. On the Windows VM, navigate to the Device Manager.
  2. Select a device.
  3. Select the Driver tab.
  4. Click Driver Details and confirm that the correct virtio driver version is displayed.

7.2.6. Cloning VMs

You can clone virtual machines (VMs) or create new VMs from snapshots.

7.2.6.1. Cloning a VM by using the web console

You can clone an existing VM by using the web console.

Procedure

  1. Navigate to Virtualization → VirtualMachines in the web console.
  2. Select a VM to open the VirtualMachine details page.
  3. Click Actions.
  4. Select Clone.
  5. On the Clone VirtualMachine page, enter the name of the new VM.
  6. (Optional) Select the Start cloned VM checkbox to start the cloned VM.
  7. Click Clone.

7.2.6.2. Creating a VM from an existing snapshot by using the web console

You can create a new VM by copying an existing snapshot.

Procedure

  1. Navigate to Virtualization → VirtualMachines in the web console.
  2. Select a VM to open the VirtualMachine details page.
  3. Click the Snapshots tab.
  4. Click the actions menu kebab for the snapshot you want to copy.
  5. Select Create VirtualMachine.
  6. Enter the name of the virtual machine.
  7. (Optional) Select the Start this VirtualMachine after creation checkbox to start the new virtual machine.
  8. Click Create.


7.2.7. Creating VMs by cloning PVCs

You can create virtual machines (VMs) by cloning existing persistent volume claims (PVCs) with custom images.

You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat.

You clone a PVC by creating a data volume that references a source PVC.

7.2.7.1. About cloning

When cloning a data volume, the Containerized Data Importer (CDI) chooses one of the following Container Storage Interface (CSI) clone methods:

  • CSI volume cloning
  • Smart cloning

Both CSI volume cloning and smart cloning methods are efficient, but they have certain requirements for use. If the requirements are not met, the CDI uses host-assisted cloning. Host-assisted cloning is the slowest and least efficient method of cloning, but it has fewer requirements than either of the other two cloning methods.
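To see which clone strategy applies to a given storage class, you can inspect the corresponding storage profile and check its cloneStrategy field. This is a sketch; the field can be empty when CDI relies on its default selection logic:

$ oc get storageprofile <storage_class_name> -o yaml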

7.2.7.1.1. CSI volume cloning

Container Storage Interface (CSI) cloning uses CSI driver features to more efficiently clone a source data volume.

CSI volume cloning has the following requirements:

  • The CSI driver that backs the storage class of the persistent volume claim (PVC) must support volume cloning.
  • For provisioners not recognized by the CDI, the corresponding storage profile must have the cloneStrategy set to CSI Volume Cloning.
  • The source and target PVCs must have the same storage class and volume mode.
  • If you create the data volume, you must have permission to create the datavolumes/source resource in the source namespace.
  • The source volume must not be in use.
7.2.7.1.2. Smart cloning

When a Container Storage Interface (CSI) plugin with snapshot capabilities is available, the Containerized Data Importer (CDI) creates a persistent volume claim (PVC) from a snapshot, which then allows efficient cloning of additional PVCs.

Smart cloning has the following requirements:

  • A snapshot class associated with the storage class must exist.
  • The source and target PVCs must have the same storage class and volume mode.
  • If you create the data volume, you must have permission to create the datavolumes/source resource in the source namespace.
  • The source volume must not be in use.

7.2.7.1.3. Host-assisted cloning

When the requirements for neither Container Storage Interface (CSI) volume cloning nor smart cloning have been met, host-assisted cloning is used as a fallback method. Host-assisted cloning is less efficient than either of the two other cloning methods.

Host-assisted cloning uses a source pod and a target pod to copy data from the source volume to the target volume. The target persistent volume claim (PVC) is annotated with the fallback reason that explains why host-assisted cloning has been used, and an event is created.

Example PVC target annotation

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    cdi.kubevirt.io/cloneFallbackReason: The volume modes of source and target are incompatible
    cdi.kubevirt.io/clonePhase: Succeeded
    cdi.kubevirt.io/cloneType: copy

Example event

NAMESPACE   LAST SEEN   TYPE      REASON                    OBJECT                              MESSAGE
test-ns     0s          Warning   IncompatibleVolumeModes   persistentvolumeclaim/test-target   The volume modes of source and target are incompatible

7.2.7.2. Creating a VM from a PVC by using the web console

You can create a virtual machine (VM) by cloning a persistent volume claim (PVC) by using the OpenShift Container Platform web console.

Prerequisites

  • You must have access to the namespace that contains the source PVC.

Procedure

  1. Navigate to Virtualization Catalog in the web console.
  2. Click a template tile without an available boot source.
  3. Click Customize VirtualMachine.
  4. On the Customize template parameters page, expand Storage and select PVC (clone PVC) from the Disk source list.
  5. Select the PVC project and the PVC name.
  6. Set the disk size.
  7. Click Next.
  8. Click Create VirtualMachine.

7.2.7.3. Creating a VM from a PVC by using the command line

You can create a virtual machine (VM) by cloning the persistent volume claim (PVC) of an existing VM by using the command line.

You can clone a PVC by using one of the following options:

  • Cloning a PVC to a new data volume.

    This method creates a data volume whose lifecycle is independent of the original VM. Deleting the original VM does not affect the new data volume or its associated PVC.

  • Cloning a PVC by creating a VirtualMachine manifest with a dataVolumeTemplates stanza.

    This method creates a data volume whose lifecycle is dependent on the original VM. Deleting the original VM deletes the cloned data volume and its associated PVC.

7.2.7.3.1. Cloning a PVC to a data volume

You can clone the persistent volume claim (PVC) of an existing virtual machine (VM) disk to a data volume by using the command line.

You create a data volume that references the original source PVC. The lifecycle of the new data volume is independent of the original VM. Deleting the original VM does not affect the new data volume or its associated PVC.

Cloning between different volume modes is supported for host-assisted cloning, such as cloning from a block persistent volume (PV) to a file system PV, as long as the source and target PVs belong to the kubevirt content type.

Note

Smart-cloning is faster and more efficient than host-assisted cloning because it uses snapshots to clone PVCs. Smart-cloning is supported by storage providers that support snapshots, such as Red Hat OpenShift Data Foundation.

Cloning between different volume modes is not supported for smart-cloning.

Prerequisites

  • The VM with the source PVC must be powered down.
  • If you clone a PVC to a different namespace, you must have permissions to create resources in the target namespace.
  • Additional prerequisites for smart-cloning:

    • Your storage provider must support snapshots.
    • The source and target PVCs must have the same storage provider and volume mode.
    • The value of the driver key of the VolumeSnapshotClass object must match the value of the provisioner key of the StorageClass object as shown in the following example:

      Example VolumeSnapshotClass object

      kind: VolumeSnapshotClass
      apiVersion: snapshot.storage.k8s.io/v1
      driver: openshift-storage.rbd.csi.ceph.com
      # ...

      Example StorageClass object

      kind: StorageClass
      apiVersion: storage.k8s.io/v1
      # ...
      provisioner: openshift-storage.rbd.csi.ceph.com

Procedure

  1. Create a DataVolume manifest as shown in the following example:

    apiVersion: cdi.kubevirt.io/v1beta1
    kind: DataVolume
    metadata:
      name: <datavolume> 1
    spec:
      source:
        pvc:
          namespace: "<source_namespace>" 2
          name: "<my_vm_disk>" 3
      storage: {}
    1
    Specify the name of the new data volume.
    2
    Specify the namespace of the source PVC.
    3
    Specify the name of the source PVC.
  2. Create the data volume by running the following command:

    $ oc create -f <datavolume>.yaml
    Note

    Data volumes prevent a VM from starting before the PVC is prepared. You can create a VM that references the new data volume while the PVC is being cloned.

7.2.7.3.2. Creating a VM from a cloned PVC by using a data volume template

You can create a virtual machine (VM) that clones the persistent volume claim (PVC) of an existing VM by using a data volume template.

This method creates a data volume whose lifecycle is dependent on the original VM. Deleting the original VM deletes the cloned data volume and its associated PVC.

Prerequisites

  • The VM with the source PVC must be powered down.

Procedure

  1. Create a VirtualMachine manifest as shown in the following example:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      labels:
        kubevirt.io/vm: vm-dv-clone
      name: vm-dv-clone 1
    spec:
      running: false
      template:
        metadata:
          labels:
            kubevirt.io/vm: vm-dv-clone
        spec:
          domain:
            devices:
              disks:
              - disk:
                  bus: virtio
                name: root-disk
            resources:
              requests:
                memory: 64M
          volumes:
          - dataVolume:
              name: favorite-clone
            name: root-disk
      dataVolumeTemplates:
      - metadata:
          name: favorite-clone
        spec:
          storage:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: 2Gi
          source:
            pvc:
              namespace: <source_namespace> 2
              name: "<source_pvc>" 3
    1
    Specify the name of the VM.
    2
    Specify the namespace of the source PVC.
    3
    Specify the name of the source PVC.
  2. Create the virtual machine with the PVC-cloned data volume:

    $ oc create -f <vm-clone-datavolumetemplate>.yaml

7.3. Connecting to virtual machine consoles

You can connect to the following consoles to access running virtual machines (VMs):

  • VNC console
  • Serial console
  • Desktop viewer for Windows VMs, which uses the Remote Desktop Protocol (RDP)

7.3.1. Connecting to the VNC console

You can connect to the VNC console of a virtual machine by using the OpenShift Container Platform web console or the virtctl command line tool.

7.3.1.1. Connecting to the VNC console by using the web console

You can connect to the VNC console of a virtual machine (VM) by using the OpenShift Container Platform web console.

Note

If you connect to a Windows VM with a vGPU assigned as a mediated device, you can switch between the default display and the vGPU display.

Procedure

  1. On the Virtualization VirtualMachines page, click a VM to open the VirtualMachine details page.
  2. Click the Console tab. The VNC console session starts automatically.
  3. Optional: To switch to the vGPU display of a Windows VM, select Ctrl + Alt + 2 from the Send key list.

    • Select Ctrl + Alt + 1 from the Send key list to restore the default display.
  4. To end the console session, click outside the console pane and then click Disconnect.

7.3.1.2. Connecting to the VNC console by using virtctl

You can use the virtctl command line tool to connect to the VNC console of a running virtual machine.

Note

If you run the virtctl vnc command on a remote machine over an SSH connection, you must forward the X session to your local machine by running the ssh command with the -X or -Y flags.

Prerequisites

  • You must install the virt-viewer package.

Procedure

  1. Run the following command to start the console session:

    $ virtctl vnc <vm_name>
  2. If the connection fails, run the following command to collect troubleshooting information:

    $ virtctl vnc <vm_name> -v 4

7.3.1.3. Generating a temporary token for the VNC console

To access the VNC console of a virtual machine (VM), generate a temporary authentication bearer token for the Kubernetes API.

Note

Kubernetes also supports authentication using client certificates, instead of a bearer token, by modifying the curl command.

Prerequisites

  • A running VM with OpenShift Virtualization 4.14 or later and ssp-operator 4.14 or later

Procedure

  1. Enable the feature gate in the HyperConverged (HCO) custom resource (CR):

    $ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{"op": "replace", "path": "/spec/featureGates/deployVmConsoleProxy", "value": true}]'
  2. Generate a token by entering the following command:

    $ curl --header "Authorization: Bearer ${TOKEN}" \
         "https://api.<cluster_fqdn>/apis/token.kubevirt.io/v1alpha1/namespaces/<namespace>/virtualmachines/<vm_name>/vnc?duration=<duration>"

    The <duration> parameter can be set in hours and minutes, with a minimum duration of 10 minutes. For example: 5h30m. If this parameter is not set, the token is valid for 10 minutes by default.

    Sample output:

    { "token": "eyJhb..." }
  3. Optional: Use the token provided in the output to create a variable:

    $ export VNC_TOKEN="<token>"

You can now use the token to access the VNC console of a VM.
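
The ${TOKEN} variable in step 2 must hold a bearer token for an identity that is allowed to generate VNC tokens. A minimal sketch, assuming your current oc session has that permission and that you want a token that is valid for 4 hours:

$ export TOKEN=$(oc whoami -t)
$ curl --header "Authorization: Bearer ${TOKEN}" \
     "https://api.<cluster_fqdn>/apis/token.kubevirt.io/v1alpha1/namespaces/<namespace>/virtualmachines/<vm_name>/vnc?duration=4h"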

Verification

  1. Log in to the cluster by entering the following command:

    $ oc login --token ${VNC_TOKEN}
  2. Test access to the VNC console of the VM by using the virtctl command:

    $ virtctl vnc <vm_name> -n <namespace>

Warning

It is currently not possible to revoke a specific token.

To revoke a token, you must delete the service account that was used to create it. However, this also revokes all other tokens that were created by using the service account. Use the following command with caution:

$ kubectl delete serviceaccount --namespace "<namespace>" "<vm_name>-vnc-access"

7.3.1.3.1. Granting token generation permission for the VNC console by using the cluster role

As a cluster administrator, you can install a cluster role and bind it to a user or service account to allow access to the endpoint that generates tokens for the VNC console.

Procedure

  • Choose to bind the cluster role to either a user or service account.

    • Run the following command to bind the cluster role to a user:

      $ kubectl create rolebinding "${ROLE_BINDING_NAME}" --clusterrole="token.kubevirt.io:generate" --user="${USER_NAME}"
    • Run the following command to bind the cluster role to a service account:

      $ kubectl create rolebinding "${ROLE_BINDING_NAME}" --clusterrole="token.kubevirt.io:generate" --serviceaccount="${SERVICE_ACCOUNT_NAME}"

7.3.2. Connecting to the serial console

You can connect to the serial console of a virtual machine by using the OpenShift Container Platform web console or the virtctl command line tool.

Note

Running concurrent VNC connections to a single virtual machine is not currently supported.

7.3.2.1. Connecting to the serial console by using the web console

You can connect to the serial console of a virtual machine (VM) by using the OpenShift Container Platform web console.

Procedure

  1. On the Virtualization VirtualMachines page, click a VM to open the VirtualMachine details page.
  2. Click the Console tab. The VNC console session starts automatically.
  3. Click Disconnect to end the VNC console session. Otherwise, the VNC console session continues to run in the background.
  4. Select Serial console from the console list.
  5. To end the console session, click outside the console pane and then click Disconnect.

7.3.2.2. Connecting to the serial console by using virtctl

You can use the virtctl command line tool to connect to the serial console of a running virtual machine.

Procedure

  1. Run the following command to start the console session:

    $ virtctl console <vm_name>
  2. Press Ctrl+] to end the console session.

7.3.3. Connecting to the desktop viewer

You can connect to a Windows virtual machine (VM) by using the desktop viewer and the Remote Desktop Protocol (RDP).

7.3.3.1. Connecting to the desktop viewer by using the web console

You can connect to the desktop viewer of a Windows virtual machine (VM) by using the OpenShift Container Platform web console.

Prerequisites

  • You installed the QEMU guest agent on the Windows VM.
  • You have an RDP client installed.

Procedure

  1. On the Virtualization VirtualMachines page, click a VM to open the VirtualMachine details page.
  2. Click the Console tab. The VNC console session starts automatically.
  3. Click Disconnect to end the VNC console session. Otherwise, the VNC console session continues to run in the background.
  4. Select Desktop viewer from the console list.
  5. Click Create RDP Service to open the RDP Service dialog.
  6. Select Expose RDP Service and click Save to create a node port service.
  7. Click Launch Remote Desktop to download an .rdp file and launch the desktop viewer.

7.4. Specifying an instance type or preference

You can specify an instance type, a preference, or both to define a set of workload sizing and runtime characteristics for reuse across multiple VMs.

7.4.1. Using flags to specify instance types and preferences

Specify instance types and preferences by using flags.

Prerequisites

  • You must have an instance type, preference, or both on the cluster.

Procedure

  1. To specify an instance type when creating a VM, use the --instancetype flag. To specify a preference, use the --preference flag. The following example includes both flags:

    $ virtctl create vm --instancetype <my_instancetype> --preference <my_preference>
  2. Optional: To specify a namespaced instance type or preference, include the kind in the value passed to the --instancetype or --preference flag. The namespaced instance type or preference must be in the same namespace in which you create the VM. The following example includes flags for a namespaced instance type and a namespaced preference:

    $ virtctl create vm --instancetype virtualmachineinstancetype/<my_instancetype> --preference virtualmachinepreference/<my_preference>
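
Note that virtctl create vm writes a VirtualMachine manifest to standard output rather than creating the VM directly. A minimal end-to-end sketch, assuming that the u1.medium cluster instance type and the rhel.9 cluster preference exist in your cluster and that <my_vm> is a name of your choice:

$ virtctl create vm --name <my_vm> --instancetype u1.medium --preference rhel.9 | oc create -f -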

7.4.2. Inferring an instance type or preference

Inferring instance types, preferences, or both is enabled by default, and the inferFromVolumeFailure policy of the inferFromVolume attribute is set to Ignore. When inferring from the boot volume, errors are ignored, and the VM is created with the instance type and preference left unset.

However, when flags are applied, the inferFromVolumeFailure policy defaults to Reject. When inferring from the boot volume, errors result in the rejection of the creation of that VM.

You can use the --infer-instancetype and --infer-preference flags to infer which instance type, preference, or both to use to define the workload sizing and runtime characteristics of a VM.

Prerequisites

  • You have installed the virtctl tool.

Procedure

  • To explicitly infer instance types from the volume used to boot the virtual machine, use the --infer-instancetype flag. To explicitly infer preferences, use the --infer-preference flag. The following command includes both flags:

    $ virtctl create vm --volume-import type:pvc,src:my-ns/my-pvc --infer-instancetype --infer-preference

7.4.3. Setting the inferFromVolume labels

Use the following labels on your PVC, data source, or data volume to instruct the inference mechanism which instance type, preference, or both to use when trying to boot from a volume.

  • A cluster-wide instance type: instancetype.kubevirt.io/default-instancetype label.
  • A namespaced instance type: instancetype.kubevirt.io/default-instancetype-kind label. Defaults to VirtualMachineClusterInstancetype if left empty.
  • A cluster-wide preference: instancetype.kubevirt.io/default-preference label.
  • A namespaced preference: instancetype.kubevirt.io/default-preference-kind label. Defaults to VirtualMachineClusterPreference if left empty.

Prerequisites

  • You must have an instance type, preference, or both on the cluster.

Procedure

  • To apply a label to a data source, use oc label. The following command applies a label that points to a cluster-wide instance type:

    $ oc label DataSource foo instancetype.kubevirt.io/default-instancetype=<my_instancetype>
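
To point the inference mechanism at a namespaced preference instead, a minimal sketch, assuming a VirtualMachinePreference named <my_preference> exists in the namespace of the boot volume:

$ oc label DataSource foo instancetype.kubevirt.io/default-preference=<my_preference>
$ oc label DataSource foo instancetype.kubevirt.io/default-preference-kind=VirtualMachinePreference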

7.5. Configuring SSH access to virtual machines

You can configure SSH access to virtual machines (VMs) by using the following methods:

  • virtctl ssh command

    You create an SSH key pair, add the public key to a VM, and connect to the VM by running the virtctl ssh command with the private key.

    You can add public SSH keys to Red Hat Enterprise Linux (RHEL) 9 VMs at runtime or at first boot to VMs with guest operating systems that can be configured by using a cloud-init data source.

  • virtctl port-forward command

    You add the virtctl port-forward command to your .ssh/config file and connect to the VM by using OpenSSH.

  • Service

    You create a service, associate the service with the VM, and connect to the IP address and port exposed by the service.

  • Secondary network

    You configure a secondary network, attach a virtual machine (VM) to the secondary network interface, and connect to the DHCP-allocated IP address.

7.5.1. Access configuration considerations

Each method for configuring access to a virtual machine (VM) has advantages and limitations, depending on the traffic load and client requirements.

Services provide excellent performance and are recommended for applications that are accessed from outside the cluster.

If the internal cluster network cannot handle the traffic load, you can configure a secondary network.

virtctl ssh and virtctl port-forwarding commands
  • Simple to configure.
  • Recommended for troubleshooting VMs.
  • virtctl port-forwarding recommended for automated configuration of VMs with Ansible.
  • Dynamic public SSH keys can be used to provision VMs with Ansible.
  • Not recommended for high-traffic applications like Rsync or Remote Desktop Protocol because of the burden on the API server.
  • The API server must be able to handle the traffic load.
  • The clients must be able to access the API server.
  • The clients must have access credentials for the cluster.

Cluster IP service
  • The internal cluster network must be able to handle the traffic load.
  • The clients must be able to access an internal cluster IP address.

Node port service
  • The internal cluster network must be able to handle the traffic load.
  • The clients must be able to access at least one node.

Load balancer service
  • A load balancer must be configured.
  • Each node must be able to handle the traffic load of one or more load balancer services.

Secondary network
  • Excellent performance because traffic does not go through the internal cluster network.
  • Allows a flexible approach to network topology.
  • Guest operating system must be configured with appropriate security because the VM is exposed directly to the secondary network. If a VM is compromised, an intruder could gain access to the secondary network.

7.5.2. Using virtctl ssh

You can add a public SSH key to a virtual machine (VM) and connect to the VM by running the virtctl ssh command.

This method is simple to configure. However, it is not recommended for high traffic loads because it places a burden on the API server.

7.5.2.1. About static and dynamic SSH key management

You can add public SSH keys to virtual machines (VMs) statically at first boot or dynamically at runtime.

Note

Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection.

Static SSH key management

You can add a statically managed SSH key to a VM with a guest operating system that supports configuration by using a cloud-init data source. The key is added to the virtual machine (VM) at first boot.

You can add the key by using one of the following methods:

  • Add a key to a single VM when you create it by using the web console or the command line.
  • Add a key to a project by using the web console. Afterwards, the key is automatically added to the VMs that you create in this project.

Use cases

  • As a VM owner, you can provision all your newly created VMs with a single key.

Dynamic SSH key management

You can enable dynamic SSH key management for a VM with Red Hat Enterprise Linux (RHEL) 9 installed. Afterwards, you can update the key during runtime. The key is added by the QEMU guest agent, which is installed with Red Hat boot sources.

You can disable dynamic key management for security reasons. Then, the VM inherits the key management setting of the image from which it was created.

Use cases

  • Granting or revoking access to VMs: As a cluster administrator, you can grant or revoke remote VM access by adding or removing the keys of individual users from a Secret object that is applied to all VMs in a namespace.
  • User access: You can add your access credentials to all VMs that you create and manage.
  • Ansible provisioning:

    • As an operations team member, you can create a single secret that contains all the keys used for Ansible provisioning.
    • As a VM owner, you can create a VM and attach the keys used for Ansible provisioning.
  • Key rotation:

    • As a cluster administrator, you can rotate the Ansible provisioner keys used by VMs in a namespace.
    • As a workload owner, you can rotate the key for the VMs that you manage.

7.5.2.2. Static key management

You can add a statically managed public SSH key when you create a virtual machine (VM) by using the OpenShift Container Platform web console or the command line. The key is added as a cloud-init data source when the VM boots for the first time.

You can also add a public SSH key to a project when you create a VM by using the web console. The key is saved as a secret and is added automatically to all VMs that you create.

Note

If you add a secret to a project and then delete the VM, the secret is retained because it is a namespace resource. You must delete the secret manually.

7.5.2.2.1. Adding a key when creating a VM from a template

You can add a statically managed public SSH key when you create a virtual machine (VM) by using the OpenShift Container Platform web console. The key is added to the VM as a cloud-init data source at first boot. This method does not affect cloud-init user data.

Optional: You can add a key to a project. Afterwards, this key is added automatically to VMs that you create in the project.

Prerequisites

  • You generated an SSH key pair by running the ssh-keygen command.

Procedure

  1. Navigate to Virtualization Catalog in the web console.
  2. Click a template tile.

    The guest operating system must support configuration from a cloud-init data source.

  3. Click Customize VirtualMachine.
  4. Click Next.
  5. Click the Scripts tab.
  6. If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key and select one of the following options:

    • Use existing: Select a secret from the secrets list.
    • Add new:

      1. Browse to the SSH key file or paste the file in the key field.
      2. Enter the secret name.
      3. Optional: Select Automatically apply this key to any new VirtualMachine you create in this project.
  7. Click Save.
  8. Click Create VirtualMachine.

    The VirtualMachine details page displays the progress of the VM creation.

Verification

  • Click the Scripts tab on the Configuration tab.

    The secret name is displayed in the Authorized SSH key section.

7.5.2.2.2. Adding a key when creating a VM from an instance type by using the web console

You can create a virtual machine (VM) from an instance type by using the OpenShift Container Platform web console. You can also use the web console to create a VM by copying an existing snapshot or to clone a VM.

You can create a VM from a list of available bootable volumes. You can add Linux- or Windows-based volumes to the list.

You can add a statically managed SSH key when you create a virtual machine (VM) from an instance type by using the OpenShift Container Platform web console. The key is added to the VM as a cloud-init data source at first boot. This method does not affect cloud-init user data.

Procedure

  1. In the web console, navigate to Virtualization Catalog.

    The InstanceTypes tab opens by default.

  2. Select either of the following options:

    • Select a suitable bootable volume from the list. If the list is truncated, click the Show all button to display the entire list.

      Note

      The bootable volume table lists only those volumes in the openshift-virtualization-os-images namespace that have the instancetype.kubevirt.io/default-preference label.

      • Optional: Click the star icon to designate a bootable volume as a favorite. Starred bootable volumes appear first in the volume list.
    • Click Add volume to upload a new volume or to use an existing persistent volume claim (PVC), a volume snapshot, or a containerDisk volume. Click Save.

      Logos of operating systems that are not available in the cluster are shown at the bottom of the list. You can add a volume for the required operating system by clicking the Add volume link.

      In addition, there is a link to the Create a Windows boot source quick start. The same link appears in a popover if you hover the pointer over the question mark icon next to the Select volume to boot from line.

      Immediately after you install the environment or when the environment is disconnected, the list of volumes to boot from is empty. In that case, three operating system logos are displayed: Windows, RHEL, and Linux. You can add a new volume that meets your requirements by clicking the Add volume button.

  3. Click an instance type tile and select the resource size appropriate for your workload.
  4. Optional: Choose the virtual machine details, including the VM’s name, that apply to the volume you are booting from:

    • For a Linux-based volume, follow these steps to configure SSH:

      1. If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key in the VirtualMachine details section.
      2. Select one of the following options:

        • Use existing: Select a secret from the secrets list.
        • Add new: Follow these steps:

          1. Browse to the public SSH key file or paste the file in the key field.
          2. Enter the secret name.
          3. Optional: Select Automatically apply this key to any new VirtualMachine you create in this project.
      3. Click Save.
    • For a Windows volume, follow either of these sets of steps to configure sysprep options:

      • If you have not already added sysprep options for the Windows volume, follow these steps:

        1. Click the edit icon beside Sysprep in the VirtualMachine details section.
        2. Add the Autounattend.xml answer file.
        3. Add the Unattend.xml answer file.
        4. Click Save.
      • If you want to use existing sysprep options for the Windows volume, follow these steps:

        1. Click Attach existing sysprep.
        2. Enter the name of the existing sysprep Unattend.xml answer file.
        3. Click Save.
  5. Optional: If you are creating a Windows VM, you can mount a Windows driver disk:

    1. Click the Customize VirtualMachine button.
    2. On the VirtualMachine details page, click Storage.
    3. Select the Mount Windows drivers disk checkbox.
  6. Optional: Click View YAML & CLI to view the YAML file. Click CLI to view the CLI commands. You can also download or copy either the YAML file contents or the CLI commands.
  7. Click Create VirtualMachine.

After the VM is created, you can monitor the status on the VirtualMachine details page.

7.5.2.2.3. Adding a key when creating a VM by using the command line

You can add a statically managed public SSH key when you create a virtual machine (VM) by using the command line. The key is added to the VM at first boot.

The key is added to the VM as a cloud-init data source. This method separates the access credentials from the application data in the cloud-init user data. This method does not affect cloud-init user data.

Prerequisites

  • You generated an SSH key pair by running the ssh-keygen command.

Procedure

  1. Create a manifest file for a VirtualMachine object and a Secret object:

    Example manifest

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: example-vm
      namespace: example-namespace
    spec:
      dataVolumeTemplates:
        - metadata:
            name: example-vm-volume
          spec:
            sourceRef:
              kind: DataSource
              name: rhel9
              namespace: openshift-virtualization-os-images
            storage:
              resources: {}
      instancetype:
        name: u1.medium
      preference:
        name: rhel.9
      running: true
      template:
        spec:
          domain:
            devices: {}
          volumes:
            - dataVolume:
                name: example-vm-volume
              name: rootdisk
            - cloudInitNoCloud: 1
                userData: |-
                  #cloud-config
                  user: cloud-user
              name: cloudinitdisk
          accessCredentials:
            - sshPublicKey:
                propagationMethod:
                  noCloud: {}
                source:
                  secret:
                    secretName: authorized-keys 2
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: authorized-keys
    data:
      key: c3NoLXJzYSB... 3

    1
    Specify the cloudInitNoCloud data source.
    2
    Specify the Secret object name.
    3
    Paste the public SSH key.
  2. Create the VirtualMachine and Secret objects by running the following command:

    $ oc create -f <manifest_file>.yaml
  3. Start the VM by running the following command:

    $ virtctl start example-vm -n example-namespace

Verification

  • Get the VM configuration:

    $ oc get vm example-vm -n example-namespace -o yaml

    Example output

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: example-vm
      namespace: example-namespace
    spec:
      template:
        spec:
          accessCredentials:
            - sshPublicKey:
                propagationMethod:
                  noCloud: {}
                source:
                  secret:
                    secretName: authorized-keys
    # ...

7.5.2.3. Dynamic key management

You can enable dynamic key injection for a virtual machine (VM) by using the OpenShift Container Platform web console or the command line. Then, you can update the key at runtime.

Note

Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection.

If you disable dynamic key injection, the VM inherits the key management method of the image from which it was created.

7.5.2.3.1. Enabling dynamic key injection when creating a VM from a template

You can enable dynamic public SSH key injection when you create a virtual machine (VM) from a template by using the OpenShift Container Platform web console. Then, you can update the key at runtime.

Note

Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection.

The key is added to the VM by the QEMU guest agent, which is installed with RHEL 9.

Prerequisites

  • You generated an SSH key pair by running the ssh-keygen command.

Procedure

  1. Navigate to Virtualization Catalog in the web console.
  2. Click the Red Hat Enterprise Linux 9 VM tile.
  3. Click Customize VirtualMachine.
  4. Click Next.
  5. Click the Scripts tab.
  6. If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key and select one of the following options:

    • Use existing: Select a secret from the secrets list.
    • Add new:

      1. Browse to the SSH key file or paste the file in the key field.
      2. Enter the secret name.
      3. Optional: Select Automatically apply this key to any new VirtualMachine you create in this project.
  7. Set Dynamic SSH key injection to on.
  8. Click Save.
  9. Click Create VirtualMachine.

    The VirtualMachine details page displays the progress of the VM creation.

Verification

  • Click the Scripts tab on the Configuration tab.

    The secret name is displayed in the Authorized SSH key section.

7.5.2.3.2. Enabling dynamic key injection when creating a VM from an instance type by using the web console

You can create a virtual machine (VM) from an instance type by using the OpenShift Container Platform web console. You can also use the web console to create a VM by copying an existing snapshot or to clone a VM.

You can create a VM from a list of available bootable volumes. You can add Linux- or Windows-based volumes to the list.

You can enable dynamic SSH key injection when you create a virtual machine (VM) from an instance type by using the OpenShift Container Platform web console. Then, you can add or revoke the key at runtime.

Note

Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection.

The key is added to the VM by the QEMU guest agent, which is installed with RHEL 9.

Procedure

  1. In the web console, navigate to Virtualization Catalog.

    The InstanceTypes tab opens by default.

  2. Select either of the following options:

    • Select a suitable bootable volume from the list. If the list is truncated, click the Show all button to display the entire list.

      Note

      The bootable volume table lists only those volumes in the openshift-virtualization-os-images namespace that have the instancetype.kubevirt.io/default-preference label.

      • Optional: Click the star icon to designate a bootable volume as a favorite. Starred bootable volumes appear first in the volume list.
    • Click Add volume to upload a new volume or to use an existing persistent volume claim (PVC), a volume snapshot, or a containerDisk volume. Click Save.

      Logos of operating systems that are not available in the cluster are shown at the bottom of the list. You can add a volume for the required operating system by clicking the Add volume link.

      In addition, there is a link to the Create a Windows boot source quick start. The same link appears in a popover if you hover the pointer over the question mark icon next to the Select volume to boot from line.

      Immediately after you install the environment or when the environment is disconnected, the list of volumes to boot from is empty. In that case, three operating system logos are displayed: Windows, RHEL, and Linux. You can add a new volume that meets your requirements by clicking the Add volume button.

  3. Click an instance type tile and select the resource size appropriate for your workload.
  4. Click the Red Hat Enterprise Linux 9 VM tile.
  5. Optional: Choose the virtual machine details, including the VM’s name, that apply to the volume you are booting from:

    • For a Linux-based volume, follow these steps to configure SSH:

      1. If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key in the VirtualMachine details section.
      2. Select one of the following options:

        • Use existing: Select a secret from the secrets list.
        • Add new: Follow these steps:

          1. Browse to the public SSH key file or paste the file in the key field.
          2. Enter the secret name.
          3. Optional: Select Automatically apply this key to any new VirtualMachine you create in this project.
      3. Click Save.
    • For a Windows volume, follow either of these sets of steps to configure sysprep options:

      • If you have not already added sysprep options for the Windows volume, follow these steps:

        1. Click the edit icon beside Sysprep in the VirtualMachine details section.
        2. Add the Autounattend.xml answer file.
        3. Add the Unattend.xml answer file.
        4. Click Save.
      • If you want to use existing sysprep options for the Windows volume, follow these steps:

        1. Click Attach existing sysprep.
        2. Enter the name of the existing sysprep Unattend.xml answer file.
        3. Click Save.
  6. Set Dynamic SSH key injection in the VirtualMachine details section to on.
  7. Optional: If you are creating a Windows VM, you can mount a Windows driver disk:

    1. Click the Customize VirtualMachine button.
    2. On the VirtualMachine details page, click Storage.
    3. Select the Mount Windows drivers disk checkbox.
  8. Optional: Click View YAML & CLI to view the YAML file. Click CLI to view the CLI commands. You can also download or copy either the YAML file contents or the CLI commands.
  9. Click Create VirtualMachine.

After the VM is created, you can monitor the status on the VirtualMachine details page.

7.5.2.3.3. Enabling dynamic SSH key injection by using the web console

You can enable dynamic key injection for a virtual machine (VM) by using the OpenShift Container Platform web console. Then, you can update the public SSH key at runtime.

The key is added to the VM by the QEMU guest agent, which is installed with Red Hat Enterprise Linux (RHEL) 9.

Prerequisites

  • The guest operating system is RHEL 9.

Procedure

  1. Navigate to Virtualization VirtualMachines in the web console.
  2. Select a VM to open the VirtualMachine details page.
  3. On the Configuration tab, click Scripts.
  4. If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key and select one of the following options:

    • Use existing: Select a secret from the secrets list.
    • Add new:

      1. Browse to the SSH key file or paste the file in the key field.
      2. Enter the secret name.
      3. Optional: Select Automatically apply this key to any new VirtualMachine you create in this project.
  5. Set Dynamic SSH key injection to on.
  6. Click Save.

7.5.2.3.4. Enabling dynamic key injection by using the command line

You can enable dynamic key injection for a virtual machine (VM) by using the command line. Then, you can update the public SSH key at runtime.

Note

Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection.

The key is added to the VM by the QEMU guest agent, which is installed automatically with RHEL 9.

Prerequisites

  • You generated an SSH key pair by running the ssh-keygen command.

Procedure

  1. Create a manifest file for a VirtualMachine object and a Secret object:

    Example manifest

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: example-vm
      namespace: example-namespace
    spec:
      dataVolumeTemplates:
        - metadata:
            name: example-vm-volume
          spec:
            sourceRef:
              kind: DataSource
              name: rhel9
              namespace: openshift-virtualization-os-images
            storage:
              resources: {}
      instancetype:
        name: u1.medium
      preference:
        name: rhel.9
      running: true
      template:
        spec:
          domain:
            devices: {}
          volumes:
            - dataVolume:
                name: example-vm-volume
              name: rootdisk
            - cloudInitNoCloud: 1
                userData: |-
                  #cloud-config
                  runcmd:
                  - [ setsebool, -P, virt_qemu_ga_manage_ssh, on ]
              name: cloudinitdisk
          accessCredentials:
            - sshPublicKey:
                propagationMethod:
                  qemuGuestAgent:
                    users: ["cloud-user"]
                source:
                  secret:
                    secretName: authorized-keys 2
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: authorized-keys
    data:
      key: c3NoLXJzYSB... 3

    1
    Specify the cloudInitNoCloud data source.
    2
    Specify the Secret object name.
    3
    Paste the public SSH key.
  2. Create the VirtualMachine and Secret objects by running the following command:

    $ oc create -f <manifest_file>.yaml
  3. Start the VM by running the following command:

    $ virtctl start example-vm -n example-namespace

Verification

  • Get the VM configuration:

    $ oc get vm example-vm -n example-namespace -o yaml

    Example output

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: example-vm
      namespace: example-namespace
    spec:
      template:
        spec:
          accessCredentials:
            - sshPublicKey:
                propagationMethod:
                  qemuGuestAgent:
                    users: ["cloud-user"]
                source:
                  secret:
                    secretName: authorized-keys
    # ...

7.5.2.4. Using the virtctl ssh command

You can access a running virtual machine (VM) by using the virtctl ssh command.

Prerequisites

  • You installed the virtctl command line tool.
  • You added a public SSH key to the VM.
  • You have an SSH client installed.
  • The environment where you installed the virtctl tool has the cluster permissions required to access the VM. For example, you ran oc login or you set the KUBECONFIG environment variable.

Procedure

  • Run the virtctl ssh command:

    $ virtctl -n <namespace> ssh <username>@example-vm -i <ssh_key> 1
    1
    Specify the namespace, user name, and the SSH private key. The default SSH key location is /home/user/.ssh. If you save the key in a different location, you must specify the path.

    Example

    $ virtctl -n my-namespace ssh cloud-user@example-vm -i my-key

Tip

You can copy the virtctl ssh command in the web console by selecting Copy SSH command from the options kebab menu beside a VM on the VirtualMachines page.

7.5.3. Using the virtctl port-forward command

You can use your local OpenSSH client and the virtctl port-forward command to connect to a running virtual machine (VM). You can use this method with Ansible to automate the configuration of VMs.

This method is recommended for low-traffic applications because port-forwarding traffic is sent over the control plane. This method is not recommended for high-traffic applications such as Rsync or Remote Desktop Protocol because it places a heavy burden on the API server.

Prerequisites

  • You have installed the virtctl client.
  • The virtual machine you want to access is running.
  • The environment where you installed the virtctl tool has the cluster permissions required to access the VM. For example, you ran oc login or you set the KUBECONFIG environment variable.

Procedure

  1. Add the following text to the ~/.ssh/config file on your client machine:

    Host vm/*
      ProxyCommand virtctl port-forward --stdio=true %h %p
  2. Connect to the VM by running the following command:

    $ ssh <user>@vm/<vm_name>.<namespace>
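
Because scp reads the same ~/.ssh/config file, you can copy files to the VM through the same proxy. A minimal sketch, assuming the Host vm/* entry from step 1 is in place and that <remote_path> is a path of your choice on the VM:

$ scp <local_file> <user>@vm/<vm_name>.<namespace>:<remote_path>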

7.5.4. Using a service for SSH access

You can create a service for a virtual machine (VM) and connect to the IP address and port exposed by the service.

Services provide excellent performance and are recommended for applications that are accessed from outside the cluster or within the cluster. Ingress traffic is protected by firewalls.

If the cluster network cannot handle the traffic load, consider using a secondary network for VM access.

7.5.4.1. About services

A Kubernetes service exposes network access for clients to an application running on a set of pods. Services offer abstraction, load balancing, and, in the case of the NodePort and LoadBalancer types, exposure to the outside world.

ClusterIP
Exposes the service on an internal IP address and as a DNS name to other applications within the cluster. A single service can map to multiple virtual machines. When a client tries to connect to the service, the client’s request is load balanced among available backends. ClusterIP is the default service type.
NodePort
Exposes the service on the same port of each selected node in the cluster. NodePort makes a port accessible from outside the cluster, as long as the node itself is externally accessible to the client.
LoadBalancer
Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP address to the service.
Note

For on-premise clusters, you can configure a load-balancing service by deploying the MetalLB Operator.

7.5.4.2. Creating a service

You can create a service to expose a virtual machine (VM) by using the OpenShift Container Platform web console, virtctl command line tool, or a YAML file.

7.5.4.2.1. Enabling load balancer service creation by using the web console

You can enable the creation of load balancer services for a virtual machine (VM) by using the OpenShift Container Platform web console.

Prerequisites

  • You have configured a load balancer for the cluster.
  • You are logged in as a user with the cluster-admin role.

Procedure

  1. Navigate to Virtualization Overview.
  2. On the Settings tab, click Cluster.
  3. Expand General settings and SSH configuration.
  4. Set SSH over LoadBalancer service to on.

7.5.4.2.2. Creating a service by using the web console

You can create a node port or load balancer service for a virtual machine (VM) by using the OpenShift Container Platform web console.

Prerequisites

  • You configured the cluster network to support either a load balancer or a node port.
  • To create a load balancer service, you enabled the creation of load balancer services.

Procedure

  1. Navigate to VirtualMachines and select a virtual machine to view the VirtualMachine details page.
  2. On the Details tab, select SSH over LoadBalancer from the SSH service type list.
  3. Optional: Click the copy icon to copy the SSH command to your clipboard.

Verification

  • Check the Services pane on the Details tab to view the new service.

7.5.4.2.3. Creating a service by using virtctl

You can create a service for a virtual machine (VM) by using the virtctl command line tool.

Prerequisites

  • You installed the virtctl command line tool.
  • You configured the cluster network to support the service.
  • The environment where you installed virtctl has the cluster permissions required to access the VM. For example, you ran oc login or you set the KUBECONFIG environment variable.

Procedure

  • Create a service by running the following command:

    $ virtctl expose vm <vm_name> --name <service_name> --type <service_type> --port <port> 1
    1
    Specify the ClusterIP, NodePort, or LoadBalancer service type.

    Example

    $ virtctl expose vm example-vm --name example-service --type NodePort --port 22

Verification

  • Verify the service by running the following command:

    $ oc get service

Next steps

After you create a service with virtctl, you must add the special: key label to the spec.template.metadata.labels stanza of the VirtualMachine manifest. See Creating a service by using the command line.

7.5.4.2.4. Creating a service by using the command line

You can create a service and associate it with a virtual machine (VM) by using the command line.

Prerequisites

  • You configured the cluster network to support the service.

Procedure

  1. Edit the VirtualMachine manifest to add the label for service creation:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: example-vm
      namespace: example-namespace
    spec:
      running: false
      template:
        metadata:
          labels:
            special: key 1
    # ...
    1
    Add special: key to the spec.template.metadata.labels stanza.
    Note

    Labels on a virtual machine are passed through to the pod. The special: key label must match the label in the spec.selector attribute of the Service manifest.

  2. Save the VirtualMachine manifest file to apply your changes.
  3. Create a Service manifest to expose the VM:

    apiVersion: v1
    kind: Service
    metadata:
      name: example-service
      namespace: example-namespace
    spec:
    # ...
      selector:
        special: key 1
      type: NodePort 2
      ports: 3
        - protocol: TCP
          port: 80
          targetPort: 9376
          nodePort: 30000
    1
    Specify the label that you added to the spec.template.metadata.labels stanza of the VirtualMachine manifest.
    2
    Specify ClusterIP, NodePort, or LoadBalancer.
    3
    Specifies a collection of network ports and protocols that you want to expose from the virtual machine.
  4. Save the Service manifest file.
  5. Create the service by running the following command:

    $ oc create -f example-service.yaml
  6. Restart the VM to apply the changes.

Verification

  • Query the Service object to verify that it is available:

    $ oc get service -n example-namespace

7.5.4.3. Connecting to a VM exposed by a service by using SSH

You can connect to a virtual machine (VM) that is exposed by a service by using SSH.

Prerequisites

  • You created a service to expose the VM.
  • You have an SSH client installed.
  • You are logged in to the cluster.

Procedure

  • Run the following command to access the VM:

    $ ssh <user_name>@<ip_address> -p <port> 1
    1
    Specify the cluster IP for a cluster IP service, the node IP for a node port service, or the external IP address for a load balancer service.
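
For a node port service, you can look up the allocated port before you connect. A minimal sketch, assuming the example-service node port service created earlier in this section:

$ oc get service example-service -n example-namespace -o jsonpath='{.spec.ports[0].nodePort}'
$ ssh cloud-user@<node_ip> -p <node_port>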

7.5.5. Using a secondary network for SSH access

You can configure a secondary network, attach a virtual machine (VM) to the secondary network interface, and connect to the DHCP-allocated IP address by using SSH.

Important

Secondary networks provide excellent performance because the traffic is not handled by the cluster network stack. However, the VMs are exposed directly to the secondary network and are not protected by firewalls. If a VM is compromised, an intruder could gain access to the secondary network. You must configure appropriate security within the operating system of the VM if you use this method.

See the Multus and SR-IOV documentation in the OpenShift Virtualization Tuning & Scaling Guide for additional information about networking options.

7.5.5.1. Configuring a VM network interface by using the web console

You can configure a network interface for a virtual machine (VM) by using the OpenShift Container Platform web console.

Prerequisites

  • You created a network attachment definition for the network.

Procedure

  1. Navigate to Virtualization VirtualMachines.
  2. Click a VM to view the VirtualMachine details page.
  3. On the Configuration tab, click the Network interfaces tab.
  4. Click Add network interface.
  5. Enter the interface name and select the network attachment definition from the Network list.
  6. Click Save.
  7. Restart the VM to apply the changes.

7.5.5.2. Connecting to a VM attached to a secondary network by using SSH

You can connect to a virtual machine (VM) attached to a secondary network by using SSH.

Prerequisites

  • You attached a VM to a secondary network with a DHCP server.
  • You have an SSH client installed.

Procedure

  1. Obtain the IP address of the VM by running the following command:

    $ oc describe vm <vm_name> -n <namespace>

    Example output

    # ...
    Interfaces:
      Interface Name:  eth0
      Ip Address:      10.244.0.37/24
      Ip Addresses:
        10.244.0.37/24
        fe80::858:aff:fef4:25/64
      Mac:             0a:58:0a:f4:00:25
      Name:            default
    # ...

  2. Connect to the VM by running the following command:

    $ ssh <user_name>@<ip_address> -i <ssh_key>

    Example

    $ ssh cloud-user@10.244.0.37 -i ~/.ssh/id_rsa_cloud-user

7.6. Editing virtual machines

You can update a virtual machine (VM) configuration by using the OpenShift Container Platform web console. You can update the YAML file or the VirtualMachine details page.

You can also edit a VM by using the command line.

To edit a VM to configure disk sharing by using virtual disks or LUN, see Configuring shared volumes for virtual machines.

7.6.1. Hot plugging memory on a virtual machine

You can increase or decrease the amount of memory allocated to a virtual machine (VM) without having to restart the VM by using the OpenShift Container Platform web console.

Procedure

  1. Navigate to Virtualization VirtualMachines.
  2. Select the required VM to open the VirtualMachine details page.
  3. On the Configuration tab, click Edit CPU|Memory.
  4. Enter the desired amount of memory and click Save.

The system applies these changes immediately. If the VM is migratable, a live migration is triggered. If not, or if the changes cannot be live-updated, a RestartRequired condition is added to the VM.

Note

Linux guests require a kernel version of 5.16 or later and Windows guests require the latest viomem drivers.
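
The web console edit corresponds to updating the guest memory value in the VM specification. A minimal command-line sketch, assuming a VM named example-vm whose guest and cluster configuration support memory hot plug and a new size of 4Gi:

$ oc patch vm example-vm --type merge -p '{"spec":{"template":{"spec":{"domain":{"memory":{"guest":"4Gi"}}}}}}'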

7.6.2. Hot plugging CPUs on a virtual machine

You can increase or decrease the number of CPU sockets allocated to a virtual machine (VM) without having to restart the VM by using the OpenShift Container Platform web console.

Procedure

  1. Navigate to Virtualization VirtualMachines.
  2. Select the required VM to open the VirtualMachine details page.
  3. On the Configuration tab, click Edit CPU|Memory.
  4. Select the vCPU radio button.
  5. Enter the desired number of vCPU sockets and click Save.

    If the VM is migratable, a live migration is triggered. If not, or if the changes cannot be live-updated, a RestartRequired condition is added to the VM.
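
The web console edit corresponds to updating the number of CPU sockets in the VM specification. A minimal command-line sketch, assuming a VM named example-vm and a new value of 2 sockets:

$ oc patch vm example-vm --type merge -p '{"spec":{"template":{"spec":{"domain":{"cpu":{"sockets":2}}}}}}'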

7.6.3. Editing a virtual machine by using the command line

You can edit a virtual machine (VM) by using the command line.

Prerequisites

  • You installed the oc CLI.

Procedure

  1. Open the virtual machine configuration in your default editor by running the following command:

    $ oc edit vm <vm_name>
  2. Edit the YAML configuration.
  3. If you edit a running virtual machine, you need to do one of the following:

    • Restart the virtual machine.
    • Run the following command for the new configuration to take effect:

      $ oc apply -f <vm_name>.yaml -n <namespace>

7.6.4. Adding a disk to a virtual machine

You can add a virtual disk to a virtual machine (VM) by using the OpenShift Container Platform web console.

Procedure

  1. Navigate to Virtualization VirtualMachines in the web console.
  2. Select a VM to open the VirtualMachine details page.
  3. On the Disks tab, click Add disk.
  4. Specify the Source, Name, Size, Type, Interface, and Storage Class.

    1. Optional: You can enable preallocation if you use a blank disk source and require maximum write performance when creating data volumes. To do so, select the Enable preallocation checkbox.
    2. Optional: You can clear Apply optimized StorageProfile settings to change the Volume Mode and Access Mode for the virtual disk. If you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map.
  5. Click Add.

Note

If the VM is running, you must restart the VM to apply the change.

7.6.4.1. Storage fields

Blank (creates PVC)
Create an empty disk.

Import via URL (creates PVC)
Import content via URL (HTTP or HTTPS endpoint).

Use an existing PVC
Use a PVC that is already available in the cluster.

Clone existing PVC (creates PVC)
Select an existing PVC available in the cluster and clone it.

Import via Registry (creates PVC)
Import content via container registry.

Container (ephemeral)
Upload content from a container located in a registry accessible from the cluster. The container disk should be used only for read-only filesystems such as CD-ROMs or temporary virtual machines.

Name
Name of the disk. The name can contain lowercase letters (a-z), numbers (0-9), hyphens (-), and periods (.), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, or special characters.

Size
Size of the disk in GiB.

Type
Type of disk. Example: Disk or CD-ROM

Interface
Type of disk device. Supported interfaces are virtIO, SATA, and SCSI.

Storage Class
The storage class that is used to create the disk.

Advanced storage settings

The following advanced storage settings are optional and available for Blank, Import via URL, and Clone existing PVC disks.

If you do not specify these parameters, the system uses the default storage profile values.

Parameter | Option | Description

Volume Mode

Filesystem

Stores the virtual disk on a file system-based volume.

Block

Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it.

Access Mode

ReadWriteOnce (RWO)

Volume can be mounted as read-write by a single node.

ReadWriteMany (RWX)

Volume can be mounted as read-write by many nodes at one time.

Note

This mode is required for live migration.

7.6.5. Mounting a Windows driver disk on a virtual machine

You can mount a Windows driver disk on a virtual machine (VM) by using the OpenShift Container Platform web console.

Procedure

  1. Navigate to Virtualization → VirtualMachines.
  2. Select the required VM to open the VirtualMachine details page.
  3. On the Configuration tab, click Storage.
  4. Select the Mount Windows drivers disk checkbox.

    The Windows driver disk is displayed in the list of mounted disks.

7.6.6. Adding a secret, config map, or service account to a virtual machine

You add a secret, config map, or service account to a virtual machine by using the OpenShift Container Platform web console.

These resources are added to the virtual machine as disks. You then mount the secret, config map, or service account as you would mount any other disk.

If the virtual machine is running, changes do not take effect until you restart the virtual machine. The newly added resources are marked as pending changes at the top of the page.

Prerequisites

  • The secret, config map, or service account that you want to add must exist in the same namespace as the target virtual machine.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. Click Configuration → Environment.
  4. Click Add Config Map, Secret or Service Account.
  5. Click Select a resource and select a resource from the list. A six-character serial number is automatically generated for the selected resource.
  6. Optional: Click Reload to revert the environment to its last saved state.
  7. Click Save.

Verification

  1. On the VirtualMachine details page, click Configuration → Disks and verify that the resource is displayed in the list of disks.
  2. Restart the virtual machine by clicking Actions → Restart.

You can now mount the secret, config map, or service account as you would mount any other disk.
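For reference, a resource that you add in this way appears in the VirtualMachine manifest as a disk and a matching volume. The following is a minimal sketch for a config map; the config map and disk names are placeholders, and secret or service account volumes follow the same pattern:

spec:
  template:
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: example-configmap-disk
      volumes:
      - configMap:
          name: example-configmap
        name: example-configmap-disk
# ...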

Additional resources for config maps, secrets, and service accounts

7.7. Editing boot order

You can update the values for a boot order list by using the web console or the CLI.

With Boot Order in the Virtual Machine Overview page, you can:

  • Select a disk or network interface controller (NIC) and add it to the boot order list.
  • Edit the order of the disks or NICs in the boot order list.
  • Remove a disk or NIC from the boot order list, and return it back to the inventory of bootable sources.

7.7.1. Adding items to a boot order list in the web console

Add items to a boot order list by using the web console.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. Click the Details tab.
  4. Click the pencil icon that is located on the right side of Boot Order. If a YAML configuration does not exist, or if this is the first time that you are creating a boot order list, the following message displays: No resource selected. VM will attempt to boot from disks by order of appearance in YAML file.
  5. Click Add Source and select a bootable disk or network interface controller (NIC) for the virtual machine.
  6. Add any additional disks or NICs to the boot order list.
  7. Click Save.
Note

If the virtual machine is running, changes to Boot Order will not take effect until you restart the virtual machine.

You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.

7.7.2. Editing a boot order list in the web console

Edit the boot order list in the web console.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. Click the Details tab.
  4. Click the pencil icon that is located on the right side of Boot Order.
  5. Choose the appropriate method to move the item in the boot order list:

    • If you do not use a screen reader, hover over the arrow icon next to the item that you want to move, drag the item up or down, and drop it in a location of your choice.
    • If you use a screen reader, press the Up Arrow key or Down Arrow key to move the item in the boot order list. Then, press the Tab key to drop the item in a location of your choice.
  6. Click Save.
Note

If the virtual machine is running, changes to the boot order list will not take effect until you restart the virtual machine.

You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.

7.7.3. Editing a boot order list in the YAML configuration file

Edit the boot order list in a YAML configuration file by using the CLI.

Procedure

  1. Open the YAML configuration file for the virtual machine by running the following command:

    $ oc edit vm <vm_name> -n <namespace>
  2. Edit the YAML file and modify the values for the boot order associated with a disk or network interface controller (NIC). For example:

    disks:
      - bootOrder: 1 1
        disk:
          bus: virtio
        name: containerdisk
      - disk:
          bus: virtio
        name: cloudinitdisk
      - cdrom:
          bus: virtio
        name: cd-drive-1
    interfaces:
      - bootOrder: 2 2
        macAddress: '02:96:c4:00:00:00'
        masquerade: {}
        name: default
    1
    The boot order value specified for the disk.
    2
    The boot order value specified for the network interface controller.
  3. Save the YAML file.

7.7.4. Removing items from a boot order list in the web console

Remove items from a boot order list by using the web console.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. Click the Details tab.
  4. Click the pencil icon that is located on the right side of Boot Order.
  5. Click the Remove icon next to the item. The item is removed from the boot order list and saved in the list of available boot sources. If you remove all items from the boot order list, the following message displays: No resource selected. VM will attempt to boot from disks by order of appearance in YAML file.
Note

If the virtual machine is running, changes to Boot Order will not take effect until you restart the virtual machine.

You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.

7.8. Deleting virtual machines

You can delete a virtual machine from the web console or by using the oc command line interface.

7.8.1. Deleting a virtual machine using the web console

Deleting a virtual machine permanently removes it from the cluster.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization → VirtualMachines from the side menu.
  2. Click the Options menu kebab beside a virtual machine and select Delete.

    Alternatively, click the virtual machine name to open the VirtualMachine details page and click Actions → Delete.

  3. Optional: Select With grace period or clear Delete disks.
  4. Click Delete to permanently delete the virtual machine.

7.8.2. Deleting a virtual machine by using the CLI

You can delete a virtual machine by using the oc command line interface (CLI). The oc client enables you to perform actions on multiple virtual machines.

Prerequisites

  • Identify the name of the virtual machine that you want to delete.

Procedure

  • Delete the virtual machine by running the following command:

    $ oc delete vm <vm_name>
    Note

    This command only deletes a VM in the current project. Specify the -n <project_name> option if the VM you want to delete is in a different project or namespace.
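For example, because VirtualMachine objects are standard Kubernetes resources, you can delete several VMs at once by using a label selector. The following is a minimal sketch, assuming the VMs carry a hypothetical app=demo label:

$ oc delete vm -l app=demo -n <project_name>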

7.9. Exporting virtual machines

You can export a virtual machine (VM) and its associated disks in order to import a VM into another cluster or to analyze the volume for forensic purposes.

You create a VirtualMachineExport custom resource (CR) by using the command line interface.

Alternatively, you can use the virtctl vmexport command to create a VirtualMachineExport CR and to download exported volumes.
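For example, the following is a minimal sketch of downloading an exported volume as a compressed disk image with virtctl; the export name, VM name, and output file name are placeholders:

$ virtctl vmexport download example-export --vm=example-vm --output=disk.img.gz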

Note

You can migrate virtual machines between OpenShift Virtualization clusters by using the Migration Toolkit for Virtualization.

7.9.1. Creating a VirtualMachineExport custom resource

You can create a VirtualMachineExport custom resource (CR) to export the following objects:

  • Virtual machine (VM): Exports the persistent volume claims (PVCs) of a specified VM.
  • VM snapshot: Exports PVCs contained in a VirtualMachineSnapshot CR.
  • PVC: Exports a PVC. If the PVC is used by another pod, such as the virt-launcher pod, the export remains in a Pending state until the PVC is no longer in use.

The VirtualMachineExport CR creates internal and external links for the exported volumes. Internal links are valid within the cluster. External links can be accessed by using an Ingress or Route.

The export server supports the following file formats:

  • raw: Raw disk image file.
  • gzip: Compressed disk image file.
  • dir: PVC directory and files.
  • tar.gz: Compressed PVC file.

Prerequisites

  • The VM must be shut down for a VM export.

Procedure

  1. Create a VirtualMachineExport manifest to export a volume from a VirtualMachine, VirtualMachineSnapshot, or PersistentVolumeClaim CR according to the following example and save it as example-export.yaml:

    VirtualMachineExport example

    apiVersion: export.kubevirt.io/v1beta1
    kind: VirtualMachineExport
    metadata:
      name: example-export
    spec:
      source:
        apiGroup: "kubevirt.io" 1
        kind: VirtualMachine 2
        name: example-vm
      ttlDuration: 1h 3

    1
    Specify the appropriate API group:
    • "kubevirt.io" for VirtualMachine.
    • "snapshot.kubevirt.io" for VirtualMachineSnapshot.
    • "" for PersistentVolumeClaim.
    2
    Specify VirtualMachine, VirtualMachineSnapshot, or PersistentVolumeClaim.
    3
    Optional. The default duration is 2 hours.
  2. Create the VirtualMachineExport CR:

    $ oc create -f example-export.yaml
  3. Get the VirtualMachineExport CR:

    $ oc get vmexport example-export -o yaml

    The internal and external links for the exported volumes are displayed in the status stanza:

    Output example

    apiVersion: export.kubevirt.io/v1beta1
    kind: VirtualMachineExport
    metadata:
      name: example-export
      namespace: example
    spec:
      source:
        apiGroup: ""
        kind: PersistentVolumeClaim
        name: example-pvc
      tokenSecretRef: example-token
    status:
      conditions:
      - lastProbeTime: null
        lastTransitionTime: "2022-06-21T14:10:09Z"
        reason: podReady
        status: "True"
        type: Ready
      - lastProbeTime: null
        lastTransitionTime: "2022-06-21T14:09:02Z"
        reason: pvcBound
        status: "True"
        type: PVCReady
      links:
        external: 1
          cert: |-
            -----BEGIN CERTIFICATE-----
            ...
            -----END CERTIFICATE-----
          volumes:
          - formats:
            - format: raw
              url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img
            - format: gzip
              url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img.gz
            name: example-disk
        internal:  2
          cert: |-
            -----BEGIN CERTIFICATE-----
            ...
            -----END CERTIFICATE-----
          volumes:
          - formats:
            - format: raw
              url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img
            - format: gzip
              url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img.gz
            name: example-disk
      phase: Ready
      serviceName: virt-export-example-export

    1
    External links are accessible from outside the cluster by using an Ingress or Route.
    2
    Internal links are only valid inside the cluster.

7.9.2. Accessing exported virtual machine manifests

After you export a virtual machine (VM) or snapshot, you can get the VirtualMachine manifest and related information from the export server.

Prerequisites

  • You exported a virtual machine or VM snapshot by creating a VirtualMachineExport custom resource (CR).

    Note

    VirtualMachineExport objects that have the spec.source.kind: PersistentVolumeClaim parameter do not generate virtual machine manifests.

Procedure

  1. To access the manifests, you must first copy the certificates from the source cluster to the target cluster.

    1. Log in to the source cluster.
    2. Save the certificates to the cacert.crt file by running the following command:

      $ oc get vmexport <export_name> -o jsonpath={.status.links.external.cert} > cacert.crt 1
      1
      Replace <export_name> with the metadata.name value from the VirtualMachineExport object.
    3. Copy the cacert.crt file to the target cluster.
  2. Decode the token in the source cluster and save it to the token_decode file by running the following command:

    $ oc get secret export-token-<export_name> -o jsonpath={.data.token} | base64 --decode > token_decode 1
    1
    Replace <export_name> with the metadata.name value from the VirtualMachineExport object.
  3. Copy the token_decode file to the target cluster.
  4. Get the VirtualMachineExport custom resource by running the following command:

    $ oc get vmexport <export_name> -o yaml
  5. Review the status.links stanza, which is divided into external and internal sections. Note the manifests.url fields within each section:

    Example output

    apiVersion: export.kubevirt.io/v1beta1
    kind: VirtualMachineExport
    metadata:
      name: example-export
    spec:
      source:
        apiGroup: "kubevirt.io"
        kind: VirtualMachine
        name: example-vm
      tokenSecretRef: example-token
    status:
    #...
      links:
        external:
    #...
          manifests:
          - type: all
            url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/all 1
          - type: auth-header-secret
            url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret 2
        internal:
    #...
          manifests:
          - type: all
            url: https://virt-export-export-pvc.default.svc/internal/manifests/all 3
          - type: auth-header-secret
            url: https://virt-export-export-pvc.default.svc/internal/manifests/secret
      phase: Ready
      serviceName: virt-export-example-export

    1
    Contains the VirtualMachine manifest, DataVolume manifest, if present, and a ConfigMap manifest that contains the public certificate for the external URL’s ingress or route.
    2
    Contains a secret containing a header that is compatible with Containerized Data Importer (CDI). The header contains a text version of the export token.
    3
    Contains the VirtualMachine manifest, DataVolume manifest, if present, and a ConfigMap manifest that contains the certificate for the internal URL’s export server.
  6. Log in to the target cluster.
  7. Get the Secret manifest by running the following command:

    $ curl --cacert cacert.crt <secret_manifest_url> -H \ 1
    "x-kubevirt-export-token:token_decode" -H \ 2
    "Accept:application/yaml"
    1
    Replace <secret_manifest_url> with an auth-header-secret URL from the VirtualMachineExport YAML output.
    2
    Reference the token_decode file that you created earlier.

    For example:

    $ curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret -H "x-kubevirt-export-token:$(cat token_decode)" -H "Accept:application/yaml"
  8. Get the manifests of type: all, such as the ConfigMap and VirtualMachine manifests, by running the following command:

    $ curl --cacert cacert.crt <all_manifest_url> -H \ 1
    "x-kubevirt-export-token:token_decode" -H \ 2
    "Accept:application/yaml"
    1
    Replace <all_manifest_url> with a URL from the VirtualMachineExport YAML output.
    2
    Reference the token_decode file that you created earlier.

    For example:

    $ curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/all -H "x-kubevirt-export-token:$(cat token_decode)" -H "Accept:application/yaml"

Next steps

  • You can now create the ConfigMap and VirtualMachine objects on the target cluster by using the exported manifests.

7.10. Managing virtual machine instances

If you have standalone virtual machine instances (VMIs) that were created independently outside of the OpenShift Virtualization environment, you can manage them by using the web console or by using oc or virtctl commands from the command-line interface (CLI).

The virtctl command provides more virtualization options than the oc command. For example, you can use virtctl to pause a VM or expose a port.

7.10.1. About virtual machine instances

A virtual machine instance (VMI) is a representation of a running virtual machine (VM). When a VMI is owned by a VM or by another object, you manage it through its owner in the web console or by using the oc command-line interface (CLI).

A standalone VMI is created and started independently with a script, through automation, or by using other methods in the CLI. In your environment, you might have standalone VMIs that were developed and started outside of the OpenShift Virtualization environment. You can continue to manage those standalone VMIs by using the CLI. You can also use the web console for specific tasks associated with standalone VMIs:

  • List standalone VMIs and their details.
  • Edit labels and annotations for a standalone VMI.
  • Delete a standalone VMI.

When you delete a VM, the associated VMI is automatically deleted. You delete a standalone VMI directly because it is not owned by VMs or other objects.

Note

Before you uninstall OpenShift Virtualization, list and view the standalone VMIs by using the CLI or the web console. Then, delete any outstanding VMIs.

When you edit a VM, some settings might be applied to the VMIs dynamically and without the need for a restart. Any change made to a VM object that cannot be applied to the VMIs dynamically will trigger the RestartRequired VM condition. Changes are effective on the next reboot, and the condition is removed.

7.10.2. Listing all virtual machine instances using the CLI

You can list all virtual machine instances (VMIs) in your cluster, including standalone VMIs and those owned by virtual machines, by using the oc command-line interface (CLI).

Procedure

  • List all VMIs by running the following command:

    $ oc get vmis -A

7.10.3. Listing standalone virtual machine instances using the web console

Using the web console, you can list and view standalone virtual machine instances (VMIs) in your cluster that are not owned by virtual machines (VMs).

Note

VMIs that are owned by VMs or other objects are not displayed in the web console. The web console displays only standalone VMIs. If you want to list all VMIs in your cluster, you must use the CLI.

Procedure

  • Click Virtualization VirtualMachines from the side menu.

    You can identify a standalone VMI by a dark colored badge next to its name.

7.10.4. Editing a standalone virtual machine instance using the web console

You can edit the annotations and labels of a standalone virtual machine instance (VMI) using the web console. Other fields are not editable.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization → VirtualMachines from the side menu.
  2. Select a standalone VMI to open the VirtualMachineInstance details page.
  3. On the Details tab, click the pencil icon beside Annotations or Labels.
  4. Make the relevant changes and click Save.

7.10.5. Deleting a standalone virtual machine instance using the CLI

You can delete a standalone virtual machine instance (VMI) by using the oc command-line interface (CLI).

Prerequisites

  • Identify the name of the VMI that you want to delete.

Procedure

  • Delete the VMI by running the following command:

    $ oc delete vmi <vmi_name>

7.10.6. Deleting a standalone virtual machine instance using the web console

Delete a standalone virtual machine instance (VMI) from the web console.

Procedure

  1. In the OpenShift Container Platform web console, click Virtualization → VirtualMachines from the side menu.
  2. Select a standalone VMI to open the VirtualMachineInstance details page.
  3. Click Actions → Delete VirtualMachineInstance.
  4. In the confirmation pop-up window, click Delete to permanently delete the standalone VMI.

7.11. Controlling virtual machine states

You can stop, start, restart, and unpause virtual machines from the web console.

You can use virtctl to manage virtual machine states and perform other actions from the CLI. For example, you can use virtctl to force stop a VM or expose a port.
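For example, the following commands are a minimal sketch of common state changes with virtctl; the VM name and namespace are placeholders:

$ virtctl start example-vm -n example
$ virtctl stop example-vm -n example
$ virtctl restart example-vm -n example
$ virtctl pause vm example-vm -n example
$ virtctl unpause vm example-vm -n example
$ virtctl stop example-vm -n example --grace-period=0 --force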

7.11.1. Starting a virtual machine

You can start a virtual machine from the web console.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Find the row that contains the virtual machine that you want to start.
  3. Navigate to the appropriate menu for your use case:

    • To stay on this page, where you can perform actions on multiple virtual machines:

      1. Click the Options menu kebab located at the far right end of the row and click Start VirtualMachine.
    • To view comprehensive information about the selected virtual machine before you start it:

      1. Access the VirtualMachine details page by clicking the name of the virtual machine.
      2. Click Actions → Start.
Note

When you start a virtual machine that is provisioned from a URL source for the first time, the virtual machine has a status of Importing while OpenShift Virtualization imports the content from the URL endpoint. Depending on the size of the image, this process might take several minutes.

7.11.2. Stopping a virtual machine

You can stop a virtual machine from the web console.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Find the row that contains the virtual machine that you want to stop.
  3. Navigate to the appropriate menu for your use case:

    • To stay on this page, where you can perform actions on multiple virtual machines:

      1. Click the Options menu kebab located at the far right end of the row and click Stop VirtualMachine.
    • To view comprehensive information about the selected virtual machine before you stop it:

      1. Access the VirtualMachine details page by clicking the name of the virtual machine.
      2. Click Actions → Stop.

7.11.3. Restarting a virtual machine

You can restart a running virtual machine from the web console.

Important

To avoid errors, do not restart a virtual machine while it has a status of Importing.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Find the row that contains the virtual machine that you want to restart.
  3. Navigate to the appropriate menu for your use case:

    • To stay on this page, where you can perform actions on multiple virtual machines:

      1. Click the Options menu kebab located at the far right end of the row and click Restart.
    • To view comprehensive information about the selected virtual machine before you restart it:

      1. Access the VirtualMachine details page by clicking the name of the virtual machine.
      2. Click Actions → Restart.

7.11.4. Pausing a virtual machine

You can pause a virtual machine from the web console.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Find the row that contains the virtual machine that you want to pause.
  3. Navigate to the appropriate menu for your use case:

    • To stay on this page, where you can perform actions on multiple virtual machines:

      1. Click the Options menu kebab located at the far right end of the row and click Pause VirtualMachine.
    • To view comprehensive information about the selected virtual machine before you pause it:

      1. Access the VirtualMachine details page by clicking the name of the virtual machine.
      2. Click Actions → Pause.

7.11.5. Unpausing a virtual machine

You can unpause a paused virtual machine from the web console.

Prerequisites

  • At least one of your virtual machines must have a status of Paused.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Find the row that contains the virtual machine that you want to unpause.
  3. Navigate to the appropriate menu for your use case:

    • To stay on this page, where you can perform actions on multiple virtual machines:

      1. Click the Options menu kebab located at the far right end of the row and click Unpause VirtualMachine.
    • To view comprehensive information about the selected virtual machine before you unpause it:

      1. Access the VirtualMachine details page by clicking the name of the virtual machine.
      2. Click Actions → Unpause.

7.12. Using virtual Trusted Platform Module devices

Add a virtual Trusted Platform Module (vTPM) device to a new or existing virtual machine by editing the VirtualMachine (VM) or VirtualMachineInstance (VMI) manifest.

7.12.1. About vTPM devices

A virtual Trusted Platform Module (vTPM) device functions like a physical Trusted Platform Module (TPM) hardware chip.

You can use a vTPM device with any operating system, but Windows 11 requires the presence of a TPM chip to install or boot. A vTPM device allows VMs created from a Windows 11 image to function without a physical TPM chip.

If you do not enable vTPM, then the VM does not recognize a TPM device, even if the node has one.

A vTPM device also protects virtual machines by storing secrets without physical hardware. OpenShift Virtualization supports persisting vTPM device state by using Persistent Volume Claims (PVCs) for VMs. You must specify the storage class to be used by the PVC by setting the vmStateStorageClass attribute in the HyperConverged custom resource (CR):

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  vmStateStorageClass: <storage_class_name>

# ...
Note

The storage class must be of type Filesystem and support the ReadWriteMany (RWX) access mode.

7.12.2. Adding a vTPM device to a virtual machine

Adding a virtual Trusted Platform Module (vTPM) device to a virtual machine (VM) allows you to run a VM created from a Windows 11 image without a physical TPM device. A vTPM device also stores secrets for that VM.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have configured a Persistent Volume Claim (PVC) to use a storage class of type Filesystem that supports the ReadWriteMany (RWX) access mode. This is necessary for the vTPM device data to persist across VM reboots.

Procedure

  1. Run the following command to update the VM configuration:

    $ oc edit vm <vm_name> -n <namespace>
  2. Edit the VM specification to add the vTPM device. For example:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
        name: example-vm
    spec:
      template:
        spec:
          domain:
            devices:
              tpm:  1
                persistent: true 2
    # ...
    1
    Adds the vTPM device to the VM.
    2
    Specifies that the vTPM device state persists after the VM is shut down. The default value is false.
  3. To apply your changes, save and exit the editor.
  4. Optional: If you edited a running virtual machine, you must restart it for the changes to take effect.

7.13. Managing virtual machines with OpenShift Pipelines

Red Hat OpenShift Pipelines is a Kubernetes-native CI/CD framework that allows developers to design and run each step of the CI/CD pipeline in its own container.

By using OpenShift Pipelines tasks and the example pipeline, you can do the following:

  • Create and manage virtual machines (VMs), persistent volume claims (PVCs), data volumes, and data sources.
  • Run commands in VMs.
  • Manipulate disk images with libguestfs tools.

The tasks are located in the task catalog (ArtifactHub).

The example Windows pipeline is located in the pipeline catalog (ArtifactHub).

7.13.1. Prerequisites

  • You have access to an OpenShift Container Platform cluster with cluster-admin permissions.
  • You have installed the OpenShift CLI (oc).
  • You have installed OpenShift Pipelines.

7.13.2. Supported virtual machine tasks

The following table shows the supported tasks.

Table 7.4. Supported virtual machine tasks
Task | Description

create-vm-from-manifest

Create a virtual machine from a provided manifest or with virtctl.

create-vm-from-template

Create a virtual machine from a template.

copy-template

Copy a virtual machine template.

modify-vm-template

Modify a virtual machine template.

modify-data-object

Create or delete data volumes or data sources.

cleanup-vm

Run a script or a command in a virtual machine and stop or delete the virtual machine afterward.

disk-virt-customize

Use the virt-customize tool to run a customization script on a target PVC.

disk-virt-sysprep

Use the virt-sysprep tool to run a sysprep script on a target PVC.

wait-for-vmi-status

Wait for a specific status of a virtual machine instance and fail or succeed based on the status.

Note

Virtual machine creation in pipelines now uses ClusterInstanceType and ClusterPreference instead of template-based tasks, which have been deprecated. The create-vm-from-template, copy-template, and modify-vm-template tasks remain available but are not used in the default pipelines.

7.13.3. Windows EFI installer pipeline

You can run the Windows EFI installer pipeline by using the web console or CLI.

The Windows EFI installer pipeline installs Windows 10, Windows 11, or Windows Server 2022 into a new data volume from a Windows installation image (ISO file). A custom answer file is used to run the installation process.

Note

The Windows EFI installer pipeline uses a config map file with sysprep predefined by OpenShift Container Platform and suitable for Microsoft ISO files. For ISO files pertaining to different Windows editions, it may be necessary to create a new config map file with a system-specific sysprep definition.

7.13.3.1. Running the example pipelines using the web console

You can run the example pipelines from the Pipelines menu in the web console.

Procedure

  1. Click Pipelines → Pipelines in the side menu.
  2. Select a pipeline to open the Pipeline details page.
  3. From the Actions list, select Start. The Start Pipeline dialog is displayed.
  4. Keep the default values for the parameters and then click Start to run the pipeline. The Details tab tracks the progress of each task and displays the pipeline status.

7.13.3.2. Running the example pipelines using the CLI

Use a PipelineRun resource to run the example pipelines. A PipelineRun object is the running instance of a pipeline. It instantiates a pipeline for execution with specific inputs, outputs, and execution parameters on a cluster. It also creates a TaskRun object for each task in the pipeline.

Procedure

  1. To run the Microsoft Windows 11 installer pipeline, create the following PipelineRun manifest:

    apiVersion: tekton.dev/v1
    kind: PipelineRun
    metadata:
      generateName: windows11-installer-run-
      labels:
        pipelinerun: windows11-installer-run
    spec:
      params:
      - name: winImageDownloadURL
        value: <windows_image_download_url> 1
      - name: acceptEula
        value: "false" 2
      pipelineRef:
        params:
        - name: catalog
          value: redhat-pipelines
        - name: type
          value: artifact
        - name: kind
          value: pipeline
        - name: name
          value: windows-efi-installer
        - name: version
          value: "4.17"
        resolver: hub
      taskRunSpecs:
      - pipelineTaskName: modify-windows-iso-file
        podTemplate:
          securityContext:
            fsGroup: 107
            runAsUser: 107
    1
    Specify the URL for the Windows 11 64-bit ISO file. The product’s language must be English (United States).
    2
    Example PipelineRun objects have a special parameter, acceptEula. By setting this parameter, you are agreeing to the applicable Microsoft user license agreements for each deployment or installation of the Microsoft products. If you set it to false, the pipeline exits at the first task.
  2. Save the manifest to a file, for example windows11-installer-run.yaml, and create the PipelineRun object by running the following command:

    $ oc create -f windows11-installer-run.yaml

7.13.4. Additional resources

7.14. Advanced virtual machine management

7.14.1. Working with resource quotas for virtual machines

Create and manage resource quotas for virtual machines.

7.14.1.1. Setting resource quota limits for virtual machines

Resource quotas that only use requests automatically work with virtual machines (VMs). If your resource quota uses limits, you must manually set resource limits on VMs. Resource limits must be at least 100 MiB larger than resource requests.

Procedure

  1. Set limits for a VM by editing the VirtualMachine manifest. For example:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: with-limits
    spec:
      running: false
      template:
        spec:
          domain:
    # ...
            resources:
              requests:
                memory: 128Mi
              limits:
                memory: 256Mi  1
    1
    This configuration is supported because the limits.memory value is at least 100Mi larger than the requests.memory value.
  2. Save the VirtualMachine manifest.

7.14.1.2. Additional resources

7.14.2. Configuring the Application-Aware Quota (AAQ) Operator

You can use the Application-Aware Quota (AAQ) Operator to customize and manage resource quotas for individual components in an OpenShift Container Platform cluster.

7.14.2.1. About the AAQ Operator

The Application-Aware Quota (AAQ) Operator provides more flexible and extensible quota management compared to the native ResourceQuota object in OpenShift Container Platform.

In a multi-tenant cluster environment, where multiple workloads operate on shared infrastructure and resources, using the Kubernetes native ResourceQuota object to limit aggregate CPU and memory consumption presents infrastructure overhead and live migration challenges for OpenShift Virtualization workloads.

OpenShift Virtualization requires significant compute resource allocation to handle virtual machine (VM) live migrations and manage VM infrastructure overhead. When upgrading OpenShift Virtualization, you must migrate VMs to upgrade the virt-launcher pod. However, migrating a VM in the presence of a resource quota can cause the migration, and subsequently the upgrade, to fail.

With AAQ, you can allocate resources for VMs without interfering with cluster-level activities such as upgrades and node maintenance. The AAQ Operator also supports non-compute resources, which eliminates the need to manage both the native resource quota and AAQ API objects separately.

7.14.2.1.1. AAQ Operator controller and custom resources

The AAQ Operator introduces two new API objects defined as custom resource definitions (CRDs) for managing alternative quota implementations across multiple namespaces:

  • ApplicationAwareResourceQuota: Sets aggregate quota restrictions enforced per namespace. The ApplicationAwareResourceQuota API is compatible with the native ResourceQuota object and shares the same specification and status definitions.

    Example manifest

    apiVersion: aaq.kubevirt.io/v1alpha1
    kind: ApplicationAwareResourceQuota
    metadata:
      name: example-resource-quota
    spec:
      hard:
        requests.memory: 1Gi
        limits.memory: 1Gi
        requests.cpu/vmi: "1" 1
        requests.memory/vmi: 1Gi 2
    # ...

    1
    The maximum amount of CPU that is allowed for VM workloads in the default namespace.
    2
    The maximum amount of RAM that is allowed for VM workloads in the default namespace.
  • ApplicationAwareClusterResourceQuota: Mirrors the ApplicationAwareResourceQuota object at a cluster scope. It is compatible with the native ClusterResourceQuota API object and shares the same specification and status definitions. When creating an AAQ cluster quota, you can select multiple namespaces based on annotation selection, label selection, or both by editing the spec.selector.labels or spec.selector.annotations fields.

    Example manifest

    apiVersion: aaq.kubevirt.io/v1alpha1
    kind: ApplicationAwareClusterResourceQuota 1
    metadata:
      name: example-resource-quota
    spec:
      quota:
        hard:
          requests.memory: 1Gi
          limits.memory: 1Gi
          requests.cpu/vmi: "1"
          requests.memory/vmi: 1Gi
      selector:
        annotations: null
        labels:
          matchLabels:
            kubernetes.io/metadata.name: default
    # ...

    1
    You can only create an ApplicationAwareClusterResourceQuota object if the spec.allowApplicationAwareClusterResourceQuota field in the HyperConverged custom resource (CR) is set to true.
    Note

    If both spec.selector.labels and spec.selector.annotations fields are set, only namespaces that match both are selected.

The AAQ controller uses a scheduling gate mechanism to evaluate whether there is enough of a resource available to run a workload. If so, the scheduling gate is removed from the pod and it is considered ready for scheduling. The quota usage status is updated to indicate how much of the quota is used.

If the CPU and memory requests and limits for the workload exceed the enforced quota usage limit, the pod remains in SchedulingGated status until there is enough quota available. The AAQ controller creates an event of type Warning with details on why the quota was exceeded. You can view the event details by using the oc get events command.
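For example, you can list these warning events for a given namespace by using a standard field selector:

$ oc get events -n <namespace> --field-selector type=Warning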

Important

Pods that have the spec.nodeName field set to a specific node cannot use namespaces that match the spec.namespaceSelector labels defined in the HyperConverged CR.

7.14.2.2. Enabling the AAQ Operator

To deploy the AAQ Operator, set the enableApplicationAwareQuota feature gate to true in the HyperConverged custom resource (CR).

Prerequisites

  • You have access to the cluster as a user with cluster-admin privileges.
  • You have installed the OpenShift CLI (oc).

Procedure

  • Set the enableApplicationAwareQuota feature gate to true in the HyperConverged CR by running the following command:

    $ oc patch hco kubevirt-hyperconverged -n openshift-cnv \
     --type json -p '[{"op": "add", "path": "/spec/featureGates/enableApplicationAwareQuota", "value": true}]'

7.14.2.3. Configuring the AAQ Operator by using the CLI

You can configure the AAQ Operator by specifying the fields of the spec.applicationAwareConfig object in the HyperConverged custom resource (CR).

Prerequisites

  • You have access to the cluster as a user with cluster-admin privileges.
  • You have installed the OpenShift CLI (oc).

Procedure

  • Update the HyperConverged CR by running the following command:

    $ oc patch hco kubevirt-hyperconverged -n openshift-cnv --type merge -p '{
      "spec": {
        "applicationAwareConfig": {
          "vmiCalcConfigName": "DedicatedVirtualResources",
          "namespaceSelector": {
            "matchLabels": {
              "app": "my-app"
            }
          },
          "allowApplicationAwareClusterResourceQuota": true
        }
      }
    }'

    where:

    vmiCalcConfigName

    Specifies how resource counting is managed for pods that run virtual machine (VM) workloads. Possible values are:

    • VmiPodUsage: Counts compute resources for pods associated with VMs in the same way as native resource quotas and excludes migration-related resources.
    • VirtualResources: Counts compute resources based on the VM specifications, using the VM RAM size for memory and virtual CPUs for processing.
    • DedicatedVirtualResources (default): Similar to VirtualResources, but separates resource tracking for pods associated with VMs by adding a /vmi suffix to CPU and memory resource names. For example, requests.cpu/vmi and requests.memory/vmi.
    namespaceSelector
    Determines the namespaces for which an AAQ scheduling gate is added to pods when they are created. If a namespace selector is not defined, the AAQ Operator targets namespaces with the application-aware-quota/enable-gating label by default.
    allowApplicationAwareClusterResourceQuota
    If set to true, you can create and manage the ApplicationAwareClusterResourceQuota object. Setting this attribute to true can increase scheduling time.

7.14.2.4. Additional resources

7.14.3. Specifying nodes for virtual machines

You can place virtual machines (VMs) on specific nodes by using node placement rules.

7.14.3.1. About node placement for virtual machines

To ensure that virtual machines (VMs) run on appropriate nodes, you can configure node placement rules. You might want to do this if:

  • You have several VMs. To ensure fault tolerance, you want them to run on different nodes.
  • You have two chatty VMs. To avoid redundant inter-node routing, you want the VMs to run on the same node.
  • Your VMs require specific hardware features that are not present on all available nodes.
  • You have a pod that adds capabilities to a node, and you want to place a VM on that node so that it can use those capabilities.
Note

Virtual machine placement relies on any existing node placement rules for workloads. If workloads are excluded from specific nodes on the component level, virtual machines cannot be placed on those nodes.

You can use the following rule types in the spec field of a VirtualMachine manifest:

nodeSelector
Allows virtual machines to be scheduled on nodes that are labeled with the key-value pair or pairs that you specify in this field. The node must have labels that exactly match all listed pairs.
affinity
Enables you to use more expressive syntax to set rules that match nodes with virtual machines. For example, you can specify that a rule is a preference, rather than a hard requirement, so that virtual machines are still scheduled if the rule is not satisfied. Pod affinity, pod anti-affinity, and node affinity are supported for virtual machine placement. Pod affinity works for virtual machines because the VirtualMachine workload type is based on the Pod object.
tolerations

Allows virtual machines to be scheduled on nodes that have matching taints. If a taint is applied to a node, that node only accepts virtual machines that tolerate the taint.

Note

Affinity rules only apply during scheduling. OpenShift Container Platform does not reschedule running workloads if the constraints are no longer met.

7.14.3.2. Node placement examples

The following example YAML file snippets use nodePlacement, affinity, and tolerations fields to customize node placement for virtual machines.

7.14.3.2.1. Example: VM node placement with nodeSelector

In this example, the virtual machine requires a node that has metadata containing both example-key-1 = example-value-1 and example-key-2 = example-value-2 labels.

Warning

If there are no nodes that fit this description, the virtual machine is not scheduled.

Example VM manifest

metadata:
  name: example-vm-node-selector
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  template:
    spec:
      nodeSelector:
        example-key-1: example-value-1
        example-key-2: example-value-2
# ...

7.14.3.2.2. Example: VM node placement with pod affinity and pod anti-affinity

In this example, the VM must be scheduled on a node that has a running pod with the label example-key-1 = example-value-1. If there is no such pod running on any node, the VM is not scheduled.

If possible, the VM is not scheduled on a node that has any pod with the label example-key-2 = example-value-2. However, if all candidate nodes have a pod with this label, the scheduler ignores this constraint.

Example VM manifest

metadata:
  name: example-vm-pod-affinity
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  template:
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution: 1
          - labelSelector:
              matchExpressions:
              - key: example-key-1
                operator: In
                values:
                - example-value-1
            topologyKey: kubernetes.io/hostname
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution: 2
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: example-key-2
                  operator: In
                  values:
                  - example-value-2
              topologyKey: kubernetes.io/hostname
# ...

1
If you use the requiredDuringSchedulingIgnoredDuringExecution rule type, the VM is not scheduled if the constraint is not met.
2
If you use the preferredDuringSchedulingIgnoredDuringExecution rule type, the VM is still scheduled if the constraint is not met, as long as all required constraints are met.
7.14.3.2.3. Example: VM node placement with node affinity

In this example, the VM must be scheduled on a node that has the label example.io/example-key = example-value-1 or the label example.io/example-key = example-value-2. The constraint is met if only one of the labels is present on the node. If neither label is present, the VM is not scheduled.

If possible, the scheduler avoids nodes that have the label example-node-label-key = example-node-label-value. However, if all candidate nodes have this label, the scheduler ignores this constraint.

Example VM manifest

metadata:
  name: example-vm-node-affinity
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution: 1
            nodeSelectorTerms:
            - matchExpressions:
              - key: example.io/example-key
                operator: In
                values:
                - example-value-1
                - example-value-2
          preferredDuringSchedulingIgnoredDuringExecution: 2
          - weight: 1
            preference:
              matchExpressions:
              - key: example-node-label-key
                operator: In
                values:
                - example-node-label-value
# ...

1
If you use the requiredDuringSchedulingIgnoredDuringExecution rule type, the VM is not scheduled if the constraint is not met.
2
If you use the preferredDuringSchedulingIgnoredDuringExecution rule type, the VM is still scheduled if the constraint is not met, as long as all required constraints are met.
7.14.3.2.4. Example: VM node placement with tolerations

In this example, nodes that are reserved for virtual machines are already tainted with key=virtualization:NoSchedule. Because this virtual machine has matching tolerations, it can schedule onto the tainted nodes.

Note

A virtual machine that tolerates a taint is not required to schedule onto a node with that taint.

Example VM manifest

metadata:
  name: example-vm-tolerations
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "virtualization"
    effect: "NoSchedule"
# ...

7.14.3.3. Additional resources

7.14.4. Activating kernel samepage merging (KSM)

OpenShift Virtualization can activate kernel samepage merging (KSM) when nodes are overloaded. KSM deduplicates identical data found in the memory pages of virtual machines (VMs). If you have very similar VMs, KSM can make it possible to schedule more VMs on a single node.

Important

You must only use KSM with trusted workloads.

7.14.4.1. Prerequisites

  • Ensure that an administrator has configured KSM support on any nodes where you want OpenShift Virtualization to activate KSM.

7.14.4.2. About using OpenShift Virtualization to activate KSM

You can configure OpenShift Virtualization to activate kernel samepage merging (KSM) when nodes experience memory overload.

7.14.4.2.1. Configuration methods

You can enable or disable the KSM activation feature for all nodes by using the OpenShift Container Platform web console or by editing the HyperConverged custom resource (CR). The HyperConverged CR supports more granular configuration.

CR configuration

You can configure the KSM activation feature by editing the spec.configuration.ksmConfiguration stanza of the HyperConverged CR.

  • You enable the feature and configure settings by editing the ksmConfiguration stanza.
  • You disable the feature by deleting the ksmConfiguration stanza.
  • You can allow OpenShift Virtualization to enable KSM on only a subset of nodes by adding node selection syntax to the ksmConfiguration.nodeLabelSelector field.
Note

Even if the KSM activation feature is disabled in OpenShift Virtualization, an administrator can still enable KSM on nodes that support it.

7.14.4.2.2. KSM node labels

OpenShift Virtualization identifies nodes that are configured to support KSM and applies the following node labels:

kubevirt.io/ksm-handler-managed: "false"
This label is set to "true" when OpenShift Virtualization activates KSM on a node that is experiencing memory overload. This label is not set to "true" if an administrator activates KSM.
kubevirt.io/ksm-enabled: "false"
This label is set to "true" when KSM is activated on a node, even if OpenShift Virtualization did not activate KSM.

These labels are not applied to nodes that do not support KSM.
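For example, you can list the nodes where KSM is currently active by filtering on this label:

$ oc get nodes -l kubevirt.io/ksm-enabled=true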

7.14.4.3. Configuring KSM activation by using the web console

You can allow OpenShift Virtualization to activate kernel samepage merging (KSM) on all nodes in your cluster by using the OpenShift Container Platform web console.

Procedure

  1. From the side menu, click Virtualization → Overview.
  2. Select the Settings tab.
  3. Select the Cluster tab.
  4. Expand Resource management.
  5. Enable or disable the feature for all nodes:

    • Set Kernel Samepage Merging (KSM) to on.
    • Set Kernel Samepage Merging (KSM) to off.

7.14.4.4. Configuring KSM activation by using the CLI

You can enable or disable OpenShift Virtualization’s kernel samepage merging (KSM) activation feature by editing the HyperConverged custom resource (CR). Use this method if you want OpenShift Virtualization to activate KSM on only a subset of nodes.

Procedure

  1. Open the HyperConverged CR in your default editor by running the following command:

    $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. Edit the ksmConfiguration stanza:

    • To enable the KSM activation feature for all nodes, set the nodeLabelSelector value to {}. For example:

      apiVersion: hco.kubevirt.io/v1beta1
      kind: HyperConverged
      metadata:
        name: kubevirt-hyperconverged
        namespace: openshift-cnv
      spec:
        configuration:
          ksmConfiguration:
            nodeLabelSelector: {}
      # ...
    • To enable the KSM activation feature on a subset of nodes, edit the nodeLabelSelector field. Add syntax that matches the nodes where you want OpenShift Virtualization to enable KSM. For example, the following configuration allows OpenShift Virtualization to enable KSM on nodes where both <first_example_key> and <second_example_key> are set to "true":

      apiVersion: hco.kubevirt.io/v1beta1
      kind: HyperConverged
      metadata:
        name: kubevirt-hyperconverged
        namespace: openshift-cnv
      spec:
        configuration:
          ksmConfiguration:
            nodeLabelSelector:
              matchLabels:
                <first_example_key>: "true"
                <second_example_key>: "true"
      # ...
    • To disable the KSM activation feature, delete the ksmConfiguration stanza. For example:

      apiVersion: hco.kubevirt.io/v1beta1
      kind: HyperConverged
      metadata:
        name: kubevirt-hyperconverged
        namespace: openshift-cnv
      spec:
        configuration:
      # ...
  3. Save the file.

7.14.4.5. Additional resources

7.14.5. Configuring certificate rotation

Configure certificate rotation parameters to replace existing certificates.

7.14.5.1. Configuring certificate rotation

You can configure certificate rotation parameters during OpenShift Virtualization installation in the web console, or after installation by editing the HyperConverged custom resource (CR).

Procedure

  1. Open the HyperConverged CR by running the following command:

    $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. Edit the spec.certConfig fields as shown in the following example. To avoid overloading the system, ensure that all values are greater than or equal to 10 minutes. Express all values as strings that comply with the golang ParseDuration format.

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      certConfig:
        ca:
          duration: 48h0m0s
          renewBefore: 24h0m0s 1
        server:
          duration: 24h0m0s  2
          renewBefore: 12h0m0s  3
    1
    The value of ca.renewBefore must be less than or equal to the value of ca.duration.
    2
    The value of server.duration must be less than or equal to the value of ca.duration.
    3
    The value of server.renewBefore must be less than or equal to the value of server.duration.
  3. Apply the YAML file to your cluster.

7.14.5.2. Troubleshooting certificate rotation parameters

Deleting one or more certConfig values causes them to revert to the default values, unless the default values conflict with one of the following conditions:

  • The value of ca.renewBefore must be less than or equal to the value of ca.duration.
  • The value of server.duration must be less than or equal to the value of ca.duration.
  • The value of server.renewBefore must be less than or equal to the value of server.duration.

If the default values conflict with these conditions, you will receive an error.

If you remove the server.duration value in the following example, the default value of 24h0m0s is greater than the value of ca.duration, conflicting with the specified conditions.

Example

certConfig:
   ca:
     duration: 4h0m0s
     renewBefore: 1h0m0s
   server:
     duration: 4h0m0s
     renewBefore: 4h0m0s

This results in the following error message:

error: hyperconvergeds.hco.kubevirt.io "kubevirt-hyperconverged" could not be patched: admission webhook "validate-hco.kubevirt.io" denied the request: spec.certConfig: ca.duration is smaller than server.duration

The error message only mentions the first conflict. Review all certConfig values before you proceed.
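
For comparison, a certConfig stanza that satisfies all three conditions uses durations like those shown in the configuration procedure above:

certConfig:
   ca:
     duration: 48h0m0s
     renewBefore: 24h0m0s
   server:
     duration: 24h0m0s
     renewBefore: 12h0m0s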

7.14.6. Configuring the default CPU model

Use the defaultCPUModel setting in the HyperConverged custom resource (CR) to define a cluster-wide default CPU model.

The virtual machine (VM) CPU model depends on the availability of CPU models within the VM and the cluster.

  • If the VM does not have a defined CPU model:

    • The defaultCPUModel is automatically set using the CPU model defined at the cluster-wide level.
  • If both the VM and the cluster have a defined CPU model:

    • The VM’s CPU model takes precedence.
  • If neither the VM nor the cluster have a defined CPU model:

    • The host-model is automatically set using the CPU model defined at the host level.
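
For example, a CPU model set in the VirtualMachine manifest always overrides the cluster-wide default. The following is a minimal sketch of the VM-level field (the model name is illustrative and must be available in the cluster):

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  template:
    spec:
      domain:
        cpu:
          model: EPYC
# ...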

7.14.6.1. Configuring the default CPU model

Configure the defaultCPUModel by updating the HyperConverged custom resource (CR). You can change the defaultCPUModel while OpenShift Virtualization is running.

Note

The defaultCPUModel is case sensitive.

Prerequisites

  • Install the OpenShift CLI (oc).

Procedure

  1. Open the HyperConverged CR by running the following command:

    $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. Add the defaultCPUModel field to the CR and set the value to the name of a CPU model that exists in the cluster:

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      defaultCPUModel: "EPYC"
  3. Apply the YAML file to your cluster.

7.14.7. Using UEFI mode for virtual machines

You can boot a virtual machine (VM) in Unified Extensible Firmware Interface (UEFI) mode.

7.14.7.1. About UEFI mode for virtual machines

Unified Extensible Firmware Interface (UEFI), like legacy BIOS, initializes hardware components and operating system image files when a computer starts. UEFI supports more modern features and customization options than BIOS, enabling faster boot times.

It stores all the information about initialization and startup in a file with a .efi extension, which is stored on a special partition called EFI System Partition (ESP). The ESP also contains the boot loader programs for the operating system that is installed on the computer.

7.14.7.2. Booting virtual machines in UEFI mode

You can configure a virtual machine to boot in UEFI mode by editing the VirtualMachine manifest.

Prerequisites

  • Install the OpenShift CLI (oc).

Procedure

  1. Edit or create a VirtualMachine manifest file. Use the spec.firmware.bootloader stanza to configure UEFI mode:

    Booting in UEFI mode with secure boot active

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      labels:
        special: vm-secureboot
      name: vm-secureboot
    spec:
      template:
        metadata:
          labels:
            special: vm-secureboot
        spec:
          domain:
            devices:
              disks:
              - disk:
                  bus: virtio
                name: containerdisk
            features:
              acpi: {}
              smm:
                enabled: true 1
            firmware:
              bootloader:
                efi:
                  secureBoot: true 2
    # ...

    1
    OpenShift Virtualization requires System Management Mode (SMM) to be enabled for Secure Boot in UEFI mode to work.
    2
    OpenShift Virtualization supports a VM with or without Secure Boot when using UEFI mode. If Secure Boot is enabled, then UEFI mode is required. However, UEFI mode can be enabled without using Secure Boot.
  2. Apply the manifest to your cluster by running the following command:

    $ oc create -f <file_name>.yaml
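
The previous example enables Secure Boot. As noted in the second callout, you can also use UEFI mode without Secure Boot. The following is a minimal sketch of the firmware stanza in that case (SMM is only needed when Secure Boot is enabled):

spec:
  template:
    spec:
      domain:
        firmware:
          bootloader:
            efi:
              secureBoot: false
# ...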

7.14.7.3. Enabling persistent EFI

You can enable EFI persistence in a VM by configuring an RWX storage class at the cluster level and adjusting the settings in the EFI section of the VM.

Prerequisites

  • You must have cluster administrator privileges.
  • You must have a storage class that supports RWX access mode and FS volume mode.

Procedure

  • Enable the VMPersistentState feature gate by running the following command:

    $ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \
      --type json -p '[{"op":"replace","path":"/spec/featureGates/VMPersistentState", "value": true}]'
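
    To confirm that the feature gate is set, you can read it back from the HyperConverged CR. This quick check uses a jsonpath expression that mirrors the patch path above:

    $ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv \
      -o jsonpath='{.spec.featureGates.VMPersistentState}'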

7.14.7.4. Configuring VMs with persistent EFI

You can configure a VM to have EFI persistence enabled by editing its manifest file.

Prerequisites

  • The VMPersistentState feature gate is enabled.

Procedure

  • Edit the VM manifest file and save it to apply the settings.

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: vm
    spec:
      template:
        spec:
          domain:
            firmware:
              bootloader:
                efi:
                  persistent: true
    # ...

7.14.8. Configuring PXE booting for virtual machines

PXE booting, or network booting, is available in OpenShift Virtualization. Network booting allows a computer to boot and load an operating system or other program without requiring a locally attached storage device. For example, you can use it to choose your desired OS image from a PXE server when deploying a new host.

7.14.8.1. Prerequisites

  • A Linux bridge must be connected.
  • The PXE server must be connected to the same VLAN as the bridge.

7.14.8.2. PXE booting with a specified MAC address

As an administrator, you can boot a client over the network by first creating a NetworkAttachmentDefinition object for your PXE network. Then, reference the network attachment definition in your virtual machine instance configuration file before you start the virtual machine instance. You can also specify a MAC address in the virtual machine instance configuration file, if required by the PXE server.

Prerequisites

  • A Linux bridge must be connected.
  • The PXE server must be connected to the same VLAN as the bridge.

Procedure

  1. Configure a PXE network on the cluster:

    1. Create the network attachment definition file for PXE network pxe-net-conf:

      apiVersion: "k8s.cni.cncf.io/v1"
      kind: NetworkAttachmentDefinition
      metadata:
        name: pxe-net-conf 1
      spec:
        config: |
          {
            "cniVersion": "0.3.1",
            "name": "pxe-net-conf", 2
            "type": "bridge", 3
            "bridge": "bridge-interface", 4
            "macspoofchk": false, 5
            "vlan": 100, 6
            "disableContainerInterface": true,
            "preserveDefaultVlan": false 7
          }
      1
      The name for the NetworkAttachmentDefinition object.
      2
      The name for the configuration. It is recommended to match the configuration name to the name value of the network attachment definition.
      3
      The actual name of the Container Network Interface (CNI) plugin that provides the network for this network attachment definition. This example uses a Linux bridge CNI plugin. You can also use an OVN-Kubernetes localnet or an SR-IOV CNI plugin.
      4
      The name of the Linux bridge configured on the node.
      5
      Optional: A flag to enable the MAC spoof check. When set to true, you cannot change the MAC address of the pod or guest interface. This attribute allows only a single MAC address to exit the pod, which provides security against a MAC spoofing attack.
      6
      Optional: The VLAN tag. No additional VLAN configuration is required on the node network configuration policy.
      7
      Optional: Indicates whether the VM connects to the bridge through the default VLAN. The default value is true.
  2. Create the network attachment definition by using the file you created in the previous step:

    $ oc create -f pxe-net-conf.yaml
  3. Edit the virtual machine instance configuration file to include the details of the interface and network.

    1. Specify the network and MAC address, if required by the PXE server. If the MAC address is not specified, a value is assigned automatically.

      Ensure that bootOrder is set to 1 so that the interface boots first. In this example, the interface is connected to a network called <pxe-net>:

      interfaces:
      - masquerade: {}
        name: default
      - bridge: {}
        name: pxe-net
        macAddress: de:00:00:00:00:de
        bootOrder: 1
      Note

      Boot order is global for interfaces and disks.

    2. Assign a boot device number to the disk to ensure proper booting after operating system provisioning.

      Set the disk bootOrder value to 2:

      devices:
        disks:
        - disk:
            bus: virtio
          name: containerdisk
          bootOrder: 2
    3. Specify that the network is connected to the previously created network attachment definition. In this scenario, <pxe-net> is connected to the network attachment definition called <pxe-net-conf>:

      networks:
      - name: default
        pod: {}
      - name: pxe-net
        multus:
          networkName: pxe-net-conf
  4. Create the virtual machine instance:

    $ oc create -f vmi-pxe-boot.yaml

    Example output

      virtualmachineinstance.kubevirt.io "vmi-pxe-boot" created

  5. Wait for the virtual machine instance to run:

    $ oc get vmi vmi-pxe-boot -o yaml | grep -i phase
      phase: Running
  6. View the virtual machine instance using VNC:

    $ virtctl vnc vmi-pxe-boot
  7. Watch the boot screen to verify that the PXE boot is successful.
  8. Log in to the virtual machine instance:

    $ virtctl console vmi-pxe-boot

Verification

  1. Verify the interfaces and MAC address on the virtual machine, and confirm that the interface connected to the bridge has the specified MAC address. In this example, eth1 was used for the PXE boot and did not receive an IP address. The other interface, eth0, received an IP address from OpenShift Container Platform.

    $ ip addr

    Example output

    ...
    3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
       link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff

7.14.8.3. OpenShift Virtualization networking glossary

The following terms are used throughout OpenShift Virtualization documentation:

Container Network Interface (CNI)
A Cloud Native Computing Foundation project, focused on container network connectivity. OpenShift Virtualization uses CNI plugins to build upon the basic Kubernetes networking functionality.
Multus
A "meta" CNI plugin that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs.
Custom resource definition (CRD)
A Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource.
Network attachment definition (NAD)
A CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtual machine instances to one or more networks.
Node network configuration policy (NNCP)
A CRD introduced by the nmstate project, describing the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a NodeNetworkConfigurationPolicy manifest to the cluster.

7.14.9. Using huge pages with virtual machines

You can use huge pages as backing memory for virtual machines in your cluster.

7.14.9.1. Prerequisites

7.14.9.2. What huge pages do

Memory is managed in blocks known as pages. On most systems, a page is 4Ki. 1Mi of memory is equal to 256 pages; 1Gi of memory is 262,144 pages, and so on. CPUs have a built-in memory management unit that manages a list of these pages in hardware. The Translation Lookaside Buffer (TLB) is a small hardware cache of virtual-to-physical page mappings. If the virtual address passed in a hardware instruction can be found in the TLB, the mapping can be determined quickly. If not, a TLB miss occurs, and the system falls back to slower, software-based address translation, resulting in performance issues. Since the size of the TLB is fixed, the only way to reduce the chance of a TLB miss is to increase the page size.

A huge page is a memory page that is larger than 4Ki. On x86_64 architectures, there are two common huge page sizes: 2Mi and 1Gi. Sizes vary on other architectures. To use huge pages, code must be written so that applications are aware of them. Transparent Huge Pages (THP) attempt to automate the management of huge pages without application knowledge, but they have limitations. In particular, they are limited to 2Mi page sizes. THP can lead to performance degradation on nodes with high memory utilization or fragmentation due to the defragmenting efforts of THP, which can lock memory pages. For this reason, some applications may be designed to use, or may recommend, pre-allocated huge pages instead of THP.

In OpenShift Virtualization, virtual machines can be configured to consume pre-allocated huge pages.
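
Pre-allocated huge pages are advertised as node resources. As a quick way to see whether a node has them configured (a sketch; the values shown are illustrative), inspect the node description:

$ oc describe node <node_name> | grep hugepages

Example output

  hugepages-1Gi:                  16Gi
  hugepages-2Mi:                  0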

7.14.9.3. Configuring huge pages for virtual machines

You can configure virtual machines to use pre-allocated huge pages by including the memory.hugepages.pageSize and resources.requests.memory parameters in your virtual machine configuration.

The memory request must be divisible by the page size. For example, you cannot request 500Mi memory with a page size of 1Gi.

Note

The memory layouts of the host and the guest OS are unrelated. Huge pages requested in the virtual machine manifest apply to QEMU. Huge pages inside the guest can only be configured based on the amount of available memory of the virtual machine instance.

If you edit a running virtual machine, the virtual machine must be rebooted for the changes to take effect.

Prerequisites

  • Nodes must have pre-allocated huge pages configured.

Procedure

  1. In your virtual machine configuration, add the resources.requests.memory and memory.hugepages.pageSize parameters to the spec.domain. The following configuration snippet is for a virtual machine that requests a total of 4Gi memory with a page size of 1Gi:

    kind: VirtualMachine
    # ...
    spec:
      domain:
        resources:
          requests:
            memory: "4Gi" 1
        memory:
          hugepages:
            pageSize: "1Gi" 2
    # ...
    1
    The total amount of memory requested for the virtual machine. This value must be divisible by the page size.
    2
    The size of each huge page. Valid values for x86_64 architecture are 1Gi and 2Mi. The page size must be smaller than the requested memory.
  2. Apply the virtual machine configuration:

    $ oc apply -f <virtual_machine>.yaml

7.14.10. Enabling dedicated resources for virtual machines

To improve performance, you can dedicate node resources, such as CPU, to a virtual machine.

7.14.10.1. About dedicated resources

When you enable dedicated resources for your virtual machine, your virtual machine’s workload is scheduled on CPUs that will not be used by other processes. By using dedicated resources, you can improve the performance of the virtual machine and the accuracy of latency predictions.

7.14.10.2. Prerequisites

  • The CPU Manager must be configured on the node. Verify that the node has the cpumanager = true label before scheduling virtual machine workloads.
  • The virtual machine must be powered off.

7.14.10.3. Enabling dedicated resources for a virtual machine

You enable dedicated resources for a virtual machine on the Configuration → Scheduling tab of the VirtualMachine details page. Virtual machines that were created from a Red Hat template can be configured with dedicated resources.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization → VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. On the Configuration → Scheduling tab, click the edit icon beside Dedicated Resources.
  4. Select Schedule this workload with dedicated resources (guaranteed policy).
  5. Click Save.
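
The console setting corresponds to the dedicatedCpuPlacement field in the VM definition. The following is a minimal sketch of the equivalent VirtualMachine YAML, shown for reference only (the name and resource values are illustrative):

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  template:
    spec:
      domain:
        cpu:
          cores: 2
          dedicatedCpuPlacement: true
        resources:
          requests:
            memory: 2Gi
# ...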

7.14.11. Scheduling virtual machines

You can schedule a virtual machine (VM) on a node by ensuring that the VM’s CPU model and policy attribute are matched for compatibility with the CPU models and policy attributes supported by the node.

7.14.11.1. Policy attributes

You can schedule a virtual machine (VM) by specifying a policy attribute and a CPU feature that is matched for compatibility when the VM is scheduled on a node. A policy attribute specified for a VM determines how that VM is scheduled on a node.

force
The VM is forced to be scheduled on a node. This is true even if the host CPU does not support the VM’s CPU.

require
Default policy that applies to a VM if the VM is not configured with a specific CPU model and feature specification. If a node is not configured to support CPU node discovery with this default policy attribute or any one of the other policy attributes, VMs are not scheduled on that node. Either the host CPU must support the VM’s CPU or the hypervisor must be able to emulate the supported CPU model.

optional
The VM is added to a node if that VM is supported by the host’s physical machine CPU.

disable
The VM cannot be scheduled with CPU node discovery.

forbid
The VM is not scheduled even if the feature is supported by the host CPU and CPU node discovery is enabled.

7.14.11.2. Setting a policy attribute and CPU feature

You can set a policy attribute and CPU feature for each virtual machine (VM) to ensure that it is scheduled on a node according to policy and feature. The CPU feature that you set is verified to ensure that it is supported by the host CPU or emulated by the hypervisor.

Procedure

  • Edit the domain spec of your VM configuration file. The following example sets the CPU feature and the require policy for a virtual machine (VM):

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: myvm
    spec:
      template:
        spec:
          domain:
            cpu:
              features:
                - name: apic 1
                  policy: require 2
    1
    Name of the CPU feature for the VM.
    2
    Policy attribute for the VM.

7.14.11.3. Scheduling virtual machines with the supported CPU model

You can configure a CPU model for a virtual machine (VM) to schedule it on a node where its CPU model is supported.

Procedure

  • Edit the domain spec of your virtual machine configuration file. The following example shows a specific CPU model defined for a VM:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: myvm
    spec:
      template:
        spec:
          domain:
            cpu:
              model: Conroe 1
    1
    CPU model for the VM.

7.14.11.4. Scheduling virtual machines with the host model

When the CPU model for a virtual machine (VM) is set to host-model, the VM inherits the CPU model of the node where it is scheduled.

Procedure

  • Edit the domain spec of your VM configuration file. The following example shows host-model being specified for the virtual machine:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: myvm
    spec:
      template:
        spec:
          domain:
            cpu:
              model: host-model 1
    1
    The VM inherits the CPU model of the node where it is scheduled.

7.14.11.5. Scheduling virtual machines with a custom scheduler

You can use a custom scheduler to schedule a virtual machine (VM) on a node.

Prerequisites

  • A secondary scheduler is configured for your cluster.

Procedure

  • Add the custom scheduler to the VM configuration by editing the VirtualMachine manifest. For example:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: vm-fedora
    spec:
      running: true
      template:
        spec:
          schedulerName: my-scheduler 1
          domain:
            devices:
              disks:
                - name: containerdisk
                  disk:
                    bus: virtio
    # ...
    1
    The name of the custom scheduler. If the schedulerName value does not match an existing scheduler, the virt-launcher pod stays in a Pending state until the specified scheduler is found.

Verification

  • Verify that the VM is using the custom scheduler specified in the VirtualMachine manifest by checking the virt-launcher pod events:

    1. View the list of pods in your cluster by entering the following command:

      $ oc get pods

      Example output

      NAME                             READY   STATUS    RESTARTS   AGE
      virt-launcher-vm-fedora-dpc87    2/2     Running   0          24m

    2. Run the following command to display the pod events:

      $ oc describe pod virt-launcher-vm-fedora-dpc87

      The value of the From field in the output verifies that the scheduler name matches the custom scheduler specified in the VirtualMachine manifest:

      Example output

      [...]
      Events:
        Type    Reason     Age   From              Message
        ----    ------     ----  ----              -------
        Normal  Scheduled  21m   my-scheduler  Successfully assigned default/virt-launcher-vm-fedora-dpc87 to node01
      [...]

Additional resources

7.14.12. Configuring PCI passthrough

The Peripheral Component Interconnect (PCI) passthrough feature enables you to access and manage hardware devices from a virtual machine (VM). When PCI passthrough is configured, the PCI devices function as if they were physically attached to the guest operating system.

Cluster administrators can expose and manage host devices that are permitted to be used in the cluster by using the oc command-line interface (CLI).

7.14.12.1. Preparing nodes for GPU passthrough

You can prevent GPU operands from deploying on worker nodes that you designated for GPU passthrough.

7.14.12.1.1. Preventing NVIDIA GPU operands from deploying on nodes

If you use the NVIDIA GPU Operator in your cluster, you can apply the nvidia.com/gpu.deploy.operands=false label to nodes that you do not want to configure for GPU or vGPU operands. This label prevents the creation of the pods that configure GPU or vGPU operands and terminates the pods if they already exist.

Prerequisites

  • The OpenShift CLI (oc) is installed.

Procedure

  • Label the node by running the following command:

    $ oc label node <node_name> nvidia.com/gpu.deploy.operands=false 1
    1
    Replace <node_name> with the name of a node where you do not want to install the NVIDIA GPU operands.

Verification

  1. Verify that the label was added to the node by running the following command:

    $ oc describe node <node_name>
  2. Optional: If GPU operands were previously deployed on the node, verify their removal.

    1. Check the status of the pods in the nvidia-gpu-operator namespace by running the following command:

      $ oc get pods -n nvidia-gpu-operator

      Example output

      NAME                             READY   STATUS        RESTARTS   AGE
      gpu-operator-59469b8c5c-hw9wj    1/1     Running       0          8d
      nvidia-sandbox-validator-7hx98   1/1     Running       0          8d
      nvidia-sandbox-validator-hdb7p   1/1     Running       0          8d
      nvidia-sandbox-validator-kxwj7   1/1     Terminating   0          9d
      nvidia-vfio-manager-7w9fs        1/1     Running       0          8d
      nvidia-vfio-manager-866pz        1/1     Running       0          8d
      nvidia-vfio-manager-zqtck        1/1     Terminating   0          9d

    2. Monitor the pod status until the pods with Terminating status are removed:

      $ oc get pods -n nvidia-gpu-operator

      Example output

      NAME                             READY   STATUS    RESTARTS   AGE
      gpu-operator-59469b8c5c-hw9wj    1/1     Running   0          8d
      nvidia-sandbox-validator-7hx98   1/1     Running   0          8d
      nvidia-sandbox-validator-hdb7p   1/1     Running   0          8d
      nvidia-vfio-manager-7w9fs        1/1     Running   0          8d
      nvidia-vfio-manager-866pz        1/1     Running   0          8d
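
If you later want the NVIDIA GPU Operator to manage the node again, you can remove the label by using the standard oc label removal syntax (a trailing hyphen removes the label); the Operator then handles any redeployment of operands:

$ oc label node <node_name> nvidia.com/gpu.deploy.operands-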

7.14.12.2. Preparing host devices for PCI passthrough

7.14.12.2.1. About preparing a host device for PCI passthrough

To prepare a host device for PCI passthrough by using the CLI, create a MachineConfig object and add kernel arguments to enable the Input-Output Memory Management Unit (IOMMU). Bind the PCI device to the Virtual Function I/O (VFIO) driver and then expose it in the cluster by editing the permittedHostDevices field of the HyperConverged custom resource (CR). The permittedHostDevices list is empty when you first install the OpenShift Virtualization Operator.

To remove a PCI host device from the cluster by using the CLI, delete the PCI device information from the HyperConverged CR.

7.14.12.2.2. Adding kernel arguments to enable the IOMMU driver

To enable the IOMMU driver in the kernel, create the MachineConfig object and add the kernel arguments.

Prerequisites

  • You have cluster administrator permissions.
  • Your CPU hardware is Intel or AMD.
  • You enabled Intel Virtualization Technology for Directed I/O extensions or AMD IOMMU in the BIOS.

Procedure

  1. Create a MachineConfig object that identifies the kernel argument. The following example shows a kernel argument for an Intel CPU.

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker 1
      name: 100-worker-iommu 2
    spec:
      config:
        ignition:
          version: 3.2.0
      kernelArguments:
          - intel_iommu=on 3
    # ...
    1
    Applies the new kernel argument only to worker nodes.
    2
    The name indicates the ranking of this kernel argument (100) among the machine configs and its purpose. If you have an AMD CPU, specify the kernel argument as amd_iommu=on.
    3
    Identifies the kernel argument as intel_iommu for an Intel CPU.
  2. Create the new MachineConfig object:

    $ oc create -f 100-worker-kernel-arg-iommu.yaml

Verification

  • Verify that the new MachineConfig object was added.

    $ oc get MachineConfig

7.14.12.2.3. Binding PCI devices to the VFIO driver

To bind PCI devices to the VFIO (Virtual Function I/O) driver, obtain the values for vendor-ID and device-ID from each device and create a list with the values. Add this list to the MachineConfig object. The Machine Config Operator generates the /etc/modprobe.d/vfio.conf file on the nodes with the PCI devices and binds the PCI devices to the VFIO driver.

Prerequisites

  • You added kernel arguments to enable IOMMU for the CPU.

Procedure

  1. Run the lspci command to obtain the vendor-ID and the device-ID for the PCI device.

    $ lspci -nnv | grep -i nvidia

    Example output

    02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)

  2. Create a Butane config file, 100-worker-vfiopci.bu, binding the PCI device to the VFIO driver.

    Note

    See "Creating machine configs with Butane" for information about Butane.

    Example

    variant: openshift
    version: 4.17.0
    metadata:
      name: 100-worker-vfiopci
      labels:
        machineconfiguration.openshift.io/role: worker 1
    storage:
      files:
      - path: /etc/modprobe.d/vfio.conf
        mode: 0644
        overwrite: true
        contents:
          inline: |
            options vfio-pci ids=10de:1eb8 2
      - path: /etc/modules-load.d/vfio-pci.conf 3
        mode: 0644
        overwrite: true
        contents:
          inline: vfio-pci

    1
    Applies the new kernel argument only to worker nodes.
    2
    Specify the previously determined vendor-ID value (10de) and the device-ID value (1eb8) to bind a single device to the VFIO driver. You can add a list of multiple devices with their vendor and device information.
    3
    The file that loads the vfio-pci kernel module on the worker nodes.
  3. Use Butane to generate a MachineConfig object file, 100-worker-vfiopci.yaml, containing the configuration to be delivered to the worker nodes:

    $ butane 100-worker-vfiopci.bu -o 100-worker-vfiopci.yaml
  4. Apply the MachineConfig object to the worker nodes:

    $ oc apply -f 100-worker-vfiopci.yaml
  5. Verify that the MachineConfig object was added.

    $ oc get MachineConfig

    Example output

    NAME                             GENERATEDBYCONTROLLER                      IGNITIONVERSION  AGE
    00-master                        d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0            25h
    00-worker                        d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0            25h
    01-master-container-runtime      d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0            25h
    01-master-kubelet                d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0            25h
    01-worker-container-runtime      d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0            25h
    01-worker-kubelet                d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0            25h
    100-worker-iommu                                                            3.2.0            30s
    100-worker-vfiopci                                                          3.2.0            30s

Verification

  • Verify that the VFIO driver is loaded.

    $ lspci -nnk -d 10de:

    The output confirms that the VFIO driver is being used.

    Example output

    04:00.0 3D controller [0302]: NVIDIA Corporation GP102GL [Tesla P40] [10de:1eb8] (rev a1)
            Subsystem: NVIDIA Corporation Device [10de:1eb8]
            Kernel driver in use: vfio-pci
            Kernel modules: nouveau

7.14.12.2.4. Exposing PCI host devices in the cluster using the CLI

To expose PCI host devices in the cluster, add details about the PCI devices to the spec.permittedHostDevices.pciHostDevices array of the HyperConverged custom resource (CR).

Procedure

  1. Edit the HyperConverged CR in your default editor by running the following command:

    $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. Add the PCI device information to the spec.permittedHostDevices.pciHostDevices array. For example:

    Example configuration file

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      permittedHostDevices: 1
        pciHostDevices: 2
        - pciDeviceSelector: "10DE:1DB6" 3
          resourceName: "nvidia.com/GV100GL_Tesla_V100" 4
        - pciDeviceSelector: "10DE:1EB8"
          resourceName: "nvidia.com/TU104GL_Tesla_T4"
        - pciDeviceSelector: "8086:6F54"
          resourceName: "intel.com/qat"
          externalResourceProvider: true 5
    # ...

    1
    The host devices that are permitted to be used in the cluster.
    2
    The list of PCI devices available on the node.
    3
    The vendor-ID and the device-ID required to identify the PCI device.
    4
    The name of a PCI host device.
    5
    Optional: Setting this field to true indicates that the resource is provided by an external device plugin. OpenShift Virtualization allows the usage of this device in the cluster but leaves the allocation and monitoring to an external device plugin.
    Note

    The above example snippet shows two PCI host devices that are named nvidia.com/GV100GL_Tesla_V100 and nvidia.com/TU104GL_Tesla_T4 added to the list of permitted host devices in the HyperConverged CR. These devices have been tested and verified to work with OpenShift Virtualization.

  3. Save your changes and exit the editor.

Verification

  • Verify that the PCI host devices were added to the node by running the following command. The example output shows that there is one device each associated with the nvidia.com/GV100GL_Tesla_V100, nvidia.com/TU104GL_Tesla_T4, and intel.com/qat resource names.

    $ oc describe node <node_name>

    Example output

    Capacity:
      cpu:                            64
      devices.kubevirt.io/kvm:        110
      devices.kubevirt.io/tun:        110
      devices.kubevirt.io/vhost-net:  110
      ephemeral-storage:              915128Mi
      hugepages-1Gi:                  0
      hugepages-2Mi:                  0
      memory:                         131395264Ki
      nvidia.com/GV100GL_Tesla_V100   1
      nvidia.com/TU104GL_Tesla_T4     1
      intel.com/qat:                  1
      pods:                           250
    Allocatable:
      cpu:                            63500m
      devices.kubevirt.io/kvm:        110
      devices.kubevirt.io/tun:        110
      devices.kubevirt.io/vhost-net:  110
      ephemeral-storage:              863623130526
      hugepages-1Gi:                  0
      hugepages-2Mi:                  0
      memory:                         130244288Ki
      nvidia.com/GV100GL_Tesla_V100   1
      nvidia.com/TU104GL_Tesla_T4     1
      intel.com/qat:                  1
      pods:                           250

7.14.12.2.5. Removing PCI host devices from the cluster using the CLI

To remove a PCI host device from the cluster, delete the information for that device from the HyperConverged custom resource (CR).

Procedure

  1. Edit the HyperConverged CR in your default editor by running the following command:

    $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. Remove the PCI device information from the spec.permittedHostDevices.pciHostDevices array by deleting the pciDeviceSelector, resourceName and externalResourceProvider (if applicable) fields for the appropriate device. In this example, the intel.com/qat resource has been deleted.

    Example configuration file

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      permittedHostDevices:
        pciHostDevices:
        - pciDeviceSelector: "10DE:1DB6"
          resourceName: "nvidia.com/GV100GL_Tesla_V100"
        - pciDeviceSelector: "10DE:1EB8"
          resourceName: "nvidia.com/TU104GL_Tesla_T4"
    # ...

  3. Save your changes and exit the editor.

Verification

  • Verify that the PCI host device was removed from the node by running the following command. The example output shows that there are zero devices associated with the intel.com/qat resource name.

    $ oc describe node <node_name>

    Example output

    Capacity:
      cpu:                            64
      devices.kubevirt.io/kvm:        110
      devices.kubevirt.io/tun:        110
      devices.kubevirt.io/vhost-net:  110
      ephemeral-storage:              915128Mi
      hugepages-1Gi:                  0
      hugepages-2Mi:                  0
      memory:                         131395264Ki
      nvidia.com/GV100GL_Tesla_V100   1
      nvidia.com/TU104GL_Tesla_T4     1
      intel.com/qat:                  0
      pods:                           250
    Allocatable:
      cpu:                            63500m
      devices.kubevirt.io/kvm:        110
      devices.kubevirt.io/tun:        110
      devices.kubevirt.io/vhost-net:  110
      ephemeral-storage:              863623130526
      hugepages-1Gi:                  0
      hugepages-2Mi:                  0
      memory:                         130244288Ki
      nvidia.com/GV100GL_Tesla_V100   1
      nvidia.com/TU104GL_Tesla_T4     1
      intel.com/qat:                  0
      pods:                           250

7.14.12.3. Configuring virtual machines for PCI passthrough

After the PCI devices have been added to the cluster, you can assign them to virtual machines. The PCI devices are now available as if they are physically connected to the virtual machines.

7.14.12.3.1. Assigning a PCI device to a virtual machine

When a PCI device is available in a cluster, you can assign it to a virtual machine and enable PCI passthrough.

Procedure

  • Assign the PCI device to a virtual machine as a host device.

    Example

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    spec:
      domain:
        devices:
          hostDevices:
          - deviceName: nvidia.com/TU104GL_Tesla_T4 1
            name: hostdevices1

    1
    The name of the PCI device that is permitted on the cluster as a host device. The virtual machine can access this host device.

Verification

  • Use the following command to verify that the host device is available from the virtual machine.

    $ lspci -nnk | grep NVIDIA

    Example output

    02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)

7.14.12.4. Additional resources

7.14.13. Configuring virtual GPUs

If you have graphics processing unit (GPU) cards, OpenShift Virtualization can automatically create virtual GPUs (vGPUs) that you can assign to virtual machines (VMs).

7.14.13.1. About using virtual GPUs with OpenShift Virtualization

Some graphics processing unit (GPU) cards support the creation of virtual GPUs (vGPUs). OpenShift Virtualization can automatically create vGPUs and other mediated devices if an administrator provides configuration details in the HyperConverged custom resource (CR). This automation is especially useful for large clusters.

Note

Refer to your hardware vendor’s documentation for functionality and support details.

Mediated device
A physical device that is divided into one or more virtual devices. A vGPU is a type of mediated device (mdev); the performance of the physical GPU is divided among the virtual devices. You can assign mediated devices to one or more virtual machines (VMs), but the number of guests must be compatible with your GPU. Some GPUs do not support multiple guests.

7.14.13.2. Preparing hosts for mediated devices

You must enable the Input-Output Memory Management Unit (IOMMU) driver before you can configure mediated devices.

7.14.13.2.1. Adding kernel arguments to enable the IOMMU driver

To enable the IOMMU driver in the kernel, create the MachineConfig object and add the kernel arguments.

Prerequisites

  • You have cluster administrator permissions.
  • Your CPU hardware is Intel or AMD.
  • You enabled Intel Virtualization Technology for Directed I/O extensions or AMD IOMMU in the BIOS.

Procedure

  1. Create a MachineConfig object that identifies the kernel argument. The following example shows a kernel argument for an Intel CPU.

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker 1
      name: 100-worker-iommu 2
    spec:
      config:
        ignition:
          version: 3.2.0
      kernelArguments:
          - intel_iommu=on 3
    # ...
    1
    Applies the new kernel argument only to worker nodes.
    2
    The name indicates the ranking of this kernel argument (100) among the machine configs and its purpose. If you have an AMD CPU, specify the kernel argument as amd_iommu=on.
    3
    Identifies the kernel argument as intel_iommu for an Intel CPU.
  2. Create the new MachineConfig object:

    $ oc create -f 100-worker-kernel-arg-iommu.yaml

Verification

  • Verify that the new MachineConfig object was added.

    $ oc get MachineConfig

7.14.13.3. Configuring the NVIDIA GPU Operator

You can use the NVIDIA GPU Operator to provision worker nodes for running GPU-accelerated virtual machines (VMs) in OpenShift Virtualization.

Note

The NVIDIA GPU Operator is supported only by NVIDIA. For more information, see Obtaining Support from NVIDIA in the Red Hat Knowledgebase.

7.14.13.3.1. About using the NVIDIA GPU Operator

You can use the NVIDIA GPU Operator with OpenShift Virtualization to rapidly provision worker nodes for running GPU-enabled virtual machines (VMs). The NVIDIA GPU Operator manages NVIDIA GPU resources in an OpenShift Container Platform cluster and automates tasks that are required when preparing nodes for GPU workloads.

Before you can deploy application workloads to a GPU resource, you must install components such as the NVIDIA drivers that enable the compute unified device architecture (CUDA), Kubernetes device plugin, container runtime, and other features, such as automatic node labeling and monitoring. By automating these tasks, you can quickly scale the GPU capacity of your infrastructure. The NVIDIA GPU Operator can especially facilitate provisioning complex artificial intelligence and machine learning (AI/ML) workloads.

7.14.13.3.2. Options for configuring mediated devices

There are two available methods for configuring mediated devices when using the NVIDIA GPU Operator. The method that Red Hat tests uses OpenShift Virtualization features to schedule mediated devices, while the NVIDIA method only uses the GPU Operator.

Using the NVIDIA GPU Operator to configure mediated devices
This method exclusively uses the NVIDIA GPU Operator to configure mediated devices. To use this method, refer to NVIDIA GPU Operator with OpenShift Virtualization in the NVIDIA documentation.
Using OpenShift Virtualization to configure mediated devices

This method, which is tested by Red Hat, uses OpenShift Virtualization’s capabilities to configure mediated devices. In this case, the NVIDIA GPU Operator is only used for installing drivers with the NVIDIA vGPU Manager. The GPU Operator does not configure mediated devices.

When using the OpenShift Virtualization method, you still configure the GPU Operator by following the NVIDIA documentation. However, this method differs from the NVIDIA documentation in the following ways:

  • You must not overwrite the default disableMDEVConfiguration: false setting in the HyperConverged custom resource (CR).

    Important

    Setting this feature gate as described in the NVIDIA documentation prevents OpenShift Virtualization from configuring mediated devices.

  • You must configure your ClusterPolicy manifest so that it matches the following example:

    Example manifest

    kind: ClusterPolicy
    apiVersion: nvidia.com/v1
    metadata:
      name: gpu-cluster-policy
    spec:
      operator:
        defaultRuntime: crio
        use_ocp_driver_toolkit: true
        initContainer: {}
      sandboxWorkloads:
        enabled: true
        defaultWorkload: vm-vgpu
      driver:
        enabled: false 1
      dcgmExporter: {}
      dcgm:
        enabled: true
      daemonsets: {}
      devicePlugin: {}
      gfd: {}
      migManager:
        enabled: true
      nodeStatusExporter:
        enabled: true
      mig:
        strategy: single
      toolkit:
        enabled: true
      validator:
        plugin:
          env:
            - name: WITH_WORKLOAD
              value: "true"
      vgpuManager:
        enabled: true 2
        repository: <vgpu_container_registry> 3
        image: <vgpu_image_name>
        version: nvidia-vgpu-manager
      vgpuDeviceManager:
        enabled: false 4
        config:
          name: vgpu-devices-config
          default: default
      sandboxDevicePlugin:
        enabled: false 5
      vfioManager:
        enabled: false 6

    1
    Set this value to false. Not required for VMs.
    2
    Set this value to true. Required for using vGPUs with VMs.
    3
    Substitute <vgpu_container_registry> with your registry value.
    4
    Set this value to false to allow OpenShift Virtualization to configure mediated devices instead of the NVIDIA GPU Operator.
    5
    Set this value to false to prevent discovery and advertising of the vGPU devices to the kubelet.
    6
    Set this value to false to prevent loading the vfio-pci driver. Instead, follow the OpenShift Virtualization documentation to configure PCI passthrough.

Additional resources

7.14.13.4. How vGPUs are assigned to nodes

For each physical device, OpenShift Virtualization configures the following values:

  • A single mdev type.
  • The maximum number of instances of the selected mdev type.

The cluster architecture affects how devices are created and assigned to nodes.

Large cluster with multiple cards per node

On nodes with multiple cards that can support similar vGPU types, the relevant device types are created in a round-robin manner. For example:

# ...
mediatedDevicesConfiguration:
  mediatedDeviceTypes:
  - nvidia-222
  - nvidia-228
  - nvidia-105
  - nvidia-108
# ...

In this scenario, each node has two cards, both of which support the following vGPU types:

nvidia-105
# ...
nvidia-108
nvidia-217
nvidia-299
# ...

On each node, OpenShift Virtualization creates the following vGPUs:

  • 16 vGPUs of type nvidia-105 on the first card.
  • 2 vGPUs of type nvidia-108 on the second card.

One node has a single card that supports more than one requested vGPU type

OpenShift Virtualization uses the supported type that comes first on the mediatedDeviceTypes list.

For example, the card on a node supports nvidia-223 and nvidia-224. The following mediatedDeviceTypes list is configured:

# ...
mediatedDevicesConfiguration:
  mediatedDeviceTypes:
  - nvidia-22
  - nvidia-223
  - nvidia-224
# ...

In this example, OpenShift Virtualization uses the nvidia-223 type.

7.14.13.5. Managing mediated devices

Before you can assign mediated devices to virtual machines, you must create the devices and expose them to the cluster. You can also reconfigure and remove mediated devices.

7.14.13.5.1. Creating and exposing mediated devices

As an administrator, you can create mediated devices and expose them to the cluster by editing the HyperConverged custom resource (CR).

Prerequisites

  • You enabled the Input-Output Memory Management Unit (IOMMU) driver.
  • If your hardware vendor provides drivers, you installed them on the nodes where you want to create mediated devices.

Procedure

  1. Open the HyperConverged CR in your default editor by running the following command:

    $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv

    Example 7.1. Example configuration file with mediated devices configured

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      mediatedDevicesConfiguration:
        mediatedDeviceTypes:
        - nvidia-231
        nodeMediatedDeviceTypes:
        - mediatedDeviceTypes:
          - nvidia-233
          nodeSelector:
            kubernetes.io/hostname: node-11.redhat.com
      permittedHostDevices:
        mediatedDevices:
        - mdevNameSelector: GRID T4-2Q
          resourceName: nvidia.com/GRID_T4-2Q
        - mdevNameSelector: GRID T4-8Q
          resourceName: nvidia.com/GRID_T4-8Q
    # ...
  2. Create mediated devices by adding them to the spec.mediatedDevicesConfiguration stanza:

    Example YAML snippet

    # ...
    spec:
      mediatedDevicesConfiguration:
        mediatedDeviceTypes: 1
        - <device_type>
        nodeMediatedDeviceTypes: 2
        - mediatedDeviceTypes: 3
          - <device_type>
          nodeSelector: 4
            <node_selector_key>: <node_selector_value>
    # ...

    1
    Required: Configures global settings for the cluster.
    2
    Optional: Overrides the global configuration for a specific node or group of nodes. Must be used with the global mediatedDeviceTypes configuration.
    3
    Required if you use nodeMediatedDeviceTypes. Overrides the global mediatedDeviceTypes configuration for the specified nodes.
    4
    Required if you use nodeMediatedDeviceTypes. Must include a key:value pair.
    Important

    Before OpenShift Virtualization 4.14, the mediatedDeviceTypes field was named mediatedDevicesTypes. Ensure that you use the correct field name when configuring mediated devices.

  3. Identify the name selector and resource name values for the devices that you want to expose to the cluster. You will add these values to the HyperConverged CR in the next step.

    1. Find the resourceName value by running the following command:

      $ oc get $NODE -o json \
        | jq '.status.allocatable
          | with_entries(select(.key | startswith("nvidia.com/")))
          | with_entries(select(.value != "0"))'
    2. Find the mdevNameSelector value by viewing the contents of /sys/bus/pci/devices/<slot>:<bus>:<domain>.<function>/mdev_supported_types/<type>/name, substituting the correct values for your system.

      For example, the name file for the nvidia-231 type contains the selector string GRID T4-2Q. Using GRID T4-2Q as the mdevNameSelector value allows nodes to use the nvidia-231 type.

  4. Expose the mediated devices to the cluster by adding the mdevNameSelector and resourceName values to the spec.permittedHostDevices.mediatedDevices stanza of the HyperConverged CR:

    Example YAML snippet

    # ...
      permittedHostDevices:
        mediatedDevices:
        - mdevNameSelector: GRID T4-2Q 1
          resourceName: nvidia.com/GRID_T4-2Q 2
    # ...

    1
    Exposes the mediated devices that map to this value on the host.
    2
    Matches the resource name that is allocated on the node.
  5. Save your changes and exit the editor.

Verification

  • Optional: Confirm that a device was added to a specific node by running the following command:

    $ oc describe node <node_name>
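
For reference, the mdevNameSelector lookup in step 3 of the preceding procedure is a plain file read on the node. A hypothetical example, assuming a card at PCI address 0000:65:00.0 that supports the nvidia-231 type:

$ cat /sys/bus/pci/devices/0000:65:00.0/mdev_supported_types/nvidia-231/name

Example output

GRID T4-2Q
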
7.14.13.5.2. About changing and removing mediated devices

You can reconfigure or remove mediated devices in several ways:

  • Edit the HyperConverged CR and change the contents of the mediatedDeviceTypes stanza.
  • Change the node labels that match the nodeMediatedDeviceTypes node selector.
  • Remove the device information from the spec.mediatedDevicesConfiguration and spec.permittedHostDevices stanzas of the HyperConverged CR.

    Note

    If you remove the device information from the spec.permittedHostDevices stanza without also removing it from the spec.mediatedDevicesConfiguration stanza, you cannot create a new mediated device type on the same node. To properly remove mediated devices, remove the device information from both stanzas.

7.14.13.5.3. Removing mediated devices from the cluster

To remove a mediated device from the cluster, delete the information for that device from the HyperConverged custom resource (CR).

Procedure

  1. Edit the HyperConverged CR in your default editor by running the following command:

    $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. Remove the device information from the spec.mediatedDevicesConfiguration and spec.permittedHostDevices stanzas of the HyperConverged CR. Removing both entries ensures that you can later create a new mediated device type on the same node. For example:

    Example configuration file

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      mediatedDevicesConfiguration:
        mediatedDeviceTypes: 1
          - nvidia-231
      permittedHostDevices:
        mediatedDevices: 2
        - mdevNameSelector: GRID T4-2Q
          resourceName: nvidia.com/GRID_T4-2Q

    1
    To remove the nvidia-231 device type, delete it from the mediatedDeviceTypes array.
    2
    To remove the GRID T4-2Q device, delete the mdevNameSelector field and its corresponding resourceName field.
  3. Save your changes and exit the editor.

7.14.13.6. Using mediated devices

You can assign mediated devices to one or more virtual machines.

7.14.13.6.1. Assigning a vGPU to a VM by using the CLI

Assign mediated devices such as virtual GPUs (vGPUs) to virtual machines (VMs).

Prerequisites

  • The mediated device is configured in the HyperConverged custom resource.
  • The VM is stopped.

Procedure

  • Assign the mediated device to a virtual machine (VM) by editing the spec.domain.devices.gpus stanza of the VirtualMachine manifest:

    Example virtual machine manifest

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    spec:
      domain:
        devices:
          gpus:
          - deviceName: nvidia.com/TU104GL_Tesla_T4 1
            name: gpu1 2
          - deviceName: nvidia.com/GRID_T4-2Q
            name: gpu2

    1
    The resource name associated with the mediated device.
    2
    A name to identify the device on the VM.

Verification

  • To verify that the device is available from the virtual machine, run the following command, substituting <device_name> with the deviceName value from the VirtualMachine manifest:

    $ lspci -nnk | grep <device_name>

7.14.13.6.2. Assigning a vGPU to a VM by using the web console

You can assign virtual GPUs to virtual machines by using the OpenShift Container Platform web console.

Note

You can add hardware devices to virtual machines created from customized templates or a YAML file. You cannot add devices to pre-supplied boot source templates for specific operating systems.

Prerequisites

  • The vGPU is configured as a mediated device in your cluster.

    • To view the devices that are connected to your cluster, click Compute → Hardware Devices from the side menu.
  • The VM is stopped.

Procedure

  1. In the OpenShift Container Platform web console, click Virtualization → VirtualMachines from the side menu.
  2. Select the VM that you want to assign the device to.
  3. On the Details tab, click GPU devices.
  4. Click Add GPU device.
  5. Enter an identifying value in the Name field.
  6. From the Device name list, select the device that you want to add to the VM.
  7. Click Save.

Verification

  • To confirm that the devices were added to the VM, click the YAML tab and review the VirtualMachine configuration. Mediated devices are added to the spec.domain.devices stanza.

7.14.13.7. Additional resources

7.14.14. Configuring USB host passthrough

As a cluster administrator, you can expose USB devices in a cluster, making them available for virtual machine (VM) owners to assign to VMs. Enabling this passthrough of USB devices allows a guest to connect to actual USB hardware that is attached to an OpenShift Container Platform node, as if the hardware and the VM are physically connected.

You can expose a USB device by first enabling host passthrough and then configuring the VM to use the USB device.

7.14.14.1. Enabling USB host passthrough

You can enable USB host passthrough at the cluster level.

You specify a resource name and USB device name for each device that you want to add and assign to a virtual machine (VM). You can allocate more than one device, each of which is known as a selector in the HyperConverged (HCO) custom resource (CR), to a single resource name. If you have multiple identical USB devices on the cluster, you can choose to allocate a VM to a specific device.

Prerequisites

  • You have access to an OpenShift Container Platform cluster as a user who has the cluster-admin role.

Procedure

  1. Identify the USB device vendor and product by running the following command:

    $ lsusb
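
    Example output (hypothetical; the ID field shows the vendor and product values that you use as selectors)

      Bus 001 Device 003: ID 045e:07a5 <device_description>
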
  2. Open the HCO CR by running the following command:

    $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  3. Add a USB device to the permittedHostDevices stanza, as shown in the following example:

    Example YAML snippet

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
       name: kubevirt-hyperconverged
       namespace: openshift-cnv
    spec:
      configuration:
        permittedHostDevices: 1
          usbHostDevices: 2
            - resourceName: kubevirt.io/peripherals 3
              selectors:
                - vendor: "045e"
                  product: "07a5"
                - vendor: "062a"
                  product: "4102"
                - vendor: "072f"
                  product: "b100"

    1
    Lists the host devices that have permission to be used in the cluster.
    2
    Lists the available USB devices.
    3
    Uses resourceName: deviceName for each device you want to add and assign to the VM. In this example, the resource is bound to three devices, each of which is identified by vendor and product and is known as a selector.
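
After you save the change, nodes that have matching USB devices attached should advertise the resource name in their allocatable resources. A quick, hypothetical check:

$ oc describe node <node_name> | grep kubevirt.io/peripherals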

7.14.14.2. Configuring a virtual machine connection to a USB device

You can configure virtual machine (VM) access to a USB device. This configuration allows a guest to connect to actual USB hardware that is attached to an OpenShift Container Platform node, as if the hardware and the VM are physically connected.

Procedure

  1. Locate the USB device by running the following command:

    $ ls /dev/serial/by-id/usb-VENDOR_device_name
  2. Open the virtual machine instance custom resource (CR) by running the following command:

    $ oc edit vmi vmi-usb
  3. Edit the CR by adding a USB device, as shown in the following example:

    Example configuration

    apiVersion: kubevirt.io/v1
    kind: VirtualMachineInstance
    metadata:
      labels:
        special: vmi-usb
      name: vmi-usb 1
    spec:
      domain:
        devices:
          hostDevices:
          - deviceName: kubevirt.io/peripherals
            name: local-peripherals
    # ...

    1
    The name of the VirtualMachineInstance (VMI).

7.14.15. Enabling descheduler evictions on virtual machines

You can use the descheduler to evict pods so that the pods can be rescheduled onto more appropriate nodes. If the pod is a virtual machine, the pod eviction causes the virtual machine to be live migrated to another node.

7.14.15.1. Descheduler profiles

Use the LongLifecycle profile to enable the descheduler on a virtual machine. This is the only descheduler profile currently available for OpenShift Virtualization. To ensure proper scheduling, create VMs with CPU and memory requests for the expected load.

LongLifecycle

This profile balances resource usage between nodes and enables the following strategies:

  • RemovePodsHavingTooManyRestarts: removes pods whose containers have been restarted too many times and pods where the sum of restarts over all containers (including Init Containers) is more than 100. Restarting the VM guest operating system does not increase this count.
  • LowNodeUtilization: evicts pods from overutilized nodes when there are any underutilized nodes. The destination node for the evicted pod will be determined by the scheduler.

    • A node is considered underutilized if its usage is below 20% for all thresholds (CPU, memory, and number of pods).
    • A node is considered overutilized if its usage is above 50% for any of the thresholds (CPU, memory, and number of pods).

7.14.15.2. Installing the descheduler

The descheduler is not available by default. To enable the descheduler, you must install the Kube Descheduler Operator from OperatorHub and enable one or more descheduler profiles.

By default, the descheduler runs in predictive mode, which means that it only simulates pod evictions. You must change the mode to automatic for the descheduler to perform the pod evictions.

Important

If you have enabled hosted control planes in your cluster, set a custom priority threshold to lower the chance that pods in the hosted control plane namespaces are evicted. Set the priority threshold class name to hypershift-control-plane, because it has the lowest priority value (100000000) of the hosted control plane priority classes.
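
For example, the following KubeDescheduler snippet is a minimal sketch of setting the priority threshold class name, assuming that the operator exposes the thresholdPriorityClassName profile customization:

    apiVersion: operator.openshift.io/v1
    kind: KubeDescheduler
    metadata:
      name: cluster
      namespace: openshift-kube-descheduler-operator
    spec:
      profileCustomizations:
        # Assumed field: limits evictions to pods below this priority class
        thresholdPriorityClassName: hypershift-control-plane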

Prerequisites

  • You are logged in to OpenShift Container Platform as a user with the cluster-admin role.
  • Access to the OpenShift Container Platform web console.

Procedure

  1. Log in to the OpenShift Container Platform web console.
  2. Create the required namespace for the Kube Descheduler Operator.

    1. Navigate to Administration → Namespaces and click Create Namespace.
    2. Enter openshift-kube-descheduler-operator in the Name field, enter openshift.io/cluster-monitoring=true in the Labels field to enable descheduler metrics, and click Create.
  3. Install the Kube Descheduler Operator.

    1. Navigate to Operators → OperatorHub.
    2. Type Kube Descheduler Operator into the filter box.
    3. Select the Kube Descheduler Operator and click Install.
    4. On the Install Operator page, select A specific namespace on the cluster. Select openshift-kube-descheduler-operator from the drop-down menu.
    5. Adjust the values for the Update Channel and Approval Strategy to the desired values.
    6. Click Install.
  4. Create a descheduler instance.

    1. From the Operators → Installed Operators page, click the Kube Descheduler Operator.
    2. Select the Kube Descheduler tab and click Create KubeDescheduler.
    3. Edit the settings as necessary.

      1. To evict pods instead of simulating the evictions, change the Mode field to Automatic.
      2. Expand the Profiles section and select LongLifecycle. The AffinityAndTaints profile is enabled by default.

        Important

        The only profile currently available for OpenShift Virtualization is LongLifecycle.

You can also configure the profiles and settings for the descheduler later using the OpenShift CLI (oc).
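
For example, you can open the KubeDescheduler object for editing with a command such as the following (a sketch; the object name cluster and the namespace match the instance created in the previous procedure):

    $ oc edit kubedescheduler cluster -n openshift-kube-descheduler-operator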

7.14.15.3. Enabling descheduler evictions on a virtual machine (VM)

After the descheduler is installed, you can enable descheduler evictions on your VM by adding an annotation to the VirtualMachine custom resource (CR).

Prerequisites

  • Install the descheduler in the OpenShift Container Platform web console or OpenShift CLI (oc).
  • Ensure that the VM is not running.

Procedure

  1. Before starting the VM, add the descheduler.alpha.kubernetes.io/evict annotation to the VirtualMachine CR:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    spec:
      template:
        metadata:
          annotations:
            descheduler.alpha.kubernetes.io/evict: "true"
  2. Configure the KubeDescheduler object with the LongLifecycle profile and enable background evictions for improved VM eviction stability during live migration:

    apiVersion: operator.openshift.io/v1
    kind: KubeDescheduler
    metadata:
      name: cluster
      namespace: openshift-kube-descheduler-operator
    spec:
      deschedulingIntervalSeconds: 3600
      profiles:
      - LongLifecycle 1
      mode: Predictive 2
      profileCustomizations:
        devEnableEvictionsInBackground: true 3
    1
    You can only set the LongLifecycle profile. This profile balances resource usage between nodes.
    2
    By default, the descheduler does not evict pods. To evict pods, set mode to Automatic.
    3
    Enabling devEnableEvictionsInBackground allows evictions to occur in the background, improving stability and mitigating oscillatory behavior during live migrations.

The descheduler is now enabled on the VM.
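
To confirm that the annotation is present, you can, for example, inspect the VM template annotations (a sketch, assuming a VM named <vm_name>):

    $ oc get vm <vm_name> -o jsonpath='{.spec.template.metadata.annotations}'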

7.14.15.4. Additional resources

7.14.16. About high availability for virtual machines

You can enable high availability for virtual machines (VMs) by manually deleting a failed node to trigger VM failover or by configuring remediating nodes.

Manually deleting a failed node

If a node fails and machine health checks are not deployed on your cluster, virtual machines with runStrategy: Always configured are not automatically relocated to healthy nodes. To trigger VM failover, you must manually delete the Node object.
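
For example, a minimal sketch of triggering failover, assuming the failed node is named <node_name>:

    $ oc delete node <node_name>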

See Deleting a failed node to trigger virtual machine failover.

Configuring remediating nodes

You can configure remediating nodes by installing the Self Node Remediation Operator or the Fence Agents Remediation Operator from the OperatorHub and enabling machine health checks or node remediation checks.

For more information on remediation, fencing, and maintaining nodes, see the Workload Availability for Red Hat OpenShift documentation.

7.14.17. Virtual machine control plane tuning

OpenShift Virtualization offers the following tuning options at the control-plane level:

  • The highBurst profile, which uses fixed QPS and burst rates, to create hundreds of virtual machines (VMs) in one batch
  • Migration setting adjustment based on workload type

7.14.17.1. Configuring a highBurst profile

Use the highBurst profile to create and maintain a large number of virtual machines (VMs) in one cluster.

Procedure

  • Apply the following patch to enable the highBurst tuning policy profile:

    $ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \
      --type=json -p='[{"op": "add", "path": "/spec/tuningPolicy", "value": "highBurst"}]'

Verification

  • Run the following command to verify the highBurst tuning policy profile is enabled:

    $ oc get kubevirt.kubevirt.io/kubevirt-kubevirt-hyperconverged -n openshift-cnv -o go-template \
      --template='{{range $config, $value := .spec.configuration}}{{if eq $config "apiConfiguration" "webhookConfiguration" "controllerConfiguration" "handlerConfiguration"}}{{"\n"}}{{$config}} = {{$value}}{{end}}{{end}}{{"\n"}}'

7.14.18. Assigning compute resources

In OpenShift Virtualization, compute resources assigned to virtual machines (VMs) are backed by either guaranteed CPUs or time-sliced CPU shares.

Guaranteed CPUs, also known as CPU reservation, dedicate CPU cores or threads to a specific workload, which makes them unavailable to any other workload. Assigning guaranteed CPUs to a VM ensures that the VM will have sole access to a reserved physical CPU. Enable dedicated resources for VMs to use a guaranteed CPU.

Time-sliced CPUs dedicate a slice of time on a shared physical CPU to each workload. You can specify the size of the slice during VM creation, or when the VM is offline. By default, each vCPU receives 100 milliseconds, or 1/10 of a second, of physical CPU time.

The type of CPU reservation depends on the instance type or VM configuration.
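
For example, the following snippet is a minimal sketch (not a complete manifest) of enabling dedicated resources for a VM by setting dedicatedCpuPlacement in the CPU configuration of the VirtualMachine spec:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    spec:
      template:
        spec:
          domain:
            cpu:
              cores: 4
              dedicatedCpuPlacement: true # request guaranteed (dedicated) CPUs for this VM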

7.14.18.1. Overcommitting CPU resources

Time-slicing allows multiple virtual CPUs (vCPUs) to share a single physical CPU. This is known as CPU overcommitment. VMs with guaranteed CPUs cannot be overcommitted.

Configure CPU overcommitment to prioritize VM density over performance when assigning CPUs to VMs. With higher CPU overcommitment, more VMs fit onto a given node.

7.14.18.2. Setting the CPU allocation ratio

The CPU Allocation Ratio specifies the degree of overcommitment by mapping vCPUs to time slices of physical CPUs.

For example, a mapping or ratio of 10:1 maps 10 virtual CPUs to 1 physical CPU by using time slices.

To change the default number of vCPUs mapped to each physical CPU, set the vmiCPUAllocationRatio value in the HyperConverged CR. The pod CPU request is calculated by multiplying the number of vCPUs by the reciprocal of the CPU allocation ratio. For example, if vmiCPUAllocationRatio is set to 10, OpenShift Virtualization requests 10 times fewer CPUs on the pod for that VM, so a VM with 10 vCPUs results in a pod CPU request of 1 CPU.

Procedure

Set the vmiCPUAllocationRatio value in the HyperConverged CR to define a node CPU allocation ratio.

  1. Open the HyperConverged CR in your default editor by running the following command:

    $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. Set the vmiCPUAllocationRatio:

    ...
    spec:
      resourceRequirements:
        vmiCPUAllocationRatio: 1 1
    # ...
    1
    When vmiCPUAllocationRatio is set to 1, the maximum number of vCPUs is requested for the pod.

7.14.18.3. Additional resources

7.14.19. About multi-queue functionality

Use multi-queue functionality to scale network throughput and performance on virtual machines (VMs) with multiple vCPUs.

By default, the queueCount value, which is derived from the domain XML, is determined by the number of vCPUs allocated to a VM. Network performance does not scale as the number of vCPUs increases. Additionally, because virtio-net has only one Tx and Rx queue, guests cannot transmit or receive packets in parallel.

Note

Enabling virtio-net multiqueue does not offer significant improvements when the number of vNICs in a guest instance is proportional to the number of vCPUs.

7.14.19.1. Known limitations

  • MSI vectors are still consumed if virtio-net multiqueue is enabled in the host but not enabled in the guest operating system by the administrator.
  • Each virtio-net queue consumes 64 KiB of kernel memory for the vhost driver.
  • Starting a VM with more than 16 CPUs results in no connectivity if networkInterfaceMultiqueue is set to 'true' (CNV-16107).

7.14.19.2. Enabling multi-queue functionality

Enable multi-queue functionality for interfaces configured with a VirtIO model.

Procedure

  1. Set the networkInterfaceMultiqueue value to true in the VirtualMachine manifest file of your VM to enable multi-queue functionality:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    spec:
      template:
        spec:
          domain:
            devices:
              networkInterfaceMultiqueue: true
  2. Save the VirtualMachine manifest file to apply your changes.

7.15. VM disks

7.15.1. Hot-plugging VM disks

You can add or remove virtual disks without stopping your virtual machine (VM) or virtual machine instance (VMI).

Only data volumes and persistent volume claims (PVCs) can be hot plugged and hot-unplugged. You cannot hot plug or hot-unplug container disks.

A hot plugged disk remains attached to the VM even after reboot. You must detach the disk to remove it from the VM.

You can make a hot plugged disk persistent so that it is permanently mounted on the VM.

Note

Each VM has a virtio-scsi controller so that hot plugged disks can use the scsi bus. The virtio-scsi controller overcomes the limitations of virtio while retaining its performance advantages. It is highly scalable and supports hot plugging over 4 million disks.

Regular virtio is not available for hot plugged disks because it is not scalable. Each virtio disk uses one of the limited PCI Express (PCIe) slots in the VM. PCIe slots are also used by other devices and must be reserved in advance. Therefore, slots might not be available on demand.

7.15.1.1. Hot plugging and hot unplugging a disk by using the web console

You can hot plug a disk by attaching it to a virtual machine (VM) while the VM is running by using the OpenShift Container Platform web console.

The hot plugged disk remains attached to the VM until you unplug it.

You can make a hot plugged disk persistent so that it is permanently mounted on the VM.

Prerequisites

  • You must have a data volume or persistent volume claim (PVC) available for hot plugging.

Procedure

  1. Navigate to Virtualization → VirtualMachines in the web console.
  2. Select a running VM to view its details.
  3. On the VirtualMachine details page, click Configuration → Disks.
  4. Add a hot plugged disk:

    1. Click Add disk.
    2. In the Add disk (hot plugged) window, select the disk from the Source list and click Save.
  5. Optional: Unplug a hot plugged disk:

    1. Click the Options menu beside the disk and select Detach.
    2. Click Detach.
  6. Optional: Make a hot plugged disk persistent:

    1. Click the Options menu beside the disk and select Make persistent.
    2. Reboot the VM to apply the change.

7.15.1.2. Hot plugging and hot unplugging a disk by using the command line

You can hot plug and hot unplug a disk while a virtual machine (VM) is running by using the command line.

You can make a hot plugged disk persistent so that it is permanently mounted on the VM.

Prerequisites

  • You must have at least one data volume or persistent volume claim (PVC) available for hot plugging.

Procedure

  • Hot plug a disk by running the following command:

    $ virtctl addvolume <virtual-machine|virtual-machine-instance> \
      --volume-name=<datavolume|PVC> \
      [--persist] [--serial=<label-name>]
    • Use the optional --persist flag to add the hot plugged disk to the virtual machine specification as a permanently mounted virtual disk. Stop, restart, or reboot the virtual machine to permanently mount the virtual disk. After specifying the --persist flag, you can no longer hot plug or hot unplug the virtual disk. The --persist flag applies to virtual machines, not virtual machine instances.
    • The optional --serial flag allows you to add an alphanumeric string label of your choice. This helps you to identify the hot plugged disk in a guest virtual machine. If you do not specify this option, the label defaults to the name of the hot plugged data volume or PVC.
  • Hot unplug a disk by running the following command:

    $ virtctl removevolume <virtual-machine|virtual-machine-instance> \
      --volume-name=<datavolume|PVC>
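
For example, the following commands are a sketch with hypothetical names that hot plug a data volume named example-dv into a VM named example-vm with a custom serial label, and then hot unplug it:

    $ virtctl addvolume example-vm --volume-name=example-dv --serial=data1

    $ virtctl removevolume example-vm --volume-name=example-dv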

7.15.2. Expanding virtual machine disks

You can increase the size of a virtual machine (VM) disk by expanding the persistent volume claim (PVC) of the disk.

If your storage provider does not support volume expansion, you can expand the available virtual storage of a VM by adding blank data volumes.

You cannot reduce the size of a VM disk.

7.15.2.1. Expanding a VM disk PVC

You can increase the size of a virtual machine (VM) disk by expanding the persistent volume claim (PVC) of the disk.

If the PVC uses the file system volume mode, the disk image file expands to the available size while reserving some space for file system overhead.

Procedure

  1. Edit the PersistentVolumeClaim manifest of the VM disk that you want to expand:

    $ oc edit pvc <pvc_name>
  2. Update the disk size:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
       name: vm-disk-expand
    spec:
      accessModes:
         - ReadWriteMany
      resources:
        requests:
           storage: 3Gi 1
    # ...
    1
    Specify the new disk size.
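
As an alternative to editing the manifest interactively, the following command is a minimal sketch of the same change with oc patch, assuming the PVC is named vm-disk-expand:

    $ oc patch pvc vm-disk-expand --type merge -p '{"spec":{"resources":{"requests":{"storage":"3Gi"}}}}'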

7.15.2.2. Expanding available virtual storage by adding blank data volumes

You can expand the available storage of a virtual machine (VM) by adding blank data volumes.

Prerequisites

  • You must have at least one persistent volume.

Procedure

  1. Create a DataVolume manifest as shown in the following example:

    Example DataVolume manifest

    apiVersion: cdi.kubevirt.io/v1beta1
    kind: DataVolume
    metadata:
      name: blank-image-datavolume
    spec:
      source:
        blank: {}
      storage:
        resources:
          requests:
            storage: <2Gi> 1
      storageClassName: "<storage_class>" 2

    1
    Specify the amount of available space requested for the data volume.
    2
    Optional: If you do not specify a storage class, the default storage class is used.
  2. Create the data volume by running the following command:

    $ oc create -f <blank-image-datavolume>.yaml
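
After the data volume is created, you can, for example, hot plug it into a running VM by using the virtctl addvolume command described in "Hot-plugging VM disks" (a sketch, assuming a VM named <vm_name>):

    $ virtctl addvolume <vm_name> --volume-name=blank-image-datavolume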

7.15.3. Configuring shared volumes for virtual machines

You can configure shared disks to allow multiple virtual machines (VMs) to share the same underlying storage. A shared disk’s volume must be block mode.

You configure disk sharing by exposing the storage as either of these types:

  • An ordinary VM disk
  • A logical unit number (LUN) disk with an SCSI connection and raw device mapping, as required for Windows Failover Clustering for shared volumes

In addition to configuring disk sharing, you can also set an error policy for each ordinary VM disk or LUN disk. The error policy controls how the hypervisor behaves when an input/output error occurs on a disk Read or Write.

7.15.3.1. Configuring disk sharing by using virtual machine disks

You can configure block volumes so that multiple virtual machines (VMs) can share storage.

The application running on the guest operating system determines the storage option you must configure for the VM. A disk of type disk exposes the volume as an ordinary disk to the VM.

You can set an error policy for each disk. The error policy controls how the hypervisor behaves when an input/output error occurs while a disk is being written to or read. The default behavior stops the VM and generates a Kubernetes event.

You can accept the default behavior, or you can set the error policy to one of the following options:

  • report, which reports the error in the guest.
  • ignore, which ignores the error. The Read or Write failure is undetected.
  • enospace, which produces an error indicating that there is not enough disk space.

Prerequisites

  • The volume access mode must be ReadWriteMany (RWX) if the VMs that are sharing disks are running on different nodes.

    If the VMs that are sharing disks are running on the same node, ReadWriteOnce (RWO) volume access mode is sufficient.

  • The storage provider must support the required Container Storage Interface (CSI) driver.

Procedure

  1. Create the VirtualMachine manifest for your VM to set the required values, as shown in the following example:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: <vm_name>
    spec:
      template:
    # ...
        spec:
          domain:
            devices:
              disks:
              - disk:
                  bus: virtio
                name: rootdisk
                errorPolicy: report 1
              - disk: 2
                  bus: virtio
                name: cloudinitdisk
                shareable: true 3
              interfaces:
              - masquerade: {}
                name: default
    1
    Identifies the error policy.
    2
    Identifies a device as a disk.
    3
    Identifies a shared disk.
  2. Save the VirtualMachine manifest file to apply your changes.

7.15.3.2. Configuring disk sharing by using LUN

To secure data on your VM from outside access, you can enable SCSI persistent reservation and configure a LUN-backed virtual machine disk to be shared among multiple virtual machines. By enabling the shared option, you can use advanced SCSI commands, such as those required for a Windows failover clustering implementation, for managing the underlying storage.

When a storage volume is configured as the LUN disk type, a VM can use the volume as a logical unit number (LUN) device. As a result, the VM can deploy and manage the disk by using SCSI commands.

You reserve a LUN through the SCSI persistent reserve options. To enable the reservation:

  1. Configure the feature gate option.
  2. Activate the option on the LUN disk so that the VM can issue the SCSI device-specific input and output controls (IOCTLs) that it requires.

You can set an error policy for each LUN disk. The error policy controls how the hypervisor behaves when an input/output error occurs on a disk Read or Write. The default behavior stops the guest and generates a Kubernetes event.

For a LUN disk with an SCSI connection and a persistent reservation, as required for Windows Failover Clustering for shared volumes, you set the error policy to report.

Prerequisites

  • You must have cluster administrator privileges to configure the feature gate option.
  • The volume access mode must be ReadWriteMany (RWX) if the VMs that are sharing disks are running on different nodes.

    If the VMs that are sharing disks are running on the same node, ReadWriteOnce (RWO) volume access mode is sufficient.

  • The storage provider must support a Container Storage Interface (CSI) driver that uses Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), or iSCSI storage protocols.
  • If you are a cluster administrator and intend to configure disk sharing by using LUN, you must enable the cluster’s feature gate on the HyperConverged custom resource (CR).
  • Disks that you want to share must be in block mode.

Procedure

  1. Edit or create the VirtualMachine manifest for your VM to set the required values, as shown in the following example:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: vm-0
    spec:
      template:
        spec:
          domain:
            devices:
              disks:
              - disk:
                  bus: sata
                name: rootdisk
              - errorPolicy: report 1
                lun: 2
                  bus: scsi
                  reservation: true 3
                name: na-shared
                serial: shared1234
          volumes:
          - dataVolume:
              name: vm-0
            name: rootdisk
          - name: na-shared
            persistentVolumeClaim:
              claimName: pvc-na-share
    1
    Identifies the error policy.
    2
    Identifies a LUN disk.
    3
    Identifies that the persistent reservation is enabled.
  2. Save the VirtualMachine manifest file to apply your changes.
7.15.3.2.1. Configuring disk sharing by using LUN and the web console

You can use the OpenShift Container Platform web console to configure disk sharing by using LUN.

Prerequisites

  • The cluster administrator must enable the persistentReservation feature gate.

Procedure

  1. Click Virtualization → VirtualMachines in the web console.
  2. Select a VM to open the VirtualMachine details page.
  3. Expand Storage.
  4. On the Disks tab, click Add disk.
  5. Specify the Name, Source, Size, Interface, and Storage Class.
  6. Select LUN as the Type.
  7. Select Shared access (RWX) as the Access Mode.
  8. Select Block as the Volume Mode.
  9. Expand Advanced Settings, and select both checkboxes.
  10. Click Save.
7.15.3.2.2. Configuring disk sharing by using LUN and the command line

You can use the command line to configure disk sharing by using LUN.

Procedure

  1. Edit or create the VirtualMachine manifest for your VM to set the required values, as shown in the following example:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: vm-0
    spec:
      template:
        spec:
          domain:
            devices:
              disks:
              - disk:
                  bus: sata
                name: rootdisk
              - errorPolicy: report
                lun: 1
                  bus: scsi
                  reservation: true 2
                name: na-shared
                serial: shared1234
          volumes:
          - dataVolume:
              name: vm-0
            name: rootdisk
          - name: na-shared
            persistentVolumeClaim:
              claimName: pvc-na-share
    1
    Identifies a LUN disk.
    2
    Identifies that the persistent reservation is enabled.
  2. Save the VirtualMachine manifest file to apply your changes.

7.15.3.3. Enabling the PersistentReservation feature gate

You can enable the SCSI persistentReservation feature gate and allow a LUN-backed block mode virtual machine (VM) disk to be shared among multiple virtual machines.

The persistentReservation feature gate is disabled by default. You can enable the persistentReservation feature gate by using the web console or the command line.

Prerequisites

  • Cluster administrator privileges are required.
  • The volume access mode ReadWriteMany (RWX) is required if the VMs that are sharing disks are running on different nodes. If the VMs that are sharing disks are running on the same node, the ReadWriteOnce (RWO) volume access mode is sufficient.
  • The storage provider must support a Container Storage Interface (CSI) driver that uses Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), or iSCSI storage protocols.
7.15.3.3.1. Enabling the PersistentReservation feature gate by using the web console

You must enable the PersistentReservation feature gate to allow a LUN-backed block mode virtual machine (VM) disk to be shared among multiple virtual machines. Enabling the feature gate requires cluster administrator privileges.

Procedure

  1. Click Virtualization → Overview in the web console.
  2. Click the Settings tab.
  3. Select Cluster.
  4. Expand SCSI persistent reservation and set Enable persistent reservation to on.
7.15.3.3.2. Enabling the PersistentReservation feature gate by using the command line

You enable the persistentReservation feature gate by using the command line. Enabling the feature gate requires cluster administrator privileges.

Procedure

  1. Enable the persistentReservation feature gate by running the following command:

    $ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p \
    '[{"op":"replace","path":"/spec/featureGates/persistentReservation", "value": true}]'