Chapter 10. Virtual machines


10.1. Creating virtual machines

Use one of these procedures to create a virtual machine:

  • Quick Start guided tour
  • Quick create from the Catalog
  • Pasting a pre-configured YAML file with the virtual machine wizard
  • Using the CLI
Warning

Do not create virtual machines in openshift-* namespaces. Instead, create a new namespace or use an existing namespace without the openshift prefix.

When you create virtual machines from the web console, select a virtual machine template that is configured with a boot source. Virtual machine templates with a boot source are labeled as Available boot source or they display a customized label text. Using templates with an available boot source expedites the process of creating virtual machines.

Templates without a boot source are labeled as Boot source required. You can use these templates if you complete the steps for adding a boot source to the virtual machine.

Important

Due to differences in storage behavior, some virtual machine templates are incompatible with single-node OpenShift. To ensure compatibility, do not set the evictionStrategy field for any templates or virtual machines that use data volumes or storage profiles.
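For example, you can check whether a virtual machine sets this field before running it on single-node OpenShift. This is a minimal sketch; <vm_name> and <namespace> are placeholders:

$ oc get vm <vm_name> -n <namespace> -o jsonpath='{.spec.template.spec.evictionStrategy}'

If the command returns no value, the field is not set.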

10.1.1. Using a Quick Start to create a virtual machine

The web console provides Quick Starts with instructional guided tours for creating virtual machines. To view the Quick Starts catalog, select the Help menu in the Administrator perspective. When you click a Quick Start tile and begin the tour, the system guides you through the process.

Tasks in a Quick Start begin with selecting a Red Hat template. Then, you can add a boot source and import the operating system image. Finally, you can save the custom template and use it to create a virtual machine.

Prerequisites

  • Access to the website where you can download the URL link for the operating system image.

Procedure

  1. In the web console, select Quick Starts from the Help menu.
  2. Click on a tile in the Quick Starts catalog. For example: Creating a Red Hat Enterprise Linux virtual machine.
  3. Follow the instructions in the guided tour and complete the tasks for importing an operating system image and creating a virtual machine. The Virtualization VirtualMachines page displays the virtual machine.

10.1.2. Quick creating a virtual machine

You can quickly create a virtual machine (VM) by using a template with an available boot source.

Procedure

  1. Click Virtualization Catalog in the side menu.
  2. Click Boot source available to filter templates with boot sources.

    Note

    By default, the template list shows only Default Templates. Click All Items when filtering to see all available templates that match your filters.

  3. Click a template to view its details.
  4. Click Quick Create VirtualMachine to create a VM from the template.

    The virtual machine Details page is displayed with the provisioning status.

Verification

  1. Click Events to view a stream of events as the VM is provisioned.
  2. Click Console to verify that the VM booted successfully.

10.1.3. Creating a virtual machine from a customized template

Some templates require additional parameters, for example, a PVC with a boot source. You can customize select parameters of a template to create a virtual machine (VM).

Procedure

  1. In the web console, select a template:

    1. Click Virtualization Catalog in the side menu.
    2. Optional: Filter the templates by project, keyword, operating system, or workload profile.
    3. Click the template that you want to customize.
  2. Click Customize VirtualMachine.
  3. Specify parameters for your VM, including its Name and Disk source. You can optionally specify a data source to clone.

Verification

  1. Click Events to view a stream of events as the VM is provisioned.
  2. Click Console to verify that the VM booted successfully.

Refer to the virtual machine fields section when creating a VM from the web console.

10.1.3.1. Networking fields

  • Name: Name for the network interface controller.
  • Model: Indicates the model of the network interface controller. Supported values are e1000e and virtio.
  • Network: List of available network attachment definitions.
  • Type: List of available binding methods. Select the binding method suitable for the network interface:
      • Default pod network: masquerade
      • Linux bridge network: bridge
      • SR-IOV network: SR-IOV
  • MAC Address: MAC address for the network interface controller. If a MAC address is not specified, one is assigned automatically.

10.1.3.2. Storage fields

  • Source:
      • Blank (creates PVC): Create an empty disk.
      • Import via URL (creates PVC): Import content via URL (HTTP or HTTPS endpoint).
      • Use an existing PVC: Use a PVC that is already available in the cluster.
      • Clone existing PVC (creates PVC): Select an existing PVC available in the cluster and clone it.
      • Import via Registry (creates PVC): Import content via container registry.
      • Container (ephemeral): Upload content from a container located in a registry accessible from the cluster. The container disk should be used only for read-only filesystems such as CD-ROMs or temporary virtual machines.
  • Name: Name of the disk. The name can contain lowercase letters (a-z), numbers (0-9), hyphens (-), and periods (.), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, or special characters.
  • Size: Size of the disk in GiB.
  • Type: Type of disk. Example: Disk or CD-ROM
  • Interface: Type of disk device. Supported interfaces are virtIO, SATA, and SCSI.
  • Storage Class: The storage class that is used to create the disk.

Advanced storage settings

The following advanced storage settings are optional and available for Blank, Import via URL, and Clone existing PVC disks. Before OpenShift Virtualization 4.11, if you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map. In OpenShift Virtualization 4.11 and later, the system uses the default values from the storage profile.

Note

Use storage profiles to ensure consistent advanced storage settings when provisioning storage for OpenShift Virtualization.

To manually specify Volume Mode and Access Mode, you must clear the Apply optimized StorageProfile settings checkbox, which is selected by default.

  • Volume Mode: Defines whether the persistent volume uses a formatted file system or raw block state. Default is Filesystem.
      • Filesystem: Stores the virtual disk on a file system-based volume.
      • Block: Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it.
  • Access Mode: Access mode of the persistent volume.
      • ReadWriteOnce (RWO): Volume can be mounted as read-write by a single node.
      • ReadWriteMany (RWX): Volume can be mounted as read-write by many nodes at one time. This mode is required for some features, such as live migration of virtual machines between nodes.
      • ReadOnlyMany (ROX): Volume can be mounted as read only by many nodes.

10.1.3.3. Cloud-init fields

NameDescription

Authorized SSH Keys

The user’s public key that is copied to ~/.ssh/authorized_keys on the virtual machine.

Custom script

Replaces other options with a field in which you paste a custom cloud-init script.

To configure storage class defaults, use storage profiles. For more information, see Customizing the storage profile.

10.1.3.4. Pasting in a pre-configured YAML file to create a virtual machine

Create a virtual machine by writing or pasting a YAML configuration file. A valid example virtual machine configuration is provided by default whenever you open the YAML edit screen.

If your YAML configuration is invalid when you click Create, an error message indicates the parameter in which the error occurs. Only one error is shown at a time.

Note

Navigating away from the YAML screen while editing cancels any changes to the configuration you have made.

Procedure

  1. Click Virtualization VirtualMachines from the side menu.
  2. Click Create and select With YAML.
  3. Write or paste your virtual machine configuration in the editable window.

    1. Alternatively, use the example virtual machine provided by default in the YAML screen.
  4. Optional: Click Download to download the YAML configuration file in its present state.
  5. Click Create to create the virtual machine.

The virtual machine is listed on the VirtualMachines page.

10.1.4. Using the CLI to create a virtual machine

You can create a virtual machine from a VirtualMachine manifest.

Procedure

  1. Edit the VirtualMachine manifest for your VM. For example, the following manifest configures a Red Hat Enterprise Linux (RHEL) VM:

    Example 10.1. Example manifest for a RHEL VM

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      labels:
        app: <vm_name> 1
      name: <vm_name>
    spec:
      dataVolumeTemplates:
      - apiVersion: cdi.kubevirt.io/v1beta1
        kind: DataVolume
        metadata:
          name: <vm_name>
        spec:
          sourceRef:
            kind: DataSource
            name: rhel9
            namespace: openshift-virtualization-os-images
          storage:
            resources:
              requests:
                storage: 30Gi
      running: false
      template:
        metadata:
          labels:
            kubevirt.io/domain: <vm_name>
        spec:
          domain:
            cpu:
              cores: 1
              sockets: 2
              threads: 1
            devices:
              disks:
              - disk:
                  bus: virtio
                name: rootdisk
              - disk:
                  bus: virtio
                name: cloudinitdisk
              interfaces:
              - masquerade: {}
                name: default
              rng: {}
            features:
              smm:
                enabled: true
            firmware:
              bootloader:
                efi: {}
            resources:
              requests:
                memory: 8Gi
          evictionStrategy: LiveMigrate
          networks:
          - name: default
            pod: {}
          volumes:
          - dataVolume:
              name: <vm_name>
            name: rootdisk
          - cloudInitNoCloud:
              userData: |-
                #cloud-config
                user: cloud-user
                password: '<password>' 2
                chpasswd: { expire: False }
            name: cloudinitdisk
    1
    Specify the name of the virtual machine.
    2
    Specify the password for cloud-user.
  2. Create a virtual machine by using the manifest file:

    $ oc create -f <vm_manifest_file>.yaml
  3. Optional: Start the virtual machine:

    $ virtctl start <vm_name>
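After the VM is created, you can confirm that the VirtualMachine resource exists and, once the VM is started, that a corresponding virtual machine instance is running. This is a minimal verification sketch; <vm_name> is a placeholder:

$ oc get vm <vm_name>
$ oc get vmi <vm_name>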

10.1.5. Virtual machine storage volume types

The following storage volume types are available for virtual machines:

ephemeral

A local copy-on-write (COW) image that uses a network volume as a read-only backing store. The backing volume must be a PersistentVolumeClaim. The ephemeral image is created when the virtual machine starts and stores all writes locally. The ephemeral image is discarded when the virtual machine is stopped, restarted, or deleted. The backing volume (PVC) is not mutated in any way.

persistentVolumeClaim

Attaches an available PV to a virtual machine. Attaching a PV allows for the virtual machine data to persist between sessions.

Importing an existing virtual machine disk into a PVC by using CDI and attaching the PVC to a virtual machine instance is the recommended method for importing existing virtual machines into OpenShift Container Platform. There are some requirements for the disk to be used within a PVC.

dataVolume

Data volumes build on the persistentVolumeClaim disk type by managing the process of preparing the virtual machine disk via an import, clone, or upload operation. VMs that use this volume type are guaranteed not to start until the volume is ready.

Specify type: dataVolume or type: "". If you specify any other value for type, such as persistentVolumeClaim, a warning is displayed, and the virtual machine does not start.

cloudInitNoCloud

Attaches a disk that contains the referenced cloud-init NoCloud data source, providing user data and metadata to the virtual machine. A cloud-init installation is required inside the virtual machine disk.

containerDisk

References an image, such as a virtual machine disk, that is stored in the container image registry. The image is pulled from the registry and attached to the virtual machine as a disk when the virtual machine is launched.

A containerDisk volume is not limited to a single virtual machine and is useful for creating large numbers of virtual machine clones that do not require persistent storage.

Only RAW and QCOW2 formats are supported disk types for the container image registry. QCOW2 is recommended for reduced image size.

Note

A containerDisk volume is ephemeral. It is discarded when the virtual machine is stopped, restarted, or deleted. A containerDisk volume is useful for read-only file systems such as CD-ROMs or for disposable virtual machines.

emptyDisk

Creates an additional sparse QCOW2 disk that is tied to the life-cycle of the virtual machine interface. The data survives guest-initiated reboots in the virtual machine but is discarded when the virtual machine stops or is restarted from the web console. The empty disk is used to store application dependencies and data that otherwise exceeds the limited temporary file system of an ephemeral disk.

The disk capacity size must also be provided.
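The following sketch shows how some of these volume types might appear in the spec.template.spec stanza of a VirtualMachine manifest. The disk names, the container image reference, and the emptyDisk capacity are hypothetical examples, not values from this documentation:

spec:
  template:
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: rootdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
          - disk:
              bus: virtio
            name: scratchdisk
      volumes:
      - containerDisk:
          image: <registry>/<container_disk_image>
        name: rootdisk
      - cloudInitNoCloud:
          userData: |-
            #cloud-config
            user: cloud-user
            password: '<password>'
        name: cloudinitdisk
      - emptyDisk:
          capacity: 2Gi
        name: scratchdisk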

10.1.6. About RunStrategies for virtual machines

A RunStrategy for virtual machines determines a virtual machine instance’s (VMI) behavior, depending on a series of conditions. The spec.runStrategy setting is an alternative to the spec.running setting in the virtual machine configuration. The spec.runStrategy setting allows greater flexibility for how VMIs are created and managed, in contrast to the spec.running setting, which accepts only true or false. However, the two settings are mutually exclusive: you can specify either spec.running or spec.runStrategy, but not both. An error occurs if both are used.

There are four defined RunStrategies.

Always
A VMI is always present when a virtual machine is created. A new VMI is created if the original stops for any reason, which is the same behavior as spec.running: true.
RerunOnFailure
A VMI is re-created if the previous instance fails due to an error. The instance is not re-created if the virtual machine stops successfully, such as when it shuts down.
Manual
The start, stop, and restart virtctl client commands can be used to control the VMI’s state and existence.
Halted
No VMI is present when a virtual machine is created, which is the same behavior as spec.running: false.

Different combinations of the start, stop and restart virtctl commands affect which RunStrategy is used.

The following table follows a VM’s transition from different states. The first column shows the VM’s initial RunStrategy. Each additional column shows a virtctl command and the new RunStrategy after that command is run.

Initial RunStrategy    start      stop       restart

Always                 -          Halted     Always
RerunOnFailure         -          Halted     RerunOnFailure
Manual                 Manual     Manual     Manual
Halted                 Always     -          -

Note

In OpenShift Virtualization clusters installed using installer-provisioned infrastructure, when a node fails the MachineHealthCheck and becomes unavailable to the cluster, VMs with a RunStrategy of Always or RerunOnFailure are rescheduled on a new node.

apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  runStrategy: Always 1
  template:
# ...
1
The VMI’s current RunStrategy setting.
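As a minimal sketch, assuming the VM is configured with spec.runStrategy instead of spec.running, you can inspect or change the strategy from the CLI; <vm_name> is a placeholder:

$ oc get vm <vm_name> -o jsonpath='{.spec.runStrategy}'
$ oc patch vm <vm_name> --type merge -p '{"spec":{"runStrategy":"Halted"}}'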

10.1.7. Additional resources

10.2. Editing virtual machines

You can update a virtual machine configuration using either the YAML editor in the web console or the OpenShift CLI on the command line. You can also update a subset of the parameters in the Virtual Machine Details screen.

10.2.1. Editing a virtual machine in the web console

You can edit a virtual machine by using the OpenShift Container Platform web console or the command line interface.

Procedure

  1. Navigate to Virtualization VirtualMachines in the web console.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. Click any field that has the pencil icon, which indicates that the field is editable. For example, click the current Boot mode setting, such as BIOS or UEFI, to open the Boot mode window and select an option from the list.
  4. Click Save.
Note

If the virtual machine is running, changes to Boot Order or Flavor will not take effect until you restart the virtual machine.

You can view pending changes by clicking View Pending Changes on the right side of the relevant field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.

10.2.2. Editing a virtual machine YAML configuration using the web console

You can edit the YAML configuration of a virtual machine in the web console. Some parameters cannot be modified. If you click Save with an invalid configuration, an error message indicates the parameter that cannot be changed.

Note

Navigating away from the YAML screen while editing cancels any changes to the configuration you have made.

Procedure

  1. Click Virtualization VirtualMachines from the side menu.
  2. Select a virtual machine.
  3. Click the YAML tab to display the editable configuration.
  4. Optional: You can click Download to download the YAML file locally in its current state.
  5. Edit the file and click Save.

A confirmation message shows that the modification has been successful and includes the updated version number for the object.

10.2.3. Editing a virtual machine YAML configuration using the CLI

Use this procedure to edit a virtual machine YAML configuration using the CLI.

Prerequisites

  • You configured a virtual machine with a YAML object configuration file.
  • You installed the oc CLI.

Procedure

  1. Run the following command to update the virtual machine configuration:

    $ oc edit <object_type> <object_ID>
  2. Open the object configuration.
  3. Edit the YAML.
  4. If you edit a running virtual machine, you need to do one of the following:

    • Restart the virtual machine.
    • Run the following command for the new configuration to take effect:

      $ oc apply -f <object_config_file>.yaml

10.2.4. Adding a virtual disk to a virtual machine

Use this procedure to add a virtual disk to a virtual machine.

Procedure

  1. Click Virtualization VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. On the Configuration Disks tab, click Add disk.
  4. Specify the Source, Name, Size, Type, Interface, and Storage Class.

    1. Optional: You can enable preallocation if you use a blank disk source and require maximum write performance when creating data volumes. To do so, select the Enable preallocation checkbox.
    2. Optional: You can clear Apply optimized StorageProfile settings to change the Volume Mode and Access Mode for the virtual disk. If you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map.
  5. Click Add.
Note

If the virtual machine is running, the new disk is in the pending restart state and will not be attached until you restart the virtual machine.

The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.

To configure storage class defaults, use storage profiles. For more information, see Customizing the storage profile.
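For reference, the following is a minimal sketch of how a blank disk backed by a data volume might appear in the VM manifest after you add it. The disk name and size are hypothetical:

spec:
  dataVolumeTemplates:
  - apiVersion: cdi.kubevirt.io/v1beta1
    kind: DataVolume
    metadata:
      name: <vm_name>-blank-disk
    spec:
      source:
        blank: {}
      storage:
        resources:
          requests:
            storage: 10Gi
  template:
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: blank-disk
      volumes:
      - dataVolume:
          name: <vm_name>-blank-disk
        name: blank-disk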

10.2.4.1. Storage fields

  • Source:
      • Blank (creates PVC): Create an empty disk.
      • Import via URL (creates PVC): Import content via URL (HTTP or HTTPS endpoint).
      • Use an existing PVC: Use a PVC that is already available in the cluster.
      • Clone existing PVC (creates PVC): Select an existing PVC available in the cluster and clone it.
      • Import via Registry (creates PVC): Import content via container registry.
      • Container (ephemeral): Upload content from a container located in a registry accessible from the cluster. The container disk should be used only for read-only filesystems such as CD-ROMs or temporary virtual machines.
  • Name: Name of the disk. The name can contain lowercase letters (a-z), numbers (0-9), hyphens (-), and periods (.), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, or special characters.
  • Size: Size of the disk in GiB.
  • Type: Type of disk. Example: Disk or CD-ROM
  • Interface: Type of disk device. Supported interfaces are virtIO, SATA, and SCSI.
  • Storage Class: The storage class that is used to create the disk.

Advanced storage settings

The following advanced storage settings are optional and available for Blank, Import via URL, and Clone existing PVC disks. Before OpenShift Virtualization 4.11, if you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map. In OpenShift Virtualization 4.11 and later, the system uses the default values from the storage profile.

Note

Use storage profiles to ensure consistent advanced storage settings when provisioning storage for OpenShift Virtualization.

To manually specify Volume Mode and Access Mode, you must clear the Apply optimized StorageProfile settings checkbox, which is selected by default.

  • Volume Mode: Defines whether the persistent volume uses a formatted file system or raw block state. Default is Filesystem.
      • Filesystem: Stores the virtual disk on a file system-based volume.
      • Block: Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it.
  • Access Mode: Access mode of the persistent volume.
      • ReadWriteOnce (RWO): Volume can be mounted as read-write by a single node.
      • ReadWriteMany (RWX): Volume can be mounted as read-write by many nodes at one time. This mode is required for some features, such as live migration of virtual machines between nodes.
      • ReadOnlyMany (ROX): Volume can be mounted as read only by many nodes.

10.2.5. Adding a secret, config map, or service account to a virtual machine

You add a secret, config map, or service account to a virtual machine by using the OpenShift Container Platform web console.

These resources are added to the virtual machine as disks. You then mount the secret, config map, or service account as you would mount any other disk.

If the virtual machine is running, changes do not take effect until you restart the virtual machine. The newly added resources are marked as pending changes at the top of the page.

Prerequisites

  • The secret, config map, or service account that you want to add must exist in the same namespace as the target virtual machine.

Procedure

  1. Click Virtualization VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. Click Configuration Environment.
  4. Click Add Config Map, Secret or Service Account.
  5. Click Select a resource and select a resource from the list. A six character serial number is automatically generated for the selected resource.
  6. Optional: Click Reload to revert the environment to its last saved state.
  7. Click Save.

Verification

  1. On the VirtualMachine details page, click Configuration Disks and verify that the resource is displayed in the list of disks.
  2. Restart the virtual machine by clicking Actions Restart.

You can now mount the secret, config map, or service account as you would mount any other disk.
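For reference, the following is a minimal sketch of how a config map added in this way might appear in the VM spec, and how you might mount it inside the guest. The resource name, disk name, and guest device name are hypothetical:

spec:
  template:
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: app-config-disk
      volumes:
      - configMap:
          name: app-config
        name: app-config-disk

Inside the guest, mount the disk as you would any other device:

$ sudo mkdir /mnt/app-config
$ sudo mount /dev/<device_name> /mnt/app-config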

Additional resources for config maps, secrets, and service accounts

10.2.6. Adding a network interface to a virtual machine

Use this procedure to add a network interface to a virtual machine.

Procedure

  1. Click Virtualization VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. On the Configuration Network interfaces tab, click Add Network Interface.
  4. In the Add Network Interface window, specify the Name, Model, Network, Type, and MAC Address of the network interface.
  5. Click Add.
Note

If the virtual machine is running, the new network interface is in the pending restart state and changes will not take effect until you restart the virtual machine.

The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.
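For reference, the following is a minimal sketch of how an added bridge interface that uses a network attachment definition might appear in the VM spec. The interface name is hypothetical and the network attachment definition name is a placeholder:

spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
          - masquerade: {}
            name: default
          - bridge: {}
            name: nic-1
      networks:
      - name: default
        pod: {}
      - name: nic-1
        multus:
          networkName: <network_attachment_definition_name>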

10.2.6.1. Networking fields

  • Name: Name for the network interface controller.
  • Model: Indicates the model of the network interface controller. Supported values are e1000e and virtio.
  • Network: List of available network attachment definitions.
  • Type: List of available binding methods. Select the binding method suitable for the network interface:
      • Default pod network: masquerade
      • Linux bridge network: bridge
      • SR-IOV network: SR-IOV
  • MAC Address: MAC address for the network interface controller. If a MAC address is not specified, one is assigned automatically.

10.3. Editing boot order

You can update the values for a boot order list by using the web console or the CLI.

With Boot Order in the Virtual Machine Overview page, you can:

  • Select a disk or network interface controller (NIC) and add it to the boot order list.
  • Edit the order of the disks or NICs in the boot order list.
  • Remove a disk or NIC from the boot order list, and return it back to the inventory of bootable sources.

10.3.1. Adding items to a boot order list in the web console

Add items to a boot order list by using the web console.

Procedure

  1. Click Virtualization VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. Click the Details tab.
  4. Click the pencil icon that is located on the right side of Boot Order. If a YAML configuration does not exist, or if this is the first time that you are creating a boot order list, the following message displays: No resource selected. VM will attempt to boot from disks by order of appearance in YAML file.
  5. Click Add Source and select a bootable disk or network interface controller (NIC) for the virtual machine.
  6. Add any additional disks or NICs to the boot order list.
  7. Click Save.
Note

If the virtual machine is running, changes to Boot Order will not take effect until you restart the virtual machine.

You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.

10.3.2. Editing a boot order list in the web console

Edit the boot order list in the web console.

Procedure

  1. Click Virtualization VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. Click the Details tab.
  4. Click the pencil icon that is located on the right side of Boot Order.
  5. Choose the appropriate method to move the item in the boot order list:

    • If you do not use a screen reader, hover over the arrow icon next to the item that you want to move, drag the item up or down, and drop it in a location of your choice.
    • If you use a screen reader, press the Up Arrow key or Down Arrow key to move the item in the boot order list. Then, press the Tab key to drop the item in a location of your choice.
  6. Click Save.
Note

If the virtual machine is running, changes to the boot order list will not take effect until you restart the virtual machine.

You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.

10.3.3. Editing a boot order list in the YAML configuration file

Edit the boot order list in a YAML configuration file by using the CLI.

Procedure

  1. Open the YAML configuration file for the virtual machine by running the following command:

    $ oc edit vm example
  2. Edit the YAML file and modify the values for the boot order associated with a disk or network interface controller (NIC). For example:

    disks:
      - bootOrder: 1 1
        disk:
          bus: virtio
        name: containerdisk
      - disk:
          bus: virtio
        name: cloudinitdisk
      - cdrom:
          bus: virtio
        name: cd-drive-1
    interfaces:
      - bootOrder: 2 2
        macAddress: '02:96:c4:00:00:00'
        masquerade: {}
        name: default
    1
    The boot order value specified for the disk.
    2
    The boot order value specified for the network interface controller.
  3. Save the YAML file.
  4. In the web console, click the reload icon to apply the updated boot order values from the YAML file to the boot order list.

10.3.4. Removing items from a boot order list in the web console

Remove items from a boot order list by using the web console.

Procedure

  1. Click Virtualization VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. Click the Details tab.
  4. Click the pencil icon that is located on the right side of Boot Order.
  5. Click the Remove icon next to the item. The item is removed from the boot order list and saved in the list of available boot sources. If you remove all items from the boot order list, the following message displays: No resource selected. VM will attempt to boot from disks by order of appearance in YAML file.
Note

If the virtual machine is running, changes to Boot Order will not take effect until you restart the virtual machine.

You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.

10.4. Deleting virtual machines

You can delete a virtual machine from the web console or by using the oc command line interface.

10.4.1. Deleting a virtual machine using the web console

Deleting a virtual machine permanently removes it from the cluster.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu.
  2. Click the Options menu kebab beside a virtual machine and select Delete.

    Alternatively, click the virtual machine name to open the VirtualMachine details page and click Actions Delete.

  3. Optional: Select With grace period or clear Delete disks.
  4. Click Delete to permanently delete the virtual machine.

10.4.2. Deleting a virtual machine by using the CLI

You can delete a virtual machine by using the oc command line interface (CLI). The oc client enables you to perform actions on multiple virtual machines.

Prerequisites

  • Identify the name of the virtual machine that you want to delete.

Procedure

  • Delete the virtual machine by running the following command:

    $ oc delete vm <vm_name>
    Note

    This command only deletes a VM in the current project. Specify the -n <project_name> option if the VM you want to delete is in a different project or namespace.

10.5. Exporting virtual machines

You can export a virtual machine (VM) and its associated disks in order to import a VM into another cluster or to analyze the volume for forensic purposes.

You create a VirtualMachineExport custom resource (CR) by using the command line interface.

Alternatively, you can use the virtctl vmexport command to create a VirtualMachineExport CR and to download exported volumes.

10.5.1. Creating a VirtualMachineExport custom resource

You can create a VirtualMachineExport custom resource (CR) to export the following objects:

  • Virtual machine (VM): Exports the persistent volume claims (PVCs) of a specified VM.
  • VM snapshot: Exports PVCs contained in a VirtualMachineSnapshot CR.
  • PVC: Exports a PVC. If the PVC is used by another pod, such as the virt-launcher pod, the export remains in a Pending state until the PVC is no longer in use.

The VirtualMachineExport CR creates internal and external links for the exported volumes. Internal links are valid within the cluster. External links can be accessed by using an Ingress or Route.

The export server supports the following file formats:

  • raw: Raw disk image file.
  • gzip: Compressed disk image file.
  • dir: PVC directory and files.
  • tar.gz: Compressed PVC file.

Prerequisites

  • The VM must be shut down for a VM export.

Procedure

  1. Create a VirtualMachineExport manifest to export a volume from a VirtualMachine, VirtualMachineSnapshot, or PersistentVolumeClaim CR according to the following example and save it as example-export.yaml:

    VirtualMachineExport example

    apiVersion: export.kubevirt.io/v1alpha1
    kind: VirtualMachineExport
    metadata:
      name: example-export
    spec:
      source:
        apiGroup: "kubevirt.io" 1
        kind: VirtualMachine 2
        name: example-vm
      ttlDuration: 1h 3

    1
    Specify the appropriate API group:
    • "kubevirt.io" for VirtualMachine.
    • "snapshot.kubevirt.io" for VirtualMachineSnapshot.
    • "" for PersistentVolumeClaim.
    2
    Specify VirtualMachine, VirtualMachineSnapshot, or PersistentVolumeClaim.
    3
    Optional. The default duration is 2 hours.
  2. Create the VirtualMachineExport CR:

    $ oc create -f example-export.yaml
  3. Get the VirtualMachineExport CR:

    $ oc get vmexport example-export -o yaml

    The internal and external links for the exported volumes are displayed in the status stanza:

    Output example

    apiVersion: export.kubevirt.io/v1alpha1
    kind: VirtualMachineExport
    metadata:
      name: example-export
      namespace: example
    spec:
      source:
        apiGroup: ""
        kind: PersistentVolumeClaim
        name: example-pvc
      tokenSecretRef: example-token
    status:
      conditions:
      - lastProbeTime: null
        lastTransitionTime: "2022-06-21T14:10:09Z"
        reason: podReady
        status: "True"
        type: Ready
      - lastProbeTime: null
        lastTransitionTime: "2022-06-21T14:09:02Z"
        reason: pvcBound
        status: "True"
        type: PVCReady
      links:
        external: 1
          cert: |-
            -----BEGIN CERTIFICATE-----
            ...
            -----END CERTIFICATE-----
          volumes:
          - formats:
            - format: raw
              url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img
            - format: gzip
              url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img.gz
            name: example-disk
        internal:  2
          cert: |-
            -----BEGIN CERTIFICATE-----
            ...
            -----END CERTIFICATE-----
          volumes:
          - formats:
            - format: raw
              url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img
            - format: gzip
              url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img.gz
            name: example-disk
      phase: Ready
      serviceName: virt-export-example-export

    1
    External links are accessible from outside the cluster by using an Ingress or Route.
    2
    Internal links are only valid inside the cluster.
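As a minimal sketch, you can download an exported volume with curl by supplying the external certificate and the export token. The certificate and token retrieval commands follow the pattern shown in the next section, the volume URL is one of the external URLs from the status stanza, and the curl command uses shell command substitution to pass the decoded token in the x-kubevirt-export-token header; all values are placeholders:

$ oc get vmexport example-export -o jsonpath={.status.links.external.cert} > cacert.crt
$ oc get secret export-token-example-export -o jsonpath={.data.token} | base64 --decode > token_decode
$ curl --cacert cacert.crt -H "x-kubevirt-export-token:$(cat token_decode)" -o disk.img.gz "<external_volume_url>"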

10.5.2. Accessing exported virtual machine manifests

After you export a virtual machine (VM) or snapshot, you can get the VirtualMachine manifest and related information from the export server.

Prerequisites

  • You exported a virtual machine or VM snapshot by creating a VirtualMachineExport custom resource (CR).

    Note

    VirtualMachineExport objects that have the spec.source.kind: PersistentVolumeClaim parameter do not generate virtual machine manifests.

Procedure

  1. To access the manifests, you must first copy the certificates from the source cluster to the target cluster.

    1. Log in to the source cluster.
    2. Save the certificates to the cacert.crt file by running the following command:

      $ oc get vmexport <export_name> -o jsonpath={.status.links.external.cert} > cacert.crt 1
      1
      Replace <export_name> with the metadata.name value from the VirtualMachineExport object.
    3. Copy the cacert.crt file to the target cluster.
  2. Decode the token in the source cluster and save it to the token_decode file by running the following command:

    $ oc get secret export-token-<export_name> -o jsonpath={.data.token} | base64 --decode > token_decode 1
    1
    Replace <export_name> with the metadata.name value from the VirtualMachineExport object.
  3. Copy the token_decode file to the target cluster.
  4. Get the VirtualMachineExport custom resource by running the following command:

    $ oc get vmexport <export_name> -o yaml
  5. Review the status.links stanza, which is divided into external and internal sections. Note the manifests.url fields within each section:

    Example output

    apiVersion: export.kubevirt.io/v1alpha1
    kind: VirtualMachineExport
    metadata:
      name: example-export
    spec:
      source:
        apiGroup: "kubevirt.io"
        kind: VirtualMachine
        name: example-vm
      tokenSecretRef: example-token
    status:
    #...
      links:
        external:
    #...
          manifests:
          - type: all
            url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/all 1
          - type: auth-header-secret
            url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret 2
        internal:
    #...
          manifests:
          - type: all
            url: https://virt-export-export-pvc.default.svc/internal/manifests/all 3
          - type: auth-header-secret
            url: https://virt-export-export-pvc.default.svc/internal/manifests/secret
      phase: Ready
      serviceName: virt-export-example-export

    1
    Contains the VirtualMachine manifest, DataVolume manifest, if present, and a ConfigMap manifest that contains the public certificate for the external URL’s ingress or route.
    2
    Contains a secret containing a header that is compatible with Containerized Data Importer (CDI). The header contains a text version of the export token.
    3
    Contains the VirtualMachine manifest, DataVolume manifest, if present, and a ConfigMap manifest that contains the certificate for the internal URL’s export server.
  6. Log in to the target cluster.
  7. Get the Secret manifest by running the following command:

    $ curl --cacert cacert.crt <secret_manifest_url> -H \ 1
    "x-kubevirt-export-token:token_decode" -H \ 2
    "Accept:application/yaml"
    1
    Replace <secret_manifest_url> with an auth-header-secret URL from the VirtualMachineExport YAML output.
    2
    Reference the token_decode file that you created earlier.

    For example:

    $ curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret -H "x-kubevirt-export-token:token_decode" -H "Accept:application/yaml"
  8. Get the manifests of type: all, such as the ConfigMap and VirtualMachine manifests, by running the following command:

    $ curl --cacert cacert.crt <all_manifest_url> -H \ 1
    "x-kubevirt-export-token:token_decode" -H \ 2
    "Accept:application/yaml"
    1
    Replace <all_manifest_url> with a URL from the VirtualMachineExport YAML output.
    2
    Reference the token_decode file that you created earlier.

    For example:

    $ curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/all -H "x-kubevirt-export-token:token_decode" -H "Accept:application/yaml"

Next steps

  • You can now create the ConfigMap and VirtualMachine objects on the target cluster by using the exported manifests.
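For example, a minimal sketch that saves the type: all manifests to a file and creates the objects on the target cluster; the file name is a placeholder:

$ curl --cacert cacert.crt <all_manifest_url> -H "x-kubevirt-export-token:token_decode" -H "Accept:application/yaml" > exported_manifests.yaml
$ oc create -f exported_manifests.yaml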

10.5.3. Additional resources

10.6. Managing virtual machine instances

If you have standalone virtual machine instances (VMIs) that were created independently outside of the OpenShift Virtualization environment, you can manage them by using the web console or by using oc or virtctl commands from the command-line interface (CLI).

The virtctl command provides more virtualization options than the oc command. For example, you can use virtctl to pause a VM or expose a port.

10.6.1. About virtual machine instances

A virtual machine instance (VMI) is a representation of a running virtual machine (VM). When a VMI is owned by a VM or by another object, you manage it through its owner in the web console or by using the oc command-line interface (CLI).

A standalone VMI is created and started independently with a script, through automation, or by using other methods in the CLI. In your environment, you might have standalone VMIs that were developed and started outside of the OpenShift Virtualization environment. You can continue to manage those standalone VMIs by using the CLI. You can also use the web console for specific tasks associated with standalone VMIs:

  • List standalone VMIs and their details.
  • Edit labels and annotations for a standalone VMI.
  • Delete a standalone VMI.

When you delete a VM, the associated VMI is automatically deleted. You delete a standalone VMI directly because it is not owned by VMs or other objects.

Note

Before you uninstall OpenShift Virtualization, list and view the standalone VMIs by using the CLI or the web console. Then, delete any outstanding VMIs.

10.6.2. Listing all virtual machine instances using the CLI

You can list all virtual machine instances (VMIs) in your cluster, including standalone VMIs and those owned by virtual machines, by using the oc command-line interface (CLI).

Procedure

  • List all VMIs by running the following command:

    $ oc get vmis -A

10.6.3. Listing standalone virtual machine instances using the web console

Using the web console, you can list and view standalone virtual machine instances (VMIs) in your cluster that are not owned by virtual machines (VMs).

Note

VMIs that are owned by VMs or other objects are not displayed in the web console. The web console displays only standalone VMIs. If you want to list all VMIs in your cluster, you must use the CLI.

Procedure

  • Click Virtualization VirtualMachines from the side menu.

    You can identify a standalone VMI by a dark colored badge next to its name.

10.6.4. Editing a standalone virtual machine instance using the web console

You can edit the annotations and labels of a standalone virtual machine instance (VMI) using the web console. Other fields are not editable.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu.
  2. Select a standalone VMI to open the VirtualMachineInstance details page.
  3. On the Details tab, click the pencil icon beside Annotations or Labels.
  4. Make the relevant changes and click Save.

10.6.5. Deleting a standalone virtual machine instance using the CLI

You can delete a standalone virtual machine instance (VMI) by using the oc command-line interface (CLI).

Prerequisites

  • Identify the name of the VMI that you want to delete.

Procedure

  • Delete the VMI by running the following command:

    $ oc delete vmi <vmi_name>

10.6.6. Deleting a standalone virtual machine instance using the web console

Delete a standalone virtual machine instance (VMI) from the web console.

Procedure

  1. In the OpenShift Container Platform web console, click Virtualization VirtualMachines from the side menu.
  2. Select a standalone VMI to open the VirtualMachineInstance details page.
  3. Click Actions Delete VirtualMachineInstance.
  4. In the confirmation pop-up window, click Delete to permanently delete the standalone VMI.

10.7. Controlling virtual machine states

You can stop, start, restart, and unpause virtual machines from the web console.

You can use virtctl to manage virtual machine states and perform other actions from the CLI. For example, you can use virtctl to force stop a VM or expose a port.
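For reference, a minimal sketch of the common virtctl state commands; <vm_name> is a placeholder, and the pause and unpause subcommands act on the running VM. Check virtctl stop --help for the flags that force an immediate stop:

$ virtctl start <vm_name>
$ virtctl stop <vm_name>
$ virtctl restart <vm_name>
$ virtctl pause vm <vm_name>
$ virtctl unpause vm <vm_name>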

10.7.1. Starting a virtual machine

You can start a virtual machine from the web console.

Procedure

  1. Click Virtualization VirtualMachines from the side menu.
  2. Find the row that contains the virtual machine that you want to start.
  3. Navigate to the appropriate menu for your use case:

    • To stay on this page, where you can perform actions on multiple virtual machines:

      1. Click the Options menu kebab located at the far right end of the row.
    • To view comprehensive information about the selected virtual machine before you start it:

      1. Access the VirtualMachine details page by clicking the name of the virtual machine.
      2. Click Actions.
  4. Select Start.
  5. In the confirmation window, click Start to start the virtual machine.
Note

When you start a virtual machine that is provisioned from a URL source for the first time, the virtual machine has a status of Importing while OpenShift Virtualization imports the container from the URL endpoint. Depending on the size of the image, this process might take several minutes.

10.7.2. Restarting a virtual machine

You can restart a running virtual machine from the web console.

Important

To avoid errors, do not restart a virtual machine while it has a status of Importing.

Procedure

  1. Click Virtualization VirtualMachines from the side menu.
  2. Find the row that contains the virtual machine that you want to restart.
  3. Navigate to the appropriate menu for your use case:

    • To stay on this page, where you can perform actions on multiple virtual machines:

      1. Click the Options menu kebab located at the far right end of the row.
    • To view comprehensive information about the selected virtual machine before you restart it:

      1. Access the VirtualMachine details page by clicking the name of the virtual machine.
      2. Click Actions Restart.
  4. In the confirmation window, click Restart to restart the virtual machine.

10.7.3. Stopping a virtual machine

You can stop a virtual machine from the web console.

Procedure

  1. Click Virtualization VirtualMachines from the side menu.
  2. Find the row that contains the virtual machine that you want to stop.
  3. Navigate to the appropriate menu for your use case:

    • To stay on this page, where you can perform actions on multiple virtual machines:

      1. Click the Options menu kebab located at the far right end of the row.
    • To view comprehensive information about the selected virtual machine before you stop it:

      1. Access the VirtualMachine details page by clicking the name of the virtual machine.
      2. Click Actions Stop.
  4. In the confirmation window, click Stop to stop the virtual machine.

10.7.4. Unpausing a virtual machine

You can unpause a paused virtual machine from the web console.

Prerequisites

  • At least one of your virtual machines must have a status of Paused.

    Note

    You can pause virtual machines by using the virtctl client.

Procedure

  1. Click Virtualization VirtualMachines from the side menu.
  2. Find the row that contains the virtual machine that you want to unpause.
  3. Navigate to the appropriate menu for your use case:

    • To stay on this page, where you can perform actions on multiple virtual machines:

      1. In the Status column, click Paused.
    • To view comprehensive information about the selected virtual machine before you unpause it:

      1. Access the VirtualMachine details page by clicking the name of the virtual machine.
      2. Click the pencil icon that is located on the right side of Status.
  4. In the confirmation window, click Unpause to unpause the virtual machine.

10.8. Accessing virtual machine consoles

OpenShift Virtualization provides different virtual machine consoles that you can use to accomplish different product tasks. You can access these consoles through the OpenShift Container Platform web console and by using CLI commands.

Note

Running concurrent VNC connections to a single virtual machine is not currently supported.

10.8.1. Accessing virtual machine consoles in the OpenShift Container Platform web console

You can connect to virtual machines by using the serial console or the VNC console in the OpenShift Container Platform web console.

You can connect to Windows virtual machines by using the desktop viewer console, which uses RDP (remote desktop protocol), in the OpenShift Container Platform web console.

10.8.1.1. Connecting to the serial console

Connect to the serial console of a running virtual machine from the Console tab on the VirtualMachine details page of the web console.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. Click the Console tab. The VNC console opens by default.
  4. Click Disconnect to ensure that only one console session is open at a time. Otherwise, the VNC console session remains active in the background.
  5. Click the VNC Console drop-down list and select Serial Console.
  6. Click Disconnect to end the console session.
  7. Optional: Open the serial console in a separate window by clicking Open Console in New Window.

10.8.1.2. Connecting to the VNC console

Connect to the VNC console of a running virtual machine from the Console tab on the VirtualMachine details page of the web console.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. Click the Console tab. The VNC console opens by default.
  4. Optional: Open the VNC console in a separate window by clicking Open Console in New Window.
  5. Optional: Send key combinations to the virtual machine by clicking Send Key.
  6. Click outside the console window and then click Disconnect to end the session.

10.8.1.3. Connecting to a Windows virtual machine with RDP

The Desktop viewer console, which utilizes the Remote Desktop Protocol (RDP), provides a better console experience for connecting to Windows virtual machines.

To connect to a Windows virtual machine with RDP, download the console.rdp file for the virtual machine from the Console tab on the VirtualMachine details page of the web console and supply it to your preferred RDP client.

Prerequisites

  • A running Windows virtual machine with the QEMU guest agent installed. The qemu-guest-agent is included in the VirtIO drivers.
  • An RDP client installed on a machine on the same network as the Windows virtual machine.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu.
  2. Click a Windows virtual machine to open the VirtualMachine details page.
  3. Click the Console tab.
  4. From the list of consoles, select Desktop viewer.
  5. Click Launch Remote Desktop to download the console.rdp file.
  6. Reference the console.rdp file in your preferred RDP client to connect to the Windows virtual machine.

10.8.1.4. Switching between virtual machine displays

If your Windows virtual machine (VM) has a vGPU attached, you can switch between the default display and the vGPU display by using the web console.

Prerequisites

  • The mediated device is configured in the HyperConverged custom resource and assigned to the VM.
  • The VM is running.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu.
  2. Select a Windows virtual machine to open the Overview screen.
  3. Click the Console tab.
  4. From the list of consoles, select VNC console.
  5. Choose the appropriate key combination from the Send Key list:

    1. To access the default VM display, select Ctrl + Alt + 1.
    2. To access the vGPU display, select Ctrl + Alt + 2.

Additional resources

10.8.1.5. Copying the SSH command using the web console

Copy the command to connect to a virtual machine (VM) terminal via SSH.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu.
  2. Click the Options menu kebab for your virtual machine and select Copy SSH command.
  3. Paste it in the terminal to access the VM.

10.8.2. Accessing virtual machine consoles by using CLI commands

10.8.2.1. Accessing a virtual machine via SSH by using virtctl

You can use the virtctl ssh command to forward SSH traffic to a virtual machine (VM) by using your local SSH client. If you have previously configured SSH key authentication with the VM, skip to step 2 of the procedure because step 1 is not required.

Note

Heavy SSH traffic on the control plane can slow down the API server. If you regularly need a large number of connections, use a dedicated Kubernetes Service object to access the virtual machine.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have installed the virtctl client.
  • The virtual machine you want to access is running.
  • You are in the same project as the VM.

Procedure

  1. Configure SSH key authentication:

    1. Use the ssh-keygen command to generate an SSH key pair:

      $ ssh-keygen -f <key_file> 1
      1
      Specify the file in which to store the keys.
    2. Create an SSH authentication secret which contains the SSH public key to access the VM:

      $ oc create secret generic my-pub-key --from-file=key1=<key_file>.pub
    3. Add a reference to the secret in the VirtualMachine manifest. For example:

      apiVersion: kubevirt.io/v1
      kind: VirtualMachine
      metadata:
        name: testvm
      spec:
        running: true
        template:
          spec:
            accessCredentials:
            - sshPublicKey:
                source:
                  secret:
                    secretName: my-pub-key 1
                propagationMethod:
                  configDrive: {} 2
      # ...
      1
      Reference to the SSH authentication Secret object.
      2
      The SSH public key is injected into the VM as cloud-init metadata using the configDrive provider.
    4. Restart the VM to apply your changes.
  2. Connect to the VM via SSH:

    1. Run the following command to access the VM via SSH:

      $ virtctl ssh -i <key_file> <vm_username>@<vm_name>
    2. Optional: To securely transfer files to or from the VM, use the following commands:

      Copy a file from your machine to the VM

      $ virtctl scp -i <key_file> <filename> <vm_username>@<vm_name>:

      Copy a file from the VM to your machine

      $ virtctl scp -i <key_file> <vm_username>@<vm_name>:<filename> .

10.8.2.2. Using OpenSSH and virtctl port-forward

You can use your local OpenSSH client and the virtctl port-forward command to connect to a running virtual machine (VM). You can use this method with Ansible to automate the configuration of VMs.

This method is recommended for low-traffic applications because port-forwarding traffic is sent over the control plane. This method is not recommended for high-traffic applications such as Rsync or Remote Desktop Protocol because it places a heavy burden on the API server.

Prerequisites

  • You have installed the virtctl client.
  • The virtual machine you want to access is running.
  • The environment where you installed the virtctl tool has the cluster permissions required to access the VM. For example, you ran oc login or you set the KUBECONFIG environment variable.

Procedure

  1. Add the following text to the ~/.ssh/config file on your client machine:

    Host vm/*
      ProxyCommand virtctl port-forward --stdio=true %h %p
  2. Connect to the VM by running the following command:

    $ ssh <user>@vm/<vm_name>.<namespace>

10.8.2.3. Accessing the serial console of a virtual machine instance

The virtctl console command opens a serial console to the specified virtual machine instance.

Prerequisites

  • The virt-viewer package must be installed.
  • The virtual machine instance you want to access must be running.

Procedure

  • Connect to the serial console with virtctl:

    $ virtctl console <VMI>

10.8.2.4. Accessing the graphical console of a virtual machine instance with VNC

The virtctl client utility can use the remote-viewer function to open a graphical console to a running virtual machine instance. This capability is included in the virt-viewer package.

Prerequisites

  • The virt-viewer package must be installed.
  • The virtual machine instance you want to access must be running.
Note

If you use virtctl via SSH on a remote machine, you must forward the X session to your machine.

Procedure

  1. Connect to the graphical interface with the virtctl utility:

    $ virtctl vnc <VMI>
  2. If the command fails, try using the -v flag to collect troubleshooting information:

    $ virtctl vnc <VMI> -v 4

10.8.2.5. Connecting to a Windows virtual machine with an RDP console

Create a Kubernetes Service object to connect to a Windows virtual machine (VM) by using your local Remote Desktop Protocol (RDP) client.

Prerequisites

  • A running Windows virtual machine with the QEMU guest agent installed. The qemu-guest-agent is included in the VirtIO drivers.
  • An RDP client installed on your local machine.

Procedure

  1. Edit the VirtualMachine manifest to add the label for service creation:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: vm-ephemeral
      namespace: example-namespace
    spec:
      running: false
      template:
        metadata:
          labels:
            special: key 1
    # ...
    1
    Add the label special: key in the spec.template.metadata.labels section.
    Note

    Labels on a virtual machine are passed through to the pod. The special: key label must match the label in the spec.selector attribute of the Service manifest.

  2. Save the VirtualMachine manifest file to apply your changes.
  3. Create a Service manifest to expose the VM:

    apiVersion: v1
    kind: Service
    metadata:
      name: rdpservice 1
      namespace: example-namespace 2
    spec:
      ports:
      - targetPort: 3389 3
        protocol: TCP
      selector:
        special: key 4
      type: NodePort 5
    # ...
    1
    The name of the Service object.
    2
    The namespace where the Service object resides. This must match the metadata.namespace field of the VirtualMachine manifest.
    3
    The VM port to be exposed by the service. It must reference an open port if a port list is defined in the VM manifest.
    4
    The reference to the label that you added in the spec.template.metadata.labels stanza of the VirtualMachine manifest.
    5
    The type of service.
  4. Save the Service manifest file.
  5. Create the service by running the following command:

    $ oc create -f <service_name>.yaml
  6. Start the VM. If the VM is already running, restart it.
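
    For example, by using the virtctl client (shown as one option; the web console works as well):

    $ virtctl start <vm_name>

    If the VM is already running:

    $ virtctl restart <vm_name>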
  7. Query the Service object to verify that it is available:

    $ oc get service -n example-namespace

    Example output for NodePort service

    NAME        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)            AGE
    rdpservice   NodePort    172.30.232.73   <none>       3389:30000/TCP    5m

  8. Run the following command to obtain the IP address for the node:

    $ oc get node <node_name> -o wide

    Example output

    NAME    STATUS   ROLES   AGE    VERSION  INTERNAL-IP      EXTERNAL-IP
    node01  Ready    worker  6d22h  v1.24.0  192.168.55.101   <none>

  9. Specify the node IP address and the assigned port in your preferred RDP client, as shown in the example after this procedure.
  10. Enter the user name and password to connect to the Windows virtual machine.
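
For example, the FreeRDP command-line client accepts the node IP address and node port directly. This is one client option only; the flags are FreeRDP syntax and the values are placeholders:

$ xfreerdp /u:<windows_user_name> /p:<password> /v:<node_ip>:<node_port>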

10.9. Automating Windows installation with sysprep

You can use Microsoft DVD images and sysprep to automate the installation, setup, and software provisioning of Windows virtual machines.

10.9.1. Using a Windows DVD to create a VM disk image

Microsoft does not provide disk images for download, but you can create a disk image using a Windows DVD. This disk image can then be used to create virtual machines.

Procedure

  1. In the OpenShift Virtualization web console, click Storage PersistentVolumeClaims Create PersistentVolumeClaim With Data upload form.
  2. Select the intended project.
  3. Set the Persistent Volume Claim Name.
  4. Upload the VM disk image from the Windows DVD. The image is now available as a boot source to create a new Windows VM.

10.9.2. Using a disk image to install Windows

You can use a disk image to install Windows on your virtual machine.

Prerequisites

  • You must create a disk image using a Windows DVD.
  • You must create an autounattend.xml answer file. See the Microsoft documentation for details.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization Catalog from the side menu.
  2. Select a Windows template and click Customize VirtualMachine.
  3. Select Upload (Upload a new file to a PVC) from the Disk source list and browse to the DVD image.
  4. Click Review and create VirtualMachine.
  5. Clear Clone available operating system source to this Virtual Machine.
  6. Clear Start this VirtualMachine after creation.
  7. On the Sysprep section of the Scripts tab, click Edit.
  8. Browse to the autounattend.xml answer file and click Save.
  9. Click Create VirtualMachine.
  10. On the YAML tab, replace running: false with runStrategy: RerunOnFailure, as shown in the sketch after this procedure, and click Save.

The VM will start with the sysprep disk containing the autounattend.xml answer file.
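
The following fragment illustrates the change from step 10. It is a sketch of the relevant part of the VirtualMachine manifest only; the rest of the spec stays as generated:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: <windows_vm_name>
spec:
  runStrategy: RerunOnFailure
# ...

The running and runStrategy fields are mutually exclusive, so remove running: false when you add runStrategy.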

10.9.3. Generalizing a Windows VM using sysprep

Generalizing a Windows image removes all system-specific configuration data from the image so that it can be used to deploy new virtual machines (VMs).

Before generalizing the VM, you must ensure the sysprep tool cannot detect an answer file after the unattended Windows installation.

Prerequisites

  • A running Windows virtual machine with the QEMU guest agent installed.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization VirtualMachines.
  2. Select a Windows VM to open the VirtualMachine details page.
  3. Click Configuration Disks.
  4. Click the Options menu kebab beside the sysprep disk and select Detach.
  5. Click Detach.
  6. Rename C:\Windows\Panther\unattend.xml to avoid detection by the sysprep tool.
  7. Start the sysprep program by running the following command:

    %WINDIR%\System32\Sysprep\sysprep.exe /generalize /shutdown /oobe /mode:vm
  8. After the sysprep tool completes, the Windows VM shuts down. The disk image of the VM is now available to use as an installation image for Windows VMs.

You can now specialize the VM.

10.9.4. Specializing a Windows virtual machine

Specializing a virtual machine (VM) configures the computer-specific information from a generalized Windows image onto the VM.

Prerequisites

  • You must have a generalized Windows disk image.
  • You must create an unattend.xml answer file. See the Microsoft documentation for details.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization Catalog.
  2. Select a Windows template and click Customize VirtualMachine.
  3. Select PVC (clone PVC) from the Disk source list.
  4. Specify the Persistent Volume Claim project and Persistent Volume Claim name of the generalized Windows image.
  5. Click Review and create VirtualMachine.
  6. Click the Scripts tab.
  7. In the Sysprep section, click Edit, browse to the unattend.xml answer file, and click Save.
  8. Click Create VirtualMachine.

During the initial boot, Windows uses the unattend.xml answer file to specialize the VM. The VM is now ready to use.

10.10. Triggering virtual machine failover by resolving a failed node

If a node fails and machine health checks are not deployed on your cluster, virtual machines (VMs) with RunStrategy: Always configured are not automatically relocated to healthy nodes. To trigger VM failover, you must manually delete the Node object.
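
For reference, the run strategy is part of the VirtualMachine spec. The following is a minimal sketch with an example name only:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  runStrategy: Always
# ...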

Note

If you installed your cluster by using installer-provisioned infrastructure and you properly configured machine health checks:

  • Failed nodes are automatically recycled.
  • Virtual machines with RunStrategy set to Always or RerunOnFailure are automatically scheduled on healthy nodes.

10.10.1. Prerequisites

  • A node where a virtual machine was running has the NotReady condition.
  • The virtual machine that was running on the failed node has RunStrategy set to Always.
  • You have installed the OpenShift CLI (oc).

10.10.2. Deleting nodes from a bare metal cluster

When you delete a node using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node are not deleted. Any bare pods not backed by a replication controller become inaccessible to OpenShift Container Platform. Pods backed by replication controllers are rescheduled to other available nodes. You must delete local manifest pods.

Procedure

Delete a node from an OpenShift Container Platform cluster running on bare metal by completing the following steps:

  1. Mark the node as unschedulable:

    $ oc adm cordon <node_name>
  2. Drain all pods on the node:

    $ oc adm drain <node_name> --force=true

    This step might fail if the node is offline or unresponsive. Even if the node does not respond, it might still be running a workload that writes to shared storage. To avoid data corruption, power down the physical hardware before you proceed.

  3. Delete the node from the cluster:

    $ oc delete node <node_name>

    Although the node object is now deleted from the cluster, it can still rejoin the cluster after reboot or if the kubelet service is restarted. To permanently delete the node and all its data, you must decommission the node.

  4. If you powered down the physical hardware, turn it back on so that the node can rejoin the cluster.

10.10.3. Verifying virtual machine failover

After all resources are terminated on the unhealthy node, a new virtual machine instance (VMI) is automatically created on a healthy node for each relocated VM. To confirm that the VMI was created, view all VMIs by using the oc CLI.

10.10.3.1. Listing all virtual machine instances using the CLI

You can list all virtual machine instances (VMIs) in your cluster, including standalone VMIs and those owned by virtual machines, by using the oc command-line interface (CLI).

Procedure

  • List all VMIs by running the following command:

    $ oc get vmis -A

10.11. Installing the QEMU guest agent and VirtIO drivers

The QEMU guest agent is a daemon that runs on the virtual machine and passes information to the host about the virtual machine, users, file systems, and secondary networks.

10.11.1. Installing the QEMU guest agent

10.11.1.1. Installing QEMU guest agent on a Linux virtual machine

The qemu-guest-agent is widely available and installed by default in Red Hat Enterprise Linux (RHEL) virtual machines (VMs). Install the agent and start the service.

Note

To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent.

The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM’s file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI.

Procedure

  1. Access the virtual machine command line through one of the consoles or by SSH.
  2. Install the QEMU guest agent on the virtual machine:

    $ yum install -y qemu-guest-agent
  3. Ensure the service is persistent and start it:

    $ systemctl enable --now qemu-guest-agent

Verification

  1. Run the following command to verify that AgentConnected is listed in the VM status:

    $ oc get vm <vm_name>
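
    If the condition is not visible in the default output, you can inspect the full resource. The following output is abbreviated and for illustration only:

    $ oc get vm <vm_name> -o yaml

    Example output (abbreviated)

    status:
      conditions:
      - status: "True"
        type: AgentConnected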

10.11.1.2. Installing QEMU guest agent on a Windows virtual machine

For Windows virtual machines, the QEMU guest agent is included in the VirtIO drivers. Install the drivers on an existing or a new Windows installation.

Note

To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent.

The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM’s file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI.

Procedure

  1. In the Windows Guest Operating System (OS), use the File Explorer to navigate to the guest-agent directory in the virtio-win CD drive.
  2. Run the qemu-ga-x86_64.msi installer.

Verification

  1. Run the following command to verify that the output contains the QEMU Guest Agent:

    $ net start

10.11.2. Installing VirtIO drivers

10.11.2.1. Supported VirtIO drivers for Microsoft Windows virtual machines

Table 10.1. Supported drivers
Driver name   Hardware ID           Description

viostor       VEN_1AF4&DEV_1001     The block driver. Sometimes displays as an SCSI
              VEN_1AF4&DEV_1042     Controller in the Other devices group.

viorng        VEN_1AF4&DEV_1005     The entropy source driver. Sometimes displays as a
              VEN_1AF4&DEV_1044     PCI Device in the Other devices group.

NetKVM        VEN_1AF4&DEV_1000     The network driver. Sometimes displays as an Ethernet
              VEN_1AF4&DEV_1041     Controller in the Other devices group. Available only
                                    if a VirtIO NIC is configured.

10.11.2.2. About VirtIO drivers

VirtIO drivers are paravirtualized device drivers required for Microsoft Windows virtual machines to run in OpenShift Virtualization. The supported drivers are available in the container-native-virtualization/virtio-win container disk of the Red Hat Ecosystem Catalog.

The container-native-virtualization/virtio-win container disk must be attached to the virtual machine as a SATA CD drive to enable driver installation. You can install VirtIO drivers during Windows installation on the virtual machine or add them to an existing Windows installation.

After the drivers are installed, the container-native-virtualization/virtio-win container disk can be removed from the virtual machine.

10.11.2.3. Installing VirtIO drivers on an existing Windows virtual machine

Install the VirtIO drivers from the attached SATA CD drive to an existing Windows virtual machine.

Note

This procedure uses a generic approach to adding drivers to Windows. The process might differ slightly between versions of Windows. See the installation documentation for your version of Windows for specific installation steps.

Procedure

  1. Start the virtual machine and connect to a graphical console.
  2. Log in to a Windows user session.
  3. Open Device Manager and expand Other devices to list any Unknown device.

    1. Open the Device Properties to identify the unknown device. Right-click the device and select Properties.
    2. Click the Details tab and select Hardware Ids in the Property list.
    3. Compare the Value for the Hardware Ids with the supported VirtIO drivers.
  4. Right-click the device and select Update Driver Software.
  5. Click Browse my computer for driver software and browse to the attached SATA CD drive, where the VirtIO drivers are located. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture.
  6. Click Next to install the driver.
  7. Repeat this process for all the necessary VirtIO drivers.
  8. After the driver installs, click Close to close the window.
  9. Reboot the virtual machine to complete the driver installation.

10.11.2.4. Installing VirtIO drivers during Windows installation

Install the virtio drivers during or after Windows installation.

Note

This procedure uses a generic approach to the Windows installation and the installation method might differ between versions of Windows. See the documentation for the version of Windows that you are installing.

Prerequisites

  • A storage device containing the virtio drivers must be attached to the VM.

Procedure

  1. In the Windows Guest OS, use the File Explorer to navigate to the virtio-win CD drive.
  2. Double-click to run the appropriate installer for your VM:

    1. For a 64-bit vCPU, use the virtio-win-gt-x64 installer. 32-bit vCPUs are no longer supported.
  3. Optional: During the Custom Setup step of the installer, select the device drivers you want to install. The recommended driver set is selected by default.
  4. After the installation is complete, select Finish.
  5. Reboot the VM.

Verification

  1. Open the system disk on the PC. This is typically C:.
  2. Navigate to Program Files Virtio-Win.

If the Virtio-Win directory is present and contains a sub-directory for each driver, the installation was successful.

10.11.2.5. Adding VirtIO drivers container disk to a virtual machine

OpenShift Virtualization distributes VirtIO drivers for Microsoft Windows as a container disk, which is available from the Red Hat Ecosystem Catalog. To install these drivers to a Windows virtual machine, attach the container-native-virtualization/virtio-win container disk to the virtual machine as a SATA CD drive in the virtual machine configuration file.

Prerequisites

  • Download the container-native-virtualization/virtio-win container disk from the Red Hat Ecosystem Catalog. This is not mandatory, because the container disk is downloaded from the Red Hat registry if it is not already present in the cluster, but downloading it in advance can reduce installation time.

Procedure

  1. Add the container-native-virtualization/virtio-win container disk as a cdrom disk in the Windows virtual machine configuration file. The container disk will be downloaded from the registry if it is not already present in the cluster.

    spec:
      domain:
        devices:
          disks:
            - name: virtiocontainerdisk
              bootOrder: 2 1
              cdrom:
                bus: sata
      volumes:
        - containerDisk:
            image: container-native-virtualization/virtio-win
          name: virtiocontainerdisk
    1
    OpenShift Virtualization boots virtual machine disks in the order defined in the VirtualMachine configuration file. You can either define other disks for the virtual machine before the container-native-virtualization/virtio-win container disk or use the optional bootOrder parameter to ensure the virtual machine boots from the correct disk. If you specify the bootOrder for a disk, it must be specified for all disks in the configuration.
  2. The disk is available once the virtual machine has started:

    • If you add the container disk to a running virtual machine, use oc apply -f <vm.yaml> in the CLI or reboot the virtual machine for the changes to take effect.
    • If the virtual machine is not running, use virtctl start <vm>.

After the virtual machine has started, the VirtIO drivers can be installed from the attached SATA CD drive.

10.12. Viewing the QEMU guest agent information for virtual machines

When the QEMU guest agent runs on the virtual machine, you can use the web console to view information about the virtual machine, users, file systems, and secondary networks.

If the QEMU guest agent is not installed, the Overview and the Details tabs display information about the operating system that was specified when the virtual machine was created.

10.12.1. Viewing the QEMU guest agent information in the web console

You can use the web console to view information for virtual machines that is passed by the QEMU guest agent to the host.

Prerequisites

  • Install the QEMU guest agent on the virtual machine.

Procedure

  1. Click Virtualization VirtualMachines from the side menu.
  2. Select a virtual machine name to open the VirtualMachine details page.
  3. Click the Details tab to view active users.
  4. Click the Configuration Disks tab to view information about the file systems.

10.13. Using virtual Trusted Platform Module devices

Add a virtual Trusted Platform Module (vTPM) device to a new or existing virtual machine by editing the VirtualMachine (VM) or VirtualMachineInstance (VMI) manifest.

10.13.1. About vTPM devices

A virtual Trusted Platform Module (vTPM) device functions like a physical Trusted Platform Module (TPM) hardware chip.

You can use a vTPM device with any operating system, but Windows 11 requires the presence of a TPM chip to install or boot. A vTPM device allows VMs created from a Windows 11 image to function without a physical TPM chip.

If you do not enable vTPM, then the VM does not recognize a TPM device, even if the node has one.

vTPM devices also protect virtual machines by temporarily storing secrets without physical hardware. However, using vTPM for persistent secret storage is not currently supported. vTPM discards stored secrets after a VM shuts down.

10.13.2. Adding a vTPM device to a virtual machine

Adding a virtual Trusted Platform Module (vTPM) device to a virtual machine (VM) allows you to run a VM created from a Windows 11 image without a physical TPM device. A vTPM device also temporarily stores secrets for that VM.

Procedure

  1. Run the following command to update the VM configuration:

    $ oc edit vm <vm_name>
  2. Edit the VM spec so that it includes the tpm: {} line. For example:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
        name: example-vm
    spec:
      template:
        spec:
          domain:
            devices:
              tpm: {} 1
    # ...
    1
    Adds the TPM device to the VM.
  3. To apply your changes, save and exit the editor.
  4. Optional: If you edited a running virtual machine, you must restart it for the changes to take effect.

10.14. Managing virtual machines with OpenShift Pipelines

Red Hat OpenShift Pipelines is a Kubernetes-native CI/CD framework that allows developers to design and run each step of the CI/CD pipeline in its own container.

The Tekton Tasks Operator (TTO) integrates OpenShift Virtualization with OpenShift Pipelines. TTO includes cluster tasks and example pipelines that allow you to:

  • Create and manage virtual machines (VMs), persistent volume claims (PVCs), and data volumes
  • Run commands in VMs
  • Manipulate disk images with libguestfs tools
Important

Managing virtual machines with Red Hat OpenShift Pipelines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

10.14.1. Prerequisites

  • You have access to an OpenShift Container Platform cluster with cluster-admin permissions.
  • You have installed the OpenShift CLI (oc).
  • You have installed OpenShift Pipelines.

10.14.2. Deploying the Tekton Tasks Operator resources

The Tekton Tasks Operator (TTO) cluster tasks and example pipelines are not deployed by default when you install OpenShift Virtualization. To deploy TTO resources, enable the deployTektonTaskResources feature gate in the HyperConverged custom resource (CR).

Procedure

  1. Open the HyperConverged CR in your default editor by running the following command:

    $ oc edit hco -n openshift-cnv kubevirt-hyperconverged
  2. Set the spec.featureGates.deployTektonTaskResources field to true.

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: kubevirt-hyperconverged
    spec:
      tektonPipelinesNamespace: <user_namespace> 1
      featureGates:
        deployTektonTaskResources: true 2
    # ...
    1
    The namespace where the pipelines are to be run.
    2
    The feature gate to be enabled to deploy TTO resources.
    Note

    The cluster tasks and example pipelines remain available even if you disable the feature gate later.

  3. Save your changes and exit the editor.

10.14.3. Virtual machine tasks supported by the Tekton Tasks Operator

The following table shows the cluster tasks that are included as part of the Tekton Tasks Operator.

Table 10.2. Virtual machine tasks supported by the Tekton Tasks Operator
Task                      Description

create-vm-from-template   Create a virtual machine from a template.

copy-template             Copy a virtual machine template.

modify-vm-template        Modify a virtual machine template.

modify-data-object        Create or delete data volumes or data sources.

cleanup-vm                Run a script or a command in a virtual machine and stop or
                          delete the virtual machine afterward.

disk-virt-customize       Use the virt-customize tool to run a customization script on
                          a target PVC.

disk-virt-sysprep         Use the virt-sysprep tool to run a sysprep script on a target
                          PVC.

wait-for-vmi-status       Wait for a specific status of a virtual machine instance and
                          fail or succeed based on the status.

10.14.4. Example pipelines

The Tekton Tasks Operator includes the following example Pipeline manifests. You can run the example pipelines by using the web console or CLI.

You might have to run more than one installer pipeline if you need multiple versions of Windows. If you run more than one installer pipeline, each one requires unique parameters, such as the autounattend config map and base image name. For example, if you need Windows 10 and either Windows 11 or Windows Server 2022 images, you have to run both the Windows EFI installer pipeline and the Windows BIOS installer pipeline. However, if you need Windows 11 and Windows Server 2022 images, you have to run only the Windows EFI installer pipeline.

Windows EFI installer pipeline
This pipeline installs Windows 11 or Windows Server 2022 into a new data volume from a Windows installation image (ISO file). A custom answer file is used to run the installation process.
Windows BIOS installer pipeline
This pipeline installs Windows 10 into a new data volume from a Windows installation image, also called an ISO file. A custom answer file is used to run the installation process.
Windows customize pipeline
This pipeline clones the data volume of a basic Windows 10, 11, or Windows Server 2022 installation, customizes it by installing Microsoft SQL Server Express or Microsoft Visual Studio Code, and then creates a new image and template.

10.14.4.1. Running the example pipelines using the web console

You can run the example pipelines from the Pipelines menu in the web console.

Procedure

  1. Click Pipelines Pipelines in the side menu.
  2. Select a pipeline to open the Pipeline details page.
  3. From the Actions list, select Start. The Start Pipeline dialog is displayed.
  4. Keep the default values for the parameters and then click Start to run the pipeline. The Details tab tracks the progress of each task and displays the pipeline status.

10.14.4.2. Running the example pipelines using the CLI

Use a PipelineRun resource to run the example pipelines. A PipelineRun object is the running instance of a pipeline. It instantiates a pipeline for execution with specific inputs, outputs, and execution parameters on a cluster. It also creates a TaskRun object for each task in the pipeline.

Procedure

  1. To run the Windows 10 installer pipeline, create the following PipelineRun manifest:

    apiVersion: tekton.dev/v1beta1
    kind: PipelineRun
    metadata:
      generateName: windows10-installer-run-
      labels:
        pipelinerun: windows10-installer-run
    spec:
      params:
      - name: winImageDownloadURL
        value: <link_to_windows_10_iso> 1
      pipelineRef:
        name: windows10-installer
      taskRunSpecs:
        - pipelineTaskName: copy-template
          taskServiceAccountName: copy-template-task
        - pipelineTaskName: modify-vm-template
          taskServiceAccountName: modify-vm-template-task
        - pipelineTaskName: create-vm-from-template
          taskServiceAccountName: create-vm-from-template-task
        - pipelineTaskName: wait-for-vmi-status
          taskServiceAccountName: wait-for-vmi-status-task
        - pipelineTaskName: create-base-dv
          taskServiceAccountName: modify-data-object-task
        - pipelineTaskName: cleanup-vm
          taskServiceAccountName: cleanup-vm-task
      status: {}
    1
    Specify the URL for the Windows 10 64-bit ISO file. The product language must be English (United States).
  2. Apply the PipelineRun manifest:

    $ oc apply -f windows10-installer-run.yaml
  3. To run the Windows 10 customize pipeline, create the following PipelineRun manifest:

    apiVersion: tekton.dev/v1beta1
    kind: PipelineRun
    metadata:
      generateName: windows10-customize-run-
      labels:
        pipelinerun: windows10-customize-run
    spec:
      params:
        - name: allowReplaceGoldenTemplate
          value: true
        - name: allowReplaceCustomizationTemplate
          value: true
      pipelineRef:
        name: windows10-customize
      taskRunSpecs:
        - pipelineTaskName: copy-template-customize
          taskServiceAccountName: copy-template-task
        - pipelineTaskName: modify-vm-template-customize
          taskServiceAccountName: modify-vm-template-task
        - pipelineTaskName: create-vm-from-template
          taskServiceAccountName: create-vm-from-template-task
        - pipelineTaskName: wait-for-vmi-status
          taskServiceAccountName: wait-for-vmi-status-task
        - pipelineTaskName: create-base-dv
          taskServiceAccountName: modify-data-object-task
        - pipelineTaskName: cleanup-vm
          taskServiceAccountName: cleanup-vm-task
        - pipelineTaskName: copy-template-golden
          taskServiceAccountName: copy-template-task
        - pipelineTaskName: modify-vm-template-golden
          taskServiceAccountName: modify-vm-template-task
    status: {}
  4. Apply the PipelineRun manifest:

    $ oc apply -f windows10-customize-run.yaml

10.15. Advanced virtual machine management

10.15.1. Working with resource quotas for virtual machines

Create and manage resource quotas for virtual machines.

10.15.1.1. Setting resource quota limits for virtual machines

Resource quotas that only use requests automatically work with virtual machines (VMs). If your resource quota uses limits, you must manually set resource limits on VMs. Resource limits must be at least 100 MiB larger than resource requests.
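
For context, a resource quota that uses limits might look like the following sketch. The name, namespace, and values are hypothetical examples:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: vm-quota
  namespace: example-namespace
spec:
  hard:
    requests.memory: 8Gi
    limits.memory: 10Gi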

Procedure

  1. Set limits for a VM by editing the VirtualMachine manifest. For example:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: with-limits
    spec:
      running: false
      template:
        spec:
          domain:
    # ...
            resources:
              requests:
                memory: 128Mi
              limits:
                memory: 256Mi  1
    1
    This configuration is supported because the limits.memory value is at least 100Mi larger than the requests.memory value.
  2. Save the VirtualMachine manifest.

10.15.2. Specifying nodes for virtual machines

You can place virtual machines (VMs) on specific nodes by using node placement rules.

10.15.2.1. About node placement for virtual machines

To ensure that virtual machines (VMs) run on appropriate nodes, you can configure node placement rules. You might want to do this if:

  • You have several VMs. To ensure fault tolerance, you want them to run on different nodes.
  • You have two chatty VMs. To avoid redundant inter-node routing, you want the VMs to run on the same node.
  • Your VMs require specific hardware features that are not present on all available nodes.
  • You have a pod that adds capabilities to a node, and you want to place a VM on that node so that it can use those capabilities.
Note

Virtual machine placement relies on any existing node placement rules for workloads. If workloads are excluded from specific nodes on the component level, virtual machines cannot be placed on those nodes.

You can use the following rule types in the spec field of a VirtualMachine manifest:

nodeSelector
Allows virtual machines to be scheduled on nodes that are labeled with the key-value pair or pairs that you specify in this field. The node must have labels that exactly match all listed pairs.
affinity

Enables you to use more expressive syntax to set rules that match nodes with virtual machines. For example, you can specify that a rule is a preference, rather than a hard requirement, so that virtual machines are still scheduled if the rule is not satisfied. Pod affinity, pod anti-affinity, and node affinity are supported for virtual machine placement. Pod affinity works for virtual machines because the VirtualMachine workload type is based on the Pod object.

Note

Affinity rules only apply during scheduling. OpenShift Container Platform does not reschedule running workloads if the constraints are no longer met.

tolerations
Allows virtual machines to be scheduled on nodes that have matching taints. If a taint is applied to a node, that node only accepts virtual machines that tolerate the taint.

10.15.2.2. Node placement examples

The following example YAML file snippets use nodeSelector, affinity, and tolerations fields to customize node placement for virtual machines.

10.15.2.2.1. Example: VM node placement with nodeSelector

In this example, the virtual machine requires a node that has metadata containing both example-key-1 = example-value-1 and example-key-2 = example-value-2 labels.

Warning

If there are no nodes that fit this description, the virtual machine is not scheduled.

Example VM manifest

metadata:
  name: example-vm-node-selector
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  template:
    spec:
      nodeSelector:
        example-key-1: example-value-1
        example-key-2: example-value-2
# ...

10.15.2.2.2. Example: VM node placement with pod affinity and pod anti-affinity

In this example, the VM must be scheduled on a node that has a running pod with the label example-key-1 = example-value-1. If there is no such pod running on any node, the VM is not scheduled.

If possible, the VM is not scheduled on a node that has any pod with the label example-key-2 = example-value-2. However, if all candidate nodes have a pod with this label, the scheduler ignores this constraint.

Example VM manifest

metadata:
  name: example-vm-pod-affinity
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  template:
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution: 1
          - labelSelector:
              matchExpressions:
              - key: example-key-1
                operator: In
                values:
                - example-value-1
            topologyKey: kubernetes.io/hostname
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution: 2
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: example-key-2
                  operator: In
                  values:
                  - example-value-2
              topologyKey: kubernetes.io/hostname
# ...

1
If you use the requiredDuringSchedulingIgnoredDuringExecution rule type, the VM is not scheduled if the constraint is not met.
2
If you use the preferredDuringSchedulingIgnoredDuringExecution rule type, the VM is still scheduled if the constraint is not met, as long as all required constraints are met.
10.15.2.2.3. Example: VM node placement with node affinity

In this example, the VM must be scheduled on a node that has the label example.io/example-key = example-value-1 or the label example.io/example-key = example-value-2. The constraint is met if only one of the labels is present on the node. If neither label is present, the VM is not scheduled.

If possible, the scheduler avoids nodes that have the label example-node-label-key = example-node-label-value. However, if all candidate nodes have this label, the scheduler ignores this constraint.

Example VM manifest

metadata:
  name: example-vm-node-affinity
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution: 1
            nodeSelectorTerms:
            - matchExpressions:
              - key: example.io/example-key
                operator: In
                values:
                - example-value-1
                - example-value-2
          preferredDuringSchedulingIgnoredDuringExecution: 2
          - weight: 1
            preference:
              matchExpressions:
              - key: example-node-label-key
                operator: In
                values:
                - example-node-label-value
# ...

1
If you use the requiredDuringSchedulingIgnoredDuringExecution rule type, the VM is not scheduled if the constraint is not met.
2
If you use the preferredDuringSchedulingIgnoredDuringExecution rule type, the VM is still scheduled if the constraint is not met, as long as all required constraints are met.
10.15.2.2.4. Example: VM node placement with tolerations

In this example, nodes that are reserved for virtual machines are already labeled with the key=virtualization:NoSchedule taint. Because this virtual machine has matching tolerations, it can schedule onto the tainted nodes.

Note

A virtual machine that tolerates a taint is not required to schedule onto a node with that taint.

Example VM manifest

metadata:
  name: example-vm-tolerations
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "virtualization"
    effect: "NoSchedule"
# ...

10.15.3. Configuring certificate rotation

Configure certificate rotation parameters to replace existing certificates.

10.15.3.1. Configuring certificate rotation

You can configure certificate rotation during OpenShift Virtualization installation in the web console, or after installation by editing the HyperConverged custom resource (CR).

Procedure

  1. Open the HyperConverged CR by running the following command:

    $ oc edit hco -n openshift-cnv kubevirt-hyperconverged
  2. Edit the spec.certConfig fields as shown in the following example. To avoid overloading the system, ensure that all values are greater than or equal to 10 minutes. Express all values as strings that comply with the golang ParseDuration format.

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
     name: kubevirt-hyperconverged
     namespace: openshift-cnv
    spec:
      certConfig:
        ca:
          duration: 48h0m0s
          renewBefore: 24h0m0s 1
        server:
          duration: 24h0m0s  2
          renewBefore: 12h0m0s  3
    1
    The value of ca.renewBefore must be less than or equal to the value of ca.duration.
    2
    The value of server.duration must be less than or equal to the value of ca.duration.
    3
    The value of server.renewBefore must be less than or equal to the value of server.duration.
  3. Apply the YAML file to your cluster.

10.15.3.2. Troubleshooting certificate rotation parameters

Deleting one or more certConfig values causes them to revert to the default values, unless the default values conflict with one of the following conditions:

  • The value of ca.renewBefore must be less than or equal to the value of ca.duration.
  • The value of server.duration must be less than or equal to the value of ca.duration.
  • The value of server.renewBefore must be less than or equal to the value of server.duration.

If the default values conflict with these conditions, you will receive an error.

If you remove the server.duration value in the following example, the default value of 24h0m0s is greater than the value of ca.duration, conflicting with the specified conditions.

Example

certConfig:
   ca:
     duration: 4h0m0s
     renewBefore: 1h0m0s
   server:
     duration: 4h0m0s
     renewBefore: 4h0m0s

This results in the following error message:

error: hyperconvergeds.hco.kubevirt.io "kubevirt-hyperconverged" could not be patched: admission webhook "validate-hco.kubevirt.io" denied the request: spec.certConfig: ca.duration is smaller than server.duration

The error message only mentions the first conflict. Review all certConfig values before you proceed.

10.15.4. Configuring the default CPU model

Use the defaultCPUModel setting in the HyperConverged custom resource (CR) to define a cluster-wide default CPU model.

The virtual machine (VM) CPU model depends on the availability of CPU models within the VM and the cluster.

  • If the VM does not have a defined CPU model:

    • The VM’s CPU model is automatically set to the defaultCPUModel that is defined at the cluster-wide level.
  • If both the VM and the cluster have a defined CPU model:

    • The VM’s CPU model takes precedence.
  • If neither the VM nor the cluster has a defined CPU model:

    • The CPU model is automatically set to host-model, which uses the CPU model of the node where the VM is scheduled.

10.15.4.1. Configuring the default CPU model

Configure the defaultCPUModel by updating the HyperConverged custom resource (CR). You can change the defaultCPUModel while OpenShift Virtualization is running.

Note

The defaultCPUModel is case sensitive.

Prerequisites

  • Install the OpenShift CLI (oc).

Procedure

  1. Open the HyperConverged CR by running the following command:

    $ oc edit hco -n openshift-cnv kubevirt-hyperconverged
  2. Add the defaultCPUModel field to the CR and set the value to the name of a CPU model that exists in the cluster:

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
     name: kubevirt-hyperconverged
     namespace: openshift-cnv
    spec:
      defaultCPUModel: "EPYC"
  3. Apply the YAML file to your cluster.

10.15.5. Using UEFI mode for virtual machines

You can boot a virtual machine (VM) in Unified Extensible Firmware Interface (UEFI) mode.

10.15.5.1. About UEFI mode for virtual machines

Unified Extensible Firmware Interface (UEFI), like legacy BIOS, initializes hardware components and operating system image files when a computer starts. UEFI supports more modern features and customization options than BIOS, enabling faster boot times.

It stores all the information about initialization and startup in a file with a .efi extension, which is stored on a special partition called EFI System Partition (ESP). The ESP also contains the boot loader programs for the operating system that is installed on the computer.

10.15.5.2. Booting virtual machines in UEFI mode

You can configure a virtual machine to boot in UEFI mode by editing the VirtualMachine manifest.

Prerequisites

  • Install the OpenShift CLI (oc).

Procedure

  1. Edit or create a VirtualMachine manifest file. Use the spec.firmware.bootloader stanza to configure UEFI mode:

    Booting in UEFI mode with secure boot active

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      labels:
        special: vm-secureboot
      name: vm-secureboot
    spec:
      template:
        metadata:
          labels:
            special: vm-secureboot
        spec:
          domain:
            devices:
              disks:
              - disk:
                  bus: virtio
                name: containerdisk
            features:
              acpi: {}
              smm:
                enabled: true 1
            firmware:
              bootloader:
                efi:
                  secureBoot: true 2
    # ...

    1
    OpenShift Virtualization requires System Management Mode (SMM) to be enabled for Secure Boot in UEFI mode to occur.
    2
    OpenShift Virtualization supports a VM with or without Secure Boot when using UEFI mode. If Secure Boot is enabled, then UEFI mode is required. However, UEFI mode can be enabled without using Secure Boot.
  2. Apply the manifest to your cluster by running the following command:

    $ oc create -f <file_name>.yaml

10.15.6. Configuring PXE booting for virtual machines

PXE booting, or network booting, is available in OpenShift Virtualization. Network booting allows a computer to boot and load an operating system or other program without requiring a locally attached storage device. For example, you can use it to choose your desired OS image from a PXE server when deploying a new host.

10.15.6.1. Prerequisites

  • A Linux bridge must be connected.
  • The PXE server must be connected to the same VLAN as the bridge.

10.15.6.2. PXE booting with a specified MAC address

As an administrator, you can boot a client over the network by first creating a NetworkAttachmentDefinition object for your PXE network. Then, reference the network attachment definition in your virtual machine instance configuration file before you start the virtual machine instance. You can also specify a MAC address in the virtual machine instance configuration file, if required by the PXE server.

Prerequisites

  • A Linux bridge must be connected.
  • The PXE server must be connected to the same VLAN as the bridge.

Procedure

  1. Configure a PXE network on the cluster:

    1. Create the network attachment definition file for PXE network pxe-net-conf:

      apiVersion: "k8s.cni.cncf.io/v1"
      kind: NetworkAttachmentDefinition
      metadata:
        name: pxe-net-conf
      spec:
        config: '{
          "cniVersion": "0.3.1",
          "name": "pxe-net-conf",
          "plugins": [
            {
              "type": "cnv-bridge",
              "bridge": "br1",
              "vlan": 1 1
            },
            {
              "type": "cnv-tuning" 2
            }
          ]
        }'
      1
      Optional: The VLAN tag.
      2
      The cnv-tuning plugin provides support for custom MAC addresses.
      Note

      The virtual machine instance will be attached to the bridge br1 through an access port with the requested VLAN.

  2. Create the network attachment definition by using the file you created in the previous step:

    $ oc create -f pxe-net-conf.yaml
  3. Edit the virtual machine instance configuration file to include the details of the interface and network.

    1. Specify the network and MAC address, if required by the PXE server. If the MAC address is not specified, a value is assigned automatically.

      Ensure that bootOrder is set to 1 so that the interface boots first. In this example, the interface is connected to a network called <pxe-net>:

      interfaces:
      - masquerade: {}
        name: default
      - bridge: {}
        name: pxe-net
        macAddress: de:00:00:00:00:de
        bootOrder: 1
      Note

      Boot order is global for interfaces and disks.

    2. Assign a boot device number to the disk to ensure proper booting after operating system provisioning.

      Set the disk bootOrder value to 2:

      devices:
        disks:
        - disk:
            bus: virtio
          name: containerdisk
          bootOrder: 2
    3. Specify that the network is connected to the previously created network attachment definition. In this scenario, <pxe-net> is connected to the network attachment definition called <pxe-net-conf>:

      networks:
      - name: default
        pod: {}
      - name: pxe-net
        multus:
          networkName: pxe-net-conf
  4. Create the virtual machine instance:

    $ oc create -f vmi-pxe-boot.yaml

Example output

  virtualmachineinstance.kubevirt.io "vmi-pxe-boot" created

  5. Wait for the virtual machine instance to run:

    $ oc get vmi vmi-pxe-boot -o yaml | grep -i phase
      phase: Running
  6. View the virtual machine instance using VNC:

    $ virtctl vnc vmi-pxe-boot
  7. Watch the boot screen to verify that the PXE boot is successful.
  8. Log in to the virtual machine instance:

    $ virtctl console vmi-pxe-boot
  9. Verify the interfaces and MAC address on the virtual machine and that the interface connected to the bridge has the specified MAC address. In this case, we used eth1 for the PXE boot, without an IP address. The other interface, eth0, got an IP address from OpenShift Container Platform.

    $ ip addr

Example output

...
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
   link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff

10.15.6.3. OpenShift Virtualization networking glossary

OpenShift Virtualization provides advanced networking functionality by using custom resources and plugins.

The following terms are used throughout OpenShift Virtualization documentation:

Container Network Interface (CNI)
a Cloud Native Computing Foundation project, focused on container network connectivity. OpenShift Virtualization uses CNI plugins to build upon the basic Kubernetes networking functionality.
Multus
a "meta" CNI plugin that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs.
Custom resource definition (CRD)
a Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource.
Network attachment definition (NAD)
a CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtual machine instances to one or more networks.
Node network configuration policy (NNCP)
a description of the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a NodeNetworkConfigurationPolicy manifest to the cluster.
Preboot eXecution Environment (PXE)
an interface that enables an administrator to boot a client machine from a server over the network. Network booting allows you to remotely load operating systems and other software onto the client.

10.15.7. Using huge pages with virtual machines

You can use huge pages as backing memory for virtual machines in your cluster.

10.15.7.2. What huge pages do

Memory is managed in blocks known as pages. On most systems, a page is 4Ki. 1Mi of memory is equal to 256 pages; 1Gi of memory is 262,144 pages, and so on. CPUs have a built-in memory management unit that manages a list of these pages in hardware. The Translation Lookaside Buffer (TLB) is a small hardware cache of virtual-to-physical page mappings. If the virtual address passed in a hardware instruction can be found in the TLB, the mapping can be determined quickly. If not, a TLB miss occurs, and the system falls back to slower, software-based address translation, resulting in performance issues. Since the size of the TLB is fixed, the only way to reduce the chance of a TLB miss is to increase the page size.

A huge page is a memory page that is larger than 4Ki. On x86_64 architectures, there are two common huge page sizes: 2Mi and 1Gi. Sizes vary on other architectures. To use huge pages, code must be written so that applications are aware of them. Transparent Huge Pages (THP) attempt to automate the management of huge pages without application knowledge, but they have limitations. In particular, they are limited to 2Mi page sizes. THP can lead to performance degradation on nodes with high memory utilization or fragmentation due to defragmenting efforts of THP, which can lock memory pages. For this reason, some applications may be designed to use (or may recommend) pre-allocated huge pages instead of THP.

In OpenShift Virtualization, virtual machines can be configured to consume pre-allocated huge pages.
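
You can check whether a node already advertises pre-allocated huge pages by inspecting its capacity. This is a generic query, not a step of the procedure that follows:

$ oc describe node <node_name> | grep -i hugepages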

10.15.7.3. Configuring huge pages for virtual machines

You can configure virtual machines to use pre-allocated huge pages by including the memory.hugepages.pageSize and resources.requests.memory parameters in your virtual machine configuration.

The memory request must be divisible by the page size. For example, you cannot request 500Mi memory with a page size of 1Gi.

Note

The memory layouts of the host and the guest OS are unrelated. Huge pages requested in the virtual machine manifest apply to QEMU. Huge pages inside the guest can only be configured based on the amount of available memory of the virtual machine instance.

If you edit a running virtual machine, the virtual machine must be rebooted for the changes to take effect.

Prerequisites

  • Nodes must have pre-allocated huge pages configured.

Procedure

  1. In your virtual machine configuration, add the resources.requests.memory and memory.hugepages.pageSize parameters to the spec.domain. The following configuration snippet is for a virtual machine that requests a total of 4Gi memory with a page size of 1Gi:

    kind: VirtualMachine
    # ...
    spec:
      domain:
        resources:
          requests:
            memory: "4Gi" 1
        memory:
          hugepages:
            pageSize: "1Gi" 2
    # ...
    1
    The total amount of memory requested for the virtual machine. This value must be divisible by the page size.
    2
    The size of each huge page. Valid values for x86_64 architecture are 1Gi and 2Mi. The page size must be smaller than the requested memory.
  2. Apply the virtual machine configuration:

    $ oc apply -f <virtual_machine>.yaml

10.15.8. Enabling dedicated resources for virtual machines

To improve performance, you can dedicate node resources, such as CPU, to a virtual machine.

10.15.8.1. About dedicated resources

When you enable dedicated resources for your virtual machine, your virtual machine’s workload is scheduled on CPUs that will not be used by other processes. By using dedicated resources, you can improve the performance of the virtual machine and the accuracy of latency predictions.
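
For reference, the Dedicated Resources option corresponds to the dedicatedCpuPlacement setting in the KubeVirt CPU configuration. The following fragment is a sketch only, with example values; the web console procedure in this section manages the field for you:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  template:
    spec:
      domain:
        cpu:
          cores: 2
          dedicatedCpuPlacement: true
# ...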

10.15.8.2. Prerequisites

  • The CPU Manager must be configured on the node. Verify that the node has the cpumanager = true label before scheduling virtual machine workloads.
  • The virtual machine must be powered off.

10.15.8.3. Enabling dedicated resources for a virtual machine

You enable dedicated resources for a virtual machine in the Details tab. Virtual machines that were created from a Red Hat template can be configured with dedicated resources.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. On the Configuration Scheduling tab, click the edit icon beside Dedicated Resources.
  4. Select Schedule this workload with dedicated resources (guaranteed policy).
  5. Click Save.

10.15.9. Scheduling virtual machines

You can schedule a virtual machine (VM) on a node by ensuring that the VM’s CPU model and policy attribute are matched for compatibility with the CPU models and policy attributes supported by the node.

10.15.9.1. Policy attributes

You can schedule a virtual machine (VM) by specifying a policy attribute and a CPU feature that is matched for compatibility when the VM is scheduled on a node. A policy attribute specified for a VM determines how that VM is scheduled on a node.

Policy attribute   Description

force              The VM is forced to be scheduled on a node. This is true even if the
                   host CPU does not support the VM’s CPU.

require            Default policy that applies to a VM if the VM is not configured with
                   a specific CPU model and feature specification. If a node is not
                   configured to support CPU node discovery with this default policy
                   attribute or any one of the other policy attributes, VMs are not
                   scheduled on that node. Either the host CPU must support the VM’s CPU
                   or the hypervisor must be able to emulate the supported CPU model.

optional           The VM is added to a node if that VM is supported by the host’s
                   physical machine CPU.

disable            The VM cannot be scheduled with CPU node discovery.

forbid             The VM is not scheduled even if the feature is supported by the host
                   CPU and CPU node discovery is enabled.

10.15.9.2. Setting a policy attribute and CPU feature

You can set a policy attribute and CPU feature for each virtual machine (VM) to ensure that it is scheduled on a node according to policy and feature. The CPU feature that you set is verified to ensure that it is supported by the host CPU or emulated by the hypervisor.

Procedure

  • Edit the domain spec of your VM configuration file. The following example sets the CPU feature and the require policy for a virtual machine (VM):

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: myvm
    spec:
      template:
        spec:
          domain:
            cpu:
              features:
                - name: apic 1
                  policy: require 2
    1
    Name of the CPU feature for the VM.
    2
    Policy attribute for the VM.
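After editing the manifest, apply it as in the other procedures in this section. A minimal sketch, assuming the manifest is saved as myvm.yaml:

    $ oc apply -f myvm.yaml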

10.15.9.3. Scheduling virtual machines with the supported CPU model

You can configure a CPU model for a virtual machine (VM) to schedule it on a node where its CPU model is supported.

Procedure

  • Edit the domain spec of your virtual machine configuration file. The following example shows a specific CPU model defined for a VM:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: myvm
    spec:
      template:
        spec:
          domain:
            cpu:
              model: Conroe 1
    1
    CPU model for the VM.

10.15.9.4. Scheduling virtual machines with the host model

When the CPU model for a virtual machine (VM) is set to host-model, the VM inherits the CPU model of the node where it is scheduled.

Procedure

  • Edit the domain spec of your VM configuration file. The following example shows host-model being specified for the virtual machine:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: myvm
    spec:
      template:
        spec:
          domain:
            cpu:
              model: host-model 1
    1
    The VM inherits the CPU model of the node where it is scheduled.

10.15.10. Configuring PCI passthrough

The Peripheral Component Interconnect (PCI) passthrough feature enables you to access and manage hardware devices from a virtual machine. When PCI passthrough is configured, the PCI devices function as if they were physically attached to the guest operating system.

Cluster administrators can expose and manage host devices that are permitted to be used in the cluster by using the oc command-line interface (CLI).

10.15.10.1. About preparing a host device for PCI passthrough

To prepare a host device for PCI passthrough by using the CLI, create a MachineConfig object and add kernel arguments to enable the Input-Output Memory Management Unit (IOMMU). Bind the PCI device to the Virtual Function I/O (VFIO) driver and then expose it in the cluster by editing the permittedHostDevices field of the HyperConverged custom resource (CR). The permittedHostDevices list is empty when you first install the OpenShift Virtualization Operator.

To remove a PCI host device from the cluster by using the CLI, delete the PCI device information from the HyperConverged CR.

10.15.10.1.1. Adding kernel arguments to enable the IOMMU driver

To enable the IOMMU (Input-Output Memory Management Unit) driver in the kernel, create the MachineConfig object and add the kernel arguments.

Prerequisites

  • Administrative privilege to a working OpenShift Container Platform cluster.
  • Intel or AMD CPU hardware.
  • Intel Virtualization Technology for Directed I/O extensions or AMD IOMMU in the BIOS (Basic Input/Output System) is enabled.

Procedure

  1. Create a MachineConfig object that identifies the kernel argument. The following example shows a kernel argument for an Intel CPU.

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker 1
      name: 100-worker-iommu 2
    spec:
      config:
        ignition:
          version: 3.2.0
      kernelArguments:
          - intel_iommu=on 3
    # ...
    1
    Applies the new kernel argument only to worker nodes.
    2
    The name indicates the ranking of this kernel argument (100) among the machine configs and its purpose. If you have an AMD CPU, specify the kernel argument as amd_iommu=on.
    3
    Identifies the kernel argument as intel_iommu for an Intel CPU.
  2. Create the new MachineConfig object:

    $ oc create -f 100-worker-kernel-arg-iommu.yaml

Verification

  • Verify that the new MachineConfig object was added.

    $ oc get MachineConfig
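After the Machine Config Operator rolls out the change and the worker nodes reboot, you can also confirm that the kernel argument is active on a node. A minimal sketch using a debug pod; the node name is a placeholder:

    $ oc debug node/<node_name> -- chroot /host cat /proc/cmdline

The output should contain intel_iommu=on, or amd_iommu=on on AMD hosts.
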
10.15.10.1.2. Binding PCI devices to the VFIO driver

To bind PCI devices to the VFIO (Virtual Function I/O) driver, obtain the values for vendor-ID and device-ID from each device and create a list with the values. Add this list to the MachineConfig object. The Machine Config Operator generates the /etc/modprobe.d/vfio.conf file on the nodes that have the PCI devices, and binds the PCI devices to the VFIO driver.

Prerequisites

  • You added kernel arguments to enable IOMMU for the CPU.

Procedure

  1. Run the lspci command to obtain the vendor-ID and the device-ID for the PCI device.

    $ lspci -nnv | grep -i nvidia

    Example output

    02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)

  2. Create a Butane config file, 100-worker-vfiopci.bu, binding the PCI device to the VFIO driver.

    Note

    See "Creating machine configs with Butane" for information about Butane.

    Example

    variant: openshift
    version: 4.13.0
    metadata:
      name: 100-worker-vfiopci
      labels:
        machineconfiguration.openshift.io/role: worker 1
    storage:
      files:
      - path: /etc/modprobe.d/vfio.conf
        mode: 0644
        overwrite: true
        contents:
          inline: |
            options vfio-pci ids=10de:1eb8 2
      - path: /etc/modules-load.d/vfio-pci.conf 3
        mode: 0644
        overwrite: true
        contents:
          inline: vfio-pci

    1
    Applies the new kernel argument only to worker nodes.
    2
    Specify the previously determined vendor-ID value (10de) and the device-ID value (1eb8) to bind a single device to the VFIO driver. You can add a list of multiple devices with their vendor and device information.
    3
    The file that loads the vfio-pci kernel module on the worker nodes.
  3. Use Butane to generate a MachineConfig object file, 100-worker-vfiopci.yaml, containing the configuration to be delivered to the worker nodes:

    $ butane 100-worker-vfiopci.bu -o 100-worker-vfiopci.yaml
  4. Apply the MachineConfig object to the worker nodes:

    $ oc apply -f 100-worker-vfiopci.yaml
  5. Verify that the MachineConfig object was added.

    $ oc get MachineConfig

    Example output

    NAME                             GENERATEDBYCONTROLLER                      IGNITIONVERSION  AGE
    00-master                        d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0            25h
    00-worker                        d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0            25h
    01-master-container-runtime      d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0            25h
    01-master-kubelet                d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0            25h
    01-worker-container-runtime      d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0            25h
    01-worker-kubelet                d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0            25h
    100-worker-iommu                                                            3.2.0            30s
    100-worker-vfiopci-configuration                                            3.2.0            30s

Verification

  • Verify that the VFIO driver is loaded.

    $ lspci -nnk -d 10de:

    The output confirms that the VFIO driver is being used.

    Example output

    04:00.0 3D controller [0302]: NVIDIA Corporation GP102GL [Tesla P40] [10de:1eb8] (rev a1)
            Subsystem: NVIDIA Corporation Device [10de:1eb8]
            Kernel driver in use: vfio-pci
            Kernel modules: nouveau

10.15.10.1.3. Exposing PCI host devices in the cluster using the CLI

To expose PCI host devices in the cluster, add details about the PCI devices to the spec.permittedHostDevices.pciHostDevices array of the HyperConverged custom resource (CR).

Procedure

  1. Edit the HyperConverged CR in your default editor by running the following command:

    $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. Add the PCI device information to the spec.permittedHostDevices.pciHostDevices array. For example:

    Example configuration file

    apiVersion: hco.kubevirt.io/v1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      permittedHostDevices: 1
        pciHostDevices: 2
        - pciDeviceSelector: "10DE:1DB6" 3
          resourceName: "nvidia.com/GV100GL_Tesla_V100" 4
        - pciDeviceSelector: "10DE:1EB8"
          resourceName: "nvidia.com/TU104GL_Tesla_T4"
        - pciDeviceSelector: "8086:6F54"
          resourceName: "intel.com/qat"
          externalResourceProvider: true 5
    # ...

    1
    The host devices that are permitted to be used in the cluster.
    2
    The list of PCI devices available on the node.
    3
    The vendor-ID and the device-ID required to identify the PCI device.
    4
    The name of a PCI host device.
    5
    Optional: Setting this field to true indicates that the resource is provided by an external device plugin. OpenShift Virtualization allows the usage of this device in the cluster but leaves the allocation and monitoring to an external device plugin.
    Note

    The above example snippet shows two PCI host devices that are named nvidia.com/GV100GL_Tesla_V100 and nvidia.com/TU104GL_Tesla_T4 added to the list of permitted host devices in the HyperConverged CR. These devices have been tested and verified to work with OpenShift Virtualization.

  3. Save your changes and exit the editor.

Verification

  • Verify that the PCI host devices were added to the node by running the following command. The example output shows that there is one device each associated with the nvidia.com/GV100GL_Tesla_V100, nvidia.com/TU104GL_Tesla_T4, and intel.com/qat resource names.

    $ oc describe node <node_name>

    Example output

    Capacity:
      cpu:                            64
      devices.kubevirt.io/kvm:        110
      devices.kubevirt.io/tun:        110
      devices.kubevirt.io/vhost-net:  110
      ephemeral-storage:              915128Mi
      hugepages-1Gi:                  0
      hugepages-2Mi:                  0
      memory:                         131395264Ki
      nvidia.com/GV100GL_Tesla_V100:  1
      nvidia.com/TU104GL_Tesla_T4:    1
      intel.com/qat:                  1
      pods:                           250
    Allocatable:
      cpu:                            63500m
      devices.kubevirt.io/kvm:        110
      devices.kubevirt.io/tun:        110
      devices.kubevirt.io/vhost-net:  110
      ephemeral-storage:              863623130526
      hugepages-1Gi:                  0
      hugepages-2Mi:                  0
      memory:                         130244288Ki
      nvidia.com/GV100GL_Tesla_V100:  1
      nvidia.com/TU104GL_Tesla_T4:    1
      intel.com/qat:                  1
      pods:                           250
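If you prefer machine-readable output, you can filter the allocatable resources with jq, similar to the command shown later in this chapter. A minimal sketch; the resource-name prefixes are examples:

    $ oc get node <node_name> -o json | jq '.status.allocatable | with_entries(select(.key | startswith("nvidia.com/") or startswith("intel.com/")))'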

10.15.10.1.4. Removing PCI host devices from the cluster using the CLI

To remove a PCI host device from the cluster, delete the information for that device from the HyperConverged custom resource (CR).

Procedure

  1. Edit the HyperConverged CR in your default editor by running the following command:

    $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. Remove the PCI device information from the spec.permittedHostDevices.pciHostDevices array by deleting the pciDeviceSelector, resourceName and externalResourceProvider (if applicable) fields for the appropriate device. In this example, the intel.com/qat resource has been deleted.

    Example configuration file

    apiVersion: hco.kubevirt.io/v1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      permittedHostDevices:
        pciHostDevices:
        - pciDeviceSelector: "10DE:1DB6"
          resourceName: "nvidia.com/GV100GL_Tesla_V100"
        - pciDeviceSelector: "10DE:1EB8"
          resourceName: "nvidia.com/TU104GL_Tesla_T4"
    # ...

  3. Save your changes and exit the editor.

Verification

  • Verify that the PCI host device was removed from the node by running the following command. The example output shows that there are zero devices associated with the intel.com/qat resource name.

    $ oc describe node <node_name>

    Example output

    Capacity:
      cpu:                            64
      devices.kubevirt.io/kvm:        110
      devices.kubevirt.io/tun:        110
      devices.kubevirt.io/vhost-net:  110
      ephemeral-storage:              915128Mi
      hugepages-1Gi:                  0
      hugepages-2Mi:                  0
      memory:                         131395264Ki
      nvidia.com/GV100GL_Tesla_V100:  1
      nvidia.com/TU104GL_Tesla_T4:    1
      intel.com/qat:                  0
      pods:                           250
    Allocatable:
      cpu:                            63500m
      devices.kubevirt.io/kvm:        110
      devices.kubevirt.io/tun:        110
      devices.kubevirt.io/vhost-net:  110
      ephemeral-storage:              863623130526
      hugepages-1Gi:                  0
      hugepages-2Mi:                  0
      memory:                         130244288Ki
      nvidia.com/GV100GL_Tesla_V100:  1
      nvidia.com/TU104GL_Tesla_T4:    1
      intel.com/qat:                  0
      pods:                           250

10.15.10.2. Configuring virtual machines for PCI passthrough

After you add PCI devices to the cluster, you can assign them to virtual machines. The PCI devices are then available to the virtual machines as if they were physically connected.

10.15.10.2.1. Assigning a PCI device to a virtual machine

When a PCI device is available in a cluster, you can assign it to a virtual machine and enable PCI passthrough.

Procedure

  • Assign the PCI device to a virtual machine as a host device.

    Example

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    spec:
      domain:
        devices:
          hostDevices:
          - deviceName: nvidia.com/TU104GL_Tesla_T4 1
            name: hostdevices1

    1
    The name of the PCI device that is permitted on the cluster as a host device. The virtual machine can access this host device.

Verification

  • Use the following command to verify that the host device is available from the virtual machine.

    $ lspci -nnk | grep NVIDIA

    Example output

    02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)

10.15.10.3. Additional resources

10.15.11. Configuring vGPU passthrough

Your virtual machines can access virtual GPU (vGPU) hardware. Assigning a vGPU to your virtual machine allows you to do the following:

  • Access a fraction of the underlying hardware’s GPU to achieve high performance benefits in your virtual machine.
  • Streamline resource-intensive I/O operations.
Important

vGPU passthrough can only be assigned to devices that are connected to clusters running in a bare metal environment.

10.15.11.1. Assigning vGPU passthrough devices to a virtual machine

Use the OpenShift Container Platform web console to assign vGPU passthrough devices to your virtual machine.

Prerequisites

  • The virtual machine must be stopped.

Procedure

  1. In the OpenShift Container Platform web console, click Virtualization VirtualMachines from the side menu.
  2. Select the virtual machine to which you want to assign the device.
  3. On the Details tab, click GPU devices.

    If you add a vGPU device as a host device, you cannot access the device with the VNC console.

  4. Click Add GPU device, enter the Name and select the device from the Device name list.
  5. Click Save.
  6. Click the YAML tab to verify that the new devices have been added to your cluster configuration in the hostDevices section.
Note

You can add hardware devices to virtual machines created from customized templates or a YAML file. You cannot add devices to pre-supplied boot source templates for specific operating systems, such as Windows 10 or RHEL 7.

To display resources that are connected to your cluster, click Compute Hardware Devices from the side menu.

10.15.11.2. Additional resources

10.15.12. Configuring mediated devices

OpenShift Virtualization automatically creates mediated devices, such as virtual GPUs (vGPUs), if you provide a list of devices in the HyperConverged custom resource (CR).

Important

Declarative configuration of mediated devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

10.15.12.1. About using the NVIDIA GPU Operator

The NVIDIA GPU Operator manages NVIDIA GPU resources in an OpenShift Container Platform cluster and automates tasks related to bootstrapping GPU nodes. Because the GPU is a special resource in the cluster, you must install some components before you can deploy application workloads to the GPU. These components include the NVIDIA drivers that enable the Compute Unified Device Architecture (CUDA), the Kubernetes device plugin, the container runtime, and other features such as automatic node labeling and monitoring.

Note

The NVIDIA GPU Operator is supported only by NVIDIA. For more information about obtaining support from NVIDIA, see Obtaining Support from NVIDIA.

There are two ways to enable GPUs with OpenShift Virtualization: the OpenShift Container Platform-native way described here and by using the NVIDIA GPU Operator.

The NVIDIA GPU Operator is a Kubernetes Operator that uses OpenShift Virtualization to provision GPUs for virtualized workloads running on OpenShift Container Platform. With the Operator, you can easily provision and manage GPU-enabled virtual machines to run complex artificial intelligence/machine learning (AI/ML) workloads on the same platform as your other workloads. The Operator also provides an easy way to scale the GPU capacity of your infrastructure, enabling rapid growth of GPU-based workloads.

For more information about using the NVIDIA GPU Operator to provision worker nodes for running GPU-accelerated VMs, see NVIDIA GPU Operator with OpenShift Virtualization.

10.15.12.2. About using virtual GPUs with OpenShift Virtualization

Some graphics processing unit (GPU) cards support the creation of virtual GPUs (vGPUs). OpenShift Virtualization can automatically create vGPUs and other mediated devices if an administrator provides configuration details in the HyperConverged custom resource (CR). This automation is especially useful for large clusters.

Note

Refer to your hardware vendor’s documentation for functionality and support details.

Mediated device
A physical device that is divided into one or more virtual devices. A vGPU is a type of mediated device (mdev); the performance of the physical GPU is divided among the virtual devices. You can assign mediated devices to one or more virtual machines (VMs), but the number of guests must be compatible with your GPU. Some GPUs do not support multiple guests.
10.15.12.2.1. Prerequisites
  • If your hardware vendor provides drivers, you installed them on the nodes where you want to create mediated devices.

10.15.12.2.2. Configuration overview

When configuring mediated devices, an administrator must complete the following tasks:

  • Create the mediated devices.
  • Expose the mediated devices to the cluster.

The HyperConverged CR includes APIs that accomplish both tasks.

Creating mediated devices

# ...
spec:
  mediatedDevicesConfiguration:
    mediatedDevicesTypes: 1
    - <device_type>
    nodeMediatedDeviceTypes: 2
    - mediatedDevicesTypes: 3
      - <device_type>
      nodeSelector: 4
        <node_selector_key>: <node_selector_value>
# ...

1
Required: Configures global settings for the cluster.
2
Optional: Overrides the global configuration for a specific node or group of nodes. Must be used with the global mediatedDevicesTypes configuration.
3
Required if you use nodeMediatedDeviceTypes. Overrides the global mediatedDevicesTypes configuration for the specified nodes.
4
Required if you use nodeMediatedDeviceTypes. Must include a key:value pair.

Exposing mediated devices to the cluster

# ...
  permittedHostDevices:
    mediatedDevices:
    - mdevNameSelector: GRID T4-2Q 1
      resourceName: nvidia.com/GRID_T4-2Q 2
# ...

1
Exposes the mediated devices that map to this value on the host.
Note

You can see the mediated device types that your device supports by viewing the contents of /sys/bus/pci/devices/<domain>:<bus>:<slot>.<function>/mdev_supported_types/<type>/name, substituting the correct values for your system.

For example, the name file for the nvidia-231 type contains the selector string GRID T4-2Q. Using GRID T4-2Q as the mdevNameSelector value allows nodes to use the nvidia-231 type.

2
The resourceName should match that allocated on the node. Find the resourceName by using the following command:
$ oc get $NODE -o json | jq '.status.allocatable | with_entries(select(.key | startswith("nvidia.com/"))) | with_entries(select(.value != "0"))'
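To see which mediated device types a particular card supports and how many instances of each are still available, you can inspect the sysfs entries mentioned in the note above directly on the node. A minimal sketch; the PCI address 0000:3b:00.0 is a placeholder, and the name and available_instances attributes are standard mdev sysfs files:

$ for t in /sys/bus/pci/devices/0000:3b:00.0/mdev_supported_types/*; do
    # print the type identifier, its selector string, and the remaining instance count
    echo "$(basename "$t"): $(cat "$t"/name) ($(cat "$t"/available_instances) available)"
  done
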
10.15.12.2.3. How vGPUs are assigned to nodes

For each physical device, OpenShift Virtualization configures the following values:

  • A single mdev type.
  • The maximum number of instances of the selected mdev type.

The cluster architecture affects how devices are created and assigned to nodes.

Large cluster with multiple cards per node

On nodes with multiple cards that can support similar vGPU types, the relevant device types are created in a round-robin manner. For example:

# ...
mediatedDevicesConfiguration:
  mediatedDevicesTypes:
  - nvidia-222
  - nvidia-228
  - nvidia-105
  - nvidia-108
# ...

In this scenario, each node has two cards, both of which support the following vGPU types:

nvidia-105
# ...
nvidia-108
nvidia-217
nvidia-299
# ...

On each node, OpenShift Virtualization creates the following vGPUs:

  • 16 vGPUs of type nvidia-105 on the first card.
  • 2 vGPUs of type nvidia-108 on the second card.

One node has a single card that supports more than one requested vGPU type

OpenShift Virtualization uses the supported type that comes first on the mediatedDevicesTypes list.

For example, the card on a node supports nvidia-223 and nvidia-224. The following mediatedDevicesTypes list is configured:

# ...
mediatedDevicesConfiguration:
  mediatedDevicesTypes:
  - nvidia-22
  - nvidia-223
  - nvidia-224
# ...

In this example, OpenShift Virtualization uses the nvidia-223 type.

10.15.12.2.4. About changing and removing mediated devices

You can update the cluster’s mediated device configuration with OpenShift Virtualization by doing any of the following:

  • Editing the HyperConverged CR and changing the contents of the mediatedDevicesTypes stanza.
  • Changing the node labels that match the nodeMediatedDeviceTypes node selector.
  • Removing the device information from the spec.mediatedDevicesConfiguration and spec.permittedHostDevices stanzas of the HyperConverged CR.

    Note

    If you remove the device information from the spec.permittedHostDevices stanza without also removing it from the spec.mediatedDevicesConfiguration stanza, you cannot create a new mediated device type on the same node. To properly remove mediated devices, remove the device information from both stanzas.

Depending on the specific changes, these actions cause OpenShift Virtualization to reconfigure mediated devices or remove them from the cluster nodes.

10.15.12.2.5. Preparing hosts for mediated devices

You must enable the Input-Output Memory Management Unit (IOMMU) driver before you can configure mediated devices.

10.15.12.2.5.1. Adding kernel arguments to enable the IOMMU driver

To enable the IOMMU (Input-Output Memory Management Unit) driver in the kernel, create the MachineConfig object and add the kernel arguments.

Prerequisites

  • Administrative privilege to a working OpenShift Container Platform cluster.
  • Intel or AMD CPU hardware.
  • Intel Virtualization Technology for Directed I/O extensions or AMD IOMMU in the BIOS (Basic Input/Output System) is enabled.

Procedure

  1. Create a MachineConfig object that identifies the kernel argument. The following example shows a kernel argument for an Intel CPU.

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker 1
      name: 100-worker-iommu 2
    spec:
      config:
        ignition:
          version: 3.2.0
      kernelArguments:
          - intel_iommu=on 3
    # ...
    1
    Applies the new kernel argument only to worker nodes.
    2
    The name indicates the ranking of this kernel argument (100) among the machine configs and its purpose. If you have an AMD CPU, specify the kernel argument as amd_iommu=on.
    3
    Identifies the kernel argument as intel_iommu for an Intel CPU.
  2. Create the new MachineConfig object:

    $ oc create -f 100-worker-kernel-arg-iommu.yaml

Verification

  • Verify that the new MachineConfig object was added.

    $ oc get MachineConfig
10.15.12.2.6. Adding and removing mediated devices

You can add or remove mediated devices.

10.15.12.2.6.1. Creating and exposing mediated devices

You can expose and create mediated devices such as virtual GPUs (vGPUs) by editing the HyperConverged custom resource (CR).

Prerequisites

  • You enabled the IOMMU (Input-Output Memory Management Unit) driver.

Procedure

  1. Edit the HyperConverged CR in your default editor by running the following command:

    $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. Add the mediated device information to the HyperConverged CR spec, ensuring that you include the mediatedDevicesConfiguration and permittedHostDevices stanzas. For example:

    Example configuration file

    apiVersion: hco.kubevirt.io/v1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      mediatedDevicesConfiguration: 1
        mediatedDevicesTypes: 2
        - nvidia-231
        nodeMediatedDeviceTypes: 3
        - mediatedDevicesTypes: 4
          - nvidia-233
          nodeSelector:
            kubernetes.io/hostname: node-11.redhat.com
      permittedHostDevices: 5
        mediatedDevices:
        - mdevNameSelector: GRID T4-2Q
          resourceName: nvidia.com/GRID_T4-2Q
        - mdevNameSelector: GRID T4-8Q
          resourceName: nvidia.com/GRID_T4-8Q
    # ...

    1
    Creates mediated devices.
    2
    Required: Global mediatedDevicesTypes configuration.
    3
    Optional: Overrides the global configuration for specific nodes.
    4
    Required if you use nodeMediatedDeviceTypes.
    5
    Exposes mediated devices to the cluster.

  3. Save your changes and exit the editor.

Verification

  • You can verify that a device was added to a specific node by running the following command:

    $ oc describe node <node_name>
10.15.12.2.6.2. Removing mediated devices from the cluster using the CLI

To remove a mediated device from the cluster, delete the information for that device from the HyperConverged custom resource (CR).

Procedure

  1. Edit the HyperConverged CR in your default editor by running the following command:

    $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. Remove the device information from the spec.mediatedDevicesConfiguration and spec.permittedHostDevices stanzas of the HyperConverged CR. Removing both entries ensures that you can later create a new mediated device type on the same node. For example:

    Example configuration file

    apiVersion: hco.kubevirt.io/v1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      mediatedDevicesConfiguration:
        mediatedDevicesTypes: 1
          - nvidia-231
      permittedHostDevices:
        mediatedDevices: 2
        - mdevNameSelector: GRID T4-2Q
          resourceName: nvidia.com/GRID_T4-2Q

    1
    To remove the nvidia-231 device type, delete it from the mediatedDevicesTypes array.
    2
    To remove the GRID T4-2Q device, delete the mdevNameSelector field and its corresponding resourceName field.
  3. Save your changes and exit the editor.

10.15.12.3. Using mediated devices

A vGPU is a type of mediated device; the performance of the physical GPU is divided among the virtual devices. You can assign mediated devices to one or more virtual machines.

10.15.12.3.1. Assigning a mediated device to a virtual machine

Assign mediated devices such as virtual GPUs (vGPUs) to virtual machines.

Prerequisites

  • The mediated device is configured in the HyperConverged custom resource.

Procedure

  • Assign the mediated device to a virtual machine (VM) by editing the spec.domain.devices.gpus stanza of the VirtualMachine manifest:

    Example virtual machine manifest

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    spec:
      domain:
        devices:
          gpus:
          - deviceName: nvidia.com/TU104GL_Tesla_T4 1
            name: gpu1 2
          - deviceName: nvidia.com/GRID_T4-1Q
            name: gpu2

    1
    The resource name associated with the mediated device.
    2
    A name to identify the device on the VM.

Verification

  • To verify that the device is available from the virtual machine, run the following command, substituting <device_name> with the deviceName value from the VirtualMachine manifest:

    $ lspci -nnk | grep <device_name>

10.15.12.4. Additional resources

10.15.13. Enabling descheduler evictions on virtual machines

You can use the descheduler to evict pods so that the pods can be rescheduled onto more appropriate nodes. If the pod is a virtual machine, the pod eviction causes the virtual machine to be live migrated to another node.

Important

Descheduler eviction for virtual machines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

10.15.13.1. Descheduler profiles

Use the Technology Preview DevPreviewLongLifecycle profile to enable the descheduler on a virtual machine. This is the only descheduler profile currently available for OpenShift Virtualization. To ensure proper scheduling, create VMs with CPU and memory requests for the expected load.

DevPreviewLongLifecycle

This profile balances resource usage between nodes and enables the following strategies:

  • RemovePodsHavingTooManyRestarts: removes pods whose containers have been restarted too many times and pods where the sum of restarts over all containers (including Init Containers) is more than 100. Restarting the VM guest operating system does not increase this count.
  • LowNodeUtilization: evicts pods from overutilized nodes when there are any underutilized nodes. The destination node for the evicted pod will be determined by the scheduler.

    • A node is considered underutilized if its usage is below 20% for all thresholds (CPU, memory, and number of pods).
    • A node is considered overutilized if its usage is above 50% for any of the thresholds (CPU, memory, and number of pods).

10.15.13.2. Installing the descheduler

The descheduler is not available by default. To enable the descheduler, you must install the Kube Descheduler Operator from OperatorHub and enable one or more descheduler profiles.

By default, the descheduler runs in predictive mode, which means that it only simulates pod evictions. You must change the mode to automatic for the descheduler to perform the pod evictions.

Important

If you have enabled hosted control planes in your cluster, set a custom priority threshold to lower the chance that pods in the hosted control plane namespaces are evicted. Set the priority threshold class name to hypershift-control-plane, because it has the lowest priority value (100000000) of the hosted control plane priority classes.

Prerequisites

  • You are logged in to OpenShift Container Platform as a user with the cluster-admin role.
  • Access to the OpenShift Container Platform web console.

Procedure

  1. Log in to the OpenShift Container Platform web console.
  2. Create the required namespace for the Kube Descheduler Operator.

    1. Navigate to Administration Namespaces and click Create Namespace.
    2. Enter openshift-kube-descheduler-operator in the Name field, enter openshift.io/cluster-monitoring=true in the Labels field to enable descheduler metrics, and click Create.
  3. Install the Kube Descheduler Operator.

    1. Navigate to Operators OperatorHub.
    2. Type Kube Descheduler Operator into the filter box.
    3. Select the Kube Descheduler Operator and click Install.
    4. On the Install Operator page, select A specific namespace on the cluster. Select openshift-kube-descheduler-operator from the drop-down menu.
    5. Adjust the values for the Update Channel and Approval Strategy to the desired values.
    6. Click Install.
  4. Create a descheduler instance.

    1. From the Operators Installed Operators page, click the Kube Descheduler Operator.
    2. Select the Kube Descheduler tab and click Create KubeDescheduler.
    3. Edit the settings as necessary.

      1. To evict pods instead of simulating the evictions, change the Mode field to Automatic.
      2. Expand the Profiles section and select DevPreviewLongLifecycle. The AffinityAndTaints profile is enabled by default.

        Important

        The only profile currently available for OpenShift Virtualization is DevPreviewLongLifecycle.

You can also configure the profiles and settings for the descheduler later using the OpenShift CLI (oc).
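For example, you can open the descheduler configuration for editing from the CLI. A minimal sketch, using the cluster object name and namespace shown in the next section:

    $ oc edit kubedescheduler cluster -n openshift-kube-descheduler-operator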

10.15.13.3. Enabling descheduler evictions on a virtual machine (VM)

After the descheduler is installed, you can enable descheduler evictions on your VM by adding an annotation to the VirtualMachine custom resource (CR).

Prerequisites

  • Install the descheduler in the OpenShift Container Platform web console or OpenShift CLI (oc).
  • Ensure that the VM is not running.

Procedure

  1. Before starting the VM, add the descheduler.alpha.kubernetes.io/evict annotation to the VirtualMachine CR:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    spec:
      template:
        metadata:
          annotations:
            descheduler.alpha.kubernetes.io/evict: "true"
  2. If you did not already set the DevPreviewLongLifecycle profile in the web console during installation, specify the DevPreviewLongLifecycle profile in the spec.profiles section of the KubeDescheduler object:

    apiVersion: operator.openshift.io/v1
    kind: KubeDescheduler
    metadata:
      name: cluster
      namespace: openshift-kube-descheduler-operator
    spec:
      deschedulingIntervalSeconds: 3600
      profiles:
      - DevPreviewLongLifecycle
      mode: Predictive 1
    1
    By default, the descheduler does not evict pods. To evict pods, set mode to Automatic.

The descheduler is now enabled on the VM.

10.15.13.4. Additional resources

10.16. Importing virtual machines

10.16.1. TLS certificates for data volume imports

10.16.1.1. Adding TLS certificates for authenticating data volume imports

TLS certificates for registry or HTTPS endpoints must be added to a config map to import data from these sources. This config map must be present in the namespace of the destination data volume.

Create the config map by referencing the relative file path for the TLS certificate.

Procedure

  1. Ensure you are in the correct namespace. The config map can only be referenced by data volumes if it is in the same namespace.

    $ oc get ns
  2. Create the config map:

    $ oc create configmap <configmap-name> --from-file=</path/to/file/ca.pem>

10.16.1.2. Example: Config map created from a TLS certificate

The following example shows a config map created from the ca.pem TLS certificate.

apiVersion: v1
kind: ConfigMap
metadata:
  name: tls-certs
data:
  ca.pem: |
    -----BEGIN CERTIFICATE-----
    ... <base64 encoded cert> ...
    -----END CERTIFICATE-----
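A data volume that imports from an HTTPS endpoint can then reference this config map through the certConfigMap field. The following is a minimal sketch rather than a complete import procedure; the data volume name and URL are placeholders:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: import-with-tls # placeholder name
spec:
  source:
    http:
      url: "https://<server>/<disk_image>.qcow2" # placeholder endpoint
      certConfigMap: tls-certs                   # the config map created above, in the same namespace
  storage:
    resources:
      requests:
        storage: 10Gi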

10.16.2. Importing virtual machine images with data volumes

You can import an existing virtual machine image into your OpenShift Container Platform cluster storage. Using the Containerized Data Importer (CDI), you can import the image into a persistent volume claim (PVC) by using a data volume. OpenShift Virtualization uses one or more data volumes to automate the data import and the creation of an underlying PVC. You can attach a data volume to a virtual machine for persistent storage.

The virtual machine image can be hosted at an HTTP or HTTPS endpoint, or built into a container disk and stored in a container registry.

Important

When you import a disk image into a PVC, the disk image is expanded to use the full storage capacity that is requested in the PVC. To use this space, the disk partitions and file system(s) in the virtual machine might need to be expanded.

The resizing procedure varies based on the operating system installed on the virtual machine. See the operating system documentation for details.

10.16.2.1. Prerequisites

If you intend to import a virtual machine image into block storage with a data volume, you must have an available local block persistent volume.

10.16.2.2. CDI supported operations matrix

This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.

Content types       HTTP        HTTPS       HTTP basic auth     Registry    Upload

KubeVirt (QCOW2)    ✓ QCOW2     ✓ QCOW2**   ✓ QCOW2             ✓ QCOW2*    ✓ QCOW2*
                    ✓ GZ*       ✓ GZ*       ✓ GZ*               □ GZ        ✓ GZ*
                    ✓ XZ*       ✓ XZ*       ✓ XZ*               □ XZ        ✓ XZ*

KubeVirt (RAW)      ✓ RAW       ✓ RAW       ✓ RAW               ✓ RAW*      ✓ RAW*
                    ✓ GZ        ✓ GZ        ✓ GZ                □ GZ        ✓ GZ*
                    ✓ XZ        ✓ XZ        ✓ XZ                □ XZ        ✓ XZ*

✓ Supported operation

□ Unsupported operation

* Requires scratch space

** Requires scratch space if a custom certificate authority is required

Note

CDI now uses the OpenShift Container Platform cluster-wide proxy configuration.

10.16.2.3. About data volumes

DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). You can create a data volume as either a standalone resource or by using the dataVolumeTemplate field in the virtual machine (VM) specification.

Note
  • VM disk PVCs that are prepared by using standalone data volumes maintain an independent lifecycle from the VM. If you use the dataVolumeTemplate field in the VM specification to prepare the PVC, the PVC shares the same lifecycle as the VM.
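For illustration, a standalone data volume that imports a container disk from a registry can be as small as the following sketch. The data volume name is a placeholder, and the image reference is the demo container disk that also appears, commented out, in the example later in this section:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: <standalone_dv> # placeholder name
spec:
  source:
    registry:
      url: "docker://kubevirt/fedora-cloud-container-disk-demo:latest"
  storage:
    resources:
      requests:
        storage: 10Gi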

10.16.2.4. Local block persistent volumes

If you intend to import a virtual machine image into block storage with a data volume, you must have an available local block persistent volume.

10.16.2.4.1. About block persistent volumes

A block persistent volume (PV) is a PV that is backed by a raw block device. These volumes do not have a file system and can provide performance benefits for virtual machines by reducing overhead.

Raw block volumes are provisioned by specifying volumeMode: Block in the PV and persistent volume claim (PVC) specification.
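For reference, a claim that requests a raw block device sets the same volume mode. A minimal sketch of a block-mode PVC; the name, storage class, and size are placeholders:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <block_pvc>
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Block        # request a raw block device instead of a filesystem
  storageClassName: local  # must match the storage class of the block PV
  resources:
    requests:
      storage: 2Gi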

10.16.2.4.2. Creating a local block persistent volume

If you intend to import a virtual machine image into block storage with a data volume, you must have an available local block persistent volume.

Create a local block persistent volume (PV) on a node by populating a file and mounting it as a loop device. You can then reference this loop device in a PV manifest as a Block volume and use it as a block device for a virtual machine image.

Procedure

  1. Log in as root to the node on which to create the local PV. This procedure uses node01 for its examples.
  2. Create a file and populate it with null characters so that it can be used as a block device. The following example creates a file named loop10 with a size of 2 GB (twenty 100 MB blocks):

    $ dd if=/dev/zero of=<loop10> bs=100M count=20
  3. Mount the loop10 file as a loop device.

    $ losetup </dev/loop10> <loop10> 1 2
    1
    File path where the loop device is mounted.
    2
    The file created in the previous step to be mounted as the loop device.
  4. Create a PersistentVolume manifest that references the mounted loop device.

    kind: PersistentVolume
    apiVersion: v1
    metadata:
      name: <local-block-pv10>
      annotations:
    spec:
      local:
        path: </dev/loop10> 1
      capacity:
        storage: <2Gi>
      volumeMode: Block 2
      storageClassName: local 3
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Delete
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - <node01> 4
    1
    The path of the loop device on the node.
    2
    Specifies it is a block PV.
    3
    Optional: Set a storage class for the PV. If you omit it, the cluster default is used.
    4
    The node on which the block device was mounted.
  5. Create the block PV.

    # oc create -f <local-block-pv10.yaml> 1
    1
    The file name of the persistent volume created in the previous step.

10.16.2.5. Importing a virtual machine image into storage by using a data volume

You can import a virtual machine image into storage by using a data volume.

The virtual machine image can be hosted at an HTTP or HTTPS endpoint or the image can be built into a container disk and stored in a container registry.

You specify the data source for the image in a VirtualMachine configuration file. When the virtual machine is created, the data volume with the virtual machine image is imported into storage.

Prerequisites

  • To import a virtual machine image you must have the following:

    • A virtual machine disk image in RAW, ISO, or QCOW2 format, optionally compressed by using xz or gz.
    • An HTTP or HTTPS endpoint where the image is hosted, along with any authentication credentials needed to access the data source.
  • To import a container disk, you must have a virtual machine image built into a container disk and stored in a container registry, along with any authentication credentials needed to access the data source.
  • If the virtual machine must communicate with servers that use self-signed certificates or certificates not signed by the system CA bundle, you must create a config map in the same namespace as the data volume.

Procedure

  1. If your data source requires authentication, create a Secret manifest, specifying the data source credentials, and save it as endpoint-secret.yaml:

    apiVersion: v1
    kind: Secret
    metadata:
      name: endpoint-secret 1
      labels:
        app: containerized-data-importer
    type: Opaque
    data:
      accessKeyId: "" 2
      secretKey:   "" 3
    1
    Specify the name of the Secret.
    2
    Specify the Base64-encoded key ID or user name.
    3
    Specify the Base64-encoded secret key or password.
  2. Apply the Secret manifest:

    $ oc apply -f endpoint-secret.yaml
  3. Edit the VirtualMachine manifest, specifying the data source for the virtual machine image you want to import, and save it as vm-fedora-datavolume.yaml:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      creationTimestamp: null
      labels:
        kubevirt.io/vm: vm-fedora-datavolume
      name: vm-fedora-datavolume 1
    spec:
      dataVolumeTemplates:
      - metadata:
          creationTimestamp: null
          name: fedora-dv 2
        spec:
          storage:
            volumeMode: Block 3
            resources:
              requests:
                storage: 10Gi
            storageClassName: local
          source: 4
            http:
              url: "https://mirror.arizona.edu/fedora/linux/releases/35/Cloud/x86_64/images/Fedora-Cloud-Base-35-1.2.x86_64.qcow2" 5
              secretRef: endpoint-secret 6
              certConfigMap: "" 7
            # To use a registry source, uncomment the following lines and delete the preceding HTTP source block
            # registry:
              # url: "docker://kubevirt/fedora-cloud-container-disk-demo:latest"
              # secretRef: registry-secret 8
              # certConfigMap: "" 9
        status: {}
      running: true
      template:
        metadata:
          creationTimestamp: null
          labels:
            kubevirt.io/vm: vm-fedora-datavolume
        spec:
          domain:
            devices:
              disks:
              - disk:
                  bus: virtio
                name: datavolumedisk1
            machine:
              type: ""
            resources:
              requests:
                memory: 1.5Gi
          terminationGracePeriodSeconds: 180
          volumes:
          - dataVolume:
              name: fedora-dv
            name: datavolumedisk1
    status: {}
    1
    Specify the name of the virtual machine.
    2
    Specify the name of the data volume.
    3
    The volume and access mode are detected automatically for known storage provisioners. Alternatively, you can specify Block.
    4 5
    Specify either the URL or the registry endpoint of the virtual machine image you want to import using the comment block. For example, if you want to use a registry source, you can comment out or delete the HTTP or HTTPS source block. Ensure that you replace the example values shown here with your own values.
    6 8
    Specify the Secret name if you created a Secret for the data source.
    7 9
    Optional: Specify a CA certificate config map.
  4. Create the virtual machine:

    $ oc create -f vm-fedora-datavolume.yaml
    Note

    The oc create command creates the data volume and the virtual machine. The CDI controller creates an underlying PVC with the correct annotation and the import process begins. When the import is complete, the data volume status changes to Succeeded. You can start the virtual machine.

    Data volume provisioning happens in the background, so there is no need to monitor the process.

Verification

  1. The importer pod downloads the virtual machine image or container disk from the specified URL and stores it on the provisioned PV. View the status of the importer pod by running the following command:

    $ oc get pods
  2. Monitor the data volume until its status is Succeeded by running the following command:

    $ oc describe dv fedora-dv 1
    1
    Specify the data volume name that you defined in the VirtualMachine manifest.
  3. Verify that provisioning is complete and that the virtual machine has started by accessing its serial console:

    $ virtctl console vm-fedora-datavolume

10.16.2.6. Additional resources

10.17. Cloning virtual machines

10.17.1. Enabling user permissions to clone data volumes across namespaces

The isolating nature of namespaces means that users cannot by default clone resources between namespaces.

To enable a user to clone a virtual machine to another namespace, a user with the cluster-admin role must create a new cluster role. Bind this cluster role to a user to enable them to clone virtual machines to the destination namespace.

10.17.1.1. Prerequisites

  • Only a user with the cluster-admin role can create cluster roles.

10.17.1.2. About data volumes

DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). You can create a data volume as either a standalone resource or by using the dataVolumeTemplate field in the virtual machine (VM) specification.

Note
  • VM disk PVCs that are prepared by using standalone data volumes maintain an independent lifecycle from the VM. If you use the dataVolumeTemplate field in the VM specification to prepare the PVC, the PVC shares the same lifecycle as the VM.

10.17.1.3. Creating RBAC resources for cloning data volumes

Create a new cluster role that enables permissions for all actions for the datavolumes resource.

Procedure

  1. Create a ClusterRole manifest:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: <datavolume-cloner> 1
    rules:
    - apiGroups: ["cdi.kubevirt.io"]
      resources: ["datavolumes/source"]
      verbs: ["*"]
    1
    Unique name for the cluster role.
  2. Create the cluster role in the cluster:

    $ oc create -f <datavolume-cloner.yaml> 1
    1
    The file name of the ClusterRole manifest created in the previous step.
  3. Create a RoleBinding manifest that applies to both the source and destination namespaces and references the cluster role created in the previous step.

    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: <allow-clone-to-user> 1
      namespace: <Source namespace> 2
    subjects:
    - kind: ServiceAccount
      name: default
      namespace: <Destination namespace> 3
    roleRef:
      kind: ClusterRole
      name: datavolume-cloner 4
      apiGroup: rbac.authorization.k8s.io
    1
    Unique name for the role binding.
    2
    The namespace for the source data volume.
    3
    The namespace to which the data volume is cloned.
    4
    The name of the cluster role created in the previous step.
  4. Create the role binding in the cluster:

    $ oc create -f <datavolume-cloner.yaml> 1
    1
    The file name of the RoleBinding manifest created in the previous step.
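The example above binds the cluster role to the default service account in the destination namespace. If you instead want to bind the role to a named user, as described at the start of this section, the subject can be a User. A minimal sketch; <user_name> is a placeholder:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: <allow-clone-to-user>
  namespace: <source_namespace>          # the namespace of the source data volume
subjects:
- kind: User
  name: <user_name>                      # placeholder user name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: datavolume-cloner
  apiGroup: rbac.authorization.k8s.io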

10.17.2. Cloning a virtual machine disk into a new data volume

You can clone the persistent volume claim (PVC) of a virtual machine disk into a new data volume by referencing the source PVC in your data volume configuration file.

Warning

Cloning operations between different volume modes are supported, such as cloning from a persistent volume (PV) with volumeMode: Block to a PV with volumeMode: Filesystem.

However, you can only clone between different volume modes if they are of the contentType: kubevirt.

Tip

When you enable preallocation globally, or for a single data volume, the Containerized Data Importer (CDI) preallocates disk space during cloning. Preallocation enhances write performance. For more information, see Using preallocation for data volumes.

10.17.2.1. Prerequisites

10.17.2.2. About data volumes

DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). You can create a data volume as either a standalone resource or by using the dataVolumeTemplate field in the virtual machine (VM) specification.

Note
  • VM disk PVCs that are prepared by using standalone data volumes maintain an independent lifecycle from the VM. If you use the dataVolumeTemplate field in the VM specification to prepare the PVC, the PVC shares the same lifecycle as the VM.

10.17.2.3. Cloning the persistent volume claim of a virtual machine disk into a new data volume

You can clone a persistent volume claim (PVC) of an existing virtual machine disk into a new data volume. The new data volume can then be used for a new virtual machine.

Note

When a data volume is created independently of a virtual machine, the lifecycle of the data volume is independent of the virtual machine. If the virtual machine is deleted, neither the data volume nor its associated PVC is deleted.

Prerequisites

  • Determine the PVC of an existing virtual machine disk to use. You must power down the virtual machine that is associated with the PVC before you can clone it.
  • Install the OpenShift CLI (oc).

Procedure

  1. Examine the virtual machine disk you want to clone to identify the name and namespace of the associated PVC.
  2. Create a YAML file for a data volume that specifies the name of the new data volume, the name and namespace of the source PVC, and the size of the new data volume.

    For example:

    apiVersion: cdi.kubevirt.io/v1beta1
    kind: DataVolume
    metadata:
      name: <cloner-datavolume> 1
    spec:
      source:
        pvc:
          namespace: "<source-namespace>" 2
          name: "<my-favorite-vm-disk>" 3
      pvc:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: <2Gi> 4
    1
    The name of the new data volume.
    2
    The namespace where the source PVC exists.
    3
    The name of the source PVC.
    4
    The size of the new data volume. You must allocate enough space, or the cloning operation fails. The size must be the same as or larger than the source PVC.
  3. Start cloning the PVC by creating the data volume:

    $ oc create -f <cloner-datavolume>.yaml
    Note

    Data volumes prevent a virtual machine from starting before the PVC is prepared, so you can create a virtual machine that references the new data volume while the PVC clones.
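You can monitor the clone in the same way as an import, for example by checking the data volume status until it reports Succeeded. The name is the one defined in the manifest above:

    $ oc get dv <cloner-datavolume>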

10.17.2.4. CDI supported operations matrix

This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.

Content types       HTTP        HTTPS       HTTP basic auth     Registry    Upload

KubeVirt (QCOW2)    ✓ QCOW2     ✓ QCOW2**   ✓ QCOW2             ✓ QCOW2*    ✓ QCOW2*
                    ✓ GZ*       ✓ GZ*       ✓ GZ*               □ GZ        ✓ GZ*
                    ✓ XZ*       ✓ XZ*       ✓ XZ*               □ XZ        ✓ XZ*

KubeVirt (RAW)      ✓ RAW       ✓ RAW       ✓ RAW               ✓ RAW*      ✓ RAW*
                    ✓ GZ        ✓ GZ        ✓ GZ                □ GZ        ✓ GZ*
                    ✓ XZ        ✓ XZ        ✓ XZ                □ XZ        ✓ XZ*

✓ Supported operation

□ Unsupported operation

* Requires scratch space

** Requires scratch space if a custom certificate authority is required

10.17.3. Cloning a virtual machine by using a data volume template

You can create a new virtual machine by cloning the persistent volume claim (PVC) of an existing VM. By including a dataVolumeTemplate in your virtual machine configuration file, you create a new data volume from the original PVC.

Warning

Cloning operations between different volume modes are supported, such as cloning from a persistent volume (PV) with volumeMode: Block to a PV with volumeMode: Filesystem.

However, you can only clone between different volume modes if they are of the contentType: kubevirt.

Tip

When you enable preallocation globally, or for a single data volume, the Containerized Data Importer (CDI) preallocates disk space during cloning. Preallocation enhances write performance. For more information, see Using preallocation for data volumes.

10.17.3.1. Prerequisites

10.17.3.2. About data volumes

DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). You can create a data volume as either a standalone resource or by using the dataVolumeTemplate field in the virtual machine (VM) specification.

Note
  • VM disk PVCs that are prepared by using standalone data volumes maintain an independent lifecycle from the VM. If you use the dataVolumeTemplate field in the VM specification to prepare the PVC, the PVC shares the same lifecycle as the VM.

10.17.3.3. Creating a new virtual machine from a cloned persistent volume claim by using a data volume template

You can create a virtual machine that clones the persistent volume claim (PVC) of an existing virtual machine into a data volume. Reference a dataVolumeTemplate in the virtual machine manifest and the source PVC is cloned to a data volume, which is then automatically used for the creation of the virtual machine.

Note

When a data volume is created as part of the data volume template of a virtual machine, the lifecycle of the data volume is then dependent on the virtual machine. If the virtual machine is deleted, the data volume and associated PVC are also deleted.

Prerequisites

  • Determine the PVC of an existing virtual machine disk to use. You must power down the virtual machine that is associated with the PVC before you can clone it.
  • Install the OpenShift CLI (oc).

Procedure

  1. Examine the virtual machine you want to clone to identify the name and namespace of the associated PVC.
  2. Create a YAML file for a VirtualMachine object. The following virtual machine example clones my-favorite-vm-disk, which is located in the source-namespace namespace. The 2Gi data volume called favorite-clone is created from my-favorite-vm-disk.

    For example:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      labels:
        kubevirt.io/vm: vm-dv-clone
      name: vm-dv-clone 1
    spec:
      running: false
      template:
        metadata:
          labels:
            kubevirt.io/vm: vm-dv-clone
        spec:
          domain:
            devices:
              disks:
              - disk:
                  bus: virtio
                name: root-disk
            resources:
              requests:
                memory: 64M
          volumes:
          - dataVolume:
              name: favorite-clone
            name: root-disk
      dataVolumeTemplates:
      - metadata:
          name: favorite-clone
        spec:
          storage:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: 2Gi
          source:
            pvc:
              namespace: "source-namespace"
              name: "my-favorite-vm-disk"
    1
    The virtual machine to create.
  3. Create the virtual machine with the PVC-cloned data volume:

    $ oc create -f <vm-clone-datavolumetemplate>.yaml

10.17.3.4. CDI supported operations matrix

This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.

Content types    | HTTP      | HTTPS      | HTTP basic auth | Registry  | Upload
KubeVirt (QCOW2) | ✓ QCOW2   | ✓ QCOW2**  | ✓ QCOW2         | ✓ QCOW2*  | ✓ QCOW2*
                 | ✓ GZ*     | ✓ GZ*      | ✓ GZ*           | □ GZ      | ✓ GZ*
                 | ✓ XZ*     | ✓ XZ*      | ✓ XZ*           | □ XZ      | ✓ XZ*
KubeVirt (RAW)   | ✓ RAW     | ✓ RAW      | ✓ RAW           | ✓ RAW*    | ✓ RAW*
                 | ✓ GZ      | ✓ GZ       | ✓ GZ            | □ GZ      | ✓ GZ*
                 | ✓ XZ      | ✓ XZ       | ✓ XZ            | □ XZ      | ✓ XZ*

✓ Supported operation

□ Unsupported operation

* Requires scratch space

** Requires scratch space if a custom certificate authority is required

10.17.4. Cloning a virtual machine disk into a new block storage persistent volume claim

You can clone the persistent volume claim (PVC) of a virtual machine disk into a new block PVC by referencing the source PVC in your clone target data volume configuration file.

Warning

Cloning operations between different volume modes are supported, such as cloning from a persistent volume (PV) with volumeMode: Block to a PV with volumeMode: Filesystem.

However, you can only clone between different volume modes if they are of the contentType: kubevirt.

Tip

When you enable preallocation globally, or for a single data volume, the Containerized Data Importer (CDI) preallocates disk space during cloning. Preallocation enhances write performance. For more information, see Using preallocation for data volumes.

10.17.4.1. Prerequisites

10.17.4.2. About data volumes

DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). You can create a data volume as either a standalone resource or by using the dataVolumeTemplate field in the virtual machine (VM) specification.

Note
  • VM disk PVCs that are prepared by using standalone data volumes maintain an independent lifecycle from the VM. If you use the dataVolumeTemplate field in the VM specification to prepare the PVC, the PVC shares the same lifecycle as the VM.

10.17.4.3. About block persistent volumes

A block persistent volume (PV) is a PV that is backed by a raw block device. These volumes do not have a file system and can provide performance benefits for virtual machines by reducing overhead.

Raw block volumes are provisioned by specifying volumeMode: Block in the PV and persistent volume claim (PVC) specification.
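
For illustration, a minimal PVC sketch that requests a raw block volume; the claim name, storage class, and size are placeholders:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: <block-pvc>
    spec:
      accessModes:
        - ReadWriteOnce
      volumeMode: Block
      storageClassName: local
      resources:
        requests:
          storage: 2Gi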

10.17.4.4. Creating a local block persistent volume

If you intend to import a virtual machine image into block storage with a data volume, you must have an available local block persistent volume.

Create a local block persistent volume (PV) on a node by populating a file and mounting it as a loop device. You can then reference this loop device in a PV manifest as a Block volume and use it as a block device for a virtual machine image.

Procedure

  1. Log in as root to the node on which to create the local PV. This procedure uses node01 for its examples.
  2. Create a file and populate it with null characters so that it can be used as a block device. The following example creates a file named loop10 with a size of 2 GB (twenty 100 MB blocks):

    $ dd if=/dev/zero of=<loop10> bs=100M count=20
  3. Mount the loop10 file as a loop device.

    $ losetup </dev/loop10> <loop10> 1 2
    1
    File path where the loop device is mounted.
    2
    The file created in the previous step to be mounted as the loop device.
  4. Create a PersistentVolume manifest that references the mounted loop device.

    kind: PersistentVolume
    apiVersion: v1
    metadata:
      name: <local-block-pv10>
      annotations:
    spec:
      local:
        path: </dev/loop10> 1
      capacity:
        storage: <2Gi>
      volumeMode: Block 2
      storageClassName: local 3
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Delete
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - <node01> 4
    1
    The path of the loop device on the node.
    2
    Specifies it is a block PV.
    3
    Optional: Set a storage class for the PV. If you omit it, the cluster default is used.
    4
    The node on which the block device was mounted.
  5. Create the block PV.

    # oc create -f <local-block-pv10.yaml> 1
    1
    The file name of the persistent volume created in the previous step.

10.17.4.5. Cloning the persistent volume claim of a virtual machine disk into a new data volume

You can clone a persistent volume claim (PVC) of an existing virtual machine disk into a new data volume. The new data volume can then be used for a new virtual machine.

Note

When a data volume is created independently of a virtual machine, the lifecycle of the data volume is independent of the virtual machine. If the virtual machine is deleted, neither the data volume nor its associated PVC is deleted.

Prerequisites

  • Determine the PVC of an existing virtual machine disk to use. You must power down the virtual machine that is associated with the PVC before you can clone it.
  • Install the OpenShift CLI (oc).
  • At least one available block persistent volume (PV) that is the same size as or larger than the source PVC.

Procedure

  1. Examine the virtual machine disk you want to clone to identify the name and namespace of the associated PVC.
  2. Create a YAML file for a data volume that specifies the name of the new data volume, the name and namespace of the source PVC, volumeMode: Block so that an available block PV is used, and the size of the new data volume.

    For example:

    apiVersion: cdi.kubevirt.io/v1beta1
    kind: DataVolume
    metadata:
      name: <cloner-datavolume> 1
    spec:
      source:
        pvc:
          namespace: "<source-namespace>" 2
          name: "<my-favorite-vm-disk>" 3
      pvc:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: <2Gi> 4
        volumeMode: Block 5
    1
    The name of the new data volume.
    2
    The namespace where the source PVC exists.
    3
    The name of the source PVC.
    4
    The size of the new data volume. You must allocate enough space, or the cloning operation fails. The size must be the same as or larger than the source PVC.
    5
    Specifies that the destination is a block PV.
  3. Start cloning the PVC by creating the data volume:

    $ oc create -f <cloner-datavolume>.yaml
    Note

    Data volumes prevent a virtual machine from starting before the PVC is prepared, so you can create a virtual machine that references the new data volume while the PVC clones.

10.17.4.6. CDI supported operations matrix

This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.

Content types    | HTTP      | HTTPS      | HTTP basic auth | Registry  | Upload
KubeVirt (QCOW2) | ✓ QCOW2   | ✓ QCOW2**  | ✓ QCOW2         | ✓ QCOW2*  | ✓ QCOW2*
                 | ✓ GZ*     | ✓ GZ*      | ✓ GZ*           | □ GZ      | ✓ GZ*
                 | ✓ XZ*     | ✓ XZ*      | ✓ XZ*           | □ XZ      | ✓ XZ*
KubeVirt (RAW)   | ✓ RAW     | ✓ RAW      | ✓ RAW           | ✓ RAW*    | ✓ RAW*
                 | ✓ GZ      | ✓ GZ       | ✓ GZ            | □ GZ      | ✓ GZ*
                 | ✓ XZ      | ✓ XZ       | ✓ XZ            | □ XZ      | ✓ XZ*

✓ Supported operation

□ Unsupported operation

* Requires scratch space

** Requires scratch space if a custom certificate authority is required

10.18. Virtual machine networking

10.18.1. Configuring the virtual machine for the default pod network

You can connect a virtual machine to the default internal pod network by configuring its network interface to use the masquerade binding mode.

Note

Traffic on the virtual Network Interface Cards (vNICs) that are attached to the default pod network is interrupted during live migration.

10.18.1.1. Configuring masquerade mode from the command line

You can use masquerade mode to hide a virtual machine’s outgoing traffic behind the pod IP address. Masquerade mode uses Network Address Translation (NAT) to connect virtual machines to the pod network backend through a Linux bridge.

Enable masquerade mode and allow traffic to enter the virtual machine by editing your virtual machine configuration file.

Prerequisites

  • The virtual machine must be configured to use DHCP to acquire IPv4 addresses. The examples below are configured to use DHCP.

Procedure

  1. Edit the interfaces spec of your virtual machine configuration file:

    kind: VirtualMachine
    spec:
      domain:
        devices:
          interfaces:
            - name: default
              masquerade: {} 1
              ports: 2
                - port: 80
      networks:
      - name: default
        pod: {}
    1
    Connect using masquerade mode.
    2
    Optional: List the ports that you want to expose from the virtual machine, each specified by the port field. The port value must be a number between 1 and 65535. When the ports array is not used, all ports in the valid range are open to incoming traffic. In this example, incoming traffic is allowed on port 80.
    Note

    Ports 49152 and 49153 are reserved for use by the libvirt platform and all other incoming traffic to these ports is dropped.

  2. Create the virtual machine:

    $ oc create -f <vm-name>.yaml

10.18.1.2. Configuring masquerade mode with dual-stack (IPv4 and IPv6)

You can configure a new virtual machine (VM) to use both IPv6 and IPv4 on the default pod network by using cloud-init.

The Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration determines the static IPv6 address of the VM and the gateway IP address. These are used by the virt-launcher pod to route IPv6 traffic to the virtual machine and are not used externally. The Network.pod.vmIPv6NetworkCIDR field specifies an IPv6 address block in Classless Inter-Domain Routing (CIDR) notation. The default value is fd10:0:2::2/120. You can edit this value based on your network requirements.

When the virtual machine is running, incoming and outgoing traffic for the virtual machine is routed to both the IPv4 address and the unique IPv6 address of the virt-launcher pod. The virt-launcher pod then routes the IPv4 traffic to the DHCP address of the virtual machine, and the IPv6 traffic to the statically set IPv6 address of the virtual machine.

Prerequisites

  • The OpenShift Container Platform cluster must use the OVN-Kubernetes Container Network Interface (CNI) network plugin configured for dual-stack.

Procedure

  1. In a new virtual machine configuration, include an interface with masquerade and configure the IPv6 address and default gateway by using cloud-init.

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: example-vm-ipv6
    # ...
              interfaces:
                - name: default
                  masquerade: {} 1
                  ports:
                    - port: 80 2
          networks:
          - name: default
            pod: {}
          volumes:
          - cloudInitNoCloud:
              networkData: |
                version: 2
                ethernets:
                  eth0:
                    dhcp4: true
                    addresses: [ fd10:0:2::2/120 ] 3
                    gateway6: fd10:0:2::1 4
    1
    Connect using masquerade mode.
    2
    Allows incoming traffic on port 80 to the virtual machine.
    3
    The static IPv6 address as determined by the Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration. The default value is fd10:0:2::2/120.
    4
    The gateway IP address as determined by the Network.pod.vmIPv6NetworkCIDR field in the virtual machine instance configuration. The default value is fd10:0:2::1.
  2. Create the virtual machine in the namespace:

    $ oc create -f example-vm-ipv6.yaml

Verification

  • To verify that IPv6 has been configured, start the virtual machine and view the interface status of the virtual machine instance to ensure it has an IPv6 address:
    $ oc get vmi <vmi-name> -o jsonpath="{.status.interfaces[*].ipAddresses}"

10.18.1.3. About jumbo frames support

When using the OVN-Kubernetes CNI plugin, you can send unfragmented jumbo frame packets between two virtual machines (VMs) that are connected on the default pod network. Jumbo frames have a maximum transmission unit (MTU) value greater than 1500 bytes.

The VM automatically gets the MTU value of the cluster network, set by the cluster administrator, in one of the following ways:

  • libvirt: If the guest OS has the latest version of the VirtIO driver that can interpret incoming data via a Peripheral Component Interconnect (PCI) config register in the emulated device.
  • DHCP: If the guest DHCP client can read the MTU value from the DHCP server response.
Note

For Windows VMs that do not have a VirtIO driver, you must set the MTU manually by using netsh or a similar tool. This is because the Windows DHCP client does not read the MTU value.
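
For example, a hedged sketch of setting the MTU manually on a Windows guest by using netsh; the interface name "Ethernet" and the MTU value are assumptions that must match your guest NIC and the cluster network MTU:

    C:\> netsh interface ipv4 set subinterface "Ethernet" mtu=8900 store=persistent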

10.18.2. Creating a service to expose a virtual machine

You can expose a virtual machine within the cluster or outside the cluster by using a Service object.

10.18.2.1. About services

A Kubernetes service exposes network access for clients to an application running on a set of pods. Services offer abstraction, load balancing, and, in the case of NodePort and LoadBalancer, exposure to the outside world.

Services can be exposed on the Details tab of the VirtualMachine details page in the web console, or by specifying a spec.type in the Service object:

ClusterIP
Exposes the service on an internal IP address and as a DNS name to other applications within the cluster. A single service can map to multiple virtual machines. When a client tries to connect to the service, the client’s request is load balanced among available backends. ClusterIP is the default service type.
NodePort
Exposes the service on the same port of each selected node in the cluster. NodePort makes a service accessible from outside the cluster.
LoadBalancer
Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP address to the service.
Note

For on-premise clusters, you can configure a load-balancing service by deploying the MetalLB Operator.

10.18.2.1.1. Dual-stack support

If IPv4 and IPv6 dual-stack networking is enabled for your cluster, you can create a service that uses IPv4, IPv6, or both, by defining the spec.ipFamilyPolicy and the spec.ipFamilies fields in the Service object.

The spec.ipFamilyPolicy field can be set to one of the following values:

SingleStack
The control plane assigns a cluster IP address for the service based on the first configured service cluster IP range.
PreferDualStack
The control plane assigns both IPv4 and IPv6 cluster IP addresses for the service on clusters that have dual-stack configured.
RequireDualStack
This option fails for clusters that do not have dual-stack networking enabled. For clusters that have dual-stack configured, the behavior is the same as when the value is set to PreferDualStack. The control plane allocates cluster IP addresses from both IPv4 and IPv6 address ranges.

You can define which IP family to use for single-stack or define the order of IP families for dual-stack by setting the spec.ipFamilies field to one of the following array values:

  • [IPv4]
  • [IPv6]
  • [IPv4, IPv6]
  • [IPv6, IPv4]
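
For example, a minimal sketch of a Service manifest that prefers dual-stack and lists IPv4 first; the name, selector, and port values are illustrative placeholders:

    apiVersion: v1
    kind: Service
    metadata:
      name: <vmservice-dualstack>
    spec:
      ipFamilyPolicy: PreferDualStack
      ipFamilies:
        - IPv4
        - IPv6
      selector:
        special: key
      ports:
        - port: 27017
          protocol: TCP
          targetPort: 22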

10.18.2.2. Exposing a virtual machine as a service

Create a ClusterIP, NodePort, or LoadBalancer service to connect to a running virtual machine (VM) from within or outside the cluster.

Procedure

  1. Edit the VirtualMachine manifest to add the label for service creation:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: vm-ephemeral
      namespace: example-namespace
    spec:
      running: false
      template:
        metadata:
          labels:
            special: key 1
    # ...
    1
    Add the label special: key in the spec.template.metadata.labels section.
    Note

    Labels on a virtual machine are passed through to the pod. The special: key label must match the label in the spec.selector attribute of the Service manifest.

  2. Save the VirtualMachine manifest file to apply your changes.
  3. Create a Service manifest to expose the VM:

    apiVersion: v1
    kind: Service
    metadata:
      name: vmservice 1
      namespace: example-namespace 2
    spec:
      externalTrafficPolicy: Cluster 3
      ports:
      - nodePort: 30000 4
        port: 27017
        protocol: TCP
        targetPort: 22 5
      selector:
        special: key 6
      type: NodePort 7
    1
    The name of the Service object.
    2
    The namespace where the Service object resides. This must match the metadata.namespace field of the VirtualMachine manifest.
    3
    Optional: Specifies how the nodes distribute service traffic that is received on external IP addresses. This only applies to NodePort and LoadBalancer service types. The default value is Cluster which routes traffic evenly to all cluster endpoints.
    4
    Optional: When set, the nodePort value must be unique across all services. If not specified, a value in the default node port range of 30000-32767 is dynamically allocated.
    5
    Optional: The VM port to be exposed by the service. It must reference an open port if a port list is defined in the VM manifest. If targetPort is not specified, it takes the same value as port.
    6
    The reference to the label that you added in the spec.template.metadata.labels stanza of the VirtualMachine manifest.
    7
    The type of service. Possible values are ClusterIP, NodePort and LoadBalancer.
  4. Save the Service manifest file.
  5. Create the service by running the following command:

    $ oc create -f <service_name>.yaml
  6. Start the VM. If the VM is already running, restart it.

Verification

  1. Query the Service object to verify that it is available:

    $ oc get service -n example-namespace

    Example output for ClusterIP service

    NAME        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
    vmservice   ClusterIP   172.30.3.149   <none>        27017/TCP   2m

    Example output for NodePort service

    NAME        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)            AGE
    vmservice   NodePort    172.30.232.73   <none>       27017:30000/TCP    5m

    Example output for LoadBalancer service

    NAME        TYPE            CLUSTER-IP     EXTERNAL-IP                    PORT(S)           AGE
    vmservice   LoadBalancer    172.30.27.5   172.29.10.235,172.29.10.235     27017:31829/TCP   5s

  2. Choose the appropriate method to connect to the virtual machine:

    • For a ClusterIP service, connect to the VM from within the cluster by using the service IP address and the service port. For example:

      $ ssh fedora@172.30.3.149 -p 27017
    • For a NodePort service, connect to the VM by specifying the node IP address and the node port outside the cluster network. For example:

      $ ssh fedora@$NODE_IP -p 30000
    • For a LoadBalancer service, use the vinagre client to connect to your virtual machine by using the public IP address and port. External ports are dynamically allocated.
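
      Alternatively, assuming that SSH is the workload exposed by the service as in the previous examples, you can connect by using the external IP address and the service port from the example output above:

      $ ssh fedora@172.29.10.235 -p 27017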

10.18.2.3. Additional resources

10.18.3. Connecting a virtual machine to a Linux bridge network

By default, OpenShift Virtualization is installed with a single, internal pod network.

You must create a Linux bridge network attachment definition (NAD) in order to connect to additional networks.

To attach a virtual machine to an additional network:

  1. Create a Linux bridge node network configuration policy.
  2. Create a Linux bridge network attachment definition.
  3. Configure the virtual machine, enabling the virtual machine to recognize the network attachment definition.

For more information about scheduling, interface types, and other node networking activities, see the node networking section.

10.18.3.1. Connecting to the network through the network attachment definition

10.18.3.1.1. Creating a Linux bridge node network configuration policy

Use a NodeNetworkConfigurationPolicy manifest YAML file to create the Linux bridge.

Prerequisites

  • You have installed the Kubernetes NMState Operator.

Procedure

  • Create the NodeNetworkConfigurationPolicy manifest. This example includes sample values that you must replace with your own information.

    apiVersion: nmstate.io/v1
    kind: NodeNetworkConfigurationPolicy
    metadata:
      name: br1-eth1-policy 1
    spec:
      desiredState:
        interfaces:
          - name: br1 2
            description: Linux bridge with eth1 as a port 3
            type: linux-bridge 4
            state: up 5
            ipv4:
              enabled: false 6
            bridge:
              options:
                stp:
                  enabled: false 7
              port:
                - name: eth1 8
    1
    Name of the policy.
    2
    Name of the interface.
    3
    Optional: Human-readable description of the interface.
    4
    The type of interface. This example creates a bridge.
    5
    The requested state for the interface after creation.
    6
    Disables IPv4 in this example.
    7
    Disables STP in this example.
    8
    The node NIC to which the bridge is attached.

10.18.3.2. Creating a Linux bridge network attachment definition

Warning

Configuring IP address management (IPAM) in a network attachment definition for virtual machines is not supported.

10.18.3.2.1. Creating a Linux bridge network attachment definition in the web console

You can create network attachment definitions to provide layer-2 networking to pods and virtual machines.

A Linux bridge network attachment definition is the most efficient method for connecting a virtual machine to a VLAN.

Procedure

  1. In the web console, click Networking NetworkAttachmentDefinitions.
  2. Click Create Network Attachment Definition.

    Note

    The network attachment definition must be in the same namespace as the pod or virtual machine.

  3. Enter a unique Name and optional Description.
  4. Select CNV Linux bridge from the Network Type list.
  5. Enter the name of the bridge in the Bridge Name field.
  6. Optional: If the resource has VLAN IDs configured, enter the ID numbers in the VLAN Tag Number field.
  7. Optional: Select MAC Spoof Check to enable MAC spoof filtering. This feature provides security against a MAC spoofing attack by allowing only a single MAC address to exit the pod.
  8. Click Create.
10.18.3.2.2. Creating a Linux bridge network attachment definition in the CLI

As a network administrator, you can configure a network attachment definition of type cnv-bridge to provide layer-2 networking to pods and virtual machines.

Prerequisites

  • The node must support nftables and the nft binary must be deployed to enable MAC spoof check.

Procedure

  1. Create a network attachment definition in the same namespace as the virtual machine.
  2. Add the virtual machine to the network attachment definition, as in the following example:

    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: <bridge-network> 1
      annotations:
        k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/<bridge-interface> 2
    spec:
      config: '{
        "cniVersion": "0.3.1",
        "name": "<bridge-network>", 3
        "type": "cnv-bridge", 4
        "bridge": "<bridge-interface>", 5
        "macspoofchk": true, 6
        "vlan": 100, 7
        "preserveDefaultVlan": false 8
      }'
    1
    The name for the NetworkAttachmentDefinition object.
    2
    Optional: Annotation key-value pair for node selection, where bridge-interface must match the name of a bridge configured on some nodes. If you add this annotation to your network attachment definition, your virtual machine instances will only run on the nodes that have the bridge-interface bridge connected.
    3
    The name for the configuration. It is recommended to match the configuration name to the name value of the network attachment definition.
    4
    The actual name of the Container Network Interface (CNI) plugin that provides the network for this network attachment definition. Do not change this field unless you want to use a different CNI.
    5
    The name of the Linux bridge configured on the node.
    6
    Optional: Flag to enable MAC spoof check. When set to true, you cannot change the MAC address of the pod or guest interface. This attribute provides security against a MAC spoofing attack by allowing only a single MAC address to exit the pod.
    7
    Optional: The VLAN tag. No additional VLAN configuration is required on the node network configuration policy.
    8
    Optional: Indicates whether the VM connects to the bridge through the default VLAN. The default value is true.
    Note

    A Linux bridge network attachment definition is the most efficient method for connecting a virtual machine to a VLAN.

  3. Create the network attachment definition:

    $ oc create -f <network-attachment-definition.yaml> 1
    1
    Where <network-attachment-definition.yaml> is the file name of the network attachment definition manifest.

Verification

  • Verify that the network attachment definition was created by running the following command:

    $ oc get network-attachment-definition <bridge-network>

10.18.3.3. Configuring the virtual machine for a Linux bridge network

10.18.3.3.1. Creating a NIC for a virtual machine in the web console

Create and attach additional NICs to a virtual machine from the web console.

Prerequisites

  • A network attachment definition must be available.

Procedure

  1. In the correct project in the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. Click Configuration Network interfaces to view the NICs already attached to the virtual machine.
  4. Click Add Network Interface to create a new slot in the list.
  5. Select a network attachment definition from the Network list for the additional network.
  6. Fill in the Name, Model, Type, and MAC Address for the new NIC.
  7. Click Save to save and attach the NIC to the virtual machine.
10.18.3.3.2. Networking fields
Name
Name for the network interface controller.

Model
Indicates the model of the network interface controller. Supported values are e1000e and virtio.

Network
List of available network attachment definitions.

Type
List of available binding methods. Select the binding method suitable for the network interface:

  • Default pod network: masquerade
  • Linux bridge network: bridge
  • SR-IOV network: SR-IOV

MAC Address
MAC address for the network interface controller. If a MAC address is not specified, one is assigned automatically.

10.18.3.3.3. Attaching a virtual machine to an additional network in the CLI

Attach a virtual machine to an additional network by adding a bridge interface and specifying a network attachment definition in the virtual machine configuration.

This procedure uses a YAML file to demonstrate editing the configuration and applying the updated file to the cluster. You can alternatively use the oc edit <object> <name> command to edit an existing virtual machine.

Prerequisites

  • Shut down the virtual machine before editing the configuration. If you edit a running virtual machine, you must restart the virtual machine for the changes to take effect.

Procedure

  1. Create or edit a configuration of a virtual machine that you want to connect to the bridge network.
  2. Add the bridge interface to the spec.template.spec.domain.devices.interfaces list and the network attachment definition to the spec.template.spec.networks list. This example adds a bridge interface called bridge-net that connects to the a-bridge-network network attachment definition:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
        name: <example-vm>
    spec:
      template:
        spec:
          domain:
            devices:
              interfaces:
                - masquerade: {}
                  name: <default>
                - bridge: {}
                  name: <bridge-net> 1
    # ...
          networks:
            - name: <default>
              pod: {}
            - name: <bridge-net> 2
              multus:
                networkName: <network-namespace>/<a-bridge-network> 3
    # ...
    1
    The name of the bridge interface.
    2
    The name of the network. This value must match the name value of the corresponding spec.template.spec.domain.devices.interfaces entry.
    3
    The name of the network attachment definition, prefixed by the namespace where it exists. The namespace must be either the default namespace or the same namespace where the VM is to be created. In this case, multus is used. Multus is a Container Network Interface (CNI) plugin that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs.
  3. Apply the configuration:

    $ oc apply -f <example-vm.yaml>
  4. Optional: If you edited a running virtual machine, you must restart it for the changes to take effect.

10.18.3.4. Next steps

10.18.4. Connecting a virtual machine to an SR-IOV network

You can connect a virtual machine (VM) to a Single Root I/O Virtualization (SR-IOV) network by performing the following steps:

  1. Configure an SR-IOV network device.
  2. Configure an SR-IOV network.
  3. Connect the VM to the SR-IOV network.

10.18.4.1. Prerequisites

10.18.4.2. Configuring SR-IOV network devices

The SR-IOV Network Operator adds the SriovNetworkNodePolicy.sriovnetwork.openshift.io CustomResourceDefinition to OpenShift Container Platform. You can configure an SR-IOV network device by creating a SriovNetworkNodePolicy custom resource (CR).

Note

When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes, and in some cases, reboot nodes.

It might take several minutes for a configuration change to apply.

Prerequisites

  • You installed the OpenShift CLI (oc).
  • You have access to the cluster as a user with the cluster-admin role.
  • You have installed the SR-IOV Network Operator.
  • You have enough available nodes in your cluster to handle the evicted workload from drained nodes.
  • You have not selected any control plane nodes for SR-IOV network device configuration.

Procedure

  1. Create an SriovNetworkNodePolicy object, and then save the YAML in the <name>-sriov-node-network.yaml file. Replace <name> with the name for this configuration.

    apiVersion: sriovnetwork.openshift.io/v1
    kind: SriovNetworkNodePolicy
    metadata:
      name: <name> 1
      namespace: openshift-sriov-network-operator 2
    spec:
      resourceName: <sriov_resource_name> 3
      nodeSelector:
        feature.node.kubernetes.io/network-sriov.capable: "true" 4
      priority: <priority> 5
      mtu: <mtu> 6
      numVfs: <num> 7
      nicSelector: 8
        vendor: "<vendor_code>" 9
        deviceID: "<device_id>" 10
        pfNames: ["<pf_name>", ...] 11
        rootDevices: ["<pci_bus_id>", "..."] 12
      deviceType: vfio-pci 13
      isRdma: false 14
    1
    Specify a name for the CR object.
    2
    Specify the namespace where the SR-IOV Operator is installed.
    3
    Specify the resource name of the SR-IOV device plugin. You can create multiple SriovNetworkNodePolicy objects for a resource name.
    4
    Specify the node selector to select which nodes are configured. Only SR-IOV network devices on selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed only on selected nodes.
    5
    Optional: Specify an integer value between 0 and 99. A smaller number gets higher priority, so a priority of 10 is higher than a priority of 99. The default value is 99.
    6
    Optional: Specify a value for the maximum transmission unit (MTU) of the virtual function. The maximum MTU value can vary for different NIC models.
    7
    Specify the number of the virtual functions (VF) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than 127.
    8
    The nicSelector mapping selects the Ethernet device for the Operator to configure. You do not need to specify values for all the parameters. It is recommended to identify the Ethernet adapter with enough precision to minimize the possibility of selecting an Ethernet device unintentionally. If you specify rootDevices, you must also specify a value for vendor, deviceID, or pfNames. If you specify both pfNames and rootDevices at the same time, ensure that they point to an identical device.
    9
    Optional: Specify the vendor hex code of the SR-IOV network device. The only allowed values are 8086 and 15b3.
    10
    Optional: Specify the device hex code of the SR-IOV network device. The only allowed values are 158b, 1015, and 1017.
    11
    Optional: The parameter accepts an array of one or more physical function (PF) names for the Ethernet device.
    12
    The parameter accepts an array of one or more PCI bus addresses for the physical function of the Ethernet device. Provide the address in the following format: 0000:02:00.1.
    13
    The vfio-pci driver type is required for virtual functions in OpenShift Virtualization.
    14
    Optional: Specify whether to enable remote direct memory access (RDMA) mode. For a Mellanox card, set isRdma to false. The default value is false.
    Note

    If the isRdma flag is set to true, you can continue to use the RDMA-enabled VF as a normal network device. A device can be used in either mode.

  2. Optional: Label the SR-IOV capable cluster nodes with SriovNetworkNodePolicy.Spec.NodeSelector if they are not already labeled. For more information about labeling nodes, see "Understanding how to update labels on nodes".
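
    For example, a hedged sketch that assumes the feature.node.kubernetes.io/network-sriov.capable label referenced by the node selector in the previous step:

    $ oc label node <node_name> feature.node.kubernetes.io/network-sriov.capable="true"
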
  3. Create the SriovNetworkNodePolicy object:

    $ oc create -f <name>-sriov-node-network.yaml

    where <name> specifies the name for this configuration.

    After you apply the configuration update, all the pods in the openshift-sriov-network-operator namespace transition to the Running status.

  4. To verify that the SR-IOV network device is configured, enter the following command. Replace <node_name> with the name of a node with the SR-IOV network device that you just configured.

    $ oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'

10.18.4.3. Configuring SR-IOV additional network

You can configure an additional network that uses SR-IOV hardware by creating an SriovNetwork object.

When you create an SriovNetwork object, the SR-IOV Network Operator automatically creates a NetworkAttachmentDefinition object.

Note

Do not modify or delete an SriovNetwork object if it is attached to pods or virtual machines in a running state.

Prerequisites

  • Install the OpenShift CLI (oc).
  • Log in as a user with cluster-admin privileges.

Procedure

  1. Create the following SriovNetwork object, and then save the YAML in the <name>-sriov-network.yaml file. Replace <name> with a name for this additional network.
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: <name> 1
  namespace: openshift-sriov-network-operator 2
spec:
  resourceName: <sriov_resource_name> 3
  networkNamespace: <target_namespace> 4
  vlan: <vlan> 5
  spoofChk: "<spoof_check>" 6
  linkState: <link_state> 7
  maxTxRate: <max_tx_rate> 8
  minTxRate: <min_tx_rate> 9
  vlanQoS: <vlan_qos> 10
  trust: "<trust_vf>" 11
  capabilities: <capabilities> 12
1
Replace <name> with a name for the object. The SR-IOV Network Operator creates a NetworkAttachmentDefinition object with same name.
2
Specify the namespace where the SR-IOV Network Operator is installed.
3
Replace <sriov_resource_name> with the value for the .spec.resourceName parameter from the SriovNetworkNodePolicy object that defines the SR-IOV hardware for this additional network.
4
Replace <target_namespace> with the target namespace for the SriovNetwork. Only pods or virtual machines in the target namespace can attach to the SriovNetwork.
5
Optional: Replace <vlan> with a Virtual LAN (VLAN) ID for the additional network. The integer value must be from 0 to 4095. The default value is 0.
6
Optional: Replace <spoof_check> with the spoof check mode of the VF. The allowed values are the strings "on" and "off".
Important

You must enclose the value you specify in quotes or the CR is rejected by the SR-IOV Network Operator.

7
Optional: Replace <link_state> with the link state of the virtual function (VF). Allowed values are enable, disable, and auto.
8
Optional: Replace <max_tx_rate> with a maximum transmission rate, in Mbps, for the VF.
9
Optional: Replace <min_tx_rate> with a minimum transmission rate, in Mbps, for the VF. This value should always be less than or equal to the maximum transmission rate.
Note

Intel NICs do not support the minTxRate parameter. For more information, see BZ#1772847.

10
Optional: Replace <vlan_qos> with an IEEE 802.1p priority level for the VF. The default value is 0.
11
Optional: Replace <trust_vf> with the trust mode of the VF. The allowed values are the strings "on" and "off".
Important

You must enclose the value you specify in quotes or the CR is rejected by the SR-IOV Network Operator.

12
Optional: Replace <capabilities> with the capabilities to configure for this network.

  2. To create the object, enter the following command. Replace <name> with a name for this additional network.

    $ oc create -f <name>-sriov-network.yaml
  3. Optional: To confirm that the NetworkAttachmentDefinition object associated with the SriovNetwork object that you created in the previous step exists, enter the following command. Replace <namespace> with the namespace you specified in the SriovNetwork object.

    $ oc get net-attach-def -n <namespace>

10.18.4.4. Connecting a virtual machine to an SR-IOV network

You can connect the virtual machine (VM) to the SR-IOV network by including the network details in the VM configuration.

Procedure

  1. Include the SR-IOV network details in the spec.domain.devices.interfaces and spec.networks of the VM configuration:

    kind: VirtualMachine
    # ...
    spec:
      domain:
        devices:
          interfaces:
          - name: <default> 1
            masquerade: {} 2
          - name: <nic1> 3
            sriov: {}
      networks:
      - name: <default> 4
        pod: {}
      - name: <nic1> 5
        multus:
            networkName: <sriov-network> 6
    # ...
    1
    A unique name for the interface that is connected to the pod network.
    2
    The masquerade binding to the default pod network.
    3
    A unique name for the SR-IOV interface.
    4
    The name of the pod network interface. This must be the same as the interfaces.name that you defined earlier.
    5
    The name of the SR-IOV interface. This must be the same as the interfaces.name that you defined earlier.
    6
    The name of the SR-IOV network attachment definition.
  2. Apply the virtual machine configuration:

    $ oc apply -f <vm-sriov.yaml> 1
    1
    The name of the virtual machine YAML file.

10.18.4.5. Configuring a cluster for DPDK workloads

You can use the following procedure to configure an OpenShift Container Platform cluster to run Data Plane Development Kit (DPDK) workloads.

Important

Configuring a cluster for DPDK workloads is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Prerequisites

  • You have access to the cluster as a user with cluster-admin permissions.
  • You have installed the OpenShift CLI (oc).
  • You have installed the SR-IOV Network Operator.
  • You have installed the Node Tuning Operator.

Procedure

  1. Map your compute nodes topology to determine which Non-Uniform Memory Access (NUMA) CPUs are isolated for DPDK applications and which ones are reserved for the operating system (OS).
  2. Label a subset of the compute nodes with a custom role; for example, worker-dpdk:

    $ oc label node <node_name> node-role.kubernetes.io/worker-dpdk=""
  3. Create a new MachineConfigPool manifest that contains the worker-dpdk label in the spec.machineConfigSelector object:

    Example MachineConfigPool manifest

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfigPool
    metadata:
      name: worker-dpdk
      labels:
        machineconfiguration.openshift.io/role: worker-dpdk
    spec:
      machineConfigSelector:
        matchExpressions:
          - key: machineconfiguration.openshift.io/role
            operator: In
            values:
              - worker
              - worker-dpdk
      nodeSelector:
        matchLabels:
          node-role.kubernetes.io/worker-dpdk: ""

  4. Create a PerformanceProfile manifest that applies to the labeled nodes and the machine config pool that you created in the previous steps. The performance profile specifies the CPUs that are isolated for DPDK applications and the CPUs that are reserved for housekeeping.

    Example PerformanceProfile manifest

    apiVersion: performance.openshift.io/v2
    kind: PerformanceProfile
    metadata:
      name: profile-1
    spec:
      cpu:
        isolated: 4-39,44-79
        reserved: 0-3,40-43
      hugepages:
        defaultHugepagesSize: 1G
        pages:
        - count: 32
          node: 0
          size: 1G
      net:
        userLevelNetworking: true
      nodeSelector:
        node-role.kubernetes.io/worker-dpdk: ""
      numa:
        topologyPolicy: single-numa-node

    Note

    The compute nodes automatically restart after you apply the MachineConfigPool and PerformanceProfile manifests.

  5. Retrieve the name of the generated RuntimeClass resource from the status.runtimeClass field of the PerformanceProfile object:

    $ oc get performanceprofiles.performance.openshift.io profile-1 -o=jsonpath='{.status.runtimeClass}{"\n"}'
  6. Set the previously obtained RuntimeClass name as the default container runtime class for the virt-launcher pods by adding the following annotation to the HyperConverged custom resource (CR):

    $ oc annotate --overwrite -n openshift-cnv hco kubevirt-hyperconverged \
        kubevirt.kubevirt.io/jsonpatch='[{"op": "add", "path": "/spec/configuration/defaultRuntimeClass", "value": <runtimeclass_name>}]'
    Note

    Adding the annotation to the HyperConverged CR changes a global setting that affects all VMs that are created after the annotation is applied. Setting this annotation breaches support of the OpenShift Virtualization instance and must be used only on test clusters. For best performance, apply for a support exception.

  7. Create an SriovNetworkNodePolicy object with the spec.deviceType field set to vfio-pci:

    Example SriovNetworkNodePolicy manifest

    apiVersion: sriovnetwork.openshift.io/v1
    kind: SriovNetworkNodePolicy
    metadata:
      name: policy-1
      namespace: openshift-sriov-network-operator
    spec:
      resourceName: intel_nics_dpdk
      deviceType: vfio-pci
      mtu: 9000
      numVfs: 4
      priority: 99
      nicSelector:
        vendor: "8086"
        deviceID: "1572"
        pfNames:
          - eno3
        rootDevices:
          - "0000:19:00.2"
      nodeSelector:
        feature.node.kubernetes.io/network-sriov.capable: "true"

10.18.4.6. Configuring a project for DPDK workloads

You can configure the project to run DPDK workloads on SR-IOV hardware.

Prerequisites

  • Your cluster is configured to run DPDK workloads.

Procedure

  1. Create a namespace for your DPDK applications:

    $ oc create ns dpdk-checkup-ns
  2. Create an SriovNetwork object that references the SriovNetworkNodePolicy object. When you create an SriovNetwork object, the SR-IOV Network Operator automatically creates a NetworkAttachmentDefinition object.

    Example SriovNetwork manifest

    apiVersion: sriovnetwork.openshift.io/v1
    kind: SriovNetwork
    metadata:
      name: dpdk-sriovnetwork
      namespace: openshift-sriov-network-operator
    spec:
      ipam: |
        {
          "type": "host-local",
          "subnet": "10.56.217.0/24",
          "rangeStart": "10.56.217.171",
          "rangeEnd": "10.56.217.181",
          "routes": [{
            "dst": "0.0.0.0/0"
          }],
          "gateway": "10.56.217.1"
        }
      networkNamespace: dpdk-checkup-ns 1
      resourceName: intel_nics_dpdk 2
      spoofChk: "off"
      trust: "on"
      vlan: 1019

    1
    The namespace where the NetworkAttachmentDefinition object is deployed.
    2
    The value of the spec.resourceName attribute of the SriovNetworkNodePolicy object that was created when configuring the cluster for DPDK workloads.
  3. Optional: Run the virtual machine latency checkup to verify that the network is properly configured.
  4. Optional: Run the DPDK checkup to verify that the namespace is ready for DPDK workloads.

10.18.4.7. Configuring a virtual machine for DPDK workloads

You can run Data Plane Development Kit (DPDK) workloads on virtual machines (VMs) to achieve lower latency and higher throughput for faster packet processing in the user space. DPDK uses the SR-IOV network for hardware-based I/O sharing.

Prerequisites

  • Your cluster is configured to run DPDK workloads.
  • You have created and configured the project in which the VM will run.

Procedure

  1. Edit the VirtualMachine manifest to include information about the SR-IOV network interface, CPU topology, CRI-O annotations, and huge pages:

    Example VirtualMachine manifest

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: rhel-dpdk-vm
    spec:
      running: true
      template:
        metadata:
          annotations:
            cpu-load-balancing.crio.io: disable 1
            cpu-quota.crio.io: disable 2
            irq-load-balancing.crio.io: disable 3
        spec:
          domain:
            cpu:
              sockets: 1 4
              cores: 5 5
              threads: 2
              dedicatedCpuPlacement: true
              isolateEmulatorThread: true
            interfaces:
              - masquerade: {}
                name: default
              - model: virtio
                name: nic-east
                pciAddress: '0000:07:00.0'
                sriov: {}
              networkInterfaceMultiqueue: true
              rng: {}
          memory:
            hugepages:
              pageSize: 1Gi 6
              guest: 8Gi
          networks:
            - name: default
              pod: {}
            - multus:
                networkName: dpdk-net 7
              name: nic-east
    # ...

    1
    This annotation specifies that load balancing is disabled for CPUs that are used by the container.
    2
    This annotation specifies that the CPU quota is disabled for CPUs that are used by the container.
    3
    This annotation specifies that Interrupt Request (IRQ) load balancing is disabled for CPUs that are used by the container.
    4
    The number of sockets inside the VM. This field must be set to 1 for the CPUs to be scheduled from the same Non-Uniform Memory Access (NUMA) node.
    5
    The number of cores inside the VM. This must be a value greater than or equal to 1. In this example, the VM is scheduled with 5 hyper-threads or 10 CPUs.
    6
    The size of the huge pages. The possible values for x86-64 architecture are 1Gi and 2Mi. In this example, the request is for 8 huge pages of size 1Gi.
    7
    The name of the SR-IOV NetworkAttachmentDefinition object.
  2. Save and exit the editor.
  3. Apply the VirtualMachine manifest:

    $ oc apply -f <file_name>.yaml
  4. Configure the guest operating system. The following example shows the configuration steps for RHEL 8 OS:

    1. Configure huge pages by using the GRUB bootloader command-line interface. In the following example, 8 1G huge pages are specified.

      $ grubby --update-kernel=ALL --args="default_hugepagesz=1GB hugepagesz=1G hugepages=8"
    2. To achieve low-latency tuning by using the cpu-partitioning profile in the TuneD application, run the following commands:

      $ dnf install -y tuned-profiles-cpu-partitioning
      $ echo isolated_cores=2-9 > /etc/tuned/cpu-partitioning-variables.conf

      The first two CPUs (0 and 1) are set aside for housekeeping tasks and the rest are isolated for the DPDK application.

      $ tuned-adm profile cpu-partitioning
    3. Override the SR-IOV NIC driver by using the driverctl device driver control utility:

      $ dnf install -y driverctl
      $ driverctl set-override 0000:07:00.0 vfio-pci
  5. Restart the VM to apply the changes.

10.18.4.8. Next steps

10.18.5. Connecting a virtual machine to a service mesh

OpenShift Virtualization is now integrated with OpenShift Service Mesh. You can monitor, visualize, and control traffic between pods that run virtual machine workloads on the default pod network with IPv4.

10.18.5.1. Prerequisites

10.18.5.2. Configuring a virtual machine for the service mesh

To add a virtual machine (VM) workload to a service mesh, enable automatic sidecar injection in the VM configuration file by setting the sidecar.istio.io/inject annotation to true. Then expose your VM as a service to view your application in the mesh.

Prerequisites

  • To avoid port conflicts, do not use ports used by the Istio sidecar proxy. These include ports 15000, 15001, 15006, 15008, 15020, 15021, and 15090.

Procedure

  1. Edit the VM configuration file to add the sidecar.istio.io/inject: "true" annotation.

    Example configuration file

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      labels:
        kubevirt.io/vm: vm-istio
      name: vm-istio
    spec:
      runStrategy: Always
      template:
        metadata:
          labels:
            kubevirt.io/vm: vm-istio
            app: vm-istio 1
          annotations:
            sidecar.istio.io/inject: "true" 2
        spec:
          domain:
            devices:
              interfaces:
              - name: default
                masquerade: {} 3
              disks:
              - disk:
                  bus: virtio
                name: containerdisk
              - disk:
                  bus: virtio
                name: cloudinitdisk
            resources:
              requests:
                memory: 1024M
          networks:
          - name: default
            pod: {}
          terminationGracePeriodSeconds: 180
          volumes:
          - containerDisk:
              image: registry:5000/kubevirt/fedora-cloud-container-disk-demo:devel
            name: containerdisk

    1
    The key/value pair (label) that must be matched to the service selector attribute.
    2
    The annotation to enable automatic sidecar injection.
    3
    The binding method (masquerade mode) for use with the default pod network.
  2. Apply the VM configuration:

    $ oc apply -f <vm_name>.yaml 1
    1
    The name of the virtual machine YAML file.
  3. Create a Service object to expose your VM to the service mesh.

      apiVersion: v1
      kind: Service
      metadata:
        name: vm-istio
      spec:
        selector:
          app: vm-istio 1
        ports:
          - port: 8080
            name: http
            protocol: TCP
    1
    The service selector that determines the set of pods targeted by a service. This attribute corresponds to the spec.template.metadata.labels field in the VM configuration file. In the above example, the Service object named vm-istio targets TCP port 8080 on any pod with the label app=vm-istio.
  4. Create the service:

    $ oc create -f <service_name>.yaml 1
    1
    The name of the service YAML file.

10.18.6. Configuring IP addresses for virtual machines

You can configure static and dynamic IP addresses for virtual machines.

10.18.6.1. Configuring an IP address for a new virtual machine using cloud-init

You can use cloud-init to configure the IP address of a secondary NIC when you create a virtual machine (VM). The IP address can be dynamically or statically provisioned.

Note

If the VM is connected to the pod network, the pod network interface is the default route unless you update it.

Prerequisites

  • The virtual machine is connected to a secondary network.
  • You have a DHCP server available on the secondary network to configure a dynamic IP for the virtual machine.

Procedure

  • Edit the spec.template.spec.volumes.cloudInitNoCloud.networkData stanza of the virtual machine configuration:

    • To configure a dynamic IP address, specify the interface name and enable DHCP:

      kind: VirtualMachine
      spec:
      # ...
        template:
        # ...
          spec:
            volumes:
            - cloudInitNoCloud:
                networkData: |
                  version: 2
                  ethernets:
                    eth1: 1
                      dhcp4: true
      1
      Specify the interface name.
    • To configure a static IP, specify the interface name and the IP address:

      kind: VirtualMachine
      spec:
      # ...
        template:
        # ...
          spec:
            volumes:
            - cloudInitNoCloud:
                networkData: |
                  version: 2
                  ethernets:
                    eth1: 1
                      addresses:
                      - 10.10.10.14/24 2
      1
      Specify the interface name.
      2
      Specify the static IP address.
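
      If the secondary interface must also carry the default route, as described in the note above, you can add a gateway to the static configuration. A hedged sketch, assuming a gateway at 10.10.10.1 on the example network:

      kind: VirtualMachine
      spec:
      # ...
        template:
        # ...
          spec:
            volumes:
            - cloudInitNoCloud:
                networkData: |
                  version: 2
                  ethernets:
                    eth1:
                      addresses:
                      - 10.10.10.14/24
                      gateway4: 10.10.10.1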

10.18.7. Viewing the IP address of NICs on a virtual machine

You can view the IP address for a network interface controller (NIC) by using the web console or the oc client. The QEMU guest agent displays additional information about the virtual machine’s secondary networks.

10.18.7.1. Prerequisites

  • Install the QEMU guest agent on the virtual machine.

10.18.7.2. Viewing the IP address of a virtual machine interface in the CLI

The network interface configuration is included in the oc describe vmi <vmi_name> command.

You can also view the IP address information by running ip addr on the virtual machine, or by running oc get vmi <vmi_name> -o yaml.

Procedure

  • Use the oc describe command to display the virtual machine interface configuration:

    $ oc describe vmi <vmi_name>

    Example output

    # ...
    Interfaces:
       Interface Name:  eth0
       Ip Address:      10.244.0.37/24
       Ip Addresses:
         10.244.0.37/24
         fe80::858:aff:fef4:25/64
       Mac:             0a:58:0a:f4:00:25
       Name:            default
       Interface Name:  v2
       Ip Address:      1.1.1.7/24
       Ip Addresses:
         1.1.1.7/24
         fe80::f4d9:70ff:fe13:9089/64
       Mac:             f6:d9:70:13:90:89
       Interface Name:  v1
       Ip Address:      1.1.1.1/24
       Ip Addresses:
         1.1.1.1/24
         1.1.1.2/24
         1.1.1.4/24
         2001:de7:0:f101::1/64
         2001:db8:0:f101::1/64
         fe80::1420:84ff:fe10:17aa/64
       Mac:             16:20:84:10:17:aa

10.18.7.3. Viewing the IP address of a virtual machine interface in the web console

The IP information is displayed on the VirtualMachine details page for the virtual machine.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu.
  2. Select a virtual machine name to open the VirtualMachine details page.

The information for each attached NIC is displayed under IP Address on the Details tab.

10.18.8. Accessing a virtual machine on a secondary network by using the cluster domain name

You can access a virtual machine (VM) that is attached to a secondary network interface from outside the cluster by using the fully qualified domain name (FQDN) of the cluster.

Important

Accessing VMs by using the cluster FQDN is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

10.18.8.1. Configuring DNS server for secondary networks

The Cluster Network Addons Operator (CNAO) deploys a Domain Name System (DNS) server and monitoring components when you enable the KubeSecondaryDNS feature gate in the HyperConverged custom resource (CR).

Prerequisites

  • You installed the OpenShift CLI (oc).
  • You have access to an OpenShift Container Platform cluster with cluster-admin permissions.

Procedure

  1. Create a LoadBalancer service using MetalLB or any other load balancer to expose the DNS server outside the cluster. The service listens on port 53 and targets port 5353. For example:

    $ oc expose -n openshift-cnv deployment/secondary-dns --name=dns-lb --type=LoadBalancer --port=53 --target-port=5353 --protocol='UDP'
  2. Retrieve the public IP address of the service by querying the Service object:

    $ oc get service -n openshift-cnv

    Example output

    NAME     TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
    dns-lb   LoadBalancer   172.30.27.5   10.46.41.94   53:31829/TCP   5s

  3. Deploy the DNS server and monitoring components by editing the HyperConverged CR:

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      featureGates:
        deployKubeSecondaryDNS: true 1
      kubeSecondaryDNSNameServerIP: "10.46.41.94" 2
    # ...
    1
    Set the KubeSecondaryDNS feature gate to true.
    2
    Set the IP address of the service to the value retrieved in step 2.
  4. Retrieve the FQDN of the OpenShift Container Platform cluster by using the following command:

    $ oc get dnses.config.openshift.io cluster -o json | jq .spec.baseDomain

    Example output

    openshift.example.com

  5. Point to the DNS server by using one of the following methods:

    • Add the kubeSecondaryDNSNameServerIP value to the resolv.conf file on your local machine.

      Note

      Editing the resolv.conf file overwrites any existing DNS settings.

    • Add the kubeSecondaryDNSNameServerIP value and the cluster FQDN to the enterprise DNS server records. For example:

      vm.<FQDN>. IN NS ns.vm.<FQDN>.
      ns.vm.<FQDN>. IN A 10.46.41.94

10.18.8.2. Connecting to a virtual machine on a secondary network by using the cluster FQDN

You can access a virtual machine (VM) that is attached to a secondary network interface from outside the cluster by using the fully qualified domain name (FQDN) of the cluster.

Prerequisites

  • The QEMU guest agent must be running on the virtual machine.
  • The IP address of the VM that you want to connect to, as resolved by using a DNS client, must be public.
  • You have configured the DNS server for secondary networks.
  • You have retrieved the fully qualified domain name (FQDN) of the cluster.

Procedure

  1. Retrieve the VM configuration by using the following command:

    $ oc get vm -n <namespace> <vm_name> -o yaml

    Example output

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      labels:
        kubevirt.io/vm: example-vm
      name: example-vm
      namespace: example-namespace
    spec:
      running: true
      template:
        metadata:
          labels:
            kubevirt.io/vm: example-vm
        spec:
          domain:
            devices:
    # ...
              interfaces:
                - bridge: {}
                  name: example-nic
    # ...
          networks:
          - multus:
              networkName: bridge-conf
            name: example-nic 1
    # ...

    1
    Specify the name of the secondary network interface.
  2. Connect to the VM by using the ssh command:

    $ ssh <user_name>@<interface_name>.<vm_name>.<namespace>.vm.<FQDN> 1
    1
    Specify the user name, interface name, VM name, VM namespace, and FQDN.

    Example

    $ ssh you@example-nic.example-vm.example-namespace.vm.openshift.example.com

10.18.8.3. Additional resources

10.18.9. Using a MAC address pool for virtual machines

The KubeMacPool component provides a MAC address pool service for virtual machine NICs in a namespace.

10.18.9.1. About KubeMacPool

KubeMacPool provides a MAC address pool per namespace and allocates MAC addresses for virtual machine NICs from the pool. This ensures that the NIC is assigned a unique MAC address that does not conflict with the MAC address of another virtual machine.

Virtual machine instances created from that virtual machine retain the assigned MAC address across reboots.

Note

KubeMacPool does not handle virtual machine instances created independently from a virtual machine.

KubeMacPool is enabled by default when you install OpenShift Virtualization. You can disable a MAC address pool for a namespace by adding the mutatevirtualmachines.kubemacpool.io=ignore label to the namespace. Re-enable KubeMacPool for the namespace by removing the label.

10.18.9.2. Disabling a MAC address pool for a namespace in the CLI

Disable a MAC address pool for virtual machines in a namespace by adding the mutatevirtualmachines.kubemacpool.io=ignore label to the namespace.

Procedure

  • Add the mutatevirtualmachines.kubemacpool.io=ignore label to the namespace. The following example disables KubeMacPool for two namespaces, <namespace1> and <namespace2>:

    $ oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io=ignore

10.18.9.3. Re-enabling a MAC address pool for a namespace in the CLI

If you disabled KubeMacPool for a namespace and want to re-enable it, remove the mutatevirtualmachines.kubemacpool.io=ignore label from the namespace.

Note

Earlier versions of OpenShift Virtualization used the label mutatevirtualmachines.kubemacpool.io=allocate to enable KubeMacPool for a namespace. This label is still supported, but it is redundant because KubeMacPool is now enabled by default.

Procedure

  • Remove the KubeMacPool label from the namespace. The following example re-enables KubeMacPool for two namespaces, <namespace1> and <namespace2>:

    $ oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io-

10.19. Virtual machine disks

10.19.1. Configuring local storage for virtual machines

You can configure local storage for virtual machines by using the hostpath provisioner (HPP).

When you install the OpenShift Virtualization Operator, the Hostpath Provisioner (HPP) Operator is automatically installed. The HPP is a local storage provisioner designed for OpenShift Virtualization that is created by the Hostpath Provisioner Operator. To use the HPP, you must create an HPP custom resource (CR).

10.19.1.1. Creating a hostpath provisioner with a basic storage pool

You configure a hostpath provisioner (HPP) with a basic storage pool by creating an HPP custom resource (CR) with a storagePools stanza. The storage pool specifies the name and path used by the CSI driver.

Prerequisites

  • The directories specified in spec.storagePools.path must have read/write access.
  • The storage pools must not be in the same partition as the operating system. Otherwise, the operating system partition might become filled to capacity, which will impact performance or cause the node to become unstable or unusable.

Procedure

  1. Create an hpp_cr.yaml file with a storagePools stanza as in the following example:

    apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
    kind: HostPathProvisioner
    metadata:
      name: hostpath-provisioner
    spec:
      imagePullPolicy: IfNotPresent
      storagePools: 1
      - name: any_name
        path: "/var/myvolumes" 2
      workload:
        nodeSelector:
          kubernetes.io/os: linux
    1
    The storagePools stanza is an array to which you can add multiple entries.
    2
    Specify the storage pool directories under this node path.
  2. Save the file and exit.
  3. Create the HPP by running the following command:

    $ oc create -f hpp_cr.yaml
10.19.1.1.1. About creating storage classes

When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class. You cannot update a StorageClass object’s parameters after you create it.

In order to use the hostpath provisioner (HPP) you must create an associated storage class for the CSI driver with the storagePools stanza.

Note

Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While the disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned.

To solve this problem, use the Kubernetes pod scheduler to bind the persistent volume claim (PVC) to a PV on the correct node. By creating a StorageClass with the volumeBindingMode parameter set to WaitForFirstConsumer, the binding and provisioning of the PV is delayed until a pod is created using the PVC.

10.19.1.1.2. Creating a storage class for the CSI driver with the storagePools stanza

You create a storage class custom resource (CR) for the hostpath provisioner (HPP) CSI driver.

Procedure

  1. Create a storageclass_csi.yaml file to define the storage class:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: hostpath-csi
    provisioner: kubevirt.io.hostpath-provisioner
    reclaimPolicy: Delete 1
    volumeBindingMode: WaitForFirstConsumer 2
    parameters:
      storagePool: my-storage-pool 3
1
The two possible reclaimPolicy values are Delete and Retain. If you do not specify a value, the default value is Delete.
2
The volumeBindingMode parameter determines when dynamic provisioning and volume binding occur. Specify WaitForFirstConsumer to delay the binding and provisioning of a persistent volume (PV) until after a pod that uses the persistent volume claim (PVC) is created. This ensures that the PV meets the pod’s scheduling requirements.
3
Specify the name of the storage pool defined in the HPP CR.
  2. Save the file and exit.
  3. Create the StorageClass object by running the following command:

    $ oc create -f storageclass_csi.yaml

10.19.1.2. About storage pools created with PVC templates

If you have a single, large persistent volume (PV), you can create a storage pool by defining a PVC template in the hostpath provisioner (HPP) custom resource (CR).

A storage pool created with a PVC template can contain multiple HPP volumes. Splitting a PV into smaller volumes provides greater flexibility for data allocation.

The PVC template is based on the spec stanza of the PersistentVolumeClaim object:

Example PersistentVolumeClaim object

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: iso-pvc
spec:
  volumeMode: Block 1
  storageClassName: my-storage-class
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

1
This value is only required for block volume mode PVs.

You define a storage pool using a pvcTemplate specification in the HPP CR. The Operator creates a PVC from the pvcTemplate specification for each node containing the HPP CSI driver. The PVC created from the PVC template consumes the single large PV, allowing the HPP to create smaller dynamic volumes.

You can combine basic storage pools with storage pools created from PVC templates.

10.19.1.2.1. Creating a storage pool with a PVC template

You can create a storage pool for multiple hostpath provisioner (HPP) volumes by specifying a PVC template in the HPP custom resource (CR).

Prerequisites

  • The directories specified in spec.storagePools.path must have read/write access.
  • The storage pools must not be in the same partition as the operating system. Otherwise, the operating system partition might become filled to capacity, which will impact performance or cause the node to become unstable or unusable.

Procedure

  1. Create an hpp_pvc_template_pool.yaml file for the HPP CR that specifies a persistent volume (PVC) template in the storagePools stanza according to the following example:

    apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
    kind: HostPathProvisioner
    metadata:
      name: hostpath-provisioner
    spec:
      imagePullPolicy: IfNotPresent
      storagePools: 1
      - name: my-storage-pool
        path: "/var/myvolumes" 2
        pvcTemplate:
          volumeMode: Block 3
          storageClassName: my-storage-class 4
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 5Gi 5
      workload:
        nodeSelector:
          kubernetes.io/os: linux
    1
    The storagePools stanza is an array that can contain both basic and PVC template storage pools.
    2
    Specify the storage pool directories under this node path.
    3
    Optional: The volumeMode parameter can be either Block or Filesystem as long as it matches the provisioned volume format. If no value is specified, the default is Filesystem. If the volumeMode is Block, the mounting pod creates an XFS file system on the block volume before mounting it.
    4
    If the storageClassName parameter is omitted, the default storage class is used to create PVCs. If you omit storageClassName, ensure that the HPP storage class is not the default storage class.
    5
    You can specify statically or dynamically provisioned storage. In either case, ensure the requested storage size is appropriate for the volume you want to virtually divide or the PVC cannot be bound to the large PV. If the storage class you are using uses dynamically provisioned storage, pick an allocation size that matches the size of a typical request.
  2. Save the file and exit.
  3. Create the HPP with a storage pool by running the following command:

    $ oc create -f hpp_pvc_template_pool.yaml

Additional resources

10.19.2. Creating data volumes

You can create a data volume by using either the PVC or storage API.

Important

When using OpenShift Virtualization with Red Hat OpenShift Data Foundation, specify RBD block mode persistent volume claims (PVCs) when creating virtual machine disks. With virtual machine disks, RBD block mode volumes are more efficient and provide better performance than Ceph FS or RBD filesystem-mode PVCs.

To specify RBD block mode PVCs, use the 'ocs-storagecluster-ceph-rbd' storage class and volumeMode: Block.

Tip

Whenever possible, use the storage API to optimize space allocation and maximize performance.

A storage profile is a custom resource that the CDI manages. It provides recommended storage settings based on the associated storage class. A storage profile is allocated for each storage class.

Storage profiles enable you to create data volumes quickly while reducing coding and minimizing potential errors.

For recognized storage types, the CDI provides values that optimize the creation of PVCs. However, you can configure automatic settings for a storage class by customizing its storage profile.
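
For example, you can inspect the settings that the CDI derives for a particular storage class by viewing the corresponding StorageProfile object; the storage class name is a placeholder:

$ oc get storageprofile <storage_class> -o yaml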

10.19.2.1. About data volumes

DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). You can create a data volume as either a standalone resource or by using the dataVolumeTemplate field in the virtual machine (VM) specification.

Note
  • VM disk PVCs that are prepared by using standalone data volumes maintain an independent lifecycle from the VM. If you use the dataVolumeTemplate field in the VM specification to prepare the PVC, the PVC shares the same lifecycle as the VM.
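
As an illustration of the second approach, the following is a minimal sketch of a VirtualMachine that prepares its disk through the dataVolumeTemplates field. The names, image URL, and sizes are placeholders, and the sketch is not a complete, production-ready VM definition.

Example VirtualMachine with a data volume template

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  running: false
  dataVolumeTemplates:
  - metadata:
      name: example-vm-disk          # this data volume shares the lifecycle of the VM
    spec:
      storage:
        resources:
          requests:
            storage: 10Gi
      source:
        http:
          url: "http://server.example.com/disk.qcow2"   # placeholder image URL
  template:
    spec:
      domain:
        resources:
          requests:
            memory: 2Gi
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio
      volumes:
      - name: rootdisk
        dataVolume:
          name: example-vm-disk      # references the data volume template above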

10.19.2.2. Creating data volumes using the storage API

When you create a data volume using the storage API, the Containerized Data Importer (CDI) optimizes your persistent volume claim (PVC) allocation based on the type of storage supported by your selected storage class. You only have to specify the data volume name, namespace, and the amount of storage that you want to allocate.

For example:

  • When using Ceph RBD, accessModes is automatically set to ReadWriteMany, which enables live migration. volumeMode is set to Block to maximize performance.
  • When you use volumeMode: Filesystem, the CDI automatically requests more space, if required, to accommodate file system overhead.

In the following YAML, using the storage API requests a data volume with two gigabytes of usable space. The user does not need to know the volumeMode in order to correctly estimate the required persistent volume claim (PVC) size. The CDI chooses the optimal combination of accessModes and volumeMode attributes automatically. These optimal values are based on the type of storage or the defaults that you define in your storage profile. If you want to provide custom values, they override the system-calculated values.

Example DataVolume definition

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: <datavolume> 1
spec:
  source:
    pvc: 2
      namespace: "<source_namespace>" 3
      name: "<my_vm_disk>" 4
  storage: 5
    resources:
      requests:
        storage: 2Gi 6
    storageClassName: <storage_class> 7

1
The name of the new data volume.
2
Indicate that the source of the import is an existing persistent volume claim (PVC).
3
The namespace where the source PVC exists.
4
The name of the source PVC.
5
Indicates allocation using the storage API.
6
Specifies the amount of available space that you request for the PVC.
7
Optional: The name of the storage class. If the storage class is not specified, the system default storage class is used.

10.19.2.3. Creating data volumes using the PVC API

When you create a data volume using the PVC API, the Containerized Data Importer (CDI) creates the data volume based on what you specify for the following fields:

  • accessModes (ReadWriteOnce, ReadWriteMany, or ReadOnlyMany)
  • volumeMode (Filesystem or Block)
  • capacity of storage (5Gi, for example)

In the following YAML, using the PVC API allocates a data volume with a storage capacity of two gigabytes. You specify an access mode of ReadWriteMany to enable live migration. Because you know the values your system can support, you specify Block storage instead of the default, Filesystem.

Example DataVolume definition

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: <datavolume> 1
spec:
  source:
    pvc: 2
      namespace: "<source_namespace>" 3
      name: "<my_vm_disk>" 4
  pvc: 5
    accessModes: 6
      - ReadWriteMany
    resources:
      requests:
        storage: 2Gi 7
    volumeMode: Block 8
    storageClassName: <storage_class> 9

1
The name of the new data volume.
2
In the source section, pvc indicates that the source of the import is an existing persistent volume claim (PVC).
3
The namespace where the source PVC exists.
4
The name of the source PVC.
5
Indicates allocation using the PVC API.
6
accessModes is required when using the PVC API.
7
Specifies the amount of space you are requesting for your data volume.
8
Specifies that the destination is a block PVC.
9
Optionally, specify the storage class. If the storage class is not specified, the system default storage class is used.
Important

When you explicitly allocate a data volume by using the PVC API and you are not using volumeMode: Block, consider file system overhead.

File system overhead is the amount of space required by the file system to maintain its metadata. The amount of space required for file system metadata is file system dependent. Failing to account for file system overhead in your storage capacity request can result in an underlying persistent volume claim (PVC) that is not large enough to accommodate your virtual machine disk.

If you use the storage API, the CDI will factor in file system overhead and request a larger persistent volume claim (PVC) to ensure that your allocation request is successful.

10.19.2.4. Customizing the storage profile

You can specify default parameters by editing the StorageProfile object for the provisioner’s storage class. These default parameters only apply to the persistent volume claim (PVC) if they are not configured in the DataVolume object.

An empty status section in a storage profile indicates that a storage provisioner is not recognized by the Containerized Data Importer (CDI). Customizing a storage profile is necessary if you have a storage provisioner that is not recognized by the CDI. In this case, the administrator sets appropriate values in the storage profile to ensure successful allocations.

Warning

If you create a data volume that omits YAML attributes, and those attributes are also not defined in the storage profile, then the requested storage is not allocated and the underlying persistent volume claim (PVC) is not created.

Prerequisites

  • Ensure that your planned configuration is supported by the storage class and its provider. Specifying an incompatible configuration in a storage profile causes volume provisioning to fail.

Procedure

  1. Edit the storage profile. In this example, the provisioner is not recognized by CDI:

    $ oc edit -n openshift-cnv storageprofile <storage_class>

    Example storage profile

    apiVersion: cdi.kubevirt.io/v1beta1
    kind: StorageProfile
    metadata:
      name: <unknown_provisioner_class>
    # ...
    spec: {}
    status:
      provisioner: <unknown_provisioner>
      storageClass: <unknown_provisioner_class>

  2. Provide the needed attribute values in the storage profile:

    Example storage profile

    apiVersion: cdi.kubevirt.io/v1beta1
    kind: StorageProfile
    metadata:
      name: <unknown_provisioner_class>
    # ...
    spec:
      claimPropertySets:
      - accessModes:
        - ReadWriteOnce 1
        volumeMode:
          Filesystem 2
    status:
      provisioner: <unknown_provisioner>
      storageClass: <unknown_provisioner_class>

    1
    The accessModes that you select.
    2
    The volumeMode that you select.

    After you save your changes, the selected values appear in the storage profile status element.

10.19.2.4.1. Setting a default cloning strategy using a storage profile

You can use storage profiles to set a default cloning method, known as a cloning strategy, for a storage class. Setting cloning strategies can be helpful, for example, if your storage vendor only supports certain cloning methods. It also allows you to select a method that limits resource usage or maximizes performance.

Cloning strategies can be specified by setting the cloneStrategy attribute in a storage profile to one of these values:

  • snapshot is used by default when snapshots are configured. This cloning strategy uses a temporary volume snapshot to clone the volume. The storage provisioner must support Container Storage Interface (CSI) snapshots.
  • copy uses a source pod and a target pod to copy data from the source volume to the target volume. Host-assisted cloning is the least efficient method of cloning.
  • csi-clone uses the CSI clone API to efficiently clone an existing volume without using an interim volume snapshot. Unlike snapshot or copy, which are used by default if no storage profile is defined, CSI volume cloning is only used when you specify it in the StorageProfile object for the provisioner’s storage class.
Note

You can also set clone strategies using the CLI without modifying the default claimPropertySets in your YAML spec section.
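
For example, a minimal sketch of setting the copy strategy directly from the CLI, assuming a storage profile named <provisioner_class>, might look like the following:

$ oc patch storageprofile <provisioner_class> --type merge -p '{"spec": {"cloneStrategy": "copy"}}'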

Example storage profile

apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
  name: <provisioner_class>
# ...
spec:
  claimPropertySets:
  - accessModes:
    - ReadWriteOnce 1
    volumeMode:
      Filesystem 2
  cloneStrategy: csi-clone 3
status:
  provisioner: <provisioner>
  storageClass: <provisioner_class>

1
Specify the access mode.
2
Specify the volume mode.
3
Specify the default cloning strategy.

10.19.2.5. Additional resources

10.19.3. Reserving PVC space for file system overhead

By default, OpenShift Virtualization reserves space for file system overhead data in persistent volume claims (PVCs) that use the Filesystem volume mode. You can set the percentage of space to reserve for this purpose both globally and for specific storage classes.

10.19.3.1. How file system overhead affects space for virtual machine disks

When you add a virtual machine disk to a persistent volume claim (PVC) that uses the Filesystem volume mode, you must ensure that there is enough space on the PVC for:

  • The virtual machine disk.
  • The space reserved for file system overhead, such as metadata.

By default, OpenShift Virtualization reserves 5.5% of the PVC space for overhead, reducing the space available for virtual machine disks by that amount.
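For example, with the default value, a 10 Gi Filesystem-mode PVC reserves approximately 0.55 Gi for overhead, leaving roughly 9.45 Gi available for the virtual machine disk.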

You can configure a different overhead value by editing the HCO object. You can change the value globally and you can specify values for specific storage classes.

10.19.3.2. Overriding the default file system overhead value

Change the amount of persistent volume claim (PVC) space that OpenShift Virtualization reserves for file system overhead by editing the spec.filesystemOverhead attribute of the HCO object.

Prerequisites

  • Install the OpenShift CLI (oc).

Procedure

  1. Open the HCO object for editing by running the following command:

    $ oc edit hco -n openshift-cnv kubevirt-hyperconverged
  2. Edit the spec.filesystemOverhead fields, populating them with your chosen values:

    # ...
    spec:
      filesystemOverhead:
        global: "<new_global_value>" 1
        storageClass:
          <storage_class_name>: "<new_value_for_this_storage_class>" 2
    1
    The default file system overhead percentage used for any storage classes that do not already have a set value. For example, global: "0.07" reserves 7% of the PVC for file system overhead.
    2
    The file system overhead percentage for the specified storage class. For example, mystorageclass: "0.04" changes the default overhead value for PVCs in the mystorageclass storage class to 4%.
  3. Save and exit the editor to update the HCO object.

Verification

  • View the CDIConfig status and verify your changes by running one of the following commands:

    To generally verify changes to CDIConfig:

    $ oc get cdiconfig -o yaml

    To view your specific changes to CDIConfig:

    $ oc get cdiconfig -o jsonpath='{.items..status.filesystemOverhead}'

10.19.4. Configuring CDI to work with namespaces that have a compute resource quota

You can use the Containerized Data Importer (CDI) to import, upload, and clone virtual machine disks into namespaces that are subject to CPU and memory resource restrictions.

10.19.4.1. About CPU and memory quotas in a namespace

A resource quota, defined by the ResourceQuota object, imposes restrictions on a namespace that limit the total amount of compute resources that can be consumed by resources within that namespace.

The HyperConverged custom resource (CR) defines the user configuration for the Containerized Data Importer (CDI). The CPU and memory request and limit values are set to a default value of 0. This ensures that pods created by CDI that do not specify compute resource requirements are given the default values and are allowed to run in a namespace that is restricted with a quota.
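
For reference, the following is a minimal sketch of a namespace resource quota that constrains compute resources in this way; the name, namespace, and values are placeholders:

Example ResourceQuota object

apiVersion: v1
kind: ResourceQuota
metadata:
  name: example-quota
  namespace: example-namespace
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi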

10.19.4.2. Overriding CPU and memory defaults

Modify the default settings for CPU and memory requests and limits for your use case by adding the spec.resourceRequirements.storageWorkloads stanza to the HyperConverged custom resource (CR).

Prerequisites

  • Install the OpenShift CLI (oc).

Procedure

  1. Edit the HyperConverged CR by running the following command:

    $ oc edit hco -n openshift-cnv kubevirt-hyperconverged
  2. Add the spec.resourceRequirements.storageWorkloads stanza to the CR, setting the values based on your use case. For example:

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
    spec:
      resourceRequirements:
        storageWorkloads:
          limits:
            cpu: "500m"
            memory: "2Gi"
          requests:
            cpu: "250m"
            memory: "1Gi"
  3. Save and exit the editor to update the HyperConverged CR.

10.19.4.3. Additional resources

10.19.5. Managing data volume annotations

Data volume (DV) annotations allow you to manage pod behavior. You can add one or more annotations to a data volume, which then propagates to the created importer pods.

10.19.5.1. Example: Data volume annotations

This example shows how you can configure data volume (DV) annotations to control which network the importer pod uses. The v1.multus-cni.io/default-network: bridge-network annotation causes the pod to use the multus network named bridge-network as its default network. If you want the importer pod to use both the default network from the cluster and the secondary multus network, use the k8s.v1.cni.cncf.io/networks: <network_name> annotation.

Multus network annotation example

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: dv-ann
  annotations:
      v1.multus-cni.io/default-network: bridge-network 1
spec:
  source:
      http:
         url: "example.exampleurl.com"
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi

1
Multus network annotation
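
If you instead want the importer pod to attach to a secondary multus network in addition to the cluster default network, a sketch of the data volume, using the placeholder network name bridge-network, might look like the following:

Example secondary network annotation

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: dv-ann-secondary
  annotations:
    k8s.v1.cni.cncf.io/networks: bridge-network
spec:
  source:
    http:
      url: "example.exampleurl.com"
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi

With this annotation, the importer pod keeps the cluster default network and additionally attaches to bridge-network.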

10.19.6. Using preallocation for data volumes

The Containerized Data Importer can preallocate disk space to improve write performance when creating data volumes.

You can enable preallocation for specific data volumes.

10.19.6.1. About preallocation

The Containerized Data Importer (CDI) can use the QEMU preallocate mode for data volumes to improve write performance. You can use preallocation mode for importing and uploading operations and when creating blank data volumes.

If preallocation is enabled, CDI uses the most suitable preallocation method for the underlying file system and device type:

fallocate
If the file system supports it, CDI uses the operating system’s fallocate call to preallocate space by using the posix_fallocate function, which allocates blocks and marks them as uninitialized.
full
If fallocate mode cannot be used, full mode allocates space for the image by writing data to the underlying storage. Depending on the storage location, all the empty allocated space might be zeroed.

10.19.6.2. Enabling preallocation for a data volume

You can enable preallocation for specific data volumes by including the spec.preallocation field in the data volume manifest. You can enable preallocation mode by using either the web console or the OpenShift CLI (oc).

Preallocation mode is supported for all CDI source types.

Procedure

  • Specify the spec.preallocation field in the data volume manifest:

    apiVersion: cdi.kubevirt.io/v1beta1
    kind: DataVolume
    metadata:
      name: preallocated-datavolume
    spec:
      source: 1
        ...
      pvc:
        ...
      preallocation: true 2
    1
    All CDI source types support preallocation; however, preallocation is ignored for cloning operations.
    2
    The preallocation field is a boolean that defaults to false.

10.19.7. Uploading local disk images by using the web console

You can upload a locally stored disk image file by using the web console.

10.19.7.1. Prerequisites

10.19.7.2. CDI supported operations matrix

This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.

Content types      HTTP        HTTPS       HTTP basic auth   Registry    Upload

KubeVirt (QCOW2)   ✓ QCOW2     ✓ QCOW2**   ✓ QCOW2           ✓ QCOW2*    ✓ QCOW2*
                   ✓ GZ*       ✓ GZ*       ✓ GZ*             □ GZ        ✓ GZ*
                   ✓ XZ*       ✓ XZ*       ✓ XZ*             □ XZ        ✓ XZ*

KubeVirt (RAW)     ✓ RAW       ✓ RAW       ✓ RAW             ✓ RAW*      ✓ RAW*
                   ✓ GZ        ✓ GZ        ✓ GZ              □ GZ        ✓ GZ*
                   ✓ XZ        ✓ XZ        ✓ XZ              □ XZ        ✓ XZ*

✓ Supported operation

□ Unsupported operation

* Requires scratch space

** Requires scratch space if a custom certificate authority is required

10.19.7.3. Uploading an image file using the web console

Use the web console to upload an image file to a new persistent volume claim (PVC). You can later use this PVC to attach the image to new virtual machines.

Prerequisites

  • You must have one of the following:

    • A raw virtual machine image file in either ISO or IMG format.
    • A virtual machine image file in QCOW2 format.
  • For best results, compress your image file according to the following guidelines before you upload it:

    • Compress a raw image file by using xz or gzip.

      Note

      Using a compressed raw image file results in the most efficient upload.

    • Compress a QCOW2 image file by using the method that is recommended for your client:

      • If you use a Linux client, sparsify the QCOW2 file by using the virt-sparsify tool.
      • If you use a Windows client, compress the QCOW2 file by using xz or gzip.

Procedure

  1. From the side menu of the web console, click Storage Persistent Volume Claims.
  2. Click the Create Persistent Volume Claim drop-down list to expand it.
  3. Click With Data Upload Form to open the Upload Data to Persistent Volume Claim page.
  4. Click Browse to open the file manager and select the image that you want to upload, or drag the file into the Drag a file here or browse to upload field.
  5. Optional: Set this image as the default image for a specific operating system.

    1. Select the Attach this data to a virtual machine operating system check box.
    2. Select an operating system from the list.
  6. The Persistent Volume Claim Name field is automatically filled with a unique name and cannot be edited. Take note of the name assigned to the PVC so that you can identify it later, if necessary.
  7. Select a storage class from the Storage Class list.
  8. In the Size field, enter the size value for the PVC. Select the corresponding unit of measurement from the drop-down list.

    Warning

    The PVC size must be larger than the size of the uncompressed virtual disk.

  9. Select an Access Mode that matches the storage class that you selected.
  10. Click Upload.

10.19.7.4. Additional resources

10.19.8. Uploading local disk images by using the virtctl tool

You can upload a locally stored disk image to a new or existing persistent volume claim (PVC) by using the virtctl command-line utility.

10.19.8.1. Prerequisites

10.19.8.2. About data volumes

DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). You can create a data volume as either a standalone resource or by using the dataVolumeTemplate field in the virtual machine (VM) specification.

Note
  • VM disk PVCs that are prepared by using standalone data volumes maintain an independent lifecycle from the VM. If you use the dataVolumeTemplate field in the VM specification to prepare the PVC, the PVC shares the same lifecycle as the VM.

10.19.8.3. Creating an upload data volume

You can manually create a data volume with an upload data source to use for uploading local disk images.

Procedure

  1. Create a data volume configuration that specifies spec.source.upload: {}:

    apiVersion: cdi.kubevirt.io/v1beta1
    kind: DataVolume
    metadata:
      name: <upload-datavolume> 1
    spec:
      source:
        upload: {}
      pvc:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: <2Gi> 2
    1
    The name of the data volume.
    2
    The size of the data volume. Ensure that this value is greater than or equal to the size of the disk that you upload.
  2. Create the data volume by running the following command:

    $ oc create -f <upload-datavolume>.yaml

10.19.8.4. Uploading a local disk image to a data volume

You can use the virtctl CLI utility to upload a local disk image from a client machine to a data volume (DV) in your cluster. You can use a DV that already exists in your cluster or create a new DV during this procedure.

Note

After you upload a local disk image, you can add it to a virtual machine.

Prerequisites

  • You must have one of the following:

    • A raw virtual machine image file in either ISO or IMG format.
    • A virtual machine image file in QCOW2 format.
  • For best results, compress your image file according to the following guidelines before you upload it:

    • Compress a raw image file by using xz or gzip.

      Note

      Using a compressed raw image file results in the most efficient upload.

    • Compress a QCOW2 image file by using the method that is recommended for your client:

      • If you use a Linux client, sparsify the QCOW2 file by using the virt-sparsify tool.
      • If you use a Windows client, compress the QCOW2 file by using xz or gzip.
  • The kubevirt-virtctl package must be installed on the client machine.
  • The client machine must be configured to trust the OpenShift Container Platform router’s certificate.

Procedure

  1. Identify the following items:

    • The name of the upload data volume that you want to use. If this data volume does not exist, it is created automatically.
    • The size of the data volume, if you want it to be created during the upload procedure. The size must be greater than or equal to the size of the disk image.
    • The file location of the virtual machine disk image that you want to upload.
  2. Upload the disk image by running the virtctl image-upload command. Specify the parameters that you identified in the previous step. For example:

    $ virtctl image-upload dv <datavolume_name> \ 1
    --size=<datavolume_size> \ 2
    --image-path=</path/to/image> 3
    1
    The name of the data volume.
    2
    The size of the data volume. For example: --size=500Mi, --size=1G
    3
    The file path of the virtual machine disk image.
    Note
    • If you do not want to create a new data volume, omit the --size parameter and include the --no-create flag.
    • When uploading a disk image to a PVC, the PVC size must be larger than the size of the uncompressed virtual disk.
    • To allow insecure server connections when using HTTPS, use the --insecure parameter. Be aware that when you use the --insecure flag, the authenticity of the upload endpoint is not verified.
  3. Optional. To verify that a data volume was created, view all data volumes by running the following command:

    $ oc get dvs

10.19.8.5. CDI supported operations matrix

This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.

Content types      HTTP        HTTPS       HTTP basic auth   Registry    Upload

KubeVirt (QCOW2)   ✓ QCOW2     ✓ QCOW2**   ✓ QCOW2           ✓ QCOW2*    ✓ QCOW2*
                   ✓ GZ*       ✓ GZ*       ✓ GZ*             □ GZ        ✓ GZ*
                   ✓ XZ*       ✓ XZ*       ✓ XZ*             □ XZ        ✓ XZ*

KubeVirt (RAW)     ✓ RAW       ✓ RAW       ✓ RAW             ✓ RAW*      ✓ RAW*
                   ✓ GZ        ✓ GZ        ✓ GZ              □ GZ        ✓ GZ*
                   ✓ XZ        ✓ XZ        ✓ XZ              □ XZ        ✓ XZ*

✓ Supported operation

□ Unsupported operation

* Requires scratch space

** Requires scratch space if a custom certificate authority is required

10.19.8.6. Additional resources

10.19.9. Uploading a local disk image to a block storage persistent volume claim

You can upload a local disk image into a block persistent volume claim (PVC) by using the virtctl command-line utility.

In this workflow, you create a local block device to use as a persistent volume, associate this block volume with an upload data volume, and use virtctl to upload the local disk image into the PVC.

10.19.9.1. Prerequisites

10.19.9.2. About data volumes

DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). You can create a data volume as either a standalone resource or by using the dataVolumeTemplate field in the virtual machine (VM) specification.

Note
  • VM disk PVCs that are prepared by using standalone data volumes maintain an independent lifecycle from the VM. If you use the dataVolumeTemplate field in the VM specification to prepare the PVC, the PVC shares the same lifecycle as the VM.

10.19.9.3. About block persistent volumes

A block persistent volume (PV) is a PV that is backed by a raw block device. These volumes do not have a file system and can provide performance benefits for virtual machines by reducing overhead.

Raw block volumes are provisioned by specifying volumeMode: Block in the PV and persistent volume claim (PVC) specification.
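
For example, a minimal sketch of a PVC that requests a raw block volume, with a placeholder name, storage class, and size, might look like the following:

Example block volume PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-block-pvc
spec:
  volumeMode: Block
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi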

10.19.9.4. Creating a local block persistent volume

If you intend to import a virtual machine image into block storage with a data volume, you must have an available local block persistent volume.

Create a local block persistent volume (PV) on a node by populating a file and mounting it as a loop device. You can then reference this loop device in a PV manifest as a Block volume and use it as a block device for a virtual machine image.

Procedure

  1. Log in as root to the node on which to create the local PV. This procedure uses node01 for its examples.
  2. Create a file and populate it with null characters so that it can be used as a block device. The following example creates a file loop10 with a size of 2 GB (twenty 100 MB blocks):

    $ dd if=/dev/zero of=<loop10> bs=100M count=20
  3. Mount the loop10 file as a loop device.

    $ losetup </dev/loop10> <loop10> 1 2
    1
    File path where the loop device is mounted.
    2
    The file created in the previous step to be mounted as the loop device.
  4. Create a PersistentVolume manifest that references the mounted loop device.

    kind: PersistentVolume
    apiVersion: v1
    metadata:
      name: <local-block-pv10>
      annotations:
    spec:
      local:
        path: </dev/loop10> 1
      capacity:
        storage: <2Gi>
      volumeMode: Block 2
      storageClassName: local 3
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Delete
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - <node01> 4
    1
    The path of the loop device on the node.
    2
    Specifies it is a block PV.
    3
    Optional: Set a storage class for the PV. If you omit it, the cluster default is used.
    4
    The node on which the block device was mounted.
  5. Create the block PV.

    # oc create -f <local-block-pv10.yaml> 1
    1
    The file name of the persistent volume created in the previous step.

10.19.9.5. Creating an upload data volume

You can manually create a data volume with an upload data source to use for uploading local disk images.

Procedure

  1. Create a data volume configuration that specifies spec.source.upload: {}:

    apiVersion: cdi.kubevirt.io/v1beta1
    kind: DataVolume
    metadata:
      name: <upload-datavolume> 1
    spec:
      source:
        upload: {}
      pvc:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: <2Gi> 2
    1
    The name of the data volume.
    2
    The size of the data volume. Ensure that this value is greater than or equal to the size of the disk that you upload.
  2. Create the data volume by running the following command:

    $ oc create -f <upload-datavolume>.yaml

10.19.9.6. Uploading a local disk image to a data volume

You can use the virtctl CLI utility to upload a local disk image from a client machine to a data volume (DV) in your cluster. You can use a DV that already exists in your cluster or create a new DV during this procedure.

Note

After you upload a local disk image, you can add it to a virtual machine.

Prerequisites

  • You must have one of the following:

    • A raw virtual machine image file in either ISO or IMG format.
    • A virtual machine image file in QCOW2 format.
  • For best results, compress your image file according to the following guidelines before you upload it:

    • Compress a raw image file by using xz or gzip.

      Note

      Using a compressed raw image file results in the most efficient upload.

    • Compress a QCOW2 image file by using the method that is recommended for your client:

      • If you use a Linux client, sparsify the QCOW2 file by using the virt-sparsify tool.
      • If you use a Windows client, compress the QCOW2 file by using xz or gzip.
  • The kubevirt-virtctl package must be installed on the client machine.
  • The client machine must be configured to trust the OpenShift Container Platform router’s certificate.

Procedure

  1. Identify the following items:

    • The name of the upload data volume that you want to use. If this data volume does not exist, it is created automatically.
    • The size of the data volume, if you want it to be created during the upload procedure. The size must be greater than or equal to the size of the disk image.
    • The file location of the virtual machine disk image that you want to upload.
  2. Upload the disk image by running the virtctl image-upload command. Specify the parameters that you identified in the previous step. For example:

    $ virtctl image-upload dv <datavolume_name> \ 1
    --size=<datavolume_size> \ 2
    --image-path=</path/to/image> 3
    1
    The name of the data volume.
    2
    The size of the data volume. For example: --size=500Mi, --size=1G
    3
    The file path of the virtual machine disk image.
    Note
    • If you do not want to create a new data volume, omit the --size parameter and include the --no-create flag.
    • When uploading a disk image to a PVC, the PVC size must be larger than the size of the uncompressed virtual disk.
    • To allow insecure server connections when using HTTPS, use the --insecure parameter. Be aware that when you use the --insecure flag, the authenticity of the upload endpoint is not verified.
  3. Optional. To verify that a data volume was created, view all data volumes by running the following command:

    $ oc get dvs

10.19.9.7. CDI supported operations matrix

This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.

Content types      HTTP        HTTPS       HTTP basic auth   Registry    Upload

KubeVirt (QCOW2)   ✓ QCOW2     ✓ QCOW2**   ✓ QCOW2           ✓ QCOW2*    ✓ QCOW2*
                   ✓ GZ*       ✓ GZ*       ✓ GZ*             □ GZ        ✓ GZ*
                   ✓ XZ*       ✓ XZ*       ✓ XZ*             □ XZ        ✓ XZ*

KubeVirt (RAW)     ✓ RAW       ✓ RAW       ✓ RAW             ✓ RAW*      ✓ RAW*
                   ✓ GZ        ✓ GZ        ✓ GZ              □ GZ        ✓ GZ*
                   ✓ XZ        ✓ XZ        ✓ XZ              □ XZ        ✓ XZ*

✓ Supported operation

□ Unsupported operation

* Requires scratch space

** Requires scratch space if a custom certificate authority is required

10.19.9.8. Additional resources

10.19.10. Managing virtual machine snapshots

You can create and delete virtual machine (VM) snapshots, whether the VM is powered off (offline) or on (online). You can only restore to a powered off (offline) VM. OpenShift Virtualization supports VM snapshots on the following:

  • Red Hat OpenShift Data Foundation
  • Any other cloud storage provider with a Container Storage Interface (CSI) driver that supports the Kubernetes Volume Snapshot API

Online snapshots have a default time deadline of five minutes (5m) that can be changed, if needed.

Important

Online snapshots are supported for virtual machines that have hot-plugged virtual disks. However, hot-plugged disks that are not in the virtual machine specification are not included in the snapshot.

Note

To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent.

The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI.
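
For example, after a snapshot completes, you can inspect these indications from the CLI; this is a minimal sketch that assumes the indications are reported in the status section of the VirtualMachineSnapshot object:

$ oc get vmsnapshot <vmsnapshot_name> -o yaml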

10.19.10.1. About virtual machine snapshots

A snapshot represents the state and data of a virtual machine (VM) at a specific point in time. You can use a snapshot to restore an existing VM to a previous state (represented by the snapshot) for backup and disaster recovery or to rapidly roll back to a previous development version.

A VM snapshot is created from a VM that is powered off (Stopped state) or powered on (Running state).

When taking a snapshot of a running VM, the controller checks that the QEMU guest agent is installed and running. If so, it freezes the VM file system before taking the snapshot, and thaws the file system after the snapshot is taken.

The snapshot stores a copy of each Container Storage Interface (CSI) volume attached to the VM and a copy of the VM specification and metadata. Snapshots cannot be changed after creation.

With the VM snapshots feature, cluster administrators and application developers can:

  • Create a new snapshot
  • List all snapshots attached to a specific VM
  • Restore a VM from a snapshot
  • Delete an existing VM snapshot
10.19.10.1.1. Virtual machine snapshot controller and custom resource definitions (CRDs)

The VM snapshot feature introduces three new API objects defined as CRDs for managing snapshots:

  • VirtualMachineSnapshot: Represents a user request to create a snapshot. It contains information about the current state of the VM.
  • VirtualMachineSnapshotContent: Represents a provisioned resource on the cluster (a snapshot). It is created by the VM snapshot controller and contains references to all resources required to restore the VM.
  • VirtualMachineRestore: Represents a user request to restore a VM from a snapshot.

The VM snapshot controller binds a VirtualMachineSnapshotContent object with the VirtualMachineSnapshot object for which it was created, with a one-to-one mapping.

10.19.10.2. Installing QEMU guest agent on a Linux virtual machine

The qemu-guest-agent is widely available and is installed by default in Red Hat Enterprise Linux (RHEL) virtual machines (VMs). Install the agent and start the service.

Note

To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent.

The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM’s file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI.

Procedure

  1. Access the virtual machine command line through one of the consoles or by SSH.
  2. Install the QEMU guest agent on the virtual machine:

    $ yum install -y qemu-guest-agent
  3. Ensure the service is persistent and start it:

    $ systemctl enable --now qemu-guest-agent

Verification

  1. Run the following command to verify that AgentConnected is listed in the VM spec:

    $ oc get vm <vm_name>
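
    Alternatively, a minimal sketch for checking the condition directly, assuming that the AgentConnected condition is reported in the VirtualMachineInstance status, is the following jsonpath query:

    $ oc get vmi <vm_name> -o jsonpath='{.status.conditions[?(@.type=="AgentConnected")].status}'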

10.19.10.3. Installing QEMU guest agent on a Windows virtual machine

For Windows virtual machines, the QEMU guest agent is included in the VirtIO drivers. Install the drivers on an existing or a new Windows installation.

Note

To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent.

The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM’s file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI.

Procedure

  1. In the Windows Guest Operating System (OS), use the File Explorer to navigate to the guest-agent directory in the virtio-win CD drive.
  2. Run the qemu-ga-x86_64.msi installer.

Verification

  1. Run the following command to verify that the output contains the QEMU Guest Agent:

    $ net start
10.19.10.3.1. Installing VirtIO drivers on an existing Windows virtual machine

Install the VirtIO drivers from the attached SATA CD drive to an existing Windows virtual machine.

Note

This procedure uses a generic approach to adding drivers to Windows. The process might differ slightly between versions of Windows. See the installation documentation for your version of Windows for specific installation steps.

Procedure

  1. Start the virtual machine and connect to a graphical console.
  2. Log in to a Windows user session.
  3. Open Device Manager and expand Other devices to list any Unknown device.

    1. Open the Device Properties to identify the unknown device. Right-click the device and select Properties.
    2. Click the Details tab and select Hardware Ids in the Property list.
    3. Compare the Value for the Hardware Ids with the supported VirtIO drivers.
  4. Right-click the device and select Update Driver Software.
  5. Click Browse my computer for driver software and browse to the attached SATA CD drive, where the VirtIO drivers are located. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture.
  6. Click Next to install the driver.
  7. Repeat this process for all the necessary VirtIO drivers.
  8. After the driver installs, click Close to close the window.
  9. Reboot the virtual machine to complete the driver installation.

10.19.10.4. Creating a virtual machine snapshot in the web console

You can create a virtual machine (VM) snapshot by using the web console.

Note

To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent.

The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM’s file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI.

The VM snapshot only includes disks that meet the following requirements:

  • Must be either a data volume or a persistent volume claim
  • Must belong to a storage class that supports Container Storage Interface (CSI) volume snapshots

Procedure

  1. Click Virtualization VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. If the virtual machine is running, click Actions Stop to power it down.
  4. Click the Snapshots tab and then click Take Snapshot.
  5. Fill in the Snapshot Name and optional Description fields.
  6. Expand Disks included in this Snapshot to see the storage volumes to be included in the snapshot.
  7. If your VM has disks that cannot be included in the snapshot and you still wish to proceed, select the I am aware of this warning and wish to proceed checkbox.
  8. Click Save.

10.19.10.5. Creating a virtual machine snapshot in the CLI

You can create a virtual machine (VM) snapshot for an offline or online VM by creating a VirtualMachineSnapshot object. KubeVirt coordinates with the QEMU guest agent to create a snapshot of an online VM.

Note

To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent.

The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM’s file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI.

Prerequisites

  • Ensure that the persistent volume claims (PVCs) are in a storage class that supports Container Storage Interface (CSI) volume snapshots.
  • Install the OpenShift CLI (oc).
  • Optional: Power down the VM for which you want to create a snapshot.

Procedure

  1. Create a YAML file to define a VirtualMachineSnapshot object that specifies the name of the new VirtualMachineSnapshot and the name of the source VM.

    For example:

    apiVersion: snapshot.kubevirt.io/v1beta1
    kind: VirtualMachineSnapshot
    metadata:
      name: my-vmsnapshot 1
    spec:
      source:
        apiGroup: kubevirt.io
        kind: VirtualMachine
        name: my-vm 2
    1
    The name of the new VirtualMachineSnapshot object.
    2
    The name of the source VM.
  2. Create the VirtualMachineSnapshot resource. The snapshot controller creates a VirtualMachineSnapshotContent object, binds it to the VirtualMachineSnapshot and updates the status and readyToUse fields of the VirtualMachineSnapshot object.

    $ oc create -f <my-vmsnapshot>.yaml
  3. Optional: If you are taking an online snapshot, you can use the wait command and monitor the status of the snapshot:

    1. Enter the following command:

      $ oc wait vmsnapshot my-vmsnapshot --for condition=Ready
    2. Verify the status of the snapshot:

      • InProgress - The online snapshot operation is still in progress.
      • Succeeded - The online snapshot operation completed successfully.
      • Failed - The online snapshot operation failed.

        Note

        Online snapshots have a default time deadline of five minutes (5m). If the snapshot does not complete successfully in five minutes, the status is set to failed. The file system is then thawed and the VM is unfrozen, but the status remains failed until you delete the failed snapshot.

        To change the default time deadline, add the failureDeadline field to the VM snapshot spec and specify the time, in minutes (m) or seconds (s), that the snapshot operation can run before it times out, as shown in the example after this note.

        To set no deadline, you can specify 0, though this is generally not recommended, as it can result in an unresponsive VM.

        If you do not specify a unit of time such as m or s, the default is seconds (s).
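        For example, a minimal sketch of a VirtualMachineSnapshot manifest with a custom deadline, assuming the same my-vm source used earlier in this procedure:

        apiVersion: snapshot.kubevirt.io/v1beta1
        kind: VirtualMachineSnapshot
        metadata:
          name: my-vmsnapshot
        spec:
          failureDeadline: 3m # fail the online snapshot if it does not complete within 3 minutes
          source:
            apiGroup: kubevirt.io
            kind: VirtualMachine
            name: my-vm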

Verification

  1. Verify that the VirtualMachineSnapshot object is created and bound with VirtualMachineSnapshotContent. The readyToUse flag must be set to true.

    $ oc describe vmsnapshot <my-vmsnapshot>

    Example output

    apiVersion: snapshot.kubevirt.io/v1beta1
    kind: VirtualMachineSnapshot
    metadata:
      creationTimestamp: "2020-09-30T14:41:51Z"
      finalizers:
      - snapshot.kubevirt.io/vmsnapshot-protection
      generation: 5
      name: mysnap
      namespace: default
      resourceVersion: "3897"
      selfLink: /apis/snapshot.kubevirt.io/v1beta1/namespaces/default/virtualmachinesnapshots/my-vmsnapshot
      uid: 28eedf08-5d6a-42c1-969c-2eda58e2a78d
    spec:
      source:
        apiGroup: kubevirt.io
        kind: VirtualMachine
        name: my-vm
    status:
      conditions:
      - lastProbeTime: null
        lastTransitionTime: "2020-09-30T14:42:03Z"
        reason: Operation complete
        status: "False" 1
        type: Progressing
      - lastProbeTime: null
        lastTransitionTime: "2020-09-30T14:42:03Z"
        reason: Operation complete
        status: "True" 2
        type: Ready
      creationTime: "2020-09-30T14:42:03Z"
      readyToUse: true 3
      sourceUID: 355897f3-73a0-4ec4-83d3-3c2df9486f4f
      virtualMachineSnapshotContentName: vmsnapshot-content-28eedf08-5d6a-42c1-969c-2eda58e2a78d 4

    1
    The status field of the Progressing condition specifies if the snapshot is still being created.
    2
    The status field of the Ready condition specifies if the snapshot creation process is complete.
    3
    Specifies if the snapshot is ready to be used.
    4
    Specifies that the snapshot is bound to a VirtualMachineSnapshotContent object created by the snapshot controller.
  2. Check the spec:volumeBackups property of the VirtualMachineSnapshotContent resource to verify that the expected PVCs are included in the snapshot.
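    For example, assuming the VirtualMachineSnapshotContent name shown in the previous output, a command like the following displays the object, including its volumeBackups list (adjust the object name to match your snapshot):

    $ oc get virtualmachinesnapshotcontent vmsnapshot-content-28eedf08-5d6a-42c1-969c-2eda58e2a78d -o yaml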

10.19.10.6. Verifying online snapshot creation with snapshot indications

Snapshot indications are contextual information about online virtual machine (VM) snapshot operations. Indications are not available for offline VM snapshot operations. Use indications to learn details about how an online snapshot was created.

Prerequisites

  • To view indications, you must have attempted to create an online VM snapshot using the CLI or the web console.

Procedure

  1. Display the output from the snapshot indications by doing one of the following:

    • For snapshots created with the CLI, view indicator output in the VirtualMachineSnapshot object YAML, in the status field.
    • For snapshots created using the web console, click VirtualMachineSnapshot > Status in the Snapshot details screen.
  2. Verify the status of your online VM snapshot:

    • Online indicates that the VM was running during online snapshot creation.
    • NoGuestAgent indicates that the QEMU guest agent was not running during online snapshot creation. The QEMU guest agent could not be used to freeze and thaw the file system, either because the QEMU guest agent was not installed or running or due to another error.
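For snapshots created with the CLI, the indications appear in the status.indications list of the VirtualMachineSnapshot object. As a hedged example, assuming the snapshot name used earlier in this chapter:

  $ oc get vmsnapshot my-vmsnapshot -o yaml

The relevant part of the output might look like the following (illustrative values):

  status:
    indications:
    - Online
    readyToUse: true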

10.19.10.7. Restoring a virtual machine from a snapshot in the web console

You can restore a virtual machine (VM) to a previous configuration represented by a snapshot in the web console.

Procedure

  1. Click Virtualization VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. If the virtual machine is running, click Actions Stop to power it down.
  4. Click the Snapshots tab. The page displays a list of snapshots associated with the virtual machine.
  5. Choose one of the following methods to restore a VM snapshot:

    1. For the snapshot that you want to use as the source to restore the VM, click Restore.
    2. Select a snapshot to open the Snapshot Details screen and click Actions Restore VirtualMachineSnapshot.
  6. In the confirmation pop-up window, click Restore to restore the VM to its previous configuration represented by the snapshot.

10.19.10.8. Restoring a virtual machine from a snapshot in the CLI

You can restore an existing virtual machine (VM) to a previous configuration by using a VM snapshot. You can only restore from an offline VM snapshot.

Prerequisites

  • Install the OpenShift CLI (oc).
  • Power down the VM you want to restore to a previous state.

Procedure

  1. Create a YAML file to define a VirtualMachineRestore object that specifies the name of the VM you want to restore and the name of the snapshot to be used as the source.

    For example:

    apiVersion: snapshot.kubevirt.io/v1beta1
    kind: VirtualMachineRestore
    metadata:
      name: my-vmrestore 1
    spec:
      target:
        apiGroup: kubevirt.io
        kind: VirtualMachine
        name: my-vm 2
      virtualMachineSnapshotName: my-vmsnapshot 3
    1
    The name of the new VirtualMachineRestore object.
    2
    The name of the target VM you want to restore.
    3
    The name of the VirtualMachineSnapshot object to be used as the source.
  2. Create the VirtualMachineRestore resource. The snapshot controller updates the status fields of the VirtualMachineRestore object and replaces the existing VM configuration with the snapshot content.

    $ oc create -f <my-vmrestore>.yaml

Verification

  • Verify that the VM is restored to the previous state represented by the snapshot. The complete flag must be set to true.

    $ oc get vmrestore <my-vmrestore>

    Example output

    apiVersion: snapshot.kubevirt.io/v1beta1
    kind: VirtualMachineRestore
    metadata:
      creationTimestamp: "2020-09-30T14:46:27Z"
      generation: 5
      name: my-vmrestore
      namespace: default
      ownerReferences:
      - apiVersion: kubevirt.io/v1
        blockOwnerDeletion: true
        controller: true
        kind: VirtualMachine
        name: my-vm
        uid: 355897f3-73a0-4ec4-83d3-3c2df9486f4f
      resourceVersion: "5512"
      selfLink: /apis/snapshot.kubevirt.io/v1beta1/namespaces/default/virtualmachinerestores/my-vmrestore
      uid: 71c679a8-136e-46b0-b9b5-f57175a6a041
    spec:
      target:
        apiGroup: kubevirt.io
        kind: VirtualMachine
        name: my-vm
      virtualMachineSnapshotName: my-vmsnapshot
    status:
      complete: true 1
      conditions:
      - lastProbeTime: null
        lastTransitionTime: "2020-09-30T14:46:28Z"
        reason: Operation complete
        status: "False" 2
        type: Progressing
      - lastProbeTime: null
        lastTransitionTime: "2020-09-30T14:46:28Z"
        reason: Operation complete
        status: "True" 3
        type: Ready
      deletedDataVolumes:
      - test-dv1
      restoreTime: "2020-09-30T14:46:28Z"
      restores:
      - dataVolumeName: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1
        persistentVolumeClaim: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1
        volumeName: datavolumedisk1
        volumeSnapshotName: vmsnapshot-28eedf08-5d6a-42c1-969c-2eda58e2a78d-volume-datavolumedisk1

    1
    Specifies if the process of restoring the VM to the state represented by the snapshot is complete.
    2
    The status field of the Progressing condition specifies if the VM is still being restored.
    3
    The status field of the Ready condition specifies if the VM restoration process is complete.

10.19.10.9. Deleting a virtual machine snapshot in the web console

You can delete an existing virtual machine snapshot by using the web console.

Procedure

  1. Click Virtualization VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. Click the Snapshots tab. The page displays a list of snapshots associated with the virtual machine.
  4. Click the Options menu kebab of the virtual machine snapshot that you want to delete and select Delete VirtualMachineSnapshot.
  5. In the confirmation pop-up window, click Delete to delete the snapshot.

10.19.10.10. Deleting a virtual machine snapshot in the CLI

You can delete an existing virtual machine (VM) snapshot by deleting the appropriate VirtualMachineSnapshot object.

Prerequisites

  • Install the OpenShift CLI (oc).

Procedure

  • Delete the VirtualMachineSnapshot object. The snapshot controller deletes the VirtualMachineSnapshot along with the associated VirtualMachineSnapshotContent object.

    $ oc delete vmsnapshot <my-vmsnapshot>

Verification

  • Verify that the snapshot is deleted and no longer attached to this VM:

    $ oc get vmsnapshot

10.19.11. Moving a local virtual machine disk to a different node

Virtual machines that use local volume storage can be moved so that they run on a specific node.

You might want to move the virtual machine to a specific node for the following reasons:

  • The current node has limitations to the local storage configuration.
  • The new node is better optimized for the workload of that virtual machine.

To move a virtual machine that uses local storage, you must clone the underlying volume by using a data volume. After the cloning operation is complete, you can edit the virtual machine configuration so that it uses the new data volume, or add the new data volume to another virtual machine.

Tip

When you enable preallocation globally, or for a single data volume, the Containerized Data Importer (CDI) preallocates disk space during cloning. Preallocation enhances write performance. For more information, see Using preallocation for data volumes.
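For a single data volume, preallocation is requested in the manifest. The following is a minimal sketch, assuming the cdi.kubevirt.io/v1beta1 API and the placeholder names used elsewhere in this chapter:

  apiVersion: cdi.kubevirt.io/v1beta1
  kind: DataVolume
  metadata:
    name: <clone-datavolume>
  spec:
    preallocation: true # preallocate disk space for this data volume during cloning
    source:
      pvc:
        name: "<source-vm-disk>"
        namespace: "<source-namespace>"
    pvc:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: <10Gi>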

Note

Users without the cluster-admin role require additional user permissions to clone volumes across namespaces.

10.19.11.1. Cloning a local volume to another node

You can move a virtual machine disk so that it runs on a specific node by cloning the underlying persistent volume claim (PVC).

To ensure the virtual machine disk is cloned to the correct node, you must either create a new persistent volume (PV) or identify one on the correct node. Apply a unique label to the PV so that it can be referenced by the data volume.

Note

The destination PV must be the same size or larger than the source PVC. If the destination PV is smaller than the source PVC, the cloning operation fails.

Prerequisites

  • The virtual machine must not be running. Power down the virtual machine before cloning the virtual machine disk.

Procedure

  1. Either create a new local PV on the node, or identify a local PV already on the node:

    • Create a local PV that includes the nodeAffinity.nodeSelectorTerms parameters. The following manifest creates a 10Gi local PV on node01.

      kind: PersistentVolume
      apiVersion: v1
      metadata:
        name: <destination-pv> 1
        annotations:
      spec:
        accessModes:
        - ReadWriteOnce
        capacity:
          storage: 10Gi 2
        local:
          path: /mnt/local-storage/local/disk1 3
        nodeAffinity:
          required:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - node01 4
        persistentVolumeReclaimPolicy: Delete
        storageClassName: local
        volumeMode: Filesystem
      1
      The name of the PV.
      2
      The size of the PV. You must allocate enough space, or the cloning operation fails. The size must be the same as or larger than the source PVC.
      3
      The mount path on the node.
      4
      The name of the node where you want to create the PV.
    • Identify a PV that already exists on the target node. You can identify the node where a PV is provisioned by viewing the nodeAffinity field in its configuration:

      $ oc get pv <destination-pv> -o yaml

      The following snippet shows that the PV is on node01:

      Example output

      # ...
      spec:
        nodeAffinity:
          required:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname 1
                operator: In
                values:
                - node01 2
      # ...

      1
      The kubernetes.io/hostname key uses the node hostname to select a node.
      2
      The hostname of the node.
  2. Add a unique label to the PV:

    $ oc label pv <destination-pv> node=node01
  3. Create a data volume manifest that references the following:

    • The PVC name and namespace of the virtual machine.
    • The label you applied to the PV in the previous step.
    • The size of the destination PV.

      apiVersion: cdi.kubevirt.io/v1beta1
      kind: DataVolume
      metadata:
        name: <clone-datavolume> 1
      spec:
        source:
          pvc:
            name: "<source-vm-disk>" 2
            namespace: "<source-namespace>" 3
        pvc:
          accessModes:
            - ReadWriteOnce
          selector:
            matchLabels:
              node: node01 4
          resources:
            requests:
              storage: <10Gi> 5
      1
      The name of the new data volume.
      2
      The name of the source PVC. If you do not know the PVC name, you can find it in the virtual machine configuration: spec.volumes.persistentVolumeClaim.claimName.
      3
      The namespace where the source PVC exists.
      4
      The label that you applied to the PV in the previous step.
      5
      The size of the destination PV.
  4. Start the cloning operation by applying the data volume manifest to your cluster:

    $ oc apply -f <clone-datavolume>.yaml

The data volume clones the PVC of the virtual machine into the PV on the specific node.
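After the clone completes, you can edit the VirtualMachine manifest so that the disk references the new data volume. A minimal sketch of the relevant part of the spec (the disk and volume names are illustrative):

  spec:
    template:
      spec:
        domain:
          devices:
            disks:
            - name: datavolumedisk1
              disk:
                bus: virtio
        volumes:
        - name: datavolumedisk1
          dataVolume:
            name: <clone-datavolume> # the data volume created in the previous step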

10.19.12. Expanding virtual storage by adding blank disk images

You can increase your storage capacity or create new data partitions by adding blank disk images to OpenShift Virtualization.

10.19.12.1. About data volumes

DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). You can create a data volume as either a standalone resource or by using the dataVolumeTemplate field in the virtual machine (VM) specification.

Note
  • VM disk PVCs that are prepared by using standalone data volumes maintain an independent lifecycle from the VM. If you use the dataVolumeTemplate field in the VM specification to prepare the PVC, the PVC shares the same lifecycle as the VM.

10.19.12.2. Creating a blank disk image with data volumes

You can create a new blank disk image in a persistent volume claim by customizing and deploying a data volume configuration file.

Prerequisites

  • At least one available persistent volume.
  • Install the OpenShift CLI (oc).

Procedure

  1. Edit the DataVolume manifest:

    apiVersion: cdi.kubevirt.io/v1beta1
    kind: DataVolume
    metadata:
      name: blank-image-datavolume
    spec:
      source:
          blank: {}
      pvc:
        storageClassName: "hostpath" 1
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 500Mi
    1
    Optional: If you do not specify a storage class, the default storage class is applied.
  2. Create the blank disk image by running the following command:

    $ oc create -f <blank-image-datavolume>.yaml
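As an optional check, you can watch the data volume until the blank image is provisioned; the phase typically reports Succeeded when the operation completes:

  $ oc get datavolume blank-image-datavolume -w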

10.19.13. Cloning a data volume using smart-cloning

Smart-cloning is a built-in feature of Red Hat OpenShift Data Foundation. Smart-cloning is faster and more efficient than host-assisted cloning.

You do not need to perform any action to enable smart-cloning, but you need to ensure your storage environment is compatible with smart-cloning to use this feature.

When you create a data volume with a persistent volume claim (PVC) source, you automatically initiate the cloning process. You always receive a clone of the data volume, whether or not your environment supports smart-cloning. However, you receive the performance benefits of smart-cloning only if your storage provider supports it.

10.19.13.1. About data volumes

DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). You can create a data volume as either a standalone resource or by using the dataVolumeTemplate field in the virtual machine (VM) specification.

Note
  • VM disk PVCs that are prepared by using standalone data volumes maintain an independent lifecycle from the VM. If you use the dataVolumeTemplate field in the VM specification to prepare the PVC, the PVC shares the same lifecycle as the VM.

10.19.13.2. About smart-cloning

When a data volume is smart-cloned, the following occurs:

  1. A snapshot of the source persistent volume claim (PVC) is created.
  2. A PVC is created from the snapshot.
  3. The snapshot is deleted.

10.19.13.3. Cloning a data volume

Prerequisites

For smart-cloning to occur, the following conditions are required:

  • Your storage provider must support snapshots.
  • The source and target PVCs must use the same storage class.
  • The source and target PVCs must share the same volume mode.
  • The VolumeSnapshotClass object must reference the storage class that is used by both the source and target PVCs, as sketched after this list.
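For reference, the following is a minimal sketch of a VolumeSnapshotClass; the name and driver values are illustrative, and the driver must match the CSI provisioner of the storage class that the source and target PVCs use:

  apiVersion: snapshot.storage.k8s.io/v1
  kind: VolumeSnapshotClass
  metadata:
    name: my-snapshot-class # illustrative name
  driver: example.csi.vendor.com # must match the provisioner of the PVCs' storage class
  deletionPolicy: Delete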

Procedure

To initiate cloning of a data volume:

  1. Create a YAML file for a DataVolume object that specifies the name of the new data volume and the name and namespace of the source PVC. In this example, because you specify the storage API, there is no need to specify accessModes or volumeMode. The optimal values will be calculated for you automatically.

    apiVersion: cdi.kubevirt.io/v1beta1
    kind: DataVolume
    metadata:
      name: <cloner-datavolume> 1
    spec:
      source:
        pvc:
          namespace: "<source-namespace>" 2
          name: "<my-favorite-vm-disk>" 3
      storage: 4
        resources:
          requests:
            storage: <2Gi> 5
    1
    The name of the new data volume.
    2
    The namespace where the source PVC exists.
    3
    The name of the source PVC.
    4
    Specifies allocation with the storage API.
    5
    The size of the new data volume.
  2. Start cloning the PVC by creating the data volume:

    $ oc create -f <cloner-datavolume>.yaml
    Note

    Data volumes prevent a virtual machine from starting before the PVC is prepared, so you can create a virtual machine that references the new data volume while the PVC is being cloned.

10.19.14. Hot plugging virtual disks

You can add or remove virtual disks without stopping your virtual machine (VM) or virtual machine instance (VMI).

10.19.14.1. About hot plugging virtual disks

When you hot plug a virtual disk, you attach a virtual disk to a virtual machine instance while the virtual machine is running.

When you hot unplug a virtual disk, you detach a virtual disk from a virtual machine instance while the virtual machine is running.

Only data volumes and persistent volume claims (PVCs) can be hot plugged and hot unplugged. You cannot hot plug or hot unplug container disks.

After you hot plug a virtual disk, it remains attached until you detach it, even if you restart the virtual machine.

In the web console, hot-plugged volumes are marked by a "persistent hotplug" label on the disk in the disk list (VirtualMachine Details Configuration Disks tab). To change a hot-plug volume to a persistent volume, click the Options menu kebab and select Make persistent. The "persistent hotplug" label is removed and the change is applied after the next reboot.

10.19.14.2. About virtio-scsi

In OpenShift Virtualization, each virtual machine (VM) has a virtio-scsi controller so that hot plugged disks can use a scsi bus. The virtio-scsi controller overcomes the limitations of virtio while retaining its performance advantages. It is highly scalable and supports hot plugging over 4 million disks.

Regular virtio is not available for hot plugged disks because it is not scalable: each virtio disk uses one of the limited PCI Express (PCIe) slots in the VM. PCIe slots are also used by other devices and must be reserved in advance, so slots might not be available on demand.

10.19.14.3. Hot plugging a virtual disk using the CLI

Hot plug virtual disks that you want to attach to a virtual machine instance (VMI) while a virtual machine is running.

Prerequisites

  • You must have a running virtual machine to hot plug a virtual disk.
  • You must have at least one data volume or persistent volume claim (PVC) available for hot plugging.

Procedure

  • Hot plug a virtual disk by running the following command:

    $ virtctl addvolume <virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC> \
    [--persist] [--serial=<label-name>]
    • Use the optional --persist flag to add the hot plugged disk to the virtual machine specification as a permanently mounted virtual disk. Stop, restart, or reboot the virtual machine to permanently mount the virtual disk. After specifying the --persist flag, you can no longer hot plug or hot unplug the virtual disk. The --persist flag applies to virtual machines, not virtual machine instances.
    • The optional --serial flag allows you to add an alphanumeric string label of your choice. This helps you to identify the hot plugged disk in a guest virtual machine. If you do not specify this option, the label defaults to the name of the hot plugged data volume or PVC.
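For example, to hot plug a data volume named my-hotplug-dv into a running VM named my-vm with a custom serial label (the names are illustrative):

  $ virtctl addvolume my-vm --volume-name=my-hotplug-dv --serial=data1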

10.19.14.4. Hot unplugging a virtual disk using the CLI

Hot unplug virtual disks that you want to detach from a virtual machine instance (VMI) while a virtual machine is running.

Prerequisites

  • Your virtual machine must be running.
  • You must have at least one data volume or persistent volume claim (PVC) available and hot plugged.

Procedure

  • Hot unplug a virtual disk by running the following command:

    $ virtctl removevolume <virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC>

10.19.14.5. Hot plugging a virtual disk using the web console

Hot plug virtual disks that you want to attach to a virtual machine instance (VMI) while a virtual machine is running. When you hot plug a virtual disk, it remains attached to the VMI until you unplug it.

Prerequisites

  • You must have a running virtual machine to hot plug a virtual disk.

Procedure

  1. Click Virtualization VirtualMachines from the side menu.
  2. Select the running virtual machine to which you want to hot plug a virtual disk.
  3. On the VirtualMachine details page, click Configuration Disks.
  4. Click Add disk.
  5. In the Add disk (hot plugged) window, fill in the information for the virtual disk that you want to hot plug.
  6. Click Save.

10.19.14.6. Hot unplugging a virtual disk using the web console

Hot unplug virtual disks that you want to detach from a virtual machine instance (VMI) while a virtual machine is running.

Prerequisites

  • Your virtual machine must be running with a hot plugged disk attached.

Procedure

  1. Click Virtualization VirtualMachines from the side menu.
  2. Select the running virtual machine with the disk you want to hot unplug to open the VirtualMachine details page.
  3. Click Configuration Disks.
  4. Click the Options menu kebab beside the virtual disk that you want to hot unplug and select Detach.
  5. Click Detach.

10.19.15. Using container disks with virtual machines

You can build a virtual machine image into a container disk and store it in your container registry. You can then import the container disk into persistent storage for a virtual machine or attach it directly to the virtual machine for ephemeral storage.

Important

If you use large container disks, I/O traffic might increase and impact worker nodes, which can lead to unavailable nodes.

10.19.15.1. About container disks

A container disk is a virtual machine image that is stored as a container image in a container image registry. You can use container disks to deliver the same disk images to multiple virtual machines and to create large numbers of virtual machine clones.

A container disk can either be imported into a persistent volume claim (PVC) by using a data volume that is attached to a virtual machine, or attached directly to a virtual machine as an ephemeral containerDisk volume.

10.19.15.1.1. Importing a container disk into a PVC by using a data volume

Use the Containerized Data Importer (CDI) to import the container disk into a PVC by using a data volume. You can then attach the data volume to a virtual machine for persistent storage.

10.19.15.1.2. Attaching a container disk to a virtual machine as a containerDisk volume

A containerDisk volume is ephemeral. It is discarded when the virtual machine is stopped, restarted, or deleted. When a virtual machine with a containerDisk volume starts, the container image is pulled from the registry and hosted on the node that is hosting the virtual machine.

Use containerDisk volumes for read-only file systems such as CD-ROMs or for disposable virtual machines.

Important

Using containerDisk volumes for read-write file systems is not recommended because the data is temporarily written to local storage on the hosting node. This slows live migration of the virtual machine, such as in the case of node maintenance, because the data must be migrated to the destination node. Additionally, all data is lost if the node loses power or otherwise shuts down unexpectedly.

10.19.15.2. Preparing a container disk for virtual machines

You must build a container disk with a virtual machine image and push it to a container registry before it can be used with a virtual machine. You can then either import the container disk into a PVC by using a data volume and attach it to a virtual machine, or you can attach the container disk directly to a virtual machine as an ephemeral containerDisk volume.

The size of a disk image inside a container disk is limited by the maximum layer size of the registry where the container disk is hosted.

Note

For Red Hat Quay, you can change the maximum layer size by editing the YAML configuration file that is created when Red Hat Quay is first deployed.

Prerequisites

  • Install podman if it is not already installed.
  • The virtual machine image must be either QCOW2 or RAW format.

Procedure

  1. Create a Dockerfile to build the virtual machine image into a container image. The virtual machine image must be owned by QEMU, which has a UID of 107, and placed in the /disk/ directory inside the container. Permissions for the /disk/ directory must then be set to 0440.

    The following example uses the Red Hat Universal Base Image (UBI) to handle these configuration changes in the first stage, and uses the minimal scratch image in the second stage to store the result:

    $ cat > Dockerfile << EOF
    FROM registry.access.redhat.com/ubi8/ubi:latest AS builder
    ADD --chown=107:107 <vm_image>.qcow2 /disk/ 1
    RUN chmod 0440 /disk/*
    
    FROM scratch
    COPY --from=builder /disk/* /disk/
    EOF
    1
    Where <vm_image> is the virtual machine image in either QCOW2 or RAW format.
    To use a remote virtual machine image, replace <vm_image>.qcow2 with the complete URL for the remote image.
  2. Build and tag the container:

    $ podman build -t <registry>/<container_disk_name>:latest .
  3. Push the container image to the registry:

    $ podman push <registry>/<container_disk_name>:latest

If your container registry does not have TLS, you must add it as an insecure registry before you can import container disks into persistent storage.
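As a hedged sketch, the pushed container disk can then be attached directly to a virtual machine as an ephemeral containerDisk volume; the disk and volume names are illustrative, and the image reference matches the previous steps:

  spec:
    template:
      spec:
        domain:
          devices:
            disks:
            - name: containerdisk
              disk:
                bus: virtio
        volumes:
        - name: containerdisk
          containerDisk:
            image: <registry>/<container_disk_name>:latest # the image pushed in the previous step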

10.19.15.3. Disabling TLS for a container registry to use as insecure registry

You can disable TLS (transport layer security) for one or more container registries by editing the insecureRegistries field of the HyperConverged custom resource.

Prerequisites

  • Log in to the cluster as a user with the cluster-admin role.

Procedure

  • Edit the HyperConverged custom resource and add a list of insecure registries to the spec.storageImport.insecureRegistries field.

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      storageImport:
        insecureRegistries: 1
          - "private-registry-example-1:5000"
          - "private-registry-example-2:5000"
    1
    Replace the examples in this list with valid registry hostnames.

10.19.16. Preparing CDI scratch space

10.19.16.1. About data volumes

DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). You can create a data volume as either a standalone resource or by using the dataVolumeTemplate field in the virtual machine (VM) specification.

Note
  • VM disk PVCs that are prepared by using standalone data volumes maintain an independent lifecycle from the VM. If you use the dataVolumeTemplate field in the VM specification to prepare the PVC, the PVC shares the same lifecycle as the VM.

10.19.16.2. About scratch space

The Containerized Data Importer (CDI) requires scratch space (temporary storage) to complete some operations, such as importing and uploading virtual machine images. During this process, CDI provisions a scratch space PVC equal to the size of the PVC backing the destination data volume (DV). The scratch space PVC is deleted after the operation completes or aborts.

You can define the storage class that is used to bind the scratch space PVC in the spec.scratchSpaceStorageClass field of the HyperConverged custom resource.

If the defined storage class does not match a storage class in the cluster, then the default storage class defined for the cluster is used. If there is no default storage class defined in the cluster, the storage class used to provision the original DV or PVC is used.

Note

CDI requires requesting scratch space with a file volume mode, regardless of the PVC backing the origin data volume. If the origin PVC is backed by block volume mode, you must define a storage class capable of provisioning file volume mode PVCs.

Manual provisioning

If there are no storage classes, CDI uses any PVCs in the project that match the size requirements for the image. If there are no PVCs that match these requirements, the CDI import pod remains in a Pending state until an appropriate PVC is made available or until a timeout function kills the pod.

10.19.16.3. CDI operations that require scratch space

  • Registry imports: CDI must download the image to a scratch space and extract the layers to find the image file. The image file is then passed to QEMU-IMG for conversion to a raw disk.
  • Upload image: QEMU-IMG does not accept input from STDIN. Instead, the image to upload is saved in scratch space before it can be passed to QEMU-IMG for conversion.
  • HTTP imports of archived images: QEMU-IMG does not know how to handle the archive formats CDI supports. Instead, the image is unarchived and saved into scratch space before it is passed to QEMU-IMG.
  • HTTP imports of authenticated images: QEMU-IMG inadequately handles authentication. Instead, the image is saved to scratch space and authenticated before it is passed to QEMU-IMG.
  • HTTP imports of custom certificates: QEMU-IMG inadequately handles custom certificates of HTTPS endpoints. Instead, CDI downloads the image to scratch space before passing the file to QEMU-IMG.

10.19.16.4. Defining a storage class

You can define the storage class that the Containerized Data Importer (CDI) uses when allocating scratch space by adding the spec.scratchSpaceStorageClass field to the HyperConverged custom resource (CR).

Prerequisites

  • Install the OpenShift CLI (oc).

Procedure

  1. Edit the HyperConverged CR by running the following command:

    $ oc edit hco -n openshift-cnv kubevirt-hyperconverged
  2. Add the spec.scratchSpaceStorageClass field to the CR, setting the value to the name of a storage class that exists in the cluster:

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
    spec:
      scratchSpaceStorageClass: "<storage_class>" 1
    1
    If you do not specify a storage class, CDI uses the storage class of the persistent volume claim that is being populated.
  3. Save and exit your default editor to update the HyperConverged CR.

10.19.16.5. CDI supported operations matrix

This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.

Content types      HTTP          HTTPS         HTTP basic auth   Registry      Upload

KubeVirt (QCOW2)   ✓ QCOW2       ✓ QCOW2**     ✓ QCOW2           ✓ QCOW2*      ✓ QCOW2*
                   ✓ GZ*         ✓ GZ*         ✓ GZ*             □ GZ          ✓ GZ*
                   ✓ XZ*         ✓ XZ*         ✓ XZ*             □ XZ          ✓ XZ*

KubeVirt (RAW)     ✓ RAW         ✓ RAW         ✓ RAW             ✓ RAW*        ✓ RAW*
                   ✓ GZ          ✓ GZ          ✓ GZ              □ GZ          ✓ GZ*
                   ✓ XZ          ✓ XZ          ✓ XZ              □ XZ          ✓ XZ*

✓ Supported operation

□ Unsupported operation

* Requires scratch space

** Requires scratch space if a custom certificate authority is required

10.19.17. Re-using persistent volumes

To re-use a statically provisioned persistent volume (PV), you must first reclaim the volume. This involves deleting the PV so that the storage configuration can be re-used.

10.19.17.1. About reclaiming statically provisioned persistent volumes

When you reclaim a persistent volume (PV), you unbind the PV from a persistent volume claim (PVC) and delete the PV. Depending on the underlying storage, you might need to manually delete the shared storage.

You can then re-use the PV configuration to create a PV with a different name.

Statically provisioned PVs must have a reclaim policy of Retain to be reclaimed. If they do not, the PV enters a failed state when the PVC is unbound from the PV.

Important

The Recycle reclaim policy is deprecated in OpenShift Container Platform 4.

10.19.17.2. Reclaiming statically provisioned persistent volumes

Reclaim a statically provisioned persistent volume (PV) by unbinding the persistent volume claim (PVC) and deleting the PV. You might also need to manually delete the shared storage.

Reclaiming a statically provisioned PV is dependent on the underlying storage. This procedure provides a general approach that might need to be customized depending on your storage.

Procedure

  1. Ensure that the reclaim policy of the PV is set to Retain:

    1. Check the reclaim policy of the PV:

      $ oc get pv <pv_name> -o yaml | grep 'persistentVolumeReclaimPolicy'
    2. If the persistentVolumeReclaimPolicy is not set to Retain, edit the reclaim policy with the following command:

      $ oc patch pv <pv_name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
  2. Ensure that no resources are using the PV:

    $ oc describe pvc <pvc_name> | grep 'Mounted By:'

    Remove any resources that use the PVC before continuing.

  3. Delete the PVC to release the PV:

    $ oc delete pvc <pvc_name>
  4. Optional: Export the PV configuration to a YAML file. If you manually remove the shared storage later in this procedure, you can refer to this configuration. You can also use spec parameters in this file as the basis to create a new PV with the same storage configuration after you reclaim the PV:

    $ oc get pv <pv_name> -o yaml > <file_name>.yaml
  5. Delete the PV:

    $ oc delete pv <pv_name>
  6. Optional: Depending on the storage type, you might need to remove the contents of the shared storage folder:

    $ rm -rf <path_to_share_storage>
  7. Optional: Create a PV that uses the same storage configuration as the deleted PV. If you exported the reclaimed PV configuration earlier, you can use the spec parameters of that file as the basis for a new PV manifest:

    Note

    To avoid possible conflict, it is good practice to give the new PV object a different name than the one that you deleted.

    $ oc create -f <new_pv_name>.yaml
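For illustration, the following is a compact sketch of a replacement PV manifest adapted from an exported configuration for a local volume; all values are placeholders that you copy or adjust from <file_name>.yaml:

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: <new_pv_name> # use a different name than the deleted PV
  spec:
    storageClassName: local
    capacity:
      storage: 10Gi
    accessModes:
    - ReadWriteOnce
    persistentVolumeReclaimPolicy: Retain
    local:
      path: <path_to_share_storage>
    nodeAffinity:
      required:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - <node_name>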

10.19.18. Expanding a virtual machine disk

You can enlarge the size of a virtual machine’s (VM) disk to provide a greater storage capacity by resizing the disk’s persistent volume claim (PVC).

However, you cannot reduce the size of a VM disk.

10.19.18.1. Enlarging a virtual machine disk

Virtual machine (VM) disk enlargement makes extra space available to the virtual machine. However, it is the responsibility of the VM owner to decide how to consume the storage.

If the disk is a Filesystem PVC, the matching file expands to the remaining size while reserving some space for file system overhead.

Procedure

  1. Edit the PersistentVolumeClaim manifest of the VM disk that you want to expand:

    $ oc edit pvc <pvc_name>
  2. Update the disk size:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
       name: vm-disk-expand
    spec:
      accessModes:
         - ReadWriteMany
      resources:
        requests:
           storage: 3Gi 1
    # ...
    1
    Specify the new disk size.
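Equivalently, you can apply the size change without opening an editor. The following sketch uses a merge patch with the illustrative PVC name and size from the example above:

  $ oc patch pvc vm-disk-expand -p '{"spec":{"resources":{"requests":{"storage":"3Gi"}}}}'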
