Virtualization

OpenShift Container Platform 4.12

OpenShift Virtualization installation, usage, and release notes

Red Hat OpenShift Documentation Team

Abstract

This document provides information about how to use OpenShift Virtualization in OpenShift Container Platform.

Chapter 1. About OpenShift Virtualization

Learn about OpenShift Virtualization’s capabilities and support scope.

1.1. What you can do with OpenShift Virtualization

OpenShift Virtualization is an add-on to OpenShift Container Platform that allows you to run and manage virtual machine workloads alongside container workloads.

OpenShift Virtualization adds new objects into your OpenShift Container Platform cluster by using Kubernetes custom resources to enable virtualization tasks. These tasks include:

  • Creating and managing Linux and Windows virtual machines
  • Connecting to virtual machines through a variety of consoles and CLI tools
  • Importing and cloning existing virtual machines
  • Managing network interface controllers and storage disks attached to virtual machines
  • Live migrating virtual machines between nodes

An enhanced web console provides a graphical portal to manage these virtualized resources alongside the OpenShift Container Platform cluster containers and infrastructure.

OpenShift Virtualization is designed and tested to work well with Red Hat OpenShift Data Foundation features.

Important

When you deploy OpenShift Virtualization with OpenShift Data Foundation, you must create a dedicated storage class for Windows virtual machine disks. See Optimizing ODF PersistentVolumes for Windows VMs for details.

You can use OpenShift Virtualization with the OVN-Kubernetes, OpenShift SDN, or one of the other certified network plugins listed in Certified OpenShift CNI Plug-ins.

1.1.1. OpenShift Virtualization supported cluster version

OpenShift Virtualization 4.12 is supported for use on OpenShift Container Platform 4.12 clusters. To use the latest z-stream release of OpenShift Virtualization, you must first upgrade to the latest version of OpenShift Container Platform.

1.2. Single-node OpenShift differences

You can install OpenShift Virtualization on a single-node cluster.

When provisioning a single-node OpenShift cluster with the assisted installer, preconfigured persistent storage is deployed automatically.

  • In OpenShift Virtualization 4.10 and 4.11, the HostPath Provisioner (HPP) is automatically installed.
  • In OpenShift Virtualization 4.12, the OpenShift Data Foundation Logical Volume Manager Operator is the provided out-of-the-box storage solution. You can also deploy the HPP manually.
Note

Single-node OpenShift does not support high availability. Be aware of the following differences in functionality from a multiple-node cluster:

  • Pod disruption budgets are not supported.
  • Live migration is not supported.
  • Due to differences in storage behavior, some virtual machine templates are incompatible with single-node OpenShift. To ensure compatibility, templates or virtual machines that use data volumes or storage profiles must not have the eviction strategy set.

1.3. Additional resources

Chapter 2. OpenShift Virtualization architecture

Learn about OpenShift Virtualization architecture.

2.1. How OpenShift Virtualization architecture works

After you install OpenShift Virtualization, the Operator Lifecycle Manager (OLM) deploys operator pods for each component of OpenShift Virtualization:

  • Compute: virt-operator
  • Storage: cdi-operator
  • Network: cluster-network-addons-operator
  • Scaling: ssp-operator
  • Templating: tekton-tasks-operator

OLM also deploys the hyperconverged-cluster-operator pod, which is responsible for the deployment, configuration, and life cycle of other components, and several helper pods: hco-webhook and hyperconverged-cluster-cli-download.

After all operator pods are successfully deployed, you should create the HyperConverged custom resource (CR). The configurations set in the HyperConverged CR serve as the single source of truth and the entrypoint for OpenShift Virtualization, and guide the behavior of the CRs.
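
For reference, a minimal HyperConverged CR looks like the following sketch. The name and namespace shown are the values the Operator expects by default; an empty spec accepts the opinionated defaults, and individual fields can be added later to tune components.

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged   # name expected by the Operator
  namespace: openshift-cnv        # default installation namespace
spec: {}                          # empty spec accepts the opinionated defaults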

The HyperConverged CR creates corresponding CRs for the operators of all other components within its reconciliation loop. Each operator then creates resources such as daemon sets, config maps, and additional components for the OpenShift Virtualization control plane. For example, when the hco-operator creates the KubeVirt CR, the virt-operator reconciles it and creates additional resources such as virt-controller, virt-handler, and virt-api.

The OLM deploys the hostpath-provisioner-operator, but it is not functional until you create a hostpath provisioner (HPP) CR.
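
A minimal sketch of such an HPP CR, assuming a CSI storage pool backed by a local path (the pool name and host path are illustrative):

apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  storagePools:
    - name: local              # illustrative storage pool name
      path: /var/hpvolumes     # illustrative host path that backs the pool
  workload:
    nodeSelector:
      kubernetes.io/os: linux  # schedule HPP workloads on Linux nodes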

CNV Deployments

2.2. About the hco-operator

The hco-operator (HCO) provides a single entry point for deploying and managing OpenShift Virtualization and several helper operators with opinionated defaults. It also creates custom resources (CRs) for those operators.

hco-operator components
Table 2.1. hco-operator components
Component | Description

deployment/hco-webhook

Validates the HyperConverged custom resource contents.

deployment/hyperconverged-cluster-cli-download

Provides the virtctl tool binaries to the cluster so that you can download them directly from the cluster.

KubeVirt/kubevirt-kubevirt-hyperconverged

Contains all operators, CRs, and objects needed by OpenShift Virtualization.

SSP/ssp-kubevirt-hyperconverged

An SSP CR. This is automatically created by the HCO.

CDI/cdi-kubevirt-hyperconverged

A CDI CR. This is automatically created by the HCO.

NetworkAddonsConfig/cluster

A CR that instructs and is managed by the cluster-network-addons-operator.

2.3. About the cdi-operator

The cdi-operator manages the Containerized Data Importer (CDI) and its related resources. CDI imports a virtual machine (VM) image into a persistent volume claim (PVC) by using a data volume.
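
For example, a data volume that imports a disk image over HTTP into a new PVC might look like the following sketch; the name, URL, and requested size are illustrative:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: example-import-dv                              # illustrative name
spec:
  source:
    http:
      url: "https://example.com/images/example.qcow2"  # illustrative image URL
  storage:
    resources:
      requests:
        storage: 30Gi                                  # size of the PVC that CDI creates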

cdi-operator components
Table 2.2. cdi-operator components
Component | Description

deployment/cdi-apiserver

Manages the authorization to upload VM disks into PVCs by issuing secure upload tokens.

deployment/cdi-uploadproxy

Directs external disk upload traffic to the appropriate upload server pod so that it can be written to the correct PVC. Requires a valid upload token.

pod/cdi-importer

Helper pod that imports a virtual machine image into a PVC when creating a data volume.

2.4. About the cluster-network-addons-operator

The cluster-network-addons-operator deploys networking components on a cluster and manages the related resources for extended network functionality.

cluster-network-addons-operator components
Table 2.3. cluster-network-addons-operator components
Component | Description

deployment/kubemacpool-cert-manager

Manages TLS certificates of Kubemacpool’s webhooks.

deployment/kubemacpool-mac-controller-manager

Provides a MAC address pooling service for virtual machine (VM) network interface cards (NICs).

daemonset/bridge-marker

Marks network bridges available on nodes as node resources.

daemonset/kube-cni-linux-bridge-plugin

Installs CNI plugins on cluster nodes, enabling the attachment of VMs to Linux bridges through network attachment definitions.

2.5. About the hostpath-provisioner-operator

The hostpath-provisioner-operator deploys and manages the multi-node hostpath provisioner (HPP) and related resources.

hpp-operator components
Table 2.4. hostpath-provisioner-operator components
Component | Description

deployment/hpp-pool-hpp-csi-pvc-block-<worker_node_name>

Provides a worker for each node where the hostpath provisioner (HPP) is designated to run. The pods mount the specified backing storage on the node.

daemonset/hostpath-provisioner-csi

Implements the Container Storage Interface (CSI) driver interface of the HPP.

daemonset/hostpath-provisioner

Implements the legacy driver interface of the HPP.

2.6. About the ssp-operator

The ssp-operator deploys the common templates, the related default boot sources, and the template validator.

ssp-operator components
Table 2.5. ssp-operator components
Component | Description

deployment/virt-template-validator

Checks vm.kubevirt.io/validations annotations on virtual machines created from templates, and rejects them if they are invalid.

2.7. About the tekton-tasks-operator

The tekton-tasks-operator deploys example pipelines showing the usage of OpenShift Pipelines for VMs. It also deploys additional OpenShift Pipeline tasks that allow users to create VMs from templates, copy and modify templates, and create data volumes.

tekton-tasks-operator components
Table 2.6. tekton-tasks-operator components
Component | Description

deployment/create-vm-from-template

Creates a VM from a template.

deployment/copy-template

Copies a VM template.

deployment/modify-vm-template

Creates or removes a VM template.

deployment/modify-data-object

Creates or removes data volumes or data sources.

deployment/cleanup-vm

Runs a script or a command on a VM, then stops or deletes the VM afterward.

deployment/disk-virt-customize

Runs a customize script on a target PVC using virt-customize.

deployment/disk-virt-sysprep

Runs a sysprep script on a target PVC by using virt-sysprep.

deployment/wait-for-vmi-status

Waits for a specific VMI status, then fails or succeeds according to that status.

2.8. About the virt-operator

The virt-operator deploys, upgrades, and manages OpenShift Virtualization without disrupting current virtual machine (VM) workloads.

virt-operator components
Table 2.7. virt-operator components
Component | Description

deployment/virt-api

HTTP API server that serves as the entry point for all virtualization-related flows.

deployment/virt-controller

Observes the creation of a new VM instance object and creates a corresponding pod. When the pod is scheduled on a node, virt-controller updates the VM with the node name.

daemonset/virt-handler

Monitors any changes to a VM and instructs virt-launcher to perform the required operations. This component is node-specific.

pod/virt-launcher

Contains the VM that was created by the user as implemented by libvirt and qemu.

Chapter 3. Getting started with OpenShift Virtualization

You can explore the features and functionalities of OpenShift Virtualization by installing and configuring a basic environment.

Note

Cluster configuration procedures require cluster-admin privileges.

3.1. Planning and installing OpenShift Virtualization

Plan and install OpenShift Virtualization on an OpenShift Container Platform cluster:

Planning and installation resources

3.2. Creating and managing virtual machines

Create virtual machines (VMs) by using the web console:

Connect to the VMs:

Manage the VMs:

3.3. Next steps

Chapter 4. Web console overview

The Virtualization section of the OpenShift Container Platform web console contains the following pages for managing and monitoring your OpenShift Virtualization environment.

Table 4.1. Virtualization pages
Page | Description

Overview page

Manage and monitor the OpenShift Virtualization environment.

Catalog page

Create VirtualMachines from a catalog of templates.

VirtualMachines page

Configure and monitor VirtualMachines.

Templates page

Create and manage templates.

DataSources page

Create and manage DataSources for VirtualMachine boot sources.

MigrationPolicies page

Create and manage MigrationPolicies for workloads.

Table 4.2. Key
Icon | Description

icon pencil

Edit icon

icon link

Link icon

4.1. Overview page

The Overview page displays resources, metrics, migration progress, and cluster-level settings.

Example 4.1. Overview page

Element | Description

Download virtctl icon link

Download the virtctl command line tool to manage resources.

Overview tab

Resources, usage, alerts, and status.

Top consumers tab

Top consumers of CPU, memory, and storage resources.

Migrations tab

Status of live migrations.

Settings tab

Cluster-wide settings, including live migration limits and user permissions.

4.1.1. Overview tab

The Overview tab displays resources, usage, alerts, and status.

Example 4.2. Overview tab

Element | Description

"Getting started resources" card

  • "Quick Starts" tile: Learn how to create, import, and run VirtualMachines with step-by-step instructions and tasks.
  • "Feature highlights" tile: Read the latest information about key virtualization features.
  • "Related operators" tile: Install Operators such as the Kubernetes NMState Operator or the OpenShift Data Foundation Operator.

"VirtualMachines" tile

Number of VirtualMachines, with a chart showing the last 7 days' trend.

"vCPU usage" tile

vCPU usage, with a chart showing the last 7 days' trend.

"Memory" tile

Memory usage, with a chart showing the last 7 days' trend.

"Storage" tile

Storage usage, with a chart showing the last 7 days' trend.

"Alerts" tile

OpenShift Virtualization alerts, grouped by severity.

"VirtualMachine statuses" tile

Number of VirtualMachines, grouped by status.

"VirtualMachines per template" chart

Number of VirtualMachines created from templates, grouped by template name.

4.1.2. Top consumers tab

The Top consumers tab displays the top consumers of CPU, memory, and storage.

Example 4.3. Top consumers tab

Element | Description

View virtualization dashboard icon link

Link to Observe → Dashboards, which displays the top consumers for OpenShift Virtualization.

Time period list

Select a time period to filter the results.

Top consumers list

Select the number of top consumers to filter the results.

"CPU" chart

VirtualMachines with the highest CPU usage.

"Memory" chart

VirtualMachines with the highest memory usage.

"Memory swap traffic" chart

VirtualMachines with the highest memory swap traffic.

"vCPU wait" chart

VirtualMachines with the highest vCPU wait periods.

"Storage throughput" chart

VirtualMachines with the highest storage throughput usage.

"Storage IOPS" chart

VirtualMachines with the highest storage input/output operations per second usage.

4.1.3. Migrations tab

The Migrations tab displays the status of VirtualMachineInstance migrations.

Example 4.4. Migrations tab

Element | Description

Time period list

Select a time period to filter VirtualMachineInstanceMigrations.

VirtualMachineInstanceMigrations table

List of VirtualMachineInstance migrations.

4.1.4. Settings tab

The Settings tab displays cluster-wide settings on the following tabs:

Table 4.3. Tabs on Settings tab
Tab | Description

General tab

OpenShift Virtualization version and update status.

Live migration tab

Live migration limits and network settings.

Templates project tab

Project for Red Hat templates.

User permissions tab

Cluster-wide user permissions.

4.1.4.1. General tab

The General tab displays the OpenShift Virtualization version and update status.

Example 4.5. General tab

Label | Description

Service name

OpenShift Virtualization

Provider

Red Hat

Installed version

4.12.13

Update status

Example: Up to date

Channel

Channel selected for updates.

4.1.4.2. Live migration tab

You can configure live migration on the Live migration tab.

Example 4.6. Live migration tab

Element | Description

Max. migrations per cluster field

Select the maximum number of live migrations per cluster.

Max. migrations per node field

Select the maximum number of live migrations per node.

Live migration network list

Select a dedicated secondary network for live migration.
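
These settings correspond to the liveMigrationConfig stanza of the HyperConverged CR. A sketch with illustrative values; the network name is a hypothetical NetworkAttachmentDefinition:

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  liveMigrationConfig:
    parallelMigrationsPerCluster: 5         # maximum live migrations per cluster
    parallelOutboundMigrationsPerNode: 2    # maximum outbound live migrations per node
    network: migration-network              # hypothetical dedicated secondary network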

4.1.4.3. Templates project tab

You can select a project for templates on the Templates project tab.

Example 4.7. Templates project tab

Element | Description

Project list

Select a project in which to store Red Hat templates. The default template project is openshift.

If you want to define multiple template projects, you must clone the templates on the Templates page for each project.

4.1.4.4. User permissions tab

The User permissions tab displays cluster-wide user permissions for tasks.

Example 4.8. User permissions tab

Element | Description

User Permissions table

List of tasks, such as Share templates, and permissions.

4.2. Catalog page

You can create a VirtualMachine by selecting a template on the Catalog page.

Example 4.9. Catalog page

Element | Description

Templates project list

Select the project in which your templates are located.

By default, Red Hat templates are stored in the openshift project. You can edit the template project on the Overview → Settings → Templates project tab.

All items|Default templates

Click Default templates to display only default templates.

Boot source available checkbox

Select the checkbox to display templates with an available boot source.

Operating system checkboxes

Select checkboxes to display templates with selected operating systems.

Workload checkboxes

Select checkboxes to display templates with selected workloads.

Search field

Search templates by keyword.

Template tiles

Click a template tile to view template details and to create a VirtualMachine.

4.3. VirtualMachines page

You can create and manage VirtualMachines on the VirtualMachines page.

Example 4.10. VirtualMachines page

Element | Description

Create → From catalog

Create a VirtualMachine on the Catalog page.

Create → With YAML

Create a VirtualMachine by editing a YAML configuration file (see the minimal sketch after this table).

Filter field

Filter VirtualMachines by status, template, operating system, or node.

Search field

Search for VirtualMachines by name or by label.

VirtualMachines table

List of VirtualMachines.

Click the Options menu kebab beside a VirtualMachine to select Stop, Restart, Pause, Clone, Migrate, Copy SSH command, Edit labels, Edit annotations, or Delete.

Click a VirtualMachine to navigate to the VirtualMachine details page.
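
The Create → With YAML option expects a VirtualMachine manifest. A minimal sketch that boots a container disk; the VM name and image reference are illustrative:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm                 # illustrative name
spec:
  running: false                   # start the VM later from the console or CLI
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 2Gi
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest   # illustrative container disk image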

4.3.1. VirtualMachine details page

You can configure a VirtualMachine on the VirtualMachine details page.

Example 4.11. VirtualMachine details page

Element | Description

Actions menu

Click the Actions menu to select Stop, Restart, Pause, Clone, Migrate, Copy SSH command, Edit labels, Edit annotations, or Delete.

Overview tab

Resource usage, alerts, disks, and devices.

Details tab

VirtualMachine configurations.

Metrics tab

Memory, CPU, storage, network, and migration metrics.

YAML tab

VirtualMachine YAML configuration file.

Scheduling tab

Scheduling configurations.

Environment tab

Config map, secret, and service account management.

Events tab

VirtualMachine event stream.

Console tab

Console session management.

Network interfaces tab

Network interface management.

Disks tab

Disk management.

Scripts tab

Cloud-init and SSH key management.

Snapshots tab

Snapshot management.

4.3.1.1. Overview tab

The Overview tab displays resource usage, alerts, and configuration information.

Example 4.12. Overview tab

Element | Description

"Details" tile

General VirtualMachine information.

"Utilization" tile

CPU, Memory, Storage, and Network transfer charts.

"Hardware devices" tile

GPU and host devices.

"Alerts" tile

OpenShift Virtualization alerts, grouped by severity.

"Snapshots" tile

Take snapshot icon link and Snapshots table.

"Network interfaces" tile

Network interfaces table.

"Disks" tile

Disks table.

4.3.1.2. Details tab

You can configure the VirtualMachine on the Details tab.

Example 4.13. Details tab

Element | Description

YAML switch

Set to ON to view your live changes in the YAML configuration file.

Name

VirtualMachine name.

Namespace

VirtualMachine namespace.

Labels

Click the edit icon to edit the labels.

Annotations

Click the edit icon to edit the annotations.

Description

Click the edit icon to enter a description.

Operating system

Operating system name.

CPU|Memory

Click the edit icon to edit the CPU|Memory request.

The number of CPUs is calculated by using the following formula: sockets * threads * cores.

Machine type

VirtualMachine machine type.

Boot mode

Click the edit icon to edit the boot mode.

Start in pause mode

Click the edit icon to enable this setting.

Template

Name of the template used to create the VirtualMachine.

Created at

VirtualMachine creation date.

Owner

VirtualMachine owner.

Status

VirtualMachine status.

Pod

virt-launcher pod name.

VirtualMachineInstance

VirtualMachineInstance name.

Boot order

Click the edit icon to select a boot source.

IP address

IP address of the VirtualMachine.

Hostname

Hostname of the VirtualMachine.

Time zone

Time zone of the VirtualMachine.

Node

Node on which the VirtualMachine is running.

Workload profile

Click the edit icon to edit the workload profile.

SSH using virtctl

Click the copy icon to copy the virtctl ssh command to the clipboard.

SSH over NodePort

Selecting Create a Service to expose your VirtualMachine for SSH access generates an ssh -p <port> command. Click the copy icon to copy the command to the clipboard.

GPU devices

Click the edit icon to add a GPU device.

Host devices

Click the edit icon to add a host device.

Services section

View services.

Active users section

View active users.

4.3.1.3. Metrics tab

The Metrics tab displays memory, CPU, storage, network, and migration usage charts.

Example 4.14. Metrics tab

Element | Description

Time range list

Select a time range to filter the results.

Virtualization dashboard icon link

Link to the Workloads tab of the current project.

Utilization section

Memory, CPU, and Network interface charts.

Storage section

Storage total read/write and Storage iops total read/write charts.

Network section

Network in, Network out, and Network bandwidth charts.

Migration section

Migration and KV data transfer rate charts.

4.3.1.4. YAML tab

You can configure the VirtualMachine by editing the YAML file on the YAML tab.

Example 4.15. YAML tab

Element | Description

YAML switch

Set to ON to view your live changes in the YAML configuration file.

Save button

Save changes to the YAML file.

Reload button

Discard your changes and reload the YAML file.

Cancel button

Exit the YAML tab.

Download button

Download the YAML file to your local machine.

4.3.1.5. Scheduling tab

You can configure scheduling on the Scheduling tab.

Example 4.16. Scheduling tab

Setting | Description

YAML switch

Set to ON to view your live changes in the YAML configuration file.

Node selector

Click the edit icon to add a label to specify qualifying nodes.

Tolerations

Click the edit icon to add a toleration to specify qualifying nodes.

Affinity rules

Click the edit icon to add an affinity rule.

Descheduler switch

Enable or disable the descheduler. The descheduler evicts a running pod so that the pod can be rescheduled onto a more suitable node.

Dedicated resources

Click the edit icon to select Schedule this workload with dedicated resources (guaranteed policy).

Eviction strategy

Click the edit icon to select LiveMigrate as the VirtualMachineInstance eviction strategy.
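
The settings above map to fields under the VirtualMachine pod template. An illustrative excerpt of a VirtualMachine manifest, with hypothetical label and toleration values:

spec:
  template:
    spec:
      evictionStrategy: LiveMigrate    # Eviction strategy setting
      nodeSelector:
        vm-workloads: "true"           # Node selector label (hypothetical)
      tolerations:
        - key: virtualization          # Toleration (hypothetical)
          operator: Exists
          effect: NoSchedule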

4.3.1.6. Environment tab

You can manage config maps, secrets, and service accounts on the Environment tab.

Example 4.17. Environment tab

Element | Description

YAML switch

Set to ON to view your live changes in the YAML configuration file.

Add Config Map, Secret or Service Account icon link

Click the link and select a config map, secret, or service account from the resource list.

4.3.1.7. Events tab

The Events tab displays a list of VirtualMachine events.

4.3.1.8. Console tab

You can open a console session to the VirtualMachine on the Console tab.

Example 4.18. Console tab

Element | Description

Guest login credentials section

Expand Guest login credentials to view the credentials created with cloud-init. Click the copy icon to copy the credentials to the clipboard.

Console list

Select VNC console or Serial console.

You can select Desktop viewer to connect to Windows VirtualMachines by using Remote Desktop Protocol (RDP). You must install an RDP client on a machine on the same network.

Send key list

Select a key-stroke combination to send to the console.

Disconnect button

Disconnect the console connection.

You must manually disconnect the console connection if you open a new console session. Otherwise, the first console session continues to run in the background.

4.3.1.9. Network interfaces tab

You can manage network interfaces on the Network interfaces tab.

Example 4.19. Network interfaces tab

Setting | Description

YAML switch

Set to ON to view your live changes in the YAML configuration file.

Add network interface button

Add a network interface to the VirtualMachine.

Filter field

Filter by interface type.

Search field

Search for a network interface by name or by label.

Network interface table

List of network interfaces.

Click the Options menu kebab beside a network interface to select Edit or Delete.

4.3.1.10. Disks tab

You can manage disks on the Disks tab.

Example 4.20. Disks tab

Setting | Description

YAML switch

Set to ON to view your live changes in the YAML configuration file.

Add disk button

Add a disk to the VirtualMachine.

Filter field

Filter by disk type.

Search field

Search for a disk by name.

Disks table

List of VirtualMachine disks.

Click the Options menu kebab beside a disk to select Edit or Detach.

File systems table

List of VirtualMachine file systems.

4.3.1.11. Scripts tab

You can manage the cloud-init and SSH keys of the VirtualMachine on the Scripts tab.

Example 4.21. Scripts tab

Element | Description

YAML switch

Set to ON to view your live changes in the YAML configuration file.

Cloud-init

Click the edit icon to edit the cloud-init settings.

Authorized SSH Key

Click the edit icon to create a new secret or to attach an existing secret.

4.3.1.12. Snapshots tab

You can create snapshots and restore VirtualMachines from snapshots on the Snapshots tab.

Example 4.22. Snapshots tab

Element | Description

Take snapshot button

Create a snapshot.

Filter field

Filter snapshots by status.

Search field

Search for snapshots by name or by label.

Snapshot table

List of snapshots.

Click the Options menu kebab beside a snapshot to select Edit labels, Edit annotations, Edit VirtualMachineSnapshot, or Delete VirtualMachineSnapshot.

4.4. Templates page

You can create, edit, and clone VirtualMachine templates on the Templates page.

Note

You cannot edit a Red Hat template. You can clone a Red Hat template and edit it to create a custom template.

Example 4.23. Templates page

Element | Description

Create Template button

Create a template by editing a YAML configuration file.

Filter field

Filter templates by type, boot source, template provider, or operating system.

Search field

Search for templates by name or by label.

Templates table

List of templates.

Click the Options menu kebab beside a template to select Edit, Clone, Edit boot source, Edit boot source reference, Edit labels, Edit annotations, or Delete.

4.4.1. Template details page

You can view template settings and edit custom templates on the Template details page.

Example 4.24. Template details page

Element | Description

Actions menu

Click the Actions menu to select Edit, Clone, Edit boot source, Edit boot source reference, Edit labels, Edit annotations, or Delete.

Details tab

Template settings and configurations.

YAML tab

YAML configuration file.

Scheduling tab

Scheduling configurations.

Network interfaces tab

Network interface management.

Disks tab

Disk management.

Scripts tab

Cloud-init, SSH key, and Sysprep management.

Parameters tab

Parameters.

4.4.1.1. Details tab

You can configure a custom template on the Details tab.

Example 4.25. Details tab

Element | Description

YAML switch

Set to ON to view your live changes in the YAML configuration file.

Name

Template name.

Namespace

Template namespace.

Labels

Click the edit icon to edit the labels.

Annotations

Click the edit icon to edit the annotations.

Display name

Click the edit icon to edit the display name.

Description

Click the edit icon to enter a description.

Operating system

Operating system name.

CPU|Memory

Click the edit icon to edit the CPU|Memory request.

The number of CPUs is calculated by using the following formula: sockets * threads * cores.

Machine type

Template machine type.

Boot mode

Click the edit icon to edit the boot mode.

Base template

Name of the base template used to create this template.

Created at

Template creation date.

Owner

Template owner.

Boot order

Template boot order.

Boot source

Boot source availability.

Provider

Template provider.

Support

Template support level.

GPU devices

Click the edit icon to add a GPU device.

Host devices

Click the edit icon to add a host device.

4.4.1.2. YAML tab

You can configure a custom template by editing the YAML file on the YAML tab.

Example 4.26. YAML tab

Element | Description

YAML switch

Set to ON to view your live changes in the YAML configuration file.

Save button

Save changes to the YAML file.

Reload button

Discard your changes and reload the YAML file.

Cancel button

Exit the YAML tab.

Download button

Download the YAML file to your local machine.

4.4.1.3. Scheduling tab

You can configure scheduling on the Scheduling tab.

Example 4.27. Scheduling tab

Setting | Description

YAML switch

Set to ON to view your live changes in the YAML configuration file.

Node selector

Click the edit icon to add a label to specify qualifying nodes.

Tolerations

Click the edit icon to add a toleration to specify qualifying nodes.

Affinity rules

Click the edit icon to add an affinity rule.

Descheduler switch

Enable or disable the descheduler. The descheduler evicts a running pod so that the pod can be rescheduled onto a more suitable node.

Dedicated resources

Click the edit icon to select Schedule this workload with dedicated resources (guaranteed policy).

Eviction strategy

Click the edit icon to select LiveMigrate as the VirtualMachineInstance eviction strategy.

4.4.1.4. Network interfaces tab

You can manage network interfaces on the Network interfaces tab.

Example 4.28. Network interfaces tab

Setting | Description

YAML switch

Set to ON to view your live changes in the YAML configuration file.

Add network interface button

Add a network interface to the template.

Filter field

Filter by interface type.

Search field

Search for a network interface by name or by label.

Network interface table

List of network interfaces.

Click the Options menu kebab beside a network interface to select Edit or Delete.

4.4.1.5. Disks tab

You can manage disks on the Disks tab.

Example 4.29. Disks tab

Setting | Description

YAML switch

Set to ON to view your live changes in the YAML configuration file.

Add disk button

Add a disk to the template.

Filter field

Filter by disk type.

Search field

Search for a disk by name.

Disks table

List of template disks.

Click the Options menu kebab beside a disk to select Edit or Detach.

4.4.1.6. Scripts tab

You can manage the cloud-init settings, SSH keys, and Sysprep answer files on the Scripts tab.

Example 4.30. Scripts tab

Element | Description

YAML switch

Set to ON to view your live changes in the YAML configuration file.

Cloud-init

Click the edit icon to edit the cloud-init settings.

Authorized SSH Key

Click the edit icon to create a new secret or to attach an existing secret.

Sysprep

Click the edit icon to upload an Autounattend.xml or Unattend.xml answer file to automate Windows VirtualMachine setup.

4.4.1.7. Parameters tab

You can edit selected template settings on the Parameters tab.

Example 4.31. Parameters tab

Element | Description

VM name

Select Generated (expression) for a generated value, Value to set a default value, or None from the Default value type list.

Data source namespace

Select Generated (expression) for a generated value, Value to set a default value, or None from the Default value type list.

Cloud user password

Select Generated (expression) for a generated value, Value to set a default value, or None from the Default value type list.

4.5. DataSources page

You can create and configure DataSources for VirtualMachine boot sources on the DataSources page.

When you create a DataSource, a DataImportCron resource defines a cron job to poll and import the disk image unless you disable automatic boot source updates.
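
A sketch of such a DataImportCron, assuming a boot source that is polled from a container registry every 12 hours; the names and registry URL are illustrative:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataImportCron
metadata:
  name: example-image-cron                     # illustrative name
  namespace: openshift-virtualization-os-images
spec:
  schedule: "0 */12 * * *"                     # cron expression for polling the source
  managedDataSource: example-datasource        # DataSource that this cron keeps updated
  importsToKeep: 3                             # number of imported revisions to retain
  template:
    spec:
      source:
        registry:
          url: "docker://registry.example.com/images/example-os:latest"   # illustrative URL
      storage:
        resources:
          requests:
            storage: 30Gi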

Example 4.32. DataSources page

Element | Description

Create DataSource → With form

Create a DataSource by entering the registry URL, disk size, number of revisions, and cron expression in a form.

Create DataSource → With YAML

Create a DataSource by editing a YAML configuration file.

Filter field

Filter DataSources by attributes such as DataImportCron available.

Search field

Search for a DataSource by name or by label.

DataSources table

List of DataSources.

Click the Options menu kebab beside a DataSource to select Edit labels, Edit annotations, or Delete.

Click a DataSource to view the DataSource details page.

4.5.1. DataSource details page

You can configure a DataSource on the DataSource details page.

Example 4.33. DataSource details page

Element | Description

Details tab

Configure a DataSource by editing a form.

YAML tab

Configure a DataSource by editing a YAML configuration file.

Actions menu

Select Edit labels, Edit annotations, or Delete.

Name

DataSource name.

Namespace

DataSource namespace.

Labels

Click the edit icon to edit the labels.

Annotations

Click the edit icon to edit the annotations.

Conditions

Displays the status conditions of the DataSource.

4.6. MigrationPolicies page

You can manage MigrationPolicies for your workloads on the MigrationPolicies page.

Example 4.34. MigrationPolicies page

Element | Description

Create MigrationPolicy → With form

Create a MigrationPolicy by entering configurations and labels in a form.

Create MigrationPolicy → With YAML

Create a MigrationPolicy by editing a YAML configuration file.

Name | Label search field

Search for a MigrationPolicy by name or by label.

MigrationPolicies table

List of MigrationPolicies.

Click the Options menu kebab beside a MigrationPolicy to select Edit or Delete.

Click a MigrationPolicy to view the MigrationPolicy details page.

4.6.1. MigrationPolicy details page

You can configure a MigrationPolicy on the MigrationPolicy details page.

Example 4.35. MigrationPolicy details page

Element | Description

Details tab

Configure a MigrationPolicy by editing a form.

YAML tab

Configure a MigrationPolicy by editing a YAML configuration file.

Actions menu

Select Edit or Delete.

Name

MigrationPolicy name.

Description

MigrationPolicy description.

Configurations

Click the edit icon to update the MigrationPolicy configurations.

Bandwidth per migration

Bandwidth request per migration. For unlimited bandwidth, set the value to 0.

Auto converge

Auto converge policy.

Post-copy

Post-copy policy.

Completion timeout

Completion timeout value in seconds.

Project labels

Click Edit to edit the project labels.

VirtualMachine labels

Click Edit to edit the VirtualMachine labels.

Chapter 5. OpenShift Virtualization release notes

5.1. Making open source more inclusive

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

5.2. About Red Hat OpenShift Virtualization

Red Hat OpenShift Virtualization enables you to bring traditional virtual machines (VMs) into OpenShift Container Platform where they run alongside containers, and are managed as native Kubernetes objects.

You can use OpenShift Virtualization with either the OVN-Kubernetes or the OpenShift SDN default Container Network Interface (CNI) network provider.

Learn more about what you can do with OpenShift Virtualization.

Learn more about OpenShift Virtualization architecture and deployments.

Prepare your cluster for OpenShift Virtualization.

5.2.1. OpenShift Virtualization supported cluster version

OpenShift Virtualization 4.12 is supported for use on OpenShift Container Platform 4.12 clusters. To use the latest z-stream release of OpenShift Virtualization, you must first upgrade to the latest version of OpenShift Container Platform.

5.2.2. Supported guest operating systems

To view the supported guest operating systems for OpenShift Virtualization, refer to Certified Guest Operating Systems in Red Hat OpenStack Platform, Red Hat Virtualization and OpenShift Virtualization.

5.3. New and changed features

  • OpenShift Virtualization is certified in Microsoft’s Windows Server Virtualization Validation Program (SVVP) to run Windows Server workloads.

    The SVVP Certification applies to:

    • Red Hat Enterprise Linux CoreOS workers. In the Microsoft SVVP Catalog, they are named Red Hat OpenShift Container Platform 4 on RHEL CoreOS 8.
    • Intel and AMD CPUs.
  • OpenShift Virtualization no longer uses its previous logo. OpenShift Virtualization is represented by a new logo for versions 4.9 and later.
  • You can export and download a volume from a virtual machine (VM), a VM snapshot, or a persistent volume claim (PVC) to recreate it on a different cluster or in a different namespace on the same cluster by using the virtctl vmexport command or by creating a VirtualMachineExport custom resource, as shown in the sketch after this list. You can also export the memory-dump for forensic analysis.
  • Standalone data volumes, and data volumes created when using a dataVolumeTemplate to prepare a disk for a VM, are no longer stored in the system. The data volumes are now automatically garbage collected and deleted after the PVC is created.
  • OpenShift Virtualization now provides live migration metrics that you can access by using the OpenShift Container Platform monitoring dashboard.
  • The OpenShift Virtualization Operator now reads the cluster-wide TLS security profile from the APIServer custom resource and propagates it to the OpenShift Virtualization components, including virtualization, storage, networking, and infrastructure.
  • OpenShift Virtualization has runbooks to help you troubleshoot issues that trigger alerts. The alerts are displayed on the Virtualization → Overview page of the web console. Each runbook defines an alert and provides steps to diagnose and resolve the issue. This feature was previously introduced as a Technology Preview and is now generally available.
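
For the export feature referenced above, a VirtualMachineExport CR names the object to export. A sketch for exporting a VM; the names are illustrative:

apiVersion: export.kubevirt.io/v1alpha1
kind: VirtualMachineExport
metadata:
  name: example-export        # illustrative name
spec:
  source:
    apiGroup: "kubevirt.io"   # empty for a PVC source
    kind: VirtualMachine      # VirtualMachine, VirtualMachineSnapshot, or PersistentVolumeClaim
    name: example-vm          # illustrative source name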

5.3.1. Quick starts

  • Quick start tours are available for several OpenShift Virtualization features. To view the tours, click the Help icon ? in the menu bar on the header of the OpenShift Virtualization console and then select Quick Starts. You can filter the available tours by entering the virtualization keyword in the Filter field.

5.3.2. Networking

5.3.3. Web console

  • The Virtualization → Overview page has the following usability enhancements:

    • A Download virtctl link is available.
    • Resource information is customized for administrative and non-administrative users. For example, non-administrative users see only their VMs.
    • The Overview tab displays the number of VMs, and vCPU, memory, and storage usage with charts that show the last 7 days' trend.
    • The Alerts card on the Overview tab displays the alerts grouped by severity.
    • The Top Consumers tab displays the top consumers of CPU, memory, and storage usage over a configurable time period.
    • The Migrations tab displays the progress of VM migrations.
    • The Settings tab displays cluster-wide settings, including live migration limits, live migration network, and templates project.
  • You can create and manage live migration policies in a single location on the Virtualization → MigrationPolicies page.
  • The Metrics tab on the VirtualMachine details page displays memory, CPU, storage, network, and migration metrics of a VM, over a configurable period of time.
  • When you customize a template to create a VM, you can set the YAML switch to ON on each VM configuration tab to view the live changes in the YAML configuration file alongside the form.
  • The Migrations tab on the Virtualization → Overview page displays the progress of virtual machine instance migrations over a configurable time period.
  • You can now define a dedicated network for live migration to minimize disruption to tenant workloads. To select a network, navigate to Virtualization → Overview → Settings → Live migration.

5.3.4. Deprecated features

Deprecated features are included in the current release and supported. However, they will be removed in a future release and are not recommended for new deployments.

5.3.5. Removed features

Removed features are not supported in the current release.

  • Support for the legacy HPP custom resource, and the associated storage class, has been removed for all new deployments. In OpenShift Virtualization 4.12, the HPP Operator uses the Kubernetes Container Storage Interface (CSI) driver to configure local storage. A legacy HPP custom resource is supported only if it had been installed on a previous version of OpenShift Virtualization.
  • OpenShift Virtualization 4.11 removed support for nmstate, including the following objects:

    • NodeNetworkState
    • NodeNetworkConfigurationPolicy
    • NodeNetworkConfigurationEnactment

    To preserve and support your existing nmstate configuration, install the Kubernetes NMState Operator before updating to OpenShift Virtualization 4.11. For the 4.12 Extended Update Support (EUS) version, install the Kubernetes NMState Operator after updating to 4.12. You can install the Operator from the OperatorHub in the OpenShift Container Platform web console, or by using the OpenShift CLI (oc).

  • The Node Maintenance Operator (NMO) is no longer shipped with OpenShift Virtualization. You can install the NMO from the OperatorHub in the OpenShift Container Platform web console, or by using the OpenShift CLI (oc).

    You must perform one of the following tasks before updating to OpenShift Virtualization 4.11 from OpenShift Virtualization 4.10.2 and later 4.10 releases. For Extended Update Support (EUS) versions, you must perform the following tasks before updating to OpenShift Virtualization 4.12 from 4.10.2 and later 4.10 releases:

    • Move all nodes out of maintenance mode.
    • Install the standalone NMO and replace the nodemaintenances.nodemaintenance.kubevirt.io custom resource (CR) with a nodemaintenances.nodemaintenance.medik8s.io CR.

5.4. Technology Preview features

Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Note the following scope of support on the Red Hat Customer Portal for these features:

Technology Preview Features Support Scope

  • The Tekton Tasks Operator (TTO) now integrates OpenShift Virtualization with Red Hat OpenShift Pipelines. TTO includes cluster tasks and example pipelines that allow you to:

    • Create and manage virtual machines (VMs), persistent volume claims (PVCs), and data volumes.
    • Run commands in VMs.
    • Manipulate disk images with libguestfs tools.
    • Install Windows 10 into a new data volume from a Windows installation image (ISO file).
    • Customize a basic Windows 10 installation and then create a new image and template.
  • You can now use the guest agent ping probe to determine if the QEMU guest agent is running on a virtual machine.
  • You can now use Microsoft Windows 11 as a guest operating system. However, OpenShift Virtualization 4.12 does not support USB disks, which are required for a critical function of BitLocker recovery. To protect recovery keys, use other methods described in the BitLocker recovery guide.
  • You can create live migration policies with specific parameters, such as bandwidth usage, maximum number of parallel migrations, and timeout, and apply the policies to groups of virtual machines by using virtual machine and namespace labels.
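
A sketch of such a live migration policy, assuming hypothetical namespace and virtual machine labels:

apiVersion: migrations.kubevirt.io/v1alpha1
kind: MigrationPolicy
metadata:
  name: example-migration-policy       # illustrative name
spec:
  allowAutoConverge: true
  allowPostCopy: false
  bandwidthPerMigration: 64Mi          # bandwidth limit per migration
  completionTimeoutPerGiB: 800         # timeout scaled by the guest memory size
  selectors:
    namespaceSelector:
      app-tier: production             # hypothetical namespace label
    virtualMachineInstanceSelector:
      workload-type: database          # hypothetical virtual machine label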

5.5. Bug fixes

  • You can now configure the HyperConverged CR to enable mediated devices before drivers are installed without losing the new device configuration after driver installation. (BZ#2046298)
  • The OVN-Kubernetes cluster network provider no longer crashes from peak RAM and CPU usage if you create a large number of NodePort services. (OCPBUGS-1940)
  • Cloning more than 100 VMs at once no longer intermittently fails if you use Red Hat Ceph Storage or Red Hat OpenShift Data Foundation Storage. (BZ#1989527)

5.6. Known issues

  • You cannot run OpenShift Virtualization on a single-stack IPv6 cluster. (BZ#2193267)
  • In a heterogeneous cluster with different compute nodes, virtual machines that have HyperV Reenlightenment enabled cannot be scheduled on nodes that do not support timestamp-counter scaling (TSC) or have the appropriate TSC frequency. (BZ#2151169)
  • When you use two pods with different SELinux contexts, VMs with the ocs-storagecluster-cephfs storage class fail to migrate and the VM status changes to Paused. This is because both pods try to access the shared ReadWriteMany CephFS volume at the same time. (BZ#2092271)

    • As a workaround, use the ocs-storagecluster-ceph-rbd storage class to live migrate VMs on a cluster that uses Red Hat Ceph Storage.
  • The TopoLVM provisioner name string has changed in OpenShift Virtualization 4.12. As a result, the automatic import of operating system images might fail with the following error message (BZ#2158521):

    DataVolume.storage spec is missing accessMode and volumeMode, cannot get access mode from StorageProfile.
    • As a workaround:

      1. Update the claimPropertySets array of the storage profile:

        $ oc patch storageprofile <storage_profile> --type=merge -p '{"spec": {"claimPropertySets": [{"accessModes": ["ReadWriteOnce"], "volumeMode": "Block"},
            {"accessModes": ["ReadWriteOnce"], "volumeMode": "Filesystem"}]}}'
      2. Delete the affected data volumes in the openshift-virtualization-os-images namespace. They are recreated with the access mode and volume mode from the updated storage profile.
  • When restoring a VM snapshot for storage whose binding mode is WaitForFirstConsumer, the restored PVCs remain in the Pending state and the restore operation does not progress.

    • As a workaround, start the restored VM, stop it, and then start it again. The VM will be scheduled, the PVCs will be in the Bound state, and the restore operation will complete. (BZ#2149654)
  • VMs created from common templates on a Single Node OpenShift (SNO) cluster display a VMCannotBeEvicted alert because the template’s default eviction strategy is LiveMigrate. You can ignore this alert or remove the alert by updating the VM’s eviction strategy. (BZ#2092412)
  • Uninstalling OpenShift Virtualization does not remove the feature.node.kubevirt.io node labels created by OpenShift Virtualization. You must remove the labels manually. (CNV-22036)
  • Some persistent volume claim (PVC) annotations created by the Containerized Data Importer (CDI) can cause the virtual machine snapshot restore operation to hang indefinitely. (BZ#2070366)

    • As a workaround, you can remove the annotations manually:

      1. Obtain the VirtualMachineSnapshotContent custom resource (CR) name from the status.virtualMachineSnapshotContentName value in the VirtualMachineSnapshot CR.
      2. Edit the VirtualMachineSnapshotContent CR and remove all lines that contain k8s.io/cloneRequest.
      3. If you did not specify a value for spec.dataVolumeTemplates in the VirtualMachine object, delete any DataVolume and PersistentVolumeClaim objects in this namespace where both of the following conditions are true:

        1. The object’s name begins with restore-.
        2. The object is not referenced by virtual machines.

          This step is optional if you specified a value for spec.dataVolumeTemplates.

      4. Repeat the restore operation with the updated VirtualMachineSnapshot CR.
  • Windows 11 virtual machines do not boot on clusters running in FIPS mode. Windows 11 requires a TPM (trusted platform module) device by default. However, the swtpm (software TPM emulator) package is incompatible with FIPS. (BZ#2089301)
  • If your OpenShift Container Platform cluster uses OVN-Kubernetes as the default Container Network Interface (CNI) provider, you cannot attach a Linux bridge or bonding device to a host’s default interface because of a change in the host network topology of OVN-Kubernetes. (BZ#1885605)

    • As a workaround, you can use a secondary network interface connected to your host, or switch to the OpenShift SDN default CNI provider.
  • In some instances, multiple virtual machines can mount the same PVC in read-write mode, which might result in data corruption. (BZ#1992753)

    • As a workaround, avoid using a single PVC in read-write mode with multiple VMs.
  • The Pod Disruption Budget (PDB) prevents pod disruptions for migratable virtual machine images. If the PDB detects pod disruption, then openshift-monitoring sends a PodDisruptionBudgetAtLimit alert every 60 minutes for virtual machine images that use the LiveMigrate eviction strategy. (BZ#2026733)

  • OpenShift Virtualization links a service account token in use by a pod to that specific pod. OpenShift Virtualization implements a service account volume by creating a disk image that contains a token. If you migrate a VM, then the service account volume becomes invalid. (BZ#2037611)

    • As a workaround, use user accounts rather than service accounts because user account tokens are not bound to a specific pod.
  • If you clone more than 100 VMs using the csi-clone cloning strategy, then the Ceph CSI might not purge the clones. Manually deleting the clones can also fail. (BZ#2055595)

    • As a workaround, you can restart the ceph-mgr to purge the VM clones.
  • VMs that use Logical volume management (LVM) with block storage devices require additional configuration to avoid conflicts with Red Hat Enterprise Linux CoreOS (RHCOS) hosts.

    • As a workaround, you can create a VM, provision an LVM, and restart the VM. This creates an empty system.lvmdevices file. (OCPBUGS-5223)

Chapter 6. Installing

6.1. Preparing your cluster for OpenShift Virtualization

Review this section before you install OpenShift Virtualization to ensure that your cluster meets the requirements.

Important

You can use any installation method, including user-provisioned, installer-provisioned, or assisted installer, to deploy OpenShift Container Platform. However, the installation method and the cluster topology might affect OpenShift Virtualization functionality, such as snapshots or live migration.

FIPS mode

If you install your cluster in FIPS mode, no additional setup is required for OpenShift Virtualization.

IPv6

You cannot run OpenShift Virtualization on a single-stack IPv6 cluster. (BZ#2193267)

6.1.1. Hardware and operating system requirements

Review the following hardware and operating system requirements for OpenShift Virtualization.

Supported platforms

Important

Installing OpenShift Virtualization on AWS bare metal instances or on IBM Cloud Bare Metal Servers is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

  • Bare metal instances or servers offered by other cloud providers are not supported.

CPU requirements

  • Supported by Red Hat Enterprise Linux (RHEL) 8
  • Support for Intel 64 or AMD64 CPU extensions
  • Intel VT or AMD-V hardware virtualization extensions enabled
  • NX (no execute) flag enabled

Storage requirements

  • Supported by OpenShift Container Platform
Warning

If you deploy OpenShift Virtualization with Red Hat OpenShift Data Foundation, you must create a dedicated storage class for Windows virtual machine disks. See Optimizing ODF PersistentVolumes for Windows VMs for details.

Operating system requirements

  • Red Hat Enterprise Linux CoreOS (RHCOS) installed on worker nodes

    Note

    RHEL worker nodes are not supported.

  • If your cluster uses worker nodes with different CPUs, live migration failures can occur because different CPUs have different capabilities. To avoid such failures, use CPUs with appropriate capacity for each node and set node affinity on your virtual machines to ensure successful migration. See Configuring a required node affinity rule for more information.

Additional resources

6.1.2. Physical resource overhead requirements

OpenShift Virtualization is an add-on to OpenShift Container Platform and imposes additional overhead that you must account for when planning a cluster. Each cluster machine must accommodate the following overhead requirements in addition to the OpenShift Container Platform requirements. Oversubscribing the physical resources in a cluster can affect performance.

Important

The numbers noted in this documentation are based on Red Hat’s test methodology and setup. These numbers can vary based on your own individual setup and environments.

6.1.2.1. Memory overhead

Calculate the memory overhead values for OpenShift Virtualization by using the equations below.

Cluster memory overhead

Memory overhead per infrastructure node ≈ 150 MiB

Memory overhead per worker node ≈ 360 MiB

Additionally, OpenShift Virtualization environment resources require a total of 2179 MiB of RAM that is spread across all infrastructure nodes.

Virtual machine memory overhead

Memory overhead per virtual machine ≈ (1.002 × requested memory)
              + 218 MiB                                 (1)
              + 8 MiB × (number of vCPUs)               (2)
              + 16 MiB × (number of graphics devices)   (3)
              + (additional memory overhead)            (4)

(1) Required for the processes that run in the virt-launcher pod.
(2) Number of virtual CPUs requested by the virtual machine.
(3) Number of virtual graphics cards requested by the virtual machine.
(4) Additional memory overhead: if your environment includes a Single Root I/O Virtualization (SR-IOV) network device or a Graphics Processing Unit (GPU), allocate 1 GiB of additional memory overhead for each device.

6.1.2.2. CPU overhead

Calculate the cluster processor overhead requirements for OpenShift Virtualization by using the equation below. The CPU overhead per virtual machine depends on your individual setup.

Cluster CPU overhead

CPU overhead for infrastructure nodes ≈ 4 cores

OpenShift Virtualization increases the overall utilization of cluster level services such as logging, routing, and monitoring. To account for this workload, ensure that nodes that host infrastructure components have capacity allocated for 4 additional cores (4000 millicores) distributed across those nodes.

CPU overhead for worker nodes ≈ 2 cores + CPU overhead per virtual machine

Each worker node that hosts virtual machines must have capacity for 2 additional cores (2000 millicores) for OpenShift Virtualization management workloads in addition to the CPUs required for virtual machine workloads.

Virtual machine CPU overhead

If dedicated CPUs are requested, there is a 1:1 impact on the cluster CPU overhead requirement. Otherwise, there are no specific rules about how many CPUs a virtual machine requires.

6.1.2.3. Storage overhead

Use the guidelines below to estimate storage overhead requirements for your OpenShift Virtualization environment.

Cluster storage overhead

Aggregated storage overhead per node ≈ 10 GiB

10 GiB is the estimated on-disk storage impact for each node in the cluster when you install OpenShift Virtualization.

Virtual machine storage overhead

Storage overhead per virtual machine depends on specific requests for resource allocation within the virtual machine. The request could be for ephemeral storage on the node or storage resources hosted elsewhere in the cluster. OpenShift Virtualization does not currently allocate any additional ephemeral storage for the running container itself.

6.1.2.4. Example

As a cluster administrator, if you plan to host 10 virtual machines in the cluster, each with 1 GiB of RAM and 2 vCPUs, the memory impact across the cluster is 11.68 GiB. The estimated on-disk storage impact for each node in the cluster is 10 GiB and the CPU impact for worker nodes that host virtual machine workloads is a minimum of 2 cores.

6.1.3. Object maximums

You must consider the following tested object maximums when planning your cluster:

6.1.4. Restricted network environments

If you install OpenShift Virtualization in a restricted environment with no internet connectivity, you must configure Operator Lifecycle Manager for restricted networks.

If you have limited internet connectivity, you can configure proxy support in Operator Lifecycle Manager to access the Red Hat-provided OperatorHub.

6.1.5. Live migration

Live migration has the following requirements:

  • Shared storage with ReadWriteMany (RWX) access mode.
  • Sufficient RAM and network bandwidth.
  • If the virtual machine uses a host model CPU, the nodes must support the virtual machine’s host model CPU.
Note

You must ensure that there is enough memory request capacity in the cluster to support node drains that result in live migrations. You can determine the approximate required spare memory by using the following calculation:

(Maximum number of nodes that can drain in parallel) × (Highest total VM memory request allocations across nodes)

The default number of migrations that can run in parallel in the cluster is 5.
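
The default of five parallel migrations, along with other live migration limits and timeouts, can be tuned by editing the HyperConverged custom resource. The following is a minimal sketch of the liveMigrationConfig stanza; the values shown are illustrative, and the authoritative field list is in the live migration configuration documentation:

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  liveMigrationConfig:
    parallelMigrationsPerCluster: 5       # cluster-wide limit; 5 is the default noted above
    parallelOutboundMigrationsPerNode: 2  # illustrative value
    completionTimeoutPerGiB: 800          # illustrative timeout, per GiB of migrated data
    progressTimeout: 150                  # illustrative stall timeout, in seconds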

6.1.6. Snapshots and cloning

See OpenShift Virtualization storage features for snapshot and cloning requirements.

6.1.7. Cluster high-availability options

You can configure one of the following high-availability (HA) options for your cluster:

  • Automatic high availability for installer-provisioned infrastructure (IPI) is available by deploying machine health checks.

    Note

    In OpenShift Container Platform clusters installed using installer-provisioned infrastructure and with MachineHealthCheck properly configured, if a node fails the MachineHealthCheck and becomes unavailable to the cluster, it is recycled. What happens next with VMs that ran on the failed node depends on a series of conditions. See About RunStrategies for virtual machines for more detailed information about the potential outcomes and how RunStrategies affect those outcomes.

  • Automatic high availability for both IPI and non-IPI is available by using the Node Health Check Operator on the OpenShift Container Platform cluster to deploy the NodeHealthCheck controller. The controller identifies unhealthy nodes and uses a remediation provider, such as the Self Node Remediation Operator or Fence Agents Remediation Operator, to remediate the unhealthy nodes. For more information on remediation, fencing, and maintaining nodes, see the Workload Availability for Red Hat OpenShift documentation.
  • High availability for any platform is available by using either a monitoring system or a qualified human to monitor node availability. When a node is lost, shut it down and run oc delete node <lost_node>.

    Note

    Without an external monitoring system or a qualified human monitoring node health, virtual machines lose high availability.

6.2. Specifying nodes for OpenShift Virtualization components

Specify the nodes where you want to deploy OpenShift Virtualization Operators, workloads, and controllers by configuring node placement rules.

Note

You can configure node placement for some components after installing OpenShift Virtualization, but there must not be virtual machines present if you want to configure node placement for workloads.

6.2.1. About node placement for virtualization components

You might want to customize where OpenShift Virtualization deploys its components to ensure that:

  • Virtual machines only deploy on nodes that are intended for virtualization workloads.
  • Operators only deploy on infrastructure nodes.
  • Certain nodes are unaffected by OpenShift Virtualization. For example, you have workloads unrelated to virtualization running on your cluster, and you want those workloads to be isolated from OpenShift Virtualization.
6.2.1.1. How to apply node placement rules to virtualization components

You can specify node placement rules for a component by editing the corresponding object directly or by using the web console.

  • For the OpenShift Virtualization Operators that Operator Lifecycle Manager (OLM) deploys, edit the OLM Subscription object directly. Currently, you cannot configure node placement rules for the Subscription object by using the web console.
  • For components that the OpenShift Virtualization Operators deploy, edit the HyperConverged object directly or configure it by using the web console during OpenShift Virtualization installation.
  • For the hostpath provisioner, edit the HostPathProvisioner object directly or configure it by using the web console.

    Warning

    You must schedule the hostpath provisioner and the virtualization components on the same nodes. Otherwise, virtualization pods that use the hostpath provisioner cannot run.

Depending on the object, you can use one or more of the following rule types:

nodeSelector
Allows pods to be scheduled on nodes that are labeled with the key-value pair or pairs that you specify in this field. The node must have labels that exactly match all listed pairs.
affinity
Enables you to use more expressive syntax to set rules that match nodes with pods. Affinity also allows for more nuance in how the rules are applied. For example, you can specify that a rule is a preference, rather than a hard requirement, so that pods are still scheduled if the rule is not satisfied.
tolerations
Allows pods to be scheduled on nodes that have matching taints. If a taint is applied to a node, that node only accepts pods that tolerate the taint.
6.2.1.2. Node placement in the OLM Subscription object

To specify the nodes where OLM deploys the OpenShift Virtualization Operators, edit the Subscription object during OpenShift Virtualization installation. You can include node placement rules in the spec.config field, as shown in the following example:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  startingCSV: kubevirt-hyperconverged-operator.v4.12.13
  channel: "stable"
  config: 1
1
The config field supports nodeSelector and tolerations, but it does not support affinity.
6.2.1.3. Node placement in the HyperConverged object

To specify the nodes where OpenShift Virtualization deploys its components, you can include the nodePlacement object in the HyperConverged Cluster custom resource (CR) file that you create during OpenShift Virtualization installation. You can include nodePlacement under the spec.infra and spec.workloads fields, as shown in the following example:

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  infra:
    nodePlacement: 1
    ...
  workloads:
    nodePlacement:
    ...
1
The nodePlacement fields support nodeSelector, affinity, and tolerations fields.
6.2.1.4. Node placement in the HostPathProvisioner object

You can configure node placement rules in the spec.workload field of the HostPathProvisioner object that you create when you install the hostpath provisioner.

apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  pathConfig:
    path: "</path/to/backing/directory>"
    useNamingPrefix: false
  workload: 1
1
The workload field supports nodeSelector, affinity, and tolerations fields.
6.2.1.5. Additional resources

6.2.2. Example manifests

The following example YAML files use nodePlacement, affinity, and tolerations objects to customize node placement for OpenShift Virtualization components.

6.2.2.1. Operator Lifecycle Manager Subscription object
6.2.2.1.1. Example: Node placement with nodeSelector in the OLM Subscription object

In this example, nodeSelector is configured so that OLM places the OpenShift Virtualization Operators on nodes that are labeled with example.io/example-infra-key = example-infra-value.

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  startingCSV: kubevirt-hyperconverged-operator.v4.12.13
  channel: "stable"
  config:
    nodeSelector:
      example.io/example-infra-key: example-infra-value
6.2.2.1.2. Example: Node placement with tolerations in the OLM Subscription object

In this example, nodes that are reserved for OLM to deploy OpenShift Virtualization Operators have the key=virtualization:NoSchedule taint applied. Only pods with a matching toleration are scheduled to these nodes.

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  startingCSV: kubevirt-hyperconverged-operator.v4.12.13
  channel: "stable"
  config:
    tolerations:
    - key: "key"
      operator: "Equal"
      value: "virtualization"
      effect: "NoSchedule"
6.2.2.2. HyperConverged object
6.2.2.2.1. Example: Node placement with nodeSelector in the HyperConverged Cluster CR

In this example, nodeSelector is configured so that infrastructure resources are placed on nodes that are labeled with example.io/example-infra-key = example-infra-value and workloads are placed on nodes labeled with example.io/example-workloads-key = example-workloads-value.

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  infra:
    nodePlacement:
      nodeSelector:
        example.io/example-infra-key: example-infra-value
  workloads:
    nodePlacement:
      nodeSelector:
        example.io/example-workloads-key: example-workloads-value
6.2.2.2.2. Example: Node placement with affinity in the HyperConverged Cluster CR

In this example, affinity is configured so that infrastructure resources are placed on nodes that are labeled with example.io/example-infra-key = example-infra-value and workloads are placed on nodes labeled with example.io/example-workloads-key = example-workloads-value. Nodes that have more than eight CPUs are preferred for workloads, but if they are not available, pods are still scheduled.

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  infra:
    nodePlacement:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: example.io/example-infra-key
                operator: In
                values:
                - example-infra-value
  workloads:
    nodePlacement:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: example.io/example-workloads-key
                operator: In
                values:
                - example-workloads-value
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: example.io/num-cpus
                operator: Gt
                values:
                - "8"
6.2.2.2.3. Example: Node placement with tolerations in the HyperConverged Cluster CR

In this example, nodes that are reserved for OpenShift Virtualization components have the key=virtualization:NoSchedule taint applied. Only pods with a matching toleration are scheduled to these nodes.

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  workloads:
    nodePlacement:
      tolerations:
      - key: "key"
        operator: "Equal"
        value: "virtualization"
        effect: "NoSchedule"
6.2.2.3. HostPathProvisioner object
6.2.2.3.1. Example: Node placement with nodeSelector in the HostPathProvisioner object

In this example, nodeSelector is configured so that workloads are placed on nodes labeled with example.io/example-workloads-key = example-workloads-value.

apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  pathConfig:
    path: "</path/to/backing/directory>"
    useNamingPrefix: false
  workload:
    nodeSelector:
      example.io/example-workloads-key: example-workloads-value

6.3. Installing OpenShift Virtualization using the web console

Install OpenShift Virtualization to add virtualization functionality to your OpenShift Container Platform cluster.

You can use the OpenShift Container Platform 4.12 web console to subscribe to and deploy the OpenShift Virtualization Operators.

6.3.1. Installing the OpenShift Virtualization Operator

You can install the OpenShift Virtualization Operator from the OpenShift Container Platform web console.

Prerequisites

  • Install OpenShift Container Platform 4.12 on your cluster.
  • Log in to the OpenShift Container Platform web console as a user with cluster-admin permissions.

Procedure

  1. From the Administrator perspective, click Operators → OperatorHub.
  2. In the Filter by keyword field, type Virtualization.
  3. Select the OpenShift Virtualization tile with the Red Hat source label.
  4. Read the information about the Operator and click Install.
  5. On the Install Operator page:

    1. Select stable from the list of available Update Channel options. This ensures that you install the version of OpenShift Virtualization that is compatible with your OpenShift Container Platform version.
    2. For Installed Namespace, ensure that the Operator recommended namespace option is selected. This installs the Operator in the mandatory openshift-cnv namespace, which is automatically created if it does not exist.

      Warning

      Attempting to install the OpenShift Virtualization Operator in a namespace other than openshift-cnv causes the installation to fail.

    3. For Approval Strategy, it is highly recommended that you select Automatic, which is the default value, so that OpenShift Virtualization automatically updates when a new version is available in the stable update channel.

      While it is possible to select the Manual approval strategy, this is inadvisable because of the high risk that it presents to the supportability and functionality of your cluster. Only select Manual if you fully understand these risks and cannot use Automatic.

      Warning

      Because OpenShift Virtualization is only supported when used with the corresponding OpenShift Container Platform version, missing OpenShift Virtualization updates can cause your cluster to become unsupported.

  6. Click Install to make the Operator available to the openshift-cnv namespace.
  7. When the Operator installs successfully, click Create HyperConverged.
  8. Optional: Configure Infra and Workloads node placement options for OpenShift Virtualization components.
  9. Click Create to launch OpenShift Virtualization.

Verification

  • Navigate to the Workloads → Pods page and monitor the OpenShift Virtualization pods until they are all Running. After all the pods display the Running state, you can use OpenShift Virtualization.
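
If you prefer the CLI, you can run the following command and confirm that all pods in the openshift-cnv namespace report the Running status:

$ oc get pods -n openshift-cnv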

6.3.2. Next steps

You might want to additionally configure the following components:

  • The hostpath provisioner is a local storage provisioner designed for OpenShift Virtualization. If you want to configure local storage for virtual machines, you must enable the hostpath provisioner first.

6.4. Installing OpenShift Virtualization using the CLI

Install OpenShift Virtualization to add virtualization functionality to your OpenShift Container Platform cluster. You can subscribe to and deploy the OpenShift Virtualization Operators by using the command line to apply manifests to your cluster.

Note

To specify the nodes where you want OpenShift Virtualization to install its components, configure node placement rules.

6.4.1. Prerequisites

  • Install OpenShift Container Platform 4.12 on your cluster.
  • Install the OpenShift CLI (oc).
  • Log in as a user with cluster-admin privileges.

6.4.2. Subscribing to the OpenShift Virtualization catalog by using the CLI

Before you install OpenShift Virtualization, you must subscribe to the OpenShift Virtualization catalog. Subscribing gives the openshift-cnv namespace access to the OpenShift Virtualization Operators.

To subscribe, configure Namespace, OperatorGroup, and Subscription objects by applying a single manifest to your cluster.

Procedure

  1. Create a YAML file that contains the following manifest:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-cnv
    ---
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: kubevirt-hyperconverged-group
      namespace: openshift-cnv
    spec:
      targetNamespaces:
        - openshift-cnv
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: hco-operatorhub
      namespace: openshift-cnv
    spec:
      source: redhat-operators
      sourceNamespace: openshift-marketplace
      name: kubevirt-hyperconverged
      startingCSV: kubevirt-hyperconverged-operator.v4.12.13
      channel: "stable" 1
    1
    Using the stable channel ensures that you install the version of OpenShift Virtualization that is compatible with your OpenShift Container Platform version.
  2. Create the required Namespace, OperatorGroup, and Subscription objects for OpenShift Virtualization by running the following command:

    $ oc apply -f <file_name>.yaml
Note

You can configure certificate rotation parameters in the YAML file.
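
Certificate rotation parameters are typically configured in the HyperConverged custom resource that you create in the next section. The following sketch shows what a certConfig stanza might look like; the durations are illustrative, so see the certificate rotation documentation for the authoritative schema and defaults:

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  certConfig:
    ca:
      duration: 48h0m0s      # illustrative lifetime of the CA certificate
      renewBefore: 24h0m0s   # illustrative renewal window
    server:
      duration: 24h0m0s
      renewBefore: 12h0m0s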

6.4.3. Deploying the OpenShift Virtualization Operator by using the CLI

You can deploy the OpenShift Virtualization Operator by using the oc CLI.

Prerequisites

  • An active subscription to the OpenShift Virtualization catalog in the openshift-cnv namespace.

Procedure

  1. Create a YAML file that contains the following manifest:

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
  2. Deploy the OpenShift Virtualization Operator by running the following command:

    $ oc apply -f <file_name>.yaml

Verification

  • Ensure that OpenShift Virtualization deployed successfully by watching the PHASE of the cluster service version (CSV) in the openshift-cnv namespace. Run the following command:

    $ watch oc get csv -n openshift-cnv

    The following output displays if deployment was successful:

    Example output

    NAME                                      DISPLAY                    VERSION   REPLACES   PHASE
    kubevirt-hyperconverged-operator.v4.12.13   OpenShift Virtualization   4.12.13                Succeeded

6.4.4. Next steps

You might want to additionally configure the following components:

  • The hostpath provisioner is a local storage provisioner designed for OpenShift Virtualization. If you want to configure local storage for virtual machines, you must enable the hostpath provisioner first.

6.5. Installing the virtctl client

The virtctl client is a command-line utility for managing OpenShift Virtualization resources. It is available for Linux, Windows, and macOS.

6.5.1. Installing the virtctl client on Linux, Windows, and macOS

Download and install the virtctl client for your operating system.

Procedure

  1. Navigate to Virtualization → Overview in the OpenShift Container Platform web console.
  2. Click the Download virtctl link on the upper right corner of the page and download the virtctl client for your operating system.
  3. Install virtctl:

    • For Linux:

      1. Decompress the archive file:

        $ tar -xvf <virtctl-version-distribution.arch>.tar.gz
      2. Run the following command to make the virtctl binary executable:

        $ chmod +x <path/virtctl-file-name>
      3. Move the virtctl binary to a directory in your PATH environment variable.

        You can check your path by running the following command:

        $ echo $PATH
      4. Set the KUBECONFIG environment variable:

        $ export KUBECONFIG=/home/<user>/clusters/current/auth/kubeconfig
    • For Windows:

      1. Decompress the archive file.
      2. Navigate the extracted folder hierarchy and double-click the virtctl executable file to install the client.
      3. Move the virtctl binary to a directory in your PATH environment variable.

        You can check your path by running the following command:

        C:\> path
    • For macOS:

      1. Decompress the archive file.
      2. Move the virtctl binary to a directory in your PATH environment variable.

        You can check your path by running the following command:

        $ echo $PATH

6.5.2. Installing the virtctl as an RPM

You can install the virtctl client on Red Hat Enterprise Linux (RHEL) as an RPM after enabling the OpenShift Virtualization repository.

6.5.2.1. Enabling OpenShift Virtualization repositories

Enable the OpenShift Virtualization repository for your version of Red Hat Enterprise Linux (RHEL).

Prerequisites

  • Your system is registered to a Red Hat account with an active subscription to the "Red Hat Container Native Virtualization" entitlement.

Procedure

  • Enable the appropriate OpenShift Virtualization repository for your operating system by using the subscription-manager CLI tool.

    • To enable the repository for RHEL 8, run:

      # subscription-manager repos --enable cnv-4.12-for-rhel-8-x86_64-rpms
    • To enable the repository for RHEL 7, run:

      # subscription-manager repos --enable rhel-7-server-cnv-4.12-rpms
6.5.2.2. Installing the virtctl client using the yum utility

Install the virtctl client from the kubevirt-virtctl package.

Prerequisites

  • You enabled an OpenShift Virtualization repository on your Red Hat Enterprise Linux (RHEL) system.

Procedure

  • Install the kubevirt-virtctl package:

    # yum install kubevirt-virtctl

6.5.3. Additional resources

6.6. Uninstalling OpenShift Virtualization

You uninstall OpenShift Virtualization by using the web console or the command line interface (CLI) to delete the OpenShift Virtualization workloads, the Operator, and its resources.

6.6.1. Uninstalling OpenShift Virtualization by using the web console

You uninstall OpenShift Virtualization by using the web console to perform the following tasks:

Important

You must first delete all virtual machines and virtual machine instances.

You cannot uninstall OpenShift Virtualization while its workloads remain on the cluster.

6.6.1.1. Deleting the HyperConverged custom resource

To uninstall OpenShift Virtualization, you first delete the HyperConverged custom resource (CR).

Prerequisites

  • You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.

Procedure

  1. Navigate to the Operators → Installed Operators page.
  2. Select the OpenShift Virtualization Operator.
  3. Click the OpenShift Virtualization Deployment tab.
  4. Click the Options menu beside kubevirt-hyperconverged and select Delete HyperConverged.
  5. Click Delete in the confirmation window.
6.6.1.2. Deleting Operators from a cluster using the web console

Cluster administrators can delete installed Operators from a selected namespace by using the web console.

Prerequisites

  • You have access to an OpenShift Container Platform cluster web console using an account with cluster-admin permissions.

Procedure

  1. Navigate to the Operators → Installed Operators page.
  2. Scroll or enter a keyword into the Filter by name field to find the Operator that you want to remove. Then, click on it.
  3. On the right side of the Operator Details page, select Uninstall Operator from the Actions list.

    An Uninstall Operator? dialog box is displayed.

  4. Select Uninstall to remove the Operator, Operator deployments, and pods. Following this action, the Operator stops running and no longer receives updates.

    Note

    This action does not remove resources managed by the Operator, including custom resource definitions (CRDs) and custom resources (CRs). Dashboards and navigation items enabled by the web console and off-cluster resources that continue to run might need manual clean up. To remove these after uninstalling the Operator, you might need to manually delete the Operator CRDs.
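
For example, you can list the remaining OpenShift Virtualization CRDs by using the same label selector that the CLI uninstall procedure later in this chapter uses:

$ oc get crd -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv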

6.6.1.3. Deleting a namespace using the web console

You can delete a namespace by using the OpenShift Container Platform web console.

Prerequisites

  • You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.

Procedure

  1. Navigate to Administration → Namespaces.
  2. Locate the namespace that you want to delete in the list of namespaces.
  3. On the far right side of the namespace listing, select Delete Namespace from the Options menu.
  4. When the Delete Namespace pane opens, enter the name of the namespace that you want to delete in the field.
  5. Click Delete.
6.6.1.4. Deleting OpenShift Virtualization custom resource definitions

You can delete the OpenShift Virtualization custom resource definitions (CRDs) by using the web console.

Prerequisites

  • You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.

Procedure

  1. Navigate to Administration → CustomResourceDefinitions.
  2. Select the Label filter and enter operators.coreos.com/kubevirt-hyperconverged.openshift-cnv in the Search field to display the OpenShift Virtualization CRDs.
  3. Click the Options menu beside each CRD and select Delete CustomResourceDefinition.

6.6.2. Uninstalling OpenShift Virtualization by using the CLI

You can uninstall OpenShift Virtualization by using the OpenShift CLI (oc).

Prerequisites

  • You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
  • You have installed the OpenShift CLI (oc).
  • You have deleted all virtual machines and virtual machine instances. You cannot uninstall OpenShift Virtualization while its workloads remain on the cluster.

Procedure

  1. Delete the HyperConverged custom resource:

    $ oc delete HyperConverged kubevirt-hyperconverged -n openshift-cnv
  2. Delete the OpenShift Virtualization Operator subscription:

    $ oc delete subscription kubevirt-hyperconverged -n openshift-cnv
  3. Delete the OpenShift Virtualization ClusterServiceVersion resource:

    $ oc delete csv -n openshift-cnv -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv
  4. Delete the OpenShift Virtualization namespace:

    $ oc delete namespace openshift-cnv
  5. List the OpenShift Virtualization custom resource definitions (CRDs) by running the oc delete crd command with the dry-run option:

    $ oc delete crd --dry-run=client -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv

    Example output

    customresourcedefinition.apiextensions.k8s.io "cdis.cdi.kubevirt.io" deleted (dry run)
    customresourcedefinition.apiextensions.k8s.io "hostpathprovisioners.hostpathprovisioner.kubevirt.io" deleted (dry run)
    customresourcedefinition.apiextensions.k8s.io "hyperconvergeds.hco.kubevirt.io" deleted (dry run)
    customresourcedefinition.apiextensions.k8s.io "kubevirts.kubevirt.io" deleted (dry run)
    customresourcedefinition.apiextensions.k8s.io "networkaddonsconfigs.networkaddonsoperator.network.kubevirt.io" deleted (dry run)
    customresourcedefinition.apiextensions.k8s.io "ssps.ssp.kubevirt.io" deleted (dry run)
    customresourcedefinition.apiextensions.k8s.io "tektontasks.tektontasks.kubevirt.io" deleted (dry run)

  6. Delete the CRDs by running the oc delete crd command without the dry-run option:

    $ oc delete crd -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv

Chapter 7. Updating OpenShift Virtualization

Learn how Operator Lifecycle Manager (OLM) delivers z-stream and minor version updates for OpenShift Virtualization.

Note
  • The Node Maintenance Operator (NMO) is no longer shipped with OpenShift Virtualization. You can install the NMO from the OperatorHub in the OpenShift Container Platform web console, or by using the OpenShift CLI (oc). For more information on remediation, fencing, and maintaining nodes, see the Workload Availability for Red Hat OpenShift documentation.

    You must perform one of the following tasks before updating to OpenShift Virtualization 4.11 from OpenShift Virtualization 4.10.2 and later releases:

    • Move all nodes out of maintenance mode.
    • Install the standalone NMO and replace the nodemaintenances.nodemaintenance.kubevirt.io custom resource (CR) with a nodemaintenances.nodemaintenance.medik8s.io CR.

7.1. About updating OpenShift Virtualization

  • Operator Lifecycle Manager (OLM) manages the lifecycle of the OpenShift Virtualization Operator. The Marketplace Operator, which is deployed during OpenShift Container Platform installation, makes external Operators available to your cluster.
  • OLM provides z-stream and minor version updates for OpenShift Virtualization. Minor version updates become available when you update OpenShift Container Platform to the next minor version. You cannot update OpenShift Virtualization to the next minor version without first updating OpenShift Container Platform.
  • OpenShift Virtualization subscriptions use a single update channel that is named stable. The stable channel ensures that your OpenShift Virtualization and OpenShift Container Platform versions are compatible.
  • If your subscription’s approval strategy is set to Automatic, the update process starts as soon as a new version of the Operator is available in the stable channel. It is highly recommended to use the Automatic approval strategy to maintain a supportable environment. Each minor version of OpenShift Virtualization is only supported if you run the corresponding OpenShift Container Platform version. For example, you must run OpenShift Virtualization 4.12 on OpenShift Container Platform 4.12.

    • Though it is possible to select the Manual approval strategy, this is not recommended because it risks the supportability and functionality of your cluster. With the Manual approval strategy, you must manually approve every pending update. If OpenShift Container Platform and OpenShift Virtualization updates are out of sync, your cluster becomes unsupported.
  • The amount of time an update takes to complete depends on your network connection. Most automatic updates complete within fifteen minutes.
  • Updating OpenShift Virtualization does not interrupt network connections.
  • Data volumes and their associated persistent volume claims are preserved during update.
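
To confirm which channel and approval strategy your subscription uses, you can query the Subscription object. The following sketch assumes that the subscription is named hco-operatorhub, as in the installation examples in this guide:

$ oc get subscription hco-operatorhub -n openshift-cnv \
  -o jsonpath='{.spec.channel}{"\n"}{.spec.installPlanApproval}{"\n"}'
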
Important

If you have virtual machines running that use hostpath provisioner storage, they cannot be live migrated and might block an OpenShift Container Platform cluster update.

As a workaround, you can reconfigure the virtual machines so that they can be powered off automatically during a cluster update. Remove the evictionStrategy: LiveMigrate field and set the runStrategy field to Always.
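
The following is a minimal sketch of a VirtualMachine manifest configured this way; the VM name and resource values are placeholders:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  runStrategy: Always   # the controller keeps the VM running and restarts it if it stops
  template:
    spec:
      # evictionStrategy: LiveMigrate is removed so that the VM can be powered off during a cluster update
      domain:
        devices: {}
        resources:
          requests:
            memory: 1Gi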

7.1.1. About workload updates

When you update OpenShift Virtualization, virtual machine workloads, including libvirt, virt-launcher, and qemu, update automatically if they support live migration.

Note

Each virtual machine has a virt-launcher pod that runs the virtual machine instance (VMI). The virt-launcher pod runs an instance of libvirt, which is used to manage the virtual machine (VM) process.

You can configure how workloads are updated by editing the spec.workloadUpdateStrategy stanza of the HyperConverged custom resource (CR). There are two available workload update methods: LiveMigrate and Evict.

Because the Evict method shuts down VMI pods, only the LiveMigrate update strategy is enabled by default.

When LiveMigrate is the only update strategy enabled:

  • VMIs that support live migration are migrated during the update process. The VM guest moves into a new pod with the updated components enabled.
  • VMIs that do not support live migration are not disrupted or updated.

    • If a VMI has the LiveMigrate eviction strategy but does not support live migration, it is not updated.

If you enable both LiveMigrate and Evict:

  • VMIs that support live migration use the LiveMigrate update strategy.
  • VMIs that do not support live migration use the Evict update strategy. If a VMI is controlled by a VirtualMachine object that has a runStrategy value of Always, a new VMI is created in a new pod with updated components.
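
To check whether a particular VMI reports itself as live-migratable, you can inspect its LiveMigratable condition. This is a sketch; the VMI name and namespace are placeholders:

$ oc get vmi <vmi_name> -n <namespace> \
  -o jsonpath='{.status.conditions[?(@.type=="LiveMigratable")].status}{"\n"}'
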
Migration attempts and timeouts

When updating workloads, live migration fails if a pod is in the Pending state for the following periods:

5 minutes
If the pod is pending because it is Unschedulable.
15 minutes
If the pod is stuck in the pending state for any reason.

When a VMI fails to migrate, the virt-controller tries to migrate it again. It repeats this process until all migratable VMIs are running on new virt-launcher pods. If a VMI is improperly configured, however, these attempts can repeat indefinitely.

Note

Each attempt corresponds to a migration object. Only the five most recent attempts are held in a buffer. This prevents migration objects from accumulating on the system while retaining information for debugging.

7.1.2. About EUS-to-EUS updates

Every even-numbered minor version of OpenShift Container Platform, including 4.10 and 4.12, is an Extended Update Support (EUS) version. However, because Kubernetes design mandates serial minor version updates, you cannot directly update from one EUS version to the next.

After you update from the source EUS version to the next odd-numbered minor version, you must sequentially update OpenShift Virtualization to all z-stream releases of that minor version that are on your update path. When you have upgraded to the latest applicable z-stream version, you can then update OpenShift Container Platform to the target EUS minor version.

When the OpenShift Container Platform update succeeds, the corresponding update for OpenShift Virtualization becomes available. You can now update OpenShift Virtualization to the target EUS version.

7.1.2.1. Preparing to update

Before beginning an EUS-to-EUS update, you must:

  • Pause worker nodes' machine config pools before you start an EUS-to-EUS update so that the workers are not rebooted twice.
  • Disable automatic workload updates before you begin the update process. This is to prevent OpenShift Virtualization from migrating or evicting your virtual machines (VMs) until you update to your target EUS version.
Note

By default, OpenShift Virtualization automatically updates workloads, such as the virt-launcher pod, when you update the OpenShift Virtualization Operator. You can configure this behavior in the spec.workloadUpdateStrategy stanza of the HyperConverged custom resource.

Learn more about preparing to perform an EUS-to-EUS update.

7.2. Preventing workload updates during an EUS-to-EUS update

When you update from one Extended Update Support (EUS) version to the next, you must manually disable automatic workload updates to prevent OpenShift Virtualization from migrating or evicting workloads during the update process.

Prerequisites

  • You are running an EUS version of OpenShift Container Platform and want to update to the next EUS version. You have not yet updated to the odd-numbered version in between.
  • You read "Preparing to perform an EUS-to-EUS update" and learned the caveats and requirements that pertain to your OpenShift Container Platform cluster.
  • You paused the worker nodes' machine config pools as directed by the OpenShift Container Platform documentation.
  • It is recommended that you use the default Automatic approval strategy. If you use the Manual approval strategy, you must approve all pending updates in the web console. For more details, refer to the "Manually approving a pending Operator update" section.

Procedure

  1. Back up the current workloadUpdateMethods configuration by running the following command:

    $ WORKLOAD_UPDATE_METHODS=$(oc get kv kubevirt-kubevirt-hyperconverged -n openshift-cnv -o jsonpath='{.spec.workloadUpdateStrategy.workloadUpdateMethods}')
  2. Turn off all workload update methods by running the following command:

    $ oc patch hco kubevirt-hyperconverged -n openshift-cnv --type json -p '[{"op":"replace","path":"/spec/workloadUpdateStrategy/workloadUpdateMethods", "value":[]}]'

    Example output

    hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched

  3. Ensure that the HyperConverged Operator is Upgradeable before you continue. Enter the following command and monitor the output:

    $ oc get hco kubevirt-hyperconverged -n openshift-cnv -o json | jq ".status.conditions"

    Example 7.1. Example output

    [
      {
        "lastTransitionTime": "2022-12-09T16:29:11Z",
        "message": "Reconcile completed successfully",
        "observedGeneration": 3,
        "reason": "ReconcileCompleted",
        "status": "True",
        "type": "ReconcileComplete"
      },
      {
        "lastTransitionTime": "2022-12-09T20:30:10Z",
        "message": "Reconcile completed successfully",
        "observedGeneration": 3,
        "reason": "ReconcileCompleted",
        "status": "True",
        "type": "Available"
      },
      {
        "lastTransitionTime": "2022-12-09T20:30:10Z",
        "message": "Reconcile completed successfully",
        "observedGeneration": 3,
        "reason": "ReconcileCompleted",
        "status": "False",
        "type": "Progressing"
      },
      {
        "lastTransitionTime": "2022-12-09T16:39:11Z",
        "message": "Reconcile completed successfully",
        "observedGeneration": 3,
        "reason": "ReconcileCompleted",
        "status": "False",
        "type": "Degraded"
      },
      {
        "lastTransitionTime": "2022-12-09T20:30:10Z",
        "message": "Reconcile completed successfully",
        "observedGeneration": 3,
        "reason": "ReconcileCompleted",
        "status": "True",
        "type": "Upgradeable" 1
      }
    ]
    1
    The OpenShift Virtualization Operator has the Upgradeable status.
  4. Manually update your cluster from the source EUS version to the next minor version of OpenShift Container Platform:

    $ oc adm upgrade

    Verification

    • Check the current version by running the following command:

      $ oc get clusterversion
      Note

      Updating OpenShift Container Platform to the next version is a prerequisite for updating OpenShift Virtualization. For more details, refer to the "Updating clusters" section of the OpenShift Container Platform documentation.

  5. Update OpenShift Virtualization.

    • With the default Automatic approval strategy, OpenShift Virtualization automatically updates to the corresponding version after you update OpenShift Container Platform.
    • If you use the Manual approval strategy, approve the pending updates by using the web console.
  6. Monitor the OpenShift Virtualization update by running the following command:

    $ oc get csv -n openshift-cnv
  7. Update OpenShift Virtualization to every z-stream version that is available for the non-EUS minor version, monitoring each update by running the command shown in the previous step.
  8. Confirm that OpenShift Virtualization successfully updated to the latest z-stream release of the non-EUS version by running the following command:

    $ oc get hco kubevirt-hyperconverged -n openshift-cnv -o json | jq ".status.versions"

    Example output

    [
      {
        "name": "operator",
        "version": "4.12.13"
      }
    ]

  9. Wait until the HyperConverged Operator has the Upgradeable status before you perform the next update. Enter the following command and monitor the output:

    $ oc get hco kubevirt-hyperconverged -n openshift-cnv -o json | jq ".status.conditions"
  10. Update OpenShift Container Platform to the target EUS version.
  11. Confirm that the update succeeded by checking the cluster version:

    $ oc get clusterversion
  12. Update OpenShift Virtualization to the target EUS version.

    • With the default Automatic approval strategy, OpenShift Virtualization automatically updates to the corresponding version after you update OpenShift Container Platform.
    • If you use the Manual approval strategy, approve the pending updates by using the web console.
  13. Monitor the OpenShift Virtualization update by running the following command:

    $ oc get csv -n openshift-cnv

    The update completes when the VERSION field matches the target EUS version and the PHASE field reads Succeeded.

  14. Restore the workload update methods configuration that you backed up:

    $ oc patch hco kubevirt-hyperconverged -n openshift-cnv --type json -p "[{\"op\":\"add\",\"path\":\"/spec/workloadUpdateStrategy/workloadUpdateMethods\", \"value\":$WORKLOAD_UPDATE_METHODS}]"

    Example output

    hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched

    Verification

    • Check the status of VM migration by running the following command:

      $ oc get vmim -A

Next steps

  • You can now unpause the worker nodes' machine config pools.

7.3. Configuring workload update methods

You can configure workload update methods by editing the HyperConverged custom resource (CR).

Prerequisites

  • To use live migration as an update method, you must first enable live migration in the cluster.

    Note

    If a VirtualMachineInstance CR contains evictionStrategy: LiveMigrate and the virtual machine instance (VMI) does not support live migration, the VMI will not update.

Procedure

  1. To open the HyperConverged CR in your default editor, run the following command:

    $ oc edit hco -n openshift-cnv kubevirt-hyperconverged
  2. Edit the workloadUpdateStrategy stanza of the HyperConverged CR. For example:

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
    spec:
      workloadUpdateStrategy:
        workloadUpdateMethods: 1
        - LiveMigrate 2
        - Evict 3
        batchEvictionSize: 10 4
        batchEvictionInterval: "1m0s" 5
    ...
    1
    The methods that can be used to perform automated workload updates. The available values are LiveMigrate and Evict. If you enable both options as shown in this example, updates use LiveMigrate for VMIs that support live migration and Evict for any VMIs that do not support live migration. To disable automatic workload updates, you can either remove the workloadUpdateStrategy stanza or set workloadUpdateMethods: [] to leave the array empty.
    2
    The least disruptive update method. VMIs that support live migration are updated by migrating the virtual machine (VM) guest into a new pod with the updated components enabled. If LiveMigrate is the only workload update method listed, VMIs that do not support live migration are not disrupted or updated.
    3
    A disruptive method that shuts down VMI pods during upgrade. Evict is the only update method available if live migration is not enabled in the cluster. If a VMI is controlled by a VirtualMachine object that has runStrategy: Always configured, a new VMI is created in a new pod with updated components.
    4
    The number of VMIs that can be forced to be updated at a time by using the Evict method. This does not apply to the LiveMigrate method.
    5
    The interval to wait before evicting the next batch of workloads. This does not apply to the LiveMigrate method.
    Note

    You can configure live migration limits and timeouts by editing the spec.liveMigrationConfig stanza of the HyperConverged CR.

  3. To apply your changes, save and exit the editor.

7.4. Approving pending Operator updates

7.4.1. Manually approving a pending Operator update

If an installed Operator has the approval strategy in its subscription set to Manual, when new updates are released in its current update channel, the update must be manually approved before installation can begin.

Prerequisites

  • An Operator previously installed using Operator Lifecycle Manager (OLM).

Procedure

  1. In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators → Installed Operators.
  2. Operators that have a pending update display a status with Upgrade available. Click the name of the Operator you want to update.
  3. Click the Subscription tab. Any updates requiring approval are displayed next to Upgrade Status. For example, it might display 1 requires approval.
  4. Click 1 requires approval, then click Preview Install Plan.
  5. Review the resources that are listed as available for update. When satisfied, click Approve.
  6. Navigate back to the Operators → Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date.
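
If you prefer to approve a pending update from the command line, you can set the approved field on the pending install plan. This is a sketch; identify the name of the pending install plan with the first command before you patch it:

$ oc get installplan -n openshift-cnv

$ oc patch installplan <install_plan_name> -n openshift-cnv \
  --type merge -p '{"spec":{"approved":true}}'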

7.5. Monitoring update status

7.5.1. Monitoring OpenShift Virtualization upgrade status

To monitor the status of an OpenShift Virtualization Operator upgrade, watch the cluster service version (CSV) PHASE. You can also monitor the CSV conditions in the web console or by running the command provided here.

Note

The PHASE and conditions values are approximations that are based on available information.

Prerequisites

  • Log in to the cluster as a user with the cluster-admin role.
  • Install the OpenShift CLI (oc).

Procedure

  1. Run the following command:

    $ oc get csv -n openshift-cnv
  2. Review the output, checking the PHASE field. For example:

    Example output

    VERSION  REPLACES                                        PHASE
    4.9.0    kubevirt-hyperconverged-operator.v4.8.2         Installing
    4.9.0    kubevirt-hyperconverged-operator.v4.9.0         Replacing

  3. Optional: Monitor the aggregated status of all OpenShift Virtualization component conditions by running the following command:

    $ oc get hco -n openshift-cnv kubevirt-hyperconverged \
    -o=jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.message}{"\n"}{end}'

    A successful upgrade results in the following output:

    Example output

    ReconcileComplete  True  Reconcile completed successfully
    Available          True  Reconcile completed successfully
    Progressing        False Reconcile completed successfully
    Degraded           False Reconcile completed successfully
    Upgradeable        True  Reconcile completed successfully

7.5.2. Viewing outdated OpenShift Virtualization workloads

You can view a list of outdated workloads by using the CLI.

Note

If there are outdated virtualization pods in your cluster, the OutdatedVirtualMachineInstanceWorkloads alert fires.

Procedure

  • To view a list of outdated virtual machine instances (VMIs), run the following command:

    $ oc get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces
Note

Configure workload updates to ensure that VMIs update automatically.

7.6. Additional resources

Chapter 8. Security policies

Virtual machine (VM) workloads run as unprivileged pods. So that VMs can use OpenShift Virtualization features, some pods are granted custom security policies that are not available to other pod owners:

  • An extended container_t SELinux policy applies to virt-launcher pods.
  • Security context constraints (SCCs) are defined for the kubevirt-controller service account.

8.1. About workload security

By default, virtual machine (VM) workloads do not run with root privileges in OpenShift Virtualization.

For each VM, a virt-launcher pod runs an instance of libvirt in session mode to manage the VM process. In session mode, the libvirt daemon runs as a non-root user account and only permits connections from clients that are running under the same user identifier (UID). Therefore, VMs run as unprivileged pods, adhering to the security principle of least privilege.

There are no supported OpenShift Virtualization features that require root privileges. If a feature requires root, it might not be supported for use with OpenShift Virtualization.
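
As an illustration, you can inspect the pod-level security context of the virt-launcher pods in a namespace that runs VMs. This is a sketch, and the exact fields that are set can vary by version; the namespace is a placeholder:

$ oc get pods -n <vm_namespace> -l kubevirt.io=virt-launcher \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.securityContext}{"\n"}{end}'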

8.2. Extended SELinux policies for virt-launcher pods

The container_t SELinux policy for virt-launcher pods is extended to enable essential functions of OpenShift Virtualization.

  • The following policy is required for network multi-queue, which enables network performance to scale as the number of available vCPUs increases:

    • allow process self (tun_socket (relabelfrom relabelto attach_queue))
  • The following policy allows virt-launcher to read files under the /proc directory, including /proc/cpuinfo and /proc/uptime:

    • allow process proc_type (file (getattr open read))
  • The following policy allows libvirtd to relay network-related debug messages.

    • allow process self (netlink_audit_socket (nlmsg_relay))

      Note

      Without this policy, any attempt to relay network debug messages is blocked. This might fill the node’s audit logs with SELinux denials.

  • The following policies allow libvirtd to access hugetlbfs, which is required to support huge pages:

    • allow process hugetlbfs_t (dir (add_name create write remove_name rmdir setattr))
    • allow process hugetlbfs_t (file (create unlink))
  • The following policies allow virtiofs to mount filesystems and access NFS:

    • allow process nfs_t (dir (mounton))
    • allow process proc_t (dir (mounton))
    • allow process proc_t (filesystem (mount unmount))
  • The following policy is inherited from upstream Kubevirt, where it enables passt networking:

    • allow process tmpfs_t (filesystem (mount))
Note

OpenShift Virtualization does not support passt at this time.

8.3. Additional OpenShift Container Platform security context constraints and Linux capabilities for the kubevirt-controller service account

Security context constraints (SCCs) control permissions for pods. These permissions include actions that a pod, a collection of containers, can perform and what resources it can access. You can use SCCs to define a set of conditions that a pod must run with to be accepted into the system.

The virt-controller is a cluster controller that creates the virt-launcher pods for virtual machines in the cluster. These pods are granted permissions by the kubevirt-controller service account.

The kubevirt-controller service account is granted additional SCCs and Linux capabilities so that it can create virt-launcher pods with the appropriate permissions. These extended permissions allow virtual machines to use OpenShift Virtualization features that are beyond the scope of typical pods.

The kubevirt-controller service account is granted the following SCCs:

  • scc.AllowHostDirVolumePlugin = true
    This allows virtual machines to use the hostpath volume plugin.
  • scc.AllowPrivilegedContainer = false
    This ensures the virt-launcher pod is not run as a privileged container.
  • scc.AllowedCapabilities = []corev1.Capability{"SYS_NICE", "NET_BIND_SERVICE", "SYS_PTRACE"}

    • SYS_NICE allows setting the CPU affinity.
    • NET_BIND_SERVICE allows DHCP and Slirp operations.
    • SYS_PTRACE enables certain versions of libvirt to find the process ID (PID) of swtpm, a software Trusted Platform Module (TPM) emulator.

8.3.1. Viewing the SCC and RBAC definitions for the kubevirt-controller

You can view the SecurityContextConstraints definition for the kubevirt-controller by using the oc tool:

$ oc get scc kubevirt-controller -o yaml

You can view the RBAC definition for the kubevirt-controller clusterrole by using the oc tool:

$ oc get clusterrole kubevirt-controller -o yaml

8.4. Additional resources

Chapter 9. Using the CLI tools

The two primary CLI tools used for managing resources in the cluster are:

  • The OpenShift Virtualization virtctl client
  • The OpenShift Container Platform oc client

9.1. Prerequisites

9.2. OpenShift Container Platform client commands

The OpenShift Container Platform oc client is a command-line utility for managing OpenShift Container Platform resources, including the VirtualMachine (vm) and VirtualMachineInstance (vmi) object types.

Note

You can use the -n <namespace> flag to specify a different project.

Table 9.1. oc commands
Command | Description

oc login -u <user_name>

Log in to the OpenShift Container Platform cluster as <user_name>.

oc get <object_type>

Display a list of objects for the specified object type in the current project.

oc describe <object_type> <resource_name>

Display details of the specific resource in the current project.

oc create -f <object_config>

Create a resource in the current project from a file name or from stdin.

oc edit <object_type> <resource_name>

Edit a resource in the current project.

oc delete <object_type> <resource_name>

Delete a resource in the current project.

For more comprehensive information on oc client commands, see the OpenShift Container Platform CLI tools documentation.

9.3. Virtctl commands

The virtctl client is a command-line utility for managing OpenShift Virtualization resources.

Table 9.2. virtctl general commands
Command | Description

virtctl version

View the virtctl client and server versions.

virtctl help

View a list of virtctl commands.

virtctl <command> -h|--help

View a list of options for a specific command.

virtctl options

View a list of global command options for any virtctl command.

9.3.1. VM and VMI management commands

You can use virtctl to manage virtual machine (VM) or virtual machine instance (VMI) states and to migrate a VM.

Table 9.3. virtctl VM management commands
Command | Description

virtctl start <vm_name>

Start a VM.

virtctl start --paused <vm_name>

Start a VM in a paused state. This option enables you to interrupt the boot process from the VNC console.

virtctl stop <vm_name>

Stop a VM.

virtctl stop <vm_name> --grace-period 0 --force

Force stop a VM. This option might cause data inconsistency or data loss.

virtctl pause vm|vmi <vm_name>

Pause a VM or VMI. The machine state is kept in memory.

virtctl unpause vm|vmi <vm_name>

Unpause a VM or VMI.

virtctl migrate <vm_name>

Migrate a VM.

virtctl restart <vm_name>

Restart a VM.

9.3.2. VM and VMI connection commands

You can use virtctl to connect to the serial console, expose a port, set a proxy connection, specify a port, and open a VNC connection to a VM.

Table 9.4. virtctl console, expose, and vnc commands
Command | Description

virtctl console <vmi_name>

Connect to the serial console of a VMI.

virtctl expose <vm_name>

Create a service that forwards a designated port of a VM or VMI and expose the service on the specified port of the node.

virtctl vnc --kubeconfig=$KUBECONFIG <vmi_name>

Open a Virtual Network Client (VNC) connection to a VMI.

Accessing the graphical console of a VMI through VNC requires a remote viewer on your local machine.

virtctl vnc --kubeconfig=$KUBECONFIG --proxy-only=true <vmi_name>

Display the port number and connect manually to a VMI by using any viewer through the VNC connection.

virtctl vnc --kubeconfig=$KUBECONFIG --port=<port-number> <vmi_name>

Specify a port number to run the proxy on the specified port, if that port is available.

If a port number is not specified, the proxy runs on a random port.

9.3.3. VM volume export commands

You can use virtctl vmexport commands to create, download, or delete a volume exported from a VM, VM snapshot, or persistent volume claim (PVC).

Table 9.5. virtctl vmexport commands
Command | Description

virtctl vmexport create <vmexport_name> --vm|snapshot|pvc=<object_name>

Create a VirtualMachineExport custom resource (CR) to export a volume from a VM, VM snapshot, or PVC.

  • --vm: Exports the PVCs of a VM.
  • --snapshot: Exports the PVCs contained in a VirtualMachineSnapshot CR.
  • --pvc: Exports a PVC.
  • Optional: --ttl=1h specifies the time to live. The default duration is 2 hours.

virtctl vmexport delete <vmexport_name>

Delete a VirtualMachineExport CR manually.

virtctl vmexport download <vmexport_name> --output=<output_file> --volume=<volume_name>

Download the volume defined in a VirtualMachineExport CR.

  • --output specifies the file format. Example: disk.img.gz.
  • --volume specifies the volume to download. This flag is optional if only one volume is available.

Optional:

  • --keep-vme retains the VirtualMachineExport CR after download. The default behavior is to delete the VirtualMachineExport CR after download.
  • --insecure enables an insecure HTTP connection.

virtctl vmexport download <vmexport_name> --<vm|snapshot|pvc>=<object_name> --output=<output_file> --volume=<volume_name>

Create a VirtualMachineExport CR and then download the volume defined in the CR.
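
For example, a typical export workflow combines the create and download commands from the table above. The export, VM, volume, and output file names are placeholders:

$ virtctl vmexport create example-export --vm=example-vm

$ virtctl vmexport download example-export --volume=example-volume --output=disk.img.gz

Unless you pass --keep-vme, the VirtualMachineExport CR is deleted after the download completes.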

9.3.4. VM memory dump commands

You can use the virtctl memory-dump command to output a virtual machine (VM) memory dump on a PVC. You can specify an existing PVC or use the --create-claim flag to create a new PVC.

Prerequisites

  • The PVC volume mode must be FileSystem.
  • The PVC must be large enough to contain the memory dump.

    The formula for calculating the PVC size is (VMMemorySize + 100Mi) * FileSystemOverhead, where 100Mi is the memory dump overhead.

  • You must enable the hot plug feature gate in the HyperConverged custom resource by running the following command:

    $ oc patch hco kubevirt-hyperconverged -n openshift-cnv --type json \
      -p '[{"op": "add", "path": "/spec/featureGates", "value": "HotplugVolumes"}]'

Downloading the memory dump

You must use the virtctl vmexport download command to download the memory dump:

$ virtctl vmexport download <vmexport_name> --vm|pvc=<object_name> \
  --volume=<volume_name> --output=<output_file>
Table 9.6. virtctl memory-dump commands
Command | Description

virtctl memory-dump get <vm_name> --claim-name=<pvc_name>

Save the memory dump of a VM on a PVC. The memory dump status is displayed in the status section of the VirtualMachine resource.

Optional:

  • --create-claim creates a new PVC with the appropriate size. This flag has the following options:

    • --storage-class=<storage_class>: Specify a storage class for the PVC.
    • --access-mode=<access_mode>: Specify ReadWriteOnce or ReadWriteMany.

virtctl memory-dump get <vm_name>

Rerun the virtctl memory-dump command with the same PVC.

This command overwrites the previous memory dump.

virtctl memory-dump remove <vm_name>

Remove a memory dump.

You must remove a memory dump manually if you want to change the target PVC.

This command removes the association between the VM and the PVC, so that the memory dump is not displayed in the status section of the VirtualMachine resource. The PVC is not affected.
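
For example, the following command saves a memory dump of a VM to a new PVC that is created automatically. The VM and claim names are placeholders:

$ virtctl memory-dump get rhel9-vm --claim-name=rhel9-memdump --create-claim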

9.3.5. Image upload commands

You can use the virtctl image-upload commands to upload a VM image to a data volume.

Table 9.7. virtctl image-upload commands
Command | Description

virtctl image-upload dv <datavolume_name> --image-path=</path/to/image> --no-create

Upload a VM image to a data volume that already exists.

virtctl image-upload dv <datavolume_name> --size=<datavolume_size> --image-path=</path/to/image>

Upload a VM image to a new data volume of a specified requested size.
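
For example, the following command uploads a local QCOW2 image to a new 30Gi data volume. The data volume name and image path are placeholders:

$ virtctl image-upload dv rhel9-dv --size=30Gi --image-path=/home/user/rhel9.qcow2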

9.3.6. Environment information commands

You can use virtctl to view information about versions, file systems, guest operating systems, and logged-in users.

Table 9.8. virtctl environment information commands
Command | Description

virtctl fslist <vmi_name>

View the file systems available on a guest machine.

virtctl guestosinfo <vmi_name>

View information about the operating systems on a guest machine.

virtctl userlist <vmi_name>

View the logged-in users on a guest machine.
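
For example, the following commands query a running VMI named rhel9-vm, which is a placeholder. The QEMU guest agent must be installed in the guest for these commands to return data:

$ virtctl fslist rhel9-vm

$ virtctl guestosinfo rhel9-vm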

9.4. Creating a container using virtctl guestfs

You can use the virtctl guestfs command to deploy an interactive container with libguestfs-tools and a persistent volume claim (PVC) attached to it.

Procedure

  • To deploy a container with libguestfs-tools, mount the PVC, and attach a shell to it, run the following command:

    $ virtctl guestfs -n <namespace> <pvc_name> 1
    1
    The PVC name is a required argument. If you do not include it, an error message appears.

9.5. Libguestfs tools and virtctl guestfs

Libguestfs tools help you access and modify virtual machine (VM) disk images. You can use libguestfs tools to view and edit files in a guest, clone and build virtual machines, and format and resize disks.

You can also use the virtctl guestfs command and its sub-commands to modify, inspect, and debug VM disks on a PVC. To see a complete list of possible sub-commands, enter virt- on the command line and press the Tab key. For example:

Command | Description

virt-edit -a /dev/vda /etc/motd

Edit a file interactively in your terminal.

virt-customize -a /dev/vda --ssh-inject root:string:<public key example>

Inject an ssh key into the guest and create a login.

virt-df -a /dev/vda -h

See how much disk space is used by a VM.

virt-customize -a /dev/vda --run-command 'rpm -qa > /rpm-list'

See the full list of all RPMs installed on a guest by creating an output file containing the full list.

virt-cat -a /dev/vda /rpm-list

Display the output file list of all RPMs created using the virt-customize -a /dev/vda --run-command 'rpm -qa > /rpm-list' command in your terminal.

virt-sysprep -a /dev/vda

Seal a virtual machine disk image to be used as a template.

By default, virtctl guestfs creates a session with everything needed to manage a VM disk. However, the command also supports several flag options if you want to customize the behavior:

Flag Option | Description

-h or --help

Provides help for guestfs.

-n <namespace> option with a <pvc_name> argument

To use a PVC from a specific namespace.

If you do not use the -n <namespace> option, your current project is used. To change projects, use oc project <namespace>.

If you do not include a <pvc_name> argument, an error message appears.

--image string

Specifies the libguestfs-tools container image.

You can configure the container to use a custom image by using the --image option.

--kvm

Indicates that kvm is used by the libguestfs-tools container.

By default, virtctl guestfs sets up kvm for the interactive container, which greatly speeds up the libguestfs-tools execution because it uses QEMU.

If the cluster does not have any KVM-capable nodes, you must disable KVM by setting the option --kvm=false.

If you do not set --kvm=false on such a cluster, the libguestfs-tools pod remains pending because it cannot be scheduled on any node.

--pull-policy string

Specifies the pull policy for the libguestfs-tools image.

You can override the default pull policy by setting the --pull-policy option.

The command also checks whether the PVC is in use by another pod, in which case an error message appears. However, after the libguestfs-tools process starts, the setup cannot prevent a new pod from using the same PVC. You must verify that there are no active virtctl guestfs pods before starting a VM that accesses the same PVC.
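
For example, before starting a VM that uses the same PVC, you can list the pods in the namespace and look for a running libguestfs-tools pod. The namespace is a placeholder, and the exact pod name depends on how the session was created:

$ oc get pods -n <namespace> | grep libguestfs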

Note

The virtctl guestfs command accepts only a single PVC attached to the interactive pod.

9.6. Additional resources

Chapter 10. Virtual machines

10.1. Creating virtual machines

Use one of these procedures to create a virtual machine:

  • Quick Start guided tour
  • Quick create from the Catalog
  • Pasting a pre-configured YAML file with the virtual machine wizard
  • Using the CLI
Warning

Do not create virtual machines in openshift-* namespaces. Instead, create a new namespace or use an existing namespace without the openshift prefix.

When you create virtual machines from the web console, select a virtual machine template that is configured with a boot source. Virtual machine templates with a boot source are labeled as Available boot source or they display a customized label text. Using templates with an available boot source expedites the process of creating virtual machines.

Templates without a boot source are labeled as Boot source required. You can use these templates if you complete the steps for adding a boot source to the virtual machine.

Important

Due to differences in storage behavior, some virtual machine templates are incompatible with single-node OpenShift. To ensure compatibility, do not set the evictionStrategy field for any templates or virtual machines that use data volumes or storage profiles.

10.1.1. Using a Quick Start to create a virtual machine

The web console provides Quick Starts with instructional guided tours for creating virtual machines. You can access the Quick Starts catalog by selecting the Help menu in the Administrator perspective. When you click a Quick Start tile and begin the tour, the system guides you through the process.

Tasks in a Quick Start begin with selecting a Red Hat template. Then, you can add a boot source and import the operating system image. Finally, you can save the custom template and use it to create a virtual machine.

Prerequisites

  • Access to the website where you can download the URL link for the operating system image.

Procedure

  1. In the web console, select Quick Starts from the Help menu.
  2. Click on a tile in the Quick Starts catalog. For example: Creating a Red Hat Enterprise Linux virtual machine.
  3. Follow the instructions in the guided tour and complete the tasks for importing an operating system image and creating a virtual machine. The Virtualization → VirtualMachines page displays the virtual machine.

10.1.2. Quick creating a virtual machine

You can quickly create a virtual machine (VM) by using a template with an available boot source.

Procedure

  1. Click Virtualization → Catalog in the side menu.
  2. Click Boot source available to filter templates with boot sources.

    Note

    By default, the template list will show only Default Templates. Click All Items when filtering to see all available templates for your chosen filters.

  3. Click a template to view its details.
  4. Click Quick Create VirtualMachine to create a VM from the template.

    The virtual machine Details page is displayed with the provisioning status.

Verification

  1. Click Events to view a stream of events as the VM is provisioned.
  2. Click Console to verify that the VM booted successfully.

10.1.3. Creating a virtual machine from a customized template

Some templates require additional parameters, for example, a PVC with a boot source. You can customize select parameters of a template to create a virtual machine (VM).

Procedure

  1. In the web console, select a template:

    1. Click Virtualization → Catalog in the side menu.
    2. Optional: Filter the templates by project, keyword, operating system, or workload profile.
    3. Click the template that you want to customize.
  2. Click Customize VirtualMachine.
  3. Specify parameters for your VM, including its Name and Disk source. You can optionally specify a data source to clone.

Verification

  1. Click Events to view a stream of events as the VM is provisioned.
  2. Click Console to verify that the VM booted successfully.

Refer to the virtual machine fields section when creating a VM from the web console.

10.1.3.1. Networking fields
Name | Description

Name

Name for the network interface controller.

Model

Indicates the model of the network interface controller. Supported values are e1000e and virtio.

Network

List of available network attachment definitions.

Type

List of available binding methods. Select the binding method suitable for the network interface:

  • Default pod network: masquerade
  • Linux bridge network: bridge
  • SR-IOV network: SR-IOV

MAC Address

MAC address for the network interface controller. If a MAC address is not specified, one is assigned automatically.

10.1.3.2. Storage fields
Name | Selection | Description

Source

Blank (creates PVC)

Create an empty disk.

Import via URL (creates PVC)

Import content via URL (HTTP or HTTPS endpoint).

Use an existing PVC

Use a PVC that is already available in the cluster.

Clone existing PVC (creates PVC)

Select an existing PVC available in the cluster and clone it.

Import via Registry (creates PVC)

Import content via container registry.

Container (ephemeral)

Upload content from a container located in a registry accessible from the cluster. The container disk should be used only for read-only filesystems such as CD-ROMs or temporary virtual machines.

Name

 

Name of the disk. The name can contain lowercase letters (a-z), numbers (0-9), hyphens (-), and periods (.), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, or special characters.

Size

 

Size of the disk in GiB.

Type

 

Type of disk. Example: Disk or CD-ROM

Interface

 

Type of disk device. Supported interfaces are virtIO, SATA, and SCSI.

Storage Class

 

The storage class that is used to create the disk.

Advanced storage settings

The following advanced storage settings are optional and available for Blank, Import via URL, and Clone existing PVC disks. Before OpenShift Virtualization 4.11, if you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map. In OpenShift Virtualization 4.11 and later, the system uses the default values from the storage profile.

Note

Use storage profiles to ensure consistent advanced storage settings when provisioning storage for OpenShift Virtualization.

To manually specify Volume Mode and Access Mode, you must clear the Apply optimized StorageProfile settings checkbox, which is selected by default.

Name | Mode description | Parameter | Parameter description

Volume Mode

Defines whether the persistent volume uses a formatted file system or raw block state. Default is Filesystem.

Filesystem

Stores the virtual disk on a file system-based volume.

Block

Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it.

Access Mode

Access mode of the persistent volume.

ReadWriteOnce (RWO)

Volume can be mounted as read-write by a single node.

ReadWriteMany (RWX)

Volume can be mounted as read-write by many nodes at one time.

Note

This is required for some features, such as live migration of virtual machines between nodes.

ReadOnlyMany (ROX)

Volume can be mounted as read only by many nodes.

10.1.3.3. Cloud-init fields
Name | Description

Authorized SSH Keys

The user’s public key that is copied to ~/.ssh/authorized_keys on the virtual machine.

Custom script

Replaces other options with a field in which you paste a custom cloud-init script.

To configure storage class defaults, use storage profiles. For more information, see Customizing the storage profile.

10.1.3.4. Pasting in a pre-configured YAML file to create a virtual machine

Create a virtual machine by writing or pasting a YAML configuration file. A valid example virtual machine configuration is provided by default whenever you open the YAML edit screen.

If your YAML configuration is invalid when you click Create, an error message indicates the parameter in which the error occurs. Only one error is shown at a time.

Note

Navigating away from the YAML screen while editing cancels any changes to the configuration you have made.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Click Create and select With YAML.
  3. Write or paste your virtual machine configuration in the editable window.

    1. Alternatively, use the example virtual machine provided by default in the YAML screen.
  4. Optional: Click Download to download the YAML configuration file in its present state.
  5. Click Create to create the virtual machine.

The virtual machine is listed on the VirtualMachines page.

10.1.4. Using the CLI to create a virtual machine

You can create a virtual machine from a VirtualMachine manifest.

Procedure

  1. Edit the VirtualMachine manifest for your VM. For example, the following manifest configures a Red Hat Enterprise Linux (RHEL) VM:

    Example 10.1. Example manifest for a RHEL VM

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      labels:
        app: <vm_name> 1
      name: <vm_name>
    spec:
      dataVolumeTemplates:
      - apiVersion: cdi.kubevirt.io/v1beta1
        kind: DataVolume
        metadata:
          name: <vm_name>
        spec:
          sourceRef:
            kind: DataSource
            name: rhel9
            namespace: openshift-virtualization-os-images
          storage:
            resources:
              requests:
                storage: 30Gi
      running: false
      template:
        metadata:
          labels:
            kubevirt.io/domain: <vm_name>
        spec:
          domain:
            cpu:
              cores: 1
              sockets: 2
              threads: 1
            devices:
              disks:
              - disk:
                  bus: virtio
                name: rootdisk
              - disk:
                  bus: virtio
                name: cloudinitdisk
              interfaces:
              - masquerade: {}
                name: default
              rng: {}
            features:
              smm:
                enabled: true
            firmware:
              bootloader:
                efi: {}
            resources:
              requests:
                memory: 8Gi
          evictionStrategy: LiveMigrate
          networks:
          - name: default
            pod: {}
          volumes:
          - dataVolume:
              name: <vm_name>
            name: rootdisk
          - cloudInitNoCloud:
              userData: |-
                #cloud-config
                user: cloud-user
                password: '<password>' 2
                chpasswd: { expire: False }
            name: cloudinitdisk
    1
    Specify the name of the virtual machine.
    2
    Specify the password for cloud-user.
  2. Create a virtual machine by using the manifest file:

    $ oc create -f <vm_manifest_file>.yaml
  3. Optional: Start the virtual machine:

    $ virtctl start <vm_name>
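
Optionally, verify that the VM was created and check whether it is running by using the oc client. The VM name is a placeholder:

$ oc get vm <vm_name>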

10.1.5. Virtual machine storage volume types

Storage volume type | Description

ephemeral

A local copy-on-write (COW) image that uses a network volume as a read-only backing store. The backing volume must be a PersistentVolumeClaim. The ephemeral image is created when the virtual machine starts and stores all writes locally. The ephemeral image is discarded when the virtual machine is stopped, restarted, or deleted. The backing volume (PVC) is not mutated in any way.

persistentVolumeClaim

Attaches an available PV to a virtual machine. Attaching a PV allows for the virtual machine data to persist between sessions.

Importing an existing virtual machine disk into a PVC by using CDI and attaching the PVC to a virtual machine instance is the recommended method for importing existing virtual machines into OpenShift Container Platform. There are some requirements for the disk to be used within a PVC.

dataVolume

Data volumes build on the persistentVolumeClaim disk type by managing the process of preparing the virtual machine disk via an import, clone, or upload operation. VMs that use this volume type are guaranteed not to start until the volume is ready.

Specify type: dataVolume or type: "". If you specify any other value for type, such as persistentVolumeClaim, a warning is displayed, and the virtual machine does not start.

cloudInitNoCloud

Attaches a disk that contains the referenced cloud-init NoCloud data source, providing user data and metadata to the virtual machine. A cloud-init installation is required inside the virtual machine disk.

containerDisk

References an image, such as a virtual machine disk, that is stored in the container image registry. The image is pulled from the registry and attached to the virtual machine as a disk when the virtual machine is launched.

A containerDisk volume is not limited to a single virtual machine and is useful for creating large numbers of virtual machine clones that do not require persistent storage.

Only RAW and QCOW2 formats are supported disk types for the container image registry. QCOW2 is recommended for reduced image size.

Note

A containerDisk volume is ephemeral. It is discarded when the virtual machine is stopped, restarted, or deleted. A containerDisk volume is useful for read-only file systems such as CD-ROMs or for disposable virtual machines.

emptyDisk

Creates an additional sparse QCOW2 disk that is tied to the life-cycle of the virtual machine. The data survives guest-initiated reboots in the virtual machine but is discarded when the virtual machine stops or is restarted from the web console. The empty disk is used to store application dependencies and data that otherwise exceeds the limited temporary file system of an ephemeral disk.

The disk capacity size must also be provided.
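
For example, a containerDisk volume is defined in the spec.template.spec.volumes stanza of a VirtualMachine manifest. The following minimal sketch assumes that a disk named rootdisk is also listed under devices.disks, and the image reference is a placeholder:

volumes:
- containerDisk:
    image: quay.io/example/rhel9-containerdisk:latest
  name: rootdisk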

10.1.6. About RunStrategies for virtual machines

A RunStrategy for virtual machines determines a virtual machine instance’s (VMI) behavior, depending on a series of conditions. The spec.runStrategy setting is an alternative to the spec.running setting in the virtual machine configuration and allows greater flexibility for how VMIs are created and managed than spec.running, which accepts only true or false. The two settings are mutually exclusive: you can use either spec.running or spec.runStrategy, but not both. An error occurs if both are used.

There are four defined RunStrategies.

Always
A VMI is always present when a virtual machine is created. A new VMI is created if the original stops for any reason, which is the same behavior as spec.running: true.
RerunOnFailure
A VMI is re-created if the previous instance fails due to an error. The instance is not re-created if the virtual machine stops successfully, such as when it shuts down.
Manual
The start, stop, and restart virtctl client commands can be used to control the VMI’s state and existence.
Halted
No VMI is present when a virtual machine is created, which is the same behavior as spec.running: false.

Different combinations of the start, stop, and restart virtctl commands determine which RunStrategy is used.

The following table follows a VM’s transition from different states. The first column shows the VM’s initial RunStrategy. Each additional column shows a virtctl command and the new RunStrategy after that command is run.

Initial RunStrategy | start | stop | restart

Always | - | Halted | Always

RerunOnFailure | - | Halted | RerunOnFailure

Manual | Manual | Manual | Manual

Halted | Always | - | -

Note

In OpenShift Virtualization clusters installed using installer-provisioned infrastructure, when a node fails the MachineHealthCheck and becomes unavailable to the cluster, VMs with a RunStrategy of Always or RerunOnFailure are rescheduled on a new node.

apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  runStrategy: Always 1
  template:
...
1
The current runStrategy setting for the VM.
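
For example, you can change the RunStrategy of an existing VM by patching its manifest, provided that spec.running is not also set. The VM name is a placeholder:

$ oc patch vm rhel9-vm --type merge -p '{"spec":{"runStrategy":"Halted"}}'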

10.1.7. Additional resources

10.2. Editing virtual machines

You can update a virtual machine configuration using either the YAML editor in the web console or the OpenShift CLI on the command line. You can also update a subset of the parameters in the Virtual Machine Details screen.

10.2.1. Editing a virtual machine in the web console

You can edit a virtual machine by using the OpenShift Container Platform web console or the command line interface.

Procedure

  1. Navigate to Virtualization → VirtualMachines in the web console.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. Click any field that has the pencil icon, which indicates that the field is editable. For example, click the current Boot mode setting, such as BIOS or UEFI, to open the Boot mode window and select an option from the list.
  4. Click Save.
Note

If the virtual machine is running, changes to Boot Order or Flavor will not take effect until you restart the virtual machine.

You can view pending changes by clicking View Pending Changes on the right side of the relevant field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.

10.2.2. Editing a virtual machine YAML configuration using the web console

You can edit the YAML configuration of a virtual machine in the web console. Some parameters cannot be modified. If you click Save with an invalid configuration, an error message indicates the parameter that cannot be changed.

Note

Navigating away from the YAML screen while editing cancels any changes to the configuration you have made.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Select a virtual machine.
  3. Click the YAML tab to display the editable configuration.
  4. Optional: You can click Download to download the YAML file locally in its current state.
  5. Edit the file and click Save.

A confirmation message shows that the modification has been successful and includes the updated version number for the object.

10.2.3. Editing a virtual machine YAML configuration using the CLI

Use this procedure to edit a virtual machine YAML configuration using the CLI.

Prerequisites

  • You configured a virtual machine with a YAML object configuration file.
  • You installed the oc CLI.

Procedure

  1. Run the following command to update the virtual machine configuration:

    $ oc edit <object_type> <object_ID>
  2. Open the object configuration.
  3. Edit the YAML.
  4. If you edit a running virtual machine, you need to do one of the following:

    • Restart the virtual machine.
    • Run the following command for the new configuration to take effect:

      $ oc apply -f <updated_configuration_file>.yaml
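
For example, a typical edit-and-restart sequence for a running VM might look like the following, where rhel9-vm is a placeholder name:

$ oc edit vm rhel9-vm

$ virtctl restart rhel9-vm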

10.2.4. Adding a virtual disk to a virtual machine

Use this procedure to add a virtual disk to a virtual machine.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details screen.
  3. Click the Disks tab and then click Add disk.
  4. In the Add disk window, specify the Source, Name, Size, Type, Interface, and Storage Class.

    1. Optional: You can enable preallocation if you use a blank disk source and require maximum write performance when creating data volumes. To do so, select the Enable preallocation checkbox.
    2. Optional: You can clear Apply optimized StorageProfile settings to change the Volume Mode and Access Mode for the virtual disk. If you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map.
  5. Click Add.
Note

If the virtual machine is running, the new disk is in the pending restart state and will not be attached until you restart the virtual machine.

The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.

To configure storage class defaults, use storage profiles. For more information, see Customizing the storage profile.

10.2.4.1. Editing CD-ROMs for VirtualMachines

Use the following procedure to edit CD-ROMs for virtual machines.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details screen.
  3. Click the Disks tab.
  4. Click the Options menu kebab for the CD-ROM that you want to edit and select Edit.
  5. In the Edit CD-ROM window, edit the fields: Source, Persistent Volume Claim, Name, Type, and Interface.
  6. Click Save.
10.2.4.2. Storage fields
Name | Selection | Description

Source

Blank (creates PVC)

Create an empty disk.

Import via URL (creates PVC)

Import content via URL (HTTP or HTTPS endpoint).

Use an existing PVC

Use a PVC that is already available in the cluster.

Clone existing PVC (creates PVC)

Select an existing PVC available in the cluster and clone it.

Import via Registry (creates PVC)

Import content via container registry.

Container (ephemeral)

Upload content from a container located in a registry accessible from the cluster. The container disk should be used only for read-only filesystems such as CD-ROMs or temporary virtual machines.

Name

 

Name of the disk. The name can contain lowercase letters (a-z), numbers (0-9), hyphens (-), and periods (.), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, or special characters.

Size

 

Size of the disk in GiB.

Type

 

Type of disk. Example: Disk or CD-ROM

Interface

 

Type of disk device. Supported interfaces are virtIO, SATA, and SCSI.

Storage Class

 

The storage class that is used to create the disk.

Advanced storage settings

The following advanced storage settings are optional and available for Blank, Import via URL, and Clone existing PVC disks. Before OpenShift Virtualization 4.11, if you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map. In OpenShift Virtualization 4.11 and later, the system uses the default values from the storage profile.

Note

Use storage profiles to ensure consistent advanced storage settings when provisioning storage for OpenShift Virtualization.

To manually specify Volume Mode and Access Mode, you must clear the Apply optimized StorageProfile settings checkbox, which is selected by default.

Name | Mode description | Parameter | Parameter description

Volume Mode

Defines whether the persistent volume uses a formatted file system or raw block state. Default is Filesystem.

Filesystem

Stores the virtual disk on a file system-based volume.

Block

Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it.

Access Mode

Access mode of the persistent volume.

ReadWriteOnce (RWO)

Volume can be mounted as read-write by a single node.

ReadWriteMany (RWX)

Volume can be mounted as read-write by many nodes at one time.

Note

This is required for some features, such as live migration of virtual machines between nodes.

ReadOnlyMany (ROX)

Volume can be mounted as read only by many nodes.

10.2.5. Adding a network interface to a virtual machine

Use this procedure to add a network interface to a virtual machine.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details screen.
  3. Click the Network Interfaces tab.
  4. Click Add Network Interface.
  5. In the Add Network Interface window, specify the Name, Model, Network, Type, and MAC Address of the network interface.
  6. Click Add.
Note

If the virtual machine is running, the new network interface is in the pending restart state and changes will not take effect until you restart the virtual machine.

The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.

10.2.5.1. Networking fields
Name | Description

Name

Name for the network interface controller.

Model

Indicates the model of the network interface controller. Supported values are e1000e and virtio.

Network

List of available network attachment definitions.

Type

List of available binding methods. Select the binding method suitable for the network interface:

  • Default pod network: masquerade
  • Linux bridge network: bridge
  • SR-IOV network: SR-IOV

MAC Address

MAC address for the network interface controller. If a MAC address is not specified, one is assigned automatically.

10.2.6. Additional resources

10.3. Editing boot order

You can update the values for a boot order list by using the web console or the CLI.

With Boot Order in the Virtual Machine Overview page, you can:

  • Select a disk or network interface controller (NIC) and add it to the boot order list.
  • Edit the order of the disks or NICs in the boot order list.
  • Remove a disk or NIC from the boot order list, and return it back to the inventory of bootable sources.

10.3.1. Adding items to a boot order list in the web console

Add items to a boot order list by using the web console.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. Click the Details tab.
  4. Click the pencil icon that is located on the right side of Boot Order. If a YAML configuration does not exist, or if this is the first time that you are creating a boot order list, the following message displays: No resource selected. VM will attempt to boot from disks by order of appearance in YAML file.
  5. Click Add Source and select a bootable disk or network interface controller (NIC) for the virtual machine.
  6. Add any additional disks or NICs to the boot order list.
  7. Click Save.
Note

If the virtual machine is running, changes to Boot Order will not take effect until you restart the virtual machine.

You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.

10.3.2. Editing a boot order list in the web console

Edit the boot order list in the web console.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. Click the Details tab.
  4. Click the pencil icon that is located on the right side of Boot Order.
  5. Choose the appropriate method to move the item in the boot order list:

    • If you do not use a screen reader, hover over the arrow icon next to the item that you want to move, drag the item up or down, and drop it in a location of your choice.
    • If you use a screen reader, press the Up Arrow key or Down Arrow key to move the item in the boot order list. Then, press the Tab key to drop the item in a location of your choice.
  6. Click Save.
Note

If the virtual machine is running, changes to the boot order list will not take effect until you restart the virtual machine.

You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.

10.3.3. Editing a boot order list in the YAML configuration file

Edit the boot order list in a YAML configuration file by using the CLI.

Procedure

  1. Open the YAML configuration file for the virtual machine by running the following command:

    $ oc edit vm example
  2. Edit the YAML file and modify the values for the boot order associated with a disk or network interface controller (NIC). For example:

    disks:
      - bootOrder: 1 1
        disk:
          bus: virtio
        name: containerdisk
      - disk:
          bus: virtio
        name: cloudinitdisk
      - cdrom:
          bus: virtio
        name: cd-drive-1
    interfaces:
      - bootOrder: 2 2
        macAddress: '02:96:c4:00:00:00'
        masquerade: {}
        name: default
    1
    The boot order value specified for the disk.
    2
    The boot order value specified for the network interface controller.
  3. Save the YAML file.
  4. In the web console, click Reload to apply the updated boot order values from the YAML file to the boot order list.

10.3.4. Removing items from a boot order list in the web console

Remove items from a boot order list by using the web console.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. Click the Details tab.
  4. Click the pencil icon that is located on the right side of Boot Order.
  5. Click the Remove icon next to the item. The item is removed from the boot order list and saved in the list of available boot sources. If you remove all items from the boot order list, the following message displays: No resource selected. VM will attempt to boot from disks by order of appearance in YAML file.
Note

If the virtual machine is running, changes to Boot Order will not take effect until you restart the virtual machine.

You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.

10.4. Deleting virtual machines

You can delete a virtual machine from the web console or by using the oc command line interface.

10.4.1. Deleting a virtual machine using the web console

Deleting a virtual machine permanently removes it from the cluster.

Note

When you delete a virtual machine, the data volume it uses is automatically deleted.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization → VirtualMachines from the side menu.
  2. Click the Options menu kebab of the virtual machine that you want to delete and select Delete.

    • Alternatively, click the virtual machine name to open the VirtualMachine details page and click Actions → Delete.
  3. In the confirmation pop-up window, click Delete to permanently delete the virtual machine.

10.4.2. Deleting a virtual machine by using the CLI

You can delete a virtual machine by using the oc command line interface (CLI). The oc client enables you to perform actions on multiple virtual machines.

Note

When you delete a virtual machine, the data volume it uses is automatically deleted.

Prerequisites

  • Identify the name of the virtual machine that you want to delete.

Procedure

  • Delete the virtual machine by running the following command:

    $ oc delete vm <vm_name>
    Note

    This command only deletes objects that exist in the current project. Specify the -n <project_name> option if the object you want to delete is in a different project or namespace.
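
For example, because the oc client can act on multiple objects at once, you can delete all VMs that share a label in a given project. The label and project name are placeholders:

$ oc delete vm -l app=demo -n example-project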

10.5. Exporting virtual machines

You can export a virtual machine (VM) and its associated disks in order to import a VM into another cluster or to analyze the volume for forensic purposes.

You create a VirtualMachineExport custom resource (CR) by using the command line interface.

Alternatively, you can use the virtctl vmexport command to create a VirtualMachineExport CR and to download exported volumes.

10.5.1. Creating a VirtualMachineExport custom resource

You can create a VirtualMachineExport custom resource (CR) to export the following objects:

  • Virtual machine (VM): Exports the persistent volume claims (PVCs) of a specified VM.
  • VM snapshot: Exports PVCs contained in a VirtualMachineSnapshot CR.
  • PVC: Exports a PVC. If the PVC is used by another pod, such as the virt-launcher pod, the export remains in a Pending state until the PVC is no longer in use.

The VirtualMachineExport CR creates internal and external links for the exported volumes. Internal links are valid within the cluster. External links can be accessed by using an Ingress or Route.

The export server supports the following file formats:

  • raw: Raw disk image file.
  • gzip: Compressed disk image file.
  • dir: PVC directory and files.
  • tar.gz: Compressed PVC file.

Prerequisites

  • The VM must be shut down for a VM export.

Procedure

  1. Create a VirtualMachineExport manifest to export a volume from a VirtualMachine, VirtualMachineSnapshot, or PersistentVolumeClaim CR according to the following example and save it as example-export.yaml:

    VirtualMachineExport example

    apiVersion: export.kubevirt.io/v1alpha1
    kind: VirtualMachineExport
    metadata:
      name: example-export
    spec:
      source:
        apiGroup: "kubevirt.io" 1
        kind: VirtualMachine 2
        name: example-vm
      ttlDuration: 1h 3

    1
    Specify the appropriate API group:
    • "kubevirt.io" for VirtualMachine.
    • "snapshot.kubevirt.io" for VirtualMachineSnapshot.
    • "" for PersistentVolumeClaim.
    2
    Specify VirtualMachine, VirtualMachineSnapshot, or PersistentVolumeClaim.
    3
    Optional. The default duration is 2 hours.
  2. Create the VirtualMachineExport CR:

    $ oc create -f example-export.yaml
  3. Get the VirtualMachineExport CR:

    $ oc get vmexport example-export -o yaml

    The internal and external links for the exported volumes are displayed in the status stanza:

    Output example

    apiVersion: export.kubevirt.io/v1alpha1
    kind: VirtualMachineExport
    metadata:
      name: example-export
      namespace: example
    spec:
      source:
        apiGroup: ""
        kind: PersistentVolumeClaim
        name: example-pvc
      tokenSecretRef: example-token
    status:
      conditions:
      - lastProbeTime: null
        lastTransitionTime: "2022-06-21T14:10:09Z"
        reason: podReady
        status: "True"
        type: Ready
      - lastProbeTime: null
        lastTransitionTime: "2022-06-21T14:09:02Z"
        reason: pvcBound
        status: "True"
        type: PVCReady
      links:
        external: 1
          cert: |-
            -----BEGIN CERTIFICATE-----
            ...
            -----END CERTIFICATE-----
          volumes:
          - formats:
            - format: raw
              url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img
            - format: gzip
              url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img.gz
            name: example-disk
        internal:  2
          cert: |-
            -----BEGIN CERTIFICATE-----
            ...
            -----END CERTIFICATE-----
          volumes:
          - formats:
            - format: raw
              url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img
            - format: gzip
              url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img.gz
            name: example-disk
      phase: Ready
      serviceName: virt-export-example-export

    1
    External links are accessible from outside the cluster by using an Ingress or Route.
    2
    Internal links are only valid inside the cluster.

10.6. Managing virtual machine instances

If you have standalone virtual machine instances (VMIs) that were created independently outside of the OpenShift Virtualization environment, you can manage them by using the web console or by using oc or virtctl commands from the command-line interface (CLI).

The virtctl command provides more virtualization options than the oc command. For example, you can use virtctl to pause a VM or expose a port.

10.6.1. About virtual machine instances

A virtual machine instance (VMI) is a representation of a running virtual machine (VM). When a VMI is owned by a VM or by another object, you manage it through its owner in the web console or by using the oc command-line interface (CLI).

A standalone VMI is created and started independently with a script, through automation, or by using other methods in the CLI. In your environment, you might have standalone VMIs that were developed and started outside of the OpenShift Virtualization environment. You can continue to manage those standalone VMIs by using the CLI. You can also use the web console for specific tasks associated with standalone VMIs:

  • List standalone VMIs and their details.
  • Edit labels and annotations for a standalone VMI.
  • Delete a standalone VMI.

When you delete a VM, the associated VMI is automatically deleted. You delete a standalone VMI directly because it is not owned by VMs or other objects.

Note

Before you uninstall OpenShift Virtualization, list and view the standalone VMIs by using the CLI or the web console. Then, delete any outstanding VMIs.

10.6.2. Listing all virtual machine instances using the CLI

You can list all virtual machine instances (VMIs) in your cluster, including standalone VMIs and those owned by virtual machines, by using the oc command-line interface (CLI).

Procedure

  • List all VMIs by running the following command:

    $ oc get vmis -A

10.6.3. Listing standalone virtual machine instances using the web console

Using the web console, you can list and view standalone virtual machine instances (VMIs) in your cluster that are not owned by virtual machines (VMs).

Note

VMIs that are owned by VMs or other objects are not displayed in the web console. The web console displays only standalone VMIs. If you want to list all VMIs in your cluster, you must use the CLI.

Procedure

  • Click Virtualization → VirtualMachines from the side menu.

    You can identify a standalone VMI by a dark colored badge next to its name.

10.6.4. Editing a standalone virtual machine instance using the web console

You can edit the annotations and labels of a standalone virtual machine instance (VMI) using the web console. Other fields are not editable.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization → VirtualMachines from the side menu.
  2. Select a standalone VMI to open the VirtualMachineInstance details page.
  3. On the Details tab, click the pencil icon beside Annotations or Labels.
  4. Make the relevant changes and click Save.

10.6.5. Deleting a standalone virtual machine instance using the CLI

You can delete a standalone virtual machine instance (VMI) by using the oc command-line interface (CLI).

Prerequisites

  • Identify the name of the VMI that you want to delete.

Procedure

  • Delete the VMI by running the following command:

    $ oc delete vmi <vmi_name>

10.6.6. Deleting a standalone virtual machine instance using the web console

Delete a standalone virtual machine instance (VMI) from the web console.

Procedure

  1. In the OpenShift Container Platform web console, click Virtualization → VirtualMachines from the side menu.
  2. Click Actions → Delete VirtualMachineInstance.
  3. In the confirmation pop-up window, click Delete to permanently delete the standalone VMI.

10.7. Controlling virtual machine states

You can stop, start, restart, and unpause virtual machines from the web console.

You can use virtctl to manage virtual machine states and perform other actions from the CLI. For example, you can use virtctl to force stop a VM or expose a port.

10.7.1. Starting a virtual machine

You can start a virtual machine from the web console.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Find the row that contains the virtual machine that you want to start.
  3. Navigate to the appropriate menu for your use case:

    • To stay on this page, where you can perform actions on multiple virtual machines:

      1. Click the Options menu kebab located at the far right end of the row.
    • To view comprehensive information about the selected virtual machine before you start it:

      1. Access the VirtualMachine details page by clicking the name of the virtual machine.
      2. Click Actions.
  4. Select Start.
  5. In the confirmation window, click Start to start the virtual machine.
Note

When you start a virtual machine that is provisioned from a URL source for the first time, the virtual machine has a status of Importing while OpenShift Virtualization imports the content from the URL endpoint. Depending on the size of the image, this process might take several minutes.

10.7.2. Restarting a virtual machine

You can restart a running virtual machine from the web console.

Important

To avoid errors, do not restart a virtual machine while it has a status of Importing.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Find the row that contains the virtual machine that you want to restart.
  3. Navigate to the appropriate menu for your use case:

    • To stay on this page, where you can perform actions on multiple virtual machines:

      1. Click the Options menu kebab located at the far right end of the row.
    • To view comprehensive information about the selected virtual machine before you restart it:

      1. Access the VirtualMachine details page by clicking the name of the virtual machine.
      2. Click Actions → Restart.
  4. In the confirmation window, click Restart to restart the virtual machine.

10.7.3. Stopping a virtual machine

You can stop a virtual machine from the web console.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Find the row that contains the virtual machine that you want to stop.
  3. Navigate to the appropriate menu for your use case:

    • To stay on this page, where you can perform actions on multiple virtual machines:

      1. Click the Options menu kebab located at the far right end of the row.
    • To view comprehensive information about the selected virtual machine before you stop it:

      1. Access the VirtualMachine details page by clicking the name of the virtual machine.
      2. Click Actions → Stop.
  4. In the confirmation window, click Stop to stop the virtual machine.

10.7.4. Unpausing a virtual machine

You can unpause a paused virtual machine from the web console.

Prerequisites

  • At least one of your virtual machines must have a status of Paused.

    Note

    You can pause virtual machines by using the virtctl client.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Find the row that contains the virtual machine that you want to unpause.
  3. Navigate to the appropriate menu for your use case:

    • To stay on this page, where you can perform actions on multiple virtual machines:

      1. In the Status column, click Paused.
    • To view comprehensive information about the selected virtual machine before you unpause it:

      1. Access the VirtualMachine details page by clicking the name of the virtual machine.
      2. Click the pencil icon that is located on the right side of Status.
  4. In the confirmation window, click Unpause to unpause the virtual machine.

10.8. Accessing virtual machine consoles

OpenShift Virtualization provides different virtual machine consoles that you can use to accomplish different product tasks. You can access these consoles through the OpenShift Container Platform web console and by using CLI commands.

Note

Running concurrent VNC connections to a single virtual machine is not currently supported.

10.8.1. Accessing virtual machine consoles in the OpenShift Container Platform web console

You can connect to virtual machines by using the serial console or the VNC console in the OpenShift Container Platform web console.

You can connect to Windows virtual machines by using the desktop viewer console, which uses RDP (remote desktop protocol), in the OpenShift Container Platform web console.

10.8.1.1. Connecting to the serial console

Connect to the serial console of a running virtual machine from the Console tab on the VirtualMachine details page of the web console.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization → VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. Click the Console tab. The VNC console opens by default.
  4. Click Disconnect to ensure that only one console session is open at a time. Otherwise, the VNC console session remains active in the background.
  5. Click the VNC Console drop-down list and select Serial Console.
  6. Click Disconnect to end the console session.
  7. Optional: Open the serial console in a separate window by clicking Open Console in New Window.
10.8.1.2. Connecting to the VNC console

Connect to the VNC console of a running virtual machine from the Console tab on the VirtualMachine details page of the web console.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization → VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. Click the Console tab. The VNC console opens by default.
  4. Optional: Open the VNC console in a separate window by clicking Open Console in New Window.
  5. Optional: Send key combinations to the virtual machine by clicking Send Key.
  6. Click outside the console window and then click Disconnect to end the session.
10.8.1.3. Connecting to a Windows virtual machine with RDP

The Desktop viewer console, which utilizes the Remote Desktop Protocol (RDP), provides a better console experience for connecting to Windows virtual machines.

To connect to a Windows virtual machine with RDP, download the console.rdp file for the virtual machine from the Console tab on the VirtualMachine details page of the web console and supply it to your preferred RDP client.

Prerequisites

  • A running Windows virtual machine with the QEMU guest agent installed. The qemu-guest-agent is included in the VirtIO drivers.
  • An RDP client installed on a machine on the same network as the Windows virtual machine.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization → VirtualMachines from the side menu.
  2. Click a Windows virtual machine to open the VirtualMachine details page.
  3. Click the Console tab.
  4. From the list of consoles, select Desktop viewer.
  5. Click Launch Remote Desktop to download the console.rdp file.
  6. Reference the console.rdp file in your preferred RDP client to connect to the Windows virtual machine.
10.8.1.4. Switching between virtual machine displays

If your Windows virtual machine (VM) has a vGPU attached, you can switch between the default display and the vGPU display by using the web console.

Prerequisites

  • The mediated device is configured in the HyperConverged custom resource and assigned to the VM.
  • The VM is running.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization → VirtualMachines.
  2. Select a Windows virtual machine to open the Overview screen.
  3. Click the Console tab.
  4. From the list of consoles, select VNC console.
  5. Choose the appropriate key combination from the Send Key list:

    1. To access the default VM display, select Ctrl + Alt + 1.
    2. To access the vGPU display, select Ctrl + Alt + 2.

Additional resources

10.8.1.5. Copying the SSH command using the web console

Copy the command to connect to a virtual machine (VM) terminal via SSH.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization → VirtualMachines from the side menu.
  2. Click the Options menu kebab for your virtual machine and select Copy SSH command.
  3. Paste it in the terminal to access the VM.

10.8.2. Accessing virtual machine consoles by using CLI commands

10.8.2.1. Accessing a virtual machine via SSH by using virtctl

You can use the virtctl ssh command to forward SSH traffic to a virtual machine (VM) by using your local SSH client. If you have previously configured SSH key authentication with the VM, skip to step 2 of the procedure because step 1 is not required.

Note

Heavy SSH traffic on the control plane can slow down the API server. If you regularly need a large number of connections, use a dedicated Kubernetes Service object to access the virtual machine.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • You have installed the virtctl client.
  • The virtual machine you want to access is running.
  • You are in the same project as the VM.

Procedure

  1. Configure SSH key authentication:

    1. Use the ssh-keygen command to generate an SSH public key pair:

      $ ssh-keygen -f <key_file> 1
      1
      Specify the file in which to store the keys.
    2. Create an SSH authentication secret which contains the SSH public key to access the VM:

      $ oc create secret generic my-pub-key --from-file=key1=<key_file>.pub
    3. Add a reference to the secret in the VirtualMachine manifest. For example:

      apiVersion: kubevirt.io/v1
      kind: VirtualMachine
      metadata:
        name: testvm
      spec:
        running: true
        template:
          spec:
            accessCredentials:
            - sshPublicKey:
                source:
                  secret:
                    secretName: my-pub-key 1
                propagationMethod:
                  configDrive: {} 2
      # ...
      1
      Reference to the SSH authentication Secret object.
      2
      The SSH public key is injected into the VM as cloud-init metadata using the configDrive provider.
    4. Restart the VM to apply your changes.
  2. Connect to the VM via SSH:

    1. Run the following command to access the VM via SSH:

      $ virtctl ssh -i <key_file> <vm_username>@<vm_name>
    2. Optional: To securely transfer files to or from the VM, use the following commands:

      Copy a file from your machine to the VM

      $ virtctl scp -i <key_file> <filename> <vm_username>@<vm_name>:

      Copy a file from the VM to your machine

      $ virtctl scp -i <key_file> <vm_username>@<vm_name>:<filename> .

10.8.2.2. Using OpenSSH and virtctl port-forward

You can use your local OpenSSH client and the virtctl port-forward command to connect to a running virtual machine (VM). You can use this method with Ansible to automate the configuration of VMs.

This method is recommended for low-traffic applications because port-forwarding traffic is sent over the control plane. This method is not recommended for high-traffic applications such as Rsync or Remote Desktop Protocol because it places a heavy burden on the API server.

Prerequisites

  • You have installed the virtctl client.
  • The virtual machine you want to access is running.
  • The environment where you installed the virtctl tool has the cluster permissions required to access the VM. For example, you ran oc login or you set the KUBECONFIG environment variable.

Procedure

  1. Add the following text to the ~/.ssh/config file on your client machine:

    Host vm/*
      ProxyCommand virtctl port-forward --stdio=true %h %p
  2. Connect to the VM by running the following command:

    $ ssh <user>@vm/<vm_name>.<namespace>
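
    If you prefer not to edit the ~/.ssh/config file, you can supply the same proxy directly on the command line (a sketch; the user, VM name, and namespace are placeholders):

    $ ssh -o 'ProxyCommand=virtctl port-forward --stdio=true %h %p' <user>@vm/<vm_name>.<namespace>
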
10.8.2.3. Accessing the serial console of a virtual machine instance

The virtctl console command opens a serial console to the specified virtual machine instance.

Prerequisites

  • The virt-viewer package must be installed.
  • The virtual machine instance you want to access must be running.

Procedure

  • Connect to the serial console with virtctl:

    $ virtctl console <VMI>
10.8.2.4. Accessing the graphical console of a virtual machine instance with VNC

The virtctl client utility can use the remote-viewer function to open a graphical console to a running virtual machine instance. This capability is included in the virt-viewer package.

Prerequisites

  • The virt-viewer package must be installed.
  • The virtual machine instance you want to access must be running.
Note

If you use virtctl via SSH on a remote machine, you must forward the X session to your machine.

Procedure

  1. Connect to the graphical interface with the virtctl utility:

    $ virtctl vnc <VMI>
  2. If the command fails, use the -v flag to collect troubleshooting information:

    $ virtctl vnc <VMI> -v 4
10.8.2.5. Connecting to a Windows virtual machine with an RDP console

Create a Kubernetes Service object to connect to a Windows virtual machine (VM) by using your local Remote Desktop Protocol (RDP) client.

Prerequisites

  • A running Windows virtual machine with the QEMU guest agent installed. The qemu-guest-agent is included in the VirtIO drivers.
  • An RDP client installed on your local machine.

Procedure

  1. Edit the VirtualMachine manifest to add the label for service creation:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: vm-ephemeral
      namespace: example-namespace
    spec:
      running: false
      template:
        metadata:
          labels:
            special: key 1
    # ...
    1
    Add the label special: key in the spec.template.metadata.labels section.
    Note

    Labels on a virtual machine are passed through to the pod. The special: key label must match the label in the spec.selector attribute of the Service manifest.

  2. Save the VirtualMachine manifest file to apply your changes.
  3. Create a Service manifest to expose the VM:

    apiVersion: v1
    kind: Service
    metadata:
      name: rdpservice 1
      namespace: example-namespace 2
    spec:
      ports:
      - targetPort: 3389 3
        protocol: TCP
      selector:
        special: key 4
      type: NodePort 5
    # ...
    1
    The name of the Service object.
    2
    The namespace where the Service object resides. This must match the metadata.namespace field of the VirtualMachine manifest.
    3
    The VM port to be exposed by the service. It must reference an open port if a port list is defined in the VM manifest.
    4
    The reference to the label that you added in the spec.template.metadata.labels stanza of the VirtualMachine manifest.
    5
    The type of service.
  4. Save the Service manifest file.
  5. Create the service by running the following command:

    $ oc create -f <service_name>.yaml
  6. Start the VM. If the VM is already running, restart it.
  7. Query the Service object to verify that it is available:

    $ oc get service -n example-namespace

    Example output for NodePort service

    NAME         TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
    rdpservice   NodePort   172.30.232.73   <none>        3389:30000/TCP   5m

  8. Run the following command to obtain the IP address for the node:

    $ oc get node <node_name> -o wide

    Example output

    NAME    STATUS   ROLES   AGE    VERSION  INTERNAL-IP      EXTERNAL-IP
    node01  Ready    worker  6d22h  v1.24.0  192.168.55.101   <none>

  9. Specify the node IP address and the assigned port in your preferred RDP client.
  10. Enter the user name and password to connect to the Windows virtual machine.
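
As an alternative to writing the Service manifest manually, a similar NodePort service can typically be created with the virtctl expose command. The following is a sketch that reuses the example names from this procedure:

  $ virtctl expose vm vm-ephemeral --namespace=example-namespace --name=rdpservice --type=NodePort --port=3389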

10.9. Automating Windows installation with sysprep

You can use Microsoft DVD images and sysprep to automate the installation, setup, and software provisioning of Windows virtual machines.

10.9.1. Using a Windows DVD to create a VM disk image

Microsoft does not provide disk images for download, but you can create a disk image using a Windows DVD. This disk image can then be used to create virtual machines.

Procedure

  1. In the OpenShift Virtualization web console, click Storage → PersistentVolumeClaims → Create PersistentVolumeClaim → With Data upload form.
  2. Select the intended project.
  3. Set the Persistent Volume Claim Name.
  4. Upload the VM disk image from the Windows DVD. The image is now available as a boot source to create a new Windows VM.
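
If you prefer the CLI, the disk image can typically be uploaded to a new PVC with the virtctl image-upload command instead (a sketch; the PVC name, size, and ISO path are placeholders, and --insecure skips TLS verification of the upload route):

  $ virtctl image-upload pvc <pvc_name> --size=10Gi --image-path=<windows_dvd>.iso --insecure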

10.9.2. Using a disk image to install Windows

You can use a disk image to install Windows on your virtual machine.

Prerequisites

  • You must create a disk image using a Windows DVD.
  • You must create an autounattend.xml answer file. See the Microsoft documentation for details.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization → Catalog from the side menu.
  2. Select a Windows template and click Customize VirtualMachine.
  3. Select Upload (Upload a new file to a PVC) from the Disk source list and browse to the DVD image.
  4. Click Review and create VirtualMachine.
  5. Clear Clone available operating system source to this Virtual Machine.
  6. Clear Start this VirtualMachine after creation.
  7. On the Sysprep section of the Scripts tab, click Edit.
  8. Browse to the autounattend.xml answer file and click Save.
  9. Click Create VirtualMachine.
  10. On the YAML tab, replace running: false with runStrategy: RerunOnFailure and click Save.

The VM will start with the sysprep disk containing the autounattend.xml answer file.

10.9.3. Generalizing a Windows VM using sysprep

Generalizing an image removes all system-specific configuration data from the image so that it can be used to deploy new virtual machines (VMs).

Before generalizing the VM, you must ensure the sysprep tool cannot detect an answer file after the unattended Windows installation.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization → VirtualMachines.
  2. Select a Windows VM to open the VirtualMachine details page.
  3. Click the Disks tab.
  4. Click the Options menu kebab for the sysprep disk and select Detach.
  5. Click Detach.
  6. Rename C:\Windows\Panther\unattend.xml to avoid detection by the sysprep tool.
  7. Start the sysprep program by running the following command:

    %WINDIR%\System32\Sysprep\sysprep.exe /generalize /shutdown /oobe /mode:vm
  8. After the sysprep tool completes, the Windows VM shuts down. The disk image of the VM is now available to use as an installation image for Windows VMs.

You can now specialize the VM.

10.9.4. Specializing a Windows virtual machine

Specializing a virtual machine (VM) configures the computer-specific information from a generalized Windows image onto the VM.

Prerequisites

  • You must have a generalized Windows disk image.
  • You must create an unattend.xml answer file. See the Microsoft documentation for details.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization → Catalog.
  2. Select a Windows template and click Customize VirtualMachine.
  3. Select PVC (clone PVC) from the Disk source list.
  4. Specify the Persistent Volume Claim project and Persistent Volume Claim name of the generalized Windows image.
  5. Click Review and create VirtualMachine.
  6. Click the Scripts tab.
  7. In the Sysprep section, click Edit, browse to the unattend.xml answer file, and click Save.
  8. Click Create VirtualMachine.

During the initial boot, Windows uses the unattend.xml answer file to specialize the VM. The VM is now ready to use.

10.9.5. Additional resources

10.10. Triggering virtual machine failover by resolving a failed node

If a node fails and machine health checks are not deployed on your cluster, virtual machines (VMs) with RunStrategy: Always configured are not automatically relocated to healthy nodes. To trigger VM failover, you must manually delete the Node object.

Note

If you installed your cluster by using installer-provisioned infrastructure and you properly configured machine health checks:

  • Failed nodes are automatically recycled.
  • Virtual machines with RunStrategy set to Always or RerunOnFailure are automatically scheduled on healthy nodes.

10.10.1. Prerequisites

  • A node where a virtual machine was running has the NotReady condition.
  • The virtual machine that was running on the failed node has RunStrategy set to Always.
  • You have installed the OpenShift CLI (oc).
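
For example, you can confirm which nodes have the NotReady condition before you proceed:

  $ oc get nodes | grep NotReady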

10.10.2. Deleting nodes from a bare metal cluster

When you delete a node using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node are not deleted. Any bare pods not backed by a replication controller become inaccessible to OpenShift Container Platform. Pods backed by replication controllers are rescheduled to other available nodes. You must delete local manifest pods.

Procedure

Delete a node from an OpenShift Container Platform cluster running on bare metal by completing the following steps:

  1. Mark the node as unschedulable:

    $ oc adm cordon <node_name>
  2. Drain all pods on the node:

    $ oc adm drain <node_name> --force=true

    This step might fail if the node is offline or unresponsive. Even if the node does not respond, it might still be running a workload that writes to shared storage. To avoid data corruption, power down the physical hardware before you proceed.

  3. Delete the node from the cluster:

    $ oc delete node <node_name>

    Although the node object is now deleted from the cluster, it can still rejoin the cluster after reboot or if the kubelet service is restarted. To permanently delete the node and all its data, you must decommission the node.

  4. If you powered down the physical hardware, turn it back on so that the node can rejoin the cluster.

10.10.3. Verifying virtual machine failover

After all resources are terminated on the unhealthy node, a new virtual machine instance (VMI) is automatically created on a healthy node for each relocated VM. To confirm that the VMI was created, view all VMIs by using the oc CLI.

10.10.3.1. Listing all virtual machine instances using the CLI

You can list all virtual machine instances (VMIs) in your cluster, including standalone VMIs and those owned by virtual machines, by using the oc command-line interface (CLI).

Procedure

  • List all VMIs by running the following command:

    $ oc get vmis -A
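
    To confirm that a relocated VMI is scheduled on a healthy node, you can also inspect its status.nodeName field (a sketch; the VMI name and namespace are placeholders):

    $ oc get vmi <vmi_name> -n <namespace> -o jsonpath='{.status.nodeName}'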

10.11. Installing the QEMU guest agent on virtual machines

The QEMU guest agent is a daemon that runs on the virtual machine and passes information to the host about the virtual machine, users, file systems, and secondary networks.

10.11.1. Installing QEMU guest agent on a Linux virtual machine

The qemu-guest-agent is widely available and is included by default in Red Hat virtual machines. Install the agent and start the service.

To check if your virtual machine (VM) has the QEMU guest agent installed and running, verify that the AgentConnected condition is listed in the VM status.
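
For example, you can check the corresponding condition on the running virtual machine instance (VMI) by using the CLI (a sketch; the VM name is a placeholder):

  $ oc get vmi <vm_name> -o jsonpath='{.status.conditions[?(@.type=="AgentConnected")].status}'

A value of True indicates that the guest agent is connected.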

Note

To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent.

The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM’s file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI.

Procedure

  1. Access the virtual machine command line through one of the consoles or by SSH.
  2. Install the QEMU guest agent on the virtual machine:

    $ yum install -y qemu-guest-agent
  3. Ensure the service is persistent and start it:

    $ systemctl enable --now qemu-guest-agent
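
Verification

  • Optional: Confirm that the agent service is active, for example:

    $ systemctl status qemu-guest-agent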

10.11.2. Installing QEMU guest agent on a Windows virtual machine

For Windows virtual machines, the QEMU guest agent is included in the VirtIO drivers. Install the drivers on an existing or a new Windows installation.

To check if your virtual machine (VM) has the QEMU guest agent installed and running, verify that the AgentConnected condition is listed in the VM status.

Note

To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent.

The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM’s file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI.

10.11.2.1. Installing VirtIO drivers on an existing Windows virtual machine

Install the VirtIO drivers from the attached SATA CD drive to an existing Windows virtual machine.

Note

This procedure uses a generic approach to adding drivers to Windows. The process might differ slightly between versions of Windows. See the installation documentation for your version of Windows for specific installation steps.

Procedure

  1. Start the virtual machine and connect to a graphical console.
  2. Log in to a Windows user session.
  3. Open Device Manager and expand Other devices to list any Unknown device.

    1. Open the Device Properties to identify the unknown device. Right-click the device and select Properties.
    2. Click the Details tab and select Hardware Ids in the Property list.
    3. Compare the Value for the Hardware Ids with the supported VirtIO drivers.
  4. Right-click the device and select Update Driver Software.
  5. Click Browse my computer for driver software and browse to the attached SATA CD drive, where the VirtIO drivers are located. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture.
  6. Click Next to install the driver.
  7. Repeat this process for all the necessary VirtIO drivers.
  8. After the driver installs, click Close to close the window.
  9. Reboot the virtual machine to complete the driver installation.
10.11.2.2. Installing VirtIO drivers during Windows installation

Install the VirtIO drivers from the attached SATA CD drive during Windows installation.

Note

This procedure uses a generic approach to the Windows installation and the installation method might differ between versions of Windows. See the documentation for the version of Windows that you are installing.

Procedure

  1. Start the virtual machine and connect to a graphical console.
  2. Begin the Windows installation process.
  3. Select the Advanced installation.
  4. The storage destination will not be recognized until the driver is loaded. Click Load driver.
  5. The drivers are attached as a SATA CD drive. Click OK and browse the CD drive for the storage driver to load. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture.
  6. Repeat the previous two steps for all required drivers.
  7. Complete the Windows installation.

10.12. Viewing the QEMU guest agent information for virtual machines

When the QEMU guest agent runs on the virtual machine, you can use the web console to view information about the virtual machine, users, file systems, and secondary networks.

10.12.1. Prerequisites

  • The QEMU guest agent is installed and running on the virtual machine.

10.12.2. About the QEMU guest agent information in the web console

When the QEMU guest agent is installed, the Overview and Details tabs on the VirtualMachine details page display information about the hostname, operating system, time zone, and logged-in users.

The VirtualMachine details page shows information about the guest operating system installed on the virtual machine. The Details tab displays a table with information for logged in users. The Disks tab displays a table with information for file systems.

Note

If the QEMU guest agent is not installed, the Overview and the Details tabs display information about the operating system that was specified when the virtual machine was created.

10.12.3. Viewing the QEMU guest agent information in the web console

You can use the web console to view information for virtual machines that is passed by the QEMU guest agent to the host.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Select a virtual machine name to open the VirtualMachine details page.
  3. Click the Details tab to view active users.
  4. Click the Disks tab to view information about the file systems.

10.13. Managing config maps, secrets, and service accounts in virtual machines

You can use secrets, config maps, and service accounts to pass configuration data to virtual machines. For example, you can:

  • Give a virtual machine access to a service that requires credentials by adding a secret to the virtual machine.
  • Store non-confidential configuration data in a config map so that a pod or another object can consume the data.
  • Allow a component to access the API server by associating a service account with that component.
Note

OpenShift Virtualization exposes secrets, config maps, and service accounts as virtual machine disks so that you can use them across platforms without additional overhead.

10.13.1. Adding a secret, config map, or service account to a virtual machine

You add a secret, config map, or service account to a virtual machine by using the OpenShift Container Platform web console.

These resources are added to the virtual machine as disks. You then mount the secret, config map, or service account as you would mount any other disk.

If the virtual machine is running, changes do not take effect until you restart the virtual machine. The newly added resources are marked as pending changes on both the Environment and Disks tabs in the Pending Changes banner at the top of the page.

Prerequisites

  • The secret, config map, or service account that you want to add must exist in the same namespace as the target virtual machine.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. In the Environment tab, click Add Config Map, Secret or Service Account.
  4. Click Select a resource and select a resource from the list. A six-character serial number is automatically generated for the selected resource.
  5. Optional: Click Reload to revert the environment to its last saved state.
  6. Click Save.

Verification

  1. On the VirtualMachine details page, click the Disks tab and verify that the secret, config map, or service account is included in the list of disks.
  2. Restart the virtual machine by clicking Actions → Restart.

You can now mount the secret, config map, or service account as you would mount any other disk.
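
For example, inside a Linux guest you might identify the new disk and mount it (a sketch; the device name and mount point are hypothetical and depend on how the disk is presented to the guest):

  $ lsblk
  $ sudo mkdir -p /mnt/app-config
  $ sudo mount /dev/sdb /mnt/app-config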

10.13.2. Removing a secret, config map, or service account from a virtual machine

Remove a secret, config map, or service account from a virtual machine by using the OpenShift Container Platform web console.

Prerequisites

  • You must have at least one secret, config map, or service account that is attached to a virtual machine.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. Click the Environment tab.
  4. Find the item that you want to delete in the list, and click Remove on the right side of the item.
  5. Click Save.
Note

You can reset the form to the last saved state by clicking Reload.

Verification

  1. On the VirtualMachine details page, click the Disks tab.
  2. Check to ensure that the secret, config map, or service account that you removed is no longer included in the list of disks.

10.13.3. Additional resources

10.14. Installing VirtIO driver on an existing Windows virtual machine

10.14.1. About VirtIO drivers

VirtIO drivers are paravirtualized device drivers required for Microsoft Windows virtual machines to run in OpenShift Virtualization. The supported drivers are available in the container-native-virtualization/virtio-win container disk of the Red Hat Ecosystem Catalog.

The container-native-virtualization/virtio-win container disk must be attached to the virtual machine as a SATA CD drive to enable driver installation. You can install VirtIO drivers during Windows installation on the virtual machine or add them to an existing Windows installation.

After the drivers are installed, the container-native-virtualization/virtio-win container disk can be removed from the virtual machine.

See also: Installing VirtIO driver on a new Windows virtual machine.

10.14.2. Supported VirtIO drivers for Microsoft Windows virtual machines

Table 10.1. Supported drivers
Driver name | Hardware ID | Description
viostor | VEN_1AF4&DEV_1001, VEN_1AF4&DEV_1042 | The block driver. Sometimes displays as an SCSI Controller in the Other devices group.
viorng | VEN_1AF4&DEV_1005, VEN_1AF4&DEV_1044 | The entropy source driver. Sometimes displays as a PCI Device in the Other devices group.
NetKVM | VEN_1AF4&DEV_1000, VEN_1AF4&DEV_1041 | The network driver. Sometimes displays as an Ethernet Controller in the Other devices group. Available only if a VirtIO NIC is configured.

10.14.3. Adding VirtIO drivers container disk to a virtual machine

OpenShift Virtualization distributes VirtIO drivers for Microsoft Windows as a container disk, which is available from the Red Hat Ecosystem Catalog. To install these drivers to a Windows virtual machine, attach the container-native-virtualization/virtio-win container disk to the virtual machine as a SATA CD drive in the virtual machine configuration file.

Prerequisites

  • Download the container-native-virtualization/virtio-win container disk from the Red Hat Ecosystem Catalog. This is not mandatory, because the container disk is downloaded from the Red Hat registry if it is not already present in the cluster, but it can reduce installation time.

Procedure

  1. Add the container-native-virtualization/virtio-win container disk as a cdrom disk in the Windows virtual machine configuration file. The container disk will be downloaded from the registry if it is not already present in the cluster.

    spec:
      domain:
        devices:
          disks:
            - name: virtiocontainerdisk
              bootOrder: 2 1
              cdrom:
                bus: sata
    volumes:
      - containerDisk:
          image: container-native-virtualization/virtio-win
        name: virtiocontainerdisk
    1
    OpenShift Virtualization boots virtual machine disks in the order defined in the VirtualMachine configuration file. You can either define other disks for the virtual machine before the container-native-virtualization/virtio-win container disk or use the optional bootOrder parameter to ensure the virtual machine boots from the correct disk. If you specify the bootOrder for a disk, it must be specified for all disks in the configuration.
  2. The disk is available once the virtual machine has started:

    • If you add the container disk to a running virtual machine, use oc apply -f <vm.yaml> in the CLI or reboot the virtual machine for the changes to take effect.
    • If the virtual machine is not running, use virtctl start <vm>.

After the virtual machine has started, the VirtIO drivers can be installed from the attached SATA CD drive.
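
To confirm that the container disk is attached, you can, for example, list the disk names that are defined on the running virtual machine instance (a sketch; the VM name is a placeholder):

  $ oc get vmi <vm_name> -o jsonpath='{.spec.domain.devices.disks[*].name}'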

10.14.4. Installing VirtIO drivers on an existing Windows virtual machine

Install the VirtIO drivers from the attached SATA CD drive to an existing Windows virtual machine.

Note

This procedure uses a generic approach to adding drivers to Windows. The process might differ slightly between versions of Windows. See the installation documentation for your version of Windows for specific installation steps.

Procedure

  1. Start the virtual machine and connect to a graphical console.
  2. Log in to a Windows user session.
  3. Open Device Manager and expand Other devices to list any Unknown device.

    1. Open the Device Properties to identify the unknown device. Right-click the device and select Properties.
    2. Click the Details tab and select Hardware Ids in the Property list.
    3. Compare the Value for the Hardware Ids with the supported VirtIO drivers.
  4. Right-click the device and select Update Driver Software.
  5. Click Browse my computer for driver software and browse to the attached SATA CD drive, where the VirtIO drivers are located. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture.
  6. Click Next to install the driver.
  7. Repeat this process for all the necessary VirtIO drivers.
  8. After the driver installs, click Close to close the window.
  9. Reboot the virtual machine to complete the driver installation.

10.14.5. Removing the VirtIO container disk from a virtual machine

After installing all required VirtIO drivers to the virtual machine, the container-native-virtualization/virtio-win container disk no longer needs to be attached to the virtual machine. Remove the container-native-virtualization/virtio-win container disk from the virtual machine configuration file.

Procedure

  1. Edit the configuration file and remove the disk and the volume.

    $ oc edit vm <vm-name>
    spec:
      domain:
        devices:
          disks:
            - name: virtiocontainerdisk
              bootOrder: 2
              cdrom:
                bus: sata
    volumes:
      - containerDisk:
          image: container-native-virtualization/virtio-win
        name: virtiocontainerdisk
  2. Reboot the virtual machine for the changes to take effect.

10.15. Installing VirtIO driver on a new Windows virtual machine

10.15.1. Prerequisites

10.15.2. About VirtIO drivers

VirtIO drivers are paravirtualized device drivers required for Microsoft Windows virtual machines to run in OpenShift Virtualization. The supported drivers are available in the container-native-virtualization/virtio-win container disk of the Red Hat Ecosystem Catalog.

The container-native-virtualization/virtio-win container disk must be attached to the virtual machine as a SATA CD drive to enable driver installation. You can install VirtIO drivers during Windows installation on the virtual machine or add them to an existing Windows installation.

After the drivers are installed, the container-native-virtualization/virtio-win container disk can be removed from the virtual machine.

See also: Installing VirtIO driver on an existing Windows virtual machine.

10.15.3. Supported VirtIO drivers for Microsoft Windows virtual machines