Chapter 9. Managing VMs
9.1. Installing the QEMU guest agent and VirtIO drivers
The QEMU guest agent is a daemon that runs on the virtual machine (VM) and passes information to the host about the VM, users, file systems, and secondary networks.
You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat.
9.1.1. Installing the QEMU guest agent
9.1.1.1. Installing the QEMU guest agent on a Linux VM
The `qemu-guest-agent` package is widely available and available by default in Red Hat Enterprise Linux (RHEL) virtual machines (VMs). Install the agent and start the service.
To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent.
The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI.
Procedure
- Log in to the VM by using a console or SSH.
- Install the QEMU guest agent by running the following command:

$ yum install -y qemu-guest-agent

- Ensure that the service is persistent and start it:

$ systemctl enable --now qemu-guest-agent
Verification
- Run the following command to verify that `AgentConnected` is listed in the VM spec:

$ oc get vm <vm_name>
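For example, you can filter the condition directly with a JSONPath query. This is a hedged sketch, assuming the condition is reported under `status.conditions`:

$ oc get vm <vm_name> -o jsonpath='{.status.conditions[?(@.type=="AgentConnected")].status}'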
9.1.1.2. Installing the QEMU guest agent on a Windows VM
For Windows virtual machines (VMs), the QEMU guest agent is included in the VirtIO drivers. You can install the drivers during a Windows installation or on an existing Windows VM.
To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent.
The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI.
Procedure
- In the Windows guest operating system, use File Explorer to navigate to the `guest-agent` directory in the `virtio-win` CD drive.
- Run the `qemu-ga-x86_64.msi` installer.
Verification
- Obtain a list of network services by running the following command:

$ net start

- Verify that the output contains the QEMU Guest Agent.
9.1.2. Installing VirtIO drivers on Windows VMs
VirtIO drivers are paravirtualized device drivers required for Microsoft Windows virtual machines (VMs) to run in OpenShift Virtualization. The drivers are shipped with the rest of the images and do not require a separate download.
The `container-native-virtualization/virtio-win` container disk must be attached to the VM as a SATA CD drive to enable driver installation. You can install VirtIO drivers during Windows installation or add them to an existing Windows installation.
After the drivers are installed, the `container-native-virtualization/virtio-win` container disk can be removed from the VM.
| Driver name | Hardware ID | Description |
|---|---|---|
| viostor | VEN_1AF4&DEV_1001 | The block driver. Sometimes labeled as an SCSI Controller in the Other devices group. |
| viorng | VEN_1AF4&DEV_1005 | The entropy source driver. Sometimes labeled as a PCI Device in the Other devices group. |
| NetKVM | VEN_1AF4&DEV_1000 | The network driver. Sometimes labeled as an Ethernet Controller in the Other devices group. Available only if a VirtIO NIC is configured. |
9.1.2.1. Attaching VirtIO container disk to Windows VMs during installation
You must attach the VirtIO container disk to the Windows VM to install the necessary Windows drivers. This can be done during creation of the VM.
Procedure
- When creating a Windows VM from a template, click Customize VirtualMachine.
- Select Mount Windows drivers disk.
- Click the Customize VirtualMachine parameters.
- Click Create VirtualMachine.
After the VM is created, the `virtio-win` SATA CD disk is attached to the VM.
9.1.2.2. Attaching VirtIO container disk to an existing Windows VM
You must attach the VirtIO container disk to the Windows VM to install the necessary Windows drivers. This can be done to an existing VM.
Procedure
- Navigate to the existing Windows VM, and click Actions → Stop.
- Go to VM Details → Configuration → Disks and click Add disk.
- Add `windows-driver-disk` from the container source, set the Type to CD-ROM, and then set the Interface to SATA.
- Click Save.
- Start the VM, and connect to a graphical console.
9.1.2.3. Installing VirtIO drivers during Windows installation
You can install the VirtIO drivers while installing Windows on a virtual machine (VM).
This procedure uses a generic approach to the Windows installation and the installation method might differ between versions of Windows. See the documentation for the version of Windows that you are installing.
Prerequisites
- A storage device containing the `virtio` drivers must be attached to the VM.
Procedure
- In the Windows operating system, use File Explorer to navigate to the `virtio-win` CD drive. Double-click the drive to run the appropriate installer for your VM.

  For a 64-bit vCPU, select the `virtio-win-gt-x64` installer. 32-bit vCPUs are no longer supported.

- Optional: During the Custom Setup step of the installer, select the device drivers you want to install. The recommended driver set is selected by default.
- After the installation is complete, select Finish.
- Reboot the VM.
Verification
- Open the system disk on the PC. This is typically `C:`.
- Navigate to Program Files → Virtio-Win.

  If the Virtio-Win directory is present and contains a sub-directory for each driver, the installation was successful.
9.1.2.4. Installing VirtIO drivers from a SATA CD drive on an existing Windows VM
You can install the VirtIO drivers from a SATA CD drive on an existing Windows virtual machine (VM).
This procedure uses a generic approach to adding drivers to Windows. See the installation documentation for your version of Windows for specific installation steps.
Prerequisites
- A storage device containing the virtio drivers must be attached to the VM as a SATA CD drive.
Procedure
- Start the VM and connect to a graphical console.
- Log in to a Windows user session.
- Open Device Manager and expand Other devices to list any Unknown device.
- Open the Device Properties to identify the unknown device.
- Right-click the device and select Properties.
- Click the Details tab and select Hardware Ids in the Property list.
- Compare the Value for the Hardware Ids with the supported VirtIO drivers.
- Right-click the device and select Update Driver Software.
- Click Browse my computer for driver software and browse to the attached SATA CD drive, where the VirtIO drivers are located. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture.
- Click Next to install the driver.
- Repeat this process for all the necessary VirtIO drivers.
- After the driver installs, click Close to close the window.
- Reboot the VM to complete the driver installation.
9.1.2.5. Installing VirtIO drivers from a container disk added as a SATA CD drive
You can install VirtIO drivers from a container disk that you add to a Windows virtual machine (VM) as a SATA CD drive.
Downloading the `container-native-virtualization/virtio-win` container disk from the Red Hat Ecosystem Catalog is not mandatory, because the container disk is downloaded from the Red Hat registry if it is not already present in the cluster. However, downloading it in advance reduces the installation time.
Prerequisites
- You must have access to the Red Hat registry or to the downloaded `container-native-virtualization/virtio-win` container disk in a restricted environment.
Procedure
- Add the `container-native-virtualization/virtio-win` container disk as a CD drive by editing the `VirtualMachine` manifest:

# ...
spec:
  domain:
    devices:
      disks:
      - name: virtiocontainerdisk
        bootOrder: 2 # 1
        cdrom:
          bus: sata
  volumes:
  - containerDisk:
      image: container-native-virtualization/virtio-win
    name: virtiocontainerdisk

1. OpenShift Virtualization boots the VM disks in the order defined in the `VirtualMachine` manifest. You can either define other VM disks that boot before the `container-native-virtualization/virtio-win` container disk or use the optional `bootOrder` parameter to ensure the VM boots from the correct disk. If you configure the boot order for a disk, you must configure the boot order for the other disks.
- Apply the changes:

  - If the VM is not running, run the following command:

$ virtctl start <vm> -n <namespace>

  - If the VM is running, reboot the VM or run the following command:

$ oc apply -f <vm.yaml>
- After the VM has started, install the VirtIO drivers from the SATA CD drive.
9.1.3. Updating VirtIO drivers
9.1.3.1. Updating VirtIO drivers on a Windows VM
Update the `virtio` drivers on a Windows virtual machine (VM) by using the Windows Update service.
Prerequisites
- The cluster must be connected to the internet. Disconnected clusters cannot reach the Windows Update service.
Procedure
- In the Windows guest operating system, click the Windows key and select Settings.
- Navigate to Windows Update → Advanced Options → Optional Updates.
- Install all updates from Red Hat, Inc.
- Reboot the VM.
Verification
- On the Windows VM, navigate to the Device Manager.
- Select a device.
- Select the Driver tab.
- Click Driver Details and confirm that the `virtio` driver details display the correct version.
9.2. Connecting to virtual machine consoles
You can connect to the following consoles to access running virtual machines (VMs):

- VNC console
- Serial console
- Desktop viewer for Windows VMs
9.2.1. Connecting to the VNC console
You can connect to the VNC console of a virtual machine by using the OpenShift Container Platform web console or the `virtctl` command line tool.
9.2.1.1. Connecting to the VNC console by using the web console
You can connect to the VNC console of a virtual machine (VM) by using the OpenShift Container Platform web console.
If you connect to a Windows VM with a vGPU assigned as a mediated device, you can switch between the default display and the vGPU display.
Procedure
- On the Virtualization → VirtualMachines page, click a VM to open the VirtualMachine details page.
- Click the Console tab. The VNC console session starts automatically.
- Optional: To switch to the vGPU display of a Windows VM, select Ctrl + Alt + 2 from the Send key list.
- Select Ctrl + Alt + 1 from the Send key list to restore the default display.
- To end the console session, click outside the console pane and then click Disconnect.
9.2.1.2. Connecting to the VNC console by using virtctl
You can use the `virtctl` command line tool to connect to the VNC console of a running virtual machine.

If you run the `virtctl vnc` command on a remote machine over an SSH connection, you must forward the X session to your local machine by running the `ssh` command with the `-X` or `-Y` flags.
Prerequisites
- You must install the `virt-viewer` package.
Procedure
- Run the following command to start the console session:

$ virtctl vnc <vm_name>

- If the connection fails, run the following command to collect troubleshooting information:

$ virtctl vnc <vm_name> -v 4
9.2.1.3. Generating a temporary token for the VNC console
To access the VNC console of a virtual machine (VM), generate a temporary authentication bearer token for the Kubernetes API.
Kubernetes also supports authentication using client certificates, instead of a bearer token, by modifying the curl command.
Prerequisites
- A running VM with OpenShift Virtualization 4.14 or later and `ssp-operator` 4.14 or later
Procedure
- Enable the feature gate in the HyperConverged (`HCO`) custom resource (CR):

$ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{"op": "replace", "path": "/spec/featureGates/deployVmConsoleProxy", "value": true}]'
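To confirm that the feature gate is set, you can read the field back. This is an optional sketch that mirrors the path used in the patch:

$ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o jsonpath='{.spec.featureGates.deployVmConsoleProxy}'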
- Generate a token by entering the following command:

$ curl --header "Authorization: Bearer ${TOKEN}" \
  "https://api.<cluster_fqdn>/apis/token.kubevirt.io/v1alpha1/namespaces/<namespace>/virtualmachines/<vm_name>/vnc?duration=<duration>"
The `<duration>` parameter can be set in hours and minutes, with a minimum duration of 10 minutes. For example: `5h30m`. If this parameter is not set, the token is valid for 10 minutes by default.

Sample output:

{ "token": "eyJhb..." }
- Optional: Use the token provided in the output to create a variable:

$ export VNC_TOKEN="<token>"
You can now use the token to access the VNC console of a VM.
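If the `jq` tool is available on your workstation, you can combine the two previous commands and capture the token in one step. This is an optional sketch that assumes `jq` is installed:

$ export VNC_TOKEN=$(curl -s --header "Authorization: Bearer ${TOKEN}" \
  "https://api.<cluster_fqdn>/apis/token.kubevirt.io/v1alpha1/namespaces/<namespace>/virtualmachines/<vm_name>/vnc?duration=1h" | jq -r .token)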
Verification
- Log in to the cluster by entering the following command:

$ oc login --token ${VNC_TOKEN}

- Test access to the VNC console of the VM by using the `virtctl` command:

$ virtctl vnc <vm_name> -n <namespace>
It is currently not possible to revoke a specific token.
To revoke a token, you must delete the service account that was used to create it. However, this also revokes all other tokens that were created by using the service account. Use the following command with caution:
$ virtctl delete serviceaccount --namespace "<namespace>" "<vm_name>-vnc-access"
9.2.1.3.1. Granting token generation permission for the VNC console by using the cluster role
As a cluster administrator, you can install a cluster role and bind it to a user or service account to allow access to the endpoint that generates tokens for the VNC console.
Procedure
- Choose to bind the cluster role to either a user or a service account:

  - Run the following command to bind the cluster role to a user:

$ kubectl create rolebinding "${ROLE_BINDING_NAME}" --clusterrole="token.kubevirt.io:generate" --user="${USER_NAME}"

  - Run the following command to bind the cluster role to a service account:

$ kubectl create rolebinding "${ROLE_BINDING_NAME}" --clusterrole="token.kubevirt.io:generate" --serviceaccount="${SERVICE_ACCOUNT_NAME}"
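For example, a sketch with hypothetical names. Note that `kubectl` expects service accounts in `<namespace>:<name>` form:

$ kubectl create rolebinding vnc-token-gen --clusterrole="token.kubevirt.io:generate" --serviceaccount="default:vm-console-sa"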
9.2.2. Connecting to the serial console
You can connect to the serial console of a virtual machine by using the OpenShift Container Platform web console or the `virtctl` command line tool.
Running concurrent VNC connections to a single virtual machine is not currently supported.
9.2.2.1. Connecting to the serial console by using the web console
You can connect to the serial console of a virtual machine (VM) by using the OpenShift Container Platform web console.
Procedure
- On the Virtualization → VirtualMachines page, click a VM to open the VirtualMachine details page.
- Click the Console tab. The VNC console session starts automatically.
- Click Disconnect to end the VNC console session. Otherwise, the VNC console session continues to run in the background.
- Select Serial console from the console list.
- To end the console session, click outside the console pane and then click Disconnect.
9.2.2.2. Connecting to the serial console by using virtctl
You can use the `virtctl` command line tool to connect to the serial console of a running virtual machine.
Procedure
- Run the following command to start the console session:

$ virtctl console <vm_name>

- Press `Ctrl+]` to end the console session.
9.2.3. Connecting to the desktop viewer
You can connect to a Windows virtual machine (VM) by using the desktop viewer and the Remote Desktop Protocol (RDP).
9.2.3.1. Connecting to the desktop viewer by using the web console
You can connect to the desktop viewer of a Windows virtual machine (VM) by using the OpenShift Container Platform web console.
Prerequisites
- You installed the QEMU guest agent on the Windows VM.
- You have an RDP client installed.
Procedure
- On the Virtualization → VirtualMachines page, click a VM to open the VirtualMachine details page.
- Click the Console tab. The VNC console session starts automatically.
- Click Disconnect to end the VNC console session. Otherwise, the VNC console session continues to run in the background.
- Select Desktop viewer from the console list.
- Click Create RDP Service to open the RDP Service dialog.
- Select Expose RDP Service and click Save to create a node port service.
- Click Launch Remote Desktop to download an `.rdp` file and launch the desktop viewer.
9.3. Configuring SSH access to virtual machines
You can configure SSH access to virtual machines (VMs) by using the following methods:
- You create an SSH key pair, add the public key to a VM, and connect to the VM by running the `virtctl ssh` command with the private key.

  You can add public SSH keys to Red Hat Enterprise Linux (RHEL) 9 VMs at runtime or at first boot to VMs with guest operating systems that can be configured by using a cloud-init data source.

- You add the `virtctl port-forward` command to your `.ssh/config` file and connect to the VM by using OpenSSH.
- You create a service, associate the service with the VM, and connect to the IP address and port exposed by the service.
- You configure a secondary network, attach a virtual machine (VM) to the secondary network interface, and connect to the DHCP-allocated IP address.
9.3.1. Access configuration considerations
Each method for configuring access to a virtual machine (VM) has advantages and limitations, depending on the traffic load and client requirements.
Services provide excellent performance and are recommended for applications that are accessed from outside the cluster.
If the internal cluster network cannot handle the traffic load, you can configure a secondary network.
`virtctl ssh` and `virtctl port-forwarding` commands

- Simple to configure.
- Recommended for troubleshooting VMs.
- `virtctl port-forwarding` is recommended for automated configuration of VMs with Ansible.
- Dynamic public SSH keys can be used to provision VMs with Ansible.
- Not recommended for high-traffic applications like Rsync or Remote Desktop Protocol because of the burden on the API server.
- The API server must be able to handle the traffic load.
- The clients must be able to access the API server.
- The clients must have access credentials for the cluster.

Cluster IP service

- The internal cluster network must be able to handle the traffic load.
- The clients must be able to access an internal cluster IP address.

Node port service

- The internal cluster network must be able to handle the traffic load.
- The clients must be able to access at least one node.

Load balancer service

- A load balancer must be configured.
- Each node must be able to handle the traffic load of one or more load balancer services.

Secondary network

- Excellent performance because traffic does not go through the internal cluster network.
- Allows a flexible approach to network topology.
- Guest operating system must be configured with appropriate security because the VM is exposed directly to the secondary network. If a VM is compromised, an intruder could gain access to the secondary network.
9.3.2. Using virtctl ssh
You can add a public SSH key to a virtual machine (VM) and connect to the VM by running the `virtctl ssh` command.
This method is simple to configure. However, it is not recommended for high traffic loads because it places a burden on the API server.
9.3.2.1. About static and dynamic SSH key management
You can add public SSH keys to virtual machines (VMs) statically at first boot or dynamically at runtime.
Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection.
Static SSH key management
You can add a statically managed SSH key to a VM with a guest operating system that supports configuration by using a cloud-init data source. The key is added to the virtual machine (VM) at first boot.
You can add the key by using one of the following methods:
- Add a key to a single VM when you create it by using the web console or the command line.
- Add a key to a project by using the web console. Afterwards, the key is automatically added to the VMs that you create in this project.
Use cases
- As a VM owner, you can provision all your newly created VMs with a single key.
Dynamic SSH key management
You can enable dynamic SSH key management for a VM with Red Hat Enterprise Linux (RHEL) 9 installed. Afterwards, you can update the key during runtime. The key is added by the QEMU guest agent, which is installed with Red Hat boot sources.
When dynamic key management is disabled, the default key management setting of a VM is determined by the image used for the VM.
Use cases
- Granting or revoking access to VMs: As a cluster administrator, you can grant or revoke remote VM access by adding or removing the keys of individual users from a `Secret` object that is applied to all VMs in a namespace.
- User access: You can add your access credentials to all VMs that you create and manage.
Ansible provisioning:
- As an operations team member, you can create a single secret that contains all the keys used for Ansible provisioning.
- As a VM owner, you can create a VM and attach the keys used for Ansible provisioning.
Key rotation:
- As a cluster administrator, you can rotate the Ansible provisioner keys used by VMs in a namespace.
- As a workload owner, you can rotate the key for the VMs that you manage.
9.3.2.2. Static key management
You can add a statically managed public SSH key when you create a virtual machine (VM) by using the OpenShift Container Platform web console or the command line. The key is added as a cloud-init data source when the VM boots for the first time.
You can also add a public SSH key to a project when you create a VM by using the web console. The key is saved as a secret and is added automatically to all VMs that you create.
If you add a secret to a project and then delete the VM, the secret is retained because it is a namespace resource. You must delete the secret manually.
9.3.2.2.1. Adding a key when creating a VM from a template
You can add a statically managed public SSH key when you create a virtual machine (VM) by using the OpenShift Container Platform web console. The key is added to the VM as a cloud-init data source at first boot. This method does not affect cloud-init user data.
Optional: You can add a key to a project. Afterwards, this key is added automatically to VMs that you create in the project.
Prerequisites
- You generated an SSH key pair by running the `ssh-keygen` command.
Procedure
- Navigate to Virtualization → Catalog in the web console.
- Click a template tile.

  The guest operating system must support configuration from a cloud-init data source.
- Click Customize VirtualMachine.
- Click Next.
- Click the Scripts tab.
- If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key and select one of the following options:

  - Use existing: Select a secret from the secrets list.
  - Add new:

    - Browse to the SSH key file or paste the file in the key field.
    - Enter the secret name.
    - Optional: Select Automatically apply this key to any new VirtualMachine you create in this project.
    - Click Save.

- Click Create VirtualMachine.
The VirtualMachine details page displays the progress of the VM creation.
Verification
Click the Scripts tab on the Configuration tab.
The secret name is displayed in the Authorized SSH key section.
9.3.2.2.2. Creating a VM from an instance type by using the web console
You can create a virtual machine (VM) from an instance type by using the OpenShift Container Platform web console. You can also use the web console to create a VM by copying an existing snapshot or to clone a VM.
You can create a VM from a list of available bootable volumes. You can add Linux- or Windows-based volumes to the list.
You can add a statically managed SSH key when you create a virtual machine (VM) from an instance type by using the OpenShift Container Platform web console. The key is added to the VM as a cloud-init data source at first boot. This method does not affect cloud-init user data.
Procedure
- In the web console, navigate to Virtualization → Catalog. The InstanceTypes tab opens by default.
- Select either of the following options:

  - Select a suitable bootable volume from the list. If the list is truncated, click the Show all button to display the entire list.

    Note: The bootable volume table lists only those volumes in the `openshift-virtualization-os-images` namespace that have the `instancetype.kubevirt.io/default-preference` label.

    - Optional: Click the star icon to designate a bootable volume as a favorite. Starred bootable volumes appear first in the volume list.

  - Click Add volume to upload a new volume or to use an existing persistent volume claim (PVC), a volume snapshot, or a `containerDisk` volume. Click Save.

  Logos of operating systems that are not available in the cluster are shown at the bottom of the list. You can add a volume for the required operating system by clicking the Add volume link.

  In addition, there is a link to the Create a Windows boot source quick start. The same link appears in a popover if you hover the pointer over the question mark icon next to the Select volume to boot from line.

  Immediately after you install the environment or when the environment is disconnected, the list of volumes to boot from is empty. In that case, three operating system logos are displayed: Windows, RHEL, and Linux. You can add a new volume that meets your requirements by clicking the Add volume button.
- Click an instance type tile and select the resource size appropriate for your workload.
- Optional: Choose the virtual machine details, including the VM’s name, that apply to the volume you are booting from:

  - For a Linux-based volume, follow these steps to configure SSH:

    - If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key in the VirtualMachine details section.
    - Select one of the following options:

      - Use existing: Select a secret from the secrets list.
      - Add new: Follow these steps:

        - Browse to the public SSH key file or paste the file in the key field.
        - Enter the secret name.
        - Optional: Select Automatically apply this key to any new VirtualMachine you create in this project.
        - Click Save.

  - For a Windows volume, follow either of these sets of steps to configure sysprep options:

    - If you have not already added sysprep options for the Windows volume, follow these steps:

      - Click the edit icon beside Sysprep in the VirtualMachine details section.
      - Add the Autounattend.xml answer file.
      - Add the Unattend.xml answer file.
      - Click Save.

    - If you want to use existing sysprep options for the Windows volume, follow these steps:

      - Click Attach existing sysprep.
      - Enter the name of the existing sysprep Unattend.xml answer file.
      - Click Save.
- Optional: If you are creating a Windows VM, you can mount a Windows driver disk:
- Click the Customize VirtualMachine button.
- On the VirtualMachine details page, click Storage.
- Select the Mount Windows drivers disk checkbox.
- Optional: Click View YAML & CLI to view the YAML file. Click CLI to view the CLI commands. You can also download or copy either the YAML file contents or the CLI commands.
- Click Create VirtualMachine.
After the VM is created, you can monitor the status on the VirtualMachine details page.
9.3.2.2.3. Adding a key when creating a VM by using the command line
You can add a statically managed public SSH key when you create a virtual machine (VM) by using the command line. The key is added to the VM at first boot.
The key is added to the VM as a cloud-init data source. This method separates the access credentials from the application data in the cloud-init user data. This method does not affect cloud-init user data.
Prerequisites
- You generated an SSH key pair by running the `ssh-keygen` command.
Procedure
- Create a manifest file for a `VirtualMachine` object and a `Secret` object:

Example manifest

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
  namespace: example-namespace
spec:
  dataVolumeTemplates:
  - metadata:
      name: example-vm-volume
    spec:
      sourceRef:
        kind: DataSource
        name: rhel9
        namespace: openshift-virtualization-os-images
      storage:
        resources: {}
  instancetype:
    name: u1.medium
  preference:
    name: rhel.9
  runStrategy: Always
  template:
    spec:
      domain:
        devices: {}
      volumes:
      - dataVolume:
          name: example-vm-volume
        name: rootdisk
      - cloudInitNoCloud: # 1
          userData: |-
            #cloud-config
            user: cloud-user
        name: cloudinitdisk
      accessCredentials:
      - sshPublicKey:
          propagationMethod:
            noCloud: {}
          source:
            secret:
              secretName: authorized-keys # 2
---
apiVersion: v1
kind: Secret
metadata:
  name: authorized-keys
data:
  key: c3NoLXJzYSB... # 3
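The `key` field of the `Secret` object holds the base64-encoded public key. As a minimal sketch, assuming your public key is stored at `~/.ssh/id_rsa.pub` and you are on a Linux workstation with GNU coreutils:

$ base64 -w0 ~/.ssh/id_rsa.pub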
- Create the `VirtualMachine` and `Secret` objects by running the following command:

$ oc create -f <manifest_file>.yaml

- Start the VM by running the following command:

$ virtctl start example-vm -n example-namespace
Verification
- Get the VM configuration:

$ oc describe vm example-vm -n example-namespace

Example output

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
  namespace: example-namespace
spec:
  template:
    spec:
      accessCredentials:
      - sshPublicKey:
          propagationMethod:
            noCloud: {}
          source:
            secret:
              secretName: authorized-keys
# ...
9.3.2.3. Dynamic key management
You can enable dynamic key injection for a virtual machine (VM) by using the OpenShift Container Platform web console or the command line. Then, you can update the key at runtime.
Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection.
If you disable dynamic key injection, the VM inherits the key management method of the image from which it was created.
9.3.2.3.1. Enabling dynamic key injection when creating a VM from a template
You can enable dynamic public SSH key injection when you create a virtual machine (VM) from a template by using the OpenShift Container Platform web console. Then, you can update the key at runtime.
Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection.
The key is added to the VM by the QEMU guest agent, which is installed with RHEL 9.
Prerequisites
- You generated an SSH key pair by running the `ssh-keygen` command.
Procedure
- Navigate to Virtualization → Catalog in the web console.
- Click the Red Hat Enterprise Linux 9 VM tile.
- Click Customize VirtualMachine.
- Click Next.
- Click the Scripts tab.
- If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key and select one of the following options:

  - Use existing: Select a secret from the secrets list.
  - Add new:

    - Browse to the SSH key file or paste the file in the key field.
    - Enter the secret name.
    - Optional: Select Automatically apply this key to any new VirtualMachine you create in this project.

- Set Dynamic SSH key injection to on.
- Click Save.
- Click Create VirtualMachine.
The VirtualMachine details page displays the progress of the VM creation.
Verification
Click the Scripts tab on the Configuration tab.
The secret name is displayed in the Authorized SSH key section.
9.3.2.3.2. Creating a VM from an instance type by using the web console
You can create a virtual machine (VM) from an instance type by using the OpenShift Container Platform web console. You can also use the web console to create a VM by copying an existing snapshot or to clone a VM.
You can create a VM from a list of available bootable volumes. You can add Linux- or Windows-based volumes to the list.
You can enable dynamic SSH key injection when you create a virtual machine (VM) from an instance type by using the OpenShift Container Platform web console. Then, you can add or revoke the key at runtime.
Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection.
The key is added to the VM by the QEMU guest agent, which is installed with RHEL 9.
Procedure
- In the web console, navigate to Virtualization → Catalog. The InstanceTypes tab opens by default.
- Select either of the following options:

  - Select a suitable bootable volume from the list. If the list is truncated, click the Show all button to display the entire list.

    Note: The bootable volume table lists only those volumes in the `openshift-virtualization-os-images` namespace that have the `instancetype.kubevirt.io/default-preference` label.

    - Optional: Click the star icon to designate a bootable volume as a favorite. Starred bootable volumes appear first in the volume list.

  - Click Add volume to upload a new volume or to use an existing persistent volume claim (PVC), a volume snapshot, or a `containerDisk` volume. Click Save.

  Logos of operating systems that are not available in the cluster are shown at the bottom of the list. You can add a volume for the required operating system by clicking the Add volume link.

  In addition, there is a link to the Create a Windows boot source quick start. The same link appears in a popover if you hover the pointer over the question mark icon next to the Select volume to boot from line.

  Immediately after you install the environment or when the environment is disconnected, the list of volumes to boot from is empty. In that case, three operating system logos are displayed: Windows, RHEL, and Linux. You can add a new volume that meets your requirements by clicking the Add volume button.
- Click an instance type tile and select the resource size appropriate for your workload.
- Click the Red Hat Enterprise Linux 9 VM tile.
- Optional: Choose the virtual machine details, including the VM’s name, that apply to the volume you are booting from:

  - For a Linux-based volume, follow these steps to configure SSH:

    - If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key in the VirtualMachine details section.
    - Select one of the following options:

      - Use existing: Select a secret from the secrets list.
      - Add new: Follow these steps:

        - Browse to the public SSH key file or paste the file in the key field.
        - Enter the secret name.
        - Optional: Select Automatically apply this key to any new VirtualMachine you create in this project.
        - Click Save.

  - For a Windows volume, follow either of these sets of steps to configure sysprep options:

    - If you have not already added sysprep options for the Windows volume, follow these steps:

      - Click the edit icon beside Sysprep in the VirtualMachine details section.
      - Add the Autounattend.xml answer file.
      - Add the Unattend.xml answer file.
      - Click Save.

    - If you want to use existing sysprep options for the Windows volume, follow these steps:

      - Click Attach existing sysprep.
      - Enter the name of the existing sysprep Unattend.xml answer file.
      - Click Save.
- Set Dynamic SSH key injection in the VirtualMachine details section to on.
- Optional: If you are creating a Windows VM, you can mount a Windows driver disk:
- Click the Customize VirtualMachine button.
- On the VirtualMachine details page, click Storage.
- Select the Mount Windows drivers disk checkbox.
- Optional: Click View YAML & CLI to view the YAML file. Click CLI to view the CLI commands. You can also download or copy either the YAML file contents or the CLI commands.
- Click Create VirtualMachine.
After the VM is created, you can monitor the status on the VirtualMachine details page.
9.3.2.3.3. Enabling dynamic SSH key injection by using the web console
You can enable dynamic key injection for a virtual machine (VM) by using the OpenShift Container Platform web console. Then, you can update the public SSH key at runtime.
The key is added to the VM by the QEMU guest agent, which is installed with Red Hat Enterprise Linux (RHEL) 9.
Prerequisites
- The guest operating system is RHEL 9.
Procedure
- Navigate to Virtualization → VirtualMachines in the web console.
- Select a VM to open the VirtualMachine details page.
- On the Configuration tab, click Scripts.
- If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key and select one of the following options:

  - Use existing: Select a secret from the secrets list.
  - Add new:

    - Browse to the SSH key file or paste the file in the key field.
    - Enter the secret name.
    - Optional: Select Automatically apply this key to any new VirtualMachine you create in this project.
- Set Dynamic SSH key injection to on.
- Click Save.
9.3.2.3.4. Enabling dynamic key injection by using the command line
You can enable dynamic key injection for a virtual machine (VM) by using the command line. Then, you can update the public SSH key at runtime.
Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection.
The key is added to the VM by the QEMU guest agent, which is installed automatically with RHEL 9.
Prerequisites
- You generated an SSH key pair by running the `ssh-keygen` command.
Procedure
- Create a manifest file for a `VirtualMachine` object and a `Secret` object:

Example manifest

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
  namespace: example-namespace
spec:
  dataVolumeTemplates:
  - metadata:
      name: example-vm-volume
    spec:
      sourceRef:
        kind: DataSource
        name: rhel9
        namespace: openshift-virtualization-os-images
      storage:
        resources: {}
  instancetype:
    name: u1.medium
  preference:
    name: rhel.9
  runStrategy: Always
  template:
    spec:
      domain:
        devices: {}
      volumes:
      - dataVolume:
          name: example-vm-volume
        name: rootdisk
      - cloudInitNoCloud: # 1
          userData: |-
            #cloud-config
            runcmd:
            - [ setsebool, -P, virt_qemu_ga_manage_ssh, on ]
        name: cloudinitdisk
      accessCredentials:
      - sshPublicKey:
          propagationMethod:
            qemuGuestAgent:
              users: ["cloud-user"]
          source:
            secret:
              secretName: authorized-keys # 2
---
apiVersion: v1
kind: Secret
metadata:
  name: authorized-keys
data:
  key: c3NoLXJzYSB... # 3
- Create the `VirtualMachine` and `Secret` objects by running the following command:

$ oc create -f <manifest_file>.yaml

- Start the VM by running the following command:

$ virtctl start example-vm -n example-namespace
Verification
- Get the VM configuration:

$ oc describe vm example-vm -n example-namespace

Example output

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
  namespace: example-namespace
spec:
  template:
    spec:
      accessCredentials:
      - sshPublicKey:
          propagationMethod:
            qemuGuestAgent:
              users: ["cloud-user"]
          source:
            secret:
              secretName: authorized-keys
# ...
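Because the key is propagated by the QEMU guest agent, you can rotate it at runtime by updating the referenced `Secret` object. This is a hedged sketch; it assumes the new public key is already base64-encoded:

$ oc patch secret authorized-keys -n example-namespace --type merge -p '{"data":{"key":"<new_base64_encoded_public_key>"}}'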
9.3.2.4. Using the virtctl ssh command
You can access a running virtual machine (VM) by using the `virtctl ssh` command.
Prerequisites
- You installed the `virtctl` command line tool.
- You added a public SSH key to the VM.
- You have an SSH client installed.
- The environment where you installed the `virtctl` tool has the cluster permissions required to access the VM. For example, you ran `oc login` or you set the `KUBECONFIG` environment variable.
Procedure
- Run the `virtctl ssh` command:

$ virtctl -n <namespace> ssh <username>@example-vm -i <ssh_key> # 1

1. Specify the namespace, user name, and the SSH private key. The default SSH key location is `/home/user/.ssh`. If you save the key in a different location, you must specify the path.
Example
$ virtctl -n my-namespace ssh cloud-user@example-vm -i my-key
You can copy the `virtctl ssh` command in the web console by selecting Copy SSH command from the options menu beside a VM on the VirtualMachines page.
9.3.3. Using the virtctl port-forward command
You can use your local OpenSSH client and the `virtctl port-forward` command to connect to a running virtual machine (VM). You can use this method with Ansible to automate the configuration of VMs.
This method is recommended for low-traffic applications because port-forwarding traffic is sent over the control plane. This method is not recommended for high-traffic applications such as Rsync or Remote Desktop Protocol because it places a heavy burden on the API server.
Prerequisites
- You have installed the `virtctl` client.
- The virtual machine you want to access is running.
- The environment where you installed the `virtctl` tool has the cluster permissions required to access the VM. For example, you ran `oc login` or you set the `KUBECONFIG` environment variable.
Procedure
- Add the following text to the `~/.ssh/config` file on your client machine:

Host vm/*
  ProxyCommand virtctl port-forward --stdio=true %h %p
- Connect to the VM by running the following command:

$ ssh <user>@vm/<vm_name>.<namespace>
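If you prefer not to edit the `~/.ssh/config` file, you can supply the same proxy on the command line. This is an equivalent sketch that uses the OpenSSH `-o` option:

$ ssh -o 'ProxyCommand=virtctl port-forward --stdio=true %h %p' <user>@vm/<vm_name>.<namespace>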
9.3.4. Using a service for SSH access
You can create a service for a virtual machine (VM) and connect to the IP address and port exposed by the service.
Services provide excellent performance and are recommended for applications that are accessed from outside the cluster or within the cluster. Ingress traffic is protected by firewalls.
If the cluster network cannot handle the traffic load, consider using a secondary network for VM access.
9.3.4.1. About services
A Kubernetes service exposes network access for clients to an application running on a set of pods. Services offer abstraction, load balancing, and, in the case of the `NodePort` and `LoadBalancer` types, exposure to the outside world.
- ClusterIP: Exposes the service on an internal IP address and as a DNS name to other applications within the cluster. A single service can map to multiple virtual machines. When a client tries to connect to the service, the client’s request is load balanced among available backends. `ClusterIP` is the default service type.
- NodePort: Exposes the service on the same port of each selected node in the cluster. `NodePort` makes a port accessible from outside the cluster, as long as the node itself is externally accessible to the client.
- LoadBalancer: Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP address to the service.

For on-premise clusters, you can configure a load-balancing service by deploying the MetalLB Operator.
9.3.4.2. Creating a service
You can create a service to expose a virtual machine (VM) by using the OpenShift Container Platform web console, the `virtctl` command line tool, or a YAML file.
9.3.4.2.1. Enabling load balancer service creation by using the web console
You can enable the creation of load balancer services for a virtual machine (VM) by using the OpenShift Container Platform web console.
Prerequisites
- You have configured a load balancer for the cluster.
- You are logged in as a user with the `cluster-admin` role.
- You created a network attachment definition for the network.
Procedure
- Navigate to Virtualization → Overview.
- On the Settings tab, click Cluster.
- Expand General settings and SSH configuration.
- Set SSH over LoadBalancer service to on.
9.3.4.2.2. Creating a service by using the web console
You can create a node port or load balancer service for a virtual machine (VM) by using the OpenShift Container Platform web console.
Prerequisites
- You configured the cluster network to support either a load balancer or a node port.
- To create a load balancer service, you enabled the creation of load balancer services.
Procedure
- Navigate to VirtualMachines and select a virtual machine to view the VirtualMachine details page.
- On the Details tab, select SSH over LoadBalancer from the SSH service type list.
- Optional: Click the copy icon to copy the `SSH` command to your clipboard.
Verification
- Check the Services pane on the Details tab to view the new service.
9.3.4.2.3. Creating a service by using virtctl
You can create a service for a virtual machine (VM) by using the virtctl
command line tool.
Prerequisites
- You installed the `virtctl` command line tool.
- You configured the cluster network to support the service.
- The environment where you installed `virtctl` has the cluster permissions required to access the VM. For example, you ran `oc login` or you set the `KUBECONFIG` environment variable.
Procedure
- Create a service by running the following command:

$ virtctl expose vm <vm_name> --name <service_name> --type <service_type> --port <port> # 1

1. Specify the `ClusterIP`, `NodePort`, or `LoadBalancer` service type.
Example
$ virtctl expose vm example-vm --name example-service --type NodePort --port 22
Verification
- Verify the service by running the following command:

$ oc get service
Next steps
After you create a service with `virtctl`, you must add `special: key` to the `spec.template.metadata.labels` stanza of the `VirtualMachine` manifest. See Creating a service by using the command line.
9.3.4.2.4. Creating a service by using the command line
You can create a service and associate it with a virtual machine (VM) by using the command line.
Prerequisites
- You configured the cluster network to support the service.
Procedure
- Edit the `VirtualMachine` manifest to add the label for service creation:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
  namespace: example-namespace
spec:
  runStrategy: Halted
  template:
    metadata:
      labels:
        special: key # 1
# ...

1. Add `special: key` to the `spec.template.metadata.labels` stanza.

Note: Labels on a virtual machine are passed through to the pod. The `special: key` label must match the label in the `spec.selector` attribute of the `Service` manifest.

- Save the `VirtualMachine` manifest file to apply your changes.
- Create a `Service` manifest to expose the VM:

apiVersion: v1
kind: Service
metadata:
  name: example-service
  namespace: example-namespace
spec:
# ...
  selector:
    special: key # 1
  type: NodePort # 2
  ports: # 3
  - protocol: TCP
    port: 80
    targetPort: 9376
    nodePort: 30000
- Save the `Service` manifest file.
- Create the service by running the following command:

$ oc create -f example-service.yaml
- Restart the VM to apply the changes.
Verification
- Query the `Service` object to verify that it is available:

$ oc get service -n example-namespace
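To find the port that was allocated for a node port service, you can query it directly. A hedged sketch using JSONPath:

$ oc get service example-service -n example-namespace -o jsonpath='{.spec.ports[0].nodePort}'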
9.3.4.3. Connecting to a VM exposed by a service by using SSH
You can connect to a virtual machine (VM) that is exposed by a service by using SSH.
Prerequisites
- You created a service to expose the VM.
- You have an SSH client installed.
- You are logged in to the cluster.
Procedure
- Run the following command to access the VM:

$ ssh <user_name>@<ip_address> -p <port> # 1

1. Specify the cluster IP for a cluster IP service, the node IP for a node port service, or the external IP address for a load balancer service.
9.3.5. Using a secondary network for SSH access
You can configure a secondary network, attach a virtual machine (VM) to the secondary network interface, and connect to the DHCP-allocated IP address by using SSH.
Secondary networks provide excellent performance because the traffic is not handled by the cluster network stack. However, the VMs are exposed directly to the secondary network and are not protected by firewalls. If a VM is compromised, an intruder could gain access to the secondary network. You must configure appropriate security within the operating system of the VM if you use this method.
See the Multus and SR-IOV documentation in the OpenShift Virtualization Tuning & Scaling Guide for additional information about networking options.
Prerequisites
- You configured a secondary network such as Linux bridge or SR-IOV.
- You created a network attachment definition for a Linux bridge network, or the SR-IOV Network Operator created a network attachment definition when you created an `SriovNetwork` object.
9.3.5.1. Configuring a VM network interface by using the web console
You can configure a network interface for a virtual machine (VM) by using the OpenShift Container Platform web console.
Prerequisites
- You created a network attachment definition for the network.
Procedure
- Navigate to Virtualization → VirtualMachines.
- Click a VM to view the VirtualMachine details page.
- On the Configuration tab, click the Network interfaces tab.
- Click Add network interface.
- Enter the interface name and select the network attachment definition from the Network list.
- Click Save.
- Restart the VM to apply the changes.
9.3.5.2. Connecting to a VM attached to a secondary network by using SSH
You can connect to a virtual machine (VM) attached to a secondary network by using SSH.
Prerequisites
- You attached a VM to a secondary network with a DHCP server.
- You have an SSH client installed.
Procedure
- Obtain the IP address of the VM by running the following command:

$ oc describe vm <vm_name> -n <namespace>

Example output

# ...
Interfaces:
  Interface Name:  eth0
  Ip Address:      10.244.0.37/24
  Ip Addresses:
    10.244.0.37/24
    fe80::858:aff:fef4:25/64
  Mac:             0a:58:0a:f4:00:25
  Name:            default
# ...
- Connect to the VM by running the following command:

$ ssh <user_name>@<ip_address> -i <ssh_key>
Example
$ ssh cloud-user@10.244.0.37 -i ~/.ssh/id_rsa_cloud-user
9.4. Editing virtual machines
You can update a virtual machine (VM) configuration by using the OpenShift Container Platform web console. You can update the YAML file or the VirtualMachine details page.
You can also edit a VM by using the command line.
To edit a VM to configure disk sharing by using virtual disks or LUN, see Configuring shared volumes for virtual machines.
9.4.1. Changing the instance type of a VM
You can change the instance type associated with a running virtual machine (VM) by using the web console. The change takes effect immediately.
Prerequisites
- You created the VM by using an instance type.
Procedure
- In the OpenShift Container Platform web console, click Virtualization → VirtualMachines.
- Select a VM to open the VirtualMachine details page.
- Click the Configuration tab.
- On the Details tab, click the instance type text to open the Edit Instancetype dialog. For example, click 1 CPU | 2 GiB Memory.
Edit the instance type by using the Series and Size lists.
- Select an item from the Series list to show the relevant sizes for that series. For example, select General Purpose.
- Select the VM’s new instance type from the Size list. For example, select medium: 1 CPUs, 4Gi Memory, which is available in the General Purpose series.
- Click Save.
Verification
- Click the YAML tab.
- Click Reload.
- Review the VM YAML to confirm that the instance type changed.
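You can also confirm the change from the command line. This is an optional sketch that reads the instance type name from the VM object:

$ oc get vm <vm_name> -o jsonpath='{.spec.instancetype.name}'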
9.4.2. Hot plugging memory on a virtual machine
You can increase or decrease the amount of memory allocated to a virtual machine (VM) without having to restart the VM by using the OpenShift Container Platform web console.
Procedure
- Navigate to Virtualization → VirtualMachines.
- Select the required VM to open the VirtualMachine details page.
- On the Configuration tab, click Edit CPU|Memory.
- Enter the desired amount of memory and click Save.
The system applies these changes immediately. If the VM is migratable, a live migration is triggered. If not, or if the changes cannot be live-updated, a `RestartRequired` condition is added to the VM.

Linux guests require a kernel version of 5.16 or later, and Windows guests require the latest `viomem` drivers.
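You can also request a memory change from the command line by updating the guest memory in the VM template. This is a hedged sketch, not the documented web-console flow; it assumes the VM exposes `spec.template.spec.domain.memory.guest`:

$ oc patch vm <vm_name> --type merge -p '{"spec":{"template":{"spec":{"domain":{"memory":{"guest":"4Gi"}}}}}}'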
9.4.3. Hot plugging CPUs on a virtual machine
You can increase or decrease the number of CPU sockets allocated to a virtual machine (VM) without having to restart the VM by using the OpenShift Container Platform web console.
Procedure
- Navigate to Virtualization → VirtualMachines.
- Select the required VM to open the VirtualMachine details page.
- On the Configuration tab, click Edit CPU|Memory.
- Select the vCPU radio button.
- Enter the desired number of vCPU sockets and click Save.

  If the VM is migratable, a live migration is triggered. If not, or if the changes cannot be live-updated, a `RestartRequired` condition is added to the VM.
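Similarly, you can change the socket count from the command line. A hedged sketch, assuming the VM defines `spec.template.spec.domain.cpu.sockets`:

$ oc patch vm <vm_name> --type merge -p '{"spec":{"template":{"spec":{"domain":{"cpu":{"sockets":2}}}}}}'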
9.4.4. Editing a virtual machine by using the command line
You can edit a virtual machine (VM) by using the command line.
Prerequisites
- You installed the `oc` CLI.
Procedure
- Open the virtual machine configuration for editing by running the following command:

$ oc edit vm <vm_name>

- Edit the YAML configuration.
- If you edit a running virtual machine, you need to do one of the following:

  - Restart the virtual machine.
  - Run the following command for the new configuration to take effect:

$ oc apply vm <vm_name> -n <namespace>
9.4.5. Adding a disk to a virtual machine
You can add a virtual disk to a virtual machine (VM) by using the OpenShift Container Platform web console.
Procedure
- Navigate to Virtualization → VirtualMachines in the web console.
- Select a VM to open the VirtualMachine details page.
- On the Disks tab, click Add disk.
- Specify the Source, Name, Size, Type, Interface, and Storage Class.

  - Optional: You can enable preallocation if you use a blank disk source and require maximum write performance when creating data volumes. To do so, select the Enable preallocation checkbox.
  - Optional: You can clear Apply optimized StorageProfile settings to change the Volume Mode and Access Mode for the virtual disk. If you do not specify these parameters, the system uses the default values from the `kubevirt-storage-class-defaults` config map.
- Click Add.
If the VM is running, you must restart the VM to apply the change.
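As an alternative to editing the disk list and restarting, disks backed by an existing data volume or PVC can be hot plugged with `virtctl`. This is a hedged sketch of that separate workflow; `<datavolume_name>` is a placeholder for an existing data volume:

$ virtctl addvolume <vm_name> --volume-name=<datavolume_name> --persist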
9.4.5.1. Storage fields
| Field | Description |
|---|---|
| Blank (creates PVC) | Create an empty disk. |
| Import via URL (creates PVC) | Import content via URL (HTTP or HTTPS endpoint). |
| Use an existing PVC | Use a PVC that is already available in the cluster. |
| Clone existing PVC (creates PVC) | Select an existing PVC available in the cluster and clone it. |
| Import via Registry (creates PVC) | Import content via container registry. |
| Container (ephemeral) | Upload content from a container located in a registry accessible from the cluster. The container disk should be used only for read-only filesystems such as CD-ROMs or temporary virtual machines. |
| Name | Name of the disk. The name can contain lowercase letters (…). |
| Size | Size of the disk in GiB. |
| Type | Type of disk. Example: Disk or CD-ROM |
| Interface | Type of disk device. Supported interfaces are virtIO, SATA, and SCSI. |
| Storage Class | The storage class that is used to create the disk. |
Advanced storage settings
The following advanced storage settings are optional and available for Blank, Import via URL, and Clone existing PVC disks.
If you do not specify these parameters, the system uses the default storage profile values.
| Parameter | Option | Parameter description |
|---|---|---|
| Volume Mode | Filesystem | Stores the virtual disk on a file system-based volume. |
| | Block | Stores the virtual disk directly on the block volume. Only use `Block` if the underlying storage supports it. |
| Access Mode | ReadWriteOnce (RWO) | Volume can be mounted as read-write by a single node. |
| | ReadWriteMany (RWX) | Volume can be mounted as read-write by many nodes at one time. Note: This mode is required for live migration. |
9.4.6. Mounting a Windows driver disk on a virtual machine
You can mount a Windows driver disk on a virtual machine (VM) by using the OpenShift Container Platform web console.
Procedure
- Navigate to Virtualization → VirtualMachines.
- Select the required VM to open the VirtualMachine details page.
- On the Configuration tab, click Storage.
- Select the Mount Windows drivers disk checkbox.

  The Windows driver disk is displayed in the list of mounted disks.
9.4.7. Adding a secret, config map, or service account to a virtual machine
You add a secret, config map, or service account to a virtual machine by using the OpenShift Container Platform web console.
These resources are added to the virtual machine as disks. You then mount the secret, config map, or service account as you would mount any other disk.
If the virtual machine is running, changes do not take effect until you restart the virtual machine. The newly added resources are marked as pending changes at the top of the page.
Prerequisites
- The secret, config map, or service account that you want to add must exist in the same namespace as the target virtual machine.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- Select a virtual machine to open the VirtualMachine details page.
- Click Configuration → Environment.
- Click Add Config Map, Secret or Service Account.
- Click Select a resource and select a resource from the list. A six character serial number is automatically generated for the selected resource.
- Optional: Click Reload to revert the environment to its last saved state.
- Click Save.
Verification
- On the VirtualMachine details page, click Configuration → Disks and verify that the resource is displayed in the list of disks.
- Restart the virtual machine by clicking Actions → Restart.
You can now mount the secret, config map, or service account as you would mount any other disk.
9.5. Editing boot order
You can update the values for a boot order list by using the web console or the CLI.
With Boot Order in the Virtual Machine Overview page, you can:
- Select a disk or network interface controller (NIC) and add it to the boot order list.
- Edit the order of the disks or NICs in the boot order list.
- Remove a disk or NIC from the boot order list, and return it back to the inventory of bootable sources.
9.5.1. Adding items to a boot order list in the web console
Add items to a boot order list by using the web console.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- Select a virtual machine to open the VirtualMachine details page.
- Click the Details tab.
- Click the pencil icon that is located on the right side of Boot Order. If a YAML configuration does not exist, or if this is the first time that you are creating a boot order list, the following message displays: No resource selected. VM will attempt to boot from disks by order of appearance in YAML file.
- Click Add Source and select a bootable disk or network interface controller (NIC) for the virtual machine.
- Add any additional disks or NICs to the boot order list.
- Click Save.
If the virtual machine is running, changes to Boot Order will not take effect until you restart the virtual machine.
You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.
9.5.2. Editing a boot order list in the web console
Edit the boot order list in the web console.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- Select a virtual machine to open the VirtualMachine details page.
- Click the Details tab.
- Click the pencil icon that is located on the right side of Boot Order.
Choose the appropriate method to move the item in the boot order list:
- If you do not use a screen reader, hover over the arrow icon next to the item that you want to move, drag the item up or down, and drop it in a location of your choice.
- If you use a screen reader, press the Up Arrow key or Down Arrow key to move the item in the boot order list. Then, press the Tab key to drop the item in a location of your choice.
- Click Save.
If the virtual machine is running, changes to the boot order list will not take effect until you restart the virtual machine.
You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.
9.5.3. Editing a boot order list in the YAML configuration file
Edit the boot order list in a YAML configuration file by using the CLI.
Procedure
Open the YAML configuration file for the virtual machine by running the following command:
$ oc edit vm <vm_name> -n <namespace>
Edit the YAML file and modify the values for the boot order associated with a disk or network interface controller (NIC). For example:
disks:
  - bootOrder: 1
    disk:
      bus: virtio
    name: containerdisk
  - disk:
      bus: virtio
    name: cloudinitdisk
  - cdrom:
      bus: virtio
    name: cd-drive-1
interfaces:
  - bootOrder: 2
    macAddress: '02:96:c4:00:00:00'
    masquerade: {}
    name: default
- Save the YAML file.
9.5.4. Removing items from a boot order list in the web console
Remove items from a boot order list by using the web console.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- Select a virtual machine to open the VirtualMachine details page.
- Click the Details tab.
- Click the pencil icon that is located on the right side of Boot Order.
- Click the Remove icon next to the item. The item is removed from the boot order list and saved in the list of available boot sources. If you remove all items from the boot order list, the following message displays: No resource selected. VM will attempt to boot from disks by order of appearance in YAML file.
If the virtual machine is running, changes to Boot Order will not take effect until you restart the virtual machine.
You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.
9.6. Deleting virtual machines
You can delete a virtual machine from the web console or by using the oc
command line interface.
9.6.1. Deleting a virtual machine using the web console
Deleting a virtual machine permanently removes it from the cluster.
Procedure
- In the OpenShift Container Platform console, click Virtualization → VirtualMachines from the side menu.
- Click the Options menu beside a virtual machine and select Delete.
  Alternatively, click the virtual machine name to open the VirtualMachine details page and click Actions → Delete.
- Optional: Select With grace period or clear Delete disks.
- Click Delete to permanently delete the virtual machine.
9.6.2. Deleting a virtual machine by using the CLI
You can delete a virtual machine by using the oc
command line interface (CLI). The oc
client enables you to perform actions on multiple virtual machines.
Prerequisites
- Identify the name of the virtual machine that you want to delete.
Procedure
Delete the virtual machine by running the following command:
$ oc delete vm <vm_name>
NoteThis command only deletes a VM in the current project. Specify the
-n <project_name>
option if the VM you want to delete is in a different project or namespace.
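For example, to delete a VM that lives in a different project (both names are placeholders):

$ oc delete vm example-vm -n example-project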
9.7. Exporting virtual machines
You can export a virtual machine (VM) and its associated disks in order to import a VM into another cluster or to analyze the volume for forensic purposes.
You create a VirtualMachineExport
custom resource (CR) by using the command line interface.
Alternatively, you can use the virtctl vmexport
command to create a VirtualMachineExport
CR and to download exported volumes.
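As a sketch of the virtctl workflow, the following commands create an export for a VM and then download a volume; the export and VM names are placeholders, and the available flags can vary between versions:

$ virtctl vmexport create example-export --vm=example-vm
$ virtctl vmexport download example-export --output=example-disk.img.gz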
You can migrate virtual machines between OpenShift Virtualization clusters by using the Migration Toolkit for Virtualization.
9.7.1. Creating a VirtualMachineExport custom resource
You can create a VirtualMachineExport
custom resource (CR) to export the following objects:
- Virtual machine (VM): Exports the persistent volume claims (PVCs) of a specified VM.
-
VM snapshot: Exports PVCs contained in a
VirtualMachineSnapshot
CR. -
PVC: Exports a PVC. If the PVC is used by another pod, such as the
virt-launcher
pod, the export remains in aPending
state until the PVC is no longer in use.
The VirtualMachineExport
CR creates internal and external links for the exported volumes. Internal links are valid within the cluster. External links can be accessed by using an Ingress
or Route
.
The export server supports the following file formats:
-
raw
: Raw disk image file. -
gzip
: Compressed disk image file. -
dir
: PVC directory and files. -
tar.gz
: Compressed PVC file.
Prerequisites
- The VM must be shut down for a VM export.
Procedure
Create a
VirtualMachineExport
manifest to export a volume from aVirtualMachine
,VirtualMachineSnapshot
, orPersistentVolumeClaim
CR according to the following example and save it asexample-export.yaml
:VirtualMachineExport
exampleapiVersion: export.kubevirt.io/v1beta1 kind: VirtualMachineExport metadata: name: example-export spec: source: apiGroup: "kubevirt.io" 1 kind: VirtualMachine 2 name: example-vm ttlDuration: 1h 3
Create the
VirtualMachineExport
CR:$ oc create -f example-export.yaml
Get the
VirtualMachineExport
CR:$ oc get vmexport example-export -o yaml
The internal and external links for the exported volumes are displayed in the
status
stanza:Output example
apiVersion: export.kubevirt.io/v1beta1
kind: VirtualMachineExport
metadata:
  name: example-export
  namespace: example
spec:
  source:
    apiGroup: ""
    kind: PersistentVolumeClaim
    name: example-pvc
  tokenSecretRef: example-token
status:
  conditions:
    - lastProbeTime: null
      lastTransitionTime: "2022-06-21T14:10:09Z"
      reason: podReady
      status: "True"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2022-06-21T14:09:02Z"
      reason: pvcBound
      status: "True"
      type: PVCReady
  links:
    external:
      cert: |-
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
      volumes:
        - formats:
            - format: raw
              url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img
            - format: gzip
              url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img.gz
          name: example-disk
    internal:
      cert: |-
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
      volumes:
        - formats:
            - format: raw
              url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img
            - format: gzip
              url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img.gz
          name: example-disk
  phase: Ready
  serviceName: virt-export-example-export
9.7.2. Accessing exported virtual machine manifests
After you export a virtual machine (VM) or snapshot, you can get the VirtualMachine
manifest and related information from the export server.
Prerequisites
You exported a virtual machine or VM snapshot by creating a
VirtualMachineExport
custom resource (CR).NoteVirtualMachineExport
objects that have thespec.source.kind: PersistentVolumeClaim
parameter do not generate virtual machine manifests.
Procedure
To access the manifests, you must first copy the certificates from the source cluster to the target cluster.
- Log in to the source cluster.
Save the certificates to the
cacert.crt
file by running the following command:$ oc get vmexport <export_name> -o jsonpath={.status.links.external.cert} > cacert.crt 1
- 1
- Replace
<export_name>
with themetadata.name
value from theVirtualMachineExport
object.
-
Copy the
cacert.crt
file to the target cluster.
Decode the token in the source cluster and save it to the
token_decode
file by running the following command:$ oc get secret export-token-<export_name> -o jsonpath={.data.token} | base64 --decode > token_decode 1
- 1
- Replace
<export_name>
with themetadata.name
value from theVirtualMachineExport
object.
-
Copy the
token_decode
file to the target cluster. Get the
VirtualMachineExport
custom resource by running the following command:$ oc get vmexport <export_name> -o yaml
Review the
status.links
stanza, which is divided intoexternal
andinternal
sections. Note themanifests.url
fields within each section:Example output
apiVersion: export.kubevirt.io/v1beta1
kind: VirtualMachineExport
metadata:
  name: example-export
spec:
  source:
    apiGroup: "kubevirt.io"
    kind: VirtualMachine
    name: example-vm
  tokenSecretRef: example-token
status:
  #...
  links:
    external:
      #...
      manifests:
        - type: all
          url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/all 1
        - type: auth-header-secret
          url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret 2
    internal:
      #...
      manifests:
        - type: all
          url: https://virt-export-export-pvc.default.svc/internal/manifests/all 3
        - type: auth-header-secret
          url: https://virt-export-export-pvc.default.svc/internal/manifests/secret
  phase: Ready
  serviceName: virt-export-example-export
- 1
- Contains the
VirtualMachine
manifest,DataVolume
manifest, if present, and aConfigMap
manifest that contains the public certificate for the external URL’s ingress or route. - 2
- Contains a secret containing a header that is compatible with Containerized Data Importer (CDI). The header contains a text version of the export token.
- 3
- Contains the
VirtualMachine
manifest,DataVolume
manifest, if present, and aConfigMap
manifest that contains the certificate for the internal URL’s export server.
- Log in to the target cluster.
Get the
Secret
manifest by running the following command:$ curl --cacert cacert.crt <secret_manifest_url> -H \ 1 "x-kubevirt-export-token:token_decode" -H \ 2 "Accept:application/yaml"
For example:
$ curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret -H "x-kubevirt-export-token:token_decode" -H "Accept:application/yaml"
Get the manifests of
type: all
, such as theConfigMap
andVirtualMachine
manifests, by running the following command:$ curl --cacert cacert.crt <all_manifest_url> -H \ 1 "x-kubevirt-export-token:token_decode" -H \ 2 "Accept:application/yaml"
For example:
$ curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/all -H "x-kubevirt-export-token:token_decode" -H "Accept:application/yaml"
Next steps
-
You can now create the
ConfigMap
andVirtualMachine
objects on the target cluster by using the exported manifests.
9.8. Managing virtual machine instances
If you have standalone virtual machine instances (VMIs) that were created independently outside of the OpenShift Virtualization environment, you can manage them by using the web console or by using oc
or virtctl
commands from the command-line interface (CLI).
The virtctl
command provides more virtualization options than the oc
command. For example, you can use virtctl
to pause a VM or expose a port.
9.8.1. About virtual machine instances
A virtual machine instance (VMI) is a representation of a running virtual machine (VM). When a VMI is owned by a VM or by another object, you manage it through its owner in the web console or by using the oc
command-line interface (CLI).
A standalone VMI is created and started independently with a script, through automation, or by using other methods in the CLI. In your environment, you might have standalone VMIs that were developed and started outside of the OpenShift Virtualization environment. You can continue to manage those standalone VMIs by using the CLI. You can also use the web console for specific tasks associated with standalone VMIs:
- List standalone VMIs and their details.
- Edit labels and annotations for a standalone VMI.
- Delete a standalone VMI.
When you delete a VM, the associated VMI is automatically deleted. You delete a standalone VMI directly because it is not owned by VMs or other objects.
Before you uninstall OpenShift Virtualization, list and view the standalone VMIs by using the CLI or the web console. Then, delete any outstanding VMIs.
When you edit a VM, some settings might be applied to the VMIs dynamically and without the need for a restart. Any change made to a VM object that cannot be applied to the VMIs dynamically will trigger the RestartRequired
VM condition. Changes are effective on the next reboot, and the condition is removed.
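To check whether a VM currently reports this condition, you can query its status conditions; a sketch using a generic jsonpath expression:

$ oc get vm <vm_name> -o jsonpath='{.status.conditions[?(@.type=="RestartRequired")]}'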
9.8.2. Listing all virtual machine instances using the CLI
You can list all virtual machine instances (VMIs) in your cluster, including standalone VMIs and those owned by virtual machines, by using the oc
command-line interface (CLI).
Procedure
List all VMIs by running the following command:
$ oc get vmis -A
9.8.3. Listing standalone virtual machine instances using the web console
Using the web console, you can list and view standalone virtual machine instances (VMIs) in your cluster that are not owned by virtual machines (VMs).
VMIs that are owned by VMs or other objects are not displayed in the web console. The web console displays only standalone VMIs. If you want to list all VMIs in your cluster, you must use the CLI.
Procedure
- Click Virtualization → VirtualMachines from the side menu. You can identify a standalone VMI by a dark-colored badge next to its name.
9.8.4. Editing a standalone virtual machine instance using the web console
You can edit the annotations and labels of a standalone virtual machine instance (VMI) using the web console. Other fields are not editable.
Procedure
- In the OpenShift Container Platform console, click Virtualization → VirtualMachines from the side menu.
- Select a standalone VMI to open the VirtualMachineInstance details page.
- On the Details tab, click the pencil icon beside Annotations or Labels.
- Make the relevant changes and click Save.
9.8.5. Deleting a standalone virtual machine instance using the CLI
You can delete a standalone virtual machine instance (VMI) by using the oc
command-line interface (CLI).
Prerequisites
- Identify the name of the VMI that you want to delete.
Procedure
Delete the VMI by running the following command:
$ oc delete vmi <vmi_name>
9.8.6. Deleting a standalone virtual machine instance using the web console
Delete a standalone virtual machine instance (VMI) from the web console.
Procedure
- In the OpenShift Container Platform web console, click Virtualization → VirtualMachines from the side menu.
- Click Actions → Delete VirtualMachineInstance.
- In the confirmation pop-up window, click Delete to permanently delete the standalone VMI.
9.9. Controlling virtual machine states
You can stop, start, restart, pause, and unpause virtual machines from the web console.
You can use virtctl
to manage virtual machine states and perform other actions from the CLI. For example, you can use virtctl
to force stop a VM or expose a port.
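For reference, the equivalent virtctl commands look like the following sketch; the VM name, service name, and port are placeholders:

$ virtctl start example-vm                                    # start a stopped VM
$ virtctl stop example-vm                                     # gracefully stop a running VM
$ virtctl restart example-vm                                  # restart a running VM
$ virtctl pause vm example-vm                                 # pause the guest
$ virtctl unpause vm example-vm                               # resume the guest
$ virtctl expose vm example-vm --name=example-ssh --port=22   # expose a port as a service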
9.9.1. Starting a virtual machine
You can start a virtual machine from the web console.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- Find the row that contains the virtual machine that you want to start.
- Navigate to the appropriate menu for your use case:
  - To stay on this page, where you can perform actions on multiple virtual machines, click the Options menu located at the far right end of the row and click Start VirtualMachine.
  - To view comprehensive information about the selected virtual machine before you start it, access the VirtualMachine details page by clicking the name of the virtual machine, and then click Actions → Start.
When you start a virtual machine that is provisioned from a URL source for the first time, the virtual machine has a status of Importing while OpenShift Virtualization imports the container from the URL endpoint. Depending on the size of the image, this process might take several minutes.
9.9.2. Stopping a virtual machine
You can stop a virtual machine from the web console.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- Find the row that contains the virtual machine that you want to stop.
- Navigate to the appropriate menu for your use case:
  - To stay on this page, where you can perform actions on multiple virtual machines, click the Options menu located at the far right end of the row and click Stop VirtualMachine.
  - To view comprehensive information about the selected virtual machine before you stop it, access the VirtualMachine details page by clicking the name of the virtual machine, and then click Actions → Stop.
9.9.3. Restarting a virtual machine
You can restart a running virtual machine from the web console.
To avoid errors, do not restart a virtual machine while it has a status of Importing.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- Find the row that contains the virtual machine that you want to restart.
- Navigate to the appropriate menu for your use case:
  - To stay on this page, where you can perform actions on multiple virtual machines, click the Options menu located at the far right end of the row and click Restart.
  - To view comprehensive information about the selected virtual machine before you restart it, access the VirtualMachine details page by clicking the name of the virtual machine, and then click Actions → Restart.
9.9.4. Pausing a virtual machine
You can pause a virtual machine from the web console.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- Find the row that contains the virtual machine that you want to pause.
- Navigate to the appropriate menu for your use case:
  - To stay on this page, where you can perform actions on multiple virtual machines, click the Options menu located at the far right end of the row and click Pause VirtualMachine.
  - To view comprehensive information about the selected virtual machine before you pause it, access the VirtualMachine details page by clicking the name of the virtual machine, and then click Actions → Pause.
9.9.5. Unpausing a virtual machine
You can unpause a paused virtual machine from the web console.
Prerequisites
- At least one of your virtual machines must have a status of Paused.
Procedure
- Click Virtualization → VirtualMachines from the side menu.
- Find the row that contains the virtual machine that you want to unpause.
- Navigate to the appropriate menu for your use case:
  - To stay on this page, where you can perform actions on multiple virtual machines, click the Options menu located at the far right end of the row and click Unpause VirtualMachine.
  - To view comprehensive information about the selected virtual machine before you unpause it, access the VirtualMachine details page by clicking the name of the virtual machine, and then click Actions → Unpause.
9.9.6. Controlling the state of multiple virtual machines
You can start, stop, restart, pause, and unpause multiple virtual machines from the web console.
Procedure
- Navigate to Virtualization → VirtualMachines in the web console.
- Optional: To limit the number of displayed virtual machines, select a relevant project from the Projects list.
- Select a checkbox next to the virtual machines you want to work with. To select all virtual machines, click the checkbox in the VirtualMachines table header.
- Click Actions and select the intended action from the menu.
9.10. Using virtual Trusted Platform Module devices
Add a virtual Trusted Platform Module (vTPM) device to a new or existing virtual machine by editing the VirtualMachine
(VM) or VirtualMachineInstance
(VMI) manifest.
With OpenShift Virtualization 4.18 and newer, you can export virtual machines (VMs) with attached vTPM devices, create snapshots of these VMs, and restore VMs from these snapshots. However, cloning a VM with a vTPM device attached to it or creating a new VM from its snapshot is not supported.
9.10.1. About vTPM devices
A virtual Trusted Platform Module (vTPM) device functions like a physical Trusted Platform Module (TPM) hardware chip. You can use a vTPM device with any operating system, but Windows 11 requires the presence of a TPM chip to install or boot. A vTPM device allows VMs created from a Windows 11 image to function without a physical TPM chip.
A vTPM device also protects virtual machines by storing secrets without physical hardware. OpenShift Virtualization supports persisting vTPM device state by using Persistent Volume Claims (PVCs) for VMs. You must specify the storage class to be used by the PVC by setting the vmStateStorageClass
attribute in the HyperConverged
custom resource (CR):
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  vmStateStorageClass: <storage_class_name>
# ...
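You can also set this attribute with a merge patch instead of editing the CR in an editor; a sketch, with the storage class name as a placeholder:

$ oc patch hco kubevirt-hyperconverged -n openshift-cnv \
  --type merge -p '{"spec": {"vmStateStorageClass": "<storage_class_name>"}}'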
If you do not enable vTPM, then the VM does not recognize a TPM device, even if the node has one.
9.10.2. Adding a vTPM device to a virtual machine
Adding a virtual Trusted Platform Module (vTPM) device to a virtual machine (VM) allows you to run a VM created from a Windows 11 image without a physical TPM device. A vTPM device also stores secrets for that VM.
Prerequisites
-
You have installed the OpenShift CLI (
oc
).
Procedure
Run the following command to update the VM configuration:
$ oc edit vm <vm_name> -n <namespace>
Edit the VM specification to add the vTPM device. For example:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  template:
    spec:
      domain:
        devices:
          tpm:
            persistent: true
# ...
- To apply your changes, save and exit the editor.
- Optional: If you edited a running virtual machine, you must restart it for the changes to take effect.
9.11. Managing virtual machines with OpenShift Pipelines
Red Hat OpenShift Pipelines is a Kubernetes-native CI/CD framework that allows developers to design and run each step of the CI/CD pipeline in its own container.
By using OpenShift Pipelines tasks and the example pipeline, you can do the following:
- Create and manage virtual machines (VMs), persistent volume claims (PVCs), data volumes, and data sources.
- Run commands in VMs.
-
Manipulate disk images with
libguestfs
tools.
The tasks are located in the task catalog (ArtifactHub).
The example Windows pipeline is located in the pipeline catalog (ArtifactHub).
9.11.1. Prerequisites
-
You have access to an OpenShift Container Platform cluster with
cluster-admin
permissions. -
You have installed the OpenShift CLI (
oc
). - You have installed OpenShift Pipelines.
9.11.2. Supported virtual machine tasks
The following table shows the supported tasks.
Task | Description
---|---
 | Create a virtual machine from a provided manifest or with virtctl.
create-vm-from-template | Create a virtual machine from a template.
copy-template | Copy a virtual machine template.
modify-vm-template | Modify a virtual machine template.
 | Create or delete data volumes or data sources.
 | Run a script or a command in a virtual machine and stop or delete the virtual machine afterward.
 | Use the virt-customize tool to run a customization script or to apply the first boot command on a target PVC.
 | Use the virt-sysprep tool to seal or unseal a target PVC.
 | Wait for a specific status of a virtual machine instance and fail or succeed based on the status.
Virtual machine creation in pipelines now uses ClusterInstanceType and ClusterPreference instead of template-based tasks, which have been deprecated. The create-vm-from-template, copy-template, and modify-vm-template tasks remain available but are not used in the default pipelines.
9.11.3. Windows EFI installer pipeline
You can run the Windows EFI installer pipeline by using the web console or CLI.
The Windows EFI installer pipeline installs Windows 10, Windows 11, or Windows Server 2022 into a new data volume from a Windows installation image (ISO file). A custom answer file is used to run the installation process.
The Windows EFI installer pipeline uses a config map file with sysprep
predefined by OpenShift Container Platform and suitable for Microsoft ISO files. For ISO files pertaining to different Windows editions, it may be necessary to create a new config map file with a system-specific sysprep
definition.
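One way to provide a system-specific sysprep definition is to create a config map from your answer file and reference it when you start the pipeline; a sketch, with the file, config map, and project names as placeholders:

$ oc create configmap custom-sysprep \
  --from-file=autounattend.xml=./autounattend.xml \
  -n example-project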
9.11.3.1. Running the example pipelines using the web console
You can run the example pipelines from the Pipelines menu in the web console.
Procedure
- Click Pipelines → Pipelines in the side menu.
- Select a pipeline to open the Pipeline details page.
- From the Actions list, select Start. The Start Pipeline dialog is displayed.
- Keep the default values for the parameters and then click Start to run the pipeline. The Details tab tracks the progress of each task and displays the pipeline status.
9.11.3.2. Running the example pipelines using the CLI
Use a PipelineRun
resource to run the example pipelines. A PipelineRun
object is the running instance of a pipeline. It instantiates a pipeline for execution with specific inputs, outputs, and execution parameters on a cluster. It also creates a TaskRun
object for each task in the pipeline.
Procedure
To run the Microsoft Windows 11 installer pipeline, create the following
PipelineRun
manifest:apiVersion: tekton.dev/v1 kind: PipelineRun metadata: generateName: windows11-installer-run- labels: pipelinerun: windows11-installer-run spec: params: - name: winImageDownloadURL value: <windows_image_download_url> 1 - name: acceptEula value: false 2 pipelineRef: params: - name: catalog value: redhat-pipelines - name: type value: artifact - name: kind value: pipeline - name: name value: windows-efi-installer - name: version value: 4.18 resolver: hub taskRunSpecs: - pipelineTaskName: modify-windows-iso-file PodTemplate: securityContext: fsGroup: 107 runAsUser: 107
- 1
- Specify the URL for the Windows 11 64-bit ISO file. The product’s language must be English (United States).
- 2
- Example
PipelineRun
objects have a special parameter,acceptEula
. By setting this parameter, you are agreeing to the applicable Microsoft user license agreements for each deployment or installation of the Microsoft products. If you set it to false, the pipeline exits at the first task.
Apply the
PipelineRun
manifest:$ oc apply -f windows11-customize-run.yaml
9.11.4. Additional resources
9.12. Advanced virtual machine management
9.12.1. Working with resource quotas for virtual machines
Create and manage resource quotas for virtual machines.
9.12.1.1. Setting resource quota limits for virtual machines
Resource quotas that only use requests automatically work with virtual machines (VMs). If your resource quota uses limits, you must manually set resource limits on VMs. Resource limits must be at least 100 MiB larger than resource requests.
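For context, a namespace quota that enforces limits might look like the following sketch; the quota name, namespace, and sizes are placeholders. With such a quota in place, every VM in the namespace must declare memory limits as shown in the procedure below:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: example-quota             # placeholder name
  namespace: example-namespace    # placeholder namespace
spec:
  hard:
    requests.memory: 8Gi
    limits.memory: 8Gi            # because limits are quota-ed, pods without memory limits are rejected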
Procedure
Set limits for a VM by editing the
VirtualMachine
manifest. For example:apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: with-limits spec: runStrategy: Halted template: spec: domain: # ... resources: requests: memory: 128Mi limits: memory: 256Mi 1
- 1
- This configuration is supported because the
limits.memory
value is at least100Mi
larger than therequests.memory
value.
-
Save the
VirtualMachine
manifest.
9.12.1.2. Additional resources
9.12.2. Configuring the Application-Aware Quota (AAQ) Operator
You can use the Application-Aware Quota (AAQ) Operator to customize and manage resource quotas for individual components in an OpenShift Container Platform cluster.
9.12.2.1. About the AAQ Operator
The Application-Aware Quota (AAQ) Operator provides more flexible and extensible quota management compared to the native ResourceQuota
object in OpenShift Container Platform.
In a multi-tenant cluster environment, where multiple workloads operate on shared infrastructure and resources, using the Kubernetes native ResourceQuota
object to limit aggregate CPU and memory consumption presents infrastructure overhead and live migration challenges for OpenShift Virtualization workloads.
OpenShift Virtualization requires significant compute resource allocation to handle virtual machine (VM) live migrations and manage VM infrastructure overhead. When upgrading OpenShift Virtualization, you must migrate VMs to upgrade the virt-launcher
pod. However, migrating a VM in the presence of a resource quota can cause the migration, and subsequently the upgrade, to fail.
With AAQ, you can allocate resources for VMs without interfering with cluster-level activities such as upgrades and node maintenance. The AAQ Operator also supports non-compute resources, which eliminates the need to manage the native resource quota and AAQ API objects separately.
9.12.2.1.1. AAQ Operator controller and custom resources
The AAQ Operator introduces two new API objects defined as custom resource definitions (CRDs) for managing alternative quota implementations across multiple namespaces:
ApplicationAwareResourceQuota
: Sets aggregate quota restrictions enforced per namespace. TheApplicationAwareResourceQuota
API is compatible with the nativeResourceQuota
object and shares the same specification and status definitions.Example manifest
apiVersion: aaq.kubevirt.io/v1alpha1
kind: ApplicationAwareResourceQuota
metadata:
  name: example-resource-quota
spec:
  hard:
    requests.memory: 1Gi
    limits.memory: 1Gi
    requests.cpu/vmi: "1"
    requests.memory/vmi: 1Gi
# ...
ApplicationAwareClusterResourceQuota
: Mirrors theApplicationAwareResourceQuota
object at a cluster scope. It is compatible with the nativeClusterResourceQuota
API object and shares the same specification and status definitions. When creating an AAQ cluster quota, you can select multiple namespaces based on annotation selection, label selection, or both by editing thespec.selector.labels
orspec.selector.annotations
fields.Example manifest
apiVersion: aaq.kubevirt.io/v1alpha1
kind: ApplicationAwareClusterResourceQuota 1
metadata:
  name: example-resource-quota
spec:
  quota:
    hard:
      requests.memory: 1Gi
      limits.memory: 1Gi
      requests.cpu/vmi: "1"
      requests.memory/vmi: 1Gi
  selector:
    annotations: null
    labels:
      matchLabels:
        kubernetes.io/metadata.name: default
# ...
- 1
- You can only create an
ApplicationAwareClusterResourceQuota
object if thespec.allowApplicationAwareClusterResourceQuota
field in theHyperConverged
custom resource (CR) is set totrue
.
NoteIf both
spec.selector.labels
andspec.selector.annotations
fields are set, only namespaces that match both are selected.
The AAQ controller uses a scheduling gate mechanism to evaluate whether enough quota is available to run a workload. If it is, the scheduling gate is removed from the pod and the pod is considered ready for scheduling. The quota usage status is updated to indicate how much of the quota is used.
If the CPU and memory requests and limits for the workload exceed the enforced quota usage limit, the pod remains in SchedulingGated
status until there is enough quota available. The AAQ controller creates an event of type Warning
with details on why the quota was exceeded. You can view the event details by using the oc get events
command.
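For example, you can list the warning events in a namespace to see why a pod remains gated; the namespace is a placeholder:

$ oc get events -n example-namespace --field-selector type=Warning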
Pods that have the spec.nodeName
field set to a specific node cannot use namespaces that match the spec.namespaceSelector
labels defined in the HyperConverged
CR.
9.12.2.2. Enabling the AAQ Operator
To deploy the AAQ Operator, set the enableApplicationAwareQuota
feature gate to true
in the HyperConverged
custom resource (CR).
Prerequisites
-
You have access to the cluster as a user with
cluster-admin
privileges. -
You have installed the OpenShift CLI (
oc
).
Procedure
Set the
enableApplicationAwareQuota
feature gate totrue
in theHyperConverged
CR by running the following command:$ oc patch hco kubevirt-hyperconverged -n openshift-cnv \ --type json -p '[{"op": "add", "path": "/spec/featureGates/enableApplicationAwareQuota", "value": true}]'
9.12.2.3. Configuring the AAQ Operator by using the CLI
You can configure the AAQ Operator by specifying the fields of the spec.applicationAwareConfig
object in the HyperConverged
custom resource (CR).
Prerequisites
-
You have access to the cluster as a user with
cluster-admin
privileges. -
You have installed the OpenShift CLI (
oc
).
Procedure
Update the
HyperConverged
CR by running the following command:$ oc patch hco kubevirt-hyperconverged -n openshift-cnv --type merge -p '{ "spec": { "applicationAwareConfig": { "vmiCalcConfigName": "DedicatedVirtualResources", "namespaceSelector": { "matchLabels": { "app": "my-app" } }, "allowApplicationAwareClusterResourceQuota": true } } }'
where:
vmiCalcConfigName
Specifies how resource counting is managed for pods that run virtual machine (VM) workloads. Possible values are:
-
VmiPodUsage
: Counts compute resources for pods associated with VMs in the same way as native resource quotas and excludes migration-related resources. -
VirtualResources
: Counts compute resources based on the VM specifications, using the VM RAM size for memory and virtual CPUs for processing. -
DedicatedVirtualResources
(default): Similar toVirtualResources
, but separates resource tracking for pods associated with VMs by adding a/vmi
suffix to CPU and memory resource names. For example,requests.cpu/vmi
andrequests.memory/vmi
.
-
namespaceSelector
-
Determines the namespaces for which an AAQ scheduling gate is added to pods when they are created. If a namespace selector is not defined, the AAQ Operator targets namespaces with the
application-aware-quota/enable-gating
label as default. allowApplicationAwareClusterResourceQuota
-
If set to
true
, you can create and manage theApplicationAwareClusterResourceQuota
object. Setting this attribute totrue
can increase scheduling time.
9.12.2.4. Additional resources
9.12.3. Specifying nodes for virtual machines
You can place virtual machines (VMs) on specific nodes by using node placement rules.
9.12.3.1. About node placement for virtual machines
To ensure that virtual machines (VMs) run on appropriate nodes, you can configure node placement rules. You might want to do this if:
- You have several VMs. To ensure fault tolerance, you want them to run on different nodes.
- You have two chatty VMs. To avoid redundant inter-node routing, you want the VMs to run on the same node.
- Your VMs require specific hardware features that are not present on all available nodes.
- You have a pod that adds capabilities to a node, and you want to place a VM on that node so that it can use those capabilities.
Virtual machine placement relies on any existing node placement rules for workloads. If workloads are excluded from specific nodes on the component level, virtual machines cannot be placed on those nodes.
You can use the following rule types in the spec
field of a VirtualMachine
manifest:
nodeSelector
- Allows virtual machines to be scheduled on nodes that are labeled with the key-value pair or pairs that you specify in this field. The node must have labels that exactly match all listed pairs.
affinity
-
Enables you to use more expressive syntax to set rules that match nodes with virtual machines. For example, you can specify that a rule is a preference, rather than a hard requirement, so that virtual machines are still scheduled if the rule is not satisfied. Pod affinity, pod anti-affinity, and node affinity are supported for virtual machine placement. Pod affinity works for virtual machines because the
VirtualMachine
workload type is based on thePod
object. tolerations
Allows virtual machines to be scheduled on nodes that have matching taints. If a taint is applied to a node, that node only accepts virtual machines that tolerate the taint.
NoteAffinity rules only apply during scheduling. OpenShift Container Platform does not reschedule running workloads if the constraints are no longer met.
9.12.3.2. Node placement examples
The following example YAML file snippets use nodePlacement
, affinity
, and tolerations
fields to customize node placement for virtual machines.
9.12.3.2.1. Example: VM node placement with nodeSelector
In this example, the virtual machine requires a node that has metadata containing both example-key-1 = example-value-1
and example-key-2 = example-value-2
labels.
If there are no nodes that fit this description, the virtual machine is not scheduled.
Example VM manifest
metadata:
  name: example-vm-node-selector
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  template:
    spec:
      nodeSelector:
        example-key-1: example-value-1
        example-key-2: example-value-2
# ...
9.12.3.2.2. Example: VM node placement with pod affinity and pod anti-affinity
In this example, the VM must be scheduled on a node that has a running pod with the label example-key-1 = example-value-1
. If there is no such pod running on any node, the VM is not scheduled.
If possible, the VM is not scheduled on a node that has any pod with the label example-key-2 = example-value-2
. However, if all candidate nodes have a pod with this label, the scheduler ignores this constraint.
Example VM manifest
metadata:
  name: example-vm-pod-affinity
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  template:
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution: 1
          - labelSelector:
              matchExpressions:
                - key: example-key-1
                  operator: In
                  values:
                    - example-value-1
            topologyKey: kubernetes.io/hostname
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution: 2
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: example-key-2
                    operator: In
                    values:
                      - example-value-2
              topologyKey: kubernetes.io/hostname
# ...
- 1
- If you use the
requiredDuringSchedulingIgnoredDuringExecution
rule type, the VM is not scheduled if the constraint is not met. - 2
- If you use the
preferredDuringSchedulingIgnoredDuringExecution
rule type, the VM is still scheduled if the constraint is not met, as long as all required constraints are met.
9.12.3.2.3. Example: VM node placement with node affinity
In this example, the VM must be scheduled on a node that has the label example.io/example-key = example-value-1
or the label example.io/example-key = example-value-2
. The constraint is met if only one of the labels is present on the node. If neither label is present, the VM is not scheduled.
If possible, the scheduler avoids nodes that have the label example-node-label-key = example-node-label-value
. However, if all candidate nodes have this label, the scheduler ignores this constraint.
Example VM manifest
metadata:
  name: example-vm-node-affinity
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution: 1
            nodeSelectorTerms:
              - matchExpressions:
                  - key: example.io/example-key
                    operator: In
                    values:
                      - example-value-1
                      - example-value-2
          preferredDuringSchedulingIgnoredDuringExecution: 2
            - weight: 1
              preference:
                matchExpressions:
                  - key: example-node-label-key
                    operator: In
                    values:
                      - example-node-label-value
# ...
- 1
- If you use the
requiredDuringSchedulingIgnoredDuringExecution
rule type, the VM is not scheduled if the constraint is not met. - 2
- If you use the
preferredDuringSchedulingIgnoredDuringExecution
rule type, the VM is still scheduled if the constraint is not met, as long as all required constraints are met.
9.12.3.2.4. Example: VM node placement with tolerations
In this example, nodes that are reserved for virtual machines are already tainted with the key=virtualization:NoSchedule taint. Because this virtual machine has matching tolerations, it can schedule onto the tainted nodes.
A virtual machine that tolerates a taint is not required to schedule onto a node with that taint.
Example VM manifest
metadata:
  name: example-vm-tolerations
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  tolerations:
    - key: "key"
      operator: "Equal"
      value: "virtualization"
      effect: "NoSchedule"
# ...
9.12.3.3. Additional resources
9.12.4. Configuring the default CPU model
Use the defaultCPUModel
setting in the HyperConverged
custom resource (CR) to define a cluster-wide default CPU model.
The virtual machine (VM) CPU model depends on the availability of CPU models within the VM and the cluster.
If the VM does not have a defined CPU model:
- The defaultCPUModel is automatically set using the CPU model defined at the cluster-wide level.
If both the VM and the cluster have a defined CPU model:
- The VM’s CPU model takes precedence.
If neither the VM nor the cluster has a defined CPU model:
- The host-model is automatically set using the CPU model defined at the host level.
9.12.4.1. Configuring the default CPU model
Configure the defaultCPUModel
by updating the HyperConverged
custom resource (CR). You can change the defaultCPUModel
while OpenShift Virtualization is running.
The defaultCPUModel
is case sensitive.
Prerequisites
- Install the OpenShift CLI (oc).
Procedure
Open the
HyperConverged
CR by running the following command:$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Add the
defaultCPUModel
field to the CR and set the value to the name of a CPU model that exists in the cluster:apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: defaultCPUModel: "EPYC"
- Apply the YAML file to your cluster.
9.12.5. Using UEFI mode for virtual machines
You can boot a virtual machine (VM) in Unified Extensible Firmware Interface (UEFI) mode.
9.12.5.1. About UEFI mode for virtual machines
Unified Extensible Firmware Interface (UEFI), like legacy BIOS, initializes hardware components and operating system image files when a computer starts. UEFI supports more modern features and customization options than BIOS, enabling faster boot times.
It stores all the information about initialization and startup in a file with a .efi
extension, which is stored on a special partition called EFI System Partition (ESP). The ESP also contains the boot loader programs for the operating system that is installed on the computer.
9.12.5.2. Booting virtual machines in UEFI mode
You can configure a virtual machine to boot in UEFI mode by editing the VirtualMachine
manifest.
Prerequisites
-
Install the OpenShift CLI (
oc
).
Procedure
Edit or create a
VirtualMachine
manifest file. Use thespec.firmware.bootloader
stanza to configure UEFI mode:Booting in UEFI mode with secure boot active
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  labels:
    special: vm-secureboot
  name: vm-secureboot
spec:
  template:
    metadata:
      labels:
        special: vm-secureboot
    spec:
      domain:
        devices:
          disks:
            - disk:
                bus: virtio
              name: containerdisk
        features:
          acpi: {}
          smm:
            enabled: true 1
        firmware:
          bootloader:
            efi:
              secureBoot: true 2
# ...
- 1
- OpenShift Virtualization requires System Management Mode (
SMM
) to be enabled for Secure Boot in UEFI mode to occur. - 2
- OpenShift Virtualization supports a VM with or without Secure Boot when using UEFI mode. If Secure Boot is enabled, then UEFI mode is required. However, UEFI mode can be enabled without using Secure Boot.
Apply the manifest to your cluster by running the following command:
$ oc create -f <file_name>.yaml
9.12.5.3. Enabling persistent EFI
You can enable EFI persistence in a VM by configuring an RWX storage class at the cluster level and adjusting the settings in the EFI section of the VM.
Prerequisites
- You must have cluster administrator privileges.
- You must have a storage class that supports RWX access mode and FS volume mode.
Procedure
Enable the
VMPersistentState
feature gate by running the following command:$ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \ --type json -p '[{"op":"replace","path":"/spec/featureGates/VMPersistentState", "value": true}]'
9.12.5.4. Configuring VMs with persistent EFI
You can configure a VM to have EFI persistence enabled by editing its manifest file.
Prerequisites
- The VMPersistentState feature gate is enabled.
Procedure
Edit the VM manifest file and save to apply settings.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm
spec:
  template:
    spec:
      domain:
        firmware:
          bootloader:
            efi:
              persistent: true
# ...
9.12.6. Configuring PXE booting for virtual machines
PXE booting, or network booting, is available in OpenShift Virtualization. Network booting allows a computer to boot and load an operating system or other program without requiring a locally attached storage device. For example, you can use it to choose your desired OS image from a PXE server when deploying a new host.
9.12.6.1. Prerequisites
- A Linux bridge must be connected.
- The PXE server must be connected to the same VLAN as the bridge.
9.12.6.2. PXE booting with a specified MAC address
As an administrator, you can boot a client over the network by first creating a NetworkAttachmentDefinition
object for your PXE network. Then, reference the network attachment definition in your virtual machine instance configuration file before you start the virtual machine instance. You can also specify a MAC address in the virtual machine instance configuration file, if required by the PXE server.
Prerequisites
- A Linux bridge must be connected.
- The PXE server must be connected to the same VLAN as the bridge.
Procedure
Configure a PXE network on the cluster:
Create the network attachment definition file for PXE network
pxe-net-conf
:apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: pxe-net-conf 1 spec: config: | { "cniVersion": "0.3.1", "name": "pxe-net-conf", 2 "type": "bridge", 3 "bridge": "bridge-interface", 4 "macspoofchk": false, 5 "vlan": 100, 6 "disableContainerInterface": true, "preserveDefaultVlan": false 7 }
- 1
- The name for the
NetworkAttachmentDefinition
object. - 2
- The name for the configuration. It is recommended to match the configuration name to the
name
value of the network attachment definition. - 3
- The actual name of the Container Network Interface (CNI) plugin that provides the network for this network attachment definition. This example uses a Linux bridge CNI plugin. You can also use an OVN-Kubernetes localnet or an SR-IOV CNI plugin.
- 4
- The name of the Linux bridge configured on the node.
- 5
- Optional: A flag to enable the MAC spoof check. When set to
true
, you cannot change the MAC address of the pod or guest interface. This attribute allows only a single MAC address to exit the pod, which provides security against a MAC spoofing attack. - 6
- Optional: The VLAN tag. No additional VLAN configuration is required on the node network configuration policy.
- 7
- Optional: Indicates whether the VM connects to the bridge through the default VLAN. The default value is
true
.
Create the network attachment definition by using the file you created in the previous step:
$ oc create -f pxe-net-conf.yaml
Edit the virtual machine instance configuration file to include the details of the interface and network.
Specify the network and MAC address, if required by the PXE server. If the MAC address is not specified, a value is assigned automatically.
Ensure that
bootOrder
is set to1
so that the interface boots first. In this example, the interface is connected to a network called<pxe-net>
interfaces:
  - masquerade: {}
    name: default
  - bridge: {}
    name: pxe-net
    macAddress: de:00:00:00:00:de
    bootOrder: 1
NoteBoot order is global for interfaces and disks.
Assign a boot device number to the disk to ensure proper booting after operating system provisioning.
Set the disk
bootOrder
value to2
devices:
  disks:
    - disk:
        bus: virtio
      name: containerdisk
      bootOrder: 2
Specify that the network is connected to the previously created network attachment definition. In this scenario,
<pxe-net>
is connected to the network attachment definition called<pxe-net-conf>
networks:
  - name: default
    pod: {}
  - name: pxe-net
    multus:
      networkName: pxe-net-conf
Create the virtual machine instance:
$ oc create -f vmi-pxe-boot.yaml
Example output
virtualmachineinstance.kubevirt.io "vmi-pxe-boot" created
Wait for the virtual machine instance to run:
$ oc get vmi vmi-pxe-boot -o yaml | grep -i phase
  phase: Running
View the virtual machine instance using VNC:
$ virtctl vnc vmi-pxe-boot
- Watch the boot screen to verify that the PXE boot is successful.
Log in to the virtual machine instance:
$ virtctl console vmi-pxe-boot
Verification
Verify the interfaces and MAC address on the virtual machine and that the interface connected to the bridge has the specified MAC address. In this case, we used
eth1
for the PXE boot, without an IP address. The other interface,eth0
, got an IP address from OpenShift Container Platform.$ ip addr
Example output
...
3. eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
   link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff
9.12.6.3. OpenShift Virtualization networking glossary
The following terms are used throughout OpenShift Virtualization documentation:
- Container Network Interface (CNI)
- A Cloud Native Computing Foundation project, focused on container network connectivity. OpenShift Virtualization uses CNI plugins to build upon the basic Kubernetes networking functionality.
- Multus
- A "meta" CNI plugin that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs.
- Custom resource definition (CRD)
- A Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource.
- Network attachment definition (NAD)
- A CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtual machine instances to one or more networks.
- Node network configuration policy (NNCP)
-
A CRD introduced by the nmstate project, describing the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a
NodeNetworkConfigurationPolicy
manifest to the cluster.
9.12.7. Using huge pages with virtual machines
You can use huge pages as backing memory for virtual machines in your cluster.
9.12.7.1. Prerequisites
- Nodes must have pre-allocated huge pages configured.
9.12.7.2. What huge pages do
Memory is managed in blocks known as pages. On most systems, a page is 4Ki. 1Mi of memory is equal to 256 pages; 1Gi of memory is 262,144 pages, and so on. CPUs have a built-in memory management unit that manages a list of these pages in hardware. The Translation Lookaside Buffer (TLB) is a small hardware cache of virtual-to-physical page mappings. If the virtual address passed in a hardware instruction can be found in the TLB, the mapping can be determined quickly. If not, a TLB miss occurs, and the system falls back to slower, software-based address translation, resulting in performance issues. Since the size of the TLB is fixed, the only way to reduce the chance of a TLB miss is to increase the page size.
A huge page is a memory page that is larger than 4Ki. On x86_64 architectures, there are two common huge page sizes: 2Mi and 1Gi. Sizes vary on other architectures. To use huge pages, code must be written so that applications are aware of them. Transparent Huge Pages (THP) attempt to automate the management of huge pages without application knowledge, but they have limitations. In particular, they are limited to 2Mi page sizes. THP can lead to performance degradation on nodes with high memory utilization or fragmentation because of the defragmenting efforts of THP, which can lock memory pages. For this reason, some applications might be designed to use, or might recommend, pre-allocated huge pages instead of THP.
In OpenShift Virtualization, virtual machines can be configured to consume pre-allocated huge pages.
9.12.7.3. Configuring huge pages for virtual machines
You can configure virtual machines to use pre-allocated huge pages by including the memory.hugepages.pageSize
and resources.requests.memory
parameters in your virtual machine configuration.
The memory request must be divisible by the page size. For example, you cannot request 500Mi
memory with a page size of 1Gi
.
The memory layouts of the host and the guest OS are unrelated. Huge pages requested in the virtual machine manifest apply to QEMU. Huge pages inside the guest can only be configured based on the amount of available memory of the virtual machine instance.
If you edit a running virtual machine, the virtual machine must be rebooted for the changes to take effect.
Prerequisites
- Nodes must have pre-allocated huge pages configured. For instructions, see Configuring huge pages at boot time.
Procedure
In your virtual machine configuration, add the
resources.requests.memory
andmemory.hugepages.pageSize
parameters to thespec.domain
. The following configuration snippet is for a virtual machine that requests a total of4Gi
memory with a page size of1Gi
kind: VirtualMachine
# ...
spec:
  domain:
    resources:
      requests:
        memory: "4Gi"
    memory:
      hugepages:
        pageSize: "1Gi"
# ...
Apply the virtual machine configuration:
$ oc apply -f <virtual_machine>.yaml
9.12.8. Enabling dedicated resources for virtual machines
To improve performance, you can dedicate node resources, such as CPU, to a virtual machine.
9.12.8.1. About dedicated resources
When you enable dedicated resources for your virtual machine, your virtual machine’s workload is scheduled on CPUs that will not be used by other processes. By using dedicated resources, you can improve the performance of the virtual machine and the accuracy of latency predictions.
9.12.8.2. Prerequisites
- The CPU Manager must be configured on the node. Verify that the node has the cpumanager = true label before scheduling virtual machine workloads; an example check is shown after this list.
- The virtual machine must be powered off.
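A quick way to check for nodes that carry the label (the output depends on your cluster):

$ oc get nodes -l cpumanager=true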
9.12.8.3. Enabling dedicated resources for a virtual machine
You enable dedicated resources for a virtual machine in the Details tab. Virtual machines that were created from a Red Hat template can be configured with dedicated resources.
Procedure
- In the OpenShift Container Platform console, click Virtualization → VirtualMachines from the side menu.
- Select a virtual machine to open the VirtualMachine details page.
- On the Configuration → Scheduling tab, click the edit icon beside Dedicated Resources.
- Select Schedule this workload with dedicated resources (guaranteed policy).
- Click Save.
9.12.9. Scheduling virtual machines
You can schedule a virtual machine (VM) on a node by ensuring that the VM’s CPU model and policy attribute are matched for compatibility with the CPU models and policy attributes supported by the node.
9.12.9.1. Policy attributes
You can schedule a virtual machine (VM) by specifying a policy attribute and a CPU feature that is matched for compatibility when the VM is scheduled on a node. A policy attribute specified for a VM determines how that VM is scheduled on a node.
Policy attribute | Description |
---|---|
force | The VM is forced to be scheduled on a node. This is true even if the host CPU does not support the VM’s CPU. |
require | Default policy that applies to a VM if the VM is not configured with a specific CPU model and feature specification. If a node is not configured to support CPU node discovery with this default policy attribute or any one of the other policy attributes, VMs are not scheduled on that node. Either the host CPU must support the VM’s CPU or the hypervisor must be able to emulate the supported CPU model. |
optional | The VM is added to a node if that VM is supported by the host’s physical machine CPU. |
disable | The VM cannot be scheduled with CPU node discovery. |
forbid | The VM is not scheduled even if the feature is supported by the host CPU and CPU node discovery is enabled. |
9.12.9.2. Setting a policy attribute and CPU feature
You can set a policy attribute and CPU feature for each virtual machine (VM) to ensure that it is scheduled on a node according to policy and feature. The CPU feature that you set is verified to ensure that it is supported by the host CPU or emulated by the hypervisor.
Procedure
Edit the
domain
spec of your VM configuration file. The following example sets the CPU feature and the require
policy for a virtual machine (VM):
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: myvm
spec:
  template:
    spec:
      domain:
        cpu:
          features:
          - name: apic 1
            policy: require 2
9.12.9.3. Scheduling virtual machines with the supported CPU model
You can configure a CPU model for a virtual machine (VM) to schedule it on a node where its CPU model is supported.
Procedure
Edit the
domain
spec of your virtual machine configuration file. The following example shows a specific CPU model defined for a VM:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: myvm
spec:
  template:
    spec:
      domain:
        cpu:
          model: Conroe 1
- 1
- CPU model for the VM.
9.12.9.4. Scheduling virtual machines with the host model
When the CPU model for a virtual machine (VM) is set to host-model
, the VM inherits the CPU model of the node where it is scheduled.
Procedure
Edit the
domain
spec of your VM configuration file. The following example shows host-model
being specified for the virtual machine:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: myvm
spec:
  template:
    spec:
      domain:
        cpu:
          model: host-model 1
- 1
- The VM inherits the CPU model of the node where it is scheduled.
9.12.9.5. Scheduling virtual machines with a custom scheduler
You can use a custom scheduler to schedule a virtual machine (VM) on a node.
Prerequisites
- A secondary scheduler is configured for your cluster.
Procedure
Add the custom scheduler to the VM configuration by editing the
VirtualMachine
manifest. For example:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-fedora
spec:
  runStrategy: Always
  template:
    spec:
      schedulerName: my-scheduler 1
      domain:
        devices:
          disks:
          - name: containerdisk
            disk:
              bus: virtio
# ...
- 1
- The name of the custom scheduler. If the
schedulerName
value does not match an existing scheduler, thevirt-launcher
pod stays in aPending
state until the specified scheduler is found.
Verification
Verify that the VM is using the custom scheduler specified in the
VirtualMachine
manifest by checking thevirt-launcher
pod events:View the list of pods in your cluster by entering the following command:
$ oc get pods
Example output
NAME                            READY   STATUS    RESTARTS   AGE
virt-launcher-vm-fedora-dpc87   2/2     Running   0          24m
Run the following command to display the pod events:
$ oc describe pod virt-launcher-vm-fedora-dpc87
The value of the
From
field in the output verifies that the scheduler name matches the custom scheduler specified in the VirtualMachine
manifest:
Example output
[...]
Events:
  Type    Reason     Age   From          Message
  ----    ------     ----  ----          -------
  Normal  Scheduled  21m   my-scheduler  Successfully assigned default/virt-launcher-vm-fedora-dpc87 to node01
[...]
Additional resources
9.12.10. Configuring PCI passthrough
The Peripheral Component Interconnect (PCI) passthrough feature enables you to access and manage hardware devices from a virtual machine (VM). When PCI passthrough is configured, the PCI devices function as if they were physically attached to the guest operating system.
Cluster administrators can expose and manage host devices that are permitted to be used in the cluster by using the oc
command-line interface (CLI).
9.12.10.1. Preparing nodes for GPU passthrough
You can prevent GPU operands from deploying on worker nodes that you designated for GPU passthrough.
9.12.10.1.1. Preventing NVIDIA GPU operands from deploying on nodes
If you use the NVIDIA GPU Operator in your cluster, you can apply the nvidia.com/gpu.deploy.operands=false
label to nodes that you do not want to configure for GPU or vGPU operands. This label prevents the creation of the pods that configure GPU or vGPU operands and terminates the pods if they already exist.
Prerequisites
-
The OpenShift CLI (
oc
) is installed.
Procedure
Label the node by running the following command:
$ oc label node <node_name> nvidia.com/gpu.deploy.operands=false 1
- 1
- Replace
<node_name>
with the name of a node where you do not want to install the NVIDIA GPU operands.
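If you later decide to allow the GPU or vGPU operands on the node again, you can remove the label by using the standard oc label syntax, where the trailing hyphen removes the label (shown as an illustration):
$ oc label node <node_name> nvidia.com/gpu.deploy.operands-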
Verification
Verify that the label was added to the node by running the following command:
$ oc describe node <node_name>
Optional: If GPU operands were previously deployed on the node, verify their removal.
Check the status of the pods in the
nvidia-gpu-operator
namespace by running the following command:$ oc get pods -n nvidia-gpu-operator
Example output
NAME                             READY   STATUS        RESTARTS   AGE
gpu-operator-59469b8c5c-hw9wj    1/1     Running       0          8d
nvidia-sandbox-validator-7hx98   1/1     Running       0          8d
nvidia-sandbox-validator-hdb7p   1/1     Running       0          8d
nvidia-sandbox-validator-kxwj7   1/1     Terminating   0          9d
nvidia-vfio-manager-7w9fs        1/1     Running       0          8d
nvidia-vfio-manager-866pz        1/1     Running       0          8d
nvidia-vfio-manager-zqtck        1/1     Terminating   0          9d
Monitor the pod status until the pods with
Terminating
status are removed:$ oc get pods -n nvidia-gpu-operator
Example output
NAME                             READY   STATUS    RESTARTS   AGE
gpu-operator-59469b8c5c-hw9wj    1/1     Running   0          8d
nvidia-sandbox-validator-7hx98   1/1     Running   0          8d
nvidia-sandbox-validator-hdb7p   1/1     Running   0          8d
nvidia-vfio-manager-7w9fs        1/1     Running   0          8d
nvidia-vfio-manager-866pz        1/1     Running   0          8d
9.12.10.2. Preparing host devices for PCI passthrough
9.12.10.2.1. About preparing a host device for PCI passthrough
To prepare a host device for PCI passthrough by using the CLI, create a MachineConfig
object and add kernel arguments to enable the Input-Output Memory Management Unit (IOMMU). Bind the PCI device to the Virtual Function I/O (VFIO) driver and then expose it in the cluster by editing the permittedHostDevices
field of the HyperConverged
custom resource (CR). The permittedHostDevices
list is empty when you first install the OpenShift Virtualization Operator.
To remove a PCI host device from the cluster by using the CLI, delete the PCI device information from the HyperConverged
CR.
9.12.10.2.2. Adding kernel arguments to enable the IOMMU driver
To enable the IOMMU driver in the kernel, create the MachineConfig
object and add the kernel arguments.
Prerequisites
- You have cluster administrator permissions.
- Your CPU hardware is Intel or AMD.
- You enabled Intel Virtualization Technology for Directed I/O extensions or AMD IOMMU in the BIOS.
Procedure
Create a
MachineConfig
object that identifies the kernel argument. The following example shows a kernel argument for an Intel CPU.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker 1
  name: 100-worker-iommu 2
spec:
  config:
    ignition:
      version: 3.2.0
  kernelArguments:
  - intel_iommu=on 3
# ...
Create the new
MachineConfig
object:$ oc create -f 100-worker-kernel-arg-iommu.yaml
Verification
Verify that the new
MachineConfig
object was added.$ oc get MachineConfig
9.12.10.2.3. Binding PCI devices to the VFIO driver
To bind PCI devices to the VFIO (Virtual Function I/O) driver, obtain the values for vendor-ID
and device-ID
from each device and create a list with the values. Add this list to the MachineConfig
object. The MachineConfig
Operator generates the /etc/modprobe.d/vfio.conf
on the nodes with the PCI devices, and binds the PCI devices to the VFIO driver.
Prerequisites
- You added kernel arguments to enable IOMMU for the CPU.
Procedure
Run the
lspci
command to obtain thevendor-ID
and thedevice-ID
for the PCI device.$ lspci -nnv | grep -i nvidia
Example output
02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)
Create a Butane config file,
100-worker-vfiopci.bu
, binding the PCI device to the VFIO driver.
Note: See "Creating machine configs with Butane" for information about Butane.
Example
variant: openshift
version: 4.18.0
metadata:
  name: 100-worker-vfiopci
  labels:
    machineconfiguration.openshift.io/role: worker 1
storage:
  files:
  - path: /etc/modprobe.d/vfio.conf
    mode: 0644
    overwrite: true
    contents:
      inline: |
        options vfio-pci ids=10de:1eb8 2
  - path: /etc/modules-load.d/vfio-pci.conf 3
    mode: 0644
    overwrite: true
    contents:
      inline: vfio-pci
- 1
- Applies the new kernel argument only to worker nodes.
- 2
- Specify the previously determined
vendor-ID
value (10de
) and thedevice-ID
value (1eb8
) to bind a single device to the VFIO driver. You can add a list of multiple devices with their vendor and device information. - 3
- The file that loads the vfio-pci kernel module on the worker nodes.
Use Butane to generate a
MachineConfig
object file,100-worker-vfiopci.yaml
, containing the configuration to be delivered to the worker nodes:$ butane 100-worker-vfiopci.bu -o 100-worker-vfiopci.yaml
Apply the
MachineConfig
object to the worker nodes:$ oc apply -f 100-worker-vfiopci.yaml
Verify that the
MachineConfig
object was added.$ oc get MachineConfig
Example output
NAME                               GENERATEDBYCONTROLLER                      IGNITIONVERSION   AGE
00-master                          d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0             25h
00-worker                          d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0             25h
01-master-container-runtime        d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0             25h
01-master-kubelet                  d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0             25h
01-worker-container-runtime        d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0             25h
01-worker-kubelet                  d3da910bfa9f4b599af4ed7f5ac270d55950a3a1   3.2.0             25h
100-worker-iommu                                                              3.2.0             30s
100-worker-vfiopci-configuration                                              3.2.0             30s
Verification
Verify that the VFIO driver is loaded.
$ lspci -nnk -d 10de:
The output confirms that the VFIO driver is being used.
Example output
04:00.0 3D controller [0302]: NVIDIA Corporation GP102GL [Tesla P40] [10de:1eb8] (rev a1)
        Subsystem: NVIDIA Corporation Device [10de:1eb8]
        Kernel driver in use: vfio-pci
        Kernel modules: nouveau
9.12.10.2.4. Exposing PCI host devices in the cluster using the CLI
To expose PCI host devices in the cluster, add details about the PCI devices to the spec.permittedHostDevices.pciHostDevices
array of the HyperConverged
custom resource (CR).
Procedure
Edit the
HyperConverged
CR in your default editor by running the following command:$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Add the PCI device information to the
spec.permittedHostDevices.pciHostDevices
array. For example:
Example configuration file
apiVersion: hco.kubevirt.io/v1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  permittedHostDevices: 1
    pciHostDevices: 2
    - pciDeviceSelector: "10DE:1DB6" 3
      resourceName: "nvidia.com/GV100GL_Tesla_V100" 4
    - pciDeviceSelector: "10DE:1EB8"
      resourceName: "nvidia.com/TU104GL_Tesla_T4"
    - pciDeviceSelector: "8086:6F54"
      resourceName: "intel.com/qat"
      externalResourceProvider: true 5
# ...
- 1
- The host devices that are permitted to be used in the cluster.
- 2
- The list of PCI devices available on the node.
- 3
- The
vendor-ID
and thedevice-ID
required to identify the PCI device. - 4
- The name of a PCI host device.
- 5
- Optional: Setting this field to
true
indicates that the resource is provided by an external device plugin. OpenShift Virtualization allows the usage of this device in the cluster but leaves the allocation and monitoring to an external device plugin.
Note: The above example snippet shows two PCI host devices that are named
nvidia.com/GV100GL_Tesla_V100
andnvidia.com/TU104GL_Tesla_T4
added to the list of permitted host devices in theHyperConverged
CR. These devices have been tested and verified to work with OpenShift Virtualization.
- Save your changes and exit the editor.
Verification
Verify that the PCI host devices were added to the node by running the following command. The example output shows that there is one device each associated with the
nvidia.com/GV100GL_Tesla_V100
,nvidia.com/TU104GL_Tesla_T4
, andintel.com/qat
resource names.$ oc describe node <node_name>
Example output
Capacity:
  cpu:                            64
  devices.kubevirt.io/kvm:        110
  devices.kubevirt.io/tun:        110
  devices.kubevirt.io/vhost-net:  110
  ephemeral-storage:              915128Mi
  hugepages-1Gi:                  0
  hugepages-2Mi:                  0
  memory:                         131395264Ki
  nvidia.com/GV100GL_Tesla_V100   1
  nvidia.com/TU104GL_Tesla_T4     1
  intel.com/qat:                  1
  pods:                           250
Allocatable:
  cpu:                            63500m
  devices.kubevirt.io/kvm:        110
  devices.kubevirt.io/tun:        110
  devices.kubevirt.io/vhost-net:  110
  ephemeral-storage:              863623130526
  hugepages-1Gi:                  0
  hugepages-2Mi:                  0
  memory:                         130244288Ki
  nvidia.com/GV100GL_Tesla_V100   1
  nvidia.com/TU104GL_Tesla_T4     1
  intel.com/qat:                  1
  pods:                           250
9.12.10.2.5. Removing PCI host devices from the cluster using the CLI
To remove a PCI host device from the cluster, delete the information for that device from the HyperConverged
custom resource (CR).
Procedure
Edit the
HyperConverged
CR in your default editor by running the following command:$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Remove the PCI device information from the
spec.permittedHostDevices.pciHostDevices
array by deleting thepciDeviceSelector
,resourceName
andexternalResourceProvider
(if applicable) fields for the appropriate device. In this example, theintel.com/qat
resource has been deleted.
Example configuration file
apiVersion: hco.kubevirt.io/v1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  permittedHostDevices:
    pciHostDevices:
    - pciDeviceSelector: "10DE:1DB6"
      resourceName: "nvidia.com/GV100GL_Tesla_V100"
    - pciDeviceSelector: "10DE:1EB8"
      resourceName: "nvidia.com/TU104GL_Tesla_T4"
# ...
- Save your changes and exit the editor.
Verification
Verify that the PCI host device was removed from the node by running the following command. The example output shows that there are zero devices associated with the
intel.com/qat
resource name.$ oc describe node <node_name>
Example output
Capacity:
  cpu:                            64
  devices.kubevirt.io/kvm:        110
  devices.kubevirt.io/tun:        110
  devices.kubevirt.io/vhost-net:  110
  ephemeral-storage:              915128Mi
  hugepages-1Gi:                  0
  hugepages-2Mi:                  0
  memory:                         131395264Ki
  nvidia.com/GV100GL_Tesla_V100   1
  nvidia.com/TU104GL_Tesla_T4     1
  intel.com/qat:                  0
  pods:                           250
Allocatable:
  cpu:                            63500m
  devices.kubevirt.io/kvm:        110
  devices.kubevirt.io/tun:        110
  devices.kubevirt.io/vhost-net:  110
  ephemeral-storage:              863623130526
  hugepages-1Gi:                  0
  hugepages-2Mi:                  0
  memory:                         130244288Ki
  nvidia.com/GV100GL_Tesla_V100   1
  nvidia.com/TU104GL_Tesla_T4     1
  intel.com/qat:                  0
  pods:                           250
9.12.10.3. Configuring virtual machines for PCI passthrough
After the PCI devices have been added to the cluster, you can assign them to virtual machines. The PCI devices are now available as if they are physically connected to the virtual machines.
9.12.10.3.1. Assigning a PCI device to a virtual machine
When a PCI device is available in a cluster, you can assign it to a virtual machine and enable PCI passthrough.
Procedure
Assign the PCI device to a virtual machine as a host device.
Example
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  domain:
    devices:
      hostDevices:
      - deviceName: nvidia.com/TU104GL_Tesla_T4 1
        name: hostdevices1
- 1
- The name of the PCI device that is permitted on the cluster as a host device. The virtual machine can access this host device.
Verification
Use the following command to verify that the host device is available from the virtual machine.
$ lspci -nnk | grep NVIDIA
Example output
02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)
9.12.10.4. Additional resources
9.12.11. Configuring virtual GPUs
If you have graphics processing unit (GPU) cards, OpenShift Virtualization can automatically create virtual GPUs (vGPUs) that you can assign to virtual machines (VMs).
9.12.11.1. About using virtual GPUs with OpenShift Virtualization
Some graphics processing unit (GPU) cards support the creation of virtual GPUs (vGPUs). OpenShift Virtualization can automatically create vGPUs and other mediated devices if an administrator provides configuration details in the HyperConverged
custom resource (CR). This automation is especially useful for large clusters.
Refer to your hardware vendor’s documentation for functionality and support details.
- Mediated device
- A physical device that is divided into one or more virtual devices. A vGPU is a type of mediated device (mdev); the performance of the physical GPU is divided among the virtual devices. You can assign mediated devices to one or more virtual machines (VMs), but the number of guests must be compatible with your GPU. Some GPUs do not support multiple guests.
9.12.11.2. Preparing hosts for mediated devices
You must enable the Input-Output Memory Management Unit (IOMMU) driver before you can configure mediated devices.
9.12.11.2.1. Adding kernel arguments to enable the IOMMU driver
To enable the IOMMU driver in the kernel, create the MachineConfig
object and add the kernel arguments.
Prerequisites
- You have cluster administrator permissions.
- Your CPU hardware is Intel or AMD.
- You enabled Intel Virtualization Technology for Directed I/O extensions or AMD IOMMU in the BIOS.
Procedure
Create a
MachineConfig
object that identifies the kernel argument. The following example shows a kernel argument for an Intel CPU.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker 1
  name: 100-worker-iommu 2
spec:
  config:
    ignition:
      version: 3.2.0
  kernelArguments:
  - intel_iommu=on 3
# ...
Create the new
MachineConfig
object:$ oc create -f 100-worker-kernel-arg-iommu.yaml
Verification
Verify that the new
MachineConfig
object was added.$ oc get MachineConfig
9.12.11.3. Configuring the NVIDIA GPU Operator
You can use the NVIDIA GPU Operator to provision worker nodes for running GPU-accelerated virtual machines (VMs) in OpenShift Virtualization.
The NVIDIA GPU Operator is supported only by NVIDIA. For more information, see Obtaining Support from NVIDIA in the Red Hat Knowledgebase.
9.12.11.3.1. About using the NVIDIA GPU Operator
You can use the NVIDIA GPU Operator with OpenShift Virtualization to rapidly provision worker nodes for running GPU-enabled virtual machines (VMs). The NVIDIA GPU Operator manages NVIDIA GPU resources in an OpenShift Container Platform cluster and automates tasks that are required when preparing nodes for GPU workloads.
Before you can deploy application workloads to a GPU resource, you must install components such as the NVIDIA drivers that enable the compute unified device architecture (CUDA), Kubernetes device plugin, container runtime, and other features, such as automatic node labeling and monitoring. By automating these tasks, you can quickly scale the GPU capacity of your infrastructure. The NVIDIA GPU Operator can especially facilitate provisioning complex artificial intelligence and machine learning (AI/ML) workloads.
9.12.11.3.2. Options for configuring mediated devices
There are two available methods for configuring mediated devices when using the NVIDIA GPU Operator. The method that Red Hat tests uses OpenShift Virtualization features to schedule mediated devices, while the NVIDIA method only uses the GPU Operator.
- Using the NVIDIA GPU Operator to configure mediated devices
- This method exclusively uses the NVIDIA GPU Operator to configure mediated devices. To use this method, refer to NVIDIA GPU Operator with OpenShift Virtualization in the NVIDIA documentation.
- Using OpenShift Virtualization to configure mediated devices
This method, which is tested by Red Hat, uses OpenShift Virtualization’s capabilities to configure mediated devices. In this case, the NVIDIA GPU Operator is only used for installing drivers with the NVIDIA vGPU Manager. The GPU Operator does not configure mediated devices.
When using the OpenShift Virtualization method, you still configure the GPU Operator by following the NVIDIA documentation. However, this method differs from the NVIDIA documentation in the following ways:
You must not overwrite the default
disableMDEVConfiguration: false
setting in theHyperConverged
custom resource (CR).
Important: Setting this feature gate as described in the NVIDIA documentation prevents OpenShift Virtualization from configuring mediated devices.
You must configure your
ClusterPolicy
manifest so that it matches the following example:
Example manifest
kind: ClusterPolicy
apiVersion: nvidia.com/v1
metadata:
  name: gpu-cluster-policy
spec:
  operator:
    defaultRuntime: crio
    use_ocp_driver_toolkit: true
    initContainer: {}
  sandboxWorkloads:
    enabled: true
    defaultWorkload: vm-vgpu
  driver:
    enabled: false 1
  dcgmExporter: {}
  dcgm:
    enabled: true
  daemonsets: {}
  devicePlugin: {}
  gfd: {}
  migManager:
    enabled: true
  nodeStatusExporter:
    enabled: true
  mig:
    strategy: single
  toolkit:
    enabled: true
  validator:
    plugin:
      env:
      - name: WITH_WORKLOAD
        value: "true"
  vgpuManager:
    enabled: true 2
    repository: <vgpu_container_registry> 3
    image: <vgpu_image_name>
    version: nvidia-vgpu-manager
  vgpuDeviceManager:
    enabled: false 4
    config:
      name: vgpu-devices-config
      default: default
  sandboxDevicePlugin:
    enabled: false 5
  vfioManager:
    enabled: false 6
- 1
- Set this value to
false
. Not required for VMs. - 2
- Set this value to
true
. Required for using vGPUs with VMs. - 3
- Substitute
<vgpu_container_registry>
with your registry value. - 4
- Set this value to
false
to allow OpenShift Virtualization to configure mediated devices instead of the NVIDIA GPU Operator. - 5
- Set this value to
false
to prevent discovery and advertising of the vGPU devices to the kubelet. - 6
- Set this value to
false
to prevent loading thevfio-pci
driver. Instead, follow the OpenShift Virtualization documentation to configure PCI passthrough.
Additional resources
9.12.11.4. How vGPUs are assigned to nodes
For each physical device, OpenShift Virtualization configures the following values:
- A single mdev type.
-
The maximum number of instances of the selected
mdev
type.
The cluster architecture affects how devices are created and assigned to nodes.
- Large cluster with multiple cards per node
On nodes with multiple cards that can support similar vGPU types, the relevant device types are created in a round-robin manner. For example:
# ...
mediatedDevicesConfiguration:
  mediatedDeviceTypes:
  - nvidia-222
  - nvidia-228
  - nvidia-105
  - nvidia-108
# ...
In this scenario, each node has two cards, both of which support the following vGPU types:
nvidia-105
# ...
nvidia-108
nvidia-217
nvidia-299
# ...
On each node, OpenShift Virtualization creates the following vGPUs:
- 16 vGPUs of type nvidia-105 on the first card.
- 2 vGPUs of type nvidia-108 on the second card.
- One node has a single card that supports more than one requested vGPU type
OpenShift Virtualization uses the supported type that comes first on the
mediatedDeviceTypes
list.For example, the card on a node card supports
nvidia-223
andnvidia-224
. The followingmediatedDeviceTypes
list is configured:
# ...
mediatedDevicesConfiguration:
  mediatedDeviceTypes:
  - nvidia-22
  - nvidia-223
  - nvidia-224
# ...
In this example, OpenShift Virtualization uses the
nvidia-223
type.
9.12.11.5. Managing mediated devices
Before you can assign mediated devices to virtual machines, you must create the devices and expose them to the cluster. You can also reconfigure and remove mediated devices.
9.12.11.5.1. Creating and exposing mediated devices
As an administrator, you can create mediated devices and expose them to the cluster by editing the HyperConverged
custom resource (CR).
Prerequisites
- You enabled the Input-Output Memory Management Unit (IOMMU) driver.
If your hardware vendor provides drivers, you installed them on the nodes where you want to create mediated devices.
- If you use NVIDIA cards, you installed the NVIDIA GRID driver.
Procedure
Open the
HyperConverged
CR in your default editor by running the following command:$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Example 9.1. Example configuration file with mediated devices configured
apiVersion: hco.kubevirt.io/v1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  mediatedDevicesConfiguration:
    mediatedDeviceTypes:
    - nvidia-231
    nodeMediatedDeviceTypes:
    - mediatedDeviceTypes:
      - nvidia-233
      nodeSelector:
        kubernetes.io/hostname: node-11.redhat.com
  permittedHostDevices:
    mediatedDevices:
    - mdevNameSelector: GRID T4-2Q
      resourceName: nvidia.com/GRID_T4-2Q
    - mdevNameSelector: GRID T4-8Q
      resourceName: nvidia.com/GRID_T4-8Q
# ...
Create mediated devices by adding them to the
spec.mediatedDevicesConfiguration
stanza:Example YAML snippet
# ...
spec:
  mediatedDevicesConfiguration:
    mediatedDeviceTypes: 1
    - <device_type>
    nodeMediatedDeviceTypes: 2
    - mediatedDeviceTypes: 3
      - <device_type>
      nodeSelector: 4
        <node_selector_key>: <node_selector_value>
# ...
- 1
- Required: Configures global settings for the cluster.
- 2
- Optional: Overrides the global configuration for a specific node or group of nodes. Must be used with the global
mediatedDeviceTypes
configuration. - 3
- Required if you use
nodeMediatedDeviceTypes
. Overrides the globalmediatedDeviceTypes
configuration for the specified nodes. - 4
- Required if you use
nodeMediatedDeviceTypes
. Must include akey:value
pair.
Important: Before OpenShift Virtualization 4.14, the mediatedDeviceTypes field was named mediatedDevicesTypes. Ensure that you use the correct field name when configuring mediated devices.
Identify the name selector and resource name values for the devices that you want to expose to the cluster. You will add these values to the
HyperConverged
CR in the next step.Find the
resourceName
value by running the following command:$ oc get $NODE -o json \ | jq '.status.allocatable \ | with_entries(select(.key | startswith("nvidia.com/"))) \ | with_entries(select(.value != "0"))'
Find the
mdevNameSelector
value by viewing the contents of/sys/bus/pci/devices/<slot>:<bus>:<domain>.<function>/mdev_supported_types/<type>/name
, substituting the correct values for your system.For example, the name file for the
nvidia-231
type contains the selector stringGRID T4-2Q
. UsingGRID T4-2Q
as themdevNameSelector
value allows nodes to use thenvidia-231
type.
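For example, to read the name file for the nvidia-231 type on a hypothetical card at PCI address 0000:65:00.0 (substitute the address and type for your system):
$ cat /sys/bus/pci/devices/0000:65:00.0/mdev_supported_types/nvidia-231/name
As described above, for this type the command prints the selector string GRID T4-2Q.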
Expose the mediated devices to the cluster by adding the
mdevNameSelector
andresourceName
values to thespec.permittedHostDevices.mediatedDevices
stanza of theHyperConverged
CR:Example YAML snippet
# ...
  permittedHostDevices:
    mediatedDevices:
    - mdevNameSelector: GRID T4-2Q 1
      resourceName: nvidia.com/GRID_T4-2Q 2
# ...
- Save your changes and exit the editor.
Verification
Optional: Confirm that a device was added to a specific node by running the following command:
$ oc describe node <node_name>
9.12.11.5.2. About changing and removing mediated devices
You can reconfigure or remove mediated devices in several ways:
-
Edit the
HyperConverged
CR and change the contents of themediatedDeviceTypes
stanza. -
Change the node labels that match the
nodeMediatedDeviceTypes
node selector. Remove the device information from the
spec.mediatedDevicesConfiguration
andspec.permittedHostDevices
stanzas of theHyperConverged
CR.
Note: If you remove the device information from the
spec.permittedHostDevices
stanza without also removing it from thespec.mediatedDevicesConfiguration
stanza, you cannot create a new mediated device type on the same node. To properly remove mediated devices, remove the device information from both stanzas.
9.12.11.5.3. Removing mediated devices from the cluster
To remove a mediated device from the cluster, delete the information for that device from the HyperConverged
custom resource (CR).
Procedure
Edit the
HyperConverged
CR in your default editor by running the following command:$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Remove the device information from the
spec.mediatedDevicesConfiguration
andspec.permittedHostDevices
stanzas of theHyperConverged
CR. Removing both entries ensures that you can later create a new mediated device type on the same node. For example:
Example configuration file
apiVersion: hco.kubevirt.io/v1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  mediatedDevicesConfiguration:
    mediatedDeviceTypes: 1
    - nvidia-231
  permittedHostDevices:
    mediatedDevices: 2
    - mdevNameSelector: GRID T4-2Q
      resourceName: nvidia.com/GRID_T4-2Q
- Save your changes and exit the editor.
9.12.11.6. Using mediated devices
You can assign mediated devices to one or more virtual machines.
9.12.11.6.1. Assigning a vGPU to a VM by using the CLI
Assign mediated devices such as virtual GPUs (vGPUs) to virtual machines (VMs).
Prerequisites
-
The mediated device is configured in the
HyperConverged
custom resource. - The VM is stopped.
Procedure
Assign the mediated device to a virtual machine (VM) by editing the
spec.domain.devices.gpus
stanza of theVirtualMachine
manifest:
Example virtual machine manifest
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  domain:
    devices:
      gpus:
      - deviceName: nvidia.com/TU104GL_Tesla_T4 1
        name: gpu1 2
      - deviceName: nvidia.com/GRID_T4-2Q
        name: gpu2
Verification
To verify that the device is available from the virtual machine, run the following command, substituting
<device_name>
with thedeviceName
value from theVirtualMachine
manifest:$ lspci -nnk | grep <device_name>
9.12.11.6.2. Assigning a vGPU to a VM by using the web console
You can assign virtual GPUs to virtual machines by using the OpenShift Container Platform web console.
You can add hardware devices to virtual machines created from customized templates or a YAML file. You cannot add devices to pre-supplied boot source templates for specific operating systems.
Prerequisites
The vGPU is configured as a mediated device in your cluster.
-
To view the devices that are connected to your cluster, click Compute
Hardware Devices from the side menu.
- The VM is stopped.
Procedure
-
In the OpenShift Container Platform web console, click Virtualization
VirtualMachines from the side menu. - Select the VM that you want to assign the device to.
- On the Details tab, click GPU devices.
- Click Add GPU device.
- Enter an identifying value in the Name field.
- From the Device name list, select the device that you want to add to the VM.
- Click Save.
Verification
-
To confirm that the devices were added to the VM, click the YAML tab and review the
VirtualMachine
configuration. Mediated devices are added to thespec.domain.devices
stanza.
9.12.11.7. Additional resources
9.12.12. Configuring USB host passthrough
As a cluster administrator, you can expose USB devices in a cluster, making them available for virtual machine (VM) owners to assign to VMs. Enabling this passthrough of USB devices allows a guest to connect to actual USB hardware that is attached to an OpenShift Container Platform node, as if the hardware and the VM are physically connected.
You can expose a USB device by first enabling host passthrough and then configuring the VM to use the USB device.
9.12.12.1. Enabling USB host passthrough
You can enable USB host passthrough at the cluster level.
You specify a resource name and a USB device name for each device that you want to add and then assign to a virtual machine (VM). You can allocate more than one device, each known as a selector in the HyperConverged (HCO) custom resource (CR), to a single resource name. If you have multiple identical USB devices on the cluster, you can choose to allocate a VM to a specific device.
Prerequisites
-
You have access to an OpenShift Container Platform cluster as a user who has the
cluster-admin
role.
Procedure
Identify the USB device vendor and product by running the following command:
$ lsusb
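The lsusb output lists each device with its vendor and product IDs, which are the values that you add as selectors in the next steps. Illustrative output (the device names are placeholders):
Bus 001 Device 003: ID 045e:07a5 <vendor_name> <product_name>
Bus 001 Device 004: ID 062a:4102 <vendor_name> <product_name>
Bus 002 Device 002: ID 072f:b100 <vendor_name> <product_name>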
Open the HCO CR by running the following command:
$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Add a USB device to the
permittedHostDevices
stanza, as shown in the following example:
Example YAML snippet
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  configuration:
    permittedHostDevices: 1
      usbHostDevices: 2
      - resourceName: kubevirt.io/peripherals 3
        selectors:
        - vendor: "045e"
          product: "07a5"
        - vendor: "062a"
          product: "4102"
        - vendor: "072f"
          product: "b100"
- 1
- Lists the host devices that have permission to be used in the cluster.
- 2
- Lists the available USB devices.
- 3
- Uses
resourceName: deviceName
for each device you want to add and assign to the VM. In this example, the resource is bound to three devices, each of which is identified byvendor
andproduct
and is known as aselector
.
9.12.12.2. Configuring a virtual machine connection to a USB device
You can configure virtual machine (VM) access to a USB device. This configuration allows a guest to connect to actual USB hardware that is attached to an OpenShift Container Platform node, as if the hardware and the VM are physically connected.
Procedure
Locate the USB device by running the following command:
$ ls /dev/serial/by-id/usb-VENDOR_device_name
Open the virtual machine instance custom resource (CR) by running the following command:
$ oc edit vmi vmi-usb
Edit the CR by adding a USB device, as shown in the following example:
Example configuration
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  labels:
    special: vmi-usb
  name: vmi-usb 1
spec:
  domain:
    devices:
      hostDevices:
      - deviceName: kubevirt.io/peripherals
        name: local-peripherals
# ...
- 1
- The name of the USB device.
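After the VM starts, you can confirm from inside a Linux guest that the passed-through hardware is visible, for example by running lsusb in the guest and checking for the vendor and product IDs that you configured.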
9.12.13. Enabling descheduler evictions on virtual machines
You can use the descheduler to evict pods so that the pods can be rescheduled onto more appropriate nodes. If the pod is a virtual machine, the pod eviction causes the virtual machine to be live migrated to another node.
9.12.13.1. Descheduler profiles
Use the LongLifecycle
profile to enable the descheduler on a virtual machine. This is the only descheduler profile currently available for OpenShift Virtualization. To ensure proper scheduling, create VMs with CPU and memory requests for the expected load.
LongLifecycle
This profile balances resource usage between nodes and enables the following strategies:
-
RemovePodsHavingTooManyRestarts
: removes pods whose containers have been restarted too many times and pods where the sum of restarts over all containers (including Init Containers) is more than 100. Restarting the VM guest operating system does not increase this count. LowNodeUtilization
: evicts pods from overutilized nodes when there are any underutilized nodes. The destination node for the evicted pod will be determined by the scheduler.- A node is considered underutilized if its usage is below 20% for all thresholds (CPU, memory, and number of pods).
- A node is considered overutilized if its usage is above 50% for any of the thresholds (CPU, memory, and number of pods).
9.12.13.2. Installing the descheduler
The descheduler is not available by default. To enable the descheduler, you must install the Kube Descheduler Operator from OperatorHub and enable one or more descheduler profiles.
By default, the descheduler runs in predictive mode, which means that it only simulates pod evictions. You must change the mode to automatic for the descheduler to perform the pod evictions.
If you have enabled hosted control planes in your cluster, set a custom priority threshold to lower the chance that pods in the hosted control plane namespaces are evicted. Set the priority threshold class name to hypershift-control-plane
, because it has the lowest priority value (100000000
) of the hosted control plane priority classes.
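A sketch of how that threshold could be expressed in the KubeDescheduler resource, assuming you use the profileCustomizations.thresholdPriorityClassName field for it, follows; treat it as an illustration rather than a complete configuration:
apiVersion: operator.openshift.io/v1
kind: KubeDescheduler
metadata:
  name: cluster
  namespace: openshift-kube-descheduler-operator
spec:
  profileCustomizations:
    thresholdPriorityClassName: hypershift-control-plane   # pods at or above this priority class are not evicted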
Prerequisites
-
You are logged in to OpenShift Container Platform as a user with the
cluster-admin
role. - Access to the OpenShift Container Platform web console.
Procedure
- Log in to the OpenShift Container Platform web console.
Create the required namespace for the Kube Descheduler Operator.
-
Navigate to Administration
Namespaces and click Create Namespace. -
Enter
openshift-kube-descheduler-operator
in the Name field, enteropenshift.io/cluster-monitoring=true
in the Labels field to enable descheduler metrics, and click Create.
Install the Kube Descheduler Operator.
-
Navigate to Operators
OperatorHub. - Type Kube Descheduler Operator into the filter box.
- Select the Kube Descheduler Operator and click Install.
- On the Install Operator page, select A specific namespace on the cluster. Select openshift-kube-descheduler-operator from the drop-down menu.
- Adjust the values for the Update Channel and Approval Strategy to the desired values.
- Click Install.
Create a descheduler instance.
-
From the Operators
Installed Operators page, click the Kube Descheduler Operator. - Select the Kube Descheduler tab and click Create KubeDescheduler.
Edit the settings as necessary.
- To evict pods instead of simulating the evictions, change the Mode field to Automatic.
Expand the Profiles section and select
LongLifecycle
. The AffinityAndTaints
profile is enabled by default.
Important: The only profile currently available for OpenShift Virtualization is
LongLifecycle
.
You can also configure the profiles and settings for the descheduler later using the OpenShift CLI (oc
).
9.12.13.3. Enabling descheduler evictions on a virtual machine (VM)
After the descheduler is installed, you can enable descheduler evictions on your VM by adding an annotation to the VirtualMachine
custom resource (CR).
Prerequisites
-
Install the descheduler in the OpenShift Container Platform web console or OpenShift CLI (
oc
). - Ensure that the VM is not running.
Procedure
Before starting the VM, add the
descheduler.alpha.kubernetes.io/evict
annotation to theVirtualMachine
CR:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  template:
    metadata:
      annotations:
        descheduler.alpha.kubernetes.io/evict: "true"
Configure the
KubeDescheduler
object with theLongLifecycle
profile and enable background evictions for improved VM eviction stability during live migration:
apiVersion: operator.openshift.io/v1
kind: KubeDescheduler
metadata:
  name: cluster
  namespace: openshift-kube-descheduler-operator
spec:
  deschedulingIntervalSeconds: 3600
  profiles:
  - LongLifecycle 1
  mode: Predictive 2
  profileCustomizations:
    devEnableEvictionsInBackground: true 3
- 1
- You can only set the
LongLifecycle
profile. This profile balances resource usage between nodes. - 2
- By default, the descheduler does not evict pods. To evict pods, set
mode
toAutomatic
. - 3
- Enabling
devEnableEvictionsInBackground
allows evictions to occur in the background, improving stability and mitigating oscillatory behavior during live migrations.
The descheduler is now enabled on the VM.
9.12.13.4. Additional resources
9.12.14. About high availability for virtual machines
You can enable high availability for virtual machines (VMs) by manually deleting a failed node to trigger VM failover or by configuring remediating nodes.
Manually deleting a failed node
If a node fails and machine health checks are not deployed on your cluster, virtual machines with runStrategy: Always
configured are not automatically relocated to healthy nodes. To trigger VM failover, you must manually delete the Node
object.
See Deleting a failed node to trigger virtual machine failover.
Configuring remediating nodes
You can configure remediating nodes by installing the Self Node Remediation Operator or the Fence Agents Remediation Operator from the OperatorHub and enabling machine health checks or node remediation checks.
For more information on remediation, fencing, and maintaining nodes, see the Workload Availability for Red Hat OpenShift documentation.
9.12.15. Virtual machine control plane tuning
OpenShift Virtualization offers the following tuning options at the control-plane level:
-
The
highBurst
profile, which uses fixedQPS
andburst
rates, to create hundreds of virtual machines (VMs) in one batch - Migration setting adjustment based on workload type
9.12.15.1. Configuring a highBurst profile
Use the highBurst
profile to create and maintain a large number of virtual machines (VMs) in one cluster.
Procedure
Apply the following patch to enable the
highBurst
tuning policy profile:
$ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \
  --type=json -p='[{"op": "add", "path": "/spec/tuningPolicy", \
  "value": "highBurst"}]'
Verification
Run the following command to verify the
highBurst
tuning policy profile is enabled:
$ oc get kubevirt.kubevirt.io/kubevirt-kubevirt-hyperconverged \
  -n openshift-cnv -o go-template --template='{{range $config, \
  $value := .spec.configuration}} {{if eq $config "apiConfiguration" \
  "webhookConfiguration" "controllerConfiguration" "handlerConfiguration"}} \
  {{"\n"}} {{$config}} = {{$value}} {{end}} {{end}} {{"\n"}}'
9.12.16. Assigning compute resources
In OpenShift Virtualization, compute resources assigned to virtual machines (VMs) are backed by either guaranteed CPUs or time-sliced CPU shares.
Guaranteed CPUs, also known as CPU reservation, dedicate CPU cores or threads to a specific workload, which makes them unavailable to any other workload. Assigning guaranteed CPUs to a VM ensures that the VM will have sole access to a reserved physical CPU. Enable dedicated resources for VMs to use a guaranteed CPU.
Time-sliced CPUs dedicate a slice of time on a shared physical CPU to each workload. You can specify the size of the slice during VM creation, or when the VM is offline. By default, each vCPU receives 100 milliseconds, or 1/10 of a second, of physical CPU time.
The type of CPU reservation depends on the instance type or VM configuration.
9.12.16.1. Overcommitting CPU resources
Time-slicing allows multiple virtual CPUs (vCPUs) to share a single physical CPU. This is known as CPU overcommitment. Guaranteed VMs cannot be overcommitted.
Configure CPU overcommitment to prioritize VM density over performance when assigning CPUs to VMs. With higher vCPU overcommitment, more VMs fit onto a given node.
9.12.16.2. Setting the CPU allocation ratio
The CPU Allocation Ratio specifies the degree of overcommitment by mapping vCPUs to time slices of physical CPUs.
For example, a mapping or ratio of 10:1 maps 10 virtual CPUs to 1 physical CPU by using time slices.
To change the default number of vCPUs mapped to each physical CPU, set the vmiCPUAllocationRatio
value in the HyperConverged
CR. The pod CPU request is calculated by multiplying the number of vCPUs by the reciprocal of the CPU allocation ratio. For example, if vmiCPUAllocationRatio
is set to 10, OpenShift Virtualization will request 10 times fewer CPUs on the pod for that VM.
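For example, with vmiCPUAllocationRatio set to 10, a VM with 4 vCPUs results in a pod CPU request of 4 × 1/10 = 0.4 CPU (400m).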
Procedure
Set the vmiCPUAllocationRatio
value in the HyperConverged
CR to define a node CPU allocation ratio.
Open the
HyperConverged
CR in your default editor by running the following command:$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Set the
vmiCPUAllocationRatio
:
# ...
spec:
  resourceRequirements:
    vmiCPUAllocationRatio: 1 1
# ...
- 1
- When
vmiCPUAllocationRatio
is set to1
, the maximum amount of vCPUs are requested for the pod.
9.12.16.3. Additional resources
9.12.17. About multi-queue functionality
Use multi-queue functionality to scale network throughput and performance on virtual machines (VMs) with multiple vCPUs.
By default, the queueCount
value, which is derived from the domain XML, is determined by the number of vCPUs allocated to a VM. Network performance does not scale as the number of vCPUs increases. Additionally, because virtio-net has only one Tx and one Rx queue, guests cannot transmit or receive packets in parallel.
Enabling virtio-net multiqueue does not offer significant improvements when the number of vNICs in a guest instance is proportional to the number of vCPUs.
9.12.17.1. Known limitations
- MSI vectors are still consumed if virtio-net multiqueue is enabled in the host but not enabled in the guest operating system by the administrator.
- Each virtio-net queue consumes 64 KiB of kernel memory for the vhost driver.
-
Starting a VM with more than 16 CPUs results in no connectivity if
networkInterfaceMultiqueue
is set to 'true' (CNV-16107).
9.12.17.2. Enabling multi-queue functionality
Enable multi-queue functionality for interfaces configured with a VirtIO model.
Procedure
Set the
networkInterfaceMultiqueue
value totrue
in theVirtualMachine
manifest file of your VM to enable multi-queue functionality:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  domain:
    devices:
      networkInterfaceMultiqueue: true
-
Save the
VirtualMachine
manifest file to apply your changes.
9.12.18. Managing virtual machines by using OpenShift GitOps
To automate and optimize virtual machine (VM) management in OpenShift Virtualization, you can use OpenShift GitOps.
With GitOps, you can set up VM deployments based on configuration files stored in a Git repository. This also makes it easier to automate, update, or replicate these configurations, as well as to use version control for tracking their changes.
Prerequisites
- You have a GitHub account. For instructions to set up an account, see Creating an account on GitHub.
- OpenShift Virtualization has been installed on your OpenShift cluster. For instructions, see OpenShift Virtualization installation.
- The OpenShift GitOps operator has been installed on your OpenShift cluster. For instructions, see Installing GitOps.
Procedure
Follow the Manage OpenShift virtual machines with GitOps learning path to perform these steps:
- Connect an external Git repository to your Argo CD instance.
- Create the required VM configuration in the Git repository.
- Use the VM configuration to create VMs on your cluster.
Additional resources
9.13. VM disks
9.13.1. Hot-plugging VM disks
You can add or remove virtual disks without stopping your virtual machine (VM) or virtual machine instance (VMI).
Only data volumes and persistent volume claims (PVCs) can be hot plugged and hot unplugged. You cannot hot plug or hot unplug container disks.
A hot plugged disk remains attached to the VM even after reboot. You must detach the disk to remove it from the VM.
You can make a hot plugged disk persistent so that it is permanently mounted on the VM.
Each VM has a virtio-scsi
controller so that hot plugged disks can use the scsi
bus. The virtio-scsi
controller overcomes the limitations of virtio
while retaining its performance advantages. It is highly scalable and supports hot plugging over 4 million disks.
Regular virtio
is not available for hot plugged disks because it is not scalable. Each virtio
disk uses one of the limited PCI Express (PCIe) slots in the VM. PCIe slots are also used by other devices and must be reserved in advance. Therefore, slots might not be available on demand.
9.13.1.1. Hot plugging and hot unplugging a disk by using the web console
You can hot plug a disk by attaching it to a virtual machine (VM) while the VM is running by using the OpenShift Container Platform web console.
The hot plugged disk remains attached to the VM until you unplug it.
You can make a hot plugged disk persistent so that it is permanently mounted on the VM.
Prerequisites
- You must have a data volume or persistent volume claim (PVC) available for hot plugging.
Procedure
-
Navigate to Virtualization
VirtualMachines in the web console. - Select a running VM to view its details.
-
On the VirtualMachine details page, click Configuration
Disks. Add a hot plugged disk:
- Click Add disk.
- In the Add disk (hot plugged) window, select the disk from the Source list and click Save.
Optional: Unplug a hot plugged disk:
-
Click the Options menu
beside the disk and select Detach.
- Click Detach.
Optional: Make a hot plugged disk persistent:
-
Click the Options menu
beside the disk and select Make persistent.
- Reboot the VM to apply the change.
9.13.1.2. Hot plugging and hot unplugging a disk by using the command line
You can hot plug and hot unplug a disk while a virtual machine (VM) is running by using the command line.
You can make a hot plugged disk persistent so that it is permanently mounted on the VM.
Prerequisites
- You must have at least one data volume or persistent volume claim (PVC) available for hot plugging.
Procedure
Hot plug a disk by running the following command:
$ virtctl addvolume <virtual-machine|virtual-machine-instance> \
  --volume-name=<datavolume|PVC> \
  [--persist] [--serial=<label-name>]
-
Use the optional
--persist
flag to add the hot plugged disk to the virtual machine specification as a permanently mounted virtual disk. Stop, restart, or reboot the virtual machine to permanently mount the virtual disk. After specifying the--persist
flag, you can no longer hot plug or hot unplug the virtual disk. The--persist
flag applies to virtual machines, not virtual machine instances. -
The optional
--serial
flag allows you to add an alphanumeric string label of your choice. This helps you to identify the hot plugged disk in a guest virtual machine. If you do not specify this option, the label defaults to the name of the hot plugged data volume or PVC.
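For example, the following hypothetical invocation hot plugs a data volume named my-data-dv into a VM named my-vm, makes it persistent, and labels it DATA1 (all three names are placeholders):
$ virtctl addvolume my-vm --volume-name=my-data-dv --persist --serial=DATA1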
Hot unplug a disk by running the following command:
$ virtctl removevolume <virtual-machine|virtual-machine-instance> \
  --volume-name=<datavolume|PVC>
9.13.2. Expanding virtual machine disks
You can increase the size of a virtual machine (VM) disk by expanding the persistent volume claim (PVC) of the disk.
If your storage provider does not support volume expansion, you can expand the available virtual storage of a VM by adding blank data volumes.
You cannot reduce the size of a VM disk.
9.13.2.1. Increasing a VM disk size by expanding the PVC of the disk
You can increase the size of a virtual machine (VM) disk by expanding the persistent volume claim (PVC) of the disk. To specify the increased PVC volume, you can use the web console with the VM running. Alternatively, you can edit the PVC manifest in the CLI.
If the PVC uses the file system volume mode, the disk image file expands to the available size while reserving some space for file system overhead.
9.13.2.1.1. Expanding a VM disk PVC in the web console
You can increase the size of a VM disk PVC in the web console without leaving the VirtualMachines page and with the VM running.
Procedure
- In the Administrator or Virtualization perspective, open the VirtualMachines page.
- Select the running VM to open its Details page.
- Select the Configuration tab and click Storage.
Click the options menu
next to the disk you want to expand. Select the Edit option.
The Edit disk dialog opens.
- In the PersistentVolumeClaim size field, enter the desired size.
- Click Save.
You can enter any value greater than the current one. However, if the new value exceeds the available size, an error is displayed.
9.13.2.1.2. Expanding a VM disk PVC by editing its manifest
Procedure
Edit the
PersistentVolumeClaim
manifest of the VM disk that you want to expand:$ oc edit pvc <pvc_name>
Update the disk size:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-disk-expand
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 3Gi 1
# ...
- 1
- Specify the new disk size.
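You can then watch the PVC until the new capacity is reported, for example:
$ oc get pvc <pvc_name> -w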
9.13.2.2. Expanding available virtual storage by adding blank data volumes
You can expand the available storage of a virtual machine (VM) by adding blank data volumes.
Prerequisites
- You must have at least one persistent volume.
Procedure
Create a
DataVolume
manifest as shown in the following example:Example
DataVolume
manifest
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: blank-image-datavolume
spec:
  source:
    blank: {}
  storage:
    resources:
      requests:
        storage: <2Gi> 1
    storageClassName: "<storage_class>" 2
Create the data volume by running the following command:
$ oc create -f <blank-image-datavolume>.yaml
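You can then attach the blank data volume to a running VM, for example by hot plugging it as described in "Hot-plugging VM disks" (the VM name is a placeholder):
$ virtctl addvolume <vm_name> --volume-name=blank-image-datavolume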
Additional resources for data volumes
9.13.4. Migrating VM disks to a different storage class
You can migrate one or more virtual disks to a different storage class without stopping your virtual machine (VM) or virtual machine instance (VMI).
9.13.4.1. Migrating VM disks to a different storage class by using the web console
You can migrate one or more disks attached to a virtual machine (VM) to a different storage class by using the OpenShift Container Platform web console. When performing this action on a running VM, the operation of the VM is not interrupted and the data on the migrated disks remains accessible.
With the OpenShift Virtualization Operator, you can start storage class migration for only one VM at a time, and the VM must be running. If you need to migrate more VMs at once, or to migrate a mix of running and stopped VMs, consider using the Migration Toolkit for Containers (MTC).
Migration Toolkit for Containers is not part of OpenShift Virtualization and requires separate installation.
Storage class migration is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Prerequisites
- You must have a data volume or a persistent volume claim (PVC) available for storage class migration.
- The cluster must have a node available for live migration. As part of the storage class migration, the VM is live migrated to a different node.
- The VM must be running.
Procedure
-
Navigate to Virtualization
VirtualMachines in the web console. Click the Options menu
beside the virtual machine and select Migration
Storage. You can also access this option from the VirtualMachine details page by selecting Actions
Migration Storage. - On the Migration details page, choose whether to migrate the entire VM storage or selected volumes only. If you click Selected volumes, select any disks that you intend to migrate. Click Next to proceed.
- From the list of available options on the Destination StorageClass page, select the storage class to migrate to. Click Next to proceed.
- On the Review page, review the list of affected disks and the target storage class. To start the migration, click Migrate VirtualMachine storage.
- Stay on the Migrate VirtualMachine storage page to watch the progress and wait for the confirmation that the migration completed successfully.
Verification
-
From the VirtualMachine details page, navigate to Configuration
Storage. - Verify that all disks have the expected storage class listed in the Storage class column.