Configuring and managing Linux virtual machines
Setting up your host, creating and administering virtual machines, and understanding virtualization features
Providing feedback on Red Hat documentation
We are committed to providing high-quality documentation and value your feedback. To help us improve, you can submit suggestions or report errors through the Red Hat Jira tracking system.
Procedure
- Log in to the Jira website. If you do not have an account, select the option to create one.
- Click Create in the top navigation bar.
- Enter a descriptive title in the Summary field.
- Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation.
- Click Create at the bottom of the dialog.
Chapter 1. Basic concepts of virtualization in RHEL
You can use your RHEL 10 system as a virtualization host. This means that on RHEL 10, you can create virtual machines with their own operating systems.
For details on the functionality, its advantages, components, and other possible virtualization solutions provided by Red Hat, see the following sections.
1.1. What is virtualization?
By using virtualization, your RHEL 10 system can host multiple virtual machines (VMs), also referred to as guests. VMs use the host’s physical hardware and computing resources to run a separate, virtualized operating system (guest OS) as a user-space process on the host’s operating system.
As a result, you can use virtualization to have operating systems within operating systems.
By using VMs, you can, for example:
- Safely test software configurations and features
- Run legacy software
- Optimize the workload efficiency of your hardware
For more information about the benefits, see Advantages of virtualization.
For more information about what virtualization is, see the Virtualization topic page.
Next steps
- To start using virtualization in RHEL 10, see Preparing RHEL to host virtual machines.
- In addition to RHEL 10 virtualization, Red Hat offers several specialized virtualization solutions, each with a different user focus and features. For more information, see Red Hat virtualization solutions.
1.2. Advantages of virtualization
Using virtual machines (VMs) has the following benefits in comparison to using physical machines:
Flexible and fine-grained allocation of resources
A VM runs on a host machine, which is usually physical, and physical hardware can also be assigned for the guest OS to use. However, the allocation of physical resources to the VM is done on the software level, and is therefore very flexible. A VM uses a configurable fraction of the host memory, CPUs, or storage space, and that configuration can specify very fine-grained resource requests.
For example, what the guest OS sees as its disk can be represented as a file on the host file system, and the size of that disk is less constrained than the available sizes for physical disks.
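For example, such a host-side disk file can be created with the qemu-img utility; the path and 100 GiB size below are illustrative only:

# qemu-img create -f qcow2 /var/lib/libvirt/images/example-disk.qcow2 100G

Because the qcow2 format allocates space on demand, the file initially occupies almost no physical storage on the host.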
Software-controlled configurations
The entire configuration of a VM is saved as data on the host, and is under software control. Therefore, a VM can easily be created, removed, cloned, migrated, operated remotely, or connected to remote storage.
In addition, the current state of the VM can be backed up as a snapshot at any time. A snapshot can then be loaded to restore the system to the saved state.
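For example, a minimal sketch of this workflow with virsh, assuming a VM named demo-guest1:

# virsh snapshot-create-as demo-guest1 clean-install
# virsh snapshot-revert demo-guest1 clean-install

The first command saves the current state under the name clean-install; the second returns the VM to that saved state.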
Separation from the host
A guest OS runs on a virtualized kernel, separate from the host OS. This means that any OS can be installed on a VM, and even if the guest OS becomes unstable or is compromised, the host is not affected in any way.
Space and cost efficiency
A single physical machine can host a large number of VMs. This avoids the need for multiple physical machines to perform the same tasks, and thus lowers the space, power, and maintenance requirements associated with physical hardware.
Software compatibility
Because a VM can use a different OS than its host, virtualization makes it possible to run applications that were not originally released for your host OS. For example, using a RHEL 8 guest OS, you can run applications released for RHEL 8 on a RHEL 10 host system.
Note: Not all operating systems are supported as a guest OS in a RHEL 10 host. For details, see Recommended features in RHEL 10 virtualization.
1.3. Virtual machine components and their interaction
Virtualization in RHEL 10 consists of the following principal software components:
Hypervisor
The basis of creating virtual machines (VMs) in RHEL 10 is the hypervisor, a software layer that controls hardware and enables running multiple operating systems on a host machine.
The hypervisor includes the Kernel-based Virtual Machine (KVM) module and virtualization kernel drivers. These components ensure that the Linux kernel on the host machine provides resources for virtualization to user-space software.
At the user-space level, the QEMU emulator simulates a complete virtualized hardware platform that the guest operating system can run in, and manages how resources are allocated on the host and presented to the guest.
In addition, the libvirt software suite serves as a management and communication layer. It makes QEMU easier to interact with, enforces security rules, and provides several additional tools for configuring and running VMs.
XML configuration
A host-based XML configuration file (also known as a domain XML file) determines all settings and devices in a specific VM. The configuration includes:
- Metadata, such as the name of the VM, time zone, and other information about the VM.
- A description of the devices in the VM, including virtual CPUs (vCPUS), storage devices, input/output devices, network interface cards, and other hardware, real and virtual.
- VM settings, such as the maximum amount of memory it can use, restart settings, and other settings about the behavior of the VM. For more information about the contents of an XML configuration, see Sample virtual machine XML configuration.
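For illustration only, a heavily abridged domain XML that touches all three categories might look like this; the names and values are examples, not defaults:

<domain type='kvm'>
  <name>demo-guest1</name>                               <!-- metadata -->
  <memory unit='GiB'>2</memory>                          <!-- VM settings -->
  <vcpu>2</vcpu>
  <devices>                                              <!-- device descriptions -->
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/demo-guest1.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
    </interface>
  </devices>
</domain>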
Component interaction
When a VM is started, the hypervisor uses the XML configuration to create an instance of the VM as a user-space process on the host. The hypervisor also makes the VM process accessible to the host-based interfaces, such as the virsh, virt-install, and guestfish utilities, or the web console GUI.
When these virtualization tools are used, libvirt translates their input into instructions for QEMU. QEMU communicates the instructions to KVM, which ensures that the kernel appropriately assigns the resources necessary to carry out the instructions. As a result, QEMU can execute the corresponding user-space changes, such as creating or modifying a VM, or performing an action in the VM’s guest operating system.
QEMU is an essential component of the architecture, but it is not intended to be used directly on RHEL 10 systems, due to security concerns. Therefore, qemu-* commands are not supported by Red Hat, and it is highly recommended to interact with QEMU by using libvirt.
For more information about the host-based interfaces, see Tools and interfaces for virtualization management.
Figure 1.1. RHEL 10 virtualization architecture
1.4. Tools and interfaces for virtualization management
You can manage virtualization in RHEL 10 by using the command-line interface (CLI) or the web console graphical user interface (GUI).
Command-line interface
The CLI is the most powerful method of managing virtualization in RHEL 10. Prominent CLI commands for virtual machine (VM) management include:
- virsh - A versatile virtualization command-line utility and shell with a great variety of purposes, depending on the provided arguments. For example:
  - Starting and shutting down a VM - virsh start and virsh shutdown
  - Listing available VMs - virsh list
  - Creating a VM from a configuration file - virsh create
  - Entering a virtualization shell - virsh

  For more information, see the virsh(1) man page on your system.
- virt-install - A CLI utility for creating new VMs. For more information, see the virt-install(1) man page on your system.
- virt-xml - A utility for editing the configuration of a VM.
- guestfish - A utility for examining and modifying VM disk images. For more information, see the guestfish(1) man page on your system.
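For example, a minimal lifecycle sketch with virsh, assuming a VM named demo-guest1 already exists on the host:

# virsh start demo-guest1
# virsh list
 Id   Name          State
-----------------------------
 1    demo-guest1   running
# virsh shutdown demo-guest1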
Graphical interface
To manage virtualization in RHEL 10 in a GUI, you can use the RHEL 10 web console, also known as Cockpit. The web console provides a remotely accessible and easy-to-use GUI for managing VMs and virtualization hosts.
For instructions on enabling virtualization management in the web console, see Setting up the web console to manage virtual machines.
1.5. User-space connection types for virtualization
Virtual machines (VMs) on your host use one of the following libvirt connection types to your RHEL 10 user space. The connection type influences what features the VM user can access.
- System connection (qemu:///system) - Provides access to all available features for VM management in RHEL 10. To create or use a VM in the system connection, you must have root privileges on the system or be a part of the libvirt user group.
- Session connection (qemu:///session) - Non-root users that are not in the libvirt group can only create VMs in the session connection, which has to respect the access rights of the local user when accessing resources. For example, when using the session connection, you cannot detect or access VMs created in the system connection or by other users.

In addition, VMs in the session connection cannot use features that require root privileges, such as the following:

- Advanced networking - You cannot set up system bridges or tap devices. You are limited to user-mode (passt) networking, and cannot configure full external visibility of the VM.
- PCI device passthrough - Modifying the device assignment of PCI host hardware for the VM is not possible.
- Autostart - VMs in the session connection cannot automatically start on system boot.
- System-level storage pools and VM logs - In the system connection, storage pools and VM log files are saved in system directories, such as /etc/libvirt and /var/lib/libvirt. In the session connection, the user is limited to files saved in their home directory. This prevents managing host-wide storage or viewing logs centrally.
To view your current connection type, use the virsh uri command on the host.
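For example, for a non-root user that has not configured a different default:

$ virsh uri
qemu:///session

When run as root, the same command typically returns qemu:///system.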
Unless explicitly stated otherwise, the information in this documentation assumes you have root privileges and can use the system connection of libvirt.
1.6. Red Hat virtualization solutions
In addition to RHEL, Red Hat offers other products that you can use for hosting virtual machines. These products are built on top of RHEL 10 virtualization features and expand the KVM virtualization capabilities available in RHEL 10.
In addition, many limitations of RHEL 10 virtualization do not apply to these products.
- OpenShift Virtualization
Based on the KubeVirt technology, OpenShift Virtualization is a part of the Red Hat OpenShift Container Platform, and makes it possible to run virtual machines in containers.
For more information about OpenShift Virtualization, see the Red Hat Hybrid Cloud pages or the OpenShift Virtualization documentation.
- Red Hat OpenStack Platform (RHOSP)
Red Hat OpenStack Platform offers an integrated foundation to create, deploy, and scale a secure and reliable public or private OpenStack cloud.
For more information about Red Hat OpenStack Platform, see the Red Hat Customer Portal or the Red Hat OpenStack Platform documentation suite.
Chapter 2. Preparing RHEL to host virtual machines
To use virtualization in RHEL 10, you must install virtualization packages and ensure your system is configured to host virtual machines (VMs). The specific steps to do this vary based on your CPU architecture.
2.1. Preparing an AMD64 or Intel 64 system to host virtual machines
Before creating virtual machines (VMs) on an AMD64 or Intel 64 system running RHEL 10, you must first set up a KVM hypervisor on the system.
Prerequisites
- Red Hat Enterprise Linux 10 is installed and registered on your host machine. For instructions, see the RHEL installation guide.
The following minimum system resources are available:
- 6 GB free disk space for the host, plus another 6 GB for each intended VM.
- 2 GB of RAM for the host, plus another 2 GB for each intended VM.
Procedure
Install the virtualization hypervisor packages:
# dnf install qemu-kvm libvirt virt-install virt-viewer
Start the virtualization services:
# for drv in qemu network nodedev nwfilter secret storage interface; do systemctl start virt${drv}d{,-ro,-admin}.socket; done
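To spot-check that the sockets started by this loop are active, you can, for example, query the main QEMU driver socket; systemctl prints active on success:

# systemctl is-active virtqemud.socket
active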
Verification
Verify that your system is prepared to be a virtualization host:
# virt-host-validate

If all virt-host-validate checks return a PASS value, your system is prepared for creating VMs.
If any of the checks return a FAIL value, follow the displayed instructions to fix the problem.
If any of the checks return a WARN value, consider following the displayed instructions to improve virtualization capabilities.
Troubleshooting
If KVM virtualization is not supported by your host CPU, virt-host-validate generates the following output:
QEMU: Checking for hardware virtualization: FAIL (Only emulated CPUs are available, performance will be significantly limited)

However, VMs on such a host system will fail to boot, rather than have performance problems.

To work around this, you can change the <domain type> value in the XML configuration of the VM to qemu. Note, however, that Red Hat does not support VMs that use the qemu domain type, and setting this is highly discouraged in production environments.
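For example, after opening the configuration with virsh edit <vm-name>, the workaround amounts to changing the root element of the domain XML:

<domain type='qemu'>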
2.2. Preparing an IBM Z system to host virtual machines
Before creating virtual machines (VMs) on an IBM Z system running RHEL 10, you must first set up a KVM hypervisor on the system.
Prerequisites
- Red Hat Enterprise Linux 10 is installed and registered on your host machine. For instructions, see the RHEL installation guide.
The following minimum system resources are available:
- 6 GB free disk space for the host, plus another 6 GB for each intended VM.
- 2 GB of RAM for the host, plus another 2 GB for each intended VM.
- 4 CPUs on the host. VMs can generally run with a single assigned vCPU, but Red Hat recommends assigning 2 or more vCPUs per VM to avoid VMs becoming unresponsive during high load.
- Your IBM Z host system is using an IBM z14 CPU or later.
RHEL 10 is installed on a logical partition (LPAR). In addition, the LPAR supports the start-interpretive execution (SIE) virtualization functions.
To verify this, search for sie in your /proc/cpuinfo file:

# grep sie /proc/cpuinfo
features : esan3 zarch stfle msa ldisp eimm dfp edat etf3eh highgprs te sie
Procedure
Install the virtualization packages:
# dnf install qemu-kvm libvirt virt-install

Start the virtualization services:

# for drv in qemu network nodedev nwfilter secret storage interface; do systemctl start virt${drv}d{,-ro,-admin}.socket; done
Verification
Verify that your system is prepared to be a virtualization host:

# virt-host-validate

If all virt-host-validate checks return a PASS value, your system is prepared for creating VMs.
If any of the checks return a FAIL value, follow the displayed instructions to fix the problem.
If any of the checks return a WARN value, consider following the displayed instructions to improve virtualization capabilities.
Troubleshooting
If KVM virtualization is not supported by your host CPU, virt-host-validate generates the following output:
QEMU: Checking for hardware virtualization: FAIL (Only emulated CPUs are available, performance will be significantly limited)

However, VMs on such a host system will fail to boot, rather than have performance problems.

To work around this, you can change the <domain type> value in the XML configuration of the VM to qemu. Note, however, that Red Hat does not support VMs that use the qemu domain type, and setting this is highly discouraged in production environments.
2.3. Preparing an ARM 64 system to host virtual machines
Before creating virtual machines (VMs) on a 64-bit ARM system (also known as ARM 64 or AArch64) running RHEL 10, you must first set up a KVM hypervisor on the system.
Prerequisites
- Red Hat Enterprise Linux 10 is installed and registered on your host machine. For instructions, see the RHEL installation guide.
The following minimum system resources are available:
- 6 GB free disk space for the host, plus another 6 GB for each intended guest.
- 4 GB of RAM for the host, plus another 4 GB for each intended guest.
Procedure
Install the virtualization packages:
# dnf install qemu-kvm libvirt virt-install

Start the virtualization services:

# for drv in qemu network nodedev nwfilter secret storage interface; do systemctl start virt${drv}d{,-ro,-admin}.socket; done
Verification
Verify that your system is prepared to be a virtualization host. Run the following command as root:
# virt-host-validate

If all virt-host-validate checks return a PASS value, your system is prepared for creating virtual machines.
If any of the checks return a FAIL value, follow the displayed instructions to fix the problem.
If any of the checks return a WARN value, consider following the displayed instructions to improve virtualization capabilities.
2.4. Setting up the web console to manage virtual machines
Before using the RHEL 10 web console to manage virtual machines (VMs), you must install the web console virtual machine plugin on the host.
Prerequisites
- You have installed the RHEL 10 web console. For instructions, see Installing and enabling the web console.
Procedure
Install the cockpit-machines plugin:

# dnf install cockpit-machines
Verification
- Log in to the RHEL 10 web console.
If the installation was successful, Virtual Machines appears in the web console side menu.
2.5. Setting up easier access to remote virtualization hosts
When managing virtual machines (VMs) on a remote host system by using libvirt utilities on the command line, you can optimize the process of connecting to the remote host.
By default, to connect to a VM on a remote host, you must use the -c qemu+ssh://root@hostname/system syntax. For example, to use the virsh list command as root on the 192.0.2.1 host:
# virsh -c qemu+ssh://root@192.0.2.1/system list
Example output:
root@192.0.2.1's password:
Id Name State
---------------------------------
1 remote-guest running
However, you can remove the need to specify the connection details in full by modifying your SSH and libvirt configuration. For example:
# virsh -c remote-host list
Example output:
root@192.0.2.1's password:
Id Name State
---------------------------------
1 remote-guest running
To enable this improvement, use the following instructions.
Prerequisites
- Virtualization is enabled on your host system. For instructions, see Preparing RHEL to host virtual machines.
Procedure
Edit the ~/.ssh/config file on your local host and add an entry similar to the following:

Host <host-alias>
    User root
    Hostname 192.0.2.1

In this example, <host-alias> is a shortened name associated with the root@192.0.2.1 remote host connection.

Edit the /etc/libvirt/libvirt.conf file and add lines similar to the following:

uri_aliases = [
  "<qemu-alias>=qemu+ssh://<host-alias>/system",
]

In this example, <qemu-alias> is a host alias that QEMU and libvirt utilities will use for the qemu+ssh://192.0.2.1/system connection.

Optional: If you want to use libvirt utilities exclusively on a single remote host, you can also set a specific connection as the default target for libvirt-based utilities. To do so, edit the /etc/libvirt/libvirt.conf file and set the value of the uri_default parameter to <qemu-alias>:

# These can be used in cases when no URI is supplied by the application
# (@uri_default also prevents probing of the hypervisor driver).
uri_default = "<qemu-alias>"

Warning: You cannot do this if you also want to manage VMs on your local host or on different remote hosts.
Optional: To avoid having to provide the root password when connecting to a remote host, use one or more of the following methods:
- Set up key-based SSH access to the remote host
- Use SSH connection multiplexing to connect to the remote system
- Use Kerberos authentication in Identity Management
Verification
Confirm that you can manage remote VMs by using libvirt-based utilities on the local system with an added -c <qemu-alias> parameter. This automatically performs the commands over SSH on the remote host.

For example, verify that the following command lists VMs on the 192.0.2.1 remote host, the connection to which was set up as <qemu-alias> in the previous steps:

# virsh -c <qemu-alias> list
root@192.0.2.1's password:
 Id   Name                   State
----------------------------------------
 1    example-remote-guest   running

If you have set up the default URI to a remote host, ensure that libvirt commands automatically apply to the specified remote host:

$ virsh list
root@192.0.2.1's password:
 Id   Name                   State
----------------------------------------
 1    example-remote-guest   running
Chapter 3. Creating virtual machines
To create a virtual machine (VM) in RHEL 10, you can use the command line or the RHEL 10 web console.
3.1. Creating virtual machines by using the command line
To create a virtual machine (VM) on your RHEL 10 host, you can use the virt-install utility.
Prerequisites
- Virtualization is enabled on your host system. For instructions, see Preparing RHEL to host virtual machines.
- You have a sufficient amount of system resources to allocate to your VMs, such as disk space, RAM, or CPUs. The recommended values might vary significantly depending on the intended tasks and workload of the VMs.
An operating system (OS) installation source is available locally or on a network. This can be one of the following:
- An ISO image of an installation medium
- A disk image of an existing VM installation

Warning: Installing from a host CD-ROM or DVD-ROM device is not possible in RHEL 10. If you select a CD-ROM or DVD-ROM as the installation source when using any VM installation method available in RHEL 10, the installation will fail. For more information, see the Red Hat Knowledgebase.

Also note that Red Hat provides support only for a limited set of guest operating systems.
- To create a VM that uses the system connection of libvirt, you must have root privileges or be in the libvirt user group on the host. For more information, see User-space connection types for virtualization.
- Optional: A Kickstart file can be provided for faster and easier configuration of the installation.
Procedure
To create a VM and start its OS installation, use the virt-install command, along with the following mandatory arguments:

- --name - the name of the new machine
- --memory - the amount of allocated memory
- --vcpus - the number of allocated virtual CPUs
- --disk - the type and size of the allocated storage
- --cdrom or --location - the type and location of the OS installation source
- --osinfo - the OS type and version that you intend to install

Note: To list all available values for the --osinfo argument, run the virt-install --osinfo list command. For more details, you can also run the osinfo-query os command. However, you might need to install the libosinfo-bin package first.

Based on the chosen installation method, the necessary options and values can vary. See the following examples for more details.
Create a VM and install an OS from a local ISO file:
The following command creates a VM named demo-guest1 that installs the Windows 10 OS from an ISO image locally stored in the /home/username/Downloads/Win10install.iso file. This VM is also allocated with 2048 MiB of RAM and 2 vCPUs, and an 80 GiB qcow2 virtual disk is automatically configured for the VM.
# virt-install \
    --name demo-guest1 --memory 2048 \
    --vcpus 2 --disk size=80 --osinfo win10 \
    --cdrom /home/username/Downloads/Win10install.iso
Create a VM, install an OS from a live CD, and do not create a permanent disk:
The following command creates a VM named demo-guest2 that uses the /home/username/Downloads/rhel10.iso image to run a RHEL 10 OS from a live CD. No disk space is assigned to this VM, so changes made during the session will not be preserved. In addition, the VM is allocated with 4096 MiB of RAM and 4 vCPUs.
# virt-install \
    --name demo-guest2 --memory 4096 --vcpus 4 \
    --disk none --livecd --osinfo rhel10.0 \
    --cdrom /home/username/Downloads/rhel10.iso
Create a VM and import an existing disk image:
The following command creates a RHEL 10 VM named demo-guest3 that connects to an existing disk image, /home/username/backup/disk.qcow2. This is similar to physically moving a hard drive between machines, so the OS and data available to demo-guest3 are determined by how the image was handled previously. In addition, this VM is allocated with 2048 MiB of RAM and 2 vCPUs.
# virt-install \
    --name demo-guest3 --memory 2048 --vcpus 2 \
    --osinfo rhel10.0 --import \
    --disk /home/username/backup/disk.qcow2

Note that you must use the --osinfo option when importing a disk image. If it is not provided, the performance of the created VM will be negatively affected.
Create a VM and install an OS from a remote URL:
The following command creates a VM named demo-guest4 that installs from the http://example.com/OS-install URL. For the installation to start successfully, the URL must contain a working OS installation tree. In addition, the OS is automatically configured by using the /home/username/ks.cfg kickstart file. This VM is also allocated with 2048 MiB of RAM, 2 vCPUs, and a 160 GiB qcow2 virtual disk.

# virt-install \
    --name demo-guest4 --memory 2048 --vcpus 2 --disk size=160 \
    --osinfo rhel10.0 --location http://example.com/OS-install \
    --initrd-inject /home/username/ks.cfg --extra-args="inst.ks=file:/ks.cfg console=tty0 console=ttyS0,115200n8"

In addition, if you want to host demo-guest4 on a RHEL 10 ARM 64 host, include the following lines to ensure that the kickstart file installs the kernel-64k package:

%packages
-kernel
kernel-64k
%end
Create a VM and install an OS in a text-only mode:
The following command creates a VM named demo-guest5 that installs from a RHEL10.iso image file in text-only mode, without graphics. It connects the guest console to the serial console. The VM has 16384 MiB of memory, 16 vCPUs, and a 280 GiB disk. This kind of installation is useful when connecting to a host over a slow network link.

# virt-install \
    --name demo-guest5 --memory 16384 --vcpus 16 --disk size=280 \
    --osinfo rhel10.0 --location RHEL10.iso \
    --graphics none --extra-args='console=ttyS0'
Create a VM and install an OS in graphical mode:
The following command creates a VM named demo-guest6, which has the same configuration as demo-guest5, but provides the host device pci_0003_00_00_0 for networking and configures graphics for a graphical installation.

# virt-install \
    --name demo-guest6 --memory 16384 --vcpus 16 --disk size=280 \
    --osinfo rhel10.0 --location RHEL10.iso --graphics vnc,listen=0.0.0.0,port=5901 \
    --input keyboard,bus=virtio --input mouse,bus=virtio \
    --hostdev pci_0003_00_00_0 --network none

Note that the name of the host device available for installation can be retrieved by using the virsh nodedev-list --cap pci command. To use the installation GUI, you can connect any VNC viewer to the host's IP address at VNC port 5901 when the installation starts. However, you might have to open this port in the firewall first, for example:

# firewall-cmd --add-port 5901/tcp
Create a VM on a remote host:
The following command creates a VM named demo-guest7, which has the same configuration as demo-guest5, but resides on the 192.0.2.1 remote host.
# virt-install \
    --connect qemu+ssh://root@192.0.2.1/system --name demo-guest7 --memory 16384 \
    --vcpus 16 --disk size=280 --osinfo rhel10.0 --location RHEL10.iso \
    --graphics none --extra-args='console=ttyS0'
Create a VM on a remote host and use a DASD mediated device as storage:
The following command creates a VM named demo-guest8, which has the same configuration as demo-guest5, but for its storage, it uses a DASD mediated device mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8, and assigns it device number 1111.

# virt-install \
    --name demo-guest8 --memory 16384 --vcpus 16 \
    --osinfo rhel10.0 --location RHEL10.iso --graphics none \
    --disk none --hostdev mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8,address.type=ccw,address.cssid=0xfe,address.ssid=0x0,address.devno=0x1111,boot-order=1 \
    --extra-args 'rd.dasd=0.0.1111'

Note that the name of the mediated device available for installation can be retrieved by using the virsh nodedev-list --cap mdev command.
For additional options and examples of virt-install commands, see the virt-install(1) man page on your system.
Verification
- If the VM is created successfully, a virt-viewer window opens with a graphical console of the VM and starts the guest OS installation.
Troubleshooting
If virt-install fails with a cannot find default network error:

- Ensure that the libvirt-daemon-config-network package is installed:

# dnf info libvirt-daemon-config-network
Installed Packages
Name : libvirt-daemon-config-network
[...]

- Verify that the libvirt default network is active and configured to start automatically:

# virsh net-list --all
 Name      State    Autostart   Persistent
--------------------------------------------
 default   active   yes         yes

- If it is not, activate the default network and set it to auto-start:

# virsh net-autostart default
Network default marked as autostarted
# virsh net-start default
Network default started

- If activating the default network fails with the following error, the libvirt-daemon-config-network package has not been installed correctly:

error: failed to get network 'default'
error: Network not found: no network with matching name 'default'

To fix this, re-install libvirt-daemon-config-network:

# dnf reinstall libvirt-daemon-config-network

- If activating the default network fails with an error similar to the following, a conflict has occurred between the default network's subnet and an existing interface on the host:

error: Failed to start network default
error: internal error: Network is already in use by interface ens2

To fix this, use the virsh net-edit default command and change the 192.0.2.* values in the configuration to a subnet not already in use on the host.
3.2. Creating virtual machines by using the web console
To create virtual machines (VMs) in a GUI on a RHEL 10 host, you can use the web console.
3.2.1. Creating new virtual machines by using the web console
You can create a new virtual machine (VM) on a previously prepared host machine by using the RHEL 10 web console.
Prerequisites
- You have installed the RHEL 10 web console. For instructions, see Installing and enabling the web console.
- Virtualization is enabled on your host system.
- The web console VM plug-in is installed on your host system.
- To create a VM that uses the system connection of libvirt, you must have root privileges or be in the libvirt user group on the host. For more information, see User-space connection types for virtualization.
- You have a sufficient amount of system resources to allocate to your VMs, such as disk space, RAM, or CPUs. The recommended values might vary significantly depending on the intended tasks and workload of the VMs.
Procedure
In the Virtual Machines interface of the web console, click Create VM.
The Create new virtual machine dialog appears.
Enter the basic configuration of the VM you want to create.
- Name - The name of the VM.
- Connection - The level of privileges granted to the session. For more details, expand the associated dialog box in the web console.
- Installation type - The installation can use a local installation medium, a URL, a PXE network boot, a cloud base image, or download an operating system from a limited set of guest operating systems.
Operating system - The guest operating system running on the VM. Note that Red Hat provides support only for a limited set of guest operating systems.
Note: To download and install Red Hat Enterprise Linux directly from the web console, you must add an offline token in the Offline token field.
- Storage - The type of storage.
- Storage Limit - The amount of storage space.
- Memory - The amount of memory.
Create the VM:
- If you want the VM to automatically install the operating system, click Create and run.
- If you want to edit the VM before the operating system is installed, click Create and edit.

Note: If you do not want to install an operating system immediately after creating a VM, you can do it later by selecting the VM in the Virtual Machines interface and clicking the Install button.
3.2.2. Creating virtual machines by importing disk images with the web console
You can create a virtual machine (VM) by importing a disk image of an existing VM installation in the RHEL 10 web console.
Prerequisites
- You have installed the RHEL 10 web console. For instructions, see Installing and enabling the web console.
- The web console VM plug-in is installed on your system.
- You have a sufficient amount of system resources to allocate to your VMs, such as disk space, RAM, or CPUs. The recommended values can vary significantly depending on the intended tasks and workload of the VMs.
- You have downloaded a disk image of an existing VM installation.
Procedure
In the Virtual Machines interface of the web console, click Import VM.
The Import a virtual machine dialog appears.
Enter the basic configuration of the VM you want to create:
- Name - The name of the VM.
- Disk image - The path to the existing disk image of a VM on the host system.
- Operating system - The operating system running on a VM disk. Note that Red Hat provides support only for a limited set of guest operating systems.
- Memory - The amount of memory to allocate for use by the VM.
Import the VM:
- To install the operating system on the VM without additional edits to the VM settings, click Import and run.
- To edit the VM settings before the installation of the operating system, click Import and edit.
3.2.3. Creating virtual machines with cloud image authentication by using the web console
By default, distribution cloud images have no login accounts. However, by using the RHEL 10 web console, you can create a virtual machine (VM) and specify the root and user account login credentials, which are then passed to cloud-init.
Prerequisites
- You have installed the RHEL 10 web console. For instructions, see Installing and enabling the web console.
- The web console VM plug-in is installed on your system.
- Virtualization is enabled on your host system.
- You have a sufficient amount of system resources to allocate to your VMs, such as disk space, RAM, or CPUs. The recommended values may vary significantly depending on the intended tasks and workload of the VMs.
Procedure
- Log in to the RHEL 10 web console.
In the Virtual Machines interface of the web console, click Create VM.
The Create new virtual machine dialog appears.
- In the Name field, enter a name for the VM.
- On the Details tab, in the Installation type field, select Cloud base image.
- In the Installation source field, set the path to the image file on your host system.
Enter the configuration for the VM that you want to create.
- Operating system - The VM’s operating system. Note that Red Hat provides support only for a limited set of guest operating systems.
- Storage - The type of storage with which to configure the VM.
- Storage Limit - The amount of storage space with which to configure the VM.
- Memory - The amount of memory with which to configure the VM.
Click the Automation tab and set your cloud authentication credentials:
- Root password - Enter a root password for your VM. Leave the field blank if you do not want to set a root password.
- User login - Enter a cloud-init user login. Leave this field blank if you do not want to create a user account.
- User password - Enter a password. Leave this field blank if you do not want to create a user account.
Click Create and run.

The VM is created.
3.3. Enabling QEMU Guest Agent features on your virtual machines
To use certain features on a virtual machine (VM) hosted on your RHEL 10 system, you must first configure the VM to use the QEMU Guest Agent (GA).
For a complete list of these features, see Virtualization features that require QEMU Guest Agent.
3.3.1. Enabling QEMU Guest Agent on Linux guests
To allow a RHEL host to perform a certain subset of operations on a Linux virtual machine (VM), you must enable the QEMU Guest Agent (GA).
You can enable QEMU GA on both running and shut-down VMs.
Procedure
Create an XML configuration file for the QEMU GA, for example named qemuga.xml:

# touch qemuga.xml

Add the following lines to the file:

<channel type='unix'>
   <source mode='bind' path='/var/lib/libvirt/qemu/f16x86_64.agent'/>
   <target type='virtio' name='org.qemu.guest_agent.0'/>
</channel>

Use the XML file to add QEMU GA to the configuration of the VM.
If the VM is running, use the following command:
# virsh attach-device <vm-name> qemuga.xml --live --config

If the VM is shut down, use the following command:

# virsh attach-device <vm-name> qemuga.xml --config
In the Linux guest operating system, install the QEMU GA:
# dnf install qemu-guest-agent

Start the QEMU GA service on the guest:

# systemctl start qemu-guest-agent
Verification
To ensure that QEMU GA is enabled and running on the Linux VM, do any of the following:
- In the guest operating system, use the systemctl status qemu-guest-agent | grep Loaded command. If the output includes enabled, QEMU GA is active on the VM.
- Use the virsh domfsinfo <vm-name> command on the host. If it displays any output, QEMU GA is active on the specified VM.
3.3.2. Virtualization features that require QEMU Guest Agent
If you enable QEMU Guest Agent (GA) on a virtual machine (VM), you can use the following commands on your host to manage the VM:
- virsh shutdown --mode=agent - This shutdown method is more reliable than virsh shutdown --mode=acpi, because virsh shutdown used with QEMU GA is guaranteed to shut down a cooperative guest in a clean state.
- virsh domfsfreeze and virsh domfsthaw - Freezes the guest file system in isolation.
- virsh domfstrim - Instructs the guest to trim its file system, which helps to reduce the data that needs to be transferred during migrations.

  Important: If you want to use this command to manage a Linux VM, you must also set the following SELinux boolean in the guest operating system:

  # setsebool virt_qemu_ga_read_nonsecurity_files on

- virsh domtime - Queries or sets the guest's clock.
- virsh setvcpus --guest - Instructs the guest to take CPUs offline, which is useful when CPUs cannot be hot-unplugged.
- virsh domifaddr --source agent - Queries the guest operating system's IP address by using QEMU GA. For example, this is useful when the guest interface is directly attached to a host interface.
- virsh domfsinfo - Shows a list of mounted file systems in the running guest.
- virsh set-user-password - Sets the password for a given user account in the guest.
- virsh set-user-sshkeys - Edits the authorized SSH keys file for a given user in the guest.

  Important: If you want to use this command to manage a Linux VM, you must also set the following SELinux boolean in the guest operating system:

  # setsebool virt_qemu_ga_manage_ssh on
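For example, a minimal sketch of a freeze and thaw cycle around a host-side backup, assuming a VM named demo-guest1 with QEMU GA running; the reported filesystem count depends on the guest:

# virsh domfsfreeze demo-guest1
Froze 1 filesystem(s)
# virsh domfsthaw demo-guest1
Thawed 1 filesystem(s)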
Chapter 4. Starting virtual machines
To start a virtual machine (VM) in RHEL 10, you can use the command line interface or the web console GUI.
4.1. Starting a virtual machine by using the command line
To start a shut-down virtual machine (VM) or restore a saved VM, you can use the command-line interface (CLI). By using the CLI, you can start both local and remote VMs.
Prerequisites
- You have created a VM and installed a guest operating system. For details, see Creating virtual machines.
- The VM is defined and inactive.
- You know the name of the VM.
For remote VMs:
- You have the IP address of the host where the VM is located.
- You have root access privileges to the host.
- If the VM uses a system connection of libvirt, you have root privileges or belong to the libvirt user group on the host. For details, see User-space connection types for virtualization.
Procedure
For a local VM, use the virsh start utility. For example, the following command starts the demo-guest1 VM:

# virsh start demo-guest1
Domain 'demo-guest1' started

For a VM located on a remote host, use the virsh start utility along with the QEMU+SSH connection to the host. For example, the following command starts the demo-guest1 VM on the 192.0.2.1 host:

# virsh -c qemu+ssh://root@192.0.2.1/system start demo-guest1
root@192.0.2.1's password:
Domain 'demo-guest1' started
4.2. Starting virtual machines by using the web console
If a virtual machine (VM) is in the shut off state, you can start it by using the RHEL 10 web console. You can also configure the VM to be started automatically when the host starts.
Prerequisites
- You have installed the RHEL 10 web console. For instructions, see Installing and enabling the web console.
- You have created a VM and installed a guest operating system. For details, see Creating virtual machines.
- The VM is defined and currently inactive.
- You know the name of the VM.
- The web console VM plug-in is installed on your system. For details, see Setting up the web console to manage virtual machines.
- If the VM uses a system connection of libvirt, you have root privileges or belong to the libvirt user group on the host. For details, see User-space connection types for virtualization.
Procedure
In the Virtual Machines interface, click the VM you want to start.
A new page opens with detailed information about the selected VM and controls for shutting down and deleting the VM.
Click Run.
The VM starts, and you can connect to its console or graphical output.
Optional: To configure the VM to start automatically when the host starts, toggle the Autostart checkbox in the Overview section.

If you use network interfaces that are not managed by libvirt, you must also make additional changes to the systemd configuration. Otherwise, the affected VMs might fail to start. For details, see Starting virtual machines automatically when the host starts.
4.3. Starting virtual machines automatically when the host starts
When a host with a running virtual machine (VM) restarts, the VM is shut down, and must be started again manually by default. To ensure a VM is active whenever its host is running, you can configure the VM to be started automatically.
The following instructions describe setting up VM autostart in the command line. For autostarting VMs in the web console, see Starting virtual machines by using the web console.
Prerequisites
- You have created a virtual machine (VM) and installed a guest operating system. For details, see Creating virtual machines.
Procedure
Use the virsh autostart utility to configure the VM to start automatically when the host starts. For example, the following command configures the demo-guest1 VM to start automatically:

# virsh autostart demo-guest1
Domain 'demo-guest1' marked as autostarted

If you use network interfaces that are not managed by libvirt, you must also make additional changes to the systemd configuration. Otherwise, the affected VMs might fail to start.

Note: These interfaces include, for example:

- Bridge devices created by NetworkManager
- Networks configured to use <forward mode='bridge'/>
In the systemd configuration directory tree, create a virtqemud.service.d directory if it does not exist yet:

# mkdir -p /etc/systemd/system/virtqemud.service.d/

Create a 10-network-online.conf systemd unit override file in the previously created directory. The content of this file overrides the default systemd configuration for the virtqemud service:

# touch /etc/systemd/system/virtqemud.service.d/10-network-online.conf

Add the following lines to the 10-network-online.conf file. This configuration change ensures that systemd starts the virtqemud service only after the network on the host is ready:

[Unit]
After=network-online.target
Verification
View the VM configuration, and check that the autostart option is enabled. For example, the following command displays basic information about the demo-guest1 VM, including the autostart option:

# virsh dominfo demo-guest1
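For illustration, trimmed output might look like the following; values other than the Autostart line vary per VM:

Name:           demo-guest1
State:          shut off
Persistent:     yes
Autostart:      enable

If the Autostart field is set to enable, the VM starts automatically with the host.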
If you use network interfaces that are not managed by libvirt, check if the content of the 10-network-online.conf file is set correctly:

$ cat /etc/systemd/system/virtqemud.service.d/10-network-online.conf

Successful output:

[Unit]
After=network-online.target
Chapter 5. Converting virtual machines to the Q35 machine type
In RHEL 10, the i440fx machine type is deprecated, and will be removed in a future major version of RHEL. Therefore, Red Hat recommends converting your virtual machines (VMs) that use i440fx to use the q35 machine type instead.
In addition, using q35 provides additional benefits in comparison to i440fx, such as Advanced Host Controller Interface (AHCI) and virtual Input-output memory management unit (vIOMMU) emulation.
Note that you can also convert VM configurations that you have not defined yet.
Changing a machine type of a VM is similar to changing the motherboard on a physical machine. As a consequence, converting the machine type of a VM from i440fx to q35 might, in some cases, cause problems with the functionality of the guest operating system.
Prerequisites
- A VM on your RHEL 10 host is using the i440fx machine type. To confirm this, use the following command:

# virsh dumpxml <vm-name> | grep machine

Example output for an i440fx VM:

<type arch='x86_64' machine='pc-i440fx-10.0.0'>hvm</type>

- You have backed up the original configuration of the VM, so you can use it for conversion and disaster recovery, if necessary:

# virsh dumpxml <vm-name> > <vm-name>-backup.xml
Procedure
For undefined VMs, do the following:
Adjust the configuration of the VM to use Q35. As the source configuration, use the backup file that you created previously.
# cat <vm-name>-backup.xml | virt-xml --edit --convert-to-q35 > <vm-name-q35>.xml

Define the VM:

# virsh define <vm-name-q35>.xml
For defined VMs, do the following:
Adjust the configuration of the VM to use Q35.
# virt-xml <vm-name> --edit --convert-to-q35

If the VM is running, shut it down:

# virsh shutdown <vm-name>
Verification
Display the machine type of the VM.
# virsh dumpxml <vm-name> | grep machine
<type arch='x86_64' machine='q35'>hvm</type>
Troubleshooting
If changing the machine type has made the VM non-functional, define a new VM based on the backed-up configuration:

# virsh define <vm-name>-backup.xml
Chapter 6. Connecting to virtual machines
To interact with a virtual machine (VM) in RHEL 10, you need to connect to it by doing one of the following:
- For connecting to a VM graphical display in a graphical user interface, use the Virtual Machines pane in the RHEL web console.
- If you need to interact with a VM graphical display without using the web console, use the Virt Viewer application.
- When a graphical display is not possible or not necessary, use an SSH terminal connection.
- When the virtual machine is not reachable from your system by using a network, use the virsh console.
6.1. Connecting to virtual machines by using the web console
To connect to running KVM virtual machines, you can use the web console interface.
6.1.1. Opening a virtual machine graphical console in the web console
To view and interact with the graphical output of a selected virtual machine (VM) in the RHEL 10 web console, use the VNC console or a remote viewer tool.
Prerequisites
You have installed the RHEL 10 web console.
For instructions, see Installing and enabling the web console.
- The web console VM plug-in is installed on your system.
- Ensure that both the host and the VM support a graphical interface.
- The VMs that you want to interact with are installed and started. For instructions, see Creating virtual machines and Starting virtual machines.
Procedure
- Log in to the RHEL 10 web console.
In the Virtual Machines interface, click the VM whose graphical console you want to view.
A new page opens with an Overview and a Console section for the VM.
Select VNC console in the console drop-down menu.
The VNC console appears below the menu in the web interface.
You can now interact with the VM console by using the mouse and keyboard in the same manner you interact with a real machine. The display in the VM console reflects the activities being performed on the VM.
Note: The host on which the web console is running may intercept specific key combinations, such as Ctrl+Alt+Del, which prevents them from being sent to the VM.
To send such key combinations, click the Send key menu and select the key sequence to send.
For example, to send the Ctrl+Alt+Del combination to the VM, click the Send key menu and select the Ctrl+Alt+Del menu entry.
Optional: You can also display the graphical console of a selected VM in a remote viewer, such as Virt Viewer.
- Select Desktop viewer in the console drop-down menu.
- Click Launch remote viewer.
The virt viewer (.vv) file downloads. Open the file to launch Virt Viewer.
Note: You can launch Virt Viewer from within the web console. Other VNC remote viewers can be launched manually.
Troubleshooting
- If clicking in the graphical console does not have any effect, expand the console to full screen. This is a known issue with the mouse cursor offset.
If launching the remote viewer in the web console does not work or is not optimal, you can manually connect with any VNC viewer application by using the following connection details:
- Address - The default address is 127.0.0.1. You can modify the vnc_listen parameter in /etc/libvirt/qemu.conf to change it to the host's IP address.
- VNC port - 5901
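As a minimal sketch of such a manual connection, assuming the default address and port above and that the virt-viewer package is installed on the client, you can use remote-viewer with a VNC URI:

# remote-viewer vnc://127.0.0.1:5901

Other VNC clients can use the same address and port, for example vncviewer from TigerVNC.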
6.1.2. Opening a virtual machine serial console in the web console
You can view the serial console of a selected virtual machine (VM) in the RHEL 10 web console. This is useful when the host machine or the VM is not configured with a graphical interface.
For more information about the serial console, see Opening a virtual machine serial console by using the command line interface.
Prerequisites
You have installed the RHEL 10 web console.
For instructions, see Installing and enabling the web console.
- The web console VM plug-in is installed on your system.
- The VMs that you want to interact with are installed and started. For instructions, see Creating virtual machines and Starting virtual machines.
Procedure
- Log in to the RHEL 10 web console.
In the Virtual Machines pane, click the VM whose serial console you want to view.
A new page opens with an Overview and a Console section for the VM.
Select Serial console in the console drop-down menu.
The serial console appears in the web interface.
Optional: You can disconnect and reconnect the serial console from the VM.
- To disconnect the serial console from the VM, click Disconnect.
- To reconnect the serial console to the VM, click Reconnect.
6.2. Opening a virtual machine graphical console by using the command line
You can connect to a graphical console of a virtual machine (VM) by opening the VM in the Virt Viewer utility.
Prerequisites
- Your system and the VM that you are connecting to must support graphical displays.
- If the target VM is located on a remote host, you must have connection and root access privileges to the host.
- Optional: If the target VM is located on a remote host, it is helpful to set up libvirt and SSH for more convenient access to remote hosts. For instructions, see Setting up easier access to remote virtualization hosts.
- The VMs that you want to interact with are installed and started. For instructions, see Creating virtual machines and Starting virtual machines.
Procedure
To connect to a local VM, use the following command and replace guest-name with the name of the VM you want to connect to:
# virt-viewer guest-name

To connect to a remote VM, use the virt-viewer command with the SSH protocol. For example, the following command connects as root to a VM called guest-name, located on the remote system 192.0.2.1. The connection also requires root authentication for 192.0.2.1.

# virt-viewer --direct --connect qemu+ssh://root@192.0.2.1/system guest-name

Successful output:

root@192.0.2.1's password:
Verification
If the connection works correctly, the VM display is shown in the Virt Viewer window.
You can interact with the VM console by using the mouse and keyboard in the same manner you interact with a real machine. The display in the VM console reflects the activities being performed on the VM.
Troubleshooting
- If clicking in the graphical console does not have any effect, expand the console to full screen. This is a known issue with the mouse cursor offset.
6.3. Connecting to a virtual machine by using SSH
If you do not need to use the graphical display of a virtual machine (VM), you can interact with the terminal of a VM by using the SSH connection protocol.
Prerequisites
- You have network connection and root access privileges to the target VM.
- If the target VM is located on a remote host, you also have connection and root access privileges to that host.
Your VM network assigns IP addresses by using the dnsmasq service generated by libvirt. This is the case, for example, in libvirt NAT networks.
Notably, if your VM is using one of the following network configurations, you cannot connect to the VM by using SSH:
- hostdev interfaces
- Direct interfaces
- Bridge interfaces
- The libvirt-nss component is installed and enabled on the VM's host. If it is not, do the following:

Install the libvirt-nss package:

# dnf install libvirt-nss

Edit the /etc/nsswitch.conf file and add libvirt_guest to the hosts line, for example as follows. A quick verification example follows the prerequisites list.

hosts: files libvirt_guest dns myhostname
- The VMs that you want to interact with are installed and started. For instructions, see Creating virtual machines and Starting virtual machines.
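With the prerequisites in place, a quick way to confirm that host-side name resolution works before you try SSH is a getent lookup. This is a sketch; the testguest1 name and the returned address are illustrative:

# getent hosts testguest1
192.168.122.12  testguest1

Because getent consults the hosts line in /etc/nsswitch.conf, a successful lookup confirms that the libvirt_guest NSS module is resolving VM names.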
Procedure
When connecting to a remote VM, SSH into its physical host first. The following example demonstrates connecting to a host machine 192.0.2.1 by using its root credentials:

# ssh root@192.0.2.1
root@192.0.2.1's password:
Last login: Mon Sep 24 12:05:36 2021
root~#

Use the VM's name and user access credentials to connect to it. For example, the following connects to the testguest1 VM by using its root credentials:

# ssh root@testguest1
root@testguest1's password:
Last login: Wed Sep 12 12:05:36 2018
root~]#
Troubleshooting
If you do not know the VM's name, you can list all VMs available on the host by using the virsh list --all command:

# virsh list --all

Example output:

Id    Name           State
----------------------------------------------------
2     testguest1     running
-     testguest2     shut off
6.4. Opening a virtual machine serial console by using the command line
To connect to the serial console of a virtual machine (VM), you can use the virsh console command. This is useful, for example, in the following situations:
- The VM does not provide the VNC protocol, so it does not offer video display for GUI tools.
- The VM does not have a network connection, so it cannot be interacted with by using SSH.
Prerequisites
The GRUB boot loader on your host must be configured to use serial console. To verify, check that the /etc/default/grub file on your host contains the GRUB_TERMINAL=serial parameter.

$ sudo grep GRUB_TERMINAL /etc/default/grub
GRUB_TERMINAL=serial

The VM must have a serial console device configured, such as console type='pty'. To verify, do the following:

# virsh dumpxml vm-name | grep console
<console type='pty' tty='/dev/pts/2'>
</console>

The VM must have the serial console configured in its kernel command line. To verify this, the cat /proc/cmdline command output on the VM should include console=<console-name>, where <console-name> is architecture-specific:
- For AMD64 and Intel 64: ttyS0
- For ARM 64: ttyAMA0

Note: The following commands in this procedure use ttyS0.

# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-6.12.0-0.el10_0.x86_64 root=/dev/mapper/rhel-root ro console=tty0 console=ttyS0,9600n8 rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb

If the serial console is not set up properly on a VM, using virsh console to connect to the VM connects you to an unresponsive guest console. However, you can still exit the unresponsive console by using the Ctrl+] shortcut.
To set up serial console on the VM, do the following:
On the VM, enable the console=ttyS0 kernel option:

# grubby --update-kernel=ALL --args="console=ttyS0"

Clear the kernel options that might prevent your changes from taking effect.

# grub2-editenv - unset kernelopts

- Reboot the VM.
The serial-getty@<console-name> service must be enabled. For example, on AMD64 and Intel 64:

# systemctl status serial-getty@ttyS0.service
○ serial-getty@ttyS0.service - Serial Getty on ttyS0
     Loaded: loaded (/usr/lib/systemd/system/serial-getty@.service; enabled; preset: enabled)

- The VMs that you want to interact with are installed and started. For instructions, see Creating virtual machines and Starting virtual machines.
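If the status output shows the service as disabled, you can enable and start it. This is a minimal sketch for the ttyS0 console used in this procedure:

# systemctl enable --now serial-getty@ttyS0.service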
Procedure
On your host system, use the virsh console command. The following example connects to the guest1 VM, if the libvirt driver supports safe console handling:

# virsh console guest1 --safe
Connected to domain 'guest1'
Escape character is ^]

- You can interact with the virsh console in the same way as with a standard command-line interface.
6.5. Replacing the SPICE remote display protocol with VNC by using the command line
The SPICE remote display protocol is not supported on RHEL 10 hosts. If you have a virtual machine (VM) that is configured to use the SPICE protocol, you can replace the SPICE protocol with the VNC protocol by using the command line. Otherwise, the VM fails to start.
Prerequisites
- You have an existing VM that is configured to use the SPICE remote display protocol and is shut down.
Procedure
On the host, run the following command, and replace <vm-name> with the name of the VM that you want to convert to VNC.

# virt-xml <vm-name> --edit --convert-to-vnc

Successful output:

Domain 'vm-name' defined successfully

Important: This also removes certain SPICE devices from the VM, such as audio and USB passthrough, because they do not have a suitable replacement in the VNC protocol. For more information, see Considerations in adopting RHEL 9.
Verification
Inspect the configuration of the VM you converted, and make sure the graphics type is listed as vnc.

# virsh dumpxml <vm-name> | grep "graphics"

Successful output:

<graphics type='vnc' port='5900' autoport='yes' listen='127.0.0.1'>
Chapter 7. Shutting down and restarting virtual machines
To shut down or restart a virtual machine on a RHEL 10 host, you can use the command line or the web console GUI.
7.1. Shutting down a virtual machine by using the command line
To shut down a virtual machine (VM), you can use the virsh shutdown command. If the VM is unresponsive, you can force the shutdown by using the virsh destroy command.
Prerequisites
- You have a running VM on your host. For more information, see Starting virtual machines.
Procedure
To shut down a responsive VM, do one of the following:
If you are connected to the guest, use a shutdown command or a GUI element appropriate to the guest operating system.
Note: In some environments, such as in Linux guests that use the GNOME Desktop, using the GUI power button for suspending or hibernating the guest might instead shut down the VM.
Alternatively, use the virsh shutdown command on the host:

If the VM is on a local host:

# virsh shutdown <demo-guest1>

Successful output:

Domain 'demo-guest1' is being shutdown

If the VM is on a remote host (in this example 192.0.2.1):

# virsh -c qemu+ssh://root@192.0.2.1/system shutdown <demo-guest1>

Successful output:

root@192.0.2.1's password:
Domain 'demo-guest1' is being shutdown
If the VM is not responding, you can force it to shut down. To do this, use the virsh destroy command on the host:

# virsh destroy <demo-guest1>

Successful output:

Domain 'demo-guest1' destroyed

Important: The virsh destroy command does not actually delete or remove the VM configuration or disk images. It only terminates the running instance of the VM, similarly to pulling the power cord from a physical machine.
However, in rare cases, virsh destroy may cause corruption of the VM's file system, so use this command only if all other shutdown methods have failed.
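As a hedged illustration of that advice, the following snippet first requests a graceful shutdown and forces the VM off only if it is still running after a grace period. The 60-second wait is an arbitrary illustrative value:

# virsh shutdown demo-guest1
# sleep 60
# virsh domstate demo-guest1 | grep -q running && virsh destroy demo-guest1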
Verification
On the host, display the list of your VMs to see their status.

# virsh list --all

Successful output:

Id    Name           State
------------------------------------------
-     demo-guest1    shut off
7.2. Shutting down a virtual machine by using the web console
To shut down a running virtual machine (VM), you can use the Virtual Machines interface in the RHEL 10 web console.
Prerequisites
You have installed the RHEL 10 web console.
For instructions, see Installing and enabling the web console.
- The web console VM plug-in is installed on your system.
- You have a running VM on your host. For more information, see Starting virtual machines.
Procedure
- In the Virtual Machines interface, find the row of the VM you want to shut down.
On the right side of the row, click Shut down.
The VM shuts down.
Troubleshooting
- If the VM does not shut down, click the Menu button next to the Shut down button and select Force shut down.
- To shut down an unresponsive VM, you can also send a non-maskable interrupt by clicking Send non-maskable interrupt in the Menu.
7.3. Restarting a virtual machine by using the command line
To restart a virtual machine (VM), you can use the virsh reboot command. If the VM is unresponsive, you can force the restart by using the virsh destroy command.
Prerequisites
- You have a running VM on your host. For more information, see Starting virtual machines.
Procedure
To restart a responsive VM, do one of the following:
- If you are connected to the guest, use a restart command or a GUI element appropriate to the guest operating system.
Alternatively, use the virsh reboot command on the host:

If the VM is on a local host:

# virsh reboot demo-guest1

Successful output:

Domain 'demo-guest1' is being rebooted

If the VM is on a remote host (in this example 192.0.2.1):

# virsh -c qemu+ssh://root@192.0.2.1/system reboot demo-guest1

Successful output:

root@192.0.2.1's password:
Domain 'demo-guest1' is being rebooted
If the VM is not responding, you can force it to shut down, and then start it:
Force a VM to shut down.

# virsh destroy demo-guest1

Successful output:

Domain 'demo-guest1' destroyed

Important: The virsh destroy command does not actually delete or remove the VM configuration or disk images. It only terminates the running instance of the VM, similarly to pulling the power cord from a physical machine.
However, in rare cases, virsh destroy may cause corruption of the VM's file system, so use this command only if all other shutdown methods have failed.

Start the VM again.

# virsh start demo-guest1

Successful output:

Domain 'demo-guest1' started
Verification
On the host, display the list of your VMs to see their status.

# virsh list --all

Successful output:

Id    Name           State
------------------------------------------
1     demo-guest1    running
7.4. Restarting a virtual machine by using the web console
To restart a running virtual machine (VM), you can use the Virtual Machines interface in the RHEL 10 web console.
Prerequisites
You have installed the RHEL 10 web console.
For instructions, see Installing and enabling the web console.
- The web console VM plug-in is installed on your system.
- You have a running VM on your host. For more information, see Starting virtual machines.
Procedure
- In the Virtual Machines interface, find the row of the VM you want to restart.
On the right side of the row, click the Menu button.
A drop-down menu of actions appears.
In the drop-down menu, click Reboot.
The VM shuts down and restarts.
Troubleshooting
- If the VM does not restart, click the Menu button next to the Reboot button and select Force reboot.
- To shut down an unresponsive VM, you can also send a non-maskable interrupt by clicking Send non-maskable interrupt in the Menu.
Chapter 8. Deleting virtual machines
To delete virtual machines in RHEL 10, use the command line interface or the web console GUI.
8.1. Deleting virtual machines by using the command line
To delete a virtual machine (VM), you can remove its XML configuration and associated storage files from the host by using the command line.
Prerequisites
- The VM is shut down.
- No other VMs use the same associated storage.
- Optional: Important data from the VM has been backed up.
Procedure
Use the virsh undefine utility.

For example, the following command removes the guest1 VM, its associated storage volumes, and non-volatile RAM, if any.

# virsh undefine guest1 --remove-all-storage --nvram
Domain 'guest1' has been undefined
Volume 'vda'(/home/images/guest1.qcow2) removed.
Verification
Ensure that the VM is no longer present on your host:

# virsh list --all
8.2. Deleting virtual machines by using the web console
You can delete a virtual machine (VM) and its associated storage files from the host by using the RHEL 10 web console.
Prerequisites
You have installed the RHEL 10 web console.
For instructions, see Installing and enabling the web console.
- The web console VM plug-in is installed on your system.
- Back up important data from the VM.
- Make sure no other VM uses the same associated storage.
- Optional: Shut down the VM.
Procedure
- Log in to the RHEL 10 web console.
In the Virtual Machines interface, click the Menu button of the VM that you want to delete.
A drop-down menu appears with controls for various VM operations.
Click Delete.
A confirmation dialog appears.
- Optional: To delete all or some of the storage files associated with the VM, select the checkboxes next to the storage files you want to delete.
Click Delete.
The VM and any selected storage files are deleted.
Chapter 9. Viewing information about virtual machines
When you need to adjust or troubleshoot any aspect of your virtualization deployment on RHEL 10, the first step is usually to view information about the current state and configuration of your virtual machines (VMs).
To do so, you can use the command line or the web console. You can also view the information in the VM’s XML configuration.
9.1. Viewing virtual machine information by using the command line
To retrieve information about virtual machines (VMs) on your host and their configurations, you can use the virsh command-line utility.
Procedure
To obtain a list of VMs on your host:

# virsh list --all

To obtain basic information about a specific VM:

# virsh dominfo testguest1

To obtain the complete XML configuration of a specific VM:

# virsh dumpxml testguest1

For an annotated example of a VM's XML configuration, see Sample virtual machine XML configuration.

For information about a VM's disks and other block devices:

# virsh domblklist testguest1

For instructions on managing a VM's storage, see Managing storage for virtual machines.

To obtain information about a VM's file systems and their mountpoints:

# virsh domfsinfo testguest1

To obtain more details about the vCPUs of a specific VM:

# virsh vcpuinfo testguest1

To configure and optimize the vCPUs in your VM, see Optimizing virtual machine CPU performance.
To list all network interfaces of a specific VM:
# virsh domiflist testguest5
Interface  Type     Source   Model    MAC
-------------------------------------------------------------
vnet0      network  default  virtio   52:54:00:ad:23:fd
vnet1      bridge   br0      virtio   52:54:00:40:d4:9d

For details about network interfaces, VM networks, and instructions for configuring them, see Configuring virtual machine network connections.
- For instructions on viewing information about storage pools and storage volumes on your host, see Viewing virtual machine storage information by using the CLI.
9.2. Viewing virtual machine information by using the web console
To access a virtualization overview that contains summarized information about available virtual machines (VMs), disks, storage pools, and networks, you can use the web console.
Prerequisites
- The web console VM plug-in is installed on your system.
Procedure
Click Virtual Machines in the web console's side menu.
A dialog box appears with information about the available storage pools, available networks, and the VMs to which the web console is connected.
The information includes the following:
- Storage Pools - The number of storage pools, active or inactive, that can be accessed by the web console and their state.
- Networks - The number of networks, active or inactive, that can be accessed by the web console and their state.
- Name - The name of the VM.
- Connection - The type of libvirt connection, system or session.
- State - The state of the VM.
- Resource usage - Memory and virtual CPU usage of the VM.
Disks - Detailed information about disks assigned to the VM.
Note: Changes to the virtual network interface settings take effect only after restarting the VM.
Additionally, the MAC address can only be modified when the VM is shut off.
9.3. Sample virtual machine XML configuration
The XML configuration of a virtual machine (VM), also referred to as a domain XML, determines the VM’s settings and components. The following table shows a sample XML configuration of a VM and explains the contents.
To obtain the XML configuration of a VM, you can use the virsh dumpxml command followed by the VM’s name.
# virsh dumpxml testguest1
| Domain XML Section | Description |
|---|---|
<domain type='kvm'> <name>Testguest1</name> <uuid>ec6fbaa1-3eb4-49da-bf61-bb02fbec4967</uuid> <memory unit='KiB'>1048576</memory> <currentMemory unit='KiB'>1048576</currentMemory>
| This is a KVM virtual machine called Testguest1, with 1024 MiB allocated RAM. |
<vcpu placement='static'>1</vcpu>
| The VM is allocated with a single virtual CPU (vCPU). For information about configuring vCPUs, see Optimizing virtual machine CPU performance. |
<os> <type arch='x86_64' machine='pc-q35-rhel10.0.0'>hvm</type> <boot dev='hd'/> </os>
| The machine architecture is set to the AMD64 and Intel 64 architecture, and uses the Intel Q35 machine type to determine feature compatibility. The OS is set to be booted from the hard disk drive. |
<features> <acpi/> <apic/> </features>
| The acpi and apic hypervisor features are enabled. |
<cpu mode='host-model' check='partial'/>
|
The host CPU definitions from capabilities XML (obtainable with virsh capabilities) are automatically copied into the XML configuration of the VM when it boots. |
<clock offset='utc'> <timer name='rtc' tickpolicy='catchup'/> <timer name='pit' tickpolicy='delay'/> <timer name='hpet' present='no'/> </clock>
| The VM’s virtual hardware clock uses the UTC time zone. In addition, three different timers are set up for synchronization with the QEMU hypervisor. |
<on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>destroy</on_crash>
|
When the VM powers off, or its OS terminates unexpectedly, the VM instance is destroyed and its resources are released (on_poweroff and on_crash set to destroy). When the guest OS reboots, the VM instance is restarted with the same configuration (on_reboot set to restart). |
<pm> <suspend-to-mem enabled='no'/> <suspend-to-disk enabled='no'/> </pm>
| The S3 and S4 ACPI sleep states are disabled for this VM. |
|
|
The VM uses the
The first disk is a virtualized hard-drive based on the
The second disk is a virtualized CD-ROM and its logical device name is set to |
|
|
The VM uses a single controller for attaching USB devices, and a root controller for PCI-Express (PCIe) devices. In addition, a For more information about virtual devices, see Types of virtual devices. |
<interface type='network'> <mac address='52:54:00:65:29:21'/> <source network='default'/> <model type='virtio'/> </interface>
|
A network interface is set up in the VM that uses the default virtual network and the virtio device model. For information about configuring the network interface, see Optimizing virtual machine network performance. |
|
|
For more information about interacting with VMs, see Interacting with virtual machines by using the web console. |
<input type='tablet' bus='usb'> <address type='usb' bus='0' port='1'/> </input> <input type='mouse' bus='ps2'/> <input type='keyboard' bus='ps2'/>
| The VM uses a virtual USB port, which is set up to receive tablet input, and a virtual PS/2 port set up to receive mouse and keyboard input. This is set up automatically and changing these settings is not recommended. |
<graphics type='vnc' port='-1' autoport='yes' listen='127.0.0.1'> <listen type='address' address='127.0.0.1'/> </graphics>
|
The VM uses the vnc protocol for displaying its graphical output, listening on the local 127.0.0.1 address with an automatically allocated port. |
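To change any of the settings shown in this sample, you can open the VM's XML configuration directly in an editor. The following is a standard approach; virsh edit validates the modified XML before applying it, and testguest1 stands in for your VM's name:

# virsh edit testguest1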
Chapter 10. Cloning virtual machines
To quickly create a new virtual machine (VM) with a specific set of properties, you can clone an existing VM.
Cloning creates a new VM that uses its own disk image for storage, but most of the clone’s configuration and stored data is identical to the source VM. This makes it possible to prepare multiple VMs optimized for a certain task without the need to optimize each VM individually.
10.1. How cloning virtual machines works
Cloning a virtual machine (VM) copies the XML configuration of the source VM and its disk images, and makes adjustments to the configurations to ensure the uniqueness of the new VM. You can use this process to rapidly generate VMs with a specific configuration and content.
Adjustments to the cloned VM configuration include changing the name of the VM and ensuring it uses the disk image clones. Nevertheless, the data stored on the clone’s virtual disks is identical to the source VM.
If you are planning to create multiple clones of a VM, first create a VM template that does not contain:
- Unique settings, such as persistent network MAC configuration, which can prevent the clones from working correctly.
- Sensitive data, such as SSH keys and password files.
For instructions, see Creating virtual machine templates.
10.2. Creating virtual machine templates
To create multiple virtual machine (VM) clones that work correctly, you can remove information and configurations that are unique to a source VM, such as SSH keys or persistent network MAC configuration. This creates a VM template, which you can use to easily and safely create VM clones.
You can create VM templates by using the virt-sysprep utility or you can create them manually based on your requirements.
10.2.1. Creating a virtual machine template by using virt-sysprep
To create a cloning template from an existing virtual machine (VM), you can use the virt-sysprep utility. This utility removes certain configurations that might cause the clone to work incorrectly, such as specific network settings or system registration metadata.
As a result, virt-sysprep makes creating clones of the VM more efficient, and ensures that the clones work more reliably.
Prerequisites
The guestfs-tools package, which contains the virt-sysprep utility, is installed on your host:

# dnf install guestfs-tools

- The source VM intended as a template is shut down.

You know where the disk image for the source VM is located, and you are the owner of the VM's disk image file.
Note that disk images for VMs created in the system connection of libvirt are located in the /var/lib/libvirt/images directory and owned by the root user by default:

# ls -la /var/lib/libvirt/images
-rw-------. 1 root root  9665380352 Jul 23 14:50 a-really-important-vm.qcow2
-rw-------. 1 root root  8591507456 Jul 26  2017 an-actual-vm-that-i-use.qcow2
-rw-------. 1 root root  8591507456 Jul 26  2017 totally-not-a-fake-vm.qcow2
-rw-------. 1 root root 10739318784 Sep 20 17:57 another-vm-example.qcow2

- Optional: Any important data on the source VM's disk has been backed up. If you want to preserve the source VM intact, clone it first and turn the clone into a template.
Procedure
Ensure you are logged in as the owner of the VM's disk image:

# whoami
root

Optional: Copy the disk image of the VM.

# cp /var/lib/libvirt/images/vm-name.qcow2 /var/lib/libvirt/images/vm-name-original.qcow2

This is used later to verify that the VM was successfully turned into a template.

Use the following command, and replace /var/lib/libvirt/images/vm-name.qcow2 with the path to the disk image of the source VM.

# virt-sysprep -a /var/lib/libvirt/images/vm-name.qcow2

Optionally, you can also adjust which modifications the virt-sysprep utility performs on the disk image. For details, see the OPERATIONS section in the virt-sysprep man page on your system.
Verification
To confirm that the process was successful, compare the modified disk image to the original one, for example with the virt-diff utility from the guestfs-tools package. The output lists the files that virt-sysprep removed or changed; a list of removed host-specific files, such as SSH host keys, indicates a successful creation of a template:

# virt-diff -a /var/lib/libvirt/images/vm-name-original.qcow2 -A /var/lib/libvirt/images/vm-name.qcow2
10.2.2. Creating a virtual machine template manually
To create a template from an existing virtual machine (VM), you can manually remove specific configurations from a guest VM to prepare it for cloning.
Prerequisites
Ensure that you know the location of the disk image for the source VM and are the owner of the VM’s disk image file.
Note that disk images for VMs created in the system connection of libvirt are by default located in the /var/lib/libvirt/images directory and owned by the root user:

# ls -la /var/lib/libvirt/images
-rw-------. 1 root root  9665380352 Jul 23 14:50 a-really-important-vm.qcow2
-rw-------. 1 root root  8591507456 Jul 26  2017 an-actual-vm-that-i-use.qcow2
-rw-------. 1 root root  8591507456 Jul 26  2017 totally-not-a-fake-vm.qcow2
-rw-------. 1 root root 10739318784 Sep 20 17:57 another-vm-example.qcow2
- Optional: Any important data on the VM’s disk has been backed up. If you want to preserve the source VM intact, clone it first and edit the clone to create a template.
Procedure
Configure the VM for cloning:
- Install any software needed on the clone.
- Configure any non-unique settings for the operating system.
- Configure any non-unique application settings.
Remove the network configuration:
Remove any persistent udev rules by using the following command:

# rm -f /etc/udev/rules.d/70-persistent-net.rules

Note: If udev rules are not removed, the name of the first NIC might be eth1 instead of eth0.

Remove unique information from the NMConnection files in the /etc/NetworkManager/system-connections/ directory.
Remove the MAC address, IP address, DNS, gateway, and any other unique information or non-desired settings.

Remove similar unique information and non-desired settings from the /etc/hosts and /etc/resolv.conf files.
Remove registration details:
For VMs registered on the Red Hat Network (RHN):

# rm /etc/sysconfig/rhn/systemid

For VMs registered with Red Hat Subscription Manager (RHSM):

If you do not plan to use the original VM:

# subscription-manager unsubscribe --all
# subscription-manager unregister
# subscription-manager clean

If you plan to use the original VM:

# subscription-manager clean

Note: The original RHSM profile remains in the Portal along with your ID code. Use the following command to reactivate your RHSM registration on the VM after it is cloned:

# subscription-manager register --consumerid=71rd64fx-6216-4409-bf3a-e4b7c7bd8ac9
Remove other unique details:
Remove SSH public and private key pairs:

# rm -rf /etc/ssh/ssh_host_example

Remove the configuration of LVM devices:

# rm /etc/lvm/devices/system.devices

- Remove any other application-specific identifiers or configurations that might cause conflicts if running on multiple machines.

Remove the gnome-initial-setup-done file to configure the VM to run the configuration wizard on the next boot:

# rm ~/.config/gnome-initial-setup-done

Note: The wizard that runs on the next boot depends on the configurations that have been removed from the VM. In addition, on the first boot of the clone, it is recommended that you change the hostname.
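Although not listed in the steps above, resetting the machine ID is another step commonly performed when preparing Linux templates, so that each clone generates a unique ID on first boot. This is a hedged addition; the virt-sysprep utility performs an equivalent operation automatically:

# truncate -s 0 /etc/machine-id

An empty /etc/machine-id causes systemd to generate a new machine ID when the clone first boots.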
10.3. Cloning a virtual machine by using the command line
To create a new virtual machine (VM) with a specific set of properties, you can clone an existing VM by using the command line.
Prerequisites
- The source VM is shut down.
- Ensure that there is sufficient disk space to store the cloned disk images.
- The virt-install package is installed on the host.
- Optional: When creating multiple VM clones, remove unique data and settings from the source VM to ensure the cloned VMs work properly. For instructions, see Creating virtual machine templates.
Procedure
Use the virt-clone utility with options that are appropriate for your environment and use case.

Sample use cases
The following command clones a local VM named example-VM-1 and creates the example-VM-1-clone VM. It also creates and allocates the example-VM-1-clone.qcow2 disk image in the same location as the disk image of the original VM, and with the same data:

# virt-clone --original example-VM-1 --auto-clone
Allocating 'example-VM-1-clone.qcow2'   | 50.0 GB  00:05:37

Clone 'example-VM-1-clone' created successfully.

The following command clones a VM named example-VM-2, and creates a local VM named example-VM-3, which uses only two out of multiple disks of example-VM-2:

# virt-clone --original example-VM-2 --name example-VM-3 --file /var/lib/libvirt/images/disk-1-example-VM-2.qcow2 --file /var/lib/libvirt/images/disk-2-example-VM-2.qcow2
Allocating 'disk-1-example-VM-2.qcow2'  | 78.0 GB  00:05:37
Allocating 'disk-2-example-VM-2.qcow2'  | 80.0 GB  00:05:37

Clone 'example-VM-3' created successfully.

To clone your VM to a different host, migrate the VM without undefining it on the local host. For example, the following commands clone the previously created example-VM-3 VM to the 192.0.2.1 remote system, including its local disks. Note that you require root privileges to run these commands for 192.0.2.1:

# virsh migrate --offline --persistent example-VM-3 qemu+ssh://root@192.0.2.1/system
root@192.0.2.1's password:

# scp /var/lib/libvirt/images/<disk-image-name>.qcow2 root@192.0.2.1:/var/lib/libvirt/images/<disk-image-name>.qcow2

Note: For additional details and examples of using virt-clone, see the virt-clone (1) man page on your system.
Verification
To verify the VM has been successfully cloned and is working correctly:
Confirm the clone has been added to the list of VMs on your host:
# virsh list --all
Id   Name                  State
---------------------------------------
-    example-VM-1          shut off
-    example-VM-1-clone    shut off

Start the clone and observe if it boots up:

# virsh start example-VM-1-clone
Domain 'example-VM-1-clone' started
10.4. Cloning a virtual machine by using the web console
To create new virtual machines (VMs) with a specific set of properties, you can clone an existing VM by using the web console.
Cloning a VM also clones the disks associated with that VM.
Prerequisites
You have installed the RHEL 10 web console.
For instructions, see Installing and enabling the web console.
- The web console VM plug-in is installed on your system.
- Ensure that the VM you want to clone is shut down.
Procedure
- Log in to the RHEL 10 web console.
In the Virtual Machines interface of the web console, click the Menu button of the VM that you want to clone.
A drop-down menu is displayed with controls for various VM operations.
Click Clone.
The Create a clone VM dialog is displayed.
- Optional: Enter a new name for the VM clone.
Click Clone.
A new VM is created based on the source VM.
Verification
- Confirm whether the cloned VM is displayed in the list of VMs available on your host.
Chapter 11. Migrating virtual machines
If the current host of a virtual machine (VM) becomes unsuitable or cannot be used anymore, or if you want to redistribute the hosting workload, you can migrate the VM to another KVM host.
11.1. How migrating virtual machines works
To successfully relocate virtual machines (VMs) between hosts with minimal downtime, you must use the appropriate migration type.
You can migrate a running VM without interrupting the workload, with only minor downtime, by using a live migration. By default, the migrated VM is transient on the destination host, and remains defined also on the source host. The essential part of a live migration is transferring the state of the VM’s memory and of any attached virtualized devices to a destination host. For the VM to remain functional on the destination host, the VM’s disk images must remain available to it.
To migrate a shut-off VM, you must use an offline migration, which copies the VM’s configuration to the destination host. For details, see the following table.
| Migration type | Description | Use case | Storage requirements |
|---|---|---|---|
| Live migration | The VM continues to run on the source host machine while KVM is transferring the VM’s memory pages to the destination host. When the migration is nearly complete, KVM very briefly suspends the VM, and resumes it on the destination host. | Useful for VMs that require constant uptime. However, for VMs that modify memory pages faster than KVM can transfer them, such as VMs under heavy I/O load, the live migration might fail. (1) | The VM’s disk images must be accessible both to the source host and the destination host during the migration. (2) |
| Offline migration | Moves the VM’s configuration to the destination host | Recommended for shut-off VMs and in situations when shutting down the VM does not disrupt your workloads. | The VM’s disk images do not have to be accessible to the source or destination host during migration, and can be copied or moved manually to the destination host instead. |
(1) For possible solutions, see: Additional virsh migrate options for live migrations
(2) To achieve this, use one of the following:
- Storage located on a shared network
- The --copy-storage-all parameter for the virsh migrate command, which copies disk image contents from the source to the destination over the network.
- Storage area network (SAN) logical units (LUNs).
- Ceph storage clusters
For easier management of large-scale migrations, explore other Red Hat products.
11.2. Benefits of migrating virtual machines
You can use virtual machine (VM) migration to balance workloads, maintain hardware independence, save energy, and relocate VMs geographically as needed.
Migrating VMs can be useful for:
- Load balancing
- VMs can be moved to host machines with lower usage if their host becomes overloaded, or if another host is under-utilized.
- Hardware independence
- When you need to upgrade, add, or remove hardware devices on the host machine, you can safely relocate VMs to other hosts. This means that VMs do not experience any downtime for hardware improvements.
- Energy saving
- VMs can be redistributed to other hosts, and the unloaded host systems can thus be powered off to save energy and cut costs during low usage periods.
- Geographic migration
- VMs can be moved to another physical location for lower latency or when required for other reasons.
11.3. Limitations for migrating virtual machines
To avoid virtual machine (VM) migration failures and achieve successful VM relocations between hosts, ensure you are aware of the limitations of migrating VMs.
VMs that use certain features and configurations will not work correctly if migrated, or the migration will fail. Such features include:
- Device passthrough
- SR-IOV device assignment (With the exception of migrating a VM with an attached virtual function of a Mellanox networking device, which works correctly.)
- Mediated devices, such as vGPUs (With the exception of migrating a VM with an attached NVIDIA vGPU, which works correctly.)
- A migration between hosts that use Non-Uniform Memory Access (NUMA) pinning works only if the hosts have similar topology. However, the performance on running workloads might be negatively affected by the migration.
- Both the source and destination hosts must use specific RHEL versions that are supported for VM migration. For details, see Supported hosts for virtual machine migration.
The physical CPUs, both on the source host and the destination host, must be identical, otherwise the migration might fail. Any differences between the hosts in the following CPU-related areas can cause problems with the migration:
CPU model
- Migrating between an Intel 64 host and an AMD64 host is unsupported, even though they share the x86-64 instruction set.
- For steps to ensure that a VM will work correctly after migrating to a host with a different CPU model, see Verifying host CPU compatibility for virtual machine migration.
- Physical machine firmware versions and settings
- Migrating VMs between ARM 64 hosts is currently only supported between hosts with identical CPUs, firmware, and memory page size. For more information, see How virtualization on ARM 64 differs from AMD64 and Intel 64.
11.4. Migrating a virtual machine by using the command line
If the current host of a virtual machine (VM) becomes unsuitable or cannot be used anymore, or if you want to redistribute the hosting workload, you can migrate the VM to another KVM host.
You can perform a live migration or an offline migration. For differences between the two scenarios, see How migrating virtual machines works.
Prerequisites
- Hypervisor
- The source host and the destination host both use the KVM hypervisor.
- Network connection
- The source host and the destination host are able to reach each other over the network. Use the ping utility to verify this.
- Open ports
Ensure the following ports are open on the destination host:
- Port 22 is needed for connecting to the destination host by using SSH.
- Port 16514 is needed for connecting to the destination host by using TLS.
- Port 16509 is needed for connecting to the destination host by using TCP.
- Ports 49152-49215 are needed by QEMU for transferring the memory and disk migration data.
- Hosts
- For the migration to be supportable by Red Hat, the source host and destination host must be using specific operating systems and machine types. To ensure this is the case, see Supported hosts for virtual machine migration.
- CPU
- The VM must be compatible with the CPU features of the destination host. To ensure this is the case, see Verifying host CPU compatibility for virtual machine migration.
- Storage
The disk images of VMs that will be migrated are accessible to both the source host and the destination host. This is optional for offline migration, but required for migrating a running VM. To ensure storage accessibility for both hosts, one of the following must apply:
- You are using storage area network (SAN) logical units (LUNs).
- You are using a Ceph storage cluster.
- You have created a disk image with the same format and size as the source VM disk and you will use the --copy-storage-all parameter when migrating the VM.
- The disk image is located on a separate networked location. For instructions to set up such shared VM storage, see Sharing virtual machine disk images with other hosts.
- Network bandwidth
When migrating a running VM, your network bandwidth must be higher than the rate at which the VM generates dirty memory pages.
To obtain the dirty page rate of your VM before you start the live migration, do the following:
Monitor the rate of dirty page generation of the VM for a short period of time.
# virsh domdirtyrate-calc <example_VM> 30

After the monitoring finishes, obtain its results:

# virsh domstats <example_VM> --dirtyrate
Domain: 'example_VM'
  dirtyrate.calc_status=2
  dirtyrate.calc_period=30
  dirtyrate.megabytes_per_second=2

In this example, the VM is generating 2 MB of dirty memory pages per second. Attempting to live-migrate such a VM on a network with a bandwidth of 2 MB/s or less will cause the live migration not to progress if you do not pause the VM or lower its workload.
To ensure that the live migration finishes successfully, your network bandwidth should be significantly greater than the VM’s dirty page generation rate.
Note: The value of the calc_period option might differ based on the workload and dirty page rate. You can experiment with several calc_period values to determine the most suitable period that aligns with the dirty page rate in your environment.
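For example, to sample the dirty page rate over a longer 60-second window instead; 60 is an arbitrary illustrative value:

# virsh domdirtyrate-calc <example_VM> 60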
- Bridge tap network specifics
- When migrating an existing VM in a public bridge tap network, the source and destination hosts must be located on the same network. Otherwise, the VM network will not work after migration.
- Connection protocol
When performing a VM migration, the virsh client on the source host can use one of several protocols to connect to the libvirt daemon on the destination host. Examples in the following procedure use an SSH connection, but you can choose a different one.

If you want libvirt to use an SSH connection, ensure that the virtqemud socket is enabled and running on the destination host.

# systemctl enable --now virtqemud.socket

If you want libvirt to use a TLS connection, ensure that the virtproxyd-tls socket is enabled and running on the destination host.

# systemctl enable --now virtproxyd-tls.socket

If you want libvirt to use a TCP connection, ensure that the virtproxyd-tcp socket is enabled and running on the destination host.

# systemctl enable --now virtproxyd-tcp.socket
Procedure
Offline migration
The following command migrates a shut-off example-VM VM from your local host to the system connection of the example-destination host by using an SSH tunnel.

# virsh migrate --offline --persistent <example_VM> qemu+ssh://example-destination/system
Live migration
The following command migrates the example-VM VM from your local host to the system connection of the example-destination host by using an SSH tunnel. The VM keeps running during the migration.

# virsh migrate --live --persistent <example_VM> qemu+ssh://example-destination/system

Wait for the migration to complete. The process might take some time depending on network bandwidth, system load, and the size of the VM. If the --verbose option is not used for virsh migrate, the CLI does not display any progress indicators except errors.
When the migration is in progress, you can use the virsh domjobinfo utility to display the migration statistics.
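For example, from a second shell on the source host while the migration runs:

# virsh domjobinfo <example_VM>

The output includes statistics such as the elapsed time and the amount of data processed and remaining.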
Multi-FD live migration
You can use multiple parallel connections to the destination host during the live migration. This is also known as multiple file descriptors (multi-FD) migration. With multi-FD migration, you can speed up the migration by utilizing all of the available network bandwidth for the migration process.
# virsh migrate --live --persistent --parallel --parallel-connections 4 <example_VM> qemu+ssh://<example-destination>/system

This example uses 4 multi-FD channels to migrate the <example_VM> VM. It is a good practice to use one channel for each 10 Gbps of available network bandwidth. The default value is 2 channels.
Live migration with an increased downtime limit
To improve the reliability of a live migration, you can set the maxdowntime parameter, which specifies the maximum amount of time, in milliseconds, the VM can be paused during live migration. Setting a larger downtime can help to ensure the migration completes successfully.

# virsh migrate-setmaxdowntime <example_VM> <time_interval_in_milliseconds>
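For example, to allow the VM to be paused for up to half a second during the migration; 500 milliseconds is an illustrative value, not a recommendation:

# virsh migrate-setmaxdowntime <example_VM> 500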
Live migration with passwordless SSH authentication
To avoid the need to enter the SSH password for the remote host during the migration, you can specify a private key file for the migration to use instead. This can be useful, for example, in automated migration scripts, or for peer-to-peer migration.

# virsh migrate --live --persistent <example_VM> qemu+ssh://<example-destination>/system?keyfile=<path_to_key>
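If you do not already have a suitable key pair, you can create and authorize one with the standard OpenSSH utilities; the migration_key file name used here is only an example:

# ssh-keygen -t ed25519 -f /root/.ssh/migration_key
# ssh-copy-id -i /root/.ssh/migration_key.pub root@<example-destination>

You can then pass /root/.ssh/migration_key as the keyfile value in the migration URI.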
Post-copy migration
If your VM has a large memory footprint, you can perform a post-copy migration, which transfers the source VM’s CPU state first and immediately starts the migrated VM on the destination host. The source VM’s memory pages are transferred after the migrated VM is already running on the destination host. Because of this, a post-copy migration can result in a smaller downtime of the migrated VM.
However, the running VM on the destination host might try to access memory pages that have not yet been transferred, which causes a page fault. If too many page faults occur during the migration, the performance of the migrated VM can be severely degraded.
Given the potential complications of a post-copy migration, it is usually better to use the following command, which starts a standard live migration and switches to a post-copy migration if the live migration cannot finish in a specified amount of time.

# virsh migrate --live --persistent --postcopy --timeout <time_interval_in_seconds> --timeout-postcopy <example_VM> qemu+ssh://<example-destination>/system
Auto-converged live migration
If your VM is under a heavy memory workload, you can use the --auto-converge option. This option automatically slows down the execution speed of the VM's CPU. As a consequence, this CPU throttling can help to slow down memory writes, which means the live migration might succeed even for VMs with a heavy memory workload. However, CPU throttling does not help with workloads where memory writes are not directly related to CPU execution speed, and it can negatively impact the performance of the VM during a live migration.

# virsh migrate --live --persistent --auto-converge <example_VM> qemu+ssh://<example-destination>/system
Verification
For offline migration:
On the destination host, list the available VMs to verify that the VM was migrated successfully.
# virsh list --all
 Id   Name           State
 -----------------------------
 -    example-VM-1   shut off
For live migration:
On the destination host, list the available VMs to verify the state of the destination VM:
# virsh list --all
 Id   Name           State
 -----------------------------
 10   example-VM-1   running

If the state of the VM is listed as running, it means that the migration is finished. However, if the live migration is still in progress, the state of the destination VM is listed as paused.
For post-copy migration:
On the source host, list the available VMs to verify the state of the source VM.
# virsh list --all
 Id   Name           State
 -----------------------------
 -    example-VM-1   shut off

On the destination host, list the available VMs to verify the state of the destination VM.

# virsh list --all
 Id   Name           State
 -----------------------------
 10   example-VM-1   running

If the state of the source VM is listed as shut off and the state of the destination VM is listed as running, it means that the migration is finished.
11.5. Live migrating a virtual machine by using the web console
To relocate running virtual machines (VMs) to another host without downtime or interruption, you can use the web console. This is also known as live migration.
Prerequisites
You have installed the RHEL 10 web console.
For instructions, see Installing and enabling the web console.
- The web console VM plug-in is installed on your system.
- Hypervisor: The source host and the destination host both use the KVM hypervisor.
- Hosts: The source and destination hosts are running.
Open ports: Ensure the following ports are open on the destination host; for a firewalld sketch, see the example after this list.
- Port 22 is needed for connecting to the destination host by using SSH.
- Port 16514 is needed for connecting to the destination host by using TLS.
- Port 16509 is needed for connecting to the destination host by using TCP.
- Ports 49152-49215 are needed by QEMU for transferring the memory and disk migration data.
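For example, on a destination host that uses firewalld, you could open the QEMU migration port range and the TCP connection port as follows. This is a minimal sketch; adapt it to the connection type and the firewall zones you actually use:

# firewall-cmd --permanent --add-port=49152-49215/tcp
# firewall-cmd --permanent --add-port=16509/tcp
# firewall-cmd --reload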
- CPU: The VM must be compatible with the CPU features of the destination host. To ensure this is the case, see Verifying host CPU compatibility for virtual machine migration.
Storage: The disk images of VMs that will be migrated are accessible to both the source host and the destination host. This is optional for offline migration, but required for migrating a running VM. To ensure storage accessibility for both hosts, one of the following must apply:
- You are using storage area network (SAN) logical units (LUNs).
- You are using a Ceph storage cluster.
- You have created a disk image with the same format and size as the source VM disk, and you will use the --copy-storage-all parameter when migrating the VM.
- The disk image is located on a separate networked location. For instructions on setting up such shared VM storage, see Sharing virtual machine disk images with other hosts.
Network bandwidth: When migrating a running VM, your network bandwidth must be higher than the rate at which the VM generates dirty memory pages.
To obtain the dirty page rate of your VM before you start the live migration, do the following on the command line:
Monitor the rate of dirty page generation of the VM for a short period of time.
# virsh domdirtyrate-calc vm-name 30

After the monitoring finishes, obtain its results, for example by using the virsh domstats utility with its --dirtyrate option:

# virsh domstats vm-name --dirtyrate
Domain: 'vm-name'
  dirtyrate.calc_status=2
  dirtyrate.calc_start_time=200942
  dirtyrate.calc_period=30
  dirtyrate.megabytes_per_second=2

In this example, the VM is generating 2 MB of dirty memory pages per second. If you attempt to live-migrate such a VM on a network with a bandwidth of 2 MB/s or less, the migration will not progress unless you pause the VM or lower its workload.
To ensure that the live migration finishes successfully, your network bandwidth should be significantly greater than the VM’s dirty page generation rate.
Note: The value of the calc_period option might differ based on the workload and dirty page rate. You can experiment with several calc_period values to determine the most suitable period that aligns with the dirty page rate in your environment, as shown in the sketch after this note.
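For example, to compare a short and a longer sampling period for the same VM, you could run the calculation twice; both period values are illustrative, and you can read the results with virsh domstats vm-name --dirtyrate after each run:

# virsh domdirtyrate-calc vm-name 5
# virsh domdirtyrate-calc vm-name 60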
- Bridge tap network specifics: When migrating an existing VM in a public bridge tap network, the source and destination hosts must be located on the same network. Otherwise, the VM network will not work after migration.
Procedure
In the Virtual Machines interface of the web console, click the Menu button of the VM that you want to migrate.
A drop-down menu appears with controls for various VM operations.
Click Migrate.
The Migrate VM to another host dialog appears.
- Enter the URI of the destination host.
Configure the duration of the migration:
- Permanent - Do not check the box if you want to migrate the VM permanently. Permanent migration completely removes the VM configuration from the source host.
- Temporary - Temporary migration migrates a copy of the VM to the destination host. This copy is deleted from the destination host when the VM is shut down. The original VM remains on the source host.
Click Migrate.
Your VM is migrated to the destination host.
Verification
To verify whether the VM has been successfully migrated and is working correctly:
- Confirm whether the VM appears in the list of VMs available on the destination host.
- Start the migrated VM and observe if it boots up.
11.6. Live migrating a virtual machine with an attached Mellanox virtual function
You can live migrate a virtual machine (VM) with an attached virtual function (VF) of a supported Mellanox networking device.
Red Hat implements the general functionality of VM live migration with an attached VF of a Mellanox networking device. However, the functionality depends on specific Mellanox device models and firmware versions.
Currently, the VF migration is supported only with a Mellanox CX-7 networking device.
The VF on the Mellanox CX-7 networking device uses a new mlx5_vfio_pci driver, which adds functionality that is necessary for the live migration, and libvirt binds the new driver to the VF automatically.
Red Hat directly supports Mellanox VF live migration only with the included mlx5_vfio_pci driver.
Limitations
Some virtualization features cannot be used when live migrating a VM with an attached virtual function:
Calculating the dirty memory page generation rate of the VM.

Currently, when migrating a VM with an attached Mellanox VF, live migration data and statistics provided by the virsh domjobinfo and virsh domdirtyrate-calc commands are inaccurate, because the calculations only count guest RAM without including the impact of the attached VF.

- Using a post-copy live migration.
- Using a virtual I/O Memory Management Unit (vIOMMU) device in the VM.
Additional limitations that are specific to the Mellanox CX-7 networking device:
A CX-7 device with the same Parameter-Set Identification (PSID) and the same firmware version must be used on both the source and the destination hosts.
You can check the PSID of your device with the following command:
# mstflint -d <device_pci_address> query | grep -i PSID
PSID: MT_1090111019

- On one CX-7 physical function, you can use at most 4 VFs for live migration at the same time. For example, you can migrate one VM with 4 attached VFs, or 4 VMs with one VF attached to each VM.
Prerequisites
You have a Mellanox CX-7 networking device with a firmware version that is equal to or greater than 28.36.1010.
Refer to Mellanox documentation for details about supported firmware versions and ensure you are using an up-to-date version of the firmware.
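For example, assuming that the mstflint query output contains an FW Version field, you could check the current firmware version of the device as follows:

# mstflint -d <device_pci_address> query | grep -i 'FW Version'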
- The host uses the Intel 64, AMD64, or ARM 64 CPU architecture.
- The Mellanox firmware version on the source host must be the same as on the destination host.
The mstflint package is installed on both the source and destination host:

# dnf install mstflint

The Mellanox CX-7 networking device has VF_MIGRATION_MODE set to MIGRATION_ENABLED:

# mstconfig -d <device_pci_address> query | grep -i VF_migration
VF_MIGRATION_MODE        MIGRATION_ENABLED(2)

You can set VF_MIGRATION_MODE to MIGRATION_ENABLED by using the following command:

# mstconfig -d <device_pci_address> set VF_MIGRATION_MODE=2
The openvswitch package is installed on both the source and destination host:

# dnf install openvswitch

- All of the general SR-IOV device prerequisites. For details, see Attaching SR-IOV networking devices to virtual machines.
- All of the general VM migration prerequisites. For details, see Migrating a virtual machine by using the command line.
Procedure
On the source host, set the Mellanox networking device to the switchdev mode.

# devlink dev eswitch set pci/<device_pci_address> mode switchdev

On the source host, create a virtual function on the Mellanox device.
# echo 1 > /sys/bus/pci/devices/0000\:e1\:00.0/sriov_numvfs

The /0000\:e1\:00.0/ part of the file path is based on the PCI address of the device. In this example, it is 0000:e1:00.0.
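To confirm that the VF now exists, you can, for example, read the value back from sysfs:

# cat /sys/bus/pci/devices/0000\:e1\:00.0/sriov_numvfs
1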
On the source host, unbind the VF from its driver.

# virsh nodedev-detach <vf_pci_address> --driver pci-stub

You can view the PCI addresses of the host devices, including the VF, for example by listing the PCI node devices that libvirt detects:

# virsh nodedev-list --cap pci

On the source host, enable the migration function of the VF.
# devlink port function set pci/0000:e1:00.0/1 migratable enable

In this example, pci/0000:e1:00.0/1 refers to the first VF on the Mellanox device with the given PCI address.

On the source host, configure Open vSwitch (OVS) for the migration of the VF. If the Mellanox device is in switchdev mode, it cannot transfer data over the network without this configuration.

Ensure the openvswitch service is running.

# systemctl start openvswitch

Enable hardware offloading to improve networking performance.
# ovs-vsctl set Open_vSwitch . other_config:hw-offload=true

Increase the maximum idle time to ensure network connections remain open during the migration.

# ovs-vsctl set Open_vSwitch . other_config:max-idle=300000

Create a new bridge in the OVS instance.

# ovs-vsctl add-br <bridge_name>

Restart the openvswitch service.

# systemctl restart openvswitch

Add the physical Mellanox device to the OVS bridge.

# ovs-vsctl add-port <bridge_name> enp225s0np0

In this example, enp225s0np0 is the network interface name of the Mellanox device.

Add the VF of the Mellanox device to the OVS bridge.

# ovs-vsctl add-port <bridge_name> enp225s0npf0vf0

In this example, enp225s0npf0vf0 is the network interface name of the VF.
- Repeat steps 1-5 on the destination host.
On the source host, open a new file, such as mlx_vf.xml, and add an XML configuration of the VF similar to the following:

<interface type='hostdev' managed='yes'>
  <mac address='56:8c:73:00:00:01'/>
  <source>
    <address type='pci' domain='0x0000' bus='0xe1' slot='0x00' function='0x1'/>
  </source>
</interface>

This example configures a pass-through of the VF as a network interface for the VM. Ensure the MAC address is unique, and use the PCI address of the VF on the source host.
On the source host, attach the VF XML file to the VM.
# virsh attach-device <vm_name> mlx_vf.xml --live --config

In this example, mlx_vf.xml is the name of the XML file with the VF configuration. Use the --live option to attach the device to a running VM.

On the source host, start the live migration of the running VM with the attached VF.

# virsh migrate --live --domain <vm_name> --desturi qemu+ssh://<destination_host_ip_address>/system

For more details about performing a live migration, see Migrating a virtual machine by using the command line.
Verification
In the migrated VM, view the network interface name of the Mellanox VF, for example by listing the network interfaces:

# ip link show

In the migrated VM, check that the Mellanox VF works, for example by testing network connectivity through its interface:

# ping -c 4 -I <vf_interface_name> <destination_ip_address>
11.7. Live migrating a virtual machine with an attached NVIDIA vGPU
If you use virtual GPUs (vGPUs) in your virtualization workloads, you can live migrate a running virtual machine (VM) with an attached vGPU to another KVM host. Currently, this is only possible with NVIDIA GPUs.
Prerequisites
- You have an NVIDIA GPU with an NVIDIA Virtual GPU Software Driver version that supports this functionality. Refer to the relevant NVIDIA vGPU documentation for more details.
- You have a correctly configured NVIDIA vGPU assigned to a VM. For instructions, see: Setting up NVIDIA vGPU devices
It is also possible to live migrate a VM with multiple vGPU devices attached.
- The host uses the Intel 64 or AMD64 CPU architecture.
- All of the vGPU migration prerequisites that are documented by NVIDIA. Refer to the relevant NVIDIA vGPU documentation for more details.
- All of the general VM migration prerequisites. For details, see Migrating a virtual machine by using the command line
Limitations
- Certain NVIDIA GPU features can disable the migration. For more information, see the specific NVIDIA documentation for your graphics card.
- Some GPU workloads are not compatible with the downtime that happens during a migration. As a consequence, the GPU workloads might stop or crash. It is recommended to test if your workloads are compatible with the downtime before attempting a vGPU live migration.
- Currently, vGPU live migration fails if the vGPU driver version differs on the source and destination hosts.
Currently, some general virtualization features cannot be used when live migrating a VM with an attached vGPU:
Calculating the dirty memory page generation rate of the VM.

Currently, live migration data and statistics provided by the virsh domjobinfo and virsh domdirtyrate-calc commands are inaccurate when migrating a VM with an attached vGPU, because the calculations only count guest RAM without including vRAM from the vGPU.

- Using a post-copy live migration.
- Using a virtual I/O Memory Management Unit (vIOMMU) device in the VM.
Procedure
For instructions on how to proceed with the live migration, see: Migrating a virtual machine by using the command line
No additional parameters for the migration command are required for the attached vGPU device.
11.8. Sharing virtual machine disk images with other hosts
To perform a live migration of a virtual machine (VM) between supported KVM hosts, you must also migrate the storage of the running VM in a way that makes it possible for the VM to read from and write to the storage during the migration process.
One method to do this is to use shared VM storage. The following procedure provides instructions for sharing a locally stored VM image between the source host and the destination host by using the NFS protocol.
Prerequisites
- The VM intended for migration is shut down.
- Optional: A host system is available for hosting the storage that is neither the source nor the destination host, but that both the source and the destination host can reach through the network. This is the optimal solution for shared storage.
- Make sure that NFS file locking is not used as it is not supported in KVM.
- The NFS protocol is installed and enabled on the source and destination hosts. See Deploying an NFS server.
The virt_use_nfs SELinux boolean is set to on.

# setsebool virt_use_nfs 1
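By default, this setting does not persist across host reboots. To make it persistent, you can add the -P option:

# setsebool -P virt_use_nfs 1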
Procedure
Connect to the host that will provide shared storage. In this example, it is the example-shared-storage host:

# ssh root@example-shared-storage
root@example-shared-storage's password:
Last login: Mon Sep 24 12:05:36 2019
root~#

Create a directory on the example-shared-storage host that will hold the disk image and that will be shared with the migration hosts:

# mkdir /var/lib/libvirt/shared-images

Copy the disk image of the VM from the source host to the newly created directory. The following example copies the disk image example-disk-1 of the VM to the /var/lib/libvirt/shared-images/ directory of the example-shared-storage host:

# scp /var/lib/libvirt/images/example-disk-1.qcow2 root@example-shared-storage:/var/lib/libvirt/shared-images/example-disk-1.qcow2

On the host that you want to use for sharing the storage, add the sharing directory to the /etc/exports file. The following example shares the /var/lib/libvirt/shared-images directory with the example-source-machine and example-destination-machine hosts:

/var/lib/libvirt/shared-images example-source-machine(rw,no_root_squash) example-destination-machine(rw,no_root_squash)

Run the exportfs -a command for the changes in the /etc/exports file to take effect.

# exportfs -a

On both the source and destination host, mount the shared directory in the /var/lib/libvirt/images directory:

# mount example-shared-storage:/var/lib/libvirt/shared-images /var/lib/libvirt/images
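The mount command above does not persist across reboots. If you want the shared directory mounted automatically, you can, as one option, add a corresponding entry to the /etc/fstab file on both hosts:

example-shared-storage:/var/lib/libvirt/shared-images /var/lib/libvirt/images nfs defaults 0 0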
Verification
- Start the VM on the source host and observe if it boots successfully.
11.9. Verifying host CPU compatibility for virtual machine migration
For migrated virtual machines (VMs) to work correctly on the destination host, the CPUs on the source and the destination hosts must be compatible. To ensure that this is the case, calculate a common CPU baseline before you begin the migration.
The instructions in this section use an example migration scenario with the following host CPUs:
- Source host: Intel Core i7-8650U
- Destination host: Intel Xeon CPU E5-2620 v2
Note that this procedure does not apply to 64-bit ARM systems.
Prerequisites
- Virtualization is installed and enabled on your system.
- You have administrator access to the source host and the destination host for the migration.
Procedure
On the source host, obtain its CPU features and paste them into a new XML file, such as domCaps-CPUs.xml.

# virsh domcapabilities | xmllint --xpath "//cpu/mode[@name='host-model']" - > domCaps-CPUs.xml

In the XML file, replace the <mode> </mode> tags with <cpu> </cpu>.

Optional: Verify that the content of the domCaps-CPUs.xml file looks similar to the following:

<cpu>
    <model fallback="forbid">Skylake-Client-IBRS</model>
    <vendor>Intel</vendor>
    <feature policy="require" name="ss"/>
    ...
</cpu>

On the destination host, use the following command to obtain its CPU features:

# virsh domcapabilities | xmllint --xpath "//cpu/mode[@name='host-model']" -

Add the obtained CPU features from the destination host to the domCaps-CPUs.xml file on the source host. Again, replace the <mode> </mode> tags with <cpu> </cpu> and save the file.

Optional: Verify that the XML file now contains the CPU features from both hosts, that is, two consecutive <cpu> sections.

Use the XML file to calculate the CPU feature baseline for the VM you intend to migrate.

# virsh hypervisor-cpu-baseline domCaps-CPUs.xml

Open the XML configuration of the VM you intend to migrate, and replace the contents of the <cpu> section with the settings obtained in the previous step.

# virsh edit <vm_name>

If the VM is running, shut down the VM and start it again.

# virsh shutdown <vm_name>
# virsh start <vm_name>
11.10. Supported hosts for virtual machine migration
For the virtual machine (VM) migration to work properly and be supported by Red Hat, the source and destination hosts must be specific RHEL versions and machine types. The following table shows supported VM migration paths.
VM migration between hosts with the same RHEL version on the supported machine types is also supported.
| Migration method | Release type | Future version example | Support status |
|---|---|---|---|
| Forward | Minor release | 10.0.1 → 10.1 | On supported RHEL 10 systems and machine types |
| Forward | Major release | 9.7 → 10.1 | On supported RHEL systems and machine types |
| Backward | Minor release | 10.1 → 10.0.1 | On supported RHEL 10 systems and machine types |
| Backward | Major release | 10.1 → 9.7 | On supported RHEL systems and machine types |
The support status might vary for other virtualization solutions provided by Red Hat, including RHOSP and OpenShift Virtualization.
Chapter 12. Managing storage for virtual machines
A virtual machine (VM) requires storage for data, programs, and system files. You can assign physical or network-based storage to your VMs as virtual storage.
You can also modify how the storage is presented to a VM regardless of the underlying hardware.
12.1. Available methods for attaching storage to virtual machines
To provide storage for your virtual machines (VMs) running on a RHEL 10 host, you can use multiple types of storage hardware and services. Each of these types has different requirements, benefits, and use cases.
- File-based storage
File-based virtual disks are disk image files on your host file system, which are stored in a directory-based libvirt storage pool.

File-based disks are quick to set up and easy to migrate, but create additional overhead for the local file system, which can have a negative impact on performance.

In addition, certain libvirt features, such as snapshots, require a file-based virtual disk.

For instructions on attaching file-based storage to your VMs, see Attaching a file-based virtual disk to your virtual machine by using the command line or Attaching a file-based virtual disk to your virtual machine by using the web console.
- Disk-based storage
VMs can use an entire physical disk or partition instead of virtual disks.
Disk-based storage has the best performance of the available storage types and also provides direct access to host disks. However, you cannot create snapshots for such storage, and it is difficult to migrate.
For instructions on attaching disk-based storage to your VMs, see Attaching disk-based storage to your virtual machine by using the command line or Attaching disk-based storage to your virtual machine by using the web console.
- LVM-based storage
VMs can use the Logical Volume Manager (LVM) to allocate storage directly from a volume group (VG).
LVM storage has better performance than file-based disks and is easy to resize, but can be more difficult to migrate.
For instructions on attaching LVM-based storage to your VMs, see Attaching LVM-based storage to your virtual machine by using the command line or Attaching LVM-based storage to your virtual machine by using the web console.
- Network-based storage
Instead of local hardware, you can use remote storage, such as the Network File System (NFS).
This is useful for shared storage in clusters or high-availability environments. However, network-based storage is generally slower than local storage, and your network bandwidth can further limit the performance.
For instructions on attaching NFS-based storage to your VMs, see Attaching NFS-based storage to your virtual machine by using the command line or Attaching NFS-based storage to your virtual machine by using the web console.
12.2. Viewing virtual machine storage information by using the web console
By using the web console, you can view detailed information about storage resources available to your virtual machines (VMs).
Prerequisites
You have installed the RHEL 10 web console.
For instructions, see Installing and enabling the web console.
- The web console VM plug-in is installed on your system.
Procedure
- Log in to the RHEL 10 web console.
To view a list of the storage pools available on your host, click Storage pools at the top of the interface.
The Storage pools window appears, showing a list of configured storage pools.
The information includes the following:
- Name - The name of the storage pool.
- Size - The current allocation and the total capacity of the storage pool.
- Connection - The connection used to access the storage pool.
- State - The state of the storage pool.
Click the arrow next to the storage pool whose information you want to see.
The row expands to reveal the Overview pane with detailed information about the selected storage pool.
The information includes:
- Target path - The location of the storage pool.
- Persistent - Indicates whether or not the storage pool has a persistent configuration.
- Autostart - Indicates whether or not the storage pool starts automatically when the system boots up.
- Type - The type of the storage pool.
To view a list of storage volumes associated with the storage pool, click Storage Volumes.
The Storage Volumes pane appears, showing a list of configured storage volumes.
The information includes:
- Name - The name of the storage volume.
- Used by - The VM that is currently using the storage volume.
- Size - The size of the volume.
To view virtual disks attached to a specific VM:
- Click Virtual Machines in the left-side menu.
Click the VM whose information you want to see.
A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.
Scroll to Disks.
The Disks section displays information about the disks assigned to the VM, as well as options to Add or Edit disks.
The information includes the following:
- Device - The device type of the disk.
- Used - The amount of disk space currently allocated.
- Capacity - The maximum size of the storage volume.
- Bus - The type of disk device that is emulated.
- Access - Whether the disk is Writeable or Read-only. For raw disks, you can also set the access to Writeable and shared.
- Source - The disk device or file.
12.3. Viewing virtual machine storage information by using the command line
By using the command line, you can view detailed information about storage resources available to your virtual machines (VMs).
Procedure
To view the available storage pools on the host, run the virsh pool-list command with options for the required granularity of the list. For example, the following options display all available information about all storage pools on your host:

# virsh pool-list --all --details
 Name                State     Autostart   Persistent   Capacity     Allocation   Available
 --------------------------------------------------------------------------------------------
 default             running   yes         yes          48.97 GiB    36.34 GiB    12.63 GiB
 RHEL-Storage-Pool   running   yes         yes          214.62 GiB   93.02 GiB    121.60 GiB

For additional options available for viewing storage pool information, use the virsh pool-list --help command.
To list the storage volumes in a specified storage pool, use the virsh vol-list command.

# virsh vol-list --pool <RHEL-Storage-Pool> --details
 Name                Path                                       Type   Capacity    Allocation
 ---------------------------------------------------------------------------------------------
 RHEL_Volume.qcow2   /home/VirtualMachines/RHEL_Volume.qcow2    file   60.00 GiB   13.93 GiB

To view all block devices attached to a virtual machine, use the virsh domblklist command.

# virsh domblklist --details <vm-name>
 Type   Device   Target   Source
 ----------------------------------------------------------------
 file   disk     vda      /home/VirtualMachines/vm-name.qcow2
 file   cdrom    vdb      -
12.4. Attaching storage to virtual machines
To add storage to a virtual machine (VM), you can attach a storage resource to the VM as a virtual disk.
Similarly to physical storage devices, virtual disks are independent from the VMs that they are attached to, and can be moved to other VMs.
You can use multiple types of storage resources to add a virtual disk to a VM.
12.4.1. Attaching a file-based virtual disk to your virtual machine by using the command line
To provide local storage for a virtual machine, the easiest option typically is to attach a file-based virtual disk with the .qcow2 or .raw format.
To do so on the command line, you can use one of the following methods:
Create a file-based storage volume in a directory-based storage pool managed by libvirt. This requires multiple steps, but provides better integration with the hypervisor.

Note that a default directory-based storage pool is created automatically when creating the first VM on your RHEL 10 host. The name of this storage pool is based on the name of the directory in which you save the disk image. For example, by default, in the system session of libvirt, the disk image is saved in the /var/lib/libvirt/images/ directory and the storage pool is named images.

Use the qemu-img command to create a virtual disk as a file on the host file system. This is a faster method, but does not provide integration with libvirt. As a result, virtual disks created by using qemu-img are more difficult to manage after creation.
A file-based virtual disk can also be created and attached when creating a new VM on the command line. To do so, use the --disk option with the virt-install utility. For detailed instructions, see Creating virtual machines.
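As a minimal sketch, such an invocation might look as follows; the VM name, resource sizes, installation media path, and the guest_images_dir pool are placeholder values, and the pool is assumed to already exist:

# virt-install --name testguest1 --memory 2048 --vcpus 2 \
    --disk pool=guest_images_dir,size=20,format=qcow2 \
    --cdrom /path/to/installation.iso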
Procedure
Optional: If you want to create a virtual disk as a storage volume, but you do not want to use the default images storage pool or another existing storage pool on the host, create and set up a new directory-based storage pool.

Configure a directory-type storage pool. For example, to create a storage pool named guest_images_dir that uses the /guest_images directory:

# virsh pool-define-as guest_images_dir dir --target "/guest_images"
Pool guest_images_dir defined

Create a target path for the storage pool based on the configuration you previously defined.

# virsh pool-build guest_images_dir
Pool guest_images_dir built

Start the storage pool.

# virsh pool-start guest_images_dir
Pool guest_images_dir started

Optional: Set the storage pool to start on host boot.

# virsh pool-autostart guest_images_dir
Pool guest_images_dir marked as autostarted

Optional: Verify that the storage pool is in the running state. Check if the sizes reported are as expected and if autostart is configured correctly. The output should look similar to the following:

# virsh pool-info guest_images_dir
Name:           guest_images_dir
State:          running
Persistent:     yes
Autostart:      yes
Capacity:       51.26 GiB
Allocation:     119.20 MiB
Available:      51.14 GiB
Create a file-based virtual disk. To do so, use one of the following methods:
To quickly create a file-based VM disk not managed by libvirt, use the qemu-img utility. For example, the following command creates a qcow2 disk image named test-image with a size of 30 gigabytes:

# qemu-img create -f qcow2 test-image 30G
Formatting 'test-image', fmt=qcow2 cluster_size=65536 extended_l2=off compression_type=zlib size=32212254720 lazy_refcounts=off refcount_bits=16

To create a file-based VM disk managed by libvirt, define the disk as a storage volume based on an existing directory-based storage pool. For example, the following command creates a 20 GB qcow2 volume named vm-disk1 based on the guest_images_dir storage pool:

# virsh vol-create-as --pool guest_images_dir --name vm-disk1 --capacity 20GB --format qcow2
Vol vm-disk1 created
Locate the virtual disk that you created:

- For a VM disk created with qemu-img, this is typically your current directory.
- For a storage volume, examine the storage pool that the volume belongs to:

# virsh vol-list --pool guest_images_dir --details
 Name       Path                     Type   Capacity    Allocation
 --------------------------------------------------------------------------
 vm-disk1   /guest_images/vm-disk1   file   20.00 GiB   196.00 KiB

Find out which target devices are already used in the VM to which you want to attach the disk:

# virsh domblklist --details <vm-name>
 Type   Device   Target   Source
 ----------------------------------------------------------------
 file   disk     vda      /home/VirtualMachines/vm-name.qcow2
 file   cdrom    vdb      -

- Optional: Check the consistency of the disk, to avoid issues with data corruption or disk fragmentation. For instructions, see Checking the consistency of a virtual disk.
Attach the disk to a VM by using the virsh attach-disk command. Provide a target device that is not in use in the VM.

For example, the following command attaches the previously created vm-disk1 as the vdc device to the testguest1 VM:

# virsh attach-disk testguest1 /guest_images/vm-disk1 vdc --persistent
Verification
Inspect the XML configuration of the VM to which you attached the disk to see if the configuration is correct. For example, the <disk> section of the output should look similar to the following:

# virsh dumpxml testguest1
...
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/guest_images/vm-disk1'/>
  <target dev='vdc' bus='virtio'/>
</disk>
...

- In the guest operating system of the VM, confirm that the disk image has become available as an unformatted and unallocated disk.
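For example, in a Linux guest, a newly attached 20 GB disk exposed as vdc would appear in the lsblk output without partitions or mount points; the device name depends on the target you chose:

# lsblk /dev/vdc
NAME   MAJ:MIN   RM   SIZE   RO   TYPE   MOUNTPOINTS
vdc    252:32    0    20G    0    disk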
12.4.2. Attaching a file-based virtual disk to your virtual machine by using the web console
To provide local storage for a virtual machine, the easiest option typically is to attach a file-based virtual disk with the .qcow2 or .raw format.
To do so, create a file-based storage volume in a directory-based storage pool managed by libvirt. A default directory-based storage volume is created automatically when creating the first VM on your RHEL 10 host. The name of this storage pool is based on the name of the directory in which you save the disk image. For example, by default, in the system session of libvirt, the disk image is saved in the /var/lib/libvirt/images/ directory and the storage pool is named images.
A file-based virtual disk can also be created and attached when creating a new VM in the web console. To do so, use the Storage option in the Create virtual machine dialog. For detailed instructions, see Creating virtual machines by using the web console.
Prerequisites
You have installed the RHEL 10 web console.
For instructions, see Installing and enabling the web console.
- The web console VM plug-in is installed on your system.
Procedure
- Log in to the RHEL 10 web console.
Optional: If you do not want to use the default images storage pool to create a new virtual disk, create a new storage pool.

- Click Storage Pools at the top of the Virtual Machines interface, then click Create storage pool.
- In the Create Storage Pool dialog, enter a name for the storage pool.
- In the Type drop-down menu, select Filesystem directory.
Enter the following information:
- Target path - The location of the storage pool.
- Startup - Whether or not the storage pool starts when the host boots.
Click Create.
The storage pool is created, the Create Storage Pool dialog closes, and the new storage pool appears in the list of storage pools.
Create a new storage volume based on an existing storage pool.
- In the Storage Pools window, click the storage pool from which you want to create a storage volume, then click Storage Volumes → Create volume.

Enter the following information in the Create Storage Volume dialog:
- Name - The name of the storage volume.
- Size - The size of the storage volume in MiB or GiB.
- Format - The format of the storage volume. The supported types are qcow2 and raw.
- Click Create.
- Optional: Check the consistency of the disk, to avoid issues with data corruption or disk fragmentation. For instructions, see Checking the consistency of a virtual disk.
Add the created storage volume as a disk to a VM.
In the Virtual Machines interface, click the VM for which you want to create and attach the new disk.
A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.
- Scroll to Disks.
- In the Disks section, click Add disk.
- In the Add disks dialog, select Use existing.
- Select the storage pool and storage volume that you want to use for the disk.
Select whether or not the disk will be persistent.
Note: Transient disks can only be added to VMs that are running.
- Optional: Adjust the cache type, bus type, and disk identifier of the storage volume.
- Click Add.
Verification
- In the guest operating system of the VM, confirm that the disk image has become available as an unformatted and unallocated disk.
12.4.3. Attaching disk-based storage to your virtual machine by using the command line
To provide local storage for a virtual machine (VM), you can use a disk-based disk image. This type of disk image is based on a disk partition on your host and uses the .qcow2 or .raw format.
To attach disk-based storage to a VM by using the command line, use one of the following methods:
- When creating a new VM, create and attach a new disk as a part of the virt-install command, by using the --disk option. For detailed instructions, see Creating virtual machines.
- For an existing VM, create a disk-based storage volume and attach it to the VM. For instructions, see the following procedure.
Prerequisites
Ensure your hypervisor supports disk-based storage pools:
# virsh pool-capabilities | grep "'disk' supported='yes'"

If the command displays any output, disk-based pools are supported.
Prepare a device on which you will base the storage pool. For this purpose, prefer partitions (for example, /dev/sdb1) or LVM volumes. If you provide a VM with write access to an entire disk or block device (for example, /dev/sdb), the VM will likely partition it or create its own LVM groups on it. This can result in system errors on the host.

However, if you require using an entire block device for the storage pool, Red Hat recommends protecting any important partitions on the device from GRUB's os-prober function. To do so, edit the /etc/default/grub file and apply one of the following configurations:

Disable os-prober.

GRUB_DISABLE_OS_PROBER=true

Prevent os-prober from discovering the partition that you want to use. For example:

GRUB_OS_PROBER_SKIP_LIST="5ef6313a-257c-4d43@/dev/sdb1"
- Back up any data on the selected storage device before creating a storage pool. Depending on the version of libvirt being used, dedicating a disk to a storage pool may reformat and erase all data currently stored on the disk device.
Procedure
Create and set up a new disk-based storage pool, if you do not already have one.
Define and create a disk-type storage pool. The following example creates a storage pool named guest_images_disk that uses the /dev/sdb device and is mounted on the /dev directory.

# virsh pool-define-as guest_images_disk disk --source-format=gpt --source-dev=/dev/sdb --target /dev
Pool guest_images_disk defined

Create a storage pool target path for a pre-formatted file-system storage pool, initialize the storage source device, and define the format of the data.

# virsh pool-build guest_images_disk
Pool guest_images_disk built

Optional: Verify that the pool was created.

# virsh pool-list --all
 Name                State      Autostart
 -------------------------------------------
 default             active     yes
 guest_images_disk   inactive   no

Start the storage pool.

# virsh pool-start guest_images_disk
Pool guest_images_disk started

Note: The virsh pool-start command is only necessary for persistent storage pools. Transient storage pools are automatically started when they are created.

Optional: Turn on autostart.

By default, a storage pool defined with virsh is not set to automatically start each time virtualization services start. Use the virsh pool-autostart command to configure the storage pool to autostart.

# virsh pool-autostart guest_images_disk
Pool guest_images_disk marked as autostarted
Create a disk-based storage volume. For example, the following command creates a 20 GB volume named sdb1 based on the guest_images_disk storage pool:

# virsh vol-create-as --pool guest_images_disk --name sdb1 --capacity 20GB --format extended
Vol sdb1 created

Attach the storage volume as a virtual disk to a VM.
Locate the storage volume that you created. To do so, examine the storage pool that the volume belongs to:
# virsh vol-list --pool guest_images_disk --details
 Name   Path        Type    Capacity    Allocation
 ---------------------------------------------------------------------
 sdb1   /dev/sdb1   block   20.00 GiB   20.00 GiB

Find out which target devices are already used in the VM to which you want to attach the disk:

# virsh domblklist --details <vm-name>
 Type   Device   Target   Source
 ----------------------------------------------------------------
 file   disk     vda      /home/VirtualMachines/vm-name.qcow2
 file   cdrom    vdb      -

- Optional: Check the consistency of the disk, to avoid issues with data corruption or disk fragmentation. For instructions, see Checking the consistency of a virtual disk.
Attach the disk to a VM by using the virsh attach-disk command. Provide a target device that is not in use in the VM.

For example, the following command attaches the previously created sdb1 volume as the vdc device to the testguest1 VM:

# virsh attach-disk testguest1 /dev/sdb1 vdc --persistent
Verification
Inspect the XML configuration of the VM to which you attached the disk to see if the configuration is correct. For example, the <disk> section of the output should look similar to the following:

# virsh dumpxml testguest1
...
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/sdb1'/>
  <target dev='vdc' bus='virtio'/>
</disk>
...

- In the guest operating system of the VM, confirm that the disk image has become available as an unformatted and unallocated disk.
12.4.4. Attaching disk-based storage to your virtual machine by using the web console
To provide local storage for a virtual machine (VM), you can use a disk-based disk image, which is based on a disk partition on your host.
To attach disk-based storage to a VM by using the web console, use one of the following methods:
- When creating a new VM, create and attach a new disk by using the Storage option in the Create virtual machine dialog. For detailed instructions, see Creating virtual machines by using the web console.
- For an existing VM, create a disk-based storage volume and attach it to the VM. For instructions, see the following procedure.
Prerequisites
You have installed the RHEL 10 web console.
For instructions, see Installing and enabling the web console.
- The web console VM plug-in is installed on your system.
Procedure
- Log in to the RHEL 10 web console.
Create and set up a new disk-based storage pool, if you do not already have one.
- Click Storage Pools at the top of the Virtual Machines interface, then click Create storage pool.
- In the Create Storage Pool dialog, enter a name for the storage pool.
In the Type drop-down menu, select Physical disk device.
Note: If you do not see the Physical disk device option in the drop-down menu, then your hypervisor does not support disk-based storage pools.
Enter the following information:
- Target Path - The path specifying the target device. This will be the path used for the storage pool.
- Source path - The path specifying the storage device. For example, /dev/sdb.
- Format - The type of the partition table.
- Startup - Whether or not the storage pool starts when the host boots.
Click Create.
The storage pool is created, the Create Storage Pool dialog closes, and the new storage pool appears in the list of storage pools.
Create a new storage volume based on an existing storage pool.
- In the Storage Pools window, click the storage pool from which you want to create a storage volume, then click Storage Volumes → Create volume.

Enter the following information in the Create Storage Volume dialog:
- Name - The name of the storage volume.
- Size - The size of the storage volume in MiB or GiB.
- Format - The format of the storage volume.
- Click Create.
- Optional: Check the consistency of the disk, to avoid issues with data corruption or disk fragmentation. For instructions, see Checking the consistency of a virtual disk.
Add the created storage volume as a disk to a VM.
In the Virtual Machines interface, click the VM for which you want to create and attach the new disk.
A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.
- Scroll to Disks.
- In the Disks section, click Add disk.
- In the Add disks dialog, select Use existing.
- Select the storage pool and storage volume that you want to use for the disk.
Select whether or not the disk will be persistent.
Note: Transient disks can only be added to VMs that are running.
- Optional: Adjust the cache type, bus type, and disk identifier of the storage volume.
- Click Add.
Verification
- In the guest operating system of the VM, confirm that the disk image has become available as an unformatted and unallocated disk.
12.4.5. Attaching LVM-based storage to your virtual machine by using the command line
To provide local storage for a virtual machine (VM), you can use an LVM-based storage volume. This type of disk image is based on an LVM volume group, and uses the .qcow2 or .raw format.
To attach LVM-based storage to a VM by using the command line, use one of the following methods:
- When creating a new VM, create and attach a new disk as a part of the virt-install command, by using the --disk option. For detailed instructions, see Creating virtual machines.
- For an existing VM, create an LVM-based storage volume and attach it to the VM. For instructions, see the following procedure.
Note that LVM-based storage volumes have the following limitations:
- LVM-based storage pools do not provide the full flexibility of LVM.
- LVM-based storage pools are volume groups. You can create volume groups by using the virsh utility, but this way you can only have one device in the created volume group. To create a volume group with multiple devices, use the LVM utility instead; see How to create a volume group in Linux with LVM, and the sketch after this list.
- LVM-based storage pools require a full disk partition. If you activate a new partition or device by using virsh commands, the partition will be formatted and all data will be erased. If you are using a host's existing volume group, as in the following procedure, nothing will be erased.
Prerequisites
Ensure your hypervisor supports LVM-based storage pools:

# virsh pool-capabilities | grep "'logical' supported='yes'"

If the command displays any output, LVM-based pools are supported.
- Make sure an LVM volume group exists on your host. For instructions on creating one, see Creating an LVM volume group.
- Back up any data on the selected storage device before creating a storage pool. Dedicating a disk partition to a storage pool will reformat and erase all data currently stored on the disk device.
Procedure
Create and set up a new LVM-based storage pool, if you do not already have one.
Define an LVM-type storage pool. For example, the following command defines a storage pool named guest_images_lvm that uses the lvm_vg volume group and is mounted on the /dev/lvm_vg directory:

# virsh pool-define-as guest_images_lvm logical --source-name lvm_vg --target /dev/lvm_vg
Pool guest_images_lvm defined

Create a storage pool based on the configuration you previously defined.

# virsh pool-build guest_images_lvm
Pool guest_images_lvm built

Optional: Verify that the pool was created.

# virsh pool-list --all
 Name               State      Autostart
 ------------------------------------------
 default            active     yes
 guest_images_lvm   inactive   no

Start the storage pool.

# virsh pool-start guest_images_lvm
Pool guest_images_lvm started

Note: The virsh pool-start command is only necessary for persistent storage pools. Transient storage pools are automatically started when they are created.

Optional: Turn on autostart.

By default, a storage pool defined with virsh is not set to automatically start each time virtualization services start. Use the virsh pool-autostart command to configure the storage pool to autostart.

# virsh pool-autostart guest_images_lvm
Pool guest_images_lvm marked as autostarted
Create an LVM-based storage volume. For example, the following command creates a 20 GB qcow2 volume named vm-disk1, based on the guest_images_lvm storage pool:

# virsh vol-create-as --pool guest_images_lvm --name vm-disk1 --capacity 20GB --format qcow2
Vol vm-disk1 created

Attach the storage volume as a virtual disk to a VM.
Locate the storage volume that you created. To do so, examine the storage pool that the volume belongs to:
# virsh vol-list --pool guest_images_lvm --details
 Name       Path                             Type    Capacity    Allocation
 -----------------------------------------------------------------------------
 vm-disk1   /dev/guest_images_lvm/vm-disk1   block   20.00 GiB   196.00 KiB

Find out which target devices are already used in the VM to which you want to attach the disk:
# virsh domblklist --details <vm-name>
 Type   Device   Target   Source
 ----------------------------------------------------------------
 file   disk     vda      /home/VirtualMachines/vm-name.qcow2
 file   cdrom    vdb      -

- Optional: Check the consistency of the disk, to avoid issues with data corruption or disk fragmentation. For instructions, see Checking the consistency of a virtual disk.
Attach the disk to a VM by using the virsh attach-disk command. Provide a target device that is not in use in the VM.

For example, the following command attaches the previously created vm-disk1 as the vdc device to the testguest1 VM:

# virsh attach-disk testguest1 /dev/guest_images_lvm/vm-disk1 vdc --persistent
Verification
Inspect the XML configuration of the VM to which you attached the disk to see if the configuration is correct.
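A correctly attached volume appears as a <disk> entry similar to the following sketch (a minimal illustration; the driver attributes and exact layout depend on how the disk was attached):

# virsh dumpxml testguest1
...
<disk type='block' device='disk'>
  <source dev='/dev/guest_images_lvm/vm-disk1'/>
  <target dev='vdc' bus='virtio'/>
</disk>
...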
- In the guest operating system of the VM, confirm that the disk image has become available as an unformatted and unallocated disk.
12.4.6. Attaching LVM-based storage to your virtual machine by using the web console
To provide local storage for a virtual machine (VM), you can use an LVM-based storage volume. This type of disk image is based on an LVM volume group, and uses the .qcow2 or .raw format.
To attach LVM-based storage to a VM by using the web console, use one of the following methods:
- When creating a new VM, create and attach a new disk by using the Storage option in the Create virtual machine dialog. For detailed instructions, see Creating virtual machines by using the web console.
- For an existing VM, create an LVM-based storage volume and attach it to the VM. For instructions, see the following procedure.
Note that LVM-based storage volumes have the following limitations:
- LVM-based storage pools do not provide the full flexibility of LVM.
- LVM-based storage pools are volume groups. You can create volume groups by using the virsh utility, but this way you can only have one device in the created volume group. To create a volume group with multiple devices, use the LVM utility instead; see How to create a volume group in Linux with LVM.
- LVM-based storage pools require a full disk partition. If you activate a new partition or device by using virsh commands, the partition will be formatted and all data will be erased. If you are using a host's existing volume group, as in the following procedure, nothing will be erased.
Prerequisites
You have installed the RHEL 10 web console.
For instructions, see Installing and enabling the web console.
- The web console VM plug-in is installed on your system.
- An LVM volume group exists on your host. For instructions on creating one, see Creating an LVM volume group.
Procedure
- Log in to the RHEL 10 web console.
Create and set up a new LVM-based storage pool, if you do not already have one.
- Click Storage Pools at the top of the Virtual Machines interface. → Create storage pool.
- In the Create Storage Pool dialog, enter a name for the storage pool.
In the Type drop-down menu, select LVM volume group.
Note: If you do not see the LVM volume group option in the drop-down menu, then your hypervisor does not support LVM-based storage pools.
Enter the following information:
- Source volume group - The name of the LVM volume group that you wish to use.
- Startup - Whether or not the storage pool starts when the host boots.
Click Create.
The storage pool is created, the Create Storage Pool dialog closes, and the new storage pool appears in the list of storage pools.
Create a new storage volume based on an existing storage pool.
- In the Storage Pools window, click the storage pool from which you want to create a storage volume. → Storage Volumes → Create volume.

Enter the following information in the Create Storage Volume dialog:
- Name - The name of the storage volume.
- Size - The size of the storage volume in MiB or GiB.
- Format - The format of the storage volume. The supported types are qcow2 and raw.
- Click Create.
- Optional: Check the consistency of the disk, to avoid issues with data corruption or disk fragmentation. For instructions, see Checking the consistency of a virtual disk.
Add the created storage volume as a disk to a VM.
In the Virtual Machines interface, click the VM for which you want to create and attach the new disk.
A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.
- Scroll to Disks.
- In the Disks section, click Add disk.
- In the Add disks dialog, select Use existing.
- Select the storage pool and storage volume that you want to use for the disk.
Select whether or not the disk will be persistent.
Note: Transient disks can only be added to VMs that are running.
- Optional: Click Show additional options and adjust the cache type, bus type, and disk identifier of the storage volume.
- Click Add.
Verification
- In the guest operating system of the VM, confirm that the disk image has become available as an unformatted and unallocated disk.
12.4.7. Attaching NFS-based storage to your virtual machine by using the command line
To provide network storage for a virtual machine (VM), you can use a storage volume based on a Network File System (NFS) server.
To attach NFS-based storage to a VM by using the command line, use one of the following methods:
- When creating a new VM, create and attach a new disk by using the Storage option in the Create virtual machine dialog. For detailed instructions, see Creating virtual machines by using the web console.
- For an existing VM, create an NFS-based storage volume and attach it to the VM. For instructions, see the following procedure.
Prerequisites
Ensure your hypervisor supports NFS-based storage pools:
# virsh pool-capabilities | grep "<value>nfs</value>"

If the command displays any output, NFS-based pools are supported.
- You must have an available NFS share that you can use. For details, see Mounting NFS shares.
Procedure
Create and set up a new NFS-based storage pool, if you do not already have one.
Define and create an NFS-type storage pool. For example, to create a storage pool named guest_images_netfs that uses an NFS server with IP 111.222.111.222 mounted on the server directory /home/net_mount by using the target directory /var/lib/libvirt/images/nfspool:
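A virsh pool-define-as command that matches these values might look as follows (shown as a sketch; verify the options against your environment):

# virsh pool-define-as guest_images_netfs netfs --source-host 111.222.111.222 --source-path /home/net_mount --target /var/lib/libvirt/images/nfspool
Pool guest_images_netfs defined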
Create a storage pool based on the configuration you previously defined.

# virsh pool-build guest_images_netfs
Pool guest_images_netfs built

Optional: Verify that the pool was created.
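For example, the output of virsh pool-list might look similar to the following (the pool names and states are illustrative and depend on your host):

# virsh pool-list --all
 Name                 State      Autostart
 ---------------------------------------------
 default              active     yes
 guest_images_netfs   inactive   no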
Start the storage pool.
# virsh pool-start guest_images_netfs
Pool guest_images_netfs started

Optional: Turn on autostart.
By default, a storage pool defined with virsh is not set to automatically start each time virtualization services start. Use the virsh pool-autostart command to configure the storage pool to autostart.

# virsh pool-autostart guest_images_netfs
Pool guest_images_netfs marked as autostarted
Create an NFS-based storage volume. For example, the following command creates a 20 GB qcow2 volume named vm-disk1, based on the guest_images_netfs storage pool:

# virsh vol-create-as --pool guest_images_netfs --name vm-disk1 --capacity 20GB --format qcow2
Vol vm-disk1 created

Attach the storage volume as a virtual disk to a VM.
Locate the storage volume that you created. To do so, examine the storage pool that the volume belongs to:
# virsh vol-list --pool guest_images_netfs --details
 Name       Path                                       Type   Capacity    Allocation
 -------------------------------------------------------------------------------------
 vm-disk1   /var/lib/libvirt/images/nfspool/vm-disk1   file   20.00 GiB   196.00 KiB

Find out which target devices are already used in the VM to which you want to attach the disk:
# virsh domblklist --details <vm-name>
 Type   Device   Target   Source
 ----------------------------------------------------------------
 file   disk     vda      /home/VirtualMachines/vm-name.qcow2
 file   cdrom    vdb      -

- Optional: Check the consistency of the disk, to avoid issues with data corruption or disk fragmentation. For instructions, see Checking the consistency of a virtual disk.
Attach the disk to a VM by using the virsh attach-disk command. Provide a target device that is not in use in the VM.

For example, the following command attaches the previously created vm-disk1 as the vdc device to the testguest1 VM:

# virsh attach-disk testguest1 /var/lib/libvirt/images/nfspool/vm-disk1 vdc --persistent
Verification
Inspect the XML configuration of the VM to which you attached the disk to see if the configuration is correct.
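A correctly attached volume appears as a <disk> entry similar to the following sketch (a minimal illustration; the driver attributes and exact layout depend on how the disk was attached):

# virsh dumpxml testguest1
...
<disk type='file' device='disk'>
  <source file='/var/lib/libvirt/images/nfspool/vm-disk1'/>
  <target dev='vdc' bus='virtio'/>
</disk>
...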
- In the guest operating system of the VM, confirm that the disk image has become available as an unformatted and unallocated disk.
12.4.8. Attaching NFS-based storage to your virtual machine by using the web console
To provide network storage for a virtual machine (VM), you can use a storage volume based on a Network File System (NFS) server.
To attach NFS-based storage to a VM by using the web console, use one of the following methods:
- When creating a new VM, create and attach a new disk by using the Storage option in the Create virtual machine dialog. For detailed instructions, see Creating virtual machines by using the web console.
- For an existing VM, create an NFS-based storage volume and attach it to the VM. For instructions, see the following procedure.
Prerequisites
You have installed the RHEL 10 web console.
For instructions, see Installing and enabling the web console.
- The web console VM plug-in is installed on your system.
Procedure
- Log in to the RHEL 10 web console.
Create and set up a new NFS-based storage pool, if you do not already have one.
- Click Storage Pools at the top of the Virtual Machines interface. → Create storage pool.
- In the Create Storage Pool dialog, enter a name for the storage pool.
In the Type drop-down menu, select Network file system.
Note: If you do not see the Network file system option in the drop-down menu, then your hypervisor does not support NFS-based storage pools.
Enter the following information:
- Target path - The path specifying the target. This will be the path used for the storage pool.
- Host - The hostname of the network server where the mount point is located. This can be a hostname or an IP address.
- Source path - The directory used on the network server.
- Startup - Whether or not the storage pool starts when the host boots.
Click Create.
The storage pool is created, the Create Storage Pool dialog closes, and the new storage pool appears in the list of storage pools.
Create a new storage volume based on an existing storage pool.
- In the Storage Pools window, click the storage pool from which you want to create a storage volume. → Storage Volumes → Create volume.

Enter the following information in the Create Storage Volume dialog:
- Name - The name of the storage volume.
- Size - The size of the storage volume in MiB or GiB.
- Format - The format of the storage volume. The supported types are qcow2 and raw.
- Click Create.
- Optional: Check the consistency of the disk, to avoid issues with data corruption or disk fragmentation. For instructions, see Checking the consistency of a virtual disk.
Add the created storage volume as a disk to a VM.
In the Virtual Machines interface, click the VM for which you want to create and attach the new disk.
A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.
- Scroll to Disks.
- In the Disks section, click Add disk.
- In the Add disks dialog, select Use existing.
- Select the storage pool and storage volume that you want to use for the disk.
Select whether or not the disk will be persistent.
Note: Transient disks can only be added to VMs that are running.
- Optional: Click Show additional options and adjust the cache type, bus type, and disk identifier of the storage volume.
- Click Add.
Verification
- In the guest operating system of the VM, confirm that the disk image has become available as an unformatted and unallocated disk.
12.5. Checking the consistency of a virtual disk
Before attaching a disk image to a virtual machine (VM), ensure that the disk image does not have problems, such as corruption or high fragmentation. To do so, you can use the qemu-img check command.
If needed, you can also use this command to attempt repairing the disk image.
Prerequisites
- Any virtual machines (VMs) that use the disk image must be shut down.
Procedure
Use the qemu-img check command on the image you want to test. For example:

# qemu-img check <test-name.qcow2>
No errors were found on the image.
327434/327680 = 99.92% allocated, 0.00% fragmented, 0.00% compressed clusters
Image end offset: 21478375424

If the check finds problems on the disk image, the output of the command looks similar to the following:
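For example, a check of a damaged image might report messages similar to these (the counts are illustrative placeholders):

# qemu-img check <test-name.qcow2>
167 errors were found on the image.
Data may be corrupted, or further writes to the image may corrupt it.

453368 leaked clusters were found on the image.
This means waste of disk space, but no harm to data.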
To attempt repairing the detected issues, use the qemu-img check command with the -r all option. Note, however, that this might fix only some of the problems.

Warning: Repairing the disk image can cause data corruption or other issues. Back up the disk image before attempting the repair.
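A repair attempt might produce output similar to the following (a sketch; the counts are illustrative placeholders):

# qemu-img check -r all <test-name.qcow2>
[...]
The following inconsistencies were found and repaired:

    0 leaked clusters
    3 corruptions

Double checking the fixed image now...
No errors were found on the image.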
This output indicates the number of problems found on the disk image after the repair.
- If further disk image repairs are required, you can use various libguestfs tools in the guestfish shell.
12.6. Resizing a virtual disk
If an existing disk image requires additional space, you can use the qemu-img resize utility to change the size of the image to fit your use case.
Prerequisites
- You have created a backup of the disk image.
Any virtual machines (VMs) that use the disk image must be shut down.

Warning: Resizing the disk image of a running VM can cause data corruption or other issues.
- The hard disk of the host has sufficient free space for the intended disk image size.
- Optional: You have ensured that the disk image does not have data corruption or similar problems. For instructions, see Checking the consistency of a virtual disk.
Procedure
Determine the location of the disk image file for the VM you want to resize. For example:
# virsh domblklist <vm-name>
 Target   Source
 ----------------------------------------------------------
 vda      /home/username/disk-images/example-image.qcow2

Optional: Back up the current disk image.
# cp <example-image.qcow2> <example-image-backup.qcow2>

Use the qemu-img resize utility to resize the image.

For example, to increase the <example-image.qcow2> size by 10 gigabytes:
# qemu-img resize <example-image.qcow2> +10G

- Resize the file system, partitions, or physical volumes inside the disk image to use the additional space. To do so in a RHEL guest operating system, use the instructions in Managing storage devices and Managing file systems.
Verification
Display information about the resized image and see if it has the intended size:
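For example, the output of qemu-img info might look similar to the following (the sizes are illustrative):

# qemu-img info <example-image.qcow2>
image: example-image.qcow2
file format: qcow2
virtual size: 30 GiB (32212254720 bytes)
disk size: 3.99 GiB
cluster_size: 65536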
- Check the resized disk image for potential errors. For instructions, see Checking the consistency of a virtual disk.
12.7. Converting between virtual disk formats
You can convert the virtual disk image to a different format by using the qemu-img convert command. For example, converting between virtual disk image formats might be necessary if you want to attach the disk image to a virtual machine (VM) running on a different hypervisor.
Prerequisites
- Any virtual machines (VMs) that use the disk image must be shut down.
- The source disk image format must be supported for conversion by QEMU. For a detailed list, see Supported disk image formats.
Procedure
Use the qemu-img convert command to convert an existing virtual disk image to a different format. For example, to convert a raw disk image to a QCOW2 disk image:

# qemu-img convert -f raw <original-image.img> -O qcow2 <converted-image.qcow2>
Verification
Display information about the converted image and see if it has the intended format and size.
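For example, the output of qemu-img info might look similar to the following (the sizes are illustrative):

# qemu-img info <converted-image.qcow2>
image: converted-image.qcow2
file format: qcow2
virtual size: 20 GiB (21474836480 bytes)
disk size: 2.34 GiB
cluster_size: 65536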
- Check the disk image for potential errors. For instructions, see Checking the consistency of a virtual disk.
12.8. Removing virtual machine storage by using the command line
If you no longer require a virtual disk to be attached to a virtual machine (VM), or if you want to free up host storage resources, you can remove storage from a VM.
By using the command line, you can do any of the following:
- Detach the virtual disk from the VM.
- Delete the virtual disk and its content.
- Deactivate the storage pool related to the virtual disk.
- Delete the storage pool related to the virtual disk.
Procedure
To detach a virtual disk from a VM, use the virsh detach-disk command.

Optional: List all storage devices attached to the VM:
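For example, the listing might look similar to the following (the targets and sources are illustrative and depend on your VM):

# virsh domblklist testguest
 Target   Source
 ---------------------------------------------------------------
 vda      /home/VirtualMachines/testguest.qcow2
 vdc      /dev/guest_images_lvm/vm-disk1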
Use the target parameter to detach the disk. For example, to detach the disk connected as vdc to the testguest VM, use the following command:

# virsh detach-disk testguest vdc --persistent
To delete the disk, do one of the following:
If the disk is managed as a storage volume, use the virsh vol-delete command. For example, to delete the volume test-disk2 associated with the storage pool RHEL-storage-pool:

# virsh vol-delete --pool RHEL-storage-pool test-disk2

If the disk is purely file-based, remove the file.
# rm /home/VirtualMachines/test-disk2.qcow2
To deactivate a storage pool, use the virsh pool-destroy command.

When you deactivate a storage pool, no new volumes can be created in that pool. However, any VMs that have volumes in that pool will continue to run. This is useful, for example, if you want to limit the number of volumes that can be created in a pool to increase system performance.

# virsh pool-destroy RHEL-storage-pool
Pool RHEL-storage-pool destroyed

To completely remove a storage pool, delete its definition by using the virsh pool-undefine command.

# virsh pool-undefine RHEL-storage-pool
Pool RHEL-storage-pool has been undefined
Verification
To confirm that your changes to VM storage have been successful, inspect the current state of virtual storage on your host.
For instructions, see Viewing virtual machine storage information by using the command line.
12.9. Removing virtual machine storage by using the web console
If you no longer require a virtual disk to be attached to a virtual machine (VM), or if you want to free up host storage resources, you can remove storage from a VM.
By using the web console, you can do any of the following:
- Detach the virtual disk from the VM.
- Delete the virtual disk and its content.
- Deactivate the storage pool related to the virtual disk.
- Delete the storage pool related to the virtual disk.
Prerequisites
You have installed the RHEL 10 web console.
For instructions, see Installing and enabling the web console.
- The web console VM plug-in is installed on your system.
Procedure
To detach a virtual disk from a VM, use the following steps:
In the Virtual Machines interface, click the VM from which you want to detach a disk.
A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.
Scroll to Disks.
The Disks section displays information about the disks assigned to the VM, as well as options to Add or Edit disks.
- On the right side of the row for the disk that you want to detach, click the Menu button.
In the drop-down menu that appears, click the Remove button.
A Remove disk from VM? confirmation dialog box appears.

In the confirmation dialog box, click Remove. Optionally, if you also want to remove the disk image, click Remove and delete file.
The virtual disk is detached from the VM.
To delete the disk, do one of the following:
If the disk is managed as a storage volume, click Storage Pools at the top of the Virtual Machines tab. → Click the name of the storage pool that contains the disk. → Click Storage Volumes. → Select the storage volume you want to remove. → Click Delete.
- If the disk is a file that is not managed as a storage volume (for example, if it was created by qemu-img), you must use a graphical file manager or the command line to delete it. The RHEL web console currently does not support deleting individual files.
To deactivate a storage pool, use the following steps.
When you deactivate a storage pool, no new volumes can be created in that pool. However, any VMs that have volumes in that pool will continue to run. This is useful, for example, if you want to limit the number of volumes that can be created in a pool to increase system performance.
- Click Storage Pools at the top of the Virtual Machines tab. The Storage Pools window appears, showing a list of configured storage pools.
Click Deactivate on the storage pool row.
The storage pool is deactivated.
To completely remove a storage pool, use the following steps:
Click Storage Pools on the Virtual Machines tab.
The Storage Pools window appears, showing a list of configured storage pools.
Click the Menu button of the storage pool you want to delete and click Delete.
A confirmation dialog appears.
- Optional: To delete the storage volumes inside the pool, select the corresponding check boxes in the dialog.
Click Delete.
The storage pool is deleted. If you selected the checkbox in the previous step, the associated storage volumes are deleted as well.
Verification
To confirm that your changes to VM storage have been successful, inspect the current state of virtual storage on your host.
For instructions, see Viewing virtual machine storage information by using the web console.
12.10. Supported disk image formats
To run a virtual machine (VM) on RHEL, you must use a disk image with a supported format. You can also convert certain unsupported disk images to a supported format.
Supported disk image formats for VMs
You can use disk images that use the following formats to run VMs in RHEL:
- qcow2 - Provides certain additional features, such as compression.
- raw - Might provide better performance.
- luks - Disk images encrypted by using the Linux Unified Key Setup (LUKS) specification.
Supported disk image formats for conversion
- If required, you can convert your disk images between the raw and qcow2 formats by using the qemu-img convert command.
- If you require converting a vmdk disk image to a raw or qcow2 format, convert the VM that uses the disk to KVM by using the virt-v2v utility.
- To convert other disk image formats to raw or qcow2, you can use the qemu-img convert command. For a list of formats that work with this command, see the QEMU documentation.

Note that in most cases, converting the disk image format of a non-KVM virtual machine to qcow2 or raw is not sufficient for the VM to correctly run on RHEL KVM. In addition to converting the disk image, corresponding drivers must be installed and configured in the guest operating system of the VM. For supported hypervisor conversion, use the virt-v2v utility.
Chapter 13. Saving and restoring virtual machine state by using snapshots
To save the current state of a virtual machine (VM), you can create a snapshot of the VM. Afterwards, you can revert to the snapshot to return the VM to the saved state.
A VM snapshot contains the disk image of the VM. If you create a snapshot from a running VM, also known as a live snapshot, the snapshot also contains the memory state of the VM, which includes running processes and applications.
Creating snapshots can be useful, for example, for the following tasks:
- Saving a clean state of the guest operating system
- Ensuring that you have a restore point before performing a potentially destructive operation on the VM
To create a VM snapshot or revert to one, you can use the command line (CLI) or the RHEL web console.
13.1. Support limitations for virtual machine snapshots
Red Hat supports the snapshot functionality for virtual machines (VMs) on RHEL only when you use external snapshots.
Currently, you can create external snapshots on RHEL only when all of the following requirements are met:
- The VM is using file-based storage.
You create the VM snapshot only in one of the following scenarios:
- The VM is shut down.
- If the VM is running, you use the --disk-only --quiesce options or the --live --memspec options.
Most other configurations create internal snapshots, which are deprecated in RHEL 10. Internal snapshots might work for your use case, but Red Hat does not provide full testing and support for them.
Do not use internal snapshots in production environments.
To ensure that a snapshot is supported, display the XML configuration of the snapshot and check the snapshot type and storage:
# virsh snapshot-dumpxml <vm-name> <snapshot-name>
Example output of a supported snapshot:
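A minimal sketch of what a supported snapshot's XML can look like (the names and paths are placeholders; the key attribute is snapshot='external'):

<domainsnapshot>
  <name>Snapshot1</name>
  ...
  <disks>
    <disk name='vda' snapshot='external' type='file'>
      <source file='/var/lib/libvirt/images/Testguest1.Snapshot1'/>
    </disk>
  </disks>
</domainsnapshot>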
Example output of an unsupported snapshot:
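A minimal sketch of what an unsupported snapshot's XML can look like (the names are placeholders; the key attribute is snapshot='internal'):

<domainsnapshot>
  <name>Snapshot2</name>
  ...
  <disks>
    <disk name='vda' snapshot='internal'/>
  </disks>
</domainsnapshot>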
13.2. Creating virtual machine snapshots by using the command line
To save the state of a virtual machine (VM) in a snapshot, you can use the virsh snapshot-create-as command.
Prerequisites
The VM uses file-based storage. To check whether this is the case, use the following command and ensure that, for the disk device, it displays the disk type as file:

# virsh dumpxml <vm-name> | grep "disk type"
    <disk type='file' device='disk'>
    <disk type='file' device='cdrom'>

If you want to create a VM snapshot that includes the memory of a running VM, you must have sufficient disk space to store the memory of the VM.
- The minimum recommended space for saving the memory of a VM is equal to the VM’s assigned RAM. For example, saving the memory of a VM with 32 GB RAM requires up to 32 GB of disk space.
- If the VM is under heavy I/O load, significant additional disk space might be required.
- If the VM has assigned VFIO passthrough devices, additional disk space might be required.
If a snapshot is created without pausing the VM, additional disk space might be required.
Important: Red Hat recommends not saving the memory of a running VM that is under very high workload or that uses VFIO passthrough devices. Saving the memory of such VMs might fill up the host disk and degrade the system. Instead, consider creating snapshots without memory for such VMs.
In addition, note that not all VFIO devices are capable of creating a snapshot with memory. Currently, creating a snapshot with memory works correctly only in the following situations:
- The attached VFIO device is a Mellanox VF with the migration capability enabled.
- The attached VFIO device is an NVIDIA vGPU with the migration capability enabled.
Procedure
To create a VM snapshot with the required parameters, use the virsh snapshot-create-as command.

# virsh snapshot-create-as <vm-name> <snapshot-name> <optional-description> <additional-parameters>

To create a snapshot of a shut-down VM, use the --disk-only parameter. For example, the following command creates Snapshot1 from the current disk state of the shut-down Testguest1 VM:

# virsh snapshot-create-as Testguest1 Snapshot1 --disk-only
Domain snapshot Snapshot1 created.

To create a snapshot that saves the disk state of a running VM but not its memory, use the --disk-only --quiesce parameters. For example, the following command creates Snapshot2 from the current disk state of the running Testguest2 VM, with the description clean system install:

# virsh snapshot-create-as Testguest2 Snapshot2 "clean system install" --disk-only --quiesce
Domain snapshot Snapshot2 created.

To create a snapshot that pauses a running VM and saves its disk state and memory, use the --memspec parameter. For example, the following command pauses the Testguest3 VM and creates Snapshot3 from the current disk and memory state of the VM. The VM memory is saved in the /var/lib/libvirt/images/saved_memory.img file. When the snapshot is complete, the VM automatically resumes operation.

# virsh snapshot-create-as Testguest3 Snapshot3 --memspec /var/lib/libvirt/images/saved_memory.img
Domain snapshot Snapshot3 created.

Pausing the VM during the snapshot process creates downtime, but might work more reliably than creating a live snapshot of a running VM (by using the --live option), especially for VMs under a heavy load.

To create a snapshot that saves the disk state of a running VM as well as its live memory, use the --live --memspec parameters. For example, the following command creates Snapshot4 from the current disk and memory state of the running Testguest4 VM, and saves the memory state in the /var/lib/libvirt/images/saved_memory2.img file.

# virsh snapshot-create-as Testguest4 Snapshot4 --live --memspec /var/lib/libvirt/images/saved_memory2.img
Domain snapshot Snapshot4 created.

Warning: Saving the memory of a VM in a snapshot saves the state of the running processes in the guest operating system of the VM. However, when you revert to such a snapshot, the processes might fail due to a variety of factors, such as loss of network connectivity or unsynchronized system time.
Verification
List the snapshots associated with the specified VM:
# virsh snapshot-list <Testguest1>
 Name        Creation Time               State
 --------------------------------------------------------------
 Snapshot1   2024-01-30 18:34:58 +0100   shutoff

Verify that the snapshot has been created as external:
# virsh snapshot-dumpxml <Testguest1> <Snapshot1> | grep external
<disk name='vda' snapshot='external' type='file'>

If the output of this command includes snapshot='external', the snapshot is external and therefore fully supported by Red Hat.
13.3. Creating virtual machine snapshots by using the web console
To save the state of a virtual machine (VM) in a snapshot, you can use the RHEL web console.
Prerequisites
You have installed the RHEL 10 web console.
For instructions, see Installing and enabling the web console.
- The web console VM plug-in is installed on your system.
The VM uses file-based storage. To ensure that this is the case, perform the following steps:
- In the Virtual machines interface of the web console, click the VM of which you want to create a snapshot.
- In the Disks pane of the management overview, check the Source column of the listed devices. In all devices that list a source, this source must be File.
Procedure
- Log in to the RHEL 10 web console.
In the Virtual machines interface of the web console, click the VM of which you want to create a snapshot.
A management overview of the VM opens.
- In the Snapshots pane of the management overview, click the Create snapshot button.
- Enter a name for the snapshot, and optionally a description.
- Click Create.
Verification
- To ensure that creating the snapshot has succeeded, check that the snapshot is now listed in the Snapshots pane of the VM.
Verify that the snapshot has been created as external. To do so, use the following command on the command line of the host:
# virsh snapshot-dumpxml <Testguest1> <Snapshot1> | grep external
<disk name='vda' snapshot='external' type='file'>

If the output of this command includes snapshot='external', the snapshot is external and therefore supported by Red Hat.
13.4. Reverting to a virtual machine snapshot by using the command line
To return a virtual machine (VM) to the state saved in a snapshot, you can use the command-line interface (CLI).
Prerequisites
- A snapshot of the VM is available, which you have previously created in the web console or by using the command line.
- Optional: You have created a snapshot of the current state of the VM. If you revert to a previous snapshot without saving the current state, changes performed on the VM since the last snapshot will be lost.
Procedure
Use the virsh snapshot-revert utility and specify the name of the VM and the name of the snapshot to which you want to revert. For example, the following command reverts the Testguest2 VM to the clean-install snapshot.

# virsh snapshot-revert Testguest2 clean-install
Domain snapshot clean-install reverted
Verification
Display the currently active snapshot for the reverted VM:
# virsh snapshot-current Testguest2 --name
clean-install
13.5. Reverting to a virtual machine snapshot by using the web console
To return a virtual machine (VM) to the state saved in a snapshot, you can use the RHEL web console.
Prerequisites
You have installed the RHEL 10 web console.
For instructions, see Installing and enabling the web console.
- The web console VM plug-in is installed on your system.
- A snapshot of the VM is available, which you have previously created in the web console or by using the command line.
- Optional: You have created a snapshot of the current state of the VM. If you revert to a previous snapshot without saving the current state, changes performed on the VM since the last snapshot will be lost.
Procedure
- Log in to the RHEL 10 web console.
In the Virtual machines interface of the web console, click the VM whose state you want to revert.
A management overview of the VM opens.
- In the Snapshots pane of the management overview, click the Revert button next to the snapshot to which you want to revert.
- Wait until the revert operation finishes. Depending on the size of the snapshot and how different it is from the current state, this might take up to several minutes.
Verification
- In the Snapshots pane, if a green check symbol now displays on the left side of the selected snapshot, you have successfully reverted to it.
13.6. Deleting virtual machine snapshots by using the command line
When a virtual machine (VM) snapshot is no longer useful for you, you can delete it on the command line to free up the disk space that it uses.
Prerequisites
Optional: A child snapshot exists for the snapshot you want to delete.
A child snapshot is created automatically when you have an active snapshot and create a new snapshot. If you delete a snapshot that does not have any children, you will lose any changes saved in the snapshot after it was created from its parent snapshot.
To view the parent-child structure of snapshots in a VM, use the virsh snapshot-list --tree command. The following example shows Latest-snapshot as a child of Redundant-snapshot.
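Based on the verification example later in this section, the tree for this scenario would look similar to the following:

# virsh snapshot-list --tree <Testguest1>
Clean-install-snapshot
  |
  +- Redundant-snapshot
       |
       +- Latest-snapshot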
Procedure
Use the virsh snapshot-delete command to delete the snapshot. For example, the following command deletes Redundant-snapshot from the Testguest1 VM:

# virsh snapshot-delete Testguest1 Redundant-snapshot
Domain snapshot Redundant-snapshot deleted
Verification
To ensure that the snapshot that you deleted is no longer present, display the existing snapshots of the impacted VM and their parent-child structure:
# virsh snapshot-list --tree <Testguest1>
Clean-install-snapshot
  |
  +- Latest-snapshot

In this example, Redundant-snapshot has been deleted and Latest-snapshot has become the child of Clean-install-snapshot.
13.7. Deleting virtual machine snapshots by using the web console
When a virtual machine (VM) snapshot is no longer useful for you, you can delete it in the web console to free up the disk space that it uses.
Prerequisites
You have installed the RHEL 10 web console.
For instructions, see Installing and enabling the web console.
- The web console VM plug-in is installed on your system.
Optional: A child snapshot exists for the snapshot you want to delete.
A child snapshot is created automatically when you have an active snapshot and create a new snapshot. If you delete a snapshot that does not have any children, you will lose any changes saved in the snapshot after it was created from its parent snapshot.
To check that the snapshot has a child, confirm that the snapshot is listed in the Parent snapshot column of the Snapshots pane in the web console overview of the VM.
Procedure
In the Virtual machines interface of the web console, click the VM whose snapshot you want to delete.
A management overview of the VM opens.
- In the Snapshots pane of the management overview, click the Delete button next to the snapshot that you want to delete.
- Wait until the delete operation finishes. Depending on the size of the snapshot, this might take up to several minutes.
Verification
- If the snapshot no longer appears in the Snapshots pane, it has been deleted successfully.
Chapter 14. Attaching host devices to virtual machines
You can expand the functionality of a virtual machine (VM) by attaching a host device to the VM. The attached device is exposed to the VM through a virtual device, which is a software abstraction of the hardware device.
14.1. How virtual devices work
To provide virtual machines (VMs) with various capabilities, VMs use software abstractions of hardware devices.
Just like physical machines, VMs require specialized devices to provide functions to the system, such as processing power, memory, storage, networking, or graphics. Physical systems usually use hardware devices for these purposes. However, because VMs work as software processes, they need to use software abstractions of such devices instead, referred to as virtual devices.
The basics of virtual devices
Virtual devices attached to a VM can be configured when creating the VM, and can also be managed on an existing VM. Generally, virtual devices can be attached or detached from a VM only when the VM is shut off, but some can be added or removed when the VM is running. This feature is referred to as device hot plug and hot unplug.
When creating a new VM, libvirt automatically creates and configures a default set of essential virtual devices, unless specified otherwise by the user. These are based on the host system architecture and machine type, and usually include:
- the CPU
- memory
- a keyboard
- a network interface controller (NIC)
- various device controllers
- a video card
- a sound card
To manage virtual devices after the VM is created, use the command line (CLI). However, to manage virtual storage devices and network interfaces, you can also use the RHEL 10 web console.
Performance or flexibility
For some types of devices, RHEL 10 supports multiple implementations, often with a trade-off between performance and flexibility.
For example, the physical storage used for virtual disks can be represented by files in various formats, such as qcow2 or raw, and presented to the VM by using a variety of controllers:
- an emulated controller
-
virtio-scsi -
virtio-blk
An emulated controller is slower than a virtio controller, because virtio devices are designed specifically for virtualization purposes. However, emulated controllers make it possible to run operating systems that have no drivers for virtio devices. Similarly, virtio-scsi offers a more complete support for SCSI commands, and makes it possible to attach a larger number of disks to the VM. Finally, virtio-blk provides better performance than both virtio-scsi and emulated controllers, but a more limited range of use cases. For example, attaching a physical disk as a LUN device to a VM is not possible when using virtio-blk.
For more information about types of virtual devices, see Types of virtual devices.
14.2. Types of virtual devices
To choose the appropriate device type for your virtual machines (VMs), consider your requirements for performance, compatibility, and functionality.
Virtualization in RHEL 10 can present several distinct types of virtual devices that you can attach to VMs:
- Emulated devices
Emulated devices are software implementations of widely used physical devices. Drivers designed for physical devices are also compatible with emulated devices. Therefore, emulated devices can be used very flexibly.
However, because they need to faithfully emulate a particular type of hardware, emulated devices might suffer a significant performance loss compared with the corresponding physical devices or more optimized virtual devices.
The following types of emulated devices are supported:
- Virtual CPUs (vCPUs), with a large choice of CPU models available. The performance impact of emulation depends significantly on the differences between the host CPU and the emulated vCPU.
- Emulated system components, such as PCI bus controllers.
- Emulated storage controllers, such as SATA, SCSI or even IDE.
- Emulated sound devices, such as ICH9, ICH6 or AC97.
- Emulated graphics cards, such as VGA cards.
- Emulated network devices, such as rtl8139.
- Paravirtualized devices
Paravirtualization provides a fast and efficient method for exposing virtual devices to VMs. Paravirtualized devices expose interfaces that are designed specifically for use in VMs, and thus significantly increase device performance. RHEL 10 provides paravirtualized devices to VMs by using the virtio API as a layer between the hypervisor and the VM. The drawback of this approach is that it requires a specific device driver in the guest operating system.
It is recommended to use paravirtualized devices instead of emulated devices for VMs whenever possible, notably if they are running I/O intensive applications. Paravirtualized devices decrease I/O latency and increase I/O throughput, in some cases bringing them very close to bare metal performance. Other paravirtualized devices also add functionality to VMs that is not otherwise available.
The following types of paravirtualized devices are supported:
- The paravirtualized network device (virtio-net).
- Paravirtualized storage controllers:
  - virtio-blk - provides block device emulation.
  - virtio-scsi - provides more complete SCSI emulation.
- The paravirtualized clock.
- The paravirtualized serial device (virtio-serial).
- The balloon device (virtio-balloon), used to dynamically distribute memory between a VM and its host.
- The paravirtualized random number generator (virtio-rng).
- Physically shared devices
Certain hardware platforms enable VMs to directly access various hardware devices and components. This process is known as device assignment or passthrough.
When attached in this way, some aspects of the physical device are directly available to the VM as they would be to a physical machine. This provides superior performance for the device when used in the VM. However, devices physically attached to a VM become unavailable to the host, and also cannot be migrated.
Nevertheless, some devices can be shared across multiple VMs. For example, in certain cases a single physical device can provide multiple mediated devices, which can then be assigned to distinct VMs.
The following types of passthrough devices are supported:
- USB, PCI, and SCSI passthrough - expose common industry standard buses directly to VMs to make their specific features available to guest software.
- Single-root I/O virtualization (SR-IOV) - a specification that enables hardware-enforced isolation of PCI Express resources. This makes it safe and efficient to partition a single physical PCI resource into virtual PCI functions. It is commonly used for network interface cards (NICs).
- N_Port ID virtualization (NPIV) - a Fibre Channel technology to share a single physical host bus adapter (HBA) with multiple virtual ports.
- GPUs and vGPUs - accelerators for specific kinds of graphic or compute workloads. Some GPUs can be attached directly to a VM, while certain types also offer the ability to create virtual GPUs (vGPUs) that share the underlying physical hardware.
Some devices of these types might be unsupported or not compatible with RHEL. If you require assistance with setting up virtual devices, consult Red Hat support.
14.3. Attaching USB devices to virtual machines by using the command line
When using a virtual machine (VM), you can access and control a USB device, such as a flash drive or a web camera, that is attached to the host system. In this scenario, the host system passes control of the device to the VM. This is also known as a USB-passthrough.
To attach a USB device to a VM, you can include the USB device information in the XML configuration file of the VM.
Prerequisites
- Ensure the device you want to pass through to the VM is attached to the host.
Procedure
Locate the bus and device values of the USB that you want to attach to the VM.
For example, the following command displays a list of USB devices attached to the host. The device we will use in this example is attached on bus 001 as device 005.
# lsusb
[...]
Bus 001 Device 003: ID 2567:0a2b Intel Corp.
Bus 001 Device 005: ID 0407:6252 Kingston River 2.0
[...]

Use the virt-xml utility along with the --add-device argument.

For example, the following command attaches a USB flash drive to the example-VM-1 VM.

# virt-xml example-VM-1 --add-device --hostdev 001.005
Domain 'example-VM-1' defined successfully.

Note: To attach a USB device to a running VM, add the --update argument to the command.
Verification
Use the virsh dumpxml command to see if the device's XML definition has been added to the <devices> section in the VM's XML configuration file; see the sketch after this list.

- Run the VM and test if the device is present and works as expected.
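The added entry is a <hostdev> element similar to the following sketch (the bus and device numbers are taken from the example above; the exact attributes depend on your host):

<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <address bus='1' device='5'/>
  </source>
</hostdev>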
14.4. Attaching PCI devices to virtual machines by using the command line
When using a virtual machine (VM), you can access and control a PCI device, such as a storage or network controller, that is attached to the host system. In this scenario, the host system passes control of the device to the VM. This is also known as a PCI device assignment, or PCI passthrough.
To use a PCI hardware device attached to your host in a virtual machine (VM), you can detach the device from the host and assign it to the VM.
This procedure describes generic PCI device assignment. For instructions on assigning specific types of PCI devices, see the relevant procedures.
Prerequisites
If your host is using the IBM Z architecture, the vfio kernel modules must be loaded on the host. To verify, use the following command:

# lsmod | grep vfio

The output must contain the following modules:

- vfio_pci
- vfio_pci_core
- vfio_iommu_type1
Procedure
Obtain the PCI address identifier of the device that you want to use. For example, if you want to use an NVMe disk attached to the host, the following output shows it as device 0000:65:00.0.
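For example, the lspci output might look similar to the following (the device model is an illustrative placeholder):

# lspci -D | grep NVMe
0000:65:00.0 Non-Volatile memory controller: Samsung Electronics NVMe SSD Controller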
Open the XML configuration of the VM to which you want to attach the PCI device.

# virsh edit vm-name

Add the following <hostdev> configuration to the <devices> section of the XML file.

Replace the values on the address line with the PCI address of your device. Optionally, to change the PCI address that the device will use in the VM, you can configure a different address on the <address type="pci"> line. For example, if the device address on the host is 0000:65:00.0, and you want it to use 0000:02:00.0 in the guest, use the following configuration:
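A sketch of such a configuration (the attribute values are taken from the example addresses above) might look as follows:

<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x65' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
</hostdev>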
Optional: On IBM Z hosts, you can modify how the guest operating system will detect the PCI device. To do this, add a <zpci> sub-element to the <address> element. In the <zpci> line, you can adjust the uid and fid values, which modifies the PCI address and function ID of the device in the guest operating system.
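A sketch of a <hostdev> configuration with a <zpci> sub-element (the uid and fid values match the example explained below) might look as follows:

<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x65' slot='0x00' function='0x0'/>
  </source>
  <address type='pci'>
    <zpci uid='0x0008' fid='0x001807'/>
  </address>
</hostdev>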
In this example:

- uid="0x0008" sets the domain PCI address of the device in the VM to 0008:00:00.0.
- fid="0x001807" sets the slot value of the device to 0x001807. As a result, the device configuration in the file system of the VM is saved to /sys/bus/pci/slots/00001807/address.

If these values are not specified, libvirt configures them automatically.
- Save the XML configuration.
If the VM is running, shut it down.
# virsh shutdown vm-name
Verification
- Start the VM and log in to its guest operating system.
In the guest operating system, confirm that the PCI device is listed.
For example, if you configured the guest device address as 0000:02:00.0, use the following command:

# lspci -nkD | grep 0000:02:00.0
0000:02:00.0 8086:9a09 (rev 01)
14.5. Attaching host devices to virtual machines by using the web console
To add specific functionalities to your virtual machine (VM), you can use the web console to attach host devices to the VM.
Prerequisites
You have installed the RHEL 10 web console.
For instructions, see Installing and enabling the web console.
If you are attaching PCI devices, ensure that the status of the managed attribute of the hostdev element is set to yes.

Note: When attaching PCI devices to your VM, do not omit the managed attribute of the hostdev element, or set it to no. If you do so, PCI devices cannot automatically detach from the host when you pass them to the VM. They also cannot automatically reattach to the host when you turn off the VM.

As a consequence, the host might become unresponsive or shut down unexpectedly.
You can find the status of the
managedattribute in your VM’s XML configuration. The following example opens the XML configuration of theexample-VM-1VM.virsh edit example-VM-1
- Back up important data from the VM.
Optional: Back up the XML configuration of your VM. For example, to back up the example-VM-1 VM:

# virsh dumpxml example-VM-1 > example-VM-1.xml

- The web console VM plug-in is installed on your system.
Procedure
- Log in to the RHEL 10 web console.
In the interface, click the VM to which you want to attach a host device.
A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.
Scroll to Host devices.
The Host devices section displays information about the devices attached to the VM and options to Add or Remove devices.
Click Add host device.
The Add host device dialog is displayed.
- Select the device you want to attach to the VM.
Click Add.
The selected device is attached to the VM.
Verification
- Run the VM and check if the device is displayed in the Host devices section.
14.6. Removing USB devices from virtual machines by using the command line
To remove a USB device from a virtual machine (VM), you can remove the USB device information from the XML configuration of the VM.
Procedure
Locate the bus and device values of the USB device that you want to remove from the VM.
For example, the following command displays a list of USB devices attached to the host. The device we will use in this example is attached on bus 001 as device 005.
# lsusb
[...]
Bus 001 Device 003: ID 2567:0a2b Intel Corp.
Bus 001 Device 005: ID 0407:6252 Kingston River 2.0
[...]

Use the
virt-xml utility along with the --remove-device argument.

For example, the following command removes a USB flash drive, attached to the host as device 005 on bus 001, from the example-VM-1 VM.

# virt-xml example-VM-1 --remove-device --hostdev 001.005
Domain 'example-VM-1' defined successfully.

To remove a USB device from a running VM, add the --update argument to this command.
Verification
- Run the VM and check if the device has been removed from the list of devices.
14.7. Removing PCI devices from virtual machines by using the command line
To remove a PCI device from a virtual machine (VM), remove the device information from the XML configuration of the VM.
Procedure
In the XML configuration of the VM to which the PCI device is attached, locate the <address domain> line in the <hostdev> section with the device’s settings.
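For a device attached at host address 0000:65:00.0, the section resembles the following sketch. To prepare for the next step, you can save this <hostdev> section to a file, for example pci-device.xml; the file name is illustrative:

<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x65' slot='0x00' function='0x0'/>
  </source>
</hostdev>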
Use the virsh detach-device command with the XML file that describes the device, and add the --config option to make the removal persistent.

For example, the following command persistently removes the device described in the pci-device.xml file saved in the previous step.

# virsh detach-device <VM-name> pci-device.xml --config
Device detached successfully

Note: To remove a PCI device from a running VM, add the --live argument to the previous command.

Optional: Re-attach the PCI device to the host. For example, the following command re-attaches the device removed from the VM in the previous step:

# virsh nodedev-reattach pci_0000_65_00_0
Device pci_0000_65_00_0 re-attached
Verification
Display the XML configuration of the VM again, and check that the <hostdev> section of the device no longer appears.

# virsh dumpxml <VM-name>
14.8. Removing host devices from virtual machines by using the web console
To free up resources, modify the functionalities of your VM, or both, you can use the web console to modify the VM and remove host devices that are no longer required.
Prerequisites
You have installed the RHEL 10 web console.
For instructions, see Installing and enabling the web console.
- The web console VM plug-in is installed on your system.
Optional: Back up the XML configuration of your VM by using virsh dumpxml and sending the output to a file. For example, the following backs up the configuration of your testguest1 VM as the testguest1.xml file:
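# virsh dumpxml testguest1 > testguest1.xml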
Procedure
In the interface, click the VM from which you want to remove a host device.
A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.
Scroll to Host devices.
The Host devices section displays information about the devices attached to the VM and options to Add or Remove devices.
Click the Remove button next to the device you want to remove from the VM.
A remove device confirmation dialog is displayed.
Click Remove.
The device is removed from the VM.
Troubleshooting
If removing a host device causes your VM to become unbootable, use the virsh define utility to restore the XML configuration by reloading the XML configuration file you backed up previously.

# virsh define testguest1.xml
14.9. Attaching ISO images to virtual machines
When using a virtual machine (VM), you can access information stored in an ISO image on the host. To do so, attach the ISO image to the VM as a virtual optical drive, such as a CD drive or a DVD drive.
14.9.1. Attaching ISO images to virtual machines by using the command line
To attach an ISO image as a virtual optical drive, edit the XML configuration file of the virtual machine (VM) and add the new drive.
Prerequisites
- You must store the ISO image on the host machine and know its path.
Procedure
Use the virt-xml utility with the --add-device argument.

For example, the following command attaches the example-ISO-name ISO image, stored in the /home/username/Downloads directory, to the example-VM-name VM.

# virt-xml example-VM-name --add-device --disk /home/username/Downloads/example-ISO-name.iso,device=cdrom
Domain 'example-VM-name' defined successfully.
Verification
- Run the VM and test if the device is present and works as expected.
14.9.2. Replacing ISO images in virtual optical drives
To replace an ISO image attached as a virtual optical drive to a virtual machine (VM), edit the XML configuration file of the VM and specify the replacement.
Prerequisites
- You must store the ISO image on the host machine.
- You must know the path to the ISO image.
Procedure
Locate the target device where the ISO image is attached to the VM. You can find this information in the VM’s XML configuration file.
For example, the following command displays the example-VM-name VM’s XML configuration file, where the target device for the virtual optical drive is sda.
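The relevant part of the output resembles the following sketch; the driver and bus details can differ on your system:

# virsh dumpxml example-VM-name
[...]
<disk type='file' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source file='/home/username/Downloads/example-ISO-name.iso'/>
  <target dev='sda' bus='sata'/>
  <readonly/>
</disk>
[...]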
Use the virt-xml utility with the --edit argument.

For example, the following command replaces the example-ISO-name ISO image, attached to the example-VM-name VM at target sda, with the example-ISO-name-2 ISO image stored in /dev/cdrom.

# virt-xml example-VM-name --edit target=sda --disk /dev/cdrom/example-ISO-name-2.iso
Domain 'example-VM-name' defined successfully.
Verification
- Run the VM and test if the device is replaced and works as expected.
14.9.3. Removing ISO images from virtual machines by using the command line
To remove an ISO image attached to a virtual machine (VM), edit the XML configuration file of the VM.
Procedure
Locate the target device where the ISO image is attached to the VM. You can find this information in the VM’s XML configuration file.
For example, the following command displays the example-VM-name VM’s XML configuration file, where the target device for the virtual optical drive is sda.
Use the virt-xml utility with the --remove-device argument.

For example, the following command removes the optical drive attached as target sda from the example-VM-name VM.

# virt-xml example-VM-name --remove-device --disk target=sda
Domain 'example-VM-name' defined successfully.
Verification
- Confirm that the device is no longer listed in the XML configuration file of the VM.
14.10. Attaching DASD devices to virtual machines on IBM Z
By using the vfio-ccw feature, you can assign direct-access storage devices (DASDs) as mediated devices to your virtual machines (VMs) on IBM Z hosts. This makes it possible, for example, for the VM to access a z/OS data set, or to provide the assigned DASDs to a z/OS machine.
Prerequisites
- You have a system with the IBM Z hardware architecture that supports the FICON protocol.
- You have a target VM that runs a Linux operating system.
The driverctl package is installed.
# dnf install driverctl

The necessary
vfio kernel modules have been loaded on the host.

# lsmod | grep vfio

The output of this command must contain the following modules:
- vfio_ccw
- vfio_mdev
- vfio_iommu_type1
You have a spare DASD device for exclusive use by the VM, and you know the identifier of the device.
The following procedure uses
0.0.002c as an example. When performing the commands, replace 0.0.002c with the identifier of your DASD device.
Procedure
Obtain the subchannel identifier of the DASD device.
# lscss -d 0.0.002c
Device   Subchan.  DevType CU Type Use  PIM PAM POM  CHPIDs
----------------------------------------------------------------------
0.0.002c 0.0.29a8  3390/0c 3990/e9 yes  f0  f0  ff   02111221 00000000

In this example, the subchannel identifier is detected as
0.0.29a8. In the following commands of this procedure, replace 0.0.29a8 with the detected subchannel identifier of your device.

If the
lscss command in the previous step only displayed the header output and no device information, perform the following steps:

Remove the device from the cio_ignore list.

# cio_ignore -r 0.0.002c

Edit the kernel command line of the host and add the device identifier with a ! mark to the line that starts with cio_ignore=, if it is not present already.

cio_ignore=all,!condev,!0.0.002c

- Repeat step 1 on the host to obtain the subchannel identifier.
Bind the subchannel to the
vfio_ccw passthrough driver.

# driverctl -b css set-override 0.0.29a8 vfio_ccw

Note: This binds the 0.0.29a8 subchannel to
vfio_ccw persistently, which means the DASD will not be usable on the host. If you need to use the device on the host, you must first remove the automatic binding to vfio_ccw and rebind the subchannel to the default driver:

# driverctl -b css unset-override 0.0.29a8
Define and start the DASD mediated device.
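One way to do this is to define the device from a libvirt node-device XML file and start it with virsh. The file name below is illustrative, and the UUID matches the one used in the following steps:

# cat nodedev.xml
<device>
  <parent>css_0_0_29a8</parent>
  <capability type="mdev">
    <type id="vfio_ccw-io"/>
    <uuid>30820a6f-b1a5-4503-91ca-0c10ba12345a</uuid>
  </capability>
</device>

# virsh nodedev-define nodedev.xml
Node device mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8 defined from nodedev.xml

# virsh nodedev-start mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8
Device mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8 started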
- Shut down the VM, if it is running.
Display the UUID of the previously defined device and save it for the next step.
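For example, the UUID is part of the device’s node-device XML; one possible way to display it:

# virsh nodedev-dumpxml mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8 | grep uuid
  <uuid>30820a6f-b1a5-4503-91ca-0c10ba12345a</uuid>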
Attach the mediated device to the VM. To do so, use the
virsh edit utility to edit the XML configuration of the VM, add the following section to the XML, and replace the uuid value with the UUID you obtained in the previous step.

<hostdev mode='subsystem' type='mdev' model='vfio-ccw'>
  <source>
    <address uuid="30820a6f-b1a5-4503-91ca-0c10ba12345a"/>
  </source>
</hostdev>

Optional: Configure the mediated device to start automatically on host boot.
# virsh nodedev-autostart mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8
Verification
Ensure that the mediated device is configured correctly.
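For example, you can list the active mediated node devices and confirm that the new device appears; this is one possible way to check:

# virsh nodedev-list --cap mdev
mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8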
Obtain the identifier that
libvirt assigned to the mediated DASD device. To do so, display the XML configuration of the VM and look for a vfio-ccw device.
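A sketch of the relevant part of the output; the ccw address corresponds to the identifier discussed below:

# virsh dumpxml vm-name
[...]
<hostdev mode='subsystem' type='mdev' model='vfio-ccw'>
  <source>
    <address uuid='30820a6f-b1a5-4503-91ca-0c10ba12345a'/>
  </source>
  <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0009'/>
</hostdev>
[...]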
In this example, the assigned identifier of the device is 0.0.0009.

- Start the VM and log in to its guest operating system.
In the guest operating system, confirm that the DASD device is listed. For example:
# lscss | grep 0.0.0009
0.0.0009 0.0.0007  3390/0c 3990/e9      f0  f0  ff   12212231 00000000

In the guest operating system, set the device online. For example:
# chccwdev -e 0.0.0009
Setting device 0.0.0009 online
Done
14.11. Attaching a watchdog device to a virtual machine by using the web console
To force the virtual machine (VM) to perform a specified action when it stops responding, you can attach virtual watchdog devices to a VM.
Prerequisites
You have installed the RHEL 10 web console.
For instructions, see Installing and enabling the web console.
- You have installed the web console VM plug-in on your system. For more information, see Setting up the web console to manage virtual machines.
Procedure
On the command line of the VM, install the watchdog service.
# dnf install watchdog
- Shut down the VM.
Add the watchdog device to the VM.
# virt-xml vmname --add-device --watchdog action=reset
- Run the VM.
- Log in to the RHEL 10 web console.
- In the interface of the web console, click on the VM to which you want to add the watchdog device.
Click Add next to the Watchdog field in the Overview pane.
The Add watchdog device type dialog is displayed.
- Select the action that you want the watchdog device to perform if the VM stops responding.
- Click Add.
Verification
- The action you selected is visible next to the Watchdog field in the Overview pane.
Chapter 15. Configuring virtual machine network connections
For your virtual machines (VMs) to connect over a network to your host, to other VMs on your host, and to locations on an external network, the VM networking must be configured accordingly. To provide VM networking, the RHEL 10 hypervisor and newly created VMs have a default network configuration, which you can modify further.
For example:
- You can enable the VMs on your host to be discovered and connected to locations outside the host, as if the VMs were on the same network as the host.
- You can partially or completely isolate a VM from inbound network traffic to increase its security and minimize the risk of any problems with the VM impacting the host.
15.1. How virtual networks work
The connection of virtual machines (VMs) to other devices and locations on a network is facilitated by the host hardware. Virtual networking uses the concept of a virtual network switch.
A virtual network switch is a software construct that operates on a host machine. VMs connect to the network through the virtual network switch. Based on the configuration of the virtual switch, a VM can use an existing virtual network managed by the hypervisor, or a different network connection method.
The following figure shows a virtual network switch connecting two VMs to the network:
From the perspective of a guest operating system, a virtual network connection is the same as a physical network connection. Host machines view virtual network switches as network interfaces. When the virtnetworkd service is first installed and started, it creates virbr0, the default network interface for VMs.
To view information about this interface, use the ip utility on the host.
$ ip addr show virbr0
3: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 1b:c4:94:cf:fd:17 brd ff:ff:ff:ff:ff:ff
    inet 192.0.2.1/24 brd 192.0.2.255 scope global virbr0
By default, all VMs on a single host are connected to the same NAT-type virtual network, named default, which uses the virbr0 interface. For details, see Virtual networking default configuration.
For basic outbound-only network access from VMs, no additional network setup is usually needed, because the default network is installed along with the libvirt-daemon-config-network package, and is automatically started when the virtnetworkd service is started.
If a different VM network functionality is needed, you can create additional virtual networks and network interfaces and configure your VMs to use them. In addition to the default NAT, these networks and interfaces can be configured to use one of the following modes:
- Routed mode
- Bridged mode
- Isolated mode
- Open mode
15.2. The default configuration for virtual machine networks
When the virtnetworkd service is first installed on a virtualization host, it contains an initial virtual network configuration in network address translation (NAT) mode. By default, all VMs on the host are connected to the same libvirt virtual network, named default. VMs on this network can connect to locations both on the host and on the network beyond the host, but with the following limitations:
- VMs on the network are visible to the host and other VMs on the host, but the network traffic is affected by the firewalls in the guest operating system’s network stack and by the libvirt network filtering rules attached to the guest interface.
- VMs on the network can connect to locations outside the host but are not visible to them. Outbound traffic is affected by the NAT rules, as well as the host system’s firewall.
The following diagram illustrates the default VM network configuration:
15.3. Network connection types for virtual machines
To modify the networking properties and behavior of your VMs, change the type of virtual network or interface the VMs use. You can select from the following connection types available to VMs in RHEL 10.
15.3.1. Virtual networking with network address translation
By default, virtual network switches operate in network address translation (NAT) mode. They use IP masquerading rather than Source-NAT (SNAT) or Destination-NAT (DNAT). IP masquerading enables connected VMs to use the host machine’s IP address for communication with any external network. When the virtual network switch is operating in NAT mode, computers external to the host cannot communicate with the VMs inside the host.
Virtual network switches use NAT configured by firewall rules. Editing these rules while the switch is running is not recommended, because incorrect rules might result in the switch being unable to communicate.
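To see how this mode is expressed in libvirt, you can display the definition of the default network. The following is a sketch of typical output; the addresses follow the conventions used in this chapter, and the uuid and mac elements are omitted:

# virsh net-dumpxml default
<network>
  <name>default</name>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.0.2.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.0.2.2' end='192.0.2.254'/>
    </dhcp>
  </ip>
</network>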
15.3.2. Virtual networking in routed mode
When using Routed mode, the virtual switch connects to the physical LAN connected to the host machine, passing traffic back and forth without the use of NAT. The virtual switch can examine all traffic and use the information contained within the network packets to make routing decisions. When using this mode, the virtual machines (VMs) are all in a single subnet, separate from the host machine. The VM subnet is routed through a virtual switch, which exists on the host machine. This enables incoming connections, but requires extra routing-table entries for systems on the external network.
Routed mode uses routing based on the IP address:
A common topology that uses routed mode is virtual server hosting (VSH). A VSH provider may have several host machines, each with two physical network connections. One interface is used for management and accounting, the other for the VMs to connect through. Each VM has its own public IP address, but the host machines use private IP addresses so that only internal administrators can manage the VMs.
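As an illustration, a libvirt network in routed mode differs from the default NAT network mainly in its <forward> element. The following sketch uses illustrative names and addresses:

<network>
  <name>routed-example</name>
  <forward mode='route' dev='enp1s0'/>
  <bridge name='virbr1' stp='on' delay='0'/>
  <ip address='198.51.100.1' netmask='255.255.255.0'/>
</network>

You could then create such a network with the virsh net-define and virsh net-start commands.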
15.3.3. Virtual networking in bridged mode
In most VM networking modes, VMs automatically create and connect to the virbr0 virtual bridge. In contrast, in bridged mode, the VM connects to an existing Linux bridge on the host. As a result, the VM is directly visible on the physical network. This enables incoming connections, but does not require any extra routing-table entries.
Bridged mode uses connection switching based on the MAC address:
In bridged mode, the VM appears within the same subnet as the host machine. All other physical machines on the same physical network can detect the VM and access it.
Bridged network bonding
It is possible to use multiple physical bridge interfaces on the hypervisor by joining them together with a bond. The bond can then be added to a bridge, after which the VMs can be added to the bridge as well. However, the bonding driver has several modes of operation, and not all of these modes work with a bridge where VMs are in use.
Bonding modes 1, 2, and 4 are usable.
In contrast, modes 0, 3, 5, or 6 are likely to cause the connection to fail. Also note that media-independent interface (MII) monitoring should be used to monitor bonding modes, as Address Resolution Protocol (ARP) monitoring does not work correctly.
For more information about bonding modes, see the Red Hat Knowledgebase solution Which bonding modes work when used with a bridge that virtual machine guests or containers connect to?.
Common scenarios
The most common use cases for bridged mode include:
- Deploying VMs in an existing network alongside host machines, making the difference between virtual and physical machines invisible to the user.
- Deploying VMs without making any changes to existing physical network configuration settings.
- Deploying VMs that must be easily accessible to an existing physical network. Placing VMs on a physical network where they must access DHCP services.
- Connecting VMs to an existing network where virtual LANs (VLANs) are used.
- A demilitarized zone (DMZ) network. For a DMZ deployment with VMs, Red Hat recommends setting up the DMZ at the physical network router and switches, and connecting the VMs to the physical network by using bridged mode.
15.4. Virtual networking in isolated mode
By using isolated mode, virtual machines connected to the virtual switch can communicate with each other and with the host machine, but their traffic will not pass outside of the host machine, and they cannot receive traffic from outside the host machine. In this mode, using dnsmasq is required for basic functionality such as DHCP.
15.4.1. Virtual networking in open mode
When using open mode for networking, libvirt does not generate any firewall rules for the network. As a result, libvirt does not overwrite firewall rules provided by the host, and the user can therefore manually manage the VM’s firewall rules.
15.5. Comparison of virtual machine connection types
The following table provides information about the locations to which selected types of virtual machine (VM) network configurations can connect, and to which they are visible.
| Connection type | Connection to the host | Connection to other VMs on the host | Connection to outside locations | Visible to outside locations |
|---|---|---|---|---|
| Bridged mode | Yes | Yes | Yes | Yes |
| NAT | Yes | Yes | Yes | No |
| Routed mode | Yes | Yes | Yes | Yes |
| Isolated mode | Yes | Yes | No | No |
| Open mode | Depends on the host’s firewall rules | | | |
15.6. Using the web console for managing virtual machine network interfaces
To manage the virtual network interfaces for virtual machines (VMs) on your host, you can use the RHEL 10 web console.
15.6.1. Viewing and editing virtual network interface information in the web console
To view and modify the virtual network interfaces on a selected virtual machine (VM), you can use the RHEL 10 web console.
Prerequisites
You have installed the RHEL 10 web console.
For instructions, see Installing and enabling the web console.
- The web console VM plug-in is installed on your system.
Procedure
- Log in to the RHEL 10 web console.
In the interface, click the VM whose information you want to see.
A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.
Scroll to Network interfaces.

The Network interfaces section displays information about the virtual network interface configured for the VM as well as options to Add, Delete, Edit, or Unplug network interfaces.
The information includes the following:
Type - The type of network interface for the VM. The types include virtual network, bridge to LAN, and direct attachment.
Note: Generic Ethernet connection is not supported in RHEL 10 and later.
- Model type - The model of the virtual network interface.
- MAC Address - The MAC address of the virtual network interface.
- IP Address - The IP address of the virtual network interface.
- Source - The source of the network interface. This is dependent on the network type.
- State - The state of the virtual network interface.
- To edit the virtual network interface settings, click Edit. The Virtual Network Interface Settings dialog opens.
- Change the interface type, source, model, or MAC address.
Click Save. The network interface is modified.
Note: Changes to the virtual network interface settings take effect only after restarting the VM. Additionally, the MAC address can only be modified when the VM is shut off.
15.6.2. Adding and connecting virtual network interfaces in the web console
To create a virtual network interface and connect a virtual machine (VM) to it, you can use the RHEL 10 web console.
Prerequisites
You have installed the RHEL 10 web console.
For instructions, see Installing and enabling the web console.
- The web console VM plug-in is installed on your system.
Procedure
- Log in to the RHEL 10 web console.
In the interface, click the VM whose information you want to see.
A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.
Scroll to Network interfaces.

The Network interfaces section displays information about the virtual network interface configured for the VM as well as options to Add, Edit, or Plug network interfaces.

If no network interfaces are available that meet your requirements, you can create a new interface by clicking Add network interface.
- In the Add network interface dialog, select the type and source of the interface, as well as other options, based on your requirements.
- Click Add.
Click Plug in the row of the virtual network interface you want to connect.
The selected virtual network interface connects to the VM.
15.6.3. Disconnecting and removing virtual network interfaces in the web console
To disconnect virtual network interfaces connected to a selected virtual machine (VM), you can use the RHEL 10 web console.
Prerequisites
You have installed the RHEL 10 web console.
For instructions, see Installing and enabling the web console.
- The web console VM plug-in is installed on your system.
Procedure
- Log in to the RHEL 10 web console.
In the interface, click the VM whose information you want to see.
A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.
Scroll to Network interfaces.

The Network interfaces section displays information about the virtual network interface configured for the VM as well as options to Add, Delete, Edit, or Unplug network interfaces.

Click Unplug in the row of the virtual network interface you want to disconnect.
The selected virtual network interface disconnects from the VM.
- Optional: If you want to delete the virtual network interface from the host, click the menu button in the pane of the interface, then click Delete.
15.7. Managing SR-IOV networking devices
An emulated virtual device often uses more CPU and memory than a hardware network device. This can limit the performance of a virtual machine (VM). However, if any devices on your virtualization host support Single Root I/O Virtualization (SR-IOV), you can use this feature to improve the device performance, and possibly also the overall performance of your VMs.
15.7.1. What is SR-IOV?
Single-root I/O virtualization (SR-IOV) is a specification that enables a single PCI Express (PCIe) device to present multiple separate PCI devices, called virtual functions (VFs), to the host system.
Each of these devices:
- Is able to provide the same or similar service as the original PCIe device.
- Appears at a different address on the host PCI bus.
- Can be assigned to a different VM by using VFIO assignment.
For example, a single SR-IOV capable network device can present VFs to multiple VMs. All of the VFs use the same physical card, the same network connection, and the same network cable, but each of the VMs directly controls its own hardware network device and uses no extra resources from the host.
How SR-IOV works
The SR-IOV functionality is possible thanks to the introduction of the following PCIe functions:
- Physical functions (PFs) - A PCIe function that provides the functionality of its device (for example networking) to the host, but can also create and manage a set of VFs. Each SR-IOV capable device has one or more PFs.
- Virtual functions (VFs) - Lightweight PCIe functions that behave as independent devices. Each VF is derived from a PF. The maximum number of VFs a device can have depends on the device hardware. Each VF can be assigned only to a single VM at a time, but a VM can have multiple VFs assigned to it.
VMs recognize VFs as virtual devices. For example, a VF created by an SR-IOV network device appears as a network card to a VM to which it is assigned, in the same way as a physical network card appears to the host system.
Figure 15.1. SR-IOV architecture
Advantages
The primary advantages of using SR-IOV VFs rather than emulated devices are:
- Improved performance
- Reduced use of host CPU and memory resources
For example, a VF attached to a VM as a vNIC performs at almost the same level as a physical NIC, and much better than paravirtualized or emulated NICs. In particular, when multiple VFs are used simultaneously on a single host, the performance benefits can be significant.
Disadvantages
- To modify the configuration of a PF, you must first change the number of VFs exposed by the PF to zero. Therefore, you also need to remove the devices provided by these VFs from the VM to which they are assigned.
- A VM with VFIO-assigned devices attached, including SR-IOV VFs, cannot be migrated to another host. In some cases, you can work around this limitation by pairing the assigned device with an emulated device. For example, you can bond an assigned networking VF to an emulated vNIC, and remove the VF before the migration.
- In addition, VFIO-assigned devices require pinning of VM memory, which increases the memory consumption of the VM and prevents the use of memory ballooning on the VM.
15.7.2. Attaching SR-IOV networking devices to virtual machines
To assign an SR-IOV networking device to a virtual machine (VM), you must create a virtual function (VF) from an SR-IOV capable network interface on the host and assign the VF as a device to a specified VM.
Prerequisites
The CPU and the firmware of your host support the I/O Memory Management Unit (IOMMU).
- If using an Intel CPU, it must support the Intel Virtualization Technology for Directed I/O (VT-d).
- If using an AMD CPU, it must support the AMD-Vi feature.
The host system uses Access Control Service (ACS) to provide direct memory access (DMA) isolation for PCIe topology. Verify this with the system vendor.
For additional information, see Hardware Considerations for Implementing SR-IOV.
The physical network device supports SR-IOV. To verify if any network devices on your system support SR-IOV, use the
lspci -v command and look for Single Root I/O Virtualization (SR-IOV) in the output.
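For example, a capable device lists the corresponding capability; the capability offset shown here is illustrative:

# lspci -v | grep SR-IOV
        Capabilities: [160] Single Root I/O Virtualization (SR-IOV)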
The host network interface you want to use for creating VFs is running. For example, to activate the eth1 interface and verify it is running:
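A sketch of the commands and expected output; the interface index and MAC address are illustrative:

# ip link set eth1 up
# ip link show eth1
8: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT qlen 1000
    link/ether a0:36:9f:8f:3f:b8 brd ff:ff:ff:ff:ff:ff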
For SR-IOV device assignment to work, the IOMMU feature must be enabled in the host BIOS and kernel. To do so:

On an Intel host, enable VT-d:
Regenerate the GRUB configuration with the
intel_iommu=on and iommu=pt parameters:

# grubby --args="intel_iommu=on iommu=pt" --update-kernel=ALL

- Reboot the host.
On an AMD host, enable AMD-Vi:
Regenerate the GRUB configuration with the
iommu=pt parameter:

# grubby --args="iommu=pt" --update-kernel=ALL

- Reboot the host.
On an ARM 64 host, the required SMMU feature is enabled by default. For better performance, also configure the iommu=pt parameter:

Regenerate the GRUB configuration with the iommu=pt parameter:

# grubby --args="iommu=pt" --update-kernel=ALL

- Reboot the host.
Procedure
Optional: Confirm the maximum number of VFs your network device can use. To do so, use the following command and replace eth1 with your SR-IOV compatible network device.
# cat /sys/class/net/eth1/device/sriov_totalvfs
7

Use the following command to create a virtual function (VF):
# echo VF-number > /sys/class/net/network-interface/device/sriov_numvfs

In the command, replace:
- VF-number with the number of VFs you want to create on the PF.
- network-interface with the name of the network interface for which the VFs will be created.
The following example creates 2 VFs from the eth1 network interface:
# echo 2 > /sys/class/net/eth1/device/sriov_numvfs

Verify the VFs have been added:
# lspci | grep Ethernet
82:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
82:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
82:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
82:10.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)

Make the created VFs persistent by creating a udev rule for the network interface you used to create the VFs. For example, for the eth1 interface, create the
/etc/udev/rules.d/eth1.rules file, and add the following line:

ACTION=="add", SUBSYSTEM=="net", ENV{ID_NET_DRIVER}=="ixgbe", ATTR{device/sriov_numvfs}="2"

This ensures that the two VFs that use the ixgbe driver will automatically be available for the eth1 interface when the host starts. If you do not require persistent SR-IOV devices, skip this step.

Warning: Currently, the setting described above does not work correctly when attempting to make VFs persistent on Broadcom NetXtreme II BCM57810 adapters. In addition, attaching VFs based on these adapters to Windows VMs is currently not reliable.
Hot plug one of the newly added VF interface devices to a running VM.
# virsh attach-interface <vm_name> hostdev 0000:82:10.0 --mac 52:54:00:00:01:01 --managed --live --config

The
--live option attaches the device to a running VM, without persistence between boots. The --config option makes the configuration changes persistent. To attach the device to a shut down VM, do not use the --live option.

The
--mac option specifies a MAC address for the attached interface. If you do not specify a MAC address for the interface, the VM automatically generates a permanent, pseudorandom address that begins with 52:54:00.

Important: If you assign an SR-IOV VF to a virtual machine by manually adding a device entry to the <hostdev> section of your VM’s XML configuration file, the MAC address is not permanently assigned and network settings in the guest usually need to be reconfigured on every host reboot.
To avoid these complications, use the
virsh attach-interface command as described in this step.
Verification
- If the procedure is successful, the guest operating system detects a new network interface controller.
15.7.3. Supported devices for SR-IOV assignment
Not all devices can be used for SR-IOV. The following devices have been tested and verified as compatible with SR-IOV in RHEL 10.
Networking devices
- Intel 82599ES 10 Gigabit Ethernet Controller - uses the ixgbe driver
- Intel Ethernet Controller XL710 Series - uses the i40e driver
- Intel Ethernet Network Adapter XXV710 - uses the i40e driver
- Intel 82576 Gigabit Ethernet Controller - uses the igb driver
- Broadcom NetXtreme II BCM57810 - uses the bnx2x driver
- Ethernet Controller E810-C for QSFP - uses the ice driver
- SFC9220 10/40G Ethernet Controller - uses the sfc driver
- FastLinQ QL41000 Series 10/25/40/50GbE Controller - uses the qede driver
- Mellanox MT27710 Ethernet Adapter Cards
- Mellanox MT2892 Family [ConnectX-6 Dx]
- Mellanox MT2910 [ConnectX-7]
15.8. Booting virtual machines from a PXE server
Virtual machines (VMs) that use Preboot Execution Environment (PXE) can boot and load their configuration from a network. You can use libvirt to boot VMs from a PXE server on a virtual or bridged network.
The following procedures are provided only as examples. Ensure that you have sufficient backups before proceeding.
15.8.1. Setting up a PXE boot server on a virtual network
To configure virtual machines on your host to boot from a boot image available on the virtual network, you must configure a libvirt virtual network to provide Preboot Execution Environment (PXE).
Prerequisites
A local PXE server (DHCP and TFTP), such as:
- libvirt internal server
- manually configured dhcpd and tftpd
- dnsmasq
- Cobbler server
- PXE boot images, such as PXELINUX, configured by Cobbler or manually.
Procedure
- Place the PXE boot images and configuration in the /var/lib/tftpboot folder.

Set folder permissions:
# chmod -R a+r /var/lib/tftpboot

Set folder ownership:

# chown -R nobody: /var/lib/tftpboot

Update SELinux context:

# chcon -R --reference /usr/sbin/dnsmasq /var/lib/tftpboot
# chcon -R --reference /usr/libexec/libvirt_leaseshelper /var/lib/tftpboot

Shut down the virtual network:

# virsh net-destroy default

Open the virtual network configuration file in your default editor:

# virsh net-edit default

Edit the
<ip> element to include the appropriate address, network mask, DHCP address range, and boot file, where example-pxelinux is the name of the boot image file.
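A sketch of such an <ip> element, using the address range conventions of this chapter:

<ip address='192.0.2.1' netmask='255.255.255.0'>
  <tftp root='/var/lib/tftpboot'/>
  <dhcp>
    <range start='192.0.2.2' end='192.0.2.254'/>
    <bootp file='example-pxelinux'/>
  </dhcp>
</ip>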
Start the virtual network:

# virsh net-start default
Verification
Verify that the
default virtual network is active:

# virsh net-list
 Name      State    Autostart   Persistent
---------------------------------------------------
 default   active   no          no
15.8.2. Booting virtual machines by using PXE and a virtual network
To boot virtual machines (VMs) from a Preboot Execution Environment (PXE) server available on a virtual network, you must enable PXE booting.
Prerequisites
- A PXE boot server is set up on the virtual network as described in Setting up a PXE boot server on a virtual network.
Procedure
Create a new VM with PXE booting enabled. For example, to install from a PXE server available on the
default virtual network, into a new 10 GB QCOW2 image file:

# virt-install --pxe --network network=default --memory 2048 --vcpus 2 --disk size=10

Alternatively, you can manually edit the XML configuration file of an existing VM. To do so, ensure the guest network is configured to use your virtual network and that the network is configured to be the primary boot device:
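A sketch of the relevant interface definition; the MAC address is illustrative, and other sub-elements are omitted:

<interface type='network'>
  <mac address='52:54:00:66:79:14'/>
  <source network='default'/>
  <boot order='1'/>
</interface>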
Verification
- Start the VM by using the virsh start command. If PXE is configured correctly, the VM boots from a boot image available on the PXE server.
15.8.3. Booting virtual machines by using PXE and a bridged network
To boot virtual machines (VMs) from a Preboot Execution Environment (PXE) server available on a bridged network, you must enable PXE booting.
Prerequisites
- Network bridging is enabled.
- A PXE boot server is available on the bridged network.
Procedure
Create a new VM with PXE booting enabled. For example, to install from a PXE server available on the
breth0 bridged network, into a new 10 GB QCOW2 image file:

# virt-install --pxe --network bridge=breth0 --memory 2048 --vcpus 2 --disk size=10

Alternatively, you can manually edit the XML configuration file of an existing VM. To do so, ensure that the VM is configured with a bridged network and that the network is configured to be the primary boot device:
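A sketch of the relevant interface definition; the MAC address is illustrative, and other sub-elements are omitted:

<interface type='bridge'>
  <mac address='52:54:00:5a:ad:cb'/>
  <source bridge='breth0'/>
  <boot order='1'/>
</interface>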
Verification
- Start the VM by using the virsh start command. If PXE is configured correctly, the VM boots from a boot image available on the PXE server.
15.9. Configuring externally visible virtual machines
In many scenarios, the default virtual machine (VM) networking configuration is sufficient. However, if you need to adjust the configuration for your VMs to become reachable from external systems, you can use the command-line interface (CLI) or the RHEL 10 web console.
15.9.1. Configuring externally visible virtual machines by using the command line
If you require a virtual machine (VM) to appear on the same external network as the hypervisor, you must use bridged mode. To do so, attach the VM to a bridge device connected to the hypervisor’s physical network device.
By default, a newly created VM connects to a NAT-type network that uses virbr0, the default virtual bridge on the host. This ensures that the VM can use the host’s network interface controller (NIC) for connecting to outside networks, but the VM is not reachable from external systems.
Prerequisites
- An existing, shut-down VM with the default NAT setup.
The IP configuration of the hypervisor. This varies depending on the network connection of the host. As an example, this procedure uses a scenario where the host is connected to the network by using an ethernet cable, and the host's physical NIC MAC address is assigned a static IP by a DHCP server. Therefore, the ethernet interface is treated as the hypervisor IP.
To obtain the IP configuration of the ethernet interface, use the
ip addr utility:

# ip addr
[...]
enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 54:ee:75:49:dc:46 brd ff:ff:ff:ff:ff:ff
    inet 192.0.2.1/24 brd 192.0.2.255 scope global dynamic noprefixroute enp0s25
Procedure
Create and set up a bridge connection for the physical interface on the host. For instructions, see Configuring a network bridge.
Note that in a scenario where static IP assignment is used, you must move the IPv4 setting of the physical ethernet interface to the bridge interface.
Modify the VM’s network to use the created bridged interface. For example, the following sets testguest to use bridge0.
# virt-xml testguest --edit --network bridge=bridge0
Domain 'testguest' defined successfully.

Start the VM.
# virsh start testguest
The specific steps for this differ depending on the guest operating system used by the VM. For example, if the guest operating system is RHEL 10, see Configuring an Ethernet connection.
Verification
Ensure the newly created bridge is running and contains both the host’s physical interface and the interface of the VM.
# ip link show master bridge0
2: enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000
    link/ether 54:ee:75:49:dc:46 brd ff:ff:ff:ff:ff:ff
10: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether fe:54:00:89:15:40 brd ff:ff:ff:ff:ff:ff

Ensure the VM is displayed on the same external network as the hypervisor:
In the guest operating system, obtain the network ID of the system. For example, if it is a Linux guest:
# ip addr
[...]
enp0s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:09:15:46 brd ff:ff:ff:ff:ff:ff
    inet 192.0.2.2/24 brd 192.0.2.255 scope global dynamic noprefixroute enp0s0

From an external system connected to the local network, connect to the VM by using the obtained ID.
# ssh root@192.0.2.2
root@192.0.2.2's password:
Last login: Mon Sep 24 12:05:36 2019
root~#

If the connection works, the network has been configured successfully.
Troubleshooting
In certain situations, such as when using a client-to-site VPN while the VM is hosted on the client, using bridged mode for making your VMs available to external locations is not possible.
To work around this problem, you can set destination NAT by using
nftables for the VM.
15.9.2. Configuring externally visible virtual machines by using the web console
If you require a VM to appear on the same external network as the hypervisor, you must use bridged mode. To do so, attach the VM to a bridge device connected to the hypervisor’s physical network device. To use the RHEL 10 web console for this, follow the instructions below.
By default, a newly created VM connects to a NAT-type network that uses virbr0, the default virtual bridge on the host. This ensures that the VM can use the host’s network interface controller (NIC) for connecting to outside networks, but the VM is not reachable from external systems.
Prerequisites
You have installed the RHEL 10 web console.
For instructions, see Installing and enabling the web console.
- The web console VM plug-in is installed on your system.
- An existing, shut-down VM with the default NAT setup.
The IP configuration of the hypervisor. This varies depending on the network connection of the host. As an example, this procedure uses a scenario where the host is connected to the network by using an ethernet cable, and the host's physical NIC MAC address is assigned a static IP by a DHCP server. Therefore, the ethernet interface is treated as the hypervisor IP.
To obtain the IP configuration of the ethernet interface, go to the
Networking tab in the web console, and see the Interfaces section.
Procedure
Create and set up a bridge connection for the physical interface on the host. For instructions, see Configuring network bridges in the web console.
Note that in a scenario where static IP assignment is used, you must move the IPv4 setting of the physical ethernet interface to the bridge interface.
Modify the VM’s network to use the bridged interface. In the Network Interfaces tab of the VM:
- Click Add network interface.
In the Add Virtual Network Interface dialog, set:

- Interface Type to Bridge to LAN
- Source to the newly created bridge, for example bridge0
- Click Add.
- Optional: Click Unplug for all the other interfaces connected to the VM.
- Click Run to start the VM.
In the guest operating system, adjust the IP and DHCP settings of the system’s network interface as if the VM was another physical system in the same network as the hypervisor.
The specific steps for this will differ depending on the guest operating system used by the VM. For example, if the guest operating system is RHEL 10, see Configuring an Ethernet connection.
Verification
- In the Networking tab of the host’s web console, click the row with the newly created bridge to ensure it is running and contains both the host’s physical interface and the interface of the VM.
Ensure the VM is displayed on the same external network as the hypervisor.
In the guest operating system, obtain the network ID of the system. For example, if it is a Linux guest:
# ip addr
[...]
enp0s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:09:15:46 brd ff:ff:ff:ff:ff:ff
    inet 192.0.2.2/24 brd 192.0.2.255 scope global dynamic noprefixroute enp0s0

From an external system connected to the local network, connect to the VM by using the obtained ID.
# ssh root@192.0.2.2
root@192.0.2.2's password:
Last login: Mon Sep 24 12:05:36 2019
root~#

If the connection works, the network has been configured successfully.
Troubleshooting
- In certain situations, such as when using a client-to-site VPN while the VM is hosted on the client, using bridged mode for making your VMs available to external locations is not possible.
15.9.3. Replacing macvtap connections
Using macvtap connections is supported in RHEL 10. However, in comparison to other available virtual machine (VM) networking configurations, macvtap has suboptimal performance and is more difficult to set up correctly. If your use case does not explicitly require macvtap, use a different supported networking configuration.
macvtap is a Linux networking device driver that creates a virtual network interface, through which VMs have direct access to the physical network interface on the host machine. If you are using a macvtap mode in your VM, consider instead using the following network configurations:
- Instead of macvtap bridge mode, use the Linux bridge configuration.
- Instead of macvtap passthrough mode, use PCI Passthrough.
15.10. Configuring bridges on a network bond to connect virtual machines with the network
The network bridge connects virtual machines (VMs) with the same network as the host. If you want to connect VMs on one host to another host or to VMs on another host, a bridge establishes communication between them. However, the bridge does not provide a fail-over mechanism.
To handle failures in communication, you can configure a network bond, which maintains communication if a network interface fails. To provide fault tolerance and redundancy, the active-backup bonding mechanism keeps only one port active in the bond and does not require any switch configuration. If the active port fails, an alternate port becomes active to retain communication between the configured VMs in the network.
15.10.1. Configuring network interfaces on a network bond by using nmcli
To configure a network bond on the command line, use the nmcli utility.
Prerequisites
- Two or more physical network devices are installed on the server, and they are not configured in any NetworkManager connection profile.
Procedure
Create a bond interface:

# nmcli connection add type bond con-name bond0 ifname bond0 bond.options "mode=active-backup"

This command creates a bond named bond0 that uses the active-backup mode.

Assign the Ethernet interfaces to the bond:
# nmcli connection add type ethernet slave-type bond con-name bond0-port1 ifname enp7s0 master bond0
# nmcli connection add type ethernet slave-type bond con-name bond0-port2 ifname enp8s0 master bond0

These commands create profiles for enp7s0 and enp8s0, and add them to the bond0 connection.

Configure the IPv4 settings:
- To use DHCP, no action is required.
To set a static IPv4 address, network mask, default gateway, and DNS server to the bond0 connection, enter:

# nmcli connection modify bond0 ipv4.addresses 192.0.2.1/24 ipv4.gateway 192.0.2.254 ipv4.dns 192.0.2.253 ipv4.dns-search example.com ipv4.method manual
Configure the IPv6 settings:
- To use stateless address autoconfiguration (SLAAC), no action is required.
To set a static IPv6 address, network mask, default gateway, and DNS server to the bond0 connection, enter:

# nmcli connection modify bond0 ipv6.addresses 2001:db8:1::1/64 ipv6.gateway 2001:db8:1::fffe ipv6.dns 2001:db8:1::fffd ipv6.dns-search example.com ipv6.method manual
Optional: If you want to set any parameters on the bond ports, use the following command:
# nmcli connection modify bond0-port1 bond-port.<parameter> <value>

Configure Red Hat Enterprise Linux to enable all ports automatically when the bond is enabled:
# nmcli connection modify bond0 connection.autoconnect-ports 1

Activate the bond:

# nmcli connection up bond0
Verification
Temporarily remove the network cable from the host.
Note that there is no method to properly test link failure events using software utilities. Tools that deactivate connections, such as nmcli, show only the bonding driver’s ability to handle port configuration changes and not actual link failure events.
Display the status of the bond:

# cat /proc/net/bonding/bond0
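For an active-backup bond, the output typically resembles the following abridged sketch; interface names, driver details, and counters depend on your system:

Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: enp7s0
MII Status: up

Slave Interface: enp7s0
MII Status: up

Slave Interface: enp8s0
MII Status: up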
15.10.2. Configuring a network bridge for network bonds by using nmcli
To create a network bridge for network bonds, configure a bond interface that combines multiple network interfaces for improved traffic handling. As a result, VMs can use the network bridge to access the network through the bonded network interfaces. To configure this, you can use the nmcli utility.
Prerequisites
- You have created and configured a network bond. For instructions, see Configuring network interfaces on a network bond by using nmcli.
Procedure
Create a bridge interface:

# nmcli connection add type bridge con-name br0 ifname br0 ipv4.method disabled ipv6.method disabled

Add the bond0 bond to the br0 bridge:

# nmcli connection modify bond0 master br0

Configure Red Hat Enterprise Linux to enable all ports automatically when the bridge is enabled:

# nmcli connection modify br0 connection.autoconnect-ports 1

Reactivate the bridge:

# nmcli connection up br0
Verification
Use the ip utility to display the link status of Ethernet devices that are ports of a specific bridge:

# ip link show master br0
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:38:a9:4d brd ff:ff:ff:ff:ff:ff
...

Use the bridge utility to display the status of Ethernet devices that are ports of any bridge device:

# bridge link show
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 master br0 state forwarding priority 32 cost 100
...

To display the status for a specific Ethernet device, use the bridge link show dev <ethernet_device_name> command.
15.10.3. Creating a virtual network in libvirt with an existing bond interface
To enable virtual machines (VMs) to use the br0 bridge with the bond, first add a virtual network to the libvirtd service that uses this bridge.
Prerequisites
- You installed the libvirt package.
- You started and enabled the libvirtd service.
- You configured the br0 device with the bond on Red Hat Enterprise Linux. For instructions, see Configuring a network bridge for network bonds by using nmcli.
Procedure
Create the ~/bond0-bridge.xml file with the following content:

<network>
  <name>bond0-bridge</name>
  <forward mode="bridge" />
  <bridge name="br0" />
</network>

Use the ~/bond0-bridge.xml file to create a new virtual network in libvirt:

# virsh net-define ~/bond0-bridge.xml

Remove the ~/bond0-bridge.xml file:

# rm ~/bond0-bridge.xml

Start the bond0-bridge virtual network:

# virsh net-start bond0-bridge

Configure the bond0-bridge virtual network to start automatically when the libvirtd service starts:

# virsh net-autostart bond0-bridge
Verification
Display the list of virtual networks:

# virsh net-list
 Name           State    Autostart   Persistent
----------------------------------------------------
 bond0-bridge   active   yes         yes
 ...
15.10.4. Configuring virtual machines to use a bond interface
To configure a VM to use a bridge device with a bond interface on the host, create a new VM that uses the bond0-bridge virtual network or update the settings of existing VMs to use this network.
Perform this procedure on the RHEL host.
Prerequisites
- You configured the bond0-bridge virtual network in libvirtd. For instructions, see Creating a virtual network in libvirt with an existing bond interface.
Procedure
To create a new VM and configure it to use the bond0-bridge network, pass the --network network:bond0-bridge option to the virt-install utility when you create the VM:

# virt-install ... --network network:bond0-bridge

To change the network settings of an existing VM:

Connect the VM's network interface to the bond0-bridge virtual network:

# virt-xml <example_vm> --edit --network network=bond0-bridge

Shut down the VM, and start it again:

# virsh shutdown <example_vm>
# virsh start <example_vm>
Verification
Display the virtual network interfaces of the VM on the host:

# virsh domiflist <example_vm>
 Interface   Type     Source         Model    MAC
-------------------------------------------------------------------
 vnet1       bridge   bond0-bridge   virtio   52:54:00:c5:98:1c

Display the interfaces attached to the br0 bridge:

# ip link show master br0

Note that the libvirtd service dynamically updates the bridge's configuration. When you start a VM that uses the bond0-bridge network, the corresponding vnet* device on the host is displayed as a port of the bridge.
15.11. Configuring the passt user-space connection
If you require non-privileged access to a virtual network, for example when using a session connection of libvirt, you can configure your virtual machine (VM) to use the passt networking back end.
Prerequisites
The passt package has been installed on your system:

# dnf install passt
Procedure
Open the XML configuration of the VM on which you want to use a passt connection. For example:

# virsh edit <testguest1>

In the <devices> section, add an <interface type='user'> element that uses passt as its backend type.

For example, the following configuration sets up a passt connection that uses addresses and routes copied from the host interface associated with the first default route:
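A minimal sketch of such an interface element, based on libvirt's passt back-end syntax:

<devices>
  ...
  <interface type='user'>
    <backend type='passt'/>
    <model type='virtio'/>
  </interface>
  ...
</devices>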
Optionally, when using passt, you can specify multiple <portForward> elements to forward incoming network traffic for the host to this VM interface. You can also customize interface IP addresses. For example:
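The following is a sketch of such a configuration, reconstructed from the parameters described below by using libvirt's <portForward> syntax:

<interface type='user'>
  <backend type='passt'/>
  <mac address="52:54:00:98:d8:b7"/>
  <source dev='eth0'/>
  <ip family='ipv4' address='192.0.2.1' prefix='24'/>
  <ip family='ipv6' address='::ffff:c000:201'/>
  <portForward proto='tcp'>
    <range start='2022' to='22'/>
  </portForward>
  <portForward proto='tcp' address='2001:db8:ac10:fd01::1:10' dev='eth0'>
    <range start='8080'/>
    <range start='4433' to='3444'/>
  </portForward>
  <portForward proto='udp' address='1.2.3.4'>
    <range start='5000' end='5009' to='6000'/>
    <range start='5016' end='5020' to='6016'/>
  </portForward>
</interface>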
This example configuration sets up a passt connection with the following parameters:

- The VM copies the network routes for forwarding traffic from the eth0 host interface.
- The interface MAC is set to 52:54:00:98:d8:b7. If unset, a random one will be generated.
- The IPv4 address is set to 192.0.2.1/24, and the IPv6 address is set to ::ffff:c000:201.
- The TCP port 2022 on the host forwards its network traffic to port 22 on the VM.
- The TCP address 2001:db8:ac10:fd01::1:10 on host interface eth0 and port 8080 forwards its network traffic to port 8080 on the VM. Port 4433 forwards to port 3444 on the VM.
- The UDP address 1.2.3.4 and ports 5000 - 5009 and 5016 - 5020 on the host forward their network traffic to ports 6000 - 6009 and 6016 - 6020 on the VM.
- Save the XML configuration.
Verification
Start or restart the VM you configured with passt:

# virsh reboot <vm-name>
# virsh start <vm-name>

If the VM boots successfully, it is now using the passt networking back end.
Chapter 16. Managing GPU devices in virtual machines
To prepare your virtual machine (VM) for running AI workloads or to enhance the graphical performance of the VM, you can assign a RHEL host GPU to the VM.
- You can detach the GPU from the host and pass full control of the GPU directly to the VM.
- You can create multiple mediated devices from a physical GPU, and assign these devices as virtual GPUs (vGPUs) to multiple guests. This is currently only supported on selected NVIDIA GPUs.
GPU assignment is currently only supported on Intel 64 and AMD64 systems.
16.1. Assigning a GPU to a virtual machine
To give a virtual machine (VM) access to and control of a GPU that is attached to the host system, you must configure the host to pass direct control of the GPU to the VM.
If you are looking for information about assigning a virtual GPU, see Managing NVIDIA vGPU devices.
Prerequisites
You must enable IOMMU support on the host machine kernel.
On an Intel host, you must enable VT-d:
Regenerate the GRUB configuration with the intel_iommu=on and iommu=pt parameters:

# grubby --args="intel_iommu=on iommu=pt" --update-kernel DEFAULT

- Reboot the host.
On an AMD host, you must enable AMD-Vi.
Note that on AMD hosts, IOMMU is enabled by default. You can add iommu=pt to switch it to pass-through mode:

Regenerate the GRUB configuration with the iommu=pt parameter:

# grubby --args="iommu=pt" --update-kernel DEFAULT

Note: The pt option only enables IOMMU for devices used in pass-through mode and provides better host performance. However, not all hardware supports the option. You can still assign devices even when this option is not enabled.

- Reboot the host.
Procedure
Prevent the driver from binding to the GPU.
Identify the PCI bus address to which the GPU is attached:

# lspci -Dnn | grep VGA
0000:02:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK106GL [Quadro K4000] [10de:11fa] (rev a1)

Prevent the host's graphics driver from using the GPU. To do so, use the GPU PCI ID with the pci-stub driver.

For example, the following command prevents the driver from binding to the GPU with the 10de:11fa vendor-device ID:

# grubby --args="pci-stub.ids=10de:11fa" --update-kernel DEFAULT

- Reboot the host.
Optional: If certain GPU functions, such as audio, cannot be passed through to the VM due to support limitations, you can modify the driver bindings of the endpoints within an IOMMU group to pass through only the necessary GPU functions.
Convert the GPU settings to XML and note the PCI address of the endpoints that you want to prevent from attaching to the host drivers.

To do so, convert the GPU's PCI bus address to a libvirt-compatible format by adding the pci_ prefix to the address, and converting the delimiters to underscores.

For example, the following command displays the XML configuration of the GPU attached at the 0000:02:00.0 bus address:

# virsh nodedev-dumpxml pci_0000_02_00_0
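Abridged, the output typically shows the device's IOMMU group with both the graphics and the audio function endpoints; the group number in this sketch is illustrative:

<device>
  <name>pci_0000_02_00_0</name>
  ...
  <capability type='pci'>
    ...
    <iommuGroup number='8'>
      <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
      <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
    </iommuGroup>
  </capability>
</device>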
Prevent the endpoints from attaching to the host driver.

In this example, to assign the GPU to a VM, prevent the endpoints that correspond to the audio function, <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>, from attaching to the host audio driver, and instead attach the endpoints to VFIO-PCI:

# driverctl set-override 0000:02:00.1 vfio-pci
Attach the GPU to the VM.
Create an XML configuration file for the GPU by using the PCI bus address.
For example, you can create the following XML file, GPU-Assign.xml, by using parameters from the GPU's bus address:
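A sketch of such a file, using the 0000:02:00.0 bus address identified earlier:

<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
  </source>
</hostdev>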
- Save the file on the host system.
Merge the file with the VM’s XML configuration.
For example, the following command merges the GPU XML file, GPU-Assign.xml, with the XML configuration file of the System1 VM:

# virsh attach-device System1 --file /home/GPU-Assign.xml --persistent
Device attached successfully.

Note: The GPU is attached as a secondary graphics device to the VM. Assigning a GPU as the primary graphics device is not supported, and Red Hat does not recommend removing the primary emulated graphics device in the VM's XML configuration.
Verification
- The device is displayed under the <devices> section in the VM's XML configuration.
16.2. Managing NVIDIA vGPU devices
The vGPU feature makes it possible to divide a physical NVIDIA GPU device into multiple virtual devices, referred to as mediated devices. These mediated devices can then be assigned to multiple virtual machines (VMs) as virtual GPUs. As a result, these VMs can share the performance of a single physical GPU.
Assigning a physical GPU to VMs, with or without using mediated devices, makes it impossible for the host to use the GPU.
16.2.1. Setting up NVIDIA vGPU devices
To set up the NVIDIA vGPU feature, you need to download NVIDIA vGPU drivers for your GPU device, create mediated devices, and assign them to the intended virtual machines.
Prerequisites
Your GPU supports vGPU mediated devices. For an up-to-date list of NVIDIA GPUs that support creating vGPUs, see the NVIDIA vGPU software documentation.
If you do not know which GPU your host is using, install the lshw package and use the lshw -C display command. The following example shows that the system is using an NVIDIA Tesla P4 GPU, which is compatible with vGPU.
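Abridged, the output might look like the following sketch; the exact fields depend on your lshw version and hardware:

# lshw -C display
  *-display
       description: 3D controller
       product: GP104GL [Tesla P4]
       vendor: NVIDIA Corporation
       bus info: pci@0000:01:00.0
       ...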
Procedure
- Download the NVIDIA vGPU drivers and install them on your system. For instructions, see the NVIDIA documentation.
If the NVIDIA software installer did not create the /etc/modprobe.d/nvidia-installer-disable-nouveau.conf file, create a conf file of any name in /etc/modprobe.d/, and add the following lines in the file:

blacklist nouveau
options nouveau modeset=0

Regenerate the initial RAM disk for the current kernel, then reboot:
# dracut --force
# reboot

Check that the kernel has loaded the nvidia_vgpu_vfio module and that the nvidia-vgpu-mgr.service service is running, for example by using the lsmod and systemctl utilities:

# lsmod | grep nvidia_vgpu_vfio
# systemctl status nvidia-vgpu-mgr.service

In addition, if creating vGPU on an NVIDIA GPU device that is based on the Ampere (or later) architecture, ensure that virtual functions are enabled for the physical GPU. For instructions, see the NVIDIA documentation.
Generate a device UUID:

# uuidgen
30820a6f-b1a5-4503-91ca-0c10ba58692a

Prepare an XML file with a configuration of the mediated device, based on the detected GPU hardware. For example, the following configures a mediated device of the nvidia-63 vGPU type on an NVIDIA Tesla P4 card that runs on the 0000:01:00.0 PCI bus and uses the UUID generated in the previous step.
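A sketch of such a file, following libvirt's node-device syntax for mediated devices:

<device>
  <parent>pci_0000_01_00_0</parent>
  <capability type="mdev">
    <type id="nvidia-63"/>
    <uuid>30820a6f-b1a5-4503-91ca-0c10ba58692a</uuid>
  </capability>
</device>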
Define a vGPU mediated device based on the XML file you prepared. For example:

# virsh nodedev-define vgpu-test.xml
Node device mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0 created from vgpu-test.xml

Optional: Verify that the mediated device is listed as inactive:
# virsh nodedev-list --cap mdev --inactive
mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0

Start the vGPU mediated device you created:
# virsh nodedev-start mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0
Device mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0 started

Optional: Ensure that the mediated device is listed as active:
# virsh nodedev-list --cap mdev
mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0

Set the vGPU device to start automatically after the host reboots:
# virsh nodedev-autostart mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0
Device mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0 marked as autostarted

Attach the mediated device to a VM with which you want to share the vGPU resources. To do so, add the following lines, along with the previously generated UUID, to the <devices/> section of the XML configuration of the VM.
To attach a single vGPU to a VM:
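The lines match the mdev configuration segment shown in the removal procedure later in this chapter:

<hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci'>
  <source>
    <address uuid='30820a6f-b1a5-4503-91ca-0c10ba58692a'/>
  </source>
</hostdev>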
Note that each UUID can only be assigned to one VM at a time.
To attach multiple vGPUs to a VM:
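A sketch with two such hostdev elements; the second UUID is a hypothetical placeholder for another mediated device that you created:

<hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci'>
  <source>
    <address uuid='30820a6f-b1a5-4503-91ca-0c10ba58692a'/>
  </source>
</hostdev>
<hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci'>
  <source>
    <address uuid='<another_device_uuid>'/>
  </source>
</hostdev>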
- For full functionality of the vGPU mediated devices to be available on the assigned VMs, set up NVIDIA vGPU guest software licensing on the VMs. For further information and instructions, see the NVIDIA Virtual GPU Software License Server User Guide.
Verification
Query the capabilities of the vGPU you created, and ensure it is listed as active and persistent.
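One way to query this is the virsh nodedev-info command; the following output is an illustrative sketch, and the exact fields can vary between libvirt versions:

# virsh nodedev-info mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0
Name:           mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0
Parent:         pci_0000_01_00_0
Active:         yes
Persistent:     yes
Autostart:      yes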
Start the VM and verify that the guest operating system detects the mediated device as an NVIDIA GPU. For example, if the VM uses Linux:

# lspci -d 10de: -k
07:00.0 VGA compatible controller: NVIDIA Corporation GV100GL [Tesla V100 SXM2 32GB] (rev a1)
        Subsystem: NVIDIA Corporation Device 12ce
        Kernel driver in use: nvidia
        Kernel modules: nouveau, nvidia_drm, nvidia
Troubleshooting
- When using RHEL 10 in a VM, the only available display protocol is Wayland. However, Wayland is currently not supported by the NVIDIA vGPU guest driver. As a consequence, you cannot start a GNOME desktop session that runs on the vGPU. For more information, see the NVIDIA vGPU documentation.
16.2.2. Removing NVIDIA vGPU devices
To change the configuration of assigned vGPU mediated devices, you need to remove the existing devices from the assigned VMs.
Prerequisites
- The VM from which you want to remove the device is shut down.
Procedure
Obtain the ID of the mediated device that you want to remove:

# virsh nodedev-list --cap mdev
mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0

Stop the running instance of the vGPU mediated device:

# virsh nodedev-destroy mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0
Destroyed node device 'mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0'

Optional: Ensure the mediated device has been deactivated:
# virsh nodedev-list --cap mdev --inactive
mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0

Remove the device from the XML configuration of the VM. To do so, use the virsh edit utility to edit the XML configuration of the VM, and remove the mdev's configuration segment. The segment will look similar to the following:

<hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci'>
  <source>
    <address uuid='30820a6f-b1a5-4503-91ca-0c10ba58692a'/>
  </source>
</hostdev>

Note that stopping and detaching the mediated device does not delete it, but rather keeps it as defined. As such, you can restart and attach the device to a different VM.
Optional: To delete the stopped mediated device, remove its definition:

# virsh nodedev-undefine mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0
Undefined node device 'mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0'
Verification
If you only stopped and detached the device, ensure the mediated device is listed as inactive:

# virsh nodedev-list --cap mdev --inactive
mdev_30820a6f_b1a5_4503_91ca_0c10ba58692a_0000_01_00_0

If you also deleted the device, ensure the following command does not display it:

# virsh nodedev-list --cap mdev
16.2.3. Obtaining NVIDIA vGPU information about your system
To evaluate the capabilities of the vGPU features available to you, you can obtain additional information about the mediated devices on your system, such as:
- How many mediated devices of a given type can be created
- Which mediated devices are already configured on your system.
Procedure
To see the available GPU devices on your host that can support vGPU mediated devices, use the virsh nodedev-list --cap mdev_types command. For example, the following shows a system with two NVIDIA Quadro RTX6000 devices:

# virsh nodedev-list --cap mdev_types
pci_0000_5b_00_0
pci_0000_9b_00_0

To display vGPU types supported by a specific GPU device, as well as additional metadata, use the virsh nodedev-dumpxml command.
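For example, to inspect the first device listed in the previous step:

# virsh nodedev-dumpxml pci_0000_5b_00_0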
16.2.4. Remote desktop streaming services for NVIDIA vGPU
The following remote desktop streaming services are supported on the RHEL 10 hypervisor with NVIDIA vGPU or NVIDIA GPU passthrough enabled:
- HP ZCentral Remote Boost/Teradici
- NICE DCV
- Mechdyne TGX
For support details, see the appropriate vendor support matrix.
Chapter 17. Optimizing virtual machine performance
Virtual machines (VMs) always experience some degree of performance deterioration in comparison to the host. However, you can use a variety of methods to minimize the performance impact of virtualization in RHEL 10, so that your hardware infrastructure resources can be used as efficiently as possible.
17.1. What influences virtual machine performance
Virtual machines (VMs) run as user-space processes on the host. The hypervisor needs to convert the host’s system resources so that the VMs can use them. As a consequence, a portion of the resources is consumed by the conversion, and the VM cannot achieve the same performance efficiency as the host.
17.1.1. The impact of virtualization on system performance
More specific reasons for VM performance loss include:
- Virtual CPUs (vCPUs) are implemented as threads on the host, handled by the Linux scheduler.
- VMs do not automatically inherit optimization features, such as NUMA or huge pages, from the host kernel.
- Disk and network I/O settings of the host might have a significant performance impact on the VM.
- Network traffic typically travels to a VM through a software-based bridge.
- Depending on the host devices and their models, there might be significant overhead due to emulation of particular hardware.
The severity of the virtualization impact on the VM performance is influenced by a variety of factors, which include:

- The number of concurrently running VMs.
- The number of virtual devices used by each VM.
- The device types used by the VMs.
17.1.2. Reducing VM performance loss
RHEL 10 provides a number of features you can use to reduce the negative performance effects of virtualization. Notably:
- The TuneD service can automatically optimize the resource distribution and performance of your VMs.
- Block I/O tuning can improve the performance of the VM's block devices, such as disks.
- NUMA tuning can increase vCPU performance.
- Virtual networking can be optimized in various ways.
Tuning VM performance can have negative effects on other virtualization functions. For example, it can make migrating the modified VM more difficult.
17.2. Optimizing virtual machine performance by using TuneD
For an automated method of optimizing the performance of your virtual machines (VMs), you can use the TuneD utility.
TuneD is a tuning profile delivery mechanism that adapts RHEL for certain workload characteristics, such as requirements for CPU-intensive tasks or storage-network throughput responsiveness. It provides a number of tuning profiles that are pre-configured to enhance performance and reduce power consumption in a number of specific use cases. You can edit these profiles or create new profiles to create performance solutions tailored to your environment, including virtualized environments.
To optimize RHEL 10 for virtualization, use the following profiles:
- For RHEL 10 virtual machines, use the virtual-guest profile. It is based on the generally applicable throughput-performance profile, but also decreases the swappiness of virtual memory.
- For RHEL 10 virtualization hosts, use the virtual-host profile. This enables more aggressive writeback of dirty memory pages, which benefits the host performance.
Prerequisites
- The TuneD service is installed and enabled.
Procedure
List the available TuneD profiles:

# tuned-adm list
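Abridged, the listing typically looks like the following sketch; the available profiles depend on your installation:

Available profiles:
- balanced                - General non-specialized TuneD profile
- desktop                 - Optimize for the desktop use-case
- throughput-performance  - Broadly applicable tuning that provides excellent performance across a variety of common server workloads
- virtual-guest           - Optimize for running inside a virtual guest
- virtual-host            - Optimize for running KVM guests
Current active profile: balanced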
Optional: Create a new TuneD profile or edit an existing TuneD profile. For more information, see Managing TuneD profiles.

Activate a TuneD profile:

# tuned-adm profile selected-profile

To optimize a virtualization host, use the virtual-host profile:
# tuned-adm profile virtual-host

On a RHEL guest operating system, use the virtual-guest profile:

# tuned-adm profile virtual-guest
Verification
Display the active profile for TuneD:

# tuned-adm active
Current active profile: virtual-host

Ensure that the TuneD profile settings have been applied on your system:

# tuned-adm verify
Verification succeeded, current system settings match the preset profile.
See tuned log file ('/var/log/tuned/tuned.log') for details.
17.3. Virtual machine performance optimization for specific workloads
Virtual machines (VMs) are frequently dedicated to perform a specific workload. You can improve the performance of your VMs by optimizing their configuration for the intended workload.
| Use case | IOThread | vCPU pinning | vNUMA pinning | huge pages | multi-queue |
|---|---|---|---|---|---|
| Database | For database disks | Yes* | Yes* | Yes* | Yes, see: multi-queue virtio-blk, virtio-scsi |
| Virtualized Network Function (VNF) | No | Yes | Yes | Yes | Yes, see: multi-queue virtio-net |
| High Performance Computing (HPC) | No | Yes | Yes | Yes | No |
| Backup Server | For backup disks | No | No | No | Yes, see: multi-queue virtio-blk, virtio-scsi |
| VM with many CPUs (Usually more than 32) | No | Yes* | Yes* | No | No |
| VM with large RAM (Usually more than 128 GB) | No | No | Yes* | Yes | No |
* If the VM has enough CPUs and RAM to use more than one NUMA node.
A VM can fit in more than one category of use cases. In this situation, you should apply all of the suggested configurations.
17.4. Configuring virtual machine memory
To improve the performance of a virtual machine (VM), you can assign additional host RAM to the VM. Similarly, you can decrease the amount of memory allocated to a VM so the host memory can be allocated to other VMs or tasks.
17.4.1. Memory overcommitment
To ensure optimal use of memory resources available to your host, memory overcommitment is enabled by default in RHEL. By using memory overcommit, you can assign more memory to virtual machines (VMs) than is available to your host. The RHEL kernel then automatically assigns memory to VMs that require it.
This is because VMs running on the KVM hypervisor do not have dedicated blocks of physical RAM assigned to them. Instead, each VM functions as a Linux process where the host’s Linux kernel allocates memory only when requested.
In addition, the host’s memory manager can move the VM’s memory between its own physical memory and swap space. When memory overcommitment is enabled, the kernel can decide to allocate less physical memory than is requested by a VM, because often the requested amount of memory is not fully used by the VM’s process.
Note, however, that with frequent overcommitment for memory-intensive workloads, the system can still become unstable.
Memory overcommitment requires you to allocate sufficient swap space on the host physical machine to accommodate all VMs, as well as enough memory for the host physical machine’s processes. For instructions on the basic recommended swap space size, see What is the recommended swap size for Red Hat platforms? (Red Hat Knowledgebase).
Possible methods to deal with memory shortages on the host include the following:
- Allocate less memory per VM.
- Add more physical memory to the host.
- Use larger swap space.
A VM will run slower if it is swapped frequently. In addition, overcommitting can cause the system to run out of memory (OOM), which may lead to the Linux kernel shutting down important system processes.
Memory overcommit is not supported with device assignment. This is because when device assignment is in use, all virtual machine memory must be statically pre-allocated to enable direct memory access (DMA) with the assigned device.
17.4.2. Adding and removing virtual machine memory by using virtio-mem
RHEL 10 provides the virtio-mem paravirtualized memory device. This device makes it possible to dynamically add or remove host memory in virtual machines (VMs).
17.4.2.1. Overview of virtio-mem
virtio-mem is a paravirtualized memory device that can be used to dynamically add or remove host memory in virtual machines (VMs). For example, you can use this device to move memory resources between running VMs or to resize VM memory in cloud setups based on your current requirements.
By using virtio-mem, you can increase the memory of a VM beyond its initial size, and shrink it back to its original size, in units that can range from 2 MiB to several hundred mebibytes (MiB), depending on the memory backing that is used and the operating system running inside the VM. Note, however, that virtio-mem also relies on a specific guest operating system configuration, especially to reliably unplug memory.
virtio-mem feature limitations
virtio-mem is currently not compatible with the following features:
- Using memory locking for real-time applications on the host
- Using encrypted virtualization on the host
- Combining virtio-mem with memballoon inflation and deflation on the host
- Unloading or reloading the virtio_mem driver in a VM
- Using vhost-user devices, with the exception of virtiofs
17.4.2.2. Configuring memory onlining in virtual machines
Before using virtio-mem to attach memory to a running virtual machine (also known as memory hot-plugging), you must configure the virtual machine (VM) operating system to automatically set the hot-plugged memory to an online state. Otherwise, the guest operating system is not able to use the additional memory.
By default in RHEL, memory onlining is configured with udev rules. However, when using virtio-mem, configure memory onlining directly in the kernel.
Prerequisites
- The host uses the Intel 64, AMD64, ARM 64, or IBM Z CPU architecture.
- The host uses RHEL 10 as the operating system.
VMs running on the host use one of the following operating system versions:
On Intel 64 and AMD64 hosts: RHEL 8.10, RHEL 9.4 or later, RHEL 10.0 or later, or a supported 64-bit version of Windows

Important: Unplugging memory from a running VM is disabled by default in RHEL 8.10 VMs.
For a list of supported Windows versions, see: Certified Guest Operating Systems
- On ARM 64 hosts: RHEL 9.6 or later or RHEL 10.0 or later
- On IBM Z hosts: RHEL 9.7 or later or RHEL 10.1 or later
You have chosen the optimal configuration for memory onlining:

- online_movable
- online_kernel
- auto-movable

To learn about differences between these configurations, see: Comparison of memory onlining configurations
Procedure
To set memory onlining to use the online_movable configuration in the VM:

Set the memhp_default_state kernel command line parameter to online_movable:

# grubby --update-kernel=ALL --remove-args=memhp_default_state --args=memhp_default_state=online_movable

- Reboot the VM.
To set memory onlining to use the online_kernel configuration in the VM:

Set the memhp_default_state kernel command line parameter to online_kernel:

# grubby --update-kernel=ALL --remove-args=memhp_default_state --args=memhp_default_state=online_kernel

- Reboot the VM.
To use the auto-movable memory onlining policy in the VM:

Set the memhp_default_state kernel command line parameter to online:

# grubby --update-kernel=ALL --remove-args=memhp_default_state --args=memhp_default_state=online

Set the memory_hotplug.online_policy kernel command line parameter to auto-movable:

# grubby --update-kernel=ALL --remove-args="memory_hotplug.online_policy" --args=memory_hotplug.online_policy=auto-movable

Optional: To further tune the auto-movable onlining policy, change the memory_hotplug.auto_movable_ratio and memory_hotplug.auto_movable_numa_aware parameters:

# grubby --update-kernel=ALL --remove-args="memory_hotplug.auto_movable_ratio" --args=memory_hotplug.auto_movable_ratio=<percentage>
# grubby --update-kernel=ALL --remove-args="memory_hotplug.auto_movable_numa_aware" --args=memory_hotplug.auto_movable_numa_aware=<y/n>
The
memory_hotplug.auto_movable_ratio parametersets the maximum ratio of memory only available for movable allocations compared to memory available for any allocations. The ratio is expressed in percents and the default value is: 301 (%), which is a 3:1 ratio. The
memory_hotplug.auto_movable_numa_awareparameter controls whether thememory_hotplug.auto_movable_ratioparameter applies to memory across all available NUMA nodes or only for memory within a single NUMA node. The default value is: y (yes)For example, if the maximum ratio is set to 301% and the
memory_hotplug.auto_movable_numa_awareis set to y (yes), than the 3:1 ratio is applied even within the NUMA node with the attachedvirtio-memdevice. If the parameter is set to n (no), the maximum 3:1 ratio is applied only for all the NUMA nodes as a whole.Additionally, if the ratio is not exceeded, the newly hot-plugged memory will be available only for movable allocations. Otherwise, the newly hot-plugged memory will be available for both movable and unmovable allocations.
-
The
- Reboot the VM.
Verification
To see if the online_movable configuration has been set correctly, check the current value of the memhp_default_state kernel parameter:

# cat /sys/devices/system/memory/auto_online_blocks
online_movable

To see if the online_kernel configuration has been set correctly, check the current value of the memhp_default_state kernel parameter:

# cat /sys/devices/system/memory/auto_online_blocks
online_kernel

To see if the auto-movable configuration has been set correctly, check the following kernel parameters:

memhp_default_state:

# cat /sys/devices/system/memory/auto_online_blocks
online

memory_hotplug.online_policy:

# cat /sys/module/memory_hotplug/parameters/online_policy
auto-movable

memory_hotplug.auto_movable_ratio:

# cat /sys/module/memory_hotplug/parameters/auto_movable_ratio
301

memory_hotplug.auto_movable_numa_aware:

# cat /sys/module/memory_hotplug/parameters/auto_movable_numa_aware
y
17.4.2.3. Attaching a virtio-mem device to virtual machines
To attach additional memory to a running virtual machine (also known as memory hot-plugging) and afterwards be able to resize the hot-plugged memory, you can use a virtio-mem device.
Specifically, you can use libvirt XML configuration files and virsh commands to define and attach virtio-mem devices to virtual machines (VMs).
Prerequisites
- The host uses the Intel 64, AMD64, ARM 64, or IBM Z CPU architecture.
- The host uses RHEL 10 as the operating system.
VMs running on the host use one of the following operating system versions:
On Intel 64 and AMD64 hosts: RHEL 8.10, RHEL 9.4 or later, RHEL 10.0 or later, or a supported 64-bit version of Windows

Important: Unplugging memory from a running VM is disabled by default in RHEL 8.10 VMs.
For a list of supported Windows versions, see: Certified Guest Operating Systems
- On ARM 64 hosts: RHEL 9.6 or later or RHEL 10.0 or later
- On IBM Z hosts: RHEL 9.7 or later or RHEL 10.1 or later
- The VM has memory onlining configured. For instructions, see: Configuring memory onlining in virtual machines
Procedure
Ensure that the XML configuration of the target VM includes the maxMemory parameter:
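A minimal sketch of the relevant element in the domain XML; surrounding elements are omitted:

<domain type='kvm'>
  ...
  <maxMemory unit='GiB'>128</maxMemory>
  ...
</domain>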
In this example, the XML configuration of the testguest1 VM defines a maxMemory parameter with a 128 gibibyte (GiB) size. The maxMemory size specifies the maximum memory the VM can use, which includes both initial and hot-plugged memory.

Create and open an XML file to define virtio-mem devices on the host, for example:

# vim virtio-mem-device.xml

Add XML definitions of virtio-mem devices to the file and save it:
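A sketch of two such device definitions that matches the parameters described below; the vNUMA node numbers and requested sizes are illustrative, and the current size is reported by libvirt rather than set in the file:

<memory model='virtio-mem'>
  <target>
    <size unit='GiB'>48</size>
    <node>0</node>
    <block unit='MiB'>2</block>
    <requested unit='GiB'>16</requested>
  </target>
  <alias name='ua-virtiomem0'/>
</memory>
<memory model='virtio-mem'>
  <target>
    <size unit='GiB'>48</size>
    <node>1</node>
    <block unit='MiB'>2</block>
    <requested unit='GiB'>0</requested>
  </target>
  <alias name='ua-virtiomem1'/>
</memory>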
In this example, two virtio-mem devices are defined with the following parameters:

- size: This is the maximum size of the device. In the example, it is 48 GiB. The size must be a multiple of the block size.
- node: This is the assigned vNUMA node for the virtio-mem device.
- block: This is the block size of the device. It must be at least the size of a Transparent Huge Page (THP), which is 2 MiB on the Intel 64 and AMD64 CPU architectures. On the ARM 64 architecture, the size of a THP can be 2 MiB or 512 MiB, depending on the base page size. The 2 MiB block size on the Intel 64 or AMD64 architecture is usually a good default choice. When using virtio-mem with Virtual Function I/O (VFIO) or mediated devices (mdev), the total number of blocks across all virtio-mem devices must not be larger than 32768, otherwise the plugging of RAM might fail.
- requested: This is the amount of memory you attach to the VM with the virtio-mem device. However, it is just a request towards the VM, and it might not be resolved successfully, for example if the VM is not properly configured. The requested size must be a multiple of the block size and cannot exceed the maximum defined size.
- current: This represents the current size the virtio-mem device provides to the VM. The current size can differ from the requested size, for example when requests cannot be completed or when rebooting the VM.
- alias: This is an optional user-defined alias that you can use to specify the intended virtio-mem device, for example when editing the device with libvirt commands. All user-defined aliases in libvirt must start with the "ua-" prefix.

Apart from these specific parameters, libvirt handles the virtio-mem device like any other PCI device. For more information on managing PCI devices attached to VMs, see: Managing virtual devices
Use the XML file to attach the defined virtio-mem devices to a VM. For example, to permanently attach the two devices defined in the virtio-mem-device.xml file to the running testguest1 VM:

# virsh attach-device testguest1 virtio-mem-device.xml --live --config

The --live option attaches the device to a running VM only, without persistence between boots. The --config option makes the configuration changes persistent. You can also attach the device to a shutdown VM without the --live option.

Optional: To dynamically change the requested size of a virtio-mem device attached to a running VM, use the virsh update-memory-device command:
# virsh update-memory-device testguest1 --alias ua-virtiomem0 --requested-size 4GiB

In this example:

- testguest1 is the VM you want to update.
- --alias ua-virtiomem0 is the virtio-mem device specified by a previously defined alias.
- --requested-size 4GiB changes the requested size of the virtio-mem device to 4 GiB.

Warning: Unplugging memory from a running VM by reducing the requested size might be unreliable. Whether this process succeeds depends on various factors, such as the memory onlining policy that is used. In some cases, the guest operating system cannot complete the request successfully, because changing the amount of hot-plugged memory is not possible at that time. Additionally, unplugging memory from a running VM is disabled by default in RHEL 8.10 VMs.

Optional: To unplug a virtio-mem device from a shut-down VM, use the virsh detach-device command:

# virsh detach-device testguest1 virtio-mem-device.xml

Optional: To unplug a virtio-mem device from a running VM:

Change the requested size of the virtio-mem device to 0, otherwise the attempt to unplug a virtio-mem device from a running VM will fail:

# virsh update-memory-device testguest1 --alias ua-virtiomem0 --requested-size 0

Unplug the virtio-mem device from the running VM:

# virsh detach-device testguest1 virtio-mem-device.xml --config
Verification
In the VM, check the available RAM and see if the total amount now includes the hot-plugged memory:
# free -h
              total   used   free   shared   buff/cache   available
Mem:           31Gi  5.5Gi   14Gi    1.3Gi         11Gi        23Gi
Swap:         8.0Gi     0B  8.0Gi

The current amount of plugged-in RAM can also be viewed on the host by displaying the XML configuration of the running VM, for example with the virsh dumpxml command.

In this example:

- <currentMemory unit='GiB'>31</currentMemory> represents the total RAM available in the VM from all sources.
- <current unit='GiB'>16</current> represents the current size of the plugged-in RAM provided by the virtio-mem device.
17.4.2.4. Comparison of memory onlining configurations
When attaching memory to a running RHEL virtual machine (also known as memory hot-plugging), you must set the hot-plugged memory to an online state in the virtual machine (VM) operating system. Otherwise, the system will not be able to use the memory.
To do this, you can use several different configurations. The following table summarizes the main considerations when choosing between the available memory onlining configurations.
| Configuration name | Unplugging memory from a VM | A risk of creating a memory zone imbalance | A potential use case | Memory requirements of the intended workload |
|---|---|---|---|---|
| online_movable | Hot-plugged memory can be reliably unplugged. | Yes | Hot-plugging a comparatively small amount of memory | Mostly user-space memory |
| auto-movable | Movable portions of hot-plugged memory can be reliably unplugged. | Minimal | Hot-plugging a large amount of memory | Mostly user-space memory |
| online_kernel | Hot-plugged memory cannot be reliably unplugged. | No | Unreliable memory unplugging is acceptable. | User-space or kernel-space memory |
A zone imbalance is a lack of available memory pages in one of the Linux memory zones. A zone imbalance can negatively impact the system performance. For example, the kernel might crash if it runs out of free memory for unmovable allocations. Usually, movable allocations contain mostly user-space memory pages and unmovable allocations contain mostly kernel-space memory pages.
17.4.3. Configuring virtual machines to use huge pages
In certain use cases, you can improve memory allocation for your virtual machines (VMs) by using huge pages instead of the default 4 KiB memory pages. For example, huge pages can improve performance for VMs with high memory utilization, such as database servers.
Prerequisites
- The host is configured to use huge pages in memory allocation. For instructions, see Configuring HugeTLB at boot time
- The selected VM is shut down.
Procedure
Open the XML configuration of the selected VM. For example, to edit a testguest VM, run the following command:

# virsh edit testguest

Adjust the huge page configuration of the VM. For example, to configure the VM to use 1 GiB huge pages, add the following lines to the <memoryBacking> section in the configuration:

<memoryBacking>
  <hugepages>
    <page size='1' unit='GiB'/>
  </hugepages>
</memoryBacking>
Verification
- Start the VM.
Confirm that the host has successfully allocated huge pages for the running VM. On the host, run the following command:

# grep Huge /proc/meminfo

When you add together the number of free and reserved huge pages (HugePages_Free + HugePages_Rsvd), the result should be less than the total number of huge pages (HugePages_Total). The difference is the number of huge pages that are used by the running VM.
17.5. Optimizing virtual machine I/O performance
The input and output (I/O) capabilities of a virtual machine (VM) can significantly limit the VM’s overall efficiency. To address this, you can optimize a VM’s I/O by configuring block I/O parameters.
17.5.1. Tuning block I/O in virtual machines
When multiple block devices, such as storage drives, are being used by one or more virtual machines (VMs), you can adjust the I/O priority of specific virtual devices by modifying their I/O weights.
Increasing the I/O weight of a device increases its priority for I/O bandwidth, and as a result, it provides the device with more host resources. Similarly, reducing a device’s weight makes the device consume less host resources.
Each device’s weight value must be within the 100 to 1000 range. Alternatively, the value can be 0, which removes that device from per-device listings.
Procedure
Display the current <blkio> parameters for a VM:

# virsh dumpxml VM-name

Edit the I/O weight of a specified device:

# virsh blkiotune VM-name --device-weights device,I/O-weight

For example, the following changes the weight of the /dev/sda device in the testguest1 VM to 500:

# virsh blkiotune testguest1 --device-weights /dev/sda,500
Verification
Check that the VM's block I/O parameters have been configured correctly, for example by displaying them with the virsh blkiotune command:

# virsh blkiotune testguest1

Important: Certain kernels do not support setting I/O weights for specific devices. If the previous step does not display the weights as expected, it is likely that this feature is not compatible with your host kernel.
17.5.2. Configuring disk I/O throttling in virtual machines
When several virtual machines (VMs) are running simultaneously, they can interfere with system performance by using excessive disk I/O. To prevent this, you can use disk I/O throttling.
With disk I/O throttling, you can set a limit on disk I/O requests sent from the VMs to the host machine. This can prevent a VM from over-utilizing shared resources and impacting the performance of other VMs.
Disk I/O throttling can be useful in various situations, for example when VMs that belong to different customers are running on the same host, or when quality of service guarantees are given for different VMs. Disk I/O throttling can also be used to simulate slower disks.
To enable disk I/O throttling, set a limit on disk I/O requests sent from each block device attached to a VM to the host machine. The throttling is applied independently to each block device, and supports limits on both throughput and I/O operations.
Procedure
Use the virsh domblklist command to list the names of all the disk devices on a specified VM:

# virsh domblklist testguest1

Find the host block device where the virtual disk that you want to throttle is mounted. For example, if you want to throttle the sdb virtual disk from the previous step, you can find (for example by using the lsblk utility) that the disk is mounted on the /dev/nvme0n1p3 partition.

Set I/O limits for the block device by using the virsh blkiotune command:

# virsh blkiotune VM-name --parameter device,limit

The following example throttles the sdb disk on the testguest1 VM to 1000 read and write I/O operations per second and to 50 MB per second read and write throughput (52428800 bytes = 50 × 1024 × 1024):

# virsh blkiotune testguest1 --device-read-iops-sec /dev/nvme0n1p3,1000 --device-write-iops-sec /dev/nvme0n1p3,1000 --device-write-bytes-sec /dev/nvme0n1p3,52428800 --device-read-bytes-sec /dev/nvme0n1p3,52428800

Important: Red Hat does not support using the virsh blkdeviotune command to configure I/O throttling in VMs. For more information about unsupported features when using RHEL 10 as a VM host, see Unsupported features in RHEL 10 virtualization.
17.5.3. Enabling multi-queue on storage devices
When using virtio-blk or virtio-scsi storage devices in your virtual machines (VMs), you can improve your storage performance and scalability by using the multi-queue feature. With multi-queue, each virtual CPU (vCPU) can have a separate queue and interrupt to use without affecting other vCPUs.
Note that the multi-queue feature is enabled by default for the Q35 machine type, but on the i440fx machine type, you must enable it manually. You can tune the number of queues to be optimal for your workload, but the optimal number differs for each type of workload, and you must test which number of queues works best in your case.
Procedure
To enable multi-queue on a storage device, edit the XML configuration of the VM:

# virsh edit <example_vm>

In the XML configuration, find the intended storage device and change the queues parameter to use multiple I/O queues. Replace N with the number of vCPUs in the VM, up to 16.

A virtio-blk example:

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' queues='N'/>
  [...]
</disk>

A virtio-scsi example:

<controller type='scsi' index='0' model='virtio-scsi'>
  <driver queues='N'/>
</controller>
- Restart the VM for the changes to take effect.
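To check that the guest actually sees multiple queues, you can inspect the block multi-queue entries in sysfs from inside the guest. This is a sketch that assumes the virtio-blk disk appears as vda in the guest; with queues='4', you would expect one directory per queue:

# ls /sys/block/vda/mq/
0  1  2  3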
17.5.4. Configuring dedicated IOThreads
To improve the I/O performance of a disk on your virtual machine (VM), you can configure a dedicated IOThread that is used to manage the I/O operations of the VM’s disk.
Normally, the I/O operations of a disk are a part of the main QEMU thread, which can decrease the responsiveness of the VM as a whole during intensive I/O workloads. By separating the I/O operations to a dedicated IOThread, you can significantly increase the responsiveness and performance of your VM.
Procedure
- Shut down the selected VM if it is running.
On the host, add or edit the <iothreads> tag in the XML configuration of the VM. For example, to create a single IOThread for a testguest1 VM:

<domain type='kvm'>
  <name>testguest1</name>
  [...]
  <iothreads>1</iothreads>
  [...]
</domain>

Note: For optimal results, use only 1-2 IOThreads per CPU on the host.

Assign a dedicated IOThread to a VM disk. For example, to assign an IOThread with ID of 1 to a disk on the testguest1 VM:

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' iothread='1'/>
  [...]
</disk>

Note: IOThread IDs start from 1, and you must dedicate only a single IOThread to a disk. Usually, one dedicated IOThread per VM is sufficient for optimal performance.

When using virtio-scsi storage devices, assign a dedicated IOThread to the virtio-scsi controller. For example, to assign an IOThread with ID of 1 to a controller on the testguest1 VM:

<controller type='scsi' index='0' model='virtio-scsi'>
  <driver iothread='1'/>
</controller>
Verification
- Evaluate the impact of your changes on your VM performance. For details, see: Virtual machine performance monitoring tools
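You can also confirm that the IOThread configuration was applied, for example by searching the VM configuration for iothread entries. This is a sketch based on the testguest1 examples above; the exact lines depend on your configuration:

# virsh dumpxml testguest1 | grep -i iothread
<iothreads>1</iothreads>
    <driver name='qemu' type='qcow2' iothread='1'/>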
17.5.5. Configuring virtual disk caching
For intensive I/O workloads, selecting the optimal caching mode can significantly increase the virtual machine (VM) performance.
The KVM hypervisor provides several virtual disk caching modes:
writethrough
- Host page cache is used for reading only. Writes are reported as completed only when the data has been committed to the storage device. The sustained I/O performance is decreased, but this mode has good write guarantees.
writeback
- Host page cache is used for both reading and writing. Writes are reported as complete when data reaches the host's memory cache, not physical storage. This mode has faster I/O performance than writethrough, but it is possible to lose data on host failure.
none
- Host page cache is bypassed entirely. This mode relies directly on the write queue of the physical disk, so it has a predictable sustained I/O performance and offers good write guarantees on a stable guest. It is also a safe cache mode for VM live migration.
Procedure
- Shut down the selected VM if it is running.
Edit the XML configuration of the selected VM:

# virsh edit <vm_name>

Find the disk device and edit the cache option in the driver tag. For example, to set the none cache mode:

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  [...]
</disk>
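To check which cache mode a disk currently uses, you can query the VM configuration. The output line is illustrative:

# virsh dumpxml <vm_name> | grep cache
<driver name='qemu' type='qcow2' cache='none'/>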
17.6. Optimizing virtual machine CPU performance
Much like physical CPUs in host machines, virtual CPUs (vCPUs) are critical to virtual machine (VM) performance. Optimizing vCPUs can have a significant impact on the resource efficiency of your VMs.
The general steps to optimize your vCPU include the following:
- Adjust how many host CPUs are assigned to the VM. You can do this by using the CLI or the web console.
Ensure that the vCPU model is aligned with the CPU model of the host. For example, to set the testguest1 VM to use the CPU model of the host:

# virt-xml testguest1 --edit --cpu host-model

On an ARM 64 system, use --cpu host-passthrough.
- Manage kernel same-page merging (KSM).
If your host machine uses Non-Uniform Memory Access (NUMA), you can also configure NUMA for its VMs. This maps the host’s CPU and memory processes onto the CPU and memory processes of the VM as closely as possible. In effect, NUMA tuning provides the vCPU with a more streamlined access to the system memory allocated to the VM, which can improve the vCPU processing effectiveness.
For details, see Configuring NUMA in a virtual machine and Virtual machine performance optimization for specific workloads.
17.6.1. vCPU overcommitment
By using virtual CPU (vCPU) overcommitment, you can have a setup where the sum of all vCPUs in virtual machines (VMs) running on a host exceeds the number of physical CPUs on the host.
However, you might experience performance deterioration when simultaneously running more cores in your VMs than are physically available on the host.
Best practices for vCPU overcommitment include the following:
- Assign the minimum number of vCPUs required by the VM's workloads for best performance.
- Avoid overcommitting vCPUs in production without extensive testing.
- If overcommitting vCPUs, the safe ratio is typically 5 vCPUs to 1 physical CPU for loads under 100%. For example, on a host with 4 physical CPUs, this corresponds to at most 20 vCPUs across all VMs.
- Do not allocate more than 10 total vCPUs per physical processor core.
- Monitor CPU usage to prevent performance degradation under heavy loads.
Applications that use 100% of memory or processing resources may become unstable in overcommitted environments. Because the CPU overcommit ratio is workload-dependent, do not overcommit memory or CPUs in a production environment without extensive testing.
17.6.2. Adding and removing virtual CPUs by using the command line
To increase or optimize the CPU performance of a virtual machine (VM), you can add or remove virtual CPUs (vCPUs) assigned to the VM.
When performed on a running VM, this is also referred to as vCPU hot plugging and hot unplugging. However, note that vCPU hot unplug is not supported in RHEL 10, and Red Hat highly discourages its use.
Procedure
Optional: View the current state of the vCPUs in the selected VM. For example, to display the number of vCPUs on the testguest VM:
# virsh vcpucount testguest
maximum      config         4
maximum      live           2
current      config         2
current      live           1

This output indicates that testguest is currently using 1 vCPU, and 1 more vCPU can be hot plugged to it to increase the VM's performance. However, after reboot, the number of vCPUs testguest uses will change to 2, and it will be possible to hot plug 2 more vCPUs.
Adjust the maximum number of vCPUs that can be attached to the VM, which takes effect on the VM’s next boot.
For example, to increase the maximum vCPU count for the testguest VM to 8:
# virsh setvcpus testguest 8 --maximum --config

Note that the maximum may be limited by the CPU topology, host hardware, the hypervisor, and other factors.
Adjust the current number of vCPUs attached to the VM, up to the maximum configured in the previous step. For example:
To increase the number of vCPUs attached to the running testguest VM to 4:
# virsh setvcpus testguest 4 --live

This increases the VM's performance, but also the host load footprint of testguest, until the VM's next boot.
To permanently decrease the number of vCPUs attached to the testguest VM to 1:
# virsh setvcpus testguest 1 --config

This decreases the VM's performance and the host load footprint of testguest after the VM's next boot. However, if needed, additional vCPUs can be hot plugged to the VM to temporarily increase its performance.
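To apply a change both to the running VM and persistently across reboots, you can combine the options in a single command. For example:

# virsh setvcpus testguest 4 --live --config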
Verification
Confirm that the current state of vCPU for the VM reflects your changes.
# virsh vcpucount testguest
maximum      config         8
maximum      live           4
current      config         1
current      live           4
17.6.3. Managing virtual CPUs by using the web console
To review and configure virtual CPUs used by virtual machines (VMs) on your host, you can use the RHEL 10 web console.
Prerequisites
You have installed the RHEL 10 web console.
For instructions, see Installing and enabling the web console.
- The web console VM plug-in is installed on your system.
Procedure
- Log in to the RHEL 10 web console.
In the interface, click the VM whose information you want to see.
A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.
Click edit next to the number of vCPUs in the Overview pane.
The vCPU details dialog appears.
Configure the virtual CPUs for the selected VM.
vCPU Count - The number of vCPUs currently in use.
Note: The vCPU count cannot be greater than the vCPU Maximum.
- vCPU Maximum - The maximum number of virtual CPUs that can be configured for the VM. If this value is higher than the vCPU Count, additional vCPUs can be attached to the VM.
- Sockets - The number of sockets to expose to the VM.
- Cores per socket - The number of cores for each socket to expose to the VM.
Threads per core - The number of threads for each core to expose to the VM.
Important: The Sockets, Cores per socket, and Threads per core options adjust the CPU topology of the VM. This may be beneficial for vCPU performance and may impact the functionality of certain software in the guest OS. If a different setting is not required by your deployment, keep the default values.
Click Apply.
The virtual CPUs for the VM are configured.
- If the VM is running, restart it for the changes to virtual CPU settings to take effect.
17.6.4. Configuring NUMA in a virtual machine
To configure Non-Uniform Memory Access (NUMA) settings of a virtual machine (VM), you can either use automated utilities or manual setup. By configuring NUMA settings, you can improve VM performance by aligning vCPUs with memory resources on NUMA-compatible hosts.
For ease of use, you can set up a VM’s NUMA configuration by using automated utilities and services. However, manual NUMA setup is more likely to yield a significant performance improvement.
Prerequisites
The host is a NUMA-compatible machine. To detect whether this is the case, use the virsh nodeinfo command and see the NUMA cell(s) line, for example:

# virsh nodeinfo
CPU model:           x86_64
[...]
NUMA cell(s):        2

If the value of the line is 2 or greater, the host is NUMA-compatible.
Optional: You have the numactl package installed on the host:

# dnf install numactl
Procedure
Set the VM's NUMA policy to Preferred. For example, to configure the testguest5 VM:

# virt-xml testguest5 --edit --vcpus placement=auto
# virt-xml testguest5 --edit --numatune mode=preferred

Enable automatic NUMA balancing on the host:

# echo 1 > /proc/sys/kernel/numa_balancing

Start the numad service to automatically align the VM CPU with memory resources:

# systemctl start numad

Optional: Tune NUMA settings manually. Specify which host NUMA nodes will be assigned specifically to a certain VM. This can improve the host memory usage by the VM's vCPUs.
Use the numactl command to view the NUMA topology on the host:

# numactl --hardware

Edit the XML configuration of a VM to assign CPU and memory resources to specific NUMA nodes. For example, the following configuration sets testguest6 to use vCPUs 0-7 on NUMA node 0 and vCPUs 8-15 on NUMA node 1, and assigns each node 16 GiB of the VM's memory:

<cpu>
  [...]
  <numa>
    <cell id='0' cpus='0-7' memory='16' unit='GiB'/>
    <cell id='1' cpus='8-15' memory='16' unit='GiB'/>
  </numa>
</cpu>

Note: For best performance results, it is a good practice to respect the maximum memory size for each NUMA node on the host.
- If the VM is running, restart it to apply the configuration.
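To check how the memory of the running VM is actually distributed across host NUMA nodes afterwards, you can use the numastat utility described later in this chapter, for example:

# numastat -c qemu-kvm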
17.6.5. Configuring virtual CPU pinning
To improve the CPU performance of a virtual machine (VM), you can pin a virtual CPU (vCPU) to a specific physical CPU thread on the host. This ensures that the vCPU will have its own dedicated physical CPU thread, which can significantly improve the vCPU performance.
To further optimize the CPU performance, you can also pin QEMU process threads associated with a specified VM to a specific host CPU.
Procedure
Check the CPU topology on the host, for example by using the lscpu utility:

# lscpu

The output contains the NUMA nodes and the available physical CPU threads on the host.

Check the number of vCPU threads inside the VM, for example by running lscpu in the guest operating system:

# lscpu

The output contains the NUMA nodes and the available vCPU threads inside the VM.
Pin specific vCPU threads from a VM to a specific host CPU or range of CPUs. This is suggested as a safe method of vCPU performance improvement.

For example, the following commands pin vCPU threads 0 to 3 of the testguest6 VM to host CPUs 1, 3, 5, and 7, respectively:

# virsh vcpupin testguest6 0 1
# virsh vcpupin testguest6 1 3
# virsh vcpupin testguest6 2 5
# virsh vcpupin testguest6 3 7

Optional: Verify whether the vCPU threads are successfully pinned to CPUs:

# virsh vcpupin testguest6

Optional: After pinning vCPU threads, you can also pin the QEMU process threads associated with a specified VM to a specific host CPU or range of CPUs. This can further help the QEMU process run more efficiently on the physical CPU.

For example, the following commands pin the QEMU process thread of testguest6 to CPUs 2 and 4, and verify that this was successful:

# virsh emulatorpin testguest6 2,4
# virsh emulatorpin testguest6
emulator: CPU Affinity
----------------------------------
       *: 2,4
17.6.6. Configuring virtual CPU capping
To limit the amount of CPU resources a virtual machine (VM) can use, you can set up virtual CPU (vCPU) capping. vCPU capping can improve the overall performance by preventing excessive use of host’s CPU resources by a single VM and by making it easier for the hypervisor to manage CPU scheduling.
Procedure
View the current vCPU scheduling configuration on the host:

# virsh schedinfo <vm_name>

To configure an absolute vCPU cap for a VM, set the vcpu_period and vcpu_quota parameters. Both parameters use a numerical value that represents a time duration in microseconds.

Set the vcpu_period parameter by using the virsh schedinfo command. For example:

# virsh schedinfo <vm_name> --set vcpu_period=100000

In this example, vcpu_period is set to 100,000 microseconds, which means the scheduler enforces vCPU capping during this time interval.

Set the vcpu_quota parameter by using the virsh schedinfo command. For example:

# virsh schedinfo <vm_name> --set vcpu_quota=50000

In this example, vcpu_quota is set to 50,000 microseconds, which specifies the maximum amount of CPU time that the VM can use during the vcpu_period time interval. Here, vcpu_quota is half of vcpu_period, so the VM can use up to 50% of the CPU time during that interval.

You can also add the --live --config options to either command to configure a running VM without restarting it.
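Both parameters can also be set in a single command that applies to the running VM and persists across reboots. This example assumes a VM named testguest6:

# virsh schedinfo testguest6 --live --config --set vcpu_period=100000 --set vcpu_quota=50000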
Verification
Check that the vCPU scheduling parameters have the correct values:

# virsh schedinfo <vm_name>
17.6.7. Tuning CPU weights
To improve the CPU performance of a specific virtual machine (VM), you can prioritize CPU time allocation for that VM by using the CPU weight setting.
This setting controls how much CPU time a virtual machine (VM) receives compared to other running VMs. When you increase the CPU weight of a specific VM, this VM receives more CPU time relative to other VMs.
To configure the CPU weight of a VM, adjust its cpu_shares parameter. The possible CPU weight values range from 0 to 262144 and the default value for a new VM is 1024.
Procedure
Check the current CPU weight of a VM:

# virsh schedinfo <vm_name>

Adjust the CPU weight to a preferred value:

# virsh schedinfo <vm_name> --set cpu_shares=2048

In this example, cpu_shares is set to 2048. This means that if all other VMs have the value set to 1024, this VM receives approximately twice the amount of CPU time.

You can also use the --live --config options to configure a running VM without restarting it.
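The CPU weight is also reflected in the domain XML as a <cputune> element. A sketch of the corresponding configuration:

<cputune>
  <shares>2048</shares>
</cputune>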
17.6.8. Enabling and disabling kernel same-page merging
To optimize CPU efficiency of your virtual machines (VMs), you can enable or disable kernel same-page merging (KSM).
KSM improves memory density by sharing identical memory pages between VMs. Therefore, enabling KSM might improve memory efficiency of your VM deployment. However, enabling KSM also increases CPU utilization, and might negatively affect overall performance depending on the workload.
In RHEL 10, KSM is disabled by default. To enable KSM and test its impact on your VM performance, see the following instructions.
Prerequisites
- Root access to your host system.
Procedure
Enable KSM:
Warning: Enabling KSM increases CPU utilization and affects overall CPU performance.

Install the ksmtuned service:

# dnf install ksmtuned

Start the service:

- To enable KSM for a single session, use the systemctl utility to start the ksm and ksmtuned services:

# systemctl start ksm
# systemctl start ksmtuned

- To enable KSM persistently, use the systemctl utility to enable the ksm and ksmtuned services:

# systemctl enable ksm
Created symlink /etc/systemd/system/multi-user.target.wants/ksm.service → /usr/lib/systemd/system/ksm.service

# systemctl enable ksmtuned
Created symlink /etc/systemd/system/multi-user.target.wants/ksmtuned.service → /usr/lib/systemd/system/ksmtuned.service
- Monitor the performance and resource consumption of VMs on your host to evaluate the benefits of activating KSM. Specifically, ensure that the additional CPU usage by KSM does not offset the memory improvements and does not cause additional performance issues. In latency-sensitive workloads, also pay attention to cross-NUMA page merges.
Optional: If KSM has not improved your VM performance, disable it:
To disable KSM for a single session, use the systemctl utility to stop the ksm and ksmtuned services:

# systemctl stop ksm
# systemctl stop ksmtuned

To disable KSM persistently, use the systemctl utility to disable the ksm and ksmtuned services:

# systemctl disable ksm
Removed /etc/systemd/system/multi-user.target.wants/ksm.service.

# systemctl disable ksmtuned
Removed /etc/systemd/system/multi-user.target.wants/ksmtuned.service.

Note: Memory pages shared between VMs before deactivating KSM will remain shared. To stop sharing, delete all the PageKSM pages in the system by using the following command:

# echo 2 > /sys/kernel/mm/ksm/run

However, this command increases memory usage, and might cause performance problems on your host or your VMs.
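While KSM is enabled, you can estimate how effectively it is merging pages from the KSM counters in sysfs, as documented for the upstream kernel. The values shown are examples; a high pages_sharing to pages_shared ratio indicates good deduplication:

# grep -H '' /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing
/sys/kernel/mm/ksm/pages_shared:2000
/sys/kernel/mm/ksm/pages_sharing:8000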
17.7. Optimizing virtual machine network performance
Due to the virtual nature of a VM’s network interface controller (NIC), the VM loses a portion of its allocated host network bandwidth. This can reduce the overall workload efficiency of the VM. To minimize the negative impact of virtualization on the virtual NIC (vNIC) throughput, you can use a variety of methods.
Procedure
Use any of the following methods and observe if it has a beneficial effect on your VM network performance:
Enable the vhost_net module
On the host, ensure the vhost_net kernel module is loaded:

# lsmod | grep vhost
vhost_net              32768  1
vhost                  53248  1 vhost_net
tap                    24576  1 vhost_net
tun                    57344  6 vhost_net

If the output of this command is blank, enable the vhost_net kernel module:

# modprobe vhost_net
Set up multi-queue virtio-net
To set up the multi-queue virtio-net feature for a VM, use the virsh edit command to edit the XML configuration of the VM. In the XML, add the following to the <devices> section, and replace N with the number of vCPUs in the VM, up to 16:

<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <driver name='vhost' queues='N'/>
</interface>

If the VM is running, restart it for the changes to take effect.
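Inside the guest, you can check how many queues the vNIC exposes, for example with the ethtool utility. This sketch assumes the interface is named eth0 and queues='4'; the output is abbreviated:

# ethtool -l eth0
Channel parameters for eth0:
[...]
Combined:       4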
Batching network packets
In Linux VM configurations with a long transmission path, batching packets before submitting them to the kernel may improve cache utilization. To set up packet batching, use the following command on the host, and replace tap0 with the name of the network interface that the VMs use:

# ethtool -C tap0 rx-frames 64

SR-IOV
If your host NIC supports SR-IOV, use SR-IOV device assignment for your vNICs.
For more information, see Managing SR-IOV devices.
17.8. Virtual machine performance monitoring tools
To identify what consumes the most virtual machine (VM) resources and which aspect of VM performance needs optimization, you can use a variety of performance diagnostic tools.
Default OS performance monitoring tools
For standard performance evaluation, you can use the utilities provided by default by your host and guest operating systems:
On your RHEL 10 host, as root, use the top utility or the system monitor application, and look for qemu and virt in the output. This shows how much host system resources your VMs are consuming.
- If the monitoring tool displays that any of the qemu or virt processes consume a large portion of the host CPU or memory capacity, use the perf utility to investigate. For details, see below.
- In addition, if a vhost_net thread process, named for example vhost_net-1234, is displayed as consuming an excessive amount of host CPU capacity, consider using virtual network optimization features, such as multi-queue virtio-net.

On the guest operating system, use performance utilities and applications available on the system to evaluate which processes consume the most system resources.
- On Linux systems, you can use the top utility.
- On Windows systems, you can use the Task Manager application.
perf kvm
You can use the perf utility to collect and analyze virtualization-specific statistics about the performance of your RHEL 10 host. To do so:
On the host, install the perf package:

# dnf install perf

Use one of the perf kvm stat commands to display perf statistics for your virtualization host:
- For real-time monitoring of your hypervisor, use the perf kvm stat live command.
- To log the perf data of your hypervisor over a period of time, activate the logging by using the perf kvm stat record command. After the command is canceled or interrupted, the data is saved in the perf.data.guest file, which can be analyzed by using the perf kvm stat report command.

Analyze the perf output for types of VM-EXIT events and their distribution. For example, PAUSE_INSTRUCTION events should be infrequent; a high occurrence of this event suggests that the host CPUs are not handling the running vCPUs well. In such a scenario, consider shutting down some of your active VMs, removing vCPUs from these VMs, or tuning the performance of the vCPUs.

Other event types that can signal problems in the output of perf kvm stat include:
- INSN_EMULATION - suggests suboptimal VM I/O configuration.
For more information about using perf to monitor virtualization performance, see the perf-kvm(1) man page on your system.
numastat
To see the current NUMA configuration of your system, you can use the numastat utility, which is provided by installing the numactl package.
For example, numastat can show a host with 4 running VMs, each obtaining memory from multiple NUMA nodes. This is not optimal for vCPU performance, and warrants adjusting. After NUMA tuning, the same utility can instead show memory being provided to each VM by a single node, which is significantly more efficient.
Chapter 18. Securing virtual machines
As an administrator of a RHEL 10 system with virtual machines (VMs), you can take a variety of measures to lower the risk of your guest and host operating systems being infected by malicious software.
18.1. How security works in virtual machines
When using virtual machines (VMs), securing your environment requires additional considerations.
A single host machine can house multiple guest operating systems. These systems are connected with the host through the hypervisor, and usually also through a virtual network. As a consequence, each VM can be used as a vector for attacking the host with malicious software, and the host can be used as a vector for attacking any of the VMs.
Figure 18.1. A potential malware attack vector on a virtualization host
Because the hypervisor uses the host kernel to manage VMs, services running on the VM’s operating system are frequently used for injecting malicious code into the host system. However, you can protect your system against such security threats by using a number of security features on your host and your guest systems.
These features, such as SELinux or QEMU sandboxing, provide various measures that make it more difficult for malicious code to attack the hypervisor and transfer between your host and your VMs.
Figure 18.2. Prevented malware attacks on a virtualization host
Many of the features that RHEL 10 provides for VM security are always active and do not have to be enabled or configured. For details, see Default features for virtual machine security.
In addition, you can adhere to a variety of best practices to minimize the vulnerability of your VMs and your hypervisor. For more information, see Best practices for securing virtual machines.
18.2. Best practices for securing virtual machines
To significantly decrease the risk of your virtual machines (VM) being infected with malicious code and used as attack vectors to infect your host system, you can increase the security of your systems by using a variety of methods.
On the guest side:
Secure the virtual machine as if it was a physical machine. The specific methods available to enhance security depend on the guest OS.
If your VM is running RHEL 10, see Securing RHEL 10 for detailed instructions on improving the security of your guest system.
On the host side:
- When managing VMs remotely, use cryptographic utilities such as SSH and network protocols such as SSL for connecting to the VMs.
Ensure SELinux is in Enforcing mode:

# getenforce
Enforcing

If SELinux is disabled or in Permissive mode, see the Using SELinux document for instructions on activating Enforcing mode.

Note: SELinux Enforcing mode also enables the sVirt RHEL 10 feature. This is a set of specialized SELinux booleans for virtualization, which can be manually adjusted for fine-grained VM security management.
Use VMs with Secure Boot:

Secure Boot is a feature that ensures that your VM is running a cryptographically signed OS. This prevents VMs whose OS has been altered by a malware attack from booting.

Secure Boot can only be applied when installing a Linux VM that uses OVMF firmware on an AMD64 or Intel 64 host. For instructions, see Creating a Secure Boot virtual machine.
Do not use qemu-* commands, such as qemu-kvm.

QEMU is an essential component of the virtualization architecture in RHEL 10, but it is difficult to manage manually, and improper QEMU configurations may cause security vulnerabilities. Therefore, using most qemu-* commands is not supported by Red Hat. Instead, use libvirt utilities, such as virsh, virt-install, and virt-xml, as these orchestrate QEMU according to the best practices.

Note, however, that the qemu-img utility is supported for management of virtual disk images.
18.3. Default features for virtual machine security
The libvirt software suite provides a number of security features that are automatically enabled when using virtualization in RHEL 10.
You can use these in addition to manual means of improving the security of your virtual machines (VMs), which are listed in Best practices for securing virtual machines.
- System and session connections
To access all the available utilities for virtual machine management on a RHEL 10 host, you need to use the system connection of libvirt (qemu:///system). To do so, you must have root privileges on the system or be a part of the libvirt user group.

Non-root users that are not in the libvirt group can only access a session connection of libvirt (qemu:///session), which has to respect the access rights of the local user when accessing resources.

For details, see User-space connection types for virtualization.
- Virtual machine separation
- Individual VMs run as isolated processes on the host, and rely on security enforced by the host kernel. Therefore, a VM cannot read or access the memory or storage of other VMs on the same host.
- QEMU sandboxing
- A feature that prevents QEMU code from executing system calls that can compromise the security of the host.
- Kernel Address Space Randomization (KASLR)
- Enables randomizing the physical and virtual addresses at which the kernel image is decompressed. Thus, KASLR prevents guest security exploits based on the location of kernel objects.
18.4. Limiting what actions are available to virtual machine users
In some cases, actions that users of virtual machines (VMs) hosted on RHEL 10 can perform by default might pose a security risk. To prevent this, you can limit the actions available to VM users by configuring the libvirt daemons to use the polkit policy toolkit on the host machine.
Procedure
Optional: Ensure your system's polkit control policies related to libvirt are set up according to your preferences.

Find all libvirt-related files in the /usr/share/polkit-1/actions/ and /usr/share/polkit-1/rules.d/ directories:

# ls /usr/share/polkit-1/actions | grep libvirt
# ls /usr/share/polkit-1/rules.d | grep libvirt

Open the files and review the rule settings. For information about reading the syntax of polkit control policies, use man polkit.

Modify the libvirt control policies. To do so:
- Create a new .rules file in the /etc/polkit-1/rules.d/ directory.
- Add your custom policies to this file, and save it.

For further information and examples of libvirt control policies, see the libvirt upstream documentation.

Configure your VMs to use access policies determined by polkit. To do so, find all configuration files for virtualization drivers in the /etc/libvirt/ directory, and uncomment the access_drivers = [ "polkit" ] line in them:

# find /etc/libvirt/ -name virt*d.conf -exec sed -i 's/#access_drivers = \[ "polkit" \]/access_drivers = \[ "polkit" \]/g' {} +

For each file that you modified in the previous step, restart the corresponding service. For example, if you have modified /etc/libvirt/virtqemud.conf, restart the virtqemud service:

# systemctl try-restart virtqemud
Verification
As a user whose VM actions you intended to limit, perform one of the restricted actions. For example, if unprivileged users are restricted from viewing VMs created in the system session:

$ virsh -c qemu:///system list --all
 Id   Name   State
-------------------------------

If this command does not list any VMs even though one or more VMs exist on your system, polkit successfully restricts the action for unprivileged users.
Troubleshooting
Currently, configuring libvirt to use polkit makes it impossible to connect to VMs by using the RHEL 10 web console, due to an incompatibility with the libvirt-dbus service.

If you require fine-grained access control of VMs in the web console, create a custom D-Bus policy. For more information, see the Red Hat Knowledgebase solution How to configure fine-grained control of Virtual Machines in Cockpit.
18.5. Configuring VNC passwords
To manage access to the graphical output of a virtual machine (VM), you can configure a password for the VNC console of the VM.
With a VNC password configured on a VM, users of the VMs must enter the password when attempting to view or interact with the VNC graphical console of the VMs, for example by using the virt-viewer utility.
VNC passwords are not a sufficient measure for ensuring the security of a VM environment. For details, see QEMU documentation on VNC security.
In addition, the VNC password is saved in plain text in the configuration of the VM, so for the password to be effective, the user must not be able to display the VM configuration.
Prerequisites
The VM that you want to protect with a VNC password has VNC graphics configured.
To ensure that this is the case, use the virsh dumpxml command as follows:

# virsh dumpxml <vm-name> | grep graphics
<graphics type='vnc' port='-1' autoport='yes' listen='127.0.0.1'>
</graphics>
Procedure
Open the configuration of the VM that you want to assign a VNC password to:

# virsh edit <vm-name>

On the <graphics> line of the configuration, add the passwd attribute and the password string. The password must be 8 characters or fewer.

<graphics type='vnc' port='-1' autoport='yes' listen='127.0.0.1' passwd='<password>'>

Optional: In addition, define a date and time when the password will expire:

<graphics type='vnc' port='-1' autoport='yes' listen='127.0.0.1' passwd='<password>' passwdValidTo='2025-02-01T15:30:00'>

In this example, the password will expire on February 1st 2025, at 15:30 UTC.
- Save the configuration.
Verification
Start the modified VM:

# virsh start <vm-name>

Open a graphical console of the VM, for example by using the virt-viewer utility:

# virt-viewer <vm-name>

If the VNC password has been configured properly, a dialogue window appears that requests you to enter the password.
18.6. SELinux booleans for virtualization
RHEL 10 provides the sVirt feature, which is a set of specialized SELinux booleans that are automatically enabled on a host with SELinux in Enforcing mode.
For fine-grained configuration of virtual machines security on a RHEL 10 system, you can configure SELinux booleans on the host to ensure the hypervisor acts in a specific way.
To list all virtualization-related booleans and their statuses, use the getsebool -a | grep virt command.
To enable a specific boolean, use the setsebool -P boolean_name on command as root. To disable a boolean, use setsebool -P boolean_name off.
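For example, to allow VMs to manage NFS-mounted files and then confirm the change:

# setsebool -P virt_use_nfs on
# getsebool virt_use_nfs
virt_use_nfs --> on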
The following table lists virtualization-related booleans available in RHEL 10 and what they do when enabled:
| SELinux Boolean | Description |
|---|---|
| staff_use_svirt | Enables non-root users to create and transition VMs to sVirt. |
| unprivuser_use_svirt | Enables unprivileged users to create and transition VMs to sVirt. |
| virt_sandbox_use_audit | Enables sandbox containers to send audit messages. |
| virt_sandbox_use_netlink | Enables sandbox containers to use netlink system calls. |
| virt_sandbox_use_sys_admin | Enables sandbox containers to use sys_admin system calls, such as mount. |
| virt_transition_userdomain | Enables virtual processes to run as user domains. |
| virt_use_comm | Enables virt to use serial/parallel communication ports. |
| virt_use_execmem | Enables confined virtual guests to use executable memory and executable stack. |
| virt_use_fusefs | Enables virt to read FUSE mounted files. |
| virt_use_nfs | Enables virt to manage NFS mounted files. |
| virt_use_rawip | Enables virt to interact with rawip sockets. |
| virt_use_samba | Enables virt to manage CIFS mounted files. |
| virt_use_sanlock | Enables confined virtual guests to interact with the sanlock. |
| virt_use_usb | Enables virt to use USB devices. |
| virt_use_xserver | Enables virtual machine to interact with the X Window System. |
18.7. Creating a Secure Boot virtual machine
To improve the security of your virtualization host, you can create Linux virtual machines (VMs) that use the Secure Boot feature. Secure Boot ensures that the VM is running a cryptographically signed operating system (OS).
This can be useful if the guest OS of a VM has been altered by malware. In such a scenario, Secure Boot prevents the VM from booting, which stops the potential spread of the malware to your host machine.
Prerequisites
- The VM is the Q35 machine type.
- Your host system uses the AMD64 or Intel 64 architecture.
The edk2-ovmf package is installed:

# dnf install edk2-ovmf

An operating system (OS) installation source is available locally or on a network. This can be one of the following formats:
- An ISO image of an installation medium
A disk image of an existing VM installation
Warning: Installing from a host CD-ROM or DVD-ROM device is not possible in RHEL 10. If you select a CD-ROM or DVD-ROM as the installation source when using any VM installation method available in RHEL 10, the installation will fail. For more information, see RHEL 7 or higher can't install guest OS from CD/DVD-ROM (Red Hat Knowledgebase).
- Optional: A Kickstart file can be provided for faster and easier configuration of the installation.
Procedure
Use the virt-install command to create a VM as detailed in Creating virtual machines by using the command line. For the --boot option, use the uefi,nvram_template=/usr/share/OVMF/OVMF_VARS.secboot.fd value. This uses the OVMF_VARS.secboot.fd and OVMF_CODE.secboot.fd files as templates for the VM's non-volatile RAM (NVRAM) settings, which enables the Secure Boot feature.

For example:

# virt-install --name rhel8sb --memory 4096 --vcpus 4 --os-variant rhel10.0 --boot uefi,nvram_template=/usr/share/OVMF/OVMF_VARS.secboot.fd --disk boot_order=2,size=10 --disk boot_order=1,device=cdrom,bus=scsi,path=/images/RHEL-{ProductNumber}.0-installation.iso

- Follow the OS installation procedure according to the instructions on the screen.
Verification
- After the guest OS is installed, access the VM’s command line by opening the terminal in the graphical guest console or connecting to the guest OS using SSH.
To confirm that Secure Boot has been enabled on the VM, use the mokutil --sb-state command:

# mokutil --sb-state
SecureBoot enabled
18.8. Setting up IBM Secure Execution on IBM Z
When using IBM Z hardware to run a RHEL 10 host, you can improve the security of your virtual machines (VMs) by configuring the IBM Secure Execution feature for the VMs.
IBM Secure Execution, also known as Protected Virtualization, prevents the host system from accessing a VM’s state and memory contents. As a result, even if the host is compromised, it cannot be used as a vector for attacking the guest operating system. In addition, Secure Execution can be used to prevent untrusted hosts from obtaining sensitive information from the VM.
You can convert an existing VM on an IBM Z host into a secured VM by enabling IBM Secure Execution.
For securing production environments, consult the IBM documentation on fully securing workloads with Secure Execution, which explains how to further secure your workloads.
18.8.1. Configuring a VM manually for IBM Secure Execution
You can configure IBM Secure Execution by manually logging in to the guest VM and performing configuration steps within the guest operating system. This method provides direct control over the configuration process and is suitable for production environments where you need to verify each step of the setup.
For securing production environments, consult the IBM documentation on fully securing workloads with Secure Execution, which explains how to further secure your workloads.
Prerequisites
The system hardware is one of the following:
- IBM z15 or later
- IBM LinuxONE III or later
The Secure Execution feature is enabled for your system. To verify, use:

# grep facilities /proc/cpuinfo | grep 158

If this command displays any output, your CPU is compatible with Secure Execution.
The kernel includes support for Secure Execution. To confirm, use:

# ls /sys/firmware | grep uv

If the command generates any output, your kernel supports Secure Execution.
The host CPU model contains the unpack facility. To confirm, use:

# virsh domcapabilities | grep unpack
<feature policy='require' name='unpack'/>

If the command generates the above output, your CPU host model is compatible with Secure Execution.
The CPU mode of the VM is set to host-model:

# virsh dumpxml <vm_name> | grep "<cpu mode='host-model'/>"

If the command generates any output, the VM's CPU mode is set correctly.
The genprotimg package is installed on the host:

# dnf install genprotimg

The guestfs-tools package is installed on the host, in case you want to modify the VM image directly from the host:

# dnf install guestfs-tools
Procedure
Add the prot_virt=1 kernel parameter to the boot configuration of the host:

# grubby --update-kernel=ALL --args="prot_virt=1"

Update the boot menu:

# zipl
Use virsh edit to modify the XML configuration of the VM you want to secure.

Add <launchSecurity type="s390-pv"/> under the </devices> line. For example:

[...]
  </memballoon>
</devices>
<launchSecurity type="s390-pv"/>
</domain>

If the <devices> section of the configuration includes a virtio-rng device (<rng model="virtio">), remove all lines of the <rng> </rng> block.

Optional: If the VM that you want to secure is using 32 GiB of RAM or more, add the <async-teardown enabled='yes'/> line to the <features></features> section in its XML configuration on the host. This improves the performance of rebooting or stopping such Secure Execution guests.
Log in to the VM you want to secure and create a parameter file. For example:

# touch ~/secure-parameters

In the /boot/loader/entries directory of the guest operating system, identify the boot loader entry with the latest version:

# ls /boot/loader/entries -l
[...]
-rw-r--r--. 1 root root 281 Oct 9 15:51 3ab27a195c2849429927b00679db15c1-4.18.0-240.el8.s390x.conf

Retrieve the kernel options line of the boot loader entry in the guest operating system:

# cat /boot/loader/entries/3ab27a195c2849429927b00679db15c1-4.18.0-240.el8.s390x.conf | grep options
options root=/dev/mapper/rhel-root rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap

Add the content of the options line and swiotlb=262144 to the created parameters file in the guest operating system:

# echo "root=/dev/mapper/rhel-root rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap swiotlb=262144" > ~/secure-parameters

Generate a new IBM Secure Execution image in the guest operating system.

For example, the following creates a /boot/secure-image secured image based on the /boot/vmlinuz-4.18.0-240.el8.s390x image, using the secure-parameters file, the /boot/initramfs-4.18.0-240.el8.s390x.img initial RAM disk file, and the HKD-8651-00020089A8.crt host key document:

# genprotimg -i /boot/vmlinuz-4.18.0-240.el8.s390x -r /boot/initramfs-4.18.0-240.el8.s390x.img -p ~/secure-parameters -k HKD-8651-00020089A8.crt -o /boot/secure-image

The genprotimg utility creates the secure image, which contains the kernel parameters, initial RAM disk, and boot image.

Update the VM's boot menu to boot from the secure image. In addition, remove the lines starting with initrd and options, as they are not needed.

For example, in a RHEL 8.3 VM, the boot menu can be edited in the /boot/loader/entries/ directory:

# cat /boot/loader/entries/3ab27a195c2849429927b00679db15c1-4.18.0-240.el8.s390x.conf
title Red Hat Enterprise Linux 8.3
version 4.18.0-240.el8.s390x
linux /boot/secure-image
[...]

Create the bootable disk image in the guest operating system:

# zipl -V

Securely remove the original unprotected files in the guest operating system. For example:

# shred /boot/vmlinuz-4.18.0-240.el8.s390x
# shred /boot/initramfs-4.18.0-240.el8.s390x.img
# shred secure-parameters

The original boot image, the initial RAM image, and the kernel parameter file are unprotected, and if they are not removed, VMs with Secure Execution enabled can still be vulnerable to hacking attempts or sensitive data mining.
Verification
On the host, use the virsh dumpxml utility to confirm the XML configuration of the secured VM. The configuration must include the <launchSecurity type="s390-pv"/> element, and no <rng model="virtio"> lines:

# virsh dumpxml <vm_name>
18.8.2. Configuring a VM from the host for IBM Secure Execution
You can configure IBM Secure Execution directly from the host by using the guestfs-tools package without needing to boot the VM. However, this method is suitable only for testing and development environments where you need to quickly configure multiple VMs or automate the setup process.
For securing production environments, consult the IBM documentation on fully securing workloads with Secure Execution, which explains how to further secure your workloads.
Prerequisites
The system hardware is one of the following:
- IBM z15 or later
- IBM LinuxONE III or later
The Secure Execution feature is enabled for your system. To verify, use:

# grep facilities /proc/cpuinfo | grep 158

If this command displays any output, your CPU is compatible with Secure Execution.
The kernel includes support for Secure Execution. To confirm, use:

# ls /sys/firmware | grep uv

If the command generates any output, your kernel supports Secure Execution.
The host CPU model contains the unpack facility. To confirm, use:

# virsh domcapabilities | grep unpack
<feature policy='require' name='unpack'/>

If the command generates the above output, your CPU host model is compatible with Secure Execution.
The CPU mode of the VM is set to host-model:

# virsh dumpxml <vm_name> | grep "<cpu mode='host-model'/>"

If the command generates any output, the VM's CPU mode is set correctly.
The genprotimg package is installed on the host:

# dnf install genprotimg

The guestfs-tools package is installed on the host, in case you want to modify the VM image directly from the host:

# dnf install guestfs-tools
Procedure
Add the prot_virt=1 kernel parameter to the boot configuration of the host:

# grubby --update-kernel=ALL --args="prot_virt=1"

Update the boot menu:
# zipl
Use virsh edit to modify the XML configuration of the VM you want to secure. Add <launchSecurity type="s390-pv"/> below the </devices> line. For example:

[...]
  </memballoon>
</devices>
<launchSecurity type="s390-pv"/>
</domain>
If the <devices> section of the configuration includes a virtio-rng device (<rng model="virtio">), remove all lines of the <rng> </rng> block.

Optional: If the VM that you want to secure is using 32 GiB of RAM or more, add the <async-teardown enabled='yes'/> line to the <features></features> section in its XML configuration on the host. This improves the performance of rebooting or stopping such Secure Execution guests.
On the host, create a script that contains the host key document and that configures the existing VM to use Secure Execution. For example:
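The exact script depends on your image layout and key file. The following is a minimal sketch, assuming the host key document is embedded in the script and the guest uses the boot file locations shown earlier in this chapter; all paths and names are placeholders:

#!/bin/bash
# Illustrative first-boot script; adapt all paths to your environment.

# Embed the verified host key document (contents elided).
cat > /tmp/host-key.crt <<'EOF'
[...]
EOF

# Build the secure image from the kernel, initial RAM disk, and
# kernel parameter file of the guest.
genprotimg -i /boot/vmlinuz-$(uname -r) -r /boot/initramfs-$(uname -r).img \
    -p /boot/secure-parameters -k /tmp/host-key.crt -o /boot/secure-image

# Rebuild the boot menu; this sketch assumes the boot loader entry
# already points to /boot/secure-image, as configured in the previous section.
zipl -V

# Securely remove the unprotected files.
shred /boot/vmlinuz-$(uname -r)
shred /boot/initramfs-$(uname -r).img
shred /boot/secure-parameters
shred /tmp/host-key.crt

# Shut down so that the next boot uses the secured configuration.
poweroff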
- Ensure the VM is shut down.
On the host, add the script to the existing VM image by using guestfs-tools and mark it to run on first boot:

# virt-customize -a <vm_image_path> --selinux-relabel --firstboot <script_path>

Boot the VM from the image with the added script.
The script runs on first boot, and then shuts down the VM again. As a result, the VM is now configured to run with Secure Execution on the host that has the corresponding host key.
Verification
On the host, use the virsh dumpxml utility to confirm the XML configuration of the secured VM. The configuration must include the <launchSecurity type="s390-pv"/> element, and no <rng model="virtio"> lines.
18.9. Attaching cryptographic coprocessors to virtual machines on IBM Z
To use hardware encryption in your virtual machine (VM) on an IBM Z host, create mediated devices from a cryptographic coprocessor device and assign them to the intended VMs.
Prerequisites
- Your host is running on IBM Z hardware.
The cryptographic coprocessor is compatible with device assignment. To confirm this, ensure that the type of your coprocessor is listed as CEX4 or later.
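For example, you can check the type with the lszcrypt utility; the device values in the following output are illustrative:

# lszcrypt
CARD.DOMAIN TYPE  MODE        STATUS  REQUESTS
----------------------------------------------
05          CEX5C CCA-Coproc  online         1
05.0004     CEX5C CCA-Coproc  online         1
05.00ab     CEX5C CCA-Coproc  online         1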
The vfio_ap kernel module is loaded. To verify, use:

# lsmod | grep vfio_ap
vfio_ap         24576  0
[...]

To load the module, use:

# modprobe vfio_ap

The s390utils version supports ap handling:

# lszdev --list-types
...
ap           Cryptographic Adjunct Processor (AP) device
...
Procedure
Obtain the decimal values for the devices that you want to assign to the VM. For example, for the devices 05.0004 and 05.00ab:

# echo "obase=10; ibase=16; 04" | bc
4
# echo "obase=10; ibase=16; AB" | bc
171

On the host, reassign the devices to the vfio-ap drivers:

# chzdev -t ap apmask=-5 aqmask=-4,-171

Note: To assign the devices persistently, use the
-p flag.

Verify that the cryptographic devices have been reassigned correctly:
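For example, by checking the DRIVER column with lszcrypt; the following output is an illustrative sketch:

# lszcrypt -V
CARD.DOMAIN TYPE  MODE        STATUS  REQUESTS  PENDING HWTYPE QDEPTH FUNCTIONS  DRIVER
--------------------------------------------------------------------------------------
05          CEX5C CCA-Coproc  -              1        0     11     08 S--D--N--  cex4card
05.0004     CEX5C CCA-Coproc  -              1        0     11     08 S--D--N--  vfio_ap
05.00ab     CEX5C CCA-Coproc  -              1        0     11     08 S--D--N--  vfio_ap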
If the DRIVER values of the domain queues changed to vfio_ap, the reassignment succeeded.

Create an XML snippet that defines a new mediated device.
The following example shows defining a persistent mediated device and assigning queues to it. Specifically, the vfio_ap.xml XML snippet in this example assigns a domain adapter 0x05, domain queues 0x0004 and 0x00ab, and a control domain 0x00ab to the mediated device.
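A vfio_ap.xml snippet matching this description could look as follows; this is a sketch based on the libvirt node-device format for vfio_ap mediated devices, and the UUID is illustrative:

<device>
  <parent>ap_matrix</parent>
  <capability type="mdev">
    <type id="vfio_ap-passthrough"/>
    <uuid>8f9c4a73-1411-48d2-895d-34db9ac18f85</uuid>
    <attr name='assign_adapter' value='0x05'/>
    <attr name='assign_domain' value='0x0004'/>
    <attr name='assign_domain' value='0x00ab'/>
    <attr name='assign_control_domain' value='0x00ab'/>
  </capability>
</device>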
Create a new mediated device from the vfio_ap.xml XML snippet:

# virsh nodedev-define vfio_ap.xml
Node device 'mdev_8f9c4a73_1411_48d2_895d_34db9ac18f85_matrix' defined from 'vfio_ap.xml'

Start the mediated device that you created in the previous step, in this case mdev_8f9c4a73_1411_48d2_895d_34db9ac18f85_matrix:

# virsh nodedev-start mdev_8f9c4a73_1411_48d2_895d_34db9ac18f85_matrix
Device mdev_8f9c4a73_1411_48d2_895d_34db9ac18f85_matrix started

Check that the configuration has been applied correctly:
# cat /sys/devices/vfio_ap/matrix/mdev_supported_types/vfio_ap-passthrough/devices/8f9c4a73-1411-48d2-895d-34db9ac18f85/matrix
05.0004
05.00ab

If the output contains the numerical values of queues that you have previously assigned to vfio-ap, the process was successful.

Attach the mediated device to the VM.
Display the UUID of the mediated device that you created and save it for the next step.
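For example, with the virsh nodedev-dumpxml utility; the UUID shown is illustrative:

# virsh nodedev-dumpxml mdev_8f9c4a73_1411_48d2_895d_34db9ac18f85_matrix | grep uuid
  <uuid>8f9c4a73-1411-48d2-895d-34db9ac18f85</uuid>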
Create and open an XML file for the cryptographic card mediated device. For example:
# vim crypto-dev.xml

Add the following lines to the file and save it. Replace the uuid value with the UUID you obtained in step a.

<hostdev mode='subsystem' type='mdev' managed='no' model='vfio-ap'>
  <source>
    <address uuid='8f9c4a73-1411-48d2-895d-34db9ac18f85'/>
  </source>
</hostdev>

Use the XML file to attach the mediated device to the VM. For example, to permanently attach a device defined in the
crypto-dev.xml file to the running testguest1 VM:

# virsh attach-device testguest1 crypto-dev.xml --live --config

The --live option attaches the device to a running VM only, without persistence between boots. The --config option makes the configuration changes persistent. You can use the --config option alone to attach the device to a shut-down VM.

Note that each UUID can only be assigned to one VM at a time.
Verification
Ensure that the guest operating system detects the assigned cryptographic devices.
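For example, you can use the lszcrypt utility in the guest:

# lszcrypt -V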
The output of this command in the guest operating system will be identical to that on a host logical partition with the same cryptographic coprocessor devices available.
In the guest operating system, confirm that a control domain has been successfully assigned to the cryptographic devices.
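For example:

# lszcrypt -d C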
If lszcrypt -d C displays U and B intersections in the cryptographic device matrix, the control domain assignment was successful.
Chapter 19. Sharing files between the host and its virtual machines
You might frequently need to share data between your host system and the virtual machines (VMs) it runs. To do so quickly and efficiently, you can use the virtio file system (virtiofs).
19.1. Sharing files between the host and Linux virtual machines by using the command line
When using RHEL 10 as your hypervisor, you can share files between your host system and its virtual machines (VMs) by using the virtiofs feature.
Prerequisites
- Virtualization is installed and enabled on your RHEL 10 host.
A directory is available that you want to share with your VMs. If you do not want to share any of your existing directories, create a new one, for example named shared-files.
# mkdir /root/shared-files

- The VM you want to share files with is using a Linux distribution as its guest operating system.
Procedure
For each directory on the host that you want to share with your VM, set it as a virtiofs file system in the VM’s XML configuration.
Open the XML configuration of the intended VM.
# virsh edit vm-name

Add an entry similar to the following to the <devices> section of the VM's XML configuration:
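A minimal virtiofs entry matching the example described below could look like this; it is a sketch, so adapt the source directory and target tag to your setup:

<filesystem type='mount' accessmode='passthrough'>
  <driver type='virtiofs'/>
  <source dir='/root/shared-files'/>
  <target dir='host-file-share'/>
</filesystem>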
This example sets the /root/shared-files directory on the host to be visible as host-file-share to the VM.
Set up shared memory for the VM. To do so, add shared memory backing to the <domain> section of the XML configuration:
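For example, a sketch using the memfd memory backing, which virtiofs supports:

<domain>
  [...]
  <memoryBacking>
    <source type='memfd'/>
    <access mode='shared'/>
  </memoryBacking>
  [...]
</domain>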
Boot up the VM.
# virsh start vm-name

Mount the file system in the guest operating system. The following example mounts the previously configured host-file-share directory with a Linux guest operating system:

# mount -t virtiofs host-file-share /mnt
Verification
- Ensure that the shared directory became accessible on the VM and that you can now open files stored in the directory.
Troubleshooting
- File-system mount options related to access time, such as noatime and strictatime, are not likely to work with virtiofs, and Red Hat discourages their use.
19.2. Sharing files between the host and Linux virtual machines by using the web console
To share files between your host system and its virtual machines (VM), you can use the virtiofs feature in the RHEL web console.
Prerequisites
- The web console VM plug-in is installed on your system.
A directory is available that you want to share with your VMs. If you do not want to share any of your existing directories, create a new one, for example named shared-files.

# mkdir /home/shared-files

- The VM you want to share data with is using a Linux distribution as its guest operating system.
Procedure
In the Virtual machines interface of the web console, click the VM with which you want to share files.
A new page opens with an Overview section with basic information about the selected VM and a Console section.
Scroll to the Shared directories section.
The Shared directories section displays information about the host files and directories shared with that VM and options to Add or Remove a shared directory.
Click Add shared directory.
The Share a host directory with the guest dialog is displayed.
Enter the following information:
- Source path - The path to the host directory that you want to share.
- Mount tag - The tag that the VM uses to mount the directory.
Set additional options:
- Extended attributes - Set whether to enable extended attributes, xattr, on the shared files and directories.
Click Share.
The selected directory is shared with the VM.
Verification
- Ensure that the shared directory is accessible on the VM and you can now open files stored in that directory.
Chapter 20. Diagnosing virtual machine problems
When working with virtual machines (VMs), you might encounter problems with varying levels of complexity and severity. For more complex problems, you might have to capture VM-related data and logs to report or diagnose the problems.
The following sections provide detailed information about generating logs and diagnosing some common VM problems, as well as about reporting these problems.
20.1. Generating libvirt debug logs
To diagnose virtual machine (VM) problems, it is helpful to generate and review libvirt debug logs. Attaching debug logs is also useful when asking for support to resolve VM-related problems.
20.1.1. Understanding libvirt debug logs
Debug logs are text files that contain data about events that occur during virtual machine (VM) runtime. The logs provide information about fundamental server-side functionalities, such as host libraries and the libvirt daemon. The log files also contain the standard error output (stderr) of all running VMs.
Debug logging is not enabled by default and has to be enabled when libvirt starts.
- To collect libvirt debug logs for your current session, see Enabling libvirt debug logs during runtime.
- To collect libvirt debug logs by default, see Enabling libvirt debug logs persistently.
Afterwards, you can attach the logs when requesting support with a VM problem. For details, see Attaching libvirt debug logs to support requests.
20.1.2. Enabling libvirt debug logs persistently
To ensure that your virtualization host automatically logs debug information, configure libvirt debug logging to be automatically enabled whenever libvirt starts.
By default, virtqemud is the main libvirt daemon in RHEL 10. To make persistent changes in the libvirt configuration, you must edit the virtqemud.conf file, located in the /etc/libvirt directory.
Procedure
- Open the virtqemud.conf file in an editor.
- Replace or set the filters according to your requirements.
Table 20.1. Debugging filter values

| Value | Description |
|---|---|
| 1 | Logs all messages generated by libvirt. |
| 2 | Logs all non-debugging information. |
| 3 | Logs all warning and error messages. This is the default value. |
| 4 | Logs only error messages. |
Sample daemon settings for logging filters
The following settings:

- Log all error and warning messages from the remote, util.json, and rpc layers
- Log only error messages from the event layer.
- Save the filtered logs to /var/log/libvirt/libvirt.log

log_filters="3:remote 4:event 3:util.json 3:rpc"
log_outputs="1:file:/var/log/libvirt/libvirt.log"
- Save and exit.
Restart the libvirt daemon:

$ systemctl restart virtqemud.service
20.1.3. Enabling libvirt debug logs during runtime
To temporarily gather virtualization debugging information, you can modify the libvirt daemon’s runtime settings to enable debug logs and save them to an output file.
This is useful when restarting the libvirt daemon is not an option, for example because restarting would make the problem disappear, or because another process, such as a migration or backup, is running at the same time. Modifying runtime settings is also useful if you want to try a command without editing the configuration files or restarting the daemon.
Prerequisites
- Make sure the libvirt-admin package is installed.
Procedure
Optional: Back up the active set of log filters.
# virt-admin -c virtqemud:///system daemon-log-filters >> virt-filters-backup

This makes it possible to restore the active set of filters after generating the logs. If you do not restore the filters, the messages continue to be logged, which may affect system performance.
Use the virt-admin utility to enable debugging and set the filters according to your requirements.

Table 20.2. Debugging filter values

| Value | Description |
|---|---|
| 1 | Logs all messages generated by libvirt. |
| 2 | Logs all non-debugging information. |
| 3 | Logs all warning and error messages. This is the default value. |
| 4 | Logs only error messages. |
Sample virt-admin setting for logging filters
The following command:

- Logs all error and warning messages from the remote, util.json, and rpc layers
- Logs only error messages from the event layer.

# virt-admin -c virtqemud:///system daemon-log-filters "3:remote 4:event 3:util.json 3:rpc"
Use the virt-admin utility to save the logs to a specific file or directory. For example, the following command saves the log output to the libvirt.log file in the /var/log/libvirt/ directory:

# virt-admin -c virtqemud:///system daemon-log-outputs "1:file:/var/log/libvirt/libvirt.log"

Optional: You can also remove the filters to generate a log file that contains all VM-related information. However, this is not recommended, because the file may contain a large amount of redundant information produced by libvirt's modules.
Use the virt-admin utility to specify an empty set of filters:

# virt-admin -c virtqemud:///system daemon-log-filters
 Logging filters:
Optional: Restore the filters to their original state by using the backup file that you created previously.
# virt-admin -c virtqemud:///system daemon-log-filters "<original-filters>"

In this command, replace <original-filters> with the content of virt-filters-backup.

Note that if you do not restore the filters, the messages continue to be logged, which may affect system performance.
20.1.4. Attaching libvirt debug logs to support requests
To diagnose and resolve virtual machine (VM) problems, you might have to request additional support. Attaching the debug logs to the support request is highly recommended to ensure that the support team has access to all the information they need to provide a quick resolution of the VM-related problem.
Procedure
- To report a problem and request support, open a support case.
Based on the encountered problems, attach the following logs along with your report:
- For problems with the libvirt service, attach the /var/log/libvirt/libvirt.log file from the host.
- For problems with a specific VM, attach its respective log file. For example, for the testguest1 VM, attach the testguest1.log file, which can be found at /var/log/libvirt/qemu/testguest1.log.
20.2. Dumping a virtual machine core
To analyze why a virtual machine (VM) crashed or malfunctioned, you can dump the VM core to a file on disk for later analysis and diagnostics.
20.2.1. How virtual machine core dumping works
Virtual machine (VM) core dumps save information about the state of a VM to help you debug it.
A VM requires numerous running processes to work accurately and efficiently. In some cases, a running VM might terminate unexpectedly or malfunction while you are using it. Restarting the VM might cause the data to be reset or lost, which makes it difficult to diagnose the exact problem that caused the VM to crash.
In such cases, you can use the virsh dump utility to save (or dump) the core of a VM to a file before you reboot the VM. The core dump file contains a raw physical memory image of the VM, which provides detailed information about the VM's state. This information can be used to diagnose VM problems, either manually or by using a tool such as the crash utility.
20.2.2. Creating a virtual machine core dump file
A virtual machine (VM) core dump contains detailed information about the state of a VM at any given time. This information is similar to a snapshot of the VM, and can help you detect problems if a VM malfunctions or shuts down suddenly.
Prerequisites
- Make sure you have sufficient disk space to save the file. Note that the space occupied by the VM core dump depends on the amount of RAM allocated to the VM.
Procedure
Use the virsh dump utility.

For example, the following command dumps the memory and the CPU common register file of the test-guest1 VM to the sample-core.file file in the /core/file directory:

# virsh dump test-guest1 /core/file/sample-core.file --memory-only
Domain 'test-guest1' dumped to /core/file/sample-core.file

Important: The crash utility no longer supports the default file format of the virsh dump command. To analyze a core dump file by using crash, you must create the file with the --memory-only option. Additionally, you must use the --memory-only option when creating a core dump file to attach to a Red Hat Support Case.
Troubleshooting
If the virsh dump command fails with a System is deadlocked on memory error, ensure you are assigning sufficient memory for the core dump file. To do so, use the following crashkernel option value. Alternatively, do not use crashkernel at all, which assigns core dump memory automatically.
crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
20.3. Backtracing virtual machine processes
When a process related to a virtual machine (VM) malfunctions, you can use the gstack command along with the process identifier (PID) to generate an execution stack trace of the malfunctioning process. If the process is a part of a thread group, then all the threads are traced as well.
Prerequisites
Ensure that the GDB package is installed.

For details about installing GDB and the available components, see Installing the GNU Debugger.
You can find the PID by using the
pgrepcommand followed by the name of the process. For example:pgrep libvirt 22014 22025
# pgrep libvirt 22014 22025Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Procedure
Use the gstack utility followed by the PID of the process you want to backtrace.

For example, the following command backtraces the libvirt process with the PID 22014:
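# gstack 22014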
For more information on using gstack, see the gstack man page on your system.
20.4. Additional resources for reporting virtual machine problems and providing logs
To request additional help and support with virtual machines, as well as other Red Hat software, you can do any of the following:
Raise a service request by using the redhat-support-tool command-line utility, the Red Hat Portal UI, or several FTP methods.
- To report problems and request support, see Open a Support Case.
Upload the SOS Report and the log files when you submit a service request.
This ensures that the Red Hat support engineer has all the necessary diagnostic information for reference.
- For more information about SOS reports, see the Red Hat Knowledgebase solution What is an SOS Report and how to create one in Red Hat Enterprise Linux?
- For information about attaching log files, see the Red Hat Knowledgebase solution How to provide files to Red Hat Support?
Chapter 21. Backing up and recovering virtual machines
To ensure that you do not lose the setup and data of a virtual machine (VM) if your current host becomes unavailable or non-functional, back up the configuration and storage of the VM. Afterwards, for disaster recovery, you can create a new VM that uses the backed up configuration and storage.
If you plan to create a backup to recover a VM while the host remains functional, it might be more efficient to use snapshots instead. For details, see Saving and restoring virtual machine state by using snapshots.
21.1. Backing up a virtual machine
To ensure that you do not lose the setup and data of a virtual machine (VM) if your current host becomes unavailable or non-functional, back up the configuration and disks of the VM. This allows you to later recover the backup on a functional host.
Procedure
Save the XML configuration of the VM to a separate file. For example, to save the configuration of the testguest1 VM to the /home/backup/testguest1-backup.xml file, use the following command:

# virsh dumpxml testguest1 > /home/backup/testguest1-backup.xml
Back up the disks of the VM. To do so, use the
virsh backup-beginutility. Optional: Customize your backup settings. By default,
libvirtcreates copies of all disks in the VM in the same directory as the original disks. If you want to use a different configuration for your backup, create an XML file with your required settings. For example:Copy to Clipboard Copied! Toggle word wrap Toggle overflow This example configuration ensures that the
This example configuration ensures that the vda disk is backed up by using the libvirt defaults, the vdb disk is backed up in the raw format as /home/backup/vdb.backup, and that the vdc disk is not backed up.

Start the VM.
# virsh start <vm-name>

Back up the disks of the VM by using the virsh backup-begin utility. The following command uses the default backup settings, which create copies of all disks that the VM uses and save them in the same directory as the original disks:

# virsh backup-begin <vm-name>

Optionally, to apply custom backup settings, you can specify the backup XML configuration that you created previously:
# virsh backup-begin <vm-name> --backupxml <backup-XML-location>
Verification
Ensure that the backup operation has completed.
# virsh domjobinfo <vm-name> --completed

Example successful output:
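For a completed backup, the output resembles the following sketch; the values shown are illustrative:

Job type:         Completed
Operation:        Backup
Time elapsed:     183          ms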
Check the target location of the backup disks. For example, if you used the default backup configuration and the VM disks are located in the default /var/lib/libvirt/images/ directory, use the following command:

# ls -l /var/lib/libvirt/images

Example successful output:
-rw-------. 1 qemu qemu 42956488704 Oct 10 15:49 RHEL_10_beta.qcow2
-rw-------. 1 root root 42956488704 Oct 29 16:12 RHEL_10_beta.qcow2.1761750144
-rw-------. 1 qemu qemu      196688 Oct 10 15:49 extra-storage.qcow2
-rw-------. 1 root root      196688 Oct 29 16:12 extra-storage.qcow2.1761750144
Next steps

- Recovering a virtual machine
21.2. Recovering a virtual machine
For disaster recovery of your virtual machines (VMs), you can create a new VM that uses the XML configuration and storage that you previously backed up.
Prerequisites
- You have created a backup of the VM’s XML configuration and storage. For instructions, see Backing up a virtual machine.
Procedure
Inspect the backup XML configuration of the VM for the specific storage settings.
# cat </path/to/backup.xml>

Example output section:

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' discard='unmap'/>
  <source file='/var/lib/libvirt/images/vm-name.qcow2' index='3'/>

- Move the backup disks to the locations specified in the backup XML and ensure that their names correspond with the values in the XML.
Define a new VM based on the backup XML.
# virsh define --file </path/to/backup.xml>

Optional: Modify the XML configuration of the newly created VM to adjust for settings that might be different from the original host, such as the network.
# virsh edit <vm-name>
Verification
Start the VM and check that the guest operating system has been recovered correctly.
# virsh start <vm-name>
Chapter 22. Creating nested virtual machines
If you require a different host operating system than what your local host is running, you can use nested virtual machines (VMs). This eliminates the need for additional physical hardware.
In most environments, nested virtualization is only available as a Technology Preview in RHEL 10.
For detailed descriptions of the supported and unsupported environments, see Support limitations for nested virtualization.
22.1. What is nested virtualization?
With nested virtualization, you can run virtual machines (VMs) within other VMs. A standard VM that runs on a physical host can also act as a second hypervisor and create its own VMs.
Nested virtualization terminology
- Level 0 (L0) - A physical host, a bare-metal machine.
- Level 1 (L1) - A standard VM, running on an L0 physical host, that can act as an additional virtual host.
- Level 2 (L2) - A nested VM running on an L1 virtual host.

Important: The second level of virtualization severely limits the performance of an L2 VM. For this reason, nested virtualization is primarily intended for development and testing scenarios, such as:
- Testing larger virtual deployments on a limited amount of physical resources
In most environments, nested virtualization is only available as a Technology Preview in RHEL 10.
For detailed descriptions of the supported and unsupported environments, see Support limitations for nested virtualization.
22.2. Support limitations for nested virtualization
In most environments, nested virtualization is only available as a Technology Preview in RHEL 10.
However, you can use a Windows virtual machine (VM) with the Windows Subsystem for Linux (WSL2) to create a virtual Linux environment inside the Windows VM. This use case is fully supported on RHEL 10 under specific conditions.
To learn more about the relevant terminology for nested virtualization, see What is nested virtualization?
Supported environments
To create a supported deployment of nested virtualization, create an L1 Windows VM on a RHEL 9 or RHEL 10 L0 host and use WSL2 to create a virtual Linux environment inside the L1 Windows VM. Currently, this is the only supported nested environment.
The L0 host must be an Intel or AMD system. Other architectures, such as ARM or IBM Z, are currently not supported.
You must use only the following operating system versions:
| On the L0 host: | On the L1 VMs: |
|---|---|
| RHEL 10.0 and later | Windows Server 2019 and later with WSL2 |
| | Windows 10 and later with WSL2 |
See Microsoft documentation for instructions on installing WSL2 and choosing supported Linux distributions.
To create a supported nested environment, use one of the following procedures:
Technology Preview environments
These nested environments are available only as a Technology Preview and are not supported.
The L0 host must be an Intel, AMD, or IBM Z system. Nested virtualization currently does not work on other architectures, such as ARM.
You must use only the following operating system versions for the deployment to work:
| On the L0 host: | On the L1 VMs: | On the L2 VMs: |
|---|---|---|
| RHEL 10.0 and later | RHEL 9.6 and later | RHEL 9.6 and later |
| | RHEL 10.0 and later | RHEL 10.0 and later |
| | Windows Server 2016 and later with Hyper-V | Windows Server 2019 and later |
| | Windows 10 and later with Hyper-V | |
Creating RHEL L1 VMs is not tested when used in other Red Hat virtualization offerings. These include:
- Red Hat Virtualization
- Red Hat OpenStack Platform
- OpenShift Virtualization
To create a Technology Preview nested environment, use one of the following procedures:
Hypervisor limitations
- Currently, Red Hat tests nesting only on RHEL-KVM. When RHEL is used as the L0 hypervisor, you can use RHEL or Windows as the L1 hypervisor.
- When using an L1 RHEL VM on a non-KVM L0 hypervisor, such as VMware ESXi or Amazon Web Services (AWS), creating L2 VMs in the RHEL guest operating system has not been tested and might not work.
Feature limitations
- Use of L2 VMs as hypervisors and creating L3 guests has not been properly tested and is not expected to work.
- Migrating VMs currently does not work on AMD systems if nested virtualization has been enabled on the L0 host.
- On an IBM Z system, huge-page backing storage and nested virtualization cannot be used at the same time:
# modprobe kvm hpage=1 nested=1
modprobe: ERROR: could not insert 'kvm': Invalid argument
# dmesg | tail -1
[90226.508366] kvm-s390: A KVM host that supports nesting cannot back its KVM guests with huge pages
Some features available on the
L0host might be unavailable on theL1hypervisor.
22.3. Creating a nested virtual machine on Intel
To create an L2 nested virtual machine (VM) on an Intel host, you must enable and configure nested virtualization capabilities on both the L0 host and the L1 VM.
In most environments, nested virtualization is only available as a Technology Preview in RHEL 10.
For detailed descriptions of the supported and unsupported environments, see Support limitations for nested virtualization.
Prerequisites
- An L0 RHEL 10 host running an L1 VM.
- The hypervisor CPU must support nested virtualization. To verify, use the cat /proc/cpuinfo command on the L0 hypervisor. If the output of the command includes the vmx and ept flags, creating L2 VMs is possible. This is generally the case on Intel Xeon v3 cores and later.

Ensure that nested virtualization is enabled on the L0 host:

# cat /sys/module/kvm_intel/parameters/nested

- If the command returns 1 or Y, the feature is enabled. Skip the remaining prerequisite steps, and continue with the Procedure section.
If the command returns 0 or N but your system supports nested virtualization, use the following steps to enable the feature.
Unload the kvm_intel module:

# modprobe -r kvm_intel

Activate the nesting feature:
# modprobe kvm_intel nested=1

The nesting feature is now enabled, but only until the next reboot of the L0 host. To enable it permanently, add the following line to the /etc/modprobe.d/kvm.conf file:

options kvm_intel nested=1
Procedure
Configure your L1 VM for nested virtualization.
Open the XML configuration of the VM. The following example opens the configuration of the Intel-L1 VM:
# virsh edit Intel-L1

Configure the VM to use host-passthrough CPU mode by editing the <cpu> element:

<cpu mode='host-passthrough'/>

If you require the VM to use a specific CPU model, configure the VM to use custom CPU mode. Inside the <cpu> element, add a <feature policy='require' name='vmx'/> element and a <model> element with the CPU model specified inside. For example:

<cpu mode='custom' match='exact' check='partial'>
  <model fallback='allow'>Haswell-noTSX</model>
  <feature policy='require' name='vmx'/>
  ...
</cpu>
- Create an L2 VM within the L1 VM. To do this, follow the same procedure as when creating the L1 VM.
22.4. Creating a nested virtual machine on AMD
To create an L2 nested virtual machine (VM) on an AMD host, you must enable and configure nested virtualization capabilities on both the L0 host and the L1 VM.
In most environments, nested virtualization is only available as a Technology Preview in RHEL 10.
For detailed descriptions of the supported and unsupported environments, see Support limitations for nested virtualization.
Prerequisites
- An L0 RHEL 10 host running an L1 virtual machine (VM).
- The hypervisor CPU must support nested virtualization. To verify, use the cat /proc/cpuinfo command on the L0 hypervisor. If the output of the command includes the svm and npt flags, creating L2 VMs is possible. This is generally the case on AMD EPYC cores and later.

Ensure that nested virtualization is enabled on the L0 host:

# cat /sys/module/kvm_amd/parameters/nested

- If the command returns 1 or Y, the feature is enabled. Skip the remaining prerequisite steps, and continue with the Procedure section.
If the command returns 0 or N, use the following steps to enable the feature.
- Stop all running VMs on the L0 host.
Unload the kvm_amd module:

# modprobe -r kvm_amd

Activate the nesting feature:

# modprobe kvm_amd nested=1

The nesting feature is now enabled, but only until the next reboot of the L0 host. To enable it permanently, add the following to the /etc/modprobe.d/kvm.conf file:

options kvm_amd nested=1
Procedure
Configure your L1 VM for nested virtualization.
Open the XML configuration of the VM. The following example opens the configuration of the AMD-L1 VM:
# virsh edit AMD-L1

Configure the VM to use host-passthrough CPU mode by editing the <cpu> element:

<cpu mode='host-passthrough'/>

If you require the VM to use a specific CPU model, configure the VM to use custom CPU mode. Inside the <cpu> element, add a <feature policy='require' name='svm'/> element and a <model> element with the CPU model specified inside. For example:

<cpu mode="custom" match="exact" check="none">
  <model fallback="allow">EPYC-IBPB</model>
  <feature policy="require" name="svm"/>
  ...
</cpu>
- Create an L2 VM within the L1 VM. To do this, follow the same procedure as when creating the L1 VM.
22.5. Creating a nested virtual machine on IBM Z
To create an L2 nested virtual machine (VM) on an IBM Z host, you must enable and configure nested virtualization capabilities on both the L0 host and the L1 VM.
IBM Z does not really provide a bare-metal L0 host. Instead, user systems are set up on a logical partition (LPAR), which is already a virtualized system, so it is often referred to as L1. However, for better alignment with other architectures in this guide, the following steps refer to IBM Z as if it provides an L0 host.
To learn more about nested virtualization, see: What is nested virtualization?
In most environments, nested virtualization is only available as a Technology Preview in RHEL 10.
For detailed descriptions of the supported and unsupported environments, see Support limitations for nested virtualization.
Prerequisites
- An L0 RHEL 10 host running an L1 virtual machine (VM).
- The hypervisor CPU must support nested virtualization. To verify this is the case, use the cat /proc/cpuinfo command on the L0 hypervisor. If the output of the command includes the sie flag, creating L2 VMs is possible.

Ensure that nested virtualization is enabled on the L0 host:

# cat /sys/module/kvm/parameters/nested

- If the command returns 1 or Y, the feature is enabled. Skip the remaining prerequisite steps, and continue with the Procedure section.
If the command returns 0 or N, use the following steps to enable the feature.
- Stop all running VMs on the L0 host.
Unload the kvm module:

# modprobe -r kvm

Activate the nesting feature:

# modprobe kvm nested=1

The nesting feature is now enabled, but only until the next reboot of the L0 host. To enable it permanently, add the following line to the /etc/modprobe.d/kvm.conf file:

options kvm nested=1
Procedure
- Create an L2 VM within the L1 VM. To do this, follow the same procedure as when creating the L1 VM.
Chapter 23. Feature support and limitations in RHEL 10 virtualization
When using Red Hat Enterprise Linux 10 (RHEL 10) virtualization in a production environment, be aware of the relevant feature support and restrictions.
23.1. How RHEL virtualization support works
A set of support limitations applies to virtualization in Red Hat Enterprise Linux 10 (RHEL 10). This means that when you use certain features or exceed a certain amount of allocated resources when using virtual machines in RHEL 10, Red Hat will provide only limited support for these guests unless you have a specific subscription plan.
Features listed in Recommended features in RHEL 10 virtualization have been tested and certified by Red Hat to work with the KVM hypervisor on a RHEL 10 system. Therefore, they are fully supported and recommended for use in virtualization in RHEL 10.
Features listed in Unsupported features in RHEL 10 virtualization may work, but are not supported and not intended for use in RHEL 10. Therefore, Red Hat strongly recommends not using these features in RHEL 10 with KVM.
Resource allocation limits in RHEL 10 virtualization lists the maximum amount of specific resources supported on a KVM guest in RHEL 10. Guests that exceed these limits are considered as Technology Preview by Red Hat.
In addition, unless stated otherwise, all features and solutions used by the documentation for RHEL 10 virtualization are supported. However, some of these have not been completely tested and therefore may not be fully optimized.
Many of these limitations do not apply to other virtualization solutions provided by Red Hat, such as OpenShift Virtualization or Red Hat OpenStack Platform (RHOSP).
23.2. Recommended features in RHEL 10 virtualization
The following features are recommended for use with the KVM hypervisor included with Red Hat Enterprise Linux 10 (RHEL 10):
Host system architectures
RHEL 10 with KVM is only supported on the following host architectures:
- AMD64 and Intel 64
- IBM Z - IBM z14 systems and later
- ARM 64
Any other hardware architectures are not supported for using RHEL 10 as a KVM virtualization host, and Red Hat highly discourages doing so.
Guest operating systems
Red Hat provides support with KVM virtual machines that use specific guest operating systems (OSs). For a detailed list of certified guest OSs, see Certified Guest Operating Systems in the Red Hat KnowledgeBase.
Note, however, that by default, your guest OS does not use the same subscription as your host. Therefore, you must activate a separate license or subscription for the guest OS to work properly.
In addition, the pass-through devices that you attach to the VM must be supported by both the host OS and the guest OS.
Similarly, for optimal function of your deployment, Red Hat recommends that the CPU model and features that you define in the XML configuration of a VM are supported by both the host OS and the guest OS.
To view the certified CPUs and other hardware for various versions of RHEL, see the Red Hat Ecosystem Catalog.
Machine types
To ensure that your VM is compatible with your host architecture and that the guest OS runs optimally, the VM must use an appropriate machine type.
In RHEL 10, pc-i440fx-rhel7.6.0 and earlier machine types, which were default in earlier major versions of RHEL, are no longer supported. As a consequence, attempting to start a VM with such machine types on a RHEL 10 host fails with an unsupported configuration error. If you encounter this problem after upgrading your host to RHEL 10, see the Red Hat Knowledgebase solution Invalid virtual machines that used to work with RHEL 9 and newer hypervisors.
When creating a VM by using the command line, the virt-install utility provides multiple methods of setting the machine type.
- When you use the --os-variant option, virt-install automatically selects the machine type recommended for your host CPU and supported by the guest OS.
- If you do not use --os-variant or require a different machine type, use the --machine option to specify the machine type explicitly.
- If you specify a --machine value that is unsupported or not compatible with your host, virt-install fails and displays an error message.
The recommended machine types for KVM virtual machines on supported architectures, and the corresponding values for the --machine option, are as follows. Y stands for the latest minor version of RHEL 10.
| Architecture | Recommended machine type | Machine type value |
|---|---|---|
| Intel 64 and AMD64 (x86_64) | pc-q35-rhel10.Y.0 | --machine=pc-q35-rhel10.Y.0 |
| IBM Z (s390x) | s390-ccw-virtio-rhel10.Y.0 | --machine=s390-ccw-virtio-rhel10.Y.0 |
| ARM 64 (AArch64) | virt-rhel10.Y.0 | --machine=virt-rhel10.Y.0 |
To obtain the machine type of an existing VM:
# virsh dumpxml VM-name | grep machine=
To view the full list of machine types supported on your host:
# /usr/libexec/qemu-kvm -M help
23.3. Unsupported features in RHEL 10 virtualization
The following features are not supported by the KVM hypervisor included with Red Hat Enterprise Linux 10 (RHEL 10):
Many of these limitations may not apply to other virtualization solutions provided by Red Hat, such as OpenShift Virtualization or Red Hat OpenStack Platform (RHOSP).
Features supported by other virtualization solutions are described below.
For support details of the respective virtualization solutions, consult the relevant documentation.
Host system architectures
RHEL 10 with KVM is not supported on any host architectures that are not listed in Recommended features in RHEL 10 virtualization.
Guest operating systems
KVM virtual machines (VMs) that use the following guest operating systems (OSs) are not supported on a RHEL 10 host:
- Windows 8.1 and earlier
- Windows Server 2012 R2 and earlier
- macOS
- Solaris for x86 systems
- Any operating system released before 2009
For a list of guest OSs supported on RHEL hosts and other virtualization solutions, see Certified Guest Operating Systems in Red Hat OpenStack Platform, Red Hat Virtualization, OpenShift Virtualization and Red Hat Enterprise Linux with KVM.
Creating VMs in containers
Red Hat does not support creating KVM virtual machines in any type of container that includes the elements of the RHEL 10 hypervisor (such as the QEMU emulator or the libvirt package).
To create VMs in containers, Red Hat recommends using the OpenShift Virtualization offering.
Specific virsh commands and options
Not every parameter that you can use with the virsh utility has been tested and certified as production-ready by Red Hat. Therefore, any virsh commands and options that are not explicitly recommended by Red Hat documentation may not work correctly, and Red Hat recommends not using them in your production environment.
Notably, unsupported virsh commands include the following:
- virsh iface-* commands, such as virsh iface-start and virsh iface-destroy
- virsh blkdeviotune
- virsh snapshot-* commands that do not support external snapshots. For details, see Support limitations for virtual machine snapshots.
The QEMU command line
QEMU is an essential component of the virtualization architecture in RHEL 10, but it is difficult to manage manually, and improper QEMU configurations might cause security vulnerabilities. Therefore, using qemu-* command-line utilities, such as qemu-kvm, is not supported by Red Hat. Instead, use libvirt utilities, such as virt-install, virt-xml, and supported virsh commands, because these orchestrate QEMU according to best practices. However, the qemu-img utility is supported for the management of virtual disk images.
vCPU hot unplug
Removing a virtual CPU (vCPU) from a running VM, also referred to as a vCPU hot unplug, is not supported in RHEL 10. Note that adding vCPUs to a running VM, or vCPU hot plug, is supported on Intel 64 and AMD64 CPU architectures. For more information, see Overview of virtualization features support in RHEL 10.
RDMA-based migration
In RHEL 10, migrating virtual machines (VMs) by using Remote Direct Memory Access (RDMA) is no longer supported. Therefore, Red Hat highly discourages using the rdma URI for VM migration.
QEMU-side I/O throttling
Using the virsh blkdeviotune utility to configure maximum input and output levels for operations on virtual disk, also known as QEMU-side I/O throttling, is not supported in RHEL 10.
To set up I/O throttling in RHEL 10, use virsh blkiotune. This is also known as libvirt-side I/O throttling. For instructions, see Disk I/O throttling in virtual machines.
Other solutions:
- QEMU-side I/O throttling is also supported in RHOSP. For more information, see Red Hat Knowledgebase solutions Setting Resource Limitation on Disk and the Use Quality-of-Service Specifications section in the RHOSP Storage Guide.
- In addition, OpenShift Virtualizaton supports QEMU-side I/O throttling as well.
Storage live migration
Moving the disk of a VM to another location while the VM is running on a single host, for example by using the virsh blockcopy command, is not supported in RHEL 10.
Note, however, that migrating a VM along with its storage, for example by using the virsh migrate --live --copy-storage-all command, is supported.
Other solutions:
- Storage live migration is supported in RHOSP, but with some limitations. For details, see Migrate a Volume.
vHost Data Path Acceleration
On RHEL 10 hosts, it is possible to configure vHost Data Path Acceleration (vDPA) for virtio devices, but Red Hat currently does not support this feature, and strongly discourages its use in production environments.
vhost-user
RHEL 10 does not support the implementation of a user-space vHost interface.
Other solutions:
-
vhost-useris supported in RHOSP, but only forvirtio-netinterfaces. For more information, see the Red Hat Knowledgebase solution virtio-net implementation and vhost user ports. -
OpenShift Virtualization supports
vhost-useras well.
S3 and S4 system power states
Suspending a VM to the Suspend to RAM (S3) or Suspend to disk (S4) system power states is not supported. Note that these features are disabled by default, and enabling them will make your VM not supportable by Red Hat.
Note that the S3 and S4 states are also currently not supported in any other virtualization solution provided by Red Hat.
virtio-crypto
Using the virtio-crypto device in RHEL 10 is not supported, and Red Hat strongly discourages its use.
Note that virtio-crypto devices are also not supported in any other virtualization solution provided by Red Hat.
virtio-multitouch-device, virtio-multitouch-pci
Using the virtio-multitouch-device and virtio-multitouch-pci devices in RHEL 10 is not supported, and Red Hat strongly discourages their use.
Incremental live backup
Configuring a VM backup that only saves VM changes since the last backup, also known as incremental live backup, is not supported in RHEL 10, and Red Hat highly discourages its use.
Other solutions:

- Use third-party backup solutions instead.
net_failover
Using the net_failover driver to set up an automated network device failover mechanism is not supported in RHEL 10.
Note that net_failover is also currently not supported in any other virtualization solution provided by Red Hat.
TCG
QEMU and libvirt include a dynamic translation mode using the QEMU Tiny Code Generator (TCG). This mode does not require hardware virtualization support. However, TCG is not supported by Red Hat.
TCG-based guests can be recognized by examining their XML configuration, for example by using the virsh dumpxml command.
The configuration file of a TCG guest contains the following line:
<domain type='qemu'>

The configuration file of a KVM guest contains the following line:
<domain type='kvm'>
SR-IOV InfiniBand networking devices
Attaching InfiniBand networking devices to VMs using Single-root I/O virtualization (SR-IOV) is not supported.
23.4. Resource allocation limits in RHEL 10 virtualization
The following limits apply to virtualized resources that can be allocated to a single KVM virtual machine (VM) on a Red Hat Enterprise Linux 10 (RHEL 10) host. If your VM exceeds these limits, it is considered a Technology Preview by Red Hat.
Many of these limitations do not apply to other virtualization solutions provided by Red Hat, such as OpenShift Virtualization or Red Hat OpenStack Platform (RHOSP).
Maximum vCPUs per VM
For the maximum amount of vCPUs and memory that is supported on a single VM running on a RHEL 10 host, see: Virtualization limits for Red Hat Enterprise Linux with KVM
PCI devices per VM
RHEL 10 supports 32 PCI device slots per VM bus, and 8 PCI functions per device slot. This gives a theoretical maximum of 256 PCI functions per bus when multi-function capabilities are enabled in the VM, and no PCI bridges are used.
Each PCI bridge adds a new bus, potentially enabling another 256 device addresses. However, some buses do not make all 256 device addresses available for the user; for example, the root bus has several built-in devices occupying slots.
Virtualized IDE devices
KVM is limited to a maximum of 4 virtualized IDE devices per VM.
23.5. How virtualization on IBM Z differs from AMD64 and Intel 64
KVM virtualization in RHEL 10 on IBM Z systems differs from KVM on AMD64 and Intel 64 systems in the following:
- PCI and USB devices
Virtual PCI and USB devices are not supported on IBM Z. This also means that virtio-*-pci devices are unsupported, and virtio-*-ccw devices should be used instead. For example, use virtio-net-ccw instead of virtio-net-pci.

Note that direct attachment of PCI devices, also known as PCI passthrough, is supported.
- Supported guest operating system
- Red Hat only supports VMs hosted on IBM Z if they use RHEL 8, 9, or 10 as their guest operating system.
- Device boot order
IBM Z does not support the <boot dev='device'> XML configuration element. To define the device boot order, use the <boot order='number'> element in the <devices> section of the XML.
Note that using <boot order='number'> for boot order management is recommended on all host architectures.
In addition, you can select the required boot entry by using the architecture-specific loadparm attribute in the <boot> element. For example, the following determines that the disk is used first in the boot sequence and, if a Linux distribution is available on that disk, selects the second boot entry:
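A minimal sketch of such a configuration (the surrounding disk element, driver, and file path are illustrative):
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/vm.qcow2'/>
  <target dev='vda' bus='virtio'/>
  <boot order='1' loadparm='2'/>
</disk>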
- Memory hot plug
- Adding memory to a running VM is not possible on IBM Z. Note that removing memory from a running VM (memory hot unplug) is also not possible on IBM Z, as well as on AMD64 and Intel 64.
- NUMA topology
Non-Uniform Memory Access (NUMA) topology for CPUs is not supported by libvirt on IBM Z. Therefore, tuning vCPU performance by using NUMA is not possible on these systems.
- GPU devices
- Assigning GPU devices is not supported on IBM Z systems.
- vfio-ap
- VMs on an IBM Z host can use the vfio-ap cryptographic device passthrough, which is not supported on any other architecture.
- vfio-ccw
- VMs on an IBM Z host can use the vfio-ccw disk device passthrough, which is not supported on any other architecture.
- SMBIOS
- SMBIOS configuration is not available on IBM Z.
- Live VM memory dumps
Creating a memory dump of a VM might fail on IBM Z when using the --live option. In rare cases, creating a VM snapshot might also fail. To prevent these issues, use RHEL 10.2 or later on the host and avoid using the --live option.
- Watchdog devices
If using watchdog devices in your VM on an IBM Z host, use the diag288 model. For example:
<devices>
  <watchdog model='diag288' action='poweroff'/>
</devices>
- kvm-clock
The kvm-clock service is specific to AMD64 and Intel 64 systems, and does not have to be configured for VM time management on IBM Z.
- v2v and p2v
The virt-v2v and virt-p2v utilities are supported only on the AMD64 and Intel 64 architectures, and are not provided on IBM Z.
- Migrations
To successfully migrate to a later host model (for example from IBM z14 to z15), or to update the hypervisor, use the host-model CPU mode, as shown in the sketch after this list. The host-passthrough and maximum CPU modes are not recommended, as they are generally not migration-safe.
If you want to specify an explicit CPU model in the custom CPU mode, follow these guidelines:
- Do not use CPU models that end with -base.
- Do not use the qemu, max, or host CPU models.
To successfully migrate to an older host model (such as from z15 to z14), or to an earlier version of QEMU, KVM, or the RHEL kernel, use the CPU type of the oldest available host model without -base at the end.
- If you have both the source host and the destination host running, you can instead use the virsh hypervisor-cpu-baseline command on the destination host to obtain a suitable CPU model. For details, see Verifying host CPU compatibility for virtual machine migration.
- For more information about supported machine types in RHEL 10, see Recommended features in RHEL 10 virtualization.
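A minimal sketch of a migration-safe CPU definition in the domain XML, using the host-model mode:
<cpu mode='host-model'/>
When the VM starts, libvirt expands this definition to the closest model supported by the host, which keeps the configuration portable between compatible hosts.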
- PXE installation and booting
When using PXE to run a VM on IBM Z, a specific configuration is required for the pxelinux.cfg/default file.
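For example, a minimal sketch of such a file (the kernel and initrd image names and the repository URL are illustrative placeholders):
# pxelinux.cfg/default
default linux
label linux
  kernel kernel.img
  initrd initrd.img
  append ip=dhcp inst.repo=http://example.com/redhat/BaseOS/s390x/os/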
- Secure Execution
You can boot a VM with a prepared secure guest image by defining <launchSecurity type="s390-pv"/> in the XML configuration of the VM. This encrypts the VM’s memory to protect it from unwanted access by the hypervisor.
Note that the following features are not supported when running a VM in secure execution mode:
- Device passthrough by using vfio
- Obtaining memory information by using virsh domstats and virsh memstat
- The memballoon and virtio-rng virtual devices
- Memory backing by using huge pages
- Live and non-live VM migrations
- Saving and restoring VMs
- VM snapshots, including memory snapshots (using the --memspec option)
- Full memory dumps. Instead, specify the --memory-only option for the virsh dump command (see the usage sketch after this list).
- 248 or more vCPUs. The vCPU limit for secure guests is 247.
- Nested virtualization
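For reference, a minimal usage sketch of creating a memory-only dump of a secure-execution guest (the guest name and output path are hypothetical):
# virsh dump vm-name /tmp/vm-name.dump --memory-only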
23.6. How virtualization on ARM 64 differs from AMD64 and Intel 64
KVM virtualization in RHEL 10 on ARM 64 systems (also known as AArch64) is different from KVM on AMD64 and Intel 64 systems in several aspects. These include, but are not limited to, the following:
- Guest operating systems
- The only guest operating systems currently supported on ARM 64 virtual machines (VMs) are RHEL 9 and RHEL 10.
- vCPU hot plug and hot unplug
- Attaching a virtual CPU (vCPU) to a running VM, also referred to as a vCPU hot plug, is currently not supported on ARM 64 hosts. In addition, as on AMD64 and Intel 64 hosts, removing a vCPU from a running VM (vCPU hot unplug) is not supported on ARM 64.
- SecureBoot
- The SecureBoot feature is not available on ARM 64 systems.
- Migration
- Migrating VMs between ARM 64 hosts is currently only supported between hosts with identical CPUs, firmware, and memory page size.
- Memory page sizes
ARM 64 currently supports running VMs with 64 KB or 4 KB memory page sizes; however, both the host and the guest must use the same memory page size. Configurations where the host and the guest have different memory page sizes are not supported.
However, you can install alternative page size kernels. By default, RHEL 10 uses a 4 KB memory page size. If you want to run a VM with a 64 KB memory page size, your host must use a kernel with a 64 KB memory page size. When creating the VM, you must install it with the kernel-64k package, for example by including the following in the kickstart file:
%packages
-kernel
kernel-64k
%end
- Huge pages
ARM 64 hosts with a 64 KB memory page size support huge memory pages with the following sizes:
- 2 MB
- 512 MB
- 16 GB
When you use transparent huge pages (THP) on an ARM 64 host with a 64 KB memory page size, only 512 MB huge pages are supported.
ARM 64 hosts with a 4 KB memory page size support huge memory pages with the following sizes:
- 64 KB
- 2 MB
- 32 MB
- 1024 MB
When you use transparent huge pages (THP) on an ARM 64 host with a 4 KB memory page size, only 2 MB huge pages are supported.
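For example, to back a VM with 512 MB huge pages on a host with a 64 KB memory page size, the domain XML can include a memoryBacking element similar to the following sketch (assuming the host has 512 MB huge pages reserved in advance):
<memoryBacking>
  <hugepages>
    <page size='512' unit='M'/>
  </hugepages>
</memoryBacking>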
- SVE
The ARM 64 architecture provides the Scalable Vector Extension (SVE) feature. If the host supports SVE, using it in your VMs improves the speed of vector mathematics computations and string operations in these VMs.
The baseline level of SVE is enabled by default on host CPUs that support it. However, Red Hat recommends configuring each vector length explicitly, which ensures that the VM can be launched only on compatible hosts. To do so:
Verify that your CPU has the SVE feature:
# grep -m 1 Features /proc/cpuinfo | grep -w sve
Features: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm fcma dcpop sve
If the output of this command includes sve, or if its exit code is 0, your CPU supports SVE.
Open the XML configuration of the VM you want to modify:
# virsh edit vm-name
<cpu>element similarly to the following:Copy to Clipboard Copied! Toggle word wrap Toggle overflow This example explicitly enables SVE vector lengths 128, 256, and 512, and explicitly disables vector length 384.
- CPU models
VMs on ARM 64 currently support only the host-passthrough CPU model.
- PXE
Booting in the Preboot Execution Environment (PXE) is functional but not supported, and Red Hat strongly discourages using it in production environments.
If you require PXE booting, it is possible only with the virtio-net-pci network interface controller (NIC).
- EDK2
ARM 64 guests use the UEFI firmware included in the edk2-aarch64 package, which provides an interface similar to OVMF UEFI on AMD64 and Intel 64, and implements a similar set of features.
Specifically, edk2-aarch64 provides a built-in UEFI shell, but does not support the following functionality:
- SecureBoot
- Management Mode
- kvm-clock
The kvm-clock service does not have to be configured for time management in VMs on ARM 64.
- Peripheral devices
ARM 64 systems support a partly different set of peripheral devices than AMD64 and Intel 64 systems:
- Only PCIe topologies are supported.
- ARM 64 systems support virtio devices by using the virtio-*-pci drivers. Therefore, only virtio-pci devices are supported.
- The virtio-gpu driver is supported only for graphical installs.
- ARM 64 systems support usb-mouse and usb-tablet devices for graphical installs only. Other USB devices, USB passthrough, and USB redirect are not supported.
- Device assignment that uses Virtual Function I/O (VFIO) is supported only for NICs (physical and virtual functions).
- Emulated devices
The following devices are not supported on ARM 64:
- Emulated sound devices, such as ICH9, ICH6, or AC97.
- Emulated graphics cards, such as VGA cards.
- Emulated network devices, such as rtl8139.
- Nested virtualization
- Creating nested VMs is currently not possible on ARM 64 hosts.
- v2v and p2v
The virt-v2v and virt-p2v utilities are supported only on the AMD64 and Intel 64 architectures and are, therefore, not provided on ARM 64.
23.7. Overview of virtualization features support in RHEL 10
For comparative information about the support state of selected virtualization features in RHEL 10 across the available system architectures, consult the following tables.
| | Intel 64 and AMD64 | IBM Z | ARM 64 |
|---|---|---|---|
| KVM | Supported | Supported | Supported |
| | Intel 64 and AMD64 | IBM Z | ARM 64 |
|---|---|---|---|
| CPU hot plug | Supported | Supported | UNSUPPORTED |
| CPU hot unplug | UNSUPPORTED | UNSUPPORTED | UNSUPPORTED |
| Memory hot plug [a] | Supported | Supported | Supported |
| Memory hot unplug [b] | Supported | Supported | Supported |
| Peripheral device hot plug | Supported | Supported [c] | Supported |
| Peripheral device hot unplug | Supported | Supported [d] | Supported |
[a] Requires using the virtio-mem device.
[b] Requires using the virtio-mem device.
| | Intel 64 and AMD64 | IBM Z | ARM 64 |
|---|---|---|---|
| NUMA tuning | Supported | UNSUPPORTED | Supported |
| SR-IOV devices | Supported | UNSUPPORTED | Supported |
| virt-v2v and virt-p2v | Supported | UNSUPPORTED | UNAVAILABLE |
Note that some of the unsupported features are supported on other Red Hat products, such as Red Hat Virtualization and Red Hat OpenStack Platform. For more information, see Unsupported features in RHEL 10 virtualization.