Chapter 24. Feature support and limitations in RHEL 9 virtualization
This document provides information about feature support and restrictions in Red Hat Enterprise Linux 9 (RHEL 9) virtualization.
24.1. How RHEL virtualization support works
A set of support limitations applies to virtualization in Red Hat Enterprise Linux 9 (RHEL 9). This means that when you use certain features or exceed certain amounts of allocated resources in your virtual machines on RHEL 9, Red Hat does not support these guests unless you have a specific subscription plan.
Features listed in Recommended features in RHEL 9 virtualization have been tested and certified by Red Hat to work with the KVM hypervisor on a RHEL 9 system. Therefore, they are fully supported and recommended for use in virtualization in RHEL 9.
Features listed in Unsupported features in RHEL 9 virtualization may work, but are not supported and not intended for use in RHEL 9. Therefore, Red Hat strongly recommends not using these features in RHEL 9 with KVM.
Resource allocation limits in RHEL 9 virtualization lists the maximum amount of specific resources supported on a KVM guest in RHEL 9. Guests that exceed these limits are not supported by Red Hat.
In addition, unless stated otherwise, all features and solutions described in the documentation for RHEL 9 virtualization are supported. However, some of these have not been completely tested and therefore might not be fully optimized.
Many of these limitations do not apply to other virtualization solutions provided by Red Hat, such as OpenShift Virtualization or Red Hat OpenStack Platform (RHOSP).
24.2. Recommended features in RHEL 9 virtualization
The following features are recommended for use with the KVM hypervisor included with Red Hat Enterprise Linux 9 (RHEL 9):
Host system architectures
RHEL 9 with KVM is only supported on the following host architectures:
- AMD64 and Intel 64
- IBM Z - IBM z13 systems and later
- ARM 64
Any other hardware architectures are not supported for using RHEL 9 as a KVM virtualization host, and Red Hat highly discourages doing so.
Guest operating systems
Red Hat provides support for KVM virtual machines that use specific guest operating systems (OSs). For a detailed list of supported guest OSs, see Certified Guest Operating Systems in the Red Hat Knowledgebase.
Note, however, that by default, your guest OS does not use the same subscription as your host. Therefore, you must activate a separate license or subscription for the guest OS to work properly.
In addition, the pass-through devices that you attach to the VM must be supported by both the host OS and the guest OS.
Similarly, for optimal function of your deployment, Red Hat recommends that the CPU model and features that you define in the XML configuration of a VM are supported by both the host OS and the guest OS.
To view the certified CPUs and other hardware for various versions of RHEL, see the Red Hat Ecosystem Catalog.
Machine types
To ensure that your VM is compatible with your host architecture and that the guest OS runs optimally, the VM must use an appropriate machine type.
In RHEL 9, the pc-i440fx-rhel7.5.0 and earlier machine types, which were the default in earlier major versions of RHEL, are no longer supported. As a consequence, attempting to start a VM with such a machine type on a RHEL 9 host fails with an unsupported configuration error. If you encounter this problem after upgrading your host to RHEL 9, see the Red Hat Knowledgebase solution Invalid virtual machines that used to work with RHEL 9 and newer hypervisors.
When creating a VM by using the command line, the virt-install utility provides multiple methods of setting the machine type.
- When you use the --os-variant option, virt-install automatically selects the machine type recommended for your host CPU and supported by the guest OS.
- If you do not use --os-variant or require a different machine type, use the --machine option to specify the machine type explicitly, as in the sketch after this list.
- If you specify a --machine value that is unsupported or not compatible with your host, virt-install fails and displays an error message.
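For reference, the following is a minimal sketch of a virt-install invocation that sets the machine type explicitly. The VM name, disk path, resource values, and os-variant are placeholder assumptions, not values taken from this chapter:
# virt-install \
  --name rhel9-example \
  --memory 4096 \
  --vcpus 2 \
  --disk /var/lib/libvirt/images/rhel9-example.qcow2,size=20 \
  --os-variant rhel9.0 \
  --machine q35 \
  --import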
The recommended machine types for KVM virtual machines on supported architectures, and the corresponding values for the --machine option, are as follows. Y stands for the latest minor version of RHEL 9.
- On Intel 64 and AMD64 (x86_64): pc-q35-rhel9.Y.0 (--machine=q35)
- On IBM Z (s390x): s390-ccw-virtio-rhel9.Y.0 (--machine=s390-ccw-virtio)
- On ARM 64: virt-rhel9.Y.0 (--machine=virt)
To obtain the machine type of an existing VM:
# virsh dumpxml VM-name | grep machine=
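The output resembles the following; the exact machine type string depends on your VM and is shown here only as an assumed example:
<type arch='x86_64' machine='pc-q35-rhel9.4.0'>hvm</type>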
To view the full list of machine types supported on your host:
# /usr/libexec/qemu-kvm -M help
24.3. Unsupported features in RHEL 9 virtualization
The following features are not supported by the KVM hypervisor included with Red Hat Enterprise Linux 9 (RHEL 9):
Many of these limitations may not apply to other virtualization solutions provided by Red Hat, such as OpenShift Virtualization or Red Hat OpenStack Platform (RHOSP).
Features supported by other virtualization solutions are described as such in the following paragraphs.
Host system architectures
RHEL 9 with KVM is not supported on any host architectures that are not listed in Recommended features in RHEL 9 virtualization.
Guest operating systems
KVM virtual machines (VMs) that use the following guest operating systems (OSs) are not supported on a RHEL 9 host:
- Windows 8.1 and earlier
- Windows Server 2012 R2 and earlier
- macOS
- Solaris for x86 systems
- Any operating system released before 2009
For a list of guest OSs supported on RHEL hosts and other virtualization solutions, see Certified Guest Operating Systems in Red Hat OpenStack Platform, Red Hat Virtualization, OpenShift Virtualization and Red Hat Enterprise Linux with KVM.
Creating VMs in containers
Red Hat does not support creating KVM virtual machines in any type of container that includes the elements of the RHEL 9 hypervisor (such as the QEMU emulator or the libvirt package).
To create VMs in containers, Red Hat recommends using the OpenShift Virtualization offering.
Specific virsh commands and options
Not every parameter that you can use with the virsh utility has been tested and certified as production-ready by Red Hat. Therefore, any virsh commands and options that are not explicitly recommended by Red Hat documentation may not work correctly, and Red Hat recommends not using them in your production environment.
Notably, unsupported virsh commands include the following:
- virsh iface-* commands, such as virsh iface-start and virsh iface-destroy
- virsh blkdeviotune
- virsh snapshot-* commands, such as virsh snapshot-create and virsh snapshot-revert
The QEMU command line
QEMU is an essential component of the virtualization architecture in RHEL 9, but it is difficult to manage manually, and improper QEMU configurations might cause security vulnerabilities. Therefore, using qemu-* command-line utilities, such as qemu-kvm, is not supported by Red Hat. Instead, use libvirt utilities, such as virt-install, virt-xml, and supported virsh commands, as these orchestrate QEMU according to the best practices. However, the qemu-img utility is supported for the management of virtual disk images.
vCPU hot unplug
Removing a virtual CPU (vCPU) from a running VM, also referred to as a vCPU hot unplug, is not supported in RHEL 9.
Memory hot unplug
Removing a memory device attached to a running VM, also referred to as a memory hot unplug, is unsupported in RHEL 9.
QEMU-side I/O throttling
Using the virsh blkdeviotune utility to configure maximum input and output levels for operations on a virtual disk, also known as QEMU-side I/O throttling, is not supported in RHEL 9.
To set up I/O throttling in RHEL 9, use virsh blkiotune. This is also known as libvirt-side I/O throttling. For instructions, see Disk I/O throttling in virtual machines.
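As an illustration only, the following hedged example uses virsh blkiotune to assign a relative I/O weight to a host device that backs a VM disk; the VM name, device path, and weight value are placeholder assumptions:
# virsh blkiotune rhel9-example --device-weights /dev/sda,500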
Other solutions:
- QEMU-side I/O throttling is also supported in RHOSP. For more information, see Red Hat Knowledgebase solutions Setting Resource Limitation on Disk and the Use Quality-of-Service Specifications section in the RHOSP Storage Guide.
- In addition, OpenShift Virtualization supports QEMU-side I/O throttling as well.
Storage live migration
Migrating a disk image of a running VM between hosts is not supported in RHEL 9.
Other solutions:
- Storage live migration is supported in RHOSP, but with some limitations. For details, see Migrate a Volume.
Internal snapshots
Creating and using internal snapshots for VMs is deprecated in RHEL 9, and its use in production environments is highly discouraged. Instead, use external snapshots. For details, see Support limitations for virtual machine snapshots.
Other solutions:
- RHOSP supports live snapshots. For details, see Importing virtual machines into the overcloud.
- Live snapshots are also supported on OpenShift Virtualization.
vHost Data Path Acceleration
On RHEL 9 hosts, it is possible to configure vHost Data Path Acceleration (vDPA) for virtio devices, but Red Hat currently does not support this feature, and strongly discourages its use in production environments.
vhost-user
RHEL 9 does not support the implementation of a user-space vHost interface.
Other solutions:
- vhost-user is supported in RHOSP, but only for virtio-net interfaces. For more information, see the Red Hat Knowledgebase solution virtio-net implementation and vhost user ports.
- OpenShift Virtualization supports vhost-user as well.
S3 and S4 system power states
Suspending a VM to the Suspend to RAM (S3) or Suspend to disk (S4) system power states is not supported. Note that these features are disabled by default, and enabling them makes your VM unsupportable by Red Hat.
Note that the S3 and S4 states are also currently not supported in any other virtualization solution provided by Red Hat.
S3-PR on a multipathed vDisk
SCSI3 persistent reservation (S3-PR) on a multipathed vDisk is not supported in RHEL 9. As a consequence, Windows Cluster is not supported in RHEL 9.
virtio-crypto
Using the virtio-crypto device in RHEL 9 is not supported and Red Hat strongly discourages its use.
Note that virtio-crypto devices are also not supported in any other virtualization solution provided by Red Hat.
virtio-multitouch-device, virtio-multitouch-pci
Using the virtio-multitouch-device and virtio-multitouch-pci devices in RHEL 9 is not supported and Red Hat strongly discourages their use.
Incremental live backup
Configuring a VM backup that only saves VM changes since the last backup, also known as incremental live backup, is not supported in RHEL 9, and Red Hat highly discourages its use.
net_failover
Using the net_failover driver to set up an automated network device failover mechanism is not supported in RHEL 9.
Note that net_failover is also currently not supported in any other virtualization solution provided by Red Hat.
TCG
QEMU and libvirt include a dynamic translation mode using the QEMU Tiny Code Generator (TCG). This mode does not require hardware virtualization support. However, TCG is not supported by Red Hat.
TCG-based guests can be recognized by examining their XML configuration, for example by using the virsh dumpxml command.
The configuration file of a TCG guest contains the following line:
<domain type='qemu'>
The configuration file of a KVM guest contains the following line:
<domain type='kvm'>
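For example, you can check the domain type of an existing guest with a one-liner such as the following, where VM-name is a placeholder:
# virsh dumpxml VM-name | grep '<domain type'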
SR-IOV InfiniBand networking devices
Attaching InfiniBand networking devices to VMs using Single-root I/O virtualization (SR-IOV) is not supported.
SGIO
Attaching SCSI devices to VMs by using SCSI generic I/O (SGIO) is not supported in RHEL 9. To detect whether your VM has an attached SGIO device, check the VM configuration for the following lines:
<disk type="block" device="lun">
<hostdev mode='subsystem' type='scsi'>
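As a rough, hedged check, you can search the configuration for both patterns at once; VM-name is a placeholder, and because the quoting style in the XML can vary, review any matches manually:
# virsh dumpxml VM-name | grep -E 'device=.lun.|type=.scsi.'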
24.4. Resource allocation limits in RHEL 9 virtualization
The following limits apply to virtualized resources that can be allocated to a single KVM virtual machine (VM) on a Red Hat Enterprise Linux 9 (RHEL 9) host.
Many of these limitations do not apply to other virtualization solutions provided by Red Hat, such as OpenShift Virtualization or Red Hat OpenStack Platform (RHOSP).
Maximum vCPUs per VM
For the maximum number of vCPUs and the maximum amount of memory that are supported on a single VM running on a RHEL 9 host, see Virtualization limits for Red Hat Enterprise Linux with KVM.
PCI devices per VM
RHEL 9 supports 32 PCI device slots per VM bus, and 8 PCI functions per device slot. This gives a theoretical maximum of 256 PCI functions per bus when multi-function capabilities are enabled in the VM, and no PCI bridges are used.
Each PCI bridge adds a new bus, potentially enabling another 256 device addresses. However, some buses do not make all 256 device addresses available for the user; for example, the root bus has several built-in devices occupying slots.
Virtualized IDE devices
KVM is limited to a maximum of 4 virtualized IDE devices per VM.
24.5. Supported disk image formats
To run a virtual machine (VM) on RHEL, you must use a disk image with a supported format. You can also convert certain unsupported disk images to a supported format.
Supported disk image formats for VMs
You can use disk images that use the following formats to run VMs in RHEL:
- qcow2 - Provides certain additional features, such as compression.
- raw - Might provide better performance.
- luks - Disk images encrypted by using the Linux Unified Key Setup (LUKS) specification.
Supported disk image formats for conversion
- If required, you can convert your disk images between the raw and qcow2 formats by using the qemu-img convert command.
- If you need to convert a vmdk disk image to the raw or qcow2 format, convert the VM that uses the disk to KVM by using the virt-v2v utility.
- To convert other disk image formats to raw or qcow2, you can use the qemu-img convert command. For a list of formats that work with this command, see the QEMU documentation.

Note that in most cases, converting the disk image format of a non-KVM virtual machine to qcow2 or raw is not sufficient for the VM to run correctly on RHEL KVM. In addition to converting the disk image, the corresponding drivers must be installed and configured in the guest operating system of the VM. For supported hypervisor conversion, use the virt-v2v utility.
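For example, a minimal raw-to-qcow2 conversion with qemu-img convert, where the file names are placeholder assumptions:
# qemu-img convert -f raw -O qcow2 disk-image.raw disk-image.qcow2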
24.6. How virtualization on IBM Z differs from AMD64 and Intel 64
KVM virtualization in RHEL 9 on IBM Z systems differs from KVM on AMD64 and Intel 64 systems in the following ways:
- PCI and USB devices
Virtual PCI and USB devices are not supported on IBM Z. This also means that virtio-*-pci devices are unsupported, and virtio-*-ccw devices should be used instead. For example, use virtio-net-ccw instead of virtio-net-pci.
Note that direct attachment of PCI devices, also known as PCI passthrough, is supported.
- Supported guest operating system
- Red Hat only supports VMs hosted on IBM Z if they use RHEL 7, 8, or 9 as their guest operating system.
- Device boot order
IBM Z does not support the <boot dev='device'> XML configuration element. To define device boot order, use the <boot order='number'> element in the <devices> section of the XML.
Note: Using <boot order='number'> for boot order management is recommended on all host architectures.
In addition, you can select the required boot entry by using the architecture-specific loadparm attribute in the <boot> element. For example, the following determines that the disk should be used first in the boot sequence and, if a Linux distribution is available on that disk, it selects the second boot entry:
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/path/to/qcow2'/>
  <target dev='vda' bus='virtio'/>
  <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0000'/>
  <boot order='1' loadparm='2'/>
</disk>
- Memory hot plug
- Adding memory to a running VM is not possible on IBM Z. Note that removing memory from a running VM (memory hot unplug) is also not possible on IBM Z, as well as on AMD64 and Intel 64.
- NUMA topology
- Non-Uniform Memory Access (NUMA) topology for CPUs is not supported by libvirt on IBM Z. Therefore, tuning vCPU performance by using NUMA is not possible on these systems.
- GPU devices
- Assigning GPU devices is not supported on IBM Z systems.
- vfio-ap
- VMs on an IBM Z host can use the vfio-ap cryptographic device passthrough, which is not supported on any other architecture.
- vfio-ccw
- VMs on an IBM Z host can use the vfio-ccw disk device passthrough, which is not supported on any other architecture.
- SMBIOS
- SMBIOS configuration is not available on IBM Z.
- Watchdog devices
If using watchdog devices in your VM on an IBM Z host, use the diag288 model. For example:
<devices>
  <watchdog model='diag288' action='poweroff'/>
</devices>
- kvm-clock
- The kvm-clock service is specific to AMD64 and Intel 64 systems, and does not have to be configured for VM time management on IBM Z.
- v2v and p2v
- The virt-v2v and virt-p2v utilities are supported only on the AMD64 and Intel 64 architecture, and are not provided on IBM Z.
- Migrations
To successfully migrate to a later host model (for example, from IBM z14 to z15), or to update the hypervisor, use the host-model CPU mode. The host-passthrough and maximum CPU modes are not recommended, as they are generally not migration-safe. A sketch of the host-model configuration follows this list.
If you want to specify an explicit CPU model in the custom CPU mode, follow these guidelines:
- Do not use CPU models that end with -base.
- Do not use the qemu, max, or host CPU models.
To successfully migrate to an older host model (such as from z15 to z14), or to an earlier version of QEMU, KVM, or the RHEL kernel, use the CPU type of the oldest available host model without -base at the end.
- If you have both the source host and the destination host running, you can instead use the virsh hypervisor-cpu-baseline command on the destination host to obtain a suitable CPU model. For details, see Verifying host CPU compatibility for virtual machine migration.
- For more information about supported machine types in RHEL 9, see Recommended features in RHEL 9 virtualization.
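To illustrate the host-model recommendation above, the following is a minimal sketch of the corresponding element in the domain XML; verify it against your own configuration:
<cpu mode='host-model'/>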
- PXE installation and booting
When using PXE to run a VM on IBM Z, a specific configuration is required for the pxelinux.cfg/default file. For example:
# pxelinux
default linux
label linux
kernel kernel.img
initrd initrd.img
append ip=dhcp inst.repo=example.com/redhat/BaseOS/s390x/os/
- Secure Execution
- You can boot a VM with a prepared secure guest image by defining <launchSecurity type="s390-pv"/> in the XML configuration of the VM. This encrypts the VM’s memory to protect it from unwanted access by the hypervisor.
Note that the following features are not supported when running a VM in secure execution mode:
- Device passthrough by using vfio
- Obtaining memory information by using virsh domstats and virsh memstat
- The memballoon and virtio-rng virtual devices
- Memory backing by using huge pages
- Live and non-live VM migrations
- Saving and restoring VMs
- VM snapshots, including memory snapshots (using the --memspec option)
- Full memory dumps. Instead, specify the --memory-only option for the virsh dump command.
- 248 or more vCPUs. The vCPU limit for secure guests is 247.
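To illustrate the launchSecurity definition mentioned above, the following is a minimal sketch of its position as a direct child of the domain element; the rest of the configuration is elided:
<domain type='kvm'>
  ...
  <launchSecurity type='s390-pv'/>
  ...
</domain>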
24.7. How virtualization on ARM 64 differs from AMD64 and Intel 64
KVM virtualization in RHEL 9 on ARM 64 systems (also known as AArch64) is different from KVM on AMD64 and Intel 64 systems in several aspects. These include, but are not limited to, the following:
- Guest operating systems
- The only guest operating system currently supported on ARM 64 virtual machines (VMs) is RHEL 9.
- vCPU hot plug and hot unplug
- Attaching a virtual CPU (vCPU) to a running VM, also referred to as a vCPU hot plug, is currently not supported on ARM 64 hosts. In addition, as on AMD64 and Intel 64 hosts, removing a vCPU from a running VM (vCPU hot unplug) is not supported on ARM 64.
- SecureBoot
- The SecureBoot feature is not available on ARM 64 systems.
- Migration
- Migrating VMs between ARM 64 hosts is currently not supported.
- Saving and restoring VMs
- Saving and restoring a VM is currently unsupported on an ARM 64 host.
- Memory page sizes
ARM 64 currently supports running VMs with 64 KB or 4 KB memory page sizes; however, both the host and the guest must use the same memory page size. Configurations where the host and the guest have different memory page sizes are not supported.
By default, RHEL 9 uses a 4 KB memory page size. If you want to run a VM with a 64 KB memory page size, your host must use a kernel with a 64 KB memory page size. When creating the VM, you must install it with the kernel-64k package, for example by including the following lines in the kickstart file:
%packages
-kernel
kernel-64k
%end
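To verify which memory page size a host or guest kernel currently uses, you can run the following standard check; the output is in bytes, for example 4096 for 4 KB and 65536 for 64 KB:
# getconf PAGESIZE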
- Huge pages
ARM 64 hosts with 64 KB memory page size support huge memory pages with the following sizes:
- 2 MB
- 512 MB
- 16 GB
When you use transparent huge pages (THP) on an ARM 64 host with 64 KB memory page size, it supports only 512 MB huge pages.
ARM 64 hosts with 4 KB memory page size support huge memory pages with the following sizes:
- 64 KB
- 2 MB
- 32 MB
- 1024 MB
When you use transparent huge pages (THP) on an ARM 64 host with 4 KB memory page size, it supports only 2 MB huge pages.
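To check the default huge page size on a given host, you can inspect /proc/meminfo, for example:
# grep Hugepagesize /proc/meminfo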
- SVE
The ARM 64 architecture provides the Scalable Vector Extension (SVE) feature. If the host supports the feature, using SVE in your VMs improves the speed of vector mathematics computation and string operations in these VMs.
The baseline level of SVE is enabled by default on host CPUs that support it. However, Red Hat recommends configuring each vector length explicitly. This ensures that the VM can be launched only on compatible hosts. To do so:
Verify that your CPU has the SVE feature:
# grep -m 1 Features /proc/cpuinfo | grep -w sve
Features: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm fcma dcpop sve
If the output of this command includes sve, or if its exit code is 0, your CPU supports SVE.
Open the XML configuration of the VM you want to modify:
# virsh edit vm-name
Edit the <cpu> element similarly to the following:
<cpu mode='host-passthrough' check='none'>
  <feature policy='require' name='sve'/>
  <feature policy='require' name='sve128'/>
  <feature policy='require' name='sve256'/>
  <feature policy='disable' name='sve384'/>
  <feature policy='require' name='sve512'/>
</cpu>
This example explicitly enables SVE vector lengths 128, 256, and 512, and explicitly disables vector length 384.
- CPU models
- VMs on ARM 64 currently support only the host-passthrough CPU model.
- PXE
Booting in the Preboot Execution Environment (PXE) is functional but not supported; Red Hat strongly discourages using it in production environments.
If you require PXE booting, it is only possible with the virtio-net-pci network interface controller (NIC).
- EDK2
ARM 64 guests use the UEFI firmware included in the edk2-aarch64 package, which provides a similar interface to OVMF UEFI on AMD64 and Intel 64, and implements a similar set of features.
Specifically, edk2-aarch64 provides a built-in UEFI shell, but does not support the following functionality:
provides a built-in UEFI shell, but does not support the following functionality:- SecureBoot
- Management Mode
- kvm-clock
- The kvm-clock service does not have to be configured for time management in VMs on ARM 64.
- Peripheral devices
ARM 64 systems support a partly different set of peripheral devices than AMD64 and Intel 64 systems.
- Only PCIe topologies are supported.
- ARM 64 systems support virtio devices by using the virtio-*-pci drivers. In addition, the virtio-iommu and virtio-input devices are unsupported.
- The virtio-gpu driver is only supported for graphical installs.
- ARM 64 systems support usb-mouse and usb-tablet devices for graphical installs only. Other USB devices, USB passthrough, and USB redirect are not supported.
- Device assignment that uses Virtual Function I/O (VFIO) is supported only for NICs (physical and virtual functions).
- Emulated devices
The following devices are not supported on ARM 64:
- Emulated sound devices, such as ICH9, ICH6, or AC97.
- Emulated graphics cards, such as VGA cards.
- Emulated network devices, such as rtl8139.
- GPU devices
- Assigning GPU devices is currently not supported on ARM 64 systems.
- Nested virtualization
- Creating nested VMs is currently not possible on ARM 64 hosts.
- v2v and p2v
- The virt-v2v and virt-p2v utilities are only supported on the AMD64 and Intel 64 architecture and are, therefore, not provided on ARM 64.
24.8. An overview of virtualization features support in RHEL 9
The following tables provide comparative information about the support state of selected virtualization features in RHEL 9 across the available system architectures.
| Intel 64 and AMD64 | IBM Z | ARM 64 |
| --- | --- | --- |
| Supported | Supported | Supported |
| | Intel 64 and AMD64 | IBM Z | ARM 64 |
| --- | --- | --- | --- |
| CPU hot plug | Supported | Supported | UNSUPPORTED |
| CPU hot unplug | UNSUPPORTED | UNSUPPORTED | UNSUPPORTED |
| Memory hot plug | Supported | UNSUPPORTED | Supported |
| Memory hot unplug | UNSUPPORTED | UNSUPPORTED | UNSUPPORTED |
| Peripheral device hot plug | Supported | Supported [a] | Supported |
| Peripheral device hot unplug | Supported | Supported [b] | Supported |
| | Intel 64 and AMD64 | IBM Z | ARM 64 |
| --- | --- | --- | --- |
| NUMA tuning | Supported | UNSUPPORTED | Supported |
| SR-IOV devices | Supported | UNSUPPORTED | Supported |
| virt-v2v and p2v | Supported | UNSUPPORTED | UNAVAILABLE |
Note that some of the unsupported features are supported on other Red Hat products, such as Red Hat Virtualization and Red Hat OpenStack Platform. For more information, see Unsupported features in RHEL 9 virtualization.