
Chapter 18. Securing virtual machines


As an administrator of a RHEL 10 system with virtual machines (VMs), you can take a variety of measures to lower the risk of your guest and host operating systems being infected by malicious software.

18.1. How security works in virtual machines

When using virtual machines (VMs), securing your environment requires additional considerations.

A single host machine can house multiple guest operating systems. These systems are connected with the host through the hypervisor, and usually also through a virtual network. As a consequence, each VM can be used as a vector for attacking the host with malicious software, and the host can be used as a vector for attacking any of the VMs.

Figure 18.1. A potential malware attack vector on a virtualization host


Because the hypervisor uses the host kernel to manage VMs, services running on the VM’s operating system are frequently used for injecting malicious code into the host system. However, you can protect your system against such security threats by using a number of security features on your host and your guest systems.

These features, such as SELinux or QEMU sandboxing, provide various measures that make it more difficult for malicious code to attack the hypervisor and transfer between your host and your VMs.

Figure 18.2. Prevented malware attacks on a virtualization host


Many of the features that RHEL 10 provides for VM security are always active and do not have to be enabled or configured. For details, see Default features for virtual machine security.

In addition, you can adhere to a variety of best practices to minimize the vulnerability of your VMs and your hypervisor. For more information, see Best practices for securing virtual machines.

18.2. Best practices for securing virtual machines

To significantly decrease the risk of your virtual machines (VMs) being infected with malicious code and used as attack vectors against your host system, you can increase the security of your systems by using a variety of methods.

On the guest side:

  • Secure the virtual machine as if it was a physical machine. The specific methods available to enhance security depend on the guest OS.

    If your VM is running RHEL 10, see Securing RHEL 10 for detailed instructions on improving the security of your guest system.

On the host side:

  • When managing VMs remotely, use secure connection utilities, such as SSH, and encrypted network protocols, such as TLS, for connecting to the VMs.
  • Ensure SELinux is in Enforcing mode:

    # getenforce
    Enforcing

    If SELinux is disabled or in Permissive mode, see the Using SELinux document for instructions on activating Enforcing mode.

    Note

    SELinux Enforcing mode also enables sVirt, a RHEL 10 feature that provides a set of specialized SELinux booleans for virtualization, which can be manually adjusted for fine-grained VM security management.

  • Use VMs with Secure Boot:

    Secure Boot is a feature that ensures that your VM is running a cryptographically signed OS. This prevents VMs whose OS has been altered by a malware attack from booting.

    Secure Boot can only be applied when installing a Linux VM that uses OVMF firmware on an AMD64 or Intel 64 host. For instructions, see Creating a Secure Boot virtual machine.

  • Do not use qemu-* commands, such as qemu-kvm.

    QEMU is an essential component of the virtualization architecture in RHEL 10, but it is difficult to manage manually, and improper QEMU configurations may cause security vulnerabilities. Therefore, using most qemu-* commands is not supported by Red Hat. Instead, use libvirt utilities, such as virsh, virt-install, and virt-xml, as these orchestrate QEMU according to the best practices.

    Note, however, that the qemu-img utility is supported for management of virtual disk images.
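To keep the Enforcing mode described above in effect across reboots, SELinux is also set persistently in its configuration file. A minimal excerpt with the standard keys looks like this:

```ini
# /etc/selinux/config (excerpt)
# SELINUX= can be enforcing, permissive, or disabled;
# enforcing is required for sVirt protection of VMs.
SELINUX=enforcing
SELINUXTYPE=targeted
```

A change to this file takes effect on the next reboot; use setenforce 1 to switch the running system to Enforcing mode immediately.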

18.3. Default features for virtual machine security

The libvirt software suite provides a number of security features that are automatically enabled when using virtualization in RHEL 10.

You can use these in addition to manual means of improving the security of your virtual machines (VMs), which are listed in Best practices for securing virtual machines.

System and session connections

To access all the available utilities for virtual machine management on a RHEL 10 host, you need to use the system connection of libvirt (qemu:///system). To do so, you must have root privileges on the system or be a member of the libvirt user group.

Non-root users that are not in the libvirt group can only access a session connection of libvirt (qemu:///session), which has to respect the access rights of the local user when accessing resources.

For details, see User-space connection types for virtualization.

Virtual machine separation
Individual VMs run as isolated processes on the host, and rely on security enforced by the host kernel. Therefore, a VM cannot read or access the memory or storage of other VMs on the same host.
QEMU sandboxing
A feature that prevents QEMU code from executing system calls that can compromise the security of the host.
Kernel Address Space Randomization (KASLR)
Enables randomizing the physical and virtual addresses at which the kernel image is decompressed. Thus, KASLR prevents guest security exploits based on the location of kernel objects.

18.4. Limiting what actions are available to virtual machine users

In some cases, actions that users of virtual machines (VMs) hosted on RHEL 10 can perform by default might pose a security risk. To prevent this, you can limit the actions available to VM users by configuring the libvirt daemons to use the polkit policy toolkit on the host machine.

Procedure

  1. Optional: Ensure your system’s polkit control policies related to libvirt are set up according to your preferences.

    1. Find all libvirt-related files in the /usr/share/polkit-1/actions/ and /usr/share/polkit-1/rules.d/ directories.

      # ls /usr/share/polkit-1/actions | grep libvirt
      # ls /usr/share/polkit-1/rules.d | grep libvirt
    2. Open the files and review the rule settings.

      For information about reading the syntax of polkit control policies, use man polkit.

    3. Modify the libvirt control policies. To do so:

      1. Create a new .rules file in the /etc/polkit-1/rules.d/ directory.
      2. Add your custom policies to this file, and save it.

        For further information and examples of libvirt control policies, see the libvirt upstream documentation.

  2. Configure your VMs to use access policies determined by polkit.

    To do so, find all configuration files for virtualization drivers in the /etc/libvirt/ directory, and uncomment the access_drivers = [ "polkit" ] line in them.

    # find /etc/libvirt/ -name virt*d.conf -exec sed -i 's/#access_drivers = \[ "polkit" \]/access_drivers = \[ "polkit" \]/g' {} +
  3. For each file that you modified in the previous step, restart the corresponding service.

    For example, if you have modified /etc/libvirt/virtqemud.conf, restart the virtqemud service.

    # systemctl try-restart virtqemud
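The custom .rules file from step 1.3 can look like the following sketch. The file name and the vmadmins group are hypothetical examples; org.libvirt.unix.manage is the libvirt polkit action ID for read-write access to the system connection:

```javascript
// /etc/polkit-1/rules.d/80-libvirt-manage.rules (hypothetical file name)
// Grant read-write libvirt access only to members of the hypothetical
// "vmadmins" group, and deny it to everyone else.
polkit.addRule(function(action, subject) {
    if (action.id == "org.libvirt.unix.manage") {
        if (subject.isInGroup("vmadmins")) {
            return polkit.Result.YES;
        }
        return polkit.Result.NO;
    }
});
```

polkitd reads the rules.d files in lexical order, so the numeric prefix controls when this rule is evaluated relative to the packaged rules.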

Verification

  • As a user whose VM actions you intended to limit, perform one of the restricted actions.

    For example, if unprivileged users are restricted from viewing VMs created in the system session:

    $ virsh -c qemu:///system list --all
    Id   Name           State
    -------------------------------

    If this command does not list any VMs even though one or more VMs exist on your system, polkit successfully restricts the action for unprivileged users.


18.5. Configuring VNC passwords

To manage access to the graphical output of a virtual machine (VM), you can configure a password for the VNC console of the VM.

With a VNC password configured on a VM, users of the VM must enter the password when attempting to view or interact with the VNC graphical console of the VM, for example by using the virt-viewer utility.

Important

VNC passwords are not a sufficient measure for ensuring the security of a VM environment. For details, see QEMU documentation on VNC security.

In addition, the VNC password is saved in plain text in the configuration of the VM, so for the password to be effective, the user must not be able to display the VM configuration.

Prerequisites

  • The VM that you want to protect with a VNC password has VNC graphics configured.

    To ensure that this is the case, use the virsh dumpxml command as follows:

    # virsh dumpxml <vm-name> | grep graphics

     <graphics type='vnc' port='-1' autoport='yes' listen='127.0.0.1'>
     </graphics>

Procedure

  1. Open the configuration of the VM that you want to assign a VNC password to.

    # virsh edit <vm-name>
  2. On the <graphics> line of the configuration, add the passwd attribute and the password string. The password must be 8 characters or fewer.

     <graphics type='vnc' port='-1' autoport='yes' listen='127.0.0.1' passwd='<password>'>
    • Optional: In addition, define a date and time when the password will expire.

       <graphics type='vnc' port='-1' autoport='yes' listen='127.0.0.1' passwd='<password>' passwdValidTo='2025-02-01T15:30:00'>

      In this example, the password will expire on February 1st 2025, at 15:30 UTC.

  3. Save the configuration.

Verification

  1. Start the modified VM.

    # virsh start <vm-name>
  2. Open a graphical console of the VM, for example by using the virt-viewer utility:

    # virt-viewer <vm-name>

    If the VNC password has been configured properly, a dialog window appears that requests you to enter the password.

18.6. SELinux booleans for virtualization

RHEL 10 provides the sVirt feature, which is a set of specialized SELinux booleans that are automatically enabled on a host with SELinux in Enforcing mode.

For fine-grained configuration of virtual machine security on a RHEL 10 system, you can configure SELinux booleans on the host to ensure the hypervisor acts in a specific way.

To list all virtualization-related booleans and their statuses, use the getsebool -a | grep virt command:

$ getsebool -a | grep virt
[...]
virt_sandbox_use_netlink --> off
virt_sandbox_use_sys_admin --> off
virt_transition_userdomain --> off
virt_use_comm --> off
virt_use_execmem --> off
virt_use_fusefs --> off
[...]

To enable a specific boolean, use the setsebool -P boolean_name on command as root. To disable a boolean, use setsebool -P boolean_name off.
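For example, to reduce such a listing to only the booleans that are currently enabled, you can filter the `name --> state` lines. The sketch below runs on canned sample output, but the same pipe applies to a real getsebool -a | grep virt listing:

```shell
# Filter "name --> state" lines down to the names whose state is "on".
# Canned sample output stands in for `getsebool -a | grep virt`.
sample='virt_use_nfs --> on
virt_use_usb --> off
virt_use_samba --> on'
printf '%s\n' "$sample" | awk '$3 == "on" {print $1}'
# -> virt_use_nfs
#    virt_use_samba
```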

The following table lists virtualization-related booleans available in RHEL 10 and what they do when enabled:

Table 18.1. SELinux virtualization booleans
SELinux Boolean    Description

staff_use_svirt

Enables non-root users to create and transition VMs to sVirt.

unprivuser_use_svirt

Enables unprivileged users to create and transition VMs to sVirt.

virt_sandbox_use_audit

Enables sandbox containers to send audit messages.

virt_sandbox_use_netlink

Enables sandbox containers to use netlink system calls.

virt_sandbox_use_sys_admin

Enables sandbox containers to use sys_admin system calls, such as mount.

virt_transition_userdomain

Enables virtual processes to run as user domains.

virt_use_comm

Enables virt to use serial/parallel communication ports.

virt_use_execmem

Enables confined virtual guests to use executable memory and executable stack.

virt_use_fusefs

Enables virt to read FUSE mounted files.

virt_use_nfs

Enables virt to manage NFS mounted files.

virt_use_rawip

Enables virt to interact with rawip sockets.

virt_use_samba

Enables virt to manage CIFS mounted files.

virt_use_sanlock

Enables confined virtual guests to interact with sanlock.

virt_use_usb

Enables virt to use USB devices.

virt_use_xserver

Enables virtual machines to interact with the X Window System.

18.7. Creating a Secure Boot virtual machine

To improve the security of your virtualization host, you can create Linux virtual machines (VMs) that use the Secure Boot feature. Secure Boot ensures that the VM is running a cryptographically signed operating system (OS).

This can be useful if the guest OS of a VM has been altered by malware. In such a scenario, Secure Boot prevents the VM from booting, which stops the potential spread of the malware to your host machine.

Prerequisites

  • The VM uses the Q35 machine type.
  • Your host system uses the AMD64 or Intel 64 architecture.
  • The edk2-ovmf package is installed:

    # dnf install edk2-ovmf
  • An operating system (OS) installation source is available locally or on a network. This can be one of the following formats:

    • An ISO image of an installation medium
    • A disk image of an existing VM installation

      Warning

      Installing from a host CD-ROM or DVD-ROM device is not possible in RHEL 10. If you select a CD-ROM or DVD-ROM as the installation source when using any VM installation method available in RHEL 10, the installation will fail. For more information, see RHEL 7 or higher can’t install guest OS from CD/DVD-ROM (Red Hat Knowledgebase).

  • Optional: A Kickstart file can be provided for faster and easier configuration of the installation.
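For the optional Kickstart file, a minimal sketch could look like the following. All values are hypothetical placeholders; see the RHEL Kickstart reference for the full command set:

```text
# Minimal Kickstart sketch (hypothetical values; adjust before use)
lang en_US.UTF-8
keyboard us
timezone UTC
# Replace the plain-text password with an encrypted one for real use.
rootpw --plaintext changeme
bootloader --location=mbr
clearpart --all --initlabel
autopart
reboot
```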

Procedure

  1. Use the virt-install command to create a VM as detailed in Creating virtual machines by using the command line. For the --boot option, use the uefi,nvram_template=/usr/share/OVMF/OVMF_VARS.secboot.fd value. This uses the OVMF_VARS.secboot.fd and OVMF_CODE.secboot.fd files as templates for the VM’s non-volatile RAM (NVRAM) settings, which enables the Secure Boot feature.

    For example:

    # virt-install --name rhel10sb --memory 4096 --vcpus 4 --os-variant rhel10.0 --boot uefi,nvram_template=/usr/share/OVMF/OVMF_VARS.secboot.fd --disk boot_order=2,size=10 --disk boot_order=1,device=cdrom,bus=scsi,path=/images/RHEL-10.0-installation.iso
  2. Follow the OS installation procedure according to the instructions on the screen.

Verification

  1. After the guest OS is installed, access the VM’s command line by opening the terminal in the graphical guest console or connecting to the guest OS using SSH.
  2. To confirm that Secure Boot has been enabled on the VM, use the mokutil --sb-state command:

    # mokutil --sb-state
    SecureBoot enabled

18.8. Setting up IBM Secure Execution on IBM Z

When using IBM Z hardware to run a RHEL 10 host, you can improve the security of your virtual machines (VMs) by configuring the IBM Secure Execution feature for the VMs.

IBM Secure Execution, also known as Protected Virtualization, prevents the host system from accessing a VM’s state and memory contents. As a result, even if the host is compromised, it cannot be used as a vector for attacking the guest operating system. In addition, Secure Execution can be used to prevent untrusted hosts from obtaining sensitive information from the VM.

You can convert an existing VM on an IBM Z host into a secured VM by enabling IBM Secure Execution.

Important

For securing production environments, consult the IBM documentation on fully securing workloads with Secure Execution, which explains how to further secure your workloads.

18.8.1. Configuring a VM manually for IBM Secure Execution

You can configure IBM Secure Execution by manually logging in to the guest VM and performing configuration steps within the guest operating system. This method provides direct control over the configuration process and is suitable for production environments where you need to verify each step of the setup.

Important

For securing production environments, consult the IBM documentation on fully securing workloads with Secure Execution, which explains how to further secure your workloads.

Prerequisites

  • The system hardware is one of the following:

    • IBM z15 or later
    • IBM LinuxONE III or later
  • The Secure Execution feature is enabled for your system. To verify, use:

    # grep facilities /proc/cpuinfo | grep 158

    If this command displays any output, your CPU is compatible with Secure Execution.

  • The kernel includes support for Secure Execution. To confirm, use:

    # ls /sys/firmware | grep uv

    If the command generates any output, your kernel supports Secure Execution.

  • The host CPU model contains the unpack facility. To confirm, use:

    # virsh domcapabilities | grep unpack
    <feature policy='require' name='unpack'/>

    If the command generates the above output, your CPU host model is compatible with Secure Execution.

  • The CPU mode of the VM is set to host-model.

    # virsh dumpxml <vm_name> | grep "<cpu mode='host-model'/>"

    If the command generates any output, the VM’s CPU mode is set correctly.

  • The genprotimg package is installed on the host.

    # dnf install genprotimg
  • The guestfs-tools package is installed on the host in case you want to modify the VM image directly from the host.

    # dnf install guestfs-tools
  • You have obtained and verified the IBM Z host key document. For details, see Verifying the host key document in IBM documentation.

Procedure

  1. Add the prot_virt=1 kernel parameter to the boot configuration of the host.

    # grubby --update-kernel=ALL --args="prot_virt=1"
  2. Update the boot menu:

    # zipl

  3. Use virsh edit to modify the XML configuration of the VM you want to secure.
  4. Add <launchSecurity type="s390-pv"/> under the </devices> line. For example:

    [...]
        </memballoon>
      </devices>
      <launchSecurity type="s390-pv"/>
    </domain>
  5. If the <devices> section of the configuration includes a virtio-rng device (<rng model="virtio">), remove all lines of the <rng> </rng> block.
  6. Optional: If the VM that you want to secure is using 32 GiB of RAM or more, add the <async-teardown enabled='yes'/> line to the <features></features> section in its XML configuration on the host.

    This improves the performance of rebooting or stopping such Secure Execution guests.

  7. Log in to the VM you want to secure and create a parameter file. For example:

    # touch ~/secure-parameters
  8. In the /boot/loader/entries directory of the guest operating system, identify the boot loader entry with the latest version:

    # ls /boot/loader/entries -l
    [...]
    -rw-r--r--. 1 root root  281 Oct  9 15:51 3ab27a195c2849429927b00679db15c1-4.18.0-240.el8.s390x.conf
  9. Retrieve the kernel options line of the boot loader entry in the guest operating system:

    # cat /boot/loader/entries/3ab27a195c2849429927b00679db15c1-4.18.0-240.el8.s390x.conf | grep options
    options root=/dev/mapper/rhel-root
    rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap
  10. Add the content of the options line and swiotlb=262144 to the created parameters file in the guest operating system.

    # echo "root=/dev/mapper/rhel-root rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap swiotlb=262144" > ~/secure-parameters
  11. Generate a new IBM Secure Execution image in the guest operating system.

    For example, the following creates a /boot/secure-image secured image based on the /boot/vmlinuz-4.18.0-240.el8.s390x image, using the secure-parameters file, the /boot/initramfs-4.18.0-240.el8.s390x.img initial RAM disk file, and the HKD-8651-00020089A8.crt host key document.

    # genprotimg -i /boot/vmlinuz-4.18.0-240.el8.s390x -r /boot/initramfs-4.18.0-240.el8.s390x.img -p ~/secure-parameters -k HKD-8651-00020089A8.crt -o /boot/secure-image

    The genprotimg utility creates the secure image, which contains the kernel parameters, initial RAM disk, and boot image.

  12. Update the VM’s boot menu to boot from the secure image. In addition, remove the lines starting with initrd and options, as they are not needed.

    For example, in a RHEL 8.3 VM, the boot menu can be edited in the /boot/loader/entries/ directory:

    # cat /boot/loader/entries/3ab27a195c2849429927b00679db15c1-4.18.0-240.el8.s390x.conf
    title Red Hat Enterprise Linux 8.3
    version 4.18.0-240.el8.s390x
    linux /boot/secure-image
    [...]
  13. Create the bootable disk image in the guest operating system:

    # zipl -V
  14. Securely remove the original unprotected files in the guest operating system. For example:

    # shred /boot/vmlinuz-4.18.0-240.el8.s390x
    # shred /boot/initramfs-4.18.0-240.el8.s390x.img
    # shred ~/secure-parameters

    The original boot image, the initial RAM disk image, and the kernel parameter file are unprotected. If they are not removed, VMs with Secure Execution enabled can remain vulnerable to attacks or to attempts to obtain sensitive data.
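The transformation in steps 9 and 10 can also be done in one pipe: keep only the options line of the boot loader entry, drop its prefix, and append the swiotlb parameter. In this sketch, a temporary sample entry stands in for the real file identified in step 8:

```shell
# Build the Secure Execution parameter file from a boot loader entry:
# keep only the "options" line, strip the prefix, append swiotlb=262144.
# The temporary sample entry below stands in for the real file.
entry=$(mktemp)
printf 'options root=/dev/mapper/rhel-root rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap\n' > "$entry"
sed -n 's/^options //p' "$entry" | sed 's/$/ swiotlb=262144/' > secure-parameters
cat secure-parameters
# -> root=/dev/mapper/rhel-root rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap swiotlb=262144
```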

Verification

  • On the host, use the virsh dumpxml utility to confirm the XML configuration of the secured VM. The configuration must include the <launchSecurity type="s390-pv"/> element, and no <rng model="virtio"> lines.

    # virsh dumpxml vm-name
    [...]
      <cpu mode='host-model'/>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2' cache='none' io='native'/>
          <source file='/var/lib/libvirt/images/secure-guest.qcow2'/>
          <target dev='vda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='default'/>
          <model type='virtio'/>
        </interface>
        <console type='pty'/>
        <memballoon model='none'/>
      </devices>
      <launchSecurity type="s390-pv"/>
    </domain>

18.8.2. Configuring a VM from the host for IBM Secure Execution

You can configure IBM Secure Execution directly from the host by using the guestfs-tools package without needing to boot the VM. However, this method is suitable only for testing and development environments where you need to quickly configure multiple VMs or automate the setup process.

Important

For securing production environments, consult the IBM documentation on fully securing workloads with Secure Execution, which explains how to further secure your workloads.

Prerequisites

  • The system hardware is one of the following:

    • IBM z15 or later
    • IBM LinuxONE III or later
  • The Secure Execution feature is enabled for your system. To verify, use:

    # grep facilities /proc/cpuinfo | grep 158

    If this command displays any output, your CPU is compatible with Secure Execution.

  • The kernel includes support for Secure Execution. To confirm, use:

    # ls /sys/firmware | grep uv

    If the command generates any output, your kernel supports Secure Execution.

  • The host CPU model contains the unpack facility. To confirm, use:

    # virsh domcapabilities | grep unpack
    <feature policy='require' name='unpack'/>

    If the command generates the above output, your CPU host model is compatible with Secure Execution.

  • The CPU mode of the VM is set to host-model.

    # virsh dumpxml <vm_name> | grep "<cpu mode='host-model'/>"

    If the command generates any output, the VM’s CPU mode is set correctly.

  • The genprotimg package is installed on the host.

    # dnf install genprotimg
  • The guestfs-tools package is installed on the host in case you want to modify the VM image directly from the host.

    # dnf install guestfs-tools
  • You have obtained and verified the IBM Z host key document. For details, see Verifying the host key document in IBM documentation.

Procedure

  1. Add the prot_virt=1 kernel parameter to the boot configuration of the host.

    # grubby --update-kernel=ALL --args="prot_virt=1"
  2. Update the boot menu:

    # zipl

  3. Use virsh edit to modify the XML configuration of the VM you want to secure.
  4. Add <launchSecurity type="s390-pv"/> under the </devices> line. For example:

    [...]
        </memballoon>
      </devices>
      <launchSecurity type="s390-pv"/>
    </domain>
  5. If the <devices> section of the configuration includes a virtio-rng device (<rng model="virtio">), remove all lines of the <rng> </rng> block.
  6. Optional: If the VM that you want to secure is using 32 GiB of RAM or more, add the <async-teardown enabled='yes'/> line to the <features></features> section in its XML configuration on the host.

    This improves the performance of rebooting or stopping such Secure Execution guests.

  7. On the host, create a script that contains the host key document and that configures the existing VM to use Secure Execution. For example:

    #!/usr/bin/bash
    
    echo "$(cat /proc/cmdline) swiotlb=262144" > parmfile
    
    cat > ./HKD.crt << EOF
    -----BEGIN CERTIFICATE-----
    1234569901234569901234569901234569901234569901234569901234569900
    1234569901234569901234569901234569901234569901234569901234569900
    1234569901234569901234569901234569901234569901234569901234569900
    1234569901234569901234569901234569901234569901234569901234569900
    1234569901234569901234569901234569901234569901234569901234569900
    1234569901234569901234569901234569901234569901234569901234569900
    1234569901234569901234569901234569901234569901234569901234569900
    1234569901234569901234569901234569901234569901234569901234569900
    1234569901234569901234569901234569901234569901234569901234569900
    1234569901234569901234569901234569901234569901234569901234569900
    1234569901234569901234569901234569901234569901234569901234569900
    1234569901234569901234569901234569901234569901234569901234569900
    1234569901234569901234569901234569901234569901234569901234569900
    1234569901234569901234569901234569901234569901234569901234569900
    1234569901234569901234569901234569901234569901234569901234569900
    1234569901234569901234569901234569901234569901234569901234569900
    1234569901234569901234569901234569901234569901234569901234569900
    1234569901234569901234569901234569901234569901234569901234569900
    1234569901234569901234569901234569901234569901234569901234569900
    1234569901234569901234569901234569901234569901234569901234569900
    1234569901234569901234569901234569901234569901234569901234569900
    1234569901234569901234569901234569901234569901234569901234569900
    1234569901234569901234569901234569901234569901234569901234569900
    1234569901234569901234569901234569901234569901234569901234569900
    1234569901234569901234569901234569901234569901234569901234569900
    xLPRGYwhmXzKDg==
    -----END CERTIFICATE-----
    EOF
    
    version=$(uname -r)
    
    kernel=/boot/vmlinuz-$version
    initrd=/boot/initramfs-$version.img
    
    genprotimg -k ./HKD.crt -p ./parmfile -i $kernel -r $initrd -o /boot/secure-linux --no-verify
    
    cat >> /etc/zipl.conf<< EOF
    
    [secure]
    target=/boot
    image=/boot/secure-linux
    EOF
    
    zipl -V
    
    shutdown -h now
  8. Ensure that the VM is shut down.
  9. On the host, add the script to the existing VM image by using guestfs-tools and mark it to run on first boot.

    # virt-customize -a <vm_image_path> --selinux-relabel --firstboot <script_path>
  10. Boot the VM from the image with the added script.

    The script runs on first boot, and then shuts down the VM again. As a result, the VM is now configured to run with Secure Execution on the host that has the corresponding host key.

Verification

  • On the host, use the virsh dumpxml utility to confirm the XML configuration of the secured VM. The configuration must include the <launchSecurity type="s390-pv"/> element, and no <rng model="virtio"> lines.

    # virsh dumpxml vm-name
    [...]
      <cpu mode='host-model'/>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2' cache='none' io='native'/>
          <source file='/var/lib/libvirt/images/secure-guest.qcow2'/>
          <target dev='vda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='default'/>
          <model type='virtio'/>
        </interface>
        <console type='pty'/>
        <memballoon model='none'/>
      </devices>
      <launchSecurity type="s390-pv"/>
    </domain>

18.9. Attaching cryptographic coprocessors to virtual machines on IBM Z

To use hardware encryption in your virtual machine (VM) on an IBM Z host, create mediated devices from a cryptographic coprocessor device and assign them to the intended VMs.

Prerequisites

  • Your host is running on IBM Z hardware.
  • The cryptographic coprocessor is compatible with device assignment. To confirm this, ensure that the type of your coprocessor is listed as CEX4 or later.

    # lszcrypt -V
    
    CARD.DOMAIN TYPE  MODE        STATUS  REQUESTS  PENDING HWTYPE QDEPTH FUNCTIONS  DRIVER
    --------------------------------------------------------------------------------------------
    05         CEX5C CCA-Coproc  online         1        0     11     08 S--D--N--  cex4card
    05.0004    CEX5C CCA-Coproc  online         1        0     11     08 S--D--N--  cex4queue
    05.00ab    CEX5C CCA-Coproc  online         1        0     11     08 S--D--N--  cex4queue
  • The vfio_ap kernel module is loaded. To verify, use:

    # lsmod | grep vfio_ap
    vfio_ap         24576  0
    [...]

    To load the module, use:

    # modprobe vfio_ap
  • The s390utils version supports ap handling:

    # lszdev --list-types
    ...
    ap           Cryptographic Adjunct Processor (AP) device
    ...

Procedure

  1. Obtain the decimal values for the devices that you want to assign to the VM. For example, for the devices 05.0004 and 05.00ab:

    # echo "obase=10; ibase=16; 04" | bc
    4
    # echo "obase=10; ibase=16; AB" | bc
    171
  2. On the host, reassign the devices to the vfio-ap drivers:

    # chzdev -t ap apmask=-5 aqmask=-4,-171
    Note

    To assign the devices persistently, use the -p flag.

  3. Verify that the cryptographic devices have been reassigned correctly.

    # lszcrypt -V
    
    CARD.DOMAIN TYPE  MODE        STATUS  REQUESTS  PENDING HWTYPE QDEPTH FUNCTIONS  DRIVER
    --------------------------------------------------------------------------------------------
    05          CEX5C CCA-Coproc  -              1        0     11     08 S--D--N--  cex4card
    05.0004     CEX5C CCA-Coproc  -              1        0     11     08 S--D--N--  vfio_ap
    05.00ab     CEX5C CCA-Coproc  -              1        0     11     08 S--D--N--  vfio_ap

    If the DRIVER values of the domain queues changed to vfio_ap, the reassignment succeeded.

  4. Create an XML snippet that defines a new mediated device.

    The following example shows defining a persistent mediated device and assigning queues to it. Specifically, the vfio_ap.xml XML snippet in this example assigns the adapter 0x05, the domain queues 0x0004 and 0x00ab, and the control domain 0x00ab to the mediated device.

    # vim vfio_ap.xml
    
    <device>
      <parent>ap_matrix</parent>
      <capability type="mdev">
        <type id="vfio_ap-passthrough"/>
        <attr name='assign_adapter' value='0x05'/>
        <attr name='assign_domain' value='0x0004'/>
        <attr name='assign_domain' value='0x00ab'/>
        <attr name='assign_control_domain' value='0x00ab'/>
      </capability>
    </device>
  5. Create a new mediated device from the vfio_ap.xml XML snippet.

    # virsh nodedev-define vfio_ap.xml
    Node device 'mdev_8f9c4a73_1411_48d2_895d_34db9ac18f85_matrix' defined from 'vfio_ap.xml'
  6. Start the mediated device that you created in the previous step, in this case mdev_8f9c4a73_1411_48d2_895d_34db9ac18f85_matrix.

    # virsh nodedev-start mdev_8f9c4a73_1411_48d2_895d_34db9ac18f85_matrix
    Device mdev_8f9c4a73_1411_48d2_895d_34db9ac18f85_matrix started
  7. Check that the configuration has been applied correctly:

    # cat /sys/devices/vfio_ap/matrix/mdev_supported_types/vfio_ap-passthrough/devices/8f9c4a73-1411-48d2-895d-34db9ac18f85/matrix
    05.0004
    05.00ab

    If the output contains the numerical values of queues that you have previously assigned to vfio-ap, the process was successful.

  8. Attach the mediated device to the VM.

    1. Display the UUID of the mediated device that you created and save it for the next step.

      # virsh nodedev-dumpxml mdev_8f9c4a73_1411_48d2_895d_34db9ac18f85_matrix
      
      <device>
        <name>mdev_8f9c4a73_1411_48d2_895d_34db9ac18f85_matrix</name>
        <parent>ap_matrix</parent>
        <capability type='mdev'>
          <type id='vfio_ap-passthrough'/>
          <uuid>8f9c4a73-1411-48d2-895d-34db9ac18f85</uuid>
          <iommuGroup number='0'/>
          <attr name='assign_adapter' value='0x05'/>
          <attr name='assign_domain' value='0x0004'/>
          <attr name='assign_domain' value='0x00ab'/>
          <attr name='assign_control_domain' value='0x00ab'/>
        </capability>
      </device>
    2. Create and open an XML file for the cryptographic card mediated device. For example:

      # vim crypto-dev.xml
    3. Add the following lines to the file and save it. Replace the uuid value with the UUID you obtained in step a.

      <hostdev mode='subsystem' type='mdev' managed='no' model='vfio-ap'>
        <source>
          <address uuid='8f9c4a73-1411-48d2-895d-34db9ac18f85'/>
        </source>
      </hostdev>
    4. Use the XML file to attach the mediated device to the VM. For example, to permanently attach a device defined in the crypto-dev.xml file to the running testguest1 VM:

      # virsh attach-device testguest1 crypto-dev.xml --live --config

      The --live option attaches the device to a running VM only, without persistence between boots. The --config option makes the configuration changes persistent. You can use the --config option alone to attach the device to a shut-down VM.

      Note that each UUID can only be assigned to one VM at a time.
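The matrix check in step 7 can also be scripted. The following is a minimal sketch with the queue list and matrix contents inlined from the example above; on a real host, you would read the matrix file under /sys/devices/vfio_ap/matrix/<uuid>/matrix instead of passing a literal string.

```shell
#!/bin/sh
# Sketch: verify that every queue you assigned appears in the mdev matrix file.
# The matrix contents and queue list are taken from the example above.
check_matrix() {
  matrix=$1; shift
  for q in "$@"; do
    # -x matches the whole line, so 05.0004 does not match 05.0004x
    printf '%s\n' "$matrix" | grep -qx "$q" || { echo "missing queue: $q"; return 1; }
  done
  echo "all assigned queues present"
}
result=$(check_matrix '05.0004
05.00ab' 05.0004 05.00ab)
echo "$result"
```

This pattern is useful when assigning many queues, where checking the matrix output by eye becomes error-prone.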

Verification

  1. Ensure that the guest operating system detects the assigned cryptographic devices.

    # lszcrypt -V
    
    CARD.DOMAIN TYPE  MODE        STATUS  REQUESTS  PENDING HWTYPE QDEPTH FUNCTIONS  DRIVER
    --------------------------------------------------------------------------------------------
    05          CEX5C CCA-Coproc  online         1        0     11     08 S--D--N--  cex4card
    05.0004     CEX5C CCA-Coproc  online         1        0     11     08 S--D--N--  cex4queue
    05.00ab     CEX5C CCA-Coproc  online         1        0     11     08 S--D--N--  cex4queue

    The output of this command in the guest operating system will be identical to that on a host logical partition with the same cryptographic coprocessor devices available.

  2. In the guest operating system, confirm that a control domain has been successfully assigned to the cryptographic devices.

    # lszcrypt -d C
    
    DOMAIN 00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f
    ------------------------------------------------------
        00  .  .  .  .  U  .  .  .  .  .  .  .  .  .  .  .
        10  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .
        20  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .
        30  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .
        40  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .
        50  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .
        60  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .
        70  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .
        80  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .
        90  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .
        a0  .  .  .  .  .  .  .  .  .  .  .  B  .  .  .  .
        b0  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .
        c0  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .
        d0  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .
        e0  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .
        f0  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .
    ------------------------------------------------------
    C: Control domain
    U: Usage domain
    B: Both (Control + Usage domain)

    If the lszcrypt -d C matrix marks domain 0x0004 with U (usage domain) and domain 0x00ab with B (both control and usage domain), the control domain assignment was successful.
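Reading the domain ID off the matrix can be done programmatically: the row label is the base domain number in hexadecimal, and the column offset is added to it. The following sketch processes one row copied from the example output; a full parser would loop over every row of the real lszcrypt -d C output.

```shell
#!/bin/sh
# Sketch: compute the domain ID of the "B" (control + usage) cell from one row
# of the lszcrypt -d C matrix. The row below is copied from the example output.
row='    a0  .  .  .  .  .  .  .  .  .  .  .  B  .  .  .  .'
base=$(printf '%s\n' "$row" | awk '{print $1}')   # row label, e.g. a0
# fields 2..17 map to column offsets 0x00..0x0f
off=$(printf '%s\n' "$row" | awk '{for (i = 2; i <= NF; i++) if ($i == "B") print i - 2}')
domain=$(printf '0x%02x' $(( 0x$base + off )))
echo "control+usage domain: $domain"
```

For the example matrix above, this prints the domain 0xab that was assigned as both a control and a usage domain.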

18.10. Enabling SEV-SNP

Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP) is a hardware-based security feature that provides strong memory encryption and integrity protection for virtual machines (VMs). This isolates VMs from the hypervisor and other host system software. SEV-SNP is available only with AMD CPUs.

Important

Enabling SEV-SNP on a RHEL host is a Technology Preview only. However, enabling SEV-SNP on a RHEL guest is fully supported.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

18.10.1. Enabling SEV-SNP on a RHEL host

To enable Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP) on a RHEL host, you must ensure that your system meets the prerequisites and then install the snphost and libvirt-daemon-kvm packages.

SEV-SNP is a hardware-based security feature that provides strong memory encryption and integrity protection for virtual machines (VMs). This isolates VMs from the hypervisor and other host system software. SEV-SNP is available only with AMD CPUs.

Important

Using SEV-SNP on a RHEL host is a Technology Preview only. Technology Preview features are not supported with Red Hat production service levels agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Prerequisites

  • Your host uses the following hardware and software:

    • An AMD CPU that supports SEV-SNP, such as a supported model from the AMD EPYC series
    • RHEL 10.0 or later
    • SEV firmware version 1.55 or later
    • Sufficient system memory for SNP Memory Coverage configuration
  • The following settings are enabled in UEFI CPU menu:

    • SVM Mode (Secure Virtual Machine Mode)
    • SEV-SNP Support or Secure Nested Paging
    • SMEE (Secure Memory Encryption)

Procedure

  • Install the necessary packages on the RHEL host.

    # dnf install snphost libvirt-daemon-kvm

Verification

  • Verify that SEV-SNP is enabled on the host.

    # virt-host-validate qemu
    
      QEMU: Checking for hardware virtualization:   PASS
      QEMU: Checking if device '/dev/kvm' exists:   PASS
      QEMU: Checking if device '/dev/kvm' is accessible:   PASS
      QEMU: Checking if device '/dev/vhost-net' exists:   PASS
      QEMU: Checking if device '/dev/net/tun' exists:   PASS
      QEMU: Checking for cgroup 'cpu' controller support:   PASS
      QEMU: Checking for cgroup 'cpuacct' controller support:   PASS
      QEMU: Checking for cgroup 'cpuset' controller support:   PASS
      QEMU: Checking for cgroup 'memory' controller support:   PASS
      QEMU: Checking for cgroup 'devices' controller support:   PASS
      QEMU: Checking for cgroup 'blkio' controller support:   PASS
      QEMU: Checking for device assignment IOMMU support:   PASS
      QEMU: Checking if IOMMU is enabled by kernel:   PASS
      QEMU: Checking for secure guest support:   PASS

    If SEV-SNP is enabled on the host, the Checking for secure guest support line reports PASS.
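For automation, you can check that line with a simple pattern match instead of reading the full report. In this sketch the relevant output line is inlined; on a real host you would pipe virt-host-validate qemu into the grep instead.

```shell
#!/bin/sh
# Sketch: script-friendly check of the secure guest support line.
# The line below is copied from the example virt-host-validate output.
line='  QEMU: Checking for secure guest support:   PASS'
if printf '%s\n' "$line" | grep -q 'secure guest support.*PASS'; then
  status=enabled
else
  status=disabled
fi
echo "secure guest support: $status"
```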

18.10.2. Enabling SEV-SNP on a RHEL guest

To enable Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP) in a RHEL virtual machine (VM), you can apply SEV-SNP configuration when creating the VM. Alternatively, you can enable SEV-SNP in an existing RHEL VM by editing the VM configuration.

SEV-SNP is a hardware-based security feature that provides strong memory encryption and integrity protection for VMs. This isolates VMs from the hypervisor and other host system software. SEV-SNP is available only with AMD CPUs.

Prerequisites

  • SEV-SNP is enabled on your RHEL host. For instructions, see Enabling SEV-SNP on a RHEL host.
  • The host has libvirt, virt-install, and virt-xml installed.
  • The VM uses a supported RHEL version as the guest operating system:

    • RHEL 9.2 or later
    • RHEL 10.0 or later

Procedure

  • To create a new RHEL VM with SEV-SNP enabled:

    • Use the virt-install utility with the --launchSecurity sev-snp,policy=0x30000 option. For example:

      # virt-install \
          --name <vm_name> --os-info rhel9.6 --memory 2048 --vcpus 2 \
          --boot uefi --import --disk /var/lib/libvirt/images/disk.qcow2 \
          --graphics none --console pty,target_type=serial \
          --launchSecurity sev-snp,policy=0x30000
  • To enable SEV-SNP functionality in an existing RHEL VM:

    1. Shut down the existing virtual machine if it is running.

      # virsh shutdown <vm_name>
    2. Export and back up the current VM configuration.

      # virsh dumpxml <vm_name> > /tmp/<vm_name>-backup.xml
    3. Add SEV-SNP launch security configuration to the VM domain.

      # virt-xml <vm_name> --edit --launchSecurity type=sev-snp,policy=0x30000

      This command adds the SEV-SNP configuration to the VM’s domain XML. Alternatively, you can manually edit the configuration by using virsh edit <vm_name> and add the <launchSecurity type='sev-snp'> element.

    4. Ensure that the VM is configured with a compatible CPU model.

      # virsh dumpxml <vm_name> --xpath //cpu
      
      <cpu mode="host-passthrough" check="none" migratable="on"/>

      The CPU should be set to host-passthrough or use an AMD EPYC model. If not, update it:

      # virt-xml <vm_name> --edit --cpu mode=host-passthrough
    5. Start the VM.

      # virsh start <vm_name>
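The policy=0x30000 value used with --launchSecurity above is a bit field. The following sketch decodes the bits most commonly of interest, based on the GUEST_POLICY definitions in the AMD SEV-SNP ABI specification: bit 16 allows SMT on the host, bit 17 is reserved and must always be 1, and bit 19 allows debugging.

```shell
#!/bin/sh
# Sketch: decode the SEV-SNP guest policy value passed to virt-install.
# Bit meanings follow the AMD SEV-SNP ABI specification (GUEST_POLICY).
policy=$(( 0x30000 ))
smt=$(( (policy >> 16) & 1 ))        # 1 = SMT allowed on the host
reserved=$(( (policy >> 17) & 1 ))   # reserved bit, must always be 1
debug=$(( (policy >> 19) & 1 ))      # 1 = debugging of the guest allowed
echo "SMT allowed: $smt, reserved-one: $reserved, debug allowed: $debug"
```

So 0x30000 sets only bits 16 and 17: SMT is permitted and the mandatory reserved bit is set, while debugging stays disabled.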

Verification

  1. On the host, verify that the VM is running with SEV-SNP enabled.

    # virsh dumpxml --xpath //launchSecurity <vm_name>
    
    <launchSecurity type="sev-snp">
      <policy>0x00030000</policy>
    </launchSecurity>
  2. Log in to the guest VM.
  3. Verify that the SEV-SNP guest device exists.

    # ls -l /dev/sev_guest

    If SEV-SNP is properly enabled in the guest, this command lists the /dev/sev_guest character device.

    Important

    Checking the existence of /dev/sev_guest proves only that the VM is configured and operating correctly. To prove that the VM is using SEV-SNP to secure it against a hostile host, you must perform a cryptographic attestation of the guest.

    For more information about attestation, see Learn about Confidential Computing Attestation (Red Hat Blog).

Troubleshooting

  • If SEV-SNP is not detected in the guest, verify that the host SEV-SNP configuration is correct and that the VM was restarted after the configuration change.
  • To revert to the original configuration, restore from the backup:

    # virsh undefine <vm_name>
    # virsh define /tmp/<vm_name>-backup.xml

18.11. Enabling TDX

Trust Domain Extensions (TDX) is a hardware-based security feature that provides strong memory encryption and integrity protection for virtual machines (VMs). This isolates VMs from the hypervisor and other host system software. TDX is available only with Intel CPUs.

Important

Enabling TDX on a RHEL host is a Technology Preview only. However, enabling TDX on a RHEL guest is fully supported.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

18.11.1. Enabling TDX on a RHEL host

To enable Trust Domain Extensions (TDX) on a RHEL host, you must ensure that your system meets the prerequisites, then install the libvirt-daemon-kvm and tdx-qgs packages.

TDX is a hardware-based security feature that provides strong memory encryption and integrity protection for virtual machines (VMs). This isolates VMs from the hypervisor and other host system software. TDX is available only with Intel CPUs.

Important

Using TDX on a RHEL host is a Technology Preview only. Technology Preview features are not supported with Red Hat production service levels agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Prerequisites

  • Your host uses the following hardware and software:

    • An Intel CPU that supports TDX, such as a 5th Generation Intel Xeon Scalable processor or later
    • RHEL 10.1 or later
    • TDX firmware version 1.5 or later
  • The following settings are enabled in the UEFI CPU menu:

    • Software Guard Extensions (SGX)
    • SGX Auto MP Registration Agent
    • Total Memory Encryption (TME)
    • Total Memory Encryption Multi-Tenant (TME-MT)
    • Trust Domain Extension (TDX)
    • TDX Secure Arbitration Mode Loader (SEAM Loader)

Procedure

  1. Install the necessary packages on the RHEL host.

    # dnf install libvirt-daemon-kvm tdx-qgs
  2. Disable hibernation on the host and enable IOMMU.

    # grubby --update-kernel=ALL --args="nohibernate intel_iommu=on"

    Hibernation must be disabled for TDX to function properly.

  3. Update the bootloader configuration and reboot the system.

    # grub2-mkconfig -o /boot/grub2/grub.cfg
    # reboot

    The system must be rebooted for the TDX configuration to take effect.
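After the reboot, you can confirm that the new arguments took effect on the kernel command line. In this sketch the cmdline string is inlined for illustration; on a real host you would read /proc/cmdline instead.

```shell
#!/bin/sh
# Sketch: verify that the arguments added with grubby are on the kernel
# command line. The cmdline content below is illustrative.
cmdline="root=/dev/mapper/rhel-root ro nohibernate intel_iommu=on"
missing=0
for arg in nohibernate intel_iommu=on; do
  case " $cmdline " in
    *" $arg "*) echo "$arg: present" ;;
    *)          echo "$arg: MISSING"; missing=1 ;;
  esac
done
```

If either argument is reported as missing, rerun the grubby command, regenerate the bootloader configuration, and reboot again.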

Verification

  • Verify that TDX is enabled on the host.

    # virt-host-validate qemu
    
      QEMU: Checking for hardware virtualization:   PASS
      QEMU: Checking if device '/dev/kvm' exists:   PASS
      QEMU: Checking if device '/dev/kvm' is accessible:   PASS
      QEMU: Checking if device '/dev/vhost-net' exists:   PASS
      QEMU: Checking if device '/dev/net/tun' exists:   PASS
      QEMU: Checking for cgroup 'cpu' controller support:   PASS
      QEMU: Checking for cgroup 'cpuacct' controller support:   PASS
      QEMU: Checking for cgroup 'cpuset' controller support:   PASS
      QEMU: Checking for cgroup 'memory' controller support:   PASS
      QEMU: Checking for cgroup 'devices' controller support:   PASS
      QEMU: Checking for cgroup 'blkio' controller support:   PASS
      QEMU: Checking for device assignment IOMMU support:   PASS
      QEMU: Checking if IOMMU is enabled by kernel:   PASS
      QEMU: Checking for secure guest support:   PASS

    If TDX is enabled on the host, the Checking for secure guest support line reports PASS.

18.11.2. Enabling TDX on a RHEL guest

To enable Trust Domain Extensions (TDX) functionality in a RHEL virtual machine (VM), you can apply TDX configuration when creating the VM. Alternatively, you can enable TDX in an existing RHEL VM by editing the VM configuration.

TDX is a hardware-based security feature that provides strong memory encryption and integrity protection for VMs. This isolates VMs from the hypervisor and other host system software. TDX is available only with Intel CPUs.

Prerequisites

  • TDX is enabled on your RHEL host. For instructions, see Enabling TDX on a RHEL host.
  • The host has libvirt, virt-install, and virt-xml installed.
  • The VM uses a supported RHEL version as the guest operating system:

    • RHEL 9.6 or later
    • RHEL 10.0 or later

Note

TDX is incompatible with kdump. Enabling TDX in a VM will cause kdump to fail in that VM.

Procedure

  • To create a new RHEL VM with TDX enabled:

    • Use the virt-install utility with the --launchSecurity tdx,quoteGenerationService=on option. For example:

      # virt-install \
           --name <vm_name> --os-info rhel9.6 --memory 2048 --vcpus 2 \
           --boot uefi --import --disk /var/lib/libvirt/images/disk.qcow2 \
           --graphics none --console pty,target_type=serial \
           --launchSecurity tdx,quoteGenerationService=on
  • To enable TDX functionality in an existing RHEL VM:

    1. Shut down the existing VM if it is running.

      # virsh shutdown <vm_name>
    2. Export and back up the current VM configuration.

      # virsh dumpxml <vm_name> > /tmp/<vm_name>-backup.xml
    3. Add TDX launch security configuration to the VM domain.

      # virt-xml <vm_name> --edit --launchSecurity type=tdx

      This command adds the TDX configuration to the VM’s domain XML. Alternatively, you can manually edit the configuration by using virsh edit <vm_name> and add the <launchSecurity type='tdx'> element.

    4. Ensure that the VM is configured with a compatible CPU model.

      # virsh dumpxml <vm_name> --xpath //cpu
      
      <cpu mode="host-passthrough" check="none" migratable="on"/>

      The CPU should be set to host-passthrough or use an Intel Xeon model. If not, update it:

      # virt-xml <vm_name> --edit --cpu mode=host-passthrough
    5. Start the VM.

      # virsh start <vm_name>

Verification

  1. On the host, verify that the VM is running with TDX enabled.

    # virsh dumpxml --xpath //launchSecurity <vm_name>
    
    <launchSecurity type="tdx">
      <quoteGenerationService/>
    </launchSecurity>
  2. Log in to the VM.
  3. Verify that the TDX guest device exists.

    # ls -l /dev/tdx_guest

    If TDX is properly enabled in the guest, this command lists the /dev/tdx_guest character device.

    Important

    Checking the existence of /dev/tdx_guest proves only that the VM is configured and operating correctly. To prove that the VM is using TDX to secure it against a hostile host, you must perform a cryptographic attestation of the guest.

    For more information about attestation, see Learn about Confidential Computing Attestation (Red Hat Blog).

Troubleshooting

  • If TDX is not detected in the guest, verify that the host TDX configuration is correct and that the VM was restarted after the configuration change.
  • To revert to the original configuration, restore from the backup:

    # virsh undefine <vm_name>
    # virsh define /tmp/<vm_name>-backup.xml