Chapter 14. Attaching host devices to virtual machines


You can expand the functionality of a virtual machine (VM) by attaching a host device to the VM. The attached host device is represented in the VM by a virtual device, which is a software abstraction of the hardware device.

14.1. How virtual devices work

To provide virtual machines (VMs) with various capabilities, VMs use software abstractions of hardware devices.

Just like physical machines, VMs require specialized devices to provide functions to the system, such as processing power, memory, storage, networking, or graphics. Physical systems usually use hardware devices for these purposes. However, because VMs work as software processes, they need to use software abstractions of such devices instead, referred to as virtual devices.

The basics of virtual devices

Virtual devices attached to a VM can be configured when creating the VM, and can also be managed on an existing VM. Generally, virtual devices can be attached to or detached from a VM only when the VM is shut off, but some can be added or removed while the VM is running. This feature is referred to as device hot plug and hot unplug.

When creating a new VM, libvirt automatically creates and configures a default set of essential virtual devices, unless specified otherwise by the user. These are based on the host system architecture and machine type, and usually include:

  • the CPU
  • memory
  • a keyboard
  • a network interface controller (NIC)
  • various device controllers
  • a video card
  • a sound card

To manage virtual devices after the VM is created, use the command line. To manage virtual storage devices and network interfaces, you can also use the RHEL 10 web console.

Performance or flexibility

For some types of devices, RHEL 10 supports multiple implementations, often with a trade-off between performance and flexibility.

For example, the physical storage used for virtual disks can be represented by files in various formats, such as qcow2 or raw, and presented to the VM by using a variety of controllers:

  • an emulated controller
  • virtio-scsi
  • virtio-blk

An emulated controller is slower than a virtio controller, because virtio devices are designed specifically for virtualization purposes. However, emulated controllers make it possible to run operating systems that have no drivers for virtio devices. Similarly, virtio-scsi offers more complete support for SCSI commands, and makes it possible to attach a larger number of disks to the VM. Finally, virtio-blk provides better performance than both virtio-scsi and emulated controllers, but supports a more limited range of use cases. For example, attaching a physical disk as a LUN device to a VM is not possible when using virtio-blk.
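
For illustration, the controller choice is reflected in the disk definition in the domain XML; the following is a hedged sketch of the two virtio variants (file paths and target device names are examples, not defaults):

```xml
<!-- virtio-blk: the disk itself is a virtio device (target bus='virtio') -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/example.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>

<!-- virtio-scsi: the disk attaches to a virtio-scsi controller (target bus='scsi') -->
<controller type='scsi' model='virtio-scsi' index='0'/>
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/example2.qcow2'/>
  <target dev='sda' bus='scsi'/>
</disk>
```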

14.2. Types of virtual devices

To choose the appropriate device type for your virtual machines (VMs), consider your requirements for performance, compatibility, and functionality.

Virtualization in RHEL 10 can present several distinct types of virtual devices that you can attach to VMs:

Emulated devices

Emulated devices are software implementations of widely used physical devices. Drivers designed for physical devices are also compatible with emulated devices. Therefore, emulated devices can be used very flexibly.

However, because they need to faithfully emulate a particular type of hardware, emulated devices might suffer a significant performance loss compared with the corresponding physical devices or more optimized virtual devices.

The following types of emulated devices are supported:

  • Virtual CPUs (vCPUs), with a large choice of CPU models available. The performance impact of emulation depends significantly on the differences between the host CPU and the emulated vCPU.
  • Emulated system components, such as PCI bus controllers.
  • Emulated storage controllers, such as SATA, SCSI or even IDE.
  • Emulated sound devices, such as ICH9, ICH6 or AC97.
  • Emulated graphics cards, such as VGA cards.
  • Emulated network devices, such as rtl8139.
Paravirtualized devices

Paravirtualization provides a fast and efficient method for exposing virtual devices to VMs. Paravirtualized devices expose interfaces that are designed specifically for use in VMs, and thus significantly increase device performance. RHEL 10 provides paravirtualized devices to VMs by using the virtio API as a layer between the hypervisor and the VM. The drawback of this approach is that it requires a specific device driver in the guest operating system.

It is recommended to use paravirtualized devices instead of emulated devices for VMs whenever possible, notably if they are running I/O-intensive applications. Paravirtualized devices decrease I/O latency and increase I/O throughput, in some cases bringing them very close to bare-metal performance. Other paravirtualized devices also add functionality to VMs that is not otherwise available.

The following types of paravirtualized devices are supported:

  • The paravirtualized network device (virtio-net).
  • Paravirtualized storage controllers:

    • virtio-blk - provides block device emulation.
    • virtio-scsi - provides more complete SCSI emulation.
  • The paravirtualized clock.
  • The paravirtualized serial device (virtio-serial).
  • The balloon device (virtio-balloon), used to dynamically distribute memory between a VM and its host.
  • The paravirtualized random number generator (virtio-rng).
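
In the domain XML, several of these devices appear as virtio models; the following is an illustrative sketch (the values are examples, not defaults):

```xml
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>                            <!-- virtio-net -->
</interface>
<memballoon model='virtio'/>                        <!-- virtio-balloon -->
<rng model='virtio'>
  <backend model='random'>/dev/urandom</backend>    <!-- virtio-rng -->
</rng>
```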
Physically shared devices

Certain hardware platforms enable VMs to directly access various hardware devices and components. This process is known as device assignment or passthrough.

When attached in this way, some aspects of the physical device are directly available to the VM as they would be to a physical machine. This provides superior performance for the device when used in the VM. However, devices physically attached to a VM become unavailable to the host, and also cannot be migrated.

Nevertheless, some devices can be shared across multiple VMs. For example, in certain cases a single physical device can provide multiple mediated devices, which can then be assigned to distinct VMs.

The following types of passthrough devices are supported:

  • USB, PCI, and SCSI passthrough - expose common industry standard buses directly to VMs to make their specific features available to guest software.
  • Single-root I/O virtualization (SR-IOV) - a specification that enables hardware-enforced isolation of PCI Express resources. This makes it safe and efficient to partition a single physical PCI resource into virtual PCI functions. It is commonly used for network interface cards (NICs).
  • N_Port ID virtualization (NPIV) - a Fibre Channel technology to share a single physical host bus adapter (HBA) with multiple virtual ports.
  • GPUs and vGPUs - accelerators for specific kinds of graphic or compute workloads. Some GPUs can be attached directly to a VM, while certain types also offer the ability to create virtual GPUs (vGPUs) that share the underlying physical hardware.
Note

Some devices of these types might be unsupported or not compatible with RHEL. If you require assistance with setting up virtual devices, consult Red Hat support.

14.3. Attaching USB devices to virtual machines

When using a virtual machine (VM), you can access and control a USB device, such as a flash drive or a web camera, that is attached to the host system. In this scenario, the host system passes control of the device to the VM. This is also known as USB passthrough.

To attach a USB device to a VM, you can include the USB device information in the XML configuration file of the VM.

Prerequisites

  • Ensure the device you want to pass through to the VM is attached to the host.

Procedure

  1. Locate the bus and device values of the USB that you want to attach to the VM.

    For example, the following command displays a list of USB devices attached to the host. The device used in this example is attached on bus 001 as device 005.

    # lsusb
    [...]
    Bus 001 Device 003: ID 2567:0a2b Intel Corp.
    Bus 001 Device 005: ID 0407:6252 Kingston River 2.0
    [...]
  2. Use the virt-xml utility along with the --add-device argument.

    For example, the following command attaches a USB flash drive to the example-VM-1 VM.

    # virt-xml example-VM-1 --add-device --hostdev 001.005
    Domain 'example-VM-1' defined successfully.
    Note

    To attach a USB device to a running VM, add the --update argument to the command.
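
The bus and device values from step 1 can also be turned into the BUS.DEV argument programmatically; the following is a minimal shell sketch using the sample lsusb line from above (the parsing helper is illustrative, not part of virt-xml):

```shell
# Illustrative only: extract the BUS.DEV argument for virt-xml --hostdev
# from an lsusb output line (sample text, not a live query).
line='Bus 001 Device 005: ID 0407:6252 Kingston River 2.0'
bus=$(echo "$line" | awk '{print $2}')
dev=$(echo "$line" | awk '{print $4}' | tr -d ':')
echo "${bus}.${dev}"
```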

Verification

  1. Use the virsh dumpxml command to see if the device’s XML definition has been added to the <devices> section in the VM’s XML configuration file.

    # virsh dumpxml example-VM-1
    [...]
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x0407'/>
        <product id='0x6252'/>
        <address bus='1' device='5'/>
      </source>
      <alias name='hostdev0'/>
      <address type='usb' bus='0' port='3'/>
    </hostdev>
    [...]
  2. Run the VM and test if the device is present and works as expected.

14.4. Attaching PCI devices to virtual machines

When using a virtual machine (VM), you can access and control a PCI device, such as a storage or network controller, that is attached to the host system. In this scenario, the host system passes control of the device to the VM. This is also known as PCI device assignment, or PCI passthrough.

To use a PCI hardware device attached to your host in a virtual machine (VM), you can detach the device from the host and assign it to the VM.

Note

This procedure describes generic PCI device assignment. For instructions on assigning specific types of PCI devices, see the relevant procedures in this chapter.

Prerequisites

  • If your host is using the IBM Z architecture, the vfio kernel modules must be loaded on the host. To verify, use the following command:

    # lsmod | grep vfio

    The output must contain the following modules:

    • vfio_pci
    • vfio_pci_core
    • vfio_iommu_type1

Procedure

  1. Obtain the PCI address identifier of the device that you want to use. For example, if you want to use an NVMe disk attached to the host, the following output shows it as device 0000:65:00.0.

    # lspci -nkD
    
    0000:00:00.0 0600: 8086:a708 (rev 01)
    	Subsystem: 17aa:230e
    	Kernel driver in use: igen6_edac
    	Kernel modules: igen6_edac
    0000:00:02.0 0300: 8086:a7a1 (rev 04)
    	Subsystem: 17aa:230e
    	Kernel driver in use: i915
    	Kernel modules: i915, xe
    0000:00:04.0 1180: 8086:a71d (rev 01)
    	Subsystem: 17aa:230e
    	Kernel driver in use: thermal_pci
    	Kernel modules: processor_thermal_device_pci
    0000:00:05.0 0604: 8086:a74d (rev 01)
    	Subsystem: 17aa:230e
    	Kernel driver in use: pcieport
    0000:00:07.0 0604: 8086:a76e (rev 01)
    	Subsystem: 17aa:230e
    	Kernel driver in use: pcieport
    0000:65:00.0 0108: 144d:a822 (rev 01)
        DeviceName: PCIe SSD in Slot 0 Bay 2
        Subsystem: 1028:1fd9
        Kernel driver in use: nvme
        Kernel modules: nvme
    0000:6a:00.0 0108: 1179:0110 (rev 01)
        DeviceName: PCIe SSD in Slot 11 Bay 2
        Subsystem: 1028:1ffb
        Kernel driver in use: nvme
        Kernel modules: nvme
  2. Open the XML configuration of the VM to which you want to attach the PCI device.

    # virsh edit vm-name
  3. Add the following <hostdev> configuration to the <devices> section of the XML file.

    Replace the values on the address line with the PCI address of your device. Optionally, to change the PCI address that the device will use in the VM, you can configure a different address on the <address type="pci"> line.

    For example, if the device address on the host is 0000:65:00.0, and you want it to use 0000:02:00.0 in the guest, use the following configuration:

    <hostdev mode="subsystem" type="pci" managed="yes">
      <driver name="vfio"/>
       <source>
        <address domain="0x0000" bus="0x65" slot="0x00" function="0x0"/>
       </source>
       <address type="pci" domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </hostdev>
  4. Optional: On IBM Z hosts, you can modify how the guest operating system will detect the PCI device. To do this, add a <zpci> sub-element to the <address> element. In the <zpci> line, you can adjust the uid and fid values, which modifies the PCI address and function ID of the device in the guest operating system.

    <hostdev mode="subsystem" type="pci" managed="yes">
      <driver name="vfio"/>
       <source>
        <address domain="0x0000" bus="0x65" slot="0x00" function="0x0"/>
       </source>
       <address type="pci" domain='0x0000' bus='0x02' slot='0x00' function='0x0'>
         <zpci uid="0x0008" fid="0x001807"/>
       </address>
    </hostdev>

    In this example:

    • uid="0x0008" sets the domain PCI address of the device in the VM to 0008:00:00.0.
    • fid="0x001807" sets the slot value of the device to 0x001807. As a result, the device configuration in the file system of the VM is saved to /sys/bus/pci/slots/00001807/address.

      If these values are not specified, libvirt configures them automatically.

  5. Save the XML configuration.
  6. If the VM is running, shut it down.

    # virsh shutdown vm-name
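
As a side note, the guest-side values produced by the uid and fid attributes from step 4 can be computed with plain printf; this is a minimal sketch under the assumption, described above, that uid maps to the PCI domain and fid to the slot value:

```shell
# Illustrative only: compute the guest-side PCI domain address and slot path
# that the example zpci uid/fid values produce (POSIX shell printf, hex input).
uid=0x0008
fid=0x001807
domain_addr=$(printf '%04x:00:00.0' "$uid")
slot_path=$(printf '/sys/bus/pci/slots/%08x/address' "$fid")
echo "$domain_addr"
echo "$slot_path"
```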

Verification

  1. Start the VM and log in to its guest operating system.
  2. In the guest operating system, confirm that the PCI device is listed.

    For example, if you configured the guest device address as 0000:02:00.0, use the following command:

    # lspci -nkD | grep 0000:02:00.0
    
    0000:02:00.0 8086:9a09 (rev 01)

14.5. Attaching host devices by using the web console

To add specific functionalities to your virtual machine (VM), you can use the web console to attach host devices to the VM.

Prerequisites

  • You have installed the RHEL 10 web console.

    For instructions, see Installing and enabling the web console.

  • If you are attaching PCI devices, ensure that the managed attribute of the hostdev element is set to yes.

    Note

    When attaching PCI devices to your VM, do not omit the managed attribute of the hostdev element, or set it to no. If you do so, PCI devices cannot automatically detach from the host when you pass them to the VM. They also cannot automatically reattach to the host when you turn off the VM.

    As a consequence, the host might become unresponsive or shut down unexpectedly.

    You can find the status of the managed attribute in your VM’s XML configuration. The following example opens the XML configuration of the example-VM-1 VM.

    # virsh edit example-VM-1
  • Back up important data from the VM.
  • Optional: Back up the XML configuration of your VM. For example, to back up the example-VM-1 VM:

    # virsh dumpxml example-VM-1 > example-VM-1.xml
  • The web console VM plug-in is installed on your system.

Procedure

  1. Log in to the RHEL 10 web console.
  2. In the Virtual Machines interface, click the VM to which you want to attach a host device.

    A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.

  3. Scroll to Host devices.

    The Host devices section displays information about the devices attached to the VM and options to Add or Remove devices.

  4. Click Add host device.

    The Add host device dialog is displayed.

    Image displaying the Add host device dialog box.
  5. Select the device you want to attach to the VM.
  6. Click Add.

    The selected device is attached to the VM.

Verification

  • Run the VM and check if the device is displayed in the Host devices section.

14.6. Removing USB devices from virtual machines

To remove a USB device from a virtual machine (VM), you can remove the USB device information from the XML configuration of the VM.

Procedure

  1. Locate the bus and device values of the USB that you want to remove from the VM.

    For example, the following command displays a list of USB devices attached to the host. The device used in this example is attached on bus 001 as device 005.

    # lsusb
    [...]
    Bus 001 Device 003: ID 2567:0a2b Intel Corp.
    Bus 001 Device 005: ID 0407:6252 Kingston River 2.0
    [...]
  2. Use the virt-xml utility along with the --remove-device argument.

    For example, the following command removes a USB flash drive, attached to the host as device 005 on bus 001, from the example-VM-1 VM.

    # virt-xml example-VM-1 --remove-device --hostdev 001.005
    Domain 'example-VM-1' defined successfully.

    To remove a USB device from a running VM, add the --update argument to this command.

Verification

  • Run the VM and check if the device has been removed from the list of devices.

14.7. Removing PCI devices from virtual machines

To remove a PCI device from a virtual machine (VM), remove the device information from the XML configuration of the VM.

Procedure

  1. In the XML configuration of the VM to which the PCI device is attached, locate the <address domain> line in the <hostdev> section with the device’s setting.

    # virsh dumpxml <VM-name>
    
    [...]
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x65' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </hostdev>
    [...]
  2. Use the virsh detach-device command with the --hostdev option and the device address.

    For example, the following command persistently removes the device located in the previous step.

    # virsh detach-device <VM-name> --hostdev 0000:65:00.0 --config
    Domain 'VM-name' defined successfully.
    Note

    To remove a PCI device from a running VM, add the --live argument to the previous command.

  3. Optional: Re-attach the PCI device to the host. For example, the following command re-attaches the device removed from the VM in the previous step:

    # virsh nodedev-reattach pci_0000_65_00_0
    Device pci_0000_65_00_0 re-attached
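
The node-device name used by virsh nodedev-reattach is derived from the PCI address by replacing the separators with underscores; a minimal shell sketch (the helper itself is illustrative, not a libvirt tool):

```shell
# Illustrative only: derive the libvirt node-device name that
# virsh nodedev-reattach expects from a full PCI address.
addr=0000:65:00.0
nodedev="pci_$(echo "$addr" | tr ':.' '__')"
echo "$nodedev"
```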

Verification

  1. Display the XML configuration of the VM again, and check that the <hostdev> section of the device no longer appears.

    # virsh dumpxml <VM-name>

14.8. Removing host devices by using the web console

To free up resources, modify the functionalities of your VM, or both, you can use the web console to modify the VM and remove host devices that are no longer required.

Prerequisites

  • You have installed the RHEL 10 web console.

    For instructions, see Installing and enabling the web console.

  • The web console VM plug-in is installed on your system.
  • Optional: Back up the XML configuration of your VM by using virsh dumpxml example-VM-1 and sending the output to a file. For example, the following backs up the configuration of your testguest1 VM as the testguest1.xml file:

    # virsh dumpxml testguest1 > testguest1.xml
    # cat testguest1.xml
    <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
      <name>testguest1</name>
      <uuid>ede29304-fe0c-4ca4-abcd-d246481acd18</uuid>
      [...]
    </domain>

Procedure

  1. In the Virtual Machines interface, click the VM from which you want to remove a host device.

    A new page opens with an Overview section with basic information about the selected VM and a Console section to access the VM’s graphical interface.

  2. Scroll to Host devices.

    The Host devices section displays information about the devices attached to the VM and options to Add or Remove devices.

  3. Click the Remove button next to the device you want to remove from the VM.

    A remove device confirmation dialog is displayed.

  4. Click Remove.

    The device is removed from the VM.

Troubleshooting

  • If removing a host device causes your VM to become unbootable, use the virsh define utility to restore the XML configuration by reloading the XML configuration file you backed up previously.

    # virsh define testguest1.xml

14.9. Attaching ISO images to virtual machines

When using a virtual machine (VM), you can access information stored in an ISO image on the host. To do so, attach the ISO image to the VM as a virtual optical drive, such as a CD drive or a DVD drive.

14.9.1. Attaching ISO images as virtual optical drives

To attach an ISO image as a virtual optical drive, edit the XML configuration file of the virtual machine (VM) and add the new drive.

Prerequisites

  • The ISO image is stored on the host machine, and you know its path.

Procedure

  • Use the virt-xml utility with the --add-device argument:

    For example, the following command attaches the example-ISO-name ISO image, stored in the /home/username/Downloads directory, to the example-VM-name VM.

    # virt-xml example-VM-name --add-device --disk /home/username/Downloads/example-ISO-name.iso,device=cdrom
    
    Domain 'example-VM-name' defined successfully.

Verification

  • Run the VM and test if the device is present and works as expected.

14.9.2. Replacing ISO images in virtual optical drives

To replace an ISO image attached as a virtual optical drive to a virtual machine (VM), edit the XML configuration file of the VM and specify the replacement.

Prerequisites

  • You must store the ISO image on the host machine.
  • You must know the path to the ISO image.

Procedure

  1. Locate the target device where the ISO image is attached to the VM. You can find this information in the VM’s XML configuration file.

    For example, the following command displays the example-VM-name VM’s XML configuration file, where the target device for the virtual optical drive is sda.

    # virsh dumpxml example-VM-name
    ...
    <disk>
      ...
      <source file='/home/username/Downloads/example-ISO-name.iso'/>
      <target dev='sda' bus='sata'/>
      ...
    </disk>
    ...
  2. Use the virt-xml utility with the --edit argument.

    For example, the following command replaces the example-ISO-name ISO image, attached to the example-VM-name VM at target sda, with the example-ISO-name-2 ISO image stored in the /home/username/Downloads directory.

    # virt-xml example-VM-name --edit target=sda --disk /home/username/Downloads/example-ISO-name-2.iso
    Domain 'example-VM-name' defined successfully.

Verification

  • Run the VM and test if the device is replaced and works as expected.

14.9.3. Removing ISO images from virtual optical drives

To remove an ISO image attached to a virtual machine (VM), edit the XML configuration file of the VM.

Procedure

  1. Locate the target device where the ISO image is attached to the VM. You can find this information in the VM’s XML configuration file.

    For example, the following command displays the example-VM-name VM’s XML configuration file, where the target device for the virtual optical drive is sda.

    # virsh dumpxml example-VM-name
    ...
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <target dev='sda' bus='sata'/>
      ...
    </disk>
    ...
  2. Use the virt-xml utility with the --remove-device argument.

    For example, the following command removes the optical drive attached as target sda from the example-VM-name VM.

    # virt-xml example-VM-name --remove-device --disk target=sda
    Domain 'example-VM-name' defined successfully.

Verification

  • Confirm that the device is no longer listed in the XML configuration file of the VM.

14.10. Configuring SCSI passthrough for virtual machines

To provide a virtual machine (VM) with direct access to a host SCSI device, such as a Storage Area Network (SAN) Logical Unit Number (LUN) disk device, you can configure SCSI passthrough.

You can pass local disks or multipath devices with SCSI passthrough.

  • When passing local disks, the VM uses one path to the disk, for example, a single /dev/disk/by-path/ or /dev/sdb device.
  • With multipath devices, the host presents the same LUN through multiple paths and aggregates them into one mapper device, for example /dev/mapper/mpatha. This provides redundancy and failover if one path fails.

Procedure

  1. Open the XML configuration of the VM:

    # virsh edit <vm_name>
  2. In the <devices> section, add a line for the VirtIO-SCSI controller, if it is not present already:

    <controller type='scsi' model='virtio-scsi' index='0'/>
  3. For multipath devices, identify the multipath device mapper on the host:

    # multipath -l
    mpatha (36001438005deb1d00000000000000001) dm-0 NETAPP   ,LUN
    size=100G features='0' hwhandler='0' wp=rw
    `-+- policy='service-time 0' prio=50 status=active
      |- 2:0:0:1 sdb 8:16  active ready running
      `- 3:0:0:1 sdc 8:32  active ready running

    Note the multipath device name, for example, mpatha. The device path is /dev/mapper/<name>, for example, /dev/mapper/mpatha. You can also list multipath device nodes with the ls /dev/mapper/ command.

  4. Create and open an XML file to define the SCSI disk device on the host. For example:

    # vim scsi-passthrough-device.xml
  5. Add the SCSI device configuration to the XML file:

    • For a multipath device:

      <disk type='block' device='lun'>
              <driver name='qemu' type='raw'/>
              <source dev='/dev/mapper/mpatha'/>
              <target dev='sdb' bus='scsi'/>
              <alias name='ua-scsi-mpath0'/>
              <address type='drive' controller='0' bus='0' target='0' unit='1'/>
      </disk>

      In this example, the multipath device is defined as a single disk element:

      • The <source dev='/dev/mapper/mpatha'/> specifies the device-mapper multipath device on the host.
      • The host multipath layer already aggregates the paths, so the VM receives one block device and path failover is handled on the host.
    • For passing a local disk:

      <disk type='block' device='lun'>
              <driver name='qemu' type='raw'/>
              <source dev='/dev/sdb'/>
              <target dev='sdc' bus='scsi'/>
              <alias name='ua-scsi-lun0'/>
              <address type='drive' controller='0' bus='0' target='0' unit='0'/>
      </disk>

      In this example, a SCSI disk device is defined with the following parameters:

      • type='block': Specifies that the device is a block device.
      • device='lun': Indicates that this is a SCSI Logical Unit Number (LUN) device passthrough.
      • <driver name='qemu' type='raw'/>: Specifies the QEMU driver with raw format for direct device access.
      • <source dev='/dev/sdb'/>: Specifies the host block device path. You can use device nodes, for example /dev/sdb, directly or use /dev/disk/by-path/ entries for better persistence across reboots.
      • <target dev='sdc' bus='scsi'/>: Specifies how the device is displayed in the VM. The device is displayed as sdc on the SCSI bus.
      • alias: This is an optional user-defined alias that you can use to specify the intended device, for example when detaching the device with libvirt commands. All user-defined aliases in libvirt must start with the "ua-" prefix.
      • <address type='drive' controller='0' bus='0' target='0' unit='0'/>: Specifies how the device is displayed in the VM. The controller attribute refers to a SCSI controller in the VM, which must exist before attaching the device.
  6. Use the XML file to attach the defined SCSI disk device to a VM. For example, to permanently attach the device defined in scsi-passthrough-device.xml to the running <vm_name> VM:

    # virsh attach-device <vm_name> scsi-passthrough-device.xml --live --config

    The --live option attaches the device to a running VM only, without persistence between boots. The --config option makes the configuration changes persistent. You can also attach the device to a shutdown VM without the --live option.

  7. Optional: When you no longer need the SCSI disk to be attached to the VM, you can detach it by using the virsh detach-device command:

    1. To detach a SCSI disk device from a shut-down VM:

      # virsh detach-device <vm_name> scsi-passthrough-device.xml --config
    2. To detach a SCSI disk device from a running VM:

      # virsh detach-device <vm_name> scsi-passthrough-device.xml --live --config
      Warning

      Detaching a SCSI device from a running VM might cause data loss or corruption if the device is in use. Ensure that the device is not being accessed by any applications in the guest operating system before detaching it.

Verification

  • In RHEL VMs, you can list block devices to verify that the SCSI device is visible in the guest. For multipath configurations, the LUN disk is listed as a single block device:

    # lsblk -nd -o name,size,type,wwn
    NAME   SIZE TYPE WWN
    sda    20G  disk
    sdb   100G  disk  0x36001438005deb1d00000000000000001

    In this example, the SCSI device is listed as a disk with a World Wide Name (WWN). For multipath configuration, the guest is presented with a single block device and path failover is handled on the host.

    Check that the size of the presented device is the same as the size of the LUN disk on the host.

  • On the host, you can verify the device attachment by displaying the XML configuration of the running VM:

    # virsh dumpxml <vm_name>
    <domain type='kvm'>
      <name>vm_name</name>
      ...
      <devices>
        ...
        <disk type='block' device='lun'>
          <driver name='qemu' type='raw'/>
          <source dev='/dev/sdb'/>
          <target dev='sdc' bus='scsi'/>
          <alias name='ua-scsi-lun0'/>
          <address type='drive' controller='0' bus='0' target='0' unit='0'/>
        </disk>
        ...
      </devices>
      ...
    </domain>

    In this example, the disk element shows that the SCSI device from the host path /dev/sdb is attached to the VM as sdc on the SCSI bus.

14.11. Configuring SCSI3-Persistent Reservation for virtual machines

By using SCSI3-Persistent Reservation (S3-PR), multiple virtual machines (VMs) can coordinate access to shared storage devices. This is required for Linux clustering solutions, such as Pacemaker, and for Windows Server Failover Clustering.

With S3-PR, VMs can register and manage persistent reservations on storage devices to prevent conflicts when multiple VMs access the same storage.

You can configure S3-PR for both singlepath and multipath devices in VMs running on a RHEL host by modifying their XML configuration files.

Procedure

  1. Open the XML configuration of the VM:

    # virsh edit <vm_name>
  2. In the <devices> section, add a line for the VirtIO-SCSI controller if it is not present already:

    <controller type='scsi' model='virtio-scsi' index='0'/>
  3. Edit the VM configuration to enable S3-PR support. The configuration depends on whether you are using singlepath or multipath vDisks:

    • For multipath devices, add the reservations element with managed='yes' to the multipathed disk device:

      <disk type='block' device='lun'>
        <driver name='qemu' type='raw' cache='none'/>
        <source dev='/dev/mapper/mpatha'>
          <reservations managed='yes'/>
        </source>
        <target dev='sda' bus='scsi'/>
        <address type='drive' controller='0' bus='0' target='0' unit='0'/>
      </disk>

      In this example, the multipathed device /dev/mapper/mpatha is configured with S3-PR support:

      • device='lun': Indicates that this is a SCSI Logical Unit Number (LUN) device, which is required for S3-PR support with block devices.
      • reservations managed='yes': Enables S3-PR support and allows libvirt to manage the persistent reservation helper.
    • For singlepath devices, add the reservations element with managed='yes' to the disk device that requires S3-PR support:

      <disk type='block' device='lun'>
        <driver name='qemu' type='raw' cache='none'/>
        <source dev='/dev/sdb'>
          <reservations managed='yes'/>
        </source>
        <target dev='sda' bus='scsi'/>
        <address type='drive' controller='0' bus='0' target='0' unit='0'/>
      </disk>

      In this example:

      • type='block': Specifies that the device is a block device.
      • device='lun': Indicates that this is a SCSI Logical Unit Number (LUN) device, which is required for S3-PR support with block devices.
      • <source dev='/dev/sdb'/>: Specifies the host block device path. You can use device nodes, for example /dev/sdb, directly or use /dev/disk/by-path/ entries for better persistence across reboots.
      • <driver name='qemu' type='raw' cache='none'/>: Specifies the QEMU driver with raw format for direct device access.
      • reservations managed='yes': Enables S3-PR support and allows libvirt to manage the persistent reservation helper.
      • The disk uses the VirtIO-SCSI bus, which is required for S3-PR support.
  4. Save the XML configuration and start the VM.

    # virsh start <vm_name>
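Taken together, the <devices> section of the VM configuration then contains both the VirtIO-SCSI controller and the reservation-enabled disk. The following sketch combines the single-path snippets from the steps above:

```xml
<devices>
  <controller type='scsi' model='virtio-scsi' index='0'/>
  <disk type='block' device='lun'>
    <driver name='qemu' type='raw' cache='none'/>
    <source dev='/dev/sdb'>
      <reservations managed='yes'/>
    </source>
    <target dev='sda' bus='scsi'/>
    <address type='drive' controller='0' bus='0' target='0' unit='0'/>
  </disk>
</devices>
```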

Verification

  • You can verify the configuration on the host by displaying the XML configuration of the running VM.

    # virsh dumpxml <vm_name>

    Look for the reservations managed='yes' element in the disk device configuration.

  • In RHEL VMs, you can also use the sg3_utils package to check the persistent reservation capabilities of the SCSI device:

    1. Install the sg3_utils package in the RHEL VM:

      # dnf install sg3_utils
    2. Check the SCSI device’s persistent reservation capabilities:

      # sg_persist --in --report-capabilities /dev/sda

      If the device supports S3-PR, the output looks similar to the following:

      LBP-2: 0
      PTPL_C: 0
      TMC: 0
      [PTPL_A: 1]
      [PR_TYPE: 1, 3, 5]

      In this example, PTPL_A: 1 indicates that the device supports persistent reservations, and PR_TYPE: 1, 3, 5 shows the supported reservation types.
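The host-side dumpxml check above can also be scripted. The following is a minimal sketch, assuming a POSIX shell; check_s3pr is a hypothetical helper, not a libvirt command:

```shell
# Hypothetical helper: reads a libvirt domain XML dump on stdin and reports
# whether any disk is configured with managed persistent reservations.
check_s3pr() {
  if grep -q "<reservations managed='yes'/>"; then
    echo "S3-PR enabled"
  else
    echo "S3-PR not configured"
  fi
}

# Usage on the host:
#   virsh dumpxml <vm_name> | check_s3pr
```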

14.12. Attaching DASD devices to virtual machines on IBM Z

By using the vfio-ccw feature, you can assign direct-access storage devices (DASDs) as mediated devices to your virtual machines (VMs) on IBM Z hosts. This makes it possible, for example, for the VM to access a z/OS data set, or to provide the assigned DASDs to a z/OS machine.

Prerequisites

  • You have a system with the IBM Z hardware architecture that supports the FICON protocol.
  • You have a target VM that runs a Linux operating system.
  • The driverctl package is installed.

    # dnf install driverctl
  • The necessary vfio kernel modules have been loaded on the host.

    # lsmod | grep vfio

    The output of this command must contain the following modules:

    • vfio_ccw
    • vfio_mdev
    • vfio_iommu_type1
  • You have a spare DASD device for exclusive use by the VM, and you know the identifier of the device.

    The following procedure uses 0.0.002c as an example. When performing the commands, replace 0.0.002c with the identifier of your DASD device.
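The module check in the prerequisites can be automated. The following is a minimal sketch, assuming a POSIX shell; check_vfio_modules is a hypothetical helper that parses lsmod output:

```shell
# Hypothetical helper: reads `lsmod` output on stdin and reports any of the
# vfio modules required for vfio-ccw that are not loaded.
check_vfio_modules() {
  listing=$(cat)
  missing=""
  for m in vfio_ccw vfio_mdev vfio_iommu_type1; do
    echo "$listing" | grep -q "^$m " || missing="$missing $m"
  done
  if [ -z "$missing" ]; then
    echo "all vfio modules loaded"
  else
    echo "missing:$missing"
  fi
}

# Usage on the host:
#   lsmod | check_vfio_modules
```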

Procedure

  1. Obtain the subchannel identifier of the DASD device.

    # lscss -d 0.0.002c
    Device   Subchan.  DevType CU Type Use  PIM PAM POM  CHPIDs
    ----------------------------------------------------------------------
    0.0.002c 0.0.29a8  3390/0c 3990/e9 yes  f0  f0  ff   02111221 00000000

    In this example, the subchannel identifier is detected as 0.0.29a8. In the following commands of this procedure, replace 0.0.29a8 with the detected subchannel identifier of your device.

  2. If the lscss command in the previous step displayed only the header output and no device information, perform the following steps:

    1. Remove the device from the cio_ignore list.

      # cio_ignore -r 0.0.002c
    2. In the guest operating system, edit the kernel command line of the VM and add the device identifier with a ! mark to the line that starts with cio_ignore=, if it is not present already.

      cio_ignore=all,!condev,!0.0.002c
    3. Repeat step 1 on the host to obtain the subchannel identifier.
  3. Bind the subchannel to the vfio_ccw passthrough driver.

    # driverctl -b css set-override 0.0.29a8 vfio_ccw
    Note

    This binds the 0.0.29a8 subchannel to vfio_ccw persistently, which means the DASD is not usable on the host while the override is in place. If you need to use the device on the host, you must first remove the automatic binding to vfio_ccw and rebind the subchannel to its default driver:

    # driverctl -b css unset-override 0.0.29a8

  4. Define and start the DASD mediated device.

    # cat nodedev.xml
    <device>
        <parent>css_0_0_29a8</parent>
        <capability type="mdev">
            <type id="vfio_ccw-io"/>
        </capability>
    </device>
    
    # virsh nodedev-define nodedev.xml
    Node device 'mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8' defined from 'nodedev.xml'
    
    # virsh nodedev-start mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8
    Device mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8 started
  5. Shut down the VM, if it is running.
  6. Display the UUID of the previously defined device and save it for the next step.

    # virsh nodedev-dumpxml mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8
    
    <device>
      <name>mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8</name>
      <parent>css_0_0_29a8</parent>
      <capability type='mdev'>
        <type id='vfio_ccw-io'/>
        <uuid>30820a6f-b1a5-4503-91ca-0c10ba12345a</uuid>
        <iommuGroup number='0'/>
        <attr name='assign_adapter' value='0x02'/>
        <attr name='assign_domain' value='0x002b'/>
      </capability>
    </device>
  7. Attach the mediated device to the VM. To do so, use the virsh edit utility to edit the XML configuration of the VM, add the following section to the XML, and replace the uuid value with the UUID you obtained in the previous step.

    <hostdev mode='subsystem' type='mdev' model='vfio-ccw'>
      <source>
        <address uuid="30820a6f-b1a5-4503-91ca-0c10ba12345a"/>
      </source>
    </hostdev>
  8. Optional: Configure the mediated device to start automatically on host boot.

    # virsh nodedev-autostart mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8
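The subchannel lookup in step 1 and the driver binding in step 3 can be combined in a small script. The following is a minimal sketch, assuming a POSIX shell and the lscss output layout shown above; get_subchannel is a hypothetical helper:

```shell
# Hypothetical helper: reads `lscss -d <device>` output on stdin and prints
# the subchannel identifier of the given DASD device.
get_subchannel() {
  # $1 = device identifier, for example 0.0.002c
  awk -v dev="$1" '$1 == dev { print $2 }'
}

# Usage on the host, for example to bind the subchannel in one step:
#   subchannel=$(lscss -d 0.0.002c | get_subchannel 0.0.002c)
#   driverctl -b css set-override "$subchannel" vfio_ccw
```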

Verification

  1. Ensure that the mediated device is configured correctly.

    # virsh nodedev-info mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8
    Name:           mdev_30820a6f_b1a5_4503_91ca_0c10ba12345a_0_0_29a8
    Parent:         css_0_0_29a8
    Active:         yes
    Persistent:     yes
    Autostart:      yes
  2. Obtain the identifier that libvirt assigned to the mediated DASD device. To do so, display the XML configuration of the VM and look for a vfio-ccw device.

    # virsh dumpxml <vm_name>
    
    <domain>
    [...]
        <hostdev mode='subsystem' type='mdev' managed='no' model='vfio-ccw'>
          <source>
            <address uuid='30820a6f-b1a5-4503-91ca-0c10ba12345a'/>
          </source>
          <alias name='hostdev0'/>
          <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0009'/>
        </hostdev>
    [...]
    </domain>

    In this example, the assigned identifier of the device is 0.0.0009.

  3. Start the VM and log in to its guest operating system.
  4. In the guest operating system, confirm that the DASD device is listed. For example:

    # lscss | grep 0.0.0009
    0.0.0009 0.0.0007  3390/0c 3990/e9      f0  f0  ff   12212231 00000000
  5. In the guest operating system, set the device online. For example:

    # chccwdev -e 0.0.0009
    Setting device 0.0.0009 online
    Done
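The identifier lookup in step 2 of the verification can also be scripted. The following is a minimal sketch, assuming a POSIX shell, an ssid of 0, and the domain XML layout shown above; get_ccw_devno is a hypothetical helper:

```shell
# Hypothetical helper: reads domain XML on stdin and prints the guest device
# identifier derived from the <address type='ccw'> element, assuming ssid 0.
get_ccw_devno() {
  sed -n "s/.*<address type='ccw'.*devno='0x\([0-9a-f]*\)'.*/0.0.\1/p"
}

# Usage on the host:
#   virsh dumpxml <vm_name> | get_ccw_devno
```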

14.13. Attaching virtual watchdog devices to virtual machines

To force the virtual machine (VM) to perform a specified action when it stops responding, you can attach a virtual watchdog device to the VM.

Procedure

  1. On the command line, install the watchdog service.

    # dnf install watchdog

  2. Shut down the VM.
  3. Add the watchdog device to the VM.

    # virt-xml <vm_name> --add-device --watchdog action=reset --update

  4. Run the VM.
  5. Log in to the RHEL 10 web console.
  6. In the Virtual Machines interface of the web console, click the VM to which you want to add the watchdog device.
  7. Click add next to the Watchdog field in the Overview pane.

    The Add watchdog device type dialog is displayed.

  8. Select the action that you want the watchdog device to perform if the VM stops responding.
  9. Click Add.

Verification

  • The action you selected is visible next to the Watchdog field in the Overview pane.
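For reference, attaching the watchdog device adds a <watchdog> element to the domain XML similar to the following sketch. The model shown, i6300esb, is a common default on AMD64 and Intel 64 hosts; the exact model on your system may differ:

```xml
<watchdog model='i6300esb' action='reset'/>
```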