Appendix E. Working with IOMMU Groups
Introduced in Red Hat Enterprise Linux 7, Virtual Function I/O (VFIO) is a set of Linux kernel modules that provide a user-space driver framework. This framework uses input/output memory management unit (IOMMU) protection to enable secure device access for user-space drivers. VFIO enables user-space drivers such as the Data Plane Development Kit (DPDK), as well as the more common PCI device assignment.
VFIO uses IOMMU groups to isolate devices and prevent unintentional Direct Memory Access (DMA) between two devices running on the same host physical machine, which would impact host and guest functionality. IOMMU groups, available in Red Hat Enterprise Linux 7, are a significant improvement over the legacy KVM device assignment available in Red Hat Enterprise Linux 6. This appendix highlights the following:
- An overview of IOMMU groups
- The importance of device isolation
- VFIO benefits
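On a running host, IOMMU groups can be inspected directly through sysfs. The following is a minimal sketch, assuming a Linux host booted with the IOMMU enabled (for example, intel_iommu=on or amd_iommu=on); it walks the standard /sys/kernel/iommu_groups hierarchy and prints each group's member devices:

    #!/usr/bin/env python3
    """Print each IOMMU group and its member devices from sysfs."""
    import os

    GROUPS_DIR = "/sys/kernel/iommu_groups"

    for name in sorted(os.listdir(GROUPS_DIR), key=int):
        devices = sorted(os.listdir(os.path.join(GROUPS_DIR, name, "devices")))
        # Devices in the same group are not isolated from one another and can
        # only be assigned to a guest as a unit.
        print(f"IOMMU group {name}: {', '.join(devices)}")

If the directory is empty, the host either lacks an IOMMU or was not booted with it enabled.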
E.1. IOMMU Overview
An IOMMU creates a virtual address space for a device, in which each I/O Virtual Address (IOVA) can translate to a different address in physical system memory. The translation is transparent to the device: it issues a DMA to an IOVA, and the IOMMU redirects the access to the corresponding physical address. Without an IOMMU, all devices share a flat view of physical memory because they lack memory address translation. With an IOMMU, each device receives the IOVA space as a new address space, which is useful for device assignment.
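To make the translation step concrete, the following toy model (illustrative only; real IOMMUs use hardware page tables, and all addresses here are invented) treats the IOVA space as a page-granular lookup table:

    PAGE_SIZE = 4096  # translation is performed at page granularity

    iova_table = {}   # IOVA page number -> physical page number

    def iommu_map(iova, phys):
        iova_table[iova // PAGE_SIZE] = phys // PAGE_SIZE

    def iommu_translate(iova):
        # A real IOMMU faults on an unmapped IOVA; here that is a KeyError.
        return iova_table[iova // PAGE_SIZE] * PAGE_SIZE + iova % PAGE_SIZE

    iommu_map(iova=0x1000, phys=0x7fff0000)       # one 4 KB mapping
    assert iommu_translate(0x1234) == 0x7fff0234  # page offset is preserved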
Different IOMMUs have different levels of functionality. In the past, IOMMUs were limited, providing only translation, and often only for a small window of the address space. For example, the IOMMU would only reserve a small window (1 GB or less) of IOVA space in low memory, which was shared by multiple devices. The AMD graphics address remapping table (GART), when used as a general-purpose IOMMU, is an example of this model. These classic IOMMUs mostly provided two capabilities: bounce buffers and address coalescing.
- Bounce buffers are necessary when the addressing capability of the device is less than that of the platform. For example, if a device can only address 4 GB (32 bits) of memory and the driver allocates a buffer above 4 GB, the device cannot access the buffer directly. Such a situation necessitates a bounce buffer: a region of lower memory that the device can reach with DMA. The data is copied to the driver's allocated buffer only on completion of the operation; in other words, it is bounced from a lower memory address to a higher memory address. An IOMMU avoids bounce buffering by providing an IOVA translation within the device's addressable range, allowing the device to perform a DMA operation directly into the buffer even when the buffer lies beyond the physical address reach of the device (see the first sketch after this list). Historically, this was often the IOMMU's exclusive use case, but with the adoption of PCI-Express (PCIe), the ability to support addressing above 4 GB is required for all non-legacy endpoints.
- In traditional memory allocation, blocks of memory are assigned and freed as the application needs them, which leaves buffers scattered throughout the physical address space. I/O is more efficient when these scattered buffers, described by scatter-gather lists, are presented to the device as a single contiguous range. The IOMMU coalesces them through the IOVA space, creating contiguous DMA operations and ultimately increasing I/O performance. In the simplest example, a driver may allocate two 4 KB buffers that are not contiguous in physical memory; the IOMMU can map them to a contiguous IOVA range, allowing the I/O device to perform a single 8 KB DMA rather than two separate 4 KB DMAs (see the second sketch after this list).
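The first sketch below contrasts the two DMA paths described in the bounce-buffer item. It is a toy model, not the kernel's DMA API: physical memory is a dictionary, the addresses are invented, and the "device" is a plain function call.

    DEVICE_DMA_LIMIT = 1 << 32            # a 32-bit device can only reach 4 GB

    def bounce_dma(data, dest_phys, phys_mem):
        """No IOMMU: the device DMAs into a low page, then the CPU copies up."""
        bounce_phys = 0x200000                           # scratch page below 4 GB
        phys_mem[bounce_phys] = data                     # 1) device writes the bounce page
        phys_mem[dest_phys] = phys_mem.pop(bounce_phys)  # 2) CPU "bounces" it upward

    def mapped_dma(data, dest_phys, phys_mem, iova_table):
        """With an IOMMU: map dest_phys at a low IOVA; the device writes once."""
        iova = 0x100000                    # an IOVA the 32-bit device can reach
        iova_table[iova] = dest_phys       # IOVA -> physical translation entry
        phys_mem[iova_table[iova]] = data  # single DMA, no CPU copy

    phys_mem, iova_table = {}, {}
    high_buf = 0x1_2345_6000               # driver buffer above the 4 GB limit
    assert high_buf >= DEVICE_DMA_LIMIT
    bounce_dma(b"pkt", high_buf, phys_mem)              # two memory writes
    mapped_dma(b"pkt", high_buf, phys_mem, iova_table)  # one memory write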
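The second sketch reuses the page-table helpers from the earlier translation sketch to show coalescing: two physically scattered 4 KB buffers are mapped back to back in IOVA space, so the device can issue one 8 KB DMA. Again, all addresses are illustrative.

    PAGE_SIZE = 4096
    iova_table = {}  # IOVA page number -> physical page number

    def iommu_map(iova, phys):
        iova_table[iova // PAGE_SIZE] = phys // PAGE_SIZE

    def iommu_translate(iova):
        return iova_table[iova // PAGE_SIZE] * PAGE_SIZE + iova % PAGE_SIZE

    # Two 4 KB buffers at non-contiguous physical addresses...
    buf_a, buf_b = 0x9e000, 0x4c2000
    iova = 0x300000                      # ...mapped back to back in IOVA space
    iommu_map(iova, buf_a)
    iommu_map(iova + PAGE_SIZE, buf_b)

    # The device sees one contiguous 8 KB range: a single 8 KB DMA now
    # replaces two separate 4 KB DMAs.
    assert iommu_translate(iova) == buf_a
    assert iommu_translate(iova + PAGE_SIZE) == buf_b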
Although memory coalescing and bounce buffering are important for high-performance I/O on the host, the IOMMU feature that is essential for a virtualization environment is the isolation capability of modern IOMMUs. Isolation was not possible on a wide scale prior to PCI-Express, because conventional PCI does not tag transactions with an ID of the requesting device (requester ID). Although PCI-X included a degree of requester ID support, its rules allowing interconnecting devices to take ownership of transactions did not provide complete device isolation.
With PCIe, each device’s transaction is tagged with a requester ID unique to the device (the PCI bus/device/function number, often abbreviated as BDF), which is used to reference a unique IOVA table for that device. Now that isolation is possible, the IOVA space can not only be used for translation operations such as reaching otherwise unaddressable memory and coalescing scattered buffers, but also to restrict DMA access from the device. This allows devices to be isolated from each other, preventing duplicate assignment of memory spaces, which is essential for proper guest virtual machine device management. Using these features on a guest virtual machine involves populating the IOVA space for the assigned device with the guest-physical-to-host-physical memory mappings for the virtual machine. Once this is done, the device transparently performs DMA operations in the guest virtual machine’s address space.
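As an illustration of this isolation, the toy translation table can be keyed by requester ID so that one device's mappings are invisible to another. This is a sketch only: the BDFs and addresses are invented, and a real deployment would use the VFIO interfaces rather than this model.

    PAGE_SIZE = 4096
    tables = {}  # requester ID (PCI BDF) -> {IOVA page -> physical page}

    def iommu_map(bdf, iova, phys):
        tables.setdefault(bdf, {})[iova // PAGE_SIZE] = phys // PAGE_SIZE

    def device_dma(bdf, iova):
        # Each PCIe transaction carries its requester ID; an access with no
        # matching IOVA entry is blocked instead of reaching host memory.
        table = tables.get(bdf, {})
        if iova // PAGE_SIZE not in table:
            raise PermissionError(f"DMA from {bdf} to {hex(iova)} blocked")
        return table[iova // PAGE_SIZE] * PAGE_SIZE + iova % PAGE_SIZE

    # Populate the assigned device's IOVA space with the guest's
    # guest-physical -> host-physical mappings (one page shown).
    iommu_map("0000:01:00.0", iova=0x0, phys=0x8_0000_0000)

    device_dma("0000:01:00.0", 0x0)      # the assigned device reaches its page
    try:
        device_dma("0000:02:00.0", 0x0)  # any other device is blocked
    except PermissionError as err:
        print(err)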
[1]
The original content for this appendix was provided by Alex Williamson, Principal Software Engineer.