Chapter 15. PCI passthrough


This chapter covers using PCI passthrough with Xen and KVM hypervisors.
The KVM and Xen hypervisors support attaching PCI devices on the host system to guests. PCI passthrough gives a guest exclusive access to a PCI device for a range of tasks, and allows the device to appear and behave as if it were physically attached to the guest operating system.
PCI devices are limited by the virtualized system architecture. A guest has 32 PCI slots available, 4 of which are not removable. This leaves up to 28 PCI slots for additional devices per guest. Each PCI device can have up to 8 functions; some PCI devices provide multiple functions and only use one slot. Para-virtualized network devices, para-virtualized disk devices, and PCI devices assigned with VT-d all consume slots or functions, so the exact number of devices available to a particular guest depends on how many of these are configured. In total, each guest can use up to 32 PCI devices, with each device having up to 8 functions.
The VT-d or AMD IOMMU extensions must be enabled in BIOS.

Procedure 15.1. Preparing an Intel system for PCI passthrough

  1. Enable the Intel VT-d extensions

    The Intel VT-d extensions provide hardware support for directly assigning physical devices to guests. The main benefit of the feature is near-native performance for device access.
    The VT-d extensions are required for PCI passthrough with Red Hat Enterprise Linux. The extensions must be enabled in the BIOS. Some system manufacturers disable these extensions by default.
    The names used for these extensions in the BIOS vary from manufacturer to manufacturer. Consult your system manufacturer's documentation.
  2. Activate Intel VT-d in the kernel

    Activate Intel VT-d in the kernel by appending the intel_iommu=on parameter to the kernel line in the /boot/grub/grub.conf file.
    The example below is a modified grub.conf file with Intel VT-d activated.
    default=0
    timeout=5
    splashimage=(hd0,0)/grub/splash.xpm.gz
    hiddenmenu
    title Red Hat Enterprise Linux Server (2.6.18-190.el5)
       root (hd0,0)
       kernel /vmlinuz-2.6.18-190.el5 ro root=/dev/VolGroup00/LogVol00 intel_iommu=on
       initrd /initrd-2.6.18-190.el5.img
  3. Ready to use

    Reboot the system to enable the changes. Your system is now PCI passthrough capable.
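    After rebooting, one way to confirm that the kernel detected and enabled the IOMMU is to search the boot messages for DMAR entries (the exact messages vary by kernel version):
    # dmesg | grep -e DMAR -e IOMMU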

Procedure 15.2. Preparing an AMD system for PCI passthrough

  • Enable AMD IOMMU extensions

    The AMD IOMMU extensions are required for PCI passthrough with Red Hat Enterprise Linux. The extensions must be enabled in the BIOS. Some system manufacturers disable these extensions by default.
AMD systems only require that the IOMMU is enabled in the BIOS. The system is ready for PCI passthrough once the IOMMU is enabled.
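One way to confirm that the running kernel detected the AMD IOMMU is to search the boot messages (the exact wording varies by kernel version):
# dmesg | grep -i iommu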

Important

Xen and KVM require different kernel arguments to enable PCI passthrough. The previous instructions are for KVM. For both AMD and Intel systems, PCI passthrough on Xen requires adding the iommu=on parameter to the hypervisor command line. Modify the /boot/grub/grub.conf file as follows to enable PCI passthrough:
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.18-192.el5)
   root (hd0,0)
   kernel /xen.gz-2.6.18-192.el5 iommu=on
   module /vmlinuz-2.6.18-192.el5xen ro root=/dev/VolGroup00/LogVol00 
   module /initrd-2.6.18-192.el5xen.img
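After rebooting into the Xen kernel, the hypervisor boot log can be checked for IOMMU or VT-d messages (the wording varies by Xen version):
# xm dmesg | grep -i -e iommu -e vt-d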

15.1. Adding a PCI device with virsh

These steps cover adding a PCI device to a fully virtualized guest under the Xen or KVM hypervisors using hardware-assisted PCI passthrough. See Section 15.5, “PCI passthrough for para-virtualized Xen guests on Red Hat Enterprise Linux” for details on adding a PCI device to a para-virtualized Xen guest.

Important

The VT-d or AMD IOMMU extensions must be enabled in BIOS.
This example uses a USB controller device with the PCI identifier code pci_8086_3a6c and a fully virtualized guest named win2k3.
  1. Identify the device

    Identify the PCI device designated for passthrough to the guest. The virsh nodedev-list command lists all devices attached to the system. The --tree option is useful for identifying devices attached to the PCI device (for example, disk controllers and USB controllers).
    # virsh nodedev-list --tree
    For a list of only PCI devices, run the following command:
    # virsh nodedev-list | grep pci
    Each PCI device is identified by a string in the following format (where 8086 is the four-digit hexadecimal vendor ID, in this case Intel, and **** is a four-digit hexadecimal code specific to each device):
    pci_8086_****
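    For example, the filtered list might contain entries such as the following (illustrative output; the entries vary by system):
    pci_8086_3a6a
    pci_8086_3a6c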

    Note

    Comparing lspci output to lspci -n output (which turns off name resolution) can help determine which device has which identifier code.
    Record the PCI device number; the number is needed in other steps.
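    For example, assuming the example controller sits at PCI address 00:1a.7, the two outputs can be compared directly:
    # lspci -s 00:1a.7
    00:1a.7 USB Controller: Intel Corporation 82801JD/DO (ICH10 Family) USB2 EHCI Controller #2
    # lspci -n -s 00:1a.7
    00:1a.7 0c03: 8086:3a6c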
  2. Information on the domain, bus, slot, and function is available from the output of the virsh nodedev-dumpxml command:
    # virsh nodedev-dumpxml pci_8086_3a6c
    <device>
      <name>pci_8086_3a6c</name>
      <parent>computer</parent>
      <capability type='pci'>
        <domain>0</domain>
        <bus>0</bus>
        <slot>26</slot>
        <function>7</function>
        <product id='0x3a6c'>82801JD/DO (ICH10 Family) USB2 EHCI Controller #2</product>
        <vendor id='0x8086'>Intel Corporation</vendor>
      </capability>
    </device>
  3. Detach the device from the host system. Devices still attached to the host cannot be assigned to a guest and may cause various errors if assigned without being detached first.
    # virsh nodedev-dettach pci_8086_3a6c 
    Device pci_8086_3a6c dettached
  4. Convert the slot and function values from decimal to hexadecimal to get the PCI bus addresses. Prepend "0x" to the output to mark the value as a hexadecimal number.
    For example, if bus = 0, slot = 26, and function = 7, run the following:
    $ printf %x 0
    0
    $ printf %x 26
    1a
    $ printf %x 7
    7
    The values to use:
    bus='0x00'
    slot='0x1a'
    function='0x7'
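    The three conversions can also be combined into a single printf call (using the example values above):
    $ printf "bus='0x%02x' slot='0x%02x' function='0x%x'\n" 0 26 7
    bus='0x00' slot='0x1a' function='0x7'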
  5. Run virsh edit (or virsh attach-device) and add a device entry in the <devices> section to attach the PCI device to the guest. Only run this command on offline guests; Red Hat Enterprise Linux does not support hotplugging PCI devices at this time.
    # virsh edit win2k3
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
          <address domain='0x0000' bus='0x00' slot='0x1a' function='0x7'/>
      </source>
    </hostdev>
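    Alternatively, the same <hostdev> definition can be saved to a file and applied with virsh attach-device (the file name here is arbitrary; depending on the libvirt version this may require additional options or a running guest):
    # virsh attach-device win2k3 usb-controller.xml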
  6. Once the guest is configured to use the PCI address, the host system must stop using the device. The ehci driver is loaded by default for the USB PCI controller.
    $ readlink /sys/bus/pci/devices/0000\:00\:1a.7/driver
    ../../../bus/pci/drivers/ehci_hcd
  7. Detach the device (if it was not already detached in step 3):
    $ virsh nodedev-dettach pci_8086_3a6c
  8. Verify that it is now under the control of pci-stub:
    $ readlink /sys/bus/pci/devices/0000\:00\:1a.7/driver
    ../../../bus/pci/drivers/pci-stub
  9. Set an SELinux boolean to allow the management of the PCI device from the guest:
    # setsebool -P virt_use_sysfs 1
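    To confirm that the boolean took effect, query it with getsebool:
    # getsebool virt_use_sysfs
    virt_use_sysfs --> on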
  10. Start the guest system:
    # virsh start win2k3
The PCI device should now be successfully attached to the guest and accessible to the guest operating system.
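To confirm the attachment from the host, inspect the guest's configuration for the <hostdev> entry (a quick check using the guest name from this example):
# virsh dumpxml win2k3 | grep -A 3 hostdev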