13.7. Managing SR-IOV devices

An emulated virtual device often uses more CPU and memory than a hardware network device. This can limit the performance of a virtual machine (VM). However, if any devices on your virtualization host support Single Root I/O Virtualization (SR-IOV), you can use this feature to improve the device performance, and possibly also the overall performance of your VMs.

13.7.1. What is SR-IOV?

Single-root I/O virtualization (SR-IOV) is a specification that enables a single PCI Express (PCIe) device to present multiple separate PCI devices, called virtual functions (VFs), to the host system. Each of these devices:

  • Is able to provide the same or similar service as the original PCIe device.
  • Appears at a different address on the host PCI bus.
  • Can be assigned to a different VM using VFIO assignment.

For example, a single SR-IOV capable network device can present VFs to multiple VMs. While all of the VFs use the same physical card, the same network connection, and the same network cable, each of the VMs directly controls its own hardware network device, and uses no extra resources from the host.

How SR-IOV works

The SR-IOV functionality is possible thanks to the introduction of the following PCIe functions:

  • Physical functions (PFs) - A PCIe function that provides the functionality of its device (for example networking) to the host, but can also create and manage a set of VFs. Each SR-IOV capable device has one or more PFs.
  • Virtual functions (VFs) - Lightweight PCIe functions that behave as independent devices. Each VF is derived from a PF. The maximum number of VFs a device can have depends on the device hardware. Each VF can be assigned only to a single VM at a time, but a VM can have multiple VFs assigned to it.

VMs recognize VFs as virtual devices. For example, a VF created by an SR-IOV network device appears as a network card to a VM to which it is assigned, in the same way as a physical network card appears to the host system.
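
On the host, this relationship is visible in sysfs: each VF appears as a virtfnN symbolic link in the PF's device directory, and each VF device directory contains a physfn link pointing back to its PF. As a minimal illustration, assuming eth1 is a PF on which two VFs have already been created (VF creation is described in Section 13.7.2) and using example PCI addresses:

    # readlink /sys/class/net/eth1/device/virtfn*
    ../0000:82:10.0
    ../0000:82:10.2
    # readlink /sys/bus/pci/devices/0000:82:10.0/physfn
    ../0000:82:00.0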

Figure 13.1. SR-IOV architecture

Advantages

The primary advantages of using SR-IOV VFs rather than emulated devices are:

  • Improved performance
  • Reduced use of host CPU and memory resources

For example, a VF attached to a VM as a vNIC performs at almost the same level as a physical NIC, and much better than paravirtualized or emulated NICs. In particular, when multiple VFs are used simultaneously on a single host, the performance benefits can be significant.

Disadvantages

  • To modify the configuration of a PF, you must first change the number of VFs exposed by the PF to zero. Therefore, you also need to remove the devices provided by these VFs from the VMs to which they are assigned (see the sketch after this list).
  • A VM with VFIO-assigned devices attached, including SR-IOV VFs, cannot be migrated to another host. In some cases, you can work around this limitation by pairing the assigned device with an emulated device. For example, you can bond an assigned networking VF to an emulated vNIC, and remove the VF before the migration.
  • In addition, VFIO-assigned devices require pinning of VM memory, which increases the VM's memory consumption and prevents the use of memory ballooning on the VM.
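
For example, to clear the VFs from a PF before reconfiguring it, as mentioned in the first point, the following minimal sketch assumes that eth1 is the PF and that all of its VFs have already been removed from the VMs they were assigned to:

    # echo 0 > /sys/class/net/eth1/device/sriov_numvfs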

13.7.2. Attaching SR-IOV networking devices to virtual machines

To attach an SR-IOV networking device to a virtual machine (VM) on an Intel or AMD host, you must create a virtual function (VF) from an SR-IOV capable network interface on the host and assign the VF as a device to a specified VM. For details, see the following instructions.

Prerequisites

  • The CPU and the firmware of your host support the I/O Memory Management Unit (IOMMU).

    • If using an Intel CPU, it must support the Intel Virtualization Technology for Directed I/O (VT-d).
    • If using an AMD CPU, it must support the AMD-Vi feature.
  • The host system uses Access Control Service (ACS) to provide direct memory access (DMA) isolation for PCIe topology. Verify this with the system vendor.

    For additional information, see Hardware Considerations for Implementing SR-IOV.

  • The physical network device supports SR-IOV. To verify if any network devices on your system support SR-IOV, use the lspci -v command and look for Single Root I/O Virtualization (SR-IOV) in the output.

    # lspci -v
    [...]
    02:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
    	Subsystem: Intel Corporation Gigabit ET Dual Port Server Adapter
    	Flags: bus master, fast devsel, latency 0, IRQ 16, NUMA node 0
    	Memory at fcba0000 (32-bit, non-prefetchable) [size=128K]
    [...]
    	Capabilities: [150] Alternative Routing-ID Interpretation (ARI)
    	Capabilities: [160] Single Root I/O Virtualization (SR-IOV)
    	Kernel driver in use: igb
    	Kernel modules: igb
    [...]
  • The host network interface you want to use for creating VFs is running. For example, to activate the eth1 interface and verify it is running:

    # ip link set eth1 up
    # ip link show eth1
    8: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT qlen 1000
       link/ether a0:36:9f:8f:3f:b8 brd ff:ff:ff:ff:ff:ff
       vf 0 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
       vf 1 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
       vf 2 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
       vf 3 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
  • For SR-IOV device assignment to work, the IOMMU feature must be enabled in the host BIOS and kernel. To do so:

    • On an Intel host, enable VT-d:

      1. Regenerate the GRUB configuration with the intel_iommu=on and iommu=pt parameters:

        # grubby --args="intel_iommu=on iommu=pt" --update-kernel=ALL
      2. Reboot the host.
    • On an AMD host, enable AMD-Vi:

      1. Regenerate the GRUB configuration with the iommu=pt parameter:

        # grubby --args="iommu=pt" --update-kernel=ALL
      2. Reboot the host.
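
    Optionally, after the reboot, you can check that the kernel parameters took effect and that the kernel detected an IOMMU. The exact messages vary by platform and kernel version, so treat the following only as a rough check:

      # grep -E 'intel_iommu=on|iommu=pt' /proc/cmdline
      # dmesg | grep -i -e DMAR -e AMD-Vi

    On Intel hosts, look for DMAR and IOMMU messages; on AMD hosts, look for AMD-Vi messages.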

Procedure

  1. Optional: Confirm the maximum number of VFs your network device can use. To do so, use the following command and replace eth1 with your SR-IOV compatible network device.

    # cat /sys/class/net/eth1/device/sriov_totalvfs
    7
  2. Use the following command to create a virtual function (VF):

    # echo VF-number > /sys/class/net/network-interface/device/sriov_numvfs

    In the command, replace:

    • VF-number with the number of VFs you want to create on the PF.
    • network-interface with the name of the network interface for which the VFs will be created.

    The following example creates 2 VFs from the eth1 network interface:

    # echo 2 > /sys/class/net/eth1/device/sriov_numvfs
  3. Verify the VFs have been added:

    # lspci | grep Ethernet
    82:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
    82:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
    82:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
    82:10.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
  4. Make the created VFs persistent by creating a udev rule for the network interface you used to create the VFs. For example, for the eth1 interface, create the /etc/udev/rules.d/eth1.rules file, and add the following line:

    ACTION=="add", SUBSYSTEM=="net", ENV{ID_NET_DRIVER}=="ixgbe", ATTR{device/sriov_numvfs}="2"

    This ensures that the two VFs that use the ixgbe driver will automatically be available for the eth1 interface when the host starts. If you do not require persistent SR-IOV devices, skip this step.

    Warning

    Currently, the setting described above does not work correctly when attempting to make VFs persistent on Broadcom NetXtreme II BCM57810 adapters. In addition, attaching VFs based on these adapters to Windows VMs is currently not reliable.

  5. Hot-plug one of the newly added VF interface devices to a running VM.

    # virsh attach-interface testguest1 hostdev 0000:82:10.0 --managed --live --config

Verification

  • If the procedure is successful, the guest operating system detects a new network interface card.
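
  • From the host, you can also check the interface list of the VM. A successfully attached VF is expected to appear as an interface of type hostdev, with the PCI address of the VF as its source:

    # virsh domiflist testguest1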

13.7.3. Supported devices for SR-IOV assignment

Not all devices can be used for SR-IOV. The following devices have been tested and verified as compatible with SR-IOV in RHEL 9.

Networking devices

  • Intel 82599ES 10 Gigabit Ethernet Controller - uses the ixgbe driver
  • Intel Ethernet Controller XL710 Series - uses the i40e driver
  • Mellanox ConnectX-5 Ethernet Adapter Cards - use the mlx5_core driver
  • Intel Ethernet Network Adapter XXV710 - uses the i40e driver
  • Intel 82576 Gigabit Ethernet Controller - uses the igb driver
  • Broadcom NetXtreme II BCM57810 - uses the bnx2x driver
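
To check which driver a specific network interface on your host uses, you can query the interface with the ethtool utility. The interface name eth1 below is only an example:

    # ethtool -i eth1
    driver: ixgbe
    [...]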