16.7. Assigning GPU Devices
A GPU can be assigned to a guest using one of the following methods:
- GPU PCI Device Assignment - Using this method, it is possible to remove a GPU device from the host and assign it to a single guest.
- NVIDIA vGPU Assignment - This method makes it possible to create multiple mediated devices from a physical GPU, and assign these devices as virtual GPUs to multiple guests. This is only supported on selected NVIDIA GPUs, and only one mediated device can be assigned to a single guest.
16.7.1. GPU PCI Device Assignment
The following GPU devices can be attached to a guest using PCI device assignment:
- NVIDIA Quadro K-Series, M-Series, P-Series, and later architectures (models 2000 series or later)
- NVIDIA Tesla K-Series, M-Series, and later architectures
To assign a GPU to a guest, enable IOMMU support in the host machine kernel, identify the GPU device by using the lspci command, detach the device from the host, attach it to the guest, and configure Xorg on the guest - as described in the following procedures:
Procedure 16.13. Enable IOMMU support in the host machine kernel
Edit the kernel command line
For an Intel VT-d system, IOMMU is activated by adding the intel_iommu=on and iommu=pt parameters to the kernel command line. For an AMD-Vi system, only the iommu=pt parameter is needed. To enable this option, edit or add the GRUB_CMDLINE_LINUX line in the /etc/sysconfig/grub configuration file as follows:
GRUB_CMDLINE_LINUX="rd.lvm.lv=vg_VolGroup00/LogVol01 vconsole.font=latarcyrheb-sun16 rd.lvm.lv=vg_VolGroup_1/root vconsole.keymap=us $([ -x /usr/sbin/rhcrashkernel-param ] && /usr/sbin/rhcrashkernel-param || :) rhgb quiet intel_iommu=on iommu=pt"
Note
For further information on IOMMU, see Appendix E, Working with IOMMU Groups.
Regenerate the boot loader configuration
For the changes to the kernel command line to apply, regenerate the boot loader configuration using the grub2-mkconfig command:
# grub2-mkconfig -o /etc/grub2.cfg
Note that if you are using a UEFI-based host, the target file should be /etc/grub2-efi.cfg.
Reboot the host
For the changes to take effect, reboot the host machine:
# reboot
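After the reboot, you can optionally confirm that IOMMU is active by searching the kernel log. The exact messages depend on the hardware and firmware; on an Intel VT-d system, output similar to the following indicates that IOMMU has been enabled:
# dmesg | grep -i -e DMAR -e IOMMU
DMAR: IOMMU enabled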
Procedure 16.14. Excluding the GPU device from binding to the host physical machine driver
Identify the PCI bus address
To identify the PCI bus address and IDs of the device, run the following lspci command. In this example, a VGA controller such as an NVIDIA Quadro or GRID card is used:
# lspci -Dnn | grep VGA
0000:02:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK106GL [Quadro K4000] [10de:11fa] (rev a1)
The resulting output shows that the PCI bus address of this device is 0000:02:00.0 and that the PCI IDs for the device are 10de:11fa.
Prevent the native host machine driver from using the GPU device
To prevent the native host machine driver from using the GPU device, you can use a PCI ID with the pci-stub driver. To do this, append the pci-stub.ids option, with the PCI IDs as its value, to the GRUB_CMDLINE_LINUX line located in the /etc/sysconfig/grub configuration file, for example as follows:
GRUB_CMDLINE_LINUX="rd.lvm.lv=vg_VolGroup00/LogVol01 vconsole.font=latarcyrheb-sun16 rd.lvm.lv=vg_VolGroup_1/root vconsole.keymap=us $([ -x /usr/sbin/rhcrashkernel-param ] && /usr/sbin/rhcrashkernel-param || :) rhgb quiet intel_iommu=on iommu=pt pci-stub.ids=10de:11fa"
To add additional PCI IDs for pci-stub, separate them with a comma.
Regenerate the boot loader configuration
Regenerate the boot loader configuration using the grub2-mkconfig command to include this option:
# grub2-mkconfig -o /etc/grub2.cfg
Note that if you are using a UEFI-based host, the target file should be /etc/grub2-efi.cfg.
Reboot the host machine
In order for the changes to take effect, reboot the host machine:
# reboot
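After the reboot, you can optionally verify that the GPU is now claimed by the pci-stub driver. For example, using the PCI IDs obtained above (output abbreviated):
# lspci -nnk -d 10de:11fa
02:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK106GL [Quadro K4000] [10de:11fa] (rev a1)
	Kernel driver in use: pci-stub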
Procedure 16.15. Optional: Editing the GPU IOMMU configuration
Display the XML information of the GPU
To display the settings of the GPU in XML form, you first need to convert its PCI bus address to libvirt-compatible format by prepending pci_ and converting delimiters to underscores. In this example, the GPU PCI device identified with the 0000:02:00.0 bus address (as obtained in the previous procedure) becomes pci_0000_02_00_0. Use the libvirt address of the device with the virsh nodedev-dumpxml command to display its XML configuration:
# virsh nodedev-dumpxml pci_0000_02_00_0
<device>
  <name>pci_0000_02_00_0</name>
  <path>/sys/devices/pci0000:00/0000:00:03.0/0000:02:00.0</path>
  <parent>pci_0000_00_03_0</parent>
  <driver>
    <name>pci-stub</name>
  </driver>
  <capability type='pci'>
    <domain>0</domain>
    <bus>2</bus>
    <slot>0</slot>
    <function>0</function>
    <product id='0x11fa'>GK106GL [Quadro K4000]</product>
    <vendor id='0x10de'>NVIDIA Corporation</vendor>
    <!-- pay attention to the following lines -->
    <iommuGroup number='13'>
      <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
      <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
    </iommuGroup>
    <pci-express>
      <link validity='cap' port='0' speed='8' width='16'/>
      <link validity='sta' speed='2.5' width='16'/>
    </pci-express>
  </capability>
</device>
Note the <iommuGroup> element of the XML. The iommuGroup indicates a set of devices that are considered isolated from other devices due to IOMMU capabilities and PCI bus topologies. All of the endpoint devices within the iommuGroup (meaning devices that are not PCIe root ports, bridges, or switch ports) need to be unbound from the native host drivers in order to be assigned to a guest. In the example above, the group is composed of the GPU device (0000:02:00.0) as well as the companion audio device (0000:02:00.1). For more information, see Appendix E, Working with IOMMU Groups.
Adjust IOMMU settings
In this example, assignment of NVIDIA audio functions is not supported due to hardware issues with legacy interrupt support. In addition, the GPU audio function is generally not useful without the GPU itself. Therefore, in order to assign the GPU to a guest, the audio function must first be detached from native host drivers. This can be done using one of the following:
- Detect the PCI ID for the device and append it to the pci-stub.ids option in the /etc/sysconfig/grub file, as detailed in Procedure 16.14, “Excluding the GPU device from binding to the host physical machine driver”
- Use the virsh nodedev-detach command, for example as follows:
# virsh nodedev-detach pci_0000_02_00_1
Device pci_0000_02_00_1 detached
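You can also list the members of the GPU's IOMMU group directly through sysfs to confirm which functions need to be detached. For example, using the bus address from the procedures above:
# ls /sys/bus/pci/devices/0000:02:00.0/iommu_group/devices/
0000:02:00.0  0000:02:00.1
If you later need to return a detached function to the host, the virsh nodedev-reattach command reverses the detach operation.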
Procedure 16.16. Attaching the GPU
The GPU can be attached to the guest using any of the following methods:
- Using the Virtual Machine Manager interface. For details, see Section 16.1.2, “Assigning a PCI Device with virt-manager”.
- Creating an XML configuration fragment for the GPU and attaching it with the virsh attach-device command:
- Create an XML for the device, similar to the following:
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
  </source>
</hostdev>
- Save this to a file and run virsh attach-device [domain] [file] --persistent to include the XML in the guest configuration; a concrete example is shown after this list. Note that the assigned GPU is added in addition to the existing emulated graphics device in the guest machine. The assigned GPU is handled as a secondary graphics device in the virtual machine. Assignment as a primary graphics device is not supported and emulated graphics devices in the guest's XML should not be removed.
- Editing the guest XML configuration using the virsh edit command and adding the appropriate XML segment manually.
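For example, assuming the XML fragment above was saved as gpu-hostdev.xml and the guest is named rhel7-guest (both names are placeholders used only for this example):
# virsh attach-device rhel7-guest gpu-hostdev.xml --persistent
Device attached successfully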
Procedure 16.17. Modifying the Xorg configuration on the guest
- In the guest, use the lspci command to determine the PCI bus address of the GPU:
# lspci | grep VGA
00:02.0 VGA compatible controller: Device 1234:111
00:09.0 VGA compatible controller: NVIDIA Corporation GK106GL [Quadro K4000] (rev a1)
In this example, the bus address is 00:09.0.
- In the /etc/X11/xorg.conf file on the guest, add a BusID option with the detected address adjusted as follows:
Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BusID          "PCI:0:9:0"
EndSection
Important
If the bus address detected in Step 1 is hexadecimal, you need to convert the values between delimiters to the decimal system. For example, 00:0a.0 should be converted into PCI:0:10:0.
Note
When using a Red Hat Enterprise Linux guest with an assigned NVIDIA GPU, block the nouveau driver on the guest by using the modprobe.blacklist=nouveau option on the kernel command line during install. For information on other guest virtual machines, see the operating system's specific documentation.
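After configuring Xorg and installing the NVIDIA drivers in the guest, you can check the Xorg log to confirm that the assigned GPU is driven by the nvidia driver. The log location shown below is typical for Xorg but may differ depending on the guest configuration:
# grep -i nvidia /var/log/Xorg.0.log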
16.7.2. NVIDIA vGPU Assignment
16.7.2.1. NVIDIA vGPU Setup
- Obtain the NVIDIA vGPU drivers and install them on your system. For instructions, see the NVIDIA documentation.
- If the NVIDIA software installer did not create the /etc/modprobe.d/nvidia-installer-disable-nouveau.conf file, create a .conf file (of any name) in the /etc/modprobe.d/ directory. Add the following lines in the file:
blacklist nouveau
options nouveau modeset=0
- Regenerate the initial ramdisk for the current kernel, then reboot:
# dracut --force
# reboot
If you need to use a prior supported kernel version with mediated devices, regenerate the initial ramdisk for all installed kernel versions:
# dracut --regenerate-all --force
# reboot
- Check that the nvidia_vgpu_vfio module has been loaded by the kernel and that the nvidia-vgpu-mgr.service service is running.
# lsmod | grep nvidia_vgpu_vfio
nvidia_vgpu_vfio 45011 0
nvidia 14333621 10 nvidia_vgpu_vfio
mdev 20414 2 vfio_mdev,nvidia_vgpu_vfio
vfio 32695 3 vfio_mdev,nvidia_vgpu_vfio,vfio_iommu_type1
# systemctl status nvidia-vgpu-mgr.service
nvidia-vgpu-mgr.service - NVIDIA vGPU Manager Daemon
   Loaded: loaded (/usr/lib/systemd/system/nvidia-vgpu-mgr.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2018-03-16 10:17:36 CET; 5h 8min ago
 Main PID: 1553 (nvidia-vgpu-mgr)
[...]
- Write a device UUID to /sys/class/mdev_bus/pci_dev/mdev_supported_types/type-id/create, where pci_dev is the PCI address of the host GPU, and type-id is an ID of the host GPU type. The following example shows how to create a mediated device of the nvidia-63 vGPU type on an NVIDIA Tesla P4 card (a verification example is shown after these steps):
# uuidgen
30820a6f-b1a5-4503-91ca-0c10ba58692a
# echo "30820a6f-b1a5-4503-91ca-0c10ba58692a" > /sys/class/mdev_bus/0000:01:00.0/mdev_supported_types/nvidia-63/create
For type-id values for specific devices, see section 1.3.1. Virtual GPU Types in the Virtual GPU software documentation. Note that only Q-series NVIDIA vGPUs, such as GRID P4-2Q, are supported as mediated device GPU types on Linux guests.
- Add the following lines to the <devices/> sections in the XML configurations of guests with which you want to share the vGPU resources. Use the UUID value generated by the uuidgen command in the previous step. Each UUID can only be assigned to one guest at a time.
<hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci'>
  <source>
    <address uuid='30820a6f-b1a5-4503-91ca-0c10ba58692a'/>
  </source>
</hostdev>
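You can verify that the mediated device has been created by listing the devices present on the mdev bus, for example:
# ls /sys/bus/mdev/devices/
30820a6f-b1a5-4503-91ca-0c10ba58692a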
Important
For the vGPU mediated devices to work properly on the assigned guests, NVIDIA vGPU guest software licensing needs to be set up for the guests. For further information and instructions, see the NVIDIA virtual GPU software documentation.
16.7.2.2. Setting up and using the VNC console for video streaming with NVIDIA vGPU
- Install NVIDIA vGPU drivers and configure NVIDIA vGPU on your system as described in Section 16.7.2.1, “NVIDIA vGPU Setup”. Ensure the mediated device's XML configuration includes the display='on' parameter. For example:
<hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci' display='on'>
  <source>
    <address uuid='ba26a3e2-8e1e-4f39-9de7-b26bd210268a'/>
  </source>
</hostdev>
- Optionally, set the VM's video model type as none. For example:
<video> <model type='none'/> </video>
If this is not specified, you receive two different display outputs - one from an emulated Cirrus or QXL card and one from NVIDIA vGPU. Also note that using model type='none' currently makes it impossible to see the boot graphical output until the drivers are initialized. As a result, the first graphical output displayed is the login screen.
- Ensure the XML configuration of the VM's graphics type is vnc. For example:
<graphics type='vnc' port='-1' autoport='yes'> <listen type='address'/> </graphics>
- Start the virtual machine.
- Connect to the virtual machine using the VNC viewer remote desktop client.
Note
If the VM is set up with an emulated VGA as the primary video device and vGPU as the secondary device, use the ctrl+alt+2 keyboard shortcut to switch to the vGPU display.
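To determine which VNC display to connect to, you can use the virsh vncdisplay command. For example, with a guest named vgpu-guest (the name is a placeholder used only for this example):
# virsh vncdisplay vgpu-guest
127.0.0.1:0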
16.7.2.3. Removing NVIDIA vGPU Devices
To remove a mediated vGPU device, write 1 to the remove file of the device in sysfs, where uuid is the UUID of the device, for example 30820a6f-b1a5-4503-91ca-0c10ba58692a:
# echo 1 > /sys/bus/mdev/devices/uuid/remove
Note that attempting to remove a vGPU device that is currently in use by a guest fails with the following error:
echo: write error: Device or resource busy
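If the device is busy, shut down the guest that uses it before retrying the removal. For example, with a guest named vgpu-guest (the name is a placeholder used only for this example):
# virsh shutdown vgpu-guest
# echo 1 > /sys/bus/mdev/devices/uuid/remove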
16.7.2.4. Querying NVIDIA vGPU Capabilities
To obtain information about the mediated device types available on your system, such as how many mediated devices of a given type can be created, use the virsh nodedev-list --cap mdev_types and virsh nodedev-dumpxml commands. For example, the following displays available vGPU types on a Tesla P4 card:
$ virsh nodedev-list --cap mdev_types
pci_0000_01_00_0
$ virsh nodedev-dumpxml pci_0000_01_00_0
<...>
  <capability type='mdev_types'>
    <type id='nvidia-70'>
      <name>GRID P4-8A</name>
      <deviceAPI>vfio-pci</deviceAPI>
      <availableInstances>1</availableInstances>
    </type>
    <type id='nvidia-69'>
      <name>GRID P4-4A</name>
      <deviceAPI>vfio-pci</deviceAPI>
      <availableInstances>2</availableInstances>
    </type>
    <type id='nvidia-67'>
      <name>GRID P4-1A</name>
      <deviceAPI>vfio-pci</deviceAPI>
      <availableInstances>8</availableInstances>
    </type>
    <type id='nvidia-65'>
      <name>GRID P4-4Q</name>
      <deviceAPI>vfio-pci</deviceAPI>
      <availableInstances>2</availableInstances>
    </type>
    <type id='nvidia-63'>
      <name>GRID P4-1Q</name>
      <deviceAPI>vfio-pci</deviceAPI>
      <availableInstances>8</availableInstances>
    </type>
    <type id='nvidia-71'>
      <name>GRID P4-1B</name>
      <deviceAPI>vfio-pci</deviceAPI>
      <availableInstances>8</availableInstances>
    </type>
    <type id='nvidia-68'>
      <name>GRID P4-2A</name>
      <deviceAPI>vfio-pci</deviceAPI>
      <availableInstances>4</availableInstances>
    </type>
    <type id='nvidia-66'>
      <name>GRID P4-8Q</name>
      <deviceAPI>vfio-pci</deviceAPI>
      <availableInstances>1</availableInstances>
    </type>
    <type id='nvidia-64'>
      <name>GRID P4-2Q</name>
      <deviceAPI>vfio-pci</deviceAPI>
      <availableInstances>4</availableInstances>
    </type>
  </capability>
</...>
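The number of instances of a given type that can still be created can also be read directly from sysfs; the value decreases as mediated devices of that type are created. For example, for the nvidia-63 type on the card above:
# cat /sys/class/mdev_bus/0000:01:00.0/mdev_supported_types/nvidia-63/available_instances
8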
16.7.2.5. Remote Desktop Streaming Services for NVIDIA vGPU
The following remote desktop streaming services can be used with the NVIDIA vGPU feature:
- HP-RGS
- Mechdyne TGX - It is currently not possible to use Mechdyne TGX with Windows Server 2016 guests.
- NICE DCV - When using this streaming service, Red Hat recommends using fixed resolution settings, as using dynamic resolution in some cases results in a black screen.
16.7.2.6. Setting up the VNC console for video streaming with NVIDIA vGPU
Configuration
- Install NVIDIA vGPU drivers and configure NVIDIA vGPU on your host as described in Section 16.7.2, “NVIDIA vGPU Assignment”. Ensure the mediated device's XML configuration includes the display='on' parameter. For example:
<hostdev mode='subsystem' type='mdev' managed='no' model='vfio-pci' display='on'>
  <source>
    <address uuid='ba26a3e2-8e1e-4f39-9de7-b26bd210268a'/>
  </source>
</hostdev>
- Optionally, set the VM's video model type as none. For example:
<video> <model type='none'/> </video>
- Ensure the XML configuration of the VM's graphics type is spice or vnc. An example for spice:
<graphics type='spice' autoport='yes'> <listen type='address'/> <image compression='off'/> </graphics>
An example for vnc:
<graphics type='vnc' port='-1' autoport='yes'> <listen type='address'/> </graphics>
- Start the virtual machine.
- Connect to the virtual machine using a client appropriate to the graphics protocol you configured in the previous steps.
- For VNC, use the VNC viewer remote desktop client. If the VM is set up with an emulated VGA as the primary video device and vGPU as the secondary, use the ctrl+alt+2 keyboard shortcut to switch to the vGPU display.
- For SPICE, use the virt-viewer application.
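For example, to open a SPICE connection to a local guest named vgpu-guest (the name is a placeholder used only for this example):
# virt-viewer vgpu-guest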