Chapter 8. Configuring PCI passthrough
You can use PCI passthrough to attach a physical PCI device, such as a graphics card or a network device, to an instance. If you use PCI passthrough for a device, the instance reserves exclusive access to the device for performing tasks, and the device is not available to the host.
Using PCI passthrough with routed provider networks
The Compute service does not support single networks that span multiple provider networks. When a network contains multiple physical networks, the Compute service only uses the first physical network. Therefore, if you are using routed provider networks you must use the same physical_network name across all the Compute nodes.

If you use routed provider networks with VLAN or flat networks, you must use the same physical_network name for all segments. You then create multiple segments for the network and map the segments to the appropriate subnets.
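For example, the following minimal sketch creates two VLAN segments that share the same physical_network name, as this constraint requires. The network name multisegment1, the physical network name physnet1, and the segmentation IDs are illustrative:

    $ openstack network segment create --network multisegment1 \
     --network-type vlan --physical-network physnet1 \
     --segment 128 segment1
    $ openstack network segment create --network multisegment1 \
     --network-type vlan --physical-network physnet1 \
     --segment 129 segment2

You can then map each segment to the appropriate subnet, for example with the --network-segment option of openstack subnet create.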
To enable your cloud users to create instances with PCI devices attached, you must complete the following:
- Designate Compute nodes for PCI passthrough.
- Configure the Compute nodes that have the required PCI devices for PCI passthrough.
- Deploy the overcloud.
- Create a flavor for launching instances with PCI devices attached.
Prerequisites
- The Compute nodes have the required PCI devices.
8.1. Designating Compute nodes for PCI passthrough
To designate Compute nodes for instances with physical PCI devices attached, you must create a new role file to configure the PCI passthrough role, and configure the bare metal nodes with a PCI passthrough resource class, which you then use to tag the Compute nodes for PCI passthrough.
The following procedure applies to new overcloud nodes that have not yet been provisioned. To assign a resource class to an existing overcloud node that has already been provisioned, you must use the scale down procedure to unprovision the node, then use the scale up procedure to reprovision the node with the new resource class assignment. For more information, see Scaling overcloud nodes.
Procedure
1. Log in to the undercloud as the stack user, and source the stackrc file:

    [stack@director ~]$ source ~/stackrc
2. Generate a new roles data file named roles_data_pci_passthrough.yaml that includes the Controller, Compute, and ComputePCI roles, along with any other roles that you need for the overcloud:

    (undercloud)$ openstack overcloud roles generate \
     -o /home/stack/templates/roles_data_pci_passthrough.yaml \
     Compute:ComputePCI Compute Controller
3. Open roles_data_pci_passthrough.yaml and edit or add the following parameters and sections:

| Section/Parameter | Current value | New value |
| --- | --- | --- |
| Role comment | Role: Compute | Role: ComputePCI |
| Role name | name: Compute | name: ComputePCI |
| description | Basic Compute Node role | PCI Passthrough Compute Node role |
| HostnameFormatDefault | %stackname%-novacompute-%index% | %stackname%-novacomputepci-%index% |
| deprecated_nic_config_name | compute.yaml | compute-pci-passthrough.yaml |
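After these edits, the ComputePCI role entry in roles_data_pci_passthrough.yaml might begin as in the following sketch. Only the fields from the table are shown; the rest of the role definition stays as generated from the Compute role:

    - name: ComputePCI
      description: |
        PCI Passthrough Compute Node role
      HostnameFormatDefault: '%stackname%-novacomputepci-%index%'
      deprecated_nic_config_name: compute-pci-passthrough.yaml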
4. Register the PCI passthrough Compute nodes for the overcloud by adding them to your node definition template, node.json or node.yaml. For more information, see Registering nodes for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide.
5. Inspect the node hardware:

    (undercloud)$ openstack overcloud node introspect \
     --all-manageable --provide

For more information, see Creating an inventory of the bare-metal node hardware in the Installing and managing Red Hat OpenStack Platform with director guide.
6. Tag each bare metal node that you want to designate for PCI passthrough with a custom PCI passthrough resource class:

    (undercloud)$ openstack baremetal node set \
     --resource-class baremetal.PCI-PASSTHROUGH <node>

Replace <node> with the ID of the bare metal node.
7. Add the ComputePCI role to your node definition file, overcloud-baremetal-deploy.yaml, and define any predictive node placements, resource classes, network topologies, or other attributes that you want to assign to your nodes:

    - name: Controller
      count: 3
    - name: Compute
      count: 3
    - name: ComputePCI
      count: 1
      defaults:
        resource_class: baremetal.PCI-PASSTHROUGH
        network_config:
          template: /home/stack/templates/nic-config/myRoleTopology.j2  # 1

1: You can reuse an existing network topology or create a new custom network interface template for the role. For more information, see Custom network interface templates in the Installing and managing Red Hat OpenStack Platform with director guide. If you do not define the network definitions by using the network_config property, then the default network definitions are used.
For more information about the properties you can use to configure node attributes in your node definition file, see Bare metal node provisioning attributes. For an example node definition file, see Example node definition file.
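Optionally, before you provision, you can confirm that the resource class from step 6 is set on the tagged nodes. A quick check, assuming the stackrc credentials are still sourced:

    (undercloud)$ openstack baremetal node list \
     --fields uuid name resource_class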
8. Run the provisioning command to provision the new nodes for your role:

    (undercloud)$ openstack overcloud node provision \
     --stack <stack> \
     [--network-config \]
     --output /home/stack/templates/overcloud-baremetal-deployed.yaml \
     /home/stack/templates/overcloud-baremetal-deploy.yaml
- Replace <stack> with the name of the stack for which the bare-metal nodes are provisioned. If not specified, the default is overcloud.
- Include the --network-config optional argument to provide the network definitions to the cli-overcloud-node-network-config.yaml Ansible playbook. If you do not define the network definitions by using the network_config property, then the default network definitions are used.
9. Monitor the provisioning progress in a separate terminal. When provisioning is successful, the node state changes from available to active:

    (undercloud)$ watch openstack baremetal node list
10. If you did not run the provisioning command with the --network-config option, then configure the <Role>NetworkConfigTemplate parameters in your network-environment.yaml file to point to your NIC template files:

    parameter_defaults:
      ComputeNetworkConfigTemplate: /home/stack/templates/nic-configs/compute.j2
      ComputePCINetworkConfigTemplate: /home/stack/templates/nic-configs/<pci_passthrough_net_top>.j2
      ControllerNetworkConfigTemplate: /home/stack/templates/nic-configs/controller.j2

Replace <pci_passthrough_net_top> with the name of the file that contains the network topology of the ComputePCI role, for example, compute.yaml to use the default network topology.
8.2. Configuring a PCI passthrough Compute node
To enable your cloud users to create instances with PCI devices attached, you must configure both the Compute nodes that have the PCI devices and the Controller nodes.
Procedure
1. Create an environment file to configure the Controller node on the overcloud for PCI passthrough, for example, pci_passthrough_controller.yaml.
2. Add PciPassthroughFilter to the NovaSchedulerEnabledFilters parameter in pci_passthrough_controller.yaml:

    parameter_defaults:
      NovaSchedulerEnabledFilters:
        - AvailabilityZoneFilter
        - ComputeFilter
        - ComputeCapabilitiesFilter
        - ImagePropertiesFilter
        - ServerGroupAntiAffinityFilter
        - ServerGroupAffinityFilter
        - PciPassthroughFilter
        - NUMATopologyFilter
3. To specify the PCI alias for the devices on the Controller node, add the following configuration to pci_passthrough_controller.yaml:

    parameter_defaults:
      ...
      ControllerExtraConfig:
        nova::pci::aliases:
          - name: "a1"
            product_id: "1572"
            vendor_id: "8086"
            device_type: "type-PF"

For more information about configuring the device_type field, see PCI passthrough device type field.

Note: If the nova-api service is running in a role different from the Controller role, replace ControllerExtraConfig with the user role in the format <Role>ExtraConfig.
4. Optional: To set a default NUMA affinity policy for PCI passthrough devices, add numa_policy to the nova::pci::aliases: configuration from step 3:

    parameter_defaults:
      ...
      ControllerExtraConfig:
        nova::pci::aliases:
          - name: "a1"
            product_id: "1572"
            vendor_id: "8086"
            device_type: "type-PF"
            numa_policy: "preferred"
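As the note in step 3 describes, if the nova-api service runs in a different role, the alias configuration moves to that role's <Role>ExtraConfig parameter. A minimal sketch, assuming a hypothetical custom role named ControllerApi:

    parameter_defaults:
      ControllerApiExtraConfig:    # <Role>ExtraConfig for the hypothetical ControllerApi role
        nova::pci::aliases:
          - name: "a1"
            product_id: "1572"
            vendor_id: "8086"
            device_type: "type-PF"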
5. To configure the Compute node on the overcloud for PCI passthrough, create an environment file, for example, pci_passthrough_compute.yaml.
6. To specify the PCI devices on the Compute node that are available for passthrough, use the vendor_id and product_id options to add all matching PCI devices to the pool of PCI devices available for passthrough to instances. For example, to add all Intel® Ethernet Controller X710 devices to the pool of PCI devices available for passthrough to instances, add the following configuration to pci_passthrough_compute.yaml:

    parameter_defaults:
      ...
      ComputePCIParameters:
        NovaPCIPassthrough:
          - vendor_id: "8086"
            product_id: "1572"
For more information about how to configure NovaPCIPassthrough, see Guidelines for configuring NovaPCIPassthrough.
7. You must create a copy of the PCI alias on the Compute node for instance migration and resize operations. To specify the PCI alias for the devices on the PCI passthrough Compute node, add the following to pci_passthrough_compute.yaml:

    parameter_defaults:
      ...
      ComputePCIExtraConfig:
        nova::pci::aliases:
          - name: "a1"
            product_id: "1572"
            vendor_id: "8086"
            device_type: "type-PF"
Note: The Compute node aliases must be identical to the aliases on the Controller node. Therefore, if you added numa_policy to nova::pci::aliases in pci_passthrough_controller.yaml, then you must also add it to nova::pci::aliases in pci_passthrough_compute.yaml.
8. To enable IOMMU in the server BIOS of the Compute nodes to support PCI passthrough, add the KernelArgs parameter to pci_passthrough_compute.yaml. For example, use the following KernelArgs settings to enable an Intel IOMMU:

    parameter_defaults:
      ...
      ComputePCIParameters:
        KernelArgs: "intel_iommu=on iommu=pt"
To enable an AMD IOMMU, set KernelArgs to "amd_iommu=on iommu=pt".

Note: When you first add the KernelArgs parameter to the configuration of a role, the overcloud nodes are automatically rebooted. If required, you can disable the automatic rebooting of nodes and instead perform node reboots manually after each overcloud deployment. For more information, see Configuring manual node reboot to define KernelArgs.
9. Add your custom environment files to the stack with your other environment files and deploy the overcloud:

    (undercloud)$ openstack overcloud deploy --templates \
     -e [your environment files] \
     -r /home/stack/templates/roles_data_pci_passthrough.yaml \
     -e /home/stack/templates/network-environment.yaml \
     -e /home/stack/templates/pci_passthrough_controller.yaml \
     -e /home/stack/templates/pci_passthrough_compute.yaml \
     -e /home/stack/templates/overcloud-baremetal-deployed.yaml \
     -e /home/stack/templates/node-info.yaml
10. Create and configure the flavors that your cloud users can use to request the PCI devices. The following example requests two devices, each with a vendor ID of 8086 and a product ID of 1572, using the alias defined in step 7:

    (overcloud)$ openstack flavor set \
     --property "pci_passthrough:alias"="a1:2" device_passthrough
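If the device_passthrough flavor does not already exist, create it before setting the property. A minimal sketch; the RAM, disk, and vCPU sizes are illustrative:

    (overcloud)$ openstack flavor create --ram 4096 --disk 20 \
     --vcpus 4 device_passthrough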
11. Optional: To override the default NUMA affinity policy for PCI passthrough devices, you can add the NUMA affinity policy property key to the flavor or the image:
- To override the default NUMA affinity policy by using the flavor, add the hw:pci_numa_affinity_policy property key:

    (overcloud)$ openstack flavor set \
     --property "hw:pci_numa_affinity_policy"="required" \
     device_passthrough

For more information about the valid values for hw:pci_numa_affinity_policy, see Flavor metadata.
- To override the default NUMA affinity policy by using the image, add the hw_pci_numa_affinity_policy property key:

    (overcloud)$ openstack image set \
     --property hw_pci_numa_affinity_policy=required \
     device_passthrough_image
Note: If you set the NUMA affinity policy on both the image and the flavor, then the property values must match. The flavor setting takes precedence over the image and default settings. Therefore, the configuration of the NUMA affinity policy on the image only takes effect if the property is not set on the flavor.
Verification
- Create an instance with a PCI passthrough device:

    $ openstack server create --flavor device_passthrough \
     --image <image> --wait test-pci
- Log in to the instance as a cloud user. For more information, see Connecting to an instance.
- To verify that the PCI device is accessible from the instance, enter the following command from the instance:

    $ lspci -nn | grep <device_name>
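For example, with the Intel X710 devices used in this chapter, output similar to the following indicates that the device is visible to the instance. This is a sketch; the PCI address and revision vary by environment:

    00:05.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ [8086:1572] (rev 02)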
8.3. PCI passthrough device type field
The Compute service categorizes PCI devices into one of three types, depending on the capabilities the devices report. The following lists the valid values that you can set the device_type field to:
- type-PF: The device supports SR-IOV and is the parent or root device. Specify this device type to pass through a device that supports SR-IOV in its entirety.
- type-VF: The device is a child device of a device that supports SR-IOV.
- type-PCI: The device does not support SR-IOV. This is the default device type if the device_type field is not set.
You must configure the Compute and Controller nodes with the same device_type.
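For example, to expose Virtual Functions instead of a whole Physical Function, an alias can use type-VF. A minimal sketch; the alias name is illustrative, and the VF product ID 154c is an assumption based on the X710 family, so confirm the value for your hardware:

    parameter_defaults:
      ControllerExtraConfig:
        nova::pci::aliases:
          - name: "a1-vf"          # illustrative alias name
            product_id: "154c"     # assumed X710 VF product ID; verify with lspci -nn
            vendor_id: "8086"
            device_type: "type-VF"

Remember to mirror the same alias in ComputePCIExtraConfig, as described in Section 8.2.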
8.4. Guidelines for configuring NovaPCIPassthrough
- Do not use the devname parameter when configuring PCI passthrough, as the device name of a NIC can change. Instead, use vendor_id and product_id because they are more stable, or use the address of the NIC.
- To pass through a specific Physical Function (PF), you can use the address parameter because the PCI address is unique to each device. Alternatively, you can use the product_id parameter to pass through a PF, but you must also specify the address of the PF if you have multiple PFs of the same type.
- To pass through all the Virtual Functions (VFs), specify only the product_id and vendor_id of the VFs that you want to use for PCI passthrough. You must also specify the address of the VF if you are using SR-IOV for NIC partitioning and you are running OVS on a VF.
- To pass through only the VFs for a PF but not the PF itself, you can use the address parameter to specify the PCI address of the PF and product_id to specify the product ID of the VF, as shown in the sketch after this list.
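The last guideline translates into a configuration such as the following minimal sketch, in which the PF address and the VF product ID are illustrative placeholders for values from your own hardware:

    parameter_defaults:
      ComputePCIParameters:
        NovaPCIPassthrough:
          - address: "0000:82:00.0"   # PCI address of the PF (illustrative)
            product_id: "154c"        # product ID of the VFs (illustrative)
            vendor_id: "8086"

Per the guideline above, this exposes only the VFs that belong to that PF, not the PF itself.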
Configuring the address parameter
The address parameter specifies the PCI address of the device. You can set the value of the address parameter using either a string or a dict mapping.
- String format
If you specify the address using a string, you can include wildcards (*), as shown in the following example:

    NovaPCIPassthrough:
      - address: "*:0a:00.*"
        physical_network: physnet1
- Dictionary format
If you specify the address using the dictionary format, you can include regular expression syntax, as shown in the following example:

    NovaPCIPassthrough:
      - address:
          domain: ".*"
          bus: "02"
          slot: "01"
          function: "[0-2]"
        physical_network: net1
The Compute service restricts the configuration of address fields to the following maximum values:
- domain - 0xFFFF
- bus - 0xFF
- slot - 0x1F
- function - 0x7
The Compute service supports PCI devices with a 16-bit address domain. The Compute service ignores PCI devices with a 32-bit address domain.