Chapter 6. Configuring VDPA Compute nodes to enable instances that use VDPA ports
This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.
VIRTIO data path acceleration (VDPA) provides wirespeed data transfer over VIRTIO. A VDPA device provides a VIRTIO abstraction over an SR-IOV virtual function (VF), which enables VFs to be consumed without a vendor-specific driver on the instance.
When you use a NIC as a VDPA interface, the NIC must be dedicated to VDPA. You cannot use the NIC for other connections, because you must configure the NIC’s physical function (PF) in switchdev mode and manage the PF by using hardware-offloaded OVS.
To enable your cloud users to create instances that use VDPA ports, complete the following tasks:
- Optional: Designate Compute nodes for VDPA.
- Configure the Compute nodes that have the required VDPA devices and drivers.
- Deploy the overcloud.
If the VDPA hardware is limited, you can also configure a host aggregate to optimize scheduling on the VDPA Compute nodes. To schedule only instances that request VDPA on the VDPA Compute nodes, create a host aggregate of the Compute nodes that have the VDPA hardware, and configure the Compute scheduler to place only VDPA instances on the host aggregate. For more information, see Filtering by isolating host aggregates and Creating and managing host aggregates.
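For orientation, creating such a host aggregate involves commands along these lines; the aggregate name vdpa-hosts and the node hostname are placeholders, and the scheduler isolation itself is configured as described in the linked sections:

(overcloud)$ openstack aggregate create vdpa-hosts
(overcloud)$ openstack aggregate add host vdpa-hosts overcloud-computevdpa-0.localdomain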
Prerequisites
- Your Compute nodes have the required VDPA devices and drivers.
- Your Compute nodes have Mellanox NICs.
- Your overcloud is configured for OVS hardware offload. For more information, see Configuring OVS hardware offload.
- Your overcloud is configured to use ML2/OVN.
6.1. Designating Compute nodes for VDPA
To designate Compute nodes for instances that request a VIRTIO data path acceleration (VDPA) interface, create a new role file to configure the VDPA role, and configure the bare-metal nodes with a VDPA resource class to tag the Compute nodes for VDPA.
The following procedure applies to new overcloud nodes that have not yet been provisioned. To assign a resource class to an existing overcloud node that has already been provisioned, scale down the overcloud to unprovision the node, then scale up the overcloud to reprovision the node with the new resource class assignment. For more information, see Scaling overcloud nodes.
Procedure
- Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file:

  [stack@director ~]$ source ~/stackrc

- Generate a new roles data file named roles_data_vdpa.yaml that includes the Controller, Compute, and ComputeVdpa roles:

  (undercloud)$ openstack overcloud roles \
   generate -o /home/stack/templates/roles_data_vdpa.yaml \
   ComputeVdpa Compute Controller

- Update the roles_data_vdpa.yaml file for the VDPA role:
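  The role entry that the previous command generates varies with your templates version; the following is only a hedged sketch of the kind of ComputeVdpa definition you might tune in roles_data_vdpa.yaml, with illustrative networks and hostname format:

  - name: ComputeVdpa
    description: |
      VDPA Compute Node role
    CountDefault: 1
    HostnameFormatDefault: '%stackname%-computevdpa-%index%'
    networks:
      InternalApi:
        subnet: internal_api_subnet
      Tenant:
        subnet: tenant_subnet
      Storage:
        subnet: storage_subnet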
- Register the VDPA Compute nodes for the overcloud by adding them to your node definition template, node.json or node.yaml. For more information, see Registering nodes for the overcloud in the Director Installation and Usage guide.
- Inspect the node hardware:

  (undercloud)$ openstack overcloud node introspect \
   --all-manageable --provide

  For more information, see Creating an inventory of the bare-metal node hardware in the Director Installation and Usage guide.
- Tag each bare-metal node that you want to designate for VDPA with a custom VDPA resource class:

  (undercloud)$ openstack baremetal node set \
   --resource-class baremetal.VDPA <node>

  Replace <node> with the name or UUID of the bare-metal node.
- Add the ComputeVdpa role to your node definition file, overcloud-baremetal-deploy.yaml, and define any predictive node placements, resource classes, network topologies, or other attributes that you want to assign to your nodes:
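  What follows is only a hedged sketch of such a node definition, with illustrative node counts; the <role_topology_file> placeholder is explained in the note that follows:

  - name: Controller
    count: 3
  - name: Compute
    count: 1
  - name: ComputeVdpa
    count: 1
    defaults:
      resource_class: baremetal.VDPA
      network_config:
        template: /home/stack/templates/nic-configs/<role_topology_file>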
  - Replace <role_topology_file> with the name of the topology file to use for the ComputeVdpa role, for example, myRoleTopology.j2. You can reuse an existing network topology or create a new custom network interface template for the role. For more information, see Custom network interface templates in the Director Installation and Usage guide. To use the default network definition settings, do not include network_config in the role definition.
For more information about the properties that you can use to configure node attributes in your node definition file, see Bare-metal node provisioning attributes. For an example node definition file, see Example node definition file.
- Provision the new nodes for your role:

  (undercloud)$ openstack overcloud node provision \
   [--stack <stack>] \
   [--network-config \]
   --output <deployment_file> \
   /home/stack/templates/overcloud-baremetal-deploy.yaml

  - Optional: Replace <stack> with the name of the stack for which the bare-metal nodes are provisioned. Defaults to overcloud.
  - Optional: Include the --network-config optional argument to provide the network definitions to the cli-overcloud-node-network-config.yaml Ansible playbook. If you have not defined the network definitions in the node definition file by using the network_config property, then the default network definitions are used.
  - Replace <deployment_file> with the name of the heat environment file to generate for inclusion in the deployment command, for example, /home/stack/templates/overcloud-baremetal-deployed.yaml.
- Monitor the provisioning progress in a separate terminal. When provisioning is successful, the node state changes from available to active:

  (undercloud)$ watch openstack baremetal node list

- If you ran the provisioning command without the --network-config option, then configure the <Role>NetworkConfigTemplate parameters in your network-environment.yaml file to point to your NIC template files:

  parameter_defaults:
    ComputeNetworkConfigTemplate: /home/stack/templates/nic-configs/compute.j2
    ComputeVdpaNetworkConfigTemplate: /home/stack/templates/nic-configs/<vdpa_net_top>.j2
    ControllerNetworkConfigTemplate: /home/stack/templates/nic-configs/controller.j2

  Replace <vdpa_net_top> with the name of the file that contains the network topology of the ComputeVdpa role, for example, compute.yaml to use the default network topology.
6.2. Configuring a VDPA Compute node
To enable your cloud users to create instances that use VIRTIO data path acceleration (VDPA) ports, configure the Compute nodes that have the VDPA devices.
Procedure
- Create a new Compute environment file for configuring VDPA Compute nodes, for example, vdpa_compute.yaml.
- Add PciPassthroughFilter and NUMATopologyFilter to the NovaSchedulerDefaultFilters parameter in vdpa_compute.yaml:

  parameter_defaults:
    NovaSchedulerDefaultFilters: ['AvailabilityZoneFilter','ComputeFilter','ComputeCapabilitiesFilter','ImagePropertiesFilter','ServerGroupAntiAffinityFilter','ServerGroupAffinityFilter','PciPassthroughFilter','NUMATopologyFilter']

- Add the NovaPCIPassthrough parameter to vdpa_compute.yaml to specify the available PCIs for the VDPA devices on the Compute node. For example, to add NVIDIA® ConnectX®-6 Dx devices to the pool of PCI devices that are available for passthrough to instances, add the following configuration to vdpa_compute.yaml:
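  A hedged sketch of such a configuration; the role-specific ComputeVdpaParameters section, PCI addresses, vendor ID, product ID, and physical network name shown here are illustrative, so use the values that lspci -nn reports for your own devices:

  parameter_defaults:
    ComputeVdpaParameters:
      NovaPCIPassthrough:
        - vendor_id: "15b3"
          product_id: "101d"
          address: "06:00.0"
          physical_network: "tenant"
        - vendor_id: "15b3"
          product_id: "101d"
          address: "06:00.1"
          physical_network: "tenant"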
  For more information about how to configure NovaPCIPassthrough, see Guidelines for configuring NovaPCIPassthrough.
- Enable the input–output memory management unit (IOMMU) in each Compute node BIOS by adding the KernelArgs parameter to vdpa_compute.yaml. For example, use the following KernelArgs settings to enable an Intel Corporation IOMMU:

  parameter_defaults:
    ...
    ComputeVdpaParameters:
      ...
      KernelArgs: "intel_iommu=on iommu=pt"
  To enable an AMD IOMMU, set KernelArgs to "amd_iommu=on iommu=pt".

  Note: When you first add the KernelArgs parameter to the configuration of a role, the overcloud nodes automatically reboot during overcloud deployment. If required, you can disable the automatic rebooting of nodes and instead perform node reboots manually after each overcloud deployment. For more information, see Configuring manual node reboot to define KernelArgs.
- Open your network environment file, and add the following configuration to define the physical network:
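  A hedged sketch of such a configuration, built from the placeholders described in the notes that follow and assuming the standard tripleo-heat-templates neutron parameters:

  parameter_defaults:
    NeutronBridgeMappings:
      - <bridge_map_1>
      - <bridge_map_n>
    NeutronTunnelTypes: '<tunnel_types>'
    NeutronNetworkType: '<network_types>'
    NeutronNetworkVLANRanges:
      - <network_vlan_range_1>
      - <network_vlan_range_n>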
  - Replace <bridge_map_1>, and all bridge mappings up to <bridge_map_n>, with the logical to physical bridge mappings that you want to use for the VDPA bridge. For example, tenant:br-tenant.
  - Replace <tunnel_types> with a comma-separated list of the tunnel types for the project network. For example, geneve.
  - Replace <network_types> with a comma-separated list of the project network types for the Networking service (neutron). The first type that you specify is used until all available networks are exhausted, then the next type is used. For example, geneve,vlan.
  - Replace <network_vlan_range_1>, and all physical network and VLAN ranges up to <network_vlan_range_n>, with the ML2 and OVN VLAN mapping ranges that you want to support. For example, datacentre:1:1000, tenant:100:299.
- Open your network interface template, and add the following configuration to specify your VDPA-supported network interfaces as a member of the OVN bridge:
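  A hedged sketch of the kind of os-net-config entry this step refers to; the bridge name, interface names, and numvfs values are illustrative, and the vdpa and link_mode attributes are assumptions to verify against the os-net-config schema for your release:

  - type: ovs_bridge
    name: br-tenant
    mtu: 9000
    members:
      - type: sriov_pf
        name: enp6s0f0np0      # VDPA-capable PF, dedicated to VDPA
        numvfs: 8
        use_dhcp: false
        vdpa: true             # assumed attribute name; verify for your release
        link_mode: switchdev   # required for hardware-offloaded OVS
      - type: sriov_pf
        name: enp6s0f1np1
        numvfs: 8
        use_dhcp: false
        vdpa: true
        link_mode: switchdev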
- Add your custom environment files to the stack with your other environment files and deploy the overcloud:
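  A hedged sketch of the deployment command; the exact set of -e environment files depends on your deployment, but it includes the generated roles file, the provisioned bare-metal environment file, and vdpa_compute.yaml:

  (undercloud)$ openstack overcloud deploy --templates \
    -r /home/stack/templates/roles_data_vdpa.yaml \
    -e /home/stack/templates/overcloud-baremetal-deployed.yaml \
    -e /home/stack/templates/network-environment.yaml \
    -e /home/stack/templates/vdpa_compute.yaml \
    -e [your other environment files]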
Verification
- Create an instance with a VDPA device. For more information, see Creating an instance with a VDPA interface in the Creating and Managing Instances guide.
- Log in to the instance as a cloud user. For more information, see Connecting to an instance in the Creating and Managing Instances guide.
- Verify that the VDPA device is accessible from the instance:

  $ openstack port show vdpa-port
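Taken together, a hedged sketch of these verification steps; the network, image, flavor, and instance names are placeholders:

$ openstack port create --network tenant-net --vnic-type vdpa vdpa-port
$ openstack server create --flavor vdpa-flavor --image rhel9 --port vdpa-port vdpa-instance
$ openstack port show vdpa-port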