Chapter 9. Configuring VDPA Compute nodes to enable instances that use VDPA ports
VIRTIO data path acceleration (VDPA) provides wire-speed data transfer over VIRTIO. A VDPA device provides a VIRTIO abstraction over an SR-IOV virtual function (VF), which enables VFs to be consumed without a vendor-specific driver on the instance.
When you use a NIC as a VDPA interface, you must dedicate the NIC to VDPA. You cannot use the NIC for other connections because you must configure the NIC's physical function (PF) in switchdev mode and manage the PF by using hardware-offloaded OVS.
To enable your cloud users to create instances that use VDPA ports, complete the following tasks:
- Optional: Designate Compute nodes for VDPA.
- Configure the Compute nodes that have the required VDPA devices and drivers.
- Deploy the overcloud.
If the VDPA hardware is limited, you can also configure a host aggregate to optimize scheduling on the VDPA Compute nodes. To schedule only instances that request VDPA on the VDPA Compute nodes, create a host aggregate of the Compute nodes that have the VDPA hardware, and configure the Compute scheduler to place only VDPA instances on the host aggregate. For more information, see Filtering by isolating host aggregates and Creating and managing host aggregates.
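As a sketch of that host-aggregate pattern, the commands follow the standard aggregate workflow. The aggregate name `vdpa-hosts`, the host name, the flavor name, and the `vdpa=true` property below are illustrative assumptions, not values from this guide; see the linked guides for the exact isolation mechanism to use:

```
(overcloud)$ openstack aggregate create --property vdpa=true vdpa-hosts
(overcloud)$ openstack aggregate add host vdpa-hosts overcloud-computevdpa-0.localdomain
(overcloud)$ openstack flavor set --property aggregate_instance_extra_specs:vdpa=true vdpa.medium
```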
Prerequisites
- Your Compute nodes have the required VDPA devices and drivers.
- Your Compute nodes have Mellanox NICs.
- Your overcloud is configured for OVS hardware offload. For more information, see Configuring OVS hardware offload.
- Your overcloud is configured to use ML2/OVN.
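Because VDPA requires the PFs in switchdev mode, you can check the eswitch mode of a PF on a Compute node when you verify the OVS hardware offload prerequisite. This is a sketch; the PCI address `0000:06:00.0` is an assumption that matches the device addresses used in the examples later in this chapter, and the exact output fields vary by kernel version:

```
# devlink dev eswitch show pci/0000:06:00.0
pci/0000:06:00.0: mode switchdev inline-mode none encap-mode basic
```

The important value is `mode switchdev`.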
9.1. Designating Compute nodes for VDPA
To designate Compute nodes for instances that request a VIRTIO data path acceleration (VDPA) interface, create a new role file to configure the VDPA role, and configure the bare-metal nodes with a VDPA resource class to tag the Compute nodes for VDPA.
The following procedure applies to new overcloud nodes that have not yet been provisioned. To assign a resource class to an existing overcloud node that has already been provisioned, scale down the overcloud to unprovision the node, then scale up the overcloud to reprovision the node with the new resource class assignment. For more information, see Scaling overcloud nodes.
Procedure
- Log in to the undercloud host as the `stack` user and source the `stackrc` undercloud credentials file:

  ```
  [stack@director ~]$ source ~/stackrc
  ```

- Generate a new roles data file named `roles_data_vdpa.yaml` that includes the `Controller`, `Compute`, and `ComputeVdpa` roles:

  ```
  (undercloud)$ openstack overcloud roles generate \
   -o /home/stack/templates/roles_data_vdpa.yaml \
   ComputeVdpa Compute Controller
  ```

- Update the `roles_data_vdpa.yaml` file for the VDPA role:

  ```yaml
  ###############################################################################
  # Role: ComputeVdpa                                                           #
  ###############################################################################
  - name: ComputeVdpa
    description: |
      VDPA Compute Node role
    CountDefault: 1
    # Create external Neutron bridge
    tags:
      - compute
      - external_bridge
    networks:
      InternalApi:
        subnet: internal_api_subnet
      Tenant:
        subnet: tenant_subnet
      Storage:
        subnet: storage_subnet
    HostnameFormatDefault: '%stackname%-computevdpa-%index%'
    deprecated_nic_config_name: compute-vdpa.yaml
  ```
- Register the VDPA Compute nodes for the overcloud by adding them to your node definition template, `node.json` or `node.yaml`. For more information, see Registering nodes for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide.

- Inspect the node hardware:

  ```
  (undercloud)$ openstack overcloud node introspect \
   --all-manageable --provide
  ```

  For more information, see Creating an inventory of the bare-metal node hardware in the Installing and managing Red Hat OpenStack Platform with director guide.

- Tag each bare-metal node that you want to designate for VDPA with a custom VDPA resource class:

  ```
  (undercloud)$ openstack baremetal node set \
   --resource-class baremetal.VDPA <node>
  ```
  Replace `<node>` with the name or UUID of the bare-metal node.

- Add the `ComputeVdpa` role to your node definition file, `overcloud-baremetal-deploy.yaml`, and define any predictive node placements, resource classes, network topologies, or other attributes that you want to assign to your nodes:

  ```yaml
  - name: Controller
    count: 3
  - name: Compute
    count: 3
  - name: ComputeVdpa
    count: 1
    defaults:
      resource_class: baremetal.VDPA
      network_config:
        template: /home/stack/templates/nic-config/<role_topology_file>
  ```

  Replace `<role_topology_file>` with the name of the topology file to use for the `ComputeVdpa` role, for example, `vdpa_net_top.j2`. You can reuse an existing network topology or create a new custom network interface template for the role. For more information, see Custom network interface templates in the Installing and managing Red Hat OpenStack Platform with director guide. To use the default network definition settings, do not include `network_config` in the role definition.

  For more information about the properties that you can use to configure node attributes in your node definition file, see Bare-metal node provisioning attributes. For an example node definition file, see Example node definition file.
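Before provisioning, you can optionally confirm that a tagged node carries the custom resource class. This is a sketch of a standard Bare Metal service query:

```
(undercloud)$ openstack baremetal node show <node> -f value -c resource_class
baremetal.VDPA
```

Replace `<node>` with the name or UUID of the bare-metal node.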
- Open your network interface template, `vdpa_net_top.j2`, and add the following configuration to specify your VDPA-supported network interfaces as members of the OVS bridge:

  ```yaml
  - type: ovs_bridge
    name: br-tenant
    members:
      - type: sriov_pf
        name: enp6s0f0
        numvfs: 8
        use_dhcp: false
        vdpa: true
        link_mode: switchdev
      - type: sriov_pf
        name: enp6s0f1
        numvfs: 8
        use_dhcp: false
        vdpa: true
        link_mode: switchdev
  ```
- Provision the new nodes for your role:

  ```
  (undercloud)$ openstack overcloud node provision \
   [--stack <stack>] \
   [--network-config] \
   --output <deployment_file> \
   /home/stack/templates/overcloud-baremetal-deploy.yaml
  ```

  - Optional: Replace `<stack>` with the name of the stack for which the bare-metal nodes are provisioned. Defaults to `overcloud`.
  - Optional: Include the `--network-config` optional argument to provide the network definitions to the `cli-overcloud-node-network-config.yaml` Ansible playbook. If you have not defined the network definitions in the node definition file by using the `network_config` property, then the default network definitions are used.
  - Replace `<deployment_file>` with the name of the heat environment file to generate for inclusion in the deployment command, for example, `/home/stack/templates/overcloud-baremetal-deployed.yaml`.
- Monitor the provisioning progress in a separate terminal. When provisioning is successful, the node state changes from `available` to `active`:

  ```
  (undercloud)$ watch openstack baremetal node list
  ```
- If you ran the provisioning command without the `--network-config` option, then configure the `<Role>NetworkConfigTemplate` parameters in your `network-environment.yaml` file to point to your NIC template files:

  ```yaml
  parameter_defaults:
    ComputeNetworkConfigTemplate: /home/stack/templates/nic-configs/compute.j2
    ComputeVdpaNetworkConfigTemplate: /home/stack/templates/nic-configs/<role_topology_file>
    ControllerNetworkConfigTemplate: /home/stack/templates/nic-configs/controller.j2
  ```

  Replace `<role_topology_file>` with the name of the file that contains the network topology of the `ComputeVdpa` role, for example, `vdpa_net_top.j2`. Set it to `compute.j2` to use the default network topology.
9.2. Configuring a VDPA Compute node
To enable your cloud users to create instances that use VIRTIO data path acceleration (VDPA) ports, configure the Compute nodes that have the VDPA devices.
Procedure
- Create a new Compute environment file for configuring VDPA Compute nodes, for example, `vdpa_compute.yaml`.

- Add `PciPassthroughFilter` and `NUMATopologyFilter` to the `NovaSchedulerEnabledFilters` parameter in `vdpa_compute.yaml`:

  ```yaml
  parameter_defaults:
    NovaSchedulerEnabledFilters: ['AvailabilityZoneFilter','ComputeFilter','ComputeCapabilitiesFilter','ImagePropertiesFilter','ServerGroupAntiAffinityFilter','ServerGroupAffinityFilter','PciPassthroughFilter','NUMATopologyFilter']
  ```
- Add the `NovaPCIPassthrough` parameter to `vdpa_compute.yaml` to specify the PCI devices that are available for VDPA on the Compute node. For example, to add NVIDIA® ConnectX®-6 Dx devices to the pool of PCI devices that are available for passthrough to instances, add the following configuration to `vdpa_compute.yaml`:

  ```yaml
  parameter_defaults:
    ...
    ComputeVdpaParameters:
      NovaPCIPassthrough:
        - vendor_id: "15b3"
          product_id: "101d"
          address: "06:00.0"
          physical_network: "tenant"
        - vendor_id: "15b3"
          product_id: "101d"
          address: "06:00.1"
          physical_network: "tenant"
  ```
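You can confirm the vendor ID, product ID, and PCI addresses for your NICs on the Compute node with `lspci`. This is a sketch; the exact device description depends on your hardware, but output for the example devices above resembles the following, where the bracketed pair at the end of each line maps to `vendor_id` and `product_id`:

```
[root@overcloud-computevdpa-0 ~]# lspci -nn | grep -i mellanox
06:00.0 Ethernet controller [0200]: Mellanox Technologies MT2892 Family [ConnectX-6 Dx] [15b3:101d]
06:00.1 Ethernet controller [0200]: Mellanox Technologies MT2892 Family [ConnectX-6 Dx] [15b3:101d]
```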
  For more information about how to configure `NovaPCIPassthrough`, see Guidelines for configuring `NovaPCIPassthrough`.

- Enable the input-output memory management unit (IOMMU) in each Compute node BIOS by adding the `KernelArgs` parameter to `vdpa_compute.yaml`. For example, use the following `KernelArgs` settings to enable an Intel Corporation IOMMU:

  ```yaml
  parameter_defaults:
    ...
    ComputeVdpaParameters:
      ...
      KernelArgs: "intel_iommu=on iommu=pt"
  ```

  To enable an AMD IOMMU, set `KernelArgs` to `"amd_iommu=on iommu=pt"`.

  Note: When you first add the `KernelArgs` parameter to the configuration of a role, the overcloud nodes automatically reboot during overcloud deployment. If required, you can disable the automatic rebooting of nodes and instead perform node reboots manually after each overcloud deployment. For more information, see Configuring manual node reboot to define `KernelArgs`.
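After the nodes reboot, you can confirm on a Compute node that the IOMMU arguments were applied to the running kernel. This is a sketch for the Intel example; for AMD hardware, check for `amd_iommu=on` instead:

```
[root@overcloud-computevdpa-0 ~]# grep -o 'intel_iommu=on iommu=pt' /proc/cmdline
intel_iommu=on iommu=pt
```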
- Open your network environment file, and add the following configuration to define the physical network:

  ```yaml
  parameter_defaults:
    ...
    NeutronBridgeMappings:
      - <bridge_map_1>
      - <bridge_map_n>
    NeutronTunnelTypes: '<tunnel_types>'
    NeutronNetworkType: '<network_types>'
    NeutronNetworkVLANRanges:
      - <network_vlan_range_1>
      - <network_vlan_range_n>
  ```

  - Replace `<bridge_map_1>`, and all bridge mappings up to `<bridge_map_n>`, with the logical-to-physical bridge mappings that you want to use for the VDPA bridge. For example, `tenant:br-tenant`.
  - Replace `<tunnel_types>` with a comma-separated list of the tunnel types for the project network. For example, `geneve`.
  - Replace `<network_types>` with a comma-separated list of the project network types for the Networking service (neutron). The first type that you specify is used until all available networks are exhausted, then the next type is used. For example, `geneve,vlan`.
  - Replace `<network_vlan_range_1>`, and all physical network and VLAN ranges up to `<network_vlan_range_n>`, with the ML2 and OVN VLAN mapping ranges that you want to support. For example, `datacentre:1:1000`, `tenant:100:299`.
- Add your custom environment files to the stack with your other environment files and deploy the overcloud:

  ```
  (undercloud)$ openstack overcloud deploy --templates \
   -e [your environment files] \
   -r /home/stack/templates/roles_data_vdpa.yaml \
   -e /home/stack/templates/network-environment.yaml \
   -e /home/stack/templates/vdpa_compute.yaml \
   -e /home/stack/templates/overcloud-baremetal-deployed.yaml \
   -e /home/stack/templates/node-info.yaml
  ```
Verification
- Create an instance with a VDPA device. For more information, see Creating an instance with a VDPA interface in the Creating and managing instances guide.
- Log in to the instance as a cloud user. For more information, see Connecting to an instance in the Creating and managing instances guide.
- Verify that the VDPA device is accessible from the instance:

  ```
  $ openstack port show vdpa-port
  ```
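From inside the instance, you can also check that the interface is bound to the virtio network driver, which indicates that the VDPA abstraction is in place. This is a sketch; the interface name `eth0` is an assumption and depends on the guest operating system:

```
$ ethtool -i eth0 | grep '^driver'
driver: virtio_net
```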