Chapter 6. Deploying SR-IOV technologies
In your Red Hat OpenStack Platform NFV deployment, you can achieve higher performance with single root I/O virtualization (SR-IOV), when you configure direct access from your instances to a shared PCIe resource through virtual resources.
6.1. Prerequisites
- For details on how to install and configure the undercloud before deploying the overcloud, see the Director Installation and Usage Guide.
Do not manually edit any values in /etc/tuned/cpu-partitioning-variables.conf that director heat templates modify.
6.2. Configuring SR-IOV
The following CPU assignments, memory allocation, and NIC configurations are examples, and might be different from your use case.
Generate the built-in `ComputeSriov` role to define nodes in the OpenStack cluster that run the `NeutronSriovAgent` and `NeutronSriovHostConfig` services in addition to the default compute services:

```
# openstack overcloud roles generate \
 -o /home/stack/templates/roles_data.yaml \
 Controller ComputeSriov
```

To prepare the SR-IOV containers, include the `neutron-sriov.yaml` and `roles_data.yaml` files when you generate the `overcloud_images.yaml` file:

```
sudo openstack tripleo container image prepare \
 --roles-file ~/templates/roles_data.yaml \
 -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-sriov.yaml \
 -e ~/containers-prepare-parameter.yaml \
 --output-env-file=/home/stack/templates/overcloud_images.yaml
```

For more information on container image preparation, see Director Installation and Usage.
Configure the parameters for the SR-IOV nodes under `parameter_defaults` appropriately for your cluster and your hardware configuration. Typically, you add these settings to the `network-environment.yaml` file:

```
NeutronNetworkType: 'vlan'
NeutronNetworkVLANRanges:
  - tenant:22:22
  - tenant:25:25
NeutronTunnelTypes: ''
```

In the same file, configure role-specific parameters for SR-IOV compute nodes.
Note: The `numvfs` parameter replaces the `NeutronSriovNumVFs` parameter in the network configuration templates. Red Hat does not support modification of the `NeutronSriovNumVFs` parameter or the `numvfs` parameter after deployment. If you modify either parameter after deployment, it might disrupt running instances that have an SR-IOV port on that physical function (PF). In this case, you must hard reboot these instances to make the SR-IOV PCI device available again. The `NovaVcpuPinSet` parameter is deprecated, and is replaced by `NovaComputeCpuDedicatedSet` for dedicated, pinned workflows.

Configure the SR-IOV enabled interfaces in the `compute.yaml` network configuration template. To create SR-IOV virtual functions (VFs), configure the interfaces as standalone NICs.

Ensure that the list of default filters includes the value `AggregateInstanceExtraSpecsFilter`:

```
NovaSchedulerDefaultFilters: ['AvailabilityZoneFilter','ComputeFilter','ComputeCapabilitiesFilter','ImagePropertiesFilter','ServerGroupAntiAffinityFilter','ServerGroupAffinityFilter','PciPassthroughFilter','AggregateInstanceExtraSpecsFilter']
```
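The interface example for the standalone-NIC configuration in `compute.yaml` did not survive extraction. The following is a minimal sketch only, assuming hypothetical interface names (`p4p1`, `p4p2`) and typical `os-net-config` attributes; substitute the SR-IOV capable NICs and settings for your environment:

```yaml
# Sketch only: interface names and MTU are placeholders for your environment
- type: interface
  name: p4p1
  mtu: 9000
  use_dhcp: false
  defroute: false
- type: interface
  name: p4p2
  mtu: 9000
  use_dhcp: false
  defroute: false
```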
Run the `overcloud_deploy.sh` script.
6.3. NIC partitioning
This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.
You can configure single root I/O virtualization (SR-IOV) so that a Red Hat OpenStack Platform host can use virtual functions (VFs).
When you partition a single, high-speed NIC into multiple VFs, you can use the NIC for both control and data plane traffic. You can then apply a QoS (Quality of Service) priority value to VF interfaces as desired.
Procedure
Ensure that you complete the following steps when creating the templates for an overcloud deployment:
Use the interface type `sriov_pf` in an `os-net-config` role file to configure a physical function that the host can use:

```
- type: sriov_pf
  name: <interface name>
  use_dhcp: false
  numvfs: <number of vfs>
  promisc: <true/false> # optional (defaults to true)
```

Note: The
`numvfs` parameter replaces the `NeutronSriovNumVFs` parameter in the network configuration templates. Red Hat does not support modification of the `NeutronSriovNumVFs` parameter or the `numvfs` parameter after deployment. If you modify either parameter after deployment, it might disrupt running instances that have an SR-IOV port on that physical function (PF). In this case, you must hard reboot these instances to make the SR-IOV PCI device available again.

Use the interface type
`sriov_vf` to configure virtual functions in a bond that the host can use. The VLAN tag must be unique across all VFs that belong to a common PF device. You must assign VLAN tags to one of the following interface types:
- linux_bond
- ovs_bridge
- ovs_dpdk_port
- The applicable VF ID range starts at zero, and ends at the maximum number of VFs minus one.
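The bond example from the original was lost in extraction. The following is a minimal sketch only, assuming hypothetical device names (`p4p1`, `p4p2`) and VF IDs; adapt the bond name, bonding options, and members to your environment:

```yaml
# Sketch only: a Linux bond built from one VF on each PF; names and IDs are placeholders
- type: linux_bond
  name: bond_api
  bonding_options: "mode=active-backup"
  use_dhcp: false
  members:
    - type: sriov_vf
      device: p4p1
      vfid: 1
    - type: sriov_vf
      device: p4p2
      vfid: 1
```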
To reserve virtual functions for VMs, use the `NovaPCIPassthrough` parameter. You must assign a regex value to the `address` parameter to identify the VFs that you want to pass through to Nova for use by virtual instances, and not by the host.

You can obtain these values from `lspci`; if necessary, boot a compute node into a Linux environment to obtain this information.

The `lspci` command returns the address of each device in the format `<bus>:<slot>.<function>`, with a `<domain>:` prefix when you use `lspci -D`. Enter these address values in the `NovaPCIPassthrough` parameter in the following format:

```
NovaPCIPassthrough:
  - physical_network: "sriovnet2"
    address: {"domain": ".*", "bus": "06", "slot": "11", "function": "[5-7]"}
  - physical_network: "sriovnet2"
    address: {"domain": ".*", "bus": "06", "slot": "10", "function": "[5]"}
```

Ensure that IOMMU is enabled on all nodes that require NIC partitioning. For example, if you want NIC partitioning for Compute nodes, enable IOMMU using the `KernelArgs` parameter for that role:

```
parameter_defaults:
  ComputeParameters:
    KernelArgs: "intel_iommu=on iommu=pt"
```
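To illustrate how an `lspci -D` style address maps onto the `domain`, `bus`, `slot`, and `function` fields of `NovaPCIPassthrough`, here is a hypothetical shell sketch (not part of the product tooling) that splits one address apart:

```shell
# Hypothetical helper: split an "lspci -D"-style address such as
# 0000:06:11.5 into the fields used by NovaPCIPassthrough.
addr="0000:06:11.5"
domain="${addr%%:*}"                   # everything before the first ":" -> 0000
rest="${addr#*:}"                      # 06:11.5
bus="${rest%%:*}"                      # 06
slot="${rest#*:}"; slot="${slot%%.*}"  # 11
func="${addr##*.}"                     # 5
printf 'address: {"domain": "%s", "bus": "%s", "slot": "%s", "function": "%s"}\n' \
  "$domain" "$bus" "$slot" "$func"
```

For `0000:06:11.5` this prints `address: {"domain": "0000", "bus": "06", "slot": "11", "function": "5"}`; in the heat parameter you would typically generalize the fields into regex ranges as shown above.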
Validation
Check the number of VFs:

```
[root@overcloud-compute-0 heat-admin]# cat /sys/class/net/p4p1/device/sriov_numvfs
10
[root@overcloud-compute-0 heat-admin]# cat /sys/class/net/p4p2/device/sriov_numvfs
10
```

Check the Linux bonds.
List the OVS bonds.

Show the OVS connections.
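The command listings for these three checks were lost in extraction. One plausible set of standard Linux and Open vSwitch commands (the bond name is a placeholder, and these are not necessarily the exact commands from the original):

```
# cat /proc/net/bonding/<bond_name>
# ovs-appctl bond/show
# ovs-vsctl show
```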
If you used `NovaPCIPassthrough` to pass VFs through to instances, test by deploying an SR-IOV instance.
The following bond modes are supported:
- balance-slb
- active-backup
6.4. Configuring OVS hardware offload
This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.
The procedure for OVS hardware offload configuration shares many of the same steps as configuring SR-IOV.
Procedure
Generate the `ComputeSriov` role:

```
openstack overcloud roles generate -o roles_data.yaml Controller ComputeSriov
```

Configure the `physical_network` parameter to match your environment:

- For VLAN, set the `physical_network` parameter to the name of the network that you create in neutron after deployment. This value should also be in `NeutronBridgeMappings`.
- For VXLAN, set the `physical_network` parameter to the string value `null`.

Ensure that the `OvsHwOffload` parameter under role-specific parameters has a value of `true`.
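The example block for this setting was lost in extraction. A minimal sketch, assuming the `ComputeSriovParameters` role-specific parameter group for the role generated above:

```yaml
parameter_defaults:
  ComputeSriovParameters:
    OvsHwOffload: true
```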
Ensure that the list of default filters includes `NUMATopologyFilter`:

```
NovaSchedulerDefaultFilters: ['RetryFilter','AvailabilityZoneFilter','ComputeFilter','ComputeCapabilitiesFilter','ImagePropertiesFilter','ServerGroupAntiAffinityFilter','ServerGroupAffinityFilter','PciPassthroughFilter','NUMATopologyFilter']
```

Configure one or more network interfaces intended for hardware offload in the
`compute-sriov.yaml` configuration file.

Note: Do not use the `NeutronSriovNumVFs` parameter when configuring Open vSwitch hardware offload. The `numvfs` parameter specifies the number of VFs in a network configuration file used by `os-net-config`.

Note: Do not configure Mellanox network interfaces as a nic-config interface type `ovs-vlan`, because this prevents tunnel endpoints such as VXLAN from passing traffic due to driver limitations.

Include the `ovs-hw-offload.yaml` file in the `overcloud deploy` command.
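The deploy command listing was lost in extraction. A sketch of how the environment file is typically included (the roles file path and the other arguments are placeholders for your deployment):

```
# openstack overcloud deploy --templates \
  -r /home/stack/templates/roles_data.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ovs-hw-offload.yaml \
  <other environment files and options>
```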
6.4.1. Verifying OVS hardware offload
Confirm that a PCI device is in `switchdev` mode:

```
# devlink dev eswitch show pci/0000:03:00.0
pci/0000:03:00.0: mode switchdev inline-mode none encap enable
```

Verify whether offload is enabled in OVS:

```
# ovs-vsctl get Open_vSwitch . other_config:hw-offload
"true"
```
6.5. Deploying an instance for SR-IOV
Use host aggregates to separate high performance compute hosts. For information on creating host aggregates and associated flavors for scheduling, see Creating host aggregates.
Pinned CPU instances can be located on the same Compute node as unpinned instances. For more information, see Configuring CPU pinning on the Compute node in the Instances and Images Guide.
Deploy an instance for single root I/O virtualization (SR-IOV) by performing the following steps:
Create a flavor:

```
# openstack flavor create <flavor> --ram <MB> --disk <GB> --vcpus <#>
```

Tip: You can specify the NUMA affinity policy for PCI passthrough devices and SR-IOV interfaces by adding the extra spec
`hw:pci_numa_affinity_policy` to your flavor. For more information, see Update flavor metadata in the Instances and Images Guide.

Create the network:
```
# openstack network create net1 --provider-physical-network tenant --provider-network-type vlan --provider-segment <VLAN-ID>
# openstack subnet create subnet1 --network net1 --subnet-range 192.0.2.0/24 --dhcp
```

Create the port.
Use vnic-type `direct` to create an SR-IOV virtual function (VF) port:

```
# openstack port create --network net1 --vnic-type direct sriov_port
```

Use the following command to create a virtual function with hardware offload:

```
# openstack port create --network net1 --vnic-type direct --binding-profile '{"capabilities": ["switchdev"]}' sriov_hwoffload_port
```

Use vnic-type `direct-physical` to create an SR-IOV physical function (PF) port:

```
# openstack port create --network net1 --vnic-type direct-physical sriov_port
```
Deploy an instance:

```
# openstack server create --flavor <flavor> --image <image> --nic port-id=<id> <instance name>
```
6.6. Creating host aggregates
For better performance, deploy guests that have CPU pinning and huge pages. You can schedule high performance instances on a subset of hosts by matching aggregate metadata with flavor metadata.
Procedure
Ensure that the `AggregateInstanceExtraSpecsFilter` value is included in the `scheduler_default_filters` parameter in the `nova.conf` file. You can set this configuration through the heat parameter `NovaSchedulerDefaultFilters` under role-specific parameters before deployment:

```
ComputeOvsDpdkSriovParameters:
  NovaSchedulerDefaultFilters: ['AggregateInstanceExtraSpecsFilter','RetryFilter','AvailabilityZoneFilter','ComputeFilter','ComputeCapabilitiesFilter','ImagePropertiesFilter','ServerGroupAntiAffinityFilter','ServerGroupAffinityFilter','PciPassthroughFilter','NUMATopologyFilter']
```

Note: To add this parameter to the configuration of an existing cluster, you can add it to the heat templates, and run the original deployment script again.
Create an aggregate group for SR-IOV, and add the relevant hosts. Define metadata, for example `sriov=true`, that matches the defined flavor metadata:

```
# openstack aggregate create sriov_group
# openstack aggregate add host sriov_group compute-sriov-0.localdomain
# openstack aggregate set --property sriov=true sriov_group
```

Create a flavor:
```
# openstack flavor create <flavor> --ram <MB> --disk <GB> --vcpus <#>
```

Set additional flavor properties. Note that the defined metadata, `sriov=true`, matches the defined metadata on the SR-IOV aggregate:

```
# openstack flavor set --property aggregate_instance_extra_specs:sriov=true --property hw:cpu_policy=dedicated --property hw:mem_page_size=1GB <flavor>
```