Chapter 4. Configure DPDK Accelerated Open vSwitch (OVS) for Networking
This chapter covers DPDK with Open vSwitch installation and tuning within the Red Hat OpenStack Platform environment.
See Planning Your OVS-DPDK Deployment to understand the parameters used to configure OVS-DPDK.
This guide provides examples for CPU assignments, memory allocation, and NIC configurations that may vary from your topology and use case. See the Network Functions Virtualization Product Guide and the Network Functions Virtualization Planning Guide to understand the hardware and configuration options.
Do not edit or change isolated_cores or other values in /etc/tuned/cpu-partitioning-variables.conf that are modified by these director heat templates.
In the following procedures, you need to:

- Update the appropriate network-environment.yaml file to include parameters for kernel arguments and DPDK arguments.
- Update the compute.yaml file to include the bridge for DPDK interface parameters.
- Update the controller.yaml file to include the same bridge details for DPDK interface parameters.
- Run the overcloud_deploy.sh script to deploy the overcloud with the DPDK parameters.
For deployments that use hugepages, you also need to configure reserved_huge_pages. See How to set reserved_huge_pages in /etc/nova/nova.conf in Red Hat OpenStack Platform 10 for details.
Before you begin the procedure, ensure that you have the following:
- Red Hat OpenStack Platform 10 with Red Hat Enterprise Linux 7.5
- OVS-DPDK 2.9
- A tested NIC. For a list of tested NICs for NFV, see Tested NICs.
Red Hat OpenStack Platform 10 with OVS 2.9 operates in OVS client mode for OVS-DPDK deployments.
4.1. Naming Conventions
We recommend that you follow a consistent naming convention when you use custom roles in your OpenStack deployment, especially with multiple nodes. This naming convention can assist you when creating the following files and configurations:
- instackenv.json - To differentiate between nodes with different hardware or NIC capabilities.

  ```
  "name": "computeovsdpdk-0"
  ```

- roles_data.yaml - To differentiate between compute-based roles that support DPDK.

  ```
  ComputeOvsDpdk
  ```

- network-environment.yaml - To ensure that you match the custom role to the correct flavor name.

  ```
  OvercloudComputeOvsDpdkFlavor: computeovsdpdk
  ```

- nic-config file names - To differentiate NIC yaml files for compute nodes that support DPDK interfaces.

- Flavor creation - To help you match a flavor and capabilities:profile value to the appropriate bare metal node and custom role.

  ```
  # openstack flavor create --id auto --ram 4096 --disk 40 --vcpus 4 computeovsdpdk
  # openstack flavor set --property "cpu_arch"="x86_64" --property "capabilities:boot_option"="local" --property "capabilities:profile"="computeovsdpdk" computeovsdpdk
  ```

- Bare metal node - To ensure that you match the bare metal node with the appropriate hardware and capabilities:profile value.

  ```
  # openstack baremetal node update computeovsdpdk-0 add properties/capabilities='profile:computeovsdpdk,boot_option:local'
  ```

The flavor name does not have to match the flavor's capabilities:profile value, but the flavor's capabilities:profile value must match the profile in the bare metal node's properties/capabilities. All three use computeovsdpdk in this example.
Ensure that all your nodes used for a custom role and profile have the same CPU, RAM, and PCI hardware topology.
4.2. Configure Two-Port OVS-DPDK Data Plane Bonding with VLAN Tunnelling

This section covers the procedures to configure and deploy OVS-DPDK with two data plane ports in an OVS-DPDK bond, with control plane Linux bonding, for your OpenStack environment.
4.2.1. Modify first-boot.yaml
Modify the first-boot.yaml file to set up OVS and DPDK parameters and to configure tuned for CPU affinity.
If you have included the following lines in the first-boot.yaml file in a previous deployment, remove these lines for Red Hat OpenStack Platform 10 with Open vSwitch 2.9.
1. Add additional resources.
2. Set the DPDK parameters.
3. Set the tuned configuration to provide CPU affinity.
4. Set the kernel arguments.
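As a hedged sketch of the kernel-argument step, a first-boot.yaml fragment might look like the following. The resource names and the str_replace layout follow common TripleO first-boot templates and are assumptions, not the exact content of the product template:

```yaml
# Hypothetical first-boot.yaml fragment: append ComputeKernelArgs to the
# default grub configuration at first boot, then reboot to apply.
resources:
  userdata:
    type: OS::Heat::MultipartMime
    properties:
      parts:
        - config: {get_resource: compute_kernel_args}

  compute_kernel_args:
    type: OS::Heat::SoftwareConfig
    properties:
      config:
        str_replace:
          template: |
            #!/bin/bash
            sed 's/^\(GRUB_CMDLINE_LINUX=".*\)"/\1 KERNEL_ARGS"/g' -i /etc/default/grub
            grub2-mkconfig -o /etc/grub2.cfg
            reboot
          params:
            KERNEL_ARGS: {get_param: ComputeKernelArgs}
```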
4.2.2. Modify network-environment.yaml
Add the custom resources for OVS-DPDK under resource_registry.

```yaml
resource_registry:
  # Specify the relative/absolute path to the config files you want to use to override the default.
  OS::TripleO::Compute::Net::SoftwareConfig: nic-configs/compute.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: nic-configs/controller.yaml
  OS::TripleO::NodeUserData: first-boot.yaml
```

Under parameter_defaults, disable the tunnel type (set the value to ""), and set the network type to vlan.

```yaml
NeutronTunnelTypes: ''
NeutronNetworkType: 'vlan'
```

Under parameter_defaults, map the physical network to the virtual bridge.

```yaml
NeutronBridgeMappings: 'tenant:br-link0'
```

Under parameter_defaults, set the OpenStack Networking ML2 and Open vSwitch VLAN mapping range.

```yaml
NeutronNetworkVLANRanges: 'tenant:22:22,tenant:25:25'
```

This example sets the VLAN ranges on the physical network.

Under parameter_defaults, set the OVS-DPDK configuration parameters.

Note: NeutronDpdkCoreList and NeutronDpdkMemoryChannels are the required settings for this procedure. Attempting to deploy DPDK without appropriate values causes the deployment to fail or leads to an unstable deployment.

Provide a list of cores that can be used as DPDK poll mode drivers (PMDs), in the format [allowed_pattern: "'[0-9,-]+'"].

```yaml
NeutronDpdkCoreList: "'2,22,3,23'"
```

Note: To avoid failures in creating guest instances, you must assign at least one CPU (with sibling thread) to the DPDK PMD on each NUMA node, with or without DPDK NICs present.
To optimize OVS-DPDK performance, consider the following options:

- Select CPUs associated with the NUMA node of the DPDK interface. Use cat /sys/class/net/<interface>/device/numa_node to list the NUMA node associated with an interface, and use lscpu to list the CPUs associated with that NUMA node.
- Group CPU siblings together (in the case of hyper-threading). Use cat /sys/devices/system/cpu/<cpu>/topology/thread_siblings_list to find the siblings of a CPU.
- Reserve CPU 0 for the host process.
- Isolate CPUs assigned to the PMD so that the host process does not use these CPUs. Use NovaVcpuPinSet to exclude CPUs assigned to the PMD from Compute scheduling.

Provide the number of memory channels, in the format [allowed_pattern: "[0-9]+"].

```yaml
NeutronDpdkMemoryChannels: "4"
```

Set the memory pre-allocated from the hugepage pool for each socket.

```yaml
NeutronDpdkSocketMemory: "'3072,1024'"
```

This is a comma-separated string, in ascending order of the CPU socket. This example assumes a 2 NUMA node configuration and pre-allocates 3072 MB of huge pages for socket 0 and 1024 MB for socket 1. If you have a single NUMA node system, set this value to 3072,0.
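As an illustrative cross-check (a sketch, not an official tool), the relationship between NeutronDpdkSocketMemory and the boot-time huge pages used in this procedure (hugepages=32 with 1 GB pages) can be expressed as:

```shell
# Confirm that the per-socket DPDK memory fits within the huge pages
# reserved at boot, and how many pages remain for instances.
SOCKET_MEMORY="3072,1024"   # NeutronDpdkSocketMemory in MB, per NUMA node
HUGEPAGE_MB=1024            # 1 GB huge pages (hugepagesz=1G)
BOOT_HUGEPAGES=32           # hugepages= kernel argument

total_mb=0
for mb in $(echo "$SOCKET_MEMORY" | tr ',' ' '); do
  total_mb=$((total_mb + mb))
done
# Round up to whole huge pages.
dpdk_pages=$(( (total_mb + HUGEPAGE_MB - 1) / HUGEPAGE_MB ))
vm_pages=$((BOOT_HUGEPAGES - dpdk_pages))
echo "OVS-DPDK consumes ${dpdk_pages} huge pages; ${vm_pages} remain for instances"
```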
Set the DPDK driver type for OVS bridges.

```yaml
NeutronDpdkDriverType: "vfio-pci"
```

Under parameter_defaults, set the vhost-user socket directory for OVS.

```yaml
NeutronVhostuserSocketDir: "/var/lib/vhost_sockets"
```

Under parameter_defaults, reserve the RAM for the host processes.

```yaml
NovaReservedHostMemory: 4096
```

Under parameter_defaults, set a comma-separated list or range of physical CPU cores to reserve for virtual machine processes.

```yaml
NovaVcpuPinSet: "4-19,24-39"
```

Under parameter_defaults, list the applicable filters. The Nova scheduler applies these filters in the order they are listed. List the most restrictive filters first to make the filtering process for the nodes more efficient.

Under parameter_defaults, add the ComputeKernelArgs parameters to add these parameters to the default grub file at first boot.

```yaml
ComputeKernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on"
```

Note: These huge pages are consumed by the virtual machines, and also by OVS-DPDK through the NeutronDpdkSocketMemory parameter as shown in this procedure. The number of huge pages available for the virtual machines is the boot parameter minus the NeutronDpdkSocketMemory.

You need to add hw:mem_page_size=1GB to the flavor you associate with the DPDK instance. If you do not, the instance does not get a DHCP allocation.

Under parameter_defaults, set a list or range of physical CPU cores to be tuned. The given argument is appended to the tuned cpu-partitioning profile.

```yaml
HostIsolatedCoreList: "2-19,22-39"
```

Under parameter_defaults, set the logical OVS-DPDK cores list. These cores must be mutually exclusive from the list of cores in NeutronDpdkCoreList and NovaVcpuPinSet.

```yaml
HostCpusList: "'0,20,1,21'"
```
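For reference, the parameter_defaults settings from this procedure combine into a fragment like the following (values repeat the examples above):

```yaml
parameter_defaults:
  NeutronTunnelTypes: ''
  NeutronNetworkType: 'vlan'
  NeutronBridgeMappings: 'tenant:br-link0'
  NeutronNetworkVLANRanges: 'tenant:22:22,tenant:25:25'
  NeutronDpdkCoreList: "'2,22,3,23'"
  NeutronDpdkMemoryChannels: "4"
  NeutronDpdkSocketMemory: "'3072,1024'"
  NeutronDpdkDriverType: "vfio-pci"
  NeutronVhostuserSocketDir: "/var/lib/vhost_sockets"
  NovaReservedHostMemory: 4096
  NovaVcpuPinSet: "4-19,24-39"
  ComputeKernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on"
  HostIsolatedCoreList: "2-19,22-39"
  HostCpusList: "'0,20,1,21'"
```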
4.2.3. Modify controller.yaml
1. Create a separate provisioning interface.
2. Create the control plane Linux bond for an isolated network.
3. Assign VLANs to this Linux bond.
4. Create the OVS bridge for access to the neutron-dhcp-agent and neutron-metadata-agent services.
4.2.4. Modify compute.yaml
Modify the default compute.yaml file and make the following changes:
1. Create a separate provisioning interface.
2. Create the control plane Linux bond for an isolated network.
3. Assign VLANs to this Linux bond.
4. Set a bridge with two DPDK ports in an OVS-DPDK data plane bond to link to the controller.

Note: To include multiple DPDK devices, repeat the type code section for each DPDK device you want to add.

Note: When using OVS-DPDK, all bridges on the same Compute node should be of type ovs_user_bridge. The director may accept the configuration, but Red Hat OpenStack Platform does not support mixing ovs_bridge and ovs_user_bridge on the same node.
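A hedged sketch of the bridge from step 4; the NIC names nic7 and nic8 are assumptions, so substitute the interfaces from your topology:

```yaml
# Illustrative nic-config fragment: an OVS user bridge containing a
# two-port OVS-DPDK bond.
- type: ovs_user_bridge
  name: br-link0
  use_dhcp: false
  members:
    - type: ovs_dpdk_bond
      name: dpdkbond0
      members:
        - type: ovs_dpdk_port
          name: dpdk0
          members:
            - type: interface
              name: nic7
        - type: ovs_dpdk_port
          name: dpdk1
          members:
            - type: interface
              name: nic8
```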
4.2.5. Run the overcloud_deploy.sh Script
The following example defines the openstack overcloud deploy command for the OVS-DPDK environment within a Bash script:

- /usr/share/openstack-tripleo-heat-templates/environments/neutron-ovs-dpdk.yaml is the location of the default neutron-ovs-dpdk.yaml file, which enables the OVS-DPDK parameters for the Compute role.
- /home/stack/<relative-directory>/network-environment.yaml is the path for the network-environment.yaml file. Use this file to overwrite the default values from the neutron-ovs-dpdk.yaml file.
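A minimal sketch of such a script, assuming only the two environment files listed above. The command is echoed as a dry run so it can be inspected first; append any additional -e files your deployment needs before running it, and <relative-directory> is a placeholder from this guide:

```shell
#!/bin/bash
# Assemble the deploy command (dry run: printed, not executed).
DEPLOY_CMD="openstack overcloud deploy --templates \
 -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-ovs-dpdk.yaml \
 -e /home/stack/<relative-directory>/network-environment.yaml"
echo "$DEPLOY_CMD"
```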
This configuration of OVS-DPDK does not support security groups or live migration.
4.3. Configure Single-Port OVS-DPDK with VXLAN Tunnelling

This section covers the procedures to configure single-port OVS-DPDK with control plane Linux bonding and VXLAN tunnelling for your OpenStack environment.
4.3.1. Modify first-boot.yaml
Modify the first-boot.yaml file to set up OVS and DPDK parameters and to configure tuned for CPU affinity.
If you have included the following lines in the first-boot.yaml file in a previous deployment, remove these lines for Red Hat OpenStack Platform 10 with Open vSwitch 2.9.
1. Add additional resources.
2. Set the DPDK parameters.
3. Set the tuned configuration to provide CPU affinity.
4. Set the kernel arguments.
4.3.2. Modify network-environment.yaml
Add the custom resources for OVS-DPDK under resource_registry.

```yaml
resource_registry:
  # Specify the relative/absolute path to the config files you want to use to override the default.
  OS::TripleO::Compute::Net::SoftwareConfig: nic-configs/compute.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: nic-configs/controller.yaml
  OS::TripleO::NodeUserData: first-boot.yaml
```

Under parameter_defaults, set the tunnel type and the tenant type to vxlan.

```yaml
NeutronTunnelTypes: 'vxlan'
NeutronNetworkType: 'vxlan'
```

Under parameter_defaults, set the OVS-DPDK configuration parameters.

Note: NeutronDpdkCoreList and NeutronDpdkMemoryChannels are the required settings for this procedure. Attempting to deploy DPDK without appropriate values causes the deployment to fail or leads to an unstable deployment.

Provide a list of cores that can be used as DPDK poll mode drivers (PMDs), in the format [allowed_pattern: "'[0-9,-]+'"].

```yaml
NeutronDpdkCoreList: "'2,22,3,23'"
```

Note: To avoid failures in creating guest instances, you must assign at least one CPU (with sibling thread) to the DPDK PMD on each NUMA node, with or without DPDK NICs present.
To optimize OVS-DPDK performance, consider the following options:

- Select CPUs associated with the NUMA node of the DPDK interface. Use cat /sys/class/net/<interface>/device/numa_node to list the NUMA node associated with an interface, and use lscpu to list the CPUs associated with that NUMA node.
- Group CPU siblings together (in the case of hyper-threading). Use cat /sys/devices/system/cpu/<cpu>/topology/thread_siblings_list to find the siblings of a CPU.
- Reserve CPU 0 for the host process.
- Isolate CPUs assigned to the PMD so that the host process does not use these CPUs. Use NovaVcpuPinSet to exclude CPUs assigned to the PMD from Compute scheduling.

Provide the number of memory channels, in the format [allowed_pattern: "[0-9]+"].

```yaml
NeutronDpdkMemoryChannels: "4"
```

Set the memory pre-allocated from the hugepage pool for each socket.

```yaml
NeutronDpdkSocketMemory: "'3072,1024'"
```

This is a comma-separated string, in ascending order of the CPU socket. If you have a single NUMA node system, set this value to 3072,0.
Set the DPDK driver type for OVS bridges.

```yaml
NeutronDpdkDriverType: "vfio-pci"
```

Under parameter_defaults, set the vhost-user socket directory for OVS.

```yaml
NeutronVhostuserSocketDir: "/var/lib/vhost_sockets"
```

Under parameter_defaults, reserve the RAM for the host processes.

```yaml
NovaReservedHostMemory: 4096
```

Under parameter_defaults, set a comma-separated list or range of physical CPU cores to reserve for virtual machine processes.

```yaml
NovaVcpuPinSet: "4-19,24-39"
```

Under parameter_defaults, list the applicable filters. The Nova scheduler applies these filters in the order they are listed. List the most restrictive filters first to make the filtering process for the nodes more efficient.

Under parameter_defaults, add the ComputeKernelArgs parameters to add these parameters to the default grub file at first boot.

```yaml
ComputeKernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on"
```

Note: These huge pages are consumed by the virtual machines, and also by OVS-DPDK through the NeutronDpdkSocketMemory parameter as shown in this procedure. The number of huge pages available for the virtual machines is the boot parameter minus the NeutronDpdkSocketMemory.

You need to add hw:mem_page_size=1GB to the flavor you associate with the DPDK instance. If you do not, the instance does not get a DHCP allocation.

Under parameter_defaults, set a list or range of physical CPU cores to be tuned. The given argument is appended to the tuned cpu-partitioning profile.

```yaml
HostIsolatedCoreList: "2-19,22-39"
```

Under parameter_defaults, set the logical OVS-DPDK cores list. These cores must be mutually exclusive from the list of cores in NeutronDpdkCoreList and NovaVcpuPinSet.

```yaml
HostCpusList: "'0,20,1,21'"
```
4.3.3. Modify controller.yaml
1. Create a separate provisioning interface.
2. Create the control plane Linux bond for an isolated network.
3. Assign VLANs to this Linux bond.
4. Create the OVS bridge for access to the neutron-dhcp-agent and neutron-metadata-agent services.
4.3.4. Modify compute.yaml
Create the compute-ovs-dpdk.yaml file from the default compute.yaml file and make the following changes:
1. Create a separate provisioning interface.
2. Create the control plane Linux bond for an isolated network.
3. Assign VLANs to this Linux bond.
4. Set a bridge with a DPDK port to link to the controller.

Note: To include multiple DPDK devices, repeat the type code section for each DPDK device you want to add.

Note: When using OVS-DPDK, all bridges on the same Compute node should be of type ovs_user_bridge. The director may accept the configuration, but Red Hat OpenStack Platform does not support mixing ovs_bridge and ovs_user_bridge on the same node.
4.3.5. Run the overcloud_deploy.sh Script
The following example defines the openstack overcloud deploy command for the OVS-DPDK environment within a Bash script:
- /usr/share/openstack-tripleo-heat-templates/environments/neutron-ovs-dpdk.yaml is the location of the default neutron-ovs-dpdk.yaml file, which enables the OVS-DPDK parameters for the Compute role.
- /home/stack/<relative-directory>/network-environment.yaml is the path for the network-environment.yaml file. Use this file to overwrite the default values from the neutron-ovs-dpdk.yaml file.
This configuration of OVS-DPDK does not support security groups or live migration.
4.4. Set the MTU Value for OVS-DPDK Interfaces
Red Hat OpenStack Platform supports jumbo frames for OVS-DPDK. To set the MTU value for jumbo frames you must:

- Set the global MTU value for networking in the network-environment.yaml file.
- Set the physical DPDK port MTU value in the compute.yaml file. This value is also used by the vhost user interface.
- Set the MTU value within any guest instances on the Compute node to ensure that you have a comparable MTU value from end to end in your configuration.
VXLAN packets include an extra 50 bytes in the header. Calculate your MTU requirements based on these additional header bytes. For example, an MTU value of 9000 means the VXLAN tunnel MTU value is 8950 to account for these extra bytes.
You do not need any special configuration for the physical NIC since the NIC is controlled by the DPDK PMD and has the same MTU value set by the compute.yaml file. You cannot set an MTU value larger than the maximum value supported by the physical NIC.
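The VXLAN overhead calculation described above can be sketched as:

```shell
# VXLAN encapsulation adds roughly 50 bytes of header overhead, so the
# tunnel MTU is derived from the physical MTU.
PHYSICAL_MTU=9000
VXLAN_OVERHEAD=50
TUNNEL_MTU=$((PHYSICAL_MTU - VXLAN_OVERHEAD))
echo "VXLAN tunnel MTU: ${TUNNEL_MTU}"
```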
To set the MTU value for OVS-DPDK interfaces:
Set the NeutronGlobalPhysnetMtu parameter in the network-environment.yaml file.

```yaml
parameter_defaults:
  # Global MTU configuration on Neutron
  NeutronGlobalPhysnetMtu: 9000
```

Note: Ensure that the NeutronDpdkSocketMemory value in the network-environment.yaml file is large enough to support jumbo frames. See Memory Parameters for details.

Set the MTU value on the bridge to the Compute node in the controller.yaml file.
To set the MTU values for the OVS-DPDK interfaces and bonds in the compute.yaml file:
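A hedged sketch of such a compute.yaml fragment (the NIC name nic7 is an assumption; substitute the interface from your topology):

```yaml
# Illustrative fragment: set mtu on the OVS-DPDK bond and its ports.
- type: ovs_user_bridge
  name: br-link0
  use_dhcp: false
  members:
    - type: ovs_dpdk_bond
      name: dpdkbond0
      mtu: 9000
      members:
        - type: ovs_dpdk_port
          name: dpdk0
          mtu: 9000
          members:
            - type: interface
              name: nic7
```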
4.5. Set Multiqueue for OVS-DPDK Interfaces
To set the number of queues for an OVS-DPDK port on the Compute node, modify the compute.yaml file as follows:
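A hedged sketch of such a fragment, using the os-net-config rx_queue option (the NIC name nic7 is an assumption):

```yaml
# Illustrative fragment: configure two receive queues on a DPDK port.
- type: ovs_dpdk_port
  name: dpdk0
  mtu: 9000
  rx_queue: 2
  members:
    - type: interface
      name: nic7
```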
4.6. Known Limitations
There are certain limitations when configuring OVS-DPDK with Red Hat OpenStack Platform 10 for the NFV use case:
- Use Linux bonds for control plane networks. Ensure both PCI devices used in the bond are on the same NUMA node for optimum performance. Neutron Linux bridge configuration is not supported by Red Hat.
- Huge pages are required for every instance running on the hosts with OVS-DPDK. If huge pages are not present in the guest, the interface will appear but not function.
- There is a performance degradation of services that use tap devices, because these devices do not support DPDK. For example, services such as DVR, FWaaS, and LBaaS use tap devices.
- With OVS-DPDK, you can enable DVR with the netdev datapath, but this has poor performance and is not suitable for a production environment. DVR uses kernel namespaces and tap devices to perform the routing.
- To ensure that DVR routing performs well with OVS-DPDK, you need to use a controller, such as OpenDaylight (ODL), that implements routing as OpenFlow rules. With OVS-DPDK, OpenFlow routing removes the bottleneck introduced by the Linux kernel interfaces so that the full performance of the datapath is maintained.
- When using OVS-DPDK, all bridges on the Compute node should be of type ovs_user_bridge. The director may accept the configuration, but Red Hat OpenStack Platform does not support mixing ovs_bridge and ovs_user_bridge.
4.7. Create a Flavor and Deploy an Instance for OVS-DPDK
After you have completed configuring OVS-DPDK for your Red Hat OpenStack Platform deployment with NFV, you can create a flavor and deploy an instance with the following steps:
Create an aggregate group and add a host to it for OVS-DPDK. Define metadata, for example "aggregate_instance_extra_specs:dpdk"="true", that matches flavor metadata.

```
# openstack aggregate create dpdk_group
# openstack aggregate set --property \
  "aggregate_instance_extra_specs:dpdk"="true" dpdk_group
# openstack aggregate add host dpdk_group compute-ovs-dpdk-0.localdomain
```

Create a flavor.

```
# openstack flavor create <flavor> --ram <MB> --disk <GB> --vcpus <#>
```

Set additional flavor properties. Note that the defined metadata, "aggregate_instance_extra_specs:dpdk"="true", matches the defined metadata on the DPDK aggregate.

```
# openstack flavor set --property "aggregate_instance_extra_specs:dpdk"="true" \
  --property hw:cpu_policy=dedicated \
  --property hw:mem_page_size=large <flavor>
```

Create the network.

```
# openstack network create net1 --provider-physical-network tenant --provider-network-type vlan --provider-segment <VLAN-ID>
```

Create the subnet.

```
# openstack subnet create subnet1 --network net1 --subnet-range 192.0.2.0/24 --dhcp
```

Deploy an instance.

```
# openstack server create --flavor <flavor> --image <glance_image> --nic net-id=net1 <name>
```
You have now deployed an instance for the OVS-DPDK with NFV use case.
To improve performance, you can pin the QEMU emulator thread to an alternate core.

Determine which cores are used as vCPUs for your instance.

Select the core you want to pin the emulator thread to. Ensure that the selected core is from the NovaVcpuPinSet.

```
# virsh emulatorpin <vm-name> --cpulist 2
```

Note: The pCPU associated with the emulator pin thread consumes one vCPU (two threads if hyper-threading is enabled) from the NovaVcpuPinSet.
4.8. Troubleshooting the Configuration
This section describes the steps to troubleshoot the OVS-DPDK configuration.
Review the bridge configuration, and confirm that the bridge was created with datapath_type=netdev. For example:

Review the OVS service by confirming that neutron-ovs-agent is configured to start automatically.

```
# systemctl status neutron-openvswitch-agent.service
neutron-openvswitch-agent.service - OpenStack Neutron Open vSwitch Agent
   Loaded: loaded (/usr/lib/systemd/system/neutron-openvswitch-agent.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2015-11-23 14:49:31 AEST; 25min ago
```

If the service is having trouble starting, you can view any related messages.

```
# journalctl -t neutron-openvswitch-agent.service
```

Confirm that the PMD CPU mask of ovs-dpdk pins the PMD threads to the intended CPUs. In the case of hyper-threading, use sibling CPUs.

For example, take CPU 4:

```
# cat /sys/devices/system/cpu/cpu4/topology/thread_siblings_list
4,20
```

So, using CPUs 4 and 20:

```
# ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x100010
```

Display the status.
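The pmd-cpu-mask value used in the troubleshooting steps can be derived from the CPU list; a minimal sketch, assuming CPU IDs below 64:

```shell
# Compute the hex PMD CPU mask by setting one bit per CPU ID.
CPUS="4,20"   # sibling pair from the example above
mask=0
for cpu in $(echo "$CPUS" | tr ',' ' '); do
  mask=$((mask | (1 << cpu)))
done
printf 'pmd-cpu-mask=0x%x\n' "$mask"
```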