Network Functions Virtualization Configuration Guide
Configuring the Network Functions Virtualization (NFV) OpenStack Deployment
Abstract
Preface
Red Hat OpenStack Platform provides the foundation to build a private or public Infrastructure-as-a-Service (IaaS) cloud on top of Red Hat Enterprise Linux. It offers a massively scalable, fault-tolerant platform for the development of cloud-enabled workloads.
This guide describes the steps to configure SR-IOV and DPDK-accelerated Open vSwitch (OVS) using the Red Hat OpenStack Platform 10 director for NFV deployments.
Chapter 1. Overview
Network Functions Virtualization (NFV) is a software-based solution that virtualizes a network function on general-purpose, cloud-based infrastructure. NFV allows the Communication Service Provider to move away from traditional hardware.
This guide provides examples for CPU assignments, memory allocation, and NIC configurations that may vary from your topology and use case. See the Network Functions Virtualization Product Guide and the Network Functions Virtualization Planning Guide to understand the hardware and configuration options.
Red Hat OpenStack Platform 10 director allows you to isolate the overcloud networks (for example, external, tenant, internal API, and so on). You can deploy a network on a single network interface or distribute it over multiple host network interfaces. Network isolation in a Red Hat OpenStack Platform 10 installation is configured using template files. If you do not provide template files, all the service networks are deployed on the provisioning network. There are multiple types of template configuration files:
- network-environment.yaml - Contains the network details, such as subnets and IP address ranges, that are used to configure the network on the overcloud nodes. This file also contains the settings that override the default parameter values for various scenarios.
- Host templates (for example, compute.yaml, controller.yaml, and so on) - Define the network interface configuration for the overcloud nodes.
- first-boot.yaml - Provides various configuration steps, for example:
  - Grub arguments.
  - DPDK parameters.
  - Tuned installation and configuration. The tuned package contains the tuned daemon, which monitors the use of system components and dynamically tunes system settings based on that monitoring information. To provide proper CPU affinity configuration in OVS-DPDK and SR-IOV deployments, use the tuned cpu-partitioning profile.
These heat template files are located at /usr/share/openstack-tripleo-heat-templates/ on the undercloud node.
For samples of these heat template files for NFV, see the Sample YAML Files.
NFV configuration makes use of YAML files. See YAML in a Nutshell for an introduction to the YAML file format.
The following sections provide more details on how to configure the heat template files for NFV using the Red Hat OpenStack Platform director.
1.1. Composable Roles
With Red Hat OpenStack Platform 10, you can use composable roles to create custom deployment roles for NFV. Composable roles allow you to add or remove services from each role. For more information on Composable Roles, see Composable Roles and Services.
To configure composable roles:
- Copy and modify the roles-data.yaml file to add the composable role for OVS-DPDK or SR-IOV.
- Create an OpenStack flavor and assign the appropriate properties to that flavor.
- Associate this new flavor with a node.
- Update the appropriate network-environment.yaml file to include parameters for kernel arguments and DPDK or SR-IOV arguments.
- Run the overcloud_deploy.sh script to deploy the overcloud with the composable roles.
Chapter 2. Updating Red Hat OpenStack Platform with NFV
There are additional considerations and steps needed to update Red Hat OpenStack Platform when you have OVS-DPDK configured. The steps are covered in Director-Based Environments: Performing Updates to Minor Versions in the Upgrading Red Hat OpenStack Platform Guide.
Chapter 3. Configure SR-IOV Support for Virtual Networking

This chapter covers the configuration of Single Root Input/Output Virtualization (SR-IOV) within the Red Hat OpenStack Platform 10 environment using the director.
This guide provides examples for CPU assignments, memory allocation, and NIC configurations that may vary from your topology and use case. See the Network Functions Virtualization Product Guide and the Network Functions Virtualization Planning Guide to understand the hardware and configuration options.
Do not edit or change isolated_cores or other values in /etc/tuned/cpu-partitioning-variables.conf that are modified by these director heat templates.
In the following procedure, you need to update the network-environment.yaml file to include parameters for kernel arguments, SR-IOV driver, PCI passthrough and so on. You must also update the compute.yaml file to include the SR-IOV interface parameters, and run the overcloud_deploy.sh script to deploy the overcloud with the SR-IOV parameters.
3.1. Configure Two-port SR-IOV with VLAN Tunnelling
This section describes the YAML files you need to modify to configure SR-IOV with two ports that use VLAN tunnelling for your OpenStack environment.
3.1.1. Modify first-boot.yaml
If you have included the following lines in the first-boot.yaml file in a previous deployment, remove these lines for Red Hat OpenStack Platform 10 with Open vSwitch 2.9.
- Set the tuned configuration to enable CPU affinity.
- Set the kernel arguments (see the sketch after this list).
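The template bodies for these steps were elided from this page. Below is a minimal, hedged sketch of a first-boot.yaml fragment that applies the kernel arguments; the ComputeKernelArgs default is taken from Section 3.1.2, while the resource wiring follows the usual heat userdata pattern and should be checked against the sample files in Appendix A.

heat_template_version: 2014-10-16

parameters:
  ComputeKernelArgs:
    description: Kernel arguments for the Compute node (value from Section 3.1.2)
    type: string
    default: "default_hugepagesz=1GB hugepagesz=1G hugepages=12 intel_iommu=on iommu=pt"

resources:
  userdata:
    type: OS::Heat::MultipartMime
    properties:
      parts:
      - config: {get_resource: compute_kernel_args}

  compute_kernel_args:
    type: OS::Heat::SoftwareConfig
    properties:
      config:
        str_replace:
          template: |
            #!/bin/bash
            # Append the kernel arguments to the default grub file and reboot
            # so that huge pages and the IOMMU are available at boot.
            sed -i 's/^\(GRUB_CMDLINE_LINUX=".*\)"/\1 KERNEL_ARGS"/' /etc/default/grub
            grub2-mkconfig -o /etc/grub2.cfg
            reboot
          params:
            KERNEL_ARGS: {get_param: ComputeKernelArgs}

outputs:
  OS::stack_id:
    value: {get_resource: userdata}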
3.1.2. Modify network-environment.yaml
Add first-boot.yaml under resource_registry to set the CPU tuning, as in the sketch below.
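The resource_registry block was elided here; a minimal sketch, mirroring the block shown in Section 4.2.2 (relative paths are assumptions):

resource_registry:
  OS::TripleO::Compute::Net::SoftwareConfig: nic-configs/compute.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: nic-configs/controller.yaml
  OS::TripleO::NodeUserData: first-boot.yaml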
Under parameter_defaults, disable the tunnel type (set the value to ""), and set the network type to vlan.

NeutronTunnelTypes: ''
NeutronNetworkType: 'vlan'
Under parameter_defaults, map the Open vSwitch physical network to the bridge.

NeutronBridgeMappings: 'tenant:br-link0'
Under parameter_defaults, set the OpenStack Networking ML2 and Open vSwitch VLAN mapping range.

NeutronNetworkVLANRanges: 'tenant:22:22,tenant:25:25'
Under parameter_defaults, set the SR-IOV configuration parameters.

Enable the SR-IOV mechanism driver (sriovnicswitch).

NeutronMechanismDrivers: "openvswitch,sriovnicswitch"

Configure the Compute pci_passthrough_whitelist parameter, and set devname for the SR-IOV interface. The whitelist sets the PCI devices available to instances.

NovaPCIPassthrough:
  - devname: "p7p1"
    physical_network: "tenant"
  - devname: "p7p2"
    physical_network: "tenant"

Specify the physical network and SR-IOV interface in the format PHYSICAL_NETWORK:PHYSICAL_DEVICE. All physical networks listed in the network_vlan_ranges on the server should have mappings to the appropriate interfaces on each agent.

NeutronPhysicalDevMappings: "tenant:p7p1,tenant:p7p2"

Provide the number of Virtual Functions (VFs) to be reserved for each SR-IOV interface. Red Hat OpenStack Platform supports the number of VFs supported by the NIC vendor. See Deployment Limits for Red Hat OpenStack Platform for other related details. This example reserves 5 VFs for each of the SR-IOV interfaces:

NeutronSriovNumVFs: "p7p1:5,p7p2:5"

Note: Changing the NeutronSriovNumVFs parameter within a running environment is known to cause a permanent outage for all running instances that have an SR-IOV port on that PF. Unless you hard reboot these instances, the SR-IOV PCI device will not be visible to the instance.
Under parameter_defaults, reserve the RAM for the host processes.

NovaReservedHostMemory: 4096

Under parameter_defaults, set a comma-separated list or range of physical CPU cores to reserve for virtual machine processes.

NovaVcpuPinSet: "1-19,21-39"

Under parameter_defaults, list the applicable filters (a hedged example follows). The Nova scheduler applies these filters in the order they are listed. List the most restrictive filters first to make the filtering process for the nodes more efficient.
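The filter list itself was elided from this page. A hedged example of a typical NFV filter set (verify the exact list against your own environment files before deploying):

# Hedged example only; the exact filter list for your deployment may differ.
NovaSchedulerDefaultFilters: "RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter,AggregateInstanceExtraSpecsFilter"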
Under parameter_defaults, define the ComputeKernelArgs parameters to be included in the default grub file at first boot.

ComputeKernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=12 intel_iommu=on iommu=pt"

Note: You need to add hw:mem_page_size=1GB to the flavor you associate with the DPDK instance. If you do not do this, the instance does not get a DHCP allocation.

Under parameter_defaults, set a list or range of physical CPU cores to be tuned. The given argument is appended to the tuned cpu-partitioning profile.

HostIsolatedCoreList: "1-19,21-39"
3.1.3. Modify controller.yaml
- Create the Linux bond for an isolated network.
- Assign VLANs to this Linux bond.
- Create the OVS bridge for access to the neutron-dhcp-agent and neutron-metadata-agent services (a combined sketch of these steps follows the list).
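The template bodies were elided here. A hedged os-net-config sketch of the three steps above (NIC names, VLAN IDs, and parameter wiring are assumptions; compare with the sample controller.yaml in Appendix A):

- type: linux_bond
  name: bond_api
  bonding_options: "mode=active-backup"
  use_dhcp: false
  dns_servers: {get_param: DnsServers}
  members:
  - type: interface
    name: nic2
    primary: true
  - type: interface
    name: nic3

- type: vlan
  vlan_id: {get_param: InternalApiNetworkVlanID}
  device: bond_api
  addresses:
  - ip_netmask: {get_param: InternalApiIpSubnet}

- type: ovs_bridge
  name: br-link0
  use_dhcp: false
  members:
  - type: interface
    name: nic4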
3.1.4. Modify compute.yaml
- Create the Linux bond for an isolated network.
- Assign VLANs to this Linux bond.
- Set the two SR-IOV interfaces by adding them to the compute.yaml file (a hedged sketch follows the list).
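A hedged sketch of the SR-IOV interface entries, reusing the p7p1/p7p2 device names from Section 3.1.2 (adjust to your own NIC naming):

- type: interface
  name: p7p1
  use_dhcp: false
  defroute: false

- type: interface
  name: p7p2
  use_dhcp: false
  defroute: false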
3.1.5. Run the overcloud_deploy.sh Script
The following example defines the openstack overcloud deploy command for the VLAN environment.
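The script body was elided from this page; a hedged sketch (the network-isolation environment file and the log option are assumptions):

#!/bin/bash
openstack overcloud deploy \
  --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-sriov.yaml \
  -e /home/stack/<relative-directory>/network-environment.yaml \
  --log-file overcloud_deploy.log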
- /usr/share/openstack-tripleo-heat-templates/environments/neutron-sriov.yaml is the location of the default neutron-sriov.yaml file, which enables the SR-IOV parameters in the Compute node.
- /home/stack/<relative-directory>/network-environment.yaml is the path for the network-environment.yaml file. The default neutron-sriov.yaml values can be overridden in the network-environment.yaml file.
3.2. Create a Flavor and Deploy an Instance for SR-IOV
After you have completed configuring SR-IOV for your Red Hat OpenStack Platform deployment with NFV, you need to create a flavor and deploy an instance by performing the following steps.
Create an aggregate group and add a host to it for SR-IOV. Define metadata, for example, "aggregate_instance_extra_specs:sriov"="true", that matches flavor metadata.

# openstack aggregate create sriov_group
# openstack aggregate set --property \
  "aggregate_instance_extra_specs:sriov"="true" sriov_group
# openstack aggregate add host sriov_group compute-sriov-0.localdomain

Create a flavor.

# openstack flavor create <flavor> --ram <MB> --disk <GB> --vcpus <#>

Set additional flavor properties. Note that the defined metadata, "aggregate_instance_extra_specs:sriov"="true", matches the defined metadata on the SR-IOV aggregate.

# openstack flavor set --property "aggregate_instance_extra_specs:sriov"="true" \
  --property hw:cpu_policy=dedicated \
  --property hw:mem_page_size=large <flavor>

Create the network.

# openstack network create net1 --provider-physical-network tenant --provider-network-type vlan --provider-segment <VLAN-ID>

Create the subnet.

# openstack subnet create subnet1 --network net1 --subnet-range 192.0.2.0/24 --dhcp

Create the port.

Use vnic-type direct to create an SR-IOV VF port.

# openstack port create --network net1 --vnic-type direct sriov_port

Use vnic-type direct-physical to create an SR-IOV PF port.

# openstack port create --network net1 --vnic-type direct-physical sriov_port

Deploy an instance.

# openstack server create --flavor <flavor> --image <glance_image> --nic port-id=sriov_port <name>
You have now deployed an instance for the SR-IOV with NFV use case.
Chapter 4. Configure DPDK Accelerated Open vSwitch (OVS) for Networking

This chapter covers DPDK with Open vSwitch installation and tuning within the Red Hat OpenStack Platform environment.
See Planning Your OVS-DPDK Deployment to understand the parameters used to configure OVS-DPDK.
This guide provides examples for CPU assignments, memory allocation, and NIC configurations that may vary from your topology and use case. See the Network Functions Virtualization Product Guide and the Network Functions Virtualization Planning Guide to understand the hardware and configuration options.
Do not edit or change isolated_cores or other values in /etc/tuned/cpu-partitioning-variables.conf that are modified by these director heat templates.
In the following procedures, you need to:
- Update the appropriate network-environment.yaml file to include parameters for kernel arguments and DPDK arguments.
- Update the compute.yaml file to include the bridge for DPDK interface parameters.
- Update the controller.yaml file to include the same bridge details for DPDK interface parameters.
- Run the overcloud_deploy.sh script to deploy the overcloud with the DPDK parameters.
For deployments that use hugepages, you also need to configure reserved_huge_pages. See How to set reserved_huge_pages in /etc/nova/nova.conf in Red Hat OpenStack Platform 10 for details.
Before you begin the procedure, ensure that you have the following:
- Red Hat OpenStack Platform 10 with Red Hat Enterprise Linux 7.5
- OVS-DPDK 2.9
- Tested NIC. For a list of tested NICs for NFV, see Tested NICs.
Red Hat OpenStack Platform 10 with OVS 2.9 operates in OVS client mode for OVS-DPDK deployments.
4.1. Naming Conventions
We recommend that you follow a consistent naming convention when you use custom roles in your OpenStack deployment, especially with multiple nodes. This naming convention can assist you when creating the following files and configurations:
- instackenv.json - To differentiate between nodes with different hardware or NIC capabilities.

  "name":"computeovsdpdk-0"

- roles_data.yaml - To differentiate between compute-based roles that support DPDK.

  ComputeOvsDpdk

- network-environment.yaml - To ensure that you match the custom role to the correct flavor name.

  OvercloudComputeOvsDpdkFlavor: computeovsdpdk

- nic-config file names - To differentiate NIC yaml files for compute nodes that support DPDK interfaces.
- Flavor creation - To help you match a flavor and capabilities:profile value to the appropriate bare metal node and custom role.

  # openstack flavor create --id auto --ram 4096 --disk 40 --vcpus 4 computeovsdpdk
  # openstack flavor set --property "cpu_arch"="x86_64" --property "capabilities:boot_option"="local" --property "capabilities:profile"="computeovsdpdk" computeovsdpdk

- Bare metal node - To ensure that you match the bare metal node with the appropriate hardware and capabilities:profile value.

  # openstack baremetal node update computeovsdpdk-0 add properties/capabilities='profile:computeovsdpdk,boot_option:local'
The flavor name does not have to match the capabilities:profile value for the flavor, but the flavor capabilities:profile value must match the bare metal node properties/capabilities profile value. All three use computeovsdpdk in this example.
Ensure that all your nodes used for a custom role and profile have the same CPU, RAM, and PCI hardware topology.
4.2. Configure Two-Port OVS-DPDK Data Plane Bonding with VLAN Tunnelling

This section covers the procedures to configure and deploy OVS-DPDK with two data plane ports in an OVS-DPDK bond, with control plane Linux bonding, for your OpenStack environment.
4.2.1. Modify first-boot.yaml
Modify the first-boot.yaml file to set up OVS and DPDK parameters and to configure tuned for CPU affinity.
If you have included the following lines in the first-boot.yaml file in a previous deployment, remove these lines for Red Hat OpenStack Platform 10 with Open vSwitch 2.9.
- Add additional resources.
- Set the DPDK parameters.
- Set the tuned configuration to provide CPU affinity (a hedged sketch of this step follows the list).
- Set the kernel arguments.
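The script bodies were elided from this page. As one hedged illustration, the tuned step typically reduces to a shell fragment like the following, assuming the tuned-profiles-cpu-partitioning package and a $TUNED_CORES variable carrying the HostIsolatedCoreList value:

#!/bin/bash
# Install the partitioning profile, isolate the listed cores, and activate it.
yum install -y tuned-profiles-cpu-partitioning
echo "isolated_cores=$TUNED_CORES" >> /etc/tuned/cpu-partitioning-variables.conf
tuned-adm profile cpu-partitioning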
4.2.2. Modify network-environment.yaml
Add the custom resources for OVS-DPDK under resource_registry.

resource_registry:
  # Specify the relative/absolute path to the config files you want to use to override the default.
  OS::TripleO::Compute::Net::SoftwareConfig: nic-configs/compute.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: nic-configs/controller.yaml
  OS::TripleO::NodeUserData: first-boot.yaml
Under parameter_defaults, disable the tunnel type (set the value to ""), and set the network type to vlan.

NeutronTunnelTypes: ''
NeutronNetworkType: 'vlan'

Under parameter_defaults, map the physical network to the virtual bridge.

NeutronBridgeMappings: 'tenant:br-link0'

Under parameter_defaults, set the OpenStack Networking ML2 and Open vSwitch VLAN mapping range.

NeutronNetworkVLANRanges: 'tenant:22:22,tenant:25:25'

This example sets the VLAN ranges on the physical network.
Under parameter_defaults, set the OVS-DPDK configuration parameters.

Note: NeutronDpdkCoreList and NeutronDpdkMemoryChannels are the required settings for this procedure. Attempting to deploy DPDK without appropriate values causes the deployment to fail or leads to unstable deployments.

Provide a list of cores that can be used as DPDK poll mode drivers (PMDs) in the format [allowed_pattern: "'[0-9,-]+'"].

NeutronDpdkCoreList: "'2,22,3,23'"

Note: You must assign at least one CPU (with sibling thread) on each NUMA node, with or without DPDK NICs present, for DPDK PMD to avoid failures in creating guest instances.

To optimize OVS-DPDK performance, consider the following options:

- Select CPUs associated with the NUMA node of the DPDK interface. Use cat /sys/class/net/<interface>/device/numa_node to list the NUMA node associated with an interface, and use lscpu to list the CPUs associated with that NUMA node.
- Group CPU siblings together (in case of hyper-threading). Use cat /sys/devices/system/cpu/<cpu>/topology/thread_siblings_list to find the siblings of a CPU.
- Reserve CPU 0 for the host process.
- Isolate CPUs assigned to PMDs so that the host process does not use these CPUs.
- Use NovaVcpuPinSet to exclude CPUs assigned to PMDs from Compute scheduling.

Provide the number of memory channels in the format [allowed_pattern: "[0-9]+"].

NeutronDpdkMemoryChannels: "4"

Set the memory pre-allocated from the hugepage pool for each socket.

NeutronDpdkSocketMemory: "'3072,1024'"

This is a comma-separated string, in ascending order of the CPU socket. This example assumes a 2 NUMA node configuration and sets socket 0 to pre-allocate 3072 MB of huge pages and socket 1 to pre-allocate 1024 MB. If you have a single NUMA node system, set this value to 3072,0.

Set the DPDK driver type for OVS bridges.

NeutronDpdkDriverType: "vfio-pci"
Under parameter_defaults, set the vhost-user socket directory for OVS.

NeutronVhostuserSocketDir: "/var/lib/vhost_sockets"

Under parameter_defaults, reserve the RAM for the host processes.

NovaReservedHostMemory: 4096

Under parameter_defaults, set a comma-separated list or range of physical CPU cores to reserve for virtual machine processes.

NovaVcpuPinSet: "4-19,24-39"

Under parameter_defaults, list the applicable filters. The Nova scheduler applies these filters in the order they are listed. List the most restrictive filters first to make the filtering process for the nodes more efficient.

Under parameter_defaults, add the ComputeKernelArgs parameters to the default grub file at first boot.

ComputeKernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on"

Note: These huge pages are consumed by the virtual machines, and also by OVS-DPDK through the NeutronDpdkSocketMemory parameter as shown in this procedure. The number of huge pages available for the virtual machines is the boot parameter minus the NeutronDpdkSocketMemory.

You need to add hw:mem_page_size=1GB to the flavor you associate with the DPDK instance. If you do not do this, the instance does not get a DHCP allocation.

Under parameter_defaults, set a list or range of physical CPU cores to be tuned. The given argument is appended to the tuned cpu-partitioning profile.

HostIsolatedCoreList: "2-19,22-39"

Under parameter_defaults, set the logical OVS-DPDK cores list. These cores must be mutually exclusive from the list of cores in NeutronDpdkCoreList and NovaVcpuPinSet.

HostCpusList: "'0,20,1,21'"
4.2.3. Modify controller.yaml
- Create a separate provisioning interface (a hedged sketch of this step follows the list).
- Create the control plane Linux bond for an isolated network.
- Assign VLANs to this Linux bond.
- Create the OVS bridge for access to the neutron-dhcp-agent and neutron-metadata-agent services.
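The template bodies were elided here. A hedged sketch of the provisioning interface step, following the standard tripleo nic-config pattern (the NIC name and parameter names are assumptions):

- type: interface
  name: nic1
  use_dhcp: false
  addresses:
  - ip_netmask:
      list_join:
      - /
      - - {get_param: ControlPlaneIp}
        - {get_param: ControlPlaneSubnetCidr}
  routes:
  - ip_netmask: 169.254.169.254/32
    next_hop: {get_param: EC2MetadataIp}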
4.2.4. Modify compute.yaml
Modify the default compute.yaml file and make the following changes:
- Create a separate provisioning interface.
- Create the control plane Linux bond for an isolated network.
- Assign VLANs to this Linux bond.
- Set a bridge with two DPDK ports in an OVS-DPDK data plane bond to link to the controller (see the sketch after these notes).

Note: To include multiple DPDK devices, repeat the type code section for each DPDK device you want to add.

Note: When using OVS-DPDK, all bridges on the same Compute node should be of type ovs_user_bridge. The director may accept the configuration, but Red Hat OpenStack Platform does not support mixing ovs_bridge and ovs_user_bridge on the same node.
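A hedged sketch of the data plane bridge with an OVS-DPDK bond (the NIC names are assumptions; compare with the sample compute-ovs-dpdk.yaml in Appendix B):

- type: ovs_user_bridge
  name: br-link0
  use_dhcp: false
  members:
  - type: ovs_dpdk_bond
    name: dpdkbond0
    members:
    - type: ovs_dpdk_port
      name: dpdk0
      members:
      - type: interface
        name: nic7
    - type: ovs_dpdk_port
      name: dpdk1
      members:
      - type: interface
        name: nic8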
4.2.5. Run the overcloud_deploy.sh Script
The following example defines the openstack overcloud deploy command for the OVS-DPDK environment within a bash script:
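The script body was elided from this page; a hedged sketch (the network-isolation environment file is an assumption):

#!/bin/bash
openstack overcloud deploy \
  --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-ovs-dpdk.yaml \
  -e /home/stack/<relative-directory>/network-environment.yaml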
- /usr/share/openstack-tripleo-heat-templates/environments/neutron-ovs-dpdk.yaml is the location of the default neutron-ovs-dpdk.yaml file, which enables the OVS-DPDK parameters for the Compute role.
- /home/stack/<relative-directory>/network-environment.yaml is the path for the network-environment.yaml file. Use this file to overwrite the default values from the neutron-ovs-dpdk.yaml file.
This configuration of OVS-DPDK does not support security groups and live migrations.
4.3. Configure Single-Port OVS-DPDK with VXLAN Tunnelling

This section covers the procedures to configure single-port OVS-DPDK with control plane Linux bonding and VXLAN tunnelling for your OpenStack environment.
4.3.1. Modify first-boot.yaml
Modify the first-boot.yaml file to set up OVS and DPDK parameters and to configure tuned for CPU affinity.
If you have included the following lines in the first-boot.yaml file in a previous deployment, remove these lines for Red Hat OpenStack Platform 10 with Open vSwitch 2.9.
- Add additional resources.
- Set the DPDK parameters.
- Set the tuned configuration to provide CPU affinity.
- Set the kernel arguments.
4.3.2. Modify network-environment.yaml
Add the custom resources for OVS-DPDK under resource_registry.

resource_registry:
  # Specify the relative/absolute path to the config files you want to use to override the default.
  OS::TripleO::Compute::Net::SoftwareConfig: nic-configs/compute.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: nic-configs/controller.yaml
  OS::TripleO::NodeUserData: first-boot.yaml
Under parameter_defaults, set the tunnel type and the tenant type to vxlan.

NeutronTunnelTypes: 'vxlan'
NeutronNetworkType: 'vxlan'
Under parameter_defaults, set the OVS-DPDK configuration parameters.

Note: NeutronDpdkCoreList and NeutronDpdkMemoryChannels are the required settings for this procedure. Attempting to deploy DPDK without appropriate values causes the deployment to fail or leads to unstable deployments.

Provide a list of cores that can be used as DPDK poll mode drivers (PMDs) in the format [allowed_pattern: "'[0-9,-]+'"].

NeutronDpdkCoreList: "'2,22,3,23'"

Note: You must assign at least one CPU (with sibling thread) on each NUMA node, with or without DPDK NICs present, for DPDK PMD to avoid failures in creating guest instances.
To optimize OVS-DPDK performance, consider the following options:

- Select CPUs associated with the NUMA node of the DPDK interface. Use cat /sys/class/net/<interface>/device/numa_node to list the NUMA node associated with an interface, and use lscpu to list the CPUs associated with that NUMA node.
- Group CPU siblings together (in case of hyper-threading). Use cat /sys/devices/system/cpu/<cpu>/topology/thread_siblings_list to find the siblings of a CPU.
- Reserve CPU 0 for the host process.
- Isolate CPUs assigned to PMDs so that the host process does not use these CPUs.
- Use NovaVcpuPinSet to exclude CPUs assigned to PMDs from Compute scheduling.

Provide the number of memory channels in the format [allowed_pattern: "[0-9]+"].

NeutronDpdkMemoryChannels: "4"

Set the memory pre-allocated from the hugepage pool for each socket.

NeutronDpdkSocketMemory: "'3072,1024'"

This is a comma-separated string, in ascending order of the CPU socket. If you have a single NUMA node system, set this value to 3072,0.

Set the DPDK driver type for OVS bridges.

NeutronDpdkDriverType: "vfio-pci"
Under parameter_defaults, set the vhost-user socket directory for OVS.

NeutronVhostuserSocketDir: "/var/lib/vhost_sockets"

Under parameter_defaults, reserve the RAM for the host processes.

NovaReservedHostMemory: 4096

Under parameter_defaults, set a comma-separated list or range of physical CPU cores to reserve for virtual machine processes.

NovaVcpuPinSet: "4-19,24-39"

Under parameter_defaults, list the applicable filters. The Nova scheduler applies these filters in the order they are listed. List the most restrictive filters first to make the filtering process for the nodes more efficient.

Under parameter_defaults, add the ComputeKernelArgs parameters to the default grub file at first boot.

ComputeKernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on"

Note: These huge pages are consumed by the virtual machines, and also by OVS-DPDK through the NeutronDpdkSocketMemory parameter as shown in this procedure. The number of huge pages available for the virtual machines is the boot parameter minus the NeutronDpdkSocketMemory.

You need to add hw:mem_page_size=1GB to the flavor you associate with the DPDK instance. If you do not do this, the instance does not get a DHCP allocation.

Under parameter_defaults, set a list or range of physical CPU cores to be tuned. The given argument is appended to the tuned cpu-partitioning profile.

HostIsolatedCoreList: "2-19,22-39"

Under parameter_defaults, set the logical OVS-DPDK cores list. These cores must be mutually exclusive from the list of cores in NeutronDpdkCoreList and NovaVcpuPinSet.

HostCpusList: "'0,20,1,21'"
4.3.3. Modify controller.yaml
- Create a separate provisioning interface.
- Create the control plane Linux bond for an isolated network.
- Assign VLANs to this Linux bond.
- Create the OVS bridge for access to the neutron-dhcp-agent and neutron-metadata-agent services.
4.3.4. Modify compute.yaml
Create the compute-ovs-dpdk.yaml file from the default compute.yaml file and make the following changes:
- Create a separate provisioning interface.
- Create the control plane Linux bond for an isolated network.
- Assign VLANs to this Linux bond.
- Set a bridge with a DPDK port to link to the controller.

Note: To include multiple DPDK devices, repeat the type code section for each DPDK device you want to add.

Note: When using OVS-DPDK, all bridges on the same Compute node should be of type ovs_user_bridge. The director may accept the configuration, but Red Hat OpenStack Platform does not support mixing ovs_bridge and ovs_user_bridge on the same node.
4.3.5. Run the overcloud_deploy.sh Script
The following example defines the openstack overcloud deploy command for the OVS-DPDK environment within a Bash script:
- /usr/share/openstack-tripleo-heat-templates/environments/neutron-ovs-dpdk.yaml is the location of the default neutron-ovs-dpdk.yaml file, which enables the OVS-DPDK parameters for the Compute role.
- /home/stack/<relative-directory>/network-environment.yaml is the path for the network-environment.yaml file. Use this file to overwrite the default values from the neutron-ovs-dpdk.yaml file.
This configuration of OVS-DPDK does not support security groups and live migrations.
4.4. Set the MTU Value for OVS-DPDK Interfaces
Red Hat OpenStack Platform supports jumbo frames for OVS-DPDK. To set the MTU value for jumbo frames you must:
- Set the global MTU value for networking in the network-environment.yaml file.
- Set the physical DPDK port MTU value in the compute.yaml file. This value is also used by the vhost user interface.
- Set the MTU value within any guest instances on the Compute node to ensure that you have a comparable MTU value from end to end in your configuration.
VXLAN packets include an extra 50 bytes in the header. Calculate your MTU requirements based on these additional header bytes. For example, an MTU value of 9000 means the VXLAN tunnel MTU value is 8950 to account for these extra bytes.
You do not need any special configuration for the physical NIC since the NIC is controlled by the DPDK PMD and has the same MTU value set by the compute.yaml file. You cannot set an MTU value larger than the maximum value supported by the physical NIC.
To set the MTU value for OVS-DPDK interfaces:
Set the NeutronGlobalPhysnetMtu parameter in the network-environment.yaml file.

parameter_defaults:
  # Global MTU configuration on Neutron
  NeutronGlobalPhysnetMtu: 9000

Note: Ensure that the NeutronDpdkSocketMemory value in the network-environment.yaml file is large enough to support jumbo frames. See Memory Parameters for details.

Set the MTU value on the bridge to the Compute node in the controller.yaml file.
To set the MTU values for the OVS-DPDK interfaces and bonds in the compute.yaml file:
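The compute.yaml fragment was elided from this page; a hedged sketch with an MTU of 9000 set on the bond and its member ports (the NIC names are assumptions):

- type: ovs_user_bridge
  name: br-link0
  use_dhcp: false
  members:
  - type: ovs_dpdk_bond
    name: dpdkbond0
    mtu: 9000
    members:
    - type: ovs_dpdk_port
      name: dpdk0
      mtu: 9000
      members:
      - type: interface
        name: nic7
    - type: ovs_dpdk_port
      name: dpdk1
      mtu: 9000
      members:
      - type: interface
        name: nic8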
4.5. Set Multiqueue for OVS-DPDK Interfaces
To set the number of queues for an OVS-DPDK port on the Compute node, modify the compute.yaml file as follows:
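The fragment was elided from this page; a hedged sketch that sets two receive queues on the OVS-DPDK bond through the os-net-config rx_queue option (the NIC names are assumptions):

- type: ovs_user_bridge
  name: br-link0
  use_dhcp: false
  members:
  - type: ovs_dpdk_bond
    name: dpdkbond0
    rx_queue: 2
    members:
    - type: ovs_dpdk_port
      name: dpdk0
      members:
      - type: interface
        name: nic7
    - type: ovs_dpdk_port
      name: dpdk1
      members:
      - type: interface
        name: nic8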
4.6. Known Limitations
There are certain limitations when configuring OVS-DPDK with Red Hat OpenStack Platform 10 for the NFV use case:
- Use Linux bonds for control plane networks. Ensure both PCI devices used in the bond are on the same NUMA node for optimum performance. Neutron Linux bridge configuration is not supported by Red Hat.
- Huge pages are required for every instance running on the hosts with OVS-DPDK. If huge pages are not present in the guest, the interface will appear but not function.
- There is a performance degradation of services that use tap devices, because these devices do not support DPDK. For example, services such as DVR, FWaaS, and LBaaS use tap devices.
  - With OVS-DPDK, you can enable DVR with the netdev datapath, but this has poor performance and is not suitable for a production environment. DVR uses kernel namespaces and tap devices to perform the routing.
  - To ensure that DVR routing performs well with OVS-DPDK, you need to use a controller such as ODL, which implements routing as OpenFlow rules. With OVS-DPDK, OpenFlow routing removes the bottleneck introduced by the Linux kernel interfaces so that the full performance of the datapath is maintained.
- When using OVS-DPDK, all bridges on the Compute node should be of type ovs_user_bridge. The director may accept the configuration, but Red Hat OpenStack Platform does not support mixing ovs_bridge and ovs_user_bridge.
4.7. Create a Flavor and Deploy an Instance for OVS-DPDK
After you have completed configuring OVS-DPDK for your Red Hat OpenStack Platform deployment with NFV, you can create a flavor and deploy an instance with the following steps:
Create an aggregate group and add a host to it for OVS-DPDK. Define metadata, for example, "aggregate_instance_extra_specs:dpdk"="true", that matches flavor metadata.

# openstack aggregate create dpdk_group
# openstack aggregate set --property \
  "aggregate_instance_extra_specs:dpdk"="true" dpdk_group
# openstack aggregate add host dpdk_group compute-ovs-dpdk-0.localdomain

Create a flavor.

# openstack flavor create <flavor> --ram <MB> --disk <GB> --vcpus <#>

Set additional flavor properties. Note that the defined metadata, "aggregate_instance_extra_specs:dpdk"="true", matches the defined metadata on the DPDK aggregate.

# openstack flavor set --property "aggregate_instance_extra_specs:dpdk"="true" \
  --property hw:cpu_policy=dedicated \
  --property hw:mem_page_size=large <flavor>

Create the network.

# openstack network create net1 --provider-physical-network tenant --provider-network-type vlan --provider-segment <VLAN-ID>

Create the subnet.

# openstack subnet create subnet1 --network net1 --subnet-range 192.0.2.0/24 --dhcp

Deploy an instance.

# openstack server create --flavor <flavor> --image <glance_image> --nic net-id=net1 <name>
You have now deployed an instance for the OVS-DPDK with NFV use case.
To improve performance, you can pin the Qemu emulator thread to an alternate core.
- Determine which cores are used as vCPUs for your instance (a hedged sketch of this step follows below).
- Select the core you want to pin the emulator thread to. Ensure that the selected core is from the NovaVcpuPinSet.

# virsh emulatorpin <vm-name> --cpulist 2

Note: The pCPU associated with the emulator pin thread consumes one vCPU (two threads if hyperthreading is enabled) from the NovaVcpuPinSet.
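For the first step, a hedged way to inspect which host CPUs currently back the instance's vCPUs (the instance name is an assumption):

# virsh vcpupin <vm-name>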
4.8. Troubleshooting the Configuration
This section describes the steps to troubleshoot the OVS-DPDK configuration.
Review the bridge configuration, and confirm that the bridge was created with datapath_type=netdev (a hedged check follows this procedure).

Review the OVS service by confirming that the neutron-openvswitch-agent is configured to start automatically.

# systemctl status neutron-openvswitch-agent.service
neutron-openvswitch-agent.service - OpenStack Neutron Open vSwitch Agent
   Loaded: loaded (/usr/lib/systemd/system/neutron-openvswitch-agent.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2015-11-23 14:49:31 AEST; 25min ago

If the service is having trouble starting, you can view any related messages.

# journalctl -t neutron-openvswitch-agent.service

Confirm that the PMD CPU mask of ovs-dpdk is pinned to the CPUs. In case of hyper-threading, use sibling CPUs.

For example, take CPU4:

# cat /sys/devices/system/cpu/cpu4/topology/thread_siblings_list
4,20

So, using CPUs 4 and 20:

# ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x100010

Display the status.
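For the first step, a hedged check, assuming the bridge name br-link0 used elsewhere in this guide:

# ovs-vsctl get bridge br-link0 datapath_type
netdev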
Chapter 5. Configuring SR-IOV and DPDK Interfaces on the Same Compute Node

This section describes how to deploy SR-IOV and DPDK interfaces on the same Compute node.
This guide provides examples for CPU assignments, memory allocation, and NIC configurations that may vary from your topology and use case. See the Network Functions Virtualization Product Guide and the Network Functions Virtualization Planning Guide to understand the hardware and configuration options.
The process to create and deploy SR-IOV and DPDK interfaces on the same Compute node includes:
- Set the parameters for the SR-IOV role and OVS-DPDK in the network-environment.yaml file.
- Configure the compute.yaml file with an SR-IOV interface and a DPDK interface.
- Deploy the overcloud with this updated set of roles.
- Create the appropriate OpenStack flavor, networks, and ports to support these interface types.
We recommend the following network settings:
- Use floating IP addresses for the guest instances.
- Create a router and attach it to the DPDK VXLAN network (the management network).
- Use SR-IOV for the provider network.
- Boot the guest instance with two ports attached. We recommend you use cloud-init for the guest instance to set the default route for the management network.
- Add the floating IP address to the booted guest instance.
If needed, use SR-IOV bonding for the guest instance and ensure both SR-IOV interfaces exist on the same NUMA node for optimum performance.
You must install and configure the undercloud before you can deploy the compute node in the overcloud. See the Director Installation and Usage Guide for details.
Ensure that you create an OpenStack flavor that matches this custom role.
5.1. Modifying the first-boot.yaml file
Modify the first-boot.yaml file to set up OVS and DPDK parameters and to configure tuned for CPU affinity.
If you have included the following lines in the first-boot.yaml file in a previous deployment, remove these lines for Red Hat OpenStack Platform 10 with Open vSwitch 2.9.
- Add additional resources.
- Set the DPDK parameters.
- Set the tuned configuration to provide CPU affinity.
- Set the kernel arguments.
5.2. Configuring openvswitch for security groups

Dataplane interfaces require a high degree of performance from a stateful firewall. To protect these interfaces, consider deploying a telco-grade firewall (VNF).

Controlplane interfaces can be configured by setting the NeutronOVSFirewallDriver parameter to openvswitch. This configures OpenStack Networking to use the flow-based OVS firewall driver. Set this parameter in the network-environment.yaml file under parameter_defaults.
Example:

parameter_defaults:
  NeutronOVSFirewallDriver: openvswitch
The openvswitch firewall driver is a Technology Preview and should be used only in testing environments. The only supported value for the NeutronOVSFirewallDriver parameter is noop.
When the OVS firewall driver is used, it is important to disable it for dataplane interfaces. This can be done with the openstack port set command.
Example:

openstack port set --no-security-group --disable-port-security ${PORT}
5.3. Defining the SR-IOV and OVS-DPDK parameters
Modify the network-environment.yaml file to configure SR-IOV and OVS-DPDK role-specific parameters:
Add the resource mapping for the OVS-DPDK and SR-IOV services to the network-environment.yaml file, along with the network configuration for these nodes:

resource_registry:
  # Specify the relative/absolute path to the config files you want to use to override the default.
  OS::TripleO::Compute::Net::SoftwareConfig: nic-configs/compute.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: nic-configs/controller.yaml
  OS::TripleO::NodeUserData: first-boot.yaml

Define the flavors:

OvercloudControlFlavor: controller
OvercloudComputeFlavor: compute

Define the tunnel type:

# The tunnel type for the tenant network (vxlan or gre). Set to '' to disable tunneling.
NeutronTunnelTypes: 'vxlan'
# The tenant network type for Neutron (vlan or vxlan).
NeutronNetworkType: 'vlan'

Configure the parameters for SR-IOV. You can obtain the PCI vendor and device values, as seen in the NeutronSupportedPCIVendorDevs parameter, by running lspci -nv.

Note: The OpenvSwitch firewall driver is a Technology Preview and should be used for control plane interfaces only. The only supported value for the NeutronOVSFirewallDriver parameter is noop. See Configuring openvswitch for security groups for details.

Configure the parameters for OVS-DPDK. A hedged sketch of the SR-IOV and OVS-DPDK values follows this procedure.

Note: You must assign at least one CPU (with sibling thread) on each NUMA node, with or without DPDK NICs present, for DPDK PMD to avoid failures in creating guest instances.

Configure the remainder of the network-environment.yaml file to override the default parameters from the neutron-ovs-dpdk-agent.yaml and neutron-sriov-agent.yaml files as needed for your OpenStack deployment.
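The parameter blocks for the SR-IOV and OVS-DPDK steps were elided from this page. A hedged sketch that reuses the values shown in Chapters 3 and 4 (the device names and core lists are assumptions; size them for your own hardware):

parameter_defaults:
  # SR-IOV parameters (device name assumed)
  NeutronMechanismDrivers: "openvswitch,sriovnicswitch"
  NovaPCIPassthrough:
    - devname: "p7p1"
      physical_network: "tenant"
  NeutronPhysicalDevMappings: "tenant:p7p1"
  NeutronSriovNumVFs: "p7p1:5"

  # OVS-DPDK parameters (core lists and socket memory as in Chapter 4)
  NeutronDpdkCoreList: "'2,22,3,23'"
  NeutronDpdkMemoryChannels: "4"
  NeutronDpdkSocketMemory: "'3072,1024'"
  NeutronDpdkDriverType: "vfio-pci"
  NeutronVhostuserSocketDir: "/var/lib/vhost_sockets"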
See the Network Functions Virtualization Planning Guide for details on how to determine the best values for the OVS-DPDK parameters that you set in the network-environment.yaml file to optimize your OpenStack network for OVS-DPDK.
5.4. Configuring the compute.yaml file

This example uses the sample compute.yaml file to support SR-IOV and DPDK interfaces.
- Create the control plane Linux bond for an isolated network.
- Assign VLANs to this Linux bond.
- Set a bridge with a DPDK port to link to the controller.
- Create the SR-IOV interface to the Controller (a hedged sketch of this entry follows the notes below).

Note: To include multiple DPDK devices, repeat the type code section for each DPDK device you want to add.

Note: When using OVS-DPDK, all bridges on the same Compute node should be of type ovs_user_bridge. The director may accept the configuration, but Red Hat OpenStack Platform does not support mixing ovs_bridge and ovs_user_bridge on the same node.
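A hedged sketch of the SR-IOV interface entry (the device name is an assumption):

- type: interface
  name: p7p1
  use_dhcp: false
  defroute: false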
5.5. Deploying the overcloud
The following example defines the overcloud_deploy.sh Bash script that deploys both OVS-DPDK and SR-IOV:
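The script body was elided from this page; a hedged sketch that layers both default environment files (the network-isolation file is an assumption):

#!/bin/bash
openstack overcloud deploy \
  --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-ovs-dpdk.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/neutron-sriov.yaml \
  -e /home/stack/<relative-directory>/network-environment.yaml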
With a successful deployment, you can now begin populating the overcloud. Start by sourcing the newly created overcloudrc file in the /home/stack directory. Then, create a flavor and deploy an instance.
Create a flavor:

# source overcloudrc
# openstack flavor create --vcpus 6 --ram 4096 --disk 40 compute

Where:

- compute is the flavor name.
- 4096 is the memory size in MB.
- 40 is the disk size in GB (default 0 GB).
- 6 is the number of vCPUs.
Set the flavor for large pages:

# openstack flavor set compute --property hw:mem_page_size=1GB

Create the external network:

# openstack network create --share --external \
  --provider-physical-network <net-mgmt-physnet> \
  --provider-network-type <flat|vlan> external

Create the networks for SR-IOV and DPDK:

# openstack network create net-dpdk
# openstack network create net-sriov
# openstack subnet create --subnet-range <cidr/prefix> --network net-dpdk net-dpdk-subnet
# openstack subnet create --subnet-range <cidr/prefix> --network net-sriov net-sriov-subnet

Create the SR-IOV port.

Use vnic-type direct to create an SR-IOV VF port:

# openstack port create --network net-sriov --vnic-type direct sriov_port

Use vnic-type direct-physical to create an SR-IOV PF port:

# openstack port create --network net-sriov --vnic-type direct-physical sriov_port

Create a router and attach it to the DPDK VXLAN network:

# openstack router create router1
# openstack router add subnet router1 net-dpdk-subnet

Create a floating IP address and associate it with the guest instance port:

# openstack floating ip create --floating-ip-address FLOATING-IP external

Deploy an instance:

# openstack server create --flavor compute --image rhel_7.3 --nic port-id=sriov_port --nic net-id=NET_DPDK_ID vm1
Where:

- compute is the flavor name or ID.
- rhel_7.3 is the image (name or ID) used to create the instance.
- sriov_port is the name of the port created in the previous step.
- NET_DPDK_ID is the DPDK network ID.
- vm1 is the name of the instance.
You have now deployed an instance that uses an SR-IOV interface and a DPDK interface on the same Compute node.
For instances with more interfaces, you can use cloud-init. See Table 3.1 in Create an Instance for details.
Chapter 6. Finding More Information
The following table includes additional Red Hat documentation for reference:
The Red Hat OpenStack Platform documentation suite can be found here: Red Hat OpenStack Platform 10 Documentation Suite
| Component | Reference |
|---|---|
| Red Hat Enterprise Linux | Red Hat OpenStack Platform is supported on Red Hat Enterprise Linux 7.3. For information on installing Red Hat Enterprise Linux, see the corresponding installation guide at: Red Hat Enterprise Linux Documentation Suite. |
| Red Hat OpenStack Platform | To install OpenStack components and their dependencies, use the Red Hat OpenStack Platform director. The director uses a basic OpenStack installation as the undercloud to install, configure and manage the OpenStack nodes in the final overcloud. Be aware that you will need one extra host machine for the installation of the undercloud, in addition to the environment necessary for the deployed overcloud. For detailed instructions, see Red Hat OpenStack Platform Director Installation and Usage. For information on configuring advanced features for a Red Hat OpenStack Platform enterprise environment using the Red Hat OpenStack Platform director such as network isolation, storage configuration, SSL communication, and general configuration method, see Advanced Overcloud Customization. You can also manually install the Red Hat OpenStack Platform components, see Manual Installation Procedures. |
| NFV Documentation | For a high level overview of the NFV concepts, see the Network Functions Virtualization Product Guide. For more details on planning your Red Hat OpenStack Platform deployment with NFV, see Network Function Virtualization Planning Guide. |
Appendix A. Sample SR-IOV YAML Files
This section provides sample configuration files for single root I/O virtualization (SR-IOV) as a reference for Network Functions Virtualization infrastructure (NFVi).
These templates are from a fully configured environment and include parameters unrelated to NFV, which may not be relevant or appropriate for your deployment.
A.1. Sample VLAN SR-IOV YAML Files
A.1.1. network-environment.yaml
A.1.2. first-boot.yaml
A.1.3. controller.yaml
A.1.4. compute.yaml
A.1.5. overcloud_deploy.sh
Appendix B. Sample OVS-DPDK YAML Files
This section provides sample configuration files for Open vSwitch with Data Plane Development Kit (OVS-DPDK) as a reference for Network Functions Virtualization infrastructure (NFVi).
These templates are from a fully configured environment and include parameters unrelated to NFV, which may not be relevant or appropriate for your deployment.
B.1. Sample VLAN OVS-DPDK Data Plane Bonding YAML Files
B.1.1. first-boot.yaml
B.1.2. network-environment.yaml
B.1.3. controller.yaml
B.1.4. compute-ovs-dpdk.yaml
B.1.5. overcloud_deploy.sh
B.2. Sample VXLAN OVS-DPDK Data Plane Bonding YAML Files
B.2.1. first-boot.yaml
B.2.2. network-environment.yaml
B.2.3. controller.yaml
B.2.4. compute-ovs-dpdk.yaml
B.2.5. overcloud_deploy.sh
Appendix C. Sample OVS-DPDK and SR-IOV YAML Files

This section provides sample YAML files as a reference for adding SR-IOV and DPDK interfaces on the same compute node.
These templates are from a fully configured environment and include parameters unrelated to NFV, which may not be relevant or appropriate for your deployment.
C.1. Sample DPDK and SR-IOV YAML Files

This section provides sample DPDK and SR-IOV YAML files as a reference.
C.1.1. first-boot.yaml
C.1.2. network-environment.yaml
C.1.3. controller.yaml
C.1.4. compute.yaml
C.1.5. overcloud_deploy.sh
Appendix D. Revision History
| Revision | Date | Changes |
|---|---|---|
| 10.3-0 | July 31 2018 | Updated network creation steps to use OSC parameters. |
| 10.2-0 | July 24 2018 | Removed section 'Configure OVS-DPDK Composable Role'. |
| 10.1-0 | June 27 2018 | Updates for the 10z async release with OVS 2.9 support. |
| 10.0-0 | April 11 2018 | Updates for the 10z7 release. |