Chapter 12. Tuning NFV in a Red Hat OpenStack Services on OpenShift environment
You can configure a variety of parameters to tune your {rhoso_long} NFV environment.
12.1. Managing port security in NFV environments
Port security is an anti-spoofing measure that blocks any egress traffic that does not match the source IP and source MAC address of the originating network port. You cannot view or modify this behavior using security group rules.
By default, the port_security_enabled parameter is set to enabled on newly created Networking service (neutron) networks in Red Hat OpenStack Services on OpenShift (RHOSO) environments. Newly created ports copy the value of the port_security_enabled parameter from the network they are created on.
For some NFV use cases, such as building a firewall or router, you must disable port security.
Prerequisites

- You have the `oc` command line tool installed on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with `cluster-admin` privileges.
Procedure

1. Access the remote shell for the OpenStackClient pod from your workstation:

   $ oc rsh -n openstack openstackclient

2. To disable port security on a single port, run the following command:

   $ openstack port set --disable-port-security <port-id>

3. To prevent port security from being enabled on any newly created port on a network, run the following command:

   $ openstack network set --disable-port-security <network-id>

4. Exit the `openstackclient` pod:

   $ exit
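To confirm the change took effect, you can query the port afterwards. This is a hedged sketch that uses the standard `openstack port show` output columns; `<port-id>` is a placeholder for your own port:

```shell
# Placeholder port ID; substitute the ID or name of your own port.
# After disabling port security, this prints False.
openstack port show <port-id> -c port_security_enabled -f value
```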
12.2. Creating and using VF ports
By running various OpenStack CLI client commands, you can create and use virtual function (VF) ports.
Prerequisites

- You have the `oc` command line tool installed on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with `cluster-admin` privileges.
Procedure

1. Access the remote shell for the OpenStackClient pod from your workstation:

   $ oc rsh -n openstack openstackclient

2. Create a network of type `vlan`.

   Example:

   $ openstack network create trusted_vf_network \
     --provider-network-type vlan --provider-segment 111 \
     --provider-physical-network sriov2 --external --disable-port-security

3. Create a subnet.

   Example:

   $ openstack subnet create --network trusted_vf_network \
     --ip-version 4 --subnet-range 192.168.111.0/24 --no-dhcp \
     subnet-trusted_vf_network

4. Create a port. Set the `vnic-type` option to `direct`, and the `binding-profile` option to `trusted=true`.

   Example:

   $ openstack port create --network trusted_vf_network \
     --vnic-type direct --binding-profile trusted=true \
     sriov111_port_trusted

5. Create an instance, and bind it to the previously created trusted port.

   Example:

   $ openstack server create --image rhel --flavor dpdk \
     --network trusted_vf_network --port sriov111_port_trusted \
     --config-drive True --wait rhel-dpdk-sriov_trusted

6. Exit the `openstackclient` pod:

   $ exit
Verification

1. On the compute node where you created the instance, enter the following command:

   $ ip link

   Sample output:

   7: p5p2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000
       link/ether b4:96:91:1c:40:fa brd ff:ff:ff:ff:ff:ff
       vf 6     MAC fa:16:3e:b8:91:c2, vlan 111, spoof checking off, link-state auto, trust on, query_rss off
       vf 7     MAC fa:16:3e:84:cf:c8, vlan 111, spoof checking off, link-state auto, trust off, query_rss off

2. Verify that the trust status of the VF is `trust on`. The sample output contains details of an environment that contains two ports. Note that `vf 6` contains the text `trust on`.

- You can disable spoof checking if you set `port_security_enabled: false` on the Networking service (neutron) network, or if you include the argument `--disable-port-security` when you run the `openstack port create` command.
12.3. Known limitations for NUMA-aware vSwitches
This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.
This section lists the constraints for implementing a NUMA-aware vSwitch in a Red Hat OpenStack Services on OpenShift (RHOSO) network functions virtualization infrastructure (NFVi).
- You cannot start a VM that has two NICs connected to physnets on different NUMA nodes, if you did not specify a two-node guest NUMA topology.
- You cannot start a VM that has one NIC connected to a physnet and another NIC connected to a tunneled network on different NUMA nodes, if you did not specify a two-node guest NUMA topology.
- You cannot start a VM that has one vhost port and one VF on different NUMA nodes, if you did not specify a two-node guest NUMA topology.
- NUMA-aware vSwitch parameters are specific to overcloud roles. For example, Compute node 1 and Compute node 2 can have different NUMA topologies.
- If the interfaces of a VM have NUMA affinity, ensure that the affinity is for a single NUMA node only. You can locate any interface without NUMA affinity on any NUMA node.
- Configure NUMA affinity for data plane networks, not management networks.
- NUMA affinity for tunneled networks is a global setting that applies to all VMs.
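The two-node guest NUMA topology that the first three limitations refer to is requested through flavor extra specs. This is a sketch using the standard `hw:numa_nodes` property; the flavor name and resource sizes are illustrative only:

```shell
# Illustrative flavor: hw:numa_nodes=2 requests a two-node guest NUMA
# topology, so the instance can use NICs on different host NUMA nodes.
openstack flavor create dpdk_numa2 --ram 8192 --disk 40 --vcpus 8
openstack flavor set dpdk_numa2 \
  --property hw:numa_nodes=2 \
  --property hw:cpu_policy=dedicated
```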
12.4. Quality of Service (QoS) in NFVi environments
You can offer varying service levels for VM instances by using quality of service (QoS) policies to apply rate limits to egress and ingress traffic on Red Hat OpenStack Services on OpenShift (RHOSO) networks in a network functions virtualization infrastructure (NFVi).
In NFVi environments, QoS support is limited to the following rule types:
- `minimum bandwidth` on SR-IOV, if supported by the vendor.
- `bandwidth limit` on SR-IOV and OVS-DPDK egress interfaces.
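For example, one rule of each supported type can be attached to a QoS policy with the standard Networking service CLI. The policy name, port name, and rates below are illustrative, not prescribed values:

```shell
# Create a QoS policy and attach one rule of each supported type.
openstack network qos policy create nfv-qos

# Egress bandwidth limit (supported on SR-IOV and OVS-DPDK egress interfaces).
openstack network qos rule create --type bandwidth-limit \
  --max-kbps 1000000 --egress nfv-qos

# Minimum bandwidth guarantee (SR-IOV, if supported by the vendor).
openstack network qos rule create --type minimum-bandwidth \
  --min-kbps 500000 --egress nfv-qos

# Apply the policy to a port.
openstack port set --qos-policy nfv-qos <port-id>
```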
12.5. Creating an HCI data plane that uses DPDK
You can deploy your NFV infrastructure with hyperconverged nodes by co-locating and configuring Compute and Ceph Storage services for optimized resource usage.
For more information about hyperconverged infrastructure (HCI), see Deploying a hyperconverged infrastructure environment.
12.5.1. Example NUMA node configuration
For increased performance, place the tenant network and Ceph object service daemon (OSD)s in one NUMA node, such as NUMA-0, and the VNF and any non-NFV VMs in another NUMA node, such as NUMA-1.
| NUMA-0 | NUMA-1 |
|---|---|
| Number of Ceph OSDs * 4 HT | Guest vCPU for the VNF and non-NFV VMs |
| DPDK lcore - 2 HT | DPDK lcore - 2 HT |
| DPDK PMD - 2 HT | DPDK PMD - 2 HT |
| | NUMA-0 | NUMA-1 |
|---|---|---|
| Ceph OSD | 32,34,36,38,40,42,76,78,80,82,84,86 | |
| DPDK-lcore | 0,44 | 1,45 |
| DPDK-pmd | 2,46 | 3,47 |
| nova | | 5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87 |
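As a hedged illustration of how a core layout like the one above maps to configuration: the Compute service consumes the guest cores through `cpu_dedicated_set` in nova.conf, and Open vSwitch with DPDK consumes the lcore and PMD cores as hexadecimal CPU masks. The masks below are the bit masks computed for the example cores; how these parameters are delivered depends on your deployment method:

```shell
# nova.conf ([compute] section): pin guest vCPUs to the NUMA-1 cores
# listed in the table, for example:
#   cpu_dedicated_set = 5,7,9,...,87
#
# OVS-DPDK takes the lcore and PMD cores as hex CPU masks:
#   lcores 0,44       -> bits 0,44       -> 0x100000000001
#   PMDs 2,3,46,47    -> bits 2,3,46,47  -> 0xC0000000000C
ovs-vsctl set Open_vSwitch . other_config:dpdk-lcore-mask=0x100000000001
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xC0000000000C
```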
12.5.2. Recommended configuration for HCI-DPDK deployments
The following table lists the parameters that you can tune for HCI deployments:
| Block Device Type | OSDs, Memory, vCPUs per device |
|---|---|
| NVMe | Memory: 5 GB per OSD. OSDs per device: 4. vCPUs per device: 3. |
| SSD | Memory: 5 GB per OSD. OSDs per device: 1. vCPUs per device: 4. |
| HDD | Memory: 5 GB per OSD. OSDs per device: 1. vCPUs per device: 1. |
Use the same NUMA node for the following functions:
- Disk controller
- Storage networks
- Storage CPU and memory
Allocate another NUMA node for the following functions of the DPDK provider network:
- NIC
- PMD CPUs
- Socket memory
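The per-device values in the table above can be turned into a quick sizing calculation. A minimal sketch, assuming a node with two NVMe devices (the device count is an example, not a recommendation):

```shell
# NVMe sizing from the table: 4 OSDs per device, 5 GB memory per OSD,
# 3 vCPUs per device.
devices=2
osds=$((devices * 4))
mem_gb=$((osds * 5))
vcpus=$((devices * 3))
echo "OSDs=$osds memory=${mem_gb}GB vCPUs=$vcpus"
```

For two NVMe devices, this reserves 8 OSDs, 40 GB of memory, and 6 vCPUs for storage, leaving the remainder of the node for DPDK and guest workloads.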