Chapter 12. Tuning NFV in a Red Hat OpenStack Services on OpenShift environment


You can configure a variety of parameters to tune your Red Hat OpenStack Services on OpenShift (RHOSO) NFV environment.

12.1. Managing port security in NFV environments

Port security is an anti-spoofing measure that blocks any egress traffic that does not match the source IP and source MAC address of the originating network port. You cannot view or modify this behavior using security group rules.

By default, the port_security_enabled parameter is set to true on newly created Networking service (neutron) networks in Red Hat OpenStack Services on OpenShift (RHOSO) environments. Newly created ports copy the value of the port_security_enabled parameter from the network they are created on.

For some NFV use cases, such as building a firewall or router, you must disable port security.

Prerequisites

  • You have the oc command line tool installed on your workstation.
  • You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.

Procedure

  1. Access the remote shell for the OpenStackClient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  2. To disable port security on a single port, run the following command:

    $ openstack port set --disable-port-security <port-id>
  3. To prevent port security from being enabled on any newly created port on a network, run the following command:

    $ openstack network set --disable-port-security <network-id>
  4. Exit the openstackclient pod:

    $ exit
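To confirm the change, you can query the affected objects from within the openstackclient pod. A minimal sketch, assuming the same placeholder port and network IDs as in the procedure:

```shell
# Confirm that port security is now disabled on the port:
openstack port show <port-id> -c port_security_enabled

# Confirm that newly created ports on the network will have
# port security disabled:
openstack network show <network-id> -c port_security_enabled
```

Both commands should report port_security_enabled as False after the procedure.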

12.2. Creating and using VF ports

You can create and use virtual function (VF) ports by running OpenStack CLI commands.

Prerequisites

  • You have the oc command line tool installed on your workstation.
  • You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.

Procedure

  1. Access the remote shell for the OpenStackClient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  2. Create a network of type vlan.

    Example
    $ openstack network create trusted_vf_network \
    --provider-network-type vlan --provider-segment 111 \
    --provider-physical-network sriov2 --external --disable-port-security
  3. Create a subnet.

    Example
    $ openstack subnet create --network trusted_vf_network \
      --ip-version 4 --subnet-range 192.168.111.0/24 --no-dhcp \
     subnet-trusted_vf_network
  4. Create a port.

    Example

    Set the vnic-type option to direct, and set trusted=true in the binding-profile option.

    $ openstack port create --network trusted_vf_network \
    --vnic-type direct --binding-profile trusted=true \
    sriov111_port_trusted
  5. Create an instance, and bind it to the previously-created trusted port.

    Example
    $ openstack server create --image rhel --flavor dpdk \
    --network trusted_vf_network --port sriov111_port_trusted \
    --config-drive True --wait rhel-dpdk-sriov_trusted
  6. Exit the openstackclient pod:

    $ exit

Verification

  1. On the Compute node where you created the instance, enter the following command:

    $ ip link
    Sample output
    7: p5p2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000
        link/ether b4:96:91:1c:40:fa brd ff:ff:ff:ff:ff:ff
        vf 6 MAC fa:16:3e:b8:91:c2, vlan 111, spoof checking off, link-state auto, trust on, query_rss off
        vf 7 MAC fa:16:3e:84:cf:c8, vlan 111, spoof checking off, link-state auto, trust off, query_rss off
  2. Verify that the trust status of the VF is trust on. The sample output shows an environment that contains two VFs: vf 6 shows trust on, and vf 7 shows trust off.
  3. Note that spoof checking is disabled when you set port_security_enabled: false on the Networking service (neutron) network, or when you include the --disable-port-security argument in the openstack port create command.
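You can also confirm the trusted setting from the Networking service side. A sketch, run inside the openstackclient pod, using the port name created in the procedure:

```shell
# Inspect the binding profile of the trusted port;
# the output should include trusted='true'.
openstack port show sriov111_port_trusted -c binding_profile
```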

12.3. Known limitations for NUMA-aware vSwitches

Important

This feature is available in this release as a Technology Preview, and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details.

This section lists the constraints for implementing a NUMA-aware vSwitch in a Red Hat OpenStack Services on OpenShift (RHOSO) network functions virtualization infrastructure (NFVi).

  • You cannot start a VM that has two NICs connected to physnets on different NUMA nodes, if you did not specify a two-node guest NUMA topology.
  • You cannot start a VM that has one NIC connected to a physnet and another NIC connected to a tunneled network on different NUMA nodes, if you did not specify a two-node guest NUMA topology.
  • You cannot start a VM that has one vhost port and one VF on different NUMA nodes, if you did not specify a two-node guest NUMA topology.
  • NUMA-aware vSwitch parameters are specific to overcloud roles. For example, Compute node 1 and Compute node 2 can have different NUMA topologies.
  • If the interfaces of a VM have NUMA affinity, ensure that the affinity is for a single NUMA node only. You can locate any interface without NUMA affinity on any NUMA node.
  • Configure NUMA affinity for data plane networks, not management networks.
  • NUMA affinity for tunneled networks is a global setting that applies to all VMs.
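Several of the limitations above apply only when you do not specify a two-node guest NUMA topology. A sketch of the flavor properties that request one, using a hypothetical flavor name:

```shell
# Request a guest NUMA topology that spans two host NUMA nodes.
# 'dpdk' is a placeholder flavor name; CPU and memory values are
# illustrative only.
openstack flavor set dpdk \
  --property hw:numa_nodes=2 \
  --property hw:numa_cpus.0=0,1 \
  --property hw:numa_cpus.1=2,3 \
  --property hw:numa_mem.0=2048 \
  --property hw:numa_mem.1=2048
```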

12.4. Quality of Service (QoS) in NFVi environments

You can offer varying service levels for VM instances by using quality of service (QoS) policies to apply rate limits to egress and ingress traffic on Red Hat OpenStack Services on OpenShift (RHOSO) networks in a network functions virtualization infrastructure (NFVi).

In NFVi environments, QoS support is limited to the following rule types:

  • Minimum bandwidth on SR-IOV, if supported by the vendor.
  • Bandwidth limit on SR-IOV and OVS-DPDK egress interfaces.
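The supported rule types are attached to a QoS policy and applied to a port. A sketch using standard Networking service commands; the policy name is a placeholder, and the rate values are illustrative only:

```shell
# Create a QoS policy.
openstack network qos policy create nfv-qos

# Bandwidth limit on an egress interface
# (supported on SR-IOV and OVS-DPDK).
openstack network qos rule create --type bandwidth-limit \
  --max-kbps 10000 --max-burst-kbits 1000 --egress nfv-qos

# Minimum bandwidth on SR-IOV, if supported by the vendor.
openstack network qos rule create --type minimum-bandwidth \
  --min-kbps 5000 --egress nfv-qos

# Apply the policy to a port.
openstack port set --qos-policy nfv-qos <port-id>
```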

12.5. Creating an HCI data plane that uses DPDK

You can deploy your NFV infrastructure with hyperconverged nodes by colocating and configuring Compute and Ceph Storage services for optimized resource usage.

For more information about hyperconverged infrastructure (HCI), see Deploying a hyperconverged infrastructure environment.

12.5.1. Example NUMA node configuration

For increased performance, place the tenant network and Ceph Object Storage Daemons (OSDs) on one NUMA node, such as NUMA-0, and the VNF and any non-NFV VMs on another NUMA node, such as NUMA-1.

Table 12.1. CPU allocation

NUMA-0                       NUMA-1
Number of Ceph OSDs * 4 HT   Guest vCPU for the VNF and non-NFV VMs
DPDK lcore - 2 HT            DPDK lcore - 2 HT
DPDK PMD - 2 HT              DPDK PMD - 2 HT

Table 12.2. Example of CPU allocation

             NUMA-0                                NUMA-1
Ceph OSD     32,34,36,38,40,42,76,78,80,82,84,86   -
DPDK-lcore   0,44                                  1,45
DPDK-pmd     2,46                                  3,47
nova         -                                     5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87

The following table lists the parameters that you can tune for HCI deployments:

Table 12.3. Tunable parameters for HCI deployments

Block Device Type   OSDs, Memory, vCPUs per device
NVMe                Memory: 5 GB per OSD; OSDs per device: 4; vCPUs per device: 3
SSD                 Memory: 5 GB per OSD; OSDs per device: 1; vCPUs per device: 4
HDD                 Memory: 5 GB per OSD; OSDs per device: 1; vCPUs per device: 1
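As an arithmetic example of applying Table 12.3, the following sketch sizes an assumed hyperconverged node with four NVMe block devices:

```shell
# Sizing for 4 NVMe block devices, using the NVMe row of Table 12.3.
devices=4
osds_per_device=4     # NVMe: 4 OSDs per device
mem_per_osd_gb=5      # 5 GB of memory per OSD
vcpus_per_device=3    # 3 vCPUs per device

total_osds=$((devices * osds_per_device))
total_mem_gb=$((total_osds * mem_per_osd_gb))
total_vcpus=$((devices * vcpus_per_device))
echo "OSDs: ${total_osds}, memory: ${total_mem_gb} GB, vCPUs: ${total_vcpus}"
# prints: OSDs: 16, memory: 80 GB, vCPUs: 12
```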

Use the same NUMA node for the following functions:

  • Disk controller
  • Storage networks
  • Storage CPU and memory

Allocate another NUMA node for the following functions of the DPDK provider network:

  • NIC
  • PMD CPUs
  • Socket memory
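At the vSwitch level, this PMD and socket-memory placement corresponds to per-NUMA-node Open vSwitch settings. A sketch using standard ovs-vsctl options; the CPU mask matches the DPDK-pmd cores in Table 12.2, and the memory values are illustrative only:

```shell
# Pin PMD threads to the DPDK-pmd cores 2,3,46,47
# (bit mask 0xC0000000000C).
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xC0000000000C

# Pre-allocate DPDK socket memory (MB) per NUMA node.
ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="4096,4096"
```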