Chapter 5. Planning an OVS-DPDK deployment


You can optimize your Open vSwitch with Data Plane Development Kit (OVS-DPDK) deployment for NFV in Red Hat OpenStack Services on OpenShift (RHOSO) environments by choosing appropriate values for configuration parameters. Inform your choices by understanding how OVS-DPDK uses the Compute node hardware (CPU, NUMA nodes, memory, NICs).

Important

When using OVS-DPDK and the OVS native firewall (a stateful firewall based on conntrack), you can track only packets that use ICMPv4, ICMPv6, TCP, and UDP protocols. OVS marks all other types of network traffic as invalid.

OVS-DPDK partitions the hardware resources for host, guests, and itself. The OVS-DPDK Poll Mode Drivers (PMDs) run continuous DPDK polling loops, which require dedicated CPU cores. Therefore, you must allocate some CPUs and huge pages to OVS-DPDK.

A sample partitioning includes 16 cores per NUMA node on dual-socket Compute nodes. Because a NIC cannot be shared between the host and OVS-DPDK, host traffic requires additional NICs.
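The following sketch shows one way to express this partitioning as Ansible variables in the nodeTemplate of an OpenStackDataPlaneNodeSet manifest. The core list and hugepage count are illustrative values for the two-NUMA layout used later in this chapter, not recommendations, and the edpm_kernel_args, edpm_tuned_profile, and edpm_tuned_isolated_cores variable names should be verified against your edpm-ansible version:

    ansibleVars:
      # Reserve 1 GB hugepages for OVS-DPDK and guests, and enable the IOMMU
      # (illustrative values; size the hugepage pool for your workload).
      edpm_kernel_args: "default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on"
      # Isolate the cores used by PMD threads and guest instances from
      # host housekeeping tasks.
      edpm_tuned_profile: "cpu-partitioning"
      edpm_tuned_isolated_cores: "2-7,10-15"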

Figure 5.1. NUMA topology: OVS-DPDK with CPU partitioning

Note

You must reserve DPDK PMD threads on both NUMA nodes, even if a NUMA node does not have an associated DPDK NIC.

For optimum OVS-DPDK performance, reserve a block of memory local to the NUMA node. Choose NICs associated with the same NUMA node that you use for memory and CPU pinning. Ensure that both bonded interfaces are from NICs on the same NUMA node.
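For example, with the two-NUMA layout used later in this chapter, you could reserve one physical core (both sibling threads) per NUMA node for PMDs and give each NUMA node its own block of socket memory. The edpm_ovs_dpdk_pmd_core_list name is taken from this chapter; edpm_ovs_dpdk_socket_memory is assumed here to be the data plane equivalent of the OvsDpdkSocketMemory setting, so confirm it against your edpm-ansible version:

    ansibleVars:
      # One physical core (two sibling threads) per NUMA node for PMD threads,
      # even if only one NUMA node has a DPDK NIC.
      edpm_ovs_dpdk_pmd_core_list: "2,3,10,11"
      # Hugepage memory (MB) reserved for OVS-DPDK on NUMA 0 and NUMA 1.
      edpm_ovs_dpdk_socket_memory: "1024,1024"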

5.2. OVS-DPDK with TCP segmentation offload

In RHOSO 18.0.10 (Feature Release 3), TCP segmentation offload (TSO) for OVS-DPDK environments is promoted from Technology Preview to a generally available feature.

Segmentation happens at the transport layer: data from the upper layers of the stack is divided into segments that the network and data link layers can carry across and within networks.

Note

Enable TSO for DPDK only in the initial deployment of a new RHOSO environment. Enabling this feature in a previously deployed system is not supported.

Segmentation processing can happen on the host, where it consumes CPU resources. With TSO, segmentation is offloaded to the NIC, which frees host resources and improves performance.

TSO for DPDK can be useful if your workload includes large frames that require TCP segmentation in the user space or kernel.

5.3. Configuring TSO for OVS-DPDK

You can configure your Red Hat OpenStack Services on OpenShift (RHOSO) OVS-DPDK environment to offload TCP segmentation to NICs (TSO).

Note

Enable TSO for DPDK only in the initial deployment of a new RHOSO environment. Enabling this feature in a previously deployed system is not supported.

Prerequisites

  • A functional control plane, created with the OpenStack Operator. For more information, see Creating the control plane.
  • You are logged on to a workstation that has access to the Red Hat OpenShift Container Platform (RHOCP) cluster as a user with cluster-admin privileges.

Procedure

  1. When you follow the instructions in Creating a set of data plane nodes with pre-provisioned nodes or Creating a set of data plane nodes with unprovisioned nodes, include the edpm_ovs_dpdk_enable_tso: true key-value pair in the OpenStackDataPlaneNodeSet manifest. For example:

      nodeTemplate:
        ansible:
          ansibleUser: cloud-admin
          ansiblePort: 22
          ansibleVarsFrom:
            - prefix: subscription_manager_
              secretRef:
                name: subscription-manager
            - secretRef:
                name: redhat-registry
          ansibleVars:
            edpm_ovs_dpdk_enable_tso: true
            edpm_bootstrap_command: |
           ...
  2. Complete the node set creation procedure.

Verification

  • After you complete the node set procedure, run the following command on the Compute nodes:

    ovs-vsctl get Open_vSwitch . other_config:userspace-tso-enable

    If TSO is enabled, the command returns "true".

5.4. Two NUMA node example OVS-DPDK deployment

The Red Hat OpenStack Services on OpenShift (RHOSO) Compute node in the following example includes two NUMA nodes:

  • NUMA 0 has logical cores 0-7 (four physical cores). The sibling thread pairs are (0,1), (2,3), (4,5), and (6,7).
  • NUMA 1 has logical cores 8-15 (four physical cores). The sibling thread pairs are (8,9), (10,11), (12,13), and (14,15).
  • Each NUMA node connects to a physical NIC: NIC 1 on NUMA 0 and NIC 2 on NUMA 1.

Figure 5.2. OVS-DPDK: two NUMA nodes example

Note

Reserve the first physical core on each NUMA node, that is, both sibling threads (0,1) and (8,9), for non-datapath DPDK processes.

This example also assumes a 1500 MTU configuration, so the OvsDpdkSocketMemory is the same for all use cases:

OvsDpdkSocketMemory: "1024,1024"
NIC 1 for DPDK, with one physical core for PMD

    In this use case, you allocate one physical core on NUMA 0 for PMD. You must also allocate one physical core on NUMA 1, even though DPDK is not enabled on the NIC for that NUMA node. The remaining cores are allocated for guest instances. The resulting parameter settings are:

    edpm_ovs_dpdk_pmd_core_list: "2,3,10,11"
    cpu_dedicated_set: "4,5,6,7,12,13,14,15"

NIC 1 for DPDK, with two physical cores for PMD

    In this use case, you allocate two physical cores on NUMA 0 for PMD. You must also allocate one physical core on NUMA 1, even though DPDK is not enabled on the NIC for that NUMA node. The remaining cores are allocated for guest instances. The resulting parameter settings are:

    edpm_ovs_dpdk_pmd_core_list: "2,3,4,5,10,11"
    cpu_dedicated_set: "6,7,12,13,14,15"

NIC 2 for DPDK, with one physical core for PMD

    In this use case, you allocate one physical core on NUMA 1 for PMD. You must also allocate one physical core on NUMA 0, even though DPDK is not enabled on the NIC for that NUMA node. The remaining cores are allocated for guest instances. The resulting parameter settings are:

    edpm_ovs_dpdk_pmd_core_list: "2,3,10,11"
    cpu_dedicated_set: "4,5,6,7,12,13,14,15"

NIC 2 for DPDK, with two physical cores for PMD

    In this use case, you allocate two physical cores on NUMA 1 for PMD. You must also allocate one physical core on NUMA 0, even though DPDK is not enabled on the NIC for that NUMA node. The remaining cores are allocated for guest instances. The resulting parameter settings are:

    edpm_ovs_dpdk_pmd_core_list: "2,3,10,11,12,13"
    cpu_dedicated_set: "4,5,6,7,14,15"

NIC 1 and NIC 2 for DPDK, with two physical cores for PMD

    In this use case, you allocate two physical cores on each NUMA node for PMD. The remaining cores are allocated for guest instances. The resulting parameter settings are:

    edpm_ovs_dpdk_pmd_core_list: "2,3,4,5,10,11,12,13"
    cpu_dedicated_set: "6,7,14,15"

5.5. Topology of an NFV OVS-DPDK deployment

This example deployment shows an OVS-DPDK configuration and consists of two virtual network functions (VNFs) with two interfaces each:

  • The management interface, represented by mgt.
  • The data plane interface.

In the OVS-DPDK deployment, the VNFs run with built-in DPDK support on the data plane interface, and OVS-DPDK provides bonding at the vSwitch level. For improved performance in your OVS-DPDK deployment, keep kernel-managed NICs and OVS-DPDK NICs separate. To separate the management (mgt) network, which connects to the Base provider network for the virtual machine, provide additional NICs. The Compute node uses two regular NICs for RHOSO API management; these NICs can be reused by the Ceph API but cannot be shared with any OpenStack project.
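A minimal os-net-config fragment for the DPDK side of such a deployment might look like the following, assuming it is supplied through the node set network configuration template (for example, edpm_network_config_template). The bridge, bond, and NIC names are placeholders:

    - type: ovs_user_bridge
      name: br-dpdk0
      use_dhcp: false
      members:
        - type: ovs_dpdk_bond
          name: dpdkbond0
          rx_queue: 1
          members:
            - type: ovs_dpdk_port
              name: dpdk0
              members:
                - type: interface
                  name: nic4
            - type: ovs_dpdk_port
              name: dpdk1
              members:
                - type: interface
                  name: nic5

NICs that carry kernel traffic, such as the API management NICs, remain outside this bridge, which keeps kernel and OVS-DPDK traffic on separate NICs as described above.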

Figure 5.3. Compute node: NFV OVS-DPDK


Figure 5.4. OVS-DPDK Topology for NFV
