
Chapter 5. Planning an SR-IOV deployment


Optimize single root I/O virtualization (SR-IOV) deployments for NFV by setting individual parameters based on your Compute node hardware.

To evaluate the impact of your hardware on the SR-IOV parameters, see Discovering your NUMA node topology.
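As a starting point, a minimal sketch such as the following can help you inspect the NUMA topology of a candidate Compute node. It assumes a Linux host with sysfs mounted at /sys; the script and its output format are illustrative only and are not part of Red Hat OpenStack Platform tooling.

```python
#!/usr/bin/env python3
"""Sketch: list NUMA nodes and their hyper-thread sibling groups via sysfs."""
from pathlib import Path

NODE_DIR = Path("/sys/devices/system/node")

def cpus_in_node(node: Path) -> list[int]:
    """Return the CPU numbers that belong to one NUMA node directory."""
    # Each nodeN directory contains symlinks named cpu<N>.
    return sorted(int(p.name[3:]) for p in node.glob("cpu[0-9]*"))

def sibling_threads(cpu: int) -> str:
    """Return the hyper-thread sibling list for one CPU, for example '0,28'."""
    path = Path(f"/sys/devices/system/cpu/cpu{cpu}/topology/thread_siblings_list")
    return path.read_text().strip()

for node in sorted(NODE_DIR.glob("node[0-9]*")):
    cpus = cpus_in_node(node)
    groups = sorted({sibling_threads(c) for c in cpus})
    print(f"{node.name}: CPUs {cpus}")
    print(f"  sibling-thread groups: {groups}")
```

The sibling-thread groups show which logical CPUs share a physical core, which you need when reserving whole cores for the host or for VNFs.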

5.1. Hardware partitioning for an SR-IOV deployment

To achieve high performance with SR-IOV, partition the resources between the host and the guest.

Figure: OpenStack NFV hardware capacities for SR-IOV

A typical topology includes 14 cores per NUMA node on dual-socket Compute nodes. Both hyper-threading (HT) and non-HT cores are supported; with HT enabled, each core has two sibling threads. On each NUMA node, one core is dedicated to the host, and all interrupt requests (IRQs) are routed to the host cores. The remaining cores are dedicated to the VNFs, which isolates them both from other VNFs and from the host. Each VNF must use resources on a single NUMA node, and the SR-IOV NICs used by the VNF must be associated with that same NUMA node. The VNF handles the bonding of its SR-IOV interfaces. There is no virtualization overhead in this topology. The host, OpenStack Networking (neutron), and Compute (nova) configuration parameters are exposed in a single file for ease and consistency, and to avoid inconsistencies that break isolation and cause preemption and packet loss. Host and virtual machine isolation depend on a tuned profile, which sets the kernel boot parameters and applies any Red Hat OpenStack Platform modifications based on the list of CPUs to isolate.
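The following sketch illustrates the partitioning rule described above: one physical core, with both of its sibling threads, is reserved for the host on each NUMA node, and the remaining cores are dedicated to VNFs. The dual-socket, 14-core-per-node, hyper-threaded layout is a hypothetical example, and the resulting CPU lists still have to be mapped manually onto your host, neutron, and nova configuration parameters.

```python
#!/usr/bin/env python3
"""Sketch: derive a host/VNF CPU split from a sibling-thread layout."""

# Hypothetical layout for a dual-socket, 14-core-per-node, hyper-threaded
# Compute node (physical core N pairs with logical thread N + 28).
topology = {
    0: [(core, core + 28) for core in range(0, 14)],   # NUMA node 0
    1: [(core, core + 28) for core in range(14, 28)],  # NUMA node 1
}

host_cpus, vnf_cpus = [], []
for node, cores in topology.items():
    # Reserve the first physical core of each NUMA node (both sibling
    # threads) for the host; IRQs and housekeeping run there.
    host_cpus.extend(cores[0])
    # Dedicate every remaining core on the node to VNFs for isolation.
    for core in cores[1:]:
        vnf_cpus.extend(core)

print("Host (housekeeping/IRQ) CPUs:", sorted(host_cpus))
print("VNF-dedicated CPUs:          ", sorted(vnf_cpus))
```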

5.2. Topology of an NFV SR-IOV deployment

The following image shows two virtual network functions (VNFs), each with a management interface, represented by mgt, and data plane interfaces. The management interface handles ssh access and other management functions. The VNFs bond their data plane interfaces by using the Data Plane Development Kit (DPDK) library to ensure high availability. The image also shows two redundant provider networks. The Compute node has two regular NICs that are bonded together and shared between the VNF management network and the Red Hat OpenStack Platform API management network.

Figure: NFV SR-IOV deployment

The image shows a VNF that uses DPDK at the application level and that can access SR-IOV virtual functions (VFs) and physical functions (PFs) together, for better availability or performance depending on the fabric configuration. DPDK improves performance, while the VF/PF DPDK bonds provide failover and availability. The VNF vendor must ensure that their DPDK poll mode driver (PMD) supports the SR-IOV card that is exposed as a VF or PF. The management network uses Open vSwitch (OVS), so the VNF accesses a “mgmt” network device by using the standard virtIO drivers. Operators can use that device to connect to the VNF initially and to verify that the DPDK application bonds the two VFs/PFs properly.

5.2.1. Topology for NFV SR-IOV without HCI

The following image shows the topology for single root I/O virtualization (SR-IOV) without hyper-converged infrastructure (HCI) for the NFV use case. It consists of Compute and Controller nodes with 1 Gbps NICs, and the Director node.

Figure: NFV SR-IOV topology without HCI