Chapter 3. Hardware requirements for NFV
This section describes the hardware requirements for NFV.
Red Hat certifies hardware for use with Red Hat OpenStack Platform. For more information, see Certified hardware.
3.1. Tested NICs for NFV
For a list of tested NICs for NFV, see the Red Hat Knowledgebase solution Network Adapter Fast Datapath Feature Support Matrix.
Use the default driver for the supported NIC, unless you are configuring OVS-DPDK on NVIDIA (Mellanox) network interfaces. For NVIDIA network interfaces, you must set the corresponding kernel driver in the j2 network configuration template.
Example
In this example, the mlx5_core driver is set for the Mellanox ConnectX-5 network interface:
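A minimal sketch of such a template fragment, assuming the os-net-config syntax used in director NIC configuration templates; the bridge, port, and NIC names (br-dpdk0, dpdk0, nic5) are placeholders for your environment:

- type: ovs_user_bridge
  name: br-dpdk0
  members:
  - type: ovs_dpdk_port
    name: dpdk0
    driver: mlx5_core
    members:
    - type: interface
      name: nic5

The driver option on the ovs_dpdk_port member is what binds the port to the mlx5_core kernel driver.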
3.2. Troubleshooting hardware offload
In a Red Hat OpenStack Platform (RHOSP) 17.1 deployment, OVS Hardware Offload might not offload flows for VMs with switchdev-capable ports and Mellanox ConnectX-5 NICs. To troubleshoot and configure offload flows in this scenario, disable the ESWITCH_IPV4_TTL_MODIFY_ENABLE Mellanox firmware parameter. For more troubleshooting information about OVS Hardware Offload in RHOSP 17.1, see the Red Hat Knowledgebase solution OVS Hardware Offload with Mellanox NIC in OpenStack Platform 16.2.
Procedure
- Log in to the Compute nodes in your RHOSP deployment that have Mellanox NICs that you want to configure.
- Use the mstflint utility to query the ESWITCH_IPV4_TTL_MODIFY_ENABLE Mellanox firmware parameter:

  [root@compute-1 ~]# yum install -y mstflint
  [root@compute-1 ~]# mstconfig -d <PF PCI BDF> q ESWITCH_IPV4_TTL_MODIFY_ENABLE

- If the ESWITCH_IPV4_TTL_MODIFY_ENABLE parameter is enabled and set to 1, set the value to 0 to disable it:

  [root@compute-1 ~]# mstconfig -d <PF PCI BDF> s ESWITCH_IPV4_TTL_MODIFY_ENABLE=0

- Reboot the node.
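After the node reboots, you can re-run the mstconfig query to confirm that the new value is in effect:

  [root@compute-1 ~]# mstconfig -d <PF PCI BDF> q ESWITCH_IPV4_TTL_MODIFY_ENABLE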
3.3. Discovering your NUMA node topology
When you plan your deployment, you must understand the NUMA topology of your Compute node to partition the CPU and memory resources for optimum performance. To determine the NUMA information, perform one of the following tasks:
- Enable hardware introspection to retrieve this information from bare-metal nodes.
- Log in to each bare-metal node to manually collect the information, as shown in the example commands after this list.
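The following commands are a sketch of one way to collect the same details manually on a Red Hat Enterprise Linux node; <NIC> is a placeholder for an interface name, and the numactl package must be installed:

  # lscpu | grep -i numa
  # numactl --hardware
  # cat /sys/class/net/<NIC>/device/numa_node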
You must install and configure the undercloud before you can retrieve NUMA information through hardware introspection. For more information about undercloud configuration, see the Installing and managing Red Hat OpenStack Platform with director guide.
3.4. Retrieving hardware introspection details
The Bare Metal service hardware-inspection-extras feature is enabled by default, and you can use it to retrieve hardware details for overcloud configuration. For more information about the inspection_extras parameter in the undercloud.conf file, see Director configuration parameters.
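As a reference sketch, the parameter is set in the [DEFAULT] section of undercloud.conf and defaults to true, so you only need to set it explicitly if it was previously disabled:

  [DEFAULT]
  inspection_extras = true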
For example, the numa_topology collector is part of the hardware-inspection extras and includes the following information for each NUMA node:
- RAM (in kilobytes)
- Physical CPU cores and their sibling threads
- NICs associated with the NUMA node
Procedure
To retrieve this information, replace <UUID> with the UUID of the bare-metal node and run the following command:
# openstack baremetal introspection data save <UUID> | jq .numa_topology

The following example shows the retrieved NUMA information for a bare-metal node:
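Because this output is hardware-specific, the listing below is an abbreviated, hypothetical sketch of the structure only: RAM per NUMA node in kilobytes, CPU cores with their thread siblings, and NICs with their associated NUMA node.

  {
    "ram": [
      {"numa_node": 0, "size_kb": 66980172},
      {"numa_node": 1, "size_kb": 67108864}
    ],
    "cpus": [
      {"cpu": 0, "numa_node": 0, "thread_siblings": [0, 20]},
      {"cpu": 1, "numa_node": 0, "thread_siblings": [1, 21]},
      {"cpu": 10, "numa_node": 1, "thread_siblings": [10, 30]}
    ],
    "nics": [
      {"name": "ens1f0", "numa_node": 0},
      {"name": "ens2f1", "numa_node": 1}
    ]
  }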
3.5. NFV BIOS settings
The following table describes the required BIOS settings for NFV:
You must enable SR-IOV global and NIC settings in the BIOS, or your Red Hat OpenStack Platform (RHOSP) deployment with SR-IOV Compute nodes will fail.
| Parameter | Setting | Description |
|---|---|---|
| C3 Power State | Disabled. | |
| C6 Power State | Disabled. | |
| MLC Streamer | Enabled. | |
| MLC Spatial Prefetcher | Enabled. | |
| DCU Data Prefetcher | Enabled. | |
| DCA | Enabled. | |
| CPU Power and Performance | Performance. | |
| Memory Interleaving | Enabled. | |
| Turbo Boost | Disabled in NFV deployments that require deterministic performance. | |
| VT-d | Enabled for Intel cards if VFIO functionality is needed. | |
| NUMA memory interleave | Disabled. | |
| Sub-NUMA Clustering | Disabled. | When enabled, Sub-NUMA Clustering divides the processor cores, cache, and memory into multiple NUMA domains. Enabling this feature can increase performance for workloads that are NUMA-aware and optimized. When this option is enabled, up to 1GB of system memory can become unavailable. |
| Turbo Boost Technology | Disabled. | Turbo Boost Technology enables the processor to transition to a higher frequency than the processor rated speed if the processor has available power and is within temperature specifications. Disabling this option reduces power usage and also reduces the maximum achievable performance of the system under some workloads. |
| NUMA Group Size Optimization | Clustered. | Use the NUMA Group Size Optimization option to configure how the system ROM reports the number of logical processors in a NUMA node. The resulting information helps the operating system group processors for application use. The Clustered setting provides better performance due to the optimization of the resulting groups along NUMA boundaries. |
On processors that use the intel_idle driver, Red Hat Enterprise Linux can ignore BIOS settings and re-enable the processor C-state.
You can disable intel_idle and instead use the acpi_idle driver by specifying the key-value pair intel_idle.max_cstate=0 on the kernel boot command line.
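How you apply this argument depends on your deployment. As one sketch, on an individual node you can append it with grubby and then reboot; in a director-deployed overcloud, kernel arguments are typically managed through role parameters such as KernelArgs instead (verify against your own templates):

  # grubby --update-kernel=ALL --args="intel_idle.max_cstate=0"
  # reboot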
Confirm that the processor is using the acpi_idle driver by checking the contents of current_driver:
# cat /sys/devices/system/cpu/cpuidle/current_driver
acpi_idle
You will experience some latency after changing drivers, because it takes time for the Tuned daemon to start. However, after Tuned loads, the processor does not use the deeper C-state.