Chapter 7. Performance
Red Hat OpenStack Platform 10 director configures the Compute nodes to enforce resource partitioning and fine-tuning to achieve line rate performance for the guest VNFs. The key performance factors in the NFV use case are throughput, latency and jitter.
DPDK-accelerated OVS (OVS-DPDK) enables high-performance packet switching between physical NICs and virtual machines. OVS 2.5 with DPDK 2.2 adds support for vhost-user multiqueue, allowing scalable performance. OVS-DPDK provides line rate performance for guest VNFs.
SR-IOV networking provides enhanced performance characteristics, including improved throughput for specific networks and virtual machines.
Other important features for performance tuning include huge pages, NUMA alignment, host isolation and CPU pinning. VNF flavors require huge pages for better performance. Host isolation and CPU pinning improve NFV performance and prevent spurious packet loss.
For more details on these features and performance tuning for NFV, see NFV Tuning for Performance.
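For example, a VNF flavor can request huge pages and dedicated (pinned) CPUs through flavor extra specs. The following is a minimal sketch; the flavor name m1.vnf is illustrative:

$ openstack flavor set m1.vnf \
    --property hw:mem_page_size=large \
    --property hw:cpu_policy=dedicated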
7.1. Configuring RX/TX queue size
You can experience packet loss at high packet rates above 3.5 mpps for many reasons, such as:
- a network interrupt
- an SMI (System Management Interrupt)
- packet processing latency in the Virtual Network Function (VNF)
To prevent packet loss, increase the queue size from the default of 256 to a maximum of 1024.
Prerequisites
- To configure the RX queue size, ensure that you have at least libvirt v2.3 and QEMU v2.7.
- To configure the TX queue size, ensure that you have at least libvirt v3.7 and QEMU v2.10.
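For example, you can confirm the libvirt and QEMU versions in use on a Compute node with virsh, which reports both the library version and the running hypervisor version:

$ virsh version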
Procedure
To increase the RX and TX queue size, include the following lines in the parameter_defaults: section of a relevant director role. Here is an example with the ComputeOvsDpdk role:

parameter_defaults:
  ComputeOvsDpdkParameters:
    NovaLibvirtRxQueueSize: 1024
    NovaLibvirtTxQueueSize: 1024
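You would then include the environment file that contains these parameters when deploying or updating the overcloud, for example (the file path is illustrative):

$ openstack overcloud deploy --templates \
    -e /home/stack/templates/<queue-size-environment>.yaml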
Testing
You can observe the values for RX queue size and TX queue size in the nova.conf file:
[libvirt]
rx_queue_size=1024
tx_queue_size=1024
You can check the values for RX queue size and TX queue size in the VM instance XML file generated by libvirt on the compute host.
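In the generated instance XML, the queue sizes appear as attributes of the interface driver element, similar to the following simplified sketch:

<driver rx_queue_size='1024' tx_queue_size='1024'/>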
To verify the values for RX queue size and TX queue size, use the following command on a KVM host:
$ virsh dumpxml <vm name> | grep queue_size
- You can check for improved performance, such as 3.8 mpps/core at 0 frame loss.