Chapter 6. Interpreting the output of the pmd-stats-show command in Open vSwitch with DPDK
Use this section to interpret the output of the pmd-stats-show command (ovs-appctl dpif-netdev/pmd-stats-show) in Open vSwitch (OVS) with DPDK.
6.1. Symptom
The ovs-appctl dpif-netdev/pmd-stats-show command provides an inaccurate measurement because the statistics it displays have been accumulating since the PMD was started.
6.2. Diagnosis
To obtain useful output, put the system into a steady state and reset the statistics that you want to measure:
# put system into steady state
ovs-appctl dpif-netdev/pmd-stats-clear
# wait <x> seconds
sleep <x>
ovs-appctl dpif-netdev/pmd-stats-show
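If you take this measurement repeatedly, a small wrapper can save typing. The following is a minimal, illustrative Python sketch (not part of OVS); it assumes ovs-appctl is on the PATH and that the script runs with enough privileges to query the vswitchd daemon:

#!/usr/bin/env python3
"""Illustrative helper: take a steady-state PMD statistics snapshot."""
import subprocess
import sys
import time

def pmd_stats_snapshot(wait_seconds: int = 60) -> str:
    # Reset the accumulated PMD counters.
    subprocess.run(["ovs-appctl", "dpif-netdev/pmd-stats-clear"], check=True)
    # Let the system run in steady state for the measurement interval.
    time.sleep(wait_seconds)
    # Return the statistics gathered during that interval only.
    result = subprocess.run(
        ["ovs-appctl", "dpif-netdev/pmd-stats-show"],
        check=True, capture_output=True, text=True,
    )
    return result.stdout

if __name__ == "__main__":
    wait = int(sys.argv[1]) if len(sys.argv) > 1 else 60
    print(pmd_stats_snapshot(wait))

For example, running the sketch with an argument of 60 clears the counters, waits 60 seconds, and prints the statistics for that interval only.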
Here’s an example of the output. Note that core_id 2 is mainly busy, spending 70% of the time processing and 30% of the time polling:
polling cycles:5460724802 (29.10%)
processing cycles:13305794333 (70.90%)
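As a sanity check, the percentages printed in this output can be recomputed from the raw cycle counters: the processing share is the processing cycles divided by the sum of polling and processing cycles. A minimal sketch using the counters above:

# Reproduce the polling/processing split from the example counters above.
polling_cycles = 5460724802
processing_cycles = 13305794333

total = polling_cycles + processing_cycles
print(f"polling:    {polling_cycles / total:.2%}")     # ~29.10%
print(f"processing: {processing_cycles / total:.2%}")  # ~70.90%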
In this example, miss indicates packets that were not classified in the DPDK datapath ('emc' or 'dp' classifier). Under normal circumstances, they would then be sent to the ofproto layer. On rare occasions, due to a flow revalidation lock or if the ofproto layer returns an error, the packet is dropped. In this case, the value of lost will also be incremented to indicate the loss.
emc hits:14874381
megaflow hits:0
avg. subtable lookups per hit:0.00
miss:0
lost:0
For more information, see OVS-DPDK Datapath Classifier.
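To make the relationship between these counters concrete, the following illustrative sketch derives per-stage shares from the example values above. It assumes the total packet count is the sum of EMC hits, megaflow hits, and misses, with lost counting the subset of misses that were dropped:

# Derive per-stage shares from the example counters above (illustration only).
emc_hits = 14874381
megaflow_hits = 0
miss = 0   # packets sent up to the ofproto layer
lost = 0   # misses that were dropped at the ofproto layer

total = emc_hits + megaflow_hits + miss
print(f"EMC hit rate:       {emc_hits / total:.2%}")
print(f"megaflow hit rate:  {megaflow_hits / total:.2%}")
print(f"upcall (miss) rate: {miss / total:.2%}")
# Guard against division by zero when there were no misses at all.
print(f"drop rate among misses: {lost / miss:.2%}" if miss else "no misses, nothing lost")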
6.3. Solution
This section shows the procedures for viewing traffic flow using the ovs-appctl command.
6.3.1. Idle PMD
The following example shows a system where the core_ids serve the PMDs that are pinned to dpdk0, with only management traffic flowing through dpdk0:
6.3.2. PMD under load test with packet drop
The following example shows a system where the core_ids serve the PMDs that are pinned to dpdk0, with a load test flowing through dpdk0, causing a high number of RX drops:
Where packet drops occur, you can see a high ratio of processing cycles vs polling cycles (more than 90% processing cycles):
polling cycles:1497174615 (6.85%)
processing cycles:20354613261 (93.15%)
Check the average cycles per packet (CPP) and average processing cycles per packet (PCPP). You can expect a PCPP/CPP ratio of 1 for a fully loaded PMD as there will be no idle cycles counted.
avg cycles per packet: 723.96 (21851787876/30183584)
avg processing cycles per packet: 674.36 (20354613261/30183584)
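You can verify these figures directly from the counters shown in parentheses (cycles divided by packet count). A minimal sketch of the arithmetic:

# Recompute CPP, PCPP and their ratio from the example counters above.
total_cycles = 21851787876
processing_cycles = 20354613261
packets = 30183584

cpp = total_cycles / packets         # avg cycles per packet
pcpp = processing_cycles / packets   # avg processing cycles per packet
print(f"CPP:  {cpp:.2f}")            # ~723.96
print(f"PCPP: {pcpp:.2f}")           # ~674.36
# A ratio close to 1 indicates a fully loaded PMD (almost no idle cycles).
print(f"PCPP/CPP ratio: {pcpp / cpp:.2f}")  # ~0.93 here

Here the PCPP/CPP ratio is roughly 0.93, consistent with a heavily loaded PMD that still spends a small share of its cycles polling.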
6.3.3. PMD under load test with 50% of Mpps capacity
The following example shows a system where the core_ids serve the PMDs that are pinned to dpdk0, with a load test flowing through dpdk0, sending 6.4 Mpps, around 50% of the maximum capacity of this dpdk0 interface (around 12.85 Mpps):
Where the pps are about half of the maximum for the interface, you can see a lower ratio of processing cycles vs polling cycles (approximately 70% processing cycles):
polling cycles:5460724802 (29.10%)
processing cycles:13305794333 (70.90%)
6.3.4. Hit vs miss vs lost
The following examples show the man pages regarding the subject:
Some of the documentation refers to the kernel datapath, so when it says user space processing, it means that the packet was not classified in the kernel software caches (the equivalents of emc and dpcls) and was sent to the ofproto layer in user space.