6.3. Solution
This section shows how to use the ovs-appctl command to inspect traffic flows.
6.3.1. Idle PMDs
The following example shows a system where the PMDs with core_ids 2 and 22 are pinned to dpdk0, and therefore only manage traffic flows passing through dpdk0:
[root@overcloud-compute-0 ~]# ovs-appctl dpif-netdev/pmd-stats-clear && sleep 10 && ovs-appctl dpif-netdev/pmd-stats-show |
egrep 'core_id (2|22):' -A9
pmd thread numa_id 0 core_id 22:
emc hits:0
megaflow hits:0
avg. subtable lookups per hit:0.00
miss:0
lost:0
polling cycles:12613298746 (100.00%)
processing cycles:0 (0.00%)
--
pmd thread numa_id 0 core_id 2:
emc hits:5
megaflow hits:0
avg. subtable lookups per hit:0.00
miss:0
lost:0
polling cycles:12480023709 (100.00%)
processing cycles:14354 (0.00%)
avg cycles per packet: 2496007612.60 (12480038063/5)
avg processing cycles per packet: 2870.80 (14354/5)
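If you need to scan this output programmatically rather than by eye, the check can be sketched in Python. This is a minimal sketch: the field layout is assumed from the sample output above, and counter names can differ between OVS releases.

```python
# Sketch: parse `ovs-appctl dpif-netdev/pmd-stats-show` output and flag
# PMD threads whose processing-cycle share is near zero (idle, only polling).
# The line format is assumed from the sample output above.
import re

SAMPLE = """\
pmd thread numa_id 0 core_id 22:
    emc hits:0
    polling cycles:12613298746 (100.00%)
    processing cycles:0 (0.00%)
pmd thread numa_id 0 core_id 2:
    emc hits:5
    polling cycles:12480023709 (100.00%)
    processing cycles:14354 (0.00%)
"""

def idle_pmds(text, threshold=0.01):
    """Return core_ids whose processing-cycle share is below threshold."""
    idle = []
    for block in re.split(r"(?=pmd thread)", text):
        core = re.search(r"core_id (\d+):", block)
        poll = re.search(r"polling cycles:\s*(\d+)", block)
        proc = re.search(r"processing cycles:\s*(\d+)", block)
        if not (core and poll and proc):
            continue
        total = int(poll.group(1)) + int(proc.group(1))
        if total and int(proc.group(1)) / total < threshold:
            idle.append(int(core.group(1)))
    return idle

print(idle_pmds(SAMPLE))
```

With the sample above, both PMDs are reported as idle, since even core_id 2 spends a negligible fraction of its cycles processing.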
6.3.2. PMDs in a load test with packet drops
The following example shows a system where PMDs are pinned to dpdk0, with a load test flowing through dpdk0 that causes a high number of RX drops:
[root@overcloud-compute-0 ~]# ovs-appctl dpif-netdev/pmd-stats-clear && sleep 10 && ovs-appctl dpif-netdev/pmd-stats-show |
egrep 'core_id (2|4|22|24):' -A9
pmd thread numa_id 0 core_id 22:
emc hits:35497952
megaflow hits:0
avg. subtable lookups per hit:0.00
miss:0
lost:0
polling cycles:1446658819 (6.61%)
processing cycles:20453874401 (93.39%)
avg cycles per packet: 616.95 (21900533220/35497952)
avg processing cycles per packet: 576.20 (20453874401/35497952)
--
pmd thread numa_id 0 core_id 2:
emc hits:30183582
megaflow hits:0
avg. subtable lookups per hit:0.00
miss:2
lost:0
polling cycles:1497174615 (6.85%)
processing cycles:20354613261 (93.15%)
avg cycles per packet: 723.96 (21851787876/30183584)
avg processing cycles per packet: 674.36 (20354613261/30183584)
When packets are dropped, you can see a high ratio of processing cycles to polling cycles (more than 90% processing cycles):
polling cycles:1497174615 (6.85%)
processing cycles:20354613261 (93.15%)
Check the average cycles per packet (CPP) and the average processing cycles per packet (PCPP). For a fully loaded PMD, you can expect a PCPP/CPP ratio of 1, because there are no idle cycles.
avg cycles per packet: 723.96 (21851787876/30183584)
avg processing cycles per packet: 674.36 (20354613261/30183584)
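The CPP and PCPP figures above can be reproduced from the raw counters. The following sketch uses the core_id 2 counters printed above; the packet count is the sum of emc hits, megaflow hits, and misses.

```python
# Reproduce CPP and PCPP for core_id 2 from the raw counters above.
# A PCPP/CPP ratio near 1 means almost no idle (polling-only) cycles,
# i.e. the PMD is saturated.
polling_cycles = 1497174615
processing_cycles = 20354613261
packets = 30183584            # emc hits + megaflow hits + miss

total_cycles = polling_cycles + processing_cycles
cpp = total_cycles / packets          # avg cycles per packet
pcpp = processing_cycles / packets    # avg processing cycles per packet

print(f"CPP  = {cpp:.2f}")            # 723.96
print(f"PCPP = {pcpp:.2f}")           # 674.36
print(f"PCPP/CPP = {pcpp / cpp:.2f}") # 0.93, close to fully loaded
```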
6.3.3. PMDs in a load test at 50% of Mpps capacity
The following example shows a system where PMDs are pinned to dpdk0, with a load test flowing through dpdk0 that sends 6.4 Mpps (the interface's maximum capacity is approximately 12.85 Mpps):
[root@overcloud-compute-0 ~]# ovs-appctl dpif-netdev/pmd-stats-clear && sleep 10 && ovs-appctl dpif-netdev/pmd-stats-show |
egrep 'core_id (2|4|22|24):' -A9
pmd thread numa_id 0 core_id 22:
emc hits:17461158
megaflow hits:0
avg. subtable lookups per hit:0.00
miss:0
lost:0
polling cycles:4948219259 (25.81%)
processing cycles:14220835107 (74.19%)
avg cycles per packet: 1097.81 (19169054366/17461158)
avg processing cycles per packet: 814.43 (14220835107/17461158)
--
pmd thread numa_id 0 core_id 2:
emc hits:14874381
megaflow hits:0
avg. subtable lookups per hit:0.00
miss:0
lost:0
polling cycles:5460724802 (29.10%)
processing cycles:13305794333 (70.90%)
avg cycles per packet: 1261.67 (18766519135/14874381)
avg processing cycles per packet: 894.54 (13305794333/14874381)
When the pps rate is at half of the interface's maximum, you can see a lower ratio of processing cycles to polling cycles (approximately 70% processing cycles):
polling cycles:5460724802 (29.10%)
processing cycles:13305794333 (70.90%)
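The ~70% figure follows directly from the two cycle counters; a processing share well below 100% indicates the PMD still has headroom. A minimal check, using the core_id 2 counters above:

```python
# Compute the processing-cycle share for core_id 2 from the counters above.
# At roughly half the interface's packet rate, the PMD still spends ~29%
# of its cycles polling empty queues, so it has headroom.
polling_cycles = 5460724802
processing_cycles = 13305794333

share = processing_cycles / (polling_cycles + processing_cycles)
print(f"processing share = {share:.2%}")   # 70.90%
```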
6.3.4. hit vs miss vs lost
The following excerpts from the relevant man pages describe these counters:
man ovs-vswitchd
(...)
DPIF-NETDEV COMMANDS
These commands are used to expose internal information (mostly statistics)
about the `dpif-netdev` userspace datapath. If there is only one datapath
(as is often the case, unless dpctl/ commands are used), the dp argument can
be omitted.
dpif-netdev/pmd-stats-show [dp]
Shows performance statistics for each pmd thread of the datapath dp.
The special thread ``main'' sums up the statistics of every non pmd
thread. The sum of ``emc hits'', ``masked hits'' and ``miss'' is the
number of packets received by the datapath. Cycles are counted using
the TSC or similar facilities when available on the platform. To
reset these counters use dpif-netdev/pmd-stats-clear. The duration of
one cycle depends on the measuring infrastructure.
(...)
man ovs-dpctl
(...)
dump-dps
Prints the name of each configured datapath on a separate line.
[-s | --statistics] show [dp...]
Prints a summary of configured datapaths, including their datapath numbers and a list of ports connected to each datapath. (The local port is
identified as port 0.) If -s or --statistics is specified, then packet and byte counters are also printed for each port.
The datapath numbers consists of flow stats and mega flow mask stats.
The "lookups" row displays three stats related to flow lookup triggered by processing incoming packets in the datapath. "hit" displays number
of packets matches existing flows. "missed" displays the number of packets not matching any existing flow and require user space processing.
"lost" displays number of packets destined for user space process but subsequently dropped before reaching userspace. The sum of "hit" and
"miss" equals to the total number of packets datapath processed.
(...)
man ovs-vswitchd
(...)
dpctl/show [-s | --statistics] [dp...]
Prints a summary of configured datapaths, including their datapath numbers and a list of ports connected to each datapath. (The local port is identified as
port 0.) If -s or --statistics is specified, then packet and byte counters are also printed for each port.
The datapath numbers consists of flow stats and mega flow mask stats.
The "lookups" row displays three stats related to flow lookup triggered by processing incoming packets in the datapath. "hit" displays number of packets
matches existing flows. "missed" displays the number of packets not matching any existing flow and require user space processing. "lost" displays number of
packets destined for user space process but subsequently dropped before reaching userspace. The sum of "hit" and "miss" equals to the total number of packets
datapath processed.
(...)
Some of this documentation refers to the kernel datapath, so when it says user space processing, it means the packet was not classified in the kernel SW caches (the equivalents of emc and dpcls) and was sent to ovs-vswitchd in user space.
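The accounting described in the man pages can be sketched as follows. The counter values below are illustrative only, not taken from the outputs above.

```python
# Sketch of the hit/miss/lost accounting from the man pages: "hit" + "miss"
# is the total number of packets the datapath processed, and "lost" counts
# packets that missed but were dropped before reaching user space.
# Values are illustrative only.
hit = 35497952      # matched an existing flow
missed = 120        # no matching flow; sent for user space processing
lost = 5            # destined for user space but dropped on the way

total_processed = hit + missed
upcall_drop_rate = lost / missed if missed else 0.0

print(total_processed)              # 35498072
print(f"{upcall_drop_rate:.2%}")    # 4.17%
```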