Chapter 8. Networking
Over time, Red Hat Enterprise Linux's network stack has been upgraded with numerous automated optimization features. For most workloads, the auto-configured network settings provide optimized performance.
In most cases, networking performance problems are actually caused by a malfunction in hardware or faulty infrastructure. Such causes are beyond the scope of this document; the performance issues and solutions discussed in this chapter are useful in optimizing perfectly functional systems.
Networking is a delicate subsystem, containing different parts with sensitive connections. This is why the open source community and Red Hat invest much work in implementing ways to automatically optimize network performance. As such, for most workloads, you may never need to reconfigure networking for performance.
8.1. Network Performance Enhancements
Red Hat Enterprise Linux 6.1 provided the following network performance enhancements:
Receive Packet Steering (RPS)
RPS enables a single NIC rx queue to have its receive softirq workload distributed among several CPUs. This helps prevent network traffic from being bottlenecked on a single NIC hardware queue.
To enable RPS, specify the target CPUs in /sys/class/net/ethX/queues/rx-N/rps_cpus, replacing ethX with the NIC's corresponding device name (for example, eth1, eth2) and rx-N with the specified NIC receive queue. This allows the CPUs specified in the file to process data from queue rx-N on ethX. When specifying CPUs, consider the queue's cache affinity [4].
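As a minimal sketch, the following C program writes a hexadecimal CPU bitmask to a queue's rps_cpus file. The device name (eth0), queue (rx-0), and mask value (0xf, selecting CPUs 0 through 3) are illustrative assumptions, and the program must run with privileges sufficient to write under /sys.

    #include <stdio.h>

    int main(void)
    {
        /* Path and mask are illustrative assumptions: eth0, queue rx-0,
         * mask 0xf selects CPUs 0-3.  rps_cpus expects a hexadecimal
         * CPU bitmask. */
        const char *path = "/sys/class/net/eth0/queues/rx-0/rps_cpus";
        FILE *f = fopen(path, "w");

        if (f == NULL) {
            perror(path);
            return 1;
        }
        fprintf(f, "f\n");
        fclose(f);
        return 0;
    }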
Receive Flow Steering (RFS)
RFS is an extension of RPS that allows the administrator to configure a hash table which is populated automatically when applications receive data and are interrogated by the network stack. This determines which application is receiving each piece of network data (based on source:destination network information). Using this information, the network stack can schedule the optimal CPU to receive each packet. To configure RFS, use the following tunables:
- /proc/sys/net/core/rps_sock_flow_entries - This controls the maximum number of sockets/flows that the kernel can steer towards any specified CPU. This is a system-wide, shared limit.
- /sys/class/net/ethX/queues/rx-N/rps_flow_cnt - This controls the maximum number of sockets/flows that the kernel can steer for a specified receive queue (rx-N) on a NIC (ethX). Note that the sum of this tunable's per-queue values across all NICs must be less than or equal to /proc/sys/net/core/rps_sock_flow_entries.
Unlike RPS, RFS allows both the receive queue and the application to share the same CPU when processing packet flows. This can result in improved performance in some cases. However, such improvements are dependent on factors such as cache hierarchy, application load, and the like.
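As an illustrative sketch of configuring these two tunables, the program below sets the system-wide flow table size and a per-queue flow count. The device name (eth0), queue (rx-0), and the values 32768 and 2048 are assumptions for the example, not recommendations.

    #include <stdio.h>

    /* Write a single value to a /proc or /sys tunable. */
    static int write_value(const char *path, const char *value)
    {
        FILE *f = fopen(path, "w");

        if (f == NULL) {
            perror(path);
            return -1;
        }
        fprintf(f, "%s\n", value);
        fclose(f);
        return 0;
    }

    int main(void)
    {
        /* System-wide, shared limit for all receive queues. */
        write_value("/proc/sys/net/core/rps_sock_flow_entries", "32768");
        /* Per-queue limit; the sum over all queues on all NICs should not
         * exceed rps_sock_flow_entries. */
        write_value("/sys/class/net/eth0/queues/rx-0/rps_flow_cnt", "2048");
        return 0;
    }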
getsockopt support for TCP thin-streams
Thin-stream is a term used to characterize transport protocols wherein applications send data at such a low rate that the protocol's retransmission mechanisms are not fully saturated. Applications that use thin-stream protocols typically transport via reliable protocols like TCP; in most cases, such applications provide very time-sensitive services (for example, stock trading, online gaming, control systems).
For time-sensitive services, packet loss can be devastating to service quality. To help prevent this, the getsockopt call has been enhanced to support two extra options:
- TCP_THIN_DUPACK - This Boolean enables dynamic triggering of retransmissions after one dupACK for thin streams.
- TCP_THIN_LINEAR_TIMEOUTS - This Boolean enables dynamic triggering of linear timeouts for thin streams.
Both options are specifically activated by the application. For more information about these options, refer to file:///usr/share/doc/kernel-doc-version/Documentation/networking/ip-sysctl.txt. For more information about thin-streams, refer to file:///usr/share/doc/kernel-doc-version/Documentation/networking/tcp-thin.txt.
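The sketch below assumes an already created TCP socket descriptor; it enables both options with setsockopt and reads one back with getsockopt. The fallback constant values match the kernel's linux/tcp.h definitions, but verify them against the headers on your system.

    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    /* Fallback definitions for older userspace headers; values from linux/tcp.h. */
    #ifndef TCP_THIN_LINEAR_TIMEOUTS
    #define TCP_THIN_LINEAR_TIMEOUTS 16
    #endif
    #ifndef TCP_THIN_DUPACK
    #define TCP_THIN_DUPACK 17
    #endif

    /* Enable thin-stream handling on an existing TCP socket descriptor. */
    int enable_thin_stream(int fd)
    {
        int on = 1;
        socklen_t len = sizeof(on);

        if (setsockopt(fd, IPPROTO_TCP, TCP_THIN_DUPACK, &on, sizeof(on)) < 0)
            return -1;
        if (setsockopt(fd, IPPROTO_TCP, TCP_THIN_LINEAR_TIMEOUTS, &on, sizeof(on)) < 0)
            return -1;

        /* Read one option back to confirm the setting took effect. */
        if (getsockopt(fd, IPPROTO_TCP, TCP_THIN_DUPACK, &on, &len) == 0)
            printf("TCP_THIN_DUPACK = %d\n", on);
        return 0;
    }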
Transparent Proxy (TProxy) support
The kernel can now handle non-locally bound IPv4 TCP and UDP sockets to support transparent proxies. To enable this, configure iptables accordingly, and enable and configure policy routing.
For more information about transparent proxies, refer to file:///usr/share/doc/kernel-doc-version/Documentation/networking/tproxy.txt.
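The iptables and policy routing configuration is not shown here; the sketch below illustrates only the socket side, setting the IP_TRANSPARENT option so a listener can bind to a non-local IPv4 address. The address 192.0.2.1 and port 8080 are placeholder assumptions, and the program requires CAP_NET_ADMIN.

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    /* Fallback for older headers; value from linux/in.h. */
    #ifndef IP_TRANSPARENT
    #define IP_TRANSPARENT 19
    #endif

    int main(void)
    {
        int on = 1;
        struct sockaddr_in addr;
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd < 0) {
            perror("socket");
            return 1;
        }
        /* Allow the socket to bind to an address not assigned to this host. */
        if (setsockopt(fd, IPPROTO_IP, IP_TRANSPARENT, &on, sizeof(on)) < 0) {
            perror("setsockopt(IP_TRANSPARENT)");
            return 1;
        }

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(8080);                    /* example port */
        addr.sin_addr.s_addr = inet_addr("192.0.2.1");  /* non-local example address */

        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("bind");
            return 1;
        }
        listen(fd, 16);
        return 0;
    }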
[4] Ensuring cache affinity between a CPU and a NIC means configuring them to share the same L2 cache. For more information, refer to Section 8.3, “Overview of Packet Reception”.