8.2. Optimized Network Settings


Performance tuning is usually done pre-emptively: we adjust known variables before running an application or deploying a system, and if an adjustment proves ineffective, we try adjusting other variables. The assumption behind this approach is that, by default, the system is not operating at an optimal level of performance and therefore needs to be adjusted, sometimes on the basis of calculated guesses.
As mentioned earlier, the network stack is mostly self-optimizing. In addition, effectively tuning the network requires a thorough understanding not just of how the network stack works, but also of the specific system's network resource requirements. Incorrect network performance configuration can actually lead to degraded performance.
For example, consider the bufferbloat problem. Increasing buffer queue depths allows TCP connections to build congestion windows larger than the link would otherwise allow, because the deep buffers absorb the excess. However, those connections also have huge RTT values, since frames spend so much time in-queue. This, in turn, results in sub-optimal performance, because congestion becomes effectively impossible to detect.
When it comes to network performance, it is advisable to keep the default settings unless a particular performance issue becomes apparent, such as frame loss or significantly reduced throughput. Even then, the best solution is usually one that results from a meticulous study of the problem, rather than from simply tuning settings upward (increasing buffer or queue lengths, reducing interrupt latency, and so on).
To properly diagnose a network performance problem, use the following tools:
netstat
A command-line utility that prints network connections, routing tables, interface statistics, masquerade connections and multicast memberships. It retrieves information about the networking subsystem from the /proc/net/ file system. These files include:
  • /proc/net/dev (device information)
  • /proc/net/tcp (TCP socket information)
  • /proc/net/unix (Unix domain socket information)
For more information about netstat and its referenced files from /proc/net/, refer to the netstat man page: man netstat.
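For example, to get a quick overview of per-protocol statistics and per-interface error and drop counters, you could start with the following commands (output format varies slightly between netstat versions):
netstat -s
netstat -i
Steadily climbing error or drop counters in either output are usually the first clue that frames are being lost somewhere in the stack.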
dropwatch
A monitoring utility that reports packets dropped by the kernel. For more information, refer to the dropwatch man page: man dropwatch.
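For example, to watch drop locations resolved to kernel symbol names, you could run dropwatch with the kernel address symbol (kas) lookup method and then enter start at its interactive prompt:
dropwatch -l kas
This is only a monitoring sketch, not a fix; it tells you where in the kernel packets are being discarded, not why.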
ip
A utility for managing and monitoring routes, devices, policy routing, and tunnels. For more information, refer to the ip man page: man ip.
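For example, to check transmit and receive statistics for a single interface, including dropped and errored packets, you could run the following (eth0 is a placeholder; replace it with the interface you are investigating):
ip -s link show dev eth0
Specifying -s twice (ip -s -s link) prints a more detailed breakdown of the error counters.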
ethtool
A utility for displaying and changing NIC settings. For more information, refer to the ethtool man page: man ethtool.
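For example, to inspect driver-level statistics and the current receive and transmit ring sizes of an interface (again, eth0 is only a placeholder), you could run:
ethtool -S eth0
ethtool -g eth0
The counters reported by -S are driver-specific, so their names differ between NICs.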
/proc/net/snmp
A file that displays ASCII data needed for the IP, ICMP, TCP, and UDP management information bases for an SNMP agent. It also displays real-time UDP-lite statistics.
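For example, to view the UDP counters, where each header line is followed by a line of matching values, you could run:
grep ^Udp: /proc/net/snmp
A rising InErrors value in the second line is the "UDP input errors" symptom discussed below.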
The SystemTap Beginners Guide contains several sample scripts you can use to profile and monitor network performance. This guide is available from http://access.redhat.com/site/documentation/Red_Hat_Enterprise_Linux/.
After collecting relevant data on a network performance problem, you should be able to formulate a theory behind the problem and, hopefully, a solution. [5] For example, an increase in UDP input errors in /proc/net/snmp indicates that one or more socket receive queues are full when the network stack attempts to queue new frames into an application's socket.
This means that packets are bottlenecked at one or more socket queues: either the socket queue drains packets too slowly, or the packet volume is too large for that queue. If it is the latter, check the logs of any network-intensive application for lost data; to resolve this, you would need to optimize or reconfigure the offending application.
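For example, to confirm that such drops are ongoing and to identify the socket that is not being drained, you could watch the UDP counters and the receive queues (Recv-Q) of listening UDP sockets; the commands below assume the iproute ss utility is available:
watch -d 'grep ^Udp: /proc/net/snmp'
ss -ulnp
A Recv-Q value that stays close to the socket's buffer limit points to the application that is failing to keep up with the incoming packet rate.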

Socket receive buffer size

Socket send and receive buffer sizes are dynamically adjusted, so they rarely need to be edited manually. If further analysis, such as the analysis presented in the SystemTap network example sk_stream_wait_memory.stp, suggests that the socket queue's drain rate is too slow, you can increase the depth of the application's socket queue. To do so, increase the size of the receive buffers used by sockets by configuring either of the following values:
rmem_default
A kernel parameter that controls the default size of receive buffers used by sockets. To configure this, run the following command:
sysctl -w net.core.rmem_default=N
Replace N with the desired buffer size, in bytes. To view the current value of this parameter, read /proc/sys/net/core/rmem_default. Bear in mind that the value of rmem_default should be no greater than rmem_max (/proc/sys/net/core/rmem_max); if need be, increase the value of rmem_max as well.
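Note that settings applied with sysctl -w do not survive a reboot. To make them persistent, you could add the corresponding entries to /etc/sysctl.conf (the values below are placeholders, not recommendations):
net.core.rmem_default = 262144
net.core.rmem_max = 262144
and load them with sysctl -p.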
SO_RCVBUF
A socket option that controls the maximum size of a socket's receive buffer, in bytes. For more information on SO_RCVBUF, refer to the socket(7) man page: man 7 socket.
To configure SO_RCVBUF, use the setsockopt() system call; you can retrieve the current SO_RCVBUF value with getsockopt(). For more information on both calls, refer to the setsockopt man page: man setsockopt.
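To check the receive buffer a running application actually obtained, you could inspect its sockets' memory information with the ss utility (use -u instead of -t for UDP sockets):
ss -tmp
In the skmem field of the output, the rb value is the socket's receive buffer limit in bytes; it reflects any SO_RCVBUF setting, which the kernel doubles to allow for bookkeeping overhead.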


[5] Section 8.3, “Overview of Packet Reception” contains an overview of packet travel, which should help you locate and map bottleneck-prone areas in the network stack.