Chapter 30. Configuring ethtool settings in NetworkManager connection profiles
NetworkManager can configure certain network driver and hardware settings persistently. Compared to using the ethtool utility to manage these settings, this has the benefit that the settings persist across reboots.
You can set the following ethtool
settings in NetworkManager connection profiles:
- Offload features
- Network interface controllers can use the TCP offload engine (TOE) to offload the processing of certain operations to the network controller. This improves the network throughput.
- Interrupt coalesce settings
- By using interrupt coalescing, the system collects network packets and generates a single interrupt for multiple packets. This increases the amount of data sent to the kernel with one hardware interrupt, which reduces the interrupt load, and maximizes the throughput.
- Ring buffers
- These buffers store incoming and outgoing network packets. You can increase the ring buffer sizes to reduce a high packet drop rate.
- Channel settings
- A network interface manages its associated number of channels along with hardware settings and network drivers. All devices associated with a network interface communicate with each other through interrupt requests (IRQs). Each device queue holds pending IRQs and communicates over a data line known as a channel. Types of queues are associated with specific channel types. These channel types include:
  - rx for receive queues
  - tx for transmit queues
  - other for link interrupts or single root input/output virtualization (SR-IOV) coordination
  - combined for hardware capacity-based multipurpose channels
30.1. Configuring an ethtool offload feature by using nmcli
You can use NetworkManager to enable and disable ethtool
offload features in a connection profile.
Procedure
For example, to enable the RX offload feature and disable TX offload in the enp1s0 connection profile, enter:

# nmcli con modify enp1s0 ethtool.feature-rx on ethtool.feature-tx off

This command explicitly enables RX offload and disables TX offload.
To remove the setting of an offload feature that you previously enabled or disabled, set the feature's parameter to a null value. For example, to remove the configuration for TX offload, enter:

# nmcli con modify enp1s0 ethtool.feature-tx ""

Reactivate the network profile:
# nmcli connection up enp1s0
Verification
Use the ethtool -k command to display the current offload features of a network device:

# ethtool -k network_device
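A trimmed, illustrative example of the output after the procedure above; the exact list of features and their values depend on your NIC and driver:

```
# ethtool -k enp1s0
Features for enp1s0:
rx-checksumming: on
tx-checksumming: off
scatter-gather: on
...
```

The feature-rx and feature-tx properties correspond to the rx-checksumming and tx-checksumming lines in this output.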
30.2. Configuring an ethtool offload feature by using the network RHEL system role
You can use the network RHEL system role to automate configuring the TCP offload engine (TOE), which offloads the processing of certain operations to the network controller. TOE improves the network throughput.
You cannot use the network
RHEL system role to update only specific values in an existing connection profile. The role ensures that a connection profile exactly matches the settings in a playbook. If a connection profile with the same name already exists, the role applies the settings from the playbook and resets all other settings in the profile to their defaults. To prevent resetting values, always specify the whole configuration of the network connection profile in the playbook, including the settings that you do not want to change.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml. The settings to specify in the example playbook include the following:
gro: no
- Disables Generic receive offload (GRO).
gso: yes
- Enables Generic segmentation offload (GSO).
tx_sctp_segmentation: no
- Disables TX stream control transmission protocol (SCTP) segmentation.
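Putting these settings together, a playbook could look like the following sketch. The host name, interface name, and IP configuration are assumptions; adjust them to your environment:

```yaml
---
- name: Configure the network
  hosts: managed-node-01.example.com
  tasks:
    - name: Ethernet connection profile with ethtool feature settings
      ansible.builtin.include_role:
        name: rhel-system-roles.network
      vars:
        network_connections:
          # The interface name enp1s0 is an assumption.
          - name: enp1s0
            type: ethernet
            autoconnect: yes
            ip:
              dhcp4: yes
              auto6: yes
            ethtool:
              features:
                gro: no
                gso: yes
                tx_sctp_segmentation: no
            state: up
```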
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node.

Validate the playbook syntax:

$ ansible-playbook --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:

$ ansible-playbook ~/playbook.yml
Verification
Query the Ansible facts of the managed node and verify the offload settings:
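As a sketch of one way to do this, you can gather the Ansible facts, which include a per-interface features dictionary; the managed node name is an assumption:

```shell
# Gather facts from the managed node; the host name is an assumption.
ansible managed-node-01.example.com -m ansible.builtin.setup
# In the output, inspect the "features" dictionary under the
# "ansible_enp1s0" fact for the offload states set by the playbook.
```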
30.3. Configuring ethtool coalesce settings by using nmcli
You can use NetworkManager to set ethtool
coalesce settings in connection profiles.
Procedure
For example, to set the maximum number of received packets to delay to 128 in the enp1s0 connection profile, enter:

# nmcli connection modify enp1s0 ethtool.coalesce-rx-frames 128
To remove a coalesce setting, set it to a null value. For example, to remove the ethtool.coalesce-rx-frames setting, enter:

# nmcli connection modify enp1s0 ethtool.coalesce-rx-frames ""
Reactivate the network profile:

# nmcli connection up enp1s0
Verification
Use the ethtool -c command to display the current coalesce settings of a network device:

# ethtool -c network_device
30.4. Configuring ethtool coalesce settings by using the network RHEL system role
Interrupt coalescing collects network packets and generates a single interrupt for multiple packets. This reduces interrupt load and maximizes throughput. You can automate the configuration of these settings in the NetworkManager connection profile by using the network
RHEL system role.
You cannot use the network
RHEL system role to update only specific values in an existing connection profile. The role ensures that a connection profile exactly matches the settings in a playbook. If a connection profile with the same name already exists, the role applies the settings from the playbook and resets all other settings in the profile to their defaults. To prevent resetting values, always specify the whole configuration of the network connection profile in the playbook, including the settings that you do not want to change.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml. The settings to specify in the example playbook include the following:
rx_frames: <value>
- Sets the number of RX frames.
tx_frames: <value>
- Sets the number of TX frames.
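Putting these settings together, a playbook could look like the following sketch. The host name, interface name, IP configuration, and frame counts are assumptions; adjust them to your environment:

```yaml
---
- name: Configure the network
  hosts: managed-node-01.example.com
  tasks:
    - name: Ethernet connection profile with coalesce settings
      ansible.builtin.include_role:
        name: rhel-system-roles.network
      vars:
        network_connections:
          # The interface name and frame values are assumptions.
          - name: enp1s0
            type: ethernet
            autoconnect: yes
            ip:
              dhcp4: yes
              auto6: yes
            ethtool:
              coalesce:
                rx_frames: 128
                tx_frames: 128
            state: up
```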
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node.

Validate the playbook syntax:

$ ansible-playbook --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:

$ ansible-playbook ~/playbook.yml
Verification
Display the current coalesce settings of the network device:
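As a sketch, you can run ethtool on the managed node through the Ansible command module; the host and interface names are assumptions:

```shell
# Display the coalesce settings of enp1s0 on the managed node.
ansible managed-node-01.example.com -m ansible.builtin.command -a 'ethtool -c enp1s0'
```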
30.5. Increasing the ring buffer size to reduce a high packet drop rate by using nmcli
Increase the size of an Ethernet device’s ring buffers if the packet drop rate causes applications to report a loss of data, timeouts, or other issues.
Receive ring buffers are shared between the device driver and network interface controller (NIC). The card assigns a transmit (TX) and receive (RX) ring buffer. As the name implies, the ring buffer is a circular buffer where an overflow overwrites existing data. There are two ways to move data from the NIC to the kernel: hardware interrupts and software interrupts, also called SoftIRQs.
The kernel uses the RX ring buffer to store incoming packets until the device driver can process them. The device driver drains the RX ring, typically by using SoftIRQs, which puts the incoming packets into a kernel data structure called an sk_buff or skb to begin their journey through the kernel and up to the application that owns the relevant socket.
The kernel uses the TX ring buffer to hold outgoing packets which should be sent to the network. These ring buffers reside at the bottom of the stack and are a crucial point at which packet drop can occur, which in turn will adversely affect network performance.
Procedure
Display the packet drop statistics of the interface:
# ethtool -S enp1s0
...
rx_queue_0_drops: 97326
rx_queue_1_drops: 63783
...

Note that the output of the command depends on the network card and the driver.
High values in discard or drop counters indicate that the available buffer fills up faster than the kernel can process the packets. Increasing the ring buffers can help to avoid such loss.

Display the maximum ring buffer sizes:

# ethtool -g enp1s0

If the values in the Pre-set maximums section are higher than in the Current hardware settings section, you can change the settings in the next steps.

Identify the NetworkManager connection profile that uses the interface:
# nmcli connection show
NAME                UUID                                  TYPE      DEVICE
Example-Connection  a5eb6490-cc20-3668-81f8-0314a27f3f75  ethernet  enp1s0

Update the connection profile, and increase the ring buffers:
To increase the RX ring buffer, enter:

# nmcli connection modify Example-Connection ethtool.ring-rx 4096

To increase the TX ring buffer, enter:

# nmcli connection modify Example-Connection ethtool.ring-tx 4096
Reload the NetworkManager connection:

# nmcli connection up Example-Connection

Important: Depending on the driver your NIC uses, changing the ring buffer size can briefly interrupt the network connection.
30.6. Increasing the ring buffer size to reduce a high packet drop rate by using the network RHEL system role
Increase the size of an Ethernet device’s ring buffers if the packet drop rate causes applications to report a loss of data, timeouts, or other issues.
Ring buffers are circular buffers where an overflow overwrites existing data. The network card assigns a transmit (TX) and receive (RX) ring buffer. Receive ring buffers are shared between the device driver and the network interface controller (NIC). Data can move from the NIC to the kernel through either hardware interrupts or software interrupts, also called SoftIRQs.
The kernel uses the RX ring buffer to store incoming packets until the device driver can process them. The device driver drains the RX ring, typically by using SoftIRQs, which puts the incoming packets into a kernel data structure called an sk_buff or skb to begin their journey through the kernel and up to the application that owns the relevant socket.
The kernel uses the TX ring buffer to hold outgoing packets which should be sent to the network. These ring buffers reside at the bottom of the stack and are a crucial point at which packet drop can occur, which in turn will adversely affect network performance.
You configure ring buffer settings in the NetworkManager connection profiles. By using Ansible and the network
RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook.
You cannot use the network
RHEL system role to update only specific values in an existing connection profile. The role ensures that a connection profile exactly matches the settings in a playbook. If a connection profile with the same name already exists, the role applies the settings from the playbook and resets all other settings in the profile to their defaults. To prevent resetting values, always specify the whole configuration of the network connection profile in the playbook, including the settings that you do not want to change.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- You know the maximum ring buffer sizes that the device supports.
Procedure
Create a playbook file, for example, ~/playbook.yml. The settings to specify in the example playbook include the following:
rx: <value>
- Sets the number of ring buffer entries for received packets.
tx: <value>
- Sets the number of ring buffer entries for transmitted packets.
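Putting these settings together, a playbook could look like the following sketch. The host name, interface name, IP configuration, and buffer sizes are assumptions; the sizes must not exceed the maximums your device supports:

```yaml
---
- name: Configure the network
  hosts: managed-node-01.example.com
  tasks:
    - name: Ethernet connection profile with increased ring buffers
      ansible.builtin.include_role:
        name: rhel-system-roles.network
      vars:
        network_connections:
          # The interface name and the 4096-entry sizes are assumptions.
          - name: enp1s0
            type: ethernet
            autoconnect: yes
            ip:
              dhcp4: yes
              auto6: yes
            ethtool:
              ring:
                rx: 4096
                tx: 4096
            state: up
```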
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node.

Validate the playbook syntax:

$ ansible-playbook --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:

$ ansible-playbook ~/playbook.yml
Verification
Display the maximum ring buffer sizes:
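As a sketch, you can run ethtool on the managed node through the Ansible command module; the host and interface names are assumptions:

```shell
# Display the ring buffer settings of enp1s0 on the managed node.
ansible managed-node-01.example.com -m ansible.builtin.command -a 'ethtool -g enp1s0'
# Compare the values in the "Current hardware settings" section with
# the rx and tx values from the playbook.
```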
30.7. Configuring ethtool channel settings by using nmcli
The ethtool utility manages the number of channels that a network device uses for IRQ-based communication with its associated queues. Use the nmcli utility to configure these channel settings persistently in NetworkManager connection profiles.
Procedure
Display the channels associated with a network device:

# ethtool -l enp1s0

Update the channel settings of a network interface:

# nmcli connection modify enp1s0 ethtool.channels-rx 4 ethtool.channels-tx 3 ethtool.channels-other 9 ethtool.channels-combined 50

Reactivate the network profile:

# nmcli connection up enp1s0
Verification
Check the updated channel settings associated with the network device:

# ethtool -l enp1s0