Chapter 28. Configuring ethtool settings in NetworkManager connection profiles
NetworkManager can configure certain network driver and hardware settings persistently. Compared to using the ethtool utility to manage these settings, this has the benefit of not losing the settings after a reboot.
You can set the following ethtool settings in NetworkManager connection profiles:
- Offload features
- Network interface controllers can use the TCP offload engine (TOE) to offload the processing of certain operations to the network controller. This improves network throughput.
- Interrupt coalesce settings
- By using interrupt coalescing, the system collects network packets and generates a single interrupt for multiple packets. This increases the amount of data sent to the kernel with one hardware interrupt, which reduces the interrupt load and maximizes throughput.
- Ring buffers
- These buffers store incoming and outgoing network packets. You can increase the ring buffer sizes to reduce a high packet drop rate.
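For comparison, the ethtool utility can change the same classes of settings at run time, but those changes are lost after a reboot. A minimal sketch of the non-persistent equivalents, assuming the interface is named enp1s0:
# ethtool -K enp1s0 rx on
# ethtool -C enp1s0 rx-frames 128
# ethtool -G enp1s0 rx 4096
The -K, -C, and -G options change offload features, coalesce settings, and ring buffer sizes, respectively. NetworkManager instead reapplies the values stored in the connection profile every time the profile activates.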
28.1. Configuring an ethtool offload feature by using nmcli
You can use NetworkManager to enable and disable ethtool offload features in a connection profile.
Procedure
For example, to enable the RX offload feature and disable TX offload in the enp1s0 connection profile, enter:
# nmcli con modify enp1s0 ethtool.feature-rx on ethtool.feature-tx off
This command explicitly enables RX offload and disables TX offload.
To remove the setting of an offload feature that you previously enabled or disabled, set the feature’s parameter to a null value. For example, to remove the configuration for TX offload, enter:
# nmcli con modify enp1s0 ethtool.feature-tx ""
Reactivate the network profile:
# nmcli connection up enp1s0
Verification
Use the ethtool -k command to display the current offload features of a network device:
# ethtool -k network_device
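In addition, you can read back the value stored in the connection profile itself; a hedged sketch, assuming the profile is named enp1s0:
# nmcli -g ethtool.feature-rx connection show enp1s0
The -g option prints only the value of the requested property, which is convenient for scripting.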
Additional resources
- nm-settings-nmcli(5) man page
28.2. Configuring an ethtool offload feature by using the network RHEL system role
Network interface controllers can use the TCP offload engine (TOE) to offload the processing of certain operations to the network controller. This improves network throughput. You configure offload features in the connection profile of the network interface. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook.
You cannot use the network RHEL system role to update only specific values in an existing connection profile. The role ensures that a connection profile exactly matches the settings in a playbook. If a connection profile with the same name already exists, the role applies the settings from the playbook and resets all other settings in the profile to their defaults. To prevent resetting values, always specify the whole configuration of the network connection profile in the playbook, including the settings that you do not want to change.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure the network
  hosts: managed-node-01.example.com
  tasks:
    - name: Ethernet connection profile with dynamic IP address settings and offload features
      ansible.builtin.include_role:
        name: rhel-system-roles.network
      vars:
        network_connections:
          - name: enp1s0
            type: ethernet
            autoconnect: yes
            ip:
              dhcp4: yes
              auto6: yes
            ethtool:
              features:
                gro: no
                gso: yes
                tx_sctp_segmentation: no
            state: up
The settings specified in the example playbook include the following:
gro: no
- Disables Generic receive offload (GRO).
gso: yes
- Enables Generic segmentation offload (GSO).
tx_sctp_segmentation: no
- Disables TX Stream Control Transmission Protocol (SCTP) segmentation.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Query the Ansible facts of the managed node and verify the offload settings:
# ansible managed-node-01.example.com -m ansible.builtin.setup
...
"ansible_enp1s0": {
    "active": true,
    "device": "enp1s0",
    "features": {
        ...
        "rx_gro_hw": "off",
        ...
        "tx_gso_list": "on",
        ...
        "tx_sctp_segmentation": "off",
        ...
    }
...
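To limit the output to the facts of a single interface, you can pass a filter to the setup module. A minimal sketch, assuming the interface fact is named ansible_enp1s0:
# ansible managed-node-01.example.com -m ansible.builtin.setup -a 'filter=ansible_enp1s0'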
Additional resources
- /usr/share/ansible/roles/rhel-system-roles.network/README.md file
- /usr/share/doc/rhel-system-roles/network/ directory
28.3. Configuring ethtool coalesce settings by using nmcli
You can use NetworkManager to set ethtool coalesce settings in connection profiles.
Procedure
For example, to set the maximum number of received packets to delay to 128 in the enp1s0 connection profile, enter:
# nmcli connection modify enp1s0 ethtool.coalesce-rx-frames 128
To remove a coalesce setting, set it to a null value. For example, to remove the ethtool.coalesce-rx-frames setting, enter:
# nmcli connection modify enp1s0 ethtool.coalesce-rx-frames ""
Reactivate the network profile:
# nmcli connection up enp1s0
Verification
Use the ethtool -c command to display the current coalesce settings of a network device:
# ethtool -c network_device
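As with offload features, you can also read back the value stored in the profile; a hedged sketch, assuming the profile is named enp1s0:
# nmcli -g ethtool.coalesce-rx-frames connection show enp1s0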
Additional resources
- nm-settings-nmcli(5) man page
28.4. Configuring ethtool coalesce settings by using the network RHEL system role
By using interrupt coalescing, the system collects network packets and generates a single interrupt for multiple packets. This increases the amount of data sent to the kernel with one hardware interrupt, which reduces the interrupt load and maximizes throughput. You configure coalesce settings in the connection profile of the network interface. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook.
You cannot use the network RHEL system role to update only specific values in an existing connection profile. The role ensures that a connection profile exactly matches the settings in a playbook. If a connection profile with the same name already exists, the role applies the settings from the playbook and resets all other settings in the profile to their defaults. To prevent resetting values, always specify the whole configuration of the network connection profile in the playbook, including the settings that you do not want to change.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure the network
  hosts: managed-node-01.example.com
  tasks:
    - name: Ethernet connection profile with dynamic IP address settings and coalesce settings
      ansible.builtin.include_role:
        name: rhel-system-roles.network
      vars:
        network_connections:
          - name: enp1s0
            type: ethernet
            autoconnect: yes
            ip:
              dhcp4: yes
              auto6: yes
            ethtool:
              coalesce:
                rx_frames: 128
                tx_frames: 128
            state: up
The settings specified in the example playbook include the following:
rx_frames: <value>
- Sets the number of RX frames.
tx_frames: <value>
- Sets the number of TX frames.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Display the current coalesce settings of the network device:
# ansible managed-node-01.example.com -m command -a 'ethtool -c enp1s0'
managed-node-01.example.com | CHANGED | rc=0 >>
...
rx-frames: 128
...
tx-frames: 128
...
Additional resources
- /usr/share/ansible/roles/rhel-system-roles.network/README.md file
- /usr/share/doc/rhel-system-roles/network/ directory
28.5. Increasing the ring buffer size to reduce a high packet drop rate by using nmcli
Increase the size of an Ethernet device’s ring buffers if the packet drop rate causes applications to report a loss of data, timeouts, or other issues.
Receive ring buffers are shared between the device driver and the network interface controller (NIC). The card assigns a transmit (TX) and receive (RX) ring buffer. As the name implies, the ring buffer is a circular buffer where an overflow overwrites existing data. There are two ways to move data from the NIC to the kernel: hardware interrupts and software interrupts, also called SoftIRQs.
The kernel uses the RX ring buffer to store incoming packets until the device driver can process them. The device driver drains the RX ring, typically by using SoftIRQs, which puts the incoming packets into a kernel data structure called an sk_buff or skb to begin its journey through the kernel and up to the application that owns the relevant socket.
The kernel uses the TX ring buffer to hold outgoing packets that should be sent to the network. These ring buffers reside at the bottom of the stack and are a crucial point at which packet drops can occur, which in turn adversely affects network performance.
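Before you change any settings, it can help to confirm that the drop counters are still growing rather than reflecting a past event. A minimal sketch, assuming the interface is named enp1s0:
# watch -n 1 "ethtool -S enp1s0 | grep -iE 'drop|discard'"
The watch utility reruns the command every second; steadily increasing counters indicate an ongoing buffer overflow.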
Procedure
Display the packet drop statistics of the interface:
# ethtool -S enp1s0
...
rx_queue_0_drops: 97326
rx_queue_1_drops: 63783
...
Note that the output of the command depends on the network card and the driver.
High values in discard or drop counters indicate that the available buffer fills up faster than the kernel can process the packets. Increasing the ring buffers can help to avoid such loss.
Display the maximum ring buffer sizes:
# ethtool -g enp1s0
Ring parameters for enp1s0:
Pre-set maximums:
RX:             4096
RX Mini:        0
RX Jumbo:       16320
TX:             4096
Current hardware settings:
RX:             255
RX Mini:        0
RX Jumbo:       0
TX:             255
If the values in the Pre-set maximums section are higher than in the Current hardware settings section, you can change the settings in the next steps.
Identify the NetworkManager connection profile that uses the interface:
# nmcli connection show
NAME                UUID                                  TYPE      DEVICE
Example-Connection  a5eb6490-cc20-3668-81f8-0314a27f3f75  ethernet  enp1s0
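On hosts with many profiles, you can instead query the active profile of a specific device; a hedged sketch, assuming the interface is named enp1s0:
# nmcli -g GENERAL.CONNECTION device show enp1s0
The command prints only the name of the connection profile that is currently active on the device.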
Update the connection profile, and increase the ring buffers:
To increase the RX ring buffer, enter:
# nmcli connection modify Example-Connection ethtool.ring-rx 4096
To increase the TX ring buffer, enter:
# nmcli connection modify Example-Connection ethtool.ring-tx 4096
Reactivate the NetworkManager connection:
# nmcli connection up Example-Connection
Important: Depending on the driver your NIC uses, changing the ring buffer size can briefly interrupt the network connection.
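To confirm that the new sizes are active, you can display the ring parameters again, as earlier in this procedure:
# ethtool -g enp1s0
The Current hardware settings section should now show the values that you configured.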
28.6. Increasing the ring buffer size to reduce a high packet drop rate by using the network RHEL system role
Increase the size of an Ethernet device’s ring buffers if the packet drop rate causes applications to report a loss of data, timeouts, or other issues.
Ring buffers are circular buffers where an overflow overwrites existing data. The network card assigns a transmit (TX) and receive (RX) ring buffer. Receive ring buffers are shared between the device driver and the network interface controller (NIC). Data can move from the NIC to the kernel through either hardware interrupts or software interrupts, also called SoftIRQs.
The kernel uses the RX ring buffer to store incoming packets until the device driver can process them. The device driver drains the RX ring, typically by using SoftIRQs, which puts the incoming packets into a kernel data structure called an sk_buff or skb to begin its journey through the kernel and up to the application that owns the relevant socket.
The kernel uses the TX ring buffer to hold outgoing packets that should be sent to the network. These ring buffers reside at the bottom of the stack and are a crucial point at which packet drops can occur, which in turn adversely affects network performance.
You configure ring buffer settings in the NetworkManager connection profiles. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook.
You cannot use the network RHEL system role to update only specific values in an existing connection profile. The role ensures that a connection profile exactly matches the settings in a playbook. If a connection profile with the same name already exists, the role applies the settings from the playbook and resets all other settings in the profile to their defaults. To prevent resetting values, always specify the whole configuration of the network connection profile in the playbook, including the settings that you do not want to change.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- You know the maximum ring buffer sizes that the device supports.
Procedure
Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure the network
  hosts: managed-node-01.example.com
  tasks:
    - name: Ethernet connection profile with dynamic IP address setting and increased ring buffer sizes
      ansible.builtin.include_role:
        name: rhel-system-roles.network
      vars:
        network_connections:
          - name: enp1s0
            type: ethernet
            autoconnect: yes
            ip:
              dhcp4: yes
              auto6: yes
            ethtool:
              ring:
                rx: 4096
                tx: 4096
            state: up
The settings specified in the example playbook include the following:
rx: <value>
- Sets the maximum number of received ring buffer entries.
tx: <value>
- Sets the maximum number of transmitted ring buffer entries.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Display the maximum ring buffer sizes:
# ansible managed-node-01.example.com -m command -a 'ethtool -g enp1s0'
managed-node-01.example.com | CHANGED | rc=0 >>
...
Current hardware settings:
RX:             4096
RX Mini:        0
RX Jumbo:       0
TX:             4096
Additional resources
- /usr/share/ansible/roles/rhel-system-roles.network/README.md file
- /usr/share/doc/rhel-system-roles/network/ directory