Chapter 15. Configuring ethtool settings in NetworkManager connection profiles
NetworkManager can configure certain network driver and hardware settings persistently. Compared to using the ethtool utility to manage these settings, this has the benefit that the settings are not lost after a reboot.
You can set the following ethtool settings in NetworkManager connection profiles:
- Offload features: Network interface controllers can use the TCP offload engine (TOE) to offload the processing of certain operations to the network controller. This improves network throughput.
- Interrupt coalesce settings: With interrupt coalescing, the system collects network packets and generates a single interrupt for multiple packets. This increases the amount of data handled per hardware interrupt, which reduces the interrupt load and maximizes throughput.
- Ring buffers: These buffers store incoming and outgoing network packets. You can increase the ring buffer sizes to reduce a high packet drop rate.
- Channel settings: A network interface manages its number of channels together with the associated hardware settings and network drivers. The devices associated with a network interface communicate through interrupt requests (IRQs). Each device queue holds pending IRQs and communicates over a data line known as a channel. Specific queue types are associated with specific channel types:
  - rx for receive queues
  - tx for transmit queues
  - other for link interrupts or single root input/output virtualization (SR-IOV) coordination
  - combined for multipurpose channels based on hardware capacity
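The effect of interrupt coalescing described above can be illustrated with a toy calculation (plain Python, not driver code; the packet counts and the `interrupts_needed` helper are illustration-only assumptions):

```python
# Toy model: without coalescing, every received packet raises one
# hardware interrupt; with coalescing, the NIC signals the kernel only
# after rx_frames packets have accumulated (real hardware also applies
# a timeout, which this sketch ignores).
def interrupts_needed(packets, rx_frames=1):
    """Interrupts raised for `packets` packets when the NIC
    interrupts every `rx_frames` packets (ceiling division)."""
    return -(-packets // rx_frames)

print(interrupts_needed(10_000))                 # 10000: one interrupt per packet
print(interrupts_needed(10_000, rx_frames=128))  # 79: far fewer interrupts
```

This is why coalescing reduces interrupt load most on busy interfaces: the higher the packet rate, the more interrupts a single coalesce window replaces.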
15.1. Configuring an ethtool offload feature by using nmcli
You can use NetworkManager to enable and disable ethtool offload features in a connection profile.
Procedure
For example, to enable the RX offload feature and disable TX offload in the enp1s0 connection profile, enter:

# nmcli con modify enp1s0 ethtool.feature-rx on ethtool.feature-tx off

This command explicitly enables RX offload and disables TX offload.

For a list of settings you can configure, see the ethtool setting section in the nm-settings-nmcli(5) man page on your system.

To remove the setting of an offload feature that you previously enabled or disabled, set the feature’s parameter to a null value. For example, to remove the configuration for TX offload, enter:

# nmcli con modify enp1s0 ethtool.feature-tx ""

Reactivate the network profile:

# nmcli connection up enp1s0
Verification
Use the ethtool -k command to display the current offload features of a network device:

# ethtool -k <network_device>
15.2. Configuring an ethtool offload feature by using the network RHEL system role
You can use the network RHEL system role to automate configuring the TCP offload engine (TOE), which offloads the processing of certain operations to the network controller and improves network throughput.
You cannot use the network RHEL system role to update only specific values in an existing connection profile. The role ensures that a connection profile exactly matches the settings in a playbook. If a connection profile with the same name already exists, the role applies the settings from the playbook and resets all other settings in the profile to their defaults. To prevent resetting values, always specify the whole configuration of the network connection profile in the playbook, including the settings that you do not want to change.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions for these nodes.
Procedure
Create a playbook file, for example ~/playbook.yml, with the following content:

---
- name: Configure the network
  hosts: managed-node-01.example.com
  tasks:
    - name: Ethernet connection profile with dynamic IP address settings and offload features
      ansible.builtin.include_role:
        name: redhat.rhel_system_roles.network
      vars:
        network_connections:
          - name: enp1s0
            type: ethernet
            autoconnect: yes
            ip:
              dhcp4: yes
              auto6: yes
            ethtool:
              features:
                gro: no
                gso: yes
                tx_sctp_segmentation: no
            state: up

The settings specified in the example playbook include the following:
- gro: no - Disables Generic receive offload (GRO).
- gso: yes - Enables Generic segmentation offload (GSO).
- tx_sctp_segmentation: no - Disables TX Stream Control Transmission Protocol (SCTP) segmentation.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node.

Validate the playbook syntax:

$ ansible-playbook --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Query the Ansible facts of the managed node and verify the offload settings:
# ansible managed-node-01.example.com -m ansible.builtin.setup
...
"ansible_enp1s0": {
    "active": true,
    "device": "enp1s0",
    "features": {
        ...
        "rx_gro_hw": "off",
        ...
        "tx_gso_list": "on",
        ...
        "tx_sctp_segmentation": "off",
        ...
    }
...
15.3. Configuring an ethtool coalesce setting by using nmcli
You can use NetworkManager to set ethtool coalesce settings in connection profiles.
Procedure
For example, to set the maximum number of received packets to delay to 128 in the enp1s0 connection profile, enter:

# nmcli connection modify enp1s0 ethtool.coalesce-rx-frames 128

For a list of settings you can configure, see the ethtool setting section in the nm-settings-nmcli(5) man page on your system.

To remove a coalesce setting, set it to a null value. For example, to remove the ethtool.coalesce-rx-frames setting, enter:

# nmcli connection modify enp1s0 ethtool.coalesce-rx-frames ""

Reactivate the network profile:

# nmcli connection up enp1s0
Verification
Use the ethtool -c command to display the current coalesce settings of a network device:

# ethtool -c <network_device>
15.4. Configuring an ethtool coalesce setting by using the network RHEL system role
Interrupt coalescing collects network packets and generates a single interrupt for multiple packets. This reduces interrupt load and maximizes throughput. You can automate the configuration of these settings in the NetworkManager connection profile by using the network RHEL system role.
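The trade-off behind coalesce values such as rx_frames can be sketched with back-of-envelope arithmetic (plain Python; the packet rates and the `worst_case_delay_us` helper are illustrative assumptions, not values from this procedure): larger coalesce values reduce interrupt load but add latency, and the added delay grows as the traffic rate drops.

```python
# Toy estimate: at a steady packet rate, waiting for rx_frames packets
# before firing an interrupt can delay the first packet of a batch by
# up to rx_frames / rate seconds (ignoring the coalesce timeout that
# real hardware also applies).
def worst_case_delay_us(rx_frames, packets_per_second):
    return rx_frames / packets_per_second * 1_000_000

print(round(worst_case_delay_us(128, 1_000_000), 1))  # 128.0 µs at 1M packets/s
print(round(worst_case_delay_us(128, 10_000), 1))     # 12800.0 µs at 10k packets/s
```

At high rates the extra delay is negligible, which is why aggressive coalescing suits busy, throughput-oriented interfaces better than latency-sensitive ones.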
You cannot use the network RHEL system role to update only specific values in an existing connection profile. The role ensures that a connection profile exactly matches the settings in a playbook. If a connection profile with the same name already exists, the role applies the settings from the playbook and resets all other settings in the profile to their defaults. To prevent resetting values, always specify the whole configuration of the network connection profile in the playbook, including the settings that you do not want to change.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions for these nodes.
Procedure
Create a playbook file, for example ~/playbook.yml, with the following content:

---
- name: Configure the network
  hosts: managed-node-01.example.com
  tasks:
    - name: Ethernet connection profile with dynamic IP address settings and coalesce settings
      ansible.builtin.include_role:
        name: redhat.rhel_system_roles.network
      vars:
        network_connections:
          - name: enp1s0
            type: ethernet
            autoconnect: yes
            ip:
              dhcp4: yes
              auto6: yes
            ethtool:
              coalesce:
                rx_frames: 128
                tx_frames: 128
            state: up

The settings specified in the example playbook include the following:
- rx_frames: <value> - Sets the number of received packets to collect before the NIC generates an interrupt.
- tx_frames: <value> - Sets the number of transmitted packets to collect before the NIC generates an interrupt.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node.

Validate the playbook syntax:

$ ansible-playbook --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Display the current coalesce settings of the network device:
# ansible managed-node-01.example.com -m command -a 'ethtool -c enp1s0'
managed-node-01.example.com | CHANGED | rc=0 >>
...
rx-frames: 128
...
tx-frames: 128
...
15.5. Increasing the ring buffer size to reduce a high packet drop rate by using nmcli
Increase the size of an Ethernet device’s ring buffers if the packet drop rate causes applications to report a loss of data, timeouts, or other issues.
Receive ring buffers are shared between the device driver and the network interface controller (NIC). The card assigns a transmit (TX) and a receive (RX) ring buffer. As the name implies, a ring buffer is a circular buffer where an overflow overwrites existing data. There are two ways to move data from the NIC to the kernel: hardware interrupts and software interrupts, also called SoftIRQs.
The kernel uses the RX ring buffer to store incoming packets until the device driver can process them. The device driver drains the RX ring, typically by using SoftIRQs, which puts the incoming packets into a kernel data structure called an sk_buff or skb to begin its journey through the kernel and up to the application that owns the relevant socket.
The kernel uses the TX ring buffer to hold outgoing packets which should be sent to the network. These ring buffers reside at the bottom of the stack and are a crucial point at which packet drop can occur, which in turn will adversely affect network performance.
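Why a larger ring reduces drops can be shown with a toy backlog model (plain Python; the burst sizes, drain rate, and the `dropped` helper are made-up illustration values, not measurements from this chapter):

```python
# Toy model: packets arrive in bursts, the driver drains a fixed number
# per SoftIRQ interval, and packets that do not fit into the ring are
# dropped, similar to what NICs report in rx_*_drops counters.
def dropped(intervals, burst_size, ring_size, drained_per_interval):
    backlog, drops = 0, 0
    for _ in range(intervals):
        backlog += burst_size
        if backlog > ring_size:
            drops += backlog - ring_size  # ring overflow: packets lost
            backlog = ring_size
        backlog = max(0, backlog - drained_per_interval)
    return drops

print(dropped(10, 400, 255, 300))   # 1450 drops with a small ring (RX: 255)
print(dropped(10, 400, 4096, 300))  # 0 drops with a large ring (RX: 4096)
```

Note that a larger ring only absorbs bursts: if the average arrival rate permanently exceeds the drain rate, drops return once the ring fills up.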
Procedure
Display the packet drop statistics of the interface:
# ethtool -S enp1s0
...
rx_queue_0_drops: 97326
rx_queue_1_drops: 63783
...

Note that the output of the command depends on the network card and the driver.

High values in discard or drop counters indicate that the available buffer fills up faster than the kernel can process the packets. Increasing the ring buffers can help to avoid such loss.

Display the maximum ring buffer sizes:

# ethtool -g enp1s0
Ring parameters for enp1s0:
Pre-set maximums:
RX:       4096
RX Mini:  0
RX Jumbo: 16320
TX:       4096
Current hardware settings:
RX:       255
RX Mini:  0
RX Jumbo: 0
TX:       255

If the values in the Pre-set maximums section are higher than in the Current hardware settings section, you can change the settings in the next steps.

Identify the NetworkManager connection profile that uses the interface:

# nmcli connection show
NAME                UUID                                  TYPE      DEVICE
Example-Connection  a5eb6490-cc20-3668-81f8-0314a27f3f75  ethernet  enp1s0

Update the connection profile, and increase the ring buffers:
To increase the RX ring buffer, enter:
# nmcli connection modify Example-Connection ethtool.ring-rx 4096

To increase the TX ring buffer, enter:
# nmcli connection modify Example-Connection ethtool.ring-tx 4096
Reload the NetworkManager connection:
# nmcli connection up Example-Connection

Important: Depending on the driver your NIC uses, changing the ring buffer settings can briefly interrupt the network connection.
15.6. Increasing the ring buffer size to reduce a high packet drop rate by using the network RHEL system role
Increase the size of an Ethernet device’s ring buffers if the packet drop rate causes applications to report a loss of data, timeouts, or other issues.
Ring buffers are circular buffers where an overflow overwrites existing data. The network card assigns a transmit (TX) and receive (RX) ring buffer. Receive ring buffers are shared between the device driver and the network interface controller (NIC). Data can move from NIC to the kernel through either hardware interrupts or software interrupts, also called SoftIRQs.
The kernel uses the RX ring buffer to store incoming packets until the device driver can process them. The device driver drains the RX ring, typically by using SoftIRQs, which puts the incoming packets into a kernel data structure called an sk_buff or skb to begin its journey through the kernel and up to the application that owns the relevant socket.
The kernel uses the TX ring buffer to hold outgoing packets which should be sent to the network. These ring buffers reside at the bottom of the stack and are a crucial point at which packet drop can occur, which in turn will adversely affect network performance.
You configure ring buffer settings in the NetworkManager connection profiles. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook.
You cannot use the network RHEL system role to update only specific values in an existing connection profile. The role ensures that a connection profile exactly matches the settings in a playbook. If a connection profile with the same name already exists, the role applies the settings from the playbook and resets all other settings in the profile to their defaults. To prevent resetting values, always specify the whole configuration of the network connection profile in the playbook, including the settings that you do not want to change.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions for these nodes.
- You know the maximum ring buffer sizes that the device supports.
Procedure
Create a playbook file, for example ~/playbook.yml, with the following content:

---
- name: Configure the network
  hosts: managed-node-01.example.com
  tasks:
    - name: Ethernet connection profile with dynamic IP address setting and increased ring buffer sizes
      ansible.builtin.include_role:
        name: redhat.rhel_system_roles.network
      vars:
        network_connections:
          - name: enp1s0
            type: ethernet
            autoconnect: yes
            ip:
              dhcp4: yes
              auto6: yes
            ethtool:
              ring:
                rx: 4096
                tx: 4096
            state: up

The settings specified in the example playbook include the following:
- rx: <value> - Sets the maximum number of received ring buffer entries.
- tx: <value> - Sets the maximum number of transmitted ring buffer entries.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node.

Validate the playbook syntax:

$ ansible-playbook --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Display the current ring buffer sizes:

# ansible managed-node-01.example.com -m command -a 'ethtool -g enp1s0'
managed-node-01.example.com | CHANGED | rc=0 >>
...
Current hardware settings:
RX:       4096
RX Mini:  0
RX Jumbo: 0
TX:       4096
15.7. Configuring ethtool channel settings by using nmcli
Channel settings control the queues through which a network interface handles IRQ-based communication with its associated devices. Use the nmcli utility to configure these ethtool channel settings in NetworkManager connection profiles.
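The benefit of multiple RX channels can be sketched as follows (plain Python; the CRC-based hash, the `queue_for` helper, and the flow strings are illustrative stand-ins, not the Toeplitz hash that real NICs use for receive-side scaling): each flow hashes to one queue, so more channels spread the packet and interrupt load across more CPUs.

```python
from collections import Counter
from zlib import crc32

def queue_for(flow, channels):
    """Map a flow identifier to one of `channels` RX queues
    (deterministic toy hash, not real NIC hashing)."""
    return crc32(flow.encode()) % channels

# Made-up flow identifiers for illustration.
flows = [f"10.0.0.{i}:443" for i in range(32)]
load = Counter(queue_for(f, 4) for f in flows)
print(dict(load))  # distribution of the 32 flows over 4 combined channels
```

With a single channel, every flow would land on queue 0 and one CPU would service all interrupts; with four channels, the load spreads across four queues.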
Procedure
Display the channels associated with a network device:
# ethtool --show-channels enp1s0
Channel parameters for enp1s0:
Pre-set maximums:
RX:       4
TX:       3
Other:    10
Combined: 63
Current hardware settings:
RX:       1
TX:       1
Other:    1
Combined: 1

Update the channel settings of a network interface:

# nmcli connection modify enp1s0 ethtool.channels-rx 4 ethtool.channels-tx 3 ethtool.channels-other 9 ethtool.channels-combined 50

Reactivate the network profile:
# nmcli connection up enp1s0
Verification
Check the updated channel settings associated with the network device:
# ethtool --show-channels enp1s0
Channel parameters for enp1s0:
Pre-set maximums:
RX:       4
TX:       3
Other:    10
Combined: 63
Current hardware settings:
RX:       4
TX:       3
Other:    9
Combined: 50