Configuring InfiniBand and RDMA networks
Configuring and managing high-speed network protocols and RDMA hardware
Abstract
Providing feedback on Red Hat documentation
We appreciate your feedback on our documentation. Let us know how we can improve it.
Submitting feedback through Jira (account required)
- Log in to the Jira website.
- Click Create in the top navigation bar.
- Enter a descriptive title in the Summary field.
- Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation.
- Click Create at the bottom of the dialogue.
Chapter 1. Introduction to InfiniBand and RDMA
InfiniBand refers to two distinct things:
- The physical link-layer protocol for InfiniBand networks
- The InfiniBand Verbs API, an implementation of the remote direct memory access (RDMA) technology
RDMA provides access between the main memory of two computers without involving an operating system, cache, or storage. By using RDMA, data transfers achieve high throughput, low latency, and low CPU utilization.
In a typical IP data transfer, when an application on one machine sends data to an application on another machine, the following actions happen on the receiving end:
- The kernel must receive the data.
- The kernel must determine that the data belongs to the application.
- The kernel wakes up the application.
- The kernel waits for the application to perform a system call into the kernel.
- The application copies the data from the internal memory space of the kernel into the buffer provided by the application.
This process means that most network traffic is copied across the main memory of the system once if the host adapter uses direct memory access (DMA), or otherwise at least twice. Additionally, the computer executes some context switches to switch between the kernel and the application. These context switches can cause a higher CPU load at high traffic rates and slow down other tasks.
Unlike traditional IP communication, RDMA communication bypasses the kernel intervention in the communication process. This reduces the CPU overhead. After a packet enters a network, the RDMA protocol enables the host adapter to decide which application should receive it and where to store it in the memory space of that application. Instead of sending the packet for processing to the kernel and copying it into the memory of the user application, the host adapter directly places the packet contents in the application buffer. This process requires a separate API, the InfiniBand Verbs API, and applications need to implement the InfiniBand Verbs API to use RDMA.
Red Hat Enterprise Linux supports both the InfiniBand hardware and the InfiniBand Verbs API. Additionally, it supports the following technologies to use the InfiniBand Verbs API on non-InfiniBand hardware:
- iWARP: A network protocol that implements RDMA over IP networks
- RDMA over Converged Ethernet (RoCE), which is also known as InfiniBand over Ethernet (IBoE): A network protocol that implements RDMA over Ethernet networks
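Regardless of which of these technologies provides the RDMA transport, the kernel exposes the resulting devices through the same interfaces, so the same tools can list them. A quick check, assuming the iproute and libibverbs-utils packages installed in later chapters of this guide are present:

  # rdma link show
  # ibv_devices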
Chapter 2. Configuring the rdma service
With the Remote Direct Memory Access (RDMA) protocol, you can transfer data between RDMA-enabled systems over the network by using the main memory. The RDMA protocol provides low latency and high throughput. To manage supported network protocols and communication standards, you need to configure the rdma service. This configuration includes high-speed network protocols such as RoCE and iWARP, and software communication standards such as Soft-RoCE and Soft-iWARP. When Red Hat Enterprise Linux detects InfiniBand, iWARP, or RoCE devices and their configuration files in the /etc/rdma/modules/* directory, the udev device manager instructs systemd to start the rdma service. The configuration of modules in the /etc/rdma/modules/rdma.conf file remains persistent after a reboot. You need to restart the rdma-load-modules@rdma.service service to apply changes.
Procedure
- Install the rdma-core and opensm packages:

  # dnf install rdma-core opensm

- Enable the opensm service:

  # systemctl enable opensm

- Start the opensm service:

  # systemctl start opensm

- Edit the /etc/rdma/modules/rdma.conf file and uncomment the modules that you want to enable, for example the NFS-related modules shown in the excerpt after this procedure.
- Restart the service to make the changes effective:

  # systemctl restart rdma-load-modules@rdma.service
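The exact contents of /etc/rdma/modules/rdma.conf depend on the installed rdma-core version. The NFS-related entries referenced later in this guide look like this when uncommented:

  # NFS over RDMA client support
  xprtrdma
  # NFS over RDMA server support
  svcrdma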
Verification
- Install the libibverbs-utils and infiniband-diags packages:

  # dnf install libibverbs-utils infiniband-diags

- List the available InfiniBand devices:

  # ibv_devices

- Display the information of the mlx4_1 device:

  # ibv_devinfo -d mlx4_1

- Display the status of the mlx4_1 device:

  # ibstat mlx4_1

- The ibping utility pings an InfiniBand address and runs as a client or server, depending on the parameters.
  - Start server mode with -S, on port number -P, using the -C InfiniBand channel adapter (CA) name on the host:

    # ibping -S -C mlx4_1 -P 1

  - Start client mode and send some packets with -c, on port number -P, using the -C InfiniBand channel adapter (CA) name and the -L Local Identifier (LID) of the host:

    # ibping -c 50 -C mlx4_0 -P 1 -L 2
Chapter 3. Configuring IPoIB
By default, InfiniBand does not use the internet protocol (IP) for communication. However, IP over InfiniBand (IPoIB) provides an IP network emulation layer on top of InfiniBand remote direct memory access (RDMA) networks. This allows existing unmodified applications to transmit data over InfiniBand networks, but the performance is lower than if the application would use RDMA natively.
Mellanox devices, starting from ConnectX-4 and above, use Enhanced IPoIB mode by default on RHEL 8 and later (datagram only). Connected mode is not supported on these devices.
3.1. The IPoIB communication modes
An IPoIB device is configurable in either Datagram or Connected mode. The difference is the type of queue pair the IPoIB layer attempts to open with the machine at the other end of the communication:
- In the Datagram mode, the system opens an unreliable, disconnected queue pair.
  This mode does not support packets larger than the Maximum Transmission Unit (MTU) of the InfiniBand link layer. During transmission of data, the IPoIB layer adds a 4-byte IPoIB header on top of the IP packet. As a result, the IPoIB MTU is 4 bytes less than the InfiniBand link-layer MTU. Because 2048 is a common InfiniBand link-layer MTU, the common IPoIB device MTU in Datagram mode is 2044.
- In the Connected mode, the system opens a reliable, connected queue pair.
  This mode allows messages larger than the InfiniBand link-layer MTU, and the host adapter handles packet segmentation and reassembly. As a result, there is no size limit for the messages sent from InfiniBand adapters in Connected mode. However, IP packets are still limited by the data field and the TCP/IP header field. For this reason, the IPoIB MTU in Connected mode is 65520 bytes.
  The Connected mode has higher performance but consumes more kernel memory.
Though a system is configured to use the Connected mode, it still sends multicast traffic by using the Datagram mode, because InfiniBand switches and fabric cannot pass multicast traffic in the Connected mode. Also, when the host is not configured to use the Connected mode, the system falls back to the Datagram mode.
While running an application that sends multicast data up to the MTU on the interface, configure the interface in Datagram mode, or configure the application to cap the send size of packets so that they fit into datagram-sized packets.
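To see which mode and MTU an existing IPoIB interface currently uses, you can query the kernel directly. A quick check, assuming a device named ib0; the first command prints datagram or connected (the same sysfs file is used for verification later in this chapter), and the second shows the MTU:

  # cat /sys/class/net/ib0/mode
  # ip link show ib0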
3.2. Understanding IPoIB hardware addresses
IPoIB devices have a 20-byte hardware address that consists of the following parts:
- The first 4 bytes are flags and queue pair numbers.
- The next 8 bytes are the subnet prefix.
  The default subnet prefix is 0xfe:80:00:00:00:00:00:00. After the device connects to the subnet manager, the device changes this prefix to match the configured subnet manager.
- The last 8 bytes are the Globally Unique Identifier (GUID) of the InfiniBand port that attaches to the IPoIB device.
As the first 12 bytes can change, do not use them in the udev device manager rules.
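As an illustration, the 20-byte address shown in the ip link show example in the next section breaks down as follows; the grouping is added here for clarity and is not the output of any tool:

  80:00:02:00                  flags and queue pair number (4 bytes, can change)
  fe:80:00:00:00:00:00:00      subnet prefix (8 bytes, can change)
  00:02:c9:03:00:31:78:f2      port GUID (8 bytes, stable, usable in udev rules)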
3.3. Renaming IPoIB devices
By default, the kernel names Internet Protocol over InfiniBand (IPoIB) devices, for example, ib0, ib1, and so on. To avoid conflicts, Red Hat recommends creating a rule in the udev device manager to create persistent and meaningful names such as mlx4_ib0.
Prerequisites
- You have installed an InfiniBand device.
Procedure
- Display the hardware address of the device ib0:

  # ip link show ib0
  8: ib0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65520 qdisc pfifo_fast state UP mode DEFAULT qlen 256
      link/infiniband 80:00:02:00:fe:80:00:00:00:00:00:00:00:02:c9:03:00:31:78:f2 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff

  The last eight bytes of the address are required to create a udev rule in the next step.
- To configure a rule that renames the device with the 00:02:c9:03:00:31:78:f2 hardware address to mlx4_ib0, edit the /etc/udev/rules.d/70-persistent-ipoib.rules file and add an ACTION rule:

  ACTION=="add", SUBSYSTEM=="net", DRIVERS=="?*", ATTR{type}=="32", ATTR{address}=="?*00:02:c9:03:00:31:78:f2", NAME="mlx4_ib0"

- Reboot the host:

  # reboot
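After the reboot, you can confirm that the rule took effect. A minimal check, assuming the new name mlx4_ib0 from the example rule above:

  # ip link show mlx4_ib0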
3.4. Configuring an IPoIB connection by using nmcli
You can use the nmcli utility to create an IP over InfiniBand connection on the command line.
Prerequisites
- An InfiniBand device is installed on the server
- The corresponding kernel module is loaded
Procedure
- Create the InfiniBand connection to use the mlx4_ib0 interface in the Connected transport mode and the maximum MTU of 65520 bytes:

  # nmcli connection add type infiniband con-name mlx4_ib0 ifname mlx4_ib0 transport-mode Connected mtu 65520

- Set a P_Key, for example:

  # nmcli connection modify mlx4_ib0 infiniband.p-key 0x8002

- Configure the IPv4 settings:
  - To use DHCP, enter:

    # nmcli connection modify mlx4_ib0 ipv4.method auto

    Skip this step if ipv4.method is already set to auto (default).
  - To set a static IPv4 address, network mask, default gateway, DNS servers, and search domain, enter:

    # nmcli connection modify mlx4_ib0 ipv4.method manual ipv4.addresses 192.0.2.1/24 ipv4.gateway 192.0.2.254 ipv4.dns 192.0.2.200 ipv4.dns-search example.com

- Configure the IPv6 settings:
  - To use stateless address autoconfiguration (SLAAC), enter:

    # nmcli connection modify mlx4_ib0 ipv6.method auto

    Skip this step if ipv6.method is already set to auto (default).
  - To set a static IPv6 address, network mask, default gateway, DNS servers, and search domain, enter:

    # nmcli connection modify mlx4_ib0 ipv6.method manual ipv6.addresses 2001:db8:1::fffe/64 ipv6.gateway 2001:db8:1::fffe ipv6.dns 2001:db8:1::ffbb ipv6.dns-search example.com

- To customize other settings in the profile, use the following command:

  # nmcli connection modify mlx4_ib0 <setting> <value>

  Enclose values with spaces or semicolons in quotes.
- Activate the profile:

  # nmcli connection up mlx4_ib0
Verification
- Use the ping utility to send ICMP packets to the remote host’s InfiniBand adapter, for example:

  # ping -c5 192.0.2.2
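You can also read back the IPoIB-specific properties of the profile to confirm the transport mode and P_Key set in the procedure. A minimal check with nmcli; the displayed values depend on your configuration:

  # nmcli -f infiniband.transport-mode,infiniband.p-key connection show mlx4_ib0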
3.5. Configuring an IPoIB connection by using the network RHEL system role
You can use IP over InfiniBand (IPoIB) to send IP packets over an InfiniBand interface. To configure IPoIB, create a NetworkManager connection profile. By using Ansible and the network system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook.
You can use the network RHEL system role to configure IPoIB and, if a connection profile for the InfiniBand’s parent device does not exist, the role can create it as well.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- An InfiniBand device named mlx4_ib0 is installed in the managed nodes.
- The managed nodes use NetworkManager to configure the network.
Procedure
- Create a playbook file, for example, ~/playbook.yml, that configures the IPoIB connection profile (see the sketch after this procedure). The settings specified in the example playbook include the following:
  - type: <profile_type>: Sets the type of the profile to create. The example playbook creates two connection profiles: one for the InfiniBand connection and one for the IPoIB device.
  - parent: <parent_device>: Sets the parent device of the IPoIB connection profile.
  - p_key: <value>: Sets the InfiniBand partition key. If you set this variable, do not set interface_name on the IPoIB device.
  - transport_mode: <mode>: Sets the IPoIB connection operation mode. You can set this variable to datagram (default) or connected.

  For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node.
- Validate the playbook syntax:

  $ ansible-playbook --syntax-check ~/playbook.yml

  Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
- Run the playbook:

  $ ansible-playbook ~/playbook.yml
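The following is a minimal sketch of such a playbook for the variables described above. The IP address is an example value, and the exact nesting of the p_key and transport_mode options under an infiniband mapping is an assumption; check the variable layout in /usr/share/ansible/roles/rhel-system-roles.network/README.md on the control node.

  ---
  - name: Configure an IPoIB connection
    hosts: managed-node-01.example.com
    tasks:
      - name: IPoIB connection profile with static IP address settings
        ansible.builtin.include_role:
          name: rhel-system-roles.network
        vars:
          network_connections:
            # Connection profile for the parent InfiniBand device
            - name: mlx4_ib0
              type: infiniband
              interface_name: mlx4_ib0

            # IPoIB device for partition key 0x8002 on top of mlx4_ib0
            - name: mlx4_ib0.8002
              type: infiniband
              parent: mlx4_ib0
              infiniband:
                p_key: 0x8002
                transport_mode: datagram
              ip:
                address:
                  - 192.0.2.1/24
              state: up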
Verification
- Display the IP settings of the mlx4_ib0.8002 device:

  # ansible managed-node-01.example.com -m command -a 'ip address show mlx4_ib0.8002'

- Display the partition key (P_Key) of the mlx4_ib0.8002 device:

  # ansible managed-node-01.example.com -m command -a 'cat /sys/class/net/mlx4_ib0.8002/pkey'
  managed-node-01.example.com | CHANGED | rc=0 >>
  0x8002

- Display the mode of the mlx4_ib0.8002 device:

  # ansible managed-node-01.example.com -m command -a 'cat /sys/class/net/mlx4_ib0.8002/mode'
  managed-node-01.example.com | CHANGED | rc=0 >>
  datagram
3.6. Configuring an IPoIB connection by using nmstatectl
You can use the nmstatectl utility to configure an IP over InfiniBand (IPoIB) connection through the Nmstate API. The Nmstate API ensures that, after setting the configuration, the result matches the configuration file. If anything fails, nmstatectl automatically rolls back the changes to avoid leaving the system in an incorrect state.
Prerequisites
- An InfiniBand device is installed on the server.
- The kernel module for the InfiniBand device is loaded.
Procedure
- Create a YAML file, for example ~/create-IPoIB-profile.yml, that describes the desired state of the connection (see the sketch after this procedure). The IPoIB connection defined there has the following settings:
  - IPoIB device name: mlx4_ib0.8002
  - Base interface (parent): mlx4_ib0
  - InfiniBand partition key: 0x8002
  - Transport mode: datagram
  - Static IPv4 address: 192.0.2.1 with the /24 subnet mask
  - Static IPv6 address: 2001:db8:1::1 with the /64 subnet mask
  - IPv4 default gateway: 192.0.2.254
  - IPv6 default gateway: 2001:db8:1::fffe
- Apply the settings to the system:

  # nmstatectl apply ~/create-IPoIB-profile.yml
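The following is a minimal sketch of what ~/create-IPoIB-profile.yml can contain for the settings listed above, assuming the Nmstate InfiniBand options are named pkey, base-iface, and mode; verify the exact schema in the nmstate documentation on your system.

  ---
  interfaces:
    - name: mlx4_ib0.8002
      type: infiniband
      state: up
      infiniband:
        pkey: "0x8002"
        base-iface: mlx4_ib0
        mode: datagram
      ipv4:
        enabled: true
        address:
          - ip: 192.0.2.1
            prefix-length: 24
      ipv6:
        enabled: true
        address:
          - ip: 2001:db8:1::1
            prefix-length: 64

  routes:
    config:
      - destination: 0.0.0.0/0
        next-hop-address: 192.0.2.254
        next-hop-interface: mlx4_ib0.8002
      - destination: ::/0
        next-hop-address: 2001:db8:1::fffe
        next-hop-interface: mlx4_ib0.8002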
Verification
- Display the IP settings of the mlx4_ib0.8002 device:

  # ip address show mlx4_ib0.8002

- Display the partition key (P_Key) of the mlx4_ib0.8002 device:

  # cat /sys/class/net/mlx4_ib0.8002/pkey
  0x8002

- Display the mode of the mlx4_ib0.8002 device:

  # cat /sys/class/net/mlx4_ib0.8002/mode
  datagram
3.7. Configuring an IPoIB connection by using nm-connection-editor
The nm-connection-editor application configures and manages network connections stored by NetworkManager by using the management console.
Prerequisites
- An InfiniBand device is installed on the server.
- The corresponding kernel module is loaded.
- The nm-connection-editor package is installed.
Procedure
- Enter the command:

  $ nm-connection-editor

- Click the + button to add a new connection.
- Select the InfiniBand connection type and click Create.
- On the InfiniBand tab:
  - Change the connection name if you want to.
  - Select the transport mode.
  - Select the device.
  - Set an MTU if needed.
- On the IPv4 Settings tab, configure the IPv4 settings. For example, set a static IPv4 address, network mask, default gateway, and DNS server.
- On the IPv6 Settings tab, configure the IPv6 settings. For example, set a static IPv6 address, network mask, default gateway, and DNS server.
- Click Save to save the connection.
- Close nm-connection-editor.
- You can set a P_Key interface. As this setting is not available in nm-connection-editor, you must set this parameter on the command line.
  For example, to set 0x8002 as the P_Key interface of the mlx4_ib0 connection:

  # nmcli connection modify mlx4_ib0 infiniband.p-key 0x8002
3.8. Testing an RDMA network by using qperf after IPoIB is configured
The qperf utility measures RDMA and IP performance between two nodes in terms of bandwidth, latency, and CPU utilization.
Prerequisites
- You have installed the qperf package on both hosts.
- IPoIB is configured on both hosts.
Procedure
- Start qperf on one of the hosts without any options to act as a server:

  # qperf

- Use the following commands on the client. The commands use port 1 of the mlx4_0 host channel adapter in the client to connect to IP address 192.0.2.1 assigned to the InfiniBand adapter in the server.
  - Display the configuration of the host channel adapter:

    # qperf -v -i mlx4_0:1 192.0.2.1 conf

  - Display the Reliable Connection (RC) streaming two-way bandwidth:

    # qperf -v -i mlx4_0:1 192.0.2.1 rc_bi_bw

  - Display the RC streaming one-way bandwidth:

    # qperf -v -i mlx4_0:1 192.0.2.1 rc_bw
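Bandwidth is only one of the metrics qperf can report. If you also want a latency figure over the same connection, the rc_lat test uses the same invocation pattern as the bandwidth tests above:

  # qperf -v -i mlx4_0:1 192.0.2.1 rc_lat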
Chapter 4. Configuring RoCE
Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) is a network protocol that utilizes RDMA over an Ethernet network. RoCE requires specific hardware for its configuration; Mellanox, Broadcom, and QLogic are some of the vendors that provide it.
4.1. Overview of RoCE protocol versions
The following are the different RoCE versions:
- RoCE v1
  The RoCE version 1 protocol is an Ethernet link-layer protocol with Ethertype 0x8915 that enables communication between any two hosts in the same Ethernet broadcast domain.
- RoCE v2
  The RoCE version 2 protocol exists on top of either the UDP over IPv4 or the UDP over IPv6 protocol. For RoCE v2, the UDP destination port number is 4791.
The RDMA_CM sets up a reliable connection between a client and a server for transferring data. RDMA_CM provides an RDMA transport-neutral interface for establishing connections. The communication uses a specific RDMA device and message-based data transfers.
Using different versions like RoCE v2 on the client and RoCE v1 on the server is not supported. In such a case, configure both the server and client to communicate over RoCE v1.
4.2. Temporarily changing the default RoCE version
Using the RoCE v2 protocol on the client and RoCE v1 on the server is not supported. If the hardware in your server supports RoCE v1 only, configure your clients for RoCE v1 to communicate with the server. For example, you can configure a client that uses the mlx5_0 driver for the Mellanox ConnectX-5 InfiniBand device that only supports RoCE v1.
The changes described here will remain effective until you reboot the host.
Prerequisites
- The client uses an InfiniBand device with RoCE v2 protocol.
- The server uses an InfiniBand device that only supports RoCE v1.
Procedure
- Create the /sys/kernel/config/rdma_cm/mlx5_0/ directory:

  # mkdir /sys/kernel/config/rdma_cm/mlx5_0/

- Display the default RoCE mode:

  # cat /sys/kernel/config/rdma_cm/mlx5_0/ports/1/default_roce_mode
  RoCE v2

- Change the default RoCE mode to version 1:

  # echo "IB/RoCE v1" > /sys/kernel/config/rdma_cm/mlx5_0/ports/1/default_roce_mode
4.3. Configuring Soft-RoCE
Soft-RoCE is a software implementation of remote direct memory access (RDMA) over Ethernet, which is also called RXE. Use Soft-RoCE on hosts without RoCE host channel adapters (HCA).
Soft-RoCE is provided as a Technology Preview only. Technology Preview features are not supported with Red Hat production Service Level Agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These previews provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features.
Prerequisites
- An Ethernet adapter is installed
Procedure
- Install the iproute, libibverbs, libibverbs-utils, and infiniband-diags packages:

  # yum install iproute libibverbs libibverbs-utils infiniband-diags

- Display the RDMA links:

  # rdma link show

- Add a new rxe device named rxe0 that uses the enp1s0 interface:

  # rdma link add rxe0 type rxe netdev enp1s0
Verification
- View the state of all RDMA links:

  # rdma link show
  link rxe0/1 state ACTIVE physical_state LINK_UP netdev enp1s0

- List the available RDMA devices:

  # ibv_devices
      device                 node GUID
      ------              ----------------
      rxe0                505400fffed5e0fb

- You can use the ibstat utility to display a detailed status, for example:

  # ibstat rxe0
Chapter 5. Increasing the amount of memory that users are allowed to pin in the system
Remote direct memory access (RDMA) operations require the pinning of physical memory. As a consequence, the kernel is not allowed to write memory into the swap space. If a user pins too much memory, the system can run out of memory, and the kernel terminates processes to free up more memory. Therefore, memory pinning is a privileged operation.
If non-root users need to run large RDMA applications, it is necessary to increase the amount of memory that these users are allowed to keep pinned in primary memory at all times.
Procedure
- As the root user, create the file /etc/security/limits.conf with the following contents:

  @rdma soft memlock unlimited
  @rdma hard memlock unlimited
Verification
- Log in as a member of the rdma group after editing the /etc/security/limits.conf file.
  Note that Red Hat Enterprise Linux applies updated ulimit settings when the user logs in.
- Use the ulimit -l command to display the limit:

  $ ulimit -l
  unlimited

  If the command returns unlimited, the user can pin an unlimited amount of memory.
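If you prefer not to grant unlimited pinning, you can set a bounded value instead; limits.conf expresses memlock in kilobytes (KB). A sketch that caps pinned memory for the rdma group at 64 GiB, where the specific value is only an example:

  @rdma soft memlock 67108864
  @rdma hard memlock 67108864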
Chapter 6. Enabling NFS over RDMA on an NFS server
Remote Direct Memory Access (RDMA) is a protocol that enables a client system to directly transfer data from the memory of a storage server into its own memory. This enhances storage throughput, decreases latency in data transfer between the server and client, and reduces CPU load on both ends. If both the NFS server and clients are connected over RDMA, clients can use NFSoRDMA to mount an exported directory.
Prerequisites
- The NFS service is running and configured
- An InfiniBand or RDMA over Converged Ethernet (RoCE) device is installed on the server.
- IP over InfiniBand (IPoIB) is configured on the server, and the InfiniBand device has an IP address assigned.
Procedure
- Install the rdma-core package:

  # dnf install rdma-core

- If the package was already installed, verify that the xprtrdma and svcrdma modules in the /etc/rdma/modules/rdma.conf file are uncommented:

  # NFS over RDMA client support
  xprtrdma
  # NFS over RDMA server support
  svcrdma

- Optional: By default, NFS over RDMA uses port 20049. If you want to use a different port, set the rdma-port setting in the [nfsd] section of the /etc/nfs.conf file:

  rdma-port=<port>

- Open the NFSoRDMA port in firewalld:

  # firewall-cmd --permanent --add-port={20049/tcp,20049/udp}
  # firewall-cmd --reload

  Adjust the port numbers if you set a different port than 20049.
- Restart the nfs-server service:

  # systemctl restart nfs-server
Verification
On a client with InfiniBand hardware, perform the following steps:
- Install the following packages:

  # dnf install nfs-utils rdma-core

- Mount an exported NFS share over RDMA:

  # mount -o rdma server.example.com:/nfs/projects/ /mnt/

  If you set a port number other than the default (20049), pass port=<port_number> to the command:

  # mount -o rdma,port=<port_number> server.example.com:/nfs/projects/ /mnt/

- Verify that the share was mounted with the rdma option:

  # mount | grep "/mnt"
  server.example.com:/nfs/projects/ on /mnt type nfs (...,proto=rdma,...)
Chapter 7. Configuring Soft-iWARP
Remote Direct Memory Access (RDMA) uses several libraries and protocols over Ethernet, such as iWARP and Soft-iWARP, to improve performance and provide an aided programming interface.
Soft-iWARP is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview.
7.1. Overview of iWARP and Soft-iWARP
Remote direct memory access (RDMA) uses iWARP over Ethernet for converged and low-latency data transmission over TCP. By using standard Ethernet switches and the TCP/IP stack, iWARP routes traffic across the IP subnets to utilize the existing infrastructure efficiently. In Red Hat Enterprise Linux, multiple providers implement iWARP for their hardware network interface cards, for example, cxgb4, irdma, and qedr.
Soft-iWARP (siw) is a software-based iWARP kernel driver and user library for Linux. It is a software-based RDMA device that provides a programming interface to RDMA hardware when attached to network interface cards. It provides an easy way to test and validate the RDMA environment.
7.2. Configuring Soft-iWARP
Soft-iWARP (siw) implements the iWARP Remote direct memory access (RDMA) transport over the Linux TCP/IP network stack. It enables a system with a standard Ethernet adapter to interoperate with an iWARP adapter or with another system running the Soft-iWARP driver or a host with the hardware that supports iWARP.
The Soft-iWARP feature is provided as a Technology Preview only. Technology Preview features are not supported with Red Hat production Service Level Agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These previews provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features.
To configure Soft-iWARP, you can use this procedure in a script to run automatically when the system boots.
Prerequisites
- An Ethernet adapter is installed
Procedure
- Install the iproute, libibverbs, libibverbs-utils, and infiniband-diags packages:

  # yum install iproute libibverbs libibverbs-utils infiniband-diags

- Display the RDMA links:

  # rdma link show

- Load the siw kernel module:

  # modprobe siw

- Add a new siw device named siw0 that uses the enp0s1 interface:

  # rdma link add siw0 type siw netdev enp0s1
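As noted earlier in this section, you can run this procedure from a script at boot time. A minimal sketch of a systemd oneshot unit that loads the module and recreates the link; the unit name, the siw0 device name, and the enp0s1 interface are examples to adjust to your system:

  # /etc/systemd/system/soft-iwarp.service
  [Unit]
  Description=Create the Soft-iWARP RDMA link
  After=network-online.target
  Wants=network-online.target

  [Service]
  Type=oneshot
  ExecStart=/usr/sbin/modprobe siw
  ExecStart=/usr/sbin/rdma link add siw0 type siw netdev enp0s1
  RemainAfterExit=yes

  [Install]
  WantedBy=multi-user.target

After creating the unit file, enable it so that it runs at boot:

  # systemctl enable soft-iwarp.service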
Verification
- View the state of all RDMA links:

  # rdma link show
  link siw0/1 state ACTIVE physical_state LINK_UP netdev enp0s1

- List the available RDMA devices:

  # ibv_devices
      device                 node GUID
      ------              ----------------
      siw0                0250b6fffea19d61

- You can use the ibv_devinfo utility to display a detailed status, for example:

  # ibv_devinfo -d siw0
Chapter 8. InfiniBand subnet manager
All InfiniBand networks must have a subnet manager running for the network to function. This is true even if two machines are connected directly with no switch involved.
It is possible to have more than one subnet manager. In that case, one acts as a master and another subnet manager acts as a slave that will take over in case the master subnet manager fails.
Red Hat Enterprise Linux provides OpenSM, an implementation of an InfiniBand subnet manager. However, the features of OpenSM are limited and there is no active upstream development. Typically, embedded subnet managers in InfiniBand switches provide more features and support up-to-date InfiniBand hardware. For further details, see Installing and configuring the OpenSM InfiniBand subnet manager.