Chapter 4. Configuring RoCE

Remote Direct Memory Access (RDMA) extends Direct Memory Access (DMA) to memory on a remote host, without involving the remote host's CPU in the data transfer. RDMA over Converged Ethernet (RoCE) is a network protocol that uses RDMA over an Ethernet network. RoCE requires RDMA-capable network hardware; vendors that provide such adapters include Mellanox, Broadcom, and QLogic.

4.1. Overview of RoCE protocol versions

RoCE is a network protocol that enables remote direct memory access (RDMA) over Ethernet.

The following are the different RoCE versions:

RoCE v1
The RoCE version 1 protocol is an Ethernet link-layer protocol with ethertype 0x8915. It enables communication between any two hosts in the same Ethernet broadcast domain.
RoCE v2
The RoCE version 2 protocol runs on top of either the UDP over IPv4 or the UDP over IPv6 protocol. For RoCE v2, the UDP destination port number is 4791.
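
Because RoCE v2 traffic travels in ordinary UDP datagrams, you can observe it with standard packet-capture tools. The following is a minimal sketch, assuming the traffic passes through the enp1s0 interface and that the tcpdump utility is installed; adapters that offload RoCE in hardware might not expose the packets to a host-side capture:

    # tcpdump -i enp1s0 -n udp dst port 4791

RoCE v1 frames are not IP packets, so they are matched by their ethertype instead:

    # tcpdump -i enp1s0 -n ether proto 0x8915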

RDMA_CM sets up a reliable connection between a client and a server for transferring data. RDMA_CM provides an RDMA transport-neutral interface for establishing connections. The communication uses a specific RDMA device, and data transfers are message-based.
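
To test that an RDMA_CM connection can be established between two hosts, you can use the rping utility. This is a sketch only: it assumes that the librdmacm-utils package, which provides rping, is installed on both hosts, and 192.0.2.1 is a placeholder for the server's IP address. Start rping in server mode on the server:

    # rping -s -a 192.0.2.1 -v

On the client, connect to the server and run a small number of iterations:

    # rping -c -a 192.0.2.1 -v -C 5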

Important

Using different RoCE versions on the client and on the server, for example RoCE v2 on the client and RoCE v1 on the server, is not supported. In such a case, configure both the server and the client to communicate over RoCE v1.

4.2. Temporarily changing the default RoCE version

Using the RoCE v2 protocol on the client and RoCE v1 on the server is not supported. If the hardware in your server supports RoCE v1 only, configure your clients to use RoCE v1 to communicate with the server. For example, the following procedure configures a client that uses the mlx5_0 driver for a Mellanox ConnectX-5 InfiniBand device to use RoCE v1.
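
To check which RoCE version a particular GID table entry on the client currently uses, you can read the corresponding gid_attrs entry from sysfs. This is a sketch that assumes the device is named mlx5_0 and inspects GID index 0 on port 1; the file reports either IB/RoCE v1 or RoCE v2:

    # cat /sys/class/infiniband/mlx5_0/ports/1/gid_attrs/types/0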

Note

The changes described in this procedure remain in effect only until you reboot the host.

Prerequisites

  • The client uses an InfiniBand device with RoCE v2 protocol.
  • The server uses an InfiniBand device that only supports RoCE v1.

Procedure

  1. Create the /sys/kernel/config/rdma_cm/mlx5_0/ directory. The directory name must match the name of the RDMA device:

    # mkdir /sys/kernel/config/rdma_cm/mlx5_0/
  2. Display the default RoCE mode:

    # cat /sys/kernel/config/rdma_cm/mlx5_0/ports/1/default_roce_mode
    
    RoCE v2
  3. Change the default RoCE mode to version 1:

    # echo "IB/RoCE v1" > /sys/kernel/config/rdma_cm/mlx5_0/ports/1/default_roce_mode
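  4. Optionally, display the default RoCE mode again; it should now report version 1:

    # cat /sys/kernel/config/rdma_cm/mlx5_0/ports/1/default_roce_mode
    
    IB/RoCE v1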

4.3. Configuring Soft-RoCE

Soft-RoCE, also called RXE, is a software implementation of remote direct memory access (RDMA) over Ethernet. Use Soft-RoCE on hosts that do not have a RoCE host channel adapter (HCA).

Important

The Soft-RoCE feature is provided as a Technology Preview only. Technology Preview features are not supported with Red Hat production Service Level Agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These previews provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features.

Prerequisites

  • An Ethernet adapter is installed.

Procedure

  1. Install the iproute, libibverbs, libibverbs-utils, and infiniband-diags packages:

    # yum install iproute libibverbs libibverbs-utils infiniband-diags
  2. Display the RDMA links:

    # rdma link show
  3. Load the rdma_rxe kernel module and add a new rxe device named rxe0 that uses the enp1s0 interface:

    # rdma link add rxe0 type rxe netdev enp1s0

Verification

  1. View the state of all RDMA links:

    # rdma link show
    
    link rxe0/1 state ACTIVE physical_state LINK_UP netdev enp1s0
  2. List the available RDMA devices:

    # ibv_devices
    
        device          	   node GUID
        ------          	----------------
        rxe0            	505400fffed5e0fb
  3. Use the ibstat utility to display a detailed status:

    # ibstat rxe0
    
    CA 'rxe0'
    	CA type:
    	Number of ports: 1
    	Firmware version:
    	Hardware version:
    	Node GUID: 0x505400fffed5e0fb
    	System image GUID: 0x0000000000000000
    	Port 1:
    		State: Active
    		Physical state: LinkUp
    		Rate: 100
    		Base lid: 0
    		LMC: 0
    		SM lid: 0
    		Capability mask: 0x00890000
    		Port GUID: 0x505400fffed5e0fb
    		Link layer: Ethernet
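  4. Optionally, test a data transfer over Soft-RoCE by using the ibv_rc_pingpong utility from the libibverbs-utils package. The following commands are a sketch: the -g 0 option selects GID index 0, and 192.0.2.1 is a placeholder for the IP address of the server host. On one host with an rxe0 device, start the server side:

    # ibv_rc_pingpong -d rxe0 -g 0

    On the other host, connect as a client:

    # ibv_rc_pingpong -d rxe0 -g 0 192.0.2.1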