Chapter 10. Configuring NVMe over fabrics using NVMe/RDMA


In a Non-volatile Memory Express™ (NVMe™) over RDMA (NVMe™/RDMA) setup, you configure an NVMe controller, which exports storage over the RDMA fabric, and an NVMe initiator (host), which connects to that storage.

10.1. Setting up an NVMe/RDMA controller using configfs

You can configure a Non-volatile Memory Express™ (NVMe™) over RDMA (NVMe™/RDMA) controller by using configfs.
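
The nvmet configfs tree groups the target configuration into hosts, ports, and subsystems directories under /sys/kernel/config/nvmet. As a quick orientation, and assuming only that the nvmet-rdma module is available on your system, you can list the top level of the tree after loading the module:

    # modprobe nvmet-rdma

    # ls /sys/kernel/config/nvmet/
    hosts  ports  subsystems

The procedure below populates this tree step by step. A consolidated script sketch of the same steps follows the verification at the end of this section.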

Prerequisites

  • Verify that you have a block device to assign to the nvmet subsystem.
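
    If you do not have a spare disk to export, a file-backed loop device is sufficient for testing. The file path and size below are arbitrary example values:

    # truncate -s 1G /var/tmp/nvmet-backing.img

    # losetup --find --show /var/tmp/nvmet-backing.img
    /dev/loop0

    You can then use the loop device that losetup prints, for example /dev/loop0, instead of /dev/nvme0n1 later in this procedure.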

Procedure

  1. Load the nvmet-rdma module and create an nvmet subsystem:

    # modprobe nvmet-rdma
    
    # mkdir /sys/kernel/config/nvmet/subsystems/testnqn
    
    # cd /sys/kernel/config/nvmet/subsystems/testnqn

    Replace testnqn with the subsystem name.

  2. Allow any host to connect to this controller:

    # echo 1 > attr_allow_any_host
  3. Configure a namespace:

    # mkdir namespaces/10
    
    # cd namespaces/10

    Replace 10 with the namespace number.

  4. Set a path to the NVMe device:

    # echo -n /dev/nvme0n1 > device_path
  5. Enable the namespace:

    # echo 1 > enable
  6. Create a directory for the NVMe port:

    # mkdir /sys/kernel/config/nvmet/ports/1
    
    # cd /sys/kernel/config/nvmet/ports/1
  7. Display the IP address of the RDMA interface, in this example mlx5_ib0:

    # ip addr show mlx5_ib0
    
    8: mlx5_ib0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 4092 qdisc mq state UP group default qlen 256
        link/infiniband 00:00:06:2f:fe:80:00:00:00:00:00:00:e4:1d:2d:03:00:e7:0f:f6 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
        inet 172.31.0.202/24 brd 172.31.0.255 scope global noprefixroute mlx5_ib0
           valid_lft forever preferred_lft forever
        inet6 fe80::e61d:2d03:e7:ff6/64 scope link noprefixroute
           valid_lft forever preferred_lft forever
  8. Set the IP address from the previous step as the transport address of the port:

    # echo -n 172.31.0.202 > addr_traddr
  9. Set RDMA as the transport type and 4420 as the transport service ID (port number):

    # echo rdma > addr_trtype
    
    # echo 4420 > addr_trsvcid
  10. Set the address family for the port:

    # echo ipv4 > addr_adrfam
  11. Create a soft link to expose the subsystem on the port:

    # ln -s /sys/kernel/config/nvmet/subsystems/testnqn /sys/kernel/config/nvmet/ports/1/subsystems/testnqn

Verification

  • Verify that the NVMe controller is listening on the given port and ready for connection requests:

    # dmesg | grep "enabling port"
    [ 1091.413648] nvmet_rdma: enabling port 1 (172.31.0.202:4420)
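
The configfs steps in this procedure can also be collected into a single script. The following sketch only repeats the example values used above (testnqn, /dev/nvme0n1, namespace 10, 172.31.0.202, port 1); adjust them for your environment and run the script as root:

    #!/usr/bin/env bash
    # Sketch: NVMe/RDMA controller setup through configfs, mirroring the
    # steps of this procedure. All names and addresses are example values.
    set -euo pipefail

    SUBSYS=testnqn
    DEVICE=/dev/nvme0n1
    TRADDR=172.31.0.202
    PORT=1

    modprobe nvmet-rdma

    # Subsystem, host access, and namespace
    mkdir /sys/kernel/config/nvmet/subsystems/${SUBSYS}
    echo 1 > /sys/kernel/config/nvmet/subsystems/${SUBSYS}/attr_allow_any_host
    mkdir /sys/kernel/config/nvmet/subsystems/${SUBSYS}/namespaces/10
    echo -n ${DEVICE} > /sys/kernel/config/nvmet/subsystems/${SUBSYS}/namespaces/10/device_path
    echo 1 > /sys/kernel/config/nvmet/subsystems/${SUBSYS}/namespaces/10/enable

    # RDMA port
    mkdir /sys/kernel/config/nvmet/ports/${PORT}
    echo -n ${TRADDR} > /sys/kernel/config/nvmet/ports/${PORT}/addr_traddr
    echo rdma > /sys/kernel/config/nvmet/ports/${PORT}/addr_trtype
    echo 4420 > /sys/kernel/config/nvmet/ports/${PORT}/addr_trsvcid
    echo ipv4 > /sys/kernel/config/nvmet/ports/${PORT}/addr_adrfam

    # Expose the subsystem on the port (this activates the listener)
    ln -s /sys/kernel/config/nvmet/subsystems/${SUBSYS} \
          /sys/kernel/config/nvmet/ports/${PORT}/subsystems/${SUBSYS}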

10.2. Setting up an NVMe/RDMA controller using nvmetcli

You can configure a Non-volatile Memory Express™ (NVMe™) over RDMA (NVMe™/RDMA) controller by using the nvmetcli utility. The nvmetcli utility provides both a command-line interface and an interactive shell.

Prerequisites

  • Verify that you have a block device to assign to the nvmet subsystem.
  • Run the following nvmetcli operations as the root user.

Procedure

  1. Install the nvmetcli package:

    # dnf install nvmetcli
  2. Download the rdma.json file:

    # wget http://git.infradead.org/users/hch/nvmetcli.git/blob_plain/0a6b088db2dc2e5de11e6f23f1e890e4b54fee64:/rdma.json
  3. Edit the rdma.json file and change the traddr value to the IP address of your RDMA interface, 172.31.0.202 in this example.
  4. Set up the controller by loading the NVMe controller configuration file:

    # nvmetcli restore rdma.json
Note

If you do not specify an NVMe controller configuration file name, nvmetcli uses the /etc/nvmet/config.json file.

Verification

  • Verify that the NVMe controller is listening on the given port and ready for connection requests:

    # dmesg | tail -1
    [ 4797.132647] nvmet_rdma: enabling port 2 (172.31.0.202:4420)
  • Optional: Clear the current NVMe controller configuration:

    # nvmetcli clear
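
To keep the nvmetcli configuration across reboots, save the currently loaded configuration to /etc/nvmet/config.json, the default file that nvmetcli restore reads:

    # nvmetcli save

If the nvmetcli package on your system ships an nvmet systemd service that restores this file at boot (an assumption; check with systemctl list-unit-files), you can enable that service so the controller configuration comes back automatically:

    # systemctl enable nvmet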

10.3. Configuring an NVMe/RDMA host

You can configure a Non-volatile Memory Express™ (NVMe™) over RDMA (NVMe™/RDMA) host by using the NVMe management command-line interface (nvme-cli) tool.
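
Before you connect, you can confirm that the host sees an RDMA-capable device at all. The ibv_devices utility lists the available RDMA devices; it is typically provided by the libibverbs-utils package (the package name is an assumption for your installation, so verify it with dnf if needed):

    # ibv_devices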

Procedure

  1. Install the nvme-cli tool:

    # dnf install nvme-cli
  2. Load the nvme-rdma module if it is not loaded:

    # modprobe nvme-rdma
  3. Discover available subsystems on the NVMe controller:

    # nvme discover -t rdma -a 172.31.0.202 -s 4420
    
    Discovery Log Number of Records 2, Generation counter 2
    =====Discovery Log Entry 0======
    trtype: rdma
    adrfam: ipv4
    subtype: current discovery subsystem
    treq: not specified, sq flow control disable supported
    portid: 2
    trsvcid: 4420
    subnqn: nqn.2014-08.org.nvmexpress.discovery
    traddr: 172.31.0.202
    eflags: none
    rdma_prtype: not specified
    rdma_qptype: connected
    rdma_cms: rdma-cm
    rdma_pkey: 0000
    =====Discovery Log Entry 1======
    trtype: rdma
    adrfam: ipv4
    subtype: nvme subsystem
    treq: not specified, sq flow control disable supported
    portid: 2
    trsvcid: 4420
    subnqn: testnqn
    traddr: 172.31.0.202
    eflags: none
    rdma_prtype: not specified
    rdma_qptype: connected
    rdma_cms: rdma-cm
    rdma_pkey: 0000
  4. Connect to the discovered subsystems:

    # nvme connect -t rdma -a 172.31.0.202 -s 4420 -n testnqn
    connecting to device: nvme0
    
    # lsblk
    NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    sda                            8:0    0 465.8G  0 disk
    ├─sda1                         8:1    0     1G  0 part /boot
    └─sda2                         8:2    0 464.8G  0 part
      ├─rhel_rdma--virt--03-root 253:0    0    50G  0 lvm  /
      ├─rhel_rdma--virt--03-swap 253:1    0     4G  0 lvm  [SWAP]
      └─rhel_rdma--virt--03-home 253:2    0 410.8G  0 lvm  /home
    nvme0n1
    
    # cat /sys/class/nvme/nvme0/transport
    rdma

    Replace testnqn with the NVMe subsystem name.

    Replace 172.31.0.202 with the controller IP address.

    Replace 4420 with the port number.
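
    In addition to the verification steps below, the nvme list-subsys command prints the transport type and address for each connected subsystem, which is a quick way to confirm that the new device uses RDMA:

    # nvme list-subsys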

Verification

  • List the NVMe devices that are currently connected:

    # nvme list
  • Optional: Disconnect from the controller:

    # nvme disconnect -n testnqn
    NQN:testnqn disconnected 1 controller(s)
    
    # lsblk
    NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    sda                            8:0    0 465.8G  0 disk
    ├─sda1                         8:1    0     1G  0 part /boot
    └─sda2                         8:2    0 464.8G  0 part
      ├─rhel_rdma--virt--03-root 253:0    0    50G  0 lvm  /
      ├─rhel_rdma--virt--03-swap 253:1    0     4G  0 lvm  [SWAP]
      └─rhel_rdma--virt--03-home 253:2    0 410.8G  0 lvm  /home
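
To re-establish the connection automatically, nvme-cli can read controller addresses from the /etc/nvme/discovery.conf file and connect to every subsystem it discovers there with a single command. The entry below repeats this chapter's example values:

    # echo "--transport=rdma --traddr=172.31.0.202 --trsvcid=4420" >> /etc/nvme/discovery.conf

    # nvme connect-all

Some nvme-cli packages also ship an autoconnect systemd unit that runs nvme connect-all at boot; the unit name varies, so check the installed units with systemctl list-unit-files 'nvmf*' before enabling one.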