Chapter 10. Configuring NVMe over fabrics using NVMe/RDMA


Configure an NVMe over RDMA (NVMe/RDMA) setup, including the NVMe controller and the initiator. You can set up RDMA controllers by using configfs or nvmetcli, and configure RDMA hosts for high-speed storage access.

10.1. Setting up an NVMe/RDMA controller using configfs

You can configure a Non-volatile Memory Express™ (NVMe™) over RDMA (NVMe™/RDMA) controller by using configfs. For more information, see the nvme(1) man page on your system.

Prerequisites

  • Verify that you have a block device to assign to the nvmet subsystem.

Procedure

  1. Create the nvmet-rdma subsystem:

    # modprobe nvmet-rdma
    # mkdir /sys/kernel/config/nvmet/subsystems/testnqn
    # cd /sys/kernel/config/nvmet/subsystems/testnqn

    Replace testnqn with the subsystem name.

  2. Allow any host to connect to this controller (to restrict access to specific host NQNs instead, see the sketch after this procedure):

    # echo 1 > attr_allow_any_host
  3. Configure a namespace:

    # mkdir namespaces/10
    # cd namespaces/10

    Replace 10 with the namespace number.

  4. Set a path to the NVMe device:

    # echo -n /dev/nvme0n1 > device_path
  5. Enable the namespace:

    # echo 1 > enable
  6. Create a directory for the NVMe port:

    # mkdir /sys/kernel/config/nvmet/ports/1
    # cd /sys/kernel/config/nvmet/ports/1
  7. Display the IP address of the RDMA interface (mlx5_ib0 in this example):

    # ip addr show mlx5_ib0
    8: mlx5_ib0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 4092 qdisc mq state UP group default qlen 256
        link/infiniband 00:00:06:2f:fe:80:00:00:00:00:00:00:e4:1d:2d:03:00:e7:0f:f6 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
        inet 172.31.0.202/24 brd 172.31.0.255 scope global noprefixroute mlx5_ib0
           valid_lft forever preferred_lft forever
        inet6 fe80::e61d:2d03:e7:ff6/64 scope link noprefixroute
           valid_lft forever preferred_lft forever
  8. Set the transport address for the controller:

    # echo -n 172.31.0.202 > addr_traddr
  9. Set RDMA as the transport type and 4420 as the transport service ID (port number):

    # echo rdma > addr_trtype
    # echo 4420 > addr_trsvcid
  10. Set the address family for the port:

    # echo ipv4 > addr_adrfam
  11. Create a soft link:

    # ln -s /sys/kernel/config/nvmet/subsystems/testnqn /sys/kernel/config/nvmet/ports/1/subsystems/testnqn
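
To restrict access to specific hosts instead of allowing any host (step 2), the nvmet configfs tree provides a per-subsystem allow list. The following is a minimal sketch, assuming nqn.2014-08.org.example:host1 is the host NQN of the initiator; this value is hypothetical and must match the NQN that the host actually uses, which nvme-cli typically stores in /etc/nvme/hostnqn:

    # echo 0 > /sys/kernel/config/nvmet/subsystems/testnqn/attr_allow_any_host
    # mkdir /sys/kernel/config/nvmet/hosts/nqn.2014-08.org.example:host1
    # ln -s /sys/kernel/config/nvmet/hosts/nqn.2014-08.org.example:host1 /sys/kernel/config/nvmet/subsystems/testnqn/allowed_hosts/nqn.2014-08.org.example:host1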

Verification

  • Verify that the NVMe controller is listening on the given port and ready for connection requests:

    # dmesg | grep "enabling port"
    [ 1091.413648] nvmet_rdma: enabling port 1 (172.31.0.202:4420)

10.2. Setting up an NVMe/RDMA controller using nvmetcli

You can configure the Non-volatile Memory Express™ (NVMe™) over RDMA (NVMe™/RDMA) controller by using the nvmetcli utility. The nvmetcli utility provides both a command-line mode and an interactive shell. For more information, see the nvmetcli and nvme(1) man pages on your system.
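
For example, running nvmetcli without arguments starts the interactive shell, in which ls prints the current configuration tree and saveconfig writes it to a JSON file. This is only a brief sketch; the exact prompts and commands can vary between nvmetcli versions:

    # nvmetcli
    /> ls
    /> saveconfig rdma.json
    /> exit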

Prerequisites

  • Verify that you have a block device to assign to the nvmet subsystem.
  • Run the following nvmetcli operations as the root user.

Procedure

  1. Install the nvmetcli package:

    # dnf install nvmetcli
  2. Download the rdma.json file:

    # wget http://git.infradead.org/users/hch/nvmetcli.git/blob_plain/0a6b088db2dc2e5de11e6f23f1e890e4b54fee64:/rdma.json
  3. Edit the rdma.json file and change the traddr value to 172.31.0.202. A sketch of the edited file follows this procedure.
  4. Set up the controller by loading the NVMe controller configuration file:

    # nvmetcli restore rdma.json
    Note

    If the NVMe controller configuration file name is not specified, nvmetcli uses the /etc/nvmet/config.json file.
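
The edited file describes the RDMA port and the testnqn subsystem. The exact schema is defined by nvmetcli, so treat the following abridged listing only as a rough sketch of what the edited rdma.json might look like with the values used in this chapter:

    # cat rdma.json
    {
      "ports": [
        {
          "addr": {
            "adrfam": "ipv4",
            "traddr": "172.31.0.202",
            "trsvcid": "4420",
            "trtype": "rdma"
          },
          "portid": 2,
          "subsystems": [ "testnqn" ]
        }
      ],
      "subsystems": [
        {
          "attr": { "allow_any_host": "1" },
          "namespaces": [
            {
              "device": { "path": "/dev/nvme0n1" },
              "enable": 1,
              "nsid": 1
            }
          ],
          "nqn": "testnqn"
        }
      ]
    }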

Verification

  • Verify that the NVMe controller is listening on the given port and ready for connection requests:

    # dmesg | tail -1
    [ 4797.132647] nvmet_rdma: enabling port 2 (172.31.0.202:4420)
  • Optional: Clear the current NVMe controller configuration:

    # nvmetcli clear
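
The configuration created with configfs or nvmetcli restore exists only in the running kernel and is lost on reboot. The following is a hedged sketch of one way to persist it, assuming your nvmetcli version provides a save subcommand and your nvmetcli package ships an nvmet systemd service that restores /etc/nvmet/config.json at boot:

    # nvmetcli save /etc/nvmet/config.json
    # systemctl enable --now nvmet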

10.3. Configuring an NVMe/RDMA host

You can configure a Non-volatile Memory Express™ (NVMe™) over RDMA (NVMe™/RDMA) host by using the NVMe management command-line interface (nvme-cli) tool. For more information, see the nvme(1) man page on your system.

Procedure

  1. Install the nvme-cli tool:

    # dnf install nvme-cli
  2. Load the nvme-rdma module if it is not loaded:

    # modprobe nvme-rdma
  3. Discover available subsystems on the NVMe controller:

    # nvme discover -t rdma -a 172.31.0.202 -s 4420
    Discovery Log Number of Records 2, Generation counter 2
    =====Discovery Log Entry 0======
    trtype: rdma
    adrfam: ipv4
    subtype: current discovery subsystem
    treq: not specified, sq flow control disable supported
    portid: 2
    trsvcid: 4420
    subnqn: nqn.2014-08.org.nvmexpress.discovery
    traddr: 172.31.0.202
    eflags: none
    rdma_prtype: not specified
    rdma_qptype: connected
    rdma_cms: rdma-cm
    rdma_pkey: 0000
    =====Discovery Log Entry 1======
    trtype: rdma
    adrfam: ipv4
    subtype: nvme subsystem
    treq: not specified, sq flow control disable supported
    portid: 2
    trsvcid: 4420
    subnqn: testnqn
    traddr: 172.31.0.202
    eflags: none
    rdma_prtype: not specified
    rdma_qptype: connected
    rdma_cms: rdma-cm
    rdma_pkey: 0000
  4. Connect to the discovered subsystems (a sketch for reconnecting in one step, for example after a reboot, follows this procedure):

    # nvme connect -t rdma -a 172.31.0.202 -s 4420 -n testnqn
    connecting to device: nvme0
    # lsblk
    NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    sda                            8:0    0 465.8G  0 disk
    ├─sda1                         8:1    0     1G  0 part /boot
    └─sda2                         8:2    0 464.8G  0 part
      ├─rhel_rdma--virt--03-root 253:0    0    50G  0 lvm  /
      ├─rhel_rdma--virt--03-swap 253:1    0     4G  0 lvm  [SWAP]
      └─rhel_rdma--virt--03-home 253:2    0 410.8G  0 lvm  /home
    nvme0n1
    # cat /sys/class/nvme/nvme0/transport
    rdma

    Replace testnqn with the NVMe subsystem name.

    Replace 172.31.0.202 with the controller IP address.

    Replace 4420 with the port number.
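
The nvme connect command establishes the connection only for the running system. As a hedged sketch, assuming your nvme-cli version reads /etc/nvme/discovery.conf when nvme connect-all is invoked without transport arguments, you can record the controller details in that file and reconnect to all discovered subsystems in one step:

    # cat /etc/nvme/discovery.conf
    --transport=rdma --traddr=172.31.0.202 --trsvcid=4420
    # nvme connect-all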

Verification

  • List the NVMe devices that are currently connected:

    # nvme list
  • Optional: Disconnect from the controller:

    # nvme disconnect -n testnqn
    NQN:testnqn disconnected 1 controller(s)
    
    # lsblk
    NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    sda                            8:0    0 465.8G  0 disk
    ├─sda1                         8:1    0     1G  0 part /boot
    └─sda2                         8:2    0 464.8G  0 part
      ├─rhel_rdma--virt--03-root 253:0    0    50G  0 lvm  /
      ├─rhel_rdma--virt--03-swap 253:1    0     4G  0 lvm  [SWAP]
      └─rhel_rdma--virt--03-home 253:2    0 410.8G  0 lvm  /home
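
While connected, the namespace appears as /dev/nvme0n1 and behaves like any local block device. The following is a minimal usage sketch; it assumes the namespace holds no data that you need, because mkfs destroys anything already on the device:

    # mkfs.xfs /dev/nvme0n1
    # mkdir -p /mnt/nvme
    # mount /dev/nvme0n1 /mnt/nvme
    # df -h /mnt/nvme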