Chapter 11. Configuring NVMe over fabrics using NVMe/RDMA

In a Non-volatile Memory Express™ (NVMe™) over RDMA (NVMe™/RDMA) setup, you configure an NVMe controller and an NVMe initiator.

As a system administrator, complete the tasks in the following sections to deploy the NVMe/RDMA setup.

11.1. Overview of NVMe over fabric devices

Non-volatile Memory Express™ (NVMe™) is an interface that allows host software to communicate with solid-state drives.

Use the following types of fabric transport to configure NVMe over fabric devices:

NVMe over Remote Direct Memory Access (NVMe/RDMA)
For information about how to configure NVMe™/RDMA, see Configuring NVMe over fabrics using NVMe/RDMA.
NVMe over Fibre Channel (NVMe/FC)
For information about how to configure NVMe™/FC, see Configuring NVMe over fabrics using NVMe/FC.
NVMe over TCP (NVMe/TCP)
For information about how to configure NVMe™/TCP, see Configuring NVMe over fabrics using NVMe/TCP.

When using NVMe over fabrics, the solid-state drive does not have to be local to your system; it can be accessed remotely through an NVMe over fabrics device.

11.2. Setting up an NVMe/RDMA controller using configfs

Use this procedure to configure a Non-volatile Memory Express™ (NVMe™) over RDMA (NVMe™/RDMA) controller using configfs.

Prerequisites

  • Verify that you have a block device to assign to the nvmet subsystem.

Procedure

  1. Create the nvmet-rdma subsystem:

    # modprobe nvmet-rdma
    
    # mkdir /sys/kernel/config/nvmet/subsystems/testnqn
    
    # cd /sys/kernel/config/nvmet/subsystems/testnqn

    Replace testnqn with the subsystem name.

  2. Allow any host to connect to this controller:

    # echo 1 > attr_allow_any_host
  3. Configure a namespace:

    # mkdir namespaces/10
    
    # cd namespaces/10

Replace 10 with the namespace number.

  4. Set a path to the NVMe device:

    # echo -n /dev/nvme0n1 > device_path
  5. Enable the namespace:

    # echo 1 > enable
  6. Create a directory for an NVMe port:

    # mkdir /sys/kernel/config/nvmet/ports/1
    
    # cd /sys/kernel/config/nvmet/ports/1
  7. Display the IP address of mlx5_ib0:

    # ip addr show mlx5_ib0
    
    8: mlx5_ib0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 4092 qdisc mq state UP group default qlen 256
        link/infiniband 00:00:06:2f:fe:80:00:00:00:00:00:00:e4:1d:2d:03:00:e7:0f:f6 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
        inet 172.31.0.202/24 brd 172.31.0.255 scope global noprefixroute mlx5_ib0
           valid_lft forever preferred_lft forever
        inet6 fe80::e61d:2d03:e7:ff6/64 scope link noprefixroute
           valid_lft forever preferred_lft forever
  8. Set the transport address for the controller:

    # echo -n 172.31.0.202 > addr_traddr
  9. Set RDMA as the transport type and 4420 as the transport service ID (the port on which the controller listens):

    # echo rdma > addr_trtype
    
    # echo 4420 > addr_trsvcid
  10. Set the address family for the port:

    # echo ipv4 > addr_adrfam
  11. Create a soft link:

    # ln -s /sys/kernel/config/nvmet/subsystems/testnqn /sys/kernel/config/nvmet/ports/1/subsystems/testnqn
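
For repeatable setups, you can collect the preceding steps into a short shell script. The following is a minimal sketch, not a supported tool, that assumes the same example values as this procedure (testnqn, namespace 10, /dev/nvme0n1, port 1, and address 172.31.0.202); adjust them for your environment:

    #!/bin/sh
    # Sketch: configure an NVMe/RDMA controller through configfs.
    set -e
    SUBSYS=testnqn
    NSID=10
    DEVICE=/dev/nvme0n1
    PORT=1
    TRADDR=172.31.0.202

    modprobe nvmet-rdma

    # Create the subsystem and allow any host to connect to it.
    mkdir /sys/kernel/config/nvmet/subsystems/$SUBSYS
    echo 1 > /sys/kernel/config/nvmet/subsystems/$SUBSYS/attr_allow_any_host

    # Create a namespace, back it with the block device, and enable it.
    mkdir /sys/kernel/config/nvmet/subsystems/$SUBSYS/namespaces/$NSID
    echo -n $DEVICE > /sys/kernel/config/nvmet/subsystems/$SUBSYS/namespaces/$NSID/device_path
    echo 1 > /sys/kernel/config/nvmet/subsystems/$SUBSYS/namespaces/$NSID/enable

    # Create the port and set its address, transport type, service ID, and address family.
    mkdir /sys/kernel/config/nvmet/ports/$PORT
    echo -n $TRADDR > /sys/kernel/config/nvmet/ports/$PORT/addr_traddr
    echo rdma > /sys/kernel/config/nvmet/ports/$PORT/addr_trtype
    echo 4420 > /sys/kernel/config/nvmet/ports/$PORT/addr_trsvcid
    echo ipv4 > /sys/kernel/config/nvmet/ports/$PORT/addr_adrfam

    # Expose the subsystem on the port; this enables the listener.
    ln -s /sys/kernel/config/nvmet/subsystems/$SUBSYS /sys/kernel/config/nvmet/ports/$PORT/subsystems/$SUBSYS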

Verification

  • Verify that the NVMe controller is listening on the given port and ready for connection requests:

    # dmesg | grep "enabling port"
    [ 1091.413648] nvmet_rdma: enabling port 1 (172.31.0.202:4420)

Additional resources

  • nvme(1) man page

11.3. Setting up the NVMe/RDMA controller using nvmetcli

Use the nvmetcli utility to edit, view, and start a Non-volatile Memory Express™ (NVMe™) controller. The nvmetcli utility provides both a command-line mode and an interactive shell. Use this procedure to configure the NVMe™/RDMA controller by using nvmetcli.
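
As an illustration of the interactive mode, running nvmetcli without arguments opens a shell in which you can browse and save the configuration. This is a minimal sketch; the available shell commands can vary between nvmetcli versions:

    # nvmetcli
    /> ls
    /> saveconfig rdma.json
    /> exit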

Prerequisites

  • Verify that you have a block device to assign to the nvmet subsystem.
  • Execute the following nvmetcli operations as a root user.

Procedure

  1. Install the nvmetcli package:

    # dnf install nvmetcli
  2. Download the rdma.json file:

    # wget http://git.infradead.org/users/hch/nvmetcli.git/blob_plain/0a6b088db2dc2e5de11e6f23f1e890e4b54fee64:/rdma.json
  3. Edit the rdma.json file and change the traddr value to 172.31.0.202.
  4. Set up the controller by loading the NVMe controller configuration file:

    # nvmetcli restore rdma.json
Note

If the NVMe controller configuration file name is not specified, nvmetcli uses the /etc/nvmet/config.json file.
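
The downloaded file uses nvmetcli's JSON save format. The following is a minimal sketch of its structure after the edit, using the example values from this chapter; the exact fields can vary between nvmetcli versions:

    # cat rdma.json
    {
      "hosts": [],
      "ports": [
        {
          "addr": {
            "adrfam": "ipv4",
            "traddr": "172.31.0.202",
            "trsvcid": "4420",
            "trtype": "rdma"
          },
          "portid": 2,
          "referrals": [],
          "subsystems": [ "testnqn" ]
        }
      ],
      "subsystems": [
        {
          "attr": { "allow_any_host": "1" },
          "namespaces": [
            {
              "device": { "nsid": 10, "path": "/dev/nvme0n1" },
              "enable": 1,
              "nsid": 10
            }
          ],
          "nqn": "testnqn"
        }
      ]
    }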

Verification

  • Verify that the NVMe controller is listening on the given port and ready for connection requests:

    # dmesg | tail -1
    [ 4797.132647] nvmet_rdma: enabling port 2 (172.31.0.202:4420)
  • Optional: Clear the current NVMe controller:

    # nvmetcli clear
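
You can also print the currently loaded controller configuration as a tree, which is useful for a quick review before clearing it. This assumes the ls subcommand is available in your nvmetcli version:

    # nvmetcli ls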

Additional resources

  • nvmetcli and nvme(1) man pages

11.4. Configuring an NVMe/RDMA host

Use this procedure to configure a Non-volatile Memory Express™ (NVMe™) over RDMA (NVMe™/RDMA) host by using the NVMe management command-line interface (nvme-cli) tool.

Procedure

  1. Install the nvme-cli tool:

    # dnf install nvme-cli
  2. Load the nvme-rdma module if it is not loaded:

    # modprobe nvme-rdma
  3. Discover available subsystems on the NVMe controller:

    # nvme discover -t rdma -a 172.31.0.202 -s 4420
    
    Discovery Log Number of Records 1, Generation counter 2
    =====Discovery Log Entry 0======
    trtype:  rdma
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified, sq flow control disable supported
    portid:  1
    trsvcid: 4420
    subnqn:  testnqn
    traddr:  172.31.0.202
    rdma_prtype: not specified
    rdma_qptype: connected
    rdma_cms:    rdma-cm
    rdma_pkey: 0x0000
  4. Connect to the discovered subsystems:

    # nvme connect -t rdma -n testnqn -a 172.31.0.202 -s 4420
    
    # lsblk
    NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    sda                            8:0    0 465.8G  0 disk
    ├─sda1                         8:1    0     1G  0 part /boot
    └─sda2                         8:2    0 464.8G  0 part
      ├─rhel_rdma--virt--03-root 253:0    0    50G  0 lvm  /
      ├─rhel_rdma--virt--03-swap 253:1    0     4G  0 lvm  [SWAP]
      └─rhel_rdma--virt--03-home 253:2    0 410.8G  0 lvm  /home
    nvme0n1
    
    # cat /sys/class/nvme/nvme0/transport
    rdma

    Replace testnqn with the NVMe subsystem name.

    Replace 172.31.0.202 with the controller IP address.

    Replace 4420 with the port number.
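
Connections created with nvme connect do not persist across reboots. As a sketch of one common approach, you can record the controller parameters in the /etc/nvme/discovery.conf file, which the nvme connect-all command reads when it is run without transport arguments, assuming the example address above:

    # cat /etc/nvme/discovery.conf
    -t rdma -a 172.31.0.202 -s 4420

    # nvme connect-all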

Verification

  • List the NVMe devices that are currently connected:

    # nvme list
  • Optional: Disconnect from the controller:

    # nvme disconnect -n testnqn
    NQN:testnqn disconnected 1 controller(s)
    
    # lsblk
    NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    sda                            8:0    0 465.8G  0 disk
    ├─sda1                         8:1    0     1G  0 part /boot
    └─sda2                         8:2    0 464.8G  0 part
      ├─rhel_rdma--virt--03-root 253:0    0    50G  0 lvm  /
      ├─rhel_rdma--virt--03-swap 253:1    0     4G  0 lvm  [SWAP]
      └─rhel_rdma--virt--03-home 253:2    0 410.8G  0 lvm  /home

Additional resources

  • nvme(1) man page