Chapter 10. Configuring NVMe over fabrics using NVMe/RDMA
In a Non-volatile Memory Express™ (NVMe™) over RDMA (NVMe™/RDMA) setup, you configure an NVMe controller and an NVMe initiator.
10.1. Setting up an NVMe/RDMA controller using configfs
You can configure a Non-volatile Memory Express™ (NVMe™) over RDMA (NVMe™/RDMA) controller by using configfs.
Prerequisites
- Verify that you have a block device to assign to the nvmet subsystem.
Procedure
Create the nvmet-rdma subsystem:

# modprobe nvmet-rdma
# mkdir /sys/kernel/config/nvmet/subsystems/testnqn
# cd /sys/kernel/config/nvmet/subsystems/testnqn

Replace testnqn with the subsystem name.
Allow any host to connect to this controller:
# echo 1 > attr_allow_any_host
Configure a namespace:
# mkdir namespaces/10
# cd namespaces/10
Replace 10 with the namespace number.
Set a path to the NVMe device:
# echo -n /dev/nvme0n1 > device_path
Enable the namespace:
# echo 1 > enable
Create a directory with an NVMe port:
# mkdir /sys/kernel/config/nvmet/ports/1
# cd /sys/kernel/config/nvmet/ports/1
Display the IP address of mlx5_ib0:
# ip addr show mlx5_ib0
8: mlx5_ib0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 4092 qdisc mq state UP group default qlen 256
    link/infiniband 00:00:06:2f:fe:80:00:00:00:00:00:00:e4:1d:2d:03:00:e7:0f:f6 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
    inet 172.31.0.202/24 brd 172.31.0.255 scope global noprefixroute mlx5_ib0
       valid_lft forever preferred_lft forever
    inet6 fe80::e61d:2d03:e7:ff6/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
Set the transport address for the controller:
# echo -n 172.31.0.202 > addr_traddr
Set RDMA as the transport type and 4420 as the transport service ID (port):
# echo rdma > addr_trtype
# echo 4420 > addr_trsvcid
Set the address family for the port:
# echo ipv4 > addr_adrfam
Create a soft link to expose the subsystem on the port:
# ln -s /sys/kernel/config/nvmet/subsystems/testnqn /sys/kernel/config/nvmet/ports/1/subsystems/testnqn
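Because configfs settings do not survive a reboot, the preceding steps can also be collected into a small shell script and run again after boot. This is a minimal sketch, not part of any packaged tooling; the subsystem name, namespace number, backing device, port number, and addresses repeat the example values from this procedure and must be adapted to your environment:

#!/bin/sh
# Sketch: recreate the nvmet RDMA controller described in this procedure.
set -e

SUBNQN=testnqn            # NVMe subsystem name (assumption)
NSID=10                   # namespace number (assumption)
DEVICE=/dev/nvme0n1       # backing block device (assumption)
PORT=1                    # nvmet port number (assumption)
TRADDR=172.31.0.202       # IP address of the RDMA interface (assumption)
TRSVCID=4420              # transport service ID (assumption)

modprobe nvmet-rdma

# Subsystem and namespace
mkdir -p /sys/kernel/config/nvmet/subsystems/$SUBNQN
echo 1 > /sys/kernel/config/nvmet/subsystems/$SUBNQN/attr_allow_any_host
mkdir -p /sys/kernel/config/nvmet/subsystems/$SUBNQN/namespaces/$NSID
echo -n $DEVICE > /sys/kernel/config/nvmet/subsystems/$SUBNQN/namespaces/$NSID/device_path
echo 1 > /sys/kernel/config/nvmet/subsystems/$SUBNQN/namespaces/$NSID/enable

# RDMA port
mkdir -p /sys/kernel/config/nvmet/ports/$PORT
echo -n $TRADDR > /sys/kernel/config/nvmet/ports/$PORT/addr_traddr
echo rdma > /sys/kernel/config/nvmet/ports/$PORT/addr_trtype
echo $TRSVCID > /sys/kernel/config/nvmet/ports/$PORT/addr_trsvcid
echo ipv4 > /sys/kernel/config/nvmet/ports/$PORT/addr_adrfam

# Expose the subsystem on the port
ln -s /sys/kernel/config/nvmet/subsystems/$SUBNQN /sys/kernel/config/nvmet/ports/$PORT/subsystems/$SUBNQN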
Verification
Verify that the NVMe controller is listening on the given port and ready for connection requests:
# dmesg | grep "enabling port"
[ 1091.413648] nvmet_rdma: enabling port 1 (172.31.0.202:4420)
10.2. Setting up the NVMe/RDMA controller using nvmetcli
You can configure the Non-volatile Memory Express™ (NVMe™) over RDMA (NVMe™/RDMA) controller by using the nvmetcli utility. The nvmetcli utility provides a command-line interface and an interactive shell option.
Prerequisites
- Verify that you have a block device to assign to the nvmet subsystem.
- Run the following nvmetcli operations as the root user.
Procedure
Install the nvmetcli package:

# dnf install nvmetcli
Download the rdma.json file:

# wget http://git.infradead.org/users/hch/nvmetcli.git/blob_plain/0a6b088db2dc2e5de11e6f23f1e890e4b54fee64:/rdma.json
Edit the rdma.json file and change the traddr value to 172.31.0.202, the IP address of the RDMA interface, as shown in the sketch below.
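For example, assuming your copy of rdma.json stores the port address in a traddr field (as nvmetcli JSON configuration files typically do), the change could be made in place from the shell; adjust the pattern if the file layout differs:

# sed -i 's/"traddr": ".*"/"traddr": "172.31.0.202"/' rdma.json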
Set up the controller by loading the NVMe controller configuration file:

# nvmetcli restore rdma.json
If the NVMe controller configuration file name is not specified, nvmetcli uses the /etc/nvmet/config.json file.
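Conversely, after changing the running configuration you can write it back to a file so that nvmetcli restore can reload it later. This assumes your version of nvmetcli provides the save subcommand, which writes to /etc/nvmet/config.json when no file name is given:

# nvmetcli save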
Verification
Verify that the NVMe controller is listening on the given port and ready for connection requests:
# dmesg | tail -1
[ 4797.132647] nvmet_rdma: enabling port 2 (172.31.0.202:4420)
Optional: Clear the current NVMe controller configuration:
# nvmetcli clear
10.3. Configuring an NVMe/RDMA host
You can configure a Non-volatile Memory Express™ (NVMe™) over RDMA (NVMe™/RDMA) host by using the NVMe management command-line interface (nvme-cli) tool.
Procedure
Install the nvme-cli tool:

# dnf install nvme-cli
Load the nvme-rdma module if it is not loaded:

# modprobe nvme-rdma
Discover available subsystems on the NVMe controller:
# nvme discover -t rdma -a 172.31.0.202 -s 4420

Discovery Log Number of Records 2, Generation counter 2
=====Discovery Log Entry 0======
trtype:  rdma
adrfam:  ipv4
subtype: current discovery subsystem
treq:    not specified, sq flow control disable supported
portid:  2
trsvcid: 4420
subnqn:  nqn.2014-08.org.nvmexpress.discovery
traddr:  172.31.0.202
eflags:  none
rdma_prtype: not specified
rdma_qptype: connected
rdma_cms:    rdma-cm
rdma_pkey: 0000
=====Discovery Log Entry 1======
trtype:  rdma
adrfam:  ipv4
subtype: nvme subsystem
treq:    not specified, sq flow control disable supported
portid:  2
trsvcid: 4420
subnqn:  testnqn
traddr:  172.31.0.202
eflags:  none
rdma_prtype: not specified
rdma_qptype: connected
rdma_cms:    rdma-cm
rdma_pkey: 0000
Connect to the discovered subsystems:
# nvme connect -t rdma -a 172.31.0.202 -s 4420 -n testnqn
connecting to device: nvme0

# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                            8:0    0 465.8G  0 disk
├─sda1                         8:1    0     1G  0 part /boot
└─sda2                         8:2    0 464.8G  0 part
  ├─rhel_rdma--virt--03-root 253:0    0    50G  0 lvm  /
  ├─rhel_rdma--virt--03-swap 253:1    0     4G  0 lvm  [SWAP]
  └─rhel_rdma--virt--03-home 253:2    0 410.8G  0 lvm  /home
nvme0n1

# cat /sys/class/nvme/nvme0/transport
rdma
Replace testnqn with the NVMe subsystem name.
Replace 172.31.0.202 with the controller IP address.
Replace 4420 with the port number.
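If the controller exports more than one subsystem, nvme-cli can also discover and connect to all of them in a single step. The transport, address, and port below repeat the example values used above and are assumptions for your environment:

# nvme connect-all -t rdma -a 172.31.0.202 -s 4420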
Verification
List the NVMe devices that are currently connected:
# nvme list
Optional: Disconnect from the controller:
# nvme disconnect -n testnqn
NQN:testnqn disconnected 1 controller(s)

# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                            8:0    0 465.8G  0 disk
├─sda1                         8:1    0     1G  0 part /boot
└─sda2                         8:2    0 464.8G  0 part
  ├─rhel_rdma--virt--03-root 253:0    0    50G  0 lvm  /
  ├─rhel_rdma--virt--03-swap 253:1    0     4G  0 lvm  [SWAP]
  └─rhel_rdma--virt--03-home 253:2    0 410.8G  0 lvm  /home
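If you connected to several subsystems, for example with nvme connect-all, you can also tear down every NVMe over fabrics connection at once. This assumes your version of nvme-cli provides the disconnect-all subcommand:

# nvme disconnect-all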