Chapter 15. Configuring NVMe over fabrics using NVMe/RDMA
In a Non-volatile Memory Express™ (NVMe™) over RDMA (NVMe™/RDMA) setup, you configure an NVMe controller and an NVMe initiator.
15.1. Setting up an NVMe/RDMA controller using configfs
You can configure a Non-volatile Memory Express™ (NVMe™) over RDMA (NVMe™/RDMA) controller by using configfs.
Prerequisites
- Verify that you have a block device to assign to the nvmet subsystem.
Procedure
Create the nvmet-rdma subsystem:
# modprobe nvmet-rdma
# mkdir /sys/kernel/config/nvmet/subsystems/testnqn
# cd /sys/kernel/config/nvmet/subsystems/testnqn
Replace testnqn with the subsystem name.
Allow any host to connect to this controller (an optional sketch after this procedure shows how to restrict access to specific hosts instead):
# echo 1 > attr_allow_any_host
Configure a namespace:
# mkdir namespaces/10
# cd namespaces/10
Replace 10 with the namespace number.
Set a path to the NVMe device:
# echo -n /dev/nvme0n1 > device_path
Enable the namespace:
# echo 1 > enable
Create a directory with an NVMe port:
# mkdir /sys/kernel/config/nvmet/ports/1
# cd /sys/kernel/config/nvmet/ports/1
Display the IP address of mlx5_ib0:
# ip addr show mlx5_ib0
8: mlx5_ib0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 4092 qdisc mq state UP group default qlen 256
    link/infiniband 00:00:06:2f:fe:80:00:00:00:00:00:00:e4:1d:2d:03:00:e7:0f:f6 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
    inet 172.31.0.202/24 brd 172.31.0.255 scope global noprefixroute mlx5_ib0
       valid_lft forever preferred_lft forever
    inet6 fe80::e61d:2d03:e7:ff6/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
Set the transport address for the controller:
# echo -n 172.31.0.202 > addr_traddr
Set RDMA as the transport type and 4420 as the transport service ID (port):
# echo rdma > addr_trtype
# echo 4420 > addr_trsvcid
Set the address family for the port:
# echo ipv4 > addr_adrfam
Create a soft link:
# ln -s /sys/kernel/config/nvmet/subsystems/testnqn /sys/kernel/config/nvmet/ports/1/subsystems/testnqn
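As an alternative to the attr_allow_any_host step above, the nvmet configfs interface can also restrict access to specific host NQNs. The following is a minimal, optional sketch; the host NQN nqn.2014-08.com.example:host1 is a placeholder, and the real value is usually found in /etc/nvme/hostnqn on the initiator:
# echo 0 > /sys/kernel/config/nvmet/subsystems/testnqn/attr_allow_any_host
# mkdir /sys/kernel/config/nvmet/hosts/nqn.2014-08.com.example:host1
# ln -s /sys/kernel/config/nvmet/hosts/nqn.2014-08.com.example:host1 /sys/kernel/config/nvmet/subsystems/testnqn/allowed_hosts/nqn.2014-08.com.example:host1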
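Because configfs settings do not survive a reboot, the steps in this procedure can also be collected into a small shell script and re-run as needed. The following is a minimal sketch that assumes the example values used above (testnqn, namespace 10, /dev/nvme0n1, 172.31.0.202, and port 1); adjust them for your environment:
#!/usr/bin/env bash
# Recreate the example NVMe/RDMA controller configuration from this procedure.
set -e

modprobe nvmet-rdma

# Subsystem and namespace
mkdir -p /sys/kernel/config/nvmet/subsystems/testnqn
echo 1 > /sys/kernel/config/nvmet/subsystems/testnqn/attr_allow_any_host
mkdir -p /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/10
echo -n /dev/nvme0n1 > /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/10/device_path
echo 1 > /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/10/enable

# RDMA port
mkdir -p /sys/kernel/config/nvmet/ports/1
echo -n 172.31.0.202 > /sys/kernel/config/nvmet/ports/1/addr_traddr
echo rdma > /sys/kernel/config/nvmet/ports/1/addr_trtype
echo 4420 > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
echo ipv4 > /sys/kernel/config/nvmet/ports/1/addr_adrfam

# Expose the subsystem on the port
ln -s /sys/kernel/config/nvmet/subsystems/testnqn /sys/kernel/config/nvmet/ports/1/subsystems/testnqn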
Verification
Verify that the NVMe controller is listening on the given port and ready for connection requests:
# dmesg | grep "enabling port"
[ 1091.413648] nvmet_rdma: enabling port 1 (172.31.0.202:4420)
Additional resources
- nvme(1) man page on your system
15.2. Setting up the NVMe/RDMA controller using nvmetcli
You can configure the Non-volatile Memory Express™ (NVMe™) over RDMA (NVMe™/RDMA) controller by using the nvmetcli utility. The nvmetcli utility provides a command line and an interactive shell option.
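For example, running nvmetcli with no arguments starts the interactive shell, and the ls subcommand prints the current nvmet configuration tree without entering the shell. This is a brief sketch; the exact subcommands and output depend on your nvmetcli version:
# nvmetcli
# nvmetcli ls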
Prerequisites
- Verify that you have a block device to assign to the nvmet subsystem.
- Execute the following nvmetcli operations as a root user.
Procedure
Install the nvmetcli package:
# yum install nvmetcli
Download the rdma.json file:
# wget http://git.infradead.org/users/hch/nvmetcli.git/blob_plain/0a6b088db2dc2e5de11e6f23f1e890e4b54fee64:/rdma.json
Edit the rdma.json file and change the traddr value to 172.31.0.202.
Set up the controller by loading the NVMe controller configuration file:
# nvmetcli restore rdma.json
If the NVMe controller configuration file name is not specified, the nvmetcli utility uses the /etc/nvmet/config.json file.
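The reverse direction is also possible: a configuration that was built up interactively or through configfs can be written back to a JSON file with the save subcommand. A hedged sketch, assuming your nvmetcli version provides save (without a file name it writes the default /etc/nvmet/config.json):
# nvmetcli save rdma.json
# nvmetcli save
Some nvmetcli packages also ship an nvmet service that restores /etc/nvmet/config.json at boot; check your distribution before relying on it.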
Verification
Verify that the NVMe controller is listening on the given port and ready for connection requests:
# dmesg | tail -1
[ 4797.132647] nvmet_rdma: enabling port 2 (172.31.0.202:4420)
Optional: Clear the current NVMe controller:
# nvmetcli clear
Additional resources
- nvmetcli and nvme(1) man pages on your system
15.3. Configuring an NVMe/RDMA host
You can configure a Non-volatile Memory Express™ (NVMe™) over RDMA (NVMe™/RDMA) host by using the NVMe management command-line interface (nvme-cli) tool.
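If the controller restricts access to specific host NQNs, it is helpful to know the NQN that this host presents when connecting. A hedged sketch using nvme-cli, assuming the usual default location of the host NQN file:
# cat /etc/nvme/hostnqn
# nvme gen-hostnqn
The gen-hostnqn subcommand only prints a newly generated NQN; it does not write the file.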
Procedure
Install the nvme-cli tool:
# yum install nvme-cli
Load the nvme-rdma module if it is not loaded:
# modprobe nvme-rdma
Discover available subsystems on the NVMe controller:
# nvme discover -t rdma -a 172.31.0.202 -s 4420

Discovery Log Number of Records 1, Generation counter 2
=====Discovery Log Entry 0======
trtype:  rdma
adrfam:  ipv4
subtype: nvme subsystem
treq:    not specified, sq flow control disable supported
portid:  1
trsvcid: 4420
subnqn:  testnqn
traddr:  172.31.0.202
rdma_prtype: not specified
rdma_qptype: connected
rdma_cms:    rdma-cm
rdma_pkey: 0x0000
Connect to the discovered subsystems:
# nvme connect -t rdma -n testnqn -a 172.31.0.202 -s 4420
# lsblk
NAME                          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                             8:0    0 465.8G  0 disk
├─sda1                          8:1    0     1G  0 part /boot
└─sda2                          8:2    0 464.8G  0 part
  ├─rhel_rdma--virt--03-root  253:0    0    50G  0 lvm  /
  ├─rhel_rdma--virt--03-swap  253:1    0     4G  0 lvm  [SWAP]
  └─rhel_rdma--virt--03-home  253:2    0 410.8G  0 lvm  /home
nvme0n1
# cat /sys/class/nvme/nvme0/transport
rdma
Replace testnqn with the NVMe subsystem name.
Replace 172.31.0.202 with the controller IP address.
Replace 4420 with the port number.
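The nvme connect command does not persist across reboots. If the host should reconnect automatically, connection parameters can be stored in /etc/nvme/discovery.conf, which nvme connect-all reads when run without arguments; many distributions also ship an autoconnect service built on this mechanism. A hedged sketch using the example values from this procedure:
# echo "-t rdma -a 172.31.0.202 -s 4420" >> /etc/nvme/discovery.conf
# nvme connect-all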
Verification
List the NVMe devices that are currently connected:
# nvme list
Optional: Disconnect from the controller:
# nvme disconnect -n testnqn
NQN:testnqn disconnected 1 controller(s)

# lsblk
NAME                          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                             8:0    0 465.8G  0 disk
├─sda1                          8:1    0     1G  0 part /boot
└─sda2                          8:2    0 464.8G  0 part
  ├─rhel_rdma--virt--03-root  253:0    0    50G  0 lvm  /
  ├─rhel_rdma--virt--03-swap  253:1    0     4G  0 lvm  [SWAP]
  └─rhel_rdma--virt--03-home  253:2    0 410.8G  0 lvm  /home
Additional resources
- nvme(1) man page on your system