Chapter 10. Configuring NVMe over fabrics using NVMe/RDMA
Configure an NVMe over RDMA setup, including the NVMe controller and the initiator. You can set up RDMA controllers by using configfs or nvmetcli, and configure RDMA hosts for high-speed storage access.
10.1. Setting up an NVMe/RDMA controller using configfs
You can configure a Non-volatile Memory Express™ (NVMe™) over RDMA (NVMe™/RDMA) controller by using configfs. For more information, see the nvme(1) man page on your system.
Prerequisites
- Verify that you have a block device to assign to the nvmet subsystem.
Procedure
Create the nvmet-rdma subsystem:

# modprobe nvmet-rdma
# mkdir /sys/kernel/config/nvmet/subsystems/testnqn
# cd /sys/kernel/config/nvmet/subsystems/testnqn

Replace testnqn with the subsystem name.
Allow any host to connect to this controller:
# echo 1 > attr_allow_any_host

Configure a namespace:

# mkdir namespaces/10
# cd namespaces/10

Replace 10 with the namespace number.

Set a path to the NVMe device:

# echo -n /dev/nvme0n1 > device_path

Enable the namespace:

# echo 1 > enable

Create a directory with an NVMe port:

# mkdir /sys/kernel/config/nvmet/ports/1
# cd /sys/kernel/config/nvmet/ports/1

Display the IP address of mlx5_ib0:

# ip addr show mlx5_ib0
8: mlx5_ib0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 4092 qdisc mq state UP group default qlen 256
    link/infiniband 00:00:06:2f:fe:80:00:00:00:00:00:00:e4:1d:2d:03:00:e7:0f:f6 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
    inet 172.31.0.202/24 brd 172.31.0.255 scope global noprefixroute mlx5_ib0
       valid_lft forever preferred_lft forever
    inet6 fe80::e61d:2d03:e7:ff6/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

Set the transport address for the controller:

# echo -n 172.31.0.202 > addr_traddr

Set RDMA as the transport type:

# echo rdma > addr_trtype
# echo 4420 > addr_trsvcid

Set the address family for the port:

# echo ipv4 > addr_adrfam

Create a soft link:
# ln -s /sys/kernel/config/nvmet/subsystems/testnqn /sys/kernel/config/nvmet/ports/1/subsystems/testnqn
Verification
Verify that the NVMe controller is listening on the given port and ready for connection requests:
# dmesg | grep "enabling port"
[ 1091.413648] nvmet_rdma: enabling port 1 (172.31.0.202:4420)
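For reference, the configfs steps in this procedure can be collected into a single shell sketch. This is a minimal sketch, not a supported script: it only reuses the example values from the procedure (testnqn, namespace 10, /dev/nvme0n1, 172.31.0.202, port 1), and the variable names are illustrative; adjust everything to your environment.

#!/usr/bin/env bash
# Minimal sketch of the configfs-based NVMe/RDMA controller setup above.
# All values are examples; replace them to match your environment.
set -e

SUBSYS=testnqn          # subsystem NQN (example)
NSID=10                 # namespace number (example)
DEVICE=/dev/nvme0n1     # block device to export (example)
TRADDR=172.31.0.202     # IP address of the RDMA-capable interface (example)
PORT=1                  # nvmet port directory number (example)

modprobe nvmet-rdma

# Create the subsystem, allow any host, and add a namespace backed by the device
mkdir -p /sys/kernel/config/nvmet/subsystems/"$SUBSYS"
echo 1 > /sys/kernel/config/nvmet/subsystems/"$SUBSYS"/attr_allow_any_host
mkdir -p /sys/kernel/config/nvmet/subsystems/"$SUBSYS"/namespaces/"$NSID"
echo -n "$DEVICE" > /sys/kernel/config/nvmet/subsystems/"$SUBSYS"/namespaces/"$NSID"/device_path
echo 1 > /sys/kernel/config/nvmet/subsystems/"$SUBSYS"/namespaces/"$NSID"/enable

# Create the port and set the RDMA transport parameters
mkdir -p /sys/kernel/config/nvmet/ports/"$PORT"
echo -n "$TRADDR" > /sys/kernel/config/nvmet/ports/"$PORT"/addr_traddr
echo rdma > /sys/kernel/config/nvmet/ports/"$PORT"/addr_trtype
echo 4420 > /sys/kernel/config/nvmet/ports/"$PORT"/addr_trsvcid
echo ipv4 > /sys/kernel/config/nvmet/ports/"$PORT"/addr_adrfam

# Link the subsystem to the port so the port starts accepting connections
ln -s /sys/kernel/config/nvmet/subsystems/"$SUBSYS" \
      /sys/kernel/config/nvmet/ports/"$PORT"/subsystems/"$SUBSYS"

As in the manual procedure, the "enabling port" message checked in the verification step appears only after the final symlink is created.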
10.2. Setting up the NVMe™/RDMA controller using nvmetcli
You can configure the Non-volatile Memory Express™ (NVMe™) over RDMA (NVMe™/RDMA) controller by using the nvmetcli utility. The nvmetcli utility provides a command line and an interactive shell option. For more information, see the nvmetcli and nvme(1) man pages on your system.
Prerequisites
- Verify that you have a block device to assign to the nvmet subsystem.
- Run the following nvmetcli operations as a root user.
Procedure
Install the nvmetcli package:

# dnf install nvmetcli

Download the rdma.json file:

# wget http://git.infradead.org/users/hch/nvmetcli.git/blob_plain/0a6b088db2dc2e5de11e6f23f1e890e4b54fee64:/rdma.json
Edit the rdma.json file and change the traddr value to 172.31.0.202. An illustrative excerpt of the relevant section is shown after this procedure.

Set up the controller by loading the NVMe controller configuration file:

# nvmetcli restore rdma.json

Note: If the NVMe controller configuration file name is not specified, nvmetcli uses the /etc/nvmet/config.json file.
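For orientation, the traddr value that you edit lives in the addr block of a port definition inside rdma.json. The following is only an illustrative excerpt based on the usual nvmetcli JSON layout; the downloaded file contains additional sections (such as hosts and subsystems), and values such as portid and the subsystem NQN shown here are examples:

"ports": [
  {
    "addr": {
      "adrfam": "ipv4",
      "traddr": "172.31.0.202",
      "trsvcid": "4420",
      "trtype": "rdma"
    },
    "portid": 2,
    "referrals": [],
    "subsystems": [
      "testnqn"
    ]
  }
]

After the edit, nvmetcli restore rdma.json applies this configuration through configfs, producing the same result as the manual steps in Section 10.1.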
Verification
Verify that the NVMe controller is listening on the given port and ready for connection requests:
# dmesg | tail -1
[ 4797.132647] nvmet_rdma: enabling port 2 (172.31.0.202:4420)

Optional: Clear the current NVMe controller:
# nvmetcli clear
10.3. Configuring an NVMe/RDMA host
You can configure a Non-volatile Memory Express™ (NVMe™) over RDMA (NVMe™/RDMA) host by using the NVMe management command-line interface (nvme-cli) tool. For more information, see the nvme(1) man page on your system.
Procedure
Install the nvme-cli tool:

# dnf install nvme-cli

Load the nvme-rdma module if it is not loaded:

# modprobe nvme-rdma

Discover available subsystems on the NVMe controller:

# nvme discover -t rdma -a 172.31.0.202 -s 4420

Discovery Log Number of Records 2, Generation counter 2
=====Discovery Log Entry 0======
trtype:  rdma
adrfam:  ipv4
subtype: current discovery subsystem
treq:    not specified, sq flow control disable supported
portid:  2
trsvcid: 4420
subnqn:  nqn.2014-08.org.nvmexpress.discovery
traddr:  172.31.0.202
eflags:  none
rdma_prtype: not specified
rdma_qptype: connected
rdma_cms:    rdma-cm
rdma_pkey: 0000
=====Discovery Log Entry 1======
trtype:  rdma
adrfam:  ipv4
subtype: nvme subsystem
treq:    not specified, sq flow control disable supported
portid:  2
trsvcid: 4420
subnqn:  testnqn
traddr:  172.31.0.202
eflags:  none
rdma_prtype: not specified
rdma_qptype: connected
rdma_cms:    rdma-cm
rdma_pkey: 0000

Connect to the discovered subsystems:

# nvme connect -t rdma -a 172.31.0.202 -s 4420 -n testnqn
connecting to device: nvme0

# lsblk
NAME                          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                             8:0    0 465.8G  0 disk
├─sda1                          8:1    0     1G  0 part /boot
└─sda2                          8:2    0 464.8G  0 part
  ├─rhel_rdma--virt--03-root  253:0    0    50G  0 lvm  /
  ├─rhel_rdma--virt--03-swap  253:1    0     4G  0 lvm  [SWAP]
  └─rhel_rdma--virt--03-home  253:2    0 410.8G  0 lvm  /home
nvme0n1

# cat /sys/class/nvme/nvme0/transport
rdma

Replace testnqn with the NVMe subsystem name.
Replace 172.31.0.202 with the controller IP address.
Replace 4420 with the port number.
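For reference, the host-side steps above can also be collected into a single shell sketch. This is a minimal sketch that reuses the example values from this procedure (172.31.0.202, 4420, testnqn); the variable names are illustrative only, and every value must be adjusted to your controller.

#!/usr/bin/env bash
# Minimal sketch of the NVMe/RDMA host setup described above.
# All values are examples; replace them to match your controller.
set -e

TRADDR=172.31.0.202   # controller IP address (example)
TRSVCID=4420          # port number (example)
NQN=testnqn           # NVMe subsystem name (example)

modprobe nvme-rdma

# Show which subsystems the controller exports
nvme discover -t rdma -a "$TRADDR" -s "$TRSVCID"

# Connect to the example subsystem, then list the resulting NVMe devices
nvme connect -t rdma -a "$TRADDR" -s "$TRSVCID" -n "$NQN"
nvme list

Quoting the variables and using set -e keeps the sketch from continuing after a failed discovery or connect.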
Verification
List the NVMe devices that are currently connected:
# nvme list

Optional: Disconnect from the controller:

# nvme disconnect -n testnqn
NQN:testnqn disconnected 1 controller(s)

# lsblk
NAME                          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                             8:0    0 465.8G  0 disk
├─sda1                          8:1    0     1G  0 part /boot
└─sda2                          8:2    0 464.8G  0 part
  ├─rhel_rdma--virt--03-root  253:0    0    50G  0 lvm  /
  ├─rhel_rdma--virt--03-swap  253:1    0     4G  0 lvm  [SWAP]
  └─rhel_rdma--virt--03-home  253:2    0 410.8G  0 lvm  /home