Chapter 4. Enabling multipathing on NVMe devices
You can multipath Non-volatile Memory Express™ (NVMe™) devices that are connected to your system over a fabric transport, such as Fibre Channel (FC). You can select between multiple multipathing solutions.
4.1. Native NVMe multipathing and DM Multipath
Non-volatile Memory Express™ (NVMe™) devices support a native multipathing functionality. When configuring multipathing on NVMe, you can select between the standard DM Multipath framework and the native NVMe multipathing.
Both DM Multipath and native NVMe multipathing support the Asymmetric Namespace Access (ANA) multipathing scheme of NVMe devices. ANA identifies optimized paths between the controller and the host, and improves performance.
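For example, after the fabric paths are established, you can inspect the ANA state that the kernel reports for each path. The following one-liner is only a sketch: it assumes that native NVMe multipathing is enabled, that the subsystem supports ANA, and that the per-path sysfs entries follow the usual nvmeXcYnZ naming, which can vary between kernel versions:

# cat /sys/class/nvme/nvme*/nvme*c*n*/ana_state

Each path typically reports a state such as optimized, non-optimized, or inaccessible, which the multipathing layer takes into account when selecting paths.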
When native NVMe multipathing is enabled, it applies globally to all NVMe devices. It can provide higher performance, but does not contain all of the functionality that DM Multipath provides. For example, native NVMe multipathing supports only the numa and round-robin path selection methods.
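As an illustration, when native NVMe multipathing is active, each NVMe subsystem exposes its current path selection method through the iopolicy sysfs attribute. A quick way to review the policy of every connected subsystem at once (a sketch that assumes at least one subsystem is present) is:

# grep . /sys/class/nvme-subsystem/nvme-subsys*/iopolicy

The default policy is numa; the procedure in the next section shows how to switch a subsystem to round-robin.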
Red Hat recommends that you use DM Multipath in Red Hat Enterprise Linux 8 as your default multipathing solution.
4.2. Enabling native NVMe multipathing
The default kernel setting for the nvme_core.multipath option is N, which means that native Non-volatile Memory Express™ (NVMe™) multipathing is disabled. You can enable it by adding a kernel command-line option or by using a kernel module configuration file.
Prerequisites
- The NVMe devices are connected to your system. For more information, see Overview of NVMe over fabric devices.
Procedure
Check if native NVMe multipathing is enabled in the kernel:
# cat /sys/module/nvme_core/parameters/multipath

The command displays one of the following:

N - Native NVMe multipathing is disabled.
Y - Native NVMe multipathing is enabled.
If native NVMe multipathing is disabled, enable it by using one of the following methods:
Using a kernel option:
Add the nvme_core.multipath=Y option to the kernel command line:

# grubby --update-kernel=ALL --args="nvme_core.multipath=Y"

On the 64-bit IBM Z architecture, update the boot menu:

# zipl

Reboot the system.
Using a kernel module configuration file:
Create the /etc/modprobe.d/nvme_core.conf configuration file with the following content:

options nvme_core multipath=Y

Back up the initramfs file:

# cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).bak.$(date +%m-%d-%H%M%S).img

Rebuild the initramfs:

# dracut --force --verbose

Reboot the system.
Optional: On the running system, change the I/O policy on NVMe devices to distribute the I/O on all available paths:
# echo "round-robin" > /sys/class/nvme-subsystem/nvme-subsys0/iopolicy

Optional: Set the I/O policy persistently using udev rules. Create the /etc/udev/rules.d/71-nvme-io-policy.rules file with the following content:

ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{iopolicy}="round-robin"
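If you create the udev rule on a running system and want it to take effect without a reboot, one possible approach (a sketch, not part of the documented procedure) is to reload the rules and re-trigger events for the nvme-subsystem devices:

# udevadm control --reload-rules
# udevadm trigger --subsystem-match=nvme-subsystem

Because the rule matches both add and change events, re-triggering applies the round-robin policy to subsystems that are already connected.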
Verification
Verify that your system recognizes the NVMe devices. The following example assumes you have a connected NVMe over fabrics storage subsystem with two NVMe namespaces:
# nvme list
Node             SN                   Model                                    Namespace Usage                      Format           FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     a34c4f3a0d6f5cec     Linux                                    1         250.06 GB / 250.06 GB      512 B + 0 B      4.18.0-2
/dev/nvme0n2     a34c4f3a0d6f5cec     Linux                                    2         250.06 GB / 250.06 GB      512 B + 0 B      4.18.0-2

List all connected NVMe subsystems:
# nvme list-subsys
nvme-subsys0 - NQN=testnqn
\
 +- nvme0 fc traddr=nn-0x20000090fadd597a:pn-0x10000090fadd597a host_traddr=nn-0x20000090fac7e1dd:pn-0x10000090fac7e1dd live
 +- nvme1 fc traddr=nn-0x20000090fadd5979:pn-0x10000090fadd5979 host_traddr=nn-0x20000090fac7e1dd:pn-0x10000090fac7e1dd live
 +- nvme2 fc traddr=nn-0x20000090fadd5979:pn-0x10000090fadd5979 host_traddr=nn-0x20000090fac7e1de:pn-0x10000090fac7e1de live
 +- nvme3 fc traddr=nn-0x20000090fadd597a:pn-0x10000090fadd597a host_traddr=nn-0x20000090fac7e1de:pn-0x10000090fac7e1de live

Check the active transport type. For example, nvme0 fc indicates that the device is connected over the Fibre Channel transport, and nvme tcp indicates that the device is connected over TCP.

If you edited the kernel options, verify that native NVMe multipathing is enabled on the kernel command line:
# cat /proc/cmdline
BOOT_IMAGE=[...] nvme_core.multipath=Y

If you changed the I/O policy, verify that round-robin is the active I/O policy on NVMe devices:

# cat /sys/class/nvme-subsystem/nvme-subsys0/iopolicy
round-robin
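In addition to reading /proc/cmdline on the running system, you can optionally confirm that the option was recorded for the installed boot entries. This is only a hedged check; the exact output depends on your boot loader configuration:

# grubby --info=ALL | grep -F 'nvme_core.multipath=Y'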
4.3. Enabling DM Multipath on NVMe devices
You can enable DM Multipath on connected NVMe devices by disabling native NVMe multipathing.
Prerequisites
- The NVMe devices are connected to your system. For more information, see Overview of NVMe over fabric devices.
Procedure
Check if native NVMe multipathing is disabled:
# cat /sys/module/nvme_core/parameters/multipath

The command displays one of the following:

N - Native NVMe multipathing is disabled.
Y - Native NVMe multipathing is enabled.
If native NVMe multipathing is enabled, disable it by using one of the following methods:
Using a kernel option:
Remove the nvme_core.multipath=Y option from the kernel command line:

# grubby --update-kernel=ALL --remove-args="nvme_core.multipath=Y"

On the 64-bit IBM Z architecture, update the boot menu:

# zipl

Reboot the system.
Using a kernel module configuration file:
Remove the options nvme_core multipath=Y line from the /etc/modprobe.d/nvme_core.conf file, if it is present.

Back up the initramfs file:

# cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).bak.$(date +%m-%d-%H%M%S).img

Rebuild the initramfs:

# dracut --force --verbose

Reboot the system.
Enable DM Multipath:
# systemctl enable --now multipathd.service

Distribute I/O on all available paths. Add the following content to the /etc/multipath.conf file:

devices {
        device {
                vendor "NVME"
                product ".*"
                path_grouping_policy group_by_prio
        }
}

Note: The /sys/class/nvme-subsystem/nvme-subsys0/iopolicy configuration file has no effect on the I/O distribution when DM Multipath manages the NVMe devices.

Reload the multipathd service to apply the configuration changes:

# multipath -r
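After reloading the maps, you can optionally confirm that multipathd applied the NVME device entry from /etc/multipath.conf. The following is a sketch; the exact layout of the output depends on the device-mapper-multipath version:

# multipathd show config | grep -A 4 '"NVME"'

The output should include a device section with vendor "NVME" and the group_by_prio path grouping policy.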
Verification
Verify that native NVMe multipathing is disabled:
# cat /sys/module/nvme_core/parameters/multipath
N

Verify that DM Multipath recognizes the NVMe devices:
# multipath -l

eui.00007a8962ab241100a0980000d851c8 dm-6 NVME,NetApp E-Series
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
  |- 0:10:2:2 nvme0n2 259:3 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
  |- 4:11:2:2 nvme4n2 259:28 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
  |- 5:32778:2:2 nvme5n2 259:38 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
  |- 6:32779:2:2 nvme6n2 259:44 active undef running
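Note that multipath -l builds its output from sysfs and the device mapper only, which is why the prio fields show 0 in the example above. To also query the path checkers and prioritizers for live priority and state information, you can run the long-listing variant instead:

# multipath -ll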