Chapter 4. Enabling multipathing on NVMe devices
You can multipath Non-volatile Memory Express™ (NVMe™) devices that are connected to your system over a fabric transport, such as Fibre Channel (FC). You can choose from several multipathing solutions.
4.1. Native NVMe multipathing and DM Multipath
Non-volatile Memory Express™ (NVMe™) devices support a native multipathing functionality. When configuring multipathing on NVMe, you can select between the standard DM Multipath framework and native NVMe multipathing.
Both DM Multipath and native NVMe multipathing support the Asymmetric Namespace Access (ANA) multipathing scheme of NVMe devices. ANA identifies optimized paths between the controller and the host, and improves performance.
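If you want to confirm that a controller advertises ANA, you can inspect the cmic and anacap fields in the Identify Controller data with nvme-cli. With native NVMe multipathing active, each hidden per-path device also exposes its current ANA state through sysfs. In the following sketch, /dev/nvme0 and nvme0c0n1 are placeholders for your own controller and path device names:
# nvme id-ctrl /dev/nvme0 | grep -E 'cmic|anacap'
# cat /sys/block/nvme0c0n1/ana_state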
When native NVMe multipathing is enabled, it applies globally to all NVMe devices. It can provide higher performance, but does not contain all of the functionality that DM Multipath provides. For example, native NVMe multipathing supports only the numa and round-robin path selection methods.
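To see which path selection method is active for each subsystem, you can read the iopolicy attribute from sysfs. A minimal loop, assuming at least one connected NVMe subsystem, might look like this:
# for s in /sys/class/nvme-subsystem/nvme-subsys*; do echo "$(basename "$s"): $(cat "$s"/iopolicy)"; done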
By default, native NVMe multipathing is enabled in Red Hat Enterprise Linux 9 and is the recommended multipathing solution.
4.2. Enabling DM Multipath on NVMe devices
The default kernel setting for the nvme_core.multipath option is set to Y, which means that native Non-volatile Memory Express™ (NVMe™) multipathing is enabled. You can enable DM Multipath on connected NVMe devices by disabling native NVMe multipathing.
Prerequisites
- The NVMe devices are connected to your system. For more information, see Overview of NVMe over fabric devices.
Procedure
Check if native NVMe multipathing is enabled:
# cat /sys/module/nvme_core/parameters/multipath
The command displays one of the following:
N - Native NVMe multipathing is disabled.
Y - Native NVMe multipathing is enabled.
If native NVMe multipathing is enabled, disable it by using one of the following methods:
Using a kernel option:
Add the nvme_core.multipath=N option to the kernel command line:
# grubby --update-kernel=ALL --args="nvme_core.multipath=N"
On the 64-bit IBM Z architecture, update the boot menu:
# zipl
- Reboot the system.
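To confirm that the option was stored for the boot entries, you can print the kernel arguments that grubby recorded for the default entry:
# grubby --info=DEFAULT | grep args
After the reboot, nvme_core.multipath=N also appears in /proc/cmdline.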
Using a kernel module configuration file:
Create the /etc/modprobe.d/nvme_core.conf configuration file with the following content:
options nvme_core multipath=N
Back up the initramfs file:
# cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).bak.$(date +%m-%d-%H%M%S).img
Rebuild the initramfs:
# dracut --force --verbose
- Reboot the system.
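To confirm that the configuration file was included in the rebuilt image, you can list the initramfs contents with the lsinitrd utility from the dracut package:
# lsinitrd /boot/initramfs-$(uname -r).img | grep nvme_core.conf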
Enable DM Multipath:
# systemctl enable --now multipathd.service
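If the /etc/multipath.conf file does not exist yet, you can create a default configuration first; on Red Hat Enterprise Linux, the mpathconf utility from the device-mapper-multipath package provides this:
# mpathconf --enable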
Distribute I/O on all available paths. Add the following content to the /etc/multipath.conf file:
devices {
    device {
        vendor "NVME"
        product ".*"
        path_grouping_policy group_by_prio
    }
}
Note: The /sys/class/nvme-subsystem/nvme-subsys0/iopolicy configuration file has no effect on the I/O distribution when DM Multipath manages the NVMe devices.
Reload the multipathd service to apply the configuration changes:
# multipath -r
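To check that the running daemon applied the new device section, you can dump its effective configuration and search for the vendor string:
# multipathd show config | grep -A 3 '"NVME"'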
Verification
Verify that native NVMe multipathing is disabled:
# cat /sys/module/nvme_core/parameters/multipath
N
Verify that DM Multipath recognizes the NVMe devices:
# multipath -l
eui.00007a8962ab241100a0980000d851c8 dm-6 NVME,NetApp E-Series
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
  |- 0:10:2:2 nvme0n2 259:3 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
  |- 4:11:2:2 nvme4n2 259:28 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
  |- 5:32778:2:2 nvme5n2 259:38 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
  |- 6:32779:2:2 nvme6n2 259:44 active undef running
4.3. Enabling native NVMe multipathing
If native NVMe multipathing is disabled, you can enable it by using the following procedure.
Prerequisites
- The NVMe devices are connected to your system. For more information, see Overview of NVMe over fabric devices.
Procedure
Check if native NVMe multipathing is enabled in the kernel:
# cat /sys/module/nvme_core/parameters/multipath
The command displays one of the following:
N - Native NVMe multipathing is disabled.
Y - Native NVMe multipathing is enabled.
If native NVMe multipathing is disabled, enable it by using one of the following methods:
Using a kernel option:
Remove the nvme_core.multipath=N option from the kernel command line:
# grubby --update-kernel=ALL --remove-args="nvme_core.multipath=N"
On the 64-bit IBM Z architecture, update the boot menu:
# zipl
- Reboot the system.
Using a kernel module configuration file:
Remove the /etc/modprobe.d/nvme_core.conf configuration file:
# rm /etc/modprobe.d/nvme_core.conf
Back up the initramfs file:
# cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).bak.$(date +%m-%d-%H%M%S).img
Rebuild the initramfs:
# dracut --force --verbose
- Reboot the system.
Optional: On the running system, change the I/O policy on NVMe devices to distribute the I/O on all available paths:
# echo "round-robin" > /sys/class/nvme-subsystem/nvme-subsys0/iopolicy
Optional: Set the I/O policy persistently using udev rules. Create the /etc/udev/rules.d/71-nvme-io-policy.rules file with the following content:
ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{iopolicy}="round-robin"
Verification
Verify that your system recognizes the NVMe devices. The following example assumes you have a connected NVMe over fabrics storage subsystem with two NVMe namespaces:
# nvme list
Node             SN                   Model                                    Namespace Usage                      Format           FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     a34c4f3a0d6f5cec     Linux                                    1         250.06 GB / 250.06 GB      512 B + 0 B      4.18.0-2
/dev/nvme0n2     a34c4f3a0d6f5cec     Linux                                    2         250.06 GB / 250.06 GB      512 B + 0 B      4.18.0-2
List all connected NVMe subsystems:
# nvme list-subsys
nvme-subsys0 - NQN=testnqn
\
 +- nvme0 fc traddr=nn-0x20000090fadd597a:pn-0x10000090fadd597a host_traddr=nn-0x20000090fac7e1dd:pn-0x10000090fac7e1dd live
 +- nvme1 fc traddr=nn-0x20000090fadd5979:pn-0x10000090fadd5979 host_traddr=nn-0x20000090fac7e1dd:pn-0x10000090fac7e1dd live
 +- nvme2 fc traddr=nn-0x20000090fadd5979:pn-0x10000090fadd5979 host_traddr=nn-0x20000090fac7e1de:pn-0x10000090fac7e1de live
 +- nvme3 fc traddr=nn-0x20000090fadd597a:pn-0x10000090fadd597a host_traddr=nn-0x20000090fac7e1de:pn-0x10000090fac7e1de live
Check the active transport type. For example, nvme0 fc indicates that the device is connected over the Fibre Channel transport, and nvme tcp indicates that the device is connected over TCP.
If you edited the kernel options, verify that native NVMe multipathing is enabled on the kernel command line:
# cat /proc/cmdline
BOOT_IMAGE=[...] nvme_core.multipath=Y
If you changed the I/O policy, verify that round-robin is the active I/O policy on NVMe devices:
# cat /sys/class/nvme-subsystem/nvme-subsys0/iopolicy
round-robin