Chapter 2. Multipath devices


DM Multipath provides a way of organizing the I/O paths logically by creating a single multipath device on top of the underlying devices. Without DM Multipath, the system treats each path from a server node to a storage controller as a separate device, even when the I/O path connects the same server node to the same storage controller.

2.1. Multipath device identifiers

When new devices are brought under the control of DM Multipath, they are created in the /dev/mapper/ and /dev/ directories.

Note

Any devices of the form /dev/dm-X are for internal use only and should never be used by the administrator directly.

The following describes multipath device names:

  • When the user_friendly_names configuration option is set to no, which is the default, the name of a multipath device is set to its World Wide Identifier (WWID). The device name is /dev/mapper/WWID. The device is also created in the /dev/ directory, named /dev/dm-X.
  • Alternatively, you can set the user_friendly_names option to yes in the /etc/multipath.conf file, as shown in the sketch after the following note. This sets the alias in the multipath section to a node-unique name of the form mpathN. The device names are /dev/mapper/mpathN and /dev/dm-X. However, the name is not guaranteed to be the same on all nodes that use the multipath device. Similarly, if you set the alias option in the /etc/multipath.conf file, the name is not automatically consistent across all nodes in the cluster.
Note

This should not cause any difficulties if you use LVM to create logical devices from the multipath device. To keep your multipath device names consistent on every node, Red Hat recommends disabling the user_friendly_names option.
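For reference, the user_friendly_names option is set in the defaults section of /etc/multipath.conf. The following is a minimal sketch of that section:

    defaults {
        # Set to "no" (the default) to name devices by their WWID instead
        # of the mpathN form.
        user_friendly_names yes
    }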

For example, a node with two HBAs attached to a storage controller with two ports by means of a single unzoned FC switch sees four devices: /dev/sda, /dev/sdb, /dev/sdc, and /dev/sdd. DM Multipath creates a single device with a unique WWID that reroutes I/O to those four underlying devices according to the multipath configuration.

In addition to the user_friendly_names and alias options, a multipath device also has other attributes. You can modify these attributes for a specific multipath device by creating an entry for that device in the multipaths section of the /etc/multipath.conf file.
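For illustration, an entry in the multipaths section might look like the following sketch; the WWID, alias, and path_grouping_policy values are placeholders that you would replace with values appropriate for your device:

    multipaths {
        multipath {
            # The WWID and alias below are placeholders.
            wwid  3600d0230000000000e13955cc3757800
            alias mpath_example
            path_grouping_policy failover
        }
    }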


2.2. Multipath devices in logical volumes

After creating multipath devices, you can use a multipath device name just as you would use a physical device name when creating a Logical Volume Manager (LVM) physical volume. For example, if /dev/mapper/mpatha is the name of a multipath device, the pvcreate /dev/mapper/mpatha command marks /dev/mapper/mpatha as a physical volume.

You can use the resulting LVM physical device when you create an LVM volume group just as you would use any other LVM physical device.
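For example, a typical sequence might look like the following; the multipath device, volume group, and logical volume names are illustrative:

    # Mark the multipath device as an LVM physical volume.
    pvcreate /dev/mapper/mpatha

    # Create a volume group on the multipath device.
    vgcreate vg_san /dev/mapper/mpatha

    # Create a 10 GiB logical volume in that volume group.
    lvcreate -L 10G -n lv_data vg_san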

To filter out all the sd devices, add the filter = [ "r/block/", "r/disk/", "r/sd./", "a/./" ] line to the devices section of the /etc/lvm/lvm.conf file, as shown in the example below.
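In context, the filter line belongs inside the devices section of /etc/lvm/lvm.conf; a sketch of how that section might look:

    devices {
        # Reject the underlying sd paths and accept everything else,
        # including the /dev/mapper/ multipath devices.
        filter = [ "r/block/", "r/disk/", "r/sd./", "a/./" ]
    }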

Note

If you attempt to create an LVM physical volume on a whole device on which you have configured partitions, the pvcreate command fails. The Anaconda and Kickstart installation programs create empty partition tables on every block device if you do not specify otherwise. If you want to use the whole device instead of creating a partition, remove the existing partitions from the device. You can remove existing partitions with the kpartx -d device command and the fdisk utility. If your system has block devices that are greater than 2 TB, use the parted utility to remove partitions.
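For example, assuming a multipath device named /dev/mapper/mpatha with a single stale partition, the cleanup might look like the following sketch:

    # Remove stale partition mappings from the multipath device.
    kpartx -d /dev/mapper/mpatha

    # On devices larger than 2 TB, remove the partition with parted
    # (the partition number 1 is illustrative).
    parted /dev/mapper/mpatha rm 1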

When you create an LVM logical volume that uses active/passive multipath arrays as the underlying physical devices, you can optionally include filters in the /etc/lvm/lvm.conf file to exclude the disks that underlie the multipath devices. If these devices are not filtered and the array automatically changes the active path to the passive path when it receives I/O, multipath fails over and fails back whenever LVM scans the passive path.

The kernel changes the active/passive state of a path by automatically detecting the correct hardware handler to use. For active/passive paths that require intervention to change their state, multipath automatically uses this hardware handler to do so as necessary. If the kernel does not detect the correct hardware handler automatically, you can configure which handler to use with the hardware_handler option in the multipath.conf file. For active/passive arrays that require a command to make the passive path active, LVM prints a warning message when it scans these passive paths.
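As a sketch, a device entry that explicitly sets a hardware handler might look like the following; the vendor and product strings are placeholders, and the handler value depends on your storage array:

    devices {
        device {
            # Vendor and product strings are placeholders; match them
            # to the values your array reports.
            vendor               "EXAMPLE"
            product              "EXAMPLE-ARRAY"
            hardware_handler     "1 alua"
            path_grouping_policy group_by_prio
        }
    }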

Depending on your configuration, LVM may print any of the following messages:

  • LUN not ready:

    end_request: I/O error, dev sdc, sector 0
    sd 0:0:0:3: Device not ready: <6>: Current: sense key: Not Ready
        Add. Sense: Logical unit not ready, manual intervention required
  • Read failed:

    /dev/sde: read failed after 0 of 4096 at 0: Input/output error

These errors can occur for the following reasons:

  • Multipath is not set up on storage devices that are providing active/passive paths to a machine.
  • Paths are accessed directly, instead of through the multipath device.
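To verify that DM Multipath manages the paths, and to identify the /dev/mapper/ device to use instead of the individual sd devices, you can list the current multipath topology:

    # Display the multipath devices and the sd paths grouped under each one.
    multipath -ll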

