5.4. device-mapper-multipath
The device-mapper-multipath packages provide tools to manage multipath devices using the device-mapper multipath kernel module.
- By default, the multipathd service starts before the iscsi service. This provides multipathing support early in the boot process and is necessary for multipathed iSCSI SAN boot setups. However, once started, the multipathd service adds paths as it is informed about them by udev. As soon as the multipathd service detects a path that belongs to a multipath device, it creates the device. If the first path that multipathd notices is a passive path, it attempts to make that path active. If it later adds a more optimal path, multipathd activates that path instead. In some cases, this can cause significant overhead during startup. If you are experiencing such performance problems, configure the multipathd service to start after the iscsi service. This does not apply to systems where the root device is a multipathed iSCSI device, since the system would become unbootable. To move the service start time, run the following commands:
# mv /etc/rc5.d/S06multipathd /etc/rc5.d/S14multipathd
# mv /etc/rc3.d/S06multipathd /etc/rc3.d/S14multipathd
To restore the original start time, run the following command:

# chkconfig multipathd resetpriorities
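As a quick sanity check after moving the links (a sketch; the exact S-numbers on your system depend on the installed packages), you can list both init links and verify that the multipathd link now sorts after the iscsi link:

# ls /etc/rc3.d/ | grep -E 'iscsi|multipathd'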
- When using dm-multipath, if features "1 queue_if_no_path" is specified in /etc/multipath.conf, then any process that issues I/O will hang until one or more paths are restored. To avoid this, set no_path_retry [N] in /etc/multipath.conf (where [N] is the number of times the system should retry a path); see the sketch below. When you do, remove the features "1 queue_if_no_path" option from /etc/multipath.conf as well.

If you need to use "1 queue_if_no_path" and experience the issue noted here, use dmsetup to edit the policy at runtime for a particular LUN (that is, one for which all the paths are unavailable). To illustrate, run dmsetup message [device] 0 "fail_if_no_path", where [device] is the multipath device name (for example, mpath2; do not specify the path) for which you want to change the policy from "queue_if_no_path" to "fail_if_no_path". (BZ#419581)
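For illustration, a minimal /etc/multipath.conf stanza setting no_path_retry in the defaults section (the retry count of 5 is an arbitrary example, not a recommendation):

defaults {
    no_path_retry 5
}

And the corresponding runtime change for a hypothetical multipath device named mpath2:

# dmsetup message mpath2 0 "fail_if_no_path"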
- When a LUN is deleted on a configured storage system, the change is not reflected on the host. In such cases, lvm commands will hang indefinitely when dm-multipath is used, as the LUN has now become stale. To work around this, delete all device and mpath link entries in /etc/lvm/.cache specific to the stale LUN.

To find out what these entries are, run the following command:

ls -l /dev/mpath | grep [stale LUN]

For example, if [stale LUN] is 3600d0230003414f30000203a7bc41a00, the following results may appear:

lrwxrwxrwx 1 root root 7 Aug 2 10:33 /3600d0230003414f30000203a7bc41a00 -> ../dm-4
lrwxrwxrwx 1 root root 7 Aug 2 10:33 /3600d0230003414f30000203a7bc41a00p1 -> ../dm-5

This means that 3600d0230003414f30000203a7bc41a00 is mapped to two mpath links: dm-4 and dm-5. As such, the following lines should be deleted from /etc/lvm/.cache:

/dev/dm-4
/dev/dm-5
/dev/mapper/3600d0230003414f30000203a7bc41a00
/dev/mapper/3600d0230003414f30000203a7bc41a00p1
/dev/mpath/3600d0230003414f30000203a7bc41a00
/dev/mpath/3600d0230003414f30000203a7bc41a00p1
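One way to remove those entries non-interactively is with sed; this is a sketch assuming the stale WWID and dm nodes from the example above. Note that the patterns dm-4 and dm-5 would also match names such as dm-40, so adjust them if other device names share a prefix, and consider backing up the cache file first:

# cp /etc/lvm/.cache /etc/lvm/.cache.bak
# sed -i -e '/3600d0230003414f30000203a7bc41a00/d' -e '/dm-4/d' -e '/dm-5/d' /etc/lvm/.cache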
- Running the multipath command with the -ll option can cause the command to hang if one of the paths is on a blocking device. Note that the driver does not fail a request after some time if the device does not respond. This is caused by the cleanup code, which waits until the path checker request either completes or fails. To display the current multipath state without hanging the command, use multipath -l instead. (BZ#214838)
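For example:

# multipath -l

The -l option prints the topology from the current kernel state without running the path checkers, so it should not block on an unresponsive device; -ll additionally queries path status, which is where the hang can occur.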