7.135. lvm2
Updated lvm2 packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6.
The lvm2 packages provide support for Logical Volume Management (LVM).
Bug Fixes
- BZ#837927
- When creating a RAID Logical Volume, if the --regionsize (-R) option (used with the lvcreate command) was not specified, LVs larger than 2 TB could not be created or extended. Consequently, creating or extending such volumes caused errors. With this update, the region size is automatically adjusted upon creation or extension, and large LVs can now be created.
- BZ#834703
- Extending a RAID 4/5/6 Logical Volume failed to work properly because the parity devices were not properly accounted for. This has been corrected by covering the "simple" case, where the LV is extended with the same number of stripes as the original (reducing or extending a RAID 4/5/6 LV with a different number of stripes is not yet implemented). As a result, it is now possible to extend a RAID 4/5/6 Logical Volume.
- BZ#832392
- When the issue_discards=1 configuration option was used or configured in the /etc/lvm/lvm.conf file, moving Physical Volumes via the pvmove command resulted in data loss. The problem has been fixed with this update.
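For reference, a minimal sketch of how this option appears in /etc/lvm/lvm.conf (the value shown is illustrative):
devices {
    # When set to 1, LVM issues discard (TRIM) requests for a PV's
    # freed space, for example when an LV is removed or reduced.
    issue_discards = 1
}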
- BZ#713599, BZ#800801
- Device-mapper devices (including LVM devices) were not deactivated at system shutdown or reboot. Consequently, when device-mapper devices were layered on top of other block devices and those were detached during the shutdown or reboot procedure, any further access to the device-mapper devices ended up with either I/O errors or an unresponsive system, as the underlying devices were unreachable (for example, iSCSI or FCoE devices). With this update, a new blkdeactivate script along with a blk-availability shutdown script have been provided. These scripts unmount and deactivate any existing device-mapper devices before deactivating and detaching the underlying devices on shutdown or reboot. As a result, there are no I/O errors or hangs when using attached storage that detaches itself during the shutdown or reboot procedure.
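A hypothetical manual invocation of the new script (option semantics per blkdeactivate(8); treat the exact flags as assumptions to verify on your system):
~]# blkdeactivate -u -l retry
Here -u asks the script to unmount any mounted device-mapper devices first, and -l retry passes a retry option to the underlying LVM deactivation calls.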
- BZ#619574
- An LVM mirror can be created with three different types of log devices: core (in-memory), disk, and mirrored. The mirrored log is itself redundant and resides on two different Physical Volumes. Previously, if both devices composing the mirrored log were lost, they were not always properly replaced during repair, even if spare devices existed. With this update, a mirrored log is properly replaced with a mirrored log if there are sufficient replacement PVs.
- BZ#832120, BZ#743505
- A mirror Logical Volume can itself have a mirrored log device. When a device in an image of the mirror and its log failed at the same time, it was possible for unexpected I/O errors to appear on the mirror LV. The kernel did not absorb the I/O errors from the failed device by relying on the remaining device. This bug then caused file systems built on the device to respond to the I/O errors (turning read-only in the case of the ext3/4 file systems). The cause was found to be that the mirror was not suspended for repair using the noflush flag. This flag allows the kernel to re-queue I/O requests that need to be retried. Because the kernel was not allowed to re-queue the requests, it had no choice but to return the I/O as errored. This bug has been corrected by allowing the log to be repaired first, so that the repair of the top-level mirror can complete successfully. As a result, the mirror is now properly suspended with the noflush flag.
- BZ#803271
- When using the lvmetad daemon (the global/use_lvmetad=1 LVM2 configuration option) while processing LVM2 commands in a cluster environment (global/locking_type=3), the LVM2 commands did not work correctly and issued various error messages. With this update, if clustered locking is set, the lvmetad daemon is disabled automatically, as this configuration is not yet supported with LVM2. As a result, LVM2 now falls back to non-lvmetad operation if clustered locking is used, and a warning message is issued: "WARNING: configuration setting the use_lvmetad parameter overridden to 0 due to the locking_type 3 parameter. Clustered environment is not supported by the lvmetad daemon yet."
- BZ#855180
- When the user tried to convert a thin snapshot volume into a read-only volume, internal error messages were displayed and the operation failed. With this update, thin snapshot volumes can be converted to read-only mode. Also, for the conversion of a thin pool to read-only mode, an explicit error message about an unsupported feature has been added.
- BZ#801571
- Previously, if a device failed while a RAID Logical Volume was not in-sync, any attempts to fix it failed. This case is now handled; however, the following limitations are to be noted:
- The user cannot repair or replace devices in a RAID Logical Volume that is not active. The tool (the lvconvert --repair command) must know the sync status of the array and can only get that when the array is active.
- The user cannot replace a device in a RAID Logical Volume that has not completed its initial synchronization. Doing so would produce unpredictable results and is therefore disallowed.
- The user can repair a RAID Logical Volume that has not completed its initial synchronization, but some data may not be recoverable because the array had not had time to make that data fully redundant. In this case, a warning is printed and the user is asked whether they would like to proceed.
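A hypothetical repair sequence reflecting the first limitation above (VG and LV names are illustrative):
~]# lvchange -ay vg/raid_lv        # the array must be active so the tool can read its sync status
~]# lvconvert --repair vg/raid_lv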
- BZ#871058
- A race condition in the lvmetad daemon occasionally caused LVM commands to fail intermittently, failing to find a VG that was being updated at the same time by another command. With this update, the race condition no longer occurs.
- BZ#857554
- If the issue_discards option was enabled in the configuration file and the lvremove command was run against a partial Logical Volume where Physical Volumes were missing, the lvremove command terminated unexpectedly. This bug has been fixed. Also, the new "p" attribute in the lvs command output is set when the Logical Volume is partial.
- BZ#820116
- Previously, when there was a Physical Volume in the Volume Group with zero Physical Extents (PEs), meaning the Physical Volume was used to store metadata only, the vgcfgrestore command failed with a "Floating point exception" error, because the command attempted to divide by zero. A proper check for this condition has been added to prevent the error, and now, after using the vgcfgrestore command, VG metadata is successfully written.
- BZ#820229
- Previously, when attempting to rename thin Logical Volumes, the procedure failed with the following error message:
"lvrename Cannot rename <volume_name>: name format not recognized for internal LV <pool_name>"
This bug is now fixed and the user can successfully rename thin Logical Volumes.
- BZ#843546
- Previously, it was not possible to add a Physical Volume to a Volume Group if a device failure occurred in a RAID Logical Volume and there were no spare devices in the VG. Therefore, users could not replace the failed devices in a RAID LV, and the VG could not be made consistent without physically editing LVM metadata. It is now possible to add a PV to a VG with missing or failed devices, and to replace failed devices in a RAID LV with the lvconvert --repair <vg>/<LV> command.
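A sketch of the recovery flow this fix enables, assuming hypothetical device and volume names:
~]# vgextend vg /dev/sdd1             # add a replacement PV to the VG despite missing devices
~]# lvconvert --repair vg/raid_lv     # replace the failed device in the RAID LV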
- BZ#855398
- An improper restriction placed on mirror Logical Volumes caused them to be ignored during activation. Users were unable to create Volume Groups on top of a clustered mirror LV and could not recursively stack clustered VGs. The restriction has been refined to pass over only those mirrors that would cause LVM commands to block indefinitely, and it is now possible to layer a clustered VG on a clustered mirror LV.
- BZ#865035
- When a device was missing from a Volume Group or Logical Volume, tags could not be added to or removed from the LV. If the activation of an LV was based on tagging using the volume_list parameter in the configuration file (lvm.conf), the LV could not be activated. This affected High Availability LVM (HA-LVM), and without the ability to add or remove tags while a device was missing, RAID LVs in an HA-LVM configuration could not be used. This update allows vgchange and lvchange to alter the LVM metadata for a limited set of options while PVs are missing. The --addtag and --deltag options are included; the set of allowable options covers only those that do not cause changes to the device-mapper kernel target and do not alter the structure of the LV.
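A minimal sketch of the tag-based activation pattern that this fix keeps working (the tag and names are illustrative):
activation {
    # In lvm.conf: activate only LVs or VGs carrying this host's tag.
    volume_list = [ "@node1_tag" ]
}
~]# lvchange --addtag node1_tag vg/raid_lv
~]# lvchange -ay vg/raid_lv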
- BZ#845269
- When an LVM command encountered a response problem with the lvmetad daemon, the command could terminate unexpectedly with a segmentation fault. Currently, LVM commands work properly with lvmetad, and crashes no longer occur even if there is a malformed response from lvmetad.
- BZ#823918
- A running LVM process could not switch between the lvmetad and non-lvmetad modes of operation, and this caused the LVM process to terminate unexpectedly with a segmentation fault when polling for the result of a running lvconvert operation. With this update, the segmentation fault no longer occurs.
- BZ#730289
- The clvmd daemon consumed a large amount of memory to process every request. Each request invoked a thread, and by default each thread allocated approximately 9 MB of RAM for its stack. To fix this bug, the default thread stack size has been reduced to 128 KB, which is enough for the current version of LVM to handle all tasks. This leads to a massive reduction in memory used at runtime by the clvmd daemon.
- BZ#869254
- Previously, disabling udev synchronization caused udev verification to be constantly enabled, ignoring the actual user-defined setting. Consequently, libdevmapper/LVM2 incorrectly bypassed udev when processing relevant nodes. The libdevmapper library has been fixed to honor the user's actual settings for udev verification. As a result, udev works correctly even when udev verification and udev synchronization are disabled at the same time.
- BZ#832033
- Previously, when using the lvmetad daemon, passing the --test argument to commands occasionally caused inconsistencies in the cache that lvmetad maintains. Consequently, disk corruption occurred when shared disks were involved. An upstream patch has been applied to fix this bug.
- BZ#870248
- Due to a missing dependency on the device-mapper-persistent-data package, thin pool devices were not monitored on activation. Consequently, unmonitored pools could fill beyond the configured threshold. To fix this bug, the code path for enabling monitoring of thin pools has been fixed, and the missing package dependency has been added. As a result, when monitoring for a thin pool is configured, the dmeventd daemon is enabled to watch for pool overfill.
- BZ#836653
- A failed attempt to reduce the size of a Logical Volume was sometimes not detected, and the lvremove command exited successfully even though it had failed to operate on the LV. With this update, lvremove returns the correct exit code in the described scenario.
- BZ#836663
- When using a Physical Volume (PV) that contained ignored metadata areas, an LVM command, such as pvs, could incorrectly display the PV as being an orphan due to the order in which individual PVs in the VG were processed. With this update, the processing of PVs in a VG has been fixed to properly account for PVs with ignored metadata areas so that the order of processing is no longer important, and LVM commands now always give the same correct result, regardless of PVs with ignored metadata areas.
- BZ#837599
- Issuing the vgscan --cache command (to refresh the lvmetad daemon) did not remove data about Physical Volumes or Volume Groups that no longer existed; it only updated the metadata of existing entities. With this update, the vgscan --cache command removes all metadata that is no longer relevant.
- BZ#862253
- When numerous LVM commands were running in parallel, the lvmetad daemon could deadlock and cause other LVM commands to stop responding. This behavior was caused by a race condition in lvmetad's multi-threaded code. The code has been improved, and now the parallel commands succeed and no deadlocks occur.
- BZ#839811
- Previously, the first attribute flag was incorrectly set to "S" when an invalid snapshot occurred, whereas this value in the first position is supposed to indicate a merging snapshot. An invalid snapshot is normally indicated by capitalizing the fifth Logical Volume attribute character. This bug has been fixed, and the lvs utility now capitalizes the fifth LV attribute character for invalid snapshots instead of the first, as required.
- BZ#842019
- Previously, it was possible to specify incorrect arguments when creating a RAID Logical Volume, which could harmfully affect the created device. These inappropriate arguments are no longer allowed.
- BZ#839796
- Due to incorrect handling of sub-Logical-Volumes (LVs), the pvmove utility was inconsistent and returned a misleading message for RAID. To fix this bug, pvmove has been disallowed from operating on RAID LVs. Now, if it is necessary to move a RAID LV's components from one device to another, the lvconvert --replace <old_pv> <vg>/<lv> <new_pv> command is used.
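A hypothetical invocation of the replacement command described above (device and volume names are illustrative):
~]# lvconvert --replace /dev/sdb1 vg/raid_lv /dev/sdd1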
- BZ#836381
- The kernel does not allow adding images to a RAID Logical Volume while the array is not synchronized. Previously, the LVM RAID code did not check whether the LV was synchronized. As a consequence, an invalid request could be issued, which caused errors. With this update, the aforementioned condition is checked, and the user is now informed that the operation cannot take place until the array is synchronized.
- BZ#855171, BZ#855179
- Prior to this update, the conversion of a thin pool into a mirror resulted in an abort with an error message. As this conversion is not supported, an explicit check which prohibits this conversion before the lvm utility attempts to perform it has been added. Now, an explicit error message is returned stating that the feature is not supported.
- BZ#822248
- Prior to this update, RAID Logical Volumes could become corrupted if they were activated in a clustered Volume Group. To fix this bug, a VG can no longer be changed to a clustered VG if it contains RAID LVs.
- BZ#822243
- Previously, it was possible to create RAID Logical Volumes in a clustered Volume Group. As RAID LVs are not cluster capable and activating them in a cluster could cause data damage, the ability to create RAID LVs in a cluster has been disabled.
- BZ#821007
- Previously, if no last segment on a pre-existing Logical Volume was defined, the normal cling allocation policy was applied, and an LV could be successfully created or extended even though there was not enough space on a single Physical Volume and no additional PV was defined in the lvm.conf file. This update corrects the behavior of the cling allocation policy, and any attempt to create or extend an LV under these circumstances now fails as expected.
- BZ#814782
- The interaction of LVM filters and lvmetad could have led to unexpected and undesirable results. Also, updates to the "filter" settings while the lvmetad daemon was running did not force lvmetad to forget the devices forbidden by the filter. Since the normal "filter" setting in the lvm.conf file is often used on the command line, a new option has been added to lvm.conf (global_filter) which also applies to lvmetad. The traditional "filter" setting only applies at the command level and does not affect device visibility to lvmetad. The options are documented in more detail in the example configuration file.
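A minimal lvm.conf sketch of the distinction (the device patterns are illustrative):
devices {
    # Applies to both the commands and the lvmetad cache:
    global_filter = [ "r|^/dev/drbd.*|", "a|.*|" ]
    # Applies only at the command level; lvmetad still sees these devices:
    filter = [ "a|.*|" ]
}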
- BZ#814777
- Prior to this update, the lvrename utility did not work correctly with thin provisioning (pools, metadata, or snapshots). This bug has been fixed by implementing full support for stacked devices. Now, lvrename handles all types of thin Logical Volumes as expected.
- BZ#861456
- When creating a Logical Volume using the lvcreate command with the --thinpool and --mirror options, the thinpool flag was ignored and a regular Logical Volume was created. With this update, use of the --thinpool option with the --mirror option is no longer allowed, and the lvcreate command fails with a proper error message under these circumstances.
- BZ#861841
- Previously, the lvm_percent_to_float() function declared in the lvm2app.h header file did not have an implementation in the lvm2app library. Any program that tried to use this function failed at link time. A patch for lvm2app.h has been applied to fix this bug, and lvm_percent_to_float() now works as expected.
- BZ#813766
- Prior to this update, the LVM utilities returned spurious warning messages during the boot process if the use_lvmetad = 1 option was set in the lvm.conf file. This has been fixed, and the warning messages are no longer issued during boot.
- BZ#862095
- Due to the unimplemented "data_percent" property in the lvm2app library, an incorrect value of -1 was returned for thin volumes. This bug has been fixed by adding proper support for the lvm_lv_get_property(lv, "data_percent") call. Now, lvm2app returns correct values.
- BZ#870534
- Due to a wrong initialization sequence, running an LVM command caused the LVM utility to abort instead of proceeding with scanning-based metadata discovery (requested by using the --config "global {use_lvmetad=0}" option). This bug occurred only when an LVM command was run with the lvmetad cache daemon running. The bug has been fixed and LVM no longer aborts.
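A hypothetical invocation of the override described in this entry (the reporting command is illustrative):
~]# pvs --config "global {use_lvmetad=0}"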
- BZ#863401
- Previously, the pvscan --cache command failed to read part of the LVM1 metadata. As a consequence, when using LVM1 (legacy) metadata and the lvmetad daemon together, LVM commands could run into infinite loops when invoked. This bug has been fixed, and LVM1 and lvmetad now work together as expected.
- BZ#863881
- Due to missing lvm2app library support, incorrect values for the thin snapshot "origin" field were reported. A patch has been applied so that the lvm_lv_get_property(lv, "origin") call returns the correct response.
- BZ#865850
- Previously, the degree to which RAID 4/5/6 Logical Volumes had completed their initial array synchronization (that is, the initial parity calculations) was not printed in the lvs command output. This information is now included under a heading that has been changed from Copy% to Cpy%Sync. Users can now request the Cpy%Sync information directly via lvs with either the lvs -o copy_percent or the lvs -o sync_percent option.
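For example, a hypothetical query of the synchronization progress (the VG name is illustrative):
~]# lvs -o name,sync_percent vg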
- BZ#644752
- Previously, when using Physical Volumes, an exclusive lock was held to prevent other PV commands from running concurrently in case any Volume Group metadata needed to be read in addition. This is no longer necessary when using lvmetad, as lvmetad caches VG metadata and thus avoids taking the exclusive lock. As a consequence, numerous PV commands reading VG metadata can be run in parallel without the need for the exclusive lock.
- BZ#833180
- Attempting to convert a linear Logical Volume to a RAID 4/5/6 Logical Volume is not allowed. When the user tried to execute this operation, a message was returned indicating that the original LV had been striped instead of linear. The messages have been updated to provide correct information, and only messages with correct and relevant content are now returned under these circumstances.
- BZ#837114
- Previously, an attempt to test the create command for a RAID Logical Volume resulted in failure even though the process itself succeeded when run without the --test argument. With this update, a test run of the create command properly indicates success if the command is successful.
- BZ#837098
- Previously, a user-initiated resynchronization of a RAID Logical Volume failed to cause the RAID LV to perform the actual resynchronization. This bug has been fixed, and the LV now performs the resynchronization as expected.
- BZ#837093
- When a RAID or mirror Logical Volume is created with the --nosync option, an attribute with this information is attached to the LV. Previously, a RAID1 LV did not clear this attribute when the LV was converted to a linear LV and back, even though it underwent a complete resynchronization in the process. With this update, --nosync handling has been fixed, and the attribute is now properly cleared after the LV conversion.
- BZ#836391
- Due to an error in the code, user-initiated resynchronization of a RAID Logical Volume was ineffective. With this update, the lvchange --resync command works on a RAID LV and makes the LV undergo a complete resynchronization.
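A hypothetical invocation (VG and LV names are illustrative):
~]# lvchange --resync vg/raid_lv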
- BZ#885811
- Previously, an error in the Volume Group (VG) auto-activation code could cause LVM commands to terminate unexpectedly with the following message:
Internal error: Handler needs existing VG
With this update, cached VG metadata is used instead of relying on the absent MDA content of the last discovered PV. As a result, the aforementioned error no longer occurs.
- BZ#885993
- Prior to this update, testing the health status of a mirror caused a minor memory leak. To fix this bug, all resources taken in the function are now released, and memory leaks in long-running processes (such as the dmeventd daemon) no longer occur.
- BZ#887228
- Previously, a nested mutex lock could result in a deadlock in the lvmetad daemon. As a consequence, Logical Volume Manager (LVM) commands trying to talk to lvmetad became unresponsive. The nested lock has been removed, and the deadlock no longer occurs.
- BZ#877811
- Previously, the lvconvert utility handled the -y and -f command line options inconsistently when repairing mirror or RAID volumes. Whereas the -f option alone worked correctly, when used along with the -y option, the -f option was ignored. With this update, lvconvert handles the -f option correctly, as described in the manual page.
- BZ#860338
- When Physical Volumes were stored on read-only disks, the vgchange -ay command failed to activate any Logical Volumes, and the following error messages were returned:
/dev/dasdf1: open failed: Read-only file system
device-mapper: reload ioctl failed: Invalid argument
1 logical volume(s) in volume group "v-9c0ed7a0-1271-452a-9342-60dacafe5d17" now active
However, these error messages did not reflect the nature of the bug. With this update, the command has been fixed, and a Volume Group can now be activated on a read-only disk.
- BZ#832596
- An error in the space allocation logic caused Logical Volume creation with the --alloc anywhere option to occasionally fail. RAID 4/5/6 systems were particularly affected. The bug was fixed to avoid picking already-full areas for RAID devices.
Enhancements
- BZ#783097
- Previously, device-mapper driver UUIDs could be used to create the /dev content with the udev utility. If mangling was not enabled, udev created incorrect entries for UUIDs containing unsupported characters. With this update, character-mangling support in the libdevmapper library and the dmsetup utility for characters not on the udev-supported whitelist has been enhanced to process device-mapper UUIDs the same way as device-mapper names. The UUIDs and names are now always controlled by the same mangling mode, so the existing dmsetup --manglename option affects UUIDs as well. Furthermore, the dmsetup info -c -o command has new fields to display: mangled_uuid and unmangled_uuid.
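A hypothetical query of the new fields (the output depends on the devices present):
~]# dmsetup info -c -o name,mangled_uuid,unmangled_uuid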
- BZ#817866, BZ#621375
- Previously, users had to activate Volume Groups and Logical Volumes manually by calling vgchange -ay or lvchange -ay on the command line. This update adds the autoactivation feature: LVM2 now lets the user specify precisely which Logical Volumes should be activated at boot time and which ones should remain inactive. Currently, the feature is supported only on non-clustered and complete VGs. Note that to activate the feature, lvmetad must be enabled (the global/use_lvmetad=1 LVM2 configuration option).
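A minimal lvm.conf sketch of the feature, assuming the activation/auto_activation_volume_list setting and illustrative names:
global {
    use_lvmetad = 1
}
activation {
    # Auto-activate everything in vg_data, plus one specific LV:
    auto_activation_volume_list = [ "vg_data", "vg_misc/lv_home" ]
}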
- BZ#869402
- The manual page for the lvconvert utility has been updated with the newly supported options for converting existing volumes into a thin pool.
- BZ#814732
- Previously, the user could not specify conversion of a Logical Volume already containing pool information (a "pre-formatted LV") into a legitimate thin pool LV. Furthermore, it was rather complex to guide the allocation mechanism to use the proper Physical Volumes (PVs) for the data and metadata LVs. As the lvconvert utility is easier to use in these cases, lvconvert has been enhanced to support conversion of pre-formatted LVs into a thin pool volume. With the --thinpool data_lv_name and --poolmetadata metadata_lv_name options, the user may use a pre-formatted LV to construct a thin pool as with the lvcreate utility.
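A hypothetical conversion of two pre-formatted LVs into a thin pool (LV names are illustrative):
~]# lvconvert --thinpool vg/data_lv --poolmetadata vg/metadata_lv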
- BZ#636001
- A new optional metadata caching daemon (lvmetad) is available as part of this LVM2 update, along with udev integration for device scanning. Repeated scans of all block devices in the system with each LVM command are avoided if the daemon is enabled. The original behavior can be restored at any time by disabling lvmetad in the lvm.conf file.
- BZ#814766
- Previously, no default behavior could be used to fine-tune the performance of some workloads. Now, thin pool support has been enhanced with configurable discards support. The user may now select from three types of behavior: passdown (the default) passes discard requests through to the thin pool's backing device; nopassdown processes discards only at the thin pool level, and requests are not passed to the backing device; ignore ignores discard requests.
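A hypothetical way to change the behavior on an existing thin pool (names are illustrative; check lvchange(8) in your release for the exact option support):
~]# lvchange --discards nopassdown vg/thin_pool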
- BZ#844492
- LVM support for 2-way mirror RAID10 has been added. LVM is now able to create, remove, and resize RAID10 Logical Volumes. To create a RAID10 Logical Volume, specify the individual RAID parameters similarly as for other RAID types, as in the following example:
~]# lvcreate --type raid10 -m 1 -i 2 -L 1G -n lv vg
Note that the -m and -i arguments behave in the same way they would for other segment types. That is, -i is the total number of stripes while -m is the number of (additional) copies (that is, -m 1 -i 2 gives 2 stripes on top of 2-way mirrors).
- BZ#861843
- The lvm2app library now reports the data_percent field, which indicates how full snapshots, thin pools, and volumes are. The Logical Volume needs to be active to obtain this information.
- BZ#814824
- The thin pool now supports non-power-of-2 chunk sizes. However, the size must be a multiple of 64 KiB.
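For example, a hypothetical thin pool creation with a 192 KiB chunk size, which is a multiple of 64 KiB but not a power of 2 (names are illustrative):
~]# lvcreate -L 10G -c 192K -T vg/pool0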
- BZ#823660
- The -l option has been added to the lvmetad daemon to allow logging of wire traffic and more detailed information on internal operation to the standard error stream. This new feature is mainly useful for troubleshooting and debugging.
- BZ#834031
- Previously, it was possible to pass an incorrect argument on the command line when creating a RAID Logical Volume, for example the --mirrors option for RAID5. Consequently, erroneous and unexpected results were produced. With this update, invalid arguments are caught and reported.
- BZ#823667
- The lvmdump utility has been extended to include a dump of the internal lvmetad daemon state, helping with troubleshooting and analysis of lvmetad-related problems.
- BZ#830250
- In Red Hat Enterprise Linux 6.4, LVM adds support for Micron PCIe Solid State Drives (SSDs) as devices that may form a part of a Volume Group.
- BZ#883416
- The DM_DISABLE_UDEV environment variable is now recognized and takes precedence over other existing settings when using the LVM2 tools, dmsetup, and libdevmapper to fall back to non-udev operation. Setting the DM_DISABLE_UDEV environment variable provides a more convenient way of disabling udev support in libdevmapper, dmsetup, and the LVM2 tools globally, without a need to modify any existing configuration settings. This is mostly useful if the system environment does not use udev.
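A hypothetical one-off use of the variable (the command and device name are illustrative):
~]# DM_DISABLE_UDEV=1 dmsetup remove my_device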
- BZ#829221
- Physical Volumes (PVs) are now automatically restored from the missing state after they become reachable again, even if they had no active metadata areas. In cases of transient inaccessibility of a PV, for example with Internet Small Computer System Interface (iSCSI) or other unreliable transports, LVM previously required manual action to restore a PV for use even if there was no room for conflict, because there was no active metadata area (MDA) on the PV. With this update, the manual action is no longer required if the transiently inaccessible PV has no active metadata areas.
Users of lvm2 should upgrade to these updated packages, which fix these bugs and add these enhancements.
7.135.2. RHBA-2013:1504 — lvm2 bug fix update
Updated lvm2 packages that fix several bugs are now available for Red Hat Enterprise Linux 6.
The lvm2 packages include all of the support for handling read and write operations on physical volumes, creating volume groups from one or more physical volumes and creating one or more logical volumes in volume groups.
Bug Fix
- BZ#1024911
- When there were visible clustered Volume Groups in the system, it was not possible to silently skip them with a proper return error code while a non-clustered locking type was used (the global/locking_type lvm.conf setting). To fix this bug, the "--ignoreskippedcluster" option has been added for several LVM commands (pvs, vgs, lvs, pvdisplay, vgdisplay, lvdisplay, vgchange, and lvchange). With this option, the clustered Volume Groups are skipped correctly, and the return error code does not depend on these clustered Volume Groups.
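For example, a hypothetical report that silently skips clustered VGs while keeping a clean exit code:
~]# vgs --ignoreskippedcluster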
Users of lvm2 are advised to upgrade to these updated packages, which fix this bug.
7.135.3. RHBA-2013:1471 — lvm2 bug fix update
Updated lvm2 packages that fix several bugs are now available for Red Hat Enterprise Linux 6.
The lvm2 packages include all of the support for handling read and write operations on physical volumes, creating volume groups from one or more physical volumes and creating one or more logical volumes in volume groups.
Bug Fixes
- BZ#965810
- Previously, on certain HP servers using Red Hat Enterprise Linux 6 with the xfs file system, a regression in the code caused the lvm2 utility to ignore the "optimal_io_size" parameter and use a 1 MB starting offset. Consequently, there was an increase in disk write operations, which caused data misalignment and considerably lowered the performance of the servers. With this update, lvm2 no longer ignores "optimal_io_size", and data misalignment no longer occurs in this scenario.
- BZ#965968
- The lvm2 tools determine the PowerPath major number by searching for an "emcpower" line in the /proc/devices file. Previously, some versions of PowerPath used the ID string "power2". As a consequence, on systems with such an identifier, PowerPath devices were not given the expected precedence over PowerPath components which exhibit the same physical volume UUID. With this update, detection of EMC power devices works as expected, and the priority of devices is now set properly.
- BZ#1016083
- Due to an error in the LVM allocation code, lvm2 attempted to allocate free space contiguous to existing striped space. When trying to extend a 3-way striped logical volume using the lvextend command, the lvm2 utility terminated unexpectedly with a segmentation fault. With this update, the behavior of LVM has been modified, and lvextend now completes the extension without a segmentation fault.
Users of lvm2 are advised to upgrade to these updated packages, which fix these bugs.