8.108. lvm2
Updated lvm2 packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6.
The lvm2 packages include all of the support for handling read and write operations on physical volumes, creating volume groups from one or more physical volumes and creating one or more logical volumes in volume groups.
Bug Fixes
- BZ#820991
- When visible clustered volume groups (VGs) were present on a system using a non-clustered locking type, it was not possible to skip them silently and still return a proper error code. To fix this bug, the "--ignoreskippedcluster" option has been added to several LVM commands, namely pvs, vgs, lvs, pvdisplay, vgdisplay, lvdisplay, vgchange, and lvchange. With this option, clustered VGs are skipped correctly without any warning or error messages, and the return code no longer depends on these clustered VGs.
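For example, the reporting commands can now be run as follows (a sketch; the invocation is illustrative):
pvs --ignoreskippedcluster
vgs --ignoreskippedcluster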
- BZ#834327
- Previously, the lvremove command failed to remove a virtual snapshot device if this device was still open. Consequently, the <virtual_snapshot_name>_vorigin device-mapper device was left on the system after the failed removal, and a manual removal using dmsetup was required to discard it. With this update, lvremove has been modified to properly check the LV open count status before proceeding with the removal operation.
- BZ#861227
- Previously, when the lvconvert command was used with the "--stripes" option, the required supplementary options, such as "--mirrors", "--repair", "--thinpool", or "--type raid*/mirror", were not enforced. Consequently, calling "lvconvert --stripes" without accompanying conversion instructions led to an incomplete conversion. With this update, a condition has been added to enforce the correct syntax. As a result, an error message is now displayed in the described scenario.
- BZ#880414
- Previously, certain lvm2app functions returned values in sectors instead of bytes. This applied to the values of origin_size, vg_extent_size, stripe_size, region_size, chunk_size, seg_start, and pvseg_size. Consequently, the returned lvm2app results were inconsistent and therefore misleading. This behavior has been changed, and all lvm2app values are now returned in bytes.
- BZ#902538
- The lvm2 tools determine the PowerPath major number by searching for an "emcpower" line in the /proc/devices file. Previously, some versions of PowerPath used the ID string "power2". As a consequence, on systems with such an identifier, PowerPath devices were not given the expected precedence over the PowerPath component devices that exhibit the same physical volume UUID. With this update, detection of EMC PowerPath devices works as expected, and the priority of devices is now set properly.
- BZ#902806
- Prior to this update, the lvm2 dmeventd daemon attempted to reset to C locales only through the LANG environment variable. However, when the system sets locales using the LC_ALL variable, this variable has a higher priority than the LANG variable, which led to extensive memory consumption. With this update, LC_ALL is reset to C instead of LANG, thus reducing memory consumption.
- BZ#905254
- With this update, a specific diagnostic message has been added for the case when the lvmetad daemon is already running or its pidfile is locked for any other reason. Trying to start lvmetad while it is already running now returns a message with a clear indication of the problem:
Failed to acquire lock on /var/run/lvmetad.pid. Already running?
- BZ#907487
- Previously, the 'vgreduce --removemissing' command could not be used when missing physical volumes were still used by RAID logical volumes. Now, it is possible for 'vgreduce --removemissing' to replace the failed physical volume with an 'error' segment within the affected RAID logical volumes and remove the PV from the volume group. However, in most cases it is better to replace a failed RAID device with a spare one (with use of 'lvconvert --repair') if possible.
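As a sketch of the two approaches (the volume group and logical volume names are hypothetical), a failed RAID device is preferably repaired, and the missing PV is removed only as a fallback:
lvconvert --repair vg00/raid_lv
vgreduce --removemissing vg00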
- BZ#910104
- Under certain circumstances, cached metadata in the lvmetad daemon could have leaked during metadata updates. With this update, lvmetad has been fixed to prevent the leak.
- BZ#913644
- Previously, if a device failed after the vgexport command was issued, it was impossible to import the volume group. This failure to import also meant that it was impossible to repair the volume group. It is now possible to use the '--force' option with vgimport to import volume groups even if devices are missing.
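For example (the volume group name is hypothetical):
vgimport --force vg00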
- BZ#914143
- When LVM scans devices for LVM metadata, it applies several filters, such as the multipath filter, the MD component filter, or the partition signature filter. Previously, the order in which these filters were applied caused the multipath filter to fail to filter out a multipath component, because the device had already been accessed by other filters. Consequently, I/O errors occurred if the path was not accessible. With this update, the order of filtering has been changed, and the multipath filter now works as expected.
- BZ#919604
- The 'raid1' segment type can be used to provide device fault tolerance for thin pool logical volumes. It is no longer possible to create thin pools on top of logical volumes of the 'mirror' segment type. Existing thin pools with data or metadata areas of the 'mirror' segment type will still function; however, it is recommended to convert them to 'raid1' using the 'lvconvert' command.
- BZ#928537
- Previously, when the pvcreate command was used with the --restorefile and --uuid options and the supplied UUID was incorrect, an internal error message about a memory leak was issued:
Internal error: Unreleased memory pool(s) found.
With this update, the memory leak has been fixed and the error message is no longer displayed.
- BZ#953612
- When the device-mapper-event package is updated to a later version, the package update script attempts to restart the running dmeventd instance and replace it with the new dmeventd daemon. However, the previous version of dmeventd does not recognize the restart notification, and manual intervention is therefore needed in this situation. Previously, the following warning message was displayed:
WARNING: The running dmeventd instance is too old
To provide more precise information and advice on the required action, the following message has been added for the described case:
Failed to restart dmeventd daemon. Please, try manual restart
- BZ#953867
- When using the lvmetad daemon together with the accompanying LVM autoactivation feature, the logical volumes on top of encrypted devices were not automatically activated during system boot. This was caused by ignoring the extra udev event that was artificially generated during system boot to initialize all existing devices. This bug has been fixed, and LVM now properly recognizes the udev event used to initialize the devices at boot, including encrypted devices.
- BZ#954061
- When using the lvmetad daemon together with the accompanying LVM autoactivation feature, the device-mapper devices representing the logical volumes were not refreshed after the underlying PV was unplugged or deactivated and then plugged back in or reactivated. This was caused by a different major and minor number pair being assigned to identify the reconnected device, while the LVs mapped on this device still referenced it by the original pair. This bug has been fixed, and LVM now always refreshes the logical volumes on a PV after the PV is reactivated.
- BZ#962436
- Due to a regression introduced in LVM version 2.02.74, when the optimal_io_size device hint was smaller than the default pe_start size of 1 MiB, this optimal_io_size was ignored and the default size was used. With this update, the optimal_io_size is applied correctly to calculate the PV's pe_start value.
- BZ#967247
- Prior to this update, before adding additional images to a RAID logical volume, the available space was calculated incorrectly. Consequently, if the available space was insufficient, adding these images failed. This bug has been fixed and the calculation is now performed correctly.
- BZ#973519
- Previously, if the nohup command was used together with LVM commands that do not require input, nohup configured the standard input as write-only while LVM tried to reopen it also for reading. Consequently, the commands terminated with the following message:
stdin: fdopen failed: Invalid argument
LVM has been modified so that if the standard input is already open write-only, it does not attempt to reopen it for reading.
- BZ#976104
- Previously, when converting a linear logical volume to a mirror logical volume, the preferred mirror segment type set in the /etc/lvm/lvm.conf configuration file was not always respected. This behavior has been changed, and the segment type specified with the 'mirror_segtype_default' setting in the configuration file is now applied as expected.
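For example, the preferred type can be set in /etc/lvm/lvm.conf; this is a minimal sketch that assumes the setting resides in the global section:
global {
    mirror_segtype_default = "raid1"
}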
- BZ#987693
- Due to a code regression, thin snapshots could become corrupted when the underlying thin pool was created without the '--zero' option. As a consequence, the first 4KB of the snapshot could have been invalidated. This bug has been fixed, and snapshots are no longer corrupted in the aforementioned scenario.
- BZ#989347
- Due to an error in the LVM code for allocating free space contiguous to existing striped space, the lvm2 utility terminated unexpectedly with a segmentation fault when a 3-way striped logical volume was extended using the lvextend command. With this update, the behavior of LVM has been modified, and lvextend now completes the extension without a segmentation fault.
- BZ#995193
- Previously, it was impossible to convert a volume group from clustered to non-clustered with a configuration setting of 'locking_type = 0'. Consequently, problems could arise if the cluster was unavailable and it was necessary to convert the volume group to non-clustered mode. With this update, LVM has been modified to make the aforementioned conversion possible.
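A sketch of such a conversion, assuming a volume group named vg00 and that the locking type is overridden on the command line rather than in lvm.conf:
vgchange -cn vg00 --config 'global {locking_type = 0}'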
- BZ#995440
- Prior to this update, the repair of inconsistent metadata followed different code paths depending on whether the lvmetad daemon was running and enabled. Consequently, the lvmetad version of metadata repair failed to correct the metadata, and a warning message was printed repeatedly by every command until the problem was fixed manually. With this update, the code paths have been reconciled. As a result, metadata inconsistencies are automatically repaired as appropriate, regardless of whether lvmetad is used.
- BZ#997188
- When the lvm_list_pvs_free function from the lvm2app library was called on a system with no physical volumes, lvm2app code tried to free an internal structure that had already been freed before. Consequently, the function terminated with a segmentation fault. This bug has been fixed, and the segmentation fault no longer occurs when calling lvm_list_pvs_free.
- BZ#1007406
- When MD RAID devices were used as PVs and the lvmetad daemon was enabled, the accompanying automatic activation of logical volumes sometimes left incomplete device-mapper devices on the system. Consequently, no further logical volumes could be activated without manual cleanup of the dangling device-mapper devices. This bug has been fixed, and dangling devices are no longer left on the system.
- BZ#1009700
- Previously, LVM commands could become unresponsive when attempting to read an LVM mirror just after a write failure but before the repair command handled the failure. With this update, a new 'ignore_lvm_mirrors' configuration option has been added to avoid this issue. Setting this option to '1' will cause LVM mirrors to be ignored and prevent the described problem. Ignoring LVM mirrors also means that it is impossible to stack volume groups on LVM mirrors. The aforementioned problem is not present with the LVM RAID types, like "raid1". It is recommended to use the RAID segment types especially when attempting to stack volume groups on top of mirrored logical volumes.
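For example, the option can be enabled in /etc/lvm/lvm.conf; this minimal snippet assumes the setting resides in the devices section:
devices {
    ignore_lvm_mirrors = 1
}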
- BZ#1016322
- Prior to this update, a race condition could occur during pool destruction in libdevmapper.so. Consequently, the lvmetad daemon sometimes terminated due to heap corruption, especially under heavier concurrent loads, such as multiple LVM commands executing at once. With this update, correct locking has been introduced to fix the race condition. As a result, lvmetad no longer suffers heap corruption and subsequent crashes.
- BZ#1020304
- The blkdeactivate script iterates over the list of devices given to it as arguments and tries to unmount or deactivate them one by one. However, if unmounting or deactivating a device failed, the iteration did not proceed. Consequently, blkdeactivate kept attempting to process the same device and entered an endless loop. This behavior has been fixed: if blkdeactivate fails to unmount or deactivate any of the devices, that device is properly skipped and blkdeactivate proceeds as expected.
Enhancements
- BZ#814737
- With this update, lvm2 has been enhanced to support the creation of thin snapshots of existing non-thinly-provisioned logical volumes. A thin pool can now be used for these snapshots of non-thin volumes, providing performance gains. Note that the current lvm2 version does not support the merge feature, so unlike older lvm2 snapshots, such a snapshot cannot be merged back into its origin device.
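A minimal sketch, assuming an existing thin pool vg00/pool and a non-thin logical volume vg00/lvol0; the exact option ordering may vary between lvm2 versions:
lvcreate -s --thinpool vg00/pool -n lvol0_thinsnap vg00/lvol0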
- BZ#820203
- LVM now supports validation of configuration files and can report any unrecognized entries or entries with wrong value types, in addition to the existing syntax checking. To support this feature, a new "config" section has been added to the /etc/lvm/lvm.conf configuration file. This section has two settings: "config/checks", which enables or disables the checking (enabled by default), and "config/abort_on_errors", which enables or disables an immediate abort on any invalid configuration entry found (disabled by default). In addition, new options that make use of the newly introduced configuration handling code have been added to the "lvm dumpconfig" command. The "lvm dumpconfig" command now recognizes the following options: --type, --atversion, --ignoreadvanced, --ignoreunsupported, --mergedconfig, --withcomments, --withversions, and --validate.
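For example, the configuration can be checked on demand with the new validation option:
lvm dumpconfig --validate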
- BZ#888641
- Previously, the scm (Storage Class Memory) device was not internally recognized as a partitionable device. Consequently, scm devices could not be used as physical volumes. With this update, the scm device type has been added to the internal list of devices that are known to be partitionable. As a result, physical volumes are supported on scm partitions. Also, a new 'lvm devtypes' command has been added to list all known device types.
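For example, the known device types can be listed with:
lvm devtypes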
- BZ#894136
- When the lvmetad daemon is enabled, metadata is cached in RAM and most LVM commands do not consult on-disk metadata during normal operation. However, when metadata becomes corrupt on disk, LVM may not notice until lvmetad is restarted or the system is rebooted. With this update, the vgck command used for checking VG consistency has been improved to detect such on-disk corruption even while lvmetad is active and the metadata is cached. As a result, users can run the "vgck" command to verify the consistency of on-disk metadata at any time, or they can schedule a periodic check using cron.
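For example (the volume group name is hypothetical):
vgck vg00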
- BZ#903249
- If a device temporarily fails, the kernel notices the interruption and regards the device as disabled. Later, the kernel needs to be notified before it accepts the device as alive again. Previously, LVM did not recognize these changes and the 'lvs' command reported the device as operating normally even though the kernel still regarded the device as failed. With this update, 'lvs' has been modified to print a 'p' (partial) if a device is missing and also an 'r' (refresh/replace) if the device is present but the kernel regards the device as still disabled. When seeing an 'r' attribute for a RAID logical volume, the user can then decide if the array should be refreshed (reloaded into the kernel using 'lvchange --refresh') or if the device should be replaced.
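For example, to inspect the attribute field and then refresh an affected RAID logical volume (the names are hypothetical):
lvs -o name,lv_attr vg00
lvchange --refresh vg00/raid_lv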
- BZ#916746
- With this update, snapshot management handling of COW device size has been improved. This version trims the snapshot COW size to the maximal usable size to avoid unnecessary disk space consumption. It also stops snapshot monitoring once the maximal size is reached.
- BZ#921280
- Support for resizing thin pools on top of more complicated device stacks has been enhanced so that more complex volumes, such as mirrors or RAID volumes, are resized properly. The new lvm2 version now supports extending thin data volumes on RAID volumes. Support for mirrors has been deactivated.
- BZ#921734
- Prior to this update, the "vgchange -c {y|n}" command changed all volume groups accessible on the system to clustered or non-clustered. This could cause an unintentional change, and therefore the following prompt has been added to acknowledge the change:
Change clustered property of all volumes groups? [y/n]
This prompt is displayed only if "vgchange -c {y|n}" is called without specifying target volume groups.
- BZ#924137
- The blkdeactivate utility now suppresses error and information messages from the external tools it calls. Instead, only a summary message, "done" or "skipped", is issued by blkdeactivate. To show these error messages if needed, a new -e/--errors switch has been added to blkdeactivate. Also, a new -v/--verbose switch has been added to display information messages from external tools, together with any possible debug information.
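For example, to run the utility while showing error messages and verbose output:
blkdeactivate -e -v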
- BZ#958511
- With this update, the blkdeactivate utility has been modified to correctly handle file systems mounted with bind (the 'mount -o bind' command). Now, blkdeactivate unmounts all such mount points correctly before trying to deactivate the volumes underneath.
- BZ#969171
- When creating many RAID logical volumes at the same time, it is possible for the background synchronization I/O necessary to calculate parity or copy mirror images to crowd out nominal I/O and cause subsequent logical volume creation to slow dramatically. It is now possible to throttle this initializing I/O via the '--raidmaxrecoveryrate' option to lvcreate. You can use the same argument with lvchange to alter the recovery I/O rate after a logical volume has been created. Reducing the recovery rate will prevent nominal I/O from being crowded out. Initialization will take longer, but the creation of many logical volumes will proceed more quickly.
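A sketch with hypothetical names, assuming the rate is expressed in KiB per second per device:
lvcreate --type raid1 -m 1 -L 10G -n raid_lv --raidmaxrecoveryrate 128 vg00
lvchange --raidmaxrecoveryrate 128 vg00/raid_lv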
- BZ#985976
- With this update, RAID logical volumes that are created with LVM can now be checked with use of scrubbing operations. Scrubbing operations are user-initiated checks to ensure that the RAID volume is consistent. There are two scrubbing operations that can be performed by appending the "check" or "repair" option to the "lvchange --syncaction" command. The "check" operation will examine the logical volume for any discrepancies, but will not correct them. The "repair" operation will correct any discrepancies found.
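For example (the volume group and logical volume names are hypothetical):
lvchange --syncaction check vg00/raid_lv
lvchange --syncaction repair vg00/raid_lv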
- BZ#1003461
- This update adds support for thin external origins to lvm2. This allows any LV to be used as an external origin for a thin volume. All unprovisioned blocks are read from the external origin volume, while all blocks that have been written at least once are read from the thin volume. This functionality is provided by the 'lvcreate --snapshot' command and by the 'lvconvert' command, which converts any LV into a thin LV.
- BZ#1003470
- The error message 'Cannot change discards state for active pool volume "pool volume name"' has been improved to be more comprehensible: 'Cannot change support for discards while pool volume "pool volume name" is active'.
- BZ#1007074
- The repair of corrupted thin pool metadata is now provided by the 'lvconvert --repair' command. For a low-level manual repair, the thin pool metadata volume can be swapped out of the thin-pool LV with the 'lvconvert --poolmetadata swapLV vg/pool' command, and the thin_check, thin_dump, and thin_repair commands can then be used to run a manual recovery operation. After the repair, the thin pool metadata volume can be swapped back. This low-level repair should only be used when the user is fully aware of thin-pool functionality.
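A minimal sketch of the automated repair path, assuming an inactive thin pool named vg00/pool:
lvconvert --repair vg00/pool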
- BZ#1017291
- LVM now recognizes NVM Express devices as a proper block device type.
Users of lvm2 are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.