
5.179. lvm2


Updated lvm2 packages that add an enhancement are now available for Red Hat Enterprise Linux 6.
The lvm2 packages provide support for Logical Volume Management (LVM).

Enhancement

BZ#883034
In cases of transient inaccessibility of a physical volume (PV), such as with iSCSI or another unreliable transport, LVM required manual action to restore the PV for use even if there was no room for conflict. With this update, the manual action is no longer required if the transiently inaccessible PV had no active metadata areas (MDAs): such a PV is now automatically restored from the MISSING state once it becomes reachable again.
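Previously, restoring such a PV required a manual step once the device became reachable again; a sketch of one such step, assuming a hypothetical volume group myvg and device /dev/sdb1:
~]$ vgextend --restoremissing myvg /dev/sdb1
With this update, that step remains necessary only for PVs that carry active metadata areas.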
Users of lvm2 are advised to upgrade to these updated packages, which add this enhancement.
Updated lvm2 packages that fix several bugs are now available for Red Hat Enterprise Linux 6.
The lvm2 packages provide support for Logical Volume Management (LVM).

Bug Fixes

BZ#843808
When using a physical volume (PV) that contained ignored metadata areas, an LVM command, such as pvs, could incorrectly display the PV as an orphan even though it belonged to a volume group (VG). This incorrect behavior also depended on the order in which each PV in the VG was processed. With this update, the processing of PVs in a VG has been fixed to properly account for PVs with ignored metadata areas, so the order of processing no longer matters and LVM commands now always give the same correct result.
BZ#852438
Previously, if the "issue_discards=1" configuration option was used with an LVM command, moving physical volumes using the pvmove command resulted in data loss. This update fixes the bug in pvmove and the data loss no longer occurs in the described scenario.
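The option in question lives in the devices section of /etc/lvm/lvm.conf; the following sketch shows the affected setting (the value shown is illustrative):
devices {
    issue_discards = 1
}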
BZ#852440
When the "--alloc anywhere" command-line option was specified for the lvcreate command, an attempt to create a logical volume failed if "raid4", "raid5", or "raid6" was specified for the "--type" command-line option as well. A patch has been provided to address this bug and lvcreate now succeeds in the described scenario.
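For example, a command of the following form previously failed and now succeeds (volume group name, size, and stripe count are illustrative):
~]$ lvcreate --type raid5 -i 3 -L 10G --alloc anywhere -n mylv myvg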
BZ#852441
An error in the way RAID 4/5/6 space was calculated prevented users from increasing the size of these logical volumes. This update provides a patch to fix this bug, but it comes with two limitations: a RAID 4/5/6 logical volume cannot yet be reduced in size, and a RAID 4/5/6 logical volume cannot be extended with a stripe count different from the original.
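For example, extending a RAID 5 logical volume now works as long as the original stripe count is kept (names and sizes are illustrative):
~]$ lvextend -L +5G myvg/myraid5lv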
BZ#867009
If the "issue_discards=1" configuration option was set in the /etc/lvm/lvm.conf file, it was possible to issue a discard request to a PV that was missing from a VG. Consequently, the dmeventd, lvremove, or vgreduce utilities could terminate unexpectedly with a segmentation fault. This bug has been fixed and discard requests are no longer issued on missing devices. As the discard operation is irreversible, a confirmation prompt has also been added to the lvremove utility to ask the user before discarding an LV, thus increasing the robustness of the discard logic.
Users of lvm2 are advised to upgrade to these updated packages, which fix these bugs.
Updated lvm2 packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6.
The lvm2 packages contain support for Logical Volume Management (LVM).

Bug Fixes

BZ#683270
Previously, when a mirror was up-converted, it was impossible to add another image (copy) to a mirrored logical volume whose activation was regulated by tags in the volume_list parameter of lvm.conf. The code has been improved to temporarily copy the mirror's tags to the incoming image so that it can be properly activated. As a result, high-availability (HA) LVM service relocation now works as expected.
BZ#700128
Previously, the lvremove command could fail with the error message "Can't remove open logical volume" even though the volume itself was no longer in use. In most cases, this was caused by the udev daemon, which was still processing events that preceded the removal and kept the device open while lvremove tried to remove it. This update adds a retry loop to help avoid this problem: the removal is attempted several times before the command fails completely.
BZ#733522
Previously, if a snapshot with a virtual origin was created in a clustered volume group (VG), it was incorrectly activated on other nodes as well, and the command failed with "Error locking on node" error messages. This has been fixed, and a snapshot with a virtual origin, created using --virtualsize, is now properly activated exclusively on the local node only.
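A snapshot with a virtual origin is created by combining --snapshot with --virtualsize, for example (names and sizes are illustrative):
~]$ lvcreate --snapshot --virtualsize 1G -L 100M -n mysnap myvg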
BZ#738484
Previously, if clvmd received an invalid request through its socket (for example an incomplete header was sent), the clvmd process could terminate unexpectedly or stay in an infinite loop. Additional checks have been added so that such invalid packets now cause a proper error reply to the client and clvmd no longer crashes in the scenario described.
BZ#739190
The device-mapper daemon (dmeventd) is used, for example, for monitoring LVM-based mirrors and snapshots. When attempting to create a snapshot using lvm2, the lvcreate -s command resulted in a dlopen error if dmeventd had been upgraded after the last system restart. With this update, dmeventd is restarted during a package update to fetch new versions of installed libraries, avoiding any code divergence that could result in a symbol lookup failure.
BZ#740290
Restarting clvmd with the -S option should preserve exclusive locks on a restarted cluster node. However, the -E option, which should pass on such exclusive locks, had errors in its implementation. Consequently, exclusive locks were not preserved in a cluster after a restart. This update implements proper support for the -E option; as a result, exclusive locks are now preserved across a clvmd restart.
BZ#742607
If a device-mapper device was left open by a process, it could not be removed with the dmsetup [--force] remove device_name command. The --force option failed, reporting that the device was busy. Consequently, the underlying block device could not be detached from the system. With this update, dmsetup has a new command wipe_table to wipe the table of the device. Any subsequent I/O sent to the device returns errors and any devices used by the table, that is to say devices to which the I/O is forwarded, are closed. As a result, if a long-running process keeps a device open after it has finished using it, the underlying devices can be released before that process exits.
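For example, assuming a hypothetical device mydev that a long-running process still holds open, wiping the table releases the underlying devices immediately, and the device itself can be removed once the process closes it:
~]$ dmsetup wipe_table mydev
~]$ dmsetup remove mydev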
BZ#760946
Using a prefix or command names in LVM command output (the log/prefix and log/command_names directives in lvm.conf) caused the lvm2-monitor init script to fail to start monitoring the relevant VGs. The init script first acquires the list of VGs by calling the vgs command and then uses its output for further processing; if the prefix or command names directive was in effect, the VG name in that output was not formatted as expected. To solve this, the lvm2-monitor init script now overrides the log/prefix and log/command_names settings so the command's output is always suitable for use in the init script.
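For reference, these are the two directives involved, as they might appear in the log section of /etc/lvm/lvm.conf (the values shown are illustrative):
log {
    prefix = "  "
    command_names = 1
}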
BZ#761267
Prior to this update, the lvconvert --merge command did not check if the snapshot in question was invalid before proceeding. Consequently, the operation failed part-way through, leaving an invalid snapshot. This update disallows an invalid snapshot to be merged. In addition, it allows the removal of an invalid snapshot that was to be merged on next activation, or that was invalidated while merging (the user will be prompted for confirmation).
BZ#796602
Previously, if a volume group name was supplied together with the logical volume name, lvconvert --splitmirrors failed to strip it off, which led to an attempt to use an invalid logical volume name. This release detects and validates any supplied volume group name correctly.
BZ#799071
Previously, if pvmove was used on a clustered VG, temporarily activated pvmove devices were improperly activated cluster-wide (that is, on all nodes). Consequently, in some situations, such as when using tags or an HA-LVM configuration, pvmove failed. This update fixes the problem: pvmove now activates all such devices exclusively if the logical volumes to be moved are already exclusively activated.
BZ#807441
Previously, when the vgreduce command was executed with a non-existent VG, it unnecessarily tried to unlock the VG at command exit. However, the VG was not locked at that time, as it had already been unlocked as part of the error handling process. Consequently, the vgreduce command failed with the internal error "Attempt to unlock unlocked VG" when executed with a non-existent VG. This update improves the code to check properly so that only locked VGs are unlocked at vgreduce command exit.
BZ#816711
Previously, requests for information regarding the health of the log device were processed locally. Consequently, in the event of a device failure that affected the log of a cluster mirror, it was possible for a failure to be ignored and this could cause I/O to the mirror LV to become unresponsive. With this update, the information is requested from the cluster so that log device failures are detected and processed as expected.

Enhancements

BZ#464877
Most LVM commands require an accurate view of the LVM metadata stored on the disk devices on the system. With the current LVM design, if this information is not available, LVM must scan all the physical disk devices in the system. This requires a significant amount of I/O operations in systems that have a large number of disks. The purpose of the LV Metadata daemon (lvmetad) is to eliminate the need for this scanning by dynamically aggregating metadata information each time the status of a device changes. These events are signaled to lvmetad by udev rules. If lvmetad is not running, LVM performs a scan as it normally would. This feature is provided as a Technology Preview and is disabled by default in Red Hat Enterprise Linux 6.3. To enable it, refer to the use_lvmetad parameter in the /etc/lvm/lvm.conf file, and enable the lvmetad daemon by configuring the lvm2-lvmetad init script.
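As a sketch of the enablement steps on Red Hat Enterprise Linux 6.3 (assuming the stock configuration paths and init scripts), set use_lvmetad = 1 in the global section of /etc/lvm/lvm.conf, then activate the init script:
~]$ chkconfig lvm2-lvmetad on
~]$ service lvm2-lvmetad start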
BZ#593119
The expanded RAID support in LVM is now fully supported in Red Hat Enterprise Linux 6.3, with the exception of RAID logical volumes in HA-LVM. LVM now has the capability to create RAID 4/5/6 Logical Volumes and supports a new implementation of mirroring. The MD (software RAID) modules provide the back-end support for these new features.
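For example, a RAID 6 logical volume with three data stripes can now be created directly with LVM (names and sizes are illustrative):
~]$ lvcreate --type raid6 -i 3 -L 10G -n myraid myvg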
BZ#637693
When a new LV is defined, anyone who has access to the LV can read any data already present on the LUNs in the extents allocated to the new LV. Users can create thin volumes using the "lvcreate -T" command to meet the requirement that zeros are returned when reading a block that has not previously been written to. The default behavior of thin volumes is the provisioning of zeroed data blocks. The size of provisioned blocks is in the range of 64KB to 1GB; the bigger the blocks, the longer the initial provisioning takes. After the first write, performance should be close to that of a native linear volume. Note that in a clustered environment thin volumes may only be activated exclusively.
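A minimal sketch of the two-step workflow, assuming a hypothetical volume group myvg: first create a thin pool, then create a thin volume bound to it:
~]$ lvcreate -T -L 10G myvg/mypool
~]$ lvcreate -T -V 1G -n mythinlv myvg/mypool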
BZ#658639
This update greatly reduces the time spent on creating identical data structures, which allows even a very large number of devices (in the thousands) to be activated and deactivated in a matter of seconds. In addition, the number of system calls from device scanning has been reduced, which also gives a 10%-30% speed improvement.
BZ#672314
Some LVM segment types, such as mirror, have single-machine and cluster-aware variants. Others, such as snapshot and the RAID types, have only single-machine variants. When the cluster attribute of a volume group (VG) is switched, the aforementioned segment types must be inactive, so that the appropriate single-machine or cluster variant can be reloaded, or so that the activation can be made exclusive where necessary. This update disallows changing the cluster attribute of a VG while RAID LVs are active.
BZ#731785
The dmsetup command now supports displaying block device names for any devices listed in the deps, ls and info command output. For the dmsetup deps and ls command, it is possible to switch among devno (major and minor number, the default and the original behavior), devname (mapping name for a device-mapper device, block device name otherwise) and blkdevname (always display a block device name). For the dmsetup info command, it is possible to use the new blkdevname and blkdevs_used fields.
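For example (the device names in the output are whatever happens to be present on the system):
~]$ dmsetup ls -o blkdevname
~]$ dmsetup deps -o blkdevname
~]$ dmsetup info -c -o name,blkdevname,blkdevs_used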
BZ#736486
Device-mapper allows any character except / to be used in a device-mapper name. However, this conflicts with udev, whose character whitelist is restricted to 0-9, A-Z, a-z and #+-.:=@_. Using any blacklisted character in a device-mapper name results in incorrect /dev entries being created by udev. To solve this problem, the libdevmapper library, together with the dmsetup command, now supports encoding of udev-blacklisted characters using the \xNN format, where NN is the hex value of the character; this format is supported by udev. There are three mangling modes in which libdevmapper can operate: none (no mangling), hex (always mangle any blacklisted character) and auto (detect and mangle only if not mangled yet). The default mode is auto, and any libdevmapper user is affected unless this setting is changed by the respective libdevmapper call. To support this feature, the dmsetup command has a new --manglename <mangling_mode> option to define the name mangling mode used while processing device-mapper names. The dmsetup info -c -o command has two new fields to display: mangled_name and unmangled_name. There is also a new dmsetup mangle command that automatically renames any existing device-mapper names to their correct form. It is strongly advised to issue this command after an update to correct any existing device-mapper names.
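For example, to inspect both forms of a name and then rename any pre-existing devices to their mangled form (a sketch; output depends on the devices present):
~]$ dmsetup info -c -o name,mangled_name,unmangled_name
~]$ dmsetup mangle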
BZ#743640
It is now possible to extend a mirrored logical volume without inducing a synchronization of the new portion. The --nosync option to lvextend will cause the initial synchronization to be skipped. This can save time and is acceptable if the user does not intend to read what they have not written.
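For example (names and sizes are illustrative):
~]$ lvextend --nosync -L +10G myvg/mymirrorlv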
BZ#746792
LVM mirroring has a variety of options for the bitmap write-intent log: core, disk, mirrored. The cluster log daemon (cmirrord) is not multi-threaded and can handle only one request at a time. When a log is stacked on top of a mirror (which itself contains a 'core' log), it creates a situation that cannot be solved without threading. When the top level mirror issues a "resume", the log daemon attempts to read from the log device to retrieve the log state. However, the log is a mirror which, before issuing the read, attempts to determine the "sync" status of the region of the mirror which is to be read. This sync status request cannot be completed by the daemon because it is blocked on a read I/O to the very mirror requesting the sync status. With this update, the mirrored option is not available in the cluster context to prevent this problem from occurring.
BZ#769293
A new LVM configuration file parameter, activation/read_only_volume_list, makes it possible to activate particular volumes always in read-only mode, regardless of the actual permissions on the volumes concerned. This parameter overrides the --permission rw option stored in the metadata.
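The parameter accepts the same kind of list as activation/volume_list; a sketch with illustrative entries in the activation section of /etc/lvm/lvm.conf:
activation {
    read_only_volume_list = [ "myvg/mylv", "@mytag" ]
}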
BZ#771419
In previous versions, when monitoring of multiple snapshots was enabled, dmeventd would log redundant informative messages in the form Another thread is handling an event. Waiting.... This needlessly flooded system log files. This behavior has been fixed in this update.
BZ#773482
A new implementation of LVM copy-on-write (cow) snapshots is available in Red Hat Enterprise Linux 6.3 as a Technology Preview. The main advantage of this implementation, compared to the previous implementation of snapshots, is that it allows many virtual devices to be stored on the same data volume. This implementation also provides support for arbitrary depth of recursive snapshots. This feature is for use on a single-system. It is not available for multi-system access in cluster environments. For more information, refer to the documentation of the -s or --snapshot option in the lvcreate man page.
BZ#773507
Logical Volumes (LVs) can now be thinly provisioned to manage a storage pool of free space to be allocated to an arbitrary number of devices when needed by applications. This allows creation of devices that can be bound to a thinly provisioned pool for late allocation when an application actually writes to the LV. The thinly-provisioned pool can be expanded dynamically if and when needed for cost-effective allocation of storage space. In Red Hat Enterprise Linux 6.3, this feature is introduced as a Technology Preview. For more information, refer to the lvcreate man page. Note that the device-mapper-persistent-data package is required.
BZ#796408
LVM now recognizes EMC PowerPath devices (emcpower) and uses them in preference to the devices out of which they are constructed.
BZ#817130
LVM now has two implementations for creating mirrored logical volumes: the mirror segment type and the raid1 segment type. The raid1 segment type contains design improvements over the mirror segment type that benefit its operation with snapshots. As a result, users who employ snapshots of mirrored volumes are encouraged to use the raid1 segment type rather than the mirror segment type. Users who continue to use the mirror segment type as the origin LV for snapshots should plan for the possibility of the following disruptions.
When a snapshot is created or resized, the operation forces I/O through the underlying origin and does not complete until that I/O does. If a device failure occurs on a mirrored logical volume (of the mirror segment type) that is the origin of the snapshot being created or resized, the mirror delays I/O until it is reconfigured. The mirror cannot be reconfigured until the snapshot operation completes, but the snapshot operation cannot complete unless the mirror releases the I/O. Thus, the problem manifests itself when the mirror suffers a failure at the same time as a snapshot creation or resize.
There is no current solution to this problem beyond converting the mirror from the mirror segment type to the raid1 segment type. In order to convert an existing mirror from the mirror segment type to the raid1 segment type, perform the following action:
~]$ lvconvert --type raid1 <VG>/<mirrored LV>
This operation can only be undone using the vgcfgrestore command.
With the current version of LVM2, if the mirror segment type is used to create a new mirror LV, a warning message is issued to the user about possible problems and it suggests using the raid1 segment type instead.
Users of lvm2 should upgrade to these updated packages, which fix these bugs and add these enhancements.
Updated lvm2 packages that fix one bug are now available for Red Hat Enterprise Linux 6.
The lvm2 packages include all of the support for handling read and write operations on physical volumes, creating volume groups from one or more physical volumes and creating one or more logical volumes in volume groups.

Bug Fix

BZ#965810
Previously, on certain HP servers running Red Hat Enterprise Linux 6 with the XFS file system, a regression in the code caused the lvm2 utility to ignore the "optimal_io_size" parameter and use a 1MB starting offset instead. Consequently, data was misaligned, which increased the number of disk write operations and considerably lowered the performance of the servers. With this update, lvm2 no longer ignores "optimal_io_size", and data misalignment no longer occurs in this scenario.
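To verify the alignment that "optimal_io_size" influences, the data area offset of a PV can be inspected; for example (the device name is illustrative):
~]$ pvs -o +pe_start /dev/sdb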
Users of lvm2 are advised to upgrade to these updated packages, which fix this bug.