Chapter 5. Managing LVM logical volumes
A logical volume is a virtual block storage device that a file system, database, or application can use. To create an LVM logical volume, you combine physical volumes (PVs) into a volume group (VG). This creates a pool of disk space out of which you can allocate LVM logical volumes (LVs).
5.1. Overview of logical volumes
An administrator can grow or shrink logical volumes without destroying data, unlike standard disk partitions. If the physical volumes in a volume group are on separate drives or RAID arrays, then administrators can also spread a logical volume across the storage devices.
You can lose data if you shrink a logical volume to a smaller capacity than the data on the volume requires. Further, some file systems are not capable of shrinking. To ensure maximum flexibility, create logical volumes to meet your current needs, and leave excess storage capacity unallocated. You can safely extend logical volumes to use unallocated space, depending on your needs.
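For example, assuming a volume group named myvg that still has unallocated space and a logical volume mylv that carries a file system, a command similar to the following extends the volume by 1 GiB and grows the file system in the same step; the names and size here are only illustrative:

# lvextend --size +1G --resizefs /dev/myvg/mylv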
On AMD, Intel, ARM systems, and IBM Power Systems servers, the boot loader cannot read LVM volumes. You must make a standard, non-LVM disk partition for your /boot partition. On IBM Z, the zipl boot loader supports /boot on LVM logical volumes with linear mapping. By default, the installation process always creates the / and swap partitions within LVM volumes, with a separate /boot partition on a physical volume.
The following are the different types of logical volumes:
- Linear volumes
- A linear volume aggregates space from one or more physical volumes into one logical volume. For example, if you have two 60GB disks, you can create a 120GB logical volume. The physical storage is concatenated.
- Striped logical volumes
- When you write data to an LVM logical volume, the file system lays the data out across the underlying physical volumes. You can control the way the data is written to the physical volumes by creating a striped logical volume. For large sequential reads and writes, this can improve the efficiency of the data I/O.
  Striping enhances performance by writing data to a predetermined number of physical volumes in round-robin fashion. With striping, I/O can be done in parallel. In some situations, this can result in near-linear performance gain for each additional physical volume in the stripe.
- RAID logical volumes
- LVM supports RAID levels 0, 1, 4, 5, 6, and 10. RAID logical volumes are not cluster-aware. When you create a RAID logical volume, LVM creates a metadata subvolume that is one extent in size for every data or parity subvolume in the array.
- Thin-provisioned logical volumes (thin volumes)
- Using thin-provisioned logical volumes, you can create logical volumes that are larger than the available physical storage. Creating a thinly provisioned set of volumes allows the system to allocate only the storage that you actually use instead of the full amount of storage that is requested.
- Snapshot volumes
- The LVM snapshot feature provides the ability to create virtual images of a device at a particular instant without causing a service interruption. When a change is made to the original device (the origin) after a snapshot is taken, the snapshot feature makes a copy of the changed data area as it was prior to the change so that it can reconstruct the state of the device.
- Thin-provisioned snapshot volumes
- Using thin-provisioned snapshot volumes, you can store more virtual devices on the same data volume. Thinly provisioned snapshots are useful because you do not copy all of the data that you want to capture at a given time.
- Cache volumes
- LVM supports the use of fast block devices, such as SSD drives, as write-back or write-through caches for larger, slower block devices. Users can create cache logical volumes to improve the performance of their existing logical volumes, or create new cache logical volumes composed of a small and fast device coupled with a large and slow device.
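As a rough sketch of how some of these volume types map to lvcreate options, the following commands could create a striped, RAID1, thin-provisioned, and snapshot volume. The volume group myvg, the origin volume mylv, and all sizes and names are illustrative assumptions, not part of the procedures in this chapter:

# lvcreate --type striped --stripes 2 --stripesize 64 -L 1G -n stripelv myvg
# lvcreate --type raid1 --mirrors 1 -L 1G -n mirrorlv myvg
# lvcreate --type thin-pool -L 1G -n pool0 myvg
# lvcreate --type thin --virtualsize 10G --thinpool pool0 -n thinlv myvg
# lvcreate --snapshot -L 100M -n mysnap /dev/myvg/mylv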
5.2. Creating an LVM logical volume
Prerequisites
- The lvm2 package is installed.
- The volume group is created. For more information, see Creating LVM volume group.
Procedure
Create a logical volume:
# lvcreate -n mylv -L 500M myvg
  Logical volume "mylv" successfully created.
Use the -n option to set the LV name to mylv, and the -L option to set the size of the LV. The size is given in megabytes in this example, but you can use other units. The LV type is linear by default, but you can specify the desired type by using the --type option.

Important: The command fails if the VG does not have a sufficient number of free physical extents for the requested size and type.
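Before creating the volume, you can optionally check how many free physical extents the VG has and how large each extent is, for example:

# vgs -o +vg_free_count,vg_extent_size myvg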
View the created logical volume by using any one of the following commands, according to your requirements:
The lvs command provides logical volume information in a configurable form, displaying one line per logical volume:

# lvs
  LV   VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  mylv myvg -wi-ao---- 500.00m
The lvdisplay command displays logical volume properties, such as size, layout, and mapping, in a fixed format:

# lvdisplay -v /dev/myvg/mylv
  --- Logical volume ---
  LV Path                /dev/myvg/mylv
  LV Name                mylv
  VG Name                myvg
  LV UUID                YTnAk6-kMlT-c4pG-HBFZ-Bx7t-ePMk-7YjhaM
  LV Write Access        read/write
  [..]
The lvscan command scans for all logical volumes in the system and lists them:

# lvscan
  ACTIVE            '/dev/myvg/mylv' [500.00 MiB] inherit
Create a file system on the logical volume. The following command creates an xfs file system on the logical volume:

# mkfs.xfs /dev/myvg/mylv
meta-data=/dev/myvg/mylv       isize=512    agcount=4, agsize=32000 blks
         =                     sectsz=512   attr=2, projid32bit=1
         =                     crc=1        finobt=1, sparse=1, rmapbt=0
         =                     reflink=1
data     =                     bsize=4096   blocks=128000, imaxpct=25
         =                     sunit=0      swidth=0 blks
naming   =version 2            bsize=4096   ascii-ci=0, ftype=1
log      =internal log         bsize=4096   blocks=1368, version=2
         =                     sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                 extsz=4096   blocks=0, rtextents=0
Discarding blocks...Done.
Mount the logical volume and report the file system disk space usage:
# mount /dev/myvg/mylv /mnt

# df -h
Filesystem               1K-blocks  Used   Available Use% Mounted on
/dev/mapper/myvg-mylv       506528  29388     477140   6% /mnt
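The mount shown above does not persist across reboots. As a sketch only, and not part of this procedure, an /etc/fstab entry similar to the following would mount the volume persistently; test it with mount -a before rebooting:

/dev/myvg/mylv    /mnt    xfs    defaults    0 0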
Additional resources
- lvcreate(8), lvdisplay(8), lvs(8), lvscan(8), lvm(8), and mkfs.xfs(8) man pages
5.3. Creating a RAID0 striped logical volume
A RAID0 logical volume spreads logical volume data across multiple data subvolumes in units of stripe size. The following procedure creates an LVM RAID0 logical volume called mylv that stripes data across the disks.
Prerequisites
- You have created three or more physical volumes. For more information about creating physical volumes, see Creating LVM physical volume.
- You have created the volume group. For more information, see Creating LVM volume group.
Procedure
Create a RAID0 logical volume from the existing volume group. The following command creates the RAID0 volume mylv from the volume group my_vg. The volume is 2G in size, with three stripes and a stripe size of 4 kB:
# lvcreate --type raid0 -L 2G --stripes 3 --stripesize 4 -n mylv my_vg
  Rounding size 2.00 GiB (512 extents) up to stripe boundary size 2.00 GiB (513 extents).
  Logical volume "mylv" created.
Create a file system on the RAID0 logical volume. The following command creates an ext4 file system on the logical volume:
# mkfs.ext4 /dev/my_vg/mylv
Mount the logical volume and report the file system disk space usage:
# mount /dev/my_vg/mylv /mnt

# df
Filesystem               1K-blocks  Used  Available Use% Mounted on
/dev/mapper/my_vg-mylv     2002684  6168    1875072   1% /mnt
Verification
View the created RAID0 striped logical volume:
# lvs -a -o +devices,segtype my_vg
  LV              VG    Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices                                             Type
  mylv            my_vg rwi-a-r--- 2.00g                                                     mylv_rimage_0(0),mylv_rimage_1(0),mylv_rimage_2(0)  raid0
  [mylv_rimage_0] my_vg iwi-aor--- 684.00m                                                   /dev/sdf1(0)                                        linear
  [mylv_rimage_1] my_vg iwi-aor--- 684.00m                                                   /dev/sdg1(0)                                        linear
  [mylv_rimage_2] my_vg iwi-aor--- 684.00m                                                   /dev/sdh1(0)                                        linear
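If you also want to confirm the stripe parameters of the new volume, one possible additional check is to request the stripe count and stripe size fields from lvs:

# lvs -o +stripes,stripe_size my_vg/mylv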
5.4. Renaming LVM logical volumes
This procedure describes how to rename an existing logical volume mylv to mylv1.
Procedure
If the logical volume is currently mounted, unmount the volume:
# umount /mnt
Replace /mnt with the mount point.
Rename an existing logical volume:
# lvrename myvg mylv mylv1
  Renamed "mylv" to "mylv1" in volume group "myvg"
You can also rename the logical volume by specifying the full paths to the devices:
# lvrename /dev/myvg/mylv /dev/myvg/mylv1
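To confirm the rename, you can list the logical volumes in the volume group; this check is an optional addition to the procedure:

# lvs -o lv_name myvg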
Additional resources
- lvrename(8) man page
5.5. Removing a disk from a logical volume
This procedure describes how to remove a disk from an existing logical volume, either to replace the disk or to use the disk as part of a different volume.
To remove a disk, you must first move the extents on the LVM physical volume to a different disk or set of disks.
Procedure
View the used and free space of the physical volumes in the volume group:

# pvs -o+pv_used
  PV         VG   Fmt  Attr PSize    PFree    Used
  /dev/vdb1  myvg lvm2 a--  1020.00m       0  1020.00m
  /dev/vdb2  myvg lvm2 a--  1020.00m       0  1020.00m
  /dev/vdb3  myvg lvm2 a--  1020.00m 1008.00m   12.00m
Move the data to another physical volume:
If there are enough free extents on the other physical volumes in the existing volume group, use the following command to move the data:
# pvmove /dev/vdb3
  /dev/vdb3: Moved: 2.0%
  ...
  /dev/vdb3: Moved: 79.2%
  ...
  /dev/vdb3: Moved: 100.0%
If there are not enough free extents on the other physical volumes in the existing volume group, use the following commands to add a new physical volume, extend the volume group by using the newly created physical volume, and move the data to this physical volume:
# pvcreate /dev/vdb4
  Physical volume "/dev/vdb4" successfully created

# vgextend myvg /dev/vdb4
  Volume group "myvg" successfully extended

# pvmove /dev/vdb3 /dev/vdb4
  /dev/vdb3: Moved: 33.33%
  /dev/vdb3: Moved: 100.00%
Remove the physical volume:
# vgreduce myvg /dev/vdb3
  Removed "/dev/vdb3" from volume group "myvg"
If a logical volume contains a physical volume that fails, you cannot use that logical volume. To remove missing physical volumes from a volume group, you can use the --removemissing parameter of the vgreduce command, if there are no logical volumes that are allocated on the missing physical volumes:

# vgreduce --removemissing myvg
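As an optional follow-up that is not part of the procedure above, you can confirm that /dev/vdb3 no longer belongs to the volume group, and wipe its LVM label if the disk is leaving LVM entirely:

# pvs
# pvremove /dev/vdb3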
Additional resources
- pvmove(8), vgextend(8), vgreduce(8), and pvs(8) man pages
5.6. Changing physical drives in volume groups using the web console
You can change the drive in a volume group using the RHEL 8 web console.
Prerequisites
- A new physical drive for replacing the old or broken one.
- The configuration expects that physical drives are organized in a volume group.
5.6.1. Adding physical drives to volume groups in the web console
You can add a new physical drive or other type of volume to an existing volume group by using the RHEL 8 web console.
Prerequisites
- You have installed the RHEL 8 web console. For instructions, see Installing and enabling the web console.
- The cockpit-storaged package is installed on your system.
- A volume group is created.
- A new drive is connected to the machine.
Procedure
- Log in to the RHEL 8 web console. For details, see Logging in to the web console.
- Click Storage.
- In the Storage table, click the volume group to which you want to add physical drives.
- On the LVM2 volume group page, click .
- In the Add Disks dialog box, select the preferred drives and click .
Verification
- On the LVM2 volume group page, check the Physical volumes section to verify whether the new physical drives are available in the volume group.
5.6.2. Removing physical drives from volume groups in the web console
If a logical volume includes multiple physical drives, you can remove one of the physical drives online.
The system automatically moves all data from the drive to be removed to other drives during the removal process. Note that this can take some time.
The web console also verifies whether there is enough space to remove the physical drive.
Prerequisites
- You have installed the RHEL 8 web console. For instructions, see Installing and enabling the web console.
- The cockpit-storaged package is installed on your system.
- A volume group with more than one physical drive connected.
Procedure
- Log in to the RHEL 8 web console.
- Click Storage.
- In the Storage table, click the volume group from which you want to remove physical drives.
- On the LVM2 volume group page, scroll to the Physical volumes section.
- Click the menu button next to the physical volume you want to remove.
- From the drop-down menu, select the option to remove the physical volume.

  The RHEL 8 web console verifies whether the logical volume has enough free space to remove the disk. If there is no free space to transfer the data, you cannot remove the disk, and you must first add another disk to increase the capacity of the volume group. For details, see Adding physical drives to volume groups in the web console.
5.7. Removing LVM logical volumes
This procedure describes how to remove an existing logical volume /dev/myvg/mylv1 from the volume group myvg.
Procedure
If the logical volume is currently mounted, unmount the volume:
# umount /mnt
If the logical volume exists in a clustered environment, deactivate the logical volume on all nodes where it is active. Use the following command on each such node:
# lvchange --activate n vg-name/lv-name
Remove the logical volume using the lvremove utility:

# lvremove /dev/myvg/mylv1

Do you really want to remove active logical volume "mylv1"? [y/n]: y
Logical volume "mylv1" successfully removed
Note: In this case, the logical volume has not been deactivated. If you explicitly deactivated the logical volume before removing it, you would not see the prompt verifying whether you want to remove an active logical volume.
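To confirm the removal, you can check that the volume no longer appears in the volume group; this verification is an optional addition to the documented steps:

# lvs myvg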
Additional resources
- lvremove(8) man page
5.8. Managing LVM logical volumes by using RHEL system roles
Use the storage role to perform the following tasks:
- Create an LVM logical volume in a volume group consisting of multiple disks.
- Create an ext4 file system with a given label on the logical volume.
- Persistently mount the ext4 file system.
Prerequisites
- An Ansible playbook including the storage role
5.8.1. Managing logical volumes by using the storage RHEL system role
The example Ansible playbook applies the storage role to create an LVM logical volume in a volume group.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example ~/playbook.yml, with the following content:

- hosts: managed-node-01.example.com
  roles:
    - rhel-system-roles.storage
  vars:
    storage_pools:
      - name: myvg
        disks:
          - sda
          - sdb
          - sdc
        volumes:
          - name: mylv
            size: 2G
            fs_type: ext4
            mount_point: /mnt/dat
- The myvg volume group consists of the following disks: /dev/sda, /dev/sdb, and /dev/sdc.
- If the myvg volume group already exists, the playbook adds the logical volume to the volume group.
- If the myvg volume group does not exist, the playbook creates it.
- The playbook creates an Ext4 file system on the mylv logical volume, and persistently mounts the file system at /mnt.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
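One possible way to verify the result, assuming your Ansible inventory resolves managed-node-01.example.com and your user can escalate privileges there, is to run an ad hoc command against the managed node:

$ ansible managed-node-01.example.com -m command -a 'lvs myvg' -b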
Additional resources
- /usr/share/ansible/roles/rhel-system-roles.storage/README.md file
- /usr/share/doc/rhel-system-roles/storage/ directory
5.8.2. Additional resources
- For more information about the storage role, see Managing local storage by using RHEL system roles.
5.9. Removing LVM volume groups
You can remove an existing volume group by using the vgremove command.
Prerequisites
- The volume group contains no logical volumes. To remove logical volumes from a volume group, see Removing LVM logical volumes.
Procedure
If the volume group exists in a clustered environment, stop the lockspace of the volume group on all other nodes. Use the following command on all nodes except the node where you are performing the removal:
# vgchange --lockstop vg-name
Wait for the lock to stop.
Remove the volume group:
# vgremove vg-name
  Volume group "vg-name" successfully removed
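You can optionally confirm that the volume group no longer exists; this check is an addition to the documented procedure:

# vgs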
Additional resources
- vgremove(8) man page