Chapter 10. Snapshot of logical volumes
Using the LVM snapshot feature, you can create virtual images of a volume, for example, /dev/sda, at a particular instant without causing a service interruption.
10.1. Overview of snapshot volumes
When you modify the original volume (the origin) after you take a snapshot, the snapshot feature makes a copy of the modified data area as it was prior to the change so that it can reconstruct the state of the volume. When you create a snapshot, full read and write access to the origin stays possible.
Since a snapshot copies only the data areas that change after the snapshot is created, the snapshot feature requires a minimal amount of storage. For example, with a rarely updated origin, 3-5 % of the origin's capacity is sufficient to maintain the snapshot. Snapshots are not a substitute for a backup procedure: they are virtual copies, not an actual media backup.
The size of the snapshot controls the amount of space set aside for storing the changes to the origin volume. For example, if you create a snapshot and then completely overwrite the origin, the snapshot must be at least as big as the origin volume to hold the changes. Monitor the size of the snapshot regularly. For example, a short-lived snapshot of a read-mostly volume, such as /usr, needs less space than a long-lived snapshot of a volume that receives many writes, such as /home.
If a snapshot is full, the snapshot becomes invalid because it can no longer track changes on the origin volume. To avoid the snapshot becoming invalid, you can configure LVM to automatically extend a snapshot whenever its usage exceeds the snapshot_autoextend_threshold value. Snapshots are fully resizable and you can perform the following operations:
- If you have the storage capacity, you can increase the size of the snapshot volume to prevent it from getting dropped.
- If the snapshot volume is larger than you need, you can reduce the size of the volume to free up space that is needed by other logical volumes, as shown in the example after this list.
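For example, a minimal sketch of both operations, assuming a snapshot named snap in the volume group vg001 (the names used later in this chapter). Note that lvreduce prompts for confirmation before shrinking the volume:
# lvextend -L +500M vg001/snap
# lvreduce -L -200M vg001/snap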
Snapshot volumes provide the following benefits:
- Most typically, you take a snapshot when you need to perform a backup on a logical volume without halting the live system that is continuously updating the data.
- You can run the fsck command on a snapshot file system to check the file system integrity and determine whether the original file system requires repair, as shown in the sketch after this list.
- Because the snapshot is read/write, you can test applications against production data by taking a snapshot and running tests against the snapshot without touching the real data.
- You can create LVM volumes for use with Red Hat Virtualization. You can use LVM snapshots to create snapshots of virtual guest images. These snapshots can provide a convenient way to modify existing guests or create new guests with minimal additional storage.
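For example, a minimal sketch of the file system check mentioned above, assuming a hypothetical origin volume vg001/home that contains an ext4 file system. The check runs against the unmounted snapshot, so the origin stays in use:
# lvcreate --size 500M --name homesnap --snapshot /dev/vg001/home
# fsck -n /dev/vg001/homesnap
# lvremove /dev/vg001/homesnap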
10.2. Creating a snapshot of the original volume
Use the lvcreate command to create a snapshot of the original volume (the origin). A snapshot of a volume is writable. Unlike thinly-provisioned snapshots, a snapshot volume is by default activated together with its origin by the normal activation commands. LVM does not support creating a snapshot volume that is larger than the sum of the origin volume's size and the required metadata size for the volume. If you specify a larger snapshot volume, LVM creates a snapshot volume that is only as large as required for the size of the origin.
The nodes in a cluster do not support LVM snapshots. You cannot create a snapshot volume in a shared volume group. However, if you need to create a consistent backup of data on a shared logical volume you can activate the volume exclusively and then create the snapshot.
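A minimal sketch of that workflow, assuming a hypothetical shared volume group shared_vg managed by lvmlockd that contains a logical volume named data:
# lvchange --activate ey shared_vg/data
# lvcreate --size 100M --name datasnap --snapshot shared_vg/data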
The following procedure creates an origin logical volume named origin and a snapshot volume of this original volume named snap.
Prerequisites
- You have created volume group vg001. For more information, see Creating LVM volume group.
Procedure
Create a logical volume named origin from the volume group vg001:
# lvcreate -L 1G -n origin vg001
  Logical volume "origin" created.
Create a snapshot logical volume named snap of /dev/vg001/origin that is 100 MB in size:
# lvcreate --size 100M --name snap --snapshot /dev/vg001/origin
  Logical volume "snap" created.
You can also use the -L argument instead of --size, -n instead of --name, and -s instead of --snapshot to create a snapshot. If the original logical volume contains a file system, you can mount the snapshot logical volume on an arbitrary directory to access the contents of the file system and run a backup while the original file system continues to be updated.
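For example, a minimal sketch of mounting the snapshot for a backup, assuming a hypothetical mount point /mnt/snap and archive path. If the file system is XFS, add the nouuid mount option because the snapshot carries the same UUID as the origin:
# mkdir -p /mnt/snap
# mount /dev/vg001/snap /mnt/snap
# tar -czf /root/origin-backup.tar.gz -C /mnt/snap .
# umount /mnt/snap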
Display the origin volume and the current percentage of the snapshot volume being used:
# lvs -a -o +devices
  LV     VG    Attr       LSize   Pool Origin Data%  Meta% Move Log Cpy%Sync Convert Devices
  origin vg001 owi-a-s---   1.00g                                                    /dev/sde1(0)
  snap   vg001 swi-a-s--- 100.00m      origin 0.00                                   /dev/sde1(256)
You can also display the status of the logical volume /dev/vg001/origin with all of its snapshot logical volumes and their status, such as active or inactive, by using the lvdisplay /dev/vg001/origin command.
Warning: Space in the snapshot LV is consumed after the origin LV is written to. The lvs command reports the current snapshot space usage in the Data% (data_percent) field. If the snapshot space reaches 100%, the snapshot becomes invalid and unusable. An invalid snapshot is reported with I in the fifth position of the Attr column, or in the lv_snapshot_invalid reporting field of lvs. You can remove the invalid snapshot by using the lvremove command.
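For example, assuming the invalid snapshot is vg001/snap:
# lvremove /dev/vg001/snap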
Optional: Extend the snapshot before its space becomes 100% full and it becomes invalid by using any one of the following options:
Configure LVM to automatically extend the snapshot by using the following parameters in the /etc/lvm/lvm.conf file:
snapshot_autoextend_threshold
- Extends the snapshot after its usage exceeds the value set for this parameter. By default, it is set to 100, which disables automatic extension. The minimum value of this parameter is 50.
snapshot_autoextend_percent
- Adds space to the snapshot equal to this percentage of its current size. By default, it is set to 20.
In the following example, after setting the following parameters, the created 1G snapshot extends to 1.2G when its usage exceeds 700M:
Example 10.1. Automatically extend the snapshot
# vi /etc/lvm/lvm.conf
snapshot_autoextend_threshold = 70
snapshot_autoextend_percent = 20
Note: This feature requires unallocated space in the volume group. An automatic extension of a snapshot does not increase the size of a snapshot volume beyond the maximum calculated size that is necessary for the snapshot. Once a snapshot has grown large enough to cover the origin, it is no longer monitored for automatic extension.
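You can, for example, confirm the values that LVM actually uses by querying the settings, which live in the activation section of the configuration; each command prints the setting with its current value. Automatic extension is performed by the LVM monitoring daemon (dmeventd), which is enabled by default:
# lvmconfig activation/snapshot_autoextend_threshold
# lvmconfig activation/snapshot_autoextend_percent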
Extend this snapshot manually by using the lvextend command:
# lvextend -L+100M /dev/vg001/snap
Additional resources
- lvcreate(8), lvextend(8), and lvs(8) man pages
- /etc/lvm/lvm.conf file
10.3. Merging a snapshot into its original volume
Use the lvconvert command with the --merge option to merge a snapshot into its original (the origin) volume. You can perform a system rollback if you have lost data or files, or if you otherwise need to restore your system to a previous state. After you merge the snapshot volume, the resulting logical volume has the origin volume's name, minor number, and UUID. While the merge is in progress, reads and writes to the origin appear as if they were directed to the snapshot being merged. When the merge finishes, the merged snapshot is removed.
If both the origin and snapshot volume are not open and active, the merge starts immediately. Otherwise, the merge starts after either the origin or the snapshot is activated and both are closed. You can merge a snapshot into an origin that cannot be closed, for example a root file system, the next time the origin volume is activated.
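For example, a minimal sketch of such a deferred merge, assuming a hypothetical snapshot vg001/rootsnap of the root logical volume. Because the root file system cannot be closed while the system is running, the merge is only scheduled here and completes when the origin is reactivated during the reboot:
# lvconvert --merge vg001/rootsnap
# reboot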
Procedure
Merge the snapshot volume. The following command merges snapshot volume vg001/snap into its origin:
# lvconvert --merge vg001/snap
  Merging of volume vg001/snap started.
  vg001/origin: Merged: 100.00%
View the origin volume:
# lvs -a -o +devices
  LV     VG    Attr       LSize Pool Origin Data%  Meta% Move Log Cpy%Sync Convert Devices
  origin vg001 owi-a-s--- 1.00g                                                    /dev/sde1(0)
Additional resources
- lvconvert(8) man page
10.4. Creating LVM snapshots using the snapshot RHEL System Role
With the snapshot RHEL system role, you can create LVM snapshots. The role can also check that there is sufficient space for the created snapshots and that their names do not conflict with existing volumes when you set the snapshot_lvm_action parameter to check. To mount the created snapshots, set snapshot_lvm_action to mount.
In the following example, the nouuid option is set; it is required only when working with the XFS file system. XFS does not support mounting multiple file systems with the same UUID at the same time.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Run the snapshot system role
  hosts: managed-node-01.example.com
  vars:
    snapshot_lvm_set:
      name: snapset1
      volumes:
        - name: data1 snapshot
          vg: data_vg
          lv: data1
          percent_space_required: 25
          mountpoint: /data1_snapshot
          options: nouuid
          mountpoint_create: true
        - name: data2 snapshot
          vg: data_vg
          lv: data2
          percent_space_required: 25
          mountpoint: /data2_snapshot
          options: nouuid
          mountpoint_create: true
  tasks:
    - name: Create a snapshot set
      ansible.builtin.include_role:
        name: rhel-system-roles.snapshot
      vars:
        snapshot_lvm_action: snapshot
    - name: Verify the set of snapshots for the LVs
      ansible.builtin.include_role:
        name: rhel-system-roles.snapshot
      vars:
        snapshot_lvm_action: check
        snapshot_lvm_verify_only: true
    - name: Mount the snapshot set
      ansible.builtin.include_role:
        name: rhel-system-roles.snapshot
      vars:
        snapshot_lvm_action: mount
Here, the snapshot_lvm_set parameter describes specific logical volumes (LV) from the same volume group (VG). You can also specify LVs from different VGs while setting this parameter.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
On the managed node, view the created snapshots:
# lvs
  LV             VG      Attr       LSize   Pool Origin Data%  Meta% Move Log Cpy%Sync Convert
  data1          data_vg owi-a-s---   1.00g
  data1_snapset1 data_vg swi-a-s--- 208.00m      data1  0.00
  data2          data_vg owi-a-s---   1.00g
  data2_snapset1 data_vg swi-a-s--- 208.00m      data2  0.00
On the managed node, verify if the mount operation was successful by checking the existence of /data1_snapshot and /data2_snapshot:
# ls -al /data1_snapshot
# ls -al /data2_snapshot
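You can also, for example, check which devices the mount points are served from; the Filesystem column should show the snapshot devices, such as /dev/mapper/data_vg-data1_snapset1:
# df -h /data1_snapshot /data2_snapshot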
Additional resources
- /usr/share/ansible/roles/rhel-system-roles.snapshot/README.md file
- /usr/share/doc/rhel-system-roles/snapshot/ directory
10.5. Unmounting LVM snapshots using the snapshot RHEL System Role
You can unmount a specific snapshot or all snapshots by setting the snapshot_lvm_action parameter to umount.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- You have created snapshots using the name snapset1 for the set of snapshots.
- You have mounted the snapshots by setting snapshot_lvm_action to mount, or otherwise mounted them manually.
Procedure
Create a playbook file, for example ~/playbook.yml, with the following content:
Unmount a specific LVM snapshot:
---
- name: Unmount the snapshot specified by the snapset
  hosts: managed-node-01.example.com
  vars:
    snapshot_lvm_snapset_name: snapset1
    snapshot_lvm_action: umount
    snapshot_lvm_vg: data_vg
    snapshot_lvm_lv: data2
    snapshot_lvm_mountpoint: /data2_snapshot
  roles:
    - rhel-system-roles.snapshot
Here, the snapshot_lvm_lv parameter describes a specific logical volume (LV) and the snapshot_lvm_vg parameter describes a specific volume group (VG).
Unmount a set of LVM snapshots:
---
- name: Unmount a set of snapshots
  hosts: managed-node-01.example.com
  vars:
    snapshot_lvm_action: umount
    snapshot_lvm_set:
      name: snapset1
      volumes:
        - name: data1 snapshot
          vg: data_vg
          lv: data1
          mountpoint: /data1_snapshot
        - name: data2 snapshot
          vg: data_vg
          lv: data2
          mountpoint: /data2_snapshot
  roles:
    - rhel-system-roles.snapshot
Here, the snapshot_lvm_set parameter describes specific LVs from the same VG. You can also specify LVs from different VGs while setting this parameter.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
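To confirm on the managed node that the snapshots are no longer mounted, you can, for example, query the mount points used in the playbooks; each command prints nothing and returns a nonzero exit status if nothing is mounted there:
# findmnt /data1_snapshot
# findmnt /data2_snapshot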
Additional resources
- /usr/share/ansible/roles/rhel-system-roles.snapshot/README.md file
- /usr/share/doc/rhel-system-roles/snapshot/ directory
10.6. Extending LVM snapshots using the snapshot RHEL System Role
With the snapshot RHEL system role, you can extend LVM snapshots by setting the snapshot_lvm_action parameter to extend. Set the snapshot_lvm_percent_space_required parameter to the space that should be allocated to the snapshot after extending it.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- You have created snapshots for the given volume groups and logical volumes.
Procedure
Create a playbook file, for example ~/playbook.yml, with the following content:
Extend all LVM snapshots by specifying the value for the percent_space_required parameter:
---
- name: Extend all snapshots
  hosts: managed-node-01.example.com
  vars:
    snapshot_lvm_action: extend
    snapshot_lvm_set:
      name: snapset1
      volumes:
        - name: data1 snapshot
          vg: data_vg
          lv: data1
          percent_space_required: 40
        - name: data2 snapshot
          vg: data_vg
          lv: data2
          percent_space_required: 40
  roles:
    - rhel-system-roles.snapshot
Here, the snapshot_lvm_set parameter describes specific LVs from the same VG. You can also specify LVs from different VGs while setting this parameter.
Extend an LVM snapshot set by setting percent_space_required to a different value for each VG and LV pair in the set:
---
- name: Extend the snapshot
  hosts: managed-node-01.example.com
  vars:
    snapshot_extend_set:
      name: snapset1
      volumes:
        - name: data1 snapshot
          vg: data_vg
          lv: data1
          percent_space_required: 30
        - name: data2 snapshot
          vg: data_vg
          lv: data2
          percent_space_required: 40
  tasks:
    - name: Extend data1 to 30% and data2 to 40%
      vars:
        snapshot_lvm_set: "{{ snapshot_extend_set }}"
        snapshot_lvm_action: extend
      ansible.builtin.include_role:
        name: rhel-system-roles.snapshot
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
On the managed node, view the snapshots extended by 30% and 40%:
# lvs
  LV             VG      Attr       LSize   Pool Origin Data%  Meta% Move Log Cpy%Sync Convert
  data1          data_vg owi-a-s---   1.00g
  data1_snapset1 data_vg swi-a-s--- 308.00m      data1  0.00
  data2          data_vg owi-a-s---   1.00g
  data2_snapset1 data_vg swi-a-s--- 408.00m      data2  0.00
Additional resources
- /usr/share/ansible/roles/rhel-system-roles.snapshot/README.md file
- /usr/share/doc/rhel-system-roles/snapshot/ directory
10.7. Reverting LVM snapshots using the snapshot RHEL System Role
With the snapshot RHEL system role, you can revert LVM snapshots to their original volumes by setting the snapshot_lvm_action parameter to revert.
If both the logical volume and the snapshot volume are not open and active, the revert operation starts immediately. Otherwise, it starts after either the origin or the snapshot is activated and both are closed.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- You have created snapshots for the given volume groups and logical volumes by using snapset1 as the snapset name.
Procedure
Create a playbook file, for example ~/playbook.yml, with the following content:
Revert a specific LVM snapshot to its original volume:
---
- name: Revert a snapshot to its original volume
  hosts: managed-node-01.example.com
  vars:
    snapshot_lvm_snapset_name: snapset1
    snapshot_lvm_action: revert
    snapshot_lvm_vg: data_vg
    snapshot_lvm_lv: data2
  roles:
    - rhel-system-roles.snapshot
Here, the snapshot_lvm_lv parameter describes a specific logical volume (LV) and the snapshot_lvm_vg parameter describes a specific volume group (VG).
Revert a set of LVM snapshots to their original volumes:
---
- name: Revert a set of snapshots
  hosts: managed-node-01.example.com
  vars:
    snapshot_lvm_action: revert
    snapshot_lvm_set:
      name: snapset1
      volumes:
        - name: data1 snapshot
          vg: data_vg
          lv: data1
        - name: data2 snapshot
          vg: data_vg
          lv: data2
  roles:
    - rhel-system-roles.snapshot
Here, the snapshot_lvm_set parameter describes specific LVs from the same VG. You can also specify LVs from different VGs while setting this parameter.
Note: The revert operation might take some time to complete.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Reboot the host, or deactivate and reactivate the logical volumes by using the following steps:
$ umount /data1; umount /data2
$ lvchange -an data_vg/data1 data_vg/data2
$ lvchange -ay data_vg/data1 data_vg/data2
$ mount /data1; mount /data2
Verification
On the managed node, view the reverted snapshots:
# lvs
  LV    VG      Attr       LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  data1 data_vg -wi-a----- 1.00g
  data2 data_vg -wi-a----- 1.00g
Additional resources
- /usr/share/ansible/roles/rhel-system-roles.snapshot/README.md file
- /usr/share/doc/rhel-system-roles/snapshot/ directory
10.8. Removing LVM snapshots using the snapshot RHEL System Role
With the snapshot RHEL system role, you can remove all LVM snapshots by specifying the prefix or pattern of the snapshot name and setting the snapshot_lvm_action parameter to remove.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- You have created the specified snapshots by using snapset1 as the snapset name.
Procedure
Create a playbook file, for example ~/playbook.yml, with the following content:
Remove a specific LVM snapshot:
---
- name: Remove a snapshot
  hosts: managed-node-01.example.com
  vars:
    snapshot_lvm_snapset_name: snapset1
    snapshot_lvm_action: remove
    snapshot_lvm_vg: data_vg
  roles:
    - rhel-system-roles.snapshot
Here, the snapshot_lvm_vg parameter describes the specific volume group (VG) that contains the snapshots to remove.
Remove a set of LVM snapshots:
---
- name: Remove a set of snapshots
  hosts: managed-node-01.example.com
  vars:
    snapshot_lvm_action: remove
    snapshot_lvm_set:
      name: snapset1
      volumes:
        - name: data1 snapshot
          vg: data_vg
          lv: data1
        - name: data2 snapshot
          vg: data_vg
          lv: data2
  roles:
    - rhel-system-roles.snapshot
Here, the snapshot_lvm_set parameter describes specific LVs from the same VG. You can also specify LVs from different VGs while setting this parameter.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
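To confirm on the managed node that the snapshots were removed, you can, for example, list the logical volumes in the affected volume group and check that only the original volumes remain:
# lvs data_vg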
Additional resources
- /usr/share/ansible/roles/rhel-system-roles.snapshot/README.md file
- /usr/share/doc/rhel-system-roles/snapshot/ directory