Chapter 8. Configuring LVM on shared storage
Shared storage is storage that multiple nodes can access at the same time, and you can use LVM to manage it. Shared storage is common in cluster and high-availability setups, and there are two common scenarios for how it appears on a system:
- LVM devices are attached to a host and passed to a guest VM to use. In this case, the device is never intended to be used by the host, only by the guest VM.
- Machines are attached to a storage area network (SAN), for example using Fibre Channel, and the SAN LUNs are visible to multiple machines.
8.1. Configuring LVM for VM disks
To prevent VM storage from being exposed to the host, you can configure LVM device access and the LVM system ID. Exclude the devices in question from the host's devices file, which ensures that LVM on the host does not see or use the devices passed to the guest VM. You can also protect against accidental use of the VM's VG on the host by setting the LVM system ID in the VG to match the guest VM.
Procedure
- In the lvm.conf file, check if the system.devices file is enabled:
  use_devicesfile=1
- Exclude the devices in question from the host's devices file:
  $ lvmdevices --deldev <device>
- Optional: You can further protect LVM devices:
  - Set the LVM system ID feature in the lvm.conf file on both the host and the VM:
    system_id_source = "uname"
  - Set the VG's system ID to match the VM's system ID. This ensures that only the guest VM is capable of activating the VG (see the verification sketch after this procedure):
    $ vgchange --systemid <VM_system_id> <VM_vg_name>
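To verify the result, the following minimal sketch can help; it assumes the placeholder names used above, and the exact report formats can vary between LVM versions:

# On the host: the excluded device must no longer appear in the devices file
$ lvmdevices
$ lvmconfig devices/use_devicesfile

# In the guest VM: the VG's system ID must match the VM's own system ID
$ lvm systemid
$ vgs -o+systemid <VM_vg_name>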
8.2. Configuring LVM to use SAN disks on one machine
To prevent the SAN LUNs from being used by the wrong machine, exclude the LUNs from the devices file on all machines except the one machine that is meant to use them.
You can also protect the VG from being used by the wrong machine by configuring a system ID on all machines and setting the system ID in the VG to match the machine using it.
Procedure
- In the lvm.conf file, check if the system.devices file is enabled:
  use_devicesfile=1
- Exclude the devices in question from the host's devices file:
  $ lvmdevices --deldev <device>
- Set the LVM system ID feature in the lvm.conf file:
  system_id_source = "uname"
- Set the VG's system ID to match the system ID of the machine using this VG (see the check after this procedure):
  $ vgchange --systemid <system_id> <vg_name>
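As a quick check (a minimal sketch; <vg_name> is the placeholder from the step above), confirm on the machine that owns the LUNs that its system ID and the VG's system ID agree:

$ lvm systemid                # system ID of this machine, derived from uname
$ vgs -o+systemid <vg_name>   # must report the same system ID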
8.3. Configuring LVM to use SAN disks for failover
You can configure LUNs to be moved between machines, for example for failover purposes. To set this up, configure the LVM devices file, include the LUNs in the devices file on all machines that may use the devices, and configure the LVM system ID on each machine.
The following procedure describes only the initial LVM configuration. To finish setting up LVM for failover and to move the VG between machines, you must configure pacemaker and the LVM-activate resource agent, which automatically modifies the VG's system ID to match the system ID of the machine where the VG can be used. For more information, see Configuring and managing high availability clusters.
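For illustration, a pacemaker resource along the following lines is one way the LVM-activate agent is typically used for this; this is a hedged sketch, the resource and group names are placeholders, and the exact syntax depends on your pcs and resource-agents versions:

$ pcs resource create <resource_name> ocf:heartbeat:LVM-activate vgname=<vg_name> vg_access_mode=system_id --group <group_name>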
Procedure
- In the lvm.conf file, check if the system.devices file is enabled:
  use_devicesfile=1
- Include the devices in question in the host's devices file:
  $ lvmdevices --adddev <device>
- Set the LVM system ID feature in the lvm.conf file on all machines (see the spot check after this procedure):
  system_id_source = "uname"
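To spot-check the initial configuration on each machine (a minimal sketch with placeholder device names):

$ lvmdevices      # the shared LUN must be listed on every machine that may use it
$ lvm systemid    # every machine must report its own, non-empty system ID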
8.4. Configuring LVM to share SAN disks among multiple machines
Using the lvmlockd daemon and a lock manager such as dlm or sanlock, you can enable access to a shared VG on the SAN disks from multiple machines. The specific commands can differ based on the lock manager and the operating system used. The following procedure provides an overview of the required steps to configure LVM to share SAN disks among multiple machines.
When using pacemaker, the system must instead be configured and started by using the pacemaker steps shown in Configuring and managing high availability clusters.
Procedure
- In the lvm.conf file, check if the system.devices file is enabled:
  use_devicesfile=1
- On each machine that will use the shared LUN, add the LUN to the machine's devices file:
  $ lvmdevices --adddev <device>
- Configure the lvm.conf file to use the lvmlockd daemon on all machines:
  use_lvmlockd=1
- Start the lvmlockd daemon on all machines.
- Start a lock manager to use with lvmlockd, such as dlm or sanlock, on all machines.
- Create a new shared VG by using the vgcreate --shared command.
- Start and stop access to existing shared VGs by using the vgchange --lockstart and vgchange --lockstop commands on all machines (see the end-to-end sketch after this procedure).
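The following end-to-end sketch shows one possible sequence using sanlock; it is an illustration under assumptions, the device path and VG name are placeholders, and service names can differ between distributions:

# On all machines, after setting use_lvmlockd=1 in lvm.conf:
$ systemctl start sanlock
$ systemctl start lvmlockd

# On one machine, create the shared VG:
$ vgcreate --shared <vg_name> <device>

# On each machine that needs access, start the VG's lockspace:
$ vgchange --lockstart <vg_name>

# When a machine is done with the VG, stop access:
$ vgchange --lockstop <vg_name>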
Additional resources
- lvmlockd(8) man page on your system
8.5. Creating shared LVM devices using the storage RHEL system role
You can use the storage RHEL system role to create shared LVM devices if you want multiple systems to access the same storage at the same time.
This can bring the following notable benefits:
- Resource sharing
- Flexibility in managing storage resources
- Simplification of storage management tasks
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- lvmlockd is configured on the managed node. For more information, see Configuring LVM to share SAN disks among multiple machines.
Procedure
- Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Manage local storage
  hosts: managed-node-01.example.com
  become: true
  tasks:
    - name: Create shared LVM device
      ansible.builtin.include_role:
        name: rhel-system-roles.storage
      vars:
        storage_pools:
          - name: vg1
            disks: /dev/vdb
            type: lvm
            shared: true
            state: present
            volumes:
              - name: lv1
                size: 4g
                mount_point: /opt/test1
        storage_safe_mode: false
        storage_use_partitions: true
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node.
- Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
- Run the playbook:
$ ansible-playbook ~/playbook.yml
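Optionally, you can verify the result on the managed node; this is a hedged sketch that assumes the vg1, lv1, and /opt/test1 names from the example playbook, and the locktype report field may vary by LVM version:

$ vgs -o+locktype vg1   # a shared VG reports a lock type such as sanlock or dlm
$ lvs vg1               # lv1 should be listed
$ findmnt /opt/test1    # the LV should be mounted at /opt/test1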
Additional resources
- /usr/share/ansible/roles/rhel-system-roles.storage/README.md file
- /usr/share/doc/rhel-system-roles/storage/ directory