3.4. Exclusive Activation of a Volume Group in a Cluster
The following procedure configures the LVM volume group so that only the cluster is capable of activating the volume group, and so that the volume group will not be activated outside of the cluster on startup. If the volume group is activated by a system outside of the cluster, there is a risk of corrupting the volume group's metadata.
This procedure modifies the volume_list entry in the /etc/lvm/lvm.conf configuration file. Volume groups listed in the volume_list entry are allowed to automatically activate on the local node outside of the cluster manager's control. Volume groups related to the node's local root and home directories should be included in this list. All volume groups managed by the cluster manager must be excluded from the volume_list entry. Note that this procedure does not require the use of clvmd.
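For reference, the volume_list setting lives in the activation section of lvm.conf and, per the lvm.conf(5) man page, accepts volume group names, individual logical volumes written as vgname/lvname, and host tags prefixed with @. The excerpt below is an illustrative sketch only; the names in it are placeholders, not values taken from this procedure.
  # Illustrative lvm.conf excerpt (placeholder names, not from this procedure):
  activation {
      # Only the volume groups, logical volumes, and tags listed here may be
      # activated automatically on this node; everything else is left to the
      # cluster manager.
      volume_list = [ "rhel_root", "rhel_home", "vg_local/lv_swap", "@my_node_tag" ]
  }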
Perform the following procedure on each node in the cluster.
- Execute the following command to ensure that locking_type is set to 1 and that use_lvmetad is set to 0 in the /etc/lvm/lvm.conf file. This command also disables and stops any lvmetad processes immediately.
  # lvmconf --enable-halvm --services --startstopservices
- Determine which volume groups are currently configured on your local storage with the following command. This will output a list of the currently configured volume groups. If you have space allocated in separate volume groups for root and for your home directory on this node, you will see those volumes in the output, as in this example.
  # vgs --noheadings -o vg_name
    my_vg
    rhel_home
    rhel_root
- Add the volume groups other than my_vg (the volume group you have just defined for the cluster) as entries to volume_list in the /etc/lvm/lvm.conf configuration file. For example, if you have space allocated in separate volume groups for root and for your home directory, you would uncomment the volume_list line of the lvm.conf file and add these volume groups as entries to volume_list as follows:
  volume_list = [ "rhel_root", "rhel_home" ]
  Note
  If no local volume groups are present on a node to be activated outside of the cluster manager, you must still initialize the volume_list entry as volume_list = [].
- Rebuild the initramfs boot image to guarantee that the boot image will not try to activate a volume group controlled by the cluster. Update the initramfs device with the following command. This command may take up to a minute to complete.
  # dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r)
- Reboot the node.
  Note
  If you have installed a new Linux kernel since booting the node on which you created the boot image, the new initrd image will be for the kernel that was running when you created it and not for the new kernel that is running when you reboot the node. You can ensure that the correct initrd device is in use by running the uname -r command before and after the reboot to determine the kernel release that is running. If the releases are not the same, update the initrd file after rebooting with the new kernel and then reboot the node.
- When the node has rebooted, check whether the cluster services have started up again on that node by executing the pcs cluster status command on that node. If this yields the message Error: cluster is not currently running on this node, then enter the following command.
  # pcs cluster start
  Alternately, you can wait until you have rebooted each node in the cluster and start cluster services on all of the nodes in the cluster with the following command.
  # pcs cluster start --all
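After completing this procedure on each node, you may want to verify that the nodes behave as intended. The commands below are a hedged sketch of one way to do so and are not part of the original procedure; my_vg stands in for your cluster-managed volume group, and the lvmconfig command is only available on lvm2 releases that ship it.
  # Hypothetical post-procedure checks (run as root on each node).

  # Confirm the settings configured by lvmconf: locking_type should be 1
  # and use_lvmetad should be 0.
  lvmconfig global/locking_type global/use_lvmetad

  # List the logical volumes in the cluster-managed volume group (my_vg in
  # this example). On a node where the cluster manager has not activated the
  # volume group, the state position of the lv_attr field should not show
  # 'a' (active).
  lvs -o lv_name,vg_name,lv_attr my_vg

  # Confirm that cluster services and resources are running on this node.
  pcs status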