2.3. Exclusive Activation of a Volume Group in a Cluster
The following procedure configures the volume group so that only the cluster can activate it, and so that it will not be activated outside of the cluster at startup. If a system outside of the cluster activates the volume group, there is a risk of corrupting the volume group's metadata.
This procedure modifies the volume_list entry in the /etc/lvm/lvm.conf configuration file. Volume groups listed in the volume_list entry are allowed to activate automatically on the local node, outside of the cluster manager's control. Volume groups related to the node's local root and home directories should be included in this list. All volume groups managed by the cluster manager must be excluded from the volume_list entry. Note that this procedure does not require the use of clvmd.
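For orientation, the relevant parts of /etc/lvm/lvm.conf after completing this procedure might look like the following excerpt. The volume group names shown are illustrative; the local volume groups on your nodes will differ.

global {
    # Standard (non-clustered) locking, as set by lvmconf --enable-halvm
    locking_type = 1
    # The lvmetad metadata caching daemon is disabled for HA-LVM
    use_lvmetad = 0
}
activation {
    # Only these local volume groups may activate automatically on this node;
    # the cluster-managed volume group (my_vg) is deliberately absent
    volume_list = [ "rhel_root", "rhel_home" ]
}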
Perform the following procedure on each node in the cluster.
- Execute the following command to ensure that locking_type is set to 1 and that use_lvmetad is set to 0 in the /etc/lvm/lvm.conf file. This command also immediately disables and stops any lvmetad processes.
# lvmconf --enable-halvm --services --startstopservices
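As an optional check that is not part of the documented procedure, you can confirm the resulting settings by searching the configuration file for the two uncommented entries. The expected values are shown below; the indentation comes from the file itself.

# grep -E '^[[:space:]]*(locking_type|use_lvmetad)[[:space:]]*=' /etc/lvm/lvm.conf
    locking_type = 1
    use_lvmetad = 0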
- Determine which volume groups are currently configured on your local storage with the following command. This command outputs a list of the currently configured volume groups. If you have space allocated in separate volume groups for root and for your home directory on this node, you will see those volume groups in the output, as in this example.

# vgs --noheadings -o vg_name
  my_vg
  rhel_home
  rhel_root

- Add the volume groups other than my_vg (the volume group you have just defined for the cluster) as entries to volume_list in the /etc/lvm/lvm.conf configuration file. For example, if you have space allocated in separate volume groups for root and for your home directory, you would uncomment the volume_list line of the lvm.conf file and add these volume groups as entries to volume_list as follows. Note that the volume group you have just defined for the cluster (my_vg in this example) is not in this list.
volume_list = [ "rhel_root", "rhel_home" ]

Note
If no local volume groups are present on a node to be activated outside of the cluster manager, you must still initialize the volume_list entry as volume_list = [].
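As a further optional check, lvm2 can print the volume_list value exactly as it will apply it. Recent releases provide the lvmconfig command for this; older releases offer the equivalent lvm dumpconfig command instead. In either case, the cluster's volume group (my_vg in this example) must not appear in the output.

# lvmconfig activation/volume_list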
- Rebuild the initramfs boot image to guarantee that the boot image will not try to activate a volume group controlled by the cluster. Update the initramfs image with the following command. This command may take up to a minute to complete.
# dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r)

- Reboot the node.
Note
If you have installed a new Linux kernel since booting the node on which you created the boot image, the new initrd image will be for the kernel that was running when you created it and not for the new kernel that is running when you reboot the node. You can ensure that the correct initrd image is in use by running the uname -r command before and after the reboot to determine the kernel release that is running. If the releases are not the same, update the initrd file after rebooting with the new kernel and then reboot the node.
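A minimal sketch of such a check, assuming the lsinitrd tool from the dracut package is available (this is an extra safeguard, not part of the documented procedure): compare the lvm.conf embedded in the boot image for the running kernel with the copy on disk. diff prints nothing when the two are identical; any difference means the image predates your edits, so re-run the dracut command above and reboot again.

# uname -r
# lsinitrd -f /etc/lvm/lvm.conf /boot/initramfs-$(uname -r).img | diff - /etc/lvm/lvm.conf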
- When the node has rebooted, check whether the cluster services have started up again on that node by executing the pcs cluster status command on that node. If this yields the message Error: cluster is not currently running on this node, then enter the following command.
# pcs cluster start

Alternately, you can wait until you have rebooted each node in the cluster and then start cluster services on each of the nodes with the following command.
# pcs cluster start --all
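Once the cluster services are running, you can confirm from any node that all nodes have rejoined and that the cluster's resources are active where expected. The output depends entirely on your cluster configuration, so only the command itself is shown here.

# pcs status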