4.2. Configuring a Clustered LVM Volume with a GFS2 File System
This use case requires that you create a clustered LVM logical volume on storage that is shared between the nodes of the cluster.
This section describes how to create a clustered LVM logical volume with a GFS2 file system on that volume. In this example, the shared partition
/dev/vdb is used to store the LVM physical volume from which the LVM logical volume will be created.
Note
LVM volumes and the corresponding partitions and devices used by cluster nodes must be connected to the cluster nodes only.
Before starting this procedure, install the
lvm2-cluster and gfs2-utils packages, which are part of the Resilient Storage channel, on both nodes of the cluster.
# yum install lvm2-cluster gfs2-utils
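Optionally, you can verify that both packages are installed on each node with a standard rpm query, for example:
# rpm -q lvm2-cluster gfs2-utils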
Since the
/dev/vdb partition is shared storage, you perform this procedure on one node only.
- Set the global Pacemaker parameter no-quorum-policy to freeze. This prevents the entire cluster from being fenced every time quorum is lost. For further information on setting this policy, see Global File System 2.
[root@z1 ~]# pcs property set no-quorum-policy=freeze
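Optionally, you can confirm the new setting with the pcs property list command, which displays cluster properties that have been changed from their default values; the output should include the no-quorum-policy setting with a value of freeze. For example:
[root@z1 ~]# pcs property list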
- Set up a dlm resource. This is a required dependency for the clvmd service and the GFS2 file system.
[root@z1 ~]# pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true
- Set up clvmd as a cluster resource.
[root@z1 ~]# pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s on-fail=fence clone interleave=true ordered=true
Note that the ocf:heartbeat:clvm resource agent, as part of the start procedure, sets the locking_type parameter in the /etc/lvm/lvm.conf file to 3 and disables the lvmetad daemon.
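Optionally, once the clvmd resource has started you can confirm these changes by inspecting /etc/lvm/lvm.conf directly; for example, the following command prints the active locking_type and use_lvmetad settings (use_lvmetad is the lvm.conf parameter that controls the lvmetad daemon mentioned in the note above):
[root@z1 ~]# grep -E '^[[:space:]]*(locking_type|use_lvmetad)' /etc/lvm/lvm.conf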
- Set up the clvmd and dlm dependency and startup order. The clvmd resource must start after the dlm resource and must run on the same node as the dlm resource.
[root@z1 ~]# pcs constraint order start dlm-clone then clvmd-clone
Adding dlm-clone clvmd-clone (kind: Mandatory) (Options: first-action=start then-action=start)
[root@z1 ~]# pcs constraint colocation add clvmd-clone with dlm-clone
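Optionally, you can review the constraints configured so far with the pcs constraint command, which lists all ordering and colocation constraints in the cluster. For example:
[root@z1 ~]# pcs constraint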
- Verify that the dlm and clvmd resources are running on all nodes.
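You can check this with the pcs status command; the dlm-clone and clvmd-clone resources should be shown as started on every node of the cluster. For example:
[root@z1 ~]# pcs status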
- Create the clustered logical volume.
[root@z1 ~]# pvcreate /dev/vdb
[root@z1 ~]# vgcreate -Ay -cy cluster_vg /dev/vdb
[root@z1 ~]# lvcreate -L4G -n cluster_lv cluster_vg
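Optionally, to confirm that the -cy option created the volume group as a clustered volume group, you can display its attributes with the vgs command; the Attr field of a clustered volume group includes a c. For example:
[root@z1 ~]# vgs cluster_vg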
- Optionally, to verify that the volume was created successfully, you can use the lvs command to display the logical volume.
[root@z1 ~]# lvs
  LV         VG         Attr       LSize ...
  cluster_lv cluster_vg -wi-ao---- 4.00g ...
- Format the volume with a GFS2 file system. In this example, my_cluster is the cluster name. This example specifies -j 2 to indicate two journals because the number of journals you configure must equal the number of nodes in the cluster.
[root@z1 ~]# mkfs.gfs2 -p lock_dlm -j 2 -t my_cluster:samba /dev/cluster_vg/cluster_lv
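Note that if you later add a node to the cluster, the file system will need an additional journal for it. Journals can be added to a mounted GFS2 file system with the gfs2_jadd command; for example, the following adds one journal, assuming the file system is mounted on the /mnt/gfs2share mount point configured later in this procedure:
[root@z1 ~]# gfs2_jadd -j 1 /mnt/gfs2share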
- Create a Filesystem resource, which configures Pacemaker to mount and manage the file system. This example creates a Filesystem resource named fs and mounts the GFS2 file system on /mnt/gfs2share on both nodes of the cluster.
[root@z1 ~]# pcs resource create fs ocf:heartbeat:Filesystem device="/dev/cluster_vg/cluster_lv" directory="/mnt/gfs2share" fstype="gfs2" --clone
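The ocf:heartbeat:Filesystem resource agent also accepts an options parameter for additional mount options. For example, a variant of the command above that mounts the file system with noatime, an option generally recommended for GFS2 file systems, might look like this:
[root@z1 ~]# pcs resource create fs ocf:heartbeat:Filesystem device="/dev/cluster_vg/cluster_lv" directory="/mnt/gfs2share" fstype="gfs2" options="noatime" --clone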
- Configure a dependency and a startup order for the GFS2 file system and the clvmd service. GFS2 must start after clvmd and must run on the same node as clvmd.
[root@z1 ~]# pcs constraint order start clvmd-clone then fs-clone
Adding clvmd-clone fs-clone (kind: Mandatory) (Options: first-action=start then-action=start)
[root@z1 ~]# pcs constraint colocation add fs-clone with clvmd-clone
- Verify that the GFS2 file system is mounted as expected.
[root@z1 ~]# mount | grep /mnt/gfs2share
/dev/mapper/cluster_vg-cluster_lv on /mnt/gfs2share type gfs2 (rw,noatime,seclabel)
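As a final check that the file system is shared, you can create a file on one node and confirm that it is visible on the other; for example, assuming the second cluster node is named z2 and using an arbitrary test file:
[root@z1 ~]# touch /mnt/gfs2share/testfile
[root@z2 ~]# ls /mnt/gfs2share
testfile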