Chapter 6. Configuring a GFS2 File System in a Pacemaker Cluster
The following procedure is an outline of the steps required to set up a Pacemaker cluster that includes a GFS2 file system.
- On each node in the cluster, install the High Availability and Resilient Storage packages.
# yum groupinstall 'High Availability' 'Resilient Storage'
- Create the Pacemaker cluster and configure fencing for the cluster. For information on configuring a Pacemaker cluster, see Configuring the Red Hat High Availability Add-On with Pacemaker.
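The cluster creation and fencing steps are covered in detail in the referenced guide. As a rough sketch only, assuming two hypothetical nodes named node1.example.com and node2.example.com and an APC power switch as the fence device, the commands might look like this:

```shell
# Sketch only -- the node names, fence agent, and fence parameters below are
# assumptions; substitute values appropriate for your environment and hardware.
pcs cluster setup --name my_cluster node1.example.com node2.example.com
pcs cluster start --all
pcs stonith create myapc fence_apc_snmp ipaddr="apc.example.com" \
    pcmk_host_map="node1.example.com:1;node2.example.com:2" \
    login="apc" passwd="apc"
```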
- On each node in the cluster, enable the clvmd service. If you will be using cluster-mirrored volumes, also enable the cmirrord service.
# chkconfig clvmd on
# chkconfig cmirrord on
After you enable these daemons, the clvmd and cmirrord daemons will be started and stopped as needed when you start and stop Pacemaker or the cluster through normal means with pcs cluster start, pcs cluster stop, service pacemaker start, or service pacemaker stop.
- On one node in the cluster, perform the following steps:
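To confirm that the daemons are configured to start at boot, you can check their runlevel settings. This quick verification is not part of the original procedure:

```shell
# Lists the runlevels in which each service starts; expect "on" for runlevels 3 and 5
chkconfig --list clvmd
chkconfig --list cmirrord
```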
- Set the global Pacemaker parameter no-quorum-policy to freeze.
Note: By default, the value of no-quorum-policy is set to stop, indicating that once quorum is lost, all the resources on the remaining partition will immediately be stopped. Typically this default is the safest and most optimal option but, unlike most resources, GFS2 requires quorum to function. When quorum is lost, neither the applications using the GFS2 mounts nor the GFS2 mounts themselves can be stopped correctly. Any attempt to stop these resources without quorum will fail, which will ultimately result in the entire cluster being fenced every time quorum is lost. To address this situation, set no-quorum-policy to freeze when GFS2 is in use. This means that when quorum is lost, the remaining partition will do nothing until quorum is regained.
# pcs property set no-quorum-policy=freeze
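You can confirm that the change took effect by querying the configured cluster properties, a verification sketch:

```shell
# Shows configured cluster properties; no-quorum-policy should read "freeze"
pcs property list | grep no-quorum-policy
```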
- After ensuring that the locking type is set to 3 in the /etc/lvm/lvm.conf file to support clustered locking, create the clustered logical volume and format it with a GFS2 file system. Ensure that you create enough journals, one for each node in your cluster.
# pvcreate /dev/vdb
# vgcreate -Ay -cy cluster_vg /dev/vdb
# lvcreate -L5G -n cluster_lv cluster_vg
# mkfs.gfs2 -j2 -p lock_dlm -t rhel7-demo:gfs2-demo /dev/cluster_vg/cluster_lv
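If clustered locking is not yet enabled, one way to set it, assuming the lvmconf utility shipped with the lvm2 package is available, is:

```shell
# Sets locking_type = 3 in /etc/lvm/lvm.conf; run this on every node
lvmconf --enable-cluster
# Verify the resulting setting
grep '^[[:space:]]*locking_type' /etc/lvm/lvm.conf
```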
- Configure a clusterfs resource.
You should not add the file system to the /etc/fstab file because it will be managed as a Pacemaker cluster resource. Mount options can be specified as part of the resource configuration with options=options. Run the pcs resource describe Filesystem command for full configuration options.
This cluster resource creation command specifies the noatime mount option.
# pcs resource create clusterfs Filesystem device="/dev/cluster_vg/cluster_lv" directory="/mnt/gfs2-demo" fstype="gfs2" "options=noatime" op monitor interval=10s on-fail=fence clone interleave=true
- Verify that GFS2 is mounted as expected.
# mount | grep /mnt/gfs2-demo
/dev/mapper/cluster_vg-cluster_lv on /mnt/gfs2-demo type gfs2 (rw,noatime,seclabel)
- (Optional) Reboot all cluster nodes to verify GFS2 persistence and recovery.
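After each node comes back up, a quick way to confirm that Pacemaker recovered and remounted the file system, again as a sketch, is:

```shell
# The clusterfs clone should show as Started on every node,
# and the GFS2 mount should be present again
pcs status
mount | grep gfs2
```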