Chapter 6. Configuring a GFS2 File System in a Pacemaker Cluster

The following procedure is an outline of the steps required to set up a Pacemaker cluster that includes a GFS2 file system.
  1. On each node in the cluster, install the High Availability and Resilient Storage packages.
    # yum groupinstall 'High Availability' 'Resilient Storage'
  2. Create the Pacemaker cluster and configure fencing for the cluster. For information on configuring a Pacemaker cluster, see Configuring the Red Hat High Availability Add-On with Pacemaker.
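    A minimal sketch of this step, assuming two nodes named z1.example.com and z2.example.com, a cluster named my_cluster, and fence_xvm as the fence agent (substitute the fence agent and parameters appropriate for your hardware):
    # pcs cluster auth z1.example.com z2.example.com
    # pcs cluster setup --start --name my_cluster z1.example.com z2.example.com
    # pcs stonith create xvmfence fence_xvm key_file=/etc/cluster/fence_xvm.key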
  3. On each node in the cluster, enable the clvmd service. If you will be using cluster-mirrored volumes, enable the cmirrord service.
    # chkconfig clvmd on
    # chkconfig cmirrord on
    After you enable these daemons, starting and stopping Pacemaker or the cluster through the normal means (pcs cluster start, pcs cluster stop, service pacemaker start, or service pacemaker stop) starts and stops the clvmd and cmirrord daemons as needed.
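    To confirm that the daemons are configured to start, you can list their runlevel settings; the output below is shown for illustration and will vary by system:
    # chkconfig --list clvmd
    clvmd          0:off 1:off 2:on 3:on 4:on 5:on 6:off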
  4. On one node in the cluster, perform the following steps:
    1. Set the global Pacemaker parameter no-quorum-policy to freeze.

      Note

      By default, the value of no-quorum-policy is set to stop, indicating that once quorum is lost, all the resources on the remaining partition will immediately be stopped. Typically this default is the safest and most optimal option but, unlike most resources, GFS2 requires quorum to function. When quorum is lost, neither the applications using the GFS2 mounts nor the GFS2 mount itself can be stopped correctly. Any attempt to stop these resources without quorum will fail, which will ultimately result in the entire cluster being fenced every time quorum is lost.
      To address this situation, set no-quorum-policy=freeze when GFS2 is in use. This means that when quorum is lost, the remaining partition will do nothing until quorum is regained.
      # pcs property set no-quorum-policy=freeze
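      To confirm that the property took effect, you can display it by name (output shown for illustration):
      # pcs property show no-quorum-policy
      Cluster Properties:
       no-quorum-policy: freeze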
    2. After ensuring that locking_type is set to 3 in the /etc/lvm/lvm.conf file to support clustered locking (a quick check is shown after the following commands), create the clustered logical volume and format it with a GFS2 file system. Ensure that you create enough journals, one for each node in your cluster; the example below creates two journals (-j2) for a two-node cluster.
      # pvcreate /dev/vdb
      # vgcreate -Ay -cy cluster_vg /dev/vdb
      # lvcreate -L5G -n cluster_lv cluster_vg
      # mkfs.gfs2 -j2 -p lock_dlm -t rhel7-demo:gfs2-demo /dev/cluster_vg/cluster_lv
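      A minimal check of the locking type, assuming the default /etc/lvm/lvm.conf location; the lvmconf command rewrites locking_type for you if it is not already set to 3:
      # grep 'locking_type' /etc/lvm/lvm.conf
          locking_type = 3
      # lvmconf --enable-cluster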
    3. Configure a clusterfs resource.
      Do not add the file system to the /etc/fstab file, because it will be managed as a Pacemaker cluster resource. You can specify mount options as part of the resource configuration with options=options. Run the pcs resource describe Filesystem command for the full configuration options.
      This cluster resource creation command specifies the noatime mount option.
      # pcs resource create clusterfs Filesystem device="/dev/cluster_vg/cluster_lv" directory="/mnt/gfs2-demo" fstype="gfs2" options="noatime" op monitor interval=10s on-fail=fence clone interleave=true
    4. Verify that GFS2 is mounted as expected.
      # mount | grep /mnt/gfs2-demo
      /dev/mapper/cluster_vg-cluster_lv on /mnt/gfs2-demo type gfs2 (rw,noatime,seclabel)
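      You can also confirm that the clone started on every node in the cluster (node names and output will vary by site):
      # pcs status resources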
      
  5. (Optional) Reboot all cluster nodes to verify GFS2 persistence and recovery.
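    After the nodes rejoin the cluster, a quick post-reboot check on each node (output omitted, as it varies by site):
    # pcs status
    # mount | grep gfs2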