Chapter 5. Configuring a GFS2 File System in a Cluster

The following procedure is an outline of the steps required to set up a Pacemaker cluster that includes a GFS2 file system.
After installing and starting the cluster software on all nodes, create the cluster. You must configure fencing for the cluster. For information on creating a Pacemaker cluster and configuring fencing for the cluster, see Creating a Red Hat High-Availability Cluster with Pacemaker in High Availability Add-On Administration. Once you have done this, perform the following procedure.
  1. On all nodes of the cluster, install the lvm2-cluster and gfs2-utils packages, which are part of the Resilient Storage channel.
    # yum install lvm2-cluster gfs2-utils
  2. Set the global Pacemaker parameter no-quorum-policy to freeze.

    Note

    By default, the value of no-quorum-policy is set to stop, indicating that once quorum is lost, all the resources on the remaining partition will immediately be stopped. Typically this default is the safest and most appropriate option, but unlike most resources, GFS2 requires quorum to function. When quorum is lost, neither the applications using the GFS2 mounts nor the GFS2 mount itself can be stopped correctly. Any attempts to stop these resources without quorum will fail, which will ultimately result in the entire cluster being fenced every time quorum is lost.
    To address this situation, set no-quorum-policy to freeze when GFS2 is in use. This means that when quorum is lost, the remaining partition will do nothing until quorum is regained.
    # pcs property set no-quorum-policy=freeze
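    To confirm that the property was applied, you can display it with pcs; the reported value should be freeze. The exact output format may vary between pcs versions.
    # pcs property show no-quorum-policy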
  3. Set up a dlm resource. This is a required dependency for clvmd and GFS2.
    # pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true
  4. On each node of the cluster, execute the following command to enable clustered locking. This command sets the locking_type parameter in the /etc/lvm/lvm.conf file to 3.
    # /sbin/lvmconf --enable-cluster
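    You can check that the change took effect by inspecting the file directly; the active (uncommented) locking_type line should read 3.
    # grep locking_type /etc/lvm/lvm.conf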
  5. Set up clvmd as a cluster resource.
    # pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s on-fail=fence clone interleave=true ordered=true
    Note that the clvmd and cmirrord daemons are started and managed by Pacemaker using the ocf:heartbeat:clvm resource agent and do not need to be started during boot with systemd. Additionally, the ocf:heartbeat:clvm resource agent, as part of the start procedure, sets the locking_type parameter in the /etc/lvm/lvm.conf file to 3 and disables the lvmetad daemon.
  6. Set up clvmd and dlm dependency and start up order. clvmd must start after dlm and must run on the same node as dlm.
    # pcs constraint order start dlm-clone then clvmd-clone
    # pcs constraint colocation add clvmd-clone with dlm-clone
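    As an optional check at this point, you can list the constraints and the clone resources you have created so far; both clones should appear, and the order and colocation constraints should reference dlm-clone and clvmd-clone. Output formats vary between pcs versions.
    # pcs constraint show
    # pcs status resources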
  7. Create the clustered logical volume.
    # pvcreate /dev/vdb
    # vgcreate -Ay -cy sasbin_vg /dev/vdb
    # lvcreate -L5G -n sasbin_lv sasbin_vg

    Warning

    When you create volume groups with CLVM on shared storage, you must ensure that all nodes in the cluster have access to the physical volumes that constitute the volume group. Asymmetric cluster configurations in which some nodes have access to the storage and others do not are not supported.
    When managing volume groups using CLVMD to allow for concurrent activation of volumes across multiple nodes, the volume groups must have the clustered flag enabled. This flag allows CLVMD to identify the volumes it must manage, which is what enables CLVMD to maintain LVM metadata continuity. Failure to adhere to this configuration renders the configuration unsupported by Red Hat and may result in storage corruption and loss of data.
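    To verify that the volume group was created as a clustered volume group, you can inspect its attributes; a c in the volume group attribute string indicates that the clustered flag is set.
    # vgs -o vg_name,vg_attr sasbin_vg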
  8. Format the logical volume with a GFS2 file system. One journal is required for each node that mounts the file system. Ensure that you create enough journals for each of the nodes in your cluster.
    # mkfs.gfs2 -j2 -p lock_dlm -t rhel7-demo:sasbin /dev/sasbin_vg/sasbin_lv

    Warning

    When you create the GFS2 file system, it is important to specify a correct value for the -t LockTableName option. The correct format is ClusterName:FSName. Failure to specify a correct value will prevent the file system from mounting. Additionally, the file system name must be unique. For more information on the options for the mkfs.gfs2 command, see Section 3.1, “Creating a GFS2 File System”.
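    If you want to confirm the lock protocol and lock table name that were written to the superblock, recent versions of the gfs2-utils package can display them with the tunegfs2 utility.
    # tunegfs2 -l /dev/sasbin_vg/sasbin_lv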
  9. Configure a clusterfs resource.
    You should not add the file system to the /etc/fstab file because it will be managed as a Pacemaker cluster resource. Mount options can be specified as part of the resource configuration with options=options. Run the pcs resource describe Filesystem command for full configuration options.
    This cluster resource creation command specifies the noatime mount option, which is recommended for GFS2 file systems where the application allows it.
    In this example, the file system has the same name as the mount point. This is not a requirement, but it is recommended that the file system name relate to its actual use or mount point to help with troubleshooting should the file system encounter a problem.
    # pcs resource create clusterfs Filesystem device="/dev/sasbin_vg/sasbin_lv" directory="/usr/local/sasbin" fstype="gfs2" options="noatime" op monitor interval=10s on-fail=fence clone interleave=true
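    After creating the resource, you can review the configuration that Pacemaker recorded for it, which is a convenient way to confirm the device, mount point, and mount options; the exact subcommand may differ in later pcs versions.
    # pcs resource show clusterfs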
  10. Set up GFS2 and clvmd dependency and startup order. GFS2 must start after clvmd and must run on the same node as clvmd.
    # pcs constraint order start clvmd-clone then clusterfs-clone
    # pcs constraint colocation add clusterfs-clone with clvmd-clone
  11. Verify that GFS2 is mounted as expected.
    # mount |grep sas
    /dev/mapper/sasbin_vg-sasbin_lv on /usr/local/sasbin type gfs2 (rw,noatime,seclabel)
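    You can also check the overall cluster state; all of the clone resources (dlm-clone, clvmd-clone, and clusterfs-clone) should be reported as started on every cluster node.
    # pcs status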
    