4.2. Configuring a Clustered LVM Volume with a GFS2 File System

This use case requires a clustered LVM logical volume on storage that is shared between the nodes of the cluster. This section describes how to create such a logical volume and format it with a GFS2 file system. In this example, the shared partition /dev/vdb holds the LVM physical volume from which the LVM logical volume will be created.

Note

LVM volumes and the corresponding partitions and devices used by cluster nodes must be connected only to the cluster nodes.
Before starting this procedure, install the lvm2-cluster and gfs2-utils packages, which are part of the Resilient Storage channel, on both nodes of the cluster.
# yum install lvm2-cluster gfs2-utils
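Optionally, you can confirm that both packages are installed on each node; this rpm query is a simple illustrative check.
# rpm -q lvm2-cluster gfs2-utils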
Because the /dev/vdb partition is shared storage, you perform this procedure on one node only.
  1. Set the global Pacemaker parameter no-quorum-policy to freeze. This prevents the entire cluster from being fenced every time quorum is lost. For further information on setting this policy, see Global File System 2.
    [root@z1 ~]# pcs property set no-quorum-policy=freeze
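    To confirm the setting, you can display the property with pcs; the output shown here is illustrative.
    [root@z1 ~]# pcs property show no-quorum-policy
    Cluster Properties:
     no-quorum-policy: freeze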
  2. Set up a dlm resource. This is a required dependency for the clvmd service and the GFS2 file system.
    [root@z1 ~]# pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true
  3. Set up clvmd as a cluster resource.
    [root@z1 ~]# pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s on-fail=fence clone interleave=true ordered=true
    Note that the ocf:heartbeat:clvm resource agent, as part of the start procedure, sets the locking_type parameter in the /etc/lvm/lvm.conf file to 3 and disables the lvmetad daemon.
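    If you want to confirm this change after the clvmd resource starts, you can inspect /etc/lvm/lvm.conf directly; this grep is an illustrative check, with the output trimmed to the active setting.
    [root@z1 ~]# grep locking_type /etc/lvm/lvm.conf
        locking_type = 3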
  4. Set up the clvmd and dlm dependency and startup order. The clvmd resource must start after the dlm resource and must run on the same node as the dlm resource.
    [root@z1 ~]# pcs constraint order start dlm-clone then clvmd-clone
    Adding dlm-clone clvmd-clone (kind: Mandatory) (Options: first-action=start then-action=start)
    [root@z1 ~]# pcs constraint colocation add clvmd-clone with dlm-clone
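    You can list the resulting constraints to confirm them; the output below is illustrative.
    [root@z1 ~]# pcs constraint
    Location Constraints:
    Ordering Constraints:
      start dlm-clone then start clvmd-clone (kind:Mandatory)
    Colocation Constraints:
      clvmd-clone with dlm-clone (score:INFINITY)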
  5. Verify that the dlm and clvmd resources are running on all nodes.
    [root@z1 ~]# pcs status
    ...
    Full list of resources:
    ...
     Clone Set: dlm-clone [dlm]
         Started: [ z1 z2 ]
     Clone Set: clvmd-clone [clvmd]
         Started: [ z1 z2 ]
    
  6. Create the clustered logical volume. The -Ay option of vgcreate enables automatic backup of the volume group metadata, and -cy creates the volume group as a clustered volume group.
    [root@z1 ~]# pvcreate /dev/vdb
    [root@z1 ~]# vgcreate -Ay -cy cluster_vg /dev/vdb
    [root@z1 ~]# lvcreate -L4G -n cluster_lv cluster_vg
  7. Optionally, to verify that the volume was created successfully, you can use the lvs command to display the logical volume.
    [root@z1 ~]# lvs
      LV         VG         Attr       LSize ...
      cluster_lv cluster_vg -wi-ao---- 4.00g
      ...
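      You can also confirm that the volume group was created as a clustered volume group with the vgs command; a c in the sixth position of the Attr field indicates a clustered volume group. The output below is illustrative.
      [root@z1 ~]# vgs cluster_vg
        VG         #PV #LV #SN Attr   VSize VFree
        cluster_vg   1   1   0 wz--nc ...   ...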
    
  8. Format the volume with a GFS2 file system. In this example, my_cluster is the cluster name. This example specifies -j 2 to indicate two journals because the number of journals you configure must equal the number of nodes in the cluster.
    [root@z1 ~]# mkfs.gfs2 -p lock_dlm -j 2 -t my_cluster:samba /dev/cluster_vg/cluster_lv
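    The number of journals is set when the file system is created, but if you later add a node to the cluster you can add a journal with the gfs2_jadd command, run on a node where the file system is mounted. This illustrative command assumes the mount point created later in this procedure.
    [root@z1 ~]# gfs2_jadd -j 1 /mnt/gfs2share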
  9. Create a Filesystem resource, which configures Pacemaker to mount and manage the file system. This example creates a Filesystem resource named fs and mounts the GFS2 file system on /mnt/gfs2share on both nodes of the cluster.
    [root@z1 ~]# pcs resource create fs ocf:heartbeat:Filesystem device="/dev/cluster_vg/cluster_lv" directory="/mnt/gfs2share" fstype="gfs2" --clone
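    The mount verification at the end of this procedure shows the noatime mount option. If you want to set mount options such as noatime, the Filesystem resource agent accepts them through its options parameter; this illustrative command updates the existing resource.
    [root@z1 ~]# pcs resource update fs options="noatime"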
  10. Configure a dependency and a startup order for the GFS2 file system and the clvmd service. GFS2 must start after clvmd and must run on the same node as clvmd.
    [root@z1 ~]# pcs constraint order start clvmd-clone then fs-clone
    Adding clvmd-clone fs-clone (kind: Mandatory) (Options: first-action=start then-action=start)
    [root@z1 ~]# pcs constraint colocation add fs-clone with clvmd-clone
  11. Verify that the GFS2 file system is mounted as expected.
    [root@z1 ~]# mount | grep /mnt/gfs2share
    /dev/mapper/cluster_vg-cluster_lv on /mnt/gfs2share type gfs2 (rw,noatime,seclabel)
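    Because the fs resource is cloned, the file system should be mounted on every node of the cluster; you can run the same check on the other node. The output below is illustrative.
    [root@z2 ~]# mount | grep /mnt/gfs2share
    /dev/mapper/cluster_vg-cluster_lv on /mnt/gfs2share type gfs2 (rw,noatime,seclabel)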
    