Chapter 51. GFS2 file systems in a cluster
Use the following administrative procedures to configure GFS2 file systems in a Red Hat high availability cluster.
51.1. Configuring a GFS2 file system in a cluster
You can set up a Pacemaker cluster that includes GFS2 file systems with the following procedure. In this example, you create three GFS2 file systems on three logical volumes in a two-node cluster.
Prerequisites
- Install and start the cluster software on both cluster nodes and create a basic two-node cluster.
- Configure fencing for the cluster.
For information about creating a Pacemaker cluster and configuring fencing for the cluster, see Creating a Red Hat High-Availability cluster with Pacemaker.
Procedure
- On both nodes in the cluster, enable the repository for Resilient Storage that corresponds to your system architecture. For example, to enable the Resilient Storage repository for an x86_64 system, you can enter the following subscription-manager command:
  # subscription-manager repos --enable=rhel-8-for-x86_64-resilientstorage-rpms
  Note that the Resilient Storage repository is a superset of the High Availability repository. If you enable the Resilient Storage repository, you do not also need to enable the High Availability repository.
- On both nodes of the cluster, install the lvm2-lockd, gfs2-utils, and dlm packages. To support these packages, you must be subscribed to the AppStream channel and the Resilient Storage channel.
  # yum install lvm2-lockd gfs2-utils dlm
- On both nodes of the cluster, set the use_lvmlockd configuration option in the /etc/lvm/lvm.conf file to use_lvmlockd=1.
  ...
  use_lvmlockd = 1
  ...
- Set the global Pacemaker parameter no-quorum-policy to freeze.
  Note
  By default, the value of no-quorum-policy is set to stop, which means that once quorum is lost, all the resources on the remaining partition are immediately stopped. Typically this default is the safest and best option, but unlike most resources, GFS2 requires quorum to function. When quorum is lost, neither the applications using the GFS2 mounts nor the GFS2 mount itself can be stopped correctly. Any attempt to stop these resources without quorum will fail, which will ultimately result in the entire cluster being fenced every time quorum is lost.
  To address this situation, set no-quorum-policy to freeze when GFS2 is in use. This means that when quorum is lost, the remaining partition does nothing until quorum is regained.
  [root@z1 ~]# pcs property set no-quorum-policy=freeze
- Set up a dlm resource. This is a required dependency for configuring a GFS2 file system in a cluster. This example creates the dlm resource as part of a resource group named locking.
  [root@z1 ~]# pcs resource create dlm --group locking ocf:pacemaker:controld op monitor interval=30s on-fail=fence
- Clone the locking resource group so that the resource group can be active on both nodes of the cluster.
  [root@z1 ~]# pcs resource clone locking interleave=true
- Set up an lvmlockd resource as part of the locking resource group.
  [root@z1 ~]# pcs resource create lvmlockd --group locking ocf:heartbeat:lvmlockd op monitor interval=30s on-fail=fence
- Check the status of the cluster to ensure that the locking resource group has started on both nodes of the cluster.
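  For example, the pcs status command displays the cluster status; the dlm and lvmlockd resources in the locking-clone clone set should show as Started on both nodes.
  [root@z1 ~]# pcs status --full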
- On one node of the cluster, create two shared volume groups. One volume group will contain two GFS2 file systems, and the other volume group will contain one GFS2 file system.
  Note
  If your LVM volume group contains one or more physical volumes that reside on remote block storage, such as an iSCSI target, Red Hat recommends that you ensure that the service providing that storage starts before Pacemaker starts. For information about configuring startup order for a remote physical volume used by a Pacemaker cluster, see Configuring startup order for resource dependencies not managed by Pacemaker.
  The following command creates the shared volume group shared_vg1 on /dev/vdb.
  [root@z1 ~]# vgcreate --shared shared_vg1 /dev/vdb
    Physical volume "/dev/vdb" successfully created.
    Volume group "shared_vg1" successfully created
    VG shared_vg1 starting dlm lockspace
    Starting locking.  Waiting until locks are ready...
  The following command creates the shared volume group shared_vg2 on /dev/vdc.
  [root@z1 ~]# vgcreate --shared shared_vg2 /dev/vdc
    Physical volume "/dev/vdc" successfully created.
    Volume group "shared_vg2" successfully created
    VG shared_vg2 starting dlm lockspace
    Starting locking.  Waiting until locks are ready...
- On the second node in the cluster:
  (RHEL 8.5 and later) If you have enabled the use of a devices file by setting use_devicesfile = 1 in the lvm.conf file, add the shared devices to the devices file. By default, the use of a devices file is not enabled.
  [root@z2 ~]# lvmdevices --adddev /dev/vdb
  [root@z2 ~]# lvmdevices --adddev /dev/vdc
- Start the lock manager for each of the shared volume groups.
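  For example, assuming the shared volume groups created in the previous step, the vgchange --lockstart command starts the lock manager for each volume group on this node:
  [root@z2 ~]# vgchange --lockstart shared_vg1
  [root@z2 ~]# vgchange --lockstart shared_vg2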
 
- On one node in the cluster, create the shared logical volumes and format the volumes with a GFS2 file system. One journal is required for each node that mounts the file system. Ensure that you create enough journals for each of the nodes in your cluster. The format of the lock table name is ClusterName:FSName, where ClusterName is the name of the cluster for which the GFS2 file system is being created and FSName is the file system name, which must be unique among all lock_dlm file systems in the cluster.
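  For example, the following commands create the three shared logical volumes and format them with GFS2 file systems for this two-node cluster. The volume size of 5 GB, the cluster name my_cluster, and the file system names are illustrative placeholders; the logical volume names match those used in the resource configuration that follows.
  [root@z1 ~]# lvcreate --activate sy -L5G -n shared_lv1 shared_vg1
  [root@z1 ~]# lvcreate --activate sy -L5G -n shared_lv2 shared_vg1
  [root@z1 ~]# lvcreate --activate sy -L5G -n shared_lv1 shared_vg2
  [root@z1 ~]# mkfs.gfs2 -j2 -p lock_dlm -t my_cluster:gfs2-demo1 /dev/shared_vg1/shared_lv1
  [root@z1 ~]# mkfs.gfs2 -j2 -p lock_dlm -t my_cluster:gfs2-demo2 /dev/shared_vg1/shared_lv2
  [root@z1 ~]# mkfs.gfs2 -j2 -p lock_dlm -t my_cluster:gfs2-demo3 /dev/shared_vg2/shared_lv1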
- Create an LVM-activate resource for each logical volume to automatically activate that logical volume on all nodes.
  Create an LVM-activate resource named sharedlv1 for the logical volume shared_lv1 in volume group shared_vg1. This command also creates the resource group shared_vg1 that includes the resource. In this example, the resource group has the same name as the shared volume group that includes the logical volume.
  [root@z1 ~]# pcs resource create sharedlv1 --group shared_vg1 ocf:heartbeat:LVM-activate lvname=shared_lv1 vgname=shared_vg1 activation_mode=shared vg_access_mode=lvmlockd
- Create an LVM-activate resource named sharedlv2 for the logical volume shared_lv2 in volume group shared_vg1. This resource will also be part of the resource group shared_vg1.
  [root@z1 ~]# pcs resource create sharedlv2 --group shared_vg1 ocf:heartbeat:LVM-activate lvname=shared_lv2 vgname=shared_vg1 activation_mode=shared vg_access_mode=lvmlockd
- Create an LVM-activate resource named sharedlv3 for the logical volume shared_lv1 in volume group shared_vg2. This command also creates the resource group shared_vg2 that includes the resource.
  [root@z1 ~]# pcs resource create sharedlv3 --group shared_vg2 ocf:heartbeat:LVM-activate lvname=shared_lv1 vgname=shared_vg2 activation_mode=shared vg_access_mode=lvmlockd
 
- Clone the two new resource groups.
  [root@z1 ~]# pcs resource clone shared_vg1 interleave=true
  [root@z1 ~]# pcs resource clone shared_vg2 interleave=true
- Configure ordering constraints to ensure that the locking resource group that includes the dlm and lvmlockd resources starts first.
  [root@z1 ~]# pcs constraint order start locking-clone then shared_vg1-clone
  Adding locking-clone shared_vg1-clone (kind: Mandatory) (Options: first-action=start then-action=start)
  [root@z1 ~]# pcs constraint order start locking-clone then shared_vg2-clone
  Adding locking-clone shared_vg2-clone (kind: Mandatory) (Options: first-action=start then-action=start)
- Configure colocation constraints to ensure that the shared_vg1 and shared_vg2 resource groups start on the same node as the locking resource group.
  [root@z1 ~]# pcs constraint colocation add shared_vg1-clone with locking-clone
  [root@z1 ~]# pcs constraint colocation add shared_vg2-clone with locking-clone
- On both nodes in the cluster, verify that the logical volumes are active. There may be a delay of a few seconds.
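  For example, the lvs command on each node lists the logical volumes; the three shared logical volumes should be reported as active on both nodes.
  [root@z1 ~]# lvs
  [root@z2 ~]# lvs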
- Create a file system resource to automatically mount each GFS2 file system on all nodes.
  You should not add the file system to the /etc/fstab file because it will be managed as a Pacemaker cluster resource. Mount options can be specified as part of the resource configuration with options=options. Run the pcs resource describe Filesystem command to display the full configuration options.
  The following commands create the file system resources. These commands add each resource to the resource group that includes the logical volume resource for that file system.
  [root@z1 ~]# pcs resource create sharedfs1 --group shared_vg1 ocf:heartbeat:Filesystem device="/dev/shared_vg1/shared_lv1" directory="/mnt/gfs1" fstype="gfs2" options=noatime op monitor interval=10s on-fail=fence
  [root@z1 ~]# pcs resource create sharedfs2 --group shared_vg1 ocf:heartbeat:Filesystem device="/dev/shared_vg1/shared_lv2" directory="/mnt/gfs2" fstype="gfs2" options=noatime op monitor interval=10s on-fail=fence
  [root@z1 ~]# pcs resource create sharedfs3 --group shared_vg2 ocf:heartbeat:Filesystem device="/dev/shared_vg2/shared_lv1" directory="/mnt/gfs3" fstype="gfs2" options=noatime op monitor interval=10s on-fail=fence
Verification
- Verify that the GFS2 file systems are mounted on both nodes of the cluster.
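  For example, you can filter the mount output for GFS2 file systems on each node; the file systems should be mounted at /mnt/gfs1, /mnt/gfs2, and /mnt/gfs3.
  [root@z1 ~]# mount | grep gfs2
  [root@z2 ~]# mount | grep gfs2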
- Check the status of the cluster.
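  For example, display the full cluster status and confirm that all resources are started on both nodes.
  [root@z1 ~]# pcs status --full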
51.2. Configuring an encrypted GFS2 file system in a cluster
(RHEL 8.4 and later) You can create a Pacemaker cluster that includes a LUKS-encrypted GFS2 file system with the following procedure. In this example, you create one GFS2 file system on a logical volume and encrypt the file system. Encrypted GFS2 file systems are supported using the crypt resource agent, which provides support for LUKS encryption.
				
There are three parts to this procedure:
- Configuring a shared logical volume in a Pacemaker cluster
- Encrypting the logical volume and creating a crypt resource
- Formatting the encrypted logical volume with a GFS2 file system and creating a file system resource for the cluster
51.2.2. Encrypt the logical volume and create a crypt resource
Prerequisites
- You have configured a shared logical volume in a Pacemaker cluster.
Procedure
- On one node in the cluster, create a new file that will contain the crypt key and set the permissions on the file so that it is readable only by root.
  [root@z1 ~]# touch /etc/crypt_keyfile
  [root@z1 ~]# chmod 600 /etc/crypt_keyfile
- Create the crypt key.
  [root@z1 ~]# dd if=/dev/urandom bs=4K count=1 of=/etc/crypt_keyfile
  1+0 records in
  1+0 records out
  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306202 s, 13.4 MB/s
- Distribute the crypt keyfile to the other nodes in the cluster, using the -p parameter to preserve the permissions you set.
  [root@z1 ~]# scp -p /etc/crypt_keyfile root@z2.example.com:/etc/
- Create the encrypted device on the LVM volume where you will configure the encrypted GFS2 file system.
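  For example, assuming the shared logical volume shared_lv1 in the volume group shared_vg1 from the preceding procedure, the following cryptsetup command formats the logical volume as a LUKS2 device using the key file created above:
  [root@z1 ~]# cryptsetup luksFormat /dev/shared_vg1/shared_lv1 --type luks2 --key-file=/etc/crypt_keyfile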
- Create the crypt resource as part of the shared_vg1 resource group.
  [root@z1 ~]# pcs resource create crypt --group shared_vg1 ocf:heartbeat:crypt crypt_dev="luks_lv1" crypt_type=luks2 key_file=/etc/crypt_keyfile encrypted_dev="/dev/shared_vg1/shared_lv1"
Verification
Ensure that the crypt resource has created the crypt device, which in this example is /dev/mapper/luks_lv1.
[root@z1 ~]# ls -l /dev/mapper/
...
lrwxrwxrwx 1 root root 7 Mar 4 09:52 luks_lv1 -> ../dm-3
...
51.2.3. Format the encrypted logical volume with a GFS2 file system and create a file system resource for the cluster
Prerequisites
- You have encrypted the logical volume and created a crypt resource.
Procedure
- On one node in the cluster, format the volume with a GFS2 file system. One journal is required for each node that mounts the file system. Ensure that you create enough journals for each of the nodes in your cluster. The format of the lock table name is ClusterName:FSName, where ClusterName is the name of the cluster for which the GFS2 file system is being created and FSName is the file system name, which must be unique among all lock_dlm file systems in the cluster.
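  For example, for the two-node cluster in this chapter, a command like the following formats the encrypted device created by the crypt resource; the cluster name my_cluster and the file system name gfs2-demo are placeholders for your own values.
  [root@z1 ~]# mkfs.gfs2 -j2 -p lock_dlm -t my_cluster:gfs2-demo /dev/mapper/luks_lv1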
- Create a file system resource to automatically mount the GFS2 file system on all nodes.
  Do not add the file system to the /etc/fstab file because it will be managed as a Pacemaker cluster resource. Mount options can be specified as part of the resource configuration with options=options. Run the pcs resource describe Filesystem command for full configuration options.
  The following command creates the file system resource. This command adds the resource to the resource group that includes the logical volume resource for that file system.
  [root@z1 ~]# pcs resource create sharedfs1 --group shared_vg1 ocf:heartbeat:Filesystem device="/dev/mapper/luks_lv1" directory="/mnt/gfs1" fstype="gfs2" options=noatime op monitor interval=10s on-fail=fence
Verification
- Verify that the GFS2 file system is mounted on both nodes of the cluster.
  [root@z1 ~]# mount | grep gfs2
  /dev/mapper/luks_lv1 on /mnt/gfs1 type gfs2 (rw,noatime,seclabel)
  [root@z2 ~]# mount | grep gfs2
  /dev/mapper/luks_lv1 on /mnt/gfs1 type gfs2 (rw,noatime,seclabel)
- Check the status of the cluster.
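  For example, display the full cluster status and confirm that the crypt and file system resources are started on both nodes.
  [root@z1 ~]# pcs status --full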
51.3. Migrating a GFS2 file system from RHEL7 to RHEL8
You can use your existing Red Hat Enterprise Linux 7 logical volumes when configuring a RHEL 8 cluster that includes GFS2 file systems.
					In Red Hat Enterprise Linux 8, LVM uses the LVM lock daemon lvmlockd instead of clvmd for managing shared storage devices in an active/active cluster. This requires that you configure the logical volumes that your active/active cluster will require as shared logical volumes. Additionally, this requires that you use the LVM-activate resource to manage an LVM volume and that you use the lvmlockd resource agent to manage the lvmlockd daemon. See Configuring a GFS2 file system in a cluster for a full procedure for configuring a Pacemaker cluster that includes GFS2 file systems using shared logical volumes.
				
To use your existing Red Hat Enterprise Linux 7 logical volumes when configuring a RHEL 8 cluster that includes GFS2 file systems, perform the following procedure from the RHEL 8 cluster. In this example, the clustered RHEL 7 logical volume is part of the volume group upgrade_gfs_vg.
				
The RHEL 8 cluster must have the same name as the RHEL 7 cluster that includes the GFS2 file system in order for the existing file system to be valid.
Procedure
- Ensure that the logical volumes containing the GFS2 file systems are currently inactive. This procedure is safe only if all nodes have stopped using the volume group.
- From one node in the cluster, forcibly change the volume group to be local.
  [root@rhel8-01 ~]# vgchange --lock-type none --lock-opt force upgrade_gfs_vg
  Forcibly change VG lock type to none? [y/n]: y
  Volume group "upgrade_gfs_vg" successfully changed
- From one node in the cluster, change the local volume group to a shared volume group.
  [root@rhel8-01 ~]# vgchange --lock-type dlm upgrade_gfs_vg
  Volume group "upgrade_gfs_vg" successfully changed
- On each node in the cluster, start locking for the volume group.
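  For example, the vgchange --lockstart command starts lvmlockd locking for the upgraded volume group; run it on each node.
  [root@rhel8-01 ~]# vgchange --lockstart upgrade_gfs_vg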
					After performing this procedure, you can create an LVM-activate resource for each logical volume.