2.5. Usage Considerations
This section provides general recommendations about GFS2 usage.
2.5.1. Mount Options: noatime and nodiratime
It is generally recommended to mount GFS2 file systems with the noatime and nodiratime arguments. This allows GFS2 to spend less time updating disk inodes for every access. For more information on the effect of these arguments on GFS2 file system performance, see Section 2.9, “GFS2 Node Locking”.
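For example, a manual mount using these arguments might look like the following sketch, where the device path /dev/mapper/xyz and mount point /mnt/gfs2 are placeholders (in a Pacemaker cluster, pass these options to the file system resource instead, as noted in Section 2.5.3).
# mount -t gfs2 -o noatime,nodiratime /dev/mapper/xyz /mnt/gfs2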
2.5.2. VFS Tuning Options: Research and Experiment
Like all Linux file systems, GFS2 sits on top of a layer called the virtual file system (VFS). You can tune the VFS layer to improve underlying GFS2 performance by using the sysctl(8) command. For example, the values for dirty_background_ratio and vfs_cache_pressure may be adjusted depending on your situation. To fetch the current values, use the following commands:
# sysctl -n vm.dirty_background_ratio
# sysctl -n vm.vfs_cache_pressure
The following commands adjust the values:
# sysctl -w vm.dirty_background_ratio=20
# sysctl -w vm.vfs_cache_pressure=500
You can permanently change the values of these parameters by editing the /etc/sysctl.conf file.
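For example, to persist the values shown above across reboots, the corresponding /etc/sysctl.conf entries would look like the following (the values themselves are illustrative, not tuning recommendations):
vm.dirty_background_ratio = 20
vm.vfs_cache_pressure = 500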
To find the optimal values for your use cases, research the various VFS options and experiment on a test cluster before deploying into full production.
2.5.3. SELinux on GFS2
As of Red Hat Enterprise Linux 7.4, Security Enhanced Linux (SELinux) is supported for use with GFS2 file systems.
Use of SELinux with GFS2 incurs a small performance penalty. To avoid this overhead, you may choose not to use SELinux with GFS2, even on a system with SELinux in enforcing mode. When mounting a GFS2 file system, you can ensure that SELinux will not attempt to read the seclabel element on each file system object by using one of the context options as described on the mount(8) man page; SELinux will assume that all content in the file system is labeled with the seclabel element provided in the context mount options. This will also speed up processing, as it avoids another disk read of the extended attribute block that could contain seclabel elements.
For example, on a system with SELinux in enforcing mode, you can use the following mount command to mount the GFS2 file system if the file system is going to contain Apache content. This label will apply to the entire file system; it remains in memory and is not written to disk.
# mount -t gfs2 -o context=system_u:object_r:httpd_sys_content_t:s0 /dev/mapper/xyz /mnt/gfs2
If you are not sure whether the file system will contain Apache content, you can use the labels public_content_rw_t or public_content_t, or you could define a new label altogether and define a policy around it.
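For example, a mount using the more general public_content_rw_t label might look like the following sketch; as before, the device path and mount point are placeholders.
# mount -t gfs2 -o context=system_u:object_r:public_content_rw_t:s0 /dev/mapper/xyz /mnt/gfs2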
Note that in a Pacemaker cluster you should always use Pacemaker to manage a GFS2 file system. You can specify the mount options when you create a GFS2 file system resource, as described in Chapter 5, Configuring a GFS2 File System in a Cluster.
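For example, mount options such as noatime and nodiratime can be passed through the options parameter of the file system resource. The following pcs command is a sketch that assumes a hypothetical resource name, clustered logical volume, and mount point; see Chapter 5 for the complete procedure.
# pcs resource create clusterfs Filesystem device="/dev/clustervg/clusterlv" directory="/mnt/gfs2" fstype="gfs2" options="noatime,nodiratime" op monitor interval=10s on-fail=fence clone interleave=true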
2.5.4. Setting Up NFS Over GFS2
Due to the added complexity of the GFS2 locking subsystem and its clustered nature, setting up NFS over GFS2 requires taking many precautions and careful configuration. This section describes the caveats you should take into account when configuring an NFS service over a GFS2 file system.
Warning
If the GFS2 file system is NFS exported, then you must mount the file system with the localflocks option. Because using the localflocks option prevents you from safely accessing the GFS2 file system from multiple locations, and it is not viable to export GFS2 from multiple nodes simultaneously, it is a support requirement that the GFS2 file system be mounted on only one node at a time when using this configuration. The intended effect of this is to force the POSIX locks from each server to be local: non-clustered and independent of each other. This is necessary because a number of problems exist if GFS2 attempts to implement POSIX locks from NFS across the nodes of a cluster.
For applications running on NFS clients, localized POSIX locks mean that two clients can hold the same lock concurrently if the two clients are mounting from different servers, which could cause data corruption. If all clients mount NFS from one server, then the problem of separate servers granting the same locks independently goes away. If you are not sure whether to mount your file system with the localflocks option, you should not use the option; contact Red Hat support immediately to discuss the appropriate configuration to avoid data loss. Exporting GFS2 via NFS, while technically supported in some circumstances, is not recommended.
For all other (non-NFS) GFS2 applications, do not mount your file system using localflocks, so that GFS2 will manage the POSIX locks and flocks between all the nodes in the cluster (on a cluster-wide basis). If you specify localflocks and do not use NFS, the other nodes in the cluster will not have knowledge of each other's POSIX locks and flocks, thus making them unsafe in a clustered environment.
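If you do export a GFS2 file system over NFS from a single node, a minimal sketch of a mount using the localflocks option follows; the device path and mount point are placeholders.
# mount -t gfs2 -o localflocks /dev/mapper/xyz /mnt/gfs2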
In addition to the locking considerations, you should take the following into account when configuring an NFS service over a GFS2 file system.
- Red Hat supports only Red Hat High Availability Add-On configurations using NFSv3 with locking in an active/passive configuration with the following characteristics:
  - The back-end file system is a GFS2 file system running on a 2 to 16 node cluster.
  - An NFSv3 server is defined as a service exporting the entire GFS2 file system from a single cluster node at a time.
  - The NFS server can fail over from one cluster node to another (active/passive configuration).
  - No access to the GFS2 file system is allowed except through the NFS server. This includes both local GFS2 file system access as well as access through Samba or Clustered Samba. Accessing the file system locally via the cluster node from which it is mounted may result in data corruption.
  - There is no NFS quota support on the system.
  This configuration provides High Availability (HA) for the file system and reduces system downtime, since a failed node does not result in the requirement to execute the fsck command when failing the NFS server from one node to another.
- The fsid= NFS option is mandatory for NFS exports of GFS2 (see the example after this list).
- If problems arise with your cluster (for example, the cluster becomes inquorate and fencing is not successful), the clustered logical volumes and the GFS2 file system will be frozen and no access is possible until the cluster is quorate. You should consider this possibility when determining whether a simple failover solution such as the one defined in this procedure is the most appropriate for your system.
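As an illustration of the fsid= requirement, an /etc/exports entry for an NFS-exported GFS2 mount point might look like the following; the mount point, client, and fsid value are arbitrary placeholders.
/mnt/gfs2 client.example.com(rw,sync,fsid=25)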
2.5.5. Samba (SMB or Windows) File Serving Over GFS2
You can use Samba (SMB or Windows) file serving from a GFS2 file system with CTDB, which allows active/active configurations.
Simultaneous access to the data in the Samba share from outside of Samba is not supported. There is currently no support for GFS2 cluster leases, which slows Samba file serving. For further information on support policies for Samba, see Support Policies for RHEL Resilient Storage - ctdb General Policies and Support Policies for RHEL Resilient Storage - Exporting gfs2 contents via other protocols on the Red Hat Customer Portal.
2.5.6. Configuring Virtual Machines for GFS2
When using a GFS2 file system with a virtual machine, it is important that your VM storage settings on each node be configured properly in order to force the cache off. For example, including these settings for cache and io in the libvirt domain should allow GFS2 to behave as expected.
<driver name='qemu' type='raw' cache='none' io='native'/>
Alternatively, you can configure the shareable attribute within the device element. This indicates that the device is expected to be shared between domains (as long as the hypervisor and OS support this). If shareable is used, cache='none' should be used for that device.
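For example, a complete disk definition in the libvirt domain XML combining these settings might look like the following sketch; the source device and target name are placeholders.
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/mapper/xyz'/>
  <target dev='vdb' bus='virtio'/>
  <shareable/>
</disk>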