Chapter 1. GFS2 Overview
The Red Hat GFS2 file system is included in the Resilient Storage Add-On. It is a native file system that interfaces directly with the Linux kernel file system interface (VFS layer). When implemented as a cluster file system, GFS2 employs distributed metadata and multiple journals. Red Hat supports the use of GFS2 file systems only as implemented in the High Availability Add-On.
Note
Although a GFS2 file system can be implemented in a standalone system or as part of a cluster configuration, for the Red Hat Enterprise Linux 6 release Red Hat does not support the use of GFS2 as a single-node file system. Red Hat does support a number of high-performance single-node file systems that are optimized for single-node use and thus generally have lower overhead than a cluster file system. Red Hat recommends using these file systems in preference to GFS2 in cases where only a single node needs to mount the file system.
Red Hat will continue to support single-node GFS2 file systems for mounting snapshots of cluster file systems (for example, for backup purposes).
Note
Red Hat does not support using GFS2 for cluster file system deployments greater than 16 nodes.
GFS2 is based on a 64-bit architecture, which can theoretically accommodate an 8 EB file system. However, the current supported maximum size of a GFS2 file system for 64-bit hardware is 100 TB. The current supported maximum size of a GFS2 file system for 32-bit hardware is 16 TB. If your system requires larger GFS2 file systems, contact your Red Hat service representative.
When determining the size of your file system, you should consider your recovery needs. Running the fsck.gfs2 command on a very large file system can take a long time and consume a large amount of memory. Additionally, in the event of a disk or disk-subsystem failure, recovery time is limited by the speed of your backup media. For information on the amount of memory the fsck.gfs2 command requires, see Section 4.11, “Repairing a File System”.
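As a sketch of the repair procedure (the mount point and device path below are placeholders for your environment), the file system must be unmounted on every node before it is checked:

```shell
# Unmount the file system on all cluster nodes before checking it.
umount /mnt/gfs2

# Run the GFS2 file system checker; -y answers yes to all repair prompts.
fsck.gfs2 -y /dev/vg_cluster/lv_gfs2
```

On a large file system this can run for hours, which is why recovery time should factor into your sizing decisions.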
When configured in a cluster, Red Hat GFS2 nodes can be configured and managed with High Availability Add-On configuration and management tools. Red Hat GFS2 then provides data sharing among GFS2 nodes in a cluster, with a single, consistent view of the file system name space across the GFS2 nodes. This allows processes on different nodes to share GFS2 files in the same way that processes on the same node can share files on a local file system, with no discernible difference. For information about the High Availability Add-On, see Configuring and Managing a Red Hat Cluster.
While a GFS2 file system may be used outside of LVM, Red Hat supports only GFS2 file systems that are created on a CLVM logical volume. CLVM is included in the Resilient Storage Add-On. It is a cluster-wide implementation of LVM, enabled by the CLVM daemon clvmd, which manages LVM logical volumes in a cluster. The daemon makes it possible to use LVM2 to manage logical volumes across a cluster, allowing all nodes in the cluster to share the logical volumes. For information on the LVM volume manager, see Logical Volume Manager Administration.
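A minimal sketch of creating a GFS2 file system on a clustered logical volume follows; the device, volume group, cluster, and file system names are placeholders, and the journal count must match the number of nodes that will mount the file system:

```shell
# Create a clustered volume group and a logical volume on shared storage.
pvcreate /dev/sdb
vgcreate -c y vg_cluster /dev/sdb        # -c y marks the volume group as clustered
lvcreate -n lv_gfs2 -L 100G vg_cluster

# Create the GFS2 file system:
#   -p lock_dlm            cluster locking protocol
#   -t mycluster:mygfs2    cluster name : file system (lock table) name
#   -j 2                   one journal per node that will mount the file system
mkfs.gfs2 -p lock_dlm -t mycluster:mygfs2 -j 2 /dev/vg_cluster/lv_gfs2
```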
The gfs2.ko kernel module implements the GFS2 file system and is loaded on GFS2 cluster nodes.
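You can confirm that the module is available and loaded on a node with standard module tools:

```shell
# Load the GFS2 kernel module (normally done automatically at mount time)
# and verify that it is present.
modprobe gfs2
lsmod | grep gfs2
```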
Note
When you configure a GFS2 file system as a cluster file system, you must ensure that all nodes in the cluster have access to the shared storage. Asymmetric cluster configurations in which some nodes have access to the shared storage and others do not are not supported. This does not require that all nodes actually mount the GFS2 file system itself.
This chapter provides some basic, abbreviated information as background to help you understand GFS2. It contains the following sections:
1.1. New and Changed Features
This section lists new and changed features of the GFS2 file system and the GFS2 documentation that are included with the initial and subsequent releases of Red Hat Enterprise Linux 6.
1.1.1. New and Changed Features for Red Hat Enterprise Linux 6.0
Red Hat Enterprise Linux 6.0 includes the following documentation and feature updates and changes.
- For the Red Hat Enterprise Linux 6 release, Red Hat does not support the use of GFS2 as a single-node file system.
- For the Red Hat Enterprise Linux 6 release, the gfs2_convert command to upgrade from a GFS to a GFS2 file system has been enhanced. For information on this command, see Appendix B, Converting a File System from GFS to GFS2.
- The Red Hat Enterprise Linux 6 release supports the discard, nodiscard, barrier, nobarrier, quota_quantum, statfs_quantum, and statfs_percent mount options. For information about mounting a GFS2 file system, see Section 4.2, “Mounting a File System”.
- The Red Hat Enterprise Linux 6 version of this document contains a new section, Section 2.9, “GFS2 Node Locking”. This section describes some of the internals of GFS2 file systems.
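As an illustration (paths are placeholders), several of the mount options listed above can be combined in a single mount command:

```shell
# Mount a GFS2 file system with discard support, a 30-second quota sync
# interval, and a 10-second statfs update interval.
mount -o discard,quota_quantum=30,statfs_quantum=10 \
      /dev/vg_cluster/lv_gfs2 /mnt/gfs2
```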
1.1.2. New and Changed Features for Red Hat Enterprise Linux 6.1
Red Hat Enterprise Linux 6.1 includes the following documentation and feature updates and changes.
- As of the Red Hat Enterprise Linux 6.1 release, GFS2 supports the standard Linux quota facilities. GFS2 quota management is documented in Section 4.5, “GFS2 Quota Management”. For earlier releases of Red Hat Enterprise Linux, GFS2 required the gfs2_quota command to manage quotas. Documentation for the gfs2_quota command is now provided in Appendix A, GFS2 Quota Management with the gfs2_quota Command.
- This document now contains a new chapter, Chapter 5, Diagnosing and Correcting Problems with GFS2 File Systems.
- Small technical corrections and clarifications have been made throughout the document.
1.1.3. New and Changed Features for Red Hat Enterprise Linux 6.2
Red Hat Enterprise Linux 6.2 includes the following documentation and feature updates and changes.
- As of the Red Hat Enterprise Linux 6.2 release, GFS2 supports the tunegfs2 command, which replaces some of the features of the gfs2_tool command. For further information, see the tunegfs2 man page. The following sections have been updated to provide administrative procedures that do not require the use of the gfs2_tool command:
  - Section 4.5.4, “Synchronizing Quotas with the quotasync Command” and Section A.3, “Synchronizing Quotas with the gfs2_quota Command” now describe how to change the quota_quantum parameter from its default value of 60 seconds by using the quota_quantum= mount option.
  - Section 4.10, “Suspending Activity on a File System” now describes how to suspend write activity to a file system using the dmsetup suspend command.
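As a sketch (the device path is a placeholder), suspending and resuming write activity with the device mapper looks like this:

```shell
# Suspend write activity to the device backing the file system.
dmsetup suspend /dev/vg_cluster/lv_gfs2

# ... take a snapshot or perform other maintenance here ...

# Resume normal write activity.
dmsetup resume /dev/vg_cluster/lv_gfs2
```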
- This document includes a new appendix, Appendix C, GFS2 tracepoints and the debugfs glocks File. This appendix describes the glock debugfs interface and the GFS2 tracepoints. It is intended for advanced users who are familiar with file system internals and who would like to learn more about the design of GFS2 and how to debug GFS2-specific issues.
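For orientation (the cluster and file system names below are placeholders), the glock state of a mounted GFS2 file system can be inspected through debugfs:

```shell
# Mount debugfs if it is not already mounted.
mount -t debugfs none /sys/kernel/debug

# Dump the glock state; the directory name is the file system's
# lock table name (cluster name : file system name).
cat /sys/kernel/debug/gfs2/mycluster:mygfs2/glocks
```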
1.1.4. New and Changed Features for Red Hat Enterprise Linux 6.3
For the Red Hat Enterprise Linux 6.3 release, this document contains a new chapter, Chapter 2, GFS2 Configuration and Operational Considerations. This chapter provides recommendations for optimizing GFS2 performance, including recommendations for creating, using, and maintaining a GFS2 file system.
In addition, small clarifications and corrections have been made throughout the document.
1.1.5. New and Changed Features for Red Hat Enterprise Linux 6.4
For the Red Hat Enterprise Linux 6.4 release, Chapter 2, GFS2 Configuration and Operational Considerations has been updated with small clarifications.
1.1.6. New and Changed Features for Red Hat Enterprise Linux 6.6
For the Red Hat Enterprise Linux 6.6 release, this document contains a new chapter, Chapter 6, Configuring a GFS2 File System in a Pacemaker Cluster. This chapter provides an outline of the steps required to set up a Pacemaker cluster that includes a GFS2 file system.
In addition, small clarifications and corrections have been made throughout the document.