Chapter 2. GFS2 Overview

The Red Hat GFS2 file system is a native file system that interfaces directly with the Linux kernel file system interface (VFS layer). When implemented as a cluster file system, GFS2 employs distributed metadata and multiple journals. Red Hat supports the use of GFS2 file systems only as implemented in Red Hat Cluster Suite.

Note

Although a GFS2 file system can be implemented in a standalone system or as part of a cluster configuration, for the Red Hat Enterprise Linux 5.5 release and later, Red Hat does not support the use of GFS2 as a single-node file system. Red Hat does support a number of high-performance single-node file systems which are optimized for a single node and thus generally have lower overhead than a cluster file system. Red Hat recommends using these file systems in preference to GFS2 in cases where only a single node needs to mount the file system.
Red Hat will continue to support single-node GFS2 file systems for existing customers.

Note

Red Hat does not support using GFS2 for cluster file system deployments greater than 16 nodes.
GFS2 is based on a 64-bit architecture, which can theoretically accommodate an 8 EB file system. However, the current supported maximum size of a GFS2 file system for 64-bit hardware is 100 TB. The current supported maximum size of a GFS2 file system for 32-bit hardware for Red Hat Enterprise Linux Release 5.3 and later is 16 TB. If your system requires larger GFS2 file systems, contact your Red Hat service representative.
When determining the size of your file system, you should consider your recovery needs. Running the fsck.gfs2 command on a very large file system can take a long time and consume a large amount of memory. Additionally, in the event of a disk or disk-subsystem failure, recovery time is limited by the speed of your backup media. For information on the amount of memory the fsck.gfs2 command requires, see Section 4.11, “Repairing a File System”.
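As a minimal sketch (the volume group and logical volume names here are placeholders, not values from this guide), a file system check is run against the block device while the file system is unmounted on all nodes:

# fsck.gfs2 -y /dev/shared_vg/mydata1

The -y option answers yes to all repair questions so the check runs without prompting; omit it if you prefer to review each proposed fix.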
When configured in a Red Hat Cluster Suite cluster, Red Hat GFS2 nodes can be configured and managed with Red Hat Cluster Suite configuration and management tools. Red Hat GFS2 then provides data sharing among GFS2 nodes in a Red Hat cluster, with a single, consistent view of the file system name space across the GFS2 nodes. This allows processes on different nodes to share GFS2 files in the same way that processes on the same node can share files on a local file system, with no discernible difference. For information about Red Hat Cluster Suite, see Configuring and Managing a Red Hat Cluster.
While a GFS2 file system may be used outside of LVM, Red Hat supports only GFS2 file systems that are created on a CLVM logical volume. CLVM is a cluster-wide implementation of LVM, enabled by the CLVM daemon clvmd, which manages LVM logical volumes in a Red Hat Cluster Suite cluster. The daemon makes it possible to use LVM2 to manage logical volumes across a cluster, allowing all nodes in the cluster to share the logical volumes. For information on the LVM volume manager, see Logical Volume Manager Administration.
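As a hedged illustration of creating a CLVM logical volume for GFS2 (the device path, the volume group name shared_vg, and the logical volume name mydata1 are placeholders, and this assumes clvmd is running on all nodes with cluster locking enabled in /etc/lvm/lvm.conf):

# pvcreate /dev/sdb1
# vgcreate -c y shared_vg /dev/sdb1
# lvcreate -L 100G -n mydata1 shared_vg

The -c y option marks the volume group as clustered so that clvmd coordinates access to it from every node.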
The gfs2.ko kernel module implements the GFS2 file system and is loaded on GFS2 cluster nodes.
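The module is normally loaded automatically when a GFS2 file system is mounted; as a quick sketch of how you might verify this on a node:

# lsmod | grep gfs2
# modprobe gfs2

The first command shows whether the module is currently loaded; the second loads it manually if it is not already present.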

Note

When you configure a GFS2 file system as a cluster file system, you must ensure that all nodes in the cluster have access to the shared storage. Asymmetric cluster configurations in which some nodes have access to the shared storage and others do not are not supported. This does not require that all nodes actually mount the GFS2 file system itself.
This chapter provides some basic, abbreviated information as background to help you understand GFS2.

2.1. Before Setting Up GFS2

Before you install and set up GFS2, note the following key characteristics of your GFS2 file systems:
GFS2 nodes
Determine which nodes in the Red Hat Cluster Suite will mount the GFS2 file systems.
Number of file systems
Determine how many GFS2 file systems to create initially. (More file systems can be added later.)
File system name
Determine a unique name for each file system. The name must be unique among all lock_dlm file systems over the cluster, and among all file systems (lock_dlm and lock_nolock) on each local node. Each file system name is specified as a parameter when the file system is created, as shown in the example following this list. For example, this book uses file system names mydata1 and mydata2 in some example procedures.
Journals
Determine the number of journals for your GFS2 file systems. One journal is required for each node that mounts a GFS2 file system. GFS2 allows you to add journals dynamically at a later point as additional servers mount a file system. For information on adding journals to a GFS2 file system, see Section 4.7, “Adding Journals to a File System”.
GNBD server nodes
If you are using GNBD, determine how many GNBD server nodes are needed. Note the hostname and IP address of each GNBD server node for setting up GNBD clients later. For information on using GNBD with GFS2, see the Using GNBD with Global File System document.
Storage devices and partitions
Determine the storage devices and partitions to be used for creating logical volumes (via CLVM) in the file systems.
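As a minimal sketch of how these choices come together when the file system is created (the cluster name alpha_cluster, the journal count, the mount point, and the device path are placeholders for this example, not values from this guide):

# mkfs.gfs2 -p lock_dlm -t alpha_cluster:mydata1 -j 3 /dev/shared_vg/mydata1
# gfs2_jadd -j 1 /mnt/mydata1

The -p option selects the locking protocol, -t sets the lock table name in ClusterName:FSName form, and -j sets the number of journals, one for each node that will mount the file system. The gfs2_jadd command on the second line adds one more journal later, while the file system is mounted, if an additional node needs to mount it.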

Note

You may see performance problems with GFS2 when many create and delete operations are issued from more than one node in the same directory at the same time. If this causes performance problems in your system, you should localize file creation and deletion by a node to directories specific to that node as much as possible, as in the sketch below.
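One minimal way to follow this pattern (the mount point /mnt/mydata1 is a placeholder) is to give each node its own working directory under the mount point, named after the node, and have applications on that node create and delete files only there:

# mkdir /mnt/mydata1/$(uname -n)

Confining each node's churn of temporary files to its own directory reduces contention on shared directory locks.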