18.3. gstatus Command

18.3.1. gstatus Command

A Red Hat Gluster Storage trusted storage pool consists of nodes, volumes, and bricks. A new command called gstatus provides an overview of the health of a Red Hat Gluster Storage trusted storage pool for distributed, replicated, distributed-replicated, dispersed, and distributed-dispersed volumes.
The gstatus command provides an easy-to-use, high-level view of the health of a trusted storage pool with a single command. It gathers information about the status of the Red Hat Gluster Storage nodes, volumes, and bricks by executing GlusterFS commands. The checks are performed across the trusted storage pool and the resulting status is displayed. This data can also be analyzed to add further checks, such as deployment best-practice validation and free-space triggers.
A Red Hat Gluster Storage volume is made up of individual file systems (GlusterFS bricks) across multiple nodes. Although this complexity is abstracted away, the status of the individual bricks affects the data availability of the volume. For example, even without replication, the loss of a single brick in the volume does not make the volume itself unavailable; instead, it manifests as inaccessible files in the file system.

Note

As of Red Hat Gluster Storage 3.4, the gstatus command is deprecated. To view the health of Red Hat Gluster Storage clusters and volumes, use the Grafana dashboard integrated with the Web Administration environment.

18.3.1.1. Prerequisites

Package dependencies
  • Python 2.6 or above
To install gstatus, refer to the Deploying gstatus on Red Hat Gluster Storage chapter in the Red Hat Gluster Storage 3.4 Installation Guide.
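For reference, the package can usually be installed with yum on a subscribed Red Hat Gluster Storage server. The package name gstatus below is an assumption based on the command name; verify it against the Installation Guide.
# yum install gstatus
# gstatus --version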

18.3.2. Executing the gstatus command

The gstatus command can be invoked in different ways. The table below shows the optional switches that can be used with gstatus.
# gstatus -h
Usage: gstatus [options]
Table 18.1. gstatus Command Options
Option Description
--version Displays the program's version number and exits.
-h, --help Displays the help message and exits.
-s, --state Displays the high-level health of the Red Hat Gluster Storage trusted storage pool.
-v, --volume Displays volume information of all the volumes, by default. Specify a volume name to display the volume information of a specific volume.
-b, --backlog Probes the self-heal state.
-a, --all Displays the detailed status of volume health. (This output is an aggregation of -s and -v.)
-l, --layout Displays the brick layout when used in combination with -v or -a.
-o OUTPUT_MODE, --output-mode=OUTPUT_MODE Produces output in one of the following formats: json, keyvalue, or console (default).
-D, --debug Enables the debug mode.
-w, --without-progress Disables progress updates during data gathering.
-u UNITS, --units=UNITS Displays capacity units in decimal or binary format (GB vs GiB).
-t TIMEOUT, --timeout=TIMEOUT Specifies the command timeout value in seconds.
Table 18.2. Commonly used gstatus Commands
Command Description
gstatus -s Displays an overview of the trusted storage pool.
gstatus -a Displays the detailed status of volume health.
gstatus -vl VOLNAME Displays the volume details, including the brick layout.
gstatus -o keyvalue Displays the summary output in key-value format, suitable for Nagios and Logstash.
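For scripting and monitoring, the Status line printed by gstatus -s (HEALTHY or UNHEALTHY(n), as shown in the examples below) can be checked directly. The following one-liner is an illustrative sketch rather than part of gstatus itself; it uses -w to suppress progress updates so the output is clean for parsing.
# gstatus -s -w | grep -q 'UNHEALTHY' && echo 'WARNING: trusted storage pool reports issues'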
Interpreting the Output with Examples
Each invocation of gstatus provides a header section, which gives a high-level view of the state of the Red Hat Gluster Storage trusted storage pool. The Status field within the header reports one of two states: Healthy or Unhealthy. When problems are detected, the Status field changes to Unhealthy(n), where n denotes the total number of issues detected.
The following examples illustrate gstatus command output for both healthy and unhealthy Red Hat Gluster Storage environments.

Example 18.1. Example 1: Trusted Storage Pool is in a healthy state; all nodes, volumes and bricks are online

# gstatus -a

   Product: RHGS Server v3.2.0      Capacity:  36.00 GiB(raw bricks)
      Status: HEALTHY                        7.00 GiB(raw used)
   Glusterfs: 3.7.1                        18.00 GiB(usable from volumes)
  OverCommit: No                Snapshots:   0

   Nodes    :  4/ 4  Volumes:  1 Up
   Self Heal:  4/ 4            0 Up(Degraded)
   Bricks   :  4/ 4            0 Up(Partial)
   Connections  : 5 / 20       0 Down

Volume Information
 splunk       UP - 4/4 bricks up - Distributed-Replicate
                  Capacity: (18% used) 3.00 GiB/18.00 GiB (used/total)
                  Snapshots: 0
                  Self Heal:  4/ 4
                  Tasks Active: None
                  Protocols: glusterfs:on  NFS:on  SMB:off
                  Gluster Connectivty: 5 hosts, 20 tcp connections



Status Messages
- Cluster is HEALTHY, all_bricks checks successful

Example 18.2. Example 2: A node is down within the trusted pool

# gstatus -al

     Product: RHGS Server v3.1.1      Capacity:  27.00 GiB(raw bricks)
      Status: UNHEALTHY(4)                   5.00 GiB(raw used)
   Glusterfs: 3.7.1                      18.00 GiB(usable from volumes)
  OverCommit: No                Snapshots:   0

   Nodes    :  3/ 4  Volumes:  0 Up
   Self Heal:  3/ 4            1 Up(Degraded)
   Bricks   :  3/ 4            0 Up(Partial)
   Connections  :  5/ 20       0 Down

Volume Information
 splunk           UP(DEGRADED) - 3/4 bricks up - Distributed-Replicate
                  Capacity: (18% used) 3.00 GiB/18.00 GiB (used/total)
                  Snapshots: 0
                  Self Heal:  3/ 4
                  Tasks Active: None
                  Protocols: glusterfs:on  NFS:on  SMB:off
                  Gluster Connectivty: 5 hosts, 20 tcp connections

 splunk---------- +
                  |
                Distribute (dht)
                         |
                         +-- Repl Set 0 (afr)
                         |     |
                         |     +--splunk-rhs1:/rhgs/brick1/splunk(UP) 2.00 GiB/9.00 GiB
                         |     |
                         |     +--splunk-rhs2:/rhgs/brick1/splunk(UP) 2.00 GiB/9.00 GiB
                         |
                         +-- Repl Set 1 (afr)
                               |
                               +--splunk-rhs3:/rhgs/brick1/splunk(DOWN) 0.00 KiB/0.00 KiB
                               |
                               +--splunk-rhs4:/rhgs/brick1/splunk(UP) 2.00 GiB/9.00 GiB
 Status Messages
  - Cluster is UNHEALTHY
  - One of the nodes in the cluster is down
  - Brick splunk-rhs3:/rhgs/brick1/splunk in volume 'splunk' is down/unavailable
  - INFO -> Not all bricks are online, so capacity provided is NOT accurate
Example 2 displays the output of the command when the -l option is used. The brick layout mode shows the brick and node relationships. This provides a simple means of checking that the replication relationships between bricks across nodes are as intended.
Table 18.3. Field Descriptions of the gstatus command output
Field Description
Volume State
Up – The volume is started and available, and all the bricks are up.
Up (Degraded) – This state is specific to replicated volumes, where at least one brick is down within a replica set. Data is still 100% available through the remaining replicas, but the reduced resilience of the volume to further failures within the same replica set flags it as degraded.
Up (Partial) – Although some bricks in the volume are online, others are down, to the point where areas of the file system are missing. For a distributed volume, this state is seen if any brick is down; for a replicated volume, a complete replica set needs to be down before the volume state transitions to Partial.
Down – Bricks are down, or the volume is yet to be started.
Capacity Information This information is derived from the per-brick data reported by the volume status detail command (see the example after this table). The accuracy of this number therefore depends on all nodes and bricks being online; elements missing from the configuration are not considered in the calculation.
Over-commit Status The physical file system used by a brick can be reused by multiple volumes; this field indicates whether any brick is shared in this way. Sharing a brick exposes the system to capacity conflicts across volumes when the quota feature is not in use, so reusing a brick for multiple volumes is not recommended.
Connections Displays a count of connections made to the trusted pool and each of the volumes.
Nodes / Self Heal / Bricks X/Y This indicates that X components out of Y total/expected components within the trusted pool are online. In Example 2, 3/4 is displayed against all of these fields, indicating that 3 of the 4 nodes are available; because one node is down, its brick and self-heal daemon are also unavailable.
Tasks Active Active background tasks, such as rebalance and remove-brick, are displayed here against individual volumes.
Protocols Displays which protocols have been enabled for the volume.
Snapshots Displays the number of snapshots taken of the volume. The snapshot count for each volume is rolled up to the trusted storage pool to provide a high-level view of the number of snapshots in the environment.
Status Messages After the information is gathered, any errors detected are reported in the Status Messages section. These descriptions provide a view of the problem and the potential impact of the condition.
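As the Capacity Information row notes, the capacity figures are assembled from per-brick data. That underlying data can be inspected directly with the gluster CLI; the example below uses the splunk volume from the examples above.
# gluster volume status splunk detail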