Chapter 18. Monitoring Red Hat Gluster Storage Workload

Monitoring storage volumes is helpful when performing capacity planning or performance tuning on a Red Hat Gluster Storage volume. You can monitor Red Hat Gluster Storage volumes with different parameters and use those system outputs to identify and troubleshoot issues.
You can use the volume top and volume profile commands to view vital performance information and identify bottlenecks on each brick of a volume.
You can also perform a statedump of a volume's brick processes and NFS server process, and view volume status and volume information.

Note

If you restart the server process, the existing profile and top information will be reset.

18.1. Profiling volumes

18.1.1. Server-side volume profiling using volume profile

The volume profile command provides an interface to get the per-brick or NFS server I/O information for each File Operation (FOP) of a volume. This information helps identify bottlenecks in the storage system.
This section describes how to use the volume profile command.

18.1.1.1. Start Profiling

To view the file operation information of each brick, start profiling with the following command:
# gluster volume profile VOLNAME start
For example, to start profiling on test-volume:
# gluster volume profile test-volume start
Profiling started on test-volume

Important

Running the profile command can affect system performance while the profile information is being collected. Red Hat recommends using profiling only for debugging.
When profiling is started on the volume, the following additional options are displayed in the volume info command output:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
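You can confirm that profiling is active by looking for both options in the volume info output. The following is a minimal sketch of such a check; the `profiling_enabled` helper is hypothetical, not part of any Gluster tool, and simply scans captured command output:

```python
# Sketch: check whether profiling is enabled by scanning the output of
# `gluster volume info VOLNAME` for the two diagnostics options shown above.
# The helper is hypothetical, not part of the Gluster CLI.

def profiling_enabled(volume_info_output: str) -> bool:
    """Return True if both diagnostics options are set to 'on'."""
    wanted = {"diagnostics.count-fop-hits", "diagnostics.latency-measurement"}
    found = set()
    for line in volume_info_output.splitlines():
        key, _, value = line.partition(":")
        if key.strip() in wanted and value.strip() == "on":
            found.add(key.strip())
    return found == wanted

sample = """\
Volume Name: test-volume
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
"""
print(profiling_enabled(sample))  # True
```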

18.1.1.2. Displaying the I/O Information

To view the I/O information of the bricks on a volume, use the following command:
# gluster volume profile VOLNAME info
For example, to view the I/O information of test-volume:
# gluster volume profile test-volume info
Brick: Server1:/rhgs/brick0/2
Cumulative Stats:

Block Size:            1b+            32b+            64b+
   Read:                 0               0               0
   Write:              908              28               8

Block Size:          128b+           256b+           512b+
   Read:                 0               6               4
   Write:                5              23              16

Block Size:         1024b+          2048b+          4096b+
   Read:                 0              52              17
   Write:               15             120             846

Block Size:         8192b+         16384b+         32768b+
   Read:                52               8              34
   Write:              234             134             286

Block Size:        65536b+        131072b+
   Read:               118             622
   Write:             1341             594


%-latency  Avg-latency  Min-Latency  Max-Latency    calls  Fop
---------  -----------  -----------  -----------  -------  --------
     4.82      1132.28        21.00    800970.00     4575  WRITE
     5.70       156.47         9.00    665085.00    39163  READDIRP
    11.35       315.02         9.00   1433947.00    38698  LOOKUP
    11.88      1729.34        21.00   2569638.00     7382  FXATTROP
    47.35    104235.02      2485.00   7789367.00      488  FSYNC

------------------

Duration     : 335
BytesRead    : 94505058
BytesWritten : 195571980
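Output like the per-FOP latency table above can be post-processed to spot the dominant bottleneck. The following is a minimal sketch under the assumption that each data row carries six whitespace-separated fields (%-latency, average, minimum, and maximum latency, call count, FOP name); the `top_fop` helper is hypothetical:

```python
# Sketch: pick the FOP with the highest %-latency share from the table
# printed by `gluster volume profile VOLNAME info`. Assumed row layout:
# %-latency, avg latency, min latency, max latency, calls, FOP name.
# The helper is illustrative, not part of the Gluster CLI.

def top_fop(table_lines):
    """Return (percent_latency, fop_name) for the costliest FOP."""
    rows = []
    for line in table_lines:
        parts = line.split()
        if len(parts) != 6:
            continue  # skip headers, separators, and summary lines
        try:
            pct = float(parts[0])
        except ValueError:
            continue
        rows.append((pct, parts[5]))
    return max(rows)

sample = [
    "4.82      1132.28   21.00      800970.00   4575    WRITE",
    "47.35   104235.02 2485.00     7789367.00     488   FSYNC",
]
print(top_fop(sample))  # (47.35, 'FSYNC')
```

In the sample output above, FSYNC accounts for the largest share of latency, which is where tuning effort would start.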
To view the I/O information of the NFS server on a specified volume, use the following command:
# gluster volume profile VOLNAME info nfs
For example, to view the I/O information of the NFS server on test-volume:
# gluster volume profile test-volume info nfs
NFS Server : localhost
----------------------
Cumulative Stats:
Block Size:              32768b+               65536b+
No. of Reads:                    0                     0
No. of Writes:                 1000                  1000
%-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
---------   -----------   -----------   -----------   ------------        ----
0.01     410.33 us     194.00 us     641.00 us              3      STATFS
0.60     465.44 us     346.00 us     867.00 us            147       FSTAT
1.63     187.21 us      67.00 us    6081.00 us           1000     SETATTR
1.94     221.40 us      58.00 us   55399.00 us           1002      ACCESS
2.55     301.39 us      52.00 us   75922.00 us            968        STAT
2.85     326.18 us      88.00 us   66184.00 us           1000    TRUNCATE
4.47     511.89 us      60.00 us  101282.00 us           1000       FLUSH
5.02    3907.40 us    1723.00 us   19508.00 us            147    READDIRP
25.42    2876.37 us     101.00 us  843209.00 us           1012      LOOKUP
55.52    3179.16 us     124.00 us  121158.00 us           2000       WRITE

Duration: 7074 seconds
Data Read: 0 bytes
Data Written: 102400000 bytes

Interval 1 Stats:
Block Size:              32768b+               65536b+
No. of Reads:                    0                     0
No. of Writes:                 1000                  1000
%-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
---------   -----------   -----------   -----------   ------------        ----
0.01     410.33 us     194.00 us     641.00 us              3      STATFS
0.60     465.44 us     346.00 us     867.00 us            147       FSTAT
1.63     187.21 us      67.00 us    6081.00 us           1000     SETATTR
1.94     221.40 us      58.00 us   55399.00 us           1002      ACCESS
2.55     301.39 us      52.00 us   75922.00 us            968        STAT
2.85     326.18 us      88.00 us   66184.00 us           1000    TRUNCATE
4.47     511.89 us      60.00 us  101282.00 us           1000       FLUSH
5.02    3907.40 us    1723.00 us   19508.00 us            147    READDIRP
25.41    2878.07 us     101.00 us  843209.00 us           1011      LOOKUP
55.53    3179.16 us     124.00 us  121158.00 us           2000       WRITE

Duration: 330 seconds
Data Read: 0 bytes
Data Written: 102400000 bytes
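The Duration and Data Read/Written summary lines also let you derive an average throughput figure for an interval. A quick sketch, using the Interval 1 numbers above (102400000 bytes written over 330 seconds); the helper function is illustrative only:

```python
# Sketch: derive average write throughput (MB/s) from a profile
# interval's summary lines, e.g. "Duration: 330 seconds" and
# "Data Written: 102400000 bytes" from Interval 1 above.

def write_throughput(duration_s: int, bytes_written: int) -> float:
    """Average write throughput in megabytes per second."""
    return bytes_written / duration_s / 1e6

print(round(write_throughput(330, 102400000), 2))  # 0.31
```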

18.1.1.3. Stop Profiling

To stop profiling on a volume, use the following command:
# gluster volume profile VOLNAME stop
For example, to stop profiling on test-volume:
# gluster volume profile test-volume stop
Profiling stopped on test-volume

18.1.2. Client-side volume profiling (FUSE only)

Red Hat Gluster Storage lets you profile how your mount point is being accessed, so that you can investigate latency issues even when you cannot instrument the application accessing your storage.
The io-stats translator records statistics of all file system activity on a Red Hat Gluster Storage volume that travels through a FUSE mount point. It collects information on files opened from the FUSE mount path, the read and write throughput for these files, the number of blocks read and written, and the latency observed for different file operations.
Run the following command to dump all recorded statistics for the specified mount point:
# setfattr -n trusted.io-stats-dump -v output_file_id mount_point
This generates a number of files in the /var/run/gluster directory. The output_file_id is not the complete file name; it is used as part of the names of the generated files.
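The dump is triggered purely by setting that extended attribute, so it can also be scripted. The following sketch builds the equivalent setfattr invocation; the mount point shown is a hypothetical example, and the helper function is not part of any Gluster tool:

```python
# Sketch: build the setfattr invocation that triggers an io-stats dump
# on a FUSE mount. The mount point is a hypothetical example; the
# output_file_id becomes part of the names of files Gluster writes
# under /var/run/gluster.

def iostats_dump_cmd(output_file_id: str, mount_point: str):
    """Return the argv list for the trusted.io-stats-dump setfattr call."""
    return ["setfattr", "-n", "trusted.io-stats-dump",
            "-v", output_file_id, mount_point]

print(iostats_dump_cmd("myrun", "/mnt/glusterfs"))
# ['setfattr', '-n', 'trusted.io-stats-dump', '-v', 'myrun', '/mnt/glusterfs']
```

Running the resulting command (for example via subprocess, with root privileges on the client) writes the statistics files; setting the attribute directly with os.setxattr on the mount point would have the same effect on Linux.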