17.2. Running the Volume Top Command
The volume top command allows you to view the glusterFS bricks' performance metrics, including read, write, file open calls, file read calls, file write calls, directory open calls, and directory read calls. The volume top command displays up to 100 results.
This section describes how to use the volume top command.
17.2.1. Viewing Open File Descriptor Count and Maximum File Descriptor Count
You can view the current open file descriptor count and the list of files that are currently being accessed on the brick with the volume top command. The command also displays the maximum open file descriptor count of files that are currently open, and the maximum number of files opened at any given point of time since the server has been running. If the brick name is not specified, the open file descriptor metrics of all the bricks belonging to the volume are displayed.
To view the open file descriptor count and the maximum file descriptor count, use the following command:
# gluster volume top VOLNAME open [nfs | brick BRICK-NAME] [list-cnt cnt]
For example, to view the open file descriptor count and the maximum file descriptor count on brick server:/bricks/brick1/test of volume test, and list the top 10 open calls:
# gluster volume top test open brick server:/bricks/brick1/test list-cnt 10
Brick: server/bricks/brick1/test
Current open fds: 2, Max open fds: 4, Max openfd time: 2020-10-09 05:57:20.171038
Count		filename
=======================
2		/file222
1		/file1
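The "Current open fds" summary line lends itself to scripted monitoring of descriptor pressure. A minimal sketch, assuming output shaped like the example above; the sample text is embedded here in place of a live gluster call:

```shell
# Parse the current and maximum open fd counts from `volume top ... open`
# output. The variable below stands in for a live `gluster` invocation.
output='Brick: server/bricks/brick1/test
Current open fds: 2, Max open fds: 4, Max openfd time: 2020-10-09 05:57:20.171038'

current=$(printf '%s\n' "$output" | sed -n 's/^Current open fds: \([0-9]*\),.*/\1/p')
max=$(printf '%s\n' "$output" | sed -n 's/.*Max open fds: \([0-9]*\),.*/\1/p')

echo "current=$current max=$max"
# Warn when current usage reaches the observed maximum.
if [ "$current" -ge "$max" ]; then
    echo "WARNING: open fd count at recorded maximum" >&2
fi
```

In a real deployment, replace the embedded sample with the output of the gluster command shown above.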
17.2.2. Viewing Highest File Read Calls
You can view a list of files with the highest file read calls on each brick with the volume top command. If the brick name is not specified, a list of 100 files is displayed by default.
To view the highest read() calls, use the following command:
# gluster volume top VOLNAME read [nfs | brick BRICK-NAME] [list-cnt cnt]
For example, to view the highest read calls on brick `hostname`:/bricks/brick1/testvol_distributed-dispersed_brick1 of testvol_distributed-dispersed, and list the top 10 results:
# gluster volume top testvol_distributed-dispersed read brick `hostname`:/bricks/brick1/testvol_distributed-dispersed_brick1/ list-cnt 10
Brick: server/bricks/brick1/testvol_distributed-dispersed_brick1
Count		filename
=======================
9		/user11/dir1/dir4/testfile2.txt
9		/user11/dir1/dir1/testfile0.txt
9		/user11/dir0/dir4/testfile4.txt
9		/user11/dir0/dir3/testfile4.txt
9		/user11/dir0/dir1/testfile0.txt
9		/user11/testfile4.txt
9		/user11/testfile3.txt
5		/user11/dir2/dir1/testfile4.txt
5		/user11/dir2/dir1/testfile0.txt
5		/user11/dir2/testfile2.txt
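The Count/filename table can be post-processed to get an aggregate figure for the listed hot files. A minimal sketch, assuming a listing shaped like the example above; the heredoc stands in for live gluster output:

```shell
# Sum the per-file read counts from a `volume top ... read` listing.
# Rows of interest start with a number followed by an absolute path.
total=$(awk '/^[0-9]+[[:space:]]+\// { sum += $1 } END { print sum }' <<'EOF'
Count		filename
=======================
9		/user11/dir1/dir4/testfile2.txt
9		/user11/dir1/dir1/testfile0.txt
5		/user11/dir2/testfile2.txt
EOF
)
echo "total read calls in listing: $total"
```

The same pattern applies unchanged to the write, opendir, and readdir listings, since they share the Count/filename layout.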
17.2.3. Viewing Highest File Write Calls
You can view a list of files with the highest file write calls on each brick with the volume top command. If the brick name is not specified, a list of 100 files is displayed by default.
To view the highest write() calls, use the following command:
# gluster volume top VOLNAME write [nfs | brick BRICK-NAME] [list-cnt cnt]
For example, to view the highest write calls on brick `hostname`:/bricks/brick1/testvol_distributed-dispersed_brick1 of testvol_distributed-dispersed, and list the top 10 results:
# gluster volume top testvol_distributed-dispersed write brick `hostname`:/bricks/brick1/testvol_distributed-dispersed_brick1/ list-cnt 10
Brick: server/bricks/brick1/testvol_distributed-dispersed_brick1
Count		filename
=======================
8		/user12/dir4/dir4/testfile3.txt
8		/user12/dir4/dir3/testfile3.txt
8		/user2/dir4/dir3/testfile4.txt
8		/user3/dir4/dir4/testfile1.txt
8		/user12/dir4/dir2/testfile3.txt
8		/user2/dir4/dir1/testfile0.txt
8		/user11/dir4/dir3/testfile4.txt
8		/user3/dir4/dir2/testfile2.txt
8		/user12/dir4/dir0/testfile0.txt
8		/user11/dir4/dir3/testfile3.txt
17.2.4. Viewing Highest Open Calls on a Directory
You can view a list of the directories with the highest open calls on each brick with the volume top command. If the brick name is not specified, the metrics of all bricks belonging to that volume are displayed.
To view the highest open() calls on each directory, use the following command:
# gluster volume top VOLNAME opendir [brick BRICK-NAME] [list-cnt cnt]
For example, to view the highest open calls on the directories of brick `hostname`:/bricks/brick1/testvol_distributed-dispersed_brick1 of testvol_distributed-dispersed, and list the top 10 results:
# gluster volume top testvol_distributed-dispersed opendir brick `hostname`:/bricks/brick1/testvol_distributed-dispersed_brick1/ list-cnt 10
Brick: server/bricks/brick1/testvol_distributed-dispersed_brick1
Count		filename
=======================
3		/user2/dir3/dir2
3		/user2/dir3/dir1
3		/user2/dir3/dir0
3		/user2/dir3
3		/user2/dir2/dir4
3		/user2/dir2/dir3
3		/user2/dir2/dir2
3		/user2/dir2/dir1
3		/user2/dir2/dir0
3		/user2/dir2
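Because opendir counts are reported per directory, rolling them up to the first path component shows which top-level tree is being scanned hardest. A minimal sketch, assuming output shaped like the example above; the heredoc stands in for live gluster output:

```shell
# Aggregate opendir counts by the first path component of each directory.
out=$(awk '/^[0-9]+[[:space:]]+\// {
        split($2, parts, "/")        # parts[2] is the first path component
        count[parts[2]] += $1
     }
     END { for (d in count) print d, count[d] }' <<'EOF'
3		/user2/dir3/dir2
3		/user2/dir3/dir1
3		/user2/dir2
EOF
)
echo "$out"
```

With the full ten-row example above, this reports a single hot subtree (user2) with the summed open count.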
17.2.5. Viewing Highest Read Calls on a Directory
You can view a list of the directories with the highest directory read calls on each brick with the volume top command. If the brick name is not specified, the metrics of all bricks belonging to that volume are displayed.
To view the highest directory read() calls on each brick, use the following command:
# gluster volume top VOLNAME readdir [nfs | brick BRICK-NAME] [list-cnt cnt]
For example, to view the highest directory read calls on brick `hostname`:/bricks/brick1/testvol_distributed-dispersed_brick1 of testvol_distributed-dispersed, and list the top 10 results:
# gluster volume top testvol_distributed-dispersed readdir brick `hostname`:/bricks/brick1/testvol_distributed-dispersed_brick1/ list-cnt 10
Brick: server/bricks/brick1/testvol_distributed-dispersed_brick1
Count		filename
=======================
4		/user6/dir2/dir3
4		/user6/dir1/dir4
4		/user6/dir1/dir2
4		/user6/dir1
4		/user6/dir0
4		/user13/dir1/dir1
4		/user3/dir4/dir4
4		/user3/dir3/dir4
4		/user3/dir3/dir3
4		/user3/dir3/dir1
17.2.6. Viewing Read Performance
You can view the read throughput of files on each brick with the volume top command. If the brick name is not specified, the metrics of all the bricks belonging to that volume are displayed. The output is the read throughput.
This command initiates a read() call for the specified count and block size and measures the corresponding throughput directly on the back-end export, bypassing glusterFS processes.
To view the read performance on each brick, use the following command, specifying options as needed:
# gluster volume top VOLNAME read-perf [bs blk-size count count] [nfs | brick BRICK-NAME] [list-cnt cnt]
For example, to view the read performance on brick `hostname`:/bricks/brick1/testvol_distributed-dispersed_brick1 of testvol_distributed-dispersed, specifying a 256 byte block size, and list the top 10 results:
# gluster volume top testvol_distributed-dispersed read-perf bs 256 count 1 brick `hostname`:/bricks/brick1/testvol_distributed-dispersed_brick1/ list-cnt 10
Brick: server/bricks/brick1/testvol_distributed-dispersed_brick1
Throughput 10.67 MBps time 0.0000 secs
MBps	Filename					Time
====	========					====
0	/user2/dir3/dir2/testfile3.txt			2021-02-01 15:47:35.391234
0	/user2/dir3/dir2/testfile0.txt			2021-02-01 15:47:35.371018
0	/user2/dir3/dir1/testfile4.txt			2021-02-01 15:47:33.375333
0	/user2/dir3/dir1/testfile0.txt			2021-02-01 15:47:31.859194
0	/user2/dir3/dir0/testfile2.txt			2021-02-01 15:47:31.749105
0	/user2/dir3/dir0/testfile1.txt			2021-02-01 15:47:31.728151
0	/user2/dir3/testfile4.txt			2021-02-01 15:47:31.296924
0	/user2/dir3/testfile3.txt			2021-02-01 15:47:30.988683
0	/user2/dir3/testfile0.txt			2021-02-01 15:47:30.557743
0	/user2/dir2/dir4/testfile4.txt			2021-02-01 15:47:30.464017
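Each read-perf measurement reads bs * count bytes from the back-end export, so the bs and count arguments control how much data backs the throughput figure. A minimal sketch of both the arithmetic and scraping the summary line, assuming output shaped like the example above:

```shell
# read-perf reads bs * count bytes per measurement; with the example
# arguments (bs 256 count 1) that is 256 bytes.
bs=256
count=1
bytes=$((bs * count))
echo "bytes read per measurement: $bytes"

# Extract the aggregate throughput figure from the summary line.
# The sample line below stands in for live `gluster` output.
line='Throughput 10.67 MBps time 0.0000 secs'
mbps=$(printf '%s\n' "$line" | awk '{ print $2 }')
echo "throughput: $mbps MBps"
```

A tiny bs * count product (as in the example) finishes too quickly to be representative; larger values give a steadier throughput estimate at the cost of more I/O on the brick.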
17.2.7. Viewing Write Performance
You can view the write throughput of files on each brick or NFS server with the volume top command. If the brick name is not specified, the metrics of all the bricks belonging to that volume are displayed. The output is the write throughput.
This command initiates a write operation for the specified count and block size and measures the corresponding throughput directly on the back-end export, bypassing glusterFS processes.
To view the write performance on each brick, use the following command, specifying options as needed:
# gluster volume top VOLNAME write-perf [bs blk-size count count] [nfs | brick BRICK-NAME] [list-cnt cnt]
For example, to view the write performance on brick `hostname`:/bricks/brick1/testvol_distributed-dispersed_brick1 of testvol_distributed-dispersed, specifying a 256 byte block size, and list the top 10 results:
# gluster volume top testvol_distributed-dispersed write-perf bs 256 count 1 brick `hostname`:/bricks/brick1/testvol_distributed-dispersed_brick1/ list-cnt 10
Brick: server/bricks/brick1/testvol_distributed-dispersed_brick1
Throughput 3.88 MBps time 0.0001 secs
MBps	Filename					Time
====	========					====
0	/user12/dir4/dir4/testfile4.txt			2021-02-01 13:30:32.225628
0	/user12/dir4/dir4/testfile3.txt			2021-02-01 13:30:31.771095
0	/user12/dir4/dir4/testfile0.txt			2021-02-01 13:30:29.655447
0	/user12/dir4/dir3/testfile4.txt			2021-02-01 13:30:29.62920
0	/user12/dir4/dir3/testfile3.txt			2021-02-01 13:30:28.995407
0	/user2/dir4/dir4/testfile2.txt			2021-02-01 13:30:28.489318
0	/user2/dir4/dir4/testfile1.txt			2021-02-01 13:30:27.956523
0	/user2/dir4/dir3/testfile4.txt			2021-02-01 13:30:27.34337
0	/user12/dir4/dir3/testfile0.txt			2021-02-01 13:30:26.699984
0	/user3/dir4/dir4/testfile2.txt			2021-02-01 13:30:26.602165