17.8. Displaying Volume Status
You can display status information for a specific volume, a specific brick, or all volumes, as needed. Status information can be used to understand the current state of the bricks, the NFS server processes, the self-heal daemon, and the overall file system, and also to monitor and debug a volume. You can view the status of a volume along with the following details (a scripted example that collects all of these views follows the list):
- detail - Displays additional information about the bricks.
- clients - Displays the list of clients connected to the volume.
- mem - Displays the memory usage and memory pool details of the bricks.
- inode - Displays the inode tables of the volume.
- fd - Displays the open file descriptor tables of the volume.
- callpool - Displays the pending calls of the volume.
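To collect all of these views in one pass for later analysis, the individual queries can be scripted. The following is a minimal sketch; VOLNAME is a placeholder as elsewhere in this section, and the output file path is an arbitrary example:
# for view in detail clients mem inode fd callpool; do gluster volume status VOLNAME $view; done > /tmp/VOLNAME-status.txt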
Setting Timeout Period
When you try to obtain information about a specific volume, the command may time out in the CLI if the originator glusterd takes longer than 120 seconds, the default timeout, to aggregate the results from all the other glusterd daemons and report back to the CLI.
You can use the --timeout option to ensure that the command does not time out after the default 120 seconds.
For example,
# gluster volume status --timeout=500 VOLNAME inode
It is recommended to use the --timeout option when obtaining inode, client, or detail information, as these queries frequently time out.
Display information about a specific volume using the following command:
# gluster volume status --timeout=value_in_seconds [all|VOLNAME [nfs | shd | BRICKNAME]] [detail | clients | mem | inode | fd | callpool]
For example, to display information about test-volume:
# gluster volume status test-volume
Status of volume: test-volume
Gluster process                        Port    Online   Pid
------------------------------------------------------------
Brick Server1:/rhgs/brick0/rep1        24010   Y        18474
Brick Server1:/rhgs/brick0/rep2        24011   Y        18479
NFS Server on localhost                38467   Y        18486
Self-heal Daemon on localhost          N/A     Y        18491
The self-heal daemon status will be displayed only for replicated volumes.
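As a quick health check, the Online column of this output can be filtered with standard shell tools. The following is a minimal sketch that assumes the four-column brick layout shown above; the field position may differ in versions that print additional columns:
# gluster volume status test-volume | awk '/^Brick/ && $4 != "Y" {print $2 " is offline"}'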
Display information about all volumes using the command:
# gluster volume status all
# gluster volume status all
Status of volume: test
Gluster process                        Port    Online   Pid
-----------------------------------------------------------
Brick Server1:/rhgs/brick0/test        24009   Y        29197
NFS Server on localhost                38467   Y        18486

Status of volume: test-volume
Gluster process                        Port    Online   Pid
------------------------------------------------------------
Brick Server1:/rhgs/brick0/rep1        24010   Y        18474
Brick Server1:/rhgs/brick0/rep2        24011   Y        18479
NFS Server on localhost                38467   Y        18486
Self-heal Daemon on localhost          N/A     Y        18491
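To keep an eye on all volumes interactively, the command can be re-run on an interval. A minimal sketch, assuming the standard watch utility is installed and using an arbitrary 60-second interval:
# watch -n 60 'gluster volume status all'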
Display additional information about the bricks using the command:
# gluster volume status VOLNAME detail
For example, to display additional information about the bricks of test-volume:
# gluster volume status test-volume detail
Status of volume: test-vol
------------------------------------------------------------------------------
Brick                : Brick Server1:/rhgs/test
Port                 : 24012
Online               : Y
Pid                  : 18649
File System          : xfs
Device               : /dev/sda1
Mount Options        : rw,relatime,user_xattr,acl,commit=600,barrier=1,data=ordered
Inode Size           : 256
Disk Space Free      : 22.1GB
Total Disk Space     : 46.5GB
Inode Count          : 3055616
Free Inodes          : 2577164
Detailed information is not available for NFS and the self-heal daemon.
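To pull just the capacity figures out of the detail output, the labels shown above can be filtered directly. A minimal sketch, assuming the label names match the sample output:
# gluster volume status test-volume detail | grep -E '^(Brick|Disk Space Free|Total Disk Space)'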
Display the list of clients accessing the volumes using the command:
# gluster volume status VOLNAME clients
For example, to display the list of clients connected to test-volume:
# gluster volume status test-volume clients
Brick : Server1:/rhgs/brick0/1
Clients connected : 2
Hostname          Bytes Read    BytesWritten    OpVersion
--------          ----------    ------------    ---------
127.0.0.1:1013    776           676             70200
127.0.0.1:1012    50440         51200           70200
Client information is not available for the self-heal daemon.
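To obtain a total client count across bricks without reading the full tables, the "Clients connected" lines can be summed. A minimal sketch, assuming the label format shown in the sample output:
# gluster volume status test-volume clients | awk -F: '/Clients connected/ {sum += $2} END {print "Total clients connected: " sum}'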
Display the memory usage and memory pool details of the bricks on a volume using the command:
# gluster volume status VOLNAME mem
For example, to display the memory usage and memory pool details for the bricks of the volume glustervol:
# gluster volume status glustervol mem
Memory status for volume : glustervol
----------------------------------------------
Brick : rhsqaci-vm33.lab.eng.blr.redhat.com:/bricks/brick0/1
Mallinfo
--------
Arena    : 11509760
Ordblks  : 278
Smblks   : 16
Hblks    : 17
Hblkhd   : 17350656
Usmblks  : 0
Fsmblks  : 1376
Uordblks : 3850640
Fordblks : 7659120
Keepcost : 121632
----------------------------------------------
Brick : rhsqaci-vm44.lab.eng.blr.redhat.com:/bricks/brick0/1
Mallinfo
--------
Arena    : 11595776
Ordblks  : 329
Smblks   : 44
Hblks    : 17
Hblkhd   : 17350656
Usmblks  : 0
Fsmblks  : 4240
Uordblks : 3888928
Fordblks : 7706848
Keepcost : 121632
----------------------------------------------
Brick : rhsqaci-vm32.lab.eng.blr.redhat.com:/bricks/brick0/1
Mallinfo
--------
Arena    : 9695232
Ordblks  : 306
Smblks   : 67
Hblks    : 17
Hblkhd   : 17350656
Usmblks  : 0
Fsmblks  : 5616
Uordblks : 3890736
Fordblks : 5804496
Keepcost : 121632
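To compare the allocator arena size across bricks at a glance, the relevant lines can be filtered out of the mem output. A minimal sketch, assuming the label names shown above:
# gluster volume status glustervol mem | grep -E '^(Brick|Arena)'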
Display the inode tables of the volume using the command:
# gluster volume status VOLNAME inode
For example, to display the inode tables of the volume test:
# gluster volume status test inode
inode tables for volume test
----------------------------------------------
Brick : rhsqaci-vm35.lab.eng.blr.redhat.com:/bricks/brick1/test
Connection 1:
LRU limit     : 16384
Active Inodes : 1000
LRU Inodes    : 1
Purge Inodes  : 0
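To watch only the inode table counters over time, the relevant lines can be filtered. A minimal sketch, assuming the label names shown above:
# gluster volume status test inode | grep -E 'Active Inodes|LRU Inodes|Purge Inodes'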
Display the open file descriptor tables of the volume using the command:
# gluster volume status VOLNAME fd
For example, to display the open file descriptor tables of test-volume:
# gluster volume status test-volume fd
FD tables for volume test-volume
----------------------------------------------
Brick : Server1:/rhgs/brick0/1
Connection 1:
RefCount = 0  MaxFDs = 128  FirstFree = 4
FD Entry        PID             RefCount        Flags
--------        ---             --------        -----
0               26311           1               2
1               26310           3               2
2               26310           1               2
3               26311           3               2

Connection 2:
RefCount = 0  MaxFDs = 128  FirstFree = 0
No open fds

Connection 3:
RefCount = 0  MaxFDs = 128  FirstFree = 0
No open fds
FD information is not available for NFS and the self-heal daemon.
Display the pending calls of the volume using the command:
# gluster volume status VOLNAME callpool
Note that each call has a call stack containing call frames.
For example, to display the pending calls of test-volume:
# gluster volume status test-volume callpool
Pending calls for volume test-volume
----------------------------------------------
Brick : Server1:/rhgs/brick0/1
Pending calls: 2
Call Stack1
 UID    : 0
 GID    : 0
 PID    : 26338
 Unique : 192138
 Frames : 7
 Frame 1
  Ref Count   = 1
  Translator  = test-volume-server
  Completed   = No
 Frame 2
  Ref Count   = 0
  Translator  = test-volume-posix
  Completed   = No
  Parent      = test-volume-access-control
  Wind From   = default_fsync
  Wind To     = FIRST_CHILD(this)->fops->fsync
 Frame 3
  Ref Count   = 1
  Translator  = test-volume-access-control
  Completed   = No
  Parent      = repl-locks
  Wind From   = default_fsync
  Wind To     = FIRST_CHILD(this)->fops->fsync
 Frame 4
  Ref Count   = 1
  Translator  = test-volume-locks
  Completed   = No
  Parent      = test-volume-io-threads
  Wind From   = iot_fsync_wrapper
  Wind To     = FIRST_CHILD (this)->fops->fsync
 Frame 5
  Ref Count   = 1
  Translator  = test-volume-io-threads
  Completed   = No
  Parent      = test-volume-marker
  Wind From   = default_fsync
  Wind To     = FIRST_CHILD(this)->fops->fsync
 Frame 6
  Ref Count   = 1
  Translator  = test-volume-marker
  Completed   = No
  Parent      = /export/1
  Wind From   = io_stats_fsync
  Wind To     = FIRST_CHILD(this)->fops->fsync
 Frame 7
  Ref Count   = 1
  Translator  = /export/1
  Completed   = No
  Parent      = test-volume-server
  Wind From   = server_fsync_resume
  Wind To     = bound_xl->fops->fsync
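To count the total number of pending calls across all bricks instead of reading the full stack dumps, the "Pending calls" lines can be summed. A minimal sketch, assuming the label format shown above:
# gluster volume status test-volume callpool | awk -F: '/Pending calls/ {sum += $2} END {print "Total pending calls: " sum}'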