19.6. Obtaining Node Information
The get-state command outputs information about a node to a specified file and can be invoked in different ways. The table below shows the options that can be used with the get-state command.
# gluster get-state [odir path_to_output_dir] [file filename]
Usage: get-state [options]
Command | Description |
---|---|
gluster get-state | glusterd state information is saved in the /var/run/gluster/glusterd_state_timestamp file, with the invocation time appended to the file name. |
gluster get-state file filename | glusterd state information is saved in the /var/run/gluster/ directory, in the file named in the command. |
gluster get-state odir path_to_output_dir file filename | glusterd state information is saved in the directory and the file named in the command. |
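For illustration, the three forms could be invoked as follows. The timestamped file name matches the sample output later in this section; mystate and /tmp are placeholder names, and the confirmation messages for the second and third forms are inferred from the first:

# gluster get-state
glusterd state dumped to /var/run/gluster/glusterd_state_20160921_123605

# gluster get-state file mystate
glusterd state dumped to /var/run/gluster/mystate

# gluster get-state odir /tmp file mystate
glusterd state dumped to /tmp/mystate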
Each invocation of the get-state command saves information reflecting the node-level status of the trusted storage pool, as maintained in glusterd (no other daemons are supported as of now), to the file specified in the command. By default, the output is dumped to the /var/run/gluster/glusterd_state_timestamp file.
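Because each default invocation creates a new timestamped file, the most recent dump can be located by sorting the directory by modification time; the file name shown is illustrative:

# ls -t /var/run/gluster/glusterd_state_* | head -n 1
/var/run/gluster/glusterd_state_20160921_123605

The state file is organized into the following sections: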
Section | Description |
---|---|
Global | Displays the UUID and the op-version of glusterd. |
Global options | Displays cluster-specific options that have been set explicitly through the volume set command. |
Peers | Displays the peer node information, including the hostname and connection status of each peer. |
Volumes | Displays the list of volumes created on this node, along with detailed information on each volume. |
Services | Displays the list of services configured on this node, along with their status. |
Misc | Displays miscellaneous information about the node, for example, configured ports. |
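Every key in the state file is a flat, dotted name, so individual sections are easy to extract with standard tools. For example, the following grep, run against the sample file shown below, pulls the hostname and connection status of each peer:

# grep -E '^Peer[0-9]+\.(primary_hostname|connected)' /var/run/gluster/glusterd_state_20160921_123605
Peer1.primary_hostname: 192.168.122.31
Peer1.connected: Connected
Peer2.primary_hostname: 192.168.122.65
Peer2.connected: Connected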
# gluster get-state
glusterd state dumped to /var/run/gluster/glusterd_state_20160921_123605

[Global]
MYUUID: 1e20ed87-c22a-4612-ab04-90765bccaea5
op-version: 40000

[Global options]
cluster.server-quorum-ratio: 60

[Peers]
Peer1.primary_hostname: 192.168.122.31
Peer1.uuid: dfc7ff96-b61d-4c88-a3ad-b6852f72c5f0
Peer1.state: Peer in Cluster
Peer1.connected: Connected
Peer1.othernames:
Peer2.primary_hostname: 192.168.122.65
Peer2.uuid: dd83409e-22fa-4186-935a-648a1927cc9d
Peer2.state: Peer in Cluster
Peer2.connected: Connected
Peer2.othernames:

[Volumes]
Volume1.name: tv1
Volume1.id: cf89d345-8cde-4c53-be85-1f3f20e7e410
Volume1.type: Distribute
Volume1.transport_type: tcp
Volume1.status: Started
Volume1.brickcount: 3
Volume1.Brick1.path: 192.168.122.200:/root/bricks/tb11
Volume1.Brick1.hostname: 192.168.122.200
Volume1.Brick1.port: 49152
Volume1.Brick1.rdma_port: 0
Volume1.Brick1.status: Started
Volume1.Brick1.signedin: True
Volume1.Brick2.path: 192.168.122.65:/root/bricks/tb12
Volume1.Brick2.hostname: 192.168.122.65
Volume1.Brick3.path: 192.168.122.31:/root/bricks/tb13
Volume1.Brick3.hostname: 192.168.122.31
Volume1.snap_count: 0
Volume1.stripe_count: 1
Volume1.replica_count: 1
Volume1.subvol_count: 3
Volume1.arbiter_count: 0
Volume1.disperse_count: 0
Volume1.redundancy_count: 0
Volume1.quorum_status: not_applicable
Volume1.snapd_svc.online_status: Online
Volume1.snapd_svc.inited: True
Volume1.rebalance.id: 00000000-0000-0000-0000-000000000000
Volume1.rebalance.status: not_started
Volume1.rebalance.failures: 0
Volume1.rebalance.skipped: 0
Volume1.rebalance.lookedup: 0
Volume1.rebalance.files: 0
Volume1.rebalance.data: 0Bytes

[Volume1.options]
features.uss: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on

Volume2.name: tv2
Volume2.id: 700fd588-6fc2-46d5-9435-39c434656fe2
Volume2.type: Distribute
Volume2.transport_type: tcp
Volume2.status: Created
Volume2.brickcount: 3
Volume2.Brick1.path: 192.168.122.200:/root/bricks/tb21
Volume2.Brick1.hostname: 192.168.122.200
Volume2.Brick1.port: 0
Volume2.Brick1.rdma_port: 0
Volume2.Brick1.status: Stopped
Volume2.Brick1.signedin: False
Volume2.Brick2.path: 192.168.122.65:/root/bricks/tb22
Volume2.Brick2.hostname: 192.168.122.65
Volume2.Brick3.path: 192.168.122.31:/root/bricks/tb23
Volume2.Brick3.hostname: 192.168.122.31
Volume2.snap_count: 0
Volume2.stripe_count: 1
Volume2.replica_count: 1
Volume2.subvol_count: 3
Volume2.arbiter_count: 0
Volume2.disperse_count: 0
Volume2.redundancy_count: 0
Volume2.quorum_status: not_applicable
Volume2.snapd_svc.online_status: Offline
Volume2.snapd_svc.inited: False
Volume2.rebalance.id: 00000000-0000-0000-0000-000000000000
Volume2.rebalance.status: not_started
Volume2.rebalance.failures: 0
Volume2.rebalance.skipped: 0
Volume2.rebalance.lookedup: 0
Volume2.rebalance.files: 0
Volume2.rebalance.data: 0Bytes

[Volume2.options]
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on

Volume3.name: tv3
Volume3.id: 97b94d77-116a-4595-acfc-9676e4ebcbd2
Volume3.type: Tier
Volume3.transport_type: tcp
Volume3.status: Stopped
Volume3.brickcount: 4
Volume3.Brick1.path: 192.168.122.31:/root/bricks/tb34
Volume3.Brick1.hostname: 192.168.122.31
Volume3.Brick2.path: 192.168.122.65:/root/bricks/tb33
Volume3.Brick2.hostname: 192.168.122.65
Volume3.Brick3.path: 192.168.122.200:/root/bricks/tb31
Volume3.Brick3.hostname: 192.168.122.200
Volume3.Brick3.port: 49154
Volume3.Brick3.rdma_port: 0
Volume3.Brick3.status: Stopped
Volume3.Brick3.signedin: False
Volume3.Brick3.tier: Cold
Volume3.Brick4.path: 192.168.122.65:/root/bricks/tb32
Volume3.Brick4.hostname: 192.168.122.65
Volume3.snap_count: 0
Volume3.stripe_count: 1
Volume3.replica_count: 2
Volume3.subvol_count: 2
Volume3.arbiter_count: 0
Volume3.disperse_count: 0
Volume3.redundancy_count: 0
Volume3.quorum_status: not_applicable
Volume3.snapd_svc.online_status: Offline
Volume3.snapd_svc.inited: True
Volume3.rebalance.id: 00000000-0000-0000-0000-000000000000
Volume3.rebalance.status: not_started
Volume3.rebalance.failures: 0
Volume3.rebalance.skipped: 0
Volume3.rebalance.lookedup: 0
Volume3.rebalance.files: 0
Volume3.rebalance.data: 0Bytes
Volume3.tier_info.cold_tier_type: Replicate
Volume3.tier_info.cold_brick_count: 2
Volume3.tier_info.cold_replica_count: 2
Volume3.tier_info.cold_disperse_count: 0
Volume3.tier_info.cold_dist_leaf_count: 2
Volume3.tier_info.cold_redundancy_count: 0
Volume3.tier_info.hot_tier_type: Replicate
Volume3.tier_info.hot_brick_count: 2
Volume3.tier_info.hot_replica_count: 2
Volume3.tier_info.promoted: 0
Volume3.tier_info.demoted: 0

[Volume3.options]
cluster.tier-mode: cache
features.ctr-enabled: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on

Volume4.name: tv4
Volume4.id: ad7260ac-0d5c-461f-a39c-a0f4a4ff854b
Volume4.type: Distribute
Volume4.transport_type: tcp
Volume4.status: Started
Volume4.brickcount: 2
Volume4.Brick1.path: 192.168.122.31:/root/bricks/tb41
Volume4.Brick1.hostname: 192.168.122.31
Volume4.Brick2.path: 192.168.122.31:/root/bricks/tb42
Volume4.Brick2.hostname: 192.168.122.31
Volume4.snapshot1.name: tv4-snap_GMT-2016.11.24-12.10.15
Volume4.snapshot1.id: 2eea76ae-c99f-4128-b5c0-3233048312f2
Volume4.snapshot1.time: 2016-11-24 12:10:15
Volume4.snapshot1.status: in_use
Volume4.snap_count: 1
Volume4.stripe_count: 1
Volume4.subvol_count: 2
Volume4.arbiter_count: 0
Volume4.disperse_count: 0
Volume4.redundancy_count: 0
Volume4.quorum_status: not_applicable
Volume4.snapd_svc.online_status: Offline
Volume4.snapd_svc.inited: True
Volume4.rebalance.id: 00000000-0000-0000-0000-000000000000
Volume4.rebalance.status: not_started
Volume4.rebalance.failures: 0
Volume4.rebalance.skipped: 0
Volume4.rebalance.lookedup: 0
Volume4.rebalance.files: 0
Volume4.rebalance.data: 0Bytes

[Volume4.options]
features.uss: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on

[Services]
svc1.name: glustershd
svc1.online_status: Offline
svc2.name: nfs
svc2.online_status: Offline
svc3.name: bitd
svc3.online_status: Offline
svc4.name: scrub
svc4.online_status: Offline
svc5.name: quotad
svc5.online_status: Offline

[Misc]
Base port: 49152
Last allocated port: 49154
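The same flat key scheme makes it simple to summarize the volumes on a node. Run against the sample dump above, this awk one-liner prints each volume's name and status:

# awk '/^Volume[0-9]+\.(name|status):/ { print }' /var/run/gluster/glusterd_state_20160921_123605
Volume1.name: tv1
Volume1.status: Started
Volume2.name: tv2
Volume2.status: Created
Volume3.name: tv3
Volume3.status: Stopped
Volume4.name: tv4
Volume4.status: Started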