Chapter 11. Operations on a Red Hat Gluster Storage Pod in an OpenShift Environment
This chapter lists the various operations that can be performed on a Red Hat Gluster Storage pod (gluster pod):
- To list the pods, execute the following command:

# oc get pods

The gluster pods in this example are:

glusterfs-dc-node1.example.com
glusterfs-dc-node2.example.com
glusterfs-dc-node3.example.com
Note

The topology.json file provides the details of the nodes in a given Trusted Storage Pool (TSP). In this example, all three Red Hat Gluster Storage nodes are from the same TSP.
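If you also need to see which OpenShift node each gluster pod is scheduled on, the generic -o wide output option of oc get can be used; this is standard oc behavior, not specific to gluster pods:

# oc get pods -o wide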
- To enter the gluster pod shell, execute the following command:

# oc rsh <gluster_pod_name>

For example:

# oc rsh glusterfs-dc-node1.example.com
sh-4.2#
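A single command can also be run inside the pod without opening an interactive shell by using oc exec; a minimal sketch, reusing the example pod name from above:

# oc exec glusterfs-dc-node1.example.com -- gluster peer status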
- To get the peer status, execute the following command:

# gluster peer status
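A more compact view of the pool membership is also available through the pool list command:

# gluster pool list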
- To list the gluster volumes on the Trusted Storage Pool, execute the following command:

# gluster volume info
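The output can be restricted to a single volume by passing the volume name:

# gluster volume info <volname>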
- To get the volume status, execute the following command:

# gluster volume status <volname>
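Additional per-brick information, such as disk space and inode usage, can be requested with the detail keyword:

# gluster volume status <volname> detail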
- To use the snapshot feature, load the snapshot module using the following command:

# modprobe dm_snapshot
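You can confirm that the module is loaded before taking snapshots:

# lsmod | grep dm_snapshot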
Important

Restrictions for using Snapshot

- After a snapshot is created, it must be accessed through the user-serviceable snapshots feature only. This can be used to copy the old versions of files into the required location. Reverting the volume to a snapshot state is not supported and should never be done as it might damage the consistency of the data.
- On a volume with snapshots, volume changing operations, such as volume expansion, must not be performed.
- To take a snapshot of the gluster volume, execute the following command:

# gluster snapshot create <snapname> <volname>

For example:

# gluster snapshot create snap1 vol_9e86c0493f6b1be648c9deee1dc226a6
snapshot create: success: Snap snap1_GMT-2016.07.29-13.05.46 created successfully
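To make the snapshot reachable through the user-serviceable snapshots feature mentioned in the restrictions above, the feature is enabled on the volume and the snapshot is activated. A minimal sketch using the example names; the mount point path is a placeholder:

# gluster volume set vol_9e86c0493f6b1be648c9deee1dc226a6 features.uss enable
# gluster snapshot activate snap1_GMT-2016.07.29-13.05.46
# ls /<mount_point>/.snaps/

The old file versions can then be copied out of the .snaps directory on the client mount.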
- To list the snapshots, execute the following command:

# gluster snapshot list
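Details of a particular snapshot, such as its creation time and status, can be shown with:

# gluster snapshot info <snapname>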
- To delete a snapshot, execute the following command:

# gluster snap delete <snapname>

For example:

# gluster snap delete snap1_GMT-2016.07.29-13.05.46
Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
snapshot delete: snap1_GMT-2016.07.29-13.05.46: snap removed successfully
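When deleting snapshots from a script, the confirmation prompt can be suppressed with the gluster CLI's script mode; a hedged sketch:

# gluster --mode=script snap delete <snapname>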
For more information about managing snapshots, refer to https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html-single/administration_guide/#chap-Managing_Snapshots.

- You can set up Container-Native Storage volumes for geo-replication to a non-Container-Native Storage remote site. Geo-replication uses a master–slave model. Here, the Container-Native Storage volume acts as the master volume. To set up geo-replication, you must run the geo-replication commands on gluster pods. To enter the gluster pod shell, execute the following command:
# oc rsh <gluster_pod_name>
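Once a geo-replication session exists, its state can be checked from inside the pod. A sketch with placeholder names for the master volume, slave host, and slave volume:

# gluster volume geo-replication <mastervol> <slavehost>::<slavevol> status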
For more information about setting up geo-replication, refer to https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html/administration_guide/chap-managing_geo-replication.

- Brick multiplexing is a feature that allows including multiple bricks into one process. This reduces resource consumption, allowing you to run more bricks than before with the same memory consumption. Brick multiplexing is enabled by default from Container-Native Storage 3.6. If you want to turn it off, execute the following command:
# gluster volume set all cluster.brick-multiplex off
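To turn brick multiplexing back on later, set the same option to on:

# gluster volume set all cluster.brick-multiplex on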
- The auto_unmount option in glusterfs libfuse, when enabled, ensures that the file system is unmounted at FUSE server termination by running a separate monitor process that performs the unmount. The GlusterFS plugin in OpenShift enables the auto_unmount option for gluster mounts.
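For reference, auto_unmount is an ordinary FUSE mount option and can also be passed on a manual mount; a sketch with placeholder server, volume, and mount point names:

# mount -t glusterfs -o auto_unmount <server>:/<volname> /<mount_point>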