Chapter 5. Creating Persistent Volumes
OpenShift Enterprise clusters can be provisioned with persistent storage using GlusterFS.
Persistent volumes (PVs) and persistent volume claims (PVCs) can share volumes across a single project. While the GlusterFS-specific information contained in a PV definition could also be defined directly in a pod definition, doing so does not create the volume as a distinct cluster resource, making the volume more susceptible to conflicts.
To enable persistent volume support in OpenShift and Kubernetes, one or more endpoints and a service must be created:
The sample endpoints file (sample-gluster-endpoint.json) and the sample service file (sample-gluster-service.json) are available at /usr/share/heketi/openshift/.
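For reference, a minimal sketch of the endpoint definition follows. The endpoint name glusterfs-cluster matches the examples in this section; the IP addresses are placeholders for the nodes in your trusted storage pool, and the file shipped with your Heketi version may differ:

{
    "kind": "Endpoints",
    "apiVersion": "v1",
    "metadata": {
        "name": "glusterfs-cluster"
    },
    "subsets": [
        {
            "addresses": [{ "ip": "192.168.10.100" }],
            "ports": [{ "port": 1 }]
        },
        {
            "addresses": [{ "ip": "192.168.10.101" }],
            "ports": [{ "port": 1 }]
        },
        {
            "addresses": [{ "ip": "192.168.10.102" }],
            "ports": [{ "port": 1 }]
        }
    ]
}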
- To specify the endpoints you want to create, update the sample-gluster-endpoint.json file (see the sketch above) with the endpoints to be created based on your environment. Each Red Hat Gluster Storage trusted storage pool requires its own endpoint with the IP addresses of the nodes in the trusted storage pool.

  name: The name of the endpoint.
  ip: The IP address of the Red Hat Gluster Storage nodes.

- Execute the following command to create the endpoints:
# oc create -f <name_of_endpoint_file>
For example:

# oc create -f sample-gluster-endpoint.json
endpoints "glusterfs-cluster" created
- To verify that the endpoints are created, execute the following command:
# oc get endpoints
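For example (illustrative output; the ENDPOINTS column reflects the placeholder IP addresses from the endpoint sketch above):

NAME                ENDPOINTS                                             AGE
glusterfs-cluster   192.168.10.100:1,192.168.10.101:1,192.168.10.102:1   2m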
- Execute the following command to create a gluster service:
# oc create -f <name_of_service_file>
For example:

# oc create -f sample-gluster-service.json
service "glusterfs-cluster" created
- To verify that the service is created, execute the following command:
# oc get service
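For example (illustrative output; the cluster IP is assigned by OpenShift and will differ in your environment):

NAME                CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
glusterfs-cluster   172.30.205.34   <none>        1/TCP     44s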
Note
The endpoints and the services must be created for each project that requires persistent storage.

- Create a 100G persistent volume with Replica 3 from GlusterFS, and output a persistent volume specification describing this volume to the file pv001.json:
$ heketi-cli volume create --size=100 --persistent-volume-file=pv001.json
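For reference, a sketch of the kind of persistent volume specification heketi-cli writes to pv001.json follows. The volume name and path are illustrative placeholders, and the endpoints value is a placeholder to be filled in during a later step:

{
    "kind": "PersistentVolume",
    "apiVersion": "v1",
    "metadata": {
        "name": "glusterfs-4fc22ff9",
        "creationTimestamp": null
    },
    "spec": {
        "capacity": {
            "storage": "100Gi"
        },
        "glusterfs": {
            "endpoints": "TYPE ENDPOINT HERE",
            "path": "vol_4fc22ff9"
        },
        "accessModes": [
            "ReadWriteMany"
        ],
        "persistentVolumeReclaimPolicy": "Retain"
    },
    "status": {}
}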
name: The name of the volume.
storage: The amount of storage allocated to this volume.
glusterfs: The volume type being used, in this case the glusterfs plug-in.
endpoints: The endpoints name that defines the trusted storage pool created.
path: The Red Hat Gluster Storage volume that will be accessed from the trusted storage pool.
accessModes: accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control.

Note
- heketi-cli also accepts the endpoint name on the command line (--persistent-volume-endpoint="TYPE ENDPOINT HERE"). Its output can then be piped to oc create -f - to create the persistent volume immediately.
- Creation of more than 100 volumes per 3 nodes per cluster is not supported.
- If there are multiple Red Hat Gluster Storage trusted storage pools in your environment, you can check which trusted storage pool the volume is created on using the heketi-cli volume list command. This command lists the cluster name. You can then update the endpoint information in the pv001.json file accordingly.
- When creating a Heketi volume with only two nodes and the replica count set to the default value of three (replica 3), Heketi displays a "No space" error because there is no space to create a replica set of three disks on three different nodes.
- If all heketi-cli write operations (for example, volume create and cluster create) fail while read operations (for example, topology info and volume info) succeed, it is possible that the gluster volume is operating in read-only mode.
- Edit the pv001.json file and enter the name of the endpoint in the endpoints section.
# oc create -f pv001.json
For example:

# oc create -f pv001.json
persistentvolume "glusterfs-4fc22ff9" created
- To verify that the persistent volume is created, execute the following command:
# oc get pv
For example:

# oc get pv

NAME                 CAPACITY   ACCESSMODES   STATUS      CLAIM   REASON   AGE
glusterfs-4fc22ff9   100Gi      RWX           Available                    4s
- Bind the persistent volume to the persistent volume claim by executing the following command:
# oc create -f pvc.json
For example:

# oc create -f pvc.json
persistentvolumeclaim "glusterfs-claim" created
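For reference, a minimal sketch of a claim definition for pvc.json follows, assuming the claim name glusterfs-claim from the example output and the 100Gi capacity and ReadWriteMany access mode of the persistent volume created above:

{
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {
        "name": "glusterfs-claim"
    },
    "spec": {
        "accessModes": [
            "ReadWriteMany"
        ],
        "resources": {
            "requests": {
                "storage": "100Gi"
            }
        }
    }
}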
- To verify that the persistent volume and the persistent volume claim are bound, execute the following commands:
# oc get pv
# oc get pvc
For example:

# oc get pv

NAME                 CAPACITY   ACCESSMODES   STATUS   CLAIM                             REASON   AGE
glusterfs-4fc22ff9   100Gi      RWX           Bound    storage-project/glusterfs-claim            1m
# oc get pvc

NAME              STATUS   VOLUME               CAPACITY   ACCESSMODES   AGE
glusterfs-claim   Bound    glusterfs-4fc22ff9   100Gi      RWX           11s
- The claim can now be used in the application:
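For example, a minimal sketch of an app.yml pod definition that mounts the claim follows; the busybox pod name matches the example output below, while the mount path /usr/share/busybox and volume name mypvc are illustrative placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    name: busybox
    volumeMounts:
    - mountPath: /usr/share/busybox
      name: mypvc
  volumes:
  - name: mypvc
    persistentVolumeClaim:
      claimName: glusterfs-claim

Create the pod from this file: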
# oc create -f app.yml
pod "busybox" created
For more information about using the glusterfs claim in the application, see https://access.redhat.com/documentation/en/openshift-enterprise/version-3.2/installation-and-configuration/#complete-example-using-gusterfs-creating-the-pod.

- To verify that the pod is created, execute the following command:
# oc get pods
- To verify that the persistent volume is mounted inside the container, execute the following command:
# oc rsh busybox
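As a sketch, assuming the mount path /usr/share/busybox from the pod definition above, the mount can be confirmed from the shell that oc rsh opens:

/ $ df -h /usr/share/busybox

The Filesystem column should list one of the Red Hat Gluster Storage node IPs together with the Red Hat Gluster Storage volume path.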
Note
If you encounter a permission denied error on the mount point, refer to the Gluster Volume Security section at: https://access.redhat.com/documentation/en/openshift-enterprise/3.2/single/installation-and-configuration/#gluster-volume-security.