5.2. Dynamic Provisioning of Volumes
Dynamic provisioning enables provisioning of Red Hat Gluster Storage volumes to a running application container without having to pre-create the volume. The volume is created dynamically as the claim request comes in, and a volume of exactly the requested size is provisioned to the application containers.
Note
Dynamically provisioned volumes are supported from Container Native Storage 3.4. If you have any statically provisioned volumes and require more information about managing them, refer to Section 5.1, “Static Provisioning of Volumes”.
5.2.1. Configuring Dynamic Provisioning of Volumes
To configure dynamic provisioning of volumes, the administrator must define StorageClass objects that describe named "classes" of storage offered in a cluster. After creating a StorageClass, a secret for Heketi authentication must be created before proceeding with the creation of a persistent volume claim.
5.2.1.1. Registering a Storage Class
When configuring a StorageClass object for persistent volume provisioning, the administrator must describe the type of provisioner to use and the parameters that will be used by the provisioner when it provisions a PersistentVolume belonging to the class.
- To create a storage class, execute the following command:
# cat glusterfs-storageclass.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: gluster-container
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://127.0.0.1:8081"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
where:
resturl: Gluster REST service/Heketi service URL which provisions gluster volumes on demand. The general format must be IPaddress:Port and this is a mandatory parameter for the GlusterFS dynamic provisioner. If the Heketi service is exposed as a routable service in the OpenShift/Kubernetes setup, this can have a format similar to http://heketi-storage-project.cloudapps.mystorage.com, where the FQDN is a resolvable Heketi service URL.
restuser: Gluster REST service/Heketi user who has access to create volumes in the trusted storage pool.
secretNamespace + secretName: Identification of the Secret instance that contains the user password that is used when communicating with the Gluster REST service. These parameters are optional. An empty password is used when both secretNamespace and secretName are omitted.
Note
When the persistent volumes are dynamically provisioned, the Gluster plugin automatically creates an endpoint and a headless service with the name gluster-dynamic-<claimname>. This dynamic endpoint and service are deleted automatically when the persistent volume claim is deleted.
- To register the storage class with OpenShift, execute the following command:
# oc create -f glusterfs-storageclass.yaml
storageclass "gluster-container" created
- To get the details of the storage class, execute the following command:
# oc describe storageclass gluster-container
Name:            gluster-container
IsDefaultClass:  No
Annotations:     <none>
Provisioner:     kubernetes.io/glusterfs
Parameters:      resturl=http://127.0.0.1:8081,restuser=admin,secretName=heketi-secret,secretNamespace=default
No events.
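If Heketi is exposed as a routable service, the resturl in the storage class points at that hostname instead of an IP address and port. A minimal sketch reusing the example hostname from the parameter description above (the storage class name and file name here are illustrative):
# cat glusterfs-storageclass-route.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: gluster-container-route
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-storage-project.cloudapps.mystorage.com"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"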
5.2.1.2. Creating Secret for Heketi Authentication
To create a secret for Heketi authentication, execute the following commands:
- Create an encoded value for the password by executing the following command:
# echo -n "mypassword" | base64
where “mypassword” is Heketi’s admin user password.
For example:
# echo -n "mypassword" | base64
bXlwYXNzd29yZA==
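If needed, you can confirm that the encoded value decodes back to the original password (an optional check):
# echo "bXlwYXNzd29yZA==" | base64 --decode
mypassword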
- Create a secret file. A sample secret file is provided below:
# cat glusterfs-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
data:
  # base64 encoded password. E.g.: echo -n "mypassword" | base64
  key: bXlwYXNzd29yZA==
type: kubernetes.io/glusterfs
- Register the secret on OpenShift by executing the following command:
# oc create -f glusterfs-secret.yaml
secret "heketi-secret" created
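As an optional check, you can confirm that the secret exists in the expected namespace and carries the kubernetes.io/glusterfs type (output omitted here):
# oc get secret heketi-secret -n default -o yaml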
5.2.1.3. Creating a Persistent Volume Claim
To create a persistent volume claim, execute the following commands:
- Create a Persistent Volume Claim file. A sample persistent volume claim is provided below:
# cat glusterfs-pvc-claim1.yaml
{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "claim1",
    "annotations": {
      "volume.beta.kubernetes.io/storage-class": "gluster-container"
    }
  },
  "spec": {
    "accessModes": [
      "ReadWriteOnce"
    ],
    "resources": {
      "requests": {
        "storage": "4Gi"
      }
    }
  }
}
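The same claim can also be written in YAML instead of JSON. A sketch equivalent to the JSON above (the file name is illustrative):
# cat glusterfs-pvc-claim1-yaml.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim1
  annotations:
    volume.beta.kubernetes.io/storage-class: gluster-container
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi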
- Register the claim by executing the following command:
# oc create -f glusterfs-pvc-claim1.yaml
persistentvolumeclaim "claim1" created
- To get the details of the claim, execute the following command:
# oc describe pvc <claim_name>
For example:
# oc describe pvc claim1
Name:          claim1
Namespace:     default
StorageClass:  gluster-container
Status:        Bound
Volume:        pvc-54b88668-9da6-11e6-965e-54ee7551fd0c
Labels:        <none>
Capacity:      4Gi
Access Modes:  RWO
No events.
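The persistent volume reported in the Volume field can be inspected in the same way; the PV name below is taken from the output above:
# oc describe pv pvc-54b88668-9da6-11e6-965e-54ee7551fd0c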
5.2.1.4. Verifying Claim Creation
To verify if the claim is created, execute the following commands:
- To get the details of the persistent volume claim and persistent volume, execute the following command:
# oc get pv,pvc
NAME                                          CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                    REASON    AGE
pv/pvc-962aa6d1-bddb-11e6-be23-5254009fc65b   4Gi        RWO           Delete          Bound     storage-project/claim1             3m
NAME         STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
pvc/claim1   Bound     pvc-962aa6d1-bddb-11e6-be23-5254009fc65b   4Gi        RWO           4m
- To validate if the endpoint and the services are created as part of claim creation, execute the following command:
# oc get endpoints,service
NAME                          ENDPOINTS                                             AGE
ep/storage-project-router     192.168.68.3:443,192.168.68.3:1936,192.168.68.3:80    28d
ep/gluster-dynamic-claim1     192.168.68.2:1,192.168.68.3:1,192.168.68.4:1          5m
ep/heketi                     10.130.0.21:8080                                      21d
ep/heketi-storage-endpoints   192.168.68.2:1,192.168.68.3:1,192.168.68.4:1          25d
NAME                           CLUSTER-IP       EXTERNAL-IP   PORT(S)                   AGE
svc/storage-project-router     172.30.166.64    <none>        80/TCP,443/TCP,1936/TCP   28d
svc/gluster-dynamic-claim1     172.30.52.17     <none>        1/TCP                     5m
svc/heketi                     172.30.129.113   <none>        8080/TCP                  21d
svc/heketi-storage-endpoints   172.30.133.212   <none>        1/TCP                     25d
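The dynamically created volume can also be confirmed on the Heketi side. Assuming heketi-cli is installed and pointed at the Heketi service used in the storage class (the server URL and credentials below are the example values from this chapter), the new volume appears in the listing:
# export HEKETI_CLI_SERVER=http://127.0.0.1:8081
# heketi-cli volume list --user admin --secret mypassword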
5.2.1.5. Using the Claim in a Pod
Execute the following steps to use the claim in a pod.
- To use the claim in the application, create a pod that mounts the claim. For example:
# cat app.yml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
    - image: busybox
      command:
        - sleep
        - "3600"
      name: busybox
      volumeMounts:
        - mountPath: /usr/share/busybox
          name: mypvc
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: claim1
# oc create -f app.yml
pod "busybox" created
For more information about using the glusterfs claim in an application, see https://access.redhat.com/documentation/en/openshift-container-platform/3.4/single/installation-and-configuration/#install-config-storage-examples-gluster-example.
- To verify that the pod is created, execute the following command:
# oc get pods
NAME                                READY     STATUS    RESTARTS   AGE
storage-project-router-1-at7tf      1/1       Running   0          13d
busybox                             1/1       Running   0          8s
glusterfs-dc-192.168.68.2-1-hu28h   1/1       Running   0          7d
glusterfs-dc-192.168.68.3-1-ytnlg   1/1       Running   0          7d
glusterfs-dc-192.168.68.4-1-juqcq   1/1       Running   0          13d
heketi-1-9r47c                      1/1       Running   0          13d
- To verify that the persistent volume is mounted inside the container, execute the following command:
# oc rsh busybox
/ $ df -h
Filesystem                                          Size      Used  Available  Use%  Mounted on
/dev/mapper/docker-253:0-666733-38050a1d2cdb41dc00d60f25a7a295f6e89d4c529302fb2b93d8faa5a3205fb9
                                                   10.0G     33.8M       9.9G    0%  /
tmpfs                                              23.5G         0      23.5G    0%  /dev
tmpfs                                              23.5G         0      23.5G    0%  /sys/fs/cgroup
/dev/mapper/rhgs-root                              17.5G      3.6G      13.8G   21%  /run/secrets
/dev/mapper/rhgs-root                              17.5G      3.6G      13.8G   21%  /dev/termination-log
/dev/mapper/rhgs-root                              17.5G      3.6G      13.8G   21%  /etc/resolv.conf
/dev/mapper/rhgs-root                              17.5G      3.6G      13.8G   21%  /etc/hostname
/dev/mapper/rhgs-root                              17.5G      3.6G      13.8G   21%  /etc/hosts
shm                                                64.0M         0      64.0M    0%  /dev/shm
192.168.68.2:vol_5b05cf2e5404afe614f8afa698792bae
                                                    4.0G     32.6M       4.0G    1%  /usr/share/busybox
tmpfs                                              23.5G     16.0K      23.5G    0%  /var/run/secrets/kubernetes.io/serviceaccount
tmpfs                                              23.5G         0      23.5G    0%  /proc/kcore
tmpfs                                              23.5G         0      23.5G    0%  /proc/timer_stats
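To confirm that the mounted volume is writable from inside the pod, you can create and remove a small test file on the mount path (the file name is illustrative):
/ $ touch /usr/share/busybox/testfile
/ $ ls /usr/share/busybox
testfile
/ $ rm /usr/share/busybox/testfile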
5.2.1.6. Deleting a Persistent Volume Claim
- To delete a claim, execute the following command:
# oc delete pvc <claim-name>
For example:
# oc delete pvc claim1
persistentvolumeclaim "claim1" deleted
- To verify if the claim is deleted, execute the following command:
# oc get pvc <claim-name>
For example:
# oc get pvc claim1
No resources found.
When the user deletes a persistent volume claim that is bound to a persistent volume created by dynamic provisioning, apart from deleting the persistent volume claim, Kubernetes also deletes the persistent volume, endpoints, service, and the actual volume. Execute the following commands to verify this:
- To verify if the persistent volume is deleted, execute the following command:
# oc get pv <pv-name>
For example:
# oc get pv pvc-962aa6d1-bddb-11e6-be23-5254009fc65b
No resources found.
- To verify if the endpoints are deleted, execute the following command:
# oc get endpoints <endpointname>
For example:
# oc get endpoints gluster-dynamic-claim1
No resources found.
- To verify if the service is deleted, execute the following command:
# oc get service <servicename>
For example:
# oc get service gluster-dynamic-claim1
No resources found.
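Because the backing Red Hat Gluster Storage volume is removed along with the claim, it should also disappear from the Heketi volume listing. Assuming heketi-cli is configured as in the earlier verification step, run the listing again and confirm the volume is gone:
# heketi-cli volume list --user admin --secret mypassword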