9.2. Block Storage
Block storage allows the creation of high-performance individual storage units. Unlike the traditional file storage capability that glusterfs supports, each storage volume/block device can be treated as an independent disk drive, so that each volume/block device can support an individual file system.
gluster-block is a distributed management framework for block devices. It aims to make Gluster-backed block storage creation and maintenance as simple as possible. gluster-block can provision block devices and export them as iSCSI LUNs across multiple nodes, and uses the iSCSI protocol for data transfer as SCSI blocks/commands.
Note
Static provisioning of volumes is not supported for Block storage. Dynamic provisioning of volumes is the only method supported.
Block volume expansion is not supported in Container-Native Storage 3.6.
9.2.1. Dynamic Provisioning of Volumes for Block Storage
Dynamic provisioning enables provisioning of a Red Hat Gluster Storage volume to a running application container without having to pre-create the volume. The volume will be created dynamically as the claim request comes in, and a volume of exactly the same size will be provisioned to the application containers.
Note
If you are upgrading from Container-Native Storage 3.5 to Container-Native Storage 3.6, ensure that you refer to Chapter 13, Upgrading your Container-Native Storage Environment, before proceeding with the following steps.
9.2.1.1. Configuring Dynamic Provisioning of Volumes
To configure dynamic provisioning of volumes, the administrator must define StorageClass objects that describe named "classes" of storage offered in a cluster. After creating a Storage Class, a secret for heketi authentication must be created before proceeding with the creation of a persistent volume claim.
9.2.1.1.1. Configuring Multipathing on all Initiators
To ensure the iSCSI initiator can communicate with the iSCSI targets and achieve HA using multipathing, execute the following steps on all the OpenShift nodes (iSCSI initiators) where the app pods are hosted:
- To install the initiator-related packages on all the nodes where the initiator has to be configured, execute the following command:
# yum install iscsi-initiator-utils device-mapper-multipath
- To enable multipath, execute the following command:
# mpathconf --enable
- Create and add the following content to the multipath.conf file:
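This sketch scopes the settings to the LIO-ORG vendor used by gluster-block iSCSI targets; validate the individual tunables against your deployment before using them:
# LIO iSCSI settings for gluster-block
devices {
        device {
                vendor "LIO-ORG"
                user_friendly_names "yes" # names like mpatha
                path_grouping_policy "failover" # one path per group
                path_selector "round-robin 0"
                failback immediate
                path_checker "tur"
                prio "const"
                no_path_retry 120
                rr_weight "uniform"
        }
}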
- Execute the following command to restart the multipath service:
# systemctl restart multipathd
9.2.1.1.2. Creating Secret for Heketi Authentication
To create a secret for Heketi authentication, execute the following commands:
Note
If the admin-key value (the secret to access heketi to get the volume details) was not set during the deployment of Container-Native Storage, then the following steps can be omitted.
- Create an encoded value for the password by executing the following command:
echo -n "<key>" | base64
# echo -n "<key>" | base64
Copy to Clipboard Copied! Toggle word wrap Toggle overflow where “key
” is the value foradmin-key
that was created while deploying CNSFor example:echo -n "mypassword" | base64
# echo -n "mypassword" | base64 bXlwYXNzd29yZA==
- Create a secret file. A sample secret file is provided below:
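This sketch of glusterfs-secret.yaml assumes the default namespace and the encoded key from the previous step; the type field is expected to match the gluster-block provisioner name:
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
data:
  # base64-encoded admin-key, for example the output of: echo -n "mypassword" | base64
  key: bXlwYXNzd29yZA==
type: gluster.org/glusterblock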
- Register the secret on OpenShift by executing the following command:
# oc create -f glusterfs-secret.yaml
secret "heketi-secret" created
9.2.1.1.3. Registering a Storage Class
When configuring a StorageClass object for persistent volume provisioning, the administrator must describe the type of provisioner to use and the parameters that will be used by the provisioner when it provisions a PersistentVolume belonging to the class.
- Create a storage class. A sample storage class file is presented below:
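This sketch of glusterfs-block-storageclass.yaml assumes the heketi-secret created above; the resturl and cluster ID are placeholders for your Heketi deployment, and the apiVersion and provisioner name should be verified against your OpenShift version. The parameters are explained below:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-block
provisioner: gluster.org/glusterblock
parameters:
  resturl: "http://heketi-storage-project.cloudapps.mystorage.com"
  restuser: "admin"
  restsecretnamespace: "default"
  restsecretname: "heketi-secret"
  hacount: "3"
  clusterids: "<cluster-id>"
  chapauthenabled: "true"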
where,
resturl: The Gluster REST service/Heketi service URL which provisions gluster volumes on demand. The general format must be IPaddress:Port, and this is a mandatory parameter for the GlusterFS dynamic provisioner. If the Heketi service is exposed as a routable service in the OpenShift/Kubernetes setup, this can have a format similar to http://heketi-storage-project.cloudapps.mystorage.com, where the fqdn is a resolvable Heketi service URL.
restuser: The Gluster REST service/Heketi user who has access to create volumes in the trusted storage pool.
restsecretnamespace + restsecretname: Identification of the Secret instance that contains the user password to use when talking to the Gluster REST service. These parameters are optional. An empty password will be used when both restsecretnamespace and restsecretname are omitted.
hacount: The count of the number of paths to the block target server. hacount provides high availability via the multipathing capability of iSCSI. If there is a path failure, the I/Os will not be interrupted and will be served via the other available paths.
clusterids: The ID of the cluster which will be used by Heketi when provisioning the volume. It can also be a list of comma-separated cluster IDs. This is an optional parameter.
Note
To get the cluster ID, execute the following command:
# heketi-cli cluster list
chapauthenabled: If you want to provision a block volume with CHAP authentication enabled, this value has to be set to true. This is an optional parameter.
- To register the storage class on OpenShift, execute the following command:
# oc create -f glusterfs-block-storageclass.yaml
storageclass "gluster-block" created
- To get the details of the storage class, execute the following command:
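For example, assuming the storage class registered above is named gluster-block:
# oc describe storageclass gluster-block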
9.2.1.1.4. Creating a Persistent Volume Claim
To create a persistent volume claim, execute the following commands:
- Create a Persistent Volume Claim file. A sample persistent volume claim is provided below:
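This sketch of glusterfs-block-pvc-claim.yaml assumes the gluster-block storage class registered above; the claim name claim1 matches the examples that follow, and the 5Gi request is an example size:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim1
  annotations:
    volume.beta.kubernetes.io/storage-class: gluster-block
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi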
- Register the claim by executing the following command:
# oc create -f glusterfs-block-pvc-claim.yaml
persistentvolumeclaim "claim1" created
- To get the details of the claim, execute the following command:
# oc describe pvc <claim_name>
For example:
# oc describe pvc claim1
9.2.1.1.5. Verifying Claim Creation
To verify if the claim is created, execute the following commands:
- To get the details of the persistent volume claim and persistent volume, execute the following command:
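For example, the following lists both the claim and the dynamically provisioned volume; the claim should show a Bound status:
# oc get pvc,pv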
9.2.1.1.6. Using the Claim in a Pod
Execute the following steps to use the claim in a pod.
- To use the claim in the application, reference it in the application definition (app.yaml). For example:
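This sketch mounts the claim created above; the pod name busybox and claim name claim1 match the commands that follow, and the mount path /usr/share/busybox is an example:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
    - name: busybox
      image: busybox
      command:
        - sleep
        - "3600"
      volumeMounts:
        - name: mypvc
          mountPath: /usr/share/busybox
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: claim1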
# oc create -f app.yaml
pod "busybox" created
For more information about using the glusterfs claim in the application, see https://access.redhat.com/documentation/en/openshift-container-platform/3.6/single/installation-and-configuration/#install-config-storage-examples-gluster-example.
- To verify that the pod is created, execute the following command:
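For example, the following should list the busybox pod with a Running status once the volume is attached:
# oc get pods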
- To verify that the persistent volume is mounted inside the container, execute the following command:
# oc rsh busybox
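Inside the container shell, run df and confirm that a device is mounted at the path used in the application definition (/usr/share/busybox in the sketch above):
df -h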
9.2.1.1.7. Deleting a Persistent Volume Claim
- To delete a claim, execute the following command:
# oc delete pvc <claim-name>
For example:
# oc delete pvc claim1
persistentvolumeclaim "claim1" deleted
- To verify if the claim is deleted, execute the following command:
# oc get pvc <claim-name>
For example:
# oc get pvc claim1
No resources found.
When the user deletes a persistent volume claim that is bound to a persistent volume created by dynamic provisioning, apart from deleting the persistent volume claim, Kubernetes will also delete the persistent volume, endpoints, service, and the actual volume. Execute the following commands if this has to be verified:
- To verify if the persistent volume is deleted, execute the following command:
# oc get pv <pv-name>
For example:
# oc get pv pvc-962aa6d1-bddb-11e6-be23-5254009fc65b
No resources found.