OpenShift Container Storage is now OpenShift Data Foundation starting with version 4.9.
Chapter 3. Creating Persistent Volumes
OpenShift Container Platform clusters can be provisioned with persistent storage using GlusterFS.
Persistent volumes (PVs) and persistent volume claims (PVCs) can share volumes across a single project. While the GlusterFS-specific information contained in a PV definition could also be defined directly in a pod definition, doing so does not create the volume as a distinct cluster resource, making the volume more susceptible to conflicts.
Binding PVs by Labels and Selectors
Labels are an OpenShift Container Platform feature that support user-defined tags (key-value pairs) as part of an object’s specification. Their primary purpose is to enable the arbitrary grouping of objects by defining identical labels among them. These labels can then be targeted by selectors to match all objects with specified label values. It is this functionality we will take advantage of to enable our PVC to bind to our PV.
You can use labels to identify common attributes or characteristics shared among volumes. For example, you can define the gluster volume to have a custom attribute (key) named storage-tier with a value of gold assigned. A claim will be able to select a PV with storage-tier=gold to match this PV.
More details on provisioning volumes for file-based storage are provided in Section 3.1, “File Storage”. Similarly, further details on provisioning volumes for block-based storage are provided in Section 3.2, “Block Storage”.
3.1. File Storage
File storage, also called file-level or file-based storage, stores data in a hierarchical structure. The data is saved in files and folders, and presented to both the system storing it and the system retrieving it in the same format. You can provision volumes either statically or dynamically for file-based storage.
3.1.1. Static Provisioning of Volumes
To enable persistent volume support in OpenShift and Kubernetes, a few endpoints and a service must be created.
The following steps are not required if OpenShift Container Storage was deployed using the (default) Ansible installer.
The sample glusterfs endpoint file (sample-gluster-endpoints.yaml) and the sample glusterfs service file (sample-gluster-service.yaml) are available in the /usr/share/heketi/templates/ directory.
The sample endpoints and services files will not be available for Ansible deployments because the /usr/share/heketi/templates/ directory is not created for such deployments.
Copy the sample glusterfs endpoint file and glusterfs service file to a location of your choice, and then edit the copied files. For example:
# cp /usr/share/heketi/templates/sample-gluster-endpoints.yaml /<path>/gluster-endpoints.yaml
To specify the endpoints you want to create, update the copied sample-gluster-endpoints.yaml file with the endpoints to be created based on the environment. Each Red Hat Gluster Storage trusted storage pool requires its own endpoint with the IP of the nodes in the trusted storage pool.
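A minimal sketch of such an endpoints definition, with illustrative node IP addresses (substitute the IPs of your own trusted storage pool):
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
  - addresses:
      - ip: 192.168.10.100
    ports:
      - port: 1
  - addresses:
      - ip: 192.168.10.101
    ports:
      - port: 1
  - addresses:
      - ip: 192.168.10.102
    ports:
      - port: 1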
- name
- The name of the endpoint.
- ip
- The IP address of the Red Hat Gluster Storage nodes.
Execute the following command to create the endpoints:
# oc create -f <name_of_endpoint_file>
For example:
# oc create -f sample-gluster-endpoints.yaml
endpoints "glusterfs-cluster" created
To verify that the endpoints are created, execute the following command:
# oc get endpoints
For example:
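The output would look roughly like the following (the addresses are the hypothetical ones used in the sketch above):
NAME                ENDPOINTS                                            AGE
glusterfs-cluster   192.168.10.100:1,192.168.10.101:1,192.168.10.102:1   3s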
Execute the following command to create a gluster service:
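The copied sample-gluster-service.yaml ties the endpoints to a stable service name; a minimal sketch is shown below, with the single port value taken as a placeholder:
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster
spec:
  ports:
    - port: 1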
# oc create -f <name_of_service_file>
For example:
# oc create -f sample-gluster-service.yaml
service "glusterfs-cluster" created
To verify that the service is created, execute the following command:
# oc get service
For example:
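Illustrative output (the cluster IP and age values are hypothetical):
NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
glusterfs-cluster   ClusterIP   172.30.205.34   <none>        1/TCP     44s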
Note: The endpoints and the services must be created for each project that requires persistent storage.
Create a 100G persistent volume with Replica 3 from GlusterFS and output a persistent volume specification describing this volume to the file pv001.json:
$ heketi-cli volume create --size=100 --persistent-volume-file=pv001.json
Important: You must manually add the Labels information to the .json file.
Following is the example YAML file for reference:
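A minimal sketch of such a persistent volume definition; the gluster volume path (glustervol) and the storage-tier label are illustrative and must match your environment and the labels you intend to select on:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfs-4fc22ff9
  labels:
    storage-tier: gold
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster
    path: glustervol
  persistentVolumeReclaimPolicy: Retain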
- name
- The name of the volume.
- storage
- The amount of storage allocated to this volume
- glusterfs
- The volume type being used, in this case the glusterfs plug-in
- endpoints
- The endpoints name that defines the trusted storage pool created
- path
- The Red Hat Gluster Storage volume that will be accessed from the Trusted Storage Pool.
- accessModes
- accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control.
- labels
- Use labels to identify common attributes or characteristics shared among volumes. In this case, we have defined the gluster volume to have a custom attribute (key) named storage-tier with a value of gold assigned. A claim will be able to select a PV with storage-tier=gold to match this PV.
Note:
- heketi-cli also accepts the endpoint name on the command line (--persistent-volume-endpoint="TYPE ENDPOINT HERE"). This can then be piped to oc create -f - to create the persistent volume immediately.
- If there are multiple Red Hat Gluster Storage trusted storage pools in your environment, you can check which trusted storage pool the volume is created in using the heketi-cli volume list command. This command lists the cluster name. You can then update the endpoint information in the pv001.json file accordingly.
- When creating a Heketi volume with only two nodes and the replica count set to the default value of three (replica 3), Heketi displays a "No space" error because there is no space to create a replica set of three disks on three different nodes.
- If all heketi-cli write operations (for example, volume create, cluster create) fail and the read operations (for example, topology info, volume info) succeed, the gluster volume is likely operating in read-only mode.
Edit the pv001.json file and enter the name of the endpoint in the endpoints section:
Create a persistent volume by executing the following command:
# oc create -f pv001.json
For example:
# oc create -f pv001.json
persistentvolume "glusterfs-4fc22ff9" created
To verify that the persistent volume is created, execute the following command:
# oc get pv
For example:
# oc get pv
NAME                 CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
glusterfs-4fc22ff9   100Gi      RWX           Available                       4s
Create a persistent volume claim file. For example:
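A minimal sketch of pvc.yaml, using a selector on the storage-tier=gold label so that the claim binds to the PV created above:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfs-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  selector:
    matchLabels:
      storage-tier: gold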
Bind the persistent volume to the persistent volume claim by executing the following command:
# oc create -f pvc.yaml
For example:
# oc create -f pvc.yaml
persistentvolumeclaim "glusterfs-claim" created
To verify that the persistent volume and the persistent volume claim are bound, execute the following commands:
# oc get pv
# oc get pvc
For example:
# oc get pv
NAME                 CAPACITY   ACCESSMODES   STATUS    CLAIM                             REASON    AGE
glusterfs-4fc22ff9   100Gi      RWX           Bound     storage-project/glusterfs-claim             1m
# oc get pvc
NAME              STATUS    VOLUME               CAPACITY   ACCESSMODES   AGE
glusterfs-claim   Bound     glusterfs-4fc22ff9   100Gi      RWX           11s
The claim can now be used in the application. For example:
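A minimal sketch of app.yaml; the busybox image and the mount path are illustrative, and the essential part is the persistentVolumeClaim reference to glusterfs-claim:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
    - name: busybox
      image: busybox
      command:
        - sleep
        - "3600"
      volumeMounts:
        - name: glusterfs-vol
          mountPath: /usr/share/busybox
  volumes:
    - name: glusterfs-vol
      persistentVolumeClaim:
        claimName: glusterfs-claim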
# oc create -f app.yaml
pod "busybox" created
For more information about using the glusterfs claim in the application, see https://access.redhat.com/documentation/en-us/openshift_container_platform/3.11/html-single/configuring_clusters/#install-config-storage-examples-gluster-example.
To verify that the pod is created, execute the following command:
# oc get pods -n <storage_project_name>
For example:
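Illustrative output, assuming the busybox pod created above (the age value is hypothetical):
NAME      READY     STATUS    RESTARTS   AGE
busybox   1/1       Running   0          43s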
To verify that the persistent volume is mounted inside the container, execute the following command:
# oc rsh busybox
If you encounter a permission denied error on the mount point, then refer to section Gluster Volume Security at: https://access.redhat.com/documentation/en-us/openshift_container_platform/3.11/html-single/configuring_clusters/#install-config-storage-examples-gluster-example.
3.1.2. Dynamic Provisioning of Volumes
Dynamic provisioning enables you to provision a Red Hat Gluster Storage volume to a running application container without pre-creating the volume. The volume will be created dynamically as the claim request comes in, and a volume of exactly the same size will be provisioned to the application containers.
The steps outlined below are not necessary if OpenShift Container Storage was deployed using the (default) Ansible installer and the default storage class (glusterfs-storage) created during the installation is used.
3.1.2.1. Configuring Dynamic Provisioning of Volumes
To configure dynamic provisioning of volumes, the administrator must define StorageClass objects that describe named "classes" of storage offered in a cluster. After creating a Storage Class, a secret for heketi authentication must be created before proceeding with the creation of persistent volume claim.
3.1.2.1.1. Creating Secret for Heketi Authentication
To create a secret for Heketi authentication, execute the following commands:
If the admin-key value (secret to access heketi to get the volume details) was not set during the deployment of Red Hat Openshift Container Storage, then the following steps can be omitted.
Create an encoded value for the password by executing the following command:
# echo -n "<key>" | base64
where “key” is the value for “admin-key” that was created while deploying Red Hat Openshift Container Storage
For example:
# echo -n "mypassword" | base64
bXlwYXNzd29yZA==
Create a secret file. A sample secret file is provided below:
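A minimal sketch of glusterfs-secret.yaml; the namespace is assumed to be default and the key value is the base64 string generated above:
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
data:
  key: bXlwYXNzd29yZA==
type: kubernetes.io/glusterfs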
Register the secret on Openshift by executing the following command:
# oc create -f glusterfs-secret.yaml
secret "heketi-secret" created
3.1.2.1.2. Registering a Storage Class
When configuring a StorageClass object for persistent volume provisioning, the administrator must describe the type of provisioner to use and the parameters that will be used by the provisioner when it provisions a PersistentVolume belonging to the class.
To create a storage class execute the following command:
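A minimal sketch of glusterfs-storageclass.yaml; the resturl, cluster ID, and secret values are illustrative, and the individual parameters are described below:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-container
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete
allowVolumeExpansion: true
parameters:
  resturl: "http://heketi-storage-project.cloudapps.mystorage.com"
  restuser: "admin"
  volumetype: "replicate:3"
  clusterid: "630372ccdc720a92c681fb928f27b53f"
  secretNamespace: "default"
  secretName: "heketi-secret"
  volumenameprefix: "test-vol"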
where,
- resturl
- Gluster REST service/Heketi service URL that provisions gluster volumes on demand. The general format must be IPaddress:Port, and this is a mandatory parameter for the GlusterFS dynamic provisioner. If the Heketi service is exposed as a routable service in the OpenShift/Kubernetes setup, it can have a format similar to http://heketi-storage-project.cloudapps.mystorage.com, where the FQDN is a resolvable Heketi service URL.
- restuser
- Gluster REST service/Heketi user who has access to create volumes in the trusted storage pool
- volumetype
It specifies the volume type that is being used.
Note: Distributed three-way replication is the only supported volume type. This includes both standard three-way replication volumes and arbiter 2+1.
- clusterid
It is the ID of the cluster which will be used by Heketi when provisioning the volume. It can also be a list of comma-separated cluster IDs. This is an optional parameter.
Note: To get the cluster ID, execute the following command:
# heketi-cli cluster list
- secretNamespace + secretName
Identification of the Secret instance that contains the user password to use when communicating with the Gluster REST service. These parameters are optional. An empty password is used when both secretNamespace and secretName are omitted.
Note: When persistent volumes are dynamically provisioned, the Gluster plugin automatically creates an endpoint and a headless service with the name gluster-dynamic-<claimname>. The dynamic endpoint and service are deleted automatically when the persistent volume claim is deleted.
- volumeoptions
This is an optional parameter. It allows you to create glusterfs volumes with encryption enabled by setting the parameter to "client.ssl on, server.ssl on". For more information on enabling encryption, see Chapter 8, Enabling Encryption.
Note: Do not add this parameter in the storageclass if encryption is not enabled.
- volumenameprefix
This is an optional parameter. It adds a custom prefix to the name of the volume created by heketi. For more information, see Section 3.1.2.1.5, “(Optional) Providing a Custom Volume Name Prefix for Persistent Volumes”.
Note: The value for this parameter cannot contain _ in the storageclass.
- allowVolumeExpansion
- To increase the PV claim value, set the allowVolumeExpansion parameter in the storageclass file to true. For more information, see Section 3.1.2.1.7, “Expanding Persistent Volume Claim”.
To register the storage class to Openshift, execute the following command:
# oc create -f glusterfs-storageclass.yaml
storageclass "gluster-container" created
To get the details of the storage class, execute the following command:
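For example (assuming the storage class name registered above):
# oc describe storageclass gluster-container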
3.1.2.1.3. Creating a Persistent Volume Claim
To create a persistent volume claim execute the following commands:
Create a Persistent Volume Claim file. A sample persistent volume claim is provided below:
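A minimal sketch of glusterfs-pvc-claim1.yaml, assuming the gluster-container storage class registered above and an illustrative 5Gi request (the optional persistentVolumeReclaimPolicy discussed below is shown as a comment):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim1
  annotations:
    volume.beta.kubernetes.io/storage-class: gluster-container
spec:
  # persistentVolumeReclaimPolicy: Retain   (optional; see the note below)
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi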
- persistentVolumeReclaimPolicy
This is an optional parameter. When this parameter is set to "Retain" the underlying persistent volume is retained even after the corresponding persistent volume claim is deleted.
Note: When the PVC is deleted, the underlying heketi and gluster volumes are not deleted if "persistentVolumeReclaimPolicy" is set to "Retain". To delete the volume, you must use the heketi CLI and then delete the PV.
Register the claim by executing the following command:
# oc create -f glusterfs-pvc-claim1.yaml
persistentvolumeclaim "claim1" created
To get the details of the claim, execute the following command:
# oc describe pvc <claim_name>
For example:
3.1.2.1.4. Verifying Claim Creation
To verify if the claim is created, execute the following commands:
To get the details of the persistent volume claim and persistent volume, execute the following command:
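For example (querying the claim and its bound volume together):
# oc get pvc,pv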
To validate if the endpoint and the services are created as part of claim creation, execute the following command:
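For example (the dynamically created objects are named gluster-dynamic-<claimname>):
# oc get endpoints,service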
3.1.2.1.5. (Optional) Providing a Custom Volume Name Prefix for Persistent Volumes
You can provide a custom volume name prefix to the persistent volume that is created. By providing a custom volume name prefix, users can now easily search/filter the volumes based on:
- Any string that was provided as the field value of "volnameprefix" in the storageclass file.
- Persistent volume claim name.
- Project / Namespace name.
To set the name, ensure that you have added the parameter volumenameprefix to the storage class file. For more information, see Section 3.1.2.1.2, “Registering a Storage Class”
The value for this parameter cannot contain _ in the storageclass.
To verify if the custom volume name prefix is set, execute the following command:
# oc describe pv <pv_name>
For example:
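A hypothetical excerpt of the output, assuming the prefix test-vol, the project storage-project, and the claim claim1 (the trailing ID is illustrative):
Path:            test-vol_storage-project_claim1_f9f2fc3f3b3811e9b28f0a580a810205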
The value for Path will have the custom volume name prefix attached to the namespace and the claim name, which is "test-vol" in this case.
3.1.2.1.6. Using the Claim in a Pod
Execute the following steps to use the claim in a pod.
To use the claim in the application, for example
# oc create -f app.yaml
pod "busybox" created
For more information about using the glusterfs claim in the application, see https://access.redhat.com/documentation/en-us/openshift_container_platform/3.11/html-single/configuring_clusters/#install-config-storage-examples-gluster-example.
To verify that the pod is created, execute the following command:
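For example, listing the pods in the current project:
# oc get pods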
To verify that the persistent volume is mounted inside the container, execute the following command:
# oc rsh busybox
3.1.2.1.7. Expanding Persistent Volume Claim
To increase the PV claim value, set the allowVolumeExpansion parameter in the storageclass file to true. For more information, see Section 3.1.2.1.2, “Registering a Storage Class”.
You can also resize a PV via the OpenShift Container Platform 3.11 Web Console.
To expand the persistent volume claim value, execute the following commands:
To check the existing persistent volume size, execute the following command on the app pod:
# oc rsh busybox
# df -h
For example:
In this example the persistent volume size is 2Gi.
To edit the persistent volume claim value, execute the following command and edit the following storage parameter:
resources:
  requests:
    storage: <storage_value>
# oc edit pvc <claim_name>
For example, to expand the storage value to 20Gi:
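The edited claim spec would then contain (a minimal excerpt):
spec:
  resources:
    requests:
      storage: 20Gi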
To verify, execute the following command on the app pod:
# oc rsh busybox
/ # df -h
For example:
The size has changed from 2Gi (earlier) to 20Gi.
3.1.2.1.8. Deleting a Persistent Volume Claim
If the "persistentVolumeReclaimPolicy" parameter was set to "Retain" when registering the storageclass, the underlying PV and the corresponding volume remain even when a PVC is deleted.
To delete a claim, execute the following command:
# oc delete pvc <claim-name>
For example:
# oc delete pvc claim1
persistentvolumeclaim "claim1" deleted
To verify if the claim is deleted, execute the following command:
# oc get pvc <claim-name>
For example:
# oc get pvc claim1
No resources found.
When the user deletes a persistent volume claim that is bound to a persistent volume created by dynamic provisioning, apart from deleting the persistent volume claim, Kubernetes will also delete the persistent volume, endpoints, service, and the actual volume. Execute the following commands to verify this:
To verify if the persistent volume is deleted, execute the following command:
# oc get pv <pv-name>
For example:
# oc get pv pvc-962aa6d1-bddb-11e6-be23-5254009fc65b
No resources found.
To verify if the endpoints are deleted, execute the following command:
# oc get endpoints <endpointname>
For example:
# oc get endpoints gluster-dynamic-claim1
No resources found.
To verify if the service is deleted, execute the following command:
# oc get service <servicename>
For example:
# oc get service gluster-dynamic-claim1
No resources found.
3.1.3. Volume Security
Volumes come with a UID/GID of 0 (root). For an application pod to write to the volume, it should also have a UID/GID of 0 (root). With the volume security feature, the administrator can now create a volume with a unique GID, and the application pod can write to the volume using this unique GID.
Volume security for statically provisioned volumes
To create a statically provisioned volume with a GID, execute the following command:
$ heketi-cli volume create --size=100 --persistent-volume-file=pv001.json --gid=590
In the above command, a 100G persistent volume with a GID of 590 is created and the output of the persistent volume specification describing this volume is added to the pv001.json file.
For more information about accessing the volume using this GID, see https://access.redhat.com/documentation/en-us/openshift_container_platform/3.11/html/configuring_clusters/persistent-storage-examples#install-config-storage-examples-gluster-example.
Volume security for dynamically provisioned volumes
Two new parameters, gidMin and gidMax, are introduced with the dynamic provisioner. These values allow the administrator to configure the GID range for the volume in the storage class. To set up the GID values and provide volume security for dynamically provisioned volumes, execute the following commands:
Create a storage class file with the GID values. For example:
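A minimal sketch of such a storage class, reusing the illustrative resturl and secret from earlier and adding a hypothetical GID range:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-container
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-storage-project.cloudapps.mystorage.com"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  gidMin: "2000"
  gidMax: "4000"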
Note: If the gidMin and gidMax values are not provided, then the dynamically provisioned volumes will have a GID between 2000 and 2147483647.
- Create a persistent volume claim. For more information, see Section 3.1.2.1.3, “Creating a Persistent Volume Claim”.
- Use the claim in the pod. Ensure that this pod is non-privileged. For more information, see Section 3.1.2.1.6, “Using the Claim in a Pod”.
To verify if the GID is within the range specified, execute the following command:
# oc rsh busybox
$ id
For example:
$ id
uid=1000060000 gid=0(root) groups=0(root),2001
where 2001 in the above output is the allocated GID for the persistent volume, which is within the range specified in the storage class. You can write to this volume with the allocated GID.
Note: When the persistent volume claim is deleted, the GID of the persistent volume is released from the pool.
3.1.4. Device tiering in heketi
Heketi supports a simple tag matching approach to use certain devices when placing a volume. The user is required to specify a key-value pair on a specific set of devices and create a new volume with the volume option key user.heketi.device-tag-match and a simple matching rule.
Procedure
Apply the required tags on the heketi devices.
# heketi-cli device settags <device-name> <key>:<value>
Example:
# heketi-cli device settags 1fe1b83e5660efb53cc56433cedf7771 disktype:hdd
Remove the applied tag from the device.
# heketi-cli device rmtags <device-name> <key>
Example:
# heketi-cli device rmtags 1fe1b83e5660efb53cc56433cedf7771 disktype
Verify the added tag on the device.
# heketi-cli device info <device-name>
Example:
# heketi-cli device info 1fe1b83e5660efb53cc56433cedf7771
Example output:
Use tagged devices to create the volume.
# heketi-cli volume create --size=<size in GiB> --gluster-volume-options 'user.heketi.device-tag-match <key>=<value>'
Important:
- When creating volumes, you must pass a new volume option user.heketi.device-tag-match, where the value of the option is a tag key followed by either "=" or "!=" and then a tag value.
- All matches are exact and case sensitive, and only one device-tag-match can be specified.
Example:
# heketi-cli volume create --size=5 --gluster-volume-options 'user.heketi.device-tag-match disktype=hdd'
Note: Once a volume is created, the volume options list is fixed. The tag-match rules persist with the volume metadata for volume expansion and brick replacement purposes.
Create a storage class.
Create a storage class that only creates volumes on hard disks.
# cat hdd-storageclass.yaml
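A minimal sketch of hdd-storageclass.yaml; the tag-match rule is passed through the volumeoptions parameter, and the resturl and secret values are illustrative:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage-hdd
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-storage-project.cloudapps.mystorage.com"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  volumeoptions: "user.heketi.device-tag-match disktype=hdd"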
Create a storage class that only creates volumes using faster solid state storage.
Important: You must use a negative tag matching rule that excludes hard disk devices.
# cat sdd-storageclass.yaml
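A minimal sketch of sdd-storageclass.yaml, identical except for the negative tag-match rule that excludes hard disk devices:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage-ssd
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-storage-project.cloudapps.mystorage.com"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  volumeoptions: "user.heketi.device-tag-match disktype!=hdd"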
3.2. Block Storage
Block storage allows the creation of high performance individual storage units. Unlike the traditional file storage capability that glusterfs supports, each storage volume/block device can be treated as an independent disk drive, so that each storage volume/block device can support an individual file system.
gluster-block is a distributed management framework for block devices. It aims to make Gluster-backed block storage creation and maintenance as simple as possible. gluster-block can provision block devices and export them as iSCSI LUNs across multiple nodes, and uses the iSCSI protocol for data transfer as SCSI block commands.
- Block volume expansion is now supported in OpenShift Container Storage 3.11. Refer to Section 3.2.3, “Block volume expansion”.
- Static provisioning of volumes is not supported for Block storage. Dynamic provisioning of volumes is the only method supported.
The recommended Red Hat Enterprise Linux (RHEL) version for block storage is RHEL-7.5.4. Ensure that your kernel version matches 3.10.0-862.14.4.el7.x86_64. To verify, execute:
# uname -r
Reboot the node for the latest kernel update to take effect.
3.2.1. Dynamic Provisioning of Volumes for Block Storage
Dynamic provisioning enables you to provision a Red Hat Gluster Storage volume to a running application container without pre-creating the volume. The volume will be created dynamically as the claim request comes in, and a volume of exactly the same size will be provisioned to the application containers.
The steps outlined below are not necessary if OpenShift Container Storage was deployed using the (default) Ansible installer and the default storage class (glusterfs-storage-block) created during the installation is used.
3.2.1.1. Configuring Dynamic Provisioning of Volumes
To configure dynamic provisioning of volumes, the administrator must define StorageClass objects that describe named "classes" of storage offered in a cluster. After creating a Storage Class, a secret for heketi authentication must be created before proceeding with the creation of persistent volume claim.
3.2.1.1.1. Configuring Multipathing on all Initiators
To ensure the iSCSI initiator can communicate with the iSCSI targets and achieve HA using multipathing, execute the following steps on all the OpenShift nodes (iSCSI initiator) where the app pods are hosted:
To install initiator related packages on all the nodes where initiator has to be configured, execute the following command:
# yum install iscsi-initiator-utils device-mapper-multipath
To enable multipath, execute the following command:
# mpathconf --enable
Create and add the following content to the multipath.conf file:
Note: In case of upgrades, make sure that the changes to multipath.conf and reloading of multipathd are done only after all the server nodes are upgraded.
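One possible configuration for LIO-backed gluster-block targets is sketched below; treat it as an assumption and verify the exact options against the template shipped with your release:
# LIO iSCSI
devices {
        device {
                vendor "LIO-ORG"
                user_friendly_names "yes" # names like mpatha
                path_grouping_policy "failover" # one path per group
                hardware_handler "1 alua"
                path_selector "round-robin 0"
                failback immediate
                path_checker "tur"
                prio "alua"
                no_path_retry 120
        }
}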
Execute the following commands to start multipath daemon and [re]load the multipath configuration:
# systemctl start multipathd
# systemctl reload multipathd
3.2.1.1.2. Creating Secret for Heketi Authentication
To create a secret for Heketi authentication, execute the following commands:
If the admin-key value (secret to access heketi to get the volume details) was not set during the deployment of Red Hat Openshift Container Storage, then the following steps can be omitted.
Create an encoded value for the password by executing the following command:
# echo -n "<key>" | base64
where key is the value for admin-key that was created while deploying Red Hat Openshift Container Storage.
For example:
# echo -n "mypassword" | base64
bXlwYXNzd29yZA==
Create a secret file. A sample secret file is provided below:
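A minimal sketch of the secret file for the block provisioner; the namespace is assumed to be default and the type reflects the glusterblock provisioner rather than the file provisioner:
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
data:
  key: bXlwYXNzd29yZA==
type: gluster.org/glusterblock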
Register the secret on Openshift by executing the following command:
# oc create -f glusterfs-secret.yaml
secret "heketi-secret" created
3.2.1.1.3. Registering a Storage Class
When configuring a StorageClass object for persistent volume provisioning, the administrator must describe the type of provisioner to use and the parameters that will be used by the provisioner when it provisions a PersistentVolume belonging to the class.
Create a storage class. A sample storage class file is presented below:
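A minimal sketch of glusterfs-block-storageclass.yaml; the provisioner name must match your deployment (see the provisioner parameter below) and the remaining values are illustrative:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-block
provisioner: gluster.org/glusterblock-infra-storage
parameters:
  resturl: "http://heketi-storage-project.cloudapps.mystorage.com"
  restuser: "admin"
  restsecretnamespace: "default"
  restsecretname: "heketi-secret"
  hacount: "3"
  chapauthenabled: "true"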
where,
- provisioner
The provisioner name should match the provisioner name with which the glusterblock provisioner pod was deployed. To get the provisioner name, use the following command:
# oc describe pod <glusterblock_provisioner_pod_name> | grep PROVISIONER_NAME
For example:
# oc describe pod glusterblock-registry-provisioner-dc-1-5j8l9 | grep PROVISIONER_NAME
PROVISIONER_NAME: gluster.org/glusterblock-infra-storage
- resturl
- Gluster REST service/Heketi service URL that provisions gluster volumes on demand. The general format must be IPaddress:Port, and this is a mandatory parameter for the GlusterFS dynamic provisioner. If the Heketi service is exposed as a routable service in the OpenShift/Kubernetes setup, it can have a format similar to http://heketi-storage-project.cloudapps.mystorage.com, where the FQDN is a resolvable Heketi service URL.
- restuser
- Gluster REST service/Heketi user who has access to create volumes in the trusted storage pool
- restsecretnamespace + restsecretname
- Identification of the Secret instance that contains the user password to use when talking to the Gluster REST service. These parameters are optional. An empty password will be used when both restsecretnamespace and restsecretname are omitted.
- hacount
- It is the count of the number of paths to the block target server. hacount provides high availability via the multipathing capability of iSCSI. If there is a path failure, the I/Os will not be interrupted and will be served via another available path.
- clusterids
It is the ID of the cluster which will be used by Heketi when provisioning the volume. It can also be a list of comma-separated cluster IDs. This is an optional parameter.
Note: To get the cluster ID, execute the following command:
# heketi-cli cluster list
- chapauthenabled
- If you want to provision block volume with CHAP authentication enabled, this value has to be set to true. This is an optional parameter.
- volumenameprefix
This is an optional parameter. It adds a custom prefix to the name of the volume created by heketi. For more information, see Section 3.2.1.1.6, “(Optional) Providing a Custom Volume Name Prefix for Persistent Volumes”.
Note: The value for this parameter cannot contain _ in the storageclass.
To register the storage class to Openshift, execute the following command:
# oc create -f glusterfs-block-storageclass.yaml
storageclass "gluster-block" created
To get the details of the storage class, execute the following command:
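For example (assuming the storage class name registered above):
# oc describe storageclass gluster-block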
3.2.1.1.4. Creating a Persistent Volume Claim
To create a persistent volume claim execute the following commands:
Create a Persistent Volume Claim file. A sample persistent volume claim is provided below:
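A minimal sketch of glusterfs-block-pvc-claim.yaml, assuming the gluster-block storage class registered above and an illustrative 5Gi request (the optional persistentVolumeReclaimPolicy discussed below is shown as a comment):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim1
  annotations:
    volume.beta.kubernetes.io/storage-class: gluster-block
spec:
  # persistentVolumeReclaimPolicy: Retain   (optional; see the note below)
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi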
- persistentVolumeReclaimPolicy
This is an optional parameter. When this parameter is set to "Retain" the underlying persistent volume is retained even after the corresponding persistent volume claim is deleted.
Note: When the PVC is deleted, the underlying heketi and gluster volumes are not deleted if "persistentVolumeReclaimPolicy" is set to "Retain". To delete the volume, you must use the heketi CLI and then delete the PV.
Register the claim by executing the following command:
# oc create -f glusterfs-block-pvc-claim.yaml
persistentvolumeclaim "claim1" created
To get the details of the claim, execute the following command:
# oc describe pvc <claim_name>
For example:
3.2.1.1.5. Verifying Claim Creation
To verify if the claim is created, execute the following commands:
To get the details of the persistent volume claim and persistent volume, execute the following command:
To identify block volumes and block hosting volumes, refer to https://access.redhat.com/solutions/3897581.
3.2.1.1.6. (Optional) Providing a Custom Volume Name Prefix for Persistent Volumes
You can provide a custom volume name prefix to the persistent volume that is created. By providing a custom volume name prefix, users can now easily search/filter the volumes based on:
- Any string that was provided as the field value of "volnameprefix" in the storageclass file.
- Persistent volume claim name.
- Project / Namespace name.
To set the name, ensure that you have added the parameter volumenameprefix to the storage class file. For more information, refer to Section 3.2.1.1.3, “Registering a Storage Class”.
The value for this parameter cannot contain _ in the storageclass.
To verify if the custom volume name prefix is set, execute the following command:
# oc describe pv <pv_name>
For example:
The value for glusterBlockShare will have the custom volume name prefix attached to the namespace and the claim name, which is "test-vol" in this case.
3.2.1.1.7. Using the Claim in a Pod
Execute the following steps to use the claim in a pod.
To use the claim in the application, for example
# oc create -f app.yaml
pod "busybox" created
For more information about using the glusterfs claim in the application, see https://access.redhat.com/documentation/en-us/openshift_container_platform/3.11/html-single/configuring_clusters/#install-config-storage-examples-gluster-example.
To verify that the pod is created, execute the following command:
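For example, listing the pods in the current project:
# oc get pods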
To verify that the persistent volume is mounted inside the container, execute the following command:
# oc rsh busybox
3.2.1.1.8. Deleting a Persistent Volume Claim
If the "persistentVolumeReclaimPolicy" parameter was set to "Retain" when registering the storageclass, the underlying PV and the corresponding volume remain even when a PVC is deleted.
To delete a claim, execute the following command:
# oc delete pvc <claim-name>
For example:
# oc delete pvc claim1
persistentvolumeclaim "claim1" deleted
To verify if the claim is deleted, execute the following command:
# oc get pvc <claim-name>
For example:
# oc get pvc claim1
No resources found.
When the user deletes a persistent volume claim that is bound to a persistent volume created by dynamic provisioning, apart from deleting the persistent volume claim, Kubernetes will also delete the persistent volume, endpoints, service, and the actual volume. Execute the following commands to verify this:
To verify if the persistent volume is deleted, execute the following command:
# oc get pv <pv-name>
For example:
# oc get pv pvc-962aa6d1-bddb-11e6-be23-5254009fc65b
No resources found.
Next step: If you are installing Red Hat Openshift Container Storage 3.11, and you want to use block storage as the backend storage for logging and metrics, proceed to Chapter 7, Gluster Block Storage as Backend for Logging and Metrics.
3.2.2. Replacing a node on Block Storage
If you want to replace a block on a node that is out of resources or is faulty, the node can be replaced with a new node.
Execute the following commands:
Execute the following command to fetch the zone and cluster info from heketi
# heketi-cli topology info --user=<user> --secret=<user key>
- --user
- heketi user
- --secret
- Secret key for a specified user
- After obtaining the cluster id and zone id, refer to Adding New Nodes to add a new node.
Execute the following command to add the device
# heketi-cli device add --name=<device name> --node=<node id> --user=<user> --secret=<user key>
- --name
- Name of device to add
- --node
- Newly added node id
For example:
# heketi-cli device add --name=/dev/vdc --node=2639c473a2805f6e19d45997bb18cb9c --user=admin --secret=adminkey
Device added successfully
After the new node and its associated devices are added to heketi, the faulty or unwanted node can be removed from heketi.
To remove any node from heketi, follow this workflow:
- node disable (Disallow usage of a node by placing it offline)
- node replace (Removes a node and all its associated devices from Heketi)
- device delete (Deletes a device from Heketi node)
- node delete (Deletes a node from Heketi management)
Execute the following command to fetch the node list from heketi
# heketi-cli node list --user=<user> --secret=<user key>
For example:
# heketi-cli node list --user=admin --secret=adminkey
Id:05746c562d6738cb5d7de149be1dac04     Cluster:607204cb27346a221f39887a97cf3f90
Id:ab37fc5aabbd714eb8b09c9a868163df     Cluster:607204cb27346a221f39887a97cf3f90
Id:c513da1f9bda528a9fd6da7cb546a1ee     Cluster:607204cb27346a221f39887a97cf3f90
Id:e6ab1fe377a420b8b67321d9e60c1ad1     Cluster:607204cb27346a221f39887a97cf3f90
Execute the following command to fetch the node info of the node that has to be deleted from heketi:
# heketi-cli node info <nodeid> --user=<user> --secret=<user key>
For example:
Execute the following command to disable the node from heketi. This makes the node go offline:
# heketi-cli node disable <node-id> --user=<user> --secret=<user key>
For example:
# heketi-cli node disable ab37fc5aabbd714eb8b09c9a868163df --user=admin --secret=adminkey
Node ab37fc5aabbd714eb8b09c9a868163df is now offline
Execute the following command to remove a node and all its associated devices from Heketi:
# heketi-cli node remove <node-id> --user=<user> --secret=<user key>
For example:
# heketi-cli node remove ab37fc5aabbd714eb8b09c9a868163df --user=admin --secret=adminkey
Node ab37fc5aabbd714eb8b09c9a868163df is now removed
Execute the following command to delete the devices from heketi node:
# heketi-cli device delete <device-id> --user=<user> --secret=<user key>
For example:
# heketi-cli device delete 0fca78c3a94faabfbe5a5a9eef01b99c --user=admin --secret=adminkey
Device 0fca78c3a94faabfbe5a5a9eef01b99c deleted
Execute the following command to delete a node from Heketi management:
# heketi-cli node delete <nodeid> --user=<user> --secret=<user key>
For example:
# heketi-cli node delete ab37fc5aabbd714eb8b09c9a868163df --user=admin --secret=adminkey
Node ab37fc5aabbd714eb8b09c9a868163df deleted
Execute the following commands on any one of the gluster pods to replace the faulty node with the new node:
Execute the following command to get a list of block volumes hosted under block-hosting-volume:
# gluster-block list <block-hosting-volume> --json-pretty
Execute the following command to get the list of servers that are hosting the block volume. Also save the GBID and PASSWORD values for later use:
# gluster-block info <block-hosting-volume>/<block-volume> --json-pretty
Execute the following command to replace the faulty node with the new node:
# gluster-block replace <volname/blockname> <old-node> <new-node> [force]
For example:
Note: The next steps are to be executed only if the block that is to be replaced is still in use.
Skip this step if the block volume is not currently mounted. If the block volume is in use by the application, reload the mapper device on the initiator side.
Identify the initiator node and targetname:
To find initiator node:
# oc get pods -o wide | grep <podname>
where podname is the name of the pod on which the blockvolume is mounted.
For example
# oc get pods -o wide | grep cirros1
cirros1-1-x6b5n   1/1       Running   0          1h        10.130.0.5   dhcp46-31.lab.eng.blr.redhat.com   <none>
To find the targetname:
# oc describe pv <pv_name> | grep IQN
For example:
# oc describe pv pvc-c50c69db-5f76-11ea-b27b-005056b253d1 | grep IQN
IQN:    iqn.2016-12.org.gluster-block:87ffbcf3-e21e-4fa5-bd21-7db2598e8d3f
Execute the following command on the initiator node to find the mapper device:
# mount | grep <targetname>
Reload the mapper device:
# multipath -r mpathX
For example:
# mount | grep iqn.2016-12.org.gluster-block:d6d18f43-8a74-4b2c-a5b7-df1fa3f5bc9a
/dev/mapper/mpatha on /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/iscsi/iface-default/192.168.124.63:3260-iqn.2016-12.org.gluster-block:d6d18f43-8a74-4b2c-a5b7-df1fa3f5bc9a-lun-0 type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
# multipath -r mpatha
Log out of the old portal by executing the following command on the initiator:
# iscsiadm -m node -T <targetname> -p <old node> -u
For example:
# iscsiadm -m node -T iqn.2016-12.org.gluster-block:d6d18f43-8a74-4b2c-a5b7-df1fa3f5bc9a -p 192.168.124.63 -u
Logging out of session [sid: 8, target: iqn.2016-12.org.gluster-block:d6d18f43-8a74-4b2c-a5b7-df1fa3f5bc9a, portal: 192.168.124.63,3260]
Logout of [sid: 8, target: iqn.2016-12.org.gluster-block:d6d18f43-8a74-4b2c-a5b7-df1fa3f5bc9a, portal: 192.168.124.63,3260] successful.
To re-discover the new node execute the following command:
# iscsiadm -m discovery -t st -p <new node>
For example:
# iscsiadm -m discovery -t st -p 192.168.124.73
192.168.124.79:3260,1 iqn.2016-12.org.gluster-block:d6d18f43-8a74-4b2c-a5b7-df1fa3f5bc9a
192.168.124.73:3260,2 iqn.2016-12.org.gluster-block:d6d18f43-8a74-4b2c-a5b7-df1fa3f5bc9a
Log in to the new portal by executing the following:
Update the authentication credentials (use the GBID and PASSWORD values obtained from the gluster-block info command above):
# iscsiadm -m node -T <targetname> -o update -n node.session.auth.authmethod -v CHAP -n node.session.auth.username -v <GBID> -n node.session.auth.password -v <PASSWORD> -p <new node ip>
Log in to the new portal:
# iscsiadm -m node -T <targetname> -p <new node ip> -l
For example:
# iscsiadm -m node -T iqn.2016-12.org.gluster-block:d6d18f43-8a74-4b2c-a5b7-df1fa3f5bc9a -o update -n node.session.auth.authmethod -v CHAP -n node.session.auth.username -v d6d18f43-8a74-4b2c-a5b7-df1fa3f5bc9a -n node.session.auth.password -v a6a9081f-3d0d-4e8b-b9b0-d2be703b455d -p 192.168.124.73
# iscsiadm -m node -T iqn.2016-12.org.gluster-block:d6d18f43-8a74-4b2c-a5b7-df1fa3f5bc9a -p 192.168.124.73 -l
To verify if the enabled hosting volume is replaced and running successfully, execute the following command on the initiator:
# ll /dev/disk/by-path/ip-* | grep <targetname> | grep <"new node ip">
Ensure that you update a gluster block Persistent Volume (PV) with the new IP address.
The PVs are immutable by definition, so you cannot edit the PV, which means that you cannot change the old IP address on the PV. A document was created with the procedure to work around this issue (recreate a new PV and delete the old PV definition using the same data/underlying storage device); see the Red Hat Knowledgebase solution Gluster block PVs are not updated with new IPs after gluster node replacement.
3.2.3. Block volume expansion
You can expand the block persistent volume claim to increase the amount of storage on the application pods. There are two ways to do this: offline resizing and online resizing.
3.2.3.1. Offline resizing
Ensure that the block hosting volume has sufficient size before expanding the block PVC.
To get the Heketi block volume ID of the PVC, execute the following command on the primary OCP node:
# oc get pv $(oc get pvc <PVC-NAME> --no-headers -o=custom-columns=:.spec.volumeName) -o=custom-columns=:.metadata.annotations."gluster\.org/volume-id"
To get the block volume ID, execute the following command:
# heketi-cli blockvolume info <block-volume-id>
To get the block hosting volume information, execute the following command:
# heketi-cli volume info <block-hosting-volume-id>
Note: Ensure that you have sufficient free size.
- Bring down the application pod.
To expand the block volume through heketi-cli, execute the following command:
# heketi-cli blockvolume expand <block-volume-id> --new-size=<net-new-size>
Note: Ensure that Size and UsableSize match in the output of the expand command. Steps 4 to 8 can be executed only when Size and UsableSize match.
Replace PVC-NAME with the name of your PVC and create a job to refresh the block volume size.
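A minimal, hypothetical sketch of such a job is shown here. It assumes a RHEL-based image that provides xfs_growfs and df and a container that runs with sufficient privileges to grow the mounted filesystem; the image and command are assumptions to adapt to your environment, not the definition shipped with the product:
# oc create -f - <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: refresh-block-size
spec:
  completions: 1
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: refresh-block-size
        # Assumption: any image that ships xfs_growfs and df can be used here.
        image: registry.access.redhat.com/rhel7/rhel-tools
        # Grow the XFS filesystem on the expanded block device, then print the
        # new size so that it appears in the pod logs.
        command: ["sh", "-c", "xfs_growfs /mnt; df -Th /mnt"]
        volumeMounts:
        - name: block-pvc
          mountPath: /mnt
      volumes:
      - name: block-pvc
        persistentVolumeClaim:
          claimName: <PVC-NAME>
EOF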
To verify the new size in the logs of the pod, execute the following command:
# oc logs refresh-block-size-xxxxx
Note: Ensure that the df -Th output after xfs_growfs reflects the new size. For example:
# oc logs refresh-block-size-jcbzh
# df -Th /mnt
Filesystem           Type   Size   Used  Avail  Use%  Mounted on
/dev/mapper/mpatha   xfs    5.0G   33M   5.0G   1%    /mnt
# df -Th /mnt
Filesystem           Type   Size   Used  Avail  Use%  Mounted on
/dev/mapper/mpatha   xfs    7.0G   34M   6.0G   1%    /mnt
To check the success of the job, execute the following command:
# oc get jobs
NAME                 DESIRED   SUCCESSFUL   AGE
refresh-block-size   1         1            36m
To delete the job once it is successful, execute the following command:
# oc delete job refresh-block-size
job.batch "refresh-block-size" deleted
- You can use the new size after bringing up your application pod.
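For example, assuming the same placeholder DeploymentConfig that was scaled down earlier, bring the application pod back up:
# oc scale dc/<dc-name> --replicas=1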
3.2.3.2. Online resizing
Ensure that the block hosting volume has sufficient free size before expanding the block PVC.
To get the Heketi block volume ID of the PVC, execute the following command on the primary OCP node:
# oc get pv $(oc get pvc <PVC-NAME> --no-headers -o=custom-columns=:.spec.volumeName) -o=custom-columns=:.metadata.annotations."gluster\.org/volume-id"
To get the block volume information, execute the following command:
# heketi-cli blockvolume info <block-volume-id>
To get the block hosting volume information, execute the following command:
# heketi-cli volume info <block-hosting-volume-id>
Note: Ensure that the block hosting volume has sufficient free space.
To expand the block volume through heketi-cli, execute the following command:
# heketi-cli blockvolume expand <BLOCK-VOLUME-ID> --new-size=<net-new-size>
Note: Ensure that Size and UsableSize match in the output of the expand command. Steps 3 to 9 can be executed only when Size and UsableSize match.
To get the iSCSI target IQN name mapped to the PV, execute the following command and make a note of it for further reference:
# oc get pv <PV-NAME> -o=custom-columns=:.spec.iscsi.iqn
For example:
# oc get pv pvc-fc3e9160-aaf9-11ea-a29f-005056b781de -o=custom-columns=:.spec.iscsi.iqn
iqn.2016-12.org.gluster-block:8ce8eb4c-4951-4777-9b42-244b7ea525cd
Log in to the host node of the application pod.
To get the node name of the application pod, execute the following command:
# oc get pods <POD-NAME> -o=custom-columns=:.spec.nodeName
For example:
# oc get pods cirros2-1-8x6w5 -o=custom-columns=:.spec.nodeName
dhcp53-203.lab.eng.blr.redhat.com
To log in to the host node of the application pod, execute the following command:
# ssh <NODE-NAME>
For example:
# ssh dhcp53-203.lab.eng.blr.redhat.com
Copy the multipath mapper device name (for example, mpatha) and the current sizes of the individual paths (for example, sdd, sde, and sdf) and of the mapper device for further reference.
# lsblk | grep -B1 <pv-name>
You can use the IQN name from step 3 to rescan the devices on the host node of the application pod (which is the iSCSI initiator) by executing the following command:
# iscsiadm -m node -R -T <iqn-name>
For example:
# iscsiadm -m node -R -T iqn.2016-12.org.gluster-block:a951f673-1a17-47b8-ac02-197baa32b9b1
Rescanning session [sid: 1, target:iqn.2016-12.org.gluster-block:a951f673-1a17-47b8-ac02-197baa32b9b1, portal: 192.168.124.80,3260]
Rescanning session [sid: 2, target:iqn.2016-12.org.gluster-block:a951f673-1a17-47b8-ac02-197baa32b9b1, portal: 192.168.124.73,3260]
Rescanning session [sid: 3, target:iqn.2016-12.org.gluster-block:a951f673-1a17-47b8-ac02-197baa32b9b1, portal: 192.168.124.63,3260]
Note: You should now see the new size reflected at the individual paths (sdd, sde, and sdf):
# lsblk | grep -B1 <pv-name>
To refresh the multipath device size, execute the following commands:
- Get the multipath mapper device name from step 6, from the lsblk output. To refresh the multipath mapper device, execute the following command:
# multipathd -k'resize map <multipath-mapper-name>'
For example:
# multipathd -k'resize map mpatha'
Ok
Note: You should now see the new size reflected on the mapper device mpatha. Copy the mount point path from the following command output for further reference:
# lsblk | grep -B1 <PV-NAME>
# df -Th | grep <pv-name>
For example:
# df -Th | grep pvc-fc3e9160-aaf9-11ea-a29f-005056b781de
/dev/mapper/mpatha   xfs   6.0G   44M   6.0G   1%   /var/lib/origin/openshift.local.volumes/pods/44b76db5-afa2-11ea-a29f-005056b781de/volumes/kubernetes.io~iscsi/pvc-fc3e9160-aaf9-11ea-a29f-005056b781de
To grow the file system layout, execute the following commands:
# xfs_growfs <mount-point>
For example:
# xfs_growfs /var/lib/origin/openshift.local.volumes/pods/44b76db5-afa2-11ea-a29f-005056b781de/volumes/kubernetes.io~iscsi/pvc-fc3e9160-aaf9-11ea-a29f-005056b781de
# df -Th | grep <pv-name>
For example:
# df -Th | grep pvc-fc3e9160-aaf9-11ea-a29f-005056b781de
/dev/mapper/mpatha   xfs   7.0G   44M   7.0G   1%   /var/lib/origin/openshift.local.volumes/pods/44b76db5-afa2-11ea-a29f-005056b781de/volumes/kubernetes.io~iscsi/pvc-fc3e9160-aaf9-11ea-a29f-005056b781de
- You can now use the new size without restarting the application pod.