Chapter 3. Creating Persistent Volumes
OpenShift Container Platform clusters can be provisioned with persistent storage using GlusterFS.
Persistent volumes (PVs) and persistent volume claims (PVCs) can share volumes across a single project. While the GlusterFS-specific information contained in a PV definition could also be defined directly in a pod definition, doing so does not create the volume as a distinct cluster resource, making the volume more susceptible to conflicts.
Binding PVs by Labels and Selectors
Labels are an OpenShift Container Platform feature that supports user-defined tags (key-value pairs) as part of an object’s specification. Their primary purpose is to enable the arbitrary grouping of objects by defining identical labels among them. These labels can then be targeted by selectors to match all objects with specified label values. It is this functionality we will take advantage of to enable our PVC to bind to our PV.
You can use labels to identify common attributes or characteristics shared among volumes. For example, you can define the gluster volume to have a custom attribute (key) named storage-tier with a value of gold assigned. A claim will be able to select a PV with storage-tier=gold to match this PV.
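For example, the fragments that matter for this binding look like the following (the complete PV and PVC definitions appear later in this chapter):

# Fragment of the PV definition: the label that marks the volume
metadata:
  labels:
    storage-tier: gold

# Fragment of the PVC definition: the selector that targets that label
spec:
  selector:
    matchLabels:
      storage-tier: gold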
More details about provisioning volumes for file-based storage are provided in Section 3.1, “File Storage”. Similarly, further details about provisioning volumes for block-based storage are provided in Section 3.2, “Block Storage”.
3.1. File Storage
File storage, also called file-level or file-based storage, stores data in a hierarchical structure. The data is saved in files and folders, and presented to both the system storing it and the system retrieving it in the same format. You can provision volumes either statically or dynamically for file-based storage.
3.1.1. Static Provisioning of Volumes
To enable persistent volume support in OpenShift and Kubernetes, a few endpoints and a service must be created.
The following steps are not required if OpenShift Container Storage was deployed using the (default) Ansible installer.
The sample glusterfs endpoints file (sample-gluster-endpoints.yaml) and the sample glusterfs service file (sample-gluster-service.yaml) are available in the /usr/share/heketi/templates/ directory.
The sample endpoints and services files are not available for Ansible deployments, because the /usr/share/heketi/templates/ directory is not created for such deployments.
Copy the sample glusterfs endpoints file and the sample glusterfs service file to a location of your choice, and then edit the copied files. For example:
# cp /usr/share/heketi/templates/sample-gluster-endpoints.yaml /<path>/gluster-endpoints.yaml
Update the copied endpoints file with the endpoints to be created, based on your environment. Each Red Hat Gluster Storage trusted storage pool requires its own endpoint with the IP addresses of the nodes in the trusted storage pool.
# cat sample-gluster-endpoints.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
  - addresses:
      - ip: 192.168.10.100
    ports:
      - port: 1
  - addresses:
      - ip: 192.168.10.101
    ports:
      - port: 1
  - addresses:
      - ip: 192.168.10.102
    ports:
      - port: 1
- name
- The name of the endpoint.
- ip
- The IP address of the Red Hat Gluster Storage nodes.
Execute the following command to create the endpoints:
# oc create -f <name_of_endpoint_file>
For example:
# oc create -f sample-gluster-endpoints.yaml
endpoints "glusterfs-cluster" created
To verify that the endpoints are created, execute the following command:
# oc get endpoints
For example:
# oc get endpoints
NAME                       ENDPOINTS                                                      AGE
storage-project-router     192.168.121.233:80,192.168.121.233:443,192.168.121.233:1936   2d
glusterfs-cluster          192.168.121.168:1,192.168.121.172:1,192.168.121.233:1         3s
heketi                     10.1.1.3:8080                                                  2m
heketi-storage-endpoints   192.168.121.168:1,192.168.121.172:1,192.168.121.233:1         3m
Execute the following command to create a gluster service:
# oc create -f <name_of_service_file>
For example:
# cat sample-gluster-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster
spec:
  ports:
    - port: 1
# oc create -f sample-gluster-service.yaml
service "glusterfs-cluster" created
To verify that the service is created, execute the following command:
# oc get service
For example:
# oc get service
NAME                       CLUSTER-IP      EXTERNAL-IP   PORT(S)                   AGE
storage-project-router     172.30.94.109   <none>        80/TCP,443/TCP,1936/TCP   2d
glusterfs-cluster          172.30.212.6    <none>        1/TCP                     5s
heketi                     172.30.175.7    <none>        8080/TCP                  2m
heketi-storage-endpoints   172.30.18.24    <none>        1/TCP                     3m
Note: The endpoints and the services must be created for each project that requires persistent storage.
Create a 100G persistent volume with Replica 3 from GlusterFS and output a persistent volume specification describing this volume to the file pv001.json:
$ heketi-cli volume create --size=100 --persistent-volume-file=pv001.json
cat pv001.json
{
  "kind": "PersistentVolume",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-f8c612ee",
    "creationTimestamp": null
  },
  "spec": {
    "capacity": {
      "storage": "100Gi"
    },
    "glusterfs": {
      "endpoints": "TYPE ENDPOINT HERE",
      "path": "vol_f8c612eea57556197511f6b8c54b6070"
    },
    "accessModes": [
      "ReadWriteMany"
    ],
    "persistentVolumeReclaimPolicy": "Retain"
  },
  "status": {}
}
Important: You must manually add the Labels information to the .json file.
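For example, with the storage-tier: gold label used in this chapter, the metadata section of pv001.json after editing looks like this:

  "metadata": {
    "name": "glusterfs-f8c612ee",
    "creationTimestamp": null,
    "labels": {
      "storage-tier": "gold"
    }
  },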
Following is the example YAML file for reference:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-storage-project-glusterfs1
  labels:
    storage-tier: gold
spec:
  capacity:
    storage: 12Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  glusterfs:
    endpoints: TYPE ENDPOINTS NAME HERE
    path: vol_e6b77204ff54c779c042f570a71b1407
- name
- The name of the volume.
- storage
- The amount of storage allocated to this volume
- glusterfs
- The volume type being used, in this case the glusterfs plug-in
- endpoints
- The endpoints name that defines the trusted storage pool created
- path
- The Red Hat Gluster Storage volume that will be accessed from the Trusted Storage Pool.
- accessModes
- accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control.
- labels
- Use labels to identify common attributes or characteristics shared among volumes. In this case, we have defined the gluster volume to have a custom attribute (key) named storage-tier with a value of gold assigned. A claim will be able to select a PV with storage-tier=gold to match this PV.
Note:
- heketi-cli also accepts the endpoint name on the command line (--persistent-volume-endpoint="TYPE ENDPOINT HERE"). This can then be piped to oc create -f - to create the persistent volume immediately.
- If there are multiple Red Hat Gluster Storage trusted storage pools in your environment, you can check on which trusted storage pool the volume is created by using the heketi-cli volume list command. This command lists the cluster name. You can then update the endpoint information in the pv001.json file accordingly.
- When creating a Heketi volume with only two nodes and the replica count set to the default value of three (replica 3), Heketi displays a "No space" error because there is no space to create a replica set of three disks on three different nodes.
- If all heketi-cli write operations (for example, volume create, cluster create) fail and the read operations (for example, topology info, volume info) succeed, it is likely that the gluster volume is operating in read-only mode.
Edit the pv001.json file and enter the name of the endpoints in the endpoints section:
cat pv001.json
{
  "kind": "PersistentVolume",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-f8c612ee",
    "creationTimestamp": null,
    "labels": {
      "storage-tier": "gold"
    }
  },
  "spec": {
    "capacity": {
      "storage": "12Gi"
    },
    "glusterfs": {
      "endpoints": "glusterfs-cluster",
      "path": "vol_f8c612eea57556197511f6b8c54b6070"
    },
    "accessModes": [
      "ReadWriteMany"
    ],
    "persistentVolumeReclaimPolicy": "Retain"
  },
  "status": {}
}
Create a persistent volume by executing the following command:
# oc create -f pv001.json
For example:
# oc create -f pv001.json
persistentvolume "glusterfs-4fc22ff9" created
To verify that the persistent volume is created, execute the following command:
# oc get pv
For example:
# oc get pv
NAME                 CAPACITY   ACCESSMODES   STATUS      CLAIM   REASON   AGE
glusterfs-4fc22ff9   100Gi      RWX           Available                    4s
Create a persistent volume claim file. For example:
# cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfs-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  selector:
    matchLabels:
      storage-tier: gold
Bind the persistent volume to the persistent volume claim by executing the following command:
# oc create -f pvc.yaml
For example:
# oc create -f pvc.yaml
persistentvolumeclaim "glusterfs-claim" created
To verify that the persistent volume and the persistent volume claim are bound, execute the following commands:
# oc get pv
# oc get pvc
For example:
# oc get pv
NAME                 CAPACITY   ACCESSMODES   STATUS   CLAIM                             REASON   AGE
glusterfs-4fc22ff9   100Gi      RWX           Bound    storage-project/glusterfs-claim            1m
# oc get pvc
NAME              STATUS   VOLUME               CAPACITY   ACCESSMODES   AGE
glusterfs-claim   Bound    glusterfs-4fc22ff9   100Gi      RWX           11s
The claim can now be used in the application. For example:
# cat app.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
    - image: busybox
      command:
        - sleep
        - "3600"
      name: busybox
      volumeMounts:
        - mountPath: /usr/share/busybox
          name: mypvc
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: glusterfs-claim
# oc create -f app.yaml
pod "busybox" created
For more information about using the glusterfs claim in the application see, https://access.redhat.com/documentation/en-us/openshift_container_platform/3.11/html-single/configuring_clusters/#install-config-storage-examples-gluster-example.
To verify that the pod is created, execute the following command:
# oc get pods -n <storage_project_name>
For example:
# oc get pods -n storage-project
NAME                               READY   STATUS    RESTARTS   AGE
block-test-router-1-deploy         0/1     Running   0          4h
busybox                            1/1     Running   0          43s
glusterblock-provisioner-1-bjpz4   1/1     Running   0          4h
glusterfs-7l5xf                    1/1     Running   0          4h
glusterfs-hhxtk                    1/1     Running   3          4h
glusterfs-m4rbc                    1/1     Running   0          4h
heketi-1-3h9nb                     1/1     Running   0          4h
To verify that the persistent volume is mounted inside the container, execute the following command:
# oc rsh busybox
/ $ df -h
Filesystem                                            Size      Used Available Use% Mounted on
/dev/mapper/docker-253:0-1310998-81732b5fd87c197f627a24bcd2777f12eec4ee937cc2660656908b2fa6359129
                                                    100.0G     34.1M     99.9G   0% /
tmpfs                                                 1.5G         0      1.5G   0% /dev
tmpfs                                                 1.5G         0      1.5G   0% /sys/fs/cgroup
192.168.121.168:vol_4fc22ff934e531dec3830cfbcad1eeae
                                                     99.9G     66.1M     99.9G   0% /usr/share/busybox
tmpfs                                                 1.5G         0      1.5G   0% /run/secrets
/dev/mapper/vg_vagrant-lv_root                       37.7G      3.8G     32.0G  11% /dev/termination-log
tmpfs                                                 1.5G     12.0K      1.5G   0% /var/run/secrets/kubernetes.io/serviceaccount
If you encounter a permission denied error on the mount point, then refer to section Gluster Volume Security at: https://access.redhat.com/documentation/en-us/openshift_container_platform/3.11/html-single/configuring_clusters/#install-config-storage-examples-gluster-example.
3.1.2. Dynamic Provisioning of Volumes
Dynamic provisioning enables you to provision a Red Hat Gluster Storage volume to a running application container without pre-creating the volume. The volume will be created dynamically as the claim request comes in, and a volume of exactly the same size will be provisioned to the application containers.
The steps outlined below are not necessary when OpenShift Container Storage was deployed using the (default) Ansible installer, in which case the default storage class (glusterfs-storage) created during the installation is used.
3.1.2.1. Configuring Dynamic Provisioning of Volumes
To configure dynamic provisioning of volumes, the administrator must define StorageClass objects that describe named "classes" of storage offered in a cluster. After creating a storage class, a secret for heketi authentication must be created before proceeding with the creation of a persistent volume claim.
3.1.2.1.1. Creating Secret for Heketi Authentication
To create a secret for Heketi authentication, execute the following commands:
If the admin-key value (the secret used to access heketi to get the volume details) was not set during the deployment of Red Hat Openshift Container Storage, the following steps can be omitted.
Create an encoded value for the password by executing the following command:
# echo -n "<key>" | base64
where "key" is the value for "admin-key" that was created while deploying Red Hat Openshift Container Storage.
For example:
# echo -n "mypassword" | base64 bXlwYXNzd29yZA==
Create a secret file. A sample secret file is provided below:
# cat glusterfs-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
data:
  # base64 encoded password. E.g.: echo -n "mypassword" | base64
  key: bXlwYXNzd29yZA==
type: kubernetes.io/glusterfs
Register the secret on Openshift by executing the following command:
# oc create -f glusterfs-secret.yaml
secret "heketi-secret" created
3.1.2.1.2. Registering a Storage Class
When configuring a StorageClass object for persistent volume provisioning, the administrator must describe the type of provisioner to use and the parameters that will be used by the provisioner when it provisions a PersistentVolume belonging to the class.
To create a storage class execute the following command:
# cat > glusterfs-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-container
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Retain
parameters:
  resturl: "http://heketi-storage-project.cloudapps.mystorage.com"
  restuser: "admin"
  volumetype: "replicate:3"
  clusterid: "630372ccdc720a92c681fb928f27b53f,796e6db1981f369ea0340913eeea4c9a"
  secretNamespace: "default"
  secretName: "heketi-secret"
  volumeoptions: "client.ssl on, server.ssl on"
  volumenameprefix: "test-vol"
allowVolumeExpansion: true
where,
- resturl
- The Gluster REST service/Heketi service URL that provisions gluster volumes on demand. The general format must be IPaddress:Port, and this is a mandatory parameter for the GlusterFS dynamic provisioner. If the Heketi service is exposed as a routable service in the OpenShift/Kubernetes setup, this can have a format similar to http://heketi-storage-project.cloudapps.mystorage.com, where the FQDN is a resolvable Heketi service URL.
- restuser
- The Gluster REST service/Heketi user who has access to create volumes in the trusted storage pool.
- volumetype
It specifies the volume type that is being used.
Note: Distributed three-way replication is the only supported volume type. This includes both standard three-way replication volumes and arbiter 2+1.
- clusterid
It is the ID of the cluster which will be used by Heketi when provisioning the volume. It can also be a list of comma-separated cluster IDs. This is an optional parameter.
Note: To get the cluster ID, execute the following command:
# heketi-cli cluster list
- secretNamespace + secretName
Identification of the Secret instance that contains the user password to use when communicating with the Gluster REST service. These parameters are optional. An empty password is used when both secretNamespace and secretName are omitted.
Note: When persistent volumes are dynamically provisioned, the Gluster plug-in automatically creates an endpoint and a headless service named gluster-dynamic-<claimname>. This dynamic endpoint and service are deleted automatically when the persistent volume claim is deleted.
- volumeoptions
This is an optional parameter. It allows you to create glusterfs volumes with encryption enabled by setting the parameter to "client.ssl on, server.ssl on". For more information on enabling encryption, see Chapter 8, Enabling Encryption.
Note: Do not add this parameter to the storageclass if encryption is not enabled.
- volumenameprefix
This is an optional parameter. It sets a prefix for the name of the volume created by heketi. For more information, see Section 3.1.2.1.5, “(Optional) Providing a Custom Volume Name Prefix for Persistent Volumes”.
Note: The value for this parameter cannot contain _ in the storageclass.
- allowVolumeExpansion
- To increase the PV claim value, ensure that the allowVolumeExpansion parameter in the storageclass file is set to true. For more information, see Section 3.1.2.1.7, “Expanding Persistent Volume Claim”.
To register the storage class to Openshift, execute the following command:
# oc create -f glusterfs-storageclass.yaml
storageclass "gluster-container" created
To get the details of the storage class, execute the following command:
# oc describe storageclass gluster-container
Name: gluster-container
IsDefaultClass: No
Annotations: <none>
Provisioner: kubernetes.io/glusterfs
Parameters: resturl=http://heketi-storage-project.cloudapps.mystorage.com,restuser=admin,secretName=heketi-secret,secretNamespace=default
No events.
3.1.2.1.3. Creating a Persistent Volume Claim
To create a persistent volume claim execute the following commands:
Create a Persistent Volume Claim file. A sample persistent volume claim is provided below:
# cat glusterfs-pvc-claim1.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim1
  annotations:
    volume.beta.kubernetes.io/storage-class: gluster-container
spec:
  persistentVolumeReclaimPolicy: Retain
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
- persistentVolumeReclaimPolicy
This is an optional parameter. When this parameter is set to "Retain" the underlying persistent volume is retained even after the corresponding persistent volume claim is deleted.
Note: When the PVC is deleted, the underlying heketi and gluster volumes are not deleted if "persistentVolumeReclaimPolicy:" is set to "Retain". To delete the volume, you must use the heketi CLI and then delete the PV.
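For example, a minimal sketch of that cleanup, assuming the volume ID has been identified with heketi-cli volume list (the IDs and names below are placeholders):

# heketi-cli volume list                # identify the heketi volume backing the retained PV
# heketi-cli volume delete <volume-id>  # delete the heketi/gluster volume
# oc delete pv <pv-name>                # then remove the released PV object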
Register the claim by executing the following command:
# oc create -f glusterfs-pvc-claim1.yaml
persistentvolumeclaim "claim1" created
To get the details of the claim, execute the following command:
# oc describe pvc <_claim_name_>
For example:
# oc describe pvc claim1
Name:          claim1
Namespace:     default
StorageClass:  gluster-container
Status:        Bound
Volume:        pvc-54b88668-9da6-11e6-965e-54ee7551fd0c
Labels:        <none>
Capacity:      4Gi
Access Modes:  RWO
No events.
3.1.2.1.4. Verifying Claim Creation
To verify if the claim is created, execute the following commands:
To get the details of the persistent volume claim and persistent volume, execute the following command:
# oc get pv,pvc
NAME                                          CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS   CLAIM                    REASON   AGE
pv/pvc-962aa6d1-bddb-11e6-be23-5254009fc65b   4Gi        RWO           Delete          Bound    storage-project/claim1            3m

NAME         STATUS   VOLUME                                     CAPACITY   ACCESSMODES   AGE
pvc/claim1   Bound    pvc-962aa6d1-bddb-11e6-be23-5254009fc65b   4Gi        RWO           4m
To validate if the endpoint and the services are created as part of claim creation, execute the following command:
# oc get endpoints,service
NAME                          ENDPOINTS                                            AGE
ep/storage-project-router     192.168.68.3:443,192.168.68.3:1936,192.168.68.3:80   28d
ep/gluster-dynamic-claim1     192.168.68.2:1,192.168.68.3:1,192.168.68.4:1         5m
ep/heketi                     10.130.0.21:8080                                     21d
ep/heketi-storage-endpoints   192.168.68.2:1,192.168.68.3:1,192.168.68.4:1         25d

NAME                           CLUSTER-IP       EXTERNAL-IP   PORT(S)                   AGE
svc/storage-project-router     172.30.166.64    <none>        80/TCP,443/TCP,1936/TCP   28d
svc/gluster-dynamic-claim1     172.30.52.17     <none>        1/TCP                     5m
svc/heketi                     172.30.129.113   <none>        8080/TCP                  21d
svc/heketi-storage-endpoints   172.30.133.212   <none>        1/TCP                     25d
3.1.2.1.5. (Optional) Providing a Custom Volume Name Prefix for Persistent Volumes
You can provide a custom volume name prefix to the persistent volume that is created. By providing a custom volume name prefix, users can now easily search/filter the volumes based on:
- Any string that was provided as the value of the "volumenameprefix" parameter in the storageclass file.
- Persistent volume claim name.
- Project / Namespace name.
To set the name, ensure that you have added the parameter volumenameprefix to the storage class file. For more information, see Section 3.1.2.1.2, “Registering a Storage Class”.
The value for this parameter cannot contain _ in the storageclass.
To verify if the custom volume name prefix is set, execute the following command:
# oc describe pv <pv_name>
For example:
# oc describe pv pvc-f92e3065-25e8-11e8-8f17-005056a55501
Name:            pvc-f92e3065-25e8-11e8-8f17-005056a55501
Labels:          <none>
Annotations:     Description=Gluster-Internal: Dynamically provisioned PV
                 gluster.kubernetes.io/heketi-volume-id=027c76b24b1a3ce3f94d162f843529c8
                 gluster.org/type=file
                 kubernetes.io/createdby=heketi-dynamic-provisioner
                 pv.beta.kubernetes.io/gid=2000
                 pv.kubernetes.io/bound-by-controller=yes
                 pv.kubernetes.io/provisioned-by=kubernetes.io/glusterfs
                 volume.beta.kubernetes.io/mount-options=auto_unmount
StorageClass:    gluster-container-prefix
Status:          Bound
Claim:           glusterfs/claim1
Reclaim Policy:  Delete
Access Modes:    RWO
Capacity:        1Gi
Message:
Source:
    Type:           Glusterfs (a Glusterfs mount on the host that shares a pod's lifetime)
    EndpointsName:  glusterfs-dynamic-claim1
    Path:           test-vol_glusterfs_claim1_f9352e4c-25e8-11e8-b460-005056a55501
    ReadOnly:       false
Events:             <none>
The value of Path has the custom volume name prefix ("test-vol" in this case) prepended to the namespace and the claim name.
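Because the prefix is part of the gluster volume path, it can also be used to filter PVs; for example, an illustrative query (assuming file-type PVs provisioned from this storage class):

# oc get pv -o custom-columns=NAME:.metadata.name,PATH:.spec.glusterfs.path | grep test-vol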
3.1.2.1.6. Using the Claim in a Pod
Execute the following steps to use the claim in a pod.
To use the claim in the application, create an application file, for example:
# cat app.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
    - image: busybox
      command:
        - sleep
        - "3600"
      name: busybox
      volumeMounts:
        - mountPath: /usr/share/busybox
          name: mypvc
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: claim1
# oc create -f app.yaml
pod "busybox" created
For more information about using the glusterfs claim in the application see, https://access.redhat.com/documentation/en-us/openshift_container_platform/3.11/html-single/configuring_clusters/#install-config-storage-examples-gluster-example.
To verify that the pod is created, execute the following command:
# oc get pods -n storage-project
NAME                                READY   STATUS    RESTARTS   AGE
storage-project-router-1-at7tf      1/1     Running   0          13d
busybox                             1/1     Running   0          8s
glusterfs-dc-192.168.68.2-1-hu28h   1/1     Running   0          7d
glusterfs-dc-192.168.68.3-1-ytnlg   1/1     Running   0          7d
glusterfs-dc-192.168.68.4-1-juqcq   1/1     Running   0          13d
heketi-1-9r47c                      1/1     Running   0          13d
To verify that the persistent volume is mounted inside the container, execute the following command:
# oc rsh busybox
/ $ df -h Filesystem Size Used Available Use% Mounted on /dev/mapper/docker-253:0-666733-38050a1d2cdb41dc00d60f25a7a295f6e89d4c529302fb2b93d8faa5a3205fb9 10.0G 33.8M 9.9G 0% / tmpfs 23.5G 0 23.5G 0% /dev tmpfs 23.5G 0 23.5G 0% /sys/fs/cgroup /dev/mapper/rhgs-root 17.5G 3.6G 13.8G 21% /run/secrets /dev/mapper/rhgs-root 17.5G 3.6G 13.8G 21% /dev/termination-log /dev/mapper/rhgs-root 17.5G 3.6G 13.8G 21% /etc/resolv.conf /dev/mapper/rhgs-root 17.5G 3.6G 13.8G 21% /etc/hostname /dev/mapper/rhgs-root 17.5G 3.6G 13.8G 21% /etc/hosts shm 64.0M 0 64.0M 0% /dev/shm 192.168.68.2:vol_5b05cf2e5404afe614f8afa698792bae 4.0G 32.6M 4.0G 1% /usr/share/busybox tmpfs 23.5G 16.0K 23.5G 0% /var/run/secrets/kubernetes.io/serviceaccount tmpfs 23.5G 0 23.5G 0% /proc/kcore tmpfs 23.5G 0 23.5G 0% /proc/timer_stats
3.1.2.1.7. Expanding Persistent Volume Claim
To increase the PV claim value, ensure that the allowVolumeExpansion parameter in the storageclass file is set to true. For more information, refer to Section 3.1.2.1.2, “Registering a Storage Class”.
You can also resize a PV via the OpenShift Container Platform 3.11 Web Console.
To expand the persistent volume claim value, execute the following commands:
To check the existing persistent volume size, execute the following command on the app pod:
# oc rsh busybox
# df -h
For example:
# oc rsh busybox / # df -h Filesystem Size Used Available Use% Mounted on /dev/mapper/docker-253:0-100702042-0fa327369e7708b67f0c632d83721cd9a5b39fd3a7b3218f3ff3c83ef4320ce7 10.0G 34.2M 9.9G 0% / tmpfs 15.6G 0 15.6G 0% /dev tmpfs 15.6G 0 15.6G 0% /sys/fs/cgroup /dev/mapper/rhel_dhcp47--150-root 50.0G 7.4G 42.6G 15% /dev/termination-log /dev/mapper/rhel_dhcp47--150-root 50.0G 7.4G 42.6G 15% /run/secrets /dev/mapper/rhel_dhcp47--150-root 50.0G 7.4G 42.6G 15% /etc/resolv.conf /dev/mapper/rhel_dhcp47--150-root 50.0G 7.4G 42.6G 15% /etc/hostname /dev/mapper/rhel_dhcp47--150-root 50.0G 7.4G 42.6G 15% /etc/hosts shm 64.0M 0 64.0M 0% /dev/shm 10.70.46.177:test-vol_glusterfs_claim10_d3e15a8b-26b3-11e8-acdf-005056a55501 2.0G 32.6M 2.0G 2% /usr/share/busybox tmpfs 15.6G 16.0K 15.6G 0% /var/run/secrets/kubernetes.io/serviceaccount tmpfs 15.6G 0 15.6G 0% /proc/kcore tmpfs 15.6G 0 15.6G 0% /proc/timer_list tmpfs 15.6G 0 15.6G 0% /proc/timer_stats tmpfs 15.6G 0 15.6G 0% /proc/sched_debug tmpfs 15.6G 0 15.6G 0% /proc/scsi tmpfs 15.6G 0 15.6G 0% /sys/firmware
In this example the persistent volume size is 2Gi.
To edit the persistent volume claim value, execute the following command and edit the following storage parameter:
resources:
  requests:
    storage: <storage_value>
# oc edit pvc <claim_name>
For example, to expand the storage value to 20Gi:
# oc edit pvc claim3
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-class: gluster-container2
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/glusterfs
  creationTimestamp: 2018-02-14T07:42:00Z
  name: claim3
  namespace: storage-project
  resourceVersion: "283924"
  selfLink: /api/v1/namespaces/storage-project/persistentvolumeclaims/claim3
  uid: 8a9bb0df-115a-11e8-8cb3-005056a5a340
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  volumeName: pvc-8a9bb0df-115a-11e8-8cb3-005056a5a340
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 2Gi
  phase: Bound
To verify, execute the following command on the app pod:
# oc rsh busybox
/ # df -h
For example:
# oc rsh busybox # df -h Filesystem Size Used Available Use% Mounted on /dev/mapper/docker-253:0-100702042-0fa327369e7708b67f0c632d83721cd9a5b39fd3a7b3218f3ff3c83ef4320ce7 10.0G 34.2M 9.9G 0% / tmpfs 15.6G 0 15.6G 0% /dev tmpfs 15.6G 0 15.6G 0% /sys/fs/cgroup /dev/mapper/rhel_dhcp47--150-root 50.0G 7.4G 42.6G 15% /dev/termination-log /dev/mapper/rhel_dhcp47--150-root 50.0G 7.4G 42.6G 15% /run/secrets /dev/mapper/rhel_dhcp47--150-root 50.0G 7.4G 42.6G 15% /etc/resolv.conf /dev/mapper/rhel_dhcp47--150-root 50.0G 7.4G 42.6G 15% /etc/hostname /dev/mapper/rhel_dhcp47--150-root 50.0G 7.4G 42.6G 15% /etc/hosts shm 64.0M 0 64.0M 0% /dev/shm 10.70.46.177:test-vol_glusterfs_claim10_d3e15a8b-26b3-11e8-acdf-005056a55501 20.0G 65.3M 19.9G 1% /usr/share/busybox tmpfs 15.6G 16.0K 15.6G 0% /var/run/secrets/kubernetes.io/serviceaccount tmpfs 15.6G 0 15.6G 0% /proc/kcore tmpfs 15.6G 0 15.6G 0% /proc/timer_list tmpfs 15.6G 0 15.6G 0% /proc/timer_stats tmpfs 15.6G 0 15.6G 0% /proc/sched_debug tmpfs 15.6G 0 15.6G 0% /proc/scsi tmpfs 15.6G 0 15.6G 0% /sys/firmware
Observe that the size has changed from 2Gi to 20Gi.
3.1.2.1.8. Deleting a Persistent Volume Claim
If the "persistentVolumeReclaimPolicy" parameter was set to "Retain" when registering the storageclass, the underlying PV and the corresponding volume remains even when a PVC is deleted.
To delete a claim, execute the following command:
# oc delete pvc <claim-name>
For example:
# oc delete pvc claim1
persistentvolumeclaim "claim1" deleted
To verify if the claim is deleted, execute the following command:
# oc get pvc <claim-name>
For example:
# oc get pvc claim1
No resources found.
When the user deletes a persistent volume claim that is bound to a persistent volume created by dynamic provisioning, Kubernetes deletes not only the persistent volume claim but also the persistent volume, endpoints, service, and the underlying volume. To verify this, execute the following commands:
To verify if the persistent volume is deleted, execute the following command:
# oc get pv <pv-name>
For example:
# oc get pv pvc-962aa6d1-bddb-11e6-be23-5254009fc65b
No resources found.
To verify if the endpoints are deleted, execute the following command:
# oc get endpoints <endpointname>
For example:
# oc get endpoints gluster-dynamic-claim1
No resources found.
To verify if the service is deleted, execute the following command:
# oc get service <servicename>
For example:
# oc get service gluster-dynamic-claim1
No resources found.
3.1.3. Volume Security
Volumes come with a UID/GID of 0 (root). For an application pod to write to the volume, it must also have a UID/GID of 0 (root). With the volume security feature, the administrator can create a volume with a unique GID, and the application pod can write to the volume using this unique GID.
Volume security for statically provisioned volumes
To create a statically provisioned volume with a GID, execute the following command:
$ heketi-cli volume create --size=100 --persistent-volume-file=pv001.json --gid=590
In the above command, a 100G persistent volume with a GID of 590 is created and the output of the persistent volume specification describing this volume is added to the pv001.json file.
For more information about accessing the volume using this GID, see https://access.redhat.com/documentation/en-us/openshift_container_platform/3.11/html/configuring_clusters/persistent-storage-examples#install-config-storage-examples-gluster-example.
Volume security for dynamically provisioned volumes
Two new parameters, gidMin and gidMax, are introduced with the dynamic provisioner. These values allow the administrator to configure the GID range for the volume in the storage class. To set up the GID values and provide volume security for dynamically provisioned volumes, execute the following commands:
Create a storage class file with the GID values. For example:
# cat glusterfs-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-container
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-storage-project.cloudapps.mystorage.com"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  gidMin: "2000"
  gidMax: "4000"
Note: If the gidMin and gidMax values are not provided, the dynamically provisioned volumes will have a GID between 2000 and 2147483647.
- Create a persistent volume claim. For more information, see Section 3.1.2.1.3, “Creating a Persistent Volume Claim”.
- Use the claim in the pod. Ensure that this pod is non-privileged. For more information, see Section 3.1.2.1.6, “Using the Claim in a Pod”.
To verify if the GID is within the range specified, execute the following command:
# oc rsh busybox
$ id
For example:
$ id
uid=1000060000 gid=0(root) groups=0(root),2001
Here, 2001 in the above output is the allocated GID for the persistent volume, which is within the range specified in the storage class. You can write to this volume with the allocated GID.
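To confirm the write access in practice, a quick check from inside the pod might look like the following (the mount path matches the busybox example used earlier in this chapter):

$ touch /usr/share/busybox/gid-write-test && ls -l /usr/share/busybox/gid-write-test
$ rm /usr/share/busybox/gid-write-test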
Note: When the persistent volume claim is deleted, the GID of the persistent volume is released from the pool.
3.1.4. Device tiering in heketi
Heketi supports a simple tag-matching approach to use certain devices when placing a volume. The user is required to specify a key-value pair on a specific set of devices and create a new volume with the volume option user.heketi.device-tag-match and a simple matching rule.
Procedure
Apply the required tags on the heketi devices.
# heketi-cli device settags <device-name> <key>:<value>
Example :
# heketi-cli device settags 1fe1b83e5660efb53cc56433cedf7771 disktype:hdd
Remove the applied tag from the device.
# heketi-cli device rmtags <device-name> <key>
Example :
# heketi-cli device rmtags 1fe1b83e5660efb53cc56433cedf7771 disktype
Verify the added tag on the device.
# heketi-cli device info <device-name>
Example :
# heketi-cli device info 1fe1b83e5660efb53cc56433cedf7771
Example output :
Device Id: 1fe1b83e5660efb53cc56433cedf7771
State: online
Size (GiB): 49
Used (GiB): 41
Free (GiB): 8
Create Path: /dev/vdc
Physical Volume UUID: GpAnb4-gY8e-p5m9-0UU3-lV3J-zQWY-zFgO92
Known Paths: /dev/disk/by-id/virtio-bf48c436-04a9-48ed-9 /dev/disk/by-path/pci-0000:00:08.0 /dev/disk/by-path/virtio-pci-0000:00:08.0 /dev/vdc
Tags:
  disktype: hdd    ---> added tag
Use tagged devices to create the volume.
# heketi-cli volume create --size=<size in GiB> --gluster-volume-options 'user.heketi.device-tag-match <key>=<value>'
Important:
- When creating volumes, you must pass a new volume option user.heketi.device-tag-match, where the value of the option is a tag key followed by either "=" or "!=" and then a tag value.
- All matches are exact and case-sensitive, and only one device-tag-match can be specified.
Example :
# heketi-cli volume create --size=5 --gluster-volume-options 'user.heketi.device-tag-match disktype=hdd'
Note: Once a volume is created, the volume options list is fixed. The tag-match rules persist with the volume metadata for volume expansion and brick replacement purposes.
Create a storage class.
Create a storage class that only creates volumes on hard disks.
# cat hdd-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
  name: glusterfs-storage-hdd
  selfLink: /apis/storage.k8s.io/v1/storageclasses/glusterfs-storage
parameters:
  resturl: http://heketi-storage.glusterfs.svc:8080
  restuser: admin
  secretName: heketi-storage-admin-secret
  secretNamespace: glusterfs
  volumeoptions: "user.heketi.device-tag-match disktype=hdd"
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete
volumeBindingMode: Immediate
Create a storage class that only creates volumes using faster solid state storage.
Important: You must use a negative tag-matching rule that excludes hard disk devices.
# cat sdd-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
  name: glusterfs-storage-dd
  selfLink: /apis/storage.k8s.io/v1/storageclasses/glusterfs-storage
parameters:
  resturl: http://heketi-storage.glusterfs.svc:8080
  restuser: admin
  secretName: heketi-storage-admin-secret
  secretNamespace: glusterfs
  volumeoptions: "user.heketi.device-tag-match disktype!=hdd"
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete
volumeBindingMode: Immediate
3.2. Block Storage
Block storage allows the creation of high performance individual storage units. Unlike the traditional file storage capability that glusterfs supports, each storage volume/block device can be treated as an independent disk drive, so that each storage volume/block device can support an individual file system.
gluster-block is a distributed management framework for block devices. It aims to make Gluster-backed block storage creation and maintenance as simple as possible. gluster-block can provision block devices and export them as iSCSI LUNs across multiple nodes, and it uses the iSCSI protocol for data transfer as SCSI blocks/commands.
- Block volume expansion is now supported in OpenShift Container Storage 3.11. Refer to Section 3.2.3, “Block volume expansion”.
- Static provisioning of volumes is not supported for Block storage. Dynamic provisioning of volumes is the only method supported.
The recommended Red Hat Enterprise Linux (RHEL) version for block storage is RHEL 7.5.4. Ensure that your kernel version matches 3.10.0-862.14.4.el7.x86_64. To verify, execute:
# uname -r
Reboot the node for the latest kernel update to take effect.
3.2.1. Dynamic Provisioning of Volumes for Block Storage
Dynamic provisioning enables you to provision a Red Hat Gluster Storage volume to a running application container without pre-creating the volume. The volume will be created dynamically as the claim request comes in, and a volume of exactly the same size will be provisioned to the application containers.
The steps outlined below are not necessary when OpenShift Container Storage was deployed using the (default) Ansible installer, in which case the default storage class (glusterfs-storage-block) created during the installation is used.
3.2.1.1. Configuring Dynamic Provisioning of Volumes
To configure dynamic provisioning of volumes, the administrator must define StorageClass objects that describe named "classes" of storage offered in a cluster. After creating a storage class, a secret for heketi authentication must be created before proceeding with the creation of a persistent volume claim.
3.2.1.1.1. Configuring Multipathing on all Initiators
To ensure the iSCSI initiator can communicate with the iSCSI targets and achieve HA using multipathing, execute the following steps on all the OpenShift nodes (iSCSI initiator) where the app pods are hosted:
To install initiator related packages on all the nodes where initiator has to be configured, execute the following command:
# yum install iscsi-initiator-utils device-mapper-multipath
To enable multipath, execute the following command:
# mpathconf --enable
Create and add the following content to the multipath.conf file:
Note: In the case of upgrades, make sure that changes to multipath.conf and the reloading of multipathd are done only after all the server nodes are upgraded.
# cat >> /etc/multipath.conf <<EOF
# LIO iSCSI
devices {
        device {
                vendor "LIO-ORG"
                user_friendly_names "yes" # names like mpatha
                path_grouping_policy "failover" # one path per group
                hardware_handler "1 alua"
                path_selector "round-robin 0"
                failback immediate
                path_checker "tur"
                prio "alua"
                no_path_retry 120
        }
}
EOF
Execute the following commands to start multipath daemon and [re]load the multipath configuration:
# systemctl start multipathd
# systemctl reload multipathd
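Optionally, after the iSCSI sessions are established, you can confirm that multipathing is active for the LIO-ORG devices; for example:

# multipath -ll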
3.2.1.1.2. Creating Secret for Heketi Authentication
To create a secret for Heketi authentication, execute the following commands:
If the admin-key value (the secret used to access heketi to get the volume details) was not set during the deployment of Red Hat Openshift Container Storage, the following steps can be omitted.
Create an encoded value for the password by executing the following command:
# echo -n "<key>" | base64
where key is the value for admin-key that was created while deploying Red Hat Openshift Container Storage. For example:
# echo -n "mypassword" | base64 bXlwYXNzd29yZA==
Create a secret file. A sample secret file is provided below:
# cat glusterfs-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
data:
  # base64 encoded password. E.g.: echo -n "mypassword" | base64
  key: bXlwYXNzd29yZA==
type: gluster.org/glusterblock
Register the secret on Openshift by executing the following command:
# oc create -f glusterfs-secret.yaml
secret "heketi-secret" created
3.2.1.1.3. Registering a Storage Class
When configuring a StorageClass object for persistent volume provisioning, the administrator must describe the type of provisioner to use and the parameters that will be used by the provisioner when it provisions a PersistentVolume belonging to the class.
Create a storage class. A sample storage class file is presented below:
# cat > glusterfs-block-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-block
provisioner: gluster.org/glusterblock-infra-storage
reclaimPolicy: Retain
parameters:
  resturl: "http://heketi-storage-project.cloudapps.mystorage.com"
  restuser: "admin"
  restsecretnamespace: "default"
  restsecretname: "heketi-secret"
  hacount: "3"
  clusterids: "630372ccdc720a92c681fb928f27b53f,796e6db1981f369ea0340913eeea4c9a"
  chapauthenabled: "true"
  volumenameprefix: "test-vol"
where,
- provisioner
- The provisioner name should match the provisioner name with which the glusterblock provisioner pod was deployed. To get the provisioner name, use the following command:
# oc describe pod <glusterblock_provisioner_pod_name> | grep PROVISIONER_NAME
For example:
# oc describe pod glusterblock-registry-provisioner-dc-1-5j8l9 | grep PROVISIONER_NAME
PROVISIONER_NAME: gluster.org/glusterblock-infra-storage
- resturl
- The Gluster REST service/Heketi service URL that provisions gluster volumes on demand. The general format must be IPaddress:Port, and this is a mandatory parameter for the GlusterFS dynamic provisioner. If the Heketi service is exposed as a routable service in the OpenShift/Kubernetes setup, this can have a format similar to http://heketi-storage-project.cloudapps.mystorage.com, where the FQDN is a resolvable Heketi service URL.
- restuser
- The Gluster REST service/Heketi user who has access to create volumes in the trusted storage pool.
- restsecretnamespace + restsecretname
- Identification of the Secret instance that contains the user password to use when talking to the Gluster REST service. These parameters are optional. An empty password is used when both restsecretnamespace and restsecretname are omitted.
- hacount
- It is the count of the number of paths to the block target server. hacount provides high availability through the multipathing capability of iSCSI. If there is a path failure, the I/Os will not be interrupted and will be served via the other available paths.
- clusterids
It is the ID of the cluster which will be used by Heketi when provisioning the volume. It can also be a list of comma-separated cluster IDs. This is an optional parameter.
Note: To get the cluster ID, execute the following command:
# heketi-cli cluster list
- chapauthenabled
- If you want to provision block volume with CHAP authentication enabled, this value has to be set to true. This is an optional parameter.
- volumenameprefix
This is an optional parameter. It sets a prefix for the name of the volume created by heketi. For more information, see Section 3.2.1.1.6, “(Optional) Providing a Custom Volume Name Prefix for Persistent Volumes”.
Note: The value for this parameter cannot contain _ in the storageclass.
To register the storage class to Openshift, execute the following command:
# oc create -f glusterfs-block-storageclass.yaml
storageclass "gluster-block" created
To get the details of the storage class, execute the following command:
# oc describe storageclass gluster-block
Name: gluster-block
IsDefaultClass: No
Annotations: <none>
Provisioner: gluster.org/glusterblock-infra-storage
Parameters: chapauthenabled=true,hacount=3,opmode=heketi,restsecretname=heketi-secret,restsecretnamespace=default,resturl=http://heketi-storage-project.cloudapps.mystorage.com,restuser=admin
Events: <none>
3.2.1.1.4. Creating a Persistent Volume Claim
To create a persistent volume claim execute the following commands:
Create a Persistent Volume Claim file. A sample persistent volume claim is provided below:
# cat glusterfs-block-pvc-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim1
  annotations:
    volume.beta.kubernetes.io/storage-class: gluster-block
spec:
  persistentVolumeReclaimPolicy: Retain
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
- persistentVolumeReclaimPolicy
This is an optional parameter. When this parameter is set to "Retain" the underlying persistent volume is retained even after the corresponding persistent volume claim is deleted.
Note: When the PVC is deleted, the underlying heketi and gluster volumes are not deleted if "persistentVolumeReclaimPolicy:" is set to "Retain". To delete the volume, you must use the heketi CLI and then delete the PV.
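For example, a minimal sketch of that cleanup for a retained block volume (placeholder IDs and names; find the ID with heketi-cli blockvolume list):

# heketi-cli blockvolume list                      # identify the block volume backing the retained PV
# heketi-cli blockvolume delete <blockvolume-id>   # delete the heketi/gluster block volume
# oc delete pv <pv-name>                           # then remove the released PV object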
Register the claim by executing the following command:
# oc create -f glusterfs-block-pvc-claim.yaml
persistentvolumeclaim "claim1" created
To get the details of the claim, execute the following command:
# oc describe pvc <_claim_name_>
For example:
# oc describe pvc claim1 Name: claim1 Namespace: block-test StorageClass: gluster-block Status: Bound Volume: pvc-ee30ff43-7ddc-11e7-89da-5254002ec671 Labels: <none> Annotations: control-plane.alpha.kubernetes.io/leader={"holderIdentity":"8d7fecb4-7dba-11e7-a347-0a580a830002","leaseDurationSeconds":15,"acquireTime":"2017-08-10T15:02:30Z","renewTime":"2017-08-10T15:02:58Z","lea... pv.kubernetes.io/bind-completed=yes pv.kubernetes.io/bound-by-controller=yes volume.beta.kubernetes.io/storage-class=gluster-block volume.beta.kubernetes.io/storage-provisioner=gluster.org/glusterblock Capacity: 5Gi Access Modes: RWO Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 1m 1m 1 gluster.org/glusterblock 8d7fecb4-7dba-11e7-a347-0a580a830002 Normal Provisioning External provisioner is provisioning volume for claim "block-test/claim1" 1m 1m 18 persistentvolume-controller Normal ExternalProvisioning cannot find provisioner "gluster.org/glusterblock", expecting that a volume for the claim is provisioned either manually or via external software 1m 1m 1 gluster.org/glusterblock 8d7fecb4-7dba-11e7-a347-0a580a830002 Normal ProvisioningSucceeded Successfully provisioned volume pvc-ee30ff43-7ddc-11e7-89da-5254002ec671
3.2.1.1.5. Verifying Claim Creation
To verify if the claim is created, execute the following commands:
To get the details of the persistent volume claim and persistent volume, execute the following command:
# oc get pv,pvc
NAME                                          CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS   CLAIM               STORAGECLASS    REASON   AGE
pv/pvc-ee30ff43-7ddc-11e7-89da-5254002ec671   5Gi        RWO           Delete          Bound    block-test/claim1   gluster-block            3m

NAME         STATUS   VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS    AGE
pvc/claim1   Bound    pvc-ee30ff43-7ddc-11e7-89da-5254002ec671   5Gi        RWO           gluster-block   4m
To identify block volumes and block hosting volumes, refer to https://access.redhat.com/solutions/3897581.
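For a quick view from the heketi side, the following commands can also help map a block volume to its block hosting volume (a sketch; the ID is a placeholder):

# heketi-cli blockvolume list                   # list gluster-block volume IDs and names
# heketi-cli blockvolume info <blockvolume-id>  # shows details, including its block hosting volume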
3.2.1.1.6. (Optional) Providing a Custom Volume Name Prefix for Persistent Volumes
You can provide a custom volume name prefix to the persistent volume that is created. By providing a custom volume name prefix, users can now easily search/filter the volumes based on:
- Any string that was provided as the value of the "volumenameprefix" parameter in the storageclass file.
- Persistent volume claim name.
- Project / Namespace name.
To set the name, ensure that you have added the parameter volumenameprefix to the storage class file. For more information, see Section 3.2.1.1.3, “Registering a Storage Class”.
The value for this parameter cannot contain _ in the storageclass.
To verify if the custom volume name prefix is set, execute the following command:
# oc describe pv <pv_name>
For example:
# oc describe pv pvc-4e97bd84-25f4-11e8-8f17-005056a55501 Name: pvc-4e97bd84-25f4-11e8-8f17-005056a55501 Labels: <none> Annotations: AccessKey=glusterblk-67d422eb-7b78-4059-9c21-a58e0eabe049-secret AccessKeyNs=glusterfs Blockstring=url:http://172.31.251.137:8080,user:admin,secret:heketi-secret,secretnamespace:glusterfs Description=Gluster-external: Dynamically provisioned PV gluster.org/type=block gluster.org/volume-id=cd37c089372040eba20904fb60b8c33e glusterBlkProvIdentity=gluster.org/glusterblock glusterBlockShare=test-vol_glusterfs_bclaim1_4eab5a22-25f4-11e8-954d-0a580a830003 kubernetes.io/createdby=heketi pv.kubernetes.io/provisioned-by=gluster.org/glusterblock v2.0.0=v2.0.0 StorageClass: gluster-block-prefix Status: Bound Claim: glusterfs/bclaim1 Reclaim Policy: Delete Access Modes: RWO Capacity: 5Gi Message: Source: Type: ISCSI (an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod) TargetPortal: 10.70.46.177 IQN: iqn.2016-12.org.gluster-block:67d422eb-7b78-4059-9c21-a58e0eabe049 Lun: 0 ISCSIInterface default FSType: xfs ReadOnly: false Portals: [10.70.46.142 10.70.46.4] DiscoveryCHAPAuth: false SessionCHAPAuth: true SecretRef: {glusterblk-67d422eb-7b78-4059-9c21-a58e0eabe049-secret } InitiatorName: <none> Events: <none>
The value of glusterBlockShare has the custom volume name prefix ("test-vol" in this case) prepended to the namespace and the claim name.
3.2.1.1.7. Using the Claim in a Pod
Execute the following steps to use the claim in a pod.
To use the claim in the application, create an application file, for example:
# cat app.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
    - image: busybox
      command:
        - sleep
        - "3600"
      name: busybox
      volumeMounts:
        - mountPath: /usr/share/busybox
          name: mypvc
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: claim1
# oc create -f app.yaml
pod "busybox" created
For more information about using the glusterfs claim in the application see, https://access.redhat.com/documentation/en-us/openshift_container_platform/3.11/html-single/configuring_clusters/#install-config-storage-examples-gluster-example.
To verify that the pod is created, execute the following command:
# oc get pods -n storage-project
NAME                               READY   STATUS    RESTARTS   AGE
block-test-router-1-deploy         0/1     Running   0          4h
busybox                            1/1     Running   0          43s
glusterblock-provisioner-1-bjpz4   1/1     Running   0          4h
glusterfs-7l5xf                    1/1     Running   0          4h
glusterfs-hhxtk                    1/1     Running   3          4h
glusterfs-m4rbc                    1/1     Running   0          4h
heketi-1-3h9nb                     1/1     Running   0          4h
To verify that the persistent volume is mounted inside the container, execute the following command:
# oc rsh busybox
/ # df -h Filesystem Size Used Available Use% Mounted on /dev/mapper/docker-253:1-11438-39febd9d64f3a3594fc11da83d6cbaf5caf32e758eb9e2d7bdd798752130de7e 10.0G 33.9M 9.9G 0% / tmpfs 3.8G 0 3.8G 0% /dev tmpfs 3.8G 0 3.8G 0% /sys/fs/cgroup /dev/mapper/VolGroup00-LogVol00 7.7G 2.8G 4.5G 39% /dev/termination-log /dev/mapper/VolGroup00-LogVol00 7.7G 2.8G 4.5G 39% /run/secrets /dev/mapper/VolGroup00-LogVol00 7.7G 2.8G 4.5G 39% /etc/resolv.conf /dev/mapper/VolGroup00-LogVol00 7.7G 2.8G 4.5G 39% /etc/hostname /dev/mapper/VolGroup00-LogVol00 7.7G 2.8G 4.5G 39% /etc/hosts shm 64.0M 0 64.0M 0% /dev/shm /dev/mpatha 5.0G 32.2M 5.0G 1% /usr/share/busybox tmpfs 3.8G 16.0K 3.8G 0% /var/run/secrets/kubernetes.io/serviceaccount tmpfs 3.8G 0 3.8G 0% /proc/kcore tmpfs 3.8G 0 3.8G 0% /proc/timer_list tmpfs 3.8G 0 3.8G 0% /proc/timer_stats tmpfs 3.8G 0 3.8G 0% /proc/sched_debug
3.2.1.1.8. Deleting a Persistent Volume Claim
If the "persistentVolumeReclaimPolicy" parameter was set to "Retain" when registering the storageclass, the underlying PV and the corresponding volume remains even when a PVC is deleted.
To delete a claim, execute the following command:
# oc delete pvc <claim-name>
For example:
# oc delete pvc claim1
persistentvolumeclaim "claim1" deleted
To verify if the claim is deleted, execute the following command:
# oc get pvc <claim-name>
For example:
# oc get pvc claim1
No resources found.
When the user deletes a persistent volume claim that is bound to a persistent volume created by dynamic provisioning, Kubernetes deletes not only the persistent volume claim but also the persistent volume, endpoints, service, and the underlying volume. To verify this, execute the following commands:
To verify if the persistent volume is deleted, execute the following command:
# oc get pv <pv-name>
For example:
# oc get pv pvc-962aa6d1-bddb-11e6-be23-5254009fc65b
No resources found.
Next step: If you are installing Red Hat Openshift Container Storage 3.11, and you want to use block storage as the backend storage for logging and metrics, proceed to Chapter 7, Gluster Block Storage as Backend for Logging and Metrics.
3.2.2. Replacing a node on Block Storage
If a node that hosts blocks is out of resources or is faulty, it can be replaced with a new node.
Execute the following commands:
Execute the following command to fetch the zone and cluster info from heketi
# heketi-cli topology info --user=<user> --secret=<user key>
- --user
- heketi user
- --secret
- Secret key for a specified user
- After obtaining the cluster id and zone id, refer to Adding New Nodes to add a new node.
Execute the following command to add the device
# heketi-cli device add --name=<device name> --node=<node id> --user=<user> --secret=<user key>
- --name
- Name of device to add
- --node
- Newly added node id
For example:
# heketi-cli device add --name=/dev/vdc --node=2639c473a2805f6e19d45997bb18cb9c --user=admin --secret=adminkey
Device added successfully
After the new node and its associated devices are added to heketi, the faulty or unwanted node can be removed from heketi
To remove any node from heketi, follow this workflow:
- node disable (Disallow usage of a node by placing it offline)
- node remove (Removes a node and all its associated devices from Heketi)
- device delete (Deletes a device from Heketi node)
- node delete (Deletes a node from Heketi management)
Execute the following command to fetch the node list from heketi
# heketi-cli node list --user=<user> --secret=<user key>
For example:
# heketi-cli node list --user=admin --secret=adminkey
Id:05746c562d6738cb5d7de149be1dac04     Cluster:607204cb27346a221f39887a97cf3f90
Id:ab37fc5aabbd714eb8b09c9a868163df     Cluster:607204cb27346a221f39887a97cf3f90
Id:c513da1f9bda528a9fd6da7cb546a1ee     Cluster:607204cb27346a221f39887a97cf3f90
Id:e6ab1fe377a420b8b67321d9e60c1ad1     Cluster:607204cb27346a221f39887a97cf3f90
Execute the following command to fetch the node info of the node, that has to be deleted from heketi:
# heketi-cli node info <nodeid> --user=<user> --secret=<user key>
For example:
# heketi-cli node info c513da1f9bda528a9fd6da7cb546a1ee --user=admin --secret=adminkey
Node Id: c513da1f9bda528a9fd6da7cb546a1ee
State: online
Cluster Id: 607204cb27346a221f39887a97cf3f90
Zone: 1
Management Hostname: dhcp43-171.lab.eng.blr.redhat.com
Storage Hostname: 10.70.43.171
Devices:
Id:3a1e0717e6352a8830ab43978347a103   Name:/dev/vdc   State:online   Size (GiB):499   Used (GiB):100   Free (GiB):399   Bricks:1
Id:89a57ace1c3184826e1317fef785e6b7   Name:/dev/vdd   State:online   Size (GiB):499   Used (GiB):10    Free (GiB):489   Bricks:5
Execute the following command to disable the node from heketi. This makes the node go offline:
# heketi-cli node disable <node-id> --user=<user> --secret=<user key>
For example:
# heketi-cli node disable ab37fc5aabbd714eb8b09c9a868163df --user=admin --secret=adminkey
Node ab37fc5aabbd714eb8b09c9a868163df is now offline
Execute the following command to remove a node and all its associated devices from Heketi:
# heketi-cli node remove <node-id> --user=<user> --secret=<user key>
For example:
# heketi-cli node remove ab37fc5aabbd714eb8b09c9a868163df --user=admin --secret=adminkey
Node ab37fc5aabbd714eb8b09c9a868163df is now removed
Execute the following command to delete the devices from heketi node:
# heketi-cli device delete <device-id> --user=<user> --secret=<user key>
For example:
# heketi-cli device delete 0fca78c3a94faabfbe5a5a9eef01b99c --user=admin --secret=adminkey
Device 0fca78c3a94faabfbe5a5a9eef01b99c deleted
Execute the following command to delete a node from Heketi management:
# heketi-cli node delete <nodeid> --user=<user> --secret=<user key>
For example:
# heketi-cli node delete ab37fc5aabbd714eb8b09c9a868163df --user=admin --secret=adminkey
Node ab37fc5aabbd714eb8b09c9a868163df deleted
Execute the following commands on any one of the gluster pods to replace the faulty node with the new node:
Execute the following command to get a list of block volumes hosted under block-hosting-volume:
# gluster-block list <block-hosting-volume> --json-pretty
Execute the following command to get the list of servers that are hosting the block volume, also save the GBID and PASSWORD values for later use:
# gluster-block info <block-hosting-volume>/<block-volume> --json-pretty
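For example, an invocation might look like the following (the volume and block names here are placeholders). Record the GBID and PASSWORD fields from the JSON output; they are needed later when re-authenticating the initiator:

# gluster-block info block-hosting-volume/sample-block --json-pretty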
Execute the following command to replace the faulty node with the new node:
# gluster-block replace <volname/blockname> <old-node> <new-node> [force]
For example:
{ "NAME":"block", "CREATE SUCCESS":"192.168.124.73", "DELETE SUCCESS":"192.168.124.63", "REPLACE PORTAL SUCCESS ON":[ "192.168.124.79" ], "RESULT":"SUCCESS" } Note: If the old node is down and does not come up again then you can force replace: gluster-block replace sample/block 192.168.124.63 192.168.124.73 force --json-pretty { "NAME":"block", "CREATE SUCCESS":"192.168.124.73", "DELETE FAILED (ignored)":"192.168.124.63", "REPLACE PORTAL SUCCESS ON":[ "192.168.124.79" ], "RESULT":"SUCCESS" }
Note: The next steps are to be executed only if the block that is to be replaced is still in use. Skip these steps if the block volume is not currently mounted. If the block volume is in use by the application, you need to reload the mapper device on the initiator side.
Identify the initiator node and targetname:
To find initiator node:
# oc get pods -o wide | grep <podname>
where podname is the name of the pod on which the blockvolume is mounted.
For example
# oc get pods -o wide | grep cirros1
cirros1-1-x6b5n   1/1   Running   0   1h   10.130.0.5   dhcp46-31.lab.eng.blr.redhat.com   <none>
To find the targetname:
# oc describe pv <pv_name> | grep IQN
For example:
# oc describe pv pvc-c50c69db-5f76-11ea-b27b-005056b253d1 | grep IQN
IQN: iqn.2016-12.org.gluster-block:87ffbcf3-e21e-4fa5-bd21-7db2598e8d3f
Execute the following command on the initiator node to find the mapper device:
# mount | grep <targetname>
Reload the mapper device:
# multipath -r mpathX
For example:
# mount | grep iqn.2016-12.org.gluster-block:d6d18f43-8a74-4b2c-a5b7-df1fa3f5bc9a
/dev/mapper/mpatha on /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/iscsi/iface-default/192.168.124.63:3260-iqn.2016-12.org.gluster-block:d6d18f43-8a74-4b2c-a5b7-df1fa3f5bc9a-lun-0 type xfs (rw,relatime,seclabel,attr2,inode64,noquota)

# multipath -r mpatha
Log out of the old portal by executing the following command on the initiator:
# iscsiadm -m node -T <targetname> -p <old node> -u
For example:
# iscsiadm -m node -T iqn.2016-12.org.gluster-block:d6d18f43-8a74-4b2c-a5b7-df1fa3f5bc9a -p 192.168.124.63 -u
Logging out of session [sid: 8, target: iqn.2016-12.org.gluster-block:d6d18f43-8a74-4b2c-a5b7-df1fa3f5bc9a, portal: 192.168.124.63,3260]
Logout of [sid: 8, target: iqn.2016-12.org.gluster-block:d6d18f43-8a74-4b2c-a5b7-df1fa3f5bc9a, portal: 192.168.124.63,3260] successful.
To rediscover the new node, execute the following command:
# iscsiadm -m discovery -t st -p <new node>
For example:
# iscsiadm -m discovery -t st -p 192.168.124.73
192.168.124.79:3260,1 iqn.2016-12.org.gluster-block:d6d18f43-8a74-4b2c-a5b7-df1fa3f5bc9a
192.168.124.73:3260,2 iqn.2016-12.org.gluster-block:d6d18f43-8a74-4b2c-a5b7-df1fa3f5bc9a
Log in to the new portal by executing the following:
Update the authentication credentials (use the GBID and PASSWORD values saved earlier from the gluster-block info output):
# iscsiadm -m node -T <targetname> -o update -n node.session.auth.authmethod -v CHAP -n node.session.auth.username -v <GBID> -n node.session.auth.password -v <PASSWORD> -p <new node ip>
Log in to the new portal:
# iscsiadm -m node -T <targetname> -p <new node ip> -l
For example:
# iscsiadm -m node -T iqn.2016-12.org.gluster-block:d6d18f43-8a74-4b2c-a5b7-df1fa3f5bc9a -o update -n node.session.auth.authmethod -v CHAP -n node.session.auth.username -v d6d18f43-8a74-4b2c-a5b7-df1fa3f5bc9a -n node.session.auth.password -v a6a9081f-3d0d-4e8b-b9b0-d2be703b455d -p 192.168.124.73

# iscsiadm -m node -T iqn.2016-12.org.gluster-block:d6d18f43-8a74-4b2c-a5b7-df1fa3f5bc9a -p 192.168.124.73 -l
To verify that the hosting volume has been replaced and is running successfully, execute the following command on the initiator:
# ll /dev/disk/by-path/ip-* | grep <targetname> | grep <new node ip>
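The output should show by-path symlinks for the new node's IP pointing at the underlying SCSI devices. The following is an illustrative example that reuses the target IQN and new node IP from this procedure; the device name, timestamp, and number of links will differ on your system:

# ll /dev/disk/by-path/ip-* | grep iqn.2016-12.org.gluster-block:d6d18f43-8a74-4b2c-a5b7-df1fa3f5bc9a | grep 192.168.124.73
lrwxrwxrwx. 1 root root 9 Jun 12 09:18 /dev/disk/by-path/ip-192.168.124.73:3260-iscsi-iqn.2016-12.org.gluster-block:d6d18f43-8a74-4b2c-a5b7-df1fa3f5bc9a-lun-0 -> ../../sdc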
Ensure that you update the gluster-block persistent volume (PV) with the new IP address.
PVs are immutable by definition, so you cannot edit the PV, which means you cannot change the old IP address on it. A document describes the procedure to work around this issue (recreate a new PV and delete the old PV definition while reusing the same data/underlying storage device); see the Red Hat Knowledgebase solution Gluster block PVs are not updated with new IPs after gluster node replacement.
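As a rough, non-authoritative sketch of that workaround (the Knowledgebase solution is the definitive reference), the general flow is to save the existing PV definition, update the iSCSI portal IPs in the saved copy, delete the old PV object, and recreate it against the same underlying storage:

# oc get pv <pv-name> -o yaml > gluster-block-pv.yaml
  (edit gluster-block-pv.yaml: update .spec.iscsi.targetPortal and .spec.iscsi.portals with the new node IP)
# oc delete pv <pv-name>
# oc create -f gluster-block-pv.yaml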
3.2.3. Block volume expansion
You can expand the block persistent volume claim to increase the amount of storage on the application pods. There are two ways to do this: offline resizing and online resizing.
3.2.3.1. Offline resizing
Ensure that the block hosting volume has sufficient size before expanding the block PVC.
To get the Heketi block volume ID of the PVC, execute the following command on the primary OCP node:
# oc get pv $(oc get pvc <PVC-NAME> --no-headers -o=custom-columns=:.spec.volumeName) -o=custom-columns=:.metadata.annotations."gluster\.org/volume-id"
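The command prints only the volume-id annotation of the PV bound to the claim. For example, assuming a PVC named claim2 (consistent with the block volume name shown in the expand example below), the output is the Heketi block volume ID:

# oc get pv $(oc get pvc claim2 --no-headers -o=custom-columns=:.spec.volumeName) -o=custom-columns=:.metadata.annotations."gluster\.org/volume-id"
d911d4bcfd4f11bf07b9688a4798b5dc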
To get the block volume ID, execute the following command:
# heketi-cli blockvolume info <block-volume-id>
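The fields in the blockvolume info output mirror those of the expand output shown later in this procedure; note the Block Hosting Volume ID, which is needed in the next step. The values below are illustrative (pre-expansion size of 5 GiB), and the exact field set can vary slightly between heketi versions:

# heketi-cli blockvolume info d911d4bcfd4f11bf07b9688a4798b5dc
Name: blk_glusterfs_claim2_fc40d362-aaf9-11ea-bb32-0a580a820004
Size: 5
Volume Id: d911d4bcfd4f11bf07b9688a4798b5dc
Cluster Id: 8d1656d29fb8dc6780388cf797351a6d
Hosts: [10.70.53.185 10.70.53.203 10.70.53.198]
IQN: iqn.2016-12.org.gluster-block:8ce8eb4c-4951-4777-9b42-244b7ea525cd
LUN: 0
Hacount: 3
Username: 8ce8eb4c-4951-4777-9b42-244b7ea525cd
Password: b83a74be-df90-4fd7-b1a1-928fdcfed8c4
Block Hosting Volume: 2224ac1da64c1737604456a1f511574e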
To get the block hosting volume information, execute the following command:
# heketi-cli volume info <block-hosting-volume-id>
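Check the Free Size reported for the block hosting volume to confirm that it can accommodate the expansion. The following is only an illustrative sketch; field names and values may differ between heketi versions:

# heketi-cli volume info 2224ac1da64c1737604456a1f511574e
Name: vol_2224ac1da64c1737604456a1f511574e
Size: 100
Volume Id: 2224ac1da64c1737604456a1f511574e
Cluster Id: 8d1656d29fb8dc6780388cf797351a6d
Mount: 10.70.53.185:vol_2224ac1da64c1737604456a1f511574e
Mount Options: backup-volfile-servers=10.70.53.203,10.70.53.198
Block: true
Free Size: 93
Durability Type: replicate
Replica Count: 3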
Note: Ensure that you have sufficient free size on the block hosting volume.
- Bring down the application pod.
To expand the block volume through heketi-cli, execute the following command:
# heketi-cli blockvolume expand <block-volume-id> --new-size=<net-new-size>
For example:
# heketi-cli blockvolume expand d911d4bcfd4f11bf07b9688a4798b5dc --new-size=7
Name: blk_glusterfs_claim2_fc40d362-aaf9-11ea-bb32-0a580a820004
Size: 7
UsableSize: 7
Volume Id: d911d4bcfd4f11bf07b9688a4798b5dc
Cluster Id: 8d1656d29fb8dc6780388cf797351a6d
Hosts: [10.70.53.185 10.70.53.203 10.70.53.198]
IQN: iqn.2016-12.org.gluster-block:8ce8eb4c-4951-4777-9b42-244b7ea525cd
LUN: 0
Hacount: 3
Username: 8ce8eb4c-4951-4777-9b42-244b7ea525cd
Password: b83a74be-df90-4fd7-b1a1-928fdcfed8c4
Block Hosting Volume: 2224ac1da64c1737604456a1f511574e
Note: Ensure that Size and UsableSize match in the expand output. Steps 4 to 8 can be executed only when Size and UsableSize match.
Replace PVC-NAME with the name of your PVC and create a job to refresh the block volume size:

apiVersion: batch/v1
kind: Job
metadata:
  name: refresh-block-size
spec:
  completions: 1
  template:
    spec:
      containers:
      - image: rhel7
        env:
        - name: HOST_ROOTFS
          value: "/rootfs"
        - name: EXEC_ON_HOST
          value: "nsenter --root=$(HOST_ROOTFS) nsenter -t 1 -m"
        command: ['sh', '-c', 'echo -e "# df -Th /mnt" && df -Th /mnt && DEVICE=$(df --output=source /mnt | sed -e /^Filesystem/d) && MOUNTPOINT=$($EXEC_ON_HOST lsblk $DEVICE -n -o MOUNTPOINT) && $EXEC_ON_HOST xfs_growfs $MOUNTPOINT > /dev/null && echo -e "\n# df -Th /mnt" && df -Th /mnt']
        name: rhel7
        volumeMounts:
        - mountPath: /mnt
          name: block-pvc
        - mountPath: /dev
          name: host-dev
        - mountPath: /rootfs
          name: host-rootfs
        securityContext:
          privileged: true
      volumes:
      - name: block-pvc
        persistentVolumeClaim:
          claimName: <PVC-NAME>
      - name: host-dev
        hostPath:
          path: /dev
      - name: host-rootfs
        hostPath:
          path: /
      restartPolicy: Never
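Assuming the job definition above is saved to a file such as refresh-block-size-job.yaml (hypothetical file name), create the job with oc create:

# oc create -f refresh-block-size-job.yaml
job.batch "refresh-block-size" created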
To verify the new size in the logs of the pod, execute the following command:
# oc logs refresh-block-size-xxxxx
Note: Ensure that the df -Th output after xfs_growfs reflects the new size. For example:
# oc logs refresh-block-size-jcbzh
# df -Th /mnt
Filesystem          Type  Size  Used  Avail  Use%  Mounted on
/dev/mapper/mpatha  xfs   5.0G   33M   5.0G    1%  /mnt

# df -Th /mnt
Filesystem          Type  Size  Used  Avail  Use%  Mounted on
/dev/mapper/mpatha  xfs   7.0G   34M   6.0G    1%  /mnt
To check the success of the job, execute the following command:
# oc get jobs
NAME                 DESIRED   SUCCESSFUL   AGE
refresh-block-size   1         1            36m
To delete the job once it is successful, execute the following command:
# oc delete job refresh-block-size
job.batch "refresh-block-size" deleted
- You can use the new size after bringing up your application pod.
3.2.3.2. Online resizing
Ensure that the block hosting volume has sufficient size before expanding the block PVC.
To get the Heketi block volume ID of the PVC, execute the following command on the primary OCP node:
# oc get pv $(oc get pvc <PVC-NAME> --no-headers -o=custom-columns=:.spec.volumeName) -o=custom-columns=:.metadata.annotations."gluster\.org/volume-id"
To get the block volume ID, execute the following command:
# heketi-cli blockvolume info <block-volume-id>
To get the block hosting volume information, execute the following command:
# heketi-cli volume info <block-hosting-volume-id>
Note: Ensure that you have sufficient free size on the block hosting volume.
To expand the block volume through heketi-cli, execute the following command:
# heketi-cli blockvolume expand <BLOCK-VOLUME-ID> --new-size=<net-new-size>
For example:
# heketi-cli blockvolume expand d911d4bcfd4f11bf07b9688a4798b5dc --new-size=7
Name: blk_glusterfs_claim2_fc40d362-aaf9-11ea-bb32-0a580a820004
Size: 7
UsableSize: 7
Volume Id: d911d4bcfd4f11bf07b9688a4798b5dc
Cluster Id: 8d1656d29fb8dc6780388cf797351a6d
Hosts: [10.70.53.185 10.70.53.203 10.70.53.198]
IQN: iqn.2016-12.org.gluster-block:8ce8eb4c-4951-4777-9b42-244b7ea525cd
LUN: 0
Hacount: 3
Username: 8ce8eb4c-4951-4777-9b42-244b7ea525cd
Password: b83a74be-df90-4fd7-b1a1-928fdcfed8c4
Block Hosting Volume: 2224ac1da64c1737604456a1f511574e
Note: Ensure that Size and UsableSize match in the expand output. Steps 3 to 9 can be executed only when Size and UsableSize match.
To get the iSCSI target IQN name mapped to the PV, execute the following command and make a note of it for further reference:
# oc get pv <PV-NAME> -o=custom-columns=:.spec.iscsi.iqn
For example:
# oc get pv pvc-fc3e9160-aaf9-11ea-a29f-005056b781de -o=custom-columns=:.spec.iscsi.iqn
iqn.2016-12.org.gluster-block:8ce8eb4c-4951-4777-9b42-244b7ea525cd
Log in to the host node of the application pod.
To get the node name of the application pod, execute the following command:
# oc get pods <POD-NAME> -o=custom-columns=:.spec.nodeName
For example:
# oc get pods cirros2-1-8x6w5 -o=custom-columns=:.spec.nodeName
dhcp53-203.lab.eng.blr.redhat.com
To log in to the host node of the application pod, execute the following command:
# ssh <NODE-NAME>
For example:
# ssh dhcp53-203.lab.eng.blr.redhat.com
Make a note of the multipath mapper device name (for example, mpatha), the current sizes of the individual paths (for example, sdd, sde, and sdf), and the mapper device size for further reference.
# lsblk | grep -B1 <pv-name>
For example:
# lsblk | grep -B1 pvc-fc3e9160-aaf9-11ea-a29f-005056b781de
sdd          8:48   0   6G  0 disk
└─mpatha   253:18   0   6G  0 mpath /var/lib/origin/openshift.local.volumes/pods/44b76db5-afa2-11ea-a29f-005056b781de/volumes/kubernetes.io~iscsi/pvc-fc3e9160-aaf9-11ea-a29f-005056b781de
sde          8:64   0   6G  0 disk
└─mpatha   253:18   0   6G  0 mpath /var/lib/origin/openshift.local.volumes/pods/44b76db5-afa2-11ea-a29f-005056b781de/volumes/kubernetes.io~iscsi/pvc-fc3e9160-aaf9-11ea-a29f-005056b781de
sdf          8:80   0   6G  0 disk
└─mpatha   253:18   0   6G  0 mpath /var/lib/origin/openshift.local.volumes/pods/44b76db5-afa2-11ea-a29f-005056b781de/volumes/kubernetes.io~iscsi/pvc-fc3e9160-aaf9-11ea-a29f-005056b781de

# df -Th | grep <PV-NAME>

For example:

# df -Th | grep pvc-fc3e9160-aaf9-11ea-a29f-005056b781de
/dev/mapper/mpatha  xfs  6.0G  44M  6.0G  1% /var/lib/origin/openshift.local.volumes/pods/44b76db5-afa2-11ea-a29f-005056b781de/volumes/kubernetes.io~iscsi/pvc-fc3e9160-aaf9-11ea-a29f-005056b781de
Use the IQN name from step 3 to rescan the devices on the host node of the application pod (which is an iSCSI initiator) by executing the following command:
# iscsiadm -m node -R -T <iqn-name>
For example:
# iscsiadm -m node -R -T iqn.2016-12.org.gluster-block:a951f673-1a17-47b8-ac02-197baa32b9b1
Rescanning session [sid: 1, target: iqn.2016-12.org.gluster-block:a951f673-1a17-47b8-ac02-197baa32b9b1, portal: 192.168.124.80,3260]
Rescanning session [sid: 2, target: iqn.2016-12.org.gluster-block:a951f673-1a17-47b8-ac02-197baa32b9b1, portal: 192.168.124.73,3260]
Rescanning session [sid: 3, target: iqn.2016-12.org.gluster-block:a951f673-1a17-47b8-ac02-197baa32b9b1, portal: 192.168.124.63,3260]
Note: You should now see the new size reflected at the individual paths (sdd, sde, and sdf):
# lsblk | grep -B1 <pv-name>
For example:
# lsblk | grep -B1 pvc-fc3e9160-aaf9-11ea-a29f-005056b781de
sdd          8:48   0   7G  0 disk
└─mpatha   253:18   0   6G  0 mpath /var/lib/origin/openshift.local.volumes/pods/44b76db5-afa2-11ea-a29f-005056b781de/volumes/kubernetes.io~iscsi/pvc-fc3e9160-aaf9-11ea-a29f-005056b781de
sde          8:64   0   7G  0 disk
└─mpatha   253:18   0   6G  0 mpath /var/lib/origin/openshift.local.volumes/pods/44b76db5-afa2-11ea-a29f-005056b781de/volumes/kubernetes.io~iscsi/pvc-fc3e9160-aaf9-11ea-a29f-005056b781de
sdf          8:80   0   7G  0 disk
└─mpatha   253:18   0   6G  0 mpath /var/lib/origin/openshift.local.volumes/pods/44b76db5-afa2-11ea-a29f-005056b781de/volumes/kubernetes.io~iscsi/pvc-fc3e9160-aaf9-11ea-a29f-005056b781de
To refresh the multipath device size, execute the following commands:
Get the multipath mapper device name from step 6, from the lsblk output. To refresh the multipath mapper device, execute the following command:
# multipathd -k'resize map <multipath-mapper-name>'
For example:
# multipathd -k'resize map mpatha'
Ok
Note: You should now see the new size reflected on the mapper device mpatha. Copy the mount point path from the following command output for further reference:
# lsblk | grep -B1 <PV-NAME>
For example:
# lsblk | grep -B1 pvc-fc3e9160-aaf9-11ea-a29f-005056b781de
sdd          8:48   0   7G  0 disk
└─mpatha   253:18   0   7G  0 mpath /var/lib/origin/openshift.local.volumes/pods/44b76db5-afa2-11ea-a29f-005056b781de/volumes/kubernetes.io~iscsi/pvc-fc3e9160-aaf9-11ea-a29f-005056b781de
sde          8:64   0   7G  0 disk
└─mpatha   253:18   0   7G  0 mpath /var/lib/origin/openshift.local.volumes/pods/44b76db5-afa2-11ea-a29f-005056b781de/volumes/kubernetes.io~iscsi/pvc-fc3e9160-aaf9-11ea-a29f-005056b781de
sdf          8:80   0   7G  0 disk
└─mpatha   253:18   0   7G  0 mpath /var/lib/origin/openshift.local.volumes/pods/44b76db5-afa2-11ea-a29f-005056b781de/volumes/kubernetes.io~iscsi/pvc-fc3e9160-aaf9-11ea-a29f-005056b781de
# df -Th | grep <pv-name>
For example:
# df -Th | grep pvc-fc3e9160-aaf9-11ea-a29f-005056b781de
/dev/mapper/mpatha  xfs  6.0G  44M  6.0G  1% /var/lib/origin/openshift.local.volumes/pods/44b76db5-afa2-11ea-a29f-005056b781de/volumes/kubernetes.io~iscsi/pvc-fc3e9160-aaf9-11ea-a29f-005056b781de
To grow the file system layout, execute the following commands:
# xfs_growfs <mount-point>
For example:
# xfs_growfs /var/lib/origin/openshift.local.volumes/pods/44b76db5-afa2-11ea-a29f-005056b781de/volumes/kubernetes.io~iscsi/pvc-fc3e9160-aaf9-11ea-a29f-005056b781de
meta-data=/dev/mapper/mpatha     isize=512    agcount=24, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=1572864, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 1572864 to 1835008
# df -Th | grep <pv-name>
For example:
# df -Th | grep pvc-fc3e9160-aaf9-11ea-a29f-005056b781de
/dev/mapper/mpatha  xfs  7.0G  44M  7.0G  1% /var/lib/origin/openshift.local.volumes/pods/44b76db5-afa2-11ea-a29f-005056b781de/volumes/kubernetes.io~iscsi/pvc-fc3e9160-aaf9-11ea-a29f-005056b781de
- You can now use the new size without restarting the application pod.