Chapter 6. Upgrading your Red Hat Openshift Container Storage in Converged Mode
This chapter describes the procedure to upgrade your environment from Container-Native Storage 3.9 to Red Hat Openshift Container Storage in Converged Mode 3.10.
6.1. Upgrading the Glusterfs Pods
The following sections provide steps to upgrade your Glusterfs pods.
6.1.1. Prerequisites
Ensure the following prerequisites are met:
- Ensure that you have the supported versions of OpenShift Container Platform with Red Hat Gluster Storage Server and Red Hat Openshift Container Storage. For more information on supported versions, see Section 3.1.1, “Supported Versions”.
- Run the following command to retrieve the current configuration details before starting the upgrade (one way to save this output for later reference is sketched after the note below):
# oc get all
- Run the following command to get the latest versions of the Ansible templates:
# yum update openshift-ansible
Note
The template files are available in the following locations:
- gluster template - /usr/share/heketi/templates/glusterfs-template.yaml
- heketi template - /usr/share/heketi/templates/heketi-template.yaml
- glusterblock-provisioner template - /usr/share/heketi/templates/glusterblock-provisioner.yaml
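For reference, the following is a minimal sketch of one way to save the pre-upgrade state (the oc get all output and the current template definitions) to timestamped files before you begin. The project name storage-project and the backup directory are illustrative assumptions; substitute your own values.

# Save the current project resources and templates before upgrading.
backup_dir=/root/ocs-upgrade-backup-$(date +%Y%m%d-%H%M%S)
mkdir -p "$backup_dir"

oc project storage-project                                 # switch to the storage project
oc get all -o yaml > "$backup_dir/all-resources.yaml"      # full resource dump
oc get templates -o yaml > "$backup_dir/templates.yaml"    # current template definitions
oc get ds,dc,route,svc -o wide > "$backup_dir/summary.txt" # quick human-readable summary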
6.1.2. Restoring original label values for /dev/log
To restore the original SELinux label, execute the following commands:
- Create a directory and soft links on all nodes that run gluster pods:
# mkdir /srv/<directory_name>
# cd /srv/<directory_name>/    # same dir as above
# ln -sf /dev/null systemd-tmpfiles-setup-dev.service
# ln -sf /dev/null systemd-journald.service
# ln -sf /dev/null systemd-journald.socket
- Edit the daemonset that creates the glusterfs pods on the node which has oc client:
# oc edit daemonset <daemonset_name>
Under the volumeMounts section, add a mapping for the volume:
- mountPath: /usr/lib/systemd/system/systemd-journald.service
  name: systemd-journald-service
- mountPath: /usr/lib/systemd/system/systemd-journald.socket
  name: systemd-journald-socket
- mountPath: /usr/lib/systemd/system/systemd-tmpfiles-setup-dev.service
  name: systemd-tmpfiles-setup-dev-service
Under the volumes section, add a new host path for each service listed:
Note
The path mentioned here should be the same as mentioned in Step 1.
- hostPath:
    path: /srv/<directory_name>/systemd-journald.socket
    type: ""
  name: systemd-journald-socket
- hostPath:
    path: /srv/<directory_name>/systemd-journald.service
    type: ""
  name: systemd-journald-service
- hostPath:
    path: /srv/<directory_name>/systemd-tmpfiles-setup-dev.service
    type: ""
  name: systemd-tmpfiles-setup-dev-service
- Run the following command on all nodes that run gluster pods. This will reset the label:
# restorecon /dev/log
- Execute the following command to check the status of self heal for all volumes:
# oc rsh <gluster_pod_name>
# for each_volume in `gluster volume list`; do gluster volume heal $each_volume info ; done | grep "Number of entries: [^0]$"
Wait for self-heal to complete (one way to poll until healing finishes is sketched after this procedure).
- Execute the following command and ensure that the bricks are not more than 90% full:
# df -kh | grep -v ^Filesystem | awk '{if($5>"90%") print $0}'
- Execute the following command on any one of the gluster pods to set the maximum number of bricks (250) that can run on a single instance of the glusterfsd process:
# gluster volume set all cluster.max-bricks-per-process 250
- Execute the following command on any one of the gluster pods to ensure that the option is set correctly:
# gluster volume get all cluster.max-bricks-per-process
For example:
# gluster volume get all cluster.max-bricks-per-process
cluster.max-bricks-per-process 250
- Execute the following command on the node which has oc client to delete the gluster pod:
# oc delete pod <gluster_pod_name>
- To verify if the pod is ready, execute the following command:
# oc get pods -l glusterfs=storage-pod
- Log in to the node hosting the pod and check the SELinux label of /dev/log:
# ls -lZ /dev/log
The output should show the devlog_t label. For example:
# ls -lZ /dev/log
srw-rw-rw-. root root system_u:object_r:devlog_t:s0 /dev/log
Exit the node.
- In the gluster pod, check if the label value is devlog_t:
# oc rsh <gluster_pod_name>
# ls -lZ /dev/log
For example:
# ls -lZ /dev/log
srw-rw-rw-. root root system_u:object_r:devlog_t:s0 /dev/log
- Perform steps 4 to 9 for other pods.
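The self-heal check above must report no pending entries before you move on to the next pod. The following is a minimal, non-authoritative sketch of one way to poll for that from inside a gluster pod; the 30-second interval is an arbitrary assumption.

# Run inside a gluster pod: loop until no volume reports pending heal entries.
while true; do
    pending=$(for each_volume in $(gluster volume list); do
                  gluster volume heal $each_volume info
              done | grep -c "Number of entries: [^0]")
    [ "$pending" -eq 0 ] && break       # all heals finished
    echo "$pending brick(s) still healing, waiting..."
    sleep 30
done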
6.1.3. Upgrading if existing version deployed by using cns-deploy
6.1.3.1. Upgrading cns-deploy and Heketi Server
The following commands must be executed on the client machine.
- Execute the following command to update the heketi client and cns-deploy packages:
# yum update cns-deploy -y
# yum update heketi-client -y
- Back up the Heketi database file:
# oc rsh <heketi_pod_name>
# cp -a /var/lib/heketi/heketi.db /var/lib/heketi/heketi.db.`date +%s`.`heketi --version | awk '{print $2}'`
# exit
- Execute the following command to delete the heketi template.
# oc delete templates heketi
- Execute the following command to get the current HEKETI_ADMIN_KEY. The OCS admin can choose any phrase for the user key as long as it is not used by their infrastructure. It is not used by any of the default installed OCS resources.
oc get secret heketi-storage-admin-secret -o jsonpath='{.data.key}'|base64 -d;echo
- Execute the following command to install the heketi template.
# oc create -f /usr/share/heketi/templates/heketi-template.yaml
template "heketi" created
- Execute the following command to grant the heketi Service Account the necessary privileges.
# oc policy add-role-to-user edit system:serviceaccount:<project_name>:heketi-service-account
# oc adm policy add-scc-to-user privileged -z heketi-service-account
For example,
# oc policy add-role-to-user edit system:serviceaccount:storage-project:heketi-service-account
# oc adm policy add-scc-to-user privileged -z heketi-service-account
- Execute the following command to generate a new heketi configuration file.
# sed -e "s/\${HEKETI_EXECUTOR}/kubernetes/" -e "s#\${HEKETI_FSTAB}#/var/lib/heketi/fstab#" -e "s/\${SSH_PORT}/22/" -e "s/\${SSH_USER}/root/" -e "s/\${SSH_SUDO}/false/" -e "s/\${BLOCK_HOST_CREATE}/true/" -e "s/\${BLOCK_HOST_SIZE}/500/" "/usr/share/heketi/templates/heketi.json.template" > heketi.json- The
BLOCK_HOST_SIZEparameter controls the size (in GB) of the automatically created Red Hat Gluster Storage volumes hosting the gluster-block volumes (For more information, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html/operations_guide/block_storage). This default configuration will dynamically create block-hosting volumes of 500GB in size as more space is required. - Alternatively, copy the file
/usr/share/heketi/templates/heketi.json.templatetoheketi.jsonin the current directory and edit the new file directly, replacing each "${VARIABLE}" string with the required parameter.Note
JSON formatting is strictly required (e.g. no trailing spaces, booleans in all lowercase).
Note
If the heketi-config-secret file already exists, then delete the file and run the following command.
Execute the following command to create a secret to hold the configuration file.
# oc create secret generic heketi-config-secret --from-file=heketi.json
- Execute the following command to delete the deployment configuration, service, and route for heketi:
Note
The names of these resources can be referenced from the output of the following command:
# oc get all | grep heketi
# oc delete deploymentconfig,service,route heketi
- Execute the following command to edit the heketi template. Edit the HEKETI_USER_KEY and HEKETI_ADMIN_KEY parameters.
# oc edit template heketi
parameters:
- description: Set secret for those creating volumes as type _user_
  displayName: Heketi User Secret
  name: HEKETI_USER_KEY
  value: <heketiuserkey>
- description: Set secret for administration of the Heketi service as user _admin_
  displayName: Heketi Administrator Secret
  name: HEKETI_ADMIN_KEY
  value: <adminkey>
- description: Set the executor type, kubernetes or ssh
  displayName: heketi executor type
  name: HEKETI_EXECUTOR
  value: kubernetes
- description: Set the hostname for the route URL
  displayName: heketi route name
  name: HEKETI_ROUTE
  value: heketi-storage
- displayName: heketi container image name
  name: IMAGE_NAME
  required: true
  value: rhgs3/rhgs-volmanager-rhel7
- displayName: heketi container image version
  name: IMAGE_VERSION
  required: true
  value: v3.10
- description: A unique name to identify this heketi service, useful for running multiple heketi instances
  displayName: GlusterFS cluster name
  name: CLUSTER_NAME
  value: storage
- Execute the following command to deploy the Heketi service, route, and deployment configuration which will be used to create persistent volumes for OpenShift:
# oc process heketi | oc create -f -
service "heketi" created
route "heketi" created
deploymentconfig "heketi" created
- Execute the following command to verify that the containers are running:
# oc get pods
For example:
# oc get pods
NAME                             READY     STATUS    RESTARTS   AGE
glusterfs-0h68l                  1/1       Running   0          3d
glusterfs-0vcf3                  1/1       Running   0          3d
glusterfs-gr9gh                  1/1       Running   0          3d
heketi-1-zpw4d                   1/1       Running   0          3h
storage-project-router-2-db2wl   1/1       Running   0          4d
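After the heketi pod is back in Running state, you can optionally confirm that the upgraded heketi server answers requests through its route. The following is a minimal sketch, assuming the route is named heketi (as created above) and that the admin key is the one retrieved earlier; adjust the names to your project.

# Confirm the upgraded heketi server is reachable through its route.
HEKETI_ADMIN_KEY=$(oc get secret heketi-storage-admin-secret -o jsonpath='{.data.key}' | base64 -d)
HEKETI_SERVER=http://$(oc get route heketi -o jsonpath='{.spec.host}')

curl -s "$HEKETI_SERVER/hello"; echo      # should print: Hello from Heketi
heketi-cli --server "$HEKETI_SERVER" --user admin --secret "$HEKETI_ADMIN_KEY" cluster list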
6.1.3.2. Upgrading the Red Hat Gluster Storage Pods
The following commands must be executed on the client machine.
Following are the steps for updating a DaemonSet for glusterfs:
- Execute the following steps to stop the Heketi pod to prevent it from accepting any new request for volume creation or volume deletion:
- Execute the following command to access your project:
# oc project <project_name>
For example:
# oc project storage-project
- Execute the following command to get the DeploymentConfig:
# oc get dc
- Execute the following command to set heketi server to accept requests only from the local-client:
# heketi-cli server mode set local-client
- Wait for the ongoing operations to complete and execute the following command to monitor if there are any ongoing operations:
# heketi-cli server operations info
- Execute the following command to reduce the replica count from 1 to 0. This brings down the Heketi pod:
# oc scale dc <heketi_dc> --replicas=0
- Execute the following command to verify that the heketi pod is no longer present:
# oc get pods
- Execute the following command to find the DaemonSet name for gluster
# oc get ds
- Execute the following command to delete the DaemonSet:
# oc delete ds <ds-name> --cascade=false
Using the --cascade=false option while deleting the old DaemonSet does not delete the gluster pods but deletes only the DaemonSet. After deleting the old DaemonSet, you must load the new one. When you manually delete the old pods, the new pods which are created will have the configurations of the new DaemonSet.
For example,
# oc delete ds glusterfs --cascade=false
daemonset "glusterfs" deleted
- Execute the following commands to verify all the old pods are up:
# oc get pods
For example,
# oc get pods
NAME                             READY     STATUS    RESTARTS   AGE
glusterfs-0h68l                  1/1       Running   0          3d
glusterfs-0vcf3                  1/1       Running   0          3d
glusterfs-gr9gh                  1/1       Running   0          3d
storage-project-router-2-db2wl   1/1       Running   0          4d
- Execute the following command to delete the old glusterfs template.
# oc delete templates glusterfs
For example,
# oc delete templates glusterfs
template "glusterfs" deleted
- Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
- Check if the nodes are labelled using the following command:
# oc get nodes --show-labels
If the Red Hat Gluster Storage nodes do not have the storagenode=glusterfs label, then label the nodes as shown in step ii.
- Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
# oc label nodes <node name> storagenode=glusterfs
- Execute the following command to register the new gluster template.
# oc create -f /usr/share/heketi/templates/glusterfs-template.yaml
For example,
# oc create -f /usr/share/heketi/templates/glusterfs-template.yaml
template "glusterfs" created
- Execute the following commands to create the gluster DaemonSet:
# oc process glusterfs | oc create -f -
For example,
# oc process glusterfs | oc create -f -
daemonset "glusterfs" created
- Execute the following command to identify the old gluster pods that need to be deleted:
# oc get pods
For example,
# oc get pods
NAME                             READY     STATUS    RESTARTS   AGE
glusterfs-0h68l                  1/1       Running   0          3d
glusterfs-0vcf3                  1/1       Running   0          3d
glusterfs-gr9gh                  1/1       Running   0          3d
storage-project-router-2-db2wl   1/1       Running   0          4d
- Execute the following command and ensure that the bricks are not more than 90% full:
# df -kh | grep -v ^Filesystem | awk '{if($5>"90%") print $0}'
- Execute the following command to delete the old gluster pods.
Gluster pods should follow a rolling upgrade. Hence, you must ensure that the new pod is running before deleting the next old gluster pod (a scripted version of this delete-and-wait loop is sketched after this procedure). We support only the OnDelete DaemonSet update strategy. With the OnDelete update strategy, after you update a DaemonSet template, new DaemonSet pods will only be created when you manually delete old DaemonSet pods.
- To delete the old gluster pods, execute the following command:
# oc delete pod <gluster_pod>
For example,
# oc delete pod glusterfs-0vcf3
pod "glusterfs-0vcf3" deleted
Note
Before deleting the next pod, a self-heal check has to be made:
- Run the following command to access a shell on the gluster pod:
# oc rsh <gluster_pod_name>
- Run the following command to check the self-heal status of all the volumes:
for each_volume in `gluster volume list`; do gluster volume heal $each_volume info ; done | grep "Number of entries: [^0]$"
- The delete pod command will terminate the old pod and create a new pod. Run # oc get pods -w and check that the Age of the pod and the READY status is 1/1. The following is example output showing the status progression from termination to creation of the pod.
# oc get pods -w
NAME              READY     STATUS        RESTARTS   AGE
glusterfs-0vcf3   1/1       Terminating   0          3d
…
- Execute the following command to verify that the pods are running:
# oc get pods
For example,
# oc get pods
NAME                             READY     STATUS    RESTARTS   AGE
glusterfs-j241c                  1/1       Running   0          4m
glusterfs-pqfs6                  1/1       Running   0          7m
glusterfs-wrn6n                  1/1       Running   0          12m
storage-project-router-2-db2wl   1/1       Running   0          4d
- Execute the following command to verify if you have upgraded the pod to the latest version:
# oc rsh <gluster_pod_name> glusterd --version
For example:
# oc rsh glusterfs-registry-4cpcc glusterd --version
glusterfs 3.12.2
- Check the Red Hat Gluster Storage op-version by executing the following command on one of the gluster pods.
# gluster vol get all cluster.op-version
- Set the cluster.op-version to 31302 on any one of the pods:
Note
Ensure all the gluster pods are updated before changing the cluster.op-version.
# gluster --timeout=3600 volume set all cluster.op-version 31302
- Execute the following steps to enable server.tcp-user-timeout on all volumes.
Note
The "server.tcp-user-timeout" option specifies the maximum amount of the time (in seconds) the transmitted data from the application can remain unacknowledged from the brick.It is used to detect force disconnections and dead connections (if a node dies unexpectedly, a firewall is activated, etc.,) early and make it possible for applications to reduce the overall failover time.- List the glusterfs pod using the following command:
# oc get pods
For example:
# oc get pods
NAME                             READY     STATUS    RESTARTS   AGE
glusterfs-0h68l                  1/1       Running   0          3d
glusterfs-0vcf3                  1/1       Running   0          3d
glusterfs-gr9gh                  1/1       Running   0          3d
storage-project-router-2-db2wl   1/1       Running   0          4d
- Remote shell into one of the glusterfs pods. For example:
# oc rsh glusterfs-0vcf3
- Execute the following command:
# for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
For example:
# for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
volume1
volume set: success
volume2
volume set: success
- If a gluster-block provisioner pod already exists, then delete it by executing the following commands:
# oc delete dc <gluster-block-dc>
For example:
# oc delete dc glusterblock-storage-provisioner-dc
- Execute the following commands to deploy the gluster-block provisioner:
# sed -e 's/\\\${NAMESPACE}/<NAMESPACE>/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
# oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:<NAMESPACE>:glusterblock-provisioner
For example:
# sed -e 's/\\\${NAMESPACE}/storage-project/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
# oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:storage-project:glusterblock-provisioner
- Delete the following resources from the old pod:
# oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
# oc delete serviceaccounts glusterblock-storage-provisioner
- After editing the template, execute the following command to create the deployment configuration:
# oc process <gluster_block_provisioner_template> | oc create -f -
- Brick multiplexing is a feature that allows adding multiple bricks into one process. This reduces resource consumption and allows us to run more bricks than before with the same memory consumption. It is enabled by default from Container-Native Storage 3.6. During an upgrade from Container-Native Storage 3.9 to Red Hat Openshift Container Storage 3.10, to turn brick multiplexing on, execute the following commands:
- To exec into the Gluster pod, execute the following command and rsh into any of the gluster pods:
# oc rsh <gluster_pod_name>
- Verify if brick multiplexing is enabled. If it is disabled, then execute the following command to enable brick multiplexing:
# gluster volume set all cluster.brick-multiplex on
Note
You can check the brick multiplex status by executing the following command:
# gluster v get all all
For example:
# oc rsh glusterfs-770ql
sh-4.2# gluster volume set all cluster.brick-multiplex on
Brick-multiplexing is supported only for container workloads (Independent/Converged). Also it is advised to make sure that either all volumes are in stopped state or no bricks are running before this option is modified. Do you still want to continue? (y/n) y
volume set: success
- List all the volumes in the trusted storage pool. This step is only required if the volume set operation is performed. For example:
# gluster volume list
heketidbstorage
vol_194049d2565d2a4ad78ef0483e04711e
...
...
Restart all the volumes. This step is only required if the volume set operation is performed along with the previous step:
# gluster vol stop <VOLNAME>
# gluster vol start <VOLNAME>
- Support for S3 compatible Object Store in Red Hat Openshift Container Storage is under technology preview. To enable S3 compatible object store, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html/operations_guide/s3_object_store.
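As noted in the pod deletion step above, each old gluster pod must be deleted only after the replacement created by the previous deletion is fully Ready and self-heal has completed. The following is a minimal, non-authoritative sketch of that delete-and-wait loop; the glusterfs=storage-pod label selector, the node names, and the sleep intervals are assumptions to adapt to your cluster.

# Roll through the gluster nodes one at a time: delete the old pod,
# wait for its replacement, then wait for self-heal to finish.
for node in node1.example.com node2.example.com node3.example.com; do
    old_pod=$(oc get pods -l glusterfs=storage-pod -o wide --no-headers \
              | grep -w "$node" | awk '{print $1}')
    oc delete pod "$old_pod"

    # Wait until the replacement pod on this node is 1/1 Running.
    until oc get pods -l glusterfs=storage-pod -o wide --no-headers \
          | grep -w "$node" | grep -v "^$old_pod " | grep -q "1/1 *Running"; do
        sleep 10
    done

    new_pod=$(oc get pods -l glusterfs=storage-pod -o wide --no-headers \
              | grep -w "$node" | grep -v "^$old_pod " | awk '{print $1}')

    # Block until no volume reports pending heal entries.
    while oc rsh "$new_pod" bash -c \
          'for v in $(gluster volume list); do gluster volume heal $v info; done' \
          | grep -q "Number of entries: [^0]"; do
        sleep 30
    done
done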
Note
- If you have glusterfs registry pods, then proceed with the steps listed in Section 6.2, “Upgrading heketi and glusterfs registry pods” to upgrade heketi and glusterfs registry pods.
- If you do not have glusterfs registry pods, then proceed with the steps listed in Section 6.3, “Upgrading the client on Red Hat Openshift Container Platform Nodes” to upgrade the client on Red Hat Openshift Container Platform Nodes.
6.1.4. Upgrading if existing version deployed by using Ansible
6.1.4.1. Upgrading Heketi Server
The following commands must be executed on the client machine.
- Execute the following command to update the heketi client packages:
# yum update heketi-client -y
- Back up the Heketi database file:
# oc rsh <heketi_pod_name>
# cp -a /var/lib/heketi/heketi.db /var/lib/heketi/heketi.db.`date +%s`.`heketi --version | awk '{print $2}'`
# exit
If you also want a copy of this backup outside the pod, see the sketch after this procedure.
- Execute the following command to get the current HEKETI_ADMIN_KEY. The OCS admin can choose any phrase for the user key as long as it is not used by their infrastructure. It is not used by any of the default installed OCS resources.
oc get secret heketi-storage-admin-secret -o jsonpath='{.data.key}'|base64 -d;echo
- Execute the following step to edit the template:
# oc get templates
NAME                       DESCRIPTION                 PARAMETERS    OBJECTS
glusterblock-provisioner   glusterblock provisioner    3 (2 blank)   4
glusterfs                  GlusterFS DaemonSet         5 (1 blank)   1
heketi                     Heketi service deployment   7 (3 blank)   3
If the existing template has IMAGE_NAME and IMAGE_VERSION as two parameters, then edit the template to change the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, HEKETI_ROUTE, IMAGE_NAME, IMAGE_VERSION, and CLUSTER_NAME as shown in the example below.
# oc edit template heketi
parameters:
- description: Set secret for those creating volumes as type _user_
  displayName: Heketi User Secret
  name: HEKETI_USER_KEY
  value: <heketiuserkey>
- description: Set secret for administration of the Heketi service as user _admin_
  displayName: Heketi Administrator Secret
  name: HEKETI_ADMIN_KEY
  value: <adminkey>
- description: Set the executor type, kubernetes or ssh
  displayName: heketi executor type
  name: HEKETI_EXECUTOR
  value: kubernetes
- description: Set the hostname for the route URL
  displayName: heketi route name
  name: HEKETI_ROUTE
  value: heketi-storage
- displayName: heketi container image name
  name: IMAGE_NAME
  required: true
  value: rhgs3/rhgs-volmanager-rhel7
- displayName: heketi container image version
  name: IMAGE_VERSION
  required: true
  value: v3.10
- description: A unique name to identify this heketi service, useful for running multiple heketi instances
  displayName: GlusterFS cluster name
  name: CLUSTER_NAME
  value: storage
If the template has only IMAGE_NAME, then edit the template to change the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, HEKETI_ROUTE, IMAGE_NAME, and CLUSTER_NAME as shown in the example below.
# oc edit template heketi
parameters:
- description: Set secret for those creating volumes as type _user_
  displayName: Heketi User Secret
  name: HEKETI_USER_KEY
  value: <heketiuserkey>
- description: Set secret for administration of the Heketi service as user _admin_
  displayName: Heketi Administrator Secret
  name: HEKETI_ADMIN_KEY
  value: <adminkey>
- description: Set the executor type, kubernetes or ssh
  displayName: heketi executor type
  name: HEKETI_EXECUTOR
  value: kubernetes
- description: Set the hostname for the route URL
  displayName: heketi route name
  name: HEKETI_ROUTE
  value: heketi-storage
- displayName: heketi container image name
  name: IMAGE_NAME
  required: true
  value: rhgs3/rhgs-volmanager-rhel7:v3.10
- description: A unique name to identify this heketi service, useful for running multiple heketi instances
  displayName: GlusterFS cluster name
  name: CLUSTER_NAME
  value: storage
- Execute the following command to delete the deployment configuration, service, and route for heketi:
Note
The names of these resources can be referenced from the output of the following command:
# oc get all | grep heketi
# oc delete deploymentconfig,service,route heketi-storage
- Execute the following command to deploy the Heketi service, route, and deployment configuration which will be used to create persistent volumes for OpenShift:
# oc process heketi | oc create -f -
service "heketi" created
route "heketi" created
deploymentconfig "heketi" created
- Execute the following command to verify that the containers are running:
# oc get pods
For example:
# oc get pods
NAME                             READY     STATUS    RESTARTS   AGE
glusterfs-0h68l                  1/1       Running   0          3d
glusterfs-0vcf3                  1/1       Running   0          3d
glusterfs-gr9gh                  1/1       Running   0          3d
heketi-1-zpw4d                   1/1       Running   0          3h
storage-project-router-2-db2wl   1/1       Running   0          4d
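The backup created inside the heketi pod above is stored on the heketi volume. If you also want a copy outside the pod, the following is a minimal sketch using oc cp; <heketi_pod_name> is the same pod used for the backup step, and the local directory is an example value. This assumes the tar binary is available inside the pod, which oc cp requires.

# Copy the most recent heketi.db backup out of the heketi pod.
backup_file=$(oc exec <heketi_pod_name> -- ls -1t /var/lib/heketi/ | grep '^heketi\.db\.' | head -1)
mkdir -p /root/heketi-backups
oc cp <heketi_pod_name>:/var/lib/heketi/$backup_file /root/heketi-backups/$backup_file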
6.1.4.2. Upgrading the Red Hat Gluster Storage Pods
The following commands must be executed on the client machine.
Following are the steps for updating a DaemonSet for glusterfs:
- Execute the following steps to stop the Heketi pod to prevent it from accepting any new request for volume creation or volume deletion:
- Execute the following command to access your project:
# oc project <project_name>
For example:
# oc project storage-project
- Execute the following command to get the DeploymentConfig:
# oc get dc
- Execute the following command to set heketi server to accept requests only from the local-client:
# heketi-cli server mode set local-client
- Wait for the ongoing operations to complete and execute the following command to monitor if there are any ongoing operations:
# heketi-cli server operations info
- Execute the following command to reduce the replica count from 1 to 0. This brings down the Heketi pod:
# oc scale dc <heketi_dc> --replicas=0
- Execute the following command to verify that the heketi pod is no longer present:
# oc get pods
- Execute the following command to find the DaemonSet name for gluster
# oc get ds
- Execute the following command to delete the DaemonSet:
# oc delete ds <ds-name> --cascade=false
Using the --cascade=false option while deleting the old DaemonSet does not delete the gluster pods but deletes only the DaemonSet. After deleting the old DaemonSet, you must load the new one. When you manually delete the old pods, the new pods which are created will have the configurations of the new DaemonSet.
For example,
# oc delete ds glusterfs-storage --cascade=false
daemonset "glusterfs-storage" deleted
- Execute the following commands to verify all the old pods are up:
# oc get pods
For example,
# oc get pods
NAME                             READY     STATUS    RESTARTS   AGE
glusterfs-0h68l                  1/1       Running   0          3d
glusterfs-0vcf3                  1/1       Running   0          3d
glusterfs-gr9gh                  1/1       Running   0          3d
storage-project-router-2-db2wl   1/1       Running   0          4d
- Execute the following command to edit the old glusterfs template.
# oc get templates
NAME                       DESCRIPTION                 PARAMETERS    OBJECTS
glusterblock-provisioner   glusterblock provisioner    3 (2 blank)   4
glusterfs                  GlusterFS DaemonSet         5 (1 blank)   1
heketi                     Heketi service deployment   7 (3 blank)   3
If the template has IMAGE_NAME and IMAGE_VERSION as two separate parameters, then update the glusterfs template as follows. For example:
# oc edit template glusterfs
- displayName: GlusterFS container image name
  name: IMAGE_NAME
  required: true
  value: rhgs3/rhgs-server-rhel7
- displayName: GlusterFS container image version
  name: IMAGE_VERSION
  required: true
  value: v3.10
- description: A unique name to identify which heketi service manages this cluster, useful for running multiple heketi instances
  displayName: GlusterFS cluster name
  name: CLUSTER_NAME
  value: storage
If the template has only IMAGE_NAME as a parameter, then update the glusterfs template as follows. For example:
# oc edit template glusterfs
- displayName: GlusterFS container image name
  name: IMAGE_NAME
  required: true
  value: rhgs3/rhgs-server-rhel7:v3.10
- description: A unique name to identify which heketi service manages this cluster, useful for running multiple heketi instances
  displayName: GlusterFS cluster name
  name: CLUSTER_NAME
  value: storage
Note
Ensure that the CLUSTER_NAME variable is set to the correct value.
- Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
- Check if the nodes are labelled using the following command:
# oc get nodes --show-labels
If the Red Hat Gluster Storage nodes do not have the glusterfs=storage-host label, then label the nodes as shown in step ii.
- Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
# oc label nodes <node name> glusterfs=storage-host
- Execute the following commands to create the gluster DaemonSet:
# oc process glusterfs | oc create -f -
For example,
# oc process glusterfs | oc create -f -
daemonset "glusterfs" created
- Execute the following command to identify the old gluster pods that need to be deleted:
# oc get pods
For example,
# oc get pods
NAME                             READY     STATUS    RESTARTS   AGE
glusterfs-0h68l                  1/1       Running   0          3d
glusterfs-0vcf3                  1/1       Running   0          3d
glusterfs-gr9gh                  1/1       Running   0          3d
storage-project-router-2-db2wl   1/1       Running   0          4d
- Execute the following command and ensure that the bricks are not more than 90% full:
# df -kh | grep -v ^Filesystem | awk '{if($5>"90%") print $0}'
- Execute the following command to delete the old gluster pods.
Gluster pods should follow a rolling upgrade. Hence, you must ensure that the new pod is running before deleting the next old gluster pod. We support only the OnDelete DaemonSet update strategy. With the OnDelete update strategy, after you update a DaemonSet template, new DaemonSet pods will only be created when you manually delete old DaemonSet pods.
- To delete the old gluster pods, execute the following command:
# oc delete pod <gluster_pod>
For example,
# oc delete pod glusterfs-0vcf3
pod "glusterfs-0vcf3" deleted
Note
Before deleting the next pod, a self-heal check has to be made:
- Run the following command to access a shell on the gluster pod:
# oc rsh <gluster_pod_name>
- Run the following command to check the self-heal status of all the volumes:
for each_volume in `gluster volume list`; do gluster volume heal $each_volume info ; done | grep "Number of entries: [^0]$"
- The delete pod command will terminate the old pod and create a new pod. Run # oc get pods -w and check that the Age of the pod and the READY status is 1/1. The following is example output showing the status progression from termination to creation of the pod.
# oc get pods -w
NAME              READY     STATUS        RESTARTS   AGE
glusterfs-0vcf3   1/1       Terminating   0          3d
…
- Execute the following command to verify that the pods are running:
# oc get pods
For example,
# oc get pods
NAME                             READY     STATUS    RESTARTS   AGE
glusterfs-j241c                  1/1       Running   0          4m
glusterfs-pqfs6                  1/1       Running   0          7m
glusterfs-wrn6n                  1/1       Running   0          12m
storage-project-router-2-db2wl   1/1       Running   0          4d
- Execute the following command to verify if you have upgraded the pod to the latest version (a loop that runs this check on every gluster pod is sketched after this procedure):
# oc rsh <gluster_pod_name> glusterd --version
For example:
# oc rsh glusterfs-registry-4cpcc glusterd --version
glusterfs 3.12.2
- Check the Red Hat Gluster Storage op-version by executing the following command on one of the gluster pods.
# gluster vol get all cluster.op-version
- Set the cluster.op-version to 31302 on any one of the pods:
Note
Ensure all the gluster pods are updated before changing the cluster.op-version.
# gluster --timeout=3600 volume set all cluster.op-version 31302
- Execute the following steps to enable server.tcp-user-timeout on all volumes.
Note
The "server.tcp-user-timeout" option specifies the maximum amount of the time (in seconds) the transmitted data from the application can remain unacknowledged from the brick.It is used to detect force disconnections and dead connections (if a node dies unexpectedly, a firewall is activated, etc.,) early and make it possible for applications to reduce the overall failover time.- List the glusterfs pod using the following command:
# oc get pods
For example:
# oc get pods
NAME                             READY     STATUS    RESTARTS   AGE
glusterfs-0h68l                  1/1       Running   0          3d
glusterfs-0vcf3                  1/1       Running   0          3d
glusterfs-gr9gh                  1/1       Running   0          3d
storage-project-router-2-db2wl   1/1       Running   0          4d
- Remote shell into one of the glusterfs pods. For example:
# oc rsh glusterfs-0vcf3
- Execute the following command:
# for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
For example:
# for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
volume1
volume set: success
volume2
volume set: success
- If a gluster-block provisioner pod already exists, then delete it by executing the following commands:
# oc delete dc <gluster-block-dc>
For example:
# oc delete dc glusterblock-storage-provisioner-dc
- Depending on the OCP version, edit the glusterblock-provisioner template to change the IMAGE_NAME, IMAGE_VERSION, and NAMESPACE.
# oc get templates
NAME                       DESCRIPTION                 PARAMETERS    OBJECTS
glusterblock-provisioner   glusterblock provisioner    3 (2 blank)   4
glusterfs                  GlusterFS DaemonSet         5 (1 blank)   1
heketi                     Heketi service deployment   7 (3 blank)   3
If the template has IMAGE_NAME and IMAGE_VERSION as two separate parameters, then update the glusterblock-provisioner template as follows. For example:
# oc edit template glusterblock-provisioner
- displayName: glusterblock provisioner container image name
  name: IMAGE_NAME
  required: true
  value: rhgs3/rhgs-gluster-block-prov-rhel7
- displayName: glusterblock provisioner container image version
  name: IMAGE_VERSION
  required: true
  value: v3.10
- description: The namespace in which these resources are being created
  displayName: glusterblock provisioner namespace
  name: NAMESPACE
  required: true
  value: glusterfs
- description: A unique name to identify which heketi service manages this cluster, useful for running multiple heketi instances
  displayName: GlusterFS cluster name
  name: CLUSTER_NAME
  value: storage
If the template has only IMAGE_NAME as a parameter, then update the glusterblock-provisioner template as follows. For example:
# oc edit template glusterblock-provisioner
- displayName: glusterblock provisioner container image name
  name: IMAGE_NAME
  required: true
  value: rhgs3/rhgs-gluster-block-prov-rhel7:v3.10
- description: The namespace in which these resources are being created
  displayName: glusterblock provisioner namespace
  name: NAMESPACE
  required: true
  value: glusterfs
- description: A unique name to identify which heketi service manages this cluster, useful for running multiple heketi instances
  displayName: GlusterFS cluster name
  name: CLUSTER_NAME
  value: storage
- Delete the following resources from the old pod:
# oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
# oc delete serviceaccounts glusterblock-storage-provisioner
- After editing the template, execute the following command to create the deployment configuration:
# oc process <gluster_block_provisioner_template> | oc create -f -
- Brick multiplexing is a feature that allows adding multiple bricks into one process. This reduces resource consumption and allows us to run more bricks than before with the same memory consumption. It is enabled by default from Container-Native Storage 3.6. During an upgrade from Container-Native Storage 3.9 to Red Hat Openshift Container Storage 3.10, to turn brick multiplexing on, execute the following commands:
- To exec into the Gluster pod, execute the following command and rsh into any of the gluster pods:
# oc rsh <gluster_pod_name>
- Verify if brick multiplexing is enabled. If it is disabled, then execute the following command to enable brick multiplexing:
# gluster volume set all cluster.brick-multiplex on
Note
You can check the brick multiplex status by executing the following command:
# gluster v get all all
For example:
# oc rsh glusterfs-770ql
sh-4.2# gluster volume set all cluster.brick-multiplex on
Brick-multiplexing is supported only for container workloads (Independent/Converged). Also it is advised to make sure that either all volumes are in stopped state or no bricks are running before this option is modified. Do you still want to continue? (y/n) y
volume set: success
- List all the volumes in the trusted storage pool. This step is only required if the volume set operation is performed. For example:
# gluster volume list
heketidbstorage
vol_194049d2565d2a4ad78ef0483e04711e
...
...
Restart all the volumes. This step is only required if the volume set operation is performed along with the previous step:
# gluster vol stop <VOLNAME>
# gluster vol start <VOLNAME>
- Support for S3 compatible Object Store in Red Hat Openshift Container Storage is under technology preview. To enable S3 compatible object store, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html/operations_guide/s3_object_store.
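To confirm that every gluster pod is now running the upgraded image, you can loop the version check from the verification step above over all pods. The following is a minimal sketch; the glusterfs=storage-pod label selector is an assumption and should match the label shown by oc get pods --show-labels in your cluster.

# Print the glusterfs version reported by each gluster pod.
for pod in $(oc get pods -l glusterfs=storage-pod -o name); do
    echo "== $pod =="
    oc rsh "$pod" glusterd --version | head -1
done

# Optionally confirm the cluster op-version on one of the pods as well.
first_pod=$(oc get pods -l glusterfs=storage-pod -o name | head -1)
oc rsh "$first_pod" gluster volume get all cluster.op-version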
Note
- If you have glusterfs registry pods, then proceed with the steps listed in Section 6.2, “Upgrading heketi and glusterfs registry pods” to upgrade heketi and glusterfs registry pods.
- If you do not have glusterfs registry pods, then proceed with the steps listed in Section 6.3, “Upgrading the client on Red Hat Openshift Container Platform Nodes” to upgrade the client on Red Hat Openshift Container Platform Nodes.