6.2. Upgrading heketi and glusterfs registry pods
The following sections provide steps to upgrade your heketi and glusterfs registry pods.
6.2.1. Prerequisites
Ensure the following prerequisites are met:
- Ensure that you have supported versions of OpenShift Container Platform with Red Hat Gluster Storage Server and Red Hat Openshift Container Storage. For more information on supported versions, see Section 3.1.1, “Supported Versions”.
- Run the following command to get the latest version of the Ansible templates:
# yum update openshift-ansible
Note
The template files are available in the following locations:
- gluster template - /usr/share/heketi/templates/glusterfs-template.yaml
- heketi template - /usr/share/heketi/templates/heketi-template.yaml
- glusterblock-provisioner template - /usr/share/heketi/templates/glusterblock-provisioner.yaml
6.2.2. Upgrading if existing version deployed by using cns-deploy
6.2.2.1. Upgrading cns-deploy and Heketi Server
The following commands must be executed on the client machine.
- Back up the Heketi registry database file:
# oc rsh <heketi_pod_name>
# cp -a /var/lib/heketi/heketi.db /var/lib/heketi/heketi.db.`date +%s`.`heketi --version | awk '{print $2}'`
# exit
- Execute the following command to delete the heketi template:
# oc delete templates heketi
- Execute the following command to get the current HEKETI_ADMIN_KEY. The OCS administrator can choose any phrase for the user key as long as it is not used by their infrastructure. It is not used by any of the OCS default installed resources.
# oc get secret heketi-storage-admin-secret -o jsonpath='{.data.key}'|base64 -d;echo
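For convenience, you can keep the retrieved key in a shell variable so it can be pasted into the template later. This is an optional sketch, not part of the documented procedure, and assumes a Bash shell on the client machine:
# HEKETI_ADMIN_KEY=$(oc get secret heketi-storage-admin-secret -o jsonpath='{.data.key}' | base64 -d)
# echo "$HEKETI_ADMIN_KEY"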
- Execute the following command to install the heketi template:
# oc create -f /usr/share/heketi/templates/heketi-template.yaml
template "heketi" created
- Execute the following commands to grant the heketi Service Account the necessary privileges:
# oc policy add-role-to-user edit system:serviceaccount:<project_name>:heketi-service-account
# oc adm policy add-scc-to-user privileged -z heketi-service-account
For example:
# oc policy add-role-to-user edit system:serviceaccount:storage-project:heketi-service-account
# oc adm policy add-scc-to-user privileged -z heketi-service-account
- Execute the following command to generate a new heketi configuration file:
# sed -e "s/\${HEKETI_EXECUTOR}/kubernetes/" -e "s#\${HEKETI_FSTAB}#/var/lib/heketi/fstab#" -e "s/\${SSH_PORT}/22/" -e "s/\${SSH_USER}/root/" -e "s/\${SSH_SUDO}/false/" -e "s/\${BLOCK_HOST_CREATE}/true/" -e "s/\${BLOCK_HOST_SIZE}/500/" "/usr/share/heketi/templates/heketi.json.template" > heketi.json
- The BLOCK_HOST_SIZE parameter controls the size (in GB) of the automatically created Red Hat Gluster Storage volumes hosting the gluster-block volumes (for more information, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html/operations_guide/block_storage). This default configuration will dynamically create block-hosting volumes of 500 GB in size as more space is required.
- Alternatively, copy the file /usr/share/heketi/templates/heketi.json.template to heketi.json in the current directory and edit the new file directly, replacing each "${VARIABLE}" string with the required parameter.
Note
JSON formatting is strictly required (e.g. no trailing spaces, booleans in all lowercase).
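Because strict JSON is required, it can help to validate the generated file before loading it into the secret. A minimal check, assuming Python is available on the client machine (this step is an addition, not part of the documented procedure):
# python -m json.tool heketi.json > /dev/null && echo "heketi.json is valid JSON"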
Note
If the heketi-config-secret file already exists, delete it before running the following command.
- Execute the following command to create a secret to hold the configuration file:
# oc create secret generic heketi-config-secret --from-file=heketi.json
- Execute the following command to delete the deployment configuration, service, and route for heketi:
# oc delete deploymentconfig,service,route heketi
- Execute the following command to edit the heketi template, updating the HEKETI_USER_KEY and HEKETI_ADMIN_KEY parameters (a sketch follows):
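The original command block is not reproduced here. As a minimal sketch, the template can be edited in place with oc edit, assuming the template object is named heketi in the current project; set the value fields of the HEKETI_USER_KEY and HEKETI_ADMIN_KEY parameters to the phrases retrieved earlier:
# oc edit template heketi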
- Execute the following command to deploy the Heketi service, route, and deployment configuration which will be used to create persistent volumes for OpenShift:
# oc process heketi | oc create -f -
service "heketi-registry" created
route "heketi-registry" created
deploymentconfig "heketi-registry" created
- Execute the following command to verify that the containers are running:
# oc get pods
6.2.2.2. Upgrading the Red Hat Gluster Storage Registry Pods
The following commands must be executed on the client machine.
Following are the steps for updating a DaemonSet for glusterfs:
- Execute the following steps to stop the Heketi pod to prevent it from accepting any new request for volume creation or volume deletion:
- Execute the following command to access your project:
# oc project <project_name>
For example:
# oc project storage-project
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Execute the following command to get the
DeploymentConfig
:oc get dc
# oc get dc
- Execute the following command to set the heketi server to accept requests only from the local-client:
# heketi-cli server mode set local-client
- Wait for the ongoing operations to complete, and execute the following command to monitor whether there are any ongoing operations:
# heketi-cli server operations info
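If you prefer to wait non-interactively, the operations info can be polled until nothing is in flight. A rough sketch; it assumes the command's output contains an "In-Flight: 0" line when the server is idle, which may differ between heketi versions, so check your output first:
# until heketi-cli server operations info | grep -q 'In-Flight: 0'; do sleep 10; done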
- Execute the following command to reduce the replica count from 1 to 0. This brings down the Heketi pod:
# oc scale dc <heketi_dc> --replicas=0
- Execute the following command to verify that the heketi pod is no longer present:
# oc get pods
- Execute the following command to find the DaemonSet name for gluster:
# oc get ds
- Execute the following command to delete the DaemonSet:
# oc delete ds <ds-name> --cascade=false
Using the --cascade=false option while deleting the old DaemonSet does not delete the glusterfs_registry pods but deletes only the DaemonSet. After deleting the old DaemonSet, you must load the new one. When you manually delete the old pods, the new pods which are created will have the configurations of the new DaemonSet.
For example:
# oc delete ds glusterfs-registry --cascade=false
daemonset "glusterfs-registry" deleted
- Execute the following command to verify all the old pods are up:
# oc get pods
- Execute the following command to delete the old glusterfs template:
# oc delete templates glusterfs
template "glusterfs" deleted
- Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
- Check if the nodes are labelled using the following command:
# oc get nodes --show-labels
If the Red Hat Gluster Storage nodes do not have the storagenode=glusterfs label, then label the nodes as shown in step ii.
- Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
# oc label nodes <node name> storagenode=glusterfs
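If several nodes need the label, the command can be looped. A sketch with hypothetical node names; substitute the names of your Red Hat Gluster Storage nodes:
# for n in node1.example.com node2.example.com node3.example.com; do oc label nodes $n storagenode=glusterfs; done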
- Execute the following command to register the new gluster template:
# oc create -f /usr/share/heketi/templates/glusterfs-template.yaml
template "glusterfs" created
- Execute the following command to create the gluster DaemonSet:
# oc process glusterfs | oc create -f -
daemonset "glusterfs" created
- Execute the following command to identify the old glusterfs_registry pods that need to be deleted:
# oc get pods
- Execute the following command and ensure that the bricks are not more than 90% full:
# df -kh | grep -v ^Filesystem | awk '{if($5>"90%") print $0}'
- Execute the following command to delete the old glusterfs-registry pods.
glusterfs-registry pods should follow a rolling upgrade. Hence, you must ensure that the new pod is running before deleting the next old glusterfs-registry pod. We support the OnDelete DaemonSet update strategy. With the OnDelete update strategy, after you update a DaemonSet template, new DaemonSet pods will only be created when you manually delete old DaemonSet pods.
- To delete the old glusterfs-registry pods, execute the following command:
# oc delete pod <gluster_pod>
For example:
# oc delete pod glusterfs-0vcf3
pod "glusterfs-0vcf3" deleted
Note
Before deleting the next pod, a self-heal check has to be made:
- Run the following command to access a shell on the glusterfs-registry pod:
# oc rsh <gluster_pod_name>
- Run the following command to check the self-heal status of all the volumes:
# for each_volume in `gluster volume list`; do gluster volume heal $each_volume info ; done | grep "Number of entries: [^0]$"
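To block until all heals have completed instead of re-running the check by hand, the same pipeline can be polled in a loop. A sketch, not part of the documented procedure, run inside the glusterfs-registry pod; it exits once no volume reports pending heal entries:
# while for each_volume in `gluster volume list`; do gluster volume heal $each_volume info; done | grep -q "Number of entries: [^0]$"; do sleep 30; done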
- The delete pod command will terminate the old pod and create a new pod. Run # oc get pods -w and check the Age of the pod; the READY status should be 1/1. The following is example output showing the status progression from termination to creation of the pod:
# oc get pods -w
NAME              READY   STATUS        RESTARTS   AGE
glusterfs-0vcf3   1/1     Terminating   0          3d
…
- Execute the following command to verify that the pods are running:
# oc get pods
- Execute the following commands to verify that you have upgraded the pod to the latest version:
# oc rsh <gluster_registry_pod_name> glusterd --version
For example:
# oc rsh glusterfs-registry-4cpcc glusterd --version
glusterfs 3.12.2
# rpm -qa|grep gluster
- Check the Red Hat Gluster Storage op-version by executing the following command on one of the glusterfs-registry pods:
# gluster vol get all cluster.op-version
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Set the cluster.op-version to 31302 on any one of the pods:
Note
Ensure all the glusterfs-registry pods are updated before changing the cluster.op-version.gluster volume set all cluster.op-version 31302
# gluster volume set all cluster.op-version 31302
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
- Execute the following steps to enable server.tcp-user-timeout on all volumes.
Note
The server.tcp-user-timeout option specifies the maximum amount of time (in seconds) that transmitted data from the application can remain unacknowledged by the brick. It is used to detect force disconnections and dead connections (for example, if a node dies unexpectedly or a firewall is activated) early, making it possible for applications to reduce the overall failover time.
- List the glusterfs pods using the following command:
# oc get pods
- Remote shell into one of the glusterfs-registry pods. For example:
# oc rsh glusterfs-registry-g6vd9
- Execute the following command:
# for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
For example:
# for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
volume1
volume set: success
volume2
volume set: success
- If a gluster-block-registry-provisioner pod already exists, delete it by executing the following commands:
# oc delete dc <gluster-block-registry-dc>
For example:
# oc delete dc glusterblock-registry-provisioner-dc
- Execute the following commands to deploy the gluster-block provisioner:
# sed -e 's/\\\${NAMESPACE}/<NAMESPACE>/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
# oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:<NAMESPACE>:glusterblock-provisioner
For example:
# sed -e 's/\\\${NAMESPACE}/storage-project/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
# oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:storage-project:glusterblock-provisioner
- Delete the following resources from the old pod:
# oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
# oc delete serviceaccounts glusterblock-registry-provisioner
- After editing the template, execute the following command to create the deployment configuration:
# oc process <gluster_block_provisioner_template> | oc create -f -
- Brick multiplexing is a feature that allows adding multiple bricks into one process. This reduces resource consumption and allows us to run more bricks than before with the same memory consumption. It is enabled by default from Container-Native Storage 3.6. During an upgrade from Container-Native Storage 3.9 to Red Hat Openshift Container Storage 3.10, to turn brick multiplexing on, execute the following commands:
- To exec into the Gluster pod, execute the following command and rsh into any of the glusterfs_registry pods:
# oc rsh <gluster_pod_name>
- Verify whether brick multiplexing is enabled. If it is disabled, execute the following command to enable brick multiplexing:
# gluster volume set all cluster.brick-multiplex on
Note
You can check the brick multiplex status by executing the following command:
# gluster v get all all
For example:
# oc rsh glusterfs-registry-g6vd9
sh-4.2# gluster volume set all cluster.brick-multiplex on
Brick-multiplexing is supported only for container workloads (Independent/Converged). Also it is advised to make sure that either all volumes are in stopped state or no bricks are running before this option is modified. Do you still want to continue? (y/n) y
volume set: success
- List all the volumes in the trusted storage pool. This step is only required if the volume set operation is performed:
# gluster volume list
- Restart all the volumes. This step is only required if the volume set operation is performed along with the previous step (a scripted variant follows):
# gluster vol stop <VOLNAME>
# gluster vol start <VOLNAME>
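If many volumes must be restarted, the stop/start cycle can be scripted. A sketch, not part of the documented procedure, run inside a gluster pod; gluster's --mode=script flag suppresses the interactive stop confirmation. Note that stopping a volume interrupts any clients that are using it:
# for v in `gluster volume list`; do gluster --mode=script volume stop $v; gluster volume start $v; done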
- Support for S3 compatible Object Store in Red Hat Openshift Container Storage is under technology preview. To enable S3 compatible object store, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html/operations_guide/s3_object_store.
Note
- After upgrading the glusterfs registry pods, proceed with the steps listed in Section 6.3, “Upgrading the client on Red Hat Openshift Container Platform Nodes” to upgrade the client on Red Hat Openshift Container Platform Nodes.
6.2.3. Upgrading if existing version deployed by using Ansible
6.2.3.1. Upgrading Heketi Server
The following commands must be executed on the client machine.
Note
"yum update cns-deploy -y" is not required to be executed if OCS 3.9 was deployed via Ansible.
- Back up the Heketi registry database file:
# oc rsh <heketi_pod_name>
# cp -a /var/lib/heketi/heketi.db /var/lib/heketi/heketi.db.`date +%s`.`heketi --version | awk '{print $2}'`
# exit
- Execute the following command to get the current HEKETI_ADMIN_KEY. The OCS administrator can choose any phrase for the user key as long as it is not used by their infrastructure. It is not used by any of the OCS default installed resources.
# oc get secret heketi-storage-admin-secret -o jsonpath='{.data.key}'|base64 -d;echo
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Execute the following step to edit the template:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow If the existing template has IMAGE_NAME and IMAGE_VERSION as two parameters, then edit the template to change the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, HEKETI_ROUTE, IMAGE_NAME, IMAGE_VERSION, and CLUSTER_NAME as shown in the example below.Copy to Clipboard Copied! Toggle word wrap Toggle overflow If the template has only IMAGE_NAME, then edit the template to change the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, HEKETI_ROUTE, IMAGE_NAME and CLUSTER_NAME as shown in the example below.Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Execute the following command to delete the deployment configuration, service, and route for heketi:
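Before editing, you can inspect which parameters the existing template defines, which tells you whether IMAGE_NAME and IMAGE_VERSION are separate. A sketch using oc; the template name heketi is an assumption, check your own template list first:
# oc get template heketi -o yaml | grep -E 'name: (HEKETI_USER_KEY|HEKETI_ADMIN_KEY|HEKETI_ROUTE|IMAGE_NAME|IMAGE_VERSION|CLUSTER_NAME)'
# oc edit template heketi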
- Execute the following command to delete the deployment configuration, service, and route for heketi:
# oc delete deploymentconfig,service,route heketi-registry
- Execute the following command to deploy the Heketi service, route, and deployment configuration which will be used to create persistent volumes for OpenShift:
# oc process heketi | oc create -f -
service "heketi-registry" created
route "heketi-registry" created
deploymentconfig "heketi-registry" created
- Execute the following command to verify that the containers are running:
# oc get pods
6.2.3.2. Upgrading the Red Hat Gluster Storage Registry Pods
The following commands must be executed on the client machine.
Following are the steps for updating a DaemonSet for glusterfs:
- Execute the following steps to stop the Heketi pod to prevent it from accepting any new request for volume creation or volume deletion:
- Execute the following command to access your project:
# oc project <project_name>
For example:
# oc project storage-project
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Execute the following command to get the
DeploymentConfig
:oc get dc
# oc get dc
- Execute the following command to set the heketi server to accept requests only from the local-client:
# heketi-cli server mode set local-client
- Wait for the ongoing operations to complete, and execute the following command to monitor whether there are any ongoing operations:
# heketi-cli server operations info
- Execute the following command to reduce the replica count from 1 to 0. This brings down the Heketi pod:
# oc scale dc <heketi_dc> --replicas=0
- Execute the following command to verify that the heketi pod is no longer present:
# oc get pods
- Execute the following command to find the DaemonSet name for gluster:
# oc get ds
- Execute the following command to delete the DaemonSet:
# oc delete ds <ds-name> --cascade=false
Using the --cascade=false option while deleting the old DaemonSet does not delete the glusterfs_registry pods but deletes only the DaemonSet. After deleting the old DaemonSet, you must load the new one. When you manually delete the old pods, the new pods which are created will have the configurations of the new DaemonSet.
For example:
# oc delete ds glusterfs-registry --cascade=false
daemonset "glusterfs-registry" deleted
- Execute the following command to verify all the old pods are up:
# oc get pods
- Execute the following command to edit the old glusterfs template:
If the template has IMAGE_NAME and IMAGE_VERSION as two separate parameters, then update the glusterfs template to change both. If the template has only IMAGE_NAME as a parameter, then update the glusterfs template to change it. (The original example output is not reproduced here.)
Note
Ensure that the CLUSTER_NAME variable is set to the correct value.
- Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
- Check if the nodes are labelled using the following command:
# oc get nodes --show-labels
If the Red Hat Gluster Storage nodes do not have the glusterfs=registry-host label, then label the nodes as shown in step ii.
- Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
# oc label nodes <node name> glusterfs=registry-host
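As with the storagenode label in the cns-deploy procedure, the labeling can be looped when several nodes carry the Red Hat Gluster Storage pods. A sketch with hypothetical node names; substitute your own:
# for n in node1.example.com node2.example.com node3.example.com; do oc label nodes $n glusterfs=registry-host; done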
- Execute the following command to create the gluster DaemonSet:
# oc process glusterfs | oc create -f -
daemonset "glusterfs" created
- Execute the following command to identify the old glusterfs_registry pods that need to be deleted:
# oc get pods
- Execute the following command and ensure that the bricks are not more than 90% full:
# df -kh | grep -v ^Filesystem | awk '{if($5>"90%") print $0}'
- Execute the following command to delete the old glusterfs-registry pods.
glusterfs-registry pods should follow a rolling upgrade. Hence, you must ensure that the new pod is running before deleting the next old glusterfs-registry pod. We support the OnDelete DaemonSet update strategy. With the OnDelete update strategy, after you update a DaemonSet template, new DaemonSet pods will only be created when you manually delete old DaemonSet pods.
- To delete the old glusterfs-registry pods, execute the following command:
# oc delete pod <gluster_pod>
For example:
# oc delete pod glusterfs-0vcf3
pod "glusterfs-0vcf3" deleted
Note
Before deleting the next pod, a self-heal check has to be made:
- Run the following command to access a shell on the glusterfs-registry pod:
# oc rsh <gluster_pod_name>
- Run the following command to check the self-heal status of all the volumes:
# for each_volume in `gluster volume list`; do gluster volume heal $each_volume info ; done | grep "Number of entries: [^0]$"
- The delete pod command will terminate the old pod and create a new pod. Run # oc get pods -w and check the Age of the pod; the READY status should be 1/1. The following is example output showing the status progression from termination to creation of the pod:
# oc get pods -w
NAME              READY   STATUS        RESTARTS   AGE
glusterfs-0vcf3   1/1     Terminating   0          3d
…
- Execute the following command to verify that the pods are running:
# oc get pods
- Execute the following commands to verify that you have upgraded the pod to the latest version:
# oc rsh <gluster_registry_pod_name> glusterd --version
For example:
# oc rsh glusterfs-registry-4cpcc glusterd --version
glusterfs 3.12.2
# rpm -qa|grep gluster
- Check the Red Hat Gluster Storage op-version by executing the following command on one of the glusterfs-registry pods:
# gluster vol get all cluster.op-version
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Set the cluster.op-version to 31302 on any one of the pods:
Note
Ensure all the glusterfs-registry pods are updated before changing the cluster.op-version.gluster volume set all cluster.op-version 31302
# gluster volume set all cluster.op-version 31302
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
- Execute the following steps to enable server.tcp-user-timeout on all volumes.
Note
The server.tcp-user-timeout option specifies the maximum amount of time (in seconds) that transmitted data from the application can remain unacknowledged by the brick. It is used to detect force disconnections and dead connections (for example, if a node dies unexpectedly or a firewall is activated) early, making it possible for applications to reduce the overall failover time.
- List the glusterfs pods using the following command:
# oc get pods
- Remote shell into one of the glusterfs-registry pods. For example:
# oc rsh glusterfs-registry-g6vd9
- Execute the following command:
# for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
For example:
# for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
volume1
volume set: success
volume2
volume set: success
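To confirm that the option took effect, it can be queried back on any volume. A sketch, run inside the same pod, where <VOLNAME> is one of the volumes listed above:
# gluster volume get <VOLNAME> server.tcp-user-timeout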
- If a gluster-block-registry-provisioner pod already exists, delete it by executing the following commands:
# oc delete dc <gluster-block-registry-dc>
For example:
# oc delete dc glusterblock-registry-provisioner-dc
- Depending on the OCP version, edit the glusterblock-provisioner template to change the IMAGE_NAME, IMAGE_VERSION, and NAMESPACE.
If the template has IMAGE_NAME and IMAGE_VERSION as two separate parameters, then update the glusterblock-provisioner template to change both. If the template has only IMAGE_NAME, then update the glusterblock-provisioner template to change it. (The original example output is not reproduced here.)
- Delete the following resources from the old pod:
# oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
# oc delete serviceaccounts glusterblock-registry-provisioner
- After editing the template, execute the following command to create the deployment configuration:
# oc process <gluster_block_provisioner_template> | oc create -f -
- Brick multiplexing is a feature that allows adding multiple bricks into one process. This reduces resource consumption and allows us to run more bricks than before with the same memory consumption. It is enabled by default from Container-Native Storage 3.6. During an upgrade from Container-Native Storage 3.9 to Red Hat Openshift Container Storage 3.10, to turn brick multiplexing on, execute the following commands:
- To exec into the Gluster pod, execute the following command and rsh into any of the glusterfs_registry pods:
# oc rsh <gluster_pod_name>
- Verify whether brick multiplexing is enabled. If it is disabled, execute the following command to enable brick multiplexing:
# gluster volume set all cluster.brick-multiplex on
Note
You can check the brick multiplex status by executing the following command:
# gluster v get all all
For example:
# oc rsh glusterfs-registry-g6vd9
sh-4.2# gluster volume set all cluster.brick-multiplex on
Brick-multiplexing is supported only for container workloads (Independent/Converged). Also it is advised to make sure that either all volumes are in stopped state or no bricks are running before this option is modified. Do you still want to continue? (y/n) y
volume set: success
- List all the volumes in the trusted storage pool. This step is only required if the volume set operation is performed:
# gluster volume list
- Restart all the volumes. This step is only required if the volume set operation is performed along with the previous step (the scripted variant shown in the cns-deploy procedure applies here as well):
# gluster vol stop <VOLNAME>
# gluster vol start <VOLNAME>
- Support for S3 compatible Object Store in Red Hat Openshift Container Storage is under technology preview. To enable S3 compatible object store, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html/operations_guide/s3_object_store.
Note
- After upgrading the glusterfs registry pods, proceed with the steps listed in Section 6.3, “Upgrading the client on Red Hat Openshift Container Platform Nodes” to upgrade the client on Red Hat Openshift Container Platform Nodes.