OpenShift Container Storage is now OpenShift Data Foundation starting with version 4.9.
Chapter 6. Upgrading your Red Hat Openshift Container Storage in Converged Mode
This chapter describes the procedure to upgrade your environment from Red Hat Openshift Container Storage in Converged Mode 3.10 to Red Hat Openshift Container Storage in Converged Mode 3.11.
- New registry name registry.redhat.io is used throughout this guide. However, if you have not migrated to the new registry yet, replace all occurrences of registry.redhat.io with registry.access.redhat.com wherever applicable.
- Follow the same upgrade procedure to upgrade your environment from Red Hat Openshift Container Storage in Converged Mode 3.11.0 and above to Red Hat Openshift Container Storage in Converged Mode 3.11.8. Ensure that the correct image and version numbers are configured before you start the upgrade process.
The valid images for Red Hat Openshift Container Storage 3.11.8 are:
- registry.redhat.io/rhgs3/rhgs-server-rhel7:v3.11.8
- registry.redhat.io/rhgs3/rhgs-volmanager-rhel7:v3.11.8
- registry.redhat.io/rhgs3/rhgs-gluster-block-prov-rhel7:v3.11.8
- registry.redhat.io/rhgs3/rhgs-s3-server-rhel7:v3.11.8
6.1. Upgrading the pods in the glusterfs group
The following sections provide steps to upgrade your Glusterfs pods.
6.1.1. Prerequisites
Ensure the following prerequisites are met:
- Section 3.1.3, “Red Hat OpenShift Container Platform and Red Hat Openshift Container Storage Requirements”
- Ensure that you have the supported versions of OpenShift Container Platform with Red Hat Gluster Storage Server and Red Hat Openshift Container Storage. For more information on supported versions, see Section 3.1.1, “Supported Versions”.
Run the following command to get the latest versions of the Ansible templates:
# yum update openshift-ansible
For deployments using the cns-deploy tool, the templates are available in the following locations:
- gluster template - /usr/share/heketi/templates/glusterfs-template.yaml
- heketi template - /usr/share/heketi/templates/heketi-template.yaml
- glusterblock-provisioner template - /usr/share/heketi/templates/glusterblock-provisioner.yaml
For deployments using the Ansible playbook, the templates are available in the following locations:
- gluster template - /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/files/glusterfs-template.yml
- heketi template - /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/files/heketi-template.yml
- glusterblock-provisioner template - /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/files/glusterblock-provisioner.yml
6.1.2. Restoring original label values for /dev/log
Follow this procedure only if you are upgrading your environment from Red Hat Container Native Storage 3.9 to Red Hat Openshift Container Storage 3.11.8.
Skip this procedure if you are upgrading your environment from Red Hat Openshift Container Storage 3.10 and above to Red Hat Openshift Container Storage 3.11.8.
To restore the original selinux label, execute the following commands:
Create a directory and soft links on all nodes that run gluster pods:
# mkdir /srv/<directory_name>
# cd /srv/<directory_name>/    # same dir as above
# ln -sf /dev/null systemd-tmpfiles-setup-dev.service
# ln -sf /dev/null systemd-journald.service
# ln -sf /dev/null systemd-journald.socket

Edit the daemonset that creates the glusterfs pods on the node which has the oc client:
# oc edit daemonset <daemonset_name>

Under the volumeMounts section, add a mapping for the volume:
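A minimal sketch of the kind of mapping intended, assuming the three /dev/null symlinks created in Step 1 (the exact entry names and unit-file paths shown here are assumptions, not values taken from this guide):

- name: systemd-tmpfiles-setup-dev-service
  mountPath: "/usr/lib/systemd/system/systemd-tmpfiles-setup-dev.service"
- name: systemd-journald-service
  mountPath: "/usr/lib/systemd/system/systemd-journald.service"
- name: systemd-journald-socket
  mountPath: "/usr/lib/systemd/system/systemd-journald.socket"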
Under the volumes section, add a new host path for each service listed:
Note: The path mentioned here should be the same as the one mentioned in Step 1.
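A matching sketch for the volumes section, assuming the /srv/<directory_name> path from Step 1 and the volume names used in the volumeMounts sketch above:

- name: systemd-tmpfiles-setup-dev-service
  hostPath:
    path: "/srv/<directory_name>/systemd-tmpfiles-setup-dev.service"
- name: systemd-journald-service
  hostPath:
    path: "/srv/<directory_name>/systemd-journald.service"
- name: systemd-journald-socket
  hostPath:
    path: "/srv/<directory_name>/systemd-journald.socket"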
Run the following command on all nodes that run gluster pods. This will reset the label:
# restorecon /dev/log

Execute the following command to check the status of self-heal for all volumes:
# oc rsh <gluster_pod_name>
# for each_volume in `gluster volume list`; do gluster volume heal $each_volume info ; done | grep "Number of entries: [^0]$"

Wait for self-heal to complete.
Execute the following command and ensure that the bricks are not more than 90% full:
# df -kh | grep -v ^Filesystem | awk '{if(int($5)>90) print $0}'

Note: If the bricks are close to 100% utilization, the Logical Volume Manager (LVM) activation for these bricks may take a long time or can get stuck once the pod or node is rebooted. It is advised to bring down the utilization of that brick or to expand the physical volume (PV) that is backing the logical volume (LV).
Note: The df command is not applicable to bricks that belong to a Block Hosting Volume (BHV). On a BHV, the used size of the bricks reported by the df command is the added size of the block volumes of that Gluster volume; it is not the size of the data that resides in the block volumes. For more information, refer to How To Identify Block Volumes and Block Hosting Volumes in Openshift Container Storage.

Execute the following command on any one of the gluster pods to set the maximum number of bricks (250) that can run on a single instance of the glusterfsd process:

# gluster volume set all cluster.max-bricks-per-process 250

Execute the following command on any one of the gluster pods to ensure that the option is set correctly:
# gluster volume get all cluster.max-bricks-per-process

For example:

# gluster volume get all cluster.max-bricks-per-process
cluster.max-bricks-per-process 250
Execute the following command on the node which has the oc client to delete the gluster pod:
# oc delete pod <gluster_pod_name>

To verify that the pod is ready, execute the following command:
# oc get pods -l glusterfs=storage-pod

Log in to the node hosting the pod and check the SELinux label of /dev/log:
# ls -lZ /dev/log

The output should show the devlog_t label.
For example:
# ls -lZ /dev/log
srw-rw-rw-. root root system_u:object_r:devlog_t:s0 /dev/log

Exit the node.
In the gluster pod, check if the label value is devlog_t:
# oc rsh <gluster_pod_name>
# ls -lZ /dev/log

For example:

# ls -lZ /dev/log
srw-rw-rw-. root root system_u:object_r:devlog_t:s0 /dev/log

Perform steps 4 to 9 for the other pods.
6.1.3. Upgrading if existing version deployed by using cns-deploy
6.1.3.1. Upgrading cns-deploy and Heketi Server
The following commands must be executed on the client machine.
Execute the following command to update the heketi client and cns-deploy packages:
# yum update cns-deploy -y
# yum update heketi-client -y

Back up the Heketi database file:
# heketi-cli db dump > heketi-db-dump-$(date -I).json

Execute the following command to get the current HEKETI_ADMIN_KEY.
The OCS admin can choose to set any phrase for the user key as long as it is not used by their infrastructure. It is not used by any of the default installed OCS resources.
# oc get secret <heketi-admin-secret> -o jsonpath='{.data.key}'|base64 -d;echo
Execute the following command to delete the heketi template.
# oc delete templates heketi

Execute the following command to install the heketi template.
# oc create -f /usr/share/heketi/templates/heketi-template.yaml
template "heketi" created

Execute the following command to grant the heketi Service Account the necessary privileges.
# oc policy add-role-to-user edit system:serviceaccount:<project_name>:heketi-service-account
# oc adm policy add-scc-to-user privileged -z heketi-service-account

For example,
# oc policy add-role-to-user edit system:serviceaccount:storage-project:heketi-service-account
# oc adm policy add-scc-to-user privileged -z heketi-service-account

Execute the following command to generate a new heketi configuration file.
# sed -e "s/\${HEKETI_EXECUTOR}/kubernetes/" -e "s#\${HEKETI_FSTAB}#/var/lib/heketi/fstab#" -e "s/\${SSH_PORT}/22/" -e "s/\${SSH_USER}/root/" -e "s/\${SSH_SUDO}/false/" -e "s/\${BLOCK_HOST_CREATE}/true/" -e "s/\${BLOCK_HOST_SIZE}/500/" "/usr/share/heketi/templates/heketi.json.template" > heketi.json
The BLOCK_HOST_SIZE parameter controls the size (in GB) of the automatically created Red Hat Gluster Storage volumes hosting the gluster-block volumes (for more information, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/operations_guide/index#Block_Storage). This default configuration dynamically creates block-hosting volumes of 500 GB in size as more space is required. Alternatively, copy the file /usr/share/heketi/templates/heketi.json.template to heketi.json in the current directory and edit the new file directly, replacing each "${VARIABLE}" string with the required parameter.
Note: JSON formatting is strictly required (for example, no trailing spaces, booleans in all lowercase).
Execute the following command to create a secret to hold the configuration file.
# oc create secret generic <heketi-config-secret> --from-file=heketi.json

Note: If the heketi-config-secret file already exists, then delete the file and run the following command.
Execute the following command to delete the deployment configuration, service, and route for heketi:
# oc delete deploymentconfig,service,route heketi

Note: The names of these parameters can be referenced from the output of the following command:
# oc get all | grep heketi

Edit the heketi template.
Edit the HEKETI_USER_KEY and HEKETI_ADMIN_KEY parameters.
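A rough sketch of the two parameters as they appear in the heketi template, with placeholder secrets (the displayName and description text are assumptions, not copied from this guide):

- name: HEKETI_USER_KEY
  displayName: Heketi User Secret
  description: Set secret for those creating volumes as type user
  value: <heketiuserkey>
- name: HEKETI_ADMIN_KEY
  displayName: Heketi Administrator Secret
  description: Set secret for administration of the Heketi service
  value: <adminkey>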
Note: If a cluster has more than 1000 volumes, refer to How to change the default PVS limit in Openshift Container Storage and add the required parameters before proceeding with the upgrade.
Add an ENV with the name HEKETI_LVM_WRAPPER and value /usr/sbin/exec-on-host:

- description: Heketi can use a wrapper to execute LVM commands, i.e. run commands in the host namespace instead of in the Gluster container.
  displayName: Wrapper for executing LVM commands
  name: HEKETI_LVM_WRAPPER
  value: /usr/sbin/exec-on-host

Add an ENV with the name HEKETI_DEBUG_UMOUNT_FAILURES and value true.
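A sketch of the corresponding template entry, modeled on the HEKETI_LVM_WRAPPER entry above (the description and displayName wording are assumptions):

- description: Heketi can log additional information when unmounting of a brick fails.
  displayName: Debug information for umount failures
  name: HEKETI_DEBUG_UMOUNT_FAILURES
  value: "true"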
Add an ENV with the name HEKETI_CLI_USER and value admin.

Add an ENV with the name HEKETI_CLI_KEY and the same value provided for the ENV HEKETI_ADMIN_KEY.
Replace the value under IMAGE_VERSION with v3.11.5 or v3.11.8, depending on the version you want to upgrade to.
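A sketch of the parameter after the edit, assuming an upgrade target of 3.11.8 (the displayName text is an assumption):

- name: IMAGE_VERSION
  displayName: heketi container image version
  value: v3.11.8
  required: true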
Execute the following command to deploy the Heketi service, route, and deployment configuration which will be used to create persistent volumes for OpenShift:
# oc process heketi | oc create -f -
service "heketi" created
route "heketi" created
deploymentconfig "heketi" created

Note: It is recommended that the heketidbstorage volume be tuned for DB workloads. Newly installed Openshift Container Storage deployments tune the heketidbstorage volume automatically. For older deployments, follow the KCS article Planning to run containerized DB or nosql workloads on Openshift Container Storage? and perform the volume set operation for the volume heketidbstorage.

Execute the following command to verify that the containers are running:
# oc get pods

For example:

# oc get pods
NAME                                          READY     STATUS    RESTARTS   AGE
glusterblock-storage-provisioner-dc-1-ffgs5   1/1       Running   0          3m
heketi-storage-4-9fnvz                        2/2       Running   0          8d
6.1.3.2. Upgrading the Red Hat Gluster Storage Pods
The following commands must be executed on the client machine.
Following are the steps for updating a DaemonSet for glusterfs:
Execute the following steps to stop the Heketi pod to prevent it from accepting any new request for volume creation or volume deletion:
Execute the following command to access your project:
# oc project <project_name>

For example:

# oc project storage-project

Execute the following command to get the DeploymentConfig:

# oc get dc

Execute the following command to set the heketi server to accept requests only from the local-client:
# heketi-cli server mode set local-client

Wait for the ongoing operations to complete and execute the following command to monitor if there are any ongoing operations:
# heketi-cli server operations info

Execute the following command to reduce the replica count from 1 to 0. This brings down the Heketi pod:
# oc scale dc <heketi_dc> --replicas=0

Execute the following command to verify that the heketi pod is no longer present:
# oc get pods
Execute the following command to find the DaemonSet name for gluster:

# oc get ds

Execute the following command to delete the DaemonSet:
# oc delete ds <ds-name> --cascade=false

Using the --cascade=false option while deleting the old DaemonSet does not delete the gluster pods but deletes only the DaemonSet. After deleting the old DaemonSet, you must load the new one. When you manually delete the old pods, the new pods which are created will have the configurations of the new DaemonSet.

For example,
# oc delete ds glusterfs --cascade=false
daemonset "glusterfs" deleted

Execute the following command to verify that all the old pods are up:

# oc get pods

Execute the following command to delete the old glusterfs template.
# oc delete templates glusterfs

For example,

# oc delete templates glusterfs
template "glusterfs" deleted

Execute the following command to register the new glusterfs template.
# oc create -f /usr/share/heketi/templates/glusterfs-template.yaml

For example,

# oc create -f /usr/share/heketi/templates/glusterfs-template.yaml
template "glusterfs" created

Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
Check if the nodes are labelled with the appropriate label by using the following command:
# oc get nodes -l glusterfs=storage-host
Edit the glusterfs template.
Execute the following command:
# oc edit template glusterfs

Add the following lines under volume mounts:
- name: kernel-modules
  mountPath: "/usr/lib/modules"
  readOnly: true
- name: host-rootfs
  mountPath: "/rootfs"

Add the following lines under volumes:
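A sketch of what the added volumes plausibly look like, pairing host paths with the two mounts added above (the host paths shown are assumptions):

- name: kernel-modules
  hostPath:
    path: "/usr/lib/modules"
- name: host-rootfs
  hostPath:
    path: "/"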
Replace the value under IMAGE_VERSION with v3.11.5 or v3.11.8, depending on the version you want to upgrade to.
Execute the following commands to create the gluster DaemonSet:
# oc process glusterfs | oc create -f -

For example,

# oc process glusterfs | oc create -f -
daemonset "glusterfs" created

Note: If a cluster has more than 1000 volumes, refer to How to change the default PVS limit in Openshift Container Storage and add the required parameters before proceeding with the upgrade.
Execute the following command to identify the old gluster pods that need to be deleted:

# oc get pods

Execute the following command and ensure that the bricks are not more than 90% full:
# df -kh | grep -v ^Filesystem | awk '{if(int($5)>90) print $0}'

Note: If the bricks are close to 100% utilization, the Logical Volume Manager (LVM) activation for these bricks may take a long time or can get stuck once the pod or node is rebooted. It is advised to bring down the utilization of that brick or to expand the physical volume (PV) that is backing the logical volume (LV).

Note: The df command is not applicable to bricks that belong to a Block Hosting Volume (BHV). On a BHV, the used size of the bricks reported by the df command is the added size of the block volumes of that Gluster volume; it is not the size of the data that resides in the block volumes. For more information, refer to How To Identify Block Volumes and Block Hosting Volumes in Openshift Container Storage.

Execute the following command to delete the old gluster pods.
Gluster pods should follow a rolling upgrade, so you must ensure that the new pod is running before deleting the next old gluster pod. The OnDelete DaemonSet update strategy is supported: after you update a DaemonSet template, new DaemonSet pods are only created when you manually delete the old DaemonSet pods.

To delete the old gluster pods, execute the following command:
# oc delete pod <gluster_pod>

For example,

# oc delete pod glusterfs-0vcf3
pod "glusterfs-0vcf3" deleted

Note: Before deleting the next pod, a self-heal check has to be made:
Run the following command to access a shell on the gluster pod:

# oc rsh <gluster_pod_name>

Run the following command to check the self-heal status of all the volumes:
# for eachVolume in $(gluster volume list); do gluster volume heal $eachVolume info ; done | grep "Number of entries: [^0]$"
The delete pod command terminates the old pod and creates a new pod. Run # oc get pods -w and check that the Age of the pod and the READY status is 1/1. The following is example output showing the status progression from termination to creation of the pod.

# oc get pods -w
NAME              READY     STATUS        RESTARTS   AGE
glusterfs-0vcf3   1/1       Terminating   0          3d
…
Execute the following command to verify that the pods are running:
# oc get pods

Execute the following command to verify that you have upgraded the pod to the latest version:
# oc rsh <gluster_pod_name> glusterd --version

For example:

# oc rsh glusterfs-4cpcc glusterd --version
glusterfs 6.0

Check the Red Hat Gluster Storage op-version by executing the following command on one of the gluster pods.
# gluster vol get all cluster.op-version

After you upgrade the Gluster pods, ensure that you set Heketi back to operational mode:
Scale up the DC (Deployment Configuration).
# oc scale dc <heketi_dc> --replicas=1
Set the cluster.op-version to 70200 on any one of the pods:
Important: Ensure all the gluster pods are updated before changing the cluster.op-version.

# gluster --timeout=3600 volume set all cluster.op-version 70200

Execute the following steps to enable server.tcp-user-timeout on all volumes.
Note: The "server.tcp-user-timeout" option specifies the maximum amount of time (in seconds) that data transmitted from the application can remain unacknowledged by the brick. It is used to detect forced disconnections and dead connections (for example, if a node dies unexpectedly or a firewall is activated) early, making it possible for applications to reduce the overall failover time.
List the glusterfs pod using the following command:
# oc get pods

Remote shell into one of the glusterfs pods. For example:
# oc rsh glusterfs-0vcf3

Execute the following command:
# for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done

For example:

# for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
volume1
volume set: success
volume2
volume set: success
If a gluster-block-provisioner pod already exists, then delete it by executing the following commands:
# oc delete dc glusterblock-provisioner-dc

For example:

# oc delete dc glusterblock-storage-provisioner-dc

Delete the following resources from the old pod:
# oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
# oc delete serviceaccounts glusterblock-provisioner
serviceaccount "glusterblock-provisioner" deleted
# oc delete clusterrolebindings.authorization.openshift.io glusterblock-provisioner

Execute the following commands to deploy the gluster-block provisioner:
# sed -e 's/${NAMESPACE}/<NAMESPACE>/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | sed -e 's/<VERSION>/<NEW-VERSION>/' | oc create -f -

- <VERSION> - Existing version of OpenShift Container Storage.
- <NEW-VERSION> - Either 3.11.5 or 3.11.8, depending on the version you are upgrading to.
# oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:<NAMESPACE>:glusterblock-provisioner

For example:

# sed -e 's/${NAMESPACE}/storage-project/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | sed -e 's/3.11.4/3.11.8/' | oc create -f -
# oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:storage-project:glusterblock-provisioner
Brick multiplexing is a feature that allows adding multiple bricks into one process. This reduces resource consumption and allows us to run more bricks than before with the same memory consumption. It is enabled by default from Container-Native Storage 3.6 onward. During an upgrade from Container-Native Storage 3.10 to Red Hat Openshift Container Storage 3.11, to turn brick multiplexing on, execute the following commands:
To exec into the Gluster pod, execute the following command and rsh into any of the gluster pods:
# oc rsh <gluster_pod_name>

Verify the brick multiplex status:
# gluster v get all all

If it is disabled, then execute the following command to enable brick multiplexing:
Note: Ensure that all volumes are in a stopped state or that no bricks are running when brick multiplexing is enabled.
# gluster volume set all cluster.brick-multiplex on

For example:
# oc rsh glusterfs-770ql
sh-4.2# gluster volume set all cluster.brick-multiplex on
Brick-multiplexing is supported only for container workloads (Independent or Converged mode). Also it is advised to make sure that either all volumes are in stopped state or no bricks are running before this option is modified. Do you still want to continue? (y/n) y
volume set: success

List all the volumes in the trusted storage pool. This step is only required if the volume set operation is performed:
Restart all the volumes. This step is only required if the volume set operation is performed along with the previous step:
# gluster vol stop <VOLNAME>
# gluster vol start <VOLNAME>
- Support for S3 compatible Object Store in Red Hat Openshift Container Storage is under technology preview. To enable S3 compatible object store, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html/operations_guide/s3_object_store.
- If you have glusterfs registry pods, then proceed with the steps listed in Section 6.2, “Upgrading the pods in the glusterfs registry group” to upgrade heketi and glusterfs registry pods.
- If you do not have glusterfs registry pods, then proceed with the steps to bring back your heketi pod, and then proceed with the steps to upgrade the client on Red Hat Openshift Container Platform Nodes.
6.1.4. Upgrading if existing version deployed by using Ansible
6.1.4.1. Upgrading Heketi Server
The following commands must be executed on the client machine.
Execute the following steps to check for any pending Heketi operations:
Execute the following command to access your project:
# oc project <project_name>

For example:

# oc project storage-project

Wait for the ongoing operations to complete and execute the following command to monitor if there are any ongoing operations:

# heketi-cli server operations info
Back up the Heketi database file.

# heketi-cli db dump > heketi-db-dump-$(date -I).json

Note: The JSON file created can be used for a restore and therefore should be stored in persistent storage of your choice.
Execute the following command to update the heketi client packages. Update the heketi-client package on all the OCP nodes where it is installed. Newer installations may not have the heketi-client rpm installed on any OCP nodes:

# yum update heketi-client -y

Execute the following command to get the current HEKETI_ADMIN_KEY.
The OCS admin can choose to set any phrase for the user key as long as it is not used by their infrastructure. It is not used by any of the default installed OCS resources.
# oc get secret heketi-storage-admin-secret -o jsonpath='{.data.key}'|base64 -d;echo

If the HEKETI_USER_KEY was set previously, you can obtain it by using the following command:

# oc describe pod <heketi-pod>

Execute the following command to delete the heketi template.
# oc delete templates heketi

Execute the following command to install the heketi template.
# oc create -f /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/files/heketi-template.yml
template "heketi" created

Execute the following step to edit the template:
If the existing template has IMAGE_NAME and IMAGE_VERSION as two parameters, then edit the template to change the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, HEKETI_ROUTE, IMAGE_NAME, IMAGE_VERSION, CLUSTER_NAME and HEKETI_LVM_WRAPPER as shown in the example below.
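A rough sketch of the edited parameters, assuming an upgrade target of 3.11.8 and the registry.redhat.io/rhgs3/rhgs-volmanager-rhel7 image listed earlier in this chapter (the HEKETI_ROUTE and CLUSTER_NAME values shown are placeholders, not values from this guide):

- name: HEKETI_USER_KEY
  value: <heketiuserkey>
- name: HEKETI_ADMIN_KEY
  value: <adminkey>
- name: HEKETI_ROUTE
  value: heketi-storage
- name: IMAGE_NAME
  value: registry.redhat.io/rhgs3/rhgs-volmanager-rhel7
- name: IMAGE_VERSION
  value: v3.11.8
- name: CLUSTER_NAME
  value: storage
- name: HEKETI_LVM_WRAPPER
  value: /usr/sbin/exec-on-host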
If the template has only IMAGE_NAME, then edit the template to change the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, HEKETI_ROUTE, IMAGE_NAME, CLUSTER_NAME and HEKETI_LVM_WRAPPER as shown in the example below.
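In that case the version tag presumably moves into IMAGE_NAME itself, for example:

- name: IMAGE_NAME
  value: registry.redhat.io/rhgs3/rhgs-volmanager-rhel7:v3.11.8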
Note: If a cluster has more than 1000 volumes, refer to How to change the default PVS limit in Openshift Container Storage and add the required parameters before proceeding with the upgrade.
Execute the following command to delete the deployment configuration, service, and route for heketi:
Note: The names of these parameters can be referenced from the output of the following command:

# oc get all | grep heketi

# oc delete deploymentconfig,service,route heketi-storage

Execute the following command to deploy the Heketi service, route, and deployment configuration which will be used to create persistent volumes for OpenShift:
# oc process heketi | oc create -f -
service "heketi" created
route "heketi" created
deploymentconfig "heketi" created

Note: It is recommended that the heketidbstorage volume be tuned for DB workloads. Newly installed Openshift Container Storage deployments tune the heketidbstorage volume automatically. For older deployments, follow the KCS article Planning to run containerized DB or nosql workloads on Openshift Container Storage? and perform the volume set operation for the volume heketidbstorage.

Execute the following command to verify that the containers are running:
# oc get pods
6.1.4.2. Upgrading the Red Hat Gluster Storage Pods
The following commands must be executed on the client machine.
Following are the steps for updating a DaemonSet for glusterfs:
Execute the following steps to stop the Heketi pod to prevent it from accepting any new request for volume creation or volume deletion:
Execute the following command to access your project:
# oc project <project_name>

For example:

# oc project storage-project

Execute the following command to get the DeploymentConfig:

# oc get dc

Execute the following command to set the heketi server to accept requests only from the local-client:
# heketi-cli server mode set local-client

Wait for the ongoing operations to complete and execute the following command to monitor if there are any ongoing operations:
# heketi-cli server operations info

Execute the following command to reduce the replica count from 1 to 0. This brings down the Heketi pod:
# oc scale dc <heketi_dc> --replicas=0

Execute the following command to verify that the heketi pod is no longer present:
# oc get pods
Execute the following command to find the DaemonSet name for gluster:

# oc get ds

Execute the following command to delete the DaemonSet:
# oc delete ds <ds-name> --cascade=false

Using the --cascade=false option while deleting the old DaemonSet does not delete the gluster pods but deletes only the DaemonSet. After deleting the old DaemonSet, you must load the new one. When you manually delete the old pods, the new pods which are created will have the configurations of the new DaemonSet.

For example,
# oc delete ds glusterfs-storage --cascade=false
daemonset "glusterfs-storage" deleted

Execute the following command to verify that all the old pods are up:

# oc get pods

Execute the following command to delete the old glusterfs template.
# oc delete templates glusterfs

Execute the following command to register the new glusterfs template.

# oc create -f /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/files/glusterfs-template.yml
template "glusterfs" created

Execute the following command to edit the old glusterfs template.
If the template has IMAGE_NAME and IMAGE_VERSION as two separate parameters, then update the glusterfs template as follows. For example:
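A sketch of the parameters being set, assuming a 3.11.8 target and the registry.redhat.io/rhgs3/rhgs-server-rhel7 image listed earlier in this chapter (the CLUSTER_NAME value is a placeholder):

- name: IMAGE_NAME
  value: registry.redhat.io/rhgs3/rhgs-server-rhel7
- name: IMAGE_VERSION
  value: v3.11.8
- name: CLUSTER_NAME
  value: storage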
Note: If a cluster has more than 1000 volumes, refer to How to change the default PVS limit in Openshift Container Storage and add the required parameters before proceeding with the upgrade.
If the template has only IMAGE_NAME as a parameter, then update the glusterfs template as follows. For example:
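Here, too, the tag would be carried in IMAGE_NAME directly, for example:

- name: IMAGE_NAME
  value: registry.redhat.io/rhgs3/rhgs-server-rhel7:v3.11.8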
Note: Ensure that the CLUSTER_NAME variable is set to the correct value.
Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
Check if the nodes are labelled with the appropriate label by using the following command:
# oc get nodes -l glusterfs=storage-host
Execute the following commands to create the gluster DaemonSet:
# oc process glusterfs | oc create -f -

For example,

# oc process glusterfs | oc create -f -
daemonset "glusterfs" created

Execute the following command to identify the old gluster pods that need to be deleted:
# oc get pods
Execute the following command and ensure that the bricks are not more than 90% full:
# df -kh | grep -v ^Filesystem | awk '{if(int($5)>90) print $0}'

Note: If the bricks are close to 100% utilization, the Logical Volume Manager (LVM) activation for these bricks may take a long time or can get stuck once the pod or node is rebooted. It is advised to bring down the utilization of that brick or to expand the physical volume (PV) that is backing the logical volume (LV).

Note: The df command is not applicable to bricks that belong to a Block Hosting Volume (BHV). On a BHV, the used size of the bricks reported by the df command is the added size of the block volumes of that Gluster volume; it is not the size of the data that resides in the block volumes. For more information, refer to How To Identify Block Volumes and Block Hosting Volumes in Openshift Container Storage.

Execute the following command to delete the old gluster pods.
Gluster pods should follow a rolling upgrade, so you must ensure that the new pod is running before deleting the next old gluster pod. The OnDelete DaemonSet update strategy is supported: after you update a DaemonSet template, new DaemonSet pods are only created when you manually delete the old DaemonSet pods.

To delete the old gluster pods, execute the following command:
# oc delete pod <gluster_pod>

For example,

# oc delete pod glusterfs-0vcf3
pod "glusterfs-0vcf3" deleted

Note: Before deleting the next pod, a self-heal check has to be made:
Run the following command to access a shell on the gluster pod:

# oc rsh <gluster_pod_name>

Run the following command to check the self-heal status of all the volumes:
# for eachVolume in $(gluster volume list); do gluster volume heal $eachVolume info ; done | grep "Number of entries: [^0]$"
The delete pod command terminates the old pod and creates a new pod. Run # oc get pods -w and check that the Age of the pod and the READY status is 1/1. The following is example output showing the status progression from termination to creation of the pod.

# oc get pods -w
NAME              READY     STATUS        RESTARTS   AGE
glusterfs-0vcf3   1/1       Terminating   0          3d
…
Execute the following command to verify that the pods are running:
# oc get pods

Execute the following command to verify that you have upgraded the pod to the latest version:
# oc rsh <gluster_pod_name> glusterd --version

For example:

# oc rsh glusterfs-4cpcc glusterd --version
glusterfs 6.0

Check the Red Hat Gluster Storage op-version by executing the following command on one of the gluster pods.
# gluster vol get all cluster.op-version

After you upgrade the Gluster pods, ensure that you set Heketi back to operational mode:
Scale up the DC (Deployment Configuration).
# oc scale dc <heketi_dc> --replicas=1
Set the cluster.op-version to 70200 on any one of the pods:
Note: Ensure all the gluster pods are updated before changing the cluster.op-version.
# gluster --timeout=3600 volume set all cluster.op-version 70200

Execute the following steps to enable server.tcp-user-timeout on all volumes.
Note: The "server.tcp-user-timeout" option specifies the maximum amount of time (in seconds) that data transmitted from the application can remain unacknowledged by the brick. It is used to detect forced disconnections and dead connections (for example, if a node dies unexpectedly or a firewall is activated) early, making it possible for applications to reduce the overall failover time.
List the glusterfs pod using the following command:
# oc get pods

Remote shell into one of the glusterfs pods. For example:
# oc rsh glusterfs-0vcf3

Execute the following command:
# for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done

For example:

# for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
volume1
volume set: success
volume2
volume set: success
If a gluster-block-provisioner pod already exists, then delete it by executing the following commands:
# oc delete dc glusterblock-provisioner-dc

For example:

# oc delete dc glusterblock-storage-provisioner-dc

Execute the following command to delete the old glusterblock provisioner template.
# oc delete templates glusterblock-provisioner

Create a glusterblock provisioner template. For example:

# oc create -f /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/files/glusterblock-provisioner.yml
template.template.openshift.io/glusterblock-provisioner created

Depending on the OCP version, edit the glusterblock-provisioner template to change the IMAGE_NAME, IMAGE_VERSION and NAMESPACE.
If the template has IMAGE_NAME and IMAGE_VERSION as two separate parameters, then update the glusterblock-provisioner template as follows. For example:
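A sketch of the edited parameters, assuming a 3.11.8 target, the registry.redhat.io/rhgs3/rhgs-gluster-block-prov-rhel7 image listed earlier in this chapter, and the storage-project namespace used in the other examples:

- name: IMAGE_NAME
  value: registry.redhat.io/rhgs3/rhgs-gluster-block-prov-rhel7
- name: IMAGE_VERSION
  value: v3.11.8
- name: NAMESPACE
  value: storage-project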
If the template has only IMAGE_NAME as a parameter, then update the glusterblock-provisioner template as follows. For example:
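As with the other templates, the tag would then be part of IMAGE_NAME, for example:

- name: IMAGE_NAME
  value: registry.redhat.io/rhgs3/rhgs-gluster-block-prov-rhel7:v3.11.8
- name: NAMESPACE
  value: storage-project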
Delete the following resources from the old pod:

# oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
# oc delete serviceaccounts glusterblock-storage-provisioner
# oc delete clusterrolebindings.authorization.openshift.io glusterblock-storage-provisioner

Before running oc process, determine the correct provisioner name. If there is more than one gluster block provisioner running in your cluster, the name must differ from those of all other provisioners.
For example,

- If there are two or more provisioners, the name should be gluster.org/glusterblock-<namespace>, where namespace is replaced by the namespace that the provisioner is deployed in.
- If there is only one provisioner, installed prior to 3.11.8, gluster.org/glusterblock is sufficient. If the name currently in use already has a unique namespace suffix, reuse the existing name.
After editing the template, execute the following command to create the deployment configuration:
# oc process glusterblock-provisioner -o yaml | oc create -f -

For example:

# oc process glusterblock-provisioner -o yaml | oc create -f -
clusterrole.authorization.openshift.io/glusterblock-provisioner-runner created
serviceaccount/glusterblock-storage-provisioner created
clusterrolebinding.authorization.openshift.io/glusterblock-storage-provisioner created
deploymentconfig.apps.openshift.io/glusterblock-storage-provisioner-dc created

Brick multiplexing is a feature that allows adding multiple bricks into one process. This reduces resource consumption and allows us to run more bricks than before with the same memory consumption. It is enabled by default from Container-Native Storage 3.6 onward. During an upgrade from Container-Native Storage 3.10 to Red Hat Openshift Container Storage 3.11, to turn brick multiplexing on, execute the following commands:
To exec into the Gluster pod, execute the following command and rsh into any of the gluster pods:
# oc rsh <gluster_pod_name>

Verify the brick multiplex status:
# gluster v get all all

If it is disabled, then execute the following command to enable brick multiplexing:
Note: Ensure that all volumes are in a stopped state or that no bricks are running when brick multiplexing is enabled.
# gluster volume set all cluster.brick-multiplex on

For example:
# oc rsh glusterfs-770ql
sh-4.2# gluster volume set all cluster.brick-multiplex on
Brick-multiplexing is supported only for container workloads (Independent or Converged mode). Also it is advised to make sure that either all volumes are in stopped state or no bricks are running before this option is modified. Do you still want to continue? (y/n) y
volume set: success

List all the volumes in the trusted storage pool. This step is only required if the volume set operation is performed:
Restart all the volumes. This step is only required if the volume set operation is performed along with the previous step:
# gluster vol stop <VOLNAME>
# gluster vol start <VOLNAME>
Support for S3 compatible Object Store in Red Hat Openshift Container Storage is under technology preview. To enable S3 compatible object store, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html/operations_guide/s3_object_store.
Note:
- If you have glusterfs registry pods, then proceed with the steps listed in Section 6.2, “Upgrading the pods in the glusterfs registry group” to upgrade heketi and glusterfs registry pods.
- If you do not have glusterfs registry pods, then proceed with the steps to bring back your heketi pod, and then proceed with the steps to upgrade the client on Red Hat Openshift Container Platform Nodes.
All storage classes that use gluster block volume provisioning must match exactly one of the provisioner names in the cluster. To check the list of storage classes that refer to a block provisioner in a given namespace, run the following command:

# oc get sc -o custom-columns=NAME:.metadata.name,PROV:.provisioner,RSNS:.parameters.restsecretnamespace | grep 'gluster.org/glusterblock' | grep <namespace>

Example:
# oc get sc -o custom-columns=NAME:.metadata.name,PROV:.provisioner,RSNS:.parameters.restsecretnamespace | grep 'gluster.org/glusterblock' | grep app-storage
glusterfs-storage-block   gluster.org/glusterblock-app-storage   app-storage

Check each storage class provisioner name. If it does not match the block provisioner name configured for that namespace, it must be updated. If the block provisioner name already matches the configured provisioner name, nothing else needs to be done. Use the list generated above and include all storage class names where the provisioner name must be updated.

For every storage class in this list, do the following:

# oc get sc -o yaml <storageclass> > storageclass-to-edit.yaml
# oc delete sc <storageclass>
# sed 's,gluster.org/glusterblock$,gluster.org/glusterblock-<namespace>,' storageclass-to-edit.yaml | oc create -f -

Example:
# oc get sc -o yaml gluster-storage-block > storageclass-to-edit.yaml
# oc delete sc gluster-storage-block
# sed 's,gluster.org/glusterblock$,gluster.org/glusterblock-app-storage,' storageclass-to-edit.yaml | oc create -f -
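For reference, a sketch of what the recreated storage class might look like after the provisioner rename, assuming the app-storage example above (only the provisioner field changes; everything else is carried over from the saved YAML):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage-block
provisioner: gluster.org/glusterblock-app-storage
parameters:
  restsecretnamespace: app-storage
  # remaining parameters from the original storage class are preserved unchanged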
6.2. Upgrading the pods in the glusterfs registry group
The following sections provide steps to upgrade your glusterfs registry pods.
6.2.1. Prerequisites
Ensure the following prerequisites are met:
- Section 3.1.3, “Red Hat OpenShift Container Platform and Red Hat Openshift Container Storage Requirements”
- Ensure that you have the supported versions of OpenShift Container Platform with Red Hat Gluster Storage Server and Red Hat Openshift Container Storage. For more information on supported versions, see Section 3.1.1, “Supported Versions”
Run the following command to get the latest versions of the Ansible templates.
yum update openshift-ansible
# yum update openshift-ansibleCopy to Clipboard Copied! Toggle word wrap Toggle overflow
For deployments using cns-deploy tool, the templates are available in the following location:
- gluster template - /usr/share/heketi/templates/glusterfs-template.yaml
- heketi template - /usr/share/heketi/templates/heketi-template.yaml
- glusterblock-provisioner template - /usr/share/heketi/templates/glusterblock-provisioner.yaml
For deployments using ansible playbook the templates are available in the following location:
- gluster template - /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/files/glusterfs-template.yml
- heketi template - /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/files/heketi-template.yml
- glusterblock-provisioner template - /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/files/glusterblock-provisioner.yml
6.2.2. Restoring original label values for /dev/log
Follow this procedure only if you are upgrading your environment from Red Hat Container Native Storage 3.9 to Red Hat Openshift Container Storage 3.11.8.
Skip this procedure if you are upgrading your environment from Red Hat Openshift Container Storage 3.10 and above to Red Hat Openshift Container Storage 3.11.8.
To restore the original selinux label, execute the following commands:
Create a directory and soft links on all nodes that run gluster pods:
mkdir /srv/<directory_name> cd /srv/<directory_name>/ # same dir as above ln -sf /dev/null systemd-tmpfiles-setup-dev.service ln -sf /dev/null systemd-journald.service ln -sf /dev/null systemd-journald.socket
# mkdir /srv/<directory_name> # cd /srv/<directory_name>/ # same dir as above # ln -sf /dev/null systemd-tmpfiles-setup-dev.service # ln -sf /dev/null systemd-journald.service # ln -sf /dev/null systemd-journald.socketCopy to Clipboard Copied! Toggle word wrap Toggle overflow Edit the daemonset that creates the glusterfs pods on the node which has oc client:
oc edit daemonset <daemonset_name>
# oc edit daemonset <daemonset_name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Under volumeMounts section add a mapping for the volume:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Under volumes section add a new host path for each service listed:
NoteThe path mentioned in here should be the same as mentioned in Step 1.
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Run the following command on all nodes that run gluster pods. This will reset the label:
restorecon /dev/log
# restorecon /dev/logCopy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following command to check the status of self heal for all volumes:
oc rsh <gluster_pod_name> for each_volume in `gluster volume list`; do gluster volume heal $each_volume info ; done | grep "Number of entries: [^0]$"
# oc rsh <gluster_pod_name> # for each_volume in `gluster volume list`; do gluster volume heal $each_volume info ; done | grep "Number of entries: [^0]$"Copy to Clipboard Copied! Toggle word wrap Toggle overflow Wait for self-heal to complete.
Execute the following command and ensure that the bricks are not more than 90% full:
df -kh | grep -v ^Filesystem | awk '{if(int($5)>90) print $0}'# df -kh | grep -v ^Filesystem | awk '{if(int($5)>90) print $0}'Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteIf the bricks are close to 100% utilization, then the Logical Volume Manager(LVM) activation for these bricks may take a long time or can get stuck once the pod or node is rebooted. It is advised to bring down the utilization of that brick or expand the physical volume(PV) that is using the logical volume(LV).
Note: The df command is not applicable to bricks that belong to a Block Hosting Volume (BHV). On a BHV, the used size of the bricks reported by the df command is the aggregate size of the block volumes of that Gluster volume, not the size of the data that resides in the block volumes. For more information, refer to How To Identify Block Volumes and Block Hosting Volumes in Openshift Container Storage.
Execute the following command on any one of the gluster pods to set the maximum number of bricks (250) that can run on a single instance of the glusterfsd process:
# gluster volume set all cluster.max-bricks-per-process 250Copy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following command on any one of the gluster pods to ensure that the option is set correctly:
gluster volume get all cluster.max-bricks-per-process
# gluster volume get all cluster.max-bricks-per-processCopy to Clipboard Copied! Toggle word wrap Toggle overflow For example:
gluster volume get all cluster.max-bricks-per-process
# gluster volume get all cluster.max-bricks-per-process cluster.max-bricks-per-process 250Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Execute the following command on the node which has oc client to delete the gluster pod:
oc delete pod <gluster_pod_name>
# oc delete pod <gluster_pod_name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow To verify if the pod is ready, execute the following command:
oc get pods -l glusterfs=registry-pod
# oc get pods -l glusterfs=registry-podCopy to Clipboard Copied! Toggle word wrap Toggle overflow Login to the node hosting the pod and check the selinux label of /dev/log
ls -lZ /dev/log
# ls -lZ /dev/logCopy to Clipboard Copied! Toggle word wrap Toggle overflow The output should show devlog_t label
For example:
ls -lZ /dev/log
# ls -lZ /dev/log srw-rw-rw-. root root system_u:object_r:devlog_t:s0 /dev/logCopy to Clipboard Copied! Toggle word wrap Toggle overflow Exit the node.
In the gluster pod, check if the label value is devlog_t:
oc rsh <gluster_pod_name> ls -lZ /dev/log
# oc rsh <gluster_pod_name> # ls -lZ /dev/logCopy to Clipboard Copied! Toggle word wrap Toggle overflow For example:
ls -lZ /dev/log
# ls -lZ /dev/log srw-rw-rw-. root root system_u:object_r:devlog_t:s0 /dev/logCopy to Clipboard Copied! Toggle word wrap Toggle overflow - Perform steps 4 to 9 for other pods.
6.2.3. Upgrading if existing version deployed by using cns-deploy
6.2.3.1. Upgrading cns-deploy and Heketi Server
The following commands must be executed on the client machine.
Execute the following command to update the heketi client and cns-deploy packages:
yum update cns-deploy -y yum update heketi-client -y
# yum update cns-deploy -y # yum update heketi-client -yCopy to Clipboard Copied! Toggle word wrap Toggle overflow Backup the Heketi registry database file
heketi-cli db dump > heketi-db-dump-$(date -I).json
# heketi-cli db dump > heketi-db-dump-$(date -I).jsonCopy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following command to delete the heketi template.
oc delete templates heketi
# oc delete templates heketiCopy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following command to get the current HEKETI_ADMIN_KEY.
The OCS admin can choose to set any phrase for user key as long as it is not used by their infrastructure. It is not used by any of the OCS default installed resources.
oc get secret <heketi-admin-secret-name> -o jsonpath='{.data.key}'|base64 -d;echo# oc get secret <heketi-admin-secret-name> -o jsonpath='{.data.key}'|base64 -d;echoCopy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following command to install the heketi template.
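For example, if the admin secret in your project is named heketi-storage-admin-secret (secret names vary between deployments; list them first if you are unsure):
# oc get secret | grep heketi
# oc get secret heketi-storage-admin-secret -o jsonpath='{.data.key}'|base64 -d;echo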
oc create -f /usr/share/heketi/templates/heketi-template.yaml
# oc create -f /usr/share/heketi/templates/heketi-template.yaml template "heketi" createdCopy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following command to grant the heketi Service Account the necessary privileges.
oc policy add-role-to-user edit system:serviceaccount:<project_name>:heketi-service-account oc adm policy add-scc-to-user privileged -z heketi-service-account
# oc policy add-role-to-user edit system:serviceaccount:<project_name>:heketi-service-account # oc adm policy add-scc-to-user privileged -z heketi-service-accountCopy to Clipboard Copied! Toggle word wrap Toggle overflow For example,
oc policy add-role-to-user edit system:serviceaccount:storage-project:heketi-service-account oc adm policy add-scc-to-user privileged -z heketi-service-account
# oc policy add-role-to-user edit system:serviceaccount:storage-project:heketi-service-account # oc adm policy add-scc-to-user privileged -z heketi-service-accountCopy to Clipboard Copied! Toggle word wrap Toggle overflow NoteThe service account used in heketi pod needs to be privileged because Heketi/rhgs-volmanager pod mounts the heketidb storage Gluster volume as a "glusterfs" volume type and not as a PersistentVolume (PV).
As per the security-context-constraints regulations in OpenShift, ability to mount volumes which are not of the type configMap, downwardAPI, emptyDir, hostPath, nfs, persistentVolumeClaim, secret is granted only to accounts with privileged Security Context Constraint (SCC).Execute the following command to generate a new heketi configuration file.
sed -e "s/\${HEKETI_EXECUTOR}/kubernetes/" -e "s#\${HEKETI_FSTAB}#/var/lib/heketi/fstab#" -e "s/\${SSH_PORT}/22/" -e "s/\${SSH_USER}/root/" -e "s/\${SSH_SUDO}/false/" -e "s/\${BLOCK_HOST_CREATE}/true/" -e "s/\${BLOCK_HOST_SIZE}/500/" "/usr/share/heketi/templates/heketi.json.template" > heketi.json# sed -e "s/\${HEKETI_EXECUTOR}/kubernetes/" -e "s#\${HEKETI_FSTAB}#/var/lib/heketi/fstab#" -e "s/\${SSH_PORT}/22/" -e "s/\${SSH_USER}/root/" -e "s/\${SSH_SUDO}/false/" -e "s/\${BLOCK_HOST_CREATE}/true/" -e "s/\${BLOCK_HOST_SIZE}/500/" "/usr/share/heketi/templates/heketi.json.template" > heketi.jsonCopy to Clipboard Copied! Toggle word wrap Toggle overflow -
The BLOCK_HOST_SIZE parameter controls the size (in GB) of the automatically created Red Hat Gluster Storage volumes hosting the gluster-block volumes (For more information, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/operations_guide/index#Block_Storage). This default configuration will dynamically create block-hosting volumes of 500GB in size as more space is required. Alternatively, copy the file /usr/share/heketi/templates/heketi.json.template to heketi.json in the current directory and edit the new file directly, replacing each "${VARIABLE}" string with the required parameter.
Note: JSON formatting is strictly required (e.g. no trailing spaces, booleans in all lowercase).
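Because the JSON syntax is strict, it can be worth validating the generated file before creating the secret from it. A minimal check, assuming python is available on the client machine:
# python -m json.tool heketi.json > /dev/null && echo "heketi.json is valid JSON"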
Execute the following command to create a secret to hold the configuration file.
oc create secret generic <heketi-registry-config-secret> --from-file=heketi.json
# oc create secret generic <heketi-registry-config-secret> --from-file=heketi.json
Note: If a secret named heketi-registry-config-secret already exists, delete it and then run the create command again, for example:
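For example, assuming the secret is named heketi-registry-config-secret as above:
# oc delete secret heketi-registry-config-secret
# oc create secret generic heketi-registry-config-secret --from-file=heketi.json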
Execute the following command to delete the deployment configuration, service, and route for heketi:
oc delete deploymentconfig,service,route heketi-registry
# oc delete deploymentconfig,service,route heketi-registryCopy to Clipboard Copied! Toggle word wrap Toggle overflow Edit the heketi template.
Edit the HEKETI_USER_KEY and HEKETI_ADMIN_KEY parameters.
Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteIf a cluster has more than 1000 volumes refer to How to change the default PVS limit in Openshift Container Storage and add the required parameters before proceeding with the upgrade.
Add an ENV with the name HEKETI_LVM_WRAPPER and value
/usr/sbin/exec-on-host.- description: Heketi can use a wrapper to execute LVM commands, i.e. run commands in the host namespace instead of in the Gluster container. displayName: Wrapper for executing LVM commands name: HEKETI_LVM_WRAPPER value: /usr/sbin/exec-on-host
- description: Heketi can use a wrapper to execute LVM commands, i.e. run commands in the host namespace instead of in the Gluster container. displayName: Wrapper for executing LVM commands name: HEKETI_LVM_WRAPPER value: /usr/sbin/exec-on-hostCopy to Clipboard Copied! Toggle word wrap Toggle overflow Add an ENV with the name HEKETI_DEBUG_UMOUNT_FAILURES and value
true.Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
Add an ENV with the name HEKETI_CLI_USER and value
admin. - Add an ENV with the name HEKETI_CLI_KEY and the same value provided for the ENV HEKETI_ADMIN_KEY.
Replace the value under IMAGE_VERSION with v3.11.5 or v3.11.8 depending on the version you want to upgrade to; an illustrative sketch of the edited template follows this list.
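The following is an illustrative sketch of how the edited entries in the heketi template might look after the changes described above; the display names and placeholder values are indicative only, and the exact layout depends on your template:
- displayName: Heketi User Secret
  name: HEKETI_USER_KEY
  value: <heketiuserkey>
- displayName: Heketi Administrator Secret
  name: HEKETI_ADMIN_KEY
  value: <adminkey>
- description: Heketi can use a wrapper to execute LVM commands, i.e. run commands in the host namespace instead of in the Gluster container.
  displayName: Wrapper for executing LVM commands
  name: HEKETI_LVM_WRAPPER
  value: /usr/sbin/exec-on-host
- displayName: Log reasons for failed umounts
  name: HEKETI_DEBUG_UMOUNT_FAILURES
  value: "true"
- displayName: Heketi CLI user
  name: HEKETI_CLI_USER
  value: admin
- displayName: Heketi CLI key
  name: HEKETI_CLI_KEY
  value: <adminkey>
- displayName: heketi container image version
  name: IMAGE_VERSION
  required: true
  value: v3.11.8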
Execute the following command to deploy the Heketi service, route, and deployment configuration which will be used to create persistent volumes for OpenShift:
oc process heketi | oc create -f -
# oc process heketi | oc create -f - service "heketi-registry" created route "heketi-registry" created deploymentconfig-registry "heketi" createdCopy to Clipboard Copied! Toggle word wrap Toggle overflow NoteIt is recommended that the
heketidbstoragevolume be tuned for db workloads. Newly installed Openshift Container Storage deployments tune the heketidbstorage volume automatically. For older deployments, follow the KCS article Planning to run containerized DB or nosql workloads on Openshift Container Storage? and perform the volume set operation for the volumeheketidbstorage.Execute the following command to verify that the containers are running:
oc get pods
# oc get podsCopy to Clipboard Copied! Toggle word wrap Toggle overflow For example:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
6.2.3.2. Upgrading the Red Hat Gluster Storage Registry Pods
The following commands must be executed on the client machine.
Following are the steps for updating a DaemonSet for glusterfs:
Execute the following steps to stop the Heketi pod to prevent it from accepting any new request for volume creation or volume deletion:
Execute the following command to access your project:
oc project <project_name>
# oc project <project_name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow For example:
oc project storage-project
# oc project storage-projectCopy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following command to get the
DeploymentConfig:
# oc get dc
Execute the following command to set heketi server to accept requests only from the local-client:
heketi-cli server mode set local-client
# heketi-cli server mode set local-clientCopy to Clipboard Copied! Toggle word wrap Toggle overflow Wait for the ongoing operations to complete and execute the following command to monitor if there are any ongoing operations:
heketi-cli server operations info
# heketi-cli server operations infoCopy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following command to reduce the replica count from 1 to 0. This brings down the Heketi pod:
oc scale dc <heketi_dc> --replicas=0
# oc scale dc <heketi_dc> --replicas=0Copy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following command to verify that the heketi pod is no longer present:
oc get pods
# oc get podsCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Execute the following command to find the DaemonSet name for gluster
oc get ds
# oc get dsCopy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following command to delete the DaemonSet:
oc delete ds <ds-name> --cascade=false
# oc delete ds <ds-name> --cascade=falseCopy to Clipboard Copied! Toggle word wrap Toggle overflow Using
--cascade=falseoption while deleting the old DaemonSet does not delete the glusterfs_registry pods but deletes only the DaemonSet. After deleting the old DaemonSet, you must load the new one. When you manually delete the old pods, the new pods which are created will have the configurations of the new DaemonSet.For example,
oc delete ds glusterfs-registry --cascade=false
# oc delete ds glusterfs-registry --cascade=false daemonset "glusterfs-registry" deletedCopy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following commands to verify all the old pods are up:
oc get pods
# oc get podsCopy to Clipboard Copied! Toggle word wrap Toggle overflow For example,
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following command to delete the old glusterfs template.
oc delete templates glusterfs
# oc delete templates glusterfsCopy to Clipboard Copied! Toggle word wrap Toggle overflow For example,
oc delete templates glusterfs
# oc delete templates glusterfs
template "glusterfs" deleted
Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
Check if the nodes are labelled with the appropriate label by using the following command:
oc get nodes -l glusterfs=registry-host
# oc get nodes -l glusterfs=registry-hostCopy to Clipboard Copied! Toggle word wrap Toggle overflow
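If a node that runs a Red Hat Gluster Storage pod is missing the label, it can be added manually; the label key and value follow the check above, and the node name is a placeholder:
# oc label nodes <node_name> glusterfs=registry-host --overwrite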
Execute the following command to register new glusterfs template.
oc create -f /usr/share/heketi/templates/glusterfs-template.yaml
# oc create -f /usr/share/heketi/templates/glusterfs-template.yamlCopy to Clipboard Copied! Toggle word wrap Toggle overflow For example,
oc create -f /usr/share/heketi/templates/glusterfs-template.yaml
# oc create -f /usr/share/heketi/templates/glusterfs-template.yaml template “glusterfs” createdCopy to Clipboard Copied! Toggle word wrap Toggle overflow Edit the glusterfs template.
Execute the following command:
oc edit template glusterfs
# oc edit template glusterfsCopy to Clipboard Copied! Toggle word wrap Toggle overflow Add the following lines under volume mounts:
- name: kernel-modules
  mountPath: "/usr/lib/modules"
  readOnly: true
- name: host-rootfs
  mountPath: "/rootfs"
Add the following lines under volumes:
- name: kernel-modules
  hostPath:
    path: "/usr/lib/modules"
- name: host-rootfs
  hostPath:
    path: "/"
Replace the value under IMAGE_VERSION with v3.11.5 or v3.11.8 depending on the version you want to upgrade to.
Execute the following commands to create the gluster DaemonSet:
oc process glusterfs | oc create -f -
# oc process glusterfs | oc create -f -Copy to Clipboard Copied! Toggle word wrap Toggle overflow For example,
oc process glusterfs | oc create -f -
# oc process glusterfs | oc create -f -
daemonset "glusterfs" created
Note: If a cluster has more than 1000 volumes, refer to How to change the default PVS limit in Openshift Container Storage and add the required parameters before proceeding with the upgrade.
Execute the following command to identify the old glusterfs_registry pods that need to be deleted:
oc get pods
# oc get podsCopy to Clipboard Copied! Toggle word wrap Toggle overflow For example,
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following command and ensure that the bricks are not more than 90% full:
df -kh | grep -v ^Filesystem | awk '{if(int($5)>90) print $0}'# df -kh | grep -v ^Filesystem | awk '{if(int($5)>90) print $0}'Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteIf the bricks are close to 100% utilization, then the Logical Volume Manager(LVM) activation for these bricks may take a long time or can get stuck once the pod or node is rebooted. It is advised to bring down the utilization of that brick or expand the physical volume(PV) that is using the logical volume(LV).
Note: The df command is not applicable to bricks that belong to a Block Hosting Volume (BHV). On a BHV, the used size of the bricks reported by the df command is the aggregate size of the block volumes of that Gluster volume, not the size of the data that resides in the block volumes. For more information, refer to How To Identify Block Volumes and Block Hosting Volumes in Openshift Container Storage.
Execute the following command to delete the old glusterfs-registry pods.
The glusterfs-registry pods must follow a rolling upgrade: ensure that the new pod is running before you delete the next old glusterfs-registry pod. Only the OnDelete DaemonSet update strategy is supported; with this strategy, after you update a DaemonSet template, new DaemonSet pods are created only when you manually delete the old DaemonSet pods.
To delete the old glusterfs-registry pods, execute the following command:
oc delete pod <gluster_pod>
# oc delete pod <gluster_pod>Copy to Clipboard Copied! Toggle word wrap Toggle overflow For example,
oc delete pod glusterfs-0vcf3
# oc delete pod glusterfs-0vcf3
pod "glusterfs-0vcf3" deleted
Note: Before deleting the next pod, a self-heal check must be performed:
Run the following command to access shell on glusterfs-registry pods:
oc rsh <gluster_pod_name>
# oc rsh <gluster_pod_name>
Run the following command to check the self-heal status of all the volumes:
for eachVolume in $(gluster volume list); do gluster volume heal $eachVolume info ; done | grep "Number of entries: [^0]$"
# for eachVolume in $(gluster volume list); do gluster volume heal $eachVolume info ; done | grep "Number of entries: [^0]$"Copy to Clipboard Copied! Toggle word wrap Toggle overflow
The delete pod command terminates the old pod and creates a new pod. Run # oc get pods -w and check that the Age of the new pod is recent and that its READY status is 1/1. The following example output shows the status progression from termination to creation of the pod.
# oc get pods -w NAME READY STATUS RESTARTS AGE glusterfs-0vcf3 1/1 Terminating 0 3d …Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Execute the following command to verify that the pods are running:
oc get pods
# oc get podsCopy to Clipboard Copied! Toggle word wrap Toggle overflow For example,
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following commands to verify if you have upgraded the pod to the latest version:
oc rsh <gluster_registry_pod_name> glusterd --version
# oc rsh <gluster_registry_pod_name> glusterd --versionCopy to Clipboard Copied! Toggle word wrap Toggle overflow For example:
oc rsh glusterfs-registry-4cpcc glusterd --version
# oc rsh glusterfs-registry-4cpcc glusterd --version glusterfs 6.0Copy to Clipboard Copied! Toggle word wrap Toggle overflow rpm -qa|grep gluster
# rpm -qa|grep glusterCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Check the Red Hat Gluster Storage op-version by executing the following command on one of the glusterfs-registry pods.
gluster vol get all cluster.op-version
# gluster vol get all cluster.op-versionCopy to Clipboard Copied! Toggle word wrap Toggle overflow After you upgrade the Gluster pods, ensure that you set Heketi back to operational mode:
Scale up the DC (Deployment Configuration).
oc scale dc <heketi_dc> --replicas=1
# oc scale dc <heketi_dc> --replicas=1Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Set the cluster.op-version to 70200 on any one of the pods:
NoteEnsure all the glusterfs-registry pods are updated before changing the cluster.op-version.
gluster volume set all cluster.op-version 70200
# gluster volume set all cluster.op-version 70200Copy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following steps to enable server.tcp-user-timeout on all volumes.
Note: The "server.tcp-user-timeout" option specifies the maximum amount of time (in seconds) that data transmitted by the application can remain unacknowledged by the brick.
It is used to detect forced disconnections and dead connections (for example, when a node dies unexpectedly or a firewall is activated) early, and makes it possible for applications to reduce the overall failover time.
List the glusterfs pod using the following command:
oc get pods
# oc get podsCopy to Clipboard Copied! Toggle word wrap Toggle overflow For example:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Remote shell into one of the glusterfs-registry pods. For example:
oc rsh glusterfs-registry-g6vd9
# oc rsh glusterfs-registry-g6vd9Copy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following command:
for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
# for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; doneCopy to Clipboard Copied! Toggle word wrap Toggle overflow For example:
for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done volume1 volume set: success volume2 volume set: success
# for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done volume1 volume set: success volume2 volume set: successCopy to Clipboard Copied! Toggle word wrap Toggle overflow
If a gluster-block-registry-provisioner pod already exists, then delete it by executing the following commands:
oc delete dc <gluster-block-registry-dc>
# oc delete dc <gluster-block-registry-dc>Copy to Clipboard Copied! Toggle word wrap Toggle overflow For example:
oc delete dc glusterblock-registry-provisioner-dc
# oc delete dc glusterblock-registry-provisioner-dcCopy to Clipboard Copied! Toggle word wrap Toggle overflow Delete the following resources from the old pod
oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner oc delete serviceaccounts glusterblock-provisioner oc delete clusterrolebindings.authorization.openshift.io glusterblock-provisioner
# oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner # oc delete serviceaccounts glusterblock-provisioner serviceaccount "glusterblock-provisioner" deleted # oc delete clusterrolebindings.authorization.openshift.io glusterblock-provisionerCopy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following commands to deploy the gluster-block provisioner:
# sed -e 's/${NAMESPACE}/<NAMESPACE>/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | sed -e 's/<VERSION>/<NEW-VERSION>/' | oc create -f -
- <VERSION> is the existing version of OpenShift Container Storage.
- <NEW-VERSION> is either 3.11.5 or 3.11.8, depending on the version you are upgrading to.
oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:<NAMESPACE>:glusterblock-provisioner
# oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:<NAMESPACE>:glusterblock-provisionerCopy to Clipboard Copied! Toggle word wrap Toggle overflow For example:
`sed -e 's/${NAMESPACE}/storage-project/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | sed -e 's/3.11.4/3.11.8/' | oc create -f -`sed -e 's/${NAMESPACE}/storage-project/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | sed -e 's/3.11.4/3.11.8/' | oc create -f -Copy to Clipboard Copied! Toggle word wrap Toggle overflow oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:storage-project:glusterblock-provisioner
# oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:storage-project:glusterblock-provisionerCopy to Clipboard Copied! Toggle word wrap Toggle overflow
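Once the provisioner is deployed, you can confirm that the new gluster-block provisioner pod comes up; the pod name and age shown here are illustrative:
# oc get pods | grep glusterblock
glusterblock-provisioner-dc-1-xxxxx    1/1    Running   0          1m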
Brick multiplexing is a feature that allows adding multiple bricks into one process. This reduces resource consumption and allows us to run more bricks than before with the same memory consumption. It is enabled by default from Container-Native Storage 3.6 onward. During an upgrade from Container-Native Storage 3.10 to Red Hat Openshift Container Storage 3.11, to turn brick multiplexing on, execute the following commands:
To exec into the Gluster pod, execute the following command and rsh into any of the gluster pods:
oc rsh <gluster_pod_name>
# oc rsh <gluster_pod_name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Verify the brick multiplex status:
gluster v get all all
# gluster v get all allCopy to Clipboard Copied! Toggle word wrap Toggle overflow If it is disabled, then execute the following command to enable brick multiplexing:
NoteEnsure that all volumes are in a stop state or no bricks are running while brick multiplexing is enabled.
gluster volume set all cluster.brick-multiplex on
# gluster volume set all cluster.brick-multiplex onCopy to Clipboard Copied! Toggle word wrap Toggle overflow For example:
oc rsh glusterfs-registry-g6vd9
# oc rsh glusterfs-registry-g6vd9 sh-4.2# gluster volume set all cluster.brick-multiplex on Brick-multiplexing is supported only for container workloads (Independent or Converged mode). Also it is advised to make sure that either all volumes are in stopped state or no bricks are running before this option is modified.Do you still want to continue? (y/n) y volume set: successCopy to Clipboard Copied! Toggle word wrap Toggle overflow List all the volumes in the trusted storage pool. This step is only required if the volume set operation is performed:
For example:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Restart all the volumes. This step is only required if the volume set operation is performed along with the previous step:
gluster vol stop <VOLNAME> gluster vol start <VOLNAME>
# gluster vol stop <VOLNAME> # gluster vol start <VOLNAME>Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Support for S3 compatible Object Store in Red Hat Openshift Container Storage is under technology preview. To enable S3 compatible object store, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html/operations_guide/s3_object_store.
Note: After upgrading the glusterfs registry pods, proceed with the steps listed in Section 6.3, “Starting the Heketi Pods” to bring back your heketi pod, and then proceed with the steps listed in Section 6.4, “Upgrading the client on Red Hat OpenShift Container Platform nodes” to upgrade the client on the Red Hat OpenShift Container Platform nodes.
6.2.4. Upgrading if existing version deployed by using Ansible
6.2.4.1. Upgrading Heketi Server
The following commands must be executed on the client machine.
Running "yum update cns-deploy -y" is not required if OCS 3.10 was deployed via Ansible.
Execute the following steps to stop the Heketi pod to prevent it from accepting any new request for volume creation or volume deletion:
Execute the following command to access your project:
oc project <project_name>
# oc project <project_name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow For example:
oc project storage-project
# oc project storage-projectCopy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following command to get the
DeploymentConfig:
# oc get dc
Execute the following command to set heketi server to accept requests only from the local-client:
heketi-cli server mode set local-client
# heketi-cli server mode set local-clientCopy to Clipboard Copied! Toggle word wrap Toggle overflow Wait for the ongoing operations to complete and execute the following command to monitor if there are any ongoing operations:
heketi-cli server operations info
# heketi-cli server operations infoCopy to Clipboard Copied! Toggle word wrap Toggle overflow
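The command should report that no operations are in flight before you continue. A representative output is shown below; the exact fields can vary with the heketi version, so treat this only as an illustration:
# heketi-cli server operations info
Operation Counts:
  Total: 0
  In-Flight: 0
  New: 0
  Stale: 0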
Backup the Heketi database file
heketi-cli db dump > heketi-db-dump-$(date -I).json
# heketi-cli db dump > heketi-db-dump-$(date -I).json
Note: The JSON file that is created can be used to restore the Heketi database and should therefore be stored on persistent storage of your choice.
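Optionally, you can confirm that the dump is complete, well-formed JSON before continuing; this assumes python is available on the client machine:
# python -m json.tool heketi-db-dump-$(date -I).json > /dev/null && echo "dump is valid JSON"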
Execute the following command to update the heketi client packages. Update the heketi-client package on all the OCP nodes where it is installed. Newer installations may not have the heketi-client rpm installed on any OCP nodes:
# yum update heketi-client -yCopy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following command to get the current HEKETI_ADMIN_KEY.
The OCS admin can choose to set any phrase for user key as long as it is not used by their infrastructure. It is not used by any of the OCS default installed resources.
# oc get secret heketi-registry-admin-secret -o jsonpath='{.data.key}'|base64 -d;echo
If the HEKETI_USER_KEY was set previously, you can obtain it by using the following command:
# oc describe pod <heketi-pod>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following step to edit the template:
If the existing template has IMAGE_NAME, then edit the template to change the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, HEKETI_ROUTE, IMAGE_NAME, CLUSTER_NAME and HEKETI_LVM_WRAPPER as shown in the example below.
If the existing template has IMAGE_NAME and IMAGE_VERSION as two parameters, then edit the template to change the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, HEKETI_ROUTE, IMAGE_NAME, IMAGE_VERSION, CLUSTER_NAME and HEKETI_LVM_WRAPPER as shown in the example below.
Note: If a cluster has more than 1000 volumes, refer to How to change the default PVS limit in Openshift Container Storage and add the required parameters before proceeding with the upgrade.
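The original example listings are not reproduced here. As an illustration only, the image, cluster, route, and wrapper related parameters generally take the following shape after editing; the image tag follows the valid images listed at the start of this chapter, while the cluster name, route, and display names are assumptions that must be adjusted to your deployment:
- displayName: heketi container image name
  name: IMAGE_NAME
  required: true
  value: registry.redhat.io/rhgs3/rhgs-volmanager-rhel7:v3.11.8
- displayName: GlusterFS cluster name
  name: CLUSTER_NAME
  value: registry
- displayName: heketi route name
  name: HEKETI_ROUTE
  value: heketi-registry
- description: Heketi can use a wrapper to execute LVM commands, i.e. run commands in the host namespace instead of in the Gluster container.
  displayName: Wrapper for executing LVM commands
  name: HEKETI_LVM_WRAPPER
  value: /usr/sbin/exec-on-host
If the template has IMAGE_NAME and IMAGE_VERSION as two separate parameters, the IMAGE_NAME value is the untagged image (registry.redhat.io/rhgs3/rhgs-volmanager-rhel7) and IMAGE_VERSION carries the tag, for example v3.11.8.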
Execute the following command to delete the deployment configuration, service, and route for heketi:
oc delete deploymentconfig,service,route heketi-registry
# oc delete deploymentconfig,service,route heketi-registryCopy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following command to deploy the Heketi service, route, and deployment configuration which will be used to create persistent volumes for OpenShift:
oc process heketi | oc create -f -
# oc process heketi | oc create -f - service "heketi-registry" created route "heketi-registry" created deploymentconfig-registry "heketi" createdCopy to Clipboard Copied! Toggle word wrap Toggle overflow NoteIt is recommended that the
heketidbstoragevolume be tuned for db workloads. Newly installed Openshift Container Storage deployments tune the heketidbstorage volume automatically. For older deployments, follow the KCS article Planning to run containerized DB or nosql workloads on Openshift Container Storage? and perform the volume set operation for the volumeheketidbstorage.Execute the following command to verify that the containers are running:
oc get pods
# oc get podsCopy to Clipboard Copied! Toggle word wrap Toggle overflow For example:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
6.2.4.2. Upgrading the Red Hat Gluster Storage Registry Pods
The following commands must be executed on the client machine.
Following are the steps for updating a DaemonSet for glusterfs:
Execute the following steps to stop the Heketi pod to prevent it from accepting any new request for volume creation or volume deletion:
Execute the following command to access your project:
oc project <project_name>
# oc project <project_name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow For example:
oc project storage-project
# oc project storage-projectCopy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following command to get the
DeploymentConfig:oc get dc
# oc get dcCopy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following command to set heketi server to accept requests only from the local-client:
heketi-cli server mode set local-client
# heketi-cli server mode set local-clientCopy to Clipboard Copied! Toggle word wrap Toggle overflow Wait for the ongoing operations to complete and execute the following command to monitor if there are any ongoing operations:
heketi-cli server operations info
# heketi-cli server operations infoCopy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following command to reduce the replica count from 1 to 0. This brings down the Heketi pod:
oc scale dc <heketi_dc> --replicas=0
# oc scale dc <heketi_dc> --replicas=0Copy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following command to verify that the heketi pod is no longer present:
oc get pods
# oc get podsCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Execute the following command to find the DaemonSet name for gluster
oc get ds
# oc get dsCopy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following command to delete the DaemonSet:
oc delete ds <ds-name> --cascade=false
# oc delete ds <ds-name> --cascade=falseCopy to Clipboard Copied! Toggle word wrap Toggle overflow Using
--cascade=falseoption while deleting the old DaemonSet does not delete the glusterfs_registry pods but deletes only the DaemonSet. After deleting the old DaemonSet, you must load the new one. When you manually delete the old pods, the new pods which are created will have the configurations of the new DaemonSet.For example,
oc delete ds glusterfs-registry --cascade=false
# oc delete ds glusterfs-registry --cascade=false daemonset "glusterfs-registry" deletedCopy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following commands to verify all the old pods are up:
oc get pods
# oc get podsCopy to Clipboard Copied! Toggle word wrap Toggle overflow For example,
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following command to delete the old glusterfs template.
oc delete templates glusterfs
# oc delete templates glusterfsCopy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following command to register new glusterfs template.
oc create -f /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/files/glusterfs-template.yml
# oc create -f /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/files/glusterfs-template.yml template "glusterfs" createdCopy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following command to edit the old glusterfs template.
If the template has IMAGE_NAME, then update the glusterfs template as following. For example:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow If the template has IMAGE_NAME and IMAGE_VERSION as two separate parameters, then update the glusterfs template as following. For example:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Note- Ensure that the CLUSTER_NAME variable is set to the correct value
- If a cluster has more than 1000 volumes refer to How to change the default PVS limit in Openshift Container Storage and add the required parameters before proceeding with the upgrade.
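The original example listings are not reproduced here. As an illustration only, the image-related parameters of the glusterfs template generally take the following shape after editing; the image and tag follow the valid images listed at the start of this chapter, and the display names are indicative:
- displayName: GlusterFS container image name
  name: IMAGE_NAME
  required: true
  value: registry.redhat.io/rhgs3/rhgs-server-rhel7
- displayName: GlusterFS container image version
  name: IMAGE_VERSION
  required: true
  value: v3.11.8
If the template carries only IMAGE_NAME, the tag is appended to the image value instead, for example registry.redhat.io/rhgs3/rhgs-server-rhel7:v3.11.8.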
Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
Check if the nodes are labelled with the appropriate label by using the following command:
oc get nodes -l glusterfs=registry-host
# oc get nodes -l glusterfs=registry-hostCopy to Clipboard Copied! Toggle word wrap Toggle overflow
- name: kernel-modules
  mountPath: "/usr/lib/modules"
  readOnly: true
- name: host-rootfs
  mountPath: "/rootfs"
- name: kernel-modules
  hostPath:
    path: "/usr/lib/modules"
- name: host-rootfs
  hostPath:
    path: "/"
- displayName: heketi container image name
  name: IMAGE_NAME
  required: true
  value: registry.redhat.io/rhgs3/rhgs-volmanager-rhel7
- displayName: heketi container image version
  name: IMAGE_VERSION
  required: true
  value: v3.11.8
Execute the following commands to create the gluster DaemonSet:
oc process glusterfs | oc create -f -
# oc process glusterfs | oc create -f -Copy to Clipboard Copied! Toggle word wrap Toggle overflow For example,
oc process glusterfs | oc create -f -
# oc process glusterfs | oc create -f -
daemonset "glusterfs-registry" created
Execute the following command to identify the old glusterfs_registry pods that need to be deleted:
oc get pods
# oc get podsCopy to Clipboard Copied! Toggle word wrap Toggle overflow For example,
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following command and ensure that the bricks are not more than 90% full:
df -kh | grep -v ^Filesystem | awk '{if(int($5)>90) print $0}'# df -kh | grep -v ^Filesystem | awk '{if(int($5)>90) print $0}'Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteIf the bricks are close to 100% utilization, then the Logical Volume Manager(LVM) activation for these bricks may take a long time or can get stuck once the pod or node is rebooted. It is advised to bring down the utilization of that brick or expand the physical volume(PV) that is using the logical volume(LV).
Note: The df command is not applicable to bricks that belong to a Block Hosting Volume (BHV). On a BHV, the used size of the bricks reported by the df command is the aggregate size of the block volumes of that Gluster volume, not the size of the data that resides in the block volumes. For more information, refer to How To Identify Block Volumes and Block Hosting Volumes in Openshift Container Storage.
Execute the following command to delete the old glusterfs-registry pods.
The glusterfs-registry pods must follow a rolling upgrade: ensure that the new pod is running before you delete the next old glusterfs-registry pod. Only the OnDelete DaemonSet update strategy is supported; with this strategy, after you update a DaemonSet template, new DaemonSet pods are created only when you manually delete the old DaemonSet pods.
To delete the old glusterfs-registry pods, execute the following command:
oc delete pod <gluster_pod>
# oc delete pod <gluster_pod>Copy to Clipboard Copied! Toggle word wrap Toggle overflow For example,
oc delete pod glusterfs-registry-4cpcc
# oc delete pod glusterfs-registry-4cpcc
pod "glusterfs-registry-4cpcc" deleted
Note: Before deleting the next pod, a self-heal check must be performed:
Run the following command to access shell on glusterfs-registry pods:
oc rsh <gluster_pod_name>
# oc rsh <gluster_pod_name>
Run the following command to check the self-heal status of all the volumes:
for eachVolume in $(gluster volume list); do gluster volume heal $eachVolume info ; done | grep "Number of entries: [^0]$"
# for eachVolume in $(gluster volume list); do gluster volume heal $eachVolume info ; done | grep "Number of entries: [^0]$"Copy to Clipboard Copied! Toggle word wrap Toggle overflow
The delete pod command terminates the old pod and creates a new pod. Run # oc get pods -w and check that the Age of the new pod is recent and that its READY status is 1/1. The following example output shows the status progression from termination to creation of the pod.
# oc get pods -w NAME READY STATUS RESTARTS AGE glusterfs-registry-4cpcc 1/1 Terminating 0 3d …Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Execute the following command to verify that the pods are running:
oc get pods
# oc get podsCopy to Clipboard Copied! Toggle word wrap Toggle overflow For example,
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following commands to verify if you have upgraded the pod to the latest version:
oc rsh <gluster_registry_pod_name> glusterd --version
# oc rsh <gluster_registry_pod_name> glusterd --versionCopy to Clipboard Copied! Toggle word wrap Toggle overflow For example:
oc rsh glusterfs-registry-abmqa glusterd --version
# oc rsh glusterfs-registry-abmqa glusterd --version glusterfs 6.0Copy to Clipboard Copied! Toggle word wrap Toggle overflow rpm -qa|grep gluster
# rpm -qa|grep glusterCopy to Clipboard Copied! Toggle word wrap Toggle overflow Check the Red Hat Gluster Storage op-version by executing the following command on one of the glusterfs-registry pods.
gluster vol get all cluster.op-version
# gluster vol get all cluster.op-versionCopy to Clipboard Copied! Toggle word wrap Toggle overflow After you upgrade the Gluster pods, ensure that you set Heketi back to operational mode:
Scale up the DC (Deployment Configuration).
oc scale dc <heketi_dc> --replicas=1
# oc scale dc <heketi_dc> --replicas=1Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Set the cluster.op-version to 70200 on any one of the pods:
NoteEnsure all the glusterfs-registry pods are updated before changing the cluster.op-version.
gluster volume set all cluster.op-version 70200
# gluster volume set all cluster.op-version 70200Copy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following steps to enable server.tcp-user-timeout on all volumes.
Note: The "server.tcp-user-timeout" option specifies the maximum amount of time (in seconds) that data transmitted by the application can remain unacknowledged by the brick.
It is used to detect forced disconnections and dead connections (for example, when a node dies unexpectedly or a firewall is activated) early, and makes it possible for applications to reduce the overall failover time.
List the glusterfs pod using the following command:
oc get pods
# oc get podsCopy to Clipboard Copied! Toggle word wrap Toggle overflow For example:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Remote shell into one of the glusterfs-registry pods. For example:
oc rsh glusterfs-registry-g6vd9
# oc rsh glusterfs-registry-g6vd9Copy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following command:
for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
# for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; doneCopy to Clipboard Copied! Toggle word wrap Toggle overflow For example:
for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done volume1 volume set: success volume2 volume set: success
# for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done volume1 volume set: success volume2 volume set: successCopy to Clipboard Copied! Toggle word wrap Toggle overflow
If a gluster-block-registry-provisioner pod already exists, then delete it by executing the following commands:
oc delete dc <gluster-block-registry-dc>
# oc delete dc <gluster-block-registry-dc>Copy to Clipboard Copied! Toggle word wrap Toggle overflow For example:
oc delete dc glusterblock-registry-provisioner-dc
# oc delete dc glusterblock-registry-provisioner-dcCopy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following command to delete the old glusterblock provisioner template.
oc delete templates glusterblock-provisioner
# oc delete templates glusterblock-provisionerCopy to Clipboard Copied! Toggle word wrap Toggle overflow Create a glusterblock provisioner template. For example:
oc create -f /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/files/glusterblock-provisioner.yml
# oc create -f /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/files/glusterblock-provisioner.yml template.template.openshift.io/glusterblock-provisioner createdCopy to Clipboard Copied! Toggle word wrap Toggle overflow Depending on the OCP version, edit the glusterblock-provisioner template to change the IMAGE_NAME and NAMESPACE.
Copy to Clipboard Copied! Toggle word wrap Toggle overflow If the template has IMAGE_NAME and IMAGE_VERSION as two separate parameters, then update the glusterblock-provisioner template as following.
For example:Copy to Clipboard Copied! Toggle word wrap Toggle overflow
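The original example listings are not reproduced here. As an illustration only, the edited parameters of the glusterblock-provisioner template generally take the following shape; the image follows the valid images listed at the start of this chapter, and the namespace value shown is the registry namespace used elsewhere in this section:
- displayName: glusterblock provisioner container image name
  name: IMAGE_NAME
  required: true
  value: registry.redhat.io/rhgs3/rhgs-gluster-block-prov-rhel7
- displayName: glusterblock provisioner container image version
  name: IMAGE_VERSION
  required: true
  value: v3.11.8
- displayName: glusterblock provisioner namespace
  name: NAMESPACE
  required: true
  value: infra-storage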
Delete the following resources from the old pod
oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner oc delete serviceaccounts glusterblock-registry-provisioner oc delete clusterrolebindings.authorization.openshift.io glusterblock-registry-provisioner
# oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner # oc delete serviceaccounts glusterblock-registry-provisioner # oc delete clusterrolebindings.authorization.openshift.io glusterblock-registry-provisionerCopy to Clipboard Copied! Toggle word wrap Toggle overflow Before running oc process determine the correct
provisionername. If there are more than onegluster block provisionerrunning in your cluster the names must differ from all otherprovisioners.
For example,
- If there are 2 or more provisioners, the name should be gluster.org/glusterblock-<namespace>, where <namespace> is replaced by the namespace that the provisioner is deployed in.
- If there is only one provisioner, installed prior to 3.11.8, gluster.org/glusterblock is sufficient. If the name currently in use already has a unique namespace suffix, reuse the existing name.
After editing the template, execute the following command to create the deployment configuration:
oc process glusterblock-provisioner -o yaml | oc create -f -
# oc process glusterblock-provisioner -o yaml | oc create -f -Copy to Clipboard Copied! Toggle word wrap Toggle overflow For example:
oc process glusterblock-provisioner -o yaml | oc create -f -
# oc process glusterblock-provisioner -o yaml | oc create -f - clusterrole.authorization.openshift.io/glusterblock-provisioner-runner created serviceaccount/glusterblock-registry-provisioner created clusterrolebinding.authorization.openshift.io/glusterblock-registry-provisioner created deploymentconfig.apps.openshift.io/glusterblock-registry-provisioner-dc createdCopy to Clipboard Copied! Toggle word wrap Toggle overflow Brick multiplexing is a feature that allows adding multiple bricks into one process. This reduces resource consumption and allows us to run more bricks than before with the same memory consumption. It is enabled by default from Container-Native Storage 3.6 onward. During an upgrade from Container-Native Storage 3.10 to Red Hat Openshift Container Storage 3.11, to turn brick multiplexing on, execute the following commands:
To exec into the Gluster pod, execute the following command and rsh into any of the gluster pods:
oc rsh <gluster_pod_name>
# oc rsh <gluster_pod_name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Verify the brick multiplex status:
gluster v get all all
# gluster v get all allCopy to Clipboard Copied! Toggle word wrap Toggle overflow If it is disabled, then execute the following command to enable brick multiplexing:
NoteEnsure that all volumes are in a stop state or no bricks are running while brick multiplexing is enabled.
gluster volume set all cluster.brick-multiplex on
# gluster volume set all cluster.brick-multiplex onCopy to Clipboard Copied! Toggle word wrap Toggle overflow For example:
oc rsh glusterfs-registry-g6vd9
# oc rsh glusterfs-registry-g6vd9 sh-4.2# gluster volume set all cluster.brick-multiplex on Brick-multiplexing is supported only for container workloads (Independent or Converged mode). Also it is advised to make sure that either all volumes are in stopped state or no bricks are running before this option is modified.Do you still want to continue? (y/n) y volume set: successCopy to Clipboard Copied! Toggle word wrap Toggle overflow List all the volumes in the trusted storage pool. This step is only required if the volume set operation is performed:
For example:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Restart all the volumes. This step is only required if the volume set operation is performed along with the previous step:
gluster vol stop <VOLNAME> gluster vol start <VOLNAME>
# gluster vol stop <VOLNAME> # gluster vol start <VOLNAME>Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Support for S3 compatible Object Store in Red Hat Openshift Container Storage is under technology preview. To enable S3 compatible object store, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html/operations_guide/s3_object_store.
Note: After upgrading the glusterfs registry pods, proceed with the steps listed in Section 6.3, “Starting the Heketi Pods” to bring back your heketi pod, and then proceed with the steps listed in Section 6.4, “Upgrading the client on Red Hat OpenShift Container Platform nodes” to upgrade the client on the Red Hat OpenShift Container Platform nodes.
All storage classes that use gluster block volume provisioning must match exactly one of the provisioner names in the cluster. To check the list of storage classes that refer to a block provisioner in a given namespace, run the following command:
# oc get sc -o custom-columns=NAME:.metadata.name,PROV:.provisioner,RSNS:.parameters.restsecretnamespace | grep 'gluster.org/glusterblock' | grep <namespace>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example:
oc get sc -o custom-columns=NAME:.metadata.name,PROV:.provisioner,RSNS:.parameters.restsecretnamespace | grep 'gluster.org/glusterblock' | grep infra-storage
# oc get sc -o custom-columns=NAME:.metadata.name,PROV:.provisioner,RSNS:.parameters.restsecretnamespace | grep 'gluster.org/glusterblock' | grep infra-storage glusterfs-registry-block gluster.org/glusterblock infra-storageCopy to Clipboard Copied! Toggle word wrap Toggle overflow Check each storage class
provisioner name; if it does not match the block provisioner name configured for that namespace, it must be updated. If the block provisioner name already matches the configured provisioner name, nothing else needs to be done. Use the list generated above and include all storage class names where the provisioner name must be updated.
For every storage class in this list, do the following:
# oc get sc -o yaml <storageclass> > storageclass-to-edit.yaml # oc delete sc <storageclass> # sed 's,gluster.org/glusterblock$,gluster.org/glusterblock-<namespace>,' storageclass-to-edit.yaml | oc create -f -Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example:
oc get sc -o yaml glusterfs-registry-block > storageclass-to-edit.yaml oc delete sc glusterfs-registry-block sed 's,gluster.org/glusterblock$,gluster.org/glusterblock-infra-storage,' storageclass-to-edit.yaml | oc create -f -
# oc get sc -o yaml glusterfs-registry-block > storageclass-to-edit.yaml # oc delete sc glusterfs-registry-block storageclass.storage.k8s.io "glusterfs-registry-block" deleted # sed 's,gluster.org/glusterblock$,gluster.org/glusterblock-infra-storage,' storageclass-to-edit.yaml | oc create -f - storageclass.storage.k8s.io/glusterfs-registry-block createdCopy to Clipboard Copied! Toggle word wrap Toggle overflow
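As with the application storage class, a quick check confirms that the recreated registry storage class now carries the namespaced provisioner name; the names follow the infra-storage example above:
# oc get sc glusterfs-registry-block -o custom-columns=NAME:.metadata.name,PROV:.provisioner
NAME                       PROV
glusterfs-registry-block   gluster.org/glusterblock-infra-storage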
6.3. Starting the Heketi Pods
Execute the following commands on the client machine for both the glusterfs and registry namespaces.
Execute the following command to navigate to the project where the Heketi pods are running:
oc project <project_name>
# oc project <project_name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow For example for glusterfs namespace:
oc project glusterfs
# oc project glusterfsCopy to Clipboard Copied! Toggle word wrap Toggle overflow For example for registry namespace:
oc project glusterfs-registry
# oc project glusterfs-registryCopy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following command to get the
DeploymentConfig:oc get dc
# oc get dcCopy to Clipboard Copied! Toggle word wrap Toggle overflow For example, on a glusterfs-registry project:
oc get dc
# oc get dc NAME REVISION DESIRED CURRENT TRIGGERED BY glusterblock-storage-provisioner-dc 1 1 0 config heketi-storage 4 1 1 configCopy to Clipboard Copied! Toggle word wrap Toggle overflow For example, on a glusterfs project:
oc get dc
# oc get dc NAME REVISION DESIRED CURRENT TRIGGERED BY glusterblock-storage-provisioner-dc 1 1 0 config heketi-storage 4 1 1 configCopy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following command to increase the replica count from 0 to 1. This brings back the Heketi pod:
oc scale dc <heketi_dc> --replicas=1
# oc scale dc <heketi_dc> --replicas=1Copy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following command to verify that the heketi pod is present in both glusterfs and glusterfs-registry namespace:
oc get pods
# oc get podsCopy to Clipboard Copied! Toggle word wrap Toggle overflow For example for glusterfs:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow For example for registry pods:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
6.4. Upgrading the client on Red Hat OpenShift Container Platform nodes
Execute the following commands on each of the nodes:
To drain the pod, execute the following command on the master node (or any node with cluster-admin access):
oc adm drain <node_name> --ignore-daemonsets
# oc adm drain <node_name> --ignore-daemonsetsCopy to Clipboard Copied! Toggle word wrap Toggle overflow To check if all the pods are drained, execute the following command on the master node (or any node with cluster-admin access) :
oc get pods --all-namespaces --field-selector=spec.nodeName=<node_name>
# oc get pods --all-namespaces --field-selector=spec.nodeName=<node_name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Execute the following command to upgrade the client node to the latest glusterfs-fuse version:
yum update glusterfs-fuse
# yum update glusterfs-fuse
To make the node schedulable for pods again, execute the following command on the master node (or any node with cluster-admin access):
oc adm manage-node --schedulable=true <node_name>
# oc adm manage-node --schedulable=true <node_name>Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create and add the following content to the multipath.conf file:
Note: If the multipath.conf file was already configured during a previous upgrade, no further change is required.
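The original multipath.conf listing is not reproduced here. The device section below is the configuration commonly documented for Red Hat Gluster Storage block devices and is provided only as a reference sketch; verify it against the block storage setup instructions for your release before applying it:
# LIO iSCSI
devices {
        device {
                vendor "LIO-ORG"
                user_friendly_names "yes"
                path_grouping_policy "failover"
                path_selector "round-robin 0"
                failback immediate
                path_checker "tur"
                prio "const"
                no_path_retry 120
        }
}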
Execute the following commands to start the multipath daemon and [re]load the multipath configuration:
systemctl start multipathd
# systemctl start multipathdCopy to Clipboard Copied! Toggle word wrap Toggle overflow systemctl reload multipathd
# systemctl reload multipathdCopy to Clipboard Copied! Toggle word wrap Toggle overflow