Chapter 8. Upgrading Your Red Hat Openshift Container Storage in Independent Mode
This chapter describes the procedures to follow to upgrade your independent mode environment.
Note: The new registry name registry.redhat.io is used throughout this guide. However, if you have not yet migrated to the new registry, replace all occurrences of registry.redhat.io with registry.access.redhat.com wherever applicable.
Follow the same upgrade procedure to upgrade your environment from Red Hat Openshift Container Storage in Independent Mode 3.11.0 and above to Red Hat Openshift Container Storage in Independent Mode 3.11.8. Ensure that the correct image and version numbers are configured before you start the upgrade process.
The valid images for Red Hat Openshift Container Storage 3.11.8 are listed below, followed by an optional check that the nodes can pull them:
- registry.redhat.io/rhgs3/rhgs-server-rhel7:v3.11.8
- registry.redhat.io/rhgs3/rhgs-volmanager-rhel7:v3.11.8
- registry.redhat.io/rhgs3/rhgs-gluster-block-prov-rhel7:v3.11.8
- registry.redhat.io/rhgs3/rhgs-s3-server-rhel7:v3.11.8
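To confirm that the nodes can pull the required images before you start, you can optionally log in to the registry and pull one of them manually. This is a sketch, not part of the documented procedure; it assumes the docker runtime used by OpenShift Container Platform 3.11 nodes and valid registry.redhat.io credentials:
# docker login registry.redhat.io    (prompts for Red Hat Customer Portal credentials)
# docker pull registry.redhat.io/rhgs3/rhgs-volmanager-rhel7:v3.11.8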
8.1. Prerequisites
Ensure the following prerequisites are met:
- Section 3.1.3, “Red Hat OpenShift Container Platform and Red Hat Openshift Container Storage Requirements”
- Configuring Port Access: https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/deployment_guide/#CRS_port_access
- Enabling Kernel Modules: https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/deployment_guide/#CRS_enable_kernel
- Starting and Enabling Services: https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/deployment_guide/#Start_enable_service
- Ensure that you have the supported versions of OpenShift Container Platform with Red Hat Gluster Storage Server and Red Hat Openshift Container Storage. For more information on supported versions, see Section 3.1.1, “Supported Versions”
If Heketi is running as a standalone service on one of the Red Hat Gluster Storage nodes, ensure that its port is open. By default, Heketi uses port 8080. To open this port, execute the following commands on the node where Heketi is running:
# firewall-cmd --zone=zone_name --add-port=8080/tcp
# firewall-cmd --zone=zone_name --add-port=8080/tcp --permanent
If Heketi is configured to listen on a different port, change the port number in the commands accordingly.
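For example, if Heketi is configured to listen on port 8443 (a hypothetical value; check the port setting in /etc/heketi/heketi.json), the commands would be:
# firewall-cmd --zone=zone_name --add-port=8443/tcp
# firewall-cmd --zone=zone_name --add-port=8443/tcp --permanent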
Ensure that brick multiplexing is enabled. You can check the brick multiplexing status with the following command:
# gluster v get all all
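The command above lists all cluster options. To inspect only the brick multiplexing option, and to enable it if it is off, you can use the following commands (a sketch; confirm that enabling brick multiplexing is appropriate for your cluster before changing the setting):
# gluster volume get all cluster.brick-multiplex
# gluster volume set all cluster.brick-multiplex on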
Run the following command on the master nodes to get the latest versions of the Ansible templates:
# yum update openshift-ansible
8.2. Upgrading nodes and pods in glusterfs group
Follow the steps in the sections ahead to upgrade your independent mode setup.
8.2.1. Upgrading the Red Hat Gluster Storage Cluster
To upgrade the Red Hat Gluster Storage cluster, see In-Service Software Upgrade.
8.2.2. Upgrading/Migration of Heketi in RHGS node
If Heketi is in an Openshift node, then skip this section and see Section 8.2.4.1, “Upgrading Heketi in Openshift node” instead.
- In OCS 3.11, upgrading Heketi in the RHGS node is not supported. Hence, you have to migrate Heketi to a new Heketi pod.
- Ensure that you migrate to the supported Heketi deployment now, as there might not be a migration path in future versions.
Ensure that the cns-deploy RPM is installed on the master node. It provides the template files necessary to set up the Heketi pod:
# subscription-manager repos --enable=rh-gluster-3-for-rhel-7-server-rpms
# yum install cns-deploy
Use the newly created containerized Red Hat Gluster Storage project on the master node:
# oc project <project-name>
For example:
# oc project gluster
Execute the following command on the master node to create the service account:
# oc create -f /usr/share/heketi/templates/heketi-service-account.yaml
serviceaccount/heketi-service-account created
Execute the following command on the master node to install the heketi template:
# oc create -f /usr/share/heketi/templates/heketi-template.yaml
template.template.openshift.io/heketi created
Verify that the template is created:
# oc get templates
NAME      DESCRIPTION                           PARAMETERS    OBJECTS
heketi    Heketi service deployment template    5 (3 blank)   3
Execute the following command on the master node to grant the heketi Service Account the necessary privileges:
# oc policy add-role-to-user edit system:serviceaccount:gluster:heketi-service-account
role "edit" added: "system:serviceaccount:gluster:heketi-service-account"
# oc adm policy add-scc-to-user privileged -z heketi-service-account
scc "privileged" added to: ["system:serviceaccount:gluster:heketi-service-account"]
On the RHGS node, where heketi is running, execute the following commands:
Create the heketidbstorage volume:
# heketi-cli volume create --size=2 --name=heketidbstorage
Mount the volume:
# mount -t glusterfs 192.168.11.192:heketidbstorage /mnt/
where 192.168.11.192 is one of the RHGS nodes.
Stop the heketi service:
# systemctl stop heketi
Disable the heketi service:
# systemctl disable heketi
Copy the heketi db to the heketidbstorage volume:
# cp /var/lib/heketi/heketi.db /mnt/
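Optionally, verify that the copy on the heketidbstorage volume matches the local database before unmounting. This is an extra safety check, not part of the documented steps:
# md5sum /var/lib/heketi/heketi.db /mnt/heketi.db
The two checksums should be identical.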
Unmount the volume:
# umount /mnt
Copy the following files from the heketi node to the master node:
# scp /etc/heketi/heketi.json topology.json /etc/heketi/heketi_key OCP_master_node:/root/
where OCP_master_node is the hostname of the master node.
On the master node, set the environment variables for the following three files that were copied from the heketi node. Add the following lines to the ~/.bashrc file and run the bash command to apply and save the changes:
export SSH_KEYFILE=heketi_key
export TOPOLOGY=topology.json
export HEKETI_CONFIG=heketi.json
Note: If you have changed the value of "keyfile" in /etc/heketi/heketi.json, change the file name here accordingly.
Execute the following command to create a secret to hold the configuration file:
# oc create secret generic heketi-config-secret --from-file=${SSH_KEYFILE} --from-file=${HEKETI_CONFIG} --from-file=${TOPOLOGY}
secret/heketi-config-secret created
Execute the following command to label the secret:
# oc label --overwrite secret heketi-config-secret glusterfs=heketi-config-secret heketi=config-secret
secret/heketi-config-secret labeled
Create the heketi-gluster-endpoints.yaml file with the IP addresses of all the glusterfs nodes. For example:
# cat heketi-gluster-endpoints.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: heketi-storage-endpoints
subsets:
- addresses:
  - ip: 192.168.11.208
  ports:
  - port: 1
- addresses:
  - ip: 192.168.11.176
  ports:
  - port: 1
- addresses:
  - ip: 192.168.11.192
  ports:
  - port: 1
In the above example, 192.168.11.208, 192.168.11.176, and 192.168.11.192 are the glusterfs nodes.
Execute the following command to create the endpoints:
# oc create -f ./heketi-gluster-endpoints.yaml
Execute the following command to create the service:
# oc create -f ./heketi-gluster-service.yaml
For Example:
# cat heketi-gluster-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: heketi-storage-endpoints
spec:
  ports:
  - port: 1
Execute the following command to deploy the Heketi service, route, and deployment configuration which will be used to create persistent volumes for OpenShift:
# oc process heketi | oc create -f -
service/heketi created
route.route.openshift.io/heketi created
deploymentconfig.apps.openshift.io/heketi created
Note: It is recommended that the heketidbstorage volume be tuned for db workloads. Newly installed Openshift Container Storage deployments tune the heketidbstorage volume automatically. For older deployments, follow the KCS article Planning to run containerized DB or nosql workloads on Openshift Container Storage? and perform the volume set operation for the volume heketidbstorage.
To verify that Heketi is migrated, execute the following command on the master node:
# oc rsh po/<heketi-pod-name>
For example:
# oc rsh po/heketi-1-p65c6
Execute the following command to check the cluster IDs:
# heketi-cli cluster list
From the output, verify that the cluster ID matches that of the old cluster.
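The output is similar to the following, where <cluster-id> is a placeholder; the ID reported for your environment should match the ID that the old Heketi service managed:
Clusters:
Id:<cluster-id> [file][block]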
8.2.3. Upgrading if existing version deployed using cns-deploy
8.2.3.1. Upgrading Heketi in Openshift node
The following commands must be executed on the client machine.
Execute the following command to update the heketi client and cns-deploy packages:
# yum update cns-deploy -y
# yum update heketi-client -y
Back up the Heketi database file:
# heketi-cli db dump > heketi-db-dump-$(date -I).json
Execute the following command to get the current HEKETI_ADMIN_KEY.
The OCS admin can choose to set any phrase for user key as long as it is not used by their infrastructure. It is not used by any of the OCS default installed resources.
# oc get secret <heketi-admin-secret-name> -o jsonpath='{.data.key}' | base64 -d; echo
Where <heketi-admin-secret-name> is the name of the heketi admin secret created by the user.
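If you are unsure of the secret name, you can list the secrets in the project and look for the Heketi admin secret. The exact name depends on how the cluster was deployed:
# oc get secrets | grep heketi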
Execute the following command to delete the heketi template.
# oc delete templates heketi
Execute the following command to install the heketi template.
# oc create -f /usr/share/heketi/templates/heketi-template.yaml
template "heketi" created
Execute the following command to grant the heketi Service Account the necessary privileges.
# oc policy add-role-to-user edit system:serviceaccount:<project_name>:heketi-service-account
# oc adm policy add-scc-to-user privileged -z heketi-service-account
For example,
# oc policy add-role-to-user edit system:serviceaccount:storage-project:heketi-service-account
# oc adm policy add-scc-to-user privileged -z heketi-service-account
Note: The service account used in the heketi pod needs to be privileged because the Heketi/rhgs-volmanager pod mounts the heketidbstorage Gluster volume as a "glusterfs" volume type and not as a PersistentVolume (PV).
As per the security-context-constraints regulations in OpenShift, the ability to mount volumes which are not of the type configMap, downwardAPI, emptyDir, hostPath, nfs, persistentVolumeClaim, or secret is granted only to accounts with the privileged Security Context Constraint (SCC).
Execute the following command to generate a new heketi configuration file.
# sed -e "s/\${HEKETI_EXECUTOR}/ssh/" -e "s#\${HEKETI_FSTAB}#/etc/fstab#" -e "s/\${SSH_PORT}/22/" -e "s/\${SSH_USER}/root/" -e "s/\${SSH_SUDO}/false/" -e "s/\${BLOCK_HOST_CREATE}/true/" -e "s/\${BLOCK_HOST_SIZE}/500/" "/usr/share/heketi/templates/heketi.json.template" > heketi.json
Note:
- The BLOCK_HOST_SIZE parameter controls the size (in GB) of the automatically created Red Hat Gluster Storage volumes hosting the gluster-block volumes (for more information, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/operations_guide/#Block_Storage). This default configuration will dynamically create block-hosting volumes of 500GB in size as more space is required.
- Alternatively, copy the file /usr/share/heketi/templates/heketi.json.template to heketi.json in the current directory and edit the new file directly, replacing each "${VARIABLE}" string with the required parameter.
Note: JSON formatting is strictly required (e.g. no trailing spaces, booleans in all lowercase).
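To catch formatting mistakes before creating the secret, you can optionally validate the generated file with the JSON parser shipped with Python on RHEL 7:
# python -m json.tool heketi.json > /dev/null && echo "heketi.json is valid JSON"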
Execute the following command to create a secret to hold the configuration file.
# oc create secret generic heketi-config-secret --from-file=private_key=${SSH_KEYFILE} --from-file=./heketi.json
Note: If the heketi-config-secret secret already exists, delete it and run the command again, as shown below.
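For example, assuming the secret name used in this procedure:
# oc delete secret heketi-config-secret
# oc create secret generic heketi-config-secret --from-file=private_key=${SSH_KEYFILE} --from-file=./heketi.json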
Execute the following command to delete the deployment configuration, service, and route for heketi:
# oc delete deploymentconfig,service,route heketi
Edit the heketi template.
Edit the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, and HEKETI_EXECUTOR parameters.
# oc edit template heketi parameters: - description: Set secret for those creating volumes as type user displayName: Heketi User Secret name: HEKETI_USER_KEY value: <heketiuserkey> - description: Set secret for administration of the Heketi service as user admin displayName: Heketi Administrator Secret name: HEKETI_ADMIN_KEY value: <adminkey> - description: Set the executor type, kubernetes or ssh displayName: heketi executor type name: HEKETI_EXECUTOR value: ssh - description: Set the fstab path, file that is populated with bricks that heketi creates displayName: heketi fstab path name: HEKETI_FSTAB value: /etc/fstab - description: Set the hostname for the route URL displayName: heketi route name name: HEKETI_ROUTE value: heketi-storage - displayName: heketi container image name name: IMAGE_NAME required: true value: registry.redhat.io/rhgs3/rhgs-volmanager-rhel7:v3.11.8 - description: A unique name to identify this heketi service, useful for running multiple heketi instances displayName: GlusterFS cluster name name: CLUSTER_NAME value: storage
Note: If a cluster has more than 1000 volumes, refer to How to change the default PVS limit in Openshift Container Storage and add the required parameters before proceeding with the upgrade.
Replace the value under IMAGE_NAME with v3.11.5 or v3.11.8, depending on the version you want to upgrade to. For example:
- displayName: heketi container image name
  name: IMAGE_NAME
  required: true
  value: registry.redhat.io/rhgs3/rhgs-volmanager-rhel7:v3.11.8
Execute the following command to deploy the Heketi service, route, and deployment configuration which will be used to create persistent volumes for OpenShift:
# oc process heketi | oc create -f -
service "heketi" created
route "heketi" created
deploymentconfig "heketi" created
Note: It is recommended that the heketidbstorage volume be tuned for db workloads. Newly installed Openshift Container Storage deployments tune the heketidbstorage volume automatically. For older deployments, follow the KCS article Planning to run containerized DB or nosql workloads on Openshift Container Storage? and perform the volume set operation for the volume heketidbstorage.
Execute the following command to verify that the containers are running:
# oc get pods
For example:
# oc get pods
NAME                                          READY     STATUS    RESTARTS   AGE
glusterblock-storage-provisioner-dc-1-ffgs5   1/1       Running   0          3m
glusterfs-storage-5thpc                       1/1       Running   0          9d
glusterfs-storage-hfttr                       1/1       Running   0          9d
glusterfs-storage-n8rg5                       1/1       Running   0          9d
heketi-storage-4-9fnvz                        2/2       Running   0          8d
8.2.3.2. Upgrading Gluster Block
Execute the following steps to upgrade gluster block.
The recommended Red Hat Enterprise Linux (RHEL) version for block storage is RHEL-7.5.4. Ensure that your kernel version matches 3.10.0-862.14.4.el7.x86_64. To verify, execute:
# uname -r
Reboot the node for the latest kernel update to take effect.
To use gluster block, add the following two parameters to the glusterfs section in the heketi configuration file at /etc/heketi/heketi.json:
auto_create_block_hosting_volume
block_hosting_volume_size
Where:
auto_create_block_hosting_volume: Creates block hosting volumes automatically if none are found or if the existing volume is exhausted. To enable this, set the value to true.
block_hosting_volume_size: New block hosting volumes will be created with the size mentioned. This is considered only if auto_create_block_hosting_volume is set to true. The recommended size is 500G.
For example:
.....
.....
"glusterfs" : {
        "executor" : "ssh",
        "db" : "/var/lib/heketi/heketi.db",
        "sshexec" : {
                "rebalance_on_expansion": true,
                "keyfile" : "/etc/heketi/private_key"
        },
        "auto_create_block_hosting_volume": true,
        "block_hosting_volume_size": 500
},
.....
.....
Restart the Heketi service:
# systemctl restart heketi
Note: This step is not applicable if heketi is running as a pod in the Openshift cluster.
If a gluster-block-provisioner pod already exists, delete it by executing the following commands:
# oc delete dc <gluster-block-dc>
For example:
# oc delete dc glusterblock-provisioner-dc
Delete the following resources from the old pod
If you have glusterfs pods:
# oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
# oc delete serviceaccounts glusterblock-provisioner
serviceaccount "glusterblock-provisioner" deleted
# oc delete clusterrolebindings.authorization.openshift.io glusterblock-provisioner
If you have registry pods:
# oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
# oc delete serviceaccounts glusterblock-provisioner
serviceaccount "glusterblock-provisioner" deleted
# oc delete clusterrolebindings.authorization.openshift.io glusterblock-provisioner
Execute the following commands to deploy the gluster-block provisioner:
# sed -e 's/\\\${NAMESPACE}/<NAMESPACE>/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
# oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:<NAMESPACE>:glusterblock-provisioner
For example:
# sed -e 's/\\\${NAMESPACE}/storage-project/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
# oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:storage-project:glusterblock-provisioner
8.2.4. Upgrading if existing version deployed using Ansible
8.2.4.1. Upgrading Heketi in Openshift node
The following commands must be executed on the client machine.
Execute the following command to update the heketi client:
# yum update heketi-client -y
Backup the Heketi database file:
# heketi-cli db dump > heketi-db-dump-$(date -I).json
Execute the following command to get the current HEKETI_ADMIN_KEY:
The OCS administrator can choose to set any phrase for user key as long as it is not used by their infrastructure. It is not used by any of the OCS default installed resources.
# oc get secret heketi-storage-admin-secret -o jsonpath='{.data.key}' | base64 -d; echo
If the HEKETI_USER_KEY was set previously, you can obtain it by using the following command:
# oc describe pod <heketi-pod> | grep "HEKETI_USER_KEY"
Execute the following command to delete the heketi template.
# oc delete templates heketi
Execute the following command to install the heketi template.
# oc create -f /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/files/heketi-template.yml
template "heketi" created
Execute the following step to edit the template:
# oc get templates
NAME                       DESCRIPTION                          PARAMETERS     OBJECTS
glusterblock-provisioner   glusterblock provisioner template    3 (2 blank)    4
glusterfs                  GlusterFS DaemonSet template         5 (1 blank)    1
heketi                     Heketi service deployment template   7 (3 blank)    3
If the existing template has IMAGE_NAME and IMAGE_VERSION as two parameters, then edit the template to change the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, HEKETI_EXECUTOR, HEKETI_FSTAB, HEKETI_ROUTE, IMAGE_NAME, IMAGE_VERSION, CLUSTER_NAME and HEKETI_LVM_WRAPPER as shown in the example below.
Note: The value of the HEKETI_LVM_WRAPPER parameter points to the wrapper command for LVM. In independent mode setups the wrapper is not required; change the value to an empty string as shown below.
# oc edit template heketi parameters: - description: Set secret for those creating volumes as type user displayName: Heketi User Secret name: HEKETI_USER_KEY value: <heketiuserkey> - description: Set secret for administration of the Heketi service as user admin displayName: Heketi Administrator Secret name: HEKETI_ADMIN_KEY value: <adminkey> - description: Set the executor type, kubernetes or ssh displayName: heketi executor type name: HEKETI_EXECUTOR value: ssh - description: Set the fstab path, file that is populated with bricks that heketi creates displayName: heketi fstab path name: HEKETI_FSTAB value: /etc/fstab - description: Set the hostname for the route URL displayName: heketi route name name: HEKETI_ROUTE value: heketi-storage - displayName: heketi container image name name: IMAGE_NAME required: true value: registry.redhat.io/rhgs3/rhgs-volmanager-rhel7 - displayName: heketi container image version name: IMAGE_VERSION required: true value: v3.11.8 - description: A unique name to identify this heketi service, useful for running multiple heketi instances displayName: GlusterFS cluster name name: CLUSTER_NAME value: storage - description: Heketi can use a wrapper to execute LVM commands, i.e. run commands in the host namespace instead of in the Gluster container. name: HEKETI_LVM_WRAPPER displayName: Wrapper for executing LVM commands value: ""
If the template has only IMAGE_NAME, then edit the template to change the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, HEKETI_EXECUTOR, HEKETI_FSTAB, HEKETI_ROUTE, IMAGE_NAME, CLUSTER_NAME and HEKETI_LVM_WRAPPER as shown in the example below.
parameters: - description: Set secret for those creating volumes as type user displayName: Heketi User Secret name: HEKETI_USER_KEY value: <heketiuserkey> - description: Set secret for administration of the Heketi service as user admin displayName: Heketi Administrator Secret name: HEKETI_ADMIN_KEY value: <adminkey> - description: Set the executor type, kubernetes or ssh displayName: heketi executor type name: HEKETI_EXECUTOR value: ssh - description: Set the fstab path, file that is populated with bricks that heketi creates displayName: heketi fstab path name: HEKETI_FSTAB value: /etc/fstab - description: Set the hostname for the route URL displayName: heketi route name name: HEKETI_ROUTE value: heketi-storage - displayName: heketi container image name name: IMAGE_NAME required: true value: registry.redhat.io/rhgs3/rhgs-volmanager-rhel7:v3.11.7 - description: A unique name to identify this heketi service, useful for running multiple heketi instances displayName: GlusterFS cluster name name: CLUSTER_NAME value: storage - description: Heketi can use a wrapper to execute LVM commands, i.e. run commands in the host namespace instead of in the Gluster container name: HEKETI_LVM_WRAPPER displayName: Wrapper for executing LVM commands value: ""
If a cluster has more than 1000 volumes, refer to How to change the default PVS limit in Openshift Container Storage and add the required parameters before proceeding with the upgrade.
Execute the following command to delete the deployment configuration, service, and route for heketi:
# oc delete deploymentconfig,service,route heketi-storage
Execute the following command to deploy the Heketi service, route, and deploymentconfig which will be used to create persistent volumes for OpenShift:
# oc process heketi | oc create -f -
service "heketi" created
route "heketi" created
deploymentconfig "heketi" created
Note: It is recommended that the heketidbstorage volume be tuned for db workloads. Newly installed Openshift Container Storage deployments tune the heketidbstorage volume automatically. For older deployments, follow the KCS article Planning to run containerized DB or nosql workloads on Openshift Container Storage? and perform the volume set operation for the volume heketidbstorage.
Execute the following command to verify that the containers are running:
# oc get pods
For example:
# oc get pods
NAME                                          READY     STATUS    RESTARTS   AGE
glusterblock-storage-provisioner-dc-1-ffgs5   1/1       Running   0          3m
heketi-storage-4-9fnvz                        2/2       Running   0          8d
8.2.4.2. Upgrading Gluster Block if Deployed by Using Ansible
Execute the following steps to upgrade gluster block.
The recommended Red Hat Enterprise Linux (RHEL) version for block storage is RHEL-7.5.4. Ensure that your kernel version matches 3.10.0-862.14.4.el7.x86_64. To verify, execute:
# uname -r
Reboot the node for the latest kernel update to take effect.
Execute the following command to delete the old glusterblock provisioner template.
# oc delete templates glusterblock-provisioner
Create a glusterblock provisioner template. For example:
# oc create -f /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/files/glusterblock-provisioner.yml
template.template.openshift.io/glusterblock-provisioner created
If a gluster-block-provisioner pod already exists, delete it by executing the following commands.
For glusterfs namespace:
# oc delete dc glusterblock-storage-provisioner-dc
For glusterfs-registry namespace:
# oc delete dc glusterblock-registry-provisioner-dc
Edit the glusterblock-provisioner template to change the IMAGE_NAME, IMAGE_VERSION and NAMESPACE.
# oc get templates
NAME                       DESCRIPTION                          PARAMETERS     OBJECTS
glusterblock-provisioner   glusterblock provisioner template    3 (2 blank)    4
glusterfs                  GlusterFS DaemonSet template         5 (1 blank)    1
heketi                     Heketi service deployment template   7 (3 blank)    3
If the template has IMAGE_NAME and IMAGE_VERSION as two separate parameters, then update the glusterblock-provisioner template as follows. For example:
# oc edit template glusterblock-provisioner
- displayName: glusterblock provisioner container image name
  name: IMAGE_NAME
  required: true
  value: registry.redhat.io/rhgs3/rhgs-gluster-block-prov-rhel7
- displayName: glusterblock provisioner container image version
  name: IMAGE_VERSION
  required: true
  value: v3.11.8
- description: The namespace in which these resources are being created
  displayName: glusterblock provisioner namespace
  name: NAMESPACE
  required: true
  value: glusterfs
- description: A unique name to identify which heketi service manages this cluster, useful for running multiple heketi instances
  displayName: GlusterFS cluster name
  name: CLUSTER_NAME
  value: storage
If the template has only IMAGE_NAME as a parameter, then update the glusterblock-provisioner template as follows. For example:
# oc edit template glusterblock-provisioner
- displayName: glusterblock provisioner container image name
  name: IMAGE_NAME
  required: true
  value: registry.redhat.io/rhgs3/rhgs-gluster-block-prov-rhel7:v3.11.8
- description: The namespace in which these resources are being created
  displayName: glusterblock provisioner namespace
  name: NAMESPACE
  required: true
  value: glusterfs
- description: A unique name to identify which heketi service manages this cluster, useful for running multiple heketi instances
  displayName: GlusterFS cluster name
  name: CLUSTER_NAME
  value: storage
Delete the following resources from the old pod.
If you have glusterfs pods:
# oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
# oc delete serviceaccounts glusterblock-storage-provisioner
# oc delete clusterrolebindings.authorization.openshift.io glusterblock-storage-provisioner
If you have registry pods:
# oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
# oc delete serviceaccounts glusterblock-registry-provisioner
# oc delete clusterrolebindings.authorization.openshift.io glusterblock-registry-provisioner
Before running oc process, determine the correct provisioner name. If more than one gluster block provisioner is running in your cluster, the name must differ from the names of all other provisioners.
For example:
- If there are 2 or more provisioners, the name should be gluster.org/glusterblock-<namespace>, where <namespace> is replaced by the namespace that the provisioner is deployed in.
- If there is only one provisioner, installed prior to 3.11.8, gluster.org/glusterblock is sufficient. If the name currently in use already has a unique namespace suffix, reuse the existing name.
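A quick way to see which provisioner names are already in use is to list the storage classes that reference a gluster-block provisioner; this is a simplified form of the storage class query shown later in this section:
# oc get sc -o custom-columns=NAME:.metadata.name,PROV:.provisioner | grep 'gluster.org/glusterblock'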
After editing the template, execute the following command to create the deployment configuration:
# oc process glusterblock-provisioner -o yaml | oc create -f -
For example:
# oc process glusterblock-provisioner -o yaml | oc create -f -
clusterrole.authorization.openshift.io/glusterblock-provisioner-runner created
serviceaccount/glusterblock-storage-provisioner created
clusterrolebinding.authorization.openshift.io/glusterblock-storage-provisioner created
deploymentconfig.apps.openshift.io/glusterblock-storage-provisioner-dc created
All storage classes that use gluster block volume provisioning must match exactly one of the provisioner names in the cluster. To check the list of storage classes that refer to a block provisioner in a given namespace, run the following command:
# oc get sc -o custom-columns=NAME:.metadata.name,PROV:.provisioner,RSNS:.parameters.restsecretnamespace | grep 'gluster.org/glusterblock' | grep <namespace>
Example:
# oc get sc -o custom-columns=NAME:.metadata.name,PROV:.provisioner,RSNS:.parameters.restsecretnamespace | grep 'gluster.org/glusterblock' | grep app-storage
glusterfs-storage-block   gluster.org/glusterblock-app-storage   app-storage
Check each storage class provisioner name. If it does not match the block provisioner name configured for that namespace, it must be updated. If the block provisioner name already matches the configured provisioner name, nothing else needs to be done. Use the list generated above and include all storage class names where the provisioner name must be updated.
For every storage class in this list, do the following:
# oc get sc -o yaml <storageclass> > storageclass-to-edit.yaml
# oc delete sc <storageclass>
# sed 's,gluster.org/glusterblock$,gluster.org/glusterblock-<namespace>,' storageclass-to-edit.yaml | oc create -f -
Example:
# oc get sc -o yaml gluster-storage-block > storageclass-to-edit.yaml
# oc delete sc gluster-storage-block
# sed 's,gluster.org/glusterblock$,gluster.org/glusterblock-app-storage,' storageclass-to-edit.yaml | oc create -f -
storageclass.storage.k8s.io/glusterfs-registry-block created
8.2.5. Enabling S3 Compatible Object store
Support for S3 compatible Object Store is under technology preview. To enable S3 compatible object store, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/operations_guide/#S3_Object_Store.
- If you have gluster nodes and heketi pods in glusterfs registry namespace, then follow the steps in section Section 8.3, “Upgrading nodes and pods in glusterfs registry group”.
- S3 compatible Object store is only available with Red Hat Openshift Container Storage 3.11.4 and older releases.
8.3. Upgrading nodes and pods in glusterfs registry group
Follow the steps in the following sections to upgrade your gluster nodes and heketi pods in the glusterfs registry namespace.
8.3.1. Upgrading the Red Hat Gluster Storage Registry Cluster
To upgrade the Red Hat Gluster Storage cluster, see In-Service Software Upgrade.
8.3.1.1. Upgrading Heketi Registry pod
If Heketi is not in an Openshift node, then you have to migrate Heketi from the RHGS node to an Openshift node. For more information on how to migrate, see Section 8.2.2, “Upgrading/Migration of Heketi in RHGS node”.
To upgrade the Heketi registry pods, perform the following steps:
The following commands must be executed on the client machine.
Execute the following command to update the heketi client:
# yum update heketi-client -y
Back up the Heketi registry database file:
# heketi-cli db dump > heketi-db-dump-$(date -I).json
Execute the following command to get the current HEKETI_ADMIN_KEY:
The OCS administrator is free to set any phrase for user key as long as it is not used by their infrastructure. It is not used by any of the OCS default installed resources.
# oc get secret heketi-registry-admin-secret -o jsonpath='{.data.key}'|base64 -d;echo
To fetch the HEKETI_USER_KEY, run the following command:
# oc describe pod <heketi-pod> |grep "HEKETI_USER_KEY"
Execute the following command to delete the heketi template.
# oc delete templates heketi
Execute the following command to install the heketi template.
# oc create -f /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/files/heketi-template.yml
template "heketi" created
Execute the following step to edit the template:
# oc get templates
NAME                       DESCRIPTION                          PARAMETERS     OBJECTS
glusterblock-provisioner   glusterblock provisioner template    3 (2 blank)    4
heketi                     Heketi service deployment template   7 (3 blank)    3
If the existing template has IMAGE_NAME and IMAGE_VERSION as two parameters, then edit the template to change the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, HEKETI_EXECUTOR, HEKETI_FSTAB, HEKETI_ROUTE, IMAGE_NAME, IMAGE_VERSION, CLUSTER_NAME and HEKETI_LVM_WRAPPER as shown in the following example:
The value of the HEKETI_LVM_WRAPPER parameter points to the wrapper command for LVM. In independent mode setups the wrapper is not required; change the value to an empty string as shown below.
# oc edit template heketi parameters: - description: Set secret for those creating volumes as type _user_ displayName: Heketi User Secret name: HEKETI_USER_KEY value: heketiuserkey - description: Set secret for administration of the Heketi service as user _admin_ displayName: Heketi Administrator Secret name: HEKETI_ADMIN_KEY value: adminkey - description: Set the executor type, kubernetes or ssh displayName: heketi executor type name: HEKETI_EXECUTOR value: ssh - description: Set the fstab path, file that is populated with bricks that heketi creates displayName: heketi fstab path name: HEKETI_FSTAB value: /etc/fstab - description: Set the hostname for the route URL displayName: heketi route name name: HEKETI_ROUTE value: heketi-registry - displayName: heketi container image name name: IMAGE_NAME required: true value: registry.redhat.io/rhgs3/rhgs-volmanager-rhel7 - displayName: heketi container image version name: IMAGE_VERSION required: true value: v3.11.8 - description: A unique name to identify this heketi service, useful for running multiple heketi instances displayName: GlusterFS cluster name name: CLUSTER_NAME value: registry - description: Heketi can use a wrapper to execute LVM commands, i.e. run commands in the host namespace instead of in the Gluster container name: HEKETI_LVM_WRAPPER displayName: Wrapper for executing LVM commands value: ""
If the template has only IMAGE_NAME, then edit the template to change the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, HEKETI_EXECUTOR, HEKETI_FSTAB, HEKETI_ROUTE, IMAGE_NAME, CLUSTER_NAME and HEKETI_LVM_WRAPPER as shown in the following example:
parameters: - description: Set secret for those creating volumes as type user displayName: Heketi User Secret name: HEKETI_USER_KEY value: heketiuserkey - description: Set secret for administration of the Heketi service as user admin displayName: Heketi Administrator Secret name: HEKETI_ADMIN_KEY value: adminkey - description: Set the executor type, kubernetes or ssh displayName: heketi executor type name: HEKETI_EXECUTOR value: ssh - description: Set the fstab path, file that is populated with bricks that heketi creates displayName: heketi fstab path name: HEKETI_FSTAB value: /etc/fstab - description: Set the hostname for the route URL displayName: heketi route name name: HEKETI_ROUTE value: heketi-registry - displayName: heketi container image name name: IMAGE_NAME required: true value: registry.redhat.io/rhgs3/rhgs-volmanager-rhel7:v3.11.7 - description: A unique name to identify this heketi service, useful for running multiple heketi instances displayName: GlusterFS cluster name name: CLUSTER_NAME value: registry - description: Heketi can use a wrapper to execute LVM commands, i.e. run commands in the host namespace instead of in the Gluster container name: HEKETI_LVM_WRAPPER displayName: Wrapper for executing LVM commands value:""
If a cluster has more than 1000 volumes, refer to How to change the default PVS limit in Openshift Container Storage and add the required parameters before proceeding with the upgrade.
Execute the following command to delete the deployment configuration, service, and route for heketi:
# oc delete deploymentconfig,service,route heketi-registry
Execute the following command to deploy the Heketi service, route, and deploymentconfig which will be used to create persistent volumes for OpenShift:
# oc process heketi | oc create -f -
service "heketi-registry" created
route "heketi-registry" created
deploymentconfig "heketi-registry" created
Note: It is recommended that the heketidbstorage volume be tuned for db workloads. Newly installed Openshift Container Storage deployments tune the heketidbstorage volume automatically. For older deployments, follow the KCS article Planning to run containerized DB or nosql workloads on Openshift Container Storage? and perform the volume set operation for the volume heketidbstorage.
Execute the following command to verify that the containers are running:
# oc get pods
For example:
# oc get pods
NAME                                          READY     STATUS    RESTARTS   AGE
glusterblock-storage-provisioner-dc-1-ffgs5   1/1       Running   0          3m
heketi-storage-4-9fnvz                        2/2       Running   0          8d
8.3.2. Upgrading glusterblock-provisioner Pod
To upgrade the glusterblock-provisioner pods, perform the following steps:
Execute the following command to delete the old glusterblock provisioner template.
# oc delete templates glusterblock-provisioner
Create a glusterblock provisioner template. For example:
# oc create -f /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/files/glusterblock-provisioner.yml
template.template.openshift.io/glusterblock-provisioner created
If a glusterblock-provisioner pod already exists, delete it by executing the following commands:
# oc delete dc <gluster-block-registry-dc>
For example:
# oc delete dc glusterblock-registry-provisioner-dc
Edit the glusterblock-provisioner template to change the IMAGE_NAME, IMAGE_VERSION, and NAMESPACE.
# oc get templates
NAME                       DESCRIPTION                          PARAMETERS     OBJECTS
glusterblock-provisioner   glusterblock provisioner template    3 (2 blank)    4
heketi                     Heketi service deployment template   7 (3 blank)    3
If the template has IMAGE_NAME and IMAGE_VERSION as two separate parameters, then update the glusterblock-provisioner template as follows:
# oc edit template glusterblock-provisioner
- displayName: glusterblock provisioner container image name
  name: IMAGE_NAME
  required: true
  value: registry.redhat.io/rhgs3/rhgs-gluster-block-prov-rhel7
- displayName: glusterblock provisioner container image version
  name: IMAGE_VERSION
  required: true
  value: v3.11.8
- description: The namespace in which these resources are being created
  displayName: glusterblock provisioner namespace
  name: NAMESPACE
  required: true
  value: glusterfs-registry
- description: A unique name to identify which heketi service manages this cluster, useful for running multiple heketi instances
  displayName: GlusterFS cluster name
  name: CLUSTER_NAME
  value: registry
Replace the value under IMAGE_VERSION with v3.11.5 or v3.11.8, depending on the version you want to upgrade to. For example:
- displayName: glusterblock provisioner container image version
  name: IMAGE_VERSION
  required: true
  value: v3.11.8
If the template has only IMAGE_NAME as a parameter, then update the glusterblock-provisioner template as follows:
# oc edit template glusterblock-provisioner
- displayName: glusterblock provisioner container image name
  name: IMAGE_NAME
  required: true
  value: registry.redhat.io/rhgs3/rhgs-gluster-block-prov-rhel7:v3.11.8
- description: The namespace in which these resources are being created
  displayName: glusterblock provisioner namespace
  name: NAMESPACE
  required: true
  value: glusterfs-registry
- description: A unique name to identify which heketi service manages this cluster, useful for running multiple heketi instances
  displayName: GlusterFS cluster name
  name: CLUSTER_NAME
  value: registry
Replace the value under IMAGE_NAME with v3.11.5 or v3.11.8, depending on the version you want to upgrade to. For example:
- displayName: glusterblock provisioner container image name
  name: IMAGE_NAME
  required: true
  value: rhgs3/rhgs-gluster-block-prov-rhel7:v3.11.8
Delete the following resources from the old pod:
# oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
# oc delete serviceaccounts glusterblock-registry-provisioner
# oc delete clusterrolebindings.authorization.openshift.io glusterblock-registry-provisioner
Before running oc process, determine the correct provisioner name. If more than one gluster block provisioner is running in your cluster, the name must differ from the names of all other provisioners.
For example:
- If there are 2 or more provisioners, the name should be gluster.org/glusterblock-<namespace>, where <namespace> is replaced by the namespace that the provisioner is deployed in.
- If there is only one provisioner, installed prior to 3.11.8, gluster.org/glusterblock is sufficient. If the name currently in use already has a unique namespace suffix, reuse the existing name.
After editing the template, execute the following command to create the deployment configuration:
# oc process glusterblock-provisioner -o yaml | oc create -f -
For example:
# oc process glusterblock-provisioner -o yaml | oc create -f -
clusterrole.authorization.openshift.io/glusterblock-provisioner-runner created
serviceaccount/glusterblock-registry-provisioner created
clusterrolebinding.authorization.openshift.io/glusterblock-registry-provisioner created
deploymentconfig.apps.openshift.io/glusterblock-registry-provisioner-dc created
All storage classes that use gluster block volume provisioning must match exactly one of the provisioner names in the cluster. To check the list of storage classes that refer to a block provisioner in a given namespace, run the following command:
# oc get sc -o custom-columns=NAME:.metadata.name,PROV:.provisioner,RSNS:.parameters.restsecretnamespace | grep 'gluster.org/glusterblock' | grep <namespace>
Example:
# oc get sc -o custom-columns=NAME:.metadata.name,PROV:.provisioner,RSNS:.parameters.restsecretnamespace | grep 'gluster.org/glusterblock' | grep infra-storage
glusterfs-registry-block   gluster.org/glusterblock   infra-storage
Check each storage class provisioner name. If it does not match the block provisioner name configured for that namespace, it must be updated. If the block provisioner name already matches the configured provisioner name, nothing else needs to be done. Use the list generated above and include all storage class names where the provisioner name must be updated.
For every storage class in this list, do the following:
# oc get sc -o yaml <storageclass> > storageclass-to-edit.yaml
# oc delete sc <storageclass>
# sed 's,gluster.org/glusterblock$,gluster.org/glusterblock-<namespace>,' storageclass-to-edit.yaml | oc create -f -
Example:
# oc get sc -o yaml glusterfs-registry-block > storageclass-to-edit.yaml
# oc delete sc glusterfs-registry-block
storageclass.storage.k8s.io "glusterfs-registry-block" deleted
# sed 's,gluster.org/glusterblock$,gluster.org/glusterblock-infra-storage,' storageclass-to-edit.yaml | oc create -f -
storageclass.storage.k8s.io/glusterfs-registry-block created
8.3.3. Upgrading Gluster Block
To upgrade the gluster block, perform the following steps:
Execute the following command to upgrade the gluster block:
# yum update gluster-block
Enable and start the gluster block service:
# systemctl enable gluster-blockd
# systemctl start gluster-blockd
8.4. Upgrading the client on Red Hat OpenShift Container Platform nodes
Execute the following commands on each of the nodes:
To drain the pod, execute the following command on the master node (or any node with cluster-admin access):
# oc adm drain <node_name> --ignore-daemonsets
To check if all the pods are drained, execute the following command on the master node (or any node with cluster-admin access):
# oc get pods --all-namespaces --field-selector=spec.nodeName=<node_name>
Execute the following command on the node to upgrade the client:
# yum update glusterfs-fuse
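Optionally, confirm the installed client version before making the node schedulable again:
# rpm -q glusterfs-fuse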
To enable the node for pod scheduling, execute the following command on the master node (or any node with cluster-admin access):
# oc adm manage-node --schedulable=true <node_name>
- Create and add the following content to the multipath.conf file:
Note: The multipath.conf file does not require any change if it was already updated during a previous upgrade.
# cat >> /etc/multipath.conf <<EOF
# LIO iSCSI
devices {
        device {
                vendor "LIO-ORG"
                user_friendly_names "yes" # names like mpatha
                path_grouping_policy "failover" # one path per group
                hardware_handler "1 alua"
                path_selector "round-robin 0"
                failback immediate
                path_checker "tur"
                prio "alua"
                no_path_retry 120
        }
}
EOF
Execute the following commands to start multipath daemon and [re]load the multipath configuration:
# systemctl start multipathd
# systemctl reload multipathd
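To confirm that multipathd is running with the updated configuration, you can optionally list the current multipath topology; the output depends on the iSCSI devices present on the node:
# multipath -ll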