
Chapter 6. Upgrading your Red Hat Openshift Container Storage in Converged Mode


This chapter describes the procedure to upgrade your environment from Red Hat Openshift Container Storage in Converged Mode 3.10 to Red Hat Openshift Container Storage in Converged Mode 3.11.

Note
  • The new registry name registry.redhat.io is used throughout this guide. However, if you have not migrated to the new registry yet, replace all occurrences of registry.redhat.io with registry.access.redhat.com wherever applicable; an example substitution is shown after this list.
  • Follow the same upgrade procedure to upgrade your environment from Red Hat Openshift Container Storage in Converged Mode 3.11.0 and above to Red Hat Openshift Container Storage in Converged Mode 3.11.8. Ensure that the correct image and version numbers are configured before you start the upgrade process.
  • The valid images for Red Hat Openshift Container Storage 3.11.8 are:

    • registry.redhat.io/rhgs3/rhgs-server-rhel7:v3.11.8
    • registry.redhat.io/rhgs3/rhgs-volmanager-rhel7:v3.11.8
    • registry.redhat.io/rhgs3/rhgs-gluster-block-prov-rhel7:v3.11.8
    • registry.redhat.io/rhgs3/rhgs-s3-server-rhel7:v3.11.8
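  • If you still use the old registry, one way to adjust the image references is a simple text substitution in the template files before you apply them. This is only a sketch; <template_file> is a placeholder for whichever template you are editing, so verify the result before creating objects from it:

    # sed -i 's/registry.redhat.io/registry.access.redhat.com/g' <template_file>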

6.1. Upgrading the pods in the glusterfs group

The following sections provide steps to upgrade your Glusterfs pods.

6.1.1. Prerequisites

Ensure the following prerequisites are met:

Note

For deployments using the cns-deploy tool, the templates are available in the following locations:

  • gluster template - /usr/share/heketi/templates/glusterfs-template.yaml
  • heketi template - /usr/share/heketi/templates/heketi-template.yaml
  • glusterblock-provisioner template - /usr/share/heketi/templates/glusterblock-provisioner.yaml

For deployments using the Ansible playbook, the templates are available in the following locations:

  • gluster template - /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/files/glusterfs-template.yml
  • heketi template - /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/files/heketi-template.yml
  • glusterblock-provisioner template - /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/files/glusterblock-provisioner.yml

6.1.2. Restoring original label values for /dev/log

Note

Follow this procedure only if you are upgrading your environment from Red Hat Container Native Storage 3.9 to Red Hat Openshift Container Storage 3.11.8.

Skip this procedure if you are upgrading your environment from Red Hat Openshift Container Storage 3.10 and above to Red Hat Openshift Container Storage 3.11.8.

To restore the original SELinux label, execute the following commands:

  1. Create a directory and soft links on all nodes that run gluster pods:

    # mkdir /srv/<directory_name>
    # cd /srv/<directory_name>/   # same dir as above
    # ln -sf /dev/null systemd-tmpfiles-setup-dev.service
    # ln -sf /dev/null systemd-journald.service
    # ln -sf /dev/null systemd-journald.socket
  2. Edit the daemonset that creates the glusterfs pods on the node which has oc client:

    # oc edit daemonset <daemonset_name>

    Under volumeMounts section add a mapping for the volume:

    - mountPath: /usr/lib/systemd/system/systemd-journald.service
      name: systemd-journald-service
    - mountPath: /usr/lib/systemd/system/systemd-journald.socket
      name: systemd-journald-socket
    - mountPath: /usr/lib/systemd/system/systemd-tmpfiles-setup-dev.service
      name: systemd-tmpfiles-setup-dev-service

    Under volumes section add a new host path for each service listed:

    Note

    The path mentioned here must be the same as the one created in Step 1.

    - hostPath:
       path: /srv/<directory_name>/systemd-journald.socket
       type: ""
      name: systemd-journald-socket
    - hostPath:
       path: /srv/<directory_name>/systemd-journald.service
       type: ""
      name: systemd-journald-service
    - hostPath:
       path: /srv/<directory_name>/systemd-tmpfiles-setup-dev.service
       type: ""
      name: systemd-tmpfiles-setup-dev-service
  3. Run the following command on all nodes that run gluster pods. This will reset the label:

    # restorecon /dev/log
  4. Execute the following command to check the status of self heal for all volumes:

    # oc rsh <gluster_pod_name>
    # for each_volume in `gluster volume list`; do gluster volume heal $each_volume info ; done  | grep  "Number of entries: [^0]$"

    Wait for self-heal to complete.

  5. Execute the following command and ensure that the bricks are not more than 90% full:

    # df -kh | grep -v ^Filesystem | awk '{if(int($5)>90) print $0}'
    Note

    If the bricks are close to 100% utilization, then the Logical Volume Manager (LVM) activation for these bricks may take a long time or can get stuck once the pod or node is rebooted. It is advised to bring down the utilization of that brick or to expand the physical volume (PV) that the logical volume (LV) is using.

    Note

    The df command is not applicable to bricks that belong to a Block Hosting Volume (BHV). On a BHV, the used size of the bricks reported by the df command is the added size of the blockvolumes of that Gluster volume; it is not the size of the data that resides in the blockvolumes. For more information, refer to How To Identify Block Volumes and Block Hosting Volumes in Openshift Container Storage.

  6. Execute the following command on any one of the gluster pods to set the maximum number of bricks (250) that can run on a single instance of glusterfsd process:

    # gluster volume set all cluster.max-bricks-per-process 250
    1. Execute the following command on any one of the gluster pods to ensure that the option is set correctly:

      # gluster volume get all cluster.max-bricks-per-process

      For example:

      # gluster volume get all cluster.max-bricks-per-process
      cluster.max-bricks-per-process 250
  7. Execute the following command on the node which has oc client to delete the gluster pod:

    # oc delete pod <gluster_pod_name>
  8. To verify if the pod is ready, execute the following command:

    # oc get pods -l glusterfs=storage-pod
  9. Log in to the node hosting the pod and check the SELinux label of /dev/log:

    # ls -lZ /dev/log

    The output should show the devlog_t label.

    For example:

    #  ls -lZ /dev/log
    srw-rw-rw-. root root system_u:object_r:devlog_t:s0    /dev/log

    Exit the node.

  10. In the gluster pod, check if the label value is devlog_t:

    # oc rsh <gluster_pod_name>
    # ls -lZ /dev/log

    For example:

    #  ls -lZ /dev/log
    srw-rw-rw-. root root system_u:object_r:devlog_t:s0    /dev/log
  11. Perform steps 4 to 9 for other pods.

6.1.3. Upgrading if existing version deployed by using cns-deploy

6.1.3.1. Upgrading cns-deploy and Heketi Server

The following commands must be executed on the client machine.

  1. Execute the following command to update the heketi client and cns-deploy packages:

    # yum update cns-deploy -y
    # yum update heketi-client -y
  2. Back up the Heketi database file:

    # heketi-cli db dump > heketi-db-dump-$(date -I).json
    • Execute the following command to get the current HEKETI_ADMIN_KEY.

      The OCS administrator can choose any phrase for the user key as long as it is not already used by their infrastructure. It is not used by any of the OCS default installed resources.

      # oc get secret <heketi-admin-secret> -o jsonpath='{.data.key}' | base64 -d; echo
  3. Execute the following command to delete the heketi template.

    # oc delete templates heketi
  4. Execute the following command to install the heketi template.

    # oc create -f /usr/share/heketi/templates/heketi-template.yaml
    template "heketi" created
  5. Execute the following command to grant the heketi Service Account the necessary privileges.

    # oc policy add-role-to-user edit system:serviceaccount:<project_name>:heketi-service-account
    # oc adm policy add-scc-to-user privileged -z heketi-service-account

    For example,

    # oc policy add-role-to-user edit system:serviceaccount:storage-project:heketi-service-account
    # oc adm policy add-scc-to-user privileged -z heketi-service-account
  6. Execute the following command to generate a new heketi configuration file.

    # sed -e "s/\${HEKETI_EXECUTOR}/kubernetes/" -e "s#\${HEKETI_FSTAB}#/var/lib/heketi/fstab#" -e "s/\${SSH_PORT}/22/" -e "s/\${SSH_USER}/root/" -e "s/\${SSH_SUDO}/false/" -e "s/\${BLOCK_HOST_CREATE}/true/" -e "s/\${BLOCK_HOST_SIZE}/500/" "/usr/share/heketi/templates/heketi.json.template" > heketi.json
    • The BLOCK_HOST_SIZE parameter controls the size (in GB) of the automatically created Red Hat Gluster Storage volumes hosting the gluster-block volumes (For more information, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/operations_guide/index#Block_Storage). This default configuration will dynamically create block-hosting volumes of 500GB in size as more space is required.
    • Alternatively, copy the file /usr/share/heketi/templates/heketi.json.template to heketi.json in the current directory and edit the new file directly, replacing each "${VARIABLE}" string with the required parameter.

      Note

      JSON formatting is strictly required (e.g. no trailing spaces, booleans in all lowercase).
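      To catch formatting mistakes early, the edited file can be validated with any JSON parser before it is stored in a secret. For example, an optional check using the Python interpreter shipped with RHEL:

      # python -m json.tool heketi.json > /dev/null && echo "heketi.json is valid JSON"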

  7. Execute the following command to create a secret to hold the configuration file.

    # oc create secret generic <heketi-config-secret> --from-file=heketi.json
    Note

    If the heketi-config-secret secret already exists, delete it and then re-run the command above to recreate it; an example delete command follows.
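    For example, a minimal way to remove the existing secret before recreating it (using the same placeholder name as above):

    # oc delete secret <heketi-config-secret>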

  8. Execute the following command to delete the deployment configuration, service, and route for heketi:

    # oc delete deploymentconfig,service,route heketi
    Note

    The names of these resources can be obtained from the output of the following command:

    # oc get all | grep heketi
  9. Edit the heketi template.

    • Edit the HEKETI_USER_KEY and HEKETI_ADMIN_KEY parameters.

      # oc edit template heketi
      parameters:
      - description: Set secret for those creating volumes as type user
        displayName: Heketi User Secret
        name: HEKETI_USER_KEY
        value: <heketiuserkey>
      - description: Set secret for administration of the Heketi service as user admin
        displayName: Heketi Administrator Secret
        name: HEKETI_ADMIN_KEY
        value: <adminkey>
      - description: Set the executor type, kubernetes or ssh
        displayName: heketi executor type
        name: HEKETI_EXECUTOR
        value: kubernetes
      - description: Set the hostname for the route URL
        displayName: heketi route name
        name: HEKETI_ROUTE
        value: heketi-storage
      - displayName: heketi container image name
        name: IMAGE_NAME
        required: true
        value: registry.redhat.io/rhgs3/rhgs-volmanager-rhel7
      - displayName: heketi container image version
        name: IMAGE_VERSION
        required: true
        value: v3.11.8
      - description: A unique name to identify this heketi service, useful for running
          multiple heketi instances
        displayName: GlusterFS cluster name
        name: CLUSTER_NAME
        value: storage
      Note

      If a cluster has more than 1000 volumes refer to How to change the default PVS limit in Openshift Container Storage and add the required parameters before proceeding with the upgrade.

    • Add an ENV with the name HEKETI_LVM_WRAPPER and value /usr/sbin/exec-on-host.

      - description: Heketi can use a wrapper to execute LVM commands, i.e. run commands
          in the host namespace instead of in the Gluster container.
        displayName: Wrapper for executing LVM commands
        name: HEKETI_LVM_WRAPPER
        value: /usr/sbin/exec-on-host
    • Add an ENV with the name HEKETI_DEBUG_UMOUNT_FAILURES and value true.

      - description: When unmounting a brick fails, Heketi will not be able to cleanup the
          Gluster volume completely. The main causes for preventing to unmount a brick,
          seem to originate from Gluster processes. By enabling this option, the heketi.log
          will contain the output of 'lsof' to aid with debugging of the Gluster processes
          and help with identifying any files that may be left open.
        displayName: Capture more details in case brick unmounting fails
        name: HEKETI_DEBUG_UMOUNT_FAILURES
        required: true
        value: "true"
    • Add an ENV with the name HEKETI_CLI_USER and value admin.
    • Add an ENV with the name HEKETI_CLI_KEY and the same value provided for the ENV HEKETI_ADMIN_KEY. A sketch of these two entries is shown after this list.
    • Replace the value under IMAGE_VERSION with v3.11.5 or v3.11.8 depending on the version you want to upgrade to.

      - displayName: heketi container image name
        name: IMAGE_NAME
        required: true
        value: registry.redhat.io/rhgs3/rhgs-volmanager-rhel7
      - displayName: heketi container image version
        name: IMAGE_VERSION
        required: true
        value: v3.11.8
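      The following is only a sketch of how the two ENV entries from the bullets above might look; the exact location in the template depends on your deployment (for example, the env section of the heketi container), so adapt it as needed. <adminkey> is the same placeholder used for HEKETI_ADMIN_KEY:

      - name: HEKETI_CLI_USER
        value: admin
      - name: HEKETI_CLI_KEY
        value: <adminkey>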
  10. Execute the following command to deploy the Heketi service, route, and deployment configuration which will be used to create persistent volumes for OpenShift:

    # oc process heketi | oc create -f -

    service "heketi" created
    route "heketi" created
    deploymentconfig "heketi" created
    Note

    It is recommended that the heketidbstorage volume be tuned for db workloads. Newly installed Openshift Container Storage deployments tune the heketidbstorage volume automatically. For older deployments, follow the KCS article Planning to run containerized DB or nosql workloads on Openshift Container Storage? and perform the volume set operation for the volume heketidbstorage.

  11. Execute the following command to verify that the containers are running:

    # oc get pods

    For example:

    # oc get pods
    NAME                                          READY     STATUS    RESTARTS   AGE
    glusterblock-storage-provisioner-dc-1-ffgs5   1/1       Running   0          3m
    heketi-storage-4-9fnvz                        2/2       Running   0          8d

6.1.3.2. Upgrading the Red Hat Gluster Storage Pods

The following commands must be executed on the client machine.

Following are the steps for updating a DaemonSet for glusterfs:

  1. Execute the following steps to stop the Heketi pod to prevent it from accepting any new request for volume creation or volume deletion:

    1. Execute the following command to access your project:

      # oc project <project_name>

      For example:

      # oc project storage-project
    2. Execute the following command to get the DeploymentConfig:

      # oc get dc
    3. Execute the following command to set heketi server to accept requests only from the local-client:

      # heketi-cli server mode set local-client
    4. Wait for the ongoing operations to complete and execute the following command to monitor if there are any ongoing operations:

      # heketi-cli server operations info
    5. Execute the following command to reduce the replica count from 1 to 0. This brings down the Heketi pod:

      # oc scale dc <heketi_dc> --replicas=0
    6. Execute the following command to verify that the heketi pod is no longer present:

      # oc get pods
  2. Execute the following command to find the DaemonSet name for gluster

    # oc get ds
  3. Execute the following command to delete the DaemonSet:

    # oc delete ds <ds-name> --cascade=false

    Using the --cascade=false option while deleting the old DaemonSet deletes only the DaemonSet and not the gluster pods. After deleting the old DaemonSet, you must load the new one. When you manually delete the old pods, the new pods that are created will have the configurations of the new DaemonSet.

    For example,

    # oc delete ds glusterfs --cascade=false
    daemonset "glusterfs" deleted
  4. Execute the following commands to verify all the old pods are up:

    # oc get pods

    For example,

    # oc get pods
    NAME                                          READY     STATUS    RESTARTS   AGE
    glusterblock-storage-provisioner-dc-1-ffgs5   1/1       Running   0          3m
    glusterfs-storage-5thpc                       1/1       Running   0          9d
    glusterfs-storage-hfttr                       1/1       Running   0          9d
    glusterfs-storage-n8rg5                       1/1       Running   0          9d
    heketi-storage-4-9fnvz                        2/2       Running   0          8d
  5. Execute the following command to delete the old glusterfs template.

    # oc delete templates glusterfs

    For example,

    # oc delete templates glusterfs
    template "glusterfs" deleted
  6. Execute the following command to register new glusterfs template.

    # oc create -f /usr/share/heketi/templates/glusterfs-template.yaml

    For example,

    # oc create -f /usr/share/heketi/templates/glusterfs-template.yaml
    template "glusterfs" created
  7. Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:

    1. Check if the nodes are labelled with the appropriate label by using the following command. If a node is missing the label, an example of adding it is shown after this step:

      # oc get nodes -l glusterfs=storage-host
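      If a node that runs the Red Hat Gluster Storage pods is missing the label, it can be added with a command of the following form (an example; substitute your node name):

      # oc label nodes <node_name> glusterfs=storage-host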
  8. Edit the glusterfs template.

    • Execute the following command:

      # oc edit template glusterfs
    • Add the following lines under volume mounts:

       - name: kernel-modules
         mountPath: "/usr/lib/modules"
         readOnly: true
       - name: host-rootfs
         mountPath: "/rootfs"
    • Add the following lines under volumes:

       - name: kernel-modules
         hostPath:
           path: "/usr/lib/modules"
       - name: host-rootfs
         hostPath:
           path: "/"
    • Replace the value under IMAGE_VERSION with v3.11.5 or v3.11.8 depending on the version you want to upgrade to.

      - displayName: GlusterFS container image name
        name: IMAGE_NAME
        required: true
        value: registry.redhat.io/rhgs3/rhgs-server-rhel7
      - displayName: GlusterFS container image version
        name: IMAGE_VERSION
        required: true
        value: v3.11.8
  9. Execute the following commands to create the gluster DaemonSet:

    # oc process glusterfs | oc create -f -

    For example,

    # oc process glusterfs | oc create -f -
    daemonset "glusterfs" created
    Note

    If a cluster has more than 1000 volumes refer to How to change the default PVS limit in Openshift Container Storage and add the required parameters before proceeding with the upgrade.

  10. Execute the following command to identify the old gluster pods that need to be deleted:

    # oc get pods

    For example,

    # oc get pods
    NAME                                          READY     STATUS    RESTARTS   AGE
    glusterblock-storage-provisioner-dc-1-ffgs5   1/1       Running   0          3m
    glusterfs-storage-5thpc                       1/1       Running   0          9d
    glusterfs-storage-hfttr                       1/1       Running   0          9d
    glusterfs-storage-n8rg5                       1/1       Running   0          9d
    heketi-storage-4-9fnvz                        2/2       Running   0          8d
  11. Execute the following command and ensure that the bricks are not more than 90% full:

    # df -kh | grep -v ^Filesystem | awk '{if(int($5)>90) print $0}'
    Note

    If the bricks are close to 100% utilization, then the Logical Volume Manager (LVM) activation for these bricks may take a long time or can get stuck once the pod or node is rebooted. It is advised to bring down the utilization of that brick or to expand the physical volume (PV) that the logical volume (LV) is using.

    Note

    The df command is not applicable to bricks that belong to a Block Hosting Volume (BHV). On a BHV, the used size of the bricks reported by the df command is the added size of the blockvolumes of that Gluster volume; it is not the size of the data that resides in the blockvolumes. For more information, refer to How To Identify Block Volumes and Block Hosting Volumes in Openshift Container Storage.

  12. Execute the following command to delete the old gluster pods. Gluster pods must follow a rolling upgrade, so ensure that the new pod is running before you delete the next old gluster pod. Only the OnDelete DaemonSet update strategy is supported: after you update a DaemonSet template, new DaemonSet pods are created only when you manually delete the old DaemonSet pods. A quick way to confirm the update strategy is shown after the sub-steps below.

    1. To delete the old gluster pods, execute the following command:

      # oc delete pod <gluster_pod>

      For example,

      # oc delete pod glusterfs-0vcf3
      pod "glusterfs-0vcf3" deleted
      Note

      Before deleting the next pod, a self-heal check has to be made:

      1. Run the following command to access shell on gluster pod:

        # oc rsh <gluster_pod_name>
      2. Run the following command to check the self-heal status of all the volumes:

        # for eachVolume in $(gluster volume list);  do gluster volume heal $eachVolume info ;  done | grep "Number of entries: [^0]$"
    2. The delete pod command will terminate the old pod and create a new pod. Run # oc get pods -w and check the AGE of the pod; the READY status should be 1/1. The following example output shows the status progression from termination to creation of the pod.

      # oc get pods -w
      NAME                             READY     STATUS        RESTARTS   AGE
      glusterfs-0vcf3                  1/1       Terminating   0          3d
      …
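    As an optional check before deleting pods, you can confirm that the DaemonSet uses the OnDelete update strategy, for example:

    # oc get ds <ds-name> -o jsonpath='{.spec.updateStrategy.type}'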
  13. Execute the following command to verify that the pods are running:

    # oc get pods

    For example,

    # oc get pods
    NAME                                          READY     STATUS    RESTARTS   AGE
    glusterblock-storage-provisioner-dc-1-ffgs5   1/1       Running   0          3m
    glusterfs-storage-5thpc                       1/1       Running   0          9d
    glusterfs-storage-hfttr                       1/1       Running   0          9d
    glusterfs-storage-n8rg5                       1/1       Running   0          9d
    heketi-storage-4-9fnvz                        2/2       Running   0          8d
  14. Execute the following command to verify if you have upgraded the pod to the latest version:

    # oc rsh <gluster_pod_name> glusterd --version

    For example:

     # oc rsh glusterfs-4cpcc glusterd --version
    glusterfs 6.0
  15. Check the Red Hat Gluster Storage op-version by executing the following command on one of the gluster pods.

    # gluster vol get all cluster.op-version
  16. After you upgrade the Gluster pods, ensure that you set Heketi back to operational mode. A quick check that Heketi is serving requests again is shown after this step:

    • Scale up the DC (Deployment Configuration).

      # oc scale dc <heketi_dc> --replicas=1
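      As an optional check, reuse the command from the earlier steps to confirm that the Heketi server is reachable again once the pod reports Running:

      # heketi-cli server operations info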
  17. Set the cluster.op-version to 70200 on any one of the pods:

    Important

    Ensure all the gluster pods are updated before changing the cluster.op-version.

    # gluster --timeout=3600 volume set all cluster.op-version 70200
    • Execute the following steps to enable server.tcp-user-timeout on all volumes. A verification example follows these steps.

      Note

      The "server.tcp-user-timeout" option specifies the maximum amount of the time (in seconds) the transmitted data from the application can remain unacknowledged from the brick.

      It is used to detect force disconnections and dead connections (if a node dies unexpectedly, a firewall is activated, etc.,) early and make it possible for applications to reduce the overall failover time.

      1. List the glusterfs pod using the following command:

        # oc get pods

        For example:

        # oc get pods
        NAME                                          READY     STATUS    RESTARTS   AGE
        glusterblock-storage-provisioner-dc-1-ffgs5   1/1       Running   0          3m
        glusterfs-storage-5thpc                       1/1       Running   0          9d
        glusterfs-storage-hfttr                       1/1       Running   0          9d
        glusterfs-storage-n8rg5                       1/1       Running   0          9d
        heketi-storage-4-9fnvz                        2/2       Running   0          8d
      2. Remote shell into one of the glusterfs pods. For example:

        # oc rsh glusterfs-0vcf3
      3. Execute the following command:

        # for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done

        For example:

        # for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
        volume1
        volume set: success
        volume2
        volume set: success
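      To spot-check that the option was applied, you can query it on a single volume, for example (with <VOLNAME> as a placeholder):

      # gluster volume get <VOLNAME> server.tcp-user-timeout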
  18. If a gluster-block-provisioner pod already exists, then delete it by executing the following commands:

    # oc delete dc glusterblock-provisioner-dc

    For example:

    # oc delete dc glusterblock-storage-provisioner-dc
  19. Delete the following resources from the old pod:

    # oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
    # oc delete serviceaccounts glusterblock-provisioner
    serviceaccount "glusterblock-provisioner" deleted
    # oc delete clusterrolebindings.authorization.openshift.io glusterblock-provisioner
  20. Execute the following commands to deploy the gluster-block provisioner:

    # sed -e 's/${NAMESPACE}/<NAMESPACE>/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | sed -e 's/<VERSION>/<NEW-VERSION>/' | oc create -f -
    <VERSION>
        Existing version of OpenShift Container Storage.
    <NEW-VERSION>
        Either 3.11.5 or 3.11.8, depending on the version you are upgrading to.

    # oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:<NAMESPACE>:glusterblock-provisioner

    For example:

    # sed -e 's/${NAMESPACE}/storage-project/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | sed -e 's/3.11.4/3.11.8/' | oc create -f -
    # oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:storage-project:glusterblock-provisioner
  21. Brick multiplexing is a feature that allows adding multiple bricks into one process. This reduces resource consumption and allows us to run more bricks than before with the same memory consumption. It is enabled by default from Container-Native Storage 3.6 onward. During an upgrade from Container-Native Storage 3.10 to Red Hat Openshift Container Storage 3.11, to turn brick multiplexing on, execute the following commands:

    1. To exec into the Gluster pod, execute the following command and rsh into any of the gluster pods:

      # oc rsh <gluster_pod_name>
    2. Verify the brick multiplex status:

      # gluster v get all all
    3. If it is disabled, then execute the following command to enable brick multiplexing:

      Note

      Ensure that all volumes are in a stopped state or that no bricks are running while brick multiplexing is enabled.

      # gluster volume set all cluster.brick-multiplex on

      For example:

      # oc rsh glusterfs-770ql
      sh-4.2# gluster volume set all cluster.brick-multiplex on
      Brick-multiplexing is supported only for container workloads (Independent or Converged mode). Also it is advised to make sure that either all volumes are in stopped state or no bricks are running before this option is modified.Do you still want to continue? (y/n) y
      volume set: success
    4. List all the volumes in the trusted storage pool. This step is only required if the volume set operation is performed:

      For example:

      # gluster volume list
      
        heketidbstorage
        vol_194049d2565d2a4ad78ef0483e04711e
        ...
        ...

      Restart all the volumes. This step is only required if the volume set operation is performed along with the previous step:

      # gluster vol stop <VOLNAME>
      # gluster vol start <VOLNAME>
  22. Support for S3 compatible Object Store in Red Hat Openshift Container Storage is under technology preview. To enable S3 compatible object store, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html/operations_guide/s3_object_store.

6.1.4. Upgrading if existing version deployed by using Ansible

6.1.4.1. Upgrading Heketi Server

The following commands must be executed on the client machine.

  1. Execute the following steps to check for any pending Heketi operations:

    1. Execute the following command to access your project:

      # oc project <project_name>

      For example:

      # oc project storage-project
    2. Wait for the ongoing operations to complete and execute the following command to monitor if there are any ongoing operations:

      # heketi-cli server operations info
  2. Back up the Heketi database file:

    # heketi-cli db dump > heketi-db-dump-$(date -I).json
    Note

    The JSON file created can be used to restore the Heketi database and therefore should be stored on persistent storage of your choice.

  3. Execute the following command to update the heketi client packages. Update the heketi-client package on all the OCP nodes where it is installed. Newer installations may not have the heketi-client rpm installed on any OCP nodes:

    # yum update heketi-client -y
  4. Execute the following command to get the current HEKETI_ADMIN_KEY.

    The OCS administrator can choose any phrase for the user key as long as it is not already used by their infrastructure. It is not used by any of the OCS default installed resources.

    # oc get secret heketi-storage-admin-secret -o jsonpath='{.data.key}'|base64 -d;echo
  5. If the HEKETI_USER_KEY was set previously, you can obtain it by using the following command; an alternative one-liner is sketched after it:

    # oc describe pod <heketi-pod>
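    Alternatively, assuming the value is exposed as a container environment variable named HEKETI_USER_KEY on the heketi pod (which is what oc describe reports), it can be extracted directly with a JSONPath filter; this is only a sketch:

    # oc get pod <heketi-pod> -o jsonpath='{.spec.containers[*].env[?(@.name=="HEKETI_USER_KEY")].value}'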
  6. Execute the following command to delete the heketi template.

    # oc delete templates heketi
  7. Execute the following command to install the heketi template.

    # oc create -f /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/files/heketi-template.yml
    template "heketi" created
  8. Execute the following step to edit the template:

    # oc get templates
    NAME                      DESCRIPTION                         PARAMETERS    OBJECTS
    glusterblock-provisioner  glusterblock provisioner template   3 (2 blank)   4
    glusterfs                 GlusterFS DaemonSet template        5 (1 blank)   1
    heketi                    Heketi service deployment template  7 (3 blank)   3
    1. If the existing template has IMAGE_NAME and IMAGE_VERSION as two parameters, then edit the template to change the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, HEKETI_ROUTE, IMAGE_NAME, IMAGE_VERSION, CLUSTER_NAME and HEKETI_LVM_WRAPPER as shown in the example below.

      # oc edit template heketi
      parameters:
      - description: Set secret for those creating volumes as type user
        displayName: Heketi User Secret
        name: HEKETI_USER_KEY
        value: <heketiuserkey>
      - description: Set secret for administration of the Heketi service as user admin
        displayName: Heketi Administrator Secret
        name: HEKETI_ADMIN_KEY
        value: <adminkey>
      - description: Set the executor type, kubernetes or ssh
        displayName: heketi executor type
        name: HEKETI_EXECUTOR
        value: kubernetes
      - description: Set the hostname for the route URL
        displayName: heketi route name
        name: HEKETI_ROUTE
        value: heketi-storage
      - displayName: heketi container image name
        name: IMAGE_NAME
        required: true
        value: registry.redhat.io/rhgs3/rhgs-volmanager-rhel7
      - displayName: heketi container image version
        name: IMAGE_VERSION
        required: true
        value: v3.11.8
      - description: A unique name to identify this heketi service, useful for running
          multiple heketi instances
        displayName: GlusterFS cluster name
        name: CLUSTER_NAME
        value: storage
      - description: Heketi can use a wrapper to execute LVM commands, i.e. run commands
          in the host namespace instead of in the Gluster container
        name: HEKETI_LVM_WRAPPER
        displayName: Wrapper for executing LVM commands
        value: /usr/sbin/exec-on-host
    2. If the template has only IMAGE_NAME, then edit the template to change the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, HEKETI_ROUTE, IMAGE_NAME, CLUSTER_NAME and HEKETI_LVM_WRAPPER as shown in the example below.

      # oc edit template heketi
      parameters:
      - description: Set secret for those creating volumes as type user
        displayName: Heketi User Secret
        name: HEKETI_USER_KEY
        value: <heketiuserkey>
      - description: Set secret for administration of the Heketi service as user admin
        displayName: Heketi Administrator Secret
        name: HEKETI_ADMIN_KEY
        value: <adminkey>
      - description: Set the executor type, kubernetes or ssh
        displayName: heketi executor type
        name: HEKETI_EXECUTOR
        value: kubernetes
      - description: Set the hostname for the route URL
        displayName: heketi route name
        name: HEKETI_ROUTE
        value: heketi-storage
      - displayName: heketi container image name
        name: IMAGE_NAME
        required: true
        value: registry.redhat.io/rhgs3/rhgs-volmanager-rhel7:v3.11.8
      - description: A unique name to identify this heketi service, useful for running
         multiple heketi instances
        displayName: GlusterFS cluster name
        name: CLUSTER_NAME
        value: storage
      - description: Heketi can use a wrapper to execute LVM commands, i.e. run commands in the host namespace instead of in the Gluster container
        name: HEKETI_LVM_WRAPPER
        displayName: Wrapper for executing LVM commands
        value: /usr/sbin/exec-on-host
      Note

      If a cluster has more than 1000 volumes refer to How to change the default PVS limit in Openshift Container Storage and add the required parameters before proceeding with the upgrade.

  9. Execute the following command to delete the deployment configuration, service, and route for heketi:

    Note

    The names of these resources can be obtained from the output of the following command:

    # oc get all | grep heketi
    # oc delete deploymentconfig,service,route heketi-storage
  10. Execute the following command to deploy the Heketi service, route, and deployment configuration which will be used to create persistent volumes for OpenShift:

    # oc process heketi | oc create -f -
    
    service "heketi" created
    route "heketi" created
    deploymentconfig "heketi" created
    Note

    It is recommended that the heketidbstorage volume be tuned for db workloads. Newly installed Openshift Container Storage deployments tune the heketidbstorage volume automatically. For older deployments, follow the KCS article Planning to run containerized DB or nosql workloads on Openshift Container Storage? and perform the volume set operation for the volume heketidbstorage.

  11. Execute the following command to verify that the containers are running:

    # oc get pods

    For example:

    # oc get pods
    NAME                                          READY     STATUS    RESTARTS   AGE
    glusterblock-storage-provisioner-dc-1-ffgs5   1/1       Running   0          3m
    glusterfs-storage-5thpc                       1/1       Running   0          9d
    glusterfs-storage-hfttr                       1/1       Running   0          9d
    glusterfs-storage-n8rg5                       1/1       Running   0          9d
    heketi-storage-4-9fnvz                        2/2       Running   0          8d

6.1.4.2. Upgrading the Red Hat Gluster Storage Pods

The following commands must be executed on the client machine.

Following are the steps for updating a DaemonSet for glusterfs:

  1. Execute the following steps to stop the Heketi pod to prevent it from accepting any new request for volume creation or volume deletion:

    1. Execute the following command to access your project:

      # oc project <project_name>

      For example:

      # oc project storage-project
    2. Execute the following command to get the DeploymentConfig:

      # oc get dc
    3. Execute the following command to set heketi server to accept requests only from the local-client:

      # heketi-cli server mode set local-client
    4. Wait for the ongoing operations to complete and execute the following command to monitor if there are any ongoing operations:

      # heketi-cli server operations info
    5. Execute the following command to reduce the replica count from 1 to 0. This brings down the Heketi pod:

      # oc scale dc <heketi_dc> --replicas=0
    6. Execute the following command to verify that the heketi pod is no longer present:

      # oc get pods
  2. Execute the following command to find the DaemonSet name for gluster

    # oc get ds
  3. Execute the following command to delete the DaemonSet:

    # oc delete ds <ds-name> --cascade=false

    Using the --cascade=false option while deleting the old DaemonSet deletes only the DaemonSet and not the gluster pods. After deleting the old DaemonSet, you must load the new one. When you manually delete the old pods, the new pods that are created will have the configurations of the new DaemonSet.

    For example,

    # oc delete ds glusterfs-storage --cascade=false
    daemonset "glusterfs-storage" deleted
  4. Execute the following commands to verify all the old pods are up:

    # oc get pods

    For example,

    # oc get pods
    NAME                                          READY     STATUS    RESTARTS   AGE
    glusterblock-storage-provisioner-dc-1-ffgs5   1/1       Running   0          3m
    glusterfs-storage-5thpc                       1/1       Running   0          9d
    glusterfs-storage-hfttr                       1/1       Running   0          9d
    glusterfs-storage-n8rg5                       1/1       Running   0          9d
    heketi-storage-4-9fnvz                        2/2       Running   0          8d
  5. Execute the following command to delete the old glusterfs template.

    # oc delete templates glusterfs
  6. Execute the following command to register new glusterfs template.

    # oc create -f /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/files/glusterfs-template.yml
    template "glusterfs" created
  7. Execute the following steps to edit the glusterfs template:

    # oc get templates
    NAME                      DESCRIPTION                         PARAMETERS    OBJECTS
    glusterblock-provisioner  glusterblock provisioner template   3 (2 blank)   4
    glusterfs                 GlusterFS DaemonSet template        5 (1 blank)   1
    heketi                    Heketi service deployment template  7 (3 blank)   3
    1. If the template has IMAGE_NAME and IMAGE_VERSION as two separate parameters, then update the glusterfs template as following. For example:

      # oc edit template glusterfs
      - displayName: GlusterFS container image name
        name: IMAGE_NAME
        required: true
        value: registry.redhat.io/rhgs3/rhgs-server-rhel7
      - displayName: GlusterFS container image version
        name: IMAGE_VERSION
        required: true
        value: v3.11.8
      - description: A unique name to identify which heketi service manages this cluster, useful for running
          multiple heketi instances
        displayName: GlusterFS cluster name
        name: CLUSTER_NAME
        value: storage
      Note

      If a cluster has more than 1000 volumes refer to How to change the default PVS limit in Openshift Container Storage and add the required parameters before proceeding with the upgrade.

    2. If the template has only IMAGE_NAME as a parameter, then update the glusterfs template as following. For example:

      # oc edit template glusterfs
      - displayName: GlusterFS container image name
        name: IMAGE_NAME
        required: true
        value: registry.redhat.io/rhgs3/rhgs-server-rhel7:v3.11.8
      - description: A unique name to identify which heketi service manages this cluster, useful for running
           multiple heketi instances
        displayName: GlusterFS cluster name
        name: CLUSTER_NAME
        value: storage
      Note

      Ensure that the CLUSTER_NAME variable is set to the correct value

  8. Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:

    1. Check if the nodes are labelled with the appropriate label by using the following command:

      # oc get nodes -l glusterfs=storage-host
  9. Execute the following commands to create the gluster DaemonSet:

    # oc process glusterfs | oc create -f -

    For example,

    # oc process glusterfs | oc create -f -
    daemonset "glusterfs" created
    • Execute the following command to identify the old gluster pods that need to be deleted:

      # oc get pods

      For example,

      # oc get pods
      NAME                                          READY     STATUS    RESTARTS   AGE
      glusterblock-storage-provisioner-dc-1-ffgs5   1/1       Running   0          3m
      glusterfs-storage-5thpc                       1/1       Running   0          9d
      glusterfs-storage-hfttr                       1/1       Running   0          9d
      glusterfs-storage-n8rg5                       1/1       Running   0          9d
      heketi-storage-4-9fnvz                        2/2       Running   0          8d
  10. Execute the following command and ensure that the bricks are not more than 90% full:

    # df -kh | grep -v ^Filesystem | awk '{if(int($5)>90) print $0}'
    Note

    If the bricks are close to 100% utilization, then the Logical Volume Manager (LVM) activation for these bricks may take a long time or can get stuck once the pod or node is rebooted. It is advised to bring down the utilization of that brick or to expand the physical volume (PV) that the logical volume (LV) is using.

    Note

    The df command is not applicable to bricks that belong to a Block Hosting Volume (BHV). On a BHV, the used size of the bricks reported by the df command is the added size of the blockvolumes of that Gluster volume; it is not the size of the data that resides in the blockvolumes. For more information, refer to How To Identify Block Volumes and Block Hosting Volumes in Openshift Container Storage.

  11. Execute the following command to delete the old gluster pods. Gluster pods must follow a rolling upgrade, so ensure that the new pod is running before you delete the next old gluster pod. Only the OnDelete DaemonSet update strategy is supported: after you update a DaemonSet template, new DaemonSet pods are created only when you manually delete the old DaemonSet pods.

    1. To delete the old gluster pods, execute the following command:

      # oc delete pod <gluster_pod>

      For example,

      # oc delete pod glusterfs-0vcf3
      pod "glusterfs-0vcf3" deleted
      Note

      Before deleting the next pod, a self-heal check has to be made:

      1. Run the following command to access shell on gluster pod:

        # oc rsh <gluster_pod_name>
      2. Run the following command to check the self-heal status of all the volumes:

        # for eachVolume in $(gluster volume list);  do gluster volume heal $eachVolume info ;  done | grep "Number of entries: [^0]$"
    2. The delete pod command will terminate the old pod and create a new pod. Run # oc get pods -w and check the AGE of the pod; the READY status should be 1/1. The following example output shows the status progression from termination to creation of the pod.

      # oc get pods -w
      NAME                             READY     STATUS        RESTARTS   AGE
      glusterfs-0vcf3                  1/1       Terminating   0          3d
      …
  12. Execute the following command to verify that the pods are running:

    # oc get pods

    For example,

    # oc get pods
    NAME                                          READY     STATUS    RESTARTS   AGE
    glusterblock-storage-provisioner-dc-1-ffgs5   1/1       Running   0          3m
    glusterfs-storage-5thpc                       1/1       Running   0          9d
    glusterfs-storage-hfttr                       1/1       Running   0          9d
    glusterfs-storage-n8rg5                       1/1       Running   0          9d
    heketi-storage-4-9fnvz                        2/2       Running   0          8d
  13. Execute the following command to verify if you have upgraded the pod to the latest version:

    # oc rsh <gluster_pod_name> glusterd --version

    For example:

    # oc rsh glusterfs-4cpcc glusterd --version
    glusterfs 6.0
  14. Check the Red Hat Gluster Storage op-version by executing the following command on one of the gluster pods.

    # gluster vol get all cluster.op-version
  15. After you upgrade the Gluster pods, ensure that you set Heketi back to operational mode:

    • Scale up the DC (Deployment Configuration).

      # oc scale dc <heketi_dc> --replicas=1
  16. Set the cluster.op-version to 70200 on any one of the pods:

    Note

    Ensure all the gluster pods are updated before changing the cluster.op-version.

    # gluster --timeout=3600 volume set all cluster.op-version 70200
  17. Execute the following steps to enable server.tcp-user-timeout on all volumes.

    Note

    The "server.tcp-user-timeout" option specifies the maximum amount of the time (in seconds) the transmitted data from the application can remain unacknowledged from the brick.

    It is used to detect force disconnections and dead connections (if a node dies unexpectedly, a firewall is activated, etc.,) early and make it possible for applications to reduce the overall failover time.

    1. List the glusterfs pod using the following command:

      # oc get pods

      For example:

      # oc get pods
      NAME                                          READY     STATUS    RESTARTS   AGE
      glusterblock-storage-provisioner-dc-1-ffgs5   1/1       Running   0          3m
      glusterfs-storage-5thpc                       1/1       Running   0          9d
      glusterfs-storage-hfttr                       1/1       Running   0          9d
      glusterfs-storage-n8rg5                       1/1       Running   0          9d
      heketi-storage-4-9fnvz                        2/2       Running   0          8d
    2. Remote shell into one of the glusterfs pods. For example:

      # oc rsh glusterfs-0vcf3
    3. Execute the following command:

      # for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done

      For example:

      # for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
      volume1
      volume set: success
      volume2
      volume set: success
  18. If a gluster-block-provisioner pod already exists, then delete it by executing the following commands:

    # oc delete dc glusterblock-provisioner-dc

    For example:

    # oc delete dc glusterblock-storage-provisioner-dc
  19. Execute the following command to delete the old glusterblock provisioner template.

     # oc delete templates glusterblock-provisioner
  20. Create a glusterblock provisioner template. For example:

    # oc create -f /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/files/glusterblock-provisioner.yml
    template.template.openshift.io/glusterblock-provisioner created
  21. Depending on the OCP version, edit the glusterblock-provisioner template to change the IMAGE_NAME, IMAGE_VERSION and NAMESPACE.

    # oc get templates
    NAME                      DESCRIPTION                         PARAMETERS    OBJECTS
    glusterblock-provisioner  glusterblock provisioner template   3 (2 blank)   4
    glusterfs                 GlusterFS DaemonSet template        5 (1 blank)   1
    heketi                    Heketi service deployment template  7 (3 blank)   3
    1. If the template has IMAGE_NAME and IMAGE_VERSION as two separate parameters, then update the glusterblock-provisioner template as following. For example:

      # oc edit template glusterblock-provisioner
      - displayName: glusterblock provisioner container image name
        name: IMAGE_NAME
        required: true
        value: registry.redhat.io/rhgs3/rhgs-gluster-block-prov-rhel7
      - displayName: glusterblock provisioner container image version
        name: IMAGE_VERSION
        required: true
        value: v3.11.8
      - description: The namespace in which these resources are being created
        displayName: glusterblock provisioner namespace
        name: NAMESPACE
        required: true
        value: glusterfs
      - description: A unique name to identify which heketi service manages this cluster,
          useful for running multiple heketi instances
        displayName: GlusterFS cluster name
        name: CLUSTER_NAME
        value: storage
    2. If the template has only IMAGE_NAME as a parameter, then update the glusterblock-provisioner template as following. For example:

      # oc edit template glusterblock-provisioner
      - displayName: glusterblock provisioner container image name
        name: IMAGE_NAME
        required: true
        value: registry.redhat.io/rhgs3/rhgs-gluster-block-prov-rhel7:v3.11.8
      - description: The namespace in which these resources are being created
        displayName: glusterblock provisioner namespace
        name: NAMESPACE
        required: true
        value: glusterfs
      - description: A unique name to identify which heketi service manages this cluster,
          useful for running multiple heketi instances
        displayName: GlusterFS cluster name
        name: CLUSTER_NAME
        value: storage
  22. Delete the following resources from the old pod

    # oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
    # oc delete serviceaccounts glusterblock-storage-provisioner
    # oc delete clusterrolebindings.authorization.openshift.io glusterblock-storage-provisioner
  23. Before running oc process, determine the correct provisioner name. If there is more than one gluster block provisioner running in your cluster, the name must differ from those of all other provisioners; a command to list the provisioner names already in use is shown after this list.
    For example,

    • If there are two or more provisioners, the name should be gluster.org/glusterblock-<namespace>, where <namespace> is replaced by the namespace that the provisioner is deployed in.
    • If there is only one provisioner, installed prior to 3.11.8, gluster.org/glusterblock is sufficient. If the name currently in use already has a unique namespace suffix, reuse the existing name.
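    To see which provisioner names are already in use by storage classes in the cluster, a command like the following can be used; it mirrors the storage class check at the end of this procedure:

    # oc get sc -o custom-columns=NAME:.metadata.name,PROV:.provisioner | grep 'gluster.org/glusterblock'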
  24. After editing the template, execute the following command to create the deployment configuration:

    # oc process glusterblock-provisioner -o yaml | oc create -f -
    Copy to Clipboard Toggle word wrap

    For example:

    # oc process glusterblock-provisioner -o yaml | oc create -f -
    clusterrole.authorization.openshift.io/glusterblock-provisioner-runner created
    serviceaccount/glusterblock-storage-provisioner created
    clusterrolebinding.authorization.openshift.io/glusterblock-storage-provisioner created
    deploymentconfig.apps.openshift.io/glusterblock-storage-provisioner-dc created
    Copy to Clipboard Toggle word wrap
  25. Brick multiplexing is a feature that allows adding multiple bricks into one process. This reduces resource consumption and allows us to run more bricks than before with the same memory consumption. It is enabled by default from Container-Native Storage 3.6 onward. During an upgrade from Container-Native Storage 3.10 to Red Hat Openshift Container Storage 3.11, to turn brick multiplexing on, execute the following commands:

    1. To exec into the Gluster pod, execute the following command and rsh into any of the gluster pods:

      # oc rsh <gluster_pod_name>
      Copy to Clipboard Toggle word wrap
    2. Verify the brick multiplex status:

      # gluster v get all all
      Copy to Clipboard Toggle word wrap
    3. If it is disabled, then execute the following command to enable brick multiplexing:

      Note

      Ensure that all volumes are in a stopped state or that no bricks are running when brick multiplexing is enabled.

      # gluster volume set all cluster.brick-multiplex on
      Copy to Clipboard Toggle word wrap

      For example:

      # oc rsh glusterfs-770ql
      sh-4.2# gluster volume set all cluster.brick-multiplex on
      Brick-multiplexing is supported only for container workloads (Independent or Converged mode). Also it is advised to make sure that either all volumes are in stopped state or no bricks are running before this option is modified.Do you still want to continue? (y/n) y
      volume set: success
      Copy to Clipboard Toggle word wrap
    4. List all the volumes in the trusted storage pool. This step is only required if the volume set operation is performed:

      For example:

      # gluster volume list
      
      heketidbstorage
      vol_194049d2565d2a4ad78ef0483e04711e
      ...
      ...
      Copy to Clipboard Toggle word wrap

      Restart all the volumes. This step is only required if the volume set operation is performed along with the previous step:

      # gluster vol stop <VOLNAME>
      # gluster vol start <VOLNAME>
      Copy to Clipboard Toggle word wrap
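      If many volumes need to be cycled, a loop along the following lines can be used instead of restarting each volume by hand. This is only a sketch; the --mode=script option is assumed here to suppress the interactive confirmation prompt of gluster volume stop:

      # for vol in $(gluster volume list); do
          gluster --mode=script volume stop "$vol"
          gluster --mode=script volume start "$vol"
        done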
  26. Support for S3 compatible Object Store in Red Hat Openshift Container Storage is under technology preview. To enable S3 compatible object store, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html/operations_guide/s3_object_store.

  27. All storage classes that use gluster block volume provisioning must match exactly to one of the provisioner names in the cluster. To check the list of storage classes that refer to a block provisioner, in a given namespace, run the following command:

    # oc get sc -o custom-columns=NAME:.metadata.name,PROV:.provisioner,RSNS:.parameters.restsecretnamespace | grep 'gluster.org/glusterblock' | grep <namespace>
    Copy to Clipboard Toggle word wrap

    Example:

    # oc get sc -o custom-columns=NAME:.metadata.name,PROV:.provisioner,RSNS:.parameters.restsecretnamespace | grep 'gluster.org/glusterblock' | grep app-storage
    glusterfs-storage-block   gluster.org/glusterblock-app-storage   app-storage
    Copy to Clipboard Toggle word wrap

    Check each storage class provisioner name. If it does not match the block provisioner name configured for that namespace, it must be updated. If the block provisioner name already matches the configured provisioner name, nothing else needs to be done. Use the list generated above and include all storage class names where the provisioner name must be updated.
    For every storage class in this list do the following:

    # oc get sc  -o yaml <storageclass>  > storageclass-to-edit.yaml
    # oc delete sc  <storageclass>
    # sed 's,gluster.org/glusterblock$,gluster.org/glusterblock-<namespace>,' storageclass-to-edit.yaml | oc create -f -
    Copy to Clipboard Toggle word wrap

    Example:

    # oc get sc  -o yaml gluster-storage-block  > storageclass-to-edit.yaml
    # oc delete sc  gluster-storage-block
    # sed 's,gluster.org/glusterblock$,gluster.org/glusterblock-app-storage,' storageclass-to-edit.yaml | oc create -f -
    Copy to Clipboard Toggle word wrap
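    To confirm that the recreated storage class now carries the namespaced provisioner name, it can be inspected directly; for example, using the storage class name from the example above:

    # oc get sc gluster-storage-block -o jsonpath='{.provisioner}'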

6.2. Upgrading the pods in the glusterfs registry group

The following sections provide steps to upgrade your glusterfs registry pods.

6.2.1. Prerequisites

Ensure the following prerequisites are met:

Note

For deployments using cns-deploy tool, the templates are available in the following location:

  • gluster template - /usr/share/heketi/templates/glusterfs-template.yaml
  • heketi template - /usr/share/heketi/templates/heketi-template.yaml
  • glusterblock-provisioner template - /usr/share/heketi/templates/glusterblock-provisioner.yaml

For deployments using ansible playbook the templates are available in the following location:

  • gluster template - /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/files/glusterfs-template.yml
  • heketi template - /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/files/heketi-template.yml
  • glusterblock-provisioner template - /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/files/glusterblock-provisioner.yml

6.2.2. Restoring original label values for /dev/log

Note

Follow this procedure only if you are upgrading your environment from Red Hat Container Native Storage 3.9 to Red Hat Openshift Container Storage 3.11.8.

Skip this procedure if you are upgrading your environment from Red Hat Openshift Container Storage 3.10 and above to Red Hat Openshift Container Storage 3.11.8.

To restore the original selinux label, execute the following commands:

  1. Create a directory and soft links on all nodes that run gluster pods:

    # mkdir /srv/<directory_name>
    # cd /srv/<directory_name>/   # same dir as above
    # ln -sf /dev/null systemd-tmpfiles-setup-dev.service
    # ln -sf /dev/null systemd-journald.service
    # ln -sf /dev/null systemd-journald.socket
    Copy to Clipboard Toggle word wrap
  2. Edit the daemonset that creates the glusterfs pods on the node which has oc client:

    # oc edit daemonset <daemonset_name>
    Copy to Clipboard Toggle word wrap

    Under volumeMounts section add a mapping for the volume:

    - mountPath: /usr/lib/systemd/system/systemd-journald.service
      name: systemd-journald-service
    - mountPath: /usr/lib/systemd/system/systemd-journald.socket
      name: systemd-journald-socket
    - mountPath: /usr/lib/systemd/system/systemd-tmpfiles-setup-dev.service
      name: systemd-tmpfiles-setup-dev-service
    Copy to Clipboard Toggle word wrap

    Under volumes section add a new host path for each service listed:

    Note

    The path mentioned in here should be the same as mentioned in Step 1.

    - hostPath:
       path: /srv/<directory_name>/systemd-journald.socket
       type: ""
      name: systemd-journald-socket
    - hostPath:
       path: /srv/<directory_name>/systemd-journald.service
       type: ""
      name: systemd-journald-service
    - hostPath:
       path: /srv/<directory_name>/systemd-tmpfiles-setup-dev.service
       type: ""
      name: systemd-tmpfiles-setup-dev-service
    Copy to Clipboard Toggle word wrap
  3. Run the following command on all nodes that run gluster pods. This will reset the label:

    # restorecon /dev/log
    Copy to Clipboard Toggle word wrap
  4. Execute the following command to check the status of self heal for all volumes:

    # oc rsh <gluster_pod_name>
    # for each_volume in `gluster volume list`; do gluster volume heal $each_volume info ; done  | grep  "Number of entries: [^0]$"
    Copy to Clipboard Toggle word wrap

    Wait for self-heal to complete.
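    If you prefer to wait in a loop rather than re-running the check by hand, something like the following can be used. It is a sketch only: replace the pod name placeholder, and note that an empty result from the heal check means there are no pending entries:

    # while oc rsh <gluster_pod_name> bash -c 'for v in $(gluster volume list); do gluster volume heal $v info; done' | grep -q "Number of entries: [^0]$"; do
        sleep 60
      done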

  5. Execute the following command and ensure that the bricks are not more than 90% full:

    # df -kh | grep -v ^Filesystem | awk '{if(int($5)>90) print $0}'
    Copy to Clipboard Toggle word wrap
    Note

    If the bricks are close to 100% utilization, then the Logical Volume Manager (LVM) activation for these bricks may take a long time or can get stuck once the pod or node is rebooted. It is advised to bring down the utilization of that brick or to expand the physical volume (PV) used by the logical volume (LV).

    Note

    The df command is not applicable to bricks that belong to a Block Hosting Volume (BHV). On a BHV, the used size of the bricks reported by the df command is the sum of the sizes of the block volumes in that Gluster volume; it is not the size of the data that resides in the block volumes. For more information, refer to How To Identify Block Volumes and Block Hosting Volumes in Openshift Container Storage.
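    To see which block volumes a given block hosting volume contains, the gluster-block CLI can be run from inside one of the gluster pods; for example (replace the placeholders with your pod and block hosting volume names):

    # oc rsh <gluster_pod_name>
    sh-4.2# gluster-block list <block_hosting_volume_name>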

  6. Execute the following command on any one of the gluster pods to set the maximum number of bricks (250) that can run on a single instance of glusterfsd process:

    # gluster volume set all cluster.max-bricks-per-process 250
    Copy to Clipboard Toggle word wrap
    1. Execute the following command on any one of the gluster pods to ensure that the option is set correctly:

      # gluster volume get all cluster.max-bricks-per-process
      Copy to Clipboard Toggle word wrap

      For example:

      # gluster volume get all cluster.max-bricks-per-process
      cluster.max-bricks-per-process 250
      Copy to Clipboard Toggle word wrap
  7. Execute the following command on the node which has oc client to delete the gluster pod:

    # oc delete pod <gluster_pod_name>
    Copy to Clipboard Toggle word wrap
  8. To verify if the pod is ready, execute the following command:

    # oc get pods -l glusterfs=registry-pod
    Copy to Clipboard Toggle word wrap
  9. Log in to the node hosting the pod and check the SELinux label of /dev/log:

    # ls -lZ /dev/log
    Copy to Clipboard Toggle word wrap

    The output should show the devlog_t label.

    For example:

    #  ls -lZ /dev/log
    srw-rw-rw-. root root system_u:object_r:devlog_t:s0    /dev/log
    Copy to Clipboard Toggle word wrap

    Exit the node.

  10. In the gluster pod, check if the label value is devlog_t:

    # oc rsh <gluster_pod_name>
    # ls -lZ /dev/log
    Copy to Clipboard Toggle word wrap

    For example:

    #  ls -lZ /dev/log
    srw-rw-rw-. root root system_u:object_r:devlog_t:s0    /dev/log
    Copy to Clipboard Toggle word wrap
  11. Repeat steps 4 to 9 for the other pods.

6.2.3. Upgrading if existing version deployed by using cns-deploy

6.2.3.1. Upgrading cns-deploy and Heketi Server

The following commands must be executed on the client machine.

  1. Execute the following command to update the heketi client and cns-deploy packages:

    # yum update cns-deploy -y
    # yum update heketi-client -y
    Copy to Clipboard Toggle word wrap
  2. Back up the Heketi registry database file:

    # heketi-cli db dump > heketi-db-dump-$(date -I).json
    Copy to Clipboard Toggle word wrap
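    It is worth confirming that the dump is a non-empty, well-formed JSON file before continuing. A minimal check, assuming Python is available on the client machine:

    # dump=heketi-db-dump-$(date -I).json
    # test -s "$dump" && python -c "import json, sys; json.load(open(sys.argv[1])); print('dump OK')" "$dump"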
  3. Execute the following command to delete the heketi template.

    # oc delete templates heketi
    Copy to Clipboard Toggle word wrap
  4. Execute the following command to get the current HEKETI_ADMIN_KEY.

    The OCS administrator can choose any phrase for the user key as long as it is not already in use elsewhere in the infrastructure. It is not used by any of the resources that OCS installs by default.

    # oc get secret <heketi-admin-secret-name> -o jsonpath='{.data.key}'|base64 -d;echo
    Copy to Clipboard Toggle word wrap
  5. Execute the following command to install the heketi template.

    # oc create -f /usr/share/heketi/templates/heketi-template.yaml
    template "heketi" created
    Copy to Clipboard Toggle word wrap
  6. Execute the following command to grant the heketi Service Account the necessary privileges.

    # oc policy add-role-to-user edit system:serviceaccount:<project_name>:heketi-service-account
    # oc adm policy add-scc-to-user privileged -z heketi-service-account
    Copy to Clipboard Toggle word wrap

    For example,

    # oc policy add-role-to-user edit system:serviceaccount:storage-project:heketi-service-account
    # oc adm policy add-scc-to-user privileged -z heketi-service-account
    Copy to Clipboard Toggle word wrap
    Note

    The service account used in heketi pod needs to be privileged because Heketi/rhgs-volmanager pod mounts the heketidb storage Gluster volume as a "glusterfs" volume type and not as a PersistentVolume (PV).
    As per the security-context-constraints regulations in OpenShift, ability to mount volumes which are not of the type configMap, downwardAPI, emptyDir, hostPath, nfs, persistentVolumeClaim, secret is granted only to accounts with privileged Security Context Constraint (SCC).

  7. Execute the following command to generate a new heketi configuration file.

    # sed -e "s/\${HEKETI_EXECUTOR}/kubernetes/" -e "s#\${HEKETI_FSTAB}#/var/lib/heketi/fstab#" -e "s/\${SSH_PORT}/22/" -e "s/\${SSH_USER}/root/" -e "s/\${SSH_SUDO}/false/" -e "s/\${BLOCK_HOST_CREATE}/true/" -e "s/\${BLOCK_HOST_SIZE}/500/" "/usr/share/heketi/templates/heketi.json.template" > heketi.json
    Copy to Clipboard Toggle word wrap
    • The BLOCK_HOST_SIZE parameter controls the size (in GB) of the automatically created Red Hat Gluster Storage volumes hosting the gluster-block volumes (For more information, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/operations_guide/index#Block_Storage). This default configuration will dynamically create block-hosting volumes of 500GB in size as more space is required.
    • Alternatively, copy the file /usr/share/heketi/templates/heketi.json.template to heketi.json in the current directory and edit the new file directly, replacing each "${VARIABLE}" string with the required parameter.

      Note

      JSON formatting is strictly required (e.g. no trailing spaces, booleans in all lowercase).
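      One quick way to verify this before creating the secret, assuming a standard Python installation on the client machine:

      # python -m json.tool heketi.json > /dev/null && echo "heketi.json is valid JSON"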

  8. Execute the following command to create a secret to hold the configuration file.

    # oc create secret generic <heketi-registry-config-secret> --from-file=heketi.json
    Copy to Clipboard Toggle word wrap
    Note

    If the heketi-registry-config-secret secret already exists, delete it and then re-run the command above.
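    A sketch of that sequence, reusing the placeholder secret name from above:

    # oc delete secret <heketi-registry-config-secret>
    # oc create secret generic <heketi-registry-config-secret> --from-file=heketi.json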

  9. Execute the following command to delete the deployment configuration, service, and route for heketi:

    # oc delete deploymentconfig,service,route heketi-registry
    Copy to Clipboard Toggle word wrap
  10. Edit the heketi template.

    • Edit the HEKETI_USER_KEY and HEKETI_ADMIN_KEY parameters.

      # oc edit template heketi
      parameters:
      - description: Set secret for those creating volumes as type user
        displayName: Heketi User Secret
        name: HEKETI_USER_KEY
        value: <heketiuserkey>
      - description: Set secret for administration of the Heketi service as user admin
        displayName: Heketi Administrator Secret
        name: HEKETI_ADMIN_KEY
        value: <adminkey>
      - description: Set the executor type, kubernetes or ssh
        displayName: heketi executor type
        name: HEKETI_EXECUTOR
        value: kubernetes
      - description: Set the hostname for the route URL
        displayName: heketi route name
        name: HEKETI_ROUTE
        value: heketi-registry
      - displayName: heketi container image name
        name: IMAGE_NAME
        required: true
        value: registry.redhat.io/rhgs3/rhgs-volmanager-rhel7
      - displayName: heketi container image version
        name: IMAGE_VERSION
        required: true
        value: v3.11.8
      - description: A unique name to identify this heketi service, useful for running
          multiple heketi instances
        displayName: GlusterFS cluster name
        name: CLUSTER_NAME
        value: registry
      Copy to Clipboard Toggle word wrap
      Note

      If a cluster has more than 1000 volumes refer to How to change the default PVS limit in Openshift Container Storage and add the required parameters before proceeding with the upgrade.

    • Add an ENV with the name HEKETI_LVM_WRAPPER and value /usr/sbin/exec-on-host.

      - description: Heketi can use a wrapper to execute LVM commands, i.e. run commands
          in the host namespace instead of in the Gluster container.
        displayName: Wrapper for executing LVM commands
        name: HEKETI_LVM_WRAPPER
        value: /usr/sbin/exec-on-host
      Copy to Clipboard Toggle word wrap
    • Add an ENV with the name HEKETI_DEBUG_UMOUNT_FAILURES and value true.

      - description: When unmounting a brick fails, Heketi will not be able to clean up the
          Gluster volume completely. The main causes preventing a brick from being unmounted
          seem to originate from Gluster processes. By enabling this option, the heketi.log
          will contain the output of 'lsof' to aid with debugging of the Gluster processes
          and help with identifying any files that may be left open.
        displayName: Capture more details in case brick unmounting fails
        name: HEKETI_DEBUG_UMOUNT_FAILURES
        required: true
        value: "true"
      Copy to Clipboard Toggle word wrap
    • Add an ENV with the name HEKETI_CLI_USER and value admin.
    • Add an ENV with the name HEKETI_CLI_KEY and the same value provided for the ENV HEKETI_ADMIN_KEY.
    • Replace the value under IMAGE_VERSION with v3.11.5 or v3.11.8 depending on the version you want to upgrade to.

      - displayName: heketi container image name
        name: IMAGE_NAME
        required: true
        value: registry.redhat.io/rhgs3/rhgs-volmanager-rhel7
      - displayName: heketi container image version
        name: IMAGE_VERSION
        required: true
        value: v3.11.8
      Copy to Clipboard Toggle word wrap
  11. Execute the following command to deploy the Heketi service, route, and deployment configuration which will be used to create persistent volumes for OpenShift:

    # oc process heketi | oc create -f -
    
    service "heketi-registry" created
    route "heketi-registry" created
    deploymentconfig-registry "heketi" created
    Copy to Clipboard Toggle word wrap
    Note

    It is recommended that the heketidbstorage volume be tuned for db workloads. Newly installed Openshift Container Storage deployments tune the heketidbstorage volume automatically. For older deployments, follow the KCS article Planning to run containerized DB or nosql workloads on Openshift Container Storage? and perform the volume set operation for the volume heketidbstorage.

  12. Execute the following command to verify that the containers are running:

    # oc get pods
    Copy to Clipboard Toggle word wrap

    For example:

    # oc get pods
    NAME                                          READY     STATUS    RESTARTS   AGE
    glusterblock-storage-provisioner-dc-1-ffgs5   1/1       Running   0          3m
    glusterfs-storage-5thpc                       1/1       Running   0          9d
    glusterfs-storage-hfttr                       1/1       Running   0          9d
    glusterfs-storage-n8rg5                       1/1       Running   0          9d
    heketi-storage-4-9fnvz                        2/2       Running   0          8d
    Copy to Clipboard Toggle word wrap

6.2.3.2. Upgrading the Red Hat Gluster Storage Registry Pods

The following commands must be executed on the client machine.

Following are the steps for updating a DaemonSet for glusterfs:

  1. Execute the following steps to stop the Heketi pod to prevent it from accepting any new request for volume creation or volume deletion:

    1. Execute the following command to access your project:

      # oc project <project_name>
      Copy to Clipboard Toggle word wrap

      For example:

      # oc project storage-project
      Copy to Clipboard Toggle word wrap
    2. Execute the following command to get the DeploymentConfig:

      # oc get dc
      Copy to Clipboard Toggle word wrap
    3. Execute the following command to set heketi server to accept requests only from the local-client:

      # heketi-cli server mode set local-client
      Copy to Clipboard Toggle word wrap
    4. Wait for the ongoing operations to complete and execute the following command to monitor if there are any ongoing operations:

      # heketi-cli server operations info
      Copy to Clipboard Toggle word wrap
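      To wait until the in-flight operation count drops to zero rather than re-running the command manually, a loop along these lines can be used (a sketch; it assumes the output of heketi-cli server operations info reports an "In-Flight" counter):

      # until heketi-cli server operations info | grep -iE 'in-?flight:[[:space:]]*0' > /dev/null; do
          sleep 10
        done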
    5. Execute the following command to reduce the replica count from 1 to 0. This brings down the Heketi pod:

      # oc scale dc <heketi_dc> --replicas=0
      Copy to Clipboard Toggle word wrap
    6. Execute the following command to verify that the heketi pod is no longer present:

      # oc get pods
      Copy to Clipboard Toggle word wrap
  2. Execute the following command to find the DaemonSet name for gluster:

    # oc get ds
    Copy to Clipboard Toggle word wrap
  3. Execute the following command to delete the DaemonSet:

    # oc delete ds <ds-name> --cascade=false
    Copy to Clipboard Toggle word wrap

    Using the --cascade=false option while deleting the old DaemonSet deletes only the DaemonSet and not the glusterfs registry pods. After deleting the old DaemonSet, you must load the new one. When you manually delete the old pods, the new pods that are created will have the configuration of the new DaemonSet.

    For example,

    # oc delete ds glusterfs-registry --cascade=false
    daemonset "glusterfs-registry" deleted
    Copy to Clipboard Toggle word wrap
  4. Execute the following commands to verify all the old pods are up:

    # oc get pods
    Copy to Clipboard Toggle word wrap

    For example,

    # oc get pods
    NAME                                          READY     STATUS    RESTARTS   AGE
    glusterblock-storage-provisioner-dc-1-ffgs5   1/1       Running   0          3m
    glusterfs-storage-5thpc                       1/1       Running   0          9d
    glusterfs-storage-hfttr                       1/1       Running   0          9d
    glusterfs-storage-n8rg5                       1/1       Running   0          9d
    heketi-storage-4-9fnvz                        2/2       Running   0          8d
    Copy to Clipboard Toggle word wrap
  5. Execute the following command to delete the old glusterfs template.

    # oc delete templates glusterfs
    Copy to Clipboard Toggle word wrap

    For example,

    # oc delete templates glusterfs
    template “glusterfs” deleted
    Copy to Clipboard Toggle word wrap
  6. Label all the OpenShift Container Platform nodes that has the Red Hat Gluster Storage pods:

    1. Check if the nodes are labelled with the appropriate label by using the following command:

      # oc get nodes -l glusterfs=registry-host
      Copy to Clipboard Toggle word wrap
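      If any node that hosts a Red Hat Gluster Storage pod is missing the label, it can be added as shown below (replace the node name):

      # oc label node <node_name> glusterfs=registry-host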
  7. Execute the following command to register new glusterfs template.

    # oc create -f /usr/share/heketi/templates/glusterfs-template.yaml
    Copy to Clipboard Toggle word wrap

    For example,

    # oc create -f /usr/share/heketi/templates/glusterfs-template.yaml
    template “glusterfs” created
    Copy to Clipboard Toggle word wrap
  8. Edit the glusterfs template.

    • Execute the following command:

      # oc edit template glusterfs
      Copy to Clipboard Toggle word wrap
    • Add the following lines under volume mounts:

       - name: kernel-modules
         mountPath: "/usr/lib/modules"
         readOnly: true
       - name: host-rootfs
         mountPath: "/rootfs"
      Copy to Clipboard Toggle word wrap
    • Add the following lines under volumes:

       - name: kernel-modules
         hostPath:
           path: "/usr/lib/modules"
       - name: host-rootfs
         hostPath:
           path: "/"
      Copy to Clipboard Toggle word wrap
    • Replace the value under IMAGE_VERSION with v3.11.5 or v3.11.8 depending on the version you want to upgrade to.

      - displayName: GlusterFS container image name
        name: IMAGE_NAME
        required: true
        value: registry.redhat.io/rhgs3/rhgs-server-rhel7
      - displayName: GlusterFS container image version
        name: IMAGE_VERSION
        required: true
        value: v3.11.8
      Copy to Clipboard Toggle word wrap
  9. Execute the following commands to create the gluster DaemonSet:

    # oc process glusterfs | oc create -f -
    Copy to Clipboard Toggle word wrap

    For example,

    # oc process glusterfs | oc create -f -
    daemonset "glusterfs" created
    Copy to Clipboard Toggle word wrap
    Note

    If a cluster has more than 1000 volumes refer to How to change the default PVS limit in Openshift Container Storage and add the required parameters before proceeding with the upgrade.

  10. Execute the following command to identify the old glusterfs_registry pods that need to be deleted:

    # oc get pods
    Copy to Clipboard Toggle word wrap

    For example,

    # oc get pods
    NAME                                          READY     STATUS    RESTARTS   AGE
    glusterblock-storage-provisioner-dc-1-ffgs5   1/1       Running   0          3m
    glusterfs-storage-5thpc                       1/1       Running   0          9d
    glusterfs-storage-hfttr                       1/1       Running   0          9d
    glusterfs-storage-n8rg5                       1/1       Running   0          9d
    heketi-storage-4-9fnvz                        2/2       Running   0          8d
    Copy to Clipboard Toggle word wrap
  11. Execute the following command and ensure that the bricks are not more than 90% full:

    # df -kh | grep -v ^Filesystem | awk '{if(int($5)>90) print $0}'
    Copy to Clipboard Toggle word wrap
    Note

    If the bricks are close to 100% utilization, then the Logical Volume Manager (LVM) activation for these bricks may take a long time or can get stuck once the pod or node is rebooted. It is advised to bring down the utilization of that brick or to expand the physical volume (PV) used by the logical volume (LV).

    Note

    The df command is not applicable to bricks that belong to a Block Hosting Volume (BHV). On a BHV, the used size of the bricks reported by the df command is the sum of the sizes of the block volumes in that Gluster volume; it is not the size of the data that resides in the block volumes. For more information, refer to How To Identify Block Volumes and Block Hosting Volumes in Openshift Container Storage.

  12. Execute the following command to delete the old glusterfs-registry pods. The glusterfs-registry pods must follow a rolling upgrade, so ensure that the new pod is running before you delete the next old glusterfs-registry pod. The OnDelete DaemonSet update strategy is supported: after you update a DaemonSet template, new DaemonSet pods are created only when you manually delete the old DaemonSet pods.

    1. To delete the old glusterfs-registry pods, execute the following command:

      # oc delete pod <gluster_pod>
      Copy to Clipboard Toggle word wrap

      For example,

      # oc delete pod glusterfs-0vcf3
      pod  “glusterfs-0vcf3” deleted
      Copy to Clipboard Toggle word wrap
      Note

      Before deleting the next pod, self heal check has to be made:

      1. Run the following command to access shell on glusterfs-registry pods:

        # oc rsh <gluster_pod_name>
        Copy to Clipboard Toggle word wrap
      2. Run the following command to check the self-heal status of all the volumes:

        # for eachVolume in $(gluster volume list);  do gluster volume heal $eachVolume info ;  done | grep "Number of entries: [^0]$"
        Copy to Clipboard Toggle word wrap
    2. The delete pod command terminates the old pod and creates a new pod. Run # oc get pods -w and check the Age of the new pod; its READY status should be 1/1. The following example output shows the status progression from termination to creation of the pod.

      # oc get pods -w
      NAME                             READY     STATUS        RESTARTS   AGE
      glusterfs-0vcf3                  1/1       Terminating   0          3d
      …
      Copy to Clipboard Toggle word wrap
  13. Execute the following command to verify that the pods are running:

    # oc get pods
    Copy to Clipboard Toggle word wrap

    For example,

    # oc get pods
    NAME                                          READY     STATUS    RESTARTS   AGE
    glusterblock-storage-provisioner-dc-1-ffgs5   1/1       Running   0          3m
    glusterfs-storage-5thpc                       1/1       Running   0          9d
    glusterfs-storage-hfttr                       1/1       Running   0          9d
    glusterfs-storage-n8rg5                       1/1       Running   0          9d
    heketi-storage-4-9fnvz                        2/2       Running   0          8d
    Copy to Clipboard Toggle word wrap
    • Execute the following commands to verify if you have upgraded the pod to the latest version:

      # oc rsh <gluster_registry_pod_name> glusterd --version
      Copy to Clipboard Toggle word wrap

      For example:

       # oc rsh glusterfs-registry-4cpcc glusterd --version
      glusterfs 6.0
      Copy to Clipboard Toggle word wrap
      # rpm -qa|grep gluster
      Copy to Clipboard Toggle word wrap
  14. Check the Red Hat Gluster Storage op-version by executing the following command on one of the glusterfs-registry pods.

    # gluster vol get all cluster.op-version
    Copy to Clipboard Toggle word wrap
  15. After you upgrade the Gluster pods, ensure that you set Heketi back to operational mode:

    • Scale up the DC (Deployment Configuration).

      # oc scale dc <heketi_dc> --replicas=1
      Copy to Clipboard Toggle word wrap
  16. Set the cluster.op-version to 70200 on any one of the pods:

    Note

    Ensure all the glusterfs-registry pods are updated before changing the cluster.op-version.

    # gluster volume set all cluster.op-version 70200
    Copy to Clipboard Toggle word wrap
  17. Execute the following steps to enable server.tcp-user-timeout on all volumes.

    Note

    The "server.tcp-user-timeout" option specifies the maximum amount of the time (in seconds) the transmitted data from the application can remain unacknowledged from the brick.

    It is used to detect force disconnections and dead connections (if a node dies unexpectedly, a firewall is activated, etc.,) early and make it possible for applications to reduce the overall failover time.

    1. List the glusterfs pod using the following command:

      # oc get pods
      Copy to Clipboard Toggle word wrap

      For example:

      # oc get pods
      NAME                                          READY     STATUS    RESTARTS   AGE
      glusterblock-storage-provisioner-dc-1-ffgs5   1/1       Running   0          3m
      glusterfs-storage-5thpc                       1/1       Running   0          9d
      glusterfs-storage-hfttr                       1/1       Running   0          9d
      glusterfs-storage-n8rg5                       1/1       Running   0          9d
      heketi-storage-4-9fnvz                        2/2       Running   0          8d
      Copy to Clipboard Toggle word wrap
    2. Remote shell into one of the glusterfs-registry pods. For example:

      # oc rsh glusterfs-registry-g6vd9
      Copy to Clipboard Toggle word wrap
    3. Execute the following command:

      # for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
      Copy to Clipboard Toggle word wrap

      For example:

      # for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
      volume1
      volume set: success
      volume2
      volume set: success
      Copy to Clipboard Toggle word wrap
  18. If a gluster-block registry provisioner pod already exists, delete it by executing the following commands:

    # oc delete dc <gluster-block-registry-dc>
    Copy to Clipboard Toggle word wrap

    For example:

    # oc delete dc glusterblock-registry-provisioner-dc
    Copy to Clipboard Toggle word wrap
  19. Delete the following resources from the old pod:

    # oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
    # oc delete serviceaccounts glusterblock-provisioner
    serviceaccount "glusterblock-provisioner" deleted
    # oc delete clusterrolebindings.authorization.openshift.io glusterblock-provisioner
    Copy to Clipboard Toggle word wrap
  20. Execute the following commands to deploy the gluster-block provisioner:

    # sed -e 's/${NAMESPACE}/<NAMESPACE>/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | sed -e 's/<VERSION>/<NEW-VERSION>/' | oc create -f -
    Copy to Clipboard Toggle word wrap
    <VERSION>
    Existing version of OpenShift Container Storage.
    <NEW-VERSION>
    Either 3.11.5 or 3.11.8, depending on the version you are upgrading to.

    # oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:<NAMESPACE>:glusterblock-provisioner
    Copy to Clipboard Toggle word wrap

    For example:

    # sed -e 's/${NAMESPACE}/storage-project/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | sed -e 's/3.11.4/3.11.8/' | oc create -f -
    Copy to Clipboard Toggle word wrap
    # oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:storage-project:glusterblock-provisioner
    Copy to Clipboard Toggle word wrap
  21. Brick multiplexing is a feature that allows adding multiple bricks into one process. This reduces resource consumption and allows us to run more bricks than before with the same memory consumption. It is enabled by default from Container-Native Storage 3.6 onward. During an upgrade from Container-Native Storage 3.10 to Red Hat Openshift Container Storage 3.11, to turn brick multiplexing on, execute the following commands:

    1. To exec into the Gluster pod, execute the following command and rsh into any of the gluster pods:

      # oc rsh <gluster_pod_name>
      Copy to Clipboard Toggle word wrap
    2. Verify the brick multiplex status:

      # gluster v get all all
      Copy to Clipboard Toggle word wrap
    3. If it is disabled, then execute the following command to enable brick multiplexing:

      Note

      Ensure that all volumes are in a stopped state or that no bricks are running when brick multiplexing is enabled.

      # gluster volume set all cluster.brick-multiplex on
      Copy to Clipboard Toggle word wrap

      For example:

      # oc rsh glusterfs-registry-g6vd9
      
      sh-4.2# gluster volume set all cluster.brick-multiplex on
      Brick-multiplexing is supported only for container workloads (Independent or Converged mode). Also it is advised to make sure that either all volumes are in stopped state or no bricks are running before this option is modified.Do you still want to continue? (y/n) y
      volume set: success
      Copy to Clipboard Toggle word wrap
    4. List all the volumes in the trusted storage pool. This step is only required if the volume set operation is performed:

      For example:

      # gluster volume list
      
      heketidbstorage
      vol_194049d2565d2a4ad78ef0483e04711e
      ...
      ...
      Copy to Clipboard Toggle word wrap

      Restart all the volumes. This step is only required if the volume set operation is performed along with the previous step:

      # gluster vol stop <VOLNAME>
      # gluster vol start <VOLNAME>
      Copy to Clipboard Toggle word wrap
  22. Support for S3 compatible Object Store in Red Hat Openshift Container Storage is under technology preview. To enable S3 compatible object store, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html/operations_guide/s3_object_store.

    Note

    After upgrading the glusterfs registry pods, proceed with the steps to bring back your heketi pod, and then proceed with the steps to upgrade the client on the Red Hat Openshift Container Platform nodes.

6.2.4. Upgrading if existing version deployed by using Ansible

6.2.4.1. Upgrading Heketi Server

The following commands must be executed on the client machine.

Note

"yum update cns-deploy -y" is not required to be executed if OCS 3.10 was deployed via Ansible.

  1. Execute the following steps to stop the Heketi pod to prevent it from accepting any new request for volume creation or volume deletion:

    1. Execute the following command to access your project:

      # oc project <project_name>
      Copy to Clipboard Toggle word wrap

      For example:

      # oc project storage-project
      Copy to Clipboard Toggle word wrap
    2. Execute the following command to get the DeploymentConfig:

      # oc get dc
      Copy to Clipboard Toggle word wrap
    3. Execute the following command to set heketi server to accept requests only from the local-client:

      # heketi-cli server mode set local-client
      Copy to Clipboard Toggle word wrap
    4. Wait for the ongoing operations to complete and execute the following command to monitor if there are any ongoing operations:

      # heketi-cli server operations info
      Copy to Clipboard Toggle word wrap
  2. Back up the Heketi database file:

    # heketi-cli db dump > heketi-db-dump-$(date -I).json
    Copy to Clipboard Toggle word wrap
    Note

    The JSON file that is created can be used to restore the database and should therefore be stored on persistent storage of your choice.

  3. Execute the following command to update the heketi client packages. Update the heketi-client package on all the OCP nodes where it is installed. Newer installations may not have the heketi-client rpm installed on any OCP nodes:

    # yum update heketi-client -y
    Copy to Clipboard Toggle word wrap
  4. Execute the following command to get the current HEKETI_ADMIN_KEY.

    The OCS administrator can choose any phrase for the user key as long as it is not already in use elsewhere in the infrastructure. It is not used by any of the resources that OCS installs by default.

    # oc get secret heketi-registry-admin-secret -o jsonpath='{.data.key}'|base64 -d;echo
    Copy to Clipboard Toggle word wrap
  5. If the HEKETI_USER_KEY was set previously, you can obtain it by using the following command:

    # oc describe pod <heketi-pod>
    Copy to Clipboard Toggle word wrap
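    For example, to filter just that variable from the pod description (the variable is shown in the pod's environment section):

    # oc describe pod <heketi-pod> | grep HEKETI_USER_KEY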
  6. Execute the following step to edit the template:

    1. If the existing template has only IMAGE_NAME as a parameter, then edit the template to change the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, HEKETI_ROUTE, IMAGE_NAME, CLUSTER_NAME and HEKETI_LVM_WRAPPER parameters as shown in the example below.

      # oc edit template heketi
      parameters:
      - description: Set secret for those creating volumes as type user
        displayName: Heketi User Secret
        name: HEKETI_USER_KEY
        value: <heketiuserkey>
      - description: Set secret for administration of the Heketi service as user admin
        displayName: Heketi Administrator Secret
        name: HEKETI_ADMIN_KEY
        value: <adminkey>
      - description: Set the executor type, kubernetes or ssh
        displayName: heketi executor type
        name: HEKETI_EXECUTOR
        value: kubernetes
      - description: Set the hostname for the route URL
        displayName: heketi route name
        name: HEKETI_ROUTE
        value: heketi-registry
      - displayName: heketi container image name
        name: IMAGE_NAME
        required: true
        value: registry.redhat.io/rhgs3/rhgs-volmanager-rhel7:v3.11.8
      - description: A unique name to identify this heketi service, useful for running
        multiple heketi instances
        displayName: GlusterFS cluster name
        name: CLUSTER_NAME
        value: registry
      - description: Heketi can use a wrapper to execute LVM commands, i.e. run commands in the host namespace instead of in the Gluster container
        name: HEKETI_LVM_WRAPPER
        displayName: Wrapper for executing LVM commands
        value: /usr/sbin/exec-on-host
      Copy to Clipboard Toggle word wrap
    2. If the existing template has IMAGE_NAME and IMAGE_VERSION as two parameters, then edit the template to change the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, HEKETI_ROUTE, IMAGE_NAME, IMAGE_VERSION, CLUSTER_NAME and HEKETI_LVM_WRAPPER as shown in the example below.

      # oc edit template heketi
      parameters:
      - description: Set secret for those creating volumes as type user
        displayName: Heketi User Secret
        name: HEKETI_USER_KEY
        value: <heketiuserkey>
      - description: Set secret for administration of the Heketi service as user admin
        displayName: Heketi Administrator Secret
        name: HEKETI_ADMIN_KEY
        value: <adminkey>
      - description: Set the executor type, kubernetes or ssh
        displayName: heketi executor type
        name: HEKETI_EXECUTOR
        value: kubernetes
      - description: Set the hostname for the route URL
        displayName: heketi route name
        name: HEKETI_ROUTE
        value: heketi-registry
      - displayName: heketi container image name
        name: IMAGE_NAME
        required: true
        value: registry.redhat.io/rhgs3/rhgs-volmanager-rhel7
      - displayName: heketi container image version
        name: IMAGE_VERSION
        required: true
        value: v3.11.8
      - description: A unique name to identify this heketi service, useful for running multiple heketi instances
        displayName: GlusterFS-registry cluster name
        name: CLUSTER_NAME
        value: registry
      - description: Heketi can use a wrapper to execute LVM commands, i.e. run commands in the host namespace instead of in the Gluster container
        name: HEKETI_LVM_WRAPPER
        displayName: Wrapper for executing LVM commands
        value: /usr/sbin/exec-on-host
      Copy to Clipboard Toggle word wrap
      Note

      If a cluster has more than 1000 volumes refer to How to change the default PVS limit in Openshift Container Storage and add the required parameters before proceeding with the upgrade.

  7. Execute the following command to delete the deployment configuration, service, and route for heketi:

    # oc delete deploymentconfig,service,route heketi-registry
    Copy to Clipboard Toggle word wrap
  8. Execute the following command to deploy the Heketi service, route, and deployment configuration which will be used to create persistent volumes for OpenShift:

    # oc process heketi | oc create -f -
    
    service "heketi-registry" created
    route "heketi-registry" created
    deploymentconfig-registry "heketi" created
    Copy to Clipboard Toggle word wrap
    Note

    It is recommended that the heketidbstorage volume be tuned for db workloads. Newly installed Openshift Container Storage deployments tune the heketidbstorage volume automatically. For older deployments, follow the KCS article Planning to run containerized DB or nosql workloads on Openshift Container Storage? and perform the volume set operation for the volume heketidbstorage.

  9. Execute the following command to verify that the containers are running:

    # oc get pods
    Copy to Clipboard Toggle word wrap

    For example:

    # oc get pods
    NAME                                          READY     STATUS    RESTARTS   AGE
    glusterblock-storage-provisioner-dc-1-ffgs5   1/1       Running   0          3m
    glusterfs-storage-5thpc                       1/1       Running   0          9d
    glusterfs-storage-hfttr                       1/1       Running   0          9d
    glusterfs-storage-n8rg5                       1/1       Running   0          9d
    heketi-storage-4-9fnvz                        2/2       Running   0          8d
    Copy to Clipboard Toggle word wrap

6.2.4.2. Upgrading the Red Hat Gluster Storage Registry Pods

The following commands must be executed on the client machine.

Following are the steps for updating a DaemonSet for glusterfs:

  1. Execute the following steps to stop the Heketi pod to prevent it from accepting any new request for volume creation or volume deletion:

    1. Execute the following command to access your project:

      # oc project <project_name>
      Copy to Clipboard Toggle word wrap

      For example:

      # oc project storage-project
      Copy to Clipboard Toggle word wrap
    2. Execute the following command to get the DeploymentConfig:

      # oc get dc
      Copy to Clipboard Toggle word wrap
    3. Execute the following command to set heketi server to accept requests only from the local-client:

      # heketi-cli server mode set local-client
      Copy to Clipboard Toggle word wrap
    4. Wait for the ongoing operations to complete and execute the following command to monitor if there are any ongoing operations:

      # heketi-cli server operations info
      Copy to Clipboard Toggle word wrap
    5. Execute the following command to reduce the replica count from 1 to 0. This brings down the Heketi pod:

      # oc scale dc <heketi_dc> --replicas=0
      Copy to Clipboard Toggle word wrap
    6. Execute the following command to verify that the heketi pod is no longer present:

      # oc get pods
      Copy to Clipboard Toggle word wrap
  2. Execute the following command to find the DaemonSet name for gluster:

    # oc get ds
    Copy to Clipboard Toggle word wrap
  3. Execute the following command to delete the DaemonSet:

    # oc delete ds <ds-name> --cascade=false
    Copy to Clipboard Toggle word wrap

    Using the --cascade=false option while deleting the old DaemonSet deletes only the DaemonSet and not the glusterfs registry pods. After deleting the old DaemonSet, you must load the new one. When you manually delete the old pods, the new pods that are created will have the configuration of the new DaemonSet.

    For example,

    # oc delete ds glusterfs-registry --cascade=false
    daemonset "glusterfs-registry" deleted
    Copy to Clipboard Toggle word wrap
  4. Execute the following commands to verify all the old pods are up:

    # oc get pods
    Copy to Clipboard Toggle word wrap

    For example,

    # oc get pods
    NAME                                          READY     STATUS    RESTARTS   AGE
    glusterblock-storage-provisioner-dc-1-ffgs5   1/1       Running   0          3m
    glusterfs-storage-5thpc                       1/1       Running   0          9d
    glusterfs-storage-hfttr                       1/1       Running   0          9d
    glusterfs-storage-n8rg5                       1/1       Running   0          9d
    heketi-storage-4-9fnvz                        2/2       Running   0          8d
    Copy to Clipboard Toggle word wrap
  5. Execute the following command to delete the old glusterfs template.

     # oc delete templates glusterfs
    Copy to Clipboard Toggle word wrap
  6. Execute the following command to register new glusterfs template.

    # oc create -f /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/files/glusterfs-template.yml
    template "glusterfs" created
    Copy to Clipboard Toggle word wrap
  7. Execute the following command to edit the old glusterfs template.

    1. If the template has only IMAGE_NAME as a parameter, then update the glusterfs template as following. For example:

      # oc edit template glusterfs
      
      - description: Labels which define the daemonset node selector. Must contain at least
          one label of the format \'glusterfs=<CLUSTER_NAME>-host\'
        displayName: Daemonset Node Labels
        name: NODE_LABELS
        value: '{ "glusterfs": "registry-host" }'
      - displayName: GlusterFS container image name
        name: IMAGE_NAME
        required: true
        value: registry.redhat.io/rhgs3/rhgs-server-rhel7:v3.11.8
      - description: A unique name to identify which heketi service manages this cluster,
          useful for running multiple heketi instances
        displayName: GlusterFS cluster name
        name: CLUSTER_NAME
        value: registry
      Copy to Clipboard Toggle word wrap
    2. If the template has IMAGE_NAME and IMAGE_VERSION as two separate parameters, then update the glusterfs template as following. For example:

      # oc edit template glusterfs
      - description: Labels which define the daemonset node selector. Must contain at least
          one label of the format \'glusterfs=<CLUSTER_NAME>-host\'
        displayName: Daemonset Node Labels
        name: NODE_LABELS
        value: '{ "glusterfs": "registry-host" }'
      - displayName: GlusterFS container image name
        name: IMAGE_NAME
        required: true
        value: registry.redhat.io/rhgs3/rhgs-server-rhel7
      - displayName: GlusterFS container image version
        name: IMAGE_VERSION
        required: true
        value: v3.11.8
      - description: A unique name to identify which heketi service manages this cluster,
          useful for running multiple heketi instances
        displayName: GlusterFS cluster name
        name: CLUSTER_NAME
        value: registry
      Copy to Clipboard Toggle word wrap
  8. Label all the OpenShift Container Platform nodes that has the Red Hat Gluster Storage pods:

    1. Check if the nodes are labelled with the appropriate label by using the following command:

      # oc get nodes -l glusterfs=registry-host
      Copy to Clipboard Toggle word wrap
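      Nodes that are missing the label can be labelled as follows (replace the node name):

      # oc label node <node_name> glusterfs=registry-host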
  If they are not already present in the glusterfs template, add the following lines under volumeMounts:

    - name: kernel-modules
      mountPath: "/usr/lib/modules"
      readOnly: true
    - name: host-rootfs
      mountPath: "/rootfs"

  and the following lines under volumes:

    - name: kernel-modules
      hostPath:
        path: "/usr/lib/modules"
    - name: host-rootfs
      hostPath:
        path: "/"

    1. Execute the following commands to create the gluster DaemonSet:

      # oc process glusterfs | oc create -f -
      Copy to Clipboard Toggle word wrap

      For example,

      # oc process glusterfs | oc create -f -
      daemonset "glusterfs-registry" created
      Copy to Clipboard Toggle word wrap
    2. Execute the following command to identify the old glusterfs_registry pods that need to be deleted:

      # oc get pods
      Copy to Clipboard Toggle word wrap

      For example,

      # oc get pods
      NAME                                          READY     STATUS    RESTARTS   AGE
      glusterblock-storage-provisioner-dc-1-ffgs5   1/1       Running   0          3m
      glusterfs-storage-5thpc                       1/1       Running   0          9d
      glusterfs-storage-hfttr                       1/1       Running   0          9d
      glusterfs-storage-n8rg5                       1/1       Running   0          9d
      heketi-storage-4-9fnvz                        2/2       Running   0          8d
      Copy to Clipboard Toggle word wrap
    3. Execute the following command and ensure that the bricks are not more than 90% full:

      # df -kh | grep -v ^Filesystem | awk '{if(int($5)>90) print $0}'
      Copy to Clipboard Toggle word wrap
      Note

      If the bricks are close to 100% utilization, then the Logical Volume Manager (LVM) activation for these bricks may take a long time or can get stuck once the pod or node is rebooted. It is advised to bring down the utilization of that brick or to expand the physical volume (PV) used by the logical volume (LV).

      Note

      The df command is not applicable to bricks that belong to a Block Hosting Volume (BHV). On a BHV, the used size of the bricks reported by the df command is the sum of the sizes of the block volumes in that Gluster volume; it is not the size of the data that resides in the block volumes. For more information, refer to How To Identify Block Volumes and Block Hosting Volumes in Openshift Container Storage.

    4. Execute the following command to delete the old glusterfs-registry pods. The glusterfs-registry pods must follow a rolling upgrade, so ensure that the new pod is running before you delete the next old glusterfs-registry pod. The OnDelete DaemonSet update strategy is supported: after you update a DaemonSet template, new DaemonSet pods are created only when you manually delete the old DaemonSet pods.

      1. To delete the old glusterfs-registry pods, execute the following command:

        # oc delete pod <gluster_pod>
        Copy to Clipboard Toggle word wrap

        For example,

        # oc delete pod glusterfs-registry-4cpcc
        pod “glusterfs-registry-4cpcc” deleted
        Copy to Clipboard Toggle word wrap
        Note

        Before deleting the next pod, self heal check has to be made:

        1. Run the following command to access shell on glusterfs-registry pods:

          # oc rsh <gluster_pod_name>
          Copy to Clipboard Toggle word wrap
        2. Run the following command to check the self-heal status of all the volumes:

          # for eachVolume in $(gluster volume list);  do gluster volume heal $eachVolume info ;  done | grep "Number of entries: [^0]$"
          Copy to Clipboard Toggle word wrap
      2. The delete pod command terminates the old pod and creates a new pod. Run # oc get pods -w and check the Age of the new pod; its READY status should be 1/1. The following example output shows the status progression from termination to creation of the pod.

        # oc get pods -w
        NAME                             READY     STATUS        RESTARTS   AGE
        glusterfs-registry-4cpcc                  1/1       Terminating   0          3d
        …
        Copy to Clipboard Toggle word wrap
    5. Execute the following command to verify that the pods are running:

      # oc get pods
      Copy to Clipboard Toggle word wrap

      For example,

      # oc get pods
      NAME                                          READY     STATUS    RESTARTS   AGE
      glusterblock-storage-provisioner-dc-1-ffgs5   1/1       Running   0          3m
      glusterfs-storage-5thpc                       1/1       Running   0          9d
      glusterfs-storage-hfttr                       1/1       Running   0          9d
      glusterfs-storage-n8rg5                       1/1       Running   0          9d
      heketi-storage-4-9fnvz                        2/2       Running   0          8d
      Copy to Clipboard Toggle word wrap
    6. Execute the following commands to verify if you have upgraded the pod to the latest version:

      # oc rsh <gluster_registry_pod_name> glusterd --version
      Copy to Clipboard Toggle word wrap

      For example:

      # oc rsh glusterfs-registry-abmqa glusterd --version
      glusterfs 6.0
      Copy to Clipboard Toggle word wrap
      # rpm -qa|grep gluster
      Copy to Clipboard Toggle word wrap
    7. Check the Red Hat Gluster Storage op-version by executing the following command on one of the glusterfs-registry pods.

      # gluster vol get all cluster.op-version
      Copy to Clipboard Toggle word wrap
    8. After you upgrade the Gluster pods, ensure that you set Heketi back to operational mode:

      • Scale up the DC (Deployment Configuration).

        # oc scale dc <heketi_dc> --replicas=1
        Copy to Clipboard Toggle word wrap
    9. Set the cluster.op-version to 70200 on any one of the pods:

      Note

      Ensure all the glusterfs-registry pods are updated before changing the cluster.op-version.

      # gluster volume set all cluster.op-version 70200
      Copy to Clipboard Toggle word wrap
    10. Execute the following steps to enable server.tcp-user-timeout on all volumes.

      Note

      The "server.tcp-user-timeout" option specifies the maximum amount of the time (in seconds) the transmitted data from the application can remain unacknowledged from the brick.

      It is used to detect force disconnections and dead connections (if a node dies unexpectedly, a firewall is activated, etc.,) early and make it possible for applications to reduce the overall failover time.

      1. List the glusterfs pod using the following command:

        # oc get pods
        Copy to Clipboard Toggle word wrap

        For example:

        # oc get pods
        NAME                                          READY     STATUS    RESTARTS   AGE
        glusterblock-storage-provisioner-dc-1-ffgs5   1/1       Running   0          3m
        glusterfs-storage-5thpc                       1/1       Running   0          9d
        glusterfs-storage-hfttr                       1/1       Running   0          9d
        glusterfs-storage-n8rg5                       1/1       Running   0          9d
        heketi-storage-4-9fnvz                        2/2       Running   0          8d
        Copy to Clipboard Toggle word wrap
      2. Remote shell into one of the glusterfs-registry pods. For example:

        # oc rsh glusterfs-registry-g6vd9
        Copy to Clipboard Toggle word wrap
      3. Execute the following command:

        # for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
        Copy to Clipboard Toggle word wrap

        For example:

        # for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
        volume1
        volume set: success
        volume2
        volume set: success
        Copy to Clipboard Toggle word wrap
    11. If a gluster-block-registry-provisioner pod already exists, delete it by executing the following command:

      # oc delete dc <gluster-block-registry-dc>
      Copy to Clipboard Toggle word wrap

      For example:

      # oc delete dc glusterblock-registry-provisioner-dc
      Copy to Clipboard Toggle word wrap
    12. Execute the following command to delete the old glusterblock provisioner template.

       # oc delete templates glusterblock-provisioner
      Copy to Clipboard Toggle word wrap
    13. Create a glusterblock provisioner template. For example:

      # oc create -f /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/files/glusterblock-provisioner.yml
      template.template.openshift.io/glusterblock-provisioner created
      Copy to Clipboard Toggle word wrap
    14. Depending on the OCP version, edit the glusterblock-provisioner template to change the IMAGE_NAME and NAMESPACE.

      # oc edit template glusterblock-provisioner
      - displayName: glusterblock provisioner container image name
        name: IMAGE_NAME
        required: true
        value: registry.redhat.io/rhgs3/rhgs-gluster-block-prov-rhel7:v3.11.8
      - description: The namespace in which these resources are being created
        displayName: glusterblock provisioner namespace
        name: NAMESPACE
        required: true
        value: glusterfs-registry
      - description: A unique name to identify which heketi service manages this cluster, useful for running multiple heketi instances
        displayName: GlusterFS cluster name
        name: CLUSTER_NAME
        value: registry
      Copy to Clipboard Toggle word wrap
      • If the template has IMAGE_NAME and IMAGE_VERSION as two separate parameters, then update the glusterblock-provisioner template as follows.
        For example:

        # oc edit template glusterblock-provisioner
        
        - displayName: glusterblock provisioner container image name
          name: IMAGE_NAME
          required: true
          value: registry.redhat.io/rhgs3/rhgs-gluster-block-prov-rhel7
        - displayName: glusterblock provisioner container image version
          name: IMAGE_VERSION
          required: true
          value: v3.11.8
        - description: The namespace in which these resources are being created
          displayName: glusterblock provisioner namespace
          name: NAMESPACE
          required: true
          value: glusterfs-registry
        - description: A unique name to identify which heketi service manages this cluster,
          useful for running multiple heketi instances
          displayName: GlusterFS cluster name
          name: CLUSTER_NAME
          value: registry
        Copy to Clipboard Toggle word wrap
    15. Delete the following resources from the old pod:

      # oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
      # oc delete serviceaccounts glusterblock-registry-provisioner
      # oc delete clusterrolebindings.authorization.openshift.io glusterblock-registry-provisioner
      Copy to Clipboard Toggle word wrap
    16. Before running oc process, determine the correct provisioner name. If more than one gluster block provisioner is running in your cluster, the name must differ from the names of all other provisioners.
      For example,

      • If there are two or more provisioners, the name should be gluster.org/glusterblock-<namespace>, where <namespace> is replaced by the namespace that the provisioner is deployed in.
      • If there is only one provisioner, installed prior to 3.11.8, gluster.org/glusterblock is sufficient. If the name currently in use already has a unique namespace suffix, reuse the existing name. A sketch for listing the provisioner names currently in use follows this list.
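      A minimal sketch for listing the gluster block provisioner names currently referenced by storage classes; it reuses the custom-columns query from step 20 and returns nothing if no block storage classes exist yet:

      # oc get sc -o custom-columns=NAME:.metadata.name,PROV:.provisioner | grep 'gluster.org/glusterblock'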
    17. After editing the template, execute the following command to create the deployment configuration:

      # oc process glusterblock-provisioner -o yaml | oc create -f -
      Copy to Clipboard Toggle word wrap

      For example:

      # oc process glusterblock-provisioner -o yaml | oc create -f -
      clusterrole.authorization.openshift.io/glusterblock-provisioner-runner created
      serviceaccount/glusterblock-registry-provisioner created
      clusterrolebinding.authorization.openshift.io/glusterblock-registry-provisioner created
      deploymentconfig.apps.openshift.io/glusterblock-registry-provisioner-dc created
      Copy to Clipboard Toggle word wrap
    18. Brick multiplexing is a feature that allows adding multiple bricks into one process. This reduces resource consumption, allowing more bricks to run than before with the same memory footprint. It is enabled by default from Container-Native Storage 3.6 onward. During an upgrade from Container-Native Storage 3.10 to Red Hat Openshift Container Storage 3.11, to turn brick multiplexing on, execute the following commands:

      1. Open a remote shell into any one of the gluster pods by executing the following command:

        # oc rsh <gluster_pod_name>
        Copy to Clipboard Toggle word wrap
      2. Verify the brick multiplex status:

        # gluster v get all all
        Copy to Clipboard Toggle word wrap
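        For example, to show only the relevant line; the value depends on whether brick multiplexing is already enabled in your cluster:

        # gluster v get all all | grep cluster.brick-multiplex
        cluster.brick-multiplex                 off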
      3. If it is disabled, then execute the following command to enable brick multiplexing:

        Note

        Ensure that all volumes are stopped, or that no bricks are running, when brick multiplexing is enabled.

        # gluster volume set all cluster.brick-multiplex on
        Copy to Clipboard Toggle word wrap

        For example:

        # oc rsh glusterfs-registry-g6vd9
        
        sh-4.2# gluster volume set all cluster.brick-multiplex on
        Brick-multiplexing is supported only for container workloads (Independent or Converged mode). Also it is advised to make sure that either all volumes are in stopped state or no bricks are running before this option is modified.Do you still want to continue? (y/n) y
        volume set: success
        Copy to Clipboard Toggle word wrap
      4. List all the volumes in the trusted storage pool. This step is only required if the volume set operation was performed:

        For example:

        # gluster volume list
        
        heketidbstorage
        vol_194049d2565d2a4ad78ef0483e04711e
        ...
        ...
        Copy to Clipboard Toggle word wrap

        Restart all the volumes. This step is only required if the volume set operation was performed along with the previous step. A loop sketch for restarting every volume follows the commands below:

        # gluster vol stop <VOLNAME>
        # gluster vol start <VOLNAME>
        Copy to Clipboard Toggle word wrap
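        If the pool contains many volumes, a loop such as the following restarts each one in turn. This is only a sketch: gluster volume stop normally asks for confirmation, so --mode=script is used to answer yes automatically, and each volume is briefly unavailable while it restarts:

        # for eachVolume in $(gluster volume list); do gluster --mode=script volume stop $eachVolume; gluster --mode=script volume start $eachVolume; done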
    19. Support for S3 compatible Object Store in Red Hat Openshift Container Storage is under technology preview. To enable S3 compatible object store, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html/operations_guide/s3_object_store.

      Note

      After upgrading the glusterfs registry pods, proceed with the steps listed in Section 6.3, Starting the Heketi Pods to bring back your heketi pod, and then proceed with the steps listed in Section 6.4, Upgrading the client on Red Hat OpenShift Container Platform nodes to upgrade the client on the Red Hat Openshift Container Platform nodes.

    20. All storage classes that use gluster block volume provisioning must exactly match one of the provisioner names in the cluster. To list the storage classes that refer to a block provisioner in a given namespace, run the following command:

      # oc get sc -o custom-columns=NAME:.metadata.name,PROV:.provisioner,RSNS:.parameters.restsecretnamespace | grep 'gluster.org/glusterblock' | grep <namespace>
      Copy to Clipboard Toggle word wrap

      Example:

      # oc get sc -o custom-columns=NAME:.metadata.name,PROV:.provisioner,RSNS:.parameters.restsecretnamespace | grep 'gluster.org/glusterblock' | grep infra-storage
        glusterfs-registry-block   gluster.org/glusterblock               infra-storage
      Copy to Clipboard Toggle word wrap

      Check each storage class provisioner name. If it does not match the block provisioner name configured for that namespace, it must be updated. If the provisioner name already matches the configured provisioner name, nothing else needs to be done. Use the list generated above and include all storage class names where the provisioner name must be updated.
      For every storage class in this list, do the following:

      # oc get sc  -o yaml <storageclass>  > storageclass-to-edit.yaml
      # oc delete sc  <storageclass>
      # sed 's,gluster.org/glusterblock$,gluster.org/glusterblock-<namespace>,' storageclass-to-edit.yaml | oc create -f -
      Copy to Clipboard Toggle word wrap

      Example:

      # oc get sc -o yaml glusterfs-registry-block > storageclass-to-edit.yaml
      # oc delete sc glusterfs-registry-block
      storageclass.storage.k8s.io "glusterfs-registry-block" deleted
      # sed 's,gluster.org/glusterblock$,gluster.org/glusterblock-infra-storage,' storageclass-to-edit.yaml | oc create -f -
      storageclass.storage.k8s.io/glusterfs-registry-block created
      Copy to Clipboard Toggle word wrap

6.3. Starting the Heketi Pods

Execute the following commands on the client machine for both the glusterfs and registry namespaces.

  1. Execute the following command to navigate to the project where the Heketi pods are running:

    # oc project <project_name>
    Copy to Clipboard Toggle word wrap

    For example for glusterfs namespace:

    # oc project glusterfs
    Copy to Clipboard Toggle word wrap

    For example for registry namespace:

    # oc project glusterfs-registry
    Copy to Clipboard Toggle word wrap
  2. Execute the following command to get the DeploymentConfig:

    # oc get dc
    Copy to Clipboard Toggle word wrap

    For example, on a glusterfs-registry project:

    # oc get dc
    NAME                                  REVISION   DESIRED   CURRENT   TRIGGERED BY
    glusterblock-storage-provisioner-dc   1          1         0         config
    heketi-storage                        4          1         1         config
    Copy to Clipboard Toggle word wrap

    For example, on a glusterfs project:

    # oc get dc
    NAME                                  REVISION   DESIRED   CURRENT   TRIGGERED BY
    glusterblock-storage-provisioner-dc   1          1         0         config
    heketi-storage                        4          1         1         config
    Copy to Clipboard Toggle word wrap
  3. Execute the following command to increase the replica count from 0 to 1. This brings back the Heketi pod:

    # oc scale dc <heketi_dc> --replicas=1
    Copy to Clipboard Toggle word wrap
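    For example, using the heketi DeploymentConfig name shown in the oc get dc output above:

    # oc scale dc heketi-storage --replicas=1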
  4. Execute the following command to verify that the heketi pod is present in both the glusterfs and glusterfs-registry namespaces:

    # oc get pods
    Copy to Clipboard Toggle word wrap

    For example for glusterfs:

    # oc get pods
    NAME                                          READY     STATUS    RESTARTS   AGE
    glusterblock-storage-provisioner-dc-1-ffgs5   1/1       Running   0          3m
    glusterfs-storage-5thpc                       1/1       Running   0          9d
    glusterfs-storage-hfttr                       1/1       Running   0          9d
    glusterfs-storage-n8rg5                       1/1       Running   0          9d
    heketi-storage-4-9fnvz                        2/2       Running   0          8d
    Copy to Clipboard Toggle word wrap

    For example for registry pods:

    # oc get pods
    NAME                                          READY     STATUS    RESTARTS   AGE
    glusterblock-storage-provisioner-dc-1-ffgs5   1/1       Running   0          3m
    glusterfs-storage-5thpc                       1/1       Running   0          9d
    glusterfs-storage-hfttr                       1/1       Running   0          9d
    glusterfs-storage-n8rg5                       1/1       Running   0          9d
    heketi-storage-4-9fnvz                        2/2       Running   0          8d
    Copy to Clipboard Toggle word wrap

6.4. Upgrading the client on Red Hat OpenShift Container Platform nodes

Execute the following commands on each of the nodes:

  1. To drain the node of its pods, execute the following command on the master node (or any node with cluster-admin access):

    # oc adm drain <node_name> --ignore-daemonsets
    Copy to Clipboard Toggle word wrap
  2. To check whether all the pods are drained, execute the following command on the master node (or any node with cluster-admin access):

    # oc get pods --all-namespaces --field-selector=spec.nodeName=<node_name>
    Copy to Clipboard Toggle word wrap
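    For example, after a successful drain only daemonset-managed pods, such as the glusterfs pods, should remain on the node. The node name, namespace, and pod name below are illustrative:

    # oc get pods --all-namespaces --field-selector=spec.nodeName=node1.example.com
    NAMESPACE       NAME                      READY     STATUS    RESTARTS   AGE
    infra-storage   glusterfs-storage-5thpc   1/1       Running   0          9d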
  3. Execute the following command to upgrade the client node to the latest glusterfs-fuse version:

    # yum update glusterfs-fuse
    Copy to Clipboard Toggle word wrap
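    To confirm the installed client version after the update, query the package; the exact version string depends on the release you install:

    # rpm -q glusterfs-fuse
    glusterfs-fuse-<version>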
  4. To enable the node for pod scheduling, execute the following command on the master node (or any node with cluster-admin access):

    # oc adm manage-node --schedulable=true <node_name>
    Copy to Clipboard Toggle word wrap
  5. Create and add the following content to the multipath.conf file:

    Note

    The multipath.conf file does not require any further change if this content was already added during a previous upgrade; in that case, you can skip this step.

    # cat >> /etc/multipath.conf <<EOF
    # LIO iSCSI
    devices {
      device {
        vendor "LIO-ORG"
        user_friendly_names "yes" # names like mpatha
        path_grouping_policy "failover" # one path per group
        hardware_handler "1 alua"
        path_selector "round-robin 0"
        failback immediate
        path_checker "tur"
        prio "alua"
        no_path_retry 120
        rr_weight "uniform"
      }
    }
    EOF
    Copy to Clipboard Toggle word wrap
  6. Execute the following commands to start the multipath daemon and load (or reload) the multipath configuration:

    # systemctl start multipathd
    Copy to Clipboard Toggle word wrap
    # systemctl reload multipathd
    Copy to Clipboard Toggle word wrap
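    Optionally, verify that the multipath daemon is active:

    # systemctl is-active multipathd
    active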