Chapter 6. Upgrading your Red Hat OpenShift Container Storage in Converged Mode


This chapter describes the procedure to upgrade your environment from Container Storage in Converged Mode 3.9 to Red Hat OpenShift Container Storage in Converged Mode 3.10.

6.1. Upgrading the Glusterfs Pods

The following sections provide the steps to upgrade your Glusterfs pods.

6.1.1. Prerequisites

Ensure the following prerequisites are met:

Note

The template files are available in the following locations:
  • gluster template - /usr/share/heketi/templates/glusterfs-template.yaml
  • heketi template - /usr/share/heketi/templates/heketi-template.yaml
  • glusterblock-provisioner template - /usr/share/heketi/templates/glusterblock-provisioner.yaml

6.1.2. Restoring original label values for /dev/log

To restore the original SELinux label, execute the following commands:
  1. Create a directory and soft links on all nodes that run gluster pods:
    # mkdir /srv/<directory_name>
    # cd /srv/<directory_name>/   # same dir as above
    # ln -sf /dev/null systemd-tmpfiles-setup-dev.service
    # ln -sf /dev/null systemd-journald.service
    # ln -sf /dev/null systemd-journald.socket
  2. Edit the daemonset that creates the glusterfs pods on the node that has the oc client:
    # oc edit daemonset <daemonset_name>
    Under the volumeMounts section, add a mapping for the volume:
    - mountPath: /usr/lib/systemd/system/systemd-journald.service
      name: systemd-journald-service
    - mountPath: /usr/lib/systemd/system/systemd-journald.socket
      name: systemd-journald-socket
    - mountPath: /usr/lib/systemd/system/systemd-tmpfiles-setup-dev.service
      name: systemd-tmpfiles-setup-dev-service
    Under the volumes section, add a new host path for each service listed:

    Note

    The path mentioned here must be the same as the directory created in Step 1.
    - hostPath:
       path: /srv/<directory_name>/systemd-journald.socket
       type: ""
      name: systemd-journald-socket
    - hostPath:
       path: /srv/<directory_name>/systemd-journald.service
       type: ""
      name: systemd-journald-service
    - hostPath:
       path: /srv/<directory_name>/systemd-tmpfiles-setup-dev.service
       type: ""
      name: systemd-tmpfiles-setup-dev-service
  3. Run the following command on all nodes that run gluster pods to reset the label:
    # restorecon /dev/log
  4. Execute the following command to check the status of self heal for all volumes:
    # oc rsh <gluster_pod_name>
    # for each_volume in `gluster volume list`; do gluster volume heal $each_volume info ; done  | grep  "Number of entries: [^0]$"
    Wait for self-heal to complete.
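    If you prefer to poll until healing completes, a loop such as the following can be run from the node that has the oc client (a convenience sketch built from the command above; adjust the sleep interval as needed):
    # while oc rsh <gluster_pod_name> bash -c 'for each_volume in `gluster volume list`; do gluster volume heal $each_volume info; done' | grep "Number of entries: [^0]$"
      do sleep 60
      done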
  5. Execute the following command and ensure that the bricks are not more than 90% full:
    # df -kh | grep -v ^Filesystem | awk '{if(int($5)>90) print $0}'
  6. Execute the following command on any one of the gluster pods to set the maximum number of bricks (250) that can run on a single instance of glusterfsd process:
    # gluster volume set all cluster.max-bricks-per-process 250
    1. Execute the following command on any one of the gluster pods to ensure that the option is set correctly:
      # gluster volume get all cluster.max-bricks-per-process
      For example:
      # gluster volume get all cluster.max-bricks-per-process
      cluster.max-bricks-per-process 250
  7. Execute the following command on the node which has oc client to delete the gluster pod:
    # oc delete pod <gluster_pod_name>
  8. To verify if the pod is ready, execute the following command:
    # oc get pods -l glusterfs=storage-pod
  9. Log in to the node hosting the pod and check the SELinux label of /dev/log:
    # ls -lZ /dev/log
    The output should show the devlog_t label.
    For example:
    #  ls -lZ /dev/log
    srw-rw-rw-. root root system_u:object_r:devlog_t:s0    /dev/log
    Exit the node.
  10. In the gluster pod, check if the label value is devlog_t:
    # oc rsh <gluster_pod_name>
    # ls -lZ /dev/log
    For example:
    #  ls -lZ /dev/log
    srw-rw-rw-. root root system_u:object_r:devlog_t:s0    /dev/log
  11. Perform steps 4 to 9 for other pods.
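    To spot-check the label on every gluster pod in a single pass, a loop such as the following can be run from the node that has the oc client (a convenience sketch; it assumes the pods carry the glusterfs=storage-pod label used earlier in this section):
    # for p in $(oc get pods -o name -l glusterfs=storage-pod); do echo "$p"; oc rsh "$p" ls -lZ /dev/log; done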

6.1.3. Upgrading if the existing version was deployed by using cns-deploy

6.1.3.1. Upgrading cns-deploy and Heketi Server

The following commands must be executed on the client machine.
  1. Execute the following command to update the heketi client and cns-deploy packages:
    # yum update cns-deploy -y
      # yum update heketi-client -y
  2. Back up the Heketi database file:
    # oc rsh <heketi_pod_name>
      # cp -a /var/lib/heketi/heketi.db /var/lib/heketi/heketi.db.`date +%s`.`heketi --version | awk '{print $2}'`
      # exit
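    Optionally, copy the backup off the pod so that it survives a pod restart (a sketch using the standard oc cp command; replace the placeholders with your project name, heketi pod name, and the backup file created above):
    # oc cp <project_name>/<heketi_pod_name>:/var/lib/heketi/<backup_file_name> ./<backup_file_name>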
  3. Execute the following command to delete the heketi template.
    # oc delete templates heketi
  4. Execute the following command to get the current HEKETI_ADMIN_KEY.
    The OCS administrator can choose any phrase for the user key, as long as it is not already used by their infrastructure. The key is not used by any of the resources installed by OCS by default.
    # oc get secret heketi-storage-admin-secret -o jsonpath='{.data.key}'|base64 -d;echo
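    If you want to reuse the value when editing the heketi template in a later step, you can capture it in a shell variable (a convenience sketch using the same command):
    # HEKETI_ADMIN_KEY=$(oc get secret heketi-storage-admin-secret -o jsonpath='{.data.key}' | base64 -d)
    # echo $HEKETI_ADMIN_KEY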
  5. Execute the following command to install the heketi template.
    # oc create -f /usr/share/heketi/templates/heketi-template.yaml
      template "heketi" created
  6. Execute the following command to grant the heketi Service Account the necessary privileges.
    # oc policy add-role-to-user edit system:serviceaccount:<project_name>:heketi-service-account
      # oc adm policy add-scc-to-user privileged -z heketi-service-account
    For example,
    # oc policy add-role-to-user edit system:serviceaccount:storage-project:heketi-service-account
      # oc adm policy add-scc-to-user privileged -z heketi-service-account
  7. Execute the following command to generate a new heketi configuration file.
    # sed -e "s/\${HEKETI_EXECUTOR}/kubernetes/" -e "s#\${HEKETI_FSTAB}#/var/lib/heketi/fstab#" -e "s/\${SSH_PORT}/22/" -e "s/\${SSH_USER}/root/" -e "s/\${SSH_SUDO}/false/" -e "s/\${BLOCK_HOST_CREATE}/true/" -e "s/\${BLOCK_HOST_SIZE}/500/" "/usr/share/heketi/templates/heketi.json.template" > heketi.json
    • The BLOCK_HOST_SIZE parameter controls the size (in GB) of the automatically created Red Hat Gluster Storage volumes hosting the gluster-block volumes (For more information, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html/operations_guide/block_storage). This default configuration will dynamically create block-hosting volumes of 500GB in size as more space is required.
    • Alternatively, copy the file /usr/share/heketi/templates/heketi.json.template to heketi.json in the current directory and edit the new file directly, replacing each "${VARIABLE}" string with the required parameter.

      Note

      JSON formatting is strictly required (e.g. no trailing spaces, booleans in all lowercase).
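      Before creating the secret in the next step, you can confirm that the generated file is valid JSON (a quick sketch that uses Python's built-in json.tool; any JSON validator works):
      # python -m json.tool heketi.json > /dev/null && echo "heketi.json is valid JSON"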
  8. Execute the following command to create a secret to hold the configuration file.

    Note

    If a secret named heketi-config-secret already exists, delete it and then run the following command.
    # oc create secret generic heketi-config-secret --from-file=heketi.json
  9. Execute the following command to delete the deployment configuration, service, and route for heketi:

    Note

    The names of these resources can be obtained from the output of the following command:
    # oc get all | grep heketi
    # oc delete deploymentconfig,service,route heketi
  10. Execute the following command to edit the heketi template. Edit the HEKETI_USER_KEY and HEKETI_ADMIN_KEY parameters.
    # oc edit template heketi
    parameters:
    - description: Set secret for those creating volumes as type _user_
      displayName: Heketi User Secret
      name: HEKETI_USER_KEY
      value: <heketiuserkey>
    - description: Set secret for administration of the Heketi service as user _admin_
      displayName: Heketi Administrator Secret
      name: HEKETI_ADMIN_KEY
      value: <adminkey>
    - description: Set the executor type, kubernetes or ssh
      displayName: heketi executor type
      name: HEKETI_EXECUTOR
      value: kubernetes
    - description: Set the hostname for the route URL
      displayName: heketi route name
      name: HEKETI_ROUTE
      value: heketi-storage
    - displayName: heketi container image name
      name: IMAGE_NAME
      required: true
      value: rhgs3/rhgs-volmanager-rhel7
    - displayName: heketi container image version
      name: IMAGE_VERSION
      required: true
      value: v3.10
    - description: A unique name to identify this heketi service, useful for running
        multiple heketi instances
      displayName: GlusterFS cluster name
      name: CLUSTER_NAME
      value: storage
  11. Execute the following command to deploy the Heketi service, route, and deployment configuration which will be used to create persistent volumes for OpenShift:
    # oc process heketi | oc create -f -
    
      service "heketi" created
      route "heketi" created
      deploymentconfig "heketi" created
  12. Execute the following command to verify that the containers are running:
    # oc get pods
    For example:
    # oc get pods
      NAME                             READY     STATUS    RESTARTS   AGE
      glusterfs-0h68l                  1/1       Running   0          3d
      glusterfs-0vcf3                  1/1       Running   0          3d
      glusterfs-gr9gh                  1/1       Running   0          3d
      heketi-1-zpw4d                   1/1       Running   0          3h
      storage-project-router-2-db2wl   1/1       Running   0          4d
    

6.1.3.2. Upgrading the Red Hat Gluster Storage Pods

The following commands must be executed on the client machine.
Following are the steps for updating a DaemonSet for glusterfs:
  1. Execute the following steps to stop the Heketi pod to prevent it from accepting any new request for volume creation or volume deletion:
    1. Execute the following command to access your project:
      # oc project <project_name>
      For example:
      # oc project storage-project
    2. Execute the following command to get the DeploymentConfig:
      # oc get dc
    3. Execute the following command to set heketi server to accept requests only from the local-client:
      # heketi-cli server mode set local-client
    4. Wait for the ongoing operations to complete and execute the following command to monitor if there are any ongoing operations:
      # heketi-cli server operations info
    5. Execute the following command to reduce the replica count from 1 to 0. This brings down the Heketi pod:
      # oc scale dc <heketi_dc> --replicas=0
    6. Execute the following command to verify that the heketi pod is no longer present:
      # oc get pods
  2. Execute the following command to find the DaemonSet name for gluster:
    # oc get ds
  3. Execute the following command to delete the DaemonSet:
    # oc delete ds <ds-name> --cascade=false
    Using the --cascade=false option while deleting the old DaemonSet deletes only the DaemonSet and not the gluster pods. After deleting the old DaemonSet, you must load the new one. When you manually delete the old pods, the new pods that are created will have the configuration of the new DaemonSet.
    For example,
    # oc delete ds glusterfs  --cascade=false
      daemonset "glusterfs" deleted
  4. Execute the following command to verify that all the old pods are up:
    # oc get pods
    For example,
    # oc get pods
      NAME                             READY     STATUS    RESTARTS   AGE
      glusterfs-0h68l                  1/1       Running   0          3d
      glusterfs-0vcf3                  1/1       Running   0          3d
      glusterfs-gr9gh                  1/1       Running   0          3d
      storage-project-router-2-db2wl   1/1       Running   0          4d
    
  5. Execute the following command to delete the old glusterfs template.
    # oc delete templates glusterfs
    For example,
    # oc delete templates glusterfs
      template "glusterfs" deleted
  6. Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
    1. Check if the nodes are labelled using the following command:
      # oc get nodes --show-labels
      If the Red Hat Gluster Storage nodes do not have the storagenode=glusterfs label, then label the nodes as shown in step ii.
    2. Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
      # oc label nodes <node name> storagenode=glusterfs
  7. Execute the following command to register the new gluster template:
    # oc create -f /usr/share/heketi/templates/glusterfs-template.yaml
    
    For example,
    # oc create -f /usr/share/heketi/templates/glusterfs-template.yaml
      template "glusterfs" created
  8. Execute the following command to create the gluster DaemonSet:
    # oc process glusterfs | oc create -f -
    For example,
    # oc process glusterfs | oc create -f -
      daemonset "glusterfs" created
  9. Execute the following command to identify the old gluster pods that need to be deleted:
    # oc get pods
    For example,
    # oc get pods
      NAME                             READY     STATUS    RESTARTS   AGE
      glusterfs-0h68l                  1/1       Running   0          3d
      glusterfs-0vcf3                  1/1       Running   0          3d
      glusterfs-gr9gh                  1/1       Running   0          3d
      storage-project-router-2-db2wl   1/1       Running   0          4d
    
  10. Execute the following command and ensure that the bricks are not more than 90% full:
    # df -kh | grep -v ^Filesystem | awk '{if(int($5)>90) print $0}'
  11. Execute the following command to delete the old gluster pods. Gluster pods follow a rolling upgrade, so you must ensure that the new pod is running before deleting the next old gluster pod. Only the OnDelete DaemonSet update strategy is supported: after you update a DaemonSet template, new DaemonSet pods are created only when you manually delete the old DaemonSet pods (a quick way to confirm the update strategy is shown at the end of this step).
    1. To delete the old gluster pods, execute the following command:
      # oc delete pod <gluster_pod>
      For example,
      # oc delete pod glusterfs-0vcf3
        pod "glusterfs-0vcf3" deleted

      Note

      Before deleting the next pod, a self-heal check must be performed:
      1. Run the following command to access shell on gluster pod:
        # oc rsh <gluster_pod_name>
      2. Run the following command to check the self-heal status of all the volumes:
        for each_volume in `gluster volume list`;
          do gluster volume heal $each_volume info ;
          done | grep "Number of entries: [^0]$"
    2. The delete pod command terminates the old pod and creates a new pod. Run # oc get pods -w and check that the Age of the new pod is recent and that its READY status is 1/1. The following example output shows the status progression from termination to creation of the pod.
      # oc get pods -w
        NAME                             READY     STATUS        RESTARTS   AGE
        glusterfs-0vcf3                  1/1       Terminating   0          3d
        …
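    As noted above, replacement pods are only created when you delete the old ones manually. To confirm that the DaemonSet uses the OnDelete update strategy, you can inspect it as follows (a sketch; substitute your DaemonSet name). The command should print OnDelete:
    # oc get ds <ds-name> -o jsonpath='{.spec.updateStrategy.type}'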
  12. Execute the following command to verify that the pods are running:
    # oc get pods
    
    For example,
    # oc get pods
      NAME                             READY     STATUS    RESTARTS   AGE
      glusterfs-j241c                  1/1       Running   0          4m
      glusterfs-pqfs6                  1/1       Running   0          7m
      glusterfs-wrn6n                  1/1       Running   0          12m
      storage-project-router-2-db2wl   1/1       Running   0          4d
    
  13. Execute the following command to verify if you have upgraded the pod to the latest version:
    # oc rsh <gluster_pod_name> glusterd --version
    For example:
     # oc rsh glusterfs-registry-4cpcc glusterd --version
      glusterfs 3.12.2
  14. Check the Red Hat Gluster Storage op-version by executing the following command on one of the gluster pods.
    # gluster vol get all cluster.op-version
    • Set the cluster.op-version to 31302 on any one of the pods:

      Note

      Ensure all the gluster pods are updated before changing the cluster.op-version.
      # gluster --timeout=3600 volume set all cluster.op-version 31302
  15. Execute the following steps to enable server.tcp-user-timeout on all volumes.

    Note

    The server.tcp-user-timeout option specifies the maximum amount of time (in seconds) that data transmitted from the application can remain unacknowledged by the brick.
    It is used to detect forced disconnections and dead connections (for example, if a node dies unexpectedly or a firewall is activated) early, making it possible for applications to reduce the overall failover time.
    1. List the glusterfs pod using the following command:
      # oc get pods
      For example:
      # oc get pods
        NAME                             READY     STATUS    RESTARTS   AGE
        glusterfs-0h68l                  1/1       Running   0          3d
        glusterfs-0vcf3                  1/1       Running   0          3d
        glusterfs-gr9gh                  1/1       Running   0          3d
        storage-project-router-2-db2wl   1/1       Running   0          4d
    2. Remote shell into one of the glusterfs pods. For example:
      # oc rsh glusterfs-0vcf3
    3. Execute the following command:
      # for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
      For example:
      # for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
        volume1
        volume set: success
        volume2
        volume set: success
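      To verify the option after setting it, a loop such as the following can be run in the same pod (a sketch based on the standard gluster volume get command):
      # for eachVolume in `gluster volume list`; do gluster volume get $eachVolume server.tcp-user-timeout; done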
  16. If a gluster-block-provisioner pod already exists, delete it by executing the following commands:
    # oc delete dc <gluster-block-dc>
    
    For example:
    # oc delete dc glusterblock-storage-provisioner-dc
  17. Execute the following commands to deploy the gluster-block provisioner:
    # sed -e 's/\\\${NAMESPACE}/<NAMESPACE>/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
    # oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:<NAMESPACE>:glusterblock-provisioner
    For example:
    # sed -e 's/\\\${NAMESPACE}/storage-project/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
    # oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:storage-project:glusterblock-provisioner
  18. Delete the following resources from the old pod:
    # oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
      # oc delete serviceaccounts glusterblock-storage-provisioner
  19. After editing the template, execute the following command to create the deployment configuration:
    # oc process <gluster_block_provisioner_template> | oc create -f -
  20. Brick multiplexing is a feature that allows adding multiple bricks into one process. This reduces resource consumption and allows running more bricks than before with the same memory consumption. It is enabled by default from Container-Native Storage 3.6. During an upgrade from Container-Native Storage 3.9 to Red Hat OpenShift Container Storage 3.10, to turn brick multiplexing on, execute the following commands:
    1. To exec into the Gluster pod, execute the following command and rsh into any of the gluster pods:
      # oc rsh <gluster_pod_name>
    2. Verify if brick multiplexing is enabled. If it is disabled, then execute the following command to enable brick multiplexing:
      # gluster volume set all cluster.brick-multiplex on

      Note

      You can check the brick multiplex status by executing the following command:
      # gluster v get all all
      For example:
      # oc rsh glusterfs-770ql
      
        sh-4.2# gluster volume set all cluster.brick-multiplex on
        Brick-multiplexing is supported only for container workloads (Independent/Converged). Also it is advised to make sure that either all volumes are in stopped state or no bricks are running before this option is modified.Do you still want to continue? (y/n) y
        volume set: success
    3. List all the volumes in the trusted storage pool. This step is only required if the volume set operation is performed:
      For example:
      # gluster volume list
      
        heketidbstorage
        vol_194049d2565d2a4ad78ef0483e04711e
        ...
        ...
      
      Restart all the volumes. This step is only required if the volume set operation is performed along with the previous step:
      # gluster vol stop <VOLNAME>
        # gluster vol start <VOLNAME>
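      If many volumes have to be restarted, a loop such as the following can save time (a sketch; gluster's --mode=script option suppresses the interactive confirmation prompts, and stopping a volume briefly interrupts its clients):
      # for eachVolume in `gluster volume list`; do gluster --mode=script volume stop $eachVolume; gluster volume start $eachVolume; done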
  21. Support for S3 compatible Object Store in Red Hat OpenShift Container Storage is under technology preview. To enable the S3 compatible object store, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html/operations_guide/s3_object_store.

6.1.4. Upgrading if the existing version was deployed by using Ansible

6.1.4.1. Upgrading Heketi Server

The following commands must be executed on the client machine.
  1. Execute the following command to update the heketi client packages:
    # yum update heketi-client -y
  2. Back up the Heketi database file:
    # oc rsh <heketi_pod_name>
      # cp -a /var/lib/heketi/heketi.db /var/lib/heketi/heketi.db.`date +%s`.`heketi --version | awk '{print $2}'`
      # exit
  3. Execute the following command to get the current HEKETI_ADMIN_KEY.
    The OCS administrator can choose any phrase for the user key, as long as it is not already used by their infrastructure. The key is not used by any of the resources installed by OCS by default.
    # oc get secret heketi-storage-admin-secret -o jsonpath='{.data.key}'|base64 -d;echo
  4. Execute the following step to edit the template:
        # oc get templates
        NAME                        DESCRIPTION                          PARAMETERS     OBJECTS
        glusterblock-provisioner    glusterblock provisioner template    3 (2 blank)    4
        glusterfs                   GlusterFS DaemonSet template         5 (1 blank)    1
        heketi                      Heketi service deployment template   7 (3 blank)    3
    If the existing template has IMAGE_NAME and IMAGE_VERSION as two parameters, then edit the template to change the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, HEKETI_ROUTE, IMAGE_NAME, IMAGE_VERSION, and CLUSTER_NAME as shown in the example below.
    # oc edit template heketi
      parameters:
      - description: Set secret for those creating volumes as type _user_
        displayName: Heketi User Secret
        name: HEKETI_USER_KEY
        value: <heketiuserkey>
      - description: Set secret for administration of the Heketi service as user _admin_
        displayName: Heketi Administrator Secret
        name: HEKETI_ADMIN_KEY
        value: <adminkey>
      - description: Set the executor type, kubernetes or ssh
        displayName: heketi executor type
        name: HEKETI_EXECUTOR
        value: kubernetes
      - description: Set the hostname for the route URL
        displayName: heketi route name
        name: HEKETI_ROUTE
        value: heketi-storage
      - displayName: heketi container image name
        name: IMAGE_NAME
        required: true
        value: rhgs3/rhgs-volmanager-rhel7
      - displayName: heketi container image version
        name: IMAGE_VERSION
        required: true
        value: v3.10
      - description: A unique name to identify this heketi service, useful for running
          multiple heketi instances
        displayName: GlusterFS cluster name
        name: CLUSTER_NAME
        value: storage
    If the template has only IMAGE_NAME, then edit the template to change the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, HEKETI_ROUTE, IMAGE_NAME and CLUSTER_NAME as shown in the example below.
    # oc edit template heketi
    parameters:
    - description: Set secret for those creating volumes as type _user_
      displayName: Heketi User Secret
      name: HEKETI_USER_KEY
      value: <heketiuserkey>
    - description: Set secret for administration of the Heketi service as user _admin_
      displayName: Heketi Administrator Secret
      name: HEKETI_ADMIN_KEY
      value: <adminkey>
    - description: Set the executor type, kubernetes or ssh
      displayName: heketi executor type
      name: HEKETI_EXECUTOR
      value: kubernetes
    - description: Set the hostname for the route URL
      displayName: heketi route name
      name: HEKETI_ROUTE
      value: heketi-storage
    - displayName: heketi container image name
      name: IMAGE_NAME
      required: true
      value: rhgs3/rhgs-volmanager-rhel7:v3.10
    - description: A unique name to identify this heketi service, useful for running
       multiple heketi instances
      displayName: GlusterFS cluster name
      name: CLUSTER_NAME
      value: storage
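    If you are unsure which of the two parameter layouts your stored template uses, you can list its parameters before editing it (a sketch using the standard oc process --parameters option); the output shows IMAGE_NAME and, if present, IMAGE_VERSION:
    # oc process --parameters heketi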
  5. Execute the following command to delete the deployment configuration, service, and route for heketi:

    Note

    The names of these resources can be obtained from the output of the following command:
    # oc get all | grep heketi
    # oc delete deploymentconfig,service,route heketi-storage
  6. Execute the following command to deploy the Heketi service, route, and deployment configuration which will be used to create persistent volumes for OpenShift:
    # oc process heketi | oc create -f -
    
      service "heketi" created
      route "heketi" created
      deploymentconfig "heketi" created
  7. Execute the following command to verify that the containers are running:
    # oc get pods
    For example:
    # oc get pods
      NAME                             READY     STATUS    RESTARTS   AGE
      glusterfs-0h68l                  1/1       Running   0          3d
      glusterfs-0vcf3                  1/1       Running   0          3d
      glusterfs-gr9gh                  1/1       Running   0          3d
      heketi-1-zpw4d                   1/1       Running   0          3h
      storage-project-router-2-db2wl   1/1       Running   0          4d
    

6.1.4.2. Upgrading the Red Hat Gluster Storage Pods

The following commands must be executed on the client machine.
Following are the steps for updating a DaemonSet for glusterfs:
  1. Execute the following steps to stop the Heketi pod to prevent it from accepting any new request for volume creation or volume deletion:
    1. Execute the following command to access your project:
      # oc project <project_name>
      For example:
      # oc project storage-project
    2. Execute the following command to get the DeploymentConfig:
      # oc get dc
    3. Execute the following command to set heketi server to accept requests only from the local-client:
      # heketi-cli server mode set local-client
    4. Wait for the ongoing operations to complete and execute the following command to monitor if there are any ongoing operations:
      # heketi-cli server operations info
    5. Execute the following command to reduce the replica count from 1 to 0. This brings down the Heketi pod:
      # oc scale dc <heketi_dc> --replicas=0
    6. Execute the following command to verify that the heketi pod is no longer present:
      # oc get pods
  2. Execute the following command to find the DaemonSet name for gluster:
    # oc get ds
  3. Execute the following command to delete the DaemonSet:
    # oc delete ds <ds-name> --cascade=false
    Using the --cascade=false option while deleting the old DaemonSet deletes only the DaemonSet and not the gluster pods. After deleting the old DaemonSet, you must load the new one. When you manually delete the old pods, the new pods that are created will have the configuration of the new DaemonSet.
    For example,
    # oc delete ds glusterfs-storage  --cascade=false
        daemonset "glusterfs-storage" deleted
  4. Execute the following command to verify that all the old pods are up:
    # oc get pods
    For example,
    # oc get pods
        NAME                             READY     STATUS    RESTARTS   AGE
        glusterfs-0h68l                  1/1       Running   0          3d
        glusterfs-0vcf3                  1/1       Running   0          3d
        glusterfs-gr9gh                  1/1       Running   0          3d
        storage-project-router-2-db2wl   1/1       Running   0          4d
    
  5. Execute the following command to edit the old glusterfs template.
        # oc get templates
        NAME                        DESCRIPTION                          PARAMETERS     OBJECTS
        glusterblock-provisioner    glusterblock provisioner template    3 (2 blank)    4
        glusterfs                   GlusterFS DaemonSet template         5 (1 blank)    1
        heketi                      Heketi service deployment template   7 (3 blank)    3
    If the template has IMAGE_NAME and IMAGE_VERSION as two separate parameters, then update the glusterfs template as following. For example:
    # oc edit template glusterfs
    - displayName: GlusterFS container image name
      name: IMAGE_NAME
      required: true
      value: rhgs3/rhgs-server-rhel7
    - displayName: GlusterFS container image version
      name: IMAGE_VERSION
      required: true
      value: v3.10
    - description: A unique name to identify which heketi service manages this cluster, useful for running
        multiple heketi instances
      displayName: GlusterFS cluster name
      name: CLUSTER_NAME
      value: storage
    If the template has only IMAGE_NAME as a parameter, then update the glusterfs template as following. For example:
    # oc edit template glusterfs
    - displayName: GlusterFS container image name
      name: IMAGE_NAME
      required: true
      value: rhgs3/rhgs-server-rhel7:v3.10
    - description: A unique name to identify which heketi service manages this cluster, useful for running
         multiple heketi instances
      displayName: GlusterFS cluster name
      name: CLUSTER_NAME
      value: storage

    Note

    Ensure that the CLUSTER_NAME variable is set to the correct value.
  6. Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
    1. Check if the nodes are labelled using the following command:
      # oc get nodes --show-labels
      If the Red Hat Gluster Storage nodes do not have the glusterfs=storage-host label, then label the nodes as shown in step ii.
    2. Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
      # oc label nodes <node name> glusterfs=storage-host
  7. Execute the following command to create the gluster DaemonSet:
    # oc process glusterfs | oc create -f -
    For example,
    # oc process glusterfs | oc create -f -
        daemonset "glusterfs" created
  8. Execute the following command to identify the old gluster pods that need to be deleted:
    # oc get pods
    For example,
    # oc get pods
        NAME                             READY     STATUS    RESTARTS   AGE
        glusterfs-0h68l                  1/1       Running   0          3d
        glusterfs-0vcf3                  1/1       Running   0          3d
        glusterfs-gr9gh                  1/1       Running   0          3d
        storage-project-router-2-db2wl   1/1       Running   0          4d
    
  9. Execute the following command and ensure that the bricks are not more than 90% full:
    # df -kh | grep -v ^Filesystem | awk '{if(int($5)>90) print $0}'
  10. Execute the following command to delete the old gluster pods. Gluster pods follow a rolling upgrade, so you must ensure that the new pod is running before deleting the next old gluster pod. Only the OnDelete DaemonSet update strategy is supported: after you update a DaemonSet template, new DaemonSet pods are created only when you manually delete the old DaemonSet pods.
    1. To delete the old gluster pods, execute the following command:
      # oc delete pod <gluster_pod>
      For example,
      # oc delete pod glusterfs-0vcf3
          pod "glusterfs-0vcf3" deleted

      Note

      Before deleting the next pod, a self-heal check must be performed:
      1. Run the following command to access shell on gluster pod:
        # oc rsh <gluster_pod_name>
      2. Run the following command to check the self-heal status of all the volumes:
        for each_volume in `gluster volume list`;
            do gluster volume heal $each_volume info ;
            done | grep "Number of entries: [^0]$"
    2. The delete pod command terminates the old pod and creates a new pod. Run # oc get pods -w and check that the Age of the new pod is recent and that its READY status is 1/1. The following example output shows the status progression from termination to creation of the pod.
      # oc get pods -w
          NAME                             READY     STATUS        RESTARTS   AGE
          glusterfs-0vcf3                  1/1       Terminating   0          3d
          …
  11. Execute the following command to verify that the pods are running:
    # oc get pods
    
    For example,
    # oc get pods
        NAME                             READY     STATUS    RESTARTS   AGE
        glusterfs-j241c                  1/1       Running   0          4m
        glusterfs-pqfs6                  1/1       Running   0          7m
        glusterfs-wrn6n                  1/1       Running   0          12m
        storage-project-router-2-db2wl   1/1       Running   0          4d
    
  12. Execute the following command to verify if you have upgraded the pod to the latest version:
    # oc rsh <gluster_pod_name> glusterd --version
    For example:
     # oc rsh glusterfs-registry-4cpcc glusterd --version
        glusterfs 3.12.2
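    To check the version on every gluster pod at once, a loop such as the following can be run from the node that has the oc client (a sketch; it assumes the pods carry the glusterfs=storage-pod label used earlier in this chapter):
    # for p in $(oc get pods -o name -l glusterfs=storage-pod); do echo "$p"; oc rsh "$p" glusterd --version; done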
  13. Check the Red Hat Gluster Storage op-version by executing the following command on one of the gluster pods.
    # gluster vol get all cluster.op-version
    • Set the cluster.op-version to 31302 on any one of the pods:

      Note

      Ensure all the gluster pods are updated before changing the cluster.op-version.
      # gluster --timeout=3600 volume set all cluster.op-version 31302
  14. Execute the following steps to enable server.tcp-user-timeout on all volumes.

    Note

    The server.tcp-user-timeout option specifies the maximum amount of time (in seconds) that data transmitted from the application can remain unacknowledged by the brick.
    It is used to detect forced disconnections and dead connections (for example, if a node dies unexpectedly or a firewall is activated) early, making it possible for applications to reduce the overall failover time.
    1. List the glusterfs pod using the following command:
      # oc get pods
      For example:
      # oc get pods
          NAME                             READY     STATUS    RESTARTS   AGE
          glusterfs-0h68l                  1/1       Running   0          3d
          glusterfs-0vcf3                  1/1       Running   0          3d
          glusterfs-gr9gh                  1/1       Running   0          3d
          storage-project-router-2-db2wl   1/1       Running   0          4d
    2. Remote shell into one of the glusterfs pods. For example:
      # oc rsh glusterfs-0vcf3
    3. Execute the following command:
      # for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
      For example:
      # for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
          volume1
          volume set: success
          volume2
          volume set: success
  15. If a gluster-block-provisioner pod already exists, delete it by executing the following commands:
    # oc delete dc <gluster-block-dc>
    
    For example:
    # oc delete dc glusterblock-storage-provisioner-dc
  16. Depending on the OCP version, edit the glusterblock-provisioner template to change the IMAGE_NAME, IMAGE_VERSION and NAMESPACE.
        # oc get templates
        NAME                        DESCRIPTION                          PARAMETERS     OBJECTS
        glusterblock-provisioner    glusterblock provisioner template    3 (2 blank)    4
        glusterfs                   GlusterFS DaemonSet template         5 (1 blank)    1
        heketi                      Heketi service deployment template   7 (3 blank)    3
    If the template has IMAGE_NAME and IMAGE_VERSION as two separate parameters, then update the glusterblock-provisioner template as following. For example:
    # oc edit template glusterblock-provisioner

    - displayName: glusterblock provisioner container image name
      name: IMAGE_NAME
      required: true
      value: rhgs3/rhgs-gluster-block-prov-rhel7
    - displayName: glusterblock provisioner container image version
      name: IMAGE_VERSION
      required: true
      value: v3.10
    - description: The namespace in which these resources are being created
      displayName: glusterblock provisioner namespace
      name: NAMESPACE
      required: true
      value: glusterfs
    - description: A unique name to identify which heketi service manages this cluster,
        useful for running multiple heketi instances
      displayName: GlusterFS cluster name
      name: CLUSTER_NAME
      value: storage
    If the template has only IMAGE_NAME as a parameter, then update the glusterblock-provisioner template as following. For example:
    # oc edit template glusterblock-provisioner

    - displayName: glusterblock provisioner container image name
      name: IMAGE_NAME
      required: true
      value: rhgs3/rhgs-gluster-block-prov-rhel7:v3.10
    - description: The namespace in which these resources are being created
      displayName: glusterblock provisioner namespace
      name: NAMESPACE
      required: true
      value: glusterfs
    - description: A unique name to identify which heketi service manages this cluster,
        useful for running multiple heketi instances
      displayName: GlusterFS cluster name
      name: CLUSTER_NAME
      value: storage
  17. Delete the following resources from the old pod:
    # oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
        # oc delete serviceaccounts glusterblock-storage-provisioner
  18. After editing the template, execute the following command to create the deployment configuration:
    # oc process <gluster_block_provisioner_template> | oc create -f -
  19. Brick multiplexing is a feature that allows adding multiple bricks into one process. This reduces resource consumption and allows running more bricks than before with the same memory consumption. It is enabled by default from Container-Native Storage 3.6. During an upgrade from Container-Native Storage 3.9 to Red Hat OpenShift Container Storage 3.10, to turn brick multiplexing on, execute the following commands:
    1. To exec into the Gluster pod, execute the following command and rsh into any of the gluster pods:
      # oc rsh <gluster_pod_name>
    2. Verify if brick multiplexing is enabled. If it is disabled, then execute the following command to enable brick multiplexing:
      # gluster volume set all cluster.brick-multiplex on

      Note

      You can check the brick multiplex status by executing the following command:
      # gluster v get all all
      For example:
      # oc rsh glusterfs-770ql
      
          sh-4.2# gluster volume set all cluster.brick-multiplex on
          Brick-multiplexing is supported only for container workloads (Independent/Converged). Also it is advised to make sure that either all volumes are in stopped state or no bricks are running before this option is modified.Do you still want to continue? (y/n) y
          volume set: success
    3. List all the volumes in the trusted storage pool. This step is only required if the volume set operation is performed:
      For example:
      # gluster volume list
      
          heketidbstorage
          vol_194049d2565d2a4ad78ef0483e04711e
          ...
          ...
      
      Restart all the volumes. This step is only required if the volume set operation is performed along with the previous step:
      # gluster vol stop <VOLNAME>
          # gluster vol start <VOLNAME>
  20. Support for S3 compatible Object Store in Red Hat OpenShift Container Storage is under technology preview. To enable the S3 compatible object store, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html/operations_guide/s3_object_store.
