
13.3. Upgrading the Red Hat Gluster Storage Pods

The following are the steps for updating the DaemonSet for glusterfs:
  1. Execute the following command to find the DaemonSet name for gluster
    # oc get ds
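    For example, the output may resemble the following; the exact columns vary with the OpenShift Container Platform version, and the values shown here are illustrative:
    # oc get ds
    NAME        DESIRED   CURRENT   READY     NODE-SELECTOR           AGE
    glusterfs   3         3         3         storagenode=glusterfs   3d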
  2. Execute the following command to delete the DaemonSet:
    # oc delete ds <ds-name> --cascade=false
    Using the --cascade=false option while deleting the old DaemonSet deletes only the DaemonSet and leaves the gluster pods running. After deleting the old DaemonSet, you must load the new one. When you then manually delete the old pods, the new pods that are created have the configuration of the new DaemonSet.
    For example,
    # oc delete ds glusterfs  --cascade=false
    daemonset "glusterfs" deleted
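    To confirm that only the DaemonSet object was removed while the gluster pods keep running, you can optionally list the DaemonSets again; the message below is the usual output when none remain, though its exact wording can differ between versions:
    # oc get ds
    No resources found.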
  3. Execute the following command to verify that all the old pods are up:
    # oc get pods
    For example,
    # oc get pods
    NAME                             READY     STATUS    RESTARTS   AGE
    glusterfs-0h68l                  1/1       Running   0          3d
    glusterfs-0vcf3                  1/1       Running   0          3d
    glusterfs-gr9gh                  1/1       Running   0          3d
    heketi-1-zpw4d                   1/1       Running   0          3h
    storage-project-router-2-db2wl   1/1       Running   0          4d
    
  4. Execute the following command to delete the old glusterfs template:
    # oc delete templates glusterfs
    For example,
    # oc delete templates glusterfs
    template "glusterfs" deleted
  5. Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
    1. Check whether the nodes are labeled by using the following command:
      # oc get nodes --show-labels
      If the Red Hat Gluster Storage nodes do not have the storagenode=glusterfs label, then proceed with the next step.
    2. Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
      # oc label nodes <node name> storagenode=glusterfs
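      For example, assuming a hypothetical node name of node1.example.com:
      # oc label nodes node1.example.com storagenode=glusterfs
      node "node1.example.com" labeled
      Repeat the command for every node that runs a Red Hat Gluster Storage pod.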
  6. Execute the following command to register the new gluster template:
    # oc create -f /usr/share/heketi/templates/glusterfs-template.yaml
    
    For example,
    # oc create -f /usr/share/heketi/templates/glusterfs-template.yaml 
    template "glusterfs" created
    
  7. Execute the following command to start the gluster DaemonSet:
    # oc process glusterfs | oc create -f -
    For example,
    # oc process glusterfs | oc create -f -
    daemonset "glusterfs" created
  8. Execute the following command to identify the old gluster pods that need to be deleted:
    # oc get pods
    For example,
    # oc get pods
    NAME                             READY     STATUS    RESTARTS   AGE
    glusterfs-0h68l                  1/1       Running   0          3d
    glusterfs-0vcf3                  1/1       Running   0          3d
    glusterfs-gr9gh                  1/1       Running   0          3d
    heketi-1-zpw4d                   1/1       Running   0          3h
    storage-project-router-2-db2wl   1/1       Running   0          4d
    
  9. Execute the following command to delete the old gluster pods. Gluster pods must follow a rolling upgrade, so ensure that the new pod is running before you delete the next old gluster pod. Only the OnDelete DaemonSet update strategy is supported: after you update a DaemonSet template, new DaemonSet pods are created only when you manually delete the old DaemonSet pods.
    1. To delete the old gluster pods, execute the following command:
      # oc delete pod <gluster_pod>
      For example,
      # oc delete pod glusterfs-0vcf3 
      pod "glusterfs-0vcf3" deleted

      Note

      Before deleting the next pod, perform a self-heal check, as shown in the example after these steps:
      1. Run the following command to access a shell on the gluster pod:
        # oc rsh <gluster_pod_name>
      2. Run the following command to obtain the volume names:
        # gluster volume list
      3. Run the following command on each volume to check the self-heal status:
        # gluster volume heal <volname> info
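      The following heal-info output is an illustrative example for the heketidbstorage volume, with the host and brick path shown as placeholders; when Number of entries is 0 for every brick, healing is complete and it is safe to delete the next pod:
        # gluster volume heal heketidbstorage info
        Brick <hostname>:<brick_path>
        Status: Connected
        Number of entries: 0
        ...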
    2. The delete pod command terminates the old pod and creates a new pod. Run # oc get pods -w and verify that the new pod's AGE reflects its recent creation and that its READY status is 1/1. The following example output shows the status progression from termination of the old pod to creation of the new one.
      # oc get pods -w
      NAME                             READY     STATUS        RESTARTS   AGE
      glusterfs-0vcf3                  1/1       Terminating   0          3d
      …
      
      # oc get pods -w
      NAME                             READY     STATUS              RESTARTS   AGE
      glusterfs-pqfs6                  0/1       ContainerCreating   0          1s
      …
      
      # oc get pods -w
      NAME                             READY     STATUS        RESTARTS   AGE
      glusterfs-pqfs6                  1/1       Running       0          2m
  10. Execute the following command to verify that the pods are running:
    # oc get pods
    
    For example,
    # oc get pods
    NAME                             READY     STATUS    RESTARTS   AGE
    glusterfs-j241c                  1/1       Running   0          4m
    glusterfs-pqfs6                  1/1       Running   0          7m
    glusterfs-wrn6n                  1/1       Running   0          12m
    heketi-1-zpw4d                   1/1       Running   0          4h
    storage-project-router-2-db2wl   1/1       Running   0          4d
    
  11. Execute the following command to verify that the pod has been upgraded to the latest version:
    # oc rsh <gluster_pod_name> glusterd --version
    For example:
    # oc rsh glusterfs-47qfc glusterd --version
    glusterfs 3.8.4 built on Sep  6 2017 06:59:40                                  
    Repository revision: git://git.gluster.com/glusterfs.git                      
    Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com>               
    GlusterFS comes with ABSOLUTELY NO WARRANTY.                                  
    It is licensed to you under your choice of the GNU Lesser                      
    General Public License, version 3 or any later version (LGPLv3                
    or later), or the GNU General Public License, version 2 (GPLv2),              
    in all cases as published by the Free Software Foundation.
  12. Check the Red Hat Gluster Storage op-version by executing the following command:
    # gluster vol get all cluster.op-version
    • Set the cluster.op-version to 31101 on any one of the pods:

      Note

      Ensure all the gluster pods are updated before changing the cluster.op-version.
      # gluster volume set all cluster.op-version 31101
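      For example, the check and the update may look like the following; the op-version value reported before the change is illustrative and depends on the version you upgraded from:
      # gluster vol get all cluster.op-version
      Option                                  Value
      ------                                  -----
      cluster.op-version                      31001

      # gluster volume set all cluster.op-version 31101
      volume set: success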
  13. From Container-Native Storage 3.6, dynamic provisioning of volumes for block storage is supported. Execute the following commands to deploy the gluster-block provisioner:
    # sed -e 's/\\\${NAMESPACE}/<NAMESPACE>/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
    # oadm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:<NAMESPACE>:glusterblock-provisioner
    For example:
    # sed -e 's/\\\${NAMESPACE}/storage-project/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
    # oadm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:storage-project:glusterblock-provisioner
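    As an optional sanity check, you can list the pods in the target project and confirm that a provisioner pod reaches the Running state; the pod name shown here is hypothetical and will differ in your environment:
    # oc get pods -n storage-project | grep glusterblock
    glusterblock-provisioner-dc-1-h9nbf   1/1       Running   0          1m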
  14. Brick multiplexing is a feature that allows multiple bricks to be added into one process. This reduces resource consumption and allows more bricks to run than before with the same memory consumption. It is enabled by default from Container-Native Storage 3.6. During an upgrade from Container-Native Storage 3.5 to Container-Native Storage 3.6, execute the following commands to turn brick multiplexing on:
    1. Execute the following command to rsh into any one of the gluster pods:
      # oc rsh <gluster_pod_name>
    2. Execute the following command to enable brick multiplexing:
      # gluster volume set all cluster.brick-multiplex on
      For example:
      # oc rsh glusterfs-770ql
      
      sh-4.2# gluster volume set all cluster.brick-multiplex on
      Brick-multiplexing is supported only for container workloads (CNS/CRS). Also it is advised to make sure that either all volumes are in stopped state or no bricks are running before this option is modified. Do you still want to continue? (y/n) y
      volume set: success
    3. List all the volumes in the trusted storage pool:
      For example:
      # gluster volume list
      
      heketidbstorage
      vol_194049d2565d2a4ad78ef0483e04711e
      ...
      ...
      
    4. Restart all the volumes by stopping and then starting each one (a scripted one-liner is sketched after these commands):
      # gluster vol stop <VOLNAME>
      # gluster vol start <VOLNAME>
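      The following one-liner is a minimal sketch of that restart, assuming it is acceptable to stop and start every volume in sequence from inside the gluster pod; the global --mode=script option suppresses the interactive confirmation prompt of gluster volume stop:
      # for vol in $(gluster volume list); do gluster --mode=script volume stop "$vol"; gluster volume start "$vol"; done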
  15. From Container-Native Storage 3.6, support for S3 compatible Object Store in Container-Native Storage is under technology preview. To enable the S3 compatible object store, refer to Chapter 18, S3 Compatible Object Store in a Container-Native Storage Environment.