13.3. Upgrading the Red Hat Gluster Storage Pods
The following commands must be executed on the client machine. If you want to set up a client machine, refer to Section 5.2.1, “Installing Red Hat Gluster Storage Container Native with OpenShift Container Platform on Red Hat Enterprise Linux 7 based OpenShift Container Platform Cluster” or Section 5.2.2, “Installing Red Hat Gluster Storage Container Native with OpenShift Container Platform on Red Hat Enterprise Linux Atomic Host OpenShift Container Platform Cluster”.
The following are the steps for updating the glusterfs DaemonSet:
- Execute the following command to find the DaemonSet name for gluster:
# oc get ds
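For example, the output resembles the following; the counts and age shown here are illustrative:
# oc get ds
NAME        DESIRED   CURRENT   READY     NODE-SELECTOR           AGE
glusterfs   3         3         3         storagenode=glusterfs   8d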
- Execute the following command to delete the DaemonSet:
# oc delete ds <ds-name> --cascade=false
Using the --cascade=false option while deleting the old DaemonSet deletes only the DaemonSet, not the gluster pods. After deleting the old DaemonSet, you must load the new one. When you manually delete the old pods, the new pods that are created will have the configuration of the new DaemonSet.
For example,
# oc delete ds glusterfs --cascade=false
daemonset "glusterfs" deleted
- Execute the following command to verify that all the old pods are up:
# oc get pods
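For example, the old gluster pods should all still be in the Running state; the pod names and ages below are illustrative:
# oc get pods
NAME              READY     STATUS    RESTARTS   AGE
glusterfs-0h68l   1/1       Running   0          3d
glusterfs-0vcf3   1/1       Running   0          3d
glusterfs-gr9gh   1/1       Running   0          3d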
- Execute the following command to delete the old glusterfs template:
# oc delete templates glusterfs
template "glusterfs" deleted
- Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
- Check whether the nodes are labeled using the following command:
# oc get nodes --show-labels
If the Red Hat Gluster Storage nodes do not have the storagenode=glusterfs label, proceed with the next step.
- Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
# oc label nodes <node name> storagenode=glusterfs
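For example; the node name here is hypothetical and must be replaced with the name of each node running a gluster pod:
# oc label nodes node1.example.com storagenode=glusterfs
node "node1.example.com" labeled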
- Execute the following command to register the new gluster template:
# oc create -f /usr/share/heketi/templates/glusterfs-template.yaml
template "glusterfs" created
- Execute the following command to start the gluster DaemonSet:
# oc process glusterfs | oc create -f -
daemonset "glusterfs" created
- Execute the following command to identify the old gluster pods that need to be deleted:
# oc get pods
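For example, the old pods can be identified by their AGE; the pod names and ages below are illustrative:
# oc get pods
NAME              READY     STATUS    RESTARTS   AGE
glusterfs-0h68l   1/1       Running   0          3d
glusterfs-0vcf3   1/1       Running   0          3d
glusterfs-gr9gh   1/1       Running   0          3d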
- Execute the following command to delete the old gluster pods.
Gluster pods should follow a rolling upgrade. Hence, you must ensure that the new pod is running before deleting the next old gluster pod. We support the OnDelete DaemonSet update strategy. With the OnDelete update strategy, after you update a DaemonSet template, new DaemonSet pods will only be created when you manually delete the old DaemonSet pods.
- To delete the old gluster pods, execute the following command:
# oc delete pod <gluster_pod>
For example,
# oc delete pod glusterfs-0vcf3
pod "glusterfs-0vcf3" deleted
Note
Before deleting the next pod, a self-heal check has to be made:
- Run the following command to access a shell on the gluster pod:
# oc rsh <gluster_pod_name>
- Run the following command to obtain the volume names:
# gluster volume list
- Run the following command on each volume to check the self-heal status:
# gluster volume heal <volname> info
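For example, every brick should report zero entries pending heal before you delete the next pod; the volume name, brick paths, and addresses below are illustrative:
# gluster volume heal glustervol info
Brick 192.168.121.101:/var/lib/heketi/mounts/vg_01/brick_01/brick
Status: Connected
Number of entries: 0

Brick 192.168.121.102:/var/lib/heketi/mounts/vg_02/brick_02/brick
Status: Connected
Number of entries: 0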
- The delete pod command will terminate the old pod and create a new pod. Run the following command and check the AGE of the pod; the READY status should be 1/1:
# oc get pods -w
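The following is an illustrative output showing the status progression from termination of the old pod to creation of the new one; the pod names and timings are hypothetical:
# oc get pods -w
NAME              READY     STATUS              RESTARTS   AGE
glusterfs-0vcf3   1/1       Terminating         0          3d
glusterfs-0vcf3   0/1       Terminating         0          3d
glusterfs-j241c   0/1       Pending             0          0s
glusterfs-j241c   0/1       ContainerCreating   0          0s
glusterfs-j241c   0/1       Running             0          25s
glusterfs-j241c   1/1       Running             0          45s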
- Execute the following command to verify that the pods are running:
# oc get pods
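For example, after the rolling deletion completes, all pods should be newly created and Running; the names and ages here are illustrative:
# oc get pods
NAME              READY     STATUS    RESTARTS   AGE
glusterfs-j241c   1/1       Running   0          4m
glusterfs-pqfs6   1/1       Running   0          7m
glusterfs-wrn9n   1/1       Running   0          12m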
- Execute the following command to verify that the pod has been upgraded to the latest version:
# oc rsh <gluster_pod_name> glusterd --version
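For example; the pod name is hypothetical and the version string shown is illustrative, as the exact version depends on the image you upgraded to:
# oc rsh glusterfs-j241c glusterd --version
glusterfs 3.8.4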
- Check the Red Hat Gluster Storage op-version by executing the following command:
# gluster vol get all cluster.op-version
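For example, the output follows the usual Option/Value layout; the pre-update value shown here is illustrative:
# gluster vol get all cluster.op-version
Option                                  Value
------                                  -----
cluster.op-version                      31001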
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Set the cluster.op-version to 31101 on any one of the pods:
Note
Ensure all the gluster pods are updated before changing the cluster.op-version.
# gluster volume set all cluster.op-version 31101
- From Container-Native Storage 3.6, dynamically provisioning volumes for block storage is supported. Execute the following commands to deploy the gluster-block provisioner:
# sed -e 's/\\\${NAMESPACE}/<NAMESPACE>/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
# oadm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:<NAMESPACE>:glusterblock-provisioner
For example:
# sed -e 's/\\\${NAMESPACE}/storage-project/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
# oadm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:storage-project:glusterblock-provisioner
- Brick multiplexing is a feature that allows adding multiple bricks into one process. This reduces resource consumption and allows you to run more bricks than before with the same memory consumption. It is enabled by default from Container-Native Storage 3.6. During an upgrade from Container-Native Storage 3.5 to Container-Native Storage 3.6, to turn brick multiplexing on, execute the following commands:
- Execute the following command to rsh into any one of the gluster pods:
# oc rsh <gluster_pod_name>
- Execute the following command to enable brick multiplexing:
# gluster volume set all cluster.brick-multiplex on
For example:
# oc rsh glusterfs-770ql
sh-4.2# gluster volume set all cluster.brick-multiplex on
Brick-multiplexing is supported only for container workloads (CNS/CRS). Also it is advised to make sure that either all volumes are in stopped state or no bricks are running before this option is modified. Do you still want to continue? (y/n) y
volume set: success
- List all the volumes in the trusted storage pool:
# gluster volume list
Restart all the volumes:
# gluster vol stop <VOLNAME>
# gluster vol start <VOLNAME>
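If the trusted storage pool contains many volumes, the restart can be scripted. The following is a minimal sketch, assuming it is run from a shell inside a gluster pod; --mode=script suppresses the interactive confirmation prompt for volume stop. Note that stopping a volume disconnects its clients, so perform this within a maintenance window:
# for vol in $(gluster volume list); do gluster --mode=script volume stop "$vol"; gluster --mode=script volume start "$vol"; done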
- From Container-Native Storage 3.6, support for S3 compatible Object Store in Container-Native Storage is under technology preview. To enable the S3 compatible object store, refer to Chapter 18, S3 Compatible Object Store in a Container-Native Storage Environment.