OpenShift Container Storage is now OpenShift Data Foundation starting with version 4.9.
Chapter 8. Upgrading Your Red Hat Openshift Container Storage in Independent Mode
This chapter describes the procedures to follow to upgrade your independent mode environment.
The new registry name registry.redhat.io is used throughout this guide. However, if you have not yet migrated to the new registry, replace all occurrences of registry.redhat.io with registry.access.redhat.com wherever applicable.
Follow the same upgrade procedure to upgrade your environment from Red Hat Openshift Container Storage in Independent Mode 3.11.0 and above to Red Hat Openshift Container Storage in Independent Mode 3.11.8. Ensure that the correct image and version numbers are configured before you start the upgrade process.
The valid images for Red Hat Openshift Container Storage 3.11.8 are:
- registry.redhat.io/rhgs3/rhgs-server-rhel7:v3.11.8
- registry.redhat.io/rhgs3/rhgs-volmanager-rhel7:v3.11.8
- registry.redhat.io/rhgs3/rhgs-gluster-block-prov-rhel7:v3.11.8
- registry.redhat.io/rhgs3/rhgs-s3-server-rhel7:v3.11.8
8.1. Prerequisites
Ensure the following prerequisites are met:
- Section 3.1.3, “Red Hat OpenShift Container Platform and Red Hat Openshift Container Storage Requirements”
- Configuring Port Access: https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/deployment_guide/#CRS_port_access
- Enabling Kernel Modules: https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/deployment_guide/#CRS_enable_kernel
- Starting and Enabling Services: https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/deployment_guide/#Start_enable_service
- Ensure that you have supported versions of OpenShift Container Platform with Red Hat Gluster Storage Server and Red Hat Openshift Container Storage. For more information on supported versions, see Section 3.1.1, “Supported Versions”.
If Heketi is running as a standalone service on one of the Red Hat Gluster Storage nodes, ensure that the port for Heketi is open. By default, Heketi uses port 8080. To open this port, execute the following commands on the node where Heketi is running:
# firewall-cmd --zone=zone_name --add-port=8080/tcp
# firewall-cmd --zone=zone_name --add-port=8080/tcp --permanent
If Heketi is configured to listen on a different port, then change the port number in the command accordingly.
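As an optional verification step, you can list the ports currently open in the zone; zone_name is the same placeholder as above:
# firewall-cmd --zone=zone_name --list-ports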
Ensure that brick multiplexing is enabled. Brick multiplex status can be checked by using the following command.
# gluster v get all all
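To query just the relevant option, a check along the following lines can be used; the output shown is illustrative and may vary by Red Hat Gluster Storage version, but the value should read on when brick multiplexing is enabled:
# gluster volume get all cluster.brick-multiplex
Option                                  Value
------                                  -----
cluster.brick-multiplex                 on
If it is disabled, it can typically be enabled with gluster volume set all cluster.brick-multiplex on.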
Ensure that you run the following command on the master nodes to get the latest versions of the Ansible templates:
# yum update openshift-ansible
8.2. Upgrading nodes and pods in glusterfs group
Follow the steps in the sections ahead to upgrade your independent mode setup.
8.2.1. Upgrading the Red Hat Gluster Storage Cluster
To upgrade the Red Hat Gluster Storage cluster, see In-Service Software Upgrade.
8.2.2. Upgrading/Migration of Heketi in RHGS node
If Heketi is in an Openshift node, then skip this section and see Section 8.2.4.1, “Upgrading Heketi in Openshift node” instead.
- In OCS 3.11, upgrading Heketi on an RHGS node is not supported. You must instead migrate Heketi to a new Heketi pod.
- Migrate to the supported Heketi deployment now, because a migration path might not be available in future versions.
Ensure that the cns-deploy RPM is installed on the master node. It provides the template files required to set up the Heketi pod.
# subscription-manager repos --enable=rh-gluster-3-for-rhel-7-server-rpms
# yum install cns-deploy
Use the newly created containerized Red Hat Gluster Storage project on the master node:
# oc project <project-name>
For example:
# oc project gluster
Execute the following command on the master node to create the service account:
# oc create -f /usr/share/heketi/templates/heketi-service-account.yaml
serviceaccount/heketi-service-account created
Execute the following command on the master node to install the heketi template:
# oc create -f /usr/share/heketi/templates/heketi-template.yaml
template.template.openshift.io/heketi created
Verify that the templates are created:
# oc get templates
NAME      DESCRIPTION                          PARAMETERS    OBJECTS
heketi    Heketi service deployment template   5 (3 blank)   3
Execute the following command on the master node to grant the heketi Service Account the necessary privileges:
# oc policy add-role-to-user edit system:serviceaccount:gluster:heketi-service-account
role "edit" added: "system:serviceaccount:gluster:heketi-service-account"
# oc adm policy add-scc-to-user privileged -z heketi-service-account
scc "privileged" added to: ["system:serviceaccount:gluster:heketi-service-account"]
On the RHGS node where Heketi is running, execute the following commands:
Create the heketidbstorage volume:
# heketi-cli volume create --size=2 --name=heketidbstorage
Mount the volume:
# mount -t glusterfs 192.168.11.192:heketidbstorage /mnt/
where 192.168.11.192 is one of the RHGS nodes.
Stop the heketi service:
# systemctl stop heketi
Disable the heketi service:
# systemctl disable heketi
Copy the heketi db to the heketidbstorage volume:
# cp /var/lib/heketi/heketi.db /mnt/
Unmount the volume:
# umount /mnt
Copy the following files from the heketi node to the master node:
# scp /etc/heketi/heketi.json topology.json /etc/heketi/heketi_key OCP_master_node:/root/
where OCP_master_node is the hostname of the master node.
On the master node, set environment variables for the following three files that were copied from the heketi node. Add the following lines to the ~/.bashrc file and run the bash command to apply and save the changes:
export SSH_KEYFILE=heketi_key
export TOPOLOGY=topology.json
export HEKETI_CONFIG=heketi.json
Note: If you have changed the value of "keyfile" in /etc/heketi/heketi.json to a different value, change it here accordingly.
Execute the following command to create a secret to hold the configuration file:
# oc create secret generic heketi-config-secret --from-file=${SSH_KEYFILE} --from-file=${HEKETI_CONFIG} --from-file=${TOPOLOGY}
secret/heketi-config-secret created
Execute the following command to label the secret:
# oc label --overwrite secret heketi-config-secret glusterfs=heketi-config-secret heketi=config-secret
secret/heketi-config-secret labeled
Create the heketi-gluster-endpoints.yaml file, and get the IP addresses of all the glusterfs nodes.
Create the heketi-gluster-endpoints.yaml file:
# oc create -f ./heketi-gluster-endpoints.yaml
Get the IP addresses of all the glusterfs nodes. For example:
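The contents of heketi-gluster-endpoints.yaml are not reproduced on this page, so the following is only a sketch of a Kubernetes Endpoints object that lists the three node IPs used in this example; the object name heketi-storage-endpoints is an assumption and must match whatever name the rest of your configuration expects:
apiVersion: v1
kind: Endpoints
metadata:
  name: heketi-storage-endpoints
subsets:
- addresses:
  - ip: 192.168.11.208
  ports:
  - port: 1
- addresses:
  - ip: 192.168.11.176
  ports:
  - port: 1
- addresses:
  - ip: 192.168.11.192
  ports:
  - port: 1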
In the above example, 192.168.11.208, 192.168.11.176, and 192.168.11.192 are the glusterfs nodes.
Execute the following command to create the service:
# oc create -f ./heketi-gluster-service.yaml
For example:
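The service file is likewise not reproduced here; a minimal sketch of a selector-less Service that pairs with the endpoints above might look as follows, again assuming the object name heketi-storage-endpoints:
apiVersion: v1
kind: Service
metadata:
  name: heketi-storage-endpoints
spec:
  ports:
  - port: 1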
Execute the following command to deploy the Heketi service, route, and deployment configuration which will be used to create persistent volumes for OpenShift:
# oc process heketi | oc create -f -
service/heketi created
route.route.openshift.io/heketi created
deploymentconfig.apps.openshift.io/heketi created
Note: It is recommended that the heketidbstorage volume be tuned for database workloads. Newly installed Openshift Container Storage deployments tune the heketidbstorage volume automatically. For older deployments, follow the KCS article Planning to run containerized DB or nosql workloads on Openshift Container Storage? and perform the volume set operation for the heketidbstorage volume.
To verify that Heketi is migrated, execute the following command on the master node:
# oc rsh po/<heketi-pod-name>
For example:
# oc rsh po/heketi-1-p65c6
Execute the following command to check the cluster IDs:
# heketi-cli cluster list
From the output, verify that the cluster ID matches the old cluster.
8.2.3. Upgrading if existing version deployed using cns-deploy
8.2.3.1. Upgrading Heketi in Openshift node
The following commands must be executed on the client machine.
Execute the following command to update the heketi client and cns-deploy packages:
# yum update cns-deploy -y
# yum update heketi-client -y
Back up the Heketi database file:
# heketi-cli db dump > heketi-db-dump-$(date -I).json
Execute the following command to get the current HEKETI_ADMIN_KEY:
The OCS admin can choose to set any phrase for user key as long as it is not used by their infrastructure. It is not used by any of the OCS default installed resources.
# oc get secret <heketi-admin-secret-name> -o jsonpath='{.data.key}' | base64 -d; echo
Where <heketi-admin-secret-name> is the name of the heketi admin secret created by the user.
Execute the following command to delete the heketi template.
# oc delete templates heketi
Execute the following command to install the heketi template.
# oc create -f /usr/share/heketi/templates/heketi-template.yaml
template "heketi" created
Execute the following command to grant the heketi Service Account the necessary privileges.
# oc policy add-role-to-user edit system:serviceaccount:<project_name>:heketi-service-account
# oc adm policy add-scc-to-user privileged -z heketi-service-account
For example,
# oc policy add-role-to-user edit system:serviceaccount:storage-project:heketi-service-account
# oc adm policy add-scc-to-user privileged -z heketi-service-account
Note: The service account used in the heketi pod needs to be privileged because the Heketi/rhgs-volmanager pod mounts the heketidbstorage Gluster volume as a "glusterfs" volume type and not as a PersistentVolume (PV).
As per the security context constraints in OpenShift, the ability to mount volumes that are not of type configMap, downwardAPI, emptyDir, hostPath, nfs, persistentVolumeClaim, or secret is granted only to accounts with the privileged Security Context Constraint (SCC).
Execute the following command to generate a new heketi configuration file.
# sed -e "s/\${HEKETI_EXECUTOR}/ssh/" -e "s#\${HEKETI_FSTAB}#/etc/fstab#" -e "s/\${SSH_PORT}/22/" -e "s/\${SSH_USER}/root/" -e "s/\${SSH_SUDO}/false/" -e "s/\${BLOCK_HOST_CREATE}/true/" -e "s/\${BLOCK_HOST_SIZE}/500/" "/usr/share/heketi/templates/heketi.json.template" > heketi.json
The BLOCK_HOST_SIZE parameter controls the size (in GB) of the automatically created Red Hat Gluster Storage volumes that host the gluster-block volumes (for more information, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/operations_guide/#Block_Storage). This default configuration dynamically creates block-hosting volumes of 500 GB in size as more space is required. Alternatively, copy the file /usr/share/heketi/templates/heketi.json.template to heketi.json in the current directory and edit the new file directly, replacing each "${VARIABLE}" string with the required parameter.
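With the substitutions made by the sed command above, the generated heketi.json should contain values along the following lines. This is only a sketch for orientation; the keyfile path and the jwt secrets (<adminkey>, <heketiuserkey> are placeholders) depend on your template and environment:
{
  "port": "8080",
  "use_auth": false,
  "jwt": {
    "admin": { "key": "<adminkey>" },
    "user": { "key": "<heketiuserkey>" }
  },
  "glusterfs": {
    "executor": "ssh",
    "sshexec": {
      "keyfile": "/etc/heketi/private_key",
      "user": "root",
      "port": "22",
      "sudo": false,
      "fstab": "/etc/fstab"
    },
    "db": "/var/lib/heketi/heketi.db",
    "auto_create_block_hosting_volume": true,
    "block_hosting_volume_size": 500
  }
}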
Note: JSON formatting is strictly required (for example, no trailing spaces, and booleans in all lowercase).
Execute the following command to create a secret to hold the configuration file.
# oc create secret generic heketi-config-secret --from-file=private_key=${SSH_KEYFILE} --from-file=./heketi.json
Note: If the heketi-config-secret secret already exists, delete it and then run the preceding command again.
Execute the following command to delete the deployment configuration, service, and route for heketi:
# oc delete deploymentconfig,service,route heketi
Edit the heketi template.
Edit the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, and HEKETI_EXECUTOR parameters, as in the sketch below.
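The exact template layout can vary between cns-deploy releases, so the following is only a sketch of what the edited parameters section might look like; <heketiuserkey> and <adminkey> stand for the values retrieved earlier, and ssh is the executor used for independent mode:
parameters:
- displayName: Heketi user secret
  name: HEKETI_USER_KEY
  value: <heketiuserkey>
- displayName: Heketi administrator secret
  name: HEKETI_ADMIN_KEY
  value: <adminkey>
- displayName: heketi executor type
  name: HEKETI_EXECUTOR
  value: ssh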
Note: If a cluster has more than 1000 volumes, refer to How to change the default PVS limit in Openshift Container Storage and add the required parameters before proceeding with the upgrade.
Replace the value under IMAGE_NAME with v3.11.5 or v3.11.8 depending on the version you want to upgrade to.
- displayName: heketi container image name
  name: IMAGE_NAME
  required: true
  value: registry.redhat.io/rhgs3/rhgs-volmanager-rhel7:v3.11.8
Execute the following command to deploy the Heketi service, route, and deployment configuration which will be used to create persistent volumes for OpenShift:
# oc process heketi | oc create -f -
service "heketi" created
route "heketi" created
deploymentconfig "heketi" created
Note: It is recommended that the heketidbstorage volume be tuned for database workloads. Newly installed Openshift Container Storage deployments tune the heketidbstorage volume automatically. For older deployments, follow the KCS article Planning to run containerized DB or nosql workloads on Openshift Container Storage? and perform the volume set operation for the heketidbstorage volume.
Execute the following command to verify that the containers are running:
# oc get pods
8.2.3.2. Upgrading Gluster Block
Execute the following steps to upgrade gluster block.
The recommended Red Hat Enterprise Linux (RHEL) version for block storage is RHEL 7.5.4. Ensure that your kernel version matches 3.10.0-862.14.4.el7.x86_64. To verify, execute:
# uname -r
Reboot the node for the latest kernel update to take effect.
To use gluster block, add the following two parameters to the glusterfs section in the Heketi configuration file at /etc/heketi/heketi.json:
auto_create_block_hosting_volume
block_hosting_volume_size
Where:
auto_create_block_hosting_volume: Creates block hosting volumes automatically if none is found or if the existing volume is exhausted. To enable this, set the value to true.
block_hosting_volume_size: New block hosting volumes are created with the size specified here. This is considered only if auto_create_block_hosting_volume is set to true. The recommended size is 500G.
For example:
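A sketch of the relevant part of /etc/heketi/heketi.json with the two options added is shown below; the surrounding keys (executor, db, and so on) stay as they already are in your file:
"glusterfs": {
  "executor": "ssh",
  "db": "/var/lib/heketi/heketi.db",
  "auto_create_block_hosting_volume": true,
  "block_hosting_volume_size": 500
}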
Restart the Heketi service:
# systemctl restart heketi
Note: This step is not applicable if heketi is running as a pod in the Openshift cluster.
If a gluster-block-provisioner pod already exists, delete it by executing the following commands:
# oc delete dc <gluster-block-dc>
For example:
# oc delete dc glusterblock-provisioner-dc
Delete the following resources from the old pod:
If you have glusterfs pods:
# oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
# oc delete serviceaccounts glusterblock-provisioner
serviceaccount "glusterblock-provisioner" deleted
# oc delete clusterrolebindings.authorization.openshift.io glusterblock-provisioner
If you have registry pods:
# oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
# oc delete serviceaccounts glusterblock-provisioner
serviceaccount "glusterblock-provisioner" deleted
# oc delete clusterrolebindings.authorization.openshift.io glusterblock-provisioner
Execute the following commands to deploy the gluster-block provisioner:
# sed -e 's/\\\${NAMESPACE}/<NAMESPACE>/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
# oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:<NAMESPACE>:glusterblock-provisioner
For example:
# sed -e 's/\\\${NAMESPACE}/storage-project/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
# oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:storage-project:glusterblock-provisioner
8.2.4. Upgrading if existing version deployed using Ansible
8.2.4.1. Upgrading Heketi in Openshift node
The following commands must be executed on the client machine.
Execute the following command to update the heketi client:
# yum update heketi-client -y
Back up the Heketi database file:
# heketi-cli db dump > heketi-db-dump-$(date -I).json
Execute the following command to get the current HEKETI_ADMIN_KEY:
The OCS administrator can choose to set any phrase for user key as long as it is not used by their infrastructure. It is not used by any of the OCS default installed resources.
# oc get secret heketi-storage-admin-secret -o jsonpath='{.data.key}' | base64 -d; echo
If the HEKETI_USER_KEY was set previously, you can obtain it by using the following command:
# oc describe pod <heketi-pod> | grep "HEKETI_USER_KEY"
Execute the following command to delete the heketi template.
# oc delete templates heketi
Execute the following command to install the heketi template.
# oc create -f /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/files/heketi-template.yml
template "heketi" created
Execute the following step to edit the template:
If the existing template has IMAGE_NAME and IMAGE_VERSION as two parameters, then edit the template to change the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, HEKETI_EXECUTOR, HEKETI_FSTAB, HEKETI_ROUTE, IMAGE_NAME, IMAGE_VERSION, CLUSTER_NAME, and HEKETI_LVM_WRAPPER parameters.
Note: The value of the HEKETI_LVM_WRAPPER parameter points to the wrapper command for LVM. In independent mode setups the wrapper is not required; change the value to an empty string, as in the sketch below.
If the template has only IMAGE_NAME, then edit the template to change the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, HEKETI_EXECUTOR, HEKETI_FSTAB, HEKETI_ROUTE, IMAGE_NAME, CLUSTER_NAME, and HEKETI_LVM_WRAPPER parameters, as in the sketch below.
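As an illustration only, an edited parameters section for an independent mode deployment might look like the following sketch; the route, cluster name, and image values shown here are assumptions based on a typical glusterfs group deployment and must match your own environment, and HEKETI_LVM_WRAPPER is left empty as noted above:
- name: HEKETI_USER_KEY
  value: <heketiuserkey>
- name: HEKETI_ADMIN_KEY
  value: <adminkey>
- name: HEKETI_EXECUTOR
  value: ssh
- name: HEKETI_FSTAB
  value: /etc/fstab
- name: HEKETI_ROUTE
  value: heketi-storage
- name: IMAGE_NAME
  value: registry.redhat.io/rhgs3/rhgs-volmanager-rhel7:v3.11.8
- name: CLUSTER_NAME
  value: storage
- name: HEKETI_LVM_WRAPPER
  value: ""
If your template also has a separate IMAGE_VERSION parameter, set IMAGE_VERSION to v3.11.8 and leave the image tag out of IMAGE_NAME.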
If a cluster has more than 1000 volumes refer to How to change the default PVS limit in Openshift Container Storage and add the required parameters before proceeding with the upgrade.
Execute the following command to delete the deployment configuration, service, and route for heketi:
# oc delete deploymentconfig,service,route heketi-storage
Execute the following command to deploy the Heketi service, route, and deploymentconfig which will be used to create persistent volumes for OpenShift:
# oc process heketi | oc create -f -
service "heketi" created
route "heketi" created
deploymentconfig "heketi" created
Note: It is recommended that the heketidbstorage volume be tuned for database workloads. Newly installed Openshift Container Storage deployments tune the heketidbstorage volume automatically. For older deployments, follow the KCS article Planning to run containerized DB or nosql workloads on Openshift Container Storage? and perform the volume set operation for the heketidbstorage volume.
Execute the following command to verify that the containers are running:
# oc get pods
For example:
# oc get pods
NAME                                          READY     STATUS    RESTARTS   AGE
glusterblock-storage-provisioner-dc-1-ffgs5   1/1       Running   0          3m
heketi-storage-4-9fnvz                        2/2       Running   0          8d
8.2.4.2. Upgrading Gluster Block if Deployed by Using Ansible
Execute the following steps to upgrade gluster block.
The recommended Red Hat Enterprise Linux (RHEL) version for block storage is RHEL 7.5.4. Ensure that your kernel version matches 3.10.0-862.14.4.el7.x86_64. To verify, execute:
# uname -r
Reboot the node for the latest kernel update to take effect.
Execute the following command to delete the old glusterblock provisioner template.
# oc delete templates glusterblock-provisioner
Create a glusterblock provisioner template. For example:
# oc create -f /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/files/glusterblock-provisioner.yml
template.template.openshift.io/glusterblock-provisioner created
If a gluster-block-provisioner pod already exists, delete it by executing the following commands.
For glusterfs namespace:
# oc delete dc glusterblock-storage-provisioner-dc
For glusterfs-registry namespace:
# oc delete dc glusterblock-registry-provisioner-dc
Edit the glusterblock-provisioner template to change the IMAGE_NAME, IMAGE_VERSION, and NAMESPACE.
# oc get templates
NAME                       DESCRIPTION                          PARAMETERS    OBJECTS
glusterblock-provisioner   glusterblock provisioner template    3 (2 blank)   4
glusterfs                  GlusterFS DaemonSet template         5 (1 blank)   1
heketi                     Heketi service deployment template   7 (3 blank)   3
If the template has IMAGE_NAME and IMAGE_VERSION as two separate parameters, then update both parameters in the glusterblock-provisioner template.
If the template has only IMAGE_NAME as a parameter, then update that single parameter in the glusterblock-provisioner template, as in the sketch below.
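As a sketch only (the parameter descriptions and the namespace value are assumptions and must match your cluster), the edited parameters could look like this:
- displayName: glusterblock provisioner container image name
  name: IMAGE_NAME
  required: true
  value: rhgs3/rhgs-gluster-block-prov-rhel7:v3.11.8
- displayName: glusterblock provisioner namespace
  name: NAMESPACE
  required: true
  value: app-storage
If IMAGE_NAME and IMAGE_VERSION are separate parameters, keep the image tag out of IMAGE_NAME and set IMAGE_VERSION to v3.11.8 instead.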
Delete the following resources from the old pod.
If you have glusterfs pods:
# oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
# oc delete serviceaccounts glusterblock-storage-provisioner
# oc delete clusterrolebindings.authorization.openshift.io glusterblock-storage-provisioner
If you have registry pods:
# oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
# oc delete serviceaccounts glusterblock-registry-provisioner
# oc delete clusterrolebindings.authorization.openshift.io glusterblock-registry-provisioner
Before running oc process, determine the correct provisioner name. If there is more than one gluster block provisioner running in your cluster, the name must differ from all other provisioners.
For example,
- If there are 2 or more provisioners, the name should be gluster.org/glusterblock-<namespace>, where namespace is replaced by the namespace that the provisioner is deployed in.
- If there is only one provisioner, installed prior to 3.11.8, gluster.org/glusterblock is sufficient. If the name currently in use already has a unique namespace suffix, reuse the existing name.
After editing the template, execute the following command to create the deployment configuration:
# oc process glusterblock-provisioner -o yaml | oc create -f -
For example:
# oc process glusterblock-provisioner -o yaml | oc create -f -
clusterrole.authorization.openshift.io/glusterblock-provisioner-runner created
serviceaccount/glusterblock-storage-provisioner created
clusterrolebinding.authorization.openshift.io/glusterblock-storage-provisioner created
deploymentconfig.apps.openshift.io/glusterblock-storage-provisioner-dc created
All storage classes that use gluster block volume provisioning must match exactly one of the provisioner names in the cluster. To check the list of storage classes that refer to a block provisioner in a given namespace, run the following command:
# oc get sc -o custom-columns=NAME:.metadata.name,PROV:.provisioner,RSNS:.parameters.restsecretnamespace | grep 'gluster.org/glusterblock' | grep <namespace>
Example:
# oc get sc -o custom-columns=NAME:.metadata.name,PROV:.provisioner,RSNS:.parameters.restsecretnamespace | grep 'gluster.org/glusterblock' | grep app-storage
glusterfs-storage-block   gluster.org/glusterblock-app-storage   app-storage
Check each storage class provisioner name. If it does not match the block provisioner name configured for that namespace, it must be updated. If the block provisioner name already matches the configured provisioner name, nothing else needs to be done. Use the list generated above and include all storage class names where the provisioner name must be updated.
For every storage class in this list, do the following:
# oc get sc -o yaml <storageclass> > storageclass-to-edit.yaml
# oc delete sc <storageclass>
# sed 's,gluster.org/glusterblock$,gluster.org/glusterblock-<namespace>,' storageclass-to-edit.yaml | oc create -f -
Example:
# oc get sc -o yaml gluster-storage-block > storageclass-to-edit.yaml
# oc delete sc gluster-storage-block
# sed 's,gluster.org/glusterblock$,gluster.org/glusterblock-app-storage,' storageclass-to-edit.yaml | oc create -f -
storageclass.storage.k8s.io/glusterfs-registry-block created
8.2.5. Enabling S3 Compatible Object store
Support for S3 compatible Object Store is under technology preview. To enable S3 compatible object store, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/operations_guide/#S3_Object_Store.
- If you have gluster nodes and heketi pods in glusterfs registry namespace, then follow the steps in section Section 8.3, “Upgrading nodes and pods in glusterfs registry group”.
- S3 compatible Object store is only available with Red Hat Openshift Container Storage 3.11.4 and older releases.
8.3. Upgrading nodes and pods in glusterfs registry group
Follow the steps in the following sections to upgrade your gluster nodes and heketi pods in the glusterfs registry namespace.
8.3.1. Upgrading the Red Hat Gluster Storage Registry Cluster
To upgrade the Red Hat Gluster Storage cluster, see In-Service Software Upgrade.
8.3.1.1. Upgrading Heketi Registry pod
If Heketi is not in an Openshift node, then you have to migrate Heketi from the RHGS node to an Openshift node. For more information on how to migrate, refer to Section 8.2.2, “Upgrading/Migration of Heketi in RHGS node”.
To upgrade the Heketi registry pods, perform the following steps:
The following commands must be executed on the client machine.
Execute the following command to update the heketi client:
# yum update heketi-client -y
Back up the Heketi registry database file:
# heketi-cli db dump > heketi-db-dump-$(date -I).json
Execute the following command to get the current HEKETI_ADMIN_KEY:
The OCS administrator is free to set any phrase for user key as long as it is not used by their infrastructure. It is not used by any of the OCS default installed resources.
# oc get secret heketi-registry-admin-secret -o jsonpath='{.data.key}' | base64 -d; echo
To fetch the HEKETI_USER_KEY, run the following command:
# oc describe pod <heketi-pod> | grep "HEKETI_USER_KEY"
Execute the following command to delete the heketi template.
# oc delete templates heketi
Execute the following command to install the heketi template.
# oc create -f /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/files/heketi-template.yml
template "heketi" created
Execute the following step to edit the template:
If the existing template has IMAGE_NAME and IMAGE_VERSION as two parameters, then edit the template to change the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, HEKETI_EXECUTOR, HEKETI_FSTAB, HEKETI_ROUTE, IMAGE_NAME, IMAGE_VERSION, CLUSTER_NAME, and HEKETI_LVM_WRAPPER parameters.
The value of the HEKETI_LVM_WRAPPER parameter points to the wrapper command for LVM. In independent mode setups the wrapper is not required; change the value to an empty string.
If the template has only IMAGE_NAME, then edit the template to change the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, HEKETI_EXECUTOR, HEKETI_FSTAB, HEKETI_ROUTE, IMAGE_NAME, CLUSTER_NAME, and HEKETI_LVM_WRAPPER parameters.
If a cluster has more than 1000 volumes refer to How to change the default PVS limit in Openshift Container Storage and add the required parameters before proceeding with the upgrade.
Execute the following command to delete the deployment configuration, service, and route for heketi:
# oc delete deploymentconfig,service,route heketi-registry
Execute the following command to deploy the Heketi service, route, and deploymentconfig which will be used to create persistent volumes for OpenShift:
# oc process heketi | oc create -f -
service "heketi-registry" created
route "heketi-registry" created
deploymentconfig "heketi-registry" created
Note: It is recommended that the heketidbstorage volume be tuned for database workloads. Newly installed Openshift Container Storage deployments tune the heketidbstorage volume automatically. For older deployments, follow the KCS article Planning to run containerized DB or nosql workloads on Openshift Container Storage? and perform the volume set operation for the heketidbstorage volume.
Execute the following command to verify that the containers are running:
# oc get pods
For example:
# oc get pods
NAME                                          READY     STATUS    RESTARTS   AGE
glusterblock-storage-provisioner-dc-1-ffgs5   1/1       Running   0          3m
heketi-storage-4-9fnvz                        2/2       Running   0          8d
8.3.2. Upgrading glusterblock-provisioner Pod
To upgrade the glusterblock-provisioner pods, perform the following steps:
Execute the following command to delete the old glusterblock provisioner template.
# oc delete templates glusterblock-provisioner
Create a glusterblock provisioner template. For example:
# oc create -f /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/files/glusterblock-provisioner.yml
template.template.openshift.io/glusterblock-provisioner created
If a glusterblock-provisioner pod already exists, delete it by executing the following commands:
# oc delete dc <gluster-block-registry-dc>
For example:
# oc delete dc glusterblock-registry-provisioner-dc
Edit the glusterblock-provisioner template to change the IMAGE_NAME, IMAGE_VERSION, and NAMESPACE.
If the template has IMAGE_NAME and IMAGE_VERSION as two separate parameters, then update the glusterblock-provisioner template as follows:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Replace the
value
underIMAGE_VERSION
withv3.11.5
orv3.11.8
depending on the version you want to upgrade to.- displayName: glusterblock provisioner container image version name: IMAGE_VERSION required: true value: v3.11.8
- displayName: glusterblock provisioner container image version name: IMAGE_VERSION required: true value: v3.11.8
If the template has only IMAGE_NAME as a parameter, then update the glusterblock-provisioner template as follows:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Replace the
value
underIMAGE_NAME
withv3.11.5
orv3.11.8
depending on the version you want to upgrade to.- displayName: glusterblock provisioner container image name name: IMAGE_NAME required: true value: rhgs3/rhgs-gluster-block-prov-rhel7:v3.11.8
- displayName: glusterblock provisioner container image name name: IMAGE_NAME required: true value: rhgs3/rhgs-gluster-block-prov-rhel7:v3.11.8
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Delete the following resources from the old pod:
# oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
# oc delete serviceaccounts glusterblock-registry-provisioner
# oc delete clusterrolebindings.authorization.openshift.io glusterblock-registry-provisioner
Before running oc process, determine the correct provisioner name. If there is more than one gluster block provisioner running in your cluster, the name must differ from all other provisioners.
For example,
- If there are 2 or more provisioners, the name should be gluster.org/glusterblock-<namespace>, where namespace is replaced by the namespace that the provisioner is deployed in.
- If there is only one provisioner, installed prior to 3.11.8, gluster.org/glusterblock is sufficient. If the name currently in use already has a unique namespace suffix, reuse the existing name.
After editing the template, execute the following command to create the deployment configuration:
# oc process glusterblock-provisioner -o yaml | oc create -f -
For example:
# oc process glusterblock-provisioner -o yaml | oc create -f -
clusterrole.authorization.openshift.io/glusterblock-provisioner-runner created
serviceaccount/glusterblock-registry-provisioner created
clusterrolebinding.authorization.openshift.io/glusterblock-registry-provisioner created
deploymentconfig.apps.openshift.io/glusterblock-registry-provisioner-dc created
All storage classes that use gluster block volume provisioning must match exactly one of the provisioner names in the cluster. To check the list of storage classes that refer to a block provisioner in a given namespace, run the following command:
# oc get sc -o custom-columns=NAME:.metadata.name,PROV:.provisioner,RSNS:.parameters.restsecretnamespace | grep 'gluster.org/glusterblock' | grep <namespace>
Example:
# oc get sc -o custom-columns=NAME:.metadata.name,PROV:.provisioner,RSNS:.parameters.restsecretnamespace | grep 'gluster.org/glusterblock' | grep infra-storage
glusterfs-registry-block   gluster.org/glusterblock   infra-storage
Check each storage class provisioner name. If it does not match the block provisioner name configured for that namespace, it must be updated. If the block provisioner name already matches the configured provisioner name, nothing else needs to be done. Use the list generated above and include all storage class names where the provisioner name must be updated.
For every storage class in this list, do the following:
# oc get sc -o yaml <storageclass> > storageclass-to-edit.yaml
# oc delete sc <storageclass>
# sed 's,gluster.org/glusterblock$,gluster.org/glusterblock-<namespace>,' storageclass-to-edit.yaml | oc create -f -
Example:
# oc get sc -o yaml glusterfs-registry-block > storageclass-to-edit.yaml
# oc delete sc glusterfs-registry-block
storageclass.storage.k8s.io "glusterfs-registry-block" deleted
# sed 's,gluster.org/glusterblock$,gluster.org/glusterblock-infra-storage,' storageclass-to-edit.yaml | oc create -f -
storageclass.storage.k8s.io/glusterfs-registry-block created
8.3.3. Upgrading Gluster Block
To upgrade the gluster block, perform the following steps:
Execute the following command to upgrade the gluster block:
# yum update gluster-block
Enable and start the gluster block service:
# systemctl enable gluster-blockd
# systemctl start gluster-blockd
8.4. Upgrading the client on Red Hat OpenShift Container Platform nodes
Execute the following commands on each of the nodes:
To drain the pod, execute the following command on the master node (or any node with cluster-admin access):
# oc adm drain <node_name> --ignore-daemonsets
To check if all the pods are drained, execute the following command on the master node (or any node with cluster-admin access):
# oc get pods --all-namespaces --field-selector=spec.nodeName=<node_name>
Execute the following command on the node to upgrade the client:
# yum update glusterfs-fuse
To enable the node for pod scheduling, execute the following command on the master node (or any node with cluster-admin access):
# oc adm manage-node --schedulable=true <node_name>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Create and add the following content to the multipath.conf file:
Note: If the content was already added during a previous upgrade, the multipath.conf file does not require any change.
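The content typically used for gluster-block (LIO iSCSI) devices is along the following lines; treat this as a sketch and confirm the values against the deployment guide for your release:
# LIO iSCSI
devices {
        device {
                vendor "LIO-ORG"
                user_friendly_names "yes"
                path_grouping_policy "failover"
                path_selector "round-robin 0"
                failback immediate
                path_checker "tur"
                prio "const"
                no_path_retry 120
        }
}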
Execute the following commands to start the multipath daemon and [re]load the multipath configuration:
# systemctl start multipathd
# systemctl reload multipathd