OpenShift Container Storage is now OpenShift Data Foundation starting with version 4.9.
Deployment Guide
Deploying Red Hat Openshift Container Storage 3.10.
Edition 0
Abstract
Part I. Planning
Chapter 1. Identify your Workloads
- Jenkins
- ElasticSearch
- Prometheus
Chapter 2. Identify your Use Case
2.1. Converged Mode
Note
- OpenShift provides the platform as a service (PaaS) infrastructure based on Kubernetes container management. Basic OpenShift architecture is built around multiple master systems, each managing a set of nodes.
- Red Hat Gluster Storage provides the containerized distributed storage based on Red Hat Gluster Storage 3.4 container. Each Red Hat Gluster Storage volume is composed of a collection of bricks, where each brick is the combination of a node and an export directory.
- Heketi provides the Red Hat Gluster Storage volume life-cycle management. It creates the Red Hat Gluster Storage volumes dynamically and supports multiple Red Hat Gluster Storage clusters.
- Create multiple persistent volumes (PV) and register these volumes with OpenShift.
- Developers then submit a persistent volume claim (PVC).
- A PV is identified and selected from a pool of available PVs and bound to the PVC.
- The OpenShift pod then uses the PV for persistent storage.
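As a minimal illustration of the claim side of this workflow, a developer's persistent volume claim might look like the following sketch; the claim name is hypothetical, and with dynamic provisioning a storageClassName would also be specified:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data-claim        # hypothetical claim name
spec:
  accessModes:
    - ReadWriteMany           # GlusterFS-backed file volumes support shared access
  resources:
    requests:
      storage: 10Gi           # a PV of at least this size is selected from the pool and bound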
Figure 2.1. Architecture - Converged Mode for OpenShift Container Platform
2.2. Independent mode
Note
- OpenShift Container Platform administrators might not want to manage storage. Independent mode separates storage management from container management.
- Leverage legacy storage (SAN, Arrays, Old filers): Customers often have storage arrays from traditional storage vendors that have either limited or no support for OpenShift. Independent mode allows users to leverage existing legacy storage for OpenShift Containers.
- Cost effective: In environments where the cost of new infrastructure is a challenge, existing storage arrays can be repurposed to back OpenShift in independent mode. Independent mode is perfect for such situations: you can run Red Hat Gluster Storage inside a VM and serve out LUNs or disks from these storage arrays to OpenShift, offering all the features that the OpenShift storage subsystem has to offer, including dynamic provisioning. This is a very useful solution for environments with potential infrastructure additions.
Chapter 3. Verify Prerequisites
3.1. Converged mode
3.1.1. Supported Versions
| OpenShift Container Platform | Red Hat Gluster Storage | Red Hat Openshift Container Storage / Openshift Container Storage |
|---|---|---|
| 3.11 | 3.4 | 3.10, 3.11 |
| 3.10 | 3.4 | 3.9, 3.10 |
| 3.7, 3.9, 3.10 | 3.3.1 | 3.9 |
3.1.2. Environment Requirements
3.1.2.1. Installing Red Hat Openshift Container Storage with OpenShift Container Platform on Red Hat Enterprise Linux 7
3.1.2.1.1. Setting up the Openshift Master as the Client
Set up the OpenShift master as the client to run oc commands across the cluster when installing OpenShift. Generally, this is set up as a non-schedulable node in the cluster. This is the default configuration when using the OpenShift installer. You can also choose to install the oc client on your local machine to access the cluster remotely. For more information, see https://access.redhat.com/documentation/en-us/openshift_container_platform/3.10/html/cli_reference/cli-reference-get-started-cli#installing-the-cli.
Execute the following commands to install the heketi-client package.
# subscription-manager repos --enable=rh-gluster-3-client-for-rhel-7-server-rpms
# yum install heketi-client
# subscription-manager repos --disable=rh-gluster-3-client-for-rhel-7-server-rpms
3.1.3. Red Hat OpenShift Container Platform and Red Hat Openshift Container Storage Requirements
- All OpenShift nodes on Red Hat Enterprise Linux systems must have glusterfs-client RPMs (glusterfs, glusterfs-client-xlators, glusterfs-libs, glusterfs-fuse) installed. You can verify if the RPMs are installed by running the following command:
# yum list glusterfs glusterfs-client-xlators glusterfs-libs glusterfs-fuse
For more information on installing native client packages, see https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html-single/administration_guide/#Installing_Native_Client
3.1.4. Red Hat Gluster Storage Requirements
- The system on which the Heketi packages are installed must have valid subscriptions to the Red Hat Gluster Storage Server repositories.
- Red Hat Gluster Storage installations must adhere to the requirements outlined in the Red Hat Gluster Storage Installation Guide.
- The versions of Red Hat OpenShift Container Platform and Red Hat Gluster Storage that are integrated must be compatible, according to Section 3.1.1, “Supported Versions”.
- A fully qualified domain name must be set for each Red Hat Gluster Storage server node. Ensure that the correct DNS records exist, and that the fully qualified domain name is resolvable via both forward and reverse DNS lookup.
- To access GlusterFS volumes, the mount.glusterfs command must be available on all schedulable nodes. For RPM-based systems, the glusterfs-fuse package must be installed:
# yum install glusterfs-fuse
This package comes installed on every RHEL system. However, it is recommended to update to the latest available version from Red Hat Gluster Storage. To do this, the following RPM repository must be enabled:
# subscription-manager repos --enable=rh-gluster-3-client-for-rhel-7-server-rpms
If glusterfs-fuse is already installed on the nodes, ensure that the latest version is installed:
# yum update glusterfs-fuse
Important
- After a snapshot is created, it must be accessed through the user-serviceable snapshots feature only. This can be used to copy the old versions of files into the required location. Reverting the volume to a snapshot state is not supported and should never be done as it might damage the consistency of the data.
- On a volume with snapshots, volume changing operations, such as volume expansion, must not be performed.
3.1.5. Deployment and Scaling Guidelines
- Sizing guidelines on converged mode or independent mode:
- Persistent volumes backed by the file interface: For typical operations, size for 300-500 persistent volumes backed by files per three-node converged mode or independent mode cluster. The maximum limit of supported persistent volumes backed by the file interface is 1000 persistent volumes per three-node cluster in a converged mode or independent mode deployment. Considering that micro-services can dynamically scale as per demand, it is recommended that the initial sizing keep sufficient headroom for the scaling. If additional scaling is needed, add a new three-node converged mode or independent mode cluster to support additional persistent volumes. Creation of more than 1,000 persistent volumes per trusted storage pool is not supported for file-based storage.
- Persistent volumes backed by block-based storage: Size for a maximum of 300 persistent volumes per three-node converged mode or independent mode cluster. Be aware that converged mode and independent mode support only OpenShift Container Platform logging and metrics on block-backed persistent volumes.
- Persistent volumes backed by file and block: Size for 300-500 persistent volumes (backed by files) and 100-200 persistent volumes (backed by block). Do not exceed these maximum limits for file-backed or block-backed persistent volumes, or the combined maximum of 1000 persistent volumes per three-node converged mode or independent mode cluster.
- 3-way distributed-replicated volumes and arbitrated volumes are the only supported volume types.
- Minimum Red Hat Openshift Container Storage cluster size (4): It is recommended to have a minimum of 4 nodes in the Red Hat Openshift Container Storage cluster to adequately meet high-availability requirements. Although 3 nodes are required to create a persistent volume claim, the failure of one node in a 3 node cluster prevents the persistent volume claim from being created. The fourth node provides high-availability and allows the persistent volume claim to be created even if a node fails.
- Each physical or virtual node that hosts a converged mode or independent mode peer requires the following:
- a minimum of 8 GB RAM and 30 MB per persistent volume.
- the same disk type.
- the heketidb utilises a 2 GB distributed replica volume.
- Deployment guidelines on converged mode or independent mode:
- In converged mode, you can install the Red Hat Openshift Container Storage nodes, Heketi, and all provisioner pods on OpenShift Container Platform Infrastructure nodes or OpenShift Container Platform Application nodes.
- In independent mode, you can install Heketi and all provisioner pods on OpenShift Container Platform Infrastructure nodes or on OpenShift Container Platform Application nodes.
- Red Hat Gluster Storage Container Native with OpenShift Container Platform supports up to 14 snapshots per volume by default (snap-max-hard-limit =14 in Heketi Template).
3.2. Independent mode
3.2.1. Supported Versions
| OpenShift Container Platform | Red Hat Gluster Storage | Red Hat Openshift Container Storage / Openshift Container Storage |
|---|---|---|
| 3.11 | 3.4 | 3.10, 3.11 |
| 3.10 | 3.4 | 3.9, 3.10 |
| 3.7, 3.9, 3.10 | 3.3.1 | 3.9 |
3.2.2. Environment Requirements
3.2.2.1. Installing Red Hat Openshift Container Storage with OpenShift Container Platform on Red Hat Enterprise Linux 7
3.2.2.1.1. Setting up the Openshift Master as the Client
Set up the OpenShift master as the client to run oc commands across the cluster when installing OpenShift. Generally, this is set up as a non-schedulable node in the cluster. This is the default configuration when using the OpenShift installer. You can also choose to install the oc client on your local machine to access the cluster remotely. For more information, see https://access.redhat.com/documentation/en-us/openshift_container_platform/3.10/html/cli_reference/cli-reference-get-started-cli#installing-the-cli.
Execute the following commands to install the heketi-client package.
# subscription-manager repos --enable=rh-gluster-3-client-for-rhel-7-server-rpms
# yum install heketi-client
# subscription-manager repos --disable=rh-gluster-3-client-for-rhel-7-server-rpms
3.2.3. Red Hat OpenShift Container Platform and Red Hat Openshift Container Storage Requirements
- All OpenShift nodes on Red Hat Enterprise Linux systems must have glusterfs-client RPMs (glusterfs, glusterfs-client-xlators, glusterfs-libs, glusterfs-fuse) installed. You can verify if the RPMs are installed by running the following command:
# yum list glusterfs glusterfs-client-xlators glusterfs-libs glusterfs-fuse
For more information on installing native client packages, see https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html-single/administration_guide/#Installing_Native_Client
3.2.4. Red Hat Gluster Storage Requirements
- The system on which the Heketi packages are installed must have valid subscriptions to the Red Hat Gluster Storage Server repositories.
- Red Hat Gluster Storage installations must adhere to the requirements outlined in the Red Hat Gluster Storage Installation Guide.
- The versions of Red Hat OpenShift Container Platform and Red Hat Gluster Storage that are integrated must be compatible, according to Section 3.1.1, “Supported Versions”.
- A fully qualified domain name must be set for each Red Hat Gluster Storage server node. Ensure that the correct DNS records exist and that the fully qualified domain name is resolvable via both forward and reverse DNS lookup.
- To access GlusterFS volumes, the mount.glusterfs command must be available on all schedulable nodes. For RPM-based systems, the glusterfs-fuse package must be installed:
# yum install glusterfs-fuse
This package comes installed on every RHEL system. However, it is recommended to update to the latest available version from Red Hat Gluster Storage. To do this, the following RPM repository must be enabled:
# subscription-manager repos --enable=rh-gluster-3-client-for-rhel-7-server-rpms
If glusterfs-fuse is already installed on the nodes, ensure that the latest version is installed:
# yum update glusterfs-fuse
Important
- After a snapshot is created, it must be accessed through the user-serviceable snapshots feature only. This can be used to copy the old versions of files into the required location. Reverting the volume to a snapshot state is not supported and should never be done as it might damage the consistency of the data.
- On a volume with snapshots, volume changing operations, such as volume expansion, must not be performed.
3.2.5. Deployment and Scaling Guidelines
- Sizing guidelines on Converged mode or Independent mode:
- Persistent volumes backed by the file interface: For typical operations, size for 300-500 persistent volumes backed by files per three-node converged mode or independent mode cluster. The maximum limit of supported persistent volumes backed by the file interface is 1000 persistent volumes per three-node cluster in a converged mode or independent mode deployment. Considering that micro-services can dynamically scale as per demand, it is recommended that the initial sizing keep sufficient headroom for the scaling. If additional scaling is needed, add a new three-node converged mode or independent mode cluster to support additional persistent volumes. Creation of more than 1,000 persistent volumes per trusted storage pool is not supported for file-based storage.
- Persistent volumes backed by block-based storage: Size for a maximum of 300 persistent volumes per three-node converged mode or independent mode cluster. Be aware that converged mode and independent mode support only OpenShift Container Platform logging and metrics on block-backed persistent volumes.
- Persistent volumes backed by file and block: Size for 300-500 persistent volumes (backed by files) and 100-200 persistent volumes (backed by block). Do not exceed these maximum limits for file-backed or block-backed persistent volumes, or the combined maximum of 1000 persistent volumes per three-node converged mode or independent mode cluster.
- 3-way distributed-replicated volumes and arbitrated volumes are the only supported volume types.
- Minimum Red Hat Openshift Container Storage cluster size (4): It is recommended to have a minimum of 4 nodes in the Red Hat Openshift Container Storage cluster to adequately meet high-availability requirements. Although 3 nodes are required to create a persistent volume claim, the failure of one node in a 3 node cluster prevents the persistent volume claim from being created. The fourth node provides high-availability and allows the persistent volume claim to be created even if a node fails.
- Each physical or virtual node that hosts a Red Hat Gluster Storage converged mode or independent mode peer requires the following:
- a minimum of 8 GB RAM and 30 MB per persistent volume.
- the same disk type.
- the heketidb utilises a 2 GB distributed replica volume.
- Deployment guidelines on converged mode or independent mode:
- In converged mode, you can install the Red Hat Openshift Container Storage nodes, Heketi, and all provisioner pods on OpenShift Container Platform Infrastructure nodes or OpenShift Container Platform Application nodes.
- In independent mode, you can install Heketi and all provisioner pods on OpenShift Container Platform Infrastructure nodes or on OpenShift Container Platform Application nodes.
- Red Hat Gluster Storage Container Native with OpenShift Container Platform supports up to 14 snapshots per volume by default (snap-max-hard-limit =14 in Heketi Template).
Part II. Deploy
Chapter 4. Deploying Containerized Storage in Converged Mode
Note
- Red Hat Openshift Container Storage does not support a simultaneous deployment of converged and independent mode with the Ansible workflow. Therefore, you must deploy either converged mode or independent mode: you cannot mix both modes during deployment.
- S3 is deployed manually and not through the Ansible installer. For more information on manual deployment, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.10/html-single/operations_guide/#S3_Object_Store
4.1. Specify Advanced Installer Variables
- glusterfs: A general storage cluster for use by user applications.
- glusterfs-registry: A dedicated storage cluster for use by infrastructure applications such as an integrated OpenShift Container Registry.
Each cluster is enabled by adding its name to the [OSEv3:children] group, creating a similarly named group, and then populating that group with the node information. The clusters can then be configured through a variety of variables in the [OSEv3:vars] group. glusterfs variables begin with openshift_storage_glusterfs_ and glusterfs-registry variables begin with openshift_storage_glusterfs_registry_. A few other variables, such as openshift_hosted_registry_storage_kind, interact with the GlusterFS clusters.
- openshift_storage_glusterfs_image
- openshift_storage_glusterfs_block_image
- openshift_storage_glusterfs_heketi_image
- openshift_storage_glusterfs_registry_image
- openshift_storage_glusterfs_registry_block_image
- openshift_storage_glusterfs_registry_heketi_image
openshift_storage_glusterfs_image=registry.access.redhat.com/rhgs3/rhgs-server-rhel7:v3.10
openshift_storage_glusterfs_block_image=registry.access.redhat.com/rhgs3/rhgs-gluster-block-prov-rhel7:v3.10
openshift_storage_glusterfs_heketi_image=registry.access.redhat.com/rhgs3/rhgs-volmanager-rhel7:v3.10
- The main playbook for cluster installations can be used to deploy the GlusterFS clusters in tandem with an initial installation of OpenShift Container Platform.
- This includes deploying an integrated OpenShift Container Registry that uses GlusterFS storage.
- /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml can be used to deploy the clusters onto an existing OpenShift Container Platform installation.
- /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/registry.yml can be used to deploy the clusters onto an existing OpenShift Container Platform installation. In addition, this will deploy an integrated OpenShift Container Registry which uses GlusterFS storage.
Important
There must not be a pre-existing registry in the OpenShift Container Platform cluster.
- playbooks/openshift-glusterfs/uninstall.yml can be used to remove existing clusters matching the configuration in the inventory hosts file. This is useful for cleaning up the Red Hat Openshift Container Storage environment in the case of a failed deployment due to configuration errors.
Note
The GlusterFS playbooks are not guaranteed to be idempotent. Running the playbooks more than once for a given installation is currently not supported without deleting the entire GlusterFS installation (including disk data) and starting over.
4.2. Deploying Red Hat Openshift Container Storage in Converged Mode
- In your inventory file, include the required variables in the [OSEv3:vars] section, adjusting them as needed for your configuration (a sketch of typical values is shown after this procedure).
- In your inventory file, add glusterfs in the [OSEv3:children] section to enable the [glusterfs] group:
[OSEv3:children]
masters
etcd
nodes
glusterfs
- Add a [glusterfs] section with entries for each storage node that will host the GlusterFS storage. For each node, set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs. Specifying the variable takes the form:
<hostname_or_ip> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'
For example:
[glusterfs]
node11.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node12.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node13.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node14.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
- Add the hosts listed under [glusterfs] to the [nodes] group.
- The preceding steps detail options that need to be added to a larger, complete inventory file. To use the complete inventory file to deploy GlusterFS, provide the file path as an option to the following playbooks:
- For an initial OpenShift Container Platform installation:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
- For a standalone installation onto an existing OpenShift Container Platform cluster:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml
- To verify the deployment, see Section 4.6, “Verify your Deployment”.
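As a rough, non-authoritative sketch, the [OSEv3:vars] entries for a converged mode deployment commonly set the GlusterFS namespace, storage class, and gluster-block options along the following lines; the variable names come from the openshift_storage_glusterfs Ansible role, and the values shown are assumptions to adjust for your environment:
[OSEv3:vars]
...
# general (application) storage cluster
openshift_storage_glusterfs_namespace=app-storage
openshift_storage_glusterfs_storageclass=true
openshift_storage_glusterfs_storageclass_default=false
openshift_storage_glusterfs_block_deploy=true
openshift_storage_glusterfs_block_host_vol_size=100
openshift_storage_glusterfs_block_storageclass=true
openshift_storage_glusterfs_block_storageclass_default=false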
4.3. Deploying Red Hat Openshift Container Storage in Converged Mode with Registry
- In your inventory file, include the required variables in the [OSEv3:vars] section, adjusting them as needed for your configuration.
- In your inventory file, set the registry storage variable under [OSEv3:vars] (a sketch of typical values for both steps is shown after this procedure).
- Add glusterfs_registry in the [OSEv3:children] section to enable the [glusterfs_registry] group:
[OSEv3:children]
masters
etcd
nodes
glusterfs_registry
- Add a [glusterfs_registry] section with entries for each storage node that will host the GlusterFS storage. For each node, set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs. Specifying the variable takes the form:
<hostname_or_ip> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'
For example:
[glusterfs_registry]
node11.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node12.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node13.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node14.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
- Add the hosts listed under [glusterfs_registry] to the [nodes] group.
- The preceding steps detail options that need to be added to a larger, complete inventory file. To use the complete inventory file to deploy GlusterFS, provide the file path as an option to the following playbooks:
- For an initial OpenShift Container Platform installation:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
- For a standalone installation onto an existing OpenShift Container Platform cluster:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml
- To verify the deployment, see Section 4.6, “Verify your Deployment”.
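As a rough, non-authoritative sketch, the [OSEv3:vars] entries for a registry-backed deployment typically combine the infrastructure storage cluster options with the registry storage settings along these lines; the variable names come from the openshift_storage_glusterfs role and the OpenShift registry settings, and the values are assumptions to adjust for your environment:
[OSEv3:vars]
...
# infrastructure (registry) storage cluster
openshift_storage_glusterfs_registry_namespace=infra-storage
openshift_storage_glusterfs_registry_storageclass=false
openshift_storage_glusterfs_registry_block_deploy=true
openshift_storage_glusterfs_registry_block_host_vol_size=100
openshift_storage_glusterfs_registry_block_storageclass=true
openshift_storage_glusterfs_registry_block_storageclass_default=false

# direct the integrated registry to GlusterFS-backed storage
openshift_hosted_registry_storage_kind=glusterfs
openshift_hosted_registry_storage_volume_size=10Gi
openshift_hosted_registry_selector="node-role.kubernetes.io/infra=true"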
4.4. Deploying Red Hat Openshift Container Storage in Converged Mode with Logging and Metrics
- In your inventory file, set the required variables under [OSEv3:vars] (a sketch of typical values is shown after this procedure).
Note
See the GlusterFS role README, https://github.com/openshift/openshift-ansible/tree/master/roles/openshift_storage_glusterfs, for details on these and other variables.
- Add glusterfs_registry in the [OSEv3:children] section to enable the [glusterfs_registry] group:
[OSEv3:children]
masters
etcd
nodes
glusterfs_registry
- Add a [glusterfs_registry] section with entries for each storage node that will host the GlusterFS storage. For each node, set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs. Specifying the variable takes the form:
<hostname_or_ip> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'
For example:
[glusterfs_registry]
node11.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node12.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node13.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node14.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
- Add the hosts listed under [glusterfs_registry] to the [nodes] group.
- The preceding steps detail options that need to be added to a larger, complete inventory file. To use the complete inventory file to deploy GlusterFS, provide the file path as an option to the following playbooks:
- For an initial OpenShift Container Platform installation:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
- For a standalone installation onto an existing OpenShift Container Platform cluster:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml
ansible-playbook -i <path_to_the_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml
ansible-playbook -i <path_to_the_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-metrics/config.yml
- To verify the deployment, see Section 4.6, “Verify your Deployment”.
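As a rough, non-authoritative sketch, the [OSEv3:vars] entries for logging and metrics on gluster-block typically point both components at a block storage class; the variable names come from the openshift-ansible metrics, logging, and openshift_storage_glusterfs roles, and the values are assumptions to adjust for your environment:
[OSEv3:vars]
...
# infrastructure storage cluster providing gluster-block volumes
openshift_storage_glusterfs_registry_namespace=infra-storage
openshift_storage_glusterfs_registry_block_deploy=true
openshift_storage_glusterfs_registry_block_host_vol_size=100
openshift_storage_glusterfs_registry_block_storageclass=true
openshift_storage_glusterfs_registry_block_storageclass_default=false

# metrics backed by the block storage class
openshift_metrics_install_metrics=true
openshift_metrics_storage_kind=dynamic
openshift_metrics_cassandra_pvc_storage_class_name="glusterfs-registry-block"

# logging backed by the block storage class
openshift_logging_install_logging=true
openshift_logging_es_pvc_dynamic=true
openshift_logging_es_pvc_size=10Gi
openshift_logging_es_pvc_storage_class_name="glusterfs-registry-block"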
4.5. Deploying Red Hat Openshift Container Storage in Converged mode for Applications with Registry, Logging, and Metrics
- In your inventory file, set the required variables under [OSEv3:vars], combining the application, registry, logging, and metrics settings shown in the previous sections.
- Add glusterfs and glusterfs_registry in the [OSEv3:children] section to enable the [glusterfs] and [glusterfs_registry] groups:
[OSEv3:children]
...
glusterfs
glusterfs_registry
- Add [glusterfs] and [glusterfs_registry] sections with entries for each storage node that will host the GlusterFS storage. For each node, set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs. Specifying the variable takes the form:
<hostname_or_ip> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'
For example, populate the [glusterfs] and [glusterfs_registry] sections as shown in the previous procedures.
- Add the hosts listed under [glusterfs] and [glusterfs_registry] to the [nodes] group.
- The preceding steps detail options that need to be added to a larger, complete inventory file. To use the complete inventory file to deploy GlusterFS, provide the file path as an option to the following playbooks:
- For an initial OpenShift Container Platform installation:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
- For a standalone installation onto an existing OpenShift Container Platform cluster:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml
ansible-playbook -i <path_to_the_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml
ansible-playbook -i <path_to_the_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-metrics/config.yml
- To verify the deployment, see Section 4.6, “Verify your Deployment”.
4.6. Verify your Deployment
- Installation Verification for converged mode
- Examine the installation for the app-storage namespace by running the following commands. This can be done from an OCP master node or the ansible deploy host that has the OC CLI installed (a sketch of typical commands is shown after this list).
- Examine the installation for the infra-storage namespace by running the following commands. This can be done from an OCP master node or the ansible deploy host that has the OC CLI installed.
- Check the existence of the registry PVC backed by OCP infrastructure Red Hat Openshift Container Storage. This volume was statically provisioned by the openshift-ansible deployment.
oc get pvc -n default
NAME             STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
registry-claim   Bound     pvc-7ca4c8de-10ca-11e8-84d3-069df2c4f284   25Gi       RWX                          1h
Check the registry DeploymentConfig to verify it's using this glusterfs volume.
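The exact verification commands depend on your environment; as a minimal sketch using standard oc commands, the following checks list the pods and Heketi routes in the app-storage and infra-storage namespaces created by this guide, and show whether the registry DeploymentConfig mounts the GlusterFS-backed volume:
# oc get pods -o wide -n app-storage
# oc get route -n app-storage
# oc get pods -o wide -n infra-storage
# oc get route -n infra-storage
# oc describe dc/docker-registry -n default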
- Storage Provisioning Verification for Converged Mode
- The Storage Class resources can be used to create new PV claims for verification of the RHOCS deployment. Validate PV provisioning using the following OCP Storage Classes created during the RHOCS deployment:
- Use the glusterfs-storage-block OCP Storage Class resource to create new PV claims if you deployed RHOCS using Section 4.2, “Deploying Red Hat Openshift Container Storage in Converged Mode”.
- Use the glusterfs-registry-block OCP Storage Class resource to create new PV claims if you deployed RHOCS using one of the workflows that includes the registry (Section 4.3, Section 4.4, or Section 4.5).
Create one file-backed and one block-backed claim from PVC definition files (example definitions are sketched after this list):
# oc create -f pvc-file.yaml
# oc create -f pvc-block.yaml
Validate that the two PVCs and respective PVs are created correctly:
# oc get pvc
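The contents of pvc-file.yaml and pvc-block.yaml are not fixed; a minimal sketch, assuming the glusterfs-storage and glusterfs-storage-block storage class names from a Section 4.2 deployment (substitute glusterfs-registry-block for the registry workflows) and hypothetical claim names, looks like this:
# pvc-file.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rhocs-file-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: glusterfs-storage
---
# pvc-block.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rhocs-block-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: glusterfs-storage-block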
- Using the heketi-client for Verification
- The heketi-client package needs to be installed on the ansible deploy host or on an OCP master. Once it is installed, two new files should be created to easily export the required environment variables to run the heketi-client commands (or heketi-cli). The content of each file, as well as useful heketi-cli commands, is sketched below.
Create a new file (e.g. "heketi-exports-app") and source it to create the HEKETI app-storage environment variables. Then create a new file (e.g. "heketi-exports-infra") and source it to create the HEKETI infra-storage environment variables.
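A minimal sketch of the two export files and a couple of verification commands follows; the pod labels, route names, and secret lookup are assumptions based on the default object names created by the openshift-ansible GlusterFS playbooks, so verify them against your cluster before sourcing the files:
# heketi-exports-app: environment for the app-storage Heketi
export HEKETI_POD=$(oc get pods -l glusterfs=heketi-storage-pod -n app-storage -o jsonpath="{.items[0].metadata.name}")
export HEKETI_CLI_SERVER=http://$(oc get route/heketi-storage -n app-storage -o jsonpath="{.spec.host}")
export HEKETI_CLI_USER=admin
export HEKETI_CLI_KEY=$(oc get pod/$HEKETI_POD -n app-storage -o jsonpath='{.spec.containers[0].env[?(@.name=="HEKETI_ADMIN_KEY")].value}')

# heketi-exports-infra: environment for the infra-storage Heketi
export HEKETI_POD=$(oc get pods -l glusterfs=heketi-registry-pod -n infra-storage -o jsonpath="{.items[0].metadata.name}")
export HEKETI_CLI_SERVER=http://$(oc get route/heketi-registry -n infra-storage -o jsonpath="{.spec.host}")
export HEKETI_CLI_USER=admin
export HEKETI_CLI_KEY=$(oc get pod/$HEKETI_POD -n infra-storage -o jsonpath='{.spec.containers[0].env[?(@.name=="HEKETI_ADMIN_KEY")].value}')

# after sourcing one of the files, list the Heketi clusters and volumes
heketi-cli cluster list
heketi-cli volume list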
4.7. Creating an Arbiter Volume (optional)
- Better consistency: When an arbiter is configured, arbitration logic uses client-side quorum in auto mode to prevent file operations that would lead to split-brain conditions.
- Less disk space required: Because an arbiter brick only stores file names and metadata, an arbiter brick can be much smaller than the other bricks in the volume.
If the heketi-client package is not already installed, enable the Red Hat Gluster Storage repository and install it:
# subscription-manager repos --enable=rh-gluster-3-for-rhel-7-server-rpms
# yum install heketi-client
4.7.1. Creating an Arbiter Volume
4.7.1.1. Creating an Arbiter Volume using Heketi CLI
# heketi-cli volume create --size=4 --gluster-volume-options='user.heketi.arbiter true'
4.7.1.2. Creating an Arbiter Volume using the Storageclass file
- user.heketi.arbiter true
- (Optional) user.heketi.average-file-size 1024
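A hedged sketch of a StorageClass that passes these options through the volumeoptions parameter follows; the resturl, secret names, and namespace are placeholders to replace with your Heketi route and credentials:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-arbiter
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-storage-app-storage.example.com"   # placeholder Heketi route
  restuser: "admin"
  secretNamespace: "app-storage"
  secretName: "heketi-storage-admin-secret"                  # placeholder secret name
  volumeoptions: "user.heketi.arbiter true, user.heketi.average-file-size 1024"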
Chapter 5. Deploy Containerized Storage in Independent Mode
| Deployment workflow | Registry | Metrics | Logging | Applications |
|---|---|---|---|---|
| Section 5.3, “Deploying Red Hat Openshift Container Storage in Independent Mode” | ✔ | |||
| Section 5.4, “Deploying Red Hat Openshift Container Storage in Independent mode for Applications with Registry, Logging, and Metrics” | ✔ | ✔ | ✔ | ✔ |
Note
- Red Hat Openshift Container Storage does not support a simultaneous deployment of converged and independent mode with the Ansible workflow. Therefore, you must deploy either converged mode or independent mode: you cannot mix both modes during deployment.
- S3 is deployed manually and not through the Ansible installer. For more information on manual deployment, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.10/html-single/operations_guide/#S3_Object_Store
5.1. Setting up a RHGS Cluster
5.1.1. Installing Red Hat Gluster Storage Server on Red Hat Enterprise Linux (Layered Install)
Important
Ensure that there is a /var partition that is large enough (50 GB - 100 GB) for log files, geo-replication related miscellaneous files, and other files.
Perform a base install of Red Hat Enterprise Linux 7 Server
Independent mode is supported only on Red Hat Enterprise Linux 7.
Register the System with Subscription Manager
Run the following command and enter your Red Hat Network username and password to register the system with the Red Hat Network:
# subscription-manager register
Identify Available Entitlement Pools
Run the following command to find entitlement pools containing the repositories required to install Red Hat Gluster Storage:
# subscription-manager list --available
Attach Entitlement Pools to the System
Use the pool identifiers located in the previous step to attach the Red Hat Enterprise Linux Server and Red Hat Gluster Storage entitlements to the system. Run the following command to attach the entitlements:
# subscription-manager attach --pool=[POOLID]
For example:
# subscription-manager attach --pool=8a85f9814999f69101499c05aa706e47
Enable the Required Channels
For Red Hat Gluster Storage 3.4 on Red Hat Enterprise Linux 7.x, run the following commands to enable the repositories required to install Red Hat Gluster Storage:
# subscription-manager repos --enable=rhel-7-server-rpms
# subscription-manager repos --enable=rh-gluster-3-for-rhel-7-server-rpms
# subscription-manager repos --enable=rhel-7-server-extras-rpms
Verify if the Channels are Enabled
Run the following command to verify if the channels are enabled:
# yum repolist
Update all packages
Ensure that all packages are up to date by running the following command:
# yum update
Important
If any kernel packages are updated, reboot the system with the following command:
# shutdown -r now
Kernel Version Requirement
Independent mode requires the kernel-3.10.0-690.el7 version or higher to be used on the system. Verify the installed and running kernel versions by running the following commands:
# rpm -q kernel
kernel-3.10.0-862.11.6.el7.x86_64
# uname -r
3.10.0-862.11.6.el7.x86_64
Install Red Hat Gluster Storage
Run the following command to install Red Hat Gluster Storage:
# yum install redhat-storage-server
- To enable gluster-block execute the following command:
# yum install gluster-block
Reboot
Reboot the system.
5.1.2. Configuring Port Access
Execute the following commands on each of the Red Hat Gluster Storage nodes to open the required ports:
# firewall-cmd --zone=zone_name --add-port=24010/tcp --add-port=3260/tcp --add-port=111/tcp --add-port=22/tcp --add-port=24007/tcp --add-port=24008/tcp --add-port=49152-49664/tcp
# firewall-cmd --zone=zone_name --add-port=24010/tcp --add-port=3260/tcp --add-port=111/tcp --add-port=22/tcp --add-port=24007/tcp --add-port=24008/tcp --add-port=49152-49664/tcp --permanent
Note
- Ports 24010 and 3260 are for gluster-blockd and iSCSI targets, respectively.
- The port range starting at 49152 defines the range of ports that can be used by GlusterFS for communication to its volume bricks. In the above example, the total number of bricks allowed is 512. Configure the port range based on the maximum number of bricks that could be hosted on each node.
5.1.3. Enabling Kernel Modules
- You must ensure that the dm_thin_pool and target_core_user modules are loaded in the Red Hat Gluster Storage nodes.
# modprobe target_core_user
# modprobe dm_thin_pool
Execute the following command to verify if the modules are loaded:
# lsmod | grep dm_thin_pool
# lsmod | grep target_core_user
Note
To ensure these operations are persisted across reboots, create the following files and update each file with the content as mentioned:
# cat /etc/modules-load.d/dm_thin_pool.conf
dm_thin_pool
# cat /etc/modules-load.d/target_core_user.conf
target_core_user
- You must ensure that the dm_multipath module is loaded on all OpenShift Container Platform nodes.
# modprobe dm_multipath
Execute the following command to verify if the module is loaded:
# lsmod | grep dm_multipath
Note
To ensure these operations are persisted across reboots, create the following file and update it with the content as mentioned:
# cat /etc/modules-load.d/dm_multipath.conf
dm_multipath
5.1.4. Starting and Enabling Services
Execute the following commands to start and enable the required services:
# systemctl start sshd
# systemctl enable sshd
# systemctl start glusterd
# systemctl enable glusterd
# systemctl start gluster-blockd
# systemctl enable gluster-blockd
5.2. Specify Advanced Installer Variables
- glusterfs: A general storage cluster for use by user applications.
- glusterfs-registry: A dedicated storage cluster for use by infrastructure applications such as an integrated OpenShift Container Registry.
Each cluster is enabled by adding its name to the [OSEv3:children] group, creating a similarly named group, and then populating that group with the node information. The clusters can then be configured through a variety of variables in the [OSEv3:vars] group. glusterfs variables begin with openshift_storage_glusterfs_ and glusterfs-registry variables begin with openshift_storage_glusterfs_registry_. A few other variables, such as openshift_hosted_registry_storage_kind, interact with the GlusterFS clusters.
- openshift_storage_glusterfs_version
- openshift_storage_glusterfs_block_version
- openshift_storage_glusterfs_heketi_version
- openshift_storage_glusterfs_registry_version
- openshift_storage_glusterfs_registry_block_version
- openshift_storage_glusterfs_registry_heketi_version
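For example, these version variables might be pinned as follows; this is a sketch only, and the tag value shown is a placeholder to match the release you are deploying:
openshift_storage_glusterfs_version=v3.10
openshift_storage_glusterfs_block_version=v3.10
openshift_storage_glusterfs_heketi_version=v3.10
openshift_storage_glusterfs_registry_version=v3.10
openshift_storage_glusterfs_registry_block_version=v3.10
openshift_storage_glusterfs_registry_heketi_version=v3.10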
Note
openshift_storage_glusterfs_image=registry.access.redhat.com/rhgs3/rhgs-server-rhel7:v3.10
openshift_storage_glusterfs_block_image=registry.access.redhat.com/rhgs3/rhgs-gluster-block-prov-rhel7:v3.10
openshift_storage_glusterfs_heketi_image=registry.access.redhat.com/rhgs3/rhgs-volmanager-rhel7:v3.10
- The main playbook for cluster installations can be used to deploy the GlusterFS clusters in tandem with an initial installation of OpenShift Container Platform.
- This includes deploying an integrated OpenShift Container Registry that uses GlusterFS storage.
- /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml can be used to deploy the clusters onto an existing OpenShift Container Platform installation.
- /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/registry.yml can be used to deploy the clusters onto an existing OpenShift Container Platform installation. In addition, this will deploy an integrated OpenShift Container Registry which uses GlusterFS storage.
Important
There must not be a pre-existing registry in the OpenShift Container Platform cluster.
- playbooks/openshift-glusterfs/uninstall.yml can be used to remove existing clusters matching the configuration in the inventory hosts file. This is useful for cleaning up the Red Hat OpenShift Container Storage environment in the case of a failed deployment due to configuration errors.
Note
The GlusterFS playbooks are not guaranteed to be idempotent. Running the playbooks more than once for a given installation is not supported without deleting the entire GlusterFS installation (including disk data) and starting over.
5.3. Deploying Red Hat Openshift Container Storage in Independent Mode
- In your inventory file, add glusterfs in the [OSEv3:children] section to enable the [glusterfs] group:
[OSEv3:children]
masters
etcd
nodes
glusterfs
- Include the required variables in the [OSEv3:vars] section, adjusting them as needed for your configuration (a sketch of typical independent mode values is shown after this procedure).
- Add a [glusterfs] section with entries for each storage node that will host the GlusterFS storage. For each node, set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs. Also, set glusterfs_ip to the IP address of the node. Specifying the variable takes the form:
<hostname_or_ip> glusterfs_ip=<ip_address> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'
For example:
[glusterfs]
gluster1.example.com glusterfs_ip=192.168.10.11 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
gluster2.example.com glusterfs_ip=192.168.10.12 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
gluster3.example.com glusterfs_ip=192.168.10.13 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
- The preceding steps detail options that need to be added to a larger, complete inventory file. To use the complete inventory file to deploy GlusterFS, provide the file path as an option to the following playbooks:
- For an initial OpenShift Container Platform installation:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
- For a standalone installation onto an existing OpenShift Container Platform cluster:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml
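As a rough, non-authoritative sketch, an independent mode [OSEv3:vars] block typically marks the GlusterFS cluster as external and tells Heketi how to reach the storage nodes over SSH; the variable names come from the openshift_storage_glusterfs role, and the values are assumptions to adjust for your environment:
[OSEv3:vars]
...
openshift_storage_glusterfs_namespace=app-storage
openshift_storage_glusterfs_storageclass=true
openshift_storage_glusterfs_block_deploy=true
openshift_storage_glusterfs_block_host_vol_size=100
openshift_storage_glusterfs_block_storageclass=true
# the GlusterFS cluster runs outside OpenShift (independent mode)
openshift_storage_glusterfs_is_native=false
openshift_storage_glusterfs_heketi_is_native=true
openshift_storage_glusterfs_heketi_executor=ssh
openshift_storage_glusterfs_heketi_ssh_port=22
openshift_storage_glusterfs_heketi_ssh_user=root
openshift_storage_glusterfs_heketi_ssh_sudo=false
openshift_storage_glusterfs_heketi_ssh_keyfile="/root/.ssh/id_rsa"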
5.4. Deploying Red Hat Openshift Container Storage in Independent mode for Applications with Registry, Logging, and Metrics
- In your inventory file, set the required variables under [OSEv3:vars], combining the independent mode settings from Section 5.3 with the registry, logging, and metrics settings shown in the converged mode procedures.
- Add glusterfs and glusterfs_registry in the [OSEv3:children] section to enable the [glusterfs] and [glusterfs_registry] groups:
[OSEv3:children]
...
glusterfs
glusterfs_registry
- Add [glusterfs] and [glusterfs_registry] sections with entries for each storage node that will host the GlusterFS storage. For each node, set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs. Also, set glusterfs_ip to the IP address of the node. Specifying the variable takes the form:
<hostname_or_ip> glusterfs_ip=<ip_address> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'
For example, populate the [glusterfs] and [glusterfs_registry] sections with entries like those shown in Section 5.3.
- The preceding steps detail options that need to be added to a larger, complete inventory file. To use the complete inventory file to deploy GlusterFS, provide the file path as an option to the following playbooks:
- For an initial OpenShift Container Platform installation:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
- For a standalone installation onto an existing OpenShift Container Platform cluster:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml
- To verify the deployment, see Section 5.5, “Verify your Deployment”.
5.5. Verify your Deployment
- Installation Verification for Independent mode
- Examine the installation for the app-storage namespace by running the following commands:
- Examine the installation for the infra-storage namespace by running the following commands. This can be done from an OCP master node or the ansible deploy host that has the OC CLI installed.
- Check the existence of the registry PVC backed by OCP infrastructure Red Hat Openshift Container Storage. This volume was statically provisioned by the openshift-ansible deployment.
oc get pvc -n default
NAME             STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
registry-claim   Bound     pvc-7ca4c8de-10ca-11e8-84d3-069df2c4f284   25Gi       RWX                          1h
Check the registry DeploymentConfig to verify it's using this glusterfs volume.
- Storage Provisioning Verification for Independent Mode
- Validate PV provisioning using the glusterfs and glusterblock OCP Storage Classes created during the OCP deployment. The two Storage Class resources, glusterfs-storage and glusterfs-storage-block, can be used to create new PV claims for verification of the Red Hat Openshift Container Storage deployment. A new PVC using the glusterfs-storage storage class will use storage available to the gluster pods in the app-storage project.
# oc create -f pvc-file.yaml
# oc create -f pvc-block.yaml
Validate that the two PVCs and respective PVs are created correctly:
# oc get pvc
- Using the heketi-client for Verification
- The heketi-client package needs to be installed on the ansible deploy host or on an OCP master. Once it is installed, two new files should be created to easily export the required environment variables to run the heketi-client commands (or heketi-cli). The content of each file as well as useful heketi-cli commands are detailed here. Create a new file (e.g. "heketi-exports-app") with the following contents:
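For example (a minimal sketch; it assumes the default heketi-storage route and admin secret names in the app-storage project, which may differ in your deployment):
export HEKETI_CLI_SERVER=http://$(oc get route heketi-storage -n app-storage -o jsonpath='{.spec.host}')
export HEKETI_CLI_USER=admin
export HEKETI_CLI_KEY=$(oc get secret heketi-storage-admin-secret -n app-storage -o jsonpath='{.data.key}' | base64 -d)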
Source the file to create the HEKETI app-storage environment variables:
Create a new file (e.g. "heketi-exports-infra") with the following contents:
Source the file to create the HEKETI infra-storage environment variables:
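A matching sketch for the infra-storage file (the heketi-registry route and secret names are assumptions and may differ in your deployment):
export HEKETI_CLI_SERVER=http://$(oc get route heketi-registry -n infra-storage -o jsonpath='{.spec.host}')
export HEKETI_CLI_USER=admin
export HEKETI_CLI_KEY=$(oc get secret heketi-registry-admin-secret -n infra-storage -o jsonpath='{.data.key}' | base64 -d)
Source whichever file matches the project you want to inspect, and then run heketi-cli commands such as:
# source heketi-exports-app
# heketi-cli cluster list
# heketi-cli topology info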
5.6. Creating an Arbiter Volume (optional)
- Better consistency: When an arbiter is configured, arbitration logic uses client-side quorum in auto mode to prevent file operations that would lead to split-brain conditions.
- Less disk space required: Because an arbiter brick only stores file names and metadata, an arbiter brick can be much smaller than the other bricks in the volume.
# subscription-manager repos --enable=rh-gluster-3-for-rhel-7-server-rpms
# yum install heketi-client
5.6.1. Creating an Arbiter Volume
5.6.1.1. Creating an Arbiter Volume using Heketi CLI
# heketi-cli volume create --size=4 --gluster-volume-options='user.heketi.arbiter true'
5.6.1.2. Creating an Arbiter Volume using the Storageclass file
- user.heketi.arbiter true
- (Optional) user.heketi.average-file-size 1024
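For example, a StorageClass that passes these options through the volumeoptions parameter might look like the following sketch (the class name, resturl, and secret values are illustrative assumptions; user.heketi.average-file-size can be added to volumeoptions in the same way):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-arbiter        # hypothetical class name
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-storage.app-storage.svc:8080"   # illustrative heketi URL
  restuser: "admin"
  secretNamespace: "app-storage"
  secretName: "heketi-storage-admin-secret"
  volumeoptions: "user.heketi.arbiter true"
PVCs created from such a class are provisioned as arbiter volumes.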
Part III. Upgrade
Chapter 6. Upgrading your Red Hat Openshift Container Storage in Converged Mode
6.1. Upgrading the Glusterfs Pods
6.1.1. Prerequisites
- Ensure that you have the supported versions of OpenShift Container Platform with Red Hat Gluster Storage Server and Red Hat Openshift Container Storage. For more information on supported versions, see Section 3.1.1, “Supported Versions”.
- Run the following command to retrieve the current configuration details before starting the upgrade:
# oc get all
- Run the following command to get the latest versions of the Ansible templates:
# yum update openshift-ansible
Note
- gluster template - /usr/share/heketi/templates/glusterfs-template.yaml
- heketi template - /usr/share/heketi/templates/heketi-template.yaml
- glusterblock-provisioner template - /usr/share/heketi/templates/glusterblock-provisioner.yaml
6.1.2. Restoring original label values for /dev/log
- Create a directory and soft links on all nodes that run gluster pods:
# mkdir /srv/<directory_name>
# cd /srv/<directory_name>/    # same dir as above
# ln -sf /dev/null systemd-tmpfiles-setup-dev.service
# ln -sf /dev/null systemd-journald.service
# ln -sf /dev/null systemd-journald.socket
- Edit the daemonset that creates the glusterfs pods on the node which has the oc client:
# oc edit daemonset <daemonset_name>
Under the volumeMounts section, add a mapping for the volume:
Under the volumes section, add a new host path for each service listed:
Note
The path mentioned here should be the same as mentioned in Step 1.
- Run the following command on all nodes that run gluster pods. This will reset the label:
# restorecon /dev/log
- Execute the following command to check the status of self-heal for all volumes:
# oc rsh <gluster_pod_name>
# for each_volume in `gluster volume list`; do gluster volume heal $each_volume info ; done | grep "Number of entries: [^0]$"
Wait for self-heal to complete.
- Execute the following command and ensure that the bricks are not more than 90% full:
# df -kh | grep -v ^Filesystem | awk '{if($5>"90%") print $0}'
- Execute the following command on any one of the gluster pods to set the maximum number of bricks (250) that can run on a single instance of the glusterfsd process:
# gluster volume set all cluster.max-bricks-per-process 250
- Execute the following command on any one of the gluster pods to ensure that the option is set correctly:
# gluster volume get all cluster.max-bricks-per-process
For example:
# gluster volume get all cluster.max-bricks-per-process
cluster.max-bricks-per-process 250
- Execute the following command on the node which has the oc client to delete the gluster pod:
# oc delete pod <gluster_pod_name>
- To verify if the pod is ready, execute the following command:
# oc get pods -l glusterfs=storage-pod
- Log in to the node hosting the pod and check the selinux label of /dev/log:
# ls -lZ /dev/log
The output should show the devlog_t label.
For example:
# ls -lZ /dev/log
srw-rw-rw-. root root system_u:object_r:devlog_t:s0 /dev/log
Exit the node.
- In the gluster pod, check if the label value is devlog_t:
# oc rsh <gluster_pod_name>
# ls -lZ /dev/log
For example:
# ls -lZ /dev/log
srw-rw-rw-. root root system_u:object_r:devlog_t:s0 /dev/log
- Perform steps 4 to 9 for the other pods.
6.1.3. Upgrading if existing version deployed by using cns-deploy
6.1.3.1. Upgrading cns-deploy and Heketi Server
- Execute the following command to update the heketi client and cns-deploy packages:
# yum update cns-deploy -y
# yum update heketi-client -y
- Back up the Heketi database file:
# oc rsh <heketi_pod_name>
# cp -a /var/lib/heketi/heketi.db /var/lib/heketi/heketi.db.`date +%s`.`heketi --version | awk '{print $2}'`
# exit
- Execute the following command to delete the heketi template:
# oc delete templates heketi
- Execute the following command to get the current HEKETI_ADMIN_KEY. The OCS admin can choose to set any phrase for the user key as long as it is not used by their infrastructure. It is not used by any of the OCS default installed resources.
# oc get secret heketi-storage-admin-secret -o jsonpath='{.data.key}'|base64 -d;echo
- Execute the following command to install the heketi template:
# oc create -f /usr/share/heketi/templates/heketi-template.yaml
template "heketi" created
- Execute the following command to grant the heketi Service Account the necessary privileges:
# oc policy add-role-to-user edit system:serviceaccount:<project_name>:heketi-service-account
# oc adm policy add-scc-to-user privileged -z heketi-service-account
For example,
# oc policy add-role-to-user edit system:serviceaccount:storage-project:heketi-service-account
# oc adm policy add-scc-to-user privileged -z heketi-service-account
- Execute the following command to generate a new heketi configuration file:
# sed -e "s/\${HEKETI_EXECUTOR}/kubernetes/" -e "s#\${HEKETI_FSTAB}#/var/lib/heketi/fstab#" -e "s/\${SSH_PORT}/22/" -e "s/\${SSH_USER}/root/" -e "s/\${SSH_SUDO}/false/" -e "s/\${BLOCK_HOST_CREATE}/true/" -e "s/\${BLOCK_HOST_SIZE}/500/" "/usr/share/heketi/templates/heketi.json.template" > heketi.json
- The BLOCK_HOST_SIZE parameter controls the size (in GB) of the automatically created Red Hat Gluster Storage volumes hosting the gluster-block volumes (for more information, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html/operations_guide/block_storage). This default configuration will dynamically create block-hosting volumes of 500GB in size as more space is required.
- Alternatively, copy the file /usr/share/heketi/templates/heketi.json.template to heketi.json in the current directory and edit the new file directly, replacing each "${VARIABLE}" string with the required parameter.
Note
JSON formatting is strictly required (e.g. no trailing spaces, booleans in all lowercase).
Note
If the heketi-config-secret file already exists, then delete the file and run the following command.
Execute the following command to create a secret to hold the configuration file:
# oc create secret generic heketi-config-secret --from-file=heketi.json
- Execute the following command to delete the deployment configuration, service, and route for heketi:
Note
The names of these parameters can be referenced from the output of the following command:
# oc get all | grep heketi
# oc delete deploymentconfig,service,route heketi
- Execute the following command to edit the heketi template. Edit the HEKETI_USER_KEY and HEKETI_ADMIN_KEY parameters.
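A minimal sketch of this edit (it assumes the template is named heketi, and the key values shown are placeholders you must replace with your own):
# oc edit template heketi
parameters:
- name: HEKETI_USER_KEY
  value: <heketiuserkey>        # placeholder - set your own value
- name: HEKETI_ADMIN_KEY
  value: <adminkey>             # placeholder - set your own value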
- Execute the following command to deploy the Heketi service, route, and deployment configuration which will be used to create persistent volumes for OpenShift:
# oc process heketi | oc create -f -
service "heketi" created
route "heketi" created
deploymentconfig "heketi" created
- Execute the following command to verify that the containers are running:
# oc get pods
6.1.3.2. Upgrading the Red Hat Gluster Storage Pods
- Execute the following steps to stop the Heketi pod to prevent it from accepting any new request for volume creation or volume deletion:
- Execute the following command to access your project:
# oc project <project_name>
For example:
# oc project storage-project
- Execute the following command to get the DeploymentConfig:
# oc get dc
- Execute the following command to set the heketi server to accept requests only from the local-client:
# heketi-cli server mode set local-client
- Wait for the ongoing operations to complete and execute the following command to monitor if there are any ongoing operations:
# heketi-cli server operations info
- Execute the following command to reduce the replica count from 1 to 0. This brings down the Heketi pod:
# oc scale dc <heketi_dc> --replicas=0
- Execute the following command to verify that the heketi pod is no longer present:
# oc get pods
- Execute the following command to find the DaemonSet name for gluster:
# oc get ds
- Execute the following command to delete the DaemonSet:
# oc delete ds <ds-name> --cascade=false
Using the --cascade=false option while deleting the old DaemonSet does not delete the gluster pods but deletes only the DaemonSet. After deleting the old DaemonSet, you must load the new one. When you manually delete the old pods, the new pods which are created will have the configurations of the new DaemonSet.
For example,
# oc delete ds glusterfs --cascade=false
daemonset "glusterfs" deleted
- Execute the following commands to verify all the old pods are up:
# oc get pods
- Execute the following command to delete the old glusterfs template:
# oc delete templates glusterfs
For example,
# oc delete templates glusterfs
template "glusterfs" deleted
- Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
- Check if the nodes are labelled using the following command:
# oc get nodes --show-labels
If the Red Hat Gluster Storage nodes do not have the storagenode=glusterfs label, then label the nodes as shown in step ii.
- Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
# oc label nodes <node name> storagenode=glusterfs
- Execute the following command to register the new gluster template:
# oc create -f /usr/share/heketi/templates/glusterfs-template.yaml
For example,
# oc create -f /usr/share/heketi/templates/glusterfs-template.yaml
template "glusterfs" created
- Execute the following commands to create the gluster DaemonSet:
# oc process glusterfs | oc create -f -
For example,
# oc process glusterfs | oc create -f -
daemonset "glusterfs" created
- Execute the following command to identify the old gluster pods that need to be deleted:
# oc get pods
- Execute the following command and ensure that the bricks are not more than 90% full:
# df -kh | grep -v ^Filesystem | awk '{if($5>"90%") print $0}'
- Execute the following command to delete the old gluster pods.
Gluster pods should follow a rolling upgrade. Hence, you must ensure that the new pod is running before deleting the next old gluster pod. We support the OnDelete DaemonSet update strategy. With the OnDelete update strategy, after you update a DaemonSet template, new DaemonSet pods will only be created when you manually delete old DaemonSet pods.
- To delete the old gluster pods, execute the following command:
# oc delete pod <gluster_pod>
For example,
# oc delete pod glusterfs-0vcf3
pod "glusterfs-0vcf3" deleted
Note
Before deleting the next pod, a self-heal check has to be made:
- Run the following command to access a shell on the gluster pod:
# oc rsh <gluster_pod_name>
- Run the following command to check the self-heal status of all the volumes:
# for each_volume in `gluster volume list`; do gluster volume heal $each_volume info ; done | grep "Number of entries: [^0]$"
- The delete pod command will terminate the old pod and create a new pod. Run # oc get pods -w and check the Age of the pod; the READY status should be 1/1. The following is example output showing the status progression from termination to creation of the pod:
# oc get pods -w
NAME              READY   STATUS        RESTARTS   AGE
glusterfs-0vcf3   1/1     Terminating   0          3d
…
- Execute the following command to verify that the pods are running:
# oc get pods
- Execute the following command to verify if you have upgraded the pod to the latest version:
# oc rsh <gluster_pod_name> glusterd --version
For example:
# oc rsh glusterfs-registry-4cpcc glusterd --version
glusterfs 3.12.2
- Check the Red Hat Gluster Storage op-version by executing the following command on one of the gluster pods:
# gluster vol get all cluster.op-version
- Set the cluster.op-version to 31302 on any one of the pods:
Note
Ensure all the gluster pods are updated before changing the cluster.op-version.
# gluster --timeout=3600 volume set all cluster.op-version 31302
- Execute the following steps to enable server.tcp-user-timeout on all volumes.
Note
The "server.tcp-user-timeout" option specifies the maximum amount of time (in seconds) that transmitted data from the application can remain unacknowledged by the brick. It is used to detect forced disconnections and dead connections (for example, if a node dies unexpectedly or a firewall is activated) early, making it possible for applications to reduce the overall failover time.
- List the glusterfs pods using the following command:
# oc get pods
- Remote shell into one of the glusterfs pods. For example:
# oc rsh glusterfs-0vcf3
- Execute the following command:
# for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
For example:
# for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
volume1
volume set: success
volume2
volume set: success
- If a gluster-block-provisioner pod already exists, then delete it by executing the following commands:
# oc delete dc <gluster-block-dc>
For example:
# oc delete dc glusterblock-storage-provisioner-dc
- Execute the following commands to deploy the gluster-block provisioner:
# sed -e 's/\\\${NAMESPACE}/<NAMESPACE>/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
# oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:<NAMESPACE>:glusterblock-provisioner
For example:
# sed -e 's/\\\${NAMESPACE}/storage-project/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
# oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:storage-project:glusterblock-provisioner
- Delete the following resources from the old pod:
# oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
# oc delete serviceaccounts glusterblock-storage-provisioner
- After editing the template, execute the following command to create the deployment configuration:
# oc process <gluster_block_provisioner_template> | oc create -f -
- Brick multiplexing is a feature that allows adding multiple bricks into one process. This reduces resource consumption and allows us to run more bricks than before with the same memory consumption. It is enabled by default from Container-Native Storage 3.6. During an upgrade from Container-Native Storage 3.9 to Red Hat Openshift Container Storage 3.10, to turn brick multiplexing on, execute the following commands:
- To exec into the Gluster pod, execute the following command and rsh into any of the gluster pods:
# oc rsh <gluster_pod_name>
- Verify if brick multiplexing is enabled. If it is disabled, then execute the following command to enable brick multiplexing:
# gluster volume set all cluster.brick-multiplex on
Note
You can check the brick multiplex status by executing the following command:
# gluster v get all all
For example:
# oc rsh glusterfs-770ql
sh-4.2# gluster volume set all cluster.brick-multiplex on
Brick-multiplexing is supported only for container workloads (Independent/Converged). Also it is advised to make sure that either all volumes are in stopped state or no bricks are running before this option is modified. Do you still want to continue? (y/n) y
volume set: success
- List all the volumes in the trusted storage pool. This step is only required if the volume set operation is performed. For example:
# gluster volume list
Restart all the volumes. This step is only required if the volume set operation is performed along with the previous step:
# gluster vol stop <VOLNAME>
# gluster vol start <VOLNAME>
- Support for S3 compatible Object Store in Red Hat Openshift Container Storage is under technology preview. To enable S3 compatible object store, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html/operations_guide/s3_object_store.
Note
- If you have glusterfs registry pods, then proceed with the steps listed in Section 6.2, “Upgrading heketi and glusterfs registry pods” to upgrade heketi and glusterfs registry pods.
- If you do not have glusterfs registry pods, then proceed with the steps listed in Section 6.3, “Upgrading the client on Red Hat Openshift Container Platform Nodes” to upgrade the client on Red Hat Openshift Container Platform Nodes.
6.1.4. Upgrading if existing version deployed by using Ansible
6.1.4.1. Upgrading Heketi Server
- Execute the following command to update the heketi client packages:
# yum update heketi-client -y
- Back up the Heketi database file:
# oc rsh <heketi_pod_name>
# cp -a /var/lib/heketi/heketi.db /var/lib/heketi/heketi.db.`date +%s`.`heketi --version | awk '{print $2}'`
# exit
- Execute the following command to get the current HEKETI_ADMIN_KEY. The OCS admin can choose to set any phrase for the user key as long as it is not used by their infrastructure. It is not used by any of the OCS default installed resources.
# oc get secret heketi-storage-admin-secret -o jsonpath='{.data.key}'|base64 -d;echo
- Execute the following step to edit the template:
If the existing template has IMAGE_NAME and IMAGE_VERSION as two parameters, then edit the template to change the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, HEKETI_ROUTE, IMAGE_NAME, IMAGE_VERSION, and CLUSTER_NAME.
If the template has only IMAGE_NAME, then edit the template to change the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, HEKETI_ROUTE, IMAGE_NAME and CLUSTER_NAME.
- Execute the following command to delete the deployment configuration, service, and route for heketi:
Note
The names of these parameters can be referenced from the output of the following command:
# oc get all | grep heketi
# oc delete deploymentconfig,service,route heketi-storage
- Execute the following command to deploy the Heketi service, route, and deployment configuration which will be used to create persistent volumes for OpenShift:
# oc process heketi | oc create -f -
service "heketi" created
route "heketi" created
deploymentconfig "heketi" created
- Execute the following command to verify that the containers are running:
# oc get pods
6.1.4.2. Upgrading the Red Hat Gluster Storage Pods
- Execute the following steps to stop the Heketi pod to prevent it from accepting any new request for volume creation or volume deletion:
- Execute the following command to access your project:
# oc project <project_name>
For example:
# oc project storage-project
- Execute the following command to get the DeploymentConfig:
# oc get dc
- Execute the following command to set the heketi server to accept requests only from the local-client:
# heketi-cli server mode set local-client
- Wait for the ongoing operations to complete and execute the following command to monitor if there are any ongoing operations:
# heketi-cli server operations info
- Execute the following command to reduce the replica count from 1 to 0. This brings down the Heketi pod:
# oc scale dc <heketi_dc> --replicas=0
- Execute the following command to verify that the heketi pod is no longer present:
# oc get pods
- Execute the following command to find the DaemonSet name for gluster:
# oc get ds
- Execute the following command to delete the DaemonSet:
# oc delete ds <ds-name> --cascade=false
Using the --cascade=false option while deleting the old DaemonSet does not delete the gluster pods but deletes only the DaemonSet. After deleting the old DaemonSet, you must load the new one. When you manually delete the old pods, the new pods which are created will have the configurations of the new DaemonSet.
For example,
# oc delete ds glusterfs-storage --cascade=false
daemonset "glusterfs-storage" deleted
- Execute the following commands to verify all the old pods are up:
# oc get pods
- Execute the following command to edit the old glusterfs template:
# oc edit template glusterfs
If the template has IMAGE_NAME and IMAGE_VERSION as two separate parameters, then update the glusterfs template accordingly.
If the template has only IMAGE_NAME as a parameter, then update the glusterfs template accordingly.
Note
Ensure that the CLUSTER_NAME variable is set to the correct value.
- Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
- Check if the nodes are labelled using the following command:
# oc get nodes --show-labels
If the Red Hat Gluster Storage nodes do not have the glusterfs=storage-host label, then label the nodes as shown in step ii.
- Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
# oc label nodes <node name> glusterfs=storage-host
- Execute the following commands to create the gluster DaemonSet:
# oc process glusterfs | oc create -f -
For example,
# oc process glusterfs | oc create -f -
daemonset "glusterfs" created
- Execute the following command to identify the old gluster pods that need to be deleted:
# oc get pods
- Execute the following command and ensure that the bricks are not more than 90% full:
# df -kh | grep -v ^Filesystem | awk '{if($5>"90%") print $0}'
- Execute the following command to delete the old gluster pods.
Gluster pods should follow a rolling upgrade. Hence, you must ensure that the new pod is running before deleting the next old gluster pod. We support the OnDelete DaemonSet update strategy. With the OnDelete update strategy, after you update a DaemonSet template, new DaemonSet pods will only be created when you manually delete old DaemonSet pods.
- To delete the old gluster pods, execute the following command:
# oc delete pod <gluster_pod>
For example,
# oc delete pod glusterfs-0vcf3
pod "glusterfs-0vcf3" deleted
Note
Before deleting the next pod, a self-heal check has to be made:
- Run the following command to access a shell on the gluster pod:
# oc rsh <gluster_pod_name>
- Run the following command to check the self-heal status of all the volumes:
# for each_volume in `gluster volume list`; do gluster volume heal $each_volume info ; done | grep "Number of entries: [^0]$"
- The delete pod command will terminate the old pod and create a new pod. Run # oc get pods -w and check the Age of the pod; the READY status should be 1/1. The following is example output showing the status progression from termination to creation of the pod:
# oc get pods -w
NAME              READY   STATUS        RESTARTS   AGE
glusterfs-0vcf3   1/1     Terminating   0          3d
…
- Execute the following command to verify that the pods are running:
# oc get pods
- Execute the following command to verify if you have upgraded the pod to the latest version:
# oc rsh <gluster_pod_name> glusterd --version
For example:
# oc rsh glusterfs-registry-4cpcc glusterd --version
glusterfs 3.12.2
- Check the Red Hat Gluster Storage op-version by executing the following command on one of the gluster pods:
# gluster vol get all cluster.op-version
- Set the cluster.op-version to 31302 on any one of the pods:
Note
Ensure all the gluster pods are updated before changing the cluster.op-version.
# gluster --timeout=3600 volume set all cluster.op-version 31302
- Execute the following steps to enable server.tcp-user-timeout on all volumes.
Note
The "server.tcp-user-timeout" option specifies the maximum amount of time (in seconds) that transmitted data from the application can remain unacknowledged by the brick. It is used to detect forced disconnections and dead connections (for example, if a node dies unexpectedly or a firewall is activated) early, making it possible for applications to reduce the overall failover time.
- List the glusterfs pods using the following command:
# oc get pods
- Remote shell into one of the glusterfs pods. For example:
# oc rsh glusterfs-0vcf3
- Execute the following command:
# for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
For example:
# for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
volume1
volume set: success
volume2
volume set: success
- If a gluster-block-provisioner pod already exists, then delete it by executing the following commands:
# oc delete dc <gluster-block-dc>
For example:
# oc delete dc glusterblock-storage-provisioner-dc
- Depending on the OCP version, edit the glusterblock-provisioner template to change the IMAGE_NAME, IMAGE_VERSION and NAMESPACE.
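A minimal sketch of this edit is shown below; the template name, image name, version tag, and namespace shown are assumptions and may differ in your environment:
# oc edit template glusterblock-provisioner
parameters:
- name: IMAGE_NAME
  value: registry.access.redhat.com/rhgs3/rhgs-gluster-block-prov-rhel7   # illustrative image
- name: IMAGE_VERSION
  value: v3.10                                                            # illustrative tag
- name: NAMESPACE
  value: storage-project                                                  # your glusterfs namespace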
If the template has IMAGE_NAME and IMAGE_VERSION as two separate parameters, then update the glusterblock-provisioner template accordingly.
If the template has only IMAGE_NAME as a parameter, then update the glusterblock-provisioner template accordingly.
- Delete the following resources from the old pod:
# oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
# oc delete serviceaccounts glusterblock-storage-provisioner
- After editing the template, execute the following command to create the deployment configuration:
# oc process <gluster_block_provisioner_template> | oc create -f -
- Brick multiplexing is a feature that allows adding multiple bricks into one process. This reduces resource consumption and allows us to run more bricks than before with the same memory consumption. It is enabled by default from Container-Native Storage 3.6. During an upgrade from Container-Native Storage 3.9 to Red Hat Openshift Container Storage 3.10, to turn brick multiplexing on, execute the following commands:
- To exec into the Gluster pod, execute the following command and rsh into any of the gluster pods:
# oc rsh <gluster_pod_name>
- Verify if brick multiplexing is enabled. If it is disabled, then execute the following command to enable brick multiplexing:
# gluster volume set all cluster.brick-multiplex on
Note
You can check the brick multiplex status by executing the following command:
# gluster v get all all
For example:
# oc rsh glusterfs-770ql
sh-4.2# gluster volume set all cluster.brick-multiplex on
Brick-multiplexing is supported only for container workloads (Independent/Converged). Also it is advised to make sure that either all volumes are in stopped state or no bricks are running before this option is modified. Do you still want to continue? (y/n) y
volume set: success
- List all the volumes in the trusted storage pool. This step is only required if the volume set operation is performed. For example:
# gluster volume list
Restart all the volumes. This step is only required if the volume set operation is performed along with the previous step:
# gluster vol stop <VOLNAME>
# gluster vol start <VOLNAME>
- Support for S3 compatible Object Store in Red Hat Openshift Container Storage is under technology preview. To enable S3 compatible object store, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html/operations_guide/s3_object_store.
Note
- If you have glusterfs registry pods, then proceed with the steps listed in Section 6.2, “Upgrading heketi and glusterfs registry pods” to upgrade heketi and glusterfs registry pods.
- If you do not have glusterfs registry pods, then proceed with the steps listed in Section 6.3, “Upgrading the client on Red Hat Openshift Container Platform Nodes” to upgrade the client on Red Hat Openshift Container Platform Nodes.
6.2. Upgrading heketi and glusterfs registry pods
6.2.1. Prerequisites
- Ensure that you have the supported versions of OpenShift Container Platform with Red Hat Gluster Storage Server and Red Hat Openshift Container Storage. For more information on supported versions, see Section 3.1.1, “Supported Versions”.
- Run the following command to get the latest versions of the Ansible templates:
# yum update openshift-ansible
Note
- gluster template - /usr/share/heketi/templates/glusterfs-template.yaml
- heketi template - /usr/share/heketi/templates/heketi-template.yaml
- glusterblock-provisioner template - /usr/share/heketi/templates/glusterblock-provisioner.yaml
6.2.2. Upgrading if existing version deployed by using cns-deploy
6.2.2.1. Upgrading cns-deploy and Heketi Server
- Back up the Heketi registry database file:
# oc rsh <heketi_pod_name>
# cp -a /var/lib/heketi/heketi.db /var/lib/heketi/heketi.db.`date +%s`.`heketi --version | awk '{print $2}'`
# exit
- Execute the following command to delete the heketi template:
# oc delete templates heketi
- Execute the following command to get the current HEKETI_ADMIN_KEY. The OCS admin can choose to set any phrase for the user key as long as it is not used by their infrastructure. It is not used by any of the OCS default installed resources.
# oc get secret heketi-storage-admin-secret -o jsonpath='{.data.key}'|base64 -d;echo
- Execute the following command to install the heketi template:
# oc create -f /usr/share/heketi/templates/heketi-template.yaml
template "heketi" created
- Execute the following command to grant the heketi Service Account the necessary privileges:
# oc policy add-role-to-user edit system:serviceaccount:<project_name>:heketi-service-account
# oc adm policy add-scc-to-user privileged -z heketi-service-account
For example,
# oc policy add-role-to-user edit system:serviceaccount:storage-project:heketi-service-account
# oc adm policy add-scc-to-user privileged -z heketi-service-account
- Execute the following command to generate a new heketi configuration file:
# sed -e "s/\${HEKETI_EXECUTOR}/kubernetes/" -e "s#\${HEKETI_FSTAB}#/var/lib/heketi/fstab#" -e "s/\${SSH_PORT}/22/" -e "s/\${SSH_USER}/root/" -e "s/\${SSH_SUDO}/false/" -e "s/\${BLOCK_HOST_CREATE}/true/" -e "s/\${BLOCK_HOST_SIZE}/500/" "/usr/share/heketi/templates/heketi.json.template" > heketi.json
- The BLOCK_HOST_SIZE parameter controls the size (in GB) of the automatically created Red Hat Gluster Storage volumes hosting the gluster-block volumes (for more information, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html/operations_guide/block_storage). This default configuration will dynamically create block-hosting volumes of 500GB in size as more space is required.
- Alternatively, copy the file /usr/share/heketi/templates/heketi.json.template to heketi.json in the current directory and edit the new file directly, replacing each "${VARIABLE}" string with the required parameter.
Note
JSON formatting is strictly required (e.g. no trailing spaces, booleans in all lowercase).
Note
If the heketi-config-secret file already exists, then delete the file and run the following command.
Execute the following command to create a secret to hold the configuration file:
# oc create secret generic heketi-config-secret --from-file=heketi.json
- Execute the following command to delete the deployment configuration, service, and route for heketi:
# oc delete deploymentconfig,service,route heketi
- Execute the following command to edit the heketi template. Edit the HEKETI_USER_KEY and HEKETI_ADMIN_KEY parameters.
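As in the converged-mode upgrade, a minimal sketch of this edit (the template name and the key values are assumptions/placeholders to be replaced with your own):
# oc edit template heketi
parameters:
- name: HEKETI_USER_KEY
  value: <heketiuserkey>        # placeholder
- name: HEKETI_ADMIN_KEY
  value: <adminkey>             # placeholder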
- Execute the following command to deploy the Heketi service, route, and deployment configuration which will be used to create persistent volumes for OpenShift:
# oc process heketi | oc create -f -
service "heketi-registry" created
route "heketi-registry" created
deploymentconfig "heketi-registry" created
- Execute the following command to verify that the containers are running:
# oc get pods
6.2.2.2. Upgrading the Red Hat Gluster Storage Registry Pods
- Execute the following steps to stop the Heketi pod to prevent it from accepting any new request for volume creation or volume deletion:
- Execute the following command to access your project:
# oc project <project_name>
For example:
# oc project storage-project
- Execute the following command to get the DeploymentConfig:
# oc get dc
- Execute the following command to set the heketi server to accept requests only from the local-client:
# heketi-cli server mode set local-client
- Wait for the ongoing operations to complete and execute the following command to monitor if there are any ongoing operations:
# heketi-cli server operations info
- Execute the following command to reduce the replica count from 1 to 0. This brings down the Heketi pod:
# oc scale dc <heketi_dc> --replicas=0
- Execute the following command to verify that the heketi pod is no longer present:
# oc get pods
- Execute the following command to find the DaemonSet name for gluster:
# oc get ds
- Execute the following command to delete the DaemonSet:
# oc delete ds <ds-name> --cascade=false
Using the --cascade=false option while deleting the old DaemonSet does not delete the glusterfs_registry pods but deletes only the DaemonSet. After deleting the old DaemonSet, you must load the new one. When you manually delete the old pods, the new pods which are created will have the configurations of the new DaemonSet.
For example,
# oc delete ds glusterfs-registry --cascade=false
daemonset "glusterfs-registry" deleted
- Execute the following commands to verify all the old pods are up:
# oc get pods
- Execute the following command to delete the old glusterfs template:
# oc delete templates glusterfs
For example,
# oc delete templates glusterfs
template "glusterfs" deleted
- Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
- Check if the nodes are labelled using the following command:
# oc get nodes --show-labels
If the Red Hat Gluster Storage nodes do not have the storagenode=glusterfs label, then label the nodes as shown in step ii.
- Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
# oc label nodes <node name> storagenode=glusterfs
- Execute the following command to register the new gluster template:
# oc create -f /usr/share/heketi/templates/glusterfs-template.yaml
For example,
# oc create -f /usr/share/heketi/templates/glusterfs-template.yaml
template "glusterfs" created
- Execute the following commands to create the gluster DaemonSet:
# oc process glusterfs | oc create -f -
For example,
# oc process glusterfs | oc create -f -
daemonset "glusterfs" created
- Execute the following command to identify the old glusterfs_registry pods that need to be deleted:
# oc get pods
- Execute the following command and ensure that the bricks are not more than 90% full:
# df -kh | grep -v ^Filesystem | awk '{if($5>"90%") print $0}'
- Execute the following command to delete the old glusterfs-registry pods.
glusterfs-registry pods should follow a rolling upgrade. Hence, you must ensure that the new pod is running before deleting the next old glusterfs-registry pod. We support the OnDelete DaemonSet update strategy. With the OnDelete update strategy, after you update a DaemonSet template, new DaemonSet pods will only be created when you manually delete old DaemonSet pods.
- To delete the old glusterfs-registry pods, execute the following command:
# oc delete pod <gluster_pod>
For example,
# oc delete pod glusterfs-0vcf3
pod "glusterfs-0vcf3" deleted
Note
Before deleting the next pod, a self-heal check has to be made:
- Run the following command to access a shell on the glusterfs-registry pod:
# oc rsh <gluster_pod_name>
- Run the following command to check the self-heal status of all the volumes:
# for each_volume in `gluster volume list`; do gluster volume heal $each_volume info ; done | grep "Number of entries: [^0]$"
- The delete pod command will terminate the old pod and create a new pod. Run # oc get pods -w and check the Age of the pod; the READY status should be 1/1. The following example output shows the status progression from termination to creation of the pod. An optional scripted sketch of this rolling deletion follows the example.
# oc get pods -w
NAME                             READY     STATUS        RESTARTS   AGE
glusterfs-0vcf3                  1/1       Terminating   0          3d
…
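The rolling deletion can also be scripted. The following is an illustrative sketch only, not part of the documented procedure: the pod names are placeholders for your old glusterfs-registry pods, and the loop waits until every glusterfs-registry pod reports READY 1/1 and Running again before moving on. The self-heal check from the note above must still be performed between deletions.
#!/bin/bash
# Illustrative sketch: delete old glusterfs-registry pods one at a time.
# Pod names below are placeholders; replace them with your old pod names.
for pod in glusterfs-registry-0vcf3 glusterfs-registry-1abcd; do
  oc delete pod "$pod"
  # Wait until every glusterfs-registry pod is back to "1/1   Running".
  until ! oc get pods | grep glusterfs-registry | grep -qvE '1/1 +Running'; do
    sleep 10
  done
  # Perform the self-heal check described above before deleting the next pod.
done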
- Execute the following command to verify that the pods are running:
# oc get pods
- Execute the following commands to verify if you have upgraded the pod to the latest version:
# oc rsh <gluster_registry_pod_name> glusterd --version
For example:
# oc rsh glusterfs-registry-4cpcc glusterd --version
glusterfs 3.12.2
# rpm -qa|grep gluster
- Check the Red Hat Gluster Storage op-version by executing the following command on one of the glusterfs-registry pods:
# gluster vol get all cluster.op-version
- Set the cluster.op-version to 31302 on any one of the pods:
Note
Ensure all the glusterfs-registry pods are updated before changing the cluster.op-version.
# gluster volume set all cluster.op-version 31302
- Execute the following steps to enable server.tcp-user-timeout on all volumes.
Note
The "server.tcp-user-timeout" option specifies the maximum amount of time (in seconds) that data transmitted from the application can remain unacknowledged by the brick. It is used to detect forced disconnections and dead connections (for example, if a node dies unexpectedly or a firewall is activated) early, and makes it possible for applications to reduce the overall failover time.
- List the glusterfs pods using the following command:
# oc get pods
- Remote shell into one of the glusterfs-registry pods. For example:
# oc rsh glusterfs-registry-g6vd9
- Execute the following command:
# for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
For example:
# for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
volume1
volume set: success
volume2
volume set: success
- If a gluster-block-registry-provisioner pod already exists, then delete it by executing the following commands:
# oc delete dc <gluster-block-registry-dc>
For example:
# oc delete dc glusterblock-registry-provisioner-dc
- Execute the following commands to deploy the gluster-block provisioner:
# sed -e 's/\\\${NAMESPACE}/<NAMESPACE>/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
# oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:<NAMESPACE>:glusterblock-provisioner
For example:
# sed -e 's/\\\${NAMESPACE}/storage-project/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
# oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:storage-project:glusterblock-provisioner
- Delete the following resources from the old pod:
# oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
# oc delete serviceaccounts glusterblock-registry-provisioner
- After editing the template, execute the following command to create the deployment configuration:
# oc process <gluster_block_provisioner_template> | oc create -f -
- Brick multiplexing is a feature that allows multiple bricks to be added into one process. This reduces resource consumption and allows running more bricks than before with the same memory consumption. It is enabled by default from Container-Native Storage 3.6. During an upgrade from Container-Native Storage 3.9 to Red Hat Openshift Container Storage 3.10, to turn brick multiplexing on, execute the following commands:
- To exec into the Gluster pod, execute the following command and rsh into any of the glusterfs_registry pods:
# oc rsh <gluster_pod_name>
- Verify if brick multiplexing is enabled. If it is disabled, then execute the following command to enable brick multiplexing:
# gluster volume set all cluster.brick-multiplex on
Note
You can check the brick multiplex status by executing the following command:
# gluster v get all all
For example:
# oc rsh glusterfs-registry-g6vd9
sh-4.2# gluster volume set all cluster.brick-multiplex on
Brick-multiplexing is supported only for container workloads (Independent/Converged). Also it is advised to make sure that either all volumes are in stopped state or no bricks are running before this option is modified. Do you still want to continue? (y/n) y
volume set: success
- List all the volumes in the trusted storage pool. This step is only required if the volume set operation is performed.
- Restart all the volumes. This step is only required if the volume set operation is performed along with the previous step; an optional scripted sketch follows the commands below:
# gluster vol stop <VOLNAME>
# gluster vol start <VOLNAME>
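If many volumes have to be restarted, the stop/start cycle can be scripted. This is only an illustrative convenience, run from inside a gluster pod; --mode=script suppresses the confirmation prompt that gluster volume stop normally displays.
for vol in $(gluster volume list); do
  gluster --mode=script volume stop "$vol"
  gluster volume start "$vol"
done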
- Support for S3 compatible Object Store in Red Hat Openshift Container Storage is under technology preview. To enable S3 compatible object store, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html/operations_guide/s3_object_store.
Note
- After upgrading the glusterfs registry pods, proceed with the steps listed in Section 6.3, “Upgrading the client on Red Hat Openshift Container Platform Nodes” to upgrade the client on Red Hat Openshift Container Platform Nodes.
6.2.3. Upgrading if existing version deployed by using Ansible
6.2.3.1. Upgrading Heketi Server
Note
- Back up the Heketi registry database file:
# oc rsh <heketi_pod_name>
# cp -a /var/lib/heketi/heketi.db /var/lib/heketi/heketi.db.`date +%s`.`heketi --version | awk '{print $2}'`
# exit
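Optionally, the backup can also be copied off the pod so that it survives a pod restart. This is not part of the documented steps; the pod name and the timestamped file name below are placeholders.
# oc cp <heketi_pod_name>:/var/lib/heketi/heketi.db.<timestamp>.<version> ./heketi.db.backup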
oc get secret heketi-storage-admin-secret -o jsonpath='{.data.key}'|base64 -d;echooc get secret heketi-storage-admin-secret -o jsonpath='{.data.key}'|base64 -d;echoCopy to Clipboard Copied! Toggle word wrap Toggle overflow - Execute the following step to edit the template:
- Execute the following step to edit the template:
If the existing template has IMAGE_NAME and IMAGE_VERSION as two parameters, then edit the template to change the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, HEKETI_ROUTE, IMAGE_NAME, IMAGE_VERSION, and CLUSTER_NAME as shown in the example below.
If the template has only IMAGE_NAME, then edit the template to change the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, HEKETI_ROUTE, IMAGE_NAME, and CLUSTER_NAME as shown in the example below.
- Execute the following command to delete the deployment configuration, service, and route for heketi:
# oc delete deploymentconfig,service,route heketi-registry
- Execute the following command to deploy the Heketi service, route, and deployment configuration which will be used to create persistent volumes for OpenShift:
# oc process heketi | oc create -f -
service "heketi-registry" created
route "heketi-registry" created
deploymentconfig "heketi-registry" created
- Execute the following command to verify that the containers are running:
# oc get pods
6.2.3.2. Upgrading the Red Hat Gluster Storage Registry Pods
- Execute the following steps to stop the Heketi pod to prevent it from accepting any new request for volume creation or volume deletion:
- Execute the following command to access your project:
# oc project <project_name>
For example:
# oc project storage-project
- Execute the following command to get the DeploymentConfig:
# oc get dc
- Execute the following command to set the heketi server to accept requests only from the local-client:
# heketi-cli server mode set local-client
- Wait for the ongoing operations to complete and execute the following command to monitor if there are any ongoing operations:
# heketi-cli server operations info
- Execute the following command to reduce the replica count from 1 to 0. This brings down the Heketi pod:
# oc scale dc <heketi_dc> --replicas=0
- Execute the following command to verify that the heketi pod is no longer present:
# oc get pods
- Execute the following command to find the DaemonSet name for gluster:
# oc get ds
- Execute the following command to delete the DaemonSet:
# oc delete ds <ds-name> --cascade=false
Using the --cascade=false option while deleting the old DaemonSet does not delete the glusterfs_registry pods but deletes only the DaemonSet. After deleting the old DaemonSet, you must load the new one. When you manually delete the old pods, the new pods which are created will have the configurations of the new DaemonSet.
For example:
# oc delete ds glusterfs-registry --cascade=false
daemonset "glusterfs-registry" deleted
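A quick, optional way to confirm the effect of --cascade=false before continuing: the DaemonSet should be gone, while the old glusterfs-registry pods keep running until they are deleted manually in the later steps.
# oc get ds
# oc get pods | grep glusterfs-registry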
- Execute the following commands to verify that all the old pods are up:
# oc get pods
- Execute the following command to edit the old glusterfs template.
If the template has IMAGE_NAME and IMAGE_VERSION as two separate parameters, then update the glusterfs template as follows.
If the template has only IMAGE_NAME as a parameter, then update the glusterfs template as follows.
Note
Ensure that the CLUSTER_NAME variable is set to the correct value.
- Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
- Check if the nodes are labelled using the following command:
# oc get nodes --show-labels
If the Red Hat Gluster Storage nodes do not have the glusterfs=registry-host label, then label the nodes as shown in step ii.
- Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
# oc label nodes <node name> glusterfs=registry-host
- Execute the following commands to create the gluster DaemonSet:
# oc process glusterfs | oc create -f -
For example:
# oc process glusterfs | oc create -f -
daemonset "glusterfs" created
- Execute the following command to identify the old glusterfs_registry pods that need to be deleted:
# oc get pods
- Execute the following command and ensure that the bricks are not more than 90% full:
# df -kh | grep -v ^Filesystem | awk '{if($5>"90%") print $0}'
- Execute the following command to delete the old glusterfs-registry pods. glusterfs-registry pods should follow a rolling upgrade, so you must ensure that the new pod is running before deleting the next old glusterfs-registry pod. We support the OnDelete DaemonSet update strategy. With the OnDelete update strategy, after you update a DaemonSet template, new DaemonSet pods are only created when you manually delete old DaemonSet pods.
- To delete the old glusterfs-registry pods, execute the following command:
# oc delete pod <gluster_pod>
For example:
# oc delete pod glusterfs-0vcf3
pod "glusterfs-0vcf3" deleted
Note
Before deleting the next pod, a self-heal check has to be made:
- Run the following command to access a shell on the glusterfs-registry pods:
# oc rsh <gluster_pod_name>
- Run the following command to check the self-heal status of all the volumes:
# for each_volume in `gluster volume list`; do gluster volume heal $each_volume info ; done | grep "Number of entries: [^0]$"
- The delete pod command will terminate the old pod and create a new pod. Run # oc get pods -w and check the Age of the pod; the READY status should be 1/1. The following example output shows the status progression from termination to creation of the pod.
# oc get pods -w
NAME                             READY     STATUS        RESTARTS   AGE
glusterfs-0vcf3                  1/1       Terminating   0          3d
…
- Execute the following command to verify that the pods are running:
# oc get pods
- Execute the following commands to verify if you have upgraded the pod to the latest version:
# oc rsh <gluster_registry_pod_name> glusterd --version
For example:
# oc rsh glusterfs-registry-4cpcc glusterd --version
glusterfs 3.12.2
# rpm -qa|grep gluster
- Check the Red Hat Gluster Storage op-version by executing the following command on one of the glusterfs-registry pods:
# gluster vol get all cluster.op-version
- Set the cluster.op-version to 31302 on any one of the pods:
Note
Ensure all the glusterfs-registry pods are updated before changing the cluster.op-version.
# gluster volume set all cluster.op-version 31302
- Execute the following steps to enable server.tcp-user-timeout on all volumes.
Note
The "server.tcp-user-timeout" option specifies the maximum amount of time (in seconds) that data transmitted from the application can remain unacknowledged by the brick. It is used to detect forced disconnections and dead connections (for example, if a node dies unexpectedly or a firewall is activated) early, and makes it possible for applications to reduce the overall failover time.
- List the glusterfs pods using the following command:
# oc get pods
- Remote shell into one of the glusterfs-registry pods. For example:
# oc rsh glusterfs-registry-g6vd9
- Execute the following command:
# for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
For example:
# for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
volume1
volume set: success
volume2
volume set: success
- If a gluster-block-registry-provisioner pod already exists, then delete it by executing the following commands:
# oc delete dc <gluster-block-registry-dc>
For example:
# oc delete dc glusterblock-registry-provisioner-dc
- Depending on the OCP version, edit the glusterblock-provisioner template to change the IMAGE_NAME, IMAGE_VERSION, and NAMESPACE.
If the template has IMAGE_NAME and IMAGE_VERSION as two separate parameters, then update the glusterblock-provisioner template as follows.
If the template has only IMAGE_NAME, then update the glusterblock-provisioner template as follows.
- Delete the following resources from the old pod:
# oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
# oc delete serviceaccounts glusterblock-registry-provisioner
- After editing the template, execute the following command to create the deployment configuration:
# oc process <gluster_block_provisioner_template> | oc create -f -
- Brick multiplexing is a feature that allows multiple bricks to be added into one process. This reduces resource consumption and allows running more bricks than before with the same memory consumption. It is enabled by default from Container-Native Storage 3.6. During an upgrade from Container-Native Storage 3.9 to Red Hat Openshift Container Storage 3.10, to turn brick multiplexing on, execute the following commands:
- To exec into the Gluster pod, execute the following command and rsh into any of the glusterfs_registry pods:
# oc rsh <gluster_pod_name>
- Verify if brick multiplexing is enabled. If it is disabled, then execute the following command to enable brick multiplexing:
# gluster volume set all cluster.brick-multiplex on
Note
You can check the brick multiplex status by executing the following command:
# gluster v get all all
For example:
# oc rsh glusterfs-registry-g6vd9
sh-4.2# gluster volume set all cluster.brick-multiplex on
Brick-multiplexing is supported only for container workloads (Independent/Converged). Also it is advised to make sure that either all volumes are in stopped state or no bricks are running before this option is modified. Do you still want to continue? (y/n) y
volume set: success
- List all the volumes in the trusted storage pool. This step is only required if the volume set operation is performed.
- Restart all the volumes. This step is only required if the volume set operation is performed along with the previous step:
# gluster vol stop <VOLNAME>
# gluster vol start <VOLNAME>
- Support for S3 compatible Object Store in Red Hat Openshift Container Storage is under technology preview. To enable S3 compatible object store, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html/operations_guide/s3_object_store.
Note
- After upgrading the glusterfs registry pods, proceed with the steps listed in Section 6.3, “Upgrading the client on Red Hat Openshift Container Platform Nodes” to upgrade the client on Red Hat Openshift Container Platform Nodes.
6.3. Upgrading the client on Red Hat Openshift Container Platform Nodes
- To drain the pod, execute the following command on the master node (or any node with cluster-admin access):
# oc adm drain <node_name> --ignore-daemonsets
- To check if all the pods are drained, execute the following command on the master node (or any node with cluster-admin access):
# oc get pods --all-namespaces --field-selector=spec.nodeName=<node_name>
- Execute the following command on the node to upgrade the client on the node to the glusterfs-fuse-3.12.2-32.el7.x86_64 version:
# yum install glusterfs-client
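Optionally, confirm the installed client package level afterwards (illustrative check only):
# rpm -q glusterfs-fuse glusterfs-client-xlators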
- To enable the node for pod scheduling, execute the following command on the master node (or any node with cluster-admin access):
# oc adm manage-node --schedulable=true <node_name>
- Create and add the following content to the multipath.conf file:
Note
Make sure that the changes to multipath.conf and reloading of multipathd are done only after all the server nodes are upgraded.
- Execute the following commands to start the multipath daemon and [re]load the multipath configuration:
# systemctl start multipathd
# systemctl reload multipathd
6.4. Starting the Heketi Pods
- Execute the following command to navigate to the project where the Heketi pods are running:
# oc project <project_name>
For example:
# oc project glusterfs
- Execute the following command to get the DeploymentConfig:
# oc get dc
For example:
# oc get dc
NAME                          REVISION   DESIRED   CURRENT   TRIGGERED BY
glusterblock-provisioner-dc   1          1         1         config
heketi                        1          1         1         config
- Execute the following command to increase the replica count from 0 to 1. This brings back the Heketi pod:
# oc scale dc <heketi_dc> --replicas=1
- Execute the following command to verify that the Heketi pod is present:
# oc get pods
Chapter 7. Upgrading Your Red Hat Openshift Container Storage in Independent Mode
7.1. Prerequisites
- Ensure that you have the supported versions of OpenShift Container Platform with Red Hat Gluster Storage Server and Red Hat Openshift Container Storage. For more information on supported versions, see Section 3.1.1, "Supported Versions".
- If Heketi is running as a standalone service on one of the Red Hat Gluster Storage nodes, then ensure that the port for Heketi is open. By default, the port number for Heketi is 8080. To open this port, execute the following commands on the node where Heketi is running:
# firewall-cmd --zone=zone_name --add-port=8080/tcp
# firewall-cmd --zone=zone_name --add-port=8080/tcp --permanent
If Heketi is configured to listen on a different port, then change the port number in the commands accordingly.
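To confirm which port Heketi is actually configured to use before opening it, the standalone configuration file can be inspected. This is an illustrative check and assumes the default configuration path.
# grep '"port"' /etc/heketi/heketi.json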
7.2. Upgrading your Independent Mode Setup
7.2.1. Upgrading the Red Hat Gluster Storage Cluster
7.2.2. Upgrading/Migration of Heketi in RHGS node
Note
Important
- In OCS 3.11, upgrade of Heketi in RHGS node is not supported. Hence, you have to migrate heketi to a new heketi pod.
- Ensure to migrate to the supported heketi deployment now, as there might not be a migration path in the future versions.
- Ensure that the cns-deploy rpm is installed on the master node. This provides the template files necessary to set up the heketi pod.
# subscription-manager repos --enable=rh-gluster-3-for-rhel-7-server-rpms
# yum install cns-deploy
- Use the newly created containerized Red Hat Gluster Storage project on the master node:
# oc project <project-name>
For example:
# oc project gluster
- Execute the following command on the master node to create the service account:
# oc create -f /usr/share/heketi/templates/heketi-service-account.yaml
serviceaccount/heketi-service-account created
- Execute the following command on the master node to install the heketi template:
# oc create -f /usr/share/heketi/templates/heketi-template.yaml
template.template.openshift.io/heketi created
- Verify if the templates are created:
# oc get templates
NAME     DESCRIPTION                          PARAMETERS    OBJECTS
heketi   Heketi service deployment template   5 (3 blank)   3
- Execute the following commands on the master node to grant the heketi Service Account the necessary privileges:
# oc policy add-role-to-user edit system:serviceaccount:gluster:heketi-service-account
role "edit" added: "system:serviceaccount:gluster:heketi-service-account"
# oc adm policy add-scc-to-user privileged -z heketi-service-account
scc "privileged" added to: ["system:serviceaccount:gluster:heketi-service-account"]
- On the RHGS node, where heketi is running, execute the following commands:
- Create the heketidbstorage volume:
# heketi-cli volume create --size=2 --name=heketidbstorage
- Mount the volume:
# mount -t glusterfs 192.168.11.192:heketidbstorage /mnt/
where 192.168.11.192 is one of the RHGS nodes.
- Stop the heketi service:
# systemctl stop heketi
- Disable the heketi service:
# systemctl disable heketi
- Copy the heketi db to the heketidbstorage volume:
# cp /var/lib/heketi/heketi.db /mnt/
- Unmount the volume:
# umount /mnt
- Copy the following files from the heketi node to the master node:
# scp /etc/heketi/heketi.json topology.json /etc/heketi/heketi_key OCP_master_node:/root/
where OCP_master_node is the hostname of the master node.
- On the master node, set the environment variables for the following three files that were copied from the heketi node. Add the following lines to the ~/.bashrc file and run the bash command to apply and save the changes:
export SSH_KEYFILE=heketi_key
export TOPOLOGY=topology.json
export HEKETI_CONFIG=heketi.json
Note
If you have changed the value for "keyfile" in /etc/heketi/heketi.json to a different value, change it here accordingly.
- Execute the following command to create a secret to hold the configuration file:
# oc create secret generic heketi-config-secret --from-file=${SSH_KEYFILE} --from-file=${HEKETI_CONFIG} --from-file=${TOPOLOGY}
secret/heketi-config-secret created
- Execute the following command to label the secret:
# oc label --overwrite secret heketi-config-secret glusterfs=heketi-config-secret heketi=config-secret
secret/heketi-config-secret labeled
- Get the IP addresses of all the glusterfs nodes from the heketi-gluster-endpoints.yml file.
For example, 192.168.11.208, 192.168.11.176, and 192.168.11.192 are the glusterfs nodes.
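The contents of heketi-gluster-endpoints.yml are not reproduced here. As a rough, hedged sketch only, an endpoints manifest for the three nodes typically looks like the following; the object name and the placeholder port value are assumptions and must match your environment.
# cat > heketi-gluster-endpoints.yaml <<'EOF'
apiVersion: v1
kind: Endpoints
metadata:
  name: heketi-storage-endpoints
subsets:
  - addresses:
      - ip: 192.168.11.208
      - ip: 192.168.11.176
      - ip: 192.168.11.192
    ports:
      - port: 1
EOF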
- Execute the following command to create the endpoints:
# oc create -f ./heketi-gluster-endpoints.yaml
- Execute the following command to create the service:
# oc create -f ./heketi-gluster-service.yaml
- Execute the following command to deploy the Heketi service, route, and deployment configuration which will be used to create persistent volumes for OpenShift:
- To verify if Heketi is migrated, execute the following command on the master node:
# oc rsh po/<heketi-pod-name>
For example:
# oc rsh po/heketi-1-p65c6
- Execute the following command to check the cluster IDs:
# heketi-cli cluster list
From the output, verify if the cluster ID matches with the old cluster.
7.2.3. Upgrading if existing version deployed using cns-deploy
7.2.3.1. Upgrading Heketi in Openshift node
- Execute the following command to update the heketi client and cns-deploy packages:
# yum update cns-deploy -y
# yum update heketi-client -y
- Back up the Heketi database file:
# oc rsh <heketi_pod_name>
# cp -a /var/lib/heketi/heketi.db /var/lib/heketi/heketi.db.`date +%s`.`heketi --version | awk '{print $2}'`
# exit
- Execute the following command to delete the heketi template:
# oc delete templates heketi
- Execute the following command to get the current HEKETI_ADMIN_KEY. The OCS admin can choose to set any phrase for the user key as long as it is not used by their infrastructure. It is not used by any of the OCS default installed resources.
# oc get secret heketi-storage-admin-secret -o jsonpath='{.data.key}'|base64 -d;echo
- Execute the following command to install the heketi template:
# oc create -f /usr/share/heketi/templates/heketi-template.yaml
template "heketi" created
- Execute the following commands to grant the heketi Service Account the necessary privileges:
# oc policy add-role-to-user edit system:serviceaccount:<project_name>:heketi-service-account
# oc adm policy add-scc-to-user privileged -z heketi-service-account
For example:
# oc policy add-role-to-user edit system:serviceaccount:storage-project:heketi-service-account
# oc adm policy add-scc-to-user privileged -z heketi-service-account
- Execute the following command to generate a new heketi configuration file:
# sed -e "s/\${HEKETI_EXECUTOR}/ssh/" -e "s#\${HEKETI_FSTAB}#/etc/fstab#" -e "s/\${SSH_PORT}/22/" -e "s/\${SSH_USER}/root/" -e "s/\${SSH_SUDO}/false/" -e "s/\${BLOCK_HOST_CREATE}/true/" -e "s/\${BLOCK_HOST_SIZE}/500/" "/usr/share/heketi/templates/heketi.json.template" > heketi.json
- The BLOCK_HOST_SIZE parameter controls the size (in GB) of the automatically created Red Hat Gluster Storage volumes hosting the gluster-block volumes (for more information, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/operations_guide/#Block_Storage). This default configuration will dynamically create block-hosting volumes of 500GB in size as more space is required.
- Alternatively, copy the file /usr/share/heketi/templates/heketi.json.template to heketi.json in the current directory and edit the new file directly, replacing each "${VARIABLE}" string with the required parameter.
Note
JSON formatting is strictly required (e.g. no trailing spaces, booleans in all lowercase).
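A quick way to catch formatting mistakes before the file is used (illustrative only):
# python -m json.tool heketi.json > /dev/null && echo "heketi.json is valid JSON"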
Note
If the heketi-config-secret file already exists, then delete the file and run the following command.
Execute the following command to create a secret to hold the configuration file:
# oc create secret generic heketi-config-secret --from-file=private_key=${SSH_KEYFILE} --from-file=./heketi.json
- Execute the following command to delete the deployment configuration, service, and route for heketi:
# oc delete deploymentconfig,service,route heketi
- Execute the following command to edit the heketi template. Edit the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, and HEKETI_EXECUTOR parameters.
- Execute the following command to deploy the Heketi service, route, and deployment configuration which will be used to create persistent volumes for OpenShift:
# oc process heketi | oc create -f -
service "heketi" created
route "heketi" created
deploymentconfig "heketi" created
- Execute the following command to verify that the containers are running:
# oc get pods
7.2.3.2. Upgrading Gluster Block
- Execute the following command to upgrade the gluster block:
# yum update gluster-block
- Enable and start the gluster block service:
# systemctl enable gluster-blockd
# systemctl start gluster-blockd
- Execute the following commands to update the heketi client and cns-deploy packages:
# yum update cns-deploy -y
# yum update heketi-client -y
- To use gluster block, add the following two parameters to the glusterfs section in the heketi configuration file at /etc/heketi/heketi.json:
auto_create_block_hosting_volume
block_hosting_volume_size
Where:
auto_create_block_hosting_volume: Creates block hosting volumes automatically if none is found or if the existing volume is exhausted. To enable this, set the value to true.
block_hosting_volume_size: New block hosting volumes will be created with the size mentioned. This is considered only if auto_create_block_hosting_volume is set to true. The recommended size is 500G.
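The example for this edit is not reproduced here. As an illustrative fragment only (not a complete file), the two keys might appear inside the "glusterfs" section of /etc/heketi/heketi.json as follows, with the surrounding keys omitted:
"glusterfs": {
  ...
  "auto_create_block_hosting_volume": true,
  "block_hosting_volume_size": 500
}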
- Restart the Heketi service:
# systemctl restart heketi
Note
This step is not applicable if heketi is running as a pod in the Openshift cluster.
- If a gluster-block-provisioner pod already exists, then delete it by executing the following commands:
# oc delete dc <gluster-block-dc>
For example:
# oc delete dc glusterblock-provisioner-dc
- Execute the following commands to deploy the gluster-block provisioner:
# sed -e 's/\\\${NAMESPACE}/<NAMESPACE>/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
# oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:<NAMESPACE>:glusterblock-provisioner
For example:
# sed -e 's/\\\${NAMESPACE}/storage-project/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
# oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:storage-project:glusterblock-provisioner
- Delete the following resources from the old pod:
# oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
# oc delete serviceaccounts glusterblock-registry-provisioner
- Execute the following command to create a glusterblock-provisioner:
# oc process <gluster_block_provisioner_template> | oc create -f -
7.2.4. Upgrading if existing version deployed using Ansible
7.2.4.1. Upgrading Heketi in Openshift node
- Execute the following command to update the heketi client:
# yum update heketi-client -y
- Back up the Heketi database file:
# oc rsh <heketi_pod_name>
# cp -a /var/lib/heketi/heketi.db /var/lib/heketi/heketi.db.`date +%s`.`heketi --version | awk '{print $2}'`
# exit
- Execute the following command to get the current HEKETI_ADMIN_KEY. The OCS admin can choose to set any phrase for the user key as long as it is not used by their infrastructure. It is not used by any of the OCS default installed resources.
# oc get secret heketi-storage-admin-secret -o jsonpath='{.data.key}'|base64 -d;echo
- Execute the following step to edit the template:
If the existing template has IMAGE_NAME and IMAGE_VERSION as two parameters, then edit the template to change the HEKETI_EXECUTOR, HEKETI_FSTAB, HEKETI_ROUTE, IMAGE_NAME, IMAGE_VERSION, and CLUSTER_NAME as shown in the example below.
If the template has only IMAGE_NAME, then edit the template to change the HEKETI_EXECUTOR, HEKETI_FSTAB, HEKETI_ROUTE, IMAGE_NAME, and CLUSTER_NAME as shown in the example below.
- Execute the following command to delete the deployment configuration, service, and route for heketi:
# oc delete deploymentconfig,service,route heketi-storage
- Execute the following command to get the current HEKETI_ADMIN_KEY:
# oc get secret heketi-storage-admin-secret -o jsonpath='{.data.key}'|base64 -d;echo
- Execute the following command to deploy the Heketi service, route, and deploymentconfig which will be used to create persistent volumes for OpenShift:
# oc process heketi | oc create -f -
service "heketi" created
route "heketi" created
deploymentconfig "heketi" created
- Execute the following command to verify that the containers are running:
# oc get pods
7.2.4.2. Upgrading Gluster Block if Deployed by Using Ansible
- Execute the following command to upgrade the gluster block:
# yum update gluster-block
- Enable and start the gluster block service:
# systemctl enable gluster-blockd
# systemctl start gluster-blockd
- Execute the following command to update the heketi client:
# yum update heketi-client -y
- Restart the Heketi service:
# systemctl restart heketi
Note
This step is not applicable if heketi is running as a pod in the Openshift cluster.
- If a gluster-block-provisioner pod already exists, then delete it by executing the following commands:
# oc delete dc <gluster-block-dc>
For example:
# oc delete dc glusterblock-provisioner-dc
- Edit the glusterblock-provisioner template to change the IMAGE_NAME, IMAGE_VERSION, and NAMESPACE.
# oc get templates
NAME                       DESCRIPTION                          PARAMETERS    OBJECTS
glusterblock-provisioner   glusterblock provisioner template    3 (2 blank)   4
glusterfs                  GlusterFS DaemonSet template         5 (1 blank)   1
heketi                     Heketi service deployment template   7 (3 blank)   3
If the template has IMAGE_NAME and IMAGE_VERSION as two separate parameters, then update the glusterblock-provisioner template as follows.
If the template has only IMAGE_NAME as a parameter, then update the glusterblock-provisioner template as follows.
- Delete the following resources from the old pod:
# oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
# oc delete serviceaccounts glusterblock-registry-provisioner
- Execute the following command to create a glusterblock-provisioner:
# oc process <gluster_block_provisioner_template> | oc create -f -
7.2.5. Enabling S3 Compatible Object store
Note
7.3. Upgrading Gluster Nodes and heketi pods in glusterfs Registry Namespace
7.3.1. Upgrading the Red Hat Gluster Storage Registry Cluster
7.3.2. Upgrading Heketi Registry pod
Note
7.3.3. Upgrading Gluster Block
7.4. Upgrading the client on Red Hat Openshift Container Platform Nodes
- To drain the pod, execute the following command on the master node (or any node with cluster-admin access):
# oc adm drain <node_name> --ignore-daemonsets
- To check if all the pods are drained, execute the following command on the master node (or any node with cluster-admin access):
# oc get pods --all-namespaces --field-selector=spec.nodeName=<node_name>
- Execute the following command on the node to upgrade the client on the node:
# yum update glusterfs-client
- To enable the node for pod scheduling, execute the following command on the master node (or any node with cluster-admin access):
# oc adm manage-node --schedulable=true <node_name>
- Create and add the following content to the multipath.conf file:
Note
Make sure that the changes to multipath.conf and reloading of multipathd are done only after all the server nodes are upgraded.
- Execute the following commands to start the multipath daemon and [re]load the multipath configuration:
# systemctl start multipathd
# systemctl reload multipathd
Part IV. Uninstalling
Chapter 8. Uninstall Red Hat Openshift Container Storage
Warning
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/uninstall.yml
openshift_storage_glusterfs_wipe which, when enabled, will destroy any data on the block devices that were used for Red Hat Gluster Storage backend storage. For more information about the settings/variables that will be destroyed, see Appendix B, Settings that are destroyed when using uninstall playbook. It is recommended to use this variable in the following format:
ansible-playbook -i <path_to_inventory_file> -e "openshift_storage_glusterfs_wipe=true" /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/uninstall.yml
Note
Part V. Workloads
Chapter 9. Managing Arbitrated Replicated Volumes
9.1. Managing Arbiter Brick Size
# heketi-cli volume create --size=4 --gluster-volume-options='user.heketi.arbiter true,user.heketi.average-file-size 1024'
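As a rough illustration of what the average-file-size option implies (assumption: the arbiter brick stores on the order of 4 KB of metadata per file), a 4 GB volume with an average file size of 1024 KB holds about 4096 files, so its arbiter brick only needs roughly 4096 * 4 KB, or about 16 MB:
# echo $(( (4 * 1024 * 1024 / 1024) * 4 / 1024 )) MB
16 MB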
9.2. Managing Arbiter Brick Placement
- supported: both arbiter bricks and data bricks are allowed.
- required: only arbiter bricks are allowed, data bricks are rejected.
- disabled: only data bricks are allowed, arbiter bricks are rejected.
Note
9.2.1. Setting Tags with the Heketi CLI
# heketi-cli node settags <node id> arbiter:<tag>
# heketi-cli node settags e2a792a43ca9a6bac4b9bfa792e89347 arbiter:disabled
# heketi-cli device settags <device id> arbiter:<tag>
# heketi-cli device settags 167fe2831ad0a91f7173dac79172f8d7 arbiter:required
9.2.2. Removing Tags using Heketi CLI
# heketi-cli node rmtags <node id> arbiter
# heketi-cli node rmtags e2a792a43ca9a6bac4b9bfa792e89347 arbiter
# heketi-cli device rmtags <device id> arbiter
# heketi-cli device rmtags 167fe2831ad0a91f7173dac79172f8d7 arbiter
9.2.3. Viewing Tags with the Heketi CLI
# heketi-cli node info <node id>
# heketi-cli device info <device id>
9.3. Creating Persistent Volumes
Important
Part VI. Appendix
Appendix A. Optional Deployment Method (with cns-deploy)
A.1. Setting up Converged mode
A.1.1. Configuring Port Access
- On each of the OpenShift nodes that will host the Red Hat Gluster Storage container, add rules like the following to /etc/sysconfig/iptables in order to open the required ports:
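The following is a minimal sketch of such rules; the OS_FIREWALL_ALLOW chain name, the inclusion of port 2222 for the Gluster pod sshd, and the 49152:49664 brick port range are assumptions to adjust for your environment:
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 24007 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 24008 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 2222 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m multiport --dports 49152:49664 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 24010 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 3260 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT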
Note
- Port 24010 and 3260 are for gluster-blockd and iSCSI targets respectively.
- The port range starting at 49664 defines the range of ports that can be used by GlusterFS for communication to its volume bricks. In the above example the total number of bricks allowed is 512. Configure the port range based on the maximum number of bricks that could be hosted on each node.
For more information about Red Hat Gluster Storage Server ports, see https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html/administration_guide/chap-getting_started.
- Execute the following command to reload the iptables:
# systemctl reload iptables
- Execute the following command on each node to verify if the iptables are updated:
# iptables -L
A.1.2. Enabling Kernel Modules
Before running the cns-deploy tool, you must ensure that the dm_thin_pool, dm_multipath, and target_core_user modules are loaded on the OpenShift Container Platform nodes. Execute the following commands only on Gluster nodes to verify whether the modules are loaded:
# lsmod | grep dm_thin_pool
# lsmod | grep dm_multipath
# lsmod | grep target_core_user
If the modules are not loaded, execute the following commands to load them:
# modprobe dm_thin_pool
# modprobe dm_multipath
# modprobe target_core_user
Note
To ensure these operations are persisted across reboots, create the following files and update each file with the content as mentioned:
# cat /etc/modules-load.d/dm_thin_pool.conf
dm_thin_pool
# cat /etc/modules-load.d/dm_multipath.conf
dm_multipath
# cat /etc/modules-load.d/target_core_user.conf
target_core_user
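If the files do not already exist, one way to create them is shown below (a minimal sketch; the file names and module names match those listed above):
# echo dm_thin_pool > /etc/modules-load.d/dm_thin_pool.conf
# echo dm_multipath > /etc/modules-load.d/dm_multipath.conf
# echo target_core_user > /etc/modules-load.d/target_core_user.conf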
A.1.3. Starting and Enabling Services
# systemctl add-wants multi-user rpcbind.service
# systemctl enable rpcbind.service
# systemctl start rpcbind.service
Note
To abort an existing or failed deployment, use the cns-deploy --abort command. Use the -g option if Gluster is containerized.
After aborting, run the rm -rf /var/lib/heketi /etc/glusterfs /var/lib/glusterd /var/log/glusterfs command on every node that was running a Gluster pod, and run wipefs -a <device> for every storage device that was consumed by Heketi. This erases all remaining Gluster state from each node. You must be an administrator to run the device wiping command.
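A sketch of the abort and cleanup sequence described above, assuming a containerized deployment; topology.json is your topology file and /dev/sdX is a placeholder for each device that Heketi consumed:
# cns-deploy -n <namespace> -g --abort topology.json
# rm -rf /var/lib/heketi /etc/glusterfs /var/lib/glusterd /var/log/glusterfs
# wipefs -a /dev/sdX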
A.2. Setting up Independent Mode
A.2.1. Installing Red Hat Gluster Storage Server on Red Hat Enterprise Linux (Layered Install)
Important
It is recommended to create a /var partition that is large enough (50 GB - 100 GB) for log files, geo-replication related miscellaneous files, and other files.
Perform a base install of Red Hat Enterprise Linux 7 Server
Independent mode is supported only on Red Hat Enterprise Linux 7.
Register the System with Subscription Manager
Run the following command and enter your Red Hat Network username and password to register the system with the Red Hat Network:
# subscription-manager register
Identify Available Entitlement Pools
Run the following command to find entitlement pools containing the repositories required to install Red Hat Gluster Storage:
# subscription-manager list --available
Attach Entitlement Pools to the System
Use the pool identifiers located in the previous step to attach the Red Hat Enterprise Linux Server and Red Hat Gluster Storage entitlements to the system. Run the following command to attach the entitlements:
# subscription-manager attach --pool=[POOLID]
For example:
# subscription-manager attach --pool=8a85f9814999f69101499c05aa706e47
Enable the Required Channels
For Red Hat Gluster Storage 3.3 on Red Hat Enterprise Linux 7.x:
- Run the following commands to enable the repositories required to install Red Hat Gluster Storage:
# subscription-manager repos --enable=rhel-7-server-rpms
# subscription-manager repos --enable=rh-gluster-3-for-rhel-7-server-rpms
Verify that the Channels are Enabled
Run the following command to verify that the channels are enabled:
# yum repolist
Update all packages
Ensure that all packages are up to date by running the following command:
# yum update
Important
If any kernel packages are updated, reboot the system with the following command:
# shutdown -r now
Kernel Version Requirement
Independent mode requires kernel version 3.10.0-690.el7 or higher on the system. Verify the installed and running kernel versions by running the following commands:
# rpm -q kernel
kernel-3.10.0-862.11.6.el7.x86_64
# uname -r
3.10.0-862.11.6.el7.x86_64
Install Red Hat Gluster Storage
Run the following command to install Red Hat Gluster Storage:
# yum install redhat-storage-server
- To enable gluster-block, execute the following command:
# yum install gluster-block
Reboot
Reboot the system.
A.2.2. Configuring Port Access
Execute the following commands to open the required ports on each of the Red Hat Gluster Storage nodes:
# firewall-cmd --zone=zone_name --add-port=24010/tcp --add-port=3260/tcp --add-port=111/tcp --add-port=22/tcp --add-port=24007/tcp --add-port=24008/tcp --add-port=49152-49664/tcp
# firewall-cmd --zone=zone_name --add-port=24010/tcp --add-port=3260/tcp --add-port=111/tcp --add-port=22/tcp --add-port=24007/tcp --add-port=24008/tcp --add-port=49152-49664/tcp --permanent
Note
- Port 24010 and 3260 are for gluster-blockd and iSCSI targets respectively.
- The port range starting at 49664 defines the range of ports that can be used by GlusterFS for communication to its volume bricks. In the above example, the total number of bricks allowed is 512. Configure the port range based on the maximum number of bricks that could be hosted on each node.
A.2.3. Enabling Kernel Modules
- You must ensure that the dm_thin_pool and target_core_user modules are loaded on the Red Hat Gluster Storage nodes.
# modprobe target_core_user
# modprobe dm_thin_pool
Execute the following commands to verify that the modules are loaded:
# lsmod | grep dm_thin_pool
# lsmod | grep target_core_user
Note
To ensure these operations are persisted across reboots, create the following files and update each file with the content as mentioned:
# cat /etc/modules-load.d/dm_thin_pool.conf
dm_thin_pool
# cat /etc/modules-load.d/target_core_user.conf
target_core_user
- You must ensure that the dm_multipath module is loaded on all OpenShift Container Platform nodes.
# modprobe dm_multipath
Execute the following command to verify that the module is loaded:
# lsmod | grep dm_multipath
Note
To ensure this operation is persisted across reboots, create the following file and update it with the content as mentioned:
# cat /etc/modules-load.d/dm_multipath.conf
dm_multipath
A.2.4. Starting and Enabling Services
Execute the following commands to start and enable the required services:
# systemctl start sshd
# systemctl enable sshd
# systemctl start glusterd
# systemctl enable glusterd
# systemctl start gluster-blockd
# systemctl enable gluster-blockd
A.3. Setting up the Environment
A.3.1. Preparing the Red Hat OpenShift Container Platform Cluster
- On the master or client, execute the following command to login as the cluster admin user:
# oc login
- On the master or client, execute the following command to create a project, which will contain all the containerized Red Hat Gluster Storage services:
# oc new-project <project_name>
For example:
# oc new-project storage-project
Now using project "storage-project" on server "https://master.example.com:8443"
- After the project is created, execute the following command on the master node to enable the deployment of privileged containers, as the Red Hat Gluster Storage container can only run in privileged mode:
# oc adm policy add-scc-to-user privileged -z default
- Execute the following steps on the master to set up the router:
Note
If a router already exists, proceed to Step 5. To verify if the router is already deployed, execute the following command:
# oc get dc --all-namespaces
To list all routers in all namespaces, execute the following command:
# oc get dc --all-namespaces --selector=router=router
NAMESPACE   NAME     REVISION   DESIRED   CURRENT   TRIGGERED BY
default     router   31         5         5         config
- Execute the following command to enable the deployment of the router:
# oc adm policy add-scc-to-user privileged -z router
- Execute the following command to deploy the router:
# oc adm router storage-project-router --replicas=1
- Edit the subdomain name in the master configuration file located at /etc/origin/master/master-config.yaml. For example:
subdomain: "cloudapps.mystorage.com"
- For OpenShift Container Platform 3.7 and 3.9, execute the following command to restart the services:
# systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers
Note
If the router setup fails, use the port forward method as described in https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.10/html-single/operations_guide/#chap-Documentation-Red_Hat_Gluster_Storage_Container_Native_with_OpenShift_Platform-Port_Fwding .
For more information regarding router setup, see https://access.redhat.com/documentation/en-us/openshift_container_platform/3.10/html-single/configuring_clusters/#setting-up-a-router
- Execute the following command to verify that the router is running:
# oc get dc <router_name>
For example:
# oc get dc storage-project-router
NAME                     REVISION   DESIRED   CURRENT   TRIGGERED BY
storage-project-router   1          1         1         config
Note
Ensure you do not edit the /etc/dnsmasq.conf file until the router has started.
- After the router is running, the client has to be set up to access the services in the OpenShift cluster. Execute the following steps on the client to set up the DNS.
- Execute the following command to find the IP address of the router:
# oc get pods -o wide --all-namespaces | grep router
storage-project   storage-project-router-1-cm874   1/1   Running   119d   10.70.43.132   dhcp43-132.lab.eng.blr.redhat.com
- Edit the /etc/dnsmasq.conf file and add the following line to the file:
address=/.cloudapps.mystorage.com/<Router_IP_Address>
where Router_IP_Address is the IP address of the node where the router is running.
- Restart the dnsmasq service by executing the following command:
# systemctl restart dnsmasq
- Edit /etc/resolv.conf and add the following line:
nameserver 127.0.0.1
For more information regarding setting up the DNS, see https://access.redhat.com/documentation/en-us/openshift_container_platform/3.10/html/installing_clusters/install-config-install-prerequisites#envirornment-requirements.
A.3.2. Deploying Containerized Red Hat Gluster Storage Solutions
The following sections cover the steps to deploy containerized Red Hat Gluster Storage using the cns-deploy tool.
Note
- It is recommended to use a separate cluster for OpenShift Container Platform infrastructure workloads (registry, logging, and metrics) and for application pod storage. Hence, if you have more than 6 nodes, ensure that you create multiple clusters with a minimum of 3 nodes each. The infrastructure cluster should belong to the default project namespace.
- If you want to enable encryption on the Red Hat Openshift Container Storage setup, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.10/html-single/operations_guide/#chap-Documentation-Red_Hat_Gluster_Storage_Container_Native_with_OpenShift_Platform-Enabling_Encryption before proceeding with the following steps.
- You must first provide a topology file for heketi which describes the topology of the Red Hat Gluster Storage nodes and their attached storage devices. A sample, formatted topology file (topology-sample.json) is installed with the ‘heketi-client’ package in the /usr/share/heketi/ directory. An illustrative sketch of the file structure is shown after the field descriptions below.
where:
- clusters: Array of clusters. Each element of the array is a map which describes the cluster as follows.
- nodes: Array of OpenShift nodes that will host the Red Hat Gluster Storage container. Each element of the array is a map which describes the node as follows.
- node: It is a map of the following elements:
- zone: The value represents the zone number that the node belongs to; the zone number is used by heketi for choosing optimum position of bricks by having replicas of bricks in different zones. Hence zone number is similar to a failure domain.
- hostnames: It is a map which lists the manage and storage addresses
- manage: It is the hostname/IP Address that is used by Heketi to communicate with the node
- storage: It is the IP address that is used by other OpenShift nodes to communicate with the node. Storage data traffic will use the interface attached to this IP. This must be the IP address and not the hostname because, in an OpenShift environment, Heketi considers this to be the endpoint too.
- devices: Name of each disk to be added
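A minimal illustrative sketch of the file structure; the hostnames, IP addresses, and device names below are placeholders, and the installed topology-sample.json remains the authoritative starting point:
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": ["node101.example.com"],
              "storage": ["192.168.10.101"]
            },
            "zone": 1
          },
          "devices": ["/dev/sdb", "/dev/sdc"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["node102.example.com"],
              "storage": ["192.168.10.102"]
            },
            "zone": 2
          },
          "devices": ["/dev/sdb", "/dev/sdc"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["node103.example.com"],
              "storage": ["192.168.10.103"]
            },
            "zone": 1
          },
          "devices": ["/dev/sdb", "/dev/sdc"]
        }
      ]
    }
  ]
}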
Note
Copy the topology file from the default location to your location and then edit it:
# cp /usr/share/heketi/topology-sample.json /<Path>/topology.json
Edit the topology file: enter the Red Hat Gluster Storage pod hostname under the node.hostnames.manage section and the IP address under the node.hostnames.storage section. For simplicity, the /usr/share/heketi/topology-sample.json file only sets up 4 nodes with 8 drives each.
Important
Heketi stores its database on a Red Hat Gluster Storage volume. In cases where the volume is down, the Heketi service does not respond due to the unavailability of the volume served by a disabled trusted storage pool. To resolve this issue, restart the trusted storage pool which contains the Heketi volume.
A.3.2.1. Deploying Converged Mode
- Execute the following command on the client to deploy the heketi and Red Hat Gluster Storage pods:
# cns-deploy -n <namespace> -g --admin-key <Key> topology.json
Note
- From Container-Native Storage 3.6, support for S3 compatible Object Store in Red Hat Openshift Container Storage is under technology preview. To deploy S3 compatible object store in Red Hat Openshift Container Storage see Step 1a below.
- In the above command, the value for admin-key is the secret string for the heketi admin user. The heketi administrator will have access to all APIs and commands. The default is to use no secret.
- The BLOCK_HOST_SIZE parameter in cns-deploy controls the size (in GB) of the automatically created Red Hat Gluster Storage volumes that host the gluster-block volumes. This default configuration dynamically creates block-hosting volumes of 500 GB in size when more space is required. If you want to change this value, use --block-host in cns-deploy. For example:
# cns-deploy -n storage-project -g --admin-key secret --block-host 1000 topology.json
Note
For more information on the cns-deploy commands, refer to the man page of cns-deploy.
# cns-deploy --help
- To deploy the S3 compatible object store along with the Heketi and Red Hat Gluster Storage pods, execute the following command:
# cns-deploy /opt/topology.json --deploy-gluster --namespace <namespace> --yes --admin-key <key> --log-file=<path/to/logfile> --object-account <object account name> --object-user <object user name> --object-password <object user password> --verbose
object-account, object-user, and object-password are required credentials for deploying the gluster-s3 container. If any of these are missing, gluster-s3 container deployment will be skipped. object-sc and object-capacity are optional parameters, where object-sc is used to specify a pre-existing StorageClass to use to create Red Hat Gluster Storage volumes to back the object store, and object-capacity is the total capacity of the Red Hat Gluster Storage volume which will store the object data.
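For example (all values below are illustrative placeholders, not values defined in this guide):
# cns-deploy /opt/topology.json --deploy-gluster --namespace storage-project --yes --admin-key adminkey --log-file=/var/log/cns-deploy.log --object-account s3account --object-user s3admin --object-password s3password --verbose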
- Execute the following command to let the client communicate with the container:
# export HEKETI_CLI_SERVER=http://heketi-<project_name>.<sub_domain_name>
For example:
# export HEKETI_CLI_SERVER=http://heketi-storage-project.cloudapps.mystorage.com
To verify that Heketi is loaded with the topology, execute the following command:
# heketi-cli topology info
Note
A.3.2.2. Deploying Independent Mode
- To set up password-less SSH to all Red Hat Gluster Storage nodes, execute the following command on the client for each of the Red Hat Gluster Storage nodes:
# ssh-copy-id -i /root/.ssh/id_rsa root@<ip/hostname_rhgs node>
- Execute the following command on the client to deploy the heketi pod and to create a cluster of Red Hat Gluster Storage nodes:
# cns-deploy -n <namespace> --admin-key <Key> -s /root/.ssh/id_rsa topology.json
Note
- Support for S3 compatible Object Store is under technology preview. To deploy S3 compatible object store see Step 2a below.
- In the above command, the value for admin-key is the secret string for the heketi admin user. The heketi administrator will have access to all APIs and commands. The default is to use no secret.
- The BLOCK_HOST_SIZE parameter in cns-deploy controls the size (in GB) of the automatically created Red Hat Gluster Storage volumes that host the gluster-block volumes. This default configuration dynamically creates block-hosting volumes of 500 GB in size when more space is required. If you want to change this value, use --block-host in cns-deploy. For example:
# cns-deploy -n storage-project -g --admin-key secret --block-host 1000 topology.json
Note
For more information on the cns-deploy commands, refer to the man page of cns-deploy.
# cns-deploy --help
- To deploy the S3 compatible object store along with the Heketi and Red Hat Gluster Storage pods, execute the following command:
# cns-deploy /opt/topology.json --deploy-gluster --namespace <namespace> --admin-key <Key> --yes --log-file=<path/to/logfile> --object-account <object account name> --object-user <object user name> --object-password <object user password> --verbose
object-account, object-user, and object-password are required credentials for deploying the gluster-s3 container. If any of these are missing, gluster-s3 container deployment will be skipped. object-sc and object-capacity are optional parameters, where object-sc is used to specify a pre-existing StorageClass to use to create Red Hat Gluster Storage volumes to back the object store, and object-capacity is the total capacity of the Red Hat Gluster Storage volume which will store the object data.
- Brick multiplexing is a feature that allows adding multiple bricks into one process. This reduces resource consumption and allows us to run more bricks than before with the same memory consumption. Execute the following commands on one of the Red Hat Gluster Storage nodes on each cluster to enable brick-multiplexing:
- Execute the following command to enable brick multiplexing:
# gluster vol set all cluster.brick-multiplex on
For example:
# gluster vol set all cluster.brick-multiplex on
Brick-multiplexing is supported only for container workloads (CNS/CRS). Also it is advised to make sure that either all volumes are in stopped state or no bricks are running before this option is modified. Do you still want to continue? (y/n) y
volume set: success
- Restart the heketidb volumes:
# gluster vol stop heketidbstorage
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: heketidbstorage: success
# gluster vol start heketidbstorage
volume start: heketidbstorage: success
- Execute the following command to let the client communicate with the container:
# export HEKETI_CLI_SERVER=http://heketi-<project_name>.<sub_domain_name>
For example:
# export HEKETI_CLI_SERVER=http://heketi-storage-project.cloudapps.mystorage.com
To verify that Heketi is loaded with the topology, execute the following command:
# heketi-cli topology info
Note
Appendix B. Settings that are destroyed when using uninstall playbook
- glusterfs_config_facts.yml
- glusterfs_registry_facts.yml
ansible-playbook -i <path_to_inventory_file> -e "openshift_storage_glusterfs_wipe=true" /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/uninstall.yml
Appendix C. Revision History
| Revision | Date |
|---|---|
| 1.0-03 | Thu May 2 2019 |
| 1.0-02 | Wed Sep 12 2018 |
| 1.0-01 | Tue Sep 11 2018 |