Deployment Guide
Deploying Red Hat OpenShift Container Storage 3.10.
Edition 0
Abstract
This guide describes how to plan for, prepare, and deploy Red Hat OpenShift Container Storage 3.10 in converged mode or independent mode with OpenShift Container Platform.
Part I. Planning
Chapter 1. Identify your Workloads
- Jenkins
- Elasticsearch
- Prometheus
Chapter 2. Identify your Use Case
2.1. Converged Mode
- OpenShift provides the platform as a service (PaaS) infrastructure based on Kubernetes container management. Basic OpenShift architecture is built around multiple master systems where each system contains a set of nodes.
- Red Hat Gluster Storage provides the containerized distributed storage based on Red Hat Gluster Storage 3.4 container. Each Red Hat Gluster Storage volume is composed of a collection of bricks, where each brick is the combination of a node and an export directory.
- Heketi provides the Red Hat Gluster Storage volume life-cycle management. It creates the Red Hat Gluster Storage volumes dynamically and supports multiple Red Hat Gluster Storage clusters.
- Create multiple persistent volumes (PV) and register these volumes with OpenShift.
- Developers then submit a persistent volume claim (PVC).
- A PV is identified and selected from a pool of available PVs and bound to the PVC.
- The OpenShift pod then uses the PV for persistent storage.
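The workflow above can be exercised with a small manifest once a storage class exists. The following is a minimal sketch of a persistent volume claim; the claim name is hypothetical, and the glusterfs-storage storage class is an assumption based on the classes created later in this guide:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: app-data-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: glusterfs-storage
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
Submitting this claim with oc create -f causes a matching PV to be selected (or, with dynamic provisioning, created) and bound to the claim.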

Figure 2.1. Architecture - Converged Mode for OpenShift Container Platform
2.2. Independent mode
- OpenShift Container Platform administrators might not want to manage storage. Independent mode separates storage management from container management.
- Leverage legacy storage (SAN, Arrays, Old filers): Customers often have storage arrays from traditional storage vendors that have either limited or no support for OpenShift. Independent mode allows users to leverage existing legacy storage for OpenShift Containers.
- Cost effective: In environments where the cost of new infrastructure is a challenge, existing storage arrays can be repurposed to back OpenShift under independent mode. Independent mode is ideal for such situations: you can run Red Hat Gluster Storage inside a VM and serve out LUNs or disks from these storage arrays to OpenShift, offering all the features of the OpenShift storage subsystem, including dynamic provisioning. This is a very useful solution in environments with potential infrastructure additions.
Chapter 3. Verify Prerequisites
3.1. Converged mode
3.1.1. Supported Versions
OpenShift Container Platform | Red Hat Gluster Storage | Red Hat OpenShift Container Storage / OpenShift Container Storage
---|---|---
3.11 | 3.4 | 3.10, 3.11
3.10 | 3.4 | 3.9, 3.10
3.7, 3.9, 3.10 | 3.3.1 | 3.9
3.1.2. Environment Requirements
3.1.2.1. Installing Red Hat OpenShift Container Storage with OpenShift Container Platform on Red Hat Enterprise Linux 7
3.1.2.1.1. Setting up the OpenShift Master as the Client
The OpenShift master is used as a client to run oc commands across the cluster when installing OpenShift. Generally, this is set up as a non-scheduled node in the cluster. This is the default configuration when using the OpenShift installer. You can also choose to install the oc client on a local machine to access the cluster remotely. For more information, see https://access.redhat.com/documentation/en-us/openshift_container_platform/3.10/html/cli_reference/cli-reference-get-started-cli#installing-the-cli.
Execute the following commands to install the heketi-client package:
# subscription-manager repos --enable=rh-gluster-3-client-for-rhel-7-server-rpms
# yum install heketi-client
# subscription-manager repos --disable=rh-gluster-3-client-for-rhel-7-server-rpms
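After installation, both clients can be sanity-checked from the master. This optional verification is only a sketch; heketi-cli can do no more than print its version at this point, because the Heketi server does not exist until after deployment:
# oc whoami
# oc get nodes
# heketi-cli --version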
3.1.3. Red Hat OpenShift Container Platform and Red Hat OpenShift Container Storage Requirements
- All OpenShift nodes on Red Hat Enterprise Linux systems must have glusterfs-client RPMs (glusterfs, glusterfs-client-xlators, glusterfs-libs, glusterfs-fuse) installed. You can verify if the RPMs are installed by running the following command:
# yum list glusterfs glusterfs-client-xlators glusterfs-libs glusterfs-fuse
For more information on installing native client packages, see https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html-single/administration_guide/#Installing_Native_Client
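If any of these packages are missing, they can be installed from the same client repository used for heketi-client above; a sketch:
# subscription-manager repos --enable=rh-gluster-3-client-for-rhel-7-server-rpms
# yum install glusterfs glusterfs-client-xlators glusterfs-libs glusterfs-fuse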
3.1.4. Red Hat Gluster Storage Requirements
- Installing the Heketi packages requires a valid subscription to the Red Hat Gluster Storage Server repositories.
- Red Hat Gluster Storage installations must adhere to the requirements outlined in the Red Hat Gluster Storage Installation Guide.
- The versions of Red Hat OpenShift Container Platform and Red Hat Gluster Storage that you integrate must be compatible, according to Section 3.1.1, “Supported Versions”.
- A fully qualified domain name must be set for each Red Hat Gluster Storage server node. Ensure that the correct DNS records exist, and that the fully qualified domain name is resolvable via both forward and reverse DNS lookup.
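Forward and reverse resolution can be checked with the host utility (from the bind-utils package); the hostname and address below are hypothetical:
# host rhgs-node1.example.com
# host 192.168.10.11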
- To access GlusterFS volumes, the mount.glusterfs command must be available on all schedulable nodes. For RPM-based systems, the glusterfs-fuse package must be installed:
# yum install glusterfs-fuse
This package comes installed on every RHEL system. However, it is recommended to update to the latest available version from Red Hat Gluster Storage. To do this, enable the following RPM repository:
# subscription-manager repos --enable=rh-gluster-3-client-for-rhel-7-server-rpms
If glusterfs-fuse is already installed on the nodes, ensure that the latest version is installed:
# yum update glusterfs-fuse
Important
- After a snapshot is created, it must be accessed through the user-serviceable snapshots feature only. This can be used to copy the old versions of files into the required location. Reverting the volume to a snapshot state is not supported and should never be done, as it might damage the consistency of the data.
- On a volume with snapshots, volume changing operations, such as volume expansion, must not be performed.
3.1.5. Deployment and Scaling Guidelines
- Sizing guidelines on converged mode or independent mode:
- Persistent volumes backed by the file interface: For typical operations, size for 300-500 file-backed persistent volumes per three-node converged mode or independent mode cluster. The maximum supported limit is 1,000 file-backed persistent volumes per three-node cluster in a converged mode or independent mode deployment. Considering that micro-services can dynamically scale on demand, it is recommended that the initial sizing leave sufficient headroom for scaling. If additional scaling is needed, add a new three-node converged mode or independent mode cluster to support the additional persistent volumes. Creation of more than 1,000 persistent volumes per trusted storage pool is not supported for file-based storage.
- Persistent volumes backed by block-based storage: Size for a maximum of 300 persistent volumes per three-node converged mode or independent mode cluster. Be aware that converged mode and independent mode support only OpenShift Container Platform logging and metrics on block-backed persistent volumes.
- Persistent volumes backed by file and block: Size for 300-500 persistent volumes (backed by files) and 100-200 persistent volumes (backed by block). Do not exceed these maximum limits for file-backed or block-backed persistent volumes, or the combined maximum of 1,000 persistent volumes per three-node converged mode or independent mode cluster.
- 3-way distributed-replicated volumes and arbitrated volumes are the only supported volume types.
- Minimum Red Hat OpenShift Container Storage cluster size (4): It is recommended to have a minimum of 4 nodes in the Red Hat OpenShift Container Storage cluster to adequately meet high-availability requirements. Although 3 nodes are required to create a persistent volume claim, the failure of one node in a 3-node cluster prevents the persistent volume claim from being created. The fourth node provides high availability and allows the persistent volume claim to be created even if a node fails.
- Each physical or virtual node that hosts a converged mode or independent mode peer requires the following:
- a minimum of 8 GB RAM and 30 MB per persistent volume.
- the same disk type.
- the heketidb utilizes a 2 GB distributed replica volume.
- Deployment guidelines on converged mode or independent mode:
- In converged mode, you can install the Red Hat OpenShift Container Storage nodes, Heketi, and all provisioner pods on OpenShift Container Platform Infrastructure nodes or OpenShift Container Platform Application nodes.
- In independent mode, you can install Heketi and all provisioner pods on OpenShift Container Platform Infrastructure nodes or on OpenShift Container Platform Application nodes.
- Red Hat Gluster Storage Container Native with OpenShift Container Platform supports up to 14 snapshots per volume by default (snap-max-hard-limit=14 in the Heketi template).
3.2. Independent mode
3.2.1. Supported Versions
OpenShift Container Platform | Red Hat Gluster Storage | Red Hat OpenShift Container Storage / OpenShift Container Storage
---|---|---
3.11 | 3.4 | 3.10, 3.11
3.10 | 3.4 | 3.9, 3.10
3.7, 3.9, 3.10 | 3.3.1 | 3.9
3.2.2. Environment Requirements
3.2.2.1. Installing Red Hat OpenShift Container Storage with OpenShift Container Platform on Red Hat Enterprise Linux 7
3.2.2.1.1. Setting up the OpenShift Master as the Client
The OpenShift master is used as a client to run oc commands across the cluster when installing OpenShift. Generally, this is set up as a non-scheduled node in the cluster. This is the default configuration when using the OpenShift installer. You can also choose to install the oc client on a local machine to access the cluster remotely. For more information, see https://access.redhat.com/documentation/en-us/openshift_container_platform/3.10/html/cli_reference/cli-reference-get-started-cli#installing-the-cli.
Execute the following commands to install the heketi-client package:
# subscription-manager repos --enable=rh-gluster-3-client-for-rhel-7-server-rpms
# yum install heketi-client
# subscription-manager repos --disable=rh-gluster-3-client-for-rhel-7-server-rpms
3.2.3. Red Hat OpenShift Container Platform and Red Hat OpenShift Container Storage Requirements
- All OpenShift nodes on Red Hat Enterprise Linux systems must have glusterfs-client RPMs (glusterfs, glusterfs-client-xlators, glusterfs-libs, glusterfs-fuse) installed. You can verify if the RPMs are installed by running the following command:
# yum list glusterfs glusterfs-client-xlators glusterfs-libs glusterfs-fuse
For more information on installing native client packages, see https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html-single/administration_guide/#Installing_Native_Client
3.2.4. Red Hat Gluster Storage Requirements
- Installing the Heketi packages requires a valid subscription to the Red Hat Gluster Storage Server repositories.
- Red Hat Gluster Storage installations must adhere to the requirements outlined in the Red Hat Gluster Storage Installation Guide.
- The versions of Red Hat OpenShift Container Platform and Red Hat Gluster Storage that you integrate must be compatible, according to Section 3.2.1, “Supported Versions”.
- A fully qualified domain name must be set for each Red Hat Gluster Storage server node. Ensure that the correct DNS records exist, and that the fully qualified domain name is resolvable via both forward and reverse DNS lookup.
- To access GlusterFS volumes, the mount.glusterfs command must be available on all schedulable nodes. For RPM-based systems, the glusterfs-fuse package must be installed:
# yum install glusterfs-fuse
This package comes installed on every RHEL system. However, it is recommended to update to the latest available version from Red Hat Gluster Storage. To do this, enable the following RPM repository:
# subscription-manager repos --enable=rh-gluster-3-client-for-rhel-7-server-rpms
If glusterfs-fuse is already installed on the nodes, ensure that the latest version is installed:
# yum update glusterfs-fuse
Important
- After a snapshot is created, it must be accessed through the user-serviceable snapshots feature only. This can be used to copy the old versions of files into the required location. Reverting the volume to a snapshot state is not supported and should never be done, as it might damage the consistency of the data.
- On a volume with snapshots, volume changing operations, such as volume expansion, must not be performed.
3.2.5. Deployment and Scaling Guidelines
- Sizing guidelines on converged mode or independent mode:
- Persistent volumes backed by the file interface: For typical operations, size for 300-500 file-backed persistent volumes per three-node converged mode or independent mode cluster. The maximum supported limit is 1,000 file-backed persistent volumes per three-node cluster in a converged mode or independent mode deployment. Considering that micro-services can dynamically scale on demand, it is recommended that the initial sizing leave sufficient headroom for scaling. If additional scaling is needed, add a new three-node converged mode or independent mode cluster to support the additional persistent volumes. Creation of more than 1,000 persistent volumes per trusted storage pool is not supported for file-based storage.
- Persistent volumes backed by block-based storage: Size for a maximum of 300 persistent volumes per three-node converged mode or independent mode cluster. Be aware that converged mode and independent mode support only OpenShift Container Platform logging and metrics on block-backed persistent volumes.
- Persistent volumes backed by file and block: Size for 300-500 persistent volumes (backed by files) and 100-200 persistent volumes (backed by block). Do not exceed these maximum limits for file-backed or block-backed persistent volumes, or the combined maximum of 1,000 persistent volumes per three-node converged mode or independent mode cluster.
- 3-way distributed-replicated volumes and arbitrated volumes are the only supported volume types.
- Minimum Red Hat OpenShift Container Storage cluster size (4): It is recommended to have a minimum of 4 nodes in the Red Hat OpenShift Container Storage cluster to adequately meet high-availability requirements. Although 3 nodes are required to create a persistent volume claim, the failure of one node in a 3-node cluster prevents the persistent volume claim from being created. The fourth node provides high availability and allows the persistent volume claim to be created even if a node fails.
- Each physical or virtual node that hosts a Red Hat Gluster Storage converged mode or independent mode peer requires the following:
- a minimum of 8 GB RAM and 30 MB per persistent volume.
- the same disk type.
- the heketidb utilizes a 2 GB distributed replica volume.
- Deployment guidelines on converged mode or independent mode:
- In converged mode, you can install the Red Hat OpenShift Container Storage nodes, Heketi, and all provisioner pods on OpenShift Container Platform Infrastructure nodes or OpenShift Container Platform Application nodes.
- In independent mode, you can install Heketi and all provisioner pods on OpenShift Container Platform Infrastructure nodes or on OpenShift Container Platform Application nodes.
- Red Hat Gluster Storage Container Native with OpenShift Container Platform supports up to 14 snapshots per volume by default (snap-max-hard-limit=14 in the Heketi template).
Part II. Deploy
Chapter 4. Deploying Containerized Storage in Converged Mode
Note
- Red Hat OpenShift Container Storage does not support simultaneous deployment of converged and independent mode with the Ansible workflow. Therefore, you must deploy either converged mode or independent mode: you cannot mix both modes during deployment.
- S3 is deployed manually, not through the Ansible installer. For more information on manual deployment, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.10/html-single/operations_guide/#S3_Object_Store
4.1. Specify Advanced Installer Variables
Two GlusterFS cluster groups are supported:
- glusterfs: A general storage cluster for use by user applications.
- glusterfs-registry: A dedicated storage cluster for use by infrastructure applications such as an integrated OpenShift Container Registry.
Each cluster is deployed by including its name in the [OSEv3:children] group, creating similarly named groups, and then populating the groups with the node information. The clusters can then be configured through a variety of variables in the [OSEv3:vars] group. glusterfs variables begin with openshift_storage_glusterfs_, and glusterfs-registry variables begin with openshift_storage_glusterfs_registry_. A few other variables, such as openshift_hosted_registry_storage_kind, interact with the GlusterFS clusters.
The following variables specify the container images and version tags used for the deployment:
openshift_storage_glusterfs_image
openshift_storage_glusterfs_block_image
openshift_storage_glusterfs_heketi_image
openshift_storage_glusterfs_registry_image
openshift_storage_glusterfs_registry_block_image
openshift_storage_glusterfs_registry_heketi_image
For example:
openshift_storage_glusterfs_image=registry.access.redhat.com/rhgs3/rhgs-server-rhel7:v3.10
openshift_storage_glusterfs_block_image=registry.access.redhat.com/rhgs3/rhgs-gluster-block-prov-rhel7:v3.10
openshift_storage_glusterfs_heketi_image=registry.access.redhat.com/rhgs3/rhgs-volmanager-rhel7:v3.10
- The main playbook for cluster installations can be used to deploy the GlusterFS clusters in tandem with an initial installation of OpenShift Container Platform. This includes deploying an integrated OpenShift Container Registry that uses GlusterFS storage.
- /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml can be used to deploy the clusters onto an existing OpenShift Container Platform installation.
- /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/registry.yml can also be used to deploy the clusters onto an existing OpenShift Container Platform installation. In addition, this will deploy an integrated OpenShift Container Registry which uses GlusterFS storage.
Important
There must not be a pre-existing registry in the OpenShift Container Platform cluster.
- playbooks/openshift-glusterfs/uninstall.yml can be used to remove existing clusters matching the configuration in the inventory hosts file. This is useful for cleaning up the Red Hat OpenShift Container Storage environment in the case of a failed deployment due to configuration errors.
Note
The GlusterFS playbooks are not guaranteed to be idempotent. Running the playbooks more than once for a given installation is currently not supported without deleting the entire GlusterFS installation (including disk data) and starting over.
4.2. Deploying Red Hat OpenShift Container Storage in Converged Mode
- In your inventory file, include the following variables in the [OSEv3:vars] section, adjusting them as needed for your configuration:
[OSEv3:vars]
openshift_storage_glusterfs_namespace=app-storage
openshift_storage_glusterfs_storageclass=true
openshift_storage_glusterfs_storageclass_default=false
openshift_storage_glusterfs_block_deploy=true
openshift_storage_glusterfs_block_host_vol_create=true
openshift_storage_glusterfs_block_host_vol_size=100
openshift_storage_glusterfs_block_storageclass=true
openshift_storage_glusterfs_block_storageclass_default=false
- In your inventory file, add glusterfs in the [OSEv3:children] section to enable the [glusterfs] group:
[OSEv3:children]
masters
etcd
nodes
glusterfs
- Add a [glusterfs] section with entries for each storage node that will host the GlusterFS storage. For each node, set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs. Specifying the variable takes the form:
<hostname_or_ip> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'
For example:
[glusterfs]
node11.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node12.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node13.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node14.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
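Before running the installer, you can confirm that each listed device is bare. This optional check is a sketch using the device names from the example above; wipefs with no options only reports existing signatures, it does not erase them:
# lsblk /dev/xvdc
# wipefs /dev/xvdc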
- Add the hosts listed under [glusterfs] to the [nodes] group:
[nodes]
...
node11.example.com openshift_node_group_name="node-config-infra"
node12.example.com openshift_node_group_name="node-config-infra"
node13.example.com openshift_node_group_name="node-config-infra"
node14.example.com openshift_node_group_name="node-config-infra"
- The preceding steps detail options that need to be added to a larger, complete inventory file. To use the complete inventory file to deploy GlusterFS, provide the file path as an option to the following playbooks:
- For an initial OpenShift Container Platform installation:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
- For a standalone installation onto an existing OpenShift Container Platform cluster:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml
- To verify the deployment, see Section 4.6, “Verify your Deployment”.
4.3. Deploying Red Hat OpenShift Container Storage in Converged Mode with Registry
- In your inventory file, include the following variables in the [OSEv3:vars] section, adjusting them as needed for your configuration:
openshift_storage_glusterfs_registry_namespace=app-storage
openshift_storage_glusterfs_registry_storageclass=true
openshift_storage_glusterfs_registry_storageclass_default=false
openshift_storage_glusterfs_registry_block_deploy=true
openshift_storage_glusterfs_registry_block_host_vol_create=true
openshift_storage_glusterfs_registry_block_host_vol_size=100
openshift_storage_glusterfs_registry_block_storageclass=true
openshift_storage_glusterfs_registry_block_storageclass_default=false
- In your inventory file, set the following variables under [OSEv3:vars]:
[OSEv3:vars]
...
openshift_hosted_registry_storage_kind=glusterfs
openshift_hosted_registry_storage_volume_size=5Gi
openshift_hosted_registry_selector='node-role.kubernetes.io/infra=true'
- Add glusterfs_registry in the [OSEv3:children] section to enable the [glusterfs_registry] group:
[OSEv3:children]
masters
etcd
nodes
glusterfs_registry
- Add a [glusterfs_registry] section with entries for each storage node that will host the GlusterFS storage. For each node, set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs. Specifying the variable takes the form:
<hostname_or_ip> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'
For example:
[glusterfs_registry]
node11.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node12.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node13.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node14.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
- Add the hosts listed under [glusterfs_registry] to the [nodes] group:
[nodes]
...
node11.example.com openshift_node_group_name="node-config-compute"
node12.example.com openshift_node_group_name="node-config-compute"
node13.example.com openshift_node_group_name="node-config-compute"
node14.example.com openshift_node_group_name="node-config-compute"
- The preceding steps detail options that need to be added to a larger, complete inventory file. To use the complete inventory file to deploy GlusterFS, provide the file path as an option to the following playbooks:
- For an initial OpenShift Container Platform installation:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
- For a standalone installation onto an existing OpenShift Container Platform cluster:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml
- To verify the deployment, see Section 4.6, “Verify your Deployment”.
4.4. Deploying Red Hat OpenShift Container Storage in Converged Mode with Logging and Metrics
- In your inventory file, set the following variables under [OSEv3:vars]:
[OSEv3:vars]
...
openshift_metrics_install_metrics=true
openshift_metrics_storage_kind=dynamic
openshift_metrics_hawkular_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_metrics_cassandra_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_metrics_heapster_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_metrics_storage_volume_size=10Gi
openshift_metrics_cassandra_pvc_storage_class_name="glusterfs-registry-block"
openshift_logging_install_logging=true
openshift_logging_es_pvc_dynamic=true
openshift_logging_kibana_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_curator_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_es_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_es_pvc_size=10Gi
openshift_logging_es_pvc_storage_class_name="glusterfs-registry-block"
openshift_storage_glusterfs_registry_namespace=infra-storage
openshift_storage_glusterfs_registry_storageclass=false
openshift_storage_glusterfs_registry_storageclass_default=false
openshift_storage_glusterfs_registry_block_deploy=true
openshift_storage_glusterfs_registry_block_host_vol_create=true
openshift_storage_glusterfs_registry_block_host_vol_size=100
openshift_storage_glusterfs_registry_block_storageclass=true
openshift_storage_glusterfs_registry_block_storageclass_default=false
Note
See the GlusterFS role README, https://github.com/openshift/openshift-ansible/tree/master/roles/openshift_storage_glusterfs, for details on these and other variables.
- Add glusterfs_registry in the [OSEv3:children] section to enable the [glusterfs_registry] group:
[OSEv3:children]
masters
etcd
nodes
glusterfs_registry
- Add a [glusterfs_registry] section with entries for each storage node that will host the GlusterFS storage. For each node, set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs. Specifying the variable takes the form:
<hostname_or_ip> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'
For example:
[glusterfs_registry]
node11.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node12.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node13.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node14.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
- Add the hosts listed under [glusterfs_registry] to the [nodes] group:
[nodes]
...
node11.example.com openshift_node_group_name="node-config-compute"
node12.example.com openshift_node_group_name="node-config-compute"
node13.example.com openshift_node_group_name="node-config-compute"
node14.example.com openshift_node_group_name="node-config-compute"
- The preceding steps detail options that need to be added to a larger, complete inventory file. To use the complete inventory file to deploy GlusterFS, provide the file path as an option to the following playbooks:
- For an initial OpenShift Container Platform installation:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
- For a standalone installation onto an existing OpenShift Container Platform cluster:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml
ansible-playbook -i <path_to_the_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml
ansible-playbook -i <path_to_the_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-metrics/config.yml
- To verify the deployment, see Section 4.6, “Verify your Deployment”.
4.5. Deploying Red Hat OpenShift Container Storage in Converged Mode for Applications with Registry, Logging, and Metrics
- In your inventory file, set the following variables under [OSEv3:vars]:
[OSEv3:vars]
...
openshift_hosted_registry_selector='node-role.kubernetes.io/infra=true'
openshift_hosted_registry_storage_volume_size=5Gi
openshift_hosted_registry_storage_kind=glusterfs
openshift_metrics_install_metrics=true
openshift_metrics_storage_kind=dynamic
openshift_metrics_hawkular_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_metrics_cassandra_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_metrics_heapster_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_metrics_storage_volume_size=10Gi
openshift_metrics_cassandra_pvc_storage_class_name="glusterfs-registry-block"
openshift_logging_install_logging=true
openshift_logging_es_pvc_dynamic=true
openshift_logging_kibana_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_curator_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_es_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_es_pvc_size=10Gi
openshift_logging_es_pvc_storage_class_name="glusterfs-registry-block"
openshift_storage_glusterfs_namespace=app-storage
openshift_storage_glusterfs_storageclass=true
openshift_storage_glusterfs_storageclass_default=false
openshift_storage_glusterfs_block_deploy=false
openshift_storage_glusterfs_registry_namespace=infra-storage
openshift_storage_glusterfs_registry_storageclass=false
openshift_storage_glusterfs_registry_storageclass_default=false
openshift_storage_glusterfs_registry_block_deploy=true
openshift_storage_glusterfs_registry_block_host_vol_create=true
openshift_storage_glusterfs_registry_block_host_vol_size=100
openshift_storage_glusterfs_registry_block_storageclass=true
openshift_storage_glusterfs_registry_block_storageclass_default=false
- Add glusterfs and glusterfs_registry in the [OSEv3:children] section to enable the [glusterfs] and [glusterfs_registry] groups:
[OSEv3:children]
...
glusterfs
glusterfs_registry
- Add [glusterfs] and [glusterfs_registry] sections with entries for each storage node that will host the GlusterFS storage. For each node, set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs. Specifying the variable takes the form:
<hostname_or_ip> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'
For example:
[glusterfs]
node11.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node12.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node13.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node14.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
[glusterfs_registry]
node15.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node16.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node17.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
- Add the hosts listed under [glusterfs] and [glusterfs_registry] to the [nodes] group:
[nodes]
...
node11.example.com openshift_node_group_name="node-config-compute"
node12.example.com openshift_node_group_name="node-config-compute"
node13.example.com openshift_node_group_name="node-config-compute"
node14.example.com openshift_node_group_name="node-config-compute"
node15.example.com openshift_node_group_name="node-config-infra"
node16.example.com openshift_node_group_name="node-config-infra"
node17.example.com openshift_node_group_name="node-config-infra"
- The preceding steps detail options that need to be added to a larger, complete inventory file. To use the complete inventory file to deploy GlusterFS, provide the file path as an option to the following playbooks:
- For an initial OpenShift Container Platform installation:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
- For a standalone installation onto an existing OpenShift Container Platform cluster:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml
ansible-playbook -i <path_to_the_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml
ansible-playbook -i <path_to_the_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-metrics/config.yml
- To verify the deployment, see Section 4.6, “Verify your Deployment”.
4.6. Verify your Deployment
- Installation Verification for converged mode
- Examine the installation for the app-storage namespace by running the following commands. This can be done from an OCP master node or the Ansible deploy host that has the oc CLI installed.
# switch to the app-storage namespace
oc project app-storage
# get the list of pods here (4 gluster pods + 1 heketi pod + 1 gluster block provisioner pod)
oc get pods
NAME                                          READY     STATUS    RESTARTS   AGE
glusterblock-storage-provisioner-dc-1-mphfp   1/1       Running   0          1h
glusterfs-storage-6tlzx                       1/1       Running   0          1h
glusterfs-storage-lksps                       1/1       Running   0          1h
glusterfs-storage-nf7qk                       1/1       Running   0          1h
glusterfs-storage-tcnd8                       1/1       Running   0          1h
heketi-storage-1-5m6cl                        1/1       Running   0          1h
- Examine the installation for the infra-storage namespace by running the following commands. This can be done from an OCP master node or the Ansible deploy host that has the oc CLI installed.
# switch to the infra-storage namespace
oc project infra-storage
# list the pods here (3 gluster pods, 1 heketi pod and 1 glusterblock-provisioner pod)
oc get pods
NAME                                           READY     STATUS    RESTARTS   AGE
glusterblock-registry-provisioner-dc-1-28sfc   1/1       Running   0          1h
glusterfs-registry-cjp49                       1/1       Running   0          1h
glusterfs-registry-lhgjj                       1/1       Running   0          1h
glusterfs-registry-v4vqx                       1/1       Running   0          1h
heketi-registry-5-lht6s                        1/1       Running   0          1h
- Check for the existence of the registry PVC backed by Red Hat OpenShift Container Storage on OCP infrastructure. This volume was statically provisioned by the openshift-ansible deployment.
oc get pvc -n default
NAME             STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
registry-claim   Bound     pvc-7ca4c8de-10ca-11e8-84d3-069df2c4f284   25Gi       RWX                          1h
Check the registry DeploymentConfig to verify that it is using this glusterfs volume.
oc describe dc/docker-registry -n default | grep -A3 Volumes
  Volumes:
   registry-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  registry-claim
- Storage Provisioning Verification for Converged Mode
- The Storage Class resources can be used to create new PV claims for verification of the RHOCS deployment. Validate PV provisioning using the following OCP Storage Class created during the RHOCS deployment:
- Use the glusterfs-storage-block OCP Storage Class resource to create new PV claims if you deployed RHOCS using Section 4.2, “Deploying Red Hat OpenShift Container Storage in Converged Mode”.
- Use the glusterfs-registry-block OCP Storage Class resource to create new PV claims if you deployed RHOCS using one of the workflows in Section 4.3, Section 4.4, or Section 4.5.
# oc get storageclass
NAME                      TYPE
glusterfs-storage         kubernetes.io/glusterfs
glusterfs-storage-block   gluster.org/glusterblock
$ cat pvc-file.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rhocs-file-claim1
  annotations:
    volume.beta.kubernetes.io/storage-class: glusterfs-storage
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
# cat pvc-block.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rhocs-block-claim1
  annotations:
    volume.beta.kubernetes.io/storage-class: glusterfs-storage-block
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
# oc create -f pvc-file.yaml
# oc create -f pvc-block.yaml
Validate that the two PVCs and respective PVs are created correctly:
# oc get pvc
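As a further check, a pod can mount one of the new claims and write to it. This is an optional sketch; the pod name is hypothetical, and any small image with a shell will do:
# cat pvc-test-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: rhocs-pvc-test
spec:
  containers:
  - name: tester
    image: busybox
    command: ["sh", "-c", "echo ok > /mnt/test && sleep 3600"]
    volumeMounts:
    - mountPath: /mnt
      name: data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: rhocs-file-claim1
# oc create -f pvc-test-pod.yaml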
- Using the heketi-client for Verification
- The heketi-client package needs to be installed on the Ansible deploy host or on an OCP master. Once it is installed, two new files should be created to easily export the required environment variables to run the heketi-client commands (or heketi-cli). The content of each file as well as useful heketi-cli commands are detailed here. Create a new file (for example, "heketi-exports-app") with the following contents:
export HEKETI_POD=$(oc get pods -l glusterfs=heketi-storage-pod -n app-storage -o jsonpath="{.items[0].metadata.name}")
export HEKETI_CLI_SERVER=http://$(oc get route/heketi-storage -n app-storage -o jsonpath='{.spec.host}')
export HEKETI_CLI_KEY=$(oc get pod/$HEKETI_POD -n app-storage -o jsonpath='{.spec.containers[0].env[?(@.name=="HEKETI_ADMIN_KEY")].value}')
export HEKETI_ADMIN_KEY_SECRET=$(echo -n ${HEKETI_CLI_KEY} | base64)
export HEKETI_CLI_USER=admin
Source the file to create the HEKETI app-storage environment variables:
source heketi-exports-app
# see if heketi is alive
curl -w '\n' ${HEKETI_CLI_SERVER}/hello
Hello from Heketi
# ask heketi about the cluster it knows about
heketi-cli cluster list
Clusters:
Id:56ed234a384cef7dbef6c4aa106d4477 [file][block]
# ask heketi about the topology of the RHOCS cluster for apps
heketi-cli topology info
# ask heketi about the volumes already created (one for the heketi db should exist after the OCP initial installation)
heketi-cli volume list
Id:d71a4cbea22af3453615a9020f261b5c Cluster:56ed234a384cef7dbef6c4aa106d4477 Name:heketidbstorage
Create a new file (for example, "heketi-exports-infra") with the following contents:
export HEKETI_POD=$(oc get pods -l glusterfs=heketi-registry-pod -n infra-storage -o jsonpath="{.items[0].metadata.name}")
export HEKETI_CLI_SERVER=http://$(oc get route/heketi-registry -n infra-storage -o jsonpath='{.spec.host}')
export HEKETI_CLI_USER=admin
export HEKETI_CLI_KEY=$(oc get pod/$HEKETI_POD -n infra-storage -o jsonpath='{.spec.containers[0].env[?(@.name=="HEKETI_ADMIN_KEY")].value}')
export HEKETI_ADMIN_KEY_SECRET=$(echo -n ${HEKETI_CLI_KEY} | base64)
Source the file to create the HEKETI infra-storage environment variables:
source heketi-exports-infra
# see if heketi is alive
curl -w '\n' ${HEKETI_CLI_SERVER}/hello
Hello from Heketi
# ask heketi about the cluster it knows about (the RHOCS cluster for infrastructure)
heketi-cli cluster list
Clusters:
Id:baf91b261cbca2bb4b62caece63f60d0 [file][block]
# ask heketi about the volumes already created
heketi-cli volume list
Id:77baed02f79f4518326d8cc1db6c7af8 Cluster:baf91b261cbca2bb4b62caece63f60d0 Name:heketidbstorage
4.7. Creating an Arbiter Volume (optional)
- Better consistency: When an arbiter is configured, arbitration logic uses client-side quorum in auto mode to prevent file operations that would lead to split-brain conditions.
- Less disk space required: Because an arbiter brick only stores file names and metadata, an arbiter brick can be much smaller than the other bricks in the volume.
Execute the following commands to install the heketi-client package:
# subscription-manager repos --enable=rh-gluster-3-for-rhel-7-server-rpms
# yum install heketi-client
4.7.1. Creating an Arbiter Volume
4.7.1.1. Creating an Arbiter Volume using Heketi CLI
# heketi-cli volume create --size=4 --gluster-volume-options='user.heketi.arbiter true'
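The Heketi volume options shown in the next section can also be combined on the command line. For example, a sketch that creates a 4 GB arbiter volume with an average file size hint of 1024 (the same option string used in the storageclass file below):
# heketi-cli volume create --size=4 --gluster-volume-options='user.heketi.arbiter true,user.heketi.average-file-size 1024'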
4.7.1.2. Creating an Arbiter Volume using the Storageclass File
To create an arbiter volume using a storageclass file, set one or both of the following volume options in the storageclass:
- user.heketi.arbiter true
- (Optional) user.heketi.average-file-size 1024
# cat glusterfs-storageclass.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: gluster-container
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-storage-project.cloudapps.mystorage.com"
  restuser: "admin"
  volumetype: "replicate:3"
  clusterid: "630372ccdc720a92c681fb928f27b53f,796e6db1981f369ea0340913eeea4c9a"
  secretNamespace: "default"
  secretName: "heketi-secret"
  volumeoptions: "user.heketi.arbiter true,user.heketi.average-file-size 1024"
  volumenameprefix: "test-vol"
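Note that a StorageClass has no spec section; fields such as persistentVolumeReclaimPolicy, accessModes, and a storage request belong to persistent volumes and claims. A claim against this class might look like the following sketch (the claim name is hypothetical):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: arbiter-claim1
  annotations:
    volume.beta.kubernetes.io/storage-class: gluster-container
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi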
Chapter 5. Deploy Containerized Storage in Independent Mode
Deployment workflow | Registry | Metrics | Logging | Applications
---|---|---|---|---
Section 5.3, “Deploying Red Hat OpenShift Container Storage in Independent Mode” | ✔ | | |
Section 5.4, “Deploying Red Hat OpenShift Container Storage in Independent Mode for Applications with Registry, Logging, and Metrics” | ✔ | ✔ | ✔ | ✔
Note
- Red Hat OpenShift Container Storage does not support simultaneous deployment of converged and independent mode with the Ansible workflow. Therefore, you must deploy either converged mode or independent mode: you cannot mix both modes during deployment.
- S3 is deployed manually, not through the Ansible installer. For more information on manual deployment, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.10/html-single/operations_guide/#S3_Object_Store
5.1. Setting up a RHGS Cluster
5.1.1. Installing Red Hat Gluster Storage Server on Red Hat Enterprise Linux (Layered Install)
Important
Ensure that the system has a /var partition that is large enough (50 GB - 100 GB) for log files, geo-replication related miscellaneous files, and other files.
Perform a base install of Red Hat Enterprise Linux 7 Server
Independent mode is supported only on Red Hat Enterprise Linux 7.
Register the System with Subscription Manager
Run the following command and enter your Red Hat Network username and password to register the system with the Red Hat Network:
# subscription-manager register
Identify Available Entitlement Pools
Run the following command to find entitlement pools containing the repositories required to install Red Hat Gluster Storage:
# subscription-manager list --available
Attach Entitlement Pools to the System
Use the pool identifiers located in the previous step to attach the Red Hat Enterprise Linux Server and Red Hat Gluster Storage entitlements to the system. Run the following command to attach the entitlements:
# subscription-manager attach --pool=[POOLID]
For example:
# subscription-manager attach --pool=8a85f9814999f69101499c05aa706e47
Enable the Required Channels
For Red Hat Gluster Storage 3.4 on Red Hat Enterprise Linux 7.x, run the following commands to enable the repositories required to install Red Hat Gluster Storage:
# subscription-manager repos --enable=rhel-7-server-rpms
# subscription-manager repos --enable=rh-gluster-3-for-rhel-7-server-rpms
# subscription-manager repos --enable=rhel-7-server-extras-rpms
Verify if the Channels are Enabled
Run the following command to verify if the channels are enabled:
# yum repolist
Update all packages
Ensure that all packages are up to date by running the following command:
# yum update
Important
If any kernel packages are updated, reboot the system with the following command:
# shutdown -r now
Kernel Version Requirement
Independent mode requires kernel version 3.10.0-690.el7 or higher on the system. Verify the installed and running kernel versions by running the following commands:
# rpm -q kernel
kernel-3.10.0-862.11.6.el7.x86_64
# uname -r
3.10.0-862.11.6.el7.x86_64
Install Red Hat Gluster Storage
Run the following command to install Red Hat Gluster Storage:
# yum install redhat-storage-server
To enable gluster-block, execute the following command:
# yum install gluster-block
Reboot
Reboot the system.
5.1.2. Configuring Port Access
Execute the following commands on each Red Hat Gluster Storage node to open the required ports, replacing zone_name with the active firewall zone:
# firewall-cmd --zone=zone_name --add-port=24010/tcp --add-port=3260/tcp --add-port=111/tcp --add-port=22/tcp --add-port=24007/tcp --add-port=24008/tcp --add-port=49152-49664/tcp
# firewall-cmd --zone=zone_name --add-port=24010/tcp --add-port=3260/tcp --add-port=111/tcp --add-port=22/tcp --add-port=24007/tcp --add-port=24008/tcp --add-port=49152-49664/tcp --permanent
Note
- Ports 24010 and 3260 are for gluster-blockd and iSCSI targets, respectively.
- The port range starting at 49152 defines the range of ports that can be used by GlusterFS for communication to its volume bricks. In the above example, the total number of bricks allowed is 512. Configure the port range based on the maximum number of bricks that could be hosted on each node.
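The applied rules can be confirmed by listing the open ports for the zone; zone_name is a placeholder, as above:
# firewall-cmd --zone=zone_name --list-ports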
5.1.3. Enabling Kernel Modules
- You must ensure that the dm_thin_pool and target_core_user modules are loaded on the Red Hat Gluster Storage nodes:
# modprobe target_core_user
# modprobe dm_thin_pool
Execute the following commands to verify that the modules are loaded:
# lsmod | grep dm_thin_pool
# lsmod | grep target_core_user
Note
To ensure these operations are persisted across reboots, create the following files and update each file with the content as mentioned:
# cat /etc/modules-load.d/dm_thin_pool.conf
dm_thin_pool
# cat /etc/modules-load.d/target_core_user.conf
target_core_user
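The two files can each be created in one step; for example:
# echo dm_thin_pool > /etc/modules-load.d/dm_thin_pool.conf
# echo target_core_user > /etc/modules-load.d/target_core_user.conf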
- You must ensure that the dm_multipath module is loaded on all OpenShift Container Platform nodes:
# modprobe dm_multipath
Execute the following command to verify that the module is loaded:
# lsmod | grep dm_multipath
Note
To ensure these operations are persisted across reboots, create the following file and update it with the content as mentioned:
# cat /etc/modules-load.d/dm_multipath.conf
dm_multipath
5.1.4. Starting and Enabling Services
# systemctl start sshd
# systemctl enable sshd
# systemctl start glusterd
# systemctl enable glusterd
# systemctl start gluster-blockd
# systemctl enable gluster-blockd
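Optionally, confirm that the services are running and enabled before proceeding:
# systemctl status glusterd gluster-blockd
# systemctl is-enabled glusterd gluster-blockd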
5.2. Specify Advanced Installer Variables
Two GlusterFS cluster groups are supported:
- glusterfs: A general storage cluster for use by user applications.
- glusterfs-registry: A dedicated storage cluster for use by infrastructure applications such as an integrated OpenShift Container Registry.
Each cluster is deployed by including its name in the [OSEv3:children] group, creating similarly named groups, and then populating the groups with the node information. The clusters can then be configured through a variety of variables in the [OSEv3:vars] group. glusterfs variables begin with openshift_storage_glusterfs_, and glusterfs-registry variables begin with openshift_storage_glusterfs_registry_. A few other variables, such as openshift_hosted_registry_storage_kind, interact with the GlusterFS clusters.
The following variables specify the component versions used for the deployment:
openshift_storage_glusterfs_version
openshift_storage_glusterfs_block_version
openshift_storage_glusterfs_heketi_version
openshift_storage_glusterfs_registry_version
openshift_storage_glusterfs_registry_block_version
openshift_storage_glusterfs_registry_heketi_version
For example, the image variables can be set as follows:
openshift_storage_glusterfs_image=registry.access.redhat.com/rhgs3/rhgs-server-rhel7:v3.10
openshift_storage_glusterfs_block_image=registry.access.redhat.com/rhgs3/rhgs-gluster-block-prov-rhel7:v3.10
openshift_storage_glusterfs_heketi_image=registry.access.redhat.com/rhgs3/rhgs-volmanager-rhel7:v3.10
- The main playbook for cluster installations can be used to deploy the GlusterFS clusters in tandem with an initial installation of OpenShift Container Platform. This includes deploying an integrated OpenShift Container Registry that uses GlusterFS storage.
- /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml can be used to deploy the clusters onto an existing OpenShift Container Platform installation.
- /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/registry.yml can also be used to deploy the clusters onto an existing OpenShift Container Platform installation. In addition, this will deploy an integrated OpenShift Container Registry which uses GlusterFS storage.
Important
There must not be a pre-existing registry in the OpenShift Container Platform cluster.
- playbooks/openshift-glusterfs/uninstall.yml can be used to remove existing clusters matching the configuration in the inventory hosts file. This is useful for cleaning up the Red Hat OpenShift Container Storage environment in the case of a failed deployment due to configuration errors.
Note
The GlusterFS playbooks are not guaranteed to be idempotent. Running the playbooks more than once for a given installation is not supported without deleting the entire GlusterFS installation (including disk data) and starting over.
5.3. Deploying Red Hat OpenShift Container Storage in Independent Mode
- In your inventory file, add glusterfs in the [OSEv3:children] section to enable the [glusterfs] group:
[OSEv3:children]
masters
etcd
nodes
glusterfs
- Include the following variables in the [OSEv3:vars] section, adjusting them as needed for your configuration:
[OSEv3:vars]
...
openshift_storage_glusterfs_namespace=app-storage
openshift_storage_glusterfs_storageclass=true
openshift_storage_glusterfs_storageclass_default=false
openshift_storage_glusterfs_block_deploy=true
openshift_storage_glusterfs_block_host_vol_create=true
openshift_storage_glusterfs_block_host_vol_size=100
openshift_storage_glusterfs_block_storageclass=true
openshift_storage_glusterfs_block_storageclass_default=false
openshift_storage_glusterfs_is_native=false
openshift_storage_glusterfs_heketi_is_native=true
openshift_storage_glusterfs_heketi_executor=ssh
openshift_storage_glusterfs_heketi_ssh_port=22
openshift_storage_glusterfs_heketi_ssh_user=root
openshift_storage_glusterfs_heketi_ssh_sudo=false
openshift_storage_glusterfs_heketi_ssh_keyfile="/root/.ssh/id_rsa"
- Add a [glusterfs] section with entries for each storage node that will host the GlusterFS storage. For each node, set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs. Also, set glusterfs_ip to the IP address of the node. Specifying the variable takes the form:
<hostname_or_ip> glusterfs_ip=<ip_address> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'
For example:
[glusterfs]
gluster1.example.com glusterfs_ip=192.168.10.11 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
gluster2.example.com glusterfs_ip=192.168.10.12 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
gluster3.example.com glusterfs_ip=192.168.10.13 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
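The ssh executor configured above requires that Heketi can reach each storage node as the named user with the named key file, without a password. One possible setup sketch, assuming root SSH access to the example hosts above:
# ssh-keygen -f /root/.ssh/id_rsa -N ''
# ssh-copy-id -i /root/.ssh/id_rsa root@gluster1.example.com
# ssh-copy-id -i /root/.ssh/id_rsa root@gluster2.example.com
# ssh-copy-id -i /root/.ssh/id_rsa root@gluster3.example.com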
- The preceding steps detail options that need to be added to a larger, complete inventory file. To use the complete inventory file to deploy GlusterFS, provide the file path as an option to the following playbooks:
- For an initial OpenShift Container Platform installation:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
- For a standalone installation onto an existing OpenShift Container Platform cluster:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml
5.4. Deploying Red Hat OpenShift Container Storage in Independent Mode for Applications with Registry, Logging, and Metrics
- In your inventory file, set the following variables under [OSEv3:vars]:
[OSEv3:vars]
...
openshift_hosted_registry_selector='node-role.kubernetes.io/infra=true'
openshift_hosted_registry_storage_volume_size=5Gi
openshift_hosted_registry_storage_kind=glusterfs
openshift_metrics_install_metrics=true
openshift_metrics_storage_kind=dynamic
openshift_metrics_hawkular_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_metrics_cassandra_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_metrics_heapster_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_metrics_storage_volume_size=10Gi
openshift_metrics_cassandra_pvc_storage_class_name="glusterfs-registry-block"
openshift_logging_install_logging=true
openshift_logging_es_pvc_dynamic=true
openshift_logging_kibana_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_curator_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_es_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_es_pvc_size=10Gi
openshift_logging_es_pvc_storage_class_name="glusterfs-registry-block"
openshift_storage_glusterfs_namespace=app-storage
openshift_storage_glusterfs_storageclass=true
openshift_storage_glusterfs_storageclass_default=false
openshift_storage_glusterfs_block_deploy=false
openshift_storage_glusterfs_is_native=false
openshift_storage_glusterfs_heketi_is_native=true
openshift_storage_glusterfs_heketi_executor=ssh
openshift_storage_glusterfs_heketi_ssh_port=22
openshift_storage_glusterfs_heketi_ssh_user=root
openshift_storage_glusterfs_heketi_ssh_sudo=false
openshift_storage_glusterfs_heketi_ssh_keyfile="/root/.ssh/id_rsa"
openshift_storage_glusterfs_registry_namespace=infra-storage
openshift_storage_glusterfs_registry_storageclass=false
openshift_storage_glusterfs_registry_storageclass_default=false
openshift_storage_glusterfs_registry_block_deploy=true
openshift_storage_glusterfs_registry_block_host_vol_create=true
openshift_storage_glusterfs_registry_block_host_vol_size=100
openshift_storage_glusterfs_registry_block_storageclass=true
openshift_storage_glusterfs_registry_block_storageclass_default=false
openshift_storage_glusterfs_registry_is_native=false
openshift_storage_glusterfs_registry_heketi_is_native=true
openshift_storage_glusterfs_registry_heketi_executor=ssh
openshift_storage_glusterfs_registry_heketi_ssh_port=22
openshift_storage_glusterfs_registry_heketi_ssh_user=root
openshift_storage_glusterfs_registry_heketi_ssh_sudo=false
openshift_storage_glusterfs_registry_heketi_ssh_keyfile="/root/.ssh/id_rsa"
- Add glusterfs and glusterfs_registry in the [OSEv3:children] section to enable the [glusterfs] and [glusterfs_registry] groups:
[OSEv3:children]
...
glusterfs
glusterfs_registry
- Add [glusterfs] and [glusterfs_registry] sections with entries for each storage node that will host the GlusterFS storage. For each node, set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster, and set glusterfs_ip to the IP address of the node. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs. Specifying the variable takes the form:
<hostname_or_ip> glusterfs_ip=<ip_address> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'
For example:
[glusterfs]
node11.example.com glusterfs_ip=192.168.10.11 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node12.example.com glusterfs_ip=192.168.10.12 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node13.example.com glusterfs_ip=192.168.10.13 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node14.example.com glusterfs_ip=192.168.10.14 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
[glusterfs_registry]
node15.example.com glusterfs_ip=192.168.10.15 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node16.example.com glusterfs_ip=192.168.10.16 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node17.example.com glusterfs_ip=192.168.10.17 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
- The preceding steps detail options that need to be added to a larger, complete inventory file. To use the complete inventory file to deploy GlusterFS, provide the file path as an option to the following playbooks:
- For an initial OpenShift Container Platform installation:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
- For a standalone installation onto an existing OpenShift Container Platform cluster:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml
- To verify the deployment, see Section 5.5, “Verify your Deployment”.
5.5. Verify your Deployment
- Installation Verification for Independent mode
- Examine the installation for the app-storage namespace by running the following commands:
# switch to the app-storage namespace
oc project app-storage
# get the list of pods here (1 heketi pod)
oc get pods
NAME                     READY     STATUS    RESTARTS   AGE
heketi-storage-1-v5skm   1/1       Running   0          1h
- Examine the installation for the infra-storage namespace by running the following commands. This can be done from an OCP master node or the Ansible deploy host that has the oc CLI installed.
# switch to the infra-storage namespace
oc project infra-storage
# list the pods here (1 heketi pod and 1 glusterblock-provisioner pod)
oc get pods
NAME                                           READY     STATUS    RESTARTS   AGE
glusterblock-registry-provisioner-dc-1-28sfc   1/1       Running   0          1h
heketi-registry-5-lht6s                        1/1       Running   0          1h
- Check the existence of the registry PVC backed by OCP infrastructure Red Hat Openshift Container Storage. This volume was statically provisioned by openshift-ansible deployment.
oc get pvc -n default NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE registry-claim Bound pvc-7ca4c8de-10ca-11e8-84d3-069df2c4f284 25Gi RWX 1h
Check the registry DeploymentConfig to verify that it is using this glusterfs volume.
oc describe dc/docker-registry -n default | grep -A3 Volumes
Volumes:
 registry-storage:
  Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
  ClaimName: registry-claim
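To trace the claim down to its backing volume, the PV named in the VOLUME column can be inspected directly. This is an optional check; the volume name below is taken from the example output above and will differ in your environment:
oc get pv pvc-7ca4c8de-10ca-11e8-84d3-069df2c4f284
The PV's claim reference should point back to registry-claim in the default namespace.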
- Storage Provisioning Verification for Independent Mode
- Validate PV provisioning using the glusterfs and glusterblock OCP Storage Classes created during the OCP deployment. The two Storage Class resources, glusterfs-storage and glusterfs-storage-block, can be used to create new PVCs for verification of the Red Hat Openshift Container Storage deployment. A new PVC that uses the glusterfs-storage storage class consumes storage available to the gluster pods in the app-storage project.
# oc get storageclass
NAME                      TYPE
glusterfs-storage         kubernetes.io/glusterfs
glusterfs-storage-block   gluster.org/glusterblock

$ cat pvc-file.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rhocs-file-claim1
  annotations:
    volume.beta.kubernetes.io/storage-class: glusterfs-storage
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
# cat pvc-block.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rhocs-block-claim1
  annotations:
    volume.beta.kubernetes.io/storage-class: glusterfs-storage-block
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
# oc create -f pvc-file.yaml
# oc create -f pvc-block.yaml
Validate that the two PVCs and respective PVs are created correctly:
# oc get pvc
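Both claims should reach the Bound status within a minute or so. The following output is an illustrative sketch only; the generated volume names (shown here as pvc-<uid>) and ages will differ in your environment:
# oc get pvc
NAME                 STATUS    VOLUME      CAPACITY   ACCESSMODES   STORAGECLASS              AGE
rhocs-file-claim1    Bound     pvc-<uid>   5Gi        RWX           glusterfs-storage         1m
rhocs-block-claim1   Bound     pvc-<uid>   5Gi        RWO           glusterfs-storage-block   1m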
- Using the heketi-client for Verification
- The heketi-client package needs to be installed on the Ansible deploy host or on an OCP master. Once it is installed, two new files should be created to easily export the required environment variables for running the heketi-client commands (or heketi-cli). The content of each file, as well as useful heketi-cli commands, is detailed here. Create a new file (e.g. "heketi-exports-app") with the following contents:
export HEKETI_POD=$(oc get pods -l glusterfs=heketi-storage-pod -n app-storage -o jsonpath="{.items[0].metadata.name}")
export HEKETI_CLI_SERVER=http://$(oc get route/heketi-storage -n app-storage -o jsonpath='{.spec.host}')
export HEKETI_CLI_KEY=$(oc get pod/$HEKETI_POD -n app-storage -o jsonpath='{.spec.containers[0].env[?(@.name=="HEKETI_ADMIN_KEY")].value}')
export HEKETI_ADMIN_KEY_SECRET=$(echo -n ${HEKETI_CLI_KEY} | base64)
export HEKETI_CLI_USER=admin
Source the file to create the HEKETI app-storage environment variables:
source heketi-exports-app
# see if heketi is alive
curl -w '\n' ${HEKETI_CLI_SERVER}/hello
Hello from Heketi
# ask heketi about the cluster it knows about
heketi-cli cluster list
Clusters:
Id:56ed234a384cef7dbef6c4aa106d4477 [file][block]
# ask heketi about the topology of the RHOCS cluster for apps
heketi-cli topology info
# ask heketi about the volumes already created (one for the heketi db should exist after the OCP initial installation)
heketi-cli volume list
Id:d71a4cbea22af3453615a9020f261b5c Cluster:56ed234a384cef7dbef6c4aa106d4477 Name:heketidbstorage
Create a new file (e.g. "heketi-exports-infra") with the following contents:
export HEKETI_POD=$(oc get pods -l glusterfs=heketi-registry-pod -n infra-storage -o jsonpath="{.items[0].metadata.name}")
export HEKETI_CLI_SERVER=http://$(oc get route/heketi-registry -n infra-storage -o jsonpath='{.spec.host}')
export HEKETI_CLI_USER=admin
export HEKETI_CLI_KEY=$(oc get pod/$HEKETI_POD -n infra-storage -o jsonpath='{.spec.containers[0].env[?(@.name=="HEKETI_ADMIN_KEY")].value}')
export HEKETI_ADMIN_KEY_SECRET=$(echo -n ${HEKETI_CLI_KEY} | base64)
Source the file to create the HEKETI infra-storage environment variables:
source heketi-exports-infra
# see if heketi is alive
curl -w '\n' ${HEKETI_CLI_SERVER}/hello
Hello from Heketi
# ask heketi about the cluster it knows about (the RHOCS cluster for infrastructure)
heketi-cli cluster list
Clusters:
Id:baf91b261cbca2bb4b62caece63f60d0 [file][block]
# ask heketi about the volumes already created
heketi-cli volume list
Id:77baed02f79f4518326d8cc1db6c7af8 Cluster:baf91b261cbca2bb4b62caece63f60d0 Name:heketidbstorage
5.6. Creating an Arbiter Volume (optional)
- Better consistency: When an arbiter is configured, arbitration logic uses client-side quorum in auto mode to prevent file operations that would lead to split-brain conditions.
- Less disk space required: Because an arbiter brick only stores file names and metadata, an arbiter brick can be much smaller than the other bricks in the volume.
# subscription-manager repos --enable=rh-gluster-3-for-rhel-7-server-rpms
# yum install heketi-client
5.6.1. Creating an Arbiter Volume
5.6.1.1. Creating an Arbiter Volume using Heketi CLI
# heketi-cli volume create --size=4 --gluster-volume-options='user.heketi.arbiter true'
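The arbiter option can be combined with the average file size option described in the next section. The following is a sketch that assumes heketi-cli accepts the same comma-separated option string used by the volumeoptions storage class parameter:
# heketi-cli volume create --size=4 --gluster-volume-options='user.heketi.arbiter true,user.heketi.average-file-size 1024'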
5.6.1.2. Creating an Arbiter Volume using the Storageclass file
- user.heketi.arbiter true
- (Optional) user.heketi.average-file-size 1024
# cat glusterfs-storageclass.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: gluster-container
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-storage-project.cloudapps.mystorage.com"
  restuser: "admin"
  volumetype: "replicate:3"
  clusterid: "630372ccdc720a92c681fb928f27b53f,796e6db1981f369ea0340913eeea4c9a"
  secretNamespace: "default"
  secretName: "heketi-secret"
  volumeoptions: "user.heketi.arbiter true,user.heketi.average-file-size 1024"
  volumenameprefix: "test-vol"
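Note that persistentVolumeReclaimPolicy, accessModes, and resources are not StorageClass fields: the reclaim behavior of dynamically provisioned volumes is controlled by the StorageClass reclaimPolicy field, while access modes and capacity requests belong in the claim. A claim that requests an arbitrated volume through the class above might look like the following sketch (the claim name is illustrative):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: arbiter-claim1
  annotations:
    volume.beta.kubernetes.io/storage-class: gluster-container
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi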
Note
Part III. Upgrade
Chapter 6. Upgrading your Red Hat Openshift Container Storage in Converged Mode
6.1. Upgrading the Glusterfs Pods
6.1.1. Prerequisites
- Ensure that you have the supported versions of OpenShift Container Platform with Red Hat Gluster Storage Server and Red Hat Openshift Container Storage. For more information on supported versions, see Section 3.1.1, “Supported Versions”.
- Run the following command to retrieve the current configuration details before starting the upgrade:
# oc get all
- Run the following command to get the latest versions of the Ansible templates:
# yum update openshift-ansible
Note
- gluster template - /usr/share/heketi/templates/glusterfs-template.yaml
- heketi template - /usr/share/heketi/templates/heketi-template.yaml
- glusterblock-provisioner template - /usr/share/heketi/templates/glusterblock-provisioner.yaml
6.1.2. Restoring original label values for /dev/log
- Create a directory and soft links on all nodes that run gluster pods:
# mkdir /srv/<directory_name>
# cd /srv/<directory_name>/    # same dir as above
# ln -sf /dev/null systemd-tmpfiles-setup-dev.service
# ln -sf /dev/null systemd-journald.service
# ln -sf /dev/null systemd-journald.socket
- Edit the daemonset that creates the glusterfs pods on the node which has oc client:
# oc edit daemonset <daemonset_name>
Under the volumeMounts section, add a mapping for each volume:
- mountPath: /usr/lib/systemd/system/systemd-journald.service
  name: systemd-journald-service
- mountPath: /usr/lib/systemd/system/systemd-journald.socket
  name: systemd-journald-socket
- mountPath: /usr/lib/systemd/system/systemd-tmpfiles-setup-dev.service
  name: systemd-tmpfiles-setup-dev-service
Under the volumes section, add a new host path for each service listed:
Note
The path mentioned here should be the same as the one mentioned in Step 1.
- hostPath:
    path: /srv/<directory_name>/systemd-journald.socket
    type: ""
  name: systemd-journald-socket
- hostPath:
    path: /srv/<directory_name>/systemd-journald.service
    type: ""
  name: systemd-journald-service
- hostPath:
    path: /srv/<directory_name>/systemd-tmpfiles-setup-dev.service
    type: ""
  name: systemd-tmpfiles-setup-dev-service
- Run the following command on all nodes that run gluster pods. This will reset the label:
# restorecon /dev/log
- Execute the following command to check the status of self heal for all volumes:
# oc rsh <gluster_pod_name>
# for each_volume in `gluster volume list`; do gluster volume heal $each_volume info ; done | grep "Number of entries: [^0]$"
Wait for self-heal to complete.
- Execute the following command and ensure that the bricks are not more than 90% full:
# df -kh | grep -v ^Filesystem | awk '{ use=$5; sub(/%/,"",use); if (use+0 > 90) print $0 }'
- Execute the following command on any one of the gluster pods to set the maximum number of bricks (250) that can run on a single instance of the glusterfsd process:
# gluster volume set all cluster.max-bricks-per-process 250
- Execute the following command on any one of the gluster pods to ensure that the option is set correctly:
# gluster volume get all cluster.max-bricks-per-process
For example:
# gluster volume get all cluster.max-bricks-per-process
cluster.max-bricks-per-process          250
- Execute the following command on the node which has oc client to delete the gluster pod:
# oc delete pod <gluster_pod_name>
- To verify if the pod is ready, execute the following command:
# oc get pods -l glusterfs=storage-pod
- Log in to the node hosting the pod and check the SELinux label of /dev/log:
# ls -lZ /dev/log
The output should show the devlog_t label.
For example:
# ls -lZ /dev/log
srw-rw-rw-. root root system_u:object_r:devlog_t:s0 /dev/log
Exit the node.
- In the gluster pod, check if the label value is devlog_t:
# oc rsh <gluster_pod_name>
# ls -lZ /dev/log
For example:
# ls -lZ /dev/log
srw-rw-rw-. root root system_u:object_r:devlog_t:s0 /dev/log
- Perform steps 4 to 9 for other pods.
6.1.3. Upgrading if the existing version was deployed by using cns-deploy
6.1.3.1. Upgrading cns-deploy and Heketi Server
- Execute the following command to update the heketi client and cns-deploy packages:
# yum update cns-deploy -y
# yum update heketi-client -y
- Back up the Heketi database file:
# oc rsh <heketi_pod_name>
# cp -a /var/lib/heketi/heketi.db /var/lib/heketi/heketi.db.`date +%s`.`heketi --version | awk '{print $2}'`
# exit
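Optionally, confirm that the timestamped backup file was written before exiting the pod; the exact file name will differ in your environment:
# ls -l /var/lib/heketi/heketi.db.*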
- Execute the following command to delete the heketi template.
# oc delete templates heketi
- Execute the following command to get the current HEKETI_ADMIN_KEY. The OCS admin can choose to set any phrase for the user key as long as it is not used by their infrastructure; it is not used by any of the OCS default installed resources.
oc get secret heketi-storage-admin-secret -o jsonpath='{.data.key}'|base64 -d;echo
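If you need to run heketi-cli commands with this key, it can be supplied directly on the command line. A minimal sketch, assuming HEKETI_CLI_SERVER (or the --server flag) points at the heketi route and <adminkey> is the value retrieved above:
# heketi-cli --user admin --secret '<adminkey>' cluster list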
- Execute the following command to install the heketi template.
# oc create -f /usr/share/heketi/templates/heketi-template.yaml
template "heketi" created
- Execute the following command to grant the heketi Service Account the necessary privileges.
# oc policy add-role-to-user edit system:serviceaccount:<project_name>:heketi-service-account
# oc adm policy add-scc-to-user privileged -z heketi-service-account
For example,
# oc policy add-role-to-user edit system:serviceaccount:storage-project:heketi-service-account
# oc adm policy add-scc-to-user privileged -z heketi-service-account
- Execute the following command to generate a new heketi configuration file.
# sed -e "s/\${HEKETI_EXECUTOR}/kubernetes/" -e "s#\${HEKETI_FSTAB}#/var/lib/heketi/fstab#" -e "s/\${SSH_PORT}/22/" -e "s/\${SSH_USER}/root/" -e "s/\${SSH_SUDO}/false/" -e "s/\${BLOCK_HOST_CREATE}/true/" -e "s/\${BLOCK_HOST_SIZE}/500/" "/usr/share/heketi/templates/heketi.json.template" > heketi.json
- The BLOCK_HOST_SIZE parameter controls the size (in GB) of the automatically created Red Hat Gluster Storage volumes hosting the gluster-block volumes (for more information, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html/operations_guide/block_storage). This default configuration will dynamically create block-hosting volumes of 500GB in size as more space is required.
- Alternatively, copy the file /usr/share/heketi/templates/heketi.json.template to heketi.json in the current directory and edit the new file directly, replacing each "${VARIABLE}" string with the required parameter.
Note
JSON formatting is strictly required (e.g. no trailing spaces, booleans in all lowercase).
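Because the formatting is strict, it helps to validate the edited file before loading it into a secret. A minimal check using Python's built-in JSON parser (any JSON validator works equally well):
# python -m json.tool heketi.json > /dev/null && echo "heketi.json is valid JSON"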
Note
If the heketi-config-secret secret already exists, delete it (oc delete secret heketi-config-secret) before running the following command.
Execute the following command to create a secret to hold the configuration file:
# oc create secret generic heketi-config-secret --from-file=heketi.json
- Execute the following command to delete the deployment configuration, service, and route for heketi:
Note
The names of these resources can be referenced from the output of the following command:
# oc get all | grep heketi
# oc delete deploymentconfig,service,route heketi
- Execute the following command to edit the heketi template. Edit the HEKETI_USER_KEY and HEKETI_ADMIN_KEY parameters.
# oc edit template heketi
parameters:
- description: Set secret for those creating volumes as type _user_
  displayName: Heketi User Secret
  name: HEKETI_USER_KEY
  value: <heketiuserkey>
- description: Set secret for administration of the Heketi service as user _admin_
  displayName: Heketi Administrator Secret
  name: HEKETI_ADMIN_KEY
  value: <adminkey>
- description: Set the executor type, kubernetes or ssh
  displayName: heketi executor type
  name: HEKETI_EXECUTOR
  value: kubernetes
- description: Set the hostname for the route URL
  displayName: heketi route name
  name: HEKETI_ROUTE
  value: heketi-storage
- displayName: heketi container image name
  name: IMAGE_NAME
  required: true
  value: rhgs3/rhgs-volmanager-rhel7
- displayName: heketi container image version
  name: IMAGE_VERSION
  required: true
  value: v3.10
- description: A unique name to identify this heketi service, useful for running multiple heketi instances
  displayName: GlusterFS cluster name
  name: CLUSTER_NAME
  value: storage
- Execute the following command to deploy the Heketi service, route, and deployment configuration which will be used to create persistent volumes for OpenShift:
# oc process heketi | oc create -f -
service "heketi" created
route "heketi" created
deploymentconfig "heketi" created
- Execute the following command to verify that the containers are running:
# oc get pods
For example:
# oc get pods
NAME                             READY     STATUS    RESTARTS   AGE
glusterfs-0h68l                  1/1       Running   0          3d
glusterfs-0vcf3                  1/1       Running   0          3d
glusterfs-gr9gh                  1/1       Running   0          3d
heketi-1-zpw4d                   1/1       Running   0          3h
storage-project-router-2-db2wl   1/1       Running   0          4d
6.1.3.2. Upgrading the Red Hat Gluster Storage Pods
- Execute the following steps to stop the Heketi pod to prevent it from accepting any new request for volume creation or volume deletion:
- Execute the following command to access your project:
# oc project <project_name>
For example:
# oc project storage-project
- Execute the following command to get the DeploymentConfig:
# oc get dc
- Execute the following command to set heketi server to accept requests only from the local-client:
# heketi-cli server mode set local-client
- Wait for the ongoing operations to complete and execute the following command to monitor if there are any ongoing operations:
# heketi-cli server operations info
- Execute the following command to reduce the replica count from 1 to 0. This brings down the Heketi pod:
# oc scale dc <heketi_dc> --replicas=0
- Execute the following command to verify that the heketi pod is no longer present:
# oc get pods
- Execute the following command to find the DaemonSet name for gluster:
# oc get ds
- Execute the following command to delete the DaemonSet:
# oc delete ds <ds-name> --cascade=false
Using the --cascade=false option while deleting the old DaemonSet does not delete the gluster pods; it deletes only the DaemonSet. After deleting the old DaemonSet, you must load the new one. When you manually delete the old pods, the new pods that are created will have the configurations of the new DaemonSet.
For example,
# oc delete ds glusterfs --cascade=false
daemonset "glusterfs" deleted
- Execute the following commands to verify all the old pods are up:
# oc get pods
For example,
# oc get pods
NAME                             READY     STATUS    RESTARTS   AGE
glusterfs-0h68l                  1/1       Running   0          3d
glusterfs-0vcf3                  1/1       Running   0          3d
glusterfs-gr9gh                  1/1       Running   0          3d
storage-project-router-2-db2wl   1/1       Running   0          4d
- Execute the following command to delete the old glusterfs template.
# oc delete templates glusterfs
For example,
# oc delete templates glusterfs
template "glusterfs" deleted
- Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
- Check if the nodes are labelled using the following command:
# oc get nodes --show-labels
If the Red Hat Gluster Storage nodes do not have the storagenode=glusterfs label, then label the nodes as shown in step ii.
- Label the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
# oc label nodes <node name> storagenode=glusterfs
- Execute the following command to register new gluster template.
# oc create -f /usr/share/heketi/templates/glusterfs-template.yaml
For example,
# oc create -f /usr/share/heketi/templates/glusterfs-template.yaml
template "glusterfs" created
- Execute the following commands to create the gluster DaemonSet:
# oc process glusterfs | oc create -f -
For example,
# oc process glusterfs | oc create -f -
daemonset "glusterfs" created
- Execute the following command to identify the old gluster pods that need to be deleted:
# oc get pods
For example,
# oc get pods
NAME                             READY     STATUS    RESTARTS   AGE
glusterfs-0h68l                  1/1       Running   0          3d
glusterfs-0vcf3                  1/1       Running   0          3d
glusterfs-gr9gh                  1/1       Running   0          3d
storage-project-router-2-db2wl   1/1       Running   0          4d
- Execute the following command and ensure that the bricks are not more than 90% full:
# df -kh | grep -v ^Filesystem | awk '{ use=$5; sub(/%/,"",use); if (use+0 > 90) print $0 }'
- Execute the following command to delete the old gluster pods.
Gluster pods should follow a rolling upgrade; hence, you must ensure that the new pod is running before deleting the next old gluster pod. The OnDelete DaemonSet update strategy is supported: after you update a DaemonSet template, new DaemonSet pods are only created when you manually delete the old DaemonSet pods. (A quick way to confirm the strategy in use is shown after these sub-steps.)
- To delete the old gluster pods, execute the following command:
# oc delete pod <gluster_pod>
For example,
# oc delete pod glusterfs-0vcf3
pod "glusterfs-0vcf3" deleted
Note
Before deleting the next pod, a self-heal check has to be made:
- Run the following command to access a shell on the gluster pod:
# oc rsh <gluster_pod_name>
- Run the following command to check the self-heal status of all the volumes:
for each_volume in `gluster volume list`; do gluster volume heal $each_volume info ; done | grep "Number of entries: [^0]$"
- The delete pod command terminates the old pod and creates a new pod. Run # oc get pods -w and check the Age of the pod; its READY status should be 1/1. The following example output shows the status progression from termination to creation of the pod:
# oc get pods -w
NAME                             READY     STATUS        RESTARTS   AGE
glusterfs-0vcf3                  1/1       Terminating   0          3d
…
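As noted above, the gluster DaemonSet relies on the OnDelete update strategy. An optional way to confirm the strategy on the newly created DaemonSet (the DaemonSet name here is an example; use the name returned by oc get ds):
# oc get ds glusterfs -o jsonpath='{.spec.updateStrategy.type}'
OnDelete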
- Execute the following command to verify that the pods are running:
# oc get pods
For example,
# oc get pods
NAME                             READY     STATUS    RESTARTS   AGE
glusterfs-j241c                  1/1       Running   0          4m
glusterfs-pqfs6                  1/1       Running   0          7m
glusterfs-wrn6n                  1/1       Running   0          12m
storage-project-router-2-db2wl   1/1       Running   0          4d
- Execute the following command to verify if you have upgraded the pod to the latest version:
# oc rsh <gluster_pod_name> glusterd --version
For example:
# oc rsh glusterfs-registry-4cpcc glusterd --version
glusterfs 3.12.2
- Check the Red Hat Gluster Storage op-version by executing the following command on one of the gluster pods.
# gluster vol get all cluster.op-version
- Set the cluster.op-version to 31302 on any one of the pods:
Note
Ensure all the gluster pods are updated before changing the cluster.op-version.
# gluster --timeout=3600 volume set all cluster.op-version 31302
- Execute the following steps to enable server.tcp-user-timeout on all volumes.
Note
The "server.tcp-user-timeout" option specifies the maximum amount of the time (in seconds) the transmitted data from the application can remain unacknowledged from the brick.It is used to detect force disconnections and dead connections (if a node dies unexpectedly, a firewall is activated, etc.,) early and make it possible for applications to reduce the overall failover time.- List the glusterfs pod using the following command:
# oc get pods
For example:
# oc get pods
NAME                             READY     STATUS    RESTARTS   AGE
glusterfs-0h68l                  1/1       Running   0          3d
glusterfs-0vcf3                  1/1       Running   0          3d
glusterfs-gr9gh                  1/1       Running   0          3d
storage-project-router-2-db2wl   1/1       Running   0          4d
- Remote shell into one of the glusterfs pods. For example:
# oc rsh glusterfs-0vcf3
- Execute the following command:
# for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
For example:
# for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
volume1
volume set: success
volume2
volume set: success
- If a glusterblock-provisioner pod already exists, delete it by executing the following commands:
# oc delete dc <gluster-block-dc>
For example:
# oc delete dc glusterblock-storage-provisioner-dc
- Execute the following commands to deploy the gluster-block provisioner:
# sed -e 's/\\\${NAMESPACE}/<NAMESPACE>/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
# oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:<NAMESPACE>:glusterblock-provisioner
For example:
# sed -e 's/\\\${NAMESPACE}/storage-project/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
# oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:storage-project:glusterblock-provisioner
- Delete the following resources from the old pod:
# oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
# oc delete serviceaccounts glusterblock-storage-provisioner
- After editing the template, execute the following command to create the deployment configuration:
# oc process <gluster_block_provisioner_template> | oc create -f -
- Brick multiplexing is a feature that allows adding multiple bricks into one process. This reduces resource consumption and allows us to run more bricks than before with the same memory consumption. It is enabled by default from Container-Native Storage 3.6. During an upgrade from Container-Native Storage 3.9 to Red Hat Openshift Container Storage 3.10, to turn brick multiplexing on, execute the following commands:
- To exec into the Gluster pod, execute the following command and rsh into any of the gluster pods:
# oc rsh <gluster_pod_name>
- Verify if brick multiplexing is enabled. If it is disabled, then execute the following command to enable brick multiplexing:
# gluster volume set all cluster.brick-multiplex on
Note
You can check the brick multiplex status by executing the following command:
# gluster v get all all
For example:
# oc rsh glusterfs-770ql
sh-4.2# gluster volume set all cluster.brick-multiplex on
Brick-multiplexing is supported only for container workloads (Independent/Converged). Also it is advised to make sure that either all volumes are in stopped state or no bricks are running before this option is modified. Do you still want to continue? (y/n) y
volume set: success
- List all the volumes in the trusted storage pool. This step is only required if the volume set operation was performed. For example:
# gluster volume list
heketidbstorage
vol_194049d2565d2a4ad78ef0483e04711e
...
...
Restart all the volumes. This step is only required if the volume set operation was performed along with the previous step:
# gluster vol stop <VOLNAME>
# gluster vol start <VOLNAME>
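Restarting each volume by hand is tedious on pools with many volumes. The following one-line loop is a sketch that assumes gluster's --mode=script flag to suppress the interactive stop confirmation:
# for eachVolume in `gluster volume list`; do gluster --mode=script volume stop $eachVolume; gluster volume start $eachVolume; done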
- Support for S3 compatible Object Store in Red Hat Openshift Container Storage is under technology preview. To enable S3 compatible object store, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html/operations_guide/s3_object_store.
Note
- If you have glusterfs registry pods, then proceed with the steps listed in Section 6.2, “Upgrading heketi and glusterfs registry pods” to upgrade heketi and glusterfs registry pods.
- If you do not have glusterfs registry pods, then proceed with the steps listed in Section 6.3, “Upgrading the client on Red Hat Openshift Container Platform Nodes” to upgrade the client on Red Hat Openshift Container Platform Nodes.
6.1.4. Upgrading if the existing version was deployed by using Ansible
6.1.4.1. Upgrading Heketi Server
- Execute the following command to update the heketi client packages:
# yum update heketi-client -y
- Back up the Heketi database file:
# oc rsh <heketi_pod_name>
# cp -a /var/lib/heketi/heketi.db /var/lib/heketi/heketi.db.`date +%s`.`heketi --version | awk '{print $2}'`
# exit
- Execute the following command to get the current HEKETI_ADMIN_KEY. The OCS admin can choose to set any phrase for the user key as long as it is not used by their infrastructure; it is not used by any of the OCS default installed resources.
oc get secret heketi-storage-admin-secret -o jsonpath='{.data.key}'|base64 -d;echo
- Execute the following step to edit the template:
# oc get templates
NAME                       DESCRIPTION                          PARAMETERS    OBJECTS
glusterblock-provisioner   glusterblock provisioner template    3 (2 blank)   4
glusterfs                  GlusterFS DaemonSet template         5 (1 blank)   1
heketi                     Heketi service deployment template   7 (3 blank)   3
If the existing template has IMAGE_NAME and IMAGE_VERSION as two parameters, then edit the template to change the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, HEKETI_ROUTE, IMAGE_NAME, IMAGE_VERSION, and CLUSTER_NAME as shown in the example below.
# oc edit template heketi
parameters:
- description: Set secret for those creating volumes as type _user_
  displayName: Heketi User Secret
  name: HEKETI_USER_KEY
  value: <heketiuserkey>
- description: Set secret for administration of the Heketi service as user _admin_
  displayName: Heketi Administrator Secret
  name: HEKETI_ADMIN_KEY
  value: <adminkey>
- description: Set the executor type, kubernetes or ssh
  displayName: heketi executor type
  name: HEKETI_EXECUTOR
  value: kubernetes
- description: Set the hostname for the route URL
  displayName: heketi route name
  name: HEKETI_ROUTE
  value: heketi-storage
- displayName: heketi container image name
  name: IMAGE_NAME
  required: true
  value: rhgs3/rhgs-volmanager-rhel7
- displayName: heketi container image version
  name: IMAGE_VERSION
  required: true
  value: v3.10
- description: A unique name to identify this heketi service, useful for running multiple heketi instances
  displayName: GlusterFS cluster name
  name: CLUSTER_NAME
  value: storage
If the template has only IMAGE_NAME, then edit the template to change the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, HEKETI_ROUTE, IMAGE_NAME and CLUSTER_NAME as shown in the example below.
# oc edit template heketi
parameters:
- description: Set secret for those creating volumes as type _user_
  displayName: Heketi User Secret
  name: HEKETI_USER_KEY
  value: <heketiuserkey>
- description: Set secret for administration of the Heketi service as user _admin_
  displayName: Heketi Administrator Secret
  name: HEKETI_ADMIN_KEY
  value: <adminkey>
- description: Set the executor type, kubernetes or ssh
  displayName: heketi executor type
  name: HEKETI_EXECUTOR
  value: kubernetes
- description: Set the hostname for the route URL
  displayName: heketi route name
  name: HEKETI_ROUTE
  value: heketi-storage
- displayName: heketi container image name
  name: IMAGE_NAME
  required: true
  value: rhgs3/rhgs-volmanager-rhel7:v3.10
- description: A unique name to identify this heketi service, useful for running multiple heketi instances
  displayName: GlusterFS cluster name
  name: CLUSTER_NAME
  value: storage
- Execute the following command to delete the deployment configuration, service, and route for heketi:
Note
The names of these resources can be referenced from the output of the following command:
# oc get all | grep heketi
# oc delete deploymentconfig,service,route heketi-storage
- Execute the following command to deploy the Heketi service, route, and deployment configuration which will be used to create persistent volumes for OpenShift:
# oc process heketi | oc create -f -
service "heketi" created
route "heketi" created
deploymentconfig "heketi" created
- Execute the following command to verify that the containers are running:
# oc get pods
For example:
# oc get pods
NAME                             READY     STATUS    RESTARTS   AGE
glusterfs-0h68l                  1/1       Running   0          3d
glusterfs-0vcf3                  1/1       Running   0          3d
glusterfs-gr9gh                  1/1       Running   0          3d
heketi-1-zpw4d                   1/1       Running   0          3h
storage-project-router-2-db2wl   1/1       Running   0          4d
6.1.4.2. Upgrading the Red Hat Gluster Storage Pods
- Execute the following steps to stop the Heketi pod to prevent it from accepting any new request for volume creation or volume deletion:
- Execute the following command to access your project:
# oc project <project_name>
For example:
# oc project storage-project
- Execute the following command to get the DeploymentConfig:
# oc get dc
- Execute the following command to set heketi server to accept requests only from the local-client:
# heketi-cli server mode set local-client
- Wait for the ongoing operations to complete and execute the following command to monitor if there are any ongoing operations:
# heketi-cli server operations info
- Execute the following command to reduce the replica count from 1 to 0. This brings down the Heketi pod:
# oc scale dc <heketi_dc> --replicas=0
- Execute the following command to verify that the heketi pod is no longer present:
# oc get pods
- Execute the following command to find the DaemonSet name for gluster:
# oc get ds
- Execute the following command to delete the DaemonSet:
# oc delete ds <ds-name> --cascade=false
Using the --cascade=false option while deleting the old DaemonSet does not delete the gluster pods; it deletes only the DaemonSet. After deleting the old DaemonSet, you must load the new one. When you manually delete the old pods, the new pods that are created will have the configurations of the new DaemonSet.
For example,
# oc delete ds glusterfs-storage --cascade=false
daemonset "glusterfs-storage" deleted
- Execute the following commands to verify all the old pods are up:
# oc get pods
For example,
# oc get pods
NAME                             READY     STATUS    RESTARTS   AGE
glusterfs-0h68l                  1/1       Running   0          3d
glusterfs-0vcf3                  1/1       Running   0          3d
glusterfs-gr9gh                  1/1       Running   0          3d
storage-project-router-2-db2wl   1/1       Running   0          4d
- Execute the following command to edit the old glusterfs template.
# oc get templates
NAME                       DESCRIPTION                          PARAMETERS    OBJECTS
glusterblock-provisioner   glusterblock provisioner template    3 (2 blank)   4
glusterfs                  GlusterFS DaemonSet template         5 (1 blank)   1
heketi                     Heketi service deployment template   7 (3 blank)   3
If the template has IMAGE_NAME and IMAGE_VERSION as two separate parameters, then update the glusterfs template as follows. For example:
# oc edit template glusterfs
- displayName: GlusterFS container image name
  name: IMAGE_NAME
  required: true
  value: rhgs3/rhgs-server-rhel7
- displayName: GlusterFS container image version
  name: IMAGE_VERSION
  required: true
  value: v3.10
- description: A unique name to identify which heketi service manages this cluster, useful for running multiple heketi instances
  displayName: GlusterFS cluster name
  name: CLUSTER_NAME
  value: storage
If the template has only IMAGE_NAME as a parameter, then update the glusterfs template as follows. For example:
# oc edit template glusterfs
- displayName: GlusterFS container image name
  name: IMAGE_NAME
  required: true
  value: rhgs3/rhgs-server-rhel7:v3.10
- description: A unique name to identify which heketi service manages this cluster, useful for running multiple heketi instances
  displayName: GlusterFS cluster name
  name: CLUSTER_NAME
  value: storage
Note
Ensure that the CLUSTER_NAME variable is set to the correct value.
- Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
- Check if the nodes are labelled using the following command:
# oc get nodes --show-labels
If the Red Hat Gluster Storage nodes do not have the glusterfs=storage-host label, then label the nodes as shown in step ii.
- Label the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
# oc label nodes <node name> glusterfs=storage-host
- Execute the following commands to create the gluster DaemonSet:
# oc process glusterfs | oc create -f -
For example,
# oc process glusterfs | oc create -f -
daemonset "glusterfs" created
- Execute the following command to identify the old gluster pods that need to be deleted:
# oc get pods
For example,
# oc get pods
NAME                             READY     STATUS    RESTARTS   AGE
glusterfs-0h68l                  1/1       Running   0          3d
glusterfs-0vcf3                  1/1       Running   0          3d
glusterfs-gr9gh                  1/1       Running   0          3d
storage-project-router-2-db2wl   1/1       Running   0          4d
- Execute the following command and ensure that the bricks are not more than 90% full:
# df -kh | grep -v ^Filesystem | awk '{ use=$5; sub(/%/,"",use); if (use+0 > 90) print $0 }'
- Execute the following command to delete the old gluster pods.
Gluster pods should follow a rolling upgrade; hence, you must ensure that the new pod is running before deleting the next old gluster pod. The OnDelete DaemonSet update strategy is supported: after you update a DaemonSet template, new DaemonSet pods are only created when you manually delete the old DaemonSet pods.
- To delete the old gluster pods, execute the following command:
# oc delete pod <gluster_pod>
For example,
# oc delete pod glusterfs-0vcf3
pod "glusterfs-0vcf3" deleted
Note
Before deleting the next pod, a self-heal check has to be made:
- Run the following command to access a shell on the gluster pod:
# oc rsh <gluster_pod_name>
- Run the following command to check the self-heal status of all the volumes:
for each_volume in `gluster volume list`; do gluster volume heal $each_volume info ; done | grep "Number of entries: [^0]$"
- The delete pod command terminates the old pod and creates a new pod. Run # oc get pods -w and check the Age of the pod; its READY status should be 1/1. The following example output shows the status progression from termination to creation of the pod:
# oc get pods -w
NAME                             READY     STATUS        RESTARTS   AGE
glusterfs-0vcf3                  1/1       Terminating   0          3d
…
- Execute the following command to verify that the pods are running:
# oc get pods
For example,
# oc get pods
NAME                             READY     STATUS    RESTARTS   AGE
glusterfs-j241c                  1/1       Running   0          4m
glusterfs-pqfs6                  1/1       Running   0          7m
glusterfs-wrn6n                  1/1       Running   0          12m
storage-project-router-2-db2wl   1/1       Running   0          4d
- Execute the following command to verify if you have upgraded the pod to the latest version:
# oc rsh <gluster_pod_name> glusterd --version
For example:
# oc rsh glusterfs-registry-4cpcc glusterd --version
glusterfs 3.12.2
- Check the Red Hat Gluster Storage op-version by executing the following command on one of the gluster pods.
# gluster vol get all cluster.op-version
- Set the cluster.op-version to 31302 on any one of the pods:
Note
Ensure all the gluster pods are updated before changing the cluster.op-version.
# gluster --timeout=3600 volume set all cluster.op-version 31302
- Execute the following steps to enable server.tcp-user-timeout on all volumes.
Note
The "server.tcp-user-timeout" option specifies the maximum amount of the time (in seconds) the transmitted data from the application can remain unacknowledged from the brick.It is used to detect force disconnections and dead connections (if a node dies unexpectedly, a firewall is activated, etc.,) early and make it possible for applications to reduce the overall failover time.- List the glusterfs pod using the following command:
# oc get pods
For example:
# oc get pods
NAME                             READY     STATUS    RESTARTS   AGE
glusterfs-0h68l                  1/1       Running   0          3d
glusterfs-0vcf3                  1/1       Running   0          3d
glusterfs-gr9gh                  1/1       Running   0          3d
storage-project-router-2-db2wl   1/1       Running   0          4d
- Remote shell into one of the glusterfs pods. For example:
# oc rsh glusterfs-0vcf3
- Execute the following command:
# for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
For example:
# for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
volume1
volume set: success
volume2
volume set: success
- If a glusterblock-provisioner pod already exists, delete it by executing the following commands:
# oc delete dc <gluster-block-dc>
For example:
# oc delete dc glusterblock-storage-provisioner-dc
- Depending on the OCP version, edit the glusterblock-provisioner template to change the IMAGE_NAME, IMAGE_VERSION and NAMESPACE.
# oc get templates
NAME                       DESCRIPTION                          PARAMETERS    OBJECTS
glusterblock-provisioner   glusterblock provisioner template    3 (2 blank)   4
glusterfs                  GlusterFS DaemonSet template         5 (1 blank)   1
heketi                     Heketi service deployment template   7 (3 blank)   3
If the template has IMAGE_NAME and IMAGE_VERSION as two separate parameters, then update the glusterblock-provisioner template as follows. For example:
# oc edit template glusterblock-provisioner
- displayName: glusterblock provisioner container image name
  name: IMAGE_NAME
  required: true
  value: rhgs3/rhgs-gluster-block-prov-rhel7
- displayName: glusterblock provisioner container image version
  name: IMAGE_VERSION
  required: true
  value: v3.10
- description: The namespace in which these resources are being created
  displayName: glusterblock provisioner namespace
  name: NAMESPACE
  required: true
  value: glusterfs
- description: A unique name to identify which heketi service manages this cluster, useful for running multiple heketi instances
  displayName: GlusterFS cluster name
  name: CLUSTER_NAME
  value: storage
If the template has only IMAGE_NAME as a parameter, then update the glusterblock-provisioner template as follows. For example:
# oc edit template glusterblock-provisioner
- displayName: glusterblock provisioner container image name
  name: IMAGE_NAME
  required: true
  value: rhgs3/rhgs-gluster-block-prov-rhel7:v3.10
- description: The namespace in which these resources are being created
  displayName: glusterblock provisioner namespace
  name: NAMESPACE
  required: true
  value: glusterfs
- description: A unique name to identify which heketi service manages this cluster, useful for running multiple heketi instances
  displayName: GlusterFS cluster name
  name: CLUSTER_NAME
  value: storage
- Delete the following resources from the old pod:
# oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
# oc delete serviceaccounts glusterblock-storage-provisioner
- After editing the template, execute the following command to create the deployment configuration:
# oc process <gluster_block_provisioner_template> | oc create -f -
- Brick multiplexing is a feature that allows adding multiple bricks into one process. This reduces resource consumption and allows us to run more bricks than before with the same memory consumption. It is enabled by default from Container-Native Storage 3.6. During an upgrade from Container-Native Storage 3.9 to Red Hat Openshift Container Storage 3.10, to turn brick multiplexing on, execute the following commands:
- To exec into the Gluster pod, execute the following command and rsh into any of the gluster pods:
# oc rsh <gluster_pod_name>
- Verify if brick multiplexing is enabled. If it is disabled, then execute the following command to enable brick multiplexing:
# gluster volume set all cluster.brick-multiplex on
Note
You can check the brick multiplex status by executing the following command:
# gluster v get all all
For example:
# oc rsh glusterfs-770ql
sh-4.2# gluster volume set all cluster.brick-multiplex on
Brick-multiplexing is supported only for container workloads (Independent/Converged). Also it is advised to make sure that either all volumes are in stopped state or no bricks are running before this option is modified. Do you still want to continue? (y/n) y
volume set: success
- List all the volumes in the trusted storage pool. This step is only required if the volume set operation was performed. For example:
# gluster volume list
heketidbstorage
vol_194049d2565d2a4ad78ef0483e04711e
...
...
Restart all the volumes. This step is only required if the volume set operation was performed along with the previous step:
# gluster vol stop <VOLNAME>
# gluster vol start <VOLNAME>
- Support for S3 compatible Object Store in Red Hat Openshift Container Storage is under technology preview. To enable S3 compatible object store, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html/operations_guide/s3_object_store.
Note
- If you have glusterfs registry pods, then proceed with the steps listed in Section 6.2, “Upgrading heketi and glusterfs registry pods” to upgrade heketi and glusterfs registry pods.
- If you do not have glusterfs registry pods, then proceed with the steps listed in Section 6.3, “Upgrading the client on Red Hat Openshift Container Platform Nodes” to upgrade the client on Red Hat Openshift Container Platform Nodes.
6.2. Upgrading heketi and glusterfs registry pods
6.2.1. Prerequisites
- Ensure that you have the supported versions of OpenShift Container Platform with Red Hat Gluster Storage Server and Red Hat Openshift Container Storage. For more information on supported versions, see Section 3.1.1, “Supported Versions”.
- Run the following command to get the latest versions of the Ansible templates:
# yum update openshift-ansible
Note
- gluster template - /usr/share/heketi/templates/glusterfs-template.yaml
- heketi template - /usr/share/heketi/templates/heketi-template.yaml
- glusterblock-provisioner template - /usr/share/heketi/templates/glusterblock-provisioner.yaml
6.2.2. Upgrading if the existing version was deployed by using cns-deploy
6.2.2.1. Upgrading cns-deploy and Heketi Server
- Back up the Heketi registry database file:
# oc rsh <heketi_pod_name>
# cp -a /var/lib/heketi/heketi.db /var/lib/heketi/heketi.db.`date +%s`.`heketi --version | awk '{print $2}'`
# exit
- Execute the following command to delete the heketi template.
# oc delete templates heketi
- Execute the following command to get the current HEKETI_ADMIN_KEY. The OCS admin can choose to set any phrase for the user key as long as it is not used by their infrastructure; it is not used by any of the OCS default installed resources.
oc get secret heketi-storage-admin-secret -o jsonpath='{.data.key}'|base64 -d;echo
- Execute the following command to install the heketi template.
# oc create -f /usr/share/heketi/templates/heketi-template.yaml
template "heketi" created
- Execute the following command to grant the heketi Service Account the necessary privileges.
# oc policy add-role-to-user edit system:serviceaccount:<project_name>:heketi-service-account
# oc adm policy add-scc-to-user privileged -z heketi-service-account
For example,
# oc policy add-role-to-user edit system:serviceaccount:storage-project:heketi-service-account
# oc adm policy add-scc-to-user privileged -z heketi-service-account
- Execute the following command to generate a new heketi configuration file.
# sed -e "s/\${HEKETI_EXECUTOR}/kubernetes/" -e "s#\${HEKETI_FSTAB}#/var/lib/heketi/fstab#" -e "s/\${SSH_PORT}/22/" -e "s/\${SSH_USER}/root/" -e "s/\${SSH_SUDO}/false/" -e "s/\${BLOCK_HOST_CREATE}/true/" -e "s/\${BLOCK_HOST_SIZE}/500/" "/usr/share/heketi/templates/heketi.json.template" > heketi.json
- The BLOCK_HOST_SIZE parameter controls the size (in GB) of the automatically created Red Hat Gluster Storage volumes hosting the gluster-block volumes (for more information, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html/operations_guide/block_storage). This default configuration will dynamically create block-hosting volumes of 500GB in size as more space is required.
- Alternatively, copy the file /usr/share/heketi/templates/heketi.json.template to heketi.json in the current directory and edit the new file directly, replacing each "${VARIABLE}" string with the required parameter.
Note
JSON formatting is strictly required (e.g. no trailing spaces, booleans in all lowercase).
Note
If the heketi-config-secret secret already exists, delete it (oc delete secret heketi-config-secret) before running the following command.
Execute the following command to create a secret to hold the configuration file:
# oc create secret generic heketi-config-secret --from-file=heketi.json
- Execute the following command to delete the deployment configuration, service, and route for heketi:
# oc delete deploymentconfig,service,route heketi
- Execute the following command to edit the heketi template. Edit the HEKETI_USER_KEY and HEKETI_ADMIN_KEY parameters.
# oc edit template heketi
parameters:
- description: Set secret for those creating volumes as type _user_
  displayName: Heketi User Secret
  name: HEKETI_USER_KEY
  value: <heketiuserkey>
- description: Set secret for administration of the Heketi service as user _admin_
  displayName: Heketi Administrator Secret
  name: HEKETI_ADMIN_KEY
  value: <adminkey>
- description: Set the executor type, kubernetes or ssh
  displayName: heketi executor type
  name: HEKETI_EXECUTOR
  value: kubernetes
- description: Set the hostname for the route URL
  displayName: heketi route name
  name: HEKETI_ROUTE
  value: heketi-storage
- displayName: heketi container image name
  name: IMAGE_NAME
  required: true
  value: rhgs3/rhgs-volmanager-rhel7
- displayName: heketi container image version
  name: IMAGE_VERSION
  required: true
  value: v3.10
- description: A unique name to identify this heketi service, useful for running multiple heketi instances
  displayName: GlusterFS cluster name
  name: CLUSTER_NAME
  value: storage
- Execute the following command to deploy the Heketi service, route, and deployment configuration which will be used to create persistent volumes for OpenShift:
# oc process heketi | oc create -f -
service "heketi-registry" created
route "heketi-registry" created
deploymentconfig "heketi-registry" created
- Execute the following command to verify that the containers are running:
# oc get pods
For example:
# oc get pods
NAME                                           READY     STATUS    RESTARTS   AGE
glusterblock-registry-provisioner-dc-1-nvnhc   1/1       Running   1          7d
glusterfs-registry-4cpcc                       1/1       Running   0          7d
glusterfs-registry-9xj78                       1/1       Running   0          7d
glusterfs-registry-b9p5j                       1/1       Running   0          7d
6.2.2.2. Upgrading the Red Hat Gluster Storage Registry Pods
- Execute the following steps to stop the Heketi pod to prevent it from accepting any new request for volume creation or volume deletion:
- Execute the following command to access your project:
# oc project <project_name>
For example:
# oc project storage-project
- Execute the following command to get the DeploymentConfig:
# oc get dc
- Execute the following command to set heketi server to accept requests only from the local-client:
# heketi-cli server mode set local-client
- Wait for the ongoing operations to complete and execute the following command to monitor if there are any ongoing operations:
# heketi-cli server operations info
- Execute the following command to reduce the replica count from 1 to 0. This brings down the Heketi pod:
# oc scale dc <heketi_dc> --replicas=0
- Execute the following command to verify that the heketi pod is no longer present:
# oc get pods
- Execute the following command to find the DaemonSet name for gluster:
# oc get ds
- Execute the following command to delete the DaemonSet:
# oc delete ds <ds-name> --cascade=false
Using the --cascade=false option while deleting the old DaemonSet does not delete the glusterfs_registry pods; it deletes only the DaemonSet. After deleting the old DaemonSet, you must load the new one. When you manually delete the old pods, the new pods that are created will have the configurations of the new DaemonSet.
For example,
# oc delete ds glusterfs-registry --cascade=false
daemonset "glusterfs-registry" deleted
- Execute the following commands to verify all the old pods are up:
# oc get pods
For example,
# oc get pods
NAME                                           READY     STATUS    RESTARTS   AGE
glusterblock-registry-provisioner-dc-1-nvnhc   1/1       Running   1          7d
glusterfs-registry-4cpcc                       1/1       Running   0          7d
glusterfs-registry-9xj78                       1/1       Running   0          7d
glusterfs-registry-b9p5j                       1/1       Running   0          7d
- Execute the following command to delete the old glusterfs template.
# oc delete templates glusterfs
For example,
# oc delete templates glusterfs
template "glusterfs" deleted
- Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
- Check if the nodes are labelled using the following command:
# oc get nodes --show-labels
If the Red Hat Gluster Storage nodes do not have the storagenode=glusterfs label, then label the nodes as shown in step ii.
- Label the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
# oc label nodes <node name> storagenode=glusterfs
- Execute the following command to register new gluster template.
# oc create -f /usr/share/heketi/templates/glusterfs-template.yaml
For example,
# oc create -f /usr/share/heketi/templates/glusterfs-template.yaml
template "glusterfs" created
- Execute the following commands to create the gluster DaemonSet:
# oc process glusterfs | oc create -f -
For example,
# oc process glusterfs | oc create -f -
daemonset "glusterfs" created
- Execute the following command to identify the old glusterfs_registry pods that need to be deleted:
# oc get pods
For example,
# oc get pods
NAME                                           READY     STATUS    RESTARTS   AGE
glusterblock-registry-provisioner-dc-1-nvnhc   1/1       Running   1          7d
glusterfs-registry-4cpcc                       1/1       Running   0          7d
glusterfs-registry-9xj78                       1/1       Running   0          7d
glusterfs-registry-b9p5j                       1/1       Running   0          7d
- Execute the following command and ensure that the bricks are not more than 90% full:
# df -kh | grep -v ^Filesystem | awk '{ use=$5; sub(/%/,"",use); if (use+0 > 90) print $0 }'
- Execute the following command to delete the old glusterfs-registry pods.
glusterfs-registry pods should follow a rolling upgrade; hence, you must ensure that the new pod is running before deleting the next old glusterfs-registry pod. The OnDelete DaemonSet update strategy is supported: after you update a DaemonSet template, new DaemonSet pods are only created when you manually delete the old DaemonSet pods.
- To delete the old glusterfs-registry pods, execute the following command:
# oc delete pod <gluster_pod>
For example,
# oc delete pod glusterfs-0vcf3
pod "glusterfs-0vcf3" deleted
Note
Before deleting the next pod, a self-heal check has to be made:
- Run the following command to access a shell on the glusterfs-registry pod:
# oc rsh <gluster_pod_name>
- Run the following command to check the self-heal status of all the volumes:
# for each_volume in `gluster volume list`; do gluster volume heal $each_volume info ; done | grep "Number of entries: [^0]$"
- The delete pod command terminates the old pod and creates a new pod. Run # oc get pods -w and check the Age of the pod; its READY status should be 1/1. The following example output shows the status progression from termination to creation of the pod:
# oc get pods -w
NAME                             READY     STATUS        RESTARTS   AGE
glusterfs-0vcf3                  1/1       Terminating   0          3d
…
- Execute the following command to verify that the pods are running:
# oc get pods
For example,
# oc get pods
NAME                                           READY     STATUS    RESTARTS   AGE
glusterblock-registry-provisioner-dc-1-nvnhc   1/1       Running   1          7d
glusterfs-registry-4cpcc                       1/1       Running   0          7d
glusterfs-registry-9xj78                       1/1       Running   0          7d
glusterfs-registry-b9p5j                       1/1       Running   0          7d
- Execute the following commands to verify if you have upgraded the pod to the latest version:
# oc rsh <gluster_registry_pod_name> glusterd --version
For example:
# oc rsh glusterfs-registry-4cpcc glusterd --version
glusterfs 3.12.2
# rpm -qa|grep gluster
- Check the Red Hat Gluster Storage op-version by executing the following command on one of the glusterfs-registry pods.
# gluster vol get all cluster.op-version
- Set the cluster.op-version to 31302 on any one of the pods:
Note
Ensure all the glusterfs-registry pods are updated before changing the cluster.op-version.
# gluster volume set all cluster.op-version 31302
- Execute the following steps to enable server.tcp-user-timeout on all volumes.
Note
The "server.tcp-user-timeout" option specifies the maximum amount of the time (in seconds) the transmitted data from the application can remain unacknowledged from the brick.It is used to detect force disconnections and dead connections (if a node dies unexpectedly, a firewall is activated, etc.,) early and make it possible for applications to reduce the overall failover time.- List the glusterfs pod using the following command:
# oc get pods
For example:
# oc get pods
NAME                                           READY     STATUS    RESTARTS   AGE
glusterblock-registry-provisioner-dc-1-nvnhc   1/1       Running   1          7d
glusterfs-registry-4cpcc                       1/1       Running   0          7d
glusterfs-registry-9xj78                       1/1       Running   0          7d
glusterfs-registry-b9p5j                       1/1       Running   0          7d
- Remote shell into one of the glusterfs-registry pods. For example:
# oc rsh glusterfs-registry-g6vd9
- Execute the following command:
# for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
For example:
# for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
volume1
volume set: success
volume2
volume set: success
- If a glusterblock-registry-provisioner pod already exists, delete it by executing the following commands:
# oc delete dc <gluster-block-registry-dc>
For example:
# oc delete dc glusterblock-registry-provisioner-dc
- Execute the following commands to deploy the gluster-block provisioner:
# sed -e 's/\\\${NAMESPACE}/<NAMESPACE>/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
# oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:<NAMESPACE>:glusterblock-provisioner
For example:
# sed -e 's/\\\${NAMESPACE}/storage-project/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
# oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:storage-project:glusterblock-provisioner
- Delete the following resources from the old pod:
# oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
# oc delete serviceaccounts glusterblock-registry-provisioner
- After editing the template, execute the following command to create the deployment configuration:
# oc process <gluster_block_provisioner_template> | oc create -f -
- Brick multiplexing is a feature that allows adding multiple bricks into one process. This reduces resource consumption and allows us to run more bricks than before with the same memory consumption. It is enabled by default from Container-Native Storage 3.6. During an upgrade from Container-Native Storage 3.9 to Red Hat Openshift Container Storage 3.10, to turn brick multiplexing on, execute the following commands:
- To exec into the Gluster pod, execute the following command and rsh into any of the glusterfs_registry pods:
# oc rsh <gluster_pod_name>
- Verify if brick multiplexing is enabled. If it is disabled, then execute the following command to enable brick multiplexing:
# gluster volume set all cluster.brick-multiplex on
Note
You can check the brick multiplex status by executing the following command:
# gluster v get all all
For example:
# oc rsh glusterfs-registry-g6vd9
sh-4.2# gluster volume set all cluster.brick-multiplex on
Brick-multiplexing is supported only for container workloads (Independent/Converged). Also it is advised to make sure that either all volumes are in stopped state or no bricks are running before this option is modified. Do you still want to continue? (y/n) y
volume set: success
- List all the volumes in the trusted storage pool. This step is only required if the volume set operation was performed. For example:
# gluster volume list
heketidbstorage
vol_194049d2565d2a4ad78ef0483e04711e
...
...
Restart all the volumes. This step is only required if the volume set operation was performed along with the previous step:
# gluster vol stop <VOLNAME>
# gluster vol start <VOLNAME>
- Support for S3 compatible Object Store in Red Hat Openshift Container Storage is under technology preview. To enable S3 compatible object store, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html/operations_guide/s3_object_store.
Note
- After upgrading the glusterfs registry pods, proceed with the steps listed in Section 6.3, “Upgrading the client on Red Hat Openshift Container Platform Nodes” to upgrade the client on Red Hat Openshift Container Platform Nodes.
6.2.3. Upgrading if the existing version was deployed by using Ansible
6.2.3.1. Upgrading Heketi Server
Note
- Back up the Heketi registry database file:
# oc rsh <heketi_pod_name>
# cp -a /var/lib/heketi/heketi.db /var/lib/heketi/heketi.db.`date +%s`.`heketi --version | awk '{print $2}'`
# exit
- Execute the following command to get the current HEKETI_ADMIN_KEY. The OCS admin can choose to set any phrase for the user key as long as it is not used by their infrastructure; it is not used by any of the OCS default installed resources.
oc get secret heketi-storage-admin-secret -o jsonpath='{.data.key}'|base64 -d;echo
- Execute the following step to edit the template:
# oc get templates
NAME                       DESCRIPTION                          PARAMETERS    OBJECTS
glusterblock-provisioner   glusterblock provisioner template    3 (2 blank)   4
glusterfs                  GlusterFS DaemonSet template         5 (1 blank)   1
heketi                     Heketi service deployment template   7 (3 blank)   3
If the existing template has IMAGE_NAME and IMAGE_VERSION as two parameters, then edit the template to change the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, HEKETI_ROUTE, IMAGE_NAME, IMAGE_VERSION, and CLUSTER_NAME as shown in the example below.
# oc edit template heketi
parameters:
- description: Set secret for those creating volumes as type _user_
  displayName: Heketi User Secret
  name: HEKETI_USER_KEY
  value: <heketiuserkey>
- description: Set secret for administration of the Heketi service as user _admin_
  displayName: Heketi Administrator Secret
  name: HEKETI_ADMIN_KEY
  value: <adminkey>
- description: Set the executor type, kubernetes or ssh
  displayName: heketi executor type
  name: HEKETI_EXECUTOR
  value: kubernetes
- description: Set the hostname for the route URL
  displayName: heketi route name
  name: HEKETI_ROUTE
  value: heketi-storage
- displayName: heketi container image name
  name: IMAGE_NAME
  required: true
  value: rhgs3/rhgs-volmanager-rhel7
- displayName: heketi container image version
  name: IMAGE_VERSION
  required: true
  value: v3.10
- description: A unique name to identify this heketi service, useful for running multiple heketi instances
  displayName: GlusterFS cluster name
  name: CLUSTER_NAME
  value: storage
If the template has only IMAGE_NAME, then edit the template to change the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, HEKETI_ROUTE, IMAGE_NAME, and CLUSTER_NAME as shown in the example below.
# oc edit template heketi
parameters:
- description: Set secret for those creating volumes as type _user_
  displayName: Heketi User Secret
  name: HEKETI_USER_KEY
  value: <heketiuserkey>
- description: Set secret for administration of the Heketi service as user _admin_
  displayName: Heketi Administrator Secret
  name: HEKETI_ADMIN_KEY
  value: <adminkey>
- description: Set the executor type, kubernetes or ssh
  displayName: heketi executor type
  name: HEKETI_EXECUTOR
  value: kubernetes
- description: Set the hostname for the route URL
  displayName: heketi route name
  name: HEKETI_ROUTE
  value: heketi-storage
- displayName: heketi container image name
  name: IMAGE_NAME
  required: true
  value: rhgs3/rhgs-volmanager-rhel7:v3.10
- description: A unique name to identify this heketi service, useful for running multiple heketi instances
  displayName: GlusterFS cluster name
  name: CLUSTER_NAME
  value: storage
- Execute the following command to delete the deployment configuration, service, and route for heketi:
# oc delete deploymentconfig,service,route heketi-registry
- Execute the following command to deploy the Heketi service, route, and deployment configuration which will be used to create persistent volumes for OpenShift:
# oc process heketi | oc create -f -
service "heketi-registry" created
route "heketi-registry" created
deploymentconfig "heketi-registry" created
- Execute the following command to verify that the containers are running:
# oc get pods
For example:
# oc get pods
NAME                                           READY     STATUS    RESTARTS   AGE
glusterblock-registry-provisioner-dc-1-nvnhc   1/1       Running   1          7d
glusterfs-registry-4cpcc                       1/1       Running   0          7d
glusterfs-registry-9xj78                       1/1       Running   0          7d
glusterfs-registry-b9p5j                       1/1       Running   0          7d
6.2.3.2. Upgrading the Red Hat Gluster Storage Registry Pods
- Execute the following steps to stop the Heketi pod to prevent it from accepting any new request for volume creation or volume deletion:
- Execute the following command to access your project:
# oc project <project_name>
For example:
# oc project storage-project
- Execute the following command to get the DeploymentConfig:
# oc get dc
- Execute the following command to set the Heketi server to accept requests only from the local-client:
# heketi-cli server mode set local-client
- Wait for the ongoing operations to complete and execute the following command to monitor if there are any ongoing operations:
# heketi-cli server operations info
- Execute the following command to reduce the replica count from 1 to 0. This brings down the Heketi pod:
# oc scale dc <heketi_dc> --replicas=0
- Execute the following command to verify that the heketi pod is no longer present:
# oc get pods
- Execute the following command to find the DaemonSet name for gluster:
# oc get ds
- Execute the following command to delete the DaemonSet:
# oc delete ds <ds-name> --cascade=false
Using the --cascade=false option while deleting the old DaemonSet does not delete the glusterfs_registry pods but deletes only the DaemonSet. After deleting the old DaemonSet, you must load the new one. When you manually delete the old pods, the new pods which are created will have the configurations of the new DaemonSet.
For example,
# oc delete ds glusterfs-registry --cascade=false
daemonset "glusterfs-registry" deleted
- Execute the following commands to verify all the old pods are up:
# oc get pods
For example,
# oc get pods
NAME                                           READY     STATUS    RESTARTS   AGE
glusterblock-registry-provisioner-dc-1-nvnhc   1/1       Running   1          7d
glusterfs-registry-4cpcc                       1/1       Running   0          7d
glusterfs-registry-9xj78                       1/1       Running   0          7d
glusterfs-registry-b9p5j                       1/1       Running   0          7d
- Execute the following command to edit the old glusterfs template.
# oc get templates
NAME                       DESCRIPTION                          PARAMETERS    OBJECTS
glusterblock-provisioner   glusterblock provisioner template    3 (2 blank)   4
glusterfs                  GlusterFS DaemonSet template         5 (1 blank)   1
heketi                     Heketi service deployment template   7 (3 blank)   3
If the template has IMAGE_NAME and IMAGE_VERSION as two separate parameters, then update the glusterfs template as following. For example:
# oc edit template glusterfs
- description: Labels which define the daemonset node selector. Must contain at least
    one label of the format \'glusterfs=<CLUSTER_NAME>-host\'
  displayName: Daemonset Node Labels
  name: NODE_LABELS
  value: '{ "glusterfs": "registry-host" }'
- displayName: GlusterFS container image name
  name: IMAGE_NAME
  required: true
  value: rhgs3/rhgs-server-rhel7
- displayName: GlusterFS container image version
  name: IMAGE_VERSION
  required: true
  value: v3.10
- description: A unique name to identify which heketi service manages this cluster, useful for running multiple heketi instances
  displayName: GlusterFS cluster name
  name: CLUSTER_NAME
  value: registry
If the template has only IMAGE_NAME as a parameter, then update the glusterfs template as following. For example:
# oc edit template glusterfs
- description: Labels which define the daemonset node selector. Must contain at least
    one label of the format \'glusterfs=<CLUSTER_NAME>-host\'
  displayName: Daemonset Node Labels
  name: NODE_LABELS
  value: '{ "glusterfs": "registry-host" }'
- displayName: GlusterFS container image name
  name: IMAGE_NAME
  required: true
  value: rhgs3/rhgs-server-rhel7:v3.10
- description: A unique name to identify which heketi service manages this cluster, useful for running multiple heketi instances
  displayName: GlusterFS cluster name
  name: CLUSTER_NAME
  value: registry
Note
Ensure that the CLUSTER_NAME variable is set to the correct value.
- Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
- Check if the nodes are labelled using the following command:
# oc get nodes --show-labels
If the Red Hat Gluster Storage nodes do not have the glusterfs=registry-host label, then label the nodes as shown in step ii.
- Label all the OpenShift Container Platform nodes that have the Red Hat Gluster Storage pods:
# oc label nodes <node name> glusterfs=registry-host
- Execute the following commands to create the gluster DaemonSet:
# oc process glusterfs | oc create -f -
For example,
# oc process glusterfs | oc create -f -
daemonset "glusterfs" created
- Execute the following command to identify the old glusterfs_registry pods that need to be deleted:
# oc get pods
For example,
# oc get pods
NAME                                           READY     STATUS    RESTARTS   AGE
glusterblock-registry-provisioner-dc-1-nvnhc   1/1       Running   1          7d
glusterfs-registry-4cpcc                       1/1       Running   0          7d
glusterfs-registry-9xj78                       1/1       Running   0          7d
glusterfs-registry-b9p5j                       1/1       Running   0          7d
- Execute the following command and ensure that the bricks are not more than 90% full:
# df -kh | grep -v ^Filesystem | awk '{if (int($5) > 90) print $0}'
- Execute the following command to delete the old glusterfs-registry pods. glusterfs-registry pods should follow a rolling upgrade, so you must ensure that the new pod is running before deleting the next old glusterfs-registry pod. Only the OnDelete DaemonSet update strategy is supported; with the OnDelete update strategy, after you update a DaemonSet template, new DaemonSet pods are created only when you manually delete old DaemonSet pods (an optional strategy check is shown after the example below).
- To delete the old glusterfs-registry pods, execute the following command:
# oc delete pod <gluster_pod>
For example,
# oc delete pod glusterfs-0vcf3
pod "glusterfs-0vcf3" deleted
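As the optional check referenced above, you can confirm that the DaemonSet really uses the OnDelete update strategy before relying on manual pod deletion (assuming the DaemonSet is named glusterfs-registry, as in the earlier examples):
# oc get ds glusterfs-registry -o jsonpath='{.spec.updateStrategy.type}'
OnDelete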
Note
Before deleting the next pod, a self-heal check has to be made:
- Run the following command to access shell on glusterfs-registry pods:
# oc rsh <gluster_pod_name>
- Run the following command to check the self-heal status of all the volumes. If the command returns no output, there are no pending heal entries and it is safe to delete the next pod:
# for each_volume in `gluster volume list`; do gluster volume heal $each_volume info ; done | grep "Number of entries: [^0]$"
- The delete pod command will terminate the old pod and create a new pod. Run # oc get pods -w and check the Age of the pod and its READY status, which should be 1/1. The following is an example output showing the status progression from termination to creation of the pod.
# oc get pods -w
NAME              READY     STATUS        RESTARTS   AGE
glusterfs-0vcf3   1/1       Terminating   0          3d
…
- Execute the following command to verify that the pods are running:
# oc get pods
For example,
# oc get pods
NAME                                           READY     STATUS    RESTARTS   AGE
glusterblock-registry-provisioner-dc-1-nvnhc   1/1       Running   1          7d
glusterfs-registry-4cpcc                       1/1       Running   0          7d
glusterfs-registry-9xj78                       1/1       Running   0          7d
glusterfs-registry-b9p5j                       1/1       Running   0          7d
- Execute the following commands to verify if you have upgraded the pod to the latest version:
# oc rsh <gluster_registry_pod_name> glusterd --version
For example:
# oc rsh glusterfs-registry-4cpcc glusterd --version
glusterfs 3.12.2
# rpm -qa | grep gluster
- Check the Red Hat Gluster Storage op-version by executing the following command on one of the glusterfs-registry pods.
# gluster vol get all cluster.op-version
- Set the cluster.op-version to 31302 on any one of the pods:
Note
Ensure all the glusterfs-registry pods are updated before changing the cluster.op-version.
# gluster volume set all cluster.op-version 31302
- Execute the following steps to enable server.tcp-user-timeout on all volumes.
Note
The "server.tcp-user-timeout" option specifies the maximum amount of the time (in seconds) the transmitted data from the application can remain unacknowledged from the brick.It is used to detect force disconnections and dead connections (if a node dies unexpectedly, a firewall is activated, etc.,) early and make it possible for applications to reduce the overall failover time.- List the glusterfs pod using the following command:
# oc get pods
For example:
# oc get pods
NAME                                           READY     STATUS    RESTARTS   AGE
glusterblock-registry-provisioner-dc-1-nvnhc   1/1       Running   1          7d
glusterfs-registry-4cpcc                       1/1       Running   0          7d
glusterfs-registry-9xj78                       1/1       Running   0          7d
glusterfs-registry-b9p5j                       1/1       Running   0          7d
- Remote shell into one of the glusterfs-registry pods. For example:
# oc rsh glusterfs-registry-g6vd9
- Execute the following command:
# for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
For example:
# for eachVolume in `gluster volume list`; do echo $eachVolume; gluster volume set $eachVolume server.tcp-user-timeout 42 ; done
volume1
volume set: success
volume2
volume set: success
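To spot-check that the option took effect on a given volume (an optional step not in the original procedure; <VOLNAME> is a placeholder):
# gluster volume get <VOLNAME> server.tcp-user-timeout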
- If a gluster-block-registry-provisioner pod already exists, then delete it by executing the following command:
# oc delete dc <gluster-block-registry-dc>
For example:
# oc delete dc glusterblock-registry-provisioner-dc
- Depending on the OCP version, edit the glusterblock-provisioner template to change the IMAGE_NAME, IMAGE_VERSION and NAMESPACE.
# oc get templates
NAME                       DESCRIPTION                          PARAMETERS    OBJECTS
glusterblock-provisioner   glusterblock provisioner template    3 (2 blank)   4
glusterfs                  GlusterFS DaemonSet template         5 (1 blank)   1
heketi                     Heketi service deployment template   7 (3 blank)   3
If the template has IMAGE_NAME and IMAGE_VERSION as two separate parameters, then update the glusterblock-provisioner template as following. For example:
# oc edit template glusterblock-provisioner
- displayName: glusterblock provisioner container image name
  name: IMAGE_NAME
  required: true
  value: rhgs3/rhgs-gluster-block-prov-rhel7
- displayName: glusterblock provisioner container image version
  name: IMAGE_VERSION
  required: true
  value: v3.10
- description: The namespace in which these resources are being created
  displayName: glusterblock provisioner namespace
  name: NAMESPACE
  required: true
  value: glusterfs-registry
- description: A unique name to identify which heketi service manages this cluster, useful for running multiple heketi instances
  displayName: GlusterFS cluster name
  name: CLUSTER_NAME
  value: registry
If the template has only IMAGE_NAME, then update the glusterblock-provisioner template as following. For example:
# oc edit template glusterblock-provisioner
- displayName: glusterblock provisioner container image name
  name: IMAGE_NAME
  required: true
  value: rhgs3/rhgs-gluster-block-prov-rhel7:v3.10
- description: The namespace in which these resources are being created
  displayName: glusterblock provisioner namespace
  name: NAMESPACE
  required: true
  value: glusterfs-registry
- description: A unique name to identify which heketi service manages this cluster, useful for running multiple heketi instances
  displayName: GlusterFS cluster name
  name: CLUSTER_NAME
  value: registry
- Delete the following resources from the old pod:
# oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
# oc delete serviceaccounts glusterblock-registry-provisioner
- After editing the template, execute the following command to create the deployment configuration:
# oc process <gluster_block_provisioner_template> | oc create -f -
- Brick multiplexing is a feature that allows adding multiple bricks into one process. This reduces resource consumption and allows us to run more bricks than before with the same memory consumption. It is enabled by default from Container-Native Storage 3.6. During an upgrade from Container-Native Storage 3.9 to Red Hat Openshift Container Storage 3.10, to turn brick multiplexing on, execute the following commands:
- To exec into the Gluster pod, execute the following command and rsh into any of the glusterfs_registry pods:
# oc rsh <gluster_pod_name>
- Verify if brick multiplexing is enabled. If it is disabled, then execute the following command to enable brick multiplexing:
# gluster volume set all cluster.brick-multiplex on
Note
You can check the brick multiplex status by executing the following command:
# gluster v get all all
For example:
# oc rsh glusterfs-registry-g6vd9
sh-4.2# gluster volume set all cluster.brick-multiplex on
Brick-multiplexing is supported only for container workloads (Independent/Converged). Also it is advised to make sure that either all volumes are in stopped state or no bricks are running before this option is modified. Do you still want to continue? (y/n) y
volume set: success
- List all the volumes in the trusted storage pool. This step is only required if the volume set operation is performed. For example:
# gluster volume list
heketidbstorage
vol_194049d2565d2a4ad78ef0483e04711e
...
...
- Restart all the volumes. This step is only required if the volume set operation is performed along with the previous step:
# gluster vol stop <VOLNAME>
# gluster vol start <VOLNAME>
- Support for S3 compatible Object Store in Red Hat Openshift Container Storage is under technology preview. To enable S3 compatible object store, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html/operations_guide/s3_object_store.
Note
- After upgrading the glusterfs registry pods, proceed with the steps listed in Section 6.3, “Upgrading the client on Red Hat Openshift Container Platform Nodes” to upgrade the client on Red Hat Openshift Container Platform Nodes.
6.3. Upgrading the client on Red Hat Openshift Container Platform Nodes
- To drain the pod, execute the following command on the master node (or any node with cluster-admin access):
# oc adm drain <node_name> --ignore-daemonsets
- To check if all the pods are drained, execute the following command on the master node (or any node with cluster-admin access) :
# oc get pods --all-namespaces --field-selector=spec.nodeName=<node_name>
- Execute the following command on the node to upgrade the client to the glusterfs-fuse-3.12.2-32.el7.x86_64 version:
# yum install glusterfs-client
- To enable node for pod scheduling execute the following command on the master node (or any node with cluster-admin access):
# oc adm manage-node --schedulable=true <node_name>
- Create and add the following content to the multipath.conf file:
Note
Make sure that the changes to multipath.conf and reloading of multipathd are done only after all the server nodes are upgraded.
# cat >> /etc/multipath.conf <<EOF
# LIO iSCSI
devices {
        device {
                vendor "LIO-ORG"
                user_friendly_names "yes" # names like mpatha
                path_grouping_policy "failover" # one path per group
                hardware_handler "1 alua"
                path_selector "round-robin 0"
                failback immediate
                path_checker "tur"
                prio "alua"
                no_path_retry 120
                rr_weight "uniform"
        }
}
EOF
- Execute the following commands to start multipath daemon and [re]load the multipath configuration:
# systemctl start multipathd
# systemctl reload multipathd
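Optionally, you can confirm that multipathd picked up the new configuration and list any LIO-ORG devices it manages (an extra check, not part of the original procedure; the output depends on your active iSCSI sessions):
# multipathd show config | grep -A 2 LIO-ORG
# multipath -ll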
6.4. Starting the Heketi Pods
- Execute the following command to navigate to the project where the Heketi pods are running:
# oc project <project_name>
For example:
# oc project glusterfs
- Execute the following command to get the DeploymentConfig:
# oc get dc
For example:
# oc get dc
NAME                          REVISION   DESIRED   CURRENT   TRIGGERED BY
glusterblock-provisioner-dc   1          1         1         config
heketi                        1          1         1         config
- Execute the following command to increase the replica count from 0 to 1. This brings back the Heketi pod:
# oc scale dc <heketi_dc> --replicas=1
- Execute the following command to verify that the Heketi pod is present:
# oc get pods
For example:
# oc get pods
glusterblock-provisioner-dc-1-fc8sc   1/1       Running   0          3d
glusterfs-bd6kv                       1/1       Running   0          2h
glusterfs-vhpcw                       1/1       Running   1          1d
glusterfs-z6nkk                       1/1       Running   0          1d
heketi-1-6sccl                        1/1       Running   0          2h
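As an optional check (the pod name is the placeholder from the example above), inspect the Heketi pod logs to confirm that the service started cleanly after scaling the replicas back up:
# oc logs heketi-1-6sccl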
Chapter 7. Upgrading Your Red Hat Openshift Container Storage in Independent Mode
7.1. Prerequisites
- Ensure that you have the supported versions of OpenShift Container Platform with Red Hat Gluster Storage Server and Red Hat Openshift Container Storage. For more information on supported versions, see Section 3.1.1, “Supported Versions”.
- If Heketi is running as a standalone service on one of the Red Hat Gluster Storage nodes, then ensure that its port is open. By default the port number for Heketi is 8080. To open this port, execute the following commands on the node where Heketi is running:
# firewall-cmd --zone=zone_name --add-port=8080/tcp
# firewall-cmd --zone=zone_name --add-port=8080/tcp --permanent
If Heketi is configured to listen on a different port, then change the port number in the command accordingly.
7.2. Upgrading your Independent Mode Setup
7.2.1. Upgrading the Red Hat Gluster Storage Cluster
7.2.2. Upgrading/Migration of Heketi in RHGS node
Note
Important
- In OCS 3.11, upgrade of Heketi in the RHGS node is not supported. Hence, you have to migrate heketi to a new heketi pod.
- Ensure that you migrate to the supported heketi deployment now, as there might not be a migration path in future versions.
- Ensure that the cns-deploy rpm is installed on the master node. This provides the template files necessary to set up the heketi pod.
# subscription-manager repos --enable=rh-gluster-3-for-rhel-7-server-rpms
# yum install cns-deploy
- Use the newly created containerized Red Hat Gluster Storage project on the master node:
# oc project <project-name>
For example:
# oc project gluster
- Execute the following command on the master node to create the service account:
# oc create -f /usr/share/heketi/templates/heketi-service-account.yaml
serviceaccount/heketi-service-account created
- Execute the following command on the master node to install the heketi template:
# oc create -f /usr/share/heketi/templates/heketi-template.yaml
template.template.openshift.io/heketi created
- Verify that the templates are created:
# oc get templates
NAME     DESCRIPTION                          PARAMETERS    OBJECTS
heketi   Heketi service deployment template   5 (3 blank)   3
- Execute the following command on the master node to grant the heketi Service Account the necessary privileges:
# oc policy add-role-to-user edit system:serviceaccount:gluster:heketi-service-account
role "edit" added: "system:serviceaccount:gluster:heketi-service-account"
# oc adm policy add-scc-to-user privileged -z heketi-service-account
scc "privileged" added to: ["system:serviceaccount:gluster:heketi-service-account"]
- On the RHGS node, where heketi is running, execute the following commands:
- Create the heketidbstorage volume:
# heketi-cli volume create --size=2 --name=heketidbstorage
- Mount the volume:
# mount -t glusterfs 192.168.11.192:heketidbstorage /mnt/
where 192.168.11.192 is one of the RHGS nodes.
- Stop the heketi service:
# systemctl stop heketi
- Disable the heketi service:
# systemctl disable heketi
- Copy the heketi db to the heketidbstorage volume:
# cp /var/lib/heketi/heketi.db /mnt/
- Unmount the volume:
# umount /mnt
- Copy the following files from the heketi node to the master node:
# scp /etc/heketi/heketi.json topology.json /etc/heketi/heketi_key OCP_master_node:/root/
where OCP_master_node is the hostname of the master node.
- On the master node, set the environment variables for the following three files that were copied from the heketi node. Add the following lines to ~/.bashrc file and run the bash command to apply and save the changes:
export SSH_KEYFILE=heketi_key
export TOPOLOGY=topology.json
export HEKETI_CONFIG=heketi.json
Note
If you have changed the value for "keyfile" in /etc/heketi/heketi.json, change the value of SSH_KEYFILE here accordingly.
- Execute the following command to create a secret to hold the configuration file:
# oc create secret generic heketi-config-secret --from-file=${SSH_KEYFILE} --from-file=${HEKETI_CONFIG} --from-file=${TOPOLOGY}
secret/heketi-config-secret created
- Execute the following command to label the secret:
# oc label --overwrite secret heketi-config-secret glusterfs=heketi-config-secret heketi=config-secret
secret/heketi-config-secret labeled
- Create the heketi-gluster-endpoints.yaml file with the IP addresses of all the glusterfs nodes. For example:
# cat heketi-gluster-endpoints.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: heketi-storage-endpoints
subsets:
- addresses:
  - ip: 192.168.11.208
  ports:
  - port: 1
- addresses:
  - ip: 192.168.11.176
  ports:
  - port: 1
- addresses:
  - ip: 192.168.11.192
  ports:
  - port: 1
In the above example, 192.168.11.208, 192.168.11.176, and 192.168.11.192 are the glusterfs nodes.
- Execute the following command to create the endpoints:
# oc create -f ./heketi-gluster-endpoints.yaml
- Create the heketi-gluster-service.yaml file for the endpoints. For example:
# cat heketi-gluster-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: heketi-storage-endpoints
spec:
  ports:
  - port: 1
- Execute the following command to create the service:
# oc create -f ./heketi-gluster-service.yaml
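Optionally, verify that the endpoints and the service were created (the object names follow the files above; this check is not part of the original procedure):
# oc get endpoints heketi-storage-endpoints
# oc get service heketi-storage-endpoints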
- Execute the following command to deploy the Heketi service, route, and deployment configuration which will be used to create persistent volumes for OpenShift:
# oc process heketi | oc create -f -
service/heketi created
route.route.openshift.io/heketi created
deploymentconfig.apps.openshift.io/heketi created
- To verify that Heketi is migrated, execute the following command on the master node:
# oc rsh po/<heketi-pod-name>
For example:
# oc rsh po/heketi-1-p65c6
- Execute the following command to check the cluster IDs:
# heketi-cli cluster list
From the output, verify that the cluster ID matches that of the old cluster.
7.2.3. Upgrading if the Existing Version was Deployed Using cns-deploy
7.2.3.1. Upgrading Heketi in Openshift node
- Execute the following command to update the heketi client and cns-deploy packages:
# yum update cns-deploy -y
# yum update heketi-client -y
- Back up the Heketi database file:
# oc rsh <heketi_pod_name>
# cp -a /var/lib/heketi/heketi.db /var/lib/heketi/heketi.db.`date +%s`.`heketi --version | awk '{print $2}'`
# exit
- Execute the following command to delete the heketi template.
# oc delete templates heketi
- Execute the following command to get the current HEKETI_ADMIN_KEY. The OCS admin can choose to set any phrase for the user key as long as it is not used by their infrastructure. It is not used by any of the OCS default installed resources.
# oc get secret heketi-storage-admin-secret -o jsonpath='{.data.key}' | base64 -d; echo
- Execute the following command to install the heketi template.
# oc create -f /usr/share/heketi/templates/heketi-template.yaml
template "heketi" created
- Execute the following command to grant the heketi Service Account the necessary privileges.
# oc policy add-role-to-user edit system:serviceaccount:<project_name>:heketi-service-account
# oc adm policy add-scc-to-user privileged -z heketi-service-account
For example,
# oc policy add-role-to-user edit system:serviceaccount:storage-project:heketi-service-account
# oc adm policy add-scc-to-user privileged -z heketi-service-account
- Execute the following command to generate a new heketi configuration file.
# sed -e "s/\${HEKETI_EXECUTOR}/ssh/" -e "s#\${HEKETI_FSTAB}#/etc/fstab#" -e "s/\${SSH_PORT}/22/" -e "s/\${SSH_USER}/root/" -e "s/\${SSH_SUDO}/false/" -e "s/\${BLOCK_HOST_CREATE}/true/" -e "s/\${BLOCK_HOST_SIZE}/500/" "/usr/share/heketi/templates/heketi.json.template" > heketi.json
- The BLOCK_HOST_SIZE parameter controls the size (in GB) of the automatically created Red Hat Gluster Storage volumes hosting the gluster-block volumes (for more information, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/operations_guide/#Block_Storage). This default configuration will dynamically create block-hosting volumes of 500GB in size as more space is required.
- Alternatively, copy the file /usr/share/heketi/templates/heketi.json.template to heketi.json in the current directory and edit the new file directly, replacing each "${VARIABLE}" string with the required parameter.
Note
JSON formatting is strictly required (e.g. no trailing spaces, booleans in all lowercase).
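Because the formatting is strict, a simple way to validate the edited file before creating the secret (an optional step, assuming a python interpreter is available on the node) is:
# python -m json.tool heketi.json > /dev/null && echo "heketi.json is valid JSON"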
Note
If the heketi-config-secret file already exists, then delete the file and run the following command.
- Execute the following command to create a secret to hold the configuration file:
# oc create secret generic heketi-config-secret --from-file=private_key=${SSH_KEYFILE} --from-file=./heketi.json
- Execute the following command to delete the deployment configuration, service, and route for heketi:
# oc delete deploymentconfig,service,route heketi
- Execute the following command to edit the heketi template. Edit the HEKETI_USER_KEY, HEKETI_ADMIN_KEY, and HEKETI_EXECUTOR parameters.
# oc edit template heketi
parameters:
- description: Set secret for those creating volumes as type _user_
  displayName: Heketi User Secret
  name: HEKETI_USER_KEY
  value: <heketiuserkey>
- description: Set secret for administration of the Heketi service as user _admin_
  displayName: Heketi Administrator Secret
  name: HEKETI_ADMIN_KEY
  value: <adminkey>
- description: Set the executor type, kubernetes or ssh
  displayName: heketi executor type
  name: HEKETI_EXECUTOR
  value: ssh
- description: Set the fstab path, file that is populated with bricks that heketi creates
  displayName: heketi fstab path
  name: HEKETI_FSTAB
  value: /etc/fstab
- description: Set the hostname for the route URL
  displayName: heketi route name
  name: HEKETI_ROUTE
  value: heketi-storage
- displayName: heketi container image name
  name: IMAGE_NAME
  required: true
  value: rhgs3/rhgs-volmanager-rhel7:v3.10
- description: A unique name to identify this heketi service, useful for running multiple heketi instances
  displayName: GlusterFS cluster name
  name: CLUSTER_NAME
  value: storage
- Execute the following command to deploy the Heketi service, route, and deployment configuration which will be used to create persistent volumes for OpenShift:
# oc process heketi | oc create -f -
service "heketi" created
route "heketi" created
deploymentconfig "heketi" created
- Execute the following command to verify that the containers are running:
# oc get pods
For example:
# oc get pods
NAME                             READY     STATUS    RESTARTS   AGE
glusterfs-0h68l                  1/1       Running   0          3d
glusterfs-0vcf3                  1/1       Running   0          3d
glusterfs-gr9gh                  1/1       Running   0          3d
heketi-1-zpw4d                   1/1       Running   0          3h
storage-project-router-2-db2wl   1/1       Running   0          4d
7.2.3.2. Upgrading Gluster Block
- Execute the following command to upgrade the gluster block:
# yum update gluster-block
- Enable and start the gluster block service:
# systemctl enable gluster-blockd
# systemctl start gluster-blockd
- Execute the following commands to update the heketi client and cns-deploy packages:
# yum update cns-deploy -y
# yum update heketi-client -y
- To use gluster block, add the following two parameters to the glusterfs section in the heketi configuration file at /etc/heketi/heketi.json:
auto_create_block_hosting_volume
block_hosting_volume_size
Where:
auto_create_block_hosting_volume: Creates block hosting volumes automatically if none is found or if the existing volume is exhausted. To enable this, set the value to true.
block_hosting_volume_size: New block hosting volumes will be created in the size mentioned. This is considered only if auto_create_block_hosting_volume is set to true. The recommended size is 500G.
For example:
.....
.....
"glusterfs" : {
        "executor" : "ssh",
        "db" : "/var/lib/heketi/heketi.db",
        "sshexec" : {
                "rebalance_on_expansion": true,
                "keyfile" : "/etc/heketi/private_key"
        },
        "auto_create_block_hosting_volume": true,
        "block_hosting_volume_size": 500
},
.....
.....
- Restart the Heketi service:
# systemctl restart heketi
Note
This step is not applicable if heketi is running as a pod in the OpenShift cluster.
- If a gluster-block-provisioner pod already exists, then delete it by executing the following command:
# oc delete dc <gluster-block-dc>
For example:
# oc delete dc glusterblock-provisioner-dc
- Execute the following commands to deploy the gluster-block provisioner:
# sed -e 's/\\\${NAMESPACE}/<NAMESPACE>/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
# oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:<NAMESPACE>:glusterblock-provisioner
For example:
# sed -e 's/\\\${NAMESPACE}/storage-project/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | oc create -f -
# oc adm policy add-cluster-role-to-user glusterblock-provisioner-runner system:serviceaccount:storage-project:glusterblock-provisioner
- Delete the following resources from the old pod:
# oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
# oc delete serviceaccounts glusterblock-registry-provisioner
- Execute the following command to create a glusterblock-provisioner:
# oc process <gluster_block_provisioner_template> | oc create -f -
7.2.4. Upgrading if the Existing Version was Deployed Using Ansible
7.2.4.1. Upgrading Heketi in Openshift node
- Execute the following command to update the heketi client:
# yum update heketi-client -y
- Back up the Heketi database file:
# oc rsh <heketi_pod_name>
# cp -a /var/lib/heketi/heketi.db /var/lib/heketi/heketi.db.`date +%s`.`heketi --version | awk '{print $2}'`
# exit
- Execute the following command to get the current HEKETI_ADMIN_KEY. The OCS admin can choose to set any phrase for the user key as long as it is not used by their infrastructure. It is not used by any of the OCS default installed resources.
# oc get secret heketi-storage-admin-secret -o jsonpath='{.data.key}' | base64 -d; echo
- Execute the following step to edit the template:
# oc get templates
NAME                       DESCRIPTION                          PARAMETERS    OBJECTS
glusterblock-provisioner   glusterblock provisioner template    3 (2 blank)   4
glusterfs                  GlusterFS DaemonSet template         5 (1 blank)   1
heketi                     Heketi service deployment template   7 (3 blank)   3
If the existing template has IMAGE_NAME and IMAGE_VERSION as two parameters, then edit the template to change the HEKETI_EXECUTOR, HEKETI_FSTAB, HEKETI_ROUTE, IMAGE_NAME, IMAGE_VERSION, and CLUSTER_NAME as shown in the example below.
# oc edit template heketi
- description: Set the executor type, kubernetes or ssh
  displayName: heketi executor type
  name: HEKETI_EXECUTOR
  value: ssh
- description: Set the fstab path, file that is populated with bricks that heketi creates
  displayName: heketi fstab path
  name: HEKETI_FSTAB
  value: /etc/fstab
- description: Set the hostname for the route URL
  displayName: heketi route name
  name: HEKETI_ROUTE
  value: heketi-storage
- displayName: heketi container image name
  name: IMAGE_NAME
  required: true
  value: rhgs3/rhgs-volmanager-rhel7
- displayName: heketi container image version
  name: IMAGE_VERSION
  required: true
  value: v3.10
- description: A unique name to identify this heketi service, useful for running multiple heketi instances
  displayName: GlusterFS cluster name
  name: CLUSTER_NAME
  value: storage
If the template has only IMAGE_NAME, then edit the template to change the HEKETI_EXECUTOR, HEKETI_FSTAB, HEKETI_ROUTE, IMAGE_NAME, and CLUSTER_NAME as shown in the example below.
# oc edit template heketi
- description: Set the executor type, kubernetes or ssh
  displayName: heketi executor type
  name: HEKETI_EXECUTOR
  value: ssh
- description: Set the fstab path, file that is populated with bricks that heketi creates
  displayName: heketi fstab path
  name: HEKETI_FSTAB
  value: /etc/fstab
- description: Set the hostname for the route URL
  displayName: heketi route name
  name: HEKETI_ROUTE
  value: heketi-storage
- displayName: heketi container image name
  name: IMAGE_NAME
  required: true
  value: rhgs3/rhgs-volmanager-rhel7:v3.10
- description: A unique name to identify this heketi service, useful for running multiple heketi instances
  displayName: GlusterFS cluster name
  name: CLUSTER_NAME
  value: storage
- Execute the following command to delete the deployment configuration, service, and route for heketi:
# oc delete deploymentconfig,service,route heketi-storage
- Execute the following command to get the current HEKETI_ADMIN_KEY:
# oc get secret heketi-storage-admin-secret -o jsonpath='{.data.key}' | base64 -d; echo
- Execute the following command to deploy the Heketi service, route, and deploymentconfig which will be used to create persistent volumes for OpenShift:
# oc process heketi | oc create -f -
service "heketi" created
route "heketi" created
deploymentconfig "heketi" created
- Execute the following command to verify that the containers are running:
# oc get pods
For example:
# oc get pods
NAME                             READY     STATUS    RESTARTS   AGE
glusterfs-0h68l                  1/1       Running   0          3d
glusterfs-0vcf3                  1/1       Running   0          3d
glusterfs-gr9gh                  1/1       Running   0          3d
heketi-1-zpw4d                   1/1       Running   0          3h
storage-project-router-2-db2wl   1/1       Running   0          4d
7.2.4.2. Upgrading Gluster Block if Deployed by Using Ansible
- Execute the following command to upgrade the gluster block:
# yum update gluster-block
- Enable and start the gluster block service:
# systemctl enable gluster-blockd
# systemctl start gluster-blockd
- Execute the following command to update the heketi client:
# yum update heketi-client -y
- Restart the Heketi service:
# systemctl restart heketi
Note
This step is not applicable if heketi is running as a pod in the OpenShift cluster.
- If a gluster-block-provisioner pod already exists, then delete it by executing the following command:
# oc delete dc <gluster-block-dc>
For example:
# oc delete dc glusterblock-provisioner-dc
- Edit the glusterblock-provisioner template to change the IMAGE_NAME, IMAGE_VERSION and NAMESPACE.
# oc get templates
NAME                       DESCRIPTION                          PARAMETERS    OBJECTS
glusterblock-provisioner   glusterblock provisioner template    3 (2 blank)   4
glusterfs                  GlusterFS DaemonSet template         5 (1 blank)   1
heketi                     Heketi service deployment template   7 (3 blank)   3
If the template has IMAGE_NAME and IMAGE_VERSION as two separate parameters, then update the glusterblock-provisioner template as following. For example:
# oc edit template glusterblock-provisioner
- displayName: glusterblock provisioner container image name
  name: IMAGE_NAME
  required: true
  value: rhgs3/rhgs-gluster-block-prov-rhel7
- displayName: glusterblock provisioner container image version
  name: IMAGE_VERSION
  required: true
  value: v3.10
- description: The namespace in which these resources are being created
  displayName: glusterblock provisioner namespace
  name: NAMESPACE
  required: true
  value: glusterfs
- description: A unique name to identify which heketi service manages this cluster, useful for running multiple heketi instances
  displayName: GlusterFS cluster name
  name: CLUSTER_NAME
  value: storage
If the template has only IMAGE_NAME as a parameter, then update the glusterblock-provisioner template as following. For example:
# oc edit template glusterblock-provisioner
- displayName: glusterblock provisioner container image name
  name: IMAGE_NAME
  required: true
  value: rhgs3/rhgs-gluster-block-prov-rhel7:v3.10
- description: The namespace in which these resources are being created
  displayName: glusterblock provisioner namespace
  name: NAMESPACE
  required: true
  value: glusterfs
- description: A unique name to identify which heketi service manages this cluster, useful for running multiple heketi instances
  displayName: GlusterFS cluster name
  name: CLUSTER_NAME
  value: storage
- Delete the following resources from the old pod:
# oc delete clusterroles.authorization.openshift.io glusterblock-provisioner-runner
# oc delete serviceaccounts glusterblock-registry-provisioner
- Execute the following command to create a glusterblock-provisioner:
# oc process <gluster_block_provisioner_template> | oc create -f -
7.2.5. Enabling S3 Compatible Object store
Note
7.3. Upgrading Gluster Nodes and heketi pods in glusterfs Registry Namespace
7.3.1. Upgrading the Red Hat Gluster Storage Registry Cluster
7.3.2. Upgrading Heketi Registry pod
Note
7.3.3. Upgrading Gluster Block
7.4. Upgrading the client on Red Hat Openshift Container Platform Nodes
- To drain the pod, execute the following command on the master node (or any node with cluster-admin access):
# oc adm drain <node_name> --ignore-daemonsets
- To check if all the pods are drained, execute the following command on the master node (or any node with cluster-admin access):
# oc get pods --all-namespaces --field-selector=spec.nodeName=<node_name>
- Execute the following command on the node to upgrade the client:
# yum update glusterfs-client
- To enable node for pod scheduling execute the following command on the master node (or any node with cluster-admin access):
# oc adm manage-node --schedulable=true <node_name>
- Create and add the following content to the multipath.conf file:
Note
Make sure that the changes to multipath.conf and reloading of multipathd are done only after all the server nodes are upgraded.
# cat >> /etc/multipath.conf <<EOF
# LIO iSCSI
devices {
        device {
                vendor "LIO-ORG"
                user_friendly_names "yes" # names like mpatha
                path_grouping_policy "failover" # one path per group
                hardware_handler "1 alua"
                path_selector "round-robin 0"
                failback immediate
                path_checker "tur"
                prio "alua"
                no_path_retry 120
        }
}
EOF
- Execute the following commands to start multipath daemon and [re]load the multipath configuration:
# systemctl start multipathd
# systemctl reload multipathd
Part IV. Uninstalling
Chapter 8. Uninstall Red Hat Openshift Container Storage
Warning
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/uninstall.yml
The uninstall playbook also supports the variable openshift_storage_glusterfs_wipe which, when enabled, will destroy any data on the block devices that were used for Red Hat Gluster Storage backend storage. For more information about the settings/variables that will be destroyed, see Appendix B, Settings that are destroyed when using uninstall playbook. It is recommended to use this variable in the following format:
ansible-playbook -i <path_to_inventory_file> -e "openshift_storage_glusterfs_wipe=true" /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/uninstall.yml
Note
Part V. Workloads
Chapter 9. Managing Arbitrated Replicated Volumes
9.1. Managing Arbiter Brick Size
# heketi-cli volume create --size=4 --gluster-volume-options='user.heketi.arbiter true,user.heketi.average-file-size 1024'
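The user.heketi.average-file-size option (in KiB) gives Heketi an estimate of the expected file count, which drives the arbiter brick size. As a rough, hypothetical sketch of the arithmetic (the ~4 KB-per-file metadata figure is a general Gluster arbiter sizing guideline, an assumption rather than a value stated in this guide):
# Hypothetical arbiter sizing estimate for the command above:
#   volume size        = 4 GiB
#   average file size  = 1024 KiB (1 MiB)
#   estimated files    = 4 GiB / 1 MiB      = 4096 files
#   arbiter brick size = 4096 files * 4 KB  = ~16 MiB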
9.2. Managing Arbiter Brick Placement
- supported: both arbiter bricks and data bricks are allowed.
- required: only arbiter bricks are allowed, data bricks are rejected.
- disabled: only data bricks are allowed, arbiter bricks are rejected.
Note
9.2.1. Setting Tags with the Heketi CLI
# heketi-cli node settags <node id> arbiter:<tag>
# heketi-cli node settags e2a792a43ca9a6bac4b9bfa792e89347 arbiter:disabled
# heketi-cli device settags <device id> arbiter:<tag>
# heketi-cli device settags 167fe2831ad0a91f7173dac79172f8d7 arbiter:required
9.2.2. Removing Tags using Heketi CLI
# heketi-cli node rmtags <node id> arbiter
# heketi-cli node rmtags e2a792a43ca9a6bac4b9bfa792e89347 arbiter
# heketi-cli device rmtags <device id> arbiter
# heketi-cli device rmtags 167fe2831ad0a91f7173dac79172f8d7 arbiter
9.2.3. Viewing Tags with the Heketi CLI
# heketi-cli node info <node id>
# heketi-cli node info e2a792a43ca9a6bac4b9bfa792e89347
Node Id: e2a792a43ca9a6bac4b9bfa792e89347
State: online
Cluster Id: ddb14817873c13c5bb42a5c04969daf9
Zone: 1
Management Hostname: 10.0.0.1
Storage Hostname: 10.0.0.1
Tags:
  arbiter: disabled
  test: demonstration
Devices:
Id:0b39f89c0677e8c0b796caf00204e726   Name:/dev/vdb   State:online   Size (GiB):500   Used (GiB):0   Free (GiB):500   Bricks:0
Id:167fe2831ad0a91f7173dac79172f8d7   Name:/dev/vdg   State:online   Size (GiB):500   Used (GiB):0   Free (GiB):500   Bricks:0
# heketi-cli device info <device id>
# heketi-cli device info 167fe2831ad0a91f7173dac79172f8d7
Device Id: 167fe2831ad0a91f7173dac79172f8d7
Name: /dev/vdg
State: online
Size (GiB): 500
Used (GiB): 0
Free (GiB): 500
Tags:
  arbiter: required
  foobar: magic
Bricks:
9.3. Creating Persistent Volumes
Important
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: gluster-container
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-storage-project.cloudapps.mystorage.com"
  restuser: "admin"
  volumetype: "replicate:3"
  clusterid: "630372ccdc720a92c681fb928f27b53f,796e6db1981f369ea0340913eeea4c9a"
  secretNamespace: "default"
  secretName: "heketi-secret"
  volumeoptions: "user.heketi.arbiter true"
  volumenameprefix: "test-vol"
allowVolumeExpansion: true
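For illustration only, a minimal PersistentVolumeClaim that would consume the StorageClass above could look like the following (the claim name and requested size are hypothetical):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: arbiter-pvc
  annotations:
    volume.beta.kubernetes.io/storage-class: gluster-container
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi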
Part VI. Appendix
Appendix A. Optional Deployment Method (with cns-deploy)
A.1. Setting up Converged mode
A.1.1. Configuring Port Access
- On each of the OpenShift nodes that will host the Red Hat Gluster Storage container, add the following rules to /etc/sysconfig/iptables in order to open the required ports:
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 24007 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 24008 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 2222 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m multiport --dports 49152:49664 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 24010 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 3260 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
Note
- Ports 24010 and 3260 are for gluster-blockd and iSCSI targets respectively.
- The port range starting at 49152 defines the range of ports that can be used by GlusterFS for communication to its volume bricks. In the above example the total number of bricks allowed is 512. Configure the port range based on the maximum number of bricks that could be hosted on each node.
For more information about Red Hat Gluster Storage Server ports, see https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html/administration_guide/chap-getting_started.
- Execute the following command to reload the iptables:
# systemctl reload iptables
- Execute the following command on each node to verify if the iptables are updated:
# iptables -L
A.1.2. Enabling Kernel Modules
Before running the cns-deploy tool, you must ensure that the dm_thin_pool, dm_multipath, and target_core_user modules are loaded in the OpenShift Container Platform node. Execute the following commands only on Gluster nodes to verify if the modules are loaded:
# lsmod | grep dm_thin_pool
# lsmod | grep dm_multipath
# lsmod | grep target_core_user
If any of the modules are not loaded, execute the following commands to load them:
# modprobe dm_thin_pool
# modprobe dm_multipath
# modprobe target_core_user
Note
To ensure these operations are persisted across reboots, create the following files and update each file with the content as mentioned:
# cat /etc/modules-load.d/dm_thin_pool.conf
dm_thin_pool
# cat /etc/modules-load.d/dm_multipath.conf
dm_multipath
# cat /etc/modules-load.d/target_core_user.conf
target_core_user
A.1.3. Starting and Enabling Services
# systemctl add-wants multi-user rpcbind.service
# systemctl enable rpcbind.service
# systemctl start rpcbind.service
# systemctl status rpcbind
rpcbind.service - RPC bind service
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2017-08-30 21:24:21 IST; 1 day 13h ago
 Main PID: 9945 (rpcbind)
   CGroup: /system.slice/rpcbind.service
           └─9945 /sbin/rpcbind -w
Note
If the deployment fails or needs to be removed, use the cns-deploy --abort command. Use the -g option if Gluster is containerized. Then run the rm -rf /var/lib/heketi /etc/glusterfs /var/lib/glusterd /var/log/glusterfs command on every node that was running a Gluster pod and also run wipefs -a <device> for every storage device that was consumed by Heketi. This erases all the remaining Gluster states from each node. You must be an administrator to run the device wiping command.
A.2. Setting up Independent Mode
A.2.1. Installing Red Hat Gluster Storage Server on Red Hat Enterprise Linux (Layered Install)
Important
It is recommended to create a /var partition that is large enough (50GB - 100GB) for log files, geo-replication related miscellaneous files, and other files.
Perform a base install of Red Hat Enterprise Linux 7 Server
Independent mode is supported only on Red Hat Enterprise Linux 7.
Register the System with Subscription Manager
Run the following command and enter your Red Hat Network username and password to register the system with the Red Hat Network:
# subscription-manager register
Identify Available Entitlement Pools
Run the following command to find entitlement pools containing the repositories required to install Red Hat Gluster Storage:
# subscription-manager list --available
Attach Entitlement Pools to the System
Use the pool identifiers located in the previous step to attach the Red Hat Enterprise Linux Server and Red Hat Gluster Storage entitlements to the system. Run the following command to attach the entitlements:
# subscription-manager attach --pool=[POOLID]
For example:
# subscription-manager attach --pool=8a85f9814999f69101499c05aa706e47
Enable the Required Channels
For Red Hat Gluster Storage 3.3 on Red Hat Enterprise Linux 7.x:
- Run the following commands to enable the repositories required to install Red Hat Gluster Storage:
# subscription-manager repos --enable=rhel-7-server-rpms
# subscription-manager repos --enable=rh-gluster-3-for-rhel-7-server-rpms
Verify if the Channels are Enabled
Run the following command to verify if the channels are enabled:
# yum repolist
Update all packages
Ensure that all packages are up to date by running the following command:
# yum update
Important
If any kernel packages are updated, reboot the system with the following command:
# shutdown -r now
Kernel Version Requirement
Independent mode requires the kernel-3.10.0-690.el7 version or higher to be used on the system. Verify the installed and running kernel versions by running the following commands:
# rpm -q kernel
kernel-3.10.0-862.11.6.el7.x86_64
# uname -r
3.10.0-862.11.6.el7.x86_64
Install Red Hat Gluster Storage
Run the following command to install Red Hat Gluster Storage:
# yum install redhat-storage-server
- To enable gluster-block execute the following command:
# yum install gluster-block
Reboot
Reboot the system.
A.2.2. Configuring Port Access
# firewall-cmd --zone=zone_name --add-port=24010/tcp --add-port=3260/tcp --add-port=111/tcp --add-port=22/tcp --add-port=24007/tcp --add-port=24008/tcp --add-port=49152-49664/tcp
# firewall-cmd --zone=zone_name --add-port=24010/tcp --add-port=3260/tcp --add-port=111/tcp --add-port=22/tcp --add-port=24007/tcp --add-port=24008/tcp --add-port=49152-49664/tcp --permanent
Note
- Ports 24010 and 3260 are for gluster-blockd and iSCSI targets respectively.
- The port range starting at 49152 defines the range of ports that can be used by GlusterFS for communication to its volume bricks. In the above example, the total number of bricks allowed is 512. Configure the port range based on the maximum number of bricks that could be hosted on each node.
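Optionally, confirm that the ports are now open in the zone (an extra check, not part of the original procedure; zone_name is the same placeholder as above):
# firewall-cmd --zone=zone_name --list-ports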
A.2.3. Enabling Kernel Modules
- You must ensure that the dm_thin_pool and target_core_user modules are loaded in the Red Hat Gluster Storage nodes.
# modprobe target_core_user
# modprobe dm_thin_pool
Execute the following commands to verify if the modules are loaded:
# lsmod | grep dm_thin_pool
# lsmod | grep target_core_user
Note
To ensure these operations are persisted across reboots, create the following files and update each file with the content as mentioned:
# cat /etc/modules-load.d/dm_thin_pool.conf
dm_thin_pool
# cat /etc/modules-load.d/target_core_user.conf
target_core_user
- You must ensure that the dm_multipath module is loaded on all OpenShift Container Platform nodes.
# modprobe dm_multipath
Execute the following command to verify if the module is loaded:
# lsmod | grep dm_multipath
Note
To ensure these operations are persisted across reboots, create the following file and update it with the content as mentioned:
# cat /etc/modules-load.d/dm_multipath.conf
dm_multipath
A.2.4. Starting and Enabling Services
# systemctl start sshd
# systemctl enable sshd
# systemctl start glusterd
# systemctl enable glusterd
# systemctl start gluster-blockd
# systemctl enable gluster-blockd
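An optional combined check that all three services started above are active (not part of the original procedure):
# systemctl is-active sshd glusterd gluster-blockd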
A.3. Setting up the Environment
A.3.1. Preparing the Red Hat OpenShift Container Platform Cluster
- On the master or client, execute the following command to login as the cluster admin user:
# oc login
For example:
# oc login
Authentication required for https://dhcp46-24.lab.eng.blr.redhat.com:8443 (openshift)
Username: test
Password:
Login successful.

You have access to the following projects and can switch between them with 'oc project <project_name>':
  * default
    kube-system
    logging
    management-infra
    openshift
    openshift-infra

Using project "default".
- On the master or client, execute the following command to create a project, which will contain all the containerized Red Hat Gluster Storage services:
# oc new-project <project_name>
For example:
# oc new-project storage-project
Now using project "storage-project" on server "https://master.example.com:8443"
- After the project is created, execute the following command on the master node to enable the deployment of privileged containers, as the Red Hat Gluster Storage container can run only in privileged mode:
# oc adm policy add-scc-to-user privileged -z default
- Execute the following steps on the master to set up the router:
Note
If a router already exists, proceed to Step 5. To verify if the router is already deployed, execute the following command:
# oc get dc --all-namespaces
To list all routers in all namespaces execute the following command:
# oc get dc --all-namespaces --selector=router=router
NAMESPACE   NAME     REVISION   DESIRED   CURRENT   TRIGGERED BY
default     router   31         5         5         config
- Execute the following command to enable the deployment of the router:
# oc adm policy add-scc-to-user privileged -z router
- Execute the following command to deploy the router:
# oc adm router storage-project-router --replicas=1
- Edit the subdomain name in the master-config.yaml file located at /etc/origin/master/master-config.yaml. For example:
subdomain: "cloudapps.mystorage.com"
- For OpenShift Container Platform 3.7 and 3.9 execute the following command to restart the services :
# systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers
Note
If the router setup fails, use the port forward method as described in https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.10/html-single/operations_guide/#chap-Documentation-Red_Hat_Gluster_Storage_Container_Native_with_OpenShift_Platform-Port_Fwding .
For more information regarding router setup, see https://access.redhat.com/documentation/en-us/openshift_container_platform/3.10/html-single/configuring_clusters/#setting-up-a-router
- Execute the following command to verify if the router is running:
# oc get dc <router_name>
For example:
# oc get dc storage-project-router
NAME                     REVISION   DESIRED   CURRENT   TRIGGERED BY
storage-project-router   1          1         1         config
Note
Ensure you do not edit the /etc/dnsmasq.conf file until the router has started.
- After the router is running, the client has to be set up to access the services in the OpenShift cluster. Execute the following steps on the client to set up the DNS.
- Execute the following command to find the IP address of the router:
# oc get pods -o wide --all-namespaces | grep router
storage-project   storage-project-router-1-cm874   1/1   Running   119d   10.70.43.132   dhcp43-132.lab.eng.blr.redhat.com
- Edit the /etc/dnsmasq.conf file and add the following line to the file:
address=/.cloudapps.mystorage.com/<Router_IP_Address>
where Router_IP_Address is the IP address of the node where the router is running.
- Restart the dnsmasq service by executing the following command:
# systemctl restart dnsmasq
- Edit /etc/resolv.conf and add the following line:
nameserver 127.0.0.1
For more information regarding setting up the DNS, see https://access.redhat.com/documentation/en-us/openshift_container_platform/3.10/html/installing_clusters/install-config-install-prerequisites#envirornment-requirements.
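As a quick, hypothetical check that wildcard resolution now points at the router (the hostname foo is illustrative, and the IP should match your router node):
# dig foo.cloudapps.mystorage.com +short
10.70.43.132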
A.3.2. Deploying Containerized Red Hat Gluster Storage Solutions
The following sections describe how to deploy the containerized Red Hat Gluster Storage solutions using the cns-deploy tool.
Note
- It is recommended to use a separate cluster for OpenShift Container Platform infrastructure workloads (registry, logging, and metrics) and for application pod storage. Hence, if you have more than 6 nodes, ensure that you create multiple clusters with a minimum of 3 nodes each. The infrastructure cluster should belong to the default project namespace.
- If you want to enable encryption on the Red Hat Openshift Container Storage setup, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.10/html-single/operations_guide/#chap-Documentation-Red_Hat_Gluster_Storage_Container_Native_with_OpenShift_Platform-Enabling_Encryption before proceeding with the following steps.
- You must first provide a topology file for heketi which describes the topology of the Red Hat Gluster Storage nodes and their attached storage devices. A sample, formatted topology file (topology-sample.json) is installed with the ‘heketi-client’ package in the /usr/share/heketi/ directory.
{ "clusters": [ { "nodes": [ { "node": { "hostnames": { "manage": [ "node1.example.com" ], "storage": [ "192.168.68.3" ] }, "zone": 1 }, "devices": [ "/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde", "/dev/sdf", "/dev/sdg", "/dev/sdh", "/dev/sdi" ] }, { "node": { "hostnames": { "manage": [ "node2.example.com" ], "storage": [ "192.168.68.2" ] }, "zone": 2 }, "devices": [ "/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde", "/dev/sdf", "/dev/sdg", "/dev/sdh", "/dev/sdi" ] }, ....... .......
where,
- clusters: Array of clusters. Each element on the array is a map which describes the cluster as follows.
- nodes: Array of OpenShift nodes that will host the Red Hat Gluster Storage container. Each element on the array is a map which describes the node as follows.
- node: It is a map of the following elements:
- zone: The value represents the zone number that the node belongs to; the zone number is used by heketi for choosing optimum position of bricks by having replicas of bricks in different zones. Hence zone number is similar to a failure domain.
- hostnames: It is a map which lists the manage and storage addresses.
- manage: It is the hostname/IP address that is used by Heketi to communicate with the node.
- storage: It is the IP address that is used by other OpenShift nodes to communicate with the node. Storage data traffic will use the interface attached to this IP. This must be the IP address and not the hostname because, in an OpenShift environment, Heketi considers this to be the endpoint too.
- devices: Name of each disk to be added.
Note
Copy the topology file from the default location to your location and then edit it:
# cp /usr/share/heketi/topology-sample.json /<Path>/topology.json
Edit the topology file based on the Red Hat Gluster Storage pod hostname under the node.hostnames.manage section and node.hostnames.storage section with the IP address. For simplicity, the /usr/share/heketi/topology-sample.json file only sets up 4 nodes with 8 drives each.
Important
Heketi stores its database on a Red Hat Gluster Storage volume. If this volume is down, the Heketi service does not respond because the trusted storage pool serving the volume is unavailable. To resolve this issue, restart the trusted storage pool which contains the Heketi volume.
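After editing the topology file, it can be useful to confirm that it is still syntactically valid JSON before loading it. This is an optional check, assuming a python interpreter is available (cns-deploy itself requires one); replace /<Path>/topology.json with your file location:
# python -m json.tool /<Path>/topology.json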
A.3.2.1. Deploying Converged Mode
- Execute the following command on the client to deploy the heketi and Red Hat Gluster Storage pods:
# cns-deploy -n <namespace> -g --admin-key <Key> topology.json
Note
- From Container-Native Storage 3.6, support for S3 compatible Object Store in Red Hat Openshift Container Storage is under technology preview. To deploy S3 compatible object store in Red Hat Openshift Container Storage, see Step 1a below.
- In the above command, the value for admin-key is the secret string for the heketi admin user. The heketi administrator has access to all APIs and commands. The default is to use no secret.
- The BLOCK_HOST_SIZE parameter in cns-deploy controls the size (in GB) of the automatically created Red Hat Gluster Storage volumes that host the gluster-block volumes. By default, block-hosting volumes of 500 GB are created dynamically when more space is required. To change this value, use the --block-host option in cns-deploy. For example:
# cns-deploy -n storage-project -g --admin-key secret --block-host 1000 topology.json
For example:
# cns-deploy -n storage-project -g --admin-key secret topology.json
Welcome to the deployment tool for GlusterFS on Kubernetes and OpenShift.

Before getting started, this script has some requirements of the execution environment and of the container platform that you should verify.

The client machine that will run this script must have:
 * Administrative access to an existing Kubernetes or OpenShift cluster
 * Access to a python interpreter 'python'

Each of the nodes that will host GlusterFS must also have appropriate firewall rules for the required GlusterFS ports:
 * 111 - rpcbind (for glusterblock)
 * 2222 - sshd (if running GlusterFS in a pod)
 * 3260 - iSCSI targets (for glusterblock)
 * 24006 - glusterblockd
 * 24007 - GlusterFS Management
 * 24008 - GlusterFS RDMA
 * 49152 to 49251 - Each brick for every volume on the host requires its own port. For every new brick, one new port will be used starting at 49152. We recommend a default range of 49152-49251 on each host, though you can adjust this to fit your needs.

The following kernel modules must be loaded:
 * dm_snapshot
 * dm_mirror
 * dm_thin_pool
 * dm_multipath
 * target_core_user

For systems with SELinux, the following settings need to be considered:
 * virt_sandbox_use_fusefs should be enabled on each node to allow writing to remote GlusterFS volumes

In addition, for an OpenShift deployment you must:
 * Have 'cluster_admin' role on the administrative account doing the deployment
 * Add the 'default' and 'router' Service Accounts to the 'privileged' SCC
 * Have a router deployed that is configured to allow apps to access services running in the cluster

Do you wish to proceed with deployment? [Y]es, [N]o? [Default: Y]: Y
Using OpenShift CLI.
Using namespace "storage-project".
Checking for pre-existing resources...
  GlusterFS pods ... not found.
  deploy-heketi pod ... not found.
  heketi pod ... not found.
  glusterblock-provisioner pod ... not found.
  gluster-s3 pod ... not found.
Creating initial resources ... template "deploy-heketi" created
serviceaccount "heketi-service-account" created
template "heketi" created
template "glusterfs" created
role "edit" added: "system:serviceaccount:storage-project:heketi-service-account"
OK
node "ip-172-18-5-29.ec2.internal" labeled
node "ip-172-18-8-205.ec2.internal" labeled
node "ip-172-18-6-100.ec2.internal" labeled
daemonset "glusterfs" created
Waiting for GlusterFS pods to start ... OK
secret "heketi-config-secret" created
secret "heketi-config-secret" labeled
service "deploy-heketi" created
route "deploy-heketi" created
deploymentconfig "deploy-heketi" created
Waiting for deploy-heketi pod to start ... OK
Creating cluster ... ID: 30cd12e60f860fce21e7e7457d07db36
Allowing file volumes on cluster.
Allowing block volumes on cluster.
Creating node ip-172-18-5-29.ec2.internal ... ID: 4077242c76e5f477a27c5c47247cb348
Adding device /dev/xvdc ... OK
Creating node ip-172-18-8-205.ec2.internal ... ID: dda0e7d568d7b2f76a7e7491cfc26dd3
Adding device /dev/xvdc ... OK
Creating node ip-172-18-6-100.ec2.internal ... ID: 30a1795ca515c85dca32b09be7a68733
Adding device /dev/xvdc ... OK
heketi topology loaded.
Saving /tmp/heketi-storage.json
secret "heketi-storage-secret" created
endpoints "heketi-storage-endpoints" created
service "heketi-storage-endpoints" created
job "heketi-storage-copy-job" created
service "heketi-storage-endpoints" labeled
deploymentconfig "deploy-heketi" deleted
route "deploy-heketi" deleted
service "deploy-heketi" deleted
job "heketi-storage-copy-job" deleted
pod "deploy-heketi-1-frjpt" deleted
secret "heketi-storage-secret" deleted
template "deploy-heketi" deleted
service "heketi" created
route "heketi" created
deploymentconfig "heketi" created
Waiting for heketi pod to start ... OK
heketi is now running and accessible via http://heketi-storage-project.cloudapps.mystorage.com . To run administrative commands you can install 'heketi-cli' and use it as follows:

  # heketi-cli -s http://heketi-storage-project.cloudapps.mystorage.com --user admin --secret '<ADMIN_KEY>' cluster list

You can find it at https://github.com/heketi/heketi/releases . Alternatively, use it from within the heketi pod:

  # /bin/oc -n storage-project exec -it <HEKETI_POD> -- heketi-cli -s http://localhost:8080 --user admin --secret '<ADMIN_KEY>' cluster list

For dynamic provisioning, create a StorageClass similar to this:

---
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs-storage
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-storage-project.cloudapps.mystorage.com"

Ready to create and provide GlusterFS volumes.
clusterrole "glusterblock-provisioner-runner" created
serviceaccount "glusterblock-provisioner" created
clusterrolebinding "glusterblock-provisioner" created
deploymentconfig "glusterblock-provisioner-dc" created
Waiting for glusterblock-provisioner pod to start ... OK
Ready to create and provide Gluster block volumes.
Deployment complete!
Note
For more information on the cns-deploy commands, refer to the man page of cns-deploy:
# cns-deploy --help
- To deploy S3 compatible object store along with Heketi and Red Hat Gluster Storage pods, execute the following command:
# cns-deploy /opt/topology.json --deploy-gluster --namespace <namespace> --yes --admin-key <key> --log-file=<path/to/logfile> --object-account <object account name> --object-user <object user name> --object-password <object user password> --verbose
object-account, object-user, and object-password are required credentials for deploying the gluster-s3 container. If any of these are missing, gluster-s3 container deployment will be skipped.
object-sc and object-capacity are optional parameters. Where, object-sc is used to specify a pre-existing StorageClass to use to create Red Hat Gluster Storage volumes to back the object store, and object-capacity
is the total capacity of the Red Hat Gluster Storage volume which will store the object data.For example:# cns-deploy /opt/topology.json --deploy-gluster --namespace storage-project --yes --admin-key secret --log-file=/var/log/cns-deploy/444-cns-deploy.log --object-account testvolume --object-user adminuser --object-password itsmine --verbose Using OpenShift CLI. Checking status of namespace matching 'storage-project': storage-project Active 56m Using namespace "storage-project". Checking for pre-existing resources... GlusterFS pods ... Checking status of pods matching '--selector=glusterfs=pod': No resources found. Timed out waiting for pods matching '--selector=glusterfs=pod'. not found. deploy-heketi pod ... Checking status of pods matching '--selector=deploy-heketi=pod': No resources found. Timed out waiting for pods matching '--selector=deploy-heketi=pod'. not found. heketi pod ... Checking status of pods matching '--selector=heketi=pod': No resources found. Timed out waiting for pods matching '--selector=heketi=pod'. not found. glusterblock-provisioner pod ... Checking status of pods matching '--selector=glusterfs=block-provisioner-pod': No resources found. Timed out waiting for pods matching '--selector=glusterfs=block-provisioner-pod'. not found. gluster-s3 pod ... Checking status of pods matching '--selector=glusterfs=s3-pod': No resources found. Timed out waiting for pods matching '--selector=glusterfs=s3-pod'. not found. Creating initial resources ... /usr/bin/oc -n storage-project create -f /usr/share/heketi/templates/deploy-heketi-template.yaml 2>&1 template "deploy-heketi" created /usr/bin/oc -n storage-project create -f /usr/share/heketi/templates/heketi-service-account.yaml 2>&1 serviceaccount "heketi-service-account" created /usr/bin/oc -n storage-project create -f /usr/share/heketi/templates/heketi-template.yaml 2>&1 template "heketi" created /usr/bin/oc -n storage-project create -f /usr/share/heketi/templates/glusterfs-template.yaml 2>&1 template "glusterfs" created /usr/bin/oc -n storage-project policy add-role-to-user edit system:serviceaccount:storage-project:heketi-service-account 2>&1 role "edit" added: "system:serviceaccount:storage-project:heketi-service-account" /usr/bin/oc -n storage-project adm policy add-scc-to-user privileged -z heketi-service-account OK Marking 'dhcp46-122.lab.eng.blr.redhat.com' as a GlusterFS node. /usr/bin/oc -n storage-project label nodes dhcp46-122.lab.eng.blr.redhat.com storagenode=glusterfs 2>&1 node "dhcp46-122.lab.eng.blr.redhat.com" labeled Marking 'dhcp46-9.lab.eng.blr.redhat.com' as a GlusterFS node. /usr/bin/oc -n storage-project label nodes dhcp46-9.lab.eng.blr.redhat.com storagenode=glusterfs 2>&1 node "dhcp46-9.lab.eng.blr.redhat.com" labeled Marking 'dhcp46-134.lab.eng.blr.redhat.com' as a GlusterFS node. /usr/bin/oc -n storage-project label nodes dhcp46-134.lab.eng.blr.redhat.com storagenode=glusterfs 2>&1 node "dhcp46-134.lab.eng.blr.redhat.com" labeled Deploying GlusterFS pods. /usr/bin/oc -n storage-project process -p NODE_LABEL=glusterfs glusterfs | /usr/bin/oc -n storage-project create -f - 2>&1 daemonset "glusterfs" created Waiting for GlusterFS pods to start ... 
Checking status of pods matching '--selector=glusterfs=pod': glusterfs-6fj2v 1/1 Running 0 52s glusterfs-ck40f 1/1 Running 0 52s glusterfs-kbtz4 1/1 Running 0 52s OK /usr/bin/oc -n storage-project create secret generic heketi-config-secret --from-file=private_key=/dev/null --from-file=./heketi.json --from-file=topology.json=/opt/topology.json secret "heketi-config-secret" created /usr/bin/oc -n storage-project label --overwrite secret heketi-config-secret glusterfs=heketi-config-secret heketi=config-secret secret "heketi-config-secret" labeled /usr/bin/oc -n storage-project process -p HEKETI_EXECUTOR=kubernetes -p HEKETI_FSTAB=/var/lib/heketi/fstab -p HEKETI_ADMIN_KEY= -p HEKETI_USER_KEY= deploy-heketi | /usr/bin/oc -n storage-project create -f - 2>&1 service "deploy-heketi" created route "deploy-heketi" created deploymentconfig "deploy-heketi" created Waiting for deploy-heketi pod to start ... Checking status of pods matching '--selector=deploy-heketi=pod': deploy-heketi-1-hf9rn 1/1 Running 0 2m OK Determining heketi service URL ... OK /usr/bin/oc -n storage-project exec -it deploy-heketi-1-hf9rn -- heketi-cli -s http://localhost:8080 --user admin --secret '' topology load --json=/etc/heketi/topology.json 2>&1 Creating cluster ... ID: 252509038eb8568162ec5920c12bc243 Allowing file volumes on cluster. Allowing block volumes on cluster. Creating node dhcp46-122.lab.eng.blr.redhat.com ... ID: 73ad287ae1ef231f8a0db46422367c9a Adding device /dev/sdd ... OK Adding device /dev/sde ... OK Adding device /dev/sdf ... OK Creating node dhcp46-9.lab.eng.blr.redhat.com ... ID: 0da1b20daaad2d5c57dbfc4f6ab78001 Adding device /dev/sdd ... OK Adding device /dev/sde ... OK Adding device /dev/sdf ... OK Creating node dhcp46-134.lab.eng.blr.redhat.com ... ID: 4b3b62fc0efd298dedbcdacf0b498e65 Adding device /dev/sdd ... OK Adding device /dev/sde ... OK Adding device /dev/sdf ... OK heketi topology loaded. 
/usr/bin/oc -n storage-project exec -it deploy-heketi-1-hf9rn -- heketi-cli -s http://localhost:8080 --user admin --secret '' setup-openshift-heketi-storage --listfile=/tmp/heketi-storage.json --image rhgs3/rhgs-volmanager-rhel7:3.3.0-17 2>&1 Saving /tmp/heketi-storage.json /usr/bin/oc -n storage-project exec -it deploy-heketi-1-hf9rn -- cat /tmp/heketi-storage.json | /usr/bin/oc -n storage-project create -f - 2>&1 secret "heketi-storage-secret" created endpoints "heketi-storage-endpoints" created service "heketi-storage-endpoints" created job "heketi-storage-copy-job" created Checking status of pods matching '--selector=job-name=heketi-storage-copy-job': heketi-storage-copy-job-87v6n 0/1 Completed 0 7s /usr/bin/oc -n storage-project label --overwrite svc heketi-storage-endpoints glusterfs=heketi-storage-endpoints heketi=storage-endpoints service "heketi-storage-endpoints" labeled /usr/bin/oc -n storage-project delete all,service,jobs,deployment,secret --selector="deploy-heketi" 2>&1 deploymentconfig "deploy-heketi" deleted route "deploy-heketi" deleted service "deploy-heketi" deleted job "heketi-storage-copy-job" deleted pod "deploy-heketi-1-hf9rn" deleted secret "heketi-storage-secret" deleted /usr/bin/oc -n storage-project delete dc,route,template --selector="deploy-heketi" 2>&1 template "deploy-heketi" deleted /usr/bin/oc -n storage-project process -p HEKETI_EXECUTOR=kubernetes -p HEKETI_FSTAB=/var/lib/heketi/fstab -p HEKETI_ADMIN_KEY= -p HEKETI_USER_KEY= heketi | /usr/bin/oc -n storage-project create -f - 2>&1 service "heketi" created route "heketi" created deploymentconfig "heketi" created Waiting for heketi pod to start ... Checking status of pods matching '--selector=heketi=pod': heketi-1-zzblp 1/1 Running 0 31s OK Determining heketi service URL ... OK heketi is now running and accessible via http://heketi-storage-project.cloudapps.mystorage.com . To run administrative commands you can install 'heketi-cli' and use it as follows: # heketi-cli -s http://heketi-storage-project.cloudapps.mystorage.com --user admin --secret '<ADMIN_KEY>' cluster list You can find it at https://github.com/heketi/heketi/releases . Alternatively, use it from within the heketi pod: # /usr/bin/oc -n storage-project exec -it <HEKETI_POD> -- heketi-cli -s http://localhost:8080 --user admin --secret '<ADMIN_KEY>' cluster list For dynamic provisioning, create a StorageClass similar to this: --- apiVersion: storage.k8s.io/v1beta1 kind: StorageClass metadata: name: glusterfs-storage provisioner: kubernetes.io/glusterfs parameters: resturl: "http://heketi-storage-project.cloudapps.mystorage.com" Ready to create and provide GlusterFS volumes. sed -e 's/\${NAMESPACE}/storage-project/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | /usr/bin/oc -n storage-project create -f - 2>&1 clusterrole "glusterblock-provisioner-runner" created serviceaccount "glusterblock-provisioner" created clusterrolebinding "glusterblock-provisioner" created deploymentconfig "glusterblock-provisioner-dc" created Waiting for glusterblock-provisioner pod to start ... Checking status of pods matching '--selector=glusterfs=block-provisioner-pod': glusterblock-provisioner-dc-1-xm6bv 1/1 Running 0 6s OK Ready to create and provide Gluster block volumes. 
/usr/bin/oc -n storage-project create secret generic heketi-storage-project-admin-secret --from-literal=key= --type=kubernetes.io/glusterfs secret "heketi-storage-project-admin-secret" created /usr/bin/oc -n storage-project label --overwrite secret heketi-storage-project-admin-secret glusterfs=s3-heketi-storage-project-admin-secret gluster-s3=heketi-storage-project-admin-secret secret "heketi-storage-project-admin-secret" labeled sed -e 's/\${STORAGE_CLASS}/glusterfs-for-s3/' -e 's/\${HEKETI_URL}/heketi-storage-project.cloudapps.mystorage.com/' -e 's/\${NAMESPACE}/storage-project/' /usr/share/heketi/templates/gluster-s3-storageclass.yaml | /usr/bin/oc -n storage-project create -f - 2>&1 storageclass "glusterfs-for-s3" created sed -e 's/\${STORAGE_CLASS}/glusterfs-for-s3/' -e 's/\${VOLUME_CAPACITY}/2Gi/' /usr/share/heketi/templates/gluster-s3-pvcs.yaml | /usr/bin/oc -n storage-project create -f - 2>&1 persistentvolumeclaim "gluster-s3-claim" created persistentvolumeclaim "gluster-s3-meta-claim" created Checking status of persistentvolumeclaims matching '--selector=glusterfs in (s3-pvc, s3-meta-pvc)': gluster-s3-claim Bound pvc-35b6c1f0-9c65-11e7-9c8c-005056b3ded1 2Gi RWX glusterfs-for-s3 18s gluster-s3-meta-claim Bound pvc-35b86e7a-9c65-11e7-9c8c-005056b3ded1 1Gi RWX glusterfs-for-s3 18s /usr/bin/oc -n storage-project create -f /usr/share/heketi/templates/gluster-s3-template.yaml 2>&1 template "gluster-s3" created /usr/bin/oc -n storage-project process -p S3_ACCOUNT=testvolume -p S3_USER=adminuser -p S3_PASSWORD=itsmine gluster-s3 | /usr/bin/oc -n storage-project create -f - 2>&1 service "gluster-s3-service" created route "gluster-s3-route" created deploymentconfig "gluster-s3-dc" created Waiting for gluster-s3 pod to start ... Checking status of pods matching '--selector=glusterfs=s3-pod': gluster-s3-dc-1-x3x4q 1/1 Running 0 6s OK Ready to create and provide Gluster object volumes. Deployment complete!
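Once the deployment reports completion, you can re-check that the object store pods are running by using the same selector that cns-deploy uses in the output above. This is an optional verification step:
# oc -n storage-project get pods --selector=glusterfs=s3-pod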
- Execute the following command to let the client communicate with the container:
# export HEKETI_CLI_SERVER=http://heketi-<project_name>.<sub_domain_name>
For example:
# export HEKETI_CLI_SERVER=http://heketi-storage-project.cloudapps.mystorage.com
To verify that Heketi is loaded with the topology, execute the following command:
# heketi-cli topology info
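If the deployment was run with --admin-key, administrative heketi-cli commands must supply matching credentials, as shown in the deployment output above. For example, with <Key> standing in for your admin key (heketi-cli picks up the server URL from the HEKETI_CLI_SERVER environment variable exported in the previous step):
# heketi-cli --user admin --secret '<Key>' topology info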
A.3.2.2. Deploying Independent Mode
- To set up passwordless SSH to all Red Hat Gluster Storage nodes, execute the following command on the client for each of the Red Hat Gluster Storage nodes:
# ssh-copy-id -i /root/.ssh/id_rsa root@<ip/hostname_rhgs node>
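If the client does not yet have an SSH key pair at /root/.ssh/id_rsa, generate one first. This is a minimal sketch; the key type and the empty passphrase are only for illustration, and should follow your own security standards:
# ssh-keygen -t rsa -f /root/.ssh/id_rsa -N ''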
- Execute the following command on the client to deploy the heketi pod and to create a cluster of Red Hat Gluster Storage nodes:
# cns-deploy -n <namespace> --admin-key <Key> -s /root/.ssh/id_rsa topology.json
Note
- Support for S3 compatible Object Store is under technology preview. To deploy S3 compatible object store, see Step 2a below.
- In the above command, the value for admin-key is the secret string for the heketi admin user. The heketi administrator has access to all APIs and commands. The default is to use no secret.
- The BLOCK_HOST_SIZE parameter in cns-deploy controls the size (in GB) of the automatically created Red Hat Gluster Storage volumes that host the gluster-block volumes. By default, block-hosting volumes of 500 GB are created dynamically when more space is required. To change this value, use the --block-host option in cns-deploy. For example:
# cns-deploy -n storage-project -g --admin-key secret --block-host 1000 topology.json
For example:
# cns-deploy -n storage-project --admin-key secret -s /root/.ssh/id_rsa topology.json
Welcome to the deployment tool for GlusterFS on Kubernetes and OpenShift.

Before getting started, this script has some requirements of the execution environment and of the container platform that you should verify.

The client machine that will run this script must have:
 * Administrative access to an existing Kubernetes or OpenShift cluster
 * Access to a python interpreter 'python'

Each of the nodes that will host GlusterFS must also have appropriate firewall rules for the required GlusterFS ports:
 * 2222 - sshd (if running GlusterFS in a pod)
 * 24007 - GlusterFS Management
 * 24008 - GlusterFS RDMA
 * 49152 to 49251 - Each brick for every volume on the host requires its own port. For every new brick, one new port will be used starting at 49152. We recommend a default range of 49152-49251 on each host, though you can adjust this to fit your needs.

The following kernel modules must be loaded:
 * dm_snapshot
 * dm_mirror
 * dm_thin_pool

For systems with SELinux, the following settings need to be considered:
 * virt_sandbox_use_fusefs should be enabled on each node to allow writing to remote GlusterFS volumes

In addition, for an OpenShift deployment you must:
 * Have 'cluster_admin' role on the administrative account doing the deployment
 * Add the 'default' and 'router' Service Accounts to the 'privileged' SCC
 * Have a router deployed that is configured to allow apps to access services running in the cluster

Do you wish to proceed with deployment? [Y]es, [N]o? [Default: Y]: y
Using OpenShift CLI.
Using namespace "storage-project".
Checking for pre-existing resources...
  GlusterFS pods ... not found.
  deploy-heketi pod ... not found.
  heketi pod ... not found.
Creating initial resources ... template "deploy-heketi" created
serviceaccount "heketi-service-account" created
template "heketi" created
role "edit" added: "system:serviceaccount:storage-project:heketi-service-account"
OK
secret "heketi-config-secret" created
secret "heketi-config-secret" labeled
service "deploy-heketi" created
route "deploy-heketi" created
deploymentconfig "deploy-heketi" created
Waiting for deploy-heketi pod to start ... OK
Creating cluster ... ID: 60bf06636eb4eb81d4e9be4b04cfce92
Allowing file volumes on cluster.
Allowing block volumes on cluster.
Creating node dhcp47-104.lab.eng.blr.redhat.com ... ID: eadc66f9d03563bcfc3db3fe636c34be
Adding device /dev/sdd ... OK
Adding device /dev/sde ... OK
Adding device /dev/sdf ... OK
Creating node dhcp47-83.lab.eng.blr.redhat.com ... ID: 178684b0a0425f51b8f1a032982ffe4d
Adding device /dev/sdd ... OK
Adding device /dev/sde ... OK
Adding device /dev/sdf ... OK
Creating node dhcp46-152.lab.eng.blr.redhat.com ... ID: 08cd7034ef7ac66499dc040d93cf4a93
Adding device /dev/sdd ... OK
Adding device /dev/sde ... OK
Adding device /dev/sdf ... OK
heketi topology loaded.
Saving /tmp/heketi-storage.json
secret "heketi-storage-secret" created
endpoints "heketi-storage-endpoints" created
service "heketi-storage-endpoints" created
job "heketi-storage-copy-job" created
service "heketi-storage-endpoints" labeled
deploymentconfig "deploy-heketi" deleted
route "deploy-heketi" deleted
service "deploy-heketi" deleted
job "heketi-storage-copy-job" deleted
pod "deploy-heketi-1-30c06" deleted
secret "heketi-storage-secret" deleted
template "deploy-heketi" deleted
service "heketi" created
route "heketi" created
deploymentconfig "heketi" created
Waiting for heketi pod to start ...
OK
heketi is now running and accessible via http://heketi-storage-project.cloudapps.mystorage.com . To run administrative commands you can install 'heketi-cli' and use it as follows:

  # heketi-cli -s http://heketi-storage-project.cloudapps.mystorage.com --user admin --secret '<ADMIN_KEY>' cluster list

You can find it at https://github.com/heketi/heketi/releases . Alternatively, use it from within the heketi pod:

  # /usr/bin/oc -n storage-project exec -it <HEKETI_POD> -- heketi-cli -s http://localhost:8080 --user admin --secret '<ADMIN_KEY>' cluster list

For dynamic provisioning, create a StorageClass similar to this:

---
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs-storage
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-storage-project.cloudapps.mystorage.com"

Deployment complete!
Note
For more information on the cns-deploy commands, refer to the man page of cns-deploy:
# cns-deploy --help
- To deploy S3 compatible object store along with Heketi and Red Hat Gluster Storage pods, execute the following command:
# cns-deploy /opt/topology.json --deploy-gluster --namespace <namespace> --admin-key <Key> --yes --log-file=<path/to/logfile> --object-account <object account name> --object-user <object user name> --object-password <object user password> --verbose
object-account, object-user, and object-password are required credentials for deploying the gluster-s3 container. If any of these are missing, gluster-s3 container deployment will be skipped.
object-sc and object-capacity are optional parameters. Where, object-sc is used to specify a pre-existing StorageClass to use to create Red Hat Gluster Storage volumes to back the object store, and object-capacity
is the total capacity of the Red Hat Gluster Storage volume which will store the object data.For example:# cns-deploy /opt/topology.json --deploy-gluster --namespace storage-project --admin-key secret --yes --log-file=/var/log/cns-deploy/444-cns-deploy.log --object-account testvolume --object-user adminuser --object-password itsmine --verbose Using OpenShift CLI. Checking status of namespace matching 'storage-project': storage-project Active 56m Using namespace "storage-project". Checking for pre-existing resources... GlusterFS pods ... Checking status of pods matching '--selector=glusterfs=pod': No resources found. Timed out waiting for pods matching '--selector=glusterfs=pod'. not found. deploy-heketi pod ... Checking status of pods matching '--selector=deploy-heketi=pod': No resources found. Timed out waiting for pods matching '--selector=deploy-heketi=pod'. not found. heketi pod ... Checking status of pods matching '--selector=heketi=pod': No resources found. Timed out waiting for pods matching '--selector=heketi=pod'. not found. glusterblock-provisioner pod ... Checking status of pods matching '--selector=glusterfs=block-provisioner-pod': No resources found. Timed out waiting for pods matching '--selector=glusterfs=block-provisioner-pod'. not found. gluster-s3 pod ... Checking status of pods matching '--selector=glusterfs=s3-pod': No resources found. Timed out waiting for pods matching '--selector=glusterfs=s3-pod'. not found. Creating initial resources ... /usr/bin/oc -n storage-project create -f /usr/share/heketi/templates/deploy-heketi-template.yaml 2>&1 template "deploy-heketi" created /usr/bin/oc -n storage-project create -f /usr/share/heketi/templates/heketi-service-account.yaml 2>&1 serviceaccount "heketi-service-account" created /usr/bin/oc -n storage-project create -f /usr/share/heketi/templates/heketi-template.yaml 2>&1 template "heketi" created /usr/bin/oc -n storage-project create -f /usr/share/heketi/templates/glusterfs-template.yaml 2>&1 template "glusterfs" created /usr/bin/oc -n storage-project policy add-role-to-user edit system:serviceaccount:storage-project:heketi-service-account 2>&1 role "edit" added: "system:serviceaccount:storage-project:heketi-service-account" /usr/bin/oc -n storage-project adm policy add-scc-to-user privileged -z heketi-service-account OK Marking 'dhcp46-122.lab.eng.blr.redhat.com' as a GlusterFS node. /usr/bin/oc -n storage-project label nodes dhcp46-122.lab.eng.blr.redhat.com storagenode=glusterfs 2>&1 node "dhcp46-122.lab.eng.blr.redhat.com" labeled Marking 'dhcp46-9.lab.eng.blr.redhat.com' as a GlusterFS node. /usr/bin/oc -n storage-project label nodes dhcp46-9.lab.eng.blr.redhat.com storagenode=glusterfs 2>&1 node "dhcp46-9.lab.eng.blr.redhat.com" labeled Marking 'dhcp46-134.lab.eng.blr.redhat.com' as a GlusterFS node. /usr/bin/oc -n storage-project label nodes dhcp46-134.lab.eng.blr.redhat.com storagenode=glusterfs 2>&1 node "dhcp46-134.lab.eng.blr.redhat.com" labeled Deploying GlusterFS pods. /usr/bin/oc -n storage-project process -p NODE_LABEL=glusterfs glusterfs | /usr/bin/oc -n storage-project create -f - 2>&1 daemonset "glusterfs" created Waiting for GlusterFS pods to start ... 
Checking status of pods matching '--selector=glusterfs=pod': glusterfs-6fj2v 1/1 Running 0 52s glusterfs-ck40f 1/1 Running 0 52s glusterfs-kbtz4 1/1 Running 0 52s OK /usr/bin/oc -n storage-project create secret generic heketi-config-secret --from-file=private_key=/dev/null --from-file=./heketi.json --from-file=topology.json=/opt/topology.json secret "heketi-config-secret" created /usr/bin/oc -n storage-project label --overwrite secret heketi-config-secret glusterfs=heketi-config-secret heketi=config-secret secret "heketi-config-secret" labeled /usr/bin/oc -n storage-project process -p HEKETI_EXECUTOR=kubernetes -p HEKETI_FSTAB=/var/lib/heketi/fstab -p HEKETI_ADMIN_KEY= -p HEKETI_USER_KEY= deploy-heketi | /usr/bin/oc -n storage-project create -f - 2>&1 service "deploy-heketi" created route "deploy-heketi" created deploymentconfig "deploy-heketi" created Waiting for deploy-heketi pod to start ... Checking status of pods matching '--selector=deploy-heketi=pod': deploy-heketi-1-hf9rn 1/1 Running 0 2m OK Determining heketi service URL ... OK /usr/bin/oc -n storage-project exec -it deploy-heketi-1-hf9rn -- heketi-cli -s http://localhost:8080 --user admin --secret '' topology load --json=/etc/heketi/topology.json 2>&1 Creating cluster ... ID: 252509038eb8568162ec5920c12bc243 Allowing file volumes on cluster. Allowing block volumes on cluster. Creating node dhcp46-122.lab.eng.blr.redhat.com ... ID: 73ad287ae1ef231f8a0db46422367c9a Adding device /dev/sdd ... OK Adding device /dev/sde ... OK Adding device /dev/sdf ... OK Creating node dhcp46-9.lab.eng.blr.redhat.com ... ID: 0da1b20daaad2d5c57dbfc4f6ab78001 Adding device /dev/sdd ... OK Adding device /dev/sde ... OK Adding device /dev/sdf ... OK Creating node dhcp46-134.lab.eng.blr.redhat.com ... ID: 4b3b62fc0efd298dedbcdacf0b498e65 Adding device /dev/sdd ... OK Adding device /dev/sde ... OK Adding device /dev/sdf ... OK heketi topology loaded. 
/usr/bin/oc -n storage-project exec -it deploy-heketi-1-hf9rn -- heketi-cli -s http://localhost:8080 --user admin --secret '' setup-openshift-heketi-storage --listfile=/tmp/heketi-storage.json --image rhgs3/rhgs-volmanager-rhel7:3.3.0-17 2>&1 Saving /tmp/heketi-storage.json /usr/bin/oc -n storage-project exec -it deploy-heketi-1-hf9rn -- cat /tmp/heketi-storage.json | /usr/bin/oc -n storage-project create -f - 2>&1 secret "heketi-storage-secret" created endpoints "heketi-storage-endpoints" created service "heketi-storage-endpoints" created job "heketi-storage-copy-job" created Checking status of pods matching '--selector=job-name=heketi-storage-copy-job': heketi-storage-copy-job-87v6n 0/1 Completed 0 7s /usr/bin/oc -n storage-project label --overwrite svc heketi-storage-endpoints glusterfs=heketi-storage-endpoints heketi=storage-endpoints service "heketi-storage-endpoints" labeled /usr/bin/oc -n storage-project delete all,service,jobs,deployment,secret --selector="deploy-heketi" 2>&1 deploymentconfig "deploy-heketi" deleted route "deploy-heketi" deleted service "deploy-heketi" deleted job "heketi-storage-copy-job" deleted pod "deploy-heketi-1-hf9rn" deleted secret "heketi-storage-secret" deleted /usr/bin/oc -n storage-project delete dc,route,template --selector="deploy-heketi" 2>&1 template "deploy-heketi" deleted /usr/bin/oc -n storage-project process -p HEKETI_EXECUTOR=kubernetes -p HEKETI_FSTAB=/var/lib/heketi/fstab -p HEKETI_ADMIN_KEY= -p HEKETI_USER_KEY= heketi | /usr/bin/oc -n storage-project create -f - 2>&1 service "heketi" created route "heketi" created deploymentconfig "heketi" created Waiting for heketi pod to start ... Checking status of pods matching '--selector=heketi=pod': heketi-1-zzblp 1/1 Running 0 31s OK Determining heketi service URL ... OK heketi is now running and accessible via http://heketi-storage-project.cloudapps.mystorage.com . To run administrative commands you can install 'heketi-cli' and use it as follows: # heketi-cli -s http://heketi-storage-project.cloudapps.mystorage.com --user admin --secret '<ADMIN_KEY>' cluster list You can find it at https://github.com/heketi/heketi/releases . Alternatively, use it from within the heketi pod: # /usr/bin/oc -n storage-project exec -it <HEKETI_POD> -- heketi-cli -s http://localhost:8080 --user admin --secret '<ADMIN_KEY>' cluster list For dynamic provisioning, create a StorageClass similar to this: --- apiVersion: storage.k8s.io/v1beta1 kind: StorageClass metadata: name: glusterfs-storage provisioner: kubernetes.io/glusterfs parameters: resturl: "http://heketi-storage-project.cloudapps.mystorage.com" Ready to create and provide GlusterFS volumes. sed -e 's/\${NAMESPACE}/storage-project/' /usr/share/heketi/templates/glusterblock-provisioner.yaml | /usr/bin/oc -n storage-project create -f - 2>&1 clusterrole "glusterblock-provisioner-runner" created serviceaccount "glusterblock-provisioner" created clusterrolebinding "glusterblock-provisioner" created deploymentconfig "glusterblock-provisioner-dc" created Waiting for glusterblock-provisioner pod to start ... Checking status of pods matching '--selector=glusterfs=block-provisioner-pod': glusterblock-provisioner-dc-1-xm6bv 1/1 Running 0 6s OK Ready to create and provide Gluster block volumes. 
/usr/bin/oc -n storage-project create secret generic heketi-storage-project-admin-secret --from-literal=key= --type=kubernetes.io/glusterfs secret "heketi-storage-project-admin-secret" created /usr/bin/oc -n storage-project label --overwrite secret heketi-storage-project-admin-secret glusterfs=s3-heketi-storage-project-admin-secret gluster-s3=heketi-storage-project-admin-secret secret "heketi-storage-project-admin-secret" labeled sed -e 's/\${STORAGE_CLASS}/glusterfs-for-s3/' -e 's/\${HEKETI_URL}/heketi-storage-project.cloudapps.mystorage.com/' -e 's/\${NAMESPACE}/storage-project/' /usr/share/heketi/templates/gluster-s3-storageclass.yaml | /usr/bin/oc -n storage-project create -f - 2>&1 storageclass "glusterfs-for-s3" created sed -e 's/\${STORAGE_CLASS}/glusterfs-for-s3/' -e 's/\${VOLUME_CAPACITY}/2Gi/' /usr/share/heketi/templates/gluster-s3-pvcs.yaml | /usr/bin/oc -n storage-project create -f - 2>&1 persistentvolumeclaim "gluster-s3-claim" created persistentvolumeclaim "gluster-s3-meta-claim" created Checking status of persistentvolumeclaims matching '--selector=glusterfs in (s3-pvc, s3-meta-pvc)': gluster-s3-claim Bound pvc-35b6c1f0-9c65-11e7-9c8c-005056b3ded1 2Gi RWX glusterfs-for-s3 18s gluster-s3-meta-claim Bound pvc-35b86e7a-9c65-11e7-9c8c-005056b3ded1 1Gi RWX glusterfs-for-s3 18s /usr/bin/oc -n storage-project create -f /usr/share/heketi/templates/gluster-s3-template.yaml 2>&1 template "gluster-s3" created /usr/bin/oc -n storage-project process -p S3_ACCOUNT=testvolume -p S3_USER=adminuser -p S3_PASSWORD=itsmine gluster-s3 | /usr/bin/oc -n storage-project create -f - 2>&1 service "gluster-s3-service" created route "gluster-s3-route" created deploymentconfig "gluster-s3-dc" created Waiting for gluster-s3 pod to start ... Checking status of pods matching '--selector=glusterfs=s3-pod': gluster-s3-dc-1-x3x4q 1/1 Running 0 6s OK Ready to create and provide Gluster object volumes. Deployment complete!
- Brick multiplexing is a feature that allows multiple bricks to be added into one process. This reduces resource consumption and allows you to run more bricks than before with the same memory consumption. Execute the following commands on one of the Red Hat Gluster Storage nodes on each cluster to enable brick multiplexing:
- Execute the following command to enable brick multiplexing:
# gluster vol set all cluster.brick-multiplex on
For example:
# gluster vol set all cluster.brick-multiplex on
Brick-multiplexing is supported only for container workloads (CNS/CRS). Also it is advised to make sure that either all volumes are in stopped state or no bricks are running before this option is modified. Do you still want to continue? (y/n) y
volume set: success
- Restart the heketidb volumes:
# gluster vol stop heketidbstorage
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: heketidbstorage: success
# gluster vol start heketidbstorage
volume start: heketidbstorage: success
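To confirm that brick multiplexing is now active, you can query the cluster-wide option value. This is an optional check; the option name matches the one set above:
# gluster volume get all cluster.brick-multiplex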
- Execute the following command to let the client communicate with the container:
# export HEKETI_CLI_SERVER=http://heketi-<project_name>.<sub_domain_name>
For example:
# export HEKETI_CLI_SERVER=http://heketi-storage-project.cloudapps.mystorage.com
To verify that Heketi is loaded with the topology, execute the following command:
# heketi-cli topology info
Appendix B. Settings that are destroyed when using uninstall playbook
When the uninstall playbook is used, the settings defined in the following two files are destroyed:
- glusterfs_config_facts.yml
- glusterfs_registry_facts.yml
To uninstall and additionally wipe the data on the storage devices, execute the following command:
ansible-playbook -i <path_to_inventory_file> -e "openshift_storage_glusterfs_wipe=true" /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/uninstall.yml
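If the intent is only to remove the OpenShift resources while leaving the on-disk data intact, the same playbook can be run without the wipe variable, since openshift_storage_glusterfs_wipe defaults to false in openshift-ansible:
ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/uninstall.yml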
glusterfs_timeout: "{{ openshift_storage_glusterfs_timeout }}" glusterfs_namespace: "{{ openshift_storage_glusterfs_namespace }}" glusterfs_is_native: "{{ openshift_storage_glusterfs_is_native | bool }}" glusterfs_name: "{{ openshift_storage_glusterfs_name }}" # map_from_pairs is a custom filter plugin in role lib_utils glusterfs_nodeselector: "{{ openshift_storage_glusterfs_nodeselector | default(['storagenode', openshift_storage_glusterfs_name] | join('=')) | map_from_pairs }}" glusterfs_use_default_selector: "{{ openshift_storage_glusterfs_use_default_selector }}" glusterfs_storageclass: "{{ openshift_storage_glusterfs_storageclass }}" glusterfs_storageclass_default: "{{ openshift_storage_glusterfs_storageclass_default | bool }}" glusterfs_image: "{{ openshift_storage_glusterfs_image }}" glusterfs_block_deploy: "{{ openshift_storage_glusterfs_block_deploy | bool }}" glusterfs_block_image: "{{ openshift_storage_glusterfs_block_image }}" glusterfs_block_host_vol_create: "{{ openshift_storage_glusterfs_block_host_vol_create }}" glusterfs_block_host_vol_size: "{{ openshift_storage_glusterfs_block_host_vol_size }}" glusterfs_block_host_vol_max: "{{ openshift_storage_glusterfs_block_host_vol_max }}" glusterfs_block_storageclass: "{{ openshift_storage_glusterfs_block_storageclass | bool }}" glusterfs_block_storageclass_default: "{{ openshift_storage_glusterfs_block_storageclass_default | bool }}" glusterfs_s3_deploy: "{{ openshift_storage_glusterfs_s3_deploy | bool }}" glusterfs_s3_image: "{{ openshift_storage_glusterfs_s3_image }}" glusterfs_s3_account: "{{ openshift_storage_glusterfs_s3_account }}" glusterfs_s3_user: "{{ openshift_storage_glusterfs_s3_user }}" glusterfs_s3_password: "{{ openshift_storage_glusterfs_s3_password }}" glusterfs_s3_pvc: "{{ openshift_storage_glusterfs_s3_pvc }}" glusterfs_s3_pvc_size: "{{ openshift_storage_glusterfs_s3_pvc_size }}" glusterfs_s3_meta_pvc: "{{ openshift_storage_glusterfs_s3_meta_pvc }}" glusterfs_s3_meta_pvc_size: "{{ openshift_storage_glusterfs_s3_meta_pvc_size }}" glusterfs_wipe: "{{ openshift_storage_glusterfs_wipe | bool }}" glusterfs_heketi_is_native: "{{ openshift_storage_glusterfs_heketi_is_native | bool }}" glusterfs_heketi_is_missing: "{{ openshift_storage_glusterfs_heketi_is_missing | bool }}" glusterfs_heketi_deploy_is_missing: "{{ openshift_storage_glusterfs_heketi_deploy_is_missing | bool }}" glusterfs_heketi_cli: "{{ openshift_storage_glusterfs_heketi_cli }}" glusterfs_heketi_image: "{{ openshift_storage_glusterfs_heketi_image }}" glusterfs_heketi_admin_key: "{{ openshift_storage_glusterfs_heketi_admin_key }}" glusterfs_heketi_user_key: "{{ openshift_storage_glusterfs_heketi_user_key }}" glusterfs_heketi_topology_load: "{{ openshift_storage_glusterfs_heketi_topology_load | bool }}" glusterfs_heketi_wipe: "{{ openshift_storage_glusterfs_heketi_wipe | bool }}" glusterfs_heketi_url: "{{ openshift_storage_glusterfs_heketi_url }}" glusterfs_heketi_port: "{{ openshift_storage_glusterfs_heketi_port }}" glusterfs_heketi_executor: "{{ openshift_storage_glusterfs_heketi_executor }}" glusterfs_heketi_ssh_port: "{{ openshift_storage_glusterfs_heketi_ssh_port }}" glusterfs_heketi_ssh_user: "{{ openshift_storage_glusterfs_heketi_ssh_user }}" glusterfs_heketi_ssh_sudo: "{{ openshift_storage_glusterfs_heketi_ssh_sudo | bool }}" glusterfs_heketi_ssh_keyfile: "{{ openshift_storage_glusterfs_heketi_ssh_keyfile }}" glusterfs_heketi_fstab: "{{ openshift_storage_glusterfs_heketi_fstab }}" glusterfs_nodes: "{{ groups.glusterfs | default([]) }}"
glusterfs_timeout: "{{ openshift_storage_glusterfs_registry_timeout }}" glusterfs_namespace: "{{ openshift_storage_glusterfs_registry_namespace }}" glusterfs_is_native: "{{ openshift_storage_glusterfs_registry_is_native | bool }}" glusterfs_name: "{{ openshift_storage_glusterfs_registry_name }}" # map_from_pairs is a custom filter plugin in role lib_utils glusterfs_nodeselector: "{{ openshift_storage_glusterfs_registry_nodeselector | default(['storagenode', openshift_storage_glusterfs_registry_name] | join('=')) | map_from_pairs }}" glusterfs_use_default_selector: "{{ openshift_storage_glusterfs_registry_use_default_selector }}" glusterfs_storageclass: "{{ openshift_storage_glusterfs_registry_storageclass }}" glusterfs_storageclass_default: "{{ openshift_storage_glusterfs_registry_storageclass_default | bool }}" glusterfs_image: "{{ openshift_storage_glusterfs_registry_image }}" glusterfs_block_deploy: "{{ openshift_storage_glusterfs_registry_block_deploy | bool }}" glusterfs_block_image: "{{ openshift_storage_glusterfs_registry_block_image }}" glusterfs_block_host_vol_create: "{{ openshift_storage_glusterfs_registry_block_host_vol_create }}" glusterfs_block_host_vol_size: "{{ openshift_storage_glusterfs_registry_block_host_vol_size }}" glusterfs_block_host_vol_max: "{{ openshift_storage_glusterfs_registry_block_host_vol_max }}" glusterfs_block_storageclass: "{{ openshift_storage_glusterfs_registry_block_storageclass | bool }}" glusterfs_block_storageclass_default: "{{ openshift_storage_glusterfs_registry_block_storageclass_default | bool }}" glusterfs_s3_deploy: "{{ openshift_storage_glusterfs_registry_s3_deploy | bool }}" glusterfs_s3_image: "{{ openshift_storage_glusterfs_registry_s3_image }}" glusterfs_s3_account: "{{ openshift_storage_glusterfs_registry_s3_account }}" glusterfs_s3_user: "{{ openshift_storage_glusterfs_registry_s3_user }}" glusterfs_s3_password: "{{ openshift_storage_glusterfs_registry_s3_password }}" glusterfs_s3_pvc: "{{ openshift_storage_glusterfs_registry_s3_pvc }}" glusterfs_s3_pvc_size: "{{ openshift_storage_glusterfs_registry_s3_pvc_size }}" glusterfs_s3_meta_pvc: "{{ openshift_storage_glusterfs_registry_s3_meta_pvc }}" glusterfs_s3_meta_pvc_size: "{{ openshift_storage_glusterfs_registry_s3_meta_pvc_size }}" glusterfs_wipe: "{{ openshift_storage_glusterfs_registry_wipe | bool }}" glusterfs_heketi_is_native: "{{ openshift_storage_glusterfs_registry_heketi_is_native | bool }}" glusterfs_heketi_is_missing: "{{ openshift_storage_glusterfs_registry_heketi_is_missing | bool }}" glusterfs_heketi_deploy_is_missing: "{{ openshift_storage_glusterfs_registry_heketi_deploy_is_missing | bool }}" glusterfs_heketi_cli: "{{ openshift_storage_glusterfs_registry_heketi_cli }}" glusterfs_heketi_image: "{{ openshift_storage_glusterfs_registry_heketi_image }}" glusterfs_heketi_admin_key: "{{ openshift_storage_glusterfs_registry_heketi_admin_key }}" glusterfs_heketi_user_key: "{{ openshift_storage_glusterfs_registry_heketi_user_key }}" glusterfs_heketi_topology_load: "{{ openshift_storage_glusterfs_registry_heketi_topology_load | bool }}" glusterfs_heketi_wipe: "{{ openshift_storage_glusterfs_registry_heketi_wipe | bool }}" glusterfs_heketi_url: "{{ openshift_storage_glusterfs_registry_heketi_url }}" glusterfs_heketi_port: "{{ openshift_storage_glusterfs_registry_heketi_port }}" glusterfs_heketi_executor: "{{ openshift_storage_glusterfs_registry_heketi_executor }}" glusterfs_heketi_ssh_port: "{{ openshift_storage_glusterfs_registry_heketi_ssh_port }}" glusterfs_heketi_ssh_user: "{{ 
openshift_storage_glusterfs_registry_heketi_ssh_user }}" glusterfs_heketi_ssh_sudo: "{{ openshift_storage_glusterfs_registry_heketi_ssh_sudo | bool }}" glusterfs_heketi_ssh_keyfile: "{{ openshift_storage_glusterfs_registry_heketi_ssh_keyfile }}" glusterfs_heketi_fstab: "{{ openshift_storage_glusterfs_registry_heketi_fstab }}" glusterfs_nodes: "{% if groups.glusterfs_registry is defined and groups['glusterfs_registry'] | length > 0 %}{% set nodes = groups.glusterfs_registry %}{% elif 'groups.glusterfs' is defined and groups['glusterfs'] | length > 0 %}{% set nodes = groups.glusterfs %}{% else %}{% set nodes = '[]' %}{% endif %}{{ nodes }}"
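For reference, the glusterfs_nodes values above are populated from the [glusterfs] and [glusterfs_registry] groups of the Ansible inventory. A minimal, hypothetical inventory entry looks like the following; the host names and device lists are placeholders:
[glusterfs]
node1.example.com glusterfs_devices='[ "/dev/sdb", "/dev/sdc" ]'
node2.example.com glusterfs_devices='[ "/dev/sdb", "/dev/sdc" ]'
node3.example.com glusterfs_devices='[ "/dev/sdb", "/dev/sdc" ]'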
Appendix C. Revision History
Revision | Date
---|---
Revision 1.0-03 | Thu May 2 2019
Revision 1.0-02 | Wed Sep 12 2018
Revision 1.0-01 | Tue Sep 11 2018